
This image shows my computational graph (something like a DCGAN). I first call backward on the last intermediate node of G1 with retain_graph=True, and then call backward on the last intermediate node of G2 with retain_graph=False.

My question is: is graph G1 still in memory?

I want to train my network for N epochs. How many G1 graphs are still in memory?

[Image: computational graph with two branches, G1 and G2]
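Roughly, the call pattern looks like this (a minimal sketch, not my actual model; the module names and tensor shapes are placeholders):

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real network:
# a shared trunk whose output feeds two branches, G1 and G2.
trunk = nn.Linear(10, 20)
g1 = nn.Linear(20, 1)
g2 = nn.Linear(20, 1)

x = torch.randn(4, 10)
h = trunk(x)              # shared intermediate node
g1_out = g1(h).sum()      # last intermediate node of branch G1
g2_out = g2(h).sum()      # last intermediate node of branch G2

# First backward keeps the graph alive so the shared part can be reused.
g1_out.backward(retain_graph=True)

# Second backward frees the graph (retain_graph defaults to False).
g2_out.backward(retain_graph=False)
```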

alihejrati
  • Perhaps this could help: [what-does-the-parameter-retain-graph-mean-in-the-variables-backward-method](https://stackoverflow.com/questions/46774641/what-does-the-parameter-retain-graph-mean-in-the-variables-backward-method) – Mehdi Seifi May 09 '22 at 06:56

0 Answers