
Can someone tell me why, when I train my model using tensorflow-gpu in a Jupyter notebook, my dedicated GPU memory stays 85% in use even after training has completed? Because of this, if I try to run the same model (or a modified one) I get the error: `Failed to get convolution algorithm. This is probably because cuDNN failed to initialize.` If I want to run another model, I have to quit the Anaconda Prompt and relaunch Jupyter notebook for the memory to clear. Is this happening to anyone else? Does anyone know how to clear the GPU memory?
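For context: from what I've read, TensorFlow does not release GPU memory back to the OS while the process is alive, so restarting just the Jupyter kernel (Kernel → Restart) should free it without relaunching the whole Anaconda Prompt. The mitigation I've seen suggested is enabling "memory growth" so TensorFlow allocates GPU memory as needed instead of reserving nearly all of it up front. A minimal sketch using the TF 2.x `tf.config` API (guarded so it is a no-op if TensorFlow isn't installed), run before building any model:

```python
# Sketch: ask TensorFlow to grow GPU allocations on demand rather than
# grabbing almost all dedicated GPU memory at startup. This must run
# before the first model/op is created, or TF raises a RuntimeError.
try:
    import tensorflow as tf

    gpus = tf.config.list_physical_devices('GPU')
    for gpu in gpus:
        # Allocate incrementally instead of reserving the whole GPU.
        tf.config.experimental.set_memory_growth(gpu, True)
except ImportError:
    # TensorFlow not available in this environment; nothing to configure.
    gpus = []
```

Note this only limits how much a notebook reserves; memory already allocated by a finished training run in the same kernel still isn't returned until the kernel restarts.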

talonmies
rzaratx
  • This is a long-standing, well-known problem in TensorFlow and there are existing answers. Please check these links out: https://github.com/tensorflow/tensorflow/issues/17048 and https://stackoverflow.com/questions/59363874/cuda-error-out-of-memory-python-interpreter-utilizes-all-gpu-memory – velociraptor11 Jan 08 '20 at 01:35
  • Does this answer your question? [CUDA Error: out of memory - Python process utilizes all GPU memory](https://stackoverflow.com/questions/59363874/cuda-error-out-of-memory-python-process-utilizes-all-gpu-memory) – talonmies Jan 07 '21 at 12:09

0 Answers