76

My CUDA program crashed during execution, before memory was flushed. As a result, device memory remained occupied.

I'm running on a GTX 580, for which nvidia-smi --gpu-reset is not supported.

Placing cudaDeviceReset() at the beginning of the program only affects the context created by the current process and doesn't flush the memory that was allocated before it.

I'm accessing a Fedora server with that GPU remotely, so physical reset is quite complicated.

So, the question is: is there any way to flush the device memory in this situation?

Soroosh Bateni
  • 849
  • 9
  • 20
timdim
  • 791
  • 1
  • 5
  • 6
  • "As a result, device memory remains occupied" - How do you know this to be true? – talonmies Mar 04 '13 at 08:28
  • 4
    Although `nvidia-smi --gpu-reset` is not available, I can still get some information with `nvidia-smi -q`. In most fields it gives 'N/A', but some information is useful. Here is the relevant output: `Memory Usage Total : 1535 MB Used : 1227 MB Free : 307 MB` – timdim Mar 04 '13 at 08:35
  • Plus, I fail to allocate memory for variables, which are small enough – timdim Mar 04 '13 at 08:36
  • Is the process which was holding the context on the GPU still alive? Even catastrophic termination of a process should result in the driver destroying the context and releasing resources. – talonmies Mar 04 '13 at 09:19
  • It doesn't look like it is alive. At least, I don't see it alive on CPU. I guess, the process on GPU cannot be alive as well, since I can launch another kernel (concurrent execution is not available on my GPU). But the memory is still occupied, I can be sure about it because of the reasons described above – timdim Mar 04 '13 at 09:25
  • 1
    If you have root access, you can unload and reload the `nvidia` driver. – tera Mar 04 '13 at 10:14
  • Did it crash on the host side or while a kernel was running? – CygnusX1 Mar 04 '13 at 11:57
  • 4
    If you do `ps -ef |grep 'whoami'` and the results show any processes that appear to be related to your crashed session, kill those. (the single quote ' should be replaced with backtick ` ) – Robert Crovella Mar 04 '13 at 16:18
  • 2
    Have you tried `sudo rmmod nvidia`? – Przemyslaw Zych Mar 04 '13 at 22:46
  • ksooklall has a great answer to find what is hogging the memory, even if it doesn't show on nvidia-smi. – Davidmh Oct 10 '17 at 18:50
  • 2
    `nvidia-smi -caa` worked great for me to release memory on all GPUs at once. – David Arenburg Jun 25 '19 at 11:54

8 Answers

148

Check what is using your GPU memory with

sudo fuser -v /dev/nvidia*

Your output will look something like this:

                     USER        PID  ACCESS COMMAND
/dev/nvidia0:        root       1256  F...m  Xorg
                     username   2057  F...m  compiz
                     username   2759  F...m  chrome
                     username   2777  F...m  chrome
                     username   20450 F...m  python
                     username   20699 F...m  python

Then kill the PIDs that you no longer need, either in htop or with

sudo kill -9 PID

In the example above, PyCharm was eating a lot of memory, so I killed 20450 and 20699.
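
If you would rather not pick PIDs by hand, fuser can also send the signal itself. A minimal sketch using fuser's standard -k option (be careful: this signals every process holding a /dev/nvidia* node, including Xorg, so review the -v listing first):

sudo fuser -v /dev/nvidia*    # review which processes hold the device
sudo fuser -k /dev/nvidia*    # -k sends SIGKILL to all of them (X11 included)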

Kenan
  • 11,783
  • 8
  • 39
  • 49
  • 9
    Thank you! For some reason, I had a process hogging all my VRAM, not showing on `nvidia-smi`. – Davidmh Oct 10 '17 at 18:48
  • 1
    I need to use this a lot when running deep learning in different jupyter notebooks. The only issue is knowing exactly which PID is which. Any tips on this? – Little Bobby Tables May 12 '18 at 21:56
  • is chrome --- the google chrome browser? if so what business does it have using a gpu? – kRazzy R Jun 05 '18 at 21:19
  • 2
    @josh I kill them one at a time making a mental note of the COMMAND. – Kenan Jun 07 '18 at 14:43
  • 2
    @kRazzyR - It uses it for speeding up computations, I assume, for rendering graphics, but maybe also other things. This did cause me a lot of issues when I install Nvidia drivers, CUDA and cudnn. I had to turn a lot of it off. See [here](https://www.lifewire.com/hardware-acceleration-in-chrome-4125122). – Little Bobby Tables Jun 07 '18 at 15:50
  • @ksooklall - I meant how do we know which jupyter notebook/process corresponds to which PID and therefore which one to kill... thanks. – Little Bobby Tables Jun 07 '18 at 15:52
  • I notice that when I kill one, it starts a new one. I tried pkill -9 -t pts/1 to log out; still not working – Tina Liu May 31 '19 at 16:07
  • Try kill -9 or kill -15, not pkill – Kenan May 31 '19 at 17:37
  • 1
    In my case, `sudo` is not necessary. – one Aug 07 '20 at 00:58
46

First type

nvidia-smi

then find the PID of the process that you want to kill and run

sudo kill -9 PID
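
If you want something scriptable rather than reading the table by eye, here is a small sketch (assuming your driver's nvidia-smi supports the --query-compute-apps option, which very old drivers may not):

nvidia-smi --query-compute-apps=pid,process_name --format=csv,noheader   # list compute PIDs and their commands
sudo kill -9 PID                                                         # kill the one you recognise as yours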
Ashiq Imran
  • 1,662
  • 16
  • 13
  • 1
    Brilliant, this one actually worked for me. PID should be replaced with the PID number of the process that uses the GPU (which you can find with nvidia-smi) – Lior Magen Jul 26 '21 at 11:42
  • the command `nvidia-smi` returns `Failed to initialize NVML: Driver/library version mismatch` – desmond13 Nov 02 '21 at 10:40
16

Although it should be unnecessary to do this in anything other than exceptional circumstances, the recommended way to do this on Linux hosts is to unload the nvidia driver by doing

$ rmmod nvidia 

with suitable root privileges and then reloading it with

$ modprobe nvidia

If the machine is running X11, you will need to stop it manually beforehand and restart it afterwards. The driver initialisation process should eliminate any prior state on the device.
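
As a rough sketch of the whole sequence (the display-manager service name and the set of loaded nvidia modules vary by distribution and driver version, so adjust accordingly):

$ sudo systemctl stop gdm       # or lightdm/sddm: stop X11 so nothing holds the driver
$ sudo rmmod nvidia             # newer driver stacks may also need nvidia_uvm, nvidia_drm and nvidia_modeset removed first
$ sudo modprobe nvidia          # reload the driver; device state is reinitialised
$ sudo systemctl start gdm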

This answer has been assembled from comments and posted as a community wiki to get this question off the unanswered list for the CUDA tag

talonmies
  • 68,743
  • 34
  • 184
  • 258
  • 8
    I cannot process the above command; the error says CUDA is in use. So I killed the PID using the solution provided by https://stackoverflow.com/a/46597252/3503565. It works for me – DSBLR Apr 17 '19 at 00:53
12

I also had the same problem, and I saw a good solution on Quora, using

sudo kill -9 PID

see https://www.quora.com/How-do-I-kill-all-the-computer-processes-shown-in-nvidia-smi

Petter Friberg
  • 20,644
  • 9
  • 57
  • 104
ailihong
  • 129
  • 1
  • 2
  • Worked a treat when I accidentally opened and loaded two different *jupyter notebooks* with *VGG16*. **Warning**: it kills the notebooks. I guess you could pick one to free up some memory for the other, but I don't know how you select the PID for a given notebook. – Little Bobby Tables Dec 30 '17 at 15:02
7

For those using Python/PyTorch (note that this only frees memory cached by your own still-running process; it won't reclaim memory held by a crashed one):

import torch, gc
gc.collect()              # drop unreferenced Python objects so their CUDA tensors can be freed
torch.cuda.empty_cache()  # release unused cached blocks from PyTorch's allocator back to the driver
Lukas
  • 159
  • 1
  • 4
6

One can also use nvtop, which gives an interface very similar to htop but shows your GPU usage instead, with a nice graph. You can also kill processes directly from there.

Here is a link to its GitHub: https://github.com/Syllo/nvtop

[Screenshot: NVTOP interface]
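
For example (assuming a recent Debian/Ubuntu release where nvtop is packaged; otherwise build it from the repository above):

sudo apt install nvtop   # or build from source per the GitHub instructions
nvtop                    # select the offending process and kill it from the interface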

Antoine Viallon
  • 168
  • 2
  • 10
5

On macOS (/ OS X), if someone else is having trouble with the OS apparently leaking memory:

  • https://github.com/phvu/cuda-smi is useful for quickly checking free memory
  • Quitting applications seems to free the memory they use. Quit everything you don't need, or quit applications one-by-one to see how much memory they used.
  • If that doesn't cut it (quitting about 10 applications freed about 500MB / 15% for me), the biggest consumer by far is WindowServer. You can force quit it (one possible command is sketched below), which will also kill all applications you have running and log you out. But it's a bit faster than a restart and got me back to 90% free memory on the CUDA device.
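
A possible way to do that from the terminal (a sketch only; it ends your login session and every GUI application immediately):

sudo killall WindowServer   # the window server restarts, logging you out and closing all GUI apps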
Matthias Winkelmann
  • 15,037
  • 6
  • 59
  • 72
2

For Ubuntu 20.04: in the terminal, type

nvtop

If killing the consuming process directly from nvtop doesn't work, find and note the PID of the process with the most GPU usage, then run

sudo kill -9 PID