
I have looked through many articles and posts about using multiple GPUs with TensorFlow. Most of them cover how to train a single neural network (NN) across parallel GPUs, but my question is different: can separate GPUs be used to train different NNs at the same time?

More details:

I have neural networks A and B, and two GPUs, GPU1 and GPU2. I want to train network A on GPU1 and network B on GPU2 at the same time. Is that possible?

Bryan Woo
  • You could run the two scripts separately at the same time. Just set CUDA_VISIBLE_DEVICES to a different GPU in each script. – Muhammad Danial Khan Jan 20 '21 at 07:10
  • Does this answer your question? [How do I select which GPU to run a job on?](https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on) – alift Jan 20 '21 at 07:16

1 Answer


I suggest using two separate Python scripts to train the two networks, for example trainA.py and trainB.py.

In the first two lines of trainA.py, before TensorFlow is imported, you select your preferred GPU:

import os
# Must run before TensorFlow is imported, so only the first GPU is visible to this process
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

For trainB.py you select the other GPU:

import os
# Only the second GPU is visible to this process
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

Now you should be able to run both training scripts at the same time; each process will only see and use its own GPU.
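For illustration, here is a minimal sketch of what trainA.py could look like, assuming the Keras API; the model and data below are placeholders, so swap in your actual network A and dataset.

import os

# Restrict this process to the first GPU; this must happen before TensorFlow is imported
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf

# Placeholder model and data, just to show the structure
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((1024, 32))
y = tf.random.normal((1024, 1))

# TensorFlow only sees the GPU selected above, so training runs there
model.fit(x, y, epochs=5, batch_size=64)

trainB.py would be identical except that it sets CUDA_VISIBLE_DEVICES to "1" and builds network B. Launching the two scripts in two separate terminals (or as two background processes) trains the two networks on the two GPUs concurrently.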

Jan Willem