
I would like to build a desktop PC with Linux and use it mainly for machine learning research and related work. I am aware that there are software packages that use GPU "magic" to train statistical models. However, my knowledge of hardware is very poor, and I don't want to buy hardware that doesn't support machine learning software.

Questions:

1- Does it matter what CPU one buys? AMD or Intel?

2- Does it matter what graphics card one buys? nVidia or ATI?

3- Does it matter what motherboard? Does it have to have certain ports/features?

Jack Twain
    I think this might be just barely on-topic because of the GPU part--nVidia/CUDA seem like a much better choice for GPGPU-related machine learning. – Matt Krause Mar 06 '16 at 06:44

1 Answer


The choice of CPU doesn't matter much, beyond the obvious: more and faster cores are better, as are larger caches, and these are, unfortunately, also more expensive. Similarly, the motherboard doesn't matter much, except that it needs to support everything you want (lots of RAM, fast Ethernet, etc.).

As for a graphics card, there are a few general-purpose GPU (GPGPU) computing platforms. nVidia's CUDA platform has so far been the clear winner, particularly for deep learning. nVidia distributes some deep learning libraries itself, and other popular deep learning packages, like TensorFlow, Theano, and Caffe, only support CUDA on the GPU. Matlab's GPU computing is likewise limited to CUDA-compatible video cards. OpenCL is the other major alternative, and it has historically lagged quite a bit behind, though there are efforts to port Theano and Caffe to OpenCL, and R and Python bindings for OpenCL exist too. As of early 2016, CUDA still seems to be the field's first choice, but if you got a terrific deal on a beefy AMD GPU, you might want to think about it.
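
Once the machine is built, it's worth checking that both the driver and your framework actually see the card before you start training anything. A minimal sketch, assuming an nVidia card with the proprietary driver installed and a CUDA-enabled TensorFlow build (Theano and Caffe have their own equivalents):

    # Sanity check that the driver and the framework both see the GPU.
    import subprocess

    # nvidia-smi ships with the nVidia driver; if this fails, fix the
    # driver before worrying about any machine learning library.
    print(subprocess.check_output(["nvidia-smi", "-L"]).decode())

    try:
        from tensorflow.python.client import device_lib
        gpus = [d.name for d in device_lib.list_local_devices()
                if d.device_type == "GPU"]
        print("TensorFlow sees:", gpus if gpus else "no GPU (CPU-only build?)")
    except ImportError:
        print("TensorFlow is not installed; skipping the framework check.")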

In either event, you'll need a sufficiently large power supply and cooling. Some of the high-end GPUs draw 200+ watts at full load.
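
For a rough sense of scale, here is a back-of-envelope sizing sketch; the wattages are illustrative assumptions, not figures for any specific card or CPU, so check the specs of your actual parts:

    # Back-of-envelope power supply sizing with assumed (not measured) figures.
    gpu_watts = 250      # high-end GPU at full load
    cpu_watts = 95       # typical desktop CPU TDP
    other_watts = 100    # motherboard, RAM, disks, fans
    headroom = 1.4       # ~40% margin so the PSU isn't running flat out

    recommended = (gpu_watts + cpu_watts + other_watts) * headroom
    print("Recommended PSU: ~%d W" % recommended)   # roughly 620 W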


If you're really hard-pressed for speed, Intel's C++ compiler can occasionally generate slightly faster code for Intel CPUs, but in practice this would require writing in C++ (and it's usually not a huge difference). Intel does sell the Xeon Phi co-processor; if you're going to use that, you probably want to stay with other Intel gear.
Matt Krause