The choice of CPU doesn't matter much, beyond the obvious. More and faster cores are better, as are larger caches; these are, unfortunately, also more expensive.
Similarly, the motherboard doesn't matter much, except that it needs to support everything you want (lots of RAM, fast Ethernet, etc.).
As for a graphics card, there are a few general-purpose GPU (GPGPU) computing platforms. Until recently, NVIDIA's CUDA platform has been the clear winner, particularly for deep learning. NVIDIA distributes some deep learning libraries itself (cuDNN, for example), and several popular deep learning packages, like TensorFlow, Theano, and Caffe, only support CUDA on the GPU. Matlab's GPU computing is likewise limited to CUDA-compatible video cards. OpenCL is the other major alternative, and has historically lagged quite a bit behind, though there are efforts to port Theano and Caffe to it, and R and Python bindings for OpenCL exist as well. As of early 2016, CUDA still seems to be the field's first choice, but if you got a terrific deal on a beefy AMD GPU, OpenCL might be worth considering.
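Whichever platform you land on, it's worth confirming that your software can actually see the card before you benchmark anything. Here's a minimal sketch on the OpenCL side, assuming PyOpenCL is installed (TensorFlow and Theano offer analogous device queries for CUDA):

```python
import pyopencl as cl  # pip install pyopencl; requires an OpenCL driver

# List every OpenCL platform (e.g. AMD, Intel, NVIDIA) and its devices,
# along with each device's global memory, to confirm the GPU is visible.
for platform in cl.get_platforms():
    print(platform.name)
    for device in platform.get_devices():
        mem_mb = device.global_mem_size // (1024 ** 2)
        print("  %s (%d MB)" % (device.name, mem_mb))
```

If your GPU doesn't show up here (or in the equivalent CUDA query), no amount of library configuration will get it used.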
Either way, you'll need a sufficiently large power supply and adequate cooling: some high-end GPUs draw 200+ watts at full load.
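As a rough back-of-the-envelope check, you can total the component draws and leave the power supply some headroom. The wattages below are purely illustrative; look up the actual TDP figures for your parts:

```python
# Illustrative per-component draws in watts; substitute your parts' real TDPs.
draw = {
    "CPU": 95,
    "GPU": 250,            # high-end cards can exceed 200 W at full load
    "motherboard/RAM": 50,
    "drives/fans": 30,
}

total = sum(draw.values())
headroom = 1.4  # keep the PSU well under its rating for efficiency and lifespan
print("Estimated load: %d W; look for a ~%d W power supply."
      % (total, total * headroom))
```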
If you're really hard-pressed for speed, Intel's C++ compiler can occasionally generate slightly faster code for Intel CPUs, but in practice this requires writing in C++ (and the difference is usually not huge). Intel also sells the Xeon Phi co-processor; if you're going to use that, you probably want to stick with other Intel gear.