(a) Are there solvers that exploit the presence of multiple processing
units -- CPU + additional GPUs on a machine?
Many solvers can exploit multiple CPU cores. Fewer support distributed optimization. GPUs have been used in some academic studies, but in general they do not perform well on large sparse LP/MIP models, so there is, as yet, little commercial interest. GPUs are, of course, far more popular in dense linear algebra and machine learning.
(b) Is there any benefit to learning DPC++ and SYCL (and other options
such as OpenCL / CUDA) programming for operations research
computations?
If you want to develop your own high-performance algorithms, this may be of some interest. For modeling, not so much: it is often better to use readily available tools. Writing your own tools is rarely a good investment; buying good tools is much cheaper than developing them.
(c) Are there any open source template code bases in C/C++ that
someone can use to build their own OR models?
Most model development is not done in C++ but in higher-level languages such as Python, or in modeling languages such as AMPL and GAMS. C++ is more for developing high-performance algorithms than for building models. Modelers like to be efficient (they often work under time pressure), and lower-level languages do not help there. Model maintenance also tends to be harder in low-level languages. Performance is usually less of an issue during modeling (as opposed to when running solvers on those models: solvers are often written in C or C++).
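To illustrate why high-level languages dominate model development, here is a tiny LP written in a few lines of Python. This sketch uses SciPy's `linprog` as a stand-in for a fuller modeling layer such as Pyomo or PuLP; the model itself is an invented toy, not anything from the question.

```python
from scipy.optimize import linprog

# Toy LP (hypothetical example):
#   maximize  3x + 2y
#   s.t.      x +  y <= 4
#             x + 3y <= 6
#             x, y >= 0
# linprog minimizes, so we negate the objective coefficients.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              method="highs")   # HiGHS: the default open-source LP solver in SciPy

print(res.x)      # optimal point
print(-res.fun)   # optimal objective value (negated back to a maximum)
```

The modeler never touches simplex pivots or sparse matrix internals; that low-level work stays inside the solver, which is exactly the division of labor described above.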
It seems you are mixing model development and solver development. These are two very different crafts. At least for traditional mathematical programming (e.g. LP/MIP) and for constraint programming (CP). When writing your own tailored heuristic algorithms, the difference may be a bit more blurred.