
To my (admittedly shallow) understanding, rasterized lighting is less scalable than ray-traced lighting with respect to light count. That is, past some point, the cost of the lighting calculations grows faster with rasterization than with ray tracing as you increase the number of lights and lighting samples.

Basically, my understanding is that ray tracing has a larger upfront cost for the general lighting problem, but that this cost does not grow much as geometric complexity and light count increase. This is in contrast to rasterized lighting, which becomes increasingly expensive as more and more lights are added.
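To make the scaling claim concrete, here is a toy cost model (the constants are made-up assumptions, not measurements): forward rasterization shades every fragment against every light, so its cost is roughly the product of fragment count and light count, while a ray tracer with a fixed per-pixel sample budget only pays an approximately O(log L) light-selection cost (e.g. via a light BVH) as lights are added.

```python
import math

def forward_raster_cost(fragments, lights):
    # forward shading: one lighting evaluation per (fragment, light) pair
    return fragments * lights

def raytraced_cost(pixels, samples_per_pixel, lights,
                   trace_cost=50, pick_cost_base=1):
    # fixed sample budget per pixel; choosing which light to sample from a
    # light hierarchy is roughly O(log L), so light count barely matters
    pick_cost = pick_cost_base * max(1, math.ceil(math.log2(lights)))
    return pixels * samples_per_pixel * (trace_cost + pick_cost)
```

Under this model, going from 16 to 1024 lights multiplies the forward-rasterization cost by 64, while the ray-traced cost only rises from 54 to 60 units per sample.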

I am also under the impression that whenever geometry becomes so small or so overlapping that more than one triangle or vertex falls within a single pixel, ray tracing should be more efficient for that pixel.
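A rough per-pixel sketch of that intuition (simplified assumptions: 2x2 quad shading for the rasterizer, a binary BVH for the ray tracer): the rasterizer does work for every sub-pixel triangle touching the pixel, whereas one ray only traverses a hierarchy and shades the single closest hit.

```python
import math

def raster_pixel_work(tris_in_pixel):
    # GPUs shade fragments in 2x2 quads, so every sub-pixel triangle that
    # touches the pixel costs at least ~4 shader invocations; work grows
    # linearly with the number of overlapping triangles
    return tris_in_pixel * 4

def ray_pixel_work(tris_in_pixel):
    # one primary ray: traverse a BVH (~log2 n nodes) and shade the single
    # closest hit, regardless of how many triangles overlap the pixel
    return max(1, math.ceil(math.log2(tris_in_pixel))) + 1
```

With 64 triangles in one pixel, this toy model gives 256 units of rasterization work versus 7 for a single ray.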

To put the question more concisely: does ray tracing scale better with light count than rasterization does? Assuming, of course, a finite sample count.

  • IMHO deferred rendering is best suited to cope with a high light count ... but it also depends on the light model and techniques used. You know, light maps can do a lot; it does not really matter whether you ray trace or rasterize. For more info see [How lighting in building games with unlimited number of lights works?](https://stackoverflow.com/a/31042808/2521214) – Spektre Dec 01 '21 at 09:17
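The deferred rendering mentioned in the comment helps with many lights by decoupling geometry from lighting: the scene is rasterized once into a G-buffer, and each light then only touches the screen pixels it covers. A minimal cost sketch, with made-up per-unit costs:

```python
def forward_cost(triangles, lights, frags_per_tri=10):
    # classic forward shading: every fragment of every triangle is shaded
    # against every light, so geometry and light count multiply
    return triangles * frags_per_tri * lights

def deferred_cost(triangles, lights, frags_per_tri=10,
                  pixels_per_light=1000):
    gbuffer_pass = triangles * frags_per_tri   # geometry rendered once
    lighting_pass = lights * pixels_per_light  # per-light screen-space work
    return gbuffer_pass + lighting_pass        # additive, not multiplicative
```

The key design point is that the deferred total is a sum rather than a product, so adding lights no longer re-shades the geometry.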

0 Answers