To my (admittedly shallow) understanding, rasterized lighting scales worse than ray-traced lighting with respect to light count. That is, past some point, the cost of lighting calculations grows faster under rasterization than under ray tracing as you add more lights.
My impression is that ray tracing has a larger upfront cost for the general lighting problem, but that this cost does not grow much as geometric complexity and light count increase. Rasterized lighting, by contrast, becomes substantially more expensive as more and more lights are added.
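To make my intuition concrete, here is a toy cost model I sketched. All the constants are invented for illustration, and I know real renderers use deferred shading, tiled/clustered light culling, denoising, and so on, which complicate this picture:

```python
# Toy per-frame cost model (invented constants, not measurements) comparing
# how lighting work might grow with light count under each approach.

def raster_cost(num_lights, shaded_fragments=2_000_000, cost_per_light=1.0):
    # Naive forward shading: every shaded fragment evaluates every light,
    # so cost grows linearly with light count (slope = fragment count).
    return shaded_fragments * num_lights * cost_per_light

def raytrace_cost(num_lights, pixels=2_000_000, samples_per_pixel=4,
                  cost_per_ray=3.0):
    # Stochastic light sampling with a fixed ray budget per pixel: adding
    # lights mostly adds variance (noise), not rays, so cost stays roughly
    # flat regardless of num_lights.
    return pixels * samples_per_pixel * cost_per_ray

for n in (1, 8, 64, 512):
    print(f"{n:4d} lights: raster={raster_cost(n):.2e} "
          f"raytrace={raytrace_cost(n):.2e}")
```

Under these (made-up) numbers, rasterization is cheaper for a handful of lights, but the two curves cross somewhere in the tens of lights, which is the crossover I am asking about.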
I also gather that once geometry becomes so small or so densely overlapping that more than one triangle or vertex falls within a single pixel, ray tracing should be more efficient for that pixel.
So, to put the question concisely: does ray tracing scale better with light count than rasterization does, assuming a finite sample count?