
I am trying to simulate the actual response of a camera given some object that is reflecting light. I've written a ray tracer, I have a BRDF that I need to use, and I have a camera sensitivity in terms of signal per Watt. But I am confused by one (rather important) detail:

Each ray coming out of the camera has some solid angle associated with it (this part I've already figured out). So each ray should then have a "radiance" value associated with it (as radiance has units of W/(sr * m^2)). That way, for each ray I'd just multiply its solid angle by its radiance value and get an "irradiance" value in W/m^2. However, I am unsure how to actually calculate this initial radiance value for each ray. The reason I am confused is that this seems to be backwards from what the BRDF is giving me.

A BRDF gives me the radiance leaving the surface in the direction of the camera, meaning the vertex of the solid angle is at the point of intersection. The solid angle for the ray, however, is defined the other way around, with its vertex at the camera itself.

How do I bridge this gap? My idea is that if I can actually calculate the radiance for each ray, then I can simply multiply the radiance of each ray by its corresponding solid angle to get an irradiance value, then apply the inverse square law, and finally add up the irradiance for each ray per pixel and multiply by the area of the pixel to get the wattage it receives.
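To make that concrete, here is a rough sketch of the accumulation I have in mind (all names are placeholders, and the per-ray radiance is exactly the input I don't know how to compute):

```python
def pixel_signal(rays, pixel_area, sensitivity):
    """Accumulate the signal one pixel produces from its rays.

    rays: iterable of (radiance, solid_angle, distance) per camera ray,
    with radiance in W/(sr * m^2), solid_angle in sr, distance in m.
    sensitivity: callable mapping received Watts to a signal value.
    """
    total_irradiance = 0.0
    for radiance, solid_angle, distance in rays:
        irradiance = radiance * solid_angle   # W/(sr * m^2) * sr = W/m^2
        irradiance /= distance ** 2           # inverse square law (unsure about this step)
        total_irradiance += irradiance
    watts = total_irradiance * pixel_area     # W/m^2 * m^2 = W received by the pixel
    return sensitivity(watts)
```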

But I am very lost as to how to calculate the radiance for each ray, given that the only calculation I'm familiar with (the BRDF) returns a radiance value for a solid angle in the "wrong direction".

Am I misunderstanding what is going on? Am I approaching this incorrectly? Any help would be really appreciated!

Chris Gnam

1 Answer


I didn't get this part:

That way, for each ray I'd just multiply its solid angle by its radiance value and get an "irradiance" value in W/m^2.

I'll try to explain how the whole thing works though. You have a light source, and it emits light rays. Those rays bounce around the scene and eventually end up at your camera film. So at the end of the day you have multiple rays hitting the film surface, and the radiance they carry gets multiplied by the sensitivity function. For efficiency reasons you usually start backwards, from the camera, but if you work with symmetric BRDFs this should not matter. So what arrives at your film is radiance; you multiply that radiance by the sensitivity function and integrate within a "film pixel" to get the result in terms of your screen pixel.
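A minimal sketch of that last step, assuming a `trace` function that returns the radiance arriving along the camera ray through a film point and a `sensitivity` function for the sensor (both names are hypothetical placeholders):

```python
import random

def render_pixel(px, py, samples, trace, sensitivity):
    """Estimate one screen pixel by averaging over the film pixel's area.

    trace(x, y): radiance arriving along the camera ray through film
    point (x, y). sensitivity(L): sensor response to that radiance.
    """
    total = 0.0
    for _ in range(samples):
        x = px + random.random()   # jittered sub-pixel position
        y = py + random.random()
        total += sensitivity(trace(x, y))
    return total / samples         # Monte Carlo average over the pixel
```

Averaging over jittered sub-pixel positions is what keeps the result independent of the sample count: more rays refine the estimate of the pixel integral rather than brightening it.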

Edit: To clarify what I meant by integrated out:

If you have radiance, which has units of W/(sr * m^2), you can integrate out the sr or the m^2 like so:

$$E(x) = \int_{\Omega}{L(x,\omega)\cos\theta\,d\omega}$$

This gives you the irradiance at a point $x$, i.e. the power per unit area arriving from all directions at $x$ (W/m^2). You can go further and find the radiant flux $\Phi$ (in W) that arrives at some surface with area $A$ by integrating over all points on the surface:

$$\Phi = \int_{A}{E(x)\,d\mu(x)} = \int_{A}\int_{\Omega}{L(x,\omega)\cos\theta\,d\omega\,d\mu(x)}$$
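As a concrete example, here is a minimal Monte Carlo sketch of the first integral using uniform hemisphere sampling (`incoming_radiance` is a hypothetical stand-in for whatever produces $L(x,\omega)$ in your renderer):

```python
import math
import random

def sample_uniform_hemisphere(n):
    """Uniform random unit direction on the hemisphere around unit normal n."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        d = [c / norm for c in v]                      # uniform on the unit sphere
        if sum(a * b for a, b in zip(d, n)) > 0.0:     # keep the upper hemisphere
            return d

def irradiance(x, n, incoming_radiance, n_samples=4096):
    """Monte Carlo estimate of E(x) = integral of L(x, w) cos(theta) dw.

    Uniform hemisphere sampling has pdf 1/(2*pi), so the estimator is the
    sample mean of L * cos(theta) multiplied by 2*pi.
    """
    total = 0.0
    for _ in range(n_samples):
        w = sample_uniform_hemisphere(n)
        cos_theta = sum(a * b for a, b in zip(w, n))
        total += incoming_radiance(x, w) * cos_theta
    return total * 2.0 * math.pi / n_samples           # sr integrated out: W/m^2
```

With cosine-weighted or BRDF-proportional sampling you would divide each sample by its own pdf instead of the constant 1/(2*pi).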

lightxbulb
  • What I meant by the comment you copied is that the sensitivity function I have to work with is a function of Watts. Radiance has units of W m^-2 sr^-1. So if I multiply the radiance value by the solid angle represented by the ray, then I would get an irradiance value, with units W/m^2. I then multiply that by the surface area of the pixel that the ray originated from to get the Watts received, which I can then use in the sensitivity calculation. – Chris Gnam Jul 18 '19 at 17:42
  • But the part that is confusing me is that the solid angles of the BRDF and of the ray appear (at least to me) to be opposite one another. The BRDF deals with solid angles whose vertex is at the point of intersection on the surface. The rays, on the other hand, have a solid angle with its vertex at the camera (more specifically, somewhere on the pixel it was sent from; if no supersampling is done, it originates from the pixel center). So at the intersection point, the ray forms a cone in one direction, while the BRDF forms a cone in the exact opposite direction. – Chris Gnam Jul 18 '19 at 17:45
  • You're not multiplying radiance by a solid angle; to get a different measure you're integrating out the steradian part. Granted, you're integrating over the solid angle, but multiplying the radiance by the solid angle doesn't make much sense. The BRDF is not something related to the camera; it's simply a function that describes the scattering properties of a surface at some point. It just tells you how a ray coming from direction A scatters (and vice versa). – lightxbulb Jul 18 '19 at 17:49
  • With graphics: [Here is a simple diagram of a ray](https://upload.wikimedia.org/wikipedia/commons/b/b2/RaysViewportSchema.png). It is diverging away from the camera, so the solid angle of incoming light to the sensor represented by the ray has its vertex at the camera origin. This diagram of a BRDF shows the solid angle diverging off the surface, which is opposite to how the ray looks at that point. – Chris Gnam Jul 18 '19 at 17:50
  • I understand it has nothing to do with the camera, but the rays reflected off the object are diverging away from the point they reflected off of. The camera rays, however, are diverging from the camera, not from the point they intersect. – Chris Gnam Jul 18 '19 at 17:52
  • It doesn't matter; BRDFs are usually symmetric (at least if they are to be physically based). Also, a common convention is to have both directions point outward in the BRDF: https://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function I still don't get what your question really is. Can you try to formalize it? – lightxbulb Jul 18 '19 at 17:53
  • Also, I'm not entirely sure what you mean by the steradian being integrated out. Each ray has some solid angle associated with it which, if not accounted for, would mean that adding more and more rays per pixel would increase the brightness per pixel. Giving each ray an appropriately scaled solid angle ensures that adding more rays doesn't increase the actual signal, but rather just increases sub-pixel resolution. At least, that is how it was explained to me. – Chris Gnam Jul 18 '19 at 17:54
  • Here is a drawing of what I mean. Tracing the light forward makes perfect sense: it is reflected in some manner and is now diverging away from the reflection point (indicated by the green arrows). The red line is the traced intersection point of the ray, which also makes sense. However, it is my understanding that the rays are diverging from the camera, and so I represented them with the blue arrows. This means they are diverging in opposite directions, which leaves me confused as to how the reflection value can be transferred to the ray. – Chris Gnam Jul 18 '19 at 18:02
  • @ChrisGnam See the edit. A possibly easier to understand analogy is through acceleration, speed, and distance. If you have the acceleration function for an object over time, you can find its speed at each point in time by integrating the acceleration (which integrates out 1/s, so from m/s^2 you get m/s). You can also integrate speed over time to get distance traveled (that gets rid of the other 1/s and you get from m/s to m). – lightxbulb Jul 18 '19 at 18:03
  • @ChrisGnam The Bi-directional part in BRDF stands for the fact that it is symmetric. It shouldn't matter in which direction the radiance travels, since both are equivalent under the assumptions made. – lightxbulb Jul 18 '19 at 18:05
  • Thank you that does help somewhat. Do you have any recommendation on textbooks or papers or anything that covers the math used here in rigorous, yet understandable terms? – Chris Gnam Jul 18 '19 at 18:05
  • @ChrisGnam I recommend Advanced Global Illumination. But depending on your background, you may not find it rigorous enough, in which case you can refer to Eric Veach's thesis, Mathias Lang's thesis, or Christian Lessig's thesis (this one is the most abstract that I know of and the hardest). Refer to the acknowledgements section here for more resources: https://vchizhov.github.io/resources/ray%20tracing/ray%20tracing%20tutorial%20series%20vchizhov/index.html – lightxbulb Jul 18 '19 at 18:08
  • thank you for the help. I've ordered the textbook and I'm looking forward to reading it! I hate to ask one more question, but I must know. Having calculated the reflected light off of the object, do I apply the inverse square law to that irradiance to obtain the irradiance received by the camera? Or is that accounted for by the fact that the object is so far away that it takes up a small pixel area? – Chris Gnam Jul 22 '19 at 13:01
  • @ChrisGnam Depends on whether you use the area or the solid angle formulation of the rendering equation. See my answer here: https://computergraphics.stackexchange.com/questions/9015/rendering-equation-in-terms-of-paths-rather-than-directions – lightxbulb Jul 22 '19 at 13:05
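
For reference, the relation behind that last comment is the standard change of variables between directions and surface points: with $y$ a point on the emitting surface, $\theta'$ the angle between the surface normal at $y$ and the direction toward $x$, and $V(x,y)$ the mutual visibility term,

$$d\omega = \frac{\cos\theta'}{\|x-y\|^2}\,d\mu(y) \qquad\Rightarrow\qquad E(x) = \int_{A}{L(y \to x)\,\frac{\cos\theta\cos\theta'}{\|x-y\|^2}\,V(x,y)\,d\mu(y)}$$

So in the solid angle formulation there is no explicit inverse square law; it is hidden in the fact that a distant patch subtends a smaller solid angle. The $1/\|x-y\|^2$ factor only appears explicitly once you integrate over surface area instead of directions.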