**Adding multiple bounces**

In the [previous post](../part-2/index.html) we implemented a simple diffuse BRDF. Let's extend our ray tracer with several bounces to get nice global illumination. As always, the [source code](https://github.com/sergeyreznik/metal-ray-tracer/commit/215dfe59f21638d3138e46c2daff7b1bedbfb3e1) for this post is available on my GitHub page. This time the link points to a specific commit, because part 3 introduces much more than just multiple bounces; the rest will be described in the next post.

Basics
======================================================================

If we want to enable global illumination in our ray tracer, we need to be able to handle multiple bounces of our rays. Each time a ray intersects an object in the scene, we need to generate a new direction and send a ray in that direction. Also, on each intersection we can sample a light source from the intersection point in order to know how much light arrived at this point from the emitter.

Unfortunately, we can't have an infinite number of bounces in our current implementation, so let's introduce a constant which defines the maximum number of intersections a ray can undergo. Let's call it the maximum path length. A value of **one** means that rays sent from the camera intersect the scene, and no further intersections are calculated, including next event estimation (explicit light sampling). In this case only emitters will be visible. And even if we didn't hit an emitter, we are not allowed to send a new ray towards the light sources, because then the path length would equal two.

We need to change the code a bit to be able to perform several intersections. In the previous post we had a single pass, shown in the first sketch below (in pseudocode). Now we need to make a loop where we find intersections, sample light sources, and generate new rays for the next intersections, as the second sketch shows.
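Here is roughly the single-bounce flow from the previous post, reconstructed as pseudocode. The dispatch names (`generateRays`, `findIntersections`, `handleIntersections`, `sampleLights`, `accumulateImage`) are illustrative, not the repository's exact API:

```cpp
// One frame, single bounce: trace camera rays, shade, do one round of
// explicit light sampling, and accumulate the result.
generateRays();          // camera rays, one per pixel
findIntersections();     // intersect the rays with the scene
handleIntersections();   // shade the hits, emit shadow rays towards an emitter
sampleLights();          // trace the shadow rays, add direct lighting
accumulateImage();       // blend the frame into the accumulated image
```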
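And the same frame with a loop over bounces; `MAX_PATH_LENGTH` is the maximum path length constant introduced above, and again the names are illustrative:

```cpp
// One frame, multiple bounces: every iteration extends each path by one segment.
generateRays();
for (uint32_t bounce = 0; bounce < MAX_PATH_LENGTH; ++bounce)
{
    findIntersections();    // intersect the current rays with the scene
    handleIntersections();  // shade, update throughput, generate the next rays
    sampleLights();         // next event estimation from the new intersection points
}
accumulateImage();
```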
As I said before, when `MAX_PATH_LENGTH` equals one, only emitters will be visible in the final image:

![](images/pic-1.png)

Setting `MAX_PATH_LENGTH` to two will produce the image from the second post, where only direct illumination is calculated:

![](images/pic-2.png)

So far we haven't modified our shader. But if we want to have several bounces, we have to generate new rays in the intersection handler shader. We could generate rays randomly over the hemisphere aligned with the surface's normal, but this won't be very efficient, since some rays will have a contribution close to zero. Let's recall the rendering equation (I heard that every post about graphics should include it):

![](images/pic-3.png)

Basically, what we are doing is estimating the value of this equation using Monte Carlo integration. To those who are not familiar with Monte Carlo integration theory, I strongly recommend reading [Chapter 13 Monte Carlo Integration](http://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration.html) of the **PBRT book** and watching the ray-tracing course by [Karoly Zsolnai](https://twitter.com/karoly_zsolnai) (he has an amazing YouTube channel, [Two Minute Papers](https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg)), especially [chapter 16](https://www.youtube.com/watch?v=Tb6-JfI0HA0&list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi&index=16), which is dedicated to Monte Carlo integration. I will briefly describe the basic idea here.

To estimate the value of the rendering equation, we generate random directions, sample the integrand in these directions, and then, by averaging the samples (recall the previous part on image accumulation), we obtain an estimate of the integral. By using **importance sampling** we can get a better estimate by preferring directions which contribute more to the result (read more theory in [Chapter 13.10 Importance Sampling](http://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/Importance_Sampling.html) of the **PBRT book**). In order to use it, we need to know the probability density function (PDF) of generating a specific direction, and a way to generate directions distributed according to this PDF. These are well-known equations and I will not derive them here. If you are curious about how they can be derived, [Alan Wolfe](https://twitter.com/atrix256?lang=en) has a great [blog post](https://blog.demofox.org/2017/08/05/generating-random-numbers-from-a-specific-distribution-by-inverting-the-cdf/) on how to generate random variables according to a given probability function; read it if you want more math and details.

Sampling diffuse BSDF
======================================================================

In order to sample the diffuse BSDF better, we need to generate directions where \(\cos(\theta)\) has larger values. This is the well-known cosine-weighted distribution on the unit hemisphere, where \(PDF(d) = \cos(\theta) / \pi\). To generate a random direction according to this distribution from two uniform random values \(\xi_1\) and \(\xi_2\), we choose the azimuthal and polar angles as:

\(\phi = 2\pi \xi_1\)

\(\theta = \cos^{-1}\sqrt{\xi_2}\)

The source code for this is given in the first sketch at the end of this section.

Now we can generate the direction for the next ray, but we also need to know how much energy will arrive from this direction and which portion of this energy will contribute to the current point. In other words, we need to calculate \(L_i\) and \(f_r\) of our rendering equation. Very basic ray tracers are usually made recursive and estimate the lighting at a given point roughly as in the second sketch below (pseudocode).

But we can't do recursive ray tracing in a compute shader, and actually it is not necessary: instead, we can store a new value in our `Ray` structure which tracks how much energy is lost during the bounces. This value (the throughput) is initialized with `1.0` (meaning no energy was lost), and on each bounce we scale it by the material's BSDF, divided by the probability of generating the next direction and multiplied by the cosine of the angle between the current normal and the generated direction:

\(throughput = throughput \cdot f_r \cdot \cos(\theta) / PDF(d)\)

Let's recall how the BSDF and PDF for the diffuse material are calculated. The BSDF equals \(color / \pi\) and the PDF is \(\cos(\theta) / \pi\); plugging these values into our equation gives us

\(throughput = throughput \cdot \frac{color}{\pi} \cdot \cos(\theta) \cdot \frac{\pi}{\cos(\theta)}\)

which effectively cancels to:

\(throughput = throughput \cdot color\)

This means that on each bounce we just scale the ray's throughput by the material's color. The whole piece of code for generating a new ray inside the intersection handler is shown in the final sketch at the end of this section.

One more important step is to have fresh random values for each bounce, rather than a single random value per pixel per frame. So let's extend our noise buffer with one more dimension (add several "layers" to the noise, one "layer" per bounce). Also, we have to track the number of bounces for each ray. Our `Ray` structure will look like the third sketch below, and the intersection handler will sample noise from the buffer according to the current bounce, as the final sketch shows.
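First, the cosine-weighted direction generation. A minimal sketch in Metal Shading Language; `buildOrthonormalBasis` is written here for illustration and is not necessarily how the repository constructs the tangent frame:

```cpp
// Builds two vectors orthogonal to n (one simple construction of many).
void buildOrthonormalBasis(float3 n, thread float3& u, thread float3& v)
{
    u = normalize(abs(n.x) > 0.1f ? float3(-n.y, n.x, 0.0f) : float3(0.0f, -n.z, n.y));
    v = cross(n, u);
}

// Samples a direction on the hemisphere around n with PDF(d) = cos(theta) / pi.
// noiseSample.x and noiseSample.y play the roles of xi_1 and xi_2.
float3 sampleCosineWeightedHemisphere(float3 n, float2 noiseSample)
{
    float phi = noiseSample.x * 2.0f * M_PI_F;   // phi = 2 * pi * xi_1
    float cosTheta = sqrt(noiseSample.y);        // from theta = acos(sqrt(xi_2))
    float sinTheta = sqrt(1.0f - noiseSample.y); // sin^2 + cos^2 = 1

    float3 u;
    float3 v;
    buildOrthonormalBasis(n, u, v);
    return (u * cos(phi) + v * sin(phi)) * sinTheta + n * cosTheta;
}
```

Note that we never need \(\theta\) itself: \(\cos(\theta) = \sqrt{\xi_2}\) and \(\sin(\theta) = \sqrt{1 - \xi_2}\) follow directly from the two random values.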
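Second, the recursive estimator that a very basic CPU ray tracer would use. This is pseudocode meant only to show where \(f_r\), the PDF, and the cosine term enter; none of these helper names come from the repository:

```cpp
// Outgoing radiance at a hit point: emission plus the BSDF-weighted
// estimate of the radiance arriving from one sampled direction.
float3 radiance(Ray ray, uint pathLength)
{
    Intersection hit = findIntersection(scene, ray);
    if ((pathLength >= MAX_PATH_LENGTH) || !hit.found)
        return float3(0.0f);

    float3 nextDirection = sampleBSDF(hit);         // e.g. cosine-weighted for diffuse
    float3 bsdf = evaluateBSDF(hit, nextDirection); // f_r for this direction
    float pdf = evaluatePDF(hit, nextDirection);    // probability of generating it
    float cosTheta = dot(hit.normal, nextDirection);

    Ray nextRay = makeRay(hit.position, nextDirection);
    return hit.emission + (bsdf * cosTheta / pdf) * radiance(nextRay, pathLength + 1);
}
```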
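Third, a sketch of the extended `Ray` structure. The first four fields follow the `MPSRayOriginMinDistanceDirectionMaxDistance` layout that the Metal Performance Shaders intersector expects; the extra fields and their exact order in the repository may differ:

```cpp
struct Ray
{
    // Layout expected by the MPS ray intersector.
    packed_float3 origin;
    float minDistance;
    packed_float3 direction;
    float maxDistance;

    // Per-path state carried across bounces.
    packed_float3 radiance;   // light gathered along the path so far
    uint bounces;             // number of bounces this ray has undergone
    packed_float3 throughput; // starts at 1.0, scaled on each bounce
};
```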
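Finally, a sketch of the new-ray generation inside the intersection handler, tying everything together: it reads the noise layer for the current bounce, builds a cosine-weighted direction, offsets the new origin to avoid self-intersection, and updates the throughput, which for the diffuse BSDF cancels to a multiplication by the material color, as derived above. `NOISE_BLOCK_SIZE`, `DISTANCE_EPSILON`, and the surrounding variable names are illustrative:

```cpp
// Inside the intersection handler kernel, after shading the current hit.

// Pick the noise "layer" that corresponds to the current bounce.
uint noiseIndex = (coordinates.x % NOISE_BLOCK_SIZE)
    + (coordinates.y % NOISE_BLOCK_SIZE) * NOISE_BLOCK_SIZE
    + currentRay.bounces * (NOISE_BLOCK_SIZE * NOISE_BLOCK_SIZE);
float4 noiseSample = noise[noiseIndex];

// Generate the next direction according to the cosine-weighted distribution.
float3 nextDirection = sampleCosineWeightedHemisphere(surfaceNormal, noiseSample.xy);

// Continue the path from the intersection point, nudged along the normal
// to avoid immediately re-hitting the same surface.
currentRay.origin = intersectionPoint + surfaceNormal * DISTANCE_EPSILON;
currentRay.direction = nextDirection;
currentRay.minDistance = 0.0f;
currentRay.maxDistance = INFINITY;

// (BSDF / PDF) * cos(theta) cancels to the material color for the diffuse BSDF.
currentRay.throughput = float3(currentRay.throughput) * material.color;
currentRay.bounces += 1;
```

The updated ray is then written back to the ray buffer, so the next `findIntersections()` pass in the frame loop traces it as the following path segment.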
Plugging all together
======================================================================

Let's recap what we changed and introduced in the code in order to get multiple bounces:

- added `bounces` and `throughput` values to the `Ray` structure;
- on the CPU side, placed the intersection handling and light sampling calls into a loop;
- added "layers" to the noise buffer, one per bounce;
- modified the intersection handler shader to generate new rays.

Plugging everything together and setting the maximum path length to eight produces an image with global illumination:

![](images/pic-4.png)

**That's it!**

[Next post](../part-3-2/index.html)

[Return to the index](../index.html)