**Going slightly physically based**

In the previous parts we've implemented diffuse and rough conductor materials. Now we are going to add two more materials: plastic and dielectric. We are also going to make sure that our materials are physically correct (or at least plausible).

[Source Code](http://github.com/sergeyreznik/metal-ray-tracer/tree/part-4) for this post is available on my GitHub page. Additionally, I've created a [repository](https://github.com/sergeyreznik/raytracing-references) with the scenes that were used to validate the ray tracer.

![Scene from Pica Pica](images/pic-0.png)

**A small note:** I will use the term "BSDF" (bidirectional scattering distribution function) rather than "BRDF" (bidirectional reflectance distribution function) because it is more general and describes not only reflection, but also transmission and diffuse scattering.

Easy way to validate results
================================================================================

Before implementing new materials, we need a way to make sure that our materials are physically correct. In the previous posts I compared images produced by the ray tracer with those produced by Mitsuba. But such a comparison gives nothing more than visual similarity: a rendering can be incorrect and still look very close to the correct one. So if we want to compare images against Mitsuba, we need a more robust way to do it.

I propose a simple method that quickly provides information about the luminance of an image and compares it against a reference produced by Mitsuba (or PBRT, if you prefer). Take a look at these two images:

![](images/pic-1.png)![](images/pic-2.png)

They look almost identical and, importantly, they both look natural. It is hard to notice that the glass box is slightly different (I intentionally modified the Fresnel term), especially when comparing the final image to the reference.

So let's make a simple comparison method: average the red, green and blue components of each pixel in the source image and in the reference, and take the difference between the two averaged values. If the difference is greater than zero, the image produced by our ray tracer is brighter than the reference, and vice versa. Let's output the difference to the green channel if it is greater than zero, and its negated value to the red channel if it is smaller than zero (a small sketch of this comparison is shown below).

Obviously, with this approach colors like (1, 0, 0) and (0, 0, 1) produce the same result, but we are comparing the brightness of a pixel, not its color. That is also why I decided to compare averaged values rather than colors converted to grayscale. Comparing grayscale values would be the more proper method, but then a difference in blue (for example) would be much less noticeable than a difference in green.

Ideally we would see pure black if we matched the reference perfectly. In the real world, however, several factors prevent a perfect zero difference:

- the number of samples is usually limited, which leads to noise;
- the .exr format (to which Mitsuba and PBRT can save their results) stores data as half floats (16 bits), which introduces rounding errors.

This means that even when everything is implemented correctly, we will likely see red-green noise in the difference image.
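Here is a minimal sketch of the per-pixel comparison described above; it is only an illustration of the idea, not the exact code I used (the `brightnessDifference` helper and its signature are made up for this example):

```cpp
// A minimal sketch of the per-pixel comparison described above.
// `source` and `reference` are the linear RGB values of the same pixel
// in the ray-traced image and in the reference render.
float3 brightnessDifference(float3 source, float3 reference)
{
    float sourceAverage    = (source.x + source.y + source.z) / 3.0f;
    float referenceAverage = (reference.x + reference.y + reference.z) / 3.0f;
    float difference = sourceAverage - referenceAverage;

    // positive difference (our image is brighter) goes to the green channel,
    // negative difference (our image is darker) goes to the red channel
    return float3(max(-difference, 0.0f), max(difference, 0.0f), 0.0f);
}
```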
Compare the difference images below; these are the differences between the two images above and the reference image:

![](images/pic-3.png)![](images/pic-4.png)

Note the glass box: in the first image we see almost uniform red-green noise, while in the second image the glass box has noticeably darker (reddish) sides. This is a very rough way to estimate similarity to the reference, but it gives results almost immediately.

Furnace test
================================================================================

Another (and actually more rigorous) way to validate materials is the so-called "furnace test". The name comes from a property of a furnace in thermal equilibrium: when a furnace reaches equilibrium, its interior has a completely uniform appearance, causing all geometric features to disappear [^](http://www.scratchapixel.com/lessons/3d-basic-rendering/global-illumination-path-tracing/global-illumination-path-tracing-practical-implementation).

The idea is to put a white object (one which does not absorb light), usually a sphere, into an environment of uniform color. If the BSDF is implemented correctly (i.e. it conserves energy), the object will take on the color of the environment: it will neither absorb light nor emit any additional light. It is usually a good idea to use a gray environment, since this makes energy gain easier to notice. Take a look at these three images:

![](images/pic-5.png)![](images/pic-6.png)![](images/pic-7.png)

The first one is the correct plastic BSDF implementation. In the second one, the BSDF was not multiplied by the cosine between the normal and the outgoing ray direction. The third one does not account for the Fresnel term in the diffuse component and in the probability density function.

Now we have at least two methods to verify our BSDFs, so let's implement the plastic and dielectric materials. But first I want to slightly refactor the shader code to eliminate duplication and keep just two methods: `sampleMaterial` (which calls `evaluateMaterial` internally), used to generate a new ray direction, and `evaluateMaterial`, used for next event estimation. All code related to BSDF evaluation will be located only in the `evaluateMaterial` method.

Plastic BSDF
================================================================================

The plastic material consists of two components: specular and diffuse. It reflects a portion of the light (the specular component) and absorbs and then re-emits the rest (the diffuse component). The exact amount of reflected light is defined by the Fresnel equations; they are common in computer graphics, so I won't go into detail about them. Usually the Schlick approximation is used, but since we are going slightly physically based, let's use the real formula for unpolarized light; a sketch of such a function is shown below *(read more on [Fresnel reflectance](http://www.pbr-book.org/3ed-2018/Reflection_Models/Specular_Reflection_and_Transmission.html#FresnelReflectance) in PBRT)*.

Here `eta` is the ratio of the indices of refraction: the IOR of the medium the light arrives from, divided by the IOR of the medium the light is transmitted into. For plastic it is common to use an IOR of 1.5, but we want this value to be customizable.
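A minimal sketch of such a function, assuming the `eta` convention described above (this is an illustration, not necessarily the signature used in the repository):

```cpp
// A sketch of the dielectric Fresnel term for unpolarized light.
// cosThetaI is the cosine between the incident direction and the normal;
// eta = (IOR of the medium the light comes from) / (IOR of the medium it enters).
float fresnelDielectric(float cosThetaI, float eta)
{
    cosThetaI = clamp(abs(cosThetaI), 0.0f, 1.0f);

    // Snell's law: sinThetaT = eta * sinThetaI
    float sinThetaTSquared = eta * eta * (1.0f - cosThetaI * cosThetaI);
    if (sinThetaTSquared >= 1.0f)
        return 1.0f; // total internal reflection

    float cosThetaT = sqrt(1.0f - sinThetaTSquared);

    // average the two polarizations to get the unpolarized reflectance
    float rParallel      = (cosThetaI - eta * cosThetaT) / (cosThetaI + eta * cosThetaT);
    float rPerpendicular = (eta * cosThetaI - cosThetaT) / (eta * cosThetaI + cosThetaT);
    return 0.5f * (rParallel * rParallel + rPerpendicular * rPerpendicular);
}
```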
In order to sample the plastic material, we need to decide whether to generate a reflected or a diffuse ray. We can use the Fresnel equations here as well and say that the probability of generating a reflected ray is equal to the Fresnel value. **It is important not to forget to plug the Fresnel value into the PDF.** So we generate either a reflected ray or a (cosine-weighted) ray around the geometric normal.

Now we need a function that evaluates the plastic material. We will use the same microfacet BSDF that we used for conductors in the previous part, but with the `fresnelDielectric` method instead of `fresnelConductor`. As mentioned before, part of the light is reflected (the specular component), and this part is defined by the microfacet BSDF; the diffuse part is defined by the same diffuse BSDF as before, scaled by `(1.0f - F)`, meaning that the diffuse component receives all the light that was not reflected.

Since we used the Fresnel term as the probability of generating a reflected ray, we need to plug it into the probability density function. The PDF of the plastic material likewise consists of the PDF of the microfacet BSDF scaled by the probability of generating a reflected ray (`F`), plus the PDF of the diffuse BSDF scaled by the probability of generating a diffuse ray (`1.0f - F`). Since we can't nicely cancel the terms of `bsdf / pdf` as we did with the conductor material, we write the sample weight simply as the ratio between the BSDF and the PDF. This gives us nice plastic spheres:

![Actually every surface on this image is plastic, just with various roughness](images/pic-8.png)

Glass BSDF
================================================================================

*(the glass BSDF is based on a [classic paper](https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf): **Microfacet Models for Refraction through Rough Surfaces**)*

The glass material also consists of two components: specular and transmittance. The specular part is pretty much the same as in the plastic material, while the transmittance is a little bit trickier, so I will mostly focus on it. We are going to use the same approach for sampling the glass material: the Fresnel term as the probability of generating a reflected ray, and `(1.0f - F)` as the probability of generating a transmitted ray.

Notice the total internal reflection condition in the method that calculates the Fresnel equation (see the sketch above): using Snell's law, \( \eta_i \sin\theta_i = \eta_o \sin\theta_o \), we can find a critical angle beyond which \(\sin\theta_o\) would be greater than one, which means [total internal reflection](https://en.wikipedia.org/wiki/Total_internal_reflection). In this case the Fresnel term returns `1.0f`, so we will always generate a reflected ray and don't need to add any extra conditions to our code.

Now let's get to the implementation of the sampling method itself. First, remember that there are two cases for the transmitted ray: one where the ray enters the denser medium, and one where the ray leaves the denser medium:

![](images/d-1.png) ![](images/d-2.png)

The difference is that in the second case we need to flip the normal to keep our computations consistent, given that \((\omega_i \cdot n) < 0\). We also need to use the proper indices of refraction for the incoming and outgoing rays. Then we can sample a microfacet around the correctly oriented normal, compute the Fresnel term, and select either the reflected or the refracted ray, as sketched below *(please refer to the [original paper](https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf) for the derivation and details, section 5.3, "Sampling and weights", Eq. 39, 40)*.
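A rough sketch of this sampling step (the `sampleMicrofacetNormal` and `randomFloat` helpers, as well as the parameter names, are hypothetical; `fresnelDielectric` is the sketch from the plastic section):

```cpp
// A sketch of sampling a new direction for the glass material.
// `wI` points towards the surface, so dot(wI, n) < 0 when the ray enters the material.
float3 sampleGlassDirection(float3 wI, float3 n, float roughness, float extIOR, float intIOR)
{
    bool entering = dot(wI, n) < 0.0f;

    // flip the normal when the ray leaves the material and pick the matching IOR ratio
    float3 nFixed = entering ? n : -n;
    float eta = entering ? (extIOR / intIOR) : (intIOR / extIOR);

    // sample a microfacet normal around the correctly oriented geometric normal
    float3 m = sampleMicrofacetNormal(nFixed, roughness);

    // the Fresnel term decides between reflection and transmission;
    // on total internal reflection it is 1.0, so we always reflect
    float F = fresnelDielectric(dot(wI, m), eta);
    if (randomFloat() < F)
        return reflect(wI, m);  // specular reflection off the microfacet
    return refract(wI, m, eta); // transmission through the microfacet
}
```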
**It is important** to pass the original (not flipped) normal into the `evaluate` method, since it can be used separately and performs the same check itself.

The evaluation method is a little trickier. Before evaluating, we not only have to determine whether the ray enters the material (exactly as we did in the sampling method), but we also need to determine whether the given incoming and outgoing rays (\(\omega_i\) and \(\omega_o\)) describe a reflection or a refraction event. We can do that by checking whether both vectors lie in the same hemisphere: if they do, this is a reflection; if \(\omega_o\) lies in the other hemisphere than \(\omega_i\), the ray was refracted into the material.

Next we need to determine the half-vector, i.e. the microfacet normal. For a reflection it is just the normalized sum of the incoming and outgoing vectors (but since we assume \((\omega_i \cdot n) < 0\), we have to take the incoming direction with a negative sign):

![](images/d-3.png)

For a refraction the half-vector is computed slightly differently. In the general case it would be \(m = -(\omega_i \eta + \omega_o)\), but since our \(\omega_i\) is flipped, we can rewrite this as \(m = \omega_i \eta - \omega_o\):

![](images/d-4.png)

Also, if we flipped the normal, we need to make sure that the normal and the microfacet lie in the same hemisphere.

Now everything is ready: if the ray was reflected, we calculate the specular BSDF pretty much as we did for plastic; if the ray was transmitted, we use different expressions for the BSDF *(again, please refer to the [original paper](https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf) for the derivation and details, sections 4 and 5)*. *(Notice that we have an extra parameter in our `SampledMaterial` structure, named `eta`; it is required for the Russian roulette, which is described below.)*

If we did everything correctly, we get nice glass spheres:

![](images/pic-9.png)

Unbiased rendering
================================================================================

Up to this part we traced a fixed number of bounces per pixel each frame. But this introduces bias into our rendering, since not all bounces are taken into account. If we want to be physically based, we have to account for an infinite number of bounces, but how can we do that?

First, we need to refactor our rendering code. Roughly speaking, it used to work like this: every frame, each pixel traced a complete path with a fixed number of bounces and accumulated the result. Now, instead of distributing samples over time, let's distribute **bounces** over time. Each ray gets a property describing whether it has finished its path; we generate a new ray only when the current one is completed, and we accumulate the image only from completed rays (a rough sketch of this logic is shown at the end of this section). This gives us a potentially infinite number of bounces for each ray, and a ray is completed only when it leaves the scene and hits the background.

But imagine a scene that is completely enclosed by geometry, with no chance for a ray to escape. I've added such a scene to my [ray-tracing references repository](https://github.com/sergeyreznik/raytracing-references): the camera is located inside a glowing sphere, and no ray can ever escape it. We need to make sure such scenes are handled properly as well. Luckily for us, there is a method that not only makes this possible, but is also commonly used to improve the efficiency of the Monte Carlo estimator. This method is called Russian roulette. The idea behind it is to discard paths that are expensive to compute but contribute little (if anything) to the final image. This is done by choosing a probability of terminating the ray based on the ray's throughput, and then either terminating the ray or scaling its throughput by the inverse of the probability of continuing the path, which accounts for all the terminated paths.
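Here is a very rough sketch of how the completion flag and Russian roulette might fit together in the per-bounce logic; the `Ray` fields and every helper used here (`traceRay`, `accumulateImage`, `generateCameraRay`, `backgroundColor`, `randomFloat`) are hypothetical, not the actual kernel from the repository:

```cpp
// One bounce of one ray per frame: finish or extend the current path,
// terminating low-throughput paths with Russian roulette.
void handleBounce(thread Ray& ray, thread RandomState& rng)
{
    if (ray.completed)
    {
        accumulateImage(ray.radiance); // add the finished path to the image
        ray = generateCameraRay(rng);  // and start a new path for this pixel
        return;
    }

    Intersection hit = traceRay(ray);
    if (hit.missed)
    {
        // the ray left the scene: add the background and finish the path
        ray.radiance += ray.throughput * backgroundColor(ray.direction);
        ray.completed = true;
        return;
    }

    // ... sample the material, do next event estimation, update ray.throughput ...

    // Russian roulette: continue the path with a probability based on the current
    // throughput, and scale the throughput of surviving paths by the inverse probability
    float maxThroughput = max(ray.throughput.x, max(ray.throughput.y, ray.throughput.z));
    float continueProbability = min(1.0f, maxThroughput);
    if (randomFloat(rng) >= continueProbability)
        ray.completed = true; // the path is terminated
    else
        ray.throughput /= continueProbability;
}
```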
*(read more about Russian roulette [here](http://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/Russian_Roulette_and_Splitting.html))*

In this way we can say that we are computing a potentially infinite number of samples, which is another step towards physically correct, unbiased rendering.

**That's it!**

[Return to the index](../index.html)