Before starting this post, I would like to talk a bit about my homeland, Hong Kong. The Chinese government enacted a new National Security Law, bypassing our local legislative council. We could only read the full text of this law after it was enacted (with the official English version published 3 days after that). This law destroys our legal system completely: the government can appoint judges it likes (Article 44), the jury can be removed from a trial (Article 46), and trials can be held without media and public presence (Article 41). The law is so vague that the government can prosecute anyone it doesn't like. People have been arrested for possessing anti-government stickers. We don't even have the right to hate the government (Article 29.5). If I promote "Boycott Chinese Products", I may have broken this law already... Also, the personnel of the security office do not need to obey HK law (Article 60), and the law even applies to foreigners outside HK (Article 38). Our voting rights are also deteriorating: more pro-democracy candidates can be disqualified under this law in the upcoming election (Article 35)... So, if you are living in a democratic country, please cast a vote if you can.
Back to the topic of the spectral path tracer. Path tracing in the spectral domain has been added to my toy renderer (alongside tracing in sRGB / ACEScg space). A spectral path tracer traces rays with actual wavelengths of light instead of RGB channels. The result is physically correct, and some effects can only be calculated by spectral rendering (e.g. dispersion, iridescence). Although my toy spectral path tracer does not support such materials, I would like to investigate how spectral rendering affects the bounced light color compared to images rendered in RGB color spaces. The demo can be downloaded here.
Spectral rendered image |
Render Loop Modification
Referencing my previous DXR Path Tracer post, only a few modifications are needed to support spectral path tracing:
RGB path tracer render loop in previous post |
Spectral Up Sampling Texture
One of the problems in spectral rendering is converting a texture from color to a spectral power distribution (SPD); this process is called spectral up sampling. Luckily, there are many papers about it. The technique "Spectral Primary Decomposition for Rendering with sRGB Reflectance" is used in the demo to up sample textures. I chose this method for its simplicity: it reconstructs the spectrum as a linear combination of the texture color with 3 pre-computed spectral basis functions:
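The reconstruction itself is just a per-wavelength weighted sum. A minimal sketch below, with made-up placeholder basis tables (the real, finely tabulated bases ship with the paper); like the real bases, these placeholders sum to 1 at every wavelength so that pure white maps to a flat unit spectrum:

```python
# Hypothetical coarse spectral bases (placeholders -- the real tables come
# from the "Spectral Primary Decomposition" paper); 5 bins covering
# 380-780nm, for illustration only.
BASIS_R = [0.02, 0.05, 0.10, 0.75, 0.95]
BASIS_G = [0.08, 0.65, 0.80, 0.20, 0.04]
BASIS_B = [0.90, 0.30, 0.10, 0.05, 0.01]

def upsample_srgb_reflectance(rgb):
    """Reconstruct an SPD as a linear combination of the 3 basis spectra.
    rgb: linear (non-gamma) sRGB reflectance in [0, 1]."""
    r, g, b = rgb
    return [r * sr + g * sg + b * sb
            for sr, sg, sb in zip(BASIS_R, BASIS_G, BASIS_B)]
```

Because the bases are pre-computed once, the per-texel cost at render time is only a 3-term multiply-add per wavelength sample.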
Spectral Up Sampling Light Color and Intensity
Besides spectral up sampling the textures, the light also needs to be up sampled. Because the light color can be specified in a wide color gamut in the demo, the up sampling method from the above section is not enough. The method from "A Low-Dimensional Function Space for Efficient Spectral Upsampling" is used to up sample the light color. This method computes 3 coefficients from the light color (i.e. from RGB to c0, c1, c2), and then the spectral power distribution f(λ) can be computed as below:
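Fitting the coefficients is an optimization problem (the paper ships pre-fitted tables), but once c0, c1, c2 are known, evaluating the sigmoid-shaped SPD is cheap. A sketch of the evaluation, assuming λ in nanometers:

```python
import math

def sigmoid_spd(lam, c0, c1, c2):
    """Jakob & Hanika sigmoid spectrum:
    f(lam) = 1/2 + x / (2*sqrt(1 + x^2)),  x = c0*lam^2 + c1*lam + c2.
    The output is always in (0, 1), i.e. a valid reflectance."""
    x = c0 * lam * lam + c1 * lam + c2
    return 0.5 + x / (2.0 * math.sqrt(1.0 + x * x))
```

The sigmoid shape is what keeps the reconstructed spectrum smooth and bounded, which is why the method extends naturally to wide-gamut input colors.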
In the demo, the original light intensity of the RGB path tracer is modified so that it better matches the intensity of the spectral rendered image. Before the modification, the RGB lighting was computed by simply multiplying the light color with the light intensity. Now this value is also divided by the luminance of the color (though this loses some control in the color picker UI...).
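A sketch of that normalization, assuming linear Rec.709/sRGB primaries for the luminance weights (the demo's exact color space handling may differ):

```python
def light_radiance(color, intensity):
    """Scale the RGB light color by intensity, divided by the color's
    luminance so perceived brightness stays constant as hue changes.
    Luminance weights assume linear Rec.709/sRGB primaries."""
    r, g, b = color
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    if lum <= 0.0:
        return (0.0, 0.0, 0.0)
    s = intensity / lum
    return (r * s, g * s, b * s)
```

After this change the luminance of the emitted radiance always equals the intensity slider value, which is what makes the RGB and spectral images comparable.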
In addition to the luminance scale, we also need to chromatically adapt the light color from illuminant E to illuminant D65/D60 before computing the 3 SPD coefficients, because the coefficients are fitted using illuminant E. Without this step, the image will have a reddish appearance.
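As an illustration of the adaptation step, here is a deliberately crude von-Kries-style version that just rescales XYZ components by the ratio of white points; a Bradford transform (adapting in a sharpened cone space) is the usual higher-quality choice, and what the demo actually uses is not shown here:

```python
# White points in CIE XYZ, Y normalized to 1.
WHITE_E   = (1.0, 1.0, 1.0)
WHITE_D65 = (0.95047, 1.0, 1.08883)

def adapt_e_to_d65(xyz):
    """Crude chromatic adaptation: scale each XYZ component by the
    destination/source white point ratio (von Kries in XYZ space)."""
    return tuple(c * d / s for c, d, s in zip(xyz, WHITE_D65, WHITE_E))
```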
Importance Sampling Wavelength
As mentioned at the start of the post, the wavelengths are sampled using hero wavelength sampling, which randomly picks 1 wavelength within the visible spectrum (i.e. 380-780nm in the demo) and then picks 2 additional samples, evenly spaced within the visible wavelength range. With this approach, there is a high variance in color. Sometimes the color converges to the final color with 100 samples per pixel, but more often it requires > 1000 samples per pixel. It really depends on luck...
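The sampling scheme itself is simple: one uniform hero wavelength, plus rotated copies at even offsets that wrap around the visible range. A sketch:

```python
LAM_MIN, LAM_MAX = 380.0, 780.0
NUM_SAMPLES = 3  # hero + 2 evenly spaced samples, as in the demo

def hero_wavelengths(u):
    """Hero wavelength sampling: pick one wavelength uniformly from a
    random number u in [0, 1), then place the remaining samples at even
    offsets, wrapping around the visible range."""
    span = LAM_MAX - LAM_MIN
    hero = LAM_MIN + u * span
    return [LAM_MIN + (hero - LAM_MIN + k * span / NUM_SAMPLES) % span
            for k in range(NUM_SAMPLES)]
```

Because all 3 wavelengths are derived from a single random number, they follow the same path through the scene, which is the whole point of the hero wavelength technique.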
3 different spectral rendered images with hero wavelength sampling using 100 samples per pixel. The color looks a bit different between all 3 images, with a noticeable red tint in the middle image. |
To make the render converge faster, let's consider the CIE XYZ standard observer curves below: samples with wavelength > 650nm or < 420nm have only a small influence on the output image. So I tried to place more samples around the center of the visible wavelength range.
CIE 1931 Color Matching Function from Wikipedia |
Computing the inverse CDF of the cos-weighted PDF (with w=200) |
Unfortunately, this cannot be inverted analytically, as mentioned here. So Newton's method (with 15 iterations) is used, as suggested in this post, which gives the following result:
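The Newton iteration can be sketched as below. The exact curve used in the demo isn't reproduced here, so this assumes a cos²-shaped PDF centered at 580nm with the post's w=200; only the antiderivative changes for a different curve:

```python
import math

LAM_MIN, LAM_MAX = 380.0, 780.0
LAM_C, W = 580.0, 200.0  # assumed center; w=200 as in the caption above

def _antideriv(lam):
    # Integral of cos^2((lam - LAM_C)/W) d(lam) = W*(t/2 + sin(2t)/4)
    t = (lam - LAM_C) / W
    return W * (0.5 * t + 0.25 * math.sin(2.0 * t))

_NORM = _antideriv(LAM_MAX) - _antideriv(LAM_MIN)

def cdf(lam):
    return (_antideriv(lam) - _antideriv(LAM_MIN)) / _NORM

def pdf(lam):
    return math.cos((lam - LAM_C) / W) ** 2 / _NORM

def invert_cdf(u, iters=15):
    """Solve cdf(lam) = u with Newton's method, from the range midpoint."""
    lam = 0.5 * (LAM_MIN + LAM_MAX)
    for _ in range(iters):
        lam -= (cdf(lam) - u) / pdf(lam)
        lam = min(max(lam, LAM_MIN), LAM_MAX)  # keep the iterate in range
    return lam
```

Since the PDF never gets close to zero inside [380, 780] with these constants, 15 iterations converge comfortably.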
3 different spectral rendered images with the cos-weighted PDF using 100 samples per pixel. The color still looks a bit different between all 3 images... |
Sadly, the result is not improved; it shows even more color variance than the hero wavelength method...
So I googled for a while and found another paper: "An Improved Technique for Full Spectral Rendering". It suggests using a cosh-based function for the PDF, whose CDF can be inverted analytically:
By using 3 different PDFs to sample the wavelengths (A0 = 0.0078, B0 = 600, A1 = 0.0072, B1 = 535, A2 = 0.0062, B2 = 445 are used in the demo), the image converges much faster than with hero wavelength sampling. About 100 SPP is often enough to get a color similar to the converged image.
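A sketch of the inverse-CDF sampling, assuming a PDF proportional to 1/cosh²(A·(λ − B)) restricted to the visible range (the normalization in the demo may be handled differently):

```python
import math

LAM_MIN, LAM_MAX = 380.0, 780.0
# (A, B) coefficient pairs from the demo, one PDF per wavelength sample.
COEFFS = [(0.0078, 600.0), (0.0072, 535.0), (0.0062, 445.0)]

def sample_cosh(u, a, b):
    """Inverse-CDF sampling of p(lam) ~ 1/cosh^2(a*(lam - b)) on
    [LAM_MIN, LAM_MAX]. The antiderivative is tanh(a*(lam - b))/a,
    so the CDF inverts analytically via atanh."""
    t0 = math.tanh(a * (LAM_MIN - b))
    t1 = math.tanh(a * (LAM_MAX - b))
    return b + math.atanh(t0 + u * (t1 - t0)) / a

def sample_wavelengths(u0, u1, u2):
    """Draw one wavelength from each of the 3 PDFs."""
    return [sample_cosh(u, a, b) for u, (a, b) in zip((u0, u1, u2), COEFFS)]
```

Each of the 3 random numbers lands near its own peak (roughly red, green and blue), so even the first few samples cover the spectrum in a balanced way.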
Another problem with the color variance of hero wavelength sampling shows up during camera movement. Since my demo is an interactive path tracer, when the camera moves, the path tracer re-generates the wavelength samples, which changes the color greatly every frame:
Camera movement with hero wavelength sampling |
To give a better preview of the color during the first few samples of the path tracing, the random numbers are stratified into 9 regions so that the first ray picks 3 random wavelengths lying around 600nm, 535nm and 445nm when substituted into the inverse CDFs of the cosh-weighted curves, which gives some red, green and blue color.
Code to generate stratified random numbers P0, P1, P2 within [0, 1] range. |
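The demo's exact stratification code isn't reproduced here; the sketch below is a guess at the scheme, rotating each of P0, P1, P2 through the 9 strata as the per-pixel sample index advances, so early samples cover the whole [0, 1) range instead of clustering:

```python
import random

NUM_STRATA = 9

def stratified_random(sample_index, rng=random):
    """Generate stratified P0, P1, P2 in [0, 1): sample k draws from
    stratum (sample_index + k*3) mod 9. This is an assumed scheme, not
    the demo's actual code."""
    ps = []
    for k in range(3):
        stratum = (sample_index + k * 3) % NUM_STRATA
        ps.append((stratum + rng.random()) / NUM_STRATA)
    return ps
```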
With these stratified random numbers, the color variation is reduced during camera movement:
Camera movement with stratified random numbers. |
Conclusion
In this post, I have described how a basic spectral path tracer can be implemented. The spectral rendered image is a bit different from the RGB rendered image (the RGB rendered image is a bit more reddish than the spectral traced one). This may be due to the spectral up sampling method used, or to not using a D65 light source. However, the bounced light intensity does not differ much between tracing in spectral and ACEScg space. In the future I would like to try different light sources such as illuminant E/D/F to see how they affect the color. I would also like to find a technique for spectral up sampling albedo with a wide color gamut instead of sRGB only.
References
[1] A. Wilkie et al., "Hero Wavelength Spectral Sampling", EGSR 2014. https://cgg.mff.cuni.cz/~wilkie/Website/EGSR_14_files/WNDWH14HWSS.pdf
[2] "CIE 1931 color space", Wikipedia. https://en.wikipedia.org/wiki/CIE_1931_color_space
[3] I. Mallett and C. Yuksel, "Spectral Primary Decomposition for Rendering with sRGB Reflectance". https://graphics.geometrian.com/research/spectral-primaries.html
[4] W. Jakob and J. Hanika, "A Low-Dimensional Function Space for Efficient Spectral Upsampling". https://rgl.s3.eu-central-1.amazonaws.com/media/papers/Jakob2019Spectral_3.pdf
[5] C. Wyman, P.-P. Sloan and P. Shirley, "Simple Analytic Approximations to the CIE XYZ Color Matching Functions", JCGT 2013. http://jcgt.org/published/0002/02/01/paper.pdf
[6] M. Radziszewski, K. Boryczko and W. Alda, "An Improved Technique for Full Spectral Rendering". https://www.researchgate.net/publication/228938842_An_Improved_Technique_for_Full_Spectral_Rendering