Introduction
Before starting this post, I would like to talk a bit about my homeland, Hong Kong. The Chinese government enacted a new National Security Law, bypassing our local legislative council. We could only read the full text of this law after it was enacted (with the official English version published 3 days after that). This law destroys our legal system completely: the government can appoint judges they like (Article 44), the jury can be removed from a trial (Article 46), and trials can be held without media and public presence (Article 41). This law is so vague that the government can prosecute anyone they don't like. People were arrested for possessing anti-government stickers. We don't even have the right to hate the government (Article 29.5). If I promote "Boycott Chinese Products", I may have broken this law already... Also, the personnel of the security office do not need to obey Hong Kong law (Article 60). This law even applies to foreigners outside Hong Kong (Article 38). Our voting rights are also deteriorating; more pro-democracy candidates can be disqualified under this law in the upcoming election (Article 35)... So, if you are living in a democratic country, please cast a vote if you can.
Back to the topic of the spectral path tracer. Path tracing in the spectral domain has been added to my toy renderer (alongside tracing in sRGB / ACEScg space). A spectral path tracer traces rays with actual wavelengths of light instead of RGB channels. The result is physically correct, and some effects can only be computed by spectral rendering (e.g. dispersion, iridescence). Although my toy spectral path tracer does not support such materials, I would like to investigate how spectral rendering affects the bounced light color compared to images rendered in RGB color spaces. The demo can be downloaded here.

Spectral rendered image 
Render Loop Modification
Referencing my previous DXR Path Tracer post, only a few modifications are needed to support spectral path tracing:

RGB path tracer render loop in previous post 
When a new ray starts tracing, a wavelength is randomly picked. My first implementation uses hero wavelength sampling with 3 wavelength samples per ray. The number 3 is chosen because it is convenient for replacing existing code where rays are traced with RGB channels. So the "Light Path Texture" from the previous post is modified to accumulate the energy at those 3 wavelengths during ray traversal. Finally, when the ray is terminated, the resulting energy in the "Light Path Texture" is integrated with the CIE XYZ color matching functions and stored in the "Lighting Sample Texture" in XYZ space, which is later converted to the display device space (e.g. sRGB/AdobeRGB/Rec2020) as described in the previous post.
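As a sketch of that last integration step, here is how the 3 wavelength energies of a ray could be converted to XYZ. For self-containment I use a multi-lobe analytic fit to the color matching functions (the demo only states that ȳ is approximated this way, so using analytic x̄ and z̄ as well is my assumption, and all function names are mine):

```python
import math

def gauss(x, mu, s1, s2):
    # Piecewise Gaussian used by the analytic CIE fit:
    # a different inverse width on each side of the peak.
    s = s1 if x < mu else s2
    return math.exp(-0.5 * ((x - mu) * s) ** 2)

# Multi-lobe analytic fits to the CIE 1931 2-degree observer
# (coefficients from "Simple Analytic Approximations to the
# CIE XYZ Color Matching Functions").
def cie_x(l):
    return (1.056 * gauss(l, 599.8, 0.0264, 0.0323)
          + 0.362 * gauss(l, 442.0, 0.0624, 0.0374)
          - 0.065 * gauss(l, 501.1, 0.0490, 0.0382))

def cie_y(l):
    return (0.821 * gauss(l, 568.8, 0.0213, 0.0247)
          + 0.286 * gauss(l, 530.9, 0.0613, 0.0322))

def cie_z(l):
    return (1.217 * gauss(l, 437.0, 0.0845, 0.0278)
          + 0.681 * gauss(l, 459.0, 0.0385, 0.0725))

def to_xyz(wavelengths, energies, pdfs):
    # Monte Carlo estimate of the XYZ tristimulus values from the
    # 3 wavelength samples carried by one ray.
    x = y = z = 0.0
    for l, e, p in zip(wavelengths, energies, pdfs):
        x += e * cie_x(l) / p
        y += e * cie_y(l) / p
        z += e * cie_z(l) / p
    n = len(wavelengths)
    return (x / n, y / n, z / n)
```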
Spectral Up Sampling Texture
One of the problems in spectral rendering is converting a texture from color to a spectral power distribution (SPD); this process is called spectral up sampling. Luckily, there are many papers about it. The technique called "Spectral Primary Decomposition for Rendering with sRGB Reflectance" is used in the demo to up sample textures. I chose this method because of its simplicity. It reconstructs the spectrum as a linear combination of the texture color with 3 precomputed spectral basis functions:
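A minimal sketch of the reconstruction, assuming per-wavelength basis lookups. Note that `fake_basis` below is a made-up placeholder, not the paper's actual tabulated data; the real basis functions approximately partition unity (they sum to ~1 at every wavelength), which the placeholder mimics:

```python
def upsample_srgb(rgb, wavelength, basis):
    # Reconstruct reflectance at one wavelength as a linear
    # combination of the (linear) sRGB texture color with the
    # 3 precomputed spectral basis functions.
    r_basis, g_basis, b_basis = basis(wavelength)
    return rgb[0] * r_basis + rgb[1] * g_basis + rgb[2] * b_basis

# Placeholder basis: NOT the real data. Because the values sum to 1,
# a white texel (1,1,1) maps to a flat ~100% reflectance spectrum.
def fake_basis(wavelength):
    if wavelength < 490:
        return (0.05, 0.10, 0.85)
    elif wavelength < 580:
        return (0.10, 0.80, 0.10)
    else:
        return (0.85, 0.10, 0.05)
```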
But one thing that bothers me is that the meaning of the texture color is a bit different from what the spectral up sampling method assumes. In PBR rendering, texture color refers to albedo (i.e. the ratio of radiosity to the irradiance received by a surface), which is independent of the CIE XYZ observer. The upsampling method, on the other hand, minimizes a least squares problem for the texture color viewed under illuminant D65 with the CIE standard observer. Maybe the RGB albedo values are computed with an SPD and the XYZ observer functions? I have no idea and may investigate this in the future.
Spectral Up Sampling Light Color and Intensity
Besides spectral up sampling the textures, lights also need to be up sampled. Because the light color can be specified in a wide color gamut in the demo, the up sampling method used in the above section is not enough. The method from "A Low-Dimensional Function Space for Efficient Spectral Upsampling" is used to up sample the light color. This method computes 3 coefficients from the light color (i.e. from RGB to c0, c1, c2), and then the spectral power distribution, f(λ), can be computed as below:
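The paper represents the SPD as a sigmoid of a quadratic polynomial in the wavelength; a sketch of the evaluation (the coefficient ordering in the polynomial is illustrative):

```python
import math

def spd(wavelength, c0, c1, c2):
    # Sigmoid of a quadratic in the wavelength: smooth, bounded
    # to (0, 1), and cheap to evaluate per wavelength sample.
    x = c0 * wavelength * wavelength + c1 * wavelength + c2
    return 0.5 + x / (2.0 * math.sqrt(1.0 + x * x))
```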
Since a light is specified by color and intensity, after calculating the SPD coefficients we need to scale the SPD curve so that integrating the scaled SPD with the CIE standard observer ȳ(λ) curve equals the specified luminance intensity:
The scaling factor K is calculated numerically using the trapezoidal rule with a 1nm wavelength interval, and the ȳ(λ) curve is approximated with the multi-lobe approximation from "Simple Analytic Approximations to the CIE XYZ Color Matching Functions". So the light spectral power distribution is specified by 4 floating point numbers: 3 coefficients + 1 intensity scale.
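A numerical sketch of computing K, assuming the sigmoid SPD form from the upsampling paper and the multi-lobe analytic ȳ(λ) fit (function names and the exact integration bounds are mine):

```python
import math

def cie_y(l):
    # Multi-lobe analytic fit to ybar (Wyman et al. coefficients).
    def g(x, mu, s1, s2):
        s = s1 if x < mu else s2
        return math.exp(-0.5 * ((x - mu) * s) ** 2)
    return (0.821 * g(l, 568.8, 0.0213, 0.0247)
          + 0.286 * g(l, 530.9, 0.0613, 0.0322))

def spd(l, c0, c1, c2):
    # Sigmoid-of-quadratic SPD from the light upsampling method.
    x = c0 * l * l + c1 * l + c2
    return 0.5 + x / (2.0 * math.sqrt(1.0 + x * x))

def luminance_scale(c0, c1, c2, target_luminance):
    # Trapezoidal rule over [380, 780]nm with a 1nm interval:
    # K * integral(f * ybar) = target  =>  K = target / integral.
    total = 0.0
    for l in range(380, 780):
        a = spd(l, c0, c1, c2) * cie_y(l)
        b = spd(l + 1, c0, c1, c2) * cie_y(l + 1)
        total += 0.5 * (a + b)  # dlambda = 1nm
    return target_luminance / total
```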
In the demo, the original light intensity of the RGB path tracer is modified so that it better matches the intensity of the spectral rendered image. Before the modification, the RGB lighting was done by simply multiplying the light color with the light intensity. Now this value is also divided by the luminance of the color (though this loses some control in the color picker UI...).

RGB light color multiplied by intensity only


RGB light color multiplied by intensity, divided by color luminance


Spectral light with scaled SPD curve


In addition to the luminance scale, we also need to chromatically adapt the light color from illuminant E to illuminant D65/D60 before computing the 3 SPD coefficients, because the coefficients are fitted using illuminant E. Without this step, the image will have a reddish appearance.
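A sketch of such an adaptation using a von Kries-style transform in Bradford cone space (the Bradford matrix and the D65 white point are standard values; the helper names are mine, and the demo may implement this differently):

```python
# Bradford cone response matrix and its inverse (standard values).
M = [[ 0.8951,  0.2664, -0.1614],
     [-0.7502,  1.7135,  0.0367],
     [ 0.0389, -0.0685,  1.0296]]
M_INV = [[ 0.9869929, -0.1470543,  0.1599627],
         [ 0.4323053,  0.5183603,  0.0492912],
         [-0.0085287,  0.0400428,  0.9684867]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def adapt_e_to_d65(xyz):
    # Map XYZ relative to illuminant E (white = 1,1,1) to XYZ
    # relative to D65, by scaling in Bradford cone space.
    white_e = (1.0, 1.0, 1.0)
    white_d65 = (0.95047, 1.0, 1.08883)
    src = mat_vec(M, white_e)
    dst = mat_vec(M, white_d65)
    cone = mat_vec(M, xyz)
    adapted = [cone[i] * dst[i] / src[i] for i in range(3)]
    return mat_vec(M_INV, adapted)
```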

Computing light SPD coefficients without chromatic adaptation


Computing light SPD coefficients with chromatic adaptation

Importance Sampling Wavelength
As mentioned at the start of the post, the wavelengths are sampled using the hero wavelength method, which randomly picks 1 wavelength within the visible spectrum (i.e. 380-780nm in the demo) and then picks 2 additional samples evenly separated within the visible wavelength range. With this approach, there is high variance in color. Sometimes the color converges to the final color with 100 samples per pixel, but more often it requires > 1000 samples per pixel to converge. It really depends on luck...
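A sketch of this equal-spacing rotation scheme (names are mine):

```python
import random

def sample_hero_wavelengths(n=3, lo=380.0, hi=780.0):
    # Pick one "hero" wavelength uniformly, then rotate n-1 more
    # samples through the visible range at equal spacing, wrapping
    # around at the upper bound.
    span = hi - lo
    hero = lo + random.random() * span
    return [lo + (hero - lo + j * span / n) % span for j in range(n)]
```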



3 different spectral rendered images with hero wavelength sampling using 100 samples per pixel. The color looks a bit different between the 3 images, with a noticeable red tint in the middle image.
To make the render converge faster, consider the CIE XYZ standard observer curves below: samples with wavelength > 650nm or < 420nm have only a small influence on the output image. So I tried to place more samples around the center of the visible wavelength range.

CIE 1931 Color Matching Function from Wikipedia 
My first (failed) attempt was to use a cosine weighted PDF curve like this to randomly pick 3 wavelengths for each ray:
A normalization constant is computed so that the PDF integrates to one, and then the CDF can be computed. To pick a random sample from this PDF, the inverse method can be used. To simplify the calculation, the PDF is centered at 0 with width 200 instead of the [380, 780] range. After sampling λ from the inverse of the CDF, λ is shifted by 580 so that it lies in the [380, 780] range. To find the inverse of the CDF:

Compute the inverse CDF of the cosine weighted PDF (with w=200)
Unfortunately, this cannot be inverted analytically, as mentioned here. So Newton's method (with 15 iterations) is used, as suggested in that post, with the following result:
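As a sketch (assuming the cosine weighted PDF has the raised-cosine form p(x) ∝ 1 + cos(2πx/w) on [-w/2, w/2], since the original equation is in a figure), the Newton inversion could look like:

```python
import math

def sample_raised_cosine(u, w=200.0, iterations=15):
    # Invert the CDF F(x) = (x + (w/(2*pi)) * sin(2*pi*x/w)) / w + 1/2
    # of the raised-cosine PDF p(x) = (1 + cos(2*pi*x/w)) / w on
    # [-w/2, w/2] with Newton's method, then shift to [480, 680]nm.
    x = 0.0  # start at the distribution center
    for _ in range(iterations):
        f = (x + (w / (2.0 * math.pi)) * math.sin(2.0 * math.pi * x / w)) / w + 0.5 - u
        p = (1.0 + math.cos(2.0 * math.pi * x / w)) / w
        if p < 1e-8:
            break  # derivative vanishes at the endpoints
        x -= f / p
        x = max(-0.5 * w, min(0.5 * w, x))  # clamp to the support
    return x + 580.0
```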



3 different spectral rendered images with the cosine weighted PDF using 100 samples per pixel. The color still looks a bit different between the 3 images...

Sadly, the result is not improved; it gives more color variance than the hero wavelength method...
So I googled for a while and found another paper: "An Improved Technique for Full Spectral Rendering". It suggests using a cosh-based function for the PDF, whose CDF can be inverted analytically:
The paper only suggests using that PDF curve with center B = 538nm and A = 0.0072. Since this shape is similar to my cosine weighted PDF, the color convergence rate is similar (so I skipped capturing screenshots for this case)... But what if we use this curve with the centers lying around the peaks of the XYZ standard observer curves? To find this out, I computed the normalization constant within the range [380, 780]nm, and then the CDF and inverse CDF:
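A sketch of this sampler, assuming the PDF is proportional to 1/cosh²(A(λ - B)) as in the paper, normalized over the visible range (names are mine):

```python
import math

def make_cosh_sampler(A, B, lo=380.0, hi=780.0):
    # PDF proportional to 1 / cosh(A*(l - B))^2, normalized on [lo, hi].
    # Its antiderivative is tanh(A*(l - B)) / A, so the CDF can be
    # inverted analytically with atanh.
    t_lo = math.tanh(A * (lo - B))
    t_hi = math.tanh(A * (hi - B))
    norm = (t_hi - t_lo) / A  # integral of the unnormalized PDF

    def pdf(l):
        return 1.0 / (norm * math.cosh(A * (l - B)) ** 2)

    def sample(u):
        # Invert F(l) = (tanh(A*(l - B)) - t_lo) / (t_hi - t_lo).
        return B + math.atanh(t_lo + u * (t_hi - t_lo)) / A

    return pdf, sample
```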
By using 3 different PDFs to sample the wavelengths (A0 = 0.0078, B0 = 600, A1 = 0.0072, B1 = 535, A2 = 0.0062, B2 = 445 are used in the demo), the image converges much faster than using hero wavelength sampling. About 100 SPP is often enough to get a color similar to the converged image.

Rendered with 3 different cosh curve PDFs using 100 SPP


Converged spectral rendered image. 

Another problem with the color variance of hero wavelength sampling is camera movement. Since my demo is an interactive path tracer, when the camera moves, the path tracer regenerates a wavelength sample, which changes the color greatly every frame:

Camera movement with hero wavelength sampling 
To give a better preview of color during the first few samples of path tracing, the random numbers are stratified into 9 regions so that the first ray picks 3 random wavelengths lying around 600nm, 535nm and 445nm when substituted into the inverse CDFs of the cosh weighted curves, which gives some red, green and blue color.

Code to generate stratified random numbers P0, P1, P2 within [0, 1] range. 
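A sketch of the idea (my interpretation, since the original code is in a screenshot: each of the 3 numbers feeds the inverse CDF of one cosh PDF, and the first sample starts in the middle stratum so its u values sit near 0.5, mapping close to each curve's center):

```python
import random

def stratified_randoms(sample_index, num_strata=9):
    # Returns P0, P1, P2 in [0, 1). Each P_j feeds the inverse CDF of
    # one cosh PDF (centered at 600 / 535 / 445 nm). Starting at the
    # middle stratum means the first sample's u values are near 0.5,
    # i.e. near each PDF's center wavelength.
    stratum = (num_strata // 2 + sample_index) % num_strata
    return [(stratum + random.random()) / num_strata for _ in range(3)]
```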
With these stratified random numbers, color variation is reduced during camera movement:

Camera movement with stratified random numbers. 
Conclusion
In this post, I have described how a basic spectral path tracer can be implemented. The spectral rendered image is a bit different from the RGB rendered image (the RGB rendered image is a bit more reddish compared to the spectral traced one). This may be due to the spectral up sampling method used, or to not using a D65 light source. However, the bounced light intensity is not much different between tracing in spectral and ACEScg space. In the future, I would like to try different light sources such as illuminant E/D/F to see how they affect the color. I would also like to find a technique for spectral up sampling albedo in a wide color gamut instead of sRGB only.
References
[1] https://cgg.mff.cuni.cz/~wilkie/Website/EGSR_14_files/WNDWH14HWSS.pdf
[2] https://en.wikipedia.org/wiki/CIE_1931_color_space
[3] https://graphics.geometrian.com/research/spectral-primaries.html
[4] https://rgl.s3.eu-central-1.amazonaws.com/media/papers/Jakob2019Spectral_3.pdf
[5] http://jcgt.org/published/0002/02/01/paper.pdf
[6] https://www.researchgate.net/publication/228938842_An_Improved_Technique_for_Full_Spectral_Rendering