
Color Matching Function Comparison

Introduction

When performing spectral rendering, we need a Color Matching Function (CMF) to convert spectral radiance to XYZ values, which are then converted to RGB for display. Different people perceive color slightly differently, and age also affects color perception, so the CIE defines several standard observers representing an average person. The commonly used CMFs are the CIE 1931 2° Standard Observer and the CIE 1964 10° Standard Observer. Besides these two, other CMFs exist, such as the Judd- and Vos-modified CIE 1931 2° CMF and the CIE 2006 CMF. In this post, I will compare images rendered with different CMFs (as well as some analytical approximations). A demo can be downloaded here (the demo renders with wavelengths in [380, 780]nm, which may introduce some error for CMFs that cover a larger range).

Left: rendered with CIE2006 CMF
Right: rendered with CIE1931 CMF

CMF Luminance

When I was implementing different CMFs in my renderer, replacing the CMF directly resulted in slightly different brightness in the rendered images:

Rendered with 1931 CMF
Rendered with 1964 CMF

This is because the renderer uses photometric units (e.g. lumen, lux) to define the brightness of the light sources. Since the definition of luminous energy depends on the luminosity function (usually the ȳ(λ) of the CMF), we need to calculate the intensity of the light source with respect to the chosen CMF. With the correct luminosity function, both rendered images have similar brightness:

Rendered with 1931 CMF
Rendered with 1964 CMF + luminance adjustment
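The adjustment can be sketched as follows: compute a per-CMF scale factor so the light's SPD integrates to the requested luminous flux under that CMF's ȳ(λ). A minimal Python sketch with stand-in curves (the function and variable names are mine, not from the demo; real data would come from the CMF tables):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule, written out explicitly for portability."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) * 0.5)

def radiant_scale(target_lumens, spd, ybar, wavelengths):
    """Scale K so the scaled SPD K*spd(λ) reaches the requested luminous flux:
    target = 683 * ∫ K * spd(λ) * ybar(λ) dλ."""
    return target_lumens / (683.0 * trapezoid(spd * ybar, wavelengths))

# Toy example: one flat SPD evaluated against two slightly different
# stand-in ybar curves, mimicking two CMFs.
lam = np.arange(380.0, 781.0, 1.0)
spd = np.ones_like(lam)
ybar_a = np.exp(-0.5 * ((lam - 555.0) / 50.0) ** 2)
ybar_b = np.exp(-0.5 * ((lam - 557.0) / 52.0) ** 2)
k_a = radiant_scale(1000.0, spd, ybar_a, lam)
k_b = radiant_scale(1000.0, spd, ybar_b, lam)
# k_a != k_b: each CMF needs its own scale for the same photometric brightness
```

Each CMF's ȳ(λ) gives a slightly different scale for the same photometric intensity, which is exactly why swapping the CMF without recomputing the scale changes the image brightness.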

 

CMF White Point

When using different CMF, the white point of different standard illuminant will be slightly different:

White point from Wikipedia

Since we are dealing with game textures, colors are usually defined in sRGB with a D65 white point, so we need to find the white point of the D65 illuminant for each CMF tested in this post. Unfortunately, I couldn't find the D65 white point for the CIE 2006 CMF on the internet, so I calculated it myself (the calculation steps can be found in the Colab source code):

CIE 2006   2° : (0.313453, 0.330802) 

CIE 2006 10° : (0.313786, 0.331275)
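The white point computation itself is just three integrals followed by a normalization; a small sketch (array names are hypothetical; the real D65 SPD and CMF tables would be resampled to a common wavelength grid first):

```python
import numpy as np

def white_point(illuminant_spd, xbar, ybar, zbar):
    """Chromaticity (x, y) of an illuminant under a given CMF.
    All arrays share the same uniform wavelength grid, so a plain
    Riemann sum suffices (the bin width cancels in the ratio)."""
    X = np.sum(illuminant_spd * xbar)
    Y = np.sum(illuminant_spd * ybar)
    Z = np.sum(illuminant_spd * zbar)
    s = X + Y + Z
    return X / s, Y / s

# Sanity check with synthetic data: for identical curves the white point
# must land at (1/3, 1/3), like illuminant E under an idealized observer.
lam = np.arange(380.0, 781.0, 1.0)
spd = np.ones_like(lam)
cmf = np.exp(-0.5 * ((lam - 550.0) / 60.0) ** 2)
x, y = white_point(spd, cmf, cmf, cmf)
```

Feeding the D65 SPD and the CIE 2006 tables into this gives the chromaticities quoted above.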

But when I rendered some images with and without chromatic adaptation, the results look similar:

1964 CMF without chromatic adaptation
1964 CMF with chromatic adaptation

I searched the internet but couldn't find any information on whether the rendered image needs chromatic adaptation to account for the different white points of different CMFs... Maybe the difference is so small that applying chromatic adaptation makes no visible difference.


CIE 2006 CMF analytical approximation

The popular CIE 1931 and 1964 CMFs have simple analytical approximations, such as those in "Simple Analytic Approximations to the CIE XYZ Color Matching Functions" (which will be tested in this post). The newer CIE 2006 CMF lacks such an approximation, so I derived one using similar methods; the curve fitting process can be found in the Colab source code.
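For reference, the 1931 multi-lobe fit from that paper is a sum of piecewise Gaussians with different widths on each side of the peak. A sketch below; the coefficients are quoted from memory, so verify them against the paper before use:

```python
import numpy as np

def g(x, mu, s1, s2):
    """Piecewise Gaussian: width s1 left of the peak mu, s2 right of it."""
    s = np.where(x < mu, s1, s2)
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

def xyz_1931_fit(lam):
    """Multi-lobe analytic fit of the CIE 1931 2-degree CMF (from memory)."""
    x = (1.056 * g(lam, 599.8, 37.9, 31.0)
         + 0.362 * g(lam, 442.0, 16.0, 26.7)
         - 0.065 * g(lam, 501.1, 20.4, 26.2))
    y = 0.821 * g(lam, 568.8, 46.9, 40.5) + 0.286 * g(lam, 530.9, 16.3, 31.1)
    z = 1.217 * g(lam, 437.0, 11.8, 36.0) + 0.681 * g(lam, 459.0, 26.0, 13.8)
    return x, y, z
```

The CIE 2006 fits in this post were derived with the same lobe shape, only with coefficients re-fitted against the 2006 tables.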

2006 2° lobe approximation:

2006 2° lobe approximation shader source code
black lines: exact 2006 2° CMF
color lines: approximated 2006 2° CMF
 
2006 10° lobe approximation:

2006 10° lobe approximation shader source code
black lines: exact 2006 10° CMF
color lines: approximated 2006 10° CMF

Saturated lights comparison

With the above changes to the path tracer, we can render some images for comparison. The test scene contains several saturated lights using the sRGB colors (1,0,0), (1,1,0), (0,1,0), (0,1,1), (0,0,1), (1,0,1) (which are spectrally up-sampled). 10 different CMFs are used:

  • CIE 1931 2° 
  • CIE 1931 2° with Judd Vos adjustment
  • CIE 1931 2° single lobe analytic approximation
  • CIE 1931 2° multi lobe analytic approximation
  • CIE 1964 10° 
  • CIE 1964 10° single lobe analytic approximation
  • CIE 2006 2°
  • CIE 2006 2° lobe analytic approximation
  • CIE 2006 10°
  • CIE 2006 10° lobe analytic approximation

Here are the results:

CIE 1931 2°
CIE 1931 2° with Judd Vos adjustment
CIE 1931 2° single lobe analytic approximation
CIE 1931 2° multi lobe analytic approximation
CIE 1964 10°
CIE 1964 10° single lobe analytic approximation
CIE 2006 2°
CIE 2006 2° lobe analytic approximation
CIE 2006 10°
CIE 2006 10° lobe analytic approximation

From Wikipedia:

"The CIE 1931 CMF is known to underestimate the contribution of the shorter blue wavelengths."

So I was expecting some variation in the blue color when using different CMFs. But to my surprise, only the CIE 1931 CMF suffers from the "Blue Turns Purple" problem (Edit: as pointed out by troy_s on Twitter, the reference I provided was wrong; that link talks about a psychophysical effect, while the current issue is a mishandling of light data) which we encountered in previous posts (i.e. a saturated sRGB blue light renders as purple). After the previous blog post, I investigated this issue and suspected the ACES tone mapper caused the color shift (as the issue does not happen when rendering in the narrow sRGB gamut with a Reinhard tone mapper). I thought we might use the OKLab color space to get the hue value before tone mapping and then tone map only the lightness to keep the blue color. But when I tried this approach, the hue obtained before tone mapping was still purple, which suggests the tone mapper may not be causing the issue (or my method of getting the hue from an HDR value is somehow wrong...). Having no idea how to solve the issue, I randomly toggled some debug view modes and accidentally found that some of the purple colors are actually inside my AdobeRGB monitor's display gamut (but outside the sRGB gamut of another monitor...), so the problem is not only caused by out-of-gamut colors producing the purple shift...

The purple color on the wall is within displayable Adobe RGB gamut
Highlighting out of gamut pixel with cyan color

So I decided to investigate the problem for the spectral renderer first (and ignore the RGB renderer), which is why I tested different CMFs in this blog post. (As a side note, the blue-turns-purple behavior differs slightly between the RGB and spectral renderers: using a more saturated blue, e.g. (0, 0, 1) in Rec2020, can hide the issue in the RGB renderer, while the spectral renderer with the 1931 CMF still suffers from the problem with the same color, and the other CMFs do not.)

 

Color Checker comparison

Next, we compare a color checker lit by a white light source. Since my spectral renderer needs to maintain compatibility with RGB rendering, and I was too lazy to implement spectral materials using measured spectral reflectance, both the color checker and the light source are up-sampled from sRGB colors.

CIE 1931 2°
CIE 1931 2° with Judd Vos adjustment
CIE 1931 2° single lobe analytic approximation
CIE 1931 2° multi lobe analytic approximation
CIE 1964 10°
CIE 1964 10° single lobe analytic approximation
CIE 2006 2°
CIE 2006 2° lobe analytic approximation
CIE 2006 10°
CIE 2006 10° lobe analytic approximation

From the above results, the different CMFs produce similar looks except for the blue color.


Conclusion

In this post, we have compared different CMFs, provided an analytical approximation of the CIE 2006 CMF, and calculated the D65 white point for the CIE 2006 CMF (the math can be found in the Colab source code). All the CMFs produce similar colors except for blue: CMFs newer than the 1931 CMF can render saturated blue correctly without turning it purple. Maybe we should use the newer CMFs instead, especially when working with wide gamut colors. The company Konica Minolta also points out that the CIE 1931 CMF has issues with the wider color gamut of OLED displays (and suggests using the CIE 2015 CMF instead). But sadly, I cannot find the data for the CIE 2015 CMF, so it is not tested in this post.


Reference

[1] https://en.wikipedia.org/wiki/CIE_1931_color_space

[2] http://cvrl.ioo.ucl.ac.uk/

[3] http://jcgt.org/published/0002/02/01/paper.pdf

[4] https://en.wikipedia.org/wiki/ColorChecker

[5] https://en.wikipedia.org/wiki/Standard_illuminant

[6] https://www.rit.edu/cos/colorscience/rc_useful_data.php

[7] https://sensing.konicaminolta.asia/deficiencies-of-the-cie-1931-color-matching-functions/

Importance sampling visible wavelength

Introduction

It has been half a year since my last post. Due to the pandemic and the political environment in Hong Kong, I haven't had much time or mood to work on my hobby path tracer... Recently I tried to get back to this hobby, and it seemed better to start with some small tasks first. One thing I was not satisfied with in the previous spectral path tracing post was using 3 different cosh curves (with peaks at the XYZ color matching functions) to importance sample the visible wavelength, instead of a single one. So I decided to revise it and find another PDF from which to draw random wavelength samples. A demo with the updated importance-sampled wavelengths can be downloaded here, and the Python code used for generating the PDF can be viewed here (inspired by Bart Wronski's tweet to use Colab).

First Failed Try

The basic idea is to create a function with peak values at the same locations as the color matching functions. I decided to use the sum of the XYZ color matching curves as the PDF.

Black line is the sum of the XYZ curves

To simplify the calculation, analytical approximations of the XYZ curves can be used. A common approximation can be found here, but it seems hard to integrate (due to the squared λ term) to create the CDF. So a rational function is used instead:

The X curve is approximated with 1 peak, dropping the small secondary peak at around 450nm; because we want the "sum of XYZ" curve, the missing peak can be compensated by scaling the Z curve. The approximated curves look like:

Light colored curves are the approximated function
Grey color curve is the approximated PDF
The approximated PDF is roughly similar to the sum of the XYZ color matching curves. But I had made a mistake: although the rational function can be integrated to create the CDF, I don't know how to compute the inverse of that CDF (which is needed by the inversion method to draw random samples from uniform random numbers). So I needed to find another way...

Second Try

Although I don't know how to find the inverse of the approximated CDF from the previous section, out of curiosity I still wanted to see what it looks like, so I plotted the graph:

Black line is the original CDF
Grey line is the approximated CDF

It looks like several smoothstep functions added together: 1 base smoothstep curve over [380, 780] with 2 smaller smoothstep curves (over roughly [400, 500] and [500, 650]) added on top. Maybe I can approximate this CDF with some kind of polynomial function. After some trial and error, an approximated CDF was found (details of the CDF and PDF can be found in the Python code):

The above function divides the visible spectrum into 4 intervals to form a piecewise function. Since smoothstep is a cubic function, the sum of smoothstep functions is still cubic, which can be inverted and differentiated. The approximated "smoothstep CDF/PDF" curves look like:

Blue line is the approximated "smoothstep CDF"
Black line is the original CDF
Blue line is the approximated "smoothstep PDF"
Black line is the original PDF
Although the "smoothstep CDF" looks smooth, its PDF is not (it is not C1 continuous at around 500nm). But it has 2 peak values at around 450nm and 600nm, so let's render some images to see how it behaves.
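Since each piece of such a CDF is a cubic, drawing a sample means solving a cubic on the right interval. A minimal single-piece sketch using Newton iterations (a stand-in for the full 4-interval fit in the Colab code; a closed-form cubic solve works too):

```python
def smoothstep_cdf(lam, lo=380.0, hi=780.0):
    """Classic smoothstep as a cubic CDF over [lo, hi]."""
    t = (lam - lo) / (hi - lo)
    return t * t * (3.0 - 2.0 * t)

def invert_cdf(u, lo=380.0, hi=780.0, iters=8):
    """Newton's method on F(λ) - u = 0; F' is the PDF and is > 0 inside (lo, hi)."""
    lam = lo + u * (hi - lo)                  # initial guess from a linear CDF
    for _ in range(iters):
        t = (lam - lo) / (hi - lo)
        f = t * t * (3.0 - 2.0 * t) - u
        df = 6.0 * t * (1.0 - t) / (hi - lo)  # derivative of smoothstep
        if df > 0.0:
            lam -= f / df
    return min(max(lam, lo), hi)
```

The real sampler just picks the interval whose CDF range contains u first, then inverts that interval's cubic the same way.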

Result

The same Sponza scene is rendered with 3 different wavelength sampling functions: uniform, cosh and smoothstep (with stratification of the random numbers disabled; the color noise is more noticeable when zoomed in):

uniform weighted sampling
cosh weighted sampling
smoothstep weighted sampling

Both the cosh and smoothstep wavelength sampling methods show less color noise than uniform sampling, with the smoothstep PDF slightly better than the cosh function. It seems the C1 discontinuity of the PDF does not affect rendering very much. A demo can be downloaded here to see how it looks in real time.

Conclusion

This post described an approximated function for importance sampling the visible wavelength based on the sum of the color matching functions, which reduces the color noise slightly. The approximated CDF is composed of piecewise cubic functions. The Python code used for generating the polynomial coefficients can be found here (along with some unused testing code, e.g. I tried using 1 linear base function with 2 smaller smoothstep functions added on top, but the result was not much better...). Although the approximated PDF is not C1 continuous, it does not affect the rendering very much. If someone knows more about how the C1 continuity of the PDF affects rendering, please leave a comment below. Thank you.

References

[1] https://en.wikipedia.org/wiki/CIE_1931_color_space#Color_matching_functions

[2] Simple Analytic Approximations to the CIE XYZ Color Matching Functions

[3] https://stackoverflow.com/questions/13328676/c-solving-cubic-equations

Spectral Path Tracer

Introduction
Before starting this post, I would like to talk a bit about my homeland, Hong Kong. The Chinese government enacted a new National Security Law, bypassing our local legislative council. We could only read the full text of this law after it was enacted (with the official English version published 3 days after that). This law destroys our legal system completely: the government can appoint judges it likes (Article 44), the jury can be removed from a trial (Article 46), and trials can be held without media and public presence (Article 41). This law is so vague that the government can prosecute anyone it doesn't like; people have been arrested for possessing anti-government stickers. We don't even have the right to hate the government (Article 29.5). If I promote "Boycott Chinese Products", I may have broken this law already... Also, the personnel of the security office do not need to obey HK law (Article 60), and this law even applies to foreigners outside HK (Article 38). Our voting rights are also deteriorating; more pro-democracy candidates can be disqualified by this law in the upcoming election (Article 35)... So, if you are living in a democratic country, please cast a vote if you can.

Back to the topic of the spectral path tracer. Path tracing in the spectral domain has been added to my toy renderer (alongside tracing in sRGB / ACEScg space). A spectral path tracer traces rays with actual wavelengths of light instead of RGB channels. The result is physically correct, and some effects can only be calculated by spectral rendering (e.g. dispersion, iridescence). Although my toy spectral path tracer does not support such materials, I would like to investigate how spectral rendering affects the bounced light color compared to images rendered in RGB color spaces. The demo can be downloaded here.
Spectral rendered image

Render Loop Modification
Referring to my previous DXR Path Tracer post, only a few modifications are needed to support spectral path tracing:

RGB path tracer render loop in previous post
When starting to trace a new ray, a wavelength is randomly picked. My first implementation uses the hero wavelength method with 3 wavelength samples per ray. The number 3 is chosen because it is convenient for replacing existing code where rays are traced with RGB channels. The "Light Path Texture" from the previous post is modified to accumulate the energy at those 3 wavelengths during ray traversal. Finally, when the ray is terminated, the resulting energy in the "Light Path Texture" is integrated with the CIE XYZ color matching functions and stored in the "Lighting Sample Texture" in XYZ space, which is later converted to the display device space (e.g. sRGB/AdobeRGB/Rec2020) as described in the previous post.
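The wavelength selection can be sketched like this, assuming the usual hero-wavelength rotation (one uniform pick plus equally spaced rotations; the helper names are mine):

```python
import random

LAM_MIN, LAM_MAX = 380.0, 780.0
RANGE = LAM_MAX - LAM_MIN
N = 3  # samples per ray, chosen to map onto the 3 RGB channels of the old code path

def hero_wavelengths(u=None):
    """Pick a hero wavelength uniformly, then rotate N-1 more samples so
    all N are evenly separated across the visible range (wrapping around)."""
    if u is None:
        u = random.random()
    hero = LAM_MIN + u * RANGE
    return [LAM_MIN + (hero - LAM_MIN + i * RANGE / N) % RANGE for i in range(N)]
```

All three wavelengths share one path, so a single traced ray fills all three "channels" of the accumulation texture.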

Spectral Up Sampling Texture
One of the problems in spectral rendering is converting textures from color to a spectral power distribution (SPD); this process is called spectral up-sampling. Luckily, there are many papers about it. The technique from "Spectral Primary Decomposition for Rendering with sRGB Reflectance" is used in the demo to up-sample textures; I chose this method for its simplicity. It reconstructs the spectrum as a linear combination of the texture color with 3 pre-computed spectral basis functions:
One thing that bothered me is that the meaning of the texture color is a bit different from what the spectral up-sampling method assumes. In PBR rendering, the texture color refers to albedo (i.e. the ratio of radiosity to the irradiance received by a surface), which is independent of the CIE XYZ observer, while the up-sampling method minimizes a least squares problem for the texture color viewed under illuminant D65 with the CIE standard observer. Maybe the RGB albedo values are computed from an SPD with the XYZ observer functions? I have no idea and may investigate this in the future.
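The reconstruction step above is just a per-wavelength linear combination; a sketch with made-up stand-in basis curves (the real basis tables ship with the paper's data):

```python
import numpy as np

# Hypothetical pre-computed basis functions sampled on a wavelength grid;
# these shapes are stand-ins only, NOT the paper's actual tables.
lam = np.arange(380.0, 781.0, 5.0)
basis_r = np.clip((lam - 560.0) / 120.0, 0.0, 1.0)
basis_g = np.exp(-0.5 * ((lam - 545.0) / 60.0) ** 2)
basis_b = np.clip((480.0 - lam) / 100.0 + 1.0, 0.0, 1.0)

def upsample_reflectance(rgb_linear):
    """Reconstruct a reflectance spectrum as a linear combination of the
    three basis functions weighted by the (linear, not gamma) sRGB value."""
    r, g, b = rgb_linear
    return r * basis_r + g * basis_g + b * basis_b
```

Because the combination is linear, this can run per-hit in a shader with the basis functions stored in a small lookup texture.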

Spectral Up Sampling Light Color and Intensity
Besides spectrally up-sampling the textures, lights also need to be up-sampled. Because light colors can be specified in wide gamut colors in the demo, the up-sampling method from the above section is not enough. The method from "A Low-Dimensional Function Space for Efficient Spectral Upsampling" is used to up-sample the light color. It computes 3 coefficients from the light color (i.e. from RGB to c0, c1, c2), and then the spectral power distribution S(λ) can be computed as below:
Since a light is specified by color and intensity, after calculating the SPD coefficients we need to scale the SPD curve so that integrating the scaled SPD against the CIE standard observer ȳ(λ) curve equals the specified luminous intensity:
The scaling factor K is calculated numerically using the trapezoidal rule with a 1nm wavelength interval, and the ȳ(λ) curve is approximated with the multi-lobe approximation from "Simple Analytic Approximations to the CIE XYZ Color Matching Functions". So the light spectral power distribution is specified by 4 floating point numbers: 3 coefficients + 1 intensity scale.
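The scaling step can be sketched as below, using the sigmoid-of-quadratic SPD shape from the up-sampling paper and the trapezoidal rule (a stand-in ȳ(λ) is used here instead of the real curve, and the exact unit conventions in the demo may differ):

```python
import numpy as np

def spd_sigmoid(lam, c0, c1, c2):
    """Bounded sigmoid spectrum used by the up-sampling method:
    S(λ) = 1/2 + x / (2*sqrt(x^2 + 1)) with x = c0*λ^2 + c1*λ + c2."""
    x = c0 * lam * lam + c1 * lam + c2
    return 0.5 + x / (2.0 * np.sqrt(x * x + 1.0))

def intensity_scale(target, coeffs, lam, ybar):
    """K such that ∫ K * S(λ) * ybar(λ) dλ (trapezoidal rule, 1nm bins)
    equals the target intensity; factors like 683 lm/W can be folded
    into `target` depending on the unit convention."""
    y = spd_sigmoid(lam, *coeffs) * ybar
    integral = float(np.sum((y[1:] + y[:-1]) * (lam[1:] - lam[:-1])) * 0.5)
    return target / integral

lam = np.arange(380.0, 781.0, 1.0)
ybar = np.exp(-0.5 * ((lam - 555.0) / 50.0) ** 2)   # stand-in for the real ybar
K = intensity_scale(100.0, (0.0, 0.0, 0.0), lam, ybar)  # flat S(λ) = 0.5
```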

In the demo, the original light intensity of the RGB path tracer is modified so that it better matches the intensity of the spectrally rendered image. Before the modification, RGB lighting simply multiplied the light color by the light intensity. Now this value is also divided by the luminance of the color (though this loses some control in the color picker UI...).
RGB light color multiplied by intensity only
RGB light color multiplied by intensity divided by color luminance
Spectral light with scaled SPD curve


In addition to the luminance scale, we also need to chromatically adapt the light color from illuminant E to illuminant D65/D60 before computing the 3 SPD coefficients, because the coefficients are fitted using illuminant E. Without this step, the image has a reddish appearance.
Computing light SPD coefficients without chromatic adaption
Computing light SPD coefficients with chromatic adaption
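The adaptation can be sketched with a standard Bradford-style von Kries transform between two white points (matrix values quoted from memory; verify before use):

```python
import numpy as np

# Bradford cone response matrix (quoted from memory; verify before use).
M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

def chromatic_adapt(xyz, white_src, white_dst):
    """Von Kries adaptation in Bradford cone space: scale cone responses by
    the ratio of destination to source white, mapping white_src -> white_dst."""
    lms_src = M_BRADFORD @ np.asarray(white_src)
    lms_dst = M_BRADFORD @ np.asarray(white_dst)
    D = np.diag(lms_dst / lms_src)
    M = np.linalg.inv(M_BRADFORD) @ D @ M_BRADFORD
    return M @ np.asarray(xyz)

WHITE_E   = np.array([1.0, 1.0, 1.0])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])
```

By construction the transform maps the source white exactly onto the destination white, which is an easy sanity check.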

Importance Sampling Wavelength
As mentioned at the start of the post, the wavelengths are sampled using the hero wavelength method, which randomly picks 1 wavelength within the visible spectrum (i.e. 380-780nm in the demo) and then derives 2 additional samples evenly separated within the visible range. With this approach, there is a high color variance. Sometimes the color converges at 100 samples per pixel, but more often it requires > 1000 samples per pixel. It really depends on luck...
3 different spectral rendered images with hero wavelength sampling, using 100 samples per pixel.
The color looks a bit different between all 3 images, with a noticeable red tint in the middle image.

To make the render converge faster, consider the CIE XYZ standard observer curves below: samples with wavelengths >650nm or <420nm have little influence on the output image. So I tried to place more samples around the center of the visible wavelength range.
CIE 1931 Color Matching Function from Wikipedia
My first (failed) attempt was to use a cos-weighted PDF curve like this to randomly pick 3 wavelengths for each ray:

A normalization constant is computed so that the PDF integrates to one, and then the CDF can be computed. To pick a random sample from this PDF, the inversion method can be used. To simplify the calculation, the PDF is centered at 0 with width 200 instead of spanning the [380, 780] range; after sampling λ from the inverse CDF, λ is shifted by 580 to make it lie in [380, 780]. To find the inverse of the CDF:
Compute inverse CDF of the cos weighted PDF (with w=200)

Unfortunately, this cannot be inverted analytically, as mentioned here. So Newton's method (with 15 iterations) is used, as suggested in this post, with the following result:

3 different spectral rendered images with the cos-weighted PDF, using 100 samples per pixel.
The color still looks a bit different between all 3 images...

Sadly, the result is not improved; it shows even more color variance than the hero wavelength method...
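The Newton inversion described above can be sketched as follows; I assume a cos² bump for the "cos weighted" PDF, which produces exactly the kind of x + sin(x) CDF that has no closed-form inverse:

```python
import math

W = 200.0  # half-width; the PDF is centered at 0 then shifted by 580 into [380, 780]

def pdf(x):
    """Assumed cos^2 bump on [-W, W], normalized to integrate to 1."""
    return math.cos(math.pi * x / (2.0 * W)) ** 2 / W if abs(x) <= W else 0.0

def cdf(x):
    # ∫ pdf = (x + W)/(2W) + sin(πx/W)/(2π); transcendental in x, no closed inverse
    return (x + W) / (2.0 * W) + math.sin(math.pi * x / W) / (2.0 * math.pi)

def sample(u, iters=15):
    """Invert cdf(x) = u with Newton's method, then shift into [380, 780] nm."""
    x = (2.0 * u - 1.0) * W          # initial guess from the linear part of the CDF
    for _ in range(iters):
        d = pdf(x)                   # the PDF is the CDF's derivative
        if d <= 1e-9:
            break
        x -= (cdf(x) - u) / d
        x = max(-W, min(W, x))
    return x + 580.0
```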

So I googled for a while and found another paper: "An Improved Technique for Full Spectral Rendering". It suggests using a cosh-based function for the PDF, whose CDF can be inverted analytically:
The paper only suggests using that PDF curve with center B = 538nm and A = 0.0072. Since this shape is similar to my cos-weighted PDF, the color convergence rate is similar (so I skipped capturing screenshots for this case)... But what if we use this curve with the centers lying around the peaks of the XYZ standard observer curves? To find out, I computed the normalization constant within the range [380, 780]nm, and then the CDF and its inverse:
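That analytic inversion can be sketched as follows, normalizing the 1/cosh² lobe over the visible range (function names are mine; the exact normalization in the paper may be written differently):

```python
import math

LAM_MIN, LAM_MAX = 380.0, 780.0

def pdf_cosh(lam, A, B):
    """p(λ) ∝ 1/cosh²(A(λ-B)), normalized over [LAM_MIN, LAM_MAX]."""
    t0 = math.tanh(A * (LAM_MIN - B))
    t1 = math.tanh(A * (LAM_MAX - B))
    return A / ((t1 - t0) * math.cosh(A * (lam - B)) ** 2)

def sample_cosh(u, A, B):
    """Inverse-CDF sample: the CDF is a shifted, rescaled tanh, so it
    inverts analytically with atanh."""
    t0 = math.tanh(A * (LAM_MIN - B))
    t1 = math.tanh(A * (LAM_MAX - B))
    return B + math.atanh(t0 + u * (t1 - t0)) / A
```

With one such PDF per observer-curve peak, each of the 3 wavelengths per ray can be drawn from its own lobe.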



By using 3 different PDFs to sample the wavelengths (A0 = 0.0078, B0 = 600, A1 = 0.0072, B1 = 535, A2 = 0.0062, B2 = 445 are used in the demo), the image converges much faster than with hero wavelength sampling. About 100 SPP is often enough to get a color similar to the converged image.
Rendered with 3 different cosh-curves PDF using 100SPP
Converged spectral rendered image.

Another problem with the color variance of hero wavelength sampling is camera movement. Since my demo is an interactive path tracer, when the camera moves the path tracer re-generates a wavelength sample, which changes the color greatly every frame:
Camera movement with hero wavelength sampling

To give a better color preview during the first few samples of the path tracing, the random numbers are stratified into 9 regions so that the first ray picks 3 random wavelengths lying around 600nm, 535nm and 445nm when substituted into the inverse CDFs of the cosh-weighted curves, which gives some red, green and blue colors.

Code to generate stratified random numbers P0, P1, P2 within [0, 1] range.
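The stratification can be sketched like this (the 3×3 region layout and the visiting order are my guesses at the scheme described above; the demo's code may differ):

```python
import random

STRATA = 3  # each of P0, P1, P2 gets 3 strata -> 9 regions in total

# Visit the middle stratum first: each inverse CDF maps the middle of [0, 1]
# near its cosh peak (600, 535 and 445 nm), giving an R/G/B-ish first sample.
ORDER = (1, 0, 2)  # illustrative ordering only

def stratified_p(sample_index):
    """Return (P0, P1, P2), each jittered within the stratum selected by
    sample_index, so consecutive samples sweep the whole [0, 1] range."""
    stratum = ORDER[sample_index % STRATA]
    return tuple((stratum + random.random()) / STRATA for _ in range(3))
```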

With these stratified random numbers, color variation is reduced during camera movement:

Camera movement with stratified random numbers.

Conclusion
In this post, I have described how a basic spectral path tracer can be implemented. The spectrally rendered image is a bit different from the RGB rendered image (the RGB image is a bit more reddish compared to the spectrally traced one). This may be due to the spectral up-sampling method used, or to not using a D65 light source. However, the bounced light intensity is not much different between tracing in spectral and ACEScg space. In the future I would like to try different light sources such as illuminants E/D/F to see how they affect the color. I would also like a technique for spectrally up-sampling albedo with a wide color gamut instead of sRGB only.

References
[1] https://cgg.mff.cuni.cz/~wilkie/Website/EGSR_14_files/WNDWH14HWSS.pdf
[2] https://en.wikipedia.org/wiki/CIE_1931_color_space
[3] https://graphics.geometrian.com/research/spectral-primaries.html
[4] https://rgl.s3.eu-central-1.amazonaws.com/media/papers/Jakob2019Spectral_3.pdf
[5] http://jcgt.org/published/0002/02/01/paper.pdf
[6] https://www.researchgate.net/publication/228938842_An_Improved_Technique_for_Full_Spectral_Rendering