
Spherical Harmonic Lighting

Introduction
Spherical Harmonics (SH) are a set of orthogonal basis functions defined in spherical coordinates; in their general form they are complex-valued. In this post, we use the following conversion between spherical and cartesian coordinates:
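One common convention in the SH lighting literature is:

```latex
x = \sin\theta \cos\phi, \qquad
y = \sin\theta \sin\phi, \qquad
z = \cos\theta
```

with θ ∈ [0, π] measured from the z-axis and φ ∈ [0, 2π) the azimuthal angle.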
Since we are dealing with real-valued functions, we only need the real spherical harmonics, which take the form:
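In one common convention (the one in Green's Spherical Harmonic Lighting: The Gritty Details), the real basis is written in terms of the associated Legendre polynomials P_l^m:

```latex
y_l^m(\theta,\phi) =
\begin{cases}
\sqrt{2}\, K_l^m \cos(m\phi)\, P_l^m(\cos\theta) & m > 0 \\
\sqrt{2}\, K_l^m \sin(-m\phi)\, P_l^{-m}(\cos\theta) & m < 0 \\
K_l^0\, P_l^0(\cos\theta) & m = 0
\end{cases}
\qquad
K_l^m = \sqrt{\frac{2l+1}{4\pi}\,\frac{(l-|m|)!}{(l+|m|)!}}
```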
The index l of the SH function is called the band index; it is an integer l >= 0, and the index m is an integer in the range -l <= m <= l, so there are (2l + 1) functions in a given band. You may refer to Appendix A2 of Stupid Spherical Harmonics (SH) Tricks to look up the evaluated values of the SH basis functions for a given pair (l, m).

The linear combination of SH basis functions with scalar coefficients can be used to approximate a function as below:
With an approximation up to band l = n - 1, n×n coefficients are needed.
So the remaining problem in approximating a function is to compute the coefficients c, which can be solved either analytically or numerically by Monte Carlo integration.
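In symbols, the band-limited approximation and its projection coefficients are:

```latex
\tilde{f}(s) = \sum_{l=0}^{n-1} \sum_{m=-l}^{l} c_l^m\, y_l^m(s),
\qquad
c_l^m = \int_S f(s)\, y_l^m(s)\, ds
```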

Monte Carlo Integration
To compute a definite integral numerically, we can consider the Monte Carlo estimator:
The expected value of this estimator equals the definite integral:
So when the number of samples, N, is large enough, by the law of large numbers, the estimator F converges to the definite integral. Therefore, we can calculate the coefficients of the SH basis functions using the Monte Carlo estimator.
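As an illustration, here is a minimal sketch of Monte Carlo SH projection (the function names are mine, not from the demo; it assumes uniform sphere sampling, so the pdf is 1/(4π)):

```python
import math
import random

def uniform_sphere_sample(rng):
    # Uniformly distributed direction on the unit sphere.
    z = 1.0 - 2.0 * rng.random()              # cos(theta), uniform in [-1, 1]
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def mc_project(f, basis, n_samples, seed=0):
    # Monte Carlo estimator of c = ∫ f(ω) y(ω) dω with uniform sphere
    # sampling: p(ω) = 1/(4π), so each sample is weighted by 4π.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        w = uniform_sphere_sample(rng)
        total += f(w) * basis(w)
    return total * 4.0 * math.pi / n_samples

# Project the clamped cosine max(z, 0) onto y_0^0 = 1 / (2√π);
# the analytic coefficient is √π / 2 ≈ 0.886.
y00 = lambda w: 0.5 / math.sqrt(math.pi)
c00 = mc_project(lambda w: max(w[2], 0.0), y00, 100000)
```

The estimate converges to the analytic coefficient as the sample count grows.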

Properties of Spherical Harmonics Function
There are 2 important properties of SH functions:
First, SH projection is rotationally invariant:
where the rotated function g is still an SH-projected function whose coefficients can be computed from the coefficients of f. For details on rotating a general SH function, you can refer to the section 'Rotating Spherical Harmonics' in Spherical Harmonic Lighting: The Gritty Details.

Second, when integrating the product of 2 SH-projected functions over the spherical domain, the result equals the dot product of their SH coefficients (because the SH basis functions are orthogonal):
This is a nice property: we can evaluate the integral over the spherical domain with a simple dot product of the SH coefficients.
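In symbols, for SH projections with coefficients c_i and d_i:

```latex
\int_S \tilde{f}(s)\, \tilde{g}(s)\, ds = \sum_{i=0}^{n^2-1} c_i\, d_i
```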

Lighting with SH functions
When performing lighting calculations, we need to solve the rendering equation:
For shading a Lambertian diffuse surface without shadows, we can simplify the rendering equation to:
To solve this integral, we can project the functions L(x, ω) and max(N.ω, 0) into SH coefficients using Monte Carlo integration; then, by property 2 described above, the integral equals the dot product of the SH coefficients of the 2 projected functions.
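A small numeric sanity check of this idea (the toy "light" function and all helper names are mine, not from the demo; only bands 0 and 1 are used here, in one common real-SH sign convention):

```python
import math
import random

SQRT3_4PI = math.sqrt(3.0 / (4.0 * math.pi))
# Real SH basis for bands l = 0 and 1.
BASIS = [
    lambda w: 0.5 / math.sqrt(math.pi),   # y_0^0
    lambda w: SQRT3_4PI * w[1],           # y_1^-1
    lambda w: SQRT3_4PI * w[2],           # y_1^0
    lambda w: SQRT3_4PI * w[0],           # y_1^1
]

def sphere_samples(n, seed=0):
    # Uniform directions on the unit sphere.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        z = 1.0 - 2.0 * rng.random()
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        samples.append((r * math.cos(phi), r * math.sin(phi), z))
    return samples

def project(f, samples):
    # Monte Carlo projection onto each basis function (pdf = 1/(4π)).
    scale = 4.0 * math.pi / len(samples)
    return [scale * sum(f(w) * y(w) for w in samples) for y in BASIS]

samples = sphere_samples(100000)
light  = lambda w: 1.0 + 0.5 * w[2]     # toy "environment light"
cosine = lambda w: max(w[2], 0.0)       # clamped cosine lobe max(N.ω, 0)

c_light, c_cos = project(light, samples), project(cosine, samples)
sh_dot = sum(a * b for a, b in zip(c_light, c_cos))
direct = 4.0 * math.pi / len(samples) * sum(light(w) * cosine(w) for w in samples)
# Analytically the integral is π + π/3; the two estimates agree closely.
```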

Zonal Harmonics
If an SH-projected function is rotationally symmetric about a fixed axis, it is called a Zonal Harmonic (ZH). If this axis is the z-axis, the ZH function depends only on θ, so there is only one non-zero coefficient in each band, the one with m = 0. Rotation of a ZH function can then be greatly simplified: when the ZH function is rotated to a new axis d, the coefficients of the rotated SH function equal:
which is faster than a general SH rotation. The ZH function is well suited to approximating max(N.ω, 0) in the diffuse rendering equation above, since the SH projection of L(x, ω) is usually done in world space, while the shaded surface can be re-oriented to the same space to perform the lighting calculation.
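In one common formulation (e.g. Sloan's notes), with z_l the single non-zero (m = 0) coefficient of band l:

```latex
c_l^m = \sqrt{\frac{4\pi}{2l+1}}\; z_l\; y_l^m(d)
```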

WebGL Demo
Below is a WebGL demo (which needs a WebGL-enabled browser such as Chrome) using the cube map on the right as the light source, projected into SH coefficients using Monte Carlo integration.

Both the white and the blue colors on the model are light reflected from the sun and the blue sky, computed using the SH coefficients generated from the cube map and the ZH coefficients projected from max(N.ω, 0), rotated to world space according to the surface normal. The approximation is done up to band l = 2. You can drag in the viewport to rotate the camera.
The source code of the WebGL demo can be downloaded here.

Conclusion
SH functions can be used to approximate the rendering equation with only a few coefficients and a simple dot product to evaluate lighting at run time. The main disadvantage is that SH can only approximate low-frequency functions well, since a large number of bands is needed to represent high-frequency details.

Reference
[1] Spherical Harmonic Lighting: The Gritty Details: http://www.research.scea.com/gdc2003/spherical-harmonic-lighting.pdf
[2] Stupid Spherical Harmonics (SH) Tricks: http://www.ppsloan.org/publications/StupidSH36.pdf
[3] Physically Based Rendering: http://www.amazon.com/gp/product/0123750792/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=012553180X&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=09AG8FQQWKJHC2AEFPD1
[4] Sky box texture downloaded from: http://www.codemonsters.de/home/content.php?show=cubemaps

SSAO using Line Integrals


Hi everyone, this is my first post on #AltDevBlogADay. Let me introduce myself first: I am Simon Yeung, currently working as a game programmer. I like graphics programming and sometimes write iPhone apps.
This time, I would like to talk about the SSAO implemented in my little demo program. I wrote this demo because I had spent most of my time using OpenGL and knew little about DirectX, so I decided to learn DX by writing it; as a result, it is not well optimized.
SSAO, short for Screen Space Ambient Occlusion, is a technique for approximating the indirect shadows cast by surrounding scene geometry, computed in screen space by sampling from the depth buffer.
The SSAO is implemented using the line integrals from "Rendering techniques in Toy Story 3"[1]. Here are my results:
With SSAO

without SSAO

SSAO texture

Their method calculates the volume occluded by other objects inside a sphere at each fragment by sampling from the depth buffer.
From Slide 22 of the paper
The volume of the sphere is found using the equation:
From Slide 51 of the paper
And they use a Voronoi diagram to associate the ratio of volume occupied by each sample point in their predefined sampling pattern.
In my implementation, however, I didn't use the Voronoi diagram; instead, I tried to calculate the volume occupied by each depth sample using that equation in the pixel shader. However, due to the perspective projection, the equation no longer holds, as the ray does not form a right-angled triangle like the one in the figure above, resulting in the artifact below (on the wall on the right side):
The artifact on the wall on the right side
So I tried to solve the problem by using ray-sphere intersection to calculate more accurate line integrals.
For example, when calculating the occlusion volume for the black cross in the above diagram (taking 2 depth samples for easy explanation), I need to compute the lengths L1 and L2 by solving the ray-sphere intersection, while the lengths O1 and O2 can be computed by sampling from the depth buffer. The volume of the sphere can then be approximated by L1 + L2, the occluded volume by O1 + O2, and the AO value is (O1 + O2) / (L1 + L2). (I also added a distance attenuation factor to O1 and O2 when the depth difference is too large, so that the tank does not occlude the wall in my demo program.)
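A sketch of this ratio in code (a simplification of what the pixel shader would do; all names are hypothetical, and the distance attenuation is omitted):

```python
import math

def chord_length(radius, offset):
    # Full length of the chord a view ray cuts through the sphere, where
    # `offset` is the ray's perpendicular distance from the sphere centre
    # (this comes from solving the ray-sphere intersection).
    d2 = radius * radius - offset * offset
    return 2.0 * math.sqrt(d2) if d2 > 0.0 else 0.0

def ao_from_samples(radius, offsets, sample_depths, center_depth):
    # L_i: full chord length inside the sphere (ray-sphere intersection).
    # O_i: part of that chord behind the sampled depth, clamped to [0, L_i].
    # AO = sum(O_i) / sum(L_i).
    total_len = occluded = 0.0
    for offset, depth in zip(offsets, sample_depths):
        L = chord_length(radius, offset)
        if L <= 0.0:
            continue
        far = center_depth + 0.5 * L          # chord exit depth
        O = min(max(far - depth, 0.0), L)     # occluded portion of the chord
        total_len += L
        occluded += O
    return occluded / total_len if total_len > 0.0 else 0.0
```

With no occluder the ratio is 0; with every sample fully in front of the chord it is 1.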
This eliminates the artifacts:
Solving ray-sphere intersection to eliminate the artifacts
The demo program uses 8 depth samples for each fragment. To fake a higher sample count, I also tried rotating the sample points as suggested in the paper, which gives the AO a softer look:
Rotating the sample points
Then, a bilateral blur is applied to smooth out the noise. Although a bilateral blur is not separable, it is faster to divide it into 2 passes (i.e. 1 horizontal and 1 vertical, just like a Gaussian blur), with 5 samples for each pass, which gives a softer result:
After bilateral blur
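One 1D pass of such a blur can be sketched as follows (run once horizontally and once vertically; the function name and parameter values are mine, not taken from the demo): the weight is a spatial Gaussian times a depth-difference Gaussian, so the blur does not bleed across depth discontinuities.

```python
import math

def bilateral_pass_1d(values, depths, radius=2, sigma_s=1.5, sigma_d=0.1):
    # One separable pass of a bilateral blur: 2*radius + 1 taps per pixel,
    # each weighted by spatial distance and by depth difference.
    out = []
    for i in range(len(values)):
        wsum = vsum = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(values) - 1)   # clamp at the borders
            w_s = math.exp(-(k * k) / (2.0 * sigma_s ** 2))
            dd = depths[j] - depths[i]
            w_d = math.exp(-(dd * dd) / (2.0 * sigma_d ** 2))
            w = w_s * w_d
            wsum += w
            vsum += w * values[j]
        out.append(vsum / wsum)
    return out
```

On flat depth this behaves like a plain Gaussian; across a depth edge the AO values on either side stay separated.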
Finally, the SSAO texture is blended with the scene:
Applying SSAO to the scene
In conclusion, I finished the SSAO but it is not optimized. There are several places that can be improved: when calculating the line integrals, I used several branches in the pixel shader, which slows it down a lot. Also, I rotated the sample points by calculating a rotation angle from the fragment position in the pixel shader, which could instead use a pre-computed rotation-angle texture as the paper suggests. My main purpose for this demo was to get familiar with DirectX, so I left these optimizations as future enhancements.

References:
[1]: Rendering techniques in Toy Story 3, http://advances.realtimerendering.com/s2010/index.html
[2]: The brick texture is obtained from Crytek's Sponza Model: http://crytek.com/cryengine/cryengine3/downloads
[3]: The tank model is obtained from an XNA demo project: http://create.msdn.com/en-US/education/catalog/?contenttype=0&devarea=0&platform=21&sort=1

Light Pre Pass Renderer

This is my first attempt to write a DirectX 9 program (I usually write programs in OpenGL). I decided to implement a Light Pre-Pass renderer[1]. My purpose with this program is to give DX9 a try and to try out different techniques, so the program is not optimized.
Here is the result:

The G-buffer is rendered in the first pass, with buffer layout:
1. Depth (32bits) + (32bits unused)
2. View Space Normal (16bits * 3) + glossiness(16bits)


Then, in the second pass, the light buffer is calculated for all lights by sampling from the depth buffer and the normal buffer using Blinn-Phong shading, with the following layout:

R channel : ∑ (L.N) Ir
G channel : ∑ (L.N) Ig
B channel : ∑ (L.N) Ib
A channel : ∑ (N.H)^glossiness
where L is the light vector,
      N is the normal from the G-buffer,
      H is the half vector,
      I is the light color,
      and glossiness is the specular power from the G-buffer.
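As an illustration of what this accumulation computes for one pixel, here is a minimal sketch in plain Python rather than the demo's actual pixel shader (all function names are mine):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def light_buffer_pixel(normal, view_dir, lights, glossiness):
    # Accumulate the four light-buffer channels for one pixel:
    # RGB = Σ (L.N) * light colour, A = Σ (N.H)^glossiness.
    r = g = b = a = 0.0
    for light_dir, (ir, ig, ib) in lights:
        L = normalize(light_dir)
        ndotl = max(dot(normal, L), 0.0)
        r += ndotl * ir
        g += ndotl * ig
        b += ndotl * ib
        H = normalize(tuple(l + v for l, v in zip(L, view_dir)))  # half vector
        a += max(dot(normal, H), 0.0) ** glossiness
    return (r, g, b, a)
```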

Finally, the third pass renders the geometry to compose the albedo color with the light buffer.

This screenshot shows the depth buffer, the normal buffer, the light buffer and the SSAO, rendered with 25 point lights. Currently, no ambient color is used.
In the next post, I will talk about my SSAO implementation.

[1]: http://diaryofagraphicsprogrammer.blogspot.com/2008/03/light-pre-pass-renderer.html
[2]: The brick texture is obtained from Crytek's Sponza Model: http://crytek.com/cryengine/cryengine3/downloads
[3]: The tank model is obtained from an XNA demo project: http://create.msdn.com/en-US/education/catalog/?contenttype=0&devarea=0&platform=21&sort=1