After writing an AO demo in the last post, I started to write a progressive path tracer, but my progress was very slow due to the social unrest in the past few months (here are some related news about what has happened). In the past weeks, the situation has calmed down a bit, so I continued working on my path tracer and started adding specular lighting. While implementing Eric Heitz's "Sampling the GGX Distribution of Visible Normals" technique, I was confused about why taking a random sample on a disk and then projecting it onto the hemisphere equals the GGX distribution of visible normals (VNDF). I couldn't find a proof in the paper, so in this post I will try to verify that their PDFs are equal. (Originally, I planned to write this post after finishing my path tracer demo. But I worry that the situation here in Hong Kong will get worse again and I won't be able to write, so I decided to write it down first. I hope it won't get too boring with only math equations.)
My work-in-progress path tracer, using GGX material only

Quick summary of sampling the GGX VNDF technique

For those who are not familiar with the GGX VNDF technique, here is a brief overview. It is an importance sampling technique for drawing a random normal vector from the GGX distribution of visible normals. That normal vector is then used to generate a reflection vector, usually for the next reflected ray during path tracing.

My confusion mainly comes from steps 2 and 3 below, in the hemisphere configuration: why does this method of generating a normal vector match the GGX VNDF exactly?

Given a view vector to a GGX surface with arbitrary roughness, the steps to sample a normal vector are:
1. Transform the view vector into the GGX hemisphere configuration space (i.e. from arbitrary roughness to the roughness = 1 configuration) using the GGX stretch-invariant property.
2. Sample a random point on the disk obtained by projecting the hemisphere along the transformed view direction.
3. Reproject the sampled point onto the hemisphere along the view direction. This gives our desired normal vector.
4. Transform the normal vector back from the hemisphere configuration to the original GGX roughness space.
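The four steps above can be sketched in code roughly as follows (my own Python transcription of the listing in the paper; treat it as a sketch rather than a drop-in implementation):

```python
import math

# A sketch of the four steps above, following the listing in Heitz's paper.
# v is the tangent-space view direction (unit length, v[2] > 0),
# alpha_x/alpha_y are the GGX roughness values, u1/u2 are uniform in [0, 1).
def sample_ggx_vndf(v, alpha_x, alpha_y, u1, u2):
    # Step 1: stretch the view vector into the hemisphere configuration.
    vh = (alpha_x*v[0], alpha_y*v[1], v[2])
    inv = 1.0 / math.sqrt(vh[0]**2 + vh[1]**2 + vh[2]**2)
    vh = (vh[0]*inv, vh[1]*inv, vh[2]*inv)
    # Orthonormal basis (t1, t2) around vh.
    lensq = vh[0]*vh[0] + vh[1]*vh[1]
    if lensq > 1e-12:
        inv = 1.0 / math.sqrt(lensq)
        t1 = (-vh[1]*inv, vh[0]*inv, 0.0)
    else:
        t1 = (1.0, 0.0, 0.0)
    t2 = (vh[1]*t1[2] - vh[2]*t1[1],
          vh[2]*t1[0] - vh[0]*t1[2],
          vh[0]*t1[1] - vh[1]*t1[0])
    # Step 2: sample a point on the projected disk (with the vertical mapping).
    r, phi = math.sqrt(u1), 2.0*math.pi*u2
    p1, p2 = r*math.cos(phi), r*math.sin(phi)
    s = 0.5*(1.0 + vh[2])
    p2 = (1.0 - s)*math.sqrt(max(0.0, 1.0 - p1*p1)) + s*p2
    # Step 3: reproject the point onto the hemisphere to get the normal.
    p3 = math.sqrt(max(0.0, 1.0 - p1*p1 - p2*p2))
    nh = tuple(p1*t1[i] + p2*t2[i] + p3*vh[i] for i in range(3))
    # Step 4: unstretch the normal back to the original roughness space.
    n = (alpha_x*nh[0], alpha_y*nh[1], max(0.0, nh[2]))
    inv = 1.0 / math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
    return (n[0]*inv, n[1]*inv, n[2]*inv)
```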
VNDF sampling technique illustration from Eric Heitz's paper 
GGX NDF definition
Before digging deep into the problem, let's start with the definition of the GGX NDF. The paper states that the GGX distribution uses only the upper part of an ellipsoid, and when alpha (the roughness) equals 1, the GGX distribution is a uniform hemisphere. According to the definition (with alpha = 1), the NDF reduces to a constant: D(N) = 1/π.

So its PDF will be: P(N) = D(N)(Z.N) = (Z.N)/π

So, sampling a normal vector from the GGX distribution (with alpha = 1) is equivalent to sampling a vector from a cosine-weighted distribution.
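Writing this out explicitly (using the standard isotropic GGX NDF, where θ is the angle between the normal N and Z), the alpha = 1 simplification is:

```latex
D(N) = \frac{\alpha^2}{\pi \cos^4\theta \left(\alpha^2 + \tan^2\theta\right)^2}
\;\;\xrightarrow{\;\alpha = 1\;}\;\;
\frac{1}{\pi \cos^4\theta \left(1 + \tan^2\theta\right)^2}
= \frac{\cos^4\theta}{\pi \cos^4\theta}
= \frac{1}{\pi}

P(N) = D(N)\cos\theta = \frac{\cos\theta}{\pi}
```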
GGX VNDF definition
The definition of the VNDF depends on the shadowing function, and we are using the Smith shadowing function. With alpha = 1, the Smith masking term simplifies to: G1(V) = 2(Z.V)/(1 + Z.V)

Therefore the VNDF equals: Dv(N) = G1(V)(V.N)D(N)/(Z.V) = 2(V.N)/(π(1 + Z.V))
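If the algebra is right, the alpha = 1 VNDF should be Dv(N) = 2(V.N)/(π(1 + Z.V)). Here is a quick Monte Carlo sanity check (my own sketch, not from the paper) that this expression integrates to 1 over the upper hemisphere, as any PDF over normals must:

```python
import math, random

random.seed(1)

# Assuming the alpha = 1 VNDF works out to Dv(N) = 2 (V.N) / (pi (1 + Z.V)),
# a valid PDF over normals must integrate to 1 over the upper hemisphere.
def vndf_alpha1(v, n):
    vn = max(0.0, v[0]*n[0] + v[1]*n[1] + v[2]*n[2])
    return 2.0*vn / (math.pi*(1.0 + v[2]))

def uniform_hemisphere():
    z = random.random()
    r = math.sqrt(max(0.0, 1.0 - z*z))
    phi = 2.0*math.pi*random.random()
    return (r*math.cos(phi), r*math.sin(phi), z)

v = (math.sin(1.2), 0.0, math.cos(1.2))  # a strongly tilted view direction
m = 400_000
# Monte Carlo integration with uniform hemisphere samples (pdf = 1/(2*pi)).
estimate = sum(vndf_alpha1(v, uniform_hemisphere()) for _ in range(m)) / m * 2.0*math.pi
print(estimate)  # should come out close to 1
```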
GGX VNDF specific case
With both the GGX NDF and VNDF definitions in hand, we can start investigating the problem. I decided to start with something simple first: the specific case where the view direction equals the surface normal (i.e. V = Z).
After simplification in this V = Z case, the PDF becomes Dz(N) = (Z.N)/π, which is also cosine-weighted and equals the PDF of the traditional method of sampling the GGX NDF.

Now take a look at the sampling scheme in Eric Heitz's method. The method starts with uniform sampling of a unit disk, which has PDF = 1/π; then the point is projected onto the hemisphere along the view direction, which adds a cosine term to the PDF (i.e. (Z.N)/π) according to Malley's method (the cosine term comes from the Jacobian of the transform). Therefore, the VNDF and Eric Heitz's method are the same in this specific case: both have a cosine-weighted PDF.
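Malley's method itself is easy to verify numerically: if a uniform disk sample projected up along Z is cosine-weighted (PDF (Z.N)/π), then cos²θ of the resulting direction is uniformly distributed on [0, 1], so its sample mean should be close to 1/2. A quick sketch:

```python
import math, random

random.seed(7)

def sample_cosine_malley():
    # Uniform point on the unit disk (rejection sampling), then
    # project it up onto the hemisphere along +Z (Malley's method).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x*x + y*y <= 1.0:
            break
    z = math.sqrt(max(0.0, 1.0 - x*x - y*y))
    return (x, y, z)

n = 200_000
mean_z2 = sum(sample_cosine_malley()[2] ** 2 for _ in range(n)) / n
# For a cosine-weighted pdf cos(theta)/pi, cos^2(theta) is uniform on [0, 1],
# so the sample mean of z^2 should be close to 1/2.
print(mean_z2)
```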
GGX VNDF general case
To verify that Eric Heitz's sampling scheme equals the PDF of the GGX VNDF for all possible viewing directions, we need to calculate the PDF of his method and track how the PDF changes with each transformation. From the paper we have this vertical mapping of the disk sample (t1, t2), with s = (1 + Z.V)/2: t2' = (1 - s)sqrt(1 - t1^2) + s(t2)

We know the PDF of uniformly sampling a unit disk is 1/π (i.e. P(t1, t2) = 1/π). Since the vertical mapping scales t2 by s, its Jacobian is dt2'/dt2 = s, so: P(t1, t2') = (1/π)/s = 2/(π(1 + Z.V))
Transformation of randomly sampled point from Eric Heitz's paper 
The next step of the algorithm is to reproject the disk sample onto the hemisphere along the view direction, which produces our target importance-sampled normal. So, by Malley's method again (but this time along the view direction instead of the surface normal), we add a (V.N) Jacobian term to the above PDF P(t1, t2'): P(N) = (V.N) * 2/(π(1 + Z.V)) = 2(V.N)/(π(1 + Z.V))

The resulting PDF equals the GGX VNDF definition exactly. This answers my question of why Eric Heitz's sampling scheme is an exact sampling routine for the GGX VNDF.
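To double-check the derivation numerically, here is a small script (my own sketch, following the projected-disk construction described above, in the hemisphere alpha = 1 configuration) that compares the expectation of Z.N under the sampler against the same expectation integrated with the analytic PDF 2(V.N)/(π(1 + Z.V)):

```python
import math, random

random.seed(3)

# Projected-disk VNDF sampling in the hemisphere (alpha = 1) configuration.
def sample_vndf_hemisphere(v, u1, u2):
    # Orthonormal basis (t1, t2) around the view direction v.
    lensq = v[0]*v[0] + v[1]*v[1]
    if lensq > 1e-12:
        inv = 1.0 / math.sqrt(lensq)
        t1 = (-v[1]*inv, v[0]*inv, 0.0)
    else:
        t1 = (1.0, 0.0, 0.0)
    t2 = (v[1]*t1[2] - v[2]*t1[1],
          v[2]*t1[0] - v[0]*t1[2],
          v[0]*t1[1] - v[1]*t1[0])
    # Uniform sample on the unit disk.
    r, phi = math.sqrt(u1), 2.0*math.pi*u2
    p1, p2 = r*math.cos(phi), r*math.sin(phi)
    # Vertical mapping: t2' = (1 - s)sqrt(1 - t1^2) + s(t2), s = (1 + Z.V)/2.
    s = 0.5*(1.0 + v[2])
    p2 = (1.0 - s)*math.sqrt(max(0.0, 1.0 - p1*p1)) + s*p2
    # Reproject onto the hemisphere along v to get the normal.
    p3 = math.sqrt(max(0.0, 1.0 - p1*p1 - p2*p2))
    return tuple(p1*t1[i] + p2*t2[i] + p3*v[i] for i in range(3))

def pdf_vndf(v, n):
    # Analytic alpha = 1 VNDF: 2 (V.N) / (pi (1 + Z.V)).
    vn = max(0.0, sum(v[i]*n[i] for i in range(3)))
    return 2.0*vn / (math.pi*(1.0 + v[2]))

v = (math.sin(1.0), 0.0, math.cos(1.0))  # view direction tilted ~57 degrees
m = 200_000

# E[Z.N] estimated directly from the sampler...
e_sampler = sum(sample_vndf_hemisphere(v, random.random(), random.random())[2]
                for _ in range(m)) / m

# ...and the same expectation integrated against the analytic PDF,
# using uniform hemisphere sampling (pdf = 1/(2*pi)).
e_ref = 0.0
for _ in range(m):
    z = random.random()
    r, phi = math.sqrt(max(0.0, 1.0 - z*z)), 2.0*math.pi*random.random()
    n = (r*math.cos(phi), r*math.sin(phi), z)
    e_ref += n[2] * pdf_vndf(v, n) * 2.0*math.pi
e_ref /= m

print(e_sampler, e_ref)  # the two estimates should agree closely
```

If the sampler and the analytic PDF describe the same distribution, the two estimates match up to Monte Carlo noise.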
Conclusion
This post describes my learning process for the paper "Sampling the GGX Distribution of Visible Normals" and resolves the part that confused me most: why "taking a random sample on a disk and then projecting it onto the hemisphere equals the GGX VNDF". If anybody knows a simpler proof that these two PDFs are equal, or if you discover any mistake, please let me know in the comments. Thank you.
References
[1] http://www.jcgt.org/published/0007/04/01/paper.pdf
[2] https://hal.archives-ouvertes.fr/hal-01509746/document
[3] https://agraphicsguy.wordpress.com/2015/11/01/sampling-microfacet-brdf/
[4] https://schuttejoe.github.io/post/ggximportancesamplingpart1/
[5] https://schuttejoe.github.io/post/ggximportancesamplingpart2/
[6] http://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/2D_Sampling_with_Multidimensional_Transformations.html