
Note on sampling GGX Distribution of Visible Normals

Introduction
After writing an AO demo in the last post, I started to write a progressive path tracer, but my progress was very slow due to the social unrest in the past few months (here are some related news about what has happened). In the past weeks, the situation has calmed down a bit, so I continued writing my path tracer and started adding specular lighting. While implementing Eric Heitz's "Sampling the GGX Distribution of Visible Normals" technique, I was confused about why taking a random sample on a disk and then projecting it onto the hemisphere yields the GGX distribution of visible normals (VNDF). I couldn't find a proof in the paper, so in this post I will try to verify that their PDFs are equal. (Originally, I planned to write this post after finishing my path tracer demo. But I worry that the situation here in Hong Kong will get worse again and I won't be able to write, so I decided to write it down first. I hope it won't get too boring with only math equations.)
My work in progress path tracer, using GGX material only

Quick summary of sampling the GGX VNDF technique
For those who are not familiar with the GGX VNDF technique, I will briefly talk about it. It is an importance sampling technique for sampling a random normal vector from the GGX distribution. That normal vector is then used to generate a reflection vector, usually for the next reflected ray during path tracing.

Traditional importance sampling schemes use D(N) to sample a normal vector
The VNDF technique importance samples a vector from the visible normal distribution, taking the view direction into account
Given a view vector to a GGX surface with arbitrary roughness, the steps to sample a normal vector are:
  1. Transform the view vector to the GGX hemisphere configuration space (i.e. from arbitrary roughness to the roughness = 1 config) using the GGX stretch-invariance property.
  2. Sample a random point on the disk formed by projecting the hemisphere along the transformed view direction.
  3. Re-project the sampled point onto the hemisphere along the view direction. This will be our desired normal vector.
  4. Transform the normal vector from the hemisphere configuration back to the original GGX roughness space.
VNDF sampling technique illustration from  Eric Heitz's paper
My confusion mainly comes from steps 2 and 3 in the hemisphere configuration: why does this method of generating a normal vector equal the GGX VNDF exactly?
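The four steps above can be sketched on the CPU. This is a minimal sketch following the pseudo-code in Heitz's paper (variable names follow the paper; Ve is the tangent-space view vector and u1/u2 are uniform random numbers in [0, 1)):

```cpp
#include <array>
#include <cmath>
#include <algorithm>
#include <cassert>

using Vec3 = std::array<float, 3>;

Vec3 sampleGGXVNDF(Vec3 Ve, float alphaX, float alphaY, float u1, float u2)
{
    // Step 1: transform the view direction to the hemisphere configuration
    // (roughness = 1) using the GGX stretch-invariance property.
    Vec3 Vh = { alphaX * Ve[0], alphaY * Ve[1], Ve[2] };
    float len = std::sqrt(Vh[0]*Vh[0] + Vh[1]*Vh[1] + Vh[2]*Vh[2]);
    Vh = { Vh[0]/len, Vh[1]/len, Vh[2]/len };

    // Build an orthonormal basis (T1, T2, Vh) around the view direction.
    float lensq = Vh[0]*Vh[0] + Vh[1]*Vh[1];
    Vec3 T1 = lensq > 0.0f
        ? Vec3{ -Vh[1] / std::sqrt(lensq), Vh[0] / std::sqrt(lensq), 0.0f }
        : Vec3{ 1.0f, 0.0f, 0.0f };
    Vec3 T2 = { Vh[1]*T1[2] - Vh[2]*T1[1],    // cross(Vh, T1)
                Vh[2]*T1[0] - Vh[0]*T1[2],
                Vh[0]*T1[1] - Vh[1]*T1[0] };

    // Step 2: sample a point on the projected disk, with the vertical
    // mapping s that accounts for the visible (tilted) part of the disk.
    float r = std::sqrt(u1);
    float phi = 2.0f * 3.14159265f * u2;
    float t1 = r * std::cos(phi);
    float t2 = r * std::sin(phi);
    float s = 0.5f * (1.0f + Vh[2]);
    t2 = (1.0f - s) * std::sqrt(1.0f - t1*t1) + s * t2;

    // Step 3: re-project the disk point onto the hemisphere to get the normal.
    float t3 = std::sqrt(std::max(0.0f, 1.0f - t1*t1 - t2*t2));
    Vec3 Nh = { t1*T1[0] + t2*T2[0] + t3*Vh[0],
                t1*T1[1] + t2*T2[1] + t3*Vh[1],
                t1*T1[2] + t2*T2[2] + t3*Vh[2] };

    // Step 4: transform back to the original roughness space and normalize.
    Vec3 Ne = { alphaX * Nh[0], alphaY * Nh[1], std::max(0.0f, Nh[2]) };
    float l = std::sqrt(Ne[0]*Ne[0] + Ne[1]*Ne[1] + Ne[2]*Ne[2]);
    return { Ne[0]/l, Ne[1]/l, Ne[2]/l };
}
```

The returned vector is the sampled micro-normal; reflecting the view vector about it gives the next ray direction.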

GGX NDF definition
Before digging deep into the problem, let's start with the definition of the GGX NDF. The paper states that the GGX distribution uses only the upper part of the ellipsoid, and when alpha (the roughness) equals 1, the GGX distribution is a uniform hemisphere. According to the definition:

D(N) = α² / (π · cos⁴θ · (α² + tan²θ)²)

With α = 1 and the identity 1 + tan²θ = 1/cos²θ, this simplifies to D(N) = 1/π, i.e. a constant over the hemisphere.
So its PDF will be:

p(N) = D(N) · cosθ = cosθ / π
So, sampling a normal vector from the GGX distribution (with alpha = 1) is equivalent to sampling a vector from a cosine-weighted distribution.
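The simplification above can be checked numerically: evaluating the general GGX NDF at α = 1 should give 1/π for any angle. A small sketch:

```cpp
#include <cmath>
#include <cassert>

// GGX NDF as a function of the angle theta between the micro-normal and the
// surface normal: D(theta) = alpha^2 / (pi * cos^4(theta) * (alpha^2 + tan^2(theta))^2)
double ggxD(double alpha, double theta)
{
    const double pi = 3.14159265358979;
    double c = std::cos(theta);
    double t = std::tan(theta);
    double denom = alpha * alpha + t * t;
    return (alpha * alpha) / (pi * c * c * c * c * denom * denom);
}
```

With α = 1 the (α² + tan²θ)² term cancels the cos⁴θ term exactly, leaving the constant 1/π.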

GGX VNDF definition
The definition of the VNDF depends on the shadowing function. We are using the Smith shadowing function, G₁(V) = 2 / (1 + √(1 + α²·tan²θv)), which with alpha = 1 becomes:

G₁(V) = 2·cosθv / (1 + cosθv)
Therefore the VNDF equals:

Dv(N) = G₁(V) · max(0, V·N) · D(N) / cosθv = 2·(V·N) / (π·(1 + cosθv))

GGX VNDF specific case
With both the GGX NDF and VNDF definitions, we can start investigating the problem. I decided to start with something simple first, a specific case: the view direction equals the surface normal (i.e. V = Z). Substituting cosθv = 1 into the VNDF gives:

Dz(N) = 2·(Z·N) / (π·(1 + 1)) = (Z·N) / π
After this simplification, the PDF Dz(N) in the V = Z case is also cosine-weighted, which equals the traditional method of sampling the GGX NDF.

Now take a look at the sampling scheme in Eric Heitz's method. The method starts with uniform sampling of a unit disk, which has a PDF of 1/π; then the point is projected onto the hemisphere along the view direction, which adds a cosine term to the PDF (i.e. Z·N/π) according to Malley's method (where the cosine term comes from the Jacobian of the transform). Therefore, both the VNDF and Eric Heitz's method are the same in this specific case, with a cosine-weighted PDF.
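Malley's method can be sanity checked with a quick Monte Carlo experiment: for a cosine-weighted PDF cosθ/π, the analytic expectation E[cosθ] = ∫ cosθ · (cosθ/π) dω = 2/3, so samples generated by disk projection should average to that value. A small sketch:

```cpp
#include <cmath>
#include <random>
#include <cassert>

// Sample a direction by picking a uniform point on the unit disk and
// projecting it up onto the hemisphere (Malley's method); return cos(theta)
// of the sampled direction, i.e. its z component.
double sampleMalleyCosTheta(std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double r = std::sqrt(uni(rng));                 // uniform disk: r = sqrt(u1)
    double phi = 2.0 * 3.14159265358979 * uni(rng);
    double x = r * std::cos(phi);
    double y = r * std::sin(phi);
    return std::sqrt(1.0 - x * x - y * y);          // project to hemisphere
}

// Average cos(theta) over n samples; should converge to 2/3.
double meanCosTheta(int n)
{
    std::mt19937 rng(42);  // fixed seed for reproducibility
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += sampleMalleyCosTheta(rng);
    return sum / n;
}
```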

GGX VNDF general case
To verify that Eric Heitz's sampling scheme matches the PDF of the GGX VNDF for all possible viewing directions, we need to calculate the PDF of his method and take care of how the PDF changes with each transformation. From the paper we have this vertical mapping:
Transformation of randomly sampled point from Eric Heitz's paper
We know the PDF of uniformly sampling a unit disk is 1/π (i.e. P(t1, t2) = 1/π), and we need to calculate P(t1, t2'). The vertical mapping from the paper is:

t2' = (1 − s)·√(1 − t1²) + s·t2, where s = ½·(1 + cosθv)

Since ∂t2'/∂t2 = s, the change of variables gives:

P(t1, t2') = P(t1, t2) / s = 2 / (π·(1 + cosθv))
The next step of the algorithm is to re-project the disk onto the hemisphere along the view direction, which produces our target importance-sampled normal. So by Malley's method again (but this time along the view direction instead of the surface normal), we add a V·N Jacobian term to the PDF P(t1, t2') above:

P(N) = (V·N) · P(t1, t2') = 2·(V·N) / (π·(1 + cosθv)) = Dv(N)
The resulting PDF equals the GGX VNDF definition exactly. So this answers my question of why Eric Heitz's sampling scheme is an exact sampling routine for the GGX VNDF.
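As an extra check, a valid PDF must integrate to 1 over its domain. The α = 1 VNDF, Dv(N) = 2·max(0, V·N)/(π·(1 + cosθv)), can be integrated numerically over the upper hemisphere for an arbitrary view angle (a quadrature sketch, not from the paper):

```cpp
#include <cmath>
#include <algorithm>
#include <cassert>

// Midpoint-rule integration of Dv(N) over the upper hemisphere; the result
// should be 1 for any view angle thetaV if Dv is a proper PDF.
double integrateVNDF(double thetaV)
{
    const double pi = 3.14159265358979;
    double vx = std::sin(thetaV), vz = std::cos(thetaV);  // V in the xz plane
    const int nTheta = 400, nPhi = 400;
    double sum = 0.0;
    for (int i = 0; i < nTheta; ++i)
    {
        double theta = (i + 0.5) * (0.5 * pi) / nTheta;
        for (int j = 0; j < nPhi; ++j)
        {
            double phi = (j + 0.5) * (2.0 * pi) / nPhi;
            double nx = std::sin(theta) * std::cos(phi);
            double nz = std::cos(theta);
            double vdotn = vx * nx + vz * nz;
            double dv = 2.0 * std::max(0.0, vdotn) / (pi * (1.0 + vz));
            sum += dv * std::sin(theta);  // solid angle measure d(omega)
        }
    }
    return sum * (0.5 * pi / nTheta) * (2.0 * pi / nPhi);
}
```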

Conclusion
This post describes my learning process for the paper "Sampling the GGX Distribution of Visible Normals" and resolves the part that confused me most: why "taking a random sample on a disk and then projecting it onto the hemisphere equals the GGX VNDF". If anybody knows a simpler proof that these 2 equations are equal, or if you discover any mistake, please let me know in the comments. Thank you.

References
[1] http://www.jcgt.org/published/0007/04/01/paper.pdf
[2] https://hal.archives-ouvertes.fr/hal-01509746/document
[3] https://agraphicsguy.wordpress.com/2015/11/01/sampling-microfacet-brdf/
[4] https://schuttejoe.github.io/post/ggximportancesamplingpart1/
[5] https://schuttejoe.github.io/post/ggximportancesamplingpart2/
[6] http://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/2D_Sampling_with_Multidimensional_Transformations.html





DXR AO

Introduction
(Edit 3/5/2020: An updated version of the demo can be downloaded here, which supports high-DPI monitors and includes some bug fixes)
It has been 2 months since my last post. For the past few months, the situation here in Hong Kong was very bad. Our basic human rights are deteriorating. Absurd things happen, such as suspected cooperation between police and triads, as well as police brutality (including shooting directly at journalists). I really don't know what can be done... Maybe you could spare me a few minutes to sign some of these petitions? Although such petitions may not be very useful, at least after signing some of them, the US Congress is discussing the Hong Kong Human Rights and Democracy Act now. I would sincerely appreciate your help. Thank you very much!

Back to today's topic: after setting up my D3D12 rendering framework, I started to learn DirectX Raytracing (DXR). I decided to start with an ambient occlusion demo because it is easier than writing a full path tracer, since I do not need to handle material information or lighting data. The demo can be downloaded from here (requires a DXR compatible graphics card and driver, with Windows 10 build version 1809 or newer).



Rendering pipeline
In this demo, a G-buffer with normal and depth data is rendered first. Then a velocity buffer is generated using the current and previous frame camera transforms, stored in RG16Snorm format. Then rays are traced from the world position reconstructed from the depth buffer, with a cosine-weighted distribution. To avoid ray-geometry self-intersection, the ray origin is shifted towards the camera a bit. After that, a temporal and a spatial filter are applied to smooth out the noisy AO image, and an optional bilateral blur pass can be applied for a final clean up.



Temporal Filter
With the noisy image generated from the ray tracing pass, we can reuse previous frames' ray-traced data to smooth out the image. In the demo, the velocity buffer is used to get the pixel location in the previous frame (with an additional depth check between the current frame depth value and the re-projected previous frame depth value). As we are calculating ambient occlusion using Monte Carlo integration:

AO(x) = ∫ V(x, ω) · (cosθ / π) dω ≈ (1/N) · Σ V(x, ωᵢ), with ωᵢ drawn from a cosine-weighted distribution
We can split the Monte Carlo integration across multiple frames and store the AO result in an RG16Unorm texture, where the red channel stores the accumulated AO result and the green channel stores the total sample count N (clamped to 255 to avoid overflow). So after a new frame is rendered, we can accumulate the AO Monte Carlo integration with the following equation:

AO_accumulated = (AO_history · N + AO_frame) / (N + 1)
We also reduce the sample count based on the depth difference between the current and previous frame depth buffer values (i.e. when the camera zooms out/in) to "fade out" the accumulated history faster and reduce ghosting artifacts.

AO image traced at 1 ray per pixel
AO image with accumulated samples over multiple frames

But this re-projection temporal filter has a shortcoming: it fails very often at geometry edges (especially when done at half resolution). So in the demo, when re-projection fails, it shifts 1 pixel and performs the re-projection again to accumulate more samples.

Many edge pixels failed the re-projection test
With 1 pixel shifted, many edge pixels can be re-projected

As this result is biased, I also reduce the sample count by a factor of 0.75 to make the correct ray-traced result "blend in" faster.

Spatial Filter
To increase the sample count for the Monte Carlo integration, we can reuse the ray-traced data in neighboring pixels. We search a 5x5 grid and reuse the neighbor data if it lies on the same surface, by comparing delta depth values (i.e. ddx and ddy reconstructed from the depth buffer). As the delta depth value is re-generated from the depth buffer, some artifacts may be seen at triangle edges.
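The same-surface test can be sketched like this (a hypothetical sketch of the idea, not the demo's exact code; the tolerance heuristic is an assumption): predict the neighbor's depth from the center depth plus the reconstructed depth derivatives, and accept the neighbor only if its actual depth is close to the prediction.

```cpp
#include <cmath>
#include <cassert>

// centerDepth: depth at the center pixel; ddxDepth/ddyDepth: depth deltas
// per pixel reconstructed from the depth buffer; (dx, dy): neighbor offset.
bool onSameSurface(float centerDepth, float ddxDepth, float ddyDepth,
                   int dx, int dy, float neighborDepth)
{
    // Predict the neighbor depth assuming a locally planar surface.
    float predicted = centerDepth + ddxDepth * dx + ddyDepth * dy;

    // Accept if the actual depth is within a tolerance proportional to the
    // local depth slope (the factor 2 and epsilon are assumptions).
    float tolerance = 2.0f * (std::fabs(ddxDepth * dx) + std::fabs(ddyDepth * dy)) + 1e-4f;
    return std::fabs(neighborDepth - predicted) < tolerance;
}
```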

noisy AO image applied with a spatial filter
artifact shown at the triangle edge by re-constructed delta depth
To save some performance, besides using half resolution rendering, we can also choose to interleave the ray casts every 4 pixels and ray cast the remaining pixels over the next few frames.

Rays are traced only at the red pixels
to save performance
For those pixels without any ray-traced data during interleaved rendering, we use the spatial filter to fill in the missing data. The same-surface depth check in the spatial filter can be bypassed when the sample count (stored in the green channel during the temporal filter) is low, because it is better to have some "wrong" neighbor data than no data for the pixel. This also helps to remove the edge artifacts shown before.

Rays are traced at interleaved pattern,
leaving many 'holes' in the image
Spatial filter will fill in those 'holes'
during interleaved rendering

Also, when ray casts are interleaved between pixels, we need to pay attention to the temporal filter too: we may re-project to a previous frame pixel which has no sample data. In this case, we snap the re-projected UV to the pixel that cast an interleaved ray in the previous frame.

Bilateral Blur
To clean up the remaining noise from the temporal and spatial filters, a bilateral blur is applied. We can get a wider blur by using the edge-aware À-Trous algorithm. The blur radius is adjusted according to the sample count (stored in the green channel by the temporal filter), so when we have already cast many ray samples, we can reduce the blur radius to get a sharper image.

Applying an additional bilateral blur to smooth out remaining noise

Random Ray Direction
When choosing the random ray cast directions, we want the chosen directions to have a more significant effect. Since we have a spatial filter that reuses neighbor pixel data, we can try to cast rays in directions such that the angle between ray directions in neighboring pixels is as large as possible, while also covering as much of the hemisphere as possible.



It looks like we can use some kind of blue noise texture so that the ray directions are well distributed. Let's take a look at how a cosine-weighted random ray direction is generated from 2 uniform random numbers (ξ1, ξ2):

cosθ = √(1 − ξ1),  ϕ = 2π·ξ2
From the above equation, the angle ϕ directly corresponds to the random ray direction on the tangent plane, and it has a linear relationship with the random variable ξ2. Since we generate random numbers using a Wang hash, which produces white noise, maybe we can stratify the random range and use blue noise to pick a desired stratum, making the result behave like a blue noise pattern. For example, where we originally have a random number in [0, 1), we may stratify it into 4 ranges: [0, 0.25), [0.25, 0.5), [0.5, 0.75), [0.75, 1). Then, using the screen space pixel coordinates, we sample a tileable blue noise texture, and according to the value of the blue noise, we scale the white noise random number into 1 of the 4 stratified ranges. Below is some sample code of how the stratification is done:
int BLUE_NOISE_TEX_SIZE = 64;
int STRATIFIED_SIZE = 16;
// blue noise value for this pixel, tiled across the screen
float4 noise = blueNoiseTex[pxPos % BLUE_NOISE_TEX_SIZE];
// quantize the blue noise into a stratum index in [0, STRATIFIED_SIZE)
uint2 noise_quantized = noise.xy * (255.0 * STRATIFIED_SIZE / 256.0);
// random white noise in range [0, 1)
float2 r = wang_hash(pxPos);
// rescale the white noise into the stratum picked by the blue noise
r = mad(r, 1.0 / STRATIFIED_SIZE, noise_quantized * (1.0 / STRATIFIED_SIZE));
With the blue noise adjusted ray direction, the ray traced AO image looks less noisy visually:
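The stratification logic in the shader snippet can be mirrored on the CPU for a quick check (a sketch with illustrative names; the blue noise value is taken directly in [0, 1) here instead of from a texture):

```cpp
#include <cassert>
#include <cmath>

// Rescale a white noise value r in [0, 1) into 1 of stratifiedSize
// sub-ranges, where the sub-range index is chosen by a blue noise value
// (also in [0, 1)) - the CPU equivalent of the mad() line in the shader.
float stratify(float whiteNoise, float blueNoise, int stratifiedSize)
{
    // Pick the stratum from the blue noise (like noise_quantized above).
    int stratum = static_cast<int>(blueNoise * stratifiedSize);
    if (stratum >= stratifiedSize) stratum = stratifiedSize - 1;

    // Squash the white noise into that stratum.
    return whiteNoise / stratifiedSize
         + static_cast<float>(stratum) / stratifiedSize;
}
```

Because neighboring pixels get different blue noise values, neighboring pixels end up in different strata of ϕ, which spreads their ray directions apart.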

Rays are traced using white noise
Rays are traced using blue noise
Blurred white noise AO image
Blurred blue noise AO image

Ray Binning
In the demo, ray binning is also implemented, but the performance improvement is not significant. Ray binning only shows a large performance gain when the ray tracing distance is large (e.g. > 10m) and both half resolution and interleaved rendering are turned off. I have only run the demo on my GTX 1060; maybe the situation will be different on RTX graphics cards (so this is something I need to investigate in the future). Also, the demo may show a slight difference when toggling ray binning on/off due to the precision difference of using the RGBA16Float format to store ray directions (the difference vanishes after accumulating more samples over multiple frames with the temporal filter).

Conclusion
In this post, I have described how DXR is used to compute ray-traced AO in real time, using a combination of temporal and spatial filters. Those filters are important for increasing the total sample count of the Monte Carlo integration and getting a noise-free and stable image. The demo can be downloaded from here. There is still plenty of stuff to improve, such as a better filter: currently, when the AO distance is large and both half resolution and interleaved rendering are turned on (i.e. 1 ray per 16 pixels), the image is too noisy and not temporally stable during camera movement. Maybe I will improve those things when writing a path tracer in the future.

References
[1] DirectX Raytracing (DXR) Functional Spec https://microsoft.github.io/DirectX-Specs/d3d/Raytracing.html
[2] Edge-Avoiding À-Trous Wavelet Transform for fast GlobalIllumination Filtering https://jo.dreggn.org/home/2010_atrous.pdf
[3] Free blue noise textures http://momentsingraphics.de/BlueNoise.html
[4] Quick And Easy GPU Random Numbers In D3D11 http://www.reedbeta.com/blog/quick-and-easy-gpu-random-numbers-in-d3d11/
[5] Leveraging Real-Time Ray Tracing to build a Hybrid Game Engine http://advances.realtimerendering.com/s2019/Benyoub-DXR%20Ray%20tracing-%20SIGGRAPH2019-final.pdf
[6] ”It Just Works”: Ray-Traced Reflections in 'Battlefield V' https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s91023-it-just-works-ray-traced-reflections-in-battlefield-v.pdf

Render Graph

Introduction
A render graph is a directed acyclic graph that can be used to specify the dependencies between render passes. It is a convenient way to manage rendering, especially when using a low level API such as D3D12. There are many great resources talking about it, such as this and this. In this post I will talk about how the render graph is set up, render pass reordering, as well as resource barrier management.

Render Graph set up
To have a simplified view of the render graph, we can treat each node inside the graph as a single render pass. For example, we can have a graph for a simple deferred renderer like this:
Render passes dependency within a render graph
By having such a graph, we can derive the dependencies of the render passes, remove unused render passes, as well as reorder them. In my toy graphics engine, I use a simple scheme to reorder render passes. Taking the below render graph as an example, the render passes are added in the following order:
A render graph example
We can group it into several dependency levels like this:
split render passes into several dependency levels
Within each level, the passes are independent and can be reordered freely, so the render passes are enqueued into the command list in the following order:
Reordered render passes
Between dependency levels, we batch resource barriers to transition resources to the correct states.
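The dependency-level split can be computed with a simple rule: a pass's level is 1 plus the maximum level among the passes it depends on. A minimal sketch (the representation is an assumption; passes are assumed to be stored in topological order, i.e. dependencies have smaller indices):

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// deps[i] lists the indices of the passes that pass i depends on.
// Returns the dependency level of every pass; passes sharing a level are
// independent, so they can be reordered freely and a single barrier batch
// can be placed between consecutive levels.
std::vector<int> computeDependencyLevels(const std::vector<std::vector<int>>& deps)
{
    std::vector<int> level(deps.size(), 0);
    for (size_t i = 0; i < deps.size(); ++i)
        for (int d : deps[i])
            level[i] = std::max(level[i], level[d] + 1);
    return level;
}
```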

Transient Resources
The above is just a simplified view of the graph. In fact, each render pass consists of a number of inputs and outputs. Every input/output is a graphics resource (e.g. a texture), and render passes are connected through such resources within the render graph.
Render graph connecting render passes and resources
As you can see in the above example, there are many transient resources (e.g. depth buffer, shadow map, etc.). We handle such transient resources using a texture pool: a texture is reused after it is no longer needed by previous passes (placed resources are not used, for simplicity). When building a render graph, we compute the lifetime of every transient resource (i.e. the dependency levels at which the resource starts/ends being used). So we can free a transient resource when execution goes beyond its last dependency level and reuse it for a later render pass. To specify a render pass input/output in my engine, I only need to specify its size/format and don't need to worry about resource creation; the transient resource pool will create the textures as well as the required resource views (e.g. SRV/DSV/RTV).
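The lifetime computation mentioned above amounts to taking the min and max dependency level over all passes that touch the resource. A minimal sketch (names are illustrative):

```cpp
#include <vector>
#include <algorithm>
#include <climits>
#include <cassert>

// The range of dependency levels during which a transient resource is alive;
// outside this range the pooled texture can be handed to another resource.
struct Lifetime { int firstLevel; int lastLevel; };

Lifetime computeLifetime(const std::vector<int>& usingPassLevels)
{
    Lifetime lt{ INT_MAX, -1 };
    for (int lvl : usingPassLevels)
    {
        lt.firstLevel = std::min(lt.firstLevel, lvl);
        lt.lastLevel  = std::max(lt.lastLevel, lvl);
    }
    return lt;
}
```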

Conclusion
In this post, I have described how render passes are reordered inside the render graph, when barriers are inserted, and how transient resources are handled. But I have not implemented parallel recording of command lists or async compute. It really takes much more effort to use D3D12 than D3D11. I think the current state of my hobby graphics engine is good enough to use. Looks like I can start learning DXR after spending lots of effort on basic D3D12 setup code. =]

D3D12 Constant Buffer Management

Introduction
In D3D12, there is no explicit constant buffer API object (unlike D3D11). All we have in D3D12 is ID3D12Resource, which needs to be sub-divided into smaller regions with Constant Buffer Views. And it is our job to handle constant buffer lifetimes and avoid updating constant buffer values while the GPU is still using them. This post will describe how I handle this topic.

Constant buffer pool
We allocate a large ID3D12Resource and treat it as an object pool by sub-dividing it into many small constant buffers (let's call it a constant buffer pool). Since constant buffers are required to be 256-byte aligned (I can only find this requirement in the previous documentation, while the updated documents only mention it in "Uploading Texture Data Through Buffers", which is under a section about textures...), I defined 3 fixed-size pools: 256/512/1024 bytes. These 3 sizes are enough for my needs, as most constant buffers are small (in Seal Guardian, the largest constant buffer size is 560 bytes, while large data like the skinning matrix palette is uploaded via a texture).
3 constant buffer pools with different size
In the last post, a non-shader-visible descriptor heap manager was used to handle non-shader-visible descriptors. But in fact, that is only used for SRV/DSV/RTV descriptors; constant buffer views are managed with another scheme. As described above, when we create an ID3D12Resource for a constant buffer pool, we also create a non-shader-visible ID3D12DescriptorHeap with a size large enough to hold descriptors pointing to all the constant buffers inside the pool.
ID3D12Resource and ID3D12DescriptorHeap are created in pair
We also split the constant buffer pools based on their usage: static/dynamic. So there are 6 constant buffer pools in total inside my toy engine (static 256/512/1024 byte pools + dynamic 256/512/1024 byte pools).
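Choosing which pool serves an allocation request can be sketched as rounding the requested size up to the 256-byte CBV alignment and mapping it to the smallest fitting pool (a sketch; the fallback behavior for oversized requests is an assumption):

```cpp
#include <cstdint>
#include <initializer_list>
#include <cassert>

// Returns the fixed pool size (256/512/1024) that should serve a request,
// or 0 if the request is too large for the fixed pools (such data would be
// uploaded another way, e.g. via a texture).
uint32_t pickConstantBufferPoolSize(uint32_t requestedBytes)
{
    // Round up to the 256-byte alignment D3D12 requires for CBVs.
    uint32_t aligned = (requestedBytes + 255u) & ~255u;
    for (uint32_t poolSize : { 256u, 512u, 1024u })
        if (aligned <= poolSize)
            return poolSize;
    return 0;
}
```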

Dynamic constant buffer
Constant buffers can be updated dynamically. Each constant buffer contains a CPU-side copy of its constant values. When it is bound before a draw call, those values are copied to the dynamic constant buffer pool (created in an upload heap). A piece of memory for the constant buffer values is allocated from the pool in a ring buffer fashion. If the pool is full (i.e. the ring buffer wraps around too fast, while all the constant buffers are still in use by the GPU), we create a larger pool, and the existing pool is deleted after all related GPU commands finish execution.
Resizing dynamic constant buffer pool, the previous pool
will be deleted after executing related GPU commands
To avoid copying the same constant buffer values to the pool when a constant buffer is bound multiple times, we keep 2 integer values for every dynamic constant buffer: a "last upload frame index" and a "value version". The last upload frame index is the frame index at which the CPU constant buffer values were copied to the dynamic pool. The value version is an integer which is monotonically increased every time the constant buffer values get modified/updated. So by checking these 2 integers, we can avoid duplicated copies of a constant buffer in the dynamic pool and re-use the previously copied values.
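The duplicate-upload check can be sketched like this (a minimal sketch with illustrative field names): a copy into the ring buffer is needed only if the buffer was not already uploaded this frame with the current value version.

```cpp
#include <cstdint>
#include <cassert>

struct DynamicCB
{
    uint64_t lastUploadFrame = UINT64_MAX;  // frame index of the last copy into the pool
    uint64_t uploadedVersion = 0;           // value version that copy contained
    uint64_t valueVersion    = 1;           // bumped on every CPU-side value update
};

// True if the CPU-side values must be copied into the dynamic pool again.
bool needsUpload(const DynamicCB& cb, uint64_t currentFrame)
{
    return cb.lastUploadFrame != currentFrame
        || cb.uploadedVersion != cb.valueVersion;
}
```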

Static constant buffer
A static constant buffer has a static descriptor handle, as described in the last post. The static constant buffer pools are created in the default heap. The pools are managed in a free-list fashion, as opposed to the ring buffer used for the dynamic pools. Also, when a pool is full, we still create an extra pool for new constant buffer allocation requests. But unlike the dynamic pools, the previous pool is not deleted when a new pool gets created.
Creating more static constant buffer pool if existing pools are full
To upload static constant buffer values to the GPU (since static pools are created in the default heap), we use the dynamic constant buffer pool instead of creating another upload heap. Every frame, we gather all newly created static constant buffers; then, before we start rendering the frame, we copy all the CPU constant buffer values to the dynamic constant buffer pool and schedule an ID3D12GraphicsCommandList::CopyBufferRegion() call to copy those values from the upload heap to the default heap. By grouping all the static constant buffer uploads, we can reduce the number of D3D12_RESOURCE_BARRIERs needed to transition between the D3D12_RESOURCE_STATE_COPY_DEST and D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER states.

Conclusion
In this post, I have described how constant buffers are managed in my toy engine. It uses a number of different pool sizes, managed in a ring buffer fashion for dynamic constant buffers and in a free-list fashion for static constant buffers. Uploads of static constant buffer contents are grouped together to reduce barrier usage. However, I currently split the usage simply into static/dynamic; in the future I would like to investigate the performance of adding another usage type for cases where a constant buffer is updated every frame but used frequently in many draw calls (e.g. write once, read many within a frame), placing those resources in the default heap instead of the current dynamic upload heap.

Reference
[1] https://docs.microsoft.com/en-us/windows/desktop/direct3d12/large-buffers
[2] https://www.gamedev.net/forums/topic/679285-d3d12-how-to-correctly-update-constant-buffers-in-different-scenarios/





D3D12 Descriptor Heap Management

Introduction
Continuing from the last post, where we described how root signatures are managed to bind resources: the root signature is just one part of resource binding, we also need descriptors to bind resources. Descriptors are small blocks of memory describing an object (CBV/SRV/UAV/Sampler) to the GPU. They are stored in descriptor heaps, which may be shader visible or non-shader visible. In this post, I will talk about how descriptors are managed for resource binding in my toy graphics engine.

Non shader visible heap
Let's start with the non-shader-visible heap management. We can treat a descriptor as a pointer to a GPU resource (e.g. a texture). A descriptor heap is a piece of memory used for storing descriptors, and the size of a single descriptor can be queried with ID3D12Device::GetDescriptorHandleIncrementSize(). So we treat a descriptor heap as an object pool, and every descriptor within the same heap can be referenced by an index.
Non shader visible descriptor heap containing N descriptors
Since we don't know in advance how many descriptors are needed, and we may have many non-shader-visible heaps, a non-shader-visible heap manager is created for allocating descriptors from descriptor heap(s). This manager contains at least 1 descriptor heap. When a descriptor allocation request is made to the manager, it will first look for a free descriptor in the existing descriptor heap(s); if none is found, a new descriptor heap is created to handle the request.
Descriptor heap manager handles descriptor allocation request, create descriptor heap if necessary
So within the graphics engine, we use a "non shader visible descriptor handle" to reference a D3D12 descriptor, storing the heap index and descriptor index with respect to a descriptor heap manager. All textures created in the engine have a "non shader visible descriptor handle" for resource binding (more on this later).
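Such a handle can be sketched as a pair of indices plus a small helper that turns the descriptor index into a byte offset inside its heap (a sketch with assumed field sizes, not the engine's actual layout):

```cpp
#include <cstdint>
#include <cassert>

// References a descriptor by (heap, slot) inside a descriptor heap manager,
// instead of holding a raw D3D12_CPU_DESCRIPTOR_HANDLE; the CPU address can
// always be recomputed from the heap start.
struct NonShaderVisibleHandle
{
    uint16_t heapIndex;        // which heap inside the manager
    uint16_t descriptorIndex;  // which slot inside that heap
};

// Byte offset of the descriptor from the heap start, given the increment
// size queried from ID3D12Device::GetDescriptorHandleIncrementSize().
uint64_t descriptorByteOffset(NonShaderVisibleHandle h, uint32_t incrementSize)
{
    return static_cast<uint64_t>(h.descriptorIndex) * incrementSize;
}
```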

Shader visible heap
Next, we will talk about shader-visible heap management. The shader-visible heap is responsible for binding resources that get used in shaders. It is recommended that just 1 heap is used across all frames so that asynchronous compute and graphics workloads can run in parallel (on NVIDIA hardware). So we just create 1 large shader-visible heap at the start of the program and don't bother to resize/allocate a larger heap when the heap is full (we just assert in this case). This single large shader-visible descriptor heap is divided into 2 regions: static/dynamic.
A single large shader visible descriptor heap, divided into 2 regions

Dynamic descriptor
Dynamic descriptors are used for transient resources whose descriptor tables cannot be reused often. During resource binding (e.g. a texture), their non-shader-visible descriptors are copied to the shader-visible heap via ID3D12Device::CopyDescriptors(), where the copy destination (i.e. dynamic shader-visible descriptors) is allocated in a ring buffer fashion. (Note the copy operation has a restriction that the copy source must be in a non-shader-visible heap; that's why we allocate a "non shader visible descriptor handle" for every texture.)

Static descriptor
Static descriptors are used for resources which can be grouped together into a descriptor table, so that the table can be reused over multiple frames. For example, the set of textures inside a material will not change very often, so those textures can be grouped into a descriptor table. My current approach is to use a "stack" based approach to manage the static shader-visible descriptor heap region. Instead of a stack of individual descriptors, we have a stack of groups of descriptors; typically, 1 static descriptor group is created during level load.
static descriptors are packed into group during level load
Inside a group of static descriptors, the descriptors are sorted such that all constant buffer descriptors appear before texture descriptors. Null descriptors may also need to be added to respect the Hardware Tiers restrictions. To identify a static descriptor in the shader-visible heap, we can use the stack group index together with the descriptor index within the group.
descriptor are ordered by type, with necessary padding
Each "static resource" (e.g. constant buffer/texture) has a "static descriptor handle" beside the "non shader visible descriptor handle". We can check whether a set of resources lies within the same descriptor table by comparing their stack group indices and checking whether their descriptor indices are in consecutive order. With such information, we can create a resource binding API similar to D3D11 (e.g. ID3D11DeviceContext::PSSetShaderResources()): if all the resources in the API call are in the same descriptor table, we can use the static descriptors to bind the descriptor table directly; otherwise, we fall back to the dynamic descriptor approach described in the previous section to create a contiguous descriptor table. (I have also thought of creating an explicit "descriptor table" object instead of mimicking the D3D11 binding API, say during material loading, grouping the material textures into a descriptor table so that resource binding can skip the consecutive descriptor index check described above. But currently I just stick with the simple solution first...)

As mentioned before, static descriptor groups are allocated in a "stack" based approach. But my current implementation is not strictly "last in, first out": we can remove a group in between, which leaves a "hole" in the static shader-visible heap region and results in fragmentation.

Fragmented static descriptor heap region
In theory, we can defragment this heap region by moving descriptor groups into the unused space (this works because we reference descriptors inside a heap by index instead of by address directly), and during defragmentation, we can switch to dynamic descriptors temporarily to avoid overwriting the static heap region while GPU commands are still using it. But I have not implemented defragmentation yet, because I only have one simple level (i.e. only 1 static descriptor group) right now...

Conclusion
In this post, I have described how descriptor heaps are managed for resource binding. To sum up, the shader-visible descriptor heap is divided into 2 regions: static/dynamic. The static region is managed in a "stack" based approach: during level loading, all the static CBV/SRV descriptors are stored within a static descriptor stack group, which forms a big contiguous descriptor table. This increases the chance of reusing a descriptor table. In addition to this optional static descriptor, every resource must have a non-shader-visible descriptor handle. This non-shader-visible descriptor handle is used when a static descriptor table cannot be used during resource binding; it gets copied to the shader-visible heap to form a new descriptor table. With this kind of heap management, we can create a resource binding API similar to D3D11 which calls the underlying D3D12 API using descriptors.

References
[1] https://docs.microsoft.com/en-us/windows/desktop/direct3d12/resource-binding
[2] https://www.gamedev.net/forums/topic/686440-d3d12-descriptor-heap-strategies/


D3D12 Root Signature Management

 Introduction
Continuing from the last post about writing my new toy D3D12 graphics engine: we have compiled some shaders and extracted some reflection data from the shader source. The next problem is to bind resources (e.g. constant buffers / textures) to the shaders. D3D12 uses root signatures together with root parameters to achieve this task. In this post, I will describe how my toy engine creates root signatures automatically based on shader resource usage.

Left: new D3D12 graphics engine (with only basic diffuse material)
Right: previous D3D11 rendering (with PBR material, GI...)
Still a long way to go to catch up with the previous renderer... 

Resource binding model
In D3D12, shader resource binding relies on the root parameter index. But when iterating on shader code, we may modify some resource bindings (e.g. add a texture variable / remove a constant buffer); the root signature may then change, which causes the root parameter indices to change. This would require updating every call like SetGraphicsRootDescriptorTable() with new root parameter indices, which is tedious and error-prone... Compared to the resource binding model in D3D11 (e.g. PSSetShaderResources() / PSSetConstantBuffers()), which doesn't have such a problem as the API defines a set of fixed slots to bind to, I would prefer to work with a similar binding model in my toy engine.

So, I defined a couple of slots for resource binding as follows (which is a bit different from D3D11):
Engine_PerDraw_CBV
Engine_PerView_CBV
Engine_PerFrame_CBV
Engine_PerDraw_SRV_VS_ONLY
Engine_PerDraw_SRV_PS_ONLY
Engine_PerDraw_SRV_ALL
Engine_PerView_SRV_VS_ONLY
Engine_PerView_SRV_PS_ONLY
Engine_PerView_SRV_ALL
Engine_PerFrame_SRV_VS_ONLY
Engine_PerFrame_SRV_PS_ONLY
Engine_PerFrame_SRV_ALL
Shader_PerDraw_CBV
Shader_PerDraw_SRV_VS_ONLY
Shader_PerDraw_SRV_PS_ONLY
Shader_PerDraw_SRV_ALL
Shader_PerDraw_UAV
Instead of having fixed slots per shader stage as in D3D11, my toy engine's fixed slots can be summarized into 3 categories:
Resource binding slot categories

Slot category "Resource Type"
As described by its name (CBV/SRV/UAV), this slot binds the corresponding resource type: constant buffer view / shader resource view / unordered access view.
The SRV type is further sub-divided into VS_ONLY / PS_ONLY / ALL sub-categories, which refer to the shader visibility. According to Nvidia's Do's and Don'ts, limiting the shader visibility can improve performance.
For the CBV type, the shader visibility is deduced from shader reflection data during root signature and PSO creation.

Slot category "Change frequency"
Resources are encouraged to be bound based on their update frequency, so this slot category is divided into 3 types: Per Frame / Per View / Per Draw.
The Per Frame/View types have a root parameter type of descriptor table.
The Per Draw CBV type has a root parameter type of root descriptor.
The Per Draw SRV type still uses a descriptor table instead of root descriptors because, for example, it is common to have only 1 constant buffer for a mesh's material while binding multiple textures for the same material. Using a descriptor table here helps keep the root signature small.

Slot category "Usage"
This category sub-divides the slots into different usage patterns: Engine/Shader.
Engine usage typically binds things like mesh transform constants, camera transforms, etc.
Shader usage is for shader-specific stuff, e.g. material constants.
I just can't find an appropriate name for this category, so I simply named them Engine/Shader. Maybe it would be better to call them Group 0/1/2/3... in case I have different usage patterns in the future. But I just don't bother with it for now...

Shader Reflection
In the last post, I mentioned that shader reflection data is exported during shader compilation. This is important for root signature creation: from the reflection data, we know which constant buffer/texture slots are used. When creating a pipeline state object (PSO) from shaders, we can deduce all the resource slots used in the PSO (as well as the shader visibility for constant buffers) and then create an appropriate root signature with each resource slot mapped to the corresponding root parameter index (let's call this mapping data the "root signature info").

To specify the resource slot in shader code, we make use of the register space introduced in Shader Model 5.1 to define which slot is used for a constant buffer/texture. For example:
#define ENGINE_PER_DRAW_SRV_ALL space5 // all shaders must have the same slot-space definition
Texture2D shadowMap : register(t0, ENGINE_PER_DRAW_SRV_ALL);
With the above information, on the CPU side, we can bind a resource to a specific slot using the root parameter index stored inside the "root signature info", similar to the D3D11 API.
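As a sketch of what this slot-based API might look like on the CPU side (the enum values, RootSignatureInfo struct and BindSRV() helper are hypothetical names for illustration, not the engine's actual code):

```cpp
#include <array>
#include <cstdint>

// Hypothetical subset of the fixed binding slots listed above.
enum Slot : uint32_t
{
    Engine_PerDraw_CBV,
    Engine_PerDraw_SRV_ALL,
    Shader_PerDraw_SRV_PS_ONLY,
    SlotCount
};

// Mapping from fixed slot to root parameter index, built at PSO creation
// from shader reflection data; -1 marks a slot the PSO does not use.
struct RootSignatureInfo
{
    std::array<int, SlotCount> rootParamIndex{};
};

// D3D11-style binding call: look up the root parameter index for the slot,
// then forward it to the D3D12 API (stubbed out here).
bool BindSRV(const RootSignatureInfo& info, Slot slot /*, descriptor table handle... */)
{
    int rootParam = info.rootParamIndex[slot];
    if (rootParam < 0)
        return false; // slot unused by this PSO, nothing to bind
    // cmdList->SetGraphicsRootDescriptorTable(rootParam, gpuHandle);
    return true;
}
```

The caller only ever names a fixed slot; the root parameter index stays an internal detail of the "root signature info", so iterating on shader code no longer breaks call sites.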

Conclusion
In this post, we have described how root signatures can be automatically created and used behind a slot-based API. First, a root signature is created (or re-used/shared) during the creation of a pipeline state object (PSO), based on its shader reflection data. We also create a "root signature info" that stores the mapping between resource slots and root parameter indices, kept together with the root signature and PSO. Then we can use this "root signature info" to bind resources to the shaders.

As this is my first time writing a graphics engine with D3D12, I am not sure whether this resource binding model is the best. I have also thought of other naming schemes for the resource slots: instead of naming them with PerDraw / PerView, would it be better to name them explicitly as RootDescriptor / DescriptorTable? Maybe I will change my mind after I gain more experience in the future...

Simple GPU Path Tracer

Introduction
Path tracing has become more popular in recent years. Because the code is easy to run in parallel, making the path tracer run on the GPU can greatly reduce the rendering time. This post is just my personal notes about learning the basics of path tracing and getting familiar with the D3D12 API. The source code can be downloaded here. And for those who don't want to compile from source, the executable can be downloaded here.

Rendering Equation
Like other rendering algorithms, path tracing solves the rendering equation:

Lo(x, ωo) = Le(x, ωo) + ∫Ω fr(x, ωi, ωo) Li(x, ωi) (n · ωi) dωi
To solve this integral, Monte Carlo integration can be used, so we shoot many rays within a single pixel from the camera position:

Lo(x, ωo) ≈ Le(x, ωo) + (1/N) Σk fr(x, ωk, ωo) Li(x, ωk) (n · ωk) / p(ωk)
During path tracing, when a ray hits a surface, we accumulate its light emission as well as the reflected light of that surface, i.e. we compute the rendering equation. But we take only one sample in the Monte Carlo integration, so only 1 random ray is generated according to the surface normal, which simplifies the equation to:

Lo(x, ωo) = Le(x, ωo) + fr(x, ω, ωo) Li(x, ω) (n · ω) / p(ω)
Since we shoot many rays within a single pixel, we still get an unbiased result. Expanding the recursive path tracing rendering equation, we can derive the following equation:

Lo = Le,0 + [f0 (n0 · ω0) / p(ω0)] ( Le,1 + [f1 (n1 · ω1) / p(ω1)] ( Le,2 + ... ) )
GPU random number
To compute the Monte Carlo integration, we need to generate random numbers on the GPU. The wang_hash is used due to its simple implementation:
uint wang_hash(uint seed)
{
    seed = (seed ^ 61) ^ (seed >> 16);
    seed *= 9;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2d;
    seed = seed ^ (seed >> 15);
    return seed;
}
We use the pixel index as the input for the wang_hash function.
seed = px_pos.y * viewportSize.x + px_pos.x
However, there is some visible pattern in the random noise texture generated this way (although it does not affect the final render result much...):



Luckily, to fix this, we can simply multiply the pixel index by a constant, which eliminates the visible pattern in the random texture.
seed = (px_pos.y * viewportSize.x + px_pos.x) * 100 

To generate multiple random numbers within the same pixel, we can increment the random seed by a constant after each call to the wang_hash function. Any constant larger than 0 (e.g. 10) is good enough for this simple path tracer.
float rand(inout uint seed)
{
    float r = wang_hash(seed) * (1.0 / 4294967296.0);
    seed += 10;
    return r;
}
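The HLSL above translates almost verbatim to C++, which makes it easy to sanity-check the generator offline (uint becomes uint32_t; the hash constants and seed formula match the shader code above, with hypothetical viewport numbers):

```cpp
#include <cstdint>

// C++ port of the HLSL wang_hash above (uint maps to uint32_t).
uint32_t wang_hash(uint32_t seed)
{
    seed = (seed ^ 61u) ^ (seed >> 16);
    seed *= 9u;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2du;
    seed = seed ^ (seed >> 15);
    return seed;
}

// C++ port of the HLSL rand(): map the 32-bit hash to [0, 1], then advance
// the seed by the constant 10 so the next call yields a different number.
// (Float rounding of values near 2^32 can make a sample touch exactly 1.0.)
float rand_float(uint32_t& seed)
{
    float r = wang_hash(seed) * (1.0f / 4294967296.0f);
    seed += 10u;
    return r;
}
```

Seeding per pixel follows the formula above, e.g. `(py * viewportWidth + px) * 100` for pixel (px, py).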
Scene Storage
To trace rays on the GPU, I upload all the scene data (e.g. triangles, materials, lights...) into several structured buffers and a constant buffer. Due to my laziness and the announcement of DirectX Raytracing, I did not implement any ray tracing acceleration structure like a BVH. I just store the triangles in one big buffer.

Tracing Rays
By using the rendering equation derived above, we can start writing code to shoot rays from the camera. During each frame, for each pixel, we trace one ray and reflect it multiple times to compute the rendering equation. Then we can additively blend the path traced result over multiple frames to get a progressive path tracer, using a blend factor of 1/N for the N-th frame:

resultN = (1 − 1/N) · resultN−1 + (1/N) · frameN
To generate the random reflected direction at any surface hit, we simply sample a direction uniformly on the hemisphere around the surface normal. With u1, u2 uniform in [0, 1):

ω = ( cos(2πu2)·√(1 − u1²), sin(2πu2)·√(1 − u1²), u1 ),  p(ω) = 1/(2π)
Here is the result of the path tracer using the uniform random direction and an emissive light material. The result is quite noisy:

Uniform implicit light sampling, 64 sample per pixel

To reduce noise, we can sample the randomly reflected ray with a cosine weighting that matches the Lambert diffuse surface:

Cos weighted implicit light sampling, 64 sample per pixel
The result is still a bit noisy. Because the light source in our scene is not very large, the probability of a randomly reflected ray hitting the light source is quite low. To improve this, we can explicitly sample the light source for every ray that hits a surface.
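The two hemisphere sampling strategies above can be sketched in C++ as follows (local-space +z is the surface normal; this is a generic textbook formulation, not necessarily the demo's exact code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Uniform hemisphere sample around local +z: pdf = 1 / (2*pi).
Vec3 sampleHemisphereUniform(float u1, float u2)
{
    float r = std::sqrt(std::fmax(0.0f, 1.0f - u1 * u1));
    float phi = 2.0f * 3.14159265f * u2;
    return { r * std::cos(phi), r * std::sin(phi), u1 };
}

// Cosine-weighted hemisphere sample around local +z: pdf = cos(theta) / pi,
// so directions near the normal (large cosine term) are sampled more often.
Vec3 sampleHemisphereCosine(float u1, float u2)
{
    float r = std::sqrt(u1);
    float phi = 2.0f * 3.14159265f * u2;
    return { r * std::cos(phi), r * std::sin(phi),
             std::sqrt(std::fmax(0.0f, 1.0f - u1)) };
}
```

With the cosine-weighted pdf, the (n · ω)/p(ω) factor in the estimator reduces to a constant π, which is why the cosine-weighted images are cleaner than the uniform ones.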

To sample a rectangular light source, we can pick a point uniformly over its surface area, and the corresponding probability density is:
1 / (area of light)
Since our light sampling is over the area domain instead of the direction domain as stated in the equation above, the rendering equation needs to be multiplied by the Jacobian that relates solid angle to area, i.e.:

dωi = (cos θL / d²) dA

where θL is the angle between the light's normal and the direction towards the shaded point, and d is the distance between the light sample and the surface.
With the same number of sample per pixel, the result is much less noisy:

Uniform explicit light sampling, 64 sample per pixel
Cos weighted explicit light sampling, 64 sample per pixel

Simple de-noise

As we have seen above, the path traced result is a bit noisy even with 64 samples per pixel. The result is even worse for the first frame:

first frame path traced result
There are some very bright dots, which look bad during camera motion. So I added a simple de-noise pass, which just blurs over many pixels that lie on the same surface (it really needs a lot of pixels to make the result look good, which costs some performance...).

Blurred first frame path traced result
To identify which surface a pixel corresponds to, we store this data in the alpha channel of the path tracing texture with the following formula:
dot(surface_normal, float3(1, 10, 100)) + (mesh_idx + 1) * 1000
This works because the scene contains only a small number of meshes, and the normal is constant across each flat surface in this simple scene.

Random Notes...
During the implementation, I encountered various bugs/artifacts which I think are interesting.

The first one is about the simple de-noise pass. It may bleed the light source color to neighbor pixels far away, even though we have per pixel mesh index data.


This is because we only store a single mesh index per pixel, but we jitter the ray shot from the camera within a single pixel each frame, so some of the light color gets blended onto the edge of the light geometry. It gets very noticeable because the light source has a much higher radiance than the light reflected off the ceiling geometry.

To fix this, I simply do not jitter the ray when tracing a direct hit of the light geometry from the camera, so this fix can only apply to explicit light sampling.



The second one is about quantization when using a 16-bit floating point texture. The path tracing texture may sometimes show quantized results after several hundred frames of additive blending, when the single sample per pixel path traced result is very noisy.

Quantized implicit light sampling
Path traced result in first frame
simple de-noised first frame result
To work around this, a 32-bit floating point texture needs to be used, but this may have a performance impact (especially for my simple de-noise pass...).



The last one is the bright fireflies artifact when using a very large light source (as big as the ceiling). This may sound counter-intuitive, and the implicit light path traced result (i.e. not sampling the light source directly) does not have those fireflies...

Explicit light sample result
Implicit light sample result
But it turns out this artifact is not related to the size of the light source; it is caused by the light being too close to the reflecting geometry. To visualize it, we may look at how the light gets bounced:

path trace depth = 1
path trace depth = 2

The fireflies start to appear in the first bounce, located near the light source, and then get propagated along the reflected light rays. Those large values are produced by the denominator of the explicit light sampling Jacobian: the squared distance between the light and the surface.

After a brief search on the internet, the fix is to either implement radiance clamping or bi-directional path tracing, or greatly increase the sample count. Here is the result with over 75000 samples per pixel; it still contains some fireflies...


Conclusion
In this post, we discussed the steps to implement a simple GPU path tracer. The most basic path tracer simply shoots a large number of rays per pixel and reflects each ray multiple times until it hits a light source. With explicit light sampling, we can greatly reduce noise.

This path tracer is just my personal toy project, which only has Lambert diffuse reflection with a single light. As it is my first time using the D3D12 API, the code is not well optimized, so the source code is for reference only; if you find any bugs, please let me know. Thank you.

Reference
[1] Physically Based Rendering http://www.pbrt.org/
[2] https://www.slideshare.net/jeannekamikaze/introduction-to-path-tracing
[3] https://www.slideshare.net/takahiroharada/introduction-to-bidirectional-path-tracing-bdpt-implementation-using-opencl-cedec-2015
[4] http://reedbeta.com/blog/quick-and-easy-gpu-random-numbers-in-d3d11/