D3D12 Descriptor Heap Management

Introduction
Continuing from the last post, where we described how root signatures are managed to bind resources: the root signature is just one part of resource binding, as we also need descriptors to bind resources. Descriptors are small blocks of memory describing an object (CBV/SRV/UAV/sampler) to the GPU. They are stored in descriptor heaps, which may be shader visible or non-shader visible. In this post, I will talk about how descriptors are managed for resource binding in my toy graphics engine.

Non shader visible heap
Let's start with the non-shader-visible heap management. We can treat a descriptor as a pointer to a GPU resource (e.g. a texture). A descriptor heap is a piece of memory used for storing descriptors, and the size of a single descriptor can be queried by ID3D12Device::GetDescriptorHandleIncrementSize(). So we treat a descriptor heap as an object pool, where every descriptor within the same heap can be referenced by an index.
Non shader visible descriptor heap containing N descriptors
Since we don't know how many descriptors are needed in advance and we may have many non-shader-visible heaps, a non-shader-visible heap manager is created for allocating descriptors from the descriptor heap(s). This manager contains at least 1 descriptor heap. When a descriptor allocation request is made to the manager, it first looks for a free descriptor in the existing descriptor heap(s); if none is found, a new descriptor heap is created to handle the request.
Descriptor heap manager handles descriptor allocation requests, creating a descriptor heap if necessary
So within the graphics engine, we use a "non shader visible descriptor handle" to reference a D3D12 descriptor, which stores the heap index and descriptor index with respect to a descriptor heap manager. All the textures created in the engine have a "non shader visible descriptor handle" for resource binding (more on this later).
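Below is a minimal sketch of this manager and handle, assuming a free-list for recycled descriptors; all type and member names are illustrative rather than the engine's actual code:

#include <cstdint>
#include <vector>
#include <d3d12.h>
#include <wrl/client.h>

// References a descriptor by (heap index, descriptor index) instead of by address.
struct NonShaderVisibleHandle
{
    uint32_t heapIdx;
    uint32_t descriptorIdx;
};

class NonShaderVisibleHeapMgr
{
public:
    NonShaderVisibleHeapMgr(ID3D12Device* device, D3D12_DESCRIPTOR_HEAP_TYPE type, uint32_t descriptorsPerHeap)
        : m_device(device), m_type(type), m_descriptorsPerHeap(descriptorsPerHeap)
    {
        m_descriptorSize = device->GetDescriptorHandleIncrementSize(type);
    }

    NonShaderVisibleHandle Allocate()
    {
        // Reuse a freed slot if possible.
        if (!m_freeList.empty())
        {
            NonShaderVisibleHandle h = m_freeList.back();
            m_freeList.pop_back();
            return h;
        }
        // Create a new heap when none exists or the current one is exhausted.
        if (m_heaps.empty() || m_nextDescriptor == m_descriptorsPerHeap)
        {
            D3D12_DESCRIPTOR_HEAP_DESC desc = {};
            desc.Type = m_type;
            desc.NumDescriptors = m_descriptorsPerHeap;
            desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE; // non-shader-visible
            Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> heap;
            m_device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
            m_heaps.push_back(heap);
            m_nextDescriptor = 0;
        }
        return { static_cast<uint32_t>(m_heaps.size() - 1), m_nextDescriptor++ };
    }

    void Free(NonShaderVisibleHandle h) { m_freeList.push_back(h); }

    // Resolve a handle to the CPU address needed by e.g. CreateShaderResourceView().
    D3D12_CPU_DESCRIPTOR_HANDLE GetCpuHandle(NonShaderVisibleHandle h) const
    {
        D3D12_CPU_DESCRIPTOR_HANDLE cpu = m_heaps[h.heapIdx]->GetCPUDescriptorHandleForHeapStart();
        cpu.ptr += static_cast<SIZE_T>(h.descriptorIdx) * m_descriptorSize;
        return cpu;
    }

private:
    ID3D12Device* m_device;
    D3D12_DESCRIPTOR_HEAP_TYPE m_type;
    uint32_t m_descriptorsPerHeap;
    uint32_t m_descriptorSize;
    uint32_t m_nextDescriptor = 0;
    std::vector<Microsoft::WRL::ComPtr<ID3D12DescriptorHeap>> m_heaps;
    std::vector<NonShaderVisibleHandle> m_freeList;
};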

Shader visible heap
Next, we will talk about the shader-visible heap management. The shader-visible heap is responsible for binding the resources that get used in shaders. It is recommended to use just 1 heap across all frames so that asynchronous compute and graphics workloads can run in parallel (on NVIDIA hardware). So we create 1 large shader-visible heap at the start of the program and don't bother to resize/allocate a larger heap when the heap is full (we just assert in this case). This single large shader-visible descriptor heap is divided into 2 regions: static / dynamic.
A single large shader visible descriptor heap, divided into 2 regions
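A minimal sketch of that setup, where the heap size and the static/dynamic split are arbitrary assumptions (kept within the 1,000,000 descriptor limit of resource binding tier 1 hardware):

#include <cstdint>
#include <d3d12.h>
#include <wrl/client.h>

// One shader-visible CBV/SRV/UAV heap for the whole program lifetime.
static const uint32_t kStaticRegionSize  = 800000; // [0, kStaticRegionSize): static region
static const uint32_t kDynamicRegionSize = 200000; // the rest: dynamic ring buffer
static Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> gShaderVisibleHeap;

void CreateShaderVisibleHeap(ID3D12Device* device)
{
    D3D12_DESCRIPTOR_HEAP_DESC desc = {};
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    desc.NumDescriptors = kStaticRegionSize + kDynamicRegionSize; // created once at startup
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;       // descriptors usable by shaders
    device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&gShaderVisibleHeap));
}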

Dynamic descriptor
Dynamic descriptors are used for transient resources whose descriptor tables cannot be reused often. During resource binding (e.g. a texture), their non-shader-visible descriptors are copied to the shader-visible heap via ID3D12Device::CopyDescriptors(), where the copy destination (i.e. the dynamic shader-visible descriptors) is allocated in a ring buffer fashion. (Note that the copy operation has a restriction: the copy source must be in a non-shader-visible heap, which is why we allocate a "non shader visible descriptor handle" for every texture.)
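Here is a minimal sketch of that dynamic path, reusing gShaderVisibleHeap and the region sizes from the previous sketch; the ring-buffer bookkeeping is simplified and omits the fencing a real implementation needs before reusing wrapped entries:

#include <vector>

static uint32_t gRingHead = kStaticRegionSize; // dynamic region starts after the static one

D3D12_GPU_DESCRIPTOR_HANDLE AllocateDynamicTable(
    ID3D12Device* device,
    const D3D12_CPU_DESCRIPTOR_HANDLE* srcDescriptors, // sources in non-shader-visible heaps
    uint32_t count)
{
    // Wrap around; real code must ensure the GPU has finished with this range.
    if (gRingHead + count > kStaticRegionSize + kDynamicRegionSize)
        gRingHead = kStaticRegionSize;

    const uint32_t descriptorSize =
        device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

    D3D12_CPU_DESCRIPTOR_HANDLE dst = gShaderVisibleHeap->GetCPUDescriptorHandleForHeapStart();
    dst.ptr += static_cast<SIZE_T>(gRingHead) * descriptorSize;

    // Copy 'count' single-descriptor source ranges into one contiguous destination range.
    std::vector<UINT> srcSizes(count, 1);
    device->CopyDescriptors(1, &dst, &count, count, srcDescriptors, srcSizes.data(),
                            D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

    D3D12_GPU_DESCRIPTOR_HANDLE gpu = gShaderVisibleHeap->GetGPUDescriptorHandleForHeapStart();
    gpu.ptr += static_cast<UINT64>(gRingHead) * descriptorSize;
    gRingHead += count;
    return gpu; // bind via ID3D12GraphicsCommandList::SetGraphicsRootDescriptorTable()
}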

Static descriptor
Static descriptors are used for resources which can be grouped together into a descriptor table so that they can be reused over multiple frames. For example, the set of textures inside a material does not change very often, so those textures can be grouped into a descriptor table. I currently manage the static shader-visible descriptor heap region with a "stack" based approach: instead of a stack of individual descriptors, we have a stack of groups of descriptors, and typically 1 static descriptor group is created during level load.
Static descriptors are packed into groups during level load
Inside a group of static descriptors, the descriptors are sorted such that all constant buffer descriptors appear before texture descriptors. Null descriptors may also need to be added to respect the hardware tier restrictions. To identify a static descriptor in the shader-visible heap, we use the stack group index together with the descriptor index within the group.
Descriptors are ordered by type, with necessary padding
Each "static resource"(e.g. constant buffer/texture) will have a "static descriptor handle" beside the "non shader visible descriptor handle". We can check whether those resources are within the same descriptor table by comparing the stack group index and descriptor index to see whether they are in consecutive order. With such information, we can create a resource binding API similar to D3D11 (e.g. ID3D11DeviceContext::PSSetShaderResources() ), if all the resources in the API call are in the same descriptor table, we can use the static descriptor to bind the descriptor table directly, otherwise, we switch to use the dynamic descriptor approach described in previous section to create a continuous descriptor table. (I have also think of instead of using similar binding API as D3D11, may be I can create a so call "descriptor table" object explicitly, say during material loading and grouping material textures into a descriptor table, so that resources binding can skip the consecutive descriptor index check described above. But currently I just stick with a simple solution first...)

As mentioned before, static descriptor groups are allocated in a "stack" based fashion. But my current implementation is not strictly "last in, first out": removing a group in the middle leaves a "hole" in the static shader-visible heap region, which results in fragmentation.

Fragmented static descriptor heap region
In theory, we can defragment this heap region by moving descriptor groups into the unused space (this works because we reference descriptors inside a heap by index instead of by address directly), and during defragmentation we may temporarily switch to dynamic descriptors to avoid overwriting the static heap region while GPU commands are still using it. But I have not implemented the defragmentation yet because I only have one simple level (i.e. only 1 static descriptor group) for now...

Conclusion
In this post, I have described how descriptor heaps are managed for resource binding. To sum up, the shader-visible descriptor heap is divided into 2 regions: static/dynamic. The static region is managed in a "stack" based approach: during level loading, all the static CBV/SRV descriptors are stored within a static descriptor stack group, which is one big contiguous descriptor table. This increases the chance of reusing the descriptor table. In addition to this optional static descriptor, every resource has a non-shader-visible descriptor handle, which is used when a static descriptor table cannot be used during resource binding; it gets copied to the shader-visible heap to form a new descriptor table. With this kind of heap management, we can create a resource binding API similar to D3D11 which calls the underlying D3D12 API using descriptors.

References
[1] https://docs.microsoft.com/en-us/windows/desktop/direct3d12/resource-binding
[2] https://www.gamedev.net/forums/topic/686440-d3d12-descriptor-heap-strategies/


D3D12 Root Signature Management

Introduction
Continuing from the last post about writing my new toy D3D12 graphics engine: we have compiled some shaders and extracted some reflection data from the shader source. The next problem is to bind resources (e.g. constant buffers / textures) to the shaders. D3D12 uses root signatures together with root parameters to achieve this. In this post, I will describe how my toy engine creates root signatures automatically based on shader resource usage.

Left: new D3D12 graphics engine (with only basic diffuse material)
Right: previous D3D11 rendering (with PBR material, GI...)
Still a long way to go to catch up with the previous renderer... 

Resource binding model
In D3D12, shader resource binding relies on root parameter indices. But when iterating on shader code, we may modify some resource bindings (e.g. add a texture variable / remove a constant buffer), which may change the root signature and hence the root parameter indices. Every call like SetGraphicsRootDescriptorTable() then needs to be updated with the new root parameter index, which is tedious and error-prone... The resource binding model in D3D11 (e.g. PSSetShaderResources() / PSSetConstantBuffers()) doesn't have this problem, as the API defines a set of fixed slots to bind to. So I would prefer to work with a similar binding model in my toy engine.

So, I defined a couple of slots for resource binding as follows (which is a bit different from D3D11):
Engine_PerDraw_CBV
Engine_PerView_CBV
Engine_PerFrame_CBV
Engine_PerDraw_SRV_VS_ONLY
Engine_PerDraw_SRV_PS_ONLY
Engine_PerDraw_SRV_ALL
Engine_PerView_SRV_VS_ONLY
Engine_PerView_SRV_PS_ONLY
Engine_PerView_SRV_ALL
Engine_PerFrame_SRV_VS_ONLY
Engine_PerFrame_SRV_PS_ONLY
Engine_PerFrame_SRV_ALL
Shader_PerDraw_CBV
Shader_PerDraw_SRV_VS_ONLY
Shader_PerDraw_SRV_PS_ONLY
Shader_PerDraw_SRV_ALL
Shader_PerDraw_UAV
Instead of having fixed slots per shader stage as in D3D11, my toy engine's fixed slots can be summarized into 3 categories:
Resource binding slot categories
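In code, these fixed slots can be modeled as a simple enum that the later sketches index by; abbreviated here to a few entries, with illustrative names:

enum class BindSlot : uint32_t
{
    Engine_PerDraw_CBV,
    Engine_PerView_CBV,
    Engine_PerFrame_CBV,
    Engine_PerDraw_SRV_VS_ONLY,
    // ... the remaining Engine/Shader slots in the order listed above ...
    Shader_PerDraw_UAV,
    Count // number of fixed slots
};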

Slot category "Resource Type"
As described by its name (CBV/SRV/UAV), each slot is used to bind the corresponding resource type: constant buffer / shader resource view / unordered access view.
The SRV type is further sub-divided into VS_ONLY / PS_ONLY / ALL sub-categories, which refer to the shader visibility. According to the NVIDIA Do's And Don'ts, limiting the shader visibility can improve performance.
For the CBV type, the shader visibility is deduced from shader reflection data during root signature and PSO creation.

Slot category "Change frequency"
Resources are encouraged to be bound based on their update frequency. So this slot category are divided into 3 types: Per Frame/ Per View / Per Draw.
For the Per Frame/View types, they will have a root parameter type as descriptor table.
While the Per Draw CBV type will have root parameter type as root descriptor.
For Per Draw SRV type, it still uses descriptor table instead of root descriptor, because for example, it is common to have only 1 constant buffer for material of a mesh while binding multiple textures for the same material. So using descriptor table instead will help to keep the size of root signature small.
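As a rough illustration of how those choices translate into root parameters; numSrvsUsed and the kPerDraw*Space register-space constants are placeholders for values deduced from shader reflection:

D3D12_DESCRIPTOR_RANGE srvRange = {};
srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
srvRange.NumDescriptors = numSrvsUsed;              // deduced from shader reflection
srvRange.BaseShaderRegister = 0;
srvRange.RegisterSpace = kPerDrawSrvSpace;          // register space of this slot
srvRange.OffsetInDescriptorsFromTableStart = 0;

D3D12_ROOT_PARAMETER perDrawSrvTable = {};          // Per Draw SRV -> descriptor table
perDrawSrvTable.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
perDrawSrvTable.DescriptorTable.NumDescriptorRanges = 1;
perDrawSrvTable.DescriptorTable.pDescriptorRanges = &srvRange;
perDrawSrvTable.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL; // e.g. the PS_ONLY slot

D3D12_ROOT_PARAMETER perDrawCbv = {};               // Per Draw CBV -> root descriptor
perDrawCbv.ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV;
perDrawCbv.Descriptor.ShaderRegister = 0;
perDrawCbv.Descriptor.RegisterSpace = kPerDrawCbvSpace;
perDrawCbv.ShaderVisibility = D3D12_SHADER_VISIBILITY_VERTEX;     // deduced from reflection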

Slot category "Usage"
This category sub-divides the slots into different usage patterns: Engine/Shader.
Engine slots are typically used for binding engine-level data like mesh transform constants, camera transforms, etc.
Shader slots are used for shader-specific data, e.g. material constants.
I just can't find an appropriate name for this category, so I simply use Engine/Shader. Maybe it would be better to call them Group 0/1/2/3... in case I have different usage patterns in the future, but I don't bother with it for now...

Shader Reflection
In the last post, I mentioned that shader reflection data is exported during shader compilation. This is important for root signature creation: from the reflection data, we know which constant buffer/texture slots get used. When creating a pipeline state object (PSO) from shaders, we can deduce all the resource slots used by the PSO (as well as the shader visibility for constant buffers) and then create an appropriate root signature with each resource slot mapped to its corresponding root parameter index (let's call this mapping data the "root signature info", sketched below).
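A minimal sketch of that mapping, reusing the BindSlot enum sketched earlier (illustrative, not the engine's actual layout):

struct RootSignatureInfo
{
    // Indexed by fixed slot; -1 means this PSO's shaders never use that slot.
    int8_t rootParamIdx[static_cast<size_t>(BindSlot::Count)];
};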

To specify the resource slot in shader code, we make use of the register space introduced in Shader Model 5.1 to define which slot a constant buffer/texture uses. For example:
#define ENGINE_PER_DRAW_SRV_ALL space5 // all shaders must have the same slot-space definition
Texture2D shadowMap : register(t0, ENGINE_PER_DRAW_SRV_ALL);
With the above information, on the CPU side we can bind a resource to a specific slot using the root parameter index stored inside the "root signature info", similar to the D3D11 API.
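For example, a minimal sketch of one such slot-based binding call (function and member names are illustrative):

void SetConstantBuffer(ID3D12GraphicsCommandList* cmdList,
                       const RootSignatureInfo& info,
                       BindSlot slot,
                       D3D12_GPU_VIRTUAL_ADDRESS cbAddress)
{
    int8_t idx = info.rootParamIdx[static_cast<size_t>(slot)];
    if (idx >= 0) // the PSO's shaders may not use this slot at all
        cmdList->SetGraphicsRootConstantBufferView(idx, cbAddress);
}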

Conclusion
In this post, we have described how root signatures can be created automatically and used behind a slot-based API. First, a root signature is created (or re-used/shared) during the creation of a pipeline state object (PSO), based on its shader reflection data. We also create a "root signature info" to store the mapping between resource slots and root parameter indices, together with the root signature and PSO. Then we can use this "root signature info" to bind resources to the shaders.

As this is my first time writing a graphics engine with D3D12, I am not sure whether this resource binding model is the best. I have also thought of other naming schemes for the resource slots: instead of naming them by PerDraw / PerView type, would it be better to name them explicitly as RootDescriptor / DescriptorTable? Maybe I will change my mind after I gain more experience in the future...

MSBuild custom build tools notes

Introduction
Recently, I have been rewriting my graphics code to use D3D12 (instead of D3D11 as in Seal Guardian), and I need a convenient way to compile shader code. While tidying up the MSBuild custom build step files from Seal Guardian for my new toy graphics engine, I regretted not writing a blog post about custom MSBuild back then; as I remember, such information was hard to find and I had to look at some of the CUDA custom build files to guess how it works. So this post is just my personal notes about custom MSBuild, and I don't guarantee all the information here is 100% correct. I have uploaded an example project to compile shader files here. Interested readers may also check out this excellent post about MSBuild written by Nathan Reed.

Custom build steps set up
MSBuild needs a .targets file to describe how the compilers (e.g. dxc/fxc used for shader compilation) are invoked. The uploaded example project has 3 main targets: DXC, JSON, BIN.

- DXC target: as its name suggests, invokes dxc.exe to compile HLSL files.
- JSON target: invokes shaderCompiler.exe, our internal tool written using Flex & Bison, which parses the shader source code and outputs meta data (like texture/constant buffer usage) for root signature management.
- BIN target: a task that depends on the DXC and JSON tasks; it invokes dataBuilder.exe, our internal tool for data serialization/deserialization, to combine the output of the DXC and JSON tasks into our binary format.

target dependency

Although MSBuild can set up target dependencies, it looks like independent targets are not executed in parallel. In Seal Guardian, compiling the surface shaders, which generate the shader permutations for lighting, resulted in a long compilation time. In the end, I had to create another exe that launches multiple threads to speed up shader compilation. Maybe I was setting up MSBuild incorrectly; if anyone knows how to parallelize it, please let me know in the comments below. Thank you!

Incremental Builds
MSBuild uses .tlog files to track file modifications and avoid unnecessary compilation (they also affect which files get deleted when cleaning the project). There are 2 tlog files (read.1.tlog and write.1.tlog): one tracks whether the source files have been modified, and the other tracks whether the output files are up to date. We can simply use the WriteLinesToFile task to record such dependencies, e.g.

<WriteLinesToFile File="$(TLogLocation)$(ProjectName).write.1.tlog" Lines="$(TLog_writelines)" />

But doing only this will make the tlog file grow larger after every compilation. So it is better to read the tlog file content into a PropertyGroup and check whether the file already contains the text we want to write, using a "Condition" attribute on the WriteLinesToFile task. For details, please take a look at the example project.

Also, as a side note, do not include the $(MSBuildProjectFile) property in the "Inputs" attribute of the "Target" element. I did this accidentally and it caused the whole project to recompile all the shaders every time a shader file was added to / removed from the project. This is unnecessary as most of the shader files are independent.

Output files
Like every Visual Studio project, our example project has Debug and Release configurations. After executing the BIN task described above, we also use a Copy task to copy the compiled shaders from the Debug/Release $(OutDir) directory to our content directory. We can also use the property function MakeRelative() to maintain the directory hierarchy in the output directory. This is another reason why I use a Copy task instead of pointing $(OutDir) at the content directory: I could not get a nested property function working inside the .props file (or maybe I did something wrong? I don't know...)...

Also, besides the output files, another note is about the output log. If we want to write something to the Visual Studio output console from a custom exe (e.g. shaderCompiler.exe/dataBuilder.exe in the example project), the text must be in a specific format like the following (I cannot find documentation for the exact format; I just guessed from similar messages emitted by Visual Studio...):

1>C:/shaderFileName.hlsl(123): error : error message

otherwise, those messages will not get displayed in the output window.
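For illustration, a tiny C++ sketch of emitting such a message from a custom tool; the exact format is a guess as noted above, and the path/line/message are placeholders:

#include <cstdio>

int main()
{
    // Visual Studio recognizes "path(line): error : message" style lines; the
    // "1>" prefix seen in the output window is added by Visual Studio itself.
    std::fprintf(stdout, "%s(%d): error : %s\n",
                 "C:/shaderFileName.hlsl", 123, "error message");
    return 1; // a non-zero exit code marks the custom build step as failed
}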


Example project
An example project is uploaded here. It compiles vertex/pixel shaders with dxc.exe and outputs JSON meta data to $(IntDir), then combines those data and writes them to $(OutDir). Finally, those files are copied to the asset directory with the corresponding relative path to the source directory. Please note that the shaderCompiler.exe used for outputting meta data is an internal tool which has some custom restrictions on the HLSL grammar for my convenience in creating root signatures. It is included just as an example to illustrate how to set up a custom MSBuild tool; feel free to replace/modify those files to suit your own needs. Thank you.

References
[1] https://docs.microsoft.com/en-us/cpp/build/understanding-custom-build-steps-and-build-events?view=vs-2019
[2] http://www.reedbeta.com/blog/custom-toolchain-with-msbuild/