Render Passes in "Seal Guardian"

Introduction
"Seal Guardian" uses a forward renderer to draw the scene. Because we need to support mobile platforms, we do not have too many effects, but the renderer still consists of a few render passes to compose an image.

Shadow Map Pass
To calculate dynamic shadows for the scene, we render the depth of the meshes from the light's point of view into a 1024x1024 shadow map.
Standard shadow map

Then we use the Exponential Shadow Map (ESM) method to blur the shadow map into a 512x512 shadow map.
ESM blurred shadow map

(Note that this pass may be skipped depending on the current performance setting.)
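As a rough sketch of how an exponential shadow map works (the sharpness constant and depth encoding below are illustrative assumptions, not the shipped values): the map stores an exponential of the occluder depth, which can be blurred freely, and the receiver test collapses to a single multiply and clamp.

```python
import math

C = 80.0  # assumed ESM sharpness constant, not the shipped value

def esm_store(occluder_depth):
    # Value written to the shadow map; this is what gets blurred into the
    # 512x512 target (blurring exp(C*z) is what makes ESM filterable).
    return math.exp(C * occluder_depth)

def esm_shadow(blurred_map_value, receiver_depth):
    # ~1.0 when the receiver is at or in front of the occluder; falls off
    # smoothly as the receiver sinks behind the blurred occluder depth.
    return min(1.0, blurred_map_value * math.exp(-C * receiver_depth))
```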

Opaque Geometry Pass
In this pass, we render the scene meshes into an RGBA8 render target. We compute all the lighting, including direct lighting, indirect lighting (light map or SH probe) and tone mapping, in this single pass. On iOS, reducing the number of render passes may give better performance, so we chose to combine all the calculations into one pass.
Tonemapped opaque scene color
Opaque geometry depth buffer

To reduce the impact of overdraw, we pre-compute a visibility set to avoid drawing occluded meshes (we may talk about this in a future post). We also want a bloom pass to enhance the bright pixels, so we compute a bloom value in this pass from the pre-tone-mapped color and store it in the alpha channel.
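A minimal sketch of how such a bloom value could be derived from the pre-tone-mapped (HDR) color; the threshold, scale and luma weights here are assumptions for illustration, not the shipped values.

```python
BLOOM_THRESHOLD = 1.0  # assumed: only HDR values above 1 contribute to bloom
BLOOM_SCALE = 0.25     # assumed normalization into the 8-bit alpha range

def bloom_intensity(hdr_rgb):
    # Rec. 709 luma of the pre-tone-mapped color (weights are an assumption).
    luma = 0.2126 * hdr_rgb[0] + 0.7152 * hdr_rgb[1] + 0.0722 * hdr_rgb[2]
    # Clamp into [0, 1] so it can live in the alpha channel of the RGBA8 target.
    return max(0.0, min(1.0, (luma - BLOOM_THRESHOLD) * BLOOM_SCALE))
```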

Transparent Geometry Pass
In this pass, we render transparent meshes and particles. For performance reasons, we blend the post-tone-mapped color with the opaque geometry. Also, because we store the bloom intensity in the alpha channel, we want the transparent geometry to affect the bloom result as well. We solve this with two different methods, depending on which platform the game runs on.

On iOS, we render the meshes directly into the render target of the opaque geometry pass, with a shader similar to the opaque one that outputs the tone-mapped scene color in RGB and the bloom intensity in A. To blend those four values over the opaque values, we use the EXT_shader_framebuffer_fetch OpenGL extension, so the blending happens at the end of the transparent geometry shader. We chose the simple blending formula below, using the opacity of the mesh (to keep it consistent with the other platforms):
RGB = mesh color * mesh alpha + dest color * (1 - mesh alpha)
A = mesh bloom intensity * mesh alpha + dest bloom intensity * (1 - mesh alpha)
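The formula above can be transcribed directly; this is a plain Python sketch of the blend that would run at the end of the transparent shader once the destination values are available via framebuffer fetch.

```python
def blend_over(mesh_rgb, mesh_bloom, mesh_alpha, dest_rgb, dest_bloom):
    # Standard "over" blending of the mesh color and bloom intensity against
    # the destination (the opaque pass output), weighted by mesh opacity.
    a = mesh_alpha
    rgb = tuple(s * a + d * (1.0 - a) for s, d in zip(mesh_rgb, dest_rgb))
    bloom = mesh_bloom * a + dest_bloom * (1.0 - a)
    return rgb, bloom
```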
On Windows and Mac, EXT_shader_framebuffer_fetch does not exist, so we render all the transparent meshes into a separate RGBA8 render target. We compute the scene color and bloom intensity as in the opaque pass, but before writing to the render target, we decompose the RGB scene color into luma and chroma and store the chroma values in a checkerboard pattern, similar to the Crysis 3 presentation[1] (slide 104). This lets us store luma and chroma in the RG channels, bloom intensity in the B channel and the opacity of the mesh in the A channel of the render target.
Transparent render target on Windows platform
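A sketch of the checkerboard luma/chroma packing described above, assuming a YCoCg-style decomposition (an assumption; the exact luma/chroma color space the game uses is not stated):

```python
def rgb_to_ycocg(r, g, b):
    # YCoCg decomposition: one luma and two chroma components.
    y  = 0.25 * r + 0.5 * g + 0.25 * b
    co = 0.5 * r - 0.5 * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    return y, co, cg

def pack_transparent_texel(x, y_px, rgb, bloom, opacity):
    # Each pixel stores luma in R and ONE of the two chroma components in G,
    # alternating per pixel in a checkerboard; B holds bloom, A holds opacity.
    y, co, cg = rgb_to_ycocg(*rgb)
    chroma = co if (x + y_px) % 2 == 0 else cg
    return (y, chroma + 0.5, bloom, opacity)  # chroma biased into [0, 1]
```

A full-screen resolve pass would later reconstruct the missing chroma component of each pixel from its neighbours before blending over the opaque target.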

Finally, we can blend this transparent texture over the render target of the opaque geometry pass.
Composed opaque and transparent geometry

Post Process Pass
After the geometry passes, we can blend in the bloom. We blur the bright pixels over several passes and additively blend the result over the previous render pass output to enhance the bright areas.
Blurred bright pixels
Additive blended bloom texture with scene color

Then we compute a simplified (but not very accurate, due to the lack of a velocity buffer) temporal anti-aliasing pass, using the color and depth buffers of the current frame and the previous two frames. One thing we did not mention is that while rendering the opaque and transparent meshes, we jitter the camera projection by half a pixel, alternating between odd and even frames, similar to the figure below, so that we have sub-pixel information for anti-aliasing.
Temporal AA jitter pattern
Temporal anti-aliased image
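The alternating jitter can be sketched as follows; the exact offsets are assumptions, only the half-pixel separation between odd and even frames matters. The offsets are in pixels and would be scaled into clip space (2/width, 2/height) before being added to the projection matrix.

```python
def jitter_offset(frame_index):
    # Alternate between two sub-pixel offsets exactly half a pixel apart.
    # The specific pair (+/- a quarter pixel on each axis) is an assumption.
    return (0.25, 0.25) if frame_index % 2 == 0 else (-0.25, -0.25)
```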

Conclusion
In this post, we broke down the render passes in "Seal Guardian", which consist mainly of four parts: shadow map, opaque geometry, transparent geometry and post-process passes. By using fewer render passes, we can achieve a constant 60 FPS in most cases (if the target frame rate is not met, we may skip some passes, such as temporal AA and shadows).

Lastly, "Seal Guardian" has already been released on Steam / Mac App Store / iOS App Store. If you want to support us in developing games with custom tech, buying a copy of the game on any platform will help. Thank you very much.

References
[1] The Art and Technology behind Crysis 3 http://www.crytek.com/download/fmx2013_c3_art_tech_donzallaz_sousa.pdf

Shadow in "Seal Guardian"

Introduction
"Seal Guardian" uses a mix of static and dynamic shadow systems to support long-range shadows covering the whole level. It only uses a single directional light for the whole level, so part of the shadow information can be pre-computed. The system mainly consists of three parts: baked static shadow on static meshes, stored along with the light map; baked static shadow for dynamic objects, stored along with the irradiance volume; and dynamic shadow with optional ESM soft shadow.

Static shadow for static objects
During the baking process of the light map, we also compute static shadow information. We first render a shadow map for the whole level into a big render target (e.g. 8192x8192); then, for each texel of the light map, we compare its world position against the shadow map to check whether that texel is in shadow. But since we are using a 1024x1024 light map for the whole scene, storing the shadow term directly would not have enough resolution. So we use a distance field representation[1] to reduce the storage size, similar to the UDK[2]. To bake the distance field representation of the shadow term, instead of comparing a single depth value at the texel's world position as before, we compare several values within a 0.5m x 0.5m grid, oriented along the normal at that position, similar to the figure below:
Blue dots indicate the positions for sampling shadow map
to compute distance field value for the texel at red dot position.
(The grid is perpendicular to the red vertex normal of the texel.)

By doing this, we get the shadow information around the baked texel to compute the distance field. We chose this method over computing the distance field from a large baked shadow texture because we want the shadow distance field to be computed consistently in world space, regardless of the mesh UV layout, and it also avoids UV seams. The method may cause problems for concave meshes, but so far, for all levels in "Seal Guardian", it has not been a big issue.
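A hedged sketch of baking one distance-field texel from the grid samples described above; the sample count and the sign convention are assumptions, and `in_shadow_at` stands in for the shadow-map comparison at a world-space offset within the grid.

```python
import math

GRID_HALF = 0.25  # half of the 0.5m x 0.5m grid
SAMPLES = 8       # assumed samples per grid axis

def bake_distance_field_texel(in_shadow_at):
    # in_shadow_at(dx, dy) -> bool: shadow-map test at a grid offset (meters),
    # with the grid oriented along the texel's normal.
    center = in_shadow_at(0.0, 0.0)
    nearest = GRID_HALF * math.sqrt(2.0)  # farthest possible grid corner
    step = 2.0 * GRID_HALF / SAMPLES
    for i in range(SAMPLES):
        for j in range(SAMPLES):
            dx = -GRID_HALF + (i + 0.5) * step
            dy = -GRID_HALF + (j + 0.5) * step
            # Track the closest sample whose shadow state differs from the
            # center: that distance is the (unsigned) distance to the edge.
            if in_shadow_at(dx, dy) != center:
                nearest = min(nearest, math.hypot(dx, dy))
    # Sign convention (assumed): positive when lit, negative when shadowed.
    return -nearest if center else nearest
```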
Static shadow only

Static shadow for dynamic objects
For dynamic objects to receive baked shadow, we bake shadow information and store it along with the irradiance volume. For each irradiance probe location, we compare it against the whole-scene shadow map to get a binary shadow value. During runtime, we interpolate these binary shadow values using the position of the dynamic object and the probe locations to get a smooth transition of the shadow value, just like interpolating the SH coefficients of the irradiance volume.

Circled objects do not have light map UVs, so they are treated the same as dynamic objects and shadowed with the shadow values stored along with the irradiance volume
Each small sphere is a sampling location for storing the SH coefficients and shadow value of the irradiance for dynamic objects.
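The runtime lookup works like any irradiance-volume interpolation; a trilinear blend over the eight probes surrounding the object (an assumption about the volume layout, since the probe arrangement is not detailed) might look like:

```python
def interpolate_probe_shadow(corner_values, fx, fy, fz):
    # corner_values: 8 binary (0/1) shadow values at the cell corners,
    # ordered with x varying fastest, then y, then z.
    # fx, fy, fz: the object's fractional position inside the cell, in [0, 1].
    def lerp(a, b, t):
        return a + (b - a) * t
    c = corner_values
    x00 = lerp(c[0], c[1], fx)
    x10 = lerp(c[2], c[3], fx)
    x01 = lerp(c[4], c[5], fx)
    x11 = lerp(c[6], c[7], fx)
    y0 = lerp(x00, x10, fy)
    y1 = lerp(x01, x11, fy)
    return lerp(y0, y1, fz)  # smooth shadow value in [0, 1]
```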

Dynamic Shadow
We use the standard shadow mapping algorithm with exponential shadow maps (ESM)[3] to support dynamic shadow in "Seal Guardian". However, because we need to support a variety of hardware (from iOS and Mac to PC) and minimise code complexity, we chose not to use cascaded shadow maps. Instead, we use a single shadow map to support dynamic shadow over a very short distance (e.g. 30m-60m) and rely on baked shadow to cover the rest of the scene.
Dynamic shadow mixed with static shadow
Dynamic shadow only

Shadow Quality Settings
With the above systems, we can make a few shadow quality settings:
  1. mix of static shadow with dynamic ESM shadow
  2. mix of static shadow with dynamic hard shadow
  3. static shadow only
On iOS, we choose the shadow quality depending on the device capability. Besides, as we are using a forward renderer, objects outside the dynamic shadow distance can be drawn with the static-shadow-only shader to save a bit of performance.
Soft Shadow
Hard Shadow
No Shadow

Conclusion
We have briefly described the shadow system in "Seal Guardian", which uses a distance field shadow map for static mesh shadows, interpolated static shadow values for dynamic objects and ESM dynamic shadow over a short distance. With these systems, a few shadow quality settings can be generated with very little coding effort.

Lastly, if you are interested in "Seal Guardian", feel free to check it out; its Steam store page is live now. It will be released on 8th Dec, 2017 on iOS/Mac/PC. Thank you.

References
[1] http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
[2] https://docs.unrealengine.com/udk/Three/DistanceFieldShadows.html
[3] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.146.177&rep=rep1&type=pdf



Light Map in "Seal Guardian"

Introduction
Light mapping is a common technique used in games for storing lighting data. "Seal Guardian" uses light maps because of their low runtime cost, in order to support a large variety of hardware from iOS and Mac to PC. There are many methods to bake a light map, such as photon mapping and radiosity. Our baking method is similar to radiosity hemicubes[1], but we render a full cube map for each light map texel to store the incoming lighting data instead.

Scene with light map
Scene without light map


Light Map Atlas
In each level, the light map is built for all static meshes with a second, unique UV set. We gather all those static meshes and pack them into a large light map atlas using this method[2]; other methods could be chosen, we just picked a simple one.

Packing a single large light map atlas for all static mesh in the scene
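A minimal sketch of the binary-tree rectangle packing from [2]: each node of the tree either holds a chart or splits the remaining free space into two children, so inserting a chart is a simple recursive descent.

```python
class Node:
    """One rectangle of atlas space; either a leaf or split into two."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.used = False
        self.children = None

    def insert(self, w, h):
        # Returns the (x, y) placement of a w x h chart, or None if it
        # does not fit anywhere under this node.
        if self.children is not None:
            return self.children[0].insert(w, h) or self.children[1].insert(w, h)
        if self.used or w > self.w or h > self.h:
            return None
        if w == self.w and h == self.h:
            self.used = True
            return (self.x, self.y)
        # Split the free space along the longer leftover dimension; the
        # first child will then fit the chart exactly on the next recursion.
        if self.w - w > self.h - h:
            self.children = (Node(self.x, self.y, w, self.h),
                             Node(self.x + w, self.y, self.w - w, self.h))
        else:
            self.children = (Node(self.x, self.y, self.w, h),
                             Node(self.x, self.y + h, self.w, self.h - h))
        return self.children[0].insert(w, h)
```

Sorting the charts by size before insertion usually improves the packing ratio, as noted in [2].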

Compute Light Map Texel Position
Then we render all the meshes into an RGBA32Float world position render target using the light map atlas layout created before (with a vertex shader that transforms each vertex from its 3D world position to its unique 2D light map UV). We then read back the render target to store all the written texels, which correspond to the world positions of the light map texels. Those positions will be used for rendering the cube maps for radiosity.

Each square represents a single light map texel;
we read back those texels' world space positions to render cube maps for radiosity

Radiosity Baking
As mentioned before, we use a method similar to hemicubes, but render a full cube map instead. So we render a cube map at each light map texel with all the post-processing effects and tone mapping off, storing only the lighting data. Because our light map is intended to store the incoming indirect static lighting for each texel, we convert the incoming lighting cube map rendered at each texel to 2nd-order spherical harmonics coefficients (i.e. 4 coefficients for each of the RGB channels); the conversion method can be found in "Stupid Spherical Harmonics (SH) Tricks"[3]. So we need one RGBA32Float (or RGBA16Float) cube map and three temporary RGBA32Float (or RGBA16Float) textures for each radiosity iteration.
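The cube-map-to-SH conversion amounts to a weighted sum over all texels. A sketch using the real SH basis constants from [3], where each sample carries its direction, color and solid-angle weight (how the cube map texels and their solid angles are enumerated is left abstract here):

```python
def project_to_sh4(samples):
    # samples: iterable of ((dx, dy, dz), (r, g, b), solid_angle_weight),
    # one per cube map texel, with unit-length directions.
    coeffs = [[0.0, 0.0, 0.0] for _ in range(4)]
    for (dx, dy, dz), rgb, weight in samples:
        # First 4 real SH basis functions evaluated in the sample direction.
        basis = (0.282095,           # Y00  (constant band)
                 0.488603 * dy,      # Y1-1 (linear bands)
                 0.488603 * dz,      # Y10
                 0.488603 * dx)      # Y11
        for i in range(4):
            for c in range(3):
                coeffs[i][c] += rgb[c] * basis[i] * weight
    return coeffs  # 4 SH coefficients per RGB channel
```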

No light map, direct lighting and emissive materials only 
Lighting without albedo texture

Radiosity pass 1
In the first pass, we render all the meshes without analytical light sources (e.g. the directional light) into the cube map. Only emissive materials, such as the sky and static lights placed in the scene, get rendered, injecting the initial lighting into the radiosity iterations. We support sphere- and box-shaped static lights, which get rendered into the cube map just like an emissive mesh. Once the cube map render is completed, we convert the cube map to SH coefficients and store the values. After all the texels are rendered, we have an incoming-lighting light map from emissive meshes and static light sources ready for the next pass.
Light map baked with the emissive sky material
Lighting with light map using the emissive sky

Radiosity pass 2
In the second pass, we render all the meshes with only the analytical light sources and the SH light map from the previous pass into a new cube map to calculate the first-bounce incoming lighting, then convert the cube map to SH coefficients. After all the texels are rendered and converted to an SH light map, we sum this SH light map with the one from the previous pass, accumulating the lighting of passes 1 and 2 into another 3 accumulated SH light maps for our final storage (the accumulated light maps are not used in the radiosity iterations, only for the final radiosity output).
Light map baked with direct lighting and emissive material
Lighting with light map using direct lighting and emissive sky

Radiosity pass >= 3
For the subsequent passes, we use the SH light map from the previous iteration to render the cube maps, and repeat the conversion-to-SH and SH-accumulation steps to get the incoming indirect lighting for each light map texel.

Final baked result, showing both direct and indirect lighting
Lighting using light map, without albedo texture 

Storage Format
To store the light map data for runtime and reduce memory usage (3 SH light maps in float format, i.e. 12 values per texel, is too much data to store...), we decompose the incoming lighting color data into luma and chroma. We only store the luma data in SH format, and compute an average chroma value by integrating the SH RGB incoming lighting data against an SH cosine transfer function along the static mesh normal direction; this gives the reflected Lambertian lighting, and we use that value to compute the chroma. By doing this, we preserve the directional variation of indirect lighting while keeping an average color of the incoming lighting, and reduce the light map storage to 6 values per texel. To further reduce memory usage, we clamp the incoming SH luma values to a predefined range so that we can store them in 8-bit textures. However, using compression like DXT results in artifacts, so we just store the light map data in 2 uncompressed RGBA8 textures.
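A sketch of this storage step under some stated assumptions: the clamp range, luma weights and chroma encoding below are illustrative, not the shipped values, and the cosine-lobe constants are the standard ones from [3].

```python
LUMA_RANGE = 4.0  # assumed clamp range before 8-bit quantization

def luma(rgb):
    # Rec. 709 luma weights (an assumption; the exact weights are not stated).
    return 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]

def sh4_cosine_lobe(normal):
    # SH4 basis convolved with the cosine transfer function, evaluated along
    # the normal (constants from "Stupid SH Tricks"[3]).
    nx, ny, nz = normal
    return (0.886227, 1.023328 * ny, 1.023328 * nz, 1.023328 * nx)

def quantize(v):
    # Map [-LUMA_RANGE, LUMA_RANGE] to [0, 255]; linear SH terms can be negative.
    return min(255, max(0, round((v / LUMA_RANGE) * 127.5 + 127.5)))

def store_texel(sh_rgb, normal):
    # sh_rgb: 4 SH coefficients, each an RGB triple, as baked above.
    lobe = sh4_cosine_lobe(normal)
    # Reflected Lambertian color along the normal, per channel.
    reflected = [sum(sh_rgb[i][c] * lobe[i] for i in range(4)) for c in range(3)]
    total = sum(reflected) + 1e-8
    # Average chroma: normalized red/green share of the reflected color.
    chroma = (reflected[0] / total, reflected[1] / total)
    # Luma-only SH, clamped and quantized for the 8-bit textures.
    luma_sh = [quantize(luma(sh_rgb[i])) for i in range(4)]
    return luma_sh, chroma
```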

Final light map used in run-time, storing SH luma and average chroma

Conclusion
In this post, we have briefly outlined how the light maps are created in "Seal Guardian". The method is based on a modified version of radiosity hemicubes, using SH as an intermediate representation for baking and to reduce the storage size (by splitting the lighting data into luma and chroma). We skipped some baking details, like padding the lighting data around each UV shell in each radiosity iteration to avoid light leaking from empty light map texels. Also, "Seal Guardian" is rendered using PBR, which means we have metallic materials, and those do not work well with radiosity. Instead of converting metallic materials to diffuse ones, we pre-filter all the environment probes in each radiosity pass to get the lighting for metallic materials. We would also like to improve the light map baking in the future: reducing the baking time, fixing the compression problem (we may try BC6H, but would need to find another method for iOS compression...), or using a smaller texture size for the chroma light map than for the luma SH light map...

Lastly, if you are interested in "Seal Guardian", feel free to check it out; its Steam store page is live now. It will be released on 8th Dec, 2017 on iOS/Mac/PC. Thank you.

The yellow light bounce on the floor comes from the yellow metallic wall, by pre-filtering the environment map in each radiosity pass
Showing only the indirect lighting



References

[1] https://www.siggraph.org/education/materials/HyperGraph/radiosity/overview_2.htm
[2] http://blackpawn.com/texts/lightmaps/
[3] http://www.ppsloan.org/publications/StupidSH36.pdf