Customizing render pipelines with render graphs

Working with advanced visual effects (like SSAO) and post-processing in Crimild has always been painful. ImageEffects, introduced a while ago, were somewhat useful, but limited to whatever information the (few) available shared frame buffers contained after rendering a scene.

To make things worse, maintaining different render paths (e.g. forward, deferred, mobile) usually required a lot of duplicated logic and code, and sooner or later some of them just stopped working (at this point I still don’t know why there’s code for deferred rendering, since it has been broken for at least a year).

Enter Render Graphs…

WHAT ARE RENDER GRAPHS?

Render graphs are a tool for organizing the processes that take place when rendering a scene, as well as the resources (e.g. frame buffers) required to execute them.

It’s a relatively new rendering paradigm that enables highly modular render pipelines, which can be easily customized and extended.

WHY ARE RENDER GRAPHS HELPFUL?

First of all, they provide high modularity. Processes are connected in a graph-like structure, yet remain pretty much independent of each other. This means we can create pipelines by plugging lots of different nodes together.

Do you need a high-fidelity pipeline for AAA games? Then add some nodes for deferred lighting, SSAO, post-processing and multiple shadow casters.

Do you have to run the game on low-end hardware or a mobile phone? Use a couple of forward lighting nodes and simple shadows. Do you really need a depth pre-pass?

In addition, a render graph helps with resource management. Each render pass may produce one or more textures, but do we really need as many textures as passes? Can we reuse some of them? All of them?

Finally, technologies like Vulkan, Metal or DX12 allow us to execute multiple processes in parallel, which is amazing. But that comes at the cost of having to synchronize those processes manually. A render graph helps to identify synchronization barriers between those processes based on the resources they consume.
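
As a rough illustration of how that works, here is a minimal sketch (not Crimild code) where each pass declares, by name, the attachments it reads and writes. A topological sort over those dependencies yields a valid execution order, and every read-after-write edge marks a point where a barrier or semaphore would be needed:

```cpp
#include <functional>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical pass description: just the names of attachments read and written.
struct PassDesc {
    std::string name;
    std::set<std::string> reads;
    std::set<std::string> writes;
};

// Returns a valid execution order for an acyclic set of passes by visiting
// the writer of every attachment a pass reads before the pass itself.
std::vector<std::string> sortPasses(const std::vector<PassDesc> &passes) {
    // Map each attachment name to the pass that writes it.
    std::map<std::string, const PassDesc *> writers;
    for (auto &p : passes) {
        for (auto &w : p.writes) {
            writers[w] = &p;
        }
    }

    std::vector<std::string> order;
    std::set<std::string> visited;

    std::function<void(const PassDesc &)> visit = [&](const PassDesc &p) {
        if (!visited.insert(p.name).second) {
            return;  // already scheduled
        }
        for (auto &r : p.reads) {
            if (auto it = writers.find(r); it != writers.end()) {
                visit(*it->second);  // read-after-write: a barrier would go here
            }
        }
        order.push_back(p.name);
    };

    for (auto &p : passes) {
        visit(p);
    }
    return order;
}
```

The same read/write information also tells us when an attachment is no longer needed, which is the basis for reusing textures between passes.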

OK, BUT HOW DO THEY WORK?

Like I said above, a render graph defines a render pipeline using processes (or render passes) and resources (or attachments), each of them represented as a node in a graph. Here’s a simple render graph implementing a (simplified) deferred lighting pipeline:

The graph is composed of two types of nodes: Render Passes (circles) and Attachments (squares). Passes may read from zero, one or multiple attachments and must write to at least one attachment. Attachments are the only way to connect passes together.
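
In code, those two node types could look something like the following minimal sketch. This is just an illustration of the idea; the names and fields are assumptions, not Crimild’s actual classes:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <vector>

// An attachment is a resource (typically a texture) that passes write to
// and read from. Format, size and the GPU handle would live here.
struct Attachment {
    std::string name;
};

// A render pass reads zero or more attachments, writes at least one,
// and carries the work to execute (draw calls, compute dispatches, etc.).
struct RenderPass {
    std::string name;
    std::vector<std::shared_ptr<Attachment>> reads;
    std::vector<std::shared_ptr<Attachment>> writes;
    std::function<void()> execute;
};
```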

For example, in the image above, the Depth Pass produces two attachments: Depth and Normal. The latter is only needed for lighting accumulation, but the Depth attachment is used multiple times (by the lighting, opaque and translucent render passes).

Once lighting accumulation is complete, its result is blended together with the one produced by the opaque render pass. Then, we blend the resulting attachment with the one written by the translucent render pass to achieve the final image for the frame.
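
To make the wiring concrete, here is how that pipeline could be assembled using the hypothetical Attachment and RenderPass types sketched above (again, the names are illustrative, not the engine’s real API):

```cpp
std::vector<RenderPass> buildDeferredPipeline() {
    // Attachments (the squares in the diagram)
    auto depth       = std::make_shared<Attachment>(Attachment{ "depth" });
    auto normal      = std::make_shared<Attachment>(Attachment{ "normal" });
    auto lighting    = std::make_shared<Attachment>(Attachment{ "lighting" });
    auto opaque      = std::make_shared<Attachment>(Attachment{ "opaque" });
    auto translucent = std::make_shared<Attachment>(Attachment{ "translucent" });
    auto litOpaque   = std::make_shared<Attachment>(Attachment{ "lit opaque" });
    auto frame       = std::make_shared<Attachment>(Attachment{ "frame" });

    // Passes (the circles), connected only through the attachments they share
    return {
        { "depthPass",       {},                         { depth, normal } },
        { "lightingPass",    { depth, normal },          { lighting } },
        { "opaquePass",      { depth },                  { opaque } },
        { "translucentPass", { depth },                  { translucent } },
        { "blendLighting",   { lighting, opaque },       { litOpaque } },
        { "blendFinal",      { litOpaque, translucent }, { frame } },
    };
}
```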

The following image shows the final rendered frame (big image), as well as each of the intermediate attachments used in this pipeline. Notice that even the UI is rendered in its own texture.

Top row: depth, normal, opaque and translucent. Left column: opaque+translucent, sepia tint and UI

If you want to read more about render graphs, here are a couple of links to articles I used as reference for my own implementation:

In the coming weeks I’m going to explain how render graphs help to optimize our pipeline by reusing attachments and discarding irrelevant passes.

Enjoy your coffee!