Deferred improvements

Revisiting the deferred rendering implementation took me a little more time than expected, and it wasn't easy, but I'm very excited about the results and the flexibility that has been achieved.

In contrast with my last post, this one is gonna be all about visual improvements. So, let’s begin.

Truth be told, my original goal was to enhance just the post-processing pipeline, adding support for more than one image effect at the same time and accumulating results. But I ended up refactoring the entire deferred render pass and adding a couple of new things along the way. Because, you know, refactors are fun 🙂

As before, the Deferred render path is split into three stages: G-Buffer generation, lighting and post-processing.

The G-Buffer

The G-Buffer is composed of five render targets organized as follows:

G-Buffer organization. Floating-point buffers are used whenever possible in order to keep data precision.

Both world and view space normals are stored, each for different purposes. As a matter of fact, view space normals are generated from the geometry itself, without any bump mapping applied to them, since some post-processing effects (e.g. SSAO) achieve better results with less information.
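To make the organization above more concrete, here's a minimal sketch of how a G-Buffer like this could be set up in OpenGL with multiple render targets. The formats, the attachment order, and the choice of a renderbuffer for depth are illustrative assumptions, not the engine's exact configuration:

```cpp
#include <GL/glew.h>
#include <vector>

// Sketch: a G-Buffer with five color render targets plus a depth attachment.
// Floating-point formats are used where precision matters.
GLuint createGBuffer( int width, int height, std::vector< GLuint > &colorTargets )
{
    GLuint fbo;
    glGenFramebuffers( 1, &fbo );
    glBindFramebuffer( GL_FRAMEBUFFER, fbo );

    // Assumed layout: diffuse, world space positions, emissive,
    // world space normals and view space normals
    const GLenum internalFormats[ 5 ] = {
        GL_RGBA8,    // diffuse
        GL_RGBA16F,  // world space positions (floating point for precision)
        GL_RGBA8,    // emissive
        GL_RGBA16F,  // world space normals
        GL_RGBA16F,  // view space normals
    };

    GLenum drawBuffers[ 5 ];
    for ( int i = 0; i < 5; i++ ) {
        GLuint tex;
        glGenTextures( 1, &tex );
        glBindTexture( GL_TEXTURE_2D, tex );
        glTexImage2D( GL_TEXTURE_2D, 0, internalFormats[ i ], width, height, 0, GL_RGBA, GL_FLOAT, nullptr );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, tex, 0 );
        drawBuffers[ i ] = GL_COLOR_ATTACHMENT0 + i;
        colorTargets.push_back( tex );
    }

    // Depth goes into its own attachment
    GLuint depth;
    glGenRenderbuffers( 1, &depth );
    glBindRenderbuffer( GL_RENDERBUFFER, depth );
    glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height );
    glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth );

    // Render into all five targets at once
    glDrawBuffers( 5, drawBuffers );

    // A real implementation should verify completeness here:
    // glCheckFramebufferStatus( GL_FRAMEBUFFER ) == GL_FRAMEBUFFER_COMPLETE

    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    return fbo;
}
```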

The G-Buffer in action. Top row: depth, diffuse and world space positions. Bottom row: emissive (unused in this demo), world space normals and view space normals.

Lighting

The second step is to compute lighting and generate a colored frame. Usually, this step involves two passes: one for lighting and one for the final composition, but I'm doing both in a single pass. There's room for some improvement here, but I'll leave that to my future self.

Lighting computation, before applying post-processing effects

Lighting is computed from world space information. Shadow maps are applied here after the scene is lit.
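As a rough illustration, this is the shape a single-pass lighting shader could take, reading world space data straight from the G-Buffer. It's a sketch with made-up uniform names and a single Lambertian light, not the engine's actual shader:

```cpp
// GLSL fragment shader sketch for lighting + composition in one pass,
// embedded as a C++ raw string. All names here are hypothetical.
const char *lightingFS = R"(
    #version 330 core

    uniform sampler2D uDiffuseMap;   // from the G-Buffer
    uniform sampler2D uPositionMap;  // world space positions
    uniform sampler2D uNormalMap;    // world space normals
    uniform sampler2D uShadowMap;

    uniform vec3 uLightPosition;
    uniform vec3 uLightColor;

    in vec2 vTexCoord;
    out vec4 fragColor;

    void main()
    {
        vec3 albedo = texture( uDiffuseMap, vTexCoord ).rgb;
        vec3 P = texture( uPositionMap, vTexCoord ).xyz;
        vec3 N = normalize( texture( uNormalMap, vTexCoord ).xyz );

        // Simple Lambertian term, computed in world space
        vec3 L = normalize( uLightPosition - P );
        float NdotL = max( dot( N, L ), 0.0 );
        vec3 lit = albedo * uLightColor * NdotL;

        // Shadows are applied after the scene is lit. The actual
        // shadow map lookup is omitted here for brevity.
        float shadow = 1.0; // e.g. sampleShadow( uShadowMap, P );
        fragColor = vec4( lit * shadow, 1.0 );
    }
)";
```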

Post-Processing

Once the scene is generated, image effects are applied. A couple of auxiliary buffers are used (following the “ping-pong buffer” technique), accumulating results.

Ping-pong buffer technique. For each image effect, the source and accumulation buffers are swapped. Once all effects have been processed, the source buffer contains the final image.
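The accumulation loop itself is simple. Here's a sketch of the idea, using placeholder types rather than the engine's actual classes:

```cpp
#include <utility>
#include <vector>

// Placeholder types for illustration only
struct FrameBuffer { /* FBO handle + color texture */ };

struct ImageEffect {
    virtual ~ImageEffect( void ) = default;
    // Reads the current image from 'source' and renders the
    // processed result into 'destination'
    virtual void apply( FrameBuffer *source, FrameBuffer *destination ) = 0;
};

FrameBuffer *applyImageEffects( const std::vector< ImageEffect * > &effects,
                                FrameBuffer *source,
                                FrameBuffer *accum )
{
    for ( auto effect : effects ) {
        effect->apply( source, accum );
        // Swap source and accumulation buffers so the next effect
        // reads what the previous one just wrote
        std::swap( source, accum );
    }
    // After the final swap, 'source' holds the fully processed image
    return source;
}
```

The nice property of this scheme is that every effect always reads a fully processed image, so effects can be chained in any order without special cases.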

Concerning image effects, the new additions are Depth of Field and SSAO. There was a previous implementation of SSAO, but the new one performs a blurring pass in order to reduce noise and improve the final result.
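For reference, the blur step can be as simple as a small box filter over the raw SSAO output. A GLSL sketch, again with made-up uniform names:

```cpp
// GLSL fragment shader sketch: 4x4 box blur over the SSAO texture
// to smooth out sampling noise. Embedded as a C++ raw string.
const char *ssaoBlurFS = R"(
    #version 330 core

    uniform sampler2D uSSAOMap;

    in vec2 vTexCoord;
    out vec4 fragColor;

    void main()
    {
        vec2 texelSize = 1.0 / vec2( textureSize( uSSAOMap, 0 ) );
        float result = 0.0;
        for ( int x = -2; x < 2; x++ ) {
            for ( int y = -2; y < 2; y++ ) {
                vec2 offset = vec2( float( x ), float( y ) ) * texelSize;
                result += texture( uSSAOMap, vTexCoord + offset ).r;
            }
        }
        // Average of the 16 samples
        fragColor = vec4( vec3( result / 16.0 ), 1.0 );
    }
)";
```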

Applying Depth of Field to the scene
Rendering the scene with only the output from the SSAO effect, before applying it to the scene

Final Comments

In order to debug the entire process, I made it possible to render the results of all the passes at the same time. It is a costly operation, but extremely useful when trying to understand what’s going on. In fact, I’m planning to add this feature to other systems as well in a future iteration.
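For the G-Buffer portion, one possible way to build this kind of debug view (assuming the OpenGL setup sketched earlier) is to blit each color attachment into a cell of an on-screen grid:

```cpp
// Sketch: blit each G-Buffer color attachment into a grid cell on screen.
// Assumes the G-Buffer and the screen have the same dimensions.
void debugGBuffer( GLuint gBufferFBO, int screenW, int screenH )
{
    glBindFramebuffer( GL_READ_FRAMEBUFFER, gBufferFBO );
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 ); // default framebuffer

    const int cols = 3;
    const int rows = 2;
    const int w = screenW / cols;
    const int h = screenH / rows;

    for ( int i = 0; i < 5; i++ ) {
        // Select which attachment to read from
        glReadBuffer( GL_COLOR_ATTACHMENT0 + i );
        const int x = ( i % cols ) * w;
        const int y = ( i / cols ) * h;
        glBlitFramebuffer(
            0, 0, screenW, screenH, // source: the whole attachment
            x, y, x + w, y + h,     // destination: one grid cell
            GL_COLOR_BUFFER_BIT, GL_LINEAR );
    }

    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
}
```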

Top row: depth, diffuse, positions, shadow map (packed). Middle row: emissive (unused), world space normals (with bump mapping applied), view space normals and screen objects. Bottom row: lighting and post-processing (SSAO + DoF)

That’s it for now. I’m done with refac–

Wait…

Those shadows look awful…

Deferred(ish) Rendering

During the past few weeks I’ve been reviewing a lot of the rendering pipeline. One of the things I started working on is the new Deferred Render pass.

In short, Deferred Rendering (DR) is performed by first rendering the visible scene into a deep frame buffer (usually known as the G-Buffer), storing specific sets of data like positions, normals, colors, material properties, etc. Lighting calculations are then performed in a separate pass, computing the lighting equation only for the visible pixels. Here’s a great introduction to Deferred Rendering in general.
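In pseudo-C++, a deferred frame has roughly this shape. Every type and helper here is a stand-in for illustration, not actual engine API:

```cpp
#include <vector>

// Stand-in types; a real engine's scene graph looks nothing like this
struct Geometry {};
struct Light {};
struct Scene {
    std::vector< Geometry > geometries;
    std::vector< Light > lights;
};

void fillGBuffer( const Geometry &g ) { /* write positions, normals, colors... */ }
void shadePixels( const Light &l ) { /* fullscreen pass reading the G-Buffer */ }

void renderFrame( const Scene &scene )
{
    // Pass 1: render visible geometry into the G-Buffer,
    // storing per-pixel data instead of shading immediately
    for ( const auto &g : scene.geometries ) {
        fillGBuffer( g );
    }

    // Pass 2: compute the lighting equation only for visible pixels,
    // once per light, by reading the G-Buffer
    for ( const auto &l : scene.lights ) {
        shadePixels( l );
    }
}
```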

The main benefit of DR is that it makes it possible to use a high number of light sources, since there is no need to compute lighting for pixels that end up occluded by others. But I personally like it because it provides much more data to work with when applying screen effects like SSAO or Glow.

Since this is my first approach to Deferred Rendering, I didn't want to overcomplicate the layout of the G-Buffer:

G-Buffer layout. A lot of space is wasted, I know. But then again, this is a first approach.

The following video shows the updated Lightcycle demo using deferred rendering and a glow effect. The video also shows the output of each buffer separately, for debugging purposes.

I also gave SSAO a try, which can be seen in the next couple of images, but I'm not comfortable with the results:

With SSAO
Without SSAO

As much as I like DR, making it the default render pass seems difficult in the near future. It requires a lot of memory and it depends on the host platform supporting multiple render targets (MRT), which is not universally available. For example, OpenGL ES 3 does support MRT, but most mobile devices still ship with GLES2-class GPUs.
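Checking MRT support at runtime is straightforward on GL 3+ / GLES 3; on GLES 2 the same queries only exist through extensions such as GL_EXT_draw_buffers. A sketch:

```cpp
// Sketch: querying MRT limits at runtime (GL 3+ / GLES 3).
// On GLES 2 these enums are only available through extensions.
GLint maxDrawBuffers = 1;
glGetIntegerv( GL_MAX_DRAW_BUFFERS, &maxDrawBuffers );

GLint maxColorAttachments = 1;
glGetIntegerv( GL_MAX_COLOR_ATTACHMENTS, &maxColorAttachments );

// The G-Buffer described above needs five simultaneous color targets
const bool supportsDeferred = ( maxDrawBuffers >= 5 && maxColorAttachments >= 5 );
```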

Next steps are to compute lighting using light shapes and add some more screen effects. Stay tuned.