Deferred(ish) Rendering

During the past few weeks I’ve been reviewing a lot of the rendering pipeline. One of the things I started working on is the new Deferred Render pass.

In short, Deferred Rendering (DR) works by first rendering the visible scene into a deep frame buffer (usually known as the G-Buffer), storing per-pixel data like positions, normals, colors, material properties, etc. Lighting calculations are then performed in a separate pass, evaluating the lighting equation only for the pixels that end up visible. Here’s a great introduction to Deferred Rendering in general.
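Just to make the two passes concrete, here’s a rough sketch of what a deferred frame could look like in OpenGL. The `gBufferFBO`, `drawSceneGeometry`, `bindGBufferTextures` and `drawFullScreenQuad` names are placeholders for illustration, not actual engine code:

```cpp
#include <GL/glew.h>   // or any other loader providing the GL entry points

// Hypothetical helpers standing in for the engine's own scene traversal
// and screen-space quad rendering.
void drawSceneGeometry();
void bindGBufferTextures();
void drawFullScreenQuad();

void renderFrameDeferred( GLuint gBufferFBO, GLuint geometryProgram, GLuint lightingProgram )
{
    // Pass 1: render the visible scene into the G-Buffer,
    // writing positions, normals, colors, etc. to multiple render targets.
    glBindFramebuffer( GL_FRAMEBUFFER, gBufferFBO );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glUseProgram( geometryProgram );
    drawSceneGeometry();

    // Pass 2: evaluate the lighting equation once per visible pixel,
    // reading the G-Buffer attachments as textures on a full-screen quad.
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    glClear( GL_COLOR_BUFFER_BIT );
    glUseProgram( lightingProgram );
    bindGBufferTextures();
    drawFullScreenQuad();
}
```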

The main benefit of DR is that it makes a large number of light sources practical, since lighting is never computed for pixels that end up occluded by others. But I personally like it because it provides much more data to work with when applying screen effects like SSAO or Glow.

Since this is my first approach to Deferred Rendering, I didn’t want to over-complicate the layout of the G-Buffer:

G-Buffer Layout

A lot of space is wasted, I know. But then again, this is the first approach.
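For reference, a layout like this can be built as a framebuffer with multiple color attachments. The sketch below assumes OpenGL 3.x and picks illustrative formats (RGBA16F for positions and normals, RGBA8 for color), which may not match the exact layout shown above:

```cpp
#include <GL/glew.h>

// Minimal G-Buffer creation sketch (OpenGL 3.x). Formats are illustrative only.
GLuint createGBuffer( int width, int height, GLuint attachments[ 3 ], GLuint *depthBuffer )
{
    GLuint fbo;
    glGenFramebuffers( 1, &fbo );
    glBindFramebuffer( GL_FRAMEBUFFER, fbo );

    // One texture per G-Buffer channel: position, normal, color.
    const GLenum internalFormats[ 3 ] = { GL_RGBA16F, GL_RGBA16F, GL_RGBA8 };
    glGenTextures( 3, attachments );
    for ( int i = 0; i < 3; i++ ) {
        glBindTexture( GL_TEXTURE_2D, attachments[ i ] );
        glTexImage2D( GL_TEXTURE_2D, 0, internalFormats[ i ], width, height, 0, GL_RGBA, GL_FLOAT, nullptr );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, attachments[ i ], 0 );
    }

    // Depth attachment so the geometry pass can do regular depth testing.
    glGenRenderbuffers( 1, depthBuffer );
    glBindRenderbuffer( GL_RENDERBUFFER, *depthBuffer );
    glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height );
    glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, *depthBuffer );

    // Route fragment outputs to all three color attachments (MRT).
    const GLenum drawBuffers[ 3 ] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
    glDrawBuffers( 3, drawBuffers );

    if ( glCheckFramebufferStatus( GL_FRAMEBUFFER ) != GL_FRAMEBUFFER_COMPLETE ) {
        // handle incomplete framebuffer (e.g. unsupported format combination)
    }

    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    return fbo;
}
```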

The following video shows the updated Lightcycle demo using deferred rendering and a glow effect. The video also shows the output of each buffer separately, for debugging purposes.

I also gave SSAO a try, which can be seen in the next couple of images, but I’m not comfortable with the results.

With SSAO

Without SSAO
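SSAO quality tends to depend a lot on how the sample kernel and radius are tuned. As a reference, here’s a small sketch of the commonly used hemisphere sample kernel generation (this follows the textbook approach, not necessarily what this demo does):

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

// Generate 'count' sample offsets inside a unit hemisphere (oriented along +Z),
// biased toward the origin so nearby occluders contribute more.
std::vector< Vec3 > generateSSAOKernel( int count )
{
    std::mt19937 rng( 1234 );
    std::uniform_real_distribution< float > dist( 0.0f, 1.0f );

    std::vector< Vec3 > kernel( count );
    for ( int i = 0; i < count; i++ ) {
        Vec3 s = { dist( rng ) * 2.0f - 1.0f,   // x in [-1, 1]
                   dist( rng ) * 2.0f - 1.0f,   // y in [-1, 1]
                   dist( rng ) };               // z in [0, 1] -> hemisphere
        float len = std::sqrt( s.x * s.x + s.y * s.y + s.z * s.z );
        if ( len > 0.0f ) { s.x /= len; s.y /= len; s.z /= len; }

        // Scale samples so they cluster near the shaded point.
        float scale = ( float ) i / ( float ) count;
        scale = 0.1f + 0.9f * scale * scale;
        s.x *= scale; s.y *= scale; s.z *= scale;
        kernel[ i ] = s;
    }
    return kernel;
}
```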

As much as I like DR, making it the default render pass seems unlikely in the near future. It requires a lot of memory and it depends on the host platform supporting multiple render targets (MRT), which is not universally available. For example, OpenGL ES 3 does support MRT, but most mobile devices still ship with GLES2-class GPUs.
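One option is to probe for MRT support at startup and fall back to forward rendering when it’s missing. On a GLES2 context, MRT is only exposed through the GL_EXT_draw_buffers extension, so a rough check (standard GL calls, not engine code) could look like this:

```cpp
#include <cstring>
#include <GLES2/gl2.h>

// Returns true if the current GLES2 context exposes multiple render targets
// via the GL_EXT_draw_buffers extension (core GLES2 has no MRT support).
bool supportsMRT()
{
    const char *extensions = ( const char * ) glGetString( GL_EXTENSIONS );
    return extensions != nullptr && std::strstr( extensions, "GL_EXT_draw_buffers" ) != nullptr;
}
```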

Next steps are to compute lighting using light shapes and add some more screen effects. Stay tuned.
