Live Long and Render (I)


Fun fact: I created a draft for this article in early 2016, while I was wrapping up Metal support and shortly after Vulkan was announced. It took me three years, but here it is:

YES.

I’m finally working on a Vulkan-based rendering backend for Crimild.

This time it’s going to be much more than just making a Renderer class with Vulkan code inside, as I did with Metal some time ago. This time, though, I’m willing to go as far as I have to and build a modern rendering architecture.

It’s BIG refactor time!

Why now?

I started this year (2019, I think) talking about how render graphs, shader graphs and other rendering improvements were critical for the next major version of Crimild. But as time passed, I came to realize that they were not enough.

Despite their benefits, I was headed towards a mixed solution: neither old school nor modern, using state-of-the-art techniques but bound by OpenGL’s limitations. And while it was indeed a step forward towards newer paradigms and APIs (Vulkan, Metal, etc.), if I followed that road I was going to have to change most of it again in the near future. After all, there’s no gain in making Vulkan work like OpenGL.

Therefore, I started from scratch and decided to do it the right way from the beginning. I’m going to implement a whole new rendering system on top of Vulkan and its major concepts (render devices, swapchains, pipelines, render passes, etc.).

What about OpenGL? Well, as much as I love it, I would like to get rid of it. The only environment in which I see OpenGL as still relevant is the web (as in WebGL). Being able to publish apps on browsers (through Emscripten) is still a goal for Crimild, so I guess OpenGL is not going anywhere for now. But it’s going to change based on whatever architecture I come up with after this refactor.

What about Metal? I am getting rid of the Metal renderer. Plain and simple. There’s no point in supporting both Vulkan and Metal (at least for the moment), and the current implementation is quite limited. The good side of this is that, after having a Vulkan-based rendering system, implementing one in Metal should be straightforward. Provided I need it, of course.

What have I done so far?

I have read articles, books, even tweets. I have watched videos. I have completed the excellent Vulkan Tutorial and looked into several examples.

I recently started working on (that is, actually coding) the new rendering architecture in my spare time (I have other priorities at the moment). This is what I’ve achieved so far:

Yep, that’s just a window.

Well, it’s a bit more than that.

At the moment, I can create a Vulkan Instance and a Surface, which is no small feat in Vulkan. I’m working on a Mac (using MoltenVK) with GLFW for window management, but the new implementation is supposed to work on multiple platforms.
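For the curious, this is roughly what that bootstrap looks like. A minimal sketch using GLFW, assuming the Vulkan SDK (and MoltenVK on macOS) is installed; this is not Crimild’s actual code, and the window title and size are made up:

```cpp
// Minimal sketch: creating a Vulkan instance and a window surface with GLFW.
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>

#include <cstdint>
#include <iostream>

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API); // no OpenGL context
    GLFWwindow *window = glfwCreateWindow(800, 600, "Crimild + Vulkan", nullptr, nullptr);

    // GLFW reports the platform-specific extensions a surface requires
    // (e.g. the MoltenVK surface extension when running on macOS).
    uint32_t extensionCount = 0;
    const char **extensions = glfwGetRequiredInstanceExtensions(&extensionCount);

    VkApplicationInfo appInfo = {};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "Example";
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo = {};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;
    createInfo.enabledExtensionCount = extensionCount;
    createInfo.ppEnabledExtensionNames = extensions;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        std::cerr << "failed to create Vulkan instance\n";
        return 1;
    }

    // GLFW wraps the platform-specific vkCreate*SurfaceKHR call for us.
    VkSurfaceKHR surface = VK_NULL_HANDLE;
    if (glfwCreateWindowSurface(instance, window, nullptr, &surface) != VK_SUCCESS) {
        std::cerr << "failed to create window surface\n";
        return 1;
    }

    while (!glfwWindowShouldClose(window)) {
        glfwPollEvents(); // just a window, as promised
    }

    vkDestroySurfaceKHR(instance, surface, nullptr);
    vkDestroyInstance(instance, nullptr);
    glfwDestroyWindow(window);
    glfwTerminate();
}
```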

I made changes to other parts of Crimild too, like simulations and systems. Nothing too big, just a few tools to better prioritize systems. I am planning to change them a lot once I go full ECS in the not-so-distant future. But, one refactor at a time.

That’s it for now. I’ll try to write more posts as I make progress. But I won’t make any promises 🙂

Bye!

Customizing render pipelines with render graphs

Attempting to work with advanced visual effects (like SSAO) and post-processing in Crimild has always been very painful. ImageEffects, introduced a while ago, were somewhat useful but limited to whatever information the (few) available shared frame buffers contained after rendering a scene. 

To make things worse, maintaining different render paths (i.e. forward, deferred, mobile) usually required a lot of duplicated logic and/or code, and sooner or later some of them just stopped working (at this point I still don’t know why there’s code for deferred rendering, since it has been broken for years).

Enter Render Graphs…

WHAT ARE RENDER GRAPHS?

Render graphs are a tool for organizing processes that take place when rendering a scene, as well as the resources (i.e. frame buffers) that are required to execute them.

They’re a relatively new rendering paradigm that enables highly modular render pipelines, which can be easily customized and extended.

WHY ARE RENDER GRAPHS HELPFUL?

First of all, they provide high modularity. Processes are connected in a graph-like structure and they are pretty much independent of each other. This means that we can create pipelines by plugging lots of different nodes together.

Do you need a high-fidelity pipeline for AAA games? Then add some nodes for deferred lighting, SSAO, post-processing and multiple shadow casters.

Do you have to run the game on low-end hardware or a mobile phone? Use a couple of forward lighting nodes and simple shadows. Do you really need a depth pre-pass?

In addition, a render graph helps with resource management. Each render pass may produce one or more textures, but do we really need as many textures as passes? Can we reuse some of them? All of them?

Finally, technologies like Vulkan, Metal or DX12 allow us to execute multiple processes in parallel, which is amazing. But it comes at the cost of having to synchronize those processes manually. A render graph helps to identify synchronization barriers for those processes based on the resources they consume.
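To illustrate that last point, here’s a minimal sketch (illustrative types, not Crimild’s actual API) of how a graph can order its passes purely from the attachments they read and write; every dependency edge found this way is also a candidate spot for a barrier:

```cpp
// A minimal sketch of deriving pass order from resource usage.
// Types and names are illustrative, not Crimild's actual API.
#include <cstddef>
#include <map>
#include <set>
#include <string>
#include <vector>

struct Pass {
    std::string name;
    std::set<std::string> reads;   // attachments consumed by this pass
    std::set<std::string> writes;  // attachments produced by this pass
};

// A pass must wait for every pass that writes an attachment it reads.
// Kahn's algorithm then yields a valid execution order; each dependency
// edge is also where a barrier or semaphore belongs.
std::vector<std::string> sortPasses(const std::vector<Pass> &passes)
{
    std::map<std::string, const Pass *> writerOf;
    for (const auto &p : passes) {
        for (const auto &a : p.writes) writerOf[a] = &p;
    }

    std::map<const Pass *, std::size_t> pending; // unresolved dependencies
    std::map<const Pass *, std::vector<const Pass *>> dependents;
    for (const auto &p : passes) {
        pending[&p]; // make sure every pass has an entry
        for (const auto &a : p.reads) {
            auto it = writerOf.find(a);
            if (it != writerOf.end() && it->second != &p) {
                dependents[it->second].push_back(&p);
                ++pending[&p];
            }
        }
    }

    std::vector<const Pass *> ready;
    for (const auto &entry : pending) {
        if (entry.second == 0) ready.push_back(entry.first);
    }

    std::vector<std::string> order;
    while (!ready.empty()) {
        const Pass *p = ready.back();
        ready.pop_back();
        order.push_back(p->name);
        for (const Pass *d : dependents[p]) {
            if (--pending[d] == 0) ready.push_back(d);
        }
    }
    return order; // a cycle (invalid graph) would leave passes out
}
```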

OK, BUT HOW DO THEY WORK?

Like I said above, a render graph defines a render pipeline by using processes (or render passes) and resources (or attachments), each of them represented as a node in a graph. Here’s a simple render graph implementing a (simplified) deferred lighting pipeline:

The graph is composed of two types of nodes: Render Passes (circles) and Attachments (squares). Passes may read from zero, one or multiple attachments, and write to at least one attachment. Attachments are the only way to connect passes together.

For example, in the image above, the Depth Pass produces two attachments: Depth and Normal. The latter is only needed for lighting accumulation, but the Depth attachment is used multiple times (by the lighting, opaque and translucent render passes).

Once lighting accumulation is complete, its result is blended together with the one produced by the opaque render pass. Then, we blend the resulting attachment with the one written by the translucent render pass to achieve the final image for the frame.
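Using the same illustrative Pass shape as in the sketch above (again, not Crimild’s actual API), the whole pipeline from the figure can be declared as plain data:

```cpp
// The simplified deferred lighting pipeline from the figure, declared as
// data. Pass and attachment names follow the article; the structure is
// illustrative, not Crimild's actual API.
#include <set>
#include <string>
#include <vector>

struct Pass {
    std::string name;
    std::set<std::string> reads;
    std::set<std::string> writes;
};

int main()
{
    std::vector<Pass> pipeline = {
        // The depth pass produces both the depth and normal attachments.
        { "depthPass", {}, { "depth", "normal" } },
        // Lighting accumulation consumes both of them.
        { "lighting", { "depth", "normal" }, { "lightAccum" } },
        // Opaque and translucent passes reuse the same depth attachment.
        { "opaque", { "depth" }, { "opaqueColor" } },
        { "translucent", { "depth" }, { "translucentColor" } },
        // Blend lighting with the opaque result, then blend in the
        // translucent result to produce the final frame.
        { "blendOpaque", { "lightAccum", "opaqueColor" }, { "color" } },
        { "blendTranslucent", { "color", "translucentColor" }, { "frame" } },
    };

    // Feeding this vector to the sortPasses() sketch above yields a valid
    // execution order, e.g. depthPass, lighting, opaque, translucent,
    // blendOpaque, blendTranslucent.
}
```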

The following image shows the final rendered frame (big image), as well as each of the intermediate attachments used for this pipeline. Notice that even the UI is rendered in its own texture.

Top row: depth, normal, opaque and translucent. Left column: opaque+translucent, sepia tint and UI

If you want to read more about render graphs, here are a couple of links to articles I used as reference for my own implementation:

In the next few weeks I’m going to explain how render graphs help to optimize our pipeline by reusing attachments and discarding irrelevant passes.

Enjoy your coffee!

Color masks and occluders

One of the newest features that will be included in the next major release of Crimild (coming soon) is support for color masks and invisible occluders in our scenes.

Occluders are objects that block the visibility path, fully or partially hiding whatever is beyond them. An invisible occluder behaves in the same way: while the object itself is not drawn, it still prevents objects behind it from being rendered.

For example, in the following scene the teapot (in yellow) is an occluder. Other objects are orbiting around it and the scene is rendered normally.


Original scene with color mask enabled for all channels

By playing around with the color mask and turning it off for all channels, we can prevent the teapot itself from being drawn, yet it still blocks the objects that are passing behind it. The green plane is not affected by this behavior (that’s on purpose).
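In plain OpenGL terms, the trick boils down to rendering the occluder first with color writes disabled but depth writes enabled. This is a sketch, not Crimild’s actual API; the two draw helpers are hypothetical stand-ins for the real draw calls:

```cpp
// A sketch of the invisible-occluder trick in plain OpenGL. The two
// helpers below are hypothetical stand-ins for the scene's draw calls.
#include <GL/gl.h> // on macOS the header is <OpenGL/gl.h>

void drawOccluder(); // issues the teapot's draw calls (hypothetical)
void drawScene();    // issues the orbiting objects' draw calls (hypothetical)

void renderFrameWithInvisibleOccluder()
{
    // 1. Draw the occluder first with all color channels masked out.
    //    It writes depth only, so it leaves no visible pixels.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawOccluder();

    // 2. Restore color writes and draw the rest of the scene. Fragments
    //    behind the occluder fail the depth test and are discarded,
    //    leaving the background visible where the occluder sits.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    drawScene();
}
```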

The effect is a pretty cool one and has a lot of applications in games and simulations. It’s especially useful in augmented reality, to mix real-life objects with virtual ones.