Live Long and Render (II)

This is the second part of my dev diary about implementing Vulkan support in Crimild. Check out the first part for a brief introduction if you haven’t read it yet.

Changes, changes, changes

I’m still struggling with the class hierarchy and responsibilities. I would like to use RAII as much as possible, but I’m not sure about the API design and who’s responsible for creating new objects yet.

For example, it feels natural that the Instance (basically a wrapper for VkInstance) creates the render devices and swapchain. But, since the surface is platform dependent, it must be created somewhere else which doesn’t feel right.

On the other hand, a render device should create new resources (like images or buffers) but that also means that such resources are coupled with that particular device. What if we have more than one device?

I know, I’m overthinking it as usual but, to be honest, defining the class hierarchy has proven to be the most challenging task so far.

As a side note, I decided to use exceptions for error reporting, like when attempting to create a Vulkan object and the process fails for some reason. This simplifies the code a lot and, although there's some overhead in using exceptions, they're only used in error paths so it's not a big issue.
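The pattern is roughly the following. This is a minimal, self-contained sketch (the `Result` enum here is a stand-in for `VkResult`, and `checkResult` is a hypothetical helper, not Crimild's actual API): every `vkCreate*`-style call gets checked once, and the happy path stays branch-free.

```cpp
#include <stdexcept>
#include <string>

// Stand-in for VkResult so the sketch is self-contained; in the real code
// this would be the VkResult returned by each vkCreate* call.
enum class Result {
    SUCCESS = 0,
    ERROR_OUT_OF_DEVICE_MEMORY = -2,
};

// Throw on any failed call, tagging the error with the operation's name.
// Callers never need to inspect result codes on the happy path.
void checkResult( Result result, const std::string &what )
{
    if ( result != Result::SUCCESS ) {
        throw std::runtime_error( "Vulkan error in " + what );
    }
}
```

With a helper like this, object creation collapses to a single line per call, and any failure unwinds cleanly through the RAII wrappers.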


The process of initializing Vulkan in Crimild can be described as follows:

  1. The VulkanSystem creates a Vulkan Instance and keeps a strong reference to it that lives for the rest of the simulation
  2. The VulkanSystem creates a surface to render into. This is mostly platform dependent.
  3. The Instance creates a Render Device (see below)
  4. The Instance creates a Swapchain (see below)
  5. The Render Device creates resources (images, buffers, etc)
  6. The Swapchain requests the Render Device to create Image Views for the available Images (usually 2, to support double buffering)
  7. Dark magic goes here (not implemented yet)
  8. Render!

Please keep in mind that this is still work in progress.
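The ownership side of the steps above can be sketched as follows. All of the type and method names here are assumptions for illustration (Crimild's real classes will differ): the system keeps a strong reference to the instance for the life of the simulation, and the instance creates the device and swapchain.

```cpp
#include <memory>

// Hypothetical ownership sketch, not Crimild's actual API.
struct RenderDevice { };
struct Swapchain { };

struct Instance {
    // The Instance acts as a factory for the other top-level objects
    std::unique_ptr< RenderDevice > createRenderDevice( void )
    {
        return std::make_unique< RenderDevice >();
    }

    std::unique_ptr< Swapchain > createSwapchain( void )
    {
        return std::make_unique< Swapchain >();
    }
};

struct VulkanSystem {
    // Strong reference that lives for the rest of the simulation (step 1)
    std::shared_ptr< Instance > instance;
    std::unique_ptr< RenderDevice > device;
    std::unique_ptr< Swapchain > swapchain;

    void start( void )
    {
        instance = std::make_shared< Instance >();
        device = instance->createRenderDevice();   // step 3
        swapchain = instance->createSwapchain();   // step 4
    }
};
```

Since everything is held by smart pointers, teardown happens automatically in reverse order when the system is destroyed, which is the RAII behavior I'm after.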

Render Device

I’ve been talking about render devices but I didn’t say what they are yet. RenderDevice is a new class that handles both Vulkan’s physical and logical devices. I know that we may have more than one logical device per physical one, but I’m not seeing that as a requirement for the moment. If the time comes where I need to make that distinction, it won’t be hard to split the class in two.

The goal is for RenderDevice to replace the Renderer interface, which has become too big over the years.

I don’t have much code for the RenderDevice class at the moment. Well, there’s a lot of code, but it’s mostly for initialization. I’m expecting this class to get bigger and bigger as time passes.


Swapchain

The Swapchain is kind of a new concept that I borrowed directly from Vulkan. Its main responsibility is to handle images that need to be presented to the screen/surface.

For that reason, there are only two main functions in a Swapchain object: 1) acquire a new image for us to render to, and 2) present that image to the screen once it’s ready.

The Swapchain class is pretty much complete and I don’t think it will get much bigger than it is today.
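The two-function interface can be sketched like this. The image ring is faked with a plain index counter so the snippet is self-contained; the real class wraps `vkAcquireNextImageKHR` and `vkQueuePresentKHR`, and the names here are assumptions, not Crimild's final API.

```cpp
#include <cstdint>

// Minimal sketch of the two-function Swapchain interface.
class Swapchain {
public:
    explicit Swapchain( uint32_t imageCount ) : m_imageCount( imageCount ) { }

    // 1) Acquire the next image for us to render to. The real implementation
    // calls vkAcquireNextImageKHR and waits on the proper semaphores.
    uint32_t acquireNextImage( void ) { return m_current; }

    // 2) Present the image once it's ready, advancing the ring. The real
    // implementation calls vkQueuePresentKHR.
    void presentImage( uint32_t imageIndex )
    {
        m_current = ( imageIndex + 1 ) % m_imageCount;
    }

private:
    uint32_t m_imageCount;
    uint32_t m_current = 0;
};
```

With two images (double buffering), acquire/present simply ping-pongs between image 0 and image 1 each frame.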

Thinking out loud: Headless Vulkan

This is something that I would like to try out in future iterations. Unlike OpenGL, we can use Vulkan without having to create a visible window. This could prove useful in several scenarios, like when doing complex compute operations, image generation using procedural algorithms, computer vision… even unit tests. I do like the idea of having automatic tests for everything rendering-related that actually mean something as I do for other systems in the engine.

Again, this is not a priority right now, but I’ll definitely give it a try in the future.

Up Next!

Now that we have a window, a render device and a swapchain, I believe the next logical step is to actually render something. Therefore, I’ll be focusing on pipelines and commands next.


Live Long and Render (I)


Fun fact: I created a draft for this article in early 2016, while I was wrapping up Metal support and shortly after Vulkan was announced. It took me three years, but here it is:


I’m finally working on a Vulkan-based rendering backend for Crimild.

This time it’s going to be much more than just making a Renderer class with Vulkan code inside, as I did with Metal some time ago. This time, though, I’m willing to go as far as I have to and build a modern rendering architecture.

It’s BIG refactor time!

Why now?

I started this year (2019, I think) talking about how render graphs, shader graphs and other rendering improvements were critical for the next major version of Crimild. But as time passed, I came to realize that they were not enough.

Despite their benefits, I was headed towards a mixed solution. Neither old school nor modern. Using state-of-the-art techniques, but bound by OpenGL limitations. And, while it was indeed a step forward towards newer paradigms and APIs (Vulkan, Metal, etc.), if I followed that road I was going to have to change most of it again in the near future. After all, there’s no gain in making Vulkan work like OpenGL.

Therefore, I started from scratch and decided to do it the right way from the beginning. I’m going to implement a whole new rendering system on top of Vulkan and its major concepts (render devices, swapchains, pipelines, render passes, etc.).

What about OpenGL? Well, as much as I love it, I would like to get rid of it. The only environment in which I see OpenGL as still relevant is the web (as in WebGL). Being able to publish apps on browsers (through Emscripten) is still a goal for Crimild, so I guess OpenGL is not going anywhere for now. But it’s going to change based on whatever architecture I come up with after this refactor.

What about Metal? I am getting rid of the Metal renderer. Plain and simple. There’s no point in supporting both Vulkan and Metal (at least for the moment), and the current implementation is quite limited. The good side of this is that, after having a Vulkan-based rendering system, implementing one in Metal should be straightforward. Provided I need to, of course.

What have I done so far?

I have read articles, books, even tweets. I have watched videos. I have completed the excellent Vulkan Tutorial and looked into several examples.

I recently started working on (that is, actually coding) the new rendering architecture in my spare time (I have other priorities at the moment). This is what I’ve achieved so far:

Yeap, that’s just a window.

Well, it’s a bit more than that.

At the moment, I can create a Vulkan Instance and a Surface, which is not a small feat for Vulkan. I’m working on a Mac (using MoltenVK) with GLFW for window management, but the new implementation is supposed to work on multiple platforms.

I made changes to other parts of Crimild too, like simulations and systems. Nothing too big, just a few tools to better prioritize systems. I am planning to change them a lot once I go full ECS in the not-so-distant future. But, one refactor at a time.

That’s it for now. I’ll try to write more posts as I make progress, but I won’t make any promises 🙂


Customizing render pipelines with render graphs

Attempting to work with advanced visual effects (like SSAO) and post-processing in Crimild has always been very painful. ImageEffects, introduced a while ago, were somewhat useful but limited to whatever information the (few) available shared frame buffers contained after rendering a scene. 

To make things worse, maintaining different render paths (i.e. forward, deferred, mobile) usually required a lot of duplicated logic and/or code, and sooner or later some of them just stopped working (at this point I still don’t know why there’s code for deferred rendering, since it has been broken for years at least).

Enter Render Graphs…


Render graphs are a tool for organizing processes that take place when rendering a scene, as well as the resources (i.e. frame buffers) that are required to execute them.

It’s a relatively new rendering paradigm that achieves highly modular render pipelines which can be easily customized and extended. 

WHY ARE Render Graphs HELPFUL?

First of all, they provide high modularity. Processes are connected in a graph-like structure and they are pretty much independent of each other. This means that we can create pipelines by plugging lots of different nodes together.

Do you need a high fidelity pipeline for AAA games? Then add some nodes for deferred lighting, SSAO, post-processing and multiple shadow casters.

Do you have to run the game on low-end hardware or a mobile phone? Use a couple of forward lighting nodes and simple shadows. Do you really need a depth pre-pass?

In addition, a render graph helps with resource management. Each render pass may produce one or more textures but, do we really need as many textures as passes? Can we reuse some of them? All of them?

Finally, technologies like Vulkan, Metal or DX12 allow us to execute multiple processes in parallel, which is amazing. But it comes with the cost of having to synchronize those processes manually. A render graph helps to identify synchronization barriers for those processes based on the resources they are consuming.


Like I said above, a render graph defines a render pipeline by using processes (or render passes) and resources (or attachments), each of them represented as a node in a graph. Here’s a simple render graph implementing a (simplified) deferred lighting pipeline:

The graph is composed of two types of nodes: Render Passes (circles) and Attachments (squares). Passes may read from zero, one or multiple attachments, and write to at least one attachment. Attachments are the only way to connect passes together.

For example, in the image above, the Depth Pass will produce two attachments: Depth and Normal. The latter is only needed for lighting accumulation, but the Depth attachment is used multiple times (by the lighting, opaque and translucent render passes).

Once lighting accumulation is complete, its result is blended together with the one produced by the opaque render pass. Then, we blend the resulting attachment with the one written by the translucent render pass to achieve the final image for the frame.
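The structure described above can be sketched as a toy graph. This is not Crimild's actual API, just an illustration of the idea: passes declare which attachments they read and write, and from that alone we can derive an execution order where each pass runs after the producers of all of its inputs (the same dependency information that drives attachment reuse and synchronization).

```cpp
#include <functional>
#include <map>
#include <set>
#include <string>
#include <vector>

// Toy render graph sketch: passes connected only through attachments.
struct Pass {
    std::string name;
    std::vector< std::string > reads;  // input attachments
    std::vector< std::string > writes; // output attachments
};

// Derive an execution order: visit the producer of each input before the
// pass that consumes it (a depth-first topological sort).
std::vector< std::string > computeOrder( const std::vector< Pass > &passes )
{
    // Map each attachment to the pass that writes it
    std::map< std::string, const Pass * > producer;
    for ( const auto &p : passes ) {
        for ( const auto &a : p.writes ) {
            producer[ a ] = &p;
        }
    }

    std::vector< std::string > order;
    std::set< std::string > scheduled;
    std::function< void( const Pass & ) > visit = [ & ]( const Pass &p ) {
        if ( scheduled.count( p.name ) ) return;
        scheduled.insert( p.name );
        for ( const auto &a : p.reads ) {
            auto it = producer.find( a );
            if ( it != producer.end() ) {
                visit( *it->second ); // dependencies first
            }
        }
        order.push_back( p.name );
    };
    for ( const auto &p : passes ) {
        visit( p );
    }
    return order;
}
```

Feeding it the simplified deferred pipeline from the image (depth writes Depth and Normal, lighting reads both, opaque reads Depth and the lighting result) yields the expected depth → lighting → opaque ordering regardless of the order in which the passes were declared.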

The following image shows the final rendered frame (big image), as well as each of the intermediate attachments used in this pipeline. Notice that even the UI is rendered in its own texture.

Top row: depth, normal, opaque and translucent. Left column: opaque+translucent, sepia tint and UI

If you want to read more about render graphs, here are a couple of links to articles I used as reference for my own implementation:

In the next few weeks I’m going to explain how render graphs help us optimize the pipeline by reusing attachments and discarding irrelevant passes.

Enjoy your coffee!