Long Live And Render (VII)


Yes! Shadows are finally working (again).

A higher resolution video can be found here

This is a big achievement because it’s the first real use of multiple render passes and shared attachments.

And it makes everything look nicer.

I always said that Vulkan is difficult to work with, but I do like how easy it is to use attachments as textures. I guess I've reached the point where I'm actually seeing the benefits of this new API (other than just better performance, of course).

Regarding shadows, they’re created from a single directional light. You might have noticed that the shadows are actually incorrect, because directional lights are supposed to cast parallel shadows using an orthographic projection. I am using a perspective projection instead (shown in the little white rectangle at the bottom right corner), but just because it makes the final effect look nicer. The final implementation will have correct shadows for directional lights, of course.

Let’s talk about descriptor sets

In Vulkan, descriptors are used to bind data to each of the different shader uniforms. As in newer OpenGL versions, Vulkan lets us group multiple values into a single descriptor (a uniform buffer), reducing the number of bind function calls.
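As a sketch of what that grouping looks like on the shader side (the block and member names here are illustrative, not Crimild’s actual uniforms), a Vulkan-style GLSL uniform block packs several related values behind a single binding:

```glsl
// One uniform buffer groups several related values behind a single
// binding, instead of one bind call per value as in classic OpenGL.
layout(set = 0, binding = 0) uniform FrameData {
    mat4 view;   // camera view matrix
    mat4 proj;   // camera projection matrix
    float time;  // elapsed time, handy for animated effects
} frame;
```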

But that’s only the beginning. In Vulkan, we can also group multiple of those descriptors together into descriptor sets, and each of them can be bound with a single draw command. So, we only need to create one big set with all the descriptors required for all shaders, then bind it with a single function call and be done with it, right? Well, not really (*).

Where’s the catch, then? We do want to minimize the number of descriptor sets, of course, but as the number of sets decreases, the amount of data we need to send in each of them increases. Therefore, a single huge set leads to updating and binding the whole thing once for each object in each render pass. Not ideal.

What we actually need is to group descriptors together depending on how frequently they are updated during a frame.

For example, consider shaders requiring uniforms like the model, view and projection matrices to compute a vertex position. The last two of those matrices are only updated whenever the camera changes, which means their values remain constant for all objects in our scene. On the other hand, the model matrix only needs to be updated when a model changes its pose. If the camera changes but the object itself remains stationary, there’s no need to update the model matrix. This is especially true when rendering the scene multiple times, as when doing shadows or reflections.

So, we need two different sets, each updated at different times. The first set contains the view and projection matrices and is updated only when the camera frame changes. The second set contains just the model matrix and is updated once per object (regardless of which render passes it’s used in).
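In Vulkan-flavored GLSL, that split maps directly onto two descriptor sets. A minimal vertex shader sketch (identifiers are invented for illustration) could look like this:

```glsl
#version 450

// Set 0: updated only when the camera changes.
layout(set = 0, binding = 0) uniform CameraData {
    mat4 view;
    mat4 proj;
} camera;

// Set 1: updated once per object, whenever its pose changes.
layout(set = 1, binding = 0) uniform ObjectData {
    mat4 model;
} object;

layout(location = 0) in vec3 inPosition;

void main() {
    gl_Position = camera.proj * camera.view * object.model * vec4(inPosition, 1.0);
}
```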

In practice, shaders need much more data than just a bunch of matrices. There are colors, textures, timers, bone animation data, lighting information, etc. But we cannot create too many sets either, since each platform defines a different limit for how many descriptor sets we can bind at the same time (the Vulkan spec guarantees a minimum of four). Therefore, I’ve considered creating the following groups:

  • Render Pass specific descriptors
    These are all the descriptors that change once per render pass: things like view/projection matrices, time, camera properties (FOV, near and far planes), each of the lights in the scene, shadow maps, etc.
  • Pipeline/Shader specific Descriptors
    Values that are required for shaders to work, like noise textures, constants, etc.
  • Material specific Descriptors
    These are the values for each property in the material, like colors, textures, normal maps, light maps, ambient occlusion maps, emission color, etc.
  • Geometry Descriptors
    These are values that affect only geometries, like the model matrix, bone indices, light indices, normal maps, etc.
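Mapping those four groups onto the four guaranteed descriptor set slots might look like this on the shader side (a sketch with invented names, not the engine’s actual layout):

```glsl
// Set 0: render pass data, bound once per pass.
layout(set = 0, binding = 0) uniform RenderPassData {
    mat4 view;
    mat4 proj;
    float time;
} pass;
layout(set = 0, binding = 1) uniform sampler2D shadowMap;

// Set 1: pipeline/shader data, bound once per pipeline.
layout(set = 1, binding = 0) uniform sampler2D noiseTexture;

// Set 2: material data, bound once per material.
layout(set = 2, binding = 0) uniform MaterialData {
    vec4 diffuse;
    vec4 emission;
} material;
layout(set = 2, binding = 1) uniform sampler2D diffuseMap;

// Set 3: geometry data, bound once per object.
layout(set = 3, binding = 0) uniform GeometryData {
    mat4 model;
} geometry;
```

With this arrangement, a shadow pass can bind sets 0 and 3 and simply ignore the material slot, which is exactly the kind of mixing described below.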

Separating material and geometry descriptors is important. For example, when rendering shadows we don’t need the object’s colors, just its pose and animation data.

Most importantly, these groups can change and be mixed however we like. If we update descriptors for a render pass, the scene will be rendered in a completely different way. We can also change materials without affecting the topology of the objects.

Up next…

There are lots of Vulkan features that I haven’t even looked at yet, but there’s one in particular that I need to implement before I’m able to merge this branch into the main development one: compute operations.

I want to be able to execute compute operations on the GPU for image filtering and/or particle systems, but that requires a lot more work.

June is going to be a busy month…

(*) I actually did that a while ago when working on the Metal-based renderer. I did not really understand at the time how uniforms were supposed to be bound, so I made one big object including everything. That’s the reason why there’s only one shader in Le Voyage and no skeletal animation, basically.

I’ve been busy

Happy 2019!! First post of the year!! (I checked ;))

During the past month I’ve focused most of my efforts on the new renderer architecture, one of the major changes for Crimild 5. There are many, many things I already changed and a lot more that I want to change and upgrade in order to bring Crimild a little closer to most modern game engines.

At the core of the new rendering pipeline are both Render and Shader Graphs. Both of these tools were introduced in recent Crimild versions, but as experimental features. It’s time to make them production-ready.

Without further ado, here’s what I’ve been doing so far:

New forward render pass

I’m writing the entire forward pipeline from scratch using shader graphs and fixing existing errors in lighting calculations.

Point lights with different attenuation values
Spotlights and ambient light, working correctly this time
Directional lighting and specular mapping

Cube and Environmental Mapping

I tried implementing cube mapping years ago, but it was too hard-coded into the engine for it to actually become something useful. Now, with a new Skybox node and cube textures support, working with environmental mapping has become straightforward:

Skybox and Reflections (left bunny)


Crimild shadow mapping support was bad. Really bad. But that is about to change.

I’m implementing a new shadow pass that creates a single shadow atlas supporting multiple casters with different types and resolutions. Only directional lights can cast shadows at the moment, but expect more news in the coming weeks.

Two directional lights casting shadows at the same time!!


Last, but not least, Emscripten support has been greatly improved, with support for WebGL2.

I’m revisiting most of the demos to make them work on the browser.

That’s it for the moment.

2019 has definitely started on a high note for Crimild 🙂

Shadow mapping improvements (I)

Just a brief update to build up some expectations for the next release (whenever that happens).

I’ve been working on improving shadow mapping support in Crimild in order to make it more reliable in production environments. The previous implementation had a lot of errors and artifacts and wasn’t really usable in big open spaces due to incorrect light frustum calculations.

Here’s a quick look at the new Shadows demo:



Only directional and spot lights can cast shadows at the moment, but I’m planning on adding support for point lights shortly. I’m also planning on adding support for cascade shadow maps in a later release.

That’s it. See you later 🙂