Decisions, Decisions

In the past couple of months I’ve been cleaning up most of the code I wrote during the big rendering system refactor in order to make it production-ready. Part of that process is making some choices that are going to have a major impact on the future of the project (at least for Crimild v5.0).

Here are the biggest decisions I had to make so far:

Deferred vs Forward

For years I’ve been using a (mostly) forward approach for rendering objects that are affected by lights. That is, each object has a shader that not only calculates its color, but also computes all of the lighting equations. This is the traditional way of rendering and, as the number of lights increases, so does the rendering time. Plus, it needs to evaluate each light for each pixel, regardless of whether that pixel is actually visible or occluded by another object. Simply put, it does not support a high number of lights.
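To make the cost concrete, here’s a minimal sketch of a forward-style lighting loop. All names here are hypothetical placeholders, not actual Crimild code:

```cpp
#include <vector>

// Hypothetical forward lighting loop: every fragment evaluates every light,
// including fragments that later turn out to be hidden behind other geometry.
// Fragment, Light, Color and computeLighting() are illustrative placeholders.
Color shadeForward( const Fragment &fragment, const std::vector< Light > &lights )
{
    Color result = fragment.material.emissive;
    for ( const auto &light : lights ) {
        // Total cost grows as O( fragments x lights )
        result = result + computeLighting( fragment, light );
    }
    return result;
}
```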

I’ve always liked the deferred approach, where the rendering process is split in two: one pass renders all objects without lighting, while the lighting calculations happen in a separate pass. Deferred rendering supports a lot more lights, although it has other drawbacks, like having to render transparent objects in a separate pass and higher memory requirements. Still, it’s a lot better than what I have right now. Plus, it’s used in a lot of modern games and engines.
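Conceptually, the two passes look something like this. Again, this is just a sketch with made-up names, not the actual implementation:

```cpp
#include <vector>

// Illustrative deferred pipeline: Scene, Light, GBuffer and the helper
// functions are placeholders, not real Crimild types.
void renderDeferred( const Scene &scene, const std::vector< Light > &lights )
{
    // Geometry pass: render the scene once, storing surface attributes
    // (albedo, normals, depth, etc.) in a G-Buffer instead of final colors.
    GBuffer gBuffer = renderGeometryPass( scene );

    // Lighting pass: only visible surfaces survive in the G-Buffer, so each
    // light is evaluated once per pixel, with no work wasted on occluded
    // fragments.
    for ( auto &pixel : gBuffer ) {
        writeToFramebuffer( pixel.coords, shadePixel( pixel, lights ) );
    }
}
```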

I gave it a try a few years back, but it never really had “official” support in the engine. Now I’ve finally made the call: I’m going to use deferred rendering from now on. Overall, it should keep things simpler in the long term and should help me introduce real-time ray tracing some day.

I’m aware of other, more modern approaches like Forward+ or Clustered Rendering, but those are too complex for Crimild at the moment. Thanks to the modular nature of the new frame graph, implementing such a technique in the future should not require another big refactor of the entire rendering system. So, I might give it a try next year.

PBR Lighting

Another decision I made is to stick with Physically-Based Rendering (PBR) for lit objects as the only lighting solution that comes bundled with the engine. For years I attempted to maintain both physically-based and traditional (specular/Phong) lighting solutions, but there’s no point in doing that anymore since PBR is the current standard.
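As a reference for what that choice means in terms of material parameters, here’s a rough comparison of the two models (illustrative structs, not Crimild’s actual types):

```cpp
// The legacy Phong-style model, no longer bundled with the engine.
struct PhongMaterial {
    ColorRGB ambient;
    ColorRGB diffuse;
    ColorRGB specular;
    float shininess;
};

// The PBR metallic/roughness workflow that stays as the built-in solution.
struct PBRMaterial {
    ColorRGB albedo;
    float metallic;         // 0 = dielectric, 1 = metal
    float roughness;        // 0 = mirror-like, 1 = fully diffuse
    float ambientOcclusion; // pre-baked occlusion factor
};
```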

Of course, custom lighting solutions are still supported if needed, but from now on I’m not going to be the one having to maintain them.

Better glTF Support

The glTF file format has been around for quite some time now and it has become the standard for handling 3D assets.

At the moment, Crimild depends on the Assimp library to load glTF models, but I’m going to change that sometime in the near future, since Assimp is pretty big and I’m using only one of the many file formats it supports. Plus, it generates a lot of warnings when compiling, and nobody likes warnings.
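For context, loading a model through Assimp looks more or less like this (simplified; the file path is just an example):

```cpp
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>

// Import a glTF file through Assimp's generic interface. The same code path
// handles dozens of other formats, which is exactly why the dependency feels
// too heavy when only one of them is needed.
Assimp::Importer importer;
const aiScene *scene = importer.ReadFile(
    "assets/models/example.gltf",
    aiProcess_Triangulate | aiProcess_GenSmoothNormals );
if ( scene == nullptr ) {
    // importer.GetErrorString() describes what went wrong
}
```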

I would love to have a glTF loader that is part of the Core module, just as the OBJ loader is. Reading glTF (either the JSON or binary format) is relatively easy. The real challenge lies in the fact that the Core module must be written in ANSI C++ and must not depend on any external libraries. That means I’d have to implement my very own JSON parser, which is not a simple task. I guess I’ll stick with Assimp for now.

Which one?

This has nothing to do with Crimild, but it was definitely the hardest decision of all…

Hello ImGUI!

I’ve been wanting to add support for ImGUI ever since I started the Vulkan branch about a year ago. To be fair, ImGUI is a pretty easy library to use, yet it depends heavily on dynamic buffers, which is something the new rendering system in Crimild was not providing…

Until now.

Making things more dynamic

ImGUI works by recreating the visual geometries every frame. This means it not only depends on dynamic data, but also on the ability to record a new set of rendering commands every frame, which wasn’t supported by the new rendering system at all.
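For those unfamiliar with the library, consuming ImGUI’s output every frame looks roughly like this (simplified, but these are real ImGUI calls):

```cpp
#include "imgui.h"

// ImGUI rebuilds its draw lists from scratch every single frame.
ImGui::Render();
ImDrawData *drawData = ImGui::GetDrawData();
for ( int i = 0; i < drawData->CmdListsCount; ++i ) {
    const ImDrawList *cmdList = drawData->CmdLists[ i ];
    // Both cmdList->VtxBuffer and cmdList->IdxBuffer change from one frame
    // to the next, so the renderer must re-upload that data and re-record
    // the corresponding draw commands every frame.
}
```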

I think I mentioned this before in other posts: the current frame graph implementation in Crimild is static. That is, you create a scene and define which render passes will be used at the beginning of the simulation, and then the engine assumes that things won’t change later on. That means nothing can be added to or removed from the scene or frame graph. If something changes, we get undefined behavior (or, more likely, a crash).

A static frame graph might be good enough for demos, but it’s definitely not how things are supposed to work in more complex projects (especially games), where new objects are created and destroyed all the time. This has been a known constraint so far, one that made things a lot simpler for me at the beginning. But the time has come to finally remove that limitation for good.

Firstly, uniform buffers have been dynamic pretty much since the very beginning. Otherwise, no camera would work at all and objects wouldn’t move. They are updated, if needed, just before rendering (there’s still room for optimization, though). Then, I added support for dynamic vertex buffers when revisiting the particle system a couple of months ago. That should deal with dynamic data, right? (Spoiler: no, it doesn’t.)
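As a rough sketch of how those per-frame uniform updates work (hypothetical names, not the actual Crimild API):

```cpp
// Illustrative types standing in for the engine's math and buffer classes.
struct CameraData {
    Matrix4 view;
    Matrix4 proj;
};

// Refresh uniform data right before rendering, but only when something
// actually changed (this is where the remaining optimization room lives).
void updateCameraUniforms( UniformBuffer &uniforms, Camera &camera )
{
    if ( camera.hasChanged() ) {
        uniforms.setValue( CameraData { camera.getViewMatrix(), camera.getProjectionMatrix() } );
        camera.clearChangedFlag();
    }
}
```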

What about command buffers? They were recorded once when creating render passes and then never changed. That’s not what we want, so I made some changes here and there, and now render passes have a callback that returns the command buffers every frame. The render pass is completely free to either recreate command buffers or memoize them in order to avoid duplicating work.
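In code, the idea looks something like this (a sketch with made-up names, not the real interface):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// CommandBuffer is a placeholder for the engine's actual type.
class RenderPass {
public:
    // Invoked once per frame; returns whatever command buffers should be
    // submitted for this pass.
    using CommandRecorder = std::function< std::vector< CommandBuffer * >( size_t ) >;

    void setCommandRecorder( CommandRecorder recorder ) { m_recorder = recorder; }

    std::vector< CommandBuffer * > getCommandBuffers( size_t frameIndex )
    {
        // A pass like ImGUI's re-records from scratch every frame, while a
        // static pass can memoize and keep returning the same buffers.
        return m_recorder( frameIndex );
    }

private:
    CommandRecorder m_recorder;
};
```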

But then I ran the simulation, and the result was not what I expected:

Something else was missing.

It was clear to me that the dynamic buffers were working, because the frame counter was being updated as expected. And I was able to add/remove panels, which indicated that command buffers were updated too. What was missing, then? It took me quite a while to find the problem (which is a clear indicator that I need better debugging tools). While uniforms and vertices were being updated, index buffers were not. I assumed that was already supported, but it turned out it wasn’t.

Once I figured out the problem, fixing it was pretty easy, and then I finally got the correct result.

Handling events

Rendering was fixed, so the next step was to forward mouse events to ImGUI. And that was really, really easy: it mostly boils down to filling in ImGUI’s IO state whenever an event arrives. Here’s a rough sketch (MouseEvent is a made-up type standing in for whatever the windowing layer provides):
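```cpp
#include "imgui.h"

// Push the current mouse state into ImGUI's IO structure. The io fields are
// part of ImGUI's actual input API; MouseEvent is hypothetical.
void forwardMouseEvent( const MouseEvent &event )
{
    ImGuiIO &io = ImGui::GetIO();
    io.MousePos = ImVec2( event.x, event.y );
    io.MouseDown[ 0 ] = event.leftButtonDown;
    io.MouseDown[ 1 ] = event.rightButtonDown;
}
```

A few minutes later I had a fully working UI: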

Closing Comments

As expected, working with ImGUI was really easy since it’s a very well-made library. The problems I had along the way were due to lack of support in Crimild for some of its requirements. It’s worth mentioning that ImGUI does provide bindings for Vulkan (and other graphics libraries like OpenGL), but I couldn’t use those since they were not compatible with Crimild. Otherwise, the process would have ended up being even easier.

See you next time!

A More Correct Raytracing Implementation

Happy 2021!!

I decided to start this new year by continuing to experiment with compute shaders, especially raytracing. I actually managed to fix the issues I was facing in my previous posts and added some new material properties, like reflection and refraction.

My implementation uses an iterative approach to sampling, computing only one sample per frame and accumulating the results in the final image. If the camera moves, the image is reset (set to black, basically) and the process starts again.
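The accumulation itself is just a running average. Here’s a sketch of the idea in C++ (the real work happens in a compute shader, and Image/Color are placeholder types):

```cpp
#include <cstddef>
#include <cstdint>

// Blend the newest sample into the accumulated image so that after N frames
// each pixel holds the average of N samples: accum += (sample - accum) / N.
void accumulate( Image &accum, const Image &sample, uint32_t &sampleCount, bool cameraMoved )
{
    if ( cameraMoved ) {
        accum.fill( Color { 0, 0, 0 } ); // reset to black and start over
        sampleCount = 0;
    }
    ++sampleCount;
    const float weight = 1.0f / float( sampleCount );
    for ( size_t i = 0; i < accum.getPixelCount(); ++i ) {
        accum[ i ] = accum[ i ] + ( sample[ i ] - accum[ i ] ) * weight;
    }
}
```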

Here’s a video of the image sampling process in action:

You might have noticed that the glass sphere in the center is black whenever the camera moves. That is because, while the camera is moving, the number of bounces for each ray is limited to one in order to provide a smoother experience when repositioning the view. Once the camera is no longer moving, bounces are set to ten or more, and the glass sphere is computed correctly, showing proper reflections and refractions.
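In code, that trick is basically a one-liner (illustrative names again):

```cpp
// Keep frames cheap while the view is changing, then spend the full bounce
// budget once the camera settles.
const uint32_t maxBounces = camera.isMoving() ? 1 : 10;
```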

Here’s another example:

In the example above, you can also see the depth of field effect in action, which is another camera property that can be tuned in real-time:

In this case, the image is reset whenever the camera’s focus changes.

I’m really happy with this little experiment. I don’t think it’s good enough for production yet, since it’s still too slow for any interactive project. But it’s definitely something I’m going to keep improving whenever I have the chance.