Hello ImGUI!

I’ve been wanting to add support for ImGUI ever since I started the Vulkan branch about a year ago. To be fair, ImGUI is a pretty easy library to use, yet it depends heavily on dynamic buffers, which is something the new rendering system in Crimild did not provide…

Until now.

Making things more dynamic

ImGUI works by recreating the visual geometries every frame, which not only means that it depends on dynamic data, but also on the ability to record a new set of rendering commands every frame, something the new rendering system didn’t support at all.
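
To make this concrete, here’s the typical ImGUI frame flow (this uses the real ImGUI API; the surrounding function and the frameCount variable are just illustrative):

```cpp
#include "imgui.h"

// Build and render the UI for one frame. ImGUI regenerates all of its
// vertex and index data from scratch on every call to ImGui::Render().
void buildUI( int frameCount )
{
    ImGui::NewFrame();

    ImGui::Begin( "Stats" );
    ImGui::Text( "Frame: %d", frameCount );
    ImGui::End();

    ImGui::Render();

    // The draw data is brand new every frame: its vertex and index
    // buffers must be copied into dynamic GPU buffers before drawing.
    ImDrawData *drawData = ImGui::GetDrawData();
    ( void ) drawData;
}
```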

I think I mentioned this before in other posts: the current frame graph implementation in Crimild is static. That is, you create a scene and define which render passes will be used at the beginning of the simulation, and then the engine assumes that things won’t change later on. That means nothing can be added to or removed from the scene or frame graph. If something changes, we get undefined behavior (or, more likely, a crash).

A static frame graph might be good enough for demos, but it’s definitely not how things are supposed to work in more complex projects (especially games), where new objects are created and destroyed all the time. This has been a known constraint so far, one that made things a lot simpler for me at the beginning. But the time has come to finally remove that limitation for good.

Firstly, uniform buffers have been dynamic pretty much since the very beginning; otherwise, no camera would work at all and objects wouldn’t move. They are updated, if needed, just before rendering (there’s still room for optimizations, though). Then, I added support for dynamic vertex buffers when revisiting the particle system a couple of months ago. That should deal with dynamic data, right? (Spoiler: no, it doesn’t.)

What about command buffers? They were recorded once when creating render passes and then never changed. That’s not what we want, so I made some changes here and there and now render passes have a callback that returns the command buffers every frame. The render pass is completely free to either recreate command buffers or memoize them in order to avoid duplicating work.
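
A minimal sketch of that idea (the types and names here are illustrative, not Crimild’s actual API):

```cpp
#include <functional>
#include <memory>
#include <vector>

// Illustrative stand-in for the engine's actual command buffer type.
struct CommandBuffer { /* recorded GPU commands */ };

class RenderPass {
public:
    using CommandBufferProvider =
        std::function< std::vector< std::shared_ptr< CommandBuffer > >() >;

    void setCommandBufferProvider( CommandBufferProvider provider )
    {
        m_provider = std::move( provider );
    }

    // Invoked by the frame graph once per frame.
    std::vector< std::shared_ptr< CommandBuffer > > getCommandBuffers( void )
    {
        if ( m_provider ) {
            return m_provider();
        }
        return {};
    }

private:
    CommandBufferProvider m_provider;
};
```

A provider can then choose between the two strategies, for example re-recording only when something actually changed and returning the cached buffers otherwise:

```cpp
// Usage sketch: memoize command buffers, re-record only when dirty.
bool uiDirty = true;
std::vector< std::shared_ptr< CommandBuffer > > cached;

RenderPass pass;
pass.setCommandBufferProvider(
    [ & ] {
        if ( uiDirty ) {
            cached = { std::make_shared< CommandBuffer >() }; // re-record here
            uiDirty = false;
        }
        return cached;
    }
);
```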

But then I ran the simulation, and the result was not what I expected:

Something else was missing.

It was clear to me that the dynamic buffers were working, because the frame counter was being updated as expected. And I was able to add/remove panels, which indicated that command buffers were updated too. What was missing, then? It took me quite a while to find the problem (which is a clear indicator that I need better debugging tools). While uniforms and vertices were being updated, index buffers were not. I assumed that was already supported, but it turned out it wasn’t.
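
The fix amounts to treating index buffers like the other dynamic buffers: if the data changed since the last frame, upload it again before recording draw commands. A rough sketch, again with made-up names:

```cpp
#include <cstdint>
#include <vector>

// Illustrative dynamic index buffer: CPU-side data plus a dirty flag.
struct IndexBuffer {
    std::vector< uint32_t > indices;
    bool dirty = true;

    void set( std::vector< uint32_t > newIndices )
    {
        indices = std::move( newIndices );
        dirty = true;
    }

    // Called just before rendering, same as uniforms and vertices.
    void updateIfNeeded( void )
    {
        if ( dirty ) {
            // upload `indices` to the GPU buffer here
            dirty = false;
        }
    }
};
```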

Once I figured out the problem, fixing it was pretty easy, and I finally got the correct result.

Handling events

Rendering was fixed, so the next step was to forward mouse events to ImGUI. And that was really, really easy. A few minutes later I had a fully working UI:
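
For reference, forwarding mouse state boils down to filling in ImGUI’s IO structure every frame (the ImGuiIO fields are real; where the values come from depends on the windowing layer):

```cpp
#include "imgui.h"

// Push the current mouse state into ImGUI. The mouseX/mouseY/leftDown
// parameters are assumed to come from the simulation's input system.
void forwardMouseEvents( float mouseX, float mouseY, bool leftDown )
{
    ImGuiIO &io = ImGui::GetIO();
    io.MousePos = ImVec2( mouseX, mouseY );
    io.MouseDown[ 0 ] = leftDown;
}
```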

Closing Comments

As expected, working with ImGUI was really easy since it’s a very well made library. The problems I had along the way were due to the lack of support for some of its requirements in Crimild. It’s worth mentioning that ImGUI does provide bindings for Vulkan (and other graphics libraries like OpenGL), but I couldn’t use those since they were not compatible with Crimild. Otherwise, the process would have ended up being even easier.

See you next time!

A More Correct Raytracing Implementation

Happy 2021!!

I decided to start this new year by continuing to experiment with compute shaders, especially raytracing. I actually managed to fix the issues I was facing in my previous posts and added some new material properties, like reflection and refraction.

My implementation uses an iterative approach to sampling, computing only one sample per frame and accumulating the results into the final image. If the camera moves, the image is reset (set to black, basically) and the process starts again.
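
The accumulation itself is just a running average. Here’s a minimal sketch of the idea (the Color type and function name are illustrative; in practice this happens inside the compute shader):

```cpp
#include <cstdint>

struct Color {
    float r, g, b;
};

// Blend a new sample into the accumulated image. After N samples the
// accumulated color is the average of all samples so far, so each frame
// refines the image a little more. Resetting is just sampleCount = 0.
Color accumulate( Color accumulated, Color newSample, uint32_t sampleCount )
{
    const float n = float( sampleCount );
    return Color {
        ( accumulated.r * n + newSample.r ) / ( n + 1.0f ),
        ( accumulated.g * n + newSample.g ) / ( n + 1.0f ),
        ( accumulated.b * n + newSample.b ) / ( n + 1.0f ),
    };
}
```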

Here’s a video of the image sampling process in action:

You might have noticed that the glass sphere in the center is black whenever the camera moves. That is because, while the camera is moving, the number of bounces for each ray is limited to one in order to provide a smoother experience when repositioning the view. Once the camera is no longer moving, bounces are set to ten or more and the glass sphere is computed correctly, showing proper reflections and refractions.
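
In code, that trade-off is essentially a one-liner (the function and the exact limits here are hypothetical; the actual limit lives in the shader):

```cpp
#include <cstdint>

// While the view is being repositioned, trade image quality for
// responsiveness by limiting ray depth to a single bounce.
uint32_t maxBounces( bool cameraIsMoving )
{
    return cameraIsMoving ? 1 : 10;
}
```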

Here’s another example:

In the example above, you can also see the depth of field effect in action, which is another camera property that can be tuned in real-time:

In this case, the image is reset whenever the camera’s focus changes.

I’m really happy with this little experiment. I don’t think that it’s good enough for production yet, since it’s still too slow for any interactive project. But it’s definitely something that I’m going to keep improving whenever I have the chance.

Victory!

Throughout this weird year I managed to accomplish a lot of different milestones while refactoring the rendering system in Crimild. Yet the year was coming to an end and there was one feature in particular that was still missing: compute operations.

Then, this happened:

That, my friends, is the very first image created by using a compute pass in Crimild. The image is then used as a texture that is presented to the screen. Both compute and rendering passes are managed by the frame graph and executed every frame in real-time.
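
Conceptually, the frame graph just chains the two passes through a shared image resource. Something along these lines (illustrative types only, not Crimild’s actual API):

```cpp
#include <memory>

// A shared image resource connecting the two passes.
struct Image {
    int width;
    int height;
};

struct ComputePass {
    std::shared_ptr< Image > output;
    void execute( void ) { /* dispatch the compute shader, writing `output` */ }
};

struct GraphicsPass {
    std::shared_ptr< Image > input;
    void execute( void ) { /* sample `input` as a texture and present it */ }
};

int main( void )
{
    auto image = std::make_shared< Image >( Image { 1280, 720 } );

    ComputePass compute { image };
    GraphicsPass present { image };

    // The frame graph orders passes by their resource dependencies:
    // compute writes the image, so it runs before the pass that reads it.
    for ( int frame = 0; frame < 3; ++frame ) {
        compute.execute();
        present.execute();
    }

    return 0;
}
```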

At the time of this writing I haven't implemented true synchronization between the graphics and compute queues, meaning that the compute shader might still be writing the image by the time it is read by the rendering engine, which produces some visual artifacts every once in a while.
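
For what it’s worth, when both passes end up on the same queue, the usual Vulkan fix is a pipeline barrier between the compute write and the fragment read. This is standard Vulkan API, though it’s not what Crimild does yet, and sharing across different queue families additionally needs semaphores or ownership transfers:

```cpp
#include <vulkan/vulkan.h>

// Insert a barrier so fragment shader reads wait for compute shader writes.
// `cmd` and `storageImage` are assumed to be valid handles.
void barrierComputeToFragment( VkCommandBuffer cmd, VkImage storageImage )
{
    VkImageMemoryBarrier barrier = {};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
    barrier.oldLayout = VK_IMAGE_LAYOUT_GENERAL;
    barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = storageImage;
    barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

    vkCmdPipelineBarrier(
        cmd,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
        0,
        0, nullptr,
        0, nullptr,
        1, &barrier );
}
```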

Of course, I had to push forward.

A few hours passed, and the next compute shader I made was used to implement a very basic path tracer entirely on the GPU:

It’s not a true real-time ray tracing solution (since I don’t have a GPU with proper RTX support), but sampling is done incrementally, allowing me to reposition the camera in real-time:

I’m still amazed about how easy it was to port my software-based path tracer to the GPU.

So much power…

So much potential…

I wanted more…

I needed more…

I became greedy.

I flew too close to the Sun.

And I got burnt.

Then I learned a valuable lesson. It turns out that if I screw up the shader code in some specific way (which I’m still trying to understand), weird things happen. Like my computer crashing… bad (as in having to turn it off and on again bad).

Next steps

I’m planning on (finally) merging the Vulkan branch at this point, since all major features are done. Sure, there are things that still need to be fixed and cleaned up, but they don’t really depend on Vulkan itself: behaviors, animations, and sound, which is broken (again).

Plus, I really want to release Crimild v5.0 in the next decade.

See you next year!