A More Correct Raytracing Implementation

Happy 2021!!

I decided to start the new year by continuing to experiment with compute shaders, especially raytracing. I actually managed to fix the issues I was facing in my previous posts and added some new material properties, like reflection and refraction.

My implementation uses an iterative approach to sampling, computing only one sample per frame and accumulating the results into the final image. If the camera moves, the image is reset (set to black, basically) and the process starts again.
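As an illustration, here's a minimal sketch of that accumulation step in plain C++ (the names here are hypothetical; in the engine itself this logic lives in the compute shader):

#include <cstdint>

// Stand-in color type for this sketch.
struct Color {
    float r;
    float g;
    float b;
};

// Fold one new sample into the running average without storing every
// previous sample. After n samples, calling this with the (n+1)-th sample
// yields the mean of all n+1 samples.
Color accumulate( const Color &avg, const Color &sample, std::uint32_t n )
{
    const float w = 1.0f / float( n + 1 );
    return {
        avg.r + ( sample.r - avg.r ) * w,
        avg.g + ( sample.g - avg.g ) * w,
        avg.b + ( sample.b - avg.b ) * w,
    };
}

// When the camera moves, the accumulated image is cleared to black and n
// resets to zero, restarting the process.

The nice property of the incremental form is that the image is always displayable: it's simply noisier early on and converges as more samples arrive.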

Here’s a video of the image sampling process in action:

You might have noticed that the glass sphere in the center is black whenever the camera moves. That is because, while the camera is moving, the number of bounces for each ray is limited to one in order to provide a smoother experience when repositioning the view. Once the camera is no longer moving, the bounce count is raised to ten or more and the glass sphere is computed correctly, showing proper reflections and refractions.
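A hypothetical sketch of that policy:

#include <cstdint>

// Bounce budget per frame: favor responsiveness while the camera moves,
// and correctness (reflections and refractions) once it settles.
std::uint32_t maxBouncesForFrame( bool cameraIsMoving )
{
    return cameraIsMoving ? 1u : 10u;
}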

Here’s another example:

In the example above, you can also see the depth of field effect in action, which is another camera property that can be tuned in real time:

In this case, the image is reset whenever the camera’s focus changes.
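Depth of field in a raytracer is usually achieved with a thin-lens camera model, as popularized by the Ray Tracing in One Weekend series. Here's a minimal sketch of the idea (not Crimild's actual camera code): rays originate from random points on an aperture disk but all aim at the focal plane, so only that plane stays sharp.

#include <cstdlib>

struct Vec3 {
    float x, y, z;
};

Vec3 operator+( const Vec3 &a, const Vec3 &b ) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator-( const Vec3 &a, const Vec3 &b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator*( const Vec3 &a, float s ) { return { a.x * s, a.y * s, a.z * s }; }

struct Ray {
    Vec3 origin;
    Vec3 direction;
};

// Random point on a unit disk in the xy plane (rejection sampling).
Vec3 randomInUnitDisk()
{
    while ( true ) {
        const float x = 2.0f * std::rand() / RAND_MAX - 1.0f;
        const float y = 2.0f * std::rand() / RAND_MAX - 1.0f;
        if ( x * x + y * y < 1.0f ) {
            return { x, y, 0.0f };
        }
    }
}

// Thin-lens camera ray (camera assumed to look along -z, lens in the xy
// plane). A larger aperture means stronger blur away from the focal plane.
Ray generateRay( const Vec3 &eye, const Vec3 &focusPoint, float aperture )
{
    const Vec3 offset = randomInUnitDisk() * ( 0.5f * aperture );
    const Vec3 origin = eye + offset;
    return { origin, focusPoint - origin };
}

This also explains why the image has to be reset when the focus changes: every accumulated sample was generated with the old lens settings.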

I’m really happy with this little experiment. I don’t think that it’s good enough for production yet, since it’s still too slow for any interactive project. But it’s definitely something that I’m going to keep improving whenever I have the chance.

Organizing demos

Another side effect of the huge rendering refactor I'm currently working on is that I'm taking the opportunity to clean up and reorganize all examples in Crimild's demos repository. Most of them have already been updated to use the newest features, there are a lot of new demos, and some obsolete ones have been removed.

I have three different goals in mind when doing this: documentation, testing and, of course, showcase.

Documentation

It’s no secret that Crimild’s documentation is sparse and not very helpful (at best). Therefore, having good examples is extremely important. Covering everything from creating a simulation to rendering a complex, interactive scene, these examples are highly valuable as a quick reference point.

You want to load an OBJ file? There’s an example for that. Want to composite different screen effects but don’t know how to do bloom? There’s an example for that.

(I find myself coming back to these examples time and time again, so I know first-hand how useful they are).

Testing

Sometimes it’s hard to tell whether a change in some module ends up breaking something else in a completely different one… [insert your default unit/automated testing pitch here].

Having simpler demos not only helps with documentation, but also means that we have lots of simpler tests. Those tests might not be automated (yet), but they are still useful.

Showcase

Finally, the eye-candy.

These are the ones that show what Crimild is capable of. They’re still supposed to be simple projects, with only a handful of source files at most, since I’m interested in showing a particular feature in action and not really a full application.

Bonus Track: Experimentation

Since Crimild is becoming more flexible and extensible than ever, these demo projects are a great way to experiment with new features. We can create simple projects that use a customized rendering algorithm, for example, which might or might not end up being part of the engine. Or we can finally implement a true entity-component system by refactoring the scene graph completely. It has become really easy to replace one system with a new one without having to make any changes to the core ones.

As they say, the sky(box) is the limit…

Sorry, I had to do it…

Live Long And Render (IV)

Hello 2020!

As I mentioned before, development for Vulkan support is still ongoing, and in this post I’m going to talk about the biggest milestones I achieved in the very first month of the new year.

Triangles

This is the classical spinning triangle example that has been part of Crimild for a very long time. Only this time it’s a bit more interesting:

While not visually impressive, this is the first big milestone for every graphics programmer, especially those working with Vulkan.

One of the biggest changes I made while working on this demo is a new way to work with vertex and index buffers. Why? The decision may not have much to do with Vulkan, to be honest, but the current way of specifying this type of data (basically, as an array of floats or ints) has several limitations, particularly when dealing with multiple vertex attributes (positions, colors, etc.) in the same buffer of data. I’m going to write a separate post to explain this in more detail. But for now, just take a look at how vertices and indices are specified in the new approach:

// The layout of a single vertex
struct VertexP2C3 {
    Vector2f position;
    RGBColorf color;
};

// Create vertex buffer
auto vbo = crimild::alloc< VertexP2C3Buffer >(
    containers::Array< VertexP2C3 > {
        {
            .position = Vector2f( -0.5f, 0.5f ),
            .color = RGBColorf( 1.0f, 0.0f, 0.0f ),
        },
        {
            .position = Vector2f( -0.5f, -0.5f ),
            .color = RGBColorf( 0.0f, 1.0f, 0.0f ),
        },
        {
            .position = Vector2f( 0.5f, -0.5f ),
            .color = RGBColorf( 0.0f, 0.0f, 1.0f ),
        },
        {
            .position = Vector2f( 0.5f, 0.5f ),
            .color = RGBColorf( 1.0f, 1.0f, 1.0f ),
        },
    }
);

// Create index buffer (a single triangle, using the first three vertices)
auto ibo = crimild::alloc< IndexUInt32Buffer >(
    containers::Array< crimild::UInt32 > {
        0, 1, 2,
    }
);

Even without knowing how it was done before, it’s clear that the new approach is straightforward and readable. Again, I’ll write more about it later.

Textures

Similar to the previous example, this demo seems to be quite simple, yet it’s another great milestone.

Each quad is rendered with a checkerboard texture and colored vertices.

Working with textures requires us to handle multiple descriptor sets and layouts, since each object has its own set of transformations and textures. I think I came up with a nice approach that may allow us to create a shader library in the future (and also stream them directly from disk if needed).
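To give a sense of what’s involved, here’s a sketch in raw Vulkan (the underlying API, not Crimild’s abstraction): a single layout describing per-object transforms and a texture, from which one descriptor set per object would then be allocated.

#include <vulkan/vulkan.h>

// One layout shared by every textured object; each object gets its own
// descriptor *set* allocated against this layout.
VkDescriptorSetLayout createObjectLayout( VkDevice device )
{
    VkDescriptorSetLayoutBinding bindings[ 2 ] = {
        {
            // binding 0: per-object transform data (vertex stage)
            .binding = 0,
            .descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
            .descriptorCount = 1,
            .stageFlags = VK_SHADER_STAGE_VERTEX_BIT,
        },
        {
            // binding 1: per-object texture (fragment stage)
            .binding = 1,
            .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
            .descriptorCount = 1,
            .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
        },
    };

    VkDescriptorSetLayoutCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
        .bindingCount = 2,
        .pBindings = bindings,
    };

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout( device, &info, nullptr, &layout );
    return layout;
}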

OBJ Loader

Once vertex data and textures were working correctly, the next obvious step was to show something more interesting on screen:

The famous Stanford Bunny is loaded from an OBJ file. Please note that there’s no dynamic lighting in the scene. Instead, the texture already has ambient occlusion baked into it.

The most difficult part of this example is actually hidden inside the engine. The OBJ loader needs to create a new pipeline based on what data is available (are there any textures? what about normals?). There’s also the option to specify a pipeline to the loader, so every object will use it instead of the default one.

Also, the OBJ loader makes use of the new vertex/index buffer objects.
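To illustrate the kind of decision the loader makes internally, here’s a hypothetical sketch (none of these names come from Crimild’s actual API):

// Pick a pipeline variant based on the data found in the file, unless the
// caller explicitly provided one.
struct Pipeline;

Pipeline *selectPipeline(
    Pipeline *userPipeline, // optional override supplied by the caller
    Pipeline *texturedLit,  // positions + texture coordinates + normals
    Pipeline *textured,     // positions + texture coordinates
    Pipeline *colored,      // positions only (fallback)
    bool hasTexCoords,
    bool hasNormals )
{
    if ( userPipeline != nullptr ) {
        return userPipeline; // every object uses the caller's pipeline
    }
    if ( hasTexCoords && hasNormals ) {
        return texturedLit;
    }
    if ( hasTexCoords ) {
        return textured;
    }
    return colored;
}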

Pipelines

The final example I’m showing today is the most complex one so far:

The demo sets up multiple pipelines to render the same scene (the bunny) using different settings: textured, lines, dots and normals.

The challenge for this demo was being able to override some or all of the settings of whatever pipeline configuration the scene (or, in this case, the model) defines, like overriding the viewport size.
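One common way to support this in Vulkan (not necessarily what Crimild ended up doing) is to declare the viewport and scissor as dynamic pipeline state, so the same pipeline can be reused and the values set while recording commands:

#include <vulkan/vulkan.h>

// Declare viewport and scissor as dynamic so they can be overridden per draw
// without rebuilding the pipeline.
const VkDynamicState dynamicStates[ 2 ] = {
    VK_DYNAMIC_STATE_VIEWPORT,
    VK_DYNAMIC_STATE_SCISSOR,
};

const VkPipelineDynamicStateCreateInfo dynamicStateInfo = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,
    .dynamicStateCount = 2,
    .pDynamicStates = dynamicStates,
};

// Later, while recording the command buffer, apply the override:
void setViewport( VkCommandBuffer cmd, float width, float height )
{
    const VkViewport viewport = {
        .x = 0.0f,
        .y = 0.0f,
        .width = width,
        .height = height,
        .minDepth = 0.0f,
        .maxDepth = 1.0f,
    };
    vkCmdSetViewport( cmd, 0, 1, &viewport );
}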

Up next…

As you can see, I’ve been busy. Being able to load models with textures and set up different pipelines is the very basis for all the rest of the features.

There are still some unresolved design challenges that I need to tackle, like how to handle render targets and offscreen rendering, but I’m hoping the solution will come up as I move forward with simpler demos.