Live Long And Render (IV)

Hello 2020!

As I mentioned before, development of Vulkan support is still ongoing, and in this post I’m going to talk about the biggest milestones I achieved in the very first month of this new year.

Triangles

This is the classical spinning triangle example that has been part of Crimild for a very long time. Only this time it’s a bit more interesting:

While not visually impressive, this is the first big milestone for every graphics programmer, especially those working with Vulkan.

One of the biggest changes I made while working on this demo is a new way to work with vertex and index buffers. Why? The decision may not have much to do with Vulkan, to be honest, but the old way of specifying this type of data (basically, as an array of floats or ints) has several limitations, particularly when dealing with multiple vertex attributes (positions, colors, etc.) in the same buffer of data. I’m going to write a separate post to explain this better. For now, just take a look at how vertices and indices are specified in the new approach:

// The layout of a single vertex
struct VertexP2C3 {
    Vector2f position;
    RGBColorf color;
};

// Create vertex buffer
auto vbo = crimild::alloc< VertexP2C3Buffer >(
    containers::Array< VertexP2C3 > {
        {
            .position = Vector2f( -0.5f, 0.5f ),
            .color = RGBColorf( 1.0f, 0.0f, 0.0f ),
        },
        {
            .position = Vector2f( -0.5f, -0.5f ),
            .color = RGBColorf( 0.0f, 1.0f, 0.0f ),
        },
        {
            .position = Vector2f( 0.5f, -0.5f ),
            .color = RGBColorf( 0.0f, 0.0f, 1.0f ),
        },
        {
            .position = Vector2f( 0.5f, 0.5f ),
            .color = RGBColorf( 1.0f, 1.0f, 1.0f ),
        },
    }
);

// Create index buffer
auto ibo = crimild::alloc< IndexUInt32Buffer >(
    containers::Array< crimild::UInt32 > {
        0, 1, 2,
    }
);

Even without knowing how it was done before, it’s hard to deny that the new approach is clear and straightforward. Again, I’ll write more about it later.

Textures

Similar to the previous example, this demo seems to be quite simple, yet it’s another great milestone.

Each quad is rendered with a checkerboard texture and colored vertices.

Working with textures requires us to handle multiple descriptor sets and layouts, since each object has its own set of transformations and textures. I think I came up with a nice approach that may allow us to create a shader library in the future (and also stream them directly from disk if needed).
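To make the idea concrete, here’s a minimal sketch of the concept (all type names and fields are hypothetical, not the actual Crimild or Vulkan API): a single descriptor set layout is shared by all objects, while each object owns its own descriptor set holding its own transform and texture.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: one layout shared by every object,
// one descriptor set per object with per-object resources.
struct DescriptorSetLayout {
    // e.g. binding 0: uniform buffer (transform), binding 1: texture sampler
    std::vector< std::string > bindings;
};

struct DescriptorSet {
    const DescriptorSetLayout *layout; // shared, never owned
    float transform[ 16 ];             // per-object uniform data
    std::string texture;               // per-object texture name
};

DescriptorSet makeObjectDescriptors( const DescriptorSetLayout &layout, const std::string &texture )
{
    DescriptorSet ds;
    ds.layout = &layout;
    ds.texture = texture;
    // Start with an identity transform (column-major 4x4)
    for ( int i = 0; i < 16; ++i ) {
        ds.transform[ i ] = ( i % 5 == 0 ) ? 1.0f : 0.0f;
    }
    return ds;
}
```

The point of the split is that the shader only cares about the layout, so two quads with different checkerboard textures can still share one pipeline.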

OBJ Loader

Once vertex data and textures were working correctly, the next obvious step was to show something more interesting on screen:

The famous Stanford Bunny is loaded from an OBJ file. Please note that there’s no dynamic lighting in the scene. Instead, the texture file already has ambient occlusion baked into it.

The most difficult part of this example is actually hidden inside the engine. The OBJ loader needs to create a new pipeline based on what data is available (are there any textures? what about normals?). There’s also the option to pass a pipeline to the loader, so every object will use it instead of the default one.

The OBJ loader also makes use of the new vertex/index buffer objects.
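The selection logic can be sketched roughly like this (names and signatures are hypothetical, not the actual loader API): the loader picks a pipeline variant matching the vertex data it finds, unless the caller supplies one explicitly.

```cpp
#include <memory>
#include <string>

// Hypothetical sketch of the loader's pipeline selection.
struct Pipeline {
    std::string name;
};

std::shared_ptr< Pipeline > pickPipeline(
    bool hasTexCoords,
    bool hasNormals,
    std::shared_ptr< Pipeline > forced = nullptr )
{
    if ( forced != nullptr ) {
        return forced; // a caller-provided pipeline always wins
    }
    if ( hasTexCoords && hasNormals ) {
        return std::make_shared< Pipeline >( Pipeline { "texturedLit" } );
    }
    if ( hasTexCoords ) {
        return std::make_shared< Pipeline >( Pipeline { "textured" } );
    }
    return std::make_shared< Pipeline >( Pipeline { "unlitColor" } );
}
```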

Pipelines

The final example I’m showing today is the most complex one so far:

The demo sets up multiple pipelines to render the same scene (the bunny) using different settings: textured, lines, dots and normals.

The challenge for this demo was being able to override some or all of the settings of whatever pipeline configuration the scene (or, in this case, the model) provides with new values, like overriding the viewport size.
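The idea can be sketched as a simple merge of optional override values on top of the model’s own settings (types and fields here are hypothetical, not Crimild’s actual API):

```cpp
#include <optional>

// Hypothetical sketch: per-demo overrides applied on top of
// whatever pipeline configuration the model specifies.
struct Viewport {
    float x, y, width, height;
};

struct PipelineSettings {
    Viewport viewport { 0, 0, 800, 600 };
    bool depthTest = true;
};

struct PipelineOverrides {
    std::optional< Viewport > viewport;
    std::optional< bool > depthTest;
};

PipelineSettings applyOverrides( PipelineSettings base, const PipelineOverrides &o )
{
    // Only settings that were explicitly overridden replace the base values
    if ( o.viewport ) base.viewport = *o.viewport;
    if ( o.depthTest ) base.depthTest = *o.depthTest;
    return base;
}
```

This is how a single bunny scene could be drawn four times, each copy restricted to a quarter of the screen.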

Up next…

As you can see, I’ve been busy. Being able to load models with textures and set up different pipelines is the very basis for all the rest of the features.

There are still some unresolved design challenges that I need to tackle, like how to handle render targets and offscreen rendering, but I’m hoping the solution will come up as I move forward with simpler demos.

Another year, another release– wait, what?

Hi there!

2019 is coming to an end and, as you may have noticed, there were no new releases this year.

That’s correct.

Not a single one.

Why? Well, because 2019 was a weird year for me.

This year started with me using Crimild to create a new game, as usual, but at one point I made the (very hard) decision to switch to Unity in order to speed things up. It made sense at the time (and still does), since I was running out of free time and working on both improvements for Crimild and a new game was becoming impossible. So, I made the call.

Then, there were those unexpected (yet highly satisfying) side projects that ended up consuming the rest of my free time.

So, what about Crimild?

What’s the future for Crimild?

Honestly, no idea.

It’s moving forward, as always, but at a very slow pace.

I am still working on Vulkan support, of course. But what started as a yet-another-renderer-class, quickly became this huge refactor of the entire rendering subsystem (and more).

Instead of adapting Vulkan to Crimild, I decided to do the opposite and adapt Crimild to Vulkan (and similar modern rendering paradigms). Why? Because Crimild has been built around OpenGL since the very beginning and Vulkan has a lot of different concepts and approaches to rendering that demand a rethinking of several of the engine design choices I made 15 or more years ago (yes, Crimild has been around for that long).

How long will that take? As long as it takes ™.

Until then, I’ll keep refactoring and having fun.

Have a happy 2020!

Live long and Render (III)

I’m slowly moving forward with my Vulkan implementation. After several days of trial and error, I finally managed to render a simple triangle, which is a big deal in Vulkan. But I’m getting ahead of myself. Let me tell you about the journey first.

As mentioned in previous posts, the majority of the design decisions at the moment are about how to introduce Vulkan’s concepts into Crimild and make sense of them. I talked about render devices and swapchains before, and the next step was to start dealing with how to draw objects on the screen.

Shaders

Shaders have been part of Crimild for a long time, but the time has come to update them in order to support modern features. For the moment, the most important change I introduced in the Vulkan branch is that we can have multiple shader sources for each program. Besides the typical vertex/fragment shader pair, we can now specify geometry and compute shaders too. These are not implemented yet, but it’s a start.
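A rough sketch of the idea, with hypothetical names (the real Crimild classes may differ): a program now holds an arbitrary list of stages instead of a fixed vertex/fragment pair.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: multiple shader sources per program.
enum class ShaderStage { VERTEX, FRAGMENT, GEOMETRY, COMPUTE };

struct Shader {
    ShaderStage stage;
    std::string source; // e.g. a SPIR-V path or GLSL code
};

struct ShaderProgram {
    std::vector< Shader > shaders;

    bool hasStage( ShaderStage s ) const
    {
        for ( const auto &sh : shaders ) {
            if ( sh.stage == s ) {
                return true;
            }
        }
        return false;
    }
};
```

A classic program would carry just two entries, while a future compute program could carry a single COMPUTE stage.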

Graphics Pipelines

Graphics pipelines define how objects are rendered on the screen, covering everything from viewport size and vertex inputs to depth testing and color blending.

Older graphics APIs like OpenGL define a graphics pipeline in a very strict fashion. Yes, it was possible to introduce some customization in the form of shaders here and there, but in the end everything was rendered in the same way.

Vulkan introduces the concept of highly customizable graphics pipelines. We can now specify things like rasterization options, depth/stencil settings, multisampling, etc., in a single object and use it in a way that’s really efficient. As usual with Vulkan, this means two things: on one hand, great power; on the other, a very, very explicit amount of code to create the pipelines.

Custom graphics pipelines are, of course, another new concept for Crimild, and it wasn’t easy to settle on how to work with them (and, to be honest, I’m still second-guessing some decisions).

Having one pipeline shared by every single renderable object doesn’t make any sense. But neither does the opposite, since I would end up having too many instances of the same pipeline for objects that are similar.

Associating pipelines with materials didn’t feel right either, since several materials may reuse the same pipeline.

In the end, I made up my mind and decided that pipelines are independent of both drawables and materials. Why? Because there may be times when we need to render objects disregarding their geometry (i.e. don’t care about normals or vertex colors) and/or material properties (like we’re rendering a shadow map).

What about linking pipelines and shaders? Well, that makes more sense, but it’s not enough. Pipelines handle much more information than shaders, like viewport sizes and blending, for example.

And that’s how the Pipeline class was born.
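A minimal sketch of the resulting design, using hypothetical names: the pipeline bundles a shader program with fixed-function state, and both geometries and materials merely reference it, so a single instance can be shared freely by either.

```cpp
#include <memory>
#include <string>

// Hypothetical sketch: a pipeline is independent of drawables and
// materials, which only hold (shared) references to it.
struct Pipeline {
    std::string program;     // shader program, referenced by name here
    bool depthTest = true;   // fixed-function state lives alongside it
    bool alphaBlend = false;
};

struct Geometry {
    std::shared_ptr< Pipeline > pipeline; // referenced, not owned
};

struct Material {
    std::shared_ptr< Pipeline > pipeline; // same pipeline can be reused
};
```

This makes scenarios like shadow-map rendering straightforward: a render pass can simply substitute its own pipeline, ignoring whatever the geometry or material would normally use.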

Render Passes, Attachments & Framebuffers

Render passes are already a very important (albeit experimental) feature of Crimild. And they don’t differ too much from Vulkan’s own render passes.

The most important difference is that in Vulkan the actual rendering is performed in sub-passes. Render passes only serve as a way to declare which resources (that is, attachments) are needed for the sub-passes to work. Then, you can declare a single render pass that performs deferred lighting on a scene by implementing multiple sub-passes, all working with the same shared attachments.

The use of sub-passes makes the render pass much more efficient, even when working with OpenGL. Since attachments are shared, we only need to bind them once before executing all sub-passes. This is a change that I’m planning to make soon.
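The deferred lighting example above can be sketched like this (a simplified model, not actual Vulkan structures): the render pass declares its attachments once, and each sub-pass only references them by index.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: a render pass declaring shared attachments,
// with sub-passes doing the actual rendering work.
struct RenderPass {
    std::vector< std::string > attachments; // bound once for all sub-passes

    struct Subpass {
        std::vector< std::size_t > inputs;  // indices into attachments
        std::vector< std::size_t > outputs; // indices into attachments
    };
    std::vector< Subpass > subpasses;
};

RenderPass makeDeferredPass()
{
    RenderPass pass;
    pass.attachments = { "albedo", "normals", "depth", "color" };
    // Sub-pass 0 fills the G-Buffer...
    pass.subpasses.push_back( { {}, { 0, 1, 2 } } );
    // ...and sub-pass 1 reads it back to compute lighting.
    pass.subpasses.push_back( { { 0, 1, 2 }, { 3 } } );
    return pass;
}
```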

Render Graphs

Vulkan does not have a render graph API, although something similar is implemented internally by specifying sub-pass dependencies in each render pass. It is our job to set those dependencies correctly, which can quickly become cumbersome for complex renderers.

I’m still trying to figure out the changes required in Crimild’s render graph API. Not only to support sub-passes, but also because I want it to become much more than just a bunch of passes and dependencies. I want to include things like scene culling, filtering (i.e. render only UI elements), commands and much more. My goal is to make the render graph a descriptor for how an entire frame should be drawn for each application, not only the scene.

I believe this will be extremely beneficial for both complex and simple applications. You don’t need to cull objects because it’s a simple app? Do you need post-processing only on the 3D scene? Do you want a different post-processing for the UI? Are you making a headless path tracer for generating images? All of those scenarios can be supported.

Like I said, I’m still working on this and I’m not planning on it to be ready any time soon.

Moving on.

Command Pools & Command Buffers

Almost there, I promise.

Here are another two new concepts for Crimild: command pools and command buffers.

Command buffers are used to store commands that will be executed later, when a frame is actually rendered. This is probably the biggest difference between OpenGL and Vulkan. While the former works by setting the state machine immediately (in theory; some drivers may change that), Vulkan declares everything up front and defers most operations for (possibly much) later use.

For example, when rendering a triangle we usually issue commands to clear the screen buffer, bind vertex and index data, define a viewport, etc. When everything is ready, we issue a draw command (aka a “draw call”). A command buffer records all of these commands sequentially.

Command buffers are created from a specific command pool, depending on their type. There may be many different pools for different purposes, like graphics or compute pools.

Wait. Don’t Crimild’s render queues work in the same fashion? What’s different? It’s true that I tried to achieve something like this in Crimild before in the form of render queues, yet they operate at a much higher level. With render queues, visible objects are recorded (which may be done in separate threads) to be rendered later. But only the renderable object is saved, not the actual render commands. This requires us to compute which state changes are triggered every time we draw that object. This is clearly an overhead, especially considering that the renderer triggers state changes and draw calls without actually checking if they are needed. I made that call on purpose in the past to ensure that any object can be rendered independently of what came before, always resetting states to default values before drawing.

By using command buffers, instead, we can avoid that overhead while keeping the safety net. For each renderable object, we record the list of state changes and draw calls needed to make it appear on the screen. Then, we can check which of those commands are redundant and discard them. And by the time the render process is triggered, we’ll have the minimum number of commands that are needed to draw all objects.
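The filtering step could look roughly like this (a hypothetical sketch, not the engine’s actual implementation): track the last value bound for each kind of state, and drop any command that wouldn’t change anything.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: discarding redundant state changes from a
// recorded command list before execution.
struct Command {
    std::string type; // "bindPipeline", "bindVertexBuffer", "draw", ...
    std::string arg;
};

std::vector< Command > discardRedundant( const std::vector< Command > &recorded )
{
    std::vector< Command > result;
    std::map< std::string, std::string > bound; // last bound value per state type
    for ( const auto &cmd : recorded ) {
        if ( cmd.type != "draw" ) {
            auto it = bound.find( cmd.type );
            if ( it != bound.end() && it->second == cmd.arg ) {
                continue; // state unchanged, drop the command
            }
            bound[ cmd.type ] = cmd.arg;
        }
        result.push_back( cmd );
    }
    return result;
}
```

Two objects sharing the same pipeline would then trigger a single bind instead of two, while draw calls themselves are always kept.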

Obviously, recording commands is a costly operation. The challenge, then, will be to understand when to trigger the recording of render commands. After all, doing it every frame may end up causing more overhead than the one we’re trying to solve. But that’s another problem for my future self (I hate you too, future self!).

And then… Victory!

After all the hard work, the mighty Triangle shows up on the screen:

…Up Next!

Phew, that was a long post.

Now it’s time to make a pause. Think. Design.

There are many new concepts being introduced into the engine and I want to get them right before moving on to other features like buffers and textures.

And yes, I think that the render graph is the most interesting feature I’ve ever made for Crimild… assuming it works 🙂