Writing Compositions

When I started working on the new frame graph and render passes (and related classes), I knew they were a bit of a pain to work with. Simply put, they are just too verbose.

Let me give you an example. Let’s assume you want to render a simple triangle. Then, it’s as simple as:

  1. Create one or more attachments, each of them including:
    1. Usage flags (is it a color or a depth attachment?)
    2. Format flags (is it RGB, RGBA, RGB8, RGB32, or whatever the swapchain supports?)
    3. Is it supposed to be used as a texture? Then you also need to specify image views
  2. Create a pipeline
    1. Lots of settings (rasterization, depth)
    2. More settings (alpha, cull mode)
    3. Plus, you need to create and configure shaders
    4. Also, more settings (viewport modes, etc)
  3. Create descriptors for
    1. Textures
    2. Uniform buffers
      1. So many options
  4. Create a render pass
    1. Link it with attachments and descriptors
    2. Is it rendering offscreen? 
      1. Link it with more attachments and descriptors from other render passes
    3. Record render commands
      1. You need a pointer to your scene here, BTW…

See? Easy… (that’s the simplest scenario I can think of)

Ok, I’m not being fair here.

Yes, the API is verbose. But that’s exactly how the API is supposed to be. And that is OK, because I wanted it to be verbose. That kind of verbosity is exactly what allows us to customize our rendering process to whatever needs we have in our simulation. Besides, we are only supposed to do that once in our code (unless our rendering process changes for some reason). So all that verbosity is acceptable.

Wait. If we only need to deal with that verbosity a few times in our program, why am I complaining about it?

The problem is that as I am re-organizing the demos and examples, I suddenly found myself writing lots of new applications. And that means, having to deal with that verbosity in each of them. Which is annoying, not only because I have to repeat myself every time I want to render a scene, but also because the API is still changing and I have to go over all of the examples time and time again in order to make sure they’re all up-to-date.

Therefore, I need a simpler way to deal with this verbosity. 

But let me be clear here. I don’t want to get rid of the verbosity by introducing a simpler API. That’s a big NO. I like that verbosity. I’m very happy with that verbosity and I know that is the cost I have to pay for having that kind of customization power.

What I want is just a way to avoid repeating myself every time I create a new demo. To be honest, I don’t care if this is not good enough for real-world applications (more on that later).

So, I need a tool to compose different render passes and mix them in an expressive way. Therefore, I need a… oh, right. I spoiled it already with the title of this post. Mmm. Ok, I’m going to say it anyways and you promise you’ll make your best surprised face, ok? Here we go:

What I need is a Composition tool.

Surprise!

About compositions

Let’s go over our requirements again:

  1. We’re going to be creating lots of objects when defining the different render passes and they need to be kept alive for as long as our composition is valid. In order to accomplish this requirement, we can store them in a struct called Composition, containing a list of objects. If a composition is destroyed (i.e. the app ends, or we swap the composition with a different one), all of its internal objects will also be destroyed.
  2. We’re going to be rendering images, so we need to treat Attachments in a special way. We need to declare at least one of them to be the resulting image (the one that is going to be presented to the screen). We also need to access them by name so we can link different render passes if needed (for example, we might need to apply some special effect to an image generated by rendering a 3D scene). For this purpose, the Composition type also keeps a map with references to all of the existing attachments (there is a chance for name collisions between attachments, but I don’t care for the moment. I don’t want to complicate things too much at this stage).
  3. Obviously, we need a mechanism for creating compositions that is reusable. That is the whole point of this discussion. This mechanism will deal with all the verbosity I mentioned above but, since it’s reusable, that’s not a problem. These mechanisms are called generators and they’re simple functions that return Composition objects.
  4. Finally, we want to be able to mix generators in order to produce more complex effects. For example, we might want to apply some special effect to an image containing a rendered scene. And then overlay UI elements on top of it. So, the generators receive an optional Composition argument, which can be augmented with new objects (and it should produce images).
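Before looking at the generators themselves, here is a minimal sketch of what a Composition type satisfying requirements 1 and 2 might look like. This is hypothetical and simplified (the names and helpers below are my own stand-ins, not necessarily Crimild’s actual API): a container that owns every created object, plus a by-name map for attachments and a designated output.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real engine types.
struct Object {
    virtual ~Object() = default;
};

struct Attachment : Object {
    explicit Attachment( std::string aName ) : name( std::move( aName ) ) { }
    std::string name;
};

class Composition {
public:
    // Requirement 1: created objects are owned by the composition and
    // stay alive until it is destroyed or swapped for another one.
    // (All created types are assumed to derive from Object.)
    template< typename T, typename... Args >
    T *create( Args &&...args )
    {
        auto obj = std::make_shared< T >( std::forward< Args >( args )... );
        m_objects.push_back( obj );
        return obj.get();
    }

    // Requirement 2: attachments are also indexed by name, so other
    // render passes can link to them (name collisions ignored for now).
    Attachment *createAttachment( const std::string &name )
    {
        auto att = create< Attachment >( name );
        m_attachments[ name ] = att;
        return att;
    }

    Attachment *getAttachment( const std::string &name ) const
    {
        auto it = m_attachments.find( name );
        return it != m_attachments.end() ? it->second : nullptr;
    }

    void setOutput( Attachment *att ) { m_output = att; }
    Attachment *getOutput( void ) const { return m_output; }

private:
    std::vector< std::shared_ptr< Object > > m_objects;   // keeps everything alive
    std::map< std::string, Attachment * > m_attachments;  // lookup by name
    Attachment *m_output = nullptr;                       // the image to present
};
```

Destroying the composition releases the object vector, which in turn destroys every object it owns, matching requirement 1.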

So, how does all that look in practice?

Composition myGenerator( void ) 
{
  Composition cmp;
  auto color = cmp.create< Attachment >( "color" );
  auto depth = cmp.create< Attachment >( "depth" );
  auto renderPass = cmp.create< RenderPass >();
  renderPass->attachments = { color, depth };
  renderPass->recordCommands();
  cmp.setOutput( color ); // the main attachment
  return cmp;
}

Composition anotherGenerator( Composition cmp )
{
  auto baseColor = cmp.getOutput();
  auto color = cmp.create< Attachment >( "anotherColor" );
  auto descriptorSet = cmp.create< DescriptorSet >();
  descriptorSet->descriptors = {
    Descriptor {
      .type = TEXTURE,
      .texture = baseColor->getTexture(),
    },
  };
  auto renderPass = cmp.create< RenderPass >();
  renderPass->attachments = { color };
  renderPass->descriptors = descriptorSet;
  renderPass->recordCommands();
  cmp.setOutput( color ); // the new output, so the next generator can find it
  return cmp;
}

auto finalComposition = anotherGenerator( myGenerator() );

A more real-world example might look like this:

namespace crimild {
  namespace compositions {
    Composition present( Composition cmp );
    Composition sepia( Composition cmp );
    Composition vignette( Composition cmp );
    Composition overlay( Composition cmp1, Composition cmp2 );

    // pure generators
    Composition renderScene( Node *scene );
    Composition renderUI( Node *ui );
  }
}

auto composition = present( 
  overlay(
    sepia( vignette( renderScene( aScene ) ) ),  
    renderUI( aUI )
  )  
);

Notice how some generators can augment existing compositions in order to apply effects. For this purpose, they can access existing attachments (or maybe other resources) inside the composition by name.

Also, renderScene and renderUI are considered pure generators, since they create a new composition from scratch (they receive no composition argument).

Finally, overlay takes two compositions and produces a new one that is a mix of both. In this case, both input compositions are merged, and the resulting one contains all of their objects.
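That merge step could be sketched as follows. Again, this is hypothetical: I’m assuming the composition stores its objects and named attachments in standard containers, and I’m leaving out the render pass a real overlay would create to blend both outputs.

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal stand-in for the composition described above (hypothetical).
struct Composition {
    std::vector< std::shared_ptr< void > > objects;
    std::map< std::string, void * > attachments;
    void *output = nullptr;
};

// overlay: merge two compositions so the result owns all objects and
// knows about all named attachments from both inputs.
Composition overlay( Composition cmp1, Composition cmp2 )
{
    Composition result = std::move( cmp1 );

    // Take ownership of every object from the second composition.
    result.objects.insert(
        result.objects.end(),
        std::make_move_iterator( cmp2.objects.begin() ),
        std::make_move_iterator( cmp2.objects.end() ) );

    // Merge the attachment maps. std::map::insert keeps existing keys,
    // so name collisions resolve in favor of cmp1 here (the real engine
    // just ignores collisions for now).
    result.attachments.insert( cmp2.attachments.begin(), cmp2.attachments.end() );

    // A real overlay would also create a render pass that blends both
    // outputs into a new attachment; here we keep cmp1's output as-is.
    return result;
}
```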

Now it’s really easy to create new applications by combining these compositions together.

I even went as far as creating a debug composition generator, which takes every single attachment ever created by other generators and displays them all on screen.

Final Thoughts

I like this approach because it’s simple and we can combine different generators together. Performance is not the best at the moment: every time we pass a composition from one generator to another we’re (probably) copying its internal collections (not the objects themselves, though), which is not great.

This is done only once in our app (usually at the very beginning), which is not that bad, but it might not be a good solution for performance-heavy applications or games.

Yet, it’s more than enough for examples or simple simulations, which is exactly what I needed.
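If that copying ever becomes a problem, one cheap fix (sketched below with a hypothetical generator and a stripped-down composition) is to keep passing compositions by value but move them at each call site, so the internal containers transfer their buffers instead of being copied:

```cpp
#include <cassert>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical composition: copying it copies the vector (and bumps
// every shared_ptr refcount); moving it just steals the buffer.
struct Composition {
    std::vector< std::shared_ptr< int > > objects;
};

// Taking the argument by value means a caller that writes
// sepia( std::move( cmp ) ) pays two cheap moves and zero copies.
Composition sepia( Composition cmp )
{
    // ...add effect-specific objects to cmp here...
    return cmp; // returning a parameter triggers an implicit move
}
```

Since the generators are only run once at startup anyway, this is more of a nicety than a necessity.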

Organizing demos

Another side effect of the huge rendering refactor I’m currently working on is that I’m taking the opportunity to clean up and reorganize all examples in Crimild’s demos repository. Most of them have already been updated to use the newest features, there are a lot of new demos, and some have been removed because they were obsolete.

I have three different goals in mind when doing this: documentation, testing and, of course, showcase.

Documentation

It’s no secret that Crimild’s documentation is sparse and not very helpful (at best). Therefore, having good examples is extremely important. Covering everything from how to create a simulation up to how to render a complex, interactive, scene, these examples are highly valuable as a quick reference point.

You want to load an OBJ file? There’s an example for that. Want to composite different screen effects but don’t know how to do bloom? There’s an example for that.

(I find myself coming back to these examples time and time again, so I know first-hand how useful they are).

Testing

Sometimes it’s hard to tell whether a change in some module ends up breaking something else in a completely different one… [insert your default unit/automated testing pitch here].

Having simpler demos not only helps with documentation, but also means that we have lots of simpler tests. Those tests might not be automated (yet), but they are still useful.

Showcase

Finally, the eye-candy.

These are the ones that show what Crimild is capable of. They’re still supposed to be simple projects, with only a handful of source files at most, since I’m interested in showing a particular feature in action and not really a full application.

Bonus Track: Experimentation

Since Crimild is becoming more flexible and extensible than ever, these demo projects are a great way to experiment with new features. We can create simple projects that use a customized rendering algorithm, for example, which might or might not end up being part of the engine. Or we can finally implement a true entity-component-system by refactoring the scene graph completely. It has become really easy to replace one system with a new one without having to do any changes in the core ones.

As they say, the sky(box) is the limit…

Sorry, I had to do it…