Live Long And Render (IV)

Hello 2020!

As I mentioned before, development of Vulkan support is still ongoing, and in this post I'm going to talk about the biggest milestones I achieved in the very first month of this new year.

Triangles

This is the classic spinning triangle example that has been part of Crimild for a very long time. Only this time it's a bit more interesting:

While not visually impressive, this is the first big milestone for every graphics programmer, especially those working with Vulkan.

One of the biggest changes I made while working on this demo is a new way to work with vertex and index buffers. Why? To be honest, the decision may not have much to do with Vulkan, but the current way of specifying this type of data (basically, as an array of floats or ints) has several limitations, particularly when dealing with multiple vertex attributes (positions, colors, etc.) in the same buffer of data. I'm going to write a separate post to explain this better. But for now, just take a look at how vertices and indices are specified in the new approach:

// The layout of a single vertex
struct VertexP2C3 {
    Vector2f position;
    RGBColorf color;
};

// Create vertex buffer
auto vbo = crimild::alloc< VertexP2C3Buffer >(
    containers::Array< VertexP2C3 > {
        {
            .position = Vector2f( -0.5f, 0.5f ),
            .color = RGBColorf( 1.0f, 0.0f, 0.0f ),
        },
        {
            .position = Vector2f( -0.5f, -0.5f ),
            .color = RGBColorf( 0.0f, 1.0f, 0.0f ),
        },
        {
            .position = Vector2f( 0.5f, -0.5f ),
            .color = RGBColorf( 0.0f, 0.0f, 1.0f ),
        },
        {
            .position = Vector2f( 0.5f, 0.5f ),
            .color = RGBColorf( 1.0f, 1.0f, 1.0f ),
        },
    }
);

// Create index buffer
auto ibo = crimild::alloc< IndexUInt32Buffer >(
    containers::Array< crimild::UInt32 > {
        0, 1, 2, // first triangle
        0, 2, 3, // second triangle, completing the quad
    }
);

Even without knowing how it was done before, I think it's clear that the new approach is straightforward and easy to read. Again, I'll write more about it later.

Textures

Similar to the previous example, this demo seems to be quite simple, yet it’s another great milestone.

Each quad is rendered with a checkerboard texture and colored vertices.

Working with textures requires us to handle multiple descriptor sets and layouts, since each object has its own set of transformations and textures. I think I came up with a nice approach that may allow us to create a shader library in the future (and also stream them directly from disk if needed).
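To give a rough idea of what that means in practice, here's a minimal sketch in raw Vulkan (not Crimild's actual abstraction, and the helper name is mine) of the kind of per-object descriptor set layout these demos need: one uniform buffer for the object's transformations plus one combined image sampler for its texture.

#include <vulkan/vulkan.h>

// Creates a layout with one uniform buffer (the object's transformations)
// and one combined image sampler (the object's texture). Error handling omitted.
VkDescriptorSetLayout createObjectLayout( VkDevice device )
{
    VkDescriptorSetLayoutBinding bindings[ 2 ] = {};

    // Binding 0: per-object transformation data, read by the vertex shader
    bindings[ 0 ].binding = 0;
    bindings[ 0 ].descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    bindings[ 0 ].descriptorCount = 1;
    bindings[ 0 ].stageFlags = VK_SHADER_STAGE_VERTEX_BIT;

    // Binding 1: the object's texture, sampled by the fragment shader
    bindings[ 1 ].binding = 1;
    bindings[ 1 ].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    bindings[ 1 ].descriptorCount = 1;
    bindings[ 1 ].stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;

    VkDescriptorSetLayoutCreateInfo layoutInfo = {};
    layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.bindingCount = 2;
    layoutInfo.pBindings = bindings;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout( device, &layoutInfo, nullptr, &layout );
    return layout;
}

Each object then gets its own descriptor set allocated against this layout, which is what makes per-object transformations and textures possible.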

OBJ Loader

Once vertex data and textures were working correctly, the next obvious step was to show something more interesting on screen:

The famous Stanford Bunny is loaded from an OBJ file. Please note that there's no dynamic lighting in the scene. Instead, the texture file already has ambient occlusion baked into it.

The most difficult part of this example is actually hidden inside the engine. The OBJ loader needs to create a new pipeline based on what data is available (are there any textures? what about normals?). There's also the option to pass a pipeline to the loader, so every object will use it instead of the default one.
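The selection logic can be sketched like this (purely illustrative; these names are not Crimild's real API):

// Hypothetical sketch of attribute-based pipeline selection; the actual
// loader code in Crimild is more involved and uses different names
struct OBJModelInfo {
    bool hasTexture = false;
    bool hasNormals = false;
};

enum class PipelineKind {
    ColorOnly, // positions and colors only
    Textured, // positions plus texture coordinates
    TexturedLit, // texture coordinates plus normals
};

PipelineKind selectPipeline( const OBJModelInfo &model )
{
    if ( model.hasTexture && model.hasNormals ) {
        return PipelineKind::TexturedLit;
    }
    if ( model.hasTexture ) {
        return PipelineKind::Textured;
    }
    return PipelineKind::ColorOnly;
}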

Also, the OBJ loader makes use of the new vertex/index buffer objects.

Pipelines

The final example I’m showing today is the most complex one so far:

The demo sets up multiple pipelines to render the same scene (the bunny) using different settings: textured, lines, dots and normals.

The challenge in this demo was being able to override some or all of the settings of whatever pipeline configuration the scene (or, in this case, the model) defines, like overriding the viewport size.
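In raw Vulkan terms (which is not necessarily how Crimild structures it; the function names here are mine), one common way to allow such overrides is to declare the viewport as a dynamic pipeline state and provide the actual value while recording commands:

#include <vulkan/vulkan.h>

// When building the pipeline, mark the viewport as dynamic so it isn't
// baked into the pipeline object
VkPipelineDynamicStateCreateInfo makeDynamicViewportState()
{
    static const VkDynamicState dynamicStates[] = { VK_DYNAMIC_STATE_VIEWPORT };

    VkPipelineDynamicStateCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
    info.dynamicStateCount = 1;
    info.pDynamicStates = dynamicStates;
    return info;
}

// Later, override the viewport for any pipeline when recording commands
void overrideViewport( VkCommandBuffer commandBuffer, float width, float height )
{
    VkViewport viewport = {};
    viewport.x = 0.0f;
    viewport.y = 0.0f;
    viewport.width = width;
    viewport.height = height;
    viewport.minDepth = 0.0f;
    viewport.maxDepth = 1.0f;
    vkCmdSetViewport( commandBuffer, 0, 1, &viewport );
}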

Up next…

As you can see, I've been busy. Being able to load models with textures and set up different pipelines is the very basis for all the rest of the features.

There are still some unresolved design challenges that I need to tackle, like how to handle render targets and offscreen rendering, but I'm hoping solutions will emerge as I move forward with simpler demos.

Stay awhile and listen

A couple of days ago, I pushed to GitHub the first implementation of a new audio subsystem, with the intention of providing a set of tools for audio playback, both in 2D and 3D.

Although Crimild currently provides a simple 3D audio tool, it is far from useful in production environments, especially because it lacks any means to play audio streams, such as music.

Having decided to build the new system from scratch, I managed to complete a working version in just a couple of days, and it has already proven to be much more useful than the previous one.

Here’s a brief introduction of what I’ve accomplished so far.

Core abstraction

As with many other systems in Crimild, the new audio subsystem is split into two parts: the core abstraction and a platform-dependent implementation.

A handful of new classes have been added to the core library, describing abstract interfaces for audio-related concepts, like audio sources, listeners and the audio manager itself, which also acts as an abstract factory for those objects (something that I’m planning to replicate for images too in the future).
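As a rough idea, the core abstraction looks something like the sketch below. Take it as an approximation: the actual class names and signatures in Crimild may differ.

#include <memory>
#include <string>

// Abstract interface for anything that can emit audio
class AudioSource {
public:
    virtual ~AudioSource() = default;
    virtual void play() = 0;
    virtual void setPosition( float x, float y, float z ) = 0; // for 3D audio
};

// Abstract interface for the (usually unique) listener
class AudioListener {
public:
    virtual ~AudioListener() = default;
    virtual void setPosition( float x, float y, float z ) = 0;
};

// The manager also acts as an abstract factory for audio objects,
// hiding the platform-dependent implementation from client code
class AudioManager {
public:
    virtual ~AudioManager() = default;
    virtual std::shared_ptr< AudioSource > createAudioSource( const std::string &filename, bool streamed ) = 0;
    virtual AudioListener *getAudioListener() = 0;
};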

In addition, two new components are now available: one for audio sources and another for the audio listener (usually attached to a camera). These components handle audio spatialization when needed (for 3D audio), using the parent node's transformation.

But how useful is an audio system that doesn’t play audio, right?

SFML

As I was looking for a simple, portable library that would allow me to implement the actual audio layer quickly, I came upon SFML. SFML is an abstraction layer over most platform-specific functionality, like window handling, networking and, of course, audio.

SFML provides an API to handle both 2D and 3D sounds and streams, so the implementation was pretty straightforward. Much as with other third-party dependencies (like GLFW), this code is organized in a separate package.
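For reference, this is roughly what the SFML 2.x side of things looks like (a standalone example with hypothetical asset paths):

#include <SFML/Audio.hpp>

int main()
{
    // Streams: decoded on the fly, ideal for music
    sf::Music music;
    if ( !music.openFromFile( "assets/audio/theme.ogg" ) ) {
        return 1;
    }
    music.play();

    // Sounds: fully loaded into memory, ideal for short effects
    sf::SoundBuffer buffer;
    if ( !buffer.loadFromFile( "assets/audio/step.wav" ) ) {
        return 1;
    }
    sf::Sound sound( buffer );
    sound.setPosition( 2.0f, 0.0f, -5.0f ); // world-space position for 3D spatialization
    sound.play();

    // The listener is global; in Crimild it follows the camera node
    sf::Listener::setPosition( 0.0f, 0.0f, 0.0f );
    sf::Listener::setDirection( 0.0f, 0.0f, -1.0f );

    // Keep the process alive while the music plays
    while ( music.getStatus() == sf::Music::Playing ) {
        sf::sleep( sf::milliseconds( 100 ) );
    }

    return 0;
}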

Integration

The unique instance of the Audio Manager is stored by the Simulation class itself, in a similar way as the graphics renderer and other managers. The GLSimulation class, the one used in desktop projects, already creates a valid instance upon construction, which means client projects should be able to use the new audio facilities right out of the box. No additional setup is needed.
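In client code, that should boil down to something like this (a hypothetical usage sketch; the actual accessor and factory method names may differ):

// Hypothetical usage sketch; actual Crimild method names may differ
auto audioManager = Simulation::getInstance()->getAudioManager();
auto music = audioManager->createAudioSource( "assets/audio/theme.ogg", true );
music->play();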

Finally, in order to avoid confusion, I removed the old OpenAL implementation completely.

Examples

There are two examples that make use of the new audio system.

AudioRoom presents you with three musical instruments on the floor. As you approach each one, a music clip is played, and it's possible to have more than one instrument playing at the same time if you move the camera quickly enough.


The Drone example has been enhanced with multiple actors. This example uses several audio sources and a listener attached to the camera to showcase how 3D sound spatialization works.


Both of the examples are already available in the demo project.

Final comments

The new audio system is already available in the development branch on GitHub. I'm still working on this code, so expect changes in the near future.

Now, I have to admit that I'm not 100% comfortable with SFML. Don't get me wrong, the library is great and extremely useful, but it feels a bit overkill in my case. SFML provides a lot of functionality beyond audio, yet I'm only using a minimal part of it, mainly because Crimild already depends on GLFW for window and input management. On the other hand, I haven't checked how to use SFML on iOS yet, and that might cause some headaches in the future. Also, there's no support for MP3 at the moment.

But then again, SFML is extremely simple to use and I’m short of time at the moment, so I guess it’s a good compromise. I’m confident that I would be able to easily change the implementation in the future if needed, since the level of abstraction of the core classes is quite good.

Time will tell.


Particle System Improvements

Here’s a little something that I’ve been doing on the side.

The fire effect was created with a new Particle System node that includes tons of customization controls. For example, you can define the shape of the particle emitter from several mathematical shapes, like cylinders (the one in the video), cones, spheres, etc.

In addition, some particle properties, like size and color, can be interpolated between starting and ending values. The interpolation method is fixed for the moment, but it will be customizable in the near future to support different interpolation curves.
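The fixed method is presumably a simple linear blend between start and end values over the particle's lifetime; here's a sketch of that idea (not the engine's actual code):

// t goes from 0 (particle just born) to 1 (end of particle's lifetime);
// T must support addition, subtraction and scaling by a float
template< typename T >
T lerp( const T &start, const T &end, float t )
{
    return start + ( end - start ) * t;
}

// With the values from the script below, for example:
// size = lerp( 0.5f, 0.15f, t ); // shrinks from 0.5 to 0.15
// color fades from opaque yellow-ish to semi-transparent red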

At the moment, the particle system is implemented entirely on the CPU, meaning no point sprites or other optimization techniques are used, which leads to performance issues as the particle count grows (the effect in the video has a particle count of 50).

The particle system in the video above is configured in a Lua script as follows:

{
   type = 'crimild::ParticleSystem',
   maxParticles = 50,
   particleLifetime = 2.0,
   particleSpeed = 0.75,
   particleStartSize = 0.5,
   particleEndSize = 0.15,
   particleStartColor = { 1.0, 0.9, 0.1, 1.0 },
   particleEndColor = { 1.0, 0.0, 0.0, 0.5 },
   emitter = {
      type = 'cylinder',
      height = 0.1,
      radius = 0.2,
   },
   precomputeParticles = true,
   useWorldSpace = true,
   texture = 'assets/textures/fire.tga',
}

Even though the new system is quite easy to configure, it still requires a lot of trial and error to get something nice on screen. Maybe it's time to start thinking about a scene editor…