Keeping things Simple(r)

Crimild never really had an easy way in. The building process itself is a bit more complex than I would like and, once you get past that, creating simulations is no walk in the park either. Regardless of the complexity of the scene or the rendering composition, you usually end up with a lot of boilerplate code just to run the application:

  1. You need a main function (or in the case of OSX/iOS, a whole new project).
  2. You need to create not only a Simulation instance, but also add any System implementation you require based on what you want to accomplish (which is usually the same set of systems every time).
  3. You need to call some helper functions to initialize builders and other factory objects. And those must be called before creating the simulation itself, which is error prone.
  4. Don’t forget to set the Logger level.
  5. You also need to create asset managers and settings. The latter must parse command line arguments before the simulation is created.
  6. While we could implement the main loop ourselves, we usually end up calling the helper function Simulation::run() instead.

Only after completing all those steps correctly can we start creating the actual scene and rendering compositions. Oh, did I forget to mention that the latter is pretty much the same in most situations?

If you’re wondering how this process looks in practice, here’s an example of how to implement a simple simulation that loads an OBJ file and renders it on screen.
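It goes something like this (a rough sketch, not the exact listing; class names, helper calls and signatures are approximations of Crimild’s API rather than the real thing):

    // Sketch of the old workflow (names and signatures approximate
    // Crimild's API; don't take them literally)
    #include <Crimild.hpp>

    using namespace crimild;

    int main( int argc, char **argv )
    {
        // set the logger level
        Log::setLevel( Log::Level::LOG_LEVEL_DEBUG );

        // settings must parse the command line before the simulation exists
        auto settings = crimild::alloc< Settings >( argc, argv );

        // create the simulation and attach the usual set of systems
        // (builder/factory initialization helpers omitted; those must run
        // even before this point)
        auto sim = crimild::alloc< Simulation >( "OBJ Viewer", settings );
        sim->addSystem( crimild::alloc< UpdateSystem >() );
        sim->addSystem( crimild::alloc< RenderSystem >() );

        // load the OBJ file and use it as our scene
        OBJLoader loader( "assets/models/model.obj" );
        sim->setScene( loader.load() );

        // let the helper run the main loop for us
        return sim->run();
    }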

In the past, I tried to justify this complexity by arguing that it provides a lot of flexibility, which is true. But here I am today, rewriting all the demos and realizing that there’s a lot of duplicated code in our applications. And that code needs to be updated whenever the related classes change in the engine (which happens quite frequently. Sorry about that).

I spent the last week doing a lot of simplifications here and there, which resulted in a much simpler simulation workflow. You don’t believe me? Then check out how the same code looks now.
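Something along these lines (again a sketch; onStarted() and CRIMILD_CREATE_SIMULATION() are explained below, while the loader details are approximated):

    // The new workflow: only the core header, no main() of our own
    #include <Crimild.hpp>

    using namespace crimild;

    class Example : public Simulation {
    public:
        // onAwake() could be overridden here too, to tweak systems

        // called once all systems have been started
        void onStarted( void ) noexcept override
        {
            OBJLoader loader( "assets/models/model.obj" );
            setScene( loader.load() );
        }
    };

    // tells the engine which class implements our Simulation,
    // and what to use as the window title
    CRIMILD_CREATE_SIMULATION( Example, "OBJ Viewer" );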

A lot of things are happening under the hood now:

  1. The engine will take care of the main function and all of the internal initialization.
  2. The System class has been upgraded to provide a lot of hooks that are executed during the different stages of the Simulation lifetime.
  3. The Simulation class itself provides two virtual functions that can be used to configure our simulation in a very easy way:
    1. onAwake() is called when the Simulation is about to start, just before any system is initialized. This is a great point to attach your own systems or to remove the default ones and configure the simulation as much as you need.
    2. As the name implies, onStarted() is called after all systems have been started. Here you can create your initial scene and/or a rendering composition. If no rendering composition is provided, the Simulation will use the default one.
  4. CRIMILD_CREATE_SIMULATION() is a macro that tells the engine which class implements our Simulation, plus a name that will be used as the title of the window.

And that’s it.

No more complicated main files with lots of code that we don’t really care about. Now we can focus on building beautiful scenes without having to worry about the target platform. Did you notice that we only include the core header file now? That means that the same simulation code can be executed on desktop, web or mobile, since the engine is the one setting up the target platform based on the build settings.

Speaking of build settings, I mentioned that the build process is complicated and requires some manual steps. Well, guess what?…

Still is.

I didn’t have time to fix that yet, sorry. But I do have it on my list (which for some reason gets bigger and bigger every day).

The Road to PBR (II)

Once all of the technical requirements were taken care of, it was time to start moving forward with the actual PBR implementation.

A New Material Is Born

When using PBR, we need to specify geometry properties in a more, well, physically correct fashion. Besides the object’s color (aka albedo), we also need two new concepts: metalness and roughness. As their names imply, the first one defines whether an object is made of metal, while the second tells us how rough (or smooth) its surface is.

With those new concepts in mind, I created a new material class, named LitMaterial. This will be the new default material from now on for all geometries in Crimild.

In the following image, we can see both metalness (decreasing from top to bottom) and roughness (increasing from left to right) in action. The higher the metalness, the more the surface behaves like a mirror. A high roughness makes the surface behave like a rubber ball with no reflections.
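Just to give an idea, here’s a minimal sketch of how such a grid could be built. The setter names on LitMaterial (and the scene-graph calls) are assumptions that may not match the actual API:

    // Sketch: a grid of white spheres sweeping metalness and roughness
    auto makeSphereGrid( void )
    {
        auto scene = crimild::alloc< Group >();
        const int N = 7;
        for ( int row = 0; row < N; ++row ) {
            for ( int col = 0; col < N; ++col ) {
                auto material = crimild::alloc< LitMaterial >();
                material->setMetallic( 1.0f - float( row ) / ( N - 1 ) ); // decreases top to bottom
                material->setRoughness( float( col ) / ( N - 1 ) );       // increases left to right

                auto sphere = crimild::alloc< Geometry >();
                sphere->attachPrimitive( crimild::alloc< SpherePrimitive >() );
                sphere->attachComponent< MaterialComponent >( material ); // exact call may differ
                sphere->local().setTranslate( 2.5f * col, -2.5f * row, 0.0f );
                scene->attachNode( sphere );
            }
        }
        return scene;
    }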

From there, adding support for textures was quite straightforward:

In the image above, I’m using textures not only for colors, but also to indicate per-pixel metalness and roughness using individual maps (in a similar way to specular maps).
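In code, the setup might look something like this (the map setters and the loadTexture() helper are hypothetical):

    // Sketch: per-pixel PBR properties via individual texture maps.
    // Setter names and loadTexture() are assumptions, not the real API.
    auto material = crimild::alloc< LitMaterial >();
    material->setAlbedoMap( loadTexture( "assets/textures/albedo.png" ) );
    material->setMetallicMap( loadTexture( "assets/textures/metallic.png" ) );
    material->setRoughnessMap( loadTexture( "assets/textures/roughness.png" ) );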

But that’s not all…

Image-Based Lighting

A PBR workflow also allows us to use image-based lighting. This means that an object is lit not only by direct lights, but also by the current environment map (i.e. skybox).

The following images show spheres (believe me, they’re plain white spheres) being affected by the corresponding environment.

There’s more to it, though. The environment not only affects how objects are colored (aka diffuse lighting). We can also compute specular reflections based on the material’s metalness and roughness values:
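Crimild’s actual shaders aren’t listed here, but the usual way of combining these terms (the split-sum approximation) goes roughly like this, written as plain C++ for readability:

    // Standard image-based lighting combination (not Crimild's exact code):
    // diffuse from the irradiance map, specular from a pre-filtered
    // environment map (sampled at a mip level matching the roughness)
    // plus a 2D BRDF lookup table.
    struct RGB { float r, g, b; };

    RGB ibl(
        RGB albedo, float metallic,
        RGB irradiance,            // irradiance map sample (diffuse)
        RGB prefiltered,           // pre-filtered env map sample (specular)
        float brdfA, float brdfB ) // BRDF LUT sample
    {
        auto mix = []( float a, float b, float t ) { return a + t * ( b - a ); };

        // dielectrics reflect ~4% of light; metals tint reflections by albedo
        RGB f0 = {
            mix( 0.04f, albedo.r, metallic ),
            mix( 0.04f, albedo.g, metallic ),
            mix( 0.04f, albedo.b, metallic ),
        };

        // metals have no diffuse contribution
        float kd = 1.0f - metallic;

        return RGB {
            kd * albedo.r * irradiance.r + prefiltered.r * ( f0.r * brdfA + brdfB ),
            kd * albedo.g * irradiance.g + prefiltered.g * ( f0.g * brdfA + brdfB ),
            kd * albedo.b * irradiance.b + prefiltered.b * ( f0.b * brdfA + brdfB ),
        };
    }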

Importing Models with PBR properties

Spheres are nice, but there’s nothing better to showcase the power of PBR than an actual model with some great textures:

And that’s how Crimild finally got PBR support.

Is it over?

PBR support is a huge milestone for Crimild and the rendering refactor that I’ve been doing these past (weird) months.

At the moment, performance is not good enough, since I’m calculating the irradiance maps every frame (instead of doing it once at the very beginning and reusing them). This is a known issue and it’s related to the fact that the frame graph does not support dynamic scenes (as in adding/removing objects) at the moment.

But there’s one more thing I need to finish before fixing that.

To be continued…

The Road to PBR (I): HDR and Bloom

I knew from the start that supporting PBR (Physically-Based Rendering) was going to be one of the biggest (and hardest) milestones of the rendering system refactor. The fact that I’m finally starting to work on it (and also writing about it) indicates that I’m seeing the finish line after all this time.

There’s still a lot of work ahead, though.

Supporting PBR is not an easy task in any engine, since it involves not only creating new shaders and materials from scratch, but also performing several render passes before and after the actual scene is rendered in order to account for all of the lighting calculations. Some of those will make things a bit faster in later stages, like pre-computing irradiance maps, while others will improve the final render quality, like tone mapping.

The road is long, so I’m going to split the process into several posts. In this one, I’ll focus on HDR and Bloom. 

HDR

Adding support for HDR was the natural first step towards PBR, since it’s extremely important in order to handle light intensities, which can easily go way above 1.0. Wait! How come light intensities can have values beyond 1 if that is the biggest value for an RGB component? Think about the intensity of the Sun compared with a common light bulb. Both are light sources, but the Sun is a lot brighter than the light bulb. When calculating lighting values using a method such as Lambert or Phong, both of them will end up clamped to the maximum possible value (that is, of course, 1, or white), which is definitely not physically correct.

HDR stands for “High Dynamic Range” and it basically means we are going to use 16- or 32-bit float values for each color component (RGBA) in our frame buffer. That obviously leads to brighter colors, because the values are no longer clamped to the [0, 1] interval, as well as more defined dark areas because of the improved precision.
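Here’s a toy example (plain C++, nothing engine-specific) of why the extra bits matter:

    // With 8-bit color, lighting results saturate at 1.0, so a light bulb
    // and the Sun end up looking identical. Float attachments keep the
    // real values around until tone mapping.
    #include <algorithm>
    #include <cstdio>

    int main( void )
    {
        float ndotl = 0.8f;            // Lambert term for some pixel
        float bulb = 2.0f * ndotl;     // intensity 2   -> 1.6
        float sun = 250.0f * ndotl;    // intensity 250 -> 200.0

        // LDR framebuffer: both get clamped to the same white
        std::printf( "LDR: bulb = %.1f, sun = %.1f\n",
            std::min( bulb, 1.0f ), std::min( sun, 1.0f ) );

        // HDR framebuffer: the difference survives for later passes
        std::printf( "HDR: bulb = %.1f, sun = %.1f\n", bulb, sun );

        return 0;
    }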

As an example, this is what a scene looks like without HDR when having lights with values above 1.0:

The above scene consists of a long box with a few lights along the way. At the very end we have one more light with a huge intensity value (around 250 for each component). What’s important to notice in this image is that we’re almost blinded by the brightest light and there’s not much else visible. In fact, we can barely see the walls of the box.

Now, here’s how the same scene looks with HDR enabled:

The light at the end is still the brightest one, but now we can see more details in the darker areas, which makes more sense, since our eyes end up adjusting to the dark in the real world.

Tone Mapping

At this point you might be wondering how those huge intensity values translate to something that our monitors can display, since most of them still work with 8-bit RGB values.

Once we compute all of the lighting in HDR (that might include post-processing too), we need to remap the final colors to whatever our monitor supports. The process is called tone mapping, and it involves a normalization step which maps HDR colors to LDR (low dynamic range) colors, applying some extra color balance in the process to make sure the result looks good.

There are many ways to implement tone mapping depending on the final effect we want to achieve. The one I’m using is the Reinhard tone mapping algorithm, which balances all bright lights evenly and applies gamma correction.
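The operator itself is tiny. Written in plain C++ for clarity (the real thing runs in a fragment shader):

    // Reinhard tone mapping: maps [0, inf) into [0, 1), then applies
    // gamma correction so the result looks right on screen.
    #include <cmath>

    float reinhard( float hdr, float gamma = 2.2f )
    {
        float ldr = hdr / ( hdr + 1.0f );      // 0 -> 0, very bright -> ~1
        return std::pow( ldr, 1.0f / gamma );  // gamma correction
    }

    // e.g. reinhard( 0.5f ) ~= 0.61, while reinhard( 250.0f ) ~= 0.998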

Bloom

Bloom is not really a requirement for PBR, but at this point it was really easy to implement and probably the best way to test HDR colors.

Bloom is an image effect that produces those nice auras around bright colors, as if they were really emitting light.

Below, you can see the same scene with and without bloom:

Bloom Off
Bloom On

The technique requires us to filter colors in the image using a threshold value. If we’re using HDR, it’s as simple as keeping the colors that are greater than or equal to 1.0. Then, we blur the image several times, depending on the desired effect (the more blur we apply, the bigger the final aura around objects). Finally, we blend both the original scene image and the blurred one together. That’s it.
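Per pixel, the whole thing boils down to something like this sketch (in the engine each step is its own render pass, and the luma-based threshold below is the common approach, not necessarily Crimild’s exact filter):

    // 1. bright pass: keep only colors at or above the threshold
    // 2. blur the result a few times (gaussian blur passes, not shown)
    // 3. add the blurred image back on top of the original scene
    struct RGB { float r, g, b; };

    RGB brightPass( RGB c, float threshold = 1.0f )
    {
        // perceived brightness (luma)
        float luma = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
        return luma >= threshold ? c : RGB { 0.0f, 0.0f, 0.0f };
    }

    RGB blend( RGB scene, RGB blurred )
    {
        return { scene.r + blurred.r, scene.g + blurred.g, scene.b + blurred.b };
    }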

Next up…

In the next post I’m going to talk about the new PBR materials and shaders and how they approximate the rendering equation in real time.