Sorting particles the right way… I guess…

Close your eyes.

Actually, keep them open, because you have to keep reading.

Imagine a train moving at a constant speed. You’re traveling inside the train, in a seat near a window. There’s a computer resting on a table right in front of you. On its screen, a random frame from a secret project is being rendered looking like this:

[Image: Screen Shot 2017-09-09 at 12.10.35 PM.png]

The Problem

The above image is the result of using a mixture of offscreen render targets and particle systems to generate a lot of soldiers in real time, in a similar fashion to other impostor techniques out there. In this case, each particle represents a group of soldiers (in order to avoid clones as much as possible), and they are walking from the left to the right of the screen. Don’t pay attention to the bearded guy at the front.

Did you notice how the “density” of the soldiers seems to increase after some point around the middle of the screen? That’s the symptom of the problem. Particles are generated using a uniformly distributed random algorithm, so there’s no reason for the empty spaces between them.

If we look at the debug version of the same frame, we see something like this:

[Image: Screen Shot 2017-09-10 at 5.18.59 PM.png]

As shown in the image above, the particles are indeed uniformly distributed. Then, where are the soldiers?

Here’s another clue: if I turn the camera just a little bit to the left, I get the following result:

[Image: Screen Shot 2017-09-10 at 5.22.22 PM.png]

This seems to indicate that, although the soldier-particles do exist, they are just not being rendered in the right way. Actually, I’ve dealt with this kind of problem before, and it always seems to be related to object sorting and transparency.

Distance to the Camera

Before particles are rendered on screen, they must be sorted in the right order for transparency to work. OK, so particles are not being sorted and we just need to implement that, right? Alas, after checking the code, it turns out that the particle system does perform sorting over live particles, ordering them from back to front based on the camera position. And yet the problem remains.
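For reference, a back-to-front sort based on the camera position usually looks something like the sketch below. This is not the actual Crimild code; Vector3 and Particle are placeholder types used only for illustration.

```cpp
#include <algorithm>
#include <vector>

// Placeholder types for this sketch.
struct Vector3 { float x, y, z; };
struct Particle { Vector3 position; };

static float distanceSquared( const Vector3 &a, const Vector3 &b )
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Naive approach: sort back-to-front using the distance from each
// particle to the camera *position*. This is the behavior already in
// place, and it turns out not to be enough for billboards.
void sortByCameraPosition( std::vector< Particle > &particles, const Vector3 &cameraPos )
{
    std::sort( particles.begin(), particles.end(),
        [ & ]( const Particle &a, const Particle &b ) {
            // Farthest particles first.
            return distanceSquared( a.position, cameraPos )
                 > distanceSquared( b.position, cameraPos );
        } );
}
```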

It turns out I was making the wrong assumption here. Particles are being reordered, true, but the algorithm handles them as points instead of billboards (quads that always face the camera).

Let’s look at the following diagram:

[Diagram: img_3586-1]

The above diagram is quite self-explanatory, right? No? OK, let me try to explain it then.

In the first case, particles are sorted using the camera position (just as in the current implementation). There are three distances to the camera (d1, d2, d3). If we use the camera position as reference, the order in which the particles are rendered ends up being 3, 2, 1 (back-to-front, remember). But that result is incorrect.

Particle 2 (the one in the middle) is indeed closer to the camera position than particle 3, but it should be rendered before particle 3 in order to prevent artifacts like the ones shown above.

Near-plane distance

The second scenario is the right one. We have to sort particles based on their distance to the near plane of the camera, not to its position. That way, particles are correctly rendered as 2, 3, 1 (again, back-to-front).
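In code, this boils down to sorting by the projection of each particle onto the camera’s forward direction (its depth along the view axis) instead of the Euclidean distance to the camera position. Again a simplified sketch, reusing the placeholder types and includes from the snippet above; it’s not the actual engine code.

```cpp
// Depth of a point along the camera's forward axis. Up to the constant
// near-plane offset, this is the distance from the point to the near plane.
static float viewDepth( const Vector3 &p, const Vector3 &cameraPos, const Vector3 &cameraForward )
{
    const Vector3 v = { p.x - cameraPos.x, p.y - cameraPos.y, p.z - cameraPos.z };
    return v.x * cameraForward.x + v.y * cameraForward.y + v.z * cameraForward.z;
}

// Correct approach for billboards: sort back-to-front using the distance
// to the near plane (view-space depth), not the distance to the camera position.
void sortByNearPlaneDistance( std::vector< Particle > &particles,
    const Vector3 &cameraPos, const Vector3 &cameraForward )
{
    std::sort( particles.begin(), particles.end(),
        [ & ]( const Particle &a, const Particle &b ) {
            // Farthest (deepest) particles first.
            return viewDepth( a.position, cameraPos, cameraForward )
                 > viewDepth( b.position, cameraPos, cameraForward );
        } );
}
```

Since all billboards are parallel to the near plane, sorting by view-space depth gives a consistent back-to-front ordering, which is why the artifacts disappear.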

This change produces the correct result:

[Image: Screen Shot 2017-09-10 at 5.32.11 PM.png]

All soldiers are successfully rendered and our army is complete.

Final comments

We should keep in mind that, while this method fixes this particular sorting problem, it may not work when particles are rotated or intersect each other. There are other approaches that attempt to solve those cases and, depending on what we’re targeting, they might end up being too expensive to implement.

That’s it for today. Time to get back to my cave now.

See you later

PS: If you’re still wondering what that thing about the train was, well, I guess I’ve been watching too much Genius lately…

 


Shadow mapping improvements (I)

Just a brief update to build up some expectations for the next release (whenever that happens).

I’ve been working on improving shadow mapping support in Crimild in order to make it more reliable in production environments. The current implementation has a lot of errors and artifacts, and it’s not really usable in big open spaces due to incorrect light frustum calculations.

Here’s a quick look at the new Shadows demo:

 

[Slideshow: Shadows demo]

Only directional and spot lights can cast shadows at the moment, but I’m planning on adding support for point lights shortly. I’m also planning on adding support for cascade shadow maps in a later release.

That’s it. See you later 🙂

Rendering to Texture

Happy 2017! I know, it’s almost February, but better late than never, right?

[EDIT: I was so excited that this demo actually worked that I didn’t even realize this wasn’t the first post of the year.]

Anyway, I just finished this demo and I wanted to share it with you. Simply put, I’m rendering lots of characters on screen using the impostor technique, which relies on rendering a single model into an offscreen buffer. Then, multiple quads are drawn using the output of that buffer as a texture.

Et voilà!
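If you’re curious about the quad part, here’s a tiny, self-contained sketch of how each particle can be expanded into a camera-facing quad whose UVs simply cover the offscreen texture. The types and names are made up for illustration; they are not Crimild’s actual API.

```cpp
#include <array>

// Placeholder types for this sketch.
struct Vector3 { float x, y, z; };

struct Vertex {
    Vector3 position;
    float u, v; // texture coordinates into the offscreen buffer
};

// Expand a particle position into a camera-facing quad (billboard).
// cameraRight and cameraUp are the camera's world-space basis vectors.
std::array< Vertex, 4 > makeImpostorQuad( const Vector3 &center, float halfSize,
    const Vector3 &cameraRight, const Vector3 &cameraUp )
{
    auto corner = [ & ]( float sx, float sy ) {
        return Vector3 {
            center.x + sx * halfSize * cameraRight.x + sy * halfSize * cameraUp.x,
            center.y + sx * halfSize * cameraRight.y + sy * halfSize * cameraUp.y,
            center.z + sx * halfSize * cameraRight.z + sy * halfSize * cameraUp.z,
        };
    };

    // The whole offscreen texture (the rendered character) is mapped onto the quad.
    return {
        Vertex { corner( -1.0f, -1.0f ), 0.0f, 0.0f },
        Vertex { corner( +1.0f, -1.0f ), 1.0f, 0.0f },
        Vertex { corner( +1.0f, +1.0f ), 1.0f, 1.0f },
        Vertex { corner( -1.0f, +1.0f ), 0.0f, 1.0f },
    };
}
```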

In order to achieve this goal, I had to make some changes to how scenes are rendered (is this the first refactor of the year? No). Since we’re using multiple cameras, we needed a way to define which one is the main one (we can no longer rely on the Simulation to do that for us). Also, a new render pass is required to draw the model on a texture. And so the OffscreenRenderPass (were you expecting something else?) was born. But that’s pretty much it.

As usual, this is a new feature and therefore… unstable. Do not try it at home (yet).

Bye!