Praise the Metal – Part 5: Textures & Lighting

Hello, Voyagers. As the title of this post suggests, I’m going to talk about how to work with textures & lighting in Metal. Additionally, I’ll briefly describe other techniques like depth/stencil, alpha and culling tests, which are very important during the rendering process.

Truth be told, Le Voyage is perhaps not the best example of the use of textures and lighting in a game, since it’s quite simple in this regard. Still, a single one of its frames has enough information for us to do something meaningful with Metal.

[Screenshot: an in-game frame of Le Voyage]

The image above shows an in-game frame of Le Voyage, with most of the objects rendered using the techniques described in previous posts, plus a few new tricks that will be presented here.

There are a lot of things going on here. Let’s summarize them:

  • Opaque objects, like the projectile (in blue), the mountains or the balloons, all of them affected by a single light
  • Opaque objects that are not affected by lights, like the Moon in the background (in yellow)
  • Translucent objects like the pause button, the approaching high score label at the very back of the scene (visible near the Moon in the image above) or the current score label at the top right corner of the screen
  • Two textures for fonts, one for each font style (one is used for the score label, the other for the high scores in the 3D world)
  • One texture for object colors (there are several levels of gray for different objects, plus some basic colors)
  • One light source, positioned behind the camera and affecting most of the objects in the 3D world.

Additionally, when displaying menus there are more textures for special buttons (such as the Facebook, Twitter or Game Center buttons). The intro scene, the one with the cannon and the scaffolding, has an extra light source coming from where the Moon is (I bet you didn’t notice that one).

No post-processing effects have been applied in the image above since we haven’t talked about image effects yet. I’ll leave that for the next entry in the series.

DISCLAIMER: The techniques described in this series of posts are limited only to those features currently supported by Crimild and also to those used in Le Voyage. It’s by no means an exhaustive introduction to all texturing and lighting mechanisms, as there are many, many more. Crimild is expected to support more features in future releases, of course.


Textures

In Metal, the MTLTexture protocol represents formatted image data with a specific type and pixel format. Textures can be used as sources for vertex, fragment or compute shader functions (or all of them), as well as attachments for render passes.

Metal supports images of 1, 2 or 3 dimensions, arrays or cubemaps. Only 2D textures are supported in Crimild at the time of this writing, though.

Creating textures

Assuming we already loaded the actual image file (TGA is the de-facto format for textures in Crimild), we will need to create a texture object and upload the data to its internal storage.

When creating new textures, we use the MTLTextureDescriptor protocol to define properties like image size, pixel format and arrangement, as well as the number of mipmap levels, provided mipmapping is supported. For example:

auto textureDescriptor = [MTLTextureDescriptor 
   texture2DDescriptorWithPixelFormat: MTLPixelFormatRGBA8Unorm
                                width: image->getWidth()
                               height: image->getHeight()
                            mipmapped: NO];

In the code above, a descriptor is created for RGBA images of a specific width and height, with no mipmapping support since, at the moment, Crimild does not support mipmapping when working with textures in Metal.

Then, we’ll pass that descriptor to the device in order to create the actual texture:

id< MTLTexture > mtlTexture = [getDevice() newTextureWithDescriptor:textureDescriptor];

Copy image data to a texture

After creating the texture, we usually need to copy our image data into its storage. Alternatively, the texture data may come from a render pass attachment or other sources, so there won’t be a need to copy anything.

Assuming we do need to copy data, the following code shows how to copy the image data from a crimild::Image object in memory to a texture, at mipmap level 0:

MTLRegion region = MTLRegionMake2D( 0, 0, image->getWidth(), image->getHeight() );

[mtlTexture replaceRegion: region
              mipmapLevel: 0
                withBytes: image->getData()
              bytesPerRow: image->getWidth() * image->getBpp()];

So far, creating and loading textures is not that different from what OpenGL provides, right?

Binding textures

In order to use textures during our render process, we first need to bind them. As we do for other rendering resources, we need to invoke the corresponding method in the render encoder:

[getRenderEncoder() setFragmentTexture:mtlTexture atIndex: 0];

This binds the texture to the first index of the texture argument table.


Samplers

Working with textures requires us to define how we want to apply filtering, addressing and other properties when performing texture sampling operations. A sampling operation maps texels to polygons and pixels.

Things are a little bit different in Metal than in OpenGL concerning sampling. At least in practice.

Metal provides a specialized object for sampling operations, described by the MTLSamplerState protocol. I haven’t used the sampler facilities in Crimild yet, since Le Voyage has extremely simple sampling requirements for textures, all of which can be easily described in MSL, as we’re going to see next.
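For reference, creating and binding a sampler through the API would look roughly like this. This is just a sketch following the conventions of the snippets above (getDevice() and getRenderEncoder() as in earlier posts); it’s not a path Crimild currently takes:

```objc
// Describe the sampler: linear filtering, repeat addressing, normalized coords
MTLSamplerDescriptor *samplerDescriptor = [MTLSamplerDescriptor new];
samplerDescriptor.minFilter = MTLSamplerMinMagFilterLinear;
samplerDescriptor.magFilter = MTLSamplerMinMagFilterLinear;
samplerDescriptor.sAddressMode = MTLSamplerAddressModeRepeat;
samplerDescriptor.tAddressMode = MTLSamplerAddressModeRepeat;
samplerDescriptor.normalizedCoordinates = YES;

// Compile the immutable sampler state and bind it to the fragment stage
id< MTLSamplerState > samplerState = [getDevice() newSamplerStateWithDescriptor: samplerDescriptor];
[getRenderEncoder() setFragmentSamplerState: samplerState atIndex: 0];
```

A sampler bound this way would then be received by the shader through a [[ sampler( 0 ) ]] argument, instead of being instantiated inside the shader as shown below.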

Textures in MSL

Two objects are required in order to use textures in MSL. One is the texture itself, bound as described above. The other is a sampler object, which can be created using the Metal API or instantiated directly in the MSL shader itself, as Crimild is currently doing. In addition to those objects, we also need texture coordinates, specified in the interpolated vertex input provided by the vertex shader.

The following MSL code implements a fragment shader that returns a color based on both the sampled texture and the material’s diffuse color:

fragment float4 crimild_fragment_shader_unlit_texture( VertexOut projectedVertex [[ stage_in ]],
                                                       texture2d< float > texture [[ texture( 0 ) ]],
                                                       constant crimild::metal::MetalStandardUniforms &uniforms [[ buffer( 1 ) ]] )
{
    constexpr sampler s( coord::normalized, address::repeat, filter::linear );
    float4 sampledColor = texture.sample( s, projectedVertex.textureCoords );
    return sampledColor * uniforms.material.diffuse;
}

The first line of the shader body creates a sampler object using standard options for both addressing and filtering. Texture coordinates are expected to be normalized to the range [0, 1].

That sampler is then used to get the texture color at the provided texture coordinates. Finally, the texture color and the material’s diffuse color are mixed together.

In theory, it’s not that different from OpenGL. And again, many more options can be applied for both textures and samplers than the ones presented here.

Let there be light… Or not

Unsurprisingly, Metal’s lighting facilities are… non-existent. As in OpenGL, lighting is computed in shaders and has to be implemented entirely by the developer. Long gone are the days of fixed-function pipelines for lighting.

Therefore, Crimild works with lighting in Metal in a very similar way as it does in OpenGL. For each geometry, we pass all active light sources using uniform buffers. Shaders are responsible for the lighting calculations, usually implementing the Phong lighting model (sorry, no PBR support… yet) and a forward render pass.
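As a rough illustration, a single-light Phong fragment shader in MSL might look like the sketch below. The VertexOut fields and the uniforms layout shown here are assumptions made for illustration purposes; they are not Crimild’s actual definitions:

```cpp
// Sketch of a Phong-style lit fragment shader (field names are hypothetical)
fragment float4 crimild_fragment_shader_phong_sketch( VertexOut projectedVertex [[ stage_in ]],
                                                      constant crimild::metal::MetalStandardUniforms &uniforms [[ buffer( 1 ) ]] )
{
    // assumed interpolated inputs: world-space normal and position
    float3 N = normalize( projectedVertex.normal );
    float3 L = normalize( uniforms.light.position - projectedVertex.worldPosition );
    float3 V = normalize( uniforms.camera.position - projectedVertex.worldPosition );
    float3 R = reflect( -L, N );

    // standard Phong terms: diffuse from N·L, specular from R·V
    float diffuseFactor = max( dot( N, L ), 0.0 );
    float specularFactor = pow( max( dot( R, V ), 0.0 ), uniforms.material.shininess );

    float4 color = uniforms.material.ambient
                 + diffuseFactor * uniforms.material.diffuse
                 + specularFactor * uniforms.material.specular;
    return float4( color.rgb, 1.0 );
}
```

Supporting several lights is just a matter of looping over a light array in the uniforms buffer and accumulating the diffuse and specular terms per light.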

As explained before, the biggest benefit Metal provides over OpenGL in this regard is that we can dispatch all uniforms in a single batch, which is a big performance gain.

I’m assuming deferred rendering will be a lot easier to implement in Metal than it is in OpenGL, since handling framebuffers and attachments is very simple in the former. But that’s something I can’t tell for sure until I see it working.


Depth/Stencil State

Working with Depth/Stencil in Metal turned out to be a little bit more cumbersome than in OpenGL. Again, this has to do with the paradigm shift, but I still have the feeling that it could’ve been simpler (as it is with culling; see my comments below).

MTLDepthStencilDescriptor *depthStencilDescriptor = [MTLDepthStencilDescriptor new];
depthStencilDescriptor.depthCompareFunction = MTLCompareFunctionLess;
depthStencilDescriptor.depthWriteEnabled = YES;

auto depthStencilState = [_device newDepthStencilStateWithDescriptor:depthStencilDescriptor];
[getRenderEncoder() setDepthStencilState: depthStencilState];

Describing the depth/stencil state is done with the MTLDepthStencilDescriptor class, which provides options for things like the comparison function and read/write operations. Once described, we compile an object implementing the MTLDepthStencilState protocol, which in turn we pass to the render encoder in order to activate it.

While each depth/stencil state should be compiled only once, we can switch between states during the render pass based on the requirements of each object that we’re drawing.
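For instance, translucent objects are usually drawn with depth testing enabled but depth writes disabled, so they don’t occlude what’s behind them. A second, precompiled state for that case might look like this (a sketch following the snippet above, not taken from Crimild):

```objc
// Depth state for translucent geometry: test against opaque objects,
// but don't write depth so later translucent fragments still blend through
MTLDepthStencilDescriptor *translucentDescriptor = [MTLDepthStencilDescriptor new];
translucentDescriptor.depthCompareFunction = MTLCompareFunctionLess;
translucentDescriptor.depthWriteEnabled = NO;

id< MTLDepthStencilState > translucentDepthState =
    [_device newDepthStencilStateWithDescriptor: translucentDescriptor];

// later, while encoding translucent objects:
[getRenderEncoder() setDepthStencilState: translucentDepthState];
```

Both states can be created up front and swapped on the encoder as needed while drawing.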

Cull State

Culling is set in the render encoder too, but it’s much more direct than depth/stencil. It’s almost as simple as in OpenGL:

[getRenderEncoder() setFrontFacingWinding: MTLWindingCounterClockwise];
[getRenderEncoder() setCullMode: MTLCullModeBack];

Two functions are provided to define the winding and cull mode. By default, Crimild uses counter-clockwise winding and back-face culling, just as in OpenGL. No surprises here.

Alpha Blending/Testing

Unfortunately, Alpha Blending is not supported by Crimild’s MetalRenderer at the time of this writing. There was no need for alpha blending in Le Voyage, so I completely skipped this feature. At least some minimal support for alpha blending is expected in future releases, of course.

On the other hand, Alpha Testing was implemented at the fragment shader level, discarding fragments with low alpha values, which is pretty similar to its counterpart in OpenGL.
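As a sketch, an alpha test in an MSL fragment shader boils down to a call to discard_fragment(). The shader below reuses the structure of the unlit texture shader from earlier; the 0.1 threshold is an arbitrary assumption, not Crimild’s actual value:

```cpp
fragment float4 crimild_fragment_shader_alpha_test_sketch( VertexOut projectedVertex [[ stage_in ]],
                                                           texture2d< float > texture [[ texture( 0 ) ]] )
{
    constexpr sampler s( coord::normalized, address::repeat, filter::linear );
    float4 color = texture.sample( s, projectedVertex.textureCoords );

    // reject nearly-transparent fragments, like glAlphaFunc did in old OpenGL
    if ( color.a < 0.1 ) {
        discard_fragment();
    }

    return color;
}
```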

Are we there yet?

And so we reach the last step in the rendering call for single objects. At this point, we are able to render objects on the screen with textures and lighting using Metal. But the resulting frame still lacks the final touch, which is the most distinctive feature in Le Voyage: that old film effect that’s applied to the entire screen.

Next week we’ll talk about image effects in Metal, the final step in the rendering pipeline.

To be continued…

Shadow Mapping Demos

I’ve just uploaded some videos showing the shadow support in action. Check them out:



The Lightcycle demo is more interesting due to several factors. First, it’s a Lightcycle from Tron :). Secondly, it shows self-shadowing. And third, it displays several rendering artifacts that are yet to be solved. Notice how jagged the shadow is, since there is no blurring of the map itself. There is a little shadow acne too (mostly in the wheel section).

Shadow Mapping

Legends talk about a guy who wanted to make an engine with support for real-time shadows. And 10 years later, here it is:

[Screenshot: real-time shadow mapping in Crimild]

Truth be told, this is extremely experimental. I’m using a standard shadow map technique, where a depth map is rendered from the point of view of a light and then a shader determines if a pixel is lit or shaded based on that information.
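In shader terms, the lit/shaded decision comes down to comparing the fragment’s depth in light space against the value stored in the depth map. A minimal sketch of that test, written in MSL style for consistency with the rest of this post (the function name, the bias value and the exact coordinate mapping are assumptions, not Crimild’s actual code):

```cpp
// Sketch of the shadow map comparison for a single directional/spot light
float computeShadowFactor( float4 positionInLightSpace,
                           depth2d< float > shadowMap,
                           sampler shadowSampler )
{
    // perspective divide, then map from clip space into [0, 1] texture space
    float3 projected = positionInLightSpace.xyz / positionInLightSpace.w;
    float2 shadowCoords = projected.xy * 0.5 + 0.5;

    float storedDepth = shadowMap.sample( shadowSampler, shadowCoords );
    float currentDepth = projected.z * 0.5 + 0.5;

    // small depth bias to reduce shadow acne; the value is an arbitrary guess
    return ( currentDepth - 0.005 > storedDepth ) ? 0.0 : 1.0;
}
```

The returned factor would then scale the diffuse and specular contributions of that light, leaving ambient light untouched.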

The demos are not fancy, but I’ll improve that.


Shadows are supported by a new render pass object called “ForwardRenderPass” (yes, there is a “DeferredRenderPass” coming too). A light must be explicitly configured to cast shadows, making this feature completely optional.

On the TODO list are filtering for the shadow map itself (in order to achieve something like soft shadows), point lights casting shadows using cube maps, and a lot of bug fixing.