
Overview

Metal uses textures to draw and manipulate images. A texture is a structured collection of data, typically a two-dimensional array of texture elements (texels), each containing color data. A texture is drawn onto geometric primitives through texture mapping, and in the fragment stage a fragment function samples the texture to generate a color for each fragment.

In Metal, a texture is represented by an MTLTexture object. The MTLTexture object defines the texture's format, including its size and layout, the number of elements it contains, and how those elements are organized. Once a texture is created, its format and organization are fixed; only its contents can be changed later, by rendering to it or copying data into it.

The Metal framework itself cannot load image data from a file into a texture; it only allocates texture resources and provides methods for copying data to and from textures. You therefore have to write your own code, or use other frameworks (such as MetalKit, Image I/O, UIKit, or AppKit), to process image files. In particular, MetalKit's MTKTextureLoader can load textures directly. This article shows how to write a custom texture loader instead.
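For comparison, loading an image with MTKTextureLoader takes only a few lines. A minimal sketch, assuming an existing id<MTLDevice> device and a file url:

#import <MetalKit/MetalKit.h>

// Create a texture loader backed by the Metal device
MTKTextureLoader *loader = [[MTKTextureLoader alloc] initWithDevice:device];

// Load an image file directly into an id<MTLTexture>
NSError *error = nil;
id<MTLTexture> texture = [loader newTextureWithContentsOfURL:url
                                                     options:@{MTKTextureLoaderOptionSRGB : @NO}
                                                       error:&error];
if (!texture)
{
    NSLog(@"Failed to load texture: %@", error);
}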

Loading image resources

You need to create or update textures manually in cases like the following:

  • Image data in a custom format

  • Textures that must be created at runtime

  • Texture data received from a server, or texture contents that must be updated dynamically

Every Metal texture has a pixel format, described by an MTLPixelFormat value, which defines the layout of the pixel data in the texture. This example uses the MTLPixelFormatBGRA8Unorm pixel format: each pixel is 32 bits, organized into blue, green, red, and alpha channels of 8 bits each.

Before filling a Metal texture, you must convert the image data into the texture's pixel format. TGA files provide pixel data at either 32 or 24 bits per pixel. For 32-bit-per-pixel TGA files, the pixel data already matches BGRA8 and only needs to be copied. For 24-bit-per-pixel BGR images, a conversion is required: copy the blue, green, and red channels and set the alpha channel to 255.


// Calculate the number of bytes per pixel in the source TGA data (24 or 32 bits per pixel)
NSUInteger srcBytesPerPixel = tgaInfo->bitsPerPixel / 8;

// Initialize a source pointer with the source image data that's in BGR(A) form
uint8_t *srcImageData = ((uint8_t*)fileData.bytes +
                         sizeof(TGAHeader) +
                         tgaInfo->IDSize);

// Initialize a destination pointer to which you'll store the converted BGRA image data
uint8_t *dstImageData = mutableData.mutableBytes;

// For every row of the image
for(NSUInteger y = 0; y < _height; y++)
{
    // If bit 5 of the descriptor is not set, flip vertically
    // to transform the data to Metal's top-left texture origin
    NSUInteger srcRow = (tgaInfo->topOrigin) ? y : _height - 1 - y;

    // For every column of the current row
    for(NSUInteger x = 0; x < _width; x++)
    {
        // If bit 4 of the descriptor is set, flip horizontally
        // to transform the data to Metal's top-left texture origin
        NSUInteger srcColumn = (tgaInfo->rightOrigin) ? _width - 1 - x : x;

        // Calculate the index for the first byte of the pixel you're
        // converting in both the source and destination images
        NSUInteger srcPixelIndex = srcBytesPerPixel * (srcRow * _width + srcColumn);
        NSUInteger dstPixelIndex = 4 * (y * _width + x);

        // Copy the BGR channels from the source to the destination
        dstImageData[dstPixelIndex + 0] = srcImageData[srcPixelIndex + 0];
        dstImageData[dstPixelIndex + 1] = srcImageData[srcPixelIndex + 1];
        dstImageData[dstPixelIndex + 2] = srcImageData[srcPixelIndex + 2];

        // Copy the alpha channel if the source has one; otherwise set it to 255 (opaque)
        if(tgaInfo->bitsPerPixel == 32)
        {
            dstImageData[dstPixelIndex + 3] = srcImageData[srcPixelIndex + 3];
        }
        else
        {
            dstImageData[dstPixelIndex + 3] = 255;
        }
    }
}

_data = mutableData;

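For reference, the conversion loop reads a handful of TGA header fields (IDSize, topOrigin, rightOrigin, bitsPerPixel). A packed struct along these lines describes the layout; this is a sketch based on the standard TGA header format, not code from the article:

// A sketch of the TGA header layout the conversion loop depends on.
// The struct is packed so its fields line up with the bytes on disk.
typedef struct __attribute__ ((packed))
{
    uint8_t  IDSize;         // Size of the ID info following the header
    uint8_t  colorMapType;   // Whether this is a paletted image
    uint8_t  imageType;      // Image type: 0=none, 1=indexed, 2=rgb, 3=grey, +8=RLE

    int16_t  colorMapStart;  // Offset to the color map in the palette
    int16_t  colorMapLength; // Number of colors in the palette
    uint8_t  colorMapBpp;    // Number of bits per palette entry

    uint16_t xOrigin;        // X origin of the image
    uint16_t yOrigin;        // Y origin of the image
    uint16_t width;          // Width of the image in pixels
    uint16_t height;         // Height of the image in pixels
    uint8_t  bitsPerPixel;   // Bits per pixel: 24 (BGR) or 32 (BGRA)

    union
    {
        struct
        {
            uint8_t bitsPerAlpha : 4;
            uint8_t rightOrigin  : 1;   // Bit 4: pixel origin is on the right
            uint8_t topOrigin    : 1;   // Bit 5: pixel origin is at the top
            uint8_t reserved     : 2;
        };
        uint8_t descriptor;
    };
} TGAHeader;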

Create textures from texture descriptors

Use an MTLTextureDescriptor object to configure properties such as the size and pixel format of an MTLTexture object, then call the device's newTextureWithDescriptor: method to create the texture.


MTLTextureDescriptor *textureDescriptor = [[MTLTextureDescriptor alloc] init];

// Indicate that each pixel has a blue, green, red, and alpha channel, where each channel is
// an 8-bit unsigned normalized value (i.e. 0 maps to 0.0 and 255 maps to 1.0)
textureDescriptor.pixelFormat = MTLPixelFormatBGRA8Unorm;

// Set the pixel dimensions of the texture
textureDescriptor.width = image.width;
textureDescriptor.height = image.height;

// Create the texture from the device by using the descriptor
id<MTLTexture> texture = [_device newTextureWithDescriptor:textureDescriptor];


Metal creates the MTLTexture object and allocates memory for the texture data. This memory is uninitialized, so the next step is to copy the data into the texture. First declare an MTLRegion that covers the entire texture:


MTLRegion region = {
    { 0, 0, 0 },                   // MTLOrigin
    {image.width, image.height, 1} // MTLSize
};


Image data is usually stored in rows, and Metal needs to know the offset, in bytes, between rows of the source image. In this case the image data is tightly packed, so each row of pixels immediately follows the previous one:


NSUInteger bytesPerRow = 4 * image.width;


Call the texture's replaceRegion:mipmapLevel:withBytes:bytesPerRow: method to copy the pixel data from the image object into the texture.


[texture replaceRegion:region
            mipmapLevel:0
              withBytes:image.data.bytes
            bytesPerRow:bytesPerRow];


Map textures to geometric primitives

Textures cannot be rendered on their own; they must be mapped onto geometric primitives (in this example, a pair of triangles) that are output by the vertex stage and turned into fragments by the rasterizer. Each fragment needs to know which part of the texture applies to it. You define this mapping with texture coordinates, which map positions in the texture image to positions on the geometry's surface.

2D textures use normalized texture coordinates, which run from 0.0 to 1.0 in both the x and y directions. (0.0, 0.0) addresses the texture element at the first byte of the texture data (the top-left corner of the image); (1.0, 1.0) addresses the texture element at the last byte (the bottom-right corner).

Define a data structure to hold vertex data and texture coordinates:


typedef struct
{
    // Positions in pixel space. A value of 100 indicates 100 pixels from the origin/center.
    vector_float2 position;

    // 2D texture coordinate
    vector_float2 textureCoordinate;
} AAPLVertex;


In the vertex data, map the four corners of the quad to the four corners of the texture:


static const AAPLVertex quadVertices[] =
{
    // Pixel positions, Texture coordinates
    { {  250,  -250 },  { 1.f, 1.f } },
    { { -250,  -250 },  { 0.f, 1.f } },
    { { -250,   250 },  { 0.f, 0.f } },

    { {  250,  -250 },  { 1.f, 1.f } },
    { { -250,   250 },  { 0.f, 0.f } },
    { {  250,   250 },  { 1.f, 0.f } },
};


Define a RasterizerData structure with a textureCoordinate member, which will later be passed to the fragment shader:

struct RasterizerData
{
    // The [[position]] attribute qualifier of this member indicates this value is
    // the clip space position of the vertex when this structure is returned from
    // the vertex shader
    float4 position [[position]];

    // Since this member does not have a special attribute qualifier, the rasterizer
    // will interpolate its value with values of other vertices making up the triangle
    // and pass that interpolated value to the fragment shader for each fragment in
    // that triangle.
    float2 textureCoordinate;
};

In the vertex shader, write the texture coordinate into the textureCoordinate member to pass it on to the rasterization stage. The rasterizer interpolates that coordinate across the quad's two triangles to produce a value for every on-screen fragment.


out.textureCoordinate = vertexArray[vertexID].textureCoordinate;

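For context, that assignment sits inside a vertex function along these lines (a sketch following the naming in Apple's sample; the AAPLVertexInputIndex* buffer indices and the viewport-size buffer are assumptions):

vertex RasterizerData
vertexShader(uint vertexID [[vertex_id]],
             constant AAPLVertex *vertexArray [[buffer(AAPLVertexInputIndexVertices)]],
             constant vector_uint2 *viewportSizePointer [[buffer(AAPLVertexInputIndexViewportSize)]])
{
    RasterizerData out;

    // Convert the pixel-space position into clip-space coordinates
    float2 pixelSpacePosition = vertexArray[vertexID].position.xy;
    float2 viewportSize = float2(*viewportSizePointer);
    out.position = float4(0.0, 0.0, 0.0, 1.0);
    out.position.xy = pixelSpacePosition / (viewportSize / 2.0);

    // Pass the texture coordinate through to the rasterizer unchanged
    out.textureCoordinate = vertexArray[vertexID].textureCoordinate;

    return out;
}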

Compute fragment color

The color of each fragment is computed by sampling the texture. The fragment function samples the texture data at the interpolated texture coordinates to obtain a color value. In addition to the parameters passed in from the rasterization stage, it takes a texture2d colorTexture parameter that refers to the MTLTexture object:


fragment float4
samplingShader(RasterizerData in [[stage_in]],
               texture2d<half> colorTexture [[ texture(AAPLTextureIndexBaseColor) ]])


Sample the texel data using the texture's built-in sample() function. The sample() function takes two arguments: a sampler (textureSampler) that describes how to sample the texture, and texture coordinates (in.textureCoordinate) that describe which position in the texture to sample. It reads one or more texels and returns a color computed from them.

When the rendered area differs in size from the texture, the sampler can use different algorithms to decide which texel colors sample() should return. Set the mag_filter mode to specify how the sampler computes the returned color when the area is larger than the texture, and the min_filter mode for when the area is smaller. Setting both filters to linear makes the sampler average the colors of the texels around the given texture coordinate, producing a smoother output image.


constexpr sampler textureSampler (mag_filter::linear,
                                  min_filter::linear);

// Sample the texture to obtain a color
const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);

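Putting these pieces together, the complete fragment function looks roughly like this (a sketch; converting the half4 sample to float4 matches the function's declared return type):

fragment float4
samplingShader(RasterizerData in [[stage_in]],
               texture2d<half> colorTexture [[ texture(AAPLTextureIndexBaseColor) ]])
{
    constexpr sampler textureSampler (mag_filter::linear,
                                      min_filter::linear);

    // Sample the texture to obtain a color
    const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);

    // Return the sampled color as the fragment's output color
    return float4(colorSample);
}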

Setting drawing parameters

Encoding and submitting the draw command works the same way as when rendering basic primitives with a render pipeline. When encoding the command's arguments, you additionally set the fragment function's texture argument; the example uses the AAPLTextureIndexBaseColor index to identify the texture in both the Objective-C and Metal Shading Language code.


[renderEncoder setFragmentTexture:_texture
                          atIndex:AAPLTextureIndexBaseColor];

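With the texture bound, set the quad's vertices and encode the draw call for the two triangles, roughly as follows (a sketch; AAPLVertexInputIndexVertices follows the sample's naming and is an assumption):

// Send the quad's vertex data to the vertex function
[renderEncoder setVertexBytes:quadVertices
                       length:sizeof(quadVertices)
                      atIndex:AAPLVertexInputIndexVertices];

// Draw the two triangles that make up the quad
[renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle
                  vertexStart:0
                  vertexCount:6];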

Conclusion

This article introduced the steps for drawing and manipulating images with textures in Metal. A texture can be rendered only after it has been mapped onto geometry with texture mapping.

The first step is to load the image data and convert it into an MTLTexture object; then map the texture onto the geometry, set the filtering modes, sample the texture in the fragment function to compute color values, and finally encode and submit the draw command.