The purpose of this case is to understand how to load a TGA image, as well as PNG/JPG images.

Load the TGA image

Loading a TGA image will look like this

The overall case flow chart is shown below

The preparatory work

Before loading an image, the following preparations need to be made:

  • UI view controller class
  • Bridge header file shared between OC and Metal
  • CJLImage class
  • Creating a Metal file
  • Create a custom Render loop class

UI view controller class

Create a MTKView object in viewDidLoad, create a custom Render object, and set the view’s delegate to the render object. The process is the same as in Metal Entry 02: Loading triangles.

Bridge file between OC and Metal

Create the index enumerations (the vertex data index and the texture index) and the vertex data structure that are shared by the Metal file and OC

```objc
typedef enum CJLVertexInputIndex {
    // Vertex data
    CJLVertexInputIndexVertices = 0,
    // Viewport size
    CJLVertexInputIndexViewportSize = 1,
} CJLVertexInputIndex;

typedef enum CJLTextureIndex {
    CJLTextureIndexBaseColor = 0,
} CJLTextureIndex;

typedef struct {
    // Vertex position
    vector_float2 position;
    // Texture coordinate
    vector_float2 textureCoordinate;
} CJLVertex;
```

CJLImage class

This class is a utility class for converting a TGA file into a bitmap. TGA files are rarely loaded in day-to-day development, where PNG/JPG files are the norm, so the TGA file is converted into a bitmap; the essence is to copy the TGA file’s pixel data into the memory pointed to by a BGRA image data pointer. Readers who are interested can study the complete code linked at the end of the article.

Creating a Metal file

The structure contains vertex coordinates and texture coordinates; it is the output of the vertex shader and the input of the fragment shader

```metal
// Return value structure of the vertex shader
typedef struct {
    // Vertex coordinate in clip space
    float4 clipSpacePosition [[position]];
    // Texture coordinate
    float2 textureCoordinate;
} RasterizerData;
```

The following describes the execution flow of the two shader functions

  • The vertex shader function mainly normalizes the vertex coordinates and outputs the processed vertex coordinates together with the texture coordinates. After Metal’s primitive assembly and rasterization, the data is fed into the fragment shader. Its flow chart is as follows

```metal
vertex RasterizerData
vertexShader(uint vertexID [[vertex_id]],
             constant CJLVertex *vertexArray [[buffer(CJLVertexInputIndexVertices)]],
             constant vector_uint2 *viewportSizePointer [[buffer(CJLVertexInputIndexViewportSize)]])
{
    /* Process the vertex data:
       1) Normalize the vertex coordinates
       2) Pass the texture coordinate through to the return value */

    // 1. Define the output
    RasterizerData out;
    out.clipSpacePosition = vector_float4(0.0, 0.0, 0.0, 1.0);

    // Index into the vertex array to get the current vertex's xy position
    float2 pixelSpacePosition = vertexArray[vertexID].position.xy;

    // Convert viewportSizePointer from vector_uint2 to vector_float2
    vector_float2 viewportSize = vector_float2(*viewportSizePointer);

    // The output position of each vertex shader is in clip space (also known
    // as normalized device coordinates, NDC), where (-1,-1) is the lower left
    // corner of the viewport and (1,1) is the upper right corner.
    // To convert from a position in pixel space to a position in clip space,
    // divide the pixel coordinates by half the viewport size.
    out.clipSpacePosition.xy = pixelSpacePosition / (viewportSize / 2.0);
    out.clipSpacePosition.z = 0.0f;
    out.clipSpacePosition.w = 1.0f;

    // Pass the input texture coordinate straight through to the output
    out.textureCoordinate = vertexArray[vertexID].textureCoordinate;

    // Done! Pass the structure to the next stage of the pipeline
    return out;
}
```
  • The fragment shader function mainly fetches texels through a sampler; the sample function of a texture in Metal is equivalent to the built-in function texture2D in GLSL. The main flow chart is as follows

```metal
fragment float4
fragmentShader(RasterizerData in [[stage_in]],
               texture2d<half> colorTexture [[texture(CJLTextureIndexBaseColor)]])
{
    // When texture2d is declared without an access qualifier, the default is sample access.
    // Set up the sampler: linear filtering for both magnification and minification
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);

    // Sample the texel, the equivalent of the built-in texture2D function in GLSL.
    // In GLSL the filtering properties live in the state machine; in Metal they
    // are configured on the sampler object itself.
    const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);

    // Convert half4 (from texture2d<half>) to float4 and return it
    return float4(colorSample);
}
```

Create a custom Render loop class

Apple suggests writing the render loop as a separate class that handles Metal’s rendering and the view’s delegate events. Loading the TGA file is also handled in Render, which passes the processed vertex data and the texture image to the shaders, which draw and display them on the screen

The following focuses on loading the TGA file in Render

Render loop class

As can be seen from the overall flow chart, the functions in the Render class are mainly divided into two categories

  • InitWithMetalKitView function: initialization
  • MTKViewDelegate delegate method: Handles view delegate events

InitWithMetalKitView function

The flow chart for this function is shown below

It is mainly divided into the following four steps:

  • Initialize the GPU device: Obtain the permission to use the GPU from the incoming View
  • setupVertex function: sets up the vertex data
  • setupPipeLine function: sets up the render pipeline
  • setupTexture function: loads the TGA file

setupVertex is used to initialize the vertex data, which includes vertex coordinates and texture coordinates. The vertex coordinates are specified in object (pixel) space rather than in the -1 to 1 range, so they are normalized later in the vertex shader of the Metal file; the data is stored in an MTLBuffer object

The corresponding code is as follows

```objc
- (void)setupVertex {
    // Create an MTLBuffer from the vertex/texture coordinate data
    static const CJLVertex quadVertices[] = {
        // Pixel coordinates, texture coordinates
        { {  250, -250 }, { 1.f, 0.f } },
        { { -250, -250 }, { 0.f, 0.f } },
        { { -250,  250 }, { 0.f, 1.f } },

        { {  250, -250 }, { 1.f, 0.f } },
        { { -250,  250 }, { 0.f, 1.f } },
        { {  250,  250 }, { 1.f, 1.f } },
    };

    // Create our vertex buffer, initializing it with the quadVertices array
    _vertexBuffer = [_device newBufferWithBytes:quadVertices
                                         length:sizeof(quadVertices)
                                        options:MTLResourceStorageModeShared];

    // Calculate the number of vertices
    _numVertices = sizeof(quadVertices) / sizeof(CJLVertex);
}
```

The setupPipeLine function is mainly related to initialization of the rendering pipeline. The corresponding flow chart is shown below

Initialization of the render pipeline is divided into the following sections

  • Loading the Metal file
  • Configure render pipes
  • Create the render pipeline object
  • Sets the commandQueue command object

All of these steps have been explained in previous cases, so the relevant operations are extracted into a separate method and will not be explained in detail again. The code is as follows

```objc
- (void)setupPipeLine {
    // 1. Create the render pipeline
    // From the project's .metal file, create a library
    id<MTLLibrary> defaultLibrary = [_device newDefaultLibrary];
    // Vertex shader function
    id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexShader"];
    // Fragment shader function
    id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"fragmentShader"];

    // 2. Configure the descriptor used to create the render pipeline state
    MTLRenderPipelineDescriptor *renderPipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
    // Name of the pipeline
    renderPipelineDescriptor.label = @"Texturing Pipeline";
    // Programmable function that processes each vertex during rendering
    renderPipelineDescriptor.vertexFunction = vertexFunction;
    // Programmable function that processes each fragment during rendering
    renderPipelineDescriptor.fragmentFunction = fragmentFunction;
    // Pixel format of the pipeline's color attachment
    renderPipelineDescriptor.colorAttachments[0].pixelFormat = cjlMTKView.colorPixelFormat;

    // 3. Create the render pipeline object & check whether it was created successfully
    NSError *error;
    _pipelineState = [_device newRenderPipelineStateWithDescriptor:renderPipelineDescriptor
                                                             error:&error];
    if (!_pipelineState) {
        NSLog(@"Failed to create pipeline state, error %@", error);
    }

    // 4. Create the command queue
    _commandQueue = [_device newCommandQueue];
}
```

The setupTexture function, which is the focus of this example, loads the TGA image as follows

Loading a TGA image is divided into the following steps

  • Get the TGA file: mainly convert the TGA file into a CJLImage object, that is, convert the texture picture into a bitmap
```objc
// Get the URL of the TGA file
NSURL *imageFileLocation = [[NSBundle mainBundle] URLForResource:@"circle"
                                                   withExtension:@"tga"];
// Convert the TGA file into a CJLImage object
CJLImage *image = [[CJLImage alloc] initWithTGAFileAtLocation:imageFileLocation];
if (!image) {
    NSLog(@"Failed to create the image from: %@", imageFileLocation.absoluteString);
    return;
}
```
  • Create the texture descriptor & texture object: a texture object is generated from a texture descriptor and the decompressed bitmap, so the descriptor object’s properties need to be set first
```objc
// CJLImage --> texture (that is, turn the bitmap into a texture object)
// 1. Create the texture descriptor
MTLTextureDescriptor *textureDescriptor = [[MTLTextureDescriptor alloc] init];
// 2. Set the bitmap info: each pixel has blue, green, red and alpha channels,
//    each an 8-bit unsigned normalized value (0 maps to 0.0, 255 maps to 1.0)
textureDescriptor.pixelFormat = MTLPixelFormatBGRA8Unorm;
// Set the pixel dimensions of the texture, i.e. the texture resolution
textureDescriptor.width = image.width;
textureDescriptor.height = image.height;
// 3. Create the texture object from the device using the descriptor
_texture = [_device newTextureWithDescriptor:textureDescriptor];
```
  • Copy the image into the texture object: before copying, you first need to calculate the number of bytes per row and set the texture’s pixel region. MTLRegion is defined as follows; it contains two struct members: origin, the starting coordinate (x, y, z), and size, the dimensions (width, height, depth)
```objc
typedef struct {
    MTLOrigin origin; // x, y, z
    MTLSize size;     // width, height, depth
} MTLRegion;
```

The image is then copied into the texture object using the replaceRegion:mipmapLevel:withBytes:bytesPerRow: function

```objc
// Number of bytes per row: 4 bytes (BGRA) per pixel
NSUInteger bytesPerRow = 4 * image.width;

// The MTLRegion structure identifies a specific region of the texture.
// The demo fills the entire texture with image data, so the pixel region
// covers the whole texture, i.e. it is equal to the texture's size.
MTLRegion region = {
    {0, 0, 0},                      // origin
    {image.width, image.height, 1}, // size
};

/*
 - (void)replaceRegion:(MTLRegion)region
           mipmapLevel:(NSUInteger)level
             withBytes:(const void *)pixelBytes
           bytesPerRow:(NSUInteger)bytesPerRow;

 Parameter 1 - region: the position of the pixel region within the texture
 Parameter 2 - level: zero-based value specifying the target mipmap level;
                      if the texture has no mipmaps, use 0
 Parameter 3 - pixelBytes: pointer to the image bytes to be copied
 Parameter 4 - bytesPerRow: stride in bytes between rows of source data;
                      for compressed pixel formats, the stride from the start
                      of one block row to the start of the next
 */
// Copy the image bytes into the texture
[_texture replaceRegion:region
            mipmapLevel:0
              withBytes:image.data.bytes
            bytesPerRow:bytesPerRow];
```

MTKViewDelegate delegate method

There are two delegate methods; here we mainly illustrate the drawing method drawInMTKView:, whose flow chart is as follows

Apart from the data-passing step, the steps in the flowchart are exactly the same as those in Metal Case 03: Rendering a large amount of vertex data

There are three kinds of data passed to the shaders:

1) Vertex data: contains vertex coordinates and texture coordinates. Since the vertex data is stored in a buffer, it is passed to the vertex shader through setVertexBuffer.
2) Viewport size data: passed to the vertex shader through setVertexBytes.
3) Texture image: to make the texture object available to the GPU, it is passed to the fragment shader function through setFragmentTexture; the sampler then reads its pixels.

The code for passing the data is as follows

```objc
/* Three kinds of data need to be passed:
   1) Vertex data (vertex coordinates + texture coordinates)
   2) Viewport size
   3) Texture image */
// Pass the vertex buffer to the vertex shader
[commandEncoder setVertexBuffer:_vertexBuffer
                         offset:0
                        atIndex:CJLVertexInputIndexVertices];
// Pass the viewport size to the vertex shader
[commandEncoder setVertexBytes:&_viewportSize
                        length:sizeof(_viewportSize)
                       atIndex:CJLVertexInputIndexViewportSize];
// Pass the texture object to the fragment shader
// (i.e. the fragment shader function in the .metal file)
[commandEncoder setFragmentTexture:_texture
                           atIndex:CJLTextureIndexBaseColor];
```

Load PNG/JPG images

Loading a PNG/JPG image looks like this

The overall case flow chart is shown below

Compared with loading the TGA image, you only need to replace the setupTexture function with a setupTexturePNG function in the original code. The setupTexturePNG function is highlighted below

The setupTexturePNG function is used to load PNG/JPG images. The loading process is shown in the figure below

The loadImage function is not new: the setupTexture section of GLSL Loading Images in OpenGL ES already explained how to decompress a PNG/JPG image into a bitmap, mainly by redrawing the image through a CGContextRef object to decompress it into bitmap data. The decompression process is shown in the figure below

```objc
- (void)setupTexturePNG {
    // 1. Get the image
    UIImage *image = [UIImage imageNamed:@"mouse.jpg"];

    // 2. Create the texture descriptor
    MTLTextureDescriptor *textureDescriptor = [[MTLTextureDescriptor alloc] init];
    // Each pixel has red, green, blue and alpha channels, each an 8-bit
    // unsigned normalized value (0 maps to 0.0, 255 maps to 1.0)
    textureDescriptor.pixelFormat = MTLPixelFormatRGBA8Unorm;
    // Set the pixel dimensions of the texture
    textureDescriptor.width = image.size.width;
    textureDescriptor.height = image.size.height;

    // 3. Create the texture object using the texture descriptor
    _texture = [_device newTextureWithDescriptor:textureDescriptor];

    // 4. Create the MTLRegion object.
    // The MTLRegion structure identifies a specific region of the texture;
    // the demo fills the whole texture, so the region equals the texture size.
    MTLRegion region = {
        {0, 0, 0},
        {image.size.width, image.size.height, 1},
    };

    // 5. Decompress the UIImage into binary bitmap data before uploading
    Byte *imageBytes = [self loadImage:image];

    // 6. Copy the bitmap data into the texture object
    if (imageBytes) {
        [_texture replaceRegion:region
                    mipmapLevel:0
                      withBytes:imageBytes
                    bytesPerRow:4 * image.size.width];
        // Release the bitmap memory
        free(imageBytes);
        imageBytes = NULL;
    }
}

- (Byte *)loadImage:(UIImage *)image {
    // 1. Convert UIImage to CGImageRef
    CGImageRef spriteImage = image.CGImage;

    // 2. Read the image size
    size_t width = CGImageGetWidth(spriteImage);
    size_t height = CGImageGetHeight(spriteImage);

    // 3. Allocate memory for the bitmap (4 bytes per pixel: RGBA)
    Byte *spriteData = (Byte *)calloc(width * height * 4, sizeof(Byte));

    // 4. Create a bitmap context
    CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height,
                                                       8, width * 4,
                                                       CGImageGetColorSpace(spriteImage),
                                                       kCGImageAlphaPremultipliedLast);

    // 5. Flip the context vertically (Core Graphics' origin is at the bottom
    //    left, while the texture expects rows from the top), then draw the image
    CGRect rect = CGRectMake(0, 0, width, height);
    CGContextTranslateCTM(spriteContext, 0, rect.size.height);
    CGContextScaleCTM(spriteContext, 1.0, -1.0);
    CGContextDrawImage(spriteContext, rect, spriteImage);

    // 6. Release the context
    CGContextRelease(spriteContext);

    return spriteData;
}
```

Conclusion

Based on the analysis of the TGA and PNG/JPG loading processes, the image loading process can be divided into the following steps

  • 1. Decompress the texture image into a bitmap
  • 2. Create an MTLTextureDescriptor object (the texture descriptor) and set its pixelFormat (pixel information) and width/height (size information)
  • 3. Create an MTLTexture object (the texture object) using the texture descriptor
  • 4. Create an MTLRegion object to identify the pixel region of the texture
  • 5. Copy the bitmap data decompressed from the texture image into the texture object with the texture’s replaceRegion:mipmapLevel:withBytes:bytesPerRow: function
  • 6. In the draw callback drawInMTKView:, pass the texture to the fragment shader function through the MTLRenderCommandEncoder object’s setFragmentTexture:atIndex: function

The full code is at Github:

  • 20_1_Metal_ Load TGA picture _OC
  • 20_2_Metal_ Load PNG picture _OC