
Overview

Metal optimizes textures for fast GPU access, but does not allow the CPU to access a texture’s contents directly. When an app needs to change or read a texture’s contents, it must ask Metal to copy the data between the texture and CPU-accessible memory (system memory, or a Metal buffer allocated with shared storage). The example in this article configures a drawable texture with read access and copies rendered pixel data from that texture into a Metal buffer.

Run the sample, then click or tap a single point to read the pixel data stored at that point. Alternatively, drag out a rectangle to capture pixel data for an area of the screen. The example converts your selection into a rectangle in the drawable texture’s coordinate system and renders the image to the texture. Finally, it copies the pixel data in the selected rectangle into a buffer for further processing.
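The selection handling described above has to cope with drags that start or end outside the drawable. A minimal C sketch of that step, with hypothetical names (the sample’s actual helper is not shown in this article), might normalize the two drag points into a top-left-origin rectangle and clamp it to the texture’s bounds:

```c
#include <assert.h>

typedef struct { long x, y; } PixelPoint;
typedef struct { long x, y, w, h; } PixelRect;

/* Build a top-left-origin rectangle from two drag points (already in
 * drawable pixel coordinates) and clamp it to the texture's bounds. */
static PixelRect clampedSelection(PixelPoint a, PixelPoint b,
                                  long texWidth, long texHeight)
{
    PixelRect r;
    r.x = a.x < b.x ? a.x : b.x;
    r.y = a.y < b.y ? a.y : b.y;
    r.w = (a.x < b.x ? b.x : a.x) - r.x;
    r.h = (a.y < b.y ? b.y : a.y) - r.y;

    /* Clamp the origin into the texture. */
    if (r.x < 0) { r.w += r.x; r.x = 0; }
    if (r.y < 0) { r.h += r.y; r.y = 0; }

    /* Clamp the extent; a click (zero-size drag) still reads one pixel. */
    if (r.x + r.w > texWidth)  r.w = texWidth  - r.x;
    if (r.y + r.h > texHeight) r.h = texHeight - r.y;
    if (r.w < 1) r.w = 1;
    if (r.h < 1) r.h = 1;
    return r;
}
```

A drag from (-10, 5) to (50, 40) over a 64×32 drawable, for example, yields the rectangle (0, 5, 50, 27).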

Configure read permissions for drawable textures

By default, the MetalKit view creates a drawable texture that is only used for rendering and cannot be accessed by other Metal commands. The following code creates a view that turns on read access to the texture. The example needs to get the texture every time the user selects part of the view, so the code configures the Metal layer of the view to wait indefinitely for new drawable objects.


_view.framebufferOnly = NO;

((CAMetalLayer*)_view.layer).allowsNextDrawableTimeout = NO;

_view.colorPixelFormat = MTLPixelFormatBGRA8Unorm;


Configuring read access on a drawable texture prevents Metal from applying some optimizations, so change the drawable configuration only when necessary. Similarly, performance-sensitive apps should not configure the view to wait indefinitely for a drawable.

Determine the pixels to copy

The AAPLViewController class manages user interactions. AppKit and UIKit deliver events with locations when the user interacts with the view. To determine which pixels to copy from the drawable texture, the app converts these view coordinates into Metal’s texture coordinate system.

The code to convert between view coordinates and texture coordinates varies from platform to platform due to graphical coordinate system and API differences.

On macOS, the view’s convertPointToBacking: method converts the position to a pixel position in the backing store, and a coordinate transformation then adjusts the origin and flips the Y axis.


CGPoint bottomUpPixelPosition = [_view convertPointToBacking:event.locationInWindow];

CGPoint topDownPixelPosition = CGPointMake(bottomUpPixelPosition.x,
_view.drawableSize.height - bottomUpPixelPosition.y);


On iOS, the app reads the view’s contentScaleFactor and applies that scale to the view coordinates. iOS views and Metal textures use the same coordinate convention, so there is no need to move the origin or flip the Y axis.

- (CGPoint)pointToBacking:(CGPoint)point
{
    CGFloat scale = _view.contentScaleFactor;

    CGPoint pixel;
    pixel.x = point.x * scale;
    pixel.y = point.y * scale;

    // Round the pixel values down to put them on a well-defined grid.
    pixel.x = (int64_t)pixel.x;
    pixel.y = (int64_t)pixel.y;

    // Add 0.5 to move to the center of the pixel.
    pixel.x += 0.5f;
    pixel.y += 0.5f;

    return pixel;
}
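The same scale-snap-center arithmetic can be checked in plain C. This is a sketch of the conversion only (the struct and function names here are illustrative, not part of the sample):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { double x, y; } Point2D;

/* Scale a view-space point into pixel space, snap it down onto the
 * pixel grid, then add 0.5 to address the center of that pixel --
 * the same steps as the Objective-C method above. */
static Point2D pointToBacking(Point2D point, double scale)
{
    Point2D pixel = { point.x * scale, point.y * scale };

    /* Truncate toward zero to land on a well-defined grid. */
    pixel.x = (double)(int64_t)pixel.x;
    pixel.y = (double)(int64_t)pixel.y;

    /* Move to the pixel's center. */
    pixel.x += 0.5;
    pixel.y += 0.5;
    return pixel;
}
```

For a 2x scale factor, the view point (10.7, 3.2) maps to the pixel center (21.5, 6.5).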

Render pixel data

When the user selects a rectangle in the view, the view controller calls the renderAndReadPixelsFromView:withRegion: method to render the drawable and copy its contents into a Metal buffer.

Create a new command buffer and call a utility method to encode the render pass.


id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];

// Encode a render pass to render the image to the drawable texture.

[self drawScene:view withCommandBuffer:commandBuffer];


After encoding the render pass, the app calls another method to encode commands that copy a portion of the rendered texture. The example encodes these copy commands before presenting the drawable, because the system discards the texture’s contents after presentation.


id<MTLTexture> readTexture = view.currentDrawable.texture;

MTLOrigin readOrigin = MTLOriginMake(region.origin.x, region.origin.y, 0);

MTLSize readSize = MTLSizeMake(region.size.width, region.size.height, 1);

const id<MTLBuffer> pixelBuffer = [self readPixelsWithCommandBuffer:commandBuffer

                                                        fromTexture:readTexture

                                                           atOrigin:readOrigin

                                                           withSize:readSize];


Copy the pixel data to the buffer

The renderer’s readPixelsWithCommandBuffer:fromTexture:atOrigin:withSize: method encodes the commands that copy the texture. Because the example passes the same command buffer to this method, Metal encodes these new commands after the render pass. Metal automatically manages dependencies on the target texture and ensures that rendering completes before the texture data is copied.

First, the method computes the size of the Metal buffer needed to hold the pixel data by multiplying the size of a pixel (in bytes) by the width and height of the region. It also computes the number of bytes per row, which is needed later when the data is copied (this example adds no padding at the end of each row). Then it asks the Metal device object to create a new Metal buffer, specifying the shared storage mode so that the app can read the buffer’s contents later.


NSUInteger bytesPerPixel = sizeofPixelFormat(texture.pixelFormat);

NSUInteger bytesPerRow   = size.width * bytesPerPixel;

NSUInteger bytesPerImage = size.height * bytesPerRow;

_readBuffer = [texture.device newBufferWithLength:bytesPerImage 
                                        options:MTLResourceStorageModeShared];

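The size arithmetic above is easy to restate in C. A minimal sketch, assuming the BGRA8Unorm format the view was configured with (4 bytes per pixel) and tightly packed rows with no trailing padding, as the article states:

```c
#include <assert.h>
#include <stddef.h>

/* BGRA8Unorm stores one byte per channel, so 4 bytes per pixel. */
enum { kBytesPerBGRA8Pixel = 4 };

/* Bytes in one tightly packed row of the copied region. */
static size_t bytesPerRow(size_t width)
{
    return width * kBytesPerBGRA8Pixel;
}

/* Total bytes for the whole region: rows are packed back to back. */
static size_t bytesPerImage(size_t width, size_t height)
{
    return height * bytesPerRow(width);
}
```

A 256×128 region, for instance, needs 1024 bytes per row and 131,072 bytes in total.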

Next, create an MTLBlitCommandEncoder object, which provides commands that copy data between Metal resources, fill resources with data, and perform similar resource-related tasks that don’t directly involve computation or rendering. The example encodes a blit command that copies the texture data to the beginning of the new buffer, and then ends the blit pass.


id <MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];

[blitEncoder copyFromTexture:texture

                 sourceSlice:0

                 sourceLevel:0

                sourceOrigin:origin

                  sourceSize:size

                    toBuffer:_readBuffer

           destinationOffset:0

      destinationBytesPerRow:bytesPerRow

    destinationBytesPerImage:bytesPerImage];

[blitEncoder endEncoding];


Finally, the command buffer is committed, and waitUntilCompleted is called to wait for the GPU to finish the render and blit commands. Once this call returns, the buffer contains the requested pixel data. In real-time apps, waiting synchronously like this unnecessarily reduces parallelism between the CPU and the GPU.


[commandBuffer commit];

// The app must wait for the GPU to complete the blit pass before it can

// read data from _readBuffer.

[commandBuffer waitUntilCompleted];


Read pixels from the buffer

Call the buffer’s contents method to get a pointer to the pixel data.


AAPLPixelBGRA8Unorm *pixels = (AAPLPixelBGRA8Unorm *)pixelBuffer.contents;


The example copies the buffer’s data into an NSData object and passes it to another method that initializes an AAPLImage object. For more information about AAPLImage, see Texture Creation and Texture Sampling for the Metal Framework.


// Create an `NSData` object and initialize it with the pixel data.

// Use the CPU to copy the pixel data from the `pixelBuffer.contents`

// pointer to `data`.

NSData *data = [[NSData alloc] initWithBytes:pixels length:pixelBuffer.length];

// Create a new image from the pixel data.

AAPLImage *image = [[AAPLImage alloc] initWithBGRA8UnormData:data

                                                       width:readSize.width

                                                      height:readSize.height];

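Because the copied buffer is tightly packed, an individual pixel can be addressed by row-major index. A C sketch of that lookup, with a struct mirroring the sample’s AAPLPixelBGRA8Unorm (the channel order here is assumed for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One byte per channel, blue first -- an assumed mirror of the
 * sample's AAPLPixelBGRA8Unorm struct. */
typedef struct { uint8_t blue, green, red, alpha; } PixelBGRA8;

/* Look up the pixel at (x, y) in a tightly packed buffer copied from
 * a texture region that is `width` pixels wide: each row holds
 * exactly `width` pixels, so the row offset is y * width. */
static PixelBGRA8 pixelAt(const PixelBGRA8 *pixels, size_t width,
                          size_t x, size_t y)
{
    return pixels[y * width + x];
}
```

With a 3-pixel-wide region, for example, the pixel at (2, 1) lives at index 1 * 3 + 2 = 5 in the buffer.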

The renderer returns this image object to the view controller for further processing. What the view controller does with it varies by operating system: on macOS, the sample writes the image to the file ~/Desktop/ReadPixelsImage.tga, and on iOS, the sample adds the image to the photo library.

Conclusion

This article showed how to get pixel data from a rendered texture, that is, how the CPU can read back the result of GPU rendering. You first enable read access on the drawable texture, then render to it, copy the rendered pixels into a shared buffer, and finally read the data from that buffer on the CPU.

Download the sample code for this article