Preface

In everyday development we often use UIImage to load images in JPG, PNG, and other formats, but before such an image can be displayed its compressed data must be decompressed into a bitmap. Image decompression is exactly this process of decoding a JPG or PNG image into a bitmap. Let's explore it in this article.

1. Image texture mapping

The earlier article on OpenGL ES preliminaries described the rendering process as follows:

[Figure: the OpenGL ES rendering process]

After we get the texture data of the image, we need to do two things to display the texture on the screen:

1. Pass the texture coordinates of the image to the vertex shader as an attribute, and bridge them from the vertex shader to the fragment shader

2. Pass the image texture data to the fragment shader through a uniform sampler; the fragment shader samples the texture and fills in the color

When filling in the texture color, the texture must be mapped onto the geometry coordinate by coordinate. By default, texture coordinates run from (0, 0) at the lower left corner to (1, 1) at the upper right corner, and each vertex is mapped to its corresponding texture coordinate.
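A minimal sketch of such a shader pair for OpenGL ES 2.0 (the attribute, varying, and uniform names here are illustrative, not taken from the original article):

```glsl
// Vertex shader: receives the texture coordinate as an attribute
// and bridges it to the fragment shader through a varying.
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

void main() {
    gl_Position = a_position;
    v_texCoord = a_texCoord;
}
```

```glsl
// Fragment shader: receives the texture data through a uniform
// sampler and fills in the color for each fragment.
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;

void main() {
    gl_FragColor = texture2D(u_texture, v_texCoord);
}
```

The vertex shader forwards a_texCoord through the varying v_texCoord; the fragment shader then samples u_texture at that coordinate to produce the fragment color.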

When the texture coordinates correspond one-to-one with the vertex coordinates, the image displays correctly. If the texture coordinate mapping is wrong, the image may come out flipped or inverted, or even scrambled; an image rendered upside down is a typical symptom of incorrect texture coordinates.

2. Image decompression

Before explaining image decompression, let’s understand a few concepts:

Bitmap:

Also called a pixel map or raster map, a bitmap records the color, depth, transparency, and other information of every pixel of a picture. These pixels, arranged according to certain rules, form the picture we see. The advantage of a bitmap is that it records the picture's information completely; the disadvantage is that the file is large, which is why bitmaps are generally compressed into formats such as JPG and PNG.

Lossy compression:

It does not record the picture's information completely. Based on how human eyes perceive the world, it discards part of the color information the eye would not notice and replaces it with neighboring colors. Most pictures can therefore be restored convincingly, but in some cases distortion appears. A common lossy format is JPG.

Lossless compression:

Lossless compression records the color information of the image completely, but areas of identical color are compressed into a single record, so the image can be restored more faithfully. However, since the range of colors that can be stored is limited, slight distortion is still possible. A common lossless format is PNG.

In development, most of the images we use are in JPG or PNG format, but before they can actually be displayed they must be decompressed into bitmaps and rendered to the screen. The process of decompressing an image is:

  • Decompress the JPG/PNG image to obtain the image information
  • Redraw the bitmap from the obtained image information; this bitmap is the texture data
  • Load the texture data, pass it to the fragment shader, and render it to the screen

The Core Graphics framework on iOS provides the methods needed to decompress an image:

```objc
// 1. Get the CGImage and its dimensions from the UIImage
UIImage *image = [UIImage imageNamed:@"fly"];
CGImageRef cgImageRef = [image CGImage];
GLuint width  = (GLuint)CGImageGetWidth(cgImageRef);
GLuint height = (GLuint)CGImageGetHeight(cgImageRef);
CGRect rect = CGRectMake(0, 0, width, height);

// 2. Get the image color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// 3. Allocate memory for the bitmap (4 bytes per pixel: RGBA)
void *imageData = malloc(width * height * 4);

/*
 CGBitmapContextCreate(void * __nullable data, size_t width, size_t height,
                       size_t bitsPerComponent, size_t bytesPerRow,
                       CGColorSpaceRef space, uint32_t bitmapInfo)
 data:             the memory block the bitmap is drawn into
 width/height:     size of the bitmap in pixels
 bitsPerComponent: number of bits per color component (8 here)
 bytesPerRow:      number of bytes per row (width * 4 here)
 bitmapInfo:       bitmap layout; RGBA here, i.e. kCGImageAlphaPremultipliedLast
 */
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

// 4. Flip the context vertically: Core Graphics' origin is at the top left,
//    while OpenGL ES texture coordinates start at the bottom left
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGColorSpaceRelease(colorSpace);

// 5. Clear the drawing area before drawing to prevent residual data, then draw
CGContextClearRect(context, rect);
CGContextDrawImage(context, rect, cgImageRef);

// 6. Create the texture and upload the bitmap data
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, imageData);

// 7. Set the texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindTexture(GL_TEXTURE_2D, 0); // rebind the texture target to 0
CGContextRelease(context);       // release the context
free(imageData);                 // release the image data area
```


3. Texture flipping diagram

The comments in the code above mention that the texture needs to be flipped vertically. The flip proceeds as follows:

[Figure: texture flipping steps 1 → 2 → 3]

  • First, translate the original image by +1 along the y axis (step 1 → 2 in the figure)
  • Then scale the texture by -1 along the y axis, which flips the image around the x axis (step 2 → 3)
  • The texture coordinates now correspond to the image coordinates
