This article introduces how to render an image with OpenGL ES. It covers the basic concepts, how to render textures with GLKit, and how to render textures with shaders written in GLSL.

Preface

OpenGL (Open Graphics Library) is a specification developed and maintained by the Khronos Group, a graphics hardware and software industry consortium focused on open standards in graphics and multimedia. It is hardware independent: it defines the APIs we use to manipulate graphics and images, but OpenGL itself is not an implementation.

OpenGL ES (OpenGL for Embedded Systems) is a subset of OpenGL designed for embedded devices such as mobile phones, PDAs, and game consoles. The specification is also developed and maintained by the Khronos Group.

OpenGL ES removes complex primitives such as quads (GL_QUADS) and polygons (GL_POLYGON), as well as many features that are not strictly necessary, keeping only the most useful core. It can be understood as a streamlined specification that supports the most essential features of OpenGL on mobile platforms.

iOS supports OpenGL ES 1.0, 2.0, and 3.0. OpenGL ES 3.0 adds some new features, but besides requiring iOS 7.0 or later, it is only supported on the iPhone 5S and newer devices. To cover existing devices, this article mainly uses OpenGL ES 2.0.

Note: OpenGL ES in the following refers to OpenGL ES 2.0.

1. Concepts

1. What is a cache

OpenGL ES runs partly on the CPU and partly on the GPU. To coordinate the data exchange between these two parts, the concept of buffers (referred to as caches in this article) is defined. The CPU and GPU each control their own memory areas; a cache avoids copying data back and forth between these two areas and improves efficiency. A cache is essentially a contiguous block of RAM.

2. Meaning of texture rendering

A texture is a cache of texel values used to hold an image's color data, and rendering is the process of turning data into an image. Texture rendering, then, is the process of generating an image from data such as color values stored in memory.

3. Coordinate system

1. OpenGL ES coordinate system

The OpenGL ES coordinate system ranges from -1 to 1. It is a three-dimensional coordinate system, usually represented by X, Y, and Z. The positive Z axis is pointing out of the screen. Without considering the Z-axis, the lower left corner is (-1, -1, 0) and the upper right corner is (1, 1, 0).

2. Texture coordinate system

The texture coordinate system ranges from 0 to 1 and is two-dimensional; the horizontal axis is called the S axis and the vertical axis the T axis. Within this system, the horizontal coordinate of a point is usually written U and the vertical coordinate V. The lower left corner is (0, 0) and the upper right corner is (1, 1).

Note: the (0, 0) point of the UIKit coordinate system is in the upper left corner, so the vertical axis of the UIKit coordinate system runs opposite to the vertical axis of the texture coordinate system.
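
For example, if you already have a point in normalized UIKit-style coordinates (origin in the upper left), converting it to a texture coordinate only requires flipping the vertical component. A minimal sketch (the helper name is made up for illustration):

// Illustrative helper: convert a normalized UIKit-style point (origin top-left)
// into a texture coordinate (origin bottom-left) by flipping the vertical axis.
static inline CGPoint TextureCoordFromUIKitPoint(CGPoint normalizedUIKitPoint) {
    return CGPointMake(normalizedUIKitPoint.x, 1.0 - normalizedUIKitPoint.y);
}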

4. Texture-related concepts

  • Texel: after an image is loaded into a texture cache, each of its pixels becomes a texel. Texture coordinates range from 0 to 1, and any number of texels may fall within this unit length.
  • Rasterizing: A rendering step that converts geometry data into fragments.
  • Fragment: a colored pixel in viewport coordinates. When no texture is used, the fragment color is calculated from the object's vertices; when a texture is used, it is calculated from texels.
  • Mapping: how vertices and texels are aligned, i.e. mapping the vertex coordinates (X, Y, Z) to the texture coordinates (U, V).
  • Sampling: the process of finding the corresponding texel for each fragment according to the (U, V) coordinates computed after vertex mapping.
  • Frame Buffer: A Buffer that receives render results and specifies an area for the GPU to store render results. More colloquially, it can be understood as the area that stores the final frame displayed on the screen.

Note: (U, V) may exceed the range of 0 to 1; glTexParameteri() needs to be configured to define how the S axis and T axis wrap in that case.
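
For instance, if you want coordinates outside 0 ~ 1 to repeat the texture rather than clamp to the edge, the wrap modes might be configured like this (a sketch; textureID is assumed to be a texture you have already generated, and the loading code later in this article uses GL_CLAMP_TO_EDGE instead):

glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // repeat along the S axis
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // repeat along the T axis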

5. How to use a cache

In practice, we need a variety of caches. For example, before rendering a texture, you need to generate a texture cache that holds the image data. The general process of using a cache can be divided into seven steps:

  1. Generate: create a cache identifier with glGenBuffers()
  2. Bind: bind a cache for subsequent operations with glBindBuffer()
  3. Buffer Data: copy data from CPU memory into the cache with glBufferData() / glBufferSubData()
  4. Enable or Disable: set whether the cached data will be used in subsequent rendering with glEnableVertexAttribArray() / glDisableVertexAttribArray()
  5. Set Pointers: tell OpenGL ES the type of the cached data and the offset of each attribute with glVertexAttribPointer()
  6. Draw: draw with the cached data using glDrawArrays() / glDrawElements()
  7. Delete: delete the cache and release resources with glDeleteBuffers()

These 7 steps are very important. For now, just keep them in mind; we will use them again and again in the actual examples.
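
For reference, here is a compressed sketch of the seven steps for drawing a single triangle, assuming vertices is a plain C array of GLfloat (X, Y, Z) positions and positionSlot is a valid attribute location (the real, complete usage appears in the GLKit example below):

GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);                                            // 1. Generate
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);                               // 2. Bind
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); // 3. Buffer data
glEnableVertexAttribArray(positionSlot);                                   // 4. Enable
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, 0, NULL);       // 5. Set pointers
glDrawArrays(GL_TRIANGLES, 0, 3);                                          // 6. Draw
glDeleteBuffers(1, &vertexBuffer);                                         // 7. Delete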

6. OpenGL ES context

OpenGL ES is a state machine. Its configuration information is stored in a context, and the values persist until they are modified. We can also create multiple contexts and switch between them by calling [EAGLContext setCurrentContext:context].
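
For example, a sketch of creating two contexts and switching between them (the variable names are illustrative):

EAGLContext *contextA = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
EAGLContext *contextB = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

[EAGLContext setCurrentContext:contextA]; // subsequent GL calls use contextA's state
// ... issue rendering commands for contextA ...
[EAGLContext setCurrentContext:contextB]; // switch; contextB keeps its own state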

7. OpenGL ES primitives

Primitives are the basic shapes that OpenGL ES can render. OpenGL ES supports only three kinds of primitives: points, line segments, and triangles. Complex shapes are achieved by rendering multiple triangles.

8. How to render triangles

The basic process for rendering a triangle is shown in the figure above. The vertex shader and fragment shader are the programmable parts; a shader is a small program that runs on the GPU and is compiled dynamically while the main program runs, rather than being hard-coded. The language used to write shaders is GLSL (OpenGL Shading Language), which we’ll cover in detail in Section 3.

Here’s what happens at each step of the rendering process:

1. Vertex data

To render a triangle, we need to pass in an array of three three-dimensional vertex coordinates, each with corresponding vertex attributes, which can contain any data we want to use. In the example above, each vertex contains a color value.

And to let OpenGL ES know whether we are drawing triangles, points, or line segments, we pass the primitive type to OpenGL ES every time we issue a draw call.
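
As a sketch, the vertex data for a single triangle might look like the following; the struct, the vertex values, and the per-vertex color are all illustrative, and the primitive type is passed in the draw call:

typedef struct {
    GLKVector3 position; // (X, Y, Z)
    GLKVector4 color;    // (R, G, B, A), an example per-vertex attribute
} TriangleVertex;

static const TriangleVertex triangle[3] = {
    {{-0.5f, -0.5f, 0.0f}, {1.0f, 0.0f, 0.0f, 1.0f}},
    {{ 0.5f, -0.5f, 0.0f}, {0.0f, 1.0f, 0.0f, 1.0f}},
    {{ 0.0f,  0.5f, 0.0f}, {0.0f, 0.0f, 1.0f, 1.0f}},
};

// Later, inside the draw method: GL_TRIANGLES tells OpenGL ES to treat
// every 3 vertices as one triangle.
glDrawArrays(GL_TRIANGLES, 0, 3);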

2. Vertex shaders

The vertex shader performs one operation on each vertex and can use the vertex data to calculate the vertex’s coordinates, colors, lighting, texture coordinates, and so on.

An important task of vertex shaders is to perform coordinate transformations, such as converting a model’s original coordinate system (usually the coordinates in its 3D modeling tool) to the screen coordinate system.

3. Primitive assembly

After the vertex shader program outputs the vertex coordinates, each vertex is assembled into primitives according to the primitives type parameters in the draw command and the vertex index array.

In this step, the 3D primitives in the model have been converted to 2D primitives on the screen.

4. Geometry shaders

In full OpenGL, there is an optional shader called the geometry shader that sits between the vertex shader and the fragment shader.

A geometry shader takes a set of vertices that form a primitive as input, and can generate other shapes by emitting new vertices to form new primitives.

OpenGL ES does not currently support geometric shaders, so we can leave this part behind.

5. Rasterization

In the rasterization phase, the base primitives are converted into fragments for use by fragment shaders. Fragments represent pixels that can be rendered on the screen and contain information such as position, color, texture coordinates, etc. These values are calculated by interpolating the vertex information of the primitives.

Clipping is performed before the fragment shader runs: any fragments outside the view are discarded to improve execution efficiency.
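
Conceptually, the interpolation mentioned above works like a weighted blend of the vertex attributes. A purely illustrative sketch (not a real pipeline API) of linearly interpolating a color attribute between two vertices:

// Illustrative only: a linear blend of two vertex colors, the kind of
// interpolation the rasterizer performs when filling in fragments.
static GLKVector4 LerpColor(GLKVector4 a, GLKVector4 b, float t) {
    return GLKVector4Make(a.r + (b.r - a.r) * t,
                          a.g + (b.g - a.g) * t,
                          a.b + (b.b - a.b) * t,
                          a.a + (b.a - a.a) * t);
}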

6. Fragment shaders

The main purpose of the fragment shader is to calculate the final color value of each fragment (or discard the fragment). The fragment shader determines the color value of each pixel on the final screen.

7. Testing and blending

In this step, OpenGL ES discards or blends fragments depending on whether they are occluded or whether fragments have already been drawn at the same position. The remaining fragments are written into the frame cache and finally displayed on the device screen.

9. How to render polygons

Since OpenGL ES can only render triangles, polygons need to be composed of multiple triangles.

As shown in the figure, for a pentagon we can break it up into 3 triangles to render.

To render a triangle, we need an array of 3 vertices. This means that we render a pentagon with 9 vertices. In addition, we can see that V0, V2 and V3 are all repeated vertices, which are a little redundant.

So is there an easier way to reuse the vertices that we used? The answer is yes.

In OpenGL ES, there are three drawing modes for triangles. Given the same array of vertices, we can specify how we want to join them. As shown below:

1. GL_TRIANGLES

GL_TRIANGLES is just the approach described above: every three vertices form one triangle, with no vertices shared between triangles. The first triangle uses V0, V1, V2, the second uses V3, V4, V5, and so on. If the number of vertices is not a multiple of 3, the last one or two vertices are discarded.

2. GL_TRIANGLE_STRIP

GL_TRIANGLE_STRIP reuses the previous two vertices when drawing each new triangle. The first triangle still uses V0, V1, V2, the second uses V1, V2, V3, and so on; the nth triangle uses V(n-1), V(n), V(n+1).

3. GL_TRIANGLE_FAN

GL_TRIANGLE_FAN reuses the first vertex and the previous vertex when drawing each new triangle. The first triangle still uses V0, V1, V2, the second uses V0, V2, V3, and so on; the nth triangle uses V0, V(n), V(n+1). The result looks like a fan spreading out around V0.
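
As a quick comparison, assuming the pentagon's corner vertices are already uploaded in a suitable order, the draw calls for the first and last modes might look like this (a sketch; the vertex counts follow from the rules above):

// GL_TRIANGLES: each of the 3 triangles needs its own 3 vertices, 9 in total.
glDrawArrays(GL_TRIANGLES, 0, 9);

// GL_TRIANGLE_FAN: the 5 corner vertices are enough, fanning around V0:
// (V0, V1, V2), (V0, V2, V3), (V0, V3, V4).
glDrawArrays(GL_TRIANGLE_FAN, 0, 5);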

2. Render via GLKit

Congratulations on making it through that boring presentation of concepts. From here, we’ll start by going into actual examples and explaining the rendering process in code.

In GLKit, Apple encapsulates some of the OpenGL ES operations for us, so rendering with GLKit saves a few steps.

If you’re curious, what does GLKit do for us when it comes to texture rendering?

Hold that thought; we will answer this question after covering the GLSL rendering method in Section 3.

Now, with some trepidation and anticipation, let’s take a look at how GLKit renders textures.

1. Get vertex data

Define vertex data using a three-dimensional vector to hold (X, Y, Z) coordinates and a two-dimensional vector to hold (U, V) coordinates:

typedef struct {
    GLKVector3 positionCoord; // (X, Y, Z)
    GLKVector2 textureCoord; // (U, V)
} SenceVertex;

Initialize vertex data:

self.vertices = malloc(sizeof(SenceVertex) * 4); // 4 vertices

self.vertices[0] = (SenceVertex){{-1, 1, 0}, {0, 1}};  // top left corner
self.vertices[1] = (SenceVertex){{-1, -1, 0}, {0, 0}}; // bottom left corner
self.vertices[2] = (SenceVertex){{1, 1, 0}, {1, 1}};   // top right corner
self.vertices[3] = (SenceVertex){{1, -1, 0}, {1, 0}};  // bottom right corner

When exiting, remember to manually free memory:

- (void)dealloc {
    // other code ...
    
    if (_vertices) {
        free(_vertices);
        _vertices = nil;
    }
}

2. Initialize GLKView and set the context

// Create the context, using version 2.0
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    
// Initialize GLKView
CGRect frame = CGRectMake(0, 100, self.view.frame.size.width, self.view.frame.size.width);
self.glkView = [[GLKView alloc] initWithFrame:frame context:context];
self.glkView.backgroundColor = [UIColor clearColor];
self.glkView.delegate = self;
    
[self.view addSubview:self.glkView];
    
// Set the glkView context to the current context
[EAGLContext setCurrentContext:self.glkView.context];

3. Load textures

Use GLKTextureLoader to load the texture and GLKBaseEffect to save the ID of the texture for later rendering.

NSString *imagePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"sample.jpg"];
UIImage *image = [UIImage imageWithContentsOfFile:imagePath]; 

NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft : @(YES)};
GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithCGImage:[image CGImage]
                                                           options:options
                                                             error:NULL];
self.baseEffect = [[GLKBaseEffect alloc] init];
self.baseEffect.texture2d0.name = textureInfo.name;
self.baseEffect.texture2d0.target = textureInfo.target;

Because the vertical axis of the texture coordinate system is opposite to that of UIKit, GLKTextureLoaderOriginBottomLeft is set to YES to eliminate the difference between the two coordinate systems.

Note: if you use imageNamed: to read the image, the texture may come out upside down when the same texture is loaded repeatedly.

4. Implement the GLKView delegate method

In the glkView:drawInRect: delegate method, we implement the rendering logic for the vertex and texture data. This step is important; note how the “7 steps of cache management” are used.

The code is as follows:

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    [self.baseEffect prepareToDraw];
    
    // Create a vertex cache
    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer);  // Step 1: generate
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);  // Step 2: Bind
    GLsizeiptr bufferSizeBytes = sizeof(SenceVertex) * 4;
    glBufferData(GL_ARRAY_BUFFER, bufferSizeBytes, self.vertices, GL_STATIC_DRAW);  // Step 3: Cache data
    
    // Set vertex data
    glEnableVertexAttribArray(GLKVertexAttribPosition);  // Step 4: Enable or disable
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, positionCoord));  // Step 5: Set pointer
    
    // Set the texture data
    glEnableVertexAttribArray(GLKVertexAttribTexCoord0);  // Step 4: Enable or disable
    glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, textureCoord));  // Step 5: Set pointer
    
    // Start drawing
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // Step 6: Draw
    
    // Delete vertex cache
    glDeleteBuffers(1, &vertexBuffer);  // Step 7: Delete
    vertexBuffer = 0;
}

5. Start drawing

We call GLKView’s display method, which triggers the glkView:drawInRect: callback and starts the rendering logic.

The code is as follows:

[self.glkView display];

At this point, the use of GLKit texture rendering process is introduced.

If that’s not enough, let’s move on to the next section and learn how to render textures directly with GLSL-written shaders.

3. Render through GLSL

In this section, we will explain how to render textures without using GLKit. We will focus on the parts that differ from GLKit rendering.

Note: the GLKit header is still imported in the actual demo. The main purpose is to use the GLKVector3 and GLKVector2 types (not using them would also be perfectly fine); the goal is to keep the data format consistent with the GLKit example so you can focus on the real differences between the two approaches.

1. Shader writing

First, we need to write our own shaders, including vertex shaders and fragment shaders, in GLSL. I will not expand on GLSL here, but only explain the part we will use later. For more detailed syntax, please refer to here.

Create new files: use the .vsh suffix for vertex shaders and the .fsh suffix for fragment shaders (other suffixes work if you prefer, but it's best to follow the convention so others can read your code), and then write the code.

The code for the vertex shader is as follows:

attribute vec4 Position;
attribute vec2 TextureCoords;
varying vec2 TextureCoordsVarying;

void main (void) {
    gl_Position = Position;
    TextureCoordsVarying = TextureCoords;
}

The fragment shader code looks like this:

precision mediump float;

uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;

void main (void) {
    vec4 mask = texture2D(Texture, TextureCoordsVarying);
    gl_FragColor = vec4(mask.rgb, 1.0);
}


GLSL is written much like C; if you have learned C, you will pick it up quickly. Here is a brief explanation of the code in the two shaders.

The attribute qualifier exists only in vertex shaders and is used to receive per-vertex input, such as Position and TextureCoords above, which receive the vertex position and texture coordinates.

vec4 and vec2 are data types, referring to a four-dimensional vector and a two-dimensional vector respectively.

The varying qualifier marks both an output of the vertex shader and an input of the fragment shader. If a varying variable is declared identically in both shaders, the fragment shader can read the data written by the vertex shader.

gl_Position and gl_FragColor are built-in variables; assigning to them outputs the vertex position and the fragment color, respectively.

The precision statement specifies the default precision of a data type; precision mediump float sets the default precision of float to mediump.

The uniform qualifier is used for read-only values passed in from the application that are not modified by either shader. Vertex shaders and fragment shaders share the uniform namespace: uniform variables are declared at global scope, and a uniform with the same name is accessible from both shaders.

sampler2D is a texture handle type that holds the texture passed in.

The texture2D() method retrieves color information based on texture coordinates.

The vertex shader outputs the input vertex coordinates directly and passes the texture coordinates to the fragment shader. The fragment shader obtains the color information of each fragment according to texture coordinates and outputs it to the screen.

2. Texture loading

Without the help of GLKTextureLoader, we are left to generate textures ourselves. The steps for generating a texture are fairly fixed, encapsulated as follows:

- (GLuint)createTextureWithImage:(UIImage *)image {
    // Convert UIImage to CGImageRef
    CGImageRef cgImageRef = [image CGImage];
    GLuint width = (GLuint)CGImageGetWidth(cgImageRef);
    GLuint height = (GLuint)CGImageGetHeight(cgImageRef);
    CGRect rect = CGRectMake(0, 0, width, height);
    
    // Draw a picture
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc(width * height * 4);
    CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0f, -1.0f); // Flip vertically so the image origin matches the texture coordinate system
    CGColorSpaceRelease(colorSpace);
    CGContextClearRect(context, rect);
    CGContextDrawImage(context, rect, cgImageRef);

    // Generate a texture
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData); // Write the image data to the texture cache
    
    // Set how texels are mapped to pixels (wrap and filter parameters)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    
    // Unbind
    glBindTexture(GL_TEXTURE_2D, 0);
    
    // Free memory
    CGContextRelease(context);
    free(imageData);
    
    return textureID;
}

3. Compiling and linking the shaders

The shaders we wrote need to be compiled and linked dynamically while the program is running. The code for compiling a shader is also fairly fixed; here we use the file suffix to distinguish the shader type. Let's look directly at the code:

- (GLuint)compileShaderWithName:(NSString *)name type:(GLenum)shaderType {
    // Find the shader file
    NSString *shaderPath = [[NSBundle mainBundle] pathForResource:name ofType:shaderType == GL_VERTEX_SHADER ? @"vsh" : @"fsh"]; // Determine the suffix according to the type
    NSError *error;
    NSString *shaderString = [NSString stringWithContentsOfFile:shaderPath encoding:NSUTF8StringEncoding error:&error];
    if (!shaderString) {
        NSAssert(NO, @"Failed to read shader");
        exit(1);
    }
    
    // Create a shader object
    GLuint shader = glCreateShader(shaderType);
    
    // Get the contents of the shader
    const char *shaderStringUTF8 = [shaderString UTF8String];
    int shaderStringLength = (int)[shaderString length];
    glShaderSource(shader, 1, &shaderStringUTF8, &shaderStringLength);
    
    / / compile a shader
    glCompileShader(shader);
    
    // Check whether shader is compiled successfully
    GLint compileSuccess;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compileSuccess);
    if (compileSuccess == GL_FALSE) {
        GLchar messages[256];
        glGetShaderInfoLog(shader, sizeof(messages), 0, &messages[0]);
        NSString *messageString = [NSString stringWithUTF8String:messages];
        NSAssert(NO, @"Shader failed to compile: %@", messageString);
        exit(1);
    }
    
    return shader;
}

The vertex shader and fragment shader also need to go through this compilation process. After compilation, we also need to generate a shader program, which links the two shaders together as follows:

- (GLuint)programWithShaderName:(NSString *)shaderName {
    // Compile two shaders
    GLuint vertexShader = [self compileShaderWithName:shaderName type:GL_VERTEX_SHADER];
    GLuint fragmentShader = [self compileShaderWithName:shaderName type:GL_FRAGMENT_SHADER];
    
    // Mount shader to program
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    
    / / link to the program
    glLinkProgram(program);
    
    // Check whether the link was successful
    GLint linkSuccess;
    glGetProgramiv(program, GL_LINK_STATUS, &linkSuccess);
    if (linkSuccess == GL_FALSE) {
        GLchar messages[256];
        glGetProgramInfoLog(program, sizeof(messages), 0, &messages[0]);
        NSString *messageString = [NSString stringWithUTF8String:messages];
        NSAssert(NO.@"program link failed: %@", messageString);
        exit(1);
    }
    return program;
}

In this way, we only need to give the two shaders the same name with the appropriate suffixes. Passing the shader name to this method then gives us a compiled and linked shader program.
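
A sketch of how this helper might be used (the shader name here is hypothetical); note that glUseProgram has to be called to make the program active before setting its uniforms and drawing:

GLuint program = [self programWithShaderName:@"glsl"]; // expects glsl.vsh and glsl.fsh (hypothetical names)
glUseProgram(program);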

Once we have a shader program, we need to pass data into the program. First, we need to get the variables defined in the shader, as follows:

Note: Different types of variables are obtained in different ways.

GLuint positionSlot = glGetAttribLocation(program, "Position");
GLuint textureSlot = glGetUniformLocation(program, "Texture");
GLuint textureCoordsSlot = glGetAttribLocation(program, "TextureCoords");

Pass in the generated texture ID:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(textureSlot, 0);

glUniform1i(textureSlot, 0) assigns texture unit 0 to textureSlot, where 0 corresponds to GL_TEXTURE0. If you pass 1 instead, glActiveTexture must be called with GL_TEXTURE1 to match.
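
For example, if texture unit 1 were used instead, both calls would have to change together (a sketch):

glActiveTexture(GL_TEXTURE1);            // activate texture unit 1
glBindTexture(GL_TEXTURE_2D, textureID); // bind the texture to unit 1
glUniform1i(textureSlot, 1);             // point the sampler at unit 1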

Set vertex data:

glEnableVertexAttribArray(positionSlot);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, positionCoord));

Set texture data:

glEnableVertexAttribArray(textureCoordsSlot);
glVertexAttribPointer(textureCoordsSlot, 2, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, textureCoord));

4. Viewport Settings

When rendering a texture, we need to specify the size of the Viewport, which can be interpreted as the size of the rendering window. Call the glViewport method to set:

glViewport(0, 0, self.drawableWidth, self.drawableHeight);
// Get render cache width
- (GLint)drawableWidth {
    GLint backingWidth;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    
    return backingWidth;
}

// Get the render cache height
- (GLint)drawableHeight {
    GLint backingHeight;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    
    return backingHeight;
}

5. Render layer binding

From the above steps, we have the texture and the vertex position information. Now for the last step, how do we associate the cache with the view? In other words, if there are two views on the screen, how does OpenGL ES know which view to render the image to?

So we need to do render layer binding, which is implemented through renderbufferStorage:fromDrawable: as follows:

- (void)bindRenderLayer:(CALayer <EAGLDrawable> *)layer {
    GLuint renderBuffer; // Render cache
    GLuint frameBuffer;  // Frame buffer
    
    // Bind render cache to output layer
    glGenRenderbuffers(1, &renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
    [self.context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
    
    // Bind render cache to frame cache
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER,
                              renderBuffer);
}

The above code generates a frame cache and a render cache, mounts the render cache to the frame cache, and sets the output layer of the render cache to Layer.

Finally, render the bound render cache to the screen:

[self.context presentRenderbuffer:GL_RENDERBUFFER];

This completes the key step of rendering textures using GLSL.
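
For reference, here is a rough sketch of how the pieces above might be ordered in a single render pass. The helper methods are the ones defined earlier in this section; the shader name, the image variable, and the renderLayer property are illustrative assumptions:

[EAGLContext setCurrentContext:self.context];
[self bindRenderLayer:self.renderLayer];                 // frame cache + render cache (illustrative property)
GLuint program = [self programWithShaderName:@"glsl"];   // hypothetical shader name
glUseProgram(program);

GLuint textureID = [self createTextureWithImage:image];  // `image` is assumed to be loaded already
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(glGetUniformLocation(program, "Texture"), 0);

glViewport(0, 0, self.drawableWidth, self.drawableHeight);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
// ... set up the vertex buffer and attribute pointers as shown above ...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[self.context presentRenderbuffer:GL_RENDERBUFFER];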

End result:

To sum up, we can answer the questions in section 2. GLKit mainly helps us to do the following points:

  • Shader writing: GLKit has simple shaders built in, so we do not have to write our own.
  • Texture loading: GLKTextureLoader encapsulates a method to convert an image into a texture.
  • Shader compilation and linking: GLKBaseEffect implements the shader compile-and-link process internally, so we can basically ignore the concept of a “shader” while using it.
  • Viewport setting: when rendering a texture, the viewport size must be specified; GLKView sets it internally when display is called.
  • Render layer binding: GLKView internally calls renderbufferStorage:fromDrawable: to set its own layer as the output layer of the render cache, and when display is called it internally calls presentRenderbuffer: to render the render cache to the screen.

The source code

Check out the full code on GitHub.

References

  • OpenGL ES Application Development Practice Guide
  • Hello triangle – LearnOpenGL CN
  • OpenGL ES iOS Introduction 2 – Draw a polygon
  • iOS-OpenGLES image texture

For a better reading experience, please visit Lyman’s Blog: Explaining OpenGL ES Texture Rendering in iOS from Scratch.