In the previous GLSL shader language article, we covered how to write a shader file and how to attach shader objects to a program object, so this article walks through how those methods are used in a practical example.

Before we look at the examples, what are FrameBuffer and RenderBuffer?

FrameBuffer Object (FBO)

In the OpenGL rendering pipeline, geometry and textures are transformed and tested many times before finally being displayed on the screen as two-dimensional pixels. The final rendering destination of the OpenGL pipeline is called the framebuffer. The framebuffer is a collection of two-dimensional arrays and storage areas used by OpenGL: the color buffer, depth buffer, stencil buffer, and accumulation buffer. By default, OpenGL uses the framebuffer as the final render destination, and this framebuffer is created and managed entirely by the system.

We know that before an application calls any OpenGL ES command, it must first create a rendering context and a drawing surface and make them current. By default, rendering uses the rendering context and drawing surface (framebuffer) provided by the native window system (for example, EAGL or GLFW). In general, the system-provided framebuffer is enough as a drawing surface, but in some special cases, such as shadow maps, dynamic reflections, and post-processing effects, we need to render to a texture, and using the system-provided framebuffer for this is inefficient, so a custom framebuffer is needed.

The framebuffer object API supports the following operations:
· Creating framebuffer objects using only OpenGL ES commands
· Creating and using multiple framebuffer objects in a single EGL context, so each framebuffer does not need its own rendering context
· Creating off-screen color, depth, or stencil renderbuffers and textures and attaching them to framebuffer objects
· Sharing color, depth, or stencil buffers across multiple framebuffers
· Attaching textures directly to a framebuffer as color or depth
· Copying between framebuffers and invalidating framebuffer contents

Creating a frame buffer object

// Define a buffer ID
GLuint buffer;
// Generate a framebuffer object name
glGenFramebuffers(1, &buffer);
// Bind it
glBindFramebuffer(GL_FRAMEBUFFER, buffer);

glGenFramebuffers(GLsizei n, GLuint* framebuffers): the first argument is the number of framebuffers to create, and the second is a pointer to a variable or array that stores one or more IDs. It returns the IDs of unused framebuffer objects. An ID of 0 indicates the default framebuffer, which is the framebuffer provided by the system. Once an FBO is created, glBindFramebuffer(GLenum target, GLuint framebuffer) must be called to bind it before it can be used: the first argument, target, is GL_FRAMEBUFFER, and the second argument is the ID of the framebuffer object. Once the framebuffer object is bound, all subsequent OpenGL operations affect the currently bound framebuffer object. Since an ID of 0 indicates the default system-provided framebuffer, passing 0 to glBindFramebuffer() unbinds the current framebuffer object.

After binding to the GL_FRAMEBUFFER target, all reads from and writes to the framebuffer affect the currently bound framebuffer. We can also bind a framebuffer specifically to a read or write target with GL_READ_FRAMEBUFFER or GL_DRAW_FRAMEBUFFER. A framebuffer bound to GL_READ_FRAMEBUFFER is used by all read operations such as glReadPixels, while one bound to GL_DRAW_FRAMEBUFFER is used by all write operations such as rendering and clearing. In most cases there is no need to distinguish between them, and GL_FRAMEBUFFER is usually used.

Deleting a frame buffer object

When a buffer object is no longer in use, it can be deleted by calling glDeleteFramebuffers (GLsizei n, const GLuint* framebuffers).

glDeleteFramebuffers(1, &buffer);

Like the system framebuffer, a framebuffer object contains a color buffer, depth buffer, and stencil buffer. These logical buffers are called attachable images in the framebuffer object: two-dimensional arrays of pixels that can be attached to it. The FBO supports two types of attachable images: texture images and renderbuffer images. If a texture object's image data is attached to the framebuffer, OpenGL performs render-to-texture. If a renderbuffer's image data is attached to the framebuffer, OpenGL performs offscreen rendering.

RenderBuffer (RBO)

The renderbuffer was introduced to support offscreen rendering. It allows a scene to be rendered directly into a renderbuffer object rather than into a texture object. A renderbuffer object is a data storage area holding a single image, stored in a renderable internal format. It is used to store OpenGL logical buffers that have no corresponding texture format, such as the stencil or depth buffer.

A renderbuffer is a 2D image buffer allocated by the application. It can be used to allocate and store color, depth, or stencil values, and can serve as the color, depth, or stencil attachment of a framebuffer. A renderbuffer is similar to an off-screen drawable surface provided by the window system, but a renderbuffer cannot be used directly as a GL texture.

Creating a render buffer

// 1. Define a buffer ID
GLuint buffer;
// 2. Generate a renderbuffer object name
glGenRenderbuffers(1, &buffer);
// 3. Bind it
glBindRenderbuffer(GL_RENDERBUFFER, buffer);

As with frame buffer objects, the current renderbuffer object must be bound before referring to the renderbuffer object by calling glBindRenderbuffer (GLenum target, GLuint renderbuffer). The first argument, target, is GL_RENDERBUFFER, and the second argument is the ID of the render buffer object.

Deleting a render buffer object

Buffer objects can be deleted by calling glDeleteRenderbuffers (GLsizei n, const GLuint* renderbuffers) when they are no longer in use.

glDeleteRenderbuffers (1, &buffer);

When a renderbuffer is created, it has no data storage, so space must be allocated for it by calling glRenderbufferStorage(GLenum target, GLenum internalformat, GLsizei width, GLsizei height).

glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 600, 800);

The first parameter must be GL_RENDERBUFFER. The second parameter is an internal format usable for color, depth, or stencil. width and height are the pixel dimensions of the renderbuffer image.

Attaching the render buffer object

Finally, after the framebuffer is generated, the renderbuffer must be bound to the framebuffer by calling glFramebufferRenderbuffer, which attaches it to the corresponding attachment point so that later drawing takes effect.

// Attach the renderbuffer myColorRenderBuffer to GL_COLOR_ATTACHMENT0 via glFramebufferRenderbuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, self.myColorRenderBuffer);

The following figure shows the relationship between framebuffer objects, renderbuffer objects, and textures. Only one color, one depth, and one stencil attachment can be attached to a framebuffer object.

Rendering an image with GLSL

This example does the following:
· Create an on-screen rendering surface with EAGL
· Load the vertex/fragment shaders
· Create a program object, attach the vertex/fragment shaders, and link the program object
· Set the viewport
· Clear the color buffer
· Render a simple primitive
· Present the contents of the color buffer in the EAGL window

1. Create the vertex/fragment shader files. Shader files usually end with .vsh / .fsh. The vertex shader shaderv.vsh:

// Vertex coordinates
attribute highp vec4 position;
// Texture coordinates
attribute highp vec2 textCoordinate;
// Varying texture coordinate passed to the fragment shader
varying lowp vec2 varyTextCoord;

void main() {
    varyTextCoord = textCoordinate;
    gl_Position = position;
}

It is best not to put non-ASCII comments in shader files, as they may cause compilation to fail; the comments above are for explanation only.

The fragment shader shaderf.fsh:

// Texture coordinate interpolated from the vertex shader
varying lowp vec2 varyTextCoord;
// Texture sampler
uniform sampler2D colorMap;

void main() {
    gl_FragColor = texture2D(colorMap, varyTextCoord);
}

Create a UIView subclass and import the header file #import <OpenGLES/ES2/gl.h>. All the GLSL image-rendering code below is written in this UIView.

// On iOS and tvOS, the layer for drawing OpenGL ES content; inherits from CALayer
@property (nonatomic, strong) CAEAGLLayer *zhEagLayer;
@property (nonatomic, strong) EAGLContext *zhContext;
@property (nonatomic, assign) GLuint zhColorRenderBuffer;
@property (nonatomic, assign) GLuint zhColorFrameBuffer;
@property (nonatomic, assign) GLuint zhPrograme;
2. Set the layer setupLayer
// 1. Create the special layer
// Override layerClass so the layer returned by ZHView is a CAEAGLLayer instead of a CALayer
self.zhEagLayer = (CAEAGLLayer *)self.layer;
// 2. Set the scale
[self setContentScaleFactor:[[UIScreen mainScreen] scale]];
// 3. Set the drawable properties: do not retain the rendered contents, and use RGBA8 as the color format
self.zhEagLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    @false, kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

Two keys are set in drawableProperties:
· kEAGLDrawablePropertyRetainedBacking: whether the drawable surface retains its contents after they are displayed.
· kEAGLDrawablePropertyColorFormat: the format of the drawable surface's internal color buffer. The value is an NSString naming a specific color format; the default is kEAGLColorFormatRGBA8.
The available color formats are:
· kEAGLColorFormatRGBA8: 32-bit RGBA color
· kEAGLColorFormatRGB565: 16-bit RGB color
· kEAGLColorFormatSRGBA8: sRGB color. sRGB stands for standard red, green, and blue, the three primaries used for color reproduction in CRT monitors, LCD monitors, projectors, printers, and other devices. The sRGB color space is based on device-independent color coordinates, so colors keep the same coordinates as they move between devices, unaffected by each device's own color characteristics.

Override layerClass:

+(Class)layerClass
{
    return [CAEAGLLayer class];
}
3. Set the rendering context setupContext
// 1. Specify the OpenGL ES rendering API version
EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
// 2. Create the graphics context
EAGLContext *context = [[EAGLContext alloc] initWithAPI:api];
// 3. Check whether the context was created successfully
if (!context) {
    NSLog(@"Create failed!");
    return;
}
// 4. Make it the current context
if (![EAGLContext setCurrentContext:context]) {
    NSLog(@"Set failed!");
    return;
}
// 5. Store the local context in the property
self.zhContext = context;
4. Clear the buffers deleteRenderAndFrameBuffer
// Clear the frame buffer
glDeleteBuffers(1, &_zhColorFrameBuffer);
self.zhColorFrameBuffer = 0;
// Clear the render buffer
glDeleteBuffers(1, &_zhColorRenderBuffer);
self.zhColorRenderBuffer = 0;

The code for clearing the buffer can also be written as:

glDeleteFramebuffers(1, &_zhColorFrameBuffer);
self.zhColorFrameBuffer = 0;

glDeleteRenderbuffers(1, &_zhColorRenderBuffer);
self.zhColorRenderBuffer = 0;
5. Set the RenderBuffer
// 1. Define a buffer ID
GLuint buffer;
// 2. Generate a renderbuffer object name
glGenRenderbuffers(1, &buffer);
// 3. Store it in the property
self.zhColorRenderBuffer = buffer;
// 4. Bind it
glBindRenderbuffer(GL_RENDERBUFFER, self.zhColorRenderBuffer);
// 5. Bind the CAEAGLLayer's drawable object storage to the OpenGL ES renderbuffer object
[self.zhContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.zhEagLayer];
6. Set the FrameBuffer
// 1. Define a buffer ID
GLuint buffer;
// 2. Generate a framebuffer object name
glGenFramebuffers(1, &buffer);
// 3. Store it in the property
self.zhColorFrameBuffer = buffer;
// 4. Bind it to GL_FRAMEBUFFER
glBindFramebuffer(GL_FRAMEBUFFER, self.zhColorFrameBuffer);
/* After the framebuffer is generated, the renderbuffer must be bound to the framebuffer
   by calling glFramebufferRenderbuffer, which attaches it to the corresponding
   attachment point so that later drawing takes effect */
// 5. Attach the renderbuffer to GL_COLOR_ATTACHMENT0 via glFramebufferRenderbuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, self.zhColorRenderBuffer);
7. Start rendering renderLayer

1. Set the screen clearing color

glClearColor(0.3f, 0.45f, 0.5f, 1.0f);

2. Clear the screen

glClear(GL_COLOR_BUFFER_BIT);

3. Set the viewport size

CGFloat scale = [[UIScreen mainScreen]scale];
glViewport(self.frame.origin.x * scale, self.frame.origin.y * scale, self.frame.size.width * scale, self.frame.size.height * scale);

4. Read the vertex and fragment shader files

NSString *vertFile = [[NSBundle mainBundle]pathForResource:@"shaderv" ofType:@"vsh"];
NSString *fragFile = [[NSBundle mainBundle]pathForResource:@"shaderf" ofType:@"fsh"];

5. Load the shaders

self.zhPrograme = [self loadShaders:vertFile Withfrag:fragFile];

// Load the shaders
- (GLuint)loadShaders:(NSString *)vert Withfrag:(NSString *)frag {
    // 1. Define two temporary shader objects
    GLuint verShader, fragShader;
    // Create the program
    GLint program = glCreateProgram();

    // 2. Compile the vertex and fragment shaders
    [self compileShader:&verShader type:GL_VERTEX_SHADER file:vert];
    [self compileShader:&fragShader type:GL_FRAGMENT_SHADER file:frag];
    /* compileShader:type:file: takes three parameters:
       1: the address where the compiled shader handle is stored
       2: the type of shader to compile, GL_VERTEX_SHADER (vertex) or GL_FRAGMENT_SHADER (fragment)
       3: the file path */

    // 3. Attach the shader objects to the program object
    glAttachShader(program, verShader);
    glAttachShader(program, fragShader);

    // 4. Release the shaders that are no longer needed
    glDeleteShader(verShader);
    glDeleteShader(fragShader);

    return program;
}

// Compile a shader
- (void)compileShader:(GLuint *)shader type:(GLenum)type file:(NSString *)file {
    // 1. Read the shader source from the file path
    NSString *content = [NSString stringWithContentsOfFile:file encoding:NSUTF8StringEncoding error:nil];
    const GLchar *source = (GLchar *)[content UTF8String];

    // 2. Create a shader of the given type
    *shader = glCreateShader(type);

    // 3. Attach the shader source code to the shader object
    glShaderSource(*shader, 1, &source, NULL);
    /* Parameters:
       1: shader, the shader object to compile
       2: numOfStrings, the number of source strings, here 1
       3: strings, the shader program source code
       4: lenOfStrings, an array with the length of each string, or NULL meaning the strings are null-terminated */

    // 4. Compile the shader source into object code
    glCompileShader(*shader);
}

6. Link the program object

glLinkProgram(self.zhPrograme);
GLint linkStatus;
// Get the link status
glGetProgramiv(self.zhPrograme, GL_LINK_STATUS, &linkStatus);
if (linkStatus == GL_FALSE) {
        GLchar message[512];
        glGetProgramInfoLog(self.zhPrograme, sizeof(message), 0, &message[0]);
        NSString *messageString = [NSString stringWithUTF8String:message];
        NSLog(@"program link error:%@",messageString);
        return;
  }
NSLog(@"program link success!");

7. Use the program object

glUseProgram(self.zhPrograme);

8. Set vertex coordinates and texture coordinates

// Coordinate array: the first three values of each row are the vertex position, the last two are the texture coordinate
GLfloat attrArr[] = {
     1.0f, -1.0f, -1.0f,    1.0f, 0.0f,
    -1.0f,  1.0f, -1.0f,    0.0f, 1.0f,
    -1.0f, -1.0f, -1.0f,    0.0f, 0.0f,

     1.0f,  1.0f, -1.0f,    1.0f, 1.0f,
    -1.0f,  1.0f, -1.0f,    0.0f, 1.0f,
     1.0f, -1.0f, -1.0f,    1.0f, 0.0f,
};

9. Process the vertex data

// (1) Define a vertex buffer ID
GLuint attrBuffer;
// (2) Generate a vertex buffer object name
glGenBuffers(1, &attrBuffer);
// (3) Bind it
glBindBuffer(GL_ARRAY_BUFFER, attrBuffer);
// (4) Copy the vertex data from CPU memory to GPU memory
glBufferData(GL_ARRAY_BUFFER, sizeof(attrArr), attrArr, GL_DYNAMIC_DRAW);

10. Pass vertex data to the vertex shader object

The vertex data is passed to the position attribute of the vertex shader in self.zhPrograme and processed with three functions: glGetAttribLocation obtains the location of the vertex attribute; glEnableVertexAttribArray tells OpenGL ES to read data from the buffer (opens the channel); and glVertexAttribPointer sets how that data is read.

GLuint position = glGetAttribLocation(self.zhPrograme, "position");
// Enable reading data from the buffer in the appropriate format
glEnableVertexAttribArray(position);
// Set how the data is read
glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, NULL);

glVertexAttribPointer(GLuint indx, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid* ptr)
· Parameter 1: indx, the index of the vertex attribute
· Parameter 2: size, the number of components per vertex attribute: 1, 2, 3, or 4; the default is 4
· Parameter 3: type, the data type of each component; common values are GL_FLOAT, GL_BYTE, and GL_SHORT, and the default is GL_FLOAT
· Parameter 4: normalized, whether fixed-point data should be normalized (GL_TRUE) or used as-is (GL_FALSE)
· Parameter 5: stride, the byte offset between consecutive vertex attributes; the default is 0
· Parameter 6: ptr, a pointer to the first component of the first vertex attribute in the array; the default is 0

11. Process the texture coordinate data

GLuint textCoor = glGetAttribLocation(self.zhPrograme, "textCoordinate");
// Enable reading data from the buffer in the appropriate format
glEnableVertexAttribArray(textCoor);
// Set how the data is read
glVertexAttribPointer(textCoor, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, (float *)NULL + 3);

12. Load the texture

// 1. Get the CGImageRef of the image
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
// Check whether the image was obtained successfully
if (!spriteImage) {
    NSLog(@"load image failed: %@", fileName);
    return;
}

// 2. Read the image's size in pixels
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);

// 3. Allocate memory for the pixel data (4 bytes per RGBA pixel)
GLubyte *spriteData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));

// 4. Create a bitmap context
/* Parameters:
   1: data, a pointer to the memory the image will be drawn into
   2: width, the bitmap width in pixels
   3: height, the bitmap height in pixels
   4: bitsPerComponent, bits per component of a pixel; for 32-bit RGBA this is 8
   5: bytesPerRow, bytes per row of the bitmap
   6: colorSpace, the color space the bitmap uses
   7: bitmapInfo, here kCGImageAlphaPremultipliedLast for RGBA */
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4,
                                                   CGImageGetColorSpace(spriteImage),
                                                   kCGImageAlphaPremultipliedLast);

// 5. Compute the drawing rect
/* CGContextDrawImage uses the Core Graphics coordinate system, whose origin is
   in the lower-left corner, unlike UIKit, whose origin is in the upper-left corner */
CGRect rect = CGRectMake(0, 0, width, height);

// 6. Draw the image into the context
CGContextTranslateCTM(spriteContext, 0, rect.size.height);
CGContextScaleCTM(spriteContext, 1.0, -1.0);
CGContextDrawImage(spriteContext, rect, spriteImage);

// 7. Release the context
CGContextRelease(spriteContext);

// 8. Bind the texture to the default texture ID
glBindTexture(GL_TEXTURE_2D, 0);

// 9. Set the texture parameters
/* Parameters:
   1: the texture target
   2: the parameter to set, such as the min/mag filter or the S/T wrap mode
   3: the value, such as linear filtering or clamp-to-edge wrapping */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

float fw = width, fh = height;

// 10. Load the 2D texture
/* Parameters:
   1: the texture target: GL_TEXTURE_1D, GL_TEXTURE_2D, or GL_TEXTURE_3D
   2: the mipmap level, usually 0
   3: internalformat, the format the pixel data is stored in on the graphics card, here GL_RGBA
   4: the texture width
   5: the texture height
   6: border, the border width, usually 0
   7: format, which describes how the pixels are stored in memory
   8: type, the data type of the pixel data
   9: the texture data */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, fw, fh, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);

// 11. Free spriteData
free(spriteData);

13. Set the texture sampler2D

glUniform1i(glGetUniformLocation(self.zhPrograme, "colorMap"), 0);

14. Draw

glDrawArrays(GL_TRIANGLES, 0, 6);

15. Display from render buffer to screen

[self.zhContext presentRenderbuffer:GL_RENDERBUFFER];

So far, the code for rendering an image with GLSL is essentially complete. If image flipping is not handled, the rendered picture appears upside down. The cause of the flip and how to fix it were covered in detail in the previous article, so they are not repeated here.

The above code steps are sorted out as shown in the figure below:

OpenGL ES 3.0 Programming Guide