In a recent project at our company, we needed to implement real-time video rendering: based on the facial landmark points recognized in the camera feed, patterns are drawn between designated points to guide the user. For performance reasons, we decided to use OpenGL ES to draw the patterns. The final effect is as follows:

This article starts from basic OpenGL theory and works up, step by step, to the drawing effect shown above. No amount of theory is as concrete as practice, so to really understand a technology we should learn it through real project work. All right, let's get started!

OpenGL ES

OpenGL (Open Graphics Library) is a professional graphics programming interface that defines a cross-language, cross-platform API specification. It is mainly used for 3D rendering (2D works too) and is a powerful, low-level graphics library. OpenGL ES is a lightweight version of OpenGL for mobile devices; it simplifies some methods and data types, and all geometry is built from points, lines, and triangles.

iOS has two commonly used drawing frameworks, UIKit and Core Graphics, as shown in the figure below. UIKit draws shapes mainly through UIBezierPath, which is itself a further encapsulation of the Core Graphics framework. Core Graphics in turn uses Quartz 2D as its drawing engine and, like OpenGL ES, its output is ultimately composited and put on screen by the GPU.

So the question is: with so many drawing frameworks available, why use OpenGL? In a computer system the CPU and GPU work together: the CPU prepares the display data and submits it to the GPU for rendering; the GPU renders the result into the frame buffer, and after digital-to-analog conversion the display finally shows the image. The key to rendering efficiency is therefore letting the CPU and GPU each do what they do best. OpenGL gives us direct access to the GPU and introduces the concept of buffers to improve the efficiency of graphics rendering.

Coordinate system

First, let's look at the OpenGL coordinate system, shown below. It is centered on the screen, with each axis ranging from -1 to 1. UIKit coordinates, by contrast, start at the top-left corner of the screen and range over the screen's width and height.

So to draw a pattern on the screen with OpenGL, we need to convert from the UIKit coordinate system to the OpenGL coordinate system (we are mainly drawing in 2D, so we ignore OpenGL's Z axis for now). The conversion formula is easy to derive:
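In code form, the conversion might look like the following minimal sketch (the helper name is my own, not from the project; the arithmetic matches the point-conversion code used later in this article):

#import <CoreGraphics/CoreGraphics.h>

// UIKit x in [0, width] maps to [-1, 1]; UIKit y in [0, height] maps to [1, -1],
// because UIKit's y axis points down while OpenGL's points up
static inline CGPoint PVTGLPointFromUIKitPoint(CGPoint p, CGSize size)
{
    return CGPointMake(2.0 * p.x / size.width - 1.0,
                       1.0 - 2.0 * p.y / size.height);
}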

Drawing process

The rendering pipeline of OpenGL ES 2.0 is shown in the figure. Of these stages, we control Vertex Data, the Vertex Shader, and the Fragment Shader. Vertex Data is the data we pass in to draw: it can represent points, lines, or triangles. The Vertex Shader and Fragment Shader stages are programmable; they are the .glsl files we'll see below. The Vertex Shader processes the vertex data for each point, and the Fragment Shader processes each pixel.

In OpenGL, no geometry is drawn unless a valid Vertex Shader and Fragment Shader are loaded. Let’s start with a basic vertex shader:

// vertex.glsl
attribute vec4 position; 
void main(void) {
    gl_Position = position; 
}

The first line declares a four-component vector named position, which receives the Vertex Data shown in the figure above. In main, we assign it to gl_Position, the built-in output variable that carries the vertex position to the rest of the pipeline.

There are three variable qualifiers in a shader: attribute, uniform, and varying. Uniform variables are passed to the shader by the host program and stay constant for a whole draw call. Attribute variables can only be used in the Vertex Shader; they carry per-vertex data passed in by the host program. Varying variables transfer data from the vertex shader to the fragment shader.
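As a quick illustration (a hypothetical shader, not from this project), here is a vertex shader that uses all three qualifiers together:

// Hypothetical example showing the three qualifiers
attribute vec4 aPosition;   // per-vertex data supplied by the host program
attribute vec4 aColor;      // per-vertex color supplied by the host program
uniform   mat4 uTransform;  // constant for a whole draw call, set via glUniformMatrix4fv
varying   vec4 vColor;      // written here, interpolated, then read by the fragment shader

void main(void) {
    gl_Position = uTransform * aPosition;
    vColor      = aColor;
}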

Let’s look at the fragment shader code again:

// fragment.glsl
precision mediump float;
void main(void) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); 
}

The first line declares the default precision for floating-point variables in the shader. In main we then assign each pixel's color; here vec4(1.0, 0.0, 0.0, 1.0) makes every pixel opaque red.

Basic primitives

Drawing with OpenGL usually starts with a triangle, because that exercise covers the three basic elements of OpenGL ES: points, lines, and triangles. In OpenGL, any complex 3D model is built from these three basic geometric primitives.

Compiling shaders

Vertex and pixel processing happen in shaders, so to use a shader we need to compile its source dynamically at run time into a shader object. Fortunately, the process for compiling a shader is fixed, and many open-source implementations already exist. The general steps are as follows:

Here path is the location of the fragment.glsl or vertex.glsl file, and type distinguishes the shader type, i.e. vertex shader or fragment shader.

- (GLuint)compileShader:(NSString *)path type:(GLenum)type
{
    NSError *error          = nil;
    NSString *shaderContent = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:&error];
    
    if (!shaderContent) NSLog(@"%@", error.localizedDescription);
    
    const char *shaderUTF8 = [shaderContent UTF8String];
    GLint length           = (GLint)[shaderContent length];
    GLuint shader          = glCreateShader(type);
    
    glShaderSource(shader, 1, &shaderUTF8, &length);
    
    glCompileShader(shader);
    
    GLint status;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    
    if (status == GL_FALSE) {
        GLchar messages[256];
        glGetShaderInfoLog(shader, sizeof(messages), NULL, messages); // fetch the compile error before deleting
        NSLog(@"Shader compile failed: %s", messages);
        glDeleteShader(shader);
        exit(1);
    }
    
    return shader;
}

Now that we have a compiled shader object, we need to attach it to an OpenGL program object and link it so it can run on the GPU. The code looks like this:

program = glCreateProgram();

glAttachShader(program, vertShader);
glAttachShader(program, fragShader);

glLinkProgram(program);
    
GLint status;
glGetProgramiv(program, GL_LINK_STATUS, &status);
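A common follow-up, not shown in the snippet above, is to read the program's info log when linking fails; a sketch:

if (status == GL_FALSE) {
    GLchar messages[256];
    glGetProgramInfoLog(program, sizeof(messages), NULL, messages); // human-readable reason for the failure
    NSLog(@"Program link failed: %s", messages);
    glDeleteProgram(program);
}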

After completing the steps above, we can use program to interact with the shader, for example to feed the position attribute of the vertex shader:

GLuint attrib_position = glGetAttribLocation(program, "position");
glEnableVertexAttribArray(attrib_position);
glVertexAttribPointer(attrib_position, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (char *)points);

Drawing geometric primitives

With this groundwork done, we are ready to start drawing. All geometric primitives are drawn by calling glDrawArrays:

glDrawArrays(GLenum mode, GLint first, GLsizei count);

Here mode is the type of geometry to draw; the main options cover points, lines, and triangles:

#define GL_POINTS         0x0000 // Individual points
#define GL_LINES          0x0001 // Line segments -> every two vertices form a segment
#define GL_LINE_LOOP      0x0002 // Line loop -> a strip whose last vertex connects back to the first
#define GL_LINE_STRIP     0x0003 // Line strip -> adjacent segments share vertices
#define GL_TRIANGLES      0x0004 // Triangles -> every three vertices form a triangle
#define GL_TRIANGLE_STRIP 0x0005 // Triangle strip -> adjacent triangles share an edge
#define GL_TRIANGLE_FAN   0x0006 // Triangle fan -> all triangles share the first vertex
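As a quick illustration of how the mode changes vertex consumption, here are three calls over the same four vertices v0..v3:

glDrawArrays(GL_LINES,      0, 4); // two separate segments: v0-v1 and v2-v3
glDrawArrays(GL_LINE_STRIP, 0, 4); // three connected segments: v0-v1, v1-v2, v2-v3
glDrawArrays(GL_LINE_LOOP,  0, 4); // the strip above plus a closing segment v3-v0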

The code for drawing points is shown below; the geometry type passed in is GL_POINTS:

static GLfloat points[] = { // The first three values are the position x, y, z; the last three are the color r, g, b
     0.0f, 0.5f, 0.0f, 0.0f, 0.0f, 0.0f, // Position (0.0, 0.5, 0.0); color (0, 0, 0), black
    -0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // Position (-0.5, 0.0, 0.0); color (1, 0, 0), red
     0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f  // Position (0.5, 0.0, 0.0); color (1, 0, 0), red
}; // Three sets of data, representing three points

GLuint attrib_position = glGetAttribLocation(program, "position");
glEnableVertexAttribArray(attrib_position);
GLuint attrib_color    = glGetAttribLocation(program, "color");
glEnableVertexAttribArray(attrib_color);

// Each position value has 3 components, and consecutive vertices are 6 GLfloats apart (the stride)
// Each color value also has 3 components, but its start pointer is offset by 3 GLfloats
glVertexAttribPointer(attrib_position, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (char *)points);
glVertexAttribPointer(attrib_color, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (char *)points + 3 * sizeof(GLfloat));
 
glDrawArrays(GL_POINTS, 0, 3); 
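Note that the position-only vertex shader from earlier is not enough for this example: it also needs a color attribute feeding the fragColor varying used in the next section, and GL_POINTS in OpenGL ES 2.0 are invisible unless the vertex shader sets gl_PointSize. A minimal sketch, assuming the attribute names above:

// VertexShader.glsl -- minimal sketch for the point example
attribute vec4 position;
attribute vec4 color;
varying lowp vec4 fragColor;

void main(void) {
    gl_Position  = position;
    gl_PointSize = 25.0; // required for GL_POINTS in ES 2.0, otherwise nothing is rasterized
    fragColor    = color;
}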

The effect is as follows:

You can see that the points are drawn as squares by default, so how do we draw round dots? To get OpenGL ES 2.0 to draw points as circles rather than squares, we have to process the pixels inside each rasterized point: the idea is to discard any fragment whose distance from the point's center exceeds 0.5. Modify FragmentShader.glsl as follows:

// FragmentShader.glsl
varying lowp vec4 fragColor;

void main(void) {
    if (length(gl_PointCoord - vec2(0.5, 0.5)) > 0.5) {
        discard;
    }
    gl_FragColor = fragColor;
}

After running, you can see the dot effect as follows:

The code for drawing lines is shown below, with the geometry type passed in as GL_LINES:

static GLfloat lines[] = { // x, y, z, r, g, b per vertex
    0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // (0.0, 0.0), white
    0.5f, 0.5f, 0.0f, 0.0f, 0.0f, 0.0f, // (0.5, 0.5), black
    0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // (0.0, 0.0), red
    0.5f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f  // (0.5, 0.0), blue
};
glVertexAttribPointer(attrib_position, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (char *)lines);
glVertexAttribPointer(attrib_color, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (char *)lines + 3 * sizeof(GLfloat));
 
glLineWidth(5); // Set the line width to 5
glDrawArrays(GL_LINES, 0, 4); 

For line segments, if the color values at the two endpoints differ, OpenGL interpolates between them and produces a gradient by default, as shown below:

Since the effect at the start of this article only needs points and lines, drawing the most basic triangle is left as an exercise for the reader; I won't repeat it here.

Texture mapping

Besides primitives, OpenGL also has the concept of textures. Simply put, a texture maps image data onto the pixels we draw, making the rendered object look more realistic. First, let's look at the texture coordinate system, shown below:

Texture coordinates range from 0 to 1, with the origin at the lower-left corner of the image. The correspondence between texture coordinates and OpenGL drawing coordinates is indicated by the arrows in the diagram; when mapping a texture, we must keep the coordinate mapping consistent with that picture.

Rendering a texture requires two pieces of information: the texture coordinates and the texture content. On iOS, producing the texture content simply means converting a UIImage into OpenGL ES texture data:

- (GLuint)textureFromImage:(UIImage *)image 
{
    CGImageRef imageRef = [image CGImage];
    size_t w = CGImageGetWidth (imageRef);
    size_t h = CGImageGetHeight(imageRef);
    
    GLubyte *textureData        = (GLubyte *)malloc(w * h * 4);
    CGColorSpaceRef colorSpace  = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel    = 4;
    NSUInteger bytesPerRow      = bytesPerPixel * w;
    NSUInteger bitsPerComponent = 8;
    
    CGContextRef context = CGBitmapContextCreate(textureData,
                                                 w,
                                                 h,
                                                 bitsPerComponent, 
                                                 bytesPerRow, 
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0, h);
    CGContextScaleCTM(context, 1.0f, -1.0f); // flip vertically; combined with the translate above, this keeps the image upright
    CGContextDrawImage(context, CGRectMake(0, 0, w, h), imageRef);
    
    glEnable(GL_TEXTURE_2D);
    GLuint texName;
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    
    glTexImage2D(GL_TEXTURE_2D, 
                 0, 
                 GL_RGBA, 
                 (GLsizei)w, 
                 (GLsizei)h, 
                 0,
                 GL_RGBA, 
                 GL_UNSIGNED_BYTE, 
                 textureData);
    
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(textureData);
    
    return texName;
}

With the texture object in place, we next need to pass the coordinates and texture into the vertex and fragment shaders and sample it there. The vertex shader looks like this:

// vertex.glsl
attribute vec4 aPosition; 
attribute vec2 aTexcoord;
varying   vec2 vTexcoord;
void main(void) {
    gl_Position = aPosition; 
    vTexcoord   = aTexcoord;
}

In the code above, aTexcoord receives the texture coordinates and passes them on through the varying vTexcoord defined for the fragment shader. The fragment shader code looks like this:

// fragment.glsl
precision mediump   float;
uniform   sampler2D uTexture;
varying   vec2      vTexcoord;
void main(void) {
    gl_FragColor = texture2D(uTexture, vTexcoord);
}

Here uTexture is our texture and vTexcoord is our texture coordinate. With both in hand, the texture2D function samples the texture: in simple terms, it fetches the color of the texture at each coordinate and hands it to OpenGL for drawing. Since an image is essentially a matrix of per-pixel color values, once the mapping from texture to pixels is established, the whole picture can be displayed through OpenGL. After that, the last step is to activate the texture and render it, as follows:

GLuint tex_name = [self textureFromImage:[UIImage imageNamed:@"ryan.jpg"]];

glActiveTexture(GL_TEXTURE5);
glBindTexture(GL_TEXTURE_2D, tex_name);
glUniform1i(uTexture, 5);

const GLfloat vertices[] = { // OpenGL drawing coordinates
    -0.5f, -0.25f, 0.0f,
     0.5f, -0.25f, 0.0f,
    -0.5f,  0.25f, 0.0f,
     0.5f,  0.25f, 0.0f
}; 
glEnableVertexAttribArray(aPosition);
glVertexAttribPointer(aPosition, 3, GL_FLOAT, GL_FALSE, 0, vertices);

static const GLfloat coords[] = { // Texture coordinates
    0, 0,
    1, 0,
    0, 1,
    1, 1
};

glEnableVertexAttribArray(aTexcoord);
glVertexAttribPointer(aTexcoord, 2, GL_FLOAT, GL_FALSE, 0, coords);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
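One detail worth calling out: glActiveTexture(GL_TEXTURE5) selects texture unit 5, glBindTexture attaches our texture to that unit, and glUniform1i(uTexture, 5) tells the sampler2D to read from unit 5. The unit index is arbitrary as long as the two numbers match; uTexture here is assumed to be the uniform location fetched earlier with glGetUniformLocation(program, "uTexture").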

Here vertices holds the OpenGL drawing coordinates and coords holds the texture coordinates. These two sets of coordinates must follow the correspondence shown in the figure above for the image to display correctly. After running, the effect is as follows:

Video rendering

With the theory above in place, we can implement the real-time video rendering shown at the beginning of the article. For video capture and the OpenGL drawing environment we use GPUImage, and for face detection we use our company's own vision engine (free and open for use; see the Hongsoft visual AI engine open platform for downloads). The CIDetector face-detection class from the Core Image framework would also work.

@interface PVTStickerFilter : GPUImageFilter

@property (nonatomic, copy) NSArray<NSValue *> *facePoints;

@end

First, we subclass GPUImageFilter and define a face-point array to receive the landmark points from the face-recognition engine. Note that frames from the camera are stored in memory rotated 90 degrees counterclockwise by default, so each point must be rotated 90 degrees clockwise to match what is seen in the viewfinder. Also, the front-facing camera mirrors the image by default, so the points must additionally be flipped 180 degrees about the Y axis.

[self.facePoints enumerateObjectsUsingBlock:^(NSValue *obj, NSUInteger idx, BOOL *stop) {
    CGPoint point = [obj CGPointValue];
    [mPs addObject:[NSValue valueWithCGPoint:CGPointMake(point.y, point.x)]];
}];

For a point (x, y) in an image of height h, rotating 90 degrees clockwise gives (h − y, x). For the mirrored front camera we then flip about the Y axis; since the rotated image's width is h, this maps (h − y, x) to (h − (h − y), x) = (y, x), which is exactly the (point.y, point.x) swap in the loop above.

As the demo shows, we want an animation of symmetric lines on both sides of the face. Three groups of lines are drawn in total; we will analyze one group. Concretely, one line segment runs from the point at the lower-left of the nose bridge (x67, y67) to the inner end of the left eyebrow (x24, y24), and another from the lower-right of the nose bridge (x70, y70) to the inner end of the right eyebrow (x29, y29). In addition, dots must appear at (x24, y24) and (x29, y29) when the animation finishes.

Following this analysis, we must also convert the video frame's coordinates to the OpenGL coordinate system before drawing, i.e. map the points above into the range -1 to 1, using the conversion formula given earlier:

CGFloat x67 = 2 * [mPs[67] CGPointValue].x / frameWidth - 1.f;
CGFloat y67 = 1 - 2 * [mPs[67] CGPointValue].y / frameHeight;

CGFloat x24 = 2 * [mPs[24] CGPointValue].x / frameWidth - 1.f;
CGFloat y24 = 1 - 2 * [mPs[24] CGPointValue].y / frameHeight;

CGFloat x70 = 2 * [mPs[70] CGPointValue].x / frameWidth - 1.f;
CGFloat y70 = 1 - 2 * [mPs[70] CGPointValue].y / frameHeight;

CGFloat x29 = 2 * [mPs[29] CGPointValue].x / frameWidth - 1.f;
CGFloat y29 = 1 - 2 * [mPs[29] CGPointValue].y / frameHeight;

With these points, we could simply call glDrawArrays(GL_LINES, 0, 4) to draw the two segments. But two problems remain: how to draw a dashed line, and how to animate the drawing.

OpenGL ES 2.0 has no direct API for drawing dashed lines, so we take a different approach and build a dashed line out of several short lines. The idea: a 10-pixel dashed line from (x1, 0) to (x10, 0) can be drawn as five 1-pixel segments, namely (x1, 0)-(x2, 0), (x3, 0)-(x4, 0), (x5, 0)-(x6, 0), (x7, 0)-(x8, 0), and (x9, 0)-(x10, 0).

So first we need to split the whole segment according to the desired dash length. For example, if we define each dash to be 0.01 long, we can compute how many pieces are needed between the two points:

CGFloat w_24_67 = (x24 - x67); // x-axis distance between the two points
CGFloat h_24_67 = (y24 - y67); // y-axis distance between the two points

CGFloat w_29_70 = (x29 - x70); // x-axis distance between the two points
CGFloat h_29_70 = (y29 - y70); // y-axis distance between the two points

GLsizei s_24_67 = [self stepsOfLineWidth:w_24_67 height:h_24_67]; // number of sample points along the segment
GLsizei s_29_70 = [self stepsOfLineWidth:w_29_70 height:h_29_70]; // number of sample points along the segment

The function that computes the split is shown below, where PVT_DASH_LENGTH is the length of each dash:

- (GLsizei)stepsOfLineWidth:(CGFloat)w height:(CGFloat)h
{
    CGFloat a_w = fabs(w);
    CGFloat a_h = fabs(h);
    // The x-extent of one dash laid along the line is PVT_DASH_LENGTH * cos(angle),
    // so dividing the segment's x-extent by it gives the number of dash cells
    GLsizei s   = a_w / (PVT_DASH_LENGTH * cos(atan(a_h / a_w)));
    
    return ((s % 2) ? s : ++s) + 1; // round up to an even point count so GL_LINES consumes complete pairs
}
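As a quick sanity check: for a horizontal segment with an x-extent of 0.2 and PVT_DASH_LENGTH = 0.01, the angle is 0, so s = 0.2 / 0.01 = 20; being even, it is bumped to 21 and then to 22 points. GL_LINES then consumes them as the pairs (p0, p1), (p2, p3), ..., drawing 11 dashes separated by 10 gaps.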

Then we feed all the sample points to OpenGL to draw; the code is as follows:

GLsizei total_s = s_24_67 + s_29_70;
GLfloat *lines  = (GLfloat *)malloc(sizeof(GLfloat) * total_s * 3);

for (int i = 0; i < s_24_67; i++) {
    CGFloat xt = x67 + (CGFloat)i / (CGFloat)(s_24_67 - 1) * w_24_67;
    CGFloat yt = y67 + (CGFloat)i / (CGFloat)(s_24_67 - 1) * h_24_67;
    int   idx  = i * 3;
    lines[idx] = xt; lines[idx+1] = yt; lines[idx+2] = 0;
}
for (int i = 0; i < s_29_70; i++) {
    CGFloat xt = x70 + (CGFloat)i / (CGFloat)(s_29_70 - 1) * w_29_70;
    CGFloat yt = y70 + (CGFloat)i / (CGFloat)(s_29_70 - 1) * h_29_70;
    int   idx  = s_24_67 * 3 + i * 3;
    lines[idx] = xt; lines[idx+1] = yt; lines[idx+2] = 0;
}

glVertexAttribPointer(_position, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (char *)lines);
glLineWidth(2.5);
glDrawArrays(GL_LINES, 0, total_s); // consecutive point pairs become dashes; the gaps between pairs stay empty
free(lines);

Now that the dashed line is solved, let's look at animating the drawing. The idea is simple. Suppose we want the segments to grow over 4 seconds, and the dashed-line logic has split them into 100 short lines. In the camera's per-frame data callback, we measure how much time has elapsed since the first frame; if 1 second has passed, then for this frame we draw a quarter of the full length, i.e. we feed 25 short lines to OpenGL. And so on: past 4 seconds the counter resets to zero, and at exactly 4 seconds the full-length line is drawn.

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex
{
    _currentTime = frameTime;

    [super newFrameReadyAtTime:frameTime atIndex:textureIndex];
}

The time of the current frame is recorded first, so that the accumulated time since the first frame can be computed later:

- (void)calcAccumulatorTime
{
    NSTimeInterval interval = 0;
    
    if (CMTIME_IS_VALID(_lastTime)) {
        interval = CMTimeGetSeconds(CMTimeSubtract(_currentTime, _lastTime));
    }
    _lastTime       = _currentTime;
    _accumulator   += interval;
    
    _frameDuration  = _stepsIdx == 3 ? PVT_FRAME_DURATION / 2.f : PVT_FRAME_DURATION;
    
    _accumulator    = MIN(_accumulator, _frameDuration); // clamp so the animation stops at full length
}

Then we calculate how far the current frame's drawing should progress, based on the total animation time:

- (GLsizei)animationIdxWithStep:(GLsizei)step
{
    CGFloat s_scale = _accumulator / _frameDuration;
    GLsizei s_index = ceil(s_scale * step);
    
    return (s_index % 2) ? ++s_index : s_index; // round up to an even count so GL_LINES consumes complete pairs
}
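For example, at 30 fps each frame interval is roughly 0.033 s. One second into a 4-second animation, _accumulator / _frameDuration is 0.25, so with step = 100 the method returns 26 (25, rounded up to an even count), i.e. roughly a quarter of the short lines get drawn for that frame.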

The last step is to hand the computed segment data to OpenGL for drawing. Note that when the accumulated time exceeds the animation duration, it must be reset to zero so the animation loops continuously. _frameDuration here is the animation duration.

- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates
{
    [self calcAccumulatorTime];

    GLsizei s_24_67_index = [self animationIdxWithStep:s_24_67];
    GLsizei s_29_70_index = [self animationIdxWithStep:s_29_70];

    GLsizei total_s = s_24_67_index + s_29_70_index;
    GLfloat *lines  = (GLfloat *)malloc(sizeof(GLfloat) * total_s * 3);
    
    for (int i = 0; i < s_24_67_index; i++) {
        CGFloat xt = x67 + (CGFloat)i / (CGFloat)(s_24_67_index - 1) * w_24_67 * s_index_scale;
        CGFloat yt = y67 + (CGFloat)i / (CGFloat)(s_24_67_index - 1) * h_24_67 * s_index_scale;
        int   idx  = i * 3;
        lines[idx] = xt; lines[idx+1] = yt; lines[idx+2] = 0;
    }
    for (int i = 0; i < s_29_70_index; i++) {
        CGFloat xt = x70 + (CGFloat)i / (CGFloat)(s_29_70_index - 1) * w_29_70 * s_index_scale;
        CGFloat yt = y70 + (CGFloat)i / (CGFloat)(s_29_70_index - 1) * h_29_70 * s_index_scale;
        int   idx  = s_24_67_index * 3 + i * 3;
        lines[idx] = xt; lines[idx+1] = yt; lines[idx+2] = 0;
    }
    
    if (_accumulator == _frameDuration) {
        _accumulator = 0.f;
    }
    
    // to do drawing work...
    
    free(lines); // release the buffer allocated above once the draw call is issued
}

With the dashed lines and animation solved, one last requirement remains: drawing dots at (x24, y24) and (x29, y29) when the animation ends. We could draw round points in FragmentShader.glsl by discarding fragments beyond a 0.5 radius, as before. However, since points and lines are drawn at the same time with the same fragment shader, there is currently no way to tell point fragments from line fragments, so we cannot simply discard in the shader. Instead, we draw the dots geometrically. See this blog post for the details of the geometry.

#define PVT_CIRCLE_SLICES 100
#define PVT_CIRCLE_RADIUS  0.015

- (void)drawCircleWithPositionX:(CGFloat)x y:(CGFloat)y radio:(CGFloat)radio
{
    glLineWidth(2.0);
    
    GLfloat *vertext = (GLfloat *)malloc(sizeof(GLfloat) * PVT_CIRCLE_SLICES * 3);
    
    memset(vertext, 0x00, sizeof(GLfloat) * PVT_CIRCLE_SLICES * 3);
    
    float a     = PVT_CIRCLE_RADIUS; // horizontal radius
    float b     = a * radio;         // fWidth / fHeight;
    
    float delta = 2.0 * M_PI / PVT_CIRCLE_SLICES;
    
    for (int i = 0; i < PVT_CIRCLE_SLICES; i++) {
        GLfloat cx   = a * cos(delta * i) + x;
        GLfloat cy   = b * sin(delta * i) + y;
        int   idx    = i * 3;
        vertext[idx] = cx; vertext[idx+1] = cy; vertext[idx+2] = 0;
    }
    
    glVertexAttribPointer(_position, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (char *)vertext);
    glDrawArrays(GL_TRIANGLE_FAN, 0, PVT_CIRCLE_SLICES);
    
    free(vertext);
}
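A hypothetical call site, reusing the names from the earlier snippets (radio compensates for the render target's aspect ratio, per the fWidth / fHeight comment above):

if (_accumulator == _frameDuration) { // the animation has just completed
    [self drawCircleWithPositionX:x24 y:y24 radio:frameWidth / frameHeight];
    [self drawCircleWithPositionX:x29 y:y29 radio:frameWidth / frameHeight];
}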

OpenGL ES runs as deep as learning a new language, and great oaks grow from little acorns. I hope this article's summary brings some help to students who want to get started; comments are welcome.

References

  1. OpenGL ES get started and draw a triangle
  2. Imitation QQ video animation special effects – face recognition
  3. Create a GPUImage from 0
  4. Learn how to draw more graphics in OpenGL ES
  5. OpenGL ES 3.0 Data Visualization 1: Draw dots
  6. OpenGL ES Getting Started 03-OpenGL ES Circle Drawing
  7. OpenGL ES Getting Started 05-OpenGL ES Texture mapping