Preface

Today we will look at how WebRTC renders video on iOS. There are two ways to accelerate video rendering on iOS: one is OpenGL ES, the other is Metal.

The advantage of OpenGL ES is that it is cross-platform and has been around for a long time, so it is stable and its compatibility is better. Metal, on the other hand, is a relatively recent addition to iOS and is theoretically more efficient than OpenGL ES.

WebRTC supports both rendering methods. It first checks whether the current iOS device supports Metal and, if so, uses Metal; if not, it falls back to OpenGL ES.

What we’re going to introduce today is OpenGL ES.

Create an OpenGL context

When you render video with OpenGL ES on iOS, you first create an EAGLContext object, because EAGLContext manages the OpenGL ES rendering context. This context holds state information, render commands, and OpenGL ES rendering resources such as textures and renderbuffers. To execute OpenGL ES commands, you must make the EAGLContext you created the current rendering context.

EAGLContext does not manage drawing resources directly; it manages them through an EAGLSharegroup object associated with the context. When creating an EAGLContext, you can either create a new sharegroup or share the EAGLSharegroup of an EAGLContext created earlier.

The relationship between EAGLContext and EAGLSharegroup is simple: every EAGLContext references an EAGLSharegroup, and several contexts can share the same sharegroup.

WebRTC does not use a shared EAGLSharegroup, so we will not cover that case here. Readers who are interested can find more information online.

There are currently three versions of OpenGL ES, of which versions 2 and 3 are the ones in practical use. So when creating the context we have to check which one is available: first see whether version 3 is supported and, if not, fall back to version 2.

The logic looks roughly like this (a simplified sketch; the actual WebRTC code differs in details):
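    // Sketch: _glContext is assumed to be an ivar of the rendering view class.
    // Try OpenGL ES 3 first and fall back to ES 2 if it is not available.
    _glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    if (!_glContext) {
      _glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    }
    NSAssert(_glContext, @"Failed to create an EAGLContext");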

Once the context is created, we also need to make it the current context for it to actually take effect.

The call looks roughly like this:
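    // Make the newly created context the current rendering context.
    if (![EAGLContext setCurrentContext:_glContext]) {
      NSLog(@"Failed to set the current OpenGL ES context");
    }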

It is important to note that the current context can change when the application switches to the background, so when it returns to the foreground (and in general before drawing) the current context should be checked and reset if necessary.
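A small helper along these lines covers that case (a sketch; the actual WebRTC code performs a similar check):

    // Re-assert our context if another context has become current, for example
    // after the app returns from the background.
    - (void)ensureGLContext {
      if ([EAGLContext currentContext] != _glContext) {
        [EAGLContext setCurrentContext:_glContext];
      }
    }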

Now that the OpenGL ES context is created, let’s look at how to create a View.

Create the OpenGL View

On iOS there are two ways to present OpenGL ES content: GLKView and CAEAGLLayer. WebRTC uses GLKView for presentation, so CAEAGLLayer will not be covered here.

The GLKit framework provides view and view controller classes that reduce the amount of code needed to create and maintain OpenGL ES drawing. The GLKView class manages the presentation part; the GLKViewController class manages the drawn content. Both inherit from UIKit classes. The benefit of GLKView is that developers can focus their effort on the OpenGL ES rendering itself.

The basic drawing process with GLKView is as follows:

  • Prepare the OpenGL ES environment;

  • Issue the draw commands;

  • Present the rendered content.

The GLKView class implements the first and third steps itself; the second step is implemented by the developer in the drawing method. GLKView provides a simple drawing interface for OpenGL ES because it manages the standard parts of the OpenGL ES rendering process:

Before calling the draw method, GLKView:

  • makes the EAGLContext the current context;

  • creates a framebuffer object (FBO) and renderbuffer based on the view's size, scale factor, and drawable properties;

  • binds the FBO as the current destination for draw commands;

  • sets the OpenGL ES viewport to match the framebuffer size.

After the draw method returns, GLKView:

  • resolves the multisampling buffers if multisampling is enabled;

  • discards renderbuffers whose contents are no longer needed;

  • presents the contents of the renderbuffer on screen.

There are two ways to use GLKView. One is to subclass GLKView directly and override the drawRect: method. The other is to implement the GLKViewDelegate protocol and its glkView:drawInRect: method.

WebRTC takes the second approach: RTCEAGLVideoView is a wrapper around GLKView and conforms to GLKViewDelegate.

First, create GLKView.
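A sketch of that setup (assuming self is the RTCEAGLVideoView and _glContext is the context created earlier; the real WebRTC code differs in details):

    // Create the GLKView with our OpenGL ES context and add it as a subview.
    GLKView *glkView = [[GLKView alloc] initWithFrame:CGRectZero
                                              context:_glContext];
    glkView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    glkView.drawableDepthFormat = GLKViewDrawableDepthFormatNone;
    glkView.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
    glkView.drawableMultisample = GLKViewDrawableMultisampleNone;
    glkView.delegate = self;             // RTCEAGLVideoView implements GLKViewDelegate
    glkView.enableSetNeedsDisplay = NO;  // we decide ourselves when to redraw
    [self addSubview:glkView];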

Once the GLKView is created, its delegate is set to the RTCEAGLVideoView so that the RTCEAGLVideoView does the drawing. In addition, enableSetNeedsDisplay is set to NO so that we control ourselves when to redraw.

Then, the glkView:drawInRect: delegate method is implemented.
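A much-simplified sketch of that callback (the texture-cache and shader calls below are illustrative placeholders, not the exact WebRTC API):

    // GLKViewDelegate callback: _videoFrame, _nv12TextureCache and _shader stand
    // in for the corresponding members of RTCEAGLVideoView.
    - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
      if (!_videoFrame) {
        return;
      }
      glClear(GL_COLOR_BUFFER_BIT);

      // 1. Split the decoded NV12 frame into a Y-plane texture and a UV-plane texture.
      [_nv12TextureCache uploadFrameToTextures:_videoFrame];

      // 2. Run the shader program: it samples both textures, converts YUV to RGB,
      //    and draws two triangles that cover the view.
      [_shader drawNV12WithYTexture:_nv12TextureCache.yTexture
                          uvTexture:_nv12TextureCache.uvTexture
                              width:_videoFrame.width
                             height:_videoFrame.height];
    }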

The code above uses a shader to draw NV12 YUV data into the view. In essence, a decoded video frame is split into a Y-plane texture and a UV-plane texture; the shader program then converts those textures to RGB and renders the result into the view.

Shader program

OpenGL ES has two types of shaders: the vertex shader and the fragment shader.

Vertex shader: used to process the vertices.

Fragment shader: used to color the pixels (fragments).

Vertex Shader

The vertex shader processes the vertices of a shape. As we all know, both 2D and 3D graphics are built from vertices.

In OpenGL ES there are three basic primitives: points, lines, and triangles. From these you can build more complex shapes, and lines and triangles are themselves made up of points.

The video is displayed in a rectangle, so we build that rectangle from the basic primitives. In theory the rectangle could also be drawn with points or lines, but with lines OpenGL ES would have to draw four times, whereas with triangles it only takes two draws, so triangles are faster. The following is roughly the vertex shader program used in WebRTC (a simplified sketch; the real source is assembled from macros so it can target both ES 2 and ES 3). It runs once per vertex, writes the user-supplied vertex coordinates into gl_Position, and passes the vertex's texture coordinates on as input to the fragment shader.
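    // Roughly the WebRTC vertex shader (simplified; the real source is assembled
    // with version/qualifier macros so the same code can target ES 2 and ES 3).
    static const char kVertexShaderSource[] =
        "attribute vec2 position;\n"    // vertex position in clip space
        "attribute vec2 texcoord;\n"    // texture coordinate for this vertex
        "varying vec2 v_texcoord;\n"    // passed on to the fragment shader
        "void main() {\n"
        "  gl_Position = vec4(position.x, position.y, 0.0, 1.0);\n"
        "  v_texcoord = texcoord;\n"
        "}\n";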

1. The OpenGL coordinate origin is the center of the screen. The origin of texture coordinates is the lower left corner.

2. gl_Position is a built-in shader variable that stores the coordinates of a vertex.

For OpenGL ES shader syntax, see my other article on shaders.

Fragment Shader

The fragment shader program runs once for each fragment and colors it. A fragment is almost the same thing as a pixel; for our purposes, you can simply think of a fragment as a pixel.

The following is roughly the NV12 fragment shader used in WebRTC (again a simplified sketch):
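    // Roughly the WebRTC NV12 fragment shader (simplified). In the real source,
    // texture2D is hidden behind the FRAGMENT_SHADER_TEXTURE macro so the same
    // code can also target OpenGL ES 3.
    static const char kNV12FragmentShaderSource[] =
        "precision mediump float;\n"
        "varying vec2 v_texcoord;\n"
        "uniform lowp sampler2D s_textureY;\n"   // Y-plane texture
        "uniform lowp sampler2D s_textureUV;\n"  // interleaved UV-plane texture
        "void main() {\n"
        "  mediump float y = texture2D(s_textureY, v_texcoord).r;\n"
        "  mediump vec2 uv = texture2D(s_textureUV, v_texcoord).ra - vec2(0.5, 0.5);\n"
        "  gl_FragColor = vec4(y + 1.403 * uv.y,\n"                // R
        "                      y - 0.344 * uv.x - 0.714 * uv.y,\n" // G
        "                      y + 1.770 * uv.x,\n"                // B
        "                      1.0);\n"
        "}\n";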

YUV comes in a variety of formats, as you can see in my other article, YUV.

In the code, the FRAGMENT_SHADER_TEXTURE macro, which expands to texture2D in OpenGL ES 2, is used to sample the Y value from the Y-plane texture and the UV values from the UV-plane texture; the RGB value of each pixel (strictly, each fragment) is then calculated with the conversion formula.

Once you have the vertex data and the RGB values of the fragments, you can issue the OpenGL ES draw call to render the video.
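For illustration, a minimal version of that draw step could look like this (a sketch, not WebRTC's exact buffer setup; it assumes a linked program object and the attribute names position and texcoord from the vertex shader above):

    // A full-screen quad as a triangle strip: two triangles, four vertices.
    // Each vertex is {x, y, u, v}: clip-space position plus texture coordinate.
    static const GLfloat kQuad[] = {
        -1.0f, -1.0f, 0.0f, 1.0f,  // bottom-left
         1.0f, -1.0f, 1.0f, 1.0f,  // bottom-right
        -1.0f,  1.0f, 0.0f, 0.0f,  // top-left
         1.0f,  1.0f, 1.0f, 0.0f,  // top-right
    };

    glUseProgram(program);

    GLint position = glGetAttribLocation(program, "position");
    GLint texcoord = glGetAttribLocation(program, "texcoord");
    glEnableVertexAttribArray(position);
    glEnableVertexAttribArray(texcoord);
    glVertexAttribPointer(position, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(GLfloat), kQuad);
    glVertexAttribPointer(texcoord, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(GLfloat), kQuad + 2);

    // Draw the two triangles; the fragment shader fills them with converted RGB.
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);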

The vertex and fragment shader programs used by WebRTC have now been introduced. There is still a little extra work to do before they can run.

An OpenGL ES shader program is similar to a C program. To get a C program running, there are several steps:

  • Write the program code

  • Compile

  • Link

  • Execute

The same is true for Shader programs. Let’s look at how WebRTC does it.

It first creates a shader object, binds the shader source shown above to it, and then compiles the shader.

After a successful compile, a program object is created, the shaders created earlier are attached to it, and the program is linked. With everything in place, the shader program can be used to draw the video.
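A condensed sketch of that compile-and-link sequence using the standard OpenGL ES calls (error logging trimmed; kVertexShaderSource and kNV12FragmentShaderSource are the shader strings shown earlier):

    // Compile one shader of the given type (GL_VERTEX_SHADER or GL_FRAGMENT_SHADER).
    static GLuint CompileShader(GLenum type, const char *source) {
      GLuint shader = glCreateShader(type);      // create the shader object
      glShaderSource(shader, 1, &source, NULL);  // bind the source to it
      glCompileShader(shader);                   // compile
      GLint ok = GL_FALSE;
      glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
      if (!ok) {
        glDeleteShader(shader);
        return 0;
      }
      return shader;
    }

    // Attach the compiled shaders to a program object and link it.
    static GLuint CreateProgram(GLuint vertexShader, GLuint fragmentShader) {
      GLuint program = glCreateProgram();
      glAttachShader(program, vertexShader);
      glAttachShader(program, fragmentShader);
      glLinkProgram(program);
      GLint ok = GL_FALSE;
      glGetProgramiv(program, GL_LINK_STATUS, &ok);
      if (!ok) {
        glDeleteProgram(program);
        return 0;
      }
      return program;  // ready for glUseProgram()
    }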

WebRTC video rendering related files

RTCEAGLVideoView.m/.h: creates the EAGLContext and the OpenGL ES view, and displays the video data.

RTCShader.mm/.h: code for creating, compiling, and linking OpenGL ES shaders.

RTCDefaultShader.mm/.h: the shader programs and the drawing-related code.

RTCNV12TextureCache.mm/.h: code for generating NV12 YUV textures.

RTCI420TextureCache.mm/.h: code for generating I420 textures.

Summary

This article introduced OpenGL ES rendering in WebRTC and walked through how WebRTC renders video, including:

  • creation and initialization of the context;

  • creation of the GLKView;

  • implementation of the drawing method;

  • analysis of the shader code;

  • compilation and execution of the shaders.