OpenGL/OpenGL ES introduction series:

  • Image rendering implementation and rendering problems
  • Basic transformations – getting started with vectors and matrices
  • Texture exploration – common API parsing
  • Vertex shaders and fragment shaders (transitioning from OpenGL to OpenGL ES)
  • GLKit usage and case studies

When I first got into OpenGL, I too was concerned that Apple had deprecated it. I thought: since Apple has deprecated OpenGL/OpenGL ES, what is the point of learning it?

It is worth noting:

1. It took Apple four years to migrate its own system to Metal.
2. Before launching Metal, Apple tightly integrated OpenGL ES with the corresponding layers and with GLKit to help developers use OpenGL ES quickly.
3. The deprecation only concerns the underlying API dependencies of Apple's internal systems; it does not mean iOS developers must stop using OpenGL ES. Its cross-platform support and stability make existing code hard to give up, and Metal currently cannot match that.
4. Large projects such as Baidu Maps, AutoNavi, and many audio/video processing pipelines will not migrate to Metal for the time being, so learning only Metal is not enough.
5. Learning should therefore proceed step by step: OpenGL -> OpenGL ES -> Metal.

This matches my own experience. I tried to learn GPUImage and OpenGL ES before, but I only knew how to use them, not how they worked. So lay the foundation first.

The purpose of this article

  • Get to know the graphics API quickly
  • Quickly understand the terminology under OpenGL

Graphics APIs

Introduction to the Graphics API

  • OpenGL (Open Graphics Library) is a cross-language, cross-platform graphics programming interface. It abstracts computer resources as OpenGL objects, and operations on those resources as OpenGL instructions.

  • OpenGL ES (OpenGL for Embedded Systems): a subset of the OpenGL 3D graphics API, designed for embedded devices such as mobile phones, PDAs, and game consoles. It removes many unnecessary and low-performance APIs.

  • DirectX: made up of many APIs, DirectX is not a pure graphics API. Most importantly, it is a multimedia processing API that belongs to Windows; it does not support platforms other than Windows, so it is not a cross-platform framework. By function it can be divided into four parts: display, audio, input, and network.

  • Metal: Apple's framework for 3D rendering. Apple launched it as a new platform for game developers, claiming it can render 3D graphics up to 10 times faster.

What problem is a graphics API designed to solve

Simply put, graphics APIs implement the low-level rendering of graphics:

  • In game development: rendering game scenes and game characters
  • In audio/video development: rendering the data produced by video decoding
  • In a map engine: rendering the data on the map
  • In animation: drawing the animation itself
  • In video processing: adding filter effects to video

Graphics APIs are the only way for iOS developers to access the GPU.

OpenGL terminology explained

OpenGL context (Context)

  • Before an application can call any OpenGL instruction, it must create an OpenGL context. The context is a very large state machine that holds all of OpenGL's state and is the foundation on which OpenGL instructions execute

  • OpenGL functions, in whatever language, are C-style procedural functions. Essentially, each one operates on some state or object in the context's vast state machine, and you must first set that object as the current object. By encapsulating OpenGL instructions, it is therefore possible to wrap OpenGL calls into an object-oriented graphics API

  • Since the OpenGL context is a large state machine, switching contexts tends to be expensive, yet different drawing modules may need completely separate state management. You can therefore create multiple contexts in one application, use different contexts on different threads, and share textures, buffers, and other resources between them (see the sketch below). This is more reasonable and efficient than repeatedly switching contexts or making large state changes
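To make context sharing concrete, here is a minimal sketch using EGL, the cross-platform glue layer for OpenGL ES; on iOS the equivalent is an EAGLContext created with a shared EAGLSharegroup. The function name is an illustration, not a standard API.

```c
#include <EGL/egl.h>

// Create two contexts that share resources (textures, buffers), e.g.
// so a background thread can upload textures for the render thread.
void create_shared_contexts(EGLContext *mainCtx, EGLContext *workerCtx) {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    EGLint cfgAttribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
    EGLConfig config;
    EGLint numConfigs;
    eglChooseConfig(dpy, cfgAttribs, &config, 1, &numConfigs);

    EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    *mainCtx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, ctxAttribs);
    // Passing the first context as share_context makes textures and
    // buffers created in either context visible to the other.
    *workerCtx = eglCreateContext(dpy, config, *mainCtx, ctxAttribs);
}
```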

OpenGL state machine

  • A state machine is, in theory, a machine that describes the states an object passes through during its life cycle, the transitions between those states, the events that trigger the transitions, and the actions performed during them. In other words, a state machine is a behavior: the sequence of states an object goes through in response to events during its lifetime, and how it responds to those events. It has the following characteristics:

    • It has memory: it remembers its current state
    • It can receive input and, based on that input and its current state, modify its state and possibly produce output
    • When it enters a special state (the shutdown state), it no longer receives input and stops working
  • A computer is a classic state machine:

    • The computer's storage (memory, hard disk, etc.) remembers the computer's current state (the software installed on it, the data saved on it, its current settings, and so on are all ultimately binary values, and all belong to the current state)
    • The computer's input devices receive input (keyboard, mouse, file input); based on the input and its own state (chiefly the program code it can run), it modifies its state (changes values in memory) and can produce output (displaying results on screen)
    • When it enters a special state (shutdown), it no longer receives input and stops working
  • Mapped onto OpenGL, you can think of it the same way:

    • OpenGL can record its own state (such as the current color, or whether blending is enabled)
    • OpenGL can receive input (calling an OpenGL function can be understood as OpenGL receiving input); for example, when glColor3f is called, OpenGL receives the input and updates its "current color" state
    • OpenGL can enter a stopped state and accept no further input; OpenGL always stops working before the program exits

OpenGL is a state machine: it maintains its state until the user issues a command that changes it, as the sketch below illustrates.
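A minimal sketch of this state-machine behavior with standard OpenGL ES 2.0 calls; the header path assumes iOS, and the function name is illustrative.

```c
#include <OpenGLES/ES2/gl.h>  // iOS header; use <GLES2/gl2.h> elsewhere

void configure_state(void) {
    // Each call writes into the context's state machine; the state
    // persists until some later call changes it.
    glEnable(GL_DEPTH_TEST);               // "depth testing" -> on
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);  // "clear color" -> red

    // State can be queried back at any time.
    GLboolean depth = glIsEnabled(GL_DEPTH_TEST);  // GL_TRUE now
    (void)depth;
    // Subsequent clear/draw calls implicitly use the current state.
}
```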

Rendering

The operation of converting graphics/image data into a 3D spatial image displayed on screen is called rendering.

VertexArray and VertexBuffer

  • Vertices: the vertex position data of a shape when it is drawn. This data can be stored directly in an array in memory or cached in GPU memory

  • All images in OpenGL are composed of primitives. OpenGL ES has three primitive types: points, lines, and triangles. When calling a draw method, the developer can choose to set a function pointer and pass the vertex data directly from memory; data kept in client memory this way is called a vertex array. A more efficient approach is to allocate a block of video memory in advance, called a vertex buffer, and load the vertex data into it ahead of time (see the sketch below)
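To illustrate the difference, here is a minimal sketch that copies a vertex array from client memory into a vertex buffer object (VBO); the ES 2.0 calls are standard, the function name is illustrative.

```c
#include <OpenGLES/ES2/gl.h>  // iOS header; use <GLES2/gl2.h> elsewhere

GLuint create_triangle_vbo(void) {
    // Vertex array: plain client memory, x/y/z for each of 3 vertices.
    const GLfloat vertices[] = {
         0.0f,  0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);               // allocate a buffer name
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // make it the current buffer
    // Upload once into GPU-managed memory; draws then read from there.
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    return vbo;
}
```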

Pipeline

  • Pipeline: can be understood as the rendering pipeline, the process by which a batch of raw graphics data passes through a pipeline, undergoes various transformations, and finally lands on the screen.

  • When OpenGL renders a graphic, it proceeds node by node, and that arrangement can be understood as a pipeline. Think of it as an assembly line: each task executes in a fixed sequence. It is called a pipeline because the graphics card processes data in a fixed order, and strictly in that order. Just as water flows from one end of a pipe to the other, the sequence cannot be broken.

  • Fixed pipeline: simply put, when rendering an image we can only implement a series of shader effects by calling the fixed-pipeline effects of the GLShaderManager class.

  • Programmable pipeline: simply put, when processing graphics we can use custom vertex shaders and fragment shaders for parts of the process. Because OpenGL's usage scenarios are so rich, fixed pipelines and stored shaders cannot accomplish every task, so the relevant stages are left open for programming

Shader program

  • The fixed rendering pipeline architecture has given way to a programmable rendering pipeline. Therefore, before actually calling the draw function, OpenGL also needs to specify a shader program compiled from shaders. Common shaders include the vertex shader (VertexShader), the fragment shader (FragmentShader/PixelShader), the geometry shader (GeometryShader), and the tessellation shader. As of OpenGL ES 3.0, only vertex shaders and fragment shaders are supported

  • OpenGL handles shaders much like any compiler. Through compile and link steps, it generates a shader program (glProgram) that contains the logic of both the vertex shader and the fragment shader (see the sketch below). When OpenGL draws, the vertex shader first computes on the incoming vertex data; primitive assembly then converts the vertices into primitives; rasterization turns those vector primitives into rasterized data; finally, the rasterized data is fed into the fragment shader, which operates on each fragment and determines its color
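A minimal sketch of those compile and link steps with standard ES 2.0 calls; error checking is omitted for brevity, and the function name is illustrative.

```c
#include <OpenGLES/ES2/gl.h>  // iOS header; use <GLES2/gl2.h> elsewhere

GLuint build_program(const char *vertexSrc, const char *fragmentSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, NULL);  // hand GLSL source to GL
    glCompileShader(vs);                      // compile vertex shader

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, NULL);
    glCompileShader(fs);                      // compile fragment shader

    GLuint program = glCreateProgram();       // the glProgram container
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);                   // link both stages together

    glDeleteShader(vs);  // safe to delete once linked into the program
    glDeleteShader(fs);
    return program;      // activate later with glUseProgram(program)
}
```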

Vertex shader (VertexShader)

  • Generally used to handle per-vertex transformations of a graphic (rotation/translation/projection, etc.)

  • The vertex shader is the program OpenGL uses to compute vertex attributes. Vertex shaders are per-vertex programs: the shader executes once for each vertex's data. This happens in parallel, and a vertex shader cannot access the data of other vertices while it runs

  • Typical vertex attributes that need computing include vertex coordinate transformations, per-vertex lighting, and so on. This is where vertex coordinates are converted from their own coordinate system into the normalized coordinate system (see the sketch below)
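Here is what a minimal GLSL ES vertex shader looks like, written as a C string ready to pass to glShaderSource; the attribute and uniform names (a_position, u_mvpMatrix) are illustrative choices, not names OpenGL requires.

```c
// Runs once per vertex: transforms the vertex into clip space.
static const char *kVertexShader =
    "attribute vec4 a_position;                  \n"
    "uniform mat4 u_mvpMatrix;                   \n"
    "void main() {                               \n"
    "    gl_Position = u_mvpMatrix * a_position; \n"
    "}                                           \n";
```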

Fragment shader (FragmentShader)

  • Generally used to handle the color calculation and filling of each fragment in a graphic

  • The fragment shader is the program OpenGL uses to compute the color of a fragment (pixel). It is a per-pixel program: the fragment shader executes once for each pixel, also in parallel (see the sketch below)
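And the matching minimal GLSL ES fragment shader, again as a C string; the uniform name u_color is an illustrative assumption.

```c
// Runs once per fragment: decides this pixel's final color.
static const char *kFragmentShader =
    "precision mediump float;    \n"
    "uniform vec4 u_color;       \n"
    "void main() {               \n"
    "    gl_FragColor = u_color; \n"
    "}                           \n";
```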

GLSL (OpenGL Shading Language)

The OpenGL Shading Language is the language used to write shading code in OpenGL: short, developer-written custom programs that execute on the GPU (Graphics Processing Unit) in place of fixed parts of the rendering pipeline, making those stages of the pipeline programmable. GLSL shader code is divided into two parts: the vertex shader and the fragment shader.

Rasterization

  • Rasterization is the process of converting vertex data into fragments. It transforms a shape into an image composed of a grid; each element of a fragment corresponds to a pixel in the frame buffer

  • Rasterization is in fact the process of transforming geometric primitives into a two-dimensional image. The process involves two parts of work. First: determine which integer grid regions in window coordinates are occupied by the basic primitive. Second: assign a color value and a depth value to each of those regions. The rasterization process produces fragments

  • The mathematical description of an object and the color information associated with it are converted into pixels at the corresponding positions on screen and colors that fill those pixels. This process is called rasterization, and it converts an analog signal into a discrete signal

Texture

A texture can be understood as a picture. When rendering graphics, images must be encoded and filled in to make the scene more realistic; the images used this way are commonly called textures. In OpenGL, it is more common to say texture than image.
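A minimal sketch of creating a texture from raw pixel data with standard ES 2.0 calls; the function name and the assumption that pixels holds width x height RGBA bytes are illustrative.

```c
#include <OpenGLES/ES2/gl.h>  // iOS header; use <GLES2/gl2.h> elsewhere

GLuint create_texture(const void *pixels, GLsizei width, GLsizei height) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);  // make it the current 2D texture
    // Sampling state is stored per texture object.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Upload the image data into GPU memory.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```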

Blending

  • After the testing phase, if a fragment has still not been culled, its color is blended with the color already in the frame buffer. The blending algorithm can be specified with OpenGL functions, but the algorithms OpenGL provides are limited; if a more complex blending algorithm is needed, it can be implemented in the fragment shader, though performance will be worse than with the built-in blending

  • Blending means mixing two colors together: the original color at a pixel position is combined, in a specified way, with the color about to be drawn, to achieve a special effect (a minimal sketch follows)
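For example, classic alpha blending is enabled like this with standard ES 2.0 calls; the function name is illustrative.

```c
#include <OpenGLES/ES2/gl.h>  // iOS header; use <GLES2/gl2.h> elsewhere

void enable_alpha_blending(void) {
    glEnable(GL_BLEND);
    // result = src.rgb * src.a + dst.rgb * (1 - src.a)
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
```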

Transformation matrix (Transformation)

To translate, scale, or rotate a graphic, you need a transformation matrix (see the sketch below).
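As a concrete example, here is a 4x4 translation matrix built in the column-major layout OpenGL expects; on iOS, GLKit's GLKMatrix4MakeTranslation does the same job for you. The function name here is illustrative.

```c
#include <string.h>

// Fill m with a column-major 4x4 matrix that translates by (tx, ty, tz).
void make_translation(float m[16], float tx, float ty, float tz) {
    memset(m, 0, sizeof(float) * 16);
    m[0] = m[5] = m[10] = m[15] = 1.0f;  // identity diagonal
    m[12] = tx;  // column-major: translation sits in elements 12..14
    m[13] = ty;
    m[14] = tz;
}
```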

Projection matrix (Projection)

Used to convert 3D coordinates into 2D screen coordinates; the actual lines are then drawn in those 2D coordinates (see the sketch below).
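A sketch of the standard perspective projection matrix, equivalent to GLKit's GLKMatrix4MakePerspective (column-major, right-handed); the function name is illustrative.

```c
#include <math.h>
#include <string.h>

void make_perspective(float m[16], float fovyRadians, float aspect,
                      float nearZ, float farZ) {
    float f = 1.0f / tanf(fovyRadians / 2.0f);
    memset(m, 0, sizeof(float) * 16);
    m[0]  = f / aspect;                       // scale x by fov and aspect
    m[5]  = f;                                // scale y by fov
    m[10] = (farZ + nearZ) / (nearZ - farZ);  // map z into clip range
    m[11] = -1.0f;                            // perspective divide by -z
    m[14] = (2.0f * farZ * nearZ) / (nearZ - farZ);
}
```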

Rendering to the screen / swapping buffers (SwapBuffer)

Render buffers are typically mapped to system resources such as windows. If you render an image directly into the buffer corresponding to the window, the image appears on screen. Note, however, that if each window had only one buffer, the screen might refresh while drawing is still in progress, and the window could display an incomplete image.

A normal OpenGL program therefore has at least two buffers. The one displayed on screen is called the screen buffer; the one not displayed is called the off-screen buffer. After a frame is rendered into the off-screen buffer, the image is brought to the screen by swapping the screen buffer with the off-screen buffer.

Because displays generally refresh row by row, a swap at an arbitrary moment could leave the upper and lower regions of the screen showing parts of two different frames. The swap therefore usually waits for the display to finish refreshing and takes place in the interval between two refreshes. The signal that marks this interval is called the vertical sync signal, and the technique is called vertical synchronization (VSync).

After adopting double buffering and vertical sync, the renderer must always wait for the swap before rendering the next frame, so the frame rate cannot fully reach the highest level the hardware allows. To solve this, triple buffering was introduced: while waiting for vertical sync, rendering alternates back and forth between two off-screen buffers; when vertical sync occurs, the screen buffer is swapped with the most recently rendered off-screen buffer, making full use of the hardware (the render-loop sketch below shows where the swap happens).
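To show where the swap fits, here is a minimal render-loop sketch using EGL's eglSwapBuffers; on iOS the same step is performed by EAGLContext's presentRenderbuffer:. The loop structure and parameter names are illustrative.

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>

void render_loop(EGLDisplay dpy, EGLSurface surface, const int *running) {
    while (*running) {
        glClear(GL_COLOR_BUFFER_BIT);  // draw the next frame off-screen
        // ... issue draw calls for this frame here ...

        // Waits for vertical sync (by default) and then swaps the
        // off-screen buffer with the screen buffer.
        eglSwapBuffers(dpy, surface);
    }
}
```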