This post mainly focuses on explaining key concepts and terms, together with my own understanding of them.

A quick look at the major graphics APIs

OpenGL

OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. It abstracts the computer's resources as OpenGL objects, and operations on those resources as OpenGL commands. OpenGL is designed to be output-only: it provides rendering, leaving window management and input handling to the platform. OpenGL is commonly used in CAD, virtual reality, scientific visualization programs, and video game development.

OpenGL ES

OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API. It is trimmed down from OpenGL and designed for embedded devices such as mobile phones, PDAs, and game consoles.

DirectX

DirectX (often expanded as Direct eXtension) is a collection of multimedia programming interfaces created by Microsoft. DirectX is not a single API; it is made up of many APIs, and it only supports the Windows platform. (As an iOS developer, I will not go into it further here.)

Metal

Metal is a low-level rendering application programming interface that provides the thinnest layer software needs to run on different graphics chips. Apple introduced it at WWDC 2014 as a new platform technology for game developers, claiming up to ten times better rendering performance for 3D graphics. Metal's advantage is that it unifies rendering and heterogeneous computing in a single framework (like OpenGL, it can use the GPU both to accelerate rendering and to perform general-purpose computation). Although Apple announced at WWDC 2018 that OpenGL would be deprecated on its platforms, most applications on the market still use OpenGL and will not switch to Metal any time soon, so learning OpenGL remains necessary and important, and Metal even more so!

OpenGL

OpenGL Context

Before an application can issue any OpenGL commands, an OpenGL context must be created. The context is a large state machine that holds all of OpenGL's state and is the basis on which OpenGL commands execute. OpenGL follows a client-server model: we can think of each GPU as a server, and each drawing context as a client, with each client maintaining its own state machine. If two windows use two different drawing contexts, the two windows are completely independent of each other.

Because an OpenGL context is such a large state machine, switching contexts can incur significant system overhead. However, different drawing modules may need completely independent state, so an application can create multiple contexts, use different contexts on different threads, and share resources such as textures and buffers between contexts.
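
To make this concrete, here is a minimal sketch, assuming GLFW (a windowing library not mentioned above) is used to create the contexts; passing an existing window when creating a second one makes the two contexts share objects such as textures and buffers.

```c
/* A sketch only: two GLFW windows/contexts that share GL objects. */
#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit()) return -1;

    /* First window and its context. */
    GLFWwindow *main_win = glfwCreateWindow(640, 480, "main", NULL, NULL);

    /* Second context; passing main_win as the last argument makes the new
       context share textures, buffers, etc. with the first one. */
    GLFWwindow *worker_win = glfwCreateWindow(64, 64, "loader", NULL, main_win);

    /* Each thread makes exactly one context current before issuing GL calls. */
    glfwMakeContextCurrent(main_win);
    /* ... render here; another thread could make worker_win current and
       upload textures in parallel ... */

    glfwDestroyWindow(worker_win);
    glfwDestroyWindow(main_win);
    glfwTerminate();
    return 0;
}
```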

OpenGL state machine

A state machine is an abstract (theoretical) machine with the following characteristics: a. it has memory and can remember its current state; b. it can receive input, modify its state according to the input and its current state, and produce output; c. when it enters a special state (a halt state), it no longer accepts input and stops working.

So, essentially, our computer is also a kind of state machine:

  1. The computer's storage (RAM, hard disk, etc.) remembers the computer's current state (the software installed on it and the data stored on it are ultimately binary values, and all of them are part of the current state).
  2. The computer's input devices receive input (keyboard, mouse, file input); according to the input and its own state (mainly the program code it can run), it modifies its state (changes values in memory) and produces output (shows results on the screen).
  3. When it enters a special state (the shutdown state), it no longer receives input and stops working.

Correspondingly, OpenGL also behaves like a state machine:

  1. OpenGL can record its own state (for example, the current clear color set by glClearColor(1.0f, 1.0f, 0.0f, 0.0f);, or whether depth testing is enabled via glEnable(GL_DEPTH_TEST);).
  2. OpenGL can receive input (every call to an OpenGL function can be regarded as input), modify its own state according to the input and its current state, and produce output (for example, when we call glColor3f, OpenGL receives the input and changes its current color state; when we call glRectf, OpenGL outputs a rectangle).
  3. OpenGL can enter a stopped state and no longer accept input. This may not be obvious in our own programs, but OpenGL always stops working before the program exits.

Rendering

The operation of converting graphics/image data into a 3D spatial image is called rendering (for example, rendering a 3D model according to external conditions such as lighting and internal properties such as materials and textures, to produce a more realistic 3D result). As for "draw" versus "render": my understanding is that "draw" only means drawing basic primitives, while "render" is a bit higher level, drawing the primitives and applying effects on top of them to present the result.
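
As a small illustration of the state-machine idea (a sketch only, using the legacy fixed-function calls mentioned above, which require a compatibility-profile context):

```c
/* State calls change the context's state; drawing uses whatever state
   is currently set. */
#include <GLFW/glfw3.h>

void render_frame(void) {
    /* These calls draw nothing; they only change OpenGL's recorded state. */
    glClearColor(1.0f, 1.0f, 0.0f, 0.0f);    /* remember the clear color     */
    glEnable(GL_DEPTH_TEST);                 /* remember: depth test enabled */

    /* The state machine can be queried back at any time. */
    GLboolean depth_on = glIsEnabled(GL_DEPTH_TEST);
    (void)depth_on;                          /* GL_TRUE here                 */

    /* Output is produced from the input plus the current state. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glColor3f(1.0f, 0.0f, 0.0f);             /* state: current color is red  */
    glRectf(-0.5f, -0.5f, 0.5f, 0.5f);       /* output: a red rectangle      */
}
```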

VertexArray and VertexBuffer

Vertex data: the vertex positions of the shape we want to draw. This data can be stored in a client-side array or cached in GPU memory.

Vertex array: stores vertex data in an array, including vertex coordinates, surface normals, RGBA colors, secondary colors, color indices, texture coordinates, and polygon edge flags. Drawing can then be completed with a single function call, which greatly reduces the number of function calls, avoids redundant processing of shared vertices, and improves program performance.

Vertex buffer: because OpenGL uses a client-server structure, transferring data from the client to the server can be slow, so buffer objects were added to explicitly specify which data should be stored on the graphics server (in GPU memory).
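
A minimal sketch of the difference, assuming a context is current and a loader such as GLEW or glad provides the buffer-object entry points: the vertex array lives in client memory, and the vertex buffer object copies it once into GPU memory.

```c
#include <GL/glew.h>   /* or any loader providing the GL 1.5+ entry points */

/* Client-side vertex array: three vertices of a triangle. */
static const GLfloat triangle[] = {
    /*  x      y     z  */
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

GLuint upload_triangle(void) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);                   /* ask the server for a buffer object   */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);      /* make it the current array buffer     */
    glBufferData(GL_ARRAY_BUFFER,            /* copy client data into GPU memory     */
                 sizeof(triangle), triangle, GL_STATIC_DRAW);
    return vbo;                              /* later draws reuse the uploaded data  */
}
```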

Pipeline

The pipeline can be thought of as the rendering pipeline. When OpenGL renders graphics, the graphics card processes data in a fixed sequence of stages, like an assembly line, one after another, and this order must be followed strictly; it cannot be broken. (In other words, the pipeline describes how a batch of raw graphics data travels through a series of stages, undergoing various transformations, until it finally appears on the screen.)

Fixed pipeline (stored shaders)

In its early days, OpenGL packaged various shader program blocks into built-in, fixed shader programs that implemented lighting, coordinate transformation, clipping, and other functions, to help developers complete the rendering of graphics.

Programmable pipeline

When using OpenGL, the fixed pipeline cannot handle every use case, so the relevant stages are opened up and made programmable. (By analogy, a soap production line is a pipeline: raw material goes in at one end and, step by step, the line produces rectangular white bars of soap with a single color and a single shape and little freedom; that is the fixed pipeline. If, during production, you can customize the soap's shape and color, you can end up with soaps of all shapes and colors; that is the programmable pipeline.)

Shader

A shader is an editable program used to implement image rendering and to replace the fixed rendering pipeline; this is what makes the rendering pipeline programmable. Therefore, before actually calling the draw functions, OpenGL needs to be given a shader program built from shader source. Common shaders include the vertex shader (VertexShader), fragment shader (FragmentShader, called PixelShader in DirectX), geometry shader (GeometryShader), and tessellation (surface subdivision) shader. The most important are the vertex shader and the fragment shader. OpenGL compiles and links shaders to generate a shader program (glProgram), which contains the logic of both the vertex shader and the fragment shader. During rendering, the vertex shader first processes the incoming vertex data; primitive assembly then turns the vertices into primitives; rasterization converts the primitives into raster data; finally, the raster data is passed into the fragment shader, which performs a calculation for each pixel of the raster data and determines its color.

VertexShader

A vertex shader is generally used to handle per-vertex transformations (rotation/translation/projection) of a shape and provides a general programmable way of operating on vertices. It is a vertex-by-vertex program: the vertex shader is executed once for each vertex (in parallel). It is like every product on an assembly line having to be inspected and stamped by the quality department before leaving the factory.

FragmentShader

A fragment shader is generally used to handle the filling and calculation of the color of each pixel of a shape. It is the program OpenGL uses to compute fragment colors, executed pixel by pixel (in parallel). Any shape covers many pixels, so a huge number of calculations are needed; the CPU cannot complete this massive workload, which is why the GPU is used to assist.

GLSL (OpenGL Shading Language)

GLSL is the language used to write OpenGL shaders. Its basic syntax is essentially the same as C/C++. GLSL shaders are executed on the GPU (Graphics Processing Unit) of the graphics card, replacing parts of the fixed rendering pipeline so that individual stages of the pipeline become programmable, for example the view transform and the projection transform. GLSL shader code is divided into two parts: the vertex shader and the fragment shader.
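
A minimal sketch of the compile/link steps described above (helper names are my own; error checking omitted): shader sources are compiled, attached to a program object, and linked into the glProgram that the draw calls will use.

```c
#include <GL/glew.h>   /* assumes a loader is initialized and a context is current */

static const char *vs_src =
    "#version 330 core\n"
    "layout(location = 0) in vec3 aPos;\n"
    "void main() { gl_Position = vec4(aPos, 1.0); }\n";

static const char *fs_src =
    "#version 330 core\n"
    "out vec4 FragColor;\n"
    "void main() { FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

GLuint build_program(void) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);                       /* compile the vertex shader      */

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);                       /* compile the fragment shader    */

    GLuint prog = glCreateProgram();           /* the glProgram mentioned above  */
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);                       /* link both stages together      */

    glDeleteShader(vs);                        /* no longer needed once linked   */
    glDeleteShader(fs);
    return prog;                               /* use later with glUseProgram()  */
}
```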

Rasterization

1. Rasterization is the process of converting vertex data into fragments; each fragment corresponds to a pixel in the frame buffer. 2. It is the process of turning geometric primitives into a two-dimensional image, and it consists of two parts: first, determining which integer grid areas (pixels) of window coordinates are occupied by the primitive; second, assigning a color value and a depth value to each of those areas.

Texture

A texture is an OpenGL/OpenGL ES buffer (cache) used to store the color values of an image; it can simply be understood as a picture. When rendering graphics, the image data must be filled into the texture according to its encoding (pixel format).
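
A minimal sketch of filling a texture, assuming the image has already been decoded into tightly packed RGBA bytes:

```c
#include <GL/gl.h>   /* assumes an OpenGL context is already current */

GLuint upload_texture(const unsigned char *rgba_pixels, int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Copy the decoded image (assumed RGBA8, tightly packed) into the
       texture's storage on the GPU. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);

    /* How the texture is sampled when minified / magnified. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```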

Blending

In OpenGL, blending is usually the technique used to make objects appear transparent. Transparency means that an object does not have a single solid color; instead its color is a combination, in varying proportions, of the object's own color and the colors of the objects behind it. For example, multiple colors can be blended into one: mixing the three primary colors of light, red, green, and blue, at full intensity yields white. After the testing stages, if a fragment has not been culled, its color is blended with the color already stored in the frame buffer.
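
A minimal sketch of the usual transparency blend state: the incoming fragment is weighted by its alpha, and the color already in the frame buffer by one minus that alpha.

```c
#include <GL/gl.h>

void enable_alpha_blending(void) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* result = src.rgb * src.a + dst.rgb * (1 - src.a) */
}
```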

Transformation matrix

During drawing there are three common kinds of transformations: translation, scaling, and rotation. All of them are carried out with transformation matrices.
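
For example, a translation can be written as a 4x4 matrix. A minimal sketch, laid out column-major as OpenGL expects matrices in memory:

```c
/* Multiplying the point (x, y, z, 1) by this matrix moves it by (tx, ty, tz). */
void make_translation(float m[16], float tx, float ty, float tz) {
    /* column 0        column 1        column 2         column 3 (translation) */
    m[0] = 1; m[1] = 0; m[2]  = 0; m[3]  = 0;
    m[4] = 0; m[5] = 1; m[6]  = 0; m[7]  = 0;
    m[8] = 0; m[9] = 0; m[10] = 1; m[11] = 0;
    m[12] = tx; m[13] = ty; m[14] = tz; m[15] = 1;
}
```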

Projection matrix

The projection matrix is used to convert 3D coordinates into 2D screen coordinates, in which the actual lines and shapes are then drawn.
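
A minimal sketch of a gluPerspective-style perspective projection matrix (column-major; the parameter names are my own):

```c
#include <math.h>

/* fovy: vertical field of view in radians; aspect = width / height. */
void make_perspective(float m[16], float fovy, float aspect,
                      float zNear, float zFar) {
    float f = 1.0f / tanf(fovy / 2.0f);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;   /* copies -z_eye into w for the perspective divide */
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}
```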

Presenting to the screen / SwapBuffer

A render buffer is usually mapped to a system resource such as a window. If you render the image directly into the window's render buffer, the image can be displayed on the screen. Note that if each window had only one buffer, the screen could be refreshed in the middle of drawing, and the window might display an incomplete image. To solve this problem, regular OpenGL programs use at least two buffers: the one displayed on the screen is called the screen (front) buffer, and the one not displayed is called the off-screen (back) buffer. After rendering into a buffer, the image is shown on the screen by swapping the screen buffer with the off-screen buffer. Because the display refreshes row by row, swapping in the middle of a refresh could leave the upper and lower parts of the screen showing two different frames. The swap therefore usually waits for the signal that the display has finished refreshing and happens in the interval between two refreshes. That signal is called the vertical sync signal, and the technique is called vertical synchronization (VSync).
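
A minimal sketch of a double-buffered render loop using GLFW (not part of the original text): each frame is drawn into the back buffer and then swapped to the screen, with glfwSwapInterval(1) requesting vertical synchronization.

```c
#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit()) return -1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "swap demo", NULL, NULL);
    glfwMakeContextCurrent(win);
    glfwSwapInterval(1);                 /* wait for vertical sync before swapping */

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);    /* draw into the off-screen (back) buffer */
        /* ... draw calls ... */
        glfwSwapBuffers(win);            /* swap the back and front buffers        */
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```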