Introduction to the Graphics API

  • OpenGL (Open Graphics Library) is a cross-language, cross-platform graphics programming interface. It abstracts the computer's graphics resources into OpenGL objects, and the operations on those resources into OpenGL commands
  • OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API. It is designed for embedded devices such as mobile phones, PDAs, and game consoles, with many unnecessary or low-performance API functions removed
  • DirectX is made up of many APIs and is not a pure graphics API. Most importantly, DirectX is a multimedia processing framework for Windows; it does not support other platforms, so it is not cross-platform. By function it can be divided into four parts: display, audio, input, and networking
  • Metal is Apple's framework for 3D rendering, a platform technology aimed at game developers that Apple says can improve 3D graphics rendering performance by up to 10 times

OpenGL state machine

A state machine is a theoretical model that describes the states an object (or a behavior) passes through during its life cycle, the transitions between those states, the causes and conditions of those transitions, and the activities performed during them.

Features:

  • It has memory: it remembers its current state
  • It accepts input: based on the input and the current state, it moves to a new state and can produce corresponding output
  • Once it enters a terminal (stop) state, it no longer accepts input and stops working
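
This is visible directly in the OpenGL API: state you set once is remembered by the context until you change it. A tiny illustration in OpenGL ES 2.0 terms (glEnable and glIsEnabled are standard calls):

```c
#include <GLES2/gl2.h>

/* OpenGL as a state machine: state set once is remembered by the context. */
void state_machine_demo(void) {
    glEnable(GL_DEPTH_TEST);                    /* input: modify the state       */
    GLboolean on = glIsEnabled(GL_DEPTH_TEST);  /* memory: query it, now GL_TRUE */
    (void)on;
    /* Every draw call from now on sees depth testing enabled,
       until glDisable(GL_DEPTH_TEST) changes the state again. */
}
```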

OpenGL context

  • Before an application invokes any OpenGL commands, it must first create an OpenGL context. This context is a very large state machine that stores the various OpenGL states and is the basis on which OpenGL commands execute
  • OpenGL functions, in whatever language, are procedural, C-style functions. Each one essentially operates on some state or object within the vast state machine of the OpenGL context, and that object must first be made the current object. By encapsulating OpenGL commands, it is therefore possible to wrap the relevant OpenGL calls into an object-oriented graphics API
  • Since the OpenGL context is a huge state machine, context switching tends to incur significant overhead, yet different rendering modules may need completely separate state. An application can therefore create multiple contexts, use different contexts on different threads, and share textures, buffers, and other resources between the contexts. This approach is much more efficient than repeatedly switching contexts or making large render-state changes; a sketch follows this list
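
As a minimal sketch of shared contexts, assuming an EGL-based setup (one common way to create OpenGL ES contexts; error handling and surface creation omitted):

```c
#include <EGL/egl.h>

/* Minimal sketch: create two OpenGL ES contexts that share resources. */
void create_shared_contexts(void) {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    EGLint cfg_attribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

    /* Context for the main render thread. */
    EGLContext main_ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);

    /* Context for a worker thread (e.g. asynchronous texture uploads).
       Passing main_ctx as the share_context makes textures, buffers and
       other resources visible to both contexts. */
    EGLContext worker_ctx = eglCreateContext(dpy, cfg, main_ctx, ctx_attribs);
    (void)main_ctx;
    (void)worker_ctx;
}
```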

Rendering

The process of converting graphics/image data into an image in 2D space is called rendering.

During rendering, two shaders are required: the vertex shader and the fragment shader. The vertex shader is the first programmable stage and the fragment shader is the last: vertex shaders process vertices, and fragment (pixel) shaders compute pixel colors.

Vertex array & vertex buffer

  • Vertices are the vertex position data of the graphic we are drawing
  • The developer can pass vertex data directly from memory at draw time by setting a function pointer; data stored in client memory this way is called a vertex array
  • Pre-storing the data in video (GPU) memory for better performance is called a vertex buffer; both options are sketched below
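
A minimal sketch of both options in OpenGL ES 2.0 terms; the triangle data and the attrib parameter (a vertex attribute location) are made up for illustration:

```c
#include <GLES2/gl2.h>

/* Hypothetical triangle: x, y per vertex. */
static const GLfloat verts[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f,
};

void draw_with_vertex_array(GLuint attrib) {
    /* Vertex array: data stays in client memory and is read at draw time. */
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glVertexAttribPointer(attrib, 2, GL_FLOAT, GL_FALSE, 0, verts);
    glEnableVertexAttribArray(attrib);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

void draw_with_vertex_buffer(GLuint attrib) {
    /* Vertex buffer: data is uploaded to GPU memory once, then reused. */
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    /* With a buffer bound, the last argument is an offset, not a pointer. */
    glVertexAttribPointer(attrib, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glEnableVertexAttribArray(attrib);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```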

Supplement

  • All images in OpenGL are made up of primitives
  • There are three types of primitives in OpenGL ES: points, lines, and triangles

Shader program

  • The fixed-function rendering pipeline has been completely replaced by the programmable rendering pipeline. OpenGL therefore requires a shader program, compiled and linked from shaders, to be specified before the draw function is actually called. Common shaders are the vertex shader, the fragment shader (called a pixel shader in DirectX; the two names mean the same thing), the geometry shader, and the tessellation shader. Unfortunately, up to OpenGL ES 3.0 only the two most basic shaders, vertex and fragment, are supported
  • OpenGL handles shaders much like a compiler handles source code. Through compiling, linking, and related steps, a shader program (glProgram) is generated that contains the logic of both the vertex shader and the fragment shader. When OpenGL draws, the vertex shader first processes the incoming vertex data; primitive assembly then converts the vertices into primitives; rasterization then turns the vector primitives into rasterized data; finally, the rasterized data is passed to the fragment shader, which runs on each pixel of the rasterized data and determines its color. A sketch of the compile-and-link flow follows this list
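
A minimal sketch of that compile-and-link flow, assuming vs_src and fs_src hold GLSL source strings (examples of both appear in the shader sections below); real code should also check compile/link status with glGetShaderiv/glGetProgramiv:

```c
#include <GLES2/gl2.h>

static GLuint compile_shader(GLenum type, const char *src) {
    GLuint shader = glCreateShader(type);   /* vertex or fragment shader    */
    glShaderSource(shader, 1, &src, NULL);  /* attach the GLSL source text  */
    glCompileShader(shader);                /* compile, like a compiler     */
    return shader;
}

GLuint build_program(const char *vs_src, const char *fs_src) {
    GLuint vs = compile_shader(GL_VERTEX_SHADER,   vs_src);
    GLuint fs = compile_shader(GL_FRAGMENT_SHADER, fs_src);

    /* Link the two shaders into one glProgram containing the full
       vertex + fragment logic used at draw time. */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    /* The shader objects can be deleted once they are linked in. */
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;    /* activate later with glUseProgram(prog) */
}
```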

Pipeline

Rendering graphics in OpenGL passes through a series of stages, node by node, so the rendering work can be thought of as tasks on an assembly line. It is called a pipeline because the graphics card processes data in a fixed order.

Fixed pipeline / stock shaders

In earlier versions of OpenGL, developers simply passed in parameters and let a fixed, built-in shader program complete the rendering, without needing to know the underlying implementation. Such a pipeline is called a fixed pipeline

Because OpenGL's usage scenarios are so varied that fixed pipelines and stock shaders cannot satisfy every special requirement, the programmable pipeline was developed

Vertex shader

  • Generally used to apply per-vertex transforms (rotation/translation/projection, etc.) to each vertex of the graphic
  • Vertex shaders are the programs OpenGL uses to compute vertex attributes. A vertex shader is a per-vertex program: it is executed once for each vertex, in parallel, and cannot access the data of other vertices
  • Typical per-vertex work includes vertex coordinate transformation, per-vertex lighting, and so on. This is where vertex coordinates are converted from the model's own coordinate system into the normalized coordinate system; a sketch follows this list
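
For illustration, a minimal OpenGL ES 2.0 vertex shader that applies the coordinate transformation described above; the attribute/uniform names (a_position, u_mvp, etc.) are made up:

```c
/* GLSL vertex shader source, embedded as a C string.
   Runs once per vertex; u_mvp transforms each vertex from model
   coordinates into clip space (then normalized device coordinates). */
static const char *vs_src =
    "attribute vec4 a_position;            \n"
    "attribute vec2 a_texcoord;            \n"
    "uniform mat4 u_mvp;                   \n"
    "varying vec2 v_texcoord;              \n"
    "void main() {                         \n"
    "    v_texcoord = a_texcoord;          \n"
    "    gl_Position = u_mvp * a_position; \n"
    "}                                     \n";
```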

Fragment shader

  • Generally used to perform the color calculation and filling for each pixel in the graphic
  • Fragment shaders are the programs OpenGL uses to compute the color of fragments (pixels). A fragment shader is a per-pixel program: it is executed once per pixel, also in parallel; a sketch follows this list
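
A matching minimal fragment shader, executed once per fragment, which samples a texture to decide the pixel's color (names again made up):

```c
/* GLSL fragment shader source, embedded as a C string.
   Runs once per rasterized fragment and outputs its color. */
static const char *fs_src =
    "precision mediump float;                             \n"
    "uniform sampler2D u_texture;                         \n"
    "varying vec2 v_texcoord;                             \n"
    "void main() {                                        \n"
    "    gl_FragColor = texture2D(u_texture, v_texcoord); \n"
    "}                                                    \n";
```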

GLSL (OpenGL Shading Language)

GLSL, the OpenGL Shading Language, is the language used to program shaders in OpenGL. It runs on the graphics card's GPU (Graphics Processing Unit) and can replace fixed parts of the pipeline, making the corresponding stages of the rendering pipeline programmable

GLSL shader code is divided into two parts: the vertex shader and the fragment shader

Rasterization

  • Rasterization is the process of converting vertex data into fragments. It turns a graphic into an image composed of a grid, where each grid element corresponds to a pixel in the frame buffer
  • Rasterization is in fact the process of transforming geometric primitives into a two-dimensional image, and it involves two parts of work. First: determine which integer grid cells in window coordinates are covered by the primitive. Second: assign a color value and a depth value to each covered cell. The rasterization process produces fragments
  • The mathematical description of an object and its associated color information are converted into pixels at the corresponding positions on the screen, together with the colors used to fill those pixels. This process is called rasterization; it converts an analog signal into a discrete signal

Texture

A texture can be understood as a picture. To make a scene look realistic, you decode the picture and fill it into the rendered image. In OpenGL, however, we are more used to calling it a texture rather than an image; a texture-creation sketch follows.
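
A minimal sketch of turning decoded image bytes into an OpenGL texture; the RGBA pixel buffer is assumed to come from your image decoder:

```c
#include <GLES2/gl2.h>

/* Upload decoded RGBA pixels (e.g. from an image decoder) as a texture. */
GLuint create_texture(const void *rgba_pixels, int width, int height) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Filtering/wrapping settings live in the context's state machine. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Copy the pixel data into video memory. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
    return tex;
}
```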

Blending

  • After the testing stage, if a pixel has still not been culled, its color is blended with the color already attached to the frame buffer. The blending algorithm can be specified through OpenGL functions, as in the sketch below
  • If a more complex blending algorithm is needed, it can be implemented in the fragment shader, though of course performance will be worse than the built-in blending
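
A minimal sketch of specifying the blending algorithm; these factors give the classic source-over alpha blending:

```c
#include <GLES2/gl2.h>

/* Enable blending and choose how the fragment color is mixed with the
   color already in the frame buffer:
   result = src.rgb * src.a + dst.rgb * (1 - src.a) */
void enable_alpha_blending(void) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
```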

Transformation matrix

To translate, scale, or rotate a graphic, you use a transformation matrix; a sketch follows.
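
As a minimal sketch, a 4x4 translation matrix built in C in the column-major layout OpenGL expects; scale and rotation matrices are built the same way with different entries:

```c
/* Build a column-major 4x4 translation matrix (OpenGL's layout).
   Multiplying a vertex by this matrix moves it by (tx, ty, tz). */
void make_translation(float m[16], float tx, float ty, float tz) {
    /* Start from the identity matrix (1s on the diagonal). */
    for (int i = 0; i < 16; i++) m[i] = (i % 5 == 0) ? 1.0f : 0.0f;
    /* The translation goes in the last column (elements 12..14). */
    m[12] = tx;
    m[13] = ty;
    m[14] = tz;
}
```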

Projection matrix

Used to convert 3D coordinates into 2D screen coordinates, where the actual lines will be drawn; a sketch follows.
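
A minimal sketch of a standard perspective projection matrix (gluPerspective-style, column-major), assuming a vertical field of view, aspect ratio, and near/far planes:

```c
#include <math.h>

/* Column-major perspective projection matrix.
   Maps eye-space coordinates into clip space for the 3D -> 2D step. */
void make_perspective(float m[16], float fovy_rad, float aspect,
                      float near_z, float far_z) {
    float f = 1.0f / tanf(fovy_rad / 2.0f);
    for (int i = 0; i < 16; i++) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (far_z + near_z) / (near_z - far_z);
    m[11] = -1.0f;   /* puts -z_eye into w for the perspective divide */
    m[14] = (2.0f * far_z * near_z) / (near_z - far_z);
}
```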

Mapping

A mapping is a correspondence between two things; texture mapping, for example, associates texture coordinates with positions on a surface

Rendering to the screen / swapping buffers

  • A render buffer is typically mapped to a system resource such as a window. If the image is rendered directly into the window's render buffer, it can be displayed on the screen
  • However, if each window has only one buffer and the screen refreshes while drawing is still in progress, the window may display an incomplete image
  • To solve this problem, regular OpenGL programs have at least two buffers. The one displayed on the screen is called the screen buffer, and the one not displayed is called the off-screen buffer. After rendering into a buffer is complete, the image is shown by swapping the screen buffer with the off-screen buffer
  • Because displays refresh row by row, swapping at an arbitrary moment could leave the upper and lower parts of the screen showing images from two different frames. The swap therefore generally waits for a signal that the display has finished refreshing and takes place in the interval between two refreshes. This signal is called the vertical sync signal, and the technique is called vertical synchronization
  • With double buffering and vsync, the frame rate cannot reach the maximum the hardware allows, because rendering the next frame always waits for the buffer swap. To solve this, triple buffering was introduced: while waiting for vsync, two off-screen buffers are rendered to alternately; when vsync occurs, the screen buffer is swapped with the most recently rendered off-screen buffer, making full use of the hardware's performance. A sketch of the swap call follows
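
As a sketch in EGL terms: eglSwapInterval requests that swaps wait for vertical sync, and eglSwapBuffers performs the screen/off-screen swap described above.

```c
#include <EGL/egl.h>

/* Minimal render-loop sketch: draw into the back (off-screen) buffer,
   then swap it with the screen buffer at the vertical sync interval. */
void render_loop(EGLDisplay dpy, EGLSurface surface) {
    eglSwapInterval(dpy, 1);          /* 1 = wait for vsync before swapping  */
    for (;;) {
        /* ... issue OpenGL draw calls into the back buffer here ... */
        eglSwapBuffers(dpy, surface); /* present: swap back and front buffers */
    }
}
```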