Basic concepts of OpenGL

Introduction to the Graphics API

  • OpenGL (Open Graphics Library) is a cross-language, cross-platform graphics programming API. It abstracts the computer's graphics resources as OpenGL objects, and operations on those resources as OpenGL commands.

  • OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API, designed for embedded devices such as mobile phones, PDAs, and game consoles. It removes many unnecessary and low-performance API functions.

  • DirectX is made up of many APIs and is not a pure graphics API; it is a multimedia processing framework for Windows. It does not support platforms other than Windows, so it is not cross-platform. By function it can be divided into four parts: display, audio, input, and networking.

  • Metal: Apple's framework for 3D rendering, launched as a new platform for game developers; Apple claims it can improve rendering performance by up to 10x.

  • Note: OpenGL ES is just a subset of OpenGL, where ES stands for Embedded Systems, i.e. it targets mobile devices. Apple's underlying rendering is currently implemented with Metal.

The role of OpenGL in iOS

What can a graphics API do

  • Game development: rendering game characters and scenes
  • Audio and video development: rendering decoded video data
  • Map development: rendering map data
  • Animation: drawing the animation frames
  • Video processing: adding filter effects

OpenGL, OpenGL ES, and Metal all essentially use the GPU to render graphics and images efficiently, and these graphics APIs are the only way we iOS developers can get close to the GPU.

OpenGL

The state machine

A state machine, short for finite state automaton, is a mathematical model abstracted from the behavior of real-world things.

If the “automatic door” in the picture below is a state machine, it remembers whether it is open or closed and knows what to do next in each state. When the door is in the closed state and you input an “open” signal, it switches to the open state.

State machine features:

  • It remembers its current state
  • It accepts input and, based on the current state and the input, switches state and may produce output
  • Once the state machine stops, it no longer accepts input and stops working

Back to the OpenGL state machine:

  • OpenGL records its own state: the color currently in use, whether blending is enabled, and so on
  • OpenGL accepts input: each call into the OpenGL API can be viewed as input
  • OpenGL can enter a stop state and no longer accept input

Rendering

The operation of converting graphics/image data into a 2D image on screen is called rendering.

VertexArray and VertexBuffer

  • Drawing usually means first sketching the skeleton of the image and then filling the skeleton with color, and OpenGL works the same way. Vertex data is the skeleton of the image to be drawn. Unlike the real world, OpenGL images are made up of primitives (points, lines, and triangles). Where does this vertex data live? The developer can optionally pass it in from client memory via a function pointer before the draw call is issued; data stored in memory this way is called a vertex array. For higher performance, a block of video memory can be allocated in advance and the vertex data copied into it; this is called a vertex buffer.
  • Vertices are the position data of each point of the graph we draw. This data can be stored in an array in memory or cached directly on the GPU.

Shader

  • Before OpenGL actually issues the draw call, you also need to specify a shader program compiled from shaders. Common shader types are the vertex shader (VertexShader), the fragment shader (FragmentShader, called PixelShader in DirectX), the geometry shader (GeometryShader), and the tessellation shader. "Fragment shader" and "pixel shader" are simply different names for the same stage in OpenGL and DirectX. Through OpenGL ES 3.0, only the two basic stages, vertex shaders and fragment shaders, are supported.
  • OpenGL handles shaders by compiling and linking them into a shader program (glProgram), which contains the operation logic of both the vertex shader and the fragment shader. When OpenGL draws, the vertex data is fed in first and processed by the vertex shader, which transforms each vertex. The transformed vertices are then assembled into primitives (primitive assembly) and rasterized, converting the vector primitives into rasterized data (fragments). Finally, the rasterized data is passed to the fragment shader, which runs on each fragment to determine its final color.

VertexShader

  • Generally used for per-vertex transformations of the graph (rotation/translation/projection, etc.)
  • Vertex shaders are the programs OpenGL uses to compute vertex attributes. A vertex shader is a per-vertex program: it runs once for each vertex, the invocations execute in parallel, and a vertex shader cannot access the data of other vertices.
  • Typical per-vertex attributes to compute include the vertex coordinate transformation and per-vertex lighting. This is where vertex coordinates are converted from the model's own coordinate system to normalized device coordinates.
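A minimal OpenGL ES 2.0 vertex shader illustrating the per-vertex transform described above; the attribute and uniform names (`a_position`, `u_mvp`) are illustrative choices, not required names:

```glsl
// Runs once per vertex, in parallel across vertices.
attribute vec4 a_position;  // the vertex position fed in by the app
uniform mat4 u_mvp;         // model-view-projection matrix

void main() {
    // Transform the vertex from model space into clip space.
    gl_Position = u_mvp * a_position;
}
```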

FragmentShader

  • Generally used to compute and fill in the color of each pixel in the graph
  • Fragment shaders are the programs OpenGL uses to compute the color of fragments (pixels). A fragment shader is a per-pixel program: it runs once per fragment, also in parallel
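The matching minimal OpenGL ES 2.0 fragment shader; the uniform name `u_color` is an illustrative choice:

```glsl
// Runs once per fragment (pixel), in parallel across fragments.
precision mediump float;
uniform vec4 u_color;  // a flat color supplied by the app

void main() {
    // Assign the final color of this fragment.
    gl_FragColor = u_color;
}
```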

GLSL (OpenGL Shading Language)

The OpenGL Shading Language is the language used for shader programming in OpenGL. Shaders are short custom programs, written by the developer, that execute on the graphics card's GPU. They replace fixed parts of the render pipeline, making the corresponding pipeline stages programmable, for example the view transform and the projection transform. GLSL shader code is divided into two parts: the vertex shader and the fragment shader.

Rasterization

  • It is the process of turning vertex data into fragments (what we commonly call pixels). It has the effect of turning a graph into a grid of cells, where each element corresponds to a pixel in the frame buffer.
  • Rasterization is actually a process of transforming geometric primitives into a two-dimensional image. The process consists of two parts. First: determine which integer grid cells in window coordinates are occupied by the primitive. Second: assign a color value and a depth value to each of those cells. The rasterization process produces fragments.
  • It is a process of converting an analog signal into a discrete signal

Texture

In OpenGL, image data is represented as textures (for example, TGA texture files); displaying a PNG image on mobile also goes through a corresponding texture.

Transformation matrix

When a graphic image needs to be translated, scaled, rotated, or otherwise transformed, a transformation matrix is used.

Projection matrix

Converts 3D coordinates to 2D screen coordinates.

Rendering to the screen / SwapBuffer

  • A render buffer is typically mapped to a system resource such as a window. If the image is rendered directly into the window's render buffer, it can be displayed on the screen.
  • However, if each window had only one render buffer, the screen might refresh in the middle of drawing, and the window could display an incomplete image
  • To solve this problem, regular OpenGL uses two buffers: an on-screen buffer and an off-screen buffer. A frame is rendered into the off-screen buffer, and the image is then displayed on screen by swapping the on-screen buffer with the off-screen buffer
  • Because the display refreshes line by line, swapping the buffers mid-refresh would show content from two different frames on screen. The swap therefore usually waits for the display's refresh-complete signal and happens in the interval between refreshes. This signal is the vertical sync (VSync) signal, and the technique is called vsync
  • With double buffering and VSync, the frame rate cannot reach what the hardware could deliver, because the next frame cannot be rendered until the buffers have been swapped. To solve this problem, triple buffering was introduced: while waiting for VSync, rendering alternates between two off-screen buffers; when VSync occurs, the on-screen buffer is swapped with the most recently completed off-screen buffer, making full use of the hardware