Preface

Once you get into OpenGL, it is natural to wonder how it actually manages to render images, and to use it well you have to understand how it works. This article does not go into the underlying principles: explaining how the GPU renders at the hardware level is hard to do clearly, there are plenty of posts about it online but few that truly understand it, and the material that really digs into the fundamentals mostly comes out of universities and research institutes and is generally written in English. Which is a reminder that learning English really matters. Learning OpenGL will not go out of date either; relatively few programmers use it, and even fewer use it well (you may disagree, but OpenGL is still very useful). So, back to the topic of this article: before diving into OpenGL ES, we need to understand its basic concepts. The purpose of this article is to cover that conceptual groundwork, and also to keep me from forgetting it.

For those just getting started, learnopengl-CN is a good place to begin.

OpenGL ES

OpenGL stands for Open Graphics Library. It is a cross-platform, cross-language programming interface standard for driving 2D and 3D graphics hardware. Since it is only a software interface, the actual underlying implementation is up to the hardware vendor.

OpenGL ES stands for OpenGL for Embedded Systems. It is a subset of the OpenGL 3D graphics API, designed for embedded devices such as mobile phones, PDAs, and game consoles; it removes many unnecessary and low-performance APIs.

Common Basic Concepts

Vertices

An object's vertices. A 2D image (a quad), for example, has four vertices, while a 3D cube has eight.

Primitives

The smallest unit of drawing; in OpenGL ES a primitive can be a point, a line, or a triangle.

Fragment

A primitive is broken into fragments when it is rasterized, and an image is the collection of all the computed fragments. As shown in the picture below, a square is made up of two triangles.
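For example, such a square can be described by the six vertices of its two triangles. A minimal sketch; the coordinates are illustrative normalized device coordinates, not values from the article:

#include <GLES2/gl2.h>

// Sketch: a square built from two triangles, as described above.
GLfloat vertices[] = {
    // first triangle
    -0.5f,  0.5f, 0.0f,   // top left
     0.5f,  0.5f, 0.0f,   // top right
    -0.5f, -0.5f, 0.0f,   // bottom left
    // second triangle
     0.5f,  0.5f, 0.0f,   // top right
     0.5f, -0.5f, 0.0f,   // bottom right
    -0.5f, -0.5f, 0.0f,   // bottom left
};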

Texture

A texture is a 2D image (1D and 3D textures also exist) that can be used to add detail to an object. Different pictures can be applied to an object's surface, as shown below:

Shader

Shaders are small programs that run on the GPU and handle the vertex- and fragment-related calculations. Shader variables:

  • Attribute variables: per-vertex data of a 3D object. The position, color, normal vector, and other per-vertex information are usually passed into the vertex shader as attribute variables.
  • Uniform variables: values that are the same for every vertex of a 3D object built from one set of vertices, typically the current light position in the scene, the current camera position, the projection matrix, and so on.
  • Varying variables: data computed in the vertex shader and passed on to the fragment shader. The vertex shader uses varying variables to hand over any values that need to be interpolated across fragments, such as colors, normal vectors, and texture coordinates.
  • gl_Position: a built-in OpenGL variable that stores the vertex position to be rendered after the coordinate transformations.
  • gl_FragColor: a built-in OpenGL variable holding the final color of the fragment; it is assigned at the end of the fragment shader.
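A minimal OpenGL ES 2.0 shader pair that uses each of these variable types might look like the sketch below; the names aPosition, aColor, uMvpMatrix, and vColor are illustrative, not from the article:

// Minimal GLSL ES shader sources, kept here as C++ raw string literals.
const char* kVertexShader = R"(
attribute vec4 aPosition;   // per-vertex position (attribute variable)
attribute vec4 aColor;      // per-vertex color (attribute variable)
uniform mat4 uMvpMatrix;    // same value for every vertex (uniform variable)
varying vec4 vColor;        // interpolated and handed to the fragment shader
void main() {
    vColor = aColor;
    gl_Position = uMvpMatrix * aPosition;  // built-in output: transformed vertex position
}
)";

const char* kFragmentShader = R"(
precision mediump float;
varying vec4 vColor;        // interpolated value coming from the vertex shader
void main() {
    gl_FragColor = vColor;  // built-in output: final color of the fragment
}
)";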

Vertex shader

Overview: the vertex shader provides a programmable way to operate on vertices. Input variables: attribute variables, uniform variables, and samplers. Output variables: built-in variables such as gl_Position.

  • Attribute: per-vertex data supplied in vertex arrays.
  • Uniform: constant data used by the vertex shader, such as the MVP matrix.
  • Sampler: a special type of uniform variable that represents the texture used by the vertex shader.
  • Vertex shader program: the source code or executable describing the operations performed on the vertices.

Fragment shader

Overview: the fragment shader provides a programmable way to operate on fragments. Input variables: varying variables, uniform variables, and samplers. Output variables: built-in variables such as gl_FragColor.

  • Varying: variables passed into the fragment shader from the vertex shader. The fragment shader has no attribute variables; the values output by the vertex shader are interpolated across the primitive and delivered per fragment.
  • Uniform: constant data used by the fragment shader, such as a rotation matrix.
  • Sampler: a special type of uniform variable that represents the texture used by the fragment shader; it is used to look up the corresponding texture in memory.
  • Fragment shader program: the source code or executable describing the operations performed on the fragments.
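A minimal sketch of feeding a texture to such a sampler, assuming an OpenGL ES 2.0 context; the uniform name uTexture and the program and textureId handles are illustrative:

#include <GLES2/gl2.h>

// Sketch: bind a texture to texture unit 0 and point the sampler uniform at that unit.
glActiveTexture(GL_TEXTURE0);              // select texture unit 0
glBindTexture(GL_TEXTURE_2D, textureId);   // bind the texture object to the active unit
GLint samplerLocation = glGetUniformLocation(program, "uTexture");
glUniform1i(samplerLocation, 0);           // the sampler now reads from unit 0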

Coordinate systems

OpenGL can draw 3D objects, so how does a 3D object go, step by step, from the local coordinates that describe it to screen coordinates? As the picture below shows, a 3D object is transformed from local space to world space, then to view space, then to clip space, and finally to screen space before it can be displayed on our phones.

MVP matrix

This MVP is not the MVP you might be thinking of; here it stands for Model View Projection. From the figure above we can see that moving between the different spaces requires matrix operations. From linear algebra, converting from one coordinate system to another is just a matter of multiplying by the corresponding transformation matrix. The MVP matrix is simply the product of the Model matrix, the View matrix, and the Projection matrix:

Projection Matrix * View Matrix * Model Matrix

For the matrix math, the OpenGL Mathematics (GLM) library can be used in NDK programming.
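Assuming Model, View, and Projection have been built with GLM as described in the sections below, a minimal sketch of combining them and handing the result to a shader might look like this; the uniform name uMvpMatrix and the program handle are illustrative:

#include <GLES2/gl2.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Sketch: combine the three matrices (note the right-to-left order) and upload the result.
glm::mat4 mvp = Projection * View * Model;
GLint mvpLocation = glGetUniformLocation(program, "uMvpMatrix");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));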

Model matrix

The model matrix combines three kinds of transformation:

  • Translation transformation: 3D objects are translated along a certain direction, and the size of the object remains unchanged;

GLM interface:

Model = glm::translate(Model, glm::vec3(x, y, z));
  • Rotation transformation: the object rotates around an axis without changing its size.

GLM interface:

Model = glm::rotate(Model, glm::radians(pitch), glm::vec3(pitchSymbolFactor, 0.0f, 0.0f));
Model = glm::rotate(Model, glm::radians(yaw), glm::vec3(0.0f, yawSymbolFactor, 0.0f));
Model = glm::rotate(Model, glm::radians(roll), glm::vec3(0.0f, 0.0f, rollSymbolFactor));
  • Scale transformation: An object is scaled in a certain direction and changes in size.

Model = glm::scale(Model, glm::vec3(xFactor, yFactor, zFactor));
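Putting the three transforms together, a model matrix is often built by translating, rotating, and then scaling an identity matrix. A minimal sketch, where the angle and factor values are illustrative:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: composing a model matrix from the three transforms above (illustrative values).
glm::mat4 Model = glm::mat4(1.0f);                           // start from the identity
Model = glm::translate(Model, glm::vec3(0.5f, 0.0f, 0.0f));  // move along +X
Model = glm::rotate(Model, glm::radians(45.0f),
                    glm::vec3(0.0f, 1.0f, 0.0f));            // rotate 45 degrees around Y
Model = glm::scale(Model, glm::vec3(2.0f, 2.0f, 2.0f));      // scale uniformly by 2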

View matrix

In OpenGL ES, we will use the camera instead of ourselves to observe objects, and the space under the camera’s perspective is the view space, so the view matrix transformation has a lot to do with the position of the camera. GLM interface:

View = glm::lookAt(glm::vec3(cx, cy, cz), glm::vec3(tx, ty, tz), glm::vec3(upx, upy, upz));
lookAt returns the view matrix: cx, cy, cz is the camera position, tx, ty, tz is the point the camera lens is aimed at, and upx, upy, upz is the camera's up direction.
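For example, a camera sitting at (0, 0, 3), aimed at the origin, with +Y as the up direction; the values are purely illustrative:

#include <glm/gtc/matrix_transform.hpp>

// Illustrative values: camera at z = 3 looking toward the origin, +Y is up.
glm::mat4 View = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),   // camera position
                             glm::vec3(0.0f, 0.0f, 0.0f),   // point the camera looks at
                             glm::vec3(0.0f, 1.0f, 0.0f));  // up direction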

Projection matrix

There are two commonly used projection methods in OpenGL ES: orthographic projection and perspective projection.

  • Orthographic projection: as the figure shows, orthographic projection does not shrink objects that are farther away.

GLM interface:

Projection = glm::ortho(T left, T right, T bottom, T top, T zNear, T zFar)

  • Perspective projection: as the figure shows, perspective projection shrinks objects that are farther away, producing the familiar "nearer looks larger, farther looks smaller" effect.

GLM interface:

Projection = glm::frustum(T left, T right, T bottom, T top, T nearVal, T farVal)
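Instead of specifying the frustum planes directly, GLM also offers glm::perspective, which builds the same kind of matrix from a vertical field of view and an aspect ratio. A sketch with an illustrative 45° field of view; the aspect variable (width divided by height) is assumed:

#include <glm/gtc/matrix_transform.hpp>

// Sketch: perspective projection from a field of view; the 45-degree FOV is illustrative.
glm::mat4 Projection = glm::perspective(glm::radians(45.0f),  // vertical field of view
                                        aspect,               // width / height
                                        0.1f,                 // near plane
                                        100.0f);              // far plane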

Advanced concepts

Frame Buffer Object

FBO for short. An FBO itself has no image storage; when it is used, a framebuffer-attachable image (a texture or a renderbuffer) must be attached to it. You can think of it as a drawing board.
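As a sketch of the idea, creating an FBO in OpenGL ES 2.0 and attaching a texture as its color buffer might look like this; the width and height variables are assumed:

#include <GLES2/gl2.h>

// Sketch: an FBO whose color attachment is a texture we can later sample from.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);          // allocate storage, no initial data
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);         // attach the texture to the FBO
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // the attachment is missing or incompatible; handle the error here
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);                      // return to the default framebuffer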

Vertex Buffer Object

VBO for short. A VBO allocates a block of GPU memory for storing vertex attribute data such as vertex coordinates, normal vectors, and colors. The advantage is that during rendering the vertex attributes can be read directly from video memory rather than being passed in from the CPU, which is more efficient.
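A minimal sketch of filling a VBO with the two-triangle square from earlier and wiring it to attribute 0; the attribute index is illustrative:

#include <GLES2/gl2.h>

// Sketch: copy the vertex data into GPU memory once, then read it from there at draw time.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);      // upload once
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);   // 3 floats per vertex
glEnableVertexAttribArray(0);                              // attribute 0 now reads from the VBO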

Vertex Array Object

VAO for short. A VAO records the vertex attribute configuration and references the VBOs that hold the attributes; it does not itself store any vertex data.
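VAOs are available from OpenGL ES 3.0 onward. A minimal sketch that records the VBO attribute setup above so it can be restored with a single bind:

#include <GLES3/gl3.h>

// Sketch: the VAO remembers which VBO each enabled attribute reads from.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                          // start recording attribute state
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);                            // later, glBindVertexArray(vao) restores all of this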

Element Buffer Object

EBO for short. An EBO allocates a block of GPU memory for storing the vertex indices used for indexed drawing.
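A minimal sketch of drawing the square with an index buffer so the shared corners are stored only once; the coordinates and indices are illustrative:

#include <GLES2/gl2.h>

// Sketch: four unique corners plus six indices describe the same two-triangle square.
GLfloat corners[] = {
    -0.5f,  0.5f, 0.0f,   // 0: top left
     0.5f,  0.5f, 0.0f,   // 1: top right
     0.5f, -0.5f, 0.0f,   // 2: bottom right
    -0.5f, -0.5f, 0.0f,   // 3: bottom left
};
GLushort indices[] = { 0, 1, 2,    // first triangle
                       2, 3, 0 };  // second triangle

GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);   // indices come from the bound EBO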