An overview

This is a new series on learning OpenGL ES. It is really my study notes on "OpenGL ES Application Development Practice Guide, Android volume" (OpenGL ES 2 for Android). If you're interested, you can read the book directly; here I'll just record my own understanding as notes, in case I forget later.

The first nine chapters of the book will be covered in turn:

Android OpenGl Es learning (1): Create an OpenGl Es program

Android OpenGl Es learning (2): Define vertices and shaders

Android OpenGl Es learning (3): Build shaders

Android OpenGl Es learning (4): Add color

Android OpenGl Es learning (5): Adjust the aspect ratio

Android OpenGl Es learning (6): Enter 3D

Android OpenGl Es learning (7): Use textures

Android OpenGl Es learning (8): Build simple objects

Android OpenGl Es learning (9): Add touch feedback

The end result is a simple air hockey game, something like this:

Three dimensions

When we look at a three-dimensional painting, the artist is actually drawing on a two-dimensional plane while depicting three-dimensional objects. Let's look at an example.

This is a railway track. When we stand on the track and look into the distance, the rails appear to converge as they get farther away, until they vanish at a single point on the horizon.

The farther away the railway ties are from us, the smaller they look. If we measure the apparent size of each tie, we find that it decreases in proportion to its distance from our eyes, as shown in the figure below:

Transformation from shader to screen coordinates

In the last section we talked about multiplying virtual coordinates by the projection matrix to get the coordinates used by the shader. This time we'll look at how shader coordinates become screen coordinates.

In the last section we discussed normalized device coordinates and learned that for a vertex to appear on the screen, its x, y, and z components all need to be in the range [-1, 1].

Clip space

When a vertex shader writes a value to gl_Position, OpenGL expects that position to be in clip space. The logic behind clip space is simple: for a given position, its x, y, and z components all need to be between -w and w. If the w component is 1, then x, y, and z all need to be between -1 and 1, and anything outside that range is invisible on the screen.

The perspective divide

Before a vertex position becomes a normalized device coordinate, OpenGL actually performs an extra step called the perspective divide. After the perspective divide, the position is in normalized device coordinates: regardless of the size and shape of the rendering area, the x, y, and z components of each visible coordinate lie in the range [-1, 1].

To create the illusion of 3D on the screen, OpenGL divides each of the x, y, and z components by the w component. When w is used to represent distance, this makes more distant objects render closer to the center of the rendering area. That center acts as a vanishing point; this is how OpenGL tricks our eyes into seeing a 3D scene.

Suppose an object has two vertices at the same location in three dimensions, with the same x, y, and z, but different w components: (1,1,1,1) and (1,1,1,2). OpenGL performs the perspective divide before converting these to normalized device coordinates, dividing the first three components by w, so the two coordinates become (1,1,1) and (0.5,0.5,0.5). The coordinate with the larger w component is moved closer to (0,0,0), the center of the normalized device coordinate rendering area.
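The arithmetic of that example can be sketched in a few lines of plain Java (a hypothetical helper for illustration, not part of the Android API):

```java
public class PerspectiveDivide {
    // Divide x, y, z by w to get normalized device coordinates
    static float[] toNdc(float x, float y, float z, float w) {
        return new float[]{x / w, y / w, z / w};
    }

    public static void main(String[] args) {
        float[] a = toNdc(1f, 1f, 1f, 1f); // stays (1, 1, 1)
        float[] b = toNdc(1f, 1f, 1f, 2f); // becomes (0.5, 0.5, 0.5)
        System.out.println(a[0] + " " + b[0]);
    }
}
```

The vertex with the larger w lands halfway toward the center, exactly as described above.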

Homogeneous coordinates

After the perspective divide, several different points can map to the same point; for example, (1,1,1,1) and (2,2,2,2) both map to (1,1,1). Coordinates that map to the same point this way are called homogeneous coordinates.

The advantage of dividing by w

Adding w as a fourth component lets us decouple the effect of the projection from the actual z coordinate, so we can switch between orthographic and perspective projections while keeping the z component available for the depth buffer.

The viewport transformation

Before we see the result, OpenGL needs to map the x and y components of the normalized device coordinates to an area of the screen that the system has reserved for display. This area is called the viewport, and the mapped coordinates are called window coordinates. Beyond telling OpenGL how to do this mapping, we don't need to care much about these coordinates; in code we set them with GLES20.glViewport(0, 0, width, height). Once OpenGL knows the mapping, the range (-1, -1, -1) to (1, 1, 1) is mapped to the window reserved for display, and normalized device coordinates outside this range are clipped.
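As a sketch of what glViewport sets up for x and y (ignoring the depth-range part), each normalized component is linearly remapped from [-1, 1] into the viewport rectangle; the helper below is illustrative, not an Android API:

```java
public class ViewportMap {
    // Map one NDC component to window coordinates for a viewport
    // starting at x0 with the given width
    static float ndcToWindowX(float ndcX, int x0, int width) {
        return x0 + (ndcX + 1f) / 2f * width;
    }

    public static void main(String[] args) {
        // For glViewport(0, 0, 1080, 1920): x = -1 maps to 0, x = 1 maps to 1080
        System.out.println(ndcToWindowX(-1f, 0, 1080)); // 0.0
        System.out.println(ndcToWindowX(1f, 0, 1080));  // 1080.0
    }
}
```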

Adding a w component to create a 3D image

Let's actually use the w component: we add z and w components to the previous vertex data. The updated vertex data is as follows:

	// Update this variable
    private int POSITION_COMPONENT_COUNT = 4;

    float[] tableVertices = {
            // Vertices: x, y, z, w, then vertex color r, g, b
            0f, 0f, 0f, 1.5f, 1f, 1f, 1f,

            -0.5f, -0.8f, 0f, 1f, 0.7f, 0.7f, 0.7f,
            0.5f, -0.8f, 0f, 1f, 0.7f, 0.7f, 0.7f,
            0.5f, 0.8f, 0f, 2f, 0.7f, 0.7f, 0.7f,
            -0.5f, 0.8f, 0f, 2f, 0.7f, 0.7f, 0.7f,
            -0.5f, -0.8f, 0f, 1f, 0.7f, 0.7f, 0.7f,

            // Line
            -0.5f, 0f, 0f, 1.5f, 1f, 0f, 0f,
            0.5f, 0f, 0f, 1.5f, 0f, 1f, 0f,

            // Points
            0f, -0.4f, 0f, 1.25f, 1f, 0f, 0f,
            0f, 0.4f, 0f, 1.75f, 0f, 0f, 1f
    };

We set w to 1 near the bottom of the screen and 2 near the top, with values between 1 and 2 in the middle. The effect should be that the bottom of the table looks bigger than the top, as if we are viewing it from near to far. We set z to 0 everywhere because we don't need z to achieve this effect.

OpenGL will automatically do the perspective divide using the w values we specified, and our current orthographic projection just copies the w value over unchanged. Let's run the project and see what happens.

We just hard-coded the w component to make things look three-dimensional, but we shouldn't write w into the vertex data by hand. Instead we'll use a matrix to generate these values dynamically. Let's restore the code above to its original state, and then learn how to generate the w component with a matrix.

Using perspective projection

In the previous chapter we fixed the screen's aspect ratio with an orthographic projection matrix, which converted the display area's width and height into normalized device coordinates. The second image shows an orthographic projection, and the first a perspective projection.

The view frustum

The first figure above is called a view frustum. This viewing space is created by a perspective projection matrix together with the perspective divide. A frustum is simply a cube whose far end is larger than its near end, turning it into a truncated pyramid; the greater the difference between the two ends, the wider the field of view and the more we can see.

A frustum has a focal point. It can be found by following the lines that extend from the larger end of the frustum through the smaller end until they converge. When you look at a scene with a perspective projection, it is as if your head were placed at that focal point. The distance between the focal point and the small end of the frustum is called the focal length; it affects the ratio between the small and large ends of the frustum and the corresponding field of view.

Define perspective projection

To create three dimensions, perspective projection works together with the perspective divide.

As an object moves toward the center of the screen and gets farther and farther from us, it becomes smaller and smaller. The projection matrix's task is to produce the correct value of w so that, when OpenGL does the perspective divide, distant objects appear smaller than near objects. One way to achieve this is to use the z component, treating it as the distance from the focal point, and map that distance to w: the larger the distance, the larger w, and the smaller the object.

With perspective projection, the farther away an object is, the smaller it appears on the screen. This effect cannot be achieved by the projection matrix alone; it requires the w component and the perspective divide working together. The main job of the perspective projection matrix is to generate the value of the w component so that the perspective divide can produce the three-dimensional scene.

Adjust aspect ratio and field of view

This is a general perspective projection matrix that allows us to adjust the field of view and the aspect ratio of the screen

Projection matrix variables

a: If we imagine a camera filming the scene, this variable represents that camera's focal length. The focal length is defined by 1/tan(fieldOfView/2), so the field of view must be less than 180 degrees (tan is the tangent function). For example, a 90-degree field of view gives 1/tan(90/2) = 1/1 = 1.

aspect: The aspect ratio of the screen, equal to width/height.

f: The distance to the far plane; must be positive and greater than the distance to the near plane.

n: The distance to the near plane; must be positive. For example, if this value is set to 1, the near plane lies at a z value of -1.

As the field of view becomes smaller, the focal length becomes longer, and the range of x and y values that map into the normalized range [-1, 1] becomes smaller. This has the effect of narrowing the frustum.

The left frustum has a 90-degree field of view, while the right one has a 45-degree field of view.

The 45-degree frustum has a longer focal length, that is, a greater distance between its focal point and its near end, than the 90-degree one.

In the image below, the same scene is seen from the focal point of each frustum.

A narrower field of view usually has less distortion; conversely, as the field of view widens, the edges of the final image appear more heavily distorted.

Create a perspective projection in your code

Now we're going to add perspective projection to the code. Android's Matrix class has two methods for this, frustumM and perspectiveM. But frustumM has a defect that affects certain types of projections, and perspectiveM was only introduced in Android 4.0, so you can either use perspectiveM or implement it yourself.

Let's implement a perspective projection matrix ourselves. First, define the method:

    public static void perspetiveM(float[] m, float degree, float aspect, float n, float f) 

Calculate the focal length according to the above formula

   		// Calculate the focal length
        float angle = (float) (degree * Math.PI / 180.0);
        float a = (float) (1.0f / Math.tan(angle / 2.0));

The output matrix

        m[0] = a / aspect;
        m[1] = 0f;
        m[2] = 0f;
        m[3] = 0f;

        m[4] = 0f;
        m[5] = a;
        m[6] = 0f;
        m[7] = 0f;

        m[8] = 0f;
        m[9] = 0f;
        m[10] = -((f + n) / (f - n));
        m[11] = -1f;

        m[12] = 0f;
        m[13] = 0f;
        m[14] = -((2f * f * n) / (f - n));
        m[15] = 0f;

A complete method

public class MatrixHelper {
    /**
     * @param m      the output matrix, filled in by this method
     * @param degree the field of view in degrees
     * @param aspect the aspect ratio
     * @param n      the distance to the near plane
     * @param f      the distance to the far plane
     */
    public static void perspetiveM(float[] m, float degree, float aspect, float n, float f) {

        // Calculate the focal length
        float angle = (float) (degree * Math.PI / 180.0);
        float a = (float) (1.0f / Math.tan(angle / 2.0));

        m[0] = a / aspect;
        m[1] = 0f;
        m[2] = 0f;
        m[3] = 0f;

        m[4] = 0f;
        m[5] = a;
        m[6] = 0f;
        m[7] = 0f;

        m[8] = 0f;
        m[9] = 0f;
        m[10] = -((f + n) / (f - n));
        m[11] = -1f;

        m[12] = 0f;
        m[13] = 0f;
        m[14] = -((2f * f * n) / (f - n));
        m[15] = 0f;
    }
}

The matrix data is stored in the floating-point array passed as parameter m, which needs at least 16 elements. OpenGL stores matrices in column-major order: instead of writing one row at a time, the first four values form the first column, the next four the second column, and so on.
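A quick way to see the column-major layout: element (row, col) of the matrix lives at array index col * 4 + row (zero-based). This little sketch shows where the entries of our perspective matrix land:

```java
public class ColumnMajor {
    // In OpenGL's column-major layout, element (row, col) is at index col * 4 + row
    static int index(int row, int col) {
        return col * 4 + row;
    }

    public static void main(String[] args) {
        System.out.println(index(0, 0)); // 0: first element of the first column
        System.out.println(index(3, 2)); // 11: m[11] = -1 sits in row 3 of column 2
        System.out.println(index(2, 3)); // 14: m[14] = -(2fn)/(f-n) sits in row 2 of column 3
    }
}
```

This is why m[11], not m[14], holds the -1 that copies -z into w.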

Now that we’ve completed our own perspetiveM method, we can use it in our code. Our own implementation is actually very similar to the method in the Android source code

Let’s start with the projection matrix

We change the code for onSurfaceChanged as follows:

 @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Set the viewport size
        GLES20.glViewport(0, 0, width, height);
        // Create a perspective projection whose frustum starts at z = -1 and ends at z = -10
        MatrixHelper.perspetiveM(mProjectionMatrix, 45, (float) width / (float) height, 1f, 10f);
    }

Using a 45-degree angle, we create a projection matrix whose frustum starts at z = -1 and ends at z = -10.

If we run now, you'll find the rectangle is missing. That's because we didn't specify the table's z position; by default it sits at z = 0, and since the frustum only starts at z = -1, we have to move the rectangle into that range or we won't see it.

Before we use the projection matrix, let's move the table into the distance with a translation matrix, which we'll call the model matrix.

Move objects using model matrices

First we define the model matrix

 	// Model matrix
    private float[] mModelMatrix = new float[16];

We'll use this model matrix to move the rectangle into the distance. Add the following at the end of onSurfaceChanged:

		// Set to the identity matrix
        Matrix.setIdentityM(mModelMatrix, 0);
        // Translate -2f along the z axis
        Matrix.translateM(mModelMatrix, 0, 0f, 0f, -2f);

We set the model matrix to the identity matrix, then shift it by -2 in the z direction. When we multiply the rectangle's coordinates by this matrix, they end up moved 2 units in the negative z direction.
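The effect of the translation on a point can be sketched by hand; this is a plain-Java stand-in for what Matrix.multiplyMV would compute, only covering the translation case:

```java
public class Translate {
    // Apply a translation (tx, ty, tz) to a homogeneous point (x, y, z, w):
    // the offsets are scaled by w, which is 1 for ordinary positions
    static float[] translate(float[] p, float tx, float ty, float tz) {
        return new float[]{p[0] + tx * p[3], p[1] + ty * p[3], p[2] + tz * p[3], p[3]};
    }

    public static void main(String[] args) {
        // A table vertex at z = 0 with w = 1, shifted -2 along z
        float[] moved = translate(new float[]{0.5f, 0.8f, 0f, 1f}, 0f, 0f, -2f);
        System.out.println(moved[2]); // -2.0: the vertex now sits inside the frustum
    }
}
```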

Multiply once or twice?

Now we have two matrices to apply, the projection matrix and the model matrix. One option is to add a second matrix to the vertex shader, multiply each vertex by the model matrix, and then multiply the result by the projection matrix.

A better way keeps the shader unchanged: multiply the projection matrix and the model matrix together into one final matrix, and pass that single matrix to the vertex shader. This way the shader only needs one matrix.

Choose the right order to multiply

We know that matrix multiplication is not commutative: A * B does not generally equal B * A, so we have to choose the right order when multiplying the model matrix and the projection matrix.
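A two-by-two example is enough to see that the order matters (a sketch with made-up values, not Android's Matrix class):

```java
public class MatrixOrder {
    // Multiply two 2x2 matrices stored row by row as {a, b, c, d}
    static float[] mul(float[] m, float[] n) {
        return new float[]{
                m[0] * n[0] + m[1] * n[2], m[0] * n[1] + m[1] * n[3],
                m[2] * n[0] + m[3] * n[2], m[2] * n[1] + m[3] * n[3]
        };
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 0f, 1f};
        float[] b = {1f, 0f, 3f, 1f};
        // a * b and b * a give different results
        System.out.println(java.util.Arrays.toString(mul(a, b))); // [7.0, 2.0, 3.0, 1.0]
        System.out.println(java.util.Arrays.toString(mul(b, a))); // [1.0, 2.0, 3.0, 7.0]
    }
}
```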

We want the projection matrix times the model matrix: the projection matrix goes on the left and the model matrix on the right.

We add the following code at the end of onSurfaceChanged

		float[] temp = new float[16];
        // Matrix multiplication
        Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mModelMatrix, 0);
        // Copy the result back into the projection matrix
        System.arraycopy(temp, 0, mProjectionMatrix, 0, temp.length);

We create a temporary array, store the result of the multiplication in it, and then copy the values back into mProjectionMatrix. This matrix now contains the combined effects of the model matrix and the projection matrix.

This is what happens when I run it now

Understanding perspective projection

Let's look at formula (1) and formula (2). Substituting the value 1.78 into formula (2) gives formula (3); 1.78 is in fact the aspect-ratio adjustment from the previous chapter. Since formula (2) contains fractions and is awkward to work with, we use formula (3) instead.

Let's see what the matrix in formula (3) does:

  • The first two rows of the matrix just copy the x and y values.
  • The third row copies the z component and flips its sign; the -2 in the fourth column is multiplied by the w value, which is 1 by default, so the third row produces -z - 2.
  • The fourth row sets the resulting w to -z. Once OpenGL performs the perspective divide, this shrinks distant objects.

Let's multiply this matrix by a point on the near end of the frustum:
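In code, that multiplication can be sketched directly from the three observations above (a sketch of the simplified matrix from formula (3), not a general matrix multiply):

```java
public class SimpleProjection {
    // Apply the simplified projection described above to (x, y, z, 1):
    // x and y are copied, z becomes -z - 2, and w becomes -z
    static float[] project(float x, float y, float z) {
        return new float[]{x, y, -z - 2f, -z};
    }

    public static void main(String[] args) {
        // A point on the near plane, z = -1
        float[] p = project(1f, 1f, -1f);
        System.out.println(p[2] + " " + p[3]); // z' = -1, w' = 1
        // A farther point, z = -3: w grows with distance
        float[] q = project(1f, 1f, -3f);
        System.out.println(q[3]); // w' = 3
    }
}
```

After the perspective divide, the near point stays at (1, 1, -1), while the far point's x and y shrink toward the center.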

Let’s look at two more points, each of which is further away than the previous one

As the points get further and further away, the values of z and w get bigger and bigger

Dividing by w

To achieve three dimensions, we first define a matrix that produces a w value that grows with distance. Then comes the next step, the perspective divide, where OpenGL divides each component by w. We end up with the following result:

The near end of the frustum maps to -1, and the far end approaches 1.

In this type of projection matrix, the far end is actually at infinity. Because of hardware precision limits, no matter how far away z is, in normalized coordinates it approaches, but never exactly reaches, the far plane at 1.

Adding rotation

Why, with the projection matrix applied, does it still not look three-dimensional?

Because we don't store z and w in our vertex array: z defaults to 0 and w defaults to 1, so the projection matrix and perspective divide have essentially nothing to work with, and the result still looks two-dimensional.

We can rotate the table to make it look three-dimensional, so that we view it from an angle.

Direction of rotation

The first things to figure out are which axis to rotate around and by how many degrees. To work out how an object rotates around a given axis, we use the right-hand rule.

Make a fist with your right hand and point your thumb in the positive direction of the axis. Your curled fingers then show how an object rotates around that axis for a positive angle.

We want to tilt the table backwards around the x axis, opposite to the positive direction of rotation, so we rotate by a negative angle.

Rotation matrix

Rotation about the x axis

Rotation about the y axis

Rotation about the z axis

Let's test rotation about the x axis: rotate the point (0, 1, 0) by 90 degrees about the x axis.

The point goes from (0, 1, 0) to (0, 0, 1); using the right-hand rule, we can see this is a positive rotation.
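That check can be written out using the x-axis rotation formulas y' = y·cos(θ) - z·sin(θ) and z' = y·sin(θ) + z·cos(θ) (a sketch for verification, not Android's Matrix.rotateM):

```java
public class RotateX {
    // Rotate a point (x, y, z) about the x axis by the given angle in degrees
    static float[] rotateX(float[] p, float degrees) {
        double r = Math.toRadians(degrees);
        float c = (float) Math.cos(r);
        float s = (float) Math.sin(r);
        return new float[]{p[0], p[1] * c - p[2] * s, p[1] * s + p[2] * c};
    }

    public static void main(String[] args) {
        // (0, 1, 0) rotated 90 degrees about the x axis ends up at (0, 0, 1),
        // up to floating-point rounding
        float[] p = rotateX(new float[]{0f, 1f, 0f}, 90f);
        System.out.println(p[1] + " " + p[2]);
    }
}
```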

Add rotation to the code

Add the following code to onSurfaceChanged

  		Matrix.translateM(mModelMatrix, 0, 0f, 0f, -2.5f);
        // Rotate -60 degrees about the x axis
        Matrix.rotateM(mModelMatrix, 0, -60f, 1f, 0f, 0f);

The first line moves the table farther away from us, and the second rotates it -60 degrees about the x axis.

Run it:

The complete code

public class AirHockKeyRender3 implements GLSurfaceView.Renderer {

    private final FloatBuffer verticeData;
    private final int BYTES_PER_FLOAT = 4;
    private int POSITION_COMPONENT_COUNT = 2;
    private final int COLOR_COMPONENT_COUNT = 3;
    private final int STRIDE = (POSITION_COMPONENT_COUNT + COLOR_COMPONENT_COUNT) * BYTES_PER_FLOAT;
    private final Context mContext;

    // Draw the triangles counterclockwise
    float[] tableVertices = {
            // Vertices: x, y, then vertex color r, g, b
            0f, 0f, 1f, 1f, 1f,

            -0.5f, -0.8f, 0.7f, 0.7f, 0.7f,
            0.5f, -0.8f, 0.7f, 0.7f, 0.7f,
            0.5f, 0.8f, 0.7f, 0.7f, 0.7f,
            -0.5f, 0.8f, 0.7f, 0.7f, 0.7f,
            -0.5f, -0.8f, 0.7f, 0.7f, 0.7f,

            // Line
            -0.5f, 0f, 1f, 0f, 0f,
            0.5f, 0f, 0f, 1f, 0f,

            // Points
            0f, -0.4f, 1f, 0f, 0f,
            0f, 0.4f, 0f, 0f, 1f
    };
    private int a_position;
    private int a_color;

    // Model matrix
    private float[] mModelMatrix = new float[16];

    // Projection matrix
    private float[] mProjectionMatrix = new float[16];
    private int u_matrix;

    public AirHockKeyRender3(Context context) {
        this.mContext = context;
        // Copy the float data into native memory
        verticeData = ByteBuffer.allocateDirect(tableVertices.length * BYTES_PER_FLOAT)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer()
                .put(tableVertices);
        verticeData.position(0);
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // GLSurfaceView calls this method when the surface is created: when the
        // app first runs, and again when returning from another Activity

        // Set the clear color
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        // Read the shader source code
        String fragment_shader_source = ReadResouceText.readResoucetText(mContext, R.raw.fragment_shader1);
        String vertex_shader_source = ReadResouceText.readResoucetText(mContext, R.raw.vertex_shader2);

        // Compile the shaders
        int mVertexshader = ShaderHelper.compileShader(GLES20.GL_VERTEX_SHADER, vertex_shader_source);
        int mFragmentshader = ShaderHelper.compileShader(GLES20.GL_FRAGMENT_SHADER, fragment_shader_source);
        // Link the program
        int program = ShaderHelper.linkProgram(mVertexshader, mFragmentshader);

        // Validate the OpenGL program object
        ShaderHelper.volidateProgram(program);
        // Use the program
        GLES20.glUseProgram(program);

        // Get the shader attribute and uniform locations
        a_position = GLES20.glGetAttribLocation(program, "a_Position");
        a_color = GLES20.glGetAttribLocation(program, "a_Color");
        u_matrix = GLES20.glGetUniformLocation(program, "u_Matrix");

        // Bind a_position to the vertex positions in verticeData
        /*
         * 1st parameter: the shader attribute
         * 2nd parameter: how many components per vertex
         * 3rd parameter: the data type
         * 4th parameter: normalization, only meaningful for integer types; ignore it
         * 5th parameter: the stride, meaningful when one array holds multiple attributes
         * 6th parameter: where OpenGL reads the data from
         */
        verticeData.position(0);
        GLES20.glVertexAttribPointer(a_position, POSITION_COMPONENT_COUNT, GLES20.GL_FLOAT,
                false, STRIDE, verticeData);
        // Enable the position attribute
        GLES20.glEnableVertexAttribArray(a_position);

        verticeData.position(POSITION_COMPONENT_COUNT);
        GLES20.glVertexAttribPointer(a_color, COLOR_COMPONENT_COUNT, GLES20.GL_FLOAT,
                false, STRIDE, verticeData);
        // Enable the color attribute
        GLES20.glEnableVertexAttribArray(a_color);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Called after the surface is created and whenever its size changes,
        // for example when switching between portrait and landscape
        // Set the viewport size
        GLES20.glViewport(0, 0, width, height);
        // Create a perspective projection whose frustum starts at z = -1 and ends at z = -10
        MatrixHelper.perspetiveM(mProjectionMatrix, 45, (float) width / (float) height, 1f, 10f);

        // Set the model matrix to the identity matrix
        Matrix.setIdentityM(mModelMatrix, 0);
        // Translate -2.5f along the z axis
        Matrix.translateM(mModelMatrix, 0, 0f, 0f, -2.5f);
        // Rotate -60 degrees about the x axis
        Matrix.rotateM(mModelMatrix, 0, -60f, 1f, 0f, 0f);

        float[] temp = new float[16];
        // Matrix multiplication
        Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mModelMatrix, 0);
        // Copy the result back into the projection matrix
        System.arraycopy(temp, 0, mProjectionMatrix, 0, temp.length);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Called for every frame. This method must draw something, even if it
        // only clears the screen: when it returns, the render buffer is swapped
        // onto the screen, and an untouched buffer would flicker

        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        GLES20.glUniformMatrix4fv(u_matrix, 1, false, mProjectionMatrix, 0);

        // Draw the table
        /*
         * 1st parameter: the primitive to draw (a triangle fan)
         * 2nd parameter: start reading from vertex 0
         * 3rd parameter: read 6 vertices
         * The two triangles of the fan form a rectangle
         */
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 6);

        // Draw the dividing line
        GLES20.glDrawArrays(GLES20.GL_LINES, 6, 2);

        // Draw the points
        GLES20.glDrawArrays(GLES20.GL_POINTS, 8, 1);
        GLES20.glDrawArrays(GLES20.GL_POINTS, 9, 1);
    }
}