Introduction

Anyone getting into graphics development runs into the obstacle of matrices. When you are just starting out it is hard to know how to write them correctly: articles online mostly present matrices as bare formulas, which quickly leaves a beginner in a fog, and with that fuzzy understanding it is easy to get things wrong in practice. So I wrote this article to record my own process of coming to understand matrices, so that later readers can quickly grasp what a matrix is, what it is for, and how to write one that achieves the effect they want.

Conclusions

To make this easier to read and to keep long paragraphs from getting in the way, I have put the conclusions up front, so you can read the rest with them in mind and feel more comfortable. If the theory feels hard to follow, you can skip straight to the hands-on part. Two matrix visualization sites are recommended here:

harry7557558.github.io/tools/matri…

shad.io/MatVis/

  1. In graphics development a good interpretation matters more than long formulas; the formulas can be handed over to the computer.

  2. A matrix is a representation of a system of equations. In graphics development it acts as a function that changes the coordinate system; equivalently, you can regard the coordinate system as unchanged and the matrix as changing the positions of the coordinate points.

  3. The homogeneous matrix unifies matrix operations into multiplication, and its w component acts as the perspective component that makes near objects large and far objects small.

  4. There is a distinction between row matrices and column matrices; they are really the same matrix, just stored in a different order.

  5. A matrix affects the coordinate system by affecting the coordinate system's basis vectors.

  6. The inverse matrix computes how a point in one coordinate system is represented in another coordinate system. It can be understood as matrix division; essentially it is solving the equations.

  7. The order of matrix operations matters. M3*M2*M1*V can be understood as moving the point V with the M1 matrix inside the coordinate system formed by M3*M2.

  8. Although the projection matrix is called a projection matrix, what it actually does is turn a cuboid into a cube; read the other way around, it projects the cube back into the cuboid space. That is what understanding the order of matrix multiplication gives you.

When you manipulate a matrix you have to be aware of which coordinate system you are manipulating the object in. All of the OpenGL problems listed below come down to coordinate system problems, and they are not unique to OpenGL: any environment that manipulates graphics with matrices has to watch out for them.

  1. OpenGL uses a right-handed coordinate system, but if you do not apply a projection matrix, the NDC inside is left-handed, which means an object at z = -1 will block an object at z = 1. With a projection matrix you do not have that problem, because the projection reverses the z axis.

  2. Matrix multiplication in OpenGL is left multiplication, but its matrices are actually column matrices; a simple way to remember this is that the origin of the coordinate system sits at the bottom of the matrix.

  3. When OpenGL uses a matrix to manipulate texture coordinates, first convert the [0,1] texture coordinates to [-1,1] and then apply the inverse of the matrix; otherwise a zoom-in will actually shrink the content instead of magnifying it.

  4. Pay attention to the order of the model matrix in OpenGL. The right order is mat4 modelMat = offsetMat*rotateMat*scaleMat; OpenGL reads from right to left, so this is scale first, then rotate, then move (see the sketch after this list).

  5. So in OpenGL, if you want to scale or rotate an object, first move it to the origin of the coordinate system, then scale and rotate it, and then move it back; otherwise the rotation will be distorted.

  6. Likewise, when a matrix rotates an object in OpenGL, first scale it to a 1:1 ratio, rotate it, and then scale it back when done.
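
A minimal GLSL sketch of points 4–6, assuming the translate()/rotate2d()/scale() helper functions defined in the vertex shader at the end of this article (the pivot value is just an illustrative placeholder):

vec3 pivot = vec3(0.5, 0.5, 0.0);                 // hypothetical rotation center
mat4 scaleMat  = scale(vec3(2.0, 2.0, 1.0));      // applied first (rightmost)
mat4 rotateMat = rotate2d(30.0);                  // then rotated
mat4 offsetMat = translate(vec3(0.1, 0.0, 0.0));  // moved last (leftmost)
mat4 modelMat  = offsetMat * rotateMat * scaleMat;
// Rotating/scaling about a point that is not the origin: move it to the origin,
// transform, then move back.
mat4 pivotMat  = translate(pivot) * rotateMat * scaleMat * translate(-pivot);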

What is a matrix

Coordinate system

The countless points of three-dimensional space taken together form a three-dimensional coordinate system, which can be divided into left-handed and right-handed coordinate systems according to the direction of the z axis.

Curl your fingers from x toward y with your thumb pointing in the direction of z; this is easier to use than the three-finger gesture commonly shown online.

Coordinates and matrices

The position of a point in the coordinate system is represented by its coordinates, for example (x, y, z).

The point at the center of the coordinate system is called the origin.

The change from one coordinate to another is called a vector, so the vector from the origin to a coordinate A has exactly the same values as the coordinate A itself.

Suppose we have a function that turns a point \((x, y, z)\) into another point \((x', y', z')\):

\[
\begin{aligned}
x' &= a_{11}x + a_{12}y + a_{13}z \\
y' &= a_{21}x + a_{22}y + a_{23}z \\
z' &= a_{31}x + a_{32}y + a_{33}z
\end{aligned}
\]

In simplified form this becomes

\[
\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
=
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
\]

and the block of coefficients in the middle is called a matrix.

Use of matrices

Matrices sound strange and hard to picture, right? In the Chinese word for matrix, 矩阵, 矩 means rectangle and 阵 means arrangement, so a matrix is a rectangular arrangement of numbers in rows and columns.

So what is this arrangement good for? What can it be used to do? And if it is just rows and columns, why not call it a determinant?

A matrix is a coordinate-transformation function (what a programmer would call a method): it turns a point of a coordinate system into another point. In graphics development it can be used to manipulate the position of every pixel of an image: move, rotate, zoom in, zoom out. So why don't we call it a determinant? Because that name is already taken: the determinant is a single number that represents how a transformation changes volume in three-dimensional coordinates. The English word is matrix, which also carries the meaning of a womb or mould; Sylvester, who coined the name, saw the matrix as the mould from which all of its determinants are generated, which is also why the science-fiction film The Matrix uses the same word.

The homogeneous matrix

The matrices above are 3*3, but those of you who have done 3D graphics development will have noticed that the matrices there are 4*4, so why use a 4*4 matrix?

Homogeneous coordinates

So what do you do if you want to scale a figure with a matrix? In terms of equations, scaling multiplies the coordinates of each point:

\[
x' = s_x x,\quad y' = s_y y,\quad z' = s_z z
\]

which converts to the matrix form

\[
\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
=
\begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & s_z \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
\]

What do you do if you want to move the figure? In terms of equations, moving adds a displacement to the coordinates of each point:

\[
x' = x + d_x,\quad y' = y + d_y,\quad z' = z + d_z
\]

which written as an equation of vectors is \(P' = P + D\), an addition rather than a 3*3 matrix multiplication.

Scaling and displacement thus use two different operations, multiplication and addition. We would like to unify them into a single operation to make processing easier, and a problem that cannot be solved in a low dimension can often be solved in a higher one.

Converting a three-dimensional coordinate to a four-dimensional one means adding a w component, whose value can be arbitrary: \((x, y, z) \rightarrow (xw, yw, zw, w)\); for simplicity we usually take \(w = 1\).

To convert four dimensional space coordinates to three dimensional space coordinates is to remove the W component

Expressed in 4-dimensional coordinates, the scaling equation

\[
x' = s_x x,\quad y' = s_y y,\quad z' = s_z z,\quad w' = w
\]

converts to the matrix

\[
\begin{pmatrix} x' \\ y' \\ z' \\ w' \end{pmatrix}
=
\begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}
\]

and the displacement equation (with \(w = 1\))

\[
x' = x + d_x w,\quad y' = y + d_y w,\quad z' = z + d_z w,\quad w' = w
\]

converts to the matrix

\[
\begin{pmatrix} x' \\ y' \\ z' \\ w' \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}
\]

The original n-dimensional coordinates become (n+1)-dimensional coordinates, which are the homogeneous coordinates, and the original n*n matrix becomes an (n+1)*(n+1) matrix, which is the homogeneous matrix. Homogeneous matrices unify matrix operations into multiplication.

Let's come back to the term homogeneous, which sounds strange. Homogeneous means of the same kind; for an equation it means every term has the same degree in the unknowns.

Without homogeneous coordinates, the first three terms in each row of the displacement equation are of degree 1 while the fourth term, the constant displacement, is of degree 0, so the equation is not homogeneous.

Using homogeneous coordinates, the displacement equation becomes the form above: the first three terms in each row are of degree 1, and the fourth term, d·w, is also of degree 1 (w is treated as an unknown here even though its value is always 1). Every term now has the same degree, the n dimensions become n+1 dimensions, and the equation becomes homogeneous.

W component

Besides the advantage of unifying all operations into multiplication, the homogeneous matrix has another benefit: it makes the near-large, far-small perspective effect possible.

Without perspective, converting the coordinates back to 3 dimensions simply means dropping the w; to get perspective, you divide by the w component instead.

Some of you might wonder: perspective just projects a three-dimensional object onto a two-dimensional plane, so why not divide by the z component? The z component can indeed be used for perspective projection, but if z is used as the perspective divisor, the z value itself is lost; we want to keep the z component and the perspective component separate. Geometrically, the w component means the distance between the projection origin and the object point.

In the end, the 4*4 homogeneous matrix transforms the coordinates into homogeneous coordinates, which are then divided by the w component to produce perspective.
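
A minimal sketch of the two conversions (plain GLSL, the values are just an example):

vec4 clipPos = vec4(2.0, 4.0, 6.0, 2.0);         // homogeneous (clip-space) coordinate
vec3 noPerspective   = clipPos.xyz;              // just drop w: no perspective
vec3 withPerspective = clipPos.xyz / clipPos.w;  // divide by w: (1, 2, 3), far points shrink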

The 4*4 homogeneous matrix exists to solve two computing problems in graphics development:

  1. All unified matrix operations use multiplication
  2. Solve perspective problems

Transposed matrix

The coordinates of the same point can be written in two ways. A coordinate is a vector starting at the origin: written vertically it is a column vector, written horizontally it is a row vector.

The transformation above can be written with the matrix to the left of a column vector, \(P' = M P\); a matrix laid out this way is called a row matrix, and multiplying with the matrix on the left is called left multiplication.

The same equation can also be written with a row vector to the left of the matrix, \(P' = P M^{T}\); the matrix arranged by columns is called a column matrix, and multiplying with the matrix on the right is called right multiplication.

These two representations are equivalent: row matrix times column vector equals row vector times column matrix, and the two matrices are each other's transpose, which is where the term transposed matrix comes from.

So in OpenGL you actually use left multiplication, but vertex = row matrix * column vector does not give the right result; you have to transpose the matrix first. Why did OpenGL end up with vertex = column matrix * column vector? Originally OpenGL used right multiplication, vertex = row vector * column matrix; later it decided that left multiplication was more appropriate, but to stay compatible with older versions the layout inside the matrix was not changed, so OpenGL ended up with vertex = column matrix * column vector and handles the difference internally, which in effect converts the column matrix back into a row matrix.

A transposed matrix swaps the row and column directions of a matrix. Personally, I think the transpose can also be used to speed up matrix operations in graphics development (see the sketch after this list):

  1. Because of CPU cache lines, arrays are read faster by rows than by columns; the transpose converts rows into columns, so it can speed up matrix operations (juejin.cn/post/694275…). OpenGL already optimizes for this, so the efficiency there is the same either way.
  2. When a matrix is orthogonal, its transpose equals its inverse, and a transpose is much faster to compute than an inverse.
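
A small GLSL sketch of both points, using the rotate2d() and transpose() functions from the vertex shader at the end (a pure rotation matrix is orthogonal):

mat4 r = rotate2d(30.0);
vec4 v = vec4(1.0, 0.0, 0.0, 1.0);
vec4 a = r * v;              // column matrix, left multiplication
vec4 b = v * transpose(r);   // row form, right multiplication: a == b
mat4 rInv = transpose(r);    // equals inverse(r) here, but far cheaper to compute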

The basis transformation

The i vector is \((1, 0, 0)\)

The j vector is \((0, 1, 0)\)

The k vector is \((0, 0, 1)\)

These three vectors are the basis vectors.

Any point in the coordinate system can be represented with the basis vectors: \(P = x\,\mathbf{i} + y\,\mathbf{j} + z\,\mathbf{k}\). When a matrix changes the points of a coordinate system, it is essentially changing the basis vectors; that is the basis transformation.

In a row matrix the basis vectors are laid out vertically.

In OpenGL the matrix is represented as a column matrix, that is, the transposed layout of the above.

In a column matrix the basis vectors are laid out horizontally; the simple way to remember it is that in OpenGL the origin sits at the bottom of the matrix.

For example, if I want to transform the middle figure into the right figure, what should the matrix be?

  1. The origin is (4, 4, 4).
  2. The red i vector is (3, 1, 0).
  3. The green j vector is (-1, 4, 0).

So it follows that in OpenGL this matrix is the one sketched below.
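
Written as an OpenGL column matrix (a GLSL sketch using the values read off the figure; the k vector is assumed unchanged):

mat4 basisMat = mat4(
     3.0,  1.0, 0.0, 0.0,   // i basis vector
    -1.0,  4.0, 0.0, 0.0,   // j basis vector
     0.0,  0.0, 1.0, 0.0,   // k basis vector (unchanged)
     4.0,  4.0, 4.0, 1.0);  // origin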

You can verify the basis transformation on the two matrix visualization sites recommended earlier: harry7557558.github.io/tools/matri… and shad.io/MatVis/.

A basis transformation is a function that changes every point in the coordinate system by changing the coordinate system's basis vectors and origin.

Inverse matrix

If you want to turn the right figure back into the middle figure, the matrix you need is the inverse of the matrix that turned the middle figure into the right figure; such a matrix is called an inverse matrix, written with a superscript -1 (M⁻¹).

Using the basis transformation, the inverse matrix could be derived from how the middle figure's origin and basis vectors are expressed in the right figure's coordinate system, but that is hard to read directly off the coordinate system, so Gaussian elimination is generally used instead.

An inverse matrix is a reverse calculation: it computes how a point expressed in one coordinate system is represented in another coordinate system.

The matrix order

In general, matrices in OpenGL are multiplied on the left, and the further to the right a matrix sits, the earlier its operation is applied.

V represents the position of a point, and M1, M2, M3 represent matrices, as in V' = M3*M2*M1*V.

If you read from right to left, you can think of the coordinate system as fixed and the object as changing.

If you read from left to right, you can think of the object as fixed and the coordinate system as changing.

So it can be thought of as moving the point V with the M1 matrix inside the coordinate system formed by M3*M2.

What if I want an object defined in frame A to move within frame B?

Transform the vector V into B's coordinates, move it there with the matrix M, and then transform it back with the inverse matrix (see the sketch below).
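
A minimal GLSL sketch of that pattern, assuming toB is the matrix that converts frame-A coordinates into frame B, M is the movement expressed in frame B, and v is the point (inverse() is defined in the shader code at the end):

mat4 moveInB = inverse(toB) * M * toB;  // into B, move there, back to A
vec4 vMoved  = moveInB * v;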

A matrix changes the points in a coordinate system, and changing all the points can equally be seen as changing the coordinate system itself.

Commonly used matrices

The following matrices are column matrices that can be used directly in OpenGL.

The displacement matrix

dx, dy and dz represent the displacement distances.

In terms of the basis transformation, a displacement essentially just changes the origin.
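
For reference, this is the standard homogeneous translation matrix in ordinary mathematical (column-vector) notation; it is the same matrix the translate() helper in the shader code at the end builds, just stored column by column there:

\[
T(d_x, d_y, d_z) =
\begin{pmatrix}
1 & 0 & 0 & d_x \\
0 & 1 & 0 & d_y \\
0 & 0 & 1 & d_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\]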

Scaling matrix

sx, sy and sz represent the scaling factors.

In terms of the basis transformation, scaling essentially changes the lengths of the three basis vectors.
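
Likewise, the standard scaling matrix (matching the scale() helper in the shader code at the end):

\[
S(s_x, s_y, s_z) =
\begin{pmatrix}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & s_z & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\]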

Counterclockwise rotation matrix about the z axis

The angle in the rotation matrix rotates counterclockwise; note that if you are manipulating 2D objects, the rotation is still about the z axis.

The i basis vector becomes (cos θ, sin θ, 0) and the j basis vector becomes (-sin θ, cos θ, 0); put these into the basis transformation and you can derive the rotation matrix.
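
Putting those basis vectors into the columns gives the standard rotation matrix about z (the rotate2d() helper in the shader code at the end builds exactly this):

\[
R_z(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 & 0 \\
\sin\theta & \cos\theta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\]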

Counterclockwise rotation matrix about the x axis

Counterclockwise rotation matrix about the y axis

Notice that the y rotation is slightly different from the z and x rotations: the i basis vector for the y rotation becomes (cos θ, 0, -sin θ). The reason it is -sin θ is that in a right-handed coordinate system, rotating about y carries the x axis toward the negative half of the z axis.
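
For reference, the standard counterclockwise rotation matrices about x and y:

\[
R_x(\theta) =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\theta & -\sin\theta & 0 \\
0 & \sin\theta & \cos\theta & 0 \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
R_y(\theta) =
\begin{pmatrix}
\cos\theta & 0 & \sin\theta & 0 \\
0 & 1 & 0 & 0 \\
-\sin\theta & 0 & \cos\theta & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\]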

The view matrix

Eye represents the coordinates of the camera

At represents the coordinates of the point the camera is looking at

Up represents the camera's up vector.

www.cnblogs.com/mikewolf200…

The main idea of the view matrix is that looking at an object from a moved camera gives the same result as keeping the camera still and moving the object in the opposite direction, so the view matrix is the inverse of the camera's own transformation.
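
A sketch of the view matrix this produces, matching the lookat() function in the shader code at the end: with \(\mathbf{z} = \mathrm{normalize}(eye - at)\), \(\mathbf{x} = \mathrm{normalize}(up \times \mathbf{z})\) and \(\mathbf{y} = \mathbf{z} \times \mathbf{x}\),

\[
V =
\begin{pmatrix}
x_x & x_y & x_z & -\mathbf{x} \cdot eye \\
y_x & y_y & y_z & -\mathbf{y} \cdot eye \\
z_x & z_y & z_z & -\mathbf{z} \cdot eye \\
0 & 0 & 0 & 1
\end{pmatrix}
\]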

Orthogonal projection matrix

Left represents the left boundary of the object before the transformation

Right represents the right boundary of the object before the transformation

Top represents the upper boundary of the object before transformation

Bottom represents the lower boundary of the object before the transformation

Near represents the side of the object near the user before transformation

Far represents the side of the object away from the user before the transformation

Orthogonal projection matrix is to transform the cuboid of [left,right][bottom,top][near,far] into a cube of [-1,1][-1,1][-1,1].

One thing to note here: the side near the user maps to near, and its final coordinate is -1. But isn't the near side at +1 in a right-handed coordinate system? The projection reverses the z axis. OpenGL's world space is indeed right-handed, but OpenGL's normalized device coordinates (NDC) are left-handed, so converting from world coordinates to NDC flips z. I verified this: without any matrix, drawing red at z = -1 and blue at z = 1 ends up showing red, which proves that OpenGL's NDC coordinate system really is left-handed.

The main idea is to first move the center of the cuboid to the origin of the coordinate system, and then scale the cuboid into a cube of unit size.

The center of the cuboid is \(\left(\frac{left+right}{2}, \frac{bottom+top}{2}, \frac{near+far}{2}\right)\). To move it to the origin we multiply by -1, so the shift matrix is the translation by \(\left(-\frac{left+right}{2}, -\frac{bottom+top}{2}, -\frac{near+far}{2}\right)\).

The size of the cuboid along the x axis was right-left and needs to become 2, so the x scale factor is 2/(right-left); the y and z axes work the same way, giving a scale matrix with 2/(right-left), 2/(top-bottom) and 2/(far-near) on the diagonal (the z factor also picks up a minus sign because of the z flip discussed above).

So the final orthogonal projection matrix is the shift matrix * the scale matrix; notice that it is the shift matrix * the scale matrix, not the scale matrix * the shift matrix, because these are column matrices.
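
Multiplying the two out gives the familiar OpenGL orthographic matrix (this is what the ortho() function in the shader code at the end builds; the minus sign in the z row is the left-handed NDC flip discussed above):

\[
\mathrm{Ortho} =
\begin{pmatrix}
\frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\
0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\
0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\
0 & 0 & 0 & 1
\end{pmatrix}
\]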

Perspective projection matrix

Left represents the left edge of the side of the object near the user before the transformation

Right represents the right edge of the object near the user before the transformation

Top represents the upper boundary of the side of the object near the user before the transformation

Bottom represents the lower boundary of the side of the object near the user before the transformation

Near represents the side of the object near the user before transformation

Far represents the side of the object away from the user before the transformation

The perspective projection matrix transforms the frustum bounded by the near plane and the far plane into the cube of unit length 1.

yemi.me/2018/09/09/…
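
For reference (the derivation is in the link above), the standard OpenGL frustum matrix built from these six parameters is:

\[
\mathrm{Frustum} =
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
\]

Its bottom row copies \(-z\) into the w component, which is exactly the value later used for the perspective divide.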

Hands-on practice

1. How to display images correctly in OpenGL?

If you draw the image directly with no matrix transformation it will be stretched, which is caused by the display area and the image having different proportions. You can use the projection matrix to solve this, or of course simply solve it with scaling.

For the sake of discussion, the image scaling mode used below is the centered fill mode.

The usual online solution for the centered fill mode is to work out which side of the image is larger relative to the display area, keep the larger side unchanged, and scale the smaller side down; but with that solution it is not easy to see why the smaller side's scale factor is the value shown in the figure below.

So my approach is to first set the object to its real size and then project it onto the screen size; I divide by 2 because the original quad goes from -1 to 1, i.e. it is 2 units across (a sketch follows below).
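
A minimal sketch of that idea, using the scale() and ortho() helpers from the vertex shader at the end (textureWidth/viewWidth etc. come from the uniforms):

mat4 sizeMat  = scale(vec3(textureWidth / 2.0, textureHeight / 2.0, 1.0)); // the quad spans -1..1, i.e. 2 units
mat4 orthoMat = ortho(-viewWidth / 2.0, viewWidth / 2.0,
                      -viewHeight / 2.0, viewHeight / 2.0, -1.0, 1.0);
gl_Position = orthoMat * sizeMat * position;   // pixel-sized object projected back to [-1,1]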

Now I want to rotate the object counterclockwise by 30 degrees, and you will see that if you put the scale matrix at the far right, it stretches the rotated shape.

This is because the size of the object is textureWidth by textureHeight, whose ratio is not 1, so the object must be 1:1 at the moment it is rotated. When rotateMat is added to the left of sizeMat, the thing being rotated is in effect still the -1..1 cube. You could of course manually change the scale, rotate, and then change the scale back, but this way is a little simpler.

If you want to move the center of the image to the upper right corner of the screen, where should you put the displacement matrix?

You will notice that this displacement matrix uses the absolute values of the screen size, because it sits in the screen-size coordinate system relative to viewWidth and viewHeight. If you want to move relative to the screen as -1..1, the displacement matrix has to go to the left of the projection matrix, where the coordinate system is [-1, 1].

Merging all the displacement matrices:

If I write the enlargement matrix to the left of the rotation matrix, you will see that when the width and height are scaled by different ratios the image is rotated out of shape; here, when the object's scale ratio becomes 0.5, the rotated object is distorted.

The zoom matrix can only be written to the right of the rotation matrix; that way the image itself keeps its right angles and is not skewed, even though it is stretched.

Combined with the camera matrix this gives the common MVP matrix: M is the model matrix, V is the view matrix, and P is the projection matrix. Note that inside the model matrix the order offsetMat*rotateMat*scaleMat cannot be changed, because the rotation and scaling of the object must happen at the origin of the coordinate system.

Finally, to display the image centered, choose the smaller of the width and height ratios between the screen and the object as the scale factor, to make sure the object stays inside the screen.

Just to keep track of the order of the matrices, let's sort out why the matrix ended up like this.

Reading from right to left, sizeMat, then rotateMat, then offsetMat, then orthoMat: the coordinate system never changes, only the object keeps changing.

Reading from left to right, orthoMat, then offsetMat, then rotateMat, then sizeMat: the object never changes, the coordinate system keeps transforming.

2. How to apply a matrix to texture coordinates

First convert the texture coordinates from [0,1] to [-1,1], then apply the matrix, and finally convert the coordinates back from [-1,1] to [0,1]; but you will find that a zoom-in matrix actually shrinks the image.

Zooming in on the texture coordinates changes the texture coordinates covered by the display area from 0..1 to -0.5..1.5. Only the 0..1 range of texture coordinates has content, so the image looks smaller. Why doesn't zooming in on the vertex coordinates have this problem? Because vertex coordinates go through clip space, which crops away everything outside [-1,1]; the display area shows only the contents inside [-1,1].

In appearance the texture result is the inverse of what you wanted, so applying an inverse matrix solves it (see the sketch below).
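
A minimal sketch, mirroring the commented-out block at the bottom of the vertex shader: map the texture coordinates to [-1,1], apply the inverse of the vertex matrix, then map back to [0,1].

vec2 tc = inputTextureCoordinate.xy * 2.0 - 1.0;                     // [0,1] -> [-1,1]
tc = (inverse(scale(vec3(2.0, 2.0, 1.0))) * vec4(tc, 0.0, 1.0)).xy;  // inverse of a 2x zoom
textureCoordinate = (tc + 1.0) / 2.0;                                // [-1,1] -> [0,1]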

3. If the object is not rotated, passing dx = -2 into the translation matrix viewOffsetMat moves it exactly off screen, but if it is rotated, a corner is still left showing; how do we solve this?

What you do is take the red bounding border of the rotated object, move the object by half the screen size and then by another half of the bounding border, and it moves exactly off screen.

Of course, for a simple rotation of a 2D object you could just widen the offset according to the angle, but what I want is a way to work out the boundary of a 3D object after an arbitrary combined matrix operation.

In graphics this is called the bounding volume of an object; bounding volumes are a way of quickly detecting collisions between objects. Bounding volume types include the sphere, the axis-aligned bounding box (AABB) and the oriented bounding box (OBB). The box used here, with edges parallel to the coordinate axes, is an AABB.

Finding the AABB means finding the minimum and maximum values of the eight transformed vertices of the original cube.

zhuanlan.zhihu.com/p/116051685…

So in the end viewOffset is passed -1 and boxOffset is passed -1: relative to the screen, first move the center of the object to the left edge of the screen, then move it another half of the bounding box, and it is exactly off screen (a sketch follows below).
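
A minimal sketch of those two moves, using the translate() and boxSize() helpers from the vertex shader at the end (mvpMat is the combined matrix built there):

vec3 box = boxSize(mvpMat, vec3(-1.0), vec3(1.0));                // AABB size of the transformed quad
mat4 viewOffsetMat = translate(vec3(-1.0, 0.0, 0.0));             // move the centre to the left screen edge
mat4 boxOffsetMat  = translate(vec3(-1.0, 0.0, 0.0) * box / 2.0); // then half the bounding box further
mat4 transformMat  = viewOffsetMat * boxOffsetMat * mvpMat;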

Some people might want to test these ideas with code, so here is the vertex shader code.

precision mediump float;
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
uniform vec2 inputTextureSize;
uniform vec2 viewSize;
uniform float time;


mat4 translate(vec3 t)
{
    return mat4(1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    t.x, t.y, t.z, 1.0);
}


mat4 ortho(float l, float r, float b, float t, float n, float f)
{
    return mat4(
    2.0/(r-l), 0.0, 0.0, 0.0,
    0.0, 2.0/(t-b), 0.0, 0.0,
    0.0, 0.0, -2.0/(f-n), 0.0,
    -(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1);
}

mat4 scale(vec3 v)
{
    return mat4(v.x, 0, 0, 0,
    0, v.y, 0, 0,
    0, 0, v.z, 0,
    0, 0, 0, 1);
}

mat4 rotate2d(float degree)
{
    float radian = radians(degree);
    return mat4(cos(radian), sin(radian), 0.0, 0.0,
    -sin(radian), cos(radian), 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 0.0, 1.0);
}

mat4 inverse_mat4(mat4 m)
{
    float Coef00 = m[2][2] * m[3][3] - m[3][2] * m[2][3];
    float Coef02 = m[1][2] * m[3][3] - m[3][2] * m[1][3];
    float Coef03 = m[1][2] * m[2][3] - m[2][2] * m[1][3];

    float Coef04 = m[2][1] * m[3][3] - m[3][1] * m[2][3];
    float Coef06 = m[1][1] * m[3][3] - m[3][1] * m[1][3];
    float Coef07 = m[1][1] * m[2][3] - m[2][1] * m[1][3];

    float Coef08 = m[2][1] * m[3][2] - m[3][1] * m[2][2];
    float Coef10 = m[1][1] * m[3][2] - m[3][1] * m[1][2];
    float Coef11 = m[1][1] * m[2][2] - m[2][1] * m[1][2];

    float Coef12 = m[2][0] * m[3][3] - m[3][0] * m[2][3];
    float Coef14 = m[1][0] * m[3][3] - m[3][0] * m[1][3];
    float Coef15 = m[1][0] * m[2][3] - m[2][0] * m[1][3];

    float Coef16 = m[2][0] * m[3][2] - m[3][0] * m[2][2];
    float Coef18 = m[1][0] * m[3][2] - m[3][0] * m[1][2];
    float Coef19 = m[1][0] * m[2][2] - m[2][0] * m[1][2];

    float Coef20 = m[2][0] * m[3][1] - m[3][0] * m[2][1];
    float Coef22 = m[1][0] * m[3][1] - m[3][0] * m[1][1];
    float Coef23 = m[1][0] * m[2][1] - m[2][0] * m[1][1];

    const vec4 SignA = vec4(1.0, -1.0, 1.0, -1.0);
    const vec4 SignB = vec4(-1.0, 1.0, -1.0, 1.0);

    vec4 Fac0 = vec4(Coef00, Coef00, Coef02, Coef03);
    vec4 Fac1 = vec4(Coef04, Coef04, Coef06, Coef07);
    vec4 Fac2 = vec4(Coef08, Coef08, Coef10, Coef11);
    vec4 Fac3 = vec4(Coef12, Coef12, Coef14, Coef15);
    vec4 Fac4 = vec4(Coef16, Coef16, Coef18, Coef19);
    vec4 Fac5 = vec4(Coef20, Coef20, Coef22, Coef23);

    vec4 Vec0 = vec4(m[1][0], m[0][0], m[0][0], m[0][0]);
    vec4 Vec1 = vec4(m[1][1], m[0][1], m[0][1], m[0][1]);
    vec4 Vec2 = vec4(m[1][2], m[0][2], m[0][2], m[0][2]);
    vec4 Vec3 = vec4(m[1][3], m[0][3], m[0][3], m[0][3]);

    vec4 Inv0 = SignA * (Vec1 * Fac0 - Vec2 * Fac1 + Vec3 * Fac2);
    vec4 Inv1 = SignB * (Vec0 * Fac0 - Vec2 * Fac3 + Vec3 * Fac4);
    vec4 Inv2 = SignA * (Vec0 * Fac1 - Vec1 * Fac3 + Vec3 * Fac5);
    vec4 Inv3 = SignB * (Vec0 * Fac2 - Vec1 * Fac4 + Vec2 * Fac5);

    mat4 Inverse = mat4(Inv0, Inv1, Inv2, Inv3);

    vec4 Row0 = vec4(Inverse[0][0], Inverse[1][0], Inverse[2][0], Inverse[3][0]);

    float Determinant = dot(m[0], Row0);

    Inverse /= Determinant;

    return Inverse;
}
mat4 transpose(mat4 m) {
    return mat4(m[0][0], m[1][0], m[2][0], m[3][0],
    m[0][1], m[1][1], m[2][1], m[3][1],
    m[0][2], m[1][2], m[2][2], m[3][2],
    m[0][3], m[1][3], m[2][3], m[3][3]);
}
mat4 inverse(mat4 m) {
    float
    a00 = m[0][0], a01 = m[0][1], a02 = m[0][2], a03 = m[0][3],
    a10 = m[1][0], a11 = m[1][1], a12 = m[1][2], a13 = m[1][3],
    a20 = m[2][0], a21 = m[2][1], a22 = m[2][2], a23 = m[2][3],
    a30 = m[3][0], a31 = m[3][1], a32 = m[3][2], a33 = m[3][3],

    b00 = a00 * a11 - a01 * a10,
    b01 = a00 * a12 - a02 * a10,
    b02 = a00 * a13 - a03 * a10,
    b03 = a01 * a12 - a02 * a11,
    b04 = a01 * a13 - a03 * a11,
    b05 = a02 * a13 - a03 * a12,
    b06 = a20 * a31 - a21 * a30,
    b07 = a20 * a32 - a22 * a30,
    b08 = a20 * a33 - a23 * a30,
    b09 = a21 * a32 - a22 * a31,
    b10 = a21 * a33 - a23 * a31,
    b11 = a22 * a33 - a23 * a32,

    det = b00 * b11 - b01 * b10 + b02 * b09 + b03 * b08 - b04 * b07 + b05 * b06;

    return mat4(
    a11 * b11 - a12 * b10 + a13 * b09,
    a02 * b10 - a01 * b11 - a03 * b09,
    a31 * b05 - a32 * b04 + a33 * b03,
    a22 * b04 - a21 * b05 - a23 * b03,
    a12 * b08 - a10 * b11 - a13 * b07,
    a00 * b11 - a02 * b08 + a03 * b07,
    a32 * b02 - a30 * b05 - a33 * b01,
    a20 * b05 - a22 * b02 + a23 * b01,
    a10 * b10 - a11 * b08 + a13 * b06,
    a01 * b08 - a00 * b10 - a03 * b06,
    a30 * b04 - a31 * b02 + a33 * b00,
    a21 * b02 - a20 * b04 - a23 * b00,
    a11 * b07 - a10 * b09 - a12 * b06,
    a00 * b09 - a01 * b07 + a02 * b06,
    a31 * b01 - a30 * b03 - a32 * b00,
    a20 * b03 - a21 * b01 + a22 * b00) / det;
}

mat4 lookat(vec3 eye, vec3 at, vec3 up)
{
    vec3 zaxis = normalize(at - eye);
    vec3 xaxis = normalize(cross(zaxis, up));
    vec3 yaxis = cross(xaxis, zaxis);
    zaxis = -1.0*zaxis;
    mat4 viewMatrix =  mat4(
    vec4(xaxis.x, yaxis.x, zaxis.x, 0.0),
    vec4(xaxis.y, yaxis.y, zaxis.y, 0.0),
    vec4(xaxis.z, yaxis.z, zaxis.z, 0.0),
    vec4(-dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1.0));
    return viewMatrix;
}

vec3 boxSize(mat4 m, vec3 pmin, vec3 pmax){
    vec4 xa = m[0]*pmin.x;
    vec4 xb = m[0]*pmax.x;
    vec4 ya = m[1]*pmin.y;
    vec4 yb = m[1]*pmax.y;
    vec4 za = m[2]*pmin.z;
    vec4 zb = m[2]*pmax.z;
    float w = m[3][3];
    vec3 vmin =((min(xa, xb)+min(ya, yb)+min(za, zb)+w)/w).xyz;
    vec3 vmax =((max(xa, xb)+max(ya, yb)+max(za, zb)+w)/w).xyz;
    return (vmax -vmin);
}



void main()
{
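    // viewSize and inputTextureSize arrive in pixels; time drives the demo animation.
    // orthoMat projects the screen-sized space back to [-1,1] NDC.
    // cameraMat is a view matrix whose up vector spins over time.
    // sizeMat gives the [-1,1] quad the texture's pixel size (hence the /2.0).
    // scaleTypeMat scales by the smaller view/texture ratio so the image fits inside the view.
    // offsetMat/rotateMat/scaleMat are the user's model transform, combined right-to-left.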

    float viewWidth = viewSize.x;
    float viewHeight = viewSize.y;
    float textureWidth = inputTextureSize.x;
    float textureHeight = inputTextureSize.y;
    mat4 orthoMat = ortho(-viewWidth/2.0, viewWidth/2.0, -viewHeight/2.0, viewHeight/2.0, -1.0, 1.0);
    float radian = radians(mod(time, 10000.0)/10000.0*360.0);
    mat4 cameraMat = lookat(vec3(0.0, 0.0, 1.0), vec3(0.0, 0.0, 0.0), vec3(sin(radian), cos(radian), 0.0));
    mat4 sizeMat = scale(vec3(textureWidth/2.0, textureHeight/2.0, 1.0));
    mat4 offsetMat = translate(vec3(0.0));
    mat4 rotateMat = rotate2d(30.0);
    mat4 scaleMat = scale(vec3(0.5, 0.5, 1.0));
    float widthRatio = viewWidth/textureWidth;
    float heightRatio = viewHeight/textureHeight;
    float ratio = min(widthRatio, heightRatio);
    mat4 scaleTypeMat = scale(vec3(ratio, ratio, 1.0));
    mat4 modelMat = offsetMat*rotateMat*scaleMat*sizeMat*scaleTypeMat;
    mat4 mvpMat = orthoMat*cameraMat*modelMat;
    vec3 boxSize = boxSize(mvpMat, vec3(-1.0), vec3(1.0));
    mat4 viewOffsetMat = translate(vec3(sin(radian), cos(radian), 0.0));
    mat4 boxOffsetMat = translate(vec3(0.0, 0.0, 0.0)*boxSize/2.0);
    mat4 transformMat = viewOffsetMat*boxOffsetMat*mvpMat;
    gl_Position =transformMat *position;
    textureCoordinate = inputTextureCoordinate.xy;
    textureCoordinate.y= 1.0 - textureCoordinate.y;
    


    //    gl_Position =position;
    //    textureCoordinate =inputTextureCoordinate.xy*2.0-1.0;
    //    textureCoordinate = (inverse(scale(vec3(1.0, 1.0, 1.0)))*vec4(textureCoordinate, 0.0, 1.0)).xy;
    //    textureCoordinate =(textureCoordinate+1.0)/2.0;
    //    textureCoordinate.y= 1.0 - textureCoordinate.y;

}


Fragment shader code

precision mediump float;
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform vec2 inputTextureSize;

void main()
{
    vec4 color = texture2D(inputImageTexture, textureCoordinate);
    // Outside the [0,1] texture range there is no content, so output transparent black
    if (textureCoordinate.x < 0.0 || textureCoordinate.x > 1.0 ||
        textureCoordinate.y < 0.0 || textureCoordinate.y > 1.0) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    } else {
        gl_FragColor = color;
    }
}