Preface

This article walks through the translation, rotation, scaling, view and projection matrix operations in WebGL, ending with a transformed cube

Building a cube

Start by drawing a basic cube

The right-hand rule determines the orientation of the Cartesian coordinate system, as shown on the right side of the figure below

So our cube vertex coordinates are

[
  v0,v1,v2,
  v0,v2,v3,
  v0,v3,v4
  ...
]

Each face requires 6 vertices (two triangles), so a total of 36 vertices are passed in

Similarly, for colors, 6 * 6 vec4 color values are passed in

[
  c0,c1,c2,
  c0,c2,c3,
  c0,c3,c4
  ...
]

You may be thinking: a cube is supposed to have only 8 vertices, yet we passed in 36.

drawElements can be used to reduce the number of vertices that need to be defined

  // Create a cube
  //    v6----- v5
  //   /|      /|
  //  v1------v0|
  //  | |     | |
  //  | |v7---|-|v4
  //  |/      |/
  //  v2------v3
  var vertices = new Float32Array([   // Vertex coordinates
     1.0, 1.0, 1.0,  -1.0, 1.0, 1.0,  -1.0,-1.0, 1.0,   1.0,-1.0, 1.0,  // v0-v1-v2-v3 front
     1.0, 1.0, 1.0,   1.0,-1.0, 1.0,   1.0,-1.0,-1.0,   1.0, 1.0,-1.0,  // v0-v3-v4-v5 right
     1.0, 1.0, 1.0,   1.0, 1.0,-1.0,  -1.0, 1.0,-1.0,  -1.0, 1.0, 1.0,  // v0-v5-v6-v1 up
    -1.0, 1.0, 1.0,  -1.0, 1.0,-1.0,  -1.0,-1.0,-1.0,  -1.0,-1.0, 1.0,  // v1-v6-v7-v2 left
    -1.0,-1.0,-1.0,   1.0,-1.0,-1.0,   1.0,-1.0, 1.0,  -1.0,-1.0, 1.0,  // v7-v4-v3-v2 down
     1.0,-1.0,-1.0,  -1.0,-1.0,-1.0,  -1.0, 1.0,-1.0,   1.0, 1.0,-1.0   // v4-v7-v6-v5 back
  ]);

  var colors = new Float32Array([     // Colors
    0.4, 0.4, 1.0,  0.4, 0.4, 1.0,  0.4, 0.4, 1.0,  0.4, 0.4, 1.0,  // v0-v1-v2-v3 front(blue)
    0.4, 1.0, 0.4,  0.4, 1.0, 0.4,  0.4, 1.0, 0.4,  0.4, 1.0, 0.4,  // v0-v3-v4-v5 right(green)
    1.0, 0.4, 0.4,  1.0, 0.4, 0.4,  1.0, 0.4, 0.4,  1.0, 0.4, 0.4,  // v0-v5-v6-v1 up(red)
    1.0, 1.0, 0.4,  1.0, 1.0, 0.4,  1.0, 1.0, 0.4,  1.0, 1.0, 0.4,  // v1-v6-v7-v2 left
    1.0, 1.0, 1.0,  1.0, 1.0, 1.0,  1.0, 1.0, 1.0,  1.0, 1.0, 1.0,  // v7-v4-v3-v2 down
    0.4, 1.0, 1.0,  0.4, 1.0, 1.0,  0.4, 1.0, 1.0,  0.4, 1.0, 1.0   // v4-v7-v6-v5 back
  ]);

  var indices = new Uint8Array([       // Indices of the vertices
     0, 1, 2,   0, 2, 3,    // front
     4, 5, 6,   4, 6, 7,    // right
     8, 9,10,   8,10,11,    // up
    12,13,14,  12,14,15,    // left
    16,17,18,  16,18,19,    // down
    20,21,22,  20,22,23     // back
  ]);
  // Set buffer and binding

  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_BYTE, 0);

What if you only had eight vertices?

  // Create a cube
  //    v6----- v5
  //   /|      /|
  //  v1------v0|
  //  | |     | |
  //  | |v7---|-|v4
  //  |/      |/
  //  v2------v3

var vertices = new Float32Array([
   1.0, 1.0, 1.0,  -1.0, 1.0, 1.0,  -1.0,-1.0, 1.0,   1.0,-1.0, 1.0,  // v0-v1-v2-v3 front
   1.0,-1.0,-1.0,  -1.0,-1.0,-1.0,  -1.0, 1.0,-1.0,   1.0, 1.0,-1.0   // v4-v7-v6-v5 back
]);

// colors remain the same

// Define the vertex indices for drawing
// (the back vertices are stored in the order v4, v7, v6, v5,
// so array indices 5, 6, 7 refer to v7, v6, v5)
var indices = new Uint8Array([
  0, 1, 2,  0, 2, 3,  // front
  0, 3, 4,  0, 4, 7,  // right
  0, 7, 6,  0, 6, 1,  // up
  1, 6, 5,  1, 5, 2,  // left
  5, 4, 3,  5, 3, 2,  // down
  4, 5, 6,  4, 6, 7   // back
]);

You’ll see that the cube now has only two colors, blue and green, with color interpolation between them

This is caused by vertex sharing: when the same vertex participates in faces of different colors, it cannot be shared.

Shader code

      var vertexShaderSource = `
        attribute vec4 aVertexPosition;
        attribute vec4 aVertexColor;
        varying lowp vec4 vColor;
        void main(void) {
          gl_Position = aVertexPosition;
          vColor = aVertexColor;
        }
      `;

      var fragmentShaderSource = `
        varying lowp vec4 vColor;
        void main(void) {
          gl_FragColor = vColor;
        }
      `;

In the final render we only see the back face

This is a matter of view and projection; the sections below describe the matrix transformations involved

Matrix operations

The transformations are mainly divided into the model matrix, the view matrix and the projection matrix.

The model matrix represents the combined transformation of the observed object: rotation, translation and scaling

Start by defining some utility functions

/**
 * Create a 4x4 identity matrix
 */
function createMat4 () {
  let out = new Float32Array(16);
  out[0] = 1;
  out[5] = 1;
  out[10] = 1;
  out[15] = 1;
  return out;
}
/**
 * Multiply two 4x4 matrices: out = a * b
 * @param {Mat4} a
 * @param {Mat4} b
 */
function multiply(a, b) {
  let out = new Float32Array(16);
  var a00 = a[0],
      a01 = a[1],
      a02 = a[2],
      a03 = a[3];
  var a10 = a[4],
      a11 = a[5],
      a12 = a[6],
      a13 = a[7];
  var a20 = a[8],
      a21 = a[9],
      a22 = a[10],
      a23 = a[11];
  var a30 = a[12],
      a31 = a[13],
      a32 = a[14],
      a33 = a[15];

  // Cache only the current column of the second matrix
  var b0 = b[0],
      b1 = b[1],
      b2 = b[2],
      b3 = b[3];
  out[0] = b0 * a00 + b1 * a10 + b2 * a20 + b3 * a30;
  out[1] = b0 * a01 + b1 * a11 + b2 * a21 + b3 * a31;
  out[2] = b0 * a02 + b1 * a12 + b2 * a22 + b3 * a32;
  out[3] = b0 * a03 + b1 * a13 + b2 * a23 + b3 * a33;

  b0 = b[4]; b1 = b[5]; b2 = b[6]; b3 = b[7];
  out[4] = b0 * a00 + b1 * a10 + b2 * a20 + b3 * a30;
  out[5] = b0 * a01 + b1 * a11 + b2 * a21 + b3 * a31;
  out[6] = b0 * a02 + b1 * a12 + b2 * a22 + b3 * a32;
  out[7] = b0 * a03 + b1 * a13 + b2 * a23 + b3 * a33;

  b0 = b[8]; b1 = b[9]; b2 = b[10]; b3 = b[11];
  out[8] = b0 * a00 + b1 * a10 + b2 * a20 + b3 * a30;
  out[9] = b0 * a01 + b1 * a11 + b2 * a21 + b3 * a31;
  out[10] = b0 * a02 + b1 * a12 + b2 * a22 + b3 * a32;
  out[11] = b0 * a03 + b1 * a13 + b2 * a23 + b3 * a33;

  b0 = b[12]; b1 = b[13]; b2 = b[14]; b3 = b[15];
  out[12] = b0 * a00 + b1 * a10 + b2 * a20 + b3 * a30;
  out[13] = b0 * a01 + b1 * a11 + b2 * a21 + b3 * a31;
  out[14] = b0 * a02 + b1 * a12 + b2 * a22 + b3 * a32;
  out[15] = b0 * a03 + b1 * a13 + b2 * a23 + b3 * a33;
  return out;
}
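To sanity-check the convention, composing two translation matrices with multiply should simply add the offsets. The snippet below restates the helpers compactly (a loop-based `mat4Multiply` and a `translationMat` builder, both illustrative stand-ins for the article's functions, using the same column-major layout):

```javascript
// Compact loop-based mat4 multiply, equivalent to the unrolled
// multiply() above (column-major storage: element (row, col) lives
// at index col * 4 + row).
function mat4Multiply(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

// Column-major translation matrix (see the translation section below).
function translationMat(tx, ty, tz) {
  return new Float32Array([
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    tx, ty, tz, 1,
  ]);
}

// Composing a (1,2,3) translation with a (4,5,6) translation
// should give a (5,7,9) translation.
const t = mat4Multiply(translationMat(1, 2, 3), translationMat(4, 5, 6));
console.log(t[12], t[13], t[14]); // 5 7 9
```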

Rotation

Rotation about the z axis changes the x and y coordinates; you can picture each xy plane rotating about the origin of the xy axes

b is the counterclockwise rotation angle, in radians

You may see a slightly different matrix in some tutorials, where b is the clockwise rotation angle; this works because sin(-b) = -sin(b) and cos(-b) = cos(b)

// x' = x cosb - y sinb
// y' = x sinb + y cosb
// z' = z
attribute vec4 a_Position;
uniform float u_CosB,u_SinB;
void main(){
  gl_Position.x = a_Position.x * u_CosB - a_Position.y * u_SinB;
  gl_Position.y = a_Position.x * u_SinB + a_Position.y * u_CosB;
  gl_Position.z = a_Position.z;
  gl_Position.w = 1.0;
}

(Padding to a 4x4 matrix makes it composable with the other same-order matrices later)

[ x' ]   [ cosb -sinb 0 0 ]   [ x ]   [ x * cosb - y * sinb ]
[ y' ] = [ sinb  cosb 0 0 ] x [ y ] = [ x * sinb + y * cosb ]
[ z' ]   [ 0     0    1 0 ]   [ z ]   [ z                   ]
[ 1  ]   [ 0     0    0 1 ]   [ 1 ]   [ 1                   ]

The matrix used is called the rotation matrix

Similarly, to rotate around the x axis, use the right-hand rule (turn the coordinate system so that the x axis points toward you): z now plays the role y played in the z-axis rotation, and y plays the role of x, so the rotation matrix is

[ x' ]   [ 1 0     0    0 ]   [ x ]   [ x                   ]
[ y' ] = [ 0 cosb -sinb 0 ] x [ y ] = [ y * cosb - z * sinb ]
[ z' ]   [ 0 sinb  cosb 0 ]   [ z ]   [ y * sinb + z * cosb ]
[ 1  ]   [ 0 0     0    1 ]   [ 1 ]   [ 1                   ]

If you rotate around the y axis, the rotation matrix is

[ x' ]   [  cosb 0 sinb 0 ]   [ x ]   [ x * cosb + z * sinb ]
[ y' ] = [  0    1  0   0 ] x [ y ] = [ y                   ]
[ z' ]   [ -sinb 0 cosb 0 ]   [ z ]   [ z * cosb - x * sinb ]
[ 1  ]   [  0    0  0   1 ]   [ 1 ]   [ 1                   ]

The vertex shader can be modified as follows

attribute vec4 a_Position;
uniform mat4 u_xformMatrix;
void main(){
  gl_Position = u_xformMatrix * a_Position;
}

The utility functions are as follows

/** @param {Number} angleInRadians counterclockwise rotation in radians */
function rotateX (angleInRadians) {
  let c = Math.cos(angleInRadians);
  let s = Math.sin(angleInRadians);
  // column-major layout
  return new Float32Array([
    1, 0, 0, 0,   0, c, s, 0,   0, -s, c, 0,   0, 0, 0, 1,
  ]);
}

/** @param {Number} angleInRadians counterclockwise rotation in radians */
function rotateY (angleInRadians) {
  let c = Math.cos(angleInRadians);
  let s = Math.sin(angleInRadians);
  // column-major layout
  return new Float32Array([
    c, 0, -s, 0,   0, 1, 0, 0,   s, 0, c, 0,   0, 0, 0, 1,
  ]);
}

/** @param {Number} angleInRadians counterclockwise rotation in radians */
function rotateZ (angleInRadians) {
  let c = Math.cos(angleInRadians);
  let s = Math.sin(angleInRadians);
  // column-major layout
  return new Float32Array([
    c, s, 0, 0,   -s, c, 0, 0,   0, 0, 1, 0,   0, 0, 0, 1,
  ]);
}
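A quick numeric check of these utilities (assuming the counterclockwise, column-major convention described above): rotating the point (1, 0, 0) by 90° about the z axis should land on (0, 1, 0). `transformPoint` is an illustrative helper, not part of the article:

```javascript
// Counterclockwise rotation about z, stored column-major.
function rotateZ(angleInRadians) {
  const c = Math.cos(angleInRadians);
  const s = Math.sin(angleInRadians);
  return new Float32Array([
     c, s, 0, 0,
    -s, c, 0, 0,
     0, 0, 1, 0,
     0, 0, 0, 1,
  ]);
}

// Apply a column-major mat4 to a point (w assumed to be 1).
function transformPoint(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

const p = transformPoint(rotateZ(Math.PI / 2), [1, 0, 0]);
console.log(p.map(v => Math.round(v * 1e6) / 1e6)); // [ 0, 1, 0 ]
```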

Translation

attribute vec4 a_Position;
uniform vec4 u_Translation;
void main(){
  gl_Position = a_Position + u_Translation;
}

Two vec4s can be added component-wise. Note that the w component of the second operand, u_Translation, must be 0 so that the result keeps w = 1

If you use the matrix, it becomes

[ x' ]   [ 1 0 0 Tx ]   [ x ]   [ x + Tx ]
[ y' ] = [ 0 1 0 Ty ] x [ y ] = [ y + Ty ]
[ z' ]   [ 0 0 1 Tz ]   [ z ]   [ z + Tz ]
[ 1  ]   [ 0 0 0 1  ]   [ 1 ]   [ 1      ]

This matrix is called the translation matrix

In JS a matrix is stored in a typed array in column-major order. For example, the translation matrix above is represented in JS as

new Float32Array([
  1.0, 0.0, 0.0, 0.0,
  0.0, 1.0, 0.0, 0.0,
  0.0, 0.0, 1.0, 0.0,
  Tx,  Ty,  Tz,  1.0
])

The utility functions are as follows

/**
 * @param {Number} tx x offset
 * @param {Number} ty y offset
 * @param {Number} tz z offset
 */
function translation (tx, ty, tz) {
  return new Float32Array([
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    tx, ty, tz, 1,
  ]);
}
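Applying this matrix to a point shows why the offsets sit at indices 12, 13 and 14 of the column-major array. The snippet restates translation and adds `transformPoint`, an illustrative helper not in the article:

```javascript
// Column-major translation matrix, as above.
function translation (tx, ty, tz) {
  return new Float32Array([
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    tx, ty, tz, 1,
  ]);
}

// Apply a column-major mat4 to a point (w assumed to be 1).
function transformPoint(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// (1, 1, 1) offset by (2, 3, 4) lands on (3, 4, 5).
const moved = transformPoint(translation(2, 3, 4), [1, 1, 1]);
console.log(moved); // [ 3, 4, 5 ]
```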

Scaling

x, y and z are scaled in three directions, with scale factors Sx, Sy and Sz respectively

The corresponding scaling matrix is

[ x' ]   [ Sx 0  0  0 ]   [ x ]   [ x * Sx ]
[ y' ] = [ 0  Sy 0  0 ] x [ y ] = [ y * Sy ]
[ z' ]   [ 0  0  Sz 0 ]   [ z ]   [ z * Sz ]
[ 1  ]   [ 0  0  0  1 ]   [ 1 ]   [ 1      ]

The utility functions are as follows

/**
 * @param {Number} sx x scale factor
 * @param {Number} sy y scale factor
 * @param {Number} sz z scale factor
 */
function scaling (sx, sy, sz) {
  return new Float32Array([
    sx, 0, 0, 0,
    0, sy, 0, 0,
    0, 0, sz, 0,
    0, 0, 0, 1,
  ]);
}

View

In 3D, the view is another important factor; it determines the direction from which we observe the object

The view consists of three parts:

  • Eye point: the origin of the line of sight, i.e. the position of the observer (camera): (eyeX, eyeY, eyeZ)
  • Target point: the point the observer is looking at: (atX, atY, atZ)
  • Up direction: the direction that ends up pointing upward in the image drawn on screen. Since the observer can roll around the line of sight, the up direction is also needed to pin down the view: (upX, upY, upZ)

The default view looks down the negative half of the z axis, which points into the screen:

  • Eye point: at the origin (0, 0, 0)
  • Target point: the line of sight points in the negative z direction, so the target point is (0, 0, z) with z < 0
  • Up direction: the positive y direction (0, 1, 0)

Together these three pieces of information form the view matrix, which transforms world space into view space. How is the view matrix determined?

1. Construct the camera space coordinate system

  1. Determine the forward basis vector of the line of sight from at and eye

    First compute the line-of-sight direction forward = at - eye, then normalize it: forward = forward / |forward|

fx = centerX - eyeX;
fy = centerY - eyeY;
fz = centerZ - eyeZ;

rlf = 1 / Math.sqrt(fx*fx + fy*fy + fz*fz);
fx *= rlf;
fy *= rlf;
fz *= rlf;
  2. Determine the camera's side basis vector from the up vector and the forward vector

    Its direction, given by the right-hand rule, is perpendicular to the plane spanned by the two vectors

    Take the cross product side = cross(forward, up), then normalize it: side = side / |side|

sx = fy * upZ - fz * upY;
sy = fz * upX - fx * upZ;
sz = fx * upY - fy * upX;

rls = 1 / Math.sqrt(sx*sx + sy*sy + sz*sz);
sx *= rls;
sy *= rls;
sz *= rls;
  3. Compute the up basis vector from forward and side

    Cross product: up = cross(side, forward)

    The recomputed up vector is perpendicular to the plane spanned by forward and side

ux = sy * fz - sz * fy;
uy = sz * fx - sx * fz;
uz = sx * fy - sy * fx;
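Putting the three steps together, the default view parameters should reproduce the default basis: side = (1, 0, 0), up = (0, 1, 0), forward = (0, 0, -1). `cameraBasis` below is an illustrative wrapper (not from the article) around the cross/normalize steps above:

```javascript
// Build the camera basis from eye, target and up (same math as above).
function cameraBasis(eye, center, up) {
  let fx = center[0] - eye[0], fy = center[1] - eye[1], fz = center[2] - eye[2];
  const rlf = 1 / Math.hypot(fx, fy, fz);
  fx *= rlf; fy *= rlf; fz *= rlf;

  // side = cross(forward, up), normalized
  let sx = fy * up[2] - fz * up[1], sy = fz * up[0] - fx * up[2], sz = fx * up[1] - fy * up[0];
  const rls = 1 / Math.hypot(sx, sy, sz);
  sx *= rls; sy *= rls; sz *= rls;

  // up' = cross(side, forward)
  const ux = sy * fz - sz * fy, uy = sz * fx - sx * fz, uz = sx * fy - sy * fx;
  return { forward: [fx, fy, fz], side: [sx, sy, sz], up: [ux, uy, uz] };
}

// Default view: eye at the origin, looking down -z, up = +y.
// Expect side = (1, 0, 0), up = (0, 1, 0), forward = (0, 0, -1).
const basis = cameraBasis([0, 0, 0], [0, 0, -1], [0, 1, 0]);
console.log(basis);
```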

Thus the eye position and the three basis vectors forward, side and up define a new coordinate system

Note that this coordinate system is left-handed, so in practice forward has to be negated

side corresponds to x, up to y, and -forward to z

eye together with side, up and -forward then forms a right-handed coordinate system

Next we perform the coordinate transformation: compute, in camera coordinates, the position of an object given in world coordinates

Use the rotation and translation matrices, then take the inverse

A rotation R plus a translation T makes the world coordinate system coincide with the camera coordinate system; they combine into the matrix M = T * R

This transformation matrix maps coordinates in the camera coordinate system to the world coordinate system

Correspondingly, the view matrix (world coordinates to camera coordinates) is view = M⁻¹

M maps vertices from the camera coordinate system into the world coordinate system; by relative motion, a vertex's position in the camera coordinate system is actually view * (original vertex position)

Translation matrix T

[ 1 0 0 eyeX ]
[ 0 1 0 eyeY ]
[ 0 0 1 eyeZ ]
[ 0 0 0   1  ]

Given:

The basis of the world coordinate system is x(1,0,0), y(0,1,0), z(0,0,1); the basis of the camera coordinate system is u(sx,sy,sz), v(ux,uy,uz), n(-fx,-fy,-fz)

Find the transformation matrix R from the camera coordinate system to the world coordinate system

Solution:

According to the theorem

If (u, v, n) = (x, y, z) * C, then the matrix C is the transition matrix from the basis (x, y, z) to the basis (u, v, n)

Let some vector have coordinates p under the basis (x, y, z) and q under the basis (u, v, n); by the coordinate transformation formula, p = C * q

Therefore C is the transformation matrix R that maps vertices from the camera coordinate system to the world coordinate system

Here the superscript t denotes the transpose, turning each row into a column vector:

u = (sx,sy,sz)t = sx * x + sy * y + sz * z
v = (ux,uy,uz)t = ux * x + uy * y + uz * z
n = (-fx,-fy,-fz)t = -fx * x - fy * y - fz * z

(u,v,n)
= (x,y,z) * R
= (x,y,z) * ( (sx,sy,sz)t, (ux,uy,uz)t, (-fx,-fy,-fz)t )

So the rotation matrix R is

[ sx ux -fx 0 ]
[ sy uy -fy 0 ]
[ sz uz -fz 0 ]
[ 0  0   0  1 ]

The final matrix is view = (T * R)⁻¹ = R⁻¹ * T⁻¹ = Rt * T⁻¹ (Rt is the transpose of R)

The rotation matrix is an orthogonal matrix whose inverse is equal to its transpose

Therefore,

view = 
[  sx  sy  sz 0 ]   [ 1 0 0 -eyeX ]
[  ux  uy  uz 0 ] x [ 0 1 0 -eyeY ]
[ -fx -fy -fz 0 ]   [ 0 0 1 -eyeZ ]
[  0   0   0  1 ]   [ 0 0 0    1  ]

“Changing the state of the observer” is essentially the same as “transforming the whole world”; which matrix you use depends on which subject is more convenient to change

Multiply the view matrix by the vertex coordinates to get the new view

attribute vec4 a_Position;
uniform mat4 u_ViewMatrix;
void main(){
  gl_Position = u_ViewMatrix * a_Position;
}

The utility functions are as follows

function setLookAt (eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ) {
  var fx, fy, fz, rlf, sx, sy, sz, rls, ux, uy, uz;

  fx = centerX - eyeX;
  fy = centerY - eyeY;
  fz = centerZ - eyeZ;

  // Normalize f.
  rlf = 1 / Math.sqrt(fx * fx + fy * fy + fz * fz);
  fx *= rlf;
  fy *= rlf;
  fz *= rlf;

  // Calculate cross product of f and up.
  sx = fy * upZ - fz * upY;
  sy = fz * upX - fx * upZ;
  sz = fx * upY - fy * upX;

  // Normalize s.
  rls = 1 / Math.sqrt(sx * sx + sy * sy + sz * sz);
  sx *= rls;
  sy *= rls;
  sz *= rls;

  // Calculate cross product of s and f.
  ux = sy * fz - sz * fy;
  uy = sz * fx - sx * fz;
  uz = sx * fy - sy * fx;

  var Rt = new Float32Array(16);
  Rt[0] = sx;
  Rt[1] = ux;
  Rt[2] = -fx;
  Rt[3] = 0;

  Rt[4] = sy;
  Rt[5] = uy;
  Rt[6] = -fy;
  Rt[7] = 0;

  Rt[8] = sz;
  Rt[9] = uz;
  Rt[10] = -fz;
  Rt[11] = 0;

  Rt[12] = 0;
  Rt[13] = 0;
  Rt[14] = 0;
  Rt[15] = 1;
  var inverseT = translation(-eyeX, -eyeY, -eyeZ);
  return multiply(Rt, inverseT);
};
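As a check: since the default WebGL view is an eye at the origin looking down the negative z axis with +y up, setLookAt(0, 0, 0, 0, 0, -1, 0, 1, 0) should return the identity matrix. Below is a compact restatement for verification (the loop-based multiply is an illustrative stand-in for the unrolled one above):

```javascript
// Compact column-major mat4 multiply (same layout as the article).
function mat4Multiply(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

function setLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ) {
  let fx = centerX - eyeX, fy = centerY - eyeY, fz = centerZ - eyeZ;
  const rlf = 1 / Math.hypot(fx, fy, fz);          // normalize forward
  fx *= rlf; fy *= rlf; fz *= rlf;
  let sx = fy * upZ - fz * upY, sy = fz * upX - fx * upZ, sz = fx * upY - fy * upX;
  const rls = 1 / Math.hypot(sx, sy, sz);          // normalize side
  sx *= rls; sy *= rls; sz *= rls;
  const ux = sy * fz - sz * fy, uy = sz * fx - sx * fz, uz = sx * fy - sy * fx;
  const Rt = new Float32Array([                     // transposed rotation
    sx, ux, -fx, 0,
    sy, uy, -fy, 0,
    sz, uz, -fz, 0,
    0, 0, 0, 1,
  ]);
  const invT = new Float32Array([                   // translate by -eye
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -eyeX, -eyeY, -eyeZ, 1,
  ]);
  return mat4Multiply(Rt, invT);
}

const view = setLookAt(0, 0, 0, 0, 0, -1, 0, 1, 0);
const identity = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
console.log(identity.every((v, i) => Math.abs(view[i] - v) < 1e-6)); // true
```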

Projection

Viewed from a different viewpoint, parts of the scene get clipped, because WebGL only renders the visible region

We can move the camera to see more space

There are two types of viewing volumes:

  • A box-shaped (cuboid) viewing volume, produced by orthographic projection
  • A pyramid-shaped (frustum) viewing volume, produced by perspective projection

Orthographic projection

It is equivalent to clipping out a cuboid box aligned with the canvas and then scaling it

The parameters are left, right, bottom, top, near, far

The utility function is as follows

/**
 * @param {Number} left
 * @param {Number} right
 * @param {Number} bottom
 * @param {Number} top
 * @param {Number} near
 * @param {Number} far
 */
function setOrtho (left, right, bottom, top, near, far) {
  let rw, rh, rd;

  rw = 1 / (right - left);
  rh = 1 / (top - bottom);
  rd = 1 / (far - near);

  let out = new Float32Array(16);

  out[0] = 2 * rw;
  out[1] = 0;
  out[2] = 0;
  out[3] = 0;

  out[4] = 0;
  out[5] = 2 * rh;
  out[6] = 0;
  out[7] = 0;

  out[8] = 0;
  out[9] = 0;
  out[10] = -2 * rd;
  out[11] = 0;

  out[12] = -(right + left) * rw;
  out[13] = -(top + bottom) * rh;
  out[14] = -(far + near) * rd;
  out[15] = 1;

  return out;
};
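A spot check of the mapping: the top-right corner of the near clipping plane, (right, top, -near), should map to NDC (1, 1, -1). setOrtho is restated compactly below, and `transformPoint4` is an illustrative helper (not from the article) that applies the column-major matrix to a homogeneous point:

```javascript
// Compact column-major orthographic projection matrix, as above.
function setOrtho(left, right, bottom, top, near, far) {
  const rw = 1 / (right - left), rh = 1 / (top - bottom), rd = 1 / (far - near);
  return new Float32Array([
    2 * rw, 0, 0, 0,
    0, 2 * rh, 0, 0,
    0, 0, -2 * rd, 0,
    -(right + left) * rw, -(top + bottom) * rh, -(far + near) * rd, 1,
  ]);
}

// Apply a column-major mat4 to a homogeneous point.
function transformPoint4(m, [x, y, z, w]) {
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12] * w,
    m[1] * x + m[5] * y + m[9]  * z + m[13] * w,
    m[2] * x + m[6] * y + m[10] * z + m[14] * w,
    m[3] * x + m[7] * y + m[11] * z + m[15] * w,
  ];
}

const ortho = setOrtho(-2, 2, -1, 1, 2, 10);
// Top-right corner of the near plane: (right, top, -near) = (2, 1, -2)
const ndc = transformPoint4(ortho, [2, 1, -2, 1]);
console.log(ndc); // [ 1, 1, -1, 1 ]
```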

Perspective projection

The effect is that distant objects look smaller, giving the scene a sense of depth close to the real world

It is equivalent to scaling all clipping planes plus a translation

The parameters used are:

  • fovy: the vertical field of view
  • aspect: the aspect ratio of the near clipping plane
  • near, far: the positions of the near and far clipping planes

The corresponding matrix is

[ cot(fovy/2)/aspect  0            0                       0                      ]
[ 0                   cot(fovy/2)  0                       0                      ]
[ 0                   0            -(far+near)/(far-near)  -2*far*near/(far-near) ]
[ 0                   0            -1                      0                      ]

The utility functions are as follows

/**
 * Set the perspective projection matrix
 * @param {Number} fovy vertical field of view in degrees
 * @param {Number} aspect aspect ratio
 * @param {Number} near near clipping plane position
 * @param {Number} far far clipping plane position
 */
function setPerspective (fovy, aspect, near, far) {
  let rd, s, ct;

  if (near === far || aspect === 0) {
    throw 'null frustum';
  }
  if (near <= 0) {
    throw 'near <= 0';
  }
  if (far <= 0) {
    throw 'far <= 0';
  }

  fovy = Math.PI * fovy / 180 / 2;
  s = Math.sin(fovy);
  if (s === 0) {
    throw 'null frustum';
  }

  rd = 1 / (far - near);
  ct = Math.cos(fovy) / s;

  let out = new Float32Array(16);

  out[0]  = ct / aspect;
  out[1]  = 0;
  out[2]  = 0;
  out[3]  = 0;

  out[4]  = 0;
  out[5]  = ct;
  out[6]  = 0;
  out[7]  = 0;

  out[8]  = 0;
  out[9]  = 0;
  out[10] = -(far + near) * rd;
  out[11] = -1;

  out[12] = 0;
  out[13] = 0;
  out[14] = -2 * near * far * rd;
  out[15] = 0;

  return out;
};
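A spot check: with a 90° field of view and aspect ratio 1, the x/y scale factor cot(45°) is 1, and a point at the center of the far plane should map to NDC z = 1 after the perspective divide. setPerspective is restated compactly (without the guard clauses):

```javascript
// Compact column-major perspective projection matrix, as above.
function setPerspective(fovy, aspect, near, far) {
  const half = (Math.PI * fovy) / 180 / 2;
  const ct = Math.cos(half) / Math.sin(half); // cot(fovy / 2)
  const rd = 1 / (far - near);
  const out = new Float32Array(16);
  out[0] = ct / aspect;
  out[5] = ct;
  out[10] = -(far + near) * rd;
  out[11] = -1;
  out[14] = -2 * near * far * rd;
  return out;
}

const persp = setPerspective(90, 1, 1, 9);
// Far-plane center (0, 0, -far): clip z = out[10] * -far + out[14],
// clip w = out[11] * -far = far; the divide gives NDC z.
const clipZ = persp[10] * -9 + persp[14];
const clipW = -1 * -9;
console.log(clipZ / clipW); // 1
```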

Composite transform

Let's implement a transform that translates first and then rotates

Matrix multiplication is associative and is applied from right to left

<"coordinates after translating, then rotating">
= <rotation matrix> x (<translation matrix> x <original coordinates>)
= (<rotation matrix> x <translation matrix>) x <original coordinates>
= <"rotate after translate" matrix> x <original coordinates>

The “rotate after translate” matrix above is a composite transformation matrix, called the model matrix

Usually we also apply the view matrix and the projection matrix, i.e.

gl_Position = u_ProjectionMatrix  * u_ViewMatrix * u_ModelMatrix * a_Position;
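The order dependence is easy to verify numerically: applying "translate, then rotate" and "rotate, then translate" to the same point gives different results. The helpers below compactly restate the article's utilities (`transformPoint` is an illustrative addition):

```javascript
// Compact column-major mat4 multiply.
function mat4Multiply(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

function translation(tx, ty, tz) {
  return new Float32Array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, tx, ty, tz, 1]);
}

function rotateZ(rad) {
  const c = Math.cos(rad), s = Math.sin(rad);
  return new Float32Array([c, s, 0, 0, -s, c, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]);
}

// Apply a column-major mat4 to a point (w assumed to be 1).
function transformPoint(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// "Translate, then rotate": the rotation matrix goes on the left.
const translateThenRotate = mat4Multiply(rotateZ(Math.PI / 2), translation(1, 0, 0));
// "Rotate, then translate": the translation matrix goes on the left.
const rotateThenTranslate = mat4Multiply(translation(1, 0, 0), rotateZ(Math.PI / 2));

const a = transformPoint(translateThenRotate, [1, 0, 0]).map(v => Math.round(v));
const b = transformPoint(rotateThenTranslate, [1, 0, 0]).map(v => Math.round(v));
console.log(a, b); // [ 0, 2, 0 ] [ 1, 1, 0 ]
```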

Notes

WebGL does not provide its own matrix utilities, so for day-to-day development you should wrap up a matrix library yourself or use an open-source project

Animation is driven with the requestAnimationFrame API, using the elapsed time between renders to determine the animation progress

WebGL draws vertices in the order they appear in the buffer, without considering distance, so distant fragments can be drawn on top of near ones. To fix this, enable hidden surface removal: after the fragment shader runs, a depth test is performed and the result is cached in the depth buffer

// Enable the hidden face elimination function
gl.enable(gl.DEPTH_TEST);
// Clear the color buffer and depth buffer before drawing
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

When two primitives have nearly the same depth, z-fighting occurs; this can be solved with the polygon offset mechanism

Please refer to other literature for the underlying principle

// Enable polygon offset
gl.enable(gl.POLYGON_OFFSET_FILL);
// Draw triangle 1
gl.drawArrays(gl.TRIANGLES, 0, n/2); 
// Set polygon offset
gl.polygonOffset(1.0, 1.0);
// Draw triangle 2
gl.drawArrays(gl.TRIANGLES, n/2, n/2);

Example

Take the vertex data defined at the beginning and apply the following matrix transformations

Set the perspective projection

setPerspective(30, 1, 1, 100)

Set the view

setLookAt(3, 3, 7, 0, 0, 0, 0, 1, 0);

The final result

References

  1. View matrix derivation process
  2. WebGL Programming Guide