This article walks through demos to explain the evolution from WebGL to Threejs.

WebGL

I. WebGL rendering pipeline

This article assumes some familiarity with 3D rendering, so it only briefly reviews WebGL's rendering pipeline.

As shown in the figure above, a WebGL rendering process generally consists of three steps:

  • Write data (such as vertices and textures)
  • Primitive assembly
  • Rasterization and fragment shading

(Thanks Bruce for the photo)

1. Write data: we need to write the vertex data into video memory for the next step, primitive assembly:

2. Primitive assembly: the primitive types supported by WebGL are points, lines, and triangles, with triangles used most often. Primitive assembly determines the vertex information of each primitive, that is, the positions of a triangle's three vertices. Even the most complex 3D model is drawn as triangles:

Meanwhile, WebGL gives us a programmable hook, the vertex shader, that lets us freely control vertex positions. Sample code:

attribute vec4 position;
uniform mat4 matrix;
void main() {
  gl_Position = matrix * position;  
}

The matrix in the code represents a composite matrix used to convert three-dimensional world coordinates to projected coordinates:

matrix = projection matrix × view matrix × model matrix

WebGL automatically clips the projected vertices to the viewport for us.
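The matrix composition above can be sketched in plain JavaScript. This is an illustrative helper (the function names `multiply` and `composeMatrix` are our own, not part of WebGL), using the column-major 4×4 layout that WebGL's `uniformMatrix4fv` expects:

```javascript
// Multiply two 4x4 column-major matrices: out = a * b.
// Column-major layout matches what WebGL's uniformMatrix4fv expects.
function multiply(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

// matrix = projection * view * model, as in the formula above.
function composeMatrix(projection, view, model) {
  return multiply(projection, multiply(view, model));
}
```

The composed matrix is what gets uploaded as the `matrix` uniform in the vertex shader above.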

3. Rasterization and fragment shading: rasterization is done automatically by WebGL; it breaks each primitive down into fragments (pixel-sized pieces). All we can do is color those fragments, by providing a fragment shader that tells WebGL how to color the current fragment. Sample code:

precision mediump float;
void main(void) {
  gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}

II. Draw a triangle with WebGL

1. Drawing process

2. Relevant code

const VSHADER_SOURCE = `
  attribute vec4 a_Position;
  void main() {
    gl_Position = a_Position;
  }
`;

const FSHADER_SOURCE = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

function main() {
  const canvas = document.getElementById('webgl');
  const gl = getWebGLContext(canvas);
  if (!gl) {
    return console.log('Failed to get the rendering context for WebGL');
  }

  const program = createProgram(gl, VSHADER_SOURCE, FSHADER_SOURCE);
  gl.useProgram(program);
  gl.program = program;

  const n = initVertexBuffers(gl); // Write the vertex data to the buffer
  if (n < 0) {
    return console.log('Failed to set the positions of the vertices');
  }

  gl.clearColor(0.0, 0.0, 0.0, 1.0); // Set the color used when the color buffer is cleared

  const draw = function() {
    // Clear the color buffer (canvas)
    gl.clear(gl.COLOR_BUFFER_BIT);
    // Draw
    gl.drawArrays(gl.TRIANGLES, 0, n);

    requestAnimationFrame(draw);
  };
  draw();
}

function initVertexBuffers(gl) {
  const vertices = new Float32Array([
    0.0, 0.5,   -0.5, -0.5,   0.5, -0.5
  ]);
  const n = 3;

  const vertexBuffer = gl.createBuffer();
  if (!vertexBuffer) {
    console.log('Failed to create the buffer object');
    return -1;
  }

  // gl.ARRAY_BUFFER now points to the newly requested memory
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  // Write data
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

  const a_Position = gl.getAttribLocation(gl.program, 'a_Position');
  if (a_Position < 0) {
    console.log('Failed to get the storage location of a_Position');
    return -1;
  }
  // Point a_Position at the memory bound to gl.ARRAY_BUFFER
  gl.vertexAttribPointer(a_Position, 2, gl.FLOAT, false, 0, 0);

  // Enable the attribute
  gl.enableVertexAttribArray(a_Position);

  // Unbind the buffer
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  return n;
}

3. Draw the result

III. Disadvantages of WebGL programming

1. Tedious boilerplate

(1) Generating vertex data is tedious: we cannot work out every vertex by hand for each geometry we define. It is far more reasonable to define a geometry's length, width, and height and generate the vertex data automatically.

(2) Writing data to video memory is tedious: every interaction with video memory, such as writing vertex data and texture data, goes through a verbose process:

function initVertexBuffers(gl) {
  const vertices = new Float32Array([
    0.0, 0.5,   -0.5, -0.5,   0.5, -0.5
  ]);
  const n = 3;

  const vertexBuffer = gl.createBuffer();
  if (!vertexBuffer) {
    console.log('Failed to create the buffer object');
    return -1;
  }

  // gl.ARRAY_BUFFER now points to the newly requested memory
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  // Write data
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

  const a_Position = gl.getAttribLocation(gl.program, 'a_Position');
  if (a_Position < 0) {
    console.log('Failed to get the storage location of a_Position');
    return -1;
  }
  // Point a_Position at the memory bound to gl.ARRAY_BUFFER
  gl.vertexAttribPointer(a_Position, 2, gl.FLOAT, false, 0, 0);

  // Enable the attribute
  gl.enableVertexAttribArray(a_Position);

  // Unbind the buffer
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  return n;
}
function initTextures(gl, n) {
  const texture = gl.createTexture();
  if (!texture) {
    console.log('Failed to create the texture object');
    return false;
  }

  // u_Sampler fetches color values from texture coordinates
  const u_Sampler = gl.getUniformLocation(gl.program, 'u_Sampler');
  if (!u_Sampler) {
    console.log('Failed to get the storage location of u_Sampler');
    return false;
  }
  const image = new Image();
  if (!image) {
    console.log('Failed to create the image object');
    return false;
  }

  image.onload = function() { loadTexture(gl, n, texture, u_Sampler, image); };
  image.src = './assets/sky.jpg';

  return true;
}

function loadTexture(gl, n, texture, u_Sampler, image) {
  // Flip the Y axis, because the image coordinate system has Y pointing down
  gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, 1);
  // Activate a texture unit; a texture unit manages one texture object
  gl.activeTexture(gl.TEXTURE0);
  // Bind the texture object to the texture unit
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Configure texture parameters: how the texture image is mapped onto the shape
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  // Assign the texture image to the texture object
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, image);
  // Assign texture unit 0 to the fragment shader's sampler uniform
  gl.uniform1i(u_Sampler, 0);

  gl.clear(gl.COLOR_BUFFER_BIT);

  gl.drawArrays(gl.TRIANGLE_STRIP, 0, n);
}

2. High learning cost

(1) The learning cost of shaders: the shaders in our demo are very basic and can only produce simple effects. To achieve more complex effects, developers have to learn a lot more about shaders, including optics and linear algebra.

(2) The learning cost of the rendering pipeline: most front-end developers are not familiar with WebGL's rendering pipeline. For them to be able to draw 3D scenes, we need a framework that is cheaper to learn and easier to use.

Threejs

Threejs adds a layer of encapsulation on top of WebGL, so that front-end developers can draw 3D scenes at a lower cost and more efficiently.

Draw a square with Threejs

1. Drawing process

2. Relevant code

function init() {
  container = document.getElementById('container');

  scene = new THREE.Scene();
  scene.background = new THREE.Color(0x8fbcd4);

  camera = new THREE.PerspectiveCamera(
    45, window.innerWidth / window.innerHeight,
    1, 20
  );
  camera.position.z = 10;
  scene.add(camera);

  const geometry = new THREE.PlaneGeometry(2, 2);
  const material = new THREE.MeshBasicMaterial({
    color: 0xff0000
  });
  mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);

  renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(window.innerWidth, window.innerHeight);

  container.appendChild(renderer.domElement);

  renderer.render(scene, camera);
}

3. Draw the result

You can see that the whole drawing process is much more efficient:

  • Creating vertices is easier: we just define the square's width and height, and Threejs calculates the vertices for us.
  • Threejs handles the interaction with video memory, so users don't have to care about it.
  • Threejs encapsulates the various shaders behind materials, so ordinary users never touch them, while advanced users can still implement their own shaders through custom materials.
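The first point can be sketched in plain JavaScript. This helper (`planeVertices` is our own name, not the Threejs source) turns a width and height into the two triangles of a plane, roughly what PlaneGeometry does for us internally:

```javascript
// Given width and height, generate the 6 vertices (two triangles)
// of a plane centered at the origin, as x/y pairs -- the same shape
// of Float32Array we wrote by hand in the WebGL demo.
function planeVertices(width, height) {
  const w = width / 2;
  const h = height / 2;
  return new Float32Array([
    // triangle 1 (counter-clockwise)
    -w,  h,   -w, -h,    w,  h,
    // triangle 2 (counter-clockwise)
    -w, -h,    w, -h,    w,  h
  ]);
}
```

With this, `new THREE.PlaneGeometry(2, 2)` in the demo corresponds conceptually to `planeVertices(2, 2)` plus the normals and UVs that Threejs also generates.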

A look at the demo from the perspective of the Threejs source code

Let's take a look at how Threejs draws the square, from a source-code perspective.

1. Scene

Scene inherits from Object3D. Object3D is the base class for most objects in Threejs; it provides a set of properties and methods for manipulating objects in three-dimensional space. A Scene is used to organize those objects.

2. Camera

Camera also inherits from Object3D and, in addition, maintains a projection matrix, which is used to project vertices from the camera coordinate system into clip space.

3. Mesh

A Mesh is the basic drawable object in Threejs. It consists of two parts: a geometry and a material.

  • BufferGeometry: the base class of all geometries in Threejs. It holds vertex positions, face indices, normals, colors, UV coordinates, and custom buffer attributes.
  • Material: the base class of all materials in Threejs. Each type of material corresponds to a WebGL program containing the associated shaders.
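As an illustrative sketch of how these pieces fit together (simplified stand-ins written for this article, not the real Threejs classes), a mesh is just the pairing of "what to draw" with "how to draw it":

```javascript
// Simplified stand-ins for the ideas above (not the real Threejs API).
class BufferAttribute {
  constructor(array, itemSize) {
    this.array = array;       // the raw typed-array data
    this.itemSize = itemSize; // components per vertex (e.g. 3 for xyz)
    this.count = array.length / itemSize;
  }
}

class BufferGeometry {
  constructor() {
    this.attributes = {};     // named vertex attributes: position, normal, uv...
  }
  setAttribute(name, attribute) {
    this.attributes[name] = attribute;
  }
}

class Mesh {
  constructor(geometry, material) {
    this.geometry = geometry; // what to draw: the vertex data
    this.material = material; // how to draw it: shader + parameters
  }
}
```

The real BufferGeometry and Mesh carry much more (bounding volumes, world matrices, raycasting), but the geometry/material split is the core idea.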

4. WebGLRenderer

WebGLRenderer constructs a WebGL renderer. Its main job is to initialize a WebGL rendering context for us, along with a set of objects for manipulating WebGL state.

5. renderer.render()

Renders all the meshes under the scene in turn.

Threejs renders a mesh in three steps:

  • Write the geometry's vertex data to video memory.
  • Switch to the WebGL program corresponding to the material.
  • Issue a draw call.
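The three steps above can be sketched as follows. This is a mock of the per-mesh flow, not the Threejs internals; the `gl`-like object here is a stub standing in for the real WebGL calls from the earlier demo:

```javascript
// Illustrative sketch of the per-mesh render steps listed above.
// In real Threejs these map to buffer uploads, gl.useProgram, and
// gl.drawArrays / gl.drawElements.
function renderMesh(gl, mesh) {
  // 1. Write the geometry's vertex data to video memory.
  gl.bufferData(mesh.geometry.vertices);
  // 2. Switch to the WebGL program that matches the material.
  gl.useProgram(mesh.material.program);
  // 3. Issue the draw call.
  gl.drawArrays(mesh.geometry.vertexCount);
}

function renderScene(gl, scene) {
  // Render every mesh under the scene in turn.
  for (const mesh of scene.meshes) {
    renderMesh(gl, mesh);
  }
}
```

In practice Threejs also skips re-uploading unchanged buffers and sorts meshes to minimize program switches, which is exactly the bookkeeping the raw WebGL demo forced us to do by hand.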

Appendix

  • Threejs source code with comments and breakpoints