As mentioned above, the part of the “Your Dominant Personality Color” activity that interests me most is the fly-through-clouds effect built with Three.js. According to the author, each cloud's position is random, and the result looks great.

The online Demo

First, let’s talk about the basic idea of moving through clouds:

  1. Place a large number of 64*64 planes evenly along the Z axis, with random X and Y coordinates (much like the can of potato chips below).
  2. Merge all of these planes into one large geometry.
  3. Create a mesh from the merged geometry and a cloud-textured material, and add it to the scene.
  4. For the motion effect, move the camera slowly along the Z axis from far away, creating the feeling of flying through the clouds.
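Step 1 above can be sketched in plain JavaScript. This is only an illustration of the placement logic (the function name and return shape are mine); the ranges mirror the original example, where X falls in [-500, 500), Y is always below the scene, and Z increases by one per plane:

```javascript
// A sketch of step 1: generate positions for the cloud planes.
// Numbers mirror the original example; names are illustrative.
function cloudPositions(count) {
  const positions = [];
  for (let i = 0; i < count; i++) {
    positions.push({
      x: Math.random() * 1000 - 500,                 // random X in [-500, 500)
      y: -Math.random() * Math.random() * 200 - 15,  // random Y, always below the scene
      z: i                                           // uniform spacing along Z
    });
  }
  return positions;
}

const positions = cloudPositions(8000);
```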

The official documentation provides a quick-start guide to creating a scene; reading it first will make what follows easier to understand.

Below are the basic concepts of Three.js as I understand them. If you know of better documentation or write-ups, pointers are welcome.

Scene

A scene is a space that holds what we want to render. The simplest use is to add a mesh to the scene and render it.

const scene = new THREE.Scene();
// Other code...
// Add the object to the scene
scene.add(mesh);
renderer.render(scene, camera);

Here are the coordinate conventions in the scene: the origin is at the center of the canvas, and the Z axis is perpendicular to the X and Y axes, with its positive direction pointing toward the viewer. In the demo below I rotated the line along the Z axis slightly, since otherwise it would be invisible (pointing straight at the camera):

Code:

const scene = new THREE.Scene();

const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.set(0, 0, 100);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Line segment 1, red, from the origin to x = 40
const points = [];
points.push(new THREE.Vector3(0, 0, 0));
points.push(new THREE.Vector3(40, 0, 0));
const geometry1 = new THREE.BufferGeometry().setFromPoints(points);
const material1 = new THREE.LineBasicMaterial({ color: 'red' });
const line1 = new THREE.Line(geometry1, material1);

// Line segment 2, blue, from the origin to y = 40
points.length = 0;
points.push(new THREE.Vector3(0, 0, 0));
points.push(new THREE.Vector3(0, 40, 0));
const geometry2 = new THREE.BufferGeometry().setFromPoints(points);
const material2 = new THREE.LineBasicMaterial({ color: 'blue' });
const line2 = new THREE.Line(geometry2, material2);

// Line segment 3, green, from the origin to z = 40
points.length = 0;
points.push(new THREE.Vector3(0, 0, 0));
points.push(new THREE.Vector3(0, 0, 40));
const geometry3 = new THREE.BufferGeometry().setFromPoints(points);
const material3 = new THREE.LineBasicMaterial({ color: 'green' });
const line3 = new THREE.Line(geometry3, material3);
// Rotate it slightly, otherwise the line along the Z axis is invisible
line3.rotateX(Math.PI / 8);
line3.rotateY(-Math.PI / 8);

scene.add(line1, line2, line3);

renderer.render(scene, camera);

The camera

For objects in the scene to be rendered, the camera must be able to "see" them. From the coordinate-system diagram above, it's clear that the same object viewed from different camera angles produces a different picture. The most commonly used camera, and the one used here, is the perspective camera, which mimics human vision: distant objects appear smaller. It is what makes flying through the clouds feel convincing.

// Initialize the camera
camera = new THREE.PerspectiveCamera(70, pageWidth / pageHeight, 1, 1000);

// Finally, the scene is rendered with the camera, and we can see the objects in it
renderer.render(scene, camera);

The material

Materials are easy to understand. In the official quick-start example, the cube is colored with MeshBasicMaterial. A material is used by combining it with a geometry to create a mesh. Here we use a more complex shader material with a cloud texture.

// Cloud texture material
const material = new THREE.ShaderMaterial({
  // These values are passed to the shaders as uniforms
  uniforms: {
    map: {
      type: 't', value: texture
    },
    fogColor: {
      type: 'c', value: fog.color
    },
    fogNear: {
      type: 'f', value: fog.near
    },
    fogFar: {
      type: 'f', value: fog.far
    }
  },
  vertexShader: vShader,
  fragmentShader: fShader,
  transparent: true
});

Geometry and meshes

Three.js ships with many built-in geometries, all of which extend the base class BufferGeometry.

Geometries can be merged: here, many cloned plane geometries, each shifted to a different position, are merged to form one large cloud.

At first I conflated geometries and meshes, but they are distinct concepts: a mesh is what you get by combining a geometry with a material, and it is the mesh that goes into the scene.

// Combine the above shapes and materials to create a mesh
mesh = new THREE.Mesh(mergedGeometry, material);

Rendering

Rendering the scene with the camera produces a canvas in the target element. For a static scene, that's all there is to it; for an animated scene, the browser's native requestAnimationFrame function is used.

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}

The code above is a render loop. On a typical screen it runs at 60 Hz, and on high-refresh-rate screens it runs faster, giving users a smoother experience without our having to manage timing with setInterval. It also pauses when the user switches to another tab, so it doesn't waste processor resources or battery life.
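Because the loop's frequency varies with the screen, any movement inside it should be driven by elapsed time rather than frame count, or the camera would fly twice as fast on a 120 Hz screen. A minimal sketch (with made-up numbers; `startZ`, `depth`, and `speed` are illustrative, not the activity's actual values):

```javascript
// Compute the camera's Z position from elapsed time, not frame count,
// so the flight speed is identical at 60 Hz and 144 Hz.
// The modulo wraps the position so the flight loops forever.
function cameraZ(elapsedMs, startZ, depth, speed) {
  return startZ - ((elapsedMs * speed) % depth);
}

// In a real loop this would feed camera.position.z each frame, e.g.:
// function animate(now) {
//   requestAnimationFrame(animate);
//   camera.position.z = cameraZ(now, 8000, 8000, 0.03);
//   renderer.render(scene, camera);
// }
```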

Behind the scenes

The process was interesting, but also rather tortuous.

I pulled down the front-end code of the "Your Dominant Personality Color" activity, but the code for the cloud effect was minified and unreadable.

What to do? I went looking for an official Three.js example. After searching for a long time, the closest I could find was the following:

Eventually, after more searching, I found this fly-through-clouds effect in the Three.js discussion forum; it is an example written by the author of Three.js a long time ago.

With the source of the cloud effect in hand, I suspect imYZF drew on this very example.

The version of Three.js in that source is quite dated: it uses r55, while the latest is r131. Across that gap, some of the classes and APIs it relies on have been removed.

THREE.Geometry

First, this class no longer exists in the latest version. It was used to merge many plane shapes into a single geometry. In the code below (from r55), an empty Geometry is created along with a plane mesh; on each iteration the mesh's position, rotation, and scale are adjusted, and the mesh is merged into the Geometry.

// Initialize an empty base geometry
geometry = new THREE.Geometry();
// Initialize a 64*64 plane
var plane = new THREE.Mesh(new THREE.PlaneGeometry(64, 64));

for (var i = 0; i < 8000; i++) {
  // Adjust the plane's position, rotation angle, scale, etc.
  plane.position.x = Math.random() * 1000 - 500;
  plane.position.y = -Math.random() * Math.random() * 200 - 15;
  plane.position.z = i;
  plane.rotation.z = Math.random() * Math.PI;
  plane.scale.x = plane.scale.y = Math.random() * Math.random() * 1.5 + 0.5;
  // Merge the plane into the base geometry
  THREE.GeometryUtils.merge(geometry, plane);
}

Checking the latest documentation, I found that the base class BufferGeometry provides a clone method for every geometry, so plane geometries can naturally be cloned as well.

// A plane geometry
const geometry = new THREE.PlaneGeometry(64, 64);
const geometries = [];

for (var i = 0; i < CloudCount; i++) {
  const instanceGeometry = geometry.clone();

  // Shift each clone by random amounts so the stack of clouds is different every time
  // X axis: random offset, balanced later by adjusting the camera position
  // Y axis: always negative, so the clouds sit below the scene
  // Z axis: current cloud index * the Z-depth of each cloud
  instanceGeometry.translate(Math.random() * RandomPositionX, -Math.random() * RandomPositionY, i * perCloudZ);

  geometries.push(instanceGeometry);
}

// Merge these geometries into one
const mergedGeometry = BufferGeometryUtils.mergeBufferGeometries(geometries);

GeometryUtils.merge

The old code relies on an API that is central to building the cloud, merging a mesh into a geometry, but it is no longer available in the latest Three.js.

// Merge the plane mesh into the base geometry
THREE.GeometryUtils.merge(geometry, plane);

Checking the latest documentation, I found that an array of geometries can now be merged in a single call, which I think reads much better than the old approach. The old code repeatedly merges the same plane into a base geometry; the new code merges the whole array of planes into a new geometry at once.

// Merge these geometries into one
const mergedGeometry = BufferGeometryUtils.mergeBufferGeometries(geometries);

Shaders

The original shader code was written in GLSL (OpenGL Shading Language), embedded in script tags:

// The original
<script id="vs" type="x-shader/x-vertex">
  varying vec2 vUv;
  void main()
  {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
  }
</script>

<script id="fs" type="x-shader/x-fragment">
   uniform sampler2D map;
   uniform vec3 fogColor;
   uniform float fogNear;
   uniform float fogFar;
   varying vec2 vUv;
   void main()
   {
       float depth = gl_FragCoord.z / gl_FragCoord.w;
       float fogFactor = smoothstep( fogNear, fogFar, depth );
       gl_FragColor = texture2D(map, vUv );
       gl_FragColor.w *= pow( gl_FragCoord.z, 20.0 );
       gl_FragColor = mix( gl_FragColor, vec4( fogColor, gl_FragColor.w ), fogFactor );
  }
</script>

Later, I saw several places use template strings instead:

const vShader = `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

As for the vertex and fragment shader code itself, I honestly don't understand it yet, so for now I have copied it verbatim.
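That said, the fog math in the fragment shader can be mirrored in plain JavaScript to get a feel for what it does: smoothstep maps the fragment's depth to a 0..1 fog factor between fogNear and fogFar, and mix then blends the cloud color toward the fog color by that factor. This is my own port for illustration, not code from the activity:

```javascript
// Plain-JS equivalents of the GLSL built-ins used by the fragment shader above.

function clamp(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi);
}

// GLSL smoothstep: 0 before edge0, 1 after edge1, smooth Hermite curve between
function smoothstep(edge0, edge1, x) {
  const t = clamp((x - edge0) / (edge1 - edge0), 0, 1);
  return t * t * (3 - 2 * t);
}

// GLSL mix: linear interpolation between a and b by factor t
function mix(a, b, t) {
  return a * (1 - t) + b * t;
}
```

So a fragment closer than fogNear keeps its texture color untouched, one beyond fogFar is pure fog color, and everything in between fades smoothly, which is what gives the distant clouds their hazy look.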

The source code

Finally, here is the source code. Interested readers are welcome to take a look, star it, and make suggestions.