Preface

The front-end world is vast. Rather than rehashing the hundred ways to write JavaScript or the fields of the seven-layer network model, it is better to explore something new.

In the world of data visualization, I get to apply linear algebra, enjoy the fun of graphics, and even write small games outside of work. Rows of data sitting in a database may only mean something to specialists; presenting that data effectively is the point of data visualization, because it lets far more people understand the data's value.

This series will be easier to follow if you already have a rough idea of the basic elements of a 3D scene, such as the camera, lighting, the scene itself, and the render/animation loop. If not, that is fine too; the details can be picked up along the way.

Tool encapsulation

This article is mainly about wrapping Three.js. But Three.js already wraps WebGL, so why wrap it again? It is true that Three.js encapsulates WebGL so well that many effects need no handwritten vector or matrix math. Once you get into real development, though, you will find the code gets messy and hard to maintain; nobody wants to wade through dozens or hundreds of lines of initialization code every time. Some of it also runs against normal usage habits (for example, needing the raycasting or color-mapping machinery just to implement an ordinary pick interaction is not intuitive). To organize the code better, and to understand Three.js more deeply along the way, it is worth generalizing a few common operations.

The result and how to use it

const threeTool = new ThreeTool({
    canvas: document.getElementById("canvasFrame") as HTMLCanvasElement,
    container: document.getElementById("canvasWrap") as HTMLElement,
});

const geometry = new THREE.BoxGeometry(100, 100, 100);
const material = new THREE.MeshPhongMaterial({ color: 0x33bb77 });
const cube = new THREE.Mesh(geometry, material);

threeTool.scene.add(cube);

threeTool.continuousRender((time) => {
    cube.rotation.x = time;
});
<div id="canvasWrap">
    <canvas id="canvasFrame"></canvas>
</div>

It works right out of the box: you don't have to worry about lighting, the camera, or canvas sizing. With just a few lines of code you can see a cube in the scene rotating around the X axis. Note that the utility class must be initialized after the DOM has loaded.

Environment setup

npx create-react-app my-app --template typescript
cd my-app
npm i three @types/three @tweenjs/tween.js
npm start
import * as THREE from 'three';
// ...

Let's take a look at what the Three.js utility class needs.

Elements

This is an example structure diagram of a Three.js scene, summarizing the relationships between the various elements. Those relationships are covered in the official documentation and in plenty of other places, so they will not be repeated here.

As you can see, the camera, scene, lighting, renderer, and canvas are all essential. We can declare the related properties first and then initialize them in the constructor.

// ThreeTool.ts
class ThreeTool {
	// Camera
	public camera: PerspectiveCamera;
	// Directional light
	public directionalLight: DirectionalLight;
	// Scene
	public scene: Scene;
	// Renderer
	public renderer: WebGLRenderer;
	// Canvas
	public canvas: HTMLCanvasElement;
	// Canvas container
	public container: HTMLElement;

	// ...
}

Initialization

One thing worth explaining: the WebGL context is created from a canvas element, so the tool needs a canvas instance.

// ...
constructor(threeToolParams: { canvas: HTMLCanvasElement; container: HTMLElement }) {
	const { canvas, container } = threeToolParams;
	this.canvas = canvas;
	this.container = container;
	this.camera = this.initCamera();
	this.scene = this.initScene();
	this.directionalLight = this.initDirectionalLight();
	this.renderer = this.initRenderer({ canvas });
	this.scene.add(this.directionalLight);

	this.renderer.render(this.scene, this.camera);
}
// ...

Except for the camera, everything else is straightforward initialization.

For example:

public initScene(): Scene {
	const scene = new THREE.Scene();
	return scene;
}

public initDirectionalLight(color: Color = new Color(0xffffff), intensity = 1): DirectionalLight {
	const light = new THREE.DirectionalLight(color, intensity);
	light.position.set(1000, 1000, 1000);
	return light;
}

public initRenderer(rendererParams: { canvas: HTMLCanvasElement; clearColor?: Color }): WebGLRenderer {
	const { canvas, clearColor = new Color(0xffffff) } = rendererParams;
	const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
	renderer.setClearColor(clearColor);
	return renderer;
}

Layout and sizing

In Three.js, a model's units are relative. For a cube with side length 100, const geometry = new THREE.BoxGeometry(100, 100, 100), the 100 could equally be centimeters, meters, or kilometers. Most of the time, though, it is convenient to match the screen, so that 100 means 100 pixels; this makes it far more intuitive to size geometry in the scene.

We can achieve this by adjusting the perspective camera's fov. The required field of view follows from simple trigonometry.

// ...
public initCamera(cameraParams = { aspect: 2, near: 0.1, far: 2000 }): PerspectiveCamera {
	const { aspect, near, far } = cameraParams;
	const position = new THREE.Vector3(100, 100, 600);
	const Rad2Deg = 360 / (Math.PI * 2);
	// atan returns radians; basing the angle on the canvas height makes world units equal screen pixels
	const fovRad = 2 * Math.atan(this.canvas.clientHeight / 2 / position.z);
	// Convert to degrees
	const fovDeg = fovRad * Rad2Deg;
	const camera = new THREE.PerspectiveCamera(fovDeg, aspect, near, far);
	camera.position.set(position.x, position.y, position.z);
	return camera;
}
// ...

In other words, given the canvas height (in pixels) and the camera's distance along the Z axis, trigonometry gives the fov of the perspective camera. Since the aspect ratio varies with screen size, a default value is used here and the real aspect ratio is set during rendering.
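As a sanity check, the fov formula can be exercised with plain numbers. The helper below is a standalone sketch of the same trigonometry; the function name and sample values are illustrative, not part of the utility class:

```typescript
// Compute the vertical fov (in degrees) that makes 1 world unit equal
// 1 CSS pixel for objects at distance `cameraZ` from the camera.
function fovForPixelPerfect(canvasHeight: number, cameraZ: number): number {
  const Rad2Deg = 360 / (Math.PI * 2);
  // The visible half-height at the camera's distance must be canvasHeight / 2,
  // so the half-angle is atan((canvasHeight / 2) / cameraZ).
  const fovRad = 2 * Math.atan(canvasHeight / 2 / cameraZ);
  return fovRad * Rad2Deg;
}

// Example: a 600px-tall canvas viewed from z = 600
// gives 2 * atan(0.5), roughly 53.13 degrees.
const fov = fovForPixelPerfect(600, 600);
```

With these numbers a BoxGeometry(100, 100, 100) placed at z = 0 spans about 100 screen pixels, which is exactly the intuition the article describes.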

Debugging

Stats.js is a handy performance monitoring tool. If you would rather not add a dependency, the browser's built-in FPS meter works too: open DevTools, press Cmd/Ctrl + Shift + P, and type "FPS".

// ...
if (mode === 'dev') {
	this.stats = this.initStats(container);
}
// ...
public initStats(container: HTMLElement): Stats {
	const stats = new Stats();
	// Pin the monitoring panel to the bottom-left corner of the container
	stats.dom.style.position = 'absolute';
	stats.dom.style.bottom = '0px';
	stats.dom.style.zIndex = '100';
	container.appendChild(stats.dom);
	return stats;
}
// ...

Interaction

This effect will be covered in more detail in later articles.

Picking is the most common interactive operation. Click, hover, drag, and similar events all need to pick an object first and then run the related callback on it. Even though form-style input usually happens in a popup, you still often need to know which object triggered that popup. Picking is the cornerstone of 3D interaction.

The flow: set up event delegation (listen for pointer events) -> find the picked object -> dispatch the related event on that object.

public initEvent() {
	// Listen for hover events on the container
	this.container.addEventListener('pointermove', (event) => this.throttleTriggerByPointer(event, 'hover'));
	// Listen for click events on the container
	this.container.addEventListener('click', (event) => this.throttleTriggerByPointer(event, 'click'));
}

The handler is throttled so that mouse events do not fire too often.
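Any throttle helper will do here (lodash's, for instance). For reference, a minimal time-based throttle might look like the sketch below; the name and the 100ms threshold are illustrative assumptions, not the article's actual implementation:

```typescript
// Minimal leading-edge throttle: invoke `fn` at most once per `wait` ms.
// Calls arriving inside the window are simply dropped.
function throttle<T extends unknown[]>(fn: (...args: T) => void, wait: number) {
  let last = 0;
  return (...args: T) => {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn(...args);
    }
  };
}

// Usage sketch: wrap the pick handler so pointermove triggers it
// at most once every 100ms.
// this.throttleTriggerByPointer =
//   throttle((e: PointerEvent, type: 'hover' | 'click') => this.triggerByPointer(e, type), 100);
```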

// ...
// Event cache
public _PointerMoveEventCacheObj = new Map<string | number, THREE.Object3D>();
// ...
public triggerByPointer(
	event: PointerEvent | MouseEvent,
	type: 'hover' | 'click',
) {
	const object3D = this.getObject3D(event);
	if (object3D) {
		// Dispatch the related event
		if (type === 'hover') {
			object3D.dispatchEvent({ type: 'mouseenter' });
		} else {
			object3D.dispatchEvent({ type });
		}
		this._PointerMoveEventCacheObj.set(object3D.id, object3D);
	} else {
		this._PointerMoveEventCacheObj.forEach((item) => {
			if (type === 'hover') {
				item.dispatchEvent({ type: 'mouseleave' });
			}
		});
		this._PointerMoveEventCacheObj.clear();
	}
}
// ...

When using it, just add the relevant callback to the object:

mesh.addEventListener('click', () => {
	// do something
});

There are roughly two ways to pick objects, raycaster picking and GPU picking; either one will do.

Raycaster-based picking

Ray tracing is a rendering technique that produces realistic images. Here, though, we only borrow one piece of it: shoot a ray from the camera through the mouse position on the canvas, along the viewing direction (lookAt). The object we want to pick is the first object that ray hits as it travels in a straight line. In mathematical terms, construct a line through the camera along that direction and find the first object (triangle face) it intersects; this is only a rough description. The full ray tracing algorithm also involves recursively spawned rays, many rays rather than one, and bounding-box optimizations.

Fortunately, Three.js provides the Raycaster class, which encapsulates all of these computations; we only need to read the results.

public getObject3D(event: PointerEvent): THREE.Object3D | null {
	const pointer = new THREE.Vector2();
	// Convert the mouse position to normalized device coordinates (-1 to +1)
	pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
	pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;
	this.raycaster.setFromCamera(pointer, this.camera);
	const intersects = this.raycaster.intersectObject(this.scene, true);
	if (intersects.length > 0) {
		const res = intersects.filter((item) => item && item.object)[0];
		if (res && res.object) {
			return res.object;
		}
		return null;
	} else {
		return null;
	}
}

GPU picking based on color mapping

The idea is to render a second, hidden layer alongside the scene: a color map in which each color value corresponds one-to-one with an object in the scene, and which is updated whenever the scene changes. When the mouse picks up a certain color, the corresponding object is looked up by that color and its event is dispatched. The idea is similar to depth testing with a depth map, or shadow mapping: when one dimension is not enough, add another auxiliary layer to store the extra information.

See the relevant references for a concrete implementation.
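To illustrate the core of the idea without a full implementation: GPU picking hinges on a reversible mapping between an object id and an RGB color. A sketch of that mapping, packing the id into 24 bits (the function names are illustrative assumptions):

```typescript
// Pack an object id into an RGB triple: 24 bits gives every id below
// 2^24 a unique color in the hidden picking layer.
function idToColor(id: number): [number, number, number] {
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];
}

// Recover the id from the pixel color read back under the mouse,
// e.g. via WebGLRenderer.readRenderTargetPixels.
function colorToId(r: number, g: number, b: number): number {
  return (r << 16) | (g << 8) | b;
}
```

Each object is drawn into the hidden layer with a flat material of color idToColor(object.id); reading one pixel under the cursor and applying colorToId yields the picked object directly, with no ray math at all.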

Model loading

Three.js already has good wrappers for parsing and loading the various model file formats; parsing a model essentially boils down to the relationships between points and the faces those points form.

Continuous rendering versus on-demand rendering

The two modes suit different scenarios; neither is superior. Put simply, continuous rendering fits scenes where objects keep moving or shaders animate on their own, while on-demand rendering fits scenes that only change after user input. Either way, the render loop is driven by the requestAnimationFrame function.

For ease of use, OrbitControls, the camera orbit control helper, is used here; it encapsulates the camera's pan and rotate operations.

Continuous rendering

// ...
public resizeRendererToDisplaySize(renderer: WebGLRenderer, isUseScreenRatio = true) {
	const canvas = renderer.domElement;
	// Ratio of physical pixels to device-independent pixels:
	// device-independent pixels * devicePixelRatio = physical pixels
	const pixelRatio = isUseScreenRatio ? window.devicePixelRatio : 1;
	// Render at the screen resolution
	const width = (canvas.clientWidth * pixelRatio) | 0;
	const height = (canvas.clientHeight * pixelRatio) | 0;
	const needResize = canvas.width !== width || canvas.height !== height;
	if (needResize) {
		renderer.setSize(width, height, false);
	}
	return needResize;
}

// Continuous render mode
public continuousRender(callback?: (time: number) => void) {
	const render = (time: number) => {
		if (this.resizeRendererToDisplaySize(this.renderer)) {
			const canvas = this.renderer.domElement;
			this.camera.aspect = canvas.clientWidth / canvas.clientHeight;
			this.camera.updateProjectionMatrix();
		}
		this.renderer.render(this.scene, this.camera);
		// Convert the time unit to seconds
		const t = time * 0.001;
		callback && callback(t);
		if (this.mode === 'dev') {
			this.stats?.update();
		}
		requestAnimationFrame(render);
	};
	render(0);
}
// ...

Note the resizeRendererToDisplaySize method: the screen size may change at any time, so the canvas's drawing buffer size and the camera's aspect ratio both need to be adjusted to match.

On-demand rendering

// ...
// On-demand render mode
public ondemandRender(callback?: () => void) {
	let renderRequested = false;
	const render = () => {
		renderRequested = false;
		if (this.resizeRendererToDisplaySize(this.renderer)) {
			const canvas = this.renderer.domElement;
			this.camera.aspect = canvas.clientWidth / canvas.clientHeight;
			this.camera.updateProjectionMatrix();
		}
		this.controls.enableDamping = true;
		this.controls.update();
		callback && callback();
		this.renderer.render(this.scene, this.camera);
	};
	render();
	const requestRenderIfNotRequested = () => {
		if (!renderRequested) {
			renderRequested = true;
			requestAnimationFrame(render);
		}
	};
	this.controls.addEventListener('change', requestRenderIfNotRequested);
	window.addEventListener('resize', requestRenderIfNotRequested);
}
// ...

If you watch the FPS indicator closely in both modes, you will see that continuous rendering renders constantly from the start, while on-demand rendering only renders when something changes.

Why does on-demand rendering need the renderRequested flag? Because this.controls.enableDamping = true gives the controls a damping (inertia) effect: controls.update() inside render fires another 'change' event, so without a guard, the change event and the render would keep triggering each other indefinitely. The flag ensures that only the event side can schedule a frame, and only one at a time.

Wrapping up

At this point we can have fun developing with Three.js. Upcoming articles will cover geometry, how to use shaders, and various cool effects.

Source code

Original content takes effort; please contact the author before reprinting. Likes are what motivate the author to open-source the code and keep updating. The open-source address for this utility class and related code is on the way.

Upcoming articles in the series

  • “Shaders in Three.js” (in final review)
    • CPU rendering
    • GPU rendering
  • “Shader built-in functions and effects” (in draft)
    • Various functions and special effects
  • “Shader lighting” (in draft)
    • Per-face (flat) shading
    • Per-vertex shading (Gouraud shading)
    • Per-pixel shading (Phong lighting model)
    • Blinn-Phong lighting model
    • Fresnel effect
    • Toon shading
  • “Local coordinates, world coordinates, and projection coordinates” (in draft)
    • Local coordinates
    • World coordinates
    • Projection coordinates
    • Matrix transformations
  • “Writing a particle-effect transform plugin” (in draft)
  • The GitHub homepage Earth effect
  • About D3.js
  • Visualizing a data diagram
  • Writing a little jumping game
    • Scene generation
    • Collision detection
    • Game logic