
Introduction to three.js

Three.js, WebGL and OpenGL

When we talk about three.js, we have to talk about OpenGL and WebGL. OpenGL, which many of you have probably heard of, is the most widely used open, cross-platform graphics standard. WebGL is a 3D graphics standard for the Web designed on the basis of OpenGL. It provides a set of JavaScript APIs through which rendering can be hardware-accelerated, giving high performance. Three.js is a third-party WebGL library written in JavaScript: by encapsulating and simplifying the WebGL interfaces it forms an easy-to-use graphics library.

WebGL vs. three.js

From the introduction above, we know that both WebGL and three.js can be used to develop 3D graphics on the Web. The question is: if we have WebGL, why do we need three.js? The reason is that it is difficult for a front-end engineer to get into WebGL in a short time. WebGL has a fairly high barrier to entry, and computer graphics demands a fair amount of mathematics. A front-end programmer may be comfortable with analytic geometry, but few are equally comfortable with linear algebra, and graphics also emphasizes the geometric meaning of matrix operations, which is rarely taught. Three.js wraps the interfaces provided by WebGL very well, hiding many details and greatly reducing the learning cost, while losing little of WebGL's flexibility. It is therefore recommended to start with three.js; after a short period of study it will let you handle most requirement scenarios.

Some concepts in three.js

To display 3D objects on a screen, the general idea goes like this:

  1. Create a three-dimensional space, which three.js calls a Scene.
  2. Choose an observation point and set the viewing direction and angle; three.js calls this a Camera.
  3. Add objects to observe in the scene. Three.js offers many kinds of objects, such as Mesh, Group, and Line, all of which inherit from the Object3D class.
  4. Finally, render everything to the screen; this is where the three.js Renderer comes in.

Let’s take a closer look at these concepts.

Scene

A spatial container for all objects, corresponding to a three-dimensional space in the real world. Creating a Scene is as simple as instantiating the Scene class.
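For example (a minimal sketch, assuming THREE has already been loaded as in the demo later in this article):

const scene = new THREE.Scene();
scene.background = new THREE.Color( 0x000000 ); // optional: give the space a background color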

Camera

A camera, that makes sense: "what you see is what you get." Even for a materialist, something has to be seen before it can be felt. The camera is our eye in the scene, and in order to see the world we need to describe where things are. Describing the position of an object requires a coordinate system; the common ones are left-handed and right-handed.

Three.js uses a right-handed coordinate system.

There are four kinds of cameras in three.js: CubeCamera, OrthographicCamera, PerspectiveCamera, and StereoCamera, all of which inherit from the Camera class. The two most commonly used are THREE.OrthographicCamera and THREE.PerspectiveCamera.

3D projection

Anyone who has studied drawing will recognize these two projections at once; they correspond to OrthographicCamera and PerspectiveCamera respectively.

In an orthographic projection, the light reflected from the object is projected onto the screen along parallel lines, so an object appears the same size whether it is near or far; this is used for rendering 2D effects and UI elements. A perspective projection matches the way we normally perceive things and is commonly used for 3D scenes.

View volume

The view volume is an important concept: it is the region of space whose contents appear in the image. Simply put, the view volume is a piece of geometry; only objects inside it are visible to us, and objects outside it are clipped away (what you see is what you get). This avoids unnecessary computation. By changing the view volume, we get different cameras.

The view volume of an OrthographicCamera is a cuboid, and its constructor is OrthographicCamera(left, right, top, bottom, near, far). If the camera is regarded as a point, left is the distance from the camera to the left plane of the view volume, and the other parameters work the same way, so the six parameters define the positions of the six faces of the cuboid. We can think of objects in the view volume as being projected in parallel onto the near plane, and the image on the near plane is then rendered to the screen.

The view volume of a PerspectiveCamera is a frustum (a pyramid with its top cut off), and its constructor is PerspectiveCamera(fov, aspect, near, far). fov (field of view) is the vertical viewing angle, i.e. the angle between the top and bottom planes. aspect is the aspect ratio of the near plane. Together with the near-plane and far-plane distances, these uniquely determine the view volume.
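As a small sketch of the two constructors just described (the numeric values below are only illustrative assumptions, not values used later in this article):

// Orthographic camera: the view volume is a cuboid bounded by six planes
const orthoCamera = new THREE.OrthographicCamera( -10, 10, 10, -10, 0.1, 1000 );

// Perspective camera: the view volume is a frustum defined by the vertical
// field of view (in degrees), the aspect ratio of the near plane, and the
// near and far plane distances
const perspCamera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.1, 1000 );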

Objects

Objects are the things placed in three-dimensional space. Three.js provides many types of objects, all of which inherit from the Object3D class; we look at some of them below.
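Because everything inherits from Object3D, every object shares the same transform properties (position, rotation, scale) and objects can be nested with add(). A minimal sketch, assuming a mesh has already been created elsewhere:

const group = new THREE.Group();     // a Group is an Object3D used purely as a container
group.add( mesh );                   // `mesh` is assumed to exist already
group.position.set( 0, 1, 0 );       // move the whole group
group.rotation.y = Math.PI / 4;      // rotate it around the Y axis
group.scale.set( 2, 2, 2 );          // scale all of its children uniformly
scene.add( group );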

Mesh

Sometimes, if you cannot notice something, it may as well not be there. Computer graphics takes full advantage of this. In the computer world, an arc is actually a finite number of points connected by a finite number of line segments. As the number of segments increases, each one gets shorter, and once you can no longer tell that they are segments, a smooth arc appears.

A computer's three-dimensional model works the same way, except the line segments become flat faces, usually described as a grid of triangles. We call this a Mesh model.

Geometry

Three.js provides many kinds of geometry: cube, plane, sphere, circle, cylinder, truncated cone (frustum), and other basic shapes. A geometry describes the shape of an object by storing the model's set of points and the relationships between them (which points form which triangle). We can therefore also construct shapes by specifying the position of each point ourselves, or build more complex shapes by importing external model files.
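For instance, the built-in geometry classes can be instantiated directly, and a custom shape can be described point by point (the sizes below are made-up example values):

// Built-in shapes
const box = new THREE.BoxGeometry( 1, 1, 1 );
const sphere = new THREE.SphereGeometry( 0.5, 32, 16 );
const cylinder = new THREE.CylinderGeometry( 0.5, 0.5, 2 );

// A custom triangle built from explicit vertex positions
const vertices = new Float32Array( [
     0, 1, 0,   // top
    -1, 0, 0,   // bottom left
     1, 0, 0    // bottom right
] );
const custom = new THREE.BufferGeometry();
custom.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );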



Material

By material, I mean not just the texture of an object, but the collection of all the visible properties of an object’s surface other than its shape, such as color, texture, smoothness, transparency, reflectivity, refractive index, and luminescence.

When we talk about materials, we also need to talk about maps and textures.

Texture was already mentioned above as one of the visible properties of a surface.

A map is literally a "pasted picture": it consists of a picture plus the information about where on the surface that picture should be applied.

A texture, in essence, is just the picture itself.

Three.js offers a variety of materials to choose from, and you can freely choose diffuse/specular materials.
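As a rough sketch of those choices: MeshLambertMaterial is a typical diffuse material, MeshPhongMaterial a typical specular one, and an image can be attached to either through the map property (the file path below is only an example):

// Diffuse (matte) material with a color map
const diffuse = new THREE.MeshLambertMaterial( {
    color: 0xffffff,
    map: new THREE.TextureLoader().load( 'textures/example.jpg' )  // example path
} );

// Specular (shiny) material with highlights
const specular = new THREE.MeshPhongMaterial( { color: 0xffffff, shininess: 60 } );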



Light

And God said, Let there be light.

Light and shadow effects can make the picture richer.

Three.js provides multiple light sources, including AmbientLight, PointLight, SpotLight, DirectionalLight, HemisphereLight, and other light sources.

Just add the desired light source to the scene.
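For example, a soft ambient light plus a point light could be added like this (positions and intensities are arbitrary example values):

const ambient = new THREE.AmbientLight( 0x404040 );       // dim, even light from every direction
scene.add( ambient );

const point = new THREE.PointLight( 0xffffff, 1, 100 );   // color, intensity, distance
point.position.set( 10, 10, 10 );
scene.add( point );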

Implement a demo

In order to really get your scene to display with three.js, we need the following objects: scene, camera, and renderer so that we can render the scene through the camera.

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

There are several different cameras in three.js, but in this case we use PerspectiveCamera.

The first parameter is the field of view (fov). The field of view is the extent of the scene that is visible on the display at any given moment. It is measured in degrees (not radians).

The second parameter is the aspect ratio: the width of the render area divided by its height. If you use anything else, the image will look squashed, the same way an old movie looks on a widescreen TV.

The next two parameters are the near and far clipping planes. Parts of objects that are farther from the camera than far, or closer than near, will not be rendered. You don't need to worry about these values now, but you may want to tune them later for better rendering performance.

Next comes the renderer. This is where the magic happens. In addition to the WebGL renderer we use here, three.js also provides a few other renderers that can be used as fallbacks for users whose browsers are too old or do not support WebGL for some other reason.

In addition to creating the renderer instance, we also need to set the size at which it renders our application. For example, we can use the width and height of the area we want the rendered scene to fill; here we set the renderer's size to the width and height of the browser window. For performance-sensitive applications you can pass smaller values to setSize, such as window.innerWidth/2 and window.innerHeight/2, which makes the application render the scene at half its width and height.

If you want to keep the size of your application but render it at a lower resolution, you can pass false as the third argument (updateStyle) to setSize. For example, assuming your <canvas> element has 100% width and height, calling setSize(window.innerWidth/2, window.innerHeight/2, false) will render your application at half resolution.
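For example, under the assumption in the paragraph above, rendering at half resolution while keeping the element stretched to the full window would look like this:

// keep the element at full size, but render only half the pixels in each direction
renderer.setSize( window.innerWidth / 2, window.innerHeight / 2, false );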

As an important last step, we add the renderer’s DOM element (renderer.domElement) to our HTML document. This is the element that the renderer uses to show us the scene.

const geometry = new THREE.BoxGeometry(); 
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } ); 
const cube = new THREE.Mesh( geometry, material ); 
scene.add( cube );  
camera.position.z = 5;

To create a cube, we need a BoxGeometry (Cube) object. This object contains all of the vertices and faces in a cube. We will explore this more in the future.

Next, we need a material to give the cube a color. Three.js comes with several materials; here we use MeshBasicMaterial. All materials take an object of properties that will be applied to them. To keep things simple, we only set a color attribute with a value of 0x00ff00, which is green. This is the same hex color format used in CSS or Photoshop.

Third, we need a Mesh. A mesh contains a geometry and the material applied to it. We can put the mesh object directly into our scene and move it around freely.

By default, when we call scene.add(), the object is added at the coordinates (0, 0, 0). This would put the camera and the cube at the same point. To avoid this, we simply move the camera out a little.

Render the scene

Now, if you copy the code you wrote earlier into the HTML file, you won’t see anything on the page. That’s because we haven’t actually rendered it yet. To do this, we need to use something called a render loop or an animate loop.

function animate() {
    requestAnimationFrame( animate );
    renderer.render( scene, camera );
}
animate();

Here we create a loop that makes the renderer draw the scene every time the screen is refreshed (on most screens, about 60 times per second). If you are new to browser game development you might ask, "why don't we just use setInterval?" We could, but requestAnimationFrame has a number of advantages. Perhaps the most important is that it pauses when the user switches to another tab, so it doesn't waste valuable processor resources or battery life.

Make the cube move

Before you start, if you have written the above code into the file you created, you will see a green square. Let’s do something even more interesting — let’s spin it.

Add the following code to the animate() function above the renderer.render call:

cube.rotation.x += 0.01;
cube.rotation.y += 0.01;

This code is executed every frame (normally 60 times per second), which gives the cube a nice looking rotation animation. Basically, when the application is running, if you want to move or change anything in the scene, you have to go through this animation loop. Of course, you can call other functions within the animation loop, so you don’t have to write hundreds of lines of animate functions.

The result

Here is the complete code

<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My first three.js app</title>
    <style> body { margin: 0; } </style>
  </head>
  <body>
    <script src="js/three.js"></script>
    <script>
      const scene = new THREE.Scene();
      const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
      const renderer = new THREE.WebGLRenderer();
      renderer.setSize( window.innerWidth, window.innerHeight );
      document.body.appendChild( renderer.domElement );

      const geometry = new THREE.BoxGeometry();
      const material = new THREE.MeshBasicMaterial( { color: 0xfc5603 } );
      const cube = new THREE.Mesh( geometry, material );
      scene.add( cube );
      camera.position.z = 5;

      const animate = function () {
        requestAnimationFrame( animate );
        cube.rotation.x += 0.01;
        cube.rotation.y += 0.01;
        renderer.render( scene, camera );
      };
      animate();
    </script>
  </body>
</html>

Getting started

Install with npm

npm install --save three

// Method 1: import the entire three.js core library
import * as THREE from 'three';
const scene = new THREE.Scene();

// Method 2: import only the parts you need
import { Scene } from 'three';
const scene = new Scene();

Install from a CDN or static host

<script type="module">
  // Find the latest version by visiting https://cdn.skypack.dev/three.
  import * as THREE from 'https://cdn.skypack.dev/three@<version>';
  const scene = new THREE.Scene();
</script>

WebGL compatibility check

Import github.com/mrdoob/thre… into your file and run the check before attempting to render anything.

if (WEBGL.isWebGLAvailable()) { 
// Initiate function or other initializations here
animate();
} else {
const warning = WEBGL.getWebGLErrorMessage(); 
document.getElementById('container').appendChild(warning);
}

Run three.js locally

Node.js server

Node.js has a simple HTTP server package. To install it, do the following:

npm install http-server -g

To run from a local directory, run:

http-server . -p 8000

Basic API

Line drawing

Suppose you want to draw a line or a circle rather than a wireframe or a Mesh. The first thing we need to do is set up the renderer, the Scene, and the Camera.

Here’s the code we’ll use:

const renderer = new THREE.WebGLRenderer(); 
renderer.setSize( window.innerWidth, 
window.innerHeight ); 
document.body.appendChild( renderer.domElement ); 
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 500 ); 
camera.position.set( 0, 0, 100 ); 
camera.lookAt( 0, 0, 0 );

const scene = new THREE.Scene();

The next thing we’re going to do is define a material. For lines, we can only use LineBasicMaterial or LineDashedMaterial.

//create a blue LineBasicMaterial 
const material = new THREE.LineBasicMaterial( { color: 0x0000ff } );

Once the material is defined, we need a geometry with some vertices.

const points = [];
points.push( new THREE.Vector3( - 10, 0, 0 ) ); 
points.push( new THREE.Vector3( 0, 10, 0 ) );
points.push( new THREE.Vector3( 10, 0, 0 ) ); 
const geometry = new THREE.BufferGeometry().setFromPoints( points );

Note that the line is drawn between each successive pair of vertices, not between the first and last vertices (the line is not closed).

Now that we have points for two line segments and a material, we can put them together to form a line.

const line = new THREE.Line( geometry, material );

All that’s left is to add it to the scene and call the render function.

scene.add( line ); renderer.render( scene, camera );

You should now see an arrow made of two blue lines, pointing upwards.

Creating text

Load a Font, build a text geometry from it, and add the resulting mesh to the scene.

FileLoader

A low-level class that uses XMLHttpRequest to load resources and is used internally by most loaders. It can also be used directly to load any file type that does not have a corresponding loader.
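A minimal sketch of using FileLoader directly (the file name here is made up for illustration):

const fileLoader = new THREE.FileLoader();
fileLoader.load(
    'data/example.json',                              // hypothetical resource
    ( data ) => { console.log( 'loaded', data ); },   // onLoad receives the file contents
    undefined,                                        // onProgress (not used here)
    ( err ) => { console.error( err ); }              // onError
);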

Text buffer geometry (TextGeometry)

A class for generating text as a single geometry. It is constructed from a given text string, together with the loaded Font and the settings from its parent class ExtrudeGeometry. See the Font and FontLoader pages for more details.

const loader = new THREE.FontLoader();
loader.load("/js/optimer_regular.typeface.json", function (font) {
    // optimer_regular.typeface.json is the font file
    const new_text = new THREE.TextGeometry("text you want to show", {
        font: font,
        size: 0.5,
        height: 0.3,
        /* Only the basic parameters are set here. Other parameters:
           font          : THREE.Font
           size          : Float, size of the text
           height        : Float, thickness of the extruded text, default 50
           curveSegments : Integer, number of points on the curves of the text, default 12
           bevelEnabled  : Boolean, whether to enable the bevel
           bevelThickness: Float, depth of the bevel
           bevelSize     : Float, how far the bevel extends from the text outline, default 8
           bevelSegments : Integer, number of bevel segments, default 3 */
    });
    const material_text = new THREE.MeshLambertMaterial({ color: 0x9933FF });  // 0x9933FF is a hexadecimal color
    text_1 = new THREE.Mesh(new_text, material_text);  // create the text mesh
    scene.add(text_1);                                  // add the text to the scene
    text_1.position.z = -7.4;
    text_1.position.y = 4;
    text_1.position.x = 2.5;
});

Let's go through setting up the renderer, scene, and camera again; the details were covered above, so we won't repeat them.

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);  // create the camera
const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);        // create the renderer

const spotlight = new THREE.SpotLight(0xFFFFFF);       // define the light source; 0xFFFFFF is white
spotlight.position.set(-15, 10, 0);
scene.add(spotlight);                                  // add the light

// ... the loader code from above goes here

function animate() {
    requestAnimationFrame(animate);
    // move the text until it reaches z = -12
    if (text_1 && text_1.position.z >= -12) {
        text_1.position.z -= 0.01;
    }
    renderer.render(scene, camera);
}
animate();                                             // start the animation loop

Texture

A texture is created from an image, usually via the TextureLoader.load method. The source can be any image format supported by three.js (e.g. PNG, JPG, GIF, DDS) or a video format (e.g. MP4, OGG/OGV).

const texture = new THREE.TextureLoader().load( "textures/water.jpg" ); 
texture.wrapS = THREE.RepeatWrapping; 
texture.wrapT = THREE.RepeatWrapping; 
texture.repeat.set( 4, 4);

Load the 3D model

Currently there are thousands of 3D model formats to choose from, each with a different purpose, feature set, and complexity. Although three.js already provides many import tools, choosing the right file format and workflow can save a lot of time and frustration. Some formats are difficult to work with, inefficient for real-time use, or not yet fully supported.

Recommended workflow

glTF (GL Transmission Format) is recommended. Both the .GLB and .GLTF versions of the format are well supported. Because glTF is focused on delivering assets for rendering at runtime, it is compact to transmit and fast to load. Features include meshes, materials, textures, skins, skeletons, morph targets, animations, lights, and cameras.

Public-domain glTF files can be found on sites such as Sketchfab, and many tools include glTF export capabilities:

  • Blender by the Blender Foundation
  • Substance Painter by Allegorithmic
  • Modo by Foundry
  • Toolbag by Marmoset
  • Houdini by SideFX
  • Cinema 4D by MAXON
  • COLLADA2GLTF by the Khronos Group
  • FBX2GLTF by Facebook
  • OBJ2GLTF by Analytical Graphics Inc

If your favorite tool does not support glTF, consider asking the tool’s author for glTF export functionality, or post your ideas on the glTF Roadmap Thread.

Loading

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

GLTF loader

glTF (GL Transmission Format) is an open format specification for transporting and loading 3D content efficiently. Files are provided in JSON (.gltf) or binary (.glb) format, with external files storing the textures (.jpg, .png) and additional binary data (.bin). A glTF asset can contain one or more scenes, including meshes, materials, textures, skins, skeletons, morph targets, animations, lights, and cameras.

See the GLTFLoader documentation for more details. Once you've included the loader, you are ready to add a model to the scene. Different loaders may have different syntax; when using other formats, see the examples and documentation for that format's loader. For glTF, the usage looks like this:

const loader = new GLTFLoader();
loader.load( 'path/to/model.glb', function ( gltf ) {
    scene.add( gltf.scene );
}, undefined, function ( error ) {
    console.error( error );
} );

Don't stay up too late.