
Preface

It’s the weekend again. My last few articles covered 2D topics such as Canvas and SVG. In July I plan to publish three long-form articles on the three main ways of visual expression on the web: SVG, Canvas, and WebGL. This is the first one on 3D. Here is what you can learn from this article:

  1. Gain a basic understanding of the three.js framework, enough to get started.
  2. Learn how Raycaster in three.js uses the mouse to determine which object is currently selected.
  3. Walk through a simple example of map visualization with three.js.

The choice of 3D framework — three.js

1. Why choose three.js

The three.js website describes it simply as a “JavaScript 3D library.” OpenGL is a cross-platform standard for 2D/3D drawing, and WebGL is an implementation of OpenGL in the browser. Front-end developers can program directly against the WebGL API, but WebGL is a very low-level graphics API: it demands a lot of mathematical and graphics knowledge to complete 3D programming tasks, and the amount of code is huge. Three.js encapsulates WebGL so that front-end developers can do Web 3D development without knowing much math or graphics, which lowers the barrier to entry and greatly improves efficiency. Even if you don’t understand computer graphics, you can get going as long as you understand a few basic concepts of three.js.

The basic element of Threejs — the scene

The definition is as follows:

Scene: a three-dimensional space that serves as the container for everything. Think of the scene as an empty room into which we place objects, cameras, light sources, and so on.

In code, it looks like this:

const scene = new THREE.Scene();

Think of it as a room to which you can add objects: a cube, a rectangle, whatever you like. In fact, the whole object hierarchy in three.js is a tree structure.
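As a minimal sketch (the group and box here are placeholders of my own, just to show the parent/child idea):

// The scene is the root of the tree; everything else hangs somewhere under it
const scene = new THREE.Scene()

// A group is an intermediate node that can hold other objects
const group = new THREE.Group()
scene.add(group)

// A mesh added to the group: scene -> group -> box
const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
)
group.add(box)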

The basics of Threejs – the camera 📷

Camera: three.js needs a camera added to the scene. The camera determines the position, orientation, and angle of view; whatever the camera sees is what we see on the screen. The camera’s position, orientation, and angle can all be adjusted while the program is running.

There are two kinds of cameras in three.js: the orthographic camera 📷 and the perspective camera 📷. I’ll cover each of them, but to understand a camera you first need to understand one concept: the view frustum.

The perspective camera

The view frustum is the space visible to the camera; it looks like a pyramid with the top cut off and is bounded by six clipping planes. The four side planes correspond to the top, bottom, left, and right boundaries of the screen. To keep objects from getting too close to the camera, a near clipping plane is set; to keep objects that are too far away from being rendered, a far clipping plane is set.

Oc is the position of the camera, and the near plane and far plane are marked in the figure. As you can see, whatever lies inside the six faces of the frustum is visible. The factors that determine the size of a perspective camera’s frustum are:

  1. The vertical field-of-view angle of the frustum (marked a in the figure)
  2. The near clipping plane of the frustum (the near plane in the figure)
  3. The far clipping plane of the frustum
  4. The aspect ratio of the frustum, i.e. the ratio of the width to the height of the output image

The corresponding camera in three.js:

const camera = new THREE.PerspectiveCamera( 45, width / height, 1, 1000 );

The biggest characteristic of the perspective camera is that it matches how our eyes observe the world: near objects look big, far objects look small.

The idea behind this is the camera’s projection matrix: what the projection matrix does, roughly speaking, is map the view frustum into a cube. As a result, points near the far plane get scaled down and points near the near plane get scaled up.
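To get a feel for what the frustum parameters mean in practice, here is a small sketch (my own illustration, not from the original article) that computes how large the visible area is at a given distance from a perspective camera:

// For a perspective camera, the visible height at distance d from the camera
// is 2 * d * tan(fov / 2); the visible width is that height times the aspect ratio.
function visibleSizeAtDistance(fovInDegrees, aspect, d) {
  const fov = (fovInDegrees * Math.PI) / 180 // convert to radians
  const height = 2 * d * Math.tan(fov / 2)
  return { width: height * aspect, height }
}

// Example: a 45° camera looking at something 10 units away
// visibleSizeAtDistance(45, 16 / 9, 10) // ≈ { width: 14.73, height: 8.28 }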

The orthographic camera

The characteristic of an orthographic camera is that its view volume is a rectangular box rather than a frustum.

In this projection mode, the size of the object remains the same in the final rendered image, regardless of how close or far it is from the camera.

This is useful for rendering 2D scenes or UI elements. As shown in the figure:

The code in three.js is as follows:

const camera = new THREE.OrthographicCamera( width / -2, width / 2, height / 2, height / -2, 1, 1000 );

With cameras covered, let’s look at how graphics themselves are represented.

The basic element of Threejs — the mesh

In the computer world, an arc is made up of a finite number of points connected by a finite number of line segments. As the number of segments increases, each segment becomes shorter, and once you can no longer tell that they are segments, a smooth arc appears. A 3D model in the computer works the same way, except that the line segments become small planes, usually described as a mesh of triangles. We call this a mesh model.

An arc is approximated by multiple line segments: the more segments, the closer it gets to a true arc. If this is unfamiliar, check out my post “Canvas? I can draw a fireworks 🎇 animation”, where a Bezier curve is fitted with small line segments.
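As a tiny illustration of the idea (plain JavaScript of my own, not from the fireworks post), here is how an arc can be sampled into straight segments:

// Approximate an arc with `segments` straight line segments;
// the more segments you use, the smoother the curve looks.
function arcPoints(cx, cy, r, startAngle, endAngle, segments) {
  const points = []
  for (let i = 0; i <= segments; i++) {
    const t = startAngle + ((endAngle - startAngle) * i) / segments
    points.push([cx + r * Math.cos(t), cy + r * Math.sin(t)])
  }
  return points // connect consecutive points with straight lines to draw the "arc"
}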

Behind the scenes, three.js triangulates all graphics before handing them to WebGL for rendering.

Three.js provides some common geometric shapes, both 3D and 2D: 3D shapes such as cuboids and spheres, and 2D shapes such as rectangles and circles. If the built-in shapes aren’t enough, you can define your own geometry by specifying vertices and the lines between them, or model something more complex in modeling software and import it.

2d

3d

A shape alone may not look like much when rendered, and that’s where materials come in. A mesh consists of two parts:

Material + geometry = mesh. Three.js provides a set of representative materials; the most commonly used simulate diffuse and specular reflection. You can also load an external image and apply it to an object’s surface as a texture map; try it yourself if you’re interested. As shown in the figure:
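As a minimal sketch of geometry + material = mesh (the sphere, color, and texture path below are placeholders of my own, not from the article):

// A geometry describes the shape...
const geometry = new THREE.SphereGeometry(1, 32, 32)
// ...and a material describes the surface. Lambert material simulates diffuse
// reflection, so it needs a light in the scene to be visible.
const material = new THREE.MeshLambertMaterial({ color: 0x2194ce })
// To use an image as a texture map instead, load it and pass it as `map`:
// const texture = new THREE.TextureLoader().load('textures/some-image.jpg')
// const material = new THREE.MeshLambertMaterial({ map: texture })
const sphere = new THREE.Mesh(geometry, material)
scene.add(sphere)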

The basic element of Threejs — lighting

Without light the camera can’t see anything, so you need to add lights to the scene. To get closer to the real world, three.js can simulate different kinds of light sources with different lighting effects: point lights, directional lights, spotlights, ambient light, and so on.

AmbientLight

Ambient light uniformly illuminates all objects in the scene. Ambient light cannot be used to cast shadows because it has no direction.

const light = new THREE.AmbientLight( 0x404040 ); // soft white light

DirectionalLight

A directional light emits light in a particular direction. It behaves as if it were infinitely far away, so all of its rays are parallel. Directional lights are often used to simulate sunlight: the sun is far enough away that we can treat it as infinitely distant, so the light coming from it is effectively parallel.

const directionalLight = new THREE.DirectionalLight( 0xffffff, 0.5 );

PointLight

A source of light emitting from one point in all directions. A common example is to simulate the light emitted by a light bulb.

const light = new THREE.PointLight( 0xff0000, 1, 100 );

SpotLight

A spotlight is emitted from a single point in one direction along a cone, and the cone grows larger the farther the light travels.

const spotLight = new THREE.SpotLight( 0xffffff );

There are a few other light types; interested readers can check the three.js documentation.
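Lights are added to the scene like any other object. A minimal sketch (the colors, intensity, and position are arbitrary choices of mine) combining an ambient light with a directional light:

// A soft ambient light so nothing is completely black
const ambient = new THREE.AmbientLight(0x404040)
scene.add(ambient)

// A directional light acting roughly like the sun
const sun = new THREE.DirectionalLight(0xffffff, 0.8)
sun.position.set(10, 20, 10) // by default the light shines from this position toward the origin
scene.add(sun)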

The basic element of Threejs – the renderer

The renderer draws what’s in the scene: the lights, cameras, and meshes you have set up.

let renderer = new THREE.WebGLRenderer({
    antialias: true, // true/false: whether to enable anti-aliasing
    alpha: true, // true/false: whether the background can be set to transparent
    precision: 'highp', // highp/mediump/lowp: precision selection
    premultipliedAlpha: false, // true/false: whether the renderer assumes colors have premultiplied alpha
    preserveDrawingBuffer: true, // true/false: whether to preserve the drawing buffer
    maxLights: 3, // maximum number of lights (only honored by older versions of three.js)
    stencil: false // true/false: whether to use the stencil buffer
})

That covers the main elements of three.js. Now for the main topic: how do we build a map visualization with three.js?

Map visualization – three.js implementation

Setting up the scene

Map or not, the shapes have to be placed in a scene, so follow my steps to build one. A scene needs a camera and a renderer. I use a map class to organize the code, as follows:

class chinaMap {
    constructor() {
      this.init()
    }

    init() {
      // Create a new scene
      this.scene = new THREE.Scene()
      this.setCamera()
      this.setRenderer()
    }

    // Create a perspective camera
    setCamera() {
      // The second parameter is the aspect ratio; here we use the inner width and height of the browser window in pixels
      this.camera = new THREE.PerspectiveCamera(
        75,
        window.innerWidth / window.innerHeight,
        0.1,
        1000
      )
    }

    // Set the renderer
    setRenderer() {
      this.renderer = new THREE.WebGLRenderer()
      // Set the canvas size
      this.renderer.setSize(window.innerWidth, window.innerHeight)
      // renderer.domElement is the canvas element
      document.body.appendChild(this.renderer.domElement)
    }

    // Set the ambient light
    setLight() {
      this.ambientLight = new THREE.AmbientLight(0xffffff) // ambient light
      this.scene.add(this.ambientLight)
    }
  }

I’ve already explained all of these elements above. Now that we have the scene and the light, let’s have a look.

The scene is black with nothing in it, so next we’ll just add an arbitrary cube and call the renderer’s render method. The code is as follows:

init() {
  // Create a new scene
  this.scene = new THREE.Scene()
  this.setCamera()
  this.setRenderer()
  const geometry = new THREE.BoxGeometry()
  const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 })
  // Keep a reference on the instance so we can rotate the cube later in animate()
  this.cube = new THREE.Mesh(geometry, material)
  this.scene.add(this.cube)
  this.render()
}

// render method
render() {
  this.renderer.render(this.scene, this.camera)
}

If you follow the steps above 👆, the page still shows nothing even though the cube has clearly been added. Why?

By default, when we call scene.add(), the object is added at the coordinate (0, 0, 0), which puts the camera and the cube in the same place. To prevent this, all we need to do is move the camera back a little.

So we just adjust the z value of the camera’s position to get the picture we want:

  // Create a perspective camera
  setCamera() {
    // The second parameter is the aspect ratio; here we use the inner width and height of the browser window in pixels
    this.camera = new THREE.PerspectiveCamera(
      75,
      window.innerWidth / window.innerHeight,
      0.1,
      1000
    )
    this.camera.position.z = 5
  }

The pictures are as follows:

At this point some of you will ask: what’s the difference between this and Canvas 2D? You can’t even see that it’s three-dimensional. OK, so let’s make the cube move by calling our render function repeatedly. We use requestAnimationFrame rather than setInterval; it’s a very simple optimization.

requestAnimationFrame has many advantages. Perhaps most importantly, it pauses when the user switches to another tab, so it doesn’t waste processor resources or battery life.

So here I keep adding 0.01 to the cube’s rotation around the x and y axes (and start the loop by calling animate() once instead of calling render() directly). Let’s look at the code:

render() {
  this.renderer.render(this.scene, this.camera)
}

animate() {
  requestAnimationFrame(this.animate.bind(this))
  this.cube.rotation.x += 0.01
  this.cube.rotation.y += 0.01
  this.render()
}

The renderings are as follows:

If you’re starting to get the feel of it, that’s why I introduced three.js with the simplest rotating cube. If you’ve read this far and found it helpful, I hope you can give me a thumbs up 👍, thanks! Next comes the requirement analysis for the actual map.

Acquisition of map data

In fact, the most important thing is getting the map data; for that, it’s worth knowing about OpenStreetMap.

It is a freely editable map of the world: OpenStreetMap lets you view, edit, and use geographic data from anywhere in the world.

Here I grabbed the GeoJSON data for the China map myself; the loading code is as follows:

// Load the map data
loadMapData() {
  const loader = new THREE.FileLoader()
  loader.load('../json/china.json', (data) => {
    const jsondata = JSON.parse(data)
    // Hand the parsed GeoJSON to the geometry generator (defined below)
    this.generateGeometry(jsondata)
  })
}

Let me show you what the JSON data looks like.

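Roughly, each feature looks like the sketch below (the province name and coordinates are made-up placeholders of my own; the real file contains an entry for every province):

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": { "name": "Some Province" },
      "geometry": {
        "type": "MultiPolygon",
        "coordinates": [
          [
            [ [116.6, 40.9], [116.5, 40.8], [116.4, 40.9] ]
          ]
        ]
      }
    }
  ]
}

Each innermost pair is a [longitude, latitude] coordinate, and this nesting (multi-polygon, then polygon, then points) is exactly what the loop later in the article walks through.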

The main thing is the longitude and latitude coordinates; that’s what I care about, because with points you can generate lines, and with lines you can generate faces. One concept involved here is the Mercator projection, which converts latitude/longitude coordinates into the corresponding 2D plane coordinates. If you want a feel for the derivation, read this article: portal

Here I’ll just use the visualization library d3, which comes with a Mercator projection:

// Mercator projection conversion
const projection = d3
  .geoMercator()
  .center([104.0, 37.5])
  .scale(80)
  .translate([0, 0])

Since China has many provinces, each province corresponds to an Object3D.

Object3D is a base class in three.js that provides a set of properties and methods for manipulating objects in 3D space. Objects can be nested with the .add(object) method, which adds an object as a child.

Here the whole of China is one big Object3D, each province is its own Object3D, and the provinces hang under China. The China map in turn hangs under the scene. Clearly, three.js uses a typical tree data structure; I’ve sketched it for you.

Many things hang under the Scene; one of them is the Map, the whole map, and under it are the provinces, each composed of a Mesh and a Line.
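Roughly, the hierarchy looks like this:

Scene
└── Map (THREE.Object3D)
    ├── Province (THREE.Object3D)
    │   ├── Mesh (the extruded province shape)
    │   └── Line (the province outline)
    ├── Province (THREE.Object3D)
    │   ├── Mesh
    │   └── Line
    └── ...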

Let’s look at the code:

generateGeometry(jsondata) {
  // Initialize the map object
  this.map = new THREE.Object3D()
  // Mercator projection conversion
  const projection = d3
    .geoMercator()
    .center([104.0, 37.5])
    .scale(80)
    .translate([0, 0])

  jsondata.features.forEach((elem) => {
    // Define an Object3D for this province
    const province = new THREE.Object3D()
    this.map.add(province)
  })

  this.scene.add(this.map)
}

I don’t think you’ll run into problems up to here. With the overall framework in place, let’s move on to the core part.

Generate map geometry

The two key classes here are Shape() and ExtrudeGeometry(). Let me explain: the outline of each province is a list of 2D coordinates, but we want to generate solid shapes. Shape() defines a 2D shape plane; used together with ExtrudeGeometry it can be extruded into a 3D geometry, from which we can get the points or the triangular faces.

The code is as follows:

    // The coordinate array of each province
    const coordinates = elem.geometry.coordinates
    // Loop through the coordinate array
    coordinates.forEach((multiPolygon) => {
      multiPolygon.forEach((polygon) => {
        const shape = new THREE.Shape()
        const lineMaterial = new THREE.LineBasicMaterial({
          color: 'white',
        })
        const lineGeometry = new THREE.Geometry()

        for (let i = 0; i < polygon.length; i++) {
          const [x, y] = projection(polygon[i])
          if (i === 0) {
            shape.moveTo(x, -y)
          }
          shape.lineTo(x, -y)
          lineGeometry.vertices.push(new THREE.Vector3(x, -y, 4.01))
        }

        const extrudeSettings = {
          depth: 10,
          bevelEnabled: false,
        }
        const geometry = new THREE.ExtrudeGeometry(shape, extrudeSettings)
        const material = new THREE.MeshBasicMaterial({
          color: '#2defff',
          transparent: true,
          opacity: 0.6,
        })
        const material1 = new THREE.MeshBasicMaterial({
          color: '#3480C4',
          transparent: true,
          opacity: 0.5,
        })
        const mesh = new THREE.Mesh(geometry, [material, material1])
        const line = new THREE.Line(lineGeometry, lineMaterial)
        province.add(mesh)
        province.add(line)
      })
    })

Traversing the points works exactly like drawing with Canvas 2D: move to the starting point, then draw lines along the rest of the outline. We then set the extrusion depth and the materials; lineGeometry corresponds to the edge of the outline. Let’s take a look at the picture:

Camera auxiliary view

To make it easier to adjust the camera position, I added a helper view, the CameraHelper. Looking back at the screen you’ll now see a cross of helper lines, and we can keep repositioning the camera until the map sits in the center of the picture:

addHelper() {
  const helper = new THREE.CameraHelper(this.camera)
  this.scene.add(helper)
}

Adjusting continuously with the helper view:

Ha ha, now it has the right feel. At this point our map of China is in the center of the canvas, and this step is done.

Adding an interactive controller

Now the map is generated, but there’s no user interaction yet. Here we bring in three.js’s OrbitControls, which lets you drag the mouse to orbit the scene and look at it from every angle. Note that OrbitControls is not part of the three.js core package, so it has to be included separately. The code looks like this:

setController() {
  this.controller = new THREE.OrbitControls(
    this.camera,
    document.getElementById('canvas')
  )
}

Let’s look at the effect:

Raycasting

But I’m still not satisfied: how do I know which province I’m pointing at? This is where a very important class in three.js comes in, Raycaster.

This class performs raycasting, which is used (among other things) for mouse picking: figuring out which objects in 3D space the mouse is over.

We can listen for the mousemove event on the canvas and use the ray to find out which mesh the mouse is currently over. But before that, let’s add a property to each province object so we know which province it represents.

// Add the attributes of the province
province.properties = elem.properties

OK, now we can bring in the raycaster as follows:

setRaycaster() {
  this.raycaster = new THREE.Raycaster()
  this.mouse = new THREE.Vector2()
  const onMouseMove = (event) => {
    // Normalize the mouse position to device coordinates; x and y both range from -1 to +1
    this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
    this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1
  }
  window.addEventListener('mousemove', onMouseMove, false)
}

animate() {
  requestAnimationFrame(this.animate.bind(this))
  // Update the ray from the camera and the mouse position
  this.raycaster.setFromCamera(this.mouse, this.camera)
  this.render()
}

Since the mouse keeps moving around the canvas, we have to keep updating the ray. Once we have the ray, we need something from the scene to test it against; Raycaster provides the following method:

const intersects = this.raycaster.intersectObjects(
  this.scene.children, // the scene's children
  true // if true, all descendants of the objects are also checked; otherwise only the objects themselves are tested
)

intersectObjects can return many intersections, but we only want one of them: the object with two materials, because that’s how we built the province mesh above, with two materials

 const mesh = new THREE.Mesh(geometry, [material, material1])

So the filtering code is as follows:

animate() {
  requestAnimationFrame(this.animate.bind(this))
  // Update ray by camera and mouse position
  this.raycaster.setFromCamera(this.mouse, this.camera)
  // Calculate which objects in the scene the ray intersects
  const intersects = this.raycaster.intersectObjects(
    this.scene.children,
    true
  )
  const find = intersects.find(
    (item) => item.object.material && item.object.material.length === 2
  )

  this.render()
}

How do we know whether we found one? We change the color of the mesh we found, but that brings a problem: when the mouse moves again, we have to restore the material we changed last time.

The code is as follows:

animate() {
  requestAnimationFrame(this.animate.bind(this))
  // Update the ray from the camera and the mouse position
  this.raycaster.setFromCamera(this.mouse, this.camera)
  // Calculate which objects in the scene the ray intersects
  const intersects = this.raycaster.intersectObjects(
    this.scene.children,
    true
  )
  // Restore the colors of the previously picked mesh
  if (this.lastPick) {
    this.lastPick.object.material[0].color.set('#2defff')
    this.lastPick.object.material[1].color.set('#3480C4')
  }
  this.lastPick = null
  this.lastPick = intersects.find(
    (item) => item.object.material && item.object.material.length === 2
  )
  if (this.lastPick) {
    this.lastPick.object.material[0].color.set(0xff0000)
    this.lastPick.object.material[1].color.set(0xff0000)
  }
  this.render()
}

Check out the renderings:

Adding a tooltip

To make the interaction even better, we want a tooltip to appear at the lower right of the mouse when a province is found. It’s just a div that is hidden by default and then moved to follow the mouse.

The first step is to create a div

<div id="tooltip"></div>

Step 2: Set the style to hide by default

#tooltip {
  position: absolute;
  z-index: 2;
  background: white;
  padding: 10px;
  border-radius: 2px;
  visibility: hidden;
}

Step 3 change the position of the div:

  setRaycaster() {
    this.raycaster = new THREE.Raycaster()
    this.mouse = new THREE.Vector2()
    this.tooltip = document.getElementById('tooltip')
    const onMouseMove = (event) => {
      this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1
      this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1
      // Change the div position to follow the mouse
      this.tooltip.style.left = event.clientX + 2 + 'px'
      this.tooltip.style.top = event.clientY + 2 + 'px'
    }

    window.addEventListener('mousemove', onMouseMove, false)
  }

The last step is to set the tooltip text to the province name:

showTip() {
  // Display the name of the province
  if (this.lastPick) {
    const properties = this.lastPick.object.parent.properties
    this.tooltip.textContent = properties.name
    this.tooltip.style.visibility = 'visible'
  } else {
    this.tooltip.style.visibility = 'hidden'
  }
}
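showTip needs to run whenever lastPick may have changed. One way to wire it up (my own placement; the original doesn't show where it is called) is at the end of the animate loop, after the picking logic:

animate() {
  requestAnimationFrame(this.animate.bind(this))
  this.raycaster.setFromCamera(this.mouse, this.camera)
  // ... the picking logic from the previous section, which sets this.lastPick ...
  this.showTip()
  this.render()
}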

At this point the whole 3D map visualization project is complete; let’s take a look at the effect.

Conclusion

Readers, if you’ve read this far and found it helpful, I hope you won’t be stingy with your hands 👍; a like and a follow are the biggest support for me. Producing this kind of content isn’t easy, but I won’t forget why I started, and I’ll keep sharing good visualization articles. If you’re interested in visualization, you can follow my visualization column below, or follow my WeChat official account "Front-end graphics", where I continuously share computer graphics knowledge. All the code for this article is available on GitHub. Welcome to star it.