Takeaway

After reading this article, you will know:

1. How to generate map events
2. How to do GPU graphics picking
3. How to calculate map pose parameters
4. How to optimize map rendering boundaries
5. How to generate complex map animations

The interactivity of maps can express richer data, enhancing both the appeal to the audience and their understanding of the data. Map interactions include panning, rotation, tilting, zooming, graphic picking, and so on. All of these interactions require abstracting a series of native mouse events and converting their parameters; finally, the parameters are applied to the map's camera to control the map's pose.

The pose of the map itself is similar to the flight attitude of an aircraft, as shown below:

  

Attitude of aircraft

The rotation of the map corresponds to the aircraft's yaw action, the tilt of the map corresponds to the aircraft's pitch action, and the map has no equivalent of the aircraft's roll action. These map poses can be achieved by transforming the position, yaw, and pitch of the map's perspective camera. Next, we'll go into detail on how these events are abstracted and dispatched, how graphics are picked, and how the pose is controlled.

Principles of map interaction

Abstract events

Native mouse events include click, dblclick, wheel, mousemove, etc. For maps, simple events can be triggered directly by native mouse events, while complex events, such as dragStart, drag, dragEnd, zoomStart, zoom, and zoomEnd, are composed from combinations of native mouse events.

The cycle of the map's drag event starts when the left mouse button is pressed on the map, triggering a mousedown event. When the mouse then moves, a dragStart event is fired for the map; as the mouse keeps moving with the button held down, mousemove events fire and the map's drag event is triggered continuously. Finally, releasing the left button triggers mouseup and fires a dragEnd event, ending the drag cycle. A complete map drag is thus composed from consecutive mousedown and mousemove events, with flags maintained in between to ensure the composite map event is correct.

Event generation process

Example event generation code:

let isMouseDown = false
let isMouseMove = false

mapContainer.addEventListener('mousedown', onMouseDown)
mapContainer.addEventListener('mouseup', onMouseUp)
mapContainer.addEventListener('mousemove', onMouseMove)

function onMouseDown () {
  map.emit('mousedown')
  isMouseDown = true
}

function onMouseUp () {
  map.emit('mouseup')
  isMouseDown = false
  if (isMouseMove) {
    map.emit('dragEnd')
    isMouseMove = false
  }
}

function onMouseMove () {
  map.emit('mousemove')
  if (isMouseDown) {
    if (!isMouseMove) {
      map.emit('dragStart')
    }
    map.emit('drag')
    isMouseMove = true
  }
}

A zoom event can be triggered by the mouse wheel, the touchpad, or a dblclick. When a wheel event arrives from the mouse wheel or touchpad, a zoomStart event is fired; while the zoom animation runs, zoom events are triggered continuously; when the animation ends, a zoomEnd event is fired. A dblclick also fires a native click event, so the two must be distinguished by delaying the click handler for a short time; if a dblclick arrives within that window, the click timer is cleared:

let timer

mapContainer.addEventListener('click', onClick)
mapContainer.addEventListener('dblclick', onDblclick)

function onClick () {
  if(timer) {
    clearTimeout(timer)
    timer = undefined
  }

  timer = setTimeout(() => {
    // do something
    map.emit('click')
  }, delay)
}

function onDblclick () {
  if (timer) {
    clearTimeout(timer)
    timer = undefined
  }

  // do something
  map.emit('dblclick')
  map.emit('zoom')
}
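The wheel-driven zoom cycle described above can be sketched as follows. Since the browser provides no explicit "wheel end" event, a pause in wheel events is treated as the end of the cycle; the `map` object, the wiring to `mapContainer`, and the 200 ms pause threshold are illustrative assumptions:

```javascript
let zooming = false
let zoomEndTimer

// Assumed wiring: mapContainer.addEventListener('wheel', onWheel)
function onWheel () {
  if (!zooming) {
    zooming = true
    map.emit('zoomStart')
  }
  map.emit('zoom')
  // There is no native "wheel end" event, so a pause in wheel
  // events is treated as the end of the zoom cycle
  clearTimeout(zoomEndTimer)
  zoomEndTimer = setTimeout(() => {
    zooming = false
    map.emit('zoomEnd')
  }, 200)
}
```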

The trackpad scroll event is similar to the mouse wheel event, but not identical. Trackpad scroll events, generated for example by Mac trackpads, are much denser than mouse wheel events. Without separate handling, the map will not feel smooth and will not deliver the experience a Mac trackpad should provide. We borrow the processing logic of mapbox-gl-js to distinguish trackpad scrolling from the mouse wheel: github.com/mapbox/mapb… .
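A simplified heuristic along these lines is sketched below. This is an assumption inspired by, not a copy of, the mapbox-gl-js logic: discrete wheels tend to report coarse integer deltas (or line-based deltas), while trackpads emit dense events with small, often fractional deltas; the thresholds are illustrative:

```javascript
// Classify a WheelEvent as coming from a mouse wheel or a trackpad.
// Assumption: thresholds here are illustrative, not mapbox-gl-js's exact values.
function wheelType (event) {
  // deltaMode 1 (DOM_DELTA_LINE) is only reported by real mouse wheels
  if (event.deltaMode === 1) return 'wheel'
  // Large integer deltas are typical of discrete wheel notches
  if (Number.isInteger(event.deltaY) && Math.abs(event.deltaY) >= 40) return 'wheel'
  return 'trackpad'
}
```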

Dispatching events

After abstracting the events, the next step is to dispatch them to the layers, each of which performs its own logic based on the event. The map maintains an array of layers, traverses it, and dispatches events to each layer.

function onMapEventName (...args) {
  layers.forEach(function (layer) {
    layer.emit(eventName, ...args)
  })
}

function onLayerEventName () {
  // do something
}

map.on(eventName, onMapEventName)
layer.on(eventName, onLayerEventName)

Graphic picking

In Three.js, picking a graphic is normally done on the CPU with Raycaster. If the coordinates of the graphic are known on the CPU, a ray is cast from the camera's position through the world coordinates of the mouse position; any object in the world that intersects the ray is considered picked by the mouse. The intersected objects are then sorted by their distance along the ray, so the object nearest the mouse can be selected. The drawback of this approach is that it cannot properly detect shapes whose size and form are determined in the GPU.
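The Raycaster flow just described can be sketched as follows. The `pick` signature is an illustration for this article, not a three.js API; the `raycaster` (a `THREE.Raycaster`), `camera`, and `scene` are assumed to be created elsewhere in the application:

```javascript
// Convert mouse pixel coordinates into normalized device coordinates [-1, 1]
function toNDC (x, y, width, height) {
  return { x: (x / width) * 2 - 1, y: -(y / height) * 2 + 1 }
}

// CPU picking sketch: cast a ray through the mouse position and take
// the nearest intersected object
function pick (raycaster, camera, scene, event, container) {
  const ndc = toNDC(event.clientX, event.clientY,
    container.clientWidth, container.clientHeight)
  raycaster.setFromCamera(ndc, camera)
  // intersectObjects returns hits sorted by distance, nearest first
  const intersects = raycaster.intersectObjects(scene.children, true)
  return intersects.length > 0 ? intersects[0].object : undefined
}
```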

GPU picking solves the above problem. One method of GPU picking determines the picked object by reading back the rendered pixels.

The first step is maintaining graphic IDs. To determine which graphic is picked, every graphic must be numbered. The ID is encoded across the three color channels R, G, and B, 8 bits per channel, 24 bits in total, yielding at most 2^24 distinct colors. An array records which IDs are in use: when an ID is taken, the value at that position is set to 1, otherwise it is 0. To avoid scanning for a free ID from the beginning on every allocation, a cursor records the current position; each allocation starts from the cursor and scans up to the maximum ID, and if no free ID is found there, the scan restarts from 0. If a full loop fails to produce a new ID, an "ID exhausted" exception is thrown; in practice, tens of millions of IDs are sufficient. When an ID is deleted, its value in the array is reset to 0, indicating it can be reused.
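The allocation scheme described above can be sketched in a few lines; the names `used`, `cursor`, `allocateID`, and `releaseID` are illustrative:

```javascript
const MAX_ID = 1 << 24 // 24 bits across the R, G and B channels
const used = new Uint8Array(MAX_ID) // 1 = in use, 0 = free
let cursor = 0

function allocateID () {
  // Scan from the cursor first, wrapping around to 0 if necessary
  for (let i = 0; i < MAX_ID; i++) {
    const id = (cursor + i) % MAX_ID
    if (!used[id]) {
      used[id] = 1
      cursor = id + 1
      return id
    }
  }
  throw new Error('ID exhausted')
}

function releaseID (id) {
  used[id] = 0 // Mark the ID as reusable
}
```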

For each mesh, the geometry needs an extra attribute to store the ID color, and the material needs a PICK macro definition; the fragment shader then uses this macro to decide whether to output the normal rendering color or the ID color for picking.

For example, if the ID is 0x000001, the color of the mesh when picked is #000001. These meshes are rendered off-screen to a THREE.WebGLRenderTarget instead of the canvas. When the mouse picks, the pixel value at the mouse coordinate (x, y) is read from the render target and converted back into an ID, which tells us which object the mouse is currently over. Because the picking operates on pixels, it is accurate even for irregular shapes.
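Reading the pixel back can be done with three.js's `readRenderTargetPixels`. The `pickID` helper below is a sketch; the `renderer` and off-screen `pickTarget` are assumed to exist in the surrounding application:

```javascript
// Read one pixel back from the off-screen pick target and decode it into an ID
const pixel = new Uint8Array(4)

function pickID (renderer, pickTarget, x, y) {
  // WebGL's y axis starts at the bottom, so the mouse y coordinate is flipped
  renderer.readRenderTargetPixels(pickTarget, x, pickTarget.height - y, 1, 1, pixel)
  const [r, g, b] = pixel
  return (r << 16) + (g << 8) + b
}
```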

GPU picking of thin lines

The encoding between colors and graphic IDs:

/**
 * @param {Number} id
 */
function packID (id) {
  const r = id >> 16
  const g = (id - (r << 16)) >> 8
  const b = id - (r << 16) - (g << 8)
  return [r, g, b]
}

/**
 * @param {Number} r
 * @param {Number} g
 * @param {Number} b
 */
function unpackID (r, g, b) {
  return (r << 16) + (g << 8) + b
}

Shader part of the code:

// Vertex shader
attribute vec4 a_color;
attribute vec3 a_idColor;
varying vec4 v_color;
varying vec3 v_idColor;

void main () {
  v_color = a_color;
  v_idColor = a_idColor;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// Fragment shader
varying vec4 v_color;
varying vec3 v_idColor;

void main () {
  #if defined(PICK)
    gl_FragColor = vec4(v_idColor, 1.0);
  #else
    gl_FragColor = v_color;
  #endif
}

Attitude control

The pose of the map is controlled by the values of center, zoom, pitch, and yaw. Center determines the location of the map's center, zoom determines the map level, pitch determines the tilt angle of the map, and yaw determines the rotation angle of the map.

Parameter calculation

Center calculation

Dragging with the left mouse button pans the map, i.e. changes the center position of the map:

Map drag

As the figure shows, the mouse stays on the "Guomao" location while dragging the map at different levels. Different levels have different resolutions, so the difference between mouse positions must be converted into meters at each level and applied to center:

function normal (pos, containerWidth, containerHeight) {
  const vec2 = new Vector2()
  vec2.setX((pos.x / containerWidth) * 2 - 1)
  vec2.setY(-(pos.y / containerHeight) * 2 + 1)
  return vec2
}

function normalToWorld (vec2, camera) {
  raycaster.setFromCamera(vec2, camera)
  // Compute the intersection with the map plane
  const intersects = raycaster.intersectObject(planeXOZ)
  if (intersects.length > 0) {
    return new Vector3(intersects[0].point.x, 0, intersects[0].point.z)
  }
}

const start = normalToWorld(normal(mouseStart, containerWidth, containerHeight), camera)
const end = normalToWorld(normal(mouseEnd, containerWidth, containerHeight), camera)
const offset = [start.x - end.x, start.z - end.z]

newCenter[0] = center[0] + offset[0]
newCenter[1] = center[1] + offset[1]
newPosition[0] = cameraPosition[0] + offset[0]
newPosition[1] = cameraPosition[1]
newPosition[2] = cameraPosition[2] + offset[1]

camera.setTarget(newCenter)
camera.setPosition(newPosition)

Zoom calculation

When zoom is triggered by a wheel event, the zoomDelta produced by the zoom action can be calculated from the deltaY attribute of the WheelEvent object:

zoomDelta = deltaY * scale // scale is a coefficient that prevents zoomDelta from becoming too large

When zoom is triggered by a dblclick event, the map typically drills down one level at a time:

zoomDelta = 1

Perspective cameras at different distances from the center observe map elements of different sizes, and the loading of next-level tiles is determined from the intersection of the perspective camera's frustum with the map plane.
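The relation between zoom level and camera distance can be sketched as follows, under the common web-map assumption (not stated in this article) that each zoom level halves the ground resolution, so the camera distance halves as well; `baseDistance` is illustrative:

```javascript
// Assumption: each zoom level halves the camera-to-center distance
function cameraDistanceForZoom (zoom, baseDistance) {
  return baseDistance / Math.pow(2, zoom)
}
```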

Pitch and yaw calculation

Pitch and yaw change when the map drag event is triggered with the right mouse button: the yaw value follows the mouse's x coordinate, and the pitch value follows the mouse's y coordinate.

Pitch operation

The change in pitch can be seen as the arc swept by moving deltaY, the change in the mouse's y coordinate in pixels, along a circle of radius R. deltaY is computed by comparing the current mouse y coordinate with the previous one at short intervals, and R is derived from the smaller of the map container's width and height:

R = min(containerWidth, containerHeight)
deltaY = lastY - currentY
deltaPitch = deltaY / R // change in pitch, in radians

Similarly, yaw uses deltaX, the difference between two successive mouse x coordinates, to compute the arc of the change. Unlike pitch, dragging across the upper half of the map container and across the lower half should rotate in opposite directions, so it is necessary to determine which half of the container the mouse is operating in.

Yaw operation
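The opposite-direction handling for the two halves of the container can be sketched as below; the sign convention (upper half positive) is an assumption for illustration, as the article only requires that the halves rotate in opposite directions:

```javascript
// Assumption: the sign convention here (upper half positive) is illustrative
function computeDeltaYaw (currentX, lastX, mouseY, containerWidth, containerHeight) {
  const R = Math.min(containerWidth, containerHeight)
  const deltaX = currentX - lastX
  // Flip the rotation direction in the lower half of the container
  const sign = mouseY < containerHeight / 2 ? 1 : -1
  return sign * (deltaX / R) // change in yaw, in radians
}
```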

The calculated pitch and yaw are applied to the camera to change the map's pose. Mouse-driven pitch and yaw operations revolve around the map center: the camera's position can be regarded as a point on a sphere whose radius is the distance between position and target, and changes in pitch and yaw move the position across that sphere. From pitch and yaw, a new position is computed for the camera to observe the map in its new pose.
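The spherical movement of the camera position can be sketched with standard spherical coordinates; the convention that pitch is measured from the vertical axis (0 means looking straight down) is an assumption consistent with the pitch range used in this article:

```javascript
// Place the camera on a sphere of radius r around the target.
// Assumption: pitch is measured from the vertical axis (0 = straight down view).
function cameraPositionFromPose (target, r, pitch, yaw) {
  return {
    x: target.x + r * Math.sin(pitch) * Math.sin(yaw),
    y: target.y + r * Math.cos(pitch),
    z: target.z + r * Math.sin(pitch) * Math.cos(yaw)
  }
}
```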

Render boundary optimization

Window range determination

Camera and tile loading

The relationship between the camera and tile loading was covered in the previous article "Data Sources and Stored Computing". If the camera's tilt angle is too large, tiles far from the viewpoint are also loaded, yet their rendered content is barely visible within the perspective camera's field of view, which results in meaningless rendering at a poor cost-performance ratio. After investigation and testing, limiting the pitch to [0, Math.PI / 3] proved reasonable.
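Applying that limit amounts to clamping the pitch value before it reaches the camera; a minimal sketch, with illustrative names:

```javascript
// Clamp pitch to the [0, Math.PI / 3] range discussed above
const MIN_PITCH = 0
const MAX_PITCH = Math.PI / 3

function clampPitch (pitch) {
  return Math.min(Math.max(pitch, MIN_PITCH), MAX_PITCH)
}
```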

Render boundary optimization

With pitch set to its maximum angle Math.PI / 3, the quadrilateral where the perspective camera's frustum intersects the map plane is not fully covered by map tiles, leaving blank areas, as shown in the red box below:

Before adding the sky

So a sky color needs to be added to cover the resulting blank space. With the sky added, the result looks like this:

After adding the sky

To ensure the sky is not affected by the map pose, a separate scene and camera are needed. The sky is a plane and needs no perspective effect, so an orthographic camera is chosen. When the map's pitch changes, the y coordinate of the sky mesh is changed dynamically; when the yaw changes, the height stays the same, so the sky's state does not need to change.

Complex animation

Focus change

Focusing

Camera movement

The map library also provides a variety of complex animations, including focus changes, focusing, camera movement, and more. To implement these animations, an array of parameter objects is maintained; while the array contains parameter objects, the animation runs, and when one parameter object finishes executing, the next one is executed.

Each parameter object consists of the pose parameters center, zoom, pitch, and yaw, and the animation parameters delay, duration, easeFunc, repeat, and callback. Different animations are obtained by interpolating the pose parameters over the time span defined by the animation parameters.

let animating = false
let animations = [param0, param1, param2 ...]

function animate () {
  const { stateParam, animaParam } = animations[0]
  const { duration, delay, repeat, easeFunc, callback } = animaParam
  const currentParam = getCurrentParam()
  const tween = new TWEEN.Tween(currentParam)
    .to(stateParam, duration)
    .delay(delay)
    .repeat(repeat)
    .easing(easeFunc)
    .onStart(() => {
      animating = true
    })
    .onUpdate((param) => {
      updateMapCamera(param) // Update the map's perspective camera parameters
    })
    .onComplete(() => {
      animating = false
      animations.shift()
      callback()
    })
  tween.start()
}

function loop () {
  if (!animating && animations.length) {
    animate()
  }
  TWEEN.update()
  requestAnimationFrame(loop)
}

loop()

Please look forward to the next issue: Map Text Rendering.

Past articles

Creating a Cool 3D Map Visualization Product for B-end Clients

Data Sources and Stored Computing