Introduction: This article covers what a panorama is, how a panorama is built, and how panorama interaction works, walking you step by step through implementing a cool Web panorama from scratch while explaining the underlying principles. Beginners can follow along too. Consider bookmarking this for study; if you have any questions, please discuss them in the comments — the author checks and replies often.

1. What is a panorama

1.1 Panorama Definition

Definition: A panorama is the overall view of a space.

In plain terms: everyone has taken photos, so think about the process. You stand somewhere in a space, hold up a camera, point it at a certain angle, and get a photo of the scenery at that angle. And a panorama? It is an interactive photo: standing in one spot, you shoot through a full 360 degrees, capture all the views, stitch them together, and present them through special rendering techniques.

Panorama example:

Experience QR code (supports WeChat scanning):

1.2 Panoramic display

There are many ways to display a panorama: cylindrical panoramas, cube panoramas, sphere panoramas, and so on.

The most intuitive analogy: put a large cardboard box over your head and look at the scene printed on its inner faces (this style of display is called a cube panorama).

Both cylinders and cubes have seam regions where faces meet, and interaction across those seams produces blind spots. The best presentation is therefore the spherical panorama: 360 degrees with no blind spots. This article uses spherical panoramas throughout.

2. How to create a panorama

2.1 Getting to know ThreeJS

Mainstream front-end panorama implementations today:

| Implementation | Cost | Open source | Learning cost | Development difficulty | Compatibility | Extensibility | Performance |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CSS 3D | Free | Yes | Medium | Difficult | Browsers supporting CSS 3D | Easy | Low |
| ThreeJS | Free | Yes | High | Medium | Browsers supporting WebGL | Easy | High |
| Panorama tool (Krpano) | Paid | No | Low | Almost none | Browsers supporting Flash/Canvas | Difficult | Medium |

For a front-end developer with standards, ThreeJS is clearly the choice!

ThreeJS is Three (3D) + JS (JavaScript). It wraps the underlying WebGL interface, letting us render 3D scenes with simple code and no graphics background.

To display a 3D image on the screen, here’s a general idea:

  • Step 1: Construct a rectangular coordinate system in space: in Three this is called the Scene
  • Step 2: Draw geometry in the coordinate system: Three offers many kinds, including BoxGeometry (cube), SphereGeometry (sphere), and so on
  • Step 3: Choose an observation point and determine the viewing direction: in Three this is the Camera
  • Step 4: Render the observed scene to a specified area of the screen: the Renderer in Three does this (equivalent to taking a photo)

The above is the standard boilerplate for rendering objects with ThreeJS; if you don't fully understand it yet, just memorizing the pattern is fine 😄

Sphere panorama image material (below): the width should be twice the height, dimensions that are powers of two work best, and the recommended size is 2048px by 1024px.

Concrete code implementation:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
<title>Hand-in-hand tutorial on creating cool Web panoramas</title>
    <meta name="viewport" id="viewport" content="width=device-width,initial-scale=1,minimum-scale=1, maximum-scale=1, user-scalable=no, viewport-fit=cover">
</head>
<body>
<div id="wrap" style="position: absolute; z-index: 0; top: 0; bottom: 0; left: 0; right: 0; width: 100%; height: 100%; overflow: hidden;">
</div>
<script src="https://cdn.bootcdn.net/ajax/libs/three.js/r128/three.js"></script>
<script>
    const width = window.innerWidth
    const height = window.innerHeight
    const radius = 500 // The radius of the sphere

    // Step 1: Create the scene
    const scene = new THREE.Scene()

    // Step 2: Draw a sphere
    const geometry = new THREE.SphereBufferGeometry(radius, 32, 32)
    const material = new THREE.MeshBasicMaterial({
        map: new THREE.TextureLoader().load('./img/1.jpeg') // The panorama above
    })
    const mesh = new THREE.Mesh(geometry, material)
    scene.add(mesh)

    // Step 3: Create the camera and determine the camera position
    const camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1100) // The far plane must reach past the sphere
    camera.position.x = 0  // Place the camera outside the sphere so the whole ball is visible
    camera.position.y = 0
    camera.position.z = 1000

    camera.target = new THREE.Vector3(radius, 0, 0) // Set the focus point
    

    // Step 4: Take a picture and draw to canvas
    const renderer = new THREE.WebGLRenderer()
    renderer.setSize(width, height) // Set the photo size

    document.querySelector('#wrap').appendChild(renderer.domElement) // Draw to canvas

    function render() {
        camera.lookAt(camera.target)   // Focus on the target point
        renderer.render(scene, camera) // Take the photo

        // Render continuously: the image takes time to load and decode,
        // so there is no way to know in advance when a single shot would succeed
        requestAnimationFrame(render)
    }
    render()
</script>
</body>

</html>


Browser page effect (remember to enable mobile device simulation in DevTools):

2.2 Basic Knowledge

2.2.1 Latitude and longitude

This article drives the panorama with latitude and longitude, so a quick primer on them is in order.

Longitude and latitude together form a coordinate system called the geographic coordinate system: a spherical coordinate system that uses the three dimensions of a sphere to define positions in space, capable of identifying any location on the earth's surface.

As shown in the figure, longitude lon has range [0, 360] and latitude lat has range [-90, 90].

2.2.2 Converting longitude and latitude to 3D coordinates

For a point {lon, lat} on a sphere of radius R, its position in ThreeJS coordinates is:

Solution:

X = R * cos(lat) * sin(lon)
Y = R * sin(lat)
Z = R * cos(lat) * cos(lon)

Note: ThreeJS uses a right-handed coordinate system by default: the X axis runs left-right, the Y axis up-down, and the Z axis toward/away from the viewer.
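As a sketch, the conversion above can be written as a small helper (the function name is my own; the formulas are exactly the ones given, taking angles in degrees):

```javascript
// Convert a point given in degrees of longitude/latitude on a sphere of
// radius R into ThreeJS world coordinates (right-handed, Y up).
function latLonToVector3(lon, lat, R) {
  const phi = lat * Math.PI / 180   // latitude in radians
  const theta = lon * Math.PI / 180 // longitude in radians
  return {
    x: R * Math.cos(phi) * Math.sin(theta),
    y: R * Math.sin(phi),
    z: R * Math.cos(phi) * Math.cos(theta)
  }
}
```

For example, lon = 90°, lat = 0° lands on the positive X axis, and lat = 90° lands on the north pole of the sphere regardless of lon.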

2.3 Steps for Generating a Panorama

In section 2.1 we finished drawing a sphere; to turn it into a panorama we need two adjustments:

  • 1. Move the camera to the center of the sphere;
  • 2. Attach the panoramic picture to the inner surface of the sphere;

The specific steps are as follows:

  • Step 1: Create a Scene
  • Step 2: Create a sphere, paste the panoramic image onto its inner surface, and add it to the scene
  • Step 3: Create a perspective projection camera, move it to the center of the sphere, and point it at the inner surface
  • Step 4: Change the point the camera looks at by modifying the latitude and longitude

Concrete code implementation:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Hand-in-hand tutorial on creating cool Web panoramas</title>
    <meta name="viewport" id="viewport" content="width=device-width,initial-scale=1,minimum-scale=1, maximum-scale=1, user-scalable=no, viewport-fit=cover">
</head>
<body>
    <div id="wrap" style="position: absolute; z-index: 0; top: 0; bottom: 0; left: 0; right: 0; width: 100%; height: 100%; overflow: hidden;">
    </div>
    <script src="https://cdn.bootcdn.net/ajax/libs/three.js/r128/three.js"></script>
    <script>
        const width = window.innerWidth, height = window.innerHeight // Screen width and height
        const radius = 50 // The radius of the sphere

        // Step 1: Create the scene
        const scene = new THREE.Scene()

        // Step 2: Draw a sphere
        const geometry = new THREE.SphereBufferGeometry(radius, 32, 32)
        geometry.scale(-1, 1, 1) // Invert the sphere so the texture maps onto the inner surface
        const material = new THREE.MeshBasicMaterial({
            map: new THREE.TextureLoader().load('./img/1.jpeg') // The panorama above
        })
        const mesh = new THREE.Mesh(geometry, material)
        scene.add(mesh)

        // Step 3: Create the camera and determine the camera position
        const camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 100)
        camera.position.x = 0  // Move the camera position to the center of the ball
        camera.position.y = 0
        camera.position.z = 0

        camera.target = new THREE.Vector3(radius, 0, 0) // Set a focus point
      

        // Step 4: Take a picture and draw to canvas
        const renderer = new THREE.WebGLRenderer()
        renderer.setPixelRatio(window.devicePixelRatio)
        renderer.setSize(width, height) // Set the photo size

        document.querySelector('#wrap').appendChild(renderer.domElement) // Draw to canvas
        renderer.render(scene, camera)

        let lat = 0, lon = 0

        function render() {
            lon += 0.003 // Add an offset per frame
            // Change the focusing of the camera. Refer to section 2.2.2 for the calculation formula
            camera.target.x = radius * Math.cos(lat) * Math.cos(lon);
            camera.target.y = radius * Math.sin(lat);
            camera.target.z = radius * Math.cos(lat) * Math.sin(lon)
            camera.lookAt(camera.target) // Focus on the target point

            renderer.render(scene, camera)
            requestAnimationFrame(render)
        }
        render()
    </script>
</body>

</html>


Effect:

At this point, we have completed the panorama (counting only the JS: 28 lines of code — so no clickbait here, right? 😁).

3. Panorama interaction principles

3.1 Rotation via gesture interaction

Rotation in gesture interaction is a one-finger drag, consistent with how you would spin a globe.

In the screen coordinate system, the origin is the top-left corner, the X axis runs left to right, and the Y axis runs top to bottom. A finger sliding across the screen fires three events in turn: touchstart, touchmove, and touchend; the event object records the finger's position on the screen.

The process of sliding your finger across the screen:

  • touchstart: record the start position of the slide (startX, startY)
  • touchmove: record the current position (curX, curY), subtract the previous position, treat the difference as an arc length, divide by the radius, multiply by a sensitivity factor, and accumulate into (lon, lat)
  • touchend: nothing for now

Here the on-screen sliding distance serves as the arc length, and R is the sphere's radius (arc length s = R·θ, so the angle is θ = s / R).

So for a slide on the screen from P1 (clientX1, clientY1) to P2 (clientX2, clientY2), the corresponding change in longitude and latitude is:

distanceX = clientX2 - clientX1
distanceY = clientY2 - clientY1
lon = distanceX / R
lat = distanceY / R

Code implementation:

// Add touch event listeners
let startX, startY     // Start position of the touch
let lastX, lastY       // Previous screen position
let curX, curY         // Current screen position
const factor = 1 / 10  // Sensitivity coefficient

const $wrap = document.querySelector('#wrap')
// Touch starts
$wrap.addEventListener('touchstart', function (evt) {
    const obj = evt.targetTouches[0] // Take the first touch point
    startX = lastX = obj.clientX
    startY = lastY = obj.clientY
})

// Touch moves
$wrap.addEventListener('touchmove', function (evt) {
    evt.preventDefault()
    const obj = evt.targetTouches[0]
    curX = obj.clientX
    curY = obj.clientY

    // See: arc length formula
    lon -= ((curX - lastX) / radius) * factor // factor is a sensitivity coefficient that keeps the rotation stable
    lat += ((curY - lastY) / radius) * factor

    lastX = curX
    lastY = curY
})

Single-finger operation effect:

The code above adds one-finger interaction to the panorama, but it lacks rotational inertia. Next, let's add an inertia animation.

To implement slide inertia, over the course of a finger slide:

  • touchstart: record the start position and time of the slide (startX, startY, startTime)

  • touchmove: record the current position (curX, curY), subtract the previous position, multiply by factor, and accumulate into (lon, lat)

  • touchend: record the end time, compute the average speed over the slide, then subtract a deceleration from the speed each frame until it reaches zero or a new touchstart fires.

Code implementation:

let lastX, lastY         // Previous screen position
let curX, curY           // Current screen position
let startX, startY       // Start touch position, used to compute speed
let startTime            // Start touch time, used to compute speed
let isMoving = false     // Whether a one-finger gesture is in progress
let speedX, speedY       // Speed
const factor = 1 / 10    // Sensitivity coefficient, empirical value
const deceleration = 0.1 // Deceleration, used by the inertia animation

const $wrap = document.querySelector('#wrap')
// Touch starts
$wrap.addEventListener('touchstart', function (evt) {
    const obj = evt.targetTouches[0] // Take the first touch point
    startX = lastX = obj.clientX
    startY = lastY = obj.clientY
    startTime = Date.now()
    isMoving = true
})

// Touch moves
$wrap.addEventListener('touchmove', function (evt) {
    evt.preventDefault()
    const obj = evt.targetTouches[0]
    curX = obj.clientX
    curY = obj.clientY

    // See: arc length formula
    lon -= ((curX - lastX) / radius) * factor // factor smooths the panorama rotation
    lat += ((curY - lastY) / radius) * factor

    lastX = curX
    lastY = curY
})

// Touch ends
$wrap.addEventListener('touchend', function (evt) {
    isMoving = false
    const t = Date.now() - startTime
    speedX = (curX - startX) / t    // Average speed along the X axis
    speedY = (curY - startY) / t    // Average speed along the Y axis

    subSpeedAnimate() // Inertia animation
})

let animateInt
// Deceleration animation
function subSpeedAnimate() {
    lon -= speedX * factor // X axis
    lat += speedY * factor

    // Slow down
    speedX = subSpeed(speedX)
    speedY = subSpeed(speedY)

    // Stop the animation when the speed reaches 0 or a new touch begins
    if ((speedX === 0 && speedY === 0) || isMoving) {
        if (animateInt) {
            cancelAnimationFrame(animateInt)
            animateInt = undefined
        }
    } else {
        animateInt = requestAnimationFrame(subSpeedAnimate)
    }
}

// Reduce a speed toward zero without overshooting
function subSpeed(speed) {
    if (speed !== 0) {
        if (speed > 0) {
            speed -= deceleration;
            speed < 0 && (speed = 0);
        } else {
            speed += deceleration;
            speed > 0 && (speed = 0);
        }
    }
    return speed;
}

Preview: azuoge.github.io/Opanorama/

3.2 Zooming via gesture interaction

Zooming in gesture interaction is a two-finger pinch, just like zooming an image.

When introducing ThreeJS earlier, I mentioned the camera. Panorama zooming works on the same principle as a camera lens: zooming changes how much of the scene fills the photo.

To create a camera using ThreeJS:

const camera = new THREE.PerspectiveCamera( fov, aspect, near, far )

Parameter description:

  • near: the near clipping plane; the default value 0.1 is fine
  • far: the far clipping plane; any value larger than the sphere's radius works, e.g. the radius R itself
  • aspect: fixed once the panorama scene is set up: screen width / screen height
  • fov: the field of view; zooming the panorama is done by changing this value

It is actually intuitive: open your eyes wide and your field of view widens, so objects look smaller [zoom out]; narrow your eyes and your field of view shrinks, so objects look larger [zoom in]. Modifying the fov value scales the panorama accordingly.

So how is fov computed? From the two-finger gesture: on touchstart, compute the initial distance between the two fingers; during touchmove, keep computing the current distance; the current distance divided by the previous distance gives the scale factor.

The key code is as follows:

// (clientX1, clientY1) and (clientX2, clientY2) are the current on-screen positions of the two fingers

// Approximate the distance: the Manhattan distance avoids a square-root computation
const distance = Math.abs(clientX1 - clientX2) + Math.abs(clientY1 - clientY2)
// Compute the scale ratio against the previous distance
const scale = distance / lastDistance
// Compute the new field of view
fov = camera.fov / scale

// Clamp the field of view
camera.fov = Math.min(90, Math.max(fov, 60)) // Keep 60 <= fov <= 90

// The projection must be updated explicitly
camera.updateProjectionMatrix()

Preview: azuoge.github.io/Opanorama/

3.3 Mobile phone gyroscope interaction

Among HTML5 events, the deviceorientation event detects changes in the device's orientation.

HTML5 involves two coordinate systems:

  • Earth coordinates X/Y/Z: fixed directions regardless of how the device moves
  • Device coordinates X/Y/Z: directions defined relative to the phone screen

Value range:

  • X axis: front-back tilt, beta (around X), range [-180°, 180°]
  • Y axis: left-right tilt, gamma (around Y), range [-90°, 90°]
  • Z axis: left-right rotation, alpha (around Z), range [0°, 360°]

When holding the phone vertically with the screen facing you, beta is about 90 degrees.

From the above observation, combined with the coordinate system of ThreeJS, the key conclusions can be drawn:

  • lat corresponds to (beta - 90) * (Math.PI / 180)
  • lon corresponds to alpha * (Math.PI / 180)

The gamma angle is not used in this panorama interaction.

To try it out, hold the phone vertically with the screen facing you (beta ≈ 90 degrees).

Gyroscope simulation can be enabled in Chrome as follows:

The code is simple:

// Degrees-to-radians conversion factor
const L = Math.PI / 180

// Gyroscope interaction
window.addEventListener('deviceorientation', function (evt) {
    lon = evt.alpha * L
    lat = (evt.beta - 90) * L
})

The effect is as follows:

Note: on some Android phones the orientation values reported by HTML5 jitter noticeably; even with the phone lying still on a desk, the gyroscope output fluctuates. (This issue is not part of the core principle, just a problem encountered when applying the panorama in practice; feel free to skip it 😄)

We need to smooth the gyroscope output digitally. Here we borrow the low-pass filtering algorithm used in signal processing.

The formula is as follows:

Y(n) = K * X(n) + (1 - K) * Y(n-1)

When K equals 1, the output is the raw value; when K is less than 1, changes are diluted.
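As a minimal sketch (function and variable names are my own), one filtering step blends the new raw reading with the previous filtered value:

```javascript
// One-step low-pass filter: K = 1 passes the raw reading through unchanged,
// smaller K smooths out jitter at the cost of responsiveness.
function lowPass(raw, prev, K) {
  return K * raw + (1 - K) * prev
}

// Usage: feed each gyroscope reading through the filter in turn
const readings = [10, 10.4, 9.7, 10.2] // hypothetical jittery beta values
let smoothed = readings[0]
for (const r of readings) {
  smoothed = lowPass(r, smoothed, 1 / 10)
}
```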

But there is a new problem: the conflict between sensitivity and stability

  • The smaller the filtering coefficient, the smoother the filtering result, but the lower the sensitivity
  • The larger the filtering coefficient is, the higher the sensitivity is, but the more unstable the filtering result is

Experimentation shows that K = 1/10 gives a good balance of sensitivity and stability.

Preview: azuoge.github.io/Opanorama/

3.4 Interactive combination of gesture and gyroscope

Both gestures and gyroscopic interactions translate into latitude and longitude to drive the panorama, so the combination of the two is simple.

The specific ideas are as follows:

lat = touch.lat + orienter.lat + fix.lat // value range: [-90, 90]
lon = touch.lon + orienter.lon + fix.lon // value range: [0, 360]

Here, touch is the contribution of the gesture, orienter the contribution of the gyroscope, and fix a correction term that keeps the resulting longitude and latitude within their valid ranges.
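As a sketch of the idea (the function name is my own, and wrapping/clamping stands in for the fix terms), the combination can be written as:

```javascript
// Combine the gesture and gyroscope contributions, then bring the result
// back into the valid ranges: lon wraps around, lat is clamped.
function combine(touch, orienter) {
  let lon = touch.lon + orienter.lon
  let lat = touch.lat + orienter.lat
  lon = ((lon % 360) + 360) % 360        // wrap longitude into [0, 360)
  lat = Math.max(-90, Math.min(90, lat)) // clamp latitude into [-90, 90]
  return { lon, lat }
}
```

Wrapping longitude (rather than clamping) lets the view spin all the way around, while clamping latitude stops the camera from flipping over the poles.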

The complete code for this article is at https://github.com/azuoge/Opanorama — questions and discussion are welcome.