Ray picking

As the name suggests, ray picking casts a ray from the camera out to infinity and determines which objects the ray intersects. The figure below illustrates the idea.

The official example

Let's look first at the ray-picking code from the official Three.js documentation. Each frame, this code casts a ray from the camera toward the mouse position, collects the objects the ray hits, and finally changes their material color.

```javascript
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

function onMouseMove( event ) {
  // Normalize the mouse position to device coordinates
  // (x and y each range from -1 to +1)
  mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
  mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
}

function render() {
  // Update the ray from the camera and mouse position
  raycaster.setFromCamera( mouse, camera );
  // Compute the objects intersecting the ray (scene.children here can be
  // narrowed to only the objects that need ray testing; too many objects
  // will cause lag)
  const intersects = raycaster.intersectObjects( scene.children );
  for ( let i = 0; i < intersects.length; i ++ ) {
    intersects[ i ].object.material.color.set( 0xff0000 );
  }
  renderer.render( scene, camera );
}

window.addEventListener( 'mousemove', onMouseMove, false );
window.requestAnimationFrame( render );
```

Steps of ray picking in THREE

  1. First, convert the clicked point (X, Y) from the screen coordinate system to the WebGL (NDC) coordinate system.
  2. Then, unproject the NDC coordinates (the inverse of the projection transform) to obtain the world coordinates (Xw, Yw, Zw) of the clicked point.
  3. Subtract the camera position from (Xw, Yw, Zw) to get the direction vector of the ray.
  4. Test the ray first against each candidate object's bounding volume.
  5. For each object that passes the bounding-volume test, traverse its faces and test each one for intersection with the ray.
  6. Sort the intersections by depth (distance along Z) and return them.

Step 1: convert screen coordinates to WebGL (NDC) coordinates. For the detailed derivation, see blog.csdn.net/u011332271/…

```javascript
mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
```
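This conversion can be wrapped in a small helper for illustration. A minimal sketch (the function name `toNDC` is mine, and it assumes the canvas fills the whole window; otherwise subtract the canvas offset first):

```javascript
// Hypothetical helper: convert screen (client) coordinates to WebGL NDC,
// where x and y each run from -1 to +1 and y points up.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
  };
}

// The window center maps to the NDC origin, the top-left corner to (-1, 1):
console.log(toNDC(400, 300, 800, 600)); // { x: 0, y: 0 }
console.log(toNDC(0, 0, 800, 600));     // { x: -1, y: 1 }
```

Note the sign flip on y: screen coordinates grow downward while NDC grows upward.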

Steps 2 and 3 compute the starting point and direction of the ray.

```javascript
raycaster.setFromCamera( mouse, camera );
```

Take a look at the source of the Raycaster.setFromCamera() method, where .unproject(camera) transforms the NDC point back into world space.

```javascript
setFromCamera( coords, camera ) {
  if ( camera && camera.isPerspectiveCamera ) {
    // The ray origin is the camera position
    this.ray.origin.setFromMatrixPosition( camera.matrixWorld );
    // Unproject the screen coordinates to world coordinates, then subtract
    // the ray origin to get the direction vector from the origin to the
    // clicked point
    this.ray.direction.set( coords.x, coords.y, 0.5 ).unproject( camera ).sub( this.ray.origin ).normalize();
    this.camera = camera;
  } else if ( camera && camera.isOrthographicCamera ) {
    // Set origin in plane of camera
    this.ray.origin.set( coords.x, coords.y, ( camera.near + camera.far ) / ( camera.near - camera.far ) ).unproject( camera );
    this.ray.direction.set( 0, 0, - 1 ).transformDirection( camera.matrixWorld );
    this.camera = camera;
  } else {
    console.error( 'THREE.Raycaster: Unsupported camera type.' );
  }
},
```
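The perspective branch's direction computation is just vector subtraction plus normalization. A standalone sketch with plain objects (the names are mine; `worldPoint` stands for the already-unprojected click position):

```javascript
// Direction from the ray origin (the camera position) to a world-space
// point, normalized to unit length - the same sub().normalize() chain
// used in setFromCamera().
function rayDirection(origin, worldPoint) {
  const dx = worldPoint.x - origin.x;
  const dy = worldPoint.y - origin.y;
  const dz = worldPoint.z - origin.z;
  const len = Math.hypot(dx, dy, dz);
  return { x: dx / len, y: dy / len, z: dz / len };
}

// A camera at (0, 0, 10) looking toward the origin yields the -z direction:
console.log(rayDirection({ x: 0, y: 0, z: 10 }, { x: 0, y: 0, z: 0 }));
// { x: 0, y: 0, z: -1 }
```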

In step 4, the ray is first tested against each detected object's bounding volume. Intersecting the bounding volume first excludes objects that cannot possibly intersect, reducing the amount of computation that follows.

```javascript
const intersects = raycaster.intersectObjects( scene.children );
```

An excerpt from the code:

```javascript
// Checking boundingSphere distance to ray
if ( geometry.boundingSphere === null ) geometry.computeBoundingSphere();
_sphere.copy( geometry.boundingSphere );
_sphere.applyMatrix4( matrixWorld );
if ( raycaster.ray.intersectsSphere( _sphere ) === false ) return;

// Multiply the ray by the inverse of the model matrix. This applies the
// inverse of the model's transform to the ray, preserving the ray's position
// relative to the model, so the per-face intersection tests can use the
// model's untransformed vertices directly. Otherwise, every vertex of the
// model would have to be transformed before being tested against the
// (unchanged) ray.
_inverseMatrix.copy( matrixWorld ).invert();
_ray.copy( raycaster.ray ).applyMatrix4( _inverseMatrix );

// Check boundingBox before continuing
if ( geometry.boundingBox !== null ) {
  if ( _ray.intersectsBox( geometry.boundingBox ) === false ) return;
}
```

The _ray computation is illustrated in the figure below: transforming the ray instead of the model keeps the ray's position relative to the model without having to transform the model itself.
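The bounding-sphere rejection can be sketched without THREE at all. A hypothetical standalone version of the same idea as Ray.intersectsSphere, assuming a normalized direction vector:

```javascript
// Reject a sphere when the squared distance from its center to the ray
// exceeds the squared radius. t is the projection of the origin-to-center
// vector onto the ray direction, clamped at 0 so spheres behind the ray
// origin measure their distance to the origin itself.
function rayIntersectsSphere(origin, dir, center, radius) {
  const ox = center.x - origin.x;
  const oy = center.y - origin.y;
  const oz = center.z - origin.z;
  const t = Math.max(0, ox * dir.x + oy * dir.y + oz * dir.z);
  const distSq = ox * ox + oy * oy + oz * oz - t * t;
  return distSq <= radius * radius;
}

const origin = { x: 0, y: 0, z: 0 };
const dir = { x: 0, y: 0, z: 1 }; // looking down +z
console.log(rayIntersectsSphere(origin, dir, { x: 0, y: 0, z: 5 }, 1)); // true
console.log(rayIntersectsSphere(origin, dir, { x: 0, y: 3, z: 5 }, 1)); // false
```

Objects whose bounding sphere fails this cheap test are skipped before any per-face work is done.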

In steps 5 and 6, each model that passed the bounding-volume tests is traversed face by face to test for intersection (see the raycast() method of the Mesh object for the details; not pasted here). Finally, the intersections are returned sorted by depth.

```javascript
intersects.sort( ascSort );
return intersects;
```
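The ascSort comparator is essentially a sort on the distance field, so nearer hits come first. A quick sketch:

```javascript
// Sort intersections by distance from the ray origin, nearest first -
// the ordering that intersectObjects() returns.
function ascSort(a, b) {
  return a.distance - b.distance;
}

const intersects = [
  { distance: 42.0, name: 'far cube' },
  { distance: 3.5, name: 'near cube' },
];
intersects.sort(ascSort);
console.log(intersects.map((i) => i.name)); // [ 'near cube', 'far cube' ]
```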

That is the principle of ray picking in THREE. As you can see, when a scene contains many models, testing every model against the ray is very expensive. There are ways to optimize this, however. One is to manage the scene's models with an octree so that only a small subset is tested each time (not expanded on here); another is to pick through a buffer.

Buffer picking

Buffer picking renders the vertex data a second time into an FBO (framebuffer object), but with each vertex colored by an RGB value computed from the object's ID, guaranteeing that every object gets a unique color. On click, reading the RGB value at the clicked pixel and converting it back to an ID tells us which object was clicked. In effect, buffer picking trades space (more data) for time (faster picking), and because it does not need to traverse the models, the models can be batched.

Steps of buffer picking in THREE

  1. Prepare two sets of geometry data, one rendered to the screen and one rendered to the FBO, and store the information of each object.
  2. Create a WebGLRenderTarget (an FBO, not output directly to the screen).
  3. Render to the FBO and determine which object was clicked by converting the color bits read back into an ID value.
  4. Use the ID to look up the clicked object's information, and generate a cube around the clicked object to indicate the highlight.
  5. Finally, render the scene normally and output it to the color buffer (the screen).

Let's take a look at the THREE example webgl_interactive_cubes_gpu.


Step 1: Prepare two sets of data.

```javascript
for ( let i = 0; i < 5000; i ++ ) {
  let geometry = new THREE.BoxBufferGeometry();

  // Generate a random model matrix
  const position = new THREE.Vector3();
  position.x = Math.random() * 10000 - 5000;
  position.y = Math.random() * 6000 - 3000;
  position.z = Math.random() * 8000 - 4000;

  const rotation = new THREE.Euler();
  rotation.x = Math.random() * 2 * Math.PI;
  rotation.y = Math.random() * 2 * Math.PI;
  rotation.z = Math.random() * 2 * Math.PI;

  const scale = new THREE.Vector3();
  scale.x = Math.random() * 200 + 100;
  scale.y = Math.random() * 200 + 100;
  scale.z = Math.random() * 200 + 100;

  quaternion.setFromEuler( rotation );
  matrix.compose( position, quaternion, scale );
  geometry.applyMatrix4( matrix );

  // Give the geometry random colors for display
  applyVertexColors( geometry, color.setHex( Math.random() * 0xffffff ) );
  // Push to the array; the first set of data is ready
  geometriesDrawn.push( geometry );

  geometry = geometry.clone();
  // Set the color from i, so that each object's color is unique
  applyVertexColors( geometry, color.setHex( i ) );
  // The second set of data is ready
  geometriesPicking.push( geometry );

  // Store each object's matrix information, indexed by i
  pickingData[ i ] = { position: position, rotation: rotation, scale: scale };
}

// Merge both sets and add them to their respective scenes
scene.add( new THREE.Mesh( BufferGeometryUtils.mergeBufferGeometries( geometriesDrawn ), defaultMaterial ) );
pickingScene.add( new THREE.Mesh( BufferGeometryUtils.mergeBufferGeometries( geometriesPicking ), pickingMaterial ) );
```

Steps 2 through 5:

```javascript
pickingTexture = new THREE.WebGLRenderTarget( 1, 1 );
```

The render target is set to (1, 1) because we only need the color value of a single pixel; making it any larger would only increase the computation.
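In the pick() function below, the mouse coordinate handed to setViewOffset() is converted from CSS pixels to device pixels, and `| 0` truncates the result to an integer (a cheap floor for non-negative values). A sketch of that conversion, with a hypothetical helper name:

```javascript
// Mirrors the expression `mouse.x * window.devicePixelRatio | 0`:
// scale by the device pixel ratio, then truncate with a bitwise OR.
function toDevicePixel(cssPixel, devicePixelRatio) {
  return (cssPixel * devicePixelRatio) | 0;
}

console.log(toDevicePixel(100.6, 2));  // 201
console.log(toDevicePixel(35.4, 1.5)); // 53
```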

```javascript
function pick() {
  // Render into the FBO (pickingTexture).
  // Set the view offset so that only the single pixel under the mouse is rendered
  camera.setViewOffset( renderer.domElement.width, renderer.domElement.height,
    mouse.x * window.devicePixelRatio | 0, mouse.y * window.devicePixelRatio | 0, 1, 1 );
  renderer.setRenderTarget( pickingTexture );
  renderer.render( pickingScene, camera );

  // Restore the camera to its normal state
  camera.clearViewOffset();

  const pixelBuffer = new Uint8Array( 4 );
  // Read the color value from pickingTexture
  renderer.readRenderTargetPixels( pickingTexture, 0, 0, 1, 1, pixelBuffer );

  // Convert the color back to an ID value
  const id = ( pixelBuffer[ 0 ] << 16 ) | ( pixelBuffer[ 1 ] << 8 ) | ( pixelBuffer[ 2 ] );
  // Get the matrix information of the clicked object
  const data = pickingData[ id ];

  if ( data ) {
    // Move highlightBox to the corresponding position
    if ( data.position && data.rotation && data.scale ) {
      highlightBox.position.copy( data.position );
      highlightBox.rotation.copy( data.rotation );
      highlightBox.scale.copy( data.scale ).add( offset );
      highlightBox.visible = true;
    }
  } else {
    // Nothing was selected: hide highlightBox
    highlightBox.visible = false;
  }
}
```

The code above captures the essence of buffer picking. Of course, you do not have to generate a highlightBox to indicate the highlight; you could instead modify the vertex colors in the corresponding geometry.

```javascript
function render() {
  controls.update();
  // Render the FBO and find the object under the mouse
  pick();
  // Switch back to the color buffer and render normally to the screen
  renderer.setRenderTarget( null );
  renderer.render( scene, camera );
}
```


Advantages and disadvantages of ray picking and buffer picking

From the explanation above, it is easy to see that ray picking returns every object the ray hits, along with the specific face and UV information of each hit. That information matters in a shooting game, for example for hitting people behind walls or leaving bullet holes in walls. Its disadvantage is that it lags when the scene has many objects, and the models cannot be batched. Buffer picking is the opposite: it does not lag even with many models, but it cannot provide that detailed information. Buffer picking is therefore better suited to picking BIM models.

OK, that is my understanding of ray picking and buffer picking in THREE. If anything here is wrong, please point it out.