Description: today's topics are environment setup and geometry composition.

1. The tools

Three.js: a WebGL-based JavaScript 3D library

Vue.js: the project runs in a Vue environment, scaffolded with Vue CLI

Visual Studio Code: the development tool

Chrome: the browser

2. Prepare

1. npm install -g @vue/cli, then vue create ThreejsExport

2. npm install three to install three.js

3. Empty the template content of src/components/HelloWord.vue, add ref="container" to the inner div, and set the CSS styles:

```html
<template>
  <div class="hello" ref="container"></div>
</template>

<script>
import * as Three from 'three'

export default {
  name: 'HelloWorld',
  data () {
    return {}
  },
  props: {
    msg: String
  },
  methods: {
    init () {},
    run () {}
  },
  mounted () {
    this.run()
  }
}
</script>

<style scoped>
.hello {
  height: 100vh;
}
</style>
```

3. Start

Let's start with the code for a basic scene.

```javascript
const container = this.$refs.container // grab the DOM node
const width = container.clientWidth
const height = container.clientHeight

// perspective camera
const camera = new Three.PerspectiveCamera(70, width / height, 0.01, 10)
camera.position.z = 0.6

const scene = new Three.Scene()

const geometry = new Three.BoxGeometry(0.2, 0.2, 0.2)
const material = new Three.MeshNormalMaterial()
const mesh = new Three.Mesh(geometry, material)
scene.add(mesh)

// the snippet uses `renderer` below, so a WebGLRenderer has to be
// created and attached to the container first
const renderer = new Three.WebGLRenderer({ antialias: true })
renderer.setSize(width, height)
container.appendChild(renderer.domElement)

let animate = () => {
  requestAnimationFrame(animate)
  mesh.rotation.x += 0.01
  mesh.rotation.y += 0.02
  renderer.render(scene, camera)
}
animate()
```

This gives us a simple three-dimensional object to start from.

The sections below walk through the production process step by step.

A WebGL model has roughly these parts:

Container: the DOM element that stores the canvas

Scene: the scene object that holds everything to be rendered

4. Mesh model

  1. BoxGeometry(60, 40, 40): besides width, height, and depth, BoxGeometry takes three segment parameters: widthSegments, heightSegments, and depthSegments. When wireframe mode is enabled ({wireframe: true}) you will see the following effect.

  2. BoxGeometry(0.2, 0.2, 0.2, 1, 1, 1):

```javascript
const geometry = new Three.BoxGeometry(0.2, 0.2, 0.2, 1, 1, 1)
const material = new Three.MeshBasicMaterial({ wireframe: true })
material.color = new Three.Color('green')
// const material = new Three.MeshNormalMaterial()
const mesh = new Three.Mesh(geometry, material)
scene.add(mesh)
```

    Now let's change the segment parameters:

```javascript
// (width, height, depth, widthSegments, heightSegments, depthSegments)
const geometry = new Three.BoxGeometry(0.2, 0.2, 0.2, 2, 2, 1)
```
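To see what the segment parameters actually buy you, it helps to count triangles: each face of the box is a grid of segU x segV cells, and each cell is two triangles. A small sketch (the helper name boxTriangleCount is ours, not part of three.js):

```javascript
// Hypothetical helper: number of triangles a BoxGeometry generates for
// given segment counts (the size arguments play no role here).
// Opposite faces come in pairs, so each product appears twice.
function boxTriangleCount (widthSegments, heightSegments, depthSegments) {
  const cellsPerFacePair =
    widthSegments * heightSegments + // front + back
    widthSegments * depthSegments +  // top + bottom
    heightSegments * depthSegments   // left + right
  return 2 * 2 * cellsPerFacePair // 2 faces per pair, 2 triangles per cell
}

console.log(boxTriangleCount(1, 1, 1)) // 12 - the minimal box
console.log(boxTriangleCount(2, 2, 1)) // 32 - the variant above
```

This is why the wireframe gets visibly denser the moment any segment count goes above 1.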

  3. Material: THREE.MeshLambertMaterial and friends (too much content for here; materials will get a separate article later).

  4. PerspectiveCamera(fov, aspect, near, far): this is the common perspective projection.

    Fov - the vertical field-of-view angle of the camera frustum

    Aspect - the aspect ratio of the camera frustum

    Near - the near plane of the camera frustum

    Far - the far plane of the camera frustum
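A concrete way to read these numbers: at distance d in front of the camera, the visible slice of the world is a rectangle 2 * d * tan(fov / 2) high and aspect times that wide. Plugging in the demo's own values (fov 70, camera at z = 0.6, cube at the origin) shows why the 0.2 cube fits comfortably; the helper name below is ours:

```javascript
// fov is the *vertical* angle, in degrees. At distance d the camera
// sees a rectangle of height 2 * d * tan(fov / 2); width = height * aspect.
function visibleHeightAtDistance (fovDegrees, distance) {
  const fovRadians = (fovDegrees * Math.PI) / 180
  return 2 * distance * Math.tan(fovRadians / 2)
}

const fov = 70 // same fov as the demo camera
const d = 0.6  // camera.position.z; the cube sits at the origin
console.log(visibleHeightAtDistance(fov, d).toFixed(3)) // 0.840
```

So the camera sees roughly 0.84 world units of height at the cube's depth, four times the cube's 0.2 edge.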

PerspectiveCamera partially inherits from the camera base class.

Properties:

.aspect - the aspect ratio of the camera frustum, generally the aspect ratio of the canvas. Default is 1 (square).

.far - the far plane of the camera frustum. Default is 2000.

.filmGauge - film size, default 35 (a 35mm lens). This parameter does not affect the projection matrix unless .filmOffset is set to a non-zero value.

.filmOffset - horizontal film offset in mm, default 0.

.fov - the vertical field-of-view angle of the camera.

.isPerspectiveCamera - flags whether the class (or a derived class) is a perspective camera. Default is true; used internally for optimization and should not be changed.

.near - the near plane of the camera frustum. Default is 0.1.

.view - the frustum window specification, or null. Set with .setViewOffset() and cleared with .clearViewOffset().

.zoom - reads and writes the camera zoom factor.

Methods:

.clearViewOffset() - clears an offset set with .setViewOffset().

.getEffectiveFOV() - returns the current FOV value in degrees, taking the camera zoom into account.

.getFilmHeight() - returns the height of the film image; in portrait mode the result equals .filmGauge.

.getFilmWidth() - returns the width of the film image; in landscape mode the result equals .filmGauge.

.setFocalLength(focalLength) - sets the FOV from a focal length relative to the current .filmGauge (35mm by default).

.setViewOffset(fullWidth, fullHeight, x, y, width, height) - fullWidth/fullHeight: the full width/height of the multi-view layout; x/y: the horizontal/vertical offset of this sub-camera; width/height: the display width/height of this sub-camera.

As an example: say the interface should display 6 images in a 3x2 grid, labelled A to F, and each individual screen is 1920x1080:

    +---+---+---+
    | A | B | C |
    +---+---+---+
    | D | E | F |
    +---+---+---+

It can be written this way:

```javascript
var w = 1920
var h = 1080
var fullWidth = w * 3
var fullHeight = h * 2

// A
camera.setViewOffset( fullWidth, fullHeight, w * 0, h * 0, w, h )
// B
camera.setViewOffset( fullWidth, fullHeight, w * 1, h * 0, w, h )
// C
camera.setViewOffset( fullWidth, fullHeight, w * 2, h * 0, w, h )
// D
camera.setViewOffset( fullWidth, fullHeight, w * 0, h * 1, w, h )
// E
camera.setViewOffset( fullWidth, fullHeight, w * 1, h * 1, w, h )
// F
camera.setViewOffset( fullWidth, fullHeight, w * 2, h * 1, w, h )
```

.updateProjectionMatrix() - must be called whenever a camera parameter changes.

.toJSON() - returns the camera in JSON format.
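The six setViewOffset calls differ only in their offsets, so they can be generated. A small sketch (the helper name viewOffsetFor is ours) that computes the arguments for any tile of a cols x rows video wall:

```javascript
// Hypothetical helper: compute the setViewOffset arguments for tile
// (col, row) of a cols x rows wall where every screen is w x h pixels.
function viewOffsetFor (col, row, cols, rows, w, h) {
  return {
    fullWidth: w * cols,   // total width of the whole wall
    fullHeight: h * rows,  // total height of the whole wall
    x: w * col,            // horizontal offset of this sub-camera
    y: h * row,            // vertical offset of this sub-camera
    width: w,
    height: h
  }
}

// Screen "E" is the middle tile of the bottom row in the 3x2 example.
const e = viewOffsetFor(1, 1, 3, 2, 1920, 1080)
console.log(e.fullWidth, e.fullHeight, e.x, e.y) // 5760 2160 1920 1080
// On the machine driving E:
// camera.setViewOffset(e.fullWidth, e.fullHeight, e.x, e.y, e.width, e.height)
```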

OrthographicCamera focuses on representing the actual size of objects: there is no "near things look bigger, far things look smaller" effect. It is generally used for drafting and modeling.

Constructor:

```javascript
var camera = new THREE.OrthographicCamera(left, right, top, bottom, near, far)
```

Parameters:

left - the left plane of the renderable space

right - the right plane of the renderable space

top - the top plane of the renderable space

bottom - the bottom plane of the renderable space

near - relative to the camera position, the scene is rendered starting from this point

far - relative to the camera position, the scene is rendered up to this point

These six parameters are the positions of the six faces of the space the orthographic camera photographs; together they enclose a cuboid called the frustum. Only objects inside the frustum can appear on screen; objects outside it are clipped before display.

Properties:

.zoom - gets and sets the camera zoom factor.

.left, .right, .top, .bottom, .near, .far - the left, right, top, bottom, front, and back of the camera frustum.

Methods:

.setViewOffset(fullWidth, fullHeight, x, y, width, height) - x: horizontal offset of the sub-camera; y: vertical offset of the sub-camera; width/height: the size of the sub-camera view. This method sets a view offset inside a larger frustum, which is useful for multi-window or multi-monitor/multi-machine setups.
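In practice the left/right/top/bottom values are usually derived from the canvas aspect ratio, so that a unit in the scene stays square on screen. A common pattern, sketched here with our own helper name (not from the article):

```javascript
// Derive orthographic frustum bounds from the canvas aspect ratio and a
// chosen frustum height, so the view is not stretched.
function orthoBounds (canvasWidth, canvasHeight, frustumHeight) {
  const aspect = canvasWidth / canvasHeight
  return {
    left: -frustumHeight * aspect / 2,
    right: frustumHeight * aspect / 2,
    top: frustumHeight / 2,
    bottom: -frustumHeight / 2
  }
}

const b = orthoBounds(1920, 1080, 2) // 16:9 canvas, frustum 2 units tall
console.log(b) // left ~ -1.78, right ~ 1.78, top = 1, bottom = -1
// const camera = new THREE.OrthographicCamera(b.left, b.right, b.top, b.bottom, 0.1, 100)
```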

There is too much camera material to cover here; cameras will get a separate chapter later.

5. Renderer

```javascript
this.render = function ( scene, camera, renderTarget, forceClear ) {

  // 1. reset caching for this frame
  // ...

  // 2. update scene graph
  if ( scene.autoUpdate === true ) scene.updateMatrixWorld();

  // 3. update camera matrices and frustum
  if ( camera.parent === null ) camera.updateMatrixWorld();
  // ...

  // 4. init WebGLRenderState
  currentRenderState = renderStates.get( scene, camera );
  currentRenderState.init();

  scene.onBeforeRender( _this, scene, camera, renderTarget );

  // 5. view matrix calculation: the product of the camera's projection
  //    matrix and the inverse of the camera's world matrix
  _projScreenMatrix.multiplyMatrices( camera.projectionMatrix, camera.matrixWorldInverse );
  _frustum.setFromMatrix( _projScreenMatrix );

  _localClippingEnabled = this.localClippingEnabled;
  _clippingEnabled = _clipping.init( this.clippingPlanes, _localClippingEnabled, camera );

  // 6. init WebGLRenderList
  currentRenderList = renderLists.get( scene, camera );
  currentRenderList.init();

  projectObject( scene, camera, _this.sortObjects );
  // ...

  // 7. shadow rendering
  if ( _clippingEnabled ) _clipping.beginShadows();
  var shadowsArray = currentRenderState.state.shadowsArray;
  shadowMap.render( shadowsArray, scene, camera );
  currentRenderState.setupLights( camera );
  if ( _clippingEnabled ) _clipping.endShadows();

  if ( this.info.autoReset ) this.info.reset();

  if ( renderTarget === undefined ) {
    renderTarget = null;
  }
  this.setRenderTarget( renderTarget );

  // 8. background rendering
  background.render( currentRenderList, scene, camera, forceClear );

  // 9. render scene
  var opaqueObjects = currentRenderList.opaque;
  var transparentObjects = currentRenderList.transparent;

  if ( scene.overrideMaterial ) {

    // 10. force the scene's overrideMaterial onto every rendered object
    var overrideMaterial = scene.overrideMaterial;
    if ( opaqueObjects.length ) renderObjects( opaqueObjects, scene, camera, overrideMaterial );
    if ( transparentObjects.length ) renderObjects( transparentObjects, scene, camera, overrideMaterial );

  } else {

    // 11. render opaque and transparent objects separately
    // opaque pass (front-to-back order)
    if ( opaqueObjects.length ) renderObjects( opaqueObjects, scene, camera );
    // transparent pass (back-to-front order)
    if ( transparentObjects.length ) renderObjects( transparentObjects, scene, camera );

  }

  // Generate mipmap if we're using any kind of mipmap filtering
  // ...

  // Ensure depth buffer writing is enabled so it can be cleared on next render
  state.buffers.depth.setTest( true );
  state.buffers.depth.setMask( true );
  state.buffers.color.setMask( true );
  state.setPolygonOffset( false );

  scene.onAfterRender( _this, scene, camera );
  // ...

  currentRenderList = null;
  currentRenderState = null;

};
```

render() is the core of rendering; at a glance it does roughly the following.

  1. Reset caching for this frame.
  2. Update the scene graph.
  3. Update camera matrices and frustum.
  4. Init WebGLRenderState.
  5. Compute the view matrix: the product of the camera's projection matrix and the inverse of the camera's world matrix.
  6. Init WebGLRenderList.
  7. Shadow rendering.
  8. Background rendering.
  9. Render the scene.
  10. If scene.overrideMaterial is set, force it onto every rendered object.
  11. Render opaque and transparent objects separately.
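Step 11 hides a detail worth spelling out: opaque objects are drawn front-to-back so the depth test can reject hidden fragments early, while transparent objects are drawn back-to-front so alpha blending composites correctly. A toy sketch of the two sort orders over a mock render list (the list and its fields are ours, not the renderer's internals):

```javascript
// Mock render list; z is the distance from the camera.
const items = [
  { name: 'near', z: 1, transparent: false },
  { name: 'far', z: 9, transparent: false },
  { name: 'glassFar', z: 8, transparent: true },
  { name: 'glassNear', z: 2, transparent: true }
]

// opaque pass: front-to-back (ascending distance)
const opaque = items.filter(i => !i.transparent).sort((a, b) => a.z - b.z)
// transparent pass: back-to-front (descending distance)
const transparent = items.filter(i => i.transparent).sort((a, b) => b.z - a.z)

console.log(opaque.map(i => i.name))      // [ 'near', 'far' ]
console.log(transparent.map(i => i.name)) // [ 'glassFar', 'glassNear' ]
```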

6. Animation: animate

```javascript
let animate = () => {
  requestAnimationFrame(animate)
  mesh.rotation.x += 0.01
  mesh.rotation.y += 0.02
  renderer.render(scene, camera)
}
animate()
```

The window.requestAnimationFrame() method tells the browser that you want to run an animation and asks it to call a specified callback function before the next repaint. The method takes that callback function as its argument.
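One caveat: the callback runs once per display refresh, so a fixed += 0.01 per frame spins twice as fast on a 120 Hz monitor as on a 60 Hz one. A common fix, sketched here under our own names (not part of the article's demo), is to scale by the elapsed time using the millisecond timestamp requestAnimationFrame passes to its callback:

```javascript
const SPEED = 0.6 // radians per second (an arbitrary choice)

// Returns a frame callback that advances the rotation by SPEED * dt,
// where dt is the time since the previous frame in seconds.
function makeStep (mesh) {
  let last = null
  return function step (now) {
    if (last !== null) {
      const dt = (now - last) / 1000
      mesh.rotation.x += SPEED * dt
    }
    last = now
    // in the real loop: renderer.render(scene, camera)
    // and: requestAnimationFrame(step)
  }
}

// Simulating two frames 16.7 ms apart (roughly 60 Hz):
const mesh = { rotation: { x: 0 } }
const step = makeStep(mesh)
step(0)
step(16.7)
console.log(mesh.rotation.x.toFixed(4)) // 0.0100
```

With this pattern the cube turns at the same speed regardless of refresh rate.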

Let's change the animation parameters and see what happens.

```javascript
let animate = () => {
  requestAnimationFrame(animate)
  // mesh.rotation.x += 0.01
  renderer.render(scene, camera)
}
animate()
```

The direction of rotation follows the orientation of the 3D space; three.js uses a right-handed Cartesian coordinate system.
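To make the rotation direction concrete, here is the plain 2D rotation that increasing rotation.z applies to each vertex (a minimal sketch; real three.js composes this into a 4x4 matrix): a positive angle turns +x toward +y, counter-clockwise when looking down the z axis toward the origin.

```javascript
// Rotate a point [x, y, z] by `angle` radians about the z axis,
// using the standard 2D rotation matrix on the x/y components.
function rotateZ ([x, y, z], angle) {
  const c = Math.cos(angle)
  const s = Math.sin(angle)
  return [x * c - y * s, x * s + y * c, z]
}

const [x, y] = rotateZ([1, 0, 0], Math.PI / 2) // quarter turn
console.log(x.toFixed(2), y.toFixed(2)) // 0.00 1.00
```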

Conclusion:

There is a lot of related content and it is easy to digress; the next article will focus on cameras.

I hope you will follow along.

Exploration of WebGL (II): WebGL Scene Construction

Exploration of WebGL (I): Understanding WebGL

reactVR