The reason

Recently I was working on the design of our official website. After browsing the site of a cloud-service product, the designer excitedly told me they wanted a premium rose-gold stroke effect. I felt like a startled little bird, thinking: surely a color value would be enough? Apparently not, so I decided to dig in and see what it would take.

The result

Main technology stack

WebGL, three.js, the GLSL shading language, the math of the MVP matrices, and basic modeling in Blender.

WebGL

For WebGL itself, the WebGL Programming Guide is worth reading in detail. A thorough understanding of the native WebGL API gives you a much better feel for how to use a framework on top of it: you get a general idea of how three.js is designed, and of what it adds or keeps compared with the raw WebGL API.

three.js

The three.js website has plenty of introductory examples, such as Three.js Fundamentals, and Steve Kwok's tech blog is also worth a look. three.js saves us a lot of the tedious WebGL API plumbing, and it abstracts high-level objects such as cameras, scenes, matrices, and vectors that are convenient to work with.

GLSL Shading Language

Shaders play a pivotal role in this implementation. I only know the basics myself, so I recommend the Bump Lab articles and a GLSL language primer. The most enlightening resource, though, is The Book of Shaders, which is full of interesting shader tricks. The ShaderToy community also regularly has people sharing impressive effects, such as ray marching with SDFs (signed distance fields), which models scenes with pure math instead of model files, and so on.
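As a taste of that pure-math modeling, here is a minimal GLSL sketch (not a shader from this project): the signed distance function of a sphere is just the distance from a point to the sphere's center minus the radius, and a ray marcher steps along a ray by that distance until it reaches the surface.

// Minimal signed distance function sketch (not part of this project's shaders).
// Negative inside the sphere, zero on its surface, positive outside.
float sdSphere(vec3 p, float r) {
    return length(p) - r;
}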

The math of MVP matrix

For our virtual world space, we need a chain of transformations of our own to go from 3D coordinates to the 2D view. It is best to brush up on basic vector and matrix knowledge first, and then work through the model, view, and projection matrices.
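As a sketch of the idea (the standard convention, and exactly what the vertex shader later computes with projectionMatrix * modelViewMatrix): a vertex in model space is carried into clip space by the model, view, and projection matrices in turn:

p_{clip} = M_{projection} \cdot M_{view} \cdot M_{model} \cdot p_{model}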

Basic modeling in Blender

I use Blender as my modeling tool; it is free and open source. I have also used Maya, 3ds Max, and C4D before, but I find Blender very pleasant, and its modeling workflow and feel keep pleasantly surprising me. That said, each modeling tool has its own characteristics, so start with whichever suits you best.

Step 1: Modeling

three.js units

three.js uses SI units, the international standard; look it up on Wikipedia if you want to know more. Here the model is built with 1 three.js unit = 1 m (meter).

Model coordinate position

As shown in the figure, the model can be placed according to this coordinate system.

Model coordinate scaling unit

The scale to use depends on the parameters of the perspective camera you define in three.js, so the bounding box of my model ends up roughly as follows.

The coordinates of the top of the model are roughly as follows.

Exporting the model

I export in glTF format. When exporting, remember to strip the lights, camera, and animation; we keep only the model itself.

Step 2: Write the code

Loading a resource file

Since I use the webpack setup from Vue CLI for the project, the relevant loaders are raw-loader and url-loader. Add the following to vue.config.js:

    config.module
      .rule('gltf')
      .test(/\.glb$/)
      .use('url-loader')
      .loader('url-loader')
      .end()
    config.module
      .rule('shader')
      .test(/\.(glsl)$/)
      .use('raw-loader')
      .loader('raw-loader')
      .end()

Writing a loading function

Here we wrap three.js's GLTFLoader in a simple Promise:

import * as THREE from 'three/build/three.module'
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'
const loader = new GLTFLoader()
Then, in the component:
methods:{
    loadModel(url) {
      return new Promise((resolve, reject) => {
        loader.load(
          url,
          function(gltf) {
            console.log(gltf)
            resolve(gltf)
          },
          undefined,
          function(error) {
            reject(error)
          }
        )
      })
    },
}


To reference the resource we use require, which at build time resolves to either the asset URL or a Base64 data URI, depending on how url-loader handles the file size. That value is what eventually gets passed into loadModel in the mounted hook shown at the end of this step.
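For example (the path here is hypothetical, not from the original project):

    // Hypothetical usage: with the url-loader rule above, require() resolves
    // the .glb at build time to a URL or a Base64 data URI. Depending on the
    // loader options you may need `.default`, as with the shader files
    // required later.
    const modelSrc = require('@/assets/models/logo.glb')

That value can then be passed to the component as its src prop.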


Initializing the scene

    initScene() {
      this.scene = new THREE.Scene()
    }

Initializing the perspective camera and the renderer

    initRender() {
      const width = this.$refs.render.clientWidth
      const height = this.$refs.render.clientHeight * 0.8
      this.camera = new THREE.PerspectiveCamera(50, width / height, 0.1, 1000)
      this.camera.position.set(5, 5, 5)
      this.camera.lookAt(0, 0, 0)

      const renderer = new THREE.WebGLRenderer({
        canvas: this.$refs.render,
        antialias: true,
        alpha: true,
        precision: 'highp'
      })
      // Render at 2x pixel ratio (a renderer method, not a constructor option)
      renderer.setPixelRatio(2)
      // Back-face culling (this setter only exists in older three.js releases)
      renderer.setFaceCulling(THREE.CullFaceBack, THREE.FrontFaceDirectionCW)
      renderer.setSize(width, height)
      this.render = renderer
    }

Adding the lights

    initLight() {
      const spotLight = new THREE.SpotLight(0x111111, 4)
      spotLight.position.set(8, 8, 8)
      spotLight.castShadow = true

      spotLight.shadow.mapSize.width = 1
      spotLight.shadow.mapSize.height = 1

      spotLight.shadow.camera.near = 10
      spotLight.shadow.camera.far = 40
      spotLight.shadow.camera.fov = 10
      const light = new THREE.AmbientLight(0xffffff, 0.02) // soft white light
      this.scene.add(light)
      this.scene.add(spotLight)
    }

Initializing the materials (the key part)

For this hybrid rendering, one thing to be careful about is that the wireframe should not sit exactly on the solid model's surface, so make the wireframe's scale slightly larger.

    initMaterial() {
      this.material = new THREE.ShaderMaterial({
        uniforms: {
          // Base model scale 0.89
          scale: { value: 0.89 }
        },
        vertexShader: require('./shaders/v.glsl').default,
        fragmentShader: require('./shaders/f.glsl').default
      })
      this.frameMaterial = new THREE.ShaderMaterial({
        uniforms: {
          // Scale 1.09 > 0.89
          scale: { value: 1.09 }
        },
        // Turn on wireframe mode; this renders with gl.LINES
        wireframe: true,
        wireframeLinejoin: 'bevel',
        wireframeLinewidth: 1.5,
        vertexShader: require('./shaders/v.glsl').default,
        fragmentShader: require('./shaders/fshow.glsl').default
      })
    }

Here we create two material objects up front: one for the wireframe and one for the solid model.

Adding objects to the scene

Here we use three.js's Object3D.clone() to make a deep copy of the loaded glTF scene object.


    addModelToScene() {
      try {
        const object = this.modelObject.scene
        // Base solid model
        const normalObject = object.clone()
        // normalObject.children.forEach(item => {
        //   item.material = this.material
        // })
        // Wireframe copy of the model
        const wireframe = object.clone()
        // Walk the wireframe copy's meshes and assign the wireframe material
        wireframe.children.forEach(item => {
          item.material = this.frameMaterial
        })
        // Offset the model position
        normalObject.position.set(-1, -1, 1)
        wireframe.position.set(-1, -1, 1)
        // Add both to the scene
        this.scene.add(normalObject)
        this.scene.add(wireframe)
        return true
      } catch (e) {
        return false
      }
    }

Render loop

    draw() {
      this.scene.rotation.x = this.rotate[1]
      this.scene.rotation.y = this.rotate[0]
      this.render.render(this.curScene, this.camera)
      // this.control.update()
      requestAnimationFrame(this.draw)
    }

The mounted hook

Since this is a Vue component, we assemble everything in the mounted hook:

async mounted() {
    this.initMaterial()
    this.initRender()
    this.initScene()

    this.curScene = this.scene
    const res = await this.loadModel(this.src)
    this.modelObject = res
    this.initLight()
    // this.control = new OrbitControls(this.camera, this.render.domElement)
    this.addModelToScene() && this.draw()
  }

The exciting part

Shader implementation

Since we need a custom effect, we use three.js's ShaderMaterial so that the model is rendered with our own shaders.

Uniform variable definition

We only use scale here, so that is the only uniform we need to define:

  this.frameMaterial = new THREE.ShaderMaterial({
    uniforms: {
      scale: { value: 1.09 }
    },
    wireframe: true,
    wireframeLinejoin: 'bevel',
    wireframeLinewidth: 1.5,
    vertexShader: require('./shaders/v.glsl').default,
    fragmentShader: require('./shaders/fshow.glsl').default
  })

Vertex shader

Before writing the shaders, let's first understand the difference between ShaderMaterial and RawShaderMaterial. "Raw", as the name implies, strips out the built-in variables and functions you would otherwise get for free, which is useful if you want to optimize your shader code as much as possible. With ShaderMaterial, three.js prepends its own built-in variables and methods, such as the view matrix, projection matrix, and model matrix, so we basically only add our own variables and functions to the shader header and pass in some uniform parameters. Here I recommend using ShaderMaterial; for the variables three.js injects, refer to the WebGLProgram documentation. The vertex shader code is below, with a RawShaderMaterial comparison right after it.

precision highp float;
varying vec3 v_Normal;
uniform float scale;
void main() {
    // Multiply the vertex coordinates by scale to scale the center
    gl_Position = projectionMatrix *
        modelViewMatrix *
        vec4(position.xyz * scale, 1.0);
    // The fragment shader needs the vertex normal later, so pass it on as the varying v_Normal
    v_Normal = normalize(normalMatrix * normal);
}
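For comparison, here is a minimal sketch (not code from this project) of what the same vertex shader would have to declare by hand if RawShaderMaterial were used, since nothing is prepended for you:

// With RawShaderMaterial, the matrices and attributes that three.js normally
// prepends must be declared explicitly (the renderer still fills them in).
precision highp float;

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
uniform float scale;

attribute vec3 position;
attribute vec3 normal;

varying vec3 v_Normal;

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position * scale, 1.0);
    v_Normal = normalize(normalMatrix * normal);
}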

Fragment shader

For the fragment shader, to get the light-gradient effect we first implement a simple lighting model. Before that, you need to understand model normals.

A model's normals are the normal vectors of the surface patches formed by each vertex and its neighboring vertices.

Besides that, we also need an incident light vector.

We can then express a pixel's brightness as the dot product of the two vectors: the smaller the angle between the incident light and the normal, the more of the incoming light is reflected back toward the camera; as the angle grows, the light bounces off in other directions, less of it reaches the camera, and the pixel gets darker. For example, when the light is parallel to the normal the dot product is 1 and the pixel is brightest; when they are perpendicular it is 0. The formula is as follows, where the vector a is the incident light and n is the normal vector at a vertex of the model:


lightValue = dot(\vec{a}, \vec{n})

With that in place, we can write the fragment shader:

precision highp float;
// The interpolated normal vector passed in from the vertex shader
varying vec3 v_Normal;
void main() {
    // Define the incident light, coming from the camera/screen direction, normalized
    vec4 light = normalize(vec4(cameraPosition, 0.1));
    // Light intensity of the pixel; 2 is a multiplier that can be any float,
    // tweak it while watching the result
    float density = dot(light.xyz, normalize(v_Normal.xyz)) * 2.;
    // Mix the two colors (pink and purple) by the light intensity, multiply by
    // the squared intensity for a quadratic falloff, and add vec3(0.2) at the
    // end to lift the overall brightness
    gl_FragColor = vec4(mix(normalize(vec3(236., 72., 153.)), normalize(vec3(186., 85., 211.)), density - 0.4) * density * density + vec3(0.2), 1.0);
}


The finishing touch: follow the mouse

Here we define a prop on the component that receives the rotate value:

  props: {
    src: {
      type: String,
      default: ''
    },
    rotate: {
      type: Array,
      default: () => [0, 0]
    }
  }

The parent component listens for mouse movement and passes the values down:

    onMouseMove(e) {
      const width = window.innerWidth
      const height = this.$el.clientHeight
      let rotateX = 0.5 + (e.offsetX / width) * -0.5
      let rotateY = 0.5 + (e.offsetY / height) * -0.5
      this.rotate[0] = rotateX
      this.rotate[1] = rotateY
    }
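To tie this together, here is a hedged sketch of how the parent might wire things up; the component tag, handler, and data names are assumptions, not code from the original project:

    // Hypothetical parent wiring (names are assumptions):
    //
    //   <div class="hero" @mousemove="onMouseMove">
    //     <model-view :src="modelSrc" :rotate="rotate" />
    //   </div>
    //
    // The rotate array is passed to the child by reference, so mutating its
    // entries in onMouseMove is picked up by the child's draw() loop on the
    // next animation frame -- no extra event plumbing is needed.
    data() {
      return {
        rotate: [0, 0]
      }
    }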

I've put the full code on Gitee; you can find it at gitee.com/zhou-jianhu…