
Post-processing usually means applying an effect or filter to a 2D image. In a three.js scene, many meshes are rendered into a 2D image. Normally that image is rendered directly to the canvas and displayed in the browser, but we can instead render to another render target and apply post-effects before the result is drawn to the canvas. It is called post-processing because it happens after the main scene render.

The Pass object

Examples of post-processing include Instagram filters and Photoshop filters. Three.js has a post-processing pipeline of its own.

The way it works is: you create an EffectComposer and add some Pass objects to it. Each pass can apply a post-processing effect, such as a vignette, blur, lens flare, noise, or adjustments to hue, saturation, and contrast. Finally the combined result is rendered to the canvas.

It is important to understand how EffectComposer works. It creates two render targets, which we will call rtA and rtB, and you call EffectComposer.addPass to add passes in the order in which you want them applied.

First, the scene you passed to the RenderPass is rendered into rtA. Then rtA, whatever its contents, is handed to the next pass, which uses it as input, does something with it, and writes the result to rtB. rtB is then handed to the next pass, which uses rtB as input and writes back to rtA. This ping-pong continues through all the passes. Each pass has four basic options:

enabled → whether to use this pass

needsSwap → whether to swap rtA and rtB after this pass completes

clear → whether to clear before rendering this pass

renderToScreen → whether to render the current result to the canvas
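As a rough illustration, the ping-pong bookkeeping can be sketched in plain JavaScript. The pass objects and buffer names here are made up for the sketch; this mirrors the idea, not the real three.js internals:

```javascript
// Simplified sketch of EffectComposer's ping-pong between two render
// targets. Passes and render targets are modeled as plain objects and
// strings rather than real three.js classes.
function composeRender(passes) {
  let readBuffer = 'rtA';   // what the next pass reads from
  let writeBuffer = 'rtB';  // what the next pass writes to
  const steps = [];
  for (const pass of passes) {
    if (!pass.enabled) continue;  // option 1: skip disabled passes
    const dest = pass.renderToScreen ? 'canvas' : writeBuffer;
    steps.push(`${pass.name}: read ${readBuffer}, write ${dest}`);
    if (pass.needsSwap) {  // option 2: swap rtA and rtB afterwards
      [readBuffer, writeBuffer] = [writeBuffer, readBuffer];
    }
  }
  return steps;
}
```

Running it with three passes, where the last one has renderToScreen set, shows each intermediate result bouncing between rtA and rtB until the final pass writes to the canvas.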

Start

The first step is to create an EffectComposer

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
const composer = new EffectComposer(renderer);

Then, as our first pass, we add a RenderPass that renders our scene with our camera into the first render target.

composer.addPass(new RenderPass(scene, camera));

Next, we add a BloomPass. A BloomPass renders its input to a generally smaller render target and blurs the result, then adds the blurred result back over the original input. This makes the scene glow.

const bloomPass = new BloomPass(
    1,    // strength
    25,   // kernel size
    4,    // sigma ?
    256,  // blur render target resolution
);
composer.addPass(bloomPass);

Finally, we use FilmPass to add noise and scan lines.

const filmPass = new FilmPass(
    0.35,  // noise intensity
    0.025, // scanline intensity
    648,   // scanline count
    false, // grayscale
);
filmPass.renderToScreen = true;
composer.addPass(filmPass);

Since filmPass is the last pass, we tell it to render to the canvas by setting its renderToScreen property to true. Without that, it would render to the next render target instead.

A RenderPass is required at the start of almost every EffectComposer pipeline. The last things we need to do are call EffectComposer.render instead of WebGLRenderer.render and tell the EffectComposer to match the size of the canvas:

-function render(time) {
-  time *= 0.001;
+let then = 0;
+function render(now) {
+  now *= 0.001;  // convert to seconds
+  const deltaTime = now - then;
+  then = now;

   if (resizeRendererToDisplaySize(renderer)) {
     const canvas = renderer.domElement;
     camera.aspect = canvas.clientWidth / canvas.clientHeight;
     camera.updateProjectionMatrix();
+    composer.setSize(canvas.width, canvas.height);
   }

   cubes.forEach((cube, ndx) => {
     const speed = 1 + ndx * .1;
-    const rot = time * speed;
+    const rot = now * speed;
     cube.rotation.x = rot;
     cube.rotation.y = rot;
   });

-  renderer.render(scene, camera);
+  composer.render(deltaTime);

   requestAnimationFrame(render);
 }

EffectComposer.render takes deltaTime, the number of seconds since the last frame was rendered. It passes deltaTime on to the various passes in case any of them animate. In this example, the FilmPass is animated.
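The deltaTime bookkeeping in the render loop can be isolated into a small helper; here is a sketch (makeDeltaTimer is a made-up name for illustration, not three.js API):

```javascript
// Converts requestAnimationFrame's millisecond timestamp into a
// per-frame delta in seconds, the value composer.render expects.
function makeDeltaTimer() {
  let then = 0;  // time of the previous frame, in seconds
  return function deltaSeconds(nowMs) {
    const now = nowMs * 0.001;     // ms -> seconds
    const deltaTime = now - then;  // seconds since the last frame
    then = now;
    return deltaTime;
  };
}
```

With requestAnimationFrame firing roughly every 16 ms at 60 fps, each call returns about 0.016.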

<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
  <style>
    html, body {
      height: 100%;
      margin: 0;
    }

    #c {
      width: 100%;
      height: 100%;
      display: block;
    }
  </style>
</head>

<body>
  <canvas id="c"></canvas>
  <script type="module">
    import * as THREE from 'https://threejsfundamentals.org/threejs/resources/threejs/r132/build/three.module.js';
    import { EffectComposer } from 'https://threejsfundamentals.org/threejs/resources/threejs/r132/examples/jsm/postprocessing/EffectComposer.js';
    import { RenderPass } from 'https://threejsfundamentals.org/threejs/resources/threejs/r132/examples/jsm/postprocessing/RenderPass.js';
    import { BloomPass } from 'https://threejsfundamentals.org/threejs/resources/threejs/r132/examples/jsm/postprocessing/BloomPass.js';
    import { FilmPass } from 'https://threejsfundamentals.org/threejs/resources/threejs/r132/examples/jsm/postprocessing/FilmPass.js';

    function main() {
      const canvas = document.querySelector('#c');
      const renderer = new THREE.WebGLRenderer({ canvas });
      const fov = 75;
      const aspect = 2;  // the canvas default
      const near = 0.1;
      const far = 5;
      const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
      camera.position.z = 2;
      const scene = new THREE.Scene();
      {
        const color = 0xFFFFFF;
        const intensity = 2;
        const light = new THREE.DirectionalLight(color, intensity);
        light.position.set(-1, 2, 4);
        scene.add(light);
      }
      const boxWidth = 1;
      const boxHeight = 1;
      const boxDepth = 1;
      const geometry = new THREE.BoxGeometry(boxWidth, boxHeight, boxDepth);
      function makeInstance(geometry, color, x) {
        const material = new THREE.MeshPhongMaterial({ color });
        const cube = new THREE.Mesh(geometry, material);
        scene.add(cube);
        cube.position.x = x;
        return cube;
      }
      const cubes = [
        makeInstance(geometry, 0x44aa88,  0),
        makeInstance(geometry, 0x8844aa, -2),
        makeInstance(geometry, 0xaa8844,  2),
      ];
      const composer = new EffectComposer(renderer);
      composer.addPass(new RenderPass(scene, camera));
      const bloomPass = new BloomPass(
        1,    // strength
        25,   // kernel size
        4,    // sigma ?
        256,  // blur render target resolution
      );
      composer.addPass(bloomPass);
      const filmPass = new FilmPass(
        0.35,  // noise intensity
        0.025, // scanline intensity
        648,   // scanline count
        false, // grayscale
      );
      filmPass.renderToScreen = true;
      composer.addPass(filmPass);
      function resizeRendererToDisplaySize(renderer) {
        const canvas = renderer.domElement;
        const width = canvas.clientWidth;
        const height = canvas.clientHeight;
        const needResize = canvas.width !== width || canvas.height !== height;
        if (needResize) {
          renderer.setSize(width, height, false);
        }
        return needResize;
      }
      let then = 0;
      function render(now) {
        now *= 0.001;
        const deltaTime = now - then;
        then = now;
        if (resizeRendererToDisplaySize(renderer)) {
          const canvas = renderer.domElement;
          camera.aspect = canvas.clientWidth / canvas.clientHeight;
          camera.updateProjectionMatrix();
          composer.setSize(canvas.width, canvas.height);
        }
        cubes.forEach((cube, ndx) => {
          const speed = 1 + ndx * .1;
          const rot = now * speed;
          cube.rotation.x = rot;
          cube.rotation.y = rot;
        });
        composer.render(deltaTime);
        requestAnimationFrame(render);
      }
      requestAnimationFrame(render);
    }
    main();
  </script>
</body>

</html>

See the example

To change effect parameters at runtime you usually need to set uniform values. Let's add a GUI to adjust some of the parameters. Knowing which values you can usefully adjust, and how, requires digging into the code for each effect.

import { GUI } from '../3rdparty/dat.gui.module.js';

// BloomPass exposes its strength through copyUniforms.opacity.value
const gui = new GUI();
{
  const folder = gui.addFolder('BloomPass');
  folder.add(bloomPass.copyUniforms.opacity, 'value', 0, 2).name('strength');
  folder.open();
}
{
  const folder = gui.addFolder('FilmPass');
  folder.add(filmPass.uniforms.grayscale, 'value').name('grayscale');
  folder.add(filmPass.uniforms.nIntensity, 'value', 0, 1).name('noise intensity');
  folder.add(filmPass.uniforms.sIntensity, 'value', 0, 1).name('scanline intensity');
  folder.add(filmPass.uniforms.sCount, 'value', 0, 1000).name('scanline count');
  folder.open();
}

For post-processing, three.js provides a useful helper called ShaderPass. It takes an object that defines a vertex shader, a fragment shader, and default inputs. It handles setting up which texture to read from to get the previous pass's results, and where to render: either one of the EffectComposer's render targets or the canvas.

This is a simple post-processing shader that multiplies the previous result by color.

const colorShader = {
  uniforms: {
    tDiffuse: { value: null },
    color:    { value: new THREE.Color(0x88CCFF) },
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tDiffuse;
    uniform vec3 color;
    void main() {
      vec4 previousPassColor = texture2D(tDiffuse, vUv);
      gl_FragColor = vec4(
          previousPassColor.rgb * color,
          previousPassColor.a);
    }
  `,
};

tDiffuse above is the name ShaderPass uses to pass in the previous pass's result texture, so we almost always need it. We then declare color as a three.js Color.

Next, we need a vertex shader. For post-processing, the vertex shader shown here is close to standard and rarely needs to change. The uv, projectionMatrix, modelViewMatrix, and position variables are all supplied automatically by three.js.

Finally, we create a fragment shader. In this line, we read the pixel color from the previous pass:

vec4 previousPassColor = texture2D(tDiffuse, vUv);

We multiply it by our color and set gl_FragColor to the result:

gl_FragColor = vec4(
    previousPassColor.rgb * color,
    previousPassColor.a
);
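What the shader computes per pixel can be mirrored on the CPU; here is a small sketch (applyColorPass is a made-up helper for illustration, not part of three.js):

```javascript
// CPU mirror of the fragment shader: multiply the previous pass's
// RGB by a tint color, leaving alpha untouched. Colors are arrays
// of numbers in the 0..1 range, like GLSL vec4/vec3 components.
function applyColorPass(previousPassColor, color) {
  const [r, g, b, a] = previousPassColor;
  return [r * color[0], g * color[1], b * color[2], a];
}
```

A white input pixel tinted by 0x88CCFF (roughly [0.53, 0.8, 1.0]) comes out as that tint, with alpha preserved.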

After wrapping the shader in a ShaderPass and adding it to the composer (const colorPass = new ShaderPass(colorShader); colorPass.renderToScreen = true; composer.addPass(colorPass);), add a simple GUI to set the three components of the color:

const gui = new GUI();
gui.add(colorPass.uniforms.color.value, 'r', 0, 4).name('red');
gui.add(colorPass.uniforms.color.value, 'g', 0, 4).name('green');
gui.add(colorPass.uniforms.color.value, 'b', 0, 4).name('blue');

And you’re done.

See the example

Translation: threejsfundamentals.org/threejs/les…

end

As mentioned earlier, covering every detail of writing GLSL and custom shaders would be too much for this article. If you really want to know how WebGL itself works, check out those articles; hopefully they will help you out.