For front-end developers working on animation, Canvas is arguably the most flexible and fully controllable animation medium. Not only can you draw points, lines, and surfaces with JavaScript and fill them with image resources; you can also build interactive animations by changing input parameters, with full control over trajectory, speed, elasticity, and every other aspect of the motion.

However, those of you who have developed more complex animations with Canvas may have found that certain effects are hard to achieve (this article only discusses 2D) when drawing and controlling animation with JavaScript, such as blur, lighting, and water droplets. They can be done with per-pixel processing, but JavaScript is not good at that kind of massive numeric computation: the time to draw each frame becomes considerable, which makes it unrealistic for animation.

However, besides the most commonly used JavaScript API drawing mode (getContext('2d')), Canvas also offers a WebGL drawing mode (getContext('webgl')). For the massive-computation scenarios mentioned above, this is arguably where canvas shines most. Many developers associate WebGL only with 3D scenes, but in fact it has a great deal to offer for 2D drawing as well.
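
As a quick illustration, here is how the two modes are requested (a minimal sketch; note that a canvas element can hold only one context type, fixed by the first getContext call, so two canvases are used):

// The familiar 2D API: drawing commands execute in the JavaScript engine.
const ctx = document.createElement('canvas').getContext('2d');

// The WebGL API: drawing is performed by programs running on the GPU.
const gl = document.createElement('canvas').getContext('webgl');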

Why WebGL is great

Let’s take a look at how rendering with the JavaScript API differs from rendering with WebGL:

If JavaScript is used to process the canvas pixel by pixel, that work has to happen in the JavaScript runtime. We know that JavaScript execution is single-threaded, so the pixels can only be computed and drawn one by one, like water dripping through a long, thin funnel.
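
For example, a per-pixel grayscale pass with the 2D API looks roughly like this (a sketch; on a 1000 × 1000 canvas, this loop already runs a million times per frame):

const ctx = canvas.getContext('2d');
const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = image.data; // RGBA bytes, four per pixel

// Every pixel is visited one after another on the single JavaScript thread.
for (let i = 0; i < data.length; i += 4) {
  const gray = (data[i] + data[i + 1] + data[i + 2]) / 3;
  data[i] = data[i + 1] = data[i + 2] = gray;
}
ctx.putImageData(image, 0, 0);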

WebGL, by contrast, is GPU-driven: each pixel is processed on the GPU, and the GPU has many rendering pipelines in which that processing runs in parallel. This is why WebGL is so good at exactly these massive data-computation scenarios.

If WebGL is so awesome, why not just draw everything with it?

While WebGL has all the advantages mentioned above, it has a fatal downside: it’s hard to learn, and it takes a lot of effort to simply draw a line.

Each parallel GPU pipeline knows nothing about the output of the other pipelines; it knows only its own input and the program it must execute. The pipelines also retain no state: a pipeline does not know what programs it executed before the current task, or what their inputs and outputs were, much like today's concept of a pure function. These conceptual differences raise the bar for drawing with WebGL.

In addition, the programs that run on the GPU are written not in JavaScript but in GLSL, a C-like language, which front-end developers also need to learn.

Hello, world

However high the threshold, it has to be crossed some day. Next, I will drive WebGL step by step to draw a small pattern, so you can also get a feel for when this technique is worth using.

Basic environment – the big screen

To get to the GLSL shader part as quickly as possible, the basic WebGL environment here is built with three.js. (You can also study how to build this basic environment by hand; it requires no third-party library and not much code.)

Here is the setup for the basic environment:

function init(canvas) {
  const renderer = new THREE.WebGLRenderer({ canvas });
  renderer.autoClearColor = false;

  const camera = new THREE.OrthographicCamera(
    -1, // left
     1, // right
     1, // top
    -1, // bottom
    -1, // near
     1  // far
  );
  const scene = new THREE.Scene();
  const plane = new THREE.PlaneGeometry(2, 2);

  const fragmentShader = '... ';
  const uniforms = {
    u_resolution: { value: new THREE.Vector2(canvas.width, canvas.height) },
    u_time: { value: 0 },
  };
  const material = new THREE.ShaderMaterial({
    fragmentShader,
    uniforms,
  });
  scene.add(new THREE.Mesh(plane, material));

  function render() {
    material.uniforms.u_time.value++;
    renderer.render(scene, camera);
    requestAnimationFrame(render);
  }

  render();
}

Let me explain what the code above does: it creates a 3D scene (wasn't this supposed to be 2D?) in which a plane exactly fills the camera's view. It's like sitting in the front row of an IMAX theater: all you see is the screen in front of you, and the screen is your whole world. Our drawing happens right here on this screen.

A quick digression: shaders come in two kinds, vertex shaders (VERTEX_SHADER) and fragment shaders (FRAGMENT_SHADER).

The vertex shader computes values for each vertex of the objects in the 3D scene, such as color and normal vector. Since we only discuss a 2D picture here, the vertex shader work is handled by three.js, which amounts to fixing the positions of the camera and the screen in the scene.
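
For reference, the default vertex shader that three.js supplies for a ShaderMaterial is roughly equivalent to this passthrough (position, projectionMatrix, and modelViewMatrix are injected by three.js):

void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}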

The fragment shader computes the output color of each fragment on the plane (in our case, each pixel on the screen), and it is the subject of this article.

A fragment shader takes two kinds of input parameters: varying and uniform. A varying, simply put, is passed in by the vertex shader; the value each fragment receives is obtained by linear interpolation between the related vertices, so every fragment gets a different value. A uniform is a uniform value passed in from outside the shader; every fragment receives the same value, and in our case it is the entry point for the variables we pass in from JavaScript. In the code above we pass the fragment shader u_resolution, which contains the width and height of the canvas.
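
To make the distinction concrete, here is a hypothetical varying called v_uv (not part of our environment, purely an illustration). The vertex shader writes one value per vertex; each fragment then receives a linearly interpolated copy, while a uniform such as u_resolution is identical everywhere:

// --- vertex shader: writes the varying, one value per vertex ---
varying vec2 v_uv;
void main() {
  v_uv = uv; // 'uv' is a per-vertex attribute injected by three.js
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// --- fragment shader: reads the varying, interpolated per fragment ---
varying vec2 v_uv;
uniform vec2 u_resolution; // the same value for every fragment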

The first shader

fragmentShader holds the shader program code, whose general structure is:

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;

void main() {
  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

The first three lines check whether GL_ES is defined, which is usually the case on mobile devices and in browsers. The second line sets float to medium precision; it can also be set to lowp or highp. The lower the precision, the faster the execution, but the lower the quality. It is worth mentioning that the same setting may behave differently in different environments: for example, some mobile browsers need highp to achieve the quality that mediump gives in a desktop browser.
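
Precision can also be set per declaration rather than globally; for example (illustrative only, these variables are not part of our shader):

precision mediump float;  // default precision for all floats
uniform lowp vec4 u_tint; // low precision is enough for a color
varying highp vec2 v_pos; // coordinates often need high precision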

Line 5 declares the input parameters the shader accepts; here there is only one, u_resolution, of type vec2.

The last three lines are the shader's main program, where the input parameters and other information are processed. The final output color is written to gl_FragColor, the color the fragment displays: four components representing RGBA (red, green, blue, alpha), each ranging from 0.0 to 1.0.

The reason for writing 0.0 instead of 0 is that, unlike JavaScript, which has only one number type, GLSL distinguishes int from float, and a float literal must contain a decimal point. Even when the fractional part is 0, write the .0.
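
A few illustrative declarations show the difference (the commented-out line would not compile, since GLSL has no implicit int-to-float conversion):

float a = 1.0;          // float literal: the decimal point is required
int   b = 1;            // int literal
// float c = a + b;     // error: int and float cannot be mixed directly
float c = a + float(b); // an explicit cast is needed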

Having read this far, you can probably guess what this shader outputs. Yes, it's a full screen of red.

This is the basic fragment shader.

Using the uniform

You may have noticed that the example above does not use the uniform values we passed in. Here's how they are used.

If you look back at the JavaScript code that built the basic environment, you can see that u_resolution stores the width and height of the canvas. What is this value for in the shader?

That brings us to another built-in value of the fragment shader, gl_FragCoord, which stores the x and y coordinates of the fragment (pixel). With these two values you can tell which position on the canvas the shader is currently evaluating. Here's an example:

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;
  gl_FragColor = vec4(st, 0.0, 1.0);
}

You can see this image:

The shader above writes the normalized x and y coordinates to the red and green channels of gl_FragColor.

As the figure shows, gl_FragCoord's origin (0, 0) is in the lower left corner, with the x-axis pointing right and the y-axis pointing up.
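
Note that this is the opposite of the 2D canvas API, where (0, 0) sits at the top left. If you prefer a top-left origin, you can flip the y coordinate yourself (a small sketch):

vec2 st = gl_FragCoord.xy / u_resolution;
st.y = 1.0 - st.y; // the origin is now the top-left corner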

The other uniform, u_time, is a value that increases over time. With it, the image can change as time passes, producing animation.

Let’s rewrite the shader above:

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;
  gl_FragColor = vec4(st, sin(u_time / 100.0), 1.0);
}

You can see the following image:


The shader uses the trigonometric function sin to vary the blue channel of the output color periodically. Note that sin ranges from -1 to 1, and values below 0.0 are clamped to 0.0 when written to gl_FragColor, so the blue channel stays dark for half of each period.
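
If you would rather have the blue channel sweep smoothly across the full 0.0 to 1.0 range, remap the sine wave before writing it out (a sketch):

float blue = 0.5 + 0.5 * sin(u_time / 100.0); // map [-1, 1] to [0, 1]
gl_FragColor = vec4(st, blue, 1.0);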

What else can be done?

After mastering the basic principles, it is time to learn from the works of the masters. Shadertoy is a shader community similar to CodePen, where people use these basic tools plus some shaping functions to create all kinds of dazzling effects and animations.
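
As a small taste of shaping functions, here is a minimal sketch (reusing the u_resolution and u_time uniforms from our environment) that draws a soft, pulsing circle with distance() and smoothstep():

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;
  float radius = 0.25 + 0.05 * sin(u_time / 50.0); // pulsing radius
  float d = distance(st, vec2(0.5)); // distance from the center
  float circle = 1.0 - smoothstep(radius, radius + 0.01, d); // soft edge
  gl_FragColor = vec4(vec3(circle), 1.0);
}

On a non-square canvas the circle will be stretched into an ellipse; correcting for the aspect ratio is a good first exercise.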

The above covers the basic development toolkit for GLSL shaders. You can now start writing your own; the rest is putting your math skills to work.