WebGL is an OpenGL-based 3D renderer built into the browser that lets you display 3D content directly in HTML5 pages. In this tutorial, I’ll cover all the basics you need to get started with the framework.


There are a few things you need to know before we start. WebGL is a JavaScript API for rendering 3D content onto HTML5’s canvas element. It does this with two small programs, called shaders, that run on the GPU. The two shaders are:

  • Vertex shader
  • Fragment shader

Don’t be too alarmed when you hear these terms; they’re just fancy names for “position calculators” and “color pickers.” The fragment shader is the easier of the two to understand: it simply tells WebGL what color a given point on your model should be. The vertex shader is a little more technical: it transforms a point in your 3D model into a 2D coordinate. Because all computer monitors are flat, when you look at 3D objects on a screen, you are really looking at a perspective projection onto a 2D plane.

If you want to fully understand this calculation, you’d better ask a mathematician, because it involves 4×4 matrix multiplication, which is a bit beyond the scope of this “essentials” tutorial. Fortunately, you don’t need to know how it all works, because WebGL takes care of most of it behind the scenes. So, let’s get started.
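Still, a tiny sketch gives a feel for what the math does. This helper is purely illustrative and not part of the tutorial’s code: a 4×4 matrix multiplies each (x, y, z, 1) position, and dividing by the resulting w component flattens the point onto the 2D screen.

```javascript
// Multiply a column-major 4x4 matrix by the position (x, y, z, 1).
function transformPoint(m, x, y, z) {
    var v = [x, y, z, 1];
    var out = [0, 0, 0, 0];
    for (var row = 0; row < 4; row++) {
        for (var col = 0; col < 4; col++) {
            out[row] += m[col * 4 + row] * v[col];
        }
    }
    // Perspective divide: project onto the 2D screen plane.
    return [out[0] / out[3], out[1] / out[3]];
}

// With the identity matrix, a point at (2, 1, z) stays at (2, 1) on screen.
var identity = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];
```

A real perspective matrix (like the one we build later) also scales x and y based on depth, which is what makes distant objects look smaller.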

Step 1: Set up WebGL

WebGL has a lot of small settings that you have to configure nearly every time you draw something to the screen. To save time and keep the code a little cleaner, we’ll wrap all the “behind the scenes” code into a JavaScript object and store it in a separate file. To get started, create a new file called ‘WebGL.js’ and place the following code inside it:

function WebGL(CID, FSID, VSID){
    var canvas = document.getElementById(CID);
    if(!canvas.getContext("webgl") && !canvas.getContext("experimental-webgl"))
        alert("Your Browser Doesn't Support WebGL");
    else
    {
        this.GL = (canvas.getContext("webgl")) ? canvas.getContext("webgl") : canvas.getContext("experimental-webgl");

        this.GL.clearColor(1.0, 1.0, 1.0, 1.0); // this is the color
        this.GL.enable(this.GL.DEPTH_TEST); //Enable Depth Testing
        this.GL.depthFunc(this.GL.LEQUAL); //Set Perspective View
        this.AspectRatio = canvas.width / canvas.height;

        //Load Shaders Here
    }
}

This constructor takes the ID of the canvas element and the IDs of the two shader scripts. First, we get the canvas element and make sure it supports WebGL. If it does, we assign the WebGL context to a local variable called “GL”. clearColor simply sets the background color. It’s worth noting that most values in WebGL range from 0.0 to 1.0, so you divide the usual RGB values by 255. In our example, 1.0, 1.0, 1.0, 1.0 means the background is white and 100% visible (i.e., not transparent). The next two lines tell WebGL to calculate depth and perspective so that objects closer to you block objects behind them. Finally, we set the aspect ratio, which is the canvas width divided by its height.
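For example, turning a familiar 0–255 RGB color into the 0.0–1.0 values clearColor expects is just a division by 255. The helper below is a hypothetical convenience, not part of the tutorial’s files:

```javascript
// Convert 0-255 RGB channels to the 0.0-1.0 range WebGL expects.
function toGLColor(r, g, b) {
    return [r / 255, g / 255, b / 255, 1.0]; // alpha 1.0 = fully opaque
}

// e.g. a mid-gray background could be set with:
// GL.clearColor.apply(GL, toGLColor(128, 128, 128));
```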

Before we move on, we need to write the two shaders. I’ll put them into the HTML file that will also contain our canvas element. Create an HTML file and place the following two script elements just before the body tag:

<script id="VertexShader" type="x-shader/x-vertex">
    attribute highp vec3 VertexPosition;
    attribute highp vec2 TextureCoord;

    uniform highp mat4 TransformationMatrix;
    uniform highp mat4 PerspectiveMatrix;

    varying highp vec2 vTextureCoord;

    void main(void) {
        gl_Position = PerspectiveMatrix * TransformationMatrix * vec4(VertexPosition, 1.0);
        vTextureCoord = TextureCoord;
    }
</script>

<script id="FragmentShader" type="x-shader/x-fragment">
    varying highp vec2 vTextureCoord;

    uniform sampler2D uSampler;

    void main(void) {
        highp vec4 texelColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
        gl_FragColor = texelColor;
    }
</script>

Starting with the vertex shader, we define two attributes:

  • Vertex position, which stores the position of the current vertex (a point on your model) as x, y, and z coordinates.
  • Texture coordinate, the point in the texture image that should be mapped to this vertex.

Next, we create the transformation and perspective matrix uniforms. They are used to convert the 3D model into a 2D image. The next line creates a variable, vTextureCoord, that is shared with the fragment shader, and in the main function we calculate gl_Position (the final 2D position). We then assign the current texture coordinate to the shared variable vTextureCoord.

In the fragment shader, we take the coordinate defined in the vertex shader and use this coordinate to ‘sample’ the texture. Basically, through this process, we get the color of the texture at the current point on our geometry.
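In spirit, “sampling” just means looking up the pixel that sits at a fractional (u, v) position. A rough CPU-side sketch of the simplest form, nearest-neighbor lookup, assuming a flat RGBA pixel array (this is illustration only; the GPU does this for you, with filtering):

```javascript
// Nearest-neighbor texture lookup: map (u, v) in [0,1] to a pixel.
// pixels is a flat RGBA array; width/height are the texture size.
function sampleNearest(pixels, width, height, u, v) {
    var x = Math.min(width - 1, Math.floor(u * width));
    var y = Math.min(height - 1, Math.floor(v * height));
    var i = (y * width + x) * 4; // 4 channels per pixel
    return [pixels[i], pixels[i + 1], pixels[i + 2], pixels[i + 3]];
}
```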

Now that the shaders are written, we can go back and load them in our JS file. Replace “//Load Shaders Here” with this code:

var FShader = document.getElementById(FSID);
var VShader = document.getElementById(VSID);

if(!FShader || !VShader)
    alert("Error, Could Not Find Shaders");
else
{
    //Load and Compile Fragment Shader
    var Code = LoadShader(FShader);
    FShader = this.GL.createShader(this.GL.FRAGMENT_SHADER);
    this.GL.shaderSource(FShader, Code);
    this.GL.compileShader(FShader);

    //Load and Compile Vertex Shader
    Code = LoadShader(VShader);
    VShader = this.GL.createShader(this.GL.VERTEX_SHADER);
    this.GL.shaderSource(VShader, Code);
    this.GL.compileShader(VShader);

    //Create The Shader Program
    this.ShaderProgram = this.GL.createProgram();
    this.GL.attachShader(this.ShaderProgram, FShader);
    this.GL.attachShader(this.ShaderProgram, VShader);
    this.GL.linkProgram(this.ShaderProgram);
    this.GL.useProgram(this.ShaderProgram);

    //Link Vertex Position Attribute from Shader
    this.VertexPosition = this.GL.getAttribLocation(this.ShaderProgram, "VertexPosition");
    this.GL.enableVertexAttribArray(this.VertexPosition);

    //Link Texture Coordinate Attribute from Shader
    this.VertexTexture = this.GL.getAttribLocation(this.ShaderProgram, "TextureCoord");
    this.GL.enableVertexAttribArray(this.VertexTexture);
}


First, we make sure both shaders exist, and then we load and compile them one at a time. The process is: grab the shader’s source code, compile it, and attach it to the central shader program. The code that extracts the shader source from the HTML file is wrapped in a function called LoadShader; we’ll get to it in a moment. We use the ‘shader program’ to link the two shaders together, and it also gives us access to the variables inside them. We store the locations of the two attributes defined in the shaders; later, we’ll feed our geometry in through them.

Now, let’s look at the LoadShader function, which you should place outside of the WebGL function.

function LoadShader(Script){
    var Code = "";
    var CurrentChild = Script.firstChild;
    while(CurrentChild)
    {
        if(CurrentChild.nodeType == CurrentChild.TEXT_NODE)
            Code += CurrentChild.textContent;
        CurrentChild = CurrentChild.nextSibling;
    }
    return Code;
}

Basically, this function walks through the script element’s child nodes and concatenates all the text it finds, giving us the shader’s source code.
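If you can assume a reasonably modern browser, the same result can be had in one line with textContent, which concatenates the text nodes for you. This variant is an optional alternative (with a hypothetical name), not the tutorial’s version:

```javascript
// Alternative: textContent already joins all of an element's text nodes.
function LoadShaderModern(Script) {
    return Script.textContent;
}
```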

Step 2: “Simple” cube

To draw objects in WebGL, you need three arrays:

  • Vertices: The vertices that form your object
  • Triangles: Tell WebGL how to join vertices into faces
  • Texture coordinates: Defines how vertices are mapped to the texture image

Defining the texture coordinates is a process called UV mapping. For our example, we’ll construct a simple cube. I’ve divided the cube into groups of four vertices, one group per face, and each group is connected into two triangles. Let’s store the cube’s arrays in a variable:

var Cube = {
    Vertices : [ // X, Y, Z Coordinates

        //Front

         1.0,  1.0, -1.0,
         1.0, -1.0, -1.0,
        -1.0,  1.0, -1.0,
        -1.0, -1.0, -1.0,

        //Back

         1.0,  1.0,  1.0,
         1.0, -1.0,  1.0,
        -1.0,  1.0,  1.0,
        -1.0, -1.0,  1.0,

        //Right

         1.0,  1.0, -1.0,
         1.0, -1.0, -1.0,
         1.0,  1.0,  1.0,
         1.0, -1.0,  1.0,

        //Left

        -1.0,  1.0, -1.0,
        -1.0, -1.0, -1.0,
        -1.0,  1.0,  1.0,
        -1.0, -1.0,  1.0,

        //Top

         1.0,  1.0,  1.0,
        -1.0,  1.0,  1.0,
         1.0,  1.0, -1.0,
        -1.0,  1.0, -1.0,

        //Bottom

         1.0, -1.0,  1.0,
        -1.0, -1.0,  1.0,
         1.0, -1.0, -1.0,
        -1.0, -1.0, -1.0
    ],
    Triangles : [ // Also in groups of three to define the three points of each triangle
        //The numbers here are the index numbers in the vertex array

        //Front

        0, 1, 2,
        1, 2, 3,

        //Back

        4, 5, 6,
        5, 6, 7,

        //Right

        8, 9, 10,
        9, 10, 11,

        //Left

        12, 13, 14,
        13, 14, 15,

        //Top

        16, 17, 18,
        17, 18, 19,

        //Bottom

        20, 21, 22,
        21, 22, 23
    ],
    Texture : [ //This array is in groups of two, the x and y coordinates (a.k.a U, V) in the texture
        //The numbers go from 0.0 to 1.0, one pair for each vertex

        //Front

        1.0, 1.0,
        1.0, 0.0,
        0.0, 1.0,
        0.0, 0.0,

        //Back

        0.0, 1.0,
        0.0, 0.0,
        1.0, 1.0,
        1.0, 0.0,

        //Right

        1.0, 1.0,
        1.0, 0.0,
        0.0, 1.0,
        0.0, 0.0,

        //Left

        0.0, 1.0,
        0.0, 0.0,
        1.0, 1.0,
        1.0, 0.0,

        //Top

        1.0, 0.0,
        1.0, 1.0,
        0.0, 0.0,
        0.0, 1.0,

        //Bottom

        0.0, 0.0,
        0.0, 1.0,
        1.0, 0.0,
        1.0, 1.0
    ]
};

This might seem like a lot of data for such a simple cube, but in the second part of our tutorial, we’ll write a script to import the 3D model, so don’t worry about that for now.

You may be wondering why we need 24 vertices (4 per face) when a cube has only 8 corners. The reason is that you can assign only one texture coordinate per vertex; with just 8 vertices, each corner would have to share its texture value with every face touching it, and the whole cube would look the same on all sides. By giving each face its own four vertices, we can map a different region of the texture onto each face.
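With that much hand-typed data, a quick sanity check is useful: 24 vertices need 72 position numbers and 48 texture numbers, and 6 faces × 2 triangles × 3 indices gives 36 entries in Triangles. The helper below is hypothetical (not part of the tutorial), but you can run it against your Cube to catch copy-paste slips:

```javascript
// Validate that a mesh's three arrays agree with each other.
function checkMesh(mesh) {
    var vertexCount = mesh.Vertices.length / 3;       // x,y,z per vertex
    var uvCount = mesh.Texture.length / 2;            // u,v per vertex
    var maxIndex = Math.max.apply(null, mesh.Triangles);
    return vertexCount === uvCount &&                 // one UV pair per vertex
           mesh.Triangles.length % 3 === 0 &&         // whole triangles only
           maxIndex < vertexCount;                    // indices in range
}
```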

Now that we have our Cube variable, we’re ready to draw it. Let’s go back to the WebGL object and add a Draw function.

Step 3: Draw function

Drawing an object in WebGL takes many steps, so it’s best to wrap the procedure in a function to simplify the process. The basic idea is to load the three arrays into WebGL buffers, then connect those buffers, along with the transformation and perspective matrices, to the variables defined in the shaders. Next, we load the texture into memory and, finally, call the draw command. So, let’s get started.

The following code goes into the WebGL function:

this.Draw = function(Object, Texture){
    var VertexBuffer = this.GL.createBuffer(); //Create a New Buffer

    //Bind it as The Current Buffer
    this.GL.bindBuffer(this.GL.ARRAY_BUFFER, VertexBuffer);

    //Fill it With the Data
    this.GL.bufferData(this.GL.ARRAY_BUFFER, new Float32Array(Object.Vertices), this.GL.STATIC_DRAW);

    //Connect Buffer To Shader's Attribute
    this.GL.vertexAttribPointer(this.VertexPosition, 3, this.GL.FLOAT, false, 0, 0);

    //Repeat For The Next Two
    var TextureBuffer = this.GL.createBuffer();
    this.GL.bindBuffer(this.GL.ARRAY_BUFFER, TextureBuffer);
    this.GL.bufferData(this.GL.ARRAY_BUFFER, new Float32Array(Object.Texture), this.GL.STATIC_DRAW);
    this.GL.vertexAttribPointer(this.VertexTexture, 2, this.GL.FLOAT, false, 0, 0);

    var TriangleBuffer = this.GL.createBuffer();
    this.GL.bindBuffer(this.GL.ELEMENT_ARRAY_BUFFER, TriangleBuffer);
    this.GL.bufferData(this.GL.ELEMENT_ARRAY_BUFFER, new Uint16Array(Object.Triangles), this.GL.STATIC_DRAW);

    //Generate The Perspective Matrix
    var PerspectiveMatrix = MakePerspective(45, this.AspectRatio, 1, 10000.0);

    var TransformMatrix = MakeTransform(Object);

    //Set slot 0 as the active Texture
    this.GL.activeTexture(this.GL.TEXTURE0);

    //Load in the Texture To Memory
    this.GL.bindTexture(this.GL.TEXTURE_2D, Texture);

    //Update The Texture Sampler in the fragment shader to use slot 0
    this.GL.uniform1i(this.GL.getUniformLocation(this.ShaderProgram, "uSampler"), 0);

    //Set The Perspective and Transformation Matrices
    var pmatrix = this.GL.getUniformLocation(this.ShaderProgram, "PerspectiveMatrix");
    this.GL.uniformMatrix4fv(pmatrix, false, new Float32Array(PerspectiveMatrix));

    var tmatrix = this.GL.getUniformLocation(this.ShaderProgram, "TransformationMatrix");
    this.GL.uniformMatrix4fv(tmatrix, false, new Float32Array(TransformMatrix));

    //Draw The Triangles
    this.GL.drawElements(this.GL.TRIANGLES, Object.Triangles.length, this.GL.UNSIGNED_SHORT, 0);
};

Vertex shaders place, rotate, and scale your objects based on transformations and perspective matrices. In the second part of this tutorial, we will cover transformations in more depth.

I’ve referenced two functions: MakePerspective() and MakeTransform(). They simply generate the 4×4 matrices WebGL requires. MakePerspective() takes the vertical field of view (in degrees), the aspect ratio, and the nearest and farthest visible points as parameters. Anything closer than 1 unit or farther than 10,000 units will not be shown, but you can adjust these values to get the effect you want. Now, let’s take a look at these two functions:

function MakePerspective(FOV, AspectRatio, Closest, Farest){
    var YLimit = Closest * Math.tan(FOV * Math.PI / 360);
    var A = -( Farest + Closest ) / ( Farest - Closest );
    var B = -2 * Farest * Closest / ( Farest - Closest );
    var C = (2 * Closest) / ( (YLimit * AspectRatio) * 2 );
    var D = (2 * Closest) / ( YLimit * 2 );
    return [
        C, 0, 0, 0,
        0, D, 0, 0,
        0, 0, A, -1,
        0, 0, B, 0
    ];
}

function MakeTransform(Object){
    return [
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, -6, 1
    ];
}

Both matrices affect the final look of your objects, but the perspective matrix shapes your “3D world” as a whole, such as the field of view and which objects are visible, while the transformation matrix adjusts individual objects, such as their rotation and position. With that done, we’re almost ready to draw; all that remains is converting an image into a WebGL texture.
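One concrete detail worth internalizing from MakeTransform: in the column-major 4×4 arrays WebGL uses, the translation occupies indices 12–14. A small hypothetical helper (not part of the tutorial’s code) makes this visible:

```javascript
// In a column-major 4x4 array, translation lives at indices 12, 13, 14.
function getTranslation(m) {
    return [m[12], m[13], m[14]];
}

var transform = [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, -6, 1  // the object is pushed 6 units away from the camera
];
```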

Step 4: Load the texture

There are two steps to loading a texture. First, we load an image using standard JavaScript techniques; then, we convert it into a WebGL texture. The second step happens in our JS file, so let’s start there. Add the following code at the bottom of the WebGL object, right after the Draw command:

this.LoadTexture = function(Img){
    //Create a new Texture and Assign it as the active one
    var TempTex = this.GL.createTexture();
    this.GL.bindTexture(this.GL.TEXTURE_2D, TempTex);

    //Flip Positive Y (Optional)
    this.GL.pixelStorei(this.GL.UNPACK_FLIP_Y_WEBGL, true);

    //Load in The Image
    this.GL.texImage2D(this.GL.TEXTURE_2D, 0, this.GL.RGBA, this.GL.RGBA, this.GL.UNSIGNED_BYTE, Img);

    //Setup Scaling properties
    this.GL.texParameteri(this.GL.TEXTURE_2D, this.GL.TEXTURE_MAG_FILTER, this.GL.LINEAR);
    this.GL.texParameteri(this.GL.TEXTURE_2D, this.GL.TEXTURE_MIN_FILTER, this.GL.LINEAR_MIPMAP_NEAREST);
    this.GL.generateMipmap(this.GL.TEXTURE_2D);

    //Unbind the texture and return it.
    this.GL.bindTexture(this.GL.TEXTURE_2D, null);
    return TempTex;
};

It is worth mentioning that your texture’s dimensions must be powers of two, or you will get an error; for example, valid sizes include 2×2, 4×4, 16×16, 32×32, and so on. I added the extra line that flips the Y coordinate simply because my 3D application’s Y axis pointed the other way; whether you need it is entirely up to you. It depends on the program that produced your textures: some treat the top-left corner as Y zero, while others use the bottom-left. The scaling properties I set just tell WebGL how the image should be sampled when scaled up and down. You can play with other options to get different effects, but I think this combination works best.
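The power-of-two requirement is easy to check up front. A hypothetical guard helper (not part of the tutorial’s code):

```javascript
// True when n is a positive power of two (1, 2, 4, 8, 16, ...).
// A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
function isPowerOfTwo(n) {
    return n > 0 && (n & (n - 1)) === 0;
}

// You might warn inside LoadTexture with something like:
// if (!isPowerOfTwo(Img.width) || !isPowerOfTwo(Img.height))
//     console.warn("Texture dimensions should be powers of two");
```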

Now that we’re done with the JS file, we can go back to the HTML file for the final step.

Step 5: Wrapping up

As mentioned earlier, WebGL draws on a canvas element, so all we need in the body section is a canvas. After adding the canvas element, your HTML page should look something like this:

<html>
<head>
    <!-- Include Our WebGL JS file -->
    <script src="WebGL.js" type="text/javascript"></script>

    <!-- Your Vertex Shader -->

    <!-- Your Fragment Shader -->
</head>
<body onload="Ready()">
    <canvas id="GLCanvas" width="720" height="480">
        Your Browser Doesn't Support HTML5's Canvas.
    </canvas>
</body>
</html>

This page is fairly simple. In the head area, I link to the JS file. Now, let’s implement the Ready function, which is called when the page loads.

//This will hold our WebGL variable
var GL;

//Our finished texture
var Texture;

//This will hold the texture's image
var TextureImage;

function Ready(){
    GL = new WebGL("GLCanvas", "FragmentShader", "VertexShader");
    TextureImage = new Image();
    TextureImage.onload = function(){
        Texture = GL.LoadTexture(TextureImage);
        GL.Draw(Cube, Texture);
    };
    TextureImage.src = "Texture.png";
}

So we create a new WebGL object, passing in the IDs of the canvas and the two shaders. Next, we load the texture image. Once the load is complete, we call Draw() with the Cube and the Texture. If you’ve followed along, you should now see a static, textured cube on your screen.

I said we’d cover transformations next time, but I can’t just leave you with a static cube — that’s not three-dimensional enough. Let’s go back and add a little rotation. In the HTML file, change the onload function to the following:

TextureImage.onload = function(){
    Texture = GL.LoadTexture(TextureImage);
    setInterval(Update, 33);
};

This causes a function called Update() to be called every 33 milliseconds, so we get a frame rate of about 30fps. Here is the update function:
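The 33 in setInterval is just the frame interval in milliseconds: 1000 ms divided by the target frame rate, rounded. A tiny hypothetical helper (not in the tutorial) makes the arithmetic explicit:

```javascript
// Frame interval in milliseconds for a target frame rate.
function frameInterval(fps) {
    return Math.round(1000 / fps); // 30fps -> 33ms, 60fps -> 17ms
}

// setInterval(Update, frameInterval(30));
```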

function Update(){
    GL.GL.clear(GL.GL.COLOR_BUFFER_BIT | GL.GL.DEPTH_BUFFER_BIT);
    GL.Draw(Cube, Texture);
}

This function is fairly simple; It simply clears the screen and draws the updated cube. Now, let’s go into the JS file and add the rotation code.

Step 6: Add some rotations

We’re not going to implement the full transformation code here, because that’s for next time; we’ll just add a rotation around the Y axis. The first thing to do is add a Rotation variable to the Cube object. It keeps track of the current angle so we can increment the rotation each frame. So the top of your Cube variable should now look like this:

var Cube = {
    Rotation : 0,
    //The Other Three Arrays
};Copy the code

Now, let’s modify the MakeTransform() function to add rotation:

function MakeTransform(Object){
    var Y = Object.Rotation * (Math.PI / 180.0);
    var A = Math.cos(Y);
    var B = -1 * Math.sin(Y);
    var C = Math.sin(Y);
    var D = Math.cos(Y);
    Object.Rotation += .3;
    return [
        A, 0, B, 0,
        0, 1, 0, 0,
        C, 0, D, 0,
        0, 0, -6, 1
    ];
}


Original link: https://code.tutsplus.com/zh-hans/articles/webgl-essentials-part-i--net-25856

Gabriel Manricks