WebGL is based on OpenGL ES, which differs substantially from old desktop OpenGL. I understand how rasterization works, so I'm comfortable with the concepts. However, every tutorial I've read introduces abstractions and helper functions that make it harder for me to see which parts are really at the heart of the OpenGL API.

To be clear, abstractions like separating positional data and rendering functionality into separate classes are important in real-world applications. But such abstractions spread the code across multiple areas and introduce overhead in the form of boilerplate and data passed between logical units. I've found the best way to learn a topic is a linear code flow, where every line is at the heart of the subject at hand.

The tutorial I worked from is commendable in its own right. Starting from that base, I stripped away all the abstractions until I had a "minimum viable program." Hopefully it will help you get started with modern OpenGL. Here's what we'll make:

An equilateral triangle: green at the top, black at the bottom left, red at the bottom right, with colors interpolated in between.

A slightly more colorful take on the usual first triangle

Initializing the canvas

For WebGL, we need a canvas to draw on. You'll want to include all the usual HTML boilerplate, some styling, and so on, but the canvas is the crucial part. Once the DOM has loaded, we can access the canvas from JavaScript.

<canvas id="container" width="500" height="500"></canvas>

document.addEventListener('DOMContentLoaded', () => {
  // all the JavaScript code below goes here
});

Once the canvas is accessible, we can get the WebGL rendering context and initialize its clear color. Colors in the OpenGL world are RGBA, with each component between 0 and 1. The clear color is the one used to paint the canvas at the beginning of any frame that redraws the scene.

const canvas = document.getElementById('container');
const gl = canvas.getContext('webgl');
gl.clearColor(1, 1, 1, 1); // paint the canvas white between frames
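Since each component runs from 0 to 1 rather than the 0 to 255 many programmers are used to, a small conversion is sometimes handy. This is a hypothetical helper, not part of the WebGL API:

```javascript
// Hypothetical helper: convert familiar 8-bit RGBA channels (0-255)
// into the [0, 1] floats OpenGL expects.
function toGlColor(r, g, b, a = 255) {
  return [r, g, b, a].map((c) => c / 255);
}

const cornflower = toGlColor(100, 149, 237);
// Each component now lies in [0, 1], ready for gl.clearColor(...cornflower).
```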

In a real program, there's more initialization to do. Of particular note is enabling the depth buffer, which allows geometry to be sorted based on Z coordinates. For this basic program with only one triangle, we'll avoid that complexity.

Compiling the shaders

At its heart, OpenGL is a rasterization framework, where we decide how to implement everything except the rasterization. This means running at least two pieces of code on the GPU:

  • A vertex shader that runs once per input vertex and outputs one 3D position (actually 4D, in homogeneous coordinates) for each input.

  • A fragment shader that runs for each pixel on the screen and outputs what color that pixel should be.

In between these two steps, OpenGL takes the geometry from the vertex shader and determines which pixels on the screen are actually covered by the geometry. This is the rasterization part.
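To make the coverage step concrete, here is a minimal CPU-side sketch of the test a rasterizer performs, using signed edge functions. The triangle and sample points are illustrative, and a real GPU does this per pixel in hardware:

```javascript
// Signed edge function: positive when p is to the left of the directed
// edge from a to b.
function edge(ax, ay, bx, by, px, py) {
  return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// A point is covered by the triangle when all three edge functions
// agree in sign (allowing either winding order).
function covers(tri, px, py) {
  const [[x0, y0], [x1, y1], [x2, y2]] = tri;
  const w0 = edge(x0, y0, x1, y1, px, py);
  const w1 = edge(x1, y1, x2, y2, px, py);
  const w2 = edge(x2, y2, x0, y0, px, py);
  return (w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0);
}

const tri = [[-0.75, -0.65], [0.75, -0.65], [0, 0.65]];
covers(tri, 0, 0);     // inside the triangle -> true
covers(tri, 0.9, 0.9); // outside the triangle -> false
```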

Both shaders are typically written in GLSL (OpenGL Shading Language), which is compiled into GPU machine code. The machine code is then sent to the GPU so it can run during rendering. I won't spend much time on GLSL, since I only want to show the basics, and the language is close enough to C to feel familiar to most programmers.

First, we compile a vertex shader and send it to the GPU. Here, the shader's source code is stored in a string, but it could be loaded from elsewhere. The string is then handed to the WebGL API.

const sourceV = `
  attribute vec3 position;
  varying vec4 color;

  void main() {
    gl_Position = vec4(position, 1);
    color = gl_Position * 0.5 + 0.5;
  }
`;

const shaderV = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(shaderV, sourceV);
gl.compileShader(shaderV);

if (!gl.getShaderParameter(shaderV, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(shaderV));
  throw new Error('Failed to compile vertex shader');
}

There are several variables worth calling out in the GLSL code:

  • An attribute called position. An attribute is essentially an input, and the shader is called once for each such input.

  • A varying called color. This is an output of the vertex shader (one per input) and an input to the fragment shader. By the time the value is passed to the fragment shader, it has been interpolated according to the properties of the rasterization.

  • The gl_Position value. This is essentially an output of the vertex shader, just like any varying. It's special because it's used to determine which pixels need to be drawn at all.

There's also a type of variable called a uniform, which stays constant across multiple calls of the vertex shader. Uniforms are used for properties like the transformation matrix, which is constant for all vertices of a single piece of geometry.
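The color math above can be checked in plain JavaScript. This sketch mirrors the shader's `color = gl_Position * 0.5 + 0.5` componentwise and then blends the results with equal barycentric weights, the way the rasterizer would at the exact center of a triangle. The coordinates here are illustrative clip-space positions:

```javascript
// Mirrors the vertex shader's `vec4(position, 1) * 0.5 + 0.5`,
// applied componentwise (w = 1 maps to alpha = 1).
function vertexColor([x, y, z]) {
  return [x * 0.5 + 0.5, y * 0.5 + 0.5, z * 0.5 + 0.5, 1];
}

const bottomLeft = vertexColor([-0.75, -0.65, -1]);  // dark, near black
const bottomRight = vertexColor([0.75, -0.65, -1]);  // reddish
const top = vertexColor([0, 0.65, -1]);              // greenish

// Varyings are interpolated with barycentric weights; the center of the
// triangle weights all three vertices equally.
const center = bottomLeft.map(
  (_, i) => (bottomLeft[i] + bottomRight[i] + top[i]) / 3
);
```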

Next, we do the same for the fragment shader: compile it and send it to the GPU. Notice that the color varying set in the vertex shader is now read by the fragment shader.

const sourceF = `
  precision mediump float;
  varying vec4 color;

  void main() {
    gl_FragColor = color;
  }
`;

const shaderF = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(shaderF, sourceF);
gl.compileShader(shaderF);

if (!gl.getShaderParameter(shaderF, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(shaderF));
  throw new Error('Failed to compile fragment shader');
}

Finally, the vertex and fragment shaders are linked into an OpenGL program.

const program = gl.createProgram();
gl.attachShader(program, shaderV);
gl.attachShader(program, shaderF);
gl.linkProgram(program);

if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  console.error(gl.getProgramInfoLog(program));
  throw new Error('Failed to link program');
}

gl.useProgram(program);

With this, we've told the GPU that the shaders defined above are the ones we want to run. Now all that's left is to create the inputs and let the GPU loose on them.

Sending input data to the GPU

The input data will be stored in the GPU's memory and processed from there. Instead of issuing a separate draw call for each input, which would transfer the relevant data one piece at a time, the entire input is transferred to the GPU up front and read from there. (Legacy OpenGL transferred one piece of data per call, leading to poor performance.)
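For a sense of scale, the triangle's vertex data is just a handful of bytes that travel to the GPU in one transfer. A quick look at the byte layout of the typed array that will hold them (the coordinates here are illustrative):

```javascript
// Three vertices, three 32-bit float components each.
const positions = new Float32Array([
  -0.75, -0.65, -1,
   0.75, -0.65, -1,
   0,     0.65, -1,
]);

positions.length;            // 9 floats: 3 vertices x 3 components
positions.BYTES_PER_ELEMENT; // 4 bytes per 32-bit float
positions.byteLength;        // 36 bytes, uploaded in a single transfer
```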

To send the input data, OpenGL provides an abstraction called a Vertex Buffer Object (VBO). I'm still figuring out exactly how all of this works, but ultimately, we'll use the abstraction to do the following:

  • Store a sequence of bytes in CPU memory.

  • Transfer those bytes to GPU memory, using a buffer created with gl.createBuffer() and the gl.ARRAY_BUFFER binding point.

We'll use one VBO per input variable (attribute) in the vertex shader, though it's also possible to use a single VBO for multiple inputs.

const positionsData = new Float32Array([
  -0.75, -0.65, -1,
   0.75, -0.65, -1,
   0,     0.65, -1,
]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positionsData, gl.STATIC_DRAW);


Typically, you'd specify the geometry in whatever coordinates are natural for your application, then use a series of transformations in the vertex shader to bring them into OpenGL's clip space. I won't go into the details of clip space (it involves homogeneous coordinates), but for now, X and Y range from -1 to +1. Because our vertex shader simply passes the input data through as-is, we can specify our coordinates directly in clip space.
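Those clip-space ranges are independent of the canvas size. A small sketch of the viewport transform OpenGL applies to reach pixel coordinates on our 500x500 canvas (note that clip-space +Y points up, while pixel Y grows downward):

```javascript
// Map clip-space [-1, 1] coordinates to pixel coordinates, flipping Y
// because pixel (0, 0) is the top-left corner.
function toPixels(x, y, width, height) {
  return [
    (x * 0.5 + 0.5) * width,
    (1 - (y * 0.5 + 0.5)) * height,
  ];
}

toPixels(-1, 1, 500, 500); // top-left corner -> [0, 0]
toPixels(1, -1, 500, 500); // bottom-right corner -> [500, 500]
toPixels(0, 0, 500, 500);  // center -> [250, 250]
```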

Next, we associate the buffer with one of the vertex shader's variables. Here we:

  • Get a handle to the position variable in the program we created above.

  • Tell OpenGL to read data from the gl.ARRAY_BUFFER binding point, three floats at a time, with the other parameters, such as the offset and stride, set to zero.

const attribute = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(attribute);
gl.vertexAttribPointer(attribute, 3, gl.FLOAT, false, 0, 0);

Note that we can create a VBO and associate it with a vertex shader attribute this way only because we perform the two operations one right after the other. If the two steps were separated (for example, creating all the VBOs first, then associating each with its attribute), we'd need additional calls to gl.bindBuffer(...) to associate each VBO with its corresponding attribute.
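The ordering matters because gl.vertexAttribPointer reads from whichever buffer is currently bound to gl.ARRAY_BUFFER. Here is a runnable sketch of the multi-VBO pattern using a tiny stub in place of the real WebGL context (the stub and its call log are illustrative, not part of the WebGL API):

```javascript
// Stub WebGL context that records the order of calls, so we can see the
// bind-then-configure pattern without a browser.
function makeStubGl() {
  const log = [];
  let nextId = 1;
  return {
    ARRAY_BUFFER: 'ARRAY_BUFFER',
    FLOAT: 'FLOAT',
    log,
    createBuffer: () => ({ id: nextId++ }),
    bindBuffer: (_target, buffer) => log.push(`bind:${buffer.id}`),
    vertexAttribPointer: (loc, _size, _type, _norm, _stride, _offset) =>
      log.push(`pointer:${loc}`),
  };
}

const stubGl = makeStubGl();
const positionsVbo = stubGl.createBuffer();
const colorsVbo = stubGl.createBuffer();

// Each attribute's VBO must be re-bound before configuring the attribute:
stubGl.bindBuffer(stubGl.ARRAY_BUFFER, positionsVbo);
stubGl.vertexAttribPointer(0, 3, stubGl.FLOAT, false, 0, 0);
stubGl.bindBuffer(stubGl.ARRAY_BUFFER, colorsVbo);
stubGl.vertexAttribPointer(1, 4, stubGl.FLOAT, false, 0, 0);

stubGl.log; // ['bind:1', 'pointer:0', 'bind:2', 'pointer:1']
```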

Time to draw!

Finally, with all the data set up in GPU memory the way we want, we can tell OpenGL to clear the screen and run the program on the arrays we set up. As part of rasterization (deciding which pixels are covered by the vertices), we tell OpenGL to treat the vertices in groups of three, as triangles.

gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, 3);

Because we set everything up in this linear fashion, the program simply draws once. In any practical application, we'd store the data in a structured way, send it to the GPU whenever it changes, and draw every frame.

Putting it all together, the diagram below shows the minimal set of concepts that goes into showing your first triangle on screen. Even then, the diagram is heavily simplified, so your best bet is to put together the roughly 75 lines of code presented in this article and study them.

The whole process consists of creating the shaders and transferring the data to the GPU via VBOs, then tying the two together, after which the GPU assembles everything into the final image.

Final thoughts

Even greatly simplified, a whole series of steps is needed to produce this coveted triangle

The hardest part of learning OpenGL is the sheer amount of boilerplate needed to get the most basic image onto the screen. Because the rasterization framework asks us to provide the rest of the 3D rendering functionality ourselves, communicating with the GPU gets tedious.

