1. The concept

WebGL is a cross-platform, royalty-free Web standard for a low-level 3D graphics API based on OpenGL ES and exposed to ECMAScript through the HTML5 Canvas element. [Khronos WebGL](https://www.khronos.org/webgl/)
Developers familiar with OpenGL ES 2.0 will recognize WebGL as a shader-based API using GLSL, with constructs that are semantically similar to those of the underlying OpenGL ES API.

WebGL (Web Graphics Library) is a JavaScript API that renders high-performance, interactive 3D and 2D graphics in any compatible web browser without the need for plug-ins. WebGL does this by introducing an API that conforms closely to OpenGL ES 2.0 and can be used within the HTML5 `<canvas>` element. This conformance allows the API to take advantage of the hardware graphics acceleration provided by the user's device.

WebGL is currently supported by Firefox 4+, Google Chrome 9+, Opera 12+, Safari 5.1+, Internet Explorer 11+, and Microsoft Edge build 10240+. However, some WebGL features also require hardware support on the user's device. Mobile browsers are mostly supported as well, provided the GPU supports the relevant features; S3 Texture Compression (S3TC), for example, is typically only available on Tegra-based devices. Most browsers expose WebGL under the context name `webgl`, but older browsers need the name `experimental-webgl`. In addition, WebGL 2 is almost fully backward compatible with WebGL 1 and uses the context name `webgl2`.
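The context-name fallback described above can be sketched as a small helper. The preference order (`webgl2` first, then `webgl`, then the legacy `experimental-webgl`) is an illustrative assumption; an application targeting only WebGL 1 would drop the first name.

```javascript
// Sketch: acquire a rendering context from a canvas, preferring WebGL 2
// and falling back to WebGL 1 and the legacy "experimental-webgl" name
// used by older browsers. Returns null if nothing is supported.
function getGLContext(canvas) {
  const names = ["webgl2", "webgl", "experimental-webgl"];
  for (const name of names) {
    const gl = canvas.getContext(name);
    if (gl) {
      return { name, gl }; // report which context name succeeded
    }
  }
  return null; // WebGL is not supported at all
}
```

In a page you would call it as `getGLContext(document.querySelector("canvas"))` and check the result for `null` before rendering.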

Browser compatibility for [`WebGL2RenderingContext`](https://developer.mozilla.org/zh-CN/docs/Web/API/WebGL2RenderingContext):

| Platform | Browser | Support |
| --- | --- | --- |
| Desktop | Chrome | 56 |
| Desktop | Edge | 79 |
| Desktop | Firefox | 51 |
| Desktop | Internet Explorer | No |
| Desktop | Opera | 43 |
| Desktop | Safari | No |
| Mobile | Android WebView | 58 |
| Mobile | Chrome for Android | 58 |
| Mobile | Firefox for Android | 51 |
| Mobile | Opera for Android | 43 |
| Mobile | Safari on iOS | No |
| Mobile | Samsung Internet | 7.0 |

2. The principle

WebGL itself is applied to the canvas. A 3D space model is built on a left-handed Cartesian coordinate system, and the rasterization pipeline then renders the 3D effect onto the canvas from the camera's perspective.

2D+ Perspective = 3D
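The "2D + perspective = 3D" idea boils down to a perspective divide: a point farther from the camera projects closer to the screen centre. A minimal sketch, where the focal length `f` is an arbitrary illustrative value:

```javascript
// Pinhole perspective projection: divide x and y by depth z, scaled by a
// focal length f. Points with larger z (farther away) land nearer the centre.
function projectPerspective(point, f) {
  return {
    x: (point.x * f) / point.z,
    y: (point.y * f) / point.z,
  };
}
```

For example, a point at depth `z = 2` projects twice as far from the centre as the same point moved to depth `z = 4`, which is exactly the foreshortening that makes a flat canvas read as 3D.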

Vertex processing: at this stage the GPU reads the vertex data that describes the appearance of the 3D model, determines the shape and positional relationships of the geometry from that data, and establishes the skeleton of the 3D model. On modern GPUs this is done by a hardware-implemented Vertex Shader.
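The core operation of this stage is multiplying each vertex by a 4×4 transform matrix. A CPU-side sketch of that multiplication (using the column-major layout WebGL's GLSL uses for matrices; the matrices themselves are illustrative):

```javascript
// What a vertex shader fundamentally does, done on the CPU for illustration:
// multiply a homogeneous vertex [x, y, z, w] by a 4x4 column-major matrix
// to position it in clip space.
function transformVertex(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      // column-major: element (row, col) lives at index col * 4 + row
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}
```

In a real vertex shader this is the familiar `gl_Position = projection * view * model * position;` line, executed in hardware for every vertex.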

Rasterization: the image actually shown by the display is composed of pixels, so the points and lines of the geometry generated above must be converted into the corresponding pixels by certain algorithms. This process of converting a vector image into a series of pixels is called rasterization. A mathematically defined diagonal line segment, for example, is ultimately turned into a run of discrete pixels.
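The diagonal-line example can be made concrete with Bresenham's classic algorithm, one standard way (not necessarily what any particular GPU uses) to turn a mathematical segment into pixels:

```javascript
// Bresenham's line algorithm: convert the segment (x0, y0)-(x1, y1),
// given in integer pixel coordinates, into the list of pixels covering it.
function rasterizeLine(x0, y0, x1, y1) {
  const pixels = [];
  const dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
  const sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
  let err = dx - dy; // running error term deciding which axis to step
  while (true) {
    pixels.push([x0, y0]);
    if (x0 === x1 && y0 === y1) break;
    const e2 = 2 * err;
    if (e2 > -dy) { err -= dy; x0 += sx; }
    if (e2 < dx)  { err += dx; y0 += sy; }
  }
  return pixels;
}
```

Running it on the segment (0, 0)–(3, 3) yields the four diagonal pixels (0,0), (1,1), (2,2), (3,3): the continuous line has become a cascade of discrete pixels.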

Texture mapping: the polygons generated by the vertex units only form the outline of the 3D object; texture mapping covers those polygon surfaces with textures. In layman's terms, the polygon surfaces are pasted with the corresponding images to produce a "realistic" appearance. The TMU (Texture Mapping Unit) does this work.
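At its core, "pasting an image onto a surface" is a lookup from UV coordinates into texel data. A nearest-neighbour sketch of that lookup (real TMUs also apply filtering such as bilinear interpolation and mipmapping; the flat-array texture layout here is an assumption for illustration):

```javascript
// Nearest-neighbour texture sampling: map UV coordinates in [0, 1] to a
// texel in a width x height texture stored as a flat row-major array.
function sampleNearest(texture, width, height, u, v) {
  // Clamp to the texture bounds, then pick the nearest texel index.
  const x = Math.min(width - 1, Math.max(0, Math.floor(u * width)));
  const y = Math.min(height - 1, Math.max(0, Math.floor(v * height)));
  return texture[y * width + x];
}
```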

Pixel processing: at this stage (during the rasterization of each pixel) the GPU performs the calculations that determine the final attributes of each pixel. On GPUs supporting the DX8 and DX9 specifications this is done by a hardware-implemented Pixel Shader.
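A typical per-pixel calculation is a lighting term. This sketch computes simple Lambert diffuse shading for one pixel on the CPU; the colour and vector formats are illustrative assumptions, and vectors are assumed to be already normalized:

```javascript
// Per-pixel Lambert diffuse shading: scale the base colour by
// max(0, dot(normal, lightDir)). This is what a fragment (pixel)
// shader would compute for every rasterized pixel.
function shadePixel(baseColor, normal, lightDir) {
  const d =
    normal[0] * lightDir[0] +
    normal[1] * lightDir[1] +
    normal[2] * lightDir[2];
  const intensity = Math.max(0, d); // surfaces facing away get no light
  return baseColor.map((c) => c * intensity);
}
```

A pixel whose surface faces the light keeps its full colour; one facing away goes black, which is the "final attribute" this stage decides.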

Final output: the ROP (raster operations unit) completes the output of pixels; after a frame finishes rendering, it is written to the frame buffer in video memory.

Generally speaking, the GPU's work is to generate the 3D geometry, map that geometry to the corresponding pixels, compute each pixel to determine its final color, and complete the output.

However, it should be noted that no matter how powerful a gaming graphics card is, light and shadow are calculated by the CPU; the GPU has only two jobs: generating polygons, and coloring those polygons.

The actual image generation process is roughly as follows:

The actual image generation process is roughly as follows. First, the model is read from the hard disk.

The CPU sorts the polygon information and hands it to the GPU, which processes it into the polygons visible on screen, but without textures — a wireframe.

After the CPU finishes computing the model, the GPU puts the model data into video memory, and the graphics card also attaches materials and colors to the model. The CPU then retrieves polygon information from video memory as needed and calculates the silhouettes of the shadows cast by the lighting; once the CPU is done, the graphics card's job is to fill those shadows with dark colors. Data is exchanged between the CPU and GPU throughout.

The flow chart

3. The instructions

WebGL itself covers many areas and is based on OpenGL. The following installments will mainly focus on expanding its application in the Web direction; OpenGL ES covers the embedded direction.

We will continue to update this series, and we look forward to your support for the follow-up research in the direction of games and visualization.

The Exploring Road of WebGL (II) — WebGL Scene Construction
The Exploring Road of WebGL (I) — Understanding WebGL

reactVR