
Preface

WebGPU is the next-generation graphics API for the Web platform. While it is still under development, it is already available in the latest version of Chrome Canary. Today I will take you through a first look at WebGPU.

Preparation

  1. Download the latest version of Chrome Canary (this is a routine step, so I won’t go into details here)
  2. Open the browser, enter chrome://flags/ in the address bar, search for “WebGPU”, and enable the WebGPU flag

So far, all our preparations have been completed.
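If you want to verify that the flag took effect before writing any rendering code, a quick check like the following works (a minimal sketch; navigator.gpu is only defined when WebGPU is available):

// navigator.gpu is the entry point to WebGPU; it is undefined when the flag is off.
if (!('gpu' in navigator)) {
  console.error('WebGPU is not available; check the chrome://flags setting.');
}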

Coding

Next, we’ll start coding. Along the way, I will introduce some WebGPU concepts and, where possible, compare them with the corresponding APIs in WebGL (there may be some mistakes; corrections are welcome).

Step 1: Obtain the WebGPU device and context

First, we need to get the “WebGPU instance”. We will again draw an analogy with WebGL here.

// WebGL
const ctx: WebGLRenderingContext = canvas.getContext('webgl')

// WebGPU
const webGPUCtx = canvas.getContext('webgpu')
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

Here we can easily see that, besides obtaining the WebGPU context, WebGPU also requires calling navigator.gpu.requestAdapter() and adapter.requestDevice() to obtain the adapter and device respectively.

adapter

The adapter represents an instance of the system’s underlying WebGPU implementation. For example, on Windows the underlying rendering is done by D3D12, and WebGPU is a set of APIs built on top of D3D12, so there must be an “adapter” to adapt between the two. Similarly, on macOS the underlying rendering is done by Metal, so an adapter is needed there as well.
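As a side note, requestAdapter also accepts an optional powerPreference hint defined by the WebGPU spec draft; a hedged sketch:

// Hint that we would like the high-performance GPU on multi-GPU systems.
// powerPreference is optional; 'low-power' is the other allowed value.
const adapter = await navigator.gpu.requestAdapter({
  powerPreference: 'high-performance',
});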

device

The device is the logical instance object obtained from the adapter, and most of our subsequent operations are done through the device.

WebGPUContext

The WebGPUContext differs from the WebGLRenderingContext:

  • The WebGPUContext doesn’t provide any rendering or computing capability by itself; it only provides the rendering area (the canvas) and the methods associated with presenting to it.
  • The WebGLRenderingContext carries the full rendering API: setting state, creating buffers, issuing draw calls, and so on are all done through this context object.
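Putting Step 1 together, here is a minimal sketch of the setup with basic error handling (initWebGPU and the error messages are just illustrative names, not part of the WebGPU API):

async function initWebGPU(canvas: HTMLCanvasElement) {
  if (!navigator.gpu) {
    throw new Error('WebGPU is not enabled in this browser');
  }
  // The adapter wraps the system's underlying implementation (D3D12, Metal, ...)
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error('Failed to get a GPU adapter');
  }
  // The device is the logical object we issue almost all subsequent calls through
  const device = await adapter.requestDevice();
  // The context only manages the canvas as a render target
  const context = canvas.getContext('webgpu');
  if (!context) {
    throw new Error('Failed to get the webgpu canvas context');
  }
  return { adapter, device, context };
}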

Step 2: Configure WebGPU context parameters

interface GPUCanvasConfiguration {
  device: GPUDevice;
  format: GPUTextureFormat;
  usage?: GPUTextureUsageFlags;
  colorSpace?: GPUPredefinedColorSpace;
  compositingAlphaMode?: GPUCanvasCompositingAlphaMode;
  size?: GPUExtent3D;
}

const presentationFormat = context.getPreferredFormat(adapter);
context.configure({
  device,
  format: presentationFormat, // Different devices support different texture formats
  size: presentationSize, // This is usually the canvas size
});

These are the basic configuration options; nothing special here.
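The presentationSize used above isn’t defined in the snippet. A common way to compute it, sketched here based on the official samples (assuming you want the drawing buffer scaled by devicePixelRatio):

// Match the drawing buffer to the canvas's CSS size, scaled by devicePixelRatio,
// so the output stays sharp on high-DPI screens.
const devicePixelRatio = window.devicePixelRatio || 1;
const presentationSize = [
  canvas.clientWidth * devicePixelRatio,
  canvas.clientHeight * devicePixelRatio,
];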

Step 3: Create the render pipeline object

Now we need to create the pipeline state object (PSO). For more background on the rendering pipeline, see the WebGL overview of principles I wrote earlier.

const pipeline = device.createRenderPipeline({
  vertex: {
    module: device.createShaderModule({
      code: triangleVertWGSL,
    }),
    entryPoint: 'main',
  },
  fragment: {
    module: device.createShaderModule({
      code: redFragWGSL,
    }),
    entryPoint: 'main',
    targets: [
      {
        format: presentationFormat,
      },
    ],
  },
  primitive: {
    topology: 'triangle-list',
  },
});

We can see the general structure:

  • vertex: specifies the corresponding vertex shader
    • entryPoint: specifies which function is the entry point
  • fragment: specifies the fragment shader
    • entryPoint: specifies which function is the entry point
    • targets: sets parameters for the render targets (such as the texture format)
  • primitive: specifies the primitive type to draw

With this configuration we have a complete render pipeline, in which we specified the vertex shader, the fragment shader, and the basic primitive type.
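One thing worth noting: the triangleVertWGSL and redFragWGSL referenced above are just plain JavaScript strings containing WGSL source. A minimal sketch of how they might be declared (the shader code itself is walked through in the Shader language section below):

// Shader source is passed to device.createShaderModule() as a plain string,
// e.g. declared with template literals (or fetched from separate .wgsl files).
const redFragWGSL = `
  [[stage(fragment)]]
  fn main() -> [[location(0)]] vec4<f32> {
    return vec4<f32>(1.0, 0.0, 0.0, 1.0);
  }
`;
// triangleVertWGSL is declared the same way; its contents are covered later.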

Compared to WebGL, the pipeline configuration above is roughly equivalent to the following code:

const vertexShader = gl.createShader(gl.VERTEX_SHADER); // Equivalent to vertex in the WebGPU PSO
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER); // Equivalent to fragment in the WebGPU PSO
const program = gl.createProgram();
// A series of shader compilation and linking steps are omitted here

gl.useProgram(program);

gl.drawArrays(gl.TRIANGLES, 0, 3); // Equivalent to primitive in the WebGPU PSO

There are some shader language parts involved here, which we will skip for the moment. Let’s go through the whole process and then come back to the shader language part.

Step 4: “Record” and submit the command

WebGPU is different from WebGL here: in WebGL, the GPU starts drawing as soon as drawArrays is called. In WebGPU, there are two steps:

  1. Record commands: we “record” what we need the GPU to do and put these commands into a queue.
  2. Submit commands: once the commands are submitted to the GPU, the GPU draws in the order we recorded them.

This is also the key design for WebGPU to have good support for multithreading!

Now let’s put this into code:

const commandEncoder = device.createCommandEncoder();
const textureView = context.getCurrentTexture().createView(); // Get the texture view of the canvas so the WebGPU output is presented on the canvas

const renderPassDescriptor: GPURenderPassDescriptor = {
  colorAttachments: [
    {
      view: textureView, // Bind the color attachment to the canvas texture above
      loadValue: { r: 1.0, g: 1.0, b: 1.0, a: 1.0 }, // Corresponds to gl.clearColor in WebGL
      storeOp: 'store',
    },
  ],
};

const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
passEncoder.setPipeline(pipeline);
passEncoder.draw(3, 1, 0, 0);
passEncoder.endPass();

device.queue.submit([commandEncoder.finish()]);

The commandEncoder in the code above is the object used to “record” commands. The passEncoder created from it records rendering-related commands; besides this, commandEncoder also provides a beginComputePass method for recording compute-related passes.

Next we take the PSO object we created earlier, set it on this passEncoder, and call draw. But the GPU doesn’t actually draw anything yet; this is just recording a command. Keep that in mind!

Finally, be sure to call the endPass method, which indicates that our render command has been recorded.

Finally, we submit all of the recorded commands to the GPU with device.queue.submit([commandEncoder.finish()]).
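Note that in a real application these commands are usually re-recorded and resubmitted every frame. Here is a hedged sketch of wrapping the code above in a render loop (frame is just an illustrative name):

// Re-record and submit the commands once per frame.
// getCurrentTexture() must be called each frame, because the canvas
// texture changes after every presentation.
function frame() {
  const commandEncoder = device.createCommandEncoder();
  const textureView = context.getCurrentTexture().createView();

  const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [
      {
        view: textureView,
        loadValue: { r: 1.0, g: 1.0, b: 1.0, a: 1.0 },
        storeOp: 'store',
      },
    ],
  });
  passEncoder.setPipeline(pipeline);
  passEncoder.draw(3, 1, 0, 0);
  passEncoder.endPass();

  device.queue.submit([commandEncoder.finish()]);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);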

This is the whole process of drawing a triangle. The final result is a red triangle on a white background.

Shader language

Now let’s come back to the shader language. WebGL uses GLSL as its shader language, while WebGPU uses WGSL (shader-language programs are collectively referred to as “shaders” below). Let’s take a look at how the shaders in this program are written.

Vertex shader

[[stage(vertex)]]
fn main([[builtin(vertex_index)]] VertexIndex : u32) -> [[builtin(position)]] vec4<f32> {
  var pos = array<vec2<f32>, 3>(
    vec2<f32>(0.0, 0.5),
    vec2<f32>(-0.5, -0.5),
    vec2<f32>(0.5, -0.5));

  return vec4<f32>(pos[VertexIndex], 0.0, 1.0);
}

First, let’s explain some parts of this program:

Stage

In WebGPU, there are two broad categories of pipelines:

  1. GPURenderPipeline, the render pipeline we are most familiar with, used for rendering graphics
  2. GPUComputePipeline, a compute pipeline used for GPGPU general-purpose computing

For the render pipeline, there are two kinds of stages: the vertex shader stage and the fragment shader stage.

Stage is used to indicate which stage the program is used for.

[[stage(vertex)]] indicates that this program is used for the vertex shader stage, which is the vertex processing stage in the rendering pipeline.

[[stage(fragment)]] indicates that this program is used for the Fragment shader stage, which is used to render each fragment.

Next, fn main declares a function whose name matches the entryPoint we specified when creating the PSO.

[[builtin(vertex_index)]]:

The double brackets [[...]] declare an attribute, and builtin(vertex_index) means this parameter is bound to the built-in vertex index. VertexIndex and u32 are the variable name and variable type respectively; u32 is an unsigned 32-bit integer.

The -> arrow indicates the location and type of the return value. [[builtin(position)]] works exactly like the builtin attribute above, so it needs no further explanation.

Now let’s look at the function body. Inside it we mainly define the positions of several vertices, which is quite different from WebGL: WebGL does not support defining vertex positions directly in the shader program. WebGPU, through the built-in vertex_index variable, gives us the ability to generate vertex positions inside the shader, so for a simple case like this we don’t need to pass any vertex data to the GPU, which saves that upload overhead.

Finally, we return the vertex information.
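For contrast, here is a sketch of what uploading these same three vertices looks like in WebGL (assuming a compiled and linked program with an attribute named a_position; the attribute name is just illustrative):

// In WebGL the vertex positions live in a GPU buffer,
// uploaded from JavaScript and wired to a vertex attribute.
const positions = new Float32Array([
   0.0,  0.5,
  -0.5, -0.5,
   0.5, -0.5,
]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

const loc = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);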

Fragment shader

[[stage(fragment)]]
fn main() -> [[location(0)]] vec4<f32> {
  return vec4<f32>(1.0, 0.0, 0.0, 1.0);
}

The fragment shader program is relatively simple. [[stage(fragment)]] indicates that the program runs in the fragment shader stage, as mentioned above. It simply returns a red color.

Conclusion

Ok, so far, a simple WebGPU program has been written. Let’s review the main content.

  1. The rendering process of WebGPU can be roughly divided into two steps:
    • Record commands (record them by creating a renderPassEncoder/computePassEncoder, setting the PipelineStateObject, and so on)
    • Submit the commands. Only once the commands are submitted does the GPU actually start rendering; the GPU does no work before that, which makes multi-threaded processing convenient.
  2. WebGPU uses a new shader language (WGSL: WebGPU Shading Language). WGSL provides a series of built-in variables that can do things GLSL can’t, such as specifying vertex positions in the shader.

OK, that’s all for today. If you found this useful, please like and follow me; your support is my motivation to keep updating ~