Preface

In the first article we talked a lot about shaders. So what exactly is a shader? What does its workflow look like? What kinds of shaders are there, and what does each kind do? With these questions in mind, let's begin this article.

1. Basic understanding of shaders

When it comes to shaders, the first things we should mention are primitives and pixels. So what are they?

Primitive: the basic geometric element that OpenGL can draw, such as a point, a line, or a triangle.

Pixel: the basic unit that makes up an image.

With primitives and pixels out of the way, let's talk about the rendering pipeline. How should we understand the rendering pipeline?

OpenGL rendering pipeline: a series of processing stages, executed in a fixed order, that take the data our application submits to OpenGL and turn it into a final image.

How should we understand this? We can think of the pipeline as an assembly line: every station is fixed and the order cannot be shuffled. Each worker or machine on that line is a shader. So how do we give instructions to those workers and machines? We need a common language, and that language is GLSL, a programming language designed specifically for graphics development.
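To get a feel for what GLSL looks like, here is a minimal sketch of a pass-through vertex shader. The `#version 330 core` profile and the attribute name `aPosition` are assumptions for illustration, not something fixed by this article.

```glsl
#version 330 core

// "aPosition" is a hypothetical attribute name; the application binds
// whatever vertex data it likes to this input.
layout(location = 0) in vec3 aPosition;

void main()
{
    // Forward the incoming vertex position to the next pipeline stage.
    gl_Position = vec4(aPosition, 1.0);
}
```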

2. Classification of shaders

  • Vertex shader (required): works on vertex coordinates; it usually performs transformations and other per-vertex calculations
  • Tessellation shader (optional): describes the shape of the object; it generates new geometry inside the pipeline to smooth the model and produce its final shape
  • Geometry shader (optional): makes changes to the original geometry; well suited to certain special effects
  • Pixel shader (required): colors each fragment for the final output. Vertex and pixel shaders are mandatory, while tessellation and geometry shaders are optional and rarely used (see the sketch after this list).
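To complement the vertex shader sketch above, here is a minimal pixel (fragment) shader, again only a hedged sketch: the output name `fragColor` and the `#version 330 core` profile are assumptions.

```glsl
#version 330 core

// "fragColor" is a hypothetical output name; whatever is written here
// becomes the color of the fragment currently being shaded.
out vec4 fragColor;

void main()
{
    // Paint every fragment a solid orange color.
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
```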

3. The rendering process of shaders

The rendering pipeline mentioned in the first section is really the rendering flow of the shaders. The steps below walk through that flow:

  • The vertex data is passed to the vertex shader, which processes each vertex (applying the projection and other transformations)
  • The output of the vertex shader is handed to the tessellation shader, which generates new geometry inside the pipeline to smooth the model and produce its final shape; it can also change the primitive type or discard primitives entirely
  • The result then goes to the geometry shader, which makes changes to the original geometry (usually for special effects)
  • Next comes primitive setup, which can be understood as assembling the vertices into the primitives that describe what the image will look like
  • The next step is clipping, which, as the name implies, cuts away whatever falls outside the viewport
  • Then comes rasterization, which converts the mathematical description of each primitive into the fragments that correspond to positions on the screen. Put simply, the image is cut into pixel-sized pieces and handed to the pixel shader
  • The pixel shader processes each fragment and gives it a color; the resulting color and depth values are passed on to the per-fragment test and blend stage. This means the pixel shader runs once for every fragment (see the sketch after this list)
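To make the vertex-to-fragment handoff concrete, here is a hedged sketch of a vertex/fragment pair: the vertex shader writes a per-vertex color, rasterization interpolates it across the primitive, and the pixel shader reads the interpolated value once per fragment. The names `aPosition`, `aColor`, and `vColor` are illustrative, not taken from this article.

```glsl
// --- Vertex shader ---
#version 330 core
layout(location = 0) in vec3 aPosition;   // hypothetical position attribute
layout(location = 1) in vec3 aColor;      // hypothetical per-vertex color

out vec3 vColor;   // handed to the rasterizer and interpolated per fragment

void main()
{
    vColor = aColor;
    gl_Position = vec4(aPosition, 1.0);
}

// --- Fragment (pixel) shader ---
#version 330 core
in vec3 vColor;      // interpolated value produced by rasterization
out vec4 fragColor;  // hypothetical output name

void main()
{
    // Runs once per fragment; the interpolated color becomes the output.
    fragColor = vec4(vColor, 1.0);
}
```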

4. Pipeline

In plain terms, the process described above is the pipeline. Pipelines fall into two categories:

  • Fixed pipeline: when OpenGL first came out, everyone used the fixed pipeline. Everything was fixed; you simply followed the predetermined order and could not change anything in between. As business scenarios later became richer, the fixed pipeline could no longer meet the requirements, and the programmable pipeline emerged.
  • Programmable pipeline: as the name implies, parts of the pipeline can now be programmed. So which shaders are programmable? For now, the shaders we program with GLSL are the vertex shader and the fragment (pixel) shader (see the sketch below).
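As a small illustration of what the programmable pipeline hands over to us, here is a hedged sketch of a vertex shader that applies the model-view-projection transform itself, a job the fixed pipeline used to perform automatically. The uniform name `uMVP` and the attribute `aPosition` are assumptions for illustration.

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition;  // hypothetical position attribute

// "uMVP" is a hypothetical uniform: the combined model-view-projection
// matrix that the fixed pipeline once applied for us automatically.
uniform mat4 uMVP;

void main()
{
    // In the programmable pipeline we write this transform ourselves.
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```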