Introduction

In this part we move on to shading. Put simply, shading computes a color for each sample and stores the result (rasterization only decides which pixels a primitive covers; shading is responsible for the color that is ultimately sent to the screen). A shading computation usually has to consider three factors: lighting, shading frequency, and texture.

Local illumination model

Middle-school physics taught us that people can see objects because the human eye receives light reflected from their surfaces. An object has color because it absorbs some components of the incoming light and reflects the rest. This is the basis of the local illumination model. So what kinds of light can travel from an object to the human eye? Light can be divided into three simple categories: ambient light, diffuse reflection, and specular highlights. These lead to three simple lighting models.

Ambient light model

The ambient light model considers only ambient light and is the simplest empirical model. It makes no attempt to describe ambient light accurately; it just expresses it with one simple formula:
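
$$I_{env} = K_a \cdot I_a$$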

  • Ienv: the resulting ambient light intensity
  • Ka: the reflectance of the object surface to ambient light
  • Ia: the intensity of the incident ambient light

Since this model involves neither the incident light direction nor the reflected light direction, the object only shows up as a flat 2D shape without any sense of volume. To get a sense of volume, you have to add diffuse reflection: the Lambert diffuse model.

The Lambert diffuse reflection model

The Lambert diffuse reflection model simply adds a diffuse term on top of the ambient model. Diffuse reflection means that light hitting a point at some angle is reflected uniformly in all directions, with equal intensity in every direction; the cause is the microscopic roughness of the surface. Because the reflected light is uniform in direction and intensity by assumption, you only need to consider how much energy the surface actually receives:
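
$$L_d = K_d \cdot \frac{I}{r^2} \cdot \max(0, \; n \cdot l)$$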

  • Kd: diffuse reflection coefficient
  • I / r²: energy falloff (received energy is inversely proportional to the square of the distance r from the light source)
  • n · l: the dot product of the incident light direction and the surface normal (by Lambert's cosine law, the smaller the angle between the light direction and the normal, the more energy the surface receives)
  • max(0, n · l): clamps negative values (a negative dot product means the light arrives from behind the surface and contributes nothing)

Phong reflection model

The Phong model adds a specular term on top of the Lambert model. Specular highlights appear when the viewing direction is very close to the mirror-reflection direction, so the model considers the angle α between the reflection direction R and the view direction V. Pay attention to the exponent p; the reason for it is straightforward: the further the view direction moves away from the reflection direction, the less specular light should be visible, and the exponent p accelerates that falloff:
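
$$L_s = K_s \cdot \frac{I}{r^2} \cdot \max(0, \; v \cdot r)^p$$

Here Ks is the specular coefficient, and for normalized vectors v · r = cos α.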

Blinn-Phong lighting model

The Blinn-Phong model is purely a computational optimization of the Phong model. Blinn-Phong assumes that the angle between the half vector h (halfway between the view direction V and the incident direction L) and the surface normal can stand in for the angle between the view direction V and the reflection direction R, since V approaching R is equivalent to h approaching the normal, and the remaining deviation can be absorbed by the coefficients. The result is very close to actually computing the angle between the reflection and the eye (the half-vector angle is exactly half of it), but the half vector is much cheaper to compute than the reflection vector, which greatly improves efficiency.
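
The half vector and the resulting specular term are:

$$h = \frac{v + l}{\|v + l\|}, \qquad L_s = K_s \cdot \frac{I}{r^2} \cdot \max(0, \; n \cdot h)^p$$

and the complete Blinn-Phong result is the sum L = La + Ld + Ls. The sketch below puts all three terms together in Python; the function name, the scalar light intensity, and the coefficient values are illustrative choices, not part of any standard API.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(point, normal, eye, light_pos, light_intensity,
                ka=0.1, kd=0.7, ks=0.3, p=64, ambient=1.0):
    """Ambient + diffuse + specular terms, all as scalars for simplicity."""
    n = normalize(normal)
    l = light_pos - point
    r2 = float(np.dot(l, l))        # squared distance, for the I / r^2 falloff
    l = normalize(l)
    v = normalize(eye - point)
    h = normalize(v + l)            # half vector between view and light

    La = ka * ambient                                               # ambient
    Ld = kd * (light_intensity / r2) * max(0.0, np.dot(n, l))       # diffuse
    Ls = ks * (light_intensity / r2) * max(0.0, np.dot(n, h)) ** p  # specular
    return La + Ld + Ls

# Example: a point on the XY plane, lit and viewed from above.
c = blinn_phong(point=np.array([0.0, 0.0, 0.0]),
                normal=np.array([0.0, 0.0, 1.0]),
                eye=np.array([0.0, 0.0, 5.0]),
                light_pos=np.array([1.0, 1.0, 3.0]),
                light_intensity=10.0)
print(c)
```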

Shading frequency

The local illumination model explained above mainly uses the relationship between the view direction, the incident light direction, and the normal vector, but it never specified whether that normal is a per-triangle face normal or a per-vertex normal. That is the subject of this chapter: shading frequency (flat shading, vertex shading, and pixel shading). The three shading frequencies correspond to three different methods, and I will introduce each of them.

Flat shading

Flat shading, as the name implies, takes each face as the shading unit. Model data is mostly stored as many triangular faces, so a normal vector is recorded for each face; that normal is used to evaluate the Blinn-Phong illumination model once, and the resulting color is assigned to the entire face. The computation is cheap, and the face normal can be obtained from the cross product of two edges of the polygon, but the transition between adjacent faces is obviously not smooth, since each face is a single flat color.
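
A minimal sketch of the cross-product trick (the function name is just illustrative):

```python
import numpy as np

def face_normal(v0, v1, v2):
    """Face normal from the cross product of two triangle edges;
    the winding order v0 -> v1 -> v2 decides which side it points to."""
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)
```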

Gouraud shading (per vertex)

The color calculation is performed once per vertex, and the colors of the pixels inside the triangle are obtained through barycentric interpolation. To compute the normal vector of a vertex, average the normal vectors of all the faces that share that vertex, then normalize the result. The barycentric interpolation formula is

$$C = \alpha C_0 + \beta C_1 + \gamma C_2$$

where C0, C1, C2 are the colors of the vertices and α, β, γ are the barycentric coordinates of a point inside the triangle.

Strictly speaking, the barycentric coordinates should be those of the original 3D triangle, but in practice they are computed in screen space and the error is then corrected (perspective-correct interpolation).
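
One standard form of that correction interpolates attributes divided by depth. With screen-space barycentric coordinates α, β, γ and view-space vertex depths z0, z1, z2:

$$z_t = \frac{1}{\alpha/z_0 + \beta/z_1 + \gamma/z_2}, \qquad C = z_t \left( \alpha \frac{C_0}{z_0} + \beta \frac{C_1}{z_1} + \gamma \frac{C_2}{z_2} \right)$$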

Pixel-based shading (Phong shading)

Pixel-based shading uses barycentric coordinates to interpolate the vertex normals, obtaining a normal vector for every pixel, and then evaluates the lighting model once per pixel. The computation is heavier, but the result is clearly much smoother than vertex shading.
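
A sketch of the per-pixel normal (again with illustrative names):

```python
import numpy as np

def pixel_normal(alpha, beta, gamma, n0, n1, n2):
    """Interpolate the vertex normals with barycentric weights, then
    renormalize, since interpolation shortens the vector."""
    n = alpha * n0 + beta * n1 + gamma * n2
    return n / np.linalg.norm(n)
```

The interpolated normal is then fed into the Blinn-Phong function from earlier, once per pixel instead of once per vertex or face.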

Summary of the render pipeline

The rendering pipeline is the sequence of operations that turns a set of points carrying three-dimensional geometric information into pixels in two-dimensional screen space. In fact, it ties together all the knowledge from the earlier notes in this series. Let's walk through it step by step:

  • Vertex Processing: vertices that lie inside the viewing volume are transformed by the MVP matrices, finally yielding coordinates projected onto the two-dimensional plane (the depth z value is kept for the Z-buffer).
  • Triangle Processing: complex geometry is divided into triangles for easier subsequent processing.
  • Rasterization: all rasterization does is determine which pixels each triangle covers, i.e. which points can be displayed.
  • Fragment Processing: the stage that actually colors the pixels. There are many factors to consider: depth values, shading frequency, anti-aliasing method, texture mapping (a texture can replace the color computed by the lighting model), and so on.
  • Framebuffer Operations: all the pixel color information is assembled and fed to the display device for display. This completes the graphics rendering pipeline.
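
As a deliberately tiny, self-contained sketch of how rasterization, the depth test, and the framebuffer interact (an ASCII "framebuffer", vertices already in screen space, and one flat color per triangle stand in for the real thing):

```python
WIDTH, HEIGHT = 64, 32

def edge(a, b, p):
    """Signed area term: used for coverage tests and barycentric weights."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize_triangle(tri, color, zbuf, image):
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    area = edge((x0, y0), (x1, y1), (x2, y2))
    if area == 0:                        # degenerate triangle, nothing to draw
        return
    for y in range(HEIGHT):
        for x in range(WIDTH):
            p = (x + 0.5, y + 0.5)       # sample at the pixel center
            a = edge((x1, y1), (x2, y2), p) / area
            b = edge((x2, y2), (x0, y0), p) / area
            c = edge((x0, y0), (x1, y1), p) / area
            if a >= 0 and b >= 0 and c >= 0:      # pixel is covered
                z = a * z0 + b * z1 + c * z2      # interpolate depth
                if z < zbuf[y][x]:                # depth test
                    zbuf[y][x] = z                # update the Z-buffer
                    image[y][x] = color           # "fragment shading"

zbuf = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
image = [[" "] * WIDTH for _ in range(HEIGHT)]
rasterize_triangle([(5, 5, 1.0), (60, 8, 1.0), (30, 28, 1.0)], "#", zbuf, image)
rasterize_triangle([(10, 2, 0.5), (50, 30, 0.5), (15, 25, 0.5)], "*", zbuf, image)
print("\n".join("".join(row) for row in image))   # framebuffer -> display
```

The second triangle sits at a smaller depth, so where the two overlap the depth test lets it overwrite the first, exactly as the Z-buffer step in the pipeline dictates.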
