Implementing a simple software rasterizer (Part 2)

The target

Following on from the previous chapter, this time we organize the vertices into triangles as the basic primitive, rasterize those triangles, and then shade each fragment of every triangle.

The practice part

Primitive assembly

In my understanding, primitive assembly is how the vertex data is organized: for example, taking the line formed by two vertices as the basic primitive, or the triangle formed by three vertices as the basic primitive, and so on.

Primitive assembly with points as the basic primitive; primitive assembly with lines as the basic primitive

Primitive assembly with triangles as the basic primitive

Rasterization

We know that a screen is made up of pixels, so to display a 3D model on the screen we need to sample the model's image, as seen from the camera, at each pixel. The primitive assembly above decomposes the problem of sampling the whole model with pixels into the problem of sampling each basic primitive with pixels. Triangles are generally used as the basic primitive for assembly, because a triangle is the smallest unit that can form a surface and has many useful geometric properties that make the algorithms convenient. For an animated demonstration of the rasterization algorithm, see Lesson 2: Triangle Rasterization and Back Face Culling.

The algorithm I use first finds the axis-aligned bounding box of the triangle, and then tests, for each pixel inside that rectangle, whether the sample point falls within the triangle.

How to determine whether a pixel lies inside a triangle

There are many ways to make this test; I recommend the article [How to determine whether a point is inside a triangle]. I use the barycentric coordinate method, because the weights it computes are also used to interpolate the UV coordinates, the depth value Z, and the fragment normals.
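The barycentric test described above can be sketched in plain JavaScript as follows. This is an illustrative version using the signed-area formulation (the function names and the epsilon threshold are my own, not from the original code); the weights sum to 1, and the point is inside the triangle exactly when all three weights are non-negative.

```javascript
// Point-in-triangle test via barycentric coordinates (illustrative sketch).
// A, B, C, P are 2D points given as [x, y] arrays.
function barycentric2D(A, B, C, P) {
  // Solve P = wA*A + wB*B + wC*C with wA + wB + wC = 1,
  // using ratios of signed areas (cross products).
  const denom = (B[1] - C[1]) * (A[0] - C[0]) + (C[0] - B[0]) * (A[1] - C[1])
  if (Math.abs(denom) < 1e-8) return null       // degenerate triangle
  const wA = ((B[1] - C[1]) * (P[0] - C[0]) + (C[0] - B[0]) * (P[1] - C[1])) / denom
  const wB = ((C[1] - A[1]) * (P[0] - C[0]) + (A[0] - C[0]) * (P[1] - C[1])) / denom
  return [wA, wB, 1 - wA - wB]
}

function insideTriangle(weights) {
  return weights !== null && weights.every(w => w >= 0)
}
```

For the triangle (0,0), (4,0), (0,4), the point (1,1) gets weights [0.5, 0.25, 0.25] and is inside; a point such as (3,3) produces a negative weight and is rejected.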

Depth testing

The purpose of the depth test is to simulate occlusion. For example, if an opaque object in front of a cube completely covers it, we should see only the opaque object; the cube behind it should not appear in the image. Recall that during the view transformation the Z values of the vertices were only scaled; their ordering did not change.

In other words, the larger the Z value, the closer the point is to the viewer. So all we have to do is allocate a buffer of width * height Z values and, while sampling the triangles, keep only the fragment with the largest Z value at each pixel.

Fragment shader

The effect I implement here just maps a texture onto the model and computes a simple light intensity from the interpolated fragment normal and the light direction; I plan to sort out lighting models properly in later notes of this series. Before studying this, I was long confused by texture-related concepts: what is a normal map, a bump map, and so on. There is no need to get tangled up in these names. From a data point of view, whatever a map is used for, it is just a two-dimensional matrix (a two-dimensional array), which you can also view as an image.

Still, I think textures are best understood as two-dimensional matrices; for texture mapping I recommend Texture Mapping. During rasterization we obtain all the fragments, and every fragment's attributes must come from the vertices, either directly or by interpolation. Take the UV coordinates used for texture mapping: each fragment's UV coordinates are obtained by weighted interpolation of the UV coordinates of the three vertices, and are then used to look up the value at the corresponding coordinates of the texture map. The interpolation weights matter here. I used the weights computed by the barycentric method, but this interpolation is not perspective-correct, which makes the texture's perspective look wrong; on this topic I recommend the article Graphics: Things about Perspective Correction Interpolation. I did not do perspective correction here either, since my main goal was to experience the overall idea; many steps of the rendering pipeline can each be expanded into a deep field of their own, and learning is endless. Fetching the value at the texture's coordinates is itself a sampling problem, another deep pit that involves some digital signal processing theory.
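The perspective correction mentioned above can be sketched as follows. The idea is that an attribute divided by the clip-space w, and 1/w itself, both vary linearly in screen space, so we interpolate those and divide at the end. This is an illustrative helper (the names and the three-vertex layout are my own), not part of the original code.

```javascript
// Perspective-correct interpolation of a scalar attribute over a triangle.
// weights: screen-space barycentric weights of the fragment (sum to 1)
// attrs:   the attribute value at each of the three vertices
// ws:      the clip-space w of each of the three vertices
function perspectiveCorrectInterp(weights, attrs, ws) {
  let num = 0, denom = 0
  for (let i = 0; i < 3; i++) {
    num += weights[i] * attrs[i] / ws[i]   // attribute/w is linear in screen space
    denom += weights[i] / ws[i]            // and so is 1/w
  }
  return num / denom                       // divide to recover the attribute
}
```

When all three w values are equal, this reduces to plain linear interpolation, which is why the error only shows up under strong perspective.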

The code for this part

function barycentric(A, B, C, P) {
  // Build two vectors whose cross product yields the barycentric weights
  let s = []
  for (let i = 0; i < 2; i++) {
    let tmp = glMatrix.vec3.create()
    tmp[0] = C[i] - A[i]
    tmp[1] = B[i] - A[i]
    tmp[2] = A[i] - P[i]
    s.push(tmp)
  }

  let u = glMatrix.vec3.create()
  glMatrix.vec3.cross(u, s[0], s[1])

  // |u[2]| near zero means the triangle is degenerate
  if (Math.floor(Math.abs(u[2])) != 0) {
    return glMatrix.vec3.fromValues(1 - (u[0] + u[1]) / u[2], u[1] / u[2], u[0] / u[2])
  }

  return glMatrix.vec3.fromValues(-1, -1, -1)   // degenerate: reject every pixel
}

function triangle_raster(pts, uvs, normals, light_dir, zBuffer, width, getTexture, fragmentShader, drawPixel) {
  let bboxmin = [Infinity, Infinity]
  let bboxmax = [-Infinity, -Infinity]
  for (let i = 0; i < 3; i++) {               // Find the bounding box of the triangle
    for (let j = 0; j < 2; j++) {
      bboxmin[j] = Math.min(bboxmin[j], pts[i][j])
      bboxmax[j] = Math.max(bboxmax[j], pts[i][j])
    }
  }

  let p = glMatrix.vec2.create()

  for (p[0] = Math.floor(bboxmin[0]); p[0] <= bboxmax[0]; p[0]++) {
    for (p[1] = Math.floor(bboxmin[1]); p[1] <= bboxmax[1]; p[1]++) {
      let c = barycentric(pts[0], pts[1], pts[2], p)  // The weights from the barycentric coordinates

      let z = pts[0][2]*c[0] + pts[1][2]*c[1] + pts[2][2]*c[2]  // Interpolate the depth value of P

      // Skip pixels outside the triangle and fragments hidden by a closer one
      if (c[0] < 0 || c[1] < 0 || c[2] < 0 || zBuffer[Math.floor(p[0]) + Math.floor(p[1]) * width] > z)
        continue;

      zBuffer[Math.floor(p[0]) + Math.floor(p[1]) * width] = z;

      let uv = glMatrix.vec3.create()           // Interpolate the uv coordinates of P
      glMatrix.vec3.transformMat3(uv, c, uvs)

      let normal = glMatrix.vec3.create()       // Interpolate the normal vector of P
      glMatrix.vec3.transformMat3(normal, c, normals)
      glMatrix.vec3.normalize(normal, normal)

      glMatrix.vec3.normalize(light_dir, light_dir)
      drawPixel(p[0], p[1], fragmentShader(uv, getTexture, normal, light_dir))
    }
  }
}

function fragmentShader(uv, texture, normal, light_dir) {
  let color = texture(uv)
  // Clamp the Lambertian term so back-facing fragments do not go negative
  let intensity = Math.max(0, glMatrix.vec3.dot(normal, light_dir))
  glMatrix.vec3.scale(color, color, intensity)
  return color
}
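The `getTexture` callback passed to `triangle_raster` can be sketched as a nearest-neighbour sampler over a raw RGB buffer (for instance, the `data` of a canvas `ImageData`). The layout and names below are illustrative assumptions, not the original implementation; UV components are assumed to lie in [0, 1].

```javascript
// Nearest-neighbour texture sampler over a flat RGB array (3 bytes per texel).
function makeTextureSampler(pixels, texWidth, texHeight) {
  return function getTexture(uv) {
    // Clamp the uv, map to texel coordinates, and round down (nearest neighbour)
    const x = Math.min(texWidth - 1, Math.max(0, Math.floor(uv[0] * texWidth)))
    const y = Math.min(texHeight - 1, Math.max(0, Math.floor(uv[1] * texHeight)))
    const i = (x + y * texWidth) * 3
    return [pixels[i], pixels[i + 1], pixels[i + 2]]
  }
}
```

Nearest-neighbour lookup is the simplest choice; as noted above, texture sampling is a deep topic of its own (filtering, mipmapping, and so on).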

Results

Source code and data for this simple software rasterizer:

Github.com/yangla2zhen…

References

GAMES101: Introduction to Modern Computer Graphics, Lingqi Yan

Fundamentals of Computer Graphics, Fourth Edition

Tiny renderer or how OpenGL works: software rendering in 500 lines of code

Recommended resources

How to Learn Computer Graphics from Scratch