# Introduction to Rasterization

Rasterization can be understood simply as the process of presenting the geometric information of an image or object on the screen. For example, for a triangle: which set of points in screen space is needed to represent it? (Discrete sets of points are used to represent continuous lines or shapes.)

• Screen-space pixel coordinates range from (0, 0) to (width − 1, height − 1)
• Each pixel is represented by its center point (x + 0.5, y + 0.5)

# Rasterization of lines

## DDA numerical differential algorithm

DDA is an algorithm that plots a line based on its slope. Given the line equation y = kx + b: if |k| < 1, the line leans toward the x axis, so x changes faster than y; if |k| > 1, the line leans toward the y axis, so y changes faster than x. The rasterization of the two cases is shown below; the algorithm itself is fairly simple.

• When |k| < 1, start from the starting point and step x = x + 1, y = y + k each time; round y to get the pixel (x, y) that should be drawn
• When |k| > 1, step y = y + 1, x = x + 1/k each time; round x to get the pixel (x, y) that should be drawn
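The two cases above can be folded into one loop by always stepping along the faster-changing axis. A minimal Python sketch (function name and integer-endpoint assumption are mine, not from the notes):

```python
def dda_line(x0, y0, x1, y1):
    """Rasterize a line with DDA: step along the faster-changing axis,
    accumulate the slower coordinate, and round to the nearest pixel."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))  # number of unit steps along the faster axis
    if steps == 0:
        return [(x0, y0)]          # degenerate line: a single pixel
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))  # snap to the nearest pixel center
        x += x_inc
        y += y_inc
    return pixels
```

When |k| < 1 this reduces to the first bullet (x_inc = 1, y_inc = k); when |k| > 1 it reduces to the second (y_inc = 1, x_inc = 1/k).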

## Midpoint Bresenham algorithm

The idea of the midpoint Bresenham algorithm is to treat the pixel centers as a virtual grid. Walking from the starting point to the end point of the line, it considers the intersection of the line with each vertical grid line, and uses the sign of an error term to decide which pixel in that column is closest to the intersection. Take 0 < k < 1 as an example. When 0 < k < 1, each step only the pixel to the right or to the upper right is a candidate, so we take the midpoint between the two candidate centers (the circled center of the yellow grid cell in the figure) and test which side of the line it lies on. If the line passes above the midpoint of the two candidate centers, the upper pixel is selected; if the line passes below it, the lower pixel is selected.
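A minimal Python sketch of the 0 < k < 1 case described above, using the standard integer error term (the scaling by 2 to keep everything in integers is the usual trick, not something the notes spell out):

```python
def bresenham_line(x0, y0, x1, y1):
    """Midpoint Bresenham for integer endpoints with slope 0 < k < 1.
    The sign of d tells which side of the line the next midpoint is on."""
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx            # initial (scaled) error term
    y = y0
    pixels = []
    for x in range(x0, x1 + 1):
        pixels.append((x, y))
        if d > 0:              # line passes above the midpoint -> go up-right
            y += 1
            d += 2 * (dy - dx)
        else:                  # line passes below the midpoint -> go right
            d += 2 * dy
    return pixels
```

Note that only integer additions and a sign test are needed per pixel, which is why this algorithm is so hardware-friendly compared to DDA's floating-point accumulation.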

# Rasterization of triangles

The triangle is the most basic primitive in graphics. Most models are represented by triangular faces, and any other polygon can in fact be decomposed into a number of triangles. Rasterizing a triangle mainly means judging whether each pixel is covered by the triangle: if it is, it is rendered; if not, it is skipped. So how do we tell whether a point is inside a triangle? We use the cross product. As shown in the figure, we know the three vertices of the triangle to rasterize, P0, P1, P2, and the test point Q. Compute P0P1 × P0Q, P1P2 × P1Q, and P2P0 × P2Q: if the three results have the same sign, Q lies on the same side of all three edges and must be inside the triangle; if the signs differ, it must be outside. Performing this test for every pixel in screen space is obviously wasteful, so we can optimize with a rectangular bounding box and only test the pixels inside it.
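The three-cross-product test plus the bounding-box optimization can be sketched in a few lines of Python (function names are mine; points are (x, y) tuples, and pixels are sampled at their centers as stated at the top of the notes):

```python
def cross_z(ox, oy, ax, ay, bx, by):
    """z-component of (A - O) x (B - O): its sign tells which side of edge OA point B is on."""
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def inside_triangle(q, p0, p1, p2):
    """Q is inside iff the three edge cross products share a sign."""
    c0 = cross_z(*p0, *p1, *q)   # P0P1 x P0Q
    c1 = cross_z(*p1, *p2, *q)   # P1P2 x P1Q
    c2 = cross_z(*p2, *p0, *q)   # P2P0 x P2Q
    return (c0 >= 0 and c1 >= 0 and c2 >= 0) or (c0 <= 0 and c1 <= 0 and c2 <= 0)

def rasterize_triangle(p0, p1, p2):
    """Only test pixels inside the triangle's axis-aligned bounding box."""
    xs = [p0[0], p1[0], p2[0]]
    ys = [p0[1], p1[1], p2[1]]
    pixels = []
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            if inside_triangle((x + 0.5, y + 0.5), p0, p1, p2):  # sample pixel center
                pixels.append((x, y))
    return pixels
```

Testing both orderings of signs makes the check work regardless of the triangle's winding (clockwise or counter-clockwise).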

# Anti-aliasing

After applying the triangle rasterization algorithm above, the resulting set of pixels makes it obvious that the edges of the triangle are too "bumpy". This is because we use a finite number of discrete pixels to approximate a continuous triangle, so jaggies naturally appear: the approximation is inaccurate. From a signal-processing point of view, the sampling frequency is lower than the frequency of the signal, which causes aliasing.

## SSAA (Supersampling anti-aliasing)

The idea of SSAA (Super Sampling AA) is very intuitive: if approximating with a finite number of discrete pixels gives poor results, then using more sample points should give better ones. Following this idea, we subdivide each original pixel. In the example below, each pixel is subdivided into 4 sample points. We shade each sample point (shading, a concept not yet introduced, can be understood as the process of computing the color of each pixel; here the triangle is pure red, so a sample point inside the triangle simply gets the color (1, 0, 0)). Once every sample point has a color, we sum the colors of the subdivided sample points within each pixel and take the mean as the anti-aliased color of that pixel.

For example, with 4x SSAA and a final screen output resolution of 800×600, we first rasterize into a buffer with a resolution of 1600×1200 and shade it pixel by pixel, then downsample that buffer by a factor of 4 back to 800×600.
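The downsampling step is just block averaging. A minimal Python sketch (grayscale values in a 2-D list stand in for a real color buffer, an assumption for brevity):

```python
def ssaa_downsample(hi_res, factor):
    """hi_res: 2-D list of sample values at `factor` times the target resolution.
    Each output pixel is the mean of its factor x factor block of samples."""
    h, w = len(hi_res), len(hi_res[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [hi_res[y + j][x + i]
                     for j in range(factor) for i in range(factor)]
            row.append(sum(block) / (factor * factor))  # mean of the block
        out.append(row)
    return out
```

With factor = 2 this is exactly the 4x SSAA resolve: four shaded samples averaged into one output pixel.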

SSAA is the most primitive anti-aliasing method. Although the resulting image has far less aliasing, the cost is high: with 4x SSAA, both rasterization and shading carry four times the computational load, and the buffer is four times larger than the target resolution.

## MSAA (Multiple Sampling anti-aliasing)

MSAA (MultiSampling Anti-Aliasing) is an improvement on SSAA. Obviously SSAA is computationally expensive: with each pixel divided into 4 sample points, we must shade 4 times per pixel to compute colors, i.e. 4 times the work.

The MSAA approach is also easy to understand. We still subdivide into sample points, but only count how many sample points are covered by the triangle. When computing color, we shade only once, at the pixel center (that is, all attributes are interpolated to the pixel center before shading), as shown in the figure below. Thus 4x MSAA has 4 times the rasterization load of the original, but in the pixel shading stage each target pixel is shaded only once (with 4x SSAA, one target pixel corresponds to 4 high-resolution pixels, i.e. 4 shading operations). It also does not require larger buffer storage (4x SSAA requires 4 times the buffer storage).
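The per-pixel logic can be sketched as follows. This is an illustrative Python sketch: the `edge`/`inside` helpers and the regular 2×2 sample pattern are my assumptions, not a real GPU's sample layout:

```python
def edge(ox, oy, ax, ay, px, py):
    """z-component of (A - O) x (P - O); sign gives P's side of edge OA."""
    return (ax - ox) * (py - oy) - (ay - oy) * (px - ox)

def inside(px, py, tri):
    """Same-sign cross-product test from the triangle rasterization section."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    e0 = edge(x0, y0, x1, y1, px, py)
    e1 = edge(x1, y1, x2, y2, px, py)
    e2 = edge(x2, y2, x0, y0, px, py)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

SAMPLES = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]  # 2x2 pattern

def msaa_pixel(px, py, tri, shade):
    """4x MSAA for one pixel: coverage from 4 sub-samples, but a single
    shade() call at the pixel center instead of one per sample."""
    covered = sum(inside(px + sx, py + sy, tri) for sx, sy in SAMPLES)
    r, g, b = shade(px + 0.5, py + 0.5)     # shade once, not 4 times
    k = covered / len(SAMPLES)              # coverage fraction
    return (r * k, g * k, b * k)
```

An edge pixel with 3 of 4 samples covered thus gets 75% of the shaded color while paying for only one shading evaluation.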

## FXAA (Fast Approximate Anti-aliasing)

FXAA (Fast Approximate Anti-Aliasing) is an image post-processing technique. It samples the target image directly and detects edges by pixel color: pixels whose color changes drastically are treated as edges. This may not be accurate, but it is very fast; it is essentially a cheap trick for dealing with the jaggy problem.

## TAA (Temporal Anti-Aliasing)

TAA (Temporal Anti-Aliasing) is the most commonly used image post-processing technique. Shading aliasing can be reduced by increasing the sampling frequency, but taking many samples per frame directly is very expensive. TAA's approach is to spread the cost of multi-sampling across frames: each frame reuses data preserved from previous frames (the current frame's pixel information is blended with the previous frame's pixel information at the same position). That is what "temporal" means, as shown in the figure below. Of course, the premise is that each frame samples at a different sub-pixel position. The sample position cannot always be the pixel center as before; a jitter operation is required (each frame's sample position gets a small offset). This avoids repeatedly sampling the same position when the image is static, which would leave the effective sample count unchanged and make the anti-aliasing fail.
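The cross-frame blend is typically an exponential moving average of the history buffer with the current frame. A minimal Python sketch (the blend weight `alpha` and the 2-D-list buffer representation are illustrative assumptions):

```python
def taa_resolve(history, current, alpha=0.1):
    """Blend the current frame into the accumulated history buffer.
    Small alpha keeps more history (smoother, but more lag)."""
    return [[(1 - alpha) * h + alpha * c for h, c in zip(hrow, crow)]
            for hrow, crow in zip(history, current)]
```

Repeated over many frames, each pixel converges toward the average of the jittered samples, which is where the anti-aliasing comes from; the same recurrence is also what causes the lag and ghosting discussed next.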

Disadvantages of TAA: because each frame's pixel color is actually blended with the previous frame's, it is easy to introduce a sense of lag, and when an object moves too fast, ghosting artifacts appear behind it.

# Visibility/Occlusion

Once we've solved the aliasing problem, there is still the question of how to determine the ordering of objects. More specifically, each pixel may correspond to points on more than one triangle face; which point should be displayed? The answer is obvious: the point closest to the camera. So we use the depth z obtained from the earlier transformations, where the larger the z, the farther from the camera.

## Painter’s Algorithm

The painter's algorithm is the original approach: simply sort the triangles by z-value (front-to-back distance from the camera), rasterize them one by one in that order, and when a conflict occurs (a pixel already occupied by an earlier triangle's rasterization), force it to be overwritten. The painter's algorithm is simple and easy to understand, but it has obvious disadvantages:

• Sorting the triangles by z distance has a cost of O(n log n), where n is the number of triangles
• Sorting triangles by a single z-value can be incorrect; for example, the painter's algorithm fails to render the image below, where the triangles overlap cyclically

## Z-Buffer

Z-Buffer is the mainstream, hardware-supported occlusion-culling technique. Its principle is to keep an additional buffer that stores the minimum z-value seen so far at each pixel (after the transformations, the smaller the z-value, the closer to the camera), so there are in fact two buffers:

• Frame Buffer: stores the color of each pixel, i.e. the image itself
• Z-Buffer (depth buffer): stores the depth value (z-value) of each pixel

The buffers here hold per-pixel information in screen space. For each pixel a triangle produces after rasterization and the Pixel/Fragment Shader, its z-value is first compared with the depth stored at the corresponding position in the Z-Buffer. If it is smaller (meaning closer to the camera), the pixel's color is written to the corresponding position in the frame buffer and its z-value overwrites the corresponding position in the Z-Buffer; if it is larger, the pixel is discarded.

• The time complexity of Z-Buffering is O(n), where n is the number of triangles
• Most GPU hardware implements the Z-buffer algorithm, which can be implemented quickly
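The depth test just described can be sketched in a few lines of Python (buffer layout and function names are illustrative; real hardware does this per fragment in fixed-function units):

```python
INF = float("inf")

def make_buffers(width, height, clear_color=(0, 0, 0)):
    """Frame buffer holds colors; Z-buffer starts at +inf, meaning 'nothing drawn yet'."""
    frame = [[clear_color] * width for _ in range(height)]
    depth = [[INF] * width for _ in range(height)]
    return frame, depth

def depth_test_write(frame, depth, x, y, z, color):
    """Write the fragment only if it is closer (smaller z) than what is stored."""
    if z < depth[y][x]:
        depth[y][x] = z        # overwrite the stored minimum depth
        frame[y][x] = color    # and the stored color
        return True
    return False               # fragment discarded: something nearer is already there
```

Because each fragment is handled independently with a simple compare-and-write, no sorting of triangles is needed and the result is correct in any submission order, which is exactly what fixes the painter's algorithm's failure case.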

## Early-Z

One disadvantage of the Z-Buffer is that when the Pixel/Fragment Shader spends a lot of time on shading (heavy computation such as lighting and texture sampling), the resulting pixel may still be discarded by the depth test, wasting that work. The principle of Early-Z is to move the Z-buffer test to after rasterization and before pixel shading, so that pixels that would be discarded anyway skip the shading computation entirely.

However, some situations are not suitable for Early-Z: the pixel shader may modify the z/depth value, alpha testing may discard fragments, and so on. Early-Z results would be inaccurate in those cases, so the z-test must instead run after rasterization and shading, before deciding whether to discard the pixel.

Z-prepass improvement: with an extra pass, every triangle is first rasterized writing only depth, with no pixel shading computation (no color output). After all triangles go through this first pass, we obtain a screen-space depth map (in fact, the Z-buffer) recording the minimum z-value at every pixel. The second pass is the normal rendering pass, except that depth writes are disabled; for each rasterized pixel we compare its depth with the depth stored in the Z-buffer, and only run the pixel shading computation if the two are equal. This guarantees that each screen pixel incurs at most one shading computation.

# Conclusion

• Computer graphics is really not easy!
• Yan teacher’s class is really good!
• His notes are really worth reviewing!