Off and on, I have also picked up some graphics and 3D rendering knowledge, mostly out of a love of games.

There are things I didn't understand at first that suddenly clicked later, and I thought those were worth writing down and sharing. Experts are welcome to skip this.

Pipeline? Assembly line

The word "pipeline" is usually translated literally as a pipe, but that image does not help at all and is even misleading. A "rendering pipe"? Does that mean taking a pen and drawing a pipe? It only clicked for me when I saw the word rendered in Chinese as "assembly line". A "rendering pipeline" is really a process that, like a factory assembly line, turns vertex data step by step into pixels on the screen. In other words, the essence of a render pipeline is a fixed sequence of stages, a process.
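To make "a fixed sequence of stages" concrete, here is a toy sketch in C of the pipeline-as-assembly-line idea: vertex data goes in one end, a colored pixel comes out the other. Everything in it (the stage names, the fake projection, covering only a single pixel) is invented for illustration and is far simpler than any real GPU pipeline.

```c
/* A toy "render pipeline": a fixed sequence of stages that turns
 * vertex data into colored pixels. Purely illustrative. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { unsigned char r, g, b; } Color;

/* Stage 1: vertex processing - move vertices into screen space
 * (here just a fake "projection" that scales x/y and drops z). */
static Vec3 vertex_stage(Vec3 v) { return (Vec3){ v.x * 100, v.y * 100, 0 }; }

/* Stage 2: rasterization - decide which pixels a primitive covers.
 * Here we cheat and cover only the triangle's centroid. */
static Vec3 raster_stage(Vec3 a, Vec3 b, Vec3 c) {
    return (Vec3){ (a.x + b.x + c.x) / 3, (a.y + b.y + c.y) / 3, 0 };
}

/* Stage 3: fragment shading - compute a color for each covered pixel. */
static Color fragment_stage(Vec3 p) { (void)p; return (Color){ 200, 80, 80 }; }

int main(void) {
    Vec3 tri[3] = { {0,0,0}, {1,0,0}, {0,1,0} };   /* input: vertex data */
    Vec3 a = vertex_stage(tri[0]);
    Vec3 b = vertex_stage(tri[1]);
    Vec3 c = vertex_stage(tri[2]);
    Vec3 pixel_pos = raster_stage(a, b, c);        /* which pixel?  */
    Color col = fragment_stage(pixel_pos);         /* what color?   */
    printf("pixel (%.0f, %.0f) -> rgb(%d, %d, %d)\n",
           pixel_pos.x, pixel_pos.y, col.r, col.g, col.b);
    return 0;
}
```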

UV

A Coke bottle has a layer of paper wrapped around it, that glossy label; most beverage bottles have one. The paper is two-dimensional and the bottle is three-dimensional, but, and this is the point, once they are stuck together, every point on the bottle has a corresponding point on the paper.

Is it possible to find a piece of paper that can be perfectly attached to a surface of any shape? I honestly don't know, but assume there is one; it helps to understand the concept. If this paper is sticky and picks up the points on the object's surface, then when the paper is peeled off and flattened, the points on the surface are unfolded onto a two-dimensional plane. That plane has its own coordinate system, and that coordinate system is UV.

What does UV do? Suppose I have a point A on the surface of a three-dimensional object, and its UV coordinates are (u, v). Now I give you a picture; you find the point at (u, v) in the picture and paint that point's color onto point A on the object's surface. What is the result?

Imagine that each point on the surface of the bottle shoots out a thread, and the other end of the thread is attached to its corresponding UV coordinate on the picture. Then the threads get shorter and shorter, and eventually, isn't the picture glued onto the surface of the three-dimensional object?!

That picture is called a texture.
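To make "find the point in the picture according to its UV coordinates" concrete, here is a minimal C sketch of nearest-neighbor texture sampling. The 4x4 texture and the rounding scheme are assumptions for illustration; real engines add filtering, wrap modes, mipmaps, and so on.

```c
/* A minimal sketch of looking up a texture by UV: nearest-neighbor
 * sampling of a tiny made-up 4x4 image. */
#include <stdio.h>

#define TEX_W 4
#define TEX_H 4

typedef struct { unsigned char r, g, b; } Color;

/* u and v are in [0, 1]; convert them to integer texel coordinates. */
static Color sample_texture(Color tex[TEX_H][TEX_W], float u, float v) {
    int x = (int)(u * (TEX_W - 1) + 0.5f);   /* round to nearest texel */
    int y = (int)(v * (TEX_H - 1) + 0.5f);
    return tex[y][x];
}

int main(void) {
    Color tex[TEX_H][TEX_W] = {{{0}}};       /* a 4x4 "picture", all black */
    tex[3][3] = (Color){ 255, 0, 0 };        /* one red texel in a corner  */

    /* A surface point whose UV is (1, 1) gets the color of that corner. */
    Color c = sample_texture(tex, 1.0f, 1.0f);
    printf("color at uv(1,1) = rgb(%d, %d, %d)\n", c.r, c.g, c.b);
    return 0;
}
```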

  • With UV coordinates, you can wrap any 2D image onto a 3D object. So when you model, you can build a blank model first and then give it whatever surface you like. That's where "skins" come from.
  • UV is also useful for animation. Imagine the wrapping paper on a Coke bottle being rotated: what actually changes is the UV of every point, adjusted in sync. Take a water-surface effect, for example: start with a picture of ripples, then keep shifting the ripples in the picture, and it looks as if the water is constantly rising and falling. Moving the ripple image simply means continuously offsetting each point's UV coordinates over time (see the sketch after this list).
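Here is a rough C sketch of what "continuously offsetting UV over time" can look like. The scroll speeds and the wrap-back-into-[0,1) behavior are assumptions for illustration, not any particular engine's API.

```c
/* A sketch of UV scrolling for a water ripple: every frame, shift the
 * UV used to sample the ripple texture a little and wrap it into [0,1). */
#include <math.h>
#include <stdio.h>

typedef struct { float u, v; } UV;

static float wrap01(float x) { return x - floorf(x); }   /* keep in [0,1) */

static UV scroll_uv(UV base, float time) {
    const float speed_u = 0.10f;   /* how fast the ripples drift in u */
    const float speed_v = 0.05f;   /* ...and in v */
    UV out = { wrap01(base.u + speed_u * time),
               wrap01(base.v + speed_v * time) };
    return out;
}

int main(void) {
    UV base = { 0.25f, 0.75f };
    for (int frame = 0; frame < 3; ++frame) {
        UV uv = scroll_uv(base, frame * 0.5f);   /* pretend 0.5s per step */
        printf("frame %d: sample ripple texture at (%.2f, %.2f)\n",
               frame, uv.u, uv.v);
    }
    return 0;
}
```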

To get an intuitive feel for all of this, download Unity, build a capsule, create a texture for it, and drag the UVs around.

Normal map

  • First, normals are used in lighting models, such as the diffuse model:
I_diff = K_d * ambient + K_d * light * dot(Normal, lightDir);

K_d is the reflection coefficient, ambient is the ambient light, light is the incoming light, Normal is the surface normal, and lightDir is the light direction.
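As a sanity check, here is the formula written as a tiny runnable C function. The vectors and coefficients are made-up values; the only change from the formula above is clamping dot(Normal, lightDir) at zero, a common refinement so that surfaces facing away from the light are not lit negatively.

```c
/* A small, self-contained version of the diffuse formula above. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* I_diff = K_d * ambient + K_d * light * dot(Normal, lightDir),
 * with dot() clamped to 0 so back-facing surfaces get no direct light. */
static float diffuse(float k_d, float ambient, float light,
                     Vec3 normal, Vec3 light_dir) {
    float n_dot_l = dot3(normal, light_dir);
    if (n_dot_l < 0.0f) n_dot_l = 0.0f;
    return k_d * ambient + k_d * light * n_dot_l;
}

int main(void) {
    Vec3 n = { 0, 1, 0 };              /* surface facing straight up  */
    Vec3 l = { 0, 1, 0 };              /* light shining straight down */
    printf("intensity = %.2f\n", diffuse(0.8f, 0.1f, 1.0f, n, l));
    return 0;
}
```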

Suppose I have a plane A whose lighting is uniform, and another surface B whose lighting is uneven. If, when I compute A's lighting, I use B's normal data instead, what result do I get? A ends up looking lit the way B does, right?

Going further: I record B's normals in an image, and then use that image when lighting A, so that A takes on B's surface lighting characteristics. That image is the normal map.

How is it stored? An image is just RGB data at every point (other color spaces exist, of course), and a normal is a vector, which happens to have exactly three components (x, y, z), so x, y, z can be stored straight into R, G, B. The result is an image where every "color" is actually normal information.
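Here is a sketch of one common convention for doing this: a normal's components live in [-1, 1] while color channels live in [0, 255], so they get remapped when packed into the image and unpacked again. The exact rounding here is illustrative.

```c
/* A sketch of packing a normal into a color and unpacking it again,
 * remapping [-1, 1] components to [0, 255] channels and back. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { unsigned char r, g, b; } Color;

static Color encode_normal(Vec3 n) {
    Color c = { (unsigned char)((n.x * 0.5f + 0.5f) * 255.0f),
                (unsigned char)((n.y * 0.5f + 0.5f) * 255.0f),
                (unsigned char)((n.z * 0.5f + 0.5f) * 255.0f) };
    return c;
}

static Vec3 decode_normal(Color c) {
    Vec3 n = { c.r / 255.0f * 2.0f - 1.0f,
               c.g / 255.0f * 2.0f - 1.0f,
               c.b / 255.0f * 2.0f - 1.0f };
    return n;
}

int main(void) {
    Vec3 out = { 0.0f, 0.0f, 1.0f };         /* a normal pointing "out" */
    Color c = encode_normal(out);
    Vec3 back = decode_normal(c);
    printf("stored as rgb(%d, %d, %d), decoded to (%.2f, %.2f, %.2f)\n",
           c.r, c.g, c.b, back.x, back.y, back.z);
    return 0;
}
```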

Why do this at all? Say there is a wall in the game made of bumpy bricks. You could model every bump, which is a lot of work; or you can model the wall as a perfectly smooth plane and compute its lighting with the brick wall's normal map, and it will look bumpy. The benefits: 1. To change the effect, you only have to swap one image. 2. It saves modeling work.

Here's an analogy: you draw a very realistic pit on a piece of white paper. Does the pit actually exist? No. Does it look like a pit to you? Yes. What makes the difference? Light!

At its root, this is a trick played on the eye. Performance is never enough, which is why so much of rendering is faked.