Preface


In the previous posts we covered some basics of matrix arithmetic, and I believe everyone has already been thrown into the deep end. So, to make the experience more rewarding, in this post I will discuss the determinant of a matrix, the inverse of a matrix, orthogonal matrices, and the homogeneous matrix. I’m glad you’re here. Why do I say that? Because the homogeneous matrix is the one we use most often in day-to-day development. I briefly mentioned it in Core Graphics framework: Affine Transformation and homogeneous coordinates (a beginner’s perspective); in this article I will explain the homogeneous matrix further.

So, here we go.

The determinant of a matrix


Every square matrix has an associated scalar, called the determinant of the square matrix. If I started with abstract concepts, it might confuse not only the reader but also me, so let’s use practical examples to illustrate determinants and what they mean geometrically.

Linear algebra

First, the determinant of a square matrix M is written |M| (note: the determinant of a non-square matrix is undefined). Let’s start with the simplest case, a 2×2 square matrix. The determinant of a 2×2 matrix is defined as shown below.





One thing to note here: when writing a determinant we do not use the square brackets of matrix notation; instead we use two vertical bars.

According to the book, we can remember the calculation like this: take the product of the main diagonal and the product of the anti-diagonal, then subtract the anti-diagonal product from the main-diagonal product, as shown in the figure below. Of course, this covers only the determinant of a 2×2 square matrix; the 3×3 case is more involved, so allow me to take it slowly.





An example of calculating the determinant of a 2×2 square matrix is shown below.
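The 2×2 rule can also be sketched in a couple of lines of Python (the function name is my own, not from the book):

```python
def det2(m):
    """Determinant of a 2x2 matrix: main-diagonal product
    minus anti-diagonal product."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

print(det2([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```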





That’s pretty much all there is to evaluating the determinant of a 2×2 square matrix; now let’s evaluate the determinant of a 3×3 square matrix. First, let’s look at the definition of the determinant of a 3×3 square matrix.





Does it look like a lot of trouble? In fact, once we get the hang of it, we can calculate it easily: first write the matrix M down twice, side by side, and then carry out the calculation as shown in the figure.
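This diagonal scheme (the rule of Sarrus) can be written out directly in Python (a sketch with names of my own choosing):

```python
def det3(m):
    """3x3 determinant: sum of the three 'down-right' diagonal
    products minus the three 'up-right' diagonal products."""
    return (m[0][0] * m[1][1] * m[2][2]
            + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1]
            - m[0][2] * m[1][1] * m[2][0]
            - m[0][0] * m[1][2] * m[2][1]
            - m[0][1] * m[1][0] * m[2][2])

print(det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # identity matrix -> 1
```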





All right. After the 2×2 and 3×3 square matrices, you may be wondering: is that all we can do with square matrices? No, no, no. The mathematicians before us left us two precious tools: the submatrix and the cofactor. Let’s look at how each is used and how they differ.

Let’s look at the submatrix first, starting with the concept: given a matrix M, the matrix left after removing the i-th row and j-th column is a submatrix of M (the constraints on i and j are the obvious ones), denoted as follows.





Next, let’s use an example to show how the submatrix is generated.
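Removing a row and a column is easy to express in code; here is a small Python sketch (0-based indices, unlike the 1-based math notation; the function name is my own):

```python
def submatrix(m, i, j):
    """Matrix left after removing row i and column j (0-based)."""
    return [[v for c, v in enumerate(row) if c != j]
            for r, row in enumerate(m) if r != i]

m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(submatrix(m, 0, 0))  # [[5, 6], [8, 9]]
```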





Above we saw the definition and calculation of the submatrix, which covers that half of the story. So what is a cofactor?

The cofactor is defined as follows: for a square matrix M, the cofactor of the element in a given row and column equals the signed determinant of the corresponding submatrix. To pull out the key distinction: the cofactor is a determinant, and therefore a scalar, whereas the submatrix is a matrix. One is a scalar, one is a matrix; that is the difference between the two. Now that we understand the difference, let’s look at how the calculation of the cofactor is defined, as follows.





The formula alone may leave us feeling a bit helpless, so let’s take the submatrix example above and solve for the corresponding cofactor, as shown below.





So how do submatrices and cofactors help us solve for the determinant? In fact, using cofactors we can directly compute the determinant of any n×n square matrix: first pick any row i of the matrix (i not greater than the number of rows), then let the column index j run across that row. The specific formula is as follows.





Having this formula, we ought to verify it, so let’s use it to derive the determinant of a 4×4 square matrix. The formula makes the calculation quite convenient, but we must carefully check the sign of each term (I got the verification wrong two or three times by not paying attention). Here I choose i = 1 (you can pick any i when verifying it yourself); the specific process is shown below (there are many terms, so it is split into two steps).





The first step in the calculation




Result of calculation

From the above, we find that the more rows there are, the higher the cost of computing the determinant; with cofactor expansion the work grows explosively with the matrix size. The 4×4 determinant already gave us plenty of trouble, so a 10×10 one would drive us crazy. Here the book mentions a determinant-calculation method based on pivot selection (pivoting, as used in Gaussian elimination); interested readers can look it up themselves.
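For reference, the cofactor expansion described above can be sketched as a short recursive Python function (expanding along the first row; a sketch for clarity, not an efficient method — pivoting is what you would use in practice):

```python
def det(m):
    """Determinant of an n x n matrix by cofactor expansion
    along the first row (sign pattern (-1)**j for row 0)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

print(det([[3, 0], [1, 2]]))  # 6
```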

We have said a lot about the determinant, but one question remains: what does the determinant do, or what does it mean? In 2D, the determinant represents the signed area of the parallelogram whose two sides are the basis vectors. In 3D, it represents the signed volume of the parallelepiped whose three edges are the basis vectors. Let’s verify this idea with the following example.

As shown in the figure, in a 2D environment we have the basis vectors v = [3 0] and u = [1 2].





Its area is 3×2 = 6, and its determinant is 3×2 − 1×0 = 6, so the determinant equals the area (of course, if the basis vector v were [-3 0], the determinant would evaluate to -6: the signed area).

Next, let’s look at the three basis vectors u = [2 0 0], v = [1 2 0], w = [0 0 1] in a 3D environment, as shown in the figure.





Then we calculate the volume of the parallelepiped spanned by the three basis vectors above as 2×2×1 = 4, and compute the determinant of the matrix formed by the three basis vectors. It turns out the absolute values are the same, as shown below.
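Both the 2D and the 3D checks can be reproduced numerically; the determinant function below is a self-contained cofactor-expansion sketch (names are my own):

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# 2D: parallelogram spanned by v = [3, 0] and u = [1, 2]
print(det([[3, 0], [1, 2]]))                   # 6, the signed area
# 3D: parallelepiped spanned by u = [2,0,0], v = [1,2,0], w = [0,0,1]
print(det([[2, 0, 0], [1, 2, 0], [0, 0, 1]]))  # 4, the signed volume
```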










The inverse of a matrix


The inverse of a matrix is different from its transpose; for the transpose, please see 3D graphics: matrix knowledge. One prerequisite for inversion is that only square matrices can be inverted (and, more precisely, only square matrices with a nonzero determinant).

Let’s first look at how the inverse of a square matrix is defined. Given a square matrix M, its inverse, written M^-1, is also a square matrix, and when M and M^-1 are multiplied the result is the identity matrix I, as shown below.





So how do we compute M^-1? The 3D math book I am reading gives the following method.





In the formula above we already know how to solve for the determinant of the matrix, so what is adj M? adj M is called the adjugate (classical adjoint) matrix of M, defined as the transpose of M’s cofactor matrix. Okay, let’s see how an example illustrates this. Suppose the matrix M is as shown below.





Then we solve for the cofactors of all the elements of the matrix, as shown below.





The transpose of the cofactor matrix (adj M) is then as shown below.





We have solved for the adjugate matrix adj M. Next, we solve for the inverse of the matrix; applying the formula, the calculation process is as follows.
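The whole adj M / |M| recipe fits in a short pure-Python sketch (function names are my own; the example matrix is my own choice):

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def inverse(m):
    """Inverse via the adjugate: M^-1 = adj(M) / |M| (det must be nonzero)."""
    n, d = len(m), det(m)
    # cofactor of element (i, j): signed determinant of its submatrix
    cof = [[(-1) ** (i + j)
            * det([row[:j] + row[j + 1:] for r, row in enumerate(m) if r != i])
            for j in range(n)] for i in range(n)]
    # adjugate = transpose of the cofactor matrix; divide by the determinant
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

print(inverse([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```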





Now that we know what the inverse of a matrix is and how to calculate it, what does it actually do, or what does it mean geometrically? In fact, the matrix inverse is mainly used to implement an “undo”: if a vector v is transformed by a matrix M, we can then apply the inverse matrix of M to reverse the transformation. The verification process is shown below.





Orthogonal matrix


Let’s see how an orthogonal matrix is defined: a square matrix M is called orthogonal if and only if the product of M and its transpose M^T equals the identity matrix.





From the section on inverses, we know that the product of a matrix and its inverse is the identity matrix I. From this we can reason that if a matrix is orthogonal, then its inverse and its transpose are equal.





So what’s the point of an orthogonal matrix? Precisely that its inverse and its transpose are equal: the transpose is very easy to compute, while computing the inverse with cofactors is quite troublesome, so for an orthogonal matrix we can simply take the transpose and obtain the inverse directly.
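A quick numerical check with a 2D rotation matrix, which is orthogonal, so R times its transpose should give the identity (allowing for floating-point error; helper names are my own):

```python
import math

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# A 2D rotation matrix is orthogonal, so R * R^T should be the identity.
t = math.radians(30)
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
I = matmul(R, transpose(R))
print(all(abs(I[i][j] - (i == j)) < 1e-12
          for i in range(2) for j in range(2)))  # True
```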

4X4 homogeneous matrix


DuangDuangDuang~ now for the most important part of this article: the homogeneous matrix. Before discussing it, let’s first use two classic examples to explain how homogeneous space came about (the examples come from the Internet, so don’t be surprised).

Do two parallel lines meet? Before we knew about homogeneous space, we knew that two parallel lines could not meet, but can two parallel lines really never meet? Look at the picture below: we all know the two rails are parallel, yet the two parallel tracks appear to meet at a point at infinity. How can that be? In Cartesian 2D coordinates we use (x, y) to represent a point, and a point at infinity (∞, ∞) has no meaning in Cartesian space. So Cartesian coordinates cannot explain it, but homogeneous space can.





With these questions in mind, we begin our journey into homogeneous coordinates. Homogeneous space arose mainly to solve projection problems. A so-called homogeneous coordinate represents an n-dimensional vector with an (n+1)-dimensional vector. 4D homogeneous space has four components (x, y, z, w), where the fourth component w is called the homogeneous component. The corresponding 3D Cartesian coordinates are then (x/w, y/w, z/w).
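The division by w can be sketched as a tiny helper (a hypothetical function name, just for illustration):

```python
def to_cartesian(h):
    """Convert a homogeneous point (x, y, z, w) to 3D Cartesian
    coordinates (x/w, y/w, z/w); w must be nonzero."""
    x, y, z, w = h
    return (x / w, y / w, z / w)

print(to_cartesian((2, 4, 6, 2)))  # (1.0, 2.0, 3.0)
```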

Let’s tackle the first question by explaining why two parallel lines, projected into a 2D plane, intersect at a point. We know that in 2D Cartesian coordinates, Ax + By + C = 0 represents a line, so two parallel lines correspond to the two equations shown below.





In Cartesian coordinates, this system has a solution only if C = D, in which case the two equations describe the same line; so it obviously cannot explain two distinct parallel lines intersecting at a point. If instead we introduce the notion of homogeneous coordinates, replacing x and y with x/w and y/w, we move into projective space, as follows.





The set of equations above can be converted to the set of equations below.





If C ≠ D, then the system forces w = 0, and the two lines do intersect, at (x, y, 0): the two parallel lines intersect at a point at infinity.
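As an aside, projective geometry gives a neat way to verify this: a line Ax + By + Cw = 0 can be written as the triple [A, B, C], and the intersection of two lines is the cross product of their triples. For two sample parallel lines (my own choice):

```python
def cross(p, q):
    """Cross product of two 3-component triples. In projective geometry,
    the intersection of the lines [A, B, C] and [A', B', C']
    (for Ax + By + Cw = 0) is their cross product."""
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

# Two distinct parallel lines: x + y + 1 = 0 and x + y + 2 = 0
print(cross((1, 1, 1), (1, 1, 2)))  # (1, -1, 0): w = 0, a point at infinity
```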

So why is it necessary to introduce homogeneous coordinates, and what are their advantages? They provide an effective way to transform a set of points in 2D, 3D, or even higher-dimensional space from one coordinate system to another via matrix operations, and they can represent points at infinity: if w = 0 in an (n+1)-dimensional homogeneous coordinate, it actually represents a point at infinity in n-dimensional space. For a homogeneous coordinate [a, b, w], keeping a and b fixed while w shrinks toward 0, the point (a/w, b/w) moves along a fixed line, gradually heading off to infinity.

4X4 translation matrix


In 3D graphics: Matrix and linear transformation I covered several linear transformations, such as rotation, scaling, and mirroring, but not translation, even though in daily development translation is one of the affine transformations we use most. Why is that? According to the book, the properties of matrix multiplication dictate that the zero vector always maps to the zero vector, so a transformation expressed as a matrix multiplication cannot include translation. However, we can use a 4X4 translation matrix to represent a translation in a 3D environment, and a 3X3 translation matrix to represent a translation in a 2D environment (assuming w is held constant, w = 1). The specific forms are as follows.





4X4 translation matrix




3X3 translation matrix

Although matrix multiplication is still linear in 4D, so it cannot represent a translation of 4D space itself, a 4X4 matrix acting on homogeneous coordinates can represent a translation transformation in a 3D environment.
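Here is a sketch of the 4X4 translation matrix in action, using the row-vector convention (translation components in the bottom row; with column vectors the offsets would go in the last column instead). The helper names are my own:

```python
def translation(dx, dy, dz):
    """4x4 translation matrix, row-vector convention:
    p' = p * T, with the offsets in the bottom row."""
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [dx, dy, dz, 1]]

def transform(p, m):
    """Multiply a homogeneous row vector p by a 4x4 matrix m."""
    return [sum(p[k] * m[k][j] for k in range(4)) for j in range(4)]

# Translate the point (1, 2, 3), written homogeneously as (1, 2, 3, 1).
print(transform([1, 2, 3, 1], translation(10, 20, 30)))  # [11, 22, 33, 1]
```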

The end


In writing this blog, I think my biggest gain is the translation transformation matrix, because we may not use much of the rest in the normal development process, while translation, scaling, and rotation are indeed the three most common affine transformations. There are a few things in this article I only half understand: one is the explanation of homogeneous coordinates, and another is the derivation of the 4X4 translation matrix. If any expert has a deep understanding of these, I hope you can explain them; Sao Dong would be grateful. In my next blog post I will take a closer look at projective coordinates and perspective projection. If you like Sao Dong, you can follow me, thank you.

Finally, here is a link to the PDF version of 3D Math Basics: Graphics and Game Development.

–> <<3D Math Basics: Graphics and Game Development>> portal 🚪