Small knowledge, big challenge! This article is taking part in the “Essential Tips for Programmers” writing event.

Preface

Hello! Friend!!!

Thank you very much for reading haihong’s article. If there are any mistakes in it, please point them out ~

 

Self-introduction ଘ(੭, ᵕ)੭

Nickname: Haihong

Tag: programmer monkey | C++ contestant | student

Introduction: I got to know programming through the C language and then transferred into a computer major; I have been fortunate to win some national and provincial awards, and my recommendation for postgraduate study has been confirmed. Currently learning C++/Linux/Python.

Learning experience: solid foundation + more notes + more code + more thinking + learn English well!

 

Currently at the beginner (“xiaobai”) stage of machine learning

This article serves only as my own study notes, for building a knowledge system and for review

Know what it is, and know why it is so!

If any mathematical formula displays incompletely or renders incorrectly, please view the original post

Elementary transformations of matrices in linear algebra

3.1 Elementary transformations of a matrix

Definition

Elementary row operations of matrices

  1. Swap two rows (swapping rows $i$ and $j$ is written $r_i \leftrightarrow r_j$)
  2. Multiply all elements in a row by a number $k \neq 0$ (multiplying row $i$ by $k$ is written $r_i \times k$)
  3. Add $k$ times all the elements of one row to the corresponding elements of another row (adding $k$ times row $j$ to row $i$ is written $r_i + k r_j$)
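The three operations above can be sketched in Python on a matrix stored as a list of lists (the helper names below are my own illustration, not from the textbook):

```python
# The three elementary row operations, each mutating the matrix in place.

def swap_rows(A, i, j):           # r_i <-> r_j
    A[i], A[j] = A[j], A[i]

def scale_row(A, i, k):           # r_i x k, with k != 0
    assert k != 0
    A[i] = [k * x for x in A[i]]

def add_multiple(A, i, j, k):     # r_i + k * r_j
    A[i] = [a + k * b for a, b in zip(A[i], A[j])]

A = [[1, 2], [3, 4]]
swap_rows(A, 0, 1)         # A is now [[3, 4], [1, 2]]
scale_row(A, 0, 2)         # A is now [[6, 8], [1, 2]]
add_multiple(A, 1, 0, -1)  # A is now [[6, 8], [-5, -6]]
```

Each operation is reversible (swap again, scale by $\frac{1}{k}$, add $-k$ times the row), which is exactly why elementary matrices are invertible.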

The corresponding operations on columns (the same three operations) are called elementary column operations; elementary row and column operations are together called elementary transformations. If matrix A is transformed into B by a finite number of elementary row operations, A is said to be row equivalent to B, written $A \stackrel{r}{\sim} B$

If matrix A is transformed into B by a finite number of elementary column operations, A is said to be column equivalent to B, written $A \stackrel{c}{\sim} B$

If matrix A is transformed into matrix B by a finite number of elementary transformations, A is said to be equivalent to B, written $A \sim B$

Note:

$A \stackrel{r}{\sim} B$ denotes a transformation from A to B using row operations; $A \stackrel{c}{\sim} B$ denotes a transformation from A to B using column operations

Properties of equivalence

The equivalence relation between matrices has the following properties:

  1. Reflexivity: $A \sim A$
  2. Symmetry: if $A \sim B$, then $B \sim A$
  3. Transitivity: if $A \sim B$ and $B \sim C$, then $A \sim C$

Types of matrices

1. Row echelon matrix

A staircase line can be drawn through the matrix with all zeros below it; each step of the staircase occupies only one row, so the number of steps equals the number of nonzero rows; the first element to the right of each step’s vertical segment is nonzero, and it is the first nonzero element of that nonzero row.

2. Reduced row echelon matrix (row simplest form)

In addition to the requirements of a row echelon matrix, the following must hold:

  • The first nonzero element of each nonzero row is 1
  • All other elements in the columns containing these leading 1s are 0

Any matrix $A_{m \times n}$ can always be transformed into a row echelon matrix, and further into a reduced row echelon matrix, by a finite number of elementary row operations.
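As a sketch of this fact, the following Python function (my own illustration, not from the textbook) reduces a matrix to its reduced row echelon form using only the three elementary row operations; `Fraction` keeps the arithmetic exact:

```python
from fractions import Fraction

def rref(A):
    """Reduce a matrix (list of lists) to reduced row echelon form
    using only the three elementary row operations."""
    A = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(A), len(A[0])
    r = 0                                        # next pivot row
    for c in range(cols):
        # find a row with a nonzero entry in column c, at or below row r
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]          # operation 1: swap rows
        A[r] = [x / A[r][c] for x in A[r]]       # operation 2: scale pivot row to 1
        for i in range(rows):                    # operation 3: clear column c
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return A

print(rref([[1, 2, 3], [2, 4, 7], [3, 6, 10]]))
# [[1, 2, 0], [0, 0, 1], [0, 0, 0]] (entries as Fractions)
```

Stopping after the forward pass (before clearing entries above each pivot) would give a row echelon form instead.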

3. Standard form matrix

Applying elementary column operations to the reduced row echelon form yields a matrix with an even simpler shape, called the standard form matrix. It is characterized by an identity matrix in the upper-left corner, with all remaining elements 0.

By a series of elementary transformations, a matrix A can always be brought to the standard form $F=\begin{bmatrix} E_r & 0\\ 0 & 0 \end{bmatrix}_{m \times n}$, where r is the number of nonzero rows of the row echelon form.

4. Elementary matrix

The matrix resulting from an elementary transformation of the identity matrix E is called an elementary matrix

Since there are three kinds of elementary operations, there are three kinds of elementary matrices. The following takes row operations as the example.

(1) Swapping rows i and j of the identity matrix gives the elementary matrix $E_m(i,j)$. Left-multiplying the matrix $A=(a_{ij})_{m \times n}$ by the m-order elementary matrix $E_m(i,j)$ swaps rows i and j of A:

$(r_i \leftrightarrow r_j)$

A concrete example (left multiplication): swap rows 1 and 3 of the identity matrix.

Similarly, right-multiplying A by an elementary matrix of order n performs the corresponding column operation.

(2) Multiplying the ith row (or column) of the identity matrix by a number $k \neq 0$ gives the elementary matrix $E_m(i(k))$.

It can be seen that left-multiplying A by $E_m(i(k))$ multiplies the ith row of A by k.

A practical example (multiply left) :

Multiply the second row of the identity matrix by k, with k = 2.

In the same way, right-multiplying A by $E_m(i(k))$ multiplies the ith column of A by k.

(3) Adding k times row j of E to row i (or k times column i to column j) gives the elementary matrix $E(ij(k))$. Left-multiplying A by it adds k times row j of A to row i.

A practical example (multiply left) :

Add k times row 3 of the identity matrix to row 2, with k = 2.

Similarly, right-multiplying A by it adds k times column i of A to column j.
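The row-versus-column effect of left and right multiplication can be checked numerically. A minimal sketch (the helper names are mine):

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2], [3, 4], [5, 6]]               # a 3 x 2 matrix

E13 = identity(3)                           # E(1,3): swap rows 1 and 3 of E_3
E13[0], E13[2] = E13[2], E13[0]
assert matmul(E13, A) == [[5, 6], [3, 4], [1, 2]]   # left: rows 1, 3 of A swapped

E12 = identity(2)                           # E(1,2) of order 2
E12[0], E12[1] = E12[1], E12[0]
assert matmul(A, E12) == [[2, 1], [4, 3], [6, 5]]   # right: columns 1, 2 of A swapped
```

Note that the left factor must have order m (here 3) and the right factor order n (here 2) for the shapes to match.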

Properties

From the above discussion, it can be concluded that

Property 1

Let A be an $m \times n$ matrix.

  • Applying an elementary row operation to A is equivalent to left-multiplying A by the corresponding elementary matrix of order m;
  • Applying an elementary column operation to A is equivalent to right-multiplying A by the corresponding elementary matrix of order n.

Elementary matrices are all invertible, and their inverses are elementary matrices of the same type:


  • $E(i,j)^{-1} = E(i,j)$

  • $E(i(k))^{-1}=E(i(\frac{1}{k}))$

  • $E(ij(k))^{-1}=E(ij(-k))$

Note


  • $E(i(k))$ is the identity matrix with its ith row multiplied by k

  • $E(ij(k))$ is the identity matrix with k times its jth row added to its ith row

As an example, let the identity matrix of order 3 be E:


$E=\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$

Clearly, swapping the same two rows of E twice restores E, i.e. $E(i,j)E(i,j)=E$.

So a swap elementary matrix is its own inverse: $E(i,j)^{-1}=E(i,j)$.

Multiplying the second row of E by 2 gives

$E(i(2))=\begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{bmatrix} \quad (i = 2,\ \text{the second row})$

Then

$E(i(2))^{-1}=E(i(\frac{1}{2}))=\begin{bmatrix} 1 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & 1 \end{bmatrix}$

Indeed,


$\begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & 1 \end{bmatrix}=\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} = E$
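This check can also be done in code; a small sketch verifying that the scale-by-2 and scale-by-$\frac{1}{2}$ elementary matrices are inverses (the helper name is mine):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
E_2x2  = [[1, 0, 0], [0, 2, 0], [0, 0, 1]]                # second row of E times 2
E_half = [[1, 0, 0], [0, Fraction(1, 2), 0], [0, 0, 1]]   # second row of E times 1/2

assert matmul(E_2x2, E_half) == E   # the two matrices are inverses of each other
```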

Now suppose we add 2 times row 3 of E to row 2:

$E(ij(2))=\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 2\\ 0 & 0 & 1 \end{bmatrix} \quad (i = 2,\ j = 3)$

Then

$E(ij(2))^{-1}=E(ij(-2))=\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & -2\\ 0 & 0 & 1 \end{bmatrix}$

Indeed,


$\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & -2\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 2\\ 0 & 0 & 1 \end{bmatrix}= \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}=E$

Property 2

A square matrix A is invertible if and only if there exist finitely many elementary matrices $P_1, P_2, \dots, P_l$ such that $A = P_1 P_2 \cdots P_l$.

Proof:

First, sufficiency:

Suppose $A = P_1 P_2 \cdots P_l$.

Since elementary matrices are invertible, and the product of finitely many invertible matrices is still invertible,

So A is invertible

Next, necessity:

Suppose the square matrix A of order n is invertible

A can be transformed by a series of elementary operations into the standard form F,

and F can likewise be transformed back into A by a series of elementary operations, i.e. $F \sim A$.

So $A = P_1 \cdots P_s F P_{s+1} \cdots P_l$ for some elementary matrices $P_1, P_2, \dots, P_l$. Since A and all of the $P_k$ are invertible, F must be invertible. But $F=\begin{bmatrix} E_r & 0\\ 0 & 0 \end{bmatrix}_{n \times n}$ is not invertible when $r < n$, so we must have $r = n$, i.e. $F = E$. Therefore


$A = P_1 P_2 \cdots P_l$
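Property 2 can be illustrated numerically. Below, for one particular A, a sequence of elementary matrices worked out by hand reduces A to E; A is then the product of their inverses, which are themselves elementary matrices (the specific matrix and operations are my own choice):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A  = [[1, 2], [3, 4]]
E1 = [[1, 0], [-3, 1]]                  # r2 + (-3) r1
E2 = [[1, 0], [0, Fraction(-1, 2)]]     # r2 x (-1/2)
E3 = [[1, -2], [0, 1]]                  # r1 + (-2) r2
# these three row operations reduce A to the identity:
assert matmul(E3, matmul(E2, matmul(E1, A))) == [[1, 0], [0, 1]]

# hence A is the product of their inverses, themselves elementary matrices:
P1, P2, P3 = [[1, 0], [3, 1]], [[1, 0], [0, -2]], [[1, 2], [0, 1]]
assert matmul(P1, matmul(P2, P3)) == A
```

Each $P_k$ is the inverse of the corresponding $E_k$ by the inverse rules listed earlier ($E(ij(k))^{-1}=E(ij(-k))$, $E(i(k))^{-1}=E(i(\frac{1}{k}))$).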

Theorem 1

Let A and B be $m \times n$ matrices. Then:

  1. $A \stackrel{r}{\sim} B$ if and only if there exists an invertible matrix P of order m such that $PA = B$;
  2. $A \stackrel{c}{\sim} B$ if and only if there exists an invertible matrix Q of order n such that $AQ = B$;
  3. $A \sim B$ if and only if there exist an invertible matrix P of order m and an invertible matrix Q of order n such that $PAQ = B$.

Corollary

A square matrix A is invertible if and only if $A \stackrel{r}{\sim} E$.

Proof of sufficiency:

Since A can be transformed into E by elementary row operations,

there exist elementary matrices $P_1, \dots, P_l$ such that $P_l \cdots P_1 A = E$; writing $P = P_l \cdots P_1$, we have $PA = E$.

Since E is invertible and P is invertible, $A = P^{-1}E$ must be invertible.

Proof of necessity: by elementary transformations, A can always be brought to its standard form F, $A \sim F$:


F = [ E r 0 0 0 ] n n F=\begin{bmatrix} E_r & 0\\ 0 & 0 \end{bmatrix}_{n*n}

If $r < n$, then $|F| = 0$ and F is not invertible. But A is invertible, so F must be invertible, hence $|F| \neq 0$. Therefore $r = n$, $F = E$, and so $A \sim E$.

Supplement

Theorem 1 shows that if $A \stackrel{r}{\sim} B$ (r refers to row operations), i.e. A can be turned into B by a series of elementary row operations, then there must exist an invertible matrix P such that $PA = B$. How, then, do we find P? Note that E satisfies $PE = P$, so


$\begin{cases} PA=B\\ PE=P \end{cases}$

so


$P(A,E)=(B,P)$

It follows that the block matrix (A, E) can be transformed by elementary row operations into (B, P). A, E, and B are known, so P can be read off directly. For example, let


$A=\begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}$

$E=\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$

Then the matrix


$(A,E)=\begin{bmatrix} 1 & 2 & 1 & 0\\ 3 & 4 & 0 & 1 \end{bmatrix}$

When B = E, P is the inverse of A, i.e. $A^{-1}$.
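This $(A, E) \to (E, A^{-1})$ procedure can be sketched as follows (a simplified Gauss–Jordan pass assuming A is square and invertible; the function name is mine):

```python
from fractions import Fraction

def inverse_via_row_reduction(A):
    """Form the augmented matrix (A | E) and row-reduce the left block
    to E; the right block is then A^{-1}."""
    n = len(A)
    M = [[Fraction(x) for x in row] +
         [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[pivot] = M[pivot], M[c]             # swap in a usable pivot row
        M[c] = [x / M[c][c] for x in M[c]]          # scale the pivot to 1
        for i in range(n):
            if i != c:                              # clear the rest of column c
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

A_inv = inverse_via_row_reduction([[1, 2], [3, 4]])
assert A_inv == [[-2, 1], [Fraction(3, 2), Fraction(-1, 2)]]
```

The same row operations applied to any B in place of E would produce the P of Theorem 1 with $PA = B$.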

Conclusion

Notes:

  • Reference textbook: Linear Algebra, 5th edition, Department of Mathematics, Tongji University
  • The explanations combine the book’s concepts with some of my own understanding and thinking

This essay is just a study note, recording a journey from 0 to 1

I hope it helps you; if there are any mistakes, corrections from readers are welcome ~

I am haihong ଘ(੭, ᵕ)੭

If you think it’s ok, please give it a thumbs up

Thanks for your support ❤️