1. PCA face recognition process

In my studies, I have summarized the main process of PCA face recognition as shown in the figure below:


FIG. 1 Flow chart of PCA face recognition

As the flow chart above shows, the PCA method can be summarized into the following stages: training sample preparation, feature extraction, feature space construction, and projection calculation.


2. Principle of the PCA face recognition method

The Karhunen-Loève (K-L) transform, or Principal Component Analysis (PCA), is a widely used technique whose main function is to transform a signal into a set of uncorrelated representation coefficients. PCA forms the basis of the K-L transform and is mainly used for compact representation of data. In data mining applications it is mainly used to simplify high-dimensional data sets: it reduces the dimension of the feature space and achieves higher accuracy at lower storage cost and computational complexity.

The PCA method removes the correlation in the data and finds a space in which samples of different categories can be well separated, as shown below:


FIG. 2 PCA reduction and classification diagram

Figure 2 shows a set of discrete two-dimensional points: the stars represent one class and the small circles another, and both classes are described by the features X and Y. The projections of the two classes onto the X axis and the Y axis overlap, which shows that neither X nor Y alone is a discriminative feature. However, the projections of the two classes onto the Z axis are well separated, so the Z direction has good discriminating power. PCA is exactly such a tool: it produces an effective dimensionality reduction, and the same method can also be used in other image processing tasks, such as image compression, classification and feature selection.
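As a tiny illustration of this decorrelation idea, the following MATLAB sketch uses made-up 2-D data and my own variable names (none of this is from the original article): it centres a correlated point cloud, computes its covariance matrix, and rotates the data onto the principal axes, ordered by decreasing variance.

rng(0);                                               % make the random data reproducible
pts  = randn(200, 2) * chol([2 1.5; 1.5 2]);          % correlated 2-D point cloud
ptsC = pts - repmat(mean(pts, 1), size(pts, 1), 1);   % centre the data
C2   = (ptsC' * ptsC) / (size(ptsC, 1) - 1);          % 2-by-2 covariance matrix
[V2, D2] = eig(C2);                                   % principal directions and variances
[~, order] = sort(diag(D2), 'descend');               % order axes by decreasing variance
proj = ptsC * V2(:, order);                           % cov(proj) is (nearly) diagonal

In the rotated coordinates the components are uncorrelated, and most of the variance lies along the first axis, which is the property the face recognition method exploits below.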

Next, the theoretical principle of PCA will be introduced in detail:

After a simple description of the principal component analysis method, next, we will specifically analyze the principle of K-L transformation.

Suppose each image F(m, n) in the image set can be represented, by stacking its columns, as an (m·n)-dimensional column vector:

f_i = [ F_i(1,1), F_i(2,1), ..., F_i(m,1), F_i(1,2), ..., F_i(m,n) ]^T                 (1)

The stacking works as follows: starting from the first column of the image matrix and proceeding to the last column, the columns are connected end to end to form an (m·n)-dimensional column vector. After each face image has been represented as such a column vector, each column vector is transposed into a row vector, and the rows together constitute the face sample matrix of the image set:

F = [ f_1, f_2, ..., f_L ]^T                 (2)

According to Formula (2), the sample matrix has L rows, where each row of data represents a face sample image, and L represents the total number of training samples.
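For concreteness, here is a minimal MATLAB sketch of this stacking step. It assumes a cell array 'images' of m-by-n grayscale face images (a hypothetical input of mine); the 112-by-92 image size is taken from the script at the end of this article, everything else is my own naming.

m = 112; n = 92;                   % image size used in the script at the end of this article
L = numel(images);                 % total number of training images ('images' is assumed)
F = zeros(L, m*n);
for k = 1:L
    f = double(images{k});         % one m-by-n face image
    F(k, :) = f(:)';               % f(:) stacks the columns end to end, as in formula (1)
end
mf = mean(F, 1);                   % the average face m_f used in formula (3)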

The covariance matrix of the training sample is:

C_f = (1/L) Σ_{i=1}^{L} (f_i - m_f)(f_i - m_f)^T                 (3)

In the formula, m_f is the average vector of all training samples, that is, the average face. The matrix C_f in formula (3) is a real symmetric square matrix of order m·n, so there is an orthogonal eigenvector belonging to each eigenvalue, namely:

C_f u_i = λ_i u_i ,   i = 1, 2, ..., m·n                 (4)

The eigenvalues are arranged in descending order, and the corresponding eigenvectors form an orthogonal matrix, i.e. an orthogonal space of dimension m·n. As noted in the literature, the dimension of the matrix C_f is very large, so computing its eigenvalues and eigenvectors directly is expensive. Equation (3) therefore needs to be transformed to simplify the solution, which gives:

C_f = (1/L) X X^T   →   C_f' = (1/L) X^T X ,   where  X = [ f_1 - m_f, f_2 - m_f, ..., f_L - m_f ]                 (5)

The eigenvalues and eigenvectors of the much smaller L×L matrix in formula (5) are then computed (for example via singular value decomposition, SVD), and from them the eigenvectors of the original training sample covariance matrix are recovered, so that the final face projection space can be constructed:

u_i = X v_i ,   i = 1, 2, ..., p                 (6)

Here v_i is an eigenvector of the small covariance matrix in formula (5), and p is the number of eigenvectors retained. Reshaping each eigenvector u_i back into an m×n matrix yields an image, the so-called eigenface, as shown below:


FIG. 3 PCA feature face

So far, we have found the projection feature space required by PCA face recognition:

W_pca = [ u_1, u_2, ..., u_p ]                 (7)
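The following MATLAB sketch ties formulas (5)-(7) together, continuing from the sample matrix F and mean face mf built in the earlier snippet; the number of retained eigenfaces p is a free choice of mine, and all variable names are assumptions rather than the article's original code.

X  = (F - repmat(mf, L, 1))';                    % centred samples as columns, size (m*n)-by-L
Cs = (X' * X) / L;                               % the small L-by-L matrix of formula (5)
[V, D] = eig(Cs);                                % eigenvectors v_i and their eigenvalues
[~, idx] = sort(diag(D), 'descend');             % sort by decreasing eigenvalue
p  = 20;                                         % number of eigenfaces to keep (a free choice)
V  = V(:, idx(1:p));
U  = X * V;                                      % formula (6): u_i = X * v_i
U  = U ./ repmat(sqrt(sum(U.^2, 1)), m*n, 1);    % normalise each eigenface to unit length
Wpca = U;                                        % formula (7): [u_1, u_2, ..., u_p]
imshow(reshape(Wpca(:, 1), m, n), []);           % reshaping a column gives an eigenface as in Fig. 3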

Then the K-L transform formula can be used for the projection calculation, and the projection features of each sample in the space W_pca are obtained:

G = (F - M_F) [A]                 (8)

Here [A] is the space W_pca, and M_F is the matrix whose rows are all equal to the mean face m_f. In fact, formula (8) does not have to use (F - M_F); F can be used directly, i.e. the original face samples are projected onto the space W_pca. The projection coefficients are stored in the matrix G: each row of G holds the feature coefficients of one face sample (or each column does, depending on personal calculation conventions, i.e. whether or not the matrices are transposed).
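A minimal sketch of this projection step, reusing the F, mf and Wpca variables from the earlier snippets (row-per-sample layout, so no extra transpose is needed):

G = (F - repmat(mf, size(F, 1), 1)) * Wpca;      % formula (8): one row of coefficients per training face
% As noted above, projecting F directly (without subtracting the mean face) also works:
G_raw = F * Wpca;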

This completes the main steps of PCA face recognition; what remains is the identification process, which is easier to understand. The training samples are projected onto the space [A] to obtain their feature coefficients, and each test sample is projected onto [A] in the same way to obtain its feature coefficients. We then only need to measure the Euclidean distance between the feature coefficients of a test sample and those of every training sample, and assign the test sample to the class of the training sample with the smallest distance. For example, suppose training sample A1 belongs to class S1 (S1 contains many samples, and A1 is only one of them). If the distance between test sample B1 and A1 is the smallest, then B1 is classified as S1. After all test samples have been classified in this way, the PCA face recognition rate can be computed.
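A compact sketch of this recognition step, under the same assumptions as the previous snippets (F, mf, G and Wpca as built above); Ftest, trainLabels and testLabels are hypothetical inputs of mine, not variables from the original code.

Gtest = (Ftest - repmat(mf, size(Ftest, 1), 1)) * Wpca;       % project the test samples the same way
predicted = zeros(size(Gtest, 1), 1);
for t = 1:size(Gtest, 1)
    d = sum((G - repmat(Gtest(t, :), size(G, 1), 1)).^2, 2);  % squared Euclidean distances
    [~, nearest] = min(d);
    predicted(t) = trainLabels(nearest);                      % class of the closest training sample
end
% The recognition rate is then mean(predicted == testLabels)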

% Test data: 40 people, 10 photos per person. For each person, the first
% train_num photos form the training set and the remaining (10 - train_num)
% photos form the test set.
clear all; clc;
train_num = 5;

% Compute the eigenfaces and build the feature space
imdata = zeros(112*92, 40*train_num);
for i = 1:40
    for j = 1:train_num
        % Path pattern assumed from the recognition call below
        addr = strcat('\FaceRecognitionPCA\faces\s', num2str(i), '\', num2str(j), '.bmp');
        a = imread(addr);                        % read the image from the address
        b = a(1:112*92);                         % stack the image into a row vector
        imdata(:, train_num*(i-1)+j) = b';
    end
end
[neednum, average_face, immin, newVT] = newVT(imdata);

% Recognize a test image
OutputClass = Recognition('\FaceRecognitionPCA\faces\s29\6.bmp', neednum, average_face, immin, newVT)