I. Theory

1 PCA

1.1 Dimensionality reduction
Common methods include principal component analysis (PCA), factor analysis (FA), and independent component analysis (ICA).
Principal component analysis: find a vector such that the total (squared) distance from each sample to its projection on that vector is minimized, which is equivalent to maximizing the variance of the projections.
Factor analysis: model the observed variables as linear combinations of a small number of latent factors plus noise.
Independent component analysis: decompose the data into statistically independent (non-Gaussian) source components.

1.2 PCA
The purpose of PCA is dimensionality reduction. The reduction is obtained by maximizing an objective function: the variance of the data after projection.

For a detailed derivation of the principle, this blog is strongly recommended: blog.csdn.net/fendegao/ar…

(1) Suppose there are m n-dimensional samples {Z1, Z2, …, Zm}.

(2) The sample mean u is the per-dimension average of the samples, u = (Z1 + Z2 + … + Zm)/m.

(3) After centering (subtracting the mean), the data matrix is {X1, X2, …, Xm} = {Z1 − u, Z2 − u, …, Zm − u}.

(4) Let w be a vector with n elements (constrained to unit length); the projection of a sample Xi in the direction of w is their inner product Xiᵀw.

(5) The objective of PCA is to maximize the variance of these projections.
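In the notation above, with w constrained to unit length, the maximized quantity is the projected variance:

\[
\max_{w}\; J(w) \;=\; \frac{1}{m}\sum_{i=1}^{m}\bigl(X_i^{\top}w\bigr)^{2} \;=\; w^{\top}Sw,
\qquad \text{s.t. } w^{\top}w = 1,
\qquad S \;=\; \frac{1}{m}\sum_{i=1}^{m} X_i X_i^{\top},
\]

where S is the covariance matrix of the centered samples.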



The objective can be solved in matrix form; the solution method is as follows:

(1) Construct the Lagrangian and set its derivative to zero: the direction with the largest projected variance is the eigenvector of the covariance matrix corresponding to the largest eigenvalue.
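Written out, with λ as the Lagrange multiplier for the unit-length constraint:

\[
L(w,\lambda) \;=\; w^{\top}Sw \;-\; \lambda\,(w^{\top}w - 1),
\qquad
\frac{\partial L}{\partial w} \;=\; 2Sw - 2\lambda w \;=\; 0
\;\;\Rightarrow\;\;
Sw \;=\; \lambda w,
\]

so w must be an eigenvector of S, and the attained variance w^{\top}Sw = \lambda is the corresponding eigenvalue; taking the largest eigenvalue gives the first principal direction.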

The number of vectors w to retain as the K-L (Karhunen-Loève) transformation matrix is chosen according to the cumulative contribution rate of the eigenvalues, as in the sketch below. If four principal components are selected, each n-dimensional sample becomes, after the matrix transformation, a (1×n)×(n×4) = 1×4 vector, which achieves the dimensionality reduction.
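A minimal MATLAB sketch of this selection rule (S, X and the 90% threshold are illustrative; the PCA function in Section II implements the same idea):

[V,D] = eig(S);                        % S: covariance matrix of the centered data X (m-by-n)
[d,idx] = sort(diag(D),'descend');     % eigenvalues in descending order
k = find(cumsum(d)/sum(d) >= 0.9, 1);  % smallest k reaching a 90% cumulative contribution rate
W = V(:,idx(1:k));                     % K-L transformation matrix (n-by-k)
Y = X*W;                               % each n-dimensional sample becomes a k-dimensional vector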

(2) SVD (singular value decomposition): for dimensionality reduction only the right singular matrix is needed, i.e. the eigenvectors of AᵀA (with the samples stored as the rows of A); the covariance matrix of A never has to be formed explicitly, which is memory-friendly.
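A sketch of the SVD route, assuming the centered samples are the rows of X (m-by-n) and k is the chosen number of components:

[~,Sig,V] = svd(X,'econ');   % right singular vectors V are the eigenvectors of X'*X
Xk = X*V(:,1:k);             % k-dimensional representation; the n-by-n covariance is never formed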

1.3 Face recognition based on PCA
(1) Build a face library from a face sample database, collected for example by photographing real faces (banks, stations) and other data collection methods.
(2) Compute the eigenvalues and eigenvectors of the covariance matrix of the training face database.
(3) For a face to be recognized, project it onto the eigenvectors and decide which training sample's projection it is closest to (see the sketch below).
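A minimal sketch of step (3), with hypothetical names (trainfaces holds the training faces as rows, probeface the face to recognize, W the retained eigenvectors):

trainproj = trainfaces*W;                    % each row: one training face projected onto W
probeproj = probeface*W;                     % projection of the face to be recognized (1-by-k)
dists = sum((trainproj - probeproj).^2, 2);  % squared distances (implicit expansion, R2016b+)
[~,nearest] = min(dists);                    % index of the closest training face -> its identity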

Note: the covariance matrix measures the covariance between dimensions, so it is n×n. In practical applications this can be huge: for image dimensionality reduction (say each image has 200*10 = 2000 pixels and there are 100 images), each pixel is one dimension, so the original covariance matrix XX' is a (2000×100)·(100×2000) product, i.e. 2000×2000. If storing and computing this costs too much, the substitute matrix P = X'X, a (100×2000)·(2000×100) = 100×100 product, can be used instead:



The eigenvalues of P are eigenvalues of the original covariance matrix, and multiplying an eigenvector of P by the data matrix gives the corresponding eigenvector of the original covariance matrix.
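In symbols (X being the 2000×100 centered data matrix from the example above):

\[
P \;=\; X^{\top}X \in \mathbb{R}^{100\times100},
\qquad
Pv \;=\; \lambda v
\;\;\Rightarrow\;\;
\bigl(XX^{\top}\bigr)(Xv) \;=\; X\bigl(X^{\top}Xv\bigr) \;=\; \lambda\,(Xv),
\]

so every eigenvalue λ of P is also an eigenvalue of the 2000×2000 matrix XX^{\top}, with Xv (suitably normalized, e.g. by λ^{-1/2}) as the corresponding eigenvector. The same construction appears in the PCA function in Section II: base(:,i)=d(index(i))^(-1/2)*gensample'*vector(:,i).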

2 LDA

2.1 Basic idea
LDA (linear discriminant analysis), also known as Fisher linear discriminant, is a commonly used dimensionality-reduction technique. The basic idea is to project the high-dimensional pattern samples into an optimal discriminant vector space, extracting classification information while compressing the dimension of the feature space. After projection, the pattern samples have the maximum between-class distance and the minimum within-class distance in the new subspace, i.e. the patterns have the best separability in that space.

The dimension LDA can reduce to is determined directly by the number of classes and has nothing to do with the dimension of the data itself: for n-dimensional data with C classes, the reduced dimension lies in the range [1, C-1]. For example, in a two-class image classification problem where each image is described by a 10000-dimensional feature vector, LDA leaves only a single 1-dimensional feature, and that dimension has the best class-separating ability. For many two-class cases only one dimension is left after LDA, and simply finding the threshold that works best does the trick, as in the sketch below.

Concretely, let x be an n-dimensional column vector. To reduce x to c dimensions with LDA, all we have to do is find a projection matrix W of size n×c; multiplying x by the transpose of W then gives a c-dimensional vector. (Mathematically, a projection is just multiplication by a matrix.) The key, then, is finding the projection matrix, and furthermore that matrix must guarantee the maximum between-class distance and the minimum within-class distance for the pattern samples in the new subspace.
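A minimal two-class sketch (hypothetical names: X1 and X2 hold the samples of the two classes as rows, Xnew the samples to classify):

m1 = mean(X1,1)'; m2 = mean(X2,1)';                  % class mean vectors (columns)
Sw = cov(X1,1)*size(X1,1) + cov(X2,1)*size(X2,1);    % within-class scatter matrix
w  = Sw\(m1 - m2);                                   % Fisher direction (defined up to scale)
y  = Xnew*w;                                         % 1-D projections; classify by thresholding y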

2.2 Mathematical representation of LDA:
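Under the usual notation (C classes; class c contains N_c samples with mean \mu_c; \mu is the overall mean), the Fisher criterion can be stated as:

\[
S_w \;=\; \sum_{c=1}^{C}\;\sum_{x_i \in c}(x_i-\mu_c)(x_i-\mu_c)^{\top},
\qquad
S_b \;=\; \sum_{c=1}^{C} N_c\,(\mu_c-\mu)(\mu_c-\mu)^{\top},
\]
\[
W^{*} \;=\; \arg\max_{W}\;\frac{\bigl|W^{\top}S_bW\bigr|}{\bigl|W^{\top}S_wW\bigr|},
\]

whose columns are the leading generalized eigenvectors of S_b w = \lambda S_w w (equivalently, eigenvectors of S_w^{-1}S_b). Since S_b has rank at most C-1, at most C-1 eigenvalues are nonzero, which is why LDA reduces the data to at most C-1 dimensions; the computswb and projectto helpers in Section II presumably compute these scatter matrices and the corresponding projection.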

II. Source code

clear all; clc; close all;
start=clock;
sample_class=1:40;                      % sample classes (the ORL database has 40 subjects, 10 images each)
sample_classnum=size(sample_class,2);   % number of sample classes
fprintf(' Program runs.................... \n\n');
for train_samplesize=3:8
    train=1:train_samplesize;           % image indices used for training in each class
    test=train_samplesize+1:10;         % image indices used for testing in each class
    train_num=size(train,2);            % number of training images per class
    test_num=size(test,2);              % number of test images per class
    address=[pwd '\ORL\s'];             % path prefix of the ORL face database
    allsamples=readsample(address,sample_class,train);   % read the training samples
    % PCA is applied first for dimensionality reduction (keep 90% of the energy)
    [newsample base]=PCA(allsamples,0.9);
    % compute the within-class and between-class scatter matrices Sw, Sb
    [sw sb]=computswb(newsample,sample_classnum,train_num);
    testsample=readsample(address,sample_class,test);    % read the test samples
    best_acc=0;                         % best recognition accuracy found so far
    for temp_dimension=1:length(sw)
        vsort1=projectto(sw,sb,temp_dimension);   % Fisher projection vectors
        % project the training samples and the test samples
        tstsample=testsample*base*vsort1;
        trainsample=newsample*vsort1;
        % compute the recognition accuracy
        accuracy=computaccu(tstsample,test_num,trainsample,train_num);
        if accuracy>best_acc
            best_dimension=temp_dimension;   % save the best projection dimension
            best_acc=accuracy;
        end
    end
    % ---------------------------- output ----------------------------
    fprintf(' training samples per class: %d\n',train_samplesize);
    fprintf(' best projection dimension: %d\n',best_dimension);
    fprintf(' FisherFace recognition rate: %.2f%%\n',best_acc*100);
    fprintf(' program run time: %3.2fs\n\n',etime(clock,start));
end

function [newsample basevector]=PCA(patterns,num)
% When 0 < num <= 1, num is the fraction of the total energy (eigenvalue sum) to keep;
% when num > 1, num is the number of eigenvectors to keep.
% Output: basevector holds the eigenvectors corresponding to the largest eigenvalues,
% newsample is the representation of the samples after mapping onto basevector.
[u v]=size(patterns);
totalsamplemean=mean(patterns);
for i=1:u
    gensample(i,:)=patterns(i,:)-totalsamplemean;    % center the samples
end
sigma=gensample*gensample';      % small m-by-m substitute matrix instead of the full covariance
[U V]=eig(sigma);
d=diag(V);
[d1 index]=dsort(d);             % eigenvalues sorted in descending order
if num>1
    for i=1:num
        vector(:,i)=U(:,index(i));
        base(:,i)=d(index(i))^(-1/2)*gensample'*vector(:,i);
    end
else
    sumv=sum(d1);
    for i=1:u                    % smallest l whose cumulative energy reaches num
        if sum(d1(1:i))/sumv>=num
            l=i;
            break;
        end
    end
    for i=1:l
        vector(:,i)=U(:,index(i));
        base(:,i)=d(index(i))^(-1/2)*gensample'*vector(:,i);
    end
end
basevector=base;                 % return the projection basis and the projected samples
newsample=patterns*basevector;   % (these two assignments are assumed; the original listing omitted them)

function sample=readsample(address,classnum,num)
% address is the path prefix of the samples to read, classnum lists the classes to read,
% num lists the image indices to read within each class; the output is the sample matrix
% with one flattened image per row.
allsamples=[];
image=imread([pwd '\ORL\s1_1.bmp']);   % read the first image to get the image size
[rows cols]=size(image);
for i=classnum
    for j=num
        a=imread(strcat(address,num2str(i),'_',num2str(j),'.bmp'));
        b=a(1:rows*cols);              % flatten the image into a row vector
        b=double(b);
        allsamples=[allsamples;b];
    end
end
sample=allsamples;                     % (assumed; the original listing omitted this assignment)

III. Operation results