SPP-master

Category: Software Design / Software Engineering
Development tool: MATLAB
File size: 14KB
Downloads: 52
Upload date: 2016-10-18 14:57:17
Uploader: 白玉龙
Description: Sparsity Preserving Projection, a dimensionality reduction algorithm for classification and regression on high-dimensional data.

File list (size in bytes, date):
Eigenface_f.m (1464, 2014-10-09)
Find_K_Max_Gen_Eigen.m (424, 2014-10-09)
LICENSE (18027, 2014-10-09)
eigen_reconstruction.m (1497, 2014-10-09)
l1eq_pd.m (5907, 2014-10-09)
orl_SPP_SRC.m (4135, 2014-10-09)

SPP
===

Sparsity Preserving Projection, a feature extraction algorithm in the pattern recognition area.

Author: Denglong Pan <pandenglong@gmail.com>

What is SPP: refer to https://github.com/lamplampan/SPP/wiki#what-is-spp

# What is SPP

SPP (Sparsity Preserving Projection) is an unsupervised dimensionality reduction algorithm. It uses minimum-L1-norm reconstruction to preserve the sparse reconstructive relations among the data. The SPP projections are unaffected by rotation, rescaling, or offset of the data, and they can separate the classes intrinsically even when no class labels are given.

### Sparse reconstruction weight matrix

Let $X = [x_1, x_2, \ldots, x_n]$ be the training sample matrix. Each sample $x_i$ is reconstructed from the remaining samples with a sparse weight vector $s_i$, obtained by solving the minimum-L1-norm problem. Define equation set [1] below:

$$\min_{s_i} \|s_i\|_1 \quad \text{s.t.} \quad x_i = X s_i, \;\; 1 = \mathbf{1}^\top s_i \tag{1}$$

where the $i$-th entry of $s_i$ is constrained to zero, so that $x_i$ is excluded from its own reconstruction. Collecting the optimal solutions $\tilde{s}_i$ of equation set [1] gives the sparse reconstruction weight matrix

$$S = [\tilde{s}_1, \tilde{s}_2, \ldots, \tilde{s}_n]^\top.$$

The weight vector $\tilde{s}_i$ is sparse because the face recognition training set contains many classes: a sample is reconstructed mainly from samples of its own class, so most coefficients are (close to) zero. Taking the reconstruction residual into consideration, equation set [1] can be relaxed to equation set [2], in which $\varepsilon$ bounds the residual:

$$\min_{s_i} \|s_i\|_1 \quad \text{s.t.} \quad \|x_i - X s_i\| < \varepsilon, \;\; 1 = \mathbf{1}^\top s_i \tag{2}$$

### Eigenvector extraction

To find projections $w$ that preserve the optimal weight vectors, define the objective function [3]:

$$\min_w \sum_{i=1}^{n} \left\| w^\top x_i - w^\top X \tilde{s}_i \right\|^2 \tag{3}$$

Through algebraic transformation, [3] becomes the equivalent maximization problem

$$\max_w \frac{w^\top X S_\beta X^\top w}{w^\top X X^\top w}, \qquad S_\beta = S + S^\top - S^\top S,$$

whose solutions are the eigenvectors of the generalized eigenproblem $X S_\beta X^\top w = \lambda X X^\top w$ associated with the $d$ largest eigenvalues.

### SPP algorithm

**Step 1** Use equation set [1] or equation set [2] to calculate the weight matrix S. It can be computed with standard linear programming tools such as l1-magic.

**Step 2** Solve the generalized eigenproblem from objective function [3] and keep the eigenvectors belonging to the d largest eigenvalues; their span is the projection subspace.

### Test result on the ORL face lib

PCA + SPP + SRC is used for the test.

**Why use PCA here**

We use PCA here to reduce the dimensions first.
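Step 1 of the algorithm above can be illustrated with a short sketch. The repo itself does this in MATLAB (the file `l1eq_pd.m` appears to be the basis-pursuit solver from the l1-magic package); what follows is a minimal NumPy/SciPy translation, assuming equation set [1] with the sum-to-one constraint and casting the L1 minimization as a linear program via the standard split $s = u - v$, $u, v \ge 0$. The function name `spp_weights` is my own, not from the repo.

```python
import numpy as np
from scipy.optimize import linprog

def spp_weights(X):
    """Sparse reconstruction weight matrix, one column s_i per sample.

    Each s_i solves  min ||s||_1  s.t.  x_i = X s,  1 = 1's,  s_i = 0
    (equation set [1]), written as an LP over s = u - v with u, v >= 0.
    X is d x n with d < n, i.e. the system must be underdetermined.
    """
    d, n = X.shape
    S = np.zeros((n, n))
    # Objective: sum(u) + sum(v) = ||s||_1
    c = np.ones(2 * n)
    for i in range(n):
        # Equality constraints: X(u - v) = x_i  and  1'(u - v) = 1
        A_eq = np.vstack([np.hstack([X, -X]),
                          np.hstack([np.ones(n), -np.ones(n)])])
        b_eq = np.concatenate([X[:, i], [1.0]])
        # Pin u_i = v_i = 0 so x_i is excluded from its own reconstruction
        bounds = [(0, None)] * (2 * n)
        bounds[i] = (0, 0)
        bounds[n + i] = (0, 0)
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        if not res.success:
            raise RuntimeError(f"LP failed for sample {i}: {res.message}")
        S[:, i] = res.x[:n] - res.x[n:]
    return S
```

Note that this stores the weight vectors as columns, whereas the text defines $S$ with rows $\tilde{s}_i^\top$; transpose as needed.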
Each face sample has 92 * 112 = 10304 dimensions. The ORL lib contains 40 classes of faces, with 10 samples per class. If we use 5 samples from each class for training, the constructed training matrix is 10304 x 200 (40 * 5 columns). Two problems arise with so many dimensions:

1. MATLAB reports "OUT OF MEMORY" for a matrix this large.
2. The number of rows exceeds the number of columns, so the reconstruction system is overdetermined; the l1-magic solver requires an underdetermined system and cannot be applied.

**Test results**

5 samples from each of the 40 classes are used for training and the remaining samples for testing. The residual bound is set to 0.0001 and 80 projection vectors are extracted. The recognition rate is 93% with the PCA dimension set to 80.

How to run the algorithm: refer to https://github.com/lamplampan/SPP/wiki#how-to-run

# How to run

**Step 1**: Configure your ORL face lib path in the file orl_src.m. The default path is E:\ORL_face\orlnumtotal\ .

**Step 2**: Run orl_src.m
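The PCA pre-reduction and the eigenvector extraction of Step 2 can likewise be sketched in a few lines. The repo performs these in MATLAB (presumably in `Eigenface_f.m` and `Find_K_Max_Gen_Eigen.m`); the NumPy/SciPy version below is an illustration under that reading, with function names of my own choosing.

```python
import numpy as np
from scipy.linalg import eigh

def pca_reduce(X, k):
    """Project the d x n data matrix onto its top-k principal components."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k].T @ Xc

def spp_projections(X, S, d):
    """Top-d SPP projection vectors (Step 2).

    Solves the generalized eigenproblem  X S_b X' w = lam X X' w
    with S_b = S + S' - S'S, keeping the d largest eigenvalues.
    X must have full row rank, which is why PCA is applied first.
    """
    Sb = S + S.T - S.T @ S
    A = X @ Sb @ X.T
    A = (A + A.T) / 2          # symmetrise against round-off
    B = X @ X.T                # positive definite for full-row-rank X
    vals, vecs = eigh(A, B)    # eigenvalues in ascending order
    return vecs[:, -d:][:, ::-1]   # columns = top-d projection vectors
```

With the ORL setup described above, one would call `pca_reduce(X, 80)` first, compute the weight matrix on the reduced data, then take 80 projection vectors with `spp_projections`.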
