Image Feature Extraction Methods and Their Application in Face Recognition
Abstract
Feature extraction is one of the most fundamental problems in pattern recognition. Whether in face recognition or character recognition, extracting effective discriminant features is the key to solving the problem. This thesis studies the theory and algorithms of several linear and nonlinear feature extraction methods, and the newly proposed feature extraction algorithms are applied successfully to face recognition.
     First, based on the theory of non-negative matrix factorization, this thesis proposes a method for computing orthogonal projection axes and a method for computing statistically uncorrelated projection axes. The purpose of this treatment is to reduce the statistical correlation between the projection axes in the low-dimensional space and thereby improve the recognition rate. Experimental results show that the two proposed feature extraction methods are, on the whole, better than the original non-negative matrix factorization (NMF) method in terms of recognition rate. Because NMF does not make full use of the class labels of the training samples during feature extraction, this thesis further proposes a new supervised non-negative matrix factorization method with two characteristics: it makes full and direct use of the class information of the training samples, and it retains the same mathematical formulation as NMF. This new feature extraction method is called class-information-incorporated non-negative matrix factorization (CINMF).
     Second, for the nonlinear feature extraction problem, and based on kernel techniques, this thesis presents a supervised version of KPCA, namely class-information-incorporated kernel principal component analysis (CIKPCA). Because kernel principal component analysis (KPCA) is an unsupervised learning method, it cannot make full use of the class information of the training samples during feature extraction; CIKPCA overcomes this weakness. At the classification stage, a strategy based on the fusion of two kinds of features is used to further improve the recognition rate of CIKPCA. Experimental results show that the new method, on the whole, outperforms the widely used KPCA in terms of recognition rate, and on some face databases CIKPCA even outperforms KLDA. In addition, based on the (kernel) maximum margin criterion, this thesis proposes algorithms for computing a set of statistically uncorrelated optimal (kernel) discriminant vectors. The aim of the new methods is to eliminate the statistical correlation between the optimal (kernel) discriminant vectors in the feature space and to improve the effectiveness of feature extraction.
     Finally, based on manifold learning theory, this thesis proposes a new unsupervised discriminant projection method. The new method builds a mapping from the local and nonlocal statistics of the samples; its discriminant criterion is characterized by maximizing the difference between the nonlocal scatter and the local scatter of the feature vectors, so that after projection the nonlocal scatter is maximized while the local scatter is minimized. The method is called Marginal Discriminant Projection (MDP). Experiments on the ORL and AR face databases compare the recognition rates of MDP, LDA, locality preserving projection (LPP) and unsupervised discriminant projection (UDP). In addition, also based on manifold learning theory, this thesis proposes a new dimensionality reduction method for image matrices. The method constructs the nonlocal scatter matrix and the local scatter matrix directly from the sample image matrices, and introduces an adjacency matrix to characterize the local geometric structure of the high-dimensional data. Its criterion function is characterized by maximizing the ratio of the nonlocal scatter to the local scatter of the projected samples. The new method compresses the image matrix in both the row and column directions simultaneously to obtain a feature matrix, and is called two-directional two-dimensional unsupervised discriminant projection ((2D)^2UDP). Experiments on the ORL and AR face databases show that, in terms of recognition rate, the proposed method is on the whole better than the vector-based PCA, LPP, UDP and (2D)^2PCA.
Feature extraction is one of the most fundamental problems in pattern recognition and is the key to solving problems such as face identification and handwritten character recognition. In this thesis we focus on linear and nonlinear feature extraction, develop several new algorithms, and verify their effectiveness in image recognition applications.
     Firstly, based on non-negative matrix factorization (NMF), a new algorithm for computing orthogonal projection axes and a new algorithm for computing statistically uncorrelated projection axes are proposed for feature extraction. The aim of the proposed methods is to reduce or eliminate the statistical correlation between features and to improve the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) and Yale face databases show that the new methods outperform the original NMF in terms of recognition rate.
     NMF is an unsupervised feature extraction method, which means that it does not make sufficient use of the class information of the training samples during feature extraction. A novel supervised feature extraction method based on NMF is therefore presented. The new method has two traits: it fully utilizes the class labels of the training samples during feature extraction, and it keeps the same mathematical formulation as NMF; accordingly, it is named class-information-incorporated non-negative matrix factorization (CINMF). In addition, to further improve the recognition rate, a new classification strategy based on the fusion of two kinds of feature vectors is presented.
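     As a rough illustration of the factorization that all of these NMF-based methods build on, the following Python sketch implements the standard Lee-Seung multiplicative update rules for V ≈ WH under the Frobenius-norm objective. The function name, the random initialization and the fixed iteration count are illustrative assumptions, and the sketch contains none of the orthogonality, uncorrelatedness or class-information constraints proposed in this thesis.

import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    # V: (m, n) non-negative data matrix, each column a vectorized face image.
    # r: number of basis images, i.e. the dimensionality of the extracted features.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps      # basis images
    H = rng.random((r, n)) + eps      # encoding coefficients (features)
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates; eps guards against division by zero
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

Each column of W acts as a basis image and the corresponding column of H as the low-dimensional feature vector of a training face; CINMF keeps this factorization form while bringing the class labels of the training samples into the computation.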
     Secondly, for nonlinear feature extraction, a novel supervised method based on kernel principal component analysis (KPCA) is presented, named class-information-incorporated kernel principal component analysis (CIKPCA). Conventional KPCA is an unsupervised nonlinear feature extraction method and therefore cannot make full use of the class label information of the training samples; CIKPCA overcomes this drawback. A classification strategy based on the fusion of two kinds of feature vectors is again used to further improve the recognition rate. Experimental results show that the new method outperforms KPCA in terms of recognition rate and, on some face databases, even outperforms KLDA. Furthermore, based on the (kernel) maximum margin criterion, new algorithms for computing statistically uncorrelated optimal (kernel) discriminant vectors are presented. The proposed methods eliminate the statistical correlation between features and thereby improve the effectiveness of feature extraction.
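     For reference, the sketch below shows plain, unsupervised KPCA: build a Gram matrix, centre it in the implicit feature space, and keep the leading eigenvectors. The RBF kernel and its gamma value are assumptions made only for illustration; the class-information incorporation of CIKPCA and the kernel maximum-margin computations discussed above are not reproduced here.

import numpy as np

def kpca_features(X, n_components=10, gamma=1e-3):
    # X: (n, d) training matrix, rows are vectorized images.
    n = X.shape[0]
    # RBF Gram matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Centre the kernel matrix in feature space
    One = np.ones((n, n)) / n
    Kc = K - One @ K - K @ One + One @ K @ One
    # Leading eigenvectors, scaled so the corresponding feature-space axes have unit norm
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    alphas = v[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    # Nonlinear principal components of the training samples
    return Kc @ alphas

A test image is projected by evaluating its centred kernel values against the training set and multiplying by the same coefficient matrix alphas.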
     Lastly, based on manifold learning, a new unsupervised discriminant projection for dimensionality reduction of high-dimensional data is presented. The projection can be seen as a linear approximation of a multi-manifold learning framework built on both the local and nonlocal statistics of the samples. The discriminant criterion is characterized by the difference between the nonlocal scatter and the local scatter, seeking a set of projection axes that simultaneously maximizes the nonlocal scatter and minimizes the local scatter of the feature vectors, whereas locality preserving projection (LPP) considers only the local scatter. Experimental results on the Olivetti Research Laboratory (ORL) and AR face databases show that the proposed method consistently outperforms LPP and UDP, and even outperforms Fisher linear discriminant analysis (LDA).
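     The following sketch makes the difference-of-scatters criterion concrete: a k-nearest-neighbour graph defines which sample pairs count as local, the local and nonlocal scatter matrices are formed from the pairwise differences, and the projection axes are the leading eigenvectors of their difference. The neighbourhood size and the unweighted adjacency are illustrative assumptions; this is the generic UDP/MDP-style construction rather than the thesis's exact algorithm.

import numpy as np

def difference_scatter_projection(X, k=5, n_components=10):
    # X: (n, d) data matrix, rows are samples; k: neighbourhood size.
    n, d = X.shape
    # Symmetric k-nearest-neighbour adjacency matrix H (no self-loops)
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    H = np.zeros((n, n))
    for i in range(n):
        H[i, np.argsort(D2[i])[1:k + 1]] = 1.0
    H = np.maximum(H, H.T)
    # Local scatter S_L = (1/2n^2) * sum_ij H_ij (x_i - x_j)(x_i - x_j)^T, via the graph Laplacian
    L = np.diag(H.sum(axis=1)) - H
    S_L = X.T @ L @ X / (n * n)
    # Total pairwise scatter uses the all-pairs graph; the nonlocal scatter is the remainder
    A = np.ones((n, n)) - np.eye(n)
    L_T = np.diag(A.sum(axis=1)) - A
    S_N = X.T @ L_T @ X / (n * n) - S_L
    # Projection axes: leading eigenvectors of S_N - S_L
    w, v = np.linalg.eigh(S_N - S_L)
    return v[:, np.argsort(w)[::-1][:n_components]]

New samples are mapped to the low-dimensional space by multiplying with the returned matrix of projection axes, and classification then proceeds in that space, e.g. with a nearest-neighbour rule.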
     To avoid the complications of a singular local scatter matrix, a further feature extraction method based on manifold learning is presented; its distinguishing trait is that it uses image matrices directly to construct the local and nonlocal scatter matrices. Its criterion function is characterized by maximizing the ratio of the nonlocal scatter to the local scatter of the projected samples. An advantage of this approach is that dimensionality reduction is achieved in both the row and column directions, so the method is called two-directional two-dimensional unsupervised discriminant projection ((2D)^2UDP). Experimental results on the ORL and AR databases indicate that the new method achieves a higher recognition rate than PCA, LPP, UDP and (2D)^2PCA.
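     To illustrate two-directional compression of an image matrix, the sketch below follows a (2D)^2PCA-style scheme: one eigenproblem in the row direction, one in the column direction, and a feature matrix obtained as U^T A V. It is offered only as a structural analogue under assumed shapes and parameter names; (2D)^2UDP as described above would replace the two covariance matrices with image-based nonlocal and local scatter matrices.

import numpy as np

def two_directional_projection(images, p, q):
    # images: array of shape (n, h, w); p, q: reduced row and column dimensions.
    A = np.asarray(images, dtype=float)
    Ac = A - A.mean(axis=0)
    # Column-direction covariance (w x w) gives the right projection V
    G_col = np.einsum('nhw,nhv->wv', Ac, Ac) / len(A)
    # Row-direction covariance (h x h) gives the left projection U
    G_row = np.einsum('nhw,nvw->hv', Ac, Ac) / len(A)
    U = np.linalg.eigh(G_row)[1][:, ::-1][:, :p]
    V = np.linalg.eigh(G_col)[1][:, ::-1][:, :q]
    # Each image A_i is compressed to the p x q feature matrix U^T A_i V
    features = np.einsum('hp,nhw,wq->npq', U, A, V)
    return U, V, features

Because the eigenproblems are of size h x h and w x w rather than (h*w) x (h*w), the singularity problems that affect vector-based scatter matrices when training samples are few are avoided, which is the motivation stated in the paragraph above.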