Research on Facial Expression Recognition Algorithms Based on Spectral Graph Theory
Abstract
With the rapid development of information and computer technology, facial expression recognition has attracted increasing attention. Facial expression recognition is an important foundation of intelligent human-computer interaction; the topic involves image processing, motion tracking, pattern recognition, physiology, psychology, and other research fields, and it is currently a research hotspot in pattern recognition and artificial intelligence in China and abroad. This thesis studies several problems in facial expression feature extraction: based on spectral graph theory, it analyzes the intrinsic characteristics of expression images and proposes features that effectively represent facial expressions for classification. The main contributions are as follows:
     First, to uncover the intrinsic structure of facial expression image samples, a supervised spectral analysis (SSA) method is used to extract expression features. The expression image samples are represented in graph form, and spectral graph analysis is applied to the structure of these graphs. Compared with traditional spectral clustering and other dimensionality reduction methods, supervised spectral analysis has three advantages: (1) it avoids the small-sample-size problem, applying the matrix transformation directly to the expression sample vectors without requiring another dimensionality reduction method for preprocessing; (2) it uses the class labels of the samples, treats the sample points and their relationships as a connected graph, and the mapped structure preserves the properties of the original graph well; (3) it reflects the latent nonlinear characteristics of the data. Experimental results show that it extracts facial expression features effectively and improves recognition accuracy.
     Second, to strengthen the discriminative power of spectral analysis, a discriminant spectral analysis (DSA) method is proposed. Spectral analysis mainly preserves the nonlinear local structure of the data, that is, the neighborhood relationships among samples of the same class, while ignoring the relationships between different expression classes, which harms expression classification. To address this, discriminant information is introduced into the spectral analysis algorithm so that both the nonlinear local structure and the nonlinear external structure of the data set are considered; neighborhood relationships between expression classes are preserved in addition to those between sample points, yielding facial expression features with stronger discriminative power.
     Third, to address the high dimensionality of the data matrix and the heavy computation of vector-based dimensionality reduction, a two-dimensional fuzzy discriminant locality preserving projections (2D-FDLPP) algorithm based on image matrices is proposed. Fuzziness and discriminant information are introduced into supervised locality preserving projections, and the method is extended to two-dimensional image matrices. Matrix-based two-dimensional dimensionality reduction does not need to convert an image into a one-dimensional vector: it extracts features directly from the image matrix, avoids problems such as matrix singularity, and retains more image information in the extracted features. Building on the two-dimensional locality preserving projections algorithm, a fuzzy method is used to compute class membership degrees for the samples and to build a fuzzy weight matrix, which separates the similar features of similar expression classes. In addition, a weighted between-class scatter term that represents the neighborhood relationships between expression classes is introduced into the objective function, so that both the locality of neighboring sample points and the locality of expression classes are preserved, yielding highly discriminative expression features.
     Fourth, a graph-preserving sparse non-negative matrix factorization (GSNMF) method is proposed and applied to facial expression feature extraction. The factor matrices produced by commonly used matrix-factorization-based dimensionality reduction methods often contain negative values, which are meaningless in expression image analysis. Therefore, following the idea of non-negative matrix factorization, a non-negativity constraint is imposed on the factorization. At the same time, based on spectral graph theory, graph-preserving and sparseness constraints are introduced into non-negative matrix factorization, yielding basis images that represent the parts of the face; their linear combination then represents the whole expression image. In addition, a projected gradient framework is proposed for solving non-negative matrix factorization under these constraints: to guarantee the stationarity of the local minimum, the projected gradient method is used to compute the factor matrices, ensuring that the result satisfies the optimality conditions. Extensive experiments demonstrate the effectiveness of the method for expression recognition and its robustness to partially occluded expression images.
Facial expression recognition has become increasingly important with the rapid development of information and computer technology. Facial expression recognition is one of the most important foundations of intelligent human-computer interaction, and the subject involves many research fields, including image processing, motion tracking, pattern recognition, physiology, and psychology. It is a research hotspot in pattern recognition and artificial intelligence. In this thesis, we focus on several issues in facial expression feature extraction. Based on spectral graph theory, we analyze the intrinsic characteristics of facial expression images in order to extract efficient facial expression representations for classification. The main contributions are listed as follows:
     First, in order to discover the intrinsic structure of facial expression images, we utilize a supervised spectral analysis (SSA) algorithm to extract facial expression features. Compared with traditional spectral clustering and dimensionality reduction algorithms, SSA benefits from the following three aspects: (1) SSA does not suffer from the small-sample-size problem; it applies the matrix transformation directly to the data matrix and does not need any other dimensionality reduction method for preprocessing. (2) SSA utilizes the class label information of the samples and constructs a graph from the data points and their relationships, and the projected data points preserve the graph structure. (3) SSA can effectively discover the nonlinear structure hidden in the data. Experimental results show that SSA extracts facial expression features efficiently and improves facial expression recognition accuracy.
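     As a rough illustration of the kind of spectral embedding described above, the Python sketch below builds a class-aware neighborhood graph over vectorized expression images and embeds them with the generalized eigenvectors of the graph Laplacian. It is a minimal Laplacian-eigenmaps-style sketch, not the thesis's exact SSA formulation; the neighbor count k, the kernel width sigma, and the stand-in data in the usage comment are illustrative assumptions.

        # Minimal sketch of a supervised spectral embedding (Laplacian-eigenmaps style).
        # Assumption: X is an (n_samples, n_features) array of vectorized expression
        # images and y holds integer class labels; this is not the exact SSA of the thesis.
        import numpy as np
        from scipy.linalg import eigh

        def supervised_spectral_embedding(X, y, n_components=2, k=5, sigma=1.0):
            n = X.shape[0]
            d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
            W = np.zeros((n, n))
            for i in range(n):
                for j in np.argsort(d2[i])[1:k + 1]:        # k nearest neighbors of sample i
                    if y[i] == y[j]:                        # supervised step: keep only same-class neighbors
                        W[i, j] = W[j, i] = np.exp(-d2[i, j] / (2 * sigma ** 2))
            D = np.diag(W.sum(axis=1))
            L = D - W                                       # unnormalized graph Laplacian
            # The smallest nontrivial solutions of L v = lambda D v give the embedding.
            vals, vecs = eigh(L, D + 1e-9 * np.eye(n))
            return vecs[:, 1:n_components + 1]

        # Usage with random stand-in data (6 classes, 10 samples each):
        # X = np.random.rand(60, 4096); y = np.repeat(np.arange(6), 10)
        # Y = supervised_spectral_embedding(X, y, n_components=10)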
     Second, in order to enhance the discriminant power of spectral analysis, a discriminant spectral analysis (DSA) algorithm is proposed. Spectral analysis mainly preserves the nonlinear intra-locality structure, that is, the local neighborhood relationships between data points, but ignores the relationships between facial expression classes. To enhance the discriminant power, we introduce discriminant information into the supervised spectral analysis algorithm. By taking both the nonlinear intra-locality and the nonlinear inter-locality structure of the original data into account, we obtain a discriminant subspace that preserves both the neighborhood relationships of data points and the neighborhood relationships of facial expression classes.
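     One generic way to write such a discriminant spectral objective (a common trace-ratio form, not necessarily the exact DSA criterion of the thesis) uses two graph Laplacians, an intra-locality Laplacian L_w built from same-class neighbors and an inter-locality Laplacian L_b built from neighboring classes:

        \max_{Y} \frac{\operatorname{tr}\left(Y^{\top} L_b Y\right)}{\operatorname{tr}\left(Y^{\top} L_w Y\right)}

so that the embedding Y keeps same-class neighbors close (small tr(Y^T L_w Y)) while pushing neighboring expression classes apart (large tr(Y^T L_b Y)).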
     Third, vector-based dimensionality reduction methods suffer from the high dimensionality of the data matrix and high computational complexity. To overcome these problems, two-dimensional fuzzy discriminant locality preserving projections (2D-FDLPP) is proposed. Fuzzy assignment and discriminant information are introduced into supervised locality preserving projections, and the method operates on two-dimensional image matrices. A matrix-based dimensionality reduction method extracts facial expression features directly from image matrices and does not need to convert a two-dimensional image into a vector. Moreover, it does not suffer from the matrix-singularity problem, and the extracted features contain more image information. Based on two-dimensional locality preserving projections, we use a fuzzy k-nearest-neighbor scheme to calculate the membership degrees and construct a fuzzy weight matrix. Furthermore, the weighted between-class scatter, which encodes the local neighborhood structure of the facial expression classes, is introduced into the objective function. By preserving the local neighborhoods of both data points and facial expression classes, we obtain more discriminative facial expression features.
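     To make the matrix-based idea concrete, the sketch below learns a right-projection W directly from image matrices, in the spirit of two-dimensional locality preserving projections. It is a simplified illustration under stated assumptions: the affinity S is a plain heat kernel over all image pairs, standing in for 2D-FDLPP's fuzzy membership weights and weighted between-class scatter, and the small ridge term added to the constraint matrix is only for numerical safety.

        # Illustrative two-dimensional (matrix-based) locality preserving projection.
        # Assumption: images is a list of (m, n) grayscale arrays; the heat-kernel
        # affinity S stands in for 2D-FDLPP's fuzzy/discriminant weights.
        import numpy as np
        from scipy.linalg import eigh

        def lpp_2d(images, n_components=10, sigma=1.0):
            A = np.stack(images).astype(float)              # (N, m, n) image matrices
            N, m, n = A.shape
            flat = A.reshape(N, -1)
            d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
            S = np.exp(-d2 / (2 * sigma ** 2))              # pairwise affinity
            D = S.sum(axis=1)
            M_L = np.zeros((n, n))                          # Laplacian-like scatter on the column side
            M_D = np.zeros((n, n))                          # degree-weighted constraint matrix
            for i in range(N):
                M_D += D[i] * A[i].T @ A[i]
                for j in range(N):
                    M_L += S[i, j] * (A[i] - A[j]).T @ (A[i] - A[j])
            # Smallest generalized eigenvectors minimize the locality-preserving objective.
            vals, vecs = eigh(M_L, M_D + 1e-9 * np.eye(n))
            W = vecs[:, :n_components]                      # (n, n_components) projection
            return [a @ W for a in A]                       # projected (m, n_components) features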
     Fourth, a graph-preserving sparse non-negative matrix factorization (GSNMF) algorithm is proposed. The factor matrices obtained from commonly used matrix-factorization-based methods often contain negative values, which are physically meaningless in facial expression recognition. Therefore, following the idea of non-negative matrix factorization, we add a non-negativity constraint to the factorization. In addition, both a graph-preserving constraint and a sparseness constraint are introduced into non-negative matrix factorization. Parts-based basis images are then obtained from the constrained factorization, and facial expression images are represented as linear combinations of these basis images. Furthermore, a projected gradient framework for constrained non-negative matrix factorization is proposed, and the projected gradient method is used to guarantee the stationarity of the limit points. Experimental results show that GSNMF is effective for facial expression recognition and robust to partially occluded facial expression images.
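     The sketch below shows only the core projected-gradient idea for non-negativity-constrained factorization: take a gradient step on the squared reconstruction error and project the factors back onto the non-negative orthant. It is a minimal sketch with a fixed step size and without the graph-preserving or sparseness penalties of GSNMF (and without the line search a practical projected gradient solver would use); all of those omissions are simplifying assumptions.

        # Minimal projected-gradient NMF sketch: V ~ W H with W, H >= 0.
        # Assumption: fixed step size, no graph-preserving or sparseness terms,
        # i.e. only the basic non-negativity projection that GSNMF builds on.
        import numpy as np

        def projected_gradient_nmf(V, rank, n_iter=500, step=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            n, m = V.shape
            W = rng.random((n, rank))
            H = rng.random((rank, m))
            for _ in range(n_iter):
                grad_W = (W @ H - V) @ H.T                  # gradient of 0.5 * ||V - WH||_F^2 w.r.t. W
                grad_H = W.T @ (W @ H - V)                  # gradient w.r.t. H
                W = np.maximum(W - step * grad_W, 0.0)      # step, then project onto W >= 0
                H = np.maximum(H - step * grad_H, 0.0)      # step, then project onto H >= 0
            return W, H

        # Usage: columns of V are vectorized face images, W holds parts-like basis images.
        # V = np.random.rand(4096, 200); W, H = projected_gradient_nmf(V, rank=49)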
