Research on Gender Classification and Age Estimation Based on Face Images
Abstract
In recent years, biometric recognition based on face images has advanced rapidly. Compared with other biometric traits, facial features are natural, convenient, and contactless, which gives them broad application prospects in security surveillance, identity verification, and human-computer interaction. Gender classification and age estimation from face images are pattern recognition problems: determining a person's gender and estimating their age from a facial image. Owing to their potential applications in identity authentication, human-machine interfaces, video retrieval, and robot vision, they have become active research topics in computer vision and pattern recognition.
     In daily life, people can easily judge gender from a face and roughly estimate age, but this is far from easy for a computer. Researchers from psychology, computer vision, pattern recognition, artificial intelligence, and other fields have done extensive work, yet giving computers the same ability as humans remains a challenging research problem. Building on existing work at home and abroad, this thesis studies gender classification and age estimation in depth.
     Since the feature extraction algorithms used for gender classification and age estimation only need features from the face region, face detection is a necessary step, and its accuracy directly affects the effectiveness of feature extraction. We therefore study a face detection method based on the AdaBoost learning algorithm, using a cascaded classifier structure and a pyramid architecture to detect multi-view faces and to speed up detection. Experimental results show that the method is fast and effective.
     To normalize face size, we propose an eye localization method based on AdaBoost and the fast radial symmetry transform. The fast radial symmetry transform finds candidate feature points; an AdaBoost-based detector locates the eyebrow-eye region to narrow the eye search area and reduce interference from other feature points; pupil templates and the geometric constraints between the two eyes then localize them precisely. An active appearance model is used to locate facial feature points and extract local features of the face image.
     Like face recognition, gender classification requires effective and stable facial features; only the target categories differ. This thesis studies feature extraction and classification methods for gender classification, including local binary patterns (LBP), neural networks, AdaBoost, SVMs, and methods that use raw grayscale values directly as features. We further examine how a realistic application environment affects performance by coupling face detection with gender classification, and study the effect of face image size on classification accuracy.
     Gender classification methods can be divided by feature type into global (holistic) approaches and local approaches. Both kinds of features are necessary for recognition and complement each other. To improve accuracy, we propose extracting global facial features with AdaBoost and local (geometric) features with an active appearance model, fusing them, and classifying with a support vector machine. Extensive experiments were conducted on a database of more than 21,300 face images assembled from AR, FERET, CAS-PEAL, images collected from the Web, and images captured in our laboratory. The results show that fusing global and local features raises the classification rate well above either feature type alone, to over 90%. With geometric features involved, the method is also more robust to variations in illumination and pose.
     Most gender classification methods extract features from the whole face image, whereas individual facial sub-regions are less affected by expression changes. To improve classification under expression variation, we propose a method based on sub-region features and compare in detail the contribution of each sub-region to gender classification: the eyes, nose, mouth, chin, left-eye region, the internal face (eyes, nose, mouth, and chin), and the whole face including hair. Experiments on the FERET and CAS-PEAL databases show that individual sub-regions carry enough gender-related information for a single region to exceed 80% accuracy. Fusing the sub-regions that contribute most to gender classification, and exploiting the complementary information they carry, further improves accuracy.
     Finally, we propose improving age estimation accuracy by combining complementary facial information: the grayscale face image, Gabor wavelet features of the face, and the eye region. Gabor wavelets offer multi-orientation and multi-scale selectivity, capture local structure in both space and frequency, and are robust to changes in brightness, contrast, and pose. To improve accuracy further, gender information is used as prior knowledge for age estimation, reflecting the differences between male and female facial aging. A support vector machine estimates age from each facial feature separately; the results are fused to obtain the final estimate, and the fusion method itself is improved.
In recent years, biometrics has been applied to pattern recognition tasks including face, fingerprint, iris, and palm-print recognition. Compared with other biometric authentication technologies, face recognition is natural, convenient, and contactless. These advantages make it widely applicable in security surveillance, identity verification, human-machine interaction, and so on. Gender classification and age estimation aim to give computers the ability to determine gender and estimate age from a face image. They have become hot topics in computer vision and pattern recognition because of their important application prospects in identity recognition, human-machine interfaces, video indexing, and robot vision. However, they remain among the most challenging problems in computer vision.
     Since the feature extraction algorithms used for gender classification and age estimation only need features from the face region, face detection is a necessary step in automatic gender classification and age estimation systems, and its accuracy directly affects the effectiveness of feature extraction. We therefore apply a multi-view face detection method based on the AdaBoost learning algorithm; its cascaded classifier structure and pyramid architecture together improve detection speed. Experimental results show that the method is fast and effective.
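The speed of a cascaded detector comes from early rejection: each boosted stage is cheap, and most non-face windows fail an early stage and are discarded without reaching the expensive later stages. A minimal sketch of this control flow (the stage score functions and thresholds below are hypothetical stand-ins, not the thesis's trained classifiers):

```python
import numpy as np

def cascade_classify(window, stages):
    """Evaluate an attentional cascade: `stages` is a list of
    (score_fn, threshold) pairs ordered cheap-to-expensive. A window is
    rejected as soon as any stage's score falls below its threshold;
    only windows passing every stage are accepted as faces."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # early rejection is what keeps detection fast
    return True

# Hypothetical stages: real ones would be boosted sums of Haar-like features.
stages = [
    (lambda w: w.mean(), 0.2),   # cheap first stage rejects very dark windows
    (lambda w: w.std(), 0.05),   # a later stage requires some contrast
]

bright_textured = np.random.default_rng(0).uniform(0.3, 0.9, (24, 24))
dark_flat = np.full((24, 24), 0.05)
print(cascade_classify(bright_textured, stages))  # True: passes both stages
print(cascade_classify(dark_flat, stages))        # False: rejected at stage 1
```

In a full detector this test runs over a sliding window at every position and scale; the pyramid architecture the abstract mentions handles the scales.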
     In order to normalize face image size, an eye localization method based on AdaBoost and the fast radial symmetry transform is proposed. First, the fast radial symmetry transform rapidly finds candidate feature points. Second, an AdaBoost-based eyebrow-region detector narrows the eye search area, reducing the influence of other feature points on eye localization. Finally, precise eye positions are obtained using a pupil model and the geometric relationship between the eyes. The localized eyes are then used to initialize an active appearance model, which locates facial feature points so that local features of the face image can be extracted.
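The radial symmetry idea is that every edge pixel votes along its gradient direction at a chosen radius, so dark circular blobs such as pupils accumulate many coincident votes. A minimal single-radius sketch of this voting step (the full Loy-Zelinsky transform also weights by an orientation map and smooths the result, which is omitted here):

```python
import numpy as np

def fast_radial_symmetry(img, radius):
    """Single-radius radial symmetry vote map: each edge pixel casts a vote
    `radius` pixels against its gradient (i.e. toward a dark centre)."""
    gy, gx = np.gradient(img.astype(float))  # gradients along rows, columns
    mag = np.hypot(gx, gy)
    votes = np.zeros_like(img, dtype=float)
    h, w = img.shape
    ys, xs = np.nonzero(mag > 1e-3)          # only edge pixels vote
    for y, x in zip(ys, xs):
        ny = int(round(y - radius * gy[y, x] / mag[y, x]))
        nx = int(round(x - radius * gx[y, x] / mag[y, x]))
        if 0 <= ny < h and 0 <= nx < w:
            votes[ny, nx] += mag[y, x]
    return votes

# A dark disc on a bright background: votes should pile up near its centre,
# which is how the transform highlights pupil candidates.
img = np.ones((31, 31))
yy, xx = np.mgrid[:31, :31]
img[(yy - 15) ** 2 + (xx - 15) ** 2 <= 25] = 0.0  # radius-5 disc at (15, 15)
peak = np.unravel_index(np.argmax(fast_radial_symmetry(img, 5)), (31, 31))
print(peak)
```

The abstract's pipeline would run this only inside the AdaBoost-detected eyebrow-eye region, then verify the two strongest candidates against the pupil template and inter-eye geometry.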
     Gender classification and face recognition both need effective and stable facial features; they differ only in the categories ultimately recognized. In this paper, we compare methods used for gender classification, including local binary patterns, neural networks, AdaBoost, support vector machines (SVMs), and raw image pixels as input. We present a systematic study of gender classification with automatically detected and aligned faces, experimenting with different combinations of automatic face detection, face alignment, and gender classification, and reporting classification rates for different face image sizes.
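Of the compared feature types, the local binary pattern is simple enough to sketch: each pixel's 3x3 neighbourhood is thresholded against the centre and packed into an 8-bit code, and the histogram of codes serves as the texture descriptor. A minimal numpy version (basic LBP only; the uniform-pattern and block-wise variants used in the cited work are omitted):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    against the centre and pack the comparisons into an 8-bit code."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the descriptor fed to a classifier."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

img = np.random.default_rng(1).integers(0, 256, (32, 32))
h = lbp_histogram(img)
print(h.shape)  # (256,)
```

For classification, such histograms (per face or per face block) would be concatenated and passed to an SVM or AdaBoost classifier.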
     Based on the type of features used, previous studies can be broadly divided into two categories: appearance-based (global) and geometric-feature-based (local). The former learns the decision boundary directly from training images, while the latter relies on geometric features such as eyebrow thickness and nose width. Local and global features supplement each other under many conditions. In this paper, a novel gender classification method for frontal face images is presented. The global features are extracted using the AdaBoost algorithm; an active appearance model locates 83 landmarks, from which the local features are derived. After fusing the local and global features, the mixed features are used to train support vector machine classifiers. The method is evaluated by recognition rate on a mixed face database containing over 21,300 images drawn from AR, FERET, CAS-PEAL, the Web, and a database collected by our laboratory. Experimental results show that the hybrid method outperforms methods based on appearance or geometric features alone, achieving a classification rate above 90%.
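Feature-level fusion of the two views amounts to concatenating the global and local vectors before training one SVM. A toy sketch with synthetic data (the random "global" and "local" features below merely stand in for AdaBoost-selected appearance features and the 83 AAM landmarks; none of the numbers reflect the thesis's data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 0 / 1 standing in for female / male

# Two synthetic feature views, each carrying part of the class signal.
global_feats = rng.normal(0, 1, (n, 20)) + labels[:, None] * 0.8
local_feats = rng.normal(0, 1, (n, 10)) + labels[:, None] * 0.8

# Feature-level fusion: concatenate the views, then train a single SVM.
fused = np.hstack([global_feats, local_feats])
clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
acc = clf.score(fused[150:], labels[150:])
print(f"fused-feature accuracy: {acc:.2f}")
```

Because the fused vector mixes units (pixel-derived responses vs. landmark geometry), a real pipeline would normalize each view before concatenation, e.g. to zero mean and unit variance per dimension.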
     In the past, most computational models of gender classification used global information (the whole face image), giving equal weight to all areas of the face regardless of the importance of internal features. Intuitively, we argue that smaller facial regions, if judiciously selected, are less sensitive to expression variations and may lead to better overall performance. We evaluate the significance of different facial regions for gender perception. Our work is one of the first to report a detailed evaluation of the significance of different facial regions, including the whole face (with hairline), the internal face, the upper and lower regions of the face, the left eye, the nose, and the mouth. Considering the significance of these regions, we propose a fusion-based method that combines the classification results of three facial regions to improve robustness to facial expressions.
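Decision-level fusion of per-region classifiers can be as simple as a weighted sum rule over signed confidences. A sketch under assumed conventions (the weights, the sign convention, and the region list are illustrative, not the thesis's measured values):

```python
def fuse_regions(confidences, weights):
    """Weighted sum-rule fusion of per-region gender confidences in [-1, 1].
    By an arbitrary convention here, positive means 'male' and negative
    'female'; weights encode each region's measured contribution."""
    total = sum(w * c for w, c in zip(weights, confidences))
    return "male" if total >= 0 else "female"

# Hypothetical outputs from three region classifiers (e.g. eyes, nose, mouth):
# eyes strongly 'male', nose and mouth weakly 'female'; eyes carry most weight.
print(fuse_regions([0.9, -0.2, -0.1], [0.5, 0.3, 0.2]))  # male
```

The advantage over a single whole-face classifier is that an expression change corrupting one region (say, a smiling mouth) only perturbs one weighted term rather than the entire feature vector.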
     We notice that most existing age estimation methods consider only the entire face as a global feature and do not exploit other facial regions as local features. We propose a novel age estimation method that combines information from multiple facial features to improve accuracy and robustness. The features we consider are the grayscale image of the face, the Gabor wavelet representation of the face, and the eyes. Gabor wavelet representations are robust to illumination and expression variability and have been widely used in facial feature modeling in recent years. The eyes are essentially unaffected by beards and mustaches and are quite robust to facial expressions and occlusions; moreover, the area around the eyes has been found to be the most significant for age estimation. The idea is to use complementary information to improve overall performance. We further propose a fusion method that improves accuracy: each feature provides an opinion in the form of a confidence value computed by an SVM, and the confidence values of the three features are fused for the final age estimate. The proposed fusion works well and yields a significant improvement in age estimation over any single feature.
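A Gabor representation is built by convolving the face with a bank of kernels spanning several scales (wavelengths) and orientations. A minimal sketch of the real part of such a bank and its response features (kernel size, the three wavelengths, and four orientations are illustrative choices, not the thesis's parameters):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel: a cosine carrier at orientation
    `theta` with the given wavelength, windowed by a Gaussian of width
    `sigma` (an isotropic envelope, for simplicity)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, bank):
    """Mean magnitude response per filter, via FFT-based convolution."""
    F = np.fft.fft2(img)
    return np.array([np.abs(np.fft.ifft2(F * np.fft.fft2(k, img.shape))).mean()
                     for k in bank])

# A bank over 3 scales x 4 orientations, as used for facial texture features.
bank = [gabor_kernel(21, wl, th, sigma=wl / 2)
        for wl in (4, 8, 16)
        for th in np.linspace(0, np.pi, 4, endpoint=False)]

img = np.random.default_rng(2).normal(size=(64, 64))
v = gabor_features(img, bank)
print(len(bank), v.shape)  # 12 (12,)
```

In the fusion scheme described above, such Gabor features would form one of the three per-feature SVM inputs, alongside the raw grayscale face and the eye region.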
