Research on Human Action Recognition for Multi-Modal Human-Robot Interaction
Abstract
With the rapid development of robotics, service robots are gradually entering home life and the service sector. To integrate into human society, a service robot needs natural human-robot interaction capability, yet existing interaction technology still falls well short of practical application. We therefore need to learn from the multi-modal interaction patterns between humans, raise the level of service-robot interaction technology, and ensure that real interaction is natural and efficient. Exploiting the advantages of body movement recognition for service-robot interaction, this thesis starts from three principal interaction modalities within body movement: static hand postures, upper-limb gestures, and gait. It constructs a multi-channel human-robot interaction method and studies natural, general-purpose multi-modal interaction based on body movement recognition.
     The thesis first addresses the difficulties of static hand posture recognition in practical human-robot interaction and proposes a monocular-vision recognition method. Through a multi-feature representation that describes a static posture by its color, texture, and contour information, the method recognizes static hand postures quickly and accurately and is robust to complex backgrounds, partial occlusion, and user independence. Its effectiveness is verified on public databases and by comparison with methods from the existing literature.
     It then studies upper-limb gesture recognition based on fusing monocular and depth images. The depth and VGA cameras are first calibrated to obtain the coordinate relationship between the depth image and the RGB image. A human skeleton model is then constructed to obtain the spatial coordinates of the main joint nodes, and a three-dimensional histogram distribution is built in a spherical coordinate system; the distributions of the individual main nodes are used to recognize different upper-limb gestures. Compared with existing upper-limb recognition methods, this approach effectively removes the temporal and spatial variability of gesture expression across individual users and performs well under complex environmental conditions.
     A gait recognition technique based on laser range data is proposed that, without contact, quickly extracts a user's gait information from laser data covering a relatively large indoor area. The method preprocesses the laser data, extracts the data segments corresponding to the feet, and obtains the foot positions. Using an established walking-gait model, gait features such as walking speed, step length, step time, and cadence are derived from the changes in foot position across consecutive data frames, providing important information for human-robot interaction. Experiments verify the effectiveness of the method, and the gait characteristics of men and women while walking are analyzed.
     Finally, a multi-modal human-robot interaction system based on body movement recognition is built on the intelligent service robot hardware platform. A semantic understanding model of multi-modal interaction information is established, and body movements, facial expressions, and other modalities are combined in multi-modal interaction experiments, enabling robot and user to interact through multiple modalities. The performance of the body-movement-based multi-modal interaction is verified, an evaluation mechanism for the system is established, and the interaction results are analyzed experimentally.
     Addressing the requirement for natural human-robot interaction in practical service-robot applications, this thesis improves the interaction performance of service robots through research on body movement recognition and multi-modal interaction systems, which benefits the practical application and industrialization of robots. It also evaluates the practical effect of multi-modal interaction, providing a useful reference for research on multi-modal interaction modes and practical reference value for service robots.
With the development of robot technologies, the application of robots has extended to the service field and the human world. To work in human society, a service robot needs natural human-robot interaction ability, yet several problems in human-robot interaction remain unsolved. For natural and efficient communication, we need to improve the interaction ability of service robots on the basis of human interaction modalities. Considering the advantages of human action recognition for the human-robot interaction of service robots, we systematically researched human action recognition technology, including hand posture recognition, upper-body gesture recognition, and gait recognition, and set up a multi-modal human-robot interaction system based on human action recognition.
     In this dissertation, we first propose a vision-based method to overcome the problems of hand posture recognition in human-robot interaction. We cast hand posture recognition as a sparse representation problem and propose a novel approach, the joint-feature sparse representation classifier, for efficient and accurate sparse representation based on multiple features. By integrating different features, including gray-level, texture, and shape features, the proposed method fuses the benefits of each feature and is hence robust to partial occlusion and varying illumination. Additionally, a new database optimization method is introduced to improve computational speed. Experimental results on public and self-built databases show that our method performs well compared with state-of-the-art methods.
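     To make the classification idea concrete, the following is a minimal numpy sketch of sparse-representation classification, assuming precomputed, l2-normalised feature vectors. A greedy orthogonal-matching-pursuit solver stands in for an l1 solver, and a single feature channel stands in for the thesis's joint gray-level/texture/shape features; all names and data here are illustrative, not the thesis's implementation.

```python
import numpy as np

def omp(D, y, n_nonzero=10):
    """Greedy orthogonal matching pursuit: find a sparse x with D @ x ~ y.
    Columns of D (one training feature vector each) are assumed l2-normalised."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # most-correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # refit on support
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y):
    """Assign y to the class whose training atoms reconstruct it with the
    smallest residual (sparse-representation classification)."""
    x = omp(D, y)
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy check: 3 posture classes, 20 training columns each, 64-D features.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 60))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1, 2], 20)
probe = D[:, 5] + 0.05 * rng.normal(size=64)   # noisy copy of a class-0 sample
print(src_classify(D, labels, probe))           # -> 0
```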
     By fusing information from color and depth images, a new upper-body gesture recognition method is proposed. By means of VGA-camera and depth-camera calibration, the coordinate transformation between the color image and the depth image is estimated. Key points of the upper body are extracted based on human skeleton modeling, and the coordinates of the key points in a spherical coordinate system are represented by a 3D histogram. Using a model-based sparse classification method, 10 upper-body gestures are recognized in the experiments. Compared with commonly used methods, our method achieves better results, especially with complex backgrounds and different users.
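     As a sketch of the descriptor step just described, the code below assumes per-frame 3-D joint positions are already available (e.g. from the calibrated depth camera and skeleton model) and bins one joint's trajectory, expressed in spherical coordinates about a body-centred origin, into a 3-D histogram. The bin counts and radius cap are illustrative choices, not the thesis's parameters.

```python
import numpy as np

def to_spherical(p):
    """Cartesian (x, y, z) -> spherical (r, theta, phi)."""
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))  # polar angle
    phi = np.arctan2(y, x)                                          # azimuth
    return np.stack([r, theta, phi], axis=-1)

def joint_histogram(joint_xyz, origin, bins=(4, 6, 8), r_max=1.2):
    """3-D histogram of one joint's positions over the frames of a gesture,
    measured relative to a body-centred origin such as the torso joint."""
    sph = to_spherical(joint_xyz - origin)             # (T, 3) per-frame samples
    edges = [np.linspace(0.0, r_max, bins[0] + 1),     # radius, metres
             np.linspace(0.0, np.pi, bins[1] + 1),     # polar angle
             np.linspace(-np.pi, np.pi, bins[2] + 1)]  # azimuth
    h, _ = np.histogramdd(sph, bins=edges)
    return h.ravel() / max(h.sum(), 1.0)               # normalised descriptor
```

     Concatenating such histograms over the main joints gives a fixed-length vector that discards timing, which matches the invariance to per-user speed and placement claimed above; a sparse classifier like the SRC sketch earlier could serve as a stand-in for the model-based classification step.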
     A method for gait feature extraction based on laser range data is given. The contours of the legs are quickly picked up from the laser data over a large area. To collect gait features such as walking speed, step length, step time, and cadence, the person's position is located via the leg positions extracted from the laser data of consecutive frames. Experimental results show that our method performs well for gait feature extraction.
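     To make the derived features concrete, here is a small sketch that assumes the laser front end has already produced a sequence of foot-strike positions and times for one tracked person; the strike detection itself (segmenting leg returns and exploiting the alternating swing/stance pattern) is not reproduced.

```python
import numpy as np

def gait_features(strike_xy, strike_t):
    """Coarse gait features from successive foot-strike positions (metres)
    and times (seconds), one entry per detected strike."""
    strike_xy = np.asarray(strike_xy, dtype=float)   # (N, 2)
    strike_t = np.asarray(strike_t, dtype=float)     # (N,)
    step_len = np.linalg.norm(np.diff(strike_xy, axis=0), axis=1)
    step_time = np.diff(strike_t)
    return {
        "mean_step_length_m": float(step_len.mean()),
        "mean_step_time_s": float(step_time.mean()),
        "walking_speed_mps": float(step_len.sum() / (strike_t[-1] - strike_t[0])),
        "cadence_steps_per_min": float(60.0 / step_time.mean()),
    }

# e.g. four strikes roughly 0.6 m and 0.55 s apart
print(gait_features([(0, 0), (0.6, 0.02), (1.2, 0.0), (1.8, 0.03)],
                    [0.0, 0.55, 1.1, 1.66]))
```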
     Finally, a multi-modal human-robot interaction system is built on our service robot. To achieve natural and universal human-robot interaction, a new interaction architecture is proposed on the basis of semantic understanding. By fusing human action information, facial expressions, and voice information, the user is able to interact naturally with the service robot. The multi-modal interaction system is evaluated and tested in several experiments.
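     The semantic-understanding model itself is not detailed in the abstract; as one plausible reading, the sketch below shows generic confidence-weighted late fusion, where each recogniser (gesture, speech, expression, gait) votes for a semantic label and the robot acts on the strongest combined vote. The weights and labels are illustrative assumptions, not the thesis's model.

```python
from dataclasses import dataclass

@dataclass
class ModalEvent:
    modality: str       # "gesture", "speech", "expression", "gait"
    label: str          # semantic label the recogniser voted for
    confidence: float   # recogniser score in [0, 1]

def fuse(events, weights=None):
    """Confidence-weighted late fusion: return the semantic label with the
    highest summed weight * confidence across modalities."""
    weights = weights or {"speech": 1.0, "gesture": 0.8,
                          "expression": 0.5, "gait": 0.3}
    scores = {}
    for e in events:
        scores[e.label] = (scores.get(e.label, 0.0)
                           + weights.get(e.modality, 0.5) * e.confidence)
    return max(scores, key=scores.get)

# A pointing gesture and an utterance both mapped to "navigate":
print(fuse([ModalEvent("gesture", "navigate", 0.7),
            ModalEvent("speech", "navigate", 0.9),
            ModalEvent("expression", "neutral", 0.6)]))   # -> "navigate"
```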
     In this thesis, to achieve natural and universal human-robot interaction, a mobile robot platform is developed with a multi-modal interaction system based on human action recognition, which improves the interaction performance of the robot. The experiments presented in this thesis verify that these techniques improve the performance of service robots and possess practical reference value.
