Research on Key Technologies for Human Activity Recognition
Abstract
Human activity recognition is an emerging research direction in artificial intelligence and pattern recognition with extremely broad application prospects. This thesis studies the key technologies of human activity recognition, including image preprocessing, activity representation, feature dimensionality reduction, and activity classification, and proposes recognition methods applicable to visible-light imaging, infrared imaging, and wearable sensing, achieving good recognition results. The main innovative contributions of the thesis are as follows:
     (1) High-quality image preprocessing is the foundation of activity recognition research. For dual-band (visible and thermal infrared) video surveillance applications, a dual-band color image fusion algorithm is proposed, and the influence of image fusion on human target tracking performance is examined. The visible and infrared images are adaptively fused in the NSCT domain; the fused image is assigned to the luminance channel of the YUV color space, and color transfer then yields a color fused image with a natural color appearance. Experimental results show that the method improves the detectability of human targets, enriches the detail information of the fused image, enhances the observer's perception of the surveillance scene, and provides higher-quality source images for computer vision analysis; in addition, dual-band image fusion improves the accuracy and robustness of human target tracking.
     (2) An activity recognition method based on appearance representation and multi-class relevance vector machines is proposed. A new spatio-temporal template, the variation energy image, is established, and activity features reflecting both human shape and motion information are extracted from it; the multi-class relevance vector machine is introduced into activity recognition for the first time to classify multiple activity classes. Tested on the Weizmann activity dataset, the "constructive" multi-class relevance vector machine achieves a recognition rate of 98.2% with excellent sparsity over the feature samples. Compared with other typical recognition methods, the proposed method has clear advantages in both feature complexity and recognition rate. Further analysis shows that the performance differences between methods stem mainly from the choices of feature extraction and classification method.
     (3) Activity recognition methods based on human-vision properties are proposed, and Gabor-family wavelets are applied to infrared human activity recognition for the first time. Gabor wavelets are used to describe the energy variation image of an activity at multiple scales and orientations. To reduce the number of decomposition levels needed to cover the frequency band and to better capture activity details, the superior Log-Gabor wavelet is further adopted. To address the high-dimensional feature problem in activity recognition and the small-sample problem in training, principal component analysis and the discriminative common vectors method are used, respectively, to reduce the dimensionality of the Gabor-family features. Tested on the infrared activity dataset built at Chongqing University, a recognition rate of 94.44% is achieved. The influence of the Gabor wavelet variant, the feature reduction method, and the classifier on recognition performance is also examined, validating the design choices of the proposed method.
     (4) Activity recognition with wearable sensing is studied. To address the high-dimensional data problem in activity sensing, generalized discriminant analysis is applied to wearable-sensor activity recognition for the first time, and a novel recognition method is proposed. The extracted time- and frequency-domain activity features are reduced with generalized discriminant analysis, and a combined relevance vector machine is constructed to classify multiple activity classes. Tested on the WARD human activity dataset, a recognition rate of 99.2% is achieved. To enhance the robustness of the wearable-sensing recognition system, the system structure is analyzed for optimization and the fusion of multiple sensor nodes is further studied. Within the established decision-fusion framework, an adaptive logarithmic opinion pool fuses the posterior class probabilities output by each node to produce the final activity label. The influence of the number and placement of sensor nodes, the fusion rule, the feature reduction method, and the classification method on recognition performance is studied in detail.
Human activity recognition has become one of the most active research topics in the artificial intelligence and pattern recognition field, due to its wide applications. The key technologies for human activity recognition, including image preprocessing, activity representation, feature reduction, and activity classification, are studied thoroughly in this thesis. Based on these, we put forward several novel activity recognition methods applicable to different systems and achieved satisfactory results. The main contributions of this thesis are summarized as follows.
     (1) Effective image preprocessing is crucial to the ultimate recognition performance. We put forward a color fusion algorithm for dual-band surveillance applications. The visible and thermal infrared source images are fused with the non-subsampled contourlet transform (NSCT) and subsequently colorized with a color transfer scheme. Experimental results demonstrate that the method not only keeps abundant background detail but also improves human target detectability. As a result, situation awareness is enhanced, and higher-quality source images are obtained for further analysis. Moreover, experiments show that the fused images improve the robustness and accuracy of target tracking.
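The fusion-then-colorize pipeline can be sketched in a much simplified form. This is only an illustration: the thesis fuses in the NSCT domain and colorizes with color transfer, whereas here a crude box-blur two-scale decomposition stands in for NSCT and a hypothetical band-difference mapping stands in for the chrominance channels; all function names are ours.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur -- a stand-in low-pass filter (the thesis uses NSCT)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_and_colorize(vis, ir):
    """Two-scale fusion, then a YUV-style colorization of the fused result."""
    lo_v, lo_i = box_blur(vis), box_blur(ir)
    hi_v, hi_i = vis - lo_v, ir - lo_i
    # average the low-pass bands; keep the stronger detail coefficient
    fused = 0.5 * (lo_v + lo_i) + np.where(np.abs(hi_v) >= np.abs(hi_i), hi_v, hi_i)
    y = fused                # fused image drives the luminance channel
    u = 0.5 * (ir - vis)     # hypothetical chrominance mapping, not the
    v = 0.5 * (vis - ir)     # thesis's color-transfer scheme
    return np.stack([y, u, v], axis=-1)
```

The max-absolute rule on the detail band is a common fusion heuristic; the thesis's adaptive NSCT rule is more elaborate.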
     (2) We put forward a novel human activity recognition method based on a new spatio-temporal template called the variation energy image (VEI), which better represents shape and motion features. The multi-class relevance vector machine (mRVM), a state-of-the-art kernel machine for multi-class classification, is introduced into activity recognition for the first time. We achieved a recognition rate as high as 98.2% on the Weizmann dataset, which shows that the mRVM, and especially mRVM2, has advantages in both recognition rate and sparsity. We further found that our recognition rate exceeds that of other methods mainly because of differences in activity representation and activity classification.
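A VEI-style template collapses a silhouette sequence into one 2-D image that carries both motion and shape cues. The exact VEI definition is not given here, so the sketch below is an illustrative guess: accumulated inter-frame change records motion, and the mean silhouette retains shape.

```python
import numpy as np

def variation_energy_image(silhouettes):
    """Sketch of a VEI-style spatio-temporal template.

    `silhouettes` is a (T, H, W) binary stack.  The thesis's actual VEI
    formula may differ; this only illustrates the general idea of an
    energy-variation template.
    """
    sil = np.asarray(silhouettes, dtype=float)
    motion = np.abs(np.diff(sil, axis=0)).sum(axis=0)  # where pixels changed
    shape = sil.mean(axis=0)                           # average body shape
    energy = motion + shape
    peak = energy.max()
    return energy / peak if peak > 0 else energy
```

The resulting 2-D template can then be vectorized and fed to any multi-class classifier, such as the mRVM used in the thesis.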
     (3) We also put forward human activity recognition methods based on human-vision properties and test them on a thermal infrared human activity dataset constructed by Chongqing University. Both Gabor and Log-Gabor wavelets are employed to describe infrared human activities for the first time; we recommend the latter because it reduces the required number of scales. The principal component analysis method and the discriminative common vectors method are used to solve the feature reduction problem and the small-sample-size problem, respectively. A recognition rate as high as 94.44% is achieved on the infrared dataset, and the influence of several factors (the Gabor-family wavelet, the feature reduction method, and the classification method) on recognition performance is studied thoroughly.
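The radial part of a Log-Gabor filter is defined directly in the frequency domain as G(f) = exp(-(ln(f/f0))^2 / (2 (ln sigma_ratio)^2)). By construction it has no DC component and a long high-frequency tail, which is why it covers the spectrum with fewer scales than an ordinary Gabor filter. A minimal frequency-domain sketch (function names and defaults are ours, and the orientation component is omitted):

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.65):
    """Radial Log-Gabor transfer function on an FFT frequency grid."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0   # avoid log(0); the DC gain is set explicitly below
    g = np.exp(-(np.log(radius / f0)) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0        # a Log-Gabor filter has no DC component
    return g

def filter_image(img, **kw):
    """Apply the radial Log-Gabor filter in the Fourier domain."""
    G = log_gabor_radial(img.shape, **kw)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))
```

In the thesis the filtered responses at several scales and orientations form the high-dimensional feature vector that PCA or discriminative common vectors then reduce.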
     (4) Finally, we studied human activity recognition with wearable sensors. We put forward a novel method in which generalized discriminant analysis is used for the first time to reduce the high-dimensional time- and frequency-domain features derived from multiple sensors; an array of relevance vector machines then classifies the reduced features. Experimental results on the WARD dataset with different classification techniques demonstrate that our approach achieves a recognition rate as high as 99.2%. Considering the robustness of the multi-sensor system and its possible optimizations, we further studied decision fusion methods. The relations between several factors of the recognition method (the number of sensors and their deployment, the fusion rule, the feature reduction method, and the classification method) and the recognition performance are investigated thoroughly.
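The decision-fusion step can be sketched as a logarithmic opinion pool over the per-node class posteriors: the fused posterior is proportional to the weighted geometric mean of the node posteriors. The thesis adapts the pool weights; this sketch uses fixed (uniform by default) weights and invented names, so it shows only the combination rule, not the adaptation.

```python
import numpy as np

def log_opinion_pool(posteriors, weights=None):
    """Logarithmic opinion pool over per-node class posteriors.

    posteriors: (n_nodes, n_classes) array, each row a probability vector.
    weights:    per-node reliabilities (uniform if None; the thesis adapts
                these, which is omitted here).
    Returns the fused posterior  p(c) proportional to prod_k p_k(c)^w_k.
    """
    P = np.asarray(posteriors, dtype=float)
    n = P.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    logp = w[:, None] * np.log(np.clip(P, 1e-12, None))  # clip avoids log(0)
    fused = np.exp(logp.sum(axis=0))
    return fused / fused.sum()
```

The predicted activity is the argmax of the fused posterior; setting a node's weight to zero simply removes it from the pool.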
引文
[1] Computed-aussisted prescreelling of video streams for unusual activities[OL].http://homepages.inf.ed.ac.uk/rbf/BEHAVE.
    [2] Integrated surveillance of crowded areas for public security[OL].http://www.cvg.rdg.ac.uk/projects/iscaps/index.html.
    [3] Robust methods for monitoring and understanding people in public spaces[OL].http://www.cvg.rdg.ac.uk/projects/reason/index.html.
    [4] N. Bird, S. Atev, N. Caramelli, R. Martin, O. Masoud, N. Papanikolopoulos. Real time,online detection of abandoned objects in public areas[C]. Proceedings of IEEE InternationalConference on Robotics and Automation,2006:3775-3780.
    [5] R. Sivalingam, A. Cherian, J. Fasching, N. Walczak, N. Bird, V. Morellas, B. Murphy, K.Cullen, K. Lim, G. Sapiro. A multi-sensor visual tracking system for behavior monitoring ofat-risk children[C]. International Conference on Robotics and Automation,2012.
    [6] Computer vision lab, university of central Forida[OL]. http://vision.eecs.ucf.edu/
    [7] Ariel computerized exercise system [OL]. http://www.arielnet.com/start/aces/.
    [8]面向公共安全的社会感知数据处理[Z].973项目:2012CB316300-G.
    [9] J. Gu, X. Ding, S. Wang, Y. Wu. Action and gait recognition from recovered3-D humanjoints[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics,2010,40(4):1021-1033.
    [10]田国会,吉艳青,李晓磊.家庭智能空间下基于场景的人的行为理解[J].智能系统学报,2010,5(001):57-62.
    [11]吴宝元,余永,许德章,吴仲城,陈峰.可穿戴式下肢助力机器人运动学分析与仿真[J].机械科学与技术,2007,26(2):235-240.
    [12] Q. Zhou, S. Yu, X. Wu, Q. Gao, C. Li, Y. Xu. HMMs-based human action recognition for anintelligent household surveillance robot[C]. IEEE International Conference on Robotics andBiomimetics,2009:2295-2300.
    [13]曹春梅,季林红,王子羲,陈大融.跳板跳水起跳动作时的人板协调关系[J].清华大学学报(自然科学版),2008,48(2):207-210.
    [14] G. Johansson. Visual perception of biological motion and a model for its analysis[J].Attention, Perception,&Psychophysics,1973,14(2):201-211.
    [15] Z. Chen, H.J. Lee. Knowledge-guided visual perception of3-D human gait from a singleimage sequence-part I: A new framework for modeling human motion[J]. IEEE Transactionson Systems, Man and Cybernetics,1992,22(2):336-342.
    [16] X. Feng, P. Perona. Human action recognition by sequence of movelet codewords[C].2002:717-721.
    [17] N. Ikizler, D. Forsyth. Searching video for complex activities with finite state models[C].IEEE Computer Society Conference on Computer Vision and Pattern Recognition,2007:1-8.
    [18] D. Marr, H.K. Nishihara. Representation and recognition of the spatial organization ofthree-dimensional shapes[J]. Proceedings of the Royal Society of London. Series B.Biological Sciences,1978,200(1140):269-294.
    [19] D. Hogg. Model-based vision: a program to see a walking person[J]. Image and visioncomputing,1983,1(1):5-20.
    [20] K. Rohr. Towards model-based recognition of human movements in image sequences[J].CVGIP-Image Understanding,1994,59(1):94-115.
    [21] R.D. Green, L. Guan. Quantifying and recognizing human movement patterns frommonocular video images-part I: a new framework for modeling human motion[J]. IEEETransactions on Circuits and Systems for Video Technology,2004,14(2):179-190.
    [22] M. Brand, N. Oliver, A. Pentland. Coupled hidden Markov models for complex actionrecognition[C]. Proceedings of IEEE Computer Society Conference on Computer Vision andPattern Recognition,1997:994-999.
    [23] T. Starner, A. Pentland. Real-time american sign language recognition from video usinghidden markov models[C]. Proceedings of the IEEE International Conference on ComputerVision,1995:265-270.
    [24]胡琴,王文中,夏时洪,刘任任,李锦涛.基于多摄像机的人体步态跟踪方法[J].计算机工程,2008,34(22):220-222.
    [25]谷军霞,丁晓青,王生进.基于人体行为3D模型的2D行为识别[J].自动化学报,2010,36(1):46-53.
    [26] J. Yamato, J. Ohya, K. Ishii. Recognizing human action in time-sequential images usinghidden Markov model[C].1992:379-385.
    [27] L. Wang, D. Suter. Recognizing human activities from silhouettes: Motion subspace andfactorial discriminative graphical model[C]. IEEE Conference on In Computer Vision andPattern Recognition,2007:1-8.
    [28] A.F. Bobick, J.W. Davis. The recognition of human movement using temporal templates[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2001,23(3):257-267.
    [29] D. Weinland, R. Ronfard, E. Boyer. Free viewpoint action recognition using motion historyvolumes[J]. Computer vision and image understanding,2006,104(2-3):249-257.
    [30] M. Blank, L. Gorelick, E. Shechtman, M. Irani, R. Basri. Actions as space-time shapes[C].Proceedings of10th IEEE International Conference on Computer Vision,2005:1395-1402.
    [31] X. Yang, Y. Zhou, T. Zhang, G. Shu, J. Yang. Gait recognition based on dynamic regionanalysis[J]. Signal Processing,2008,88(9):2350-2356.
    [32]马勤勇,聂栋栋,王申康.基于能量图分解与运动偏移特性的步态识别[J].光电子.激光,2009,20(4):545-540.
    [33]黄飞跃,徐光祐.视角无关的动作识别[J]. Journal of Software,2008,19(7):1623-1634.
    [34] R. Polana, R. Nelson. Low level recognition of human motion (or how to get your manwithout finding his body parts)[C]. Proceedings of Motion of Non-Rigid and ArticulatedObgects Workshop,1994:77-82.
    [35] R. Cutler, M. Turk. View-based interpretation of real-time optical flow for gesturerecognition[C]. Proceedings of the3rd. International Conference on Face&GestureRecognition,1998:416-421.
    [36] A.A. Efros, A.C. Berg, G. Mori, J. Malik. Recognizing action at a distance[C]. InternationalConference on Computer Vision,2003:726-733
    [37] Toby H.W.Lam, K.H.Cheung, JamesN.K.Liu. Gait flow image: A silhouette-based gaitrepresentation for human identification[J]. Pattern Recognition,2011,44(4):973-987.
    [38] I. Laptev. On space-time interest points[J]. International Journal of Computer Vision,2005,64(2):107-123.
    [39] P. Dollár, V. Rabaud, G. Cottrell, S. Belongie. Behavior recognition via sparsespatio-temporal features[C].2nd Joint IEEE International Workshop on Visual Surveillanceand Performance Evaluation of Tracking and Surveillance,2005:65-72.
    [40] P. Scovanner, S. Ali, M. Shah. A3-dimensional sift descriptor and its application to actionrecognition[C]. Proceedings of the15th international conference on Multimedia,2007:357-360.
    [41] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool. Speeded-up robust features (SURF)[J]. Computervision and image understanding,2008,110(3):346-359.
    [42] X. Jiang, T. Sun, B. Feng, C. Jiang. A space-time SURF descriptor and its application toaction recognition with video words[C].8th International Conference on Fuzzy Systems andKnowledge Discovery,2011:1911-1915.
    [43] R. Poppe. A survey on vision-based human action recognition[J]. Image and visioncomputing,2010,28(6):976-990.
    [44]吴心筱.图像序列中人的姿态估计与动作识别[D].北京:北京理工大学,2010.
    [45] J.C. Niebles, L. Fei-Fei. A hierarchical model of shape and appearance for human actionclassification[C]. IEEE Conference on Computer Vision and Pattern Recognition,2007:1-8.
    [46] S. Belongie, J. Malik, J. Puzicha. Shape matching and object recognition using shapecontexts[J]. Ieee Transactions on Pattern Analysis and Machine Intelligence,2002,24(4):509-522.
    [47] J. Liu, S. Ali, M. Shah. Recognizing human actions using multiple features[C].26th IEEEConference on Computer Vision and Pattern Recognition,2008:1-8.
    [48]杨跃东,郝爱民,褚庆军,赵沁平,王莉莉.基于动作图的视角无关动作识别[J]. Journalof Software,2009,20(10):2679-2691.
    [49] J. Han, B. Bhanu. Human activity recognition in thermal infrared imagery[C]. IEEEComputer Society Conference on Computer Vision and Pattern Recognition-Workshops,2005.
    [50] D. Tan, K. Huang, S. Yu, T. Tan. Efficient night gait recognition based on templatematching[C].18th International Conference on Pattern Recognition,2006:1000-1003.
    [51] Z. Xue, D. Ming, W. Song, B. Wan, S. Jin. Infrared gait recognition based on wavelettransform and support vector machine[J]. Pattern Recognition,2010,43(8):2904-2910.
    [52]李建福.红外图像中人体目标检测、跟踪及其行为识别研究[D].重庆:重庆大学,2010.
    [53] H.C. Fernandes, X. Maldague, M.A. Batista, C.A.Z. Barcelos. Suspicious event recognitionusing infrared imagery[C]. Proceedings of IEEE International Conference on Systems, Manand Cybernetics,2011:2186-2191.
    [54] A.F. Bobick. Movement, activity and action: the role of knowledge in the perception ofmotion[J]. Philosophical Transactions of the Royal Society of London. Series B: BiologicalSciences,1997,352(1358):1257-1265.
    [55] K. Schindler, L. Van Gool. Action snippets: How many frames does human actionrecognition require?[C]. IEEE Conference on Computer Vision and Pattern Recognition,2008:1-8.
    [56] I. Laptev, M. Marszalek, C. Schmid, B. Rozenfeld. Learning realistic human actions frommovies[C].26th IEEE Conference on Computer Vision and Pattern Recognition,2008:1-8.
    [57] H. Qian, Y. Mao, W. Xiang, Z. Wang. Recognition of human activities using SVMmulti-class classifier[J]. Pattern Recognition Letters,2010,31(2):100-111.
    [58] J. Liu, M. Shah, B. Kuipers, S. Savarese. Cross-view action recognition via view knowledgetransfer[C]. IEEE Conference on Computer Vision and Pattern Recognition,2011:3209-3216.
    [59] H. Meng, N. Pears. Descriptive temporal template features for visual motion recognition[J].Pattern Recognition Letters,2009,30(12):1049-1058.
    [60] B. Gholami, W.M. Haddad, A.R. Tannenbaum. Relevance vector machine learning forneonate pain intensity assessment using digital imaging[J]. IEEE Transactions on BiomedicalEngineering,2010,57(6):1457-1466.
    [61] D. Selvathi, R. Ram Prakash, S. Thamarai Selvi. Performance evaluation of kernel basedtechniques for brain MRI data classification[C]. Proceedings of International Conference onComputational Intelligence and Multimedia Applications,2007:456-460.
    [62] A. Fathi, G. Mori. Action recognition by learning mid-level motion features[C]. ComputerVision and Pattern Recognition,2008:1-8.
    [63] S. Nowozin, G. Bakir, K. Tsuda. Discriminative subsequence mining for actionclassification[C].11th IEEE International Conference on Computer Vision,2007:1-8.
    [64] J. Yamato, J. Ohya, K. Ishii. Recognizing human action in time-sequential images usinghidden Markov model[C]. IEEE Computer Society Conference on Computer Vision andPattern Recognition,1992:379-385.
    [65] S. Fine, Y. Singer, N. Tishby. The hierarchical hidden Markov model: Analysis andapplications[J]. Machine learning,1998,32(1):41-62.
    [66] R. Messing, C. Pal, H. Kautz. Activity recognition using the velocity histories of trackedkeypoints [C].12th International Conference on Computer Vision,2009:104-111.
    [67] F. Caillette, A. Galata, T. Howard. Real-time3-D human body tracking using learnt modelsof behaviour[J]. Computer vision and image understanding,2008,109(2):112-125.
    [68]任海兵.非特定人自然的人体动作识别[D].北京:清华大学,2003.
    [69] Y. Luo, T.D. Wu, J.N. Hwang. Object-based analysis and interpretation of human motion insports video sequences by dynamic Bayesian networks[J]. Computer vision and imageunderstanding,2003,92(2):196-216.
    [70] Benjamin Laxton, Jongwoo Lim, a.D. Kriegman. Leveraging temporal, contextual andordering constraints for recognizing complex activities in vide[J]. IEEE InternationalConferance on Computer Vision and Pattern Recognition,2007.
    [71] Y. Du, F. Chen, W. Xu, W. Zhang. Activity recognition through multi-scale motion detailanalysis[J]. Neurocomputing,2008,71(16-18):3561-3574.
    [72] J. Lafferty, A. McCallum, F.C.N. Pereira. Conditional random fields: Probabilistic models forsegmenting and labeling sequence data[C]. Proceedings of the18th International Conferenceon Machine Learning,2001:282-289.
    [73] C. Sminchisescu, A. Kanaujia, Z. Li, D. Metaxas. Conditional models for contextual humanmotion recognition[C].10th IEEE International Conference on Computer Vision,2005:1808-1815.
    [74] P. Natarajan, R. Nevatia. View and scale invariant action recognition using multiviewshape-flow models[C].26th IEEE Conference on Computer Vision and Pattern Recognition,2008:1-8.
    [75] H. Ning, W. Xu, Y. Gong, T. Huang. Latent pose estimator for continuous actionrecognition[J]. Computer Vision,2008,5303(2):419-433.
    [76] A. Veeraraghavan, A.K. Roy-Chowdhury, R. Chellappa. Matching shape sequences in videowith applications in human movement analysis[J]. IEEE Transactions on Pattern Analysisand Machine Intelligence,2005,27(12):1896-1909.
    [77] N. Ikizler, P. Duygulu. Histogram of oriented rectangles: a new pose descriptor for humanaction recognition[J]. Image and vision computing,2009,27(10):1515-1526.
    [78]李善青.基于穿戴视觉的人机交互技术[D].北京:北京理工大学,2010.
    [79]张湘才.基于可穿戴计算的现场采集系统的研究与实现[D].成都:电子科技大学,2007.
    [80]韩露.面向智能移动监控辅助的可穿戴视觉研究[D].重庆:重庆大学,2011.
    [81] C.T. Moritz, S.I. Perlmutter, E.E. Fetz. Direct control of paralysed muscles by corticalneurons[J]. Nature,2008,456(7222):639-642.
    [82] J.E. O'Doherty, M.A. Lebedev, P.J. Ifft. Active tactile exploration enabled by a brainmachine-brain interface[J]. Nature,2011,479:228-231.
    [83] W. Jia, N. Kong, F. Li, X. Gao, S. Gao, G. Zhang, Y. Wang, F. Yang. An epileptic seizureprediction algorithm based on second-order complexity measure[J]. Physiologicalmeasurement,2005,26(5):609-625.
    [84] F. Meng, K.Y. Tong, S.T. Chan, W.W. Wong, K.H. Lui, K.W. Tang, X. Gao, S. Gao. Cerebralplasticity after subcortical stroke as revealed by cortico-muscular coherence[J]. IEEETransactions on Neural Systems and Rehabilitation Engineering,2009,17(3):234-243.
    [85]徐江.基于实时脑机接口的无线遥控车系统[D].重庆:重庆大学,2010.
    [86] K. Nagata, M. Yamada, K. Magatani. Development of the assist system to operate a computerfor the disabled using multichannel surface EMG [C].26th Annual International Conferenceof the IEEE Engineering in Medicine and Biology Society,2004:4952-4955.
    [87] I. Moon, M. Lee, J. Chu, M. Mun. Wearable EMG-based HCI for electric-poweredwheelchair users with motor disabilities[C]. Proceedings of the2005IEEE InternationalConference on Robotics and Automation,2005:2649-2654.
    [88]费烨赟.基于肌电信号控制的康复医疗下肢外骨骼设计及研究[D].杭州:浙江大学,2006.
    [89]李庆玲,孔民秀,杜志江,孙立宁,王东岩.5-DOF上肢康复机械臂交互式康复训练控制策略[J].机械工程学报,2008,44(9):169-176.
    [90] C-leg microprocessor prosthetic knee [OL]. http://www.ottobockknees.com/.
    [91]张佳帆.基于柔性外骨骼人机智能系统基础理论及应用技术研究[D].杭州:浙江大学,2009.
    [92]田双太.一种可穿戴机器人的多传感器感知系统研究[D].合肥:中国科学技术大学,2011.
    [93] Y. Ohgi. Microcomputer-based acceleration sensor device for sports biomechanics-strokeevaluation by using swimmer's wrist acceleration[C]. Proceedings of IEEE Sensors,2002:699-704.
    [94] Xsens MVN [OL]. http://www.xsens.com/en/general/mvn
    [95] J.Y. Yang, Y.P. Chen, G.Y. Lee, S.N. Liou, J.S. Wang. Activity recognition using one triaxialaccelerometer: A neuro-fuzzy classifier with feature reduction[C]. International Conferenceof Entertainment Computing,2007:395-400.
    [96] S. Song, J. Jang, S. Park. A phone for human activity recognition using triaxial accelerationsensor[C]. IEEE International Conference on Consumer Electronics,2008:1-2.
    [97] X. Long, B. Yin, R.M. Aarts. Single-accelerometer-based daily physical activityclassification[C]. Proceedings of the31st Annual International Conference of the IEEEEngineering in Medicine and Biology Society: Engineering the Future of Biomedicine,2009:6107-6110.
    [98] F. Bianchi, S.J. Redmond, M.R. Narayanan, S. Cerutti, N.H. Lovell. Barometric pressure andtriaxial accelerometry-based falls event detection[J]. IEEE Transactions on Neural Systemsand Rehabilitation Engineering,2010,18(6):619-627.
    [99]陈雷,杨杰,沈红斌,王双全.基于加速度信号几何特征的动作识别[J].上海交通大学学报,2008,42(2):219-222.
    [100] Z. He. Accelerometer based gesture recognition using fusion features and SVM[J]. Journal ofSoftware,2011,6(6):1042-1049.
    [101]齐娟,陈益强,刘军发,孙卓.融合多模信息感知的低功耗行为识别[J]. Journal ofSoftware,2010,21,39-50.
    [102]苗强,周兴社,於志文,倪红波.一种非觉察式的睡眠行为识别方法[J]. Journal ofSoftware,2010,21,21-32.
    [103]石欣.基于压力感知步态的运动人体行为识别研究[D].重庆:重庆大学,2010.
    [104] S. Wang, J. Yang. Decentralized acoustic source localization with unknown source energy ina wireless sensor network[J]. Measurement Science and Technology,2007,18(12):3768-3776.
    [105]钟志.基于异常行为辨识的智能监控技术研究[D].上海:上海交通大学,2008.
    [106]王江涛,杨静宇.红外图像中人体实时检测研究[J].系统仿真学报,2007,19(19):4490-4494.
    [107] X. Song, H. Zhao, J. Cui, X. Shao, R. Shibasaki, H. Zha. Fusion of laser and vision formultiple targets tracking via on-line learning[C]. IEEE International Conference on Roboticsand Automation,2010:406-411.
    [108] T.B. Moeslund, A. Hilton, V. Krüger. A survey of advances in vision-based human motioncapture and analysis[J]. Computer vision and image understanding,2006,104(2):90-126.
    [109] L. Wang, Y. Wang, W. Gao. Mining Layered Grammar Rules for Action Recognition[J].International Journal of Computer Vision,2011,93(2):162-182.
    [110]韩磊,李君峰,贾云得.基于时空单词的两人交互行为识别方法[J].计算机学报,2010,33(4):776-784.
    [111] S. Yu, D. Tan, T. Tan. A framework for evaluating the effect of view angle, clothing andcarrying condition on gait recognition[C].18th International Conference on PatternRecognition,2006:441-444.
    [112] N. Otsu. A threshold selection method from gray-level histograms[J]. Automatica,1975,11(9):62-66.
    [113] J.A. Hartigan, M.A. Wong. Algorithm AS136: A k-means clustering algorithm[J]. Journal ofthe Royal Statistical Society. Series C (Applied Statistics),1979,28(1):100-108.
    [114] R. Eckhorn, H. Reitboeck, M. Arndt, P. Dicke. Feature linking via synchronization amongdistributed assemblies: Simulations of results from cat visual cortex[J]. Neural Computation,1990,2(3):293-307.
    [115] D.M. Ryan, R.D. Tinkler. Night pilotage assessment of image fusion[C]. Proceedings of SPIE,1995:50-67.
    [116] S. Paicopolis, Jonathan G. Hixson, V.A. Noseck. Human visual performance of a dual bandI2/IR sniper scope[J]. Proceedings of SPIE,2007,6737, l-12.
    [117]骆媛,王岭雪,金伟其,赵源萌,张长兴,李家琨.微光(可见光)/红外彩色夜视技术处理算法及系统进展[J].红外技术,2010,32(6):337-344.
    [118] D. Dwyer, D. Hickman, T. Riley, J. Heather, M. Smith. Real time implementation of imagealignment and fusion on a police helicopter[C]. Proceedings of SPIE,2006:622607.
    [119]倪国强,肖蔓君,秦庆旺,黄光华.近自然彩色图像融合算法及其实时处理系统的发展[J].光学学报,2007,27(12):2101-2109.
    [120]王岭雪,史世明,金伟其,赵源萌,王生祥.基于YUV空间的双通道视频图像色彩传递及实时系统[J].北京理工大学学报,2007,27(3):189-191.
    [121]张闯.单通道双谱微光彩色夜视技术研究[D].南京:南京理工大学,2008.
    [122] E. Reinhard, M. Adhikhmin, B. Gooch, P. Shirley. Color transfer between images[J]. IEEEComputer Graphics and Applications,2001,21(5):34-41.
    [123] A. Toet. Natural colour mapping for multiband nightvision imagery[J]. Information Fusion,2003,4(3):155-166.
    [124] W. Wen, D.M. Fu. Colorization of infrared images based on DWT fusion and colortransfer[C]. Proceedings of International Conference on Wavelet Analysis and PatternRecognition,2007:432-436.
    [125] L. Wang, Y. Zhao, W. Jin, S. Shi, S. Wang. Real-time color transfer system for low-light levelvisible and infrared images in YUV color space[C]. Proceedings of SPIE,2007:65671G.
    [126] Y. Zheng, E.A. Essock. A local-coloring method for night-vision colorization utilizing imageanalysis and fusion[J]. Information Fusion,2008,9(2):186-199.
    [127] A. Toet, M.A. Hogervorst. Progress in color night vision[J]. Optical Engineering,2012,51010901.
    [128] S. Yin, L. Cao, Y. Ling, G. Jin. One color contrast enhanced infrared and visible image fusionmethod[J]. Infrared Physics&Technology,2010,53(2):146-150.
    [129] L. Mihaylova, A. Loza, S. Nikolov, J. Lewis, E.F. Canga, J. Li, T. Dixon, C. Canagarajah, D.Bull. The influence of multi-sensor video fusion on object tracking using a particle filter[C].Proceedings of the2nd workshop Multiple Sensor Data Fusion: Trends, Solutions,Applications,2006:354-358.
    [130] A. Mahmood, P.M. Tudor. Applied multi-dimensional fusion for urban intelligence,surveillance, target acquisition, and reconnaissance[C]. Proceedings of SPIE,2008:711905.
    [131] G. Xiao, X. Yun, J.M. Wu. A multi-cue mean-shift target tracking approach based onfuzzified region dynamic image fusion[J]. SCIENCE CHINA Information Sciences,2012,55(3):577-589.
    [132] S.R. Schnelle, A.L. Chan. Fusing Infrared and Visible Imageries for Improved Tracking ofMoving Targets [R]. Army reasearch lab adelphimed sensors and electron devices directorate,2011.
    [133] T. Li, Y. Wang. Biological image fusion using a NSCT based variable-weight method[J].Information Fusion,2011,12(2):85-92.
    [134] C. Ye, B. Wang, Q. Miao. Fusion algorithm of infrared and visible light images based onNSCT transform[J]. Systems Engineering and Electronics (In Chinese),2008,30(4):593-596.
    [135] Image fusion website [OL]. http://imagefusion.org/.
    [136] C.S. Xydeas, V.S. Petrovic. Objective pixel-level image fusion performance measure[C].Proceedings of SPIE,2000:89.
    [137] Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli. Image quality assessment: From errorvisibility to structural similarity[J]. IEEE Transactions on mage Processing,2004,13(4):600-612.
    [138] Y. Xiang, B. Zou, H. Li. Selective color transfer with multi-source images[J]. PatternRecognition Letters,2009,30(7):682-689.
    [139] L.F.M. Vieira, E.R. Nascimento, F.A. Fernandes, R.L. Carceroni, R.D. Vilela, A.A. Araujo.Fully automatic coloring of grayscale images[J]. Image and vision computing,2007,25(1):50-60.
    [140] K. Fukunaga, L. Hostetler. The estimation of the gradient of a density function, withapplications in pattern recognition[J]. IEEE Transactions on Information Theory,1975,21(1):32-40.
    [141] OTCBVS dataset [DB]. http://www.cse.ohio-state.edu/OTCBVS-BENCH/.
    [142] A. Madevska-Bogdanova, D. Nikolik, L. Curfs. Probabilistic SVM outputs for patternrecognition using analytical geometry[J]. Neurocomputing,2004,62:293-303.
    [143] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularizedlikelihood methods[J]. Advances in large margin classifiers,1999,10(3):61-74.
    [144] M.E. Tipping. Sparse Bayesian learning and the relevance vector machine[J]. The Journal ofMachine Learning Research,2001,1:211-244.
    [145] X. Wang, M. Ye, C. Duanmu. Classification of data from electronic nose using relevancevector machines[J]. Sensors and Actuators B: Chemical,2009,140(1):143-148.
    [146] B Yogameena, S Veera Lakshmi, M Archana, S.R. Abhaikumar. Human behaviorclassification using multi-class relevance vector machine[J]. Journal of Computer Science,2010,6(9):1021-1026.
    [147] T. Damoulas, Y. Ying, M.A. Girolami, C. Campbell. Inferring sparse kernel combinations andrelevance vectors: An application to subcellular localization of proteins[C]. Proceedings of7th International Conference on Machine Learning and Applications,2008:577-582.
    [148] I. Psorakis, T. Damoulas, M.A. Girolami. Multiclass relevance vector machines: Sparsity andaccuracy[J]. IEEE Transactions on Neural Networks,2010,21(10):1588-1598.
    [149] L. Wang, T. Tan, H.Z. Ning, W.M. Hu. Silhouette analysis-based gait recognition for humanidentification[J]. Ieee Transactions on Pattern Analysis and Machine Intelligence,2003,25(12):1505-1518.
    [150] S.D. Mowbray, M.S. Nixon. Automatic gait recognition via Fourier descriptors of deformableobjects[C]. Proceedings of Audio-and Video-Based Biometric Person Authentication,2003:566-573.
    [151] C.K.Peng, J.M.Hausdorff, A.L.Goldberger. Fractal mechanisms in neural control: Humanheartbeat and gait dynamics in health and disease. Self-Organized Biological Dynamics andNonlinear Control. Cambridge: Cambridge University Press:2000.
    [152] L. Wang, D. Suter. Informative shape representations for human action recognition[C].Proceedings of International Conference on Pattern Recognition,2006:1266-1269.
    [153] R. Rosales. Recognition of human action using moment-based feature[R]. Boston UniversityComputer Science Technical Report,1998.
    [154] S.T. Roweis, L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding[J].Science,2000,290(5500):2323-2326.
    [155] M.E. Tipping, A. Faul. Fast marginal likelihood maximisation for sparse Bayesian models[C].Proceedings of9th Interantional Workshop Artificial Intelligence and Statistics,2003.
    [156] Weizmann dataset [DB]. http://www.wisdom.weizmann.ac.il/~vision/.
    [157] H. Zhou, L. Wang, D. Suter. Human action recognition by feature-reduced Gaussian processclassification[J]. Pattern Recognition Letters,2009,30(12):1059-1066.
    [158]凌志刚,梁彦,潘泉,程咏梅,赵春晖.基于张量子空间学习的人行为识别方法[J].中国图象图形学报,2009,14(3):394-400.
    [159] N. Otsu. A threshold selection method from gray level histograms[J]. IEEE Transactions onSystems, Man, and Cybernetics,1979,9(1):62-66.
    [160] X.Y. Xu, S.Z. Xu, L.H. Jin, E.M. Song. Characteristic analysis of Otsu threshold and itsapplications[J]. Pattern Recognition Letters,2011,32(7):956-961.
    [161] B. Haider, M.R. Krause, A. Duque, Y. Yu, J. Touryan, J.A. Mazer, D.A. McCormick.Synaptic and network mechanisms of sparse and reliable visual cortical activity duringnonclassical receptive field stimulation[J]. Neuron,2010,65(1):107-121.
    [162] J.G. Daugman. Uncertainty relation for resolution in space, spatial frequency, and orientationoptimized by two-dimensional visual cortical filters[J]. Optical Society of America, Journal,A: Optics and Image Science,1985,2(7):1160-1169.
    [163] J.P. Jones, L.A. Palmer. An evaluation of the two-dimensional Gabor filter model of simplereceptive fields in cat striate cortex[J]. Journal of Neurophysiology,1987,58(6):1233-1258.
    [164] T. Serre, L. Wolf, T. Poggio. Object recognition with features inspired by visual cortex[C].2005:994-1000.
    [165] M.T. Ibrahim, Y. Wang, L. Guan, A.N. Venetsanopoulos. A filter bank fased approach forrotation invariant fingerprint recognition[J]. Journal of Signal Processing Systems,2011:1-14.
    [166] J.F. Khan, R.R. Adhami, S. Bhuiyan. A customized Gabor filter for unsupervised color imagesegmentation[J]. Image and vision computing,2009,27(4):489-501.
    [167] J. Han, K.K. Ma. Rotation-invariant and scale-invariant Gabor features for texture imageretrieval[J]. Image and vision computing,2007,25(9):1474-1481.
    [168] C.A. Perez, L.A. Cament, L.E. Castillo. Methodological improvement on local Gabor facerecognition based on feature selection and enhanced Borda count[J]. Pattern Recognition,2011,44(4):951-963.
    [169] V.E. Balas, I.M. Motoc, A. Barbulescu. Combined Haar-Hilbert and Log-Gabor Based IrisEncoders[C]. Computer Vision and Pattern Recognition,2012.
    [170] D. Tao, X. Li, X. Wu, S.J. Maybank. General tensor discriminant analysis and gabor featuresfor gait recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2007,29(10):1700-1715.
    [171] D.J. Field. Relations between the statistics of natural images and the response properties ofcortical[J]. Journal of the Optical Society of America A,1987,4(12):2379-2394.
    [172] Rong Guan'ao. Computer Image Processing[M]. Beijing: Tsinghua University Press, 2000. (in Chinese)
    [173] Jiang Daqin. Research on noise processing techniques for iris images[D]. Chongqing: Chongqing University, 2006. (in Chinese)
    [174] What Are Log-Gabor Filters and Why Are They Good?[OL]. http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/PhaseCongruency/Docs/convexpl.html
    [175] P. Kovesi. Invariant measures of image features from phase information[D]. University of Western Australia, 1996.
    [176] A.M. Proverbio, A. Zani. Visual selective attention to object features[M]. Amsterdam: Academic Press, 2003: 275-306.
    [177] Xiao Zhitao, Yu Ming, Li Qiang, Tang Hongmei, Guo Chengming. Performance analysis of the Log-Gabor wavelet and its application in phase congruency[J]. Journal of Tianjin University (Science and Technology), 2003, 36(4): 443-446. (in Chinese)
    [178] Hu Bufa, Wang Zhong. A new face recognition method based on Gabor wavelets and RBF neural networks[J]. Journal of Circuits and Systems, 2008, 13(1): 73-78. (in Chinese)
    [179] R.J. Nemati, M.Y. Javed. Fingerprint verification using filter-bank of Gabor and Log-Gabor filters[C]. 15th International Conference on Systems, Signals and Image Processing, 2008: 363-366.
    [180] L. Ning, X. De. 2D Log-Gabor wavelet based action recognition[J]. IEICE Transactions on Information and Systems, 2009, 92(11): 2275-2278.
    [181] S. Wold, K. Esbensen, P. Geladi. Principal component analysis[J]. Chemometrics and Intelligent Laboratory Systems, 1987, 2(1): 37-52.
    [182] Liu Yanfei. Research on feature extraction and threshold balancing methods for face verification[D]. Chongqing: Chongqing University, 2010. (in Chinese)
    [183] P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7): 711-720.
    [184] K. Etemad, R. Chellappa. Discriminant analysis for recognition of human face images[J]. Journal of the Optical Society of America A, 1997, 14(8): 1724-1733.
    [185] L.F. Chen, H.Y.M. Liao, M.T. Ko, J.C. Lin, G.J. Yu. A new LDA-based face recognition system which can solve the small sample size problem[J]. Pattern Recognition, 2000, 33(10): 1713-1726.
    [186] H. Cevikalp, M. Neamtu, M. Wilkes, A. Barkana. Discriminative common vectors for face recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(1): 4-13.
    [187] He Yunhui, Zhao Li, Zou Cairong. A small-sample face recognition method based on kernel discriminative common vectors[J]. Journal of Electronics & Information Technology, 2006, 28(12): 2296-2300. (in Chinese)
    [188] X.Y. Jing, Y.F. Yao, D. Zhang, J.Y. Yang, M. Li. Face and palmprint pixel level fusion and kernel DCV-RBF classifier for small sample biometric recognition[J]. Pattern Recognition, 2007, 40(11): 3209-3224.
    [189] Zhao Hailong, Mu Zhichun, Zhang Xia, Dun Wenjie. Ear recognition based on wavelet decomposition and discriminative common vectors[J]. Computer Engineering, 2009, 35(10): 27-29. (in Chinese)
    [190] J.F. Li, W.G. Gong. Application of thermal infrared imagery in human action recognition[J]. Nanotechnology and Computer Engineering, 2010, 121-122: 368-372.
    [191] L. Gorelick, M. Blank, E. Shechtman, M. Irani, R. Basri. Actions as space-time shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(12): 2247-2253.
    [192] Peng Lei. Design and application of a wireless acquisition system for human motion acceleration data[D]. Hefei: University of Science and Technology of China, 2009. (in Chinese)
    [193] D.T.G. Huynh. Human activity recognition with wearable sensors[D]. TU Darmstadt, 2008.
    [194] A.M. Khan, Y.K. Lee, S.Y. Lee, T.S. Kim. A triaxial accelerometer-based physical-activity recognition via augmented-signal features and a hierarchical recognizer[J]. IEEE Transactions on Information Technology in Biomedicine, 2010, 14(5): 1166-1172.
    [195] M. Li, V. Rozgic, G. Thatte, S. Lee, A. Emken, M. Annavaram, U. Mitra, D. Spruijt-Metz, S. Narayanan. Multimodal physical activity recognition by fusing temporal and cepstral information[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2010, 18(4): 369-380.
    [196] G. Baudat, F. Anouar. Generalized discriminant analysis using a kernel approach[J]. Neural Computation, 2000, 12(10): 2385-2404.
    [197] A.Y. Yang, R. Jafari, S.S. Sastry, R. Bajcsy. Distributed recognition of human actions using wearable motion sensor networks[J]. Journal of Ambient Intelligence and Smart Environments, 2009, 1(2): 103-115.
    [198] C.V.C. Bouten, K.T.M. Koekkoek, M. Verduin, R. Kodde, J.D. Janssen. A triaxial accelerometer and portable data processing unit for the assessment of daily physical activity[J]. IEEE Transactions on Biomedical Engineering, 1997, 44(3): 136-147.
    [199] J. Mantyjarvi, J. Himberg, T. Seppanen. Recognizing human motion with multiple acceleration sensors[C]. IEEE International Conference on Systems, Man, and Cybernetics, 2001: 747-752.
    [200] M.E. Tipping. The relevance vector machine[C]. Advances in Neural Information Processing Systems. MIT Press, 2000, 12: 652-658.
    [201] D.B. Rubin. Iteratively reweighted least squares[M]. Encyclopedia of Statistical Sciences, 1983, 4: 272-275.
    [202] R.O.C. Norman, J.M. Coxon. Principles of organic synthesis[M]. London: Chapman and Hall, 1978.
    [203] R. Lemoyne, C. Coroian, T. Mastroianni, W. Grundfest. Accelerometers for quantification of gait and movement disorders: a perspective review[J]. Journal of Mechanics in Medicine and Biology, 2008, 8(2): 137.
    [204] R. Lemoyne, C. Coroian, T. Mastroianni, W. Grundfest. Wireless accelerometer assessment of gait for quantified disparity of hemiparetic locomotion[J]. Journal of Mechanics in Medicine and Biology, 2009, 9(3): 329-343.
    [205] G.J. Briem, J.A. Benediktsson, J.R. Sveinsson. Multiple classifiers applied to multisource remote sensing data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2002, 40(10): 2291-2299.
    [206] S. Prasad, L.M. Bruce. Decision fusion with confidence-based weight assignment for hyperspectral target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(5): 1448-1456.
