Research on Key Algorithms for Multi-Camera Relay Target Tracking
Abstract
Driven by the huge demand of the security market, video surveillance technology is developing rapidly toward digitization, networking, and intelligence. Traditional small-scene surveillance with a single camera is being replaced by wide-area intelligent video surveillance composed of multiple agents (cameras), which first requires that multiple cameras, rather than a single camera with its limited field of view, relay or cooperate to track a moving target continuously. As a fundamental core technology of intelligent video processing systems, multi-camera relay target tracking is an important research topic in computer vision, with strong theoretical significance and practical value.
     Although research on multi-camera relay target tracking has made considerable progress in recent years, it is still at an exploratory stage overall, and many difficulties restrict its development. First, continuous tracking across multiple cameras must be built on accurate tracking within a single camera, where many problems remain unresolved: how to adapt to external changes in the target's environment (illumination changes, camera motion, noise, background clutter, etc.); how to select image features that adapt to internal changes of the target (pose changes, out-of-plane rotation, non-rigid and articulated motion, etc.); partial or full occlusion; tracking initialization; the trade-off between real-time performance and accuracy; and the trade-off between the adaptability and robustness of the target model. Second, the use of multiple cameras introduces many new theoretical and technical problems. The key issues in large-scene multi-camera relay tracking currently include the initial detection and localization of the target of interest, the selection of features that represent the target stably and effectively, online learning of the target appearance model within one camera and inheritance of the learned model across cameras, and target handover between cameras. Taking the continuous tracking of a specific moving target relayed across multiple cameras as the research background, this dissertation therefore studies a series of key algorithms addressing these problems, including initial localization of a specific target, stable single-camera tracking based on feature fusion, target tracking with feature learning and feature inheritance, and a multi-camera handover method based on spatio-temporal progressive matching.
     The main research contents and results of this dissertation are as follows:
     1. For the little-studied problem of automatically capturing and localizing a specific target from a feature description alone, and taking the common pedestrian target as an example, an automatic initialization method is studied for cameras with variable focal length and field of view, given only a simple feature description of the target. First, an adaptive block-based color model of the target is generated from the given description; a three-level cascade sliding-window detector (aspect ratio, variance, and feature model) then detects the target; finally, detection is fused with particle-filter state estimation to localize the target automatically. Experimental results show that, when the target's color and texture are relatively simple, the method achieves good automatic initialization from a descriptive color feature alone.
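     As a rough illustration of the three-level cascade described above (not the exact implementation in the thesis), the following Python sketch filters sliding windows by aspect ratio and intensity variance before matching a color model; a single global color histogram stands in here for the adaptive block color model, and all function names and thresholds are illustrative.

    import numpy as np

    def color_hist(patch, bins=16):
        # Per-channel color histogram of an image patch, L1-normalized.
        h = np.concatenate([np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
                            for c in range(patch.shape[-1])]).astype(float)
        return h / (h.sum() + 1e-12)

    def bhattacharyya(p, q):
        # Bhattacharyya coefficient between two normalized histograms.
        return float(np.sqrt(p * q).sum())

    def cascade_detect(image, model_hist, win=(128, 48), stride=8,
                       ar_range=(2.0, 3.5), var_min=100.0, sim_min=0.7):
        # Three-level cascade over sliding windows:
        #   level 1 -- aspect ratio of the candidate window,
        #   level 2 -- intensity variance (rejects flat background regions),
        #   level 3 -- similarity to the color model built from the target description.
        H, W = image.shape[:2]
        wh, ww = win
        if not (ar_range[0] <= wh / ww <= ar_range[1]):
            return []
        hits = []
        for y in range(0, H - wh, stride):
            for x in range(0, W - ww, stride):
                patch = image[y:y + wh, x:x + ww]
                if patch.mean(axis=-1).var() < var_min:
                    continue
                sim = bhattacharyya(color_hist(patch), model_hist)
                if sim >= sim_min:
                    hits.append((x, y, ww, wh, sim))
        return hits

    The surviving detections would then seed the particle-filter state estimation for automatic initialization.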
     2. Accurate and stable tracking of a specific target across multiple cameras must start from accurate and fast tracking within a single camera, and multi-cue fusion is an effective way to achieve this. For single-camera tracking, a method is therefore studied that adaptively partitions the target into sub-blocks according to its color distribution and uses the sub-blocks as multiple cues in particle-filter tracking. The adaptive partitioning determines the number of sub-blocks from the color distribution of the target, which improves the adaptability of the initial target description. During particle-filter tracking, the weight of each sub-block is adjusted dynamically according to its reliability and the spatial distribution of the particles, and sub-blocks are split and merged as needed, which improves robustness to pose changes and occlusion. Finally, automatic selection of the adaptive partitioning threshold is also explored.
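     The sketch below illustrates, under simplifying assumptions, how per-sub-block similarities might be fused into one particle weight and how sub-block weights might be re-estimated from their reliability; the weighting formulas and parameters are illustrative rather than those derived in the thesis, and sub-block splitting and merging are omitted.

    import numpy as np

    def particle_weight(sub_sims, block_weights, sigma=0.1):
        # Fuse per-sub-block similarities (e.g. Bhattacharyya coefficients between the
        # candidate's and the model's sub-block histograms) into one particle weight.
        sims = np.asarray(sub_sims, dtype=float)
        w = np.asarray(block_weights, dtype=float)
        w = w / w.sum()
        dist = 1.0 - sims                              # per-block distance in [0, 1]
        return float(np.exp(-np.sum(w * dist ** 2) / (2 * sigma ** 2)))

    def update_block_weights(block_weights, block_errors, lr=0.2):
        # Shift weight toward sub-blocks that matched the estimated target state well,
        # so unreliable (e.g. occluded) blocks contribute less in the next frame.
        reliab = np.exp(-np.asarray(block_errors, dtype=float))
        reliab = reliab / reliab.sum()
        new = (1 - lr) * np.asarray(block_weights, dtype=float) + lr * reliab
        return new / new.sum()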
     3. For continuous tracking both within a single camera and across cameras, it is desirable to learn and inherit stable features of the target. Within one camera, a stable appearance model learned online can adapt to changes of the target and the scene and supports long-term stable tracking. Across cameras, the stable, robust target model learned while tracking in one camera can be inherited by the next camera, which can then localize and track the target quickly without repeating the complex learning process. To this end, building on popular tracking methods with online-learned appearance models, a tracking method with feature learning and feature inheritance is proposed. Feature learning is realized by an online Weighted Multiple Instance Learning Boosting (WMIL) algorithm; feature inheritance is realized by evaluating the stability and discriminative power of the features during tracking and preserving the better ones; and the motion model is realized by a particle filter. The discriminative model obtained through feature learning and inheritance provides a more natural and effective fitness measure for the particles, while the particle-filter motion model collects positive and negative samples for online weighted multiple instance learning more quickly and effectively. Their combination improves the efficiency and robustness of the tracker and lays the foundation for subsequent target handover and inheritance of the learned target model between cameras.
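     The following sketch illustrates only the feature-inheritance idea: each weak feature carries a running score of its discriminative power, and the best-scoring features are handed to the next camera. The score used here (a simple separation measure between positive and negative responses, updated by exponential smoothing) is an assumption for illustration and is not the WMIL selection criterion itself.

    import numpy as np

    def update_feature_scores(features, pos_resp, neg_resp, lr=0.1):
        # Online update of each feature's discriminative score.
        # pos_resp / neg_resp: responses of every feature on the positive / negative
        # samples collected around the particle-filter estimate,
        # arrays of shape [n_features, n_samples].
        sep = np.abs(pos_resp.mean(axis=1) - neg_resp.mean(axis=1)) / (
            pos_resp.std(axis=1) + neg_resp.std(axis=1) + 1e-6)
        for f, s in zip(features, sep):
            f["score"] = (1 - lr) * f["score"] + lr * float(s)

    def inherit_features(features, keep_ratio=0.3):
        # Keep only the most stable, most discriminative features learned in the
        # current camera; the next camera initializes its classifier from this subset
        # instead of re-learning the target appearance from scratch.
        ranked = sorted(features, key=lambda f: f["score"], reverse=True)
        return ranked[:max(1, int(len(ranked) * keep_ratio))]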
     4. In relay target tracking with a multi-camera surveillance system, target handover, i.e., consistent labeling of the target, is a key problem that must be solved. For cameras with non-overlapping fields of view, a handover method based on inheritance of the learned target model and spatio-temporal progressive matching is proposed. First, an environment map is specified manually, and the spatio-temporal constraints between cameras, including entry/exit zones with their spatial transition probabilities and transit-time probabilities, are obtained by offline learning. These constraints are then used to sample particles progressively and determine the handover moment; particle weights are computed with the target model inherited from the previous camera, which fuses bottom-up and top-down cues; and the number and weights of the particles are adjusted according to the weights of the particle sets associated with the candidate entry zones, finally achieving accurate handover of the tracked target.
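     A minimal sketch of the particle weighting at handover time is given below, assuming a Gaussian model of the transit time between an exit zone and an entry zone and a histogram-based appearance similarity against the inherited target model; the data structures and the Gaussian assumption are illustrative, not the thesis's exact formulation.

    import numpy as np

    def handover_weight(exit_zone, entry_zone, dt, patch_hist, inherited_hist,
                        trans_prob, time_mu, time_sigma):
        # Weight of one particle sampled in a candidate entry zone of the next camera:
        #   spatial transition probability x transit-time likelihood x appearance similarity.
        # trans_prob[(exit, entry)], time_mu[...] and time_sigma[...] come from the
        # offline-learned spatio-temporal constraints between cameras.
        p_space = trans_prob.get((exit_zone, entry_zone), 0.0)
        mu = time_mu.get((exit_zone, entry_zone), 0.0)
        sig = time_sigma.get((exit_zone, entry_zone), 1.0)
        p_time = np.exp(-0.5 * ((dt - mu) / sig) ** 2)           # unnormalized Gaussian
        p_app = float(np.sqrt(np.asarray(patch_hist) *
                              np.asarray(inherited_hist)).sum()) # Bhattacharyya coefficient
        return p_space * p_time * p_app

    Particle sets corresponding to different entry zones would then be re-weighted and resampled according to these scores to decide where and when the target reappears.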
     The work in this dissertation is a useful attempt at the key algorithms involved in multi-camera relay target tracking, especially the continuous tracking of a particular target of interest across multiple cameras. It focuses on improving the robustness and speed of single-camera tracking and on providing a new approach to relay tracking of targets across multiple cameras.
