Detection-Based Online Multiple Object Tracking
Abstract

Object tracking has long been an active research topic in computer vision; its main goal is to estimate the trajectories of moving objects in video scenes. In recent years, with the rapid development of object detection techniques, detection-based multiple object tracking has become a very important class of tracking problems, with numerous practical applications such as visual surveillance, traffic navigation, and content-based video retrieval. At the same time, factors such as abrupt object motion and complex interactions among objects make the problem extremely difficult and call for many novel and sophisticated models and algorithms, so it is also of considerable academic value.

This thesis focuses on how to track multiple objects online and simultaneously on top of a detector. This is an even more challenging problem, because only the current frame and the frames before it may be used to infer the most likely result. Through a systematic and in-depth analysis of the tracking problem, the thesis identifies the observation model and the tracking strategy as its two key components. Starting from these two components, it proposes a series of online multi-object tracking algorithms for several typical types of video scenes, such as surveillance videos, consumer videos, and sports videos, and successfully applies them to several mid- and high-level applications. The main contributions are as follows:

1. A two-stage multi-object tracking algorithm based on local tracklet filtering and global tracklet association is proposed. The algorithm designs a multi-lifespan background subtractor and a multi-view, multi-part human detector, and applies them in a local tracking stage that uses a particle filter with a selection step and in a global association stage within a temporal sliding window, obtaining good results when tracking occluded objects.

2. A multi-object tracking algorithm based on online discriminative learning is proposed. By learning discriminative interest-point and color-patch features online, the tracking system gains a stronger ability to handle more severe occlusions and to distinguish between any two objects, which further improves the tracking of occluded objects.

3. A multi-player tracking method based on progressive observation modeling and dual-mode two-way Bayesian inference is proposed. The progressive observation modeling process solves an originally complex problem in a divide-and-conquer manner, collecting stable and reliable observations step by step; these observations feed a unified single- and multi-object tracking procedure that uses forward filtering and backward smoothing, and the method handles multi-player tracking under abrupt motion and complex interactions well.

4. Building on object tracking results and techniques, two new algorithms are proposed to address human segmentation and crowd counting, respectively. Experiments show that object tracking clearly improves the results on these two mid- and high-level vision analysis problems.
Object tracking, which aims to estimate object trajectories in video scenes, has long been an active research topic in computer vision. Recently, with the fast development of object detection techniques, detection-based multiple object tracking has become a very important class of tracking problems and has many practical applications, including visual surveillance, traffic monitoring, and content-based video retrieval. It is also a quite difficult problem, due to abrupt motion of objects and complex interactions among them, and it requires designing many novel and sophisticated models and algorithms; thus it is also of great importance in academic research.
This thesis focuses on how to track multiple objects online and simultaneously based on detection, which is an even more challenging problem because only the image information up to the current processing frame can be used to infer the most probable results. Through a systematic, in-depth analysis of the tracking problem, the observation model and the tracking strategy are identified as its two key components. From this perspective, several new online multiple object tracking algorithms for typical kinds of video scenes, including surveillance videos, consumer videos, and sports videos, are proposed and successfully applied to some middle- and high-level applications. The main work of this thesis includes:
Firstly, a two-stage online multiple object tracking algorithm based on local tracklet filtering and global tracklet association is proposed. It designs a multiple-lifespan background subtractor and a multi-part, multi-view human detector, both of which are adapted into a local tracking procedure using a particle filter with selection and a global association tracking procedure within a temporal sliding window. Good results on tracking occluded objects are obtained with this method.
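For concreteness, the sketch below shows one way the global tracklet association step within a temporal sliding window can be posed as a bipartite assignment problem. The tracklet representation (endpoint positions plus a normalized color histogram), the cost weights, and the gating threshold are illustrative assumptions rather than the thesis's actual design.

```python
# A minimal sketch of global tracklet association inside a temporal sliding
# window, posed as min-cost bipartite assignment. Tracklet fields, cost
# weights, and the gate value are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracklets(ended, started, w_pos=1.0, w_app=10.0, gate=50.0):
    """Link tracklets that ended inside the window to tracklets that started
    later; unlinked tracklets are left for the next window."""
    cost = np.zeros((len(ended), len(started)))
    for i, a in enumerate(ended):
        for j, b in enumerate(started):
            pos_dist = np.linalg.norm(a["last_pos"] - b["first_pos"])
            # Histogram intersection distance, assuming normalized histograms.
            app_dist = 1.0 - np.minimum(a["hist"], b["hist"]).sum()
            cost[i, j] = w_pos * pos_dist + w_app * app_dist
    rows, cols = linear_sum_assignment(cost)
    # Keep only links whose cost passes the gate; the rest stay unlinked.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
```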
Secondly, a multiple object tracking algorithm based on online discriminative learning is proposed. Through discriminative features learned online from interest points and color patches, the ability of the tracking system to deal with more serious occlusions and to distinguish between every two different objects is enhanced, which further improves the tracking results for occluded objects.
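The sketch below illustrates one simple form that such online discriminative learning could take: an online logistic regressor, updated with appearance features collected in unambiguous frames and then used to tell two targets apart when they occlude each other. The feature vector and the learner are stand-ins for illustration; they do not reproduce the thesis's interest-point and color-patch models.

```python
# A minimal sketch of learning, online, to discriminate between two tracked
# targets. A plain online logistic regressor stands in for the thesis's
# interest-point / color-patch learner; the feature design is assumed.
import numpy as np

class PairwiseAppearanceModel:
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)   # weight vector over appearance features
        self.b = 0.0
        self.lr = lr

    def _prob_a(self, feat):
        return 1.0 / (1.0 + np.exp(-(self.w @ feat + self.b)))

    def update(self, feat, is_target_a):
        """One stochastic gradient step on a sample taken from a frame where
        the two targets are clearly separated."""
        y = 1.0 if is_target_a else 0.0
        err = y - self._prob_a(feat)
        self.w += self.lr * err * feat
        self.b += self.lr * err

    def predict_is_a(self, feat):
        """During an occlusion, decide which target a candidate patch belongs to."""
        return self._prob_a(feat) > 0.5
```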
Thirdly, an approach based on progressive observation modeling and dual-mode two-way Bayesian inference is proposed for multiple player tracking in sports videos. The progressive observation modeling process divides an initially difficult problem into solvable sub-problems and tackles them step by step to collect robust and reliable observations sequentially. These observations feed a unified single-object and multi-object tracking procedure based on forward filtering and backward smoothing. The whole algorithm provides a very good solution to the problem of tracking multiple players with abrupt motion and complex interactions.
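To make the forward-filtering / backward-smoothing pattern concrete, the sketch below pairs a linear-Gaussian Kalman filter with a Rauch-Tung-Striebel smoother. This is only a simplified stand-in for the dual-mode two-way Bayesian inference described above, which is not linear-Gaussian; the state and measurement models here are assumptions supplied by the caller.

```python
# A minimal linear-Gaussian sketch of the forward-filter / backward-smoother
# pattern (Kalman filter + Rauch-Tung-Striebel smoother). The models F, H, Q,
# R and the initial state x0, P0 are assumptions for illustration.
import numpy as np

def forward_backward(zs, F, H, Q, R, x0, P0):
    xs, Ps, xps, Pps = [], [], [], []
    x, P = x0, P0
    for z in zs:                                  # forward filtering pass
        xp, Pp = F @ x, F @ P @ F.T + Q           # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (z - H @ xp)                 # update with measurement z
        P = Pp - K @ H @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    xs_s, Ps_s = [xs[-1]], [Ps[-1]]
    for k in range(len(zs) - 2, -1, -1):          # backward smoothing pass
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs_s.insert(0, xs[k] + C @ (xs_s[0] - xps[k + 1]))
        Ps_s.insert(0, Ps[k] + C @ (Ps_s[0] - Pps[k + 1]) @ C.T)
    return xs_s, Ps_s                             # smoothed means and covariances
```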
Last but not least, two new algorithms, which respectively address the human segmentation and crowd counting problems, are proposed based on object tracking results and techniques. The experiments demonstrate that the results on these two middle- and high-level vision analysis problems can be significantly improved with the help of object tracking.
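As one hypothetical example of how tracking output can support crowd counting, the sketch below counts each tracked identity once when its trajectory crosses a virtual counting line. The trajectory format and the line are assumptions for illustration and are not the counting algorithm developed in the thesis.

```python
# A hypothetical tracking-assisted counter: each track identity is counted
# once when its trajectory crosses a horizontal virtual line at y = line_y.
def count_line_crossings(tracks, line_y):
    """tracks maps a track id to its trajectory, a list of (x, y) positions."""
    counted = set()
    for tid, traj in tracks.items():
        for (_, y0), (_, y1) in zip(traj, traj[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:   # the segment crosses the line
                counted.add(tid)
                break
    return len(counted)
```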