Research on Object Tracking Technology in Intelligent Video Surveillance
Abstract
Intelligent Video Surveillance (IVS) systems can detect and handle abnormal events automatically. They have drawn wide attention from researchers in recent years because they meet the needs of "smart city" and "safe city" initiatives. As one of the core technologies in IVS, object tracking underpins many higher-level tasks (e.g. object recognition, object classification, and abnormal behavior detection) and is therefore of great research value. However, designing a robust, real-time tracker remains a challenging problem owing to dynamic changes in the surveillance environment (e.g. illumination change and camera motion) and in the tracked objects themselves (e.g. pose variation, scale change, and mutual occlusion).
     This dissertation focuses on object tracking technologies for intelligent video surveillance, covering online single-object tracking with a static single camera, single-object tracking with a moving single camera, multi-object tracking with a static single camera, and multi-object tracking with static multiple cameras. The main contributions are summarized as follows:
     First, online single-object tracking with a static single camera is studied. Sparse coding and linear subspace learning are reviewed and analyzed, and two online tracking algorithms are proposed to address the limitations of existing trackers: one based on Maximum Likelihood Estimation (MLE) and the L2 norm, and the other based on feature grouping. Compared with other state-of-the-art methods, the proposed trackers remain stable and robust under challenging conditions such as occlusion, rotation, scale change, and illumination variation.
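The sketch below illustrates the kind of L2-norm appearance scoring such a tracker can use. It is only a toy example under the assumption that candidate patches are scored by L2-regularized reconstruction against a small set of target templates; it is not the dissertation's actual model, and all names and values are illustrative.

```python
import numpy as np

def l2_reconstruction_score(y, D, lam=0.01):
    """Score a candidate patch y (d-dimensional vector) against a template
    dictionary D (d x k matrix of vectorized target templates) with an
    L2-regularized least-squares reconstruction (ridge regression)."""
    k = D.shape[1]
    # Closed-form coefficients: c = (D^T D + lam*I)^{-1} D^T y
    c = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)
    residual = y - D @ c
    # A smaller reconstruction error means the candidate looks more like the target
    return float(np.exp(-residual @ residual))

# Toy usage: three 32x32 grayscale templates and one noisy candidate patch
d, k = 32 * 32, 3
rng = np.random.default_rng(0)
D = rng.random((d, k))
candidate = D @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.standard_normal(d)
print(l2_reconstruction_score(candidate, D))
```

In a particle-filter tracker, a score of this kind can serve as the observation likelihood of each candidate state.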
     Second, single-object tracking with a camera mounted on a moving platform is studied. Feature points are first screened; the camera's global motion is then estimated by applying optical flow to the remaining feature points. To achieve robust tracking on unstable video, the proposed algorithm corrects the particle filter with the estimated global motion and adopts a block-based color histogram as the appearance model. Experiments demonstrate that the algorithm tracks moving objects robustly in challenging videos captured by moving cameras. Because it avoids the traditional two-stage pipeline of video stabilization followed by tracking, it also reduces computational complexity and improves processing speed.
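A minimal sketch of this idea is shown below, assuming OpenCV is used for feature detection and pyramidal Lucas-Kanade optical flow. The feature-screening step and the block-based color histogram likelihood described above are omitted, and the similarity-transform fit is just one convenient way to summarize the global motion.

```python
import cv2
import numpy as np

def estimate_global_motion(prev_gray, curr_gray):
    """Estimate the camera's global 2D motion between two frames by tracking
    sparse feature points with pyramidal Lucas-Kanade optical flow and fitting
    a similarity transform to the matched points."""
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=8)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
    good0 = pts0[status.flatten() == 1]
    good1 = pts1[status.flatten() == 1]
    # RANSAC discards correspondences that belong to the moving target
    M, _ = cv2.estimateAffinePartial2D(good0, good1, method=cv2.RANSAC)
    return M  # 2x3 matrix mapping previous-frame coordinates to current-frame coordinates

def compensate_particles(particles, M):
    """Shift particle-filter samples (N x 2 array of x, y positions) by the
    estimated global motion before the usual prediction step."""
    ones = np.ones((particles.shape[0], 1))
    return np.hstack([particles, ones]) @ M.T
```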
     Third, multi-object tracking with a static single camera is studied. The focus is on graph-cut theory and on how to cast the tracking problem as inference on a network graph. On this basis, a graph-cut-based multi-object tracking algorithm is proposed: the color and motion information of the pixels is combined to build an energy function, the corresponding graph is constructed, and the energy is minimized with the max-flow/min-cut algorithm, which yields the multi-object tracking result. Experiments show that the method is robust to occlusion and to changes in the number of objects.
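The toy example below shows how such an energy is minimized by an s-t minimum cut. It labels four pixels with hypothetical unary costs and a uniform smoothness term using networkx; the algorithm above instead derives these costs from pixel color and motion and uses an efficient max-flow solver.

```python
import networkx as nx

# Label four pixels as object (source side) or background (sink side) by
# minimizing unary costs plus a smoothness penalty with an s-t minimum cut.
unary_obj = {"p0": 0.2, "p1": 0.3, "p2": 2.0, "p3": 1.8}  # cost of labeling as object
unary_bg  = {"p0": 1.9, "p1": 1.7, "p2": 0.3, "p3": 0.2}  # cost of labeling as background
edges = [("p0", "p1"), ("p1", "p2"), ("p2", "p3")]         # neighborhood links
smoothness = 0.5

G = nx.DiGraph()
for p in unary_obj:
    # t-links: a pixel kept on the source side cuts its edge to the sink, and vice versa
    G.add_edge("src", p, capacity=unary_bg[p])
    G.add_edge(p, "sink", capacity=unary_obj[p])
for a, b in edges:
    # n-links: cutting between differently labeled neighbors pays the smoothness cost
    G.add_edge(a, b, capacity=smoothness)
    G.add_edge(b, a, capacity=smoothness)

cut_value, (src_side, sink_side) = nx.minimum_cut(G, "src", "sink")
object_pixels = src_side - {"src"}
print("energy:", cut_value, "object pixels:", sorted(object_pixels))
```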
     Finally, multi-object tracking with static multiple cameras is studied. Building on moving-object detection with the codebook model, two multi-layer-localization tracking algorithms are proposed. The first uses vanishing-point information to compute multi-layer homography matrices, localizes the objects on multiple parallel planes, and tracks them within the graph-cut framework. The second computes the view-to-view homographies from landmarks on different planes and performs multi-object tracking with a shortest-path optimization algorithm. Experiments show that both methods are robust to illumination change, occlusion, and complex object motion, and that the second achieves real-time performance.
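As a rough illustration of homography-based localization, the sketch below maps a detected foot point from one camera view onto a common ground plane using four landmark correspondences. The coordinates are invented, and the multi-layer (multi-plane) construction and the subsequent graph-cut or shortest-path association are not shown.

```python
import cv2
import numpy as np

# Four landmark correspondences between camera A's image plane and a reference
# ground plane (e.g. a top-down map), in pixels and metres; values are made up.
pts_cam  = np.float32([[120, 460], [510, 455], [140, 300], [495, 305]])
pts_plan = np.float32([[0, 0], [8, 0], [0, 6], [8, 6]])

H, _ = cv2.findHomography(pts_cam, pts_plan)

def locate_on_plane(foot_point_px, H):
    """Project a detected object's foot point from the camera image onto the
    common ground plane, so detections from several static cameras can be
    fused and associated in one coordinate frame."""
    p = np.float32([[foot_point_px]])       # shape (1, 1, 2) for perspectiveTransform
    return cv2.perspectiveTransform(p, H)[0, 0]

print(locate_on_plane((320, 440), H))       # ground-plane position in metres
```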
     In terms of robustness and real-time performance, the work described above enriches existing object tracking algorithms to a certain extent and offers a useful exploration of their application in related fields.
