Research and Application of Visual Object Tracking Algorithms
Abstract
Object tracking is a key problem in intelligent video analysis with wide applications in computer vision, such as intelligent surveillance, human-computer interaction, robotics, and multimedia. Despite extensive research, the complexity of the real world (background clutter, appearance change, low image resolution, frame cuts, and so on) means that long-term, real-time, robust visual tracking in unconstrained environments remains a highly challenging task. Based on an analysis of the state of the art in object tracking, and exploiting the strong temporal ordering and spatio-temporal structure of the tracking process, this thesis applies the theory and methods of image signal analysis, pattern recognition, and online machine learning to three aspects, namely single-target tracking, multi-target tracking, and their applications, and proposes several real-time, robust object tracking algorithms. The main work is as follows:
     (1) To improve the robustness of tracking algorithms that rely on random ferns for detection, we propose an object tracking algorithm based on enhanced random ferns. During learning, the algorithm clusters the training samples in each fern leaf online, automatically discovering the latent distribution of feature vectors in the feature space, referred to as hidden classes. During evaluation, these hidden classes serve as the data points of a kernel function for kernel density estimation, from which the class probability of a test sample is computed. Experiments show that the algorithm achieves real-time object tracking while improving tracking robustness.
     (2) Online-learning trackers face a dilemma: how to adapt to target variation while keeping the learning accurate. To address it, we propose an object tracking algorithm based on active scene learning. The algorithm builds structural constraints from object and background information and uses them to supervise the learning of the online model and the detector, improving learning accuracy. Combined with an optical-flow-based method for extracting target motion regions, it can also track fast-moving targets. Experiments show that the algorithm improves both the tracker's adaptability to target variation and its robustness.
     (3) Hough-transform-based trackers struggle to run in real time. We therefore propose an object tracking algorithm based on Hough ferns. Within a tracking-by-detection framework, it uses random ferns as the basic detection structure, takes the object's local appearances as training data, computes and stores in each leaf node the probability of Hough-space votes belonging to the target, and updates these online so that they follow changes in object appearance. Experiments show that the algorithm achieves real-time object tracking without sacrificing robustness.
     (4) To strengthen the detector's object recognition ability during tracking, and hence the tracking robustness, we propose an object tracking algorithm that learns multiple detectors online. It uses the target's global and local appearances, together with the synchronous objects discovered by scene learning, as training data, so that each of these object types can be detected during tracking. The target location is then determined by computing the configuration probabilities of these detections with respect to the target. Experiments show that the algorithm copes with more complex tracking environments and improves robustness while remaining real-time.
     (5) To reduce the computational complexity of multi-target tracking and achieve real-time performance, we propose a multi-target tracking algorithm based on adaptive motion-correlation collaboration. It establishes pairwise correlations between targets from their motion information and predicts target states through a correlation-based collaborative state estimation model. Experiments show that even with only a basic short-term tracker, the collaboration model handles target occlusion effectively and achieves real-time, robust multi-target tracking.
     (6) For applications of object tracking, we study techniques tailored to concrete scenarios. In medical image processing, we propose a hierarchical-detection method for locating the anterior cruciate ligament (ACL) of the human knee, addressing the problem of detecting and locating the ACL region in images to support research on ACL reconstruction surgery. The method splits localization into global and local detection, selects image features according to the type of sample image, and builds the corresponding global and local detectors with random forests: it first determines the position of the overall ACL tissue in the knee joint and then identifies the specific regions belonging to the ACL, yielding accurate localization. Experiments on real knee MRI images show that the method detects and recognizes the ACL reliably and locates it accurately.
This thesis mainly focuses on the problem of visual object tracking, a key problem of intelligent video analysis demanded by many applications in computer vision, such as intelligent surveillance, human-computer interfaces, robotics, and multimedia. Robust long-term visual tracking in unconstrained environments remains very challenging owing to real-world complications such as background clutter, appearance change, low image quality, and frame cuts. Building on an analysis of the state of the art in object tracking, which exhibits strong spatio-temporal relevance, and on the theory and methods of image signal processing, pattern recognition, and online machine learning, we propose several robust real-time object tracking algorithms covering single-target and multi-target tracking, and apply them to address further problems in computer vision. The main contributions of this thesis are as follows:
     (1) In order to improve the robustness of tracking algorithms that use random ferns for detection, we propose an enhanced random fern detector integrated into our tracking framework. Its main idea is to exploit the latent distribution of feature vectors, here called hidden classes, by online clustering of the feature space in each fern leaf node. Kernel density estimation is then used to evaluate unlabeled samples, with the hidden classes serving as the data points of the kernel function. Experimental results demonstrate the effectiveness and improved robustness of our approach.
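The leaf-level mechanism described above (online clustering into hidden classes, then kernel density estimation over the cluster centers) can be sketched as follows. The class name, merge threshold, and Gaussian bandwidth are illustrative assumptions, not values from the thesis:

```python
import math

class EnhancedFernLeaf:
    """One fern leaf: clusters stored feature vectors into 'hidden classes'
    online and scores test samples by kernel density estimation over the
    cluster centers. A minimal sketch; thresholds are hypothetical."""

    def __init__(self, merge_dist=1.5, bandwidth=1.0):
        self.centers = []          # list of (center, count, positive_count)
        self.merge_dist = merge_dist
        self.bandwidth = bandwidth

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def learn(self, feat, is_positive):
        # Online clustering: merge into the nearest center or open a new one.
        for i, (c, n, p) in enumerate(self.centers):
            if self._dist(feat, c) < self.merge_dist:
                new_c = [(ci * n + fi) / (n + 1) for ci, fi in zip(c, feat)]
                self.centers[i] = (new_c, n + 1, p + (1 if is_positive else 0))
                return
        self.centers.append((list(feat), 1, 1 if is_positive else 0))

    def posterior(self, feat):
        # KDE with a Gaussian kernel; hidden-class centers are the data points.
        num = den = 0.0
        for c, n, p in self.centers:
            k = math.exp(-self._dist(feat, c) ** 2 / (2 * self.bandwidth ** 2))
            num += k * (p / n)     # positive fraction of this hidden class
            den += k
        return num / den if den > 0 else 0.5
```

In this sketch, samples near an existing center refine it, so each leaf keeps a compact summary of its feature-space distribution instead of raw counts alone.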
     (2) To address the problem of improving adaptability to target variation while ensuring the accuracy of online learning in a tracking system, we propose an active context learning method for object tracking. The approach automatically exploits both target and background information on the fly and builds structural constraints through active context learning, enhancing adaptability to target variation and tracking stability. An optical-flow-based motion-region extraction method is integrated into the context learning framework to handle fast target motion and abrupt camera motion. Experimental results demonstrate the improved performance of our tracker.
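The motion-region extraction step can be sketched as a threshold on optical-flow magnitude followed by a bounding box over the moving pixels. The flow field is assumed to be precomputed (e.g. by a Lucas-Kanade style method); the function name and threshold are illustrative:

```python
def motion_region(flow, mag_thresh=2.0):
    """Return the bounding box (xmin, ymin, xmax, ymax) of pixels whose
    optical-flow magnitude exceeds mag_thresh, or None if nothing moves.
    `flow` is a 2-D grid of (dx, dy) displacement vectors."""
    moving = [(x, y)
              for y, row in enumerate(flow)
              for x, (dx, dy) in enumerate(row)
              if (dx * dx + dy * dy) ** 0.5 > mag_thresh]
    if not moving:
        return None
    xs = [x for x, _ in moving]
    ys = [y for _, y in moving]
    return (min(xs), min(ys), max(xs), max(ys))
```

Such a region restricts where the detector must search, which is what makes fast-moving targets recoverable without scanning the whole frame.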
     (3) Existing Hough-based tracking systems have not achieved real-time performance. To deal with this problem, we propose a Hough-fern-based method for real-time object tracking. Within a tracking-by-detection framework, the Hough ferns, which are built on random ferns, sample the local appearances of the object as the training set and compute and store the Hough votes in each leaf node. Both the Hough ferns and the object model are learned online to adapt to object variation. Experimental results validate the effectiveness and robustness of our tracker, which runs in real time.
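A minimal sketch of the leaf-level voting: each leaf stores offsets from local patches to the object center plus a foreground probability, and detection accumulates these votes in Hough space. All names are illustrative, and the online-update details of the thesis are omitted:

```python
from collections import defaultdict

class HoughFernLeaf:
    """One leaf of a Hough fern: offsets to the object center for the
    foreground patches that landed here, plus foreground statistics."""
    def __init__(self):
        self.pos = 0
        self.neg = 0
        self.offsets = []          # (dx, dy) from patch to object center

    def learn(self, is_foreground, offset=None):
        if is_foreground:
            self.pos += 1
            self.offsets.append(offset)
        else:
            self.neg += 1

    def prob_fg(self):
        total = self.pos + self.neg
        return self.pos / total if total else 0.0

def hough_vote(patch_positions, leaves):
    """Accumulate center votes from (patch position, leaf) pairs and
    return the Hough-space cell with the strongest support."""
    acc = defaultdict(float)
    for (px, py), leaf in zip(patch_positions, leaves):
        w = leaf.prob_fg()
        if w == 0 or not leaf.offsets:
            continue
        share = w / len(leaf.offsets)
        for (ox, oy) in leaf.offsets:
            acc[(px + ox, py + oy)] += share
    return max(acc, key=acc.get) if acc else None
```

Because the fern lookup is a bit-test per node, the per-patch cost stays constant, which is what makes the Hough-voting scheme compatible with real-time tracking.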
     (4) In order to improve the object recognition capability of the detector, and thereby the robustness of the tracking system, we propose a method that learns multiple detectors online for object tracking. The method uses random ferns as the basic detector. The global and local appearances of the target, together with the connected objects discovered by context learning, are used simultaneously as training data to build and update the object detectors online, so that objects of different classes can be detected independently. Since each detection corresponds to a different object class, the detections are fused as measurements, and the probabilities of configuration hypotheses relating the measurements to the target are computed to find the target location. Experimental results validate the effectiveness and robustness of our approach and demonstrate better tracking performance than several state-of-the-art methods.
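The fusion step can be sketched as scoring candidate target locations against several detections, each of which carries the spatial offset it expects between itself and the target. The Gaussian configuration model and all parameters here are illustrative assumptions:

```python
import math

def fuse_detections(detections, candidates, sigma=2.0):
    """Pick the candidate target location best supported by all detections.
    Each detection is (expected_offset, detected_position, confidence);
    expected_offset is where the target should lie relative to it.
    A hedged sketch of the configuration-probability idea."""
    def config_prob(cand):
        p = 1.0
        for (dx_exp, dy_exp), (x, y), conf in detections:
            ex = (x + dx_exp) - cand[0]   # configuration error in x
            ey = (y + dy_exp) - cand[1]   # configuration error in y
            p *= conf * math.exp(-(ex * ex + ey * ey) / (2 * sigma ** 2))
        return p
    return max(candidates, key=config_prob)
```

A whole-object detection votes with offset (0, 0), while part and connected-object detections vote through their learned relative positions, so the candidate consistent with all of them wins.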
     (5) To reduce computational complexity while achieving real-time multi-target tracking, we propose a collaboration model in which the acceleration difference between two targets determines their motion correlation through a two-dimensional Gaussian function. With this model, the location of an occluded target is estimated from the motion information of the other targets. The proposed approach is computationally efficient and robust, as the experimental results confirm.
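The two ingredients above can be sketched directly: a 2-D Gaussian over the acceleration difference gives the correlation, and an occluded target is carried forward using a correlated target's motion. The function names and the bandwidth are illustrative:

```python
import math

def motion_correlation(acc_a, acc_b, sigma=1.0):
    """Correlation between two targets from their 2-D acceleration
    difference via a Gaussian; sigma is an assumed bandwidth."""
    dx = acc_a[0] - acc_b[0]
    dy = acc_a[1] - acc_b[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))

def predict_occluded(last_pos, helper_velocity):
    """While a target is occluded, advance its last known position with
    the velocity of its most correlated (helper) target."""
    return (last_pos[0] + helper_velocity[0],
            last_pos[1] + helper_velocity[1])
```

Targets moving together score near 1 and can stand in for each other during occlusion; unrelated targets score near 0 and are ignored.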
     (6) For applications of object tracking, the proposed methods can be applied to the corresponding scenarios. In particular, in order to detect and locate the anterior cruciate ligament (ACL) of the human knee in medical images and to support the study of ACL reconstruction surgery, we propose a hierarchical-detection-based method to locate the ACL. The localization task is performed as successive global and local detections: features are selected according to the type of image sample, and the corresponding global and local detectors are built with random forests to first find the entire ACL region and then recognize its precise area. Experimental results on real MRI images validate the effectiveness and accuracy of the method.
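The global-then-local pipeline can be sketched as below. The two detector callables stand in for the thesis's trained random forests; everything about their signatures is an assumption made for illustration:

```python
def hierarchical_locate(image, global_detect, local_detect):
    """Two-stage localization: a global detector returns a coarse region
    (x0, y0, x1, y1) for the structure, then a local detector classifies
    each pixel inside that region. Returns (region, positive_pixels)."""
    region = global_detect(image)
    if region is None:
        return None, []
    x0, y0, x1, y1 = region
    hits = [(x, y)
            for y in range(y0, y1 + 1)
            for x in range(x0, x1 + 1)
            if local_detect(image, x, y)]
    return region, hits
```

Restricting the expensive per-pixel classifier to the coarse region is what keeps the two-stage scheme both fast and accurate.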
