Research on Target Detection, Tracking and Pose Determination for Shooting-Range Images
Abstract
Optical measurement is an important component of range measurement and control technology. By interpreting the images captured by optical measurement equipment, the exterior ballistic trajectory and attitude parameters of a flying target can be obtained; these parameters are important evidence for weapon test evaluation and failure analysis. Against the background of automatic interpretation of shooting-range optical measurement images, this dissertation studies automatic detection and tracking of moving targets and three-dimensional (3-D) pose measurement, and applies these techniques in a shooting-range image interpretation system, improving the reliability and efficiency of image interpretation. The main results are as follows:
     1. Since most range targets are known in advance, two target detection methods trained from a single sample are proposed: ①a detection method based on steering-kernel morphological filtering, which exploits the stability of the image's steering-kernel features and adapts well to detection against the complex backgrounds of range images; ②a detection method based on shape entropy difference, in which a shape-entropy-difference detection operator is constructed from the shape of the target in the sample image and is effective for automatic detection in multi-target range images. Both methods achieve robust detection while extracting features from only a single target sample (an illustrative sketch of entropy-difference detection follows the abstract).
     2. For stable target tracking in range image sequences, three tracking methods are proposed: ①a blob tracking method based on a gradient LoG (Laplacian of Gaussian) operator, which offers high detection accuracy, is robust to changes in illumination, and stably tracks small blob targets in range images (see the tracking sketch after the abstract); ②a drift correction method based on feature-point matching, which can be combined with various tracking algorithms to check the tracking result for drift and correct it when necessary; ③a point-description-based multi-target tracking and matching method under multi-view constraints, which incorporates the multi-view constraints into the analysis of 2-D target trajectories from a single station and thereby resolves the matching of multiple targets across image sequences from multiple stations.
     3. For extracting the axis of a small projectile target in range images, an axis extraction algorithm based on iterative moments is proposed: the region over which the moments are computed is refined iteratively, which suppresses background interference as far as possible and yields high-precision axis extraction (a sketch of the iteration follows the abstract). In addition, the error sources of the axis plane-plane intersection method are analysed quantitatively and an error model is given.
     4. Combining the imaging model of the optical measurement equipment with the structure of the target itself, two methods for computing 3-D pose parameters are proposed: ①a method that solves the roll angle from off-axis structural lines, which conveniently measures the roll angle of common targets such as aircraft and cruise missiles; ②a single-station pose determination method based on the target's structural proportions: under approximately parallel projection the pose can be computed directly, without the target's position or the camera focal length, which meets the pose determination needs of single-station optical equipment at the range (the projection model is written out after the abstract).
     5. To meet the needs of digital image interpretation at shooting ranges, an overall design for a general-purpose digital image interpretation system is presented, and several key technologies needed to build it are analysed and implemented: infrared image enhancement, analysis of target interpretation points across images from different sensors, parallel interpretation of multi-station images, and the application of the detection, tracking and pose determination methods above within the interpretation system.
     Taking this work as one of the key technologies of optical image interpretation and combining it with other post-processing techniques for range images, a general-purpose digital image interpretation system for shooting ranges has been designed and implemented; it has been applied at the ranges of the Navy, the Air Force and the General Armament Department.
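The shape-entropy-difference operator in item 1 is specific to the dissertation; the following is only a minimal sketch of the general idea it builds on, namely that a target-sized window whose local gray-level entropy differs from that of its surroundings is a detection candidate. The 16-bin quantisation, the window sizes and the function names are illustrative assumptions, not the author's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Illustrative sketch only: bin count and window sizes are assumed,
# not taken from the dissertation.

def local_entropy(img, win):
    """Local gray-level entropy over a win x win sliding window,
    estimated from window-wise frequencies of 16 quantised gray bins."""
    bins = (img // 16).astype(np.uint8)          # assumes an 8-bit image
    ent = np.zeros(img.shape, dtype=np.float64)
    for level in range(16):
        # window-wise probability of this bin = local mean of its indicator image
        p = uniform_filter((bins == level).astype(np.float64), size=win)
        ent -= p * np.log2(np.where(p > 0, p, 1.0))  # treat 0*log0 as 0
    return ent

def entropy_difference_detect(img, inner_win=9, outer_win=21):
    """Score each pixel by the entropy of a target-sized window minus the
    entropy of a larger background window, and return the strongest peak."""
    score = local_entropy(img, inner_win) - local_entropy(img, outer_win)
    y, x = np.unravel_index(np.argmax(np.abs(score)), score.shape)
    return (x, y), score
```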
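For the blob tracking of item 2, a plain Laplacian-of-Gaussian response is used below as a stand-in for the dissertation's gradient-LoG operator; the scale, the search radius and the assumption of a bright blob on a darker background are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Sketch only: sigma, search radius and the bright-blob assumption are
# illustrative, and a plain LoG replaces the dissertation's gradient-LoG.

def track_blob(frame, prev_xy, sigma=3.0, search=25):
    """One tracking step: filter the frame with a LoG kernel whose scale roughly
    matches the blob radius, then take the strongest response (most negative for
    a bright blob) inside a search window centred on the previous position."""
    response = gaussian_laplace(frame.astype(np.float64), sigma=sigma)
    x0, y0 = int(round(prev_xy[0])), int(round(prev_xy[1]))
    h, w = frame.shape
    y_lo, y_hi = max(0, y0 - search), min(h, y0 + search + 1)
    x_lo, x_hi = max(0, x0 - search), min(w, x0 + search + 1)
    win = response[y_lo:y_hi, x_lo:x_hi]
    dy, dx = np.unravel_index(np.argmin(win), win.shape)
    return (x_lo + dx, y_lo + dy)
```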
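The iterative-moment axis extraction of item 3 can be sketched as follows, assuming the standard moment formula θ = ½·arctan2(2μ11, μ20 − μ02) for the axis direction and a simple band-shaped update of the computation region; the band half-width and the iteration count are illustrative parameters rather than the dissertation's values.

```python
import numpy as np

# Sketch under assumed parameters (band half-width, iteration count);
# only the moment formula itself is standard.

def axis_from_moments(weights):
    """Centroid and axis angle of a weighted region from first- and
    second-order central moments: theta = 0.5*arctan2(2*mu11, mu20 - mu02)."""
    ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    m00 = weights.sum()
    xc, yc = (weights * xs).sum() / m00, (weights * ys).sum() / m00
    mu20 = (weights * (xs - xc) ** 2).sum()
    mu02 = (weights * (ys - yc) ** 2).sum()
    mu11 = (weights * (xs - xc) * (ys - yc)).sum()
    return xc, yc, 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

def iterative_axis(img, init_mask, half_width=3.0, n_iter=5):
    """Re-estimate the axis from moments, then restrict the computation region
    to a narrow band around the current axis estimate so that background pixels
    contribute less on the next pass."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mask = init_mask.astype(np.float64)
    for _ in range(n_iter):
        xc, yc, theta = axis_from_moments(img * mask)
        # perpendicular distance of every pixel from the current axis line
        dist = np.abs(-(xs - xc) * np.sin(theta) + (ys - yc) * np.cos(theta))
        mask = init_mask * (dist <= half_width)
    return xc, yc, theta
```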
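For the proportion-based single-station method of item 4, one standard way to write the near-parallel projection model that the abstract appeals to (not necessarily the dissertation's exact formulation) is

$$
\mathbf{x}_i \;=\; s \begin{bmatrix} \mathbf{r}_1^{\top} \\ \mathbf{r}_2^{\top} \end{bmatrix} \mathbf{X}_i + \mathbf{t}, \qquad i = 1, \dots, n,
$$

where the $\mathbf{X}_i$ are the target's feature points known only up to a common scale (its proportions), $\mathbf{r}_1, \mathbf{r}_2$ are the first two rows of the rotation matrix $R$, $s$ is an unknown overall scale and $\mathbf{t}$ a 2-D image offset. Differences of image points eliminate $\mathbf{t}$ and ratios of such differences eliminate $s$, so only the rotation $R$ remains to be solved; neither the target's range nor the camera focal length enters the equations, which is what allows the pose to be computed directly under this model.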
