Stereo Image Rectification and Image Stitching Based on Fisheye Cameras
Abstract
Binocular stereo vision has long been a research hotspot in computer vision. Owing to the limited field of view of conventional perspective cameras, ultra-wide-angle cameras with a broad field of view are attracting increasing attention in the computer vision community. Stitching images from ultra-wide-angle cameras greatly reduces the number of images required in a sequence and therefore has promising applications.
     This thesis focuses on stereo vision algorithms and image stitching based on fisheye cameras. Its main contributions are as follows:
     A fisheye stereo vision method based on the Taylor series model is proposed. The stereo cameras are calibrated with the Taylor series model, the mapping from the Taylor series model to the ideal perspective projection model is derived, and the fisheye images are undistorted into ideal perspective images, which also yields the intrinsic parameters of the virtual perspective camera. The extrinsic parameters between the stereo cameras are calibrated with a planar pattern and used for epipolar rectification of the stereo image pair. A depth-recovery method for the fisheye stereo camera is derived from the camera projection model to complete 3D reconstruction.
     Targeting vehicle surround-view monitoring, a fisheye image stitching method based on the Taylor series model is studied. Four pairwise-overlapping fisheye images are undistorted, projected onto a common plane, and composited into a bird's-eye view image under a virtual viewpoint.
Binocular stereo vision is a research hotspot in computer vision. Owing to the limited field of view of conventional perspective cameras, cameras with wide-angle lenses and a large field of view are drawing more and more attention in computer vision. Because it sharply reduces the number of images required, image stitching based on images captured by wide-angle cameras has promising application prospects.
     This thesis is mainly about stereo vision algorithms and image stitching based on fisheye cameras.
     First, we propose a fisheye stereo vision method based on the Taylor series model. The Taylor series model is used to calibrate the stereo cameras. From this model, the mapping between the Taylor model and the perspective model is derived and used to undistort the fisheye images, while the intrinsic parameters of the virtual perspective camera can be set directly. The extrinsic parameters of the binocular cameras, calibrated with a planar pattern, are used for epipolar rectification of the stereo image pair. The depth information of the stereo pair is then derived and used for 3D reconstruction.
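     As an illustrative sketch of this mapping (the exact derivation is given in the thesis; the polynomial coefficients a_i and the virtual focal length f_v below are assumed notation), the Taylor series model back-projects an image point (u, v), measured from the distortion center, to the viewing ray
\[
\mathbf{p} \;\propto\; \bigl(u,\; v,\; f(\rho)\bigr)^{\mathsf T},
\qquad \rho = \sqrt{u^{2}+v^{2}},
\qquad f(\rho) = a_0 + a_1\rho + a_2\rho^{2} + \cdots + a_N\rho^{N}.
\]
Intersecting this ray with a virtual image plane at distance \(f_v\) gives the undistorted perspective coordinates
\[
u_p = f_v\,\frac{u}{f(\rho)}, \qquad v_p = f_v\,\frac{v}{f(\rho)},
\]
and, once the pair has been epipolarly rectified, depth follows the standard stereo relation \(Z = f_v B / d\) for baseline \(B\) and disparity \(d\).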
     Second, image stitching based on the Taylor model is studied for the application of vehicle surround-view monitoring. Four overlapping fisheye images are undistorted into perspective images, projected onto a common plane, and fused into a single bird's-eye-view monitoring image.
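     A minimal code sketch of this surround-view pipeline, assuming the fisheye-to-perspective remap tables have been precomputed from the Taylor-model calibration and that the ground-plane homographies of the four views are known; all names here (birds_eye_stitch, undistort_maps, homographies) are illustrative and not taken from the thesis:

import cv2
import numpy as np

def birds_eye_stitch(fisheye_imgs, undistort_maps, homographies, canvas_size):
    # fisheye_imgs   : four overlapping fisheye frames (front, right, rear, left)
    # undistort_maps : per-camera (map1, map2) remap tables from the fisheye
    #                  model to a virtual perspective camera (precomputed)
    # homographies   : per-camera 3x3 ground-plane homographies into the
    #                  common bird's-eye canvas
    # canvas_size    : (width, height) of the output bird's-eye image
    w, h = canvas_size
    acc = np.zeros((h, w, 3), np.float32)    # accumulated colour
    cover = np.zeros((h, w, 1), np.float32)  # how many views cover each pixel

    for img, (map1, map2), H in zip(fisheye_imgs, undistort_maps, homographies):
        # 1) Undistort the fisheye image into a virtual perspective view.
        persp = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)
        # 2) Project the perspective view onto the common ground plane.
        warped = cv2.warpPerspective(persp, H, (w, h))
        # 3) Accumulate, with a valid-pixel mask to handle the overlap regions.
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        acc += warped.astype(np.float32) * mask
        cover += mask

    # Average where views overlap; a feathered blend could replace this.
    return (acc / np.maximum(cover, 1.0)).astype(np.uint8)

In practice the remap tables and homographies would come from the Taylor-model intrinsic calibration and a ground-plane (extrinsic) calibration around the vehicle, as described above.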
