3D Reconstruction with Inverse Depth Filter of Feature-Based Visual SLAM (特征法视觉SLAM逆深度滤波的三维重建)
  • Authors: ZHANG Yi (张一); JIANG Ting (姜挺); JIANG Gangwu (江刚武); YU Anzhu (余岸竹); YU Ying (于英)
  • Affiliation: Institute of Surveying and Mapping, Information Engineering University (信息工程大学地理空间信息学院)
  • Keywords: visual simultaneous localization and mapping; 3D reconstruction; inverse depth filter; motion model; back-end hybrid optimization framework
  • Journal: Acta Geodaetica et Cartographica Sinica (测绘学报); journal code CHXB
  • Publication date: 2019-06-15
  • Year: 2019
  • Volume: 48
  • Issue: 06
  • Pages: 42-51 (10 pages)
  • CN: 11-2089/P
  • Article ID: CHXB201906006
  • Funding: National Natural Science Foundation of China (41501482; 41471387; 41801388)
  • Language: Chinese
Abstract
Existing feature-based visual SLAM systems reconstruct only sparse point clouds, and non-keyframes contribute nothing to the depth estimation of map points. To address these problems, this paper proposes a 3D reconstruction method for feature-based visual SLAM based on inverse depth filtering, which incrementally builds a relatively dense scene structure from a video sequence in real time. Specifically, a keyframe tracking pipeline based on a motion model is designed to provide accurate relative pose estimates. An inverse depth filter based on a probability distribution is adopted, so that each map point is accumulated and updated from the information of multiple frames rather than obtained directly by two-frame triangulation. A back-end hybrid optimization framework combining the feature-based and direct methods is introduced, together with a map point screening strategy based on adjustment constraints, which solves camera poses and scene structure accurately and efficiently. Experimental results show that, compared with existing methods, the proposed method achieves higher computational efficiency and pose estimation accuracy, and reconstructs a globally consistent, relatively dense point cloud map.
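As an illustration of the inverse depth filtering idea summarized in the abstract, the sketch below shows, in Python, how a single candidate map point could accumulate inverse depth observations from multiple frames and only be promoted to the map once its estimate has converged, instead of being triangulated directly from two frames. This is a minimal, assumption-laden sketch rather than the authors' implementation: the class name, the plain Gaussian-product update, the convergence threshold, and the example measurements are all hypothetical, and the paper's full probability-distribution-based filter, keyframe tracking, and hybrid back-end are not reproduced here.

import math

class InverseDepthFilter:
    """Hypothetical per-point inverse depth filter (Gaussian fusion only).

    A candidate map point keeps an inverse depth estimate `mu` (1/depth)
    and its variance `sigma2`. Every frame that observes the point
    contributes one measurement, fused by a product of Gaussians, so
    depth information accumulates over many frames instead of coming
    from a single two-frame triangulation.
    """

    def __init__(self, mu_init: float, sigma2_init: float):
        self.mu = mu_init          # current inverse depth estimate (1/m)
        self.sigma2 = sigma2_init  # variance of that estimate

    def update(self, mu_obs: float, sigma2_obs: float) -> None:
        """Fuse one inverse depth observation (precision-weighted average)."""
        fused_var = 1.0 / (1.0 / self.sigma2 + 1.0 / sigma2_obs)
        self.mu = fused_var * (self.mu / self.sigma2 + mu_obs / sigma2_obs)
        self.sigma2 = fused_var

    def converged(self, sigma_thresh: float = 0.1) -> bool:
        """Insert the point into the map once the estimate is sharp enough."""
        return math.sqrt(self.sigma2) < sigma_thresh


# Illustrative usage: three frames observe the same point with decreasing
# measurement uncertainty; the estimate tightens and eventually converges.
f = InverseDepthFilter(mu_init=0.5, sigma2_init=1.0)
for mu_obs, sigma2_obs in [(0.48, 0.05), (0.52, 0.04), (0.50, 0.03)]:
    f.update(mu_obs, sigma2_obs)
print(f.mu, f.sigma2, f.converged(sigma_thresh=0.15))

The sketch only shows the accumulate-and-update principle that replaces two-frame triangulation; how observations are generated, how outliers are handled, and how converged points feed the back-end optimization follow the method described in the abstract and are not modeled here.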
