Research on Visual Localization of Mobile Robots in Urban Environments
Abstract
Self-localization is fundamental to autonomous navigation for mobile robots. In recent years, with falling image-sensor costs and rapid advances in image processing, pattern recognition, and related techniques, visual localization has attracted increasing attention. Visual localization determines a mobile robot's current position from the visual information captured by its camera(s). Mobile robots are now widely deployed in outdoor environments, particularly urban areas, so research on visual localization for mobile robots in urban environments has significant theoretical value and broad application prospects.
     This dissertation studies visual localization for mobile robots in urban environments and achieves accurate localization without reliable GPS data. It proposes a localization strategy that combines a satellite map of the robot's working area with images captured by the robot's onboard camera: the satellite map supplies the top-down outlines of the surrounding buildings, the camera images are used to reconstruct the building outlines visible in them, and matching the two determines the robot's absolute position in the 2D satellite map.
     To realize this localization scheme, the dissertation develops a line-feature-based 3D scene reconstruction method. Traditional 3D reconstruction relies mainly on point features, which suffer from low accuracy, heavy computation, and an inability to represent the scene faithfully. Line features have several advantages: under the same noise level they are less affected by noise; they are insensitive to illumination and shadows; and because they are fewer in number, selecting useful segments and reconstructing from them is computationally cheaper. However, matching line features across views has long been a hard problem in computer vision, and no existing method finds line correspondences accurately and completely. The dissertation therefore proposes, for the first time, the Multilayer Feature Graph (MFG). Exploiting the geometric relationships and constraints among multiple feature types, MFG reliably establishes line correspondences between views and ultimately recovers the 3D information of line features and (vertical) building planes. MFG is also an effective scene representation: it describes the surroundings as several interrelated key features, including point features, line-segment features, ideal-line features, and vertical-plane features, which aids scene understanding.
     The main work comprises two parts: the design and construction of MFG, and MFG-based visual localization algorithms. The first part covers the structural design of MFG and a feature-fusion method for building it. The second part discusses how to use a high-resolution satellite map together with MFG to determine the robot's exact position on that map. A method for automatically extracting top-down 2D building outlines from satellite maps is designed and implemented first. A feature-weighted localization method using a single MFG and the extracted building outlines then provides simple, fast localization, but it cannot guarantee a unique or correct solution, especially when the environment contains many similar buildings. A voting-based localization method is therefore designed: it uses multiple MFGs, each contributing several candidate solutions, and determines the final solution from the candidates' consensus. The overall work is summarized as follows:
     (1) Design and construction of MFG. The model structure of MFG and the extraction methods for its features are presented. Using the geometric relationships among MFG's feature layers, a feature-fusion construction method is proposed. Building the MFG yields line correspondences between two views and enables line-based scene reconstruction and understanding.
     (2) Automatic building-outline extraction from high-resolution satellite maps. Exploiting the characteristics of building and non-building regions in high-resolution satellite maps, together with the corresponding ordinary city e-map, an automatic and fast method for extracting 2D building outlines is proposed. Experiments on Google satellite maps show that the method extracts building outlines quickly, accurately, and automatically.
     (3) Feature-weighted visual localization with a single MFG. Using a single MFG and the 2D top-down building outlines, a feature-weighted localization method is proposed that casts localization as an optimization problem and localizes the robot by solving it. Experiments show that the method localizes the robot quickly and autonomously in most cases. However, both theoretical analysis and physical experiments show that in complex environments, especially ones containing many similar buildings, the method cannot guarantee a unique solution and may even localize incorrectly.
     (4) Voting-based visual localization. This method improves on the single-MFG feature-weighted method. It builds multiple MFGs from several two-view image pairs captured at the same location; following the feature-weighted method, each MFG supplies several candidate solutions, and the final solution is determined by voting among all candidates. Experiments show that the method effectively reduces the probability of incorrect localization while improving accuracy.
Self-localization is a basic and key technique for mobile robot navigation. In recent years, with the reduction of camera cost and the rapid development of image-processing and pattern-recognition techniques, visual localization (also termed "vision-based localization") has received increasing attention. Visual localization determines a mobile robot's position from the visual information provided by its camera(s). Applications of mobile robots in outdoor environments, especially urban areas, have become widespread. Research on visual localization for mobile robots in urban environments is therefore important both theoretically and practically.
     The dissertation focuses on visual localization for mobile robots in urban environments. A localization scheme is proposed that combines the satellite map of the robot's working area with images captured by an onboard camera. The satellite map is used to generate top-down building boundaries, and the camera images are used for building reconstruction from the horizontal view. Matching the two results determines the absolute position of the robot in the 2D satellite map.
     To realize this scheme, a line-based 3D reconstruction method is proposed. Point-based 3D reconstruction methods are popular, but existing ones are low in accuracy, computationally expensive, and unable to represent the scene exactly. Compared with point features, line features are more robust and insensitive to lighting conditions and shadows, and their smaller number leads to lower computational cost. However, matching line features between views is a difficult problem in computer vision. To this end, a structure named Multilayer Feature Graph (MFG) is first proposed in this dissertation. With the aid of the geometric relationships between different features, MFG finds line correspondences between two views and further reconstructs line features and vertical planes. MFG is also an effective scene representation that facilitates robot scene understanding by describing the scene as interrelated key features: points, line segments, lines, and vertical planes.
     The main work consists of two parts: the design and construction of MFG, and MFG-based localization algorithms. A feature-fusion method is discussed in the MFG construction part. For localization, an automatic building-boundary generation method from high-resolution satellite maps is developed first, followed by a feature-weighted localization method based on a single MFG. Because this method cannot guarantee the correctness and uniqueness of the solution, a voting-based localization algorithm using multiple MFGs is designed, in which each MFG provides several candidate solutions and the final solution is determined by the consensus of all candidates. The main work is summarized as follows:
     (1) Design and construction of MFG. The structure of MFG and the extraction methods for its features are proposed. A feature-fusion method for MFG construction is developed based on the geometric relationships between the features in MFG. MFG finds the line correspondences between two views, with which scene reconstruction and understanding are realized.
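The layered structure described above can be sketched as a small data type. This is a minimal illustration only, assuming a simple parent/child link model between feature layers; the class and field names are hypothetical, not the dissertation's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    layer: str   # "point", "segment", "line", or "plane"
    fid: int     # index within its layer
    data: tuple  # e.g. pixel coordinates, endpoints, or plane parameters

class MFG:
    """A toy Multilayer Feature Graph: features live in layers, and
    cross-layer links encode geometric relations (a segment lying on
    an ideal line, a line belonging to a vertical plane, etc.)."""

    LAYERS = ("point", "segment", "line", "plane")

    def __init__(self):
        self.features = {layer: [] for layer in self.LAYERS}
        self.links = []  # (parent_feature, child_feature) pairs

    def add(self, layer, data):
        f = Feature(layer, len(self.features[layer]), data)
        self.features[layer].append(f)
        return f

    def link(self, parent, child):
        # Record one geometric relation between two layers.
        self.links.append((parent, child))

    def children(self, parent):
        return [c for p, c in self.links if p is parent]
```

A plane→line→segment chain would then be built by `add`-ing each feature and `link`-ing it to its parent, mirroring how geometric constraints propagate across layers during construction.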
     (2) An automatic building extraction method from high resolution satellite map.By analyzing the characters of building and non-building regions in satellite map, anovel automatic and time-efficient building boundary extraction method, with the aidof corresponding ordinary map, is proposed. This method was implemented andtested on the Google satellite maps. The physical experiments demonstrated theaccuracy and efficiency of the method.
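The core of such an extraction step can be sketched as segmenting building-candidate pixels and tracing each connected region's footprint. This toy version thresholds a grayscale grid and reports axis-aligned bounding boxes; the dissertation's method additionally uses the ordinary city map as a cue and recovers full polygonal outlines, both omitted here.

```python
from collections import deque

def extract_footprints(grid, thresh=128):
    """Return one (rmin, cmin, rmax, cmax) box per connected
    building-candidate component in a 2D grayscale grid."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if grid[r][c] >= thresh and not seen[r][c]:
                # Flood-fill one component, tracking its extent.
                q = deque([(r, c)])
                seen[r][c] = True
                rmin, rmax, cmin, cmax = r, r, c, c
                while q:
                    y, x = q.popleft()
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] >= thresh
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((rmin, cmin, rmax, cmax))
    return boxes
```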
     (3) Feature-weighted localization. A feature-weighted visual localization method is proposed based on a single MFG and a 2D building-boundary map. It converts the localization task into an optimization problem whose solution gives the robot's location. Physical experiments showed that the method localizes the robot successfully in most cases. However, theoretical analysis and physical experiments also show that in complex situations, especially when similar buildings surround the robot, the method cannot guarantee a unique solution and may even produce wrong results.
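The optimization view above can be sketched as follows, under simplifying assumptions: observed building features are reduced to 2D points in the robot frame, the map to 2D points in the map frame, and the pose search is a brute-force scan over candidates. The weights and the point-for-plane substitution are illustrative, not the dissertation's formulation.

```python
import math

def score(pose, observed, map_pts, weights):
    """Weighted sum of distances from observations (transformed into
    the map frame by the candidate pose) to their nearest map feature."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    total = 0.0
    for (ox, oy), w in zip(observed, weights):
        mx, my = x + c * ox - s * oy, y + s * ox + c * oy
        total += w * min(math.hypot(mx - px, my - py) for px, py in map_pts)
    return total

def localize(observed, map_pts, weights, poses):
    """Pick the candidate pose minimizing the weighted matching cost."""
    return min(poses, key=lambda p: score(p, observed, map_pts, weights))
```

With observations generated from a true pose, the minimizer recovers that pose exactly in the noise-free case; with similar structures in the map, several poses can tie near the minimum, which is precisely the ambiguity the voting method below addresses.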
     (4) Voting-based visual localization. This method improves on the feature-weighted algorithm above. Multiple MFGs generated from camera frames are used, each providing multiple candidate solutions, and the final solution is determined by the consensus of the candidates. Physical experiments demonstrated that, compared with the feature-weighted method, the voting-based algorithm improves both the probability of correctness and the localization accuracy.
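The consensus step can be sketched as follows, assuming each MFG contributes a short list of candidate (x, y) solutions: candidates within a radius of one another support the same location, the best-supported location wins, and averaging its supporters refines the estimate. The clustering rule and radius are illustrative stand-ins for the dissertation's voting scheme.

```python
def vote(candidate_sets, radius=1.0):
    """candidate_sets: one list of (x, y) candidates per MFG.
    Returns (consensus_location, support_count)."""
    allc = [c for cands in candidate_sets for c in cands]
    best, best_support = None, -1
    for c in allc:
        # Candidates within `radius` of c vote for the same location.
        support = [d for d in allc
                   if (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2 <= radius ** 2]
        if len(support) > best_support:
            best_support = len(support)
            # Average the supporters for a refined estimate.
            best = (sum(p[0] for p in support) / len(support),
                    sum(p[1] for p in support) / len(support))
    return best, best_support
```

Outlier candidates produced by a single ambiguous MFG gather little support, so the correct location wins as long as a majority of MFGs agree, which is the intuition behind the reduced error probability reported above.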
