Research on a Multi-Segment Tracked Robot for Underground Coal Mine Detection and Its Key Technologies
Abstract
With the development of science and technology, search and rescue at the rubble of natural or man-made disasters will no longer rely entirely on human rescuers entering the scene in person. Search-and-rescue detection robots reduce the risk of secondary injury to rescuers working in disaster rubble and are therefore of great practical significance. Given the current state of the art, a detection robot can hardly complete all of its tasks fully autonomously; teleoperation supplemented by local autonomy is at present the reasonable control mode. Under this mode the operator assists the robot with complex tasks, while the robot retains a degree of autonomy in certain basic behaviors. This thesis first develops a multi-segment tracked mobile robot for searching underground coal mine environments, and then studies the key technologies required to give the robot a degree of local autonomy, including perception of obstacles and of the robot's own motion based on ultrasonic sensors and a monocular vision system, localization of the multi-segment tracked robot, and basic behavior control methods and control strategies for this type of robot.
For the special environment of underground coal mines, a multi-segment tracked mobile robot was designed. The robot uses a positive-pressure explosion-proof design and is fully enclosed by tracks, giving it strong environmental adaptability and allowing it to continue its search task even after rolling over. It consists of four segments connected in series, and each connecting joint has two active degrees of freedom, pitch and yaw, which gives the robot strong mobility and the ability to climb relatively high obstacles. The robot carries ultrasonic, vision, attitude, Hall-effect, and other sensors, providing the hardware basis for local autonomy and navigation. In view of the multi-segment structure, a distributed control system based on the CANopen protocol was designed, with MicroCANopen slave nodes for sensor data processing developed in the two segments where the sensors are concentrated. A monocular vision system was developed for visual navigation. Wireless communication nodes based on Zigbee were designed for communication with the host computer, with relay nodes on the link to keep communication reliable over long distances or when the path is obstructed. A hybrid control architecture framework was designed for the control system; toward the search task objective, it combines the perception, modeling, planning, decision-making, and action modules organically and efficiently. A sensor system for underground environment detection was designed that can measure methane, CO, temperature, humidity, and other important environmental parameters. Because the highly unstructured and unknown search environment makes the robot's behaviors complex, a robot simulation system was built for designing and rapidly verifying these behaviors; it allows fast simulation analysis of the robot's behavioral characteristics and serves as a basis and verification platform for control algorithm research on the physical robot. Experiments show that the robot system performs well when searching in unstructured environments.
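As a concrete illustration of the distributed sensing described above, the sketch below shows how a MicroCANopen slave node might pack the four environmental readings (CH4, CO, temperature, humidity) into a single 8-byte CAN payload. The field layout, scaling, and resolutions are assumptions for illustration only; the abstract does not specify this mapping.

```python
# Hypothetical 8-byte payload layout for one environmental-sensor PDO
# (a CAN data frame carries at most 8 bytes). The field order, scaling
# and resolutions below are illustrative assumptions, not the thesis's spec.
import struct

def pack_env_pdo(ch4_ppm: int, co_ppm: int, temp_c: float, rh_percent: float) -> bytes:
    """Pack CH4, CO, temperature and humidity into an 8-byte little-endian payload."""
    return struct.pack(
        "<HHhH",
        ch4_ppm,                       # 0..65535 ppm methane
        co_ppm,                        # 0..65535 ppm carbon monoxide
        int(round(temp_c * 100)),      # temperature, 0.01 degC resolution, signed
        int(round(rh_percent * 100)),  # relative humidity, 0.01 % resolution
    )

def unpack_env_pdo(payload: bytes):
    ch4, co, temp_raw, rh_raw = struct.unpack("<HHhH", payload)
    return ch4, co, temp_raw / 100.0, rh_raw / 100.0

if __name__ == "__main__":
    frame = pack_env_pdo(ch4_ppm=120, co_ppm=8, temp_c=23.5, rh_percent=87.2)
    print(len(frame), unpack_env_pdo(frame))  # 8 (120, 8, 23.5, 87.2)
```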
Constrained by the robot's structure, only a monocular vision system is fitted. The vision sensor provides the richest information of all the sensors on the robot and is an indispensable information source for local autonomy and navigation. Since search sites are usually dimly lit, the vision sensor is a CCD camera with infrared capability. To address the high noise and low brightness of infrared images, an image preprocessing algorithm based on the Contourlet transform was established, which effectively suppresses noise and balances image brightness. A robot motion estimation algorithm based on the assumption of flat ground in the near front-lower region is proposed; it estimates the robot's motion from the optical flow of strong corners on the ground in that region, providing one information source for localization. Exploiting the multi-segment structure, obstacles within the field of view are detected from ultrasonic ranging and from analysis of the divergence of the optical-flow field of the image, and a visual obstacle map is built to support obstacle-avoidance navigation.
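A minimal sketch of this kind of ground-region optical-flow measurement is given below, using Shi-Tomasi corners and pyramidal Lucas-Kanade tracking from OpenCV. The ROI fraction and tracker parameters are assumptions, and the step that converts image-plane flow into robot motion under the flat-ground hypothesis is omitted; this is not the thesis's implementation.

```python
# Sketch: median optical flow of strong corners in the near front-lower
# ground region of the image (assumed flat ground ahead of the robot).
import numpy as np
import cv2

def ground_flow(prev_bgr, curr_bgr, roi_frac=0.35):
    """Return the median optical-flow vector (dx, dy) measured inside the
    bottom roi_frac of the image, or None if no corners could be tracked."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    h, w = prev_gray.shape
    mask = np.zeros_like(prev_gray)
    mask[int(h * (1.0 - roi_frac)):, :] = 255   # restrict corners to the ground ROI

    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=7, mask=mask)
    if p0 is None:
        return None

    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                                winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    if not good.any():
        return None

    flow = (p1[good] - p0[good]).reshape(-1, 2)
    return np.median(flow, axis=0)               # robust to a few outlier tracks
```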
Dead reckoning, also called relative localization, is the basis of robot navigation. A strapdown inertial navigation system (SINS) built from the robot's on-board micro-inertial sensors serves as a short-term attitude reference. A steady-state motion model of the multi-segment tracked robot was established, the various steering modes were analyzed, and a heading estimation method based on trajectory curvature and side-slip angle was proposed; on this basis a kinematic model of the multi-segment tracked robot was built. A Kalman filter fuses the odometry with the strapdown inertial attitude reference, and an unscented Kalman filter fuses the odometry with the visual motion estimates, realizing relative localization of the robot.
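The fusion structure can be illustrated with a deliberately simplified scalar example: a gyro-propagated heading corrected by a heading measurement derived from the track-odometry model. This is only a toy sketch of the predict/update pattern; the thesis's filters operate on full attitude and pose states and also use an unscented Kalman filter for the visual measurements.

```python
# Toy scalar Kalman filter for heading: gyro integration as the prediction,
# an odometry-derived heading as the measurement. Noise values are assumed.
class HeadingKF:
    def __init__(self, q_gyro=1e-4, r_odom=1e-2):
        self.theta = 0.0   # heading estimate [rad]
        self.P = 1.0       # estimate variance
        self.q = q_gyro    # process noise added per prediction (gyro drift)
        self.r = r_odom    # odometry heading measurement noise

    def predict(self, gyro_rate, dt):
        """Propagate heading with the gyro rate (strapdown-style integration)."""
        self.theta += gyro_rate * dt
        self.P += self.q

    def update(self, theta_odom):
        """Correct with a heading measurement, e.g. from the curvature/side-slip model."""
        k = self.P / (self.P + self.r)        # Kalman gain
        self.theta += k * (theta_odom - self.theta)
        self.P *= (1.0 - k)
        return self.theta
```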
Behavior-level goal autonomy is the basis of task-level goal autonomy and a necessary condition for local autonomy; the robot's behaviors are the relatively independent functional modules that achieve these goals. The basic behaviors of the multi-segment tracked robot were analyzed, and a fuzzy controller for steering and heading control was established. Wall-following and obstacle-avoidance behavior strategies were studied, and obstacle climbing was studied using a behavior-guide approach. A demonstration application system of the search robot was built, and field experiments in an underground coal mine were carried out to verify the robot's mobility and the performance of its control system. The results show that the developed multi-segment tracked detection robot has clear advantages in search and detection, and that the methods in this thesis can effectively assist the robot in accomplishing search tasks.
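The abstract only states that a fuzzy controller handles steering and heading control, so the sketch below is a generic Sugeno-style fuzzy heading controller with assumed membership functions, rule table, and output values; it shows the inference structure rather than the thesis's actual design. The returned command in [-1, 1] would then be mapped to a differential track-speed setpoint.

```python
# Compact Sugeno-style fuzzy heading controller (illustrative assumptions only).
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(x, scale):
    """Degrees of membership in Negative / Zero / Positive, scaled and clamped."""
    x = max(-scale, min(scale, x))
    return {
        "N": tri(x, -2 * scale, -scale, 0.0),
        "Z": tri(x, -scale, 0.0, scale),
        "P": tri(x, 0.0, scale, 2 * scale),
    }

# Rule table: (heading error, error rate) -> output singleton (turn command).
RULES = {
    ("N", "N"): -1.0, ("N", "Z"): -0.7, ("N", "P"): -0.3,
    ("Z", "N"): -0.3, ("Z", "Z"):  0.0, ("Z", "P"):  0.3,
    ("P", "N"):  0.3, ("P", "Z"):  0.7, ("P", "P"):  1.0,
}

def fuzzy_turn_command(err, err_rate, err_scale=0.5, rate_scale=0.3):
    """Normalized turn command in [-1, 1] from heading error [rad] and its rate
    [rad/s], using product inference and weighted-average defuzzification."""
    mu_e, mu_r = fuzzify(err, err_scale), fuzzify(err_rate, rate_scale)
    num = den = 0.0
    for (e_lbl, r_lbl), out in RULES.items():
        w = mu_e[e_lbl] * mu_r[r_lbl]
        num += w * out
        den += w
    return num / den if den > 0 else 0.0
```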