Research on Leakage Monitoring and Repair System Technology for Hazardous Chemical Reactors Based on a Mobile Manipulator
Abstract
Mobile robotics is an important branch of robotics. With the continuous advancement of robot technology, mobile robots are being applied in an ever wider range of fields, and full autonomy is a key direction of their development. In industries such as petrochemicals, a leak of toxic or harmful gas that is not met with effective countermeasures can cause severe harm to personnel and property, and may even lead to catastrophic disasters. If a gas leak is detected early and effective measures are taken before the incident escalates, such accidents can be prevented. Taking the hazardous chemical storage tanks and reactors of the Shijiazhuang Chemical Fiber Plant as the application background, and supported by the National 863 Program project "Mobile Manipulator System for Leakage Detection and Repair of Hazardous Chemical Reactors" (2003AA421040), this thesis studies leakage monitoring and alarm systems for hazardous gases, leak repair based on a mobile manipulator, and multi-sensor information fusion. The main innovative results are as follows:
    1. Against the background of leakage monitoring and repair for hazardous chemical reactors and storage tanks, the HEBUT-2 mobile manipulator was designed and built, and its mechanical structure, drive system, and hardware architecture are described. The HEBUT-2 drives the wheels on each side in parallel (skid-steer style); the notable advantage of this arrangement is a minimum turning radius of zero. The vehicle body is organized in three layers: an upper sensor layer, a middle data-processing and decision layer, and a lower drive layer. Its main functions are path tracking of the mobile platform from images captured by a CCD camera, target localization based on the fusion of ultrasonic, CCD, and gas-sensor information, and remote control through a wireless communication module.
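The zero minimum turning radius follows directly from differential (skid-steer) drive kinematics: commanding the left and right wheel groups with equal and opposite speeds makes the platform rotate in place. Below is a minimal sketch of that relationship under an idealized rigid-body model; the track width and function names are illustrative, not values taken from the HEBUT-2 design.

```python
import math

TRACK_WIDTH = 0.5  # distance between left and right wheel groups [m], illustrative value


def body_twist(v_left: float, v_right: float, track: float = TRACK_WIDTH):
    """Idealized differential-drive kinematics.

    Returns (linear velocity v, angular velocity w, turning radius R)
    for given left/right wheel-group speeds [m/s].
    """
    v = (v_right + v_left) / 2.0          # forward speed of the platform center
    w = (v_right - v_left) / track        # yaw rate
    r = math.inf if w == 0 else v / w     # instantaneous turning radius
    return v, w, r


# Equal and opposite wheel speeds -> pure rotation about the center (R = 0).
print(body_twist(-0.3, 0.3))   # (0.0, 1.2, 0.0)
# Equal speeds -> straight-line motion (R = infinity).
print(body_twist(0.3, 0.3))    # (0.3, 0.0, inf)
```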
    2. For the practical application background of this project, a chemical reactor leakage monitoring and remote alarm system suitable for operation in hazardous environments is proposed. The system uses fixed-point detection with wireless data transmission, and a three-level-unit wireless transmission scheme is proposed. To meet the practical requirements of operation and to ensure system reliability, a periodic top-down self-check is performed; whenever a self-check fails, a self-check failure alarm message is generated. Leakage alarm messages are transmitted bottom-up and repeated until an acknowledgment signal is received, at which point the alarm stops. Depending on the type of message arriving at the top level, two alarm forms are designed, a leakage alarm and a self-check failure alarm, and alarm information is presented through window dialog boxes, which improves the correctness of data transmission. In addition, the concept of a black box (event recorder) is introduced into the leakage detection and alarm system, which is of great value for analyzing on-site conditions and locating the causes of faults.
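To illustrate the bottom-up alarm retransmission and the top-down self-check described above, here is a minimal sketch in Python; the message format, timeouts, and the `send`/`receive` interface of the `radio` object are assumptions made for illustration and do not reflect the actual wireless modules used in the system.

```python
import time

ALARM_RETRY_PERIOD = 2.0   # seconds between repeated alarm transmissions (illustrative)
SELF_CHECK_PERIOD = 60.0   # seconds between top-down self-check polls (illustrative)


def send_alarm_until_ack(radio, node_id: int, reading: float) -> None:
    """Bottom-up leakage alarm: repeat the alarm frame until an ACK arrives."""
    frame = {"type": "LEAK_ALARM", "node": node_id, "value": reading}
    while True:
        radio.send(frame)                          # hypothetical radio interface
        reply = radio.receive(timeout=ALARM_RETRY_PERIOD)
        if reply and reply.get("type") == "ACK" and reply.get("node") == node_id:
            return                                 # upper unit acknowledged, stop alarming


def self_check_loop(radio, node_ids) -> None:
    """Top-down periodic self-check: poll each lower unit; raise an alarm on failure."""
    while True:
        for node in node_ids:
            radio.send({"type": "SELF_CHECK", "node": node})
            reply = radio.receive(timeout=ALARM_RETRY_PERIOD)
            if not (reply and reply.get("type") == "SELF_CHECK_OK" and reply.get("node") == node):
                print(f"self-check failed for node {node}")  # e.g. pop up a dialog, log to the black box
        time.sleep(SELF_CHECK_PERIOD)
```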
    3. For the two possible types of on-site chemical reactor leakage, namely minor (slow) leaks and sudden (burst) leaks, an adhesive sealing method and a magnetic sealing method suitable for hazardous chemical leaks are proposed. An adhesive sealing device was developed for minor leaks, featuring rapid operation, an adjustable mixing ratio, a controllable dispensing volume, and a simple structure, together with a flexible magnetic sealing device for sudden leaks that supports rapid sealing under pressure.
    4. On the basis of a systematic and in-depth study of vision-based navigation and target recognition for mobile manipulators, a navigation method based on color vision, fuzzy control, and artificial neural networks is proposed. To adapt to different working conditions, computer vision is combined with photoelectric sensors, improving the positioning precision and accuracy of path tracking. The vision system of the HEBUT-2 mobile manipulator was designed, a camera model was established, and the inverse camera transformation was derived so that path and landmark information can be recovered from image space. On this basis, experiments were carried out on various image segmentation methods, and a region segmentation method based on color thresholds in HSI space was applied to road-surface image segmentation for the mobile manipulator. This method has strong noise immunity, exploits a large amount of color information, is insensitive to changes in illumination, and is computationally simple, improving both the accuracy and the real-time performance of segmentation for complex road images. In the design of the landmark recognition system, the environment is simplified by laying out 12 artificial red paths with six kinds of green landmarks attached to them; two HSI color thresholds determined offline are used for region segmentation, avoiding the complex processes of determining thresholds online and of distinguishing path regions from landmark regions, thereby improving real-time performance. Extensive experiments on the HEBUT-2 mobile manipulator show that the proposed method is a simple and effective approach to visual navigation for mobile manipulators.
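A minimal sketch of the HSI color-threshold segmentation idea follows, assuming RGB input images and a standard RGB-to-HSI conversion; the hue and saturation thresholds shown for the red path and green landmarks are illustrative placeholders, not the offline-calibrated values used on HEBUT-2.

```python
import numpy as np


def rgb_to_hsi(img: np.ndarray) -> np.ndarray:
    """Convert an RGB image (floats in [0,1], shape HxWx3) to HSI.

    H is returned in degrees [0, 360); S and I lie in [0, 1].
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(img, axis=-1) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - theta, theta)
    return np.stack([h, s, i], axis=-1)


def segment(hsi: np.ndarray, h_range, s_min: float) -> np.ndarray:
    """Binary mask of pixels whose hue lies in h_range and whose saturation exceeds s_min."""
    h, s = hsi[..., 0], hsi[..., 1]
    lo, hi = h_range
    in_hue = (h >= lo) | (h <= hi) if lo > hi else (h >= lo) & (h <= hi)  # wrap-around for red
    return in_hue & (s > s_min)


# Illustrative, uncalibrated thresholds: red path wraps around 0 deg, green landmarks near 120 deg.
# path_mask = segment(rgb_to_hsi(frame), h_range=(340, 20), s_min=0.3)
# sign_mask = segment(rgb_to_hsi(frame), h_range=(90, 150), s_min=0.3)
```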
    5. The fusion of visual and ultrasonic information on the manipulator was studied and applied to precise positioning of the manipulator end-effector. For the vision system, the image preprocessing pipeline was investigated to produce a two-dimensional target image that is easy for a computer to recognize and interpret, while an ultrasonic sensor supplies the third dimension, i.e. range. Experiments show that the method can extract the three-dimensional distance from the manipulator end-effector to the starting point of the sealing operation and use it for precise end-effector positioning, laying a solid foundation for the subsequent glue-spraying and sealing task.
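The following is a minimal sketch of how the 2-D image location of the sealing point can be combined with the ultrasonic range to recover a 3-D position in the camera frame, assuming a simple pinhole camera model with known intrinsics; the intrinsic values, and the assumption that the ultrasonic reading approximates depth along the optical axis, are illustrative and do not represent the calibration actually used on the manipulator.

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths and principal point in pixels), not calibrated values.
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0


def pixel_and_range_to_point(u: float, v: float, ultrasonic_range: float) -> np.ndarray:
    """Back-project a pixel (u, v) to a 3-D point in the camera frame.

    Assumes the ultrasonic reading approximates the depth Z along the optical axis.
    """
    z = ultrasonic_range
    x = (u - CX) / FX * z
    y = (v - CY) / FY * z
    return np.array([x, y, z])


# Example: target centroid found by image preprocessing at pixel (350, 260);
# the ultrasonic sensor reports 0.42 m -> 3-D offset used to position the end-effector.
print(pixel_and_range_to_point(350, 260, 0.42))
```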
