Robot Behavior Imitation Learning Based on Non-Contact Observation Information
Abstract
Behavior learning is one of the key technologies for making intelligent robots practical. Imitation learning lets a robot autonomously transform a demonstrator's behavior demonstrations into its own behavior, and is an inevitable path and effective method for robot behavior learning. Behavior observation and representation-execution are the central problems of imitation learning. Current observation methods are mostly contact-based; they place high demands on equipment and on the demonstrator's professional knowledge, and their practicality is limited. Existing models of behavior representation and execution cannot accommodate behaviors of different types and levels. This dissertation therefore conducts an in-depth study of imitation learning from non-contact, primarily visual, observation information.
     To address modeling of imitation learning together with research efficiency and safety, a human-robot relationship under visual observation was established. A real-time differential motion control algorithm for the end-effector based on the Jacobian matrix was proposed, realizing velocity control of the robot end-effector. An imitation learning system platform combining 3D simulation with a physical robot was built to improve research efficiency.
     For visual behavior observation in general unmarked scenes, the main-sub keypoint descriptor (MSKD) and binary intensity discrete sampling (BIDS) local invariant feature descriptors were first proposed. MSKD constructs its descriptor from the relationships between a main keypoint and its auxiliary points, which screens irrelevant points out of the computation and effectively represents the keypoint's local characteristics. BIDS builds binary features from discretely sampled local-region information, overcoming lighting effects and speeding up feature matching. Both descriptors outperform traditional methods in per-keypoint computation and matching time. Next, a feature training method based on affine warping of sample images was proposed: affine transforms simulate images taken from different viewpoints, and the local features of the same keypoint across the transformed images are integrated; the trained MSKD and BIDS descriptors both exceed traditional methods in recognition accuracy. Finally, a real-time object recognition and positioning method based on RGB-D images was established: depth-image foreground segmentation masks out irrelevant background regions and accelerates feature extraction, and the depth camera model yields the object's actual spatial position in the camera coordinate frame accurately and in real time. Experiments showed that the proposed methods effectively solve visual behavior observation in general unmarked environments.
     For representing and executing behaviors of different levels and types, the cybernetic graph model (CGM) was proposed. The model represents a behavior as a graph whose nodes are behavior primitives with specific meanings. A B-Spline-based behavior primitive representation method and a dynamic-programming-based real-time execution algorithm realize the representation and execution of primitives with different trajectories. Simulations and experiments verified that the proposed model effectively represents and executes behaviors of different types and levels on different robot platforms.
     For robot behavior imitation learning from visual observation, a CGM learning method suited to visual observation sequences was proposed. First, scale normalization and smoothing filters resolve the scale differences and jitter of observation sequences; second, a correlation-function-based segmentation method splits an observation sequence into sub-sequences to be learned; third, a behavior primitive trajectory learning algorithm based on gradient descent with an arc-length constraint represents observation sequences as B-Spline curves; finally, an RBF-network-based generalization boosting algorithm functionalizes behavior primitive parameters to improve the model's generalization. Simulations verified the effectiveness of the learning algorithms, and comprehensive multi-instance imitation learning experiments from visual observation on RCA87A and Yamaha robot platforms verified the effectiveness, generality, and practicality of the methods established in this dissertation.
Behavior learning is one of the key technologies for the practical application of intelligent robots. Imitation learning enables a robot to autonomously transform a demonstrator's behavior demonstrations into its own actions, and is thus an inevitable and effective approach to robot behavior learning. Behavior observation and representation-reproduction are the central problems of imitation learning. Most current observation methods are contact-based; they require complex devices and advanced professional knowledge, and are hard to apply in practice. Existing models for representation and reproduction cannot handle behaviors of different levels and classes. Therefore, this dissertation investigates in depth imitation learning from non-contact, vision-based observation.
     To address modeling, research efficiency, and safety in imitation learning, a human-robot relationship under visual observation was established. A real-time control algorithm for end-effector differential motion based on the Jacobian matrix was proposed to control the end-effector velocity. An imitation learning system integrating 3D simulation with real robots was built to increase research efficiency.
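The Jacobian-based velocity control above solves J(q)·q̇ = ẋ for the joint rates at each control step. The following is a minimal sketch for a planar two-link arm; the link lengths and the analytic Jacobian are illustrative assumptions, not the dissertation's robot model.

```python
import math

def jacobian_2link(q1, q2, l1=0.4, l2=0.3):
    # Analytic Jacobian of a planar 2-link arm: rows are d(x,y)/d(q1,q2).
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_rates(q1, q2, vx, vy):
    # Solve J(q) * qdot = v for the joint rates (closed-form 2x2 inverse).
    (a, b), (c, d) = jacobian_2link(q1, q2)
    det = a * d - b * c
    if abs(det) < 1e-9:
        raise ValueError("near-singular configuration")
    return ((d * vx - b * vy) / det, (-c * vx + a * vy) / det)
```

Commanding the rates returned by `joint_rates` at each cycle drives the end-effector at the desired Cartesian velocity; a real controller would add singularity handling, for example a damped pseudo-inverse.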
     For visual behavior observation in general unmarked scenes, two local invariant descriptors were first proposed: the main-sub keypoint descriptor (MSKD) and binary intensity discrete sampling (BIDS). MSKD constructs the descriptor from the relationships between a main keypoint and its sub-keypoints, which avoids computing irrelevant points and effectively represents the local patches of keypoints. BIDS constructs binary descriptors by sampling discretely around the keypoints, which overcomes lighting effects and speeds up feature matching. The per-keypoint computation and matching times of the proposed descriptors are lower than those of traditional methods. Secondly, a feature training algorithm based on affine warping of sample images was proposed: affine transforms simulate images observed from different viewpoints, and the features of the same keypoint across the transformed images are integrated. MSKD and BIDS trained with this method surpass traditional methods in matching accuracy. Finally, real-time object recognition and positioning based on RGB-D images was built. Foreground segmentation on the depth image masks regions of irrelevant background, which speeds up feature extraction, and the object's actual spatial position in the camera coordinate frame is computed accurately in real time from the depth camera model. Experiments verified that the proposed methods handle visual behavior observation in general unmarked environments.
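The RGB-D positioning step back-projects a detected pixel into camera coordinates with the pinhole depth-camera model, and the foreground step keeps only pixels within a depth bound. A minimal sketch follows; the intrinsics defaults are typical Kinect-style example values, and the 1.5 m threshold is an illustrative assumption.

```python
def pixel_to_camera(u, v, depth_mm, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    # Back-project pixel (u, v) with measured depth into 3D camera
    # coordinates (metres): X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    z = depth_mm / 1000.0          # depth maps commonly store millimetres
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def foreground_pixels(depth, z_max_mm=1500):
    # Depth-based foreground segmentation: keep pixels with a valid depth
    # reading closer than z_max, discarding far background regions.
    return [(r, c) for r, row in enumerate(depth)
                   for c, d in enumerate(row) if 0 < d < z_max_mm]
```

Restricting feature extraction to `foreground_pixels` is what masks the irrelevant background; `pixel_to_camera` then gives the target's position in the camera frame.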
     For representing and reproducing behaviors of different types and levels, the cybernetic graph model (CGM) was proposed, which represents a behavior as a graph in which each node is a behavior primitive with a specific meaning. A method for representing behavior primitives based on B-Spline curves, and a real-time control algorithm for behavior primitives based on dynamic programming, were proposed, which represent and reproduce behavior primitive trajectories of different types. Simulations and experiments verified that the CGM represents and reproduces behaviors of different types and levels on different robots.
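A behavior primitive trajectory represented as a B-Spline curve can be evaluated at any parameter value from its control points. The sketch below assumes a uniform cubic B-spline in matrix-basis form; the dissertation's actual knot layout and primitive parameterization are not specified here.

```python
def cubic_bspline_point(ctrl, t):
    # Evaluate a uniform cubic B-spline at parameter t in [0, len(ctrl) - 3),
    # where ctrl is a list of control points (tuples of equal dimension).
    i = min(int(t), len(ctrl) - 4)      # index of the active span
    u = t - i                           # local parameter in [0, 1]
    # Uniform cubic B-spline basis weights for the four active controls.
    b = ((1 - u) ** 3 / 6.0,
         (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
         (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
         u ** 3 / 6.0)
    return tuple(sum(b[k] * ctrl[i + k][d] for k in range(4))
                 for d in range(len(ctrl[0])))
```

Sampling `t` densely reproduces the primitive's trajectory; a real-time executor would instead advance `t` according to the speed chosen by the execution algorithm.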
     For robot behavior imitation learning from visually observed information, a learning method for the CGM suited to visually observed sequences was proposed. Firstly, scale normalization and smoothing filters resolve the scale differences and jitter of the sequences. Secondly, a sequence segmentation method based on a correlation function splits sequences into sub-sequences for learning. Thirdly, a learning algorithm for behavior primitive trajectories based on gradient descent with an arc-length constraint transforms observation sequences into B-Spline curves. Finally, a new generalization boosting algorithm functionalizes the parameters of behavior primitives, which enhances generalization performance. Simulations verified that the proposed methods are effective, and multi-instance comprehensive imitation learning experiments from visual observation on RCA87A and Yamaha robots confirmed that the methods proposed in this dissertation are effective, general, and practical.
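The first preprocessing step, scale normalization plus smoothing, can be sketched as follows. This is a minimal illustration; the [0, 1] target range and the moving-average window are assumptions, not the dissertation's parameters.

```python
def normalize_scale(seq):
    # Rescale each dimension of an observed trajectory into [0, 1],
    # removing scale differences between demonstrations.
    dims = range(len(seq[0]))
    lo = [min(p[d] for p in seq) for d in dims]
    hi = [max(p[d] for p in seq) for d in dims]
    span = [h - l if h > l else 1.0 for l, h in zip(lo, hi)]
    return [tuple((p[d] - lo[d]) / span[d] for d in dims) for p in seq]

def smooth(seq, window=5):
    # Moving-average filter that suppresses observation jitter while
    # keeping the sequence length unchanged (window shrinks at the ends).
    half = window // 2
    out = []
    for i in range(len(seq)):
        chunk = seq[max(0, i - half): i + half + 1]
        out.append(tuple(sum(p[d] for p in chunk) / len(chunk)
                         for d in range(len(seq[0]))))
    return out
```

The segmentation, arc-length-constrained fitting, and RBF generalization steps would then operate on `smooth(normalize_scale(seq))`.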
