Research on Intelligent Line-Grasping Control Methods for Transmission Line De-icing Robots
Abstract
Severe icing on high-voltage transmission lines causes tower tilting and collapse, conductor breakage, and insulator flashover; the resulting line trips and power interruptions seriously disrupt industrial and agricultural production and daily life. Robotic de-icing avoids casualties and requires neither power outages nor load transfer, and when no de-icing is needed the robot can also perform line inspection, so it has broad prospects.
     The de-icing robot works on flexible transmission conductors and must cross various obstacles along the line. External factors such as wind load and internal factors such as mechanical vibration can make the robot miss the line while crossing an obstacle, so autonomous line-grasping is difficult. Conventional control methods such as PID control cannot guarantee the required grasping accuracy and tend to be overly complex and poor in real-time performance. Designing an arm line-grasping control method that meets the accuracy requirement while remaining simple, reliable, real-time, and easy to implement is therefore one of the key technologies of the de-icing robot. This dissertation studies this problem in depth; the main work is as follows:
     1. The dissertation analyzes the difficulties of line-grasping control during obstacle crossing and, based on the structural characteristics of the robot's three-joint arm, establishes kinematic and dynamic models of the arm. These models are used throughout the dissertation and can also support related research.
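For reference, here is a minimal sketch of the forward kinematics of a planar three-joint arm; the link lengths and joint angles are hypothetical placeholders, and the dissertation's full model also covers the Lagrangian dynamics M(q)q̈ + C(q, q̇)q̇ + G(q) = τ, which this sketch does not attempt.

```python
import numpy as np

def forward_kinematics(q, lengths):
    """Tip position and orientation of a planar three-link arm.

    q       : joint angles in radians, shape (3,)
    lengths : link lengths in metres, shape (3,)
    """
    x = y = phi = 0.0
    for qi, li in zip(q, lengths):
        phi += qi                 # orientation accumulates joint angles
        x += li * np.cos(phi)     # each link projects onto the plane
        y += li * np.sin(phi)
    return x, y, phi

# Hypothetical example: three 0.4 m links, joints at 30, -20, 10 degrees
q = np.deg2rad([30.0, -20.0, 10.0])
print(forward_kinematics(q, [0.4, 0.4, 0.4]))
```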
     2. The dissertation proposes a class of discrete-space line-grasping control methods based on reinforcement learning. Exploiting the online learning ability and ease of implementation of classical reinforcement learning, line-grasping controllers based on Q-learning and SARSA learning are proposed, and, combined with eligibility traces, Q(λ) and SARSA(λ) line-grasping control algorithms are derived. Simulations and comparisons show that these controllers are effective: after a number of iterations they reach the target point and cope with unknown harsh environmental disturbances and the uncertainty of the arm end-effector pose.
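A minimal tabular SARSA(λ) sketch with replacing eligibility traces follows; the environment interface (`reset`/`step`) and all hyper-parameters are illustrative assumptions, not the dissertation's line-grasping simulation.

```python
import numpy as np

def sarsa_lambda(env, n_states, n_actions, episodes=500,
                 alpha=0.1, gamma=0.95, lam=0.9, eps=0.1):
    """Tabular SARSA(λ) with replacing eligibility traces."""
    Q = np.zeros((n_states, n_actions))

    def policy(s):
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            return np.random.randint(n_actions)
        return int(Q[s].argmax())

    for _ in range(episodes):
        E = np.zeros_like(Q)              # eligibility traces
        s, done = env.reset(), False
        a = policy(s)
        while not done:
            s2, r, done = env.step(a)     # assumed interface
            a2 = policy(s2)
            delta = r + gamma * Q[s2, a2] * (not done) - Q[s, a]
            E[s, a] = 1.0                 # replacing trace
            Q += alpha * delta * E        # credit recently visited pairs
            E *= gamma * lam              # decay traces
            s, a = s2, a2
    return Q
```

Setting λ = 0 recovers one-step SARSA, and substituting max-action bootstrapping in `delta` gives the Q(λ) variant.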
     3. The dissertation proposes a class of continuous-space line-grasping control algorithms based on reinforcement learning. Classical reinforcement learning struggles to guarantee convergence and learns inefficiently on large-scale and continuous-space decision problems. With the transmission line approximated by an equivalent cissoid model to ease computation, a KNN-SARSA(λ) algorithm combining the k-nearest-neighbor method with eligibility traces is proposed, realizing line-grasping control with continuous states and either discrete or continuous actions. Simulation results show that the improved KNN-SARSA(λ) controller solves the continuous representation of states and actions in two-dimensional space and, compared with classical reinforcement-learning control, further improves control accuracy while exhibiting good generalization ability and learning efficiency.
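The sketch below is one plausible reading of combining k-nearest-neighbor value estimation with SARSA(λ) on a continuous state space: values live at prototype states and are read out as inverse-distance-weighted averages over the k nearest prototypes. The class and parameter names are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

class KNNSarsaLambda:
    """Sketch: SARSA(λ) over prototype states; the value of a continuous
    state is an inverse-distance-weighted average of its k nearest
    prototypes, and eligibility traces are spread over those neighbors."""

    def __init__(self, prototypes, n_actions, k=4,
                 alpha=0.1, gamma=0.95, lam=0.9):
        self.P = np.asarray(prototypes, dtype=float)   # (n_proto, state_dim)
        self.Q = np.zeros((len(self.P), n_actions))
        self.E = np.zeros_like(self.Q)
        self.k, self.alpha, self.gamma, self.lam = k, alpha, gamma, lam

    def _neighbors(self, s):
        d = np.linalg.norm(self.P - s, axis=1)
        idx = np.argsort(d)[:self.k]
        w = 1.0 / (d[idx] + 1e-6)          # inverse-distance weights
        return idx, w / w.sum()

    def value(self, s, a):
        idx, w = self._neighbors(s)
        return float(w @ self.Q[idx, a])

    def update(self, s, a, r, s2, a2, done):
        idx, w = self._neighbors(s)
        target = r + self.gamma * self.value(s2, a2) * (not done)
        delta = target - self.value(s, a)
        self.E[idx, a] = np.maximum(self.E[idx, a], w)   # replacing traces
        self.Q += self.alpha * delta * self.E
        self.E *= self.gamma * self.lam
```

A continuous action can then be produced by weighting the discrete actions' values in the same k-NN fashion.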
     4. The dissertation proposes an iterative learning control method for trajectory tracking of the de-icing robot. Because the robot can repeatedly adjust the grasping arm's motion according to the position error between the arm tip and the target point, the task is repetitive. A robust adaptive iterative learning controller is therefore built on a PD feedback structure so that tracking accuracy improves as the task is repeated. The controller needs no model knowledge, the only requirement on the PD and learning gains is positive definiteness, and asymptotic convergence of the closed loop is proved by Lyapunov's method. It consumes little computation and memory and achieves robust control of uncertain robot dynamics with non-repeating disturbances and linearization residuals. Simulations and experiments show that it further improves trajectory-tracking accuracy.
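A minimal PD-type iterative learning update is sketched below; the dissertation's controller augments such a law with a robust adaptive term, and `run_trial`, the gains, and the sampling step are hypothetical.

```python
import numpy as np

def ilc_pd_update(u_prev, e, dt, Kp=5.0, Kd=0.5):
    """One PD-type iterative learning update:
        u_{k+1}(t) = u_k(t) + Kp*e_k(t) + Kd*(d/dt)e_k(t)

    u_prev : input applied on the previous trial, shape (T,)
    e      : tracking error recorded on that trial, shape (T,)
    """
    e_dot = np.gradient(e, dt)        # numerical error derivative
    return u_prev + Kp * e + Kd * e_dot

# Hypothetical trial loop (run_trial is a placeholder for the plant):
# u = np.zeros(T)
# for k in range(n_trials):
#     e = run_trial(u)                # record error over the trajectory
#     u = ilc_pd_update(u, e, dt)     # refine the input for the next trial
```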
     5. The dissertation proposes an adaptive control method in which an RBF neural network approximates the uncertain terms of the de-icing robot, which is a nonlinear, strongly coupled system that is difficult to control. The method combines the computed-torque approach with a neural-network compensation controller: an adaptive law for the network weights is derived, and the stability of the system and the convergence of the tracking error are proved. The compensator, based on a radial basis function network trained online under Lyapunov theory, corrects the robot model error in real time and adapts well. Simulation results show good trajectory tracking and robustness.
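The sketch below shows a computed-torque loop with an RBF compensation term; the nominal terms M, C, G, the RBF centers, and the weight-adaptation detail are placeholders, since the dissertation derives its own adaptive law and stability proof.

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian radial-basis features of the stacked state x = [q; qd]."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def computed_torque_rbf(q, qd, q_ref, qd_ref, qdd_ref,
                        M, C, G, W, centers, Kp, Kd):
    """Computed-torque law plus an RBF compensation of the model error.

    M (n,n), C (n,n), G (n,) : nominal dynamics evaluated at (q, qd)
    W (m,n)                  : RBF output weights, adapted online elsewhere
    """
    e, ed = q_ref - q, qd_ref - qd
    v = qdd_ref + Kd @ ed + Kp @ e                 # outer-loop acceleration
    phi = rbf_features(np.concatenate([q, qd]), centers)
    tau_nn = W.T @ phi                             # estimated model error
    return M @ v + C @ qd + G + tau_nn

# One common weight-adaptation sketch (not the dissertation's exact law):
#   W += eta * np.outer(phi, s) * dt, with s a filtered tracking error.
```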
     6. The dissertation proposes a robust control method for the de-icing robot based on a wavelet neural network (WNN); the bounds of the uncertainties need not be known. The strong nonlinear learning ability of the WNN is used to approximate the unknown part of the robot dynamics, while a robust controller compensates for the network's approximation error and external disturbances. The scheme effectively attenuates model uncertainty and external disturbance, reduces the computation of the regression matrix, and its convergence is proved with a Lyapunov function. Simulations show strong disturbance rejection, good dynamic behavior, and better model-following tracking than existing results.
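As a minimal sketch of this structure, assuming a Mexican-hat mother wavelet and a smoothed sign-type robust term (both common choices, not confirmed as the dissertation's exact ones):

```python
import numpy as np

def mexican_hat(z):
    """Mexican-hat mother wavelet, applied elementwise."""
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def wnn_output(x, W, translations, dilations):
    """Wavelet-network estimate of the unknown dynamics at state x.

    translations, dilations : (n_nodes, state_dim) wavelet parameters
    W                       : (n_nodes, n_joints) output weights
    """
    Z = (x - translations) / dilations       # normalize per node
    psi = mexican_hat(Z).prod(axis=1)        # multidimensional wavelet
    return W.T @ psi

def robust_term(s, rho, eps=1e-3):
    """Smoothed sign-type term bounding the approximation error and
    external disturbance by rho."""
    return rho * s / (np.linalg.norm(s) + eps)

# Control sketch: tau = wnn_output(x, W, t_, d_) + Kd @ s + robust_term(s, rho),
# with s a filtered tracking error; W is adapted online by a law whose
# stability the dissertation proves with a Lyapunov function.
```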