Research on Function Approximation Methods Based on Neural Networks
Abstract
Function approximation is an important branch of the theory of functions and plays a significant role in numerical computation. Using neural networks to implement function approximation offers a new approach to the development of the field.
     Neural networks have several advantages for function approximation. First, they provide a standard approximation structure and tool that can achieve arbitrary accuracy by adjusting the number of hidden-layer neurons. Second, standard learning algorithms exist for determining the parameters of the approximating function, and this learning process closely mimics human learning. Finally, they can handle a wide range of data: large-scale, highly nonlinear, and incomplete data sets.
     This paper takes several typical neural networks as examples (the BP network, the RBF network, the orthogonal-polynomial basis-function network, and the spline basis-function network) to study function approximation methods based on neural networks. A network's approximation ability is influenced by the number of neurons, the learning rate, the number of training iterations, the training goal, and other factors. Making full use of the nonlinear approximation ability of neural networks, the study therefore proceeds in two steps: first, the structures and algorithms of the several neural networks used for function approximation are examined; then several common function curves, such as sine, exponential, logarithmic, and other trigonometric functions, are approximated with each typical network, and the approximation results are compared to derive rules for selecting a neural network for function approximation. The conclusions are validated by simulation tests.
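The thesis itself presents no code, but the BP-style approach described above can be sketched in a few lines. The following NumPy example trains a small 1-10-1 tanh network by plain gradient descent to approximate sin(x) on [-π, π]; the network size, learning rate, and iteration count are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: sample the target curve, here sin(x) on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# A 1-10-1 network with tanh hidden units (minimal BP-style sketch).
n_hidden = 10
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.1
n = len(x)
for _ in range(10000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)        # hidden activations, shape (n, n_hidden)
    y_hat = h @ W2 + b2             # linear output layer, shape (n, 1)
    err = y_hat - y

    # Backward pass: gradients of the mean squared error.
    dW2 = h.T @ err / n
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh derivative is 1 - tanh^2
    dW1 = x.T @ dh / n
    db1 = dh.mean(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.5f}")
```

For comparison, the best purely linear fit of sin(x) on this interval leaves a mean squared error of about 0.196, so the nonlinear hidden layer is what makes an accurate approximation possible.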
     The results of this study provide a useful reference for research on function approximation.
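The RBF approach compared in the thesis can be sketched just as briefly. In this illustrative NumPy example the hidden layer is a fixed grid of Gaussian basis functions and only the linear output weights are fitted, by least squares rather than iterative training; the target curve e^x, the center grid, and the width sigma are assumptions chosen for the demonstration.

```python
import numpy as np

# Target curve: e^x on [0, 1], one of the standard test functions.
x = np.linspace(0.0, 1.0, 100)
y = np.exp(x)

# Gaussian RBF hidden layer: fixed centers on a grid, shared width sigma.
centers = np.linspace(0.0, 1.0, 10)
sigma = 0.15
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))

# Output weights by linear least squares (no iterative training needed).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
mse = float(np.mean((y_hat - y) ** 2))
print(f"RBF fit MSE: {mse:.2e}")
```

Because the output layer is linear in the basis functions, fitting reduces to a convex least-squares problem, which is one reason RBF networks are often easier to train than BP networks for low-dimensional curve fitting.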
