Approximation Capability of Sum-of-Product Neural Networks and Radial Basis Function Neural Networks
Abstract
Neural network theory has developed rapidly in recent years. The approximation capability of a neural network is an important measure of its performance. The mappings to be approximated in practical applications are usually very complicated, and we cannot expect to compute these unknown mappings exactly. A popular approach, therefore, is to use neural networks to approximate static mappings by computing superpositions and linear combinations of univariate or other simple functions. This is related to the following question: whether, or under what conditions, a family of neural network output functions is dense in some space of multivariate functions; this is the study of the approximation capability of neural networks. As a basic problem of neural networks, it has attracted wide attention from engineers and mathematicians along with the development of the field. Density is the theoretical ability to approximate functions; density alone does not mean that a given form is an effective approximation scheme. However, without a guarantee of density, some networks simply cannot be used for approximation. Mathematically, the approximation problem of neural networks can be divided into four aspects: approximation of functions, approximation of families of functions (strong approximation), approximation of continuous functionals, and approximation of continuous operators. Many neural network models have been proposed so far, and feedforward networks are the most widely used, so studying the approximation capability of the various feedforward networks is all the more pressing.
The approximation capability of radial basis function (RBF) neural networks has already been studied in depth, but the existing results still need to be developed and refined. Moreover, in studying the capability of neural networks to approximate families of functions, previous work relied on the existing function approximation theorems for multilayer perceptron (MLP) and RBF networks to obtain strong approximation results for these two particular networks. Does a similar connection between function approximation and strong approximation also hold for general feedforward networks? This question matters for building a unified theoretical framework for approximation.
Sum-of-Product neural networks and Sigma-Pi-Sigma neural networks were proposed in 2000 and 2003, respectively. Both are multilayer networks built from product neurons and summation neurons, designed to overcome the large memory requirements and the learning difficulties encountered by classical RBF and MLP networks. Both perform well in function approximation, prediction, classification, and learning-control tasks. This thesis discusses the uniform approximation capability and the L^p approximation capability of these two networks.
Existing approximation theory for neural networks mainly establishes approximation capability by existence arguments. Using a constructive method, we prove that for three-layer feedforward networks with RBF-type or translation and dilation invariant (TDI) hidden units, one may simply choose the weight parameters of the hidden units at random and then suitably adjust the weights between each newly added hidden unit and the output unit, so that the network output function approximates any function in L^2(R^d) to arbitrary accuracy. At the same time, our result yields a natural way to build incremental networks that approximate functions in L^2(R^d).
Ridge functions of the form g(a·x), and their linear combinations, are widely used in topology, neural networks, statistics, harmonic analysis, and approximation theory. Here g is a univariate function and a·x denotes the inner product in the Euclidean space R^n. Determining to what extent the representation of a function as a sum of ridge functions is unique is an important problem. Existing results in this direction treat the cases g ∈ C(R) and g ∈ L^1_loc(R); we extend the corresponding conclusions to g ∈ L^p_loc(R) (1 ≤ p < ∞) and g ∈ D'(R). In addition, when a function can be represented as a sum of ridge functions, the relationship between the smoothness of the function itself and that of each summand is also a concern of this thesis.
The structure and contents of this thesis are as follows:
Chapter 1 reviews basic background on neural networks and introduces the significance, methods, and current state of research on the approximation capability theory of neural networks.
Chapter 2 studies the uniqueness of the representation of a function as a sum of ridge functions. We prove that if f(x) = Σ_{i=1}^m g_i(a^i·x) = 0, where the directions a^i = (a^i_1, …, a^i_n) ∈ R^n\{0} are pairwise linearly independent and g_i ∈ L^p_loc(R) (or g_i ∈ D'(R) with g_i(a^i·x) ∈ D'(R^n)), then each g_i is a polynomial of degree at most m - 2. We also give a smoothness theorem for linear combinations of ridge functions.
Chapter 3 gives results on the capability of RBF neural networks to approximate functions in L^p spaces, as well as on their strong approximation and operator approximation capabilities. These results improve recent results of Chen Tianping, Jiang Chuanhai, and others on RBF network approximation and provide a theoretical basis for applications of RBF networks. In addition, we obtain a strong approximation theorem for feedforward networks in a general form, of which many existing results are special cases.
Chapter 4 shows that when a continuous function on R is used as the activation function of a Sum-of-Product neural network, the set of functions generated by the network is dense in C(K) if and only if the activation function is not a polynomial. Furthermore, we give a necessary and sufficient condition for the set of functions generated by Sigma-Pi-Sigma neural networks to be dense in C(K).
Chapter 5 establishes a necessary and sufficient condition for the set of functions generated by Sum-of-Product neural networks to be dense in L^p(K). Based on this approximation result, the L^p approximation capability of Sigma-Pi-Sigma neural networks is also discussed.
Chapter 6 studies the capability of three-layer incremental feedforward networks with random hidden units to approximate functions in L^2(R^d), mainly networks with RBF-type and translation and dilation invariant (TDI) hidden units. We show that, for networks with RBF hidden units, given any nonzero activation function g: R → R with g(‖x‖) ∈ L^2(R^d), or, for networks with TDI hidden units, given any nonzero activation function g(x) ∈ L^2(R^d), if the weights between the hidden layer and the output unit are chosen suitably, then the output function of a three-layer incremental network with n random hidden units converges with probability one to any target function in L^2(R^d) as n → ∞.
In recent years, neural network theory has developed rapidly. Approximation theory of neural networks is important for analyzing the computation capability of neural networks. Mappings in approximation applications are usually very complicated, and we cannot expect to compute the unknown mappings exactly. Thus, a current trend is to use artificial neural networks to approximate multivariate functions by computing superpositions and linear combinations of simple univariate functions. This is related to the density problem of neural networks: whether, or under what conditions, a family of neural network output functions is dense in a space of multivariate functions; this is the question of the approximation capability of neural networks. Approximation capability, a basic problem of neural networks, has attracted extensive attention among engineers and mathematicians along with the development of neural networks. Density is the theoretical ability to approximate functions, but denseness alone does not give an effective scheme; on the other hand, a class of networks cannot be used for approximation without a guarantee of denseness. From a mathematical point of view, the approximation problem of neural networks can be studied from four aspects: function approximation, approximation of families of functions (strong approximation), functional approximation, and operator approximation. Many neural network models have been proposed so far, and feedforward neural networks are the most widely used in applications, so it is important to study the approximation capabilities of the various feedforward neural networks.
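For concreteness, the density question can be stated for uniform approximation on a compact set as follows; the network class symbol and the compact set K below are generic placeholders for illustration, not notation fixed by this thesis.

```latex
% Density of a family \mathcal{N} of network output functions in C(K):
% every continuous target can be matched to arbitrary accuracy by some member of the family.
\[
\overline{\mathcal{N}} = C(K)
\iff
\forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N \in \mathcal{N}:\
\sup_{x \in K} \lvert f(x) - N(x) \rvert < \varepsilon .
\]
```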
There have been deep investigations of the approximation capability of radial basis function (RBF) neural networks, but the known results still need to be improved. Meanwhile, the existing function approximation theorems for RBF and multilayer perceptron (MLP) neural networks have been used to establish their capability to approximate families of functions. This raises the question: is there a similar relationship between the approximation of functions and of families of functions for general feedforward neural networks? Answering it would help establish an integrated theoretical framework.
Sum-of-Product neural networks (SOPNN) and Sigma-Pi-Sigma neural networks (SPSNN) were proposed in 2000 and 2003, respectively. Product and summation neurons are their basic units. The new structures aim to overcome the extensive memory requirements and the learning difficulties of MLP and RBF neural networks, and they perform well in function approximation, prediction, classification, and learning control. We discuss both the uniform and the L^p approximation capabilities of these networks.
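As a rough illustration of the "product and summation neurons" just described, the Python sketch below evaluates one plausible sum-of-product form; the exact SOPNN architecture of Lin and Li is not spelled out in this abstract, so the concrete form used here is an assumption made only for illustration.

```python
import numpy as np

def sopnn_output(x, W, B, c, sigma=np.tanh):
    """Illustrative sum-of-product network output.

    Assumed form (for illustration, not taken verbatim from the thesis):
        y(x) = sum_k c[k] * prod_i sigma(W[k, i] * x[i] + B[k, i])
    Each hidden "product unit" multiplies univariate activations applied
    coordinate-wise; a summation (output) unit combines the products.
    x: (d,) input; W, B: (K, d) parameters of K product units; c: (K,) weights.
    """
    hidden = np.prod(sigma(W * x + B), axis=1)  # one product per hidden unit
    return hidden @ c

# Tiny usage example with random parameters (illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W, B = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
c = rng.normal(size=5)
print(sopnn_output(x, W, B, c))
```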
In contrast to the conventional existence approach in approximation theory for neural networks, we follow a constructive approach to prove that one may simply choose the parameters of the hidden units of three-layered Translation and Dilation Invariant (TDI) neural networks and RBF neural networks at random, and then adjust the weights between the hidden units and the output unit so that the networks approximate any function in L^2(R^d) to arbitrary accuracy. Furthermore, our result also gives an automatic and efficient way to construct incremental three-layered feedforward networks for function approximation in L^2(R^d).
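The constructive idea can be sketched as follows: hidden-unit parameters are drawn at random, and only the hidden-to-output weights are refitted as units are added. The Gaussian RBF form and the least-squares weight rule in this Python sketch are illustrative assumptions, not the thesis's actual construction.

```python
import numpy as np

def incremental_rbf_fit(X, y, n_units=50, seed=0):
    """Incremental RBF approximation sketch with random hidden units.

    Centres and widths are drawn at random; after each new unit is added,
    only the hidden-to-output weights are re-fitted (here by least squares
    on the sample (X, y)).  This mirrors the "random hidden units, adjust
    output weights" idea described above; the concrete weight rule is an
    illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    centres, widths, errors = [], [], []
    for n in range(1, n_units + 1):
        centres.append(rng.uniform(X.min(), X.max(), size=d))  # random centre
        widths.append(rng.uniform(0.2, 2.0))                   # random width
        C, S = np.array(centres), np.array(widths)
        # design matrix of Gaussian RBF responses, shape (samples, n)
        Phi = np.exp(-np.sum((X[:, None, :] - C[None]) ** 2, axis=2) / S**2)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)             # output weights
        errors.append(np.linalg.norm(Phi @ w - y) / np.sqrt(len(y)))
    return C, S, w, errors

# Usage: approximate a smooth target on [0, 1]^2 and watch the RMS error shrink.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
*_, errors = incremental_rbf_fit(X, y)
print(errors[::10])
```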
Ridge functions of the form g(a·x), and their linear combinations, are widely used in topology, neural networks, statistics, harmonic analysis, and approximation theory, where g is a univariate function and a·x denotes the inner product of a and x in R^n. When we study a function represented as a sum of ridge functions, it is fundamental to understand to what extent the representation is unique. The known results consider two cases: g ∈ C(R) and g ∈ L^1_loc(R). We draw the same conclusions under the conditions g ∈ L^p_loc(R) (1 ≤ p < ∞), or g ∈ D'(R) and g(a·x) ∈ D'(R^n). Provided that a function is represented by a sum of ridge functions, the relationship between the smoothness of the given function and that of the summands is also analyzed.
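A small example (not taken from the thesis) shows how linear combinations of ridge functions already produce genuinely multivariate functions:

```latex
% A ridge function g(a \cdot x) is constant along the hyperplanes a \cdot x = const.
% In R^2, taking g(t) = t^2 with directions a = (1, 1) and b = (1, -1) gives
\[
\tfrac14\bigl[(x_1 + x_2)^2 - (x_1 - x_2)^2\bigr] = x_1 x_2 ,
\]
% so the genuinely multivariate function x_1 x_2 is already a linear combination
% of the two ridge functions g(a \cdot x) and g(b \cdot x).
```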
     This thesis is organized as follows:
Some background on feedforward neural networks is reviewed and the significance of approximation capability theory is introduced in Chapter 1. The commonly used methods and the progress of research on the approximation capability theory of neural networks are also presented in this chapter.
Chapter 2 investigates the uniqueness of the representation of a given function as a sum of ridge functions. It is shown that if f(x) = Σ_{i=1}^m g_i(a^i·x) = 0, where the directions a^i = (a^i_1, …, a^i_n) ∈ R^n\{0} are pairwise linearly independent and g_i ∈ L^p_loc(R) (or g_i ∈ D'(R) with g_i(a^i·x) ∈ D'(R^n)), then each g_i is a polynomial of degree at most m - 2. In addition, a theorem on the smoothness of linear combinations of ridge functions is obtained.
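As a sanity check, the simplest case m = 2 of this theorem, written out for continuous g_1, g_2, reads as follows; the short argument is a standard illustration, not the thesis's proof.

```latex
% Simplest case m = 2: the theorem gives degree at most m - 2 = 0, i.e. constants.
% Suppose a^1, a^2 are linearly independent and, for continuous g_1, g_2,
\[
g_1(a^1 \cdot x) + g_2(a^2 \cdot x) = 0 \qquad \text{for all } x \in \mathbb{R}^n .
\]
% Moving x along a direction v with a^2 \cdot v = 0 but a^1 \cdot v \neq 0 changes
% the argument of g_1 while fixing that of g_2, so g_1 is constant; by symmetry
% g_2 is constant as well, i.e. a polynomial of degree at most 0, as the theorem states.
```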
Chapter 3 mainly deals with the capability of RBF neural networks to approximate functions, families of functions, functionals, and operators. Besides, we follow a general approach to obtain an approximation capability theorem for feedforward neural networks with respect to a compact set of functions, which covers the existing results in this respect as special cases.
It is proved in Chapter 4 that the set of functions generated by SOPNN with activation function in C(R) is dense in C(K) if and only if the activation function is not a polynomial. The necessary and sufficient condition under which the set of functions generated by SPSNN is dense in C(K) is also derived. Here K is a compact set in R^N.
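A sketch of the "only if" direction, assuming the generated functions take the coordinate-wise product form suggested by the SOPNN description above (this concrete form is an assumption made here for illustration):

```latex
% Assume generated functions of the form
%   N(x) = \sum_k c_k \prod_{i=1}^{d} \sigma(w_{ki} x_i + \theta_{ki}).
% If \sigma is a polynomial of degree q, every generated N lies in the
% finite-dimensional (hence closed) space
\[
P_q \;=\; \operatorname{span}\bigl\{\, x_1^{j_1} \cdots x_d^{j_d} \;:\; 0 \le j_i \le q \,\bigr\},
\]
% and for a compact K with nonempty interior x_1^{q+1} \notin P_q,
% so the generated functions cannot be dense in C(K).
```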
In Chapter 5, we give a necessary and sufficient condition under which the set of functions generated by SOPNN is dense in L^p(K). Based on the L^p approximation result for SOPNN, the L^p approximation capability of SPSNN is also studied.
Chapter 6 studies the capability of three-layered incremental constructive feedforward neural networks with random hidden units to approximate functions in L^2(R^d). RBF neural networks and TDI neural networks are mainly discussed. Our result shows that, given any non-zero activation function g : R_+ → R with g(‖x‖) ∈ L^2(R^d) for RBF hidden units, or any non-zero activation function g(x) ∈ L^2(R^d) for TDI hidden units, the output function of an incremental network with n randomly generated hidden units converges to any target function in L^2(R^d) with probability 1 as n → ∞, provided one properly adjusts the weights between the hidden units and the output unit.
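In symbols, the Chapter 6 claim can be written as follows, where f_n denotes the network output after n random hidden units have been added and the output weights adjusted; the notation f_n, g_k, β_k is introduced here only for illustration.

```latex
\[
\Pr\Bigl( \lim_{n \to \infty} \bigl\| f - f_n \bigr\|_{L^2(\mathbb{R}^d)} = 0 \Bigr) = 1
\qquad \text{for every target } f \in L^2(\mathbb{R}^d),
\]
% where f_n(x) = \sum_{k=1}^{n} \beta_k g_k(x) is built from the n randomly generated
% RBF or TDI hidden units g_k, with output weights \beta_k adjusted as described above.
```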
References
[1] McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 1943, 5: 115-133.
    
    [2] Hebb D O. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley, 1949.
    
    [3] Kohonen T. Correlation matrix memories. IEEE Transactions on Computers, 1972, 21: 353-359.
    
[4] Grossberg S. Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biological Cybernetics, 1976, 23: 121-134.
    
    [5] Hopfield J J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 1982, 79: 2554-2558.
    
    [6]阎平凡,张长水.人工神经网络与模拟进化计算.北京:清华大学出版社,2000.
    
    [7]Hagan M T,Demuth H B,Beale M H著.戴葵等译.神经网络设计.北京:机械工业出版社,2002.
    
    [8]张军英,许进.二进前向人工神经网络.西安:西安电子科技大学出版社,2001.
    
    [9]阎平凡.人工神经网络的容量、学习与计算复杂性.电子学报,1995,23(4):63-67.
    
    [10]沈世镒.神经网络系统理论及其应用.北京:科学出版社,1998.
    
    [11]褚蕾蕾,陈绥阳,周梦.计算智能的数学基础.北京:科学出版社,2002.
    
    [12] Liao X F, Chen G R, Sanchez E N. Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach. Neural Networks, 2002,15(8): 855-866.
    
    [13] Liao X F, Chen G R, Sanchez E N. LMI-based approach for asymptotically stability analysis of delayed neural networks. IEEE Transactions on Circuits and Systems, 2002,49(7): 1033-1039.
    
    [14]徐秉铮,张百灵,韦岗.神经网络理论与应用.广州:华南理工大学出版社,1994.
    
    [15]戴葵.神经网络实现技术.长沙:国防科技大学出版社,1998.
    
    [16]吴微,陈维强,刘播.用BP神经网络预测股票市场涨跌.大连理工大学学报,2001,41(1):9-15.
    
    [17]孔俊,吴微,赵卫海.识别数学符号的神经网络方法.吉大自然科学学报,2001,3:11-16.
    
    [18]吴微,侯利昌.基于LL(1)文法的印刷体数学公式结构分析方法.大连理工大学学报,2006,46(3): 454-459.
    
[19] Chen T P, Chen H, Liu R W. Approximation capability in C(R̄^n) by multilayer feedforward networks and related problems. IEEE Transactions on Neural Networks, 1995, 6(1): 25-30.
    
    [20] Vapnik V N. The Nature of Statistical Learning Theory. New York: Springer, 1995.
    
    [21]Vapnik V N著.张学工译.统计学习理论的本质.北京:清华大学出版社,2000.
    
[22]魏海坤.神经网络结构设计的理论与方法.北京:国防工业出版社,2005.
    [23] Moody J, Darken C. Fast learning in networks of locally-tuned processing units. Neural Computation, 1989,1:281-294.
    
    [24] Haykin S. Neural Networks: A Comprehensive Foundation. 第2版. 北京: 清华大学出版社, 2001.
    [25] Lin C S, Li C K. A sum-of-product neural network (SOPNN). Neurocomputing, 2000,30: 273-291.
    
    [26] Heywood M, Noakes P. A framework for improved training of Sigma-Pi networks. IEEE Transactions on Neural Networks, 1989, 2: 359-366.
    
[27] Gurney K N. Training nets of hardware realisable Sigma-Pi units. Neural Networks, 1992, 4: 289-303.
    
[28] Gurney K N. Training recurrent nets of hardware realisable Sigma-Pi units. Int. J. Neural Systems, 1992, 3: 31-42.
    
    [29] Li C K. A sigma-pi-sigma neural network (SPSNN). Neural Processing Letters, 2003,17: 1-19.
    
    [30] Shin Y, Ghosh J. Ridge polynomial networks. IEEE Transactions on Neural Networks, 1995, 6: 610- 622.
    
    [31] Hecht-Nielsen R. Theory of the backpropagation neural networks. Proceedings of International Joint Conference on Neural Networks, Washington DC, 1989,1: 593-611.
    
[32] Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks, 1989, 2: 359-366.
    
    [33] Nedeljkovic V. A novel multilayer neural networks training algorithm that minimizes the probability of classification error. IEEE Transactions on Neural Networks, 1993,4: 650-659.
    
[34] Rumelhart D E, Hinton G E, Williams R J. Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, Cambridge, 1986, 1: 318-362.
    
[35] Werbos P J. Beyond regression: new tools for prediction and analysis in the behavioral sciences. Boston: Harvard University, 1974.
    
    [36] Bianchini M, Fransconi P, Gori M. Learning without local minima in radial basis function networks. IEEE Transactions on Neural Networks, 1995,6: 749-756.
    
    [37] Chen S, Mulgrew B, Grant P M. A clustering technique for digital communications channel equalization using radial basis function networks. IEEE Transactions on Neural Networks, 1993,4: 570-579.
    
    [38] Cheng Y H, Lin C S. A learning algorithm for radial basis function networks: with capability of adding and pruning neurons. Proceedings of IEEE International Conference in Neural Networks, Sorrento, 1994: 797-801.
    
[39] Gorinevsky D. On the persistency of excitation in radial basis function network identification of nonlinear systems. IEEE Transactions on Neural Networks, 1995, 6: 1237-1244.
    
[40] Lee S, Kil R M. A Gaussian potential function network with hierarchically self-organizing learning. Neural Networks, 1991, 4: 207-224.
    
[41] Sanger T D. A tree-structured adaptive network for function approximation in high-dimensional spaces. IEEE Transactions on Neural Networks, 1991, 2: 285-293.
    
    [42]华青,白水著.数学家小词典.上海:知识出版社,1987,245-248.
    
[43]Eves H著.欧阳绛译.数学史上的里程碑.北京:北京科学技术出版社,1990,376-390.
    
    [44] Hecht-Nielsen R. Kolmogorov's mapping neural network existence theorem. Proceedings of the IEEE International Conference on Neural Networks, New York, 1989(3): 11-14.
    
[45] Cybenko G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst., 1989, 2: 303-314.
    
    [46] Leshno M, Lin Y V, Pinkus A et al. Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, 1993, 6: 861-867.
    
    [47]陈天平.神经网络及其在系统识别应用中的逼近问题.中国科学(A辑),1994,24(1):1-6.
    
    [48] Chen T P, Chen H. Approximation capability to functions of several variables nonlinear functionals and operators by radial basis function neural networks. IEEE Transactions on Neural Networks, 1995, 6(4): 904-910.
    
[49] Luo Y H, Shen S Y. L^p approximation of sigma-pi neural networks. IEEE Transactions on Neural Networks, 2000, 11(6): 1485-1489.
    
[50] Pinkus A. TDI-subspaces of C(R^d) and some density problems from neural networks. Journal of Approximation Theory, 1996, 85: 269-287.
    
[51] Schwartz L. Sur certaines familles non fondamentales de fonctions continues. Bull. Soc. Math. France, 1944, 72: 141-145.
    
    [52] Park J, Sandberg I W. Universal approximation using radial-basis-function networks. Neural Computation,1991,3: 246-257.
    
    [53] Park J, Sandberg I W. Approximation and radial-basis-function networks. Neural Computation, 1993, 5: 305-316.
    
    [54]蒋传海.神经网络中的逼近问题.数学年刊,1998,19A(3):295-300.
    
    [55]齐民友.广义函数与数学物理方程.北京:高等教育出版社,1989.
    
    [56]Barros J著.欧阳光中,朱学炎译.广义函数引论.上海:上海科学技术出版社,1981.
    
    [57] Friedlander F G. Introduction to the Distribution Theory. England: Cambridge University Press, 1998.
    
    [58]夏道行,吴卓人,严绍宗等.实变函数论与泛函分析.北京:人民教育出版社,1979.
    
    [59] Rudin W. Functional Analysis. New York: McGraw-Hill, 1987.
[60] Jones D S. The Theory of Generalised Functions. England: Cambridge University Press, 1982.
    [61] Friedman A. Generalised Functions and Partial Differential Equations. Boston: Prentice-Hall, 1963.
    
    [62] Logan B F, Shepp L A. Optimal reconstruction of a function from its projections. Duke Math. J., 1975, 42: 645-659.
    
    [63] Kazantsev I G. Tomographic reconstruction from arbitrary directions using ridge functions. Inverse Problems, 1998,14: 635-645.
    
    [64] Natterer F. The Mathematics of Computerized Tomography. New York: Wiley, 1986.
    
    [65] Kazantsev I G, Samuel M J, Lewitt R M. Limited angle tomography and ridge functions. IEEE Nuclear Science Symposium Conference Record, 2002,3(10-16): 1706 - 1710.
    
    [66] Pinkus A. Approximation theory of the MLP model in neural networks. Acta Numerica, 1999, 8: 143-195.
    
    [67] Chui C K, Li X. Approximation by ridge functions and neural networks with one hidden layer. J. Approx. Theory, 1992,70: 131-141.
    
    [68] Petrushev P P. Approximation by ridge functions and neural networks. SIAM J. Math. Anal, 1998, 30: 155-189.
    
    [69] Wu W, Feng G, Li X. Training multilayer perceptrons via minimization of sum of ridge functions. Adv. Comput. Math., 2002,17: 331-347.
    
    [70] Pinkus A. Approximating by ridge functions. In: A. Le Méhauté, C. Rabut, L.L. Schumaker. Surface Fitting and Multiresolution Methods. Nashville: Vanderbilt University Press, 1997: 279-292.
    
    [71] Pelletier B. Approximation by ridge function fields over compact sets. J. Approx. Theory, 2004, 129: 230-239.
    
    [72] Braess D, Pinkus A. Interpolation by ridge functions. J. Approx. Theory, 1993, 73: 218-236.
    
[73] Buhmann M D, Pinkus A. Identifying linear combinations of ridge functions. Adv. Appl. Math., 1999, 22(1): 103-118.
    
    [74] Donoho D L. Ridge Functions and Orthonormal Ridgelets. J. of Approx. Theory, 2001, 111(2): 143-179.
    
    [75] Ismailov V E. Characterization of an extremal sum of ridge functions. Journal of Computational and Applied Mathematics, 2006,12: 1-11.
    
[76] Maiorov V, Meir R, Ratsaby J. On the approximation of functional classes equipped with a uniform measure using ridge functions. J. Approx. Theory, 1999, 99: 95-111.
    
[77] Lin V Y, Pinkus A. Fundamentality of ridge functions. J. Approx. Theory, 1993, 75: 295-311.
    
    [78] Dahmen W, Micchelli C A. Some remarks on ridge functions. Approx. Theory Appl., 1987, 3: 139-143.
    
    [79] Sun X, Cheney E W. The fundamentality of sets of ridge functions. Aequationes Math., 1992,44: 226-235.
    
    [80] Gordon Y, Maiorov V, Meyer M et al. On the best approximation by ridge functions in the uniform norm. Constr. Approx., 2002, 18: 61-85.
    
    [81] Xu Y, Light W A, Cheney E W. Constructive methods of approximation by ridge functions and radial functions. Numer. Algorithms, 1993,4: 205-223.
    
[82] Donoho D L, Johnstone I M. Projection-based approximation and a duality with kernel methods. Ann. Statist., 1989, 17: 58-106.
    
    [83] Candes E J. Ridgelets: estimating with ridge functions. Ann. Statist., 2003, 31: 1561-1599.
    
    [84] Friedman J H, Stuetzle W. Projection pursuit regression. J. Amer. Statist. Assoc, 1981, 76: 817-823.
    
    [85] Huber P J. Projection pursuit. Ann. Statist., 1985,13: 435-475.
    
    [86]周民强.实变函数.北京:北京大学出版社,2001.
    
    [87] Liao Y, Fang S, Nuttle H L W. Relaxed conditions for radial-basis function networks to be universal approximators. Neural Networks, 2003,16: 1019-1028.
    
    [88] Chen T P, Chen H. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 1995,6(4): 911-917.
    
[89] Lehtokangas M, Saarinen J. Centroid-based multilayer perceptron networks. Neural Processing Letters, 1998, 7: 101-106.
    
    [90] Chen T P, Wu X W. Characteristics of activation function in Sigma-Pi neural networks. Journal of Fudan University, 1997, 36(6): 639-644.
    
    [91]夏道行,吴卓人,严绍宗等著.实变函数与泛函分析.北京:科学出版社,1985.
    
    [92]刘斯铁尔尼克,索伯列夫著.杨从仁译.泛函分析概要.北京:科学出版社,1985.
    
[93]南东.RBF和MLP神经网络逼近能力的几个结果:(博士学位论文).大连:大连理工大学,2007.
    
[94]蒋传海.神经网络中的逼近问题及其在系统识别中的应用.数学年刊,2000,21A:417-422.
    
[95] Hornik K. Approximation capabilities of multilayer feedforward networks. Neural Networks, 1991, 4: 251-257.
    
    [96] Attali J G , Pagès G. Approximations of functions by a multilayer perceptron: a new approach. Neural Networks, 1997, 10: 1069-1081.
    
    [97] Chui C K, Li X. Approximation by ridge functions and neural networks with one hidden layer. Journal of Approximation Theory, 1992,70: 131-141.
    
[98] Cheney W, Light W. A Course in Approximation Theory. 北京:中国机械出版社,2003.
    
[99] Huang G B, Chen L, Siew C K. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 2006, 17(4): 879-892.
    
    [100] Huang G, Chen L. Convex incremental extreme learning machine. Neurocomputing, 2007, 70: 3056-3062.
    
    [101]张恭庆,林源渠.泛函分析讲义.北京:北京大学出版社,2003.
    
[102] Chen T P, Chen H. Denseness of radial-basis functions in L^2(R^n) and its applications in neural networks. Chinese Annals of Mathematics, 1996, 17B(2): 219-226.
