Research on Chaotic Time Series Prediction and Reservoir-Based Machine Learning Methods
Abstract
Chaos theory is a remarkable branch of modern science and technology: it reveals the laws that govern the inner workings of complex systems. Complex systems are typically highly nonlinear and sensitively dependent on initial conditions, so that even minute differences in the initial state can lead to radically different system trajectories. Yet these seemingly erratic natural phenomena all obey an underlying regularity, namely the order and determinism that chaos theory uncovers. Over the past two decades, researchers worldwide have worked intensively on the prediction of chaotic time series and produced a rich body of results. Most existing methods, however, remain at the theoretical stage and ignore the uncertainty present in real measurements; at the same time, the extraction of prediction models lacks an effective mechanism for handling nonlinearity, and generalization and robustness remain problematic. As a machine learning method designed for dynamical systems, the reservoir approach performs remarkably well on chaotic time series prediction; its prediction mechanism has drawn wide research interest in recent years, yet some of its properties still lack a satisfactory explanation. This thesis therefore studies reservoir-based prediction methods for chaotic time series, aiming to elucidate the reservoir's nonlinear processing mechanism and to explore new machine learning methods. The main research contents are as follows:
    1. Analysis and construction of reservoir prediction models for chaotic time series. For deterministic chaotic series, the iterative reservoir-based prediction method performs very well, but its structural settings lack a sound explanation, and to date it has been difficult to apply to noisy chaotic time series. To address these problems, this thesis first proves that such models can approximate the state trajectories of nonlinear systems, and examines the arbitrariness of the initial-state setting. Reservoir models are then divided into three classes: the general state-feedback structure, the output-feedback structure, and the feed-forward (static) structure; the well-known iterative prediction method can be analyzed as an instance of the output-feedback structure. A further comparison of the three structures leads to a direct prediction method for chaotic series, built on the general state-feedback structure, which relates the prediction origin to the prediction horizon directly. Unlike the existing iterative methods, the stability of the proposed direct predictor can be guaranteed in advance, thereby avoiding the stability and error-accumulation problems caused by closing an additional feedback loop around the network (a minimal code sketch of such a direct predictor is given after this list).
    2. A regularized learning method for reservoirs. Existing reservoir learning methods are seriously ill-posed: the singular values of the coefficient matrix spread almost continuously, the condition number is large, and the resulting output weights have large magnitudes, all of which undermine the reservoir's practical use. To remedy this, the thesis proposes a regularized learning method for reservoirs, realized either by singular value truncation or by a penalty approach. The truncation method operates directly on the ill-conditioned coefficient matrix, discarding the small singular values to resolve the ill-posedness; the penalty method takes the form of ridge regression, which improves the matrix to be factorized by making it symmetric positive definite, so that it can be solved efficiently by Cholesky factorization or Gaussian elimination (both solvers are sketched after this list). The thesis also examines the theory of applying regularization to the prediction of noisy chaotic series: assuming the noise in the time series is bounded, the worst-case prediction error induced by the noise is derived from the errors-in-variables viewpoint, and minimizing this error yields a robust optimal prediction model for noisy chaotic series, which takes the penalized regularization form.
    3. A kernel-free nonlinear support vector machine model based on the reservoir method. Conventional kernel methods implement a static mapping; a recurrent structure is hard to realize with them, so they cannot process dynamic patterns directly. The reservoir functions as a "recurrent kernel" and lends itself well to dynamical system identification. Combining the characteristics of the reservoir with the machinery of classical support vector machines, the thesis proposes a nonlinear support vector machine that does not rely on a kernel: the Support Vector Echo-State Machine (SVESM). Rather than constructing an implicit feature space through a kernel, an SVESM uses a randomly generated reservoir to handle nonlinear system modeling and computes in the high-dimensional reservoir state space. This makes it easy to implement structural risk minimization and to introduce different cost functions for different problems; with a robust loss function, SVESMs can handle prediction problems that contain outliers. SVESMs can operate in both recurrent and feed-forward modes. Compared with conventional recurrent neural networks, SVESMs in recurrent mode are easy to train, free of local minima, accurate, and strong in generalization; in feed-forward mode, SVESMs can be applied to static pattern recognition, with hyperparameters and capacity control similar to those of classical support vector machines but with the same form as conventional feed-forward networks, thereby establishing an intrinsic link between neural networks and support vector machines (an illustrative sketch follows this list).
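A minimal Python sketch of the direct predictor described in item 1, for illustration only: it is not the thesis implementation, and names such as make_reservoir, spectral_radius and washout are our own choices. A random echo state reservoir is driven by the series, and a least-squares readout maps the state at the prediction origin directly to the value one prediction horizon ahead, so no output-feedback loop is closed and no error accumulates.

import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.8):
    # Random input and reservoir weights, rescaled into the echo state regime.
    W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()
    return W_in, W

def run_reservoir(W_in, W, u):
    # Drive the reservoir with the input sequence u (T x n_in); return states (T x n_res).
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

def fit_direct_predictor(series, horizon, n_res=200, washout=100):
    # Train a readout that maps the state at time t directly to series[t + horizon].
    u = series[:-horizon].reshape(-1, 1)
    W_in, W = make_reservoir(1, n_res)
    X = run_reservoir(W_in, W, u)[washout:]
    y = series[horizon:][washout:]
    W_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # plain solve; item 2 regularizes it
    return W_in, W, W_out

Given W_out, the h-step-ahead prediction at time t is simply W_out @ x_t, where x_t is the reservoir state after feeding the series up to t; since there is no closed output loop, stability depends only on the fixed reservoir.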
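The two regularized readout solvers of item 2 can be stated in a few lines; this is a hedged sketch under the standard formulations of truncated SVD and ridge regression. The cutoff and the ridge parameter lam are illustrative values, and SciPy's Cholesky routines stand in for whatever solver the thesis actually uses.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def tsvd_readout(X, y, cutoff=1e-3):
    # Truncation: discard singular values below cutoff * s_max to cure ill-posedness.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > cutoff * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

def ridge_readout(X, y, lam=1e-4):
    # Penalty: X'X + lam*I is symmetric positive definite, so Cholesky applies.
    A = X.T @ X + lam * np.eye(X.shape[1])
    return cho_solve(cho_factor(A), X.T @ y)

Either function can replace the plain lstsq call in fit_direct_predictor above.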
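The robust-optimality claim at the end of item 2 is consistent with the classical worst-case least-squares identity; assuming (as one possible reading of the bounded-noise model) a spectral-norm bound rho on the perturbation of the state matrix,

\min_w \max_{\|\Delta X\| \le \rho} \|(X + \Delta X)w - y\|_2 = \min_w \left( \|Xw - y\|_2 + \rho \|w\|_2 \right),

whose minimizer has exactly the penalized, ridge-like form used above. The thesis's own errors-in-variables derivation may differ in detail.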
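Finally, an illustrative reading of the SVESM idea in item 3, reusing make_reservoir and run_reservoir from the first sketch and assuming scikit-learn's LinearSVR as a stand-in for the thesis's own optimizer: the random reservoir supplies an explicit feature map, so a linear support vector regressor with an epsilon-insensitive, outlier-robust loss is trained directly in the reservoir state space, and no kernel is needed.

from sklearn.svm import LinearSVR

def fit_svesm(series, horizon, n_res=200, washout=100, C=10.0, epsilon=0.01):
    # Reservoir states form the explicit feature space of the kernel-free SVM.
    u = series[:-horizon].reshape(-1, 1)
    W_in, W = make_reservoir(1, n_res)
    X = run_reservoir(W_in, W, u)[washout:]
    y = series[horizon:][washout:]
    # The epsilon-insensitive loss keeps the fit insensitive to outliers in y.
    return LinearSVR(C=C, epsilon=epsilon).fit(X, y)

Working in feed-forward mode would simply replace the reservoir trajectory with states computed from static input patterns, mirroring the hyperparameter and capacity control of a classical SVM.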
