A New FPGA-Based Hardware Implementation Method for SVM
Abstract
Support vector machine (SVM) is a machine learning theory based on the principle of structural risk minimization rather than empirical risk minimization. Drawing on statistical learning theory, machine learning, and neural networks, SVM has been shown to minimize structural risk while achieving good generalization. Research on SVM to date has concentrated mainly on theory and on optimizing the algorithms; by comparison, work on applications and on realizing the algorithms is scarce, with only limited experimental reports so far. Moreover, most SVM algorithms can only be realized in software and are unsuited to analog hardware implementation, which clearly restricts the practical application of SVM.
     Recently, many scholars at home and abroad have proposed neural network structures for solving the support vector machine problem; among these, the single-layer recurrent neural network and its improved variants are the most representative. Compared with existing two-layer networks, the single-layer structure has lower complexity and converges exponentially to a stable solution.
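As a minimal, hedged illustration of such recurrent dynamics (a generic gradient flow, not the specific network analyzed in this thesis; the matrix and vector values are invented for the example), the sketch below Euler-integrates du/dt = d - Ku. For a symmetric positive-definite K, the state converges exponentially to the solution of Ku = d, which is how a single-layer recurrent network settles onto the SVM solution.

```python
# Generic single-layer recurrent dynamics du/dt = d - K*u (illustrative
# values, not the thesis's network): for symmetric positive-definite K,
# the state u(t) converges exponentially to the solution of K u = d.

def simulate(K, d, steps=2000, dt=0.01):
    """Euler-integrate the gradient flow du/dt = d - K u."""
    n = len(d)
    u = [0.0] * n
    for _ in range(steps):
        # residual d - K u plays the role of one pass through the layer
        r = [d[i] - sum(K[i][j] * u[j] for j in range(n)) for i in range(n)]
        u = [u[i] + dt * r[i] for i in range(n)]
    return u

K = [[2.0, 0.5], [0.5, 1.0]]   # assumed SPD matrix for the demo
d = [1.0, 1.0]
u = simulate(K, d)             # u -> K^{-1} d = [2/7, 6/7]
```

The convergence rate is governed by the smallest eigenvalue of K, which is why positive definiteness of the kernel matrix matters for these networks.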
     With the development of FPGA technology, the parallelism and rapid reconfigurability of FPGAs can be exploited to accommodate the weights and topology of a neural network, and FPGA-based reconfigurable computing architectures are well suited to implementing artificial neural networks. FPGAs have therefore gradually become an effective platform for neural network implementation. On this basis, using an FPGA to implement the neural network form of a support vector machine offers strong flexibility and preserves the network's parallelism, guaranteeing processing speed while remaining suitable for large, complex network structures.
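At the level of a single neuron, the FPGA datapath is essentially a fixed-point multiply-accumulate (MAC). The following sketch is a generic software model of a serial MAC, accumulating one product per clock cycle in an assumed Q-format with 8 fraction bits; it illustrates the area-versus-latency trade mentioned above, not the thesis's actual datapath.

```python
# Software model of a serial fixed-point MAC: one product accumulated
# per 'clock cycle'. FRAC_BITS (the Q-format fraction width) is an
# assumption for illustration, not a parameter from the thesis.

FRAC_BITS = 8

def to_fixed(x):
    """Quantize a real value to an integer with FRAC_BITS fraction bits."""
    return int(round(x * (1 << FRAC_BITS)))

def serial_mac(weights, inputs):
    """Accumulate sum(w*x) serially, as a one-multiplier FPGA neuron would."""
    acc = 0
    for w, x in zip(weights, inputs):
        acc += to_fixed(w) * to_fixed(x)   # one MAC per cycle
    # products carry 2*FRAC_BITS fraction bits; rescale to a real value
    return acc / float(1 << (2 * FRAC_BITS))

result = serial_mac([0.5, -0.25, 1.0], [1.0, 2.0, 0.5])  # 0.5 - 0.5 + 0.5
```

A fully parallel design would instantiate one multiplier per weight; the serial form reuses a single multiplier over n cycles, trading latency for area.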
     This dissertation presents a new FPGA architecture that implements the dynamic-neural-network least squares support vector machine (LS-SVM) with serial computation. Compared with existing parallel implementations, it combines the parallelism of the dynamic neural network with the simplified constraints of the LS-SVM: the architecture maintains computational speed while markedly improving the utilization of hardware resources, so it can accommodate large-scale training sets. Experimental results show that, owing to its serial-computation, parallel-transmission character, the design reduces FPGA area without a noticeable drop in speed, making the serial architecture more flexible and less demanding of hardware resources.
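The simplification that makes such an architecture workable is that the LS-SVM replaces the standard SVM's inequality constraints with equality constraints, so training reduces to a single linear system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] in the bias b and support values alpha. The pure-Python sketch below shows that reduction (linear kernel, two-point toy data, plain Gaussian elimination; the gamma value and data are assumptions, and the FPGA datapath itself is not modeled).

```python
# Hedged sketch of LS-SVM training: equality constraints reduce the
# problem to one linear system, solved here by Gaussian elimination.
# Toy data and gamma are illustrative, not taken from the thesis.

def solve(A, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_train(X, y, gamma=10.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y], linear kernel."""
    n = len(X)
    K = [[X[i] * X[j] + (1.0 / gamma if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    A = [[0.0] + [1.0] * n] + [[1.0] + K[i] for i in range(n)]
    sol = solve(A, [0.0] + list(y))
    return sol[0], sol[1:]          # bias b, support values alpha

def lssvm_predict(X, alpha, b, x):
    """Decision value f(x) = sum_i alpha_i * <x_i, x> + b."""
    return sum(a * xi * x for a, xi in zip(alpha, X)) + b

X, y = [-1.0, 1.0], [-1.0, 1.0]     # assumed 1-D two-point training set
b, alpha = lssvm_train(X, y)
```

Every row of the system is again a sequence of multiply-accumulates over the kernel matrix, which is the operation the serial FPGA structure reuses one multiplier to compute.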
