Improving the Accuracy of Density Functional Theory Calculations of Absorption Energies: Neural Networks and Genetic Algorithms
Abstract
The electronic-transition absorption energy is an important physical property of a molecule: it encodes the molecule's intrinsic structural information and electronic properties, so accurately predicting absorption energies is an important problem in computational chemistry. Quantum chemical methods have moved beyond merely reproducing experimental values; they can accurately predict absorption energies when experimental values are unknown or uncertain. Not all computed results are sufficiently accurate, however, particularly for complex molecules or larger systems, a limitation caused mainly by the inherent approximations adopted in the computational methods themselves. To address this problem, simple yet effective schemes for correcting the errors of theoretical calculations are sought.
     In this thesis, neural networks, genetic algorithms, neural network ensembles, and k-nearest-neighbors methods are applied to correct the quantum chemical results for 150 small organic molecules and thereby improve the accuracy with which electronic-spectrum absorption energies are computed. These methods provide a new research tool for accurately predicting molecular properties and extend the reliability and applicability of theoretical methods.
     The work comprises the following parts:
     1. Absorption energies of the UV-visible spectra of small organic molecules are computed with the quantum chemical TDDFT/B3LYP method, and a combination of a genetic algorithm and a BP neural network (GANN) is used to improve their accuracy. In the GANN approach, the GA searches for the optimal initial connection weights of the neural network, and BP then trains the network further to obtain the optimal final weights. The method was applied to correct the theoretical errors in the spectral absorption energies of 150 organic molecules. With the BPN correction, the root-mean-square (RMS) error of the B3LYP/6-31G(d) calculations drops from 0.47 eV to 0.22 eV; with the GANN correction it drops to 0.16 eV. GANN avoids the tendency of the conventional BP algorithm to become trapped in local minima, and it outperforms the plain BP correction in improving the accuracy of DFT calculations.
     2. A neural network ensemble (NNE) is used to improve the generalization ability of a single neural network. Bagging generates the six individual networks of the ensemble, and their outputs are combined by simple averaging (NNEA) or weighted averaging (NNEW). The experimental data for the 150 molecules are randomly split into a training set of 120 molecules and a test set of 30. For the 120 training molecules, the BPN, NNEA, and NNEW corrections reduce the RMS error of the B3LYP/6-31G(d) calculations from 0.48 eV to 0.20, 0.22, and 0.22 eV, respectively; for the 30 test molecules, the error falls from 0.41 eV to 0.26, 0.20, and 0.18 eV. The test-set results show that an ensemble can reduce the generalization error of a single network.
     3. A combination of a neural network ensemble with the k-nearest-neighbors method (NNEKNN) is proposed for accurately predicting the electronic-transition absorption energies of the 150 organic molecules. A conventional feed-forward neural network is memoryless: once training ends, all information about the inputs is stored in the connection weights and the input data are no longer needed. The KNN method, by contrast, is memory-based: it stores the entire database of inputs and makes predictions from local approximations built on the stored cases. For neighbor selection, NNEKNN uses the Euclidean distance between the ensemble outputs of a case and those of the training-set cases. For the 120 training molecules, the NNEKNNA and NNEKNNW corrections both reduce the error from 0.48 eV to 0.16 eV; for the 30 test molecules, the error falls from 0.41 eV to 0.14 and 0.10 eV, respectively. The results show that NNEKNN effectively reduces the errors of density functional theory and predicts spectral absorption energies more accurately than the BPN and NNE methods.
Absorption energy is a significant physical property of a molecule, reflecting its inherent structural information and electronic properties. In this regard, the accurate prediction of absorption energies is of great significance in computational chemistry. Quantum chemical methods have been developed beyond the level of just reproducing experimental data and can now accurately predict absorption energies that are unknown or uncertain experimentally. However, the calculated results are not accurate enough for all systems, especially for large systems. This is caused by the inherent approximations adopted in first-principles methods. To resolve this, a simple yet efficient way to correct such errors is desired.
     In the present work, genetic algorithm, neural network, neural network ensemble, and k-nearest-neighbors approaches have been applied to improve the accuracy of quantum chemical calculations of the absorption energies of 150 small organic molecules. These combined methods largely eliminate the systematic errors of the theoretical calculations and improve the accuracy of density functional theory (DFT) for absorption energies, providing a novel tool for predicting molecular properties.
     Our work has focused on the following aspects:
     1. A combined genetic-algorithm and neural-network correction approach (GANN) has successfully improved the accuracy of absorption energies obtained from quantum chemical calculations of UV-visible absorption spectra. The raw absorption energies are evaluated with the TDDFT/B3LYP method. In the GANN approach, the GA searches for the optimal initial synaptic weights of neural networks of pre-specified topology, while BP further trains the networks to find the optimal final weights. Applied to the calculated absorption energies of the 150 molecules, the traditional BP neural-network (BPN) correction reduces the RMS deviation of the TDDFT/B3LYP/6-31G(d) results from 0.47 eV to 0.22 eV, whereas the GANN correction reduces it from 0.47 eV to 0.16 eV. The combined GANN approach avoids the trapping at local minima that afflicts the traditional BPN approach, and thus improves the DFT results beyond what BPN achieves.
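The two-stage scheme above can be sketched as follows. This is a minimal, hypothetical Python illustration, not the thesis's actual implementation: a genetic algorithm seeds the weights of a tiny network, and numerical gradient descent stands in for full backpropagation. The toy target function, network size, and GA settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    """Tiny 1-input, 3-hidden-unit, 1-output tanh MLP; w packs all 10 parameters."""
    w1, b1, w2, b2 = w[:3], w[3:6], w[6:9], w[9]
    h = np.tanh(np.outer(x, w1) + b1)           # (n, 3) hidden activations
    return h @ w2 + b2                          # (n,) predicted corrections

def rms(w, x, y):
    return np.sqrt(np.mean((forward(w, x) - y) ** 2))

# Hypothetical target: a smooth systematic deviation between theory and experiment.
x = np.linspace(-1.0, 1.0, 40)
y = 0.3 * np.sin(2.0 * x) + 0.1 * x

# Stage 1: the GA searches weight space for good initial weights.
pop = rng.normal(size=(30, 10))
for _ in range(60):
    fit = np.array([rms(p, x, y) for p in pop])
    parents = pop[np.argsort(fit)[:10]]                      # truncation selection
    children = parents[rng.integers(0, 10, 20)] \
        + rng.normal(0.0, 0.1, (20, 10))                     # Gaussian mutation
    pop = np.vstack([parents, children])
w = min(pop, key=lambda p: rms(p, x, y)).copy()
err_ga = rms(w, x, y)

# Stage 2: "BP" refinement; here plain descent on a central-difference gradient.
def grad(w, eps=1e-5):
    g = np.zeros_like(w)
    for i in range(w.size):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        g[i] = (rms(wp, x, y) - rms(wm, x, y)) / (2.0 * eps)
    return g

best_w, best_err = w.copy(), err_ga
for _ in range(400):
    w = w - 0.05 * grad(w)
    e = rms(w, x, y)
    if e < best_err:                            # keep the best weights seen so far
        best_w, best_err = w.copy(), e
```

Keeping the best-so-far weights in stage 2 mirrors the point of the hybrid: the GA supplies a starting point away from poor local minima, and gradient training can only improve on it.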
     2. The generalization ability of a neural network can be substantially improved by an averaging technique, the neural network ensemble (NNE). Bagging on the training set generates six individual base BP networks, and their predictions are combined by simple averaging (NNEA) or weighted averaging (NNEW). The experimental data for the 150 organic molecules are randomly divided into a training set of 120 molecules and a testing set of 30. The BPN, NNEA, and NNEW approaches reduce the RMS deviations of the 120 training absorption energies from 0.48 eV to 0.20, 0.22, and 0.22 eV, respectively; for the 30 testing energies, the deviations fall from 0.41 eV to 0.26, 0.20, and 0.18 eV. Statistical tests show that the generalization errors of the NNE approaches are significantly lower than those of the TDDFT/B3LYP method, and lower still than those of BPN.
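The bagging-and-averaging procedure can be sketched like this. As an assumption-laden stand-in, ridge regression on random tanh features replaces the BP base networks, the data are synthetic, and inverse-training-error weights are one plausible choice for NNEW (the thesis does not specify its weighting scheme here).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: descriptors X and target deviations y (not real spectra).
X = rng.uniform(-1.0, 1.0, (150, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=150)
X_tr, y_tr, X_te, y_te = X[:120], y[:120], X[120:], y[120:]

W_feat = rng.normal(size=(3, 20))               # shared random nonlinear features

def fit(Xs, ys):
    """Base learner: ridge regression on tanh features (a stand-in for a BP net)."""
    Phi = np.tanh(Xs @ W_feat)
    return np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(20), Phi.T @ ys)

def predict(beta, Xs):
    return np.tanh(Xs @ W_feat) @ beta

# Bagging: each of the six members trains on a bootstrap resample (with replacement).
members = [fit(X_tr[idx], y_tr[idx])
           for idx in (rng.integers(0, 120, 120) for _ in range(6))]

preds = np.array([predict(b, X_te) for b in members])        # (6, 30)

# NNEA-style simple averaging vs. NNEW-style weighted averaging; the weights
# here are inverse training RMS errors normalized to sum to one (an assumption).
simple = preds.mean(axis=0)
err_tr = np.array([np.sqrt(np.mean((predict(b, X_tr) - y_tr) ** 2)) for b in members])
wts = (1.0 / err_tr) / (1.0 / err_tr).sum()
weighted = wts @ preds

rms_members = np.sqrt(np.mean((preds - y_te) ** 2, axis=1))  # individual test errors
rms_simple = np.sqrt(np.mean((simple - y_te) ** 2))
rms_weighted = np.sqrt(np.mean((weighted - y_te) ** 2))
```

One useful property holds regardless of the data: because both combinations are convex (weights are non-negative and sum to one), the combined squared error can never exceed that of the worst member, which is why ensembling tends to stabilize generalization.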
     3. We propose an ensemble of NNs combined with the KNN approach (NNEKNN) to improve the accuracy of DFT calculations. A traditional feed-forward NN is a memoryless approach: after training is complete, all information about the input patterns is stored in the network weights and the input data are no longer needed. By contrast, the k-nearest-neighbors (KNN) approach is memory-based: it keeps the entire database of examples in memory, and its predictions are local approximations built from the stored examples. The approach is applied to predict the optical absorption energies of the 150 organic molecules, using the Euclidean distance between the ensemble responses of the analyzed case and those of the training-set cases to select the nearest neighbors. The NNEKNNA and NNEKNNW approaches both reduce the RMS deviations for the training set of 120 organic molecules from 0.48 eV to 0.16 eV, while for the testing set of 30 organic molecules the deviations fall from 0.41 eV to 0.14 and 0.10 eV, respectively. Comparison with the BPN- and NNE-corrected values demonstrates the feasibility and effectiveness of the NNEKNN approach in reducing the calculation errors of DFT; it can indeed predict absorption energies with higher accuracy.
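The memory-based correction can be illustrated as follows, under stated assumptions: six fixed random tanh maps stand in for trained ensemble members, the data are synthetic, and adding the averaged residuals of the k nearest stored cases is a hedged guess at the NNEKNN combination rule. What the sketch does take from the text is the key design choice — neighbors are selected by Euclidean distance in ensemble-response space, not in raw descriptor space.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for six already-trained ensemble members: fixed random tanh maps.
W = [rng.normal(size=(3, 8)) for _ in range(6)]
v = [rng.normal(size=8) / 8.0 for _ in range(6)]

def member_outputs(X):
    """(n, 6) matrix of the individual member predictions for each sample."""
    return np.column_stack([np.tanh(X @ Wi) @ vi for Wi, vi in zip(W, v)])

X = rng.uniform(-1.0, 1.0, (150, 3))
y = np.sin(X[:, 0]) - 0.4 * X[:, 2]
X_tr, y_tr, X_te = X[:120], y[:120], X[120:]

# Memory-based part: store the training cases' ensemble responses together
# with the residuals left after the plain ensemble average.
R_tr = member_outputs(X_tr)                       # (120, 6) stored "memory"
resid_tr = y_tr - R_tr.mean(axis=1)

def nneknn_predict(Xq, k=5):
    Rq = member_outputs(Xq)
    base = Rq.mean(axis=1)                        # plain ensemble average
    # Euclidean distance between ensemble responses, not raw descriptors.
    d = np.linalg.norm(Rq[:, None, :] - R_tr[None, :, :], axis=2)   # (nq, 120)
    nn = np.argsort(d, axis=1)[:, :k]             # k nearest stored cases
    return base + resid_tr[nn].mean(axis=1)       # add the local residual estimate

pred = nneknn_predict(X_te)
```

A sanity check on the memory-based behavior: with k=1, a training case's nearest neighbor in response space is itself (distance exactly zero), so the correction reproduces the stored value — exactly the "local approximation of the stored examples" the text describes.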
引文
[1] Schaefer H F. Methods of Electronic Structure Theory[M] New York: Springer, 1977.
    [2] Cramer C J. Essentials of Computational Chemistry: Theories and Models[M]. West Sussex: Wiley, 2002.
    [3] Parr R G, Yang W. Density-Functional Theory of Atoms and Molecules[M]. New York: Oxford University Press, 1989.
    [4] Koch W, Holthausen M C A. Chemist’s Guide to Density Functional Theory[M]. Weinheim, Germany: Wiley-VCH, 2000.
    [5] Dunlap B I. In: Labanowski J K, Andzelm J W, Eds. Density Functional Methods in Chemistry[C], Springer-Verlag, New York, 1991. 49-60.
    [6] Hohenberg P, Kohn W. Inhomogeneous Electron Gas[J]. Phys Rev, 1964, 136: B864-871.
    [7] Ziegler T. Approximate density functional theory as a practical tool in molecular energetics and dynamics[J]. Chem Rev, 1991, 91: 651 667.
    [8] Becke A D. Density-functional thermochemistry III. The role of exact exchange[J]. J Chem Phys, 1993, 98(7): 5648-5652.
    [9] Lee C, Yang W, Parr R G. Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density[J]. Phys Rev B, 1988, 37(2): 785-789.
    [10] Becke A D. Density-functional exchange-energy approximation with correct asymptotic behavior[J]. Phys Rev A, 1988, 38(6): 3098-3100.
    [11] Jursic B S. Reliability of hybrid density theory—semiempirical approach for evaluation of bond dissociation energies[J]. J Chem Soc, Perkin Trans. 2, 1999: 369-372.
    [12] Durbeej B, Eriksson L A. Photodegradation of substituted stilbene compounds: what colors aging paper yellow[J]. J Phys Chem A, 2005, 109 (25): 5677-5682.
    [13] Chen C C, Bozzelli J W. Structures, Intramolecular Rotation Barriers, and Thermochemical Properties of Methyl Ethyl, Methyl Isopropyl, and Methyl tert-Butyl Ethers and the Corresponding Radicals[J]. J Phys Chem A, 2003, 107(22): 4531-4546.
    [14] Irikura K K, Frurip D J. Computational Thermochemistry: Prediction and Estimation of Molecular Thermodynamics[M]. Washington, DC: American Chemical Society, 1998.
    [15]Jacquemin D, Wathelet V, Perpete E A. Ab initio investigation of the n ->pi* transitions in thiocarbonyl dyes[J]. Journal of Physical Chemistry A, 2006, 110(29): 9145-9152.
    [16]Jacquemin D, Perpete E A, Vydrov O A, et al. Assessment of long-range corrected functionals performance for n ->pi(*) transitions in organic dyes[J]. Journal of Chemical Physics, 2007, 127(9): 6.
    [17]Duan X M, Song G L, Li Z H, et al. Accurate prediction of heat of formation by combining Hartree-Fock/density functional theory calculation with linear regression correction approach[J]. J Chem Phys, 2004, 121(15): 7086-7095.
    [18]Duan X M, Li Z H, Hu H R, et al. Linear regression correction to first principle theoretical calculations - Improved descriptors and enlarged training set[J]. Chem Phys Lett, 2005, 409(4-6): 315-321.
    [19] Ripley B D. Pattern recognition via neural networks[M]. New York: Oxford University Press.1996.
    [20] Jain A K, Duin R P W, Mao J C. Statistical pattern recognition: A review[J]. IEEE Tranactions on Pattern and machine Intelligence, 2000, 22 (1): 4-37.
    [21] Boddy L, Wilkins M F, Morris C W. Pattern recognition in flow cytometry[J]. Cytometry , 2001, 44(3): 195-209.
    [22]任宏萍,陆建东,尹中明.一种非线性优化的神经网络[J].计算机研究与发展,1995,32(06):50-54.
    [23]王美玲,张长江,付梦印等.一种用于非线性函数逼近的小波神经网络算法仿真[J].北京理工大学学报,2002,22(3):274—278.
    [24]李银国,张邦礼,曹长修.小波神经网络及其结构设计方法[J].模式识别与人工智能,1997,10(3):197-205.
    [25]Liu G Z, Mi Z T. Prediction of pulsation frequency of pulsing flow in trickle-bed reactors usingartificial neural network[J]. Chemical Engineering Science, 2004, 59(24): 5787-5794.
    [26] Qu N, Wang L H, Zhu M C, et al. Radial basis function networks combined with genetic algorithmapplied to nondestructive determination of compound erythromycin ethylsuccinate powder[J].Chemometrics and Intelligent Laboratory Systems, 2008, 90(2): 145-152.
    [27] Raff L M, Malshe M, Hagan M, et al. Ab initio potential-energy surfaces for complex, multichannelsystems using modified novelty sampling and feedforward neural networks [J]. J Chem Phys, 2005,122(8): 084104-1-084104-16.
    [28] Le H M, Raff L M. Cis->trans, trans->cis isomerizations and N-O bond dissociation of nitrous acid(HONO) on an ab initio potential surface obtained by novelty sampling and feed-forward neural networkfitting[J]. J Chem Phys, 2008, 128(19): 194310-1-194310-11.
    [29]Manzhos S, Carrington Jr T. Using neural networks, optimized coordinates, and high-dimensionalmodel representations to obtain a vinyl bromide potential surface[J]. J Chem Phys, 2008, 129 (22):224104-1-224104-8.
    [30]Homer J, Generalis S C, Robson J H. Artificial neural networks for the prediction of liquid viscosity,density, heat of vaporization, boiling point and Pitzer's acentric factor - Part 1: Hydrocarbons [J]. PhysChem Chem Phys, 1999, 1(17): 4075-4081.
    [31] Souza L E S, Canuto S. Efficient estimation of second virial coefficients of fused hard-spheremolecules by an artificial neural network[J]. Phys Chem Chem Phys, 2001, 3(21): 4762-4768.
    [32] Wang X J, Wong L H, Hu L H, et al. Improving the accuracy of density-functional theory calculation:The statistical correction approach[J]. J Phys Chem A, 2004, 208(40): 8514-8525.
    [33]Hu L H, Wang X J, Wong L H, et al. Combined first-principles calculation and neural-networkcorrection approach for heat of formation [J]. J Chem Phys., 2005,119(22): 11501-11507.
    [34] Duan X M, Li Z H, Song G L, et al. Neural network correction for heats of formation with a largerexperimental training set and new descriptors [J]. Chemical Physics Letters, 2005, 410(1-3): 125-130.
    [35] Wang X J, Hu L H, Wong L H, et al. A Combined First-principles Calculation and Neural NetworksCorrection Approach for Evaluating Gibbs Energy of Formation[J]. Molecular Simulation, 2004, 30(1):9-15.
    [36] Wu J M, Xu X. The X1 method for accurate and efficient prediction of heats of formation [J]. J ChemPhys, 2007, 127 (21): 214105-214113.
    [37] Wu J M, Xu X. Improving the B3LYP bond energies by using the X1 method[J]. J Chem Phys, 2008,129(16): 164103-1-164103-11.
    [38] Wu J M, Xu X. Accurate Prediction of Heats of Formation by a Combined Method of B3LYP andNeural-network Correction [J]. J. Comput. Chem, 2009, 30: 1424-1444.
    [39]Alexandridis A, Patrinos P, Sarimveis H, et al. A two-stage evolutionary algorithm for variableselection in the development of RBF neural network models [J]. Chemometrics and Intelligent LaboratorySystems, 2005, 75(2): 149-162.
    [40]Arakawa M, Hasegawa K, Funatsu K. QSAR study of anti-HIV HEPT analogues based onmulti-objective genetic programming and counter-propagation neural network[J]. Chemometrics andIntelligent Laboratory Systems, 2006, 83(2): 91-98.
    [41]Fernandez M, Caballero J. Ensembles of Bayesian-regularized Genetic Neural Networks for modelingof acetylcholinesterase inhibition by huprines[J]. Chemical Biology & Drug Design, 2006, 68(4): 201-212.
    [42]Galvao RKH, Araujo M C U, Martins M D, et al. An application of subagging for the improvement ofprediction accuracy of multivariate calibration models [J]. Chemometrics and Intelligent LaboratorySystems, 2006, 81(1): 60-67.
    [43]Hemmateenejad B. Correlation ranking procedure for factor selection in PC-ANN modeling andapplication to ADMETox evaluation [J]. Chemometrics and Intelligent Laboratory Systems, 2005, 75(2):231-245.
    [44]Kim B, Kim S. GA-optimized backpropagation neural network with multi-parameterized gradients andapplications to predicting plasma etch data[J]. Chemometrics and Intelligent Laboratory Systems, 2005,79(1-2): 123-128.
    [45]Lee K T, Bhatia S, Mohamed A R, et al. Optimizing the specific surface area of fly ash-based sorbentsfor flue gas desulfurization[J]. Chemosphere, 2006, 62(1): 89-96.
    [46]Sattari M, Gharagheizi F. Prediction of molecular diffusivity of pure components into air: A QSPRapproach[J]. Chemosphere, 2008, 72(9): 1298-1302.
    [47]Votano J R, Parham M, Hall L H, et al. Prediction of aqueous solubility based on large datasets usingseveral QSPR models utilizing topological structure representation [J]. Chemistry & Biodiversity, 2004,1(11): 1829-1841.
    [48]Zhou P, Tian F F, Li Z L. A structure-based, quantitative structure-activity relationship approach forpredicting HLA-A*0201-restricted cytotoxic T lymphocyte epitopes[J]. Chemical Biology & Drug Design,2007, 69(1): 56-67.
    [49]Wodrich M D, Corminboeuf C. Reaction Enthalpies Using the Neural-Network-Based X1 Approach:The Important Choice of Input Descriptors [J]. J Phys Chem A, 2009, 113(13): 3285-3290.
    [50] Li H, Shi L L, Zhang M, et al. Improving the accuracy of density-functional theory calculation: Thegenetic algorithm and neural network approach[J]. J Chem Phys, 2007, 126: 144101-1-144101-8.
    [51] Gao T, Shi L L, Li H B, et al. Improving the accuracy of low level quantum chemical calculation forabsorption energies: the genetic algorithm and neural network approach[J]. Phys Chem Chem Phys., 2009,11(25): 5124-5129.
    [52] Hansen L K, Salamon P. Neural network ensembles[J]. IEEE Transactions on Pattern Analysis andMachine Intelligence, 1999, 12 (10): 993-1001.
    [53] Schapire R E, Singer Y. BoosTexter: A boosting-based system for text categorization [J]. MachineLearning, 2000, 39 (2- 3): 135- 168.
    [54] Huang F J, Zhou Z H, Zhang H J, et al. Pose Invariant Face Recognition[C]. In: Proceedings of the 4thIEEE International Conference on Automatic Face and Gesture Recognition, NY: IEEE, 2000. 245-250.
    [55]杨国亮,王志良,任金霞.采用AdaBoost算法进行面部表情识别[J].计算机应用,2005,25,(4):946—948.
    [56] Gutta S, Wechsler H. Face recognition using hybrid classifier systems[C]. In: Proc of the IEEEInternational Conference on Neural Networks, Washington, DC, 1996. 1017- 1022.
    [57] Gutta S, Huang J R J, Jonathon P, et al. Mixture of experts for classification of gender, ethnic origin,and pose of human faces[J]. IEEE Transactions on Neural Networks, 2000, 11 (4): 948-960.
    [58] Ceccarelli M, Petrosino A. Multi-feature adaptive classifiers for SAR image segmentation [J].Neurocomputing, 1997, 14 (4): 345- 363.
    [59] Shimshoni Y, Intrator N. Classification of Seismic Signals by Integrating Ensembles of NeuralNetworks[J]. IEEE Transactions on Signal Processing, 1998, 46(5): 1194-1201.
    [60]李晓梅,马树元,吴平东等.基于bagging的手写体数字识别系统[J].计算机工程与科学,2004,26(2):36-39.
    [61] Cotlet M, Hofkens J, Habuchi S, et al. Identification of different emitting species in the red fluorescentprotein DsRed by means of ensemble and single-molecule spectroscopy [J]. Proc. Natl. Acad. Sci. U.S.A.2001,98: 14398-14403.
    [62] Cotlet M, Hofkens J, Kohn F, et al. Collective effects in individual oligomers of the red fluorescentcoral protein DsRed[J]. Chem Phys Lett, 2001, 336: 415-423.
    [63] Bowen B P, Scruggs A, Enderlein J, et al. Implementation of Neural Networks for the Identification ofSingle Molecules [J]. J Phys Chem A, 2004, 108: 4799-4804.
    [64] Fernandez M, Tundidor-Camba A, Caballero J. Modeling of cyclin-dependent kinase inhibition by1H-pyrazolo [3,4-d] pyrimidine derivatives using Artificial Neural Networks Ensembles[J], J Chem InfModel, 2005, 45 (6): 1884-1895.
    [65] Agrafiotis D K, Cedeiio W, Lobanov V S. On the use of neural network ensembles in QSAR andQSPR[J], J Chem Info Comput Sci., 2002, 42 (4): 903-911.
    [66] Lucic B, Nadramija D, Basic I, et al. Toward Generating Simpler QSAR Models: NonlinearMultivariate Regression versus Several Neural Network Ensembles and Some Related Methods [J]. J ChemInf Comput. Sci, 2003, 43: 1094-1102.
    [67] Tetko IV. Associative Neural Network [J]. Neural Processing Letters, 2002, 16 (2): 187-199.
    [68] Dasarthy B V. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques[M]. LosAlamitos, CA: IEEE Computer Society Press, 1991.
    [69] Lawrence S, Tsoi A C, Back A D. Function approximation with neural networks and local methods:bias, variance and smoothness[C]. In: Bartlett P, Burkitt A, Williamson R, eds. Australian Conference onNeural Networks, Australian National University: Australian National University, 1996.16-21.
    [70] Tetko I V. Associative Neural Network, CogPrints Archive, cog00001441[EB/OL]. Available ashttp://cogprints.soton.ac.uk/documents/disk0/00/00/14/41/index.html, 2001.
    [71] Tetko IV. Associative Neural Network. Neural [J]. Processing Letters, 2002, 16: 187-199.
    [72]Tetko I. V. Neural Network Studies. 4. Introduction to Associative Neural Networks[J]. J Chem InfComput Sci, 2002, 42: 717-728.
    [73] Tetko I V, Tanchuk V Y. Application of Associative Neural Networks for Prediction of Lipophilicityin ALOGPS 2.1 Program[J]. J Chem Inf Comput Sci, 2002, 42: 1136-1145.
    [74]TurroN J.现代分子光化学[M].姚绍明等译.北京:科学出版社,1987.
    [75]张建成,王夺元.现代光化学[M].北京:化学工业出版社,2006.
    [76] Lash T D, Chandrasekar P. Synthesis of tetraphenyltetraacenaphthoporphyrin: A new highlyconjugated porphyrin system with remarkably red-shifted electronic absorption spectra[J]. J Am ChemSoc, 1996, 118(36): 8767-8768.
    [77] Loiseau F, Campagna S, Hameurlaine A, et al. Dendrimers constructed from porphyrin cores andcarbazole chromophores as peripheral units. Absorption spectra, luminescence properties, and oxidationbehaviour [J]. J Am Chem Soc, 2005, 127(32): 11352-11363.
    [78] Hutchinson G R, Ratner M A, Marks T J. Accurate Prediction of Band Gaps in Neutral HeterocyclicConjugated Polymers[J]. J Phys Chem A, 2002, 106(44): 10596-10605.
    [1] Born M, Oppenheimer J R. Zur quantentheorie der molekeln[J]. Ann Physik, 1927, 84(20): 457-484.
    [2] Born M, Huang K. Dynamical Theory of Crystal Lattices[M]. New York: Oxford University Press,1954.
    [3] Hartree D. Calculations of Atomic Structure [M]. Wiley, 1957.
    [4] Roothaan C C J. New developments in molecular orbital theory[J]. Rev Mod Phy, 1951. 23(2): 69-89.
    [5] John C G. The Transactional Interpretation of Quantum Mechanics [M]. News York: Department ofPhysics University of Washington, 1986.
    [6] Parr R, Yang W. Density functional theory of atoms and molecules[M]. New York: Oxford UniversityPress. 1989.
    [7] Koch W, Holthausen M C A. Chemist’s Guide to Density Functional Theory[M]. Weinheim, Germany:Wiley-VCH, 2000.
    [8] Dunlap B I. In: Labanowski J K, Andzelm J W, Eds. Density Functional Methods in Chemistry[C],Springer-Verlag, New York, 1991. 49-60.
    [9] Hohenberg P, Kohn W. Inhomogeneous Electron Gas[J]. Phys Rev, 1964, 136: B864-871.
    [10] Andzelm J, Sosa C, Eades R A. Theoretical Study of Chemical Reactions Using Density FunctionalMethods with Nonlocal Corrections[J], J Phys Chem, 1993, 97: 4664.
    [11] Dixon D A, Christe A O. Nitrosyl hypofluorite: local density functional study of a problem case fortheoretical methods[J], J Phys Chem, 1992, 96(3): 1018–1021.
    [12] Murray C W, Laming G J, Handy N C, et al. Structure and vibrational frequencies of diazomethylene(CNN) and diazasilene (SiNN) using nonlocal density functional theory[J], J Phys Chem, 1993, 97(9):1868–1871.
    [13] Holme T A, Troung T N. A test of density functional theory for dative bonding systems[J], ChemPhys Lett, 1993, 215(1-3): 53-57.
    [14] Theophilou A K, Gidopoulos N I. Density functional theory for excited states[J]. Int J Quantum Chem,1995, 56(4): 333-336.
    [15] Theophilou A K, The energy density functional formalism for excited states[J], J Phys, 1979, C12:5419-5430.
    [16] Hadjisavvas N, Theophilou A K, Rigorous formulation of Slater’s transition-state theory for excitedstates[J], Phys Rev, 1985, A32: 720-724.
    [17] Peukert V, A new approximation method for electron systems[J], J Phys, 1978, C11: 4945-4956.
    [18] Jamorski C, Casida M E, Salahub D R. Dynamic polarizabilities and excitation spectra from amolecular implementation of time-dependent density-functional response theory: N2 as a case study[J]. JChem Phys, 1996, 104(13): 5134-5147.
    [19] Marques M A L, Gross E K U. Time-dependent density functional theory[J]. Annu Rev Phys Chem,2004, 55: 427-455.
    [20] Petersilka M, Gossmann U J, Gross E K U. Excitation energies from time-dependentdensity-functional theory[J]. Phys Rev Lett, 1996, 76: 1212-1215.
    [21] HOLLAND J H. Adaptation in Nature and Artificial Systems[M], Ann Arbor:The University ofMichigan Press, 1975. 89-120.
    [22]王小平,曹立明.遗传算法-理论、应用与软件实现[M].西安交通大学出版社.2001.
    [23]蒋宗礼.人工神经网络导论[M].北京:高等教育出版社,2001.
    [24] Hebb D O. The Organization of Behavior[M], NY: John Wiley, 1949.
    [25] Mackay D J C. Bayesian interpolation[J]. Neural Computation, 1992, 4(3): 415-447.
    [26] Foresee F D, Hagan F D. Gauss-Newton approximation to Bayesian regularization[C]. Proceedings ofthe 1997 international joint conference on neural networks[C]. 1997. 1930-1935.
    [27] Hornik K M, Stinchcombe M, White H. Multilayer feedforward networks are universalapproximators[J]. Neural Networks, 1989, 2(5): 359-366.
    [28] Judd J S. Learning in networks is hard[C]. In: Proc the 1st IEEE International Conference on NeuralNetworks. San Diego, CA , 1987. 685- 692.
    [29] Baum E B, Haussler D. What size net gives valid generalization?[J]. Neural Computation, 1989, 1 (1):151- 160.
    [30] Hansen L K, Salamon P. Neural network ensembles[J]. IEEE Transactions on Pattern Analysis andMachine Intelligence, 1990, 12 (10): 993-1001.
    [31] Sollich P, Krogh A. Learning with ensembles: How over-fitting can be useful[C]. In: Touretzky D S,Mozer M C, Hasselmo M E, eds. Proc. of the Advances in Neural Information Processing Systems 8.Cambridge, MA: MIT Press, 1996. 190-196.
    [32] Breiman L. Bagging predictors[J]. Machine Learning, 1996, 24 (2): 123-140.
    [33] Breiman L. Bias, variance, and arcing classifiers[R]. Technical Report TR 460, Statistics Department,University of California, Berkeley, CA, 1996.
    [34] Perrone M. General averaging results for convex optimization[C]. In: Elman J L, Mozer M C,Smolensky P, et al. eds. Proc. of the 1993 Connectionist Models Summer School. Hillsdale, NJ: Erlbaum,1994. 364-371.
    [35] Huang F J, Zhou Z H, Zhang H J, et al. Pose Invariant Face Recognition[C]. In: Proceedings of the 4thIEEE International Conference on Automatic Face and Gesture Recognition, NY: IEEE, 2000. 245-250.
    [36] Shimshoni Y, Intrator N. Classification of Seismic Signals by Integrating Ensembles of NeuralNetworks[J]. IEEE Transactions on Signal Processing, 1998, 46(5): 1194-1201.
    [37] Fernández M, Tundidor-Camba A, Caballero J. Modeling of cyclin-dependent kinase inhibition by1H-pyrazolo [3,4-d] pyrimidine derivatives using Artificial Neural Networks Ensembles[J], J Chem InfModel, 2005, 45 (6): 1884-1895.
    [38] Agrafiotis D K, Cede?o W, Lobanov V S. On the use of neural network ensembles in QSAR andQSPR[J], J Chem Info Comput Sci., 2002, 42 (4): 903-911.
    [39] Schapire R E. The strength of weak learnability [J]. Machine Learning, 1990, 5 (2): 197—227.
    [40] Efron B, Tibshirani R J. An Introduction to the Bootstrap[M]. New York: Chapman & Hall, 1993.
    [41] Breiman L. Arcing classifiers [J]. Annals of Statistics, 1998, 26(3): 801—849.
    [42] Freund Y. Boosting a weak algorithm by majority[J]. Information and Computation, 1995, 121(2):256-285.
    [43] Freund Y, Schapire R E. A decision-theoretic geneneralization of on-line learning and an application toboosting [J]. Journal of Computer and System Sciences, 1997, 55 (1): 119-139.
    [44] Bauer E, Kohavi R. An empirical comparison of voting classification algorithms: Bagging, boosting,and variants [J]. Machine Learning, 1999, 36 (1-2): 105-139.
    [45]周志华,陈世福,神经网络集成[J],计算机学报,2002,25(1):1-8.
    [46] Opitz D, Maclin R. Popular ensemble methods: An empirical study [J]. Journal of ArtificialIntelligence Research, 1999, 11: 169-198.
    [47] Perrone M P, Cooper L N. When networks disagree: Ensemble method for neural networks. In:Mammone R J ed. Artificial Neural Networks for Speech and Vsision, New York: Chapman & Hall, 1993:126-142.
    [48] Opitz D W, Shavlik J W. Actively searching for an effective neural network ensemble[J]. ConnectionScience,1996, 8(3-4): 337-354.
    [49] Mitchell T M. Machine Learning[M]. New York: McGraw-Itill, 1997
    [50] Cover T M, Hart P E. Nearest neighbor pattern classification[J]. IEEE Transactions on InformationTheory, 1967, 13(3): 21-27.
    [51] Luo M, Bai X S, Xu G Y. Hierarchical similarity indexing for quadratic distance based on SVDtechnology[J]. Journal of Tsinghua University (Science & Technology), 2002, 42(1): 36-39.
    [1] Schaefer H F. Methods of Electronic Structure Theory[M] New York: Springer, 1977.
    [2] Parr R G, Yang W. Density-Functional Theory of Atoms and Molecules[M]. New York: Oxford University Press, 1989.
    [3] Cramer C J. Essentials of Computational Chemistry: Theories and Models[M]. West Sussex: Wiley, 2002.
    [4] Irikura K K, Frurip D J. Computational Thermochemistry: Prediction and Estimation of Molecular Thermodynamics[M]. Washington, DC: American Chemical Society, 1998.
    [5] Hu L H, Wang X J, Wong L H, et al. Combined first-principles calculation and neural-network correction approach for heat of formation[J]. J Chem Phys., 2003, 119(22): 11501-11507.
    [6] Wang X J, Wong L H, Hu L H, et al. Improving the accuracy of density functional theory calculation: the statistical correction approach[J]. J Phys Chem A, 2004, 108(40): 8514- 8525. [7 Wang X J, Hu L H, Wong L H, et al. A combined first-principles calculation and Neural Networks correction approach for evaluating Gibbs energy of formation [J]. Mol Simul, 2004, 30(1): 9 -15.
    [8] Zheng X, Hu L H, Wang X J, et al. A generalized exchange-correlation functional: the Neural-Networks approach[J]. Chem Phys Lett, 2004, 390(1-3): 186-192.
    [9] Hutchinson G R, Ratner M A, Marks T J. Accurate Prediction of Band Gaps in Neutral Heterocyclic Conjugated Polymers[J]. J Phys Chem A, 2002, 106(44): 10596-10605.
    [10] Holland J H. Adaptation in Natural and Artificial Systems[M]. Ann Arbor: The University of Michigan Press, 1975.
    [11]Alexandridis A, Patrinos P, Sarimveis H, et al. A two-stage evolutionary algorithm for variable selection in the development of RBF neural network models[J]. Chemometrics and Intelligent Laboratory Systems, 2005, 75(2): 149-162.
    [12]Arakawa M, Hasegawa K, Funatsu K. QSAR study of anti-HIV HEPT analogues based on multi-objective genetic programming and counter-propagation neural network[J]. Chemometrics and Intelligent Laboratory Systems, 2006, 83(2): 91-98.
    [13]Fernandez M, Caballero J. Ensembles of Bayesian-regularized Genetic Neural Networks for modeling of acetylcholinesterase inhibition by huprines[J]. Chemical Biology & Drug Design 2006, 68(4): 201-212.
    [14]Hemmateenejad B. Correlation ranking procedure for factor selection in PC-ANN modeling and application to ADMETox evaluation[J]. Chemometrics and Intelligent Laboratory Systems, 2005, 75(2): 231-245.
    [15] Janson D J Frenzel J F. Training product unit neural networks with genetic algorithms[J]. IEEE Expert, 1993, 8(5): 26-33.
    [16] Schaffer J D. Combinations of genetic algorithms and neural networks: a survey of the state of the art[C]. In: Whitley D, ed. Proc. of the Workshop on Combinations of Genetic Algorithms and Neural Networks. Los Alamitos, CA: IEEE Computer Society Press, 1992: 1-37.
    [17] Floreano D, Urzelai J. Evolutionary robots with on-line self-organization and behavioral fitness[J]. Neural Networks, 2000, 13(4-5): 431-443.
    [18] Hecht-Nielsen R. Neurocomputing[M]. New Jersey: Addison-Wesley, 1990.
    [19] Rumelhart D E, Hinton G E, Williams R J, Learning representations by back-propagating errors[J], Nature, 1986, 323: 533-536.
    [20] Michalewicz Z. Genetic Algorithms + Data Structures=Evolution Programs[M]. New York: Springer-Verlag, 1992.
    [21] Joines J A, Houck C R. On the Use of Non-Stationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GA's[C]. In: Proc. of the IEEE Conference On Evolutionary Computation. Los Alamitos, CA: IEEE, 1994: 579-584.
    [22] Davis L, The Handbook of Genetic Algorithms[M]. New York: Van Nostrand Reinhold, 1991.
    [1] Lash T D, Chandrasekar P. Synthesis of tetraphenyltetraacenaphthoporphyrin: A new highly conjugated porphyrin system with remarkably red-shifted electronic absorption spectra[J]. J Am Chem Soc, 1996, 118(36): 8767-8768.
    [2] Loiseau F, Campagna S, Hameurlaine A, et al. Dendrimers constructed from porphyrin cores and carbazole chromophores as peripheral units. Absorption spectra, luminescence properties, and oxidation behaviour[J]. J Am Chem Soc, 2005, 127(32): 11352-11363.
    [3] Becke A D. Density-functional thermochemistry III. The role of exact exchange[J]. J Chem Phys, 1993, 98(7): 5648-5652.
    [4] Lee C, Yang W, Parr R G. Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density[J]. Phys Rev B, 1988, 37(2): 785-789.
    [5] Becke A D. Density-functional exchange-energy approximation with correct asymptotic behavior[J]. Phys Rev A, 1988, 38(6): 3098-3100.
    [6] Jursic B S. Reliability of hybrid density theory—semiempirical approach for evaluation of bond dissociation energies[J]. J Chem Soc, Perkin Trans. 2, 1999: 369-372.
    [7] Durbeej B, Eriksson L A. Photodegradation of substituted stilbene compounds: what colors aging paper yellow[J]. J Phys Chem A, 2005, 109 (25): 5677-5682.
    [8] Chen C C, Bozzelli J W. Structures, Intramolecular Rotation Barriers, and Thermochemical Properties of Methyl Ethyl, Methyl Isopropyl, and Methyl tert-Butyl Ethers and the Corresponding Radicals[J]. J Phys Chem A, 2003, 107(22): 4531-4546.
    [9] Irikura K K, Frurip D J. Computational Thermochemistry: Prediction and Estimation of Molecular Thermodynamics[M]. Washington, DC: American Chemical Society, 1998.
    [10] Wodrich M D, Corminboeuf C, Schleyer P V. Systematic Errors in Computed Alkane Energies Using B3LYP and Other Popular DFT Functionals[M]. Org Lett. 2006, 8(17): 3631-3634.
    [11] Schreiner P, Fokin A, Pascal R A, et al. Many Density Functional Theory Approaches Fail To Give Reliable Large Hydrocarbon Isomer Energy Differences[J]. Org Lett, 2006, 8(17): 3635-3638.
    [12] Parr R, Yang W. Density functional theory of atoms and molecules[M]. New York: Oxford University Press, 1989.
    [13] Ripley B D. Pattern recognition via neural networks[M]. New York: Oxford University Press, 1996.
    [14] Jain A K, Duin R P W, Mao J C. Statistical pattern recognition: A review[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(1): 4-37.
    [15] Raff L M, Malshe M, Hagan M, et al. Ab initio potential-energy surfaces for complex, multichannel systems using modified novelty sampling and feedforward neural networks [J]. J Chem Phys, 2005, 122(8): 084104-1-084104-16.
    [16] Le H M, Raff L M. Cis→trans, trans→cis isomerizations and N–O bond dissociation of nitrous acid (HONO) on an ab initio potential surface obtained by novelty sampling and feed-forward neural network fitting[J]. J Chem Phys, 2008, 128(19): 194310-1-194310-11.
    [17] Wang X J, Wong L H, Hu L H, et al. Improving the accuracy of density-functional theory calculation: The statistical correction approach[J]. J Phys Chem A, 2004, 108(40): 8514-8525.
    [18] Hu L H, Wang X J, Wong L H, et al. Combined first-principles calculation and neural-network correction approach for heat of formation[J]. J Chem Phys, 2003, 119(22): 11501-11507.
    [19] Duan X M, Li Z H, Song G L, et al. Neural network correction for heats of formation with a larger experimental training set and new descriptors[J]. Chem Phys Lett, 2005, 410(1-3): 125-130.
    [20] Wang X J, Hu L H, Wong L H, et al. A Combined First-principles Calculation and Neural Networks Correction Approach for Evaluating Gibbs Energy of Formation[J]. Molecular Simulation, 2004, 30(1): 9-15.
    [21] Wu J M, Xu X. The X1 method for accurate and efficient prediction of heats of formation[J]. J Chem Phys, 2007, 127(21): 214105-214113.
    [22] Wu J M, Xu X. Improving the B3LYP bond energies by using the X1 method[J]. J Chem Phys, 2008, 129(16): 164103-1-164103-11.
    [23] Li H, Shi L L, Zhang M, et al. Improving the accuracy of density-functional theory calculation: The genetic algorithm and neural network approach[J]. J Chem Phys, 2007, 126: 144101-1-144101-8.
    [24] Gao T, Shi L L, Li H B, et al. Improving the accuracy of low level quantum chemical calculation for absorption energies: the genetic algorithm and neural network approach[J]. Phys Chem Chem Phys, 2009, 11(25): 5124-5129.
    [25] Hansen L K, Salamon P. Neural network ensembles[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993-1001.
    [26] Sollich P, Krogh A. Learning with ensembles: How over-fitting can be useful[C]. In: Touretzky D S, Mozer M C, Hasselmo M E, eds. Proc. of the Advances in Neural Information Processing Systems 8. Cambridge, MA: MIT Press, 1996. 190-196.
    [27] Breiman L. Bagging predictors[J]. Machine Learning, 1996, 24 (2): 123-140.
    [28] Breiman L. Bias, variance, and arcing classifiers[R]. Technical Report TR 460, Statistics Department, University of California, Berkeley, CA, 1996.
    [29] Perrone M. General averaging results for convex optimization[C]. In: Elman J L, Mozer M C, Smolensky P, et al. eds. Proc. of the 1993 Connectionist Models Summer School. Hillsdale, NJ: Erlbaum, 1994. 364-371.
    [30] Huang F J, Zhou Z H, Zhang H J, et al. Pose Invariant Face Recognition[C]. In: Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition, NY: IEEE, 2000. 245-250.
    [31] Shimshoni Y, Intrator N. Classification of Seismic Signals by Integrating Ensembles of Neural Networks[J]. IEEE Transactions on Signal Processing, 1998, 46(5): 1194-1201.
    [32] Fernández M, Tundidor-Camba A, Caballero J. Modeling of cyclin-dependent kinase inhibition by 1H-pyrazolo[3,4-d]pyrimidine derivatives using Artificial Neural Networks Ensembles[J]. J Chem Inf Model, 2005, 45(6): 1884-1895.
    [33] Agrafiotis D K, Cedeño W, Lobanov V S. On the use of neural network ensembles in QSAR and QSPR[J]. J Chem Inf Comput Sci, 2002, 42(4): 903-911.
    [34] Tetko I V. Associative Neural Network[J]. Neural Processing Letters, 2002, 16(2): 187-199.
    [35] Dasarathy B V. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques[M]. Los Alamitos, CA: IEEE Computer Society Press, 1991.
    [36] Lawrence S, Tsoi A C, Back A D. Function approximation with neural networks and local methods: Bias, variance and smoothness[C]. In: Bartlett P, Burkitt A, Williamson R, eds. Proc. of the Seventh Australian Conference on Neural Networks. Australian National University, 1996. 16-21.
    [37] Hornik K M, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators[J]. Neural Networks, 1989, 2(5): 359-366.
    [38] Pinkus A. Approximation theory of the MLP model in neural networks[J]. Acta Numerica, 1999, 8: 143-195.
    [39] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536.
    [40] MacKay D J C. Bayesian interpolation[J]. Neural Computation, 1992, 4(3): 415-447.
    [41] Efron B, Tibshirani R J. An Introduction to the Bootstrap[M]. New York: Chapman & Hall, 1993.
    [42] Opitz D W, Shavlik J W. Actively searching for an effective neural network ensemble[J]. Connection Science, 1996, 8(3-4): 337-354.
    [43] Perrone M P, Cooper L N. When networks disagree: Ensemble method for neural networks[C]. In: Mammone R J, ed. Proc. of Artificial Neural Networks for Speech and Vision, New York: Chapman & Hall, 1993. 126-142.
