Rough Set Neural Network Integration Methods and Their Application in Pattern Recognition
Abstract
The theory of rough sets is a powerful tool for processing uncertain, incomplete and imprecise information, and it has become an important branch of uncertainty computing. Rough set analysis and modeling of information systems cover three forms of human logical thinking (inductive reasoning, deductive reasoning and common-sense reasoning) and directly imitate the logical reasoning ability of the human brain. The neural network, another important branch of uncertainty computing, is built from large numbers of interconnected neurons; it exhibits strong nonlinear mapping ability together with adaptivity, self-learning, robustness and fault tolerance, and it mimics the brain's intuitive, imagistic thinking. Since rough sets and neural networks complement each other in information processing style, knowledge acquisition, noise immunity and generalization ability, a rough neural network that integrates the two reflects the mixed qualitative and quantitative, explicit and implicit, serial and parallel mechanisms of human intelligence, and its study is therefore at the leading edge of the field. As an important branch of today's intelligent hybrid systems, rough neural network integration promises to become a mainstream technology for developing the next generation of expert systems.
This thesis carries out a series of explorations into rough neural networks: enhancing their information processing ability, building new approaches to decision system modeling, extending their application fields, and addressing their ease of construction and computational complexity. It proposes several new rough neural network integration methods and studies their application to pattern recognition.
Summarizing the domestic and international rough neural network integration methods developed over roughly the last decade, the thesis divides rough neural networks into three main integration modes: general integrated rough-set and neural-network systems, rough boundary neural networks, and rough-granular neural networks. The research status of each mode is reviewed, and its principles and characteristics are analyzed and expounded.
Unlike previous work, in which rough neural networks were built purely on rough set data analysis and reduction, this thesis starts from rough logic theory and studies the design of a strongly coupled rough logic neural network with fuzzifying neurons, constructed from rough logic decision rules; the characteristics of rough logic neural networks and fuzzy logic neural networks are analyzed and compared. Land cover classification experiments on Landsat TM remote sensing images of the Chongqing and Changbai Mountain Tianchi areas verify the validity of the rough logic neural network model and reveal its advantages in network structure and convergence.
By adding a fuzzifying neuron layer between the input and hidden layers, a fuzzy rough neural network (FRNN) model combining fuzzy neurons and rough neurons is constructed. Because rough neurons are non-differentiable, the BP algorithm no longer applies, so a genetic algorithm (GA) is used to learn the network weights, combined with a hill-climbing method capable of local optimum search to improve computational efficiency in the later stage of evolution. Simulations show that the FRNN, which fuses fuzzy and rough information processing abilities, outperforms both a BP network and a rough neural network built solely from rough neurons in image fusion filtering, making it a well-performing hybrid intelligent neural network.
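The non-differentiability that rules out BP comes from the max/min coupling inside a rough neuron. A minimal sketch of one such neuron is given below; the pairing of an upper and a lower conventional neuron follows the common rough-neuron formulation, but the exact weights and activation are illustrative assumptions, not the thesis's model.

```python
import math

def rough_neuron(x, w_upper, w_lower, b_upper=0.0, b_lower=0.0):
    """Forward pass of a single rough neuron (illustrative sketch).

    A rough neuron pairs an 'upper' and a 'lower' conventional neuron
    and exchanges their outputs so the upper output never falls below
    the lower one; this max/min coupling is what breaks
    differentiability and forces derivative-free training such as a GA.
    """
    sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))
    u = sigmoid(sum(wi * xi for wi, xi in zip(w_upper, x)) + b_upper)
    l = sigmoid(sum(wi * xi for wi, xi in zip(w_lower, x)) + b_lower)
    return max(u, l), min(u, l)   # (upper output, lower output)
```

A GA then treats the flattened weight vectors as a chromosome and evaluates fitness by network error, with hill climbing refining the best individuals late in the run.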
Since fuzzy uncertainty and rough uncertainty usually coexist in an information system, fuzzy-rough set theory is needed to handle them. Based on the fuzzy-rough membership function, a fuzzy-rough membership function neural network (FRMFN) model is created. By incorporating the ability to process rough uncertainty, the FRMFN preserves the original fuzzy similarity information while greatly reducing the rough uncertainty in classification. Experiments on remote sensing image classification and vowel speech recognition show that the FRMFN achieves better classification accuracy than the corresponding radial basis function (RBF) network while retaining the RBF network's fast learning.
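One simple way such a membership value can fold rough (class-overlap) uncertainty into fuzzy similarity is sketched below. The function name and the similarity-weighted form are illustrative assumptions, not the thesis's exact definition:

```python
def fuzzy_rough_membership(x, data, labels, target, sim):
    """Illustrative fuzzy-rough membership of pattern x in class `target`.

    Each training pattern contributes a fuzzy similarity sim(x, z);
    the membership is the similarity-weighted fraction of similar
    patterns that actually carry the target label, so patterns whose
    neighbourhoods straddle several classes (rough uncertainty)
    receive a degraded membership.
    """
    num = sum(sim(x, z) for z, y in zip(data, labels) if y == target)
    den = sum(sim(x, z) for z in data)
    return num / den if den else 0.0
```

With any similarity kernel the per-class memberships sum to one, which makes them directly usable as target activations for the FRMFN's output layer.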
Application systems designed with classical Pawlak rough set analysis suffer from weak generalization and poor noise immunity. To address these problems, the design of rough neural networks under the variable precision rough set model, which rests on the majority inclusion relation, is studied. The condition for β-approximate reduction is weakened and generalized; selection criteria for β-approximate reducts are proposed on the basis of reduct anomaly analysis and worked examples; and algorithms for extracting variable precision rough rule sets and for computing the stable β-threshold interval are given. In classification experiments on Brodatz texture images, the classical rough neural network (RNN) and the variable precision rough neural network (VPRNN) are compared: the VPRNN not only has a more compact structure and a shorter training time, but also shows stronger generalization on the test samples.
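The majority inclusion relation can be made concrete with a short sketch of the β-lower approximation; the inclusion-degree convention below (β in (0.5, 1], with β = 1 recovering the classical Pawlak lower approximation) follows Ziarko's VPRS model, and the helper names are assumptions for illustration:

```python
from collections import defaultdict

def vprs_lower_approx(universe, partition_key, X, beta):
    """beta-lower approximation under the majority inclusion relation.

    An equivalence class E (objects sharing the same partition_key)
    belongs to the lower approximation of X when the fraction of its
    elements inside X is at least beta, so a few noisy objects no
    longer expel the whole class, which is the source of the VPRNN's
    improved noise immunity.
    """
    classes = defaultdict(set)
    for obj in universe:
        classes[partition_key(obj)].add(obj)
    lower, X = set(), set(X)
    for E in classes.values():
        if len(E & X) / len(E) >= beta:
            lower |= E
    return lower
```

Lowering β from 1 admits almost-pure classes into the lower approximation, which is why a noisy class that classical analysis discards can still yield a usable decision rule.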
To reduce the computational complexity of ensemble feature selection, a neural network ensemble classification method based on rough set reduction is proposed. First, a dynamic reduction method combining a genetic-algorithm reduct search with resampling is introduced, yielding stable attribute reducts with good generalization ability. Next, BP networks built on different reducts serve as base classifiers and, following the idea of selective ensembles, a search strategy finds the ensemble with the best generalization performance. Finally, classification is carried out by majority voting over the ensemble. The method is validated on the classification of a seven-band Landsat remote sensing image of a study area. Because rough set reduction filters out large numbers of feature subsets with poor classification performance, the method costs less time and has lower computational complexity than conventional ensemble feature selection methods while delivering good classification performance.
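The final voting step can be sketched in a few lines. Each base classifier sees only the attributes of its own reduct; the callables and reduct tuples below are illustrative stand-ins for the trained BP networks and rough-set reducts of the text:

```python
from collections import Counter

def ensemble_predict(classifiers, reducts, x):
    """Majority vote over base classifiers, each trained on the
    attribute subset of one rough-set reduct.

    `x` is a dict of attribute values; every classifier is fed only
    the attributes named in its paired reduct, then the most common
    class label wins.
    """
    votes = [clf(tuple(x[a] for a in reduct))
             for clf, reduct in zip(classifiers, reducts)]
    return Counter(votes).most_common(1)[0][0]
```

Selective ensembling then amounts to searching for the subset of (classifier, reduct) pairs whose joint vote generalizes best on a validation set.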
Increasing the diversity of the individual networks in an ensemble helps improve its generalization performance. On this basis, the Rough_Boosting and Rough_Bagging algorithms for generating individual networks are proposed. On top of the sample perturbation performed by Boosting or Bagging, attribute selection is carried out through rough set reduction, effectively combining perturbation of the training samples with perturbation of the input attributes so as to generate individual networks that are both accurate and diverse. Experimental results show that the proposed algorithms generalize markedly better than Boosting and Bagging and produce more diverse individual networks, while performing on par with comparable algorithms.
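The Rough_Bagging side of this idea can be sketched as the generation of training "views", each pairing a bootstrap resample with a reduct of that resample. `reduct_fn` below stands in for a real rough-set reduct algorithm; it and the function name are assumptions for illustration:

```python
import random

def rough_bagging_views(data, attributes, reduct_fn, rounds, seed=0):
    """Generate (sample, attribute-subset) training views in the
    spirit of Rough_Bagging.

    Bagging perturbs the examples via bootstrap resampling; computing
    a rough-set reduct of each resample then perturbs the input
    attributes as well, so the individual networks trained on these
    views differ in both data and feature space.
    """
    rng = random.Random(seed)
    views = []
    for _ in range(rounds):
        sample = [rng.choice(data) for _ in data]   # bootstrap resample
        views.append((sample, reduct_fn(sample, attributes)))
    return views
```

Rough_Boosting would replace the uniform resampling with the weighted sampling of Boosting while keeping the per-round reduct step.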
Designing an RBF network with conventional clustering methods involves a degree of blindness and subjectivity, and the clustering result is sensitive to initialization; moreover, clustering in the input feature space considers only the similarity of samples there, ignoring their output class labels, so the resulting clusters cannot fully reflect the input-output mapping. The thesis therefore proposes an RBF network design method based on a supervised rough set clustering algorithm over indiscernibility partitions. Continuous attributes are discretized by Boolean reasoning to obtain an initial set of decision modes; a dissimilarity degree defined on rough set analysis measures the similarity between initial decision modes and drives the clustering; and the clustered decision modes are used to construct the RBF network. Because the linear weights of the output units and the nonlinear basis functions of the hidden units are updated on different "time scales", a hybrid learning algorithm is proposed that trains the hidden-layer parameters with BP and the output weights with linear least-squares filtering. Experimental results show that RBF networks designed this way have a compact structure and good generalization, and that the hybrid learning algorithm converges faster than plain BP.
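The fast half of the hybrid scheme, solving the linear output weights in closed form while the hidden layer is held fixed, can be sketched for a one-dimensional Gaussian RBF. This is a minimal sketch assuming a single output and shared width; the function name and the normal-equation route are illustrative choices, not the thesis's exact filter:

```python
import math

def rbf_output_weights(X, y, centers, sigma):
    """Solve RBF output-layer weights by linear least squares
    (normal equations), with the nonlinear hidden layer fixed.

    In the hybrid algorithm of the text, BP adjusts the hidden-layer
    parameters on a slower time scale, while this closed-form solve
    updates the linear output weights.
    """
    # Hidden-layer design matrix: phi[i][j] = gaussian(x_i, center_j)
    phi = [[math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
           for x in X]
    k = len(centers)
    # Normal equations: (Phi^T Phi) w = Phi^T y
    A = [[sum(p[i] * p[j] for p in phi) for j in range(k)] for i in range(k)]
    b = [sum(p[i] * t for p, t in zip(phi, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * k
    for i in reversed(range(k)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, k))) / A[i][i]
    return w
```

Because this step is exact rather than iterative, alternating it with BP on the hidden layer is what gives the hybrid algorithm its faster convergence.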
To resolve the conflict between the accuracy of a rough logic neural network on one hand and its network size and generalization ability on the other, a design method for rough logic neural networks with variable discretization precision is proposed. Through approximation region partitioning, the universe is divided into a certain region and a possible region. Since overly coarse information granularity in the possible region is a major cause of misclassification, refining the discretization intervals only within the possible region improves the model's accuracy while restraining the growth of the network. In remote sensing image classification experiments on the Changbai Mountain Tianchi area, the conventional rough logic neural network with a uniform discretization level performs best at level 7, whereas the proposed method achieves comparable classification results at a far smaller network cost and training time.
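The key saving is that only intervals falling in the possible (boundary) region are subdivided. A toy sketch of that selective refinement, assuming each uncertain interval is simply halved once (the real method's split rule is not specified here):

```python
def refine_possible_region(cuts, possible_intervals):
    """Selectively refine a discretization (illustrative sketch).

    `cuts` is a sorted list of interval boundaries; only the intervals
    listed in `possible_intervals` (the boundary region of the rough
    approximation) receive an extra midpoint cut, so the certain
    region keeps its coarse granularity and the network stays small.
    """
    refined = []
    for lo, hi in zip(cuts, cuts[1:]):
        refined.append(lo)
        if (lo, hi) in possible_intervals:      # halve only uncertain intervals
            refined.append((lo + hi) / 2.0)
    refined.append(cuts[-1])
    return refined
```

Uniform refinement would double the cut count everywhere; refining only the possible region grows the input coding, and hence the network, far more slowly.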
To avoid reduct computation, a bottom-up approach to rough neural network modeling based on the fuzzy rough model (FRM) is studied. An adaptive Gustafson-Kessel (G-K) clustering algorithm performs fuzzy partitioning of the input-output product space; an optimized FRM is then extracted by searching over the number of clusters and the reduced attribute sets, and a neural network is fused on top of it to complete the modeling (FRM_RNN_M). Experiments on Brodatz texture images show that: (1) the method outperforms conventional Bayesian and LVQ methods; (2) FRM_RNN_M has stronger overall decision-making ability than the plain FRM model, and incorporating the search over attribute reduction helps obtain an optimized FRM with better classification performance; and (3) compared with a conventional rough logic neural network (RLNN), the network built by FRM_RNN_M has a more compact structure, converges faster, and generalizes better. As a general approach to decision system modeling, the method can be widely applied in other related fields.
Finally, the thesis summarizes its main innovative research results and outlines directions for further work.
引文
[1] Rich E. Expert systems and neural networks can work together. IEEE Expert, 1990, 5(5): 5-7.
    [2] Yahia M E, Mahmod R, Sulaiman N, etal. Rough neural expert systems. Expert Systems with Application, 2000, 18(2):87-99.
    [3] Datta S, Banerjee M K. Mapping the input–output relationship in HSLA steels through expert neural network. Materials Science and Engineering: A, 2006, 420(1-2): 254-264.
    [4] Zhang H, Luo D Y. Application of an expert system using neural network to control the coagulant dosing in water treatment plant. Journal of Control Theory and Applications. 2004, 2(1):89-92.
    [5] Elfadil N , Isa D. Automated knowledge acquisition based on unsupervised neural network and expert system paradigms, LNCS, 2773: 134-140.
    [6] Park S S, Seo K K, Jang D S. Expert system based on artificial neural networks for content-based image retrieval. Expert Systems with Application, 2005, 29(3):87-99.
    [7] 刘振凯,贵中华,蔡清.基于神经网络结构学习的知识求精方法.计算机研究与发展,1999,36(10):1169-117.
    [8] Loo C K , Mandava R, Rao M V C. A hybrid intelligent active force controller for articulated robot arms using dynamic structure neural network,Journal of Intelligent and Robotic Systems,2004, 40(2):113-145.
    [9] Homma N, Sakai M, Abe K.. Takeda H. Dynamic neural structure for long-term memory formation. In: SICE 2004 Annual Conference, Tokyo, Japan: IEEE, 2004, 3: 2272-2277.
    [10] Fu L. Rule generation from neural networks. IEEE Trans. on Systems, Man, Cybernetics, 1994, 24(8):1114-1124.
    [11] Tsukimoto H. Extracting rules from trained neural networks. IEEE Trans on Neural Networks, 2000, 11(2): 377-389.
    [12] 周 志 华 , 陈 世 福 . 神 经 网 络 规 则 抽 取 . 计 算 机 研 究 与 发 展 , 2002, 39(4) :398-405.
    [13] Pawlak Z. Rough set theory and its application to data analysis. Cybernetics and Systems, 1998, 29(9): 661-688.
    [14] Pawlak Z. Rough set theory for intelligent industrial applications. In: Intelligent Processing and Manufacturing of Materials, Honolu: IEEE, 1999, 1:37-44.
    [15] 赵卫东,陈国华. 粗集与神经网络集成技术研究. 系统工程与电子技术, 2002, 24 (10):103-107.
    [16] Yon J H,Yang S M,Jeon H T.Structure optimization of fuzzy-neural network using rough set theory.In: IEEE International Fuzzy Systems Conference Proceedings, Seoul, Korea: IEEE, 1999, :1666-1670.
    [17] Li Y G,Shen J,Lu Z Z.Structure optimization of wavelet neural network using rough set theory. In: Proceedings of the 4th World Congress on Intelligent Control and Automation.Shanghai: IEEE, 2002, 652-655.
    [18] 安利平, 吴育华, 仝凌云.增量式获取规则的粗糙集方法. 南开大学学报(自然科学版), 2003, 36(2): 98-103.
    [19] Zhong N, Dong J Z, Ohsuga S, Lin T Y. An incremental, probabilistic rough set approach to rule discovery. In: IEEE World Congress on Computational Intelligence, Anchorage: IEEE, 1998, 2: 933-938.
    [20] Zheng Z, Wang G Y, Wu Y. A rough set and rule tree based incremental knowledge acquisition algorithm. In: RSFDGrC'2003, Berlin: Springer, 2003, LNCS 2639: 122-129.
    [21] Zheng Z, Wang G Y. RRIA: A rough set and rule tree based incremental knowledge acquisition algorithm. Fundamenta Informaticae, 2004, 59(2-3):299-313.
    [22] Pal S K, Pedrycz W, Skowron A, Swiniarski R. Rough-neuro computing (special issue) . Neurocomputing, 2001, 36(1-4).
    [23] Pal S K, Polkowski L, Skowron A(Eds.). Rough-neuro computing: technologies for computing with words . Berlin: Physica-Verlag, 2002.
    [24] Lingras P. Comparison of neofuzzy and rough neural networks. Information Sciences, 1998, 110(3-4):207-213.
    [25] Banerjee M, Mitra S, Pal S K. Rough fuzzy MLP: knowledge encoding and classification. IEEE Transactions on Neural Networks, 1998, 9(6):1203-1216.
    [26] Peters J F, Han L, Ramanna S. Rough neural computing in signal analysis. Computational Intelligence,2001,17(3):493-513.
    [27] Gu X P, Tso S K. Applying rough-set concept to neural-network-based transient-stability classification of power system. In: Proceedings of the 5th International Conference on Advances in Power System Control,Operation andManagement, Hong Kong: IEEE, 2000 : 400-404.
    [28] Roman, Swiniarski W. Rough sets and neural networks application to handwritten character recognition by complex zernike moments. In: RSCTC’98, Warsaw: Spinger, 1998: 617-624.
    [29] Ahn B S, Cho S S, Kim C Y. The integrated methodology of rough set theory and artificial neural network for business failure prediction. Expert Systems with Applications, 2000, 18(2): 65-74.
    [30] Szczuka M S. Rough sets and artificial neural networks. In: Rough Sets in Knowledge Discovery(2): Applications, Case Studies and Software Systems. Heidelberg: Physica Verlag, 1998, 449-470.
    [31] Peters J F, Szczuka M S. Rough neurocomputing: a survey of basic models of neurocomputation. In: RSCTC 2002, Malvern: Springer, LNAI 2475, 2002, 308-315.
    [32] Pal S K, Peters J F, Polkowski L, et al. Rough-neurocomputing: an introduction. In: Rough-Neuro Computing: Technologies for Computing with Words. Berlin: Physica-Verlag, 2002, 15-41.
    [33] Jelonek J, Krawiec K, Slowinski R. Rough set reduction of attributes and their domains for neural networks. Computational Intelligence, 1995,11 (2): 339-347.
    [34] Dougherty J, Kohavi R, Shami M. Supervised and unsupervised discretization of continuous features. In: Proceedings of 12th International Conference on Machine Learning, Los: Morgan Kaufmann, 1995, 194-202.
    [35] 陈遵德. Rough Set 神经网络智能系统及其应用. 模式识别与人工智能, 1999,12(1):1-5.
    [36] Li Q D, Chi Z X, Shi W B. Application of rough set theory and artificial neural network for load forecasting. In: Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing: IEEE, 2002, 1148-1152.
    [37] Wang X Y, Wang Z O. Stock market time series data mining based on regularized neural network and rough set. In: Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing: IEEE, 2002, 315-318.
    [38] Chen S Y, Yi J K. A fuzzy neural network based on rough sets and its applications to chemical production process. In: 2001 International Conferences on Info-tech and Info-net, Beijing: IEEE, 2001, 405-410.
    [39] Wu Z C. Research on remote sensing image classification using neural networkbased on rough sets. In: 2001 International Conferences on Info-tech and Info-net, Beijing: IEEE, 2001, 279-284.
    [40] Pal S K., Mitra S, Mitra P. Rough-fuzzy MLP: modular evolution, rule generation, and evaluation. IEEE Transactions on Knowledge and Data Engineering, 2003, 15(1):14-25.
    [41] Yasser Hassan, Eiichiro Tazaki. Decision making using hybrid rough sets and neural networks. International Journal of Neural System,2002,12(6):435-446 .
    [42] Son Nguyen H, Szczuka M, Slezak D. Neural Network Design: Rough Set Approach to Real-valued Data. In: KomorowskiJ, ZtkowJ Eds. The First European Symposiumon Principlies of data Mining and Knowledge Discovery (PKDD’97). Berlin: Springer-Verlag , 1997:359-366.
    [43] Lingras P. Rough neural networks. In: Proc.of the 6th Int. Conf. on Information Processing and Management of Uncertainty in Knowledge-based Systems(IPMU’96), Granada, Spain: IEEE, 1996, 1445-1450.
    [44] Lingras P. Fuzzy-rough and rough-fuzzy serial combinations in neurocomputing. Neurocomputing, 2001, 36(1-4):29-34.
    [45] 谢海燕, 赵连昌, 王德强. 粗神经网络及其在股市预测中的应用. 大连海事大学学报, 2002,28(3):77-80.
    [46] 张 兆 礼 , 孙 圣 和 . 粗 神 经 网 络 及 其 在 数 据 融 合 中 的 应 用 . 控 制 与 决 策 , 2001,16(1):76-78.
    [47] Yasser Hassan, Eiichiro Tazaki, Shin Egawa, et al. Rough neural classifier system. In: 2002 IEEE International Conference on Systems, Man and Cybernetics ,Yasmina Hammamet: IEEE, 2002, 5.
    [48] 刘国良,强文义,麻亮,等. 基于粗神经网络的仿人智能机器人的语音融合算法研究. 控制与决策,2003,18(3):364-366.
    [49] 梅晓丹,孙圣和. 粗神经网络的禁止搜索训练算法研究. 电子学报,2001,29(12):1908-1911.
    [50] Pedrycz W, Vukovich G. Granular neural networks. Neurocomputing, 2001, 36(1-4):205-224.
    [51] Polkowski L, Skowron A. Rough mereology: a new paradigm for approximate reasoning. International Journal of Approximate Reasoning, 1997, 15(4): 333- 365.
    [52] Polkwski L. Rough mereology: a survey of new developments with application to granular computing, spatial reasoning and computing with words. In: RSFDGrC 2003, Chongqin: Springer, LNAI 2639, 2003, 106-113.
    [53] Skowron A. Toward intelligent systems: calculi of information granules. In: T.Terano et al, (Eds): JSAI 2001 Workshops, Berlin: Springer-Verlag, LNAI 2253, 2001, 251-260.
    [54] Skowron A. Approximate reasoning by agents. In: B.Dunin-Keplicz and E.Nawarecki(Eds): CEEMAS 2001, Cracow: Springer, LNAI 2296, 2002, 3-14.
    [55] Pawlak Z, Peters J F, Skowron A, et al. Rough measures:theory and application. In:S.Hirano, M.Inuiguchi, S.Tsumoto(Eds.). Rough Set Theory and Granular Computing, Bulletin of the International Rough Set Society, Japan: Springer, 2001,5(1/2).177-184.
    [56] Pedrycz W, Han L, Peters J F, et al. Calibration of software quality:fuzzy neural and rough neural network computing approaches. Neurocomputing, 2001,36(1-4):149-170.
    [57] Peters J F, Ahn T C, Borkowski M. Obstacle classification by a line-crawling robot:a rough neurocomputing approach. In: RSCTC 2002, Malvern: Springer, LNAI 2475, 2002, 594-601.
    [58] Sushmita M, Pabitra M, Pal S K. Evolutionary modular design of rough knowledge-based network using fuzzy attributes. Neurocomputing, 2001, 36 (1-4):45-66.
    [59] Zadeh L. Fuzzy sets, Information and Control. 1965,8 (3):338-353.
    [60] Pawlak Z. Rough sets. International Journal of Computer and Information Sciences. 1982, 11(5): 341-356.
    [61] 张文修,吴伟志,梁吉业,等.粗糙集理论与方法.北京:科学出版社,2001.
    [62] Pawlak Z. Rough sets and fuzzy sets. Fuzzy Sets And Systems, 1985, 17(11): 99-102.
    [63] Dubois D, Prade H. Twofold fuzzy sets and rough sets— some issues in knowledge representation.Fuzzy Sets and Systems, 1987, 23(1): 3-l8.
    [64] Yao Y Y. A comparative study of fuzzy sets and rough sets. Journal of Information Sciences, 1998, 109(1-4): 227-242.
    [65] Dubois D, Prade H. Rough fuzzy sets and fuzzy rough sets. International Journal of General Systems, 1990, 17(2): 191-209.
    [66] Dubois D, Prade H. Putting rough sets and fuzzy sets together. In: Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Dordrecht, The Netherlands: Kluwer, 1992: 203-222.
    [67] 黄正华,胡宝清. 模糊粗糙集理论研究进展,模糊系统与数学, 2005, 19(4):125-134.
    [68] Pal S K, Skowron A. Rough Fuzzy Hybridization: A New Trend in Decision Making. Singapore: Springer-Verlag, 1999.
    [69] Kasemsiri W, Kimpan C. Printed thai character recognition using fuzzy-rough sets. In: TENCON Proceedings of IEEE Region 10 International Conference on Electrical and Electronic Technology. Singapore: IEEE, 2001, 1: 326-33.
    [70] Hashemi R R, Choobineh F F. A fuzzy rough sets classifier for database mining. Smart Engineering System Design, 2002, 4(2):107-114.
    [71] Sarkar M. Rough-fuzzy functions in classification. Fuzzy Sets and Systems, 2002, 132(3): 353-369.
    [72] Shen Q, Chouchoulas A. A rough-fuzzy approach for generating classification rules. Pattern Recognition, 2002, 35(11): 2425-2438.
    [73] Inuiguchi M, Tanino T. A new class of necessity measures and fuzzy rough sets based on certainty qualtfications. In: RSCTC 2000. Berlin Heidelberg: Springer-Verlag, 2001, 261-268.
    [74] Srinivasana P, Ruiza M E, Kraftb D H, Chen J H. Vocabulary mining for information retrieval: rough sets and fuzzy sets. Information Processing and Management, 2001, 37(1):15-38.
    [75] Rojanavasu P, Pinngern O. Extended rough fuzzy sets for web search agent. In: Proceedings of the 25th International Conference Information Technology Interfaces. Cavtat, Croatia: IEEE, 2003, 403-407.
    [76] Sarkar M, Yegnanarayana B. Application of fuzzy-rough sets in modular neural networks. In: Neural Networks Proceedings. IEEE World Congress on Computational Intelligence. New York: IEEE, 1998, 1: 741-746.
    [77] Petrosino A, Ceccarelli M. Unsupervised texture discrimination based on rough fuzzy sets and parallel hierarchical clustering. In: Proceedings of 15th International Conference on Pattern Recognition, Spain: IEEE, 2000, 3: 1088-1091.
    [78] Czogala E, Mr6zek A, Pawlak Z.The idea of a rough fuzzy controller and its application to the stabilization of a pendulum-car system. Fuzzy Sets and Systems, 1995, 72(2):61-73.
    [79] Peters J F, Ziaei K, Ramanna S, Ehikioya S A. Adaptive fuzzy rough approximate time controller design methodology: concepts, petri net model and application. In: Proceedings of 1998 IEEE International Conference on Systems, Man, and Cybernetics. San Diego: IEEE, 1998, 3: 2101-2106.
    [80] Wang Y F. Mining stock price using fuzzy rough set system. Expert Systemwith Applications, 2003, 24(1): 13-23.
    [81] Sarkar M, Yegnanarayana B. Fuzzy-rough neural networks for vowel classification. In: Proceedings of 1998 IEEE International Conference on Systems, Man, and Cybernetics. San Diego: IEEE, 1998, 5: 4160-4165.
    [82] Yu C Y, Wu M H, Wu M. Combining rough set theory with neural network theory for pattern recognition. In: IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, Changsha: IEEE, 2003:880-885.
    [83] Liu H J, Tuo H Y, Liu Y C. Rough neural network of variable precision. Neural Processing Letters, 2004, 19(1):73-87.
    [84] Pawlak Z. Rough Logic, Bulletin of the Polish Academy of Sciences, Technical Science, 1987, 35:253-258.
    [85] 王耀南.计算智能信息处理技术及其应用,长沙: 湖南大学出版社,1999:147-176.
    [86] 姚洪兴,赵林度,盛昭瀚. 多级模糊神经网络在故障诊断中的应用. 东南大学学报,2001,31(2):59-63.
    [87] Nguyen H S, Skowron A. Quantization of real-valued attributes. In: Second International Joint Conference on Information Sciences, North Carolina: Wrightsville Beach, 1995, 34-37.
    [88] Sarkar M,Yegnanarayana B. Fuzzy-rough membership functions. In: 1998 IEEE International conference on Systems,Man and Cybernetics, San Diego: IEEE, 1998, 2: 2028-2033.
    [89] Sarkar M,Yegnanarayana B. Rough-fuzzy membership functions. In: The 1998 IEEE International Conference on Fuzzy Systems Proceedings, Anchorage, AK USA: IEEE, 1998, 1:796-801.
    [90] Ruikang Y, Yin L, Gabbouj M, etc. Performance of detail-preserving weighted median filters for image processing. In: IEEE Winter Workshop on Nonlinear Digital Signal Processing, Murikka, Tampere: IEEE, 1993, 1-6.
    [91] Wang X. Adaptive multistage median filter. IEEE Transactions on Signal Processing, 1992,40(4):1015-1017.
    [92] Lin H M, Willson A N. Adaptive-Length Median Filters for Image Processing. In: IEEE International Symposium on Circuits and Systems , Espco, Finland: IEEE, 1988, 3: 2557-2560.
    [93] Chen T, Hong R W. Adaptive impulse detection using center-weighted median filters. IEEE Signal Processing Letters, 2001, 8(1):1-3.
    [94] Nieminen A, Heinonen P, Neuvo Y. 2-D multilevel FIR-median hybrid filters. In:IEEE International Conference on Acoustics, Speech, and Signal Processing, Tokyo, Japan: IEEE, 1986, 11:1025-1028.
    [95] Russo F, Ramponi G. A fuzzy filter for image corrupted by impulse noise. IEEE Signal Processing Letters, 1996,3(6):168-170.
    [96] Zhang D, Wang Z. Impulse noise detection and removal using fuzzy techniques. Electronic Letters, 1997,33(5):378-379.
    [97] Yeh I C. Modeling chaotic two-dimensional mapping with fuzzy-neuron networks. Fuzzy sets and Systems, 1999,105(3):421-427.
    [98] 王耀南,李树涛,毛建旭. 计算机图像处理与识别技术,北京:高等教育出版社,2001:70-74.
    [99] Nanda S. Fuzzy rough sets. Fuzzy Sets and Systems, 1992, 45(2):157-160.
    [100] 吴伟志, 张文修, 徐宗本. 粗糙模糊集的构造与公理化方法. 计算机学报, 2004, 27(2):197-203.
    [101] 张诚一,卢昌荆. 关于模糊粗糙集的相似度量. 计算机工程与应用, 2004, 40(9): 58-59,68.
    [102] Sarkar M, Yegnanarayana B. Rough-fuzzy set theoretic approach to evaluate the importance of input features in classification. In: International Conference on Neural Networks, Texas, USA: IEEE, 1997,3:1590-1595.
    [103] Sarkar M. Fuzzy-rough nearest neighbors algorithm. In: IEEE International Conference on Systems, Man, and Cybernetics, Nashville, TN, USA: IEEE, 2000,5:3556-3561.
    [104] Wang Y F. Mining stock price using fuzzy rough set system. Expert System with Applications, 2003,24(1):13-23.
    [105] 王耀南. 卫星遥感图像的神经网络自动识别分类. 湖南大学学报,1998,25(4):61-66.
    [106] Bischof H, Schneider W , Pinz A J. Multi-spectral classification of landsat images using neural networks. IEEE Transactions on Geo-science and Remote Sensing, 1992, 30(3): 482-490.
    [107] Atkinson P M , Tatnall A R L. Neural networks in remote sensing. International Journal of Remote Sensing, 1997, 18 (4) :699-709.
    [108] 骆剑承,杨艳. 基于径向基函数(RBF)映射理论的遥感影像分类模型研究. 中国图象图形学报,2000, 2(5):94-99.
    [109] 毛建旭,王耀南. 径向基函数神经网络的遥感图象分类.系统仿真学报,2001,13(S2):146-147.
    [110] Bruzzone L, Prieto D F. A technique for the selection of kernel-functionparameters in RBF neural networks for classification of remote-sensing images. IEEE Transactions on Geo-science and Remote Sensing, 1999,37(2):1179-1184.
    [111] Ziarko W. Variable precision rough sets model. Journal of Computer and System Sciences, 1993, 46(1):39-59.
    [112] Zhao Y Q, Zhang H C, Pan Q. Classification using the variable precision rough set. In: RSFDGrC 2003, Chongqin: Springer, 2003, LNAI 2639: 350-353.
    [113] 张登峰, 李忠新, 王执铨等. 故障诊断专家系统知识获取的变精度粗集方法, 南京理工大学学报, 2004, 28(2):118-122.
    [114] Hou T H, Huang C C. Application of fuzzy logic and variable precision rough set approach in a remote monitoring manufacturing process for diagnosis rule induction. Journal of Intelligent Manufacturing, 2004, 15(3):395-408.
    [115] 潘郁,菅利荣,达庆利. 多标准决策表中发现概率规则的变精度粗糙集方法. 中国管理科学, 2005,13(1): 95-100.
    [116] Wang Q D, Wang X J, Wang X P. Variable precision rough set model based data partition and association rule mining. In: Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing: IEEE, 2002, 2175-2179.
    [117] Wu W Z. Knowledge acquisition in incomplete information systems based on variable precision rough set model. In: Proceedings of International Conference on Machine Learning and Cybernetics, Guangzhou: IEEE, 2005, 4:2245-2250.
    [118] Ziarko W, Fei X. VPRSM approach to WEB searching. In: RSCTC2002, Berlin: Springer, 2002, LNAI 2475: 514-521.
    [119] Beynon M J, Peel M J. Variable precision rough set theory and data discretisation: an application to corporate failure prediction. The International Journal of Management Science. 2001, 29(6):561-576.
    [120] Beynon M. The identification of low-paying workplaces: an analysis using the variable precision rough sets model. In: RSCTC 2002, Malvern, PA, USA: Springer, 2002, LNAI 2475:530-537.
    [121] 庾慧英,刘文奇. 变精度粗糙集模型中β取值范围的确定. 昆明理工大学学报, 2005, 30(6): 109-111.
    [122] Beynon M. Reducts within the variable precision rough sets model: a further investigation. European Journal of Operational Research, 2001, 134(3):592-605.
    [123] Beynon M. An investigation of β-reduct selection within the variable precision rough sets model. In: RSCTC 2000, Banff, Canada: Springer, LNAI 2005: 114-121.
    [124] An A, Shan N, Chan C, et al. Discovering rules for water demand prediction: an enhanced rough-set approach. Engineering Application and Artificial Intelligence, 1996, 9(6): 645-653.
    [125] Ziarko W. Analysis of uncertain information in the framework of variable precision rough sets. Foundations of Computing and Decision Sciences, 1993, 18(3-4): 381-396.
    [126] 李永敏, 朱善君, 陈湘辉,等. 根据粗糙集理论进行 BP 网络设计的研究. 系统工程理论与实践, 1999, 19(4):62-69.
    [127] Tuceryan M, Jain A K. Texture analysis. In: Chen C H, Pau L F, Wang P S P (eds.), Handbook of Pattern Recognition and Computer Vision, Singapore: World Scientific, 1993: 235-276.
    [128] Weszka J S, Dyer C R, Rosenfeld A. A comparative study of texture measures for terrain classification. IEEE Transactions on Systems, Man and Cybernetics, 1976, 6(4): 269-285.
    [129] Hong Z Q. Algebraic feature extraction of image for recognition. Pattern Recognition, 1991, 24(3):211-219.
    [130] Swiniarski R W, Hargis L. Rough sets as a front end of neural-network texture classifiers. Neurocomputing, 2001, 36(1-4): 85-102.
    [131] Nguyen T, Swiniarski R, Skowron A, et al. Application of rough sets, neural networks and maximum likelihood for texture classification based on singular value decomposition. In: Proceedings of the International Workshop on Rough Sets and Soft Computing (RSSC'94), San Jose, USA: IEEE, 1994: 373-383.
    [132] Brodatz Textures [Online]. Available: http://sipi.usc.edu/database/database.cgi?Volume=textures&image=24, 2006.10.18.
    [133] Hansen L K, Salamon P. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10):993-1001.
    [134] 周志华, 陈世福. 神经网络集成. 计算机学报,2002, 25(1):1-8.
    [135] Krogh A, Vedelsby J. Neural network ensembles, cross validation, and active learning. In: Tesauro G, Touretzky D, Leen T eds, Advances in Neural Information Processing Systems 7, Cambridge, MA:MIT Press, 1995, 231-238.
    [136] Hampshire J, Waibel A. A novel objective function for improved phoneme recognition using time delay neural networks. IEEE Transactions on Neural Networks, 1990, 1(2): 216-228.
    [137] Cherkauer K J. Human expert level performance on a scientific image analysis task by a system using combined artificial neural networks. In: Proc the 13th AAAI Workshop on Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms, Portland, OR, 1996: 15-21.
    [138] Maclin R, Shavlik J W. Combining the predictions of multiple classifiers: using competitive learning to initialize neural networks. In: Proc the 14th International Joint Conference on Artificial Intelligence, Montreal, Canada: IEEE, 1995: 524-530.
    [139] Yao X, Liu Y. Making use of population information in evolutionary artificial neural networks. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 1998, 28(3): 417-425.
    [140] Zhou Z H, Wu J X, Jiang Y, Chen S F. Genetic algorithm based selective neural network ensemble. In: Proc the 17th International Joint Conference on Artificial Intelligence, Seattle, WA, 2001, 2: 797-802.
    [141] Chan Z S H, Kasabov N. Fast neural network ensemble learning via negative-correlation data correction. IEEE Transactions on Neural Networks, 2005, 16(6): 1707-1710.
    [142] 傅向华, 冯博琴, 马兆丰,等. 增量构造负相关异构神经网络集成的方法. 西安交通大学学报,2004,38(8):796-799.
    [143] 李凯,黄厚宽. 一种基于聚类技术的选择性神经网络集成方法. 计算机研究与发展, 2005, 42 (4):594-598.
    [144] Fu Q, Hu S X, Zhao S Y. Clustering-based selective neural network ensemble. Journal of Zhejiang University, 2005, 6A(5): 387-392.
    [145] Freund Y, Schapire R E. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 1997, 55 (1):119-139.
    [146] Breiman L. Bagging predictors. Machine Learning, 1996, 24(2): 123-140.
    [147] Ho T K. The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(8): 832-844.
    [148] Opitz D. Feature selection for ensembles. In: Proc. 16th National Conf. on Artificial Intelligence, AAAI Press, 1999: 379-384.
    [149] 凌锦江, 陈兆乾, 周志华. 基于特征选择的神经网络集成方法, 复旦学报(自然科学版), 2004,43(5):685-688.
    [150] Cunningham P, Carney J. Diversity versus quality in classification ensembles based on feature selection. In: De Mántaras R L, Plaza E (eds.), Proc. ECML 2000 11th European Conf. on Machine Learning, Barcelona, Spain: Springer, 2000, LNCS 1810: 109-116.
    [151] Tsymbal A, Pechenizkiy M, Cunningham P. Diversity in ensemble feature selection. Technical Report, Trinity College Dublin, 2003: 1-38.
    [152] Vinterbo S, Ohrn A. Minimal approximate hitting sets and rule templates. International Journal of Approximate Reasoning, 2000, 25(2): 123-143.
    [153] Bazan J, Skowron A, Synak P. Dynamic reducts as a tool for extracting laws from decision tables. In: Proceedings of Symposium on Methodologies for Intelligent Systems, Charlotte, NC,USA: Springer-Verlag, LNAI 869, 1994: 346-355.
    [154] Liu Y, Yao X. Simultaneous training of negatively correlated neural networks in an ensemble. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 1999, 29 (6) : 716-725.
    [155] Zhou Z H, Wu J X, Tang W. Ensembling neural networks: many could be better than all. Artificial Intelligence, 2002, 137(1-2): 239-263.
    [156] 吴建鑫, 周志华, 沈学华等. 一种选择性神经网络集成构造方法. 计算机研究与发展, 2000, 37(9): 1039-1044.
    [157] Bryll R, Gutierrez-Osuna R, Quek F. Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognition, 2003, 36(6): 1291-1302.
    [158] 凌锦江, 周志华. 基于因果发现的神经网络集成方法. 软件学报, 2004, 15(10): 1479-1484.
    [159] UCI Machine Learning Repository [Online]. Available: http://www.ics.uci.edu/~mlearn/MLRepository.html, 2006.10.18.
    [160] Opitz D, Maclin R. Popular ensemble methods: an empirical study. Journal of Artificial Intelligence Research, 1999, 11: 169-198.
    [161] Kuncheva L, Whitaker C. Measures of diversity in classifier ensembles and their relationship with ensemble accuracy. Machine Learning, 2003, 51(2):181-207.
    [162] Whitehead B A. Cooperative competitive genetic evolution of radial basis function centers and widths for time series prediction. IEEE Transactions on Neural Networks, 1996, 7(4): 869-880.
    [163] Lu Y W, Sundararajan N, Saratchandran P. Performance evaluation of sequential minimal radial basis function (RBF) neural network learning algorithm. IEEE Transactions on Neural Networks, 1998, 9(6): 308-317.
    [164] 王旭东,邵惠鹤. RBF神经网络理论及其在控制中的应用. 信息与控制, 1997,26(4): 272-284.
    [165] Haykin S. Neural networks: a comprehensive foundation, 2nd edition. Beijing: China Machine Press, 2004:183-220.
    [166] Bezdek J C. Pattern recognition with fuzzy objective function algorithms. New York: Plenum, 1981.
    [167] Kohonen T. Self-organization and associative memory, 3rd edition. Berlin, Germany: Springer-Verlag, 1989.
    [168] 席静, 欧阳为民. 基于聚类的连续值属性最佳离散化算法. 小型微型计算机系统, 2000, 21(10):1025-1027.
    [169] Wroblewski J. Finding minimal reducts using genetic algorithms. In: Proceedings of the International Workshop on Rough Sets Soft Computing at Second Annual Joint Conference on Information Sciences (JCIS'95), Wrightsville Beach, NC, 1995: 186-189.
    [170] 孙健, 申瑞民, 韩鹏. 一种新颖的径向基函数(RBF)网络学习算法. 计算机学报, 2003, 26(11): 1562-1567.
    [171] Moody J, Darken C J. Fast learning in networks of locally-tuned processing units. Neural Computation, 1989, 1(2): 281-294.
    [172] Chen S, Mulgrew B, McLaughlin S. Adaptive Bayesian feedback equalizer based on radial basis function network. In: International Conference on Communications, Chicago: IEEE, 1992, 3: 1267-1271.
    [173] Lowe D. Adaptive radial basis function nonlinearities, and the problem of generalization. In: First IEE International Conference on Artificial Neural Networks, London: IEE, 1989: 171-175.
    [174] Wettschereck D, Dietterich T. Improving the performance of radial basis function networks by learning center locations. In: Advances in Neural Information Processing Systems, 1992, 4:1133-1140.
    [175] Berthold M R, Diamond J. Boosting the performance of RBF networks with dynamic decay adjustment. Advances in Neural Information Processing Systems, Cambridge, MA: MIT Press, 1995, 7: 521-528.
    [176] Lowe D. What have neural networks to offer statistical pattern processing. In: Proceedings of the SPIE Conference on Adaptive Signal Processing, San Diego, CA: SPIE, 1991:460-471.
    [177] Mitra S, Kuncheva L I. Improving classification performance using fuzzy MLP and two-level selective partitioning of the feature space. Fuzzy Sets and Systems, 1995, 70(1):1-13.
    [178] Kowalczyk W. Analyzing temporal patterns with rough sets. In: Proceedings of the 4th European Congress on Intelligent Techniques and Soft Computing ( EUFIT’96), Aachen: Wissenschaftsverlag, 1996, 139-143.
    [179] Kowalczyk W. Rough data modeling: a new technique for analyzing data. In: Rough Sets in Knowledge Discovery 1: Methodology and Applications, Heidelberg: Physica-Verlag, 1998, 400-421.
    [180] Piasta Z, Lenarcik A. Learning rough classifiers from large databases with missing values. In: Rough Sets in Knowledge Discovery 1: Methodology and Applications. Heidelberg: Physica-Verlag, 1998: 483-499.
    [181] Mollestad T, Komorowski J. A rough set framework for data mining of propositional default rules. In: Rough-Fuzzy Hybridization: A New Trend in Decision Making, Berlin: Springer-Verlag, 1998, 298-316.
    [182] Greco S, Matarazzo B, Słowiński R. New developments in the rough set approach to multi-attribute decision analysis. Bulletin of International Rough Set Society, 1998, 2(2/3): 57-87.
    [183] Loken T. Rough modeling: extracting compact models from large database. Master's thesis, Norwegian University of Science and Technology, 1999: 57-81.
    [184] Loken T, Komorowski J. Rough modeling: a bottom-up approach to model construction. International Journal of Applied Mathematics and Computer Science, 2001, 11(3): 675-690.
    [185] 黄金杰,武俊峰,蔡云泽. 模糊粗数据模型:一种数据分析的新方法. 计算机学报,2005,28(11):1866-1874.
    [186] 黄金杰,李士勇,蔡云泽. 一种建立粗糙数据模型的监督聚类方法. 软件学报,2005,16(5):744-753.
    [187] Li S T, Kwok J T, Zhu H L, Wang Y N. Texture classification using the support vector machines. Pattern Recognition, 2003, 36(12): 2883-2893.
    [188] Hagan M T, Demuth H B, Beale M H. Neural Network Design. Beijing: China Machine Press, 2004: 295-299.
