Research on Several Classification Problems Based on Sparse Representation and Ensemble Learning
Abstract
Classification problems are ubiquitous in the real world and constitute one of the core research problems in the machine learning community. Driven by real-world applications, classification has been extended from single-instance single-label classification (traditional supervised classification) to multi-label classification, multi-instance classification, and multi-instance multi-label classification. These classification problems pose new challenges for machine learning researchers.
     Sparse representation and ensemble learning have sound theoretical foundations and are powerful tools for solving classification problems; they have demonstrated excellent performance in many applications. To address the classification problems above, this dissertation focuses on single-instance single-label hyperspectral remote sensing image classification, multi-label image classification, multi-label gene classification, multi-label Web page classification, multi-instance image classification, and multi-instance multi-label image classification. With the goal of improving overall classification performance, several new methods based on sparse representation and ensemble learning are proposed. The main contributions of this dissertation are summarized as follows:
     1. A novel classification method for hyperspectral remote sensing images is proposed that combines sparse representation features with spectral information features. First, a dictionary is learned from hyperspectral remote sensing image data using a machine learning method, and the sparse representation feature of each pixel is computed from that dictionary. Random forest classifiers are then trained separately on the sparse representation features and the spectral information features, and their outputs are ensembled for prediction. Experimental results on hyperspectral remote sensing image data indicate that the proposed method yields better classification results than methods based on either the spectral information features or the sparse representation features alone.
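     The two-view ensemble in contribution 1 can be illustrated compactly. The following is a minimal sketch rather than the dissertation's implementation: it substitutes synthetic pixel data for a real hyperspectral cube, uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for the unspecified dictionary-learning method, and averages the class probabilities of the two random forests as an assumed combination rule.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 50))              # 600 pixels x 50 spectral bands (synthetic stand-in)
y = rng.integers(0, 4, 600)            # 4 land-cover classes (synthetic)
train, test = np.arange(400), np.arange(400, 600)

# Learn a dictionary from the training spectra, then code every pixel over it.
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="lasso_lars",
                                   transform_alpha=0.1, random_state=0)
sparse_feat = dico.fit(X[train]).transform(X)

# One random forest per feature view: sparse codes vs. raw spectra.
rf_sparse = RandomForestClassifier(n_estimators=100, random_state=0)
rf_sparse.fit(sparse_feat[train], y[train])
rf_spec = RandomForestClassifier(n_estimators=100, random_state=0)
rf_spec.fit(X[train], y[train])

# Ensemble the two views by averaging their class-probability outputs.
proba = (rf_sparse.predict_proba(sparse_feat[test]) +
         rf_spec.predict_proba(X[test])) / 2
print("accuracy:", (proba.argmax(axis=1) == y[test]).mean())
```

     Probability averaging is only one plausible fusion rule; majority voting over the two views would fit the text equally well.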
     2. A novel multi-label classification method based on sparse representation is proposed. First, the training samples are used as a dictionary, the test sample is represented as a linear combination of the training samples in the dictionary, and the sparse representation coefficients are obtained by l1-minimization. The discriminative information in the sparse coefficients is then used to compute the membership of the test sample in each label. Finally, the labels are ranked by membership, and the test sample is classified according to this ranking. Extensive experiments on multi-label data show that the proposed method achieves better results than other methods in the literature.
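     A minimal sketch of the sparse-representation step in contribution 2, under stated assumptions: a Lasso solver stands in for the paper's l1-minimization, and the membership of the test sample in a label is taken to be the normalized total coefficient mass on training samples carrying that label. The data and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X_train = rng.random((100, 30))           # 100 training samples, 30 features (synthetic)
Y_train = rng.integers(0, 2, (100, 5))    # 5 binary labels per training sample
x_test = rng.random(30)

# Represent the test sample as a sparse combination of training samples:
# the columns of the dictionary are the training samples themselves.
solver = Lasso(alpha=0.01, positive=True, max_iter=5000)
solver.fit(X_train.T, x_test)
coef = solver.coef_                       # one coefficient per training sample

# Membership of the test sample in each label: total coefficient mass on
# the training samples that carry that label, normalized to sum to one.
membership = coef @ Y_train
membership = membership / (membership.sum() + 1e-12)
print("label ranking:", np.argsort(-membership))
```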
     3. A multi-label classifier ensemble method based on random subspaces is proposed. Multiple feature subsets of equal size are drawn at random from the full feature set, a multi-label base classifier is trained on each subset, and the outputs of all base classifiers are combined for prediction. Experimental results on multi-label data demonstrate that the proposed method outperforms any single multi-label classifier.
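     Contribution 3 reduces to a short loop. The sketch below assumes k-nearest neighbors (which accepts multi-label indicator targets in scikit-learn) as the multi-label base classifier and per-label majority voting as the combination rule; the dissertation's actual base learner and fusion rule are not specified here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = rng.random((300, 40))                 # 300 samples, 40 features (synthetic)
Y = rng.integers(0, 2, (300, 6))          # 6 binary labels per sample
Xtr, Ytr, Xte = X[:200], Y[:200], X[200:]

T, d = 11, 15                             # 11 base learners, 15 features each
votes = np.zeros((len(Xte), Y.shape[1]))
for t in range(T):
    idx = rng.choice(X.shape[1], size=d, replace=False)  # one random feature subspace
    base = KNeighborsClassifier(n_neighbors=5).fit(Xtr[:, idx], Ytr)
    votes += base.predict(Xte[:, idx])    # accumulate per-label 0/1 votes

Y_pred = (votes / T >= 0.5).astype(int)   # per-label majority vote
print(Y_pred[:3])
```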
     4. A multi-instance image classification method based on sparse representation and ensemble learning is proposed. First, a dictionary is learned from all the instances in the training bags, and the sparse representation coefficients of each instance are computed from this dictionary; second, a bag feature vector is computed from the sparse coefficients of all instances in the bag. The multi-instance classification problem is thus transformed into a traditional supervised classification problem that can be solved by well-known supervised methods. To further improve performance, base classifiers are obtained by repeating this procedure with dictionaries of different sizes, yielding bag features of different lengths and representational power, and their outputs are ensembled for prediction. Experimental results on multi-instance image data demonstrate that the proposed method achieves higher classification accuracy than state-of-the-art multi-instance classification methods.
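     The bag-feature construction in contribution 4 can be sketched as follows, with two explicit assumptions: max pooling of the absolute instance sparse codes is used as the bag-level aggregation, and a plain SVM serves as the traditional supervised learner. The ensemble over dictionary sizes follows the text; everything else is synthetic.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(3)
bags = [rng.random((rng.integers(3, 8), 20)) for _ in range(60)]  # 60 bags of instances
labels = rng.integers(0, 2, 60)                                   # one binary label per bag
all_instances = np.vstack(bags)

def bag_features(bags, dico):
    # Max-pool each bag's instance sparse codes into one fixed-length vector.
    return np.vstack([np.abs(dico.transform(b)).max(axis=0) for b in bags])

votes = np.zeros(20)                          # predictions for 20 held-out bags
for k in (32, 64, 128):                       # ensemble over dictionary sizes
    dico = MiniBatchDictionaryLearning(n_components=k, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5, random_state=0)
    dico.fit(all_instances)                   # dictionary from all training instances
    F = bag_features(bags, dico)              # bags -> fixed-length vectors
    clf = SVC().fit(F[:40], labels[:40])      # now an ordinary supervised problem
    votes += clf.predict(F[40:])

print("ensemble prediction:", (votes >= 2).astype(int))  # majority of 3 learners
```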
     5. Following the idea of degeneration, a novel multi-instance multi-label image classification method based on sparse representation and classifier ensemble is proposed. First, bag features of multi-instance multi-label images are computed by a dictionary-learning-based sparse representation method, transforming the multi-instance multi-label classification problem into a multi-label classification problem; this multi-label problem is then further transformed into a traditional supervised classification problem, which is solved by traditional supervised methods. To further improve performance, diverse base classifiers are constructed by repeating this procedure with dictionaries of different sizes, and their outputs are ensembled for prediction. Experimental results on multi-instance multi-label image data show that the proposed method is superior to state-of-the-art methods in terms of the evaluation metrics.
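     Finally, a sketch of the degeneration chain in contribution 5, assuming the same max-pooled bag feature as above and binary relevance (one binary SVM per label) as the reduction from multi-label to traditional supervised classification; both choices are illustrative stand-ins rather than the dissertation's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(4)
bags = [rng.random((5, 20)) for _ in range(60)]    # MIML bags: 5 instances of 20 dims each
Y = rng.integers(0, 2, (60, 4))                    # 4 binary labels per bag (synthetic)

preds = np.zeros((20, 4))
for k in (32, 64):                                 # differing dictionary sizes -> diverse learners
    dico = MiniBatchDictionaryLearning(n_components=k, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5, random_state=0)
    dico.fit(np.vstack(bags))
    # Step 1: MIML -> multi-label, via max-pooled bag features (assumed pooling).
    F = np.vstack([np.abs(dico.transform(b)).max(axis=0) for b in bags])
    # Step 2: multi-label -> traditional supervised, via binary relevance.
    for j in range(Y.shape[1]):
        preds[:, j] += SVC().fit(F[:40], Y[:40, j]).predict(F[40:])

print((preds / 2 >= 0.5).astype(int)[:3])          # average the two learners' votes
```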
引文
[1] Manyika J, Chui M, Brown B, et al A. Big data: The next frontier for innovation,competition, and productivity. Technical report, McKinsey Global Institute,2011.
    [2] Mitchell T. Machine Learning. McGraw-Hill,1997.
    [3] Duda RO, Hart PE, Stork DG. Pattern Classification. Second edition. New York:Wiley,2001.
    [4] Bishop CM. Pattern recognition and machine learning. New York: Springer,2006.
    [5]边肇祺,张学工.模式识别.北京:清华大学出版社,2004.
    [6] Tenenbaum JB, de Silva V, Langford JC. A global geometric framework fornonlinear dimensionality reduction. Science,2000,290(5500):2319-2323.
    [7] Roweis ST, Saul LK. Nonliner dimensionality reduction by locally linearembedding,2000,290(5500):2323-2326.
    [8] Belkin M, Niyogi P. Laplacian eigenmaps for dimensionality reduction and datarepresentation. Neural Computation,2003,15(6):1373-1393.
    [9] Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning fromunlabeled data. In Proceedings of International Conference on Machine Learning,2007:759-766.
    [10]Yang JC, Yu K, Gong YH, et al. Linear spatial pyramid matching using sparsecoding for image classification. In Proceedings of IEEE Conference on ComputerVision and Pattern Recognition,2009:1794–1801.
    [11]尚凡华.基于低秩结构学习数据表示.博士论文,西安电子科技大学,2012.
    [12]Yang J, Vasant H. Feature subset selection using a genetic algorithm. IEEEIntelligent Systems,1998,13(2):44-49.
    [13]张向荣.基于选择性特征融合与集成学习的SAR图像分类与分割.博士论文,西安电子科技大学,2006.
    [14]Patrick EA, Fischer FP. A generalized k-nearest neighbor rule. Information andControl,1970,16(2):128-152.
    [15]Bernardo JM, Smith AFM. Bayesian Theory. New York: Wiley,1996.
    [16]Jain AK, Mao J, Mohiuddin KM. Artificial neural networks: A tutorial. Computer,1996,29(3):31-44.
    [17]Lippmann R. An introduction to computing with neural nets. IEEE ASSP Magazine,1987,4(2):4-22.
    [18]Breiman L, Friedman JH, Olshen RA, et al. Classification and Regression Trees.New York: Chapman&Hall,1993.
    [19]Vapnik V. The Nature of Statistical Learning Theory. New York: Springer,1995.
    [20]Vapnik V. Statistical Learning Theory. New York: Wiley,1998.
    [21]Cristianini N, Shawe-Taylor J. An Introduction to Support Vector Machines andOther Kernel-based Learning Methods. Cambridge: Cambridge University Press,2000.
    [22]Shawe-Taylor J, Cristianini N. Kernel Methods for Pattern Analysis. Cambridge:Cambridge University Press,2004.
    [23]Hansen L, Salamon P. Neural network ensembles. IEEE Transactions on PatternAnalysis and Machine Intelligence,1990,12(10):993-1001.
    [24]Schapire R. The strength of weak learnability. Machine Learning1990,5(2):197–227
    [25]Freund Y, Schapire R. A decision-theoretic generalization of on-line learning andan application to boosting. Journal of Computer and System Sciences,1997,55(1):119-139.
    [26]Breiman L. Bagging predictors.1996, Machine Learning24(2):123–140.
    [27]Ho TK. The random subspace method for constructing decision forests. IEEETransactions on Pattern Analysis and Machine Intelligence,1998,20,(8):832-844.
    [28]Zhou ZH, Wu J, Tang W. Ensembling neural networks: many could be better thanall. Artificial Intelligence,2002,137(1-2):239-263.
    [29]Dietterich TG. Machine-learning research: Four current directions. AI magazine,1997,18(4):97-136.
    [30]Bauer E, Kohavi R. An empirical comparison of voting classification algorithms:bagging, boosting, and variants. Machine Learning1999,36(1-2):105-139
    [31]Breiman L. Random forests. Machine Learning,2001,45(1):5-32.
    [32]Strehl A, Ghosh J. Cluster ensembles---a knowledge reuse framework forcombining multiple partitioning. Journal of Machine Learning Research,2002,3(12):583-617.
    [33]Jiao LC, Li Q. Kernel matching pursuit classifier ensemble. Pattern Recognition,2006,39(4):587-594.
    [34]Mao SS, Jiao LC, Xiong L, et al. Greedy optimization classifiers ensemble basedon diversity. Pattern Recognition,2011,44(6):1245-1261.
    [35]Huang FJ, Zhou ZH, Zhang HJ, et al. Pose invariant face recognition. InProceedings of IEEE Conference on Automatic Face and Gesture Recognition,2000:245-250.
    [36]Geng X, Zhou ZH. Image region selection and ensemble for face recognition.Journal of Computer Science and Technology,2006,21(1):116-125.
    [37]Gao XB, Zhong JJ, Li J, et al. Face sketch synthesis algorithm based on E-HMMand selective ensemble. IEEE Transactions on Circuits and Systems for VideoTechnology,2008,18(4):487-496.
    [38]Wang XG, Tang XO. Random sampling for subspace face recognition.International Journal of Computer Vision,2006,70(1):91-104.
    [39]Tao D, Tang XO, Li XL, et al. Asymmetric bagging and random subspace forsupport vector machines-based relevance feedback in image retrieval. IEEETransactions on Pattern Analysis and Machine Intelligence,2006,28(7):1088-1099.
    [40]Xu SX, Xue X, Zhou ZH. Ensemble multi-instance multi-label learning approachfor video annotation task. In Proceedings of the ACM Conference on Multimedia,2011:1153-1156.
    [41]Xu, XS, Jiang Y, Liang P, et al. Ensemble approach based on conditional randomfield for multi-label image and video annotation. In Proceedings of ACMConference on Multimedia,2011:1377-1380.
    [42]Ham J, Chen Y, Crawford M, et al. Investigation of the random forest frameworkfor classification of hyperspectral data. IEEE Transactions on Geoscience andRemote Sensing,2005,43(3):492-501.
    [43]Zhang XR, Jiao LC, Liu F, et al. Spectral clustering ensemble applied to texturefeatures for SAR image segmentation. IEEE Transactions on Geoscience andRemote Sensing,2008,46(7):2126-2136.
    [44]贾建华.谱聚类集成算法研究及其在图像分割中的应用.博士论文,西安电子科技大学,2010.
    [45]Kuncheva LI, Rodríguez JJ, Plumpton CO, et al. Random subspace ensembles forfMRI classification. IEEE Transactions on Medical Imaging,2010,29(2):531-542.
    [46]Candès E, Wakin M. An introduction to compressive sampling. IEEE SignalProcessing Magazine,2008,25(2):21-30.
    [47]Donoho D. Compressed sensing. IEEE Transactions on Information Theory,2006,52(4):1289-1306.
    [48]Candès E, Tao T. Near optimal signal recovery from random projections: Universalencoding strategies? IEEE Transactions on Information Theory,2006,52(12):5406-5425
    [49]Candès E, Romberg J, Tao T. Robust uncertainty principles: Exact signalreconstruction from highly incomplete frequency information. IEEE Transactionson Information Theory,2006,52(2):489-509.
    [50]Donoho D, Tsaig Y. Extensions of compressed sensing. Signal Processing,2006,86(3):533-548.
    [51]Wu J, Liu F, Jiao LC, et al. Compressive sensing SAR image reconstructionbased on bayesian framework and evolutionary computation. IEEE Transactions onImage Processing,2011,20(7):1904-1911.
    [52]Wu J, Liu F, Jiao LC, et al. Multivariate compressive sensing for imagereconstruction in the wavelet domain: using scale mixture models. IEEETransactions on Image Processing,2011,20(12):3483-3494.
    [53]焦李成,杨淑媛,刘芳,等.压缩感知回顾与展望.电子学报,2011,39(7):1651-1662.
    [54]石光明,刘丹华,高大华,等.压缩感知理论及其进展.电子学报,2009,37(5):1071-1081.
    [55]Elad M, Aharon M. Image denoising via sparse and redundant representations overlearned dictionaries. IEEE Transactions on image Processing,2006,15(12):3736-3745.
    [56]Mairal J, Elad M, Sapiro G. Sparse representation for color image restoration. IEEETransactions on Image Processing,2008,17(1):53-69.
    [57]Mairal J, Bach F, Ponce J, et al. Non-local sparse models for image restoration. InProceedings of IEEE Conference on Computer Vision,2009:2272-2279.
    [58]Yang JC, Wright J, Huang T, et al. Image super-resolution via sparse representation.IEEE Transaction on Image Processing,2010,19(11):2861-2873.
    [59]Dong WS, Zhang L, Shi GM, et al. Image deblurring and super-resolution byadaptive sparse domain selection and adaptive regularization. IEEE Transactions onImage Processing,2011,20(7):1838-1857.
    [60]Gao XB, Zhang KB, Tao DC, et al. Image super-resolution with sparse neighborembedding. IEEE Transactions on Image Processing,2012,21(7):3194-3205.
    [61]Yang SY, Wang M, Chen YG, et al. Single-image super-resolution reconstructionvia learned geometric dictionaries and clustered sparse coding. IEEE Transactionson Image Processing2012,21(9):4016-4028.
    [62]Yang B, Li S. Multifocus image fusion and restoration with sparse representation.IEEE Transaction on Information Theory Instrumentation and Measurement,2010,59(4):884-892.
    [63]Wright J, Yang A, Ganesh A, et al. Robust face recognition via sparserepresentation.IEEE Transactions on Pattern Analysis and Machine Intelligence,2009,31(2):201-227.
    [64]Yang M, Zhang L, Feng XC, et al. Fisher discrimination dictionary learning forsparse representation. In proceedings of IEEE Conference on Computer Vision,2011,543-550.
    [65]Zhang HC, Nasrabadi NM, Zhang YN, et al. Joint dynamic sparse representationfor multi-view face recognition. Pattern Recognition,2012,45(4):1290-1298.
    [66]Han ZJ, Jiao JB, Zhang BC, et al. Visual object tracking via sample-based adaptivesparse representation. Pattern Recognition,2011,44,(1):2170-2183.
    [67]Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning fromunlabeled data. In Proceedings of International Conference on Machine Learning,2007:759-766.
    [68]Yang JC, Yu K, Gong YH, et al. Linear spatial pyramid matching using sparsecoding for image classification. In Proceedings of IEEE Conference on ComputerVision and Pattern Recognition,2009:1794–1801.
    [69]Qiao LS, Chen SC, Tan XY. Sparsity preserving projection with applications toface recognition. Pattern Recognition,2010,43(1):331-341
    [70]Qiao LS, Chen SC, Tan XY. Sparsity preserving discriminant analysis for singletraining image face recognition. Pattern Recognition Letter,2010,31(5):422-429.
    [71]Cheng B, Yang J, Yan SC, etal. Learning with1-graph for image analysis. IEEETransactions Image Processing,2010,19(4):858-866.
    [72]Elhamifar, Ehsan, Vidal R. Sparse subspace clustering. In Proceedings of IEEEConference on Computer Vision Pattern Recognition,2009,2790-2797.
    [73]Tsoumakas G, Katakis I. Multi-label classification: an overview. InternationalJournal of Data Warehousing and Mining,2007,3(3):1-13.
    [74]Zhou ZH. Multi-instance learning: a survey. Technical report, AI Lab, Departmentof Computer Science&Technology, Nanjing University, Nanjing, China,2004.
    [75]James F, Frank F. A review of multi-instance learning assumptions. TheKnowledge Engineering Review,2010,25(1):1-25.
    [76]Zhou ZH, Zhang ML. Multi-instance multi-label learning with application to sceneclassification. In Proceedings of International Conference on Neural InformationProcessing Systems,2006:1609-1616.
    [77]Zhou ZH, Zhang ML, Huang SJ, et al. Multi-instance multi-label learning.Artificial Intelligence,2011,176(1):2291-2320.
    [78]Schapire RE, Singer Y. BoosTexter: a boosting-based system for text categorization.Machine Learning,2000,39(2-3):135-168.
    [79]Ueda N, Saito K. Parametric mixture models for multi-label text. In Proceedings ofInternational Conference on Neural Information Processing Systems,2002:721-728.
    [80]Elisseeff A, Weston J. A kernel method for multi-labelled classification. InProceedings of International Conference on Neural Information Processing Systems,2001:681-687.
    [81]Clare A, King RD. Knowledge discovery in multi-label phenotype data. LectureNotes in Computer Science,2001,2168:42–53.
    [82]Zhang ML, Zhou ZH. ML-kNN: a lazy learning approach to multi-label learning.Pattern Recognition,2007,40(7):2038-2048.
    [83]Boutell MR, Luo J, Shen X, et al. Learning multi-label scene classification. PatternRecognition,2004,7(9):1757-71.
    [84]Comité FD, Gilleron R, Tommasi M. Learning multi-label alternating decision treefrom texts and data. Lecture Notes in Computer Science,2003,2734:35-49.
    [85]Zhang ML, Zhou ZH. Multilabel neural networks with applications to functionalgenomics and text categorization. IEEE Transactions on Knowledge and DataEngineering,2006,18(10):1338-1351.
    [86]Kong X, Ng M, Zhou ZH. Transductive multi-label learning via label setpropagation. IEEE Transactions on Knowledge and Data Engineering,2012,25(3)704-719.
    [87]Zhang Y, Zhou ZH. Multi-label dimensionality reduction via dependencymaximization. In Proceedings of AAAI Conference on Artificial Intelligence,2008:1503-1505.
    [88]Yang B, Sun JT, Wang T, et al. Effective multi-label active learning for textclassification. In Proceedings of ACM Conference on Knowledge Discovery andData Mining,2009:917-926.
    [89]Qi GJ, Hua XS, Rui Y, et al. Two-dimensional multi-label active learning with anefficient online adaptation model for image classification. IEEE Transactions onPattern Analysis and Machine Intelligence,2008,99(1):1880-1897.
    [90]Cabral RS, De la Torre F, Costeira JP, et al. Matrix completion for multi-labelimage classification. In Proceedings of International Conference on NeuralInformation Processing Systems.2011:190-198.
    [91]Luo Y, Dao DC, Xu C, et al. Multiview vector-valued manifold regularization formultilabel image classification. IEEE Transactions on Neural Networks andLearning Systems,2013,24(5):709-722.
    [92]Sanden C, Zhang JZ. Enhancing multi-label music genre classification throughensemble techniques. In Proceedings of ACM Conference on Research andDevelopment in Information Retrieval,2011:705–714.
    [93]Qi GJ, Hua XS, Rui Y, et al. Correlative multi-label video annotation. InProceedings of ACM Conference on Multimedia,2007:17-26.
    [94]Yu K, Yu S, Tresp V. Multi-label informed latent semantic indexing. InProceedings of ACM Conference on Research and Development in InformationRetrieval,2005:258-265.
    [95]Zhu S, Ji X, Xu W, et al. Multi-labelled classification using maximum entropymethod. In Proceedings of ACM Conference on Research and Development inInformation Retrieval,2005:274-281.
    [96]Gopal S, Yang Y. Multilabel classification with meta-level features. In Proceedingsof ACM Conference on Research and Development in Information Retrieval,2010:315-322.
    [97]Katakis I, Tsoumakas G, Vlahavas I. Multilabel text classification for automatedtag suggestion. In Proceedings of European Conference on Machine Learning andPrinciples and Practice of Knowledge Discovery in Databases Discovery Challenge,2008:75-83.
    [98]Kong XG, Yu PS. gMLC: a multi-label feature selection framework for graphclassification. Knowledge and Information Systems,2012,31(2):281-305.
    [99]Dietterich TG, Lathrop RH, Lozano-Pérez T. Solving the multiple instance problemwith axis-parallel rectangles. Artificial Intelligence,1997,89(1-2):31-71.
    [100]Long PM, Tan L. PAC learning axis-aligned rectangles with respect to productdistributions from multiple-instance examples. Machine Learning,1998,30(1):7-21.
    [101]Valiant LG. A theory of the learnable. Communications of the ACM,1984,27(11):1134-1142.
    [102]Maron O, Ratan AL. Multiple-instance learning for natural scene classification. InProceedings of International Conference on Machine Learning,1998:341-349.
    [103]Andrews S, Tsochantaridis I, Hofmann T. Support vector machines for multipleinstance learning. In Proceedings of International Conference on NeuralInformation Processing Systems,2003:561-568.
    [104]Zhou ZH, Jiang K, Li M. Multi-instance learning based web mining. AppliedIntelligence,2005,22(2):135-147.
    [105]Maron O, Lozano-Pérez T. A framework for multiple-instance learning. InProceedings of International Conference on Neural Information Processing Systems,1998:570-576.
    [106]Chen Y, Wang JZ, Image categorization by learning and reasoning with regions.Journal of Machine Learning Research,2004,5(8):913-939.
    [107]Zhou ZZ, Zhang ML. Solving multi-instance problems with classifier ensemblebased on constructive clustering. Knowledge and Information System,2007,11(2):155-170.
    [108]Zhang Q, Yu W, Goldman SA, et al. Content-based image retrieval usingmultiple-instance learning. In Proceedings of International Conference on MachineLearning,2002:682-689.
    [109]Fung G, Dundar M, Krishnappuram B, et al. Multiple instance learning forcomputer aided diagnosis. In Proceedings of International Conference on NeuralInformation Processing Systems,2007:425-432.
    [110]Zhang ML, Zhou ZH. M3MIML: A maximum margin method for multi-instancemulti-label learning. In Proceedings of IEEE Conference on Data Mining,2008:688-697.
    [111]Zhang ML, Wang ZJ. MIMLRBF: RBF neural networks for multi-instancemulti-label learning. Neurocomputing,2009,72(16–18),3951-3956.
    [112]He JJ, Gu H, Wang ZL. Multi-instance multi-label learning based on Gaussianprocess with application to visual mobile robot navigation. Information Sciences,2012,190(1):162-177.
    [113]Li YX, JW, Kumar S, et al. Drosophila gene expression pattern annotation throughmulti-instance multi-label learning. ACM/IEEE Transactions on ComputationalBiology and Bioinformatics,2012,9(1),98-112.
    [1] Olshausen B, Field D. Emergence of simple-cell receptive field properties bylearning a sparse code for natural images. Nature,1996,381(6583)607-609.
    [2] Olshausen B, Field D. Sparse coding with an overcomplete basis set: A strategyemployed by v1? Vision Research,1997,37(23):3311-3325.
    [3] Tropp JA, Wright SJ. Computational methods for sparse solution of linear inverseproblems. Proceedings of the IEEE.2010.98(6):948-958.
    [4] Donoho D. For most large underdetermined system of liner equations the minimalL1-norm solution is also the sparsest solution. Communications on the Pure andApplied Mathematics2006,59(6):797-829.
    [5] Mallat S, Zhang Z. Matching pursuits with time-frequency dictionaries. IEEETransactions on Signal Processing,1994,41(12):3397-3415.
    [6] Pati YC, Rezaiifar R, Krishnaprasad PS. Orthogonal matching pursuit: recursivefunction approximation with applications to wavelet decomposition. In Proceedingsof Asilomar Conference on Signals, Systems and Computers,1993:40-44.
    [7] Needell D, Vershynin R. Signal recovery from incomplete and inaccuratemeasurements via regularized orthogonal matching pursuit. IEEE Journal ofSelected Topics in Signal Processing,2010,4(2):310-316.
    [8] Donoho D, Tsaig Y, Drori I, et al. Sparse solution of underdetermined linearequations by stagewise orthogonal matching pursuit. Technical Report, Departmentof Statistics, Stanford University,2006.
    [9] Needell D, Tropp JA. CoSaMP: Iterative signal recovery from incomplete andinaccurate samples. Applied and Computational Harmonic Analysis,2008,26(3):301-321.
    [10]Donoho D, Huo X. Uncertainty principles and ideal atomic decomposition. IEEETransactions on Information Theory,2001,47(7):2845-2862.
    [11]Donoho D, Elad M. Optimally sparse representation in general (non-orthogonal)dictionaries via1minimization. Proceedings of the National Academy of Sciences,2003,100(5):2197-2202.
    [12]Candès EJ, Tao T. Near optimal signal recovery from random projections:Universal encoding strategies. IEEE Transactions on Information Theory.2006.52(12):5406-5425.
    [13]Tibshirani R. Regression shrinkage and selection via the LASSO. Journal of theRoyal Statistical Society, Series B.1996.58(1):267-288.
    [14]Chen SS, Donoho D, Saunders MA. Atomic decomposition by basis pursuit. SIAMReview,2001,43(1):129-159.
    [15]Candès E, Romberg J. l1-MAGIC: Recovery of sparse signals via convexprogramming. Technical Report, California Institute of Technology,2005.
    [16]Kim SJ, Koh K, Lustig M, et al. An interior-point method for large-scale1regularized least squares. IEEE Journal of Selected Topics in Signal Processing,2007,1(4):606-617.
    [17]Figueiredo M, Nowak R, Wright S. Gradient projection for sparse reconstruction:application to compressed sensing and other inverse problems. IEEE Journal onSelected Topics in Signal Processing,2007,1(4):586-598.
    [18]Berg E, Friedlander MP. Probing the Pareto frontier for basis pursuit solutions.SIAM Journal on Scientific Computing.2008,31(2):890-912.
    [19]Chartrand R. Exact reconstruction of sparse signals via nonconvex minimization.IEEE Signal Processing Letters.2007,14(10):707-710.
    [20]Xu ZB, Zhang H, Wang Y, et al. L1/2regularization. Science China InformationSciences.2010,53(6):1159-1169.
    [21]Xu ZB, Chang X, Xu F, et al. L1/2regularization: a thresholding representationtheory and a fast solver. IEEE Transactions on Neural Networks and LearningSystems.2010,23(7):1013-1027.
    [22]Gilbert AC, Iwen MA, Strauss MJ. Group testing and sparse signal recovery. InProceedings of Asilomar Conference on Signals, Systems, and Computers,2008:1059-1063.
    [23]Wipf DP, Rao BD. Sparse bayesian learning for basis selection. IEEE Transactionson Signal Processing,2004,52(8):2153-2164.
    [24]Wu J, Liu F, Jiao L, et al. Compressive sensing SAR image reconstruction basedon bayesian framework and evolutionary computation. IEEE Transactions on ImageProcessing,2011,20(7):1904-1911.
    [25]Wu J, Liu F, Jiao L, et al. Multivariate compressive sensing for imagereconstruction in the wavelet domain: using scale mixture models. IEEETransactions on Image Processing,2011,20(12):3483-3494.
    [26]Engan K, Aase SO, Hakon-Husoy JH. Method of optimal directions for framedesign. In Proceedings of IEEE Conference on Acoustics, Speech, SignalProcessing,1999:2443-2446.
    [27]Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designingover-complete dictionaries for sparse representation. IEEE Transactions on SignalProcessing,2006,54(11):4311-4322.
    [28]Skretting K, Engan K. Recursive least squares dictionary learning algorithm. IEEETransaction on Signal Processing,2010,58(4):2121-2130.
    [29]Mairal J, Bach F, Ponce J, et al. Online learning for matrix factorization and sparsecoding. Journal of Machine Learning Research,2010,11(1):19-60.
    [30]Wright J, Yang A, Ganesh A, et al. Robust face recognition via sparserepresentation.IEEE Transactions on Pattern Analysis and Machine Intelligence,2009,31(2):201-227.
    [31]Yang M, Zhang L. Gabor feature based sparse representation for face recognitionwith Gabor occlusion dictionary. Lecture Notes in Computer Science,2010,6316:448-461.
    [32]Zhang HC, Yang JC, Zhang YN, et al. Close the loop: joint blind image restorationand recognition with sparse representation prior. In Proceedings of IEEEConference on Computer Vision,2011:770-777.
    [33]Yang M, Zhang L, Yang J, et al. Robust sparse coding for face recognition. InProceedings of IEEE Conference on Computer Vision and Image Recognition,2011:625-632.
    [34]Zhang HC, Nasrabadi NM, Zhang YN, et al. Joint dynamic sparse representationfor multi-view face recognition. Pattern Recognition,2012,45(4):1290-1298.
    [35]Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning fromunlabeled data. In Proceedings of International Conference on Machine Learning,2007:759-766.
    [36]Yang JC, Yu K, Gong YH, et al. Linear spatial pyramid matching using sparsecoding for image classification. In Proceedings of IEEE Conference on ComputerVision and Pattern Recognition,2009:1794–1801.
    [37]Gao SH, Tsang IW, Chia LT, et al. Local features are not lonely-Laplacian sparsecoding for image classification. In Proceedings of IEEE Conference on ComputerVision and Pattern Recognition,2010:3555–3561.
    [38]Zheng M, Bu JJ, Chen C, et al. Graph regularized sparse coding for imagerepresentation. IEEE Transactions on Image Processing,2011,20(5):1327-1336.
    [39]Qiao LS, Chen SS, Tan XY. Sparsity preserving projection with applications toface recognition. Pattern Recognition,2010,43(1):331-341.
    [40]Qiao LS, Chen SC, Tan XY. Sparsity preserving discriminant analysis for singlerraining image face recognition. Pattern Recognition Letter,2010,31(5):422-429.
    [41]Cheng B, Yang JC, Yan SC, et al. Learning with L1-graph for image analysis.IEEE Transactions on image processing,2010,19(4):858-866.
    [42]Yan SC, Wang H. Semi-supervised learning by sparse representation. InProceedings of SIAM Conference on Data Mining,2009:792-801.
    [43]Cheng H, Liu ZC, Yang J. Sparsity induced similarity measure for labelpropagation. In Proceedings of IEEE Conference on Computer Vision,2009:317-324.
    [44]Fan MY, Gu NN, Qiao H, et al. Sparse regularization for semi-supervisedclassification. Pattern Recognition,2011,44(8):1777-1784.
    [45]Donoho D, Compressed sensing. IEEE Transactions on Information Theory,2006,52(4):1289-1306.
    [46]Candes E, Wakin M. An introduction to compressive sampling. IEEE SignalProcessing Magazine,2008,25(2):21-30.
    [47]Candes E, Romberg J, Tao T. Robust uncertainty principles: exact signalreconstruction from highly incomplete frequency information. IEEE Transactionson Information Theory,2006,52(2):489-509.
    [48]Candes E, Tao T. Near-optimal signal recovery from random projections: Universalencoding strategies? IEEE Transactions on Information Theory,2006,52(12):5406-5425.
    [49]焦李成,杨淑媛,刘芳,等.压缩感知回顾与展望.电子学报,2010,39(7):1651-1662.
    [50]石光明,刘丹华,高大化,等.压缩感知理论及其研究进展.电子学报,2009,37(5):1070-1081.
    [51]Elad E, Aharon M. Image denoising via sparse and redundant representations overlearned dictionaries. IEEE Transactions on Image Processing,2006,15(12):3736-3745.
    [52]Mairal J, Elad M, Sapiro G. Sparse representation for color image restoration,IEEE Transactions on Image Processing,2008,17(1):53-69.
    [53]Dong WS, Zhang L, Shi GM, et al. Image deblurring and super-resolution byadaptive sparse domain selection and adaptive regularization. IEEE Transactions onImage Processing,2011,20(7):1838-1857.
    [54]Gao XB, Zhang KB, Tao DC, et al. Image super-resolution with sparse neighborembedding. IEEE Transactions on Image Processing,2012,21(7):3194-3205.
    [55]Yang SY, Wang M, Chen YG, et al. Single-image super-resolution reconstructionvia learned geometric dictionaries and clustered sparse coding. IEEE Transactionson Image Processing,2012,21(9):4016-4028.
    [56]Hansen LK, Salamon P. Neural network ensembles. IEEE Transaction on PatternAnalysis and Machine Intelligence,1990,12(10):993-1001.
    [57]Dietterich T G. Machine learning research: four current directions. AI Magazine,1997,18(4):97-136.
    [58]Krogh, A., Vedelsby, J. Neural network ensembles, cross validation, and activelearning. In Proceedings of International Conference on Neural InformationProcessing Systems,1995:231–238.
    [59]Breiman L. Bagging predictors. Machine Learning.1996,24(2)123–140.
    [60]Breiman L. Random forests. Machine Learning,2001,45(1)5-32.
    [61]Schapire R E. The strength of weak learnability. Machine Learning,1990,5(2):197-227.
    [62]Kearns M, Valiant LG. Learning boolean formulae or factoring. AikenComputation Laboratory, Harvard University, Cambridge, Technical Report,1988.
    [63]Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning andan application to boosting. Journal of Computer and System Sciences,1997,55(1):119-139
    [64]Ho TH. The random subspace method for constructing decision forests. IEEETransactions on Pattern Analysis and Machine Intelligence,1998,20(8):832-844.
    [65]Kuncheva LI, Jain LC. Designing classifier fusion systems by genetic algorithms.IEEE Transactions on Evolutionary Computation,2000,4(4):327-336.
    [66]张向荣.基于选择性特征融合与集成学习SAR图像分类与分割.博士论文,西安电子科技大学,2006.
    [67]Zhang XZ, Brodley CE. Random projection for high dimensional data clustering: Aclustering ensemble approach. In Proceedings of International Conference onMachine Learning,2003:186-191.
    [68]Tsoumakas G, Katakis I, Vlahavas I. Effective voting of heterogeneous classifiers.Lecture Notes in Computer Science,2004,3201:465-476.
    [69]Zhou Z H, Yu Y. Ensembling local learners through multimodal perturbation. IEEETransactions on System, Man, and Cybernetics-Part B: Cybernetics,2005,35(4):725-735.
    [70]Zhang ML, Zhou ZH. Exploiting unlabeled data to enhance ensemble diversity.Data Mining and Knowledge Discovery,2013,26(1):98-129.
    [71]Polikar R. Ensemble based systems in decision making. IEEE Circuits and SystemsMagazine,2006,6(3)21-45.
    [72]Dietterich TG. Ensemble methods in machine learning. Lecture Notes in ComputerScience,2000,1857:1-15.
    [73]Hornik K, Stinchcombe M, White H. Universal approximation of an unknownmapping and its derivatives using multilayer feedforward networks. NeuralNetworks,1990,3(5):551-560.
    [74]Zhou ZH, Wu J, Tang W. Ensembling neural networks: many could be better thanall. Artificial Intelligence,2002,137(1-2):239-263.
    [75]Bauer E., Kohavi R. An empirical comparison of voting classification algorithms:bagging, goosting, and variants. Machine Learning,1999,36(1-2):105-139.
    [76]Hansen LK, Liisberg L, Salamon P. Ensemble methods for handwritten digitrecognition. In Proceedings of IEEE Workshop on Neural Networks for SignalProcessing,1992:333-342.
    [77]Schwenk H, Bengio Y. Boosting neural networks. Neural Computation,2000,12(8):1869-1887.
    [78]Mao J. A case study on bagging, boosting and basic ensembles of neural networksfor OCR. In Proceedings of IEEE Conference on Neural Networks,1998:1828-1833.
    [79]Gutta S, Wechsler H. Face recognition using hybrid classifier systems. InProceedings of IEEE Conference on Neural Networks,1996:1017-1022.
    [80]Gutta S, Huang JR, Jonathon P, et al. Mixture of experts for classification of gender,ethnic origin, and pose of human faces. IEEE Transactions on Neural Networks,2000,11(4):948-960.
    [81]Huang FJ, Zhou ZH, Zhang HJ, et al. Pose invariant face recognition. InProceedings of IEEE Conference on Automatic Face and Gesture Recognition,2000:245-250.
    [82]Geng X, Zhou ZH. Image region selection and ensemble for face recognition.Journal of Computer Science and Technology,2006,21(1):116-125.
    [83]Gao XB, Zhong JJ, Li J, et al. Face sketch synthesis algorithm based on E-HMMand selective ensemble. IEEE Transactions on Circuits and Systems for VideoTechnology,2008,18(4):487-496.
    [84]Wang XG, Tang XO. Random sampling for subspace face recognition.International Journal of Computer Vision,2006,70(1):91-104.
    [85]Zhang X, Jiao LC, Liu F, et al. Spectral clustering ensemble applied to texturefeatures for SAR image segmentation. IEEE Transactions on Geoscience andRemote Sensing,2008,46(7):2126-2136.
    [86]贾建华,焦李成.图像分割的谱聚类集成算法.西安交通大学学报.2010,44(6):93-98.
    [87]贾建华.谱聚类集成算法研究及其在图像分割中的应用.博士论文,西安电子科技大学,2006.
    [88]Ham J, Chen Y, Crawford M, et al. Investigation of the random forest frameworkfor classification of hyperspectral data. IEEE Transactions on Geoscience andRemote Sensing,2005,43(3):492-501.
    [89]Chan JW, Paelinckx D. Evaluation of random forest and adaboost tree-basedensemble classification and spectral band selection for ecotope mapping usingairborne hyperspectral imagery. Remote Sensing of Environment,2008,112(6):2999-3011.
    [90]www.netflixprize.com
    [91]Schapire RE, Singer Y. BoosTexter: A boosting-based system for textcategorization. Machine Learning,2000,39(2-3):135-168.
    [92]Sharkey AJC, Sharkey NE, Cross SS. Adapting an ensemble approach for thediagnosis of breast cancer. In Proceedings of International Conference on ArtificialNeural Networks,1998:281-286.
    [93]Kuncheva LI, Rodríguez JJ, Plumpton CO, et al. Random subspace ensembles forfMRI classification. IEEE Transactions on Medical Imaging,2010,29(2):531-542.
    [94]Tao D, Tang X, Li X, et al. Asymmetric bagging and random subspace for supportvector machines-based relevance feedback in image retrieval. IEEE Transactions onPattern Analysis and Machine Intelligence,2006,28(7):1088-1099.
    [95]Xu SX, Xue X, Zhou ZH. Ensemble multi-instance multi-label learning approachfor video annotation task. In Proceedings of the ACM Conference on Multimedia,2011:1153-1156.
    [96]Xu, XS, Jiang Y, Liang P, et al. Ensemble approach based on conditional randomfield for multi-label image and video annotation. In Proceedings of ACMConference on Multimedia,2011:1377-1380.
    [1]童庆禧,张兵,郑兰芬.高光谱遥感:原理、技术与应用.北京:高等教育出版社,2006.
    [2] Shaw G, Manolakis D. Signal processing for hyperspectral image exploitation. EEESignal Processing Magazine,2002,19(l):12-16.
    [3] Goetz AFH, Vane G, Jerry E, et al. Image spectrometry for earth remote sensing.Science,1985,228(4704):1147-1153.
    [4]张兵,高连如.高光谱图像分类与目标探测.北京:科学出版社,2011.
    [5] Plaza A, Benediktsson JA, Boardman J, et al. Recent advances in techniques forhyperspectral image processing. Remote Sensing of Environment,2009,113(9):110-122.
    [6]张良培,张立福.高光谱遥感.武汉:武汉大学出版社,2005.
    [7]邸韡,潘泉,赵永强,等.高光谱图像波段子集模糊积分融合异常检测.电子与信息学报,2008,30(2):267-271.
    [8]宋娟,吴成柯,张静,等.基于分类和陪集码的高光谱图像无损压缩.电子与信息学报,2011,33(1):231-234.
    [9] Hughes G. On the mean accuracy of statistical pattern recognizers. IEEETransactions on Information Theory,1968,14(1):55-63.
    [10]Lee C, Landgrebe D. Decision boundary feature extraction for neural networks.IEEE Transactions on Neural Networks,1997,8(1):75-83.
    [11]Kuo B, Landgrebe D. A covariance estimator for small sample size classificationproblems and its application to feature extraction. IEEE Transactions onGeoscience and Remote Sensing,2002,40(4):814–819.
    [12]Jia X, Richards JA. Segmented principal components transformation for efficienthyperspectral remote-sensing image display and classification. IEEE Transactionson Geoscience and Remote Sensing,1999,37(1):538–542.
    [13]Kumar S, Ghosh J, Crawford MM. Best basis feature extraction algorithms forclassification of hyperspectral data. IEEE Transactions on Geoscience and RemoteSensing,2001,29(7):1368-13791.
    [14]Han T, Goodenough DG. Nonlinear feature extraction of hyperspectral data basedon locally linear embedding. In Proceedings of IEEE Conference on Geoscienceand Remote Sensing,2005:1237–1240.
    [15] Benediktsson JA,Palmason JA, Sveinsson JR. Classification of hyperspectral datafrom urban areas based on extended morphological profiles. IEEE Transactions onGeoscience and Remote Sensing,2005,43(3):480-491.
    [16]焦李成,杜海峰,刘芳,等.免疫优化计算学习与识别.北京:科学出版社,2006.
    [17]Zhang L, Zhong Y, Huang B, et al. Dimensionality reduction based on clonalselection for hyperspectral imagery. IEEE Transactions on Geoscience and RemoteSensing,2007,45(12):4172-4186.
    [18]Kumar S,Ghosh J,Crawford M.Hierarchical fusion of multiple classifiers forhyperspectral data analysis. International Journal of Pattern Analysis andApplications,2002,5(2):210-220.
    [19]Ham J, Chen Y, Crawford M, et al. Investigation of the random forest frameworkfor classification of hyperspectral data. IEEE Transactions on Geoscience andRemote Sensing,2005,43(3):492-501.
    [20]Breiman L. Random forests. Machine Learning,2001,45(1):5-32.
    [21]Crawford M, Kim W. Manifold learning for multiclassifier systems viaensembles.In Proceedings of International Workshop on Multiple ClassifierSystems,2009:519-528.
    [22]Chan JW, Paelinckx D. Evaluation of random forest and adaboost tree-basedensemble classification and spectral band selection for ecotope mapping usingairborne hyperspectral imagery. Remote Sensing of Environment,2008,112(6):2999-3011.
    [23]焦李成,杨淑媛,刘芳,等.压缩感知回顾与展望.电子学报,2010,39(7):1651-1662.
    [24]石光明,刘丹华,高大化,等.压缩感知理论及其研究进展.电子学报,2009,37(5):1070-1081.
    [25]余慧敏,方广有.压缩感知理论在探地雷达三维成像中的应用.电子与信息学报,2010,32(1):12-16.
    [26]屈乐乐,方广有,杨天虹.压缩感知理论在频率步进探地雷达偏移成像中的应用.电子与信息学报,2011,33(1):21-26.
    [27]Elad E, Aharon M. Image denoising via sparse and redundant representations overlearned dictionaries. IEEE Transactions on Image Processing,2006,15(12):3736-3745.
    [28]孙玉宝,韦志辉,吴敏,等.稀疏性正则化的图像泊松去噪算法.电子学报,2011,39(2):285-290.
    [29]Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning fromunlabeled data. In Proceedings o International Conference on Machine Learning,2007:759-766.
    [30]Qiao LS, Chen SC, Tan XY. Sparsity preserving projection with applications toface recognition. Pattern Recognition,2010,43(1):331-341.
    [31]Han YH, Wu F, Zhuang YT, et al. Multi-label transfer learning with sparserepresentation. IEEE Transactions on Circuits and Systems for Video Technology,2010,20(8):1110-1121.
    [32]Iordache MD, Dias JMB, Plaza A. Sparse unmixing of hyperspectral data. IEEETransactions on Geoscience and Remote Sensing,2011,49(6):2014-2039.
    [33]Mairal J, Bach F, Ponce J, et al. Online learning for matrix factorization and sparsecoding. Journal of Machine Learning Research,2010,11(1):19-60.
    [34]Aharon M, Elad M, Bruckstein A. K-SVD: An algorithm for designingovercomplete dictionaries for sparse representation. IEEE Transactions on SignalProcessing,2006,54,(11):4311-4322.
    [35]Breiman L. Bagging predictors. Machine Learning,1996,24(2):123-140.
    [36]Ho TK. The random subspace method for constructing decision forests. IEEETransactions on Pattern Analysis and Machine Intelligence,1998,20(8):832-844.
    [1] Schapire RE, Singer Y. BoosTexter: A boosting-based system for textcategorization. Machine Learning,2000,39(2-3):135-168.
    [2] Elisseeff A, Weston J. A kernel method for multi-labelled classification. InProceedings of International Conference on Neural Information ProcessingSystems,2002:681-687.
    [3] Boutell MR, Luo J, Shen X, et al. Learning multi-label scene classification. PatternRecognition,2004,37(9):1757-1771.
    [4] McCallum A. Multi-label text classification with a mixture model trained by EM.In Proceedings of National Conference on Artificial Intelligence,1999:1-10.
    [5] Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete datavia the EM algorithm. Journal of the Royal Statistics Society,1977,39(1):1-38.
    [6] Ueda N, Saito K. Parametric mixture models for multi-label text. In Proceedings ofInternational Conference on Neural Information Processing Systems,2003:721-728.
    [7] Gao S, Wu W, Lee CH, Chua TS. A MFoM learning approach to robust multiclassmultilabel text categorization. In Proceedings of International Conference onMachine Learning,2004:329-336.
    [8] Gao S, Wu W, Lee CH, Chua TS. A maximal figure-of-merit learning approach totext categorization. In Proceedings of ACM Conference on Research andDevelopment in Information Retrieval,2003:174-181.
    [9] Comité FD, Gilleron R, Tommasi M. Learning multi-label alternating decision treefrom texts and data. In Proceedings of International Conference on MachineLearning and Data Mining in Pattern Recognition,2003:35-49.
    [10] Clare A, King RD. Knowledge discovery in multi-label phenotype data. LectureNotes in Computer Science,2001,2168:42-53.
    [11] Zhang ML, Zhou ZH. Multilabel neural networks with applications to functionalgenomics and text categorization. IEEE Transactions on Knowledge and DataEngineering,2006,18(10):1338-1351.
    [12] Barutcuoglu Z, Schapire RE, Troyanskaya OG. Hierarchical multi-label predictionof gene function. Bioinformatics,2006,22(7):830-836.
    [13] Zhang ML, Zhou ZH. ML-KNN:a lazy learning approach to multi-label learning.Pattern Recognition,2007,40(7):2038-2048.
    [14] Han YH, Wu F, Zhuang YT, et al. Multi-label transfer learning with sparserepresentation. IEEE Transactions on Circuits and Systems for Video Technology,2010,20(8):1110-1121.
    [15] Candès R, Romberg J, Tao T. Robust uncertainty principles: exact signalreconstruction from highly incomplete frequency information. IEEE Transactionson Information Theory,2006,52(2):489-509.
    [16] Candès E, Tao T. Near-optimal signal recovery from random projections: universalencoding strategies? IEEE Transactions on Information Theory,2006,52(12):5406-5425.
    [17] Donoho, D. For most large underdetermined systems of linear equations theminimal1-norm solution is also the sparsest solution. Communications on Pureand Applied Mathematics,2006,59(6):797-829.
    [18] Wright J, Yang A, Ganesh A, et al. Robust face recognition via sparserepresentation. IEEE Transactions on Pattern Analysis and Machine Intelligence,2009,31(2):201-227.
    [19]赵瑞珍,刘晓宇, LI Ching Chung等.基于稀疏表示的小波去噪.中国科学:信息科学,2010,40(1):33-40.
    [20]蔡泽民,赖剑煌.一种基于超完备字典学习的图像去噪方法.电子学报,2009,37(2):347-350.
    [21] Qiao LH, Chen SC, Tan XY. Sparsity preserving projection with applications toface recognition. Pattern Recognition,2010,43(1):331-341.
    [22] Cheng B, Yang JC, Yan SC, et al. Learning with1-graph for image analysis.IEEE Transactions on Image Processing,2010,19(4):858-866.
    [23] Zheng CH, Zhang L, Ng TY, et al. Metasample-based sparse representation fortumor classification. IEEE/ACM Transactions on Computational Biology andBioinformatics,2011,8(5):1273-1282.
    [24] Candès E, Romberg J. l1-MAGIC: Recovery of sparse signals via convexprogramming. Technical Report, California Institute of Technology,2005.
    [25] Kim SJ, Koh K, Lustig M, et al. An interior-point method for large-scale1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing,2007,1(4):606-617.
    [26] Figueiredo M, Nowak R, Wright S. Gradient projection for sparse reconstruction:application to compressed sensing and other inverse problems. IEEE Journal onSelected Topics in Signal Processing,2007,1(4):586-598.
    [27]李宏,谢政,向遥等.一种采用LLE降维和贝叶斯分类的多类标学习算法.系统工程与电子,2009,31(6):1467-1472.
    [28] Yang Y, Pedersen JO. A comparative study on feature selection in textcategorization. In Proceedings of International Conference on Machine Learning,1997:412–420.
    [1] Schapire RE, Singer Y. Boostexter: A boosting-based system for textcategorization. Machine Learning,2000,39(2-3):135-168.
    [2] McCallum A. Multi-label text classification with a mixture model trained by EM.In Proceedings of AAAI Workshop on Text Learning,1999.
    [3] Elisseeff A, Weston J. A kernel method for multi-labelled classification. InProceedings of International Conference on Neural Information ProcessingSystems,2002:681-687.
    [4] Zhang ML, Zhou ZH. ML-KNN:A lazy learning approach to multi-label learning.Pattern Recognition,2007,40(7):2038-2048.
    [5] Qi GJ, Hua XS, Rui Y, et al. Correlative multi-label video annotation. InProceedings of ACM Conference on Multimedia,2007:17-26.
    [6] Clare A, King RD. Knowledge discovery in multi-label phenotype data. LectureNotes in Computer Science,2001,2168:42–53.
    [7] Comité FD, Gilleron R, Tommasi M. Learning multi-label alternating decisiontree from texts and data. Lecture Notes in Computer Science,2003,2734:35-49.
    [8] Boutell MR, Luo J, Shen X, et al. Learning multi-label scene classification. PatternRecognition,2004,37(9):1757-1771.
    [9] Zhang ML, Zhou ZH. Multilabel neural networks with applications to functionalgenomics and text categorization. IEEE Transactions on Knowledge Dataengineering,2006,18(10):1338-1351.
    [10] Sanden C, Zhang JZ. Enhancing multi-label music genre classification throughensemble techniques. In Proceedings of ACM Conference on Research andDevelopment in Information Retrieval,2011:705–714.
    [11] Xu XS, Jiang Y, Liang P, et al. Ensemble approach based on conditional randomfield for multi-label image and video annotation. In Proceedings of ACMConference on Multimedia,2011:1377-1380.
    [12] Hansen LK, Salamon P. Neural network ensembles. IEEE Transactions on PatternAnalysis and Machine Intelligence,1990,12(10):993-1001.
    [13] Woods K, Kegelmeyer WP, Bowyer K. Combination of multiple classifiers usinglocal accuracy estimates. IEEE Transactions on Pattern Analysis MachineIntelligence,1997,19(4):405-410.
    [14] Zhou ZH, Wu JX, Tang W. Ensembling local learners through multimodalperturbation. IEEE Transactions on Systems, Man, and Cybernetics-Part B:Cybernetics,2005,35(4):725-735.
    [15] Breiman L. Bagging predictors. Machine Learning.1996,24(2):123-140.
    [16] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning andan application to boosting. Journal of Computer and System Sciences,1997,55(1):119-139.
    [17] Ho TH. The random subspace method for constructing decision forests. IEEETransactions on Pattern Analysis and Machine Intelligence,1998,20(8):832-844.
    [18] Dietterich TG. Ensemble methods in machine learning. Lecture Notes in ComputerScience,2000,1857:1-15.
    [19] Breiman L. Random forests. Machine Learning.2001,45(1):5-32.
    [20] Wang XG, Tang XO. Random sampling for subspace face recognition.International Journal of Computer Vision,2006,70(1):91-104.
    [21] Tao D, Tang XO, Li XL, et al. Asymmetric bagging and random subspace forsupport vector machines-based relevance feedback in image retrieval. IEEETransactions on Pattern Analysis and Machine Intelligence,2006,28(7):1088-1099.
    [22] Kuncheva LI, Rodríguez JJ, Plumpton CO, et al. Random subspace ensembles forfMRI classification. IEEE Transactions on Medical Imaging,2010,29(2):531-542.
    [23] Zhang Y, Zhou ZH. Multi-label dimensionality reduction via dependencemaximization. ACM Transactions on Knowledge Discovery from Data,2010,4(3):1-21.
    [1] Maron O, Ratan AL, Multiple-instance learning for natural scene classification. InProceedings of International Conference on Machine Learning,1998:341-349.
    [2] Dietterich TG, Lathrop RH, Lozano-Perez T. Solving the multiple instanceproblem with axis-parallel rectangles. Artificial Intelligence1997,89(1-2):31-71.
    [3] Scott S, Zhang J, Brown J. On generalized multiple instance learning. TechnicalReport, Department of Computer Science and Engineering, University of Nebraska,Lincoln, NE,2003.
    [4] Weidmann N, Frank E, Pfahringer B. A two-level learning method for generalizedmulti-instance problem. Lecture Notes in Artificial Intelligence,2003,2837:468-579.
    [5] Chen Y, Wang JZ. Image categorization by learning and reasoning with regions.Journal of Machine Learning Research,2004,5(8):913–939.
    [6] Chen Y, Bi J, Wang JZ, MILES: multiple-instance learning via embedded instanceselection. IEEE Transactions on Pattern Analysis and Machine Intelligence,2006,28(12):1931-1947.
    [7] Zhou ZH, Zhang ML. Multi-instance multi-label learning with application to sceneclassification. In Proceedings of International Conference on Neural InformationProcessing Systems,2006:1609-1616.
    [8] Yang C, Lozano-P rez T. Image database retrieval with multiple-instance learningtechniques. In Proceedings of IEEE Conference on Data Engineering,2000:233-243.
    [9] Zhang Q, Yu W, Goldman SA, et al. Content-based image retrieval usingmultiple-instance learning. In Proceedings of International Conference on MachineLearning,2002:682-689.
    [10] Li WJ, Yeung DY. Localized content-based image retrieval through evidenceregion identification. In Proceedings of IEEE Conference on Computer Vision andPattern Recognition,2009:1666-1673.
    [11] Li DX, Peng JY, Li Z, et al. LSA based multi-instance learning algorithm forimage retrieval. Signal Processing,2011,91(8):1993-2000.
    [12] Andrews S, Tsochantaridis I, Hofmann T. Support vector machines for multipleinstance learning. In Proceedings of International Conference on NeuralInformation Processing Systems,2003:561-568.
    [13] Viola P, Platt J, Zhang C, Multiple instance boosting for object detection. InProceedings of International Conference on Neural Information ProcessingSystems,2006:1419-1426.
    [14] Maron O, Lozano-P rez T. A framework for multiple-instance learning. InProceedings of International Conference on Neural Information ProcessingSystems,1998:570-576.
    [15] Zhang Q, Goldman SA. EM-DD: an improved multiple-instance learning technique.In Proceedings of International Conference on Neural Information ProcessingSystems,2002:1073-1080.
    [16] Dempster A, Laird N, Rubin D. Maximum likelihood from incomplete data via theEM algorithm. Journal of the Royal Statistical Society B,1997,39(1):1-38.
    [17] Gehler P, Chapelle O. Deterministic annealing for multiple-instance learning. InProceedings of International Conference on Artificial Intelligence and Statistics,2007:123-130.
    [18] G rtner T, Flach A, Kowalczyk A, et al. Multi-instance kernels. In Proceedings ofInternational Conference on Machine Learning,2002:179-186.
    [19] Kwok J, Cheung PM. Marginalized multi-instance kernels. In Proceedings ofInternational Joint Conference on Artificial Intelligence,2007:901–906.
    [20] Yang C, Dong M, Hua J. Region-based image annotation using asymmetricalsupport vector machine-based multiple instance learning. In Proceedings of IEEEConference on Computer Vision and Pattern Recognition,2006:2057-2063.
    [21] Zhou ZH, Xu JM. On the relation between multi-instance learning andsemi-supervised learning. In Proceedings of International Conference on MachineLearning,2007:1167-1174.
    [22] Zhou ZH, Sun YY, Li YF. Multi-instance learning by treating instances as non-i.i.d.samples. In Proceedings of International Conference on Machine Learning,2009:1249-1256.
    [23] Zhou ZH, Zhang ML. Solving multi-instance problems with classifier ensemblebased on constructive clustering. Knowledge and Information System,2007,11(2):155-170.
    [24] Zhou ZH, Zhang ML, Huang SJ, et al. Multi-instance multi-label learning.Artificial Intelligence,2011:176(1):2291-2320.
    [25] Zhou ZH. Multi-instance learning: a survey. Technical report, AI Lab, Departmentof Computer Science&Technology, Nanjing University, Nanjing, China,2004.
    [26] James F, Frank E. A review of multi-instance learning assumptions. TheKnowledge Engineering Review,2010,25(1):1-25.
    [27] Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning fromunlabeled data. In Proceedings of International Conference on Machine Learning,2007:759-766.
    [28] Yang J, Yu K, Gong Y, et al. Linear spatial pyramid matching using sparse codingfor image classification. In Proceedings of IEEE Conference on Computer Visionand Pattern Recognition,2009:1794-1801.
    [29] Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties bylearning a sparse code for natural images. Nature,1996,381(6583):607-609.
    [30] Olshausen BA, Field DJ. Field, Sparse coding with an overcomplete basis set: Astrategy employed by V1? Vision Research,1996,37(23):3311-3325.
    [31] Lee H, Battle A, Raina R, et al. Efficient sparse coding algorithms. In Proceedingsof International Conference on Neural Information Processing Systems,2006:801-808.
    [32] Mairal J, Bach F, Ponce J, et al. Supervised dictionary learning. In Proceedings ofInternational Conference on Neural Information Processing Systems,2009:1033-1040.
    [33] Boureau YL, Bach F, LeCun Y, et al. Learning mid-level features for recognition.In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,2009:2559-2566.
    [34] Elad M, Aharon M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 2006, 15(12): 3736-3745.
    [35] Dong WS, Li X, Zhang L, et al. Sparsity-based image denoising via dictionary learning and structural clustering. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2011: 457-464.
    [36] He YM, Gan T, Chen WF. Multi-stage image denoising based on correlation coefficient matching and sparse dictionary pruning. Signal Processing, 2012, 92(1): 139-149.
    [37] Mairal J, Elad M, Sapiro G. Sparse representation for color image restoration. IEEE Transactions on Image Processing, 2008, 17(1): 53-69.
    [38] Mairal J, Bach F, Ponce J, et al. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 2010, 11(1): 19-60.
    [39] Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322.
    [40] Serre T, Wolf L, Bileschi S, et al. Robust object recognition with cortex-like mechanisms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(3): 411-426.
    [41] Dietterich T. Ensemble methods in machine learning. Lecture Notes in Computer Science, 2000, 1857: 1-15.
    [42] Polikar R. Ensemble based systems in decision making. IEEE Circuits and Systems Magazine, 2006, 6(3): 21-45.
    [43] Wang JZ, Li J, Wiederhold G. SIMPLIcity: semantics-sensitive integrated matching for picture libraries. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(9): 947-963.
    [44] Gersho A. Asymptotically optimum block quantization. IEEE Transactions on Information Theory, 1979, 25(4): 373-380.
    [45] Zhang R, Zhang Z. A robust color object analysis approach to efficient image retrieval. EURASIP Journal on Applied Signal Processing, 2004(6): 871-885.
    [46] Chang CC, Lin CJ. LIBSVM: a library for support vector machines, http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2001.
    [47] Zhou ZH, Zhang ML. Ensembles of multi-instance learners. Lecture Notes in Artificial Intelligence, 2003, 2837: 492-502.
    [48] Csurka G, Dance CR, Fan L, et al. Visual categorization with bags of keypoints. In Proceedings of Workshop on Statistical Learning in Computer Vision, 2004: 59-74.
    [1] Yu J, Tao DC, Wang M. Adaptive hypergraph learning and its application in image classification. IEEE Transactions on Image Processing, 2012, 21(7): 3262-3272.
    [2] Jia SJ, Kong XW. A new histogram kernel function and its application in image classification. Journal of Electronics & Information Technology, 2011, 33(7): 1738-1742.
    [3] Qi XZ, Wang Q. A multiple kernel learning method for image classification based on sparse coding. Acta Electronica Sinica, 2012, 40(4): 773-779.
    [4] Boutell MR, Luo JB, Shen XP, et al. Learning multi-label scene classification. Pattern Recognition, 2004, 37(9): 1757-1771.
    [5] Ma Z, Yuan Y, Li XL, et al. Multimodal learning for multi-label image classification. In Proceedings of IEEE Conference on Image Processing, Hong Kong, 2011: 1797-1800.
    [6] Maron O, Ratan AL. Multiple-instance learning for natural scene classification. In Proceedings of International Conference on Machine Learning, Madison, 1998: 341-349.
    [7] Chen YX, Wang JZ. Image categorization by learning and reasoning with regions. Journal of Machine Learning Research, 2004, 5: 913-939.
    [8] Song XF, Jiao LC, Yang SY, et al. Sparse coding and classifier ensemble based multi-instance learning for image categorization. Signal Processing, 2013, 93(1): 1-11.
    [9] Zhou ZH, Zhang ML. Multi-instance multi-label learning with application to scene classification. In Proceedings of International Conference on Neural Information Processing Systems, 2006: 1609-1616.
    [10] Zhou ZH, Zhang ML, Huang SJ, et al. Multi-instance multi-label learning. Artificial Intelligence, 2011, 176(1): 2291-2320.
    [11] Xu X, Frank E. Logistic regression and boosting for labeled bags of instances. In Proceedings of Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2004: 272-281.
    [12] Li YX, Ji SW, Kumar S, et al. Drosophila gene expression pattern annotation through multi-instance multi-label learning. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2012, 9(1): 98-112.
    [13] Zhou ZH, Liu XY. Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Transactions on Knowledge and Data Engineering, 2006, 18(1): 63-77.
    [14] Evgeniou T, Pontil M. Regularized multi-task learning. In Proceedings of ACM Conference on Knowledge Discovery and Data Mining, 2004: 109-117.
    [15] Zhang ML, Zhou ZH. M3MIML: a maximum margin method for multi-instance multi-label learning. In Proceedings of IEEE Conference on Data Mining, 2008: 688-697.
    [16] Zhang ML, Wang ZJ. MIMLRBF: RBF neural networks for multi-instance multi-label learning. Neurocomputing, 2009, 72(16-18): 3951-3956.
    [17] Zhang ML. A k-nearest neighbor based multi-instance multi-label learning algorithm. In Proceedings of International Conference on Tools with Artificial Intelligence, 2010: 207-212.
    [18] He JJ, Gu H, Wang ZL. Bayesian multi-instance multi-label learning using Gaussian process prior. Machine Learning, 2012, 88(1-2): 273-295.
    [19] Briggs F, Fern XZ, Raich R. Rank-loss support instance machines for MIML instance annotation. In Proceedings of ACM Conference on Knowledge Discovery and Data Mining, 2012: 534-542.
    [20] Feng SH, Xu D. Transductive multi-instance multi-label learning algorithm with application to automatic image annotation. Expert Systems with Applications, 2010, 37(1): 661-670.
    [21] Jin R, Wang SJ, Zhou ZH. Learning a distance metric from multi-instance multi-label data. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2009: 896-902.
    [22] Wang W, Zhou ZH. Learnability of multi-instance multi-label learning. Chinese Science Bulletin, 2012, 57(19): 2488-2491.
    [23] Zha ZJ, Hua XS, Mei T, et al. Joint multi-label multi-instance learning for image classification. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008: 1-8.
    [24] Xu XS, Xue XY, Zhou ZH. Ensemble multi-instance multi-label learning approach for video annotation task. In Proceedings of ACM Conference on Multimedia, 2011: 1153-1156.
    [25] He JJ, Gu H, Wang ZL. Multi-instance multi-label learning based on Gaussian process with application to visual mobile robot navigation. Information Sciences, 2012, 190(1): 162-177.
    [26] Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 1996, 381(6583): 607-609.
    [27] Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning from unlabeled data. In Proceedings of International Conference on Machine Learning, 2007: 759-766.
    [28] Yang J, Yu K, Gong Y, et al. Linear spatial pyramid matching using sparse coding for image classification. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2009: 1794-1801.
    [29] Wu J, Liu F, Jiao LC, et al. Compressive sensing SAR image reconstruction based on Bayesian framework and evolutionary computation. IEEE Transactions on Image Processing, 2011, 20(7): 1904-1911.
    [30] Wu J, Liu F, Jiao LC, et al. Multivariate compressive sensing for image reconstruction in the wavelet domain: using scale mixture models. IEEE Transactions on Image Processing, 2011, 20(12): 3483-3494.
    [31] Wu X, Wang YF, Liu C. Research on a target detection algorithm for random noise radar based on compressive sensing theory. Journal of Electronics & Information Technology, 2012, 34(7): 1609-1615.
    [32] Jiao LC, Yang SY, Liu F, et al. Compressive sensing: review and prospects. Acta Electronica Sinica, 2011, 39(7): 1651-1662.
    [33] Jiang H, Lin YG, Zhang BC, et al. Random noise imaging radar based on compressive sensing. Journal of Electronics & Information Technology, 2011, 33(3): 672-676.
    [34] Lee H, Battle A, Raina R, et al. Efficient sparse coding algorithms. In Proceedings of International Conference on Neural Information Processing Systems, 2006: 801-808.
    [35] Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designing over-complete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322.
    [36] Yang JC, Yu K, Gong YH, et al. Linear spatial pyramid matching using sparse coding for image classification. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2009: 1794-1801.
    [37] Zhang HC, Nasrabadi NM, Zhang YY, et al. Joint dynamic sparse representation for multi-view face recognition. Pattern Recognition, 2012, 45(4): 1290-1298.
    [38] Song XF, Jiao LC. Hyperspectral remote sensing image classification based on sparse representation and spectral information. Journal of Electronics & Information Technology, 2012, 34(2): 268-272.
    [39] Mairal J, Bach F, Ponce J, et al. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 2010, 11(1): 19-60.
    [40] Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322.
    [41] Duygulu P, Barnard K, Freitas N, et al. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In Proceedings of European Conference on Computer Vision, 2002: 97-112.
    [42] Maron O, Ratan AL. Multiple-instance learning for natural scene classification. In Proceedings of International Conference on Machine Learning, 1998: 341-349.
    [43] Shi J, Malik J. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888-905.
    [44] Chang CC, Lin CJ. LIBSVM: a library for support vector machines, http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2001.
