Effective Utilization of Supervised Information in Graph-Based Learning
Abstract
Graph-based learning methods have attracted increasing attention from researchers in recent years. They not only rest on a solid foundation of graph theory, but also highlight a key insight for improving existing learning algorithms: the connections among data points are as important as the data points themselves. Supervised information is both a crucial information source and the ultimate learning target for supervised and semi-supervised learning in machine learning. This thesis partitions the supervised information mapped onto a graph into two categories, vertex constraints and edge constraints, and regards supervised classification and semi-supervised clustering as vertex-constrained and edge-constrained learning problems respectively, focusing on how these two families of algorithms utilize supervised information from the graph perspective.
     Traditional supervised classification algorithms estimate the mapping from data to class labels with attribute-based approaches, exploiting only the information on the vertices. Although kernel methods introduce pairwise relationships between data points, they still operate only on the conditional attributes and do not extend the supervised information from the vertices to the edges. Similarly, most semi-supervised clustering algorithms use pairwise constraints to restrict the search space of feasible solutions or to learn an appropriate metric, so that the clustering result satisfies the given constraints as far as possible, but they rarely consider extending the supervised information from the edges to the vertices. This thesis proposes to propagate the known supervised information across the vertex-edge structure of the graph, thereby improving the effective utilization of supervised information in both supervised classification and semi-supervised clustering.
     First, we propose a unified nonlinear classification framework called the Manifold Mapping Machine, consisting of three stages: supervised manifold mapping, classifier construction, and out-of-sample extension. The framework fuses the class relationships between vertices into the edge weights (vertex-edge), and then separates data of different classes in a new low-dimensional feature space under supervision (edge-vertex), which facilitates the subsequent classifier construction; it is thus a "vertex-edge-vertex" approach. To achieve a similar effect when mapping test data into the target space, we build a "bridge" between the original manifold and the target manifold: by minimizing the discrepancy between the test data mapped onto this intermediate "bridge" from the original manifold and from the target manifold, the optimal mapping of the test data on the target manifold is determined. We also discuss the connections between the Manifold Mapping Machine and several well-known manifold learning algorithms, demonstrating the feasibility and generality of the framework.
     Second, within the Manifold Mapping Machine framework, we propose a Supervised Spectral Space Classifier (S3C). It incorporates the supervised information linearly to map the input data into a low-dimensional supervised spectral space, and then adopts three different classification algorithms for classifier construction. In the out-of-sample extension stage, we prove that the optimal mapping of the test data derived by S3C through the manifold "bridge" has the same form as the Nyström method. Extensive experiments on synthetic and real-world data sets show that S3C significantly outperforms several classic classification algorithms.
     Finally, we propose a semi-supervised clustering algorithm based on local constraint propagation, applicable to multi-class semi-supervised clustering problems with both must-link and cannot-link constraints. The algorithm first determines the influence range of each constrained vertex (edge-vertex), and then propagates the influence of each constrained edge in proportion to the similarity between the vertices connected by unconstrained edges and the constrained vertices (vertex-edge); it is thus an "edge-vertex-edge" approach. We define the propagation range of each vertex, together with its degree of influence, as an intermediate structure between the fine-grained vertex and the coarse-grained cluster, called a "component". By estimating the propagation accuracy of each component, the algorithm can adaptively adjust the propagation strength of each pairwise constraint on different components, so that components with high confidence receive a larger constraint influence while components with low confidence receive a smaller one. Extensive experiments on UCI data sets, text documents, handwritten digits, alphabetic characters, face recognition, and image segmentation demonstrate that the local constraint propagation algorithm is both more accurate and more efficient than other classic semi-supervised clustering algorithms.
Graph-based learning has attracted increasing interest from researchers in recent years. It not only has mathematical support from graph theory, but also points out a key to improving the performance of existing learning algorithms: the connections among data points are as important as the data points themselves. Supervised information is both a critical information resource and the ultimate learning target for supervised and semi-supervised learning in machine learning. In this thesis, the supervised information in graph-based learning is partitioned into two categories, vertex constraints and edge constraints. Supervised classification and semi-supervised clustering are viewed as vertex-constrained and edge-constrained learning problems respectively. We put emphasis on how these two groups of learning algorithms utilize the supervised information.
     Traditional supervised classification algorithms take attribute-based approaches to estimating the underlying functions that map attributes to class labels, which exploits only the information on the vertices. Although kernel methods later introduce the pairwise relationships of the data points, they still compute only with the conditional attributes and do not extend the supervised information from the vertices to the edges. Similarly, most existing semi-supervised clustering algorithms bias the solution space or learn an appropriate metric to make the clustering result as consistent with the provided constraints as possible, but they likewise fail to extend the supervised information from the edges to the vertices for full utilization. This thesis proposes to transfer the supervised information through the vertices and edges of the graph, so that its utilization in these two groups of learning problems can be improved.
     First, we propose a unified framework for nonlinear classifier design called the Manifold Mapping Machine (M3). It is composed of three stages: supervised manifold mapping, classifier construction, and out-of-sample extension. As a "vertex-edge-vertex" approach, M3 integrates the class relationships of the vertices into the weights of their edges (vertex-edge), and then transforms the different classes of data separately into the low-dimensional target manifold (edge-vertex), which simplifies the subsequent classifier construction. In order to achieve a similar effect in the space transformation of the test data, we construct a "bridge" between the original manifold and the target manifold. By minimizing the difference between the test data mapped onto this intermediate "bridge" from the two manifolds, the optimal mapping of the test data can be determined uniquely. To demonstrate the feasibility and the generality of M3, we also discuss its connections with several well-known manifold learning algorithms.
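The "vertex-edge" step described above can be sketched in code. This is an illustrative construction rather than the thesis's exact formulation: `supervised_affinity`, `sigma`, and `gamma` are hypothetical names, and the choice of a Gaussian base affinity with class-dependent rescaling is an assumption.

```python
import numpy as np

def supervised_affinity(X, y, sigma=1.0, gamma=0.5):
    """Fold the class labels of the training vertices into the edge
    weights of a Gaussian affinity graph (the 'vertex-edge' step).
    gamma in (0, 1) down-weights edges between different classes."""
    # plain Gaussian affinity between all pairs of points
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    # same-class edges keep full weight; cross-class edges are scaled by gamma
    same = (y[:, None] == y[None, :]).astype(float)
    W *= gamma + (1.0 - gamma) * same
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W
```

With this weighting, two equidistant pairs of points end up with different edge weights depending on whether their labels agree, which is exactly the separation effect the supervised mapping stage then exploits.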
     Second, we present a nonlinear classifier under the M3 framework, named the Supervised Spectral Space Classifier (S3C). It integrates the supervised information linearly to map the input data into a low-dimensional supervised spectral space. S3C then adopts three different classification algorithms for classifier construction. During the out-of-sample extension stage, we prove that the optimal mapping of the test data derived by S3C through the manifold "bridge" has the same form as that derived from the Nyström approximation method. Extensive experimental results show that S3C is significantly superior to other state-of-the-art nonlinear classifiers on both synthetic and real-world data sets.
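A minimal sketch of the two ingredients named here, under the assumption that the supervision has already been folded into the affinity matrix `W`: a spectral embedding from the normalized affinity, and a Nyström-form out-of-sample mapping. The function names are hypothetical and this is generic spectral machinery, not the S3C derivation itself.

```python
import numpy as np

def spectral_embedding(W, d=2):
    """Embed the graph vertices with the top-d eigenvectors of the
    symmetrically normalized affinity S = D^{-1/2} W D^{-1/2}.
    W is assumed symmetric with non-negative entries."""
    deg = W.sum(axis=1)
    inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    S = inv_sqrt[:, None] * W * inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(S)          # eigenvalues in ascending order
    return S, vecs[:, -d:], vals[-d:]

def nystrom_extend(s_row, vecs, vals):
    """Nystrom-form out-of-sample mapping: project a test point's
    normalized affinities to the training vertices onto the training
    eigenvectors, scaled by the inverse eigenvalues."""
    return s_row @ vecs / vals
```

A basic consistency check of the Nyström form: extending a training vertex with its own row of the normalized affinity recovers that vertex's embedding exactly, because S·v = λ·v for each retained eigenpair.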
     Last but not least, we propose a semi-supervised clustering algorithm named SCRAWL, short for Semi-supervised Clustering via RAndom WaLk, to deal with multi-class semi-supervised clustering problems given both must-link and cannot-link constraints. As an "edge-vertex-edge" approach, SCRAWL first determines the propagation range of each constrained vertex (edge-vertex), and then expands the constraint influence according to the similarities between the vertices connected by unconstrained edges and the constrained vertices (vertex-edge). We define the propagation range of each constrained vertex, together with its degrees of impact, as an intermediate structure between the fine-grained vertex and the coarse-grained cluster, called a "component". By estimating the propagation accuracy of each component, SCRAWL can also adjust the strength of the constraint propagation over the different components, so that components with higher confidence receive greater influence from the propagated constraints, while components with lower confidence receive less. Experiments on UCI data sets, text documents, handwritten digits, alphabetic characters, face recognition, and image segmentation demonstrate the effectiveness and efficiency of SCRAWL.
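The diffusion idea behind this kind of constraint propagation can be sketched as follows. This is not SCRAWL itself (it has no components or adaptive strengths), only a toy illustration of spreading pairwise-constraint influence over a similarity graph via random-walk transitions; `propagate_constraints` and its update scheme are assumptions for the sketch.

```python
import numpy as np

def propagate_constraints(W, must, cannot, steps=10):
    """Encode must-link pairs as +1 and cannot-link pairs as -1,
    then diffuse these values over the graph with the row-stochastic
    transition matrix P = D^{-1} W, re-imposing the known constraints
    after each step so they are never washed out."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)       # random-walk transitions
    Z = np.zeros((n, n))
    for i, j in must:
        Z[i, j] = Z[j, i] = 1.0
    for i, j in cannot:
        Z[i, j] = Z[j, i] = -1.0
    for _ in range(steps):
        Z = 0.5 * (P @ Z + Z @ P.T)            # spread from both endpoints
        for i, j in must:                      # clamp the given constraints
            Z[i, j] = Z[j, i] = 1.0
        for i, j in cannot:
            Z[i, j] = Z[j, i] = -1.0
    return Z
```

On a graph with two well-separated groups, a single must-link inside one group and a single cannot-link across groups are enough for unconstrained within-group pairs to acquire positive values and unconstrained cross-group pairs negative ones, in proportion to their similarity to the constrained vertices.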
