Research on Problems Related to Spectral Methods for Manifold Learning
Abstract
In the current information age, large quantities of data can be obtained easily. In many practical applications the data are high-dimensional, enormous, multifarious, disordered, and continuously growing, and valuable information is submerged in these large-scale datasets; it is therefore necessary to discover the intrinsic laws of the data and to predict future trends. Manifold learning assumes that the observed data lie on, or close to, an intrinsic low-dimensional manifold embedded in a high-dimensional Euclidean space, and its main goal is to discover the intrinsic low-dimensional manifold structure of a high-dimensional observed dataset together with the corresponding embedding map. Manifold learning has now become a hot topic in machine learning, pattern recognition, data mining, and other related research fields.
     By analyzing the intension and extension of manifold learning, this dissertation focuses on solving several important problems of spectral methods for manifold learning and carries out a series of studies at the levels of algorithm design and image-manifold applications. First, typical spectral methods for manifold learning are analyzed and compared in detail. Then five problems are investigated in depth: knowledge-increasable (incremental) manifold learning, constructing a reasonable measure of neighborhood relations, enhancing the separability of the intrinsic low-dimensional space, ensemble-based manifold learning, and combining the advantages of locality-preserving and globality-preserving algorithms. Finally, five manifold learning algorithms based on spectral methods are proposed and compared with related work both theoretically and experimentally; the results show the effectiveness of the proposed algorithms.
     The main contributions of this dissertation are summarized as follows:
     (1) The concept of knowledge-increasable manifold learning is defined, which helps guide the design of manifold learning algorithms that fit the knowledge-increasing mechanism of the human brain. Following this guiding principle, a dynamically knowledge-increasable manifold learning algorithm based on locally linear embedding (DKI-LLE) is proposed. Experimental results show that DKI-LLE handles new datasets better than several incremental LLE algorithms: the low-dimensional structure it discovers is closer to that obtained by the original (batch) LLE, and the low-dimensional structural knowledge contained in newly arriving data subsets is integrated into the previously learned structure, whereas incremental LLE algorithms depend more heavily on the previously computed low-dimensional coordinates when processing new observations.
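     As background, the sketch below shows the standard (batch) LLE step that incremental variants such as DKI-LLE build on: reconstruction weights are solved from each point's k nearest neighbors, and the embedding comes from the bottom eigenvectors of (I - W)^T(I - W). This is a minimal illustration of plain LLE only, not the DKI-LLE procedure itself; the regularization constant and parameter defaults are conventional choices rather than values from the dissertation.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def lle_embedding(X, n_neighbors=10, n_components=2, reg=1e-3):
        """Plain (batch) LLE: the building block that incremental variants extend."""
        n = X.shape[0]
        # k nearest neighbors of every point (excluding the point itself)
        nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
        _, idx = nbrs.kneighbors(X)
        idx = idx[:, 1:]

        # locally linear reconstruction weights for each point
        W = np.zeros((n, n))
        for i in range(n):
            Z = X[idx[i]] - X[i]                           # centered neighbors
            C = Z @ Z.T                                    # local Gram matrix
            C += reg * np.trace(C) * np.eye(n_neighbors)   # regularization for stability
            w = np.linalg.solve(C, np.ones(n_neighbors))
            W[i, idx[i]] = w / w.sum()                     # weights sum to one

        # embedding: bottom eigenvectors of M = (I - W)^T (I - W), skipping the constant one
        M = (np.eye(n) - W).T @ (np.eye(n) - W)
        vals, vecs = np.linalg.eigh(M)
        return vecs[:, 1:n_components + 1]
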
     (2) A generalized Gaussian Laplacian eigenmap algorithm based on geodesic distance (GGLE) is proposed, which incorporates geodesic distance and a generalized Gaussian function into the original Laplacian eigenmap algorithm. GGLE can adjust the similarities between nodes of the neighborhood graph and preserve local neighborhood properties to different degrees by choosing a super-Gaussian, Gaussian, or sub-Gaussian function. Moreover, when the neighborhoods of data points are enlarged to preserve more neighborhood relations, the geodesic distance avoids the unreasonable measurements produced by the Euclidean distance. Experimental results show that the degree to which local neighborhood structure is preserved differs when different generalized Gaussian functions are used to measure the similarities between high-dimensional data points, and the global low-dimensional coordinates obtained by GGLE accordingly exhibit different clustering properties.
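     A plausible reading of this construction is sketched below, assuming a generalized Gaussian kernel of the form exp(-(d/sigma)^p) applied to graph geodesic (shortest-path) distances, with p < 2, p = 2, and p > 2 playing the roles of super-Gaussian, Gaussian, and sub-Gaussian weighting. The kernel form, the median-distance bandwidth, and the use of the generalized eigenproblem L y = lambda D y are assumptions of this sketch; the exact parameterization in the dissertation may differ.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from scipy.sparse.csgraph import shortest_path
    from scipy.linalg import eigh

    def ggle_embedding(X, n_neighbors=10, n_components=2, p=2.0, sigma=None):
        """Sketch of a geodesic, generalized-Gaussian Laplacian eigenmap (GGLE-style) embedding."""
        # k-nearest-neighbor graph weighted by Euclidean edge lengths
        G = kneighbors_graph(X, n_neighbors, mode='distance')
        # geodesic (shortest-path) distances approximate distances along the manifold
        D = shortest_path(G, method='D', directed=False)

        if sigma is None:
            sigma = np.median(D[np.isfinite(D)])
        # generalized Gaussian similarity: p<2 super-Gaussian, p=2 Gaussian, p>2 sub-Gaussian
        W = np.exp(-(D / sigma) ** p)
        W[~np.isfinite(D)] = 0.0          # no similarity across disconnected components
        np.fill_diagonal(W, 0.0)

        # unnormalized graph Laplacian and generalized eigenproblem L y = lambda Deg y
        Deg = np.diag(W.sum(axis=1))
        L = Deg - W
        vals, vecs = eigh(L, Deg)
        # skip the constant eigenvector associated with eigenvalue 0
        return vecs[:, 1:n_components + 1]
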
     (3) An ensemble discriminant algorithm based on GGLE (EGGLE) is proposed. Its main advantages are: the neighborhood parameter k is fixed, the neighborhood graph and the geodesic distance matrix are constructed only once, and only the generalized Gaussian function needs to be chosen repeatedly to construct multiple Laplacian matrices; this yields multiple independent sets of low-dimensional coordinates, on which component classifiers are trained independently and then combined to produce the final recognition result. In terms of time complexity, EGGLE generally outperforms the Ensemble-ISOMAP and En-ULLELDA algorithms. Comparative experiments between LE and EGGLE under a semi-supervised learning framework show that EGGLE is effective. In addition, a supervised ensemble manifold learning algorithm (EGGLE-LDA) is proposed, which combines the linear supervised algorithm LDA with EGGLE to strengthen the discriminative ability of ensemble manifold learning in supervised settings, so that EGGLE-LDA takes into account both the label information and the geometric distribution of the data. Experimental results show the difference in ensemble recognition performance between EGGLE-LDA and En-ULLELDA.
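     The ensemble step described above might look roughly like the following sketch, which parallels the earlier GGLE sketch: the geodesic distance matrix is reused across several generalized-Gaussian shape parameters, each resulting embedding trains its own classifier, and the component predictions are combined by majority vote. The shape parameters p, the 1-NN base classifier, and the voting rule are illustrative assumptions, not the dissertation's exact choices.

    import numpy as np
    from collections import Counter
    from sklearn.neighbors import kneighbors_graph, KNeighborsClassifier
    from scipy.sparse.csgraph import shortest_path
    from scipy.linalg import eigh

    def eggle_predict(X, labeled_idx, labels, n_neighbors=10, n_components=10,
                      p_values=(1.0, 2.0, 3.0)):
        """Sketch of an EGGLE-style ensemble: one geodesic matrix, several generalized-
        Gaussian Laplacians, one classifier per embedding, majority-vote combination."""
        # the neighborhood graph and geodesic distances are built only once
        G = kneighbors_graph(X, n_neighbors, mode='distance')
        D = shortest_path(G, method='D', directed=False)
        sigma = np.median(D[np.isfinite(D)])

        all_votes = []
        for p in p_values:
            W = np.exp(-(D / sigma) ** p)      # a different similarity per shape parameter p
            W[~np.isfinite(D)] = 0.0
            np.fill_diagonal(W, 0.0)
            Deg = np.diag(W.sum(axis=1))
            vals, vecs = eigh(Deg - W, Deg)
            Y = vecs[:, 1:n_components + 1]    # one independent low-dimensional coordinate set

            clf = KNeighborsClassifier(n_neighbors=1).fit(Y[labeled_idx], labels)
            all_votes.append(clf.predict(Y))   # component classifier on this embedding

        # combine the component classifiers by majority vote
        all_votes = np.array(all_votes)
        return np.array([Counter(all_votes[:, i]).most_common(1)[0][0]
                         for i in range(X.shape[0])])
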
     (4) A global Laplacian unfolding algorithm (GLU) is proposed, which combines the locality-preserving Laplacian eigenmap algorithm (LE) with the globality-preserving maximum variance unfolding algorithm (MVU). Its main idea is to pull nearby points as close together as possible while pushing mutually distant points as far apart as possible. The implementation constructs a double objective function for local closeness and global unfolding, reformulates it in terms of the Gram (inner product) matrix of the low-dimensional coordinates, optimizes this objective by semidefinite programming (SDP) to learn the inner product matrix, and finally obtains the intrinsic low-dimensional embedding by eigendecomposition of that matrix. Visualization experiments on the synthetic "Two Moons" dataset, the real USPS handwritten digits dataset, and the sculpture head portrait dataset demonstrate the effectiveness of GLU. In addition, the low-dimensional separability and visualization performance of four manifold learning algorithms (LE, MVU, UDP, and GLU) are compared, and the experimental results show the superiority of GLU.
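     One plausible way to write such a double objective as an SDP over the Gram matrix K = Y Y^T is sketched below: the Laplacian term tr(LK) keeps neighbors close (as in LE), the term -beta*tr(K) rewards global unfolding (as in MVU), and inequality constraints stop neighboring points from drifting apart. This is an assumed formulation consistent with combining LE and MVU, not necessarily GLU's exact objective; beta, the inequality form of the local constraints, and the cvxpy/SCS solver are illustrative.

    import numpy as np
    import cvxpy as cp
    from sklearn.neighbors import kneighbors_graph

    def glu_style_embedding(X, n_neighbors=6, n_components=2, beta=1.0):
        """Sketch of an LE+MVU-style SDP over the Gram matrix of low-dimensional coordinates."""
        n = X.shape[0]
        A = kneighbors_graph(X, n_neighbors, mode='connectivity').toarray()
        A = np.maximum(A, A.T)                 # symmetrize the neighborhood graph
        L = np.diag(A.sum(axis=1)) - A         # unnormalized graph Laplacian

        K = cp.Variable((n, n), PSD=True)      # Gram matrix K = Y Y^T to be learned
        # double objective: keep neighbors close via tr(LK), unfold globally via tr(K)
        objective = cp.Minimize(cp.trace(L @ K) - beta * cp.trace(K))
        constraints = [cp.sum(K) == 0]         # center the embedding
        # neighbors may move closer but not farther apart than in the input space
        for i, j in zip(*np.nonzero(np.triu(A))):
            d2 = np.sum((X[i] - X[j]) ** 2)
            constraints.append(K[i, i] + K[j, j] - 2 * K[i, j] <= d2)

        cp.Problem(objective, constraints).solve(solver=cp.SCS)

        # low-dimensional coordinates from the top eigenvectors of the learned Gram matrix
        vals, vecs = np.linalg.eigh(K.value)
        top = np.argsort(vals)[::-1][:n_components]
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
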
