Research on Key Technologies of Multi-Feature Fusion Video Copy Detection (多特征融合视频复制检测关键技术研究)
Abstract
With the widespread use of digital video capture devices and the rapid development of computer network technology, video data on the Internet is growing explosively. Video copy detection can quickly and efficiently find videos with identical content among huge volumes of video data, and therefore has great application demand and value in digital video copyright protection, video management and indexing, and media tracking. In recent years, content-based video copy detection has become a research hotspot in multimedia information processing.
Existing video copy detection techniques suffer from heavy computation, low recall and precision, poor robustness and a limited range of application, so fast and efficient copy detection methods are urgently needed. Against this background, this dissertation studies the key technologies of video copy detection in depth. The main work and contributions are as follows:
1. A video copy detection method based on spatio-temporal color feature curves (SCFC-VCD) is proposed to address the heavy computation that is common in video copy detection. Each video frame is first partitioned into sub-regions, the mean Y and U color components of each sub-region are extracted, and the values are arranged in frame order to form the video's feature curves; these curves are then matched against those of the candidate video. To remove the effect of a global shift in luminance and chrominance, a similarity matching algorithm based on difference curves is proposed; an exception factor is introduced to suppress abrupt disturbances; and an improved dynamic time warping algorithm handles videos of different time scales. Experiments show that SCFC-VCD has very low computational cost and retrieves faster than typical methods, detects videos with frequent scene changes such as advertisements very well, and resists common distortions. For videos with few scene changes such as TV series, it quickly filters out most irrelevant videos, greatly reducing the workload of the subsequent keyframe-based processing.
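Below is a minimal Python sketch of the feature-curve idea just described: per-frame block means of the Y and U components form the curves, frame-to-frame differences cancel a global luminance/chrominance shift, and a plain dynamic time warping compares curves of different lengths. The 2x2 block grid, the synthetic frames and the unmodified DTW are illustrative assumptions; the exception factor and the thesis' improved DTW are not reproduced here.

    import numpy as np

    def feature_curves(frames_yuv, grid=(2, 2)):
        """Per-frame block means of the Y and U components, stacked over time."""
        gh, gw = grid
        curves = []
        for frame in frames_yuv:                       # frame: (H, W, 3) in YUV
            h, w, _ = frame.shape
            feats = []
            for i in range(gh):
                for j in range(gw):
                    block = frame[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
                    feats.extend([block[..., 0].mean(), block[..., 1].mean()])
            curves.append(feats)
        return np.asarray(curves)                      # shape (T, gh*gw*2)

    def difference_curve(curve):
        """Frame-to-frame differences; a global luminance/chroma shift cancels out."""
        return np.diff(curve, axis=0)

    def dtw_distance(a, b):
        """Plain dynamic time warping between two difference curves."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)                       # length-normalized distance

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        query = rng.random((40, 36, 64, 3)) * 255      # 40 synthetic YUV frames
        copy = query[::2] + 10.0                       # shorter, globally brightened copy
        qd = difference_curve(feature_curves(query))
        cd = difference_curve(feature_curves(copy))
        print("DTW distance to the copy:", dtw_distance(qd, cd))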
2. A video copy detection method based on a three-dimensional quantized color histogram (TQCH) is proposed to address the large quantization-boundary error and over-sensitivity to color change of ordinary color histograms. The keyframe color values in HSV space are first quantized non-uniformly and the color histogram is computed; to reduce the error at quantization boundaries, adjacent pairs of bins are summed along the H dimension, yielding the three-dimensional quantized color histogram that represents the keyframe's color feature. A corresponding matching method is also proposed. Experiments show that TQCH represents keyframe color effectively; on common color images its recall and precision exceed those of existing color-feature retrieval methods, and it is robust to common distortions.
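A rough Python sketch of a three-dimensional quantized HSV histogram with neighbour summing along the H axis follows. The bin boundaries, the circular pairing of hue bins and the histogram-intersection matcher are assumptions chosen for illustration; the thesis' exact quantization table and matching method are not reproduced.

    import numpy as np

    H_EDGES = np.array([0, 20, 40, 75, 155, 190, 270, 295, 360])  # 8 hue bins (assumed)
    S_EDGES = np.array([0.0, 0.2, 0.7, 1.0])                       # 3 saturation bins
    V_EDGES = np.array([0.0, 0.2, 0.7, 1.0])                       # 3 value bins

    def tqch(hsv):
        """hsv: (N, 3) pixels with H in degrees, S and V in [0, 1]."""
        h = np.clip(np.digitize(hsv[:, 0], H_EDGES[1:-1]), 0, 7)
        s = np.clip(np.digitize(hsv[:, 1], S_EDGES[1:-1]), 0, 2)
        v = np.clip(np.digitize(hsv[:, 2], V_EDGES[1:-1]), 0, 2)
        hist = np.zeros((8, 3, 3))
        np.add.at(hist, (h, s, v), 1)
        # soften the hue quantization boundary: add each pair of neighbouring H bins
        hist = hist + np.roll(hist, -1, axis=0)
        return (hist / hist.sum()).ravel()

    def histogram_intersection(p, q):
        """Similarity in [0, 1]; 1 means the two normalized histograms coincide."""
        return float(np.minimum(p, q).sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        a = np.column_stack([rng.uniform(0, 360, 5000), rng.random(5000), rng.random(5000)])
        b = a + rng.normal(0, 0.01, a.shape)           # a slightly perturbed copy
        print("similarity:", histogram_intersection(tqch(a), tqch(b)))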
3. A video copy detection method based on affine-invariant connected regions (CCB-Affine) is proposed. To address the small number of features and the poor repeatability and robustness of existing shape feature extraction methods, a new way of extracting and describing affine-invariant regions is introduced. In the detector, the keyframe image is first preprocessed; the connected regions formed by pixels with identical gray values are then found, adjacent regions whose gray-value difference is below a threshold are merged, and the last merge that satisfies the conditions is taken as the affine-invariant regions; finally, regions that fail the criteria are removed, giving the final affine-invariant regions. In the descriptor, six invariant moments are constructed from normalized complex central moments. Experiments show that CCB-Affine extracts shape features effectively, withstands common distortions including viewpoint change, is more robust than other methods, and yields a sufficiently large number of features.
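The sketch below illustrates only the detector idea, in simplified form: connected components of equal gray value are labeled, and adjacent components whose gray difference falls below a threshold are merged with a single union-find pass. The iterative merge-until-stable rule, the region filtering step and the six complex-moment invariants are not reproduced, and the synthetic image and threshold are assumptions.

    import numpy as np
    from scipy import ndimage

    def equal_gray_components(gray):
        """Label connected components of pixels that share the same gray value."""
        labels = np.zeros(gray.shape, dtype=int)
        level, next_label = {}, 1
        for g in np.unique(gray):
            lab, n = ndimage.label(gray == g)
            mask = lab > 0
            labels[mask] = lab[mask] + next_label - 1
            for k in range(n):
                level[next_label + k] = int(g)
            next_label += n
        return labels, level

    def merge_similar_neighbors(labels, level, thresh):
        """Union adjacent components whose gray-value difference is below thresh."""
        parent = {l: l for l in level}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        pairs = set()                                  # 4-neighbour adjacency
        for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
            diff = a != b
            pairs |= set(zip(a[diff].tolist(), b[diff].tolist()))
        for i, j in pairs:
            if abs(level[i] - level[j]) < thresh:
                parent[find(i)] = find(j)
        return np.vectorize(find)(labels)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        img = (rng.random((64, 64)) * 4).astype(int) * 60   # gray levels 0/60/120/180
        labels, level = equal_gray_components(img)
        merged = merge_similar_neighbors(labels, level, thresh=70)
        print("regions before/after merging:", len(level), len(np.unique(merged)))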
4. A video copy detection method based on steerable pyramid binary image projection (SP-BIP) is proposed to extract multi-scale, multi-orientation texture features from keyframes. The grayscale keyframe is first orientation-normalized and decomposed with a steerable pyramid, and each sub-band image is binarized with its own adaptive threshold; the normalized row and column projection vectors of each sub-band are then computed as its texture features. Matching is performed by vector intersection. Experiments show that SP-BIP extracts multi-scale, multi-orientation texture features effectively, outperforms texture methods such as the wavelet transform, and is robust to common distortions.
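Below is a rough Python sketch of the binarize-and-project descriptor and the intersection matcher. A true steerable pyramid is replaced by Gaussian smoothing followed by a few oriented derivative kernels, so the decomposition is only a stand-in; the adaptive-threshold binarization, row/column projections and intersection similarity follow the stages described above, with all parameter choices assumed.

    import numpy as np
    from scipy import ndimage

    ORIENTED_KERNELS = [                               # crude 0/45/90/135 degree derivatives
        np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float),
        np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]], float),
        np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),
        np.array([[1, 1, 0], [1, 0, -1], [0, -1, -1]], float),
    ]

    def subbands(gray, scales=(1, 2)):
        """Oriented responses at two scales; a stand-in for the steerable pyramid."""
        out = []
        for s in scales:
            smoothed = ndimage.gaussian_filter(gray, s)
            for kernel in ORIENTED_KERNELS:
                out.append(ndimage.correlate(smoothed, kernel))
        return out

    def binary_projection(band):
        """Binarize with the band's own mean magnitude, then row/column projections."""
        binary = (np.abs(band) > np.abs(band).mean()).astype(float)
        rows, cols = binary.sum(axis=1), binary.sum(axis=0)
        return np.concatenate([rows / (rows.sum() + 1e-9), cols / (cols.sum() + 1e-9)])

    def describe(gray):
        return np.concatenate([binary_projection(b) for b in subbands(gray)])

    def intersection_similarity(x, y):
        """Vector-intersection similarity in [0, 1]."""
        return float(np.minimum(x, y).sum() / np.maximum(x, y).sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        frame = ndimage.gaussian_filter(rng.random((64, 64)), 2)
        noisy = frame + rng.normal(0, 0.01, frame.shape)
        print("similarity:", intersection_similarity(describe(frame), describe(noisy)))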
5. A Tri-training based multi-feature video copy detection method (TBM-VCD) is proposed. To fuse multiple visual features of a video effectively, a new fusion scheme is introduced: the color, shape and texture features are fused through Tri-training semi-supervised learning, compensating for the weaknesses of any single feature. Through the co-training of three classifiers, the recall and precision of video copy detection are improved and the range of application is widened. Experiments show that the proposed approach is fast, achieves high recall and precision, and applies to a wide range of videos; compared with single-feature video copy detection methods, TBM-VCD has a clear advantage in recall and precision and meets the needs of video copy detection well.
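A minimal tri-training sketch with three scikit-learn classifiers on synthetic feature vectors is given below: in each round, unlabeled samples on which two classifiers agree are pseudo-labeled and added to the third classifier's training set, and the final decision is a majority vote. The classifiers, features and number of rounds are assumptions; the error-rate based update conditions of Zhou and Li's Tri-training and the thesis' actual color, shape and texture features are omitted.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    def tri_train(X_lab, y_lab, X_unlab, rounds=3):
        """Each round: pseudo-label the samples on which the other two models agree."""
        models = [DecisionTreeClassifier(random_state=0), GaussianNB(), KNeighborsClassifier()]
        for m in models:
            m.fit(X_lab, y_lab)
        for _ in range(rounds):
            preds = [m.predict(X_unlab) for m in models]
            for i, m in enumerate(models):
                j, k = [x for x in range(3) if x != i]
                agree = preds[j] == preds[k]
                if agree.any():
                    m.fit(np.vstack([X_lab, X_unlab[agree]]),
                          np.concatenate([y_lab, preds[j][agree]]))
        return models

    def majority_vote(models, X):
        votes = np.stack([m.predict(X) for m in models])
        return (votes.sum(axis=0) >= 2).astype(int)    # binary labels 0/1

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        X_lab = rng.random((40, 6));  y_lab = (X_lab[:, 0] > 0.5).astype(int)
        X_unlab = rng.random((300, 6))                  # plenty of unlabeled samples
        X_test = rng.random((100, 6)); y_test = (X_test[:, 0] > 0.5).astype(int)
        models = tri_train(X_lab, y_lab, X_unlab)
        print("test accuracy:", (majority_vote(models, X_test) == y_test).mean())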
