Research on Moving-Object Detection Algorithms in Video Sequences
Abstract
In computer vision, moving-object detection is an important pre-processing task that separates the moving objects of interest from the background. It is widely applied in automatic video surveillance (AVS), video compression, video retrieval, and intelligent human-machine interfaces. In realistic scenes, accurately separating foreground from background is highly challenging because of illumination changes, disturbance of background elements, shadows, and camera jitter or motion. This thesis focuses on foreground-detection algorithms for various dynamic scenes. A dynamic scene is either one shot by a fixed camera that nevertheless contains background motion (e.g., fountains, waves, or falling snow), one in which the camera itself moves, or, more generally, one in which both background motion and camera motion are present. The Mixture of Gaussians (MOG) model, the Dynamic Texture (DT) model, and foreground-detection algorithms based on the biologically inspired center-surround mechanism are chosen as the main objects of study.
For relatively static scenes shot by a fixed camera, the MOG algorithm is optimized and improved to address its high computational complexity. The basic idea is to first apply the inexpensive Running Average (RA) algorithm for coarse detection, roughly locating the foreground regions, and then to apply an improved MOG algorithm for fine per-pixel detection only within those regions. To suppress shadows, the YUV color format is used as the pixel feature. Compared with the traditional MOG algorithm and the Non-Parametric Kernel Density Estimator (KDE) algorithm, the improved method achieves better detection performance at a markedly lower computational cost, and its running speed meets the needs of real-time video processing.
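The coarse stage of this scheme can be illustrated with a minimal sketch. This is not the thesis code: the running-average background model and the threshold value are generic illustrations, and the fine per-pixel MOG stage (noted in the comments) is omitted.

```python
import numpy as np

def running_average_coarse_mask(frames, alpha=0.05, thresh=30.0):
    """Coarse foreground detection with a running-average background.

    The background is an exponentially weighted mean of past frames;
    pixels far from it are flagged as candidate foreground. In the
    thesis's scheme, these candidate regions would then be refined by
    a per-pixel MOG model (not shown here).
    """
    bg = frames[0].astype(np.float64)
    masks = []
    for f in frames[1:]:
        f = f.astype(np.float64)
        mask = np.abs(f - bg) > thresh      # coarse foreground candidates
        bg = (1 - alpha) * bg + alpha * f   # update background estimate
        masks.append(mask)
    return masks
```

Restricting the expensive fine detector to the coarse mask is what yields the claimed speed-up: most pixels are handled by one subtraction and one running-mean update.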
For dynamic scenes shot by a fixed camera, the DT model is analyzed in depth. When the scene is modeled holistically, the input data are high-dimensional vectors, so the Singular Value Decomposition (SVD) required to learn the DT parameters is very expensive. To address this, the Sustained Observability (SO) algorithm proposed by Gopalakrishnan et al. is optimized and improved. The basic idea is first to refine the observability measure and then, exploiting the system-theoretic properties of observability in linear systems, to compute observability at down-sampled image locations and recover a per-pixel observability value at the original scale by up-sampling. Compared with the SO algorithm, the modified method achieves similar detection performance at a much lower computational cost. Second, modeling local patches with DT reduces the cost of each individual SVD but increases the number of decompositions. To address this, the Local Dynamic Texture (LDT) approach is improved: dynamic redundancy is used to measure the similarity between patches, and only groups of patches with low similarity are modeled with DT. Compared with other methods, the improved LDT method achieves a lower Equal Error Rate (EER) at a relatively low computational cost.
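To make the SVD bottleneck concrete, the standard closed-form DT learning step (in the style of Doretto et al., which the thesis builds on) can be sketched as follows. This is a generic illustration, not the thesis implementation; variable names are my own.

```python
import numpy as np

def learn_dt(Y, n):
    """Suboptimal closed-form learning of a Dynamic Texture model.

    Y : (d, tau) matrix whose columns are vectorized frames.
    n : dimension of the hidden state space.
    Returns the observation matrix C, state trajectory X, and
    state-transition matrix A. The SVD of Y is the costly step the
    thesis seeks to avoid or shrink, since d is very large.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n]                     # spatial appearance basis
    X = np.diag(s[:n]) @ Vt[:n, :]   # hidden state trajectory
    # Transition matrix by least squares: X[:, 1:] ~= A @ X[:, :-1]
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return C, X, A
```

Because the SVD cost grows with the frame dimension d, both remedies in the paragraph above attack d: down-sampling (fewer rows of Y) or patch-wise modeling (many small Y matrices instead of one huge one).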
To avoid fixing the dimension of the DT state space in advance, a data-driven adaptive setting method is proposed. Based on the singular-value matrix obtained during DT parameter estimation, the concept of singular entropy is introduced, and the state-space dimension is selected adaptively from the increments of the singular entropy. With the dimension set adaptively, foreground detection in dynamic scenes attains a lower EER, clearly outperforming algorithms whose model dimension is fixed beforehand. In addition, to avoid SVD operations as far as possible when estimating DT parameters, a method combining batch Principal Component Analysis (batch-PCA) with Candid Covariance-Free Incremental Principal Component Analysis (CCIPCA) is adopted: the DT parameters estimated by batch-PCA serve as base parameters, which CCIPCA then updates incrementally. Compared with pure batch-PCA, this achieves similar performance while clearly reducing the average per-frame processing time.
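The order-selection idea can be sketched in a few lines. The entropy-increment formula is the standard one; the cutoff rule and the `tol` value here are illustrative assumptions, not the thesis's actual threshold.

```python
import numpy as np

def adaptive_order(singular_values, tol=0.05):
    """Pick a DT state-space dimension from singular-entropy increments.

    Each singular value s_i contributes an entropy increment
    dE_i = -p_i * log(p_i) with p_i = s_i / sum(s). The order is cut
    off once increments become small relative to the first one; `tol`
    is a hypothetical threshold for illustration.
    """
    s = np.asarray(singular_values, dtype=float)
    p = s / s.sum()
    dE = -p * np.log(p + 1e-12)      # entropy increment of each order
    order = int(np.sum(dE >= tol * dE[0]))
    return max(order, 1)
```

The effect is that sequences whose energy is concentrated in a few modes get a compact state space, while more complex dynamics automatically receive a higher model order.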
In dynamic scenes with camera motion, most foreground-detection algorithms mistake part of the background for foreground, leading to a high false-alarm rate. This thesis studies the biologically inspired center-surround mechanism in depth and proposes a global-then-local detection method. In the global stage, the improved SO algorithm yields candidate foreground regions; in the local stage, a Bayesian center-surround framework computes the local feature contrast of pixels inside those regions, and the foreground contours obtained locally are finally fed back into the candidate regions to remove false detections and refine the result. The global stage operates on every pixel of the whole frame but uses a simple algorithm, whereas the local stage uses a more complex algorithm but is restricted to the candidate regions, so the average per-frame processing time is greatly reduced. Compared with most current algorithms, the method achieves better detection performance at a relatively low computational cost.
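The local feature contrast at the heart of the center-surround stage can be sketched with plain box filters. This is a simplified stand-in for the Bayesian center-surround framework, not the thesis method; the window sizes are hypothetical.

```python
import numpy as np

def box_mean(img, k):
    """Mean over a k x k window via an integral image (clipped at edges)."""
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            out[y, x] = (ii[y1, x1] - ii[y0, x1]
                         - ii[y1, x0] + ii[y0, x0]) / area
    return out

def center_surround_contrast(img, c=3, s=9):
    """Per-pixel contrast between a small center and a larger surround.

    Pixels whose center statistics differ strongly from their surround
    are salient candidates for foreground; window sizes c and s are
    illustrative choices.
    """
    img = img.astype(float)
    return np.abs(box_mean(img, c) - box_mean(img, s))
```

Because this contrast is computed only inside the candidate regions produced by the global stage, its higher per-pixel cost is paid for only a small fraction of the frame, which is exactly the trade-off described above.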
