Research on Outdoor Illumination Estimation Based on Basis Image Decomposition
Abstract
Virtual reality is a technology that has risen rapidly in the information field in recent years and is of great theoretical and practical value. Augmented reality, a branch of virtual reality, draws on computer graphics, interaction technology, sensing technology, three-dimensional display technology and computer vision to overlay computer-generated virtual objects onto real scenes in real time, enhancing the user's understanding of the real scene and delivering a sensory experience beyond reality. With continuing technical progress, augmented reality has been applied in many fields, including the research and development of advanced weapons and aircraft, the manufacture and maintenance of precision instruments, medical research and anatomy training, engineering design and remote robot control, cultural heritage preservation, and education and entertainment.
     Seamlessly fusing virtual objects with real scenes to present the user with a new environment that looks perceptually real is an important research topic in augmented reality, yet many problems remain unsolved. In recent years, research on augmented reality systems has concentrated on tracking, registration and interaction, while illumination consistency has received relatively little attention. Illumination consistency, however, is the key to rendering realistic lighting on virtual objects and to fusing virtual and real objects seamlessly, so the realism of virtual objects in current augmented reality systems is still limited. Because outdoor scenes are geometrically and photometrically complex, estimating their illumination remains a difficult and actively studied open problem. Outdoor illumination estimation is also an important topic in computer vision: algorithms for object segmentation and recognition, video tracking, shadow detection and so on are all affected to varying degrees by changing illumination. Therefore, estimating the lighting conditions of an outdoor scene from input video images according to its illumination characteristics, without a 3D model of the scene, is of great significance to both computer vision and computer graphics.
     With the 3D geometry of the scene unknown, this dissertation studies the illumination of outdoor scene video images captured from a fixed viewpoint. It analyzes the solution properties of illumination estimation algorithms based on a linear decomposition model of outdoor scene lighting, sidesteps the many difficulties of reconstructing large-scale outdoor scenes, dispenses with tedious geometric modeling, and proposes new solution strategies that make illumination estimation more practical, with the goal of achieving illumination consistency between virtual and real objects in augmented reality scenes. The main contributions of this dissertation to outdoor illumination estimation are as follows:
     1) We prove that, under the basis image decomposition model of outdoor scenes, three images captured at the same sun position under different weather conditions are linearly dependent (see the equations after this list), so the basis image equations are under-constrained and the basis images cannot be solved for automatically. We therefore propose a new algorithm that solves for the basis images automatically from images sampled at different sun positions and under different weather conditions. Its basic strategy is to estimate the skylight from pixels lying in shadow, compute the sunlight from the properties of the basis image decomposition, and optimize the basis images together with the sunlight and skylight parameters using the hue consistency between pixels of the sunlight and skylight basis images.
     2) We propose a linear decomposition model for images of static outdoor scenes based on a global illumination model that imposes no restriction on reflectance, and prove that under this model any image of a static outdoor scene can be decomposed into a linear combination of a sunlight basis image and a skylight basis image. We analyze the properties of the basis images and prove that they encapsulate the geometry and material reflectance of the scene and are invariants of the scene; the sunlight and skylight basis images are defined as the global illumination effect of the scene under unit sunlight and unit skylight intensities. Given the input images of the static scene, the basis images are obtained by minimizing a quadratic energy equation (see the sketch after this list).
     3) We propose a new outdoor illumination estimation method based on the constraints and prior conditions of the basis images, which automatically decomposes every frame of a time-lapse video sequence of a static outdoor scene into a linear combination of the sunlight and skylight basis images. Shadowed pixels are detected by analyzing each pixel's time-lapse curve with the k-means clustering algorithm, the sunlight and skylight basis images are obtained from the linear basis image decomposition model, and a procedural refinement scheme based on the constraints and prior conditions of the basis images is proposed to optimize the illumination parameters.
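     As a compact illustration of contributions 1) and 2) above, the decomposition model and the dependence argument can be sketched as follows; the symbols used here are illustrative notation rather than the dissertation's own, and the actual energy may contain additional constraint and prior terms. Each frame \(I_t\) of the fixed-viewpoint scene is modeled as
\[
I_t(x) \;=\; \alpha_t\, B_{\mathrm{sun}}(x) \;+\; \beta_t\, B_{\mathrm{sky}}(x),
\]
where \(B_{\mathrm{sun}}\) and \(B_{\mathrm{sky}}\) are the sunlight and skylight basis images and \(\alpha_t, \beta_t \ge 0\) are the unknown lighting intensities at time \(t\). For a fixed sun position \(B_{\mathrm{sun}}\) is the same for all frames, so every such image lies in the two-dimensional subspace spanned by \(B_{\mathrm{sun}}\) and \(B_{\mathrm{sky}}\); any three images \(I_1, I_2, I_3\) taken at that sun position are therefore linearly dependent, i.e. there exist scalars \(c_1, c_2, c_3\), not all zero, with
\[
c_1 I_1 + c_2 I_2 + c_3 I_3 = 0,
\]
which is the under-constraint noted in contribution 1). Given frames with estimated coefficients, the basis images can be sought by minimizing a quadratic data term of the general form
\[
E(B_{\mathrm{sun}}, B_{\mathrm{sky}}) \;=\; \sum_t \sum_x \bigl( I_t(x) - \alpha_t B_{\mathrm{sun}}(x) - \beta_t B_{\mathrm{sky}}(x) \bigr)^2 .
\]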
     None of the algorithms in this dissertation requires any 3D geometric information about the scene, which avoids large-scale reconstruction of outdoor scenes. They place no requirements on the materials or textures of objects in the scene and do not require any special object or surface to be present, so they are applicable to general outdoor scenes. Apart from the scene video images they need no additional input and no user interaction, which makes them easy to apply in augmented reality systems.
In recent years, virtual reality has grown rapidly and is of great theoretical and practical significance. Augmented reality is developed on the basis of virtual reality: with the aid of computer graphics, interaction technology, sensor technology, three-dimensional display technology and computer vision, an augmented reality system overlays computer-generated objects onto video images of real scenes in real time to enhance the user's sensory experience. As the underlying concepts and technologies have matured, augmented reality has been applied to the research and development of sophisticated weapons and aircraft, the manufacture and repair of precision instruments, medical research and anatomical training, engineering design and remote robot control, the protection of cultural heritage, educational entertainment, and many other areas.
     Through the seamless integration of virtual objects and real scenes, augmented reality can enhance the display of the real world. However, many open problems remain. In recent years, research on augmented reality has focused mainly on tracking, registration and interaction, while illumination consistency has received relatively little attention, even though it plays an important role in achieving high realism; as a result, the realism of virtual objects in many augmented reality systems is still low. Due to the complexity of scene geometry and lighting, illumination estimation for outdoor scenes remains a difficult and actively studied problem. It is also an important research topic in computer vision: varying illumination usually degrades the performance of algorithms for object recognition and segmentation, video tracking, shadow detection and so on. Therefore, without knowledge of the 3D scene geometry, real-time illumination estimation for outdoor scenes is of great importance to both computer graphics and computer vision.
     Assuming that the 3D geometric information of the scene is unknown, the dissertation focuses on illumination estimation from outdoor video images captured from a fixed viewpoint, and analyzes the solvability of the illumination parameters under the linear decomposition model of outdoor scene lighting. To avoid the difficulties of reconstructing the 3D geometry of large-scale outdoor scenes, we propose new solution strategies that make outdoor illumination estimation more practical. The contributions of the dissertation are as follows:
     1) We prove that, under the sunlight/skylight basis image decomposition for fixed outdoor scenes, three images captured at the same sun position under different weather conditions are linearly dependent, so the basis image equations are under-constrained and cannot be solved automatically. We then present an algorithm that solves for the basis images using four images taken at two sun positions under two kinds of weather conditions. In this algorithm, the intensity of the skylight is estimated from pixels in the shadow region, the intensity of the sunlight is computed from the characteristics of the basis image decomposition, and the hue consistency between sunlit pixels of the sunlight and skylight basis images is used to optimize the basis images and the illumination parameters.
     2) Based on a global illumination model that places no restriction on reflectance, we propose a linear decomposition equation for images of static outdoor scenes and prove that every image of a static outdoor scene can be decomposed into a linear combination of sunlight and skylight basis images, which encapsulate the geometry and material reflectance of the scene. We further prove that the resulting basis images are invariants of the scene, corresponding to the global illumination effects of the outdoor scene under unit sunlight and skylight intensities. Given the input images of a static outdoor scene, the basis images are obtained by minimizing a quadratic energy function.
     3) We investigate the constraints and priors of the basis images, and propose a novel decomposition method that solves for the sunlight and skylight basis images of a static outdoor scene from a time-lapse image sequence without any user interaction. During decomposition, we first detect shadowed pixels by analyzing each pixel's time-lapse curve with k-means clustering, and then solve for the sunlight and skylight basis images through an iterative procedure based on the decomposition equation. The basis images are further optimized by exploiting their constraints and priors (a simplified code sketch follows this list).
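     As a concrete, simplified sketch of the pipeline summarized in contributions 2) and 3), the Python snippet below detects shadowed pixels by k-means clustering of each pixel's time-lapse curve and then fits the sunlight and skylight basis images by linear least squares, assuming per-frame lighting coefficients are available. The function names, the use of NumPy and scikit-learn, and the assumption of known coefficients are illustrative choices, not part of the dissertation, which solves the coefficients and basis images jointly through an iterative procedure with additional constraints and priors.

import numpy as np
from sklearn.cluster import KMeans

def detect_shadow_mask(frames):
    # frames: (T, H, W) grayscale time-lapse sequence of a static outdoor scene.
    # For each pixel, cluster its T intensity samples into two groups and mark the
    # frames that fall into the darker cluster as "in shadow" at that pixel.
    T, H, W = frames.shape
    curves = frames.reshape(T, -1)                    # column p holds pixel p's time-lapse curve
    flat_mask = np.zeros((T, H * W), dtype=bool)
    for p in range(H * W):
        curve = curves[:, p]
        labels = KMeans(n_clusters=2, n_init=5).fit_predict(curve.reshape(-1, 1))
        darker = int(np.argmin([curve[labels == k].mean() for k in (0, 1)]))
        flat_mask[:, p] = labels == darker            # darker cluster ~ shadowed frames
    return flat_mask.reshape(T, H, W)

def fit_basis_images(frames, alphas, betas):
    # Least-squares fit of I_t ~ alpha_t * B_sun + beta_t * B_sky at every pixel.
    # frames: (T, H, W); alphas, betas: length-T sunlight / skylight coefficients.
    T, H, W = frames.shape
    A = np.stack([alphas, betas], axis=1)             # (T, 2) design matrix
    Y = frames.reshape(T, -1)                         # (T, H*W) stacked observations
    B, _, _, _ = np.linalg.lstsq(A, Y, rcond=None)    # minimizes ||A @ B - Y||^2
    return B[0].reshape(H, W), B[1].reshape(H, W)     # B_sun, B_sky

In a full pipeline along the lines described above, the per-frame coefficients would themselves be unknown; the skylight term could, for example, be estimated from shadowed pixels (where the direct sunlight contribution vanishes) and then refined together with the basis images in the iterative procedure.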
     The above approaches require no information about the scene geometry, avoiding the 3D reconstruction of large-scale outdoor scenes. They are applicable to general outdoor scenes, since they make no assumptions about the materials or textures of the scene and do not require any special object or surface to be present. Moreover, they need no input other than video images of the outdoor scene and no user interaction, so they can easily be integrated into augmented reality systems.