Research on 3D Facial Expression Tracking Based on Time-Varying Point Clouds
Abstract
3D facial expression tracking using time-varying point clouds is an important research problem in computer graphics and a current research focus. It has wide applications in fields such as computer animation and games, digital film and television, virtual reality, teleconferencing, videophony, facial expression recognition, and computer-aided education. Recent advances in time-varying 3D data acquisition have made it possible to capture facial point cloud sequences at video rate. These point cloud sequences record facial expression changes, including global rigid motion and local non-rigid deformation. Tracking the point clouds to produce triangular mesh models with consistent connectivity avoids the tedious manual work of face modeling and expression editing. 3D facial expression tracking using time-varying point clouds has therefore become one of the newest approaches to modeling realistic 3D facial expressions.
     Three key and difficult problems must be solved in 3D facial expression tracking using time-varying point clouds: 1) how to reduce manual intervention during tracking; 2) how to capture lifelike expression details; 3) how to improve processing efficiency toward real-time tracking. How well these problems are solved directly determines the practical applicability of expression tracking techniques in the fields above. This thesis therefore analyzes these three problems and proposes novel algorithms that improve performance in each respect, so that the methods can satisfy the requirements of practical applications. The main contributions are as follows:
     1. We propose a new objective function for mesh registration. A normal-preserving constraint and an angle-preserving constraint are introduced into the objective function to reduce manual intervention during tracking and to maintain good mesh quality (a schematic form of such an objective is sketched after the two constraints below).
     1) The normal-preserving constraint. This constraint removes the need to manually select feature correspondences when registering the parametric mesh to each point cloud, reducing manual intervention during tracking. Compared with existing automatic tracking algorithms based on optical flow, the algorithm based on the normal-preserving constraint is more robust. Even when the shapes of the parametric mesh and the point cloud differ considerably, a satisfactory registration can still be obtained, which improves the adaptability of the algorithm.
     2) The angle-preserving constraint. This constraint keeps the sequence of parametric meshes generated during tracking at consistently good quality, avoiding sliver and very small triangles.
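     The abstract does not give the exact formulation, but an objective combining a data term with the two constraints could schematically take the following form, where x_i are mesh vertices, p_{c(i)} their closest point-cloud points, n_i and θ_{jk} the current normals and triangle interior angles, superscript 0 the corresponding reference values, and λ_n, λ_a assumed weights. This is only an illustrative sketch, not the thesis's actual energy:

```latex
E(\mathbf{X}) =
  \underbrace{\sum_{i} \bigl\| \mathbf{x}_i - \mathbf{p}_{c(i)} \bigr\|^2}_{\text{point-cloud fitting}}
  + \lambda_n \underbrace{\sum_{i} \bigl\| \mathbf{n}_i - \mathbf{n}_i^{0} \bigr\|^2}_{\text{normal preserving}}
  + \lambda_a \underbrace{\sum_{j} \sum_{k=1}^{3} \bigl( \theta_{jk} - \theta_{jk}^{0} \bigr)^2}_{\text{angle preserving}}
```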
     2. We propose a 3D facial expression tracking algorithm based on Laplacian smoothing and multi-scale mesh matching, which faithfully reproduces the detailed features of facial expressions.
     1) Detail feature extraction. Directly registering the mesh model to the whole point cloud generally fails to match the sharp local deformations on the cloud accurately. Extracting local detail features such as wrinkles and folds with Laplacian smoothing and matching them separately overcomes this limitation and improves registration accuracy. Because the details are extracted entirely from the point cloud itself, the motion information contained in the cloud is fully exploited; compared with traditional high-realism tracking methods, this avoids the extra cost of hand-painting or separately capturing the details.
     2) Multi-scale mesh matching. A low-scale mesh matches the global deformation caused by facial muscle motion once the detail features have been smoothed away; a high-scale mesh matches the subtle expression details extracted from the point cloud that arise from local skin deformation. The algorithm thus tracks subtle expression details while remaining efficient. A minimal sketch of the detail-extraction step follows.
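     As a minimal sketch of the smoothing-based split, the following assumes a triangle mesh given as numpy arrays and uses a uniform (umbrella) Laplacian; the choice of discrete Laplacian, the iteration count, and the step size are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np
from scipy.sparse import csr_matrix, identity

def uniform_laplacian(n_verts, faces):
    """Row-normalized umbrella Laplacian L = D^{-1} A - I (an assumed stand-in
    for whatever discrete Laplacian the thesis actually uses)."""
    rows, cols = [], []
    for a, b, c in faces:                      # both directions of each triangle edge
        rows += [a, b, b, c, c, a]
        cols += [b, a, c, b, a, c]
    A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n_verts, n_verts))
    A.data[:] = 1.0                            # duplicates were summed; reset to binary
    deg = np.asarray(A.sum(axis=1)).ravel()
    Dinv = csr_matrix((1.0 / deg, (np.arange(n_verts), np.arange(n_verts))),
                      shape=(n_verts, n_verts))
    return Dinv @ A - identity(n_verts)

def split_base_and_details(verts, faces, n_iters=20, step=0.5):
    """Separate a scan into a smooth base surface (global shape, matched by the
    low-scale mesh) and a per-vertex detail residual such as wrinkles and folds
    (matched by the high-scale mesh)."""
    L = uniform_laplacian(len(verts), faces)
    base = verts.astype(float).copy()
    for _ in range(n_iters):
        base += step * (L @ base)              # one umbrella smoothing step
    details = verts - base                     # detail displacements
    return base, details
```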
     3. We propose a 3D facial expression tracking algorithm based on region-wise deformation, which improves tracking efficiency.
     When registering the mesh model to each frame of the point cloud sequence, the mesh is automatically partitioned into initial regions; each initial region is then automatically split into subregions, and a single non-rigid transformation is computed for each subregion so that the deformed subregions approximate the point cloud. Computing the deformation region by region significantly reduces the number of unknowns in the optimization, which greatly accelerates the computation and saves storage; a minimal sketch follows.
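     The abstract does not specify the per-subregion transformation model; the sketch below assumes one affine transform per subregion, fitted by least squares to precomputed closest-point correspondences (the region labels, the correspondences, and all function names here are illustrative assumptions):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map (A, t) with dst ≈ src @ A.T + t."""
    src_h = np.hstack([src, np.ones((len(src), 1))])      # homogeneous coordinates
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)       # M is 4x3
    return M[:3].T, M[3]                                  # A (3x3), t (3,)

def regionwise_deform(verts, region_labels, targets):
    """Deform each subregion with one shared transform toward its matched
    point-cloud points: 12 unknowns per subregion instead of 3 per vertex."""
    out = verts.astype(float).copy()
    for r in np.unique(region_labels):
        idx = np.flatnonzero(region_labels == r)
        A, t = fit_affine(verts[idx], targets[idx])
        out[idx] = verts[idx] @ A.T + t
    return out
```

     A real implementation would also blend the transforms across subregion boundaries to keep the deformed mesh continuous; the point of the sketch is only the reduction in unknowns.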
     In this thesis, we propose a new objective function for mesh registration in expression tracking, an expression tracking algorithm based on Laplacian smoothing and multi-scale mesh matching, and an expression tracking algorithm based on region-wise deformation. Together these reduce manual intervention in tracking and improve both tracking quality and speed. Exploiting the coherence between frame point clouds to better handle poor-quality input, and combining the algorithms with parallel processing to further improve efficiency, remain directions for future work.
