Abstract
We present a fully automatic approach to real-time facial tracking and animation with a single video camera. Our approach requires no calibration for each individual user. It learns a generic regressor from public image datasets, which can be applied to any user and arbitrary video cameras to infer accurate 2D facial landmarks as well as the 3D facial shape from 2D video frames. The inferred 2D landmarks are then used to adapt the camera matrix and the user identity to better match the facial expressions of the current user. The regression and adaptation are performed in an alternating manner. As more facial expressions are observed in the video, the whole process quickly converges to accurate facial tracking and animation. In experiments, our approach demonstrates robustness and accuracy on par with state-of-the-art techniques that require a time-consuming calibration step for each individual user, while running at 28 fps on average. We consider our approach an attractive solution for wide deployment in consumer-level applications.
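The alternating regression/adaptation scheme in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the "generic regressor" is stubbed out, the landmark model is a simple linear identity basis, and all names (`generic_regressor`, `id_basis`, `id_coeffs`) are assumptions introduced for illustration. The point is only the structure: per frame, regress 2D landmarks, then refit the user-specific parameters to all landmarks observed so far.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LANDMARKS, N_ID = 5, 3

# Illustrative linear landmark model: flattened 2D landmarks are
# mean_shape + id_basis @ id_coeffs. This stands in for the paper's
# camera/identity parameters; the real formulation differs.
mean_shape = rng.normal(size=2 * N_LANDMARKS)
id_basis = rng.normal(size=(2 * N_LANDMARKS, N_ID))
true_id = np.array([0.5, -0.2, 0.1])  # the (unknown) current user

def generic_regressor(frame_noise):
    """Stand-in for the learned generic regressor: noisy 2D landmarks."""
    return mean_shape + id_basis @ true_id + frame_noise

observed = []
id_coeffs = np.zeros(N_ID)  # start from the average identity
for frame in range(20):
    # Step 1: regression -- infer 2D landmarks for the current frame.
    landmarks = generic_regressor(rng.normal(scale=0.01, size=2 * N_LANDMARKS))
    observed.append(landmarks)
    # Step 2: adaptation -- refit the identity coefficients, in a
    # least-squares sense, to the landmarks accumulated so far.
    avg = np.mean(observed, axis=0)
    id_coeffs, *_ = np.linalg.lstsq(id_basis, avg - mean_shape, rcond=None)

print(np.round(id_coeffs, 2))  # drifts toward the underlying identity
```

As in the abstract, the estimate improves as more frames (and, in the real system, more distinct expressions) are observed, which is what lets the method skip a per-user calibration session.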
Supplemental Material
Supplemental material is available for download.