Copyright is held by the author / owner(s).
SIGGRAPH Asia 2011, Hong Kong, China, December 12 – 15, 2011.
ISBN 978-1-4503-0807-6/11/0012
Kinect-based Facial Animation
Thibaut Weise
EPFL
Sofien Bouaziz
EPFL
Hao Li
Columbia University
Mark Pauly
EPFL
Abstract
In this demo we present our system for performance-based character
animation that enables any user to control the facial expressions
of a digital avatar in realtime. Compared to existing technologies,
our system is easy to deploy and does not require any face markers,
intrusive lighting, or complex scanning hardware. Instead, the user
is recorded in a natural environment using the non-intrusive, commercially
available Microsoft Kinect 3D sensor. Since high noise
levels in the acquired data prevent conventional tracking methods
from working well, we developed a method that combines a database of
existing animations with facial tracking to generate compelling animations.
Realistic facial tracking facilitates a range of new applications,
e.g. in digital gameplay, telepresence or social interactions.
1 Real-time Facial Animation
The technology behind the demo was first presented at SIGGRAPH
2011 [Weise et al. 2011]. Our current prototype system uses a simple
2-step process for facial tracking: First, the user performs a set
of calibration expressions. A generic facial blendshape rig is then
modified to best reconstruct these training expressions while keeping
the semantics of the blendshapes intact [Li et al. 2010]. Only
five training expressions are typically sufficient to enable compelling
animations. The resulting personalized facial rig is then
used for real-time facial tracking. Due to the high noise levels of the
input data, a database of animations is incorporated into the tracking
framework, resulting in stable and accurate facial animations.
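The following is a minimal sketch, not the authors' implementation, of the basic idea: a linear blendshape rig evaluated as the neutral face plus weighted expression offsets, with per-frame weights fit to a noisy depth frame and regularized toward weights suggested by the animation database. All names, array shapes, and the simple Tikhonov-style prior are illustrative assumptions; the published system uses a more elaborate formulation.

# Sketch (assumptions, not the authors' code): linear blendshape model
# driven by per-frame weights, with noisy depth observations regularized
# toward expression weights predicted from an animation database.
import numpy as np

def evaluate_rig(neutral, blendshapes, weights):
    # Blendshape model: vertices = neutral + sum_i w_i * delta_i
    # neutral: (V, 3), blendshapes: (K, V, 3), weights: (K,)
    return neutral + np.tensordot(weights, blendshapes, axes=1)

def solve_frame_weights(neutral, blendshapes, observed, prior_weights,
                        lambda_prior=0.1):
    # Least-squares fit of blendshape weights to one noisy depth frame,
    # pulled toward the database-predicted weights (Tikhonov-style prior).
    K = blendshapes.shape[0]
    B = blendshapes.reshape(K, -1).T          # (3V, K) linear basis
    r = (observed - neutral).reshape(-1)      # (3V,) residual to explain
    A = B.T @ B + lambda_prior * np.eye(K)    # regularized normal equations
    b = B.T @ r + lambda_prior * prior_weights
    w = np.linalg.solve(A, b)
    return np.clip(w, 0.0, 1.0)               # keep weights in a valid range

# Toy usage with random data standing in for Kinect depth input.
rng = np.random.default_rng(0)
V, K = 500, 5                                  # e.g. five calibration expressions
neutral = rng.normal(size=(V, 3))
blendshapes = 0.1 * rng.normal(size=(K, V, 3))
true_w = np.array([0.8, 0.0, 0.3, 0.0, 0.1])
observed = evaluate_rig(neutral, blendshapes, true_w) + 0.05 * rng.normal(size=(V, 3))
w = solve_frame_weights(neutral, blendshapes, observed, prior_weights=np.full(K, 0.2))
print(w)

The prior term is what keeps the solved weights in the space of plausible expressions when individual depth frames are too noisy to constrain the fit on their own, which is the role the animation database plays in the tracking framework described above.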
2 Applications
In the professional domain our technology can be used as a virtual
mirror: animators drive their CG characters in 3D animation
software packages using their own facial expressions. Similarly, it
has never been easier to include compelling facial animations in the
previs pipeline for movies, or to create facial animations at almost
no cost for secondary characters and crowds in games. For consumers
our technology facilitates a range of new applications, e.g.
in digital gameplay or social interactions.
Acknowledgements. This research is supported by Swiss National
Science Foundation grant 20PA21L_129607.
References
LI, H., WEISE, T., AND PAULY, M. 2010. Example-based facial
rigging. ACM Trans. Graph. 29, 32:1–32:6.
WEISE, T., BOUAZIZ, S., LI, H., AND PAULY, M. 2011. Realtime
performance-based facial animation. ACM Trans. Graph.
30, 77:1–77:10.
e-mail: thibaut.weise@epfl.ch
e-mail: sofien.bouaziz@epfl.ch
e-mail: hao@hao-li.com
e-mail: mark.pauly@epfl.ch
Figure 1: Real-time facial animation using the Kinect as an input
device: the facial expressions of the user are recognized and
tracked in real-time and transferred onto an arbitrary virtual character.