Researchers from Disney Research and Carnegie Mellon University’s Robotics Institute have developed a method that translates an actor’s motions into a three-dimensional face model and subdivides it into facial regions, enabling animators to create the poses they need.
The work, to be presented on August 10 at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques in Vancouver, envisions the creation of a facial model that could be used to rapidly animate any number of characters for films, video games or exhibits.
‘We can build a model that is driven by data, but can still be controlled in a local manner,’ said J Rafael Tena, a Disney research scientist, who developed the interactive face models based on principal component analysis (PCA) with Iain Matthews, senior research scientist at Disney, and Fernando De la Torre, associate research professor of robotics at Carnegie Mellon.
Tena said that most facial animation still depends on ‘blendshape’ models, sets of facial poses sculpted by artists based on static images. Given the wide range of human expressions, it can be difficult to predict all of the facial poses required in a film or video game.
Tena, De la Torre and Matthews created their models by recording facial motion-capture data from a professional actor as he performed sentences with emotional content, localised facial actions and random motions. To cover the whole face, 320 markers were applied, enabling the cameras to capture facial motion throughout the performances.
The data from the actor was then analysed using a mathematical method that divided the face into regions, based in part on distances between points and in part on correlations between points that tend to move in concert with each other.
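The article does not give the exact algorithm, but the idea of grouping markers by a mix of spatial proximity and motion correlation can be sketched as follows. This is an illustrative assumption, not the authors' code: the Gaussian affinity weighting, the spectral-clustering step and all parameter names (`sigma_d`, `sigma_c`, `n_regions`) are hypothetical choices made for the example.

```python
import numpy as np

def segment_markers(positions, trajectories, n_regions=3, sigma_d=1.0, sigma_c=0.5):
    """Group motion-capture markers into facial regions.

    positions:    (M, 3) rest positions of the markers.
    trajectories: (M, T) per-marker motion signals over T frames.
    The affinity mixes spatial proximity and motion correlation, loosely
    following the paper's description; the weighting scheme here is an
    assumption, not the authors' exact method.
    """
    M = positions.shape[0]
    # Pairwise Euclidean distances between marker rest positions.
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    # Pairwise correlation of the marker motion signals.
    corr = np.corrcoef(trajectories)
    # High affinity for markers that are close AND move in concert.
    A = np.exp(-dist**2 / (2 * sigma_d**2)) * np.exp(-(1.0 - corr) / sigma_c)
    # Spectral embedding from the symmetric normalised Laplacian.
    d = A.sum(axis=1)
    Lsym = np.eye(M) - (A / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
    _, vecs = np.linalg.eigh(Lsym)        # eigenvalues in ascending order
    emb = vecs[:, :n_regions]             # smallest-eigenvalue eigenvectors
    # Farthest-point initialisation, then a few k-means sweeps.
    centers = [emb[0]]
    for _ in range(n_regions - 1):
        gap = np.min([((emb - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(emb[int(np.argmax(gap))])
    centers = np.array(centers)
    for _ in range(20):
        labels = np.argmin(((emb[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_regions):
            if np.any(labels == k):
                centers[k] = emb[labels == k].mean(axis=0)
    return labels
```

Markers that sit close together and share a motion signature end up in the same region, mirroring the article's description of grouping by distance and by points that move in concert.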
These regional sub-models are independently trained, but share boundaries. In this study, the result was a model with 13 distinct regions, but Tena said more regions would be possible by using performance-capture techniques that can provide a dense reconstruction of the face, rather than the sparse samples produced by traditional motion-capture equipment.
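A minimal sketch of what training independent per-region sub-models could look like, given that the article says the models are based on PCA. The data layout, the `var_keep` threshold and the function itself are assumptions for illustration; the boundary-sharing constraint between neighbouring regions is deliberately omitted.

```python
import numpy as np

def train_region_pca(frames, region_indices, var_keep=0.95):
    """Train an independent PCA sub-model for each facial region.

    frames:         (T, M, 3) marker positions over T frames (hypothetical layout).
    region_indices: list of index arrays, one per region, e.g. the output
                    of a segmentation step.
    Returns a (mean, components) pair per region; blending shared boundary
    markers between neighbouring regions is omitted for brevity.
    """
    models = []
    for idx in region_indices:
        # Flatten this region's markers into one row vector per frame.
        X = frames[:, idx, :].reshape(frames.shape[0], -1)
        mean = X.mean(axis=0)
        Xc = X - mean
        # SVD of the centred data gives the principal components.
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        var = S**2 / (S**2).sum()
        # Keep enough components to explain var_keep of the variance.
        k = int(np.searchsorted(np.cumsum(var), var_keep)) + 1
        models.append((mean, Vt[:k]))
    return models
```

Because each region is modelled separately, an animator could in principle adjust one region's PCA coefficients without disturbing the rest of the face, which is the local controllability the researchers describe.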
Future work will reportedly include developing models based on higher-resolution motion data and developing an interface that can be readily used by computer animators.