Performance Driven Facial Capture and Animation

During my PhD, I was part of the CAMERA laboratory at the University of Bath, UK.

My PhD focused on the accurate capture of facial geometry, expressions and movements, and on the digitization and parameterization of this information. The parameterized representation can then be used to extract subject-specific characteristics, to build probabilistic models that regularize future captures, or to animate a digital face.

Realistic rendering and animation of human faces has long been a goal of the computer graphics community, pioneered by Parke in 1972 and advanced by many others since. The field is highly multi-disciplinary, drawing on techniques from computer vision, computer graphics and machine learning to achieve efficiency, robustness and accuracy. The difficulty of producing convincing representations, renderings and animations of human faces, combined with the revolutionary impact such work would have on fields including education, medicine, psychology, forensics, human-computer interaction, virtual reality and the entertainment industry, has rightfully earned it the title of the holy grail of computer graphics.

Over the course of my research, I built systems to automatically generate fully controllable and animatable blendshape rigs from 3D reconstructions of facial geometry. My research focused both on enhancing marker-based capture techniques using multi-view camera systems and on improving marker-less tracking of facial expressions and head movement from monocular camera inputs.
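At the core of such rigs is the standard linear blendshape model: an animated face is the neutral mesh plus a weighted sum of per-expression vertex offsets. The sketch below illustrates this general formulation with NumPy; the dimensions and data are hypothetical toy values, not from the actual rigs.

```python
import numpy as np

def evaluate_blendshape_rig(neutral, deltas, weights):
    """Evaluate a linear blendshape rig.

    neutral: (V, 3) neutral-pose vertex positions
    deltas:  (K, V, 3) per-blendshape vertex offsets from the neutral
    weights: (K,) activation weights, typically in [0, 1]
    """
    # Weighted sum of the K delta shapes, added to the neutral pose.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 2 vertices, 2 blendshapes (hypothetical data)
neutral = np.zeros((2, 3))
deltas = np.array([
    [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],   # shape 0 moves vertex 0 in x
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]],   # shape 1 moves vertex 1 in y
])
weights = np.array([0.5, 1.0])
mesh = evaluate_blendshape_rig(neutral, deltas, weights)
```

Because the model is linear in the weights, fitting a rig to captured geometry reduces to a constrained least-squares problem, which is one reason the blendshape parameterization is so widely used in facial capture.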


Shridhar Ravikumar – 2017


Reading Between the Dots: Combining 3D Markers and FACS Classification for High-Quality Blendshape Facial Animation. 
Shridhar Ravikumar, Colin Davidson, Dmitry Kit, Neill Campbell, Luca Benedetti, Darren Cosker
Graphics Interface, 2016

Lightweight Markerless Monocular Face Capture with 3D Spatial Priors
Shridhar Ravikumar, Jose Serra, Darren Cosker

Easy Generation of Facial Animation Using Motion Graphs
José Serra, Ozan Cetinaslan, Shridhar Ravikumar, Verónica Orvalho, Darren Cosker
Computer Graphics Forum, 2018

Anamaria Ciucanu, Naval Bhandari, Xiaokun Wu, Shridhar Ravikumar, Yong-Liang Yang, Darren Cosker
Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games

Research Output

Overview of our marker-less monocular capture pipeline.

Overview of our hybrid multi-view and marker-based capture pipeline.

Results obtained from our monocular capture pipeline, with and without the 3D spatial priors.

Comparison of the results from a vanilla marker-based capture system against our system with FACS classification from video.

Our capture method parameterizes the captured data so that the facial expressions can be retargeted onto another digital face, as shown in the video below.
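The key idea behind this kind of parameter-space retargeting is that, once blendshape weights are fitted to the capture, the same weights can drive any target rig whose shapes carry matching semantics. A minimal sketch, assuming a simple linear blendshape formulation (function names and toy data here are illustrative, not the thesis's actual implementation):

```python
import numpy as np

def fit_blendshape_weights(captured, neutral, deltas):
    """Least-squares fit of blendshape weights to captured geometry.

    captured: (V, 3) tracked vertex/marker positions
    neutral:  (V, 3) source rig neutral pose
    deltas:   (K, V, 3) source rig blendshape offsets
    """
    K = deltas.shape[0]
    A = deltas.reshape(K, -1).T          # (3V, K) basis matrix
    b = (captured - neutral).ravel()     # (3V,) observed offsets
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def retarget(weights, target_neutral, target_deltas):
    """Apply source weights to a target rig with matching shape semantics."""
    return target_neutral + np.tensordot(weights, target_deltas, axes=1)

# Toy check with a hypothetical 2-vertex, 2-shape rig
neutral = np.zeros((2, 3))
deltas = np.array([
    [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
])
captured = neutral + 0.5 * deltas[0] + 1.0 * deltas[1]
w = fit_blendshape_weights(captured, neutral, deltas)
```

In practice the fit would also need bound constraints (weights in [0, 1]) and temporal regularization, but the unconstrained least-squares form above conveys the core idea.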

Results obtained from our lighting optimization algorithm that allows us to re-texture, relight and overlay the 3D mesh on top of the tracked monocular video.
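The thesis's exact lighting optimization is not detailed here; as an illustration of the general idea, a common formulation estimates low-order spherical harmonic lighting coefficients by least squares from observed pixel intensities, albedos and surface normals (a Lambertian approximation, not necessarily the method used). A minimal sketch under those assumptions:

```python
import numpy as np

def sh_basis(normals):
    """First-order (4-term) spherical harmonics basis for unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([np.ones_like(x), x, y, z], axis=1)

def fit_lighting(intensities, albedos, normals):
    """Least-squares fit of low-order SH lighting coefficients.

    Models observed intensity as albedo * (sh_basis(normal) @ coeffs).
    """
    A = albedos[:, None] * sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(albedos, normals, coeffs):
    """Re-render intensities under the fitted (or any new) lighting."""
    return albedos * (sh_basis(normals) @ coeffs)

# Synthetic sanity check with known lighting (hypothetical data)
rng = np.random.default_rng(0)
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedos = rng.uniform(0.2, 1.0, size=200)
true_coeffs = np.array([0.8, 0.1, -0.2, 0.3])
intensities = albedos * (sh_basis(normals) @ true_coeffs)
coeffs = fit_lighting(intensities, albedos, normals)
```

Once lighting coefficients are recovered, `relight` can re-shade the tracked mesh under new coefficients, which is the mechanism that makes re-texturing and relighting overlays possible in this style of pipeline.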