Model-based synthesis of visual speech movements from 3D video
We describe a method for synthesising visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured with a 3D stereo face capture system and segmented into phonetic units. A dynamic parameterisation of these data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation, a model of how lips move is built and used to animate visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the stored phonetic units most similar to the target utterance during synthesis. By combining the properties of model-based synthesis (e.g., HMMs, neural nets) with unit selection, we improve the quality of the resulting visual speech synthesis.
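The disambiguation step can be illustrated concretely. Below is a minimal sketch, assuming a simple database of phonetic units that pair audio parameters (e.g., MFCC frames) with lip-shape/velocity trajectories; the names (PhoneticUnit, select_unit), the feature dimensions, and the frame-resampling distance are illustrative assumptions, not the authors' implementation.

```python
# Sketch of selecting the stored phonetic unit whose audio parameters
# are most similar to the target utterance. All structures and
# dimensions here are hypothetical placeholders.
import numpy as np
from dataclasses import dataclass


@dataclass
class PhoneticUnit:
    phone: str                # phone label, e.g. "aa"
    audio_feats: np.ndarray   # (T, D) audio parameters, e.g. MFCC frames
    lip_traj: np.ndarray      # (T, K) lip-shape parameters and velocities


def unit_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean frame-wise Euclidean distance after resampling both
    feature sequences to a common length (a crude stand-in for a
    proper time alignment such as DTW)."""
    n = min(len(a), len(b))
    ia = np.linspace(0, len(a) - 1, n).astype(int)
    ib = np.linspace(0, len(b) - 1, n).astype(int)
    return float(np.mean(np.linalg.norm(a[ia] - b[ib], axis=1)))


def select_unit(target_feats: np.ndarray, phone: str,
                database: list[PhoneticUnit]) -> PhoneticUnit:
    """Among stored units with the same phone label, return the one
    whose audio parameters best match the target utterance; assumes
    at least one candidate exists for the phone."""
    candidates = [u for u in database if u.phone == phone]
    return min(candidates,
               key=lambda u: unit_distance(u.audio_feats, target_feats))


# Usage: pick a lip trajectory for one target phone (toy data).
db = [PhoneticUnit("aa", np.random.randn(12, 13), np.random.randn(12, 6)),
      PhoneticUnit("aa", np.random.randn(15, 13), np.random.randn(15, 6))]
best = select_unit(np.random.randn(14, 13), "aa", db)
print(best.lip_traj.shape)
```

In the paper's pipeline, the comparison would operate within the dynamic parameterisation (which couples lip shapes with their velocities) and the selected units would then drive the model-based trajectory generation; the sketch shows only the select-by-similarity idea that disambiguates the audio-to-lip mapping.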
