Award Abstract # 9984847
CAREER: Developing and Evaluating a Spatio-temporal Representation for Analysis, Modeling, Recognition and Synthesis of Facial Expressions

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: GEORGIA TECH RESEARCH CORP
Initial Amendment Date: April 5, 2000
Latest Amendment Date: June 5, 2003
Award Number: 9984847
Award Instrument: Continuing Grant
Program Manager: Ephraim Glinert
IIS
Division of Information & Intelligent Systems
CSE
Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2000
End Date: June 30, 2005 (Estimated)
Total Intended Award Amount: $301,106.00
Total Awarded Amount to Date: $301,106.00
Funds Obligated to Date: FY 2000 = $75,493.00
FY 2001 = $75,127.00
FY 2002 = $73,169.00
FY 2003 = $77,317.00
Recipient Sponsored Research Office: Georgia Tech Research Corporation
926 DALNEY ST NW
ATLANTA
GA US 30318-6395
(404)894-4819
Sponsor Congressional District: 05
Primary Place of Performance: Georgia Tech Research Corporation
926 DALNEY ST NW
ATLANTA
GA US 30318-6395
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): EMW9FC8J3HN4
Parent UEI: EMW9FC8J3HN4
NSF Program(s): HUMAN COMPUTER INTER PROGRAM
Primary Program Source: app-0100
app-0101
app-0102
app-0103
Program Reference Code(s): 1045, 1187, 9216, HPCC
Program Element Code(s): 684500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This is the first year of funding of a 4-year continuing award. The objective of this research is to lay the groundwork for machines capable of accurate recognition and realistic synthesis of facial expressions. The approach is to develop and validate a dynamic spatio-temporal representation of facial movements. To this end, the PI will develop and evaluate methodologies for robust analysis and modeling of facial movements from video, so as to allow unencumbered measurement in noninvasive interfaces.

Facial activity is inherently dynamic; automatic recognition of expressions from video and realistic animation of facial motion therefore require a detailed dynamic representation of facial action. The PI will develop an analysis-synthesis framework in which model-based synthesis is used to analyze facial movement and to construct a spatio-temporal 3D representation of facial action that encodes the dynamics inherent in facial motion. He will then evaluate the representation by analyzing videos of many human subjects making facial expressions, and by testing the synthesis of realistic facial motions.

This research will provide a detailed scientific understanding of how people make facial expressions and how recognition of facial expressions is possible. It will thereby lay the foundation for systems that recognize human expressions and emotions, read lips, generate realistic facial animations, and code facial movements, forming a vital component of the next generation of human-machine interfaces.
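The core idea of an analysis-by-synthesis framework, as described above, is to recover the parameters of a generative model by searching for the synthetic output that best matches an observation. The toy sketch below illustrates that loop on a 1-D "expression trajectory"; the model, parameter names, and grid search are illustrative assumptions, not the PI's actual formulation.

```python
# Hypothetical sketch of an analysis-by-synthesis loop.
# The toy 1-D "expression model" here is an illustrative assumption,
# not the actual spatio-temporal representation proposed in the award.

def synthesize(amplitude, peak_time, n_frames=20):
    """Toy generative model: a triangular activation curve for one
    facial action, parameterized by amplitude and time of peak."""
    return [amplitude * max(0.0, 1.0 - abs(t - peak_time) / peak_time)
            for t in range(1, n_frames + 1)]

def error(observed, synthesized):
    """Sum of squared differences between the two trajectories."""
    return sum((o - s) ** 2 for o, s in zip(observed, synthesized))

def fit(observed, amp_grid, peak_grid):
    """Analysis by synthesis: grid-search for the model parameters
    whose synthetic trajectory best matches the observed one."""
    best = min((error(observed, synthesize(a, p)), a, p)
               for a in amp_grid for p in peak_grid)
    return best[1], best[2]

# Recover known parameters from a "video-derived" trajectory.
observed = synthesize(0.8, 10)
amp, peak = fit(observed,
                amp_grid=[0.2, 0.4, 0.6, 0.8, 1.0],
                peak_grid=[5, 10, 15])
print(amp, peak)  # → 0.8 10
```

In the actual research the generative model would be a 3D face model driven by the spatio-temporal representation, and the fit would be against video frames rather than a 1-D curve; the structure of the loop, synthesize, compare, refine, is the same.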

