
NSF Org: IIS Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: April 5, 2000
Latest Amendment Date: June 5, 2003
Award Number: 9984847
Award Instrument: Continuing Grant
Program Manager: Ephraim Glinert, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2000
End Date: June 30, 2005 (Estimated)
Total Intended Award Amount: $301,106.00
Total Awarded Amount to Date: $301,106.00
Funds Obligated to Date: FY 2001 = $75,127.00; FY 2002 = $73,169.00; FY 2003 = $77,317.00
History of Investigator:
Recipient Sponsored Research Office: 926 DALNEY ST NW, ATLANTA, GA, US 30318-6395, (404)894-4819
Sponsor Congressional District:
Primary Place of Performance: 926 DALNEY ST NW, ATLANTA, GA, US 30318-6395
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): HUMAN COMPUTER INTER PROGRAM
Primary Program Source: app-0101, app-0102, app-0103
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
This is the first year of funding of a 4-year continuing award. The objective of this research is to lay the groundwork for machines capable of accurate recognition and realistic synthesis of facial expressions. The approach is to develop and validate a dynamic spatio-temporal representation of facial movements. To this end, the PI will develop and evaluate methodologies for robust analysis and modeling of facial movements from video (so as to allow for unencumbered measurement in noninvasive interfaces). Facial activity is inherently dynamic; therefore, automatic recognition of expressions from video and realistic animation of facial motion require a detailed dynamic representation of facial action. The PI will develop an analysis-synthesis framework wherein model-based synthesis is used to analyze facial movement and to construct a spatio-temporal 3D representation of facial action that encodes the dynamics inherent in facial motion. He will then evaluate the representation by analyzing videos of many human subjects making facial expressions, and by testing the synthesis of realistic facial motions.

This research will provide a detailed scientific understanding of how people make facial expressions and how recognition of facial expressions is possible. It will thus lay the foundation for systems that recognize human expressions and emotions, read lips, generate realistic facial animations, and code facial movements, which will form a vital component in the next generation of human-machine interfaces.
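The analysis-by-synthesis idea described above can be illustrated with a minimal sketch: a synthesis model renders a face shape from expression parameters, and analysis recovers those parameters from an observed shape by minimizing the synthesis error. This sketch assumes a toy linear blendshape model (neutral shape plus weighted expression bases) fit by gradient descent; the abstract does not specify the PI's actual representation, so every model and function here is illustrative only.

```python
# Illustrative analysis-by-synthesis loop (toy linear blendshape model;
# NOT the representation developed in this award).

def synthesize(neutral, bases, weights):
    """Render a shape as the neutral shape plus a weighted sum of bases."""
    return [n + sum(w * b[i] for w, b in zip(weights, bases))
            for i, n in enumerate(neutral)]

def analyze(observed, neutral, bases, steps=2000, lr=0.05):
    """Recover expression weights by gradient descent on synthesis error."""
    weights = [0.0] * len(bases)
    for _ in range(steps):
        synth = synthesize(neutral, bases, weights)
        residual = [s - o for s, o in zip(synth, observed)]
        for j, b in enumerate(bases):
            # Gradient of the squared reconstruction error w.r.t. weight j.
            grad = sum(2 * r * b[i] for i, r in enumerate(residual))
            weights[j] -= lr * grad
    return weights

# Toy data: a 3-point "face" with two expression bases.
neutral = [0.0, 0.0, 0.0]
bases = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
observed = synthesize(neutral, bases, [0.7, -0.3])
recovered = analyze(observed, neutral, bases)
```

Run per frame of a video, the recovered weight trajectories would form a dynamic description of facial action; the actual research concerns building and validating a far richer spatio-temporal 3D representation.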