Project Presentation
University
REAL-TIME FACIAL
EMOTION RECOGNITION
Muskan Shangari-21BCS11144
Disket Angmo-21BCS11243
Saurav-21BCS7660
Pushpendra Sharma-21BCS11206
Overview
Introduction
Problem
Objectives
Literature review
Methodology
Implementation
Results
Conclusion
Future work
References
Introduction
This project focuses on abstract facial emotion recognition using Convolutional
Neural Networks (CNNs). By leveraging deep learning techniques, the goal is to
develop a model capable of detecting and classifying emotions expressed through
facial expressions. The proposed approach aims to harness the power of CNNs in
capturing relevant features and achieving accurate emotion recognition.
Problems
1) Existing facial emotion recognition systems lack robustness and
generalization. The proposed project addresses this by developing a
CNN-based approach that can effectively recognize and classify abstract
emotions expressed through facial expressions.
2) The limited availability of diverse, well-balanced datasets for abstract
facial emotion recognition hinders the development and evaluation of
accurate models. This project seeks to overcome this challenge by curating
or augmenting existing datasets to ensure a comprehensive representation
of various emotions, demographics, and facial expressions.
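One practical way to augment an unbalanced dataset, as proposed above, is to apply label-preserving transforms such as horizontal flips and small brightness shifts. A minimal NumPy sketch follows; the `augment` helper and the 48x48 crop size are illustrative assumptions, not part of the project code:

```python
import numpy as np

def augment(img, rng):
    """Simple label-preserving augmentations for a facial-emotion dataset:
    random horizontal flip and a small brightness shift.
    `img` is a float32 grayscale face crop with pixel values in [0, 1]."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]              # flipping a face keeps the emotion label
    shift = rng.uniform(-0.1, 0.1)      # small global brightness change
    return np.clip(out + shift, 0.0, 1.0)

rng = np.random.default_rng(0)
face = rng.random((48, 48)).astype(np.float32)  # stand-in for a real face crop
aug = augment(face, rng)
print(aug.shape)  # (48, 48)
```

Each augmented copy can be added to the training set under the same emotion label, increasing effective dataset size without new collection.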
Literature Review
The literature review reveals that CNN-based approaches have shown promising results in
facial emotion recognition tasks, highlighting the importance of feature extraction, dataset
diversity, and model optimization techniques in achieving accurate and robust emotion
classification.
Example:
"Real-time emotion recognition using multimodal fusion of facial expressions and
speech" by Tao et al. (2020): This work proposed a multimodal fusion approach to real-time
emotion recognition by combining facial expressions and speech signals.
Methodology
1) Preprocessing the facial images by resizing and normalizing them for
input to the CNN model.
2) Designing a deep learning architecture with convolutional layers to
extract relevant features from the facial images.
3) Training the CNN model using a large dataset of labeled facial
expressions and optimizing it through backpropagation and gradient
descent.
4) Evaluating the trained model's performance on a held-out test set to
measure how accurately it detects expressions.
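The feature-extraction core of steps 1 and 2 can be illustrated with a minimal NumPy sketch of one convolutional stage (normalize, convolve, ReLU, 2x2 max-pool). The `conv2d` and `max_pool2` helpers and the hand-written edge kernel are illustrative stand-ins for real CNN layers, whose kernels would be learned by backpropagation as in step 3:

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation: slides `kernel` over `img`."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool2(x):
    """2x2 max pooling (assumes even spatial dimensions)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.random.default_rng(1).random((48, 48)).astype(np.float32)
img = (img - img.mean()) / (img.std() + 1e-8)           # step 1: normalize
edge = np.array([[1., 0., -1.]] * 3, dtype=np.float32)  # vertical-edge kernel
features = max_pool2(relu(conv2d(img, edge)))           # step 2: conv -> ReLU -> pool
print(features.shape)  # (23, 23)
```

A real model stacks several such stages with many learned kernels, then flattens the feature maps into dense layers that output emotion-class probabilities.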
Dataset
The dataset for this project will consist of diverse facial images
expressing various abstract emotions across different demographics.
Method
The method comprises preprocessing facial images, designing and training a
CNN to extract and classify emotion features, and evaluating the trained
model on a held-out test set.
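The preprocessing step, resizing a face crop and scaling its pixels, can be sketched in NumPy. The nearest-neighbour `resize_nearest` helper and the 48x48 target size are assumptions for illustration; a production pipeline would typically use a library resize with interpolation:

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize of a 2-D grayscale image to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return img[rows][:, cols]

def preprocess(img, size=48):
    """Resize a grayscale face crop and scale 8-bit pixels to [0, 1]."""
    img = resize_nearest(img.astype(np.float32), size)
    return img / 255.0

# Stand-in for a detected face crop of arbitrary size with 8-bit pixels.
face = np.random.default_rng(2).integers(0, 256, (120, 96)).astype(np.float32)
x = preprocess(face)
print(x.shape)  # (48, 48)
```

The normalized crop `x` is then fed to the CNN for feature extraction and classification.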
References
Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face
and emotion. Journal of Personality and Social Psychology, 17(2), 124-129.
Zhang, Z., Song, Y., & Qi, H. (2020). Advances in deep learning for facial
expression recognition: a comprehensive survey. IEEE Transactions on
Affective Computing, 11(4), 705-727.
Liu, X., Wang, Y., & Gong, Y. (2021). Deep emotional cognition: An
investigation. ACM Computing Surveys (CSUR), 54(2), 1-38.
Thank you.