Face Emotion Recognition System Using Deep Learning
Dept. of Computer Science and Engineering, Government College of Engineering Srirangam, Tamilnadu, India
Abstract - This paper presents an enhanced system for real-time facial emotion detection, aiming to improve efficiency and accuracy through deep learning. The proposed approach utilizes VGG-19 transfer learning, a pre-trained convolutional neural network (CNN) architecture known for its depth and strong performance in image classification. VGG-19's pre-trained weights contribute to improved efficiency compared to simpler CNNs, allowing for effective feature extraction and classification of emotional expressions in real time. This approach has the potential to benefit various applications in human-computer interaction and psychology by enabling accurate and timely emotion recognition.

Key Words: Facial Emotion Detection, Deep Learning, VGG-19 Transfer Learning, Real-Time Emotion Recognition, Efficiency, Human-Computer Interaction, Psychology
1. INTRODUCTION

Human communication encompasses speech, gestures, and emotions, all vital for interpersonal interactions. AI systems capable of understanding human emotions are crucial, especially in healthcare and e-learning, where emotional understanding is paramount. Traditional emotion detection methods often fall short in real-time scenarios, necessitating models that can continuously interpret facial expressions for dynamic emotional assessment.
This paper proposes a real-time facial emotion recognition model leveraging advances in AI and computer vision. The model aims to enhance human-computer interactions across diverse applications by dynamically detecting and responding to emotions. Automatic Facial Expression Recognition (FER) has gained traction, driven by its potential in human-computer interaction and healthcare. While Ekman's discrete categorization model is widely used, its limitations in handling spontaneous expressions prompt the need for more comprehensive approaches.
Our focus is on categorical facial expression classification using the VGG-19 model, known for its depth and performance in image tasks. By employing pre-trained weights, our system achieves efficiency and accuracy for real-time emotion recognition. This work explores the potential of VGG-19 transfer learning for facial emotion recognition while remaining adaptable to other models with suitable data.

1.1 RELATED WORK

Sharmeen M. Saleem Abdullah and Adnan Mohsin Abdulazeez [13] reviewed the latest FER research, identifying numerous recently proposed CNN architectures and surveying databases of photographs, collected both in the real world and in laboratories, for detecting human emotions.

Hussein, E. S., Qidwai, U. and Al-Meer, M. [4] recommended a CNN model to understand face emotions along three continuum emotions (negative, neutral, and positive). The model uses residual blocks and depth-separable convolutions inspired by Xception to reduce the parameter count to 33k. They use a convolutional FER network for emotional stability identification; the CNN learns features from the input images through convolution operations, removing the need for manual feature extraction. The proposed model achieves 81% overall precision on unseen data, detecting negative and positive emotions with precisions of 87% and 85%, respectively. However, the accuracy of neutral emotion detection is just 51%.
Jiang, P., Liu, G., Wang, Q. and Wu, J. [5] introduced a new loss function, the advanced softmax loss (ASL), to counteract imbalanced training expressions. The proposed loss guarantees that every class has a level playing field by using fixed (unlearnable) weight parameters of the same magnitude, equally allocated in angular space. Their research shows that FER methods trained with the proposed loss outperform specific state-of-the-art FER methods, and the loss can be used on its own or combined with other loss functions. In summary, detailed studies on the FER2013 and Real-world Affective Faces (RAF) databases have shown that ASL is considerably more precise and effective than many state-of-the-art approaches.
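To make the fixed-weight idea concrete, the following is a minimal TensorFlow sketch of a softmax loss whose class anchors are fixed and equally allocated in angular space. The two-dimensional embedding, seven-class setup, and scale factor are illustrative assumptions chosen for exposition, not the exact formulation of [5].

    import numpy as np
    import tensorflow as tf

    NUM_CLASSES = 7   # illustrative: seven basic emotion categories
    FEATURE_DIM = 2   # toy 2-D embedding so "equally allocated in angular
                      # space" is easy to see; real models use more dimensions

    # Fixed (unlearnable) class weight vectors, evenly spaced on the unit circle.
    angles = 2.0 * np.pi * np.arange(NUM_CLASSES) / NUM_CLASSES
    FIXED_WEIGHTS = tf.constant(
        np.stack([np.cos(angles), np.sin(angles)], axis=0), dtype=tf.float32
    )  # shape (FEATURE_DIM, NUM_CLASSES)

    def fixed_weight_softmax_loss(features, labels, scale=16.0):
        # Cosine similarity between unit-norm embeddings and the fixed anchors;
        # `scale` is an arbitrary illustrative temperature, not a value from [5].
        feats = tf.math.l2_normalize(features, axis=1)
        logits = scale * tf.matmul(feats, FIXED_WEIGHTS)
        return tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(
                labels, logits, from_logits=True
            )
        )

Because the anchors cannot drift toward majority classes during training, every class keeps the same angular share of the embedding space, which is the mechanism the authors credit for handling imbalanced expression data.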
2. METHODOLOGIES

This approach utilizes the VGG19 architecture for facial emotion recognition by preprocessing the dataset, training the model, and validating it for real-time deployment. It also includes implementing a user interface for interaction and feedback loops for continual improvement; a sketch of the preprocessing step follows.
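As one way the preprocessing stage could look, the sketch below loads labeled face images and applies light augmentation before VGG19's canonical input preprocessing. The directory layout and augmentation settings are assumptions; the paper does not specify them.

    import tensorflow as tf

    IMG_SIZE = (224, 224)   # VGG19's expected input resolution

    # Hypothetical layout: "data/train/<emotion>/*.jpg", one folder per class.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=32
    )

    # Light augmentation to help generalization across poses and lighting.
    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.05),
        tf.keras.layers.RandomZoom(0.1),
    ])

    # VGG19's standard preprocessing (mean subtraction, channel reordering).
    train_ds = train_ds.map(
        lambda x, y: (tf.keras.applications.vgg19.preprocess_input(
            augment(x, training=True)), y)
    )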
CNNs, or Convolutional Neural Networks, are crucial in deep learning and particularly effective in computer vision tasks. They automatically learn relevant features from raw input data, making them ideal for image and video recognition. Structured to mimic human visual processing, they stack convolutional and pooling layers that extract progressively more abstract features from an image.
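The following is a minimal sketch of the VGG19 transfer-learning setup described above: the ImageNet-pretrained convolutional base is frozen and a new classification head is trained for the emotion categories. The head architecture, learning rate, and class count are illustrative assumptions, as the paper does not report these hyperparameters.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 7  # e.g., angry, disgust, fear, happy, sad, surprise, neutral

    # VGG19 convolutional base with ImageNet weights; the original classifier
    # head is dropped and replaced with a small head for the emotion classes.
    base = tf.keras.applications.VGG19(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3)
    )
    base.trainable = False  # freeze pre-trained features for the first stage

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    # model.fit(train_ds, validation_data=val_ds, epochs=20)  # data-dependent

Freezing the base lets the small head train quickly on limited emotion data; selected top convolutional blocks can later be unfrozen for fine-tuning at a lower learning rate.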
Fig.3. Accuracy and Loss Graph
Fig.4. Confusion matrix
Fig.5. Classification Report

3.4 Deployment:

The facial emotion recognition project is deployed via a web-based platform built on the Flask framework and integrated with React and OpenCV. Users access the application through a web browser, where they can upload images for emotion analysis. The Flask backend handles image processing tasks using the trained VGG19 model, while the frontend displays results such as the predicted emotion label.
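A minimal sketch of such a Flask prediction endpoint is shown below. The model filename, route name, and label ordering are illustrative assumptions, since the paper does not list them; the React frontend would simply POST an image to this route and render the JSON response.

    import cv2
    import numpy as np
    import tensorflow as tf
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    model = tf.keras.models.load_model("vgg19_emotion.h5")  # hypothetical path
    EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

    @app.route("/predict", methods=["POST"])
    def predict():
        # Decode the uploaded file into a BGR image, then convert to RGB.
        raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
        img = cv2.cvtColor(cv2.imdecode(raw, cv2.IMREAD_COLOR),
                           cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (224, 224)).astype("float32")
        img = tf.keras.applications.vgg19.preprocess_input(img)
        probs = model.predict(img[np.newaxis, ...])[0]  # batch of one
        return jsonify({"emotion": EMOTIONS[int(np.argmax(probs))],
                        "confidence": float(np.max(probs))})

    if __name__ == "__main__":
        app.run(debug=True)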
4. CONCLUSIONS

Implementing face emotion recognition using deep learning with the VGG19 architecture presents a promising approach for accurately detecting and classifying emotions from facial images. By following a systematic approach that involves data collection, preprocessing, model architecture selection, transfer learning, training, evaluation, and deployment, it is possible to develop a robust and effective emotion recognition system. Transfer learning with pre-trained VGG19 models leverages knowledge learned from large-scale image classification tasks, which can significantly enhance the performance of the emotion recognition model, especially when training data is limited. Throughout the development process, careful attention should be paid to data preprocessing, augmentation, hyperparameter tuning, and model evaluation to ensure the model generalizes well to unseen data and accurately predicts emotions across various facial expressions and environmental conditions. Ultimately, the successful deployment of a face emotion recognition system opens up possibilities for applications in diverse fields, including human-computer interaction, healthcare, entertainment, and security, contributing to advancements in technology and enhancing user experiences.
REFERENCES

[1] Sharmeen M. Saleem Abdullah and Adnan Mohsin Abdulazeez (2021). Facial Expression Recognition Based on Deep Learning Convolution Neural Network: A Review. Journal of Soft Computing and Data Mining, 2(1), 53-65.

[3] Jiang, P., Liu, G., Wang, Q. and Wu, J. (2020). Accurate and Reliable Facial Expression Recognition Using Advanced Softmax Loss with Fixed Weights. IEEE Signal Processing Letters, 27, 725-729.