
Facial Emotion Recognition using Transfer Learning

Prof. Ichhanshu Jaiswal
Computer Science and Engineering (Data Science)
Vidyavardhini’s College of Engineering
Vasai, India
[email protected]

Piyush Rajaram Sahani
Computer Science and Engineering (Data Science)
Vidyavardhini’s College of Engineering
Vasai, India
[email protected]

Suvith Sunil Shetty
Computer Science and Engineering (Data Science)
Vidyavardhini’s College of Engineering
Vasai, India
[email protected]

Viraj Vijay Wadke
Computer Science and Engineering (Data Science)
Vidyavardhini’s College of Engineering
Vasai, India
[email protected]

Abstract -- Our study offers a thorough investigation of Facial Emotion Recognition (FER) for content moderation in the dynamic digital environment. An advanced mosaic of emotions and expressions has been made possible by the rapid expansion of online content in the form of films, photos, and live camera feeds. With the help of the MobileNetV2 pre-trained model and transfer learning techniques, we use the FER-2013 dataset to address the complex problem of content moderation in this setting. Our main objective is to create a tool that encourages safer and more compassionate online interactions by alerting people when there is a contradiction between their spoken words and the emotions shown on their faces. The study focuses on seven human emotions: anger, sadness, disgust, fear, surprise, neutrality, and happiness. In this comprehensive investigation we explore the confluence of cutting-edge technology, data analytics, and emotional intelligence. By linking these realms, we hope to create a digital environment that is more sensitive to human emotions. Our initiative aims to provide users with a virtual environment where they can communicate while feeling more at ease, empathic, and understanding. In doing so, we hope to contribute to the ongoing transformation of the digital environment into a space where content and interactions are more emotionally intelligent and in tune with one another.

Keywords- Facial Emotion Recognition (FER), MobileNetV2 Pre-trained Model, FER-2013 Dataset, Emotional Intelligence, Computer Vision, Artificial Intelligence, Transfer Learning, Content Moderation.

I. INTRODUCTION

A key component of computer vision is facial expression recognition, a cutting-edge technique for reading and understanding human emotions from facial expressions and features. In order to categorize emotions like happiness, sorrow, rage, and more, this multidimensional project requires the gathering of real-time or static facial images, preprocessing of these data, and application of machine learning methods, notably deep neural networks. However, most of these models focus only on the local features captured by filters on individual facial expressions, without considering the local features from multiple facial expressions except through a global loss function.

The project's scope includes crucial stages like data collection, model training, and thorough evaluation. Its uses are wide-ranging and include developing emotion-aware user interfaces, building tools to evaluate mental health, and facilitating human-computer interaction systems. The importance of accurate facial expression identification and content regulation is made clear by the exponential growth in the consumption of video content and online interactions. In order to address these issues and create a more secure and emotionally intelligent online ecosystem, our project makes use of the capabilities of computer vision and artificial intelligence. A collection of 108 facial expressions taken from TV shows and annotated by human coders for further analysis was used for the initial empirical analysis [13].

Statistics highlight how important video content has become as a dominant force. For instance, video posts receive 48% more views than content with static images, and the average individual watches 17 hours of internet video per week [10]. These statistics highlight the significant influence that videos have on social media platforms, making it crucial for content producers to ensure that the emotions portrayed are consistent with their goals. Before content is made available on the platform, our project acts as an ex-ante filter, a preventative step. It provides automatic content screening through algorithmic predictions, boosting the platform's ability to ensure user safety and decorum.

Facial Emotion Recognition (FER) is the most effective method for moderating user behavior in the contemporary environment of sophisticated and flexible socio-technical systems, particularly on social media platforms. Our solution fits the changing dynamics of digital communication and engagement by enabling the creator's proactive participation in guaranteeing content quality and adherence to community standards.

Fig. 1. Different emotions in the FER-2013 dataset [11]

II. EXISTING SYSTEM
1] FaceEmo App:

The FaceEmo App[19] is a smartphone application that analyzes and detects emotions in real time using cutting-edge face recognition technology. It offers an intuitive user interface for deciphering and visualizing the emotions conveyed through facial expressions.

Advantages:
• Emotion detection in real time.
• Friendly interface for users.
• Appropriate for many uses, such as mental health evaluation and market research.

Disadvantages:
• It needs a front-facing camera in order to function properly.
• Performance can change depending on the lighting conditions.

2] EmotionAI SDK:

A useful tool for developers looking to include facial emotion detection in their apps is the EmotionAI Software Development Kit (SDK)[8]. Among the many functions it provides are sentiment analysis, emotion tracking, and real-time feedback.

Advantages:
• Highly adaptable and versatile for developers.
• Offers sentiment analysis for deeper emotional understanding.

Disadvantages:
• Requires development and coding abilities to integrate.
• Commercial use may be subject to licensing fees.

3] SmileSense:

SmileSense is a cloud-based emotion recognition service that can be incorporated into a number of different platforms and apps. It provides emotion-related data with good accuracy by analyzing facial expressions using deep learning algorithms.

Advantages:
• Versatile and scalable for a broad range of uses.
• Precise identification of emotions through deep learning.

Disadvantages:
• Cloud-based services require internet connectivity.
• For extended use, subscription fees may be necessary.

4] Moodify:

Moodify[20] is an AI-powered emotion recognition system intended for use in customer care and corporate settings. It offers real-time emotion analysis to improve customer interactions and to help businesses understand customer sentiment.

Advantages:
• Enhanced client support and tailored exchanges.
• Data insights that help firms make informed decisions.

Disadvantages:
• Designed with commercial use in mind, so it may not be appropriate for private users.
• Needs to be integrated with existing systems.

5] Affectiva:

Affectiva[21] is a prominent player in the field of emotion recognition. Their software and APIs offer emotion analysis for a wide variety of applications, from entertainment to healthcare.

Advantages:
• Proven and widely used in the industry.
• Comprehensive emotion recognition capabilities.

Disadvantages:
• Licensing and subscription costs may be a consideration.
• Customization may be required for specific use cases.

III. LITERATURE SURVEY

Wafa Mellouk et al. [1] discussed the research field of automatic emotion recognition based on facial expression. It is an active field that has been applied in several areas such as safety, health, and human-machine interfaces. Researchers aim to develop techniques to interpret and code facial expressions, and to extract these features so that computers can build better prediction models. With the success of deep learning, various architectures of this technique have been exploited to achieve better performance.

Advantages:
• The paper provides a comprehensive review of recent works on automatic facial emotion recognition (FER) via deep learning.
• It explores the contribution of different types of architectures to achieving better performance.
• It serves as a guide for researchers by reviewing recent works and providing insights for improvements in this field.
• It includes comparisons of different methods and the results obtained.
• The paper highlights databases available for FER research.

Disadvantages:
• Facial recognition systems may exhibit bias, especially against people with darker skin tones, leading to false positives or negatives and perpetuating discrimination.

Yahui Nan et al. [2] provided a detailed explanation of a study that aims to improve facial expression recognition using a lightweight network known as A-MobileNet. It discusses the techniques used to enhance the model's ability to extract fine-grained features of facial expressions and outlines the experiment setup, including the datasets used, the model architecture, and the performance of the model.

Advantages:
• Lightweight model.
• Improved accuracy.

Disadvantages:
• Evaluated only on limited datasets (FERPlus and RAF-DB).

Smith K. Khare et al. [3] provided a comprehensive and systematic review of emotion recognition techniques of the current decade. It covers emotion recognition using physical and physiological signals. Physical signals involve speech and facial expression, while physiological signals include electroencephalogram, electrocardiogram, galvanic skin response, and eye tracking. The paper provides an introduction to various emotion models, stimuli used for emotion elicitation, and the background of existing automated emotion recognition systems. It also provides a detailed analysis of existing studies and available datasets for emotion recognition. The review also presents potential challenges in the existing literature and directions for future research.

Advantages:
• Comprehensive review with identification of challenges and future directions: it not only summarizes existing studies but also identifies potential challenges in the literature and suggests directions for future research.

Disadvantages:
• Limited detail on methodologies.

Xuemei Wu et al. [4] discussed a framework called Facial Expression Recognition with Cross-Hierarchy Contrast (FER-CHC) that is designed to enhance the performance of Convolutional Neural Network (CNN) based models in recognizing different facial expressions. The framework uses a contrastive learning mechanism to capture crucial features of facial expressions, improving the global representation of these expressions. The accuracy and effectiveness of the FER-CHC model are evaluated through experiments on six popular facial expression recognition datasets.

Advantages:
• Improved FER performance and global feature representation; this global perspective can be advantageous in situations where fine-grained analysis is required.

Disadvantages:
• Complexity and computational cost are high, as multiple datasets are combined.

Aya Hassouneh et al. [5] worked on the development of a real-time emotion recognition system using facial expressions and EEG signals based on machine learning and deep neural network methods. The paper discusses the process of collecting data, the system's design and functionality, and its testing and validation. The system was developed to recognize six basic emotional expressions (happiness, sadness, anger, fear, disgust, and surprise) and to aid people with physical disabilities and autism in recognizing the feelings of others. The system achieved a maximum recognition rate of 99.81% using facial landmarks and 87.25% using EEG signals. The research suggests that further improvements could be made by collecting more data and extracting more features from the EEG signals.

Advantages:
• The results show high recognition rates: 99.81% using facial landmarks and 87.25% using EEG signals.

Disadvantages:
• The evaluation was small-scale: only 19 subjects, with an 87.25% emotional detection rate obtained through threefold cross-validation.

IV. PROPOSED APPROACH

Dataset:

• The majority of the datasets used to identify emotions are openly accessible, but most have already been exploited to their fullest extent, which yields the highest categorization accuracy [8].
• The FER-2013 dataset, which comprises a wide variety of facial expressions, will be used to train the proposed system. It will be able to distinguish between a total of seven emotions: anger, sadness, disgust, fear, surprise, neutrality, and happiness.

Algorithm:

• The MobileNetV2 model is used as the base model for facial expression recognition. The code summarizes the architecture of the MobileNetV2 model.
• The code extracts the input and output layers of the MobileNetV2 model, then adds additional layers to the output to customize the model for the specific task [2]. These additional layers include dense layers with activation functions. The new model is defined, taking the base input and producing the final output after passing through the additional layers.
• The new model is compiled with an optimizer (Adam), a loss function (sparse categorical crossentropy), and accuracy as the metric for evaluation.

User Interface:

• Both input options are easy to use because of the system's user-friendly interface. Users can choose the real-time facial expression detector or upload videos and images, and get quick feedback on the emotions found in them.

Fig. 2. Workflow of the System

V. ANALYSIS

The use of a pre-trained MobileNetV2[22] model significantly improved the model's performance: without pre-training, the model reached an accuracy of only 44% after 5 epochs.
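The transfer-learning pipeline described in the Algorithm subsection above, a MobileNetV2 base with an added dense head, compiled with Adam and sparse categorical crossentropy, can be sketched in Keras roughly as follows. The 96x96 input resolution, the frozen backbone, and the 128-unit dense layer are illustrative assumptions rather than the paper's exact configuration (the demo call passes `weights=None` to skip the ImageNet download; `weights="imagenet"` gives the pre-trained backbone used for transfer learning):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7   # anger, sadness, disgust, fear, surprise, neutrality, happiness
IMG_SIZE = 96     # assumed resolution; FER-2013's 48x48 frames would be resized

def build_fer_model(weights="imagenet"):
    """Build a MobileNetV2-based transfer-learning model for FER."""
    # MobileNetV2 backbone without its ImageNet classification head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(IMG_SIZE, IMG_SIZE, 3),
        include_top=False,
        weights=weights,
    )
    base.trainable = False  # freeze the backbone so only the new head trains

    # Customize the output with additional dense layers, as described above.
    inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
    x = layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(inputs)  # [0,255] -> [-1,1]
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation="relu")(x)  # illustrative head size
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)

    # Adam optimizer, sparse categorical crossentropy loss, accuracy metric.
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# One dummy frame checks the wiring end to end; weights=None avoids the
# ImageNet download, while weights="imagenet" enables actual transfer learning.
model = build_fer_model(weights=None)
frame = (np.random.rand(1, IMG_SIZE, IMG_SIZE, 3) * 255).astype("float32")
probs = model.predict(frame, verbose=0)
print(probs.shape)  # (1, 7)
```

In practice the model would then be fit on FER-2013 images resized to the assumed input size, with integer emotion labels (0-6) matching the sparse categorical crossentropy loss.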
With pre-training, in contrast, the model achieved an accuracy of approximately 75% after just one epoch, consistently maintaining this level over five epochs. This underscores the effectiveness of transfer learning in boosting the facial expression recognition model's accuracy and convergence speed.

Fig. 3. Accuracy of the model without transfer learning

Fig. 4. Accuracy of the model with transfer learning

VI. CONCLUSION

This study demonstrates the value of transfer learning, especially when the MobileNetV2 model is used. The experiment has shown that we can attain outstanding performance in the difficult field of facial expression detection by utilizing pre-trained weights, highlighting the enormous potential of transfer learning in deep learning applications. On the FER-2013 dataset, our carefully developed model has consistently demonstrated high accuracy in the recognition of facial expressions, displaying its skill in handling a range of emotions while preserving computational efficiency. This combination of precision and efficiency is crucial to guaranteeing usability and scalability in the real world.

The project's impacts cover a wide range of real-world uses. The results show its potential to improve digital experiences and contribute to a more emotionally aware online environment, from boosting human-computer interaction to exploring emotion analysis and sentiment analysis for content moderation. Future work will focus on investigating and implementing cutting-edge audio identification methods such as audio sentiment analysis and natural language processing (NLP). In order to handle audio data and build a solid foundation for multi-modal content analysis, we plan to use cutting-edge machine learning algorithms.

Our project will provide a more thorough and contextually aware approach to content filtering by including sound characteristics, furthering the aim of improving the security and emotional intelligence of the online environment. As consumers interact with various media and communication channels, this widened reach is in line with the changing demands of digital content management. Our technology is positioned to be even more effective at spotting emotional inconsistencies and promoting respectful online interactions after the incorporation of audio-based capabilities.

REFERENCES

[1] Mellouk, W., Handouzi, W. Facial emotion recognition using deep learning: review and insights. ©2021. Elsevier Ltd.
[2] Nan, Y., Ju, J., Hua, Q., Zhang, H., Wang, B. A-MobileNet: An approach to facial expression recognition. ©2022. Elsevier Ltd.
[3] Khare, S. K., Blanes-Vidal, V., Nadimi, E. S., Acharya, U. R. Emotion recognition and artificial intelligence: A systematic review (2014–2023) and research recommendations. ©2024. Elsevier Ltd.
[4] Wu, X., He, J., Huang, Q., Huang, C., Zhu, J., Huang, X., Fujita, H. FER-CHC: Facial expression recognition with cross-hierarchy contrast. ©2023. Elsevier Ltd.
[5] Hassouneh, A., Mutawa, A. M., Murugappan, M. Development of a Real-Time Emotion Recognition System Using Facial Expressions and EEG based on machine learning and deep neural network methods. ©12 June 2020. Elsevier Ltd.
[6] Mao, J., Xu, R. POSTER V2: A simpler and stronger facial expression recognition network. ©12 Feb 2020. Elsevier Ltd.
[7] Zhou, F., Kong, S., Fowlkes, C. C. Fine-grained facial expression analysis using dimensional emotion model. ©2020. Elsevier Ltd.
[8] https://www.morphcast.com/sdk/; Han, B., Yoo, C-H., Kim, H-W., Yoo, J-H. Deep emotion change detection via facial expression analysis. ©2023. Elsevier Ltd.
[9] Zeng, D., Lin, Z., Yan, X., Liu, Y., Wang, F., Tang, B. Face2Exp: Combating data biases for facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 20291–20300.
[10] https://sproutsocial.com/insights/social-media-video-statistics/
[11] https://images.app.goo.gl/kNSomhdXfwt4ruXR7
[12] Yeasin, M., Bullot, B., Sharma, R. Recognition of facial expressions and measurement of levels of interest from video. IEEE Trans. Multimedia 8 (3) (2006) 500–508.
[13] Khan, A. R. Facial Emotion Recognition Using Conventional Machine Learning and Deep Learning Methods: Current Achievements, Analysis and Remaining Challenges. Information, vol. 13, no. 6, p. 268, May 2022, doi: 10.3390/info13060268.
[14] Santhosh, S., Rajashekararadhya, S. V. A Design of Face Recognition Model with Spatial Feature Extraction using Optimized Support Vector Machine. In 2023 2nd International Conference for Innovation in Technology (INOCON), Bangalore, India, 2023, pp. 1-8, doi: 10.1109/INOCON57975.2023.10101149.
[15] Ullah, S., Jan, A., Khan, G. M. Facial Expression Recognition Using Machine Learning Techniques. In 2021 International Conference on Engineering and Emerging Technologies (ICEET), Istanbul, Turkey, 2021, pp. 1-6, doi: 10.1109/ICEET53442.2021.9659631.
[16] Zhang, Y., Zhang, F., Guo, L. Face Recognition Based on Principal Component Analysis and Support Vector Machine Algorithms. In 2021 40th Chinese Control Conference (CCC), Shanghai, China, 2021, pp. 7452-7456, doi: 10.23919/CCC52363.2021.9550727.
[17] Mollahosseini, A., Chan, D., Mahoor, M. H. Going deeper in facial expression recognition using deep neural networks. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 2016, pp. 1-10.
[18] https://apps.apple.com/us/app/facetime/id1110145091
[19] https://devpost.com/software/moodify-teg0r6
[20] https://www.affectiva.com/
[21] https://keras.io/api/applications/mobilenet
