
Volume 10, Issue 1, January – 2025 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165 https://doi.org/10.5281/zenodo.14613833

Facial Expression-Based Emotion Detection for Adaptive Teaching in Educational Environments

Sathya C.¹; Dhatchana P.²
¹Assistant Professor; ²Student
Department of Computer Science and Engineering,
Tagore Institute of Engineering and Technology,
Deviyakuruchi, Salem, Tamil Nadu, India

Abstract:- Understanding and classifying student actions within educational environments is a vital component of improving learning outcomes and well-being. This study presents a novel approach to student activity categorisation that employs facial expression detection technologies. The system is designed to record and evaluate students' facial expressions, infer their emotional states, and then classify their actions. The study investigates the application of deep learning models for facial emotion identification using a dataset that includes both academic and non-academic activities. The system can recognise emotions such as happiness, sorrow, rage, and surprise. The extracted emotion features are then used to characterise student actions, revealing whether a student is engaged, attentive, puzzled, or indifferent, among other states. This strategy has the potential to improve educational settings by offering real-time insights into student conduct and allowing for timely adjustments that improve learning experiences and outcomes. It also opens up possibilities for personalised educational support and the creation of intelligent learning systems. In this research, we construct a system that extracts facial characteristics using the Grassmann method, identifies the emotions of students at given times, predicts the activity state using emotion categorisation, and provides reports to the administrator. Furthermore, this technique shows potential for the creation of adaptive learning systems that react to students' emotional states, delivering extra help or challenges as needed. For example, a virtual tutor may modify the difficulty of exercises based on a student's emotional reactions, producing a dynamic and responsive learning experience.

I. INTRODUCTION

In educational settings, understanding student involvement, emotions, and activities is critical for developing effective teaching and learning tactics. Students' emotional states are strongly related to their academic achievement and well-being. This study provides a new way of improving educational quality and providing personalised support: student activity classification based on facial expression detection. Traditional methods of monitoring student involvement and behaviour frequently rely on physical observation or self-reporting, which are subjective and restricted in scope. In contrast, our suggested approach uses computer vision and deep learning techniques to collect and analyse students' emotional responses, allowing us to categorise their behaviours more objectively and comprehensively. The heart of this technique is facial expression detection, which has advanced significantly with the emergence of Convolutional Neural Networks (CNNs). Using these deep learning models, we can effectively identify and categorise a variety of emotions from students' facial expressions in real time, including happiness, sorrow, rage, surprise, and others. These observations reveal important details regarding students' emotional states. Furthermore, the technology goes beyond basic emotion identification: it correlates the recognised emotions with specific tasks, allowing us to determine whether a student is actively participating in a classroom discussion, struggling with a difficult idea, or simply disinterested in individual study. The real-time nature of this research allows educators and institutions to make informed decisions, change teaching tactics, and provide timely interventions to improve student learning experiences. This study also paves the way for intelligent learning systems that dynamically adapt to students' emotional states. Such systems might modify material, adjust difficulty levels, or give additional help, resulting in a more personalised and effective learning environment. However, in pursuing these breakthroughs, we must emphasise ethical issues such as data protection, informed consent, and the elimination of possible biases in emotion identification. These considerations are critical to the responsible application of facial expression detection in educational contexts. This research represents a significant step towards leveraging facial emotion recognition for comprehensive student activity classification. It has the potential to revolutionise education by providing real-time insights into student behaviour and fostering a more responsive and effective learning ecosystem. The existing model is shown in Fig 1.


Fig 1 Video Based Facial Emotion Recognition

II. RELATED WORK

Chahak Gautam et al. [1] provide an innovative and effective framework for emotion recognition using feature extraction and CNN. The study demonstrates how explicit key-feature extraction from a dataset can support successful and fast facial emotion analysis. Handcrafted feature extraction is critical when data collection is limited and the model must be built without losing any crucial or useful information. The important findings of the research are that explicit feature extraction improves accuracy, minimises the danger of overfitting, speeds up the training process, and enhances data visualisation and processing. Emotion recognition technology is a form of facial detection and recognition that uses facial expressions as well as biophysical indicators such as pulse rate and brain activations to determine an individual's emotional state. Image-based facial expression recognition is a difficult problem, especially when it comes to determining human emotion or mood in particular scenarios, such as watching a series or movie, being immersed in video games, shopping, or even on the battlefield. Emotions are of great importance owing to a rapid increase in a variety of healthcare difficulties such as depression, cancer, paralysis, and trauma. The study presents a technique for emotion identification using feature extraction and convolutional neural networks.

Mahmut Dirik [2] presents a study on automated emotion detection from facial photographs. An ANFIS-PSO classifier recognition model is used to create dependable decision support systems that recognise faces automatically, quickly, and robustly. The suggested technique, GPA-based normalisation, and a range of classifiers based on AU characteristics were used to compare performance. The ANFIS-PSO method combines the exploration and exploitation capabilities of particle swarm optimisation (PSO) with the ANFIS algorithm. The proposed ANFIS-PSO-based classifier achieved a classification accuracy of 99.6%. In conclusion, the study proposed a novel framework and a highly accurate classification algorithm based on AUs for emotion recognition. The efficacy of the suggested model was assessed using a number of criteria, and compared to earlier techniques the suggested model performed better (99.6%). The downside of this study is that it uses only static photos and does not take into account the temporal behaviour of facial expressions. Emotion identification from facial photographs is a significant and active topic of research. Facial traits are commonly employed in computer vision to understand emotions, conduct cognitive science, and communicate with others. To accurately analyse facial expressions (happy, angry, sad, startled, disgusted, afraid, and neutral), a complex system based on human-computer interaction and data is needed. It remains challenging to build an effective and computationally simple technique for feature selection and emotion categorisation.

Sudheer Babu Punuri et al. [3] propose an efficient approach with a cutting-edge transfer learning mechanism for facial emotion identification. The system is known as EfficientNet-XGBoost. The scheme's novelty lies in a specific mix of a pre-trained EfficientNet architecture, fully connected layers, an XGBoost classifier, and bespoke parameter fine-tuning. The input face photos are appropriately pre-processed, and feature extraction is performed using the custom model. The feature points are retrieved using different networks. Global average pooling is used to average the feature maps, and the final feature set is fed into the XGBoost classifier, which recognises class labels for different emotions. Four different datasets are utilised to validate the technique. The experimental findings for the CK+ dataset demonstrate remarkable performance with an overall accuracy rate of 100%. Furthermore, the suggested model can reliably recognise expressions while maintaining minimal latency. Datasets such as JAFFE and KDEF show an overall accuracy rate of 98%. Despite the uneven sample distribution in FER2013, augmentation using geometric transformation techniques resulted in a benchmark accuracy of 72.54%. The authors substantiate their claims with a comparison of their results against previous research on current datasets. The future focus of the study is to address the issue of boosting efficiency for unbalanced sample sets; exploring the use of bespoke GANs (generative adversarial networks) might be a sensible approach for recognising facial emotions from unbalanced datasets.
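The pipeline described in [3] can be illustrated in code. The following is a minimal sketch only, assuming Keras' pretrained EfficientNetB0 as the backbone and placeholder arrays in place of a real dataset such as CK+; all hyperparameters are illustrative rather than taken from the paper.

```python
# Sketch of an EfficientNet -> XGBoost pipeline in the spirit of [3]:
# a pretrained CNN backbone produces globally average-pooled features,
# and a gradient-boosted classifier predicts the emotion label.
import numpy as np
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.applications.efficientnet import preprocess_input
from xgboost import XGBClassifier

# Pretrained backbone; pooling="avg" applies global average pooling,
# reducing each face image to a single 1280-dim feature vector.
backbone = EfficientNetB0(include_top=False, weights="imagenet", pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 224, 224, 3) RGB face crops with values in [0, 255]."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Placeholder data standing in for a real emotion dataset:
# 7 emotion classes, tiny batch purely for illustration.
X_train = np.random.rand(32, 224, 224, 3) * 255
y_train = np.arange(32) % 7  # ensures every class label appears

clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(extract_features(X_train), y_train)
predicted_emotions = clf.predict(extract_features(X_train[:4]))
```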

Ninad Mehendale [4] devised an innovative method of facial emotion recognition that takes advantage of CNNs and supervised learning (made possible by massive data). The FERC algorithm's key benefit is that it operates with multiple orientations (less than 30°) thanks to its unique 24-digit-long EV feature matrix. The background elimination provided a significant benefit in precisely assessing emotions. FERC might be the first step for numerous emotion-based applications, including deception detectors and mood-based learning for pupils. Detecting emotions from facial expressions has always been a simple task for humans, but doing it with a computer algorithm is rather difficult. With recent advances in computer vision and machine learning, it is now feasible to discern emotions in photographs. The study provides a unique approach for facial emotion identification based on convolutional neural networks (FERC). FERC is built on a two-part convolutional neural network (CNN): the first part eliminates the background from the image, while the second focuses on facial feature vector extraction. In the FERC model, an expressional vector (EV) is employed to identify the many forms of regular facial expressions. Supervisory data were acquired from a database of 10,000 pictures (154 individuals). It was feasible to correctly identify the emotion with 96% accuracy using an EV of length 24. The two-level CNN operates in sequence, and the last perceptron layer updates the weights and exponent values with each iteration. FERC differs from commonly used single-level CNN techniques, resulting in improved accuracy. Furthermore, a unique background removal approach applied prior to EV generation avoids many difficulties that may otherwise occur (for example, variation in distance from the camera).

Aayushi Chaudhari et al. [5] developed a strategy for recognising emotions utilising widely available unsupervised data and self-supervised learning (SSL) algorithms. Using this method, the authors were able to save time on retraining the model or starting from scratch, while also utilising currently available pretrained self-supervised learning algorithms. Using self-supervised learning as an input revealed that the created features had large dimensions and were considered high-level features, necessitating a trustworthy and in-depth fusion procedure. The results showed that the challenge of multimodal emotion recognition can be successfully addressed by combining self-supervised learning (SSL) with intermodality interaction techniques. Using pretrained self-supervised learning algorithms for feature extraction, the work focused on enhancing the task of emotion recognition. To achieve this aim, a multimodal fusion methodology based on a transformer was created. Furthermore, the emotion categories were sharply defined by placing them in two dimensions (arousal and valence). The strategy outperformed previous state-of-the-art methods when compared against strong baselines on the RAVDESS dataset. In the future, the authors intend to recognise emotions from contextual data and categorise them into three dimensions: arousal, valence, and dominance. They also plan to test the concept in the medical field to help professionals properly diagnose patients.

Liam Schoneveld et al. [6] describe an enhanced deep-neural-network-based AVER approach. The proposed model consists of two deep neural networks: a deep CNN model trained with knowledge distillation for FER, and a tweaked and enhanced VGGish version for SER. The auditory and visual feature representations are integrated utilising a model-level fusion approach. To model the temporal dynamics, recurrent neural networks are employed to analyse the temporally and spatially represented data. The study presents a high-performance deep-neural-network-based technique for AVER that incorporates a model-level fusion architecture, a modified VGGish backbone, and a visual feature extractor network. The updated facial expression embedding network illustrates how to acquire robust facial expression representations by training on both AffectNet and FEC concurrently. The authors also demonstrated that knowledge distillation can improve facial emotion recognition even further. The performance of their enhanced VGGish backbone feature extractor suggests a viable new method for inferring emotion from audio. Furthermore, the approach proved the usefulness of shallow neural networks for multimodal fusion, surpassing state-of-the-art algorithms in predicting emotion on the RECOLA data.

P. Sumathy et al. [7] note that emotion recognition will undoubtedly play an important role in the field of machine learning. With the recent development and widespread usage of deep learning and machine learning techniques, the prospect of creating intelligent systems that accurately grasp emotions has become more feasible. Recognition of human features and emotions is challenging due to the range of facial expressions, physical attributes, odd positions, and lighting conditions. To increase image quality by minimising noise and illumination in the facial expression images, an improved image pre-processing technique utilising KNN and CA is offered. The findings of the proposed enhanced pre-processing approach with an ANN classification method show that it is more accurate and has a better detection rate, sensitivity, and specificity for distinguishing human moods. In the frequency domain, a filter is an operator that reduces or enhances particular frequencies in the image. The primary functions of image filters are to alter the look of images by adjusting their colours, size, shading, and other features. Such filtering may be used for a variety of image processing applications, including edge enhancement, sharpening, and smoothing.
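The pre-processing goal in [7], suppressing noise and illumination variation before classification, can be illustrated with generic OpenCV operations. The snippet below is a stand-in sketch, not the paper's exact KNN-and-CA pipeline, and the file names are hypothetical.

```python
# Illustrative pre-processing in the spirit of [7]: reduce noise and
# normalise illumination before a facial-expression classifier sees the image.
import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Non-local-means denoising suppresses sensor noise while preserving edges.
denoised = cv2.fastNlMeansDenoising(img, h=10)

# Histogram equalisation flattens illumination differences across faces.
equalised = cv2.equalizeHist(denoised)

cv2.imwrite("face_preprocessed.jpg", equalised)
```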

Jung Hwan Kim et al. [8] argue that a device able to reliably identify a driver's emotional expression is one way to reduce the number of fatal vehicle accidents. Ismail et al. claimed that furious driving increases the likelihood of an accident and endangers other people's lives. Facial expression recognition (FER) technology might protect a person from irate drivers and potentially avoid disastrous crashes, and assessing driver moods relies heavily on FER systems. Excellent facial expression recognition in autonomous vehicles leads to reduced road rage. Even the most complex FER model performs badly in real-time testing when trained without the necessary datasets; the integrity of the datasets has a greater influence on FER system performance than the accuracy of the algorithms. The authors propose a facial image threshing (FIT) machine that uses additional capabilities applied prior to face identification, together with learning from the Xception method, to improve the performance of FER systems for autonomous cars. In addition to the data-augmentation approach, the FIT machine required the elimination of unnecessary facial pictures, the collection of facial images, the repair of misplaced face data, and the large-scale merging of the original datasets.

M. A. H. Akhand et al. [9] observe that, for secure and safe living, smart surroundings, and a smart society, emotion recognition from facial images in unconstrained contexts (for example, public locations), where frontal-view shots are not always feasible, is becoming increasingly important. To achieve this, a powerful FER is required, with emotion recognition possible from a range of face perspectives, notably from varied angles. The landmark features of the complete face are not visible in profile views from various angles, and traditional feature extraction approaches cannot recover facial expression characteristics from side views. FER from high-resolution facial images using a DCNN model is thus regarded as the only way to handle such a tough assignment. The recommended FER system incorporates a transfer-learning-based technique: to make a pre-trained DCNN compatible with FER, the upper layers are replaced with dense layer(s) to improve the model's performance on facial expression data. The distinctive component of the proposed technique is the pipelined fine-tuning process: first the dense layers are tuned, and then the remaining DCNN blocks are adapted one at a time.

Avigyan Sinha et al. [10] explain that facial expression recognition (FER) evaluates expressions in both still and moving images to determine the subject's emotional state. Human emotions may be communicated nonverbally through facial expressions. The algorithm classifies faces into fundamental emotions (such as anger, contempt, fear, pleasure, sadness, neutral, and surprise). In some cases, a person's mental or physiological state may also be reflected by their facial expressions (for example, exhaustion or boredom). Speech, EEG, voice quality, and text may all be used to identify emotions, but facial expressions are among the most popular of these cues since they may be observed directly, contain a range of important features for distinguishing emotions, are easy to discern, and make it possible to create a large face collection (unlike other methods for human identification). FER can also be combined with biometric identification, and analysis of a variety of sources, such as voice, text, device-generated health information, or blood circulation patterns inferred from images, may increase its accuracy. The project's current technology identifies emotion using traditional means such as a person's voice, facial expression, EEG, text, and so on. Nonetheless, in the history of human-computer interaction, a computer's capacity to recognise an individual's emotions is very important. The authors constructed a CNN model utilising data provided by Jonathan Oheix to overcome the typical technique's failure to identify emotions in HCI (human-computer interaction).

III. BACKGROUND OF THE WORK

There are several applications in human-computer interaction that could benefit from the capacity to perceive emotion. Face detection may be thought of as a binary categorisation of picture frames as containing or not containing a face. To learn such a classification model, we must first characterise an image in terms of features that can be used to detect the presence or absence of a face in a particular picture. The present methodology typically consists of two tasks: the first is to extract ASM motion using a pyramid ASM model-fitting method, and the second is to classify the projected motion using Adaboost classifiers. The system aligns three retrieved feature points, in the eye and nose regions, with the mean ASM shape, disregarding the other ASM components against the mean face shape, to estimate the geometrical dislocation between the current and mean ASM point coordinates. Then, using the Adaboost classifier, facial emotions are recognised based on this geometrical motion. Additionally, characteristics are extracted using the Viola-Jones method. Viola and Jones employed wavelet-based characteristics: wavelets are square waves with a single wavelength (one high interval and one low). In two dimensions, a square wave is a pair of neighbouring rectangles, one light and one dark.

IV. PROPOSED WORK

The suggested method for detecting student activity from facial expressions employs the Grassmann algorithm, introducing a fresh way of understanding and improving the educational experience. At its foundation, the system makes use of the Grassmann algorithm, a statistical tool designed for analysing high-dimensional data such as facial expressions. The initial phase is collecting a broad dataset of students participating in various educational activities, covering both in-class learning and remote education situations. This dataset comprises facial photos or video frames of students engaged in various activities, guaranteeing inclusion across demographics. Preprocessing procedures standardise the data by normalising pixel values and performing data augmentation to improve its diversity and quality. The Grassmann algorithm, well known for its ability to reduce dimensionality and extract features, is then used to recognise facial emotions. The technique allows the system to extract key aspects from high-dimensional facial data, such as the tiny movements and expressions that communicate emotion, making it an excellent candidate for analysing and interpreting facial information associated with emotions. The Grassmann method, also known as Grassmannian subspace analysis, is a powerful mathematical approach used to analyse facial characteristics in computer vision and facial recognition applications. One of its primary uses is dimensionality reduction, an important issue when dealing with high-dimensional facial data: using the Grassmann technique, it is feasible to reduce the dimensionality of facial feature vectors while keeping critical information, making it particularly useful for facial identification, emotion detection, and expression analysis. Another important use of the Grassmann method is feature extraction, where it transforms high-dimensional pixel data into a more compact yet intelligible representation, highlighting significant facial traits such as texture patterns, shape variations, and landmark placements. A further aspect is subspace learning, in which the algorithm identifies subspaces within the feature space that characterise certain facial features or identities. Fig 2 shows the proposed architecture.

Fig 2 Proposed Architecture
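The paper does not spell out its exact Grassmann computation, so the following is a minimal sketch under common assumptions: a short clip of face frames is represented as the linear subspace spanned by its centred frames via a truncated SVD, and two clips are compared through the principal angles between their subspaces (the geodesic distance on the Grassmann manifold). Array shapes and the subspace dimension k are illustrative.

```python
# Sketch of Grassmannian subspace analysis for face clips.
import numpy as np

def face_subspace(frames: np.ndarray, k: int = 5) -> np.ndarray:
    """frames: (n_frames, n_pixels) flattened face images.
    Returns an orthonormal basis (n_pixels, k) of the dominant subspace."""
    X = frames - frames.mean(axis=0)          # centre the frames
    # Left singular vectors of X.T span the dominant pixel-space subspace.
    U, _, _ = np.linalg.svd(X.T, full_matrices=False)
    return U[:, :k]

def grassmann_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Geodesic distance between subspaces spanned by orthonormal A and B."""
    # Singular values of A^T B are the cosines of the principal angles.
    s = np.clip(np.linalg.svd(A.T @ B, compute_uv=False), -1.0, 1.0)
    return float(np.linalg.norm(np.arccos(s)))

# Hypothetical usage: compare a student's current expression clip against
# a reference clip (placeholder random data; 30 frames of 48x48 faces).
clip_a = np.random.rand(30, 48 * 48)
clip_b = np.random.rand(30, 48 * 48)
d = grassmann_distance(face_subspace(clip_a), face_subspace(clip_b))
```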

 Framework Construction

 This strategy offers instructors real-time feedback, which is a big advantage. Teachers can use emotion recognition technology to assess their students' emotional responses during classes, lectures, or conversations.
 This immediate feedback allows them to make adjustments on the spot to ensure that students remain engaged and understand the material.
 In this module, we design the framework for students.
 Students can log in to the system with their details.
 The admin can view the details of each student.

 Features Extraction

 Facial feature extraction using Convolutional Neural Networks (CNNs) is a popular approach for stress detection from facial images. CNNs are deep learning models that can automatically extract features from images and learn complex patterns.
 The first step in using a CNN for stress detection is to collect a dataset of facial images of persons under different stress conditions.

 Model Building

 The next step is to select a CNN architecture that is suitable for facial attribute extraction.
 A Sequential model, a popular choice for image recognition tasks, is adopted here; such models can be pre-trained on large datasets and fine-tuned for stress detection.
 The CNN is then used to extract features from the facial images. This is done by passing the images through the CNN and extracting the activations from one of the last convolutional layers (see the sketch after this section).

 Activity Classification

 Deep neural networks possess key advantages in their capability to model complex systems and to learn features automatically through several network layers.
 As such, deep neural networks are used to carry out precision-driven tasks such as classification.
 We design a CNN architecture that takes a facial image as input and predicts whether the person in the image is stressed or not.
 The architecture typically consists of multiple convolutional layers followed by max-pooling layers to extract features from the images. The final layers of the network are fully connected layers that classify the image as positive or negative.
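The sketch below illustrates the modules above with Keras: a small Sequential CNN ending in a binary (stressed / not stressed) head, plus a companion model that reads out the activations of the last convolutional layer as the extracted features. Input size, layer widths, and layer names are assumptions, not values from the paper.

```python
# Minimal Sequential CNN and last-conv-layer feature extractor.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),            # grayscale face crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", name="last_conv"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # stressed vs. not stressed
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Feature extractor: pass images through the CNN and read out the
# activations of the last convolutional layer, as described above.
feature_extractor = keras.Model(
    inputs=model.inputs,
    outputs=model.get_layer("last_conv").output,
)

faces = np.random.rand(8, 48, 48, 1)           # placeholder batch
features = feature_extractor.predict(faces, verbose=0)
```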

 Reports

 In this module, we provide reports on the activity-state information of all students.
 Sentiment details are stored in the database for future verification.

V. EXPERIMENTAL RESULTS

The false rejection rate (FRR) is the measure of the likelihood that a biometric security system will incorrectly reject an access attempt by an authorised user. A system's FRR is typically stated as the ratio of the number of false rejections to the number of identification attempts.

 FALSE REJECT RATE = FN / (TP + FN)
 FN = genuine scores exceeding the threshold
 TP + FN = all genuine scores

Algorithms FRR
Random Forest 0.42
Adaboost Classifier 0.35
CNN classifier 0.28

Fig 3 False Rejection Rate
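The FRR computation defined above can be illustrated with a few lines of code; the genuine scores and threshold below are made-up placeholders, and scores are treated as distances, so a genuine score above the threshold counts as a false rejection.

```python
# Minimal computation of the false rejection rate FRR = FN / (TP + FN).
import numpy as np

genuine_scores = np.array([0.21, 0.35, 0.48, 0.19, 0.52, 0.30])
threshold = 0.45

fn = int(np.sum(genuine_scores > threshold))   # genuine scores exceeding threshold
frr = fn / len(genuine_scores)                 # FN / (TP + FN), all genuine scores
print(f"FRR = {fn}/{len(genuine_scores)} = {frr:.2f}")
```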

The proposed system achieves a lower false rejection rate than existing algorithms such as Random Forest and the Adaboost classifier.

VI. CONCLUSION

In conclusion, using Convolutional Neural Networks (CNNs) to classify student behaviour based on facial expressions has significant potential for transforming the educational environment. This novel technique uses deep learning to gain a sophisticated understanding of students' emotional states during various learning sessions. By analysing facial expressions and categorising emotional reactions, the technology provides real-time insights that can improve the educational experience. Using this technology, instructors may alter and personalise their teaching techniques based on real-time emotional input, which improves student engagement and understanding. The ability to detect not just fundamental emotions but also more nuanced emotional states, such as perplexity, irritation, and satisfaction, broadens educators' understanding. Furthermore, the project's reach extends beyond typical classrooms to include remote and online learning contexts, offering an effective tool for measuring student engagement and well-being in a wide range of educational settings. This flexibility is particularly important in the ever-changing context of education.

REFERENCES

[1]. Gautam, Chahak, and K. R. Seeja. "Facial emotion recognition using handcrafted features and CNN." Procedia Computer Science 218 (2023): 1295-1303.
[2]. Dirik, Mahmut. "Optimized ANFIS model with hybrid metaheuristic algorithms for facial emotion recognition." International Journal of Fuzzy Systems 25.2 (2023): 485-496.
[3]. Punuri, Sudheer Babu, et al. "EfficientNet-XGBoost: an implementation for facial emotion recognition using transfer learning." Mathematics 11.3 (2023): 776.

[4]. Mehendale, Ninad. "Facial emotion recognition using convolutional neural networks (FERC)." SN Applied Sciences 2.3 (2020): 446.
[5]. Chaudhari, Aayushi, et al. "Facial emotion recognition with inter-modality-attention-transformer-based self-supervised learning." Electronics 12.2 (2023): 288.
[6]. Schoneveld, Liam, Alice Othmani, and Hazem Abdelkawy. "Leveraging recent advances in deep learning for audio-visual emotion recognition." Pattern Recognition Letters 146 (2021): 1-7.
[7]. Sumathy, P., and Ahilan Chandrasekaran. "An optimized image pre-processing technique for face emotion recognition system." Annals of the Romanian Society for Cell Biology 25.6 (2021): 6247-6261.
[8]. Kim, Jung Hwan, Alwin Poulose, and Dong Seog Han. "The extensive usage of the facial image threshing machine for facial emotion recognition performance." Sensors 21.6 (2021): 2026.
[9]. Akhand, M. A. H., et al. "Facial emotion recognition using transfer learning in the deep CNN." Electronics 10.9 (2021): 1036.
[10]. Sinha, Avigyan, and R. P. Aneesh. "Real time facial emotion recognition using deep learning." International Journal of Innovations and Implementations in Engineering 1 (2019).
[11]. Zhong, Yuanchang, et al. "HOG-ESRs face emotion recognition algorithm based on HOG feature and ESRs method." Symmetry 13.2 (2021): 228.
[12]. Chang, Jia-Wei, et al. "Music recommender using deep embedding-based features and behavior-based reinforcement learning." Multimedia Tools and Applications 80.26 (2021): 34037-34064.
[13]. Athavle, Madhuri. "Music recommendation system using facial expression recognition using machine learning." International Journal for Research in Applied Science & Engineering Technology (IJRASET) (2022).
[14]. Chowdary, M. Kalpana, Tu N. Nguyen, and D. Jude Hemanth. "Deep learning-based facial emotion recognition for human–computer interaction applications." Neural Computing and Applications (2021): 1-18.
[15]. Ch, Satyanarayana. "An efficient facial emotion recognition system using novel deep learning neural network-regression activation classifier." Multimedia Tools and Applications 80.12 (2021): 17543-17568.
[16]. Mehendale, Ninad. "Facial emotion recognition using convolutional neural networks (FERC)." SN Applied Sciences 2.3 (2020): 1-8.
[17]. Ramírez, Jaime, and M. Julia Flores. "Machine learning for music genre: multifaceted review and experimentation with Audioset." Journal of Intelligent Information Systems 55.3 (2020): 469-499.
[18]. Liu, Jun, Yanjun Feng, and Hongxia Wang. "Facial expression recognition using pose-guided face alignment and discriminative features based on deep learning." IEEE Access 9 (2021): 69267-69277.
[19]. Said, Yahia, and Mohammad Barr. "Human emotion recognition based on facial expressions via deep learning on high-resolution images." Multimedia Tools and Applications 80.16 (2021): 25241-25253.
[20]. Ruiz-Garcia, Ariel, et al. "Deep learning for emotion recognition in faces." International Conference on Artificial Neural Networks. Springer, Cham, 2016.
