Facial Expression Detection Using Deep Learning
Abstract - The use of machines to perform different tasks is constantly increasing in society. Providing machines with perception can allow them to perform a great variety of tasks, even very complex ones such as elderly care. This requires that machines understand their environment and their interlocutor's intentions. Recognizing facial emotions can help in this regard. During the development of this work, deep learning techniques have been applied to images displaying the following facial emotions: happiness, sadness, anger, surprise, disgust, and fear.

As a result, the method handles variations in lighting and in the orientation of objects in the image well and thus achieves higher accuracy. In the field of education, online learning plays a vital role. The fundamental problem in the online learning environment is the low engagement of the listener with the preceptor. Educational institutions and preceptors are responsible for guaranteeing the best possible learning environment, with maximum engagement in educational activities for online learners.

Key Words: Environment, interlocutor, happiness, sadness, anger, surprise, listener, educational.

1. INTRODUCTION

The primary goal of this research is to design, implement and evaluate a novel facial expression recognition system using various statistical learning techniques. This goal will be realized through the following objectives:

1. System level design: In this stage, we use existing techniques from related areas as building blocks to design our system.
a) A facial expression recognition system usually consists of multiple components, each of which is responsible for one task. We first need to review the literature and decide the overall architecture of our system, i.e., how many modules it has, the responsibility of each of them, and how they should cooperate with each other.
b) Implement and test various techniques for each module and find the best combination by comparing their accuracy, speed, and robustness.
2. Algorithm level design: Focus on the classifier, which is the core of a recognition system, and try to design new algorithms which hopefully perform better than existing ones.

1.1 MOTIVATION

In today's networked world, the need to maintain the security of information or physical property is becoming both increasingly important and increasingly difficult. In countries like Nepal, the crime rate is increasing day by day, and there are no automatic systems that can track a person's activity. If we were able to track people's facial expressions automatically, criminals could be identified more easily, since facial expressions change while performing different activities.

We therefore decided to build a Facial Expression Recognition System. We became interested in this project after going through a few papers in this area. As a result, we are highly motivated to develop a system that recognizes facial expressions and tracks a person's activity.

1.2 PROBLEM DEFINITION

Human facial expressions can be classified into 7 basic emotions: happy, sad, surprise, fear, anger, disgust, and neutral. Facial emotions are expressed through the activation of specific sets of facial muscles. These sometimes subtle, yet complex, signals in an expression often contain an abundant amount of information about our state of mind. Through facial emotion recognition, we are able to measure the effect that content and services have on the audience/users through an easy and low-cost procedure.

Neural Network

A neural network is a network or circuit of neurons or, in the modern sense, an artificial neural network composed of artificial neurons or nodes. A neural network is thus either a biological neural network, made up of real biological neurons, or an artificial neural network, used for solving artificial intelligence (AI) problems.

2. LITERATURE REVIEW

As per various literature surveys, four basic steps are required to implement this project:
a. Preprocessing
b. Face registration
c. Facial Feature Extraction
Facial feature extraction is an important step in face recognition and is defined as the process of locating specific regions, points, landmarks, or curves/contours in a given 2-D image or a 3-D range image. In this feature extraction step, a numerical feature vector is generated from the resulting registered image.

d. Emotion Classification
In this step, the algorithm attempts to classify the given faces as portraying one of the seven basic emotions.

• Dimas Lima and Bin Li [9] described facial expression recognition via ResNet-50. The proposed system focuses on the FER dataset and achieved good results in multitask classification, attaining an accuracy of 95.39 +/- 1.41.

• Michail N. et al. [4] proposed a wristband model system which has an EEG cap. The ENOBIO EOG correction mechanism is used for calibrating the data. The user wore the EEG cap, and the concentration and attention level while learning was measured. Mohamed El Kerdawy et al. [5] used a 14-channel EEG headset to record EEG signals.

The overall pipeline proceeds in four phases:
1. The first phase is the acquisition of the face.
2. In the second phase, image preprocessing and extraction are completed.
3. In the third phase, the extracted face images are checked against the data sets.
4. After this step, algorithmic and statistical processing is performed based on the input images.

The idea was to build a single consolidated system that is able to effectively recognize the emotions of learners during online education with the help of convolutional neural networks and to plot emotion metrics based on the results. The figure below shows the overall system design.

3. SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

In the existing system, classification is done through simple image processing to classify images only. Existing work includes the application of feature extraction of facial expressions in combination with neural networks for the recognition of different facial emotions (happy, sad, angry, fear, surprised, neutral, etc.). Humans are capable of producing thousands of facial actions during communication that vary in complexity, intensity, and meaning. Conventional methods for expression detection typically consist of feature extraction followed by classification. Traditional approaches face challenges that include limited accuracy and limited robustness to variations in facial expressions.

Collecting data for training the model is the basic step in the machine learning pipeline. The predictions made by a system can only be as good as the data on which it has been trained. The following are some of the problems that can arise in data collection. Inaccurate data: the collected data could be unrelated to the problem statement. Missing data: sub-data could be missing, which could take the form of empty values in columns or missing images for some class of prediction.

Matplotlib

Matplotlib is a powerful and widely used plotting library for Python which enables us to create a variety of static, interactive, and publication-quality plots and visualizations. It is extensively used for data visualization tasks and offers a wide range of functionality for creating plots such as line plots, scatter plots, bar charts, histograms, 3-D plots, and much more. Matplotlib provides flexibility and customization options to tailor plots to specific needs.
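The sketch below illustrates the kind of pipeline described above: a small convolutional network for the seven emotion classes and a Matplotlib bar chart of the resulting emotion metrics. It assumes 48x48 grayscale face crops (the format of the public FER-2013 data); the layer sizes, the build_emotion_cnn helper, and the random dummy input are illustrative choices for this example, not the exact architecture used in this work.

# Minimal sketch: a small CNN for 7-class facial emotion recognition
# plus a Matplotlib bar chart of the predicted emotion metrics.
# Assumes 48x48 grayscale face crops (FER-2013 style); layer sizes are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

EMOTIONS = ["happy", "sad", "surprise", "fear", "anger", "disgust", "neutral"]

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=len(EMOTIONS)):
    """Illustrative CNN: two convolution blocks followed by a dense classifier."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()

# A random array stands in for one real preprocessed face crop.
face = np.random.rand(1, 48, 48, 1).astype("float32")
probs = model.predict(face)[0]          # softmax scores for the 7 emotions

# Plot the emotion metrics as a bar chart.
plt.bar(EMOTIONS, probs)
plt.ylabel("predicted probability")
plt.title("Emotion metrics for one face")
plt.tight_layout()
plt.show()

In a trained system, the same plotting call would be fed the averaged class probabilities (or class counts) collected over a session rather than a single random input.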
4. SYSTEM DESIGN
System Architecture
Work Flow
1. Data cleaning
2. Visualization
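As a concrete illustration of the data-cleaning step of this workflow, the sketch below counts the training images available for each emotion class and flags empty classes, one of the data-collection problems noted earlier. The data/train/<emotion>/ directory layout is an assumption made for the example, not a path taken from this work.

# Minimal data-cleaning sketch: check that every emotion class has training images.
# The data/train/<emotion>/ layout below is only an assumption for this example.
from pathlib import Path

EMOTIONS = ["happy", "sad", "surprise", "fear", "anger", "disgust", "neutral"]
TRAIN_DIR = Path("data/train")          # hypothetical dataset root
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

counts = {}
for emotion in EMOTIONS:
    class_dir = TRAIN_DIR / emotion
    if class_dir.is_dir():
        images = [p for p in class_dir.iterdir() if p.suffix.lower() in IMAGE_EXTS]
    else:
        images = []
    counts[emotion] = len(images)

for emotion, n in counts.items():
    status = "OK" if n > 0 else "MISSING"   # an empty class is a missing-data problem
    print(f"{emotion:10s} {n:6d} images  [{status}]")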
Python

Python is commonly used for:
1. web development (server-side)
2. software development
3. mathematics
4. system scripting

The most recent major version of Python is Python 3. Python 2, although no longer updated with anything other than security updates, is still quite popular. Python can be written in an Integrated Development Environment such as Thonny, PyCharm, NetBeans, Eclipse, or Anaconda, which are particularly useful when managing larger collections of Python files. Python was designed for readability: it uses new lines to complete a command, as opposed to other programming languages, which often use semicolons or parentheses, and it relies on indentation (whitespace) to define scope, such as the scope of loops, functions, and classes.
Python Libraries
1. Numpy
2. TensorFlow
3. Pandas
4. Matplotlib
NumPy: a very popular Python library for large multi-dimensional array and matrix processing, backed by a large collection of high-level mathematical functions. It is very useful for fundamental scientific computations in machine learning.

TensorFlow: a very popular open-source library for high-performance numerical computation, developed by the Google Brain team at Google. As the name suggests, TensorFlow is a framework that involves defining and running computations involving tensors.

Integral Image

The basic idea of the integral image is that, to calculate the sum of pixel values over a rectangular area, we do not need to add up all the pixel values; we only need the corner values of that area in the integral image and a simple calculation. The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive.

Fig: Integral Images

Adaboost

AdaBoost is used to eliminate the redundant Haar features. A very small number of these features can be combined to form an effective classifier.
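To make the integral-image idea above concrete, here is a small NumPy sketch on random toy data: the integral image is built with cumulative sums, and the pixel sum of any rectangle is then recovered from just its four corner values. The rect_sum helper is written for this example and is not taken from the system's implementation.

# Minimal sketch of the integral image and the four-corner rectangle sum.
import numpy as np

image = np.random.randint(0, 256, size=(6, 8))      # toy grayscale image

# Integral image: ii[y, x] = sum of all pixels above and to the left of (x, y), inclusive.
ii = image.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of image[top:bottom+1, left:right+1] using only four corner lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# The corner-based sum matches the direct pixel sum.
print(rect_sum(ii, 1, 2, 4, 6))
print(image[1:5, 2:7].sum())

In practice, detectors such as OpenCV's Haar cascades combine these integral-image (Haar) features with AdaBoost-trained stages, as in the Viola-Jones face detector.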
TYPES OF TESTING:
BlackBox Testing
GreyBox Testing
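As an example of the black-box testing listed above, the sketch below exercises only the public interface of the recognizer and checks that every prediction is one of the seven emotion labels. predict_emotion is a hypothetical placeholder standing in for the system's actual entry point, so this test file is only a template.

# Black-box test sketch: only the public interface is exercised, not the internals.
# `predict_emotion` is a hypothetical stand-in for the system's real entry point.
import numpy as np

EMOTIONS = {"happy", "sad", "surprise", "fear", "anger", "disgust", "neutral"}

def predict_emotion(face_image):
    """Placeholder predictor; the real system would run its trained CNN here."""
    return "neutral"

def test_prediction_is_a_known_emotion():
    face = np.zeros((48, 48), dtype=np.uint8)   # blank 48x48 grayscale input
    assert predict_emotion(face) in EMOTIONS

def test_prediction_is_deterministic_for_same_input():
    face = np.zeros((48, 48), dtype=np.uint8)
    assert predict_emotion(face) == predict_emotion(face)

if __name__ == "__main__":
    test_prediction_is_a_known_emotion()
    test_prediction_is_deterministic_for_same_input()
    print("black-box tests passed")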
8. CONCLUSION
9. REFERENCES