
International Journal of Scientific Research in Engineering and Management (IJSREM)

Volume: 08 Issue: 04 | April - 2024 SJIF Rating: 8.448 ISSN: 2582-3930

FACIAL EXPRESSION DETECTION USING DEEP LEARNING

Dr. N. Gopala Krishna 1, Korlapati Jasmitha Lakshmi 2, Koneti Mohan Vasmi 3, Pallapothu Padmavathi 4, Mudavatha Sai Ram Naik 5
1 Professor, Department of Computer Science and Engineering, Tirumala Engineering College
2, 3, 4, 5 Student, Department of Computer Science and Engineering, Tirumala Engineering College

-----------------------------------------------------------------***----------------------------------------------------------------
Abstract - The use of machines to perform different tasks is constantly increasing in society. Providing machines with perception can allow them to perform a great variety of tasks, even very complex ones such as elderly care. This requires that machines understand their environment and their interlocutor's intentions. Recognizing facial emotions can help in this regard. During the development of this work, deep learning techniques have been applied to images displaying the following facial emotions: happiness, sadness, anger, surprise, disgust, and fear.

As a result, the method resolves issues of lighting variation and differing object orientation in the image, and thus achieves higher accuracy. In the field of education, online learning plays a vital role. The fundamental problem in the online learning environment is the low engagement of the listener with the preceptor. Educational institutions and preceptors are responsible for guaranteeing the best learning environment, with maximum engagement in educational activities, for online learners.

Key Words: Environment, interlocutor, happiness, sadness, anger, surprise, listener, educational.

1. INTRODUCTION

The primary goal of this research is to design, implement and evaluate a novel facial expression recognition system using various statistical learning techniques. This goal will be realized through the following objectives:

1. System level design: In this stage, we use existing techniques from related areas as building blocks to design our system.
a) A facial expression recognition system usually consists of multiple components, each of which is responsible for one task. We first need to review the literature and decide the overall architecture of our system, i.e., how many modules it has, the responsibility of each of them, and how they should cooperate with each other.
b) Implement and test various techniques for each module and find the best combination by comparing their accuracy, speed, and robustness.
2. Algorithm level design: Focus on the classifier, which is the core of a recognition system, and try to design new algorithms with better performance than existing ones.

1.1 MOTIVATION

In today's networked world, the need to maintain the security of information or physical property is becoming both increasingly important and increasingly difficult. In countries like Nepal, the crime rate is increasing day by day, and there are no automatic systems that can track a person's activity. If we are able to track people's facial expressions automatically, we can find a criminal more easily, since facial expressions change while performing different activities.

So we decided to build a Facial Expression Recognition System. We became interested in this project after going through a few papers in this area. As a result, we are highly motivated to develop a system that recognizes facial expressions and tracks a person's activity.

1.2 PROBLEM DEFINITION

Human facial expressions can be classified into 7 basic emotions: happy, sad, surprise, fear, anger, disgust, and neutral. Our facial emotions are expressed through the activation of specific sets of facial muscles. These sometimes subtle, yet complex, signals in an expression often contain an abundant amount of information about our state of mind. Through facial emotion recognition, we are able to measure the effect that content and services have on the audience/users through an easy and low-cost procedure.

Neural Network

A neural network is a network or circuit of neurons or, in the modern sense, an artificial neural network composed of artificial neurons or nodes. A neural network is therefore either a biological neural network, made up of real biological neurons, or an artificial neural network used for solving artificial intelligence (AI) problems.

2. LITERATURE REVIEW

As per various literature surveys, it is found that implementing this project requires four basic steps:
a. Preprocessing
b. Face registration

c. Facial feature extraction
d. Emotion classification

Descriptions of these processes are given below.

1. Preprocessing
Preprocessing is a common name for operations on images at the lowest level of abstraction, where both input and output are intensity images.

2. Face Registration
Face registration is a computer technology, used in a variety of applications, that identifies human faces in digital images. In this step, faces are first located in the image using a set of landmark points; this is called "face localization" or "face detection".

3. Facial Feature Extraction
Facial feature extraction is an important step in face recognition and is defined as the process of locating specific regions, points, landmarks, or curves/contours in a given 2-D image or 3-D range image. In this step, a numerical feature vector is generated from the resulting registered image.

4. Emotion Classification
In this final step, the classification algorithm attempts to classify the given face as portraying one of the seven basic emotions.

• Sunil Kumar, M. K. Bhuyan, and Biplab Ketan Chakraborty [6] described the extraction of informative regions of a face for FER. The proposed model successfully estimated the importance of facial sub-regions. It attained accuracies of 98.44% on MUG, 98.51% on JAFFE, and 97.01% on CK+ datasets.

• Ebenezer Owusu, Justice Kwame Appati, and Percy Okae [7] described a robust FER system for higher poses. The proposed model improves FER performance, and 2D pose conversions were established to handle pose-invariant FER problems successfully. It attained accuracies of 98.90% on Bosphorus, 93.50% on BU-3DFE, 97.20% on MMI, and 98.20% on CK+ datasets.

• Shrey Modi and Mohammed Husain Bohara [8] described facial expression recognition using a CNN. This technology could provide a great boost to fields such as robotics, by giving robots a sense of emotion, and to the blind community. It attained an accuracy of 73.5% on the FER dataset.

• Dimas Lima and Bin Li [9] described facial expression recognition via ResNet-50. The proposed system focuses on the FER dataset and achieved good results in multitask classification. It attained an accuracy of 95.39 ± 1.41%.

• Michail N. et al. [4] proposed a wristband model system which has an EEG cap. The ENOBIO EOG correction mechanism is used for calibrating the data. The user wore the EEG cap, and concentration and attention levels while learning were measured. Mohamed El Kerdawy et al. [5] used a 14-channel EEG headset to record EEG signals.
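
Returning to the pipeline steps described at the start of this section, the following is a minimal sketch of preprocessing and face registration (detection), assuming OpenCV's bundled Haar cascade frontal-face detector and 48x48 grayscale crops; the detector, crop size and file name are illustrative assumptions rather than choices fixed by the reviewed papers.

import cv2

# Haar cascade face detector shipped with OpenCV (assumption: frontal faces).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_faces(image_path, size=(48, 48)):
    """Detect faces and return normalized grayscale crops ready for a classifier."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # preprocessing: grayscale
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = []
    for (x, y, w, h) in faces:                            # face registration/localization
        roi = cv2.resize(gray[y:y + h, x:x + w], size)
        crops.append(roi.astype("float32") / 255.0)       # scale pixels to [0, 1]
    return crops

# Example usage (hypothetical file name):
# crops = preprocess_faces("student_frame.jpg")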

3. SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

In the existing system, classification is done through simple image processing, to classify images only. Existing work includes the application of feature extraction of facial expressions in combination with neural networks for the recognition of different facial emotions (happy, sad, angry, fear, surprised, neutral, etc.). Humans are capable of producing thousands of facial actions during communication that vary in complexity, intensity, and meaning. Conventional methods for expression detection rely on feature extraction followed by classification, and these traditional approaches face challenges, including limited accuracy and limited robustness to variations in facial expressions.


3.2 PROPOSED SYSTEM


Deep Learning-Based Approach:

1. The first phase is the acquisition of the face.

2. In the second phase, image preprocessing and extraction are completed.

3. In the third phase, the extracted face images are checked against the data sets.

4. After this step, algorithmic and statistical processing is performed based on the input images.

The idea was to build a single consolidated system that is able to effectively recognize the emotions of learners during the online form of education with the help of convolutional neural networks, and to plot emotion metrics based on the results. The figure below shows the overall system design.

System Architecture

Work Flow

3.3 MODULE DESCRIPTION

1. Data cleaning
2. Visualization

DATA COLLECTION AND DATA PRE-PROCESSING

Collecting data for training the model is the basic step in the machine learning pipeline. The predictions made by a system can only be as good as the data on which it has been trained. The following are some of the problems that can arise in data collection: inaccurate data, where the collected data could be unrelated to the problem statement, and missing data, where sub-data could be missing, taking the form of empty values in columns or missing images for some class of prediction.

Matplotlib

Matplotlib is a powerful and widely used plotting library in Python which enables us to create a variety of static, interactive and publication-quality plots and visualizations. It is extensively used for data visualization tasks and offers a wide range of functionalities to create plots such as line plots, scatter plots, bar charts, histograms, 3D plots and much more. The Matplotlib library provides flexibility and customization options to tailor our plots to specific needs.
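
Since the proposed system plots emotion metrics with Matplotlib, a minimal sketch of such a plot is given below; the seven labels come from Section 1.2, while the scores are placeholder values, not results reported in this paper.

import matplotlib.pyplot as plt

# Hypothetical per-emotion probabilities for one captured frame (placeholders).
emotions = ["happy", "sad", "surprise", "fear", "anger", "disgust", "neutral"]
scores = [0.55, 0.05, 0.10, 0.05, 0.10, 0.05, 0.10]

plt.bar(emotions, scores)
plt.ylabel("Predicted probability")
plt.title("Emotion metrics for one frame")
plt.tight_layout()
plt.show()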

4. SYSTEM DESIGN

4.1 Data Flow Diagrams

Fig: Data Flow Diagram
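
To make the CNN at the heart of the proposed system (Section 3.2) concrete, below is a minimal sketch of a small convolutional network for the seven emotion classes. The 48x48 grayscale input, layer sizes and training settings are illustrative assumptions; the paper does not specify an exact architecture.

from tensorflow.keras import layers, models

# Illustrative CNN for 7-class facial expression recognition (assumed design).
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                 # assumed grayscale face crop
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),           # one output per basic emotion
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)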


5. IMPLEMENTATION

Python

Python is a popular programming language. It was created in 1991 by Guido van Rossum. It is used for:

1. Web development (server-side)
2. Software development
3. Mathematics
4. System scripting

The most recent major version of Python is Python 3. However, Python 2, although no longer updated with anything other than security updates, is still quite popular. It is possible to write Python in an Integrated Development Environment, such as Thonny, PyCharm, NetBeans, Eclipse or Anaconda, which are particularly useful when managing larger collections of Python files. Python was designed for readability. Python uses new lines to complete a command, as opposed to other programming languages, which often use semicolons or parentheses. Python relies on indentation, using whitespace, to define scope, such as the scope of loops, functions and classes.

Python Libraries

1. NumPy
2. TensorFlow
3. Pandas
4. Matplotlib

NumPy is a very popular Python library for large multi-dimensional array and matrix processing, with the help of a large collection of high-level mathematical functions. It is very useful for fundamental scientific computations in machine learning.

TensorFlow is a very popular open-source library for high-performance numerical computation, developed by the Google Brain team at Google. As the name suggests, TensorFlow is a framework that involves defining and running computations involving tensors.

Pandas is a popular Python library for data analysis. It is not directly related to machine learning, but since the dataset must be prepared before training, Pandas comes in handy, as it was developed specifically for data extraction and preparation.

Matplotlib is a very popular Python library for data visualization. Like Pandas, it is not directly related to machine learning. It particularly comes in handy when a programmer wants to visualize the patterns in the data.

OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

Integral Images

The basic idea of the integral image is to simplify the calculation of the sum of pixel values over a rectangular area: instead of summing all the pixel values, only the corner values are combined in a simple calculation. The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive.

Fig: Integral Images
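
As a small numerical illustration of this idea (a sketch, not code from the paper), the integral image can be built with cumulative sums, after which any rectangular sum needs only four corner look-ups:

import numpy as np

def box_sum(integral, x1, y1, x2, y2):
    """Sum of pixels in the inclusive box (x1, y1)..(x2, y2) using 4 look-ups."""
    return (integral[y2 + 1, x2 + 1] - integral[y1, x2 + 1]
            - integral[y2 + 1, x1] + integral[y1, x1])

image = np.arange(16, dtype=np.int64).reshape(4, 4)     # toy 4x4 "image"
integral = np.zeros((5, 5), dtype=np.int64)             # padded with a zero row/column
integral[1:, 1:] = image.cumsum(axis=0).cumsum(axis=1)

print(box_sum(integral, 1, 1, 2, 2))   # 5 + 6 + 9 + 10 = 30
print(image[1:3, 1:3].sum())           # same result, summing the pixels directly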


AdaBoost

AdaBoost is used to eliminate redundant Haar features. A very small number of these features can be combined to form an effective classifier:

F(x) = a1f1(x) + a2f2(x) + a3f3(x) + a4f4(x) + ...

Here F(x) is the strong classifier and the fi(x) are weak classifiers. A weak classifier always provides a binary value, 0 or 1: if the feature is present the value is 1, otherwise it is 0. Generally, around 2500 weak classifiers are combined to make a strong classifier. A selected feature is considered acceptable if it performs better than random guessing, i.e., it must detect more than half of the cases.
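
A toy sketch of this weighted vote (illustrative only; the thresholds and weights below are made up, not trained values from the paper):

import numpy as np

def strong_classifier(feature_values, thresholds, alphas):
    """Combine binary weak classifiers f_i into a strong classifier F."""
    weak = (np.asarray(feature_values) > np.asarray(thresholds)).astype(int)  # each f_i(x) is 0 or 1
    score = np.dot(alphas, weak)                       # F(x) = a1*f1(x) + a2*f2(x) + ...
    return 1 if score >= 0.5 * np.sum(alphas) else 0   # usual AdaBoost decision rule

# Made-up example with three weak classifiers.
print(strong_classifier([0.8, 0.2, 0.9],
                        thresholds=[0.5, 0.5, 0.5],
                        alphas=[0.6, 0.3, 0.9]))       # -> 1 (detected)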

6. RESULTS

7. SYSTEM TESTING

7.1 TESTING OBJECTIVE

Software testing is a process used to help identify the correctness, completeness and quality of developed computer software.

Software testing is the process used to measure the quality of developed software. Testing is the process of executing a program with the intent of finding errors. Software testing is often referred to as verification and validation.

STLC (Software Testing Life Cycle):
Testing itself has many phases; together these phases are called the STLC, and the STLC is part of the SDLC.
1. Test Plan
2. Test Development
3. Test Execution
4. Analyze Result
5. Defect Tracking

TYPES OF TESTING:

• White Box Testing
• Black Box Testing
• Grey Box Testing

White Box Testing

White box testing, as the name suggests, gives an internal view of the software. This type of testing is also known as structural testing or glass box testing, as the interest lies in what is inside the box.

Black Box Testing

It is also called behavioral testing. It focuses on the functional requirements of the software. Testing, whether functional or non-functional, that is performed without reference to the internal structure of the component or system is called black box testing.

Grey Box Testing

Grey box testing is the combination of black box and white box testing.
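
As a small black-box example (a sketch; predict_emotion is a hypothetical entry point standing in for the trained recognizer, not a function defined in the paper), only the observable output is checked:

# Black-box style check: only inputs and outputs are inspected.
EMOTIONS = {"happy", "sad", "surprise", "fear", "anger", "disgust", "neutral"}

def predict_emotion(image_path):
    return "neutral"   # stub so the sketch runs on its own; the real system would classify the image

def test_prediction_is_a_known_emotion():
    assert predict_emotion("sample_face.jpg") in EMOTIONS

test_prediction_is_a_known_emotion()
print("black-box check passed")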

8. CONCLUSION

In this case, when the model predicts incorrectly, the correct label is often the second most likely emotion. The facial expression recognition system presented in this research work contributes a resilient face recognition model based on the mapping of behavioral characteristics onto physiological biometric characteristics. The physiological characteristics of the human face relevant to various expressions, such as happiness, sadness, fear, anger, surprise and disgust, are associated with geometrical structures that are stored as the base matching template for the recognition system. The behavioral aspect of this system relates the attitude behind different expressions as a property base. The property bases are separated into exposed and hidden categories in the genetic algorithm's genes. The gene training set evaluates the expressional uniqueness of individual faces and provides a resilient expression recognition model for the field of biometric security.

9. REFERENCES

[1] A. Mollahosseini, D. Chan, and M. H. Mahoor. Going deeper in facial expression recognition using deep neural networks. IEEE Winter Conference on Applications of Computer Vision, 2016.

[2] B.-K. Kim, J. Roh, S.-Y. Dong, and S.-Y. Lee. Hierarchical committee of deep convolutional neural networks for robust facial expression recognition. Journal on Multimodal User Interfaces, pages 1-17, 2015.

[3] F. Chollet. Keras. https://github.com/fchollet/keras, 2015.

[4] P. Ekman and W. V. Friesen. Emotional facial action coding system. Unpublished manuscript, University of California at San Francisco, 1983.

[5] B. Graham. Fractional max-pooling. 2015.

[6] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR Proceedings, 2015.


