
BANGLADESH UNIVERSITY OF BUSINESS AND TECHNOLOGY

PROJECT REPORT ON

Emotion Recognition Using Facial Expressions

Course Code: 352
Course Title: Artificial Intelligence and Expert System Lab

Submitted By:
Nowreen Haque Biswas (ID-17181103043)
Md. Momin (ID-17181103046)
Nazin Nahar (ID-17181103056)
Intake-37-2, Department of CSE

Submitted To:
Dr. M. Firoz Mridha
Associate Professor
Department of CSE

Date of Submission: 4/11/2020
Contents
Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Approval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Copyright . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1 Introduction 11
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Problem Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Project Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5 Project Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.2 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.6 Organization of the Project Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2 Background 14
2.1 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Problem Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Supporting Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.1 Used Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.2 Numpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.3 OpenCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.4 TensorFlow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.5 Keras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.6 Pillow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.7 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

3 Proposed Model 17
3.1 Feasibility Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.1 Technical Feasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.2 Operational Feasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.3 Economic Feasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.4 Schedule feasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Requirement Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2.1 Functional requirements: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2.2 Non-Functional requirements: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4.1 Sequence Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5.1 Phases in Facial Emotion Recognition . . . . . . . . . . . . . . . . . . . . . . . . 20

4 Implementation and Testing 22


4.1 Result Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2 Application Outcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

5 Conclusions 24


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.2 Future Work /Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

List of Figures
3.1 System Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Sequence Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

4.1 Flow Chart of Testing/Predicting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


4.2 The seven expressions from one subject . . . . . . . . . . . . . . . . . . . . . . . . . . . 23


Declaration
We declare that this report and the work presented in it are our own and have
been generated by us as the result of our own original research.
We confirm that:
• This work was done wholly or mainly while in candidature for a course
project at this University.
• This report has not been previously submitted for any degree at this
university or any other educational institution.
• Where we have quoted from the work of others, the source is always given. With
the exception of such quotations, this report is entirely our own work.

———— ———— ————


Nowreen Haque Md.Momin Nazin Nahar
ID: 17181103043 ID: 17181103046 ID: 17181103056


Dedication

Dedicated to our parents and teachers for all their love and inspiration.


Acknowledgements
First of all, we are thankful to and express our gratitude to Almighty Allah, who has given
us His divine blessing, patience, and the mental and physical strength to complete this project work.
We are deeply indebted to our project supervisor Dr. M. Firoz Mridha, Associate Professor,
Department of Computer Science and Engineering (CSE), Bangladesh University of Business
and Technology (BUBT). His scholarly guidance, important suggestions, effort in going through
our drafts and correcting them, and encouragement from the beginning to the end of the project
work have made the completion of this report possible.
A very special gratitude goes out to all our friends for their support and help in implementing
our work. The discussions with them on various topics have been very helpful for us to enrich
our knowledge and understanding of the work.
Last but not least, we are highly grateful to our parents and family members for supporting
us throughout the writing of this project report and our life in general.


Abstract
Human beings display their emotions using facial expressions. For humans it is
very easy to recognize those emotions, but for computers it is very challenging.
The modern world is changing with each pulse. New technologies are taking their
place in every sector of our day-to-day life. Image processing is one of the major
pioneers in this changing world. With a single click many things take place, and
many things are possible with the help of an image. A text image can be translated
from one language to another without any help from a human interpreter. One can
also save time by texting someone an image, as a single image explains many things.
Images are also used to identify a person on social media and on many other websites.
For this reason Facial Emotion Recognition is becoming more popular every day. With
the help of Facial Emotion Recognition it is possible to identify a person very easily.
What if one could tell what kind of emotional state a person is in? It would help one
to approach that person; for example, if a person is sad, one can do something to make
him or her feel happy, and so on.
In this project we investigated whether it is possible to identify a person's emotional
state. We also researched how to suggest music on the basis of his or her emotion.
For the purpose of this project, entitled "Emotion Recognition Using Facial Expressions",
we worked on the recognition of seven basic human emotions. These emotions are angry,
disgust, fear, happy, sad, surprise and neutral.


Approval
This project report, "Emotion Recognition Using Facial Expressions", submitted by Nowreen Haque Biswas
(ID-17181103043), Md. Momin (ID-17181103046) and Nazin Nahar (ID-17181103056), Department of
Computer Science and Engineering (CSE), Bangladesh University of Business and Technology (BUBT),
under the supervision of Dr. M. Firoz Mridha, Associate Professor, Department of Computer Science
and Engineering, has been accepted as satisfactory for the partial fulfillment of the requirement
for the degree of Bachelor of Science (B.Sc. Eng.) in Computer Science and Engineering and approved
as to its style and contents.

——————————–
Supervisor:
Dr. M. Firoz Mridha
Associate Professor
Department of Computer Science and Engineering (CSE)

Bangladesh University of Business and Technology (BUBT) Mirpur-2, Dhaka-1216, Bangladesh

——————————–
Chairman:
Prof. Dr. M. Ameer Ali
Professor, Dean and Chairman
Department of Computer Science and Engineering (CSE)

Bangladesh University of Business and Technology (BUBT) Mirpur-2, Dhaka-1216, Bangladesh


Copyright
© All rights reserved

Appendix
FER - Facial Emotion Recognition
OpenCV - Open Source Computer Vision
Pillow - Python Imaging Library

Chapter 1
Introduction
1.1 Introduction

Facial expressions are vital identifiers of human feelings because they correspond directly to emotions. Most
of the time, a facial expression is a nonverbal way of expressing emotion, and it can be considered concrete
evidence to uncover whether an individual is speaking the truth or not.
Current approaches primarily focus on facial investigation while keeping the background intact and hence build
up many unnecessary and misleading features that confuse the CNN training process. The current manuscript
reports a focus on five essential facial expression classes: displeasure/anger, sadness/unhappiness,
smiling/happiness, fear, and surprise/astonishment.
The human face is an important organ of an individual's body, and it plays an especially important role in
revealing an individual's behavior and emotional state. As humans, we classify emotions all the time
without knowing it. Nowadays people spend a lot of time at their work. Sometimes they forget that they
should also find some time for themselves. In spite of their busyness, if they see their own facial expression,
they may try to do something different.
For example, if someone sees that his or her facial expression is happy, then he or she will try to be
even happier. On the other hand, if someone sees that his or her facial expression is sad, then he or she
will try to improve his or her mental condition.
Facial expression plays an important role in detecting human emotion. It is a valuable indicator of a person's
state of mind; in a word, an expression sends a message about his or her internal feelings. Facial expression
recognition is one of the most important applications of image processing, and in the present age a huge
amount of research work is being carried out in this field. Facial-image-based mood detection techniques
provide a fast and useful result for mood detection. The process of recognizing feelings through facial
expressions has been an interesting subject since the time of Aristotle. After 1960 this topic became more
popular, when a list of universal emotions was established and different systems were proposed. With the
arrival of modern technology our expectations have grown, and they have no limit.
As a result, people try to improve image-based mood detection in different ways. There are six basic
universal emotions for human beings: happy, sad, angry, fear, disgust and surprise. From a human's
facial expression we can easily detect these emotions. In this research we propose a useful way to detect
three of these emotions, happy, sad and angry, from frontal facial images.
Our aim, which we believe we have reached, was to develop a method of face mood detection that is fast,
robust, reasonably simple and accurate, with relatively simple and easy-to-understand algorithms and
techniques. The examples provided in this report are real-time and taken from our own surroundings.


1.2 Problem Background

Human emotions and intentions are expressed through facial expressions, and deriving an efficient and
effective feature representation is the fundamental component of a facial expression recognition system.
Facial expressions convey non-verbal cues, which play an important role in interpersonal relations. Automatic
recognition of facial expressions can be an important component of natural human-machine interfaces; it may
also be used in behavioral science and in clinical practice. An automatic facial expression recognition
system needs to solve the following problems: detection and location of faces in a cluttered scene, facial
feature extraction, and facial expression classification.
Facial expression is the common signal for all humans to convey mood. There have been many attempts to
build automatic facial expression analysis tools, as they have applications in many fields such as robotics,
medicine, driver-assist systems, and lie detection.

1.3 Project Objectives

• Emotion recognition deals with the investigation of identifying emotions and the techniques and methods
used for identifying them. Emotions can be identified from facial expressions, speech signals, etc. Numerous
methods have been adopted to infer emotions, such as machine learning, neural networks, artificial
intelligence and emotional intelligence.
• Facial Emotion Recognition is a research area which tries to identify the emotion from the human facial
expression. Surveys state that developments in emotion recognition make complex systems simpler. FER has
many applications, which are discussed later. Emotion recognition is a challenging task because emotions
may vary depending on the environment, appearance, culture and facial reaction, which leads to ambiguous
data. A survey of facial emotion recognition therefore helps a lot in exploring the field.
• Facial Emotion Recognition (FER) is a thriving research area in which many advancements, such as
automatic translation systems and machine-to-human interaction, are happening in industry.
• Emotion recognition serves as an identifier for conversational analysis [7], for example for identifying
unsatisfied customers and measuring customer satisfaction. FER is also used in car on-board systems, where
information about the driver's mental state can be provided to the system to help ensure the safety of the
driver and passengers.

1.4 Motivation

The universality of these expressions means that facial emotion recognition is a task that can also be accom-
plished by computers. Furthermore, like many other important tasks, computers can provide advantages
over humans in analysis and problem-solving.
Computers that can recognize facial expressions can find application wherever efficiency and automation are
useful, including in entertainment, social media, content analysis, criminal justice, and healthcare. For
example, content providers can determine the reactions of a consumer and adjust their future offerings
accordingly.
It is important for a detection approach, whether performed by a human or a computer, to have a taxonomic
reference for identifying the seven target emotions.
Facial expression recognition is growing rapidly as a sub-field of image processing. Some of the possible
applications are human-computer interaction, psychiatric observation, drunk-driver recognition and, most
importantly, lie detection.


1.5 Project Contributions

Once the system is successfully developed, it will bring a great deal of convenience to its users, reduce
human error and be less time-consuming.

1.5.1 Analysis

To develop the system, we first analyzed existing systems and manual processes. We also analyzed the positive
and negative sides of this kind of software. After a detailed analysis of emotion recognition using facial
expressions, we designed this application.

1.5.2 Design

After the analysis we designed this "Emotion Recognition Using Facial Expressions" application. In the design
phase we selected which modules the system needs; after selecting them, we designed each module. We chose
Python, Keras, OpenCV, NumPy and TensorFlow, as this is an AI-based project.

1.5.3 Implementation

After designing all the modules, we started building the project. The designed modules are implemented here.
• Prepare Dataset:
We used the FER-2013 dataset to train and test the model. We extracted the images from the pixel data and
kept them in two folders, train and test, so that the model could be trained and tested and give accurate
predictions.
• Implement Emotions:
In this project the facial expression recognition system is implemented using a convolutional neural network.
Facial images are classified into seven facial expression categories, namely Anger, Disgust, Fear,
Happy, Sad, Surprise and Neutral. The Kaggle dataset is used to train and test the classifier.

1.6 Organization of the Project Report

Because of the usefulness of image processing, our research deals with this method. The main aim of the
project is to detect a human's facial expression by applying image processing techniques and to send the
person a message about their internal feelings based on that expression. This application is especially
beneficial for people who remain submerged in despair; it can relieve their stress by playing music or jokes
automatically. We hope that this application will bring a significant change to human life.

Chapter 2
Background
2.1 Literature Review

Two different approaches are used for facial expression recognition, and each of them includes two different
methodologies. Dividing the face into separate action units or keeping it as a whole for further processing
appears to be the first and primary distinction between the main approaches. In both of these approaches,
two different methodologies, namely the 'Geometric-based' and the 'Appearance-based' parameterizations, can
be used. The first approach makes use of the whole frontal face image and processes it in order to end up
with classifications into the six universal facial expression prototypes: disgust, fear, joy, surprise,
sadness and anger.
Here, it is assumed that each of the above-mentioned emotions has characteristic expressions on the face,
and that recognizing them is therefore necessary and sufficient. Instead of using the face image as a whole,
dividing it into sub-sections for further processing forms the main idea of the second approach to facial
expression analysis. As expression is more related to subtle changes of some discrete features such as the
eyes, eyebrows and lip corners, these fine-grained changes are used for automated recognition. Two main
methods are used in both of the approaches explained above. Geometric-based parameterization is an older
way which consists of tracking and processing the motion of some spots in image sequences, first presented
by Suwa et al. to recognize facial expressions. Cohn and Kanade later tried geometrical modeling and tracking
of facial features, claiming that each action unit (AU) is represented by a specific set of facial muscles.
The disadvantages of this method are that the contours of these features and components have to be adjusted
manually in the frame; that problems of robustness and difficulties arise in cases of pose and illumination
changes while the tracking is applied to images; and that, as expressions tend to change both morphologically
and dynamically, it becomes hard to estimate general parameters for movement and displacement. Therefore,
reaching robust decisions about facial actions under these varying conditions becomes difficult. Rather than
tracking spatial points and using positioning and movement parameters that vary over time, appearance-based
parameterizations process the color (pixel) information of the relevant regions of the face in order to
obtain the parameters that will form the feature vector.


2.2 Problem Analysis

Human emotions and intentions are expressed through facial expressions, and deriving an efficient and
effective feature is the fundamental component of a facial expression recognition system. Face recognition
is important for the interpretation of facial expressions in applications such as intelligent man-machine
interfaces and communication, intelligent visual surveillance, teleconferencing and real-time animation from
live motion images. Facial expressions are useful for efficient interaction. Most research and systems in
facial expression recognition are limited to six basic expressions (joy, sadness, anger, disgust, fear,
surprise).
It has been found that this is insufficient to describe all facial expressions, and these expressions are
therefore categorized based on facial actions. Detecting the face and recognizing the facial expression is a
very complicated task, and it is vital to pay attention to primary components such as the face configuration,
orientation and the location where the face is set.

2.3 Supporting Theory

Our system is an AI-based application. We have implemented it with the help of various supporting
technologies and tools. This section explains the technologies and tools used to make this AI-based
software.

2.3.1 Used Technology

Many technologies were used to develop this system. Some of them were used for development purposes and
some of them are part of the software itself. Without them the system cannot work properly.

2.3.2 Numpy

NumPy is a library for the Python programming language, adding support for large, multi-dimensional
arrays and matrices, along with a large collection of high-level mathematical functions to operate on these
arrays.
NumPy contains multi-dimensional array and matrix data structures. It can be utilised to perform a number
of mathematical operations on arrays, such as trigonometric, statistical, and algebraic routines.
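As a brief illustration (a minimal sketch, not part of the project's code), the following shows the kind of NumPy operations that are typically used when preparing image data:

```python
import numpy as np

# Build a small 2-D array (e.g. a 3x3 patch of grayscale pixel values).
patch = np.array([[ 12,  80, 255],
                  [ 34, 128,  60],
                  [  7, 200,  90]], dtype=np.float32)

# Typical operations used when preparing image data:
normalized = patch / 255.0           # scale pixel values to [0, 1]
mean_value = normalized.mean()       # statistical routine
flattened  = normalized.reshape(-1)  # flatten to a 1-D feature vector

print(mean_value, flattened.shape)
```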

2.3.3 OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning
software library. OpenCV was built to provide a common infrastructure for computer vision applications
and to accelerate the use of machine perception in commercial products.
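A minimal OpenCV sketch for illustration only; the file name face.jpg is hypothetical, and the 48x48 size simply matches the FER input described later:

```python
import cv2

# Hypothetical file name used only for illustration.
image = cv2.imread("face.jpg")                  # load a BGR image from disk
if image is None:
    raise FileNotFoundError("face.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert to grayscale
resized = cv2.resize(gray, (48, 48))            # match the 48x48 FER input size
print(resized.shape)
```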

2.3.4 TensorFlow

TensorFlow is a Python library for fast numerical computing created and released by Google. It is a
foundation library that can be used to create deep learning models directly or by using wrapper libraries,
built on top of TensorFlow, that simplify the process.
Created by the Google Brain team, TensorFlow is an open source library for numerical computation and
large-scale machine learning. TensorFlow bundles together a slew of machine learning and deep learning
(aka neural networking) models and algorithms and makes them useful by way of a common metaphor.
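A minimal sketch of TensorFlow's numerical-computation style, assuming TensorFlow 2.x with eager execution (not project code):

```python
import tensorflow as tf

# Two small constant tensors.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [0.5]])

# Fast numerical computation: matrix multiplication and an element-wise op.
c = tf.matmul(a, b)
d = tf.nn.relu(c - 2.0)
print(c.numpy(), d.numpy())
```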


2.3.5 Keras

Keras is a high-level neural networks library written in Python, which makes it extremely simple and
intuitive to use. It works as a wrapper to low-level libraries such as TensorFlow or Theano.
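As an illustration of how simple Keras makes model definition, the following sketch builds a small CNN for 48x48 grayscale faces and seven emotion classes; it is an assumed example architecture, not necessarily the exact network used in this project:

```python
from tensorflow.keras import layers, models

# A small CNN for 48x48 grayscale faces and seven emotion classes.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),  # seven emotion categories
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```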

2.3.6 Pillow

Python Imaging Library (abbreviated as PIL, in newer versions known as Pillow) is a free and open-source
additional library for the Python programming language that adds support for opening, manipulating, and
saving many different image file formats.
Some of the file formats supported are PPM, PNG, JPEG, GIF, TIFF, and BMP. It is also possible to create
new file decoders to expand the library of file formats accessible.
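A minimal Pillow sketch for illustration (the file names are hypothetical) showing how an image can be opened, converted to grayscale, resized and saved in another format:

```python
from PIL import Image

# Hypothetical file names used only for illustration.
img = Image.open("face.png")        # open an image file
gray = img.convert("L")             # convert to 8-bit grayscale
small = gray.resize((48, 48))       # resize to the FER input size
small.save("face_48x48.jpg")        # save in a different format
```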

2.3.7 Convolution

The primary purpose of convolution in a CNN is to extract features from the input image. Convolution
preserves the spatial relationship between pixels by learning image features using small squares of input
data. The convolution layer's parameters consist of a set of learnable filters. Every filter is small
spatially (along width and height) but extends through the full depth of the input volume. For example,
a typical filter on the first layer of a CNN might have size 3x5x5 (i.e. the images have depth 3, the color
channels, and the filter is 5 pixels in width and height). During the forward pass, each filter is convolved
across the width and height of the input volume and computes dot products between the entries of the filter
and the input at each position. As the filter convolves over the width and height of the input volume it
produces a 2-dimensional activation map that gives the responses of that filter at every spatial position.
Intuitively, the network will learn filters that activate when they see some type of visual feature, such as
an edge of some orientation or a blotch of some color on the first layer, or eventually entire honeycomb or
wheel-like patterns on higher layers of the network. There will be an entire set of filters in each
convolution layer (e.g. 20 filters), and each of them will produce a separate 2-dimensional activation map.
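To make the sliding dot-product concrete, the following plain-NumPy sketch (an illustration under assumed toy data, not the project's implementation) convolves a single filter over a single-channel input and produces a 2-dimensional activation map:

```python
import numpy as np

def conv2d_single_channel(image, kernel):
    """Slide a small filter over a 2-D input and compute dot products,
    producing a 2-D activation map (no padding, stride 1)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=np.float32)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            window = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(window * kernel)   # dot product of filter and patch
    return out

image  = np.random.rand(6, 6).astype(np.float32)   # toy 6x6 grayscale input
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=np.float32)  # simple vertical-edge filter
print(conv2d_single_channel(image, kernel).shape)  # -> (4, 4) activation map
```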

Rectified Linear Unit

An additional operation called ReLU is used after every convolution operation. A Rectified Linear Unit
(ReLU) is a cell of a neural network which uses the following activation function to calculate its output
given x: R(x) = max(0, x). Using these cells is more efficient than sigmoid units and still forwards more
information compared to binary units. When initializing the weights uniformly, half of the weights are
negative, which helps create a sparse feature representation. Another positive aspect is the relatively
cheap computation: no exponential function has to be calculated. This function also prevents the vanishing
gradient problem, since the gradients are either linear functions or zero, but in no case non-linear functions.

Pooling

Spatial Pooling (also called subsampling or downsampling) reduces the dimensionality of each feature map
but retains the most important information. Spatial Pooling can be of different types: Max, Average, Sum
etc. In case of Max Pooling, a spatial neighborhood (for example, a 2×2 window) is defined and the largest
element is taken from the rectified feature map within that window. In case of average pooling the average
or sum of all elements in that window is taken. In practice, Max Pooling has been shown to work better.
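The following NumPy sketch (illustrative only, with toy values) applies ReLU followed by 2x2 max pooling to a small feature map, mirroring the ReLU and pooling steps described above:

```python
import numpy as np

def relu(x):
    # R(x) = max(0, x), applied element-wise
    return np.maximum(0, x)

def max_pool_2x2(feature_map):
    """Take the largest value in each non-overlapping 2x2 window."""
    h, w = feature_map.shape
    fm = feature_map[:h - h % 2, :w - w % 2]            # crop to even dimensions
    return fm.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[-1.0,  2.0,  0.5, -3.0],
                        [ 4.0, -0.5,  1.0,  2.5],
                        [ 0.0,  1.5, -2.0,  0.2],
                        [-1.2,  3.3,  0.7,  0.9]])

rectified = relu(feature_map)      # negative responses become zero
pooled = max_pool_2x2(rectified)   # 4x4 map reduced to a 2x2 map
print(pooled)
```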

Chapter 3
Proposed Model
3.1 Feasibility Analysis

Feasibility and requirements analysis, in systems engineering and software engineering, encompasses the
tasks that go into determining the needs or conditions to be met for a new or altered product, taking
account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or
users. A software manager also needs to satisfy the expectations of both management and clients.
Before starting the project, a feasibility study is carried out to measure the viability of the system. A
feasibility study is necessary to determine whether creating a new or improved system is compatible with
the cost, benefits, operation, technology and time available. The feasibility study is given below:

3.1.1 Technical Feasibility

Technical feasibility is one of the first studies that must be conducted after the project has been identified.
A technical feasibility study covers the required hardware and software. The required technologies (the
Python language and the PyCharm IDE) exist.

3.1.2 Operational Feasibility

Operational feasibility is a measure of how well a proposed system solves the problem and takes advantage
of the opportunities identified during scope definition. The following points were considered for the
project's operational feasibility:
The system will detect and capture an image of the face.
The captured image is then classified into an emotion category.

3.1.3 Economic Feasibility

The purpose of economic feasibility is to determine the positive economic benefits, including their
quantification and identification. The system is economically feasible due to the availability of all
requirements, such as the FER dataset prepared by Pierre-Luc Carrier and Aaron Courville.

3.1.4 Schedule feasibility

Schedule feasibility is a measure of how reasonable the project timetable is. The system is found to be
schedule-feasible because it is designed in such a way that it will finish within the prescribed time.


3.2 Requirement Analysis

Requirements analysis is critical to the success of a development project. Requirements must be docu-
mented, actionable, measurable, testable, related to identified business needs or opportunities, and defined
to a level of detail sufficient for system design. Requirements can be architectural, structural, behavioral,
functional, and non-functional.
In the planning phase, a study of reliable and effective algorithms was done. At the same time, data were
collected and preprocessed for finer and more accurate results. Since a huge amount of data was needed for
better accuracy, we collected the data by surfing the internet. Since we are new to this project, we decided
to use the local binary pattern algorithm for feature extraction and a support vector machine for training
on the dataset. We decided to implement these algorithms using the OpenCV framework.
Requirement analysis is mainly categorized into two types:

3.2.1 Functional requirements:

The functional requirements for a system describe what the system should do. Those requirements depend on
the type of software being developed and the expected users of the software. They are statements of the
services the system should provide, how the system should react to particular inputs and how the system
should behave in particular situations.

3.2.2 Non-Functional requirements:

Non-functional requirements are requirements that are not directly concerned with the specific functions
delivered by the system. They may relate to emergent system properties such as reliability, response time
and store occupancy. Some of the non-functional requirements related to this system are listed below:

Reliability:

Reliability for this system is defined by the evaluation results: correct identification of facial
expressions and a high recognition rate of facial expressions for any input image.

Ease of Use:

The system is simple and user-friendly, with a graphical user interface, so anyone can use it without any
difficulty.

3.3 System Architecture

3.3.1 Methodology

The methodology is the general research strategy that outlines the way in which research is to be undertaken
and, among other things, identifies the methods to be used in it. These methods, described in the method-
ology, define the means or modes of data collection or, sometimes, how a specific result is to be calculated.
Methodology does not define specific methods, even though much attention is given to the nature and kinds
of processes to be followed in a particular procedure or to attain an objective. When proper to a study
of methodology, such processes constitute a constructive generic framework, and may therefore be broken
down into sub-processes, combined, or their sequence changed.
A methodology is the design process for carrying out research or the development of a procedure, and is not
in itself an instrument, method or procedure for doing things.


System design shows the overall design of system. In this section we discuss in detail the design aspects of
the system.

Figure 3.1: System Diagram

3.4 System Requirements

The data consist of 48x48-pixel grayscale images of faces. The faces have been automatically registered so
that the face is more or less centered and occupies about the same amount of space in each image. The task
is to categorize each face, based on the emotion shown in the facial expression, into one of seven categories
(0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral).
train.csv contains two columns, "emotion" and "pixels". The "emotion" column contains a numeric code
ranging from 0 to 6, inclusive, for the emotion that is present in the image. The "pixels" column contains
a string surrounded by quotes for each image; the contents of this string are space-separated pixel values in
row-major order. test.csv contains only the "pixels" column, and the task is to predict the emotion column.
The training set consists of 28,709 examples. The public test set used for the leaderboard consists of 3,589
examples. The final test set, which was used to determine the winner of the competition, consists of another
3,589 examples.
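As a sketch of how the CSV format described above can be decoded (the path "train.csv" is assumed to point at the downloaded FER-2013 file), the pixel strings can be turned into 48x48 arrays like this:

```python
import numpy as np
import pandas as pd

# Hypothetical path; FER-2013 train.csv has "emotion" and "pixels" columns.
data = pd.read_csv("train.csv")

labels = data["emotion"].values                   # integer codes 0-6
faces = np.stack([
    np.array(row.split(), dtype=np.uint8).reshape(48, 48)
    for row in data["pixels"]
])

faces = faces.astype("float32") / 255.0           # normalize pixel values
faces = faces[..., np.newaxis]                    # add channel dim: (N, 48, 48, 1)
print(faces.shape, labels.shape)
```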


3.4.1 Sequence Diagram

Figure 3.2: Sequence Diagram


This is the sequence diagram of Emotion recognition using facial expressions.

3.5 Implementation

3.5.1 Phases in Facial Emotion Recognition

The facial expression recognition system is trained using a supervised learning approach in which it takes
images of different facial expressions. The system includes a training and a testing phase, comprising image
acquisition, face detection, image preprocessing, feature extraction and classification. Face detection and
feature extraction are carried out on face images, which are then classified into the basic expression
classes. The phases are outlined below:
• Image Acquisition: Images used for facial expression recognition are static images or image
sequences. Images of the face can be captured using a camera.
• Face Detection: Face detection locates the facial region in an image. Face detection is carried out
on the training dataset using a Haar classifier, the Viola-Jones face detector, implemented through
OpenCV (a minimal sketch follows this list). Haar-like features encode the difference in average intensity
in different parts of the image and consist of black and white connected rectangles, in which the value of
the feature is the difference of the sums of pixel values in the black and white regions.
• Image Pre-processing: Image pre-processing includes the removal of noise and normalization against
variation of pixel position or brightness, e.g. color normalization.


• Feature Extraction: Selection of the feature vector is the most important part of a pattern
classification problem. The face image, after pre-processing, is used for extracting the important
features. The inherent problems related to image classification include scale, pose, translation and
variations in illumination level.
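A minimal sketch of Viola-Jones face detection with OpenCV's bundled Haar cascade, as referenced in the face detection phase above (the input file name is hypothetical):

```python
import cv2

# Load OpenCV's pre-trained Viola-Jones (Haar cascade) frontal-face detector.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")                 # hypothetical input image
if image is None:
    raise FileNotFoundError("group_photo.jpg not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; returns a list of (x, y, w, h) bounding boxes.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_roi = gray[y:y + h, x:x + w]                 # crop the face region
    face_roi = cv2.resize(face_roi, (48, 48))         # resize for the classifier
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected_faces.jpg", image)
```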

Chapter 4
Implementation and Testing
4.1 Result Analysis

The aim of this project work is to develop a complete facial expression recognition system; the FER dataset
prepared by Pierre-Luc Carrier and Aaron Courville was used for the experimentation. First of all, the
system was trained using different random samples from each dataset by supervised learning. In each dataset
the data were partitioned into two parts, for training and testing. Every dataset has completely different
samples, which are selected randomly and uniformly from the pool of the given dataset.
Evaluation of the system can be done using the following methods:
Precision
Precision estimates the predictive value of a label, either positive or negative, depending on the class for
which it is calculated; in other words, it assesses the predictive power of the algorithm. Precision is the
percentage of correctly assigned expressions in relation to the total number of expressions assigned to that
class.
The accuracy on this dataset is 78 percent.
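As an illustration of the precision measure described above (with toy labels only, not the project's actual results), per-class precision and overall accuracy can be computed as follows:

```python
import numpy as np

def per_class_precision(y_true, y_pred, n_classes=7):
    """Precision per class: true positives / all predictions of that class."""
    precisions = []
    for c in range(n_classes):
        predicted_c = np.sum(y_pred == c)
        true_positive = np.sum((y_pred == c) & (y_true == c))
        precisions.append(true_positive / predicted_c if predicted_c else 0.0)
    return np.array(precisions)

# Toy labels (0=Angry ... 6=Neutral) used only for illustration.
y_true = np.array([3, 3, 5, 0, 6, 3, 4])
y_pred = np.array([3, 5, 5, 0, 6, 3, 3])

print(per_class_precision(y_true, y_pred))
print("accuracy:", np.mean(y_true == y_pred))
```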

Figure 4.1: Flow Chart of Testing/Predicting


This shows how the system is implemented.


Figure 4.2: The seven expressions from one subject


The seven expressions used to detect emotions.

4.2 Application Outcome

Our system can be used in digital cameras, where the image can be captured only when the person smiles, and
in security systems which can identify a person in any form of expression he or she presents.
Rooms in homes can set the lights and television to a person's taste when they enter the room.
Doctors can use the system to understand the intensity of pain or illness of a deaf patient.
Our system can also be used to detect and track a user's state of mind, and in mini-marts and shopping
centers to gather customer feedback to enhance the business.

Chapter 5
Conclusions
5.1 Introduction

This project proposes an approach for recognizing the category of facial expressions. Face detection and
extraction of expressions from facial images are useful in many applications, such as robotics vision, video
surveillance, digital cameras, security and human-computer interaction. This project's objective was to
develop a facial expression recognition system implementing computer vision and enhancing advanced feature
extraction and classification in face expression recognition.
Experimental results on the FER dataset, developed by Pierre-Luc Carrier and Aaron Courville, show that our
proposed method can achieve good performance. Facial expression recognition is a very challenging problem.
More effort should be made to improve the classification performance for important applications. Our future
work will focus on improving the performance of the system and deriving more appropriate classifications,
which may be useful in many real-world applications.

5.2 Future Work /Goals

Facial expression recognition systems have improved a lot over the past decade. The focus has definitely
shifted from posed expression recognition to spontaneous expression recognition. Promising results can be
obtained under face registration errors, with fast processing time and a high correct recognition rate (CRR),
and significant performance improvements can be obtained in our system. The system is fully automatic and
has the capability to work with an image feed. It is able to recognize spontaneous expressions.
The future scope of the system would be to design a mechanism that automatically plays music or videos based
on the person's facial mood. This system would also be helpful in music therapy treatment and would give the
music therapist the help needed to treat patients suffering from disorders like mental stress, anxiety, acute
depression and trauma.


5.3 References

[1] Shan, C., Gong, S., McOwan, P. W. (2005, September). Robust facial expression recognition using local
binary patterns. In Image Processing, 2005 (ICIP 2005), IEEE International Conference on (Vol. 2, pp.
II-370). IEEE.
[2] A Facial Expression Recognition System. Tribhuvan University, Institute of Science and Technology,
Kathford International College of Engineering and Management.
[3] Tian, Y. I., Kanade, T., Cohn, J. F. Recognizing action units for facial expression analysis. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 23(2), 97-115.
[4] https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data

