

Designing a FER System to Monitor Isolated Patients

(1st Project Review)

Presented By

Team Member Name        USN
Bammidi Ketan Rao       1SB19CS013
Bammidi Pragati Rao     1SB19CS014
Harish R                1SB18CS024
Shakti Swetha I A       1SB19CS099

Under the Guidance of,


Dr. Smitha J A,
Head of Department, C.S.E.

Department of Computer Science and Engineering


Abstract

• Facial emotion recognition is one of the most active research areas, with many researchers contributing to it over the past few decades.

• The model aims to create a smart prototype that can identify a patient's expressions or emotions from static images processed through the proposed approach.

• Detected emotions such as fear, disgust, and anger can then be processed to derive critical information.

• The study focuses on using Convolutional Neural Networks to develop a model that facilitates facial expression recognition for patient surveillance.



Aim / Objective

• The primary aim of this study is to build an efficient system that identifies basic human emotions from static images using our proposed model.

• Among the wide range of human emotions, seven basic emotions are considered: happiness, sadness, anger, disgust, neutrality, fear, and surprise. These emotions convey a great deal of information about a person's state of mind and can thus be processed (the label mapping used for these classes is sketched below).

• The processed outcome helps distinguish critical situations, for example monitoring an isolated patient during a virus outbreak and informing the doctor about the patient's current emotional state, say 'frustrated' or 'tense'.
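For reference, the public FER-2013 dataset encodes these seven emotions as integer class labels. A minimal sketch of that mapping, assuming the standard FER-2013 label order, is given below; later sketches in this document reuse it.

# Standard FER-2013 integer labels for the seven basic emotions
# (assumes the usual FER-2013 ordering; verify against the dataset actually used).
EMOTION_LABELS = {
    0: "angry",
    1: "disgust",
    2: "fear",
    3: "happy",
    4: "sad",
    5: "surprise",
    6: "neutral",
}

def label_to_emotion(label: int) -> str:
    """Map a predicted class index to its emotion name."""
    return EMOTION_LABELS.get(label, "unknown")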



Introduction

• Emotions are an important part of any interpersonal communication and play an important role in every human's life.

• In different situations, humans display different emotions based on their mood. These can appear in disparate forms that may or may not be visually noticeable.

• Facial emotion recognition is the process used to detect emotions from the face; humans themselves vary widely in how accurately they read others' emotions.

• The model helps focus on the important features of the human face to detect emotion, using multiple datasets such as FER-2013 and an image dataset.

• Face detection is a pre-processing stage that supports emotion detection; during feature extraction, various features of the face are extracted; in the final stage, labels are produced and a CNN model is trained (a sketch of the face-detection stage follows this list).

• We can use this model to predict a person's emotion in situations where doctors or other professionals cannot physically examine the patient.
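As an illustration of the face-detection pre-processing stage described above, the sketch below uses OpenCV's bundled Haar-cascade detector to crop and resize the largest detected face to the 48x48 grayscale format used by FER-2013. The function name and target size are illustrative assumptions, not the project's exact code.

import cv2

# Pre-processing stage: locate a face and crop it to a 48x48 grayscale patch.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def extract_face(image_path, size=(48, 48)):
    """Detect the largest face in an image and return it as a resized grayscale crop."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in the image
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return cv2.resize(gray[y:y + h, x:x + w], size)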



Requirement Analysis

Hardware Requirements
• System : Intel i5, 2.8 GHz
• Memory : 4 GB
• Hard Disk : 200 GB HDD
• Optical image capture device

Software Requirements
• Operating System : Windows 10 or later
• Language : Python 3
• Platform : Anaconda IDE



Existing System and its Disadvantages

• The existing system achieves an accuracy score of nearly 88%, but only after numerous processing steps.

• The model fails to handle situations where a person's expression reflects severe distress, e.g., patients in pain.

• In the existing system, emotion recognition is not implemented as a major component of the model.

• It only captures static images and predicts human emotions, without following up with a next-best action.



Proposed System and its Advantages

• The model is divided into three stages: face detection, feature extraction, and emotion classification.

• Face detection is a pre-processing stage that supports emotion detection; during feature extraction, various features of the face are extracted; in the final stage, labels are produced and the model is trained.

• The proposed system uses a different and more efficient architecture for the intermediate layers of the CNN, namely ResNet-50, which then predicts the human expression (a model-building sketch is shown below).

• The proposed system is built around ResNet-50, which is considerably more efficient and capable than the existing model's architecture.

• The proposed system aims to exceed the accuracy achieved by the existing system.
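A minimal sketch of how a ResNet-50 backbone could feed a seven-class emotion classifier is given below, using Keras. The handling of the single grayscale channel, the resize to 224x224, and the use of ImageNet weights are illustrative assumptions rather than the project's final architecture.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_model(num_classes=7, input_shape=(48, 48, 1)):
    """ResNet-50 backbone with a small softmax head for seven emotions."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Concatenate()([inputs, inputs, inputs])  # replicate grayscale to 3 channels
    x = layers.Resizing(224, 224)(x)                    # size expected by ResNet-50
    backbone = ResNet50(include_top=False, weights="imagenet", pooling="avg")
    x = backbone(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model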



Literature Survey

1. IEEE paper. Authors: R. Raja Subramanian, Chunduri Sandya Niharika, Dondapati Usha Rani, Parvathareddy Pavani, Ketepalli Poojita Lakshmi Syamala.
   Title: Design and Evaluation of a Deep Learning Algorithm for Emotion Recognition.
   Objective: a model that focuses on important features of the human face to detect emotion using multiple datasets.
   Method: Convolutional Neural Networks. Inference: illumination and pose modelling.

2. IEEE paper. Authors: Christopher Pramerdorfer, Martin Kampel.
   Title: Facial Expression Recognition using Convolutional Neural Networks: State of the Art.
   Objective: recognize facial expressions automatically to foster human-computer interaction.
   Method: Convolutional Neural Networks. Inference: image-based facial expression recognition model.

3. IEEE paper. Authors: Rohit Pathar, Abhishek Adivarekar, Arti Mishra, Anushree Deshmukh.
   Title: Human Emotion Recognition using Convolutional Neural Network in Real Time.
   Objective: give the machine a human-like ability to recognize and analyse human emotions.
   Method: Convolutional Neural Networks. Inference: grayscale image training.

4. MDPI paper. Authors: Christian Mancini, Luca Falciati, Claudio Maioli, Giovanni Mirabella.
   Title: Threatening Facial Expressions Impact Goal-Directed Actions Only if Task-Relevant.
   Objective: test the impact of task relevance on emotional stimuli without conflating movement planning with target detection and task switching.
   Method: Go/No-go task, decision model. Inference: model with target detection and task switching.

5. IJITEE paper. Author: S. Nithya Roopa.
   Title: Research on Face Expression Recognition.
   Objective: detailed review and comparison of research works in the field of facial expression identification.
   Method: SVM, KNN. Inference: feature extraction methods are exploited.



Identification of Problem

• Among the wide range of human emotions, seven basic emotions are considered: happiness, sadness, anger, disgust, neutrality, fear, and surprise. These emotions convey a great deal of information about a person's state of mind.

• The application can thus be used in health care for patient monitoring, providing additional information about the patient's state.

• These seven categories are, however, insufficient to describe all facial expressions, so the expressions of interest are categorized based on facial actions.

• Detecting the face and recognizing its expression is a complex task, since attention must be paid to primary elements such as face configuration, orientation, and the location of the face in the image.



Proposed System Design

• The model is divided into three stages: face detection, feature extraction, and emotion classification.

• The proposed system takes as input a stream of data from the FER-2013 dataset and processes it.

• Next, the data is pre-processed and organised into seven classes or categories to separate consistent from inconsistent data.

• Finally, the system uses a ResNet-50 convolutional neural network (50 layers deep) to predict the outcome of the model (a training-pipeline sketch follows this list).
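A sketch of this pipeline is shown below. It assumes the public FER-2013 CSV layout ('emotion' and 'pixels' columns of space-separated grayscale values) and reuses the hypothetical build_model helper from the previous slide; hyperparameters such as epochs and batch size are placeholders.

import numpy as np
import pandas as pd
from tensorflow.keras.utils import to_categorical

def load_fer2013(csv_path="fer2013.csv"):
    """Load FER-2013 rows into normalised 48x48x1 arrays and one-hot labels."""
    data = pd.read_csv(csv_path)
    pixels = np.stack([
        np.asarray(p.split(), dtype=np.float32).reshape(48, 48, 1)
        for p in data["pixels"]
    ]) / 255.0                                   # scale pixel values to [0, 1]
    labels = to_categorical(data["emotion"], num_classes=7)
    return pixels, labels

x, y = load_fer2013()
model = build_model()                            # ResNet-50 sketch from the previous slide
model.fit(x, y, validation_split=0.2, epochs=25, batch_size=64)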



Scope of the Project

• The scope of this project is limited to the prediction of human emotions under seven classes, namely happy, sad, neutral, angry, surprise, fear, and disgust.

• The project uses the FER-2013 dataset and an image dataset. The Facial Expression Recognition 2013 (FER-2013) database consists of 48x48-pixel images showing various emotions.

• The dataset contains more than 35,000 examples at 48x48 resolution, most of which were captured in unconstrained, in-the-wild settings.

• This can be extended to facilitate virtual monitoring: the model can examine static images of individuals to assess their condition (a brief inference sketch follows this list).

• It can also be used in health care to track a patient's mental state by processing their facial expressions.
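A brief usage sketch of monitoring a single static image is given below. It reuses the hypothetical extract_face, build_model, and label_to_emotion helpers from the earlier sketches, and the file name is purely illustrative.

import numpy as np

# Predict the emotion shown in one static image of an isolated patient.
model = build_model()                            # in practice, load trained weights instead
face = extract_face("patient_frame.jpg")         # 48x48 grayscale crop, or None
if face is None:
    print("No face detected in the frame.")
else:
    batch = face.astype("float32")[None, :, :, None] / 255.0  # shape (1, 48, 48, 1)
    probs = model.predict(batch)[0]
    print("Predicted emotion:", label_to_emotion(int(np.argmax(probs))))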



References

[1] Mingli Song, Jiajun Bu, Chun Chen, and Nan Li, "Audio-Visual Based Emotion Recognition: A New Approach," in Proc. CVPR 2004, vol. 2, pp. II-II, IEEE, 2004.

[2] S. Nithya Roopa, "Research on Face Expression Recognition," International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN 2278-3075, vol. 8, issue 9S2, July 2019.

[3] Christian Mancini, Luca Falciati, Claudio Maioli, and Giovanni Mirabella, "Threatening Facial Expressions Impact Goal-Directed Actions Only if Task-Relevant," published October 2020.

[4] A. D. Deshmukh, U. B. Shinde, D. L. Bhuyar, and M. G. Nakrani, "Face Recognition Using Computer Vision Based on Internet of Things for an Intelligent Door Lock System," 2019.

[5] Rohit Pathar, Abhishek Adivarekar, Arti Mishra, and Anushree Deshmukh, "Human Emotion Recognition using Convolutional Neural Network in Real Time."



THANK YOU

