
Facial Emotion Detection in Low Light Conditions Using CNN

Chinmayee Mhaskar
Dept. of Information Technology
Thakur College of Science and Commerce
Mumbai University
Mumbai-400101, India
[email protected]

Nidhi Parikh
Dept. of Information Technology
Thakur College of Science and Commerce
Mumbai University
Mumbai, India
[email protected]

Sherilyn Kevin
Dept. of Information Technology
Thakur College of Science and Commerce
Mumbai University
Mumbai, India
[email protected]

Abstract—This paper presents face detection using a convolutional neural network built on an artificial deep learning framework. Older models such as the Haar-cascade method detect faces reasonably well, but there is some uncertainty in their accuracy. In this system we therefore use a recent deep neural network model built with the latest TensorFlow and a deep learning framework that ships with pre-trained weight files. The system performs not only face detection but also recognition of facial emotions such as happy, sad, neutral, and angry in low light, using programs built on TensorFlow and Dldetector. The benefits of this work emerge in areas such as security, airports, and markets.

Keywords—Convolutional neural network, Caffe model, face detection, face emotion detection, low light.

I. INTRODUCTION

Face detection plays a very important role in many applications today. It is useful first to distinguish face detection from face recognition. Face detection simply determines whether a face is present: the facial features are detected first and then the entire face. Face recognition, which identifies whose face it is, is an application built on top of face detection. Face detection is one of the most popular topics in the deep learning field for object detection.

There are hundreds of face detection algorithms today, but the first were developed in the early seventies. Since then, the accuracy of the algorithms has improved so much that face detection is now often preferred over other biometric methods, frequently as part of (or together with) a facial recognition system. It is also used in image database administration, human-computer interaction, and video surveillance.

The main aim of this paper is to achieve better accuracy in low-light facial emotion detection by using a convolutional neural network, which processes its inputs through multiple layers, together with a deep learning framework that comes with a pre-trained Haar cascade model. The system uses the camera of the computer to detect a person's face and recognize the facial emotions of individuals, and is built with TensorFlow, the OpenFace and Dldetector tools, and the Python programming language around a convolutional neural network.

II. LITERATURE SURVEY

1. Ge Wen, Huaguan Chen, Deng Cai, Xiaofei He. Description: A CNN-based model is proposed in this paper for face recognition using OpenCV, a Convolutional Neural Network (CNN), LFW, and the FaceNet architecture, which is used as an image classifier. The OpenCV detector used with the CNN contains an SSD with ResNet-10 as a backbone and is capable of detecting faces in most orientations. The authors achieved 99.33% on the LFW benchmark with only a single CNN model, and similar performance even without face alignment.

2. Kumar Y, Sharma S. Description: Their paper first explains the process of Facial Expression Recognition (FER), which shows how a particular technique is modelled. In addition, a thorough analysis of the most recent FER techniques is provided, with comparisons based on the methods used for feature extraction, classification, and recognition rate.

3. Dhavalikar AS, Kulkarni RK. Description: In this paper, an Automatic Facial Expression Recognition System (AFERS) is proposed. The proposed method has three stages: (i) face detection, (ii) feature extraction, and (iii) facial expression recognition.

4. Anirudha, B Shetty, Bhoomika, Deeksha, Jeevan, Rebeiro, Ramyashree. Description: This project proposes how home security can be enhanced by using a face detection and recognition algorithm (Haar Cascade Classifier). The Haar Cascade Classifier model is used with the KNN algorithm and LBP.

5. P. Anjaneyulu, S.V.S. Prasad, V. Syambabu, A. Vamshi Kumar. Description: This paper presents face detection using an advanced deep neural network method built on a deep learning framework. The model detects faces accurately and paves the way for better recognition systems that can be used in several face biometric applications.

III. METHODOLOGY

Like any other classification task, the emotion recognition problem requires an algorithm to perform feature extraction and categorical classification. In this project, we detect facial emotions in low light with good accuracy. Our emotion detection model is based on a pre-trained convolutional neural network for better identification of emotions. The model is explained below.
Fig. 1. Model Architecture

Proposed Method Modules:

The modules used in our model design are:

i. Face capturing module: During this phase, pictures of people's faces are captured for further processing, using the computer's built-in webcam or an external web camera. The procedure cannot be completed without first taking the image, and the emotions cannot be identified without a captured image.
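The paper does not include the capture code itself; the following is a minimal sketch of this module using OpenCV, assuming the default webcam at index 0 and a hypothetical output file capture.jpg.

```python
import cv2

# Open the default webcam (index 0); an external camera would use index 1, 2, ...
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open the webcam")

# Grab a single frame for further processing
ret, frame = cap.read()
cap.release()

if not ret:
    raise RuntimeError("Failed to capture an image from the webcam")

# Save the captured frame; the later modules work on this image
cv2.imwrite("capture.jpg", frame)
```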
ii. Face recognition module: The first phase in the face
recognition process is to train the host system on the facial
data that has been collected. The face is photographed using
the web camera on the computer system. In this session, we Fig. 2. Layers of Convolution Neural Network
will learn how to detect people's faces using the haar cascade.
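As an illustration of this module (not the authors' exact code), the sketch below runs OpenCV's bundled pre-trained frontal-face Haar cascade on the frame saved by the capture sketch above; the scale factor and neighbour count are typical defaults, not values taken from the paper.

```python
import cv2

# Load OpenCV's pre-trained frontal-face Haar cascade shipped with the library
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Haar cascades operate on grayscale images
frame = cv2.imread("capture.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one bounding box (x, y, w, h) per detected face
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", frame)
print(f"Detected {len(faces)} face(s)")
```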
iii. Expression recognition module: Facial expression recognition software detects emotions in human faces from facial parameters. Because it collects and analyses information directly from images, it can provide an emotional reaction, or data, that is unfiltered and impartial.
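The paper does not show how a detected face is handed to the emotion classifier; the sketch below assumes a trained Keras model stored at a hypothetical path emotion_model.h5, a face crop saved by the previous module at a hypothetical path face_crop.jpg, and the 48x48 grayscale input described in section C below.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Assumed label order; it must match the ordering used when the model was trained.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical path to the trained CNN described in section C below
model = load_model("emotion_model.h5")

# A face crop produced by the previous module (e.g. gray[y:y+h, x:x+w])
face = cv2.imread("face_crop.jpg", cv2.IMREAD_GRAYSCALE)

# Resize to the model input size, scale to [0, 1], and add batch/channel axes
face = cv2.resize(face, (48, 48)).astype("float32") / 255.0
face = face.reshape(1, 48, 48, 1)

# The softmax output gives one probability per emotion class
probs = model.predict(face, verbose=0)[0]
print("Predicted emotion:", EMOTIONS[int(np.argmax(probs))])
```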
A. The Dataset

• The dataset used for training the model is FER2013, which comes from a Kaggle Facial Expression Recognition challenge held a few years ago. The data contains 48x48-pixel grayscale pictures of faces. The faces have been automatically registered, so each face is roughly centered and occupies about the same amount of space in every image.

• The task is to categorize each face, in low-light conditions, into one of seven categories based on the emotion displayed in the facial expression.

• FER2013 consists of 35,888 images, which are further divided into 28,710 training and 7,178 validation images respectively. A loading sketch is given below.
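The loading code is not given in the paper; as one possible way to read the Kaggle FER2013 release, the sketch below parses the fer2013.csv file, in which each row holds an integer emotion label and a space-separated string of grayscale pixel values for one 48x48 face.

```python
import numpy as np
import pandas as pd

# Hypothetical local path to the Kaggle FER2013 CSV release
data = pd.read_csv("fer2013.csv")  # columns: emotion, pixels, Usage

# Each "pixels" field is a space-separated list of grayscale values for one face
images = np.stack([
    np.asarray(row.split(), dtype=np.uint8).reshape(48, 48)
    for row in data["pixels"]
])
labels = data["emotion"].to_numpy()  # integer class label per image (0-6)

print(images.shape, labels.shape)
```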
B. Training model

To train the model, the train_test_split() function is used. This function splits the dataset into training and testing sets, so the training data is not used for testing. A training ratio of 0.80 means 80% of the dataset is used for training and the remainder for testing the model. The Learning Rate (LR) is a configurable training parameter that determines how quickly the model weights are updated. A high LR can cause the model to converge too quickly, while a small LR may lead to more accurate weights (up to convergence) but takes more computation time. The number of epochs is the number of times the dataset is passed forward and backward through the neural network. The dataset is divided into batches to lower the processing time, and the number of training images in a batch is called the batch size.
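A minimal sketch of this step, assuming scikit-learn's train_test_split and the images and labels arrays from the dataset sketch above; the learning rate shown is only a placeholder, while the batch size of 64 and up to 100 epochs match Table I later in the paper.

```python
from sklearn.model_selection import train_test_split

# images, labels: arrays prepared in the dataset sketch above.
# An 80/20 split keeps the training data separate from the data used for testing.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, train_size=0.80, random_state=42
)

# Hyperparameters discussed above: the learning rate is a placeholder value,
# while batch size 64 and up to 100 epochs correspond to Table I.
learning_rate = 1e-3
batch_size = 64
epochs = 100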
Fig. 2. Layers of Convolution Neural Network

C. Convolutional Neural Network Model

• ML models can be built and trained easily using a high-level Application Programming Interface (API) such as Keras. In this project, a sequential CNN model is developed using TensorFlow with the Keras API, since it allows a model to be built layer by layer. TensorFlow is an end-to-end open-source platform for ML, with a flexible collection of tools, libraries, and community resources to build and deploy ML applications.

• A Convolutional Neural Network is used together with data augmentation in this research. The dataset used in this research has variations, as the data was collected from different datasets; as a result, the proposed model is not biased toward any particular dataset. The event flow of this system is as follows. First, the model takes an image from the dataset and detects the face in the image with a Cascade Classifier. If a face is found, it is sent for pre-processing. The data is augmented with the ImageDataGenerator function offered by the Keras API. Finally, the augmented dataset is fed into the CNN to predict the class.

• The model used to classify the facial expression contains 3 convolution layers with 32, 64, and 128 filters respectively, and the kernel size is 3x3.

• The activation function used in the convolution layers is ReLU, which is applied to introduce non-linearity into the model. The model is given 48x48-sized images as input, so the input shape of the model is (48, 48, 1), where 1 refers to the number of channels in the input images.

• The images have been converted into grayscale, which is why the number of channels is 1. After the convolution layers, the model has a pooling layer with a 2x2 pool size, and max pooling has been chosen. Next, there are four fully connected layers which consist of 750, 850, 850, and 750 nodes respectively. A Keras sketch of this architecture is given below.
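Putting the layer description above together, one possible Keras Sequential realisation is sketched below, continuing from the earlier sketches; the placement of the pooling layers, the dense-layer activations, and the augmentation settings are assumptions, since the paper does not state them.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Three 3x3 convolution layers with 32, 64, and 128 filters and ReLU activations,
# each followed here by 2x2 max pooling, then four fully connected layers with
# 750, 850, 850, and 750 nodes, and a softmax output over the seven classes.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(750, activation="relu"),
    Dense(850, activation="relu"),
    Dense(850, activation="relu"),
    Dense(750, activation="relu"),
    Dense(7, activation="softmax"),
])
model.compile(optimizer=Adam(learning_rate=learning_rate),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation through ImageDataGenerator; the specific transforms are assumptions.
augmenter = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                               height_shift_range=0.1, horizontal_flip=True)

# Scale pixels to [0, 1], add the channel axis, and train on augmented batches.
train_flow = augmenter.flow(X_train.reshape(-1, 48, 48, 1) / 255.0,
                            y_train, batch_size=batch_size)
model.fit(train_flow, epochs=epochs)
```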
D. Performance Evaluation

1. Accuracy:

Accuracy is one metric for evaluating classification models. It is the fraction of predictions that the model got right. Formally, accuracy is defined as:

Accuracy = Number of correct predictions / Total number of predictions

For binary classification, accuracy can also be expressed in terms of positives and negatives, as in the following formula:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP stands for True Positives, TN for True Negatives, FP for False Positives, and FN for False Negatives.
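Continuing the earlier sketches, the short snippet below applies the first formula to the held-out test split; it is illustrative only and does not reproduce the results in Table I.

```python
import numpy as np

# Predicted class for each test image is the arg max of the softmax output
probs = model.predict(X_test.reshape(-1, 48, 48, 1) / 255.0, verbose=0)
predictions = np.argmax(probs, axis=1)

# Accuracy = number of correct predictions / total number of predictions
accuracy = np.mean(predictions == y_test)
print(f"Test accuracy: {accuracy:.2%}")
```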
TABLE I. Experimental Analysis and Results

Batch | Epochs | Training Accuracy | Training Loss | Validation Accuracy | Validation Loss
64    | 10     | 57%               | 1.13          | 53%                 | 1.27
64    | 50     | 65%               | 0.92          | 62%                 | 0.12
64    | 75     | 70%               | 0.8           | 65%                 | 0.95
64    | 100    | 85%               | 0.7           | 69%                 | 0.95
IV. OUTPUTS

The following figures show detected facial emotions in low-light conditions.

Fig. 3. Happy Facial Emotion Detected
Fig. 4. Neutral Facial Emotion Detected
Fig. 5. Sad Facial Emotion Detected
Fig. 6. Disgust Facial Emotion Detected

V. CONCLUSION

In this paper, a CNN model is developed to extract facial features and recognize emotions. The FER2013 dataset covers seven emotions; the emotions considered were happy, sad, angry, fearful, surprised, disgusted, and neutral. The images were converted into NumPy arrays, and landmark features were identified and extracted. A CNN model was developed with four phases, where the first three phases had convolution, pooling, batch normalization, and dropout layers, and the final phase consists of the output layers. The CNN model has 35,888 parameters, of which 28,710 are trainable. The best parameter values were determined for these models using the accuracy and loss metrics. The CNN model had an average accuracy of 85.0% and an average loss of 0.93.

ACKNOWLEDGMENT

This paper and the research behind it would not have been possible without the exceptional support of our supervisor, Sherilyn Kevin. Her enthusiasm, knowledge, and exacting attention to detail have been an inspiration and kept our work on track from our first encounter with the log books of Artificial Neural Networks and Deep Learning to the final draft of this paper. Further, we are very thankful to our Head of Department, Dr. Santosh Kumar Singh, for giving us this opportunity. Last but not least, we would like to thank all our friends, family members, non-teaching staff, and colleagues for their support and individual help.
REFERENCES

[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436–444, May 28, 2015.
[2] D. Duncan, G. Shine, and C. English, "Facial emotion recognition in real-time," [Accessed: 2019-05-03].
[3] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "A complete expression dataset for action unit and emotion-specified expression," 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 94–101, June 12, 2010.
[4] M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, "A brief review of facial emotion recognition based on visual information," Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 200–205, April 14, 1998.
[5] M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, "Coding facial expressions with Gabor wavelets," Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 200–205, April 14, 1998.
[6] J. P. Skelley, "Experiments in Expression Recognition," MSc thesis, Massachusetts Institute of Technology, 2005.
[7] Z. Ying, M. Huang, Z. Wang, and Z. Wang, "A New Method of Facial Expression Recognition Based on SPE Plus SVM," Springer-Verlag Berlin Heidelberg, Part II, CCIS 135, 2011, pp. 399–404.
[8] M. Pantic and I. Patras, "Dynamics of Facial Expression: Recognition of Facial Actions and Their Temporal Segments from Face Profile Image Sequences," IEEE Trans. on Systems, Man, and Cybernetics, Part B, vol. 36, no. 2, 2006, pp. 433–449.
[9] I. Kotsia, I. Buciu, and I. Pitas, "An Analysis of Facial Expression Recognition under Partial Facial Image Occlusion," Image and Vision Computing, vol. 26, no. 7, 2008, pp. 1052–1067.
[10] I. Kotsia, I. Buciu, and I. Pitas, "An Analysis of Facial Expression Recognition under Partial Facial Image Occlusion," Image and Vision Computing, vol. 26, no. 7, 2008, pp. 1052–1067.
[11] Senthil Kumar, Keerthana, Sharmila, and Dhivya Shree, "Face Emotion Recognition and Detection," vol. 08, issue 06, June 2021.
[12] Johanna Freeda, Lavanya, Lekhaa Shree, Nivedhitha, and Kavitha, "Emotion Detection using Convolutional Neural Network," vol. 8, issue 3, March 2019.
[13] Pushkata Petkar, K. A. Pujari, Srushti Salgare, and Sumukh Shetty, "Emotion Recognition Using CNN," vol. 09, issue 4, June 2020.
[14] Nguyen Gia Hong, "Facial Emotional Recognition Experiment by applying R-CNN," 05 October 2020.
[15] Hongli Zhang, Alireza Jolfaei, and Mamoun Alazab, "A Face Emotion Recognition Method Using Convolutional Neural Network and Image Edge Computing," volume xx, 2017.
