
2016 International Conference on Advanced Communication Control and Computing Technologies (ICACCCT)

Automatic Emotion Detection Model from Facial Expression

1 Debishree Dagar, 2 Abir Hudait, 3 H. K. Tripathy, 4 M. N. Das
1,2,4 School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar 751024, India
3 School of Computing, University Utara Malaysia, Sintok, Kedah, Malaysia
[email protected], [email protected], [email protected], [email protected]

Abstract—The human face plays a prodigious role in the automatic recognition of emotion and in the interaction between human and computer for real applications such as driver state surveillance, personalized learning, and health monitoring. Most reported facial emotion recognition systems, however, do not fully consider subject-independent dynamic features, so they are not robust enough for real-life recognition tasks with subject (human face) variation, head movement, and illumination change. In this article we design an automated framework for emotion detection from facial expression. For human-computer interaction, facial expression provides a platform for non-verbal communication. Emotions are changeable happenings evoked by an impelling force, so detecting them in real-life applications is a very challenging task. A facial expression recognition system must cope with the many sources of variability in the human face, such as color, orientation, expression, posture, and texture. In our framework we take frames from live streaming and process them using Gabor feature extraction and a neural network. To detect the emotion, facial attributes are extracted by principal component analysis and the different facial expressions are clustered with their respective emotions. Finally, to determine each facial expression, the processed feature vector is channeled through the already learned pattern classifiers.

Index Terms—Face Detection, Gabor Feature Extraction, Neural Network, Facial Expressions, Emotion Recognition, Facial Attribute Extraction, Principal Component Analysis (PCA), Pattern Classification, K-means Clustering.

I. INTRODUCTION

The article introduced here concentrates on the creation of a smart framework with the inherent capability of inferring emotion from facial expressions. Recently, the notion of emotion recognition has been attracting researchers' attention in the areas of smart systems and human-computer interaction. Based on facial attributes, a facial expression can be classified into one of the six well-known fundamental emotions: sadness, disgust, happiness, fear, anger, and surprise [1]. Coren and Russel [1] stated that each emotion has the property of stereoscopic perceptual conflict, so establishing an effective automatic emotion recognition framework is a very challenging task.

Emotion recognition [2][3][5] is useful for smooth communication in human-computer interaction. The recognition of human emotion has wide applications in heterogeneous fields, mainly man-machine interaction, patient surveillance, and inspecting for antisocial motives. We can even recognize the emotions of customers by analyzing their responses on seeing a certain commodity or advertisement, or immediately after receiving a message; based on those responses, the resource hub can improve its strategies [1].

The first aim of this work is to incorporate an anatomical grip on emotion recognition. Facial behavior is represented using the Facial Action Coding System (FACS). FACS couples transient appearance changes with the action of muscles from an anatomical perspective. FACS employs Action Units (AUs), which represent the muscular activities that produce facial expressions. Generally, most AUs are invoked by a single muscle; however, in some scenarios two or more AUs [13] are used to express the relatively autonomous activity of several segments of one specific muscle. FACS has catalogued 46 Action Units overall, which deliver a multifaceted procedure for expressing a large variety of facial behavior [13].

The rest of the paper is organized as follows. Section II reports some of the influential work in the domain of emotional intelligence. Section III touches on emotion taxonomy. Section IV describes the dataset. Section V discusses the methodology: frame extraction from live streaming and well-known face detection through a neural network are touched on lightly, then PCA is discussed in detail and K-means is applied with a little modification for clustering, including a flow chart and the output of each individual step. Section VI presents the results and the experimental analysis. Finally, Sections VII and VIII present the extension of our work and the conclusion.

II. RELATED WORK

In emotional recognition of the face, notable advancement has been observed in the fields of neuroscience, cognitive science, and computational intelligence [1][5][6]. Kharat and Dudul also showed that about 55% of the overall effect of emotion expression during social interactions is contributed through facial expression.

Actually, facial muscles generate momentary adaptations in facial appearance which can be recapitulated with Action Units. The six common emotions are considered globally recognizable because the muscle movements of these emotional expressions are very similar across people from various regions and societies. Therefore, we have concentrated mainly on the automatic recognition of these six fundamental emotions.

In general, emotion recognition is a two-step procedure involving extraction of significant features followed by classification [1]. Feature extraction determines a set of independent attributes which together can portray a facial expression of emotion. For classification, the features are mapped into one of several emotion classes such as anger, happiness, sadness, disgust, and surprise [1]. For the effectiveness of a facial expression identification model, both the group of feature attributes taken for feature extraction and the classifier responsible for classification are equally significant. With a badly picked collection of feature attributes, even a smart classification mechanism may fail to produce an ideal outcome. Thus, to obtain high classification accuracy and a qualitative outcome, picking superior features plays a major role.

The circumflex model by Russell and the recognition of six basic emotions have made remarkable contributions to the field of emotion recognition. Beyond these, Kudiri M. Krishna, Said Abas Md, and Nayan M. Yunus [2] tried to detect emotion using sub-image based features through facial expressions. Silva C. De Liyanage, Miyasato Tsutomu, and Nakatsu Ryohei [4] formed a model for emotion recognition with the help of multimodal information. Maja Pantic and Ioannis Patras [5] implemented an approach for recognizing facial actions and their temporal segments from face profile image sequences by considering the dynamic property of facial actions. Li Zhang, Ming Jiang, Dewan Farid, and M. A. Hassain [13] modelled an intelligent system for automatic emotion recognition. Happy S. L. and Routray Aurobinda [7] created an automatic emotion recognition system using salient features. This article is greatly influenced by all of these contributions to the field of emotion recognition.

III. EMOTION TAXONOMY

According to emotion theorists and psychologists, emotions can be categorized along a range from the six globally displayed fundamental emotions to complicated emotions that originate from different cultures. Among the several frameworks reported in the field of emotion recognition, two models have held command of this research domain: Ekman's fundamental set of emotions [1], and Russell's circumflex representation of affect [1]. Ekman and Friesen in 1971 [1] put forward six quintessential basic emotions (disgust, joy, sadness, fear, anger, and surprise) which are globally presented and identified from facial expressions.

Fig 1. The Circumflex Representation of Russell.

Over the last five decades, this model with six basic emotions has become the most popular and usual model for estimating emotions and detecting them from their respective facial expressions. Some time later a different model of emotion was presented by Russell, in which emotional states are depicted on a two-pole ring in a two-dimensional space instead of categorizing each emotion distinctly.
IV. DATASET

To simulate our proposed model, the JAFFE dataset has been used. JAFFE has 213 sample images and 213 lines in its key-point file. Each line contains the positions of 77 key points, thus making a 154-dimensional vector. In addition, all_labels.txt contains all sample labels in numerical form, with the label mapping: NEU = 0; HAP = 1; SAD = 2; SUR = 3; ANG = 4; DIS = 5; FEA = 6. Out of the 213 samples, 20 samples are used for training for each respective emotion.
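To make this data layout concrete, the following is a minimal loading sketch. The paper names all_labels.txt, but the key-point file name ("key_points.txt") and its whitespace-delimited format are assumptions:

```python
import numpy as np

# Label mapping as stated in the dataset description above.
LABELS = {0: "NEU", 1: "HAP", 2: "SAD", 3: "SUR", 4: "ANG", 5: "DIS", 6: "FEA"}

def load_keypoints(path="key_points.txt"):     # hypothetical file name
    """Each line holds 77 (x, y) key points -> one 154-dimensional vector."""
    vectors = []
    with open(path) as f:
        for line in f:
            values = [float(v) for v in line.split()]
            assert len(values) == 154, "expected 77 (x, y) pairs per sample"
            vectors.append(values)
    return np.asarray(vectors)                 # shape (213, 154) for JAFFE

def load_labels(path="all_labels.txt"):
    """all_labels.txt holds one numeric label (0..6) per sample line."""
    with open(path) as f:
        return np.asarray([int(line.split()[0]) for line in f if line.strip()])

# X, y = load_keypoints(), load_labels()
# The paper uses 20 samples per emotion for training.
```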
V. IMPLEMENTATION

A. Frame Extraction and Face Detection

Initially we take a live video stream and extract frames from the video. Then we try to detect those frames that contain a face. To detect a face in a frame we use an existing well-known technique in which the Gabor method is used for feature extraction and a neural network is used for learning; combined, they are used for face detection [14][15][16]. We then send the frame for further processing, where the model has already been learned for emotion detection.


Fig 2. Architecture of Artificial Neural Network (input layer, hidden layer with activation function g(S) and threshold t, and output layer; training the ANN means learning the weights of the neurons).

Fig 3. Generalized Model for Face Detection (extracted frames as input image; Gabor feature extraction from the image; face detection using the filter; creation of a database to train the neural network; training of the network; testing of new images).

i) Multi-Layer Perceptron

The feed-forward architecture of the Multi-Layer Perceptron (MLP) neural network [19][20][21] consists of three layers: an input layer, a hidden layer, and an output layer. For an N-dimensional input vector there exist N units in the input layer. The input units are fully cascaded to the I hidden-layer units, which are in turn connected to the J output-layer units, where J is the number of output classes. A training set of l pairs (xi, yi) is assumed to be available, where xi is the pattern vector and yi is the corresponding pattern class. In a 2-class task we can code yi as 1 and -1.

The MLP comprises three layers: the input layer is a vector of n^2 neurons (for n x n pixel input images), the hidden layer contains n neurons, and the output layer contains a single neuron that goes to 1 when a face is present and stays off otherwise. The activity of the jth neuron in the hidden layer can be represented as

$$S_j = \sum_i w_{ji} x_i, \qquad y_j = f(S_j) \qquad (1)$$

where f is a sigmoid function, $w_{ji}$ are the weights of neuron j, $b_j$ is its threshold, and $x_i$ is an input to the neuron. The output-layer activity is computed in the same way from the hidden-layer outputs.

In this network, human faces and non-faces are presented with 27 x 18 pixels as the dimension of the retina; the input vector and the hidden layer have 2160 neurons and 100 neurons, respectively.
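A minimal numpy sketch of this forward pass, using the layer sizes stated above, could look as follows; the weights are randomly initialized here and the backpropagation training loop is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Layer sizes as stated in the text: 2160 input units, 100 hidden, 1 output.
W1 = rng.normal(scale=0.01, size=(100, 2160))   # hidden-layer weights w_ji
b1 = np.zeros(100)                              # hidden thresholds
W2 = rng.normal(scale=0.01, size=(1, 100))      # output-layer weights
b2 = np.zeros(1)

def mlp_forward(x):
    """Forward pass: S_j = sum_i w_ji x_i + b_j, then y_j = f(S_j)."""
    hidden = sigmoid(W1 @ x + b1)        # hidden-layer activities
    output = sigmoid(W2 @ hidden + b2)   # single unit: near 1 => face
    return output

# x = flattened pattern vector of length 2160
# is_face = mlp_forward(x) > 0.5
```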
ii) Gabor Feature Extraction

In this work we have chosen Gabor features for face detection. The main advantage of Gabor filters is that they provide optimal simultaneous resolution in both the space and frequency domains [21]. The mathematical formulation is:

$$\Psi_j(\vec{x}) = \frac{k_j^2}{\sigma^2}\exp\left(-\frac{k_j^2 x^2}{2\sigma^2}\right)\left[\exp\left(i\,\vec{k}_j\cdot\vec{x}\right) - \exp\left(-\frac{\sigma^2}{2}\right)\right]$$

The following characteristic wave vector gives the center frequency of the jth filter,

$$\vec{k}_j = \begin{pmatrix} k_{jx} \\ k_{jy} \end{pmatrix} = \begin{pmatrix} k_v \cos\theta_\mu \\ k_v \sin\theta_\mu \end{pmatrix}$$

which comprises a scale and an orientation given by $(k_v, \theta_\mu)$. The term $\exp(-\sigma^2/2)$ eliminates the bias, where $\sigma$ is constant. Convolving the image with complex Gabor filters with 5 spatial frequencies (v = 0,…,4) and 8 orientations (μ = 0,…,7) captures the whole frequency spectrum, both amplitude and phase (Fig 4.(d)). In a broader sense, the feature extraction system comprises the following two phases:

(1) Localization of feature points
(2) Feature vector generation
1. Localization of feature points

Feature vectors including special facial features are extracted from the face image by finding the maximum-valued pixels in a window W0 of size W x W centered at (x0, y0), by the following procedure:

$$R_j(x_0, y_0) = \max_{(x,y)\in W_0} R_j(x, y) \qquad (1)$$

$$R_j(x_0, y_0) > \frac{1}{N_1 N_2}\sum_{x=1}^{N_1}\sum_{y=1}^{N_2} R_j(x, y), \qquad j = 1,\dots,40 \qquad (2)$$

where $R_j$ is the response of the face image to the jth Gabor filter, $N_1 \times N_2$ is the size of the face image, and the center of the window W0 is at (x0, y0).
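A direct sketch of Eqs. (1)-(2) using scipy; the window size W = 9 is an assumption:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def feature_points(responses, window=9):
    """Locate feature points per Eqs. (1)-(2).

    responses: array of shape (40, N1, N2) holding the magnitudes R_j of the
    image's responses to the 40 Gabor filters. A pixel is kept if it is the
    maximum of R_j inside a W x W window (Eq. 1) and exceeds the mean
    response over the whole image (Eq. 2).
    """
    points = set()
    for R in responses:
        local_max = (R == maximum_filter(R, size=window))   # Eq. (1)
        above_mean = R > R.mean()                           # Eq. (2)
        ys, xs = np.nonzero(local_max & above_mean)
        points.update(zip(xs.tolist(), ys.tolist()))
    return sorted(points)                                   # (x0, y0) pairs
```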


2. Feature vector generation

Feature vectors [15][16] are formed as a composition of the Gabor wavelet coefficients at the feature points. The kth feature vector of the ith reference face image is defined as

$$v_{i,k} = \{x_k, y_k, R_j(x_k, y_k),\ j = 1,\dots,40\}, \qquad k = 1,\dots,N_i \qquad (3)$$

where a feature point is represented by the coordinates $(x_k, y_k)$, the samples of the Gabor filter responses at that coordinate are denoted $R_j$, and the number of feature vectors of image i is $N_i$.
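Assembling the Eq. (3) vectors from the localized points is then straightforward; a short sketch, assuming the response magnitudes and points from the previous step:

```python
import numpy as np

def feature_vectors(responses, points):
    """Eq. (3): v_k = {x_k, y_k, R_j(x_k, y_k), j = 1..40} per feature point.

    responses: (40, N1, N2) Gabor response magnitudes of one face image.
    points:    (x_k, y_k) coordinates from the localization step.
    """
    vectors = []
    for x_k, y_k in points:
        jets = responses[:, y_k, x_k]                 # all 40 filter responses
        vectors.append(np.concatenate(([x_k, y_k], jets)))
    return np.asarray(vectors)                        # shape (N_i, 42)
```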

ii. a) Similarity of feature vectors

To compute the similarity of two composite feature vectors, we apply the following similarity function, ignoring the phase:

$$S_i(k,j) \approx \frac{\sum_l \lvert v_{i,k}(l)\rvert\,\lvert v_{i,j}(l)\rvert}{\sqrt{\sum_l \lvert v_{i,k}(l)\rvert^2 \sum_l \lvert v_{i,j}(l)\rvert^2}} \qquad (4)$$

where $S_i(k, j)$ is the similarity of the jth feature vector of the test face, $v_{i,j}$, to the kth feature vector of the ith reference face, $v_{i,k}$. The overall similarity $OS_i$ of the two faces is calculated by

$$OS_i = \frac{\sum_j Sim_{i,j}}{N_i} \qquad (5)$$

where

$$Sim_{i,j} = \max_{k = 1,\dots,N_i} S_i(k, j) \qquad (6)$$

$OS_i$ represents the overall similarity of the test face to the ith reference face, where $N_i$ and $N_j$ are the numbers of feature vectors of the ith reference face and the test face, respectively.
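Eqs. (4)-(6) translate into a few lines of numpy; this sketch follows the paper's $N_i$ normalization in Eq. (5):

```python
import numpy as np

def similarity(v_ref, v_test):
    """Eq. (4): normalized correlation of two composite feature vectors."""
    num = np.sum(np.abs(v_ref) * np.abs(v_test))
    den = np.sqrt(np.sum(np.abs(v_ref) ** 2) * np.sum(np.abs(v_test) ** 2))
    return num / den

def overall_similarity(ref_vectors, test_vectors):
    """Eqs. (5)-(6): best reference match per test vector, then OS_i."""
    sims = np.array([max(similarity(v_k, v_j) for v_k in ref_vectors)
                     for v_j in test_vectors])   # Sim_{i,j}, Eq. (6)
    return sims.sum() / len(ref_vectors), sims   # OS_i, Eq. (5)
```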

ii. b) Face Comparison

In this stage [15], first the location and similarity are found, and those reference images whose feature vectors are not adequately similar to the feature vectors of the test image are eliminated. In the next step, $OS_i$ is used to find the similarity. To fulfil the goal, for each feature vector of the test face the reference faces are sorted according to their similarity measure, and the number of times each reference face takes first position is counted as

$$C_i = \sum_j \delta\left(Sim_{i,j} = \max_{i'} Sim_{i',j}\right) \qquad (7)$$

and we look for the best candidate match by maximizing

$$FSF_i = OS_i + \beta\,(C_i / n_i)$$

where i = 1,…, indexes the reference faces, $n_i$ is the number of feature vectors of the ith reference image, and $\beta$ is a weighting factor.
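A possible ranking sketch for Eq. (7) and $FSF_i$; the data layout (one $Sim_{i,j}$ array per reference face) and the value of $\beta$ are assumptions, since the paper does not specify them:

```python
import numpy as np

def best_match(all_sims, overall, ref_counts, beta=0.5):
    """Pick the reference face maximizing FSF_i = OS_i + beta * (C_i / n_i).

    all_sims:   one Sim_{i,j} array per reference face i (one value per test
                feature vector j), e.g. from overall_similarity() above.
    overall:    the corresponding OS_i values.
    ref_counts: n_i, the number of feature vectors of each reference face.
    beta:       weighting factor; its value is not given in the paper.
    """
    stacked = np.vstack(all_sims)      # rows: reference faces, cols: test j
    winners = stacked.argmax(axis=0)   # reference ranked first per j, Eq. (7)
    scores = [os_i + beta * (np.sum(winners == i) / n_i)        # FSF_i
              for i, (os_i, n_i) in enumerate(zip(overall, ref_counts))]
    return int(np.argmax(scores))
```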



Fig 4. Output of the individual steps of Gabor feature extraction: (a) input image for our proposed framework; (b) real parts of the Gabor filters; (c) magnitudes of the Gabor filters; (d) real parts of the Gabor filters of the experimented image; (e) magnitudes of the Gabor filters of the experimented image; (f) face-detected frame.

B. Learning and Principal Component Analysis

In the learning phase, we first create clusters for the respective emotions using PCA [18]. Whenever a new frame arrives, its PCA will be generated and compared in the test phase. Here we have trained our system with different facial expressions. Principal component analysis is a technique mainly used for dimensionality reduction, lossy data compression, and feature extraction. PCA can be described as orthogonally projecting the data onto a lower-dimensional linear space, called the principal subspace, so as to maximize the variance of the projected data [18]. For the maximum-variance formulation we consider a data set of observations $\{x_n\}$, n = 1, 2, …, N, where $x_n$ is a Euclidean variable of dimensionality D. We need to project the data onto a space of dimensionality M < D while maximizing the variance of the projected data; the value of M is assumed.

For the projection we consider only a one-dimensional space (M = 1). The direction of this space is defined by a vector $u_1$ of dimension D, taken to be a unit vector so that $u_1^T u_1 = 1$. Each data point $x_n$ is then projected onto a scalar value $u_1^T x_n$. The mean of the projected data is $u_1^T \bar{x}$, where $\bar{x}$ is the sample set mean given by

$$\bar{x} = \frac{1}{N}\sum_{n=1}^{N} x_n$$

The variance of the projected data is given by

$$\frac{1}{N}\sum_{n=1}^{N}\left\{u_1^T x_n - u_1^T \bar{x}\right\}^2 = u_1^T S u_1$$

where we have represented the data covariance matrix by S:

$$S = \frac{1}{N}\sum_{n=1}^{N}(x_n - \bar{x})(x_n - \bar{x})^T$$
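The direction $u_1$ maximizing $u_1^T S u_1$ is the eigenvector of S with the largest eigenvalue, so the whole formulation above reduces to a short eigendecomposition sketch:

```python
import numpy as np

def first_principal_component(X):
    """Maximum-variance PCA direction, following the formulation above.

    X: data matrix with one observation x_n per row (N x D).
    Returns the sample mean and the unit vector u1 maximizing u1^T S u1.
    """
    mean = X.mean(axis=0)                     # sample mean x_bar
    centered = X - mean
    S = centered.T @ centered / len(X)        # covariance matrix S
    eigvals, eigvecs = np.linalg.eigh(S)      # S is symmetric -> eigh
    u1 = eigvecs[:, -1]                       # eigenvector of largest eigenvalue
    return mean, u1

# projections = (X - mean) @ u1 gives the 1-D principal components;
# for M > 1, take the last M columns of eigvecs instead.
```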
C. Testing and Pattern Classification

During the test phase we take any random image from the dataset and use pattern classification to find the emotion of that particular image. For pattern classification we use the very well-known K-means clustering with a modified approach: instead of making a cluster for each individual emotion by creating a cluster centre per emotion, we use the principal components as they are, because for the same human being and a similar emotion the output of the principal components may vary up to a certain extent.

So for testing the emotion we consider two measures simultaneously:

a) Overlapping measure.
b) Minimum distance measure.

In the overlapping measure we try to find the amount of overlap of the principal components of the test image with the principal components of the respective trained emotions, if overlap occurs. In the minimum distance measure, we use the K-means approach: for any principal component of the test image we measure the distance to the principal components of the different (or the same) emotions and consider only the minimum one. Finally, by comparing these two results we are able to find the emotion of the test image.
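One way this two-measure decision could be realized is sketched below. The paper gives no numeric definition of the overlapping measure, so the per-dimension range overlap used here, as well as the tie-breaking rule, are assumptions:

```python
import numpy as np

def classify(test_pc, trained_pcs):
    """Combine the two measures described above (a hedged sketch).

    test_pc:     principal-component vector of the test image.
    trained_pcs: dict mapping emotion label -> list of stored principal
                 components for that emotion (kept as-is, no cluster centres).
    """
    scores = {}
    for emotion, components in trained_pcs.items():
        # b) minimum distance measure: nearest stored component per emotion
        d_min = min(np.linalg.norm(test_pc - c) for c in components)
        # a) overlapping measure (assumed): fraction of coordinates lying
        #    inside the per-dimension range spanned by that emotion's components
        lo = np.min(components, axis=0)
        hi = np.max(components, axis=0)
        overlap = np.mean((test_pc >= lo) & (test_pc <= hi))
        scores[emotion] = (d_min, overlap)
    # prefer the smallest distance; break ties by the larger overlap
    return min(scores, key=lambda e: (scores[e][0], -scores[e][1]))
```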


Fig 5. Computational Flowchart for Emotion Recognition: Start → accumulation of live streaming → extraction of frames → (preprocessing and creation) apply the well-known face detection technique using Gabor feature extraction and neural networks → take the face-detected frame → (learning) accumulation of images with the six basic emotions and extraction of features using PCA → apply the modified K-means for clustering → (testing and validation) does the new image find a match? If yes, the emotion is detected successfully; if no, our system fails to identify the emotion → Stop.

VI. RESULTS AND ANALYSIS

We have represented each of the emotions in Fig 4.(a), (b), (c), (d), (e), (f), (g) with a different colour, only to distinguish them. For each plot we considered three to four samples of each individual emotion, took their principal components, and plotted those components together for better visualization. We can observe that for the same person the principal components of some emotions overlap, owing to their frontal dynamics, as depicted in Fig 4.(g), (h). In Fig 4.(g) we plotted each principal component against its respective emotion, but some of them overlap and some are not clearly visible because the interval between the plots is very small. For further justification we took a scatter plot in Fig 4.(h). So we can say that one facial expression consists of one major emotion together with small percentages of the other emotions. We have shown the trained output of the different emotions, individually and together, with a few samples for each respective emotion, and we have shown which emotion class a test image belongs to. For testing we took a sample with more than one instance (Fig 4.(i)), evaluated its PCA in Fig 4.(j), and then sent it through the trained model. We can observe in Fig 4.(k) that it overlaps with the angry emotion. In Fig 4.(m) we again considered all the emotions, including the overlap with the experimented image, by scatter plotting. We plotted Fig 4.(k) for better visibility of the overlapping measure, and Fig 4.(m) for the minimum distance measure to the different emotions. The result could be further optimized if we trained our model with a large number of versatile input images.

[Fig 4.(a)-(d): principal components of the angry, happy, disgust, and sad emotions.]


[Fig 4.(e)-(k): principal components of the neutral and surprise emotions; principal components of all emotions; scatter output of all emotions; experimented image with different instances; PCA of the experimental images; overlap with the angry-mode PCA (two emotions have overlapped).]

[Fig 4.(l)-(m): scatter output of the same experimental image; output showing the minimum distance measure (the PCA of the new image instances is closest to angry).] Fig 4. Output of each individual step of our proposed approach.

VII. FUTURE WORK

We can make our automated framework for emotion detection more efficient by improving the pattern classifiers, so that the emotion of a new face, and the emotion cluster to which it belongs, can be handled more accurately. It would also be fascinating to consider both auditory and visual information together with further attributes, such as EEG signals and facial colour, with the expectation that this kind of multi-modal information processing will become a datum of information processing in the future multimedia era. We could even improve the accuracy by taking the principal components of each individual portion of the face (eyes, nose, lips, forehead, cheeks, etc.) and then comparing them with the experimented image.

VIII. CONCLUSION

To date, all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle the temporal dynamics of facial actions. Also, some human beings do not show their emotion and mental state through facial expression; in such situations our proposed model fails to recognize the emotion and provides a FALSE POSITIVE result. Notwithstanding this shortcoming, we have shown, based on experimental confirmation, that the proposed framework for automatic emotion detection can be well applied to real-time facial expression and emotion characterization tasks.

REFERENCES

[1] D. Anurag, S. Ashim, 'A Comparative Study on different approaches of Real Time Human Emotion Recognition based on Facial Expression Detection', International Conference on Advances in Computer Engineering and Applications, IEEE, 2015.
[2] Kudiri M. Krishna, Said Abas Md, Nayan M. Yunus, 'Emotion Detection Using Sub-image Based Features Through Human Facial Expressions', International Conference on Computer & Information Science (ICCIS), IEEE, 2012.
[3] Pal Pritam, Iyer Ananth N. and Yantorno Robert E., 'Emotion Detection From Infant Facial Expression', International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2006.
[4] Silva C. De Liyanage, Miyasato Tsutomu, Nakatsu Ryohei, 'Facial Emotion Recognition Using Multi-modal Information', International Conference on Information, Communication and Signal Processing, IEEE, 1997.
[5] Maja Pantic, Ioannis Patras, 'Dynamics of Facial Expression: Recognition of Facial Actions and their Temporal Segments from Face Profile Image Sequences', IEEE Transactions on Systems, Man and Cybernetics, 2006.
[6] Songfan Yang, Bir Bhanu, 'Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image', IEEE Transactions on Systems, Man and Cybernetics, 2012.
[7] Happy S. L., Routray Aurobinda, 'Automatic Facial Expression Recognition Using Features of Salient Facial Patches', IEEE Transactions on Affective Computing, 2014.
[8] Mohammad Soleymani, Sadjad Asghari-Esfeden, Yun Fu, Maja Pantic, 'Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection', IEEE Transactions on Affective Computing, 2015.
[9] Leh Luoh, Chih-Chang Huang, Hsueh-Yen Liu, 'Image Processing Based Emotion Recognition', International Conference on System Science and Engineering, IEEE, 2010.
[10] F. Abdat, C. Maaoui and A. Pruski, UKSIM 5th European Symposium on Computer Modeling and Simulation, IEEE, 2011.
[11] Kenny Hong, Stephan K. Chalup, Robert A. R. King, 'A Component Based Approach for Classifying the Seven Universal Facial Expressions of Emotion', IEEE, 2013.
[12] A. Ghahari, Y. Rakhshani Fatmehsari and R. A. Zoroofi, 'A Novel Clustering-Based Feature Extraction Method for an Automatic Facial Expression Analysis System', 5th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IEEE, 2009.
[13] Li Zhang, Ming Jiang, Dewan Farid, M. A. Hassain, 'Intelligent Facial Emotion Recognition and Semantic-Based Topic Detection for a Humanoid Robot', Expert Systems with Applications, Elsevier, 2013.
[14] Lin-Lin Huang, Akinobu Shimizu, and Hidefumi Kobatake, 'Classification-Based Face Detection Using Gabor Filter Features', Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004.
[15] Wang Chuan-xu, Li Xue, 'Face Detection Using BP Network Combined with Gabor Wavelet Transform', Ninth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, IEEE, 2008.
[16] Burcu Kepenekci, F. Boray Tek, 'Occluded Face Recognition Based on Gabor Wavelets', IEEE, 2002.
[17] http://www.kasrl.org/jaffe_info.htm.


[18] Christopher M. Bishop, 'Pattern Recognition and Machine Learning', Springer, ISBN 978-0-387-31073-2.
[19] Gupta Bhaskar, Gupta Sushant, Tiwari Arun Kumar, 'Face Detection Using Gabor Feature Extraction and Artificial Neural Network', International Journal of Computer Science and Technology (IJCST), Vol. 1, Issue 1, September 2010.
[20] A. G. Ramakrishnan, S. Kumar Raja, and H. V. Raghu Ram, 'Neural Network-Based Segmentation of Textures Using Gabor Features', IEEE, 2002.
[21] Muthukannan K., Latha P. and Manimaran, 'Implementation of Artificial Neural Network for Face Recognition Using Gabor Feature Extraction', ICTACT Journal on Image and Video Processing, ISSN: 0976-9102, Volume 04, Issue 02, November 2013.

