Emotion Recognition Based On EEG Features in Movie Clips With Channel Selection
DOI 10.1007/s40708-017-0069-3
Received: 12 April 2017 / Accepted: 7 July 2017 / Published online: 15 July 2017
© The Author(s) 2017. This article is an open access publication
M. S. Özerdem
Electrical and Electronics Engineering, Dicle University, 21000 Diyarbakır, Turkey
e-mail: [email protected]

H. Polat (corresponding author)
Electrical and Electronics Engineering, Mus Alparslan University, 49000 Muş, Turkey
e-mail: [email protected]

Abstract  Emotion plays an important role in human interaction. People can explain their emotions in terms of words, voice intonation, facial expression, and body language. However, brain–computer interface (BCI) systems have not reached the desired level to interpret emotions. Automatic emotion recognition based on BCI systems has been a topic of great research in the last few decades. Electroencephalogram (EEG) signals are one of the most crucial resources for these systems. The main advantage of using EEG signals is that they reflect real emotion and can easily be processed by computer systems. In this study, EEG signals related to positive and negative emotions have been classified with preprocessing of channel selection. Self-Assessment Manikins (SAM) was used to determine emotional states. We employed the discrete wavelet transform and machine learning techniques such as the multilayer perceptron neural network (MLPNN) and the k-nearest neighborhood (kNN) algorithm to classify EEG signals. The classifier algorithms were initially used for channel selection. The EEG channels for each participant were evaluated separately, and the five EEG channels that offered the best classification performance were determined. Thus, final feature vectors were obtained by combining the features of the EEG segments belonging to these channels. The final feature vectors with related positive and negative emotions were classified separately using MLPNN and kNN. The classification performance obtained with both algorithms was computed and compared. The average overall accuracies were obtained as 77.14 and 72.92% by using MLPNN and kNN, respectively.

Keywords  Emotion · EEG · Classification · Wavelet transform · Channel selection

1 Introduction

Emotion is a component of human consciousness and plays a critical role in rational decision-making, perception, human interaction, and human intelligence. Emotions can be reflected through non-physiological signals such as words, voice intonation, facial expression, and body language, and many studies on emotion recognition based on these non-physiological signals have been reported in recent decades [1, 2]. Signals obtained by recording the voltage changes occurring on the skull surface as a result of the electrical activity of active neurons in the brain are called EEG [3]. From the clinical point of view, EEG is the most widely used brain-activity-measuring technique for emotion recognition. Furthermore, EEG-based BCI systems provide a new communication channel by detecting the variation in the underlying pattern of brain activities while performing different tasks [4]. However, BCI systems have not reached the desired level to interpret people's emotions.

The interpretation of people's different emotional states via BCI systems and the automatic identification of emotions may enable robotic systems to react emotionally to humans in the future. They will be especially useful in fields such as medicine, entertainment, education and many other areas [5]. BCI systems need variable resources
that can be taken from humans and processed to understand emotions. The EEG signal is one of the most important resources to achieve this target. Emotion recognition is combined with different areas of knowledge such as psychology, neurology and engineering. SAM questionnaires are usually used to classify the affective responses of subjects in the design of emotion recognition systems [6]. However, affective responses are not easily classified into distinctive emotion responses due to the overlapping of emotions.

Emotions can be discriminated with either discrete classification spaces or dimensional spaces. A discrete space allows the assessment of a few basic emotions such as happiness and sadness and is more suitable for unimodal systems [7]. A dimensional space (the valence–arousal plane) allows a continuous representation of emotions on two axes. The valence dimension ranges from unpleasant to pleasant, and the arousal dimension ranges from calm to excited [8]. A higher dimension is better for understanding different states, but the classification accuracies of applications can be lower, as obtained in Ref. [7]. Thus, in this study, EEG signals that are related to positive and negative emotions have been classified with channel selection for the valence dimension only.

In the literature, there are studies in which various signals obtained/measured from people are used in order to determine emotions automatically. We can gather these studies under three areas [9]. The first approach includes studies intended to predict emotions using facial expressions and/or speech signals [10]. However, the main disadvantage of this approach is that permanently catching the spontaneous facial expressions that do not reflect real emotions is quite difficult. Speech and facial expressions vary across cultures and nations as well. The second main approach is based on emotion prediction by tracking the changes in the central autonomic nervous system [11, 12]. Various signals such as electrocardiogram (ECG), skin conductance response (SCR), breath rate, and pulse are recorded; hence, emotion recognition is applied by processing them. The third approach includes studies intended for EEG-based emotion recognition.

In order to recognize emotions, a large variety of studies were specifically conducted within the scope of EEG signals. These studies can simply be gathered under three main areas: health, games, and advertisement. Studies in health are generally conducted by physicians for purposes of helping in disease diagnosis [13–15]. The game sector involves studies in which people use EEG recordings instead of joysticks and keyboards [16, 17]. As per this study, the advertisement sector generally involves studies which aim at recognizing emotions from EEG signals. There are several studies in which different algorithms related to EEG-based classification of emotional states were used. Some main studies related to emotions are given below.

• Murugappan et al. [18] classified five emotions based on EEG signals. They used EEG signals recorded from 64, 24, and 8 EEG channels, respectively. They achieved maximum classification accuracies of 83.26 and 75.21% using kNN and linear discriminant analysis (LDA), respectively. The researchers employed the DWT method to decompose the EEG signal into alpha, beta, and gamma bands; these frequency bands were analyzed for feature extraction.
• Chanel et al. [19] classified two emotions. They employed SAM to determine participant emotions. They used Naive Bayes (NB) and Fisher discriminant analysis (FDA) as the classification algorithms. Classification accuracy was obtained as 72 and 70% for NB and FDA, respectively.
• Zhang et al. [20] applied the PCA method for feature extraction. The features were extracted from two channels (F3 and F4). The classification accuracy obtained by the researchers was 73%.
• Bhardwaj et al. [21] recognized seven emotions using the support vector machine (SVM) and LDA. Three EEG channels (Fp1, P3 and O1) were used in their experiment. The researchers investigated sub-bands (theta, alpha and beta) of the EEG signal. The overall average accuracies obtained were 74.13% using SVM and 66.50% using LDA.
• Lee et al. [22] classified positive and negative emotions. Classification accuracy was obtained as 78.45% by using an adaptive neuro-fuzzy inference system (ANFIS).

In reference to the literature, it is seen that a limited number of EEG channels (e.g., two or three) have been used to detect different emotional states with different classification algorithms such as SVM, MLPNN, and kNN. The aim of this study was to classify EEG signals related to different emotions based on audiovisual stimuli with the preprocessing of channel selection. SAM was used to determine participants' emotional states. Participants rated each audiovisual stimulus in terms of the level of valence, arousal, like/dislike, dominance and familiarity. EEG signals related to positive and negative emotions have been classified according to participants' valence ratings. The DWT method was used for feature extraction from the EEG signals. The wavelet coefficients of the EEG signals were taken as feature vectors, and statistical features were used to reduce the dimension of those feature vectors. EEG signals related to the positive and negative emotion groups were classified by the MLPNN and kNN algorithms. After the preprocessing and feature extraction stages, the classifiers were used for channel selection (Fig. 1). EEG channels that offer
the best classification performance were determined. Thus, final feature vectors were obtained by combining the features of those EEG channels. The final feature vectors were classified and their performances were compared. The steps followed in the classification process are depicted in Fig. 1.

As a reminder, this paper is organized as follows: Sects. 2 and 3 describe the materials and methods employed in the proposed EEG-based emotion recognition system. Section 4 presents the experimental results. Section 5 presents the results and discussion. Finally, Sect. 6 provides the conclusion of this paper.

the authors selected 120 music clips. Half of these stimuli were selected semi-automatically and the other half was selected manually [24]. From the initial collection of 120 music clips, the final 40 test music clips were determined to be presented in the paradigm. These music clips were selected to elicit emotion prominently. A 1-min segment related to the maximum emotional content was extracted from each music clip, and these segments were presented in the final experiment.

2.3 Task
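During the task, participants rate each clip on SAM scales, and segments are later labeled positive or negative from the valence ratings (Sect. 1). A minimal sketch of that labeling step — the 1–9 rating scale and the midpoint threshold of 5 follow the usual SAM/DEAP convention and are assumptions, not values stated in this excerpt:

```python
def label_by_valence(valence_ratings, threshold=5.0):
    """Map SAM valence ratings to emotion labels.

    Ratings above the threshold are treated as positive emotion,
    ratings at or below it as negative. The 1-9 scale and the
    midpoint threshold are assumptions, not values from the paper.
    """
    return ["positive" if v > threshold else "negative" for v in valence_ratings]

# Example: four hypothetical trial ratings for one participant
print(label_by_valence([7.2, 2.5, 5.0, 8.1]))
# -> ['positive', 'negative', 'negative', 'positive']
```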
3 Methods
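The channel-selection step described in Sect. 4.1 scores each of the 32 channels separately with a classifier and keeps the five best per participant. That ranking step can be sketched as follows — the per-channel accuracies below are hypothetical placeholders for the per-channel MLPNN scores, and the function name is illustrative:

```python
def select_best_channels(channel_scores, n_best=5):
    """Return the n_best channel names with the highest
    classification accuracy, as in the per-participant
    channel selection described in Sect. 4.1."""
    ranked = sorted(channel_scores, key=channel_scores.get, reverse=True)
    return ranked[:n_best]

# Hypothetical per-channel accuracies for one participant
scores = {"Fp1": 0.78, "AF3": 0.76, "F3": 0.61, "FC2": 0.74,
          "P3": 0.80, "O1": 0.73, "Cz": 0.58, "F4": 0.62}
print(select_best_channels(scores))
# -> ['P3', 'Fp1', 'AF3', 'FC2', 'O1']
```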
Fig. 4 Sample EEG signals related to positive and negative emotions. The sample EEG signals were measured from channel Fp1. The time axis is in seconds; the amplitude axis is in µV
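Segments such as those in Fig. 4 are decomposed with the DWT and each sub-band is reduced to a handful of statistics. The paper's exact wavelet and statistics are not shown in this excerpt, so the sketch below uses a hand-coded one-level Haar DWT and five common statistics purely as stand-ins:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: split a signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def statistical_features(band):
    """Reduce a sub-band to five statistics (an assumed choice:
    mean, standard deviation, maximum, minimum, energy)."""
    band = np.asarray(band, dtype=float)
    return np.array([band.mean(), band.std(), band.max(),
                     band.min(), np.sum(band ** 2)])

# Example: a synthetic 1-s "EEG segment" at 128 Hz (placeholder data)
rng = np.random.default_rng(0)
segment = rng.standard_normal(128)
approx, detail = haar_dwt(segment)
features = statistical_features(detail)
print(features.shape)  # -> (5,): one five-dimensional feature vector
```

In practice a deeper decomposition with a library such as PyWavelets would separate the alpha, beta, and gamma bands mentioned in the literature review; a single Haar level is used here only to keep the example self-contained.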
At the end of the DWT method and the statistical procedures used for feature extraction, five-dimensional feature vectors belonging to the EEG segments related to every emotional state were obtained.

The training algorithm adjusts the weights to obtain a network that is close to the desired output [32].

3.3.2 k-nearest neighborhood
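kNN assigns a test feature vector the label held by the majority of its k nearest training vectors. A self-contained sketch — the Euclidean distance and k = 3 are assumptions, since the paper's distance metric and k are not shown in this excerpt:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_X, dtype=float)
                           - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 5-D feature vectors for the two emotion classes
train_X = [[1, 1, 1, 1, 1], [1, 2, 1, 1, 1], [8, 8, 9, 8, 8], [9, 8, 8, 9, 8]]
train_y = ["positive", "positive", "negative", "negative"]
print(knn_predict(train_X, train_y, [1, 1, 2, 1, 1]))  # -> positive
```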
(FP) is the number of false decisions related to positive emotion by the automated system; true negative (TN) is the number of true decisions related to negative emotion; and false negative (FN) is the number of false decisions related to negative emotion.

4 Experimental results

4.1 Channel selection

In this study, the MLPNN was first used for channel selection. The EEG recordings measured from 32 channels for every participant were evaluated separately, and the five EEG channels having the highest performance were dynamically determined. When we evaluate the results related to all participants, the same channels provided the highest performances. The classification of EEG signals was achieved by a dynamic model in which the channels were selected for each participant. The dynamic selection process is given below.

For every participant, feature vectors related to EEG segments consisting of positive and negative emotions were classified by an MLPNN. The feature vectors obtained by DWT with statistical calculation were used as input sets for the MLPNN. The number of neurons in the input layer of the MLPNN was five due to the size of the feature vector. The MLPNN output vectors were defined as [1 0] for positive emotion and [0 1] for negative emotion. Thus, the number of neurons in the output layer of the network was two. Hence, the structure used in this study was (5 × n × 2), which is shown in Fig. 6, where n represents the number of neurons in the hidden layer. The number of neurons used in the hidden layer was separately determined for each participant. Each participant had 40 EEG segments (training and testing patterns) in total. Thirty EEG segments were randomly selected for the network training stage, and the remaining 10 EEG segments were selected for the testing stage. In order to increase the reliability of the classification results, the training and testing data were randomly changed four times.

A single hidden layer with a 5 × n × 2 architecture was used in the MLPNN for determining the five EEG channels having the best classification performances. Accuracy (Eq. 6) was taken as the model success criterion for determining the channels. In the training stage of the MLPNN, the network parameters were a learning coefficient of 0.7 and a momentum coefficient of 0.9.

In this study, the EEG signals recorded from 32 channels were examined, and the five EEG channels having the highest performance in emotion recognition were dynamically determined. It was generally observed that the same channels, except a few of them, provided the highest performances. The main intention of determining the EEG channels is the simultaneous processing of EEG signals recorded from different regions of the brain. EEG signals recorded from different regions provide a more comprehensive and dynamic solution to the description of emotional state. At the end of the process, the classification results revealed that the channels having high performances were P3, FC2, AF3, O1 and Fp1 (Fig. 7). From this point of the study, those five channels were used instead of 32.

4.2 Classification of emotions by using MLPNN

Final feature vectors were obtained by combining the features of the EEG segments belonging to the selected channels (P3, FC2, AF3, O1 and Fp1). Thus, new feature vectors composed of 25 samples for every EEG segment
related to positive and negative emotions were obtained. The formation procedure of the final feature vectors with the selected five EEG channels is shown in Fig. 8. All procedures were applied separately for each participant.

The same procedure that was applied for channel selection was employed for the classification of emotions as well. The MLPNN output vectors were defined as [1 0] for positive emotion and [0 1] for negative emotion. While training the network, 30 EEG segments were used, and 10 EEG segments were used for testing. A single hidden layer with a 25 × n × 2 architecture was used to classify the EEG related to emotional states. The number of neurons used in the hidden layer was separately determined for each participant. In the training stage, the learning and momentum coefficients were 0.7 and 0.9, respectively.

The classification process was applied for each participant, and the results are shown in Table 2. According to the SAM valence ratings, two participants out of 22 lacked a valid assessment, and for that reason the classification process was not applied to them.

As shown in Table 2, the percentages of accuracy, specificity, and sensitivity are in the ranges of [60, 90], [62.5, 100] and [58.3, 89.4], respectively. To estimate the overall performance of the MLPNN model, the statistical measures (accuracy, specificity, sensitivity) were averaged.

Table 2  Classification of emotions for each participant using MLPNN

Participant   Accuracy (%)   Specificity (%)   Sensitivity (%)
1             77.5           78.9              76.2
2             72.5           76.4              69.5
3             80             89.4              74.1
4             7              64.2              83.3
5             80             100               71.4
6             75             81.2              70.8
7             90             100               83.3
8             80             80                80
9             60             62.5              58.3
10            65             63.6              66.6
11            80             74.1              89.4
12            85             93.7              79.1
13            75             91.6              67.8
14            67.5           62.9              76.9
15            80             87.5              75
16            72.5           76.4              69.5
17            80             80                80
18            77.5           86.6              72
19            80             100               71.4
20            77.5           78.9              76.1
Average       77.14          92                76.75
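The MLPNN setup described above — a single hidden layer (25 × n × 2), a 30/10 train–test split, a learning coefficient of 0.7, and a momentum coefficient of 0.9 — can be approximated with scikit-learn's MLPClassifier. The synthetic feature vectors and the choice n = 10 below are placeholders, not the paper's data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Placeholder stand-ins for 25-dimensional combined feature vectors:
# 20 "positive" (label 1) and 20 "negative" (label 0) segments
X = np.vstack([rng.normal(1.0, 0.3, size=(20, 25)),
               rng.normal(-1.0, 0.3, size=(20, 25))])
y = np.array([1] * 20 + [0] * 20)

# 30 segments for training, the remaining 10 for testing, as in the paper
idx = rng.permutation(40)
train, test = idx[:30], idx[30:]

# Single hidden layer (n = 10 is a placeholder); SGD with the paper's
# learning coefficient (0.7) and momentum coefficient (0.9)
net = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    solver="sgd", learning_rate_init=0.7, momentum=0.9,
                    max_iter=1000, random_state=0)
net.fit(X[train], y[train])
print(net.score(X[test], y[test]))  # fraction of test segments correct
```

scikit-learn encodes the two classes internally, so its two output units correspond to the paper's explicit [1 0]/[0 1] output coding.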
Table 3  Classification of emotions for each participant using kNN

Participant   Accuracy (%)   Specificity (%)   Sensitivity (%)
1             77.5           78.9              76.2
2             75             75                75
3             75             81.2              70.1
4             55             53.5              72.7
5             80             87.5              75
6             70             81.2              64.7
7             82.5           100               74
8             77.5           82.3              73.9
9             65             71.4              61.5
10            65             71.4              61.5
11            67.5           73.3              64
12            85             100               74
13            70             91.6              63.1
14            62.5           60.8              64.7
15            75             85.7              69.2
16            72.5           53.5              58.33
17            75             90.9              65.5
18            82.5           93.3              76
19            77.5           86.6              72
20            85             85                85
Average       72.92          90                74.37

methods with channel selection. The discussion of the findings obtained from this study is presented below.

(a) Wavelet coefficients related to different emotions were obtained using the DWT method. These wavelet coefficients were evaluated as feature vectors. The size of the feature vectors was reduced by using five statistical parameters to get rid of the computing load.
(b) The 32 channels for every participant were evaluated separately, and five channels were determined. An MLPNN (5 × 10 × 2) structure was used for channel selection. The results revealed that similar channels (P3, FC2, AF3, O1 and Fp1) had the highest performance for every participant. The channels having the highest performance were selected for the classification of emotions. From this point of the study, five channels were used instead of 32. The results revealed that the selected channels correspond to brain regions related to emotions. The selected channels determined in this study were also compatible with the channels selected in other papers [21].
(c) The combined feature vectors of the selected five channels were classified by using the MLPNN and kNN methods. In reference to the literature, MLPNN is one of the most popular tools for EEG analysis. In this study, MLPNN was employed for determining the emotional state from the EEG signals. The kNN algorithm was also used as a classifier to increase the reliability of the results obtained by MLPNN. kNN is one of the most fundamental and simple classification methods, and it is frequently used in many EEG applications. The results produced by the two algorithms were close to each other; this agreement supports the reliability of the process.
(d) As shown in Table 2, the percentages of accuracy, specificity, and sensitivity of the MLPNN are in the ranges of [60, 90], [62.5, 100], and [58.3, 89.4], respectively. These values indicate that the proposed MLPNN model is successful, and the test results also show that the generalization ability of the MLPNN is good. On the other hand, the percentages of accuracy, specificity, and sensitivity of kNN are in the ranges of [55, 85], [53.5, 100], and [63.1, 85], respectively (Table 3). It was observed that both classifiers showed parallel performance for each participant. The best classification accuracy of the MLPNN was obtained as 90% (specificity: 100% and sensitivity: 83.3%) for participant 7. On the other hand, the best classification accuracy of kNN was obtained as 85% for participants 12 and 20.
(e) To estimate the overall performance of the MLPNN and kNN classifiers, the statistical measures (accuracy, specificity, sensitivity) were averaged. The comparison of the averaged values for the classification of emotions is shown in Fig. 10. As shown in the figure, the performance of the MLPNN was higher than that of kNN. However, both methods can be accepted as successful.
(f) It was concluded that the MLPNN and kNN used in this study give good accuracy results for the classification of emotions.

The channel selection has opened the door to improving the performance of automatic detection for emotion recognition. Recently, Zhang et al. [20] used two EEG channels (F3 and F4) for feature extraction. Bhardwaj et al. [21] used three EEG channels (Fp1, P3 and O1) in order to detect emotion. Murugappan et al. [18] extracted features from 64, 24 and 8 EEG channels, respectively. When these studies are reviewed, it can be observed that the main EEG channels and regions of the brain were considered for the detection of emotions. However, in this study, all EEG channels were evaluated and considered separately, and the five EEG channels that offer the best classification performance were determined. Final feature vectors were obtained by combining the features of those EEG channels, and the
Fig. 9 Comparison of classification performances for each participant in terms of a accuracy, b specificity, and c sensitivity (x-axis: participants; y-axis: performance)
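The three measures compared in Fig. 9 follow directly from the TP, FP, TN, and FN counts defined earlier; as a short sketch:

```python
def performance_measures(tp, fp, tn, fn):
    """Accuracy, specificity, and sensitivity (in %) from the
    confusion-matrix counts defined in the text."""
    accuracy = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    specificity = 100.0 * tn / (tn + fp)
    sensitivity = 100.0 * tp / (tp + fn)
    return accuracy, specificity, sensitivity

# Hypothetical counts for one participant's 40 segments
print(performance_measures(tp=16, fp=3, tn=17, fn=4))
# -> (82.5, 85.0, 80.0)
```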
6 Conclusion
selections. It is considered that, with the correct channels and features, the performance can be increased.

In the literature, it can be seen that useful features can be obtained from the alpha, beta, and gamma bands as well as the theta band in the detection of the emotional state from EEG signals. In future studies, the proposed model can be used for all sub-bands separately to identify the effectiveness of the bands in emotional activity. Furthermore, in order to increase the classification success, other physiological signals such as blood pressure, respiratory rate, body temperature, and GSR (galvanic skin response) can be used with EEG signals.

Acknowledgements  The authors would like to thank Julius Bamwenda for his support in editing the English language in the paper.

Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Petrushin V (1999) Emotion in speech: recognition and application to call centers. In: Proceedings of the artificial neural networks in engineering conference, pp 7–10
2. Anderson K, McOwan P (2006) A real-time automated system for the recognition of human facial expression. IEEE Trans Syst Man Cybern B Cybern 36:96–105
3. Adeli H, Zhou Z, Dadmehr N (2003) Analysis of EEG records in an epileptic patient using wavelet transform. J Neurosci Methods 123(1):69–87
4. Atyabi A, Luerssen MH, Powers DMW (2013) PSO-based dimension reduction of EEG recordings: implications for subject transfer in BCI. Neurocomputing 119(7):319–331
5. Petrantonakis PC, Hadjileontiadis LJ (2010) Emotion recognition from EEG using higher order crossings. IEEE Trans Inf Technol Biomed 14(2):186–197
6. Khosrowabadi R, Quek HC, Wahab A, Ang KK (2010) EEG-based emotion recognition using self-organizing map for boundary detection. In: International conference on pattern recognition, pp 4242–4245
7. Torres-Valencia C, Garcia-Arias HF, Alvarez Lopez M, Orozco-Gutierrez A (2014) Comparative analysis of physiological signals and electroencephalogram (EEG) for multimodal emotion recognition using generative models. In: 19th symposium on image, signal processing and artificial vision, Armenia-Quindío
8. Russell JA (1980) A circumplex model of affect. J Personal Soc Psychol 39:1161–1178
9. Wang XW, Nie D, Lu BL (2014) Emotional state classification from EEG data using machine learning approach. Neurocomputing 129:94–106
10. Kim J, André E (2006) Emotion recognition using physiological and speech signal in short-term observation. In: Proceedings of the perception and interactive technologies, 4021:53–64
11. Brosschot J, Thayer J (2006) Heart rate response is longer after negative emotions than after positive emotions. Int J Psychophysiol 50:181–187
12. Kim K, Bang S, Kim S (2004) Emotion recognition system using short-term monitoring of physiological signals. Med Biol Eng Comput 42:419–427
13. Subaşı A, Erçelebi E (2005) Classification of EEG signals using neural network and logistic regression. Comput Methods Progr Biomed 78:87–99
14. Subaşı A (2007) Signal classification using wavelet feature extraction and a mixture of expert model. Expert Syst Appl 32:1084–1093
15. Fu K, Qu J, Chai Y, Dong Y (2014) Classification of seizure based on the time–frequency image of EEG signals using HHT and SVM. Biomed Signal Process Control 13:15–22
16. Lopetegui E, Zapirain BG, Mendez A (2011) Tennis computer game with brain control using EEG signals. In: The 16th international conference on computer games, pp 228–234
17. Leeb R, Lancelle M, Kaiser V, Fellner DW, Pfurtscheller G (2013) Thinking Penguin: multimodal brain–computer interface control of a VR game. IEEE Trans Comput Intell AI Games 5(2):117–128
18. Murugappan M, Ramachandran N, Sazali Y (2010) Classification of human emotion from EEG using discrete wavelet transform. J Biomed Sci Eng 3:390–396
19. Chanel G, Kronegg J, Grandjean D, Pun T (2005) Emotion assessment: arousal evaluation using EEGs and peripheral physiological signals. Technical report, University of Geneva, Geneva
20. Zhang Q, Lee M (2009) Analysis of positive and negative emotions in natural scene using brain activity and GIST. Neurocomputing 72:1302–1306
21. Bhardwaj A, Gupta A, Jain P, Rani A, Yadav J (2015) Classification of human emotions from EEG signals using SVM and LDA classifiers. In: 2nd international conference on signal processing and integrated networks (SPIN), pp 180–185
22. Lee G, Kwon M, Sri SK, Lee M (2014) Emotion recognition based on 3D fuzzy visual and EEG features in movie clips. Neurocomputing 144:560–568
23. DEAP: a dataset for emotion analysis using EEG, physiological and video signals (2012) http://www.eecs.qmul.ac.uk/mmv/datasets/deap/index.html. Accessed 01 May 2015
24. Koelstra S, Mühl C, Soleymani M, Lee J, Yazdani A, Ebrahimi T, Pun T, Nijholt A, Patras I (2012) DEAP: a database for emotion analysis using physiological signals. IEEE Trans Affect Comput 3(1):18–31
25. Bradley MM, Lang PJ (1994) Measuring emotions: the self-assessment manikin and the semantic differential. J Behav Ther Exp Psychiatry 25(1):49–59
26. Uusberg A, Thiruchselvam R, Gross J (2014) Using distraction to regulate emotion: insights from EEG theta dynamics. Int J Psychophysiol 91:254–260
27. Polat H, Özerdem MS (2015) Reflection of emotions based on different stories onto EEG signal. In: 23rd conference on signal processing and communications applications, Malatya, pp 2618–2618
28. Kıymık MK, Akın M, Subaşı A (2004) Automatic recognition of alertness level by using wavelet transform and artificial neural network. J Neurosci Methods 139:231–240
29. Amato F, Lopez A, Mendez EMP, Vanhara P, Hampl A (2013) Artificial neural networks in medical diagnosis. J Appl Biomed 11:47–58
30. Haykin S (2009) Neural networks and learning machines, 3rd edn. Prentice Hall, New Jersey, p 906
31. Basheer IA, Hajmeer M (2000) Artificial neural networks: fundamentals, computing, design and application. J Microbiol Methods 43:3–31
32. Patnaik LM, Manyam OK (2008) Epileptic EEG detection using neural networks and post-classification. Comput Methods Progr Biomed 91:100–109
33. Berrueta LA, Alonso RM, Héberger K (2007) Supervised pattern recognition in food analysis. J Chromatogr A 1158:196–214
34. Atkinson J, Campos D (2016) Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers. Expert Syst Appl 47:35–41

Mehmet Siraç Özerdem was born in 1971, in Batman, Turkey. He received the Ph.D. degree in computer engineering in 2003 from İstanbul Technical University. He joined the Electrical-Electronics Engineering Department, Dicle University, as an Assist. Prof. in 2006. His research interests are in the areas of the wavelet transform, artificial neural networks and modeling non-stationary signals.

Hasan Polat was born in 1989, in Bingöl, Turkey. He received the MSc degree in Electrical-Electronics Engineering in 2016 from Dicle University. He joined the Electrical-Electronics Engineering Department, Mus Alparslan University, as a research assistant in 2014. His research interests are in the areas of the wavelet transform, artificial neural networks and biomedical science.