
This article has been accepted for inclusion in a future issue of this journal.

Content is final as presented, with the exception of pagination.

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 1

Mimicking Short-Term Memory in Shape-Reconstruction Task Using an EEG-Induced Type-2 Fuzzy Deep Brain Learning Network

Lidia Ghosh, Amit Konar, Pratyusha Rakshit, and Atulya K. Nagar

Abstract—The paper attempts to model short-term memory (STM) for shape-reconstruction tasks by employing a 4-stage deep brain learning network (DBLN), where the first two stages are built with Hebbian learning and the last two stages with Type-2 Fuzzy logic. The model is trained stage-wise independently, with the visual stimulus of the object-geometry as the input of the first stage, EEG acquired from different cortical regions as input and output of the respective intermediate stages, and the recalled object-geometry as the output of the last stage. Two error feedback loops are employed to train the proposed DBLN. The inner loop adapts the weights of the STM based on a measure of error in the model-predicted response with respect to the object-shape recalled by the subject. The outer loop adapts the weights of the iconic (visual) memory based on a measure of error of the model-predicted response with respect to the desired object-shape. In the test phase, the DBLN model reproduces the recalled object shape from the given input object geometry. The motivation of the paper is to test the consistency in STM encoding (in terms of similarity in network weights) for repeated visual stimulation with the same geometric object. Experiments undertaken on healthy subjects yield high similarity in network weights, whereas patients with pre-frontal lobe Amnesia yield significant discrepancy in the trained weights for any two trials with the same training object. This justifies the importance of the proposed DBLN model in automated diagnosis of patients with learning difficulty. The novelty of the paper lies in the overall design of the DBLN model, with special emphasis on the last two stages of the network, built with vertical slice based type-2 fuzzy logic to handle uncertainty in function approximation (with noisy EEG data). The proposed technique outperforms the state-of-the-art functional mapping algorithms with respect to the (pre-defined outer loop) error metric, computational complexity and runtime.

Index Terms—Short-term memory, iconic memory, Hebbian learning, type-2 fuzzy set, shape reconstruction, memory failure and N400.

I. INTRODUCTION

THE human memory is distributed across the brain, with functionally pronounced active regions located in the medial temporal lobe, called the Hippocampus, for use as the Long-Term Memory (LTM), and the pre-frontal lobe for use as the Short-Term Memory (STM) [1]–[5]. Although very little of the encoding and recall processes of the human memory system is known to date [6], [7], strong evidence of two distinct cortical pathways for STM and LTM recall in visuo-spatial object-recognition tasks exists in the literature [8]–[11]. While the occipito-parietal pathway [8], [9] is primarily responsible for STM-recall, the occipito-temporal pathway is used for LTM-recall [10], [11]. Neuro-physiological support of the above evidence is also reported in quite a few interesting scientific treatises [12]–[16]. Current research in cognitive neuroscience further reveals that STM encoding and recall for the object-shape recognition task is performed in the Gamma frequency band (30–100 Hz) [8], [17]–[19]. There is also evidence of related brain activities, including visual perception and object recognition, in the Gamma band [2], [20], [21].

The paper aims at developing a computational model of the STM for use in the shape-reconstruction task, with the motivation to determine the degradation in recall-performance of the memory using electroencephalographic (EEG) signatures of the selected brain lobes. Unfortunately, the memory models [22]–[29] available in the current literature are mostly philosophical in nature, with minimal scope of use for diagnostic and therapeutic applications. Although traces of STM analysis using EEG signals exist in the literature [30]–[44], there is a void of research on STM modeling using EEG. This void has inspired the present research group to model STM using EEG signatures. As the encoding and the recall pathways of memory involve other brain modules, modeling memory independently is not easy. In fact, memory modeling requires an integrated approach with a mission to study the stimulus-response pairs of the relevant brain modules lying on the encoding and the recall pathways [8]–[11].

EEG provides an interesting means to detect the old/new effect [74] of memory by utilizing one well-known brain signal, called N400 [73]. The N400 signal exhibits a negative peak in response to a new (unknown) visual input stimulus. It is usually observed that the negativity of N400 gradually diminishes as the subject becomes more familiar with the object [74]. This particular

Manuscript received December 3, 2018; revised April 24, 2019 and July 3, 2019; accepted August 18, 2019. This work was supported by the UPE-II Project in Cognitive Science (UPE-II/Cog. Sc./JU/2017), funded by UGC, and the RUSA 2.0 Project (RUSA 2.0/JU/2019), funded by MHRD, Government of India. (Corresponding author: Amit Konar.)
L. Ghosh, A. Konar, and P. Rakshit are with the Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032, India (e-mail: [email protected]; [email protected]; [email protected]).
A. K. Nagar is with the Mathematics and Computer Science Department, Liverpool Hope University, Liverpool L16 9JD, Merseyside, U.K. (e-mail: [email protected]).
Digital Object Identifier 10.1109/TETCI.2019.2937566

2471-285X © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

characteristic of the N400 signal is used here to determine the STM performance in the 2-dimensional object shape-reconstruction task.

Deep Learning (DL) [54] is currently gaining increasing interest from a diverse research community for its efficient performance in classification [87]–[89] and functional mapping [90], [91] problems from raw data. Deep learning algorithms differ from conventional neural network algorithms in having an exceedingly large number of layers to extract high-level features/attributes from low-level raw data. For example, in Convolutional Neural Net (CNN) [68] based Deep Learning, the motivation is to extract features of objects from a large pool of object-dataset. In CNN, during the recall phase, layers occupying the later stages offer more refined object features than the preceding layers. Generally, extracts of the penultimate layer are often regarded as object-features, while the last layer provides the class information in a multi-class classification problem.

Although conventional deep learning algorithms aim at imitating the behavioral mechanism of learning in the brain [92], [93], they hardly realize the cognitive functionalities of the individual brain modules [95] involved in the learning process. This paper makes an honest attempt to synthesize the functionality of different brain modules by distinctive layers with suitable non-linearity in the context of STM encoding and recall. It introduces a novel technique of STM-modeling in the settings of deep brain learning, where the individual brain functions involved in the STM encoding and recall cycles are modeled by developing the functional mapping from the input to the output. During the STM encoding and recall phases (of the shape-reconstruction experiments), four distinct functional mappings are extracted from the EEG signals acquired from the occipital, pre-frontal and parietal lobes. The first functional mapping is developed from the input visual stimuli and the occipital EEG response to the stimuli. The second functional mapping refers to the interdependence between the EEG signals acquired from the occipital and the pre-frontal lobes during the shape-encoding phase. This mapping is useful to predict the pre-frontal response from the occipital response in the recall cycle later. The third mapping refers to the pre-frontal to parietal mapping, resembling the functionality of the parietal lobe. This mapping helps in determining the parietal response, if the pre-frontal response is known during the recall phase. The last mapping, between the parietal responses and the geometric features of the reconstructed (hand-drawn) object-shape, indicates the parietal and motor cortex behavior jointly.

Machine learning models have successfully been used in Brain-Computer Interfaces (BCI) to handle two fundamental problems: i) classification of brain signals for different cognitive activities/malfunctioning [52], [82]–[84] and ii) synthesis of the functional mapping of the active brain lobes from their measured input-output [77]. This paper aims at serving the second problem. Although the functional mapping can be realized in a number of ways, here the mapping of the first 2 stages is realized by Hebbian learning [45], while that of the third and the fourth stages is designed by Type-2 Fuzzy logic. The choice of Hebbian learning appears from the fundamental basis of Hebb's principle of an excited neuron's natural tendency to stimulate a neighborhood neuron [46]–[48]. Hebbian learning, being unsupervised, fits well for signal transduction at low-level (early stage of) neural processing [81]. On the other hand, at higher-level (later stage of) neural signal transduction [60], supervised learning is employed to quantize the neural signals to converge to fixed points, representing object classes in the recognition problem, and the desired output level in the functional mapping problems. Further, due to asynchronous firing of neurons in different brain lobes, noise is introduced in the signaling pathways, causing undesirable changes in the outputs. The advent of fuzzy sets, in particular its type-2 counterpart, has immense potential in approximate reasoning, which is expected to play a vital role in the neural quantization process in presence of noise [77]. Thus type-2 fuzzy logic is expected to serve well in functional mapping at higher-level neural learning.

Two distinct varieties of type-2 fuzzy sets are widely used in the literature [50]–[53], [65], [66]. They are well-known as Interval Type-2 Fuzzy Sets (IT2FS) [50] and General Type-2 Fuzzy Sets (GT2FS) [51]. In classical fuzzy sets, the membership function of a linguistic variable, lying in [0, 1], is crisp, whereas in a type-2 fuzzy set the corresponding (primary) membership is fuzzy, as the linguistic variable at a given linguistic value has a wide range of primary membership in [0, 1]. GT2FS fundamentally differs from IT2FS with respect to the secondary (type-2) Membership Function (MF). In GT2FS, the secondary membership function takes any value in [0, 1], whereas in IT2FS the secondary membership function is considered 1 for all feasible primary memberships lying within a region, referred to as the Footprint of Uncertainty (FOU), and is zero elsewhere. Because of its representational advantages, GT2FS can capture higher degrees of uncertainty [52], however at the cost of additional computational overhead. Here, a special type of GT2FS, called the vertical slice [53], is used to design a novel algorithm for functional mapping between the pre-frontal and parietal lobes and between the parietal lobe and the hand-drawn object-geometry.

The paper is divided into seven sections. Section II provides the system overview. In Section III, principles and methodology are covered in brief. Section IV deals with experiments and results. Biological implications of the experimental results are summarized in Section V. Performance analysis by statistical tests is undertaken in Section VI. Conclusions are listed in Section VII.

II. SYSTEM OVERVIEW

This section provides an overview of the proposed type-2 fuzzy deep brain learning network (DBLN), containing four stages of functional mapping, shown in Fig. 1(a). The input-output layers of each functional mapping module are explicitly indicated in Fig. 1(b). The geometric features of an object, to be reconstructed, are assigned at the first (input) layer of the proposed feed-forward network architecture (Fig. 1(b)). These features are obtained from the gray scale image of the object by the following steps: i) Gaussian filtering with user-defined standard deviation to smooth the raw gray scale image, ii) Edge detection and thinning by non-maximal suppression [85], here realized with Canny edge detection [85], iii) Line parameters

Fig. 1(a). General block-diagram of the proposed DBLN describing four-stage functional mapping with feedback for STM and Iconic Memory weight adaptation.
Fig. 1(b). The model used in four-stage mapping of the DBLN, explicitly showing the input and the output features of each module.
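Structurally, the four-stage mapping of Fig. 1(b) is a feed-forward composition. The sketch below shows the composition alone; the stage functions are placeholders, not the paper's trained modules:

```python
def dbln_forward(c, stages):
    """Structural sketch of the four-stage DBLN forward pass of Fig. 1(b):
    object geometry c -> iconic response a -> STM response b
    -> parietal response d -> reconstructed geometry.
    Each stage is a callable; in the paper the stages are the Hebbian
    matrices W and G and the two type-2 fuzzy mappings of Section III."""
    a = stages[0](c)     # stage 1: W (iconic memory)
    b = stages[1](a)     # stage 2: G (STM)
    d = stages[2](b)     # stage 3: pre-frontal -> parietal (type-2 fuzzy)
    return stages[3](d)  # stage 4: parietal -> object geometry (type-2 fuzzy)
```

Because the stages are trained independently (see Section II), each placeholder can be swapped for its trained counterpart without touching the others.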

(perpendicular distance of the line from the origin, ρ, and the angle α between the above perpendicular line and the x-axis) detection by Hough Transform [86], iv) Evaluation of line end-point coordinates, line length and adjacent sides of the polygon having common vertices and v) computation of the angle between each two adjacent lines. The steps are illustrated in the Appendix. The lengths of the straight-line edges and the angles between adjacent edges are used as the geometric features of the object.

The weight matrix W = [w_l,i]_(2p×n) between the first and the second layers represents the weighted connectivity between the geometric feature c_l of the visually perceived object and the iconic memory response a_i, where p denotes the number of vertices of the perceived object, and n denotes the number of electrodes placed on the occipital lobe (Fig. 1(b)). The second layer (the first hidden layer) thus contains the iconic memory response. The weight matrix G = [g_i,j]_(n×n) between the second and the third layers represents the connectivity weights between the iconic memory response a_i and the STM response b_j, where i, j ∈ {1, ..., n} (Fig. 1(b)). The third (the second hidden) layer thus contains the STM response b_j, j = 1 to n.
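Steps iv) and v) of the feature-extraction pipeline in Section II reduce to elementary geometry once the polygon vertices are known. A minimal sketch (illustrative, not the authors' code; it assumes the vertices recovered from the Hough stage are given in traversal order):

```python
import math

def edge_features(vertices):
    """Steps iv)-v): from polygon vertices (in order), compute each edge
    length and the interior angle between adjacent edges. Together these
    give the 2p geometric features c_l used at the DBLN input layer."""
    p = len(vertices)
    lengths, angles = [], []
    for i in range(p):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % p]
        lengths.append(math.hypot(x1 - x0, y1 - y0))  # edge length
    for i in range(p):
        ax, ay = vertices[i - 1]          # previous vertex
        bx, by = vertices[i]              # vertex at which the angle sits
        cx, cy = vertices[(i + 1) % p]    # next vertex
        u = (ax - bx, ay - by)
        v = (cx - bx, cy - by)
        dot = u[0] * v[0] + u[1] * v[1]
        angle = math.acos(dot / (math.hypot(*u) * math.hypot(*v)))
        angles.append(math.degrees(angle))  # angle between adjacent edges
    return lengths, angles
```

Steps i)–iii) (Gaussian smoothing, Canny edge detection and the Hough transform) would additionally require an image-processing library and are omitted from the sketch.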

The parietal lobe, used for smart movement-related planning, is modeled here by type-2 fuzzy logic for its inherent benefit of
approximate reasoning (here, functional mapping) in presence
of noisy input/output training samples. In absence of fuzzy
functional mapping, noise present in the training samples acquired from the EEG electrodes might result in unexpected
changes in function approximation. Let {bj : 1 ≤ j ≤ n} and
{dk : 1 ≤ k ≤ n} be the one dimensional EEG features (average
Gamma power) extracted from the pre-frontal and the parietal
lobes respectively during the STM recall phase of the shape-
recognition task. The functional mapping: b1 , b2 , . . . , bn → dk
for all k is developed using type-2 Fuzzy sets. Thus the fourth
(the third hidden) layer embedded in the DBLN takes care of
noisy EEG data acquired from the parietal lobe response dk ,
k = 1 to n. The parietal lobe response to geometric features of the
recalled/reconstructed hand-drawn object is represented here by
one additional module of type-2 fuzzy reasoning. The choice of
fuzzy mapping here too is ascertained to avoid possible creeping of noise in the mapping function. The last (output) layer thus contains the geometric features c_l of the reconstructed object.

Fig. 2. Iconic Memory encoding by Hebbian Learning.
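The Hebbian encoding depicted in Fig. 2 initializes each weight as the product of sigmoid-squashed pre- and post-synaptic activities. A minimal sketch (illustrative only, not the authors' implementation):

```python
import math

def sigmoid(net):
    """Sigmoid-type non-linearity f(net) = 1 / (1 + e^(-net))."""
    return 1.0 / (1.0 + math.exp(-net))

def hebbian_weights(pre, post):
    """Hebbian initialization: weight[l][i] = f(pre[l]) * f(post[i]).
    pre and post are lists of activities (e.g., geometric features c_l
    and occipital gamma-power responses a_i)."""
    return [[sigmoid(c) * sigmoid(a) for a in post] for c in pre]
```

Under this sketch, `hebbian_weights(c, a)` gives the 2p×n iconic-memory matrix W, and `hebbian_weights(a, b)` the n×n STM matrix G.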
The following 3 issues need special mention while undertaking training of the proposed feed-forward architecture.
1) First, each stage of the proposed functional mapping is trained independently with acquired input and output instances of the corresponding layer. The input instance of the first layer is obtained from the object geometry, while the same for the other layers is obtained from EEG data. The output instance of all excluding the last layer is obtained from EEG data, while that of the last layer is obtained from the subject-produced drawing of the recalled object.
2) The training instances of the first two stages of functional mapping are obtained from the EEG signals acquired during the phase of memory encoding. On the other hand, the training instances of the last 2 stages are generated from the acquired EEG during the memory recall phase of the subject.
3) After the training of the 4 individual stages of mapping is over, two error feedback loops are employed in the model, where the inner loop adapts the weights of the short-term memory based on a measure of error in the model-predicted response with respect to the object-shape recalled/drawn by the subject. The outer loop adapts the weights of the iconic (visual) memory based on a measure of error of the model-predicted response with respect to the desired object-shape.

It is important to mention here that during the encoding of the iconic memory and the STM, the subject observes a 2-dimensional planar object of asymmetric shape (with linear boundaries) for 10 seconds with an intention to remember the 2-dimensional geometry of the object for subsequent participation in the memory recall phase. On the other hand, during the memory recall phase, the subject recollects the 2-D planar object from his/her memory and draws the object on a piece of paper. A brief overview of the layer-wise training of the individual stages of Fig. 1 is given below.
1) For iconic memory encoding, the geometric features of the object (extracted by Hough transform) and the average Gamma power [20] of the EEG signals acquired from the occipital lobe are used as the input and output respectively of the first stage in Fig. 1, depicting the iconic memory model W = [w_l,i]_(2p×n). Let {c_l : 1 ≤ l ≤ 2p} be the length and the angle/orientation of the p (= 8) bounding straight lines of the object with the horizontal (x-) axis. Let {a_i : 1 ≤ i ≤ n} be the average gamma power, extracted from n (= 6) channels during memory encoding. Hebbian learning is adopted following [45], [46] to initialize the weights of the iconic memory, where the weight w_l,i, denoting the connectivity between the l-th object geometric feature c_l and the i-th occipital brain response a_i (as shown in Fig. 2), is given by

w_l,i = f(c_l) · f(a_i).   (1)

Here, f(.) is a Sigmoid-type non-linear function, given by

f(net) = 1 / (1 + e^(−net))   (2)

where net ∈ {c_l, a_i}.
2) For STM encoding, the average Gamma power is extracted from the EEG signals acquired from the occipital and pre-frontal lobes during the visual examination and memorizing process of the 2-dimensional object presented to the subject. Here, n (= 6) channels of both the occipital and the pre-frontal lobes are used to establish the connection weight matrix G = [g_i,j]_(n×n), representative of the STM, by Hebbian Learning, where

g_i,j = f(a_i) · f(b_j)   (3)

and f(.) is the Sigmoid function introduced above.
3) For functional approximation of the pre-frontal lobe response to the parietal lobe response, the average Gamma power is extracted from the pre-frontal and the parietal lobes during the STM recall phase of the shape-reconstruction task. Let {b_j : 1 ≤ j ≤ n} and {d_k : 1 ≤ k ≤ n} be the

average Gamma power, extracted from the EEG signal acquired from the pre-frontal and parietal lobes respectively. The functional mapping b_1, b_2, ..., b_n → d_k for all k is here obtained by type-2 fuzzy logic.
4) For functional approximation of the parietal lobe response to the shape features of the recalled/reconstructed object, the average Gamma power of the EEG signals acquired from the parietal lobe and the geometric features of the reconstructed object (extracted by Hough transform) are used as the input and output respectively of the fourth/last stage of the proposed model. Considering {d_k : 1 ≤ k ≤ n} and {c_l : 1 ≤ l ≤ 2p} to be the parietal EEG features and the parameters of the drawn object respectively, a type-2 fuzzy mapping is employed to obtain the required mapping d_1, d_2, ..., d_n → c_l for all l.

The training phase of the proposed DBLN system (Fig. 1) constitutes two fundamental steps: i) encoding of the W and G matrices along with construction of the functional mappings b_1, b_2, ..., b_n → d_k and d_1, d_2, ..., d_n → c_l for all k and l, and ii) adaptation of the W and G matrices by supervised learning. Here, the W and G matrices are first encoded using Hebbian learning. The functional mappings indicated above are constructed using type-2 fuzzy sets, and the adaptation of the W and G matrices is performed using a Perceptron-like learning equation [46].

III. BRAIN FUNCTIONAL MAPPING USING TYPE-2 FUZZY DBLN

The principles of brain functional mapping introduced in Section II are realized here using the type-2 fuzzy DBLN, with feedback loops realized with a Perceptron-like learning equation. The section has 5 parts. In Section A, a brief overview of IT2FS and GT2FS is given. Section B introduces the realization of the functional mappings of i) pre-frontal to parietal lobe and ii) parietal lobe to object-shape-geometry by a novel type-2 fuzzy vertical slice approach. In Section C, the weight adaptation of the W and G matrices is carried out by perceptron-like learning. The training and testing of the proposed fuzzy neural architecture are presented in Sections D and E respectively.

A. Overview of Type-2 Fuzzy Sets

Definition 1: A type-1 (T1)/classical fuzzy set A [49] is an ordered pair of a linguistic variable x and its membership value μ_A(x) in A, given by

A = {(x, μ_A(x)) | ∀x ∈ X}   (4)

where X is the universe of discourse. Usually, μ_A(x) is a crisp number, lying in [0, 1] for any x ∈ X.
Definition 2: A General Type-2 Fuzzy Set Ã is given by Ã = {((x, u), μ_Ã(x, u)) | x ∈ X, u ∈ [0, 1]}, where x is a linguistic variable defined on a universe of discourse X, u ∈ [0, 1] is the primary membership and μ_Ã(x, u) is a secondary MF, given by the mapping (x, u) → μ_Ã, where μ_Ã(x, u) too lies in [0, 1] [51], [76].
Definition 3: For a given value of x, say x = x′, the 2D plane comprising u and μ_Ã(x′)(u) is called a vertical slice of the GT2FS [53].
Definition 4: An Interval Type-2 Fuzzy Set (IT2FS) [51] is a special form of GT2FS with μ_Ã(x, u) = 1, for x ∈ X and u ∈ [0, 1]. A closed IT2FS (CIT2FS) is one form of IT2FS where I_x = {u ∈ [0, 1] | μ_Ã(x, u) = 1} is a closed interval for every x ∈ X [76]. Here, CIT2FS is used throughout the paper. All IT2FS mentioned in this paper are CIT2FS; however, they are referred to as IT2FS, as done in most of the literature [76].
Definition 5: The Footprint of Uncertainty (FOU) of a type-2 fuzzy set (T2FS) Ã is defined as the union of all its primary memberships [76]. The mathematical representation of the FOU is

FOU(Ã) = ∪_{∀x∈X} J_x   (5)

where J_x = {(x, u) | u ∈ [0, 1], μ_Ã(x, u) > 0}. The FOU is a bounded region, which represents the uncertainty in the primary memberships of the T2FS.
Definition 6: An embedded fuzzy set A_e(x) is an arbitrarily selected type-1 MF lying in the FOU, i.e., A_e(x) ∈ J_x, ∀x ∈ X.
Definition 7: The embedded fuzzy set representing the upper bound of FOU(Ã) is called the upper membership function (UMF) and is denoted by μ̄_Ã(x), ∀x ∈ X [76]. Similarly, the embedded fuzzy set representing the lower bound of FOU(Ã) is called the lower membership function (LMF) and is denoted by μ_Ã(x), ∀x ∈ X. More precisely,

UMF(Ã) = μ̄_Ã(x) = Max(A_e(x) : x ∈ X)   (6)

and

LMF(Ã) = μ_Ã(x) = Min(A_e(x) : x ∈ X)   (7)

B. Type-2 Fuzzy Mapping and Parameter Adaptation by Perceptron-like Learning

This section attempts to construct the functional mappings for i) pre-frontal to parietal and ii) parietal to object-shape geometry using the acquired EEG signals from the selected brain lobes. The EEG signals acquired are usually found to be contaminated with stochastic noise due to non-voluntary motor actions like eye-blinking, and with artifacts due to simultaneous brain activation for concurrent thoughts [58]. Very often the noise and the desired brain signals have overlapping frequency spectra, thereby making filtering algorithms inefficient for the targeted application. Naturally, the superimposed stochastic noise yields erroneous results in mapping if realized with classical mapping techniques, such as neural functional approximation [55], [56], nonlinear regression [57] and the like. Fuzzy logic has shown promising performance in functional mapping in presence of noisy measurements because of the inherent nonlinearity of its MFs (Gaussian/Triangular) [78]. The effect of measurement noise in functional mapping is reduced further in T2FS [77] because of its ability to handle intra-personal level uncertainty due to the presence of stochastic noise. These works inspired the authors to realize the brain mapping functions using IT2FS and one vertical slice approach [53] of GT2FS. In addition to type-2 fuzzy mapping, parameter adaptation of the mapping function is also needed to attain optimal performance.
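Definitions 6 and 7 reduce, pointwise, to a Max/Min over the embedded type-1 MFs lying in the FOU, as in (6)–(7). A small illustrative sketch:

```python
def umf_lmf(embedded_mfs, xs):
    """Defs. 6-7 / Eqs. (6)-(7): the UMF (resp. LMF) of an IT2FS is the
    pointwise Max (resp. Min) over the embedded type-1 MFs in the FOU.
    embedded_mfs: list of callables x -> membership in [0, 1];
    xs: sample points of the universe of discourse."""
    umf = [max(mf(x) for mf in embedded_mfs) for x in xs]
    lmf = [min(mf(x) for mf in embedded_mfs) for x in xs]
    return umf, lmf
```

Any type-1 MF pointwise between `lmf` and `umf` is then an embedded fuzzy set in the sense of Definition 6.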

Fig. 3. Computation of flat-top IT2FS: (a) type-1 MFs, (b) IT2FS representation of the type-1 MFs, (c) Flat-topped IT2FS.

B.1 Construction of the Proposed Interval Type-2 Fuzzy Membership Function (IT2MF): Let U and V be the 2 brain signals, acquired from two distinct brain lobes during the memory recall phase. Considering a time-duration of 30 seconds for drawing the object from the STM, and a sampling rate of the EEG of 5000 samples/second, the total number of EEG samples acquired from each brain region = 5000 × 30 = 150,000. These samples, obtained over the duration of 30 seconds, are divided into 30 time-slots of equal length of 5000 samples each. The Power Spectral Density (PSD) in the gamma frequency band (30–100 Hz) is then extracted for each time slot. The PSD over a slot is then described by a Gaussian MF G(μ, σ²), with μ and σ² representing the mean and variance of the PSD of the 5000 samples over the slot. The MF μ_Ai(x), where A_i = Close-to-center of the support [49] of the MF and x = PSD, represents that the power is close to the mean value of the PSD over the 5000 samples. Thus for 30 time-slots, 30 type-1 Gaussian MFs A_1, A_2, ..., A_30 are obtained. The following 2 steps are performed to construct the IT2FS (Fig. 3(b)) Ã = [μ_Ã(x), μ̄_Ã(x)] from the 30 type-1 Gaussian MFs.

μ_Ã(x) = Min[μ_A1(x), μ_A2(x), ..., μ_A30(x)], ∀x   (8)

μ̄_Ã(x) = Max[μ_A1(x), μ_A2(x), ..., μ_A30(x)], ∀x   (9)

where μ_Ã(x) and μ̄_Ã(x) respectively denote the LMF and the UMF of the said IT2FS Ã. In order to maintain the convexity criterion [59] of the IT2FS, the peaks of the Type-1 MFs are joined with a straight line of zero slope, resulting in a flat-top approximated IT2FS (see Fig. 3(c)).

B.2 Construction of IT2FS induced Mapping Function: To design the mapping function between 2 brain lobes, the EEG signal is acquired from both the lobes simultaneously during a learning epoch. Let x_1(t), x_2(t), ..., x_n(t) and y_1(t), y_2(t), ..., y_n(t) be the gamma power extracted from n electrodes of a source lobe and n electrodes of a destination lobe respectively during the learning epoch. The IT2MFs x_i is Ã_i for i = 1 to n and y_j is B̃_j for j = 1 to n (see Fig. 4) are obtained by the technique introduced in Section B.1.

Now let x_i = x_i′ for i = 1 to n be a sample measurement (here, average Gamma power). To map x_1 = x_1′, x_2 = x_2′, ..., x_n = x_n′ to y_j = y_j′, the following transformation is used.

UFS′ = μ̄_Ã1(x_1′) t μ̄_Ã2(x_2′) t ... t μ̄_Ãn(x_n′)   (10)

LFS′ = μ_Ã1(x_1′) t μ_Ã2(x_2′) t ... t μ_Ãn(x_n′)   (11)

where μ̄_Ãi(x_i′) and μ_Ãi(x_i′) are the upper membership function (UMF) and lower membership function (LMF) of μ_Ãi(x_i) at x_i = x_i′. In (10) and (11), the t-norms [49] are computed sequentially in order of their appearance from the left to the right. To control the area under the IT2FS B̃_j, the following transformations are used.

UFS_j = UFS′ ∧ λ̄_j, for j = 1 to n.   (12)

LFS_j = LFS′ ∧ λ_j, for j = 1 to n.   (13)

Here, UFS_j and LFS_j are the firing strengths of the selected rule, and λ̄_j and λ_j are two control parameters used to adapt the area under the consequent membership function (MF) of y_j is B̃_j. The IT2 inference is obtained as

μ̄_B̃j′(y_j) = UFS_j ∧ μ̄_B̃j(y_j), ∀y_j   (14)

μ_B̃j′(y_j) = LFS_j ∧ μ_B̃j(y_j), ∀y_j.   (15)

The IT2FS consequent B̃_j′, represented by [μ_B̃j′(y_j), μ̄_B̃j′(y_j)], is next defuzzified by a proposed Average (Av.) Defuzzification Algorithm to obtain the centroid C, given by

C = (Area of the consequent B̃_j′) / (Support of the UMF of B̃_j′).   (16)

Let y_j′ = C. Also, let y_j be the desired value. The following steps are used next to adapt the control parameters λ̄_j and λ_j to control the area under the FOU of the inference.

Let ε_j = y_j − y_j′,   (17)

For ε_j < 0,
δλ̄_j = α|ε_j|UFS′, where λ̄_j = λ̄_j + δλ̄_j,
δλ_j = α|ε_j|LFS′, where λ_j = λ_j − δλ_j,   (18)

For ε_j > 0,
δλ̄_j = α ε_j UFS′, where λ̄_j = λ̄_j − δλ̄_j,
δλ_j = α ε_j LFS′, where λ_j = λ_j + δλ_j,   (19)

where 0 < α < 1. The adaptation of λ̄_j and λ_j is done in the training phase. After the training with known [x_1, x_2, ..., x_n] and [y_1, y_2, ..., y_n] vectors is over, the weights λ̄_j and λ_j are fixed forever and may directly be used in the test phase.

B.3 Secondary Membership Function Computation of the Proposed GT2FS: Consider the rule: If x_1 is Ã_1 and x_2 is Ã_2 and … and x_n is Ã_n Then y_j is B̃_j. Here, x_i is Ã_i for i = 1 to n are GT2FS-induced propositions, and y_j is B̃_j denotes an IT2FS consequent MF. Here, the secondary MFs with respect to the primary memberships at given x_i = x_i′ of the GT2FS proposition are represented by a vertical slice [53]. Let μ_Ãi(x_i′)(u) be the secondary MF for the i-th antecedent proposition x_i is Ã_i. Given the measurements x_i = x_i′ for i = 1 to n, the vertical planes representing the secondary memberships μ_Ãi(x_i′)(u) at x_i = x_i′ are identified. Let the primary membership u at x_i = x_i′ be spatially sampled as u_1, u_2, …, u_m. Given the contributory primary memberships, which jointly comprise the FOU, the secondary MF at a given value of the linguistic variable x_i = x_i′ is computed using the following steps.

Fig. 4. Adaptation of the IT2FS induced mapping function by Perceptron-like learning.
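Before the adaptation loop of Fig. 4 can run, the IT2FS bounds of (8)–(9) and the rule firing strengths of (10)–(11) must be computed. A minimal sketch (illustrative, not the authors' code; Min is assumed as the t-norm, which the paper leaves unspecified):

```python
import math

def gaussian_mf(mu, sigma):
    """One slot-wise type-1 Gaussian MF, as in Section B.1."""
    return lambda x: math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def it2fs_bounds(slot_mfs, x):
    """Eqs. (8)-(9): LMF and UMF as the pointwise Min and Max
    over the (e.g., 30) slot MFs at point x."""
    vals = [mf(x) for mf in slot_mfs]
    return min(vals), max(vals)

def firing_strengths(antecedents, x_prime):
    """Eqs. (10)-(11): UFS' and LFS' of a rule, folding the antecedent
    memberships with Min as the t-norm (an assumption).
    antecedents: list of (lmf, umf) callable pairs; x_prime: measurements."""
    ufs = lfs = 1.0
    for (lmf, umf), xi in zip(antecedents, x_prime):
        ufs = min(ufs, umf(xi))
        lfs = min(lfs, lmf(xi))
    return ufs, lfs
```

Clipping by the control parameters, as in (12)–(13), is then simply `min(ufs, lambda_bar_j)` and `min(lfs, lambda_j)`.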

1) Divide the interval [0, um] at xi = x′i into equal-sized intervals δu, such that each interval contains at least one type-1 primary membership (see Fig. 5).
2) Count the number of primary memberships that cross the intervals δu1 = u1 − 0, δu2 = u2 − u1, . . . , δum = um − um−1 at xi = x′i. Let v1, v2, . . . , vm be the respective counts for the intervals δu1, δu2, . . . , δum.
3) Obtain the secondary MF at the mid-points ur−1 + δur/2 of the intervals by using (20) for r = 1 to m:

μÃ(x′i)(ur−1 + δur/2) = vr / Max{vr : r = 1 to m}   (20)

This is illustrated in Fig. 5, where the numbers of type-1 memberships that cross the intervals δu4, δu5 and δu6 at xi = x′i are 1, 3 and 4 respectively. Therefore, μÃ(x′i)(u3 + δu4/2) = 1/4 = 0.25, μÃ(x′i)(u4 + δu5/2) = 3/4 = 0.75 and μÃ(x′i)(u5 + δu6/2) = 4/4 = 1.
4) Compute the centroid of the vertical slice by using the centre-of-gravity method and declare it as the contribution of the primary membership (CPM) of the fuzzy proposition xi is Ãi in the firing-strength computation of the rule. The CPMi for xi is Ãi is obtained as

CPMi = Σ(∀u∈Jxi) μ(x′i, u).u / Σ(∀u∈Jxi) μ(x′i, u)   (21)

Fig. 5. Secondary Membership Assignment in the proposed GT2FS based mapping technique.

B.4 Proposed General Type-2 Fuzzy Mapping: The following transformations are performed to compute the GT2 inference (depicted in Fig. 6):
1) Compute the smallest possible firing strength of rule j by taking the minimum of the CPMs of all the fuzzy propositions in the antecedent. Thus, the Lower Fixed Firing Strength (LFFS) of rule j is obtained as LFFSj = ∧(i=1 to n) CPMi, where ∧ denotes the cumulative AND (Min) operator.
2) Compute the largest possible firing strength of rule j by taking the maximum of the CPMs of all the fuzzy propositions in the antecedent. Thus, the Upper Fixed Firing Strength (UFFS) of rule j is obtained as UFFSj = ∨(i=1 to n) CPMi, where ∨ denotes the cumulative OR (Max) operator.
3) Next, λ̄j and λj are introduced to control the area under the consequent FOU. The following transformation is used to control the area under the MF of yj is B̃j. Let

UFSj = UFFSj ∧ λ̄j, for j = 1 to n.   (22)

LFSj = LFFSj ∧ λj, for j = 1 to n.   (23)

where λ̄j and λj are scalar parameters. The IT2 inference is obtained by (14) and (15). Next, the IT2FS consequent B̃j is de-fuzzified by the Average (Av.) de-fuzzification algorithm, defined by (16), to obtain the centroid C.
The area under the secondary MF of μB̃j(yj) is controlled by the proper choice of λ̄j and λj. The perceptron-like learning

Fig. 6. GT2FS based mapping adapted with Perceptron-like learning.

algorithm is used to adapt λ̄j and λj. The adaptation process is similar to that in the IT2FS (equations (18) and (19)), where UFS′ and LFS′ are replaced by UFFSj and LFFSj respectively. The training phase ends after the adaptation of λ̄j and λj. In the test phase, λ̄j and λj are fixed as obtained in the training phase. Only the vectors [x′1, x′2, . . . , x′n] are produced, and the result of mapping, i.e., y′j, is predicted.

C. Perceptron-Like Learning for Weight Adaptation

The STM plays an important role in the retrieval and reconstruction of the shape of objects perceived by visual exploration. Here, we propose a multi-stage DBLN, where the stages of the network represent different mental processes. Fig. 1(b) provides the architecture of the complete system. The first stage, symbolizing the iconic memory (IM), represents the mapping from the shape-features of the object to the acquired EEG features of the occipital lobe (Fig. 1(a)). The second stage, symbolizing the STM, represents the mapping from the occipital lobe to the pre-frontal lobe. The third stage symbolizes the brain connectivity from the pre-frontal lobe to the parietal lobe using T2FS. The last stage describes the mapping from the parietal layer to the reproduced object shape, and is also realized by T2FS. Two feedbacks have been incorporated in the system shown in Fig. 1(b), where the inner feedback loop is used to adapt the weight matrix G = [gi,j] using a perceptron-like supervised learning algorithm. The perceptron-like learning algorithm is selected here for its inherent characteristics of gradient-free and network-topology-independent learning. The above selection criteria are imposed to avoid gradient computation over functions involving the Max (∨) and Min (∧) operators in the feed-forward network. After each learning epoch of the subject, the weight matrix G = [gi,j] is adapted following

gi,j = gi,j + Δgi,j, where

Δgi,j = η.E.ai, ∀i, j,   (24)

Here, the error norm E is defined by

E = Σ(l=1 to 2p) el = Σ(l=1 to 2p) |ĉl − c′l| = Σ(q=1 to p) |L̂q − L′q| + Σ(q=1 to p) |ŝq − s′q|,   (25)

where L̂q is the length of line q in the object shape drawn by the subject and L′q is the length of line q in the model-generated object shape. Similarly, ŝq is the angle of line q with respect to the x-axis in the hand-drawn object shape, and s′q is the angle of the q-th line with respect to the x-axis in

the model-produced object shape. It is important to note that the E measure is taken only when the reproduced object has p vertices like the original object; else the learning epoch is dropped.
After the error norm E converges within a finite limit δ1 (= 10−2, say), we leave the weight matrix without further adaptation, and attempt to adapt the weight matrix W using the outer feedback loop in Fig. 1. Here too, we employ the perceptron-like learning algorithm. The error vector here represents the difference between the model-produced object geometric features and the actual object geometric features. The weight adaptation is given by

wl,i = wl,i + Δwl,i,   (26)

where

Δwl,i = η′.E′.cl, ∀l, i,   (27)

Here, the error norm E′ is defined by

E′ = Σ(l=1 to 2p) e′l = Σ(l=1 to 2p) |cl − c′l| = Σ(q=1 to p) |Lq − L′q| + Σ(q=1 to p) |sq − s′q|,   (28)

where Lq and sq are the length and angle (with respect to the x-axis) of line q in the actual object shape. The learning phase stops when E′ falls below a small positive number δ2 (≈ 10−3).

D. Training of the Proposed Shape-Reconstruction Architecture

The training algorithm is presented below.

Training of Hebbian Learning and Type-2 Fuzzy Logic Induced DBLN.
Input: Object geometry [cl], where cl ∈ {Lq, sq : q = 1 to p}; EEG features (average gamma power) [ai], [bj] and [dk] extracted from the occipital, pre-frontal and parietal lobes respectively.
Output: Converged W and G matrices and T2 fuzzy mapping functions: b1, b2, . . . , bn → dk and d1, d2, . . . , dm → c′l, ∀k and l.
Begin
I. Initialization:
(i) Use Hebbian learning to obtain initial values of [wl,i], where wl,i = f(cl).f(ai) ∀l and i.
(ii) Initialize [gi,j], where gi,j = f(ai).f(bj) ∀i and j.
Here, f(.) is a Sigmoid-type non-linear function.
II. Type-2 Fuzzy Mapping Function Construction between the pre-frontal and parietal lobes:
Construct the IT2/GT2 mapping function b1, b2, . . . , bn → dk for ∀k to realize the pre-frontal to parietal functional connectivity. Given d̂k as the target feature, compute the error εk = d̂k − dk, and adapt the parameters λ̄k and λk using (18)–(19) until δλ̄k and δλk are less than a predefined real value (= 10−2).
III. Type-2 Fuzzy Mapping Function Construction between the parietal lobe features and the reproduced object-shape geometric features:
Construct the IT2/GT2 mapping function d1, d2, . . . , dm → c′l, ∀l, to realize the parietal to reproduced object-shape-geometry functional connectivity. Given ĉl as the target feature, compute εl by (17) and adapt the parameters λ̄l and λl using (18)–(19) until δλ̄l and δλl are less than a predefined real value (= 10−2).
IV. G-matrix Adaptation:
This step involves a) computation of c′l, b) computation of the error E, and c) adaptation of the G matrix using the error E. Let [ĉl] and [c′l] be the geometric features of the reproduced and model-produced objects respectively.
a) Compute c′l by the following steps.
i) Compute the iconic memory response ai from the object-shape parameters cl for l = 1 to 2p by the following transformations:
[a′i](1×n) = [cl](1×2p).W(2p×n),
ai = f(a′i) for i = 1 to n.
ii) Compute the pre-frontal response bj, j = 1 to n, by the following transformations:
[b′j](1×n) = [ai](1×n).G(n×n),
bj = f(b′j) for j = 1 to n.
iii) Compute the parietal response dk for k = 1 to m from the computed pre-frontal response by the IT2/GT2 fuzzy mapping introduced in step II.
iv) Compute the predicted object-shape parameters c′l for l = 1 to 2p by the IT2/GT2 fuzzy mapping introduced in step III.
b) Compute the error el = ĉl − c′l for l = 1 to 2p.
c) Use the perceptron-like learning algorithm to adjust the weights gi,j by the following steps.
i) Δgi,j = η.E.ai, ∀i, j.
ii) gi,j = gi,j + Δgi,j, ∀i, j.
iii) Repeat from step i) until E < δ1, for some small positive real number δ1.
Here, the sign of E determines the increase/decrease in Δgi,j as desired.
V. W-matrix Adaptation:
Let [cl] and [c′l] be the geometric features of the original and model-produced objects respectively. Compute e′l = cl − c′l and use perceptron-like learning given by Δwl,i = η′.E′.cl, for all l and i. The sign of E′ determines the increase/decrease in Δwl,i as desired. Continue the wl,i adaptation until E′ is less than a predefined threshold δ2.
VI. Return W = [wl,i](2p×n) ∀l, i and G = [gi,j](n×n) ∀i, j.
End.
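The inner-loop computation of the error norm (25) and the perceptron-like update (24) can be sketched as below. This is an illustrative fragment rather than the authors' implementation: the matrix size, the toy feature values and the learning rate are invented, and the Hebbian initialization and Type-2 fuzzy stages are omitted.

```python
def error_norm(L_hat, s_hat, L_model, s_model):
    # Eq. (25): E = sum_q |L^_q - L'_q| + sum_q |s^_q - s'_q|,
    # comparing the hand-drawn and model-generated edge lengths and angles.
    return (sum(abs(a - b) for a, b in zip(L_hat, L_model))
            + sum(abs(a - b) for a, b in zip(s_hat, s_model)))

def adapt_G(G, a, E, eta):
    # Eq. (24): g_ij <- g_ij + eta * E * a_i (the same scalar correction,
    # scaled by the iconic response a_i, is applied to every column j).
    return [[g + eta * E * a_i for g in row] for a_i, row in zip(a, G)]

# Hypothetical lengths/angles of a 3-edge shape (hand-drawn vs model-produced)
L_hat = [3.0, 4.0, 5.0]; s_hat = [0.0, 90.0, 45.0]
L_mod = [2.5, 4.5, 5.0]; s_mod = [10.0, 80.0, 45.0]

E = error_norm(L_hat, s_hat, L_mod, s_mod)   # 0.5 + 0.5 + 0 + 10 + 10 + 0 = 21.0
a = [0.2, 0.7]                               # toy occipital (iconic memory) responses
G = [[0.0, 0.0], [0.0, 0.0]]                 # toy 2 x 2 STM weight matrix
G = adapt_G(G, a, E, eta=0.01)
```

One learning epoch of the inner loop is then: run the forward stages, measure E against the subject's drawing, and apply `adapt_G` until E falls below δ1.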

To confirm that the brain response obtained is due to neurons participating as memory, the negativity of the N400 [73] is checked after each learning epoch. It is important to mention here that decreasing negativity of the N400 is observed with increasing learning epochs for the same training object. Details of the N400 signal processing are available in [61].

E. The Test Phase of the Memory Model

Once the training phase is over, the network may be used for reproduction of the model-generated object shape for a given input object shape. Here, the geometric features [cl] for l = 1 to 2p for integer p, and the converged W matrix, G matrix and pre-constructed Type-2 mapping functions are used as input of the algorithm. The algorithm returns the computed geometric features [c′l] of the object presented to the subject for visual inspection. The steps i) to iv) under step IV.a) of the training algorithm are executed to obtain [c′l] from [cl] for l = 1 to 2p.

IV. EXPERIMENTS AND RESULTS

A. Experimental Set-up

A 21-channel EEG system manufactured by Nihon Kohden has been employed for the present experiments. Here, the earlobe electrodes A1 and A2 are used as the reference and the Fpz electrode as the ground. Further, 6 electrodes are selected from each of the occipital, pre-frontal and parietal lobes to test the mapping of EEG features from the occipital to the pre-frontal lobe during STM encoding and later from the pre-frontal to the parietal lobes during the memory recall phase. The 10–20 electrode placement [62] (Fig. 7) is used in the present experimental set-ups, and the PSD in the gamma band (30–100 Hz), called gamma power [20], is used as the feature for each channel. All experiments are performed using the MATLAB-16b toolbox running under the Windows-10 operating system on an Intel Octacore processor with a clock speed of 2 GHz and 64 GB RAM.

Fig. 7. 10–20 electrode placement system (only the blue circled electrodes are used for the present experiment).

Experiments are undertaken on 35 subjects in the age group 20–30 years. 30 of the 35 members are healthy, while the remaining 5 suffer from memory impairment (2 from temporal lobe epilepsy and 3 from Alzheimer's disease with pre-frontal lobe amnesia). Each subject is advised to take a comfortable resting position with arms on the armrest to avoid possible pick-ups of muscle artifacts. During the encoding phase, objects of asymmetric shapes, similar to the one shown in Fig. 1, are used as visual stimuli for the STM of the subject. The subject is advised to remember the object-shape, presented to him/her as a visual stimulus for 10 seconds (Fig. 8). The EEG signals are acquired from the occipital and the pre-frontal lobes at the end of this 10-second interval. Next, during the recall phase, the subject is asked to draw the object-shape from his/her STM. EEG is then acquired from all the electrodes, and common average referencing [80] is performed to eliminate the artifacts due to hand movements in drawing. In order to examine the effect of repeated STM learning, the same visual stimulus is presented after a time-delay of 60 seconds (Fig. 8). The STM learning is repeated γ times (γ ≥ 1) until the learnt object shape matches the sample object. The steps narrated above are performed repeatedly for 10 different asymmetric object shapes (as shown in Fig. 9), for each of the 35 subjects. The object shapes are presented in Fig. 9 in order of increasing shape complexity.

B. Experiment 1 (Validation of the STM Model with Respect to Error Metric ξ)

The motivation of this experiment is to compare the model-produced G matrices over successive trials with the same object-shape on the same subject. An error metric ξ is introduced to measure the relative difference between the G matrices of successive trials, where ξ is computed by

ξ = Σ∀i Σ∀j |gi,j − g′i,j| / Max(gi,j, g′i,j),   (29)

where gi,j and g′i,j are the STM weights obtained from two successive learning trials. The reproduced object-shape after each trial and the corresponding error metric are given in Table I for one healthy subject, S3, with the best STM performance.
It is apparent from Table I that the error ξ gradually decreases as the shape reproduced by the subject approaches the desired one. Further, with increasing shape-complexity, more trials are needed to retrieve the original shape. Table II provides the error metric for a more complex shape than those given in Table I.
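The successive-trial comparison of (29) can be sketched as follows. The 2 × 2 matrices are invented toy values, and strictly positive weights are assumed so that the denominator Max(gi,j, g′i,j) is never zero.

```python
def xi(G1, G2):
    # Eq. (29): xi = sum_{i,j} |g_ij - g'_ij| / Max(g_ij, g'_ij),
    # an entry-wise relative difference between two STM weight matrices.
    total = 0.0
    for row1, row2 in zip(G1, G2):
        for g, g2 in zip(row1, row2):
            total += abs(g - g2) / max(g, g2)
    return total

G_trial1 = [[2.0, 4.0], [8.0, 5.0]]    # hypothetical STM weights, trial t
G_trial2 = [[1.0, 4.0], [4.0, 10.0]]   # hypothetical STM weights, trial t+1
print(xi(G_trial1, G_trial2))          # 0.5 + 0.0 + 0.5 + 0.5 = 1.5
```

A shrinking ξ over trials then indicates that the subject's STM encoding of the shape is stabilizing.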

Fig. 8. Stimulus preparation.

Fig. 9. 10 objects (with sample number) used in the experiment with increasing shape complexity.

TABLE I
VALIDATION OF THE STM MODEL WITH RESPECT TO ξ FOR TWO OBJECTS

TABLE II
ERROR METRIC ξ FOR MORE COMPLEX SHAPE

TABLE III
STM MODEL G FOR SIMILAR BUT NON-IDENTICAL OBJECT SHAPES

C. Experiment 2 (Similar Encoding by a Subject for Similar Input Object-Shapes)

The motivation of the experiment is to match the similarity of the STM encoding for similar visual input stimuli. As the STM encoding is here represented by the weight matrix G, the similarity in STM encoding is measured by matching the similarity in the weight matrix G for similar input instances. It is apparent from Table III that for similar visual stimuli submitted to a subject, there is a commonality/closeness in the respective positions of the obtained G matrices. The common/similar part of the weight matrix G for similar input instances is enclosed by a firm box in Table III. In the measurement of commonality, a difference of Δgi,j = |gi,j − ĝi,j| ≤ 5 is allowed, where gi,j and ĝi,j denote the STM weights for 2 objects of similar geometry.

D. Experiment 3 (Study of Subjects' Learning Ability with Increasing Complexity in Object Shape)

In Fig. 10, the STM performance of 5 arbitrarily chosen healthy subjects (out of 30) with increasing object complexity is depicted by evaluating the time required to completely

reconstruct the object by a subject. The average performance of all 30 subjects is shown by the red solid line. It is apparent from the figure that the curve has an approximate parabolic form, indicating an increase in learning time with increased object complexity.

Fig. 10. Learning ability of the subject with increasing shape complexity.

Fig. 11. Convergence of the error metric ξ (and weight matrix G) over time with increased shape complexity.

TABLE IV
OBJECT SHAPES ACCORDING TO THE INCREASED SHAPE COMPLEXITY (SC1 < SC2 < SC3)

Fig. 12. Dissimilar region of the G matrix in successive trials obtained for (a) a patient with pre-frontal lobe amnesia and (b) a patient with temporal lobe epilepsy.

E. Experiment 4 (Convergence-Time of the Weight Matrix G for Increased Complexity of the Input Shape Stimuli)

This study aims at examining the time required for convergence of the G matrix for increased complexity of the visual shape stimuli. The shape-complexity is here measured by the number of vertices plus the number of connecting lines in the object geometry. On the other hand, the convergence-time is measured by the error metric ξ defined in (29). In Table IV, objects of 3 distinct shapes of increasing shape-complexity (SC) are shown, where the shape complexity of the i-th object is denoted by SCi for i = 1 to 3. The convergence of the G matrix for each of the 3 objects is presented in Fig. 11. It follows from Fig. 11 that the convergence-time of the G matrix increases with increasing shape-complexity.

F. Experiment 5 (Abnormality in the G Matrix for Subjects with Brain Impairment)

Here, the experiment is performed on 2 groups of subjects: people with i) temporal lobe epilepsy (Fig. 12(b)) and ii) Alzheimer's disease/amnesia with impairment in the pre-frontal regions (Fig. 12(a)). The same input shape-stimulus is submitted to the subject in 3 separate experiments performed on 3 different dates with a gap of 10 days between 2 consecutive experiments, and the convergence in the G matrix is determined in terms of the error metric E′.
A similarity measure in the G matrix after convergence for the above 3 experiments on the same subject is ascertained. It is observed from Fig. 12(a) that the G matrices obtained after convergence have the least similarity for people with Alzheimer's disease with pre-frontal impairment. However, people with temporal lobe epilepsy have similarity in the converged G matrices for the 3 experiments (Fig. 12(b)), as happens for normal/healthy subjects. The G matrices for two persons (one with pre-frontal lobe amnesia, and one with temporal lobe epilepsy), obtained after convergence in the 3 experiments, are illustrated in Fig. 12(a) and (b), where the regions (areas) of convergence in the G matrices for the 3 experiments are indicated by hatched lines. It is apparent from Fig. 12(a) that the commonality in the converged areas of the G matrix for patients with pre-frontal lobe amnesia is insignificantly small. The dissimilarity in the G matrix (represented in blue) may be used as a measure of the degree of STM impairment.
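The region-wise G-matrix comparison used in Experiments 2 and 5 can be sketched as follows. The tolerance of 5 follows the commonality criterion used with Table III; the matrices and the fraction-based score are invented for illustration and are not the paper's exact impairment measure.

```python
def dissimilar_region(G_a, G_b, tol=5.0):
    # Boolean mask of entries whose absolute difference exceeds tol,
    # mirroring the |g_ij - g^_ij| <= 5 commonality tolerance.
    return [[abs(x - y) > tol for x, y in zip(ra, rb)]
            for ra, rb in zip(G_a, G_b)]

def dissimilarity_score(G_a, G_b, tol=5.0):
    # Fraction of dissimilar entries; a larger score suggests less
    # consistent STM encoding across repeated experiments.
    mask = dissimilar_region(G_a, G_b, tol)
    flat = [v for row in mask for v in row]
    return sum(flat) / len(flat)

G_run1 = [[10.0, 22.0], [7.0, 40.0]]   # hypothetical converged G, experiment 1
G_run2 = [[12.0, 30.0], [6.0, 15.0]]   # hypothetical converged G, experiment 2
print(dissimilarity_score(G_run1, G_run2))  # 2 of 4 entries differ by > 5 -> 0.5
```

Under this sketch, a healthy subject would be expected to yield a score near 0 across repetitions, while a subject with pre-frontal impairment would yield a score closer to 1.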

Fig. 13. eLORETA tomography based on the current electric density (activity) at cortical voxels.

Fig. 14. N400 repetition effects along with eLORETA solutions for successive trials: (a) trial 1, (b) trial 2 and (c) trial 3.

Fig. 15. Increasing N400 negativity with increasing shape complexity.

V. BIOLOGICAL IMPLICATIONS

To compute the intra-cortical distribution of the electric activity from the surface EEG data, a special software, called eLORETA (exact Low Resolution brain Electromagnetic TomogrAphy) [63], is employed. eLORETA is a linear inverse solution method capable of reconstructing cortical electrical activity with correct localization from the scalp EEG data even in the presence of structured noise [64]. For the present experiment, the selected artifact-free EEG segments are used to evaluate the eLORETA intracranial spectral density in the frequency range 0–30 Hz with a resolution of 1 Hz. As indicated in Fig. 8, the entire experiment for a single trial is performed in 60 seconds (60,000 ms), comprising 10 seconds for memory encoding and 50 seconds for memory recall. The 60-second interval is divided into 600 time-frames of equal length (100 ms) by the eLORETA software. In addition, the negativity of the N400 [73] is checked after each learning epoch to confirm that the brain response obtained is due to neurons participating in STM learning.
The following biological implications directly follow from the eLORETA solutions and the negativity of the N400 signal.
1) Fig. 13 provides the eLORETA solutions for the source localization problem during the memory encoding and recall phases. It is observed from the eLORETA scalp map (Fig. 13) that the electric neuronal activity is higher in the occipital region for the first two time-frames, demonstrating the iconic memory (IM) encoding of the visually perceived object-shape for a duration of approximately 200 ms. For the next 90 time-frames (9000 ms), the pre-frontal cortex remains highly active, revealing the STM encoding during this interval of time. In the remaining time-frames, a significant increase in current density is observed in the pre-frontal and parietal cortex bilaterally, which signifies the involvement of these two lobes in task-planning for the hand-drawing.
2) To check the N400 repetition effect [74] during STM learning, each subject is elicited with the same object-shape repetitively until she learns to reproduce the original shape presented to her, and the N400 pattern is observed during each learning stage. It is observed that the N400 response to the first trial exhibits the largest negative peak, with decreasing negativity in successive trials. Fig. 14 represents the N400 dynamics over repetitive trials for the same subject stimulated with the same stimulus. Simultaneously, the eLORETA solutions, represented by topographic maps in Fig. 14, indicate the increasing neuronal activity in the pre-frontal cortex during the learning phase.
3) The N400 negativity at a given learning epoch also increases with increased complexity in shape learning. The increased negativity in the N400 for the shapes listed in Table IV is shown in Fig. 15.

VI. PERFORMANCE ANALYSIS

This section provides an experimental basis for the performance analysis and comparison of the proposed Type-2 Fuzzy Set (T2FS) induced mapping techniques with the traditional/existing ones. Here too, the performance of the proposed and the state-of-the-art algorithms has been analyzed using the MATLAB-16b toolbox, running under Windows-10 on an Intel Octacore processor with a clock speed of 2 GHz and 64 GB RAM.

A. Performance Analysis of the Proposed T2FS Methods

To study the relative performance of the proposed Type-2 fuzzy mapping techniques against the existing methods, the error metric E′ and the runtime of the training algorithm are used for comparison. During comparison, only the type-2 fuzzy models present in the last 2 stages of the training and test pipelines are

TABLE V
COMPARISON OF E′ OBTAINED BY THE PROPOSED MAPPING METHODS AGAINST STANDARD MAPPING TECHNIQUES

TABLE VI
ORDER OF COMPLEXITY OF THE PROPOSED T2FS ALGORITHMS AND OTHER COMPETITIVE MAPPING TECHNIQUES

TABLE VII
RESULTS OF STATISTICAL VALIDATION WITH THE PROPOSED METHODS AS REFERENCE, ONE AT A TIME

replaced by existing deep learning or other models. The rest of the training and testing are similar to the present work. Table V includes the results of E′ obtained by the 2 proposed T2FS-based mapping techniques against traditional type-1 and type-2 fuzzy algorithms [51]–[53], [65], [66], standard deep learning algorithms, including the Long Short-Term Memory (LSTM) [67] and the Convolutional Neural Network (CNN) [68], and traditional non-fuzzy mapping algorithms, including N-th order polynomial regression [75] of the form P = Σ(i=1 to N) qi.z^i for real qi, the Support Vector Machine (SVM) with polynomial kernel [69], the SVM with Gaussian kernel [70] and the Back-Propagation Neural Network (BPNN) [71], realized and tested for the present application. The experiment was performed on 35 subjects, each participating in 10 learning sessions, comprising 10 stimuli, covering 35 × 10 × 10 = 3500 learning instances. It is observed from Table V that the proposed GT2FS-based mapping technique outperforms its nearest competitors by an E′ of ∼1.5%. In Table V, we also observe that the IT2FS-based mapping technique takes the smallest run-time (∼34 ms) when compared with the other mapping methods. In addition, the proposed GT2FS-based method requires 92.15 ms, which is comparable to the run-time of most of the T2FS techniques.

B. Computational Performance Analysis of the Proposed T2FS Methods

The computational performance of T2FS-induced mapping techniques is generally determined by the total number of t-norm and s-norm computations [65]. In the computational complexity analysis, given in Table VI, the order of complexity of each technique is listed, where n is the number of GT2FSs (i.e., the number of features), M is the number of discretizations in the y-axis and I is the number of z-slices (considered only in the existing z-slice based approaches).

C. Statistical Validation Using the Wilcoxon Signed-Rank Test

A non-parametric Wilcoxon signed-rank test [72] is employed to statistically validate the proposed mapping techniques using E′ as a metric on a single database, prepared at the Artificial Intelligence Laboratory of Jadavpur University. Let H0 be the null hypothesis, indicating identical performance of a given algorithm B with respect to a reference algorithm A. Here, A = any one of the two proposed type-2 fuzzy mapping techniques and B = any one of the 7 algorithms listed in Table VII. To statistically validate the null hypothesis H0, we evaluate the test statistic W by

W = Σ(i=1 to Tr) [sgn(E′A,i − E′B,i).ri]   (30)

where E′A,i and E′B,i are the values of E′ obtained by algorithms A and B respectively at the i-th experimental instance, Tr is the total number of experimental instances, and ri denotes the rank of the pair at the i-th experimental instance, starting with the smallest as 1.
Table VII reports the results of the Wilcoxon signed-rank test, considering either of the proposed IT2FS and GT2FS methods as the reference algorithm. The plus (minus) sign in Table VII represents that the W value (i.e., the difference in errors) of an individual method, with the proposed method as reference, is significant (not significant). Here, a 95% confidence level is achieved with degree of freedom 1, studied at a p-value greater than 0.05.
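A minimal sketch of the test statistic (30) follows. The toy error values are invented; note that zero differences are kept here with sign 0 (the classical test discards them), and tied absolute differences receive arbitrary consecutive ranks rather than averaged ranks, so this is a literal reading of (30) rather than a full Wilcoxon implementation.

```python
def wilcoxon_W(E_A, E_B):
    # Eq. (30): W = sum_i sgn(E'_A,i - E'_B,i) * r_i, where r_i is the rank
    # of |E'_A,i - E'_B,i| over the Tr instances, smallest difference ranked 1.
    d = [a - b for a, b in zip(E_A, E_B)]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    rank = [0] * len(d)
    for r, i in enumerate(order, start=1):
        rank[i] = r
    sgn = lambda x: (x > 0) - (x < 0)
    return sum(sgn(d[i]) * rank[i] for i in range(len(d)))

# Hypothetical E' values for reference algorithm A and competitor B over 4 instances
E_A = [0.10, 0.20, 0.05, 0.40]
E_B = [0.12, 0.15, 0.05, 0.70]
print(wilcoxon_W(E_A, E_B))  # ranks by |d|: 2, 3, 1, 4; W = -2 + 3 + 0 - 4 = -3
```

A strongly negative W here would favour the reference algorithm A (smaller errors), which is then checked against the critical value for the chosen confidence level.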

Fig. 16. Parameter Selection of the type-2 fuzzy DBLN model.

D. Optimal Parameter Selection and Robustness Study

For the robustness study, the parameters used in the proposed training algorithm are optimized with respect to a judiciously selected objective function. Since E′ represents the error metric at the last trial, one possible objective measure could be

J = |E′|,   (31)

where E′ indirectly involves the following parameter set: ψ = {α, η, η′, δ1}. Since J is not a direct function of the above parameters, traditional derivative-based optimization is not feasible. Any meta-heuristic algorithm, however, can serve the purpose well. The Differential Evolution (DE) algorithm has been chosen here for its small code-length, low run-time complexity, good computational accuracy, and above all the authors' familiarity with it for several years [79], [96].
DE maintains a fixed population of parameter vectors (trial solutions) over the iterations. The components of the parameter vectors are initialized in a uniformly random manner over user-defined individual parametric spaces. The parameter vectors of the DE-target-to-best algorithm employed are evolved through a process of mutation with scale factor F = 0.8 and binomial crossover/recombination with crossover rate CR = 0.7 in two successive steps. The resulting vectors obtained after recombination are referred to as target vectors. For each parameter vector, one target vector is obtained. Next, the fitness of the evolved target vector and that of the corresponding trial solution are measured, and the member with the better fitness is retained as the parameter vector for the next step of evolution. The evolution of trial solutions is continued until the algorithm converges within a predefined error-limit of 0.001, where the error-limit represents the absolute difference of the fitness measures of the best-fit solutions of the last and the current generations. Fig. 16 provides a schematic overview of optimal parameter selection using DE.
The version of DE used here is DE/rand/1/bin. The DE is run 30 times with randomly initialized parameters within the selected bounds, and the selected parameter values of ψ are obtained from the best-fit solution of the most promising run. For the GT2FS-based training algorithm, the optimal parameter values obtained are α = 0.002, η = 0.016, η′ = 0.11 and δ1 = 0.01. For the IT2FS-induced mapping, the optimal parameter values obtained are α = 0.003, η = 0.011, η′ = 0.09 and δ1 = 0.014.

VII. CONCLUSION

The paper introduced a novel technique to develop a computational model of STM in the context of a shape-reconstruction task, with the ultimate aim of capturing the inherent biological characteristics of individual subjects in the model using the acquired EEG signals. The STM model is initialized with Hebbian learning and is adapted by a corrective feedback realized with perceptron-like learning at the end of each memory recall cycle (after the subject reproduces the object-shape from his/her memory). The STM adaptation is continued over several learning epochs until the error in reproducing the object is within a user-defined small finite bound. After convergence of the STM adaptation, a second feedback is used to adapt the iconic memory weight matrix using perceptron-like learning. Type-2 fuzzy logic is employed here to develop the mapping function between the pre-frontal and parietal lobe EEG features and also to construct the mapping function representing the parietal EEG features to the memory-reproduced object shape-features. Extensive experiments have been undertaken to confirm that the type-2 fuzzy vertical slice approach used for mapping yields the best results in comparison to the fuzzy, non-fuzzy, neural, regression-based models and the well-known deep learning models used to develop the same mapping functions.
An analysis undertaken reveals that the trained network yields a small error E′ (≤ 0.09) for all the 30 healthy experimental subjects, whereas it yields significantly large values (≥ 53) for all the 3 subjects suffering from pre-frontal lobe amnesia. Further, the G matrix of persons with pre-frontal lobe epilepsy/amnesia shows wider differences in selected regions of the converged matrices. The above result indicates that subjects with pre-frontal lobe brain disease yield inconsistent EEG signals over the learning epochs, resulting in a mismatch in regions of the G matrix after convergence. The degree of mismatch may be used as a score to measure pre-frontal lobe damage in future research.
Although the proposed EEG-induced DBLN shows early success in STM modeling, there still remains scope for future research to improve the model. First, more intermediate layers may be introduced in the model to represent other intermediate brain regions for a more accurate estimation of the STM performance. Second, there also remains scope for modification of the type-2 vertical slice models for better functional mapping, however at the cost of additional computational overhead. Additionally, besides the gamma power EEG features, theta power and/or transfer entropy may be used to improve the STM model performance.

ACKNOWLEDGMENT

The authors gratefully acknowledge the funding they received from the UPE-II Project in Cognitive Science, sponsored by the University Grants Commission, and the RUSA 2.0 project, sponsored by the Ministry of Human Resource Development (MHRD), Govt. of India, to undertake the work reported in this paper.

APPENDIX

This Appendix provides the simulation results of the geometric feature extraction introduced in Section II.
REFERENCES

[1] M. Mishkin and T. Appenzeller, The Anatomy of Memory. Scientific American, Incorporated, 1987.
[2] A. Baddeley, “Working memory and conscious awareness,” Theories of Memory, pp. 11–20, 1992.
[3] A. Mollet, “Fundamentals of human neuropsychology,” J. Undergraduate Neuroscience Educ., vol. 6, p. 2, 2008.
[4] N. M. Van Strien, N. L. M. Cappaert, and M. P. Witter, “The anatomy of memory: An interactive overview of the parahippocampal-hippocampal network,” Nature Rev. Neuroscience, vol. 10, no. 4, p. 272, 2009.
[5] J. Ward, The Student’s Guide to Cognitive Neuroscience. Psychology Press, 2015.
[6] W. R. Klemm, Atoms of Mind: The “Ghost in the Machine” Materializes. Springer Science & Business Media, 2011.
[7] W. R. Klemm, “The quest,” in Atoms of Mind. Netherlands: Springer, pp. 1–17, 2011.
[8] K. Numata, Y. Nakajima, T. Shibata, and S. Shimizu, “EEG gamma band is asymmetrically activated by location and shape memory tasks in humans,” J. Japanese Phys. Therapy Assoc., vol. 5, pp. 1–5, 2002.
[9] T. J. Lloyd-Jones, M. V. Roberts, E. C. Leek, N. C. Fouquet, and E. G. Truchanowicz, “The time course of activation of object shape and shape+colour representations during memory retrieval,” PLoS One, vol. 7, 2012.
[10] J. M. Fuster, R. H. Bauer, and J. P. Jervey, “Functional interactions between inferotemporal and prefrontal cortex in a cognitive task,” Brain Res., vol. 330, no. 2, pp. 299–307, 1985.
[11] J. Quintana, J. M. Fuster, and J. Yajeya, “Effects of cooling parietal cortex on prefrontal units in delay tasks,” Brain Res., vol. 503, no. 1, pp. 100–110, 1989.
[12] S. Zeki and S. Shipp, “The functional logic of cortical connections,” Nature, vol. 335, pp. 311–317, 1988.
[13] M. Mesulam, “A cortical network for directed attention and unilateral neglect,” Ann. Neurology, vol. 10, no. 4, pp. 309–325, 1981.
[14] W. B. Barr, “Examining the right temporal lobe’s role in nonverbal memory,” Brain Cognition, vol. 35, no. 1, pp. 26–41, 1997.
[15] M. Corbetta, F. M. Miezin, S. Dobmeyer, G. L. Shulman, and S. E. Petersen, “Selective and divided attention during visual discriminations of shape, color, and speed: Functional anatomy by positron emission tomography,” J. Neuroscience, vol. 8, pp. 2383–2402, 1991.
[16] S. C. Baker, C. D. Frith, S. J. Frackowiak, and R. J. Dolan, “Active representation of shape and spatial location in man,” Cerebral Cortex, vol. 4, pp. 612–619, 1996.
[17] K. H. Lee, L. M. Williams, M. Breakspear, and E. Gordon, “Synchronous gamma activity: A review and contribution to an integrative neuroscience model of schizophrenia,” Brain Res. Rev., vol. 1, pp. 57–78, 2003.
[18] J. R. Vidal, M. Chaumon, J. K. O’Regan, and C. Tallon-Baudry, “Visual grouping and the focusing of attention induce gamma-band oscillations at different frequencies in human magnetoencephalogram signals,” J. Cogn. Neuroscience, vol. 11, pp. 1850–1862, 2006.
[19] D. Senkowski and C. S. Herrmann, “Effects of task difficulty on evoked gamma activity and ERPs in a visual discrimination task,” Clin. Neurophysiology, vol. 11, pp. 1742–1753, 2002.
[20] M. K. Rieder, B. Rahm, J. D. Williams, and J. Kaiser, “Human gamma-band activity and behavior,” Int. J. Psychophysiology, vol. 79, no. 1, pp. 39–48, 2011.
[21] P. J. Uhlhaas and W. Singer, “Neural synchrony in brain disorders: Relevance for cognitive dysfunctions and pathophysiology,” Neuron, vol. 52, no. 1, pp. 155–168, 2006.
[22] W. Ross Ashby, Design for a Brain: The Origin of Adaptive Behaviour. John Wiley and Sons, 1954.
[23] R. C. Atkinson and R. M. Shiffrin, “Human memory: A proposed system and its control processes,” Psychol. Learn. Motivation, vol. 2, pp. 89–195, 1968.
[24] E. Tulving and J. Psotka, “Retroactive inhibition in free recall: Inaccessibility of information available in the memory store,” J. Exp. Psychol., vol. 87, pp. 1–8, 1971.
[25] M. A. Conway, Cognitive Models of Memory. MIT Press, 1997.
[26] A. D. Baddeley, Essentials of Human Memory. Psychology Press, 1999.
[27] A. Baddeley, “The episodic buffer: A new component of working memory?,” Trends Cogn. Sci., vol. 4, pp. 417–423, 2000.
[28] B. J. Baars and S. Franklin, “How conscious experience and working memory interact,” Trends Cogn. Sci., vol. 7, pp. 166–172, 2003.
[29] D. Vernon, Artificial Cognitive Systems. Cambridge: MIT Press, 2014.
[30] C. Başar-Eroglu, D. Strüber, M. Schürmann, M. Stadler, and E. Başar, “Gamma-band responses in the brain: A short review of psychophysiological correlates and functional significance,” Int. J. Psychophysiology, vol. 1, pp. 101–112, 1996.
[31] C. Tallon-Baudry and O. Bertrand, “Oscillatory gamma activity in humans and its role in object representation,” Trends Cogn. Sci., vol. 4, pp. 151–162, 1999.
[32] W. Klimesch, “EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis,” Brain Res. Rev., vol. 2, pp. 169–195, 1999.
[33] C. S. Herrmann, M. H. J. Munk, and A. K. Engel, “Cognitive functions of gamma-band activity: Memory match and utilization,” Trends Cogn. Sci., vol. 8, pp. 347–355, 2004.
[34] P. Sauseng, W. Klimesch, M. Schabus, and M. Doppelmayr, “Fronto-parietal EEG coherence in theta and upper alpha reflect central executive functions of working memory,” Int. J. Psychophysiology, vol. 2, pp. 97–103, 2005.
[35] W. Klimesch, P. Sauseng, and S. Hanslmayr, “EEG alpha oscillations: The inhibition–timing hypothesis,” Brain Res. Rev., vol. 1, pp. 63–88, 2007.
[36] N. Kopell, M. A. Whittington, and M. A. Kramer, “Neuronal assembly dynamics in the beta1 frequency range permits short-term memory,” in Proc. Nat. Acad. Sci., 2011, pp. 3779–3784.
[37] R. Van den Berg, H. Shin, W. C. Chou, R. George, and W. J. Ma, “Variability in encoding precision accounts for visual short-term memory limitations,” in Proc. Nat. Acad. Sci., pp. 8780–8785, 2012.
[38] W. Nan et al., “Individual alpha neurofeedback training effect on short term memory,” Int. J. Psychophysiology, vol. 1, pp. 83–87, 2012.
[39] J. J. LaRocque, J. A. Lewis-Peacock, A. T. Drysdale, K. Oberauer, and B. R. Postle, “Decoding attended information in short-term memory: An EEG study,” J. Cogn. Neuroscience, vol. 1, pp. 127–142, 2013.
[40] R. N. Roy, S. Bonnet, S. Charbonnier, and A. Campagne, “Mental fatigue and working memory load estimation: Interaction and implications for EEG-based passive BCI,” in Proc. Int. Conf. Eng. Medicine Biol. Soc. (EMBC), IEEE, 2013.
[41] G. Luksys et al., “Computational dissection of human episodic memory reveals mental process-specific genetic profiles,” in Proc. Nat. Acad. Sci., no. 35, pp. E4939–E4948, 2015.
[42] M. Ono, H. Furusho, and K. Iramina, “Analysis of the complexity of EEG during the short-term memory task,” in Proc. 8th Biomed. Eng. Int. Conf. (BMEiCON), IEEE, 2015.
[43] Y. Singh, J. Singh, R. Sharma, and A. Talwar, “FFT transformed quantitative EEG analysis of short term memory load,” Ann. Neurosciences, vol. 3, p. 176, 2015.
[44] S. Slotnick, “Frontal-occipital interactions during visual memory.”
[45] W. Gerstner, “Hebbian learning and plasticity,” in From Neuron to Cognition via Computational Neuroscience. Cambridge, MA: MIT Press, ch. 9, 2011.
[46] A. Konar, Computational Intelligence: Principles, Techniques and Applications. Springer, 2006.
[47] R. C. O’Reilly and K. A. Norman, “Hippocampal and neocortical contributions to memory: Advances in the complementary learning systems framework,” Trends Cogn. Sci., vol. 6, pp. 505–510, 2002.
[48] S. E. Hyman, R. C. Malenka, and E. J. Nestler, “Neural mechanisms of addiction: The role of reward-related learning and memory,” Annu. Rev. Neurosci., vol. 29, pp. 565–598, 2006.
[49] G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice-Hall, 1997.
[50] J. Mendel and D. Wu, Perceptual Computing: Aiding People in Making Subjective Judgments, vol. 13. John Wiley & Sons, 2010.
[51] J. M. Mendel, “General type-2 fuzzy logic systems made simple: A tutorial,” IEEE Trans. Fuzzy Syst., vol. 22, no. 5, pp. 1162–1182, 2014.
[52] C. Wagner and H. Hagras, “Toward general type-2 fuzzy logic systems based on zSlices,” IEEE Trans. Fuzzy Syst., vol. 18, no. 4, pp. 637–660, 2010.
[53] J. M. Mendel and R. I. B. John, “Type-2 fuzzy sets made simple,” IEEE Trans. Fuzzy Syst., vol. 10, no. 2, pp. 117–127, 2002.
[54] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, vol. 1. Cambridge: MIT Press, 2016.
[55] J. B. Scarborough, Numerical Mathematical Analysis. Oxford and IBH Publishing, 1955.
[56] S. Haykin, Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, 1994.
[57] C. Werner, U. Wegmuller, T. Strozzi, and A. Wiesmann, “Interferometric point target analysis for deformation mapping,” in Proc. Geosci. Remote Sens. Symp., IEEE, 2003, vol. 7, pp. 4362–4364.
[58] S. L. Kappel, D. Looney, D. P. Mandic, and P. Kidmose, “Physiological artifacts in scalp EEG and ear-EEG,” Biomed. Eng. Online, vol. 16, p. 103, 2017.
[59] D. Wu, “A constraint representation theorem for interval type-2 fuzzy sets using convex and normal embedded type-1 fuzzy sets, and its application to centroid computation,” in Proc. World Conf. Soft Comput., San Francisco, CA, May 2011.
[60] R. C. Malenka, Ed., Intercellular Communication in the Nervous System. Academic Press, 2009.
[61] L. Ghosh, A. Konar, P. Rakshit, S. Parui, A. L. Ralescu, and A. K. Nagar, “P-300 and N-400 induced decoding of learning skill of driving learners using type-2 fuzzy sets,” in Proc. IEEE Int. Conf. Fuzzy Syst. (FUZZ-IEEE), 2018, pp. 1–8.
[62] G. H. Klem, H. O. Luders, H. H. Jasper, and C. Elger, “The ten-twenty electrode system of the International Federation,” Electroencephalogr. Clin. Neurophysiol., vol. 52, pp. 3–6, 1999.
[63] R. D. Pascual-Marqui et al., “Assessing interactions in the brain with exact low resolution electromagnetic tomography,” Phil. Trans. R. Soc. A, vol. 369, pp. 3768–3784, 2011.
[64] R. D. Pascual-Marqui, “Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: Exact, zero error localization,” arXiv:0710.3341 [math-ph], 2007. [Online]. Available: http://arxiv.org/pdf/0710.3341
[65] J. Andreu-Perez, F. Cao, H. Hagras, and G. Z. Yang, “A self-adaptive online brain machine interface of a humanoid robot through a general type-2 fuzzy inference system,” IEEE Trans. Fuzzy Syst., vol. 9, 2016.
[66] A. Saha, A. Konar, and A. K. Nagar, “EEG analysis for cognitive failure detection in driving using type-2 fuzzy classifiers,” IEEE Trans. Emerg. Topics Comput. Intell., pp. 437–453, 2017.
[67] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 8, pp. 1735–1780, 1997.
[68] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[69] Y. W. Chang, C. J. Hsieh, K. W. Chang, M. Ringgaard, and C. J. Lin, “Training and testing low-degree polynomial data mappings via linear SVM,” J. Mach. Learn. Res., vol. 11, pp. 1471–1490, 2010.
[70] B. Scholkopf et al., “Comparing support vector machines with Gaussian kernels to radial basis function classifiers,” IEEE Trans. Signal Process., vol. 11, pp. 2758–2765, 1997.
[71] Z. Waszczyszyn and L. Ziemiański, “Neural networks in mechanics of structures and materials: New results and prospects of applications,” Computers & Structures, vol. 79, pp. 2261–2276, 2001.
[72] F. Wilcoxon, S. K. Katti, and R. A. Wilcox, “Critical values and probability levels for the Wilcoxon rank sum test and the Wilcoxon signed rank test,” Sel. Tables Math. Statist., vol. 1, pp. 171–259, 1970.
[73] M. Kutas and K. D. Federmeier, “N400,” Scholarpedia, vol. 4, no. 10, p. 7790, 2009.
[74] M. D. Rugg, “Event-related brain potentials dissociate repetition effects of high- and low-frequency words,” Memory & Cognition, vol. 18, no. 4, pp. 367–379, 1990.
[75] C. L. Goodale, J. D. Aber, and S. V. Ollinger, “Mapping monthly precipitation, temperature, and solar radiation for Ireland with polynomial regression and a digital elevation model,” Climate Res., vol. 1, pp. 35–49, 1998.
[76] J. M. Mendel, M. R. Rajati, and P. Sussner, “On clarifying some definitions and notations used for type-2 fuzzy sets as well as some recommended changes,” Inf. Sci., pp. 337–345, 2016.
[77] A. Khasnobish, A. Konar, D. N. Tibarewala, and A. K. Nagar, “Bypassing the natural visual-motor pathway to execute complex movement related tasks using interval type-2 fuzzy sets,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 1, pp. 91–105, 2017.
[78] D. Bhattacharya, A. Konar, and P. Das, “Secondary factor induced stock index time-series prediction using self-adaptive interval type-2 fuzzy sets,” Neurocomputing, vol. 171, pp. 551–568, 2016.
[79] P. Rakshit et al., “Realization of an adaptive memetic algorithm using differential evolution and Q-learning: A case study in multirobot path planning,” IEEE Trans. Syst., Man, Cybern.: Syst., vol. 43, no. 4, pp. 814–831, 2013.
[80] K. A. Ludwig, R. M. Miriani, N. B. Langhals, M. D. Joseph, D. J. Anderson, and D. R. Kipke, “Using a common average reference to improve cortical neuron recordings from microelectrode arrays,” J. Neurophysiology, vol. 3, p. 1679, 2009.
[81] L. R. Squire, D. Berg, F. E. Bloom, S. D. Lac, A. Ghosh, and N. C. Spitzer, Fundamental Neuroscience, 4th ed. Academic Press, 2013.
[82] Y. Jiang et al., “Seizure classification from EEG signals using transfer learning, semi-supervised learning and TSK fuzzy system,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 12, pp. 2270–2284, 2017.
[83] Y. Jiang et al., “Recognition of epileptic EEG signals using a novel multiview TSK fuzzy system,” IEEE Trans. Fuzzy Syst., vol. 25, pp. 3–20, 2017.
[84] D. Wu, “Online and offline domain adaptation for reducing BCI calibration effort,” IEEE Trans. Human-Machine Syst., vol. 47, pp. 550–563, Aug. 2017.
[85] B. Green, “Canny edge detection tutorial,” 2002, retrieved Mar. 6, 2005.
[86] L. Xu and E. Oja, “Randomized Hough transform (RHT): Basic mechanisms, algorithms, and computational complexities,” CVGIP: Image Understanding, vol. 57, no. 2, pp. 131–154, 1993.
[87] Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, “Deep learning-based classification of hyperspectral data,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 6, pp. 2094–2107, 2014.
[88] X. An, D. Kuang, X. Guo, Y. Zhao, and L. He, “A deep learning method for classification of EEG data based on motor imagery,” in Proc. Int. Conf. Intell. Comput., Springer, Cham, 2014, pp. 203–210.
[89] S. Jirayucharoensak, S. Pan-Ngum, and P. Israsena, “EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation,” The Scientific World J., vol. 1, pp. 1–10, 2014.
[90] P. Bashivan, I. Rish, M. Yeasin, and N. Codella, “Learning representations from EEG with deep recurrent-convolutional neural networks,” in Proc. Int. Conf. Learn. Representations, 2015, pp. 1–15.
[91] P. Mehta and D. J. Schwab, “An exact mapping between the variational renormalization group and deep learning,” in Proc. Int. Conf. Learn. Representations, 2014, pp. 1–8.
[92] S. Fan, “Do our brains use deep learning to make sense of the world?” Dec. 2017. [Online]. Available: https://singularityhub.com/2017/12/20/life-imitates-art-is-the-human-brain-also-running-deep-learning/
[93] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–533, 2015.
[94] Y. Sengupta, “Deep learning and the human brain: Inspiration, not imitation,” May 2019. [Online]. Available: https://dzone.com/articles/deep-learning-and-the-human-brain-inspiration-not
[95] F. Chollet, Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler der Keras-Bibliothek. MITP-Verlags GmbH & Co. KG, 2018.
[96] S. Das, A. Abraham, and A. Konar, “Automatic clustering using an improved differential evolution algorithm,” IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 1, pp. 218–237, 2008.

Lidia Ghosh received her B.Tech. degree in electronics and tele-communication engineering from the Bengal Institute of Technology, Techno India College, in 2011, and her M.Tech. degree in intelligent automation and robotics (IAR) from the Department of Electronics and Tele-Communication Engineering, Jadavpur University, Kolkata, in 2015. She was awarded a gold medal for securing the highest percentage of marks in M.Tech. in IAR in 2015. She is currently pursuing her Ph.D. in cognitive intelligence at Jadavpur University under the guidance of Prof. Amit Konar and Dr. Pratyusha Rakshit. Her current research interests include deep learning, type-2 fuzzy sets, human memory formation, short- and long-term memory interactions, and the biological basis of perception and scientific creativity.

Amit Konar (SM’10) is currently a Professor with the Department of Electronics and Tele-Communication Engineering, Jadavpur University, Kolkata, India. He earned his B.E. degree from Bengal Engineering College, Sibpur, in 1983, and his M.E., M.Phil. and Ph.D. degrees, all from Jadavpur University, in 1985, 1988, and 2004, respectively. Dr. Konar has published 15 books and over 350 research papers in leading international journals and conference proceedings. He has supervised 28 Ph.D. theses and 262 masters’ theses. He is a recipient of the AICTE-accredited Career Award for Young Teachers for the period 1997–2000. He was nominated as a Fellow of the West Bengal Academy of Science and Engineering in 2010 and of the (Indian) National Academy of Engineering in 2015. Dr. Konar has been serving as an Associate Editor of several international journals, including the IEEE TRANSACTIONS ON FUZZY SYSTEMS and the IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE. His current research interests include cognitive neuroscience, brain-computer interfaces, type-2 fuzzy sets and multi-agent systems.

Pratyusha Rakshit received the B.Tech. degree in electronics and communication engineering (ECE) from the Institute of Engineering and Management, India, and the M.E. degree in control engineering from the Electronics and Telecommunication Engineering (ETCE) Department, Jadavpur University, India, in 2010 and 2012, respectively. She was awarded her Ph.D. (Engineering) degree from Jadavpur University, India, in 2016. From August 2015 to November 2015, she was an Assistant Professor in the ETCE Department, Indian Institute of Engineering Science and Technology, India. She is currently an Assistant Professor in the ETCE Department, Jadavpur University. She was awarded gold medals for securing the highest percentage of marks in B.Tech. in ECE and among all the courses of M.E., respectively, in 2010 and 2012. She was the recipient of the CSIR Senior Research Fellowship, the INSPIRE Fellowship and the UGC UPE-II Junior Research Fellowship. Her principal research interests include artificial and computational intelligence, evolutionary computation, robotics, bioinformatics, pattern recognition, fuzzy logic, cognitive science and human-computer interaction. She is an author of over 50 papers published in top international journals and conference proceedings. She serves as a Reviewer for IEEE-TFS, IEEE-SMC: Systems, Neurocomputing, Information Sciences, and Applied Soft Computing.

Atulya K. Nagar received the Doctorate (D.Phil.) degree in applied non-linear mathematics from the University of York, York, U.K., in 1996. He holds the Foundation Chair as a Professor of Mathematical Sciences with Liverpool Hope University, Liverpool, U.K., where he is currently the Dean of the Faculty of Science. He was the recipient of a prestigious Commonwealth Fellowship for pursuing his Doctorate. Prior to joining Liverpool Hope, he was with the Department of Mathematical Sciences, and later with the Department of Systems Engineering, Brunel University, London. His research is inter-disciplinary, with expertise in nonlinear mathematics, natural computing, bio-mathematics and computational biology, as well as control systems engineering. He has edited volumes on Intelligent Systems and Applied Mathematics. He is the Editor-in-Chief of the International Journal of Artificial Intelligence and Soft Computing (IJAISC) and serves on the Editorial Boards of a number of prestigious journals. He is well published, with more than 400 publications in prestigious outlets such as the Journal of Applied Mathematics and Stochastic Analysis, the International Journal of Foundations of Computer Science, the IEEE TRANSACTIONS, Discrete Applied Mathematics, Fundamenta Informaticae, and IET Control Theory & Applications. He is a Chair of the University’s Research Committee, including the Research Excellence Framework, and sits on a number of strategic U.K.-wide research bodies, including the JISC.