Ghosh 2019
2471-285X © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
characteristic of the N400 signal is used here to determine the STM performance in a 2-dimensional object shape-reconstruction task.

Deep Learning (DL) [54] is currently gaining increasing interest from diverse research communities for its efficient performance in classification [87]–[89] and functional mapping [90], [91] problems from raw data. Deep learning algorithms differ from conventional neural network algorithms in having an exceedingly large number of layers to extract high-level features/attributes from low-level raw data. For example, in Convolutional Neural Network (CNN) [68] based deep learning, the motivation is to extract features of objects from a large pool of object-data. In a CNN, during the recall phase, layers occupying the later stages offer more refined object features than the preceding layers. Generally, extracts of the penultimate layer are regarded as object-features, while the last layer provides the class information in a multi-class classification problem.

Although conventional deep learning algorithms aim at imitating the behavioral mechanism of learning in the brain [92], [93], they hardly realize the cognitive functionalities of the individual brain modules [95] involved in the learning process. This paper makes an honest attempt to synthesize the functionality of different brain modules by distinctive layers with suitable non-linearity in the context of STM encoding and recall. It introduces a novel technique of STM-modeling in the settings of deep brain learning, where the individual brain functions involved in the STM encoding and recall cycles are modeled by developing the functional mapping from the input to the output. During the STM encoding and recall phases (of the shape-reconstruction experiments), four distinct functional mappings are extracted from the EEG signals acquired from the occipital, pre-frontal and parietal lobes. The first functional mapping is developed from the input visual stimuli and the occipital EEG response to the stimuli. The second functional mapping refers to the interdependence between the EEG signals acquired from the occipital and the pre-frontal lobes during the shape-encoding phase. This mapping is useful to predict the pre-frontal response from the occipital response in the recall cycle later. The third mapping refers to the pre-frontal to parietal mapping, resembling the functionality of the parietal lobe. This mapping helps in determining the parietal response, if the pre-frontal response is known during the recall phase. The last mapping, between the parietal responses and the geometric features of the reconstructed (hand-drawn) object-shape, indicates the parietal and motor cortex behavior jointly.

Machine learning models have successfully been used in Brain-Computer Interfaces (BCI) to handle two fundamental problems: i) classification of brain signals for different cognitive activities/malfunctioning [52], [82]–[84] and ii) synthesis of the functional mapping of the active brain lobes from their measured input-output [77]. This paper aims at serving the second problem. Although the functional mapping can be realized in a number of ways, here the mapping of the first two stages is realized by Hebbian learning [45], while that of the third and the fourth stages is designed by type-2 fuzzy logic. The choice of Hebbian learning follows from the fundamental basis of Hebb's principle of an excited neuron's natural tendency to stimulate a neighborhood neuron [46]–[48]. Hebbian learning, being unsupervised, fits well for signal transduction at low-level (early-stage) neural processing [81]. On the other hand, at higher-level (later-stage) neural signal transduction [60], supervised learning is employed to quantize the neural signals to converge to fixed points, representing object classes in the recognition problem, and the desired output level in the functional mapping problems. Further, due to asynchronous firing of neurons in different brain lobes, noise is introduced in the signaling pathways, causing undesirable changes in the outputs. The advent of fuzzy sets, in particular the type-2 counterpart, has immense potential in approximate reasoning, which is expected to play a vital role in the neural quantization process in the presence of noise [77]. Thus type-2 fuzzy logic is expected to serve well in functional mapping at higher-level neural learning.

Two distinct varieties of type-2 fuzzy sets are widely used in the literature [50]–[53], [65], [66]. They are well-known as Interval Type-2 Fuzzy Sets (IT2FS) [50] and General Type-2 Fuzzy Sets (GT2FS) [51]. In classical fuzzy sets, the membership function of a linguistic variable, lying in [0,1], is crisp, whereas in a type-2 fuzzy set the corresponding (primary) membership is fuzzy, as the linguistic variable at a given linguistic value has a wide range of primary memberships in [0,1]. GT2FS fundamentally differs from IT2FS with respect to the secondary (type-2) Membership Function (MF). In GT2FS, the secondary membership function takes any value in [0, 1], whereas in IT2FS the secondary membership function is considered 1 for all feasible primary memberships lying within a region, referred to as the Footprint of Uncertainty (FOU), and is zero elsewhere. Because of its representational advantages, GT2FS can capture higher degrees of uncertainty [52], however at the cost of additional computational overhead. Here, a special type of GT2FS, called a vertical slice [53], is used to design a novel algorithm for functional mapping from the pre-frontal to the parietal lobe and from the parietal lobe to the hand-drawn object-geometry.

The paper is divided into seven sections. Section II provides the system overview. In Section III, principles and methodology are covered in brief. Section IV deals with experiments and results. Biological implications of the experimental results are summarized in Section V. Performance analysis by statistical tests is undertaken in Section VI. Conclusions are listed in Section VII.

II. SYSTEM OVERVIEW

This section provides an overview of the proposed type-2 fuzzy deep brain learning network (DBLN), containing four stages of functional mapping, shown in Fig. 1(a). The input-output layers of each functional mapping module are explicitly indicated in Fig. 1(b). The geometric features of an object, to be reconstructed, are assigned at the first (input) layer of the proposed feed-forward network architecture (Fig. 1(b)). These features are obtained from the gray scale image of the object by the following steps: i) Gaussian filtering with user-defined standard deviation to smooth the raw gray scale image, ii) edge detection and thinning by non-maximal suppression [85], here realized with Canny edge detection [85], iii) line parameter
GHOSH et al.: MIMICKING STM IN SHAPE-RECONSTRUCTION TASK USING AN EEG-INDUCED TYPE-2 FUZZY 3
Fig. 1. (a). General Block-diagram of the proposed DBLN describing four-stage functional mapping with feedback for STM and Iconic Memory weight adaptation.
Fig. 1. (b) The Model used in four-stage mapping of the DBLN, explicitly showing the input and the output features of each module.
(perpendicular distance of the line from the origin, ρ, and the angle α between this perpendicular and the x-axis) detection by Hough Transform [86], iv) evaluation of line end-point coordinates, line lengths and adjacent sides of the polygon having common vertices, and v) computation of the angle between each two adjacent lines. The steps are illustrated in the Appendix. The lengths of the straight-line edges and the angles between adjacent edges are used as the geometric features of the object.

The weight matrix $W = [w_{l,i}]_{2p \times n}$ between the first and the second layers represents the weighted connectivity between the geometric feature $c_l$ of the visually perceived object and the iconic memory response $a_i$, where p denotes the number of vertices of the perceived object, and n denotes the number of electrodes placed on the occipital lobe (Fig. 1(b)). The second layer (the first hidden layer) thus contains the iconic memory response. The weight matrix $G = [g_{i,j}]_{n \times n}$ between the second and the third layers represents the connectivity weights between the iconic memory response $a_i$ and the STM response $b_j$, where $i, j \in \{1, \ldots, n\}$ (Fig. 1(b)). The third (the second hidden) layer thus contains the STM response $b_j$, $j = 1$ to $n$.
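Step iii) above parameterizes each straight edge by (ρ, α), with ρ = x·cos α + y·sin α. A minimal numpy sketch of such a Hough accumulator follows; it is an illustrative toy version, not the implementation used in the paper, and the grid sizes are arbitrary assumptions:

```python
import numpy as np

def hough_lines(edges, n_rho=64, n_alpha=180):
    """Vote for line parameters (rho, alpha), where rho = x*cos(alpha) + y*sin(alpha)
    is the perpendicular distance of the line from the origin and alpha is the
    angle of that perpendicular with the x-axis."""
    h, w = edges.shape
    rho_max = np.hypot(h, w)                      # largest possible |rho|
    alphas = np.linspace(0.0, np.pi, n_alpha, endpoint=False)
    acc = np.zeros((n_rho, n_alpha), dtype=int)   # (rho, alpha) accumulator
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(alphas) + y * np.sin(alphas)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_alpha)] += 1         # one vote per (edge point, alpha)
    return acc, alphas, rho_max

# Synthetic edge map containing a single horizontal line y = 5
edges = np.zeros((16, 16), dtype=bool)
edges[5, 2:14] = True
acc, alphas, rho_max = hough_lines(edges)
r, a = np.unravel_index(acc.argmax(), acc.shape)
```

For the horizontal line, every edge point votes for the same (ρ, α) cell, with α ≈ 90° (the perpendicular pointing along the y-axis) and ρ ≈ 5.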
average Gamma power, extracted from the EEG signal, acquired from the pre-frontal and parietal lobes respectively. The functional mapping $b_1, b_2, \ldots, b_n \to d_k$ for all k is here obtained by type-2 fuzzy logic.

4) For functional approximation of the parietal lobe response to shape features of the recalled/reconstructed object, the average Gamma power of the EEG signals acquired from the parietal lobe and the geometric features of the reconstructed object (extracted by Hough transform) are used as the input and output respectively of the fourth/last stage of the proposed model. Considering $\{d_k : 1 \le k \le n\}$ and $\{c_l : 1 \le l \le 2p\}$ to be the parietal EEG features and the parameters of the drawn object respectively, a type-2 fuzzy mapping is employed to obtain the required mapping $d_1, d_2, \ldots, d_n \to c_l$ for all l.

The training phase of the proposed DBLN system (Fig. 1) constitutes two fundamental steps: i) encoding of the W and G matrices along with construction of the functional mappings $b_1, b_2, \ldots, b_n \to d_k$ and $d_1, d_2, \ldots, d_n \to c_l$ for all k and l, and ii) adaptation of the W and G matrices by supervised learning. Here, the W and G matrices are first encoded using Hebbian learning. The functional mappings indicated above are constructed using type-2 fuzzy sets, and the adaptation of the W and G matrices is performed using a Perceptron-like learning equation [46].

III. BRAIN FUNCTIONAL MAPPING USING TYPE-2 FUZZY DBLN

The principles of brain functional mapping introduced in Section II are realized here using the type-2 fuzzy DBLN, with feedback loops realized with a Perceptron-like learning equation. The section has five parts. In Section A, a brief overview of IT2FS and GT2FS is given. Section B introduces the realization of the functional mappings of i) the pre-frontal to the parietal lobe and ii) the parietal lobe to the object-shape-geometry by a novel type-2 fuzzy vertical slice approach. In Section C, the weight adaptation of the W and G matrices is carried out by perceptron-like learning. The training and testing of the proposed fuzzy neural architecture are presented in Sections D and E respectively.

A. Overview of Type-2 Fuzzy Sets

Definition 1: A type-1 (T1)/classical fuzzy set A [49] is a set of ordered pairs of a linguistic variable x and its membership value $\mu_A(x)$ in A, given by

$$A = \{(x, \mu_A(x)) \mid \forall x \in X\} \qquad (4)$$

where X is the universe of discourse. Usually, $\mu_A(x)$ is a crisp number, lying in [0, 1] for any $x \in X$.

Definition 2: A General Type-2 Fuzzy Set $\tilde A$ is given by $\tilde A = \{((x, u), \mu_{\tilde A}(x, u)) \mid x \in X, u \in [0, 1]\}$, where x is a linguistic variable defined on a universe of discourse X, $u \in [0, 1]$ is the primary membership, and $\mu_{\tilde A}(x, u)$ is a secondary MF, given by the mapping $(x, u) \to \mu_{\tilde A}$, where $\mu_{\tilde A}(x, u)$ too lies in [0, 1] [51], [76].

Definition 3: For a given value of x, say $x = x'$, the 2D plane comprising u and $\mu_{\tilde A(x')}(u)$ is called a vertical slice of the GT2FS [53].

Definition 4: An Interval Type-2 Fuzzy Set (IT2FS) [51] is a special form of GT2FS with $\mu_{\tilde A}(x, u) = 1$ for $x \in X$ and $u \in [0, 1]$. A closed IT2FS (CIT2FS) is one form of IT2FS where $I_x = \{u \in [0, 1] \mid \mu_{\tilde A}(x, u) = 1\}$ is a closed interval for every $x \in X$ [76]. Here, CIT2FS is used throughout the paper. All IT2FS mentioned in this paper are CIT2FS. However, they are referred to as IT2FS, as done in most of the literature [76].

Definition 5: The Footprint of Uncertainty (FOU) of a type-2 fuzzy set (T2FS) $\tilde A$ is defined as the union of all its primary memberships [76]. The mathematical representation of the FOU is

$$\mathrm{FOU}(\tilde A) = \bigcup_{\forall x \in X} J_x \qquad (5)$$

where $J_x = \{(x, u) \mid u \in [0, 1], \mu_{\tilde A}(x, u) > 0\}$. The FOU is a bounded region, which represents the uncertainty in the primary memberships of the T2FS.

Definition 6: An embedded fuzzy set $A_e(x)$ is an arbitrarily selected type-1 MF lying in the FOU, i.e., $A_e(x) \in J_x$, $\forall x \in X$.

Definition 7: The embedded fuzzy set representing the upper bound of $\mathrm{FOU}(\tilde A)$ is called the upper membership function (UMF) and is denoted by $\overline{\mathrm{FOU}}(\tilde A)$ (or $\bar\mu_{\tilde A}(x)$), $\forall x \in X$ [76]. Similarly, the embedded fuzzy set representing the lower bound of $\mathrm{FOU}(\tilde A)$ is called the lower membership function (LMF) and is denoted by $\underline{\mathrm{FOU}}(\tilde A)$ (or $\underline\mu_{\tilde A}(x)$), $\forall x \in X$. More precisely,

$$\mathrm{UMF}(\tilde A) = \bar\mu_{\tilde A}(x) \equiv \overline{\mathrm{FOU}}(\tilde A) = \mathrm{Max}(A_e(x) : x \in X) \qquad (6)$$

and

$$\mathrm{LMF}(\tilde A) = \underline\mu_{\tilde A}(x) \equiv \underline{\mathrm{FOU}}(\tilde A) = \mathrm{Min}(A_e(x) : x \in X) \qquad (7)$$

B. Type-2 Fuzzy Mapping and Parameter Adaptation by Perceptron-like Learning

This section attempts to construct the functional mappings for i) the pre-frontal to the parietal lobe and ii) the parietal lobe to the object-shape geometry using the EEG signals acquired from the selected brain lobes. The acquired EEG signals are usually found to be contaminated with stochastic noise due to non-voluntary motor actions like eye-blinking, and with artifacts due to simultaneous brain activation for concurrent thoughts [58]. Very often the noise and the desired brain signals have overlapping frequency spectra, thereby making filtering algorithms inefficient for the targeted application. Naturally, the superimposed stochastic noise yields erroneous results in mapping, if realized with classical mapping techniques such as neural functional approximation [55], [56], nonlinear regression [57] and the like. Fuzzy logic has shown promising performance in functional mapping in the presence of noisy measurements because of the inherent nonlinearity in the MFs (Gaussian/Triangular) [78]. The effect of measurement noise in functional mapping is reduced further in T2FS [77] because of its ability to handle intra-personal-level uncertainty due to the presence of stochastic noise. These works inspired the authors to realize the brain mapping functions using IT2FS and a one-vertical-slice approach [53] of GT2FS. In addition to the type-2 fuzzy mapping, parameter adaptation of the mapping function is also needed to attain optimal performance.
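Definitions 5-7 can be exercised numerically: given upper and lower bounds of a FOU, any embedded type-1 MF must lie between them pointwise. A small sketch, in which the particular Gaussian bounds are illustrative assumptions and not taken from the paper:

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 201)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# A FOU bounded by an upper MF (wide Gaussian) and a lower MF
# (narrower, scaled Gaussian) around the same centre -- Definition 7
umf = gauss(x, 0.0, 1.5)          # upper membership function
lmf = 0.8 * gauss(x, 0.0, 0.8)    # lower membership function

# Definition 6: an embedded type-1 MF must stay inside the FOU at
# every x; the pointwise mean of the two bounds is one such choice
embedded = 0.5 * (umf + lmf)
inside = bool(np.all((lmf <= embedded) & (embedded <= umf)))
```

Here `inside` is true by construction, since the mean of two bounds always lies between them; any other curve trapped between the LMF and UMF would serve equally well as an embedded set.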
the object from the STM, and a sampling rate of EEG = 5000 samples/second, the total number of EEG samples acquired from each brain region = 5000 × 30 = 150,000. These samples, obtained over the duration of 30 seconds, are divided into 30 time-slots of equal length of 5000 samples each. The Power Spectral Density (PSD) in the gamma frequency band (30–100 Hz) is then extracted for each time slot. The PSD over a slot is then described by a Gaussian MF $G(\mu, \sigma^2)$, with μ and σ² representing the mean and variance of the PSD of the 5000 samples over the slot. The MF $\mu_{A_i}(x)$, where $A_i$ = Close-to-center of the support [49] of the MF and x = PSD, represents that the power is close to the mean value of the PSD over 5000 samples. Thus for 30 time-slots, 30 type-1 Gaussian MFs $A_1, A_2, \ldots, A_{30}$ are obtained. The following two steps are performed to construct the IT2FS (Fig. 3(b)) $\tilde A = [\underline\mu_{\tilde A}(x), \bar\mu_{\tilde A}(x)]$ from the 30 type-1 Gaussian MFs:

$$\underline\mu_{\tilde A}(x) = \mathrm{Min}[\mu_{A_1}(x), \mu_{A_2}(x), \ldots, \mu_{A_{30}}(x)], \ \forall x \qquad (8)$$

$$\bar\mu_{\tilde A}(x) = \mathrm{Max}[\mu_{A_1}(x), \mu_{A_2}(x), \ldots, \mu_{A_{30}}(x)], \ \forall x \qquad (9)$$

where $\underline\mu_{\tilde A}(x)$ and $\bar\mu_{\tilde A}(x)$ respectively denote the LMF and the UMF of the said IT2FS $\tilde A$. In order to maintain the convexity criterion [59] of IT2FS, the peaks of the type-1 MFs are joined with a straight line of zero slope, resulting in a flat-top approximated IT2FS (see Fig. 3(c)).

B.2 Construction of IT2FS-induced Mapping Function: To design the mapping function between two brain lobes, the EEG signal is acquired from both lobes simultaneously during a learning epoch. Let $x_1(t), x_2(t), \ldots, x_n(t)$ and $y_1(t), y_2(t), \ldots, y_n(t)$ be the gamma power extracted from n electrodes of a source lobe and n electrodes of a destination lobe respectively during the learning epoch. The IT2MFs of the propositions "$x_i$ is $\tilde A_i$" for i = 1 to n and "$y_j$ is $\tilde B_j$" for j = 1 to n (see Fig. 4) are obtained by the technique introduced in Section B.1.

Now let $x_i = x_i'$ for i = 1 to n be a sample measurement (here, average Gamma power). To map $x_1 = x_1', x_2 = x_2', \ldots, x_n = x_n'$ to $y_j = y_j'$, the following transformation is used:

$$UFS = \bar\mu_{\tilde A_1}(x_1')\ t\ \bar\mu_{\tilde A_2}(x_2')\ t \ldots t\ \bar\mu_{\tilde A_n}(x_n') \qquad (10)$$

$$LFS = \underline\mu_{\tilde A_1}(x_1')\ t\ \underline\mu_{\tilde A_2}(x_2')\ t \ldots t\ \underline\mu_{\tilde A_n}(x_n') \qquad (11)$$

where $\bar\mu_{\tilde A_i}(x_i')$ and $\underline\mu_{\tilde A_i}(x_i')$ are the upper membership function (UMF) and lower membership function (LMF) of $\mu_{\tilde A_i}(x_i')$, and t denotes a t-norm.

$$\underline\mu_{\tilde B_j}(y_j) = LFS_j \wedge \underline\mu_{\tilde B_j}(y_j), \ \forall y_j. \qquad (15)$$

The IT2FS consequent $\tilde B_j$, represented by $[\underline\mu_{\tilde B_j}(y_j), \bar\mu_{\tilde B_j}(y_j)]$, is next defuzzified by a proposed Average (Av.) Defuzzification Algorithm to obtain the centroid C, given by

$$C = \frac{\text{Area of the consequent } \tilde B_j}{\text{Support of the UMF of } \tilde B_j}. \qquad (16)$$

Let $y_j' = C$. Also, let $y_j$ be the desired value. The following steps are used next to adapt the control parameters $\bar\lambda_j$ and $\underline\lambda_j$, which control the area under the FOU of the inference. Let

$$\varepsilon_j = y_j - y_j'. \qquad (17)$$

For $\varepsilon_j < 0$:

$$\delta\bar\lambda_j = \alpha|\varepsilon_j|\,UFS, \ \ \bar\lambda_j \leftarrow \bar\lambda_j + \delta\bar\lambda_j; \qquad \delta\underline\lambda_j = \alpha|\varepsilon_j|\,LFS, \ \ \underline\lambda_j \leftarrow \underline\lambda_j - \delta\underline\lambda_j. \qquad (18)$$

For $\varepsilon_j > 0$:

$$\delta\bar\lambda_j = \alpha\varepsilon_j\,UFS, \ \ \bar\lambda_j \leftarrow \bar\lambda_j - \delta\bar\lambda_j; \qquad \delta\underline\lambda_j = \alpha\varepsilon_j\,LFS, \ \ \underline\lambda_j \leftarrow \underline\lambda_j + \delta\underline\lambda_j, \qquad (19)$$

where 0 < α < 1. The adaptation of $\bar\lambda_j$ and $\underline\lambda_j$ is done in the training phase. After the training with known $[x_1, x_2, \ldots, x_n]$ and $[y_1, y_2, \ldots, y_n]$ vectors is over, the weights $\bar\lambda_j$ and $\underline\lambda_j$ are fixed and may directly be used in the test phase.

B.3 Secondary Membership Function Computation of the Proposed GT2FS: Consider the rule: If $x_1$ is $\tilde A_1$ and $x_2$ is $\tilde A_2$ and … and $x_n$ is $\tilde A_n$, Then $y_j$ is $\tilde B_j$. Here, "$x_i$ is $\tilde A_i$" for i = 1 to n are GT2FS-induced propositions and "$y_j$ is $\tilde B_j$" denotes an IT2FS consequent MF. Here, the secondary MFs with respect to the primary memberships at a given $x_i = x_i'$ of the GT2FS proposition are represented by a vertical slice [53]. Let $\mu_{\tilde A_i(x_i')}(u)$ be the secondary MF for the i-th antecedent proposition "$x_i$ is $\tilde A_i$". Given the measurements $x_i = x_i'$ for i = 1 to n, the vertical planes representing the secondary memberships $\mu_{\tilde A_i(x_i')}(u)$ at $x_i = x_i'$ are identified. Let the primary membership u at $x_i = x_i'$ be spatially sampled as $u_1, u_2, \ldots, u_m$. Given the contributory primary memberships, which jointly comprise the FOU, the secondary MF at a given value of the linguistic variable $x_i = x_i'$ is computed using the following steps.
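The construction in B.1-B.2 can be sketched end-to-end: build the flat-top IT2FS envelope from slot-wise type-1 Gaussian MFs (the Min/Max steps of eqs. (8)-(9)) and evaluate upper/lower firing strengths for a measurement (eqs. (10)-(11)). This is a toy illustration: the product t-norm, grid, and random slot statistics are all assumptions, not the paper's data:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(0.0, 10.0, 501)
rng = np.random.default_rng(1)

# 30 type-1 Gaussian MFs, one per time slot of gamma-band PSD
# (means/spreads are random stand-ins for real slot statistics)
mus = rng.uniform(4.0, 6.0, 30)
sigmas = rng.uniform(0.5, 1.0, 30)
slot_mfs = np.stack([gauss(x, m, s) for m, s in zip(mus, sigmas)])

# Eqs. (8)-(9): pointwise Min/Max envelopes give the LMF and UMF
lmf = slot_mfs.min(axis=0)
umf = slot_mfs.max(axis=0)
# Flat-top approximation: set the UMF to 1 between the extreme peaks
umf[(x >= mus.min()) & (x <= mus.max())] = 1.0

# Eqs. (10)-(11): upper/lower firing strengths of a measurement
# vector, using product as the t-norm (one common choice)
def firing_strengths(measurements, lmfs, umfs, x):
    idx = [int(np.abs(x - m).argmin()) for m in measurements]
    ufs = float(np.prod([u[i] for u, i in zip(umfs, idx)]))
    lfs = float(np.prod([l[i] for l, i in zip(lmfs, idx)]))
    return ufs, lfs

ufs, lfs = firing_strengths([5.0], [lmf], [umf], x)
```

Since the LMF never exceeds the UMF pointwise, the lower firing strength is always bounded above by the upper firing strength, which is the interval the consequent inherits.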
algorithm is used to adapt $\bar\lambda_j$ and $\underline\lambda_j$. The adaptation process is similar to that in IT2FS (equations (18) and (19)), where UFS and LFS are replaced by $UFFS_j$ and $LFFS_j$ respectively. The training phase ends after the adaptation of $\bar\lambda_j$ and $\underline\lambda_j$. In the test phase, $\bar\lambda_j$ and $\underline\lambda_j$ are fixed as obtained in the training phase. Only the vectors $[x_1, x_2, \ldots, x_n]$ are presented, and the result of the mapping, i.e., $y_j'$, is predicted.

C. Perceptron-Like Learning for Weight Adaptation

The STM plays an important role in the retrieval and reconstruction of the shape of objects perceived by visual exploration. Here, we propose a multi-stage DBLN, where the stages of the network represent different mental processes. Fig. 1(b) provides the architecture of the complete system. For example, the first stage, symbolizing the iconic memory (IM), represents the mapping from the shape-features of the object to the acquired EEG features of the occipital lobe (Fig. 1(a)). The second stage, symbolizing the STM, represents the mapping from the occipital lobe to the pre-frontal lobe. The third stage symbolizes the brain connectivity from the pre-frontal lobe to the parietal lobe using T2FS. The last stage describes the mapping from the parietal layer to the reproduced object shape, and is also realized by T2FS. Two feedbacks have been incorporated in the system shown in Fig. 1(b), where the inner feedback loop is used to adapt the weight matrix $G = [g_{i,j}]$ using a perceptron-like supervised learning algorithm. The perceptron-like learning algorithm is selected here for its inherent characteristics of gradient-free and network-topology-independent learning. These selection criteria are imposed to avoid gradient computation over functions involving Max (∨) and Min (∧) operators in the feed-forward network. After each learning epoch of the subject, the weight matrix $G = [g_{i,j}]$ is adapted following

$$g_{i,j} \leftarrow g_{i,j} + \Delta g_{i,j}, \quad \text{where} \quad \Delta g_{i,j} = \eta \cdot E \cdot a_i, \ \forall i, j. \qquad (24)$$

Here, the error norm E is defined by

$$E = \sum_{l=1}^{2p} e_l = \sum_{l=1}^{2p} |\hat c_l - c_l| = \sum_{q=1}^{p} |\hat L_q - L_q| + \sum_{q=1}^{p} |\hat s_q - s_q|, \qquad (25)$$

where $\hat L_q$ is the length of line q in the object shape drawn by the subject and $L_q$ is the length of line q in the model-generated object shape. Similarly, $\hat s_q$ is the angle of line q with respect to the x-axis in the hand-drawn object shape, and $s_q$ is the angle of the q-th line with respect to the x-axis in
TABLE II
ERROR METRIC ξ FOR MORE COMPLEX SHAPE
TABLE III
STM MODEL G FOR SIMILAR BUT NON-IDENTICAL OBJECT SHAPES
Fig. 9. 10 objects (with sample number) used in the experiment with increasing
shape complexity.
TABLE I
VALIDATION OF THE STM MODEL WITH RESPECT TO ξ FOR TWO OBJECTS
Fig. 10. Learning ability of the subject with increasing shape complexity.
Fig. 11. Convergence of the error metric ξ (and weight matrix G) over time with increased shape complexity.
TABLE IV
OBJECT SHAPES ACCORDING TO THE INCREASED SHAPE COMPLEXITY
(SC1 < SC2 < SC3 )
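The perceptron-like adaptation of G described by eqs. (24)-(25) can be sketched as follows; the sizes, learning rate, and random values are illustrative assumptions, not experimental data:

```python
import numpy as np

def error_norm(L_hat, L, s_hat, s):
    """Eq. (25): E = sum |L^_q - L_q| + sum |s^_q - s_q| over the p line segments."""
    return float(np.abs(L_hat - L).sum() + np.abs(s_hat - s).sum())

def adapt_G(G, a, E, eta=0.05):
    """Eq. (24): delta g[i, j] = eta * E * a[i], the same for every column j."""
    return G + eta * E * a[:, None]

p, n = 4, 6   # toy sizes: p polygon lines, n electrodes (assumptions)
rng = np.random.default_rng(2)
L_hat, L = rng.random(p), rng.random(p)   # drawn vs. model line lengths
s_hat, s = rng.random(p), rng.random(p)   # drawn vs. model line angles

a = rng.random(n)                         # iconic-memory response a_i
G = np.zeros((n, n))
E = error_norm(L_hat, L, s_hat, s)
G = adapt_G(G, a, E)
```

Because the correction depends only on the scalar error E and the pre-synaptic activity $a_i$, no gradient through the Max/Min operators of the network is ever required, which is the selection criterion stated in Section III-C.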
Fig. 13. eLORETA tomography based on the current electric density (activity)
at cortical voxels.
Fig. 14. N400 repetition effects along with eLORETA solutions for successive
trials: (a) trial 1 (b) trial 2 and (c) trial 3.
V. BIOLOGICAL IMPLICATIONS
To compute the intra-cortical distribution of the electric activity from the surface EEG data, a special software package, called eLORETA (exact Low Resolution brain Electromagnetic TomogrAphy) [63], is employed. eLORETA is a linear inverse solution method capable of reconstructing cortical electrical activity with correct localization from the scalp EEG data, even in the presence of structured noise [64]. For the present experiment, the selected artifact-free EEG segments are used to evaluate the eLORETA intracranial spectral density in the frequency range 0–30 Hz with a resolution of 1 Hz. As indicated in Fig. 8, the entire experiment for a single trial is performed in 60 seconds (60,000 ms), comprising 10 seconds for memory encoding and 50 seconds for memory recall. The 60-second interval is divided into 600 time-frames of equal length (100 ms) by the eLORETA software. In addition, the negativity of N400 [73] is checked after each learning epoch to confirm that the brain response obtained is due to neurons participating in STM learning.

Fig. 15. Increasing N400 negativity with increasing shape complexity.

The following biological implications directly follow from the eLORETA solutions and the negativity of the N400 signal.

1) Fig. 13 provides the eLORETA solutions for the source localization problem during the memory encoding and recall phases. It is observed from the eLORETA scalp map (Fig. 13) that the electric neuronal activity is higher in the occipital region for the first two time-frames, demonstrating the iconic memory (IM) encoding of the visually perceived object-shape for approximately 200 ms. For the next 90 time-frames (9000 ms), the pre-frontal cortex remains highly active, revealing the STM encoding during this interval of time. In the remaining time-frames, a significant increase in current density is observed in the pre-frontal and parietal cortex bilaterally, which signifies the involvement of these two lobes in task-planning for the hand-drawing.

2) To check the N400 repetition effect [74] during STM learning, each subject is elicited with the same object-shape repetitively until she learns to reproduce the original shape presented to her, and the N400 pattern is observed during each learning stage. It is observed that the N400 response to the first trial exhibits the largest negative peak, with decreasing negativity in successive trials. Fig. 14 represents the N400 dynamics over repetitive trials for the same subject stimulated with the same stimulus. Simultaneously, the eLORETA solutions, represented by topographic maps in Fig. 14, indicate the increasing neuronal activity in the pre-frontal cortex during the learning phase.

3) The N400 negativity with increased complexity in shape learning also increases at a given learning epoch. The increased negativity in N400 for the shapes listed in Table IV is shown in Fig. 15.

VI. PERFORMANCE ANALYSIS

This section provides an experimental basis for performance analysis and comparison of the proposed Type-2 Fuzzy Set (T2FS) induced mapping techniques with the traditional/existing ones. Here too, the performance of the proposed and the state-of-the-art algorithms has been analyzed using the MATLAB-16b toolbox, running under Windows 10 on an Intel Octa-core processor with a clock speed of 2 GHz and 64 GB RAM.

A. Performance Analysis of the Proposed T2FS Methods

To study the relative performance of the proposed type-2 fuzzy mapping techniques with respect to the existing methods, the error metric E and the runtime of the training algorithm are used for comparison. During comparison, the type-2 fuzzy model present in the last two stages of the training and the test model only are
TABLE V
COMPARISON OF E OBTAINED BY THE PROPOSED MAPPING METHODS AGAINST STANDARD MAPPING TECHNIQUES

TABLE VI
ORDER OF COMPLEXITY OF THE PROPOSED T2FS ALGORITHMS AND OTHER COMPETITIVE MAPPING TECHNIQUES

TABLE VII
RESULTS OF STATISTICAL VALIDATION WITH THE PROPOSED METHODS AS REFERENCE, ONE AT A TIME
replaced by existing deep learning or other models. The rest of the training and testing is similar to the present work. Table V includes the results of E obtained by the two proposed T2FS-based mapping techniques against traditional type-1 and type-2 fuzzy [51]–[53], [65], [66] algorithms, standard deep learning algorithms, including Long Short-Term Memory (LSTM) [67] and Convolutional Neural Network (CNN) [68], and traditional non-fuzzy mapping algorithms, including N-th order polynomial regression [75] of the form $P = \sum_{i=1}^{N} q_i z^i$ for real $q_i$, Support Vector Machine (SVM) with polynomial kernel [69], SVM with Gaussian kernel [70] and the Back-Propagation Neural Network (BPNN) [71], realized and tested for the present application. The experiment was performed on 35 subjects, each participating in 10 learning sessions, comprising 10 stimuli, covering 35 × 10 × 10 = 3500 learning instances. It is observed from Table V that the proposed GT2FS-based mapping technique outperforms its nearest competitors by an E of ∼1.5%. In Table V, we also observe that the IT2FS-based mapping technique takes the smallest runtime (∼34 ms) when compared with the other mapping methods. In addition, the proposed GT2FS-based method requires 92.15 ms, which is comparable to the runtime of most of the T2FS techniques.

B. Computational Performance Analysis of the Proposed T2FS Methods

The computational performance of T2FS-induced mapping techniques is generally determined by the total number of t-norm and s-norm computations [65]. In the computational complexity analysis, given in Table VI, the order of complexity of each technique is listed, where n is the number of GT2FSs (i.e., the number of features), M is the number of discretizations in the y-axis, and I is the number of z-slices (considered only in the existing z-slice based approaches).

C. Statistical Validation Using the Wilcoxon Signed-Rank Test

A non-parametric Wilcoxon signed-rank test [72] is employed to statistically validate the proposed mapping techniques using E as a metric on a single database, prepared at the Artificial Intelligence Laboratory of Jadavpur University. Let $H_0$ be the null hypothesis, indicating identical performance of a given algorithm B with respect to a reference algorithm A. Here, A = any one of the two proposed type-2 fuzzy mapping techniques and B = any one of the 7 algorithms listed in Table VII. To statistically validate the null hypothesis $H_0$, we evaluate the test statistic W by

$$W = \sum_{i=1}^{T_r} \left[\mathrm{sgn}(E_{A,i} - E_{B,i}) \cdot r_i\right] \qquad (30)$$

where $E_{A,i}$ and $E_{B,i}$ are the values of E obtained by algorithms A and B respectively at the i-th experimental instance, $T_r$ is the total number of experimental instances, and $r_i$ denotes the rank of the pair at the i-th experimental instance, starting with the smallest as 1.

Table VII reports the results of the Wilcoxon signed-rank test, considering either of the proposed IT2FS and GT2FS methods as the reference algorithm. A plus (minus) sign in Table VII represents that the W value (i.e., the difference in errors) of an individual method, with the proposed method as reference, is significant (not significant). Here, a 95% confidence level is achieved with degree of freedom 1, studied at p-value greater than 0.05.
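The test statistic of eq. (30) can be sketched literally as follows. This is a simplified toy version that follows the formula as stated (stable-sort ranking, no tie averaging or zero-difference removal as in the standard Wilcoxon procedure), with made-up error values:

```python
import numpy as np

def wilcoxon_w(err_a, err_b):
    """Eq. (30): W = sum_i sgn(E_A,i - E_B,i) * r_i, with r_i the rank of the
    i-th pair ordered by |difference| (smallest rank = 1)."""
    d = np.asarray(err_a, dtype=float) - np.asarray(err_b, dtype=float)
    order = np.argsort(np.abs(d), kind="stable")   # smallest |d| first
    ranks = np.empty(len(d), dtype=int)
    ranks[order] = np.arange(1, len(d) + 1)
    return int(np.sum(np.sign(d) * ranks))

# Toy error values: reference algorithm A is consistently better
E_A = [0.10, 0.12, 0.09, 0.11]
E_B = [0.14, 0.13, 0.15, 0.12]
w = wilcoxon_w(E_A, E_B)   # every sign is -1, so W = -(1+2+3+4) = -10
```

A strongly negative W (reference errors consistently below the competitor's) is the pattern that, at sufficient magnitude, lets the signed-rank test reject the null hypothesis of identical performance.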
[33] C. S. Herrmann, M. H. J. Munk, and A. K. Engel, “Cognitive functions of gamma-band activity: Memory match and utilization,” Trends Cogn. Sci., vol. 8, pp. 347–355, 2004.
[34] P. Sauseng, W. Klimesch, M. Schabus, and M. Doppelmayr, “Fronto-parietal EEG coherence in theta and upper alpha reflect central executive functions of working memory,” Int. J. Psychophysiol., vol. 57, no. 2, pp. 97–103, 2005.
[35] W. Klimesch, P. Sauseng, and S. Hanslmayr, “EEG alpha oscillations: The inhibition–timing hypothesis,” Brain Res. Rev., vol. 53, no. 1, pp. 63–88, 2007.
[36] N. Kopell, M. A. Whittington, and M. A. Kramer, “Neuronal assembly dynamics in the beta1 frequency range permits short-term memory,” Proc. Natl. Acad. Sci. USA, vol. 108, pp. 3779–3784, 2011.
[37] R. van den Berg, H. Shin, W. C. Chou, R. George, and W. J. Ma, “Variability in encoding precision accounts for visual short-term memory limitations,” Proc. Natl. Acad. Sci. USA, vol. 109, pp. 8780–8785, 2012.
[38] W. Nan et al., “Individual alpha neurofeedback training effect on short term memory,” Int. J. Psychophysiol., vol. 1, pp. 83–87, 2012.
[39] J. J. LaRocque, J. A. Lewis-Peacock, A. T. Drysdale, K. Oberauer, and B. R. Postle, “Decoding attended information in short-term memory: An EEG study,” J. Cogn. Neurosci., vol. 25, no. 1, pp. 127–142, 2013.
[40] R. N. Roy, S. Bonnet, S. Charbonnier, and A. Campagne, “Mental fatigue and working memory load estimation: Interaction and implications for EEG-based passive BCI,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), 2013.
[41] G. Luksys et al., “Computational dissection of human episodic memory reveals mental process-specific genetic profiles,” Proc. Natl. Acad. Sci. USA, vol. 112, no. 35, pp. E4939–E4948, 2015.
[42] M. Ono, H. Furusho, and K. Iramina, “Analysis of the complexity of EEG during the short-term memory task,” in Proc. 8th Biomed. Eng. Int. Conf. (BMEiCON), IEEE, 2015.
[43] Y. Singh, J. Singh, R. Sharma, and A. Talwar, “FFT transformed quantitative EEG analysis of short term memory load,” Ann. Neurosci., vol. 3, p. 176, 2015.
[44] S. Slotnick, “Frontal-occipital interactions during visual memory.”
[45] W. Gerstner, “Hebbian learning and plasticity,” in From Neuron to Cognition via Computational Neuroscience, ch. 9, Cambridge, MA: MIT Press, 2011.
[46] A. Konar, Computational Intelligence: Principles, Techniques and Applications, Springer, 2006.
[47] R. C. O’Reilly and K. A. Norman, “Hippocampal and neocortical contributions to memory: Advances in the complementary learning systems framework,” Trends Cogn. Sci., vol. 6, pp. 505–510, 2002.
[48] S. E. Hyman, R. C. Malenka, and E. J. Nestler, “Neural mechanisms of addiction: The role of reward-related learning and memory,” Annu. Rev. Neurosci., vol. 29, pp. 565–598, 2006.
[49] G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice-Hall, 1997.
[50] J. Mendel and D. Wu, Perceptual Computing: Aiding People in Making Subjective Judgments, vol. 13, John Wiley & Sons, 2010.
[51] J. M. Mendel, “General type-2 fuzzy logic systems made simple: A tutorial,” IEEE Trans. Fuzzy Syst., vol. 22, no. 5, pp. 1162–1182, 2014.
[52] C. Wagner and H. Hagras, “Toward general type-2 fuzzy logic systems based on zSlices,” IEEE Trans. Fuzzy Syst., vol. 18, no. 4, pp. 637–660, 2010.
[53] J. M. Mendel and R. I. B. John, “Type-2 fuzzy sets made simple,” IEEE Trans. Fuzzy Syst., vol. 10, no. 2, pp. 117–127, 2002.
[54] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Cambridge, MA: MIT Press, 2016.
[55] J. B. Scarborough, Numerical Mathematical Analysis, Oxford and IBH Publishing, 1955.
[56] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall PTR, 1994.
[57] C. Werner, U. Wegmuller, T. Strozzi, and A. Wiesmann, “Interferometric point target analysis for deformation mapping,” in Proc. IEEE Geosci. Remote Sens. Symp., 2003, vol. 7, pp. 4362–4364.
[58] S. L. Kappel, D. Looney, D. P. Mandic, and P. Kidmose, “Physiological artifacts in scalp EEG and ear-EEG,” Biomed. Eng. Online, vol. 16, p. 103, 2017.
[59] D. Wu, “A constraint representation theorem for interval type-2 fuzzy sets using convex and normal embedded type-1 fuzzy sets, and its application to centroid computation,” in Proc. World Conf. Soft Comput., San Francisco, CA, May 2011.
[60] R. C. Malenka, Ed., Intercellular Communication in the Nervous System, Academic Press, 2009.
[61] L. Ghosh, A. Konar, P. Rakshit, S. Parui, A. L. Ralescu, and A. K. Nagar, “P-300 and N-400 induced decoding of learning skill of driving learners using type-2 fuzzy sets,” in Proc. IEEE Int. Conf. Fuzzy Syst. (FUZZ-IEEE), 2018, pp. 1–8.
[62] G. H. Klem, H. O. Luders, H. H. Jasper, and C. Elger, “The ten-twenty electrode system of the International Federation,” Electroencephalogr. Clin. Neurophysiol., vol. 52, pp. 3–6, 1999.
[63] R. D. Pascual-Marqui et al., “Assessing interactions in the brain with exact low resolution electromagnetic tomography,” Phil. Trans. R. Soc. A, vol. 369, pp. 3768–3784, 2011.
[64] R. D. Pascual-Marqui, “Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: Exact, zero error localization,” arXiv:0710.3341 [math-ph], 2007. [Online]. Available: http://arxiv.org/pdf/0710.3341
[65] J. Andreu-Perez, F. Cao, H. Hagras, and G. Z. Yang, “A self-adaptive online brain-machine interface of a humanoid robot through a general type-2 fuzzy inference system,” IEEE Trans. Fuzzy Syst., vol. 9, 2016.
[66] A. Saha, A. Konar, and A. K. Nagar, “EEG analysis for cognitive failure detection in driving using type-2 fuzzy classifiers,” IEEE Trans. Emerg. Topics Comput. Intell., pp. 437–453, 2017.
[67] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[68] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[69] Y. W. Chang, C. J. Hsieh, K. W. Chang, M. Ringgaard, and C. J. Lin, “Training and testing low-degree polynomial data mappings via linear SVM,” J. Mach. Learn. Res., vol. 11, pp. 1471–1490, 2010.
[70] B. Scholkopf et al., “Comparing support vector machines with Gaussian kernels to radial basis function classifiers,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2758–2765, 1997.
[71] Z. Waszczyszyn and L. Ziemiański, “Neural networks in mechanics of structures and materials – new results and prospects of applications,” Computers & Structures, vol. 79, pp. 2261–2276, 2001.
[72] F. Wilcoxon, S. K. Katti, and R. A. Wilcox, “Critical values and probability levels for the Wilcoxon rank sum test and the Wilcoxon signed rank test,” Sel. Tables Math. Statist., vol. 1, pp. 171–259, 1970.
[73] M. Kutas and K. D. Federmeier, “N400,” Scholarpedia, vol. 4, no. 10, p. 7790, 2009.
[74] M. D. Rugg, “Event-related brain potentials dissociate repetition effects of high- and low-frequency words,” Memory & Cognition, vol. 18, no. 4, pp. 367–379, 1990.
[75] C. L. Goodale, J. D. Aber, and S. V. Ollinger, “Mapping monthly precipitation, temperature, and solar radiation for Ireland with polynomial regression and a digital elevation model,” Climate Res., vol. 1, pp. 35–49, 1998.
[76] J. M. Mendel, M. R. Rajati, and P. Sussner, “On clarifying some definitions and notations used for type-2 fuzzy sets as well as some recommended changes,” Inf. Sci., pp. 337–345, 2016.
[77] A. Khasnobish, A. Konar, D. N. Tibarewala, and A. K. Nagar, “Bypassing the natural visual-motor pathway to execute complex movement related tasks using interval type-2 fuzzy sets,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 25, no. 1, pp. 91–105, 2017.
[78] D. Bhattacharya, A. Konar, and P. Das, “Secondary factor induced stock index time-series prediction using self-adaptive interval type-2 fuzzy sets,” Neurocomputing, vol. 171, pp. 551–568, 2016.
[79] P. Rakshit et al., “Realization of an adaptive memetic algorithm using differential evolution and Q-learning: A case study in multirobot path planning,” IEEE Trans. Syst., Man, Cybern.: Syst., vol. 43, no. 4, pp. 814–831, 2013.
[80] K. A. Ludwig, R. M. Miriani, N. B. Langhals, M. D. Joseph, D. J. Anderson, and D. R. Kipke, “Using a common average reference to improve cortical neuron recordings from microelectrode arrays,” J. Neurophysiol., vol. 3, p. 1679, 2009.
[81] L. R. Squire, D. Berg, F. E. Bloom, S. du Lac, A. Ghosh, and N. C. Spitzer, Fundamental Neuroscience, 4th ed., Academic Press, 2013.
[82] Y. Jiang et al., “Seizure classification from EEG signals using transfer learning, semi-supervised learning and TSK fuzzy system,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 25, no. 12, pp. 2270–2284, 2017.
[83] Y. Jiang et al., “Recognition of epileptic EEG signals using a novel multiview TSK fuzzy system,” IEEE Trans. Fuzzy Syst., vol. 25, pp. 3–20, 2017.
[84] D. Wu, “Online and offline domain adaptation for reducing BCI calibration effort,” IEEE Trans. Human-Mach. Syst., vol. 47, pp. 550–563, Aug. 2017.
[85] B. Green, “Canny edge detection tutorial,” 2002, retrieved 2005.
[86] L. Xu and E. Oja, “Randomized Hough transform (RHT): Basic mechanisms, algorithms, and computational complexities,” CVGIP: Image Understanding, vol. 57, no. 2, pp. 131–154, 1993.
[87] Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, “Deep learning-based classification of hyperspectral data,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 6, pp. 2094–2107, 2014.
[88] X. An, D. Kuang, X. Guo, Y. Zhao, and L. He, “A deep learning method for classification of EEG data based on motor imagery,” in Proc. Int. Conf. Intell. Comput., Springer, Cham, 2014, pp. 203–210.
[89] S. Jirayucharoensak, S. Pan-Ngum, and P. Israsena, “EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation,” The Scientific World J., vol. 1, pp. 1–10, 2014.
[90] P. Bashivan, I. Rish, M. Yeasin, and N. Codella, “Learning representations from EEG with deep recurrent-convolutional neural networks,” in Proc. Int. Conf. Learn. Representations, 2015, pp. 1–15.
[91] P. Mehta and D. J. Schwab, “An exact mapping between the variational renormalization group and deep learning,” in Proc. Int. Conf. Learn. Representations, 2014, pp. 1–8.
[92] S. Fan, “Do our brains use deep learning to make sense of the world?” [Online]. Available: https://singularityhub.com/2017/12/20/life-imitates-art-is-the-human-brain-also-running-deep-learning/#sm.0000f74204d3wdqrw591mrmxr5tqq
[93] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–533, 2015.
[94] Y. Sengupta, “Deep learning and the human brain: Inspiration, not imitation,” May 2019. [Online]. Available: https://dzone.com/articles/deep-learning-and-the-human-brain-inspiration-not
[95] F. Chollet, Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler der Keras-Bibliothek, MITP-Verlags GmbH & Co. KG, 2018.
[96] S. Das, A. Abraham, and A. Konar, “Automatic clustering using an improved differential evolution algorithm,” IEEE Trans. Syst., Man, Cybern. A, Syst. Humans, vol. 38, no. 1, pp. 218–237, 2008.

Lidia Ghosh received the B.Tech. degree in electronics and tele-communication engineering from the Bengal Institute of Technology, Techno India College, in 2011, and the M.Tech. degree in intelligent automation and robotics (IAR) from the Department of Electronics and Tele-Communication Engineering, Jadavpur University, Kolkata, in 2015. She was awarded a gold medal for securing the highest percentage of marks in M.Tech. in IAR in 2015. She is currently pursuing the Ph.D. degree in cognitive intelligence at Jadavpur University under the guidance of Prof. Amit Konar and Dr. Pratyusha Rakshit. Her current research interests include deep learning, type-2 fuzzy sets, human memory formation, short- and long-term memory interactions, and the biological basis of perception and scientific creativity.

Pratyusha Rakshit received the B.Tech. degree in electronics and communication engineering (ECE) from the Institute of Engineering and Management, India, in 2010, and the M.E. degree in control engineering from the Electronics and Telecommunication Engineering (ETCE) Department, Jadavpur University, India, in 2012. She was awarded the Ph.D. (Engineering) degree by Jadavpur University, India, in 2016. From August 2015 to November 2015, she was an Assistant Professor in the ETCE Department, Indian Institute of Engineering Science and Technology, India. She is currently an Assistant Professor in the ETCE Department, Jadavpur University. She was awarded gold medals for securing the highest percentage of marks in B.Tech. in ECE and among all the courses of M.E., in 2010 and 2012, respectively. She was the recipient of a CSIR Senior Research Fellowship, an INSPIRE Fellowship, and a UGC UPE-II Junior Research Fellowship. Her principal research interests include artificial and computational intelligence, evolutionary computation, robotics, bioinformatics, pattern recognition, fuzzy logic, cognitive science, and human-computer interaction. She has authored over 50 papers published in top international journals and conference proceedings. She serves as a reviewer for IEEE-TFS, IEEE-SMC: Systems, Neurocomputing, Information Sciences, and Applied Soft Computing.

Atulya K. Nagar received the Doctorate (D.Phil.) degree in applied non-linear mathematics from the University of York, York, U.K., in 1996. He holds the Foundation Chair as a Professor of Mathematical Sciences with Liverpool Hope University, Liverpool, U.K., where he is currently the Dean of the Faculty of Science. He was the recipient of a prestigious Commonwealth Fellowship for pursuing his doctorate. Prior to joining Liverpool Hope, he was with the Department of Mathematical Sciences, and later with the Department of Systems Engineering, Brunel University, London. His research is interdisciplinary, with expertise in nonlinear mathematics, natural computing, bio-mathematics and computational biology, as well as control systems engineering. He has edited volumes on intelligent systems and applied mathematics. He is the Editor-in-Chief of the International Journal of Artificial Intelligence and Soft Computing (IJAISC) and serves on the editorial boards of a number of prestigious journals. He is well published, with more than 400 publications in prestigious outlets such as the Journal of Applied Mathematics and Stochastic Analysis, the International Journal of Foundations of Computer Science, the IEEE TRANSACTIONS, Discrete Applied Mathematics, Fundamenta Informaticae, and IET Control Theory & Applications. He chairs the University’s Research Committee, including the Research Excellence Framework, and sits on a number of strategic U.K.-wide research bodies, including the JISC.