
PRINCIPLES OF SOFT COMPUTING
UNIT II: RBF NETWORK, ASSOCIATIVE MEMORY
TOPICS

RBF Network; Associative memory: auto-, hetero- and linear associative memory networks; Adaptive Resonance Theory: ART1, ART2; Introduction to Computer Vision; Introduction to Convolutional Neural Networks; Popular architectures: AlexNet, GoogLeNet, VGG Net

9/16/2024 Principles of Soft Computing (SRM University-AP) 3


Example: implementing an XOR gate using an RBF network

9/16/2024 8
 Input space: the four points (0,0), (0,1), (1,0), (1,1) in the (x1, x2) plane.

 Output space: y ∈ {0, 1}

 Construct an RBF pattern classifier such that:

(0,0) and (1,1) are mapped to 0, class C1
(1,0) and (0,1) are mapped to 1, class C2

9/16/2024 Principles of Soft Computing (SRM University-AP) 9


Step (binary) output function.

9/16/2024 14
9/16/2024 15
Flowchart for the training
process of RBF

9/16/2024 16
RBF ARCHITECTURE
(Figure: inputs x1, ..., xm feed one hidden layer of RBF units $\varphi_1, \dots, \varphi_{m_1}$, whose outputs reach the single output y through weights $w_1, \dots, w_{m_1}$.)

 One hidden layer with RBF activation functions.

 Output layer with linear activation function:

$y = w_1\varphi_1(\|x - t_1\|) + \dots + w_{m_1}\varphi_{m_1}(\|x - t_{m_1}\|)$

where $\|x - t\|$ is the distance of $x = (x_1, \dots, x_m)$ from the center vector $t$.

9/16/2024 Principles of Soft Computing (SRM University-AP) 17


Hidden Neurons
 A hidden neuron is more sensitive to data points near its center.

 For a Gaussian RBF this sensitivity may be tuned by adjusting the spread σ,
where a larger spread implies less sensitivity.

 Biological example: cochlear stereocilia cells (in our ears ...) have locally
tuned frequency responses.

9/16/2024 Principles of Soft Computing (SRM University-AP) 18


Gaussian RBF φ (a bell-shaped curve centered at the RBF center):

$\varphi(r) = \exp\!\left(-\dfrac{r^2}{2\sigma^2}\right)$

σ is a measure of how spread out the curve is: a large σ gives a wide curve, a small σ a narrow one.

9/16/2024 Principles of Soft Computing (SRM University-AP) 19


Example: XOR Problem
 Input space: the four points (0,0), (0,1), (1,0), (1,1) in the (x1, x2) plane.

 Output space: y ∈ {0, 1}

 Construct an RBF pattern classifier such that:

(0,0) and (1,1) are mapped to 0, class C1
(1,0) and (0,1) are mapped to 1, class C2

9/16/2024 Principles of Soft Computing (SRM University-AP) 20


RBF NN for the XOR problem
||x t1 ||2
1 (|| x  t1 ||)  e x1
2
t1 -1
 2 (|| x  t2 ||)  e ||x t || 2

y
with t1  (1,1) and t2  (0,0) x2 t2 -1
+1

||x t1 ||2 ||x t2 ||2


y  e e 1
If y  0 then class 1 otherwise class 0

9/16/2024 Principles of Soft Computing (SRM University-AP) 21
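To make this mapping concrete, here is a minimal NumPy sketch of the XOR classifier above (two Gaussian hidden units at t1 = (1,1) and t2 = (0,0), output weights -1, -1 and bias +1). The function name rbf_xor and the use of NumPy are illustrative choices, not part of the slides.

```python
import numpy as np

t1, t2 = np.array([1.0, 1.0]), np.array([0.0, 0.0])   # RBF centers

def rbf_xor(x):
    x = np.asarray(x, dtype=float)
    phi1 = np.exp(-np.sum((x - t1) ** 2))   # phi1(||x - t1||) = exp(-||x - t1||^2)
    phi2 = np.exp(-np.sum((x - t2) ** 2))   # phi2(||x - t2||) = exp(-||x - t2||^2)
    y = -phi1 - phi2 + 1.0                  # linear output layer: weights -1, -1 and bias +1
    return 1 if y >= 0 else 0               # class 1 if y >= 0, otherwise class 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", rbf_xor(x))              # expected: 0, 1, 1, 0
```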


RBF network parameters
 What do we have to learn for an RBF NN with a given architecture?

 The centers of the RBF activation functions

 The spreads of the Gaussian RBF activation functions

 The weights from the hidden to the output layer

 Different learning algorithms may be used for learning the RBF network
parameters. We describe three possible methods for learning centers,
spreads and weights.

9/16/2024 Principles of Soft Computing (SRM University-AP) 22


Learning Algorithm 1
 Apply the gradient descent method for finding centers, spreads and weights, by minimizing the (instantaneous) squared error

$E = \tfrac{1}{2}\big(y(x) - d\big)^2$

 Updates:

Centers: $\Delta t_j = -\eta_{t_j}\,\dfrac{\partial E}{\partial t_j}$

Spreads: $\Delta \sigma_j = -\eta_{\sigma_j}\,\dfrac{\partial E}{\partial \sigma_j}$

Weights: $\Delta w_{ij} = -\eta_{w_{ij}}\,\dfrac{\partial E}{\partial w_{ij}}$
9/16/2024 Principles of Soft Computing (SRM University-AP) 23
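The update rules above can be sketched directly in NumPy. This is a minimal sketch assuming Gaussian RBFs $\varphi_j(x) = \exp(-\|x - t_j\|^2 / 2\sigma_j^2)$, a single output unit, and one shared learning rate eta instead of the per-parameter rates η on the slide; the function names rbf_forward and gradient_step are illustrative.

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    d2 = np.sum((centers - x) ** 2, axis=1)      # squared distances ||x - t_j||^2
    phi = np.exp(-d2 / (2.0 * sigmas ** 2))      # Gaussian hidden-unit activations
    return phi @ weights, phi, d2                # linear output y(x)

def gradient_step(x, d, centers, sigmas, weights, eta=0.05):
    x = np.asarray(x, dtype=float)
    y, phi, d2 = rbf_forward(x, centers, sigmas, weights)
    err = y - d                                                            # dE/dy for E = 0.5*(y - d)^2
    grad_w = err * phi                                                     # dE/dw_j
    grad_t = err * (weights * phi / sigmas ** 2)[:, None] * (x - centers)  # dE/dt_j
    grad_s = err * weights * phi * d2 / sigmas ** 3                        # dE/dsigma_j
    weights -= eta * grad_w                      # in-place updates of the caller's arrays
    centers -= eta * grad_t
    sigmas  -= eta * grad_s
    return 0.5 * err ** 2                        # instantaneous squared error
```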
Learning Algorithm 2
 Weights: are computed by means of the pseudo-inverse method.
 For an example $(x_i, d_i)$, consider the output of the network

$y(x_i) = w_1\varphi_1(\|x_i - t_1\|) + \dots + w_{m_1}\varphi_{m_1}(\|x_i - t_{m_1}\|)$

 We would like $y(x_i) = d_i$ for each example, that is

$w_1\varphi_1(\|x_i - t_1\|) + \dots + w_{m_1}\varphi_{m_1}(\|x_i - t_{m_1}\|) = d_i$

9/16/2024 Principles of Soft Computing (SRM University-AP) 24


Learning Algorithm 2
 This can be re-written in matrix form for one example:

$\big[\varphi_1(\|x_i - t_1\|)\ \dots\ \varphi_{m_1}(\|x_i - t_{m_1}\|)\big]\,[w_1 \dots w_{m_1}]^T = d_i$

and, for all the examples at the same time,

$\begin{bmatrix} \varphi_1(\|x_1 - t_1\|) & \dots & \varphi_{m_1}(\|x_1 - t_{m_1}\|) \\ \vdots & & \vdots \\ \varphi_1(\|x_N - t_1\|) & \dots & \varphi_{m_1}(\|x_N - t_{m_1}\|) \end{bmatrix} [w_1 \dots w_{m_1}]^T = [d_1 \dots d_N]^T$

9/16/2024 Principles of Soft Computing (SRM University-AP) 25


Learning Algorithm 2: summary
 Choose the centers randomly from the training set.

 Compute the spread for the RBF function using the normalization method.

 Find the weights using the pseudo-inverse method.

9/16/2024 Principles of Soft Computing (SRM University-AP) 26
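A minimal sketch of Learning Algorithm 2 under stated assumptions: Gaussian RBFs with a common spread chosen by the normalization heuristic σ = d_max / √(2K) (the slides mention a normalization method without fixing one), centers drawn at random from the training set, and weights from the pseudo-inverse. The name train_rbf_pinv is illustrative.

```python
import numpy as np

def train_rbf_pinv(X, d, K, rng=np.random.default_rng(0)):
    centers = X[rng.choice(len(X), size=K, replace=False)]   # centers picked at random from the training set
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigma = d_max / np.sqrt(2 * K) if d_max > 0 else 1.0     # common spread from the normalization heuristic
    Phi = np.exp(-np.sum((X[:, None] - centers[None, :]) ** 2, axis=-1) / (2 * sigma ** 2))
    w = np.linalg.pinv(Phi) @ np.asarray(d)                  # weights via the pseudo-inverse: w = Phi^+ d
    return centers, sigma, w
```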


Learning Algorithm 3
 Hybrid Learning Process:

 Clustering for finding the centers.

 Spreads chosen by normalization.

 LMS (Least Mean Square) algorithm (see Adaline) for finding the
weights.

9/16/2024 Principles of Soft Computing (SRM University-AP) 27


K-means Algorithm
 Step 1: K initial cluster means are chosen randomly from the samples, forming K groups.

 Step 2: Each new sample is added to the group whose mean is closest to it.

 Step 3: Adjust the mean of that group to take account of the new point.

 Step 4: Repeat Steps 2-3 until the distance between the old means and the new means of all clusters is smaller than a predefined tolerance.

 Outcome: There are K clusters, with means representing the centroid of each cluster.

 Advantages: (1) A fast and simple algorithm. (2) Reduces the effect of noisy samples.

9/16/2024 Principles of Soft Computing (SRM University-AP) 28
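The K-means procedure above, as commonly used to place the RBF centers in the hybrid learning algorithm, can be sketched as follows; function and variable names are illustrative, not from the slides.

```python
import numpy as np

def kmeans(X, K, tol=1e-4, rng=np.random.default_rng(0)):
    means = X[rng.choice(len(X), size=K, replace=False)]   # Step 1: K initial means drawn from the samples
    while True:
        # Step 2: assign every sample to the group with the closest mean
        labels = np.argmin(np.linalg.norm(X[:, None] - means[None, :], axis=-1), axis=1)
        # Step 3: recompute each group mean (keep the old mean if a group is empty)
        new_means = np.array([X[labels == k].mean(axis=0) if np.any(labels == k) else means[k]
                              for k in range(K)])
        # Step 4: stop once the means have essentially stopped moving
        if np.linalg.norm(new_means - means) < tol:
            return new_means, labels
        means = new_means
```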


Comparison with multilayer NN
 Architecture:
 RBF networks have one single hidden layer.

 FFNN (feed-forward neural networks) may have more hidden layers.

 Neuron Model:
 In RBF the neuron model of the hidden neurons is different from the one of the output nodes.
 Typically in FFNN, hidden and output neurons share a common neuron model.

 The hidden layer of RBF is non-linear, the output layer of RBF is linear.

 Hidden and output layers of FFNN are usually non-linear.

9/16/2024 Principles of Soft Computing (SRM University-AP) 29


Auto associative Neural Network
These kinds of neural networks work on the basis of pattern association: they can store different patterns and, when producing an output, return the stored pattern that best matches the given input pattern. Such memories are also called Content-Addressable Memory (CAM). An associative memory performs a parallel search over the stored patterns, treated as data files.

9/16/2024 41
The difference between auto-associative and hetero-associative networks is:
Auto-associative networks:
•Auto-associative networks are a special kind of network used to simulate associative processes.
•This is achieved through the interaction of a set of simple processing elements connected through weighted connections.
•They are capable of retrieving a complete piece of data from partial information, i.e. they can recall a pattern from a small portion of it.
Hetero-associative networks:
•Hetero-associative networks store input-output pattern pairs and recall the stored output pattern when given a noisy or incomplete version of the input.
•In each of the pairs, the input pattern differs from the output pattern.
•Basic logical operations are used to determine associations among common and special features of the reference patterns.

9/16/2024 42
Auto Associative Memory
This is a single-layer neural network in which the input training vector and the output target vector are the same. The weights are determined so that the network stores a set of patterns.

9/16/2024 43
Hetero Associative Memory
Similar to the auto-associative memory network, this is also a single-layer neural network. However, in this network the input training vector and the output target vector are not the same. The weights are determined so that the network stores a set of patterns. A hetero-associative network is static in nature; hence, there are no non-linear or delay operations.

9/16/2024 44
Example
 Training and testing an auto-associative neural network

9/16/2024 Principles of Soft Computing (SRM University-AP) 45


1. Hebb Rule

The Hebb rule is widely used for finding the weights of an associative memory neural network. The training vector pairs here are denoted s:t. The weights are updated until there is no weight change.

Step 0: Set all the initial weights to zero, i.e.,

wij = 0 (i = 1 to n, j = 1 to m)

Step 1: For each training input-target output vector pair s:t, perform Steps 2-4.
Step 2: Activate the input layer units with the current training input, xi = si (for i = 1 to n).
Step 3: Activate the output layer units with the current target output, yj = tj (for j = 1 to m).
Step 4: Start the weight adjustment:

wij(new) = wij(old) + xi yj (i = 1 to n, j = 1 to m)

9/16/2024 46
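A small sketch of the Hebb rule above: the update wij(new) = wij(old) + xi yj accumulated over all pairs is simply a sum of outer products. For an auto-associative memory, the same vectors are used as inputs and targets. The name hebb_train is illustrative.

```python
import numpy as np

def hebb_train(inputs, targets):
    inputs, targets = np.atleast_2d(inputs), np.atleast_2d(targets)
    W = np.zeros((inputs.shape[1], targets.shape[1]))   # Step 0: all weights start at zero
    for s, t in zip(inputs, targets):                   # Steps 1-4: accumulate x_i * y_j for every s:t pair
        W += np.outer(s, t)
    return W

# Auto-associative use: inputs and targets are the same bipolar pattern
s = [1, 1, -1, -1]
W = hebb_train([s], [s])
print(np.sign(W @ s))       # recalls the stored pattern [1, 1, -1, -1]
```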
Applications
Auto-associative Neural Networks can be used in many fields:
•Pattern Recognition
•Bio-informatics
•Voice Recognition
•Signal Validation etc.

Hetero-associative neural networks can be used in many fields, such as data compression and data retrieval.

9/16/2024 50
The Hopfield Network (the most well-known example of an auto-associative memory)
 A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network.
 The stability of recurrent networks intrigued several researchers in the 1960s and 1970s. However, none was able to predict which network would be stable, and some researchers were pessimistic about finding a solution at all. The problem was solved only in 1982, when John Hopfield formulated the physical principle of storing information in a dynamically stable network.
 The Hopfield network uses McCulloch-Pitts neurons with the sign activation function as its computing element.

9/16/2024 Principles of Soft Computing (SRM University-AP) 54


The Hopfield Network
• Unsupervised learning
• Also called Content Addressable Memory

• Fill in the blanks

• intel _ _gent
• 1 _ 34_ _ 789

In fact you can detect typos

123856729
Vaterloo

A content-addressable memory (CAM) is a system that can take part of a pattern and produce the most likely match from memory.

9/16/2024 55
The outputs from Y1 going to Y2, Yi and Yn have the weights w12, w1i and w1n, respectively. Similarly, the other arcs carry their own weights.

9/16/2024 Principles of Soft Computing (SRM University-AP) 56


9/16/2024 57
Step 1 - Initialize weights (wij ) to store patterns (using training algorithm).
Step 2 - For each input vector yi, perform steps 3-7.
Step 3 - Make initial activators of the network equal to the external input vector x.
yi = xi : (for i = 1 to n)
Step 4 - For each vector yi, perform steps 5-7.
Step 5 - Calculate the total input of the network, $y_{in}$, using the equation given below:

$y_{in,i} = x_i + \sum_j y_j\,w_{ji}$

Step 6 - Apply the activation (a binary step function with threshold $\theta_i$) over the total input to calculate the output as per the equation given below:

$y_i = 1$ if $y_{in,i} > \theta_i$;  $y_i$ unchanged if $y_{in,i} = \theta_i$;  $y_i = 0$ if $y_{in,i} < \theta_i$

9/16/2024 58
Step 7 - Now feed the obtained output yi back to all the other units. Thus, the activation vector is updated.
Step 8 - Test the network for convergence.

9/16/2024 59
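A hedged sketch of the recall procedure in Steps 1-8, assuming binary units, thresholds of zero and the total input $y_{in,i} = x_i + \sum_j y_j w_{ji}$; names such as hopfield_recall and max_sweeps are illustrative.

```python
import numpy as np

def hopfield_recall(W, x, max_sweeps=10, rng=np.random.default_rng(0)):
    x = np.asarray(x, dtype=int)
    y = x.copy()                                     # Step 3: initial activations equal the input vector
    for _ in range(max_sweeps):
        prev = y.copy()
        for i in rng.permutation(len(y)):            # Step 4: update the units in random order
            y_in = x[i] + y @ W[:, i]                # Step 5: total input y_in_i = x_i + sum_j y_j w_ji
            if y_in != 0:
                y[i] = 1 if y_in > 0 else 0          # Step 6: binary step activation (y_i kept when y_in = 0)
            # Step 7: the updated y_i feeds back into the remaining updates of this sweep
        if np.array_equal(prev, y):                  # Step 8: converged, no unit changed in a full sweep
            return y
    return y

# Stored pattern [1 1 1 -1] (bipolar) gives the zero-diagonal weight matrix below;
# recall from the incomplete input [0 0 1 0] recovers the stored binary pattern [1 1 1 0].
s = np.array([1, 1, 1, -1])
W = np.outer(s, s) - np.eye(4, dtype=int)
print(hopfield_recall(W, [0, 0, 1, 0]))
```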
Problem
Consider the following problem. We are required to create a discrete Hopfield network in which the bipolar input vector [1 1 1 -1] (or [1 1 1 0] in the binary representation) is stored. Test the Hopfield network with missing entries in the first and second components of the stored vector (i.e. [0 0 1 0]).

Step 1 - Given the input vector x = [1 1 1 -1] (bipolar), we initialize the weight matrix (wij) as:

the weight matrix with no self-connection is

9/16/2024 60
Step 3 - As per the question, input vector x with missing entries, x = [0 0 1 0] ([x1 x2 x3 x4]) (binary)
- Make yi = x = [0 0 1 0] ([y1 y2 y3 y4])
Step 4 - Choosing unit yi (order doesn't matter) for updating its activation.
- Take the ith column of the weight matrix for calculation.

9/16/2024 61
now for next unit, we will take updated value via feedback. (i.e. y = [1 0 1 0])

9/16/2024 62
now for next unit, we will take updated value via feedback. (i.e. y = [1 0 1 0])

9/16/2024 63
now for next unit, we will take updated value via feedback. (i.e. y = [1 0 1 0])

9/16/2024 64
The Hopfield Network
 The current state of the Hopfield network is determined by the current
outputs of all neurons, y1 , y2 , . . ., yn . Thus, for a single-layer n-neuron
network, the state can be defined by the state vector as

 In the Hopfield network, synaptic weights between neurons are usually represented in matrix form as follows:

$W = \sum_{m=1}^{M} Y_m Y_m^T - M\,I$

where M is the number of states to be memorised by the network, $Y_m$ is the n-dimensional binary vector, I is the n × n identity matrix, and superscript T denotes matrix transposition.

9/16/2024 Principles of Soft Computing (SRM University-AP) 65
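A one-line construction of the weight matrix just described (the sum of outer products of the stored bipolar state vectors minus M times the identity, which zeroes the diagonal); a sketch assuming the patterns are given as rows of a matrix.

```python
import numpy as np

def hopfield_weights(patterns):
    Y = np.atleast_2d(patterns)                  # M x n matrix whose rows are the stored bipolar states
    M, n = Y.shape
    return Y.T @ Y - M * np.eye(n, dtype=int)    # W = sum_m Y_m Y_m^T - M*I (zero diagonal)
```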


The Hopfield Network
 The 3 × 3 identity matrix I is

 Thus, we can now determine the weight matrix as follows:

 Next, the network is tested by the sequence of input vectors, X1 and X2 ,


which are equal to the output (or target) vectors Y1 and Y2 , respectively

9/16/2024 Principles of Soft Computing (SRM University-AP) 66


The Hopfield Network
 The remaining six states are all unstable. However, stable states (also called
fundamental memories) are capable of attracting states that are close to
them.
 The fundamental memory (1, 1, 1) attracts unstable states (−1, 1, 1), (1, −1,
1) and (1, 1, −1). Each of these unstable states represents a single error,
compared to the fundamental memory (1, 1, 1).
 The fundamental memory (−1, −1, −1) attracts unstable states (−1, −1, 1),
(−1, 1, −1) and (1, −1, −1).
 Thus, the Hopfield network can act as an error correction network

9/16/2024 Principles of Soft Computing (SRM University-AP) 67


Bidirectional associative memory (BAM)
 The Hopfield network represents an auto-associative type of memory − it can
retrieve a corrupted or incomplete memory but cannot associate this memory
with another different memory.

 Human memory is essentially associative. One thing may remind us of


another, and that of another, and so on.

 We use a chain of mental associations to recover a lost memory. If we forget


where we left an umbrella, we try to recall where we last had it, what we were
doing, and who we were talking to. We attempt to establish a chain of
associations, and thereby to restore a lost memory.

9/16/2024 Principles of Soft Computing (SRM University-AP) 68


Bidirectional associative memory (BAM)…
• Bidirectional Associative Memory (BAM) is a supervised learning model in Artificial Neural Network.
• This is hetero-associative memory.
• For an input pattern, it returns another pattern which is potentially of a different size.
• This phenomenon is very similar to the human brain.
• Human memory is necessarily associative.

• It uses a chain of mental associations to recover a lost memory like associations of faces with
names

 Why is BAM required?

The main objective of introducing such a network model is to store hetero-associative pattern pairs.
It is used to retrieve a pattern given a noisy or incomplete pattern.

9/16/2024 69
Bidirectional associative memory (BAM),

9/16/2024 Principles of Soft Computing (SRM University-AP) 70


Bidirectional associative memory (BAM)
 To develop the BAM, we need to create a correlation matrix for each pattern pair we want to store. The correlation matrix is the matrix product of the input vector X and the transpose of the output vector Y, i.e. $X\,Y^T$.

 The BAM weight matrix is the sum of all correlation matrices, that is,

$W = \sum_{m=1}^{M} X_m Y_m^T$

where M is the number of pattern pairs to be stored in the BAM.
 The BAM is unconditionally stable. This means that any set of associations
can be learned without risk of instability.
 The more serious problem with the BAM is incorrect convergence. The BAM
may not always produce the closest association. In fact, a stable association
may be only slightly related to the initial input vector.

9/16/2024 Principles of Soft Computing (SRM University-AP) 71
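A brief sketch of this construction: the weight matrix as the sum of the correlation matrices $X_m Y_m^T$, with recall in both directions through a sign activation. The function names are illustrative, and ties (zero net input) are simply left at 0 here.

```python
import numpy as np

def bam_weights(x_patterns, y_patterns):
    # W = sum over pattern pairs of the correlation matrices X_m Y_m^T
    return sum(np.outer(x, y) for x, y in zip(x_patterns, y_patterns))

def bam_recall_forward(W, x):
    return np.sign(np.asarray(x) @ W)       # X layer -> Y layer (ties at 0 left as 0)

def bam_recall_backward(W, y):
    return np.sign(W @ np.asarray(y))       # Y layer -> X layer
```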


Ques. Construct a Bidirectional Associative Memory (BAM) network to associate the letters E and F with simple bipolar input-output vectors. The target output for E is (-1, 1) and for F is (1, 1). The display matrix is 5 × 3; the input patterns are

X X X X X X

X O O X X X

X X X X O O

X O O X O O

X X X X O O

9/16/2024 72
Another Example
 GOAL: build a neural network which will associate the following two sets of
patterns using Hebb’s Rule:

 The process will involve 4 input neurons and 3 output neurons. The algorithm involves finding the four outer products and adding them.

9/16/2024 Principles of Soft Computing (SRM University-AP) 81


Algorithm

9/16/2024 Principles of Soft Computing (SRM University-AP) 82


Weight Matrix
 Add all four individual weight matrices to produce the final weight matrix:

9/16/2024 Principles of Soft Computing (SRM University-AP) 83


9/16/2024 Principles of Soft Computing (SRM University-AP) 84
9/16/2024 Principles of Soft Computing (SRM University-AP) 85
Try a non-training pattern?

9/16/2024 Principles of Soft Computing (SRM University-AP) 86


Try a non-training pattern?

9/16/2024 Principles of Soft Computing (SRM University-AP) 87


There are three main steps to construct the BAM model:
1. Learning
2. Testing
3. Retrieval

Each step has been described with mathematical formulation in the article ANN | Bidirectional Associative Memory (BAM). Here, this learning algorithm is explained iteratively with an example.
Assume,
Set A: Input Patterns
9/16/2024 Principles of Soft Computing (SRM University-AP) 88


Step 1: Here, the value of M (the number of pairs of patterns) is 4.
Step 2: Assign the neurons in the input and output layers. Here, there are 6 neurons in the input layer and 3 in the output layer.
Step 3: Now, compute the Weight Matrix (W):

9/16/2024 89
Step 4: Test the BAM model learning algorithm: for the input patterns, BAM will return the corresponding target patterns as output; and for each of the target patterns, BAM will return the corresponding input pattern.
• Test on the input patterns (Set A) using:

9/16/2024 90
9/16/2024 91
Algorithm

Binary targets BAM network

9/16/2024 Principles of Soft Computing (SRM University-AP) 92


Adaptive Resonance Theory (ART)

• The basic ART uses an unsupervised learning technique.

• The terms "adaptive" and "resonance" suggest that these networks are open to new learning (i.e. adaptive) without discarding previous or old information (i.e. resonance).
• ART networks are known to solve the stability-plasticity dilemma: stability refers to their ability to retain what has been learned,
• while plasticity refers to the fact that they are flexible enough to gain new information.
• Due to this nature, ART networks are always able to learn new input patterns without forgetting the past.
• ART networks implement a clustering algorithm. An input is presented to the network and the algorithm checks whether it fits into one of the already stored clusters. If it fits, the input is added to the cluster that matches it the most; otherwise a new cluster is formed.
9/16/2024 93
Basically, an ART network is a vector classifier which accepts an input vector and classifies it into one of the categories depending upon which of the stored patterns it resembles the most.

9/16/2024 94
9/16/2024 95
Stability-Plasticity Dilemma
 Stability: system behaviour doesn’t change after irrelevant events

 Plasticity: System adapts its behaviour according to significant events

 Dilemma: how to achieve stability without rigidity and plasticity without


chaos?

 Ongoing learning capability

 Preservation of learned knowledge

9/16/2024 Principles of Soft Computing (SRM University-AP) 96


Adaptive Resonance Theory (ART)
 The term "resonance" refers to resonant state of a neural network in which a category
prototype vector matches close enough to the current input vector.
 ART matching leads to this resonant state, which permits learning. The network learns only
in its resonant state.
 ART neural networks are capable of developing stable clusters of arbitrary sequences of
input patterns by self-organizing.

 ART-1 can cluster binary input vectors.


 ART-2 can cluster real-valued input vectors.
 ART systems are well suited to problems that require online learning of large and evolving
databases.

9/16/2024 Principles of Soft Computing (SRM University-AP) 100


Adaptive Resonance Theory (ART)
Features

 Recurrent ANN
 Competitive output layer
 Data clustering applications
 Stability-plasticity dilemma

(Figure: a feature (input) layer fully connected to a competitive output layer.)
9/16/2024 Principles of Soft Computing (SRM University-AP) 101


ART Algorithm
 Incoming pattern matched with stored cluster templates
 If close enough to a stored template, it joins the best-matching cluster and the weights are adapted
 If not, a new cluster is initialised with the pattern as its template

(Flowchart: new pattern → recognition → comparison → categorisation; if known, adapt the winning node; if unknown, initialise an uncommitted node.)

9/16/2024 Principles of Soft Computing (SRM University-AP) 102


ART Types
 ART1: Unsupervised Clustering of binary input vectors.
 ART2: Unsupervised Clustering of real-valued input vectors.
 ART3: Incorporates "chemical transmitters" to control the search process in a
hierarchical ART structure.
 ARTMAP: Supervised version of ART that can learn arbitrary mappings of binary
patterns.
 Fuzzy ART: Synthesis of ART and fuzzy logic.
 Fuzzy ARTMAP: Supervised fuzzy ART
 dART and dARTMAP: Distributed code representations in the F2 layer (extension of the winner-take-all approach).
 Gaussian ARTMAP

9/16/2024 Principles of Soft Computing (SRM University-AP) 103


Reset Module
 Fixed connection weights

 Implements the vigilance test

 Excitatory connection from F1(b)

 Inhibitory connection from F1(a)

 Output of reset module inhibitory to output layer

 Disables firing output node if match with pattern is not close enough

 The reset signal lasts for as long as the pattern is present

9/16/2024 Principles of Soft Computing (SRM University-AP) 104


ART1 Algorithm
 Step 0: Initialize parameters:

$L > 1$ (learning parameter), $\qquad 0 < \rho \le 1$ (vigilance parameter)

Initialize weights:

$0 < b_{ij}(0) < \dfrac{L}{L - 1 + n}, \qquad t_{ji}(0) = 1$
9/16/2024 Principles of Soft Computing (SRM University-AP) 105


9/16/2024 106
ART1 Algorithm (cont.)
Step 6: For each F2 node that is not inhibited:
if $y_j \neq -1$, then $y_j = \sum_i b_{ij} x_i$

Step 7: While reset is true, do Steps 8-11.

Step 8: Find J such that $y_J \ge y_j$ for all nodes j. If $y_J = -1$, then all nodes are inhibited and this pattern cannot be clustered.

Step 9: Recompute the activation x of F1(b):

$x_i = s_i\, t_{Ji}$
9/16/2024 Principles of Soft Computing (SRM University-AP) 107
ART1 Algorithm (cont.)

Step 12: Update the weights for node J (fast learning):

$b_{iJ}(\text{new}) = \dfrac{L\,x_i}{L - 1 + \|x\|}, \qquad t_{Ji}(\text{new}) = x_i$

where $\|x\| = \sum_i x_i$.

Step 13: Test for the stopping condition.

9/16/2024 Principles of Soft Computing (SRM University-AP) 108
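Putting Steps 0-13 together, here is a hedged sketch of one ART1 pattern presentation with fast learning. It assumes b is the n × m bottom-up weight matrix and t the m × n top-down matrix (uncommitted nodes initialised with t rows of all ones, so they always pass the vigilance test), and it tracks inhibition with a boolean mask rather than setting y_j = -1; all names are illustrative.

```python
import numpy as np

def art1_present(s, b, t, rho, L=2.0):
    """One presentation of binary pattern s; returns the accepted cluster index (or None)."""
    s = np.asarray(s, dtype=float)
    inhibited = np.zeros(b.shape[1], dtype=bool)     # reset mask instead of y_j = -1
    while not inhibited.all():
        y = np.where(inhibited, -np.inf, s @ b)      # Step 6: y_j = sum_i b_ij * x_i (x = s initially)
        J = int(np.argmax(y))                        # Step 8: winning F2 node
        x = s * t[J]                                 # Step 9: x_i = s_i * t_Ji
        if x.sum() / s.sum() >= rho:                 # vigilance test: ||x|| / ||s|| >= rho
            b[:, J] = L * x / (L - 1.0 + x.sum())    # Step 12: fast-learning updates
            t[J] = x
            return J
        inhibited[J] = True                          # reset: inhibit J and search again
    return None                                      # no node accepted the pattern
```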


Recognition Phase
 Forward transmission via bottom-up weights
 Input pattern matched with bottom-up weights (normalised template) of
output nodes
 Inner product x•bi
 Best matching node fires (winner-take-all layer)
 Similar to Kohonen’s Self organising map (SOM) algorithm, pattern
associated to closest matching template
 ART1: fraction of bits of template also in input pattern

9/16/2024 Principles of Soft Computing (SRM University-AP) 109


Issues about ART1
 Learned knowledge can be retrieved

 Fast learning algorithm

 Difficult to tune vigilance threshold

 New noisy patterns tend to “erode” templates

 ART1 is sensitive to order of presentation of data

 Accuracy sometimes not optimal

 Only winner neuron is updated, more “point-to-point” mapping than SOM

9/16/2024 Principles of Soft Computing (SRM University-AP) 110


ART1 Example : character recognition
 Initial values of parameters:
ρ = 0.3, L = 2, m = 10
 Order of presentation : A1,A2,A3,B1,B2…
 Cluster patterns
 1 A1,A2
 2 A3
 3 C1,C2,C3,D2
 4 B1,D1,E1,K1
 B3,D3,E3,K3
 5 K2
 6 J1,J2,J3
 7 B2,E2

9/16/2024 Principles of Soft Computing (SRM University-AP) 118


ART1 Example : character recognition

9/16/2024 Principles of Soft Computing (SRM University-AP) 119


ART2
Unsupervised Clustering for :
 Real-valued input vectors
 Binary input vectors that are noisy
 Includes a combination of normalization and noise suppression

9/16/2024 Principles of Soft Computing (SRM University-AP) 120


ART2 Architecture

9/16/2024 Principles of Soft Computing (SRM University-AP) 121


ART2 Architecture (normalization)

9/16/2024 Principles of Soft Computing (SRM University-AP) 122


ART2 Learning Mode
 Fast Learning
 Weights reach equilibrium in each learning trial
 Have some of the same characteristics as the weight found by ART1
 More appropriate for data in which the primary information is contained in
the pattern of components that are ‘small’ or ‘large’
 Slow Learning
 Only one weight update iteration performed on each learning trial
 Needs more epochs than fast learning
 More appropriate for data in which the relative size of the nonzero
components is important

9/16/2024 Principles of Soft Computing (SRM University-AP) 123


ART2 Algorithm

$p_i = u_i + d\,t_{Ji}$

$x_i = \dfrac{w_i}{e + \|w\|}$

$q_i = \dfrac{p_i}{e + \|p\|}$

$v_i = f(x_i) + b\,f(q_i)$
Step11: Test stopping condition for weight updates.


Step 12: Test stopping condition for number of epochs.

9/16/2024 Principles of Soft Computing (SRM University-AP) 124
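A small sketch of the F1-layer quantities listed above, together with the piecewise-linear noise-suppression function f commonly used in ART2 (components below θ set to 0); the surrounding u and w updates of the full F1 loop are omitted, and the names are illustrative.

```python
import numpy as np

def f(v, theta):
    # piecewise-linear noise suppression: components below theta are set to 0 (assumed form)
    return np.where(v >= theta, v, 0.0)

def art2_f1_step(u, w, t_J, b, d, e, theta):
    p = u + d * t_J                        # p_i = u_i + d * t_Ji
    x = w / (e + np.linalg.norm(w))        # x_i = w_i / (e + ||w||)
    q = p / (e + np.linalg.norm(p))        # q_i = p_i / (e + ||p||)
    v = f(x, theta) + b * f(q, theta)      # v_i = f(x_i) + b * f(q_i)
    return p, x, q, v
```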


ART2 Reset Mechanism

$r_i = \dfrac{u_i + c\,p_i}{e + \|u\| + \|c\,p\|}$

$\|r\| = \dfrac{\sqrt{(1 + c)^2 + (c\,d\,\|t\|)^2 + 2\,(1 + c)\,c\,d\,\|t\|\cos(u, t)}}{1 + c\,\sqrt{1 + (d\,\|t\|)^2 + 2\,d\,\|t\|\cos(u, t)}}$

9/16/2024 Principles of Soft Computing (SRM University-AP) 125


ART2 Example : character recognition
Initial values of parameters:
a = 10, b = 10, c = 0.1, d = 0.9, θ = 0.126, ρ = 0.8
Order of presentation : A1,B1,C1,…,A2,B2,C2…
Cluster patterns
1 A1,A2
2 B1,D1,E1,K1
B3,D3,E3,K3
3 C1,C2,C3
4 J1,J2,J3
5 B2,D2,E2
6 K2
7 A3

9/16/2024 Principles of Soft Computing (SRM University-AP) 126


9/16/2024 127
ART Applications
 Natural language processing
 Document clustering

 Document retrieval

 Automatic query

 Image segmentation
 Character recognition
 Data mining
 Data set partitioning

 Detection of emerging clusters

 Fuzzy partitioning
 Condition-action association

9/16/2024 Principles of Soft Computing (SRM University-AP) 128
