UNIT_2_DL_Notes
Maxnet
The Maxnet network was developed by Lippmann in 1987. The Maxnet serves as a subnet for picking the node whose input is the largest. All the nodes present in this subnet are fully interconnected, and there exist symmetrical weights in all these weighted interconnections.
Architecture of Maxnet
In the architecture of Maxnet, fixed symmetrical weights are present over the weighted interconnections. The weights between the neurons are inhibitory and fixed. The Maxnet with this structure can be used as a subnet to select a particular node whose net input is the largest.
Testing Algorithm of Maxnet
The Maxnet uses the following activation function:
f(x) = x if x > 0; f(x) = 0 if x ≤ 0
Testing algorithm
Step 0: Initial weights and initial activations are set. The weight ε is set such that 0 < ε < 1/m, where "m" is the total number of nodes. Let
w_ij = 1 if i = j, and w_ij = −ε if i ≠ j
Step 1: Perform Steps 2-4 while the stopping condition is false.
Step 2: Update the activation of each node: for j = 1 to m,
x_j(new) = f[x_j(old) − ε ∑_{k≠j} x_k(old)]
Step 3: Save the activations obtained for use in the next iteration. For j = 1 to m,
x_j(old) = x_j(new)
Step 4: Finally, test the stopping condition for convergence of the network. The stopping condition is: if more than one node has a nonzero activation, continue; else stop.
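Below is a minimal Python sketch of this testing algorithm. The function name, the default choice of ε, and the example activations are illustrative assumptions rather than part of the original notes.

import numpy as np

def maxnet(activations, epsilon=None, max_iters=1000):
    """Iterate Maxnet until at most one node has a nonzero activation."""
    x = np.asarray(activations, dtype=float)
    m = len(x)
    if epsilon is None:
        epsilon = 1.0 / (2 * m)   # any value with 0 < epsilon < 1/m works
    for _ in range(max_iters):
        # Step 2: mutual inhibition, then the activation function f
        total = x.sum()
        x = np.maximum(0.0, x - epsilon * (total - x))   # Step 3: save activations
        if np.count_nonzero(x) <= 1:                     # Step 4: stopping condition
            break
    return x

# Example: node 2 has the largest initial input, so it should win
print(maxnet([0.2, 0.4, 0.6, 0.3]))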
Hamming Network
The Hamming network is a two-layer feedforward neural network for the classification of binary bipolar n-tuple input vectors using the minimum Hamming distance, denoted as D_H (Lippmann, 1987). The first layer is the input layer for the n-tuple input vectors. The second layer (also called the memory layer) stores p memory patterns. A p-class Hamming network has p output neurons in this layer. The strongest response of a neuron indicates the minimum Hamming distance between the stored pattern and the input vector.
Hamming Distance
The Hamming distance between two bipolar vectors x and y of dimension n can be obtained from their dot product:
x·y = a − d
where a is the number of bits in agreement in x and y (number of similar bits), and d is the number of bits that differ in x and y (number of dissimilar bits). The value d is the Hamming distance between the two vectors. Since the total number of components is n, we have
n = a + d
i.e., d = n − a
On substitution, we get
x·y = a − (n − a)
x·y = 2a − n
2a = x·y + n
a = (1/2)x·y + (1/2)n
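As a quick check, take x = (1, 1, −1, −1) and y = (1, −1, −1, 1). Here a = 2 and d = 2, so x·y = a − d = 0, and indeed a = (1/2)x·y + (1/2)n = (0 + 4)/2 = 2.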
From the above equation, it is clearly understood that the weights can be set to one-half the exemplar vector and the bias can be set initially to n/2. For storing the m exemplar vectors, the weights are, for j = 1 to m,
w_ij = e_i(j)/2
Initialize the bias for storing the "m" exemplar vectors. For j = 1 to m,
b_j = n/2
Compute the net input to each output unit: for j = 1 to m,
y_inj = ∑_{i=1}^{n} x_i w_ij + b_j
Initialize the activations for Maxnet:
y_j(0) = y_inj, j = 1 to m
Step 4: Maxnet iterates to find the exemplar that best matches the input pattern.
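A minimal Python sketch of a Hamming network, reusing the maxnet function sketched earlier; the exemplar vectors and the test input are illustrative assumptions.

import numpy as np

# Stored bipolar exemplar vectors, one per class (illustrative values)
exemplars = np.array([[ 1,  1,  1, -1],
                      [-1, -1, -1,  1]])
n = exemplars.shape[1]

W = exemplars.T / 2.0      # w_ij = e_i(j) / 2
b = n / 2.0                # b_j = n / 2

x = np.array([1, 1, -1, -1])   # input vector to classify
y_in = x @ W + b               # net input: y_inj = sum_i x_i w_ij + b_j
# y_inj equals a, the number of agreements, so n - y_inj is the Hamming distance
print("net inputs:", y_in)
print("winner after Maxnet:", maxnet(y_in))   # maxnet() defined earlier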
Kohonen Self-Organizing Feature Maps
Training Algorithm
Step 0: Initialize the weights w_ij with random values and set the learning rate α.
Step 1: Perform Steps 2-8 when the stopping condition is false.
Step 2: Perform Steps 3-5 for each input vector x.
Step 3: Compute the square of the Euclidean distance: for each j = 1 to m,
D(j) = ∑_{i=1}^{n} (x_i − w_ij)²
Step 4: Find the winning unit index J such that D(J) is minimum.
Step 5: For all units j within a specific neighborhood of J, and for all i, calculate the new weights:
w_ij(new) = w_ij(old) + α[x_i − w_ij(old)]
Step 6: Update the learning rate α using
α(t + 1) = 0.5α(t)
Step 7: Reduce the radius of the topological neighborhood at specified times.
Step 8: Test for the stopping condition of the network.
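A compact Python sketch of this training loop for a one-dimensional map; the grid shape, toy data, and fixed neighborhood radius are illustrative assumptions.

import numpy as np

def train_som(data, m, n_epochs=20, alpha=0.5, radius=1):
    """Kohonen SOM with m units arranged in a line."""
    rng = np.random.default_rng(0)
    n = data.shape[1]
    W = rng.random((m, n))                      # Step 0: random weights
    for _ in range(n_epochs):                   # Step 1
        for x in data:                          # Step 2
            D = ((x - W) ** 2).sum(axis=1)      # Step 3: squared distances
            J = int(np.argmin(D))               # Step 4: winning unit
            lo, hi = max(0, J - radius), min(m, J + radius + 1)
            W[lo:hi] += alpha * (x - W[lo:hi])  # Step 5: neighborhood update
        alpha *= 0.5                            # Step 6: alpha(t+1) = 0.5*alpha(t)
    return W

data = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
print(train_som(data, m=2))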
Learning Vector Quantization
Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm. A prototype here is a representative vector standing in for a class. One or more prototypes are used to represent each class in the dataset. New (unknown) data points are then assigned the class of the prototype that is nearest to them. In order for "nearest" to make sense, a distance measure has to be defined. There is no limitation on how many prototypes can be used per class, the only requirement being that there is at least one prototype for each class. LVQ is a special case of an artificial neural network, and it applies a winner-take-all, Hebbian-learning-based approach. It is similar to the Self-Organizing Map (SOM) algorithm, with a small difference. SOM and LVQ were invented by Teuvo Kohonen.
Training Algorithm
Step 0: Initialize the reference vectors. This can be done in one of the following ways:
From the given set of training vectors, take the first "m" (number of clusters) training vectors and use them as weight vectors; the remaining vectors can then be used for training.
Assign the initial weights and classifications randomly.
Use the K-means clustering method.
Also set the initial learning rate α.
Step 1: Perform Steps 2-6 if the stopping condition is false.
Step 2: Perform Steps 3-4 for each training input vector x.
Step 3: Find the winning unit index J for which the Euclidean distance is minimum:
D(j) = ∑_{i=1}^{n} (x_i − w_ij)²
Step 4: Update the weights of the winning unit w_J using the following conditions:
if T = C_J then w_J(new) = w_J(old) + α[x − w_J(old)]
if T ≠ C_J then w_J(new) = w_J(old) − α[x − w_J(old)]
Step 5: Reduce the learning rate α.
Step 6: Test for the stopping condition of the training process. (The stopping condition may be a fixed number of epochs or the learning rate having reduced to a negligible value.)
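A minimal Python sketch of this LVQ training rule; the toy dataset and the initialization from the first m training vectors are illustrative assumptions.

import numpy as np

def train_lvq(X, T, m, alpha=0.1, n_epochs=10):
    """LVQ: prototypes initialized from the first m samples (Step 0)."""
    W = X[:m].astype(float).copy()        # reference vectors
    C = T[:m].copy()                      # their class labels
    for _ in range(n_epochs):             # Step 1
        for x, t in zip(X[m:], T[m:]):    # Step 2: remaining vectors train
            D = ((x - W) ** 2).sum(axis=1)     # Step 3: squared distances
            J = int(np.argmin(D))
            if t == C[J]:                 # Step 4: move toward / away
                W[J] += alpha * (x - W[J])
            else:
                W[J] -= alpha * (x - W[J])
        alpha *= 0.5                      # Step 5: reduce learning rate
    return W, C

X = np.array([[0., 0.], [1., 1.], [0.2, 0.1], [0.9, 0.8], [0.1, 0.3]])
T = np.array([0, 1, 0, 1, 0])
W, C = train_lvq(X, T, m=2)
print(W, C)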
Counter Propagation Networks
The counterpropagation network (CPN) was proposed by Hecht-Nielsen in 1987. It is a multilayer network based on the combination of input, output, and clustering layers. The applications of the counterpropagation network are data compression, function approximation, and pattern association. The counterpropagation network is basically constructed from an instar-outstar model. This model is a three-layer neural network that performs input-output data mapping, producing an output vector y in response to an input vector x, on the basis of competitive learning. The three layers in an instar-outstar model are the input layer, the hidden (competitive) layer, and the output layer.
There are two stages involved in the training process of a counterpropagation net. The input vectors are clustered in the first stage. In the second stage of training, the weights from the cluster layer units to the output units are tuned to obtain the desired response.
Training Algorithm for Full Counterpropagation Network
Step 0: Set the initial weights and the initial learning rates α and β.
Step 1: Perform Steps 2-7 if the stopping condition is false for phase-I training.
Step 2: For each training input vector pair x:y presented, perform Steps 3-5.
Step 3: Set the X-input layer activations to vector x and the Y-input layer activations to vector y.
Step 4: Find the winning cluster unit. If the dot product method is used, find the cluster unit Z_J with the largest net input: for j = 1 to p,
z_inj = ∑_{i=1}^{n} x_i v_ij + ∑_{k=1}^{m} y_k w_kj
If the Euclidean distance method is used, find the cluster unit Z_J whose squared distance from the input vectors is the smallest:
D(j) = ∑_{i=1}^{n} (x_i − v_ij)² + ∑_{k=1}^{m} (y_k − w_kj)²
If there occurs a tie in the selection of the winner unit, the unit with the smallest index is the winner. Take the winner unit index as J.
Step 5: Update the weights over the calculated winner unit Z_J:
v_iJ(new) = v_iJ(old) + α[x_i − v_iJ(old)], i = 1 to n
w_kJ(new) = w_kJ(old) + β[y_k − w_kJ(old)], k = 1 to m
Step 6: Reduce the learning rates:
α(t + 1) = 0.5α(t)
β(t + 1) = 0.5β(t)
Step 7: Test the stopping condition for phase-I training.
Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.
Step 9: Perform Steps 10-13 for each training input pair x:y. Here α and β are small constant values.
Step 10: Set the X-input layer activations to vector x. Set the Y-input layer activations to vector y.
Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.
Step 12: Update the weights entering into unit Z_J:
v_iJ(new) = v_iJ(old) + α[x_i − v_iJ(old)], i = 1 to n
w_kJ(new) = w_kJ(old) + β[y_k − w_kJ(old)], k = 1 to m
Step 13: Update the weights from unit Z_J to the output layers:
u_Jk(new) = u_Jk(old) + a[y_k − u_Jk(old)], k = 1 to m
t_Ji(new) = t_Ji(old) + b[x_i − t_Ji(old)], i = 1 to n
Step 14: Reduce the learning rates a and b:
a(t + 1) = 0.5a(t)
b(t + 1) = 0.5b(t)
Step 15: Test the stopping condition for phase-II training.
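A condensed Python sketch of the two training phases of a full CPN. The array shapes, learning-rate schedule, and toy data are illustrative assumptions, not prescribed by the notes.

import numpy as np

def train_full_cpn(X, Y, p, alpha=0.3, beta=0.3, a=0.1, b=0.1, n_epochs=10):
    """Two-phase training of a full counterpropagation net with p cluster units."""
    rng = np.random.default_rng(0)
    n, m = X.shape[1], Y.shape[1]
    v = rng.random((n, p))   # weights X-input -> cluster layer
    w = rng.random((m, p))   # weights Y-input -> cluster layer
    u = rng.random((p, m))   # weights cluster -> Y* output
    t = rng.random((p, n))   # weights cluster -> X* output

    def winner(x, y):
        # Euclidean method; argmin breaks ties toward the smallest index
        D = ((x[:, None] - v) ** 2).sum(axis=0) + ((y[:, None] - w) ** 2).sum(axis=0)
        return int(np.argmin(D))

    for _ in range(n_epochs):                    # phase I: cluster the input pairs
        for x, y in zip(X, Y):
            J = winner(x, y)
            v[:, J] += alpha * (x - v[:, J])     # Step 5: instar updates
            w[:, J] += beta * (y - w[:, J])
        alpha *= 0.5; beta *= 0.5                # Step 6: reduce learning rates

    for _ in range(n_epochs):                    # phase II: tune the output weights
        for x, y in zip(X, Y):
            J = winner(x, y)
            v[:, J] += alpha * (x - v[:, J])     # Step 12 (alpha, beta now small)
            w[:, J] += beta * (y - w[:, J])
            u[J] += a * (y - u[J])               # Step 13: outstar updates
            t[J] += b * (x - t[J])
        a *= 0.5; b *= 0.5                       # Step 14: reduce a and b
    return v, w, u, t

X = np.array([[0., 0.], [1., 1.]])
Y = np.array([[0.], [1.]])
print(train_full_cpn(X, Y, p=2)[2])   # learned cluster -> Y* weights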
Training Algorithm for Forward-Only Counterpropagation Network
In the forward-only counterpropagation network, only the x vectors are used to form the clusters during phase-I training.
Step 0: Set the initial weights and the initial learning rate.
Step 1: Perform Steps 2-7 if the stopping condition is false for phase-I training.
Step 2: Perform Steps 3-5 for each training input vector x.
Step 3: Set the X-input layer activations to vector x.
Step 4: Compute the winning cluster unit J. If the dot product method is used, find the cluster unit Z_J with the largest net input:
z_inj = ∑_{i=1}^{n} x_i v_ij
If the Euclidean distance method is used, find the cluster unit Z_J whose squared distance from the input patterns is the smallest:
D(j) = ∑_{i=1}^{n} (x_i − v_ij)²
If there exists a tie in the selection of the winner unit, the unit with the smallest index is chosen as the winner.
Step 5: Update the weights entering into unit Z_J:
v_iJ(new) = v_iJ(old) + α[x_i − v_iJ(old)], i = 1 to n
Step 6: Reduce the learning rate:
α(t + 1) = 0.5α(t)
Step 7: Test the stopping condition for phase-I training.
Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.
Step 9: Perform Steps 10-13 for each training input pair x:y.
Step 10: Set the X-input layer activations to vector x. Set the Y-output layer activations to vector y.
Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.
Step 12: Update the weights entering into unit Z_J:
v_iJ(new) = v_iJ(old) + α[x_i − v_iJ(old)], i = 1 to n
Step 13: Update the weights from unit Z_J to the output units:
w_Jk(new) = w_Jk(old) + a[y_k − w_Jk(old)], k = 1 to m
Step 14: Reduce the learning rate a.
Step 15: Test the stopping condition for phase-II training.
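A short Python sketch contrasting the forward-only variant, where only x drives the clustering. For brevity this condensed sketch trains both weight sets in a single loop rather than in two separate phases; the toy data and shapes are again illustrative assumptions.

import numpy as np

def train_forward_only_cpn(X, Y, p, alpha=0.3, a=0.1, n_epochs=10):
    """Forward-only CPN: clusters form from x alone; w maps clusters to y."""
    rng = np.random.default_rng(0)
    v = rng.random((X.shape[1], p))    # X -> cluster weights
    w = rng.random((p, Y.shape[1]))    # cluster -> Y output weights
    for _ in range(n_epochs):
        for x, y in zip(X, Y):
            J = int(np.argmin(((x[:, None] - v) ** 2).sum(axis=0)))  # winner
            v[:, J] += alpha * (x - v[:, J])   # instar update (Step 12)
            w[J] += a * (y - w[J])             # outstar update (Step 13)
        alpha *= 0.5; a *= 0.5                 # reduce learning rates
    return v, w

X = np.array([[0., 0.], [1., 1.], [0.1, 0.2], [0.9, 1.1]])
Y = np.array([[0.], [1.], [0.], [1.]])
v, w = train_forward_only_cpn(X, Y, p=2)
print(w)   # approximate y value stored for each cluster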