UNIT_2_DL_Notes

Deep learning unit 2 notes

Fixed Weight Competitive Networks


In these competitive networks the weights remain fixed, even during the training process. Competition among the neurons is used to enhance the contrast in their activations. Two such networks are the Maxnet and the Hamming network.

Maxnet
The Maxnet was developed by Lippmann in 1987. It serves as a subnet for picking the node whose input is the largest. All the nodes in this subnet are fully interconnected, and the weights on these interconnections are symmetrical.

Architecture of Maxnet
The architecture of the Maxnet is fixed: symmetrical weights are present on the weighted interconnections, and the weights between the neurons are inhibitory and fixed. A Maxnet with this structure can be used as a subnet to select the particular node whose net input is the largest.

Testing Algorithm of Maxnet
The Maxnet uses the following activation function:

f(x) = x   if x > 0
f(x) = 0   if x ≤ 0

Testing algorithm
Step 0: Set the initial weights and initial activations. The mutual inhibition weight ε is set so that 0 < ε < 1/m, where "m" is the total number of nodes. Let

xj(0) = input to node Xj

and

wij = 1    if i = j
wij = −ε   if i ≠ j

Step 1: Perform Steps 2-4, when stopping condition is false.

Step 2: Update the activations of each node. For j = 1 to m,

xj(new) = f [ xj(old) − ε ∑ (k ≠ j) xk(old) ]

Step 3: Save the activations obtained for use in the next iteration. For j = 1 to m,

xj(old) = xj(new)

Step 4: Finally, test the stopping condition for convergence of the network. The
following is the stopping condition: If more than one node has a nonzero activation,
continue; else stop.
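To make the iteration concrete, here is a minimal NumPy sketch of the Maxnet testing algorithm (not from the original notes; the function name, the value of ε and the example inputs are illustrative assumptions).

```python
import numpy as np

def maxnet(activations, epsilon=0.2, max_iter=100):
    """Minimal Maxnet sketch: repeatedly apply mutual inhibition until
    only the node with the largest initial input stays nonzero.
    epsilon must satisfy 0 < epsilon < 1/m, where m is the number of nodes."""
    x = np.asarray(activations, dtype=float)       # Step 0: initial activations
    for _ in range(max_iter):                      # Step 1: loop until stop
        # Step 2: x_j(new) = f[x_j(old) - eps * sum_{k != j} x_k(old)]
        x_new = x - epsilon * (x.sum() - x)
        x = np.maximum(x_new, 0.0)                 # f(x) = x if x > 0 else 0
        if np.count_nonzero(x) <= 1:               # Step 4: stopping condition
            break
    return x

# Example: the node with the largest input (index 2) is the only survivor.
print(maxnet([0.2, 0.4, 0.6, 0.3]))               # -> roughly [0, 0, 0.35, 0]
```

Because every node subtracts ε times the sum of the other activations, the smaller activations are driven to zero first, and only the node with the largest initial input survives.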

Hamming Network
The Hamming network is a two-layer feedforward neural network for classification of bipolar n-tuple input vectors using the minimum Hamming distance, denoted DH (Lippmann, 1987). The first layer is the input layer for the n-tuple input vectors.
The second layer (also called the memory layer) stores p memory patterns. A p-class
Hamming network has p output neurons in this layer. The strongest response of a
neuron is indicative of the minimum Hamming distance between the stored pattern
and the input vector.

Hamming Distance
For two bipolar vectors x and y of dimension n, the dot product is

x · y = a − d

where a is the number of components in which x and y agree (the similar bits) and d is the number of components in which they differ (the dissimilar bits). The value d is the Hamming distance between the two vectors. Since the total number of components is n, we have

n = a + d, i.e., d = n − a

On substitution, we get

x · y = a − (n − a)

x · y = 2a − n

2a = x · y + n

a = (1/2) x · y + (1/2) n

From the above equation it is clear that the weights can be set to one-half the exemplar vector and the bias can be set initially to n/2; the net input of an output unit then equals a, the number of components in which the input agrees with that unit's exemplar.
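A quick numerical check of these identities (not part of the original notes; the example vectors are arbitrary) can be done in a few lines of NumPy:

```python
import numpy as np

x = np.array([ 1, -1,  1,  1])   # bipolar example vectors, n = 4
y = np.array([ 1,  1,  1, -1])

n = len(x)
a = int(np.sum(x == y))          # components in agreement
d = int(np.sum(x != y))          # components in disagreement (Hamming distance)

print(x @ y, a - d, 2 * a - n)   # all three are equal: 0 0 0
print((x @ y + n) / 2, a)        # recovering a from the dot product: 2.0 2
```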

Testing Algorithm of Hamming Network


Step 0: Initialize the weights. For i = 1 to n and j = 1 to m,

wij = ei(j) / 2

Initialize the bias for storing the "m" exemplar vectors. For j = 1 to m,

bj = n / 2

Step 1: Perform Steps 2-4 for each input vector x.

Step 2: Calculate the net input to each unit Yj, i.e.,

yinj = ∑ (i = 1 to n) xi wij + bj     (j = 1 to m)

Step 3: Initialize the activations for Maxnet, i.e.,

yj(0) = yinj     (j = 1 to m)

Step 4: Maxnet iterates to find the exemplar that best matches the input pattern.
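A minimal sketch of the whole testing procedure is shown below (not from the original notes). It reuses the maxnet function from the Maxnet sketch above; the exemplars and the input vector are made-up examples.

```python
import numpy as np

def hamming_net(x, exemplars):
    """Minimal Hamming-network sketch for bipolar inputs.
    exemplars has shape (m, n): the m stored patterns e(j).
    With w_ij = e_i(j)/2 and b_j = n/2, the net input y_in_j equals a(j),
    the number of components in which x agrees with exemplar j."""
    E = np.asarray(exemplars, dtype=float)
    m, n = E.shape
    W = E.T / 2.0                          # Step 0: w_ij = e_i(j) / 2
    b = n / 2.0                            #         b_j  = n / 2
    y_in = x @ W + b                       # Step 2: net inputs of the Y units
    # Steps 3-4: hand the net inputs to Maxnet, which keeps only the unit
    # with the largest y_in, i.e. the exemplar at minimum Hamming distance.
    return maxnet(y_in, epsilon=1.0 / (2 * m))

exemplars = [[1, -1, 1, 1], [-1, -1, -1, 1]]
print(hamming_net(np.array([1, 1, 1, 1]), exemplars))   # first unit wins
```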

Kohonen Self-Organizing Feature Maps


The Self-Organizing Feature Map (SOM) was developed by Dr. Teuvo Kohonen in 1982. The Kohonen Self-Organizing feature map (KSOM) refers to a neural network that is trained using competitive learning. Basic competitive learning implies that the competition process takes place before the cycle of learning. In the competition process, some criterion selects a winning processing element. After the winning processing element is selected, its weight vector is adjusted according to the learning law used.
Feature mapping is a process that converts patterns of arbitrary dimensionality into the responses of a one- or two-dimensional array of neurons. A network performing such a mapping is called a feature map. The reason for reducing the higher dimensionality is the ability to preserve the neighborhood topology.

Training Algorithm
Step 0: Initialize the weights with random values and set the learning rate.

Step 1: Perform Steps 2-8 when stopping condition is false.

Step 2: Perform Steps 3-5 for each input vector x.

Step 3: Compute the square of the Euclidean distance, i.e., for each j = 1 to m,

D(j) = ∑ (i = 1 to n) (xi − wij)²

Step 4: Find the winning unit index J, so that D(J) is minimum.

Step 5: For all units j within a specific neighborhood of J and for all i, calculate the new weights:
w ij (new) = w ij (old) + α[xi − w ij (old)]

Step 6: Update the learning rate α using the formula (t is the time step):

α(t + 1) = 0.5α(t)

Step 7: Reduce radius of topological neighborhood at specified time intervals.

Step 8: Test for stopping condition of the network.
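The algorithm above can be condensed into a short NumPy sketch (not from the original notes). For simplicity the neighborhood radius is fixed at zero, so only the winning unit is updated; the function name, map size and sample data are illustrative assumptions.

```python
import numpy as np

def train_ksom(data, m=2, alpha=0.5, epochs=20, seed=0):
    """Minimal Kohonen SOM sketch with a zero-radius neighborhood."""
    rng = np.random.default_rng(seed)
    W = rng.random((m, data.shape[1]))           # Step 0: random weights
    for _ in range(epochs):                      # Step 1: repeat until stop
        for x in data:                           # Step 2: each input vector
            D = np.sum((x - W) ** 2, axis=1)     # Step 3: squared distances
            J = int(np.argmin(D))                # Step 4: winning unit J
            W[J] += alpha * (x - W[J])           # Step 5: move winner toward x
        alpha *= 0.5                             # Step 6: alpha(t+1) = 0.5 alpha(t)
    return W

data = np.array([[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.95]])
print(train_ksom(data))   # the two weight vectors settle near the two clusters
```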

   
Learning Vector Quantization


In 1980, the Finnish professor Kohonen discovered that some areas of the brain develop structures with different areas, each of them highly sensitive to a specific input pattern. LVQ is based on competition among neural units, using a principle called winner-takes-all.

Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm. A prototype is an early sample, model,
or release of a product built to test a concept or process. One or more prototypes are used to represent each class in the dataset.
New (unknown) data points are then assigned the class of the prototype that is nearest to them. In order for "nearest" to make
sense, a distance measure has to be defined. There is no limitation on how many prototypes can be used per class, the only
requirement being that there is at least one prototype for each class. LVQ is a special case of an artificial neural network and it
applies a winner-take-all Hebbian-learning-based approach. It is similar to the Self-Organizing Map (SOM) algorithm, with a small difference. Both SOM and LVQ were invented by Teuvo Kohonen.


An LVQ system is represented by prototypes W = (W1, ..., Wn). In winner-take-all training algorithms, the winning prototype is moved closer to the data point if it classifies it correctly, and moved away if it classifies it incorrectly. An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.

Training Algorithm
Step 0: Initialize the reference vectors. This can be done using one of the following approaches:
From the given set of training vectors, take the first "m" (number of clusters) training vectors and use them as weight vectors; the remaining vectors can be used for training.
Assign the initial weights and classifications randomly.
Use the K-means clustering method.
Also set the initial learning rate α.

Step 1: Perform Steps 2-6 if the stopping condition is false.

Step 2: Perform Steps 3-4 for each training input vector x

Step 3: Calculate the Euclidean distance for each output unit; for j = 1 to m,

D(j) = ∑ (i = 1 to n) (xi − wij)²

Find the winning unit index J for which D(J) is minimum.

Step 4: Update the weights of the winning unit WJ using the following conditions (T is the target class of the input x and Cj is the class represented by the winning unit):
if T = Cj then w j (new) = w j (old) + α[x − w j (old)]

if T ≠ Cj then w j (new) = w j (old) − α[x − w j (old)]

Step 5: Reduce the learning rate α

Step 6: Test for the stopping condition of the training process. (The stopping condition may be a fixed number of epochs or the learning rate having reduced to a negligible value.)
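The following is a minimal sketch of this training procedure (not from the original notes), initializing one prototype per class from the first training vector of that class; the data, class labels and learning-rate schedule are illustrative assumptions.

```python
import numpy as np

def train_lvq(X, T, n_classes, alpha=0.1, epochs=10):
    """Minimal LVQ-1 sketch with one prototype (reference vector) per class."""
    W = np.zeros((n_classes, X.shape[1]))
    C = np.arange(n_classes)                     # class represented by each prototype
    for c in range(n_classes):                   # Step 0: first vector of each class
        W[c] = X[T == c][0]
    for _ in range(epochs):                      # Step 1
        for x, t in zip(X, T):                   # Step 2
            D = np.sum((x - W) ** 2, axis=1)     # Step 3: Euclidean distances
            J = int(np.argmin(D))                # winning prototype index J
            if t == C[J]:                        # Step 4: T = C_J -> move closer
                W[J] += alpha * (x - W[J])
            else:                                #         T != C_J -> move away
                W[J] -= alpha * (x - W[J])
        alpha *= 0.5                             # Step 5: reduce the learning rate
    return W, C

X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.8, 1.0]])
T = np.array([0, 0, 1, 1])
print(train_lvq(X, T, n_classes=2)[0])
```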

   
Counterpropagation Networks
Counterpropagation networks (CPN) were proposed by Hecht-Nielsen in 1987. They are multilayer networks based on combinations of the input, output, and clustering layers. The applications of counterpropagation nets are data compression, function approximation and pattern association. The counterpropagation network is basically constructed from an instar-outstar model. This model is a three-layer neural network that performs input-output data mapping, producing an output vector y in response to an input vector x, on the basis of competitive learning. The three layers in an instar-outstar model are the input layer, the hidden (competitive) layer and the output layer.

There are two stages involved in the training process of a counterpropagation net. In the first stage the input vectors are clustered. In the second stage of training, the weights from the cluster-layer units to the output units are tuned to obtain the desired response.

There are two types of counterpropagation network:

1. Full counterpropagation network


2. Forward-only counterpropagation network

Full counterpropagation network


The full CPN efficiently represents a large number of vector pairs x:y by adaptively constructing a look-up table. The full CPN works best if the inverse function exists and is continuous. The vectors x and y propagate through the network in a counterflow manner to yield the output vectors x* and y*.

Architecture of Full Counterpropagation Network


The four major components of the instar-outstar model are the input layer, the instar, the competitive layer and the outstar. For each node in the input layer there is an input value xi. All the instars are grouped into a layer called the competitive layer. Each of the instars responds maximally to a group of input vectors in a different region of space. The outstar model has all the nodes in the output layer and a single node in the competitive layer. The outstar looks like the fan-out of a node.

Training Algorithm for Full Counterpropagation Network:


Step 0: Set the initial weights and the initial learning rates.

Step 1: Perform Steps 2-7 if stopping condition is false for phase-I training.

Step 2: For each of the training input vector pair x: y presented, perform Steps 3-5.


Step 3: Set the X-input layer activations to vector x. Set the Y-input layer activations to vector y.

Step 4: Find the winning cluster unit. If the dot product method is used, find the cluster unit Zj with the largest net input: for j = 1 to p,

zinj = ∑ (i = 1 to n) xi vij + ∑ (k = 1 to m) yk wkj

If Euclidean distance method is used, find the cluster unit Zj whose squared distance from input vectors is the smallest

D(j) = ∑ (i = 1 to n) (xi − vij)² + ∑ (k = 1 to m) (yk − wkj)²

If there occurs a tie in case of selection of winner unit, the unit with the smallest index is the winner. Take the winner unit index as
J.

Step 5: Update the weights for the winning unit ZJ:

vij (new) = vij (old) + α[xi − vij (old)] i = 1 to n

w kj (new) = w kj (old) + β[yk − w kj (old)] k = 1 to m

Step 6: Reduce the learning rates α and β

α(t + 1) = 0.5 α(t)

β(t + 1) = 0.5 β(t)

Step 7: Test stopping condition for phase-I training.

Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.
Step 9: Perform Steps 10-13 for each training input pair x:y. Here α and β are small constant values.
Step 10: Set the X-input layer activations to vector x. Set the Y-input layer activations to vector y.

Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.
Step 12: Update the weights entering into unit ZJ

vij (new) = vij (old) + α[xi − vij (old)] i = 1 to n

w kj (new) = w kj (old) + β[yk − w kj (old)] k = 1 to m

Step 13: Update the weights from unit Zj to the output layers.

tji (new) = tji (old) + b[xi − tji (old)] i = 1 to n

ujk (new) = ujk (old) + a[yk − ujk (old)] k = 1 to m

Step 14: Reduce the learning rates a and b.

a(t + 1) = 0.5 a(t)

b(t + 1) = 0.5 b(t)

Step 15: Test stopping condition for phase-II training.
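A compact sketch of both training phases is given below (not from the original notes). It uses the Euclidean-distance method for winner selection and stores the weight matrices with one row per cluster unit; all names, sizes and learning rates are illustrative assumptions.

```python
import numpy as np

def train_full_cpn(X, Y, p=2, alpha=0.3, beta=0.3, a=0.1, b=0.1, epochs=10, seed=0):
    """Minimal full-CPN sketch.
    v: X-input -> cluster weights,   w: Y-input -> cluster weights (phase I)
    t: cluster -> x*-output weights, u: cluster -> y*-output weights (phase II)"""
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], Y.shape[1]
    v, w = rng.random((p, n)), rng.random((p, m))
    t, u = np.zeros((p, n)), np.zeros((p, m))

    def winner(x, y):
        # Step 4 / 11: squared distance of the pair x:y from each cluster unit
        return int(np.argmin(np.sum((x - v) ** 2, axis=1) +
                             np.sum((y - w) ** 2, axis=1)))

    for _ in range(epochs):                      # phase I (Steps 1-7)
        for x, y in zip(X, Y):
            J = winner(x, y)
            v[J] += alpha * (x - v[J])           # Step 5
            w[J] += beta * (y - w[J])
        alpha *= 0.5                             # Step 6
        beta *= 0.5

    alpha, beta = 0.05, 0.05                     # Step 9: small constant values
    for _ in range(epochs):                      # phase II (Steps 8-15)
        for x, y in zip(X, Y):
            J = winner(x, y)
            v[J] += alpha * (x - v[J])           # Step 12
            w[J] += beta * (y - w[J])
            t[J] += b * (x - t[J])               # Step 13: outstar weights to x*
            u[J] += a * (y - u[J])               #          outstar weights to y*
        a *= 0.5                                 # Step 14
        b *= 0.5
    return v, w, t, u
```

After training, presenting a vector, finding the winning cluster unit and reading that unit's rows of t and u gives the approximations x* and y*.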

Forward-only Counterpropagation network:


A simplified version of the full CPN is the forward-only CPN. The forward-only CPN uses only the x vector to form the clusters on the Kohonen units during phase-I training. The input vectors are presented to the input units; first the weights between the input layer and the cluster layer are trained, and then the weights between the cluster layer and the output layer are trained. This is a special competitive network in which the target is known.

Architecture of forward-only CPN


It consists of three layers: the input layer, the cluster layer and the output layer. Its architecture resembles the back-propagation network, but in the CPN there are interconnections between the units in the cluster layer.

Training Algorithm for Forward-only Counterpropagation network:


Step 0: Initialize the weights and the learning rates.

Step 1: Perform Steps 2-7 if stopping condition is false for phase-I training.

Step 2: Perform Steps 3-5 for each training input x.

Step 3: Set the X-input layer activations to vector X.

Step 4: Compute the winning cluster unit (J). If dot product method is used, find the cluster unit zj with the largest net input.
zinj = ∑ (i = 1 to n) xi vij

If Euclidean distance method is used, find the cluster unit Zj whose squared distance from input patterns is the smallest
D(j) = ∑ (i = 1 to n) (xi − vij)²

If there exists a tie in the selection of the winner unit, the unit with the smallest index is chosen as the winner.

Step 5: Perform the weight update for unit ZJ. For i = 1 to n,

vij (new) = vij (old) + α[xi − vij (old)] i = 1 to n

Step 6: Reduce the learning rate α:

α(t + 1) = 0.5 α(t)

Step 7: Test stopping condition for phase-I training.

Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.

Step 9: Perform Steps 10-13 for each training input pair x:y.

Step 10: Set the X-input layer activations to vector x. Set the Y-output layer activations to vector y.

Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.

Step 12: Update the weights entering into unit ZJ,

vij (new) = vij (old) + α[xi − vij (old)] i = 1 to n

Step 13: Update the weights from unit Zj to the output layers.

wkj(new) = wkj(old) + β[yk − wkj(old)]   k = 1 to m


Step 14: Reduce the learning rate β:

β(t + 1) = 0.5 β(t)

Step 15: Test stopping condition for phase-II training.
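To summarize the two phases, here is a minimal sketch (not from the original notes); the Euclidean-distance method is used for winner selection, and all names, sizes and learning rates are illustrative assumptions.

```python
import numpy as np

def train_forward_cpn(X, Y, p=2, alpha=0.3, beta=0.3, epochs=10, seed=0):
    """Minimal forward-only CPN sketch: phase I clusters the x vectors only,
    phase II trains the cluster -> output weights toward the targets y."""
    rng = np.random.default_rng(seed)
    v = rng.random((p, X.shape[1]))              # input  -> cluster weights
    w = np.zeros((p, Y.shape[1]))                # cluster -> output weights

    for _ in range(epochs):                      # phase I (Steps 1-7)
        for x in X:
            J = int(np.argmin(np.sum((x - v) ** 2, axis=1)))   # Step 4
            v[J] += alpha * (x - v[J])                          # Step 5
        alpha *= 0.5                                            # Step 6

    for _ in range(epochs):                      # phase II (Steps 8-15)
        for x, y in zip(X, Y):
            J = int(np.argmin(np.sum((x - v) ** 2, axis=1)))   # Step 11
            v[J] += alpha * (x - v[J])                          # Step 12
            w[J] += beta * (y - w[J])                           # Step 13
        beta *= 0.5                                             # Step 14
    return v, w

def predict_forward_cpn(x, v, w):
    """The network's output for x is the outgoing weight vector of the winner."""
    return w[int(np.argmin(np.sum((x - v) ** 2, axis=1)))]
```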
