

UNIT II MULTI LAYER NETWORKS

Back Propagation Network (BPN) - Training - Architecture - Algorithm; Counter Propagation Network (CPN) - Training - Architecture; Bidirectional Associative Memory (BAM) - Training - Stability analysis; Adaptive Resonance Theory (ART) - ART1 - ART2 - Architecture - Training; Hopfield Network - Energy Function - Discrete - Continuous - Algorithm - Application: Travelling Salesman Problem (TSP)
2 BACK PROPAGATION NETWORKS (BPN)

2.1 NEED FOR MULTILAYER NETWORKS

Single-layer networks cannot solve linearly inseparable problems; they can only solve linearly separable problems
Single-layer networks cannot solve complex problems
Single-layer networks cannot be used when a large set of input-output training pairs is available

Hence, to overcome the above limitations, we use multi-layer networks.

2.2 MULTI-LAYER NETWORKS

Any neural network which has at least one layer between the input and output layers is called a multi-layer network

Layers present between the input and output layers are called hidden layers

An input-layer unit simply collects the inputs and forwards them to the next (hidden) layer

The hidden-layer and output-layer units process their weighted inputs and produce an appropriate output

Multi-layer networks provide optimal solutions for arbitrary classification problems

Multi-layer networks use linear discriminants, but applied to inputs that have been transformed non-linearly by the hidden layers
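As a concrete illustration of the last two points, the sketch below hand-wires a tiny two-layer network that computes XOR, a linearly inseparable function that no single-layer network can realize. The step activation and the particular weight values are illustrative choices, not taken from the text.

```python
import numpy as np

def step(x):
    """Threshold activation: 1 if x >= 0, else 0."""
    return (x >= 0).astype(int)

# Hand-picked weights for a 2-2-1 network computing XOR.
# Hidden unit 1 fires for (x1 OR x2); hidden unit 2 fires for (x1 AND x2).
W_hidden = np.array([[1.0, 1.0],      # weights into hidden units from x1
                     [1.0, 1.0]])     # weights into hidden units from x2
b_hidden = np.array([-0.5, -1.5])     # OR threshold, AND threshold
W_out    = np.array([1.0, -2.0])      # output fires for OR AND NOT(AND) -> XOR
b_out    = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(np.array(x) @ W_hidden + b_hidden)   # hidden layer
    y = step(h @ W_out + b_out)                    # output layer
    print(x, "->", int(y))
# Prints 0, 1, 1, 0 -- the XOR function, which no single-layer net can realize.
```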

2.3 BACK PROPAGATION NETWORKS (BPN)

Introduced by Rumelhart, Hinton, and Williams in 1986, the BPN is a multi-layer feedforward network in which the error is propagated backwards, hence the name Back Propagation Network (BPN). It uses a supervised training process, has a systematic procedure for training the network, and is used for error detection and correction. The Generalized Delta rule (also called the Continuous Perceptron rule or Gradient Descent rule) is used in this network; it minimizes the mean squared error between the target output and the calculated output. The Delta rule has a faster convergence rate than the Perceptron rule and is an extended version of the Perceptron training rule. Its main limitation is the local minima problem, which reduces the convergence speed. Figure 1 shows a BPN. In Figure 1, the weights between the input layer and the hidden layer are denoted Wij, and the weights between the hidden layer and the output layer are denoted Vjk. The network is valid only for differentiable activation functions. The training process used in backpropagation involves three stages, listed below:

1. Feedforward of the input training pair
2. Calculation and backpropagation of the associated error
3. Adjustment of the weights
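For reference, the quantity minimized by the generalized delta (gradient descent) rule and the form of the weight update can be written as follows; α denotes the learning rate, which the text does not name explicitly.

```latex
% Mean squared error for one training pair, summed over the K output units
E = \frac{1}{2} \sum_{k=1}^{K} (t_k - Y_k)^2

% Gradient-descent (generalized delta) update: each weight moves against
% the gradient of E, scaled by the learning rate \alpha
\Delta w = -\alpha \, \frac{\partial E}{\partial w}
```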

Figure 1: Back Propagation Network

2.3.1 BPN Algorithm

The BPN algorithm is organized into four major steps, as follows:

1. Initialization of Bias, Weights


2. Feedforward process
3. Back Propagation of Errors
4. Updating of weights & biases

Algorithm

I. Initialization of weights
Step 1: Initialize the weights to small random values near zero
Step 2: While the stop condition is false, do Steps 3 to 10
Step 3: For each training pair, do Steps 4 to 9

II. Feedforward of inputs

Step 4: Each input xi is received and forwarded to the next (hidden) layer
Step 5: Each hidden unit sums its weighted inputs
Zinj = W0j + Σi xi Wij
Applying the activation function
Zj = f(Zinj)
This value is passed to the output layer
Step 6: Each output unit sums its weighted inputs
yink = V0k + Σj Zj Vjk
Applying the activation function
Yk = f(yink)

III. Backpropagation of errors

Step 7: Error term at each output unit
δk = (tk − Yk) f′(yink)
Step 8: Error term at each hidden unit
δinj = Σk δk Vjk
δj = δinj f′(Zinj)

IV. Updating of weights & biases

Step 9: Weight correction terms (α is the learning rate)
ΔWij = α δj xi        ΔVjk = α δk Zj
Bias correction terms
ΔW0j = α δj           ΔV0k = α δk
New weights
Wij(new) = Wij(old) + ΔWij
Vjk(new) = Vjk(old) + ΔVjk
New biases
W0j(new) = W0j(old) + ΔW0j
V0k(new) = V0k(old) + ΔV0k
Step 10: Test for the stop condition
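The steps above can be traced in code. Below is a minimal NumPy sketch of the same procedure: for each training pair it feeds the input forward, back-propagates the error terms δk and δj, and applies the weight corrections. The logistic activation, the learning rate α = 0.5, the 2-4-1 layer sizes, and the XOR training pairs are illustrative assumptions, not part of the original text.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation f; note f'(x) = f(x) * (1 - f(x))."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Assumed toy task (XOR) and sizes; alpha is the learning rate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs x_i
T = np.array([[0], [1], [1], [0]], dtype=float)                # targets t_k
n_in, n_hidden, n_out, alpha = 2, 4, 1, 0.5

# Step 1: initialize weights and biases to small random values near zero.
W  = rng.uniform(-0.5, 0.5, (n_in, n_hidden))     # Wij: input -> hidden
W0 = rng.uniform(-0.5, 0.5, n_hidden)             # W0j: hidden biases
V  = rng.uniform(-0.5, 0.5, (n_hidden, n_out))    # Vjk: hidden -> output
V0 = rng.uniform(-0.5, 0.5, n_out)                # V0k: output biases

for epoch in range(10000):                        # Step 2: repeat until stop condition
    for x, t in zip(X, T):                        # Step 3: for each training pair
        # Steps 4-6: feedforward.
        z_in = W0 + x @ W                         # Zinj = W0j + sum_i xi Wij
        z    = sigmoid(z_in)                      # Zj   = f(Zinj)
        y_in = V0 + z @ V                         # yink = V0k + sum_j Zj Vjk
        y    = sigmoid(y_in)                      # Yk   = f(yink)

        # Step 7: error term at the output units.
        delta_k = (t - y) * y * (1 - y)           # (tk - Yk) f'(yink)

        # Step 8: error back-propagated to the hidden units.
        delta_in_j = V @ delta_k                  # sum_k delta_k Vjk
        delta_j = delta_in_j * z * (1 - z)        # delta_inj f'(Zinj)

        # Step 9: weight and bias corrections.
        V  += alpha * np.outer(z, delta_k)        # Vjk += alpha delta_k Zj
        V0 += alpha * delta_k
        W  += alpha * np.outer(x, delta_j)        # Wij += alpha delta_j xi
        W0 += alpha * delta_j

# Step 10 (stop condition) is a fixed epoch count here; after training the
# network should approximate XOR.
print(np.round(sigmoid(V0 + sigmoid(W0 + X @ W) @ V), 2))
```

A fixed epoch count is used as the stop condition only for brevity; in practice the loop would stop when the total squared error falls below a chosen threshold, matching Step 10 above.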

2.3.2 Merits

Has a smoothing effect on weight correction

Reported to be up to 100 times faster than the perceptron model

Has a systematic weight-updating procedure

2.3.3 Demerits

The learning phase requires intensive calculations

Selecting the number of hidden-layer neurons is an issue
Selecting the number of hidden layers is also an issue
The network can get trapped in local minima
Temporal instability
Network paralysis
Training time is long for complex problems

2.4 COUNTER PROPAGATION NETWORK [CPN]

This network was proposed by Hecht-Nielsen in 1987. It implements both supervised and unsupervised learning; it is a combination of two neural architectures: (a) the Kohonen layer (unsupervised) and (b) the Grossberg layer (supervised). It provides a good solution where long training is not tolerated. The CPN functions as a generalization of a look-up table. The training pairs may be binary or continuous. The CPN produces a correct output even when the input is partially incomplete or incorrect. The main types of CPN are (a) Full Counter Propagation and (b) Forward-only Counter Propagation. Figure 2 shows the architectural diagram of the CPN network.
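To make the two-layer idea concrete, here is a compact sketch of forward-only CPN training: the Kohonen layer is trained unsupervised by winner-take-all competition, and the Grossberg layer is trained supervised by moving the winner's outgoing weights toward the target. The Euclidean winner selection, the learning rates a and b, the toy data, and the number of cluster units are illustrative assumptions rather than details from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy data: map 2-D inputs to 1-D outputs (lookup-table style).
X = rng.random((20, 2))                    # training inputs
Y = (X[:, :1] + X[:, 1:]) / 2.0            # target paired with each input

n_clusters, a, b = 4, 0.3, 0.1             # Kohonen units and learning rates (assumed)
W = rng.random((n_clusters, 2))            # Kohonen layer weights (input -> cluster)
V = np.zeros((n_clusters, 1))              # Grossberg layer weights (cluster -> output)

for epoch in range(100):
    for x, y in zip(X, Y):
        # Kohonen layer (unsupervised): the unit whose weight vector is
        # closest to x wins, and only the winner's weights move toward x.
        j = np.argmin(np.linalg.norm(W - x, axis=1))
        W[j] += a * (x - W[j])

        # Grossberg layer (supervised): the winner's outgoing weights move
        # toward the desired output y, building the lookup-table response.
        V[j] += b * (y - V[j])

# Recall: the winning Kohonen unit selects its stored Grossberg weights as
# the answer, so a partially incorrect input still retrieves the nearest
# stored response.
x_test = np.array([0.2, 0.9])
print(V[np.argmin(np.linalg.norm(W - x_test, axis=1))])
```

This winner-take-all recall is what gives the CPN its look-up-table behaviour noted above: responses are stored per cluster unit and retrieved by nearest-match, rather than computed by a global mapping.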
