Pages 17-20
Single-layer networks cannot be used to solve linearly inseparable problems; they can
solve only linearly separable problems
Single-layer networks cannot solve complex problems
Single-layer networks cannot be used when a large set of input-output training pairs is
available
Any neural network which has at least one layer in between the input and output layers is
called a Multi-Layer Network
Layers present in between the input and output layers are called Hidden Layers
An input layer neural unit just collects the inputs and forwards them to the next higher
layer
Multi-layer networks provide optimal solutions for arbitrary classification problems
Multi-layer networks use linear discriminants, where the inputs are nonlinear
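The XOR function is the classic linearly inseparable problem: no single-layer network can compute it, but adding one hidden layer makes it solvable. A minimal sketch in Python, with hand-chosen (not learned) weights and a threshold activation, purely for illustration:

```python
def step(z):
    # Threshold activation: fires when the net input exceeds zero
    return 1 if z > 0 else 0

def mlp_xor(x1, x2):
    # Hidden unit 1 behaves like OR, hidden unit 2 like AND
    # (weights are hand-chosen for this example, not trained)
    h1 = step(1 * x1 + 1 * x2 - 0.5)
    h2 = step(1 * x1 + 1 * x2 - 1.5)
    # Output unit computes "OR and not AND", i.e. XOR
    return step(1 * h1 - 1 * h2 - 0.5)

print([mlp_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

Each unit is still a linear discriminant; the hidden layer supplies the nonlinear inputs that the output unit separates linearly.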
2.3 BACK PROPAGATION NETWORKS (BPN)
A multi-layer feedforward network trained by the backpropagation of errors is called a
BPN. In Figure 1 the weights between the input and the hidden layer are denoted Wij,
and the weights between the hidden layer and the output layer are denoted Vjk. This
network is valid only for differentiable activation functions. The training process used in
backpropagation involves three stages, which are listed below:
(a) Feedforward of the input training pattern
(b) Backpropagation of the associated error
(c) Adjustment of the weights
2.3.1 BPN Algorithm
The BPN algorithm is classified into four major steps as follows:
Algorithm
I. Initialization of weights
Step 1: Initialize the weights to small random values near zero
Step 2: While the stop condition is false, do Steps 3 to 10
Step 3: For each training pair, do Steps 4 to 9
Step 4: Each input xi is received and forwarded to the next (hidden) layer
Step 5: Each hidden unit sums its weighted inputs as follows:
Zinj = Woj + Σi xi Wij
Applying the activation function:
Zj = f(Zinj)
This value is passed as input to the output layer
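Step 5 can be sketched directly in Python. This assumes the binary sigmoid as the differentiable activation f; the weights and inputs below are illustrative values, not part of the algorithm:

```python
import math

def hidden_unit(x, w0j, wj):
    # Net input: Zinj = Woj + sum_i xi * Wij
    z_in = w0j + sum(xi * wij for xi, wij in zip(x, wj))
    # Binary sigmoid as the differentiable activation f
    return 1.0 / (1.0 + math.exp(-z_in))

# Example: two inputs, one hidden unit (values chosen arbitrarily)
zj = hidden_unit([1.0, 0.0], 0.5, [0.2, -0.3])  # net input = 0.7
```

The resulting Zj is what the hidden unit forwards to the output layer.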
V. Updating of Weights & Biases
Step 9 (continued):
The new weights are
Wij(new) = Wij(old) + ΔWij
Vjk(new) = Vjk(old) + ΔVjk
The new biases are
Woj(new) = Woj(old) + ΔWoj
Vok(new) = Vok(old) + ΔVok
Step 10: Test for the stop condition
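The update rule of Step 9 can be sketched for a single layer, assuming the standard gradient-descent increment ΔWij = α·δj·xi, where the error terms δj and the learning rate α come from the earlier (omitted) backpropagation steps:

```python
def update_layer(W, W0, delta, x, alpha):
    """One application of Step 9 for one layer:
    Wij(new) = Wij(old) + alpha * delta_j * x_i   (weights)
    Woj(new) = Woj(old) + alpha * delta_j         (biases, input fixed at 1)
    delta and alpha are assumed precomputed."""
    new_W = [[W[i][j] + alpha * delta[j] * x[i]
              for j in range(len(delta))] for i in range(len(x))]
    new_W0 = [W0[j] + alpha * delta[j] for j in range(len(delta))]
    return new_W, new_W0

# Example with one input, one unit (illustrative numbers only)
nW, nW0 = update_layer([[0.0]], [0.0], delta=[1.0], x=[2.0], alpha=0.1)
```

The same function applies to the Vjk/Vok layer, with the hidden outputs Zj in place of x.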
2.3.2 Merits
2.3.3 Demerits
2.4 COUNTER PROPAGATION NETWORKS (CPN)
This network was proposed by Hecht-Nielsen in 1987. It implements both supervised
and unsupervised learning: it is a combination of two neural architectures, (a) the Kohonen
layer (unsupervised) and (b) the Grossberg layer (supervised). It provides a good solution
where long training cannot be tolerated. A CPN functions like a look-up table capable of
generalization. The training pairs may be binary or continuous. A CPN produces a correct
output even when the input is partially incomplete or incorrect. The main types of CPN are
(a) Full Counter Propagation and (b) Forward-only Counter Propagation. Figure 2
represents the architectural diagram of the CPN network.
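The recall phase of a forward-only CPN can be sketched as follows, assuming both layers are already trained; the prototype and output vectors below are illustrative, not from the text:

```python
def cpn_recall(x, kohonen_w, grossberg_w):
    # Kohonen layer (unsupervised): the winner is the unit whose
    # weight vector lies closest to the input x
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w))
             for w in kohonen_w]
    winner = dists.index(min(dists))
    # Grossberg layer (supervised): emit the output vector stored
    # for the winning cluster, like a look-up table entry
    return grossberg_w[winner]

prototypes = [[0.0, 0.0], [1.0, 1.0]]   # trained Kohonen weights (assumed)
outputs = [[0.0], [1.0]]                # trained Grossberg weights (assumed)
# A noisy version of prototype [1, 1] still recalls its stored output
y = cpn_recall([0.9, 0.8], prototypes, outputs)
```

This winner-take-all recall is why a CPN behaves like a generalizing look-up table and tolerates partially incorrect inputs: a corrupted input still falls nearest to its own prototype.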