where
$$\frac{\partial E_k}{\partial w_{jk}} = \frac{\partial E_k}{\partial O_{Ok}} \cdot \frac{\partial O_{Ok}}{\partial O_{Ik}} \cdot \frac{\partial O_{Ik}}{\partial w_{jk}}.$$
Now,
$$\frac{\partial E_k}{\partial O_{Ok}} = -(T_{Ok} - O_{Ok}),$$
$$\frac{\partial O_{Ok}}{\partial O_{Ik}} = a_2\,(1 + O_{Ok})(1 - O_{Ok}),$$
$$\frac{\partial O_{Ik}}{\partial w_{jk}} = H_{Oj}.$$
Therefore,
$$\frac{\partial E_k}{\partial w_{jk}} = -(T_{Ok} - O_{Ok})\, a_2\,(1 + O_{Ok})(1 - O_{Ok})\, H_{Oj}.$$
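A minimal sketch (not from the slides) of this incremental-mode update for an output-layer weight is given below; the variable names (eta, a2, t_ok, o_ok, h_oj) are illustrative assumptions.

```python
# Incremental-mode update for an output-layer weight w_jk, using the gradient above.
def delta_w_jk(eta, a2, t_ok, o_ok, h_oj):
    # dE_k/dw_jk = -(T_Ok - O_Ok) * a2 * (1 + O_Ok) * (1 - O_Ok) * H_Oj
    grad = -(t_ok - o_ok) * a2 * (1 + o_ok) * (1 - o_ok) * h_oj
    # Delta w_jk = -eta * dE_k/dw_jk
    return -eta * grad
```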
Similarly, the change in $v_{ij}$, that is, $\Delta v_{ij}$, is determined as follows:
$$\Delta v_{ij} = -\eta\, \frac{\partial E_k}{\partial v_{ij}},$$
where
$$\frac{\partial E_k}{\partial v_{ij}} = \frac{\partial E_k}{\partial O_{Ok}} \cdot \frac{\partial O_{Ok}}{\partial O_{Ik}} \cdot \frac{\partial O_{Ik}}{\partial H_{Oj}} \cdot \frac{\partial H_{Oj}}{\partial H_{Ij}} \cdot \frac{\partial H_{Ij}}{\partial v_{ij}}.$$
Now,
$$\frac{\partial O_{Ok}}{\partial O_{Ik}} = a_2\,(1 + O_{Ok})(1 - O_{Ok}),$$
$$\frac{\partial O_{Ik}}{\partial H_{Oj}} = w_{jk},$$
$$\frac{\partial H_{Oj}}{\partial H_{Ij}} = a_1\, H_{Oj}\,(1 - H_{Oj}),$$
$$\frac{\partial H_{Ij}}{\partial v_{ij}} = I_{Oi} = I_{Ii}.$$
We get
$$\frac{\partial E_k}{\partial v_{ij}} = -a_1 a_2\,(T_{Ok} - O_{Ok})(1 + O_{Ok})(1 - O_{Ok})(1 - H_{Oj})\, w_{jk}\, H_{Oj}\, I_{Ii}.$$
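A companion sketch for a hidden-layer weight in incremental mode follows the same pattern; again, all names are illustrative assumptions.

```python
# Incremental-mode update for a hidden-layer weight v_ij, using the gradient above.
def delta_v_ij(eta, a1, a2, t_ok, o_ok, h_oj, w_jk, i_ii):
    # dE_k/dv_ij = -a1*a2*(T_Ok - O_Ok)*(1+O_Ok)*(1-O_Ok)*(1-H_Oj)*w_jk*H_Oj*I_Ii
    grad = -a1 * a2 * (t_ok - o_ok) * (1 + o_ok) * (1 - o_ok) * (1 - h_oj) * w_jk * h_oj * i_ii
    return -eta * grad
```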
Batch Mode of Training:
In batch mode, the error is determined considering all the $L$ training scenarios together:
$$E' = \frac{1}{L}\sum_{l=1}^{L} E_l,$$
where $E_l$ is the error corresponding to the $l$-th training scenario.
The change in $w_{jk}$, that is, $\Delta w_{jk}$, is determined as follows:
$$\Delta w_{jk} = -\eta\, \frac{\partial E'}{\partial w_{jk}}.$$
Now,
$$\frac{\partial E'}{\partial w_{jk}} = \frac{\partial E'}{\partial E_l} \cdot \frac{\partial E_l}{\partial E_k} \cdot \frac{\partial E_k}{\partial O_{Ok}} \cdot \frac{\partial O_{Ok}}{\partial O_{Ik}} \cdot \frac{\partial O_{Ik}}{\partial w_{jk}}.$$
Similarly, $\Delta v_{ij}$ can be calculated as follows:
$$\Delta v_{ij} = -\eta\, \left\{\frac{\partial E'}{\partial v_{ij}}\right\}_{av},$$
where
$$\left\{\frac{\partial E'}{\partial v_{ij}}\right\}_{av} = \frac{1}{p}\sum_{k=1}^{p} \frac{\partial E'_k}{\partial v_{ij}},$$
and $p$ is the number of output neurons.
Now,
$$\frac{\partial E'_k}{\partial v_{ij}} = \frac{\partial E'_k}{\partial E_{kl}} \cdot \frac{\partial E_{kl}}{\partial O_{Ok}} \cdot \frac{\partial O_{Ok}}{\partial O_{Ik}} \cdot \frac{\partial O_{Ik}}{\partial H_{Oj}} \cdot \frac{\partial H_{Oj}}{\partial H_{Ij}} \cdot \frac{\partial H_{Ij}}{\partial v_{ij}}.$$
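In code, the practical difference between the two modes is when the update is applied: incremental mode updates after every training scenario, whereas batch mode averages the per-scenario gradients first. A minimal sketch of that averaging, with placeholder names, is given below.

```python
# Batch-mode sketch: average a weight's gradient over all scenarios in the batch,
# then apply a single update. 'grads_per_scenario' stands for the per-scenario
# gradients computed from the chain rules above.
def batch_update(weight, eta, grads_per_scenario):
    avg_grad = sum(grads_per_scenario) / len(grads_per_scenario)
    return weight - eta * avg_grad
```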
Momentum Constant (α’)
Generalized Delta Rule:
$$\Delta w(t) = -\eta\,\frac{\partial E}{\partial w}(t) + \alpha'\,\Delta w(t-1)$$
η: learning rate (0.0 to 1.0)
α′: momentum constant (0.0 to 1.0)
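A minimal sketch of this generalized delta rule, assuming the previous weight change $\Delta w(t-1)$ is stored between iterations:

```python
# Generalized delta rule with momentum:
#   Delta w(t) = -eta * dE/dw (t) + alpha * Delta w(t-1)
def delta_w_with_momentum(eta, grad, alpha, prev_delta):
    return -eta * grad + alpha * prev_delta
```

Carrying over a fraction α′ of the previous step smooths the weight updates and helps the search move through shallow local minima, which partly mitigates the first disadvantage listed below.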
Disadvantages of BPNN
• Solutions of a BPNN may get stuck at local minima
• Training of an NN is computationally more involved than that of an FLC
• It works like a black box
Numerical Example
The figure below shows the schematic view of an NN consisting of three layers, namely the input, hidden and output layers. The neurons lying on the input, hidden and output layers have linear, log-sigmoid and tan-sigmoid transfer functions, respectively. There are two inputs, namely $I_1$ and $I_2$, and one output, that is, $O$. The connecting weights between the input and hidden layers are represented by $[V]$ and those between the hidden and output layers are denoted by $[W]$. The initial values of the weights are assumed to be as follows:
[Figure: schematic of the 2–3–1 network. Input neurons 1 and 2 receive $I_1$ and $I_2$; they are connected to hidden neurons 1, 2 and 3 through the weights $V_{11}, V_{12}, V_{13}, V_{21}, V_{22}, V_{23}$, and the hidden neurons are connected to the single output neuron (output $O$) through $W_{11}, W_{21}, W_{31}$.]
$$[W] = \begin{bmatrix} w_{11} \\ w_{21} \\ w_{31} \end{bmatrix} = \begin{bmatrix} 0.1 \\ 0.2 \\ 0.1 \end{bmatrix}$$
The inputs of the different hidden-layer neurons are calculated first. The neurons of the hidden layer are assumed to have a log-sigmoid transfer function, $H_O = \frac{1}{1 + e^{-H_I}}$, so their outputs follow directly, and from these the input of the output neuron is obtained. As the output neuron has a tan-sigmoid transfer function, its output is determined as follows:
$$O_{O1} = \frac{e^{O_{I1}} - e^{-O_{I1}}}{e^{O_{I1}} + e^{-O_{I1}}} = 0.195692$$
$$E = \frac{1}{2}\,(T_O - O_{O1})^2 = 0.001044$$
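For reference, the forward pass of this 2–3–1 network can be sketched as follows. The sketch assumes a linear input layer, log-sigmoid hidden neurons and a tan-sigmoid output neuron with $a_1 = a_2 = 1$ (consistent with the derivative expressions used below); the actual input values and the $[V]$ entries are not reproduced here and must be supplied.

```python
import math

def log_sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tan_sigmoid(x):
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def forward(inputs, V, W):
    """inputs: [I1, I2]; V: 2x3 input-to-hidden weights; W: 3x1 hidden-to-output weights."""
    # Hidden-layer inputs H_Ij = sum_i I_i * v_ij and outputs H_Oj = logsig(H_Ij)
    h_in = [sum(inputs[i] * V[i][j] for i in range(2)) for j in range(3)]
    h_out = [log_sigmoid(x) for x in h_in]
    # Output-neuron input O_I1 = sum_j H_Oj * w_j1 and output O_O1 = tansig(O_I1)
    o_in = sum(h_out[j] * W[j][0] for j in range(3))
    return tan_sigmoid(o_in)
```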
Back-propagation Algorithm:
The change in $w_{11}$ can be determined using the procedure below.
$$\Delta w_{11} = -\eta\, \frac{\partial E}{\partial w_{11}},$$
where
$$\frac{\partial E}{\partial w_{11}} = \frac{\partial E}{\partial O_{O1}} \cdot \frac{\partial O_{O1}}{\partial O_{I1}} \cdot \frac{\partial O_{I1}}{\partial w_{11}}.$$
Now,
$$\frac{\partial E}{\partial O_{O1}} = -(T_O - O_{O1}), \qquad \frac{\partial O_{O1}}{\partial O_{I1}} = \frac{4}{\left(e^{O_{I1}} + e^{-O_{I1}}\right)^{2}}, \qquad \frac{\partial O_{I1}}{\partial w_{11}} = H_{O1}.$$
Substituting the values of $\frac{\partial E}{\partial O_{O1}}$, $\frac{\partial O_{O1}}{\partial O_{I1}}$ and $\frac{\partial O_{I1}}{\partial w_{11}}$ in the last expression of $\frac{\partial E}{\partial w_{11}}$, we get
$$\frac{\partial E}{\partial w_{11}} = 0.022630$$
Now, substituting the values of $\eta$ and $\frac{\partial E}{\partial w_{11}}$ in the expression of $\Delta w_{11}$, and proceeding similarly for the other output-layer weights, we get
$$\Delta w_{11} = -0.004526$$
$$\Delta w_{21} = -0.004306$$
$$\Delta w_{31} = -0.004284$$
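As a consistency check: the learning rate $\eta$ is not visible in the extracted text, but the numbers above imply $\eta = 0.2$ (since $0.004526 / 0.022630 = 0.2$), which the snippet below assumes.

```python
eta = 0.2                 # assumed learning rate, inferred from the numbers above
dE_dw11 = 0.022630
print(round(-eta * dE_dw11, 6))   # -0.004526, matching Delta w_11 above
```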
The necessary change in $v_{11}$ can be obtained as follows:
$$\Delta v_{11} = -\eta\, \frac{\partial E}{\partial v_{11}},$$
where
$$\frac{\partial E}{\partial v_{11}} = \frac{\partial E}{\partial O_{O1}} \cdot \frac{\partial O_{O1}}{\partial O_{I1}} \cdot \frac{\partial O_{I1}}{\partial H_{O1}} \cdot \frac{\partial H_{O1}}{\partial H_{I1}} \cdot \frac{\partial H_{I1}}{\partial v_{11}}.$$
Now,
$$\frac{\partial E}{\partial O_{O1}} = -(T_O - O_{O1}),$$
$$\frac{\partial O_{O1}}{\partial O_{I1}} = \frac{4}{\left(e^{O_{I1}} + e^{-O_{I1}}\right)^{2}},$$
$$\frac{\partial O_{I1}}{\partial H_{O1}} = w_{11},$$
$$\frac{\partial H_{O1}}{\partial H_{I1}} = \frac{e^{-H_{I1}}}{\left(1 + e^{-H_{I1}}\right)^{2}},$$
$$\frac{\partial H_{I1}}{\partial v_{11}} = I_{O1}.$$
Substituting the values of these five partial derivatives in the last expression of $\frac{\partial E}{\partial v_{11}}$, we obtain
$$\frac{\partial E}{\partial v_{11}} = 0.000549$$
Now, substituting the values of $\eta$ and the corresponding partial derivatives, we get
$$\Delta v_{21} = 0.000088$$
$$\Delta v_{12} = -0.000220$$
$$\Delta v_{22} = 0.000176$$
$$\Delta v_{13} = -0.000110$$
$$\Delta v_{23} = 0.000088$$
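With the same assumed $\eta = 0.2$, the change in $v_{11}$ itself (not listed in the extracted text) would follow directly from $\frac{\partial E}{\partial v_{11}} = 0.000549$:

```python
eta = 0.2                 # assumed learning rate, as above
dE_dv11 = 0.000549
print(round(-eta * dE_dv11, 6))   # about -0.000110, the same magnitude as Delta v_13
```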
Therefore, the updated values of the weights turn out to be as follows:
$$[W] = \begin{bmatrix} w_{11} \\ w_{21} \\ w_{31} \end{bmatrix} = \begin{bmatrix} 0.095474 \\ 0.195694 \\ 0.095716 \end{bmatrix}$$
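Finally, the updated output-layer weights can be verified as $w_{\text{new}} = w_{\text{old}} + \Delta w$:

```python
w_old   = [0.1, 0.2, 0.1]
delta_w = [-0.004526, -0.004306, -0.004284]
print([round(w + d, 6) for w, d in zip(w_old, delta_w)])
# [0.095474, 0.195694, 0.095716], matching the matrix above
```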