
Lesson 9

Multilayer Perceptron
What is a Multilayer Perceptron Neural Network?
[Figure: an MLP with three input nodes (x1, x2, x3), a hidden layer, and two output nodes (y1, y2). W (W11, W12, ..., W34) are the input-to-hidden weights and V (V11, V12, ..., V42) the hidden-to-output weights.]
Multilayer Perceptron
[Figure: the same MLP, with one hidden node highlighted as a perceptron fed by the input nodes i1, i2, i3.]

Perceptron

For a given node i, the perceptron is defined as the weighted summation of the incoming data from the nodes of the previous layer:

$$p_i = x_1 w_{1i} + x_2 w_{2i} + \dots + x_n w_{ni} = \sum_{j=1}^{n} x_j w_{ji}$$
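A minimal Python sketch of this weighted summation, assuming plain lists for the inputs and weights (the example values are the ones used in the forward-pass walkthrough later in the lesson):

```python
# Perceptron sum for one node: p_i = sum_j x_j * w_ji.
def perceptron(x, w):
    return sum(x_j * w_j for x_j, w_j in zip(x, w))

# Input vector [1, 0, 1] and the weights into the first hidden node,
# as in the worked example below: p_1 = 0.7.
print(perceptron([1, 0, 1], [0.2, 1.5, 0.5]))  # 0.7
```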
Multilayer Perceptron
[Figure: the same MLP, showing that each node applies an activation function to its perceptron sum before passing the result to the next layer.]
Forward Pass
Input Vector

[Figure: the MLP with the input vector [1, 0, 1] applied (X1 = 1, X2 = 0, X3 = 1); a few of the edge weights are labelled on the slide (0.2, 0.1, 0.5, 0.2, 0.25, 0.15).]
Forward Pass
[Figure: with input vector [1, 0, 1], each node's forward-pass computation happens in two steps: the perceptron sum, x = Perceptron(x), followed by the activation function, x = f(x) (drawn here as a linear activation).]
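As a minimal sketch, this two-step node computation might look like the following in Python, with the activation function passed in as a parameter (the identity function stands in for the linear activation named on the slide):

```python
# One node's output: the perceptron sum followed by the activation function,
# mirroring the two steps x = Perceptron(x) and x = f(x).
def node_output(x, w, f):
    p = sum(x_j * w_j for x_j, w_j in zip(x, w))  # perceptron sum
    return f(p)                                   # activation function

# With a linear (identity) activation the node simply returns its sum.
print(node_output([1, 0, 1], [0.2, 1.5, 0.5], lambda p: p))  # 0.7
```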
Forward Pass
With the input vector [1, 0, 1], the perceptron sum for the first hidden node (incoming weights 0.2, 1.5, and 0.5) is

$$p_1 = 1 \times 0.2 + 0 \times 1.5 + 1 \times 0.5 = 0.7$$

and applying the sigmoid activation function gives

$$h_1 = \text{Sigmoid}(0.7) = 0.668$$

[Figure: the MLP with the edges into the first hidden node highlighted.]
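This step can be checked numerically; a small sketch using Python's standard-library math module:

```python
import math

# Sigmoid activation: f(p) = 1 / (1 + e^(-p)).
def sigmoid(p):
    return 1.0 / (1.0 + math.exp(-p))

p1 = 1 * 0.2 + 0 * 1.5 + 1 * 0.5   # perceptron sum for the first hidden node
print(round(sigmoid(p1), 3))        # 0.668
```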
Forward Pass
Repeating the same perceptron-plus-sigmoid computation for the remaining hidden nodes gives the hidden-layer activations:

$$\bar{h} = [0.668,\ 0.912,\ 0.102,\ 0.471]$$

[Figure: the MLP with all four hidden activations labelled next to the hidden layer.]
Forward Pass
Applying the same step at the output layer, with the hidden activations as its input, gives the network outputs:

$$y_1 = 0.812, \qquad y_2 = 0.151$$

[Figure: the MLP with the hidden activations [0.668, 0.912, 0.102, 0.471] and the outputs 0.812 and 0.151 labelled.]
Weight Matrix
[Figure: the MLP redrawn with named nodes: inputs i1, i2, i3; hidden nodes h1, h2, h3, h4; output nodes o1, o2.]

The input-to-hidden weights form a 3 × 4 matrix W, with one row per input node and one column per hidden node; the hidden-to-output weights form a 4 × 2 matrix V, with one row per hidden node and one column per output node:

W:        h1   h2   h3   h4
    i1   W11  W12  W13  W14
    i2   W21  W22  W23  W24
    i3   W31  W32  W33  W34

V:        o1   o2
    h1   V11  V12
    h2   V21  V22
    h3   V31  V32
    h4   V41  V42
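A sketch of the two matrices as NumPy arrays. Only a handful of the lesson's weight values are visible on the slides, so most entries here are placeholder zeros; the shapes are what matter:

```python
import numpy as np

# W[j, i] is the weight from input node j to hidden node i (3 x 4);
# V[i, k] is the weight from hidden node i to output node k (4 x 2).
W = np.zeros((3, 4))
V = np.zeros((4, 2))

# The only column the lesson spells out: the weights into h1.
W[:, 0] = [0.2, 1.5, 0.5]

print(W.shape, V.shape)  # (3, 4) (4, 2)
```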
Forward Pass
The first hidden activation is the product of the input row vector with the first column of W:

$$p_1 = x_1 W_{11} + x_2 W_{21} + x_3 W_{31}$$

$$h_1 = \text{Sigmoid}(p_1)$$

[Figure: [1  0  1] × W = [h1  h2  h3  h4], with h1 = 0.668.]
Forward Pass
Likewise, the second hidden activation uses the second column of W:

$$p_2 = x_1 W_{12} + x_2 W_{22} + x_3 W_{32}$$

$$h_2 = \text{Sigmoid}(p_2)$$

[Figure: [1  0  1] × W = [h1  h2  h3  h4], with h1 = 0.668 and h2 = 0.912.]
Forward Pass
The entire hidden layer can therefore be computed in a single vector-matrix product:

$$\bar{p}^T = \bar{x}^T W$$

$$\bar{h}^T = \text{Sigmoid}(\bar{p}^T)$$

[Figure: x̄ᵀ W = [1  0  1] × W gives h̄ᵀ = [0.668  0.912  0.102  0.471].]
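The same vectorised step in NumPy, where @ is the vector-matrix product. Only the first column of W is given in the lesson, so only h1 can be checked here; with the full weight matrix this line would produce [0.668, 0.912, 0.102, 0.471]:

```python
import numpy as np

def sigmoid(p):
    return 1.0 / (1.0 + np.exp(-p))

x = np.array([1.0, 0.0, 1.0])   # input vector x^T
W = np.zeros((3, 4))            # placeholder weights (see previous sketch)
W[:, 0] = [0.2, 1.5, 0.5]       # the one column the lesson provides

h = sigmoid(x @ W)              # h^T = Sigmoid(x^T W)
print(round(float(h[0]), 3))    # 0.668, matching the slides
```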
Forward Pass
The output layer works the same way, with the hidden activations as the input vector and V as the weight matrix:

$$\bar{p}^T = \bar{h}^T V$$

$$\bar{y}^T = \text{Sigmoid}(\bar{p}^T)$$

[Figure: h̄ᵀ V = [0.668  0.912  0.102  0.471] × V gives ȳᵀ = [0.812  0.151].]
Forward Pass
Putting the two layers together, the complete forward pass is:

$$\bar{y}^T = \text{Sigmoid}\big(\text{Sigmoid}(\bar{x}^T W)\, V\big)$$

[Figure: the full MLP with input [1, 0, 1] and outputs y1 = 0.812, y2 = 0.151.]
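The whole forward pass fits in a few lines of NumPy. The slides do not print every entry of W and V, so the matrices below are random placeholders; with the lesson's actual weights, forward(x, W, V) would compute the hidden activations [0.668, 0.912, 0.102, 0.471] internally and return the outputs [0.812, 0.151]:

```python
import numpy as np

def sigmoid(p):
    return 1.0 / (1.0 + np.exp(-p))

def forward(x, W, V):
    """Complete MLP forward pass: y^T = Sigmoid(Sigmoid(x^T W) V)."""
    h = sigmoid(x @ W)   # hidden-layer activations
    y = sigmoid(h @ V)   # output-layer activations
    return y

# Random placeholder weights; substitute the lesson's W (3 x 4) and V (4 x 2)
# to reproduce y = [0.812, 0.151] for x = [1, 0, 1].
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
V = rng.normal(size=(4, 2))
print(forward(np.array([1.0, 0.0, 1.0]), W, V))
```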
