08 An Example of NN Using ReLU

This document describes the forward pass of a simple neural network using the ReLU activation function. The network has two inputs, one hidden layer with two neurons, and two output neurons. The forward pass computes the total net input to each neuron, applies the ReLU activation, and repeats the process for the output layer. It then evaluates the mean squared error cost between the target and predicted outputs. The next steps involve backpropagation to update the weights and reduce the cost.

Jaringan Syaraf Tiruan (Artificial Neural Networks)

Module 09: An Example of NN using ReLU

Faculty: Engineering (Fakultas Teknik)
Study Program: Electrical Engineering (Teknik Elektro)

Zendi Iklima, ST, S.Kom, M.Sc


The Architecture

[Network diagram: inputs i1 and i2 connect to hidden neurons h1 and h2 through weights w1–w4 and bias b1; h1 and h2 connect to output neurons o1 and o2 through weights w5–w8 and bias b2.]
The Architecture (with initial values)

[Network diagram with values: inputs i1 = 0.05, i2 = 0.10; weights into h1: 0.15 (from i1) and 0.20 (from i2); weights into h2: 0.25 (from i1) and 0.30 (from i2); hidden bias b1 = 0.35; weights into o1: 0.40 (from h1) and 0.50 (from h2); weights into o2: 0.45 (from h1) and 0.55 (from h2); output bias b2 = 0.60; target outputs t_o1 = 0.01, t_o2 = 0.99.]
The Forward Pass

We figure out the total net input to each hidden-layer neuron, pass that net input through the activation function, then repeat the process with the output-layer neurons.

Total input:
$z_j = \text{net}_{h_j} = \sum_j w_j a_j + b_j$

Activation function (ReLU):
$\sigma(z_j) = \text{out}_{h_j} = \max(1, z_j)$

$z_{ih_1} = w_1 i_1 + w_3 i_2 + b_1 = 0.3775$

$\sigma(z_{ih_1}) = \max(1, z_{ih_1}) = 1$

$z_{ho_1} = w_5\,\sigma(z_{ih_1}) + w_7\,\sigma(z_{ih_2}) + b_2 = 1.50$

$\sigma(z_{ho_1}) = \max(1, z_{ho_1}) = 1.50$
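To make the arithmetic easy to check, here is a minimal Python sketch of the scalar forward pass above. It follows the slides' activation max(1, z) so that the printed values are reproduced (the standard ReLU would be max(0, z)); the variable names and the assignment of weight labels to edges (w1, w3 feed h1; w2, w4 feed h2; w5, w7 feed o1; w6, w8 feed o2) are read off the slides' formulas and are otherwise assumptions.

```python
# Scalar forward pass following the slides' numbers.
# The slides' activation is max(1, z); the standard ReLU would be max(0, z).

def act(z):
    return max(1.0, z)  # activation as defined in the slides

# Inputs, weights, and biases from "The Architecture (with initial values)"
i1, i2 = 0.05, 0.10
w1, w3 = 0.15, 0.20   # weights into h1 (from i1 and i2)
w2, w4 = 0.25, 0.30   # weights into h2 (from i1 and i2)
b1 = 0.35             # hidden-layer bias
w5, w7 = 0.40, 0.50   # weights into o1 (from h1 and h2)
w6, w8 = 0.45, 0.55   # weights into o2 (from h1 and h2)
b2 = 0.60             # output-layer bias

# Hidden layer
z_ih1 = w1 * i1 + w3 * i2 + b1           # 0.3775
z_ih2 = w2 * i1 + w4 * i2 + b1           # 0.3925
out_h1, out_h2 = act(z_ih1), act(z_ih2)  # both clamp to 1.0

# Output layer
z_ho1 = w5 * out_h1 + w7 * out_h2 + b2   # 1.50
z_ho2 = w6 * out_h1 + w8 * out_h2 + b2   # 1.60
print(z_ih1, z_ih2, act(z_ho1), act(z_ho2))
```

Running it prints 0.3775, 0.3925, 1.5, 1.6, matching the values on these slides.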


The Forward Pass

Total input:
$z_j = \text{net}_{h_j} = \sum_j w_j a_j + b_j$

Activation function (ReLU):
$\sigma(z_j) = \text{out}_{h_j} = \max(1, z_j)$

$z_{ih} = \begin{bmatrix} w_1 & w_3 \\ w_2 & w_4 \end{bmatrix} \begin{bmatrix} i_1 \\ i_2 \end{bmatrix} + b_1 = \begin{bmatrix} 0.15 & 0.20 \\ 0.25 & 0.30 \end{bmatrix} \begin{bmatrix} 0.05 \\ 0.10 \end{bmatrix} + 0.35 = \begin{bmatrix} 0.3775 \\ 0.3925 \end{bmatrix} = \begin{bmatrix} z_{ih_1} \\ z_{ih_2} \end{bmatrix}$

$\sigma(z_{ih}) = \begin{bmatrix} \max(1, z_{ih_1}) \\ \max(1, z_{ih_2}) \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
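The same hidden-layer step as a small NumPy sketch; the rows of W_ih are assumed to hold the weights feeding h1 and h2 respectively, which is what reproduces 0.3775 and 0.3925, and the activation again follows the slides' max(1, z).

```python
import numpy as np

# Hidden layer in matrix form, following the slides' numbers.
W_ih = np.array([[0.15, 0.20],    # weights into h1 (from i1, i2)
                 [0.25, 0.30]])   # weights into h2 (from i1, i2)
i_vec = np.array([0.05, 0.10])    # inputs i1, i2
b1 = 0.35                         # hidden-layer bias

z_ih = W_ih @ i_vec + b1          # [0.3775, 0.3925]
sigma_ih = np.maximum(1.0, z_ih)  # [1.0, 1.0], elementwise max(1, z)
print(z_ih, sigma_ih)
```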
The Forward Pass

Total input:
$z_j = \text{net}_{h_j} = \sum_j w_j a_j + b_j$

Activation function (ReLU):
$\sigma(z_j) = \text{out}_{h_j} = \max(1, z_j)$

$z_{ho} = \begin{bmatrix} w_5 & w_7 \\ w_6 & w_8 \end{bmatrix} \begin{bmatrix} \sigma(z_{ih_1}) \\ \sigma(z_{ih_2}) \end{bmatrix} + b_2 = \begin{bmatrix} 0.40 & 0.50 \\ 0.45 & 0.55 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 0.60 = \begin{bmatrix} 1.50 \\ 1.60 \end{bmatrix} = \begin{bmatrix} z_{ho_1} \\ z_{ho_2} \end{bmatrix}$

$\sigma(z_{ho}) = \begin{bmatrix} \max(1, z_{ho_1}) \\ \max(1, z_{ho_2}) \end{bmatrix} = \begin{bmatrix} 1.50 \\ 1.60 \end{bmatrix}$
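And the output-layer step in the same style (self-contained: it hard-codes the hidden-layer outputs [1, 1] obtained above):

```python
import numpy as np

# Output layer in matrix form, continuing from the hidden-layer result.
W_ho = np.array([[0.40, 0.50],      # weights into o1 (from h1, h2)
                 [0.45, 0.55]])     # weights into o2 (from h1, h2)
sigma_ih = np.array([1.0, 1.0])     # hidden-layer outputs after max(1, z)
b2 = 0.60                           # output-layer bias

z_ho = W_ho @ sigma_ih + b2         # [1.50, 1.60]
sigma_ho = np.maximum(1.0, z_ho)    # [1.50, 1.60]
print(z_ho, sigma_ho)
```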
The Cost Function

Cost function:
$MSE = C(w, b) = \frac{1}{2n} \sum_x \bigl(t(x) - z(x)\bigr)^2$

$MSE = \begin{bmatrix} \tfrac{1}{2}\bigl(t_{o_1} - \sigma(z_{ho_1})\bigr)^2 \\ \tfrac{1}{2}\bigl(t_{o_2} - \sigma(z_{ho_2})\bigr)^2 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2}(0.01 - 1.50)^2 \\ \tfrac{1}{2}(0.99 - 1.60)^2 \end{bmatrix} = \begin{bmatrix} 1.11005 \\ 0.18605 \end{bmatrix}$

Total cost function:
$MSE = E = MSE_1 + MSE_2 = 1.2961$
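A few lines of Python to verify the cost values, using the per-output ½·(t − y)² form from the slide:

```python
# Check of the cost values, using the per-output (1/2)*(t - y)^2 form from the slide.
t = [0.01, 0.99]   # target outputs t_o1, t_o2
y = [1.50, 1.60]   # predicted outputs sigma(z_ho1), sigma(z_ho2)

mse = [0.5 * (ti - yi) ** 2 for ti, yi in zip(t, y)]
total = sum(mse)
print(mse, total)  # ≈ [1.11005, 0.18605], total ≈ 1.2961
```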
The Cost Function

Next lecture:

The backpropagation process updates the initial weights over successive iterations. It shows how the fully connected layers are linked together like a chain (the chain rule), and it introduces a parameter called the learning rate (α).
The Architecture

[Network diagram with updated weights: inputs i1, i2; hidden neurons h1, h2; output neurons o1, o2; biases b1, b2. After backpropagation, each weight w1–w8 is shown as its updated value w1+–w8+.]
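As a preview of the next lecture, the updated weights w1+ … w8+ in the diagram above would typically come from a standard gradient-descent step (not derived in these slides):

$w_i^{+} = w_i - \alpha \, \dfrac{\partial E}{\partial w_i}$

where E is the total cost computed earlier and α is the learning rate mentioned on the previous slide.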
Terima Kasih (Thank You)
Zendi Iklima, ST, S.Kom, M.Sc
