
NEURAL NETWORK

SESSION XX

SUBJECT MATTER EXPERT


LEARNING OUTCOME
• LO 1: Students will be able to articulate the fundamental concepts,
theories, and principles of artificial intelligence
• LO 4: Students will be capable of applying learning algorithms to
solve problems
OUTLINE
• Introduction
• Perceptron
• Multi-layer Perceptron
• Backpropagation
• Algorithm of Artificial Neural Network
INTRODUCTION
INTRODUCTION
• Neural networks generalize this idea by feeding the outputs of various components into the
inputs of other components, numerous times in sequence

• (Artificial) Neural Networks roughly developed as follows:


• Originally inspired by (but by no means equivalent to) biological neural networks.
• There was intense research 2-3 decades ago, but they did not quite live up to
expectations at the time. They were considered inferior to SVMs, kernel methods,
etc. for most tasks.
• In the last decade, there has been an explosion of interest and success following
computational advances, availability of large data sets, and improved deep
architectures.
• Deep neural networks now provide state-of-the-art performance in image
classification, speech recognition and synthesis, game playing (e.g., AlphaGo), and
many other tasks.
INTRODUCTION
Application of Neural Network

Control System
INTRODUCTION
Application of Neural Network

Sound Recognition
PERCEPTRON
PERCEPTRON
• One of the early attempts to build an artificial neuron was that of
Frank Rosenblatt.

• Rosenblatt proposed a method called the perceptron, which
became known as the first artificial neuron.
PERCEPTRON
• The algorithm is a type of linear classifier that learns a decision boundary for binary
classification tasks
• Its architecture is composed of two functions applied in sequence:
• A linear transformation
• A thresholding function

PERCEPTRON
• Mathematically, the perceptron function can be expressed as:

ŷ = f(w·x + b), where f(z) = 1 if z > 0 and f(z) = 0 otherwise

Where:
• x is the input vector and w is the vector of learned weights
• b is the bias and f is the thresholding function
• ŷ is the predicted class label
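A minimal NumPy sketch of this function (the function name and the 0/1 label convention are illustrative assumptions, not from the slides):

```python
import numpy as np

def perceptron_predict(x, w, b):
    """Perceptron: a linear transformation followed by a threshold."""
    z = np.dot(w, x) + b          # linear transformation (the combiner)
    return 1 if z > 0 else 0      # thresholding function

# Example: a perceptron whose decision boundary behaves like AND
w = np.array([1.0, 1.0])
b = -1.5
print(perceptron_predict(np.array([1.0, 1.0]), w, b))  # 1
print(perceptron_predict(np.array([0.0, 1.0]), w, b))  # 0
```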
PERCEPTRON
• The perceptron algorithm and architecture have a major limitation:
an inability to deal with non-linearly separable patterns.
• This drawback restricts the use of Rosenblatt’s perceptron to
simple tasks.
• To deal with this issue, artificial neurons tweak the perceptron
architecture to include a differentiable activation function that
replaces the thresholding function.
PERCEPTRON
ACTIVATION FUNCTION
• An activation function is a non-linear operation applied to the output of the combiner.

• It serves two objectives:
• It provides control over the range of the output (the activation range)
• It expands the representational ability of neural networks beyond linearly-
separable patterns
PERCEPTRON
ACTIVATION FUNCTION
• Three of the most popular activations:
• Sigmoid function: σ(z) = 1 / (1 + e^(−z))
• Hyperbolic tangent function (tanh): tanh(z) = (e^z − e^(−z)) / (e^z + e^(−z))
• Rectified Linear Unit (ReLU): ReLU(z) = max(0, z)
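A short NumPy sketch of these three activations (function names are my own choosing):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes any real input into the range (-1, 1)
    return np.tanh(z)

def relu(z):
    # Passes positive inputs through, zeroes out negative inputs
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # approx. [0.119 0.5 0.881]
print(tanh(z))     # approx. [-0.964 0. 0.964]
print(relu(z))     # [0. 0. 2.]
```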


MULTI-LAYER PERCEPTRON
MULTI-LAYER PERCEPTRON
• The Multilayer Perceptron is a neural network where the mapping
between inputs and outputs is non-linear.

• A Multilayer Perceptron has input and output layers, and one or
more hidden layers with many neurons stacked together.
MULTI-LAYER PERCEPTRON
MULTI-LAYER PERCEPTRON
• The Multilayer Perceptron is classified as a feedforward algorithm because,
like the Perceptron, it combines the inputs with initial weights in a weighted
sum and then applies an activation function.

• Nevertheless, if the algorithm only computed weighted sums within each
neuron, transmitted the results to the output layer, and terminated there,
it would be unable to learn the weights that effectively minimize the cost
function.

• This problem is solved by the backpropagation algorithm.


BACKPROPAGATION
BACKPROPAGATION

• Backpropagation is the learning mechanism that allows the
Multilayer Perceptron to iteratively adjust the weights in the
network, with the goal of minimizing the cost function.
ALGORITHM OF ARTIFICIAL NEURAL NETWORK
ALGORITHM OF ARTIFICIAL NEURAL NETWORK

• Receive a new observation x and target y

• Feed Forward: for each unit j in each layer 1 … L,
compute a_j = f(Σ_i W_{j,i} a_i + b_j) based on the units a_i of the previous layer

• Get the prediction ŷ and the error (y − ŷ)

• Back-propagate the error: for each unit j in each layer L … 1
• Calculate the error signal: δ_j = f′(z_j) Σ_k W_{k,j} δ_k, starting from δ = −(y − ŷ) f′(z) at the output
• Update the weights: W_{j,i} ← W_{j,i} − η δ_j a_i, with learning rate η
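As a concrete sketch, the following NumPy code performs one feedforward and backpropagation step for a network with one hidden layer, using the linear activations and squared-error loss of the example that follows; the function name, the vectorized layout, and the learning rate value are assumptions of this sketch rather than details from the slides:

```python
import numpy as np

def train_step(x, y, W1, b1, W2, b2, lr=0.1):
    """One feedforward + backpropagation step for a 2-2-1 MLP with
    linear activations and squared-error loss L = 0.5 * (y - o)**2."""
    # Feed forward: with linear activations, each unit's output is its weighted sum
    h = W1 @ x + b1                      # hidden outputs (H1_out, H2_out)
    o = W2 @ h + b2                      # network output O_out
    loss = 0.5 * (y - o) ** 2

    # Error signal at the output: dL/do (f'(z) = 1 for linear activations)
    delta_o = -(y - o)
    # Error signals at the hidden units, back-propagated through W2
    delta_h = W2.T @ delta_o

    # Gradient-descent updates: W <- W - lr * (error signal) * (unit input)
    W2 -= lr * np.outer(delta_o, h)
    b2 -= lr * delta_o
    W1 -= lr * np.outer(delta_h, x)
    b1 -= lr * delta_h
    return loss.item()

# Hypothetical usage: the loss should shrink over repeated steps
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)
x, y = np.array([0.5, 0.6]), np.array([1.2])
for _ in range(5):
    print(train_step(x, y, W1, b1, W2, b2))
```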
EXAMPLE

• A multilayer perceptron network has the following configuration:
• Input Layer: 2 neurons (x1, x2)
• Hidden Layer: 2 neurons (H1, H2) with a linear activation function
• Output Layer: 1 neuron (O) with a linear activation function

• The initial weights and biases are given as follows:
• Weights into the hidden layer: W1 = [0.5, −0.6], W2 = [0.4, 0.1]
• Hidden-layer biases: b1 = 0.1, b2 = 0.2
• Weights from the hidden layer to the output: W3 = [0.3, −0.5]
• Bias for the output layer: b3 = 0.05
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2


• We draw the Multi-layer perceptron:

[Network diagram: inputs X1 and X2 feed hidden units H1 and H2, which feed the output O.
Weights: W_{1,1} = 0.5, W_{1,2} = −0.6, W_{2,1} = 0.4, W_{2,2} = 0.1, W_{1,O} = 0.3, W_{2,O} = −0.5.
Biases: b_1 = 0.1, b_2 = 0.2, b_3 = 0.05.]
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Forward Pass: Input to Hidden Layer:

H1_in = W_{1,1}·x1 + W_{1,2}·x2 + b_1
H2_in = W_{2,1}·x1 + W_{2,2}·x2 + b_2

Since the activation function is linear, the output is the same as the input:
H1_out = H1_in and H2_out = H2_in.
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Forward Pass: Hidden Layer to Output Layer:

O_in = W_{1,O}·H1_out + W_{2,O}·H2_out + b_3

Again, the activation function is linear, so O_out = O_in.
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Backward Propagation: Output Layer to Hidden Layer
Calculate the error. If the error is 0, we stop; otherwise, we update the
weights using back-propagation:

Loss = ½ (y − ŷ)² = ½ (y − O_out)² = ½ (1.2 − 0.153)² = 0.548

After obtaining the loss, we search for the gradient of the loss with
respect to each weight and bias.
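By the chain rule, the error signal at the output and the gradients of the loss are as follows (a standard derivation, written out here with the slide's loss values):

δ_O = ∂Loss/∂O_out = −(y − O_out) = −(1.2 − 0.153) = −1.047

∂Loss/∂W_{1,O} = δ_O · H1_out,  ∂Loss/∂W_{2,O} = δ_O · H2_out,  ∂Loss/∂b_3 = δ_O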
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Backward Propagation: Weight Update
The new values are obtained by gradient descent with learning rate η:

W_{1,O} ← W_{1,O} − η · δ_O · H1_out
W_{2,O} ← W_{2,O} − η · δ_O · H2_out
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Backward Propagation: Bias Update

b_3 ← b_3 − η · δ_O
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Backward Propagation: Hidden Layer to Input
• Hidden Layer: with linear activations, each hidden unit’s error signal is its
outgoing weight times the output error signal:

For the H1 neuron: δ_H1 = δ_O · W_{1,O} = −1.047 · 0.3 = −0.3141

For the H2 neuron: δ_H2 = δ_O · W_{2,O} = −1.047 · (−0.5) = 0.5235
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Backward Propagation: Weight Update
The new values are based on the gradients ∂Loss/∂W_{j,i} = δ_Hj · xi:

W_{1,1} ← W_{1,1} − η · δ_H1 · x1
W_{1,2} ← W_{1,2} − η · δ_H1 · x2
W_{2,1} ← W_{2,1} − η · δ_H2 · x1
W_{2,2} ← W_{2,2} − η · δ_H2 · x2
EXAMPLE

• We have data: x1 = 0.5, x2 = 0.6, y = 1.2

• Backward Propagation: Bias Update

b_1 ← b_1 − η · δ_H1
b_2 ← b_2 − η · δ_H2
EXERCISE
EXERCISE

• A multilayer perceptron network has the following configuration:
• Input Layer: 2 neurons (x1, x2)
• Hidden Layer: 2 neurons (H1, H2) with a linear activation function
• Output Layer: 1 neuron (O) with a linear activation function

• The initial weights and biases are given as follows:
• Weights into the hidden layer: W1 = [0.5, −0.6], W2 = [0.4, 0.1]
• Hidden-layer biases: b1 = 0.1, b2 = 0.2
• Weights from the hidden layer to the output: W3 = [0.3, −0.5]
• Bias for the output layer: b3 = 0.05
EXERCISE

• We have data: x1 = 0.2, x2 = 0.3, y = 0.7

a. Calculate the output value with forward propagation

b. Calculate the backward propagation and update the parameter values

c. Give a conclusion after recalculating the forward propagation with the
updated parameter values
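A sketch for checking your answers numerically, assuming (as in the earlier sketch) that row i of W1 holds the weights into Hi; both that layout and the learning rate η = 0.1 are assumptions, not details given on the slides:

```python
import numpy as np

# Given parameters, arranged under the assumed layout: row i of W1 = weights into Hi
W1 = np.array([[0.5, -0.6],   # assumed W_{1,1}, W_{1,2} (weights into H1)
               [0.4,  0.1]])  # assumed W_{2,1}, W_{2,2} (weights into H2)
b1 = np.array([0.1, 0.2])
W2 = np.array([[0.3, -0.5]])  # weights from H1, H2 into O
b2 = np.array([0.05])

x, y = np.array([0.2, 0.3]), np.array([0.7])

# a. Forward propagation (linear activations: output = weighted sum)
h = W1 @ x + b1
o = W2 @ h + b2
print("output:", o)

# b. Backward propagation with an assumed learning rate
lr = 0.1
delta_o = -(y - o)
delta_h = W2.T @ delta_o
W2 -= lr * np.outer(delta_o, h); b2 -= lr * delta_o
W1 -= lr * np.outer(delta_h, x); b1 -= lr * delta_h

# c. Forward propagation again; the output should move toward y
print("new output:", W2 @ (W1 @ x + b1) + b2)
```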
REFERENCES
REFERENCES
• Alrabeiah, M. (2022). Neural Networks: From the Ground Up. https://faculty.ksu.edu.sa/sites/default/files/L3_1.pdf

• Sena, S. (2017, November 3). Pengenalan Deep Learning Part 3: Backpropagation Algorithm [Introduction to Deep Learning Part 3: Backpropagation Algorithm]. Medium. https://medium.com/@samuelsena/pengenalan-deep-learning-part-3-backpropagation-algorithm-720be9a5fbb8

• Nielsen, M. A. (2015). Neural Networks and Deep Learning. Determination Press.

• Lavrenko, V. (n.d.). Backpropagation: How it works [Video]. YouTube. https://www.youtube.com/watch?v=An5z8lR8asY
