Bee4333 Intelligent Control: Artificial Neural Network (ANN)
Chapter 4: Artificial Neural Network (ANN)
Class Activity (Pop Quiz)
• Searching for 3 Winners today
• Pop Quiz.
What do you understand about Artificial Neural Networks?
What are the similarities between these pictures? [two pictures shown side by side for comparison]
Contents
• 4.1 Basic Concept
• 4.2 ANN Applications
• 4.3 ANN Model
• 4.4 ANN Learning
• 4.5 Simple ANN
• 4.6 Multilayer Neural Networks &
Backpropagation Algorithm
Basic Concept
• ANN was born from the demand for machine learning: a computer that learns from experience, examples, and analogy.
• Simple concept: the computer attempts to model the human brain.
• Also known as parallel distributed processors.
• Why do we need an intelligent processor or computer to replace current technology?
– To decide intelligently and interact accordingly.
Human Brain: biological NN
[Figure: biological neural network, with input signals entering and output signals leaving the neurons]
ANN Architecture
Learning
• Synapses have their own weights, which express the importance of each input.
• The output of a neuron might be the final solution or the input to other neurons.
• An ANN learns through iterated adjustment of the synaptic weights.
• Weights are adjusted so that the network's input/output behaviour matches its environment.
• Each neuron computes its activation level from the numerically weighted inputs, as sketched below.
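A minimal sketch of a single neuron in code (illustrative only; the function name, weights, and the step-style activation are my assumptions, not from the slides):

```python
# Minimal single-neuron sketch: weighted inputs, then a threshold activation.
# All names and values here are illustrative assumptions.
def neuron_output(inputs, weights, threshold):
    """Activation = 1 if the weighted sum of inputs reaches the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights)) - threshold
    return 1 if net >= 0 else 0

# Two inputs whose synaptic weights express their importance.
print(neuron_output([1, 0], [0.3, -0.1], threshold=0.2))  # -> 1
```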
How to design ANN?
Decide how many neurons to use.
How are the connections between neurons constructed? How many layers are needed?
Which learning algorithm should be applied?
Train the ANN by initialising the weights and updating them from training sets.
ANN characteristics
Advantages:
• A neural network can perform tasks that a linear program cannot.
Disadvantages:
• The neural network needs training before it can operate.
[Figure: feed-forward network — inputs x1, x2, x3 connected through weights w1, w2, w3 to outputs y1, y2, y3]
http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Feed-Forward_Networks
Radial Basis Function Networks
❑ Each hidden-layer neuron represents a basis function of the output space, with respect to a particular centre in the input space.
❑ The activation function chosen is commonly a Gaussian kernel (a sketch follows).
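A commonly used form of this kernel is φ(x) = exp(−‖x − c‖² / (2σ²)), where c is the centre and σ the width; the formula and the names below are my assumptions, not from the slide:

```python
import math

# Gaussian kernel sketch: phi(x) = exp(-||x - c||^2 / (2 * sigma^2)).
# Function and variable names are illustrative assumptions.
def gaussian_rbf(x, center, sigma):
    """Activation is largest when the input x lies at the neuron's center."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2 * sigma ** 2))

print(gaussian_rbf([1.0, 2.0], center=[1.0, 2.0], sigma=0.5))  # -> 1.0
```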
http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Recurrent_Networks
Echo State Networks
Recurrent networks where the hidden-layer neurons are not completely connected to all input neurons.
Known as sparsely connected networks.
Only the weights from the hidden layer to the output layer may be altered during training.
Echo state networks are useful for matching and reproducing specific input patterns.
Because the only tap weights modified during training are the output-layer tap weights, training is typically quick and computationally efficient in comparison to other multi-layer networks that are not sparsely connected.
http://www.scholarpedia.org/article/Echo_state_network
Hopfield Networks
Competitive Networks
[Figure: common activation functions — step function and linear function, each plotted on X–Y axes with outputs between −1 and +1]
4.5
SIMPLE ANN
Simple ANN: A Perceptron
• A perceptron is used to classify inputs into two classes, e.g. class A1 or A2.
• A linearly separable function is used to divide the n-dimensional space; for two inputs: x1·w1 + x2·w2 − θ = 0.
• With 2 inputs we get the characteristic shown in the figure: a line dividing the (x1, x2) plane into class 1 and class 2. θ is used to shift the boundary.
• Three-dimensional states can also be viewed.
[Figure: x1–x2 plane divided by a line into class 1 and class 2 regions]
Simple Perceptron
[Figure: perceptron structure — boolean inputs x1 and x2, with weights w1 and w2, feed a linear combiner Σ; a hard limiter with threshold θ (bias) produces the output]
Learning: Classification
• Learning is done by adjusting the actual output Y to meet the desired output Yd.
• Usually, the initial weights are set between −0.5 and 0.5. At iteration k of the training example, we have the error e as
  e(k) = Yd(k) − Y(k)
Example: training a perceptron to perform the logical AND operation.
Threshold θ = 0.2, learning rate α = 0.1.

Epoch | Inputs x1 x2 | Desired yd | Initial w1 w2 | Actual Y | Error e | Final w1 w2
1     | 0  0         | 0          | 0.3  −0.1     | 0        | 0       | 0.3  −0.1
1     | 0  1         | 0          | 0.3  −0.1     | 0        | 0       | 0.3  −0.1
1     | 1  0         | 0          | 0.3  −0.1     | 1        | −1      | 0.2  −0.1
1     | 1  1         | 1          | 0.2  −0.1     | 0        | 1       | 0.3   0.0
https://www.youtube.com/watch?v=zmIzNBMsQYQ
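A short sketch that reproduces the epoch-1 rows above with the perceptron learning rule w_i ← w_i + α·x_i·e (the loop structure and names are mine; θ = 0.2, α = 0.1, and the initial weights 0.3 and −0.1 come from the slide):

```python
# Reproduce epoch 1 of the AND-operation training table above.
theta, alpha = 0.2, 0.1          # threshold and learning rate from the slide
w = [0.3, -0.1]                  # initial weights from the slide
train = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for x, yd in train:
    y = 1 if sum(xi * wi for xi, wi in zip(x, w)) - theta >= 0 else 0
    e = yd - y                                   # error = desired - actual
    w = [wi + alpha * xi * e for xi, wi in zip(x, w)]
    print(x, yd, y, e, [round(wi, 2) for wi in w])
```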
4.6
Multilayer Neural Networks &
Backpropagation Algorithm
Are you ready?
• Class activity today!
• Searching for 3 winners today.
• Get ready with a calculator.
Videos
https://www.youtube.com/watch?v=WZDMNM36PsM
https://www.youtube.com/watch?v=Ilg3gGewQ5U
Multilayer neural networks
A multilayer NN is a feedforward neural network with one or more hidden layers.
The model consists of an input layer, middle or hidden layers, and an output layer.
Why is the hidden layer important?
The input layer only receives input signals.
The output layer only displays the output patterns.
[Figure: multilayer network — inputs x1, x2, x3 feed the 1st and 2nd hidden layers, then the output layer]
Multilayer Neural Network Learning
[Figure: three-layer network with n inputs (index i), m hidden neurons (index j), and l output neurons (index k); input signals propagate forward while error signals propagate backward — back propagation]
Case Study
Black pixel: 1
White pixel: 0
Network
Training Process
Set up the weights, all in the range −1 < weight < 1.
Apply an input pattern → calculate the output (FORWARD PASS).
The calculated output will differ from the TARGET.
The difference between the CALCULATED OUTPUT and the TARGET is the error.
The error is used to update the weights.
Steps of the Back Propagation Method
Steps of the Back Propagation Method (ctd)
Training Process
1) Find the neuron output in the hidden layer:
   out_A = Σ_i (input_i × w_iA), summing over the inputs (and likewise out_B, out_C).
4) Update the weights into neuron A:  w⁺_iA = w_iA + η·δ_A·input_i  (i = 1, …, 4)
5) Update the weights into neuron B:  w⁺_iB = w_iB + η·δ_B·input_i  (i = 1, …, 4)
6) Update the weights into neuron C:  w⁺_iC = w_iC + η·δ_C·input_i  (i = 1, …, 4)
η is the learning rate, usually set to 1.
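A sketch of step 1) and steps 4)–6) in code (the input pattern, weights, and δ values below are placeholders of my own, not from the case study):

```python
eta = 1.0  # learning rate, as on the slide

# Placeholder pattern, weights and deltas -- illustrative values only.
inputs = [1, 0, 1, 0]
weights = {"A": [0.1, -0.2, 0.3, 0.4],
           "B": [0.2, 0.1, -0.3, 0.0],
           "C": [-0.1, 0.4, 0.2, 0.3]}
deltas = {"A": 0.05, "B": -0.02, "C": 0.01}

# Step 1): hidden-layer outputs out_N = sum_i input_i * w_iN
outs = {n: sum(x * w for x, w in zip(inputs, ws)) for n, ws in weights.items()}

# Steps 4)-6): w_iN+ = w_iN + eta * delta_N * input_i, for every i and N
for n, ws in weights.items():
    weights[n] = [w + eta * deltas[n] * x for w, x in zip(ws, inputs)]
```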
When does the training process stop?
Slow training
XOR PROBLEM
In this example, θj and θk are not available!
Net_j = Σ (i = 1..n) x_i(p) × w_ij(p) − θj
Sigmoid activation function: y = 1 / (1 + e^(−Net))
∆Wji(t+1) = 0.1 × (−0.0035) × 0 + (0.9 × 0) = 0
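This has the shape of the generalized delta rule with a momentum term, ∆W(t+1) = α·δ·x + β·∆W(t); a sketch assuming β = 0.9 is the momentum constant implied by the 0.9 above:

```python
# Generalized delta rule with momentum (beta = 0.9 is an assumption).
def delta_w(alpha, delta, x, beta, dw_prev):
    return alpha * delta * x + beta * dw_prev

print(delta_w(0.1, -0.0035, 0, 0.9, 0))  # -> 0.0, matching the slide
```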
[Figure: XOR example network — inputs 0 and 1, hidden neuron j = 1 with θj = 0.1, output neuron k = 1 with θk = 0.2, W11 = −0.4, W13 = 0.3]
Question 1
B. 0.60
C. 0.50
D. 0.30
Answer
(i) Calculation of L_j
y_j(p) = sigmoid( Σ (i = 1..n) x_i(p) × w_ij(p) − θj )
y_j(p) = sigmoid(0)
y_j(p) = L_j = 1 / (1 + e^(−0)) = 0.50
Question 2
B. 0.66
C. 0.56
D. 0.36
• Output layer:
  y_k(p) = sigmoid( Σ (i = 1..n) x_i(p) × w_ik(p) − θk )
  y_k(p) = sigmoid( x1 × W11 − θk )
         = sigmoid( 0.50 × 0.1 − 0.2 )
         = sigmoid( −0.15 )
  y_k(p) = L_k = 0.46
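Both forward-pass results above can be checked with a few lines (a verification sketch only; the names are mine):

```python
import math

def sigmoid(net):
    return 1 / (1 + math.exp(-net))

L_j = sigmoid(0)                 # hidden neuron: net input is 0
L_k = sigmoid(L_j * 0.1 - 0.2)   # output neuron: weight 0.1, theta_k = 0.2
print(round(L_j, 2), round(L_k, 2))  # -> 0.5 0.46
```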
Question 3
B. -0.66
C. -0.56
D. -0.46
Question 4
B. -0.2143
C. -0.1143
D. -0.5143
Question 5
B. -0.0229
C. -0.0429
D. -0.0529
Question 6
B. 0.0871
C. 0.0971
D. 0.0671
• Calculation of ∆w11 and w11(new) between the output and hidden layers:
  δk = L_k (1 − L_k)(t_k − L_k)
  δk = 0.46 (1 − 0.46)(0 − 0.46) = −0.1143
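A quick check of this δk, plus the corresponding ∆w11 under the usual output-layer update ∆w = α·L_j·δk (the α = 0.1 and the update form are my assumptions here, not stated on this slide):

```python
L_k, t_k = 0.46, 0
delta_k = L_k * (1 - L_k) * (t_k - L_k)
print(round(delta_k, 4))              # -> -0.1143

# Assumed output-layer update: delta_w = alpha * L_j * delta_k
print(round(0.1 * 0.5 * delta_k, 4))  # -> -0.0057
```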
Question 7
B. -0.0077
C. -0.0067
D. -0.0029
Question 8
A. 0, -0.5
B. 0, -0.4
C. 0, -0.6
D. 0, -0.7
Question 9
A. 0.00116, -0.2012
B. 0.00328, -0.3023
C. 0.00428, -0.4023
D. 0.00528, -0.5023
Question 10
A. 0.00228, -0.2023
B. 0.00328, -0.3023
C. 0.00428, -0.4023
D. 0.00116, 0.2988
δj = L_j (1 − L_j) Σ_k (δk × w_kj)
δj = 0.5 (1 − 0.5)(−0.1143 × 0.2) = −0.0057
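And the matching check for δj (a verification sketch of the slide's arithmetic):

```python
L_j, delta_k, w_kj = 0.5, -0.1143, 0.2
delta_j = L_j * (1 - L_j) * (delta_k * w_kj)
print(round(delta_j, 4))  # -> -0.0057
```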
[Figure: updated XOR network — sigmoid neurons j = 1 and k = 1, W11 = −0.4, W13 updated to 0.2977, output L_k]
Wednesday, 18/5
10am
30 minutes
KALAM Online
Assignment 2 (Individual)
Check your Assignment 1 mark in ecomm.
The Assignment 2 question will be uploaded in KALAM.
Practise the tutorial for ANN.
Due: Sunday, 29 May 2022.