Intro NN - 2023
NEURAL NETWORK
GENETIC ALGORITHM
DATA
[Figure: two plots of sin(x) over 0-180 degrees, panels (a) and (b)]
Training results using a Back Propagation Neural Network (BPNN): (a) a poor result, (b) a good result
Validation
An RBFNN uses Radial Basis Functions (RBFs) such as the Gaussian, thin plate spline, multiquadric, etc., as the activation functions in its hidden layer.
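As a sketch of the Gaussian option named above (the center c and width sigma below are illustrative choices, not values from these slides):

```python
import numpy as np

def gaussian_rbf(x, c, sigma):
    # Gaussian radial basis: response peaks at the center c
    # and decays with the Euclidean distance from it
    return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

x = np.array([1.0, 2.0])
c = np.array([1.0, 2.0])          # hidden unit centered at the input
print(gaussian_rbf(x, c, sigma=1.0))        # -> 1.0 (maximal response)
print(gaussian_rbf(x + 3.0, c, sigma=1.0))  # far from the center -> near 0
```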
Implementation of RBFNN:
Clustering
Type of Neural Network:
Convolutional Neural Network (CNN)
An Illustration of Training Data
CNN for Detecting Letters of the Alphabet
RBF vs MLP
Biological neuron
1.2 Biological neural
networks
Cortical neurons (nerve cells) growing in culture
Human neurons derived from iPSC stem cells
This complex network forms the nervous system, which relays information through the body.
Interaction of neurons
▪ Chemical synapse:
pre-synaptic electric signal → chemical neurotransmitter → post-synaptic electrical signal
Human brain
Human activity is regulated by
a nervous system:
▪ Central nervous system
▪ Brain
▪ Spinal cord
▪ Peripheral nervous system
Structural organization of
brain
Molecules & Ions ................ transmitters
Where
p = scalar input
w = scalar weight
f = transfer/activation function
a = output
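A minimal sketch of this single-neuron model, a = f(w·p); the bias term b and the hardlim choice of f below are illustrative assumptions, not part of the definitions above:

```python
def hardlim(n):
    # hard-limit transfer function: 1 if n >= 0, else 0
    return 1.0 if n >= 0 else 0.0

def neuron(p, w, f, b=0.0):
    # scalar input p, scalar weight w, transfer function f
    return f(w * p + b)

print(neuron(p=2.0, w=1.5, f=hardlim))   # -> 1.0
print(neuron(p=2.0, w=-1.5, f=hardlim))  # -> 0.0
```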
How Does It Work?
Transfer/activation Function
Hardlim and Hardlims
n = -5:0.1:5;
plot(n,hardlim(n),'ro:');
n = -5:0.1:5;
plot(n,hardlims(n),'ro:');
Linear
n = -5:0.1:5;
plot(n,satlins(n),'ro:');
Sigmoid
n = -5:0.1:5;
plot(n,logsig(n),'ro:');
n = -5:0.1:5;
plot(n,tansig(n),'ro:');
Linear and Competitive
n = -5:0.1:5;
plot(n,poslin(n),'ro:');
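For readers without MATLAB, the transfer functions plotted above can be sketched in Python/NumPy; the names mirror the MATLAB toolbox functions and the formulas are the standard definitions:

```python
import numpy as np

def hardlim(n):  return np.where(n >= 0, 1.0, 0.0)   # step in {0, 1}
def hardlims(n): return np.where(n >= 0, 1.0, -1.0)  # step in {-1, 1}
def satlins(n):  return np.clip(n, -1.0, 1.0)        # symmetric saturating linear
def poslin(n):   return np.maximum(n, 0.0)           # positive linear (ReLU)
def logsig(n):   return 1.0 / (1.0 + np.exp(-n))     # logistic sigmoid
def tansig(n):   return np.tanh(n)                   # equals 2/(1+exp(-2n)) - 1

n = np.arange(-5, 5.1, 0.1)   # same grid as the MATLAB demos above
# e.g. with matplotlib: plt.plot(n, logsig(n), 'ro:')
```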
3. Recurrent networks
• Contains at least one feedback loop
• Powerful temporal learning capabilities
Single-layer feedforward
networks
Multi-layer feedforward
networks (1/2)
Multi-layer feedforward
networks (2/2)
▪ Data flow strictly feedforward: input → output
▪ No feedback → Static network, easy learning
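A sketch of one such static forward pass through a two-layer network; all weights, biases, and the logsig activation below are illustrative assumptions:

```python
import numpy as np

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

# illustrative 2-input -> 3-hidden -> 1-output network
W1 = np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]])
b1 = np.array([0.0, 0.1, -0.1])
W2 = np.array([[0.7, -0.5, 0.2]])
b2 = np.array([0.05])

x = np.array([1.0, -1.0])
h = logsig(W1 @ x + b1)   # hidden layer: data flows strictly forward
y = logsig(W2 @ h + b2)   # output layer: no feedback loop anywhere
print(y.shape)            # (1,)
```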
Recurrent networks (1/2)
▪ Learning process
1. Neural network is stimulated by an environment
2. Neural network changes its free parameters as a result of this stimulation
3. Neural network responds in a new way to the environment because of its
changed internal structure
▪ Learning algorithm
Prescribed set of defined rules for the solution of a learning problem
1. Error correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning
Error-correction learning
(1/2)
[Block diagram: input x(t) → neural network → output y(t); the desired response d(t) is compared with y(t) to form the error e(t)]
1. Neural network is driven by input x(t) and responds with output y(t)
2. Output is compared with the desired (target) response d(t) to form the error signal
e(t) = d(t) − y(t)
Error-correction learning
(2/2)
▪ Error signal → control mechanism to correct synaptic weights
▪ Corrective adjustments → designed to make network output y(t) closer to
target d(t)
▪ Learning achieved by minimizing instantaneous error energy
E(t) = ½ e²(t)
▪ Delta learning rule (Widrow-Hoff rule)
▪ Adjustment to a synaptic weight of a neuron is proportional to the product of the error signal and the input
signal of the synapse
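The rule above can be sketched as a repeated weight update on a linear neuron; the learning rate eta is an illustrative choice:

```python
def delta_rule_step(w, x, d, eta=0.1):
    # linear neuron output, error signal, and Widrow-Hoff adjustment:
    # delta_w_i = eta * e * x_i  (error times the synapse's input)
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    w = [wi + eta * e * xi for wi, xi in zip(w, x)]
    return w, e

w = [0.0, 0.0]
for _ in range(50):
    w, e = delta_rule_step(w, x=[1.0, 0.5], d=1.0)
print(abs(e) < 0.01)  # error energy shrinks toward zero -> True
```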
▪ All (or most) past experiences are stored in a memory of input-output pairs
(inputs and target classes)
{(xᵢ, yᵢ)}, i = 1, …, N
▪ Learning algorithm
▪ Prescribed set of defined rules for the solution of a learning problem
1. Error correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning
▪ Learning paradigm
▪ Manner in which a neural network relates to its environment
1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning
Supervised learning
[Block diagram: the Environment supplies labeled training data; the target response (= optimal action) enters a summing junction Σ with a + sign and the learning system's output with a − sign, producing an error signal fed back to the learning system]
▪ Learning algorithms
▪ Error-correction learning
▪ Memory-based learning
Unsupervised learning
[Block diagram: Environment → Learning system, with no external teacher]
▪ Learning algorithms
▪ Hebbian learning
▪ Competitive learning
Reinforcement learning
▪ Goal is to minimize the expectation of the cumulative cost of actions taken over a sequence of steps
[Block diagram: Environment → primary reinforcement → Critic → heuristic reinforcement → Learning system → actions → Environment]
▪ RL is realized through two neural networks: Critic and Learning system
▪ Critic network converts the primary reinforcement signal (obtained directly from the environment) into a higher-quality heuristic reinforcement signal, which solves the temporal credit assignment problem
2.6 Learning tasks (1/7)
1. Pattern Association
▪ Associative memory is brain-like distributed memory that learns by association
▪ Autoassociation
• Neural network stores a set of patterns by repeatedly presenting them to the
network
• Then, when presented with a distorted pattern, the neural network can recall the original
pattern
• Unsupervised learning algorithms
▪ Heteroassociation
• Set of input patterns is paired with an arbitrary set of output patterns
• Supervised learning algorithms
Learning tasks (2/7)
2. Pattern Recognition
▪ In pattern recognition, input signals are assigned to categories (classes)
• Support-Vector-Machine example
Learning tasks (3/7)
3. Function Approximation
▪ Arbitrary nonlinear input-output mapping
y = f(x)
can be approximated by a neural network, given a set of labeled examples
{xi, yi}, i=1,..,N
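As a sketch of learning such a mapping from labeled pairs, a simple linear model y ≈ a·x + b stands in for a full network here; the data below are made up to follow f(x) = 2x + 1:

```python
# error-correction fit of y = a*x + b to labeled examples {xi, yi}
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]     # samples of the "unknown" mapping f(x) = 2x + 1

a, b, eta = 0.0, 0.0, 0.05
for _ in range(2000):
    for x, y in zip(xs, ys):
        e = y - (a * x + b)   # per-example error
        a += eta * e * x      # delta-rule style updates
        b += eta * e
print(round(a, 2), round(b, 2))  # approaches 2.0 and 1.0
```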
• System identification
[Block diagram: the same input from the environment drives both the unknown system and the neural network; the unknown system's response (+) and the network's output (−) meet at a summing junction Σ, and the resulting error signal adjusts the network]
Learning tasks (5/7)
4. Control
• Neural networks can be used to control a plant (a process)
• Brain is the best example of a parallel distributed generalized controller
• Operates thousands of actuators (muscles)
• Can handle nonlinearity and noise
• Can handle invariances
• Can optimize over a long-range planning horizon
1. Filtering
• Extraction of information at discrete time n by using measured data up to and including
time n
• Examples: Cocktail party problem, Blind source separation
2. Smoothing
• Differs from filtering in:
a) Data need not be available at time n
b) Data measured later than n can be used to obtain this information
3. Prediction
• Deriving information about the quantity in the future at time n+h, h>0, by using data
measured up to including n
• Example: Forecasting of energy consumption, stock market prediction
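A minimal sketch of prediction with h = 1: extrapolate the next value from the last two measured samples (the series below is made-up illustrative data, not a real forecasting model):

```python
def predict_next(series):
    # predict x[n+1] from data up to and including n
    # via linear extrapolation: x[n+1] ~= x[n] + (x[n] - x[n-1])
    return series[-1] + (series[-1] - series[-2])

consumption = [10.0, 12.0, 14.0, 16.0]   # illustrative "energy consumption" data
print(predict_next(consumption))         # -> 18.0
```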
Learning tasks (7/7)
6. Beamforming
▪ Spatial form of filtering, used to distinguish between the spatial properties of a
target signal and background noise
Illustrative Example: Three-Inputs Case
Let’s say that an orange with an elliptical shape is passed through the sensors.
The input vector would then be
In fact, any input vector that is closer to the orange prototype vector than to the
apple prototype vector (in Euclidean distance) will be classified as an orange
(and vice versa).
Type of Neural Network - Recurrent
Network: Hopfield Network (1/4)
Example
For a case where
P = [−1, −1, −1]ᵀ
a(0) = P = [−1, −1, −1]ᵀ
Type of Neural Network - Recurrent
Network: Hopfield Network (3/4)
a(1) = satlins([0.7, −1.2, −1.1]ᵀ) = [0.7, −1, −1]ᵀ
Type of Neural Network - Recurrent
Network: Hopfield Network (4/4)
Convergence, stop!
Toolbox: nnd3hopc
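The recurrence a(t+1) = satlins(W·a(t) + b) can be sketched in Python. The W and b values below are taken from Assignment #3 at the end of these slides; they appear to be the ones behind the worked example, since they reproduce the a(1) = satlins([0.7, −1.2, −1.1]ᵀ) step shown above:

```python
import numpy as np

def satlins(n):
    # symmetric saturating linear activation: clip to [-1, 1]
    return np.clip(n, -1.0, 1.0)

W = np.array([[0.2, 0.0, 0.0],
              [0.0, 1.2, 0.0],
              [0.0, 0.0, 0.2]])
b = np.array([0.9, 0.0, -0.9])
a = np.array([-1.0, -1.0, -1.0])      # a(0) = P

for _ in range(4):
    a_next = satlins(W @ a + b)       # a(t+1) = satlins(W a(t) + b)
    if np.array_equal(a_next, a):     # convergence, stop!
        break
    a = a_next
print(a)                              # -> [ 1. -1. -1.]
```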
Hopfield Application:
Letter Detection
1. Run hopfieldNetwork.m
2. Load an image, e.g. the letter A (max 10 letters, entered one at a time)
3. Train the network
4. Take a different image of the letter A (note: files are in bmp format)
5. Press Run
6. Run result
https://www.mathworks.com/matlabcentral/fileexchange/13728-hopfield-neural-
network
Assignment #3
1. Given
W = [[0.2, 0, 0], [0, 1.2, 0], [0, 0, 0.2]], p = [−1, −1, −1]ᵀ, and b = [0.9, 0, −0.9]ᵀ,
determine the converged output of the Hopfield network over 4 iterations, with the manual calculations included, using the activation functions satlins, logsig, and tansig.