Intro NN - 2023

The document discusses various types of neural networks including single-layer feedforward networks, multi-layer feedforward networks, and recurrent networks. It also covers different learning algorithms used in neural networks such as error-correction learning, memory-based learning, Hebbian learning, and competitive learning.

Uploaded by Abid Fadlullah

Neural Network

Review: How Machine Learning Works

Review: Container Gantry Crane

NN - GA

NEURAL NETWORK: modelling input-output; data: input and output data
GENETIC ALGORITHM: optimization; data: a mathematical function
WorkFlow for Neural Network

Training, Testing, Validation

Training

[Figure: two sin(x) plots over 0 to 180 degrees, panels (a) and (b)]

Training results using a Back Propagation Neural Network (BPNN): (a) a poor
result, (b) a good result
Validation

• Validation serves to prevent underfitting and overfitting, so that the NN
results can be used for prediction with the smallest possible error
• During the process, validation can stop the training

The role of validation
Testing

This process has no effect on training; it provides an independent measure of
network performance during and after training
Implementation of NN

Comparison of sin(x) results: original vs. BPNN prediction
Artificial Intelligence
Machine Learning vs. Deep
Learning
Example of deep learning
Type of Neural Network: Radial Basis
Function Neural Network (RBFNN)

An RBFNN uses Radial Basis Functions (RBFs) such as the Gaussian, thin-plate
spline, multiquadric, etc., as the activation functions in its hidden layer.
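As a rough Python sketch of such a hidden layer (the input, center positions, width, and output weights below are invented purely for illustration):

```python
import numpy as np

def gaussian_rbf(x, center, width):
    """Gaussian radial basis function: response decays with distance from the center."""
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

# Hypothetical 2-D input and two hidden-layer centers (illustrative values only)
x = np.array([0.5, 0.5])
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
hidden = np.array([gaussian_rbf(x, c, width=1.0) for c in centers])

# The RBFNN output is a weighted sum of the hidden activations
weights = np.array([0.3, 0.7])   # illustrative output-layer weights
output = float(weights @ hidden)
```

In practice the centers are often found by clustering the training data, which is why the next slide pairs RBFNN implementation with clustering.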
Implementation of RBFNN:
Clustering
Type of Neural Network:
Convolutional Neural Network (CNN)
An Illustration Data For
Training
CNN for Detecting Alphabet
RBF vs MLP
Biological neuron

1.2 Biological neural
networks
Cortical neurons (nerve cells) growing in culture

Neurons have a large cell body with several long


processes extending from it, usually one thick
axon and several thinner dendrites

Dendrites receive information from other neurons

Axon carries nerve impulses away from the


neuron. Its branching ends make contact with
other neurons and with muscles or glands

Human neurons derived from iPSCs (induced pluripotent stem cells)

This complex network forms the nervous system, which relays information
through the body

Interaction of neurons

▪ Action potentials arriving at


the synapses stimulate
currents in its dendrites

▪ These currents depolarize


the membrane at its axon,
provoking an action potential

▪ Action potential propagates


down the axon to its
synaptic knobs, releasing
neurotransmitters and
stimulating the post-synaptic
neuron (lower left)
Synapses
▪ Elementary structural and functional units that mediate the interaction between neurons

▪ Chemical synapse:
pre-synaptic electric signal → chemical neurotransmitter → post-synaptic electrical signal

Human brain
Human activity is regulated by
a nervous system:
▪ Central nervous system
▪ Brain
▪ Spinal cord
▪ Peripheral nervous system

≈ 10^10 neurons in the brain
≈ 10^4 synapses per neuron
≈ 1 ms processing speed of a neuron

→ Slow rate of operation
→ Extreme number of processing units & interconnections
→ Massive parallelism

Structural organization of
brain
Molecules & Ions ................ transmitters

Synapses ............................ fundamental organization level

Neural microcircuits .......... assembly of synapses organized into patterns of


connectivity to produce desired functions
Dendritic trees .................... subunits of individual neurons

Neurons ............................... basic processing unit, size: 100 μm

Local circuits ....................... localized regions in the brain, size: 1 mm

Interregional circuits .......... pathways, topographic maps

Central nervous system ..... final level of complexity


© 2022 NEURAL NETWORKS
Introduction to Neural Networks
Neuralink – Elon Musk
A simple Neuron

Where
p = scalar input
w = scalar weight
f = transfer/activation function
a = output
How It Works?
Transfer/activation Function
Hardlim and Hardlims

n = -5:0.1:5;
plot(n,hardlim(n),'ro:');

n = -5:0.1:5;
plot(n,hardlims(n),'ro:');
Saturating Linear

n = -5:0.1:5;
plot(n,satlins(n),'ro:');
Sigmoid

n = -5:0.1:5;
plot(n,logsig(n),'ro:');

n = -5:0.1:5;
plot(n,tansig(n),'ro:');
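For readers working outside MATLAB, the transfer functions above can be mimicked in Python/NumPy. This is a sketch; the MATLAB toolbox names are reused for readability:

```python
import numpy as np

# Python counterparts of the MATLAB transfer functions used above
def hardlim(n):  return np.where(n >= 0, 1.0, 0.0)    # step: outputs {0, 1}
def hardlims(n): return np.where(n >= 0, 1.0, -1.0)   # symmetric step: {-1, 1}
def satlins(n):  return np.clip(n, -1.0, 1.0)         # symmetric saturating linear
def poslin(n):   return np.maximum(n, 0.0)            # positive linear (ReLU)
def logsig(n):   return 1.0 / (1.0 + np.exp(-n))      # log-sigmoid, range (0, 1)
def tansig(n):   return np.tanh(n)                    # tan-sigmoid, range (-1, 1)

n = np.arange(-5, 5.1, 0.1)   # same sample points as the MATLAB examples
# e.g. with matplotlib: plt.plot(n, logsig(n), 'ro:')
```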
Linear and Competitive

n = -5:0.1:5;
plot(n,poslin(n),'ro:');

n = [0; 1; -0.5; 0.5];


a = compet(n);
subplot(2,1,1), bar(n), ylabel('n')
subplot(2,1,2), bar(a), ylabel('a')
Neuron With Multiple Input
Running Example Program

Type nnd2n2 in MATLAB's command window
Problem Setup in NN
2.3 Network architectures
About network architectures
▪ Two or more of the neurons can be combined in a layer
▪ Neural network can contain one or more layers
▪ Strong link between network architecture and learning algorithm

1. Single-layer feedforward networks


• Input layer of source nodes projects onto an output layer of neurons
• Single-layer refers to the output layer (the only computation layer)

2. Multi-layer feedforward networks


• One or more hidden layers
• Can extract higher-order statistics

3. Recurrent networks
• Contains at least one feedback loop
• Powerful temporal learning capabilities
Single-layer feedforward
networks

Multi-layer feedforward
networks (1/2)
Multi-layer feedforward
networks (2/2)
▪ Data flow strictly feedforward: input → output
▪ No feedback → Static network, easy learning
Recurrent networks (1/2)

▪ Also called “Dynamic networks”


▪ Output depends on
▪ current input to the network (as in static networks)
▪ and also on current or previous inputs, outputs, or states of the network

▪ Simple recurrent network

Delay Feedback loop


Recurrent networks (2/2)

▪ Layered Recurrent Dynamic Network – example


2.4 Learning algorithms

▪ Important ability of neural networks


▪ To learn from its environment
▪ To improve its performance through learning

▪ Learning process
1. Neural network is stimulated by an environment
2. Neural network changes its free parameters as a result of this stimulation
3. Neural network responds in a new way to the environment because of its
changed internal structure

▪ Learning algorithm
Prescribed set of defined rules for the solution of a learning problem
1. Error correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning
Error-correction learning
(1/2)
[Block diagram: input x(t) drives the neural network, which responds with
output y(t); y(t) is compared with the target d(t) to form the error signal e(t)]

1. Neural network is driven by input x(t) and responds with output y(t)

2. Network output y(t) is compared with target output d(t)

Error signal = difference between target output and network output

e(t) = d(t) − y(t)
Error-correction learning
(2/2)
▪ Error signal → control mechanism to correct synaptic weights
▪ Corrective adjustments → designed to make network output y(t) closer to
target d(t)
▪ Learning achieved by minimizing instantaneous error energy
E(t) = (1/2) e²(t)
▪ Delta learning rule (Widrow-Hoff rule)
▪ Adjustment to a synaptic weight of a neuron is proportional to the product of the error signal and the input
signal of the synapse

Δw(t) = η e(t) x(t)


▪ Comments
▪ Error signal must be directly measurable

▪ Key parameter: Learning rate η


▪ Closed-loop feedback system → Stability determined by learning rate η
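The delta rule can be sketched in Python for a single linear neuron; the training data, target weights, and learning rate below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: learn target weights w_true from input-output pairs
w_true = np.array([2.0, -1.0])
X = rng.uniform(-1, 1, size=(200, 2))
d = X @ w_true                  # target outputs d(t)

w = np.zeros(2)                 # free parameters to be learned
eta = 0.1                       # learning rate (the key stability parameter)
for x, target in zip(X, d):
    y = w @ x                   # network output y(t)
    e = target - y              # error signal e(t) = d(t) - y(t)
    w += eta * e * x            # delta rule: Δw = η e x

# w is driven toward w_true as the error energy (1/2)e² is minimized
```

Note how a too-large η would make this closed-loop system unstable, matching the comment above.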
Memory-based learning

▪ All (or most) past experiences are stored in a memory of input-output pairs
(inputs and target classes)
{(xi, yi)}, i = 1, …, N

▪ Two essential ingredients of memory-based learning


1. Define the local neighborhood of a new input xnew
2. Apply learning rule to adapt stored examples in the local neighborhood of xnew

▪ Examples of memory-based learning


▪ Nearest neighbor rule
• Local neighborhood defined by the nearest training example (Euclidean distance)
▪ K-nearest neighbor classifier
• Local neighborhood defined by k-nearest training examples → robust against outliers
▪ Radial basis function network
• Selecting the centers of basis functions
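A minimal Python sketch of the (k-)nearest neighbor rule; the stored examples and labels are invented for illustration:

```python
import numpy as np

# Stored memory of input-output pairs {(x_i, y_i)}, i = 1..N (illustrative data)
X_stored = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_stored = np.array(['A', 'A', 'B', 'B'])

def knn_classify(x_new, k=3):
    """Classify x_new by majority vote among its k nearest stored examples."""
    dists = np.linalg.norm(X_stored - x_new, axis=1)   # Euclidean distances
    neighbors = y_stored[np.argsort(dists)[:k]]        # local neighborhood of x_new
    labels, counts = np.unique(neighbors, return_counts=True)
    return labels[np.argmax(counts)]

print(knn_classify(np.array([0.95, 0.9])))   # nearest neighbors are class 'B'
```

With k = 1 this is the plain nearest neighbor rule; larger k gives the robustness against outliers mentioned above.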
Hebbian learning
▪ The oldest and most famous learning rule (Hebb, 1949)
▪ Formulated as associative learning in a neurobiological context
“When an axon of a cell A is near enough to excite a cell B and repeatedly or
persistently takes part in firing it, some growth process or metabolic changes take place
in one or both cells such that A’s efficiency as one of the cells firing B, is increased.”
▪ Strong physiological evidence for Hebbian learning in the hippocampus,
important for long-term memory and spatial navigation

▪ Hebbian learning (Hebbian synapse)


▪ Time-dependent, highly local, and strongly interactive mechanism to increase
synaptic efficiency as a function of the correlation between the presynaptic and
postsynaptic activities.
1. If two neurons on either side of a synapse are activated simultaneously, then the
strength of that synapse is selectively increased
2. If two neurons on either side of a synapse are activated asynchronously, then that
synapse is selectively weakened or eliminated
▪ The simplest form of Hebbian learning
Δw(t) = η y(t) x(t)

(x: presynaptic signal, y: postsynaptic signal)
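The simplest Hebbian rule can be sketched in Python (weights, input pattern, and learning rate are illustrative):

```python
import numpy as np

def hebbian_update(w, x, eta=0.1):
    """Simplest Hebbian rule: Δw = η y x, with postsynaptic activity y = wᵀx."""
    y = w @ x                  # postsynaptic activity
    return w + eta * y * x    # correlated pre/post activity strengthens the weight

w = np.array([0.5, 0.1])
x = np.array([1.0, 0.0])       # repeatedly presented presynaptic pattern
for _ in range(5):
    w = hebbian_update(w, x)

# The weight along the repeatedly active input grows; the weight for the
# silent input is untouched. Note the plain rule is unstable: without a
# normalization (e.g. Oja's rule) the weights grow without bound.
```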
Competitive learning
Inputs
▪ Competitive learning network architecture
1. Set of inputs, connected to a layer of outputs
2. Each output neuron receives excitation from all inputs
3. Output neurons of a neural network compete to
become active by exchanging lateral inhibitory
connections
4. Only a single neuron is active at any time

▪ Competitive learning rule


▪ Neuron with the largest induced local field becomes a
winning neuron
▪ Winning neuron shifts its synaptic weights toward the input
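This winner-take-all scheme can be sketched in Python (initial weights, input, and learning rate are illustrative):

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """One competitive-learning step: only the winning neuron adapts."""
    winner = int(np.argmax(W @ x))       # largest induced local field wins
    W = W.copy()
    W[winner] += eta * (x - W[winner])   # winner shifts its weights toward x
    return W, winner

# Two output neurons competing over 2-D inputs (illustrative initial weights)
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = np.array([0.9, 0.1])
W, winner = competitive_step(W, x)
print(winner)   # neuron 0 wins and moves toward x
```

Repeated over many inputs, each neuron's weight vector drifts toward the center of one cluster of inputs, which is why competitive learning is used for clustering.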
2.5 Learning paradigms

▪ Learning algorithm
▪ Prescribed set of defined rules for the solution of a learning problem
1. Error correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning

▪ Learning paradigm
▪ Manner in which a neural network relates to its environment

1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning

Supervised learning

▪ Learning with labeled training data


▪ Labeled training data provide knowledge of the environment
▪ Data are represented by a set of input-output examples

[Diagram: the environment supplies labeled training data; the target response
(optimal action) and the learning system's output meet at a summing junction Σ,
which produces the error signal fed back to the learning system]
▪ Learning algorithms
▪ Error-correction learning
▪ Memory-based learning

Unsupervised learning

▪ Unsupervised or self-organized learning


▪ No external "teacher" to oversee the learning process
▪ Only a set of input examples is available, no output examples

[Diagram: the environment feeds inputs directly to the learning system; there is
no teacher and no error signal]

▪ Unsupervised NNs usually perform some kind of data compression, such


as dimensionality reduction or clustering

▪ Learning algorithms
▪ Hebbian learning
▪ Competitive learning
Reinforcement learning

▪ No teacher, environment only offers primary reinforcement signal

▪ System learns under delayed reinforcement


• Temporal sequence of inputs which result in the generation of a reinforcement signal

▪ Goal is to minimize the expectation of the cumulative cost of actions taken over
a sequence of steps

[Diagram: the environment sends a primary reinforcement signal to the critic; the
critic sends a heuristic reinforcement signal to the learning system, whose
actions act back on the environment]

▪ RL is realized through two neural networks: the critic and the learning system

▪ The critic network converts the primary reinforcement signal (obtained directly
from the environment) into a higher-quality heuristic reinforcement signal,
which solves the temporal credit assignment problem
2.6 Learning tasks (1/7)
1. Pattern Association
▪ Associative memory is brain-like distributed memory that learns by association

▪ Two phases in the operation of associative memory


1. Storage
2. Recall

▪ Autoassociation
• Neural network stores a set of patterns by repeatedly presenting them to the
network
• Then, when presented with a distorted pattern, a neural network can recall the original
pattern
• Unsupervised learning algorithms

▪ Heteroassociation
• Set of input patterns is paired with an arbitrary set of output patterns
• Supervised learning algorithms
Learning tasks (2/7)

2. Pattern Recognition
▪ In pattern recognition, input signals are assigned to categories (classes)

▪ Two phases of pattern recognition


1. Learning (supervised)
2. Classification

▪ Statistical nature of pattern recognition


• Patterns are represented in multi-dimensional
decision space

• Decision space is divided by separate


regions for each class

• Decision boundaries are determined by a


learning process

• Support-Vector-Machine example

Learning tasks (3/7)

3. Function Approximation
▪ Arbitrary nonlinear input-output mapping
y = f(x)
can be approximated by a neural network, given a set of labeled examples
{xi, yi}, i=1,..,N

▪ The task is to approximate the mapping f(x) by a neural network F(x)


so that f(x) and F(x) are close enough

||F(x) – f(x)|| < ε for all x, (ε is a small positive number)

▪ Neural network mapping F(x) can be realized by supervised learning


(error-correction learning algorithm)

▪ Important function approximation tasks


• System identification
• Inverse system
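To make the ε criterion concrete, here is a Python sketch that fits f(x) = sin(x) from labeled samples {xi, yi}. A polynomial stands in for the neural network F(x), purely to illustrate the closeness test, not as the method the slides describe:

```python
import numpy as np

# Labeled examples of the mapping to approximate, f(x) = sin(x)
x = np.linspace(0, np.pi, 200)
f = np.sin(x)

# Fit an approximator F from the examples (polynomial as a stand-in for a NN)
coeffs = np.polyfit(x, f, deg=7)
F = np.polyval(coeffs, x)

# Approximation criterion: ||F(x) - f(x)|| < eps over the sampled inputs
eps = 1e-3
print(np.max(np.abs(F - f)) < eps)
```

A trained network would be accepted by the same test whenever its worst-case deviation from f stays below ε.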
Learning tasks (4/7)

• System identification

[Diagram: inputs from the environment drive both the unknown system and the
neural network in parallel; the unknown system's response and the network's
output meet at Σ, producing the error signal]

• Inverse system

[Diagram: the environment drives the system, whose output feeds the neural
network; the network's output is compared with the original environment input
at Σ, producing the error signal]
Learning tasks (5/7)

4. Control
• Neural networks can be used to control a plant (a process)
• Brain is the best example of a parallel distributed generalized controller
• Operates thousands of actuators (muscles)
• Can handle nonlinearity and noise
• Can handle invariances
• Can optimize over a long-range planning horizon

▪ Feedback control system (Model reference control)


• NN controller has to supply inputs that will drive a plant according to a reference
– Model predictive control
• NN model provides multi-step ahead predictions for the optimizer
Learning tasks (6/7)
5. Filtering
• Filter – device or algorithm used to extract information about a prescribed quantity of interest
from a noisy data set
• Filters can be used for three basic information processing tasks:

1. Filtering o o o o o o o o o o
• Extraction of information at discrete time n by using measured data up to and including
time n
• Examples: Cocktail party problem, Blind source separation

2. Smoothing o o o o o o x o o o
• Differs from filtering in:
a) Data need not be available at time n
b) Data measured later than n can be used to obtain this information

3. Prediction o o o o o o o o o o x
• Deriving information about the quantity of interest at a future time n+h, h>0, by using data
measured up to and including time n
• Example: Forecasting of energy consumption, stock market prediction
Learning tasks (7/7)

6. Beamforming
▪ Spatial form of filtering, used to distinguish between the spatial properties of a
target signal and background noise

▪ Device is called a beamformer

▪ Beamforming is used in human auditory response and echo-locating bats


→ the task is suitable for neural network application

▪ Common beamforming tasks: radar and sonar systems


• Task is to detect a target in the presence of receiver noise and interfering
signals
• Target signal originates from an unknown direction
• No a priori information available on interfering signals

▪ Neural beamformer, neuro-beamformer, attentional neurocomputers


1.7 Applications of neural
networks
▪ Aerospace
▪ High-performance aircraft autopilots, flight path simulations, aircraft
control systems, autopilot enhancements, aircraft component simulations,
aircraft component fault detectors
▪ Automotive
▪ Automobile automatic guidance systems, warranty activity analyzers
▪ Banking
▪ Check and other document readers, credit application evaluators
▪ Defense
▪ Weapon steering, target tracking, object discrimination, facial recognition,
new kinds of sensors, sonar, radar and image signal processing including
data compression, feature extraction and noise suppression, signal/image
identification
▪ Electronics
▪ Code sequence prediction, integrated circuit chip layout, process control,
chip failure analysis, machine vision, voice synthesis, nonlinear modeling
Applications of neural
networks
▪ Financial
▪ Real estate appraisal, loan advisor, corporate bond rating, credit line use
analysis, portfolio trading program, corporate financial analysis, currency
price prediction
▪ Manufacturing
▪ Manufacturing process control, product design and analysis, process
and machine diagnosis, real-time particle identification, visual quality
inspection systems, welding quality analysis, paper quality prediction,
computer chip quality analysis, analysis of grinding operations, chemical
product design analysis, machine maintenance analysis, project planning
and management, dynamic modeling of chemical process systems
▪ Medical
▪ Breast cancer cell analysis, EEG and ECG analysis, prosthesis design,
optimization of transplant times, hospital expense reduction, hospital
quality improvement, emergency room test advisement
Applications of neural
networks
▪ Robotics
▪ Trajectory control, forklift robot, manipulator controllers, vision systems
▪ Speech
▪ Speech recognition, speech compression, vowel classification, text-to-
speech synthesis
▪ Securities
▪ Market analysis, automatic bond rating, stock trading advisory systems
▪ Telecommunications
▪ Image and data compression, automated information services, real-time
translation of spoken language, customer payment processing systems
▪ Transportation
▪ Truck brake diagnosis systems, vehicle scheduling, routing systems

Illustrative Example: Three-
Inputs case

The three sensor outputs


will then be input to a
neural network. The
purpose of the network is
to decide which kind of
fruit is on the conveyor, so
that the fruit can be
directed to the correct
storage bin.
Represented Problem:
Three-Inputs case
Three-inputs neuron
Answer of Apple-Orange
Classification
Toolbox: nnd3pc
Classification Example

Let’s say that an orange with an elliptical shape is passed through the sensors.
The input vector would then be

In fact, any input vector that is closer to the orange prototype vector than to the
apple prototype vector (in Euclidean distance) will be classified as an orange
(and vice versa).
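That decision rule can be sketched directly in Python. The prototype vectors below follow the common textbook shape/texture/weight encoding (orange = [1, −1, −1], apple = [1, 1, −1]); treat them as assumptions:

```python
import numpy as np

# Assumed prototype vectors (shape, texture, weight encoded as ±1)
p_orange = np.array([1.0, -1.0, -1.0])
p_apple = np.array([1.0, 1.0, -1.0])

def classify(p):
    """Assign the input to whichever prototype is nearer in Euclidean distance."""
    d_orange = np.linalg.norm(p - p_orange)
    d_apple = np.linalg.norm(p - p_apple)
    return 'orange' if d_orange < d_apple else 'apple'

# An orange with an elliptical shape: the shape sensor reads -1 instead of +1
print(classify(np.array([-1.0, -1.0, -1.0])))   # still closer to the orange prototype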
Type of Neural Network - Recurrent
Network: Hopfield Network (1/4)

The neurons in this network


are initialized with the input
vector, then the network
iterates until the output
converges
Type of Neural Network - Recurrent
Network: Hopfield Network (2/4)

Example
For the case where

P = [−1; −1; −1]

a(0) = P = [−1; −1; −1]
Type of Neural Network - Recurrent
Network: Hopfield Network (3/4)

Applying the satlins rule to obtain the new output from the old:

a(1) = satlins( W · a(0) + b )
     = satlins( [0.2 0 0; 0 1.2 0; 0 0 0.2] · [−1; −1; −1] + [0.9; 0; −0.9] )
     = satlins( [0.7; −1.2; −1.1] )
     = [0.7; −1; −1]
Type of Neural Network - Recurrent
Network: Hopfield Network (4/4)

Convergence, stop!
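The iteration can be reproduced in a short Python sketch, using W and b from the example and a clip-based satlins:

```python
import numpy as np

W = np.array([[0.2, 0.0, 0.0],
              [0.0, 1.2, 0.0],
              [0.0, 0.0, 0.2]])
b = np.array([0.9, 0.0, -0.9])

def satlins(n):
    return np.clip(n, -1.0, 1.0)   # symmetric saturating linear activation

a = np.array([-1.0, -1.0, -1.0])   # a(0) = P
for _ in range(10):                 # iterate until the output converges
    a_next = satlins(W @ a + b)
    if np.allclose(a_next, a):
        break                       # convergence: stop
    a = a_next

print(a)   # the network settles on a stored pattern
```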
Toolbox: nnd3hopc
Hopfield Application:
Letter Detector
1. Run hopfieldNetwork.m
2. Load an image, e.g. the letter A (max 10 letters, entered one at a time)
3. Train the network
4. Take a different image of the letter A (note: the file must be in bmp format)
5. Press Run

6. Run result

https://www.mathworks.com/matlabcentral/fileexchange/13728-hopfield-neural-network
Assignment #3

1. Given w = [0.2 0 0; 0 1.2 0; 0 0 0.2], p = [−1; −1; −1], and b = [0.9; 0; −0.9],
determine the convergent output of the Hopfield network over 4 iterations,
including the manual calculations, using the activation functions satlins,
logsig, and tansig

2. Run the letter-detector application using the Hopfield network; collect and
use the character B for the training and testing process. Write your conclusions
from running that code!
