Learning in Neural Networks
Sibel KAPLAN
İZMİR-2004
2
Program...
History
Introduction
Models of Processing Elements (neurons)
Models of Synaptic Interconnections
Learning Rules for Updating the Weights
Supervised learning
Perceptron Learning
Adaline Learning
Back Propagation (an example)
Reinforcement Learning
Unsupervised Learning
Hebbian Learning Rule
3
History...
4
The Real and Artificial Neurons
5
ANNs are systems that are constructed to make use of some
organizational principles resembling those of the human brain.
ANNs are good at tasks such as:
pattern matching and classification
function approximation
optimization
vector quantization
data clustering
6
The Neuron (Processing Element)
[Figure: a single processing element with input values x1...xm, weights w1...wm, and a bias b; the summing function forms the net input (local field) v, which is passed through the activation function to give the output y.]
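A minimal sketch of this processing element in Python (the sigmoid chosen for the activation and all variable names are illustrative assumptions; the slides leave the activation function generic):

```python
import math

def neuron_output(x, w, b):
    """One processing element: weighted sum of inputs plus bias,
    passed through an activation function."""
    v = sum(xi * wi for xi, wi in zip(x, w)) + b  # net input (local field)
    return 1.0 / (1.0 + math.exp(-v))             # sigmoid activation: y = a(v)

# Example: three inputs, three weights, bias b
print(neuron_output([0.5, -1.0, 2.0], [0.1, 0.4, -0.2], b=0.3))
```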
7
Models of ANNs are specified by three basic entities:
Models of the neurons themselves,
Models of synaptic interconnections and structures,
Learning rules for updating the weights.
8
There are three important parameters of a neuron:
9
III. Activation Function
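The slide's list of activation functions is not reproduced here, but two choices used elsewhere in these slides are the hard limiter (sign), used by the perceptron rule, and the sigmoid, used in the back-propagation example; a sketch with illustrative function names:

```python
import math

def sgn(v):
    """Hard limiter: +1 if the net input is non-negative, otherwise -1."""
    return 1.0 if v >= 0.0 else -1.0

def sigmoid(v):
    """Smooth, differentiable activation a(v) = 1 / (1 + e^-v)."""
    return 1.0 / (1.0 + math.exp(-v))

def sigmoid_prime(v):
    """Derivative a'(v) = a(v) * (1 - a(v)), needed by gradient-based learning."""
    s = sigmoid(v)
    return s * (1.0 - s)
```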
10
Connections
11
Basic types of connection geometries (Nelson and Illingworth, 1991).
[Figure: connection geometries with input units x1...xm linked to output units y1...yn through weights w11, w21, w23, ..., wnm.]
12
Basic types of connection geometries (Nelson and Illingworth, 1991), continued.
[Figure: further connection geometries between input units x1...xm and output units y1...yn.]
13
Learning Rules
Supervised learning
Reinforcement learning
Unsupervised learning
15
In supervised learning:
when an input is applied to an ANN, the corresponding desired response of
the system is given. The ANN is supplied with a sequence of examples
(x1, d1), (x2, d2), ..., (xk, dk) of desired input-output pairs.
In reinforcement learning:
the network receives only an evaluative signal (reward or penalty) indicating
how good its output is, rather than the exact desired response.
17
Three Categories of Learning
In unsupervised learning:
no desired responses are given; the network organizes itself to discover
structure (for example clusters) in the input data.
18
Machine Learning
Supervised
Data: labeled examples (input, desired output)
Problems: classification, pattern recognition, regression
NN models: perceptron, adaline, back-propagation
Unsupervised
Data: unlabeled examples (different realizations of the input)
Problems: clustering, content-addressable memory
NN models: self-organizing maps (SOM), Hamming networks, Hopfield networks, hierarchical networks
19
Knowledge about the learning task is given in the form of
examples called training examples.
20
A general form of the weight learning rule indicates that the
increment of the weight vector wi produced by the learning
step at time t is proportional to the product of the learning
signal r and the input x(t):
$\Delta \mathbf{w}_i(t) = \eta \, r \, \mathbf{x}(t)$
where $\eta$ is the (positive) learning rate.
21
Perceptron Learning Rule
The output of unit i for training pattern k is
$y_i^{(k)} = \mathrm{sgn}\!\left(\mathbf{w}_i^{T}\mathbf{x}^{(k)}\right)$
and the weights are adjusted only when this output differs from the desired value $d_i^{(k)}$.
[Figure: decision boundaries in the (I1, I2) plane for I1 and I2, I1 or I2, and I1 xor I2; the two classes C1 and C2 of the xor problem cannot be separated by a single line.]
The decision boundary is given by
$\sum_{i} w_i x_i + w_0 = 0$, which in two dimensions is $w_1 x_1 + w_2 x_2 + w_0 = 0$.
23
The solvability of a pattern classification problem by a simple
perceptron depends on whether the problem is linearly separable.
If a classification problem is linearly separable by a simple perceptron,
then a weight vector w exists such that
$\mathbf{w}^{T}\mathbf{x} > 0$ for every pattern x of the first class, and
$\mathbf{w}^{T}\mathbf{x} < 0$ for every pattern x of the second class.
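A sketch of the perceptron rule in Python, trained on the linearly separable I1 AND I2 problem from the figure (the bipolar ±1 coding, the learning rate eta, and the fixed number of epochs are assumptions, not given on the slides):

```python
def sgn(v):
    return 1.0 if v >= 0.0 else -1.0

def train_perceptron(samples, eta=0.1, epochs=20):
    """samples: list of (x, d) pairs with x = (x1, x2); returns weights and bias."""
    w, w0 = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, d in samples:
            y = sgn(w[0] * x[0] + w[1] * x[1] + w0)
            # Perceptron rule: weights change only when the output y differs from d
            w[0] += eta * (d - y) * x[0]
            w[1] += eta * (d - y) * x[1]
            w0 += eta * (d - y)
    return w, w0

# I1 AND I2 (bipolar coding) is linearly separable, so the rule converges;
# the same loop would never converge for I1 XOR I2.
and_data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
print(train_perceptron(and_data))
```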
24
Adaline (Widrow-Hoff) Learning Rule (1962)
25
To update the weights, we define a cost function E(w)
which measures the system's performance error:
$E(\mathbf{w}) = \frac{1}{2}\sum_{k=1}^{p}\left(d^{(k)} - y^{(k)}\right)^{2}$
For weight adjustment, gradient descent is used:
$\mathbf{w} \leftarrow \mathbf{w} - \eta \, \nabla E(\mathbf{w})$, with
$\frac{\partial E}{\partial w_j} = -\sum_{k=1}^{p}\left(d^{(k)} - \mathbf{w}^{T}\mathbf{x}^{(k)}\right) x_j^{(k)}$
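A sketch of one gradient-descent (Widrow-Hoff / LMS) update over the p training pairs; the learning rate eta and the batch-style loop are assumptions, since the slide only defines E(w) and its gradient:

```python
def adaline_step(samples, w, eta=0.05):
    """One batch update of the weight list w.
    samples: list of (x, d) pairs, x a list of inputs of the same length as w."""
    grad = [0.0] * len(w)
    for x, d in samples:
        y = sum(wj * xj for wj, xj in zip(w, x))       # linear output y = w^T x
        for j, xj in enumerate(x):
            grad[j] += -(d - y) * xj                   # contribution to dE/dw_j
    return [wj - eta * gj for wj, gj in zip(w, grad)]  # w <- w - eta * grad E(w)
```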
26
Back-Propagation
27
Three-Layer Back-Propagation Network
[Figure: a network with m input units x_j, l hidden units z_q, and n output units y_i; v_qj are the input-to-hidden weights and w_iq the hidden-to-output weights.]
$y_i = a(net_i) = a\!\left(\sum_{q=1}^{l} w_{iq} z_q\right) = a\!\left(\sum_{q=1}^{l} w_{iq}\, a\!\left(\sum_{j=1}^{m} v_{qj} x_j\right)\right)$,  i = 1, 2, ..., n
$z_q = a(net_q) = a\!\left(\sum_{j=1}^{m} v_{qj} x_j\right)$,  q = 1, 2, ..., l
$net_q = \sum_{j=1}^{m} v_{qj} x_j$,  j = 1, 2, ..., m
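The same forward computation as a Python sketch (using the sigmoid for a(.), as in the worked example later; the nested-list weight layout is an assumption):

```python
import math

def a(net):
    """Sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, v, w):
    """x: m inputs; v[q][j]: input-to-hidden weights v_qj; w[i][q]: hidden-to-output
    weights w_iq. Returns the hidden activations z_q and the outputs y_i."""
    z = [a(sum(v[q][j] * x[j] for j in range(len(x)))) for q in range(len(v))]
    y = [a(sum(w[i][q] * z[q] for q in range(len(z)))) for i in range(len(w))]
    return z, y
```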
28
A cost function is defined as in the Adaline learning rule:
$E(\mathbf{w}) = \frac{1}{2}\sum_{i=1}^{n}\left(d_i - y_i\right)^{2} = \frac{1}{2}\sum_{i=1}^{n}\left[d_i - a(net_i)\right]^{2}$
Then, according to the gradient-descent method, the weights of the
hidden-to-output connections are updated by
$\Delta w_{iq} = -\eta \frac{\partial E}{\partial w_{iq}}$
which gives
$\Delta w_{iq} = \eta\,(d_i - y_i)\,a'(net_i)\, z_q = \eta\,\delta_{oi}\, z_q$
where $\delta_{oi} = (d_i - y_i)\,a'(net_i)$ is the error signal of output unit i.
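In code, the hidden-to-output update for one training pattern might look like this (eta is an assumed learning rate; for the sigmoid, a'(net_i) can be written as y_i(1 - y_i)):

```python
def update_output_weights(w, z, y, d, eta=0.1):
    """w[i][q]: hidden-to-output weights; z: hidden activations;
    y: network outputs; d: desired outputs."""
    for i in range(len(w)):
        delta_o = (d[i] - y[i]) * y[i] * (1.0 - y[i])  # delta_oi = (d_i - y_i) a'(net_i)
        for q in range(len(z)):
            w[i][q] += eta * delta_o * z[q]            # delta w_iq = eta * delta_oi * z_q
    return w
```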
29
Hebb’s Learning Law (1949)
30
According to the Hebbian hypothesis, the learning signal is simply the
neuron's own output:
$r = a\!\left(\mathbf{w}_i^{T}\mathbf{x}\right) = y_i$
so the weight increment becomes
$\Delta w_{ij} = \eta \, y_i \, x_j$   (i = 1, 2, ..., n;  j = 1, 2, ..., m)
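A sketch of one Hebbian update step; the sigmoid output and the learning rate eta are assumptions, while the rule itself is just delta w_ij = eta * y_i * x_j:

```python
import math

def hebbian_step(w, x, eta=0.01):
    """w[i][j]: weight from input j to neuron i; x: input vector.
    The learning signal is the neuron's own output y_i = a(w_i^T x)."""
    for i in range(len(w)):
        net = sum(w[i][j] * x[j] for j in range(len(x)))
        y_i = 1.0 / (1.0 + math.exp(-net))
        for j in range(len(x)):
            w[i][j] += eta * y_i * x[j]  # strengthen w_ij in proportion to y_i * x_j
    return w
```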
31
Tips about important learning factors
32
An example of back-propagation...
33
$Net_j = \sum_{i=1}^{2} I_i\, w_{ij} + B_{1j}$   (for j = 1)
$Net_1 = \sum_{i=1}^{2} I_i\, w_{i1} + B_{11} = I_1 w_{11} + I_2 w_{21}$   (with $B_{11} = 0$)
$= 0.6 \times 0.2 + 0.7 \times 0.1 = 0.19$
$A_1 = \dfrac{1}{1 + e^{-0.19}} = 0.547$
34
$Net_1 = \sum_{j=1}^{3} A_j\, Gw_{j1} = A_1 Gw_{11} + A_2 Gw_{21} + A_3 Gw_{31}$
$= 0.547 \times 0.1 + 0.593 \times 0.3 + 0.636 \times 0.4 = 0.487$
$Q_1 = \dfrac{1}{1 + e^{-0.487}} = 0.619$   (final output)
Error of the first output: $E_1 = Q_{1,\text{desired}} - Q_{1,\text{real}} = 0.5 - 0.619 = -0.119$
Total cost (error) $= \dfrac{1}{2}\left(E_1^{2} + E_2^{2}\right) = \dfrac{1}{2}\left[(-0.119)^{2} + (0.5449)^{2}\right] = 0.1555$
(where $E_2 = 0.5449$ is the error of the second output, computed in the same way)
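The numbers above can be reproduced with a few lines of Python; the hidden outputs A2 and A3 and the error E2 of the second output are taken directly from the slides, since the weights that produce them are not shown:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# Hidden unit 1: Net1 = I1*w11 + I2*w21 (bias B11 = 0)
net1 = 0.6 * 0.2 + 0.7 * 0.1               # = 0.19
a1 = sigmoid(net1)                          # = 0.547
a2, a3 = 0.593, 0.636                       # hidden outputs given on the slide

# Output unit 1: Net1 = A1*Gw11 + A2*Gw21 + A3*Gw31
net_out = a1 * 0.1 + a2 * 0.3 + a3 * 0.4    # = 0.487
q1 = sigmoid(net_out)                       # = 0.619 (final output)

e1 = 0.5 - q1                               # = -0.119
e2 = 0.5449                                 # error of the second output (given)
total_cost = 0.5 * (e1 ** 2 + e2 ** 2)      # = 0.1555
print(net1, a1, net_out, q1, e1, total_cost)
```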
35