WK3 – Multi Layer Perceptron

CS 476: Networks of Neural Computation
Dr. Stathis Kasderidis
Dept. of Computer Science
University of Crete

Spring Semester, 2009


Contents

•MLP model details
•Back-propagation algorithm
•XOR Example
•Heuristics for Back-propagation
•Heuristics for learning rate
•Approximation of functions
•Generalisation
•Model selection through cross-validation
•Conjugate-Gradient method for BP


Contents II

•Advantages and disadvantages of BP
•Types of problems for applying BP
•Conclusions


Multi Layer Perceptron

•"Neurons" are positioned in layers. There are Input, Hidden and Output Layers.


Multi Layer Perceptron Output

•The output y_j is calculated by:

y_j(n) = \varphi_j(v_j(n)) = \varphi_j\left( \sum_{i=0}^{m} w_{ji}(n)\, y_i(n) \right)

where w_{j0}(n) is the bias.

•The function \varphi_j(\cdot) is a sigmoid function. Typical examples are:


Transfer Functions

•The logistic sigmoid:

y = \frac{1}{1 + \exp(-x)}


Transfer Functions II

•The hyperbolic tangent sigmoid:

y = \tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}
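Both sigmoids, and the derivative forms used later in the back-propagation slides, can be written down directly. The following is a minimal sketch in Python; the function names are ours.

```python
import numpy as np

# Minimal sketch of the two transfer functions and their derivatives,
# expressed in terms of the output y (the forms used later for BP).

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_deriv_from_output(y):
    return y * (1.0 - y)              # phi'(v) = y (1 - y)

def tanh_deriv_from_output(y):
    return (1.0 - y) * (1.0 + y)      # phi'(v) = (1 - y)(1 + y)

x = np.linspace(-3, 3, 7)
print(logistic(x))
print(np.tanh(x))
```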


Learning Algorithm

•Assume that a set of examples {x(n), d(n)}, n = 1, …, N is given. x(n) is the input vector of dimension m_0 and d(n) is the desired response vector of dimension M.
•Thus an error signal, e_j(n) = d_j(n) - y_j(n), can be defined for the output neuron j.
•We can derive a learning algorithm for an MLP by assuming an optimisation approach which is based on the steepest descent direction, i.e.

\Delta w(n) = -\eta\, g(n)

where g(n) is the gradient vector of the cost function and \eta is the learning rate.


Learning Algorithm II

•The algorithm that is derived from the steepest descent direction is called back-propagation.
•Assume that we define an SSE instantaneous cost function (i.e. per example) as follows:

\mathcal{E}(n) = \frac{1}{2} \sum_{j \in C} e_j^2(n)

where C is the set of all output neurons.
•If we assume that there are N examples in the set, then the average squared error is:

\mathcal{E}_{av} = \frac{1}{N} \sum_{n=1}^{N} \mathcal{E}(n)
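A small sketch of the two cost measures, assuming the output errors for each pattern have already been collected in an array (the array names are ours):

```python
import numpy as np

# e[n, j] = d_j(n) - y_j(n): error of output neuron j on pattern n.
def instantaneous_sse(e_n):
    """E(n) = 1/2 * sum_j e_j(n)^2 for a single pattern."""
    return 0.5 * np.sum(e_n ** 2)

def average_squared_error(e):
    """E_av = (1/N) * sum_n E(n) over all N patterns."""
    return np.mean([instantaneous_sse(e_n) for e_n in e])
```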


Learning Algorithm III

•We need to calculate the gradient with respect to \mathcal{E}_{av} or with respect to \mathcal{E}(n). In the first case we calculate the gradient per epoch (i.e. over all N patterns), while in the second the gradient is calculated per pattern.
•In the case of \mathcal{E}_{av} we have the Batch mode of the algorithm. In the case of \mathcal{E}(n) we have the Online or Stochastic mode of the algorithm.
•Assume that we use the online mode for the rest of the calculation. The gradient is defined as:

g(n) = \frac{\partial \mathcal{E}(n)}{\partial w_{ji}(n)}


Learning Algorithm IV

•Using the chain rule of calculus we can write:

\frac{\partial \mathcal{E}(n)}{\partial w_{ji}(n)} = \frac{\partial \mathcal{E}(n)}{\partial e_j(n)}\, \frac{\partial e_j(n)}{\partial y_j(n)}\, \frac{\partial y_j(n)}{\partial v_j(n)}\, \frac{\partial v_j(n)}{\partial w_{ji}(n)}

•We calculate the different partial derivatives as follows:

\frac{\partial \mathcal{E}(n)}{\partial e_j(n)} = e_j(n)

\frac{\partial e_j(n)}{\partial y_j(n)} = -1


Learning Algorithm V

•And,

\frac{\partial y_j(n)}{\partial v_j(n)} = \varphi_j'(v_j(n))

\frac{\partial v_j(n)}{\partial w_{ji}(n)} = y_i(n)

•Combining all the previous equations we finally get:

\Delta w_{ji}(n) = -\eta\, \frac{\partial \mathcal{E}(n)}{\partial w_{ji}(n)} = \eta\, e_j(n)\, \varphi_j'(v_j(n))\, y_i(n)


Learning Algorithm VI

•The equation regarding the weight corrections can be written as:

\Delta w_{ji}(n) = \eta\, \delta_j(n)\, y_i(n)

where \delta_j(n) is defined as the local gradient and is given by:

\delta_j(n) = -\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = -\frac{\partial \mathcal{E}(n)}{\partial e_j(n)}\, \frac{\partial e_j(n)}{\partial y_j(n)}\, \frac{\partial y_j(n)}{\partial v_j(n)} = e_j(n)\, \varphi_j'(v_j(n))

•We need to distinguish two cases:
•j is an output neuron
•j is a hidden neuron


Learning Algorithm VII

•Thus the Back-Propagation algorithm is an error-correction algorithm for supervised learning.

•If j is an output neuron, we already have a definition of e_j(n), so \delta_j(n) is defined (after substitution) as:

\delta_j(n) = (d_j(n) - y_j(n))\, \varphi_j'(v_j(n))

•If j is a hidden neuron then \delta_j(n) is defined as:

\delta_j(n) = -\frac{\partial \mathcal{E}(n)}{\partial y_j(n)}\, \frac{\partial y_j(n)}{\partial v_j(n)} = -\frac{\partial \mathcal{E}(n)}{\partial y_j(n)}\, \varphi_j'(v_j(n))


Learning Algorithm VIII

•To calculate the partial derivative of \mathcal{E}(n) with respect to y_j(n), we recall the definition of \mathcal{E}(n) and change the index for the output neuron to k, i.e.

\mathcal{E}(n) = \frac{1}{2} \sum_{k \in C} e_k^2(n)

•Then we have:

\frac{\partial \mathcal{E}(n)}{\partial y_j(n)} = \sum_{k \in C} e_k(n)\, \frac{\partial e_k(n)}{\partial y_j(n)}


Learning Algorithm IX

•We use the chain rule of differentiation again to get the partial derivative of e_k(n) with respect to y_j(n):

\frac{\partial \mathcal{E}(n)}{\partial y_j(n)} = \sum_{k \in C} e_k(n)\, \frac{\partial e_k(n)}{\partial v_k(n)}\, \frac{\partial v_k(n)}{\partial y_j(n)}

•Remembering the definition of e_k(n) we have:

e_k(n) = d_k(n) - y_k(n) = d_k(n) - \varphi_k(v_k(n))

•Hence:

\frac{\partial e_k(n)}{\partial v_k(n)} = -\varphi_k'(v_k(n))


Learning Algorithm X

•The local field v_k(n) is defined as:

v_k(n) = \sum_{j=0}^{m} w_{kj}(n)\, y_j(n)

where m is the number of neurons (from the previous layer) which connect to neuron k. Thus we get:

\frac{\partial v_k(n)}{\partial y_j(n)} = w_{kj}(n)

•Hence:

\frac{\partial \mathcal{E}(n)}{\partial y_j(n)} = -\sum_{k \in C} e_k(n)\, \varphi_k'(v_k(n))\, w_{kj}(n) = -\sum_{k \in C} \delta_k(n)\, w_{kj}(n)


Learning Algorithm XI

•Putting it all together, we find for the local gradient of a hidden neuron j the following formula:

\delta_j(n) = \varphi_j'(v_j(n)) \sum_{k \in C} \delta_k(n)\, w_{kj}(n)

•It is useful to remember the special form of the derivatives for the logistic and hyperbolic tangent sigmoids:

\varphi_j'(v_j(n)) = y_j(n)\,[1 - y_j(n)]   (Logistic)
\varphi_j'(v_j(n)) = [1 - y_j(n)]\,[1 + y_j(n)]   (Hyp. Tangent)
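As a small sketch of the two cases with logistic units, the output-layer and hidden-layer local gradients for one pattern can be computed as follows (the names and array shapes are our own conventions):

```python
import numpy as np

# Local gradients for one pattern with logistic units, using phi'(v) = y(1-y).
# y_out: outputs of the output layer; d: desired responses; y_hid: hidden
# outputs; W_out[k, j]: weight from hidden neuron j to output neuron k.

def output_deltas(d, y_out):
    return (d - y_out) * y_out * (1.0 - y_out)   # delta_j = e_j * phi'(v_j)

def hidden_deltas(y_hid, delta_out, W_out):
    back = W_out.T @ delta_out                   # sum_k delta_k * w_kj
    return y_hid * (1.0 - y_hid) * back          # delta_j = phi'(v_j) * sum_k ...
```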


Summary of BP Algorithm

1. Initialisation: Assuming that no prior information is available, pick the synaptic weights and thresholds from a uniform distribution whose mean is zero and whose variance is chosen to make the std of the local fields of the neurons lie at the transition between the linear and saturated parts of the sigmoid function.
2. Presentation of training examples: Present the network with an epoch of training examples. For each example in the set, perform the sequence of forward and backward computations described in points 3 & 4 below.


Summary of BP Algorithm II

3. Forward Computation:
• Let the training example in the epoch be denoted by (x(n), d(n)), where x is the input vector and d is the desired vector.
• Compute the local fields by proceeding forward through the network layer by layer. The local field for neuron j at layer l is defined as:

v_j^{(l)}(n) = \sum_{i=0}^{m} w_{ji}^{(l)}(n)\, y_i^{(l-1)}(n)

where m is the number of neurons which connect to j and y_i^{(l-1)}(n) is the activation of neuron i at layer (l-1). w_{ji}^{(l)}(n) is the weight


Summary of BP Algorithm III

which connects the neurons j and i.
• For i = 0, we have y_0^{(l-1)}(n) = +1 and w_{j0}^{(l)}(n) = b_j^{(l)}(n) is the bias of neuron j.
• Assuming a sigmoid function, the output signal of neuron j is:

y_j^{(l)}(n) = \varphi_j(v_j^{(l)}(n))

• If j is in the input layer we simply set:

y_j^{(0)}(n) = x_j(n)

where x_j(n) is the jth component of the input vector x.


Summary of BP Algorithm IV

• If j is in the output layer we have:

y_j^{(L)}(n) = o_j(n)

where o_j(n) is the jth component of the output vector o. L is the total number of layers in the network.
• Compute the error signal:

e_j(n) = d_j(n) - o_j(n)

where d_j(n) is the desired response for the jth element.


Summary of BP Algorithm V

4. Backward Computation:
• Compute the \delta s of the network, defined by:

\delta_j^{(l)}(n) = e_j^{(L)}(n)\, \varphi_j'(v_j^{(L)}(n))    for neuron j in output layer L
\delta_j^{(l)}(n) = \varphi_j'(v_j^{(l)}(n)) \sum_k \delta_k^{(l+1)}(n)\, w_{kj}^{(l+1)}(n)    for neuron j in hidden layer l

where \varphi_j'(\cdot) is the derivative of function \varphi_j with respect to its argument.
• Adjust the weights using the generalised delta rule:

w_{ji}^{(l)}(n+1) = w_{ji}^{(l)}(n) + \alpha\, \Delta w_{ji}^{(l)}(n-1) + \eta\, \delta_j^{(l)}(n)\, y_i^{(l-1)}(n)

where \alpha is the momentum constant.


Summary of BP Algorithm VI

5. Iteration: Iterate the forward and backward computations of steps 3 & 4 by presenting new epochs of training examples until the stopping criterion is met.

• The order of presentation of examples should be randomised from epoch to epoch.
• The momentum and learning rate parameters typically change (usually decrease) as the number of training iterations increases.

A code sketch of the whole procedure is given below.
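A minimal end-to-end sketch of steps 1-5 for a one-hidden-layer MLP with logistic units, in Python. The layer size, learning rate, momentum and number of epochs are illustrative choices, not values from the slides.

```python
import numpy as np

# Online back-propagation for a 1-hidden-layer MLP with logistic units,
# following steps 1-5 of the summary. Column 0 of each weight matrix acts
# as the bias (inputs are augmented with y_0 = +1).

rng = np.random.default_rng(0)

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_mlp(X, D, m_hidden=4, eta=0.5, alpha=0.9, epochs=2000):
    N, m0 = X.shape
    M = D.shape[1]
    # Step 1: small zero-mean random weights
    W1 = rng.uniform(-0.5, 0.5, (m_hidden, m0 + 1))
    W2 = rng.uniform(-0.5, 0.5, (M, m_hidden + 1))
    dW1_prev = np.zeros_like(W1)
    dW2_prev = np.zeros_like(W2)

    for epoch in range(epochs):
        for n in rng.permutation(N):                    # randomise presentation order
            # Step 3: forward computation
            y0 = np.concatenate(([1.0], X[n]))          # input layer (+ bias input)
            v1 = W1 @ y0
            y1 = np.concatenate(([1.0], logistic(v1)))  # hidden layer (+ bias input)
            v2 = W2 @ y1
            o = logistic(v2)                            # output layer
            e = D[n] - o                                # error signal

            # Step 4: backward computation (logistic derivative y(1-y))
            delta2 = e * o * (1.0 - o)
            delta1 = y1[1:] * (1.0 - y1[1:]) * (W2[:, 1:].T @ delta2)

            # Generalised delta rule with momentum
            dW2 = alpha * dW2_prev + eta * np.outer(delta2, y1)
            dW1 = alpha * dW1_prev + eta * np.outer(delta1, y0)
            W2 += dW2
            W1 += dW1
            dW1_prev, dW2_prev = dW1, dW2
    return W1, W2
```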


Stopping Criteria

• The BP algorithm is considered to have converged when the Euclidean norm of the gradient vector reaches a sufficiently small gradient threshold.
• The BP algorithm is considered to have converged when the absolute value of the change in the average squared error per epoch is sufficiently small.


XOR Example

• The XOR problem is defined by the following truth table:

x1  x2  |  XOR(x1, x2)
 0   0  |  0
 0   1  |  1
 1   0  |  1
 1   1  |  0

• The following network solves the problem. The perceptron could not do this. (We use the sgn function.)

[Figure: a two-input network with one hidden layer that computes XOR]
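Since the slide's network diagram is not reproduced here, the following is one illustrative weight assignment (our own choice, not necessarily the one in the figure) for a 2-2-1 network of threshold/sgn units that computes XOR:

```python
# Hidden unit h1 computes OR, h2 computes AND, and the output unit fires
# when OR is true but AND is false, which is exactly XOR.

def step(v):
    return 1 if v > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # OR(x1, x2)
    h2 = step(x1 + x2 - 1.5)        # AND(x1, x2)
    return step(h1 - h2 - 0.5)      # OR and not AND  ->  XOR

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))   # prints 0, 1, 1, 0
```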


Heuristics for Back-Propagation

• To speed up the convergence of the back-propagation algorithm the following heuristics are applied:
• H1: Use sequential (online) rather than batch update
• H2: Maximise information content
• Use examples that produce the largest error
• Use examples which are very different from all the previous ones
• H3: Use an antisymmetric activation function, such as the hyperbolic tangent. Antisymmetric means:

\varphi(-x) = -\varphi(x)


Heuristics for Back-Propagation II

• H4: Use target values inside a smaller range, away from the asymptotic values of the sigmoid
• H5: Normalise the inputs:
• Create zero-mean variables
• Decorrelate the variables
• Scale the variables to have approximately equal covariances
• H6: Initialise the weights properly. Use a zero-mean distribution with standard deviation:

\sigma_w = m^{-1/2}


Heuristics for Back-Propagation III

where m is the number of connections arriving at a neuron
• H7: Learn from hints
• H8: Adapt the learning rates appropriately (see next section)

A short code sketch of H5 and H6 is given below.
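One way to implement H5 and H6, assuming whitening via an eigen-decomposition as the decorrelation step; the function names are ours.

```python
import numpy as np

# H5: zero-mean, decorrelated, roughly equal-covariance inputs.
# H6: zero-mean weights with std 1/sqrt(m), m = fan-in of the neuron.

def normalise_inputs(X):
    """X: (N, m0) data matrix -> zero-mean, decorrelated, unit-variance."""
    Xc = X - X.mean(axis=0)                 # zero mean
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    Xd = Xc @ eigvec                        # decorrelate (rotate to eigenbasis)
    return Xd / np.sqrt(eigval + 1e-12)     # equalise the covariances

def init_weights(n_out, m_in, rng=np.random.default_rng(0)):
    """Zero-mean Gaussian with std m^{-1/2}, m = number of incoming connections."""
    return rng.normal(0.0, 1.0 / np.sqrt(m_in), size=(n_out, m_in))
```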


Heuristics for Learning Rate

• R1: Every adjustable parameter should have its own learning rate
• R2: Every learning rate should be allowed to adjust from one iteration to the next
• R3: When the derivative of the cost function with respect to a weight has the same algebraic sign for several consecutive iterations of the algorithm, the learning rate for that particular weight should be increased
• R4: When the algebraic sign of the derivative above alternates for several consecutive iterations of the algorithm, the learning rate should be decreased

One possible sign-based scheme is sketched below.
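A per-weight adaptive scheme in the spirit of R1-R4 (similar to delta-bar-delta). For simplicity it compares only consecutive gradients rather than tracking several iterations, and the increase/decrease factors are our illustrative choices.

```python
import numpy as np

def adapt_learning_rates(eta, grad, grad_prev, up=1.05, down=0.7,
                         eta_min=1e-5, eta_max=1.0):
    """eta, grad, grad_prev: arrays with the same shape as the weights."""
    same_sign = grad * grad_prev > 0        # R3: consistent sign -> increase
    flipped   = grad * grad_prev < 0        # R4: alternating sign -> decrease
    eta = np.where(same_sign, eta * up, eta)
    eta = np.where(flipped,   eta * down, eta)
    return np.clip(eta, eta_min, eta_max)   # R1/R2: one adjustable rate per weight

# usage inside a training loop:
# eta = adapt_learning_rates(eta, g, g_prev);  W -= eta * g;  g_prev = g
```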


Approximation of Functions

•Q: What is the minimum number of hidden layers in an MLP that provides an approximate realisation of any continuous mapping?

•A: Universal Approximation Theorem
Let \varphi(\cdot) be a nonconstant, bounded, and monotone increasing continuous function. Let I_{m_0} denote the m_0-dimensional unit hypercube [0,1]^{m_0}. The space of continuous functions on I_{m_0} is denoted by C(I_{m_0}). Then, given any function f \in C(I_{m_0}) and \varepsilon > 0, there exist an integer m_1 and sets of real constants a_i, b_i and w_{ij}, where i = 1, …, m_1 and j = 1, …, m_0, such that we may


Approximation of Functions II

define:

F(x_1, \ldots, x_{m_0}) = \sum_{i=1}^{m_1} a_i\, \varphi\!\left( \sum_{j=1}^{m_0} w_{ij} x_j + b_i \right)

as an approximate realisation of the function f(\cdot); that is:

| F(x_1, \ldots, x_{m_0}) - f(x_1, \ldots, x_{m_0}) | < \varepsilon

for all x_1, …, x_{m_0} that lie in the input space.
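A small demonstration of the approximating form F in one dimension (m_0 = 1). For simplicity the hidden parameters w_i, b_i are fixed at random values and only the output coefficients a_i are fitted by least squares; this is just a shortcut to illustrate the form of F, not the BP algorithm, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
m1 = 50                                   # number of hidden units
w = rng.normal(0, 10, m1)                 # hidden weights (illustrative scale)
b = rng.uniform(-10, 0, m1)               # hidden biases

x = np.linspace(0, 1, 200)
f = np.sin(2 * np.pi * x)                 # target function on [0, 1]

Phi = np.tanh(np.outer(x, w) + b)         # hidden-layer outputs, shape (200, m1)
a, *_ = np.linalg.lstsq(Phi, f, rcond=None)

F = Phi @ a                               # F(x) = sum_i a_i * phi(w_i x + b_i)
print("max |F - f| =", np.max(np.abs(F - f)))   # small for large enough m1
```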


Approximation of Functions III

•The Universal Approximation Theorem is directly applicable to MLPs. Specifically:
•The sigmoid functions cover the requirements for the function \varphi
•The network has m_0 input nodes and a single hidden layer consisting of m_1 neurons; the inputs are denoted by x_1, …, x_{m_0}
•Hidden neuron i has synaptic weights w_{i1}, …, w_{i m_0} and bias b_i
•The network output is a linear combination of the outputs of the hidden neurons, with a_1, …, a_{m_1} defining the synaptic weights of the output layer

Approximation of Functions IV

•The theorem is an existence theorem: it does not tell us exactly what the number m_1 is; it just says that one exists!!!
•The theorem states that a single hidden layer is sufficient for an MLP to compute a uniform \varepsilon approximation to a given training set represented by the set of inputs x_1, …, x_{m_0} and a desired output f(x_1, …, x_{m_0}).
•The theorem does not say, however, that a single hidden layer is optimum in the sense of learning time, ease of implementation or generalisation.


Approximation of Functions V

•Empirical knowledge shows that the number of data pairs needed in order to achieve a given error level \varepsilon is:

N = O\!\left( \frac{W}{\varepsilon} \right)

where W is the total number of adjustable parameters of the model. There is mathematical support for this observation (but we will not analyse this further!)
•There is a "curse of dimensionality" when approximating functions in high-dimensional spaces.
•It is theoretically justified to use two hidden layers.


Generalisation

Def: A network generalises well when the input-output mapping computed by the network is correct (or nearly so) for test data never used in creating or training the network. It is assumed that the test data are drawn from the population used to generate the training data.

•We should try to approximate the true mechanism that generates the data, not the specific structure of the data, in order to achieve generalisation. If we learn the specific structure of the data we have overfitting or overtraining.




Generalisation III

•To achieve good generalisation we need:
•To have good data (see previous slides)
•To impose smoothness constraints on the function
•To add knowledge we have about the mechanism
•To reduce / constrain the model parameters:
•Through cross-validation
•Through regularisation (Pruning, AIC, BIC, etc.)


Cross Validation

•In the cross-validation method for model selection we split the training data into two sets:
•Estimation set
•Validation set
•We train our model on the estimation set.
•We evaluate the performance on the validation set.
•We select the model which performs "best" on the validation set, as sketched in the code below.
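A minimal sketch of this selection procedure. `train_mlp` and `mlp_error` stand for any training and evaluation routines (e.g. the BP sketch shown earlier); they are assumptions, not a fixed API, and the validation fraction is an illustrative choice.

```python
import numpy as np

def select_model(X, D, candidate_sizes, train_mlp, mlp_error,
                 val_fraction=0.2, rng=np.random.default_rng(0)):
    N = len(X)
    idx = rng.permutation(N)
    n_val = int(val_fraction * N)
    val, est = idx[:n_val], idx[n_val:]          # validation / estimation split

    best_size, best_err, best_model = None, np.inf, None
    for m_hidden in candidate_sizes:             # e.g. [2, 4, 8, 16]
        model = train_mlp(X[est], D[est], m_hidden=m_hidden)   # train on estimation set
        err = mlp_error(model, X[val], D[val])                 # evaluate on validation set
        if err < best_err:
            best_size, best_err, best_model = m_hidden, err, model
    return best_size, best_model
```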


Cross Validation II

•There are variations of the method depending on the partition of the validation set. Typical variants are:
•Method of early stopping
•Leave k-out


Method of Early Stopping

•Apply the method of early stopping when the number of data pairs, N, is less than 30W, where W is the number of free parameters in the network.
•Assume that r is the ratio of the training set which is allocated to validation. It can be shown that the optimal value of this parameter is given by:

r_{opt} = 1 - \frac{\sqrt{2W - 1} - 1}{2(W - 1)}

•The method works as follows:
•Train the network in the usual way using the data in the estimation set


Method of Early Stopping II

•After a period of estimation, the weights and bias levels of the MLP are all fixed and the network operates in its forward mode only. The validation error is measured for each example in the validation subset
•When the validation phase is completed, the estimation is resumed for another period (e.g. 10 epochs) and the process is repeated
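A sketch of this alternation between estimation periods and validation checks. `train_for` and `validation_error` are placeholders for one estimation period of BP and a forward-mode error measurement; stopping once the validation error has not improved for a few periods ("patience") is our addition, since the slides describe the loop but not the exact termination rule.

```python
import numpy as np

def early_stopping(model, X_est, D_est, X_val, D_val,
                   train_for, validation_error,
                   period=10, max_periods=100, patience=5):
    best_err, best_model, bad_periods = np.inf, model, 0
    for _ in range(max_periods):
        model = train_for(model, X_est, D_est, epochs=period)  # estimation period
        err = validation_error(model, X_val, D_val)            # forward mode only
        if err < best_err:
            best_err, best_model, bad_periods = err, model, 0  # keep the best weights so far
        else:
            bad_periods += 1
            if bad_periods >= patience:        # validation error no longer improves
                break
    return best_model, best_err
```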


Leave k-out Validation

•We divide the set of available examples into K subsets
•The model is trained on all the subsets except one, and the validation error is measured by testing it on the subset left out
•The procedure is repeated for a total of K trials, each time using a different subset for validation
•The performance of the model is assessed by averaging the squared error under validation over all the trials of the experiment (see the sketch below)
•There is a limiting case, K = N, in which case the method is called leave-one-out.
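A minimal sketch of the K trials. As before, `train_mlp` and `mlp_error` are placeholders for the training and squared-error routines.

```python
import numpy as np

def k_fold_score(X, D, K, train_mlp, mlp_error, rng=np.random.default_rng(0)):
    folds = np.array_split(rng.permutation(len(X)), K)
    errors = []
    for k in range(K):
        val = folds[k]                                        # held-out subset
        est = np.concatenate([folds[j] for j in range(K) if j != k])
        model = train_mlp(X[est], D[est])                     # train on the rest
        errors.append(mlp_error(model, X[val], D[val]))       # validate on the held-out subset
    return np.mean(errors)      # average validation error over the K trials
```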


Leave k-out Validation II

•An example with K = 4 is shown below

[Figure: the K = 4 case, with a different quarter of the data held out for validation in each of the four trials]


Network Pruning

•To solve real-world problems we need to reduce the free parameters of the model. We can achieve this objective in one of two ways:
•Network growing: in this case we start with a small MLP and then add a new neuron or a layer of hidden neurons only when we are unable to achieve the performance level we want
•Network pruning: in this case we start with a large MLP with adequate performance for the problem at hand, and then prune it by weakening or eliminating certain weights in a principled manner


Network Pruning II

•Pruning can be implemented as a form of regularisation


Regularisation

•In model selection we need to balance two needs:
•To achieve good performance, which usually leads to a complex model
•To keep the complexity of the model manageable, due to practical estimation difficulties and the overfitting phenomenon
•A principled approach to counterbalancing both needs is given by regularisation theory.
•In this theory we assume that the estimation of the model takes place using the usual cost function plus a second term which is called the complexity penalty:


Regularisation II

R(w) = \mathcal{E}_s(w) + \lambda\, \mathcal{E}_c(w)

where R is the total cost function, \mathcal{E}_s is the standard performance measure, \mathcal{E}_c is the complexity penalty and \lambda > 0 is a regularisation parameter.
•Typically one imposes smoothness constraints as a complexity term, i.e. we want to co-minimise the smoothing integral of the kth order:

\mathcal{E}_c(w, k) = \frac{1}{2} \int \left\| \frac{\partial^k}{\partial x^k} F(x, w) \right\|^2 \mu(x)\, dx

where F(x, w) is the function performed by the model and \mu(x) is some weighting function which determines


Regularisation III

the region of the input space where the function F(x, w) is required to be smooth.


Regularisation IV

•Other complexity penalty options include:
•Weight Decay:

\mathcal{E}_c(w) = \| w \|^2 = \sum_{i=1}^{W} w_i^2

where W is the total number of free parameters in the model
•Weight Elimination:

\mathcal{E}_c(w) = \sum_{i=1}^{W} \frac{(w_i / w_0)^2}{1 + (w_i / w_0)^2}

where w_0 is a pre-assigned parameter (both penalties are sketched in code below)
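A small sketch of the two penalties and their gradients, so that they can be added to the BP weight update as g_total = g_standard + lambda * dE_c/dw. The value of w0 is illustrative.

```python
import numpy as np

def weight_decay(w):
    """E_c(w) = sum_i w_i^2 and its gradient."""
    return np.sum(w ** 2), 2.0 * w

def weight_elimination(w, w0=1.0):
    """E_c(w) = sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2) and its gradient."""
    r2 = (w / w0) ** 2
    penalty = np.sum(r2 / (1.0 + r2))
    grad = 2.0 * w / (w0 ** 2 * (1.0 + r2) ** 2)
    return penalty, grad
```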


Regularisation V

•There are other methods which base their decision on which weights to eliminate on the Hessian, H.
•For example:
•The optimal brain damage procedure (OBD)
•The optimal brain surgeon procedure (OBS)
•In this case a weight, w_i, is eliminated when its saliency S_i is small compared with \mathcal{E}_{av}, where S_i is defined as:

S_i = \frac{w_i^2}{2\, [H^{-1}]_{i,i}}
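A small sketch of the saliency computation. How the Hessian H of the cost with respect to the weight vector is obtained is outside the scope of this sketch; here it is assumed to be given.

```python
import numpy as np

def saliencies(w, H):
    """S_i = w_i^2 / (2 [H^-1]_{ii}) for every weight."""
    H_inv = np.linalg.inv(H)
    return w ** 2 / (2.0 * np.diag(H_inv))

def prune_least_salient(w, H):
    """Zero out the weight with the smallest saliency."""
    S = saliencies(w, H)
    i = np.argmin(S)
    w_pruned = w.copy()
    w_pruned[i] = 0.0
    return w_pruned, i, S[i]
```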


Conjugate-Gradient Method

•The conjugate-gradient method is a 2nd-order optimisation method, i.e. we assume that we can approximate the cost function up to second degree in the Taylor series:

f(x) = \frac{1}{2} x^T A x - b^T x + c

where A and b are an appropriate matrix and vector and x is a W-by-1 vector.
•We can find the minimum point by solving the equation:

x^* = A^{-1} b


Conjugate-Gradient Method II

•Given the matrix A, we say that a set of nonzero vectors s(0), …, s(W-1) is A-conjugate if the following condition holds:

s^T(n)\, A\, s(j) = 0   for all n and j, n \neq j

•If A is the identity matrix, conjugacy is the same as orthogonality.
•A-conjugate vectors are linearly independent.


Summary of the Conjugate-Gradient Method

1. Initialisation: Unless prior knowledge of the weight vector w is available, choose the initial value w(0) using a procedure similar to the ones used for the BP algorithm
2. Computation:
1. For w(0), use BP to compute the gradient vector g(0)
2. Set s(0) = r(0) = -g(0)
3. At time step n, use a line search to find \eta(n) that minimises \mathcal{E}_{av}(\eta) sufficiently, where \mathcal{E}_{av}(\eta) represents the cost function \mathcal{E}_{av} expressed as a function of \eta for fixed values of w and s


Summary of the Conjugate-Gradient Method II

4. Test to determine whether the Euclidean norm of the residual r(n) has fallen below a specified value, that is, a small fraction of the initial value ||r(0)||
5. Update the weight vector:

w(n+1) = w(n) + \eta(n)\, s(n)

6. For w(n+1), use BP to compute the updated gradient vector g(n+1)
7. Set r(n+1) = -g(n+1)
8. Use the Polak-Ribiere formula to calculate \beta(n+1):

\beta(n+1) = \max\!\left( \frac{r^T(n+1)\,[\,r(n+1) - r(n)\,]}{r^T(n)\, r(n)},\; 0 \right)


Summary of the Conjugate-Gradient Method III

9. Update the direction vector:

s(n+1) = r(n+1) + \beta(n+1)\, s(n)

10. Set n = n + 1 and go to step 3
3. Stopping Criterion: Terminate the algorithm when the following condition is satisfied:

\| r(n) \| \le \varepsilon\, \| r(0) \|

where \varepsilon is a prescribed small number. A code sketch of the full loop is given below.
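A minimal sketch of the loop with the Polak-Ribiere formula, written for a generic cost function `cost(w)` and gradient `grad(w)` (for an MLP the gradient would come from back-propagation). The backtracking line search with a simple sufficient-decrease test and the steepest-descent restart safeguard are our stand-ins for the "sufficient minimisation" step; tolerances and step sizes are illustrative.

```python
import numpy as np

def line_search(cost, w, s, gdots, eta0=1.0, shrink=0.5, c=1e-4, max_tries=40):
    """Backtracking line search; gdots is the directional derivative grad.s."""
    f0, eta = cost(w), eta0
    for _ in range(max_tries):
        if cost(w + eta * s) <= f0 + c * eta * gdots:
            return eta
        eta *= shrink
    return 0.0

def conjugate_gradient(cost, grad, w0, eps=1e-3, max_iter=1000):
    w = np.asarray(w0, dtype=float)
    r = -grad(w)                                  # residual = negative gradient
    s = r.copy()                                  # initial search direction
    r0_norm = np.linalg.norm(r)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= eps * r0_norm:    # stopping criterion
            break
        if r @ s <= 0:                            # safeguard: restart along -g
            s = r.copy()
        eta = line_search(cost, w, s, gdots=-(r @ s))
        w = w + eta * s                           # step 5
        r_new = -grad(w)                          # steps 6-7 (BP gradient for an MLP)
        beta = max(r_new @ (r_new - r) / (r @ r), 0.0)   # step 8: Polak-Ribiere
        s = r_new + beta * s                      # step 9
        r = r_new
    return w

# usage on the quadratic f(x) = 0.5 x^T A x - b^T x from the earlier slide:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
w_star = conjugate_gradient(lambda x: 0.5 * x @ A @ x - b @ x,
                            lambda x: A @ x - b,
                            np.zeros(2))
print(w_star, np.linalg.solve(A, b))   # the two should agree closely
```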


Advantages & Disadvantages

•MLP and BP are used in Cognitive and Computational Neuroscience modelling, but the algorithm still does not have real neuro-physiological support
•The algorithm can be used to build encoding / decoding and compression systems. Useful for data pre-processing operations
•The MLP with the BP algorithm is a universal approximator of functions
•The algorithm is computationally efficient, with complexity O(W) in the model parameters
•The algorithm has "local" robustness
•The convergence of BP can be very slow, especially in large problems, depending on the method


Advantages & Disadvantages II

•The BP algorithm suffers from the problem of local minima


Types of problems

•The BP algorithm is used in a great variety of problems:
•Time series prediction
•Credit risk assessment
•Pattern recognition
•Speech processing
•Cognitive modelling
•Image processing
•Control
•Etc.

•BP is the standard algorithm against which all other NN algorithms are compared!!
