
Dr. Qadri Hamarsheh

Supervised Learning in Neural Networks (Part 3)

Supervised Learning in Neural Networks – Using MATLAB

 The MATLAB® Neural Network Toolbox implements some of the most popular training algorithms, encompassing both the original gradient-descent methods and faster training methods.
 Batch Gradient Descent (traingd):
o The original algorithm, and the slowest.
o Weights and biases are updated in the direction of the negative gradient of the performance function.
o Selected by setting trainFcn to traingd:
net = newff(minmax(p), [3 1], {'tansig', 'purelin'}, 'traingd');
 Batch Gradient Descent with Momentum (traingdm):
o Faster convergence than traingd.
o Momentum allows the network to respond not only to the local gradient, but also to recent trends in the error surface.
o Momentum allows the network to ignore small features in the error surface; without momentum, a network may get stuck in a shallow local minimum.
o Selected by setting trainFcn to traingdm (a short parameter-setup sketch is given after this item):
net = newff(minmax(p), [3 1], {'tansig', 'purelin'}, 'traingdm');
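A minimal sketch of a typical traingdm parameter setup before calling train is given below; the learning rate, momentum constant, epoch limit, and performance goal are assumed example values, not values prescribed in this lecture.
% Sketch: typical parameter setup for traingdm (assumed example values)
net = newff(minmax(p), [3 1], {'tansig', 'purelin'}, 'traingdm');
net.trainParam.lr = 0.05;       % learning rate (assumed example value)
net.trainParam.mc = 0.9;        % momentum constant, used by traingdm (assumed example value)
net.trainParam.epochs = 1000;   % maximum number of training epochs
net.trainParam.goal = 1e-3;     % stop training when this performance goal is reached
net = train(net, p, t);         % p = training inputs, t = targets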

Faster Training
 The MATLAB® Neural Network Toolbox also implements some of the faster training methods, with which training can converge from ten to one hundred times faster than with traingd and traingdm.
o These faster algorithms fall into two categories (selection is shown in the sketch after this list):
1. Heuristic techniques: developed from the analysis of the performance of the standard gradient-descent algorithm, e.g. traingda, traingdx, and trainrp.
2. Numerical optimization techniques: make use of standard numerical optimization methods, e.g. conjugate gradient (traincgf, traincgb, traincgp, trainscg), quasi-Newton (trainbfg, trainoss), and Levenberg-Marquardt (trainlm).
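As with the slower methods, a faster algorithm is selected through the trainFcn argument of newff. The sketch below is an assumed example using Levenberg-Marquardt (trainlm); the layer sizes, transfer functions, and parameter values are illustrative, not taken from this lecture.
% Sketch: selecting the Levenberg-Marquardt algorithm (illustrative values)
net = newff(minmax(p), [3 1], {'tansig', 'purelin'}, 'trainlm');
net.trainParam.epochs = 300;    % maximum number of epochs (assumed example value)
net.trainParam.goal = 1e-5;     % performance goal (assumed example value)
net = train(net, p, t);         % typically converges in far fewer epochs than traingd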


Comparison of Training Algorithms

Modeling Logical XOR Function


 Solving the XOR problem using a simple backpropagation network.

% Solution:
% Define the training inputs and targets
p = [0 0 1 1; 0 1 0 1];
t = [0 0 0 1];
% Create the backpropagation network
net = newff(minmax(p), [4 1], {'logsig', 'logsig'}, 'traingdx');
% Train the backpropagation network
net.trainParam.epochs = 500; % training stops if epochs reached
net.trainParam.show = 1; % plot the performance function at every epoch
net = train(net, p, t);
% Testing the performance of the trained backpropagation network
a = sim(net, p)
% Example output:
% a = 0.0002 0.0011 0.0001 0.9985
% t = 0      0      0      1
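Because the logsig output layer produces continuous values in (0, 1), the outputs can be thresholded to obtain binary predictions. The short sketch below is an assumed addition, not part of the original lecture code.
% Sketch: threshold the continuous outputs to binary predictions (assumed addition)
y = round(a);            % values above 0.5 become 1, values below become 0
errors = sum(y ~= t)     % number of misclassified patterns; 0 for a well-trained network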
