ML 3
Name: Abdullah Dar
Roll #: 022
Dep.: BS (CS) 6th
Sub.: Machine Learning
Submitted to: Sir Shahan
Question:
Describe the mathematical model of ANN and implement through any tool
like, MATLAB or Python.
Ans:
Mathematical Model Of ANN:
An artificial neuron receives one or more inputs, which may be any real-valued data or signals. In the first step, the weighted summation of these inputs is carried out; the final decision is based on this sum. To take the decision, we apply an activation function, which may be a linear function or any nonlinear function. The result of the activation function is the output signal. The figure below shows the mathematical model of the artificial neuron in more detail.
Mathematically,
uk = Σ_{j=0}^{p} wkj xj
yk = f(uk − θk)
The output of the linear combiner, uk, is therefore the sum of the products of all the inputs with their respective synaptic weights. The final output yk is obtained by applying the activation function to this sum.
To make this simple and easy to understand, we use a step function with the following conditions, which makes the output easy to compute: if the total sum uk is less than zero, the output is 0; and if uk is greater than or equal to zero, the output is 1.
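The summation and threshold rule above can be sketched in Python. This is a minimal illustration; the input, weight, and threshold values are made up for the example:

```python
import numpy as np

def neuron_output(x, w, theta):
    """Artificial neuron: uk = sum_j wkj*xj, then yk = f(uk - theta),
    where f is a hard threshold (step) activation."""
    u = np.dot(w, x)                  # linear combiner
    return 1 if u - theta >= 0 else 0  # step activation

# Illustrative values (not from the text)
x = np.array([1.0, 0.5])   # inputs
w = np.array([0.4, 0.6])   # synaptic weights
print(neuron_output(x, w, theta=0.5))  # u = 0.7, u - theta = 0.2 >= 0, so 1
```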
In a multilayer network, one or more hidden layers are used in order to obtain more accurate results. Overall, this network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons.
The input signals are propagated in a forward direction on a layer-by-layer basis: the output signals of the first layer are used as inputs to the second layer, the output signals of the second layer as inputs to the third layer, and so on for the rest of the network. The figure below shows a multilayer feed-forward network that has 10 source nodes, 4 hidden neurons, and 2 output neurons.
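A forward pass through a network of that shape (10 inputs, 4 hidden neurons, 2 outputs) can be sketched in Python. The random weights here are placeholders for illustration, not a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from the figure: 10 source nodes, 4 hidden, 2 output neurons
W1 = rng.standard_normal((4, 10))   # input -> hidden weights (random placeholders)
W2 = rng.standard_normal((2, 4))    # hidden -> output weights (random placeholders)

def forward(x):
    """Propagate an input vector layer by layer through the network."""
    h = sigmoid(W1 @ x)   # hidden-layer activations
    y = sigmoid(W2 @ h)   # output-layer activations
    return y

x = rng.standard_normal(10)
print(forward(x).shape)   # (2,)
```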
Error correction:
If the system output is y, and the desired system output is known to be d, the error signal can be
defined as,
e = d − y
In the most direct approach, this error value is used to adjust the weights of the artificial neural network directly. Error-correction learning algorithms attempt to minimize this error signal at each training iteration. The most popular learning algorithm for error-correction learning is the backpropagation algorithm. The process continues until the error is zero or within the desired threshold. When the desired output is achieved, it means the ANN model has been successfully trained.
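A minimal sketch of error-correction learning for a single threshold neuron, assuming a delta-style update w ← w + η·e·x with e = d − y. The learning rate, epoch count, and the logical-AND task are illustrative choices, not from the text:

```python
import numpy as np

def train_delta(X, d, lr=0.1, epochs=100):
    """Error-correction learning: at each step adjust the weights by
    lr * e * x, where e = d - y is the error signal."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = 1.0 if w @ x >= 0 else 0.0   # threshold neuron output
            e = target - y                   # error signal e = d - y
            w += lr * e * x                  # weight adjustment
    return w

# Learn logical AND; the first column is a constant 1 acting as a bias input
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
d = np.array([0, 0, 0, 1], dtype=float)
w = train_delta(X, d)
preds = [1.0 if w @ x >= 0 else 0.0 for x in X]
print(preds)
```

Because the AND task is linearly separable, the error signal is driven to zero and the trained neuron reproduces the desired outputs.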
Methods Of ANN Implementation In MATLAB:
To define a pattern recognition problem, arrange a set of input vectors (predictors) as columns in
a matrix. Then arrange another set of response vectors indicating the classes to which the
observations are assigned.
When there are only two classes, each response has two elements, 0 and 1, indicating which class
the corresponding observation belongs to. For example, you can define a two-class classification
problem as follows:
predictors = [7 10 3 1 6; 5 8 1 1 6; 6 7 1 1 6];
responses = [0 0 1 1 0; 1 1 0 0 1];
The data consists of five observations, each with three features, classified into one of two classes.
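The same two-class example can be mirrored in Python with NumPy, for readers following along in Python rather than MATLAB. The column-per-observation layout matches the MATLAB convention above:

```python
import numpy as np

# Five observations, each with three features; columns are observations
predictors = np.array([[7, 10, 3, 1, 6],
                       [5,  8, 1, 1, 6],
                       [6,  7, 1, 1, 6]])
# One-hot responses: exactly one 1 per column marks the class
responses = np.array([[0, 0, 1, 1, 0],
                      [1, 1, 0, 0, 1]])

# Recover the class index of each observation from its one-hot column
labels = responses.argmax(axis=0)
print(labels)   # [1 1 0 0 1]
```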
When predictors are to be classified into N different classes, the responses have N elements. For each response, one element is 1 and the others are 0. For example, a problem that divides the corners of a 5-by-5-by-5 cube into three classes would be defined in the same way, with three-element response vectors.
For MATLAB, the following excerpt is from a pattern-recognition training script for the glass data set:
x = glassInputs;
t = glassTargets;
net.divideParam.testRatio = 15/100;
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, ploterrhist(e)
%figure, plotconfusion(t,y)
%figure, plotroc(t,y)
You can save the script and then run it from the command line to reproduce the results of the previous training session. You can also edit the script to customize the training process. Here, we follow each step in the script.
Select Data
The script assumes that the predictor and response vectors are already loaded into the workspace.
If the data is not loaded, you can load it as follows:
load glass_dataset
This command loads the predictors glassInputs and the responses glassTargets into the
workspace.
This data set is one of the sample data sets that is part of the toolbox. You can also see a list of
all available data sets by entering the command help nndatasets. You can load the variables
from any of these data sets using your own variable names. For example, the command
[x,t] = glass_dataset;
will load the glass predictors into the array x and the glass responses into the array t.
Choose Training Algorithm
Define the training algorithm.
trainFcn = 'trainscg'; % Scaled conjugate gradient backpropagation.
Create Network
Create the network. The default network for pattern recognition (classification)
problems, patternnet, is a feedforward network with the default sigmoid transfer function in the
hidden layer, and a softmax transfer function in the output layer. The network has a single hidden
layer with ten neurons (default).
The network has two output neurons, because there are two response values (classes) associated
with each input vector. Each output neuron represents a class. When an input vector of the
appropriate class is applied to the network, the corresponding neuron should produce a 1, and the
other neurons should output a 0.
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize, trainFcn);
Divide Data
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
With these settings, the predictor vectors and response vectors are randomly divided, with 70% used for training, 15% for validation, and 15% for testing.
You can also compute the fraction of misclassified observations. In this example, the model has
a very low misclassification rate.
tind = vec2ind(t);
yind = vec2ind(y);
percentErrors = sum(tind ~= yind)/numel(tind)
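An equivalent misclassification check in Python with NumPy: MATLAB's vec2ind returns the row index of the 1 in each one-hot column, and argmax over the class axis does the same (0-based here). The target and output matrices below are hypothetical stand-ins for t and y:

```python
import numpy as np

t = np.array([[0, 0, 1, 1, 0],
              [1, 1, 0, 0, 1]])            # true one-hot responses
y = np.array([[0.1, 0.2, 0.9, 0.6, 0.3],
              [0.9, 0.8, 0.1, 0.4, 0.7]])  # hypothetical network outputs

tind = t.argmax(axis=0)                    # class index per observation
yind = y.argmax(axis=0)                    # predicted class per observation
percent_errors = np.mean(tind != yind)     # fraction misclassified
print(percent_errors)   # 0.0 -- every prediction matches here
```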
It is also possible to calculate the network performance only on the test set, by using the testing
indices, which are located in the training record.
tInd = tr.testInd;
tstOutputs = net(x(:,tInd));
tstPerform = perform(net,t(tInd),tstOutputs)
View Network
view(net)
Each time a neural network is trained, it can arrive at a different solution due to random initial weight and bias values and different divisions of the data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain several times.
The End