Lab 10 - Neural Network

A neural network is a system designed to operate like a human brain, consisting of interconnected nodes that process information. It stores information in the form of connection weights between nodes. During training, the weights are continually adjusted until the network outputs match the desired outputs. This document provides an example of building a multi-layer neural network in MATLAB to model a quadratic function, including defining the network architecture, training function and parameters.


19CSIS01H

Data Mining & Warehousing

Lab (10)

Neural Network

What is a neural network?

You’ve probably been using neural networks on a daily basis. When you ask a mobile
assistant such as Google Assistant or Siri to perform a search for you, a neural network is
doing the work. Computer games also use neural networks on the back end, as part of how the
game adjusts to the players, and so do map applications, which use them to process map
images and help you find the quickest way to your destination.

So, a neural network is a system, in software or hardware, that is designed to operate like a
human brain. Before starting, let’s see how neural networks and machine learning are related.

Figure 1: Machine learning vs. Neural Network

Notice that in machine learning, the process of identifying the model is called “learning”,
while in a neural network the systematic process of determining the weights is called the
“learning rule”.

Consider how we memorize things. When we want to remember something, the brain stores
the information; a computer can store it too. Both the brain and the computer can store
information, but the mechanisms are different. A computer stores information at a specific
location, whereas the brain stores it by altering the associations between neurons. The
neuron itself has no storage capability; it just transmits signals from one neuron to another.


Like the human brain, a neural network consists of a large number of connected elements
that mimic neurons and imitate the mechanism of the brain.

Human Brain: a connection of neurons; it stores information using the associations between neurons.
Neural Network: a connection of nodes; it stores information using the connection weights between nodes.

So, what is a connection weight?

Let’s take a node where x1, x2, and x3 are the signals entering the node and y is the output.
w1, w2, and w3 are the corresponding weights of each signal, and b is the bias, which is also
associated with the storage of information (Figure 2).

Simply speaking, the information of a neural network is stored in the form of weights and
biases. Each input signal is multiplied by its weight before entering the node, and the node
forms the weighted sum:

v = (w1 · x1) + (w2 · x2) + (w3 · x3) + b

We can write the weighted sum in matrix form as follows:

W = [w1 w2 w3],  x = [x1; x2; x3],  so v = Wx + b

The output y of the node is then produced by passing v through an activation function φ,
which determines the behavior of the node (there are different types: linear, sigmoid, etc.):

y = φ(v) = φ(Wx + b)
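As a concrete illustration of the weighted sum and activation, here is a short Python/NumPy sketch (the input, weight, and bias values below are made up, not taken from the lab):

```python
import numpy as np

# Illustrative values (not from the lab): three input signals,
# their connection weights, and a bias.
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])
b = 1.0

# Weighted sum: v = w1*x1 + w2*x2 + w3*x3 + b
v = w @ x + b                             # 0.5 - 2 + 6 + 1 = 5.5

# Two common activation functions:
y_linear = v                              # linear: y = v
y_sigmoid = 1.0 / (1.0 + np.exp(-v))      # sigmoid: squashes v into (0, 1)
```

With the linear activation the node simply outputs the weighted sum, while the sigmoid squashes the same v into the interval (0, 1).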


Types of Neural Networks:

1. Single-layered NN

Figure 3: single-layer NN

2. Multi-layer NN

Figure 4: Multi-layer NN

3. Deep NN

Figure 5: Deep NN

Let’s start with an example to illustrate how a neural network works.

Figure 6: Example of multi-layer NN

Here is a neural network with a single hidden layer. Usually we would use a sigmoid
activation function, but for simplicity we’ll use a linear activation function.

First, we’ll calculate the weighted sum for the hidden nodes:

v = Wx + b = [3 1; 2 4][1; 2] + [1; 1] = [3(1) + 1(2); 2(1) + 4(2)] + [1; 1] = [5; 10] + [1; 1] = [6; 11]

With the linear activation, [6; 11] is also the output of the hidden layer.

N.B.: The output of the hidden layer is transmitted to the next layer, which is the output
layer. So the input to the first node of the output layer is 6 and the input to the second node
of the output layer is 11.

Second, let’s calculate the output for the output layer:

[3 2; 5 1][6; 11] + [1; 1] = [3(6) + 2(11); 5(6) + 1(11)] + [1; 1] = [40; 41] + [1; 1] = [41; 42]


N.B.: We used a linear activation function for simple illustration, but it is not effective in
practice, because a multi-layer neural network with linear activations collapses into the
equivalent of a single-layer network. That means the hidden layers become ineffective when
we use a linear activation function.
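The two matrix steps of this example can be checked with a short script (a Python/NumPy sketch rather than MATLAB, using the weights and biases of Figure 6):

```python
import numpy as np

# Weights and biases from the worked example above.
W1 = np.array([[3.0, 1.0],
               [2.0, 4.0]])      # input -> hidden
b1 = np.array([1.0, 1.0])
W2 = np.array([[3.0, 2.0],
               [5.0, 1.0]])      # hidden -> output
b2 = np.array([1.0, 1.0])

x = np.array([1.0, 2.0])         # input signals

# With a linear activation, each layer's output is just its weighted sum.
hidden = W1 @ x + b1             # [6, 11]
output = W2 @ hidden + b2        # [41, 42]

# The collapse mentioned above: the whole network equals one affine map,
# output = (W2 W1) x + (W2 b1 + b2).
W_eq = W2 @ W1
b_eq = W2 @ b1 + b2
```

Since `W_eq @ x + b_eq` reproduces exactly the same output, the two linear layers behave like a single layer, which is why a non-linear activation is needed for the hidden layer to add anything.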

Referring to Figure 1, there are several learning methods: supervised, unsupervised, and
reinforcement learning. We are going to use supervised learning, which means training the
neural network on examples with already known correct outputs (each training pattern is
formatted as an input and its desired output).

Figure 7: Supervised learning

N.B.: This is similar to the learning process of supervised machine learning, in which we
modify the model based on the training data to reduce the difference between the model’s
output and the correct output. In a neural network, the information is stored in the weights,
so to train the network on new information we have to modify the weights.

The systematic way of modifying the weights is called the “learning rule”. For a single-layer
neural network it is the “delta rule”, which adjusts each weight in proportion to the error,
i.e. the difference between the correct output and the network’s output.

In step (2), the network computes its output for a training input, and the error is obtained as
the difference between the correct output and the network’s output. In steps (3) and (4),
each weight is updated from its previous value using the delta rule:

updated weight = previous weight + (learning rate × error × output from node j)

that is, w_ij ← w_ij + α · e_i · x_j, where e_i is the error at output node i, x_j is the output
from node j, and α is the learning rate.

N.B.:

 Learning rate: how much the weights are changed at each update; its value is a number
between 0 and 1.

 Epoch: when all the training data has gone from step 2 to step 5 once, that is called an
epoch. That means one iteration over all the data represents one epoch.
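The delta rule can be sketched as a short training loop for a single linear node (the training set and the learning rate α = 0.1 below are made up for illustration):

```python
import numpy as np

# Toy training set (made up): the correct outputs follow d = 2*x1 + 1*x2.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
D = np.array([2.0, 1.0, 3.0])    # desired (correct) outputs

w = np.zeros(2)                  # initial weights
alpha = 0.1                      # learning rate, between 0 and 1

for epoch in range(200):         # each pass over all the data = one epoch
    for x, d in zip(X, D):
        y = w @ x                # node output (linear activation)
        e = d - y                # error = correct output - output
        w = w + alpha * e * x    # delta rule: update scaled by error and input
```

After enough epochs the weights approach [2, 1], the values that make the node reproduce the correct outputs.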


Neural Network in Matlab

This example builds a multi-layer neural network that models the function y = x^2. The
network is composed of an input layer of 1 neuron representing the x value, an output
layer of 1 neuron representing the y value, and a hidden layer with 24 neurons.

The first step is to set the input and output values as follows:


Figure 8: Matlab Code (1)
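The listing in Figure 8 is an image; in spirit, it defines an input vector P and a target vector T holding the squares of the inputs. A Python/NumPy equivalent (the sample range and count here are assumptions for illustration):

```python
import numpy as np

# Input samples and their targets for y = x^2
# (the range [-1, 1] and 21 samples are illustrative choices).
P = np.linspace(-1.0, 1.0, 21)   # input vector: x values
T = P ** 2                       # target vector: y = x^2
```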

The second step is to use the ‘newff’ function to design the network. This is done by
specifying the number of hidden neurons and output neurons, setting the activation
functions and the training function, and then passing some training parameters.

Figure 9: Matlab Code (2)


Referring to Figure 9, the newff function creates a feed-forward backpropagation network. It
uses a random number generator to create the initial values of the network weights. The
function takes the following parameters:

net = newff (PR, [S1 S2 … SNl], {TF1, TF2, … , TFNl}, BTF, BLF, PF)

 PR: a matrix of min and max values for the input elements
 Si: the number of neurons in the ith layer
 Nl: the number of layers
 TFi: the transfer function of the ith layer. The default is 'tansig' for hidden layers and
'purelin' for the output layer.

 BTF: the backpropagation training function, default = 'traingdx'.
 BLF: the backpropagation learning function, default = 'learngdm'.
 PF: the performance function, default = 'mse'.

It returns an Nl-layer feed-forward backpropagation network.

N.B.:
 The transfer functions TFi can be any differentiable transfer function, such as tansig,
logsig, or purelin.
 The training function BTF can be any of the backprop training functions, such as
trainlm, trainbfg, trainrp, traingd, etc.

Figure 10: Network training parameters

The network’s training parameters (net.trainParam) are then set:

 trainParam: this property defines the parameters and values of the current training
function; the fields of net.trainParam depend on which training function is in use.
 The most commonly used of these parameters (components of trainParam) are:
 net.trainParam.epochs: tells the algorithm the maximum number of epochs to train.
 net.trainParam.show: tells the algorithm how many epochs there should be between
each presentation of the performance.

Training occurs according to the trainlm training parameters, shown here with their default
values:
 net.trainParam.epochs = 100 (maximum number of epochs to train)
 net.trainParam.show = 25 (epochs between showing progress)
 net.trainParam.goal = 0 (performance goal)
 net.trainParam.time = inf (maximum time to train, in seconds)
 net.trainParam.min_grad = 1e-6 (minimum performance gradient)
 net.trainParam.max_fail = 5 (maximum validation failures)
The third step is to train the network by adding this line of code:

net1 = train (net, P, T)


The train function takes the following parameters:

 net: the initial MLP network generated by newff.
 P: the network’s measured input vector.
 T: the network targets (measured output vector), default = zeros.

It returns net1, the new (trained) network object.

Figure 11: Matlab Code, step 3 illustration

Figure 12: output of step 3


Figure 13: training performance

The fourth step is to simulate the network using the sim function:

a = sim (net1, P)

This function takes the following parameters:

 net1: the final MLP object.
 P: the input vector.

It returns the simulated output.

 To test how well the resulting MLP net1 approximates the data, the sim command is
applied. The measured output a is the simulated output of the MLP network.
 The error is the difference e = T – a at each measured point. The final validation must be
done with independent data.
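Since the lab’s MATLAB listings appear only as figures, here is a hedged end-to-end sketch of the same four-step workflow in Python/NumPy: build a 1-24-1 network (tanh hidden layer, linear output, matching the 'tansig'/'purelin' defaults), train it on y = x^2, then simulate it and form the error e = T − a. Plain batch gradient descent on the mse is used instead of trainlm’s Levenberg–Marquardt to keep the sketch short, and all sample ranges and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 -- training data for y = x^2 (sample range chosen for illustration).
P = np.linspace(-1.0, 1.0, 21).reshape(1, -1)    # inputs, shape (1, 21)
T = P ** 2                                       # targets, shape (1, 21)

# Step 2 -- a 1-24-1 network: tanh ("tansig") hidden layer,
# linear ("purelin") output layer, random initial weights.
W1 = rng.normal(0.0, 0.5, (24, 1)); b1 = rng.normal(0.0, 0.5, (24, 1))
W2 = rng.normal(0.0, 0.5, (1, 24)); b2 = np.zeros((1, 1))

# Step 3 -- train by batch gradient descent on the mse performance function.
lr, epochs, n = 0.05, 30000, P.shape[1]
for _ in range(epochs):
    H = np.tanh(W1 @ P + b1)             # hidden layer output
    A = W2 @ H + b2                      # network output
    dA = 2.0 * (A - T) / n               # d(mse)/dA
    dW2 = dA @ H.T; db2 = dA.sum(axis=1, keepdims=True)
    dH = (W2.T @ dA) * (1.0 - H ** 2)    # backprop through tanh
    dW1 = dH @ P.T; db1 = dH.sum(axis=1, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Step 4 -- "simulate" the trained network, like a = sim(net1, P).
a = W2 @ np.tanh(W1 @ P + b1) + b2
e = T - a                                # error at each measured point
mse = np.mean(e ** 2)
```

After training, the simulated output a should track the parabola closely on the training points; as the lab notes, the final validation must still be done with independent data.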

