
Hands-on Training on Artificial Neural Networks using MATLAB

By
Dr. M. Sivachitra
Dr. M. Karthik
Department of EEE
Contents
• Introduction to ANN
• Dataset information
• Dataset preparation - sample
• ELM for Prediction (Wind profile)
• ELM for Classification (Haberman's, EEG and IS data) - both normalised and unnormalised
• Complex-valued neural networks (CVNN)
• PE_CELM for Classification (IS data)
• Conclusion
Introduction to Artificial Neural Networks
A new sort of computer
• What are (everyday) computer systems good at... and not so good at?
Good at:
• Rule-based systems: doing what the programmer wants them to do

Not so good at:
• Dealing with noisy data
• Dealing with unknown environment data
• Massive parallelism
• Fault tolerance
• Adapting to circumstances
Neural networks
• Neural network: an information processing paradigm inspired by biological nervous systems, such as our brain
• Structure: a large number of highly interconnected processing elements (neurons) working together
• Like people, they learn from experience (by example)
Neural networks
• Neural networks are configured for a specific application, such as pattern recognition or data classification, through a learning process
• In a biological system, learning involves adjustments to the synaptic connections between neurons; the same holds for artificial neural networks (ANNs)
Where can neural network systems help?
• When we can't formulate an algorithmic solution.
• When we can get lots of examples of the behavior we require ('learning from experience').
• When we need to pick out the structure from existing data.
BIOLOGICAL (MOTOR) NEURON
[Figure: structure of a biological motor neuron]

ASSOCIATION OF BIOLOGICAL NET WITH ARTIFICIAL NET
[Figure: correspondence between biological and artificial neuron components]
BUILDING BLOCKS OF ARTIFICIAL NEURAL NET
• Network architecture (connection between neurons)
• Setting the weights (training)
• Activation function
ARTIFICIAL NEURAL NET
[Figure: two input neurons X1 and X2 connected to an output neuron Y through weights W1 and W2]
The figure shows a simple artificial neural net with two input neurons (X1, X2) and one output neuron (Y). The interconnecting weights are given by W1 and W2.
OPERATION OF A NEURAL NET
[Figure: the input vector x = (x0, x1, ..., xn) is multiplied by the weight vector w = (w0j, w1j, ..., wnj); the weighted sum is passed through activation function f to give output y; x0 with weight w0j acts as the bias]
Architecture
[Figure: three-layer network with input units Xi, hidden units Zj and output units Zk; weights Vi,j connect input to hidden and Wj,k connect hidden to output]
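As a minimal MATLAB sketch of how a signal flows through the architecture shown above (all sizes and values are illustrative, not taken from the workshop files):

```matlab
% Forward pass through the three-layer architecture:
% input units Xi, hidden units Zj, output units Zk,
% with weights V(i,j) input-to-hidden and W(j,k) hidden-to-output.
n = 3; h = 5; m = 2;                 % illustrative layer sizes
X = rand(1, n);                      % one input record
V = randn(n, h);                     % input-to-hidden weights V(i,j)
W = randn(h, m);                     % hidden-to-output weights W(j,k)
f = @(s) 1 ./ (1 + exp(-s));         % sigmoid activation
Zj = f(X * V);                       % hidden layer activations
Zk = f(Zj * W);                      % output layer activations
```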
Basics of NN
• Input Vector
• Output Vector
• Activation
• Signal function or Activation function
• Weight
• Bias
PROCESSING OF AN ARTIFICIAL NET
The neuron is the basic information processing unit of a NN. It consists of:
1. A set of links, describing the neuron inputs, with weights W1, W2, ..., Wm.
2. An adder function (linear combiner) for computing the weighted sum of the (real-valued) inputs:
u = \sum_{j=1}^{m} W_j X_j
3. An activation function for limiting the amplitude of the neuron output:
y = \varphi(u + b)
where b denotes the bias.
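A minimal MATLAB sketch of this processing unit (the input and weight values are illustrative):

```matlab
% One neuron: weighted sum of the inputs plus bias, then activation.
X = [0.5; 0.9];                  % inputs X1..Xm (m = 2 here)
W = [0.3; -0.7];                 % weights W1..Wm
b = 0.1;                         % bias
u = W' * X;                      % adder: u = sum_j W_j * X_j
y = 1 / (1 + exp(-(u + b)));     % activation: y = phi(u + b), binary sigmoid
```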
MULTI LAYER ARTIFICIAL NEURAL NET
• INPUT: records without the class attribute, with normalized attribute values.
• INPUT VECTOR: X = {x1, x2, ..., xn}, where n is the number of (non-class) attributes.
• INPUT LAYER: there are as many nodes as non-class attributes, i.e. as the length of the input vector.
• HIDDEN LAYER: the number of nodes in the hidden layer and the number of hidden layers depend on the implementation.
LAYER PROPERTIES
• Input layer: each input unit may be designated by an attribute value possessed by the instance.
• Hidden layer: not directly observable; provides nonlinearities for the network.
• Output layer: encodes the possible output values.
TRAINING PROCESS
• Supervised training - providing the network with a series of sample inputs and comparing the output with the expected responses.
• Unsupervised training - the most similar input vectors are assigned to the same output unit.
• Reinforcement training - the right answer is not provided, but an indication of whether the output is 'right' or 'wrong' is.
ACTIVATION FUNCTION
Common activation functions:
(A) Identity
(B) Binary step
(C) Bipolar step
(D) Binary sigmoidal
(E) Bipolar sigmoidal
(F) Ramp
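As a sketch, the six functions can be written as MATLAB anonymous functions (the variable names are illustrative):

```matlab
% The six activation functions as MATLAB anonymous functions.
identity   = @(x) x;                               % (A) identity
binstep    = @(x) double(x >= 0);                  % (B) binary step: 0 or 1
bipstep    = @(x) sign(x);                         % (C) bipolar step: -1, 0 or +1
binsigmoid = @(x) 1 ./ (1 + exp(-x));              % (D) binary sigmoidal, range (0,1)
bipsigmoid = @(x) (1 - exp(-x)) ./ (1 + exp(-x));  % (E) bipolar sigmoidal, range (-1,1)
ramp       = @(x) min(max(x, 0), 1);               % (F) ramp: linear, clipped to [0,1]
```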
A FEW APPLICATIONS OF NEURAL NETWORKS
[Figure: examples of neural network application areas]
CONSTRUCTING AN ANN
• Determine the network properties:
  • Network topology
  • Types of connectivity
  • Order of connections
  • Weight range
• Determine the node properties:
  • Activation range
• Determine the system dynamics:
  • Weight initialization scheme
  • Activation-calculating formula
  • Learning rule
PROBLEM SOLVING
• Select a suitable NN model based on the nature of the problem.
• Construct a NN according to the characteristics of the application domain.
• Train the neural network with the learning procedure of the selected model.
• Use the trained network for making inferences or solving problems.
Applications
• Haberman's survival classification
• EEG signal classification
• Wind profile prediction
• Image segmentation classification - provided in the ELM manual
HABERMAN'S SURVIVAL DATA (UCI dataset)
Input attributes:
1. Age of patient at time of operation
2. Patient's year of operation
3. Number of positive axillary nodes detected
Output attribute:
Survival status (class attribute)
1 = the patient survived 5 years or longer
2 = the patient died within 5 years
SAMPLE INFORMATION
Data has been collected from the UCI machine learning repository.
• No. of classes: 2
• Total samples: 305
• Training samples: 244
• Testing samples: 61

Attribute   Domain
Age         [30, 83]
Year        [1958, 1970]
Positive    [0, 52]
Survival    [Positive, Negative]
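A sketch of how such a sample file could be prepared in MATLAB, assuming the comma-separated haberman.data file from the UCI repository (the random split is illustrative):

```matlab
% Load Haberman's data: columns are age, year, positive nodes, class.
D = csvread('haberman.data');            % assumes the UCI CSV file
X = D(:, 1:3);                           % input attributes
T = D(:, 4);                             % survival class (1 or 2)

% Min-max normalisation of each attribute to [0, 1]
Xn = (X - min(X)) ./ (max(X) - min(X));

% Random 244 / 61 train/test split
idx = randperm(size(Xn, 1));
Xtr = Xn(idx(1:244), :);   Ttr = T(idx(1:244));
Xte = Xn(idx(245:end), :); Tte = T(idx(245:end));
```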
PLANNING AND RELAXED DATA (UCI dataset)
Input attributes:
12 wavelet coefficients obtained through wavelet packet analysis in the 7-13 Hz frequency band
Output attribute:
Mental state (class attribute)
1 = planning state
2 = relaxed state
DATASET INFORMATION
• Data has been collected from the UCI machine learning repository
• No. of classes: 2
• Total samples: 182
• Training samples: 91
• Testing samples: 91
• Training data 50% and testing data 50%, with two-fold cross validation
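A sketch of the two-fold cross validation scheme; trainClassifier and predictClassifier below are hypothetical placeholders for the chosen model (e.g. ELM), not real library calls:

```matlab
% Two-fold cross validation: 91 training / 91 testing samples,
% then the folds swap roles and the accuracies are averaged.
idx   = randperm(182);
folds = {idx(1:91), idx(92:182)};
acc   = zeros(1, 2);
for f = 1:2
    tr = folds{f};  te = folds{3 - f};          % swap roles on the 2nd pass
    mdl    = trainClassifier(X(tr,:), T(tr));   % hypothetical training call
    Y      = predictClassifier(mdl, X(te,:));   % hypothetical prediction call
    acc(f) = mean(Y == T(te));
end
meanAcc = mean(acc);                            % reported testing accuracy
```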
WIND PROFILE PREDICTION - DATASET INFORMATION
The sample data is obtained from the Iowa (USA) Department of Transportation, from the website
http://mesonet.agron.iastate.edu/request/awos/1min.php
The data were averaged over 1-hour intervals.
• Total no. of samples: 600
• Training samples: 500
• Testing samples: 100
• No. of input features: 5
• No. of targets: 1
MODEL FOR PREDICTION - WIND PROBLEM
Inputs: a1+ib1 (1st hr), a2+ib2 (2nd hr), ..., a5+ib5 (5th hr)
Target: a6+ib6 (6th hr)
For a single sample, each feature and the target are one-hour averages.
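As a sketch, assuming w is a series of hourly-averaged complex values a+ib (the encoding of the two measured quantities into one complex number per hour is an assumption here), the 600 samples could be formed with a sliding window:

```matlab
% Build complex-valued samples: 5 consecutive hourly averages as inputs,
% the 6th hour as target. w must hold at least N+5 hourly values.
N = 600;
X = complex(zeros(N, 5));  T = complex(zeros(N, 1));
for k = 1:N
    X(k, :) = w(k:k+4).';      % inputs: 1st..5th hour of the window
    T(k)    = w(k+5);          % target: 6th hour
end
Xtr = X(1:500, :);   Ttr = T(1:500);    % 500 training samples
Xte = X(501:600, :); Tte = T(501:600);  % 100 testing samples
```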
EXTREME LEARNING MACHINE
• ELM is a simple, tuning-free learning algorithm for single-hidden-layer feedforward neural networks
• The learning speed of ELM is extremely fast
ELM ARCHITECTURE
[Figure: single-hidden-layer feedforward network trained by ELM]
LEARNING STEPS FOR ELM
The three-step learning procedure is summarized as follows. Given a training data set, let G(a, b, x) be the output function of a hidden node and L the number of hidden nodes.
Step 1: Generate random hidden node parameters (a_i, b_i), i = 1, ..., L.
Step 2: Compute the hidden layer output matrix H.
Step 3: Find the output weight vector \beta = H^{\dagger} T, where H^{\dagger} denotes the Moore-Penrose generalized inverse of the matrix H and T is the matrix of training targets.
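A condensed MATLAB sketch of these three steps with sigmoid hidden nodes (the workshop itself uses the code from the ELM manual; the names and sizes here are illustrative):

```matlab
% ELM training: X is an N-by-n input matrix, T an N-by-m target matrix.
L = 20;                                 % number of hidden nodes
[N, n] = size(X);
A = rand(n, L) * 2 - 1;                 % step 1: random input weights a_i
b = rand(1, L);                         %         and random biases b_i
H = 1 ./ (1 + exp(-(X * A + repmat(b, N, 1))));  % step 2: hidden output matrix H
beta = pinv(H) * T;                     % step 3: beta = pinv(H) * T (Moore-Penrose)
Y = H * beta;                           % outputs of the trained network
```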
Introduction to Complex-Valued Neural Networks (CVNNs)
• Complex-valued neural networks deal with complex-valued data using complex-valued weights and complex-valued neuron activation functions
• They can process magnitude and phase information effectively
• They can naturally learn complex-valued data
• They have better nonlinear processing ability
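A small MATLAB sketch of a single complex-valued neuron (all values illustrative); note that both the magnitude and the phase of the signal pass through the computation:

```matlab
% One complex-valued neuron: complex inputs, weights, bias and activation.
z = [0.4+0.3i; -0.2+0.7i];     % complex-valued input vector
W = [0.5-0.1i;  0.3+0.6i];     % complex-valued weights
b = 0.1+0.2i;                  % complex-valued bias
u = W.' * z + b;               % complex weighted sum (.' = no conjugation)
y = tanh(u);                   % a fully complex activation function
mag = abs(y);  ph = angle(y);  % magnitude and phase of the output
```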
Synthetic Function Approximation Problem
The synthetic function to be approximated is given by
f(z) = z_3 + 10 z_1 z_4 + \frac{z_2^2}{z_1}
where the z_i (i = 1, ..., 4) are the complex-valued input variables.
• Total no. of samples: 4000
• Training samples: 3000
• Testing samples: 1000
• No. of input features: 4
• No. of targets: 1
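A sketch of generating the 4000 samples in MATLAB; the input distribution is an assumption (random complex points, kept away from z1 = 0 so that the z2^2/z1 term stays bounded):

```matlab
% Generate samples of f(z) = z3 + 10*z1*z4 + z2^2/z1.
N = 4000;
Z = (rand(N, 4) - 0.5) + 1i * (rand(N, 4) - 0.5);  % 4 complex inputs/sample
small = abs(Z(:, 1)) < 0.1;                        % keep z1 away from 0
Z(small, 1) = 0.1 + 0i;
F = Z(:, 3) + 10 * Z(:, 1) .* Z(:, 4) + Z(:, 2).^2 ./ Z(:, 1);
Ztr = Z(1:3000, :);   Ftr = F(1:3000);             % training samples
Zte = Z(3001:end, :); Fte = F(3001:end);           % testing samples
```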
Thank You
