AI Mod4 Session 8 Best Fit Line & ANN
NEURAL NETWORKS
Some NNs are models of biological neural networks and some are
not, but historically, much of the inspiration for the field of NNs came
from the desire to produce artificial systems capable of sophisticated,
perhaps intelligent, computations similar to those that the human
brain routinely performs, and thereby possibly to enhance our
understanding of the human brain.
Most NNs have some sort of training rule. In other words, NNs learn
from examples (as children learn to recognize dogs from examples of
dogs) and exhibit some capability for generalization beyond the
training data.
NEURAL NETWORKS LEARN FROM EXAMPLES
• No explicit description of the problem is required.
• No programmer is needed.
• The neural computer adapts itself during a training period, based on examples of similar problems, even without a desired solution for each problem. After sufficient training the neural computer is able to relate problem data to solutions, inputs to outputs, and can then offer a viable solution to a brand-new problem.
• It is able to generalize and to handle incomplete data.
APPLICATIONS OF NEURAL NETWORKS
Classification
• In marketing: consumer spending pattern classification
• In defence: radar and sonar image classification
• In agriculture & fishing: fruit and catch grading
• In medicine: ultrasound and electrocardiogram image classification, EEGs, medical diagnosis
An ANN is built from two kinds of elements:
1. Neurones (nodes)
2. Synapses (weights)
NEURONE VS. NODE
[Figure: a neurone in the brain compared with a neuron/node in an artificial neural network]
Structure Of A Node
[Figure: a node with inputs p1, p2, p3 weighted by w1, w2, w3, a bias input of 1 weighted by b, and an activation function f producing output a]

a = f(p_1 w_1 + p_2 w_2 + p_3 w_3 + b) = f(\sum_i p_i w_i + b)
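A minimal sketch of this node computation in plain Python, assuming a logistic-sigmoid activation for f (the slide does not fix a particular activation function), with made-up inputs, weights, and bias:

    import math

    def node_output(p, w, b):
        """a = f(sum_i p_i * w_i + b) for a single node."""
        s = sum(pi * wi for pi, wi in zip(p, w)) + b
        return 1.0 / (1.0 + math.exp(-s))   # logistic sigmoid as f

    # Hypothetical inputs, weights, and bias, for illustration only
    print(node_output([1.0, 0.5, -1.0], [0.2, 0.4, 0.1], b=0.3))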
[Figure: single-layer feed-forward network, an input layer of source nodes projecting onto an output layer of neurons]
Multi-Layer Feed-forward: 3-4-2 Network
[Figure: input layer (3 source nodes), hidden layer (4 neurons), output layer (2 neurons)]
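A sketch of a forward pass through such a 3-4-2 network; the random weights and the sigmoid activation are illustrative assumptions, not values from the slides:

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 4 neurons, 3 inputs each
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # output layer: 2 neurons, 4 hidden inputs

    def forward(x):
        h = sigmoid(W1 @ x + b1)     # hidden layer activations
        return sigmoid(W2 @ h + b2)  # output layer activations

    print(forward(np.array([0.1, 0.5, -0.3])))   # two output values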
Recurrent Network
Recurrent network with hidden neuron(s): the unit-delay operator z^{-1} in the feedback loops implies a dynamic system.
[Figure: recurrent network with z^{-1} unit-delay elements feeding outputs back as inputs]
The Single Neuron Structure
[Figure: single neuron, with input signals x1, x2, ..., xm, synaptic weights w1, w2, ..., wm, bias b, a summing function producing the local field v, an activation function φ(·), and output y]
The Single Neuron Structure
• The neuron is the basic information-processing unit of a NN. It consists of:
1. A set of synapses, or connecting links, each link characterized by a weight: w_1, w_2, ..., w_m
2. An adder function (linear combiner) that computes the weighted sum of the inputs:
   u = \sum_{j=1}^{m} w_j x_j
3. An activation function φ(·), together with the bias b, that limits the amplitude of the output: y = φ(u + b)
Feedforward Neural Networks
The basic structure of a feedforward neural network
FEED-FORWARD NETS
• Information flow is unidirectional: data is presented to the input layer, passed on to the hidden layer, then passed on to the output layer.
• Information is distributed.
Squashing: 1 / (1 + e^{0.5}) = 0.3775
(the logistic squashing function 1/(1 + e^{-v}) applied to a local field of v = -0.5)
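A one-line check of this value (assuming the logistic squashing function as above):

    import math
    print(1.0 / (1.0 + math.exp(0.5)))   # 0.3775..., i.e. sigmoid(-0.5)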
BACKPROPAGATION ALGORITHM
1. A set of examples for training the network is assembled.
Each case consists of a problem statement (which
represents the input into the network) and the
corresponding solution (which represents the desired
output from the network).
2. The input data is entered into the network via the input
layer.
Data
• Sources: Steve Simpson, David Raubenheimer
• Format: frequency distribution (60 bins); analogy: the cochlea
Network architecture
• Feed-forward network
• 60 inputs (one for each frequency bin)
• 6 hidden neurons
• 2 outputs (target (0, 1) for “Steve”, (1, 0) for “David”)
Presenting The Data
[Figure: frequency-distribution inputs for Steve and for David being presented to the network]
Presenting The Data (Untrained Network)
Steve → outputs (0.43, 0.26)
David → outputs (0.73, 0.55)
Calculate Error
Steve: |0.43 – 0| = 0.43, |0.26 – 1| = 0.74
David: |0.73 – 1| = 0.27, |0.55 – 0| = 0.55
Backprop Error And Adjust Weights
Steve: errors 0.43 and 0.74 → total error 1.17
David: errors 0.27 and 0.55 → total error 0.82
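A short sketch reproducing these error totals, using the targets (0, 1) for Steve and (1, 0) for David given above:

    outputs = {"Steve": [0.43, 0.26], "David": [0.73, 0.55]}
    targets = {"Steve": [0, 1], "David": [1, 0]}

    for name in outputs:
        errors = [round(abs(o - t), 2) for o, t in zip(outputs[name], targets[name])]
        print(name, errors, "total:", round(sum(errors), 2))
    # Steve [0.43, 0.74] total: 1.17
    # David [0.27, 0.55] total: 0.82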
Repeat process (sweep) for all training pairs:
1. Present data
2. Calculate error
3. Backpropagate error
4. Adjust weights
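Putting the whole sweep together, here is a minimal backpropagation sketch for the 60-6-2 speaker network described above. The sigmoid activations, squared-error loss, learning rate of 0.5, and the random stand-in "spectra" are all assumptions for illustration; the slides do not fix these details:

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    rng = np.random.default_rng(42)
    W1, b1 = rng.normal(scale=0.1, size=(6, 60)), np.zeros(6)   # 60 inputs -> 6 hidden
    W2, b2 = rng.normal(scale=0.1, size=(2, 6)), np.zeros(2)    # 6 hidden -> 2 outputs
    lr = 0.5   # assumed learning rate

    # Stand-in training pairs: random "spectra" with targets (0,1)=Steve, (1,0)=David
    X = rng.random((2, 60))
    T = np.array([[0.0, 1.0], [1.0, 0.0]])

    for sweep in range(1000):
        for x, t in zip(X, T):
            # 1. Present data (forward pass)
            h = sigmoid(W1 @ x + b1)
            y = sigmoid(W2 @ h + b2)
            # 2. Calculate error
            e = y - t
            # 3. Backpropagate error (deltas use the sigmoid derivative)
            d2 = e * y * (1 - y)
            d1 = (W2.T @ d2) * h * (1 - h)
            # 4. Adjust weights
            W2 -= lr * np.outer(d2, h); b2 -= lr * d2
            W1 -= lr * np.outer(d1, x); b1 -= lr * d1

    # After training, the first pattern should map close to its target (0, 1)
    print(sigmoid(W2 @ sigmoid(W1 @ X[0] + b1) + b2))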