
Artificial Neural Networks

BRAIN AND ANN


• Brain:
  – 10^10 neurons
  – 10^13 connections
  – Speed at millisecond scale
  – Fully parallel
• ANN:
  – 20,000 neurons
  – Speed at nanosecond scale
  – Simulating parallel computation
Evolution of ANN
Year        Neural Network           Designer               Description
1943        McCulloch-Pitts Neuron   McCulloch and Pitts    Logic gates
1949        Hebb                     Hebb                   Strength increases if neurons are active
1958-1988   Perceptron               Rosenblatt             Weights of paths can be adjusted
1960        Adaline                  Widrow and Hoff        Mean squared error
1972        SOM                      Kohonen                Clustering
1982        Hopfield                 John Hopfield          Associative memory nets
1986        Back Propagation         Rumelhart              Multilayer
1987-90     ART                      Carpenter              Used for both binary and analog
Biological and Artificial Neuron
• Basic parts: Body (soma), Dendrites (inputs), Axons (outputs), Synapses (connections)
Basic parts of artificial neuron
• Input summing function: output = f(w1·in1 + … + wn·inn) (see the code sketch below)
• Activation function
• Weighted inputs
• Output
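A minimal sketch of this summing-and-activation structure in Python (the names step and neuron_output are illustrative, not from the slides):

    def step(net, threshold=0.0):
        # Hard-limit activation: fire (1) when the net input reaches the threshold
        return 1 if net >= threshold else 0

    def neuron_output(inputs, weights, activation=step):
        # output = f(w1*in1 + ... + wn*inn)
        net = sum(w * x for w, x in zip(weights, inputs))
        return activation(net)

    print(neuron_output([1, 0, 1], [0.5, 0.2, 0.4]))  # net = 0.9 -> 1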
Activation Functions
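The original slides show plots of activation functions here; as a stand-in, the three forms referred to elsewhere in the deck (threshold, sigmoid, tanh) can be sketched as:

    import math

    def threshold(net):
        # Binary hard limiter
        return 1 if net >= 0 else 0

    def sigmoid(net):
        # Unipolar continuous (logistic) activation, output in (0, 1)
        return 1.0 / (1.0 + math.exp(-net))

    def bipolar(net):
        # Bipolar continuous activation (tanh), output in (-1, 1)
        return math.tanh(net)

    for net in (-2.0, 0.0, 2.0):
        print(net, threshold(net), round(sigmoid(net), 3), round(bipolar(net), 3))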
The McCulloch-Pitts Neuron
This vastly simplified model of real neurons is also known as a Threshold Logic Unit:
• A set of connections brings in activations from other neurons.
• A processing unit sums the inputs and then applies a non-linear activation function (i.e. a squashing/transfer/threshold function).
• An output line transmits the result to other neurons.

y = F(w1x1 + w2x2 - b)
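Read literally, the formula above can be coded as a hard threshold on the net input (a sketch; the bias term b plays the role of the threshold):

    def threshold_logic_unit(x1, x2, w1, w2, b):
        # y = F(w1*x1 + w2*x2 - b), with F a hard threshold at zero
        net = w1 * x1 + w2 * x2 - b
        return 1 if net >= 0 else 0

    # With weights 1, 1 and b = 2 this reproduces the AND gate of the next slide
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, threshold_logic_unit(x1, x2, 1, 1, 2))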
Common Models of Neurons

• Binary perceptrons
• Continuous perceptrons
1st Neural Network: AND function
• X1 and X2 each connect to Y with weight 1; Threshold(Y) = 2
1st Neural Network: OR function

• X1 and X2 each connect to Y with weight 2; Threshold(Y) = 2
1st Neural Network: ANDNOT function
• X1 connects to Y with weight 2 and X2 with weight -1; Threshold(Y) = 2
1st Neural Network: XOR function

• Hidden unit Z1: weight 2 from X1, weight -1 from X2 (Z1 = X1 ANDNOT X2)
• Hidden unit Z2: weight -1 from X1, weight 2 from X2 (Z2 = X2 ANDNOT X1)
• Output Y: weight 2 from Z1 and weight 2 from Z2 (Y = Z1 OR Z2); Threshold = 2 for every unit (see the code sketch below)
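A sketch of all four gates with a single threshold helper, using the weights and thresholds as reconstructed above; XOR is built from the two hidden units Z1 and Z2:

    def mp_fire(inputs, weights, threshold):
        # McCulloch-Pitts unit: fire iff the weighted sum reaches the threshold
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    def AND(x1, x2):    return mp_fire([x1, x2], [1, 1], 2)
    def OR(x1, x2):     return mp_fire([x1, x2], [2, 2], 2)
    def ANDNOT(x1, x2): return mp_fire([x1, x2], [2, -1], 2)

    def XOR(x1, x2):
        z1 = mp_fire([x1, x2], [2, -1], 2)   # Z1 = X1 ANDNOT X2
        z2 = mp_fire([x1, x2], [-1, 2], 2)   # Z2 = X2 ANDNOT X1
        return mp_fire([z1, z2], [2, 2], 2)  # Y  = Z1 OR Z2

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, AND(x1, x2), OR(x1, x2), ANDNOT(x1, x2), XOR(x1, x2))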
McCulloch-Pitts Neuron Model
Classification Based on Interconnections

Interconnections:
• Feed forward
• Feedback
• Recurrent

Network architecture is the arrangement of neurons to form layers and the connection patterns formed within and between layers.
Neural Network Architectures
Neural network training/learning
• Learning procedure: adjusting connection weights until the network produces the desired behaviour
• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning
Supervised Learning
Perceptron Learning Rule
• The learning signal is the difference between the desired and the actual response of the neuron (see the sketch below)
• Learning is supervised
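A sketch of a single weight update under this rule, assuming a hard-threshold output and a binary target d (the learning constant c and the names are illustrative):

    def perceptron_update(weights, x, d, c=0.1):
        # Perceptron rule: w <- w + c * (d - o) * x, where o is the actual response
        net = sum(w * xi for w, xi in zip(weights, x))
        o = 1 if net >= 0 else 0          # actual neuron response
        r = d - o                         # learning signal: desired minus actual
        return [w + c * r * xi for w, xi in zip(weights, x)]

    w = [0.0, 0.0]
    w = perceptron_update(w, x=[1, 1], d=0)   # o = 1, d = 0, so both weights decrease
    print(w)                                  # [-0.1, -0.1]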
Widrow-Hoff learning Rule
• Also called the least mean squares (LMS) learning rule
• Introduced by Widrow (1962); used in supervised learning
• Independent of the activation function
• Special case of the delta learning rule in which the activation function is the identity function, i.e. f(net) = net
• Minimizes the squared error between the desired output value d_i and net_i
ADALINE

• Inputs x1, x2, x3 connect to a single output y through weights w1, w2, w3
• Linear transfer function
• Linear combination of inputs: y = w1x1 + w2x2 + … + wnxn (an LMS training sketch follows below)
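A minimal Widrow-Hoff (LMS) update for this ADALINE, assuming the identity activation f(net) = net so the error term is d - net (a sketch, not from the slides):

    def lms_update(weights, x, d, c=0.1):
        # Widrow-Hoff / LMS rule: w <- w + c * (d - net) * x
        net = sum(w * xi for w, xi in zip(weights, x))   # y = w1x1 + w2x2 + ... + wnxn
        return [w + c * (d - net) * xi for w, xi in zip(weights, x)]

    # Repeated updates drive the linear output toward the desired value d
    w = [0.0, 0.0, 0.0]
    for _ in range(20):
        w = lms_update(w, x=[1.0, 0.5, -1.0], d=1.0)
    print(sum(wi * xi for wi, xi in zip(w, [1.0, 0.5, -1.0])))   # close to 1.0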
Delta Learning Rule
• Only valid for continuous activation functions
• Used in supervised training mode
• The learning signal for this rule is called delta
• The aim of the delta rule is to minimize the error over all training patterns
• The learning rule is derived from the condition of least squared error: calculating the gradient vector with respect to w_i, minimization of the error requires the weight changes to be in the negative gradient direction (the resulting update is given below)
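In its standard textbook form, with learning constant c, desired output d_i, actual output o_i = f(net_i) and input x_j, the resulting weight change is

    Δw_ij = c · (d_i − o_i) · f′(net_i) · x_j

i.e. each weight is moved against the gradient of the squared error.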
Unsupervised Learning
Hebbian Learning Rule

• Feedforward unsupervised learning
• The learning signal is equal to the neuron's output
Features of Hebbian Learning
• Feedforward unsupervised learning
• "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency is increased" (Hebb, 1949)
• If o_i·x_j is positive, the result is an increase in the weight; otherwise the weight decreases (see the sketch below)
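A sketch of the corresponding update, where the weight change is the learning constant times the product of the neuron's output o and the input x_j (names illustrative):

    def hebbian_update(weights, x, c=0.1):
        # Hebbian rule: w_j <- w_j + c * o * x_j, with o = f(net) the neuron's output
        net = sum(w * xj for w, xj in zip(weights, x))
        o = 1 if net >= 0 else -1        # bipolar output used as the learning signal
        return [w + c * o * xj for w, xj in zip(weights, x)]

    w = [1.0, -1.0, 0.0]
    print(hebbian_update(w, x=[1.0, -2.0, 1.5]))
    # weights increase where o * x_j > 0 and decrease where it is negative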
Reinforcement learning
Multi Layer Perceptron
• Basic perceptron extended with one or more layers of hidden neurons between the input and output layers
• Differentiable neuron transfer functions (tanh, sigmoid)
• Uses the Backpropagation algorithm for learning, which is based on the LMS algorithm
• Able to solve complex problems
Backpropagation algorithm
• For Multi Layer Perceptron training – able to adjust weights in the hidden layers
• Supervised learning algorithm based on the LMS algorithm
• A Multi Layer Perceptron with Backpropagation is a universal approximator (a training sketch follows below)
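A compact sketch of backpropagation for a one-hidden-layer MLP with sigmoid units, trained here on XOR; the hidden-layer size, learning rate and iteration count are illustrative choices, not prescribed by the slides:

    import math, random

    random.seed(0)
    sigmoid = lambda net: 1.0 / (1.0 + math.exp(-net))

    n_hidden, rate = 4, 0.5
    # each weight vector carries a trailing bias weight
    w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]
    w_out = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    for _ in range(20000):
        for x, d in data:
            xb = x + [1]
            h = [sigmoid(sum(w * xi for w, xi in zip(wj, xb))) for wj in w_hid]
            o = sigmoid(sum(w * hi for w, hi in zip(w_out, h + [1])))
            # error is propagated backwards: output delta first, then hidden deltas
            delta_o = (d - o) * o * (1 - o)
            delta_h = [delta_o * w_out[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            w_out = [w + rate * delta_o * hi for w, hi in zip(w_out, h + [1])]
            for j in range(n_hidden):
                w_hid[j] = [w + rate * delta_h[j] * xi for w, xi in zip(w_hid[j], xb)]

    for x, d in data:   # outputs should approach the XOR targets
        h = [sigmoid(sum(w * xi for w, xi in zip(wj, x + [1]))) for wj in w_hid]
        print(x, d, round(sigmoid(sum(w * hi for w, hi in zip(w_out, h + [1]))), 2))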
Backpropagation settings
• Max error
• Max iterations
• Learning rate
• Momentum
• Batch mode
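Purely as an illustration (not the API of any particular library), these settings might be collected into a configuration like:

    backprop_settings = {
        "max_error": 0.01,        # stop once the total network error falls below this
        "max_iterations": 10000,  # hard cap on training iterations (epochs)
        "learning_rate": 0.2,     # step size applied to each weight change
        "momentum": 0.7,          # fraction of the previous weight change added to the current one
        "batch_mode": False,      # True: accumulate updates over the whole training set before applying
    }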
