Introduction To Neural Networks
NEURAL NETWORKS
By
Dr. M. Tahir Khaleeq
Total Slides: 25
The Biological Neuron
Description of the Brain
• The dendrites are fibers that branch out from the cell
body into a network around the cell.
Artificial Neural Networks
(ANN)
• An Artificial Neural Network is a biologically inspired
computational model consisting of processing elements
(called neurons) and connections between them, with
coefficients (weights) bound to the connections, which
constitute the neuronal structure.
• Neural Networks are called connectionist models
because of the central role connections play in them.
INPUTS
– Each input corresponds to a single attribute.
– Neural computing can process only numbers, so the
numeric value of an attribute is used as the input to the
network.
– If a problem involves qualitative attributes or pictures,
they must be preprocessed into numerical equivalents
before they can be handled by an ANN.
EX:
– Pixel values of characters and groups.
– Digital images and voice patterns
– Digital signals from monitoring and control systems.
WEIGHT
– The weight is a key element in an ANN.
– Weights express the relative strength (or mathematical
value) of each input to a processing element.
– Weights are repeatedly adjusted; this adjustment process
is called learning.
u = Σ xi wi ,  i = 1 to n
where the xi are the inputs and the wi are the corresponding weights.
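The weighted sum can be sketched directly in Python (the input and weight values below are just for illustration):

```python
# Weighted input sum u = sum(x_i * w_i) for a single neuron.
def weighted_sum(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

# Illustrative values: u = 3(0.2) + 1(0.4) + 2(0.1)
print(round(weighted_sum([3, 1, 2], [0.2, 0.4, 0.1]), 2))  # 1.2
```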
For several neurons:
[Figure: two inputs x1, x2 feeding three neurons n1, n2, n3
through weights w11, w21, w12, w22, w23]
u1 = x1w11 + x2w21
u2 = x1w12 + x2w22
u3 = x2w23
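The three sums above can be sketched as one small layer computation; the connection pattern mirrors the figure (n1 and n2 see both inputs, n3 only x2), while the input and weight values are illustrative:

```python
# Layer of neurons: each output u_j is the weighted sum of its inputs.
def layer(xs, weights):
    """weights[j] is a list of (input_index, weight) pairs for neuron j."""
    return [sum(xs[i] * w for i, w in conns) for conns in weights]

x = [1.0, 2.0]                     # x1, x2 (illustrative values)
conns = [
    [(0, 0.5), (1, 0.1)],          # n1: w11, w21
    [(0, 0.3), (1, 0.4)],          # n2: w12, w22
    [(1, 0.2)],                    # n3: w23 only
]
print(layer(x, conns))             # [u1, u2, u3]
```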
ACTIVATION FUNCTION (Transfer Function)
– It calculates the activation level of the neuron.
– Based on this level, the neuron may or may not produce
an output.
– The relationship between the internal activation level
and the output may be linear or non-linear.
– Such relationships are expressed by an activation function,
s, as
a = s(u)
– There are several types of activation functions.
– The selection of the activation function determines the
network’s operation.
– The most commonly used activation functions are
1. The hard-limited threshold function.
2. The linear threshold function.
3. The sigmoid function (s-function).
4. The Gaussian function (bell-shaped function).
– A transformation can occur at the output of each
processing element, or it can be performed at the final
output of the network.
OUTPUT FUNCTION
– It calculates the output signal value emitted through the
output of the neuron: O = g(a)
– The output signal is usually assumed to be the activation
level of the neuron, that is, O = a
– Hard-limited threshold:  g(x) = 1 if x ≥ t,  0 if x < t
– Sign (symmetric threshold):  g(x) = 1 if x ≥ 0,  −1 if x < 0
– Sigmoid:  g(x) = 1 / (1 + e^(−x))
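These three activation functions can be sketched as follows (the threshold t defaults to 0 here as an illustrative choice):

```python
import math

# Hard-limited threshold: 1 above the threshold t, 0 below it.
def hard_limit(x, t=0.0):
    return 1.0 if x >= t else 0.0

# Sign (symmetric threshold): 1 for non-negative x, -1 otherwise.
def sign_fn(x):
    return 1.0 if x >= 0 else -1.0

# Sigmoid (s-function): smooth curve from 0 to 1.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(hard_limit(0.5), sign_fn(-2.0), round(sigmoid(1.2), 2))  # 1.0 -1.0 0.77
```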
Example-1
– Inputs x1 = 3, x2 = 1, x3 = 2.
– Weights w1 = 0.2, w2 = 0.4, w3 = 0.1.
– Calculation of the input function:
u = Σ xi wi ,  i = 1 to 3
= x1w1 + x2w2 + x3w3
= 3(0.2) + 1(0.4) + 2(0.1) = 1.2
– Calculation of the activation function, applying the sigmoid:
a = s(u) = 1 / (1 + e^(−u)) = 1 / (1 + e^(−1.2)) = 0.77
– Output:
O = g(0.77) = 1
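Example-1 can be checked numerically. The final step O = 1 is reproduced here by assuming g is a hard-limit output with threshold 0.5; that threshold is an assumption, since the slides only state the result:

```python
import math

x = [3, 1, 2]
w = [0.2, 0.4, 0.1]

u = sum(xi * wi for xi, wi in zip(x, w))   # input function: 1.2
a = 1.0 / (1.0 + math.exp(-u))             # sigmoid activation: ~0.77
O = 1.0 if a >= 0.5 else 0.0               # assumed hard-limit output (t = 0.5)

print(round(u, 1), round(a, 2), O)         # 1.2 0.77 1.0
```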
Example-2
ANN with four input nodes, two intermediate nodes,
and one output node. Use the hard-limited threshold
function.
[Figure: plot of the hard-limited threshold function — the output
stays at 0 for u below the threshold and jumps to 1 above it.]
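Example-2's network can be sketched as code; since the slide's figure with the actual weight values is not reproduced here, the weights and inputs below are illustrative assumptions:

```python
# Hard-limited threshold activation with threshold t.
def hard_limit(u, t=0.5):
    return 1 if u >= t else 0

# One neuron: weighted sum of inputs passed through the hard limiter.
def neuron(xs, ws, t=0.5):
    return hard_limit(sum(x * w for x, w in zip(xs, ws)), t)

x = [1, 0, 1, 1]                      # four inputs (illustrative)
h1 = neuron(x, [0.4, 0.3, 0.2, 0.1])  # intermediate node 1 (illustrative weights)
h2 = neuron(x, [0.1, 0.2, 0.3, 0.4])  # intermediate node 2
y = neuron([h1, h2], [0.6, 0.6])      # output node
print(h1, h2, y)  # 1 1 1
```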
• The connectionist architectures can be distinguished
according to the number of input and output sets of
neurons and the layers of neurons used. Following are
two major connectionist architectures:
1. Auto-associative
– Input neurons are output neurons too.
– The Hopfield network is an auto-associative network.
2. Hetero-associative
– Separate sets of input neurons and output neurons.
– Ex: Perceptron, Multilayer Perceptron.
• The connectionist architectures can also be distinguished
according to the absence or presence of feedback
connections. Two types of architectures are
1. Feed forward Architecture
– No connections back from the output to the input
neurons.
– The network does not keep a memory of its previous
output values and the activation states of its neurons.
– Ex: Perceptron-like networks.
2. Feedback Architecture
– There are connections back from the output to the
input neurons
– Such network keeps a memory of its previous states
– The next state depends on the input signals and on
the previous states.
– Ex: Hopfield network.
[Figure: feedback architecture — the inputs feed the network,
and the outputs are fed back to the input neurons.]
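The difference between the two architectures can be sketched as update rules; the weights, feedback coefficient, and inputs below are illustrative:

```python
# Feed-forward: the output depends only on the current input.
def feedforward_step(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

# Feedback: the output also depends on the previous output (the memory).
def feedback_step(x, w, v, prev_out):
    return sum(xi * wi for xi, wi in zip(x, w)) + v * prev_out

out = 0.0
for x in ([1.0, 0.0], [1.0, 0.0]):        # present the same input twice
    out = feedback_step(x, [0.5, 0.5], 0.1, out)
print(out)  # the second step differs from the first: the network kept state
```

A feed-forward network given the same input twice would produce the same output both times; the feedback network does not, because its previous state enters the computation.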
LEARNING
• An ANN learns from its experience.
• The usual process of learning involves three tasks:
1. Compute outputs.
2. Compare outputs with desired targets.
3. Adjust the weights and repeat the process.
• More than a hundred learning (training) algorithms are
available for various situations and configurations.
• Types of learning algorithms:
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning
[Flowchart: compute output → is the desired output achieved?
If no, adjust the weights and repeat; if yes, stop.]
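The three-task loop above can be sketched as a perceptron-style training loop; the learning rule, rate, and training data (logical AND) are illustrative choices, not taken from the slides:

```python
# Perceptron-style learning: compute, compare with target, adjust, repeat.
# Illustrative task: learn logical AND from its truth table.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(100):                         # repeat the process
    for x, target in data:
        u = sum(xi * wi for xi, wi in zip(x, w)) + bias
        output = 1 if u >= 0 else 0          # 1. compute output
        error = target - output              # 2. compare with desired target
        w = [wi + rate * error * xi for wi, xi in zip(w, x)]
        bias += rate * error                 # 3. adjust the weights

preds = [1 if sum(xi * wi for xi, wi in zip(x, w)) + bias >= 0 else 0
         for x, _ in data]
print(preds)  # [0, 0, 0, 1]
```

This is a supervised-learning loop: the desired targets are known, and the weight adjustments are driven by the error between output and target.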