Artificial Neural Networks
(ANN)
Contents
What is an Artificial Neural Network (ANN)?
History of Artificial Neural Networks
Fundamentals of ANN
Comparison between biological neuron and
artificial neuron
Basic models of ANN
Different types of NN connections, learning, and activation functions
Basic fundamental neuron models: the McCulloch-Pitts neuron and the Hebb network
Artificial Neural Network
An artificial neural network consists of a pool of simple
processing units which communicate by sending signals to
each other over a large number of weighted connections.
Artificial Neural Network
The major aspects of a parallel distributed model include:
a set of processing units (cells).
a state of activation for every unit, which is equivalent to the output of the
unit.
connections between the units. Generally each connection is defined by a
weight.
a propagation rule, which determines the effective input of a unit from its
external inputs.
an activation function, which determines the new level of activation based
on the effective input and the current activation.
an external input for each unit.
a method for information gathering (the learning rule).
an environment within which the system must operate, providing input
signals and, if necessary, error signals.
History of the Artificial Neural Networks
The history of ANNs stems from the 1940s, the decade of the first electronic
computer.
However, the first important step took place in 1957 when Rosenblatt
introduced the first concrete neural model, the perceptron. Rosenblatt also
took part in constructing the first successful neurocomputer, the Mark I
Perceptron. After this, the development of ANNs proceeded as outlined in the
following slides.
History of Artificial Neural Networks
Rosenblatt's original perceptron model contained only one layer. From this,
a multi-layered model was derived in 1960. At first, the use of the multilayer perceptron (MLP) was complicated by the lack of an appropriate
learning algorithm.
In 1974, Werbos introduced the so-called back-propagation algorithm
for a three-layered perceptron network.
History of the Artificial Neural Networks
The application area of MLP networks remained rather limited
until the breakthrough in 1986, when a general back-propagation algorithm for a
multi-layered perceptron was introduced by Rumelhart and McClelland.
In 1982, Hopfield brought out his idea of a neural network. Unlike the
neurons in MLP, the Hopfield network consists of only one layer whose
neurons are fully connected with each other.
History of the Artificial Neural Networks
Since then, new versions of the Hopfield network have been developed.
The Boltzmann machine has been influenced by both the Hopfield network
and the MLP.
History of the Artificial Neural Networks
In 1988, Radial Basis Function (RBF) networks were first introduced by
Broomhead & Lowe. Although the basic idea of RBF had been developed some 30
years earlier, the work by Broomhead & Lowe opened a new frontier in the
neural network community.
History of the Artificial Neural Networks
In 1982, Kohonen introduced a completely different kind of network model, the
Self-Organizing Map (SOM). The SOM is a kind of topological map which
organizes itself based on the input patterns it is trained with.
The SOM originated from the LVQ (Learning Vector Quantization) network,
the underlying idea of which was also Kohonen's, from 1972.
History of Artificial Neural Networks
Since then, research on artificial neural networks has
remained active, leading to many new network types, as
well as hybrid algorithms and hardware for neural
information processing.
Computers vs. Neural Networks
Standard Computers          Neural Networks
one CPU                     highly parallel processing
fast processing units       slow processing units
reliable units              unreliable units
static infrastructure       dynamic infrastructure
Why Artificial Neural Networks?
There are two basic reasons why we are interested in
building artificial neural networks (ANNs):
Technical viewpoint: Some problems such as
character recognition or the prediction of future
states of a system (load forecasting in power companies)
require massively parallel and adaptive processing.
Biological viewpoint: ANNs can be used to
replicate and simulate components of the human
(or animal) brain, thereby giving us insight into
natural information processing.
Artificial Neural Networks
The fundamental building blocks of neural networks
are the neurons.
In technical systems, we also refer to them as
units or nodes.
Basically, each neuron
receives inputs from many other neurons.
changes its internal state (activation) based on
the current input.
sends one output signal to many other
neurons, possibly including its input neurons
(recurrent network).
Biological Neural Networks
Information is transmitted as a series of electric
impulses, so-called spikes.
The frequency and phase of these spikes encode the
information.
In biological systems, one neuron can be connected to as
many as 10,000 other neurons.
Usually, a neuron receives its information from other
neurons in a confined area, its so-called receptive field.
How do ANNs work?
An artificial neuron is an imitation of a human neuron
How do ANNs work?
Now, let us have a look at the model of an artificial neuron.
How do ANNs work?
[Diagram: inputs x1, x2, ..., xm -> processing -> output]
The processing step sums the inputs: x1 + x2 + ... + xm = y
How do ANNs work?
Not all inputs are equal
[Diagram: inputs x1, x2, ..., xm, each multiplied by a weight w1, w2, ..., wm -> processing -> output]
The processing step forms the weighted sum: x1*w1 + x2*w2 + ... + xm*wm = y
How do ANNs work?
The signal is not passed down to the
next neuron verbatim
[Diagram: weighted inputs x1, ..., xm -> processing -> transfer (activation) function -> output]
The output is a function of the input, shaped by the weights and the transfer
function: y = f(vk), where vk is the weighted sum of the inputs.
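A minimal sketch of this model in Python; the sigmoid transfer function and the specific input and weight values are illustrative assumptions, not part of the slides:

import math

def neuron_output(inputs, weights, transfer=lambda v: 1.0 / (1.0 + math.exp(-v))):
    # vk: weighted sum of the inputs, x1*w1 + x2*w2 + ... + xm*wm
    v = sum(x * w for x, w in zip(inputs, weights))
    # y = f(vk): the transfer (activation) function shapes the output
    return transfer(v)

# Example: three inputs with different weights
print(neuron_output([1.0, 0.5, -0.2], [0.4, 0.9, 0.1]))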
Classifications of ANNs:
by structures
Feedforward ANNs
Feedback ANNs
Recurrent ANNs
Three types of layers: Input, Hidden, and
Output
Classifications of ANNs:
by training/learning algorithms
Supervised training/learning
Based on historical or other given data used to train an ANN (i.e., to
determine the weights)
Unsupervised training/learning
Self-organization
Reinforcement training/learning
Supervised training/learning with stochastic optimization
Supervised learning
A network is fed with a set of training samples (inputs
and corresponding outputs), and it uses these samples
to learn the general relationship between the inputs
and the outputs.
This relationship is represented by the values of the
weights of the trained network.
Unsupervised learning
No desired output is associated with the training data!
Faster than supervised learning
Used to find out structures within data:
Clustering
Compression
Reinforcement learning
Like supervised learning, but:
Weight adjustment is not directly driven by
the error value.
The error value is used to randomly perturb
(shuffle) the weights!
Learning is relatively slow due to the randomness,
with the objective of attaining the globally
optimal solution.
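A rough sketch of this idea, assuming a simple hill-climbing scheme in which the weights are randomly perturbed and a perturbation is kept only when it reduces the error; the perturbation scale, iteration count, and toy error function are assumptions:

import random

def random_search_training(weights, error_of, iterations=1000, scale=0.1):
    best_error = error_of(weights)
    for _ in range(iterations):
        # Trial: randomly perturb ("shuffle") the weights
        candidate = [w + random.uniform(-scale, scale) for w in weights]
        e = error_of(candidate)
        # Keep the perturbation only if the error decreased
        if e < best_error:
            weights, best_error = candidate, e
    return weights, best_error

# Toy example: fit weights so the weighted sum of inputs [1, 2] is close to 1.0
err = lambda w: (w[0] * 1 + w[1] * 2 - 1.0) ** 2
print(random_search_training([0.0, 0.0], err))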
Classifications of ANNs:
by functions
Nonlinear fitting and function approximation (forecasting)
Clustering
Optimization
Artificial Neural Networks
An ANN can:
1. compute any computable function, by the
appropriate selection of the network topology
and values of weights.
2. learn from experience or historical data
Specifically, by trial-and-error
Learning by trial-and-error
Continuous process of:
Trial:
Processing an input to produce an output (In
terms of ANN: Compute the output function of a
given input)
Evaluate:
Evaluating this output by comparing the actual
output with the expected output.
Adjust:
Adjust the weights (training, learning).
Example: XOR
[Network diagram: an input layer with two neurons, a hidden layer with three
neurons, and an output layer with one neuron]
How it works?
Set initial values of the weights randomly.
Input: truth table of the XOR
Do
Read an input (e.g., 0 and 0)
Compute an output (e.g., 0.60543)
Compare it to the expected output (diff = 0.60543)
Modify the weights accordingly.
Loop until a condition is met
Condition: a certain number of iterations
Condition: an error threshold
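A sketch of this loop in Python for the 2-3-1 XOR network above, using a sigmoid transfer function and back-propagation; the learning rate, iteration limit, error threshold, and use of bias terms are illustrative assumptions, and convergence can depend on the random initialization:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR truth table inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs

# Set initial values of the weights (and biases) randomly: a 2-3-1 network
W1, b1 = rng.uniform(-1, 1, (2, 3)), rng.uniform(-1, 1, 3)
W2, b2 = rng.uniform(-1, 1, (3, 1)), rng.uniform(-1, 1, 1)

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
lr = 0.5                                                     # assumed learning rate

for iteration in range(20000):
    # Trial: compute the output for the given inputs
    H = sigmoid(X @ W1 + b1)      # hidden layer (three neurons)
    Y = sigmoid(H @ W2 + b2)      # output layer (one neuron)

    # Evaluate: compare the actual output with the expected output
    error = T - Y
    if np.sqrt(np.mean(error ** 2)) < 0.05:                  # error-threshold condition
        break

    # Adjust: modify the weights accordingly (back-propagation)
    delta_out = error * Y * (1 - Y)
    delta_hid = (delta_out @ W2.T) * H * (1 - H)
    W2 += lr * H.T @ delta_out
    b2 += lr * delta_out.sum(axis=0)
    W1 += lr * X.T @ delta_hid
    b1 += lr * delta_hid.sum(axis=0)

print(np.round(Y, 3))  # outputs should approach 0, 1, 1, 0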
Design Issues
Initial weights (small random values in [-1, 1])
Transfer function (how the inputs and the weights are
combined to produce the output)
Error estimation
Weights adjusting
Number of neurons
Data representation
Size of training set
Transfer (Activation) Functions
Linear: The output is proportional to the total
weighted input.
Threshold: The output is set at one of two values,
depending on whether the total weighted input is
greater than or less than some threshold value.
Nonlinear: The output varies continuously but not
linearly as the input changes.
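Simple Python versions of these three kinds of transfer function; the sigmoid is used here as one common example of a nonlinear function, and the threshold value of 0 is an assumption:

import math

def linear(v):
    # Output proportional to the total weighted input
    return v

def threshold(v, theta=0.0):
    # Output is one of two values, depending on the threshold theta
    return 1.0 if v >= theta else 0.0

def nonlinear(v):
    # Sigmoid: varies continuously but not linearly with the input
    return 1.0 / (1.0 + math.exp(-v))

print(linear(0.8), threshold(0.8), nonlinear(0.8))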
Error Estimation
The root mean square error (RMSE) is a
frequently used measure of the differences between
the values predicted by a model or an estimator and the
values actually observed from the system being
modeled or estimated.
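As a quick illustration in Python (the sample predicted and observed values are made up):

import math

def rmse(predicted, observed):
    # Root mean square error between model predictions and observed values
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))

print(rmse([0.9, 0.1, 0.8], [1.0, 0.0, 1.0]))  # about 0.141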
Weights Adjusting
After each iteration, weights should be adjusted to
minimize the error.
Trying all possible weights (exhaustive search)
Back-propagation
Back Propagation
Back-propagation is an example of supervised
learning, used at each layer to minimize the
error between the layer's response and the actual
data.
The error at each hidden layer is an average of the
evaluated error.
Hidden-layer networks are trained this way.
Back Propagation
N is a neuron.
N_w is one of N's input weights.
N_out is N's output.
N_w = N_w + Delta N_w
Delta N_w = N_out * (1 - N_out) * N_ErrorFactor
N_ErrorFactor = N_ExpectedOutput - N_ActualOutput
This works only for the last layer, as only there do we know
both the actual output and the expected output.
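A direct Python rendering of these update formulas for a single output-layer weight; the sample values are made up, and a full implementation would also scale the adjustment by a learning rate and by the input feeding this weight:

def output_layer_update(n_w, n_out, expected, actual):
    # N_ErrorFactor = N_ExpectedOutput - N_ActualOutput
    error_factor = expected - actual
    # Delta N_w = N_out * (1 - N_out) * N_ErrorFactor
    delta_w = n_out * (1 - n_out) * error_factor
    # N_w = N_w + Delta N_w  (learning rate and input term omitted, as on the slide)
    return n_w + delta_w

print(output_layer_update(n_w=0.3, n_out=0.60543, expected=0.0, actual=0.60543))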
Number of neurons
Many neurons:
Higher accuracy
Slower
Risk of over-fitting
Memorizing, rather than understanding
The network will be useless with new inputs.
Few neurons:
Lower accuracy
Inability to learn at all
Optimal number:
how to determine it remains an open question.
Data representation
Usually input/output data needs pre-processing
Pictures: pixel intensity
Text: a pattern
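A small sketch of such pre-processing, assuming 8-bit pixel intensities scaled to [0, 1] and characters encoded as one-hot patterns; both encoding choices are illustrative assumptions:

def scale_pixels(pixels):
    # Map 8-bit pixel intensities (0-255) to the range [0, 1]
    return [p / 255.0 for p in pixels]

def one_hot_char(ch, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # Represent a character as a binary pattern over the alphabet
    return [1.0 if c == ch else 0.0 for c in alphabet]

print(scale_pixels([0, 128, 255]))
print(one_hot_char("c"))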
Size of training set
No one-fits-all formula
Over-fitting can occur if a good training set is not
chosen
What constitutes a good training set?
Samples must represent the general population.
Samples must contain members of each class.
Samples in each class must contain a wide range of variations or
noise effects.
The size of the training set is related to the number of
hidden neurons
Application Areas
Function approximation
including time series prediction and modeling.
Classification
including pattern and sequence recognition,
novelty detection, and sequential decision
making
(radar systems, face identification,
handwritten text recognition)
Data processing
including filtering, clustering, blind source
separation, and compression
(data mining, e-mail spam filtering)
Advantages / Disadvantages
Advantages
Adapt to unknown situations
Powerful: it can model complex functions.
Easy to use: it learns by example, and very
little user domain-specific expertise is needed
Disadvantages
The training/learning procedure is slow.
The knowledge/rule is stored in the weights
(not explicit).
Complex nonlinear behavior (hard to
examine).
Why can an ANN be so powerful?
Variable weights (with learning capability)
Weights are attained by training/learning with historical
data.
Numerous connections among artificial neurons/units.
Each unit is very simple, but numerous connections
make an ANN powerful.
Nonlinear activation/transfer functions.
This makes an ANN a complex nonlinear system that
can handle many kinds of hard problems.
Summary
A neuron: receives input from many other neurons;
changes its internal state (activation) based on the
current input;
sends one output signal to many other neurons, possibly
including its input neurons (ANN is recurrent network).
Back-propagation is a type of supervised learning, used
at each layer to minimize the error between the layer's
response and the actual data.
Summary
Artificial Neural Networks (ANNs) are an imitation of the
biological neural networks, but much simpler ones.
The ability to learn by example makes ANNs very flexible and
powerful. There is still a need, however, to develop an efficient
algorithm for each specific kind of task.
Summary
Neural networks also contribute to areas of research such as
neurology and psychology. They are regularly used to model
parts of living organisms and to investigate the internal
mechanisms of the brain.
Many factors affect the performance of ANNs, such as the
transfer (activation) functions, the size of the training sample, the
network topology, and the weight-adjusting algorithm.