
Introduction to Neural Network

Week 1
By: Dr. Sumaya Sanober
Email: [email protected]
Outline
 Neural network
 Biological Neural Network
 Biological vs. artificial neural network
 How ANN works
 Structural design of ANN
 Types of Neural networks
 Architecture of ANN
 Learning algorithms of ANN
 Types of learning in neural network
 Algorithm classification
 Applications and Advantages of neural networks
Neural Network

The term 'neural' is derived from 'neuron', the basic functional unit of the human (animal) nervous system; neurons (nerve cells) are present in the brain and other parts of the human (animal) body.

Neural networks are a different approach to programming computers. They are exceptionally good at pattern recognition and other tasks that are very difficult to program using conventional techniques.

Programs that employ neural nets are also capable of learning on their own and adapting to changing conditions.
Origin of Neural Network

The human brain has many incredible characteristics, such as massive parallelism, distributed representation and computation, learning ability, generalization ability, and adaptivity, which seem simple but are really complicated.

ANN models are an effort to apply the same methods the human brain uses to solve perceptual problems.

Three periods of development for ANN:

1940s: McCulloch and Pitts: initial work

1960s: Rosenblatt: Perceptron Convergence Theorem

1980s: Hopfield / Werbos and Rumelhart: Hopfield's energy approach / back-propagation learning algorithm
Biological Neural Network

Dendrites - receive signals from other neurons.

Soma (cell body) - sums all the incoming signals to generate the input.

Axon - when the sum reaches a threshold value, the neuron fires and the signal travels down the axon to other neurons.

Synapses - the points of interconnection of one neuron with other neurons. The amount of signal transmitted depends upon the strength (synaptic weight) of the connection.
How Brain Differs from Computers
Biological (Real) vs Artificial Neural Network

Speed
  Artificial: Faster in processing information; response time is in nanoseconds.
  Biological: Slower in processing information; response time is in milliseconds.

Processing
  Artificial: Serial processing.
  Biological: Massively parallel processing.

Size & Complexity
  Artificial: Smaller in size and complexity; it does not perform complex pattern recognition tasks.
  Biological: Highly complex and dense network of interconnected neurons, containing on the order of 10^11 neurons with 10^15 interconnections.

Storage
  Artificial: Information storage is replaceable: new data can be added by deleting old data.
  Biological: Information storage is adaptable: new information is added by adjusting the interconnection strengths without destroying old information.

Fault tolerance
  Artificial: Fault intolerant; information corrupted by a failure of the system cannot be retrieved.
  Biological: Fault tolerant; storage is distributed across the network, so partial damage degrades performance gracefully rather than destroying it.

Control Mechanism
  Artificial: There is a control unit for controlling computing activities.
  Biological: No specific control mechanism external to the computing task.
How Artificial Neural Network Works

• Neural networks can be viewed as weighted directed graphs in which artificial neurons are nodes, and directed edges with weights are connections between neuron outputs and neuron inputs.

• The artificial neural network receives information from the external world in the form of patterns and images in vector form. These inputs are mathematically designated by the notation x(n) for n inputs.

• Each input is multiplied by its corresponding weight. Weights are the information used by the neural network to solve a problem.

• The weighted inputs are summed, and an activation (transfer) function is applied to the sum to get the desired output. There are linear as well as nonlinear activation functions.

• A bias is added to the weighted sum to make the output non-zero or to scale up the system response; a bias behaves like a weight whose input is always '1'.

• The weighted sum can take any numerical value from 0 to infinity. To limit the response to the desired range, a threshold value is set, and the sum is passed through an activation function. Commonly used activation functions are the binary (step), sigmoidal, and tan-hyperbolic sigmoidal (nonlinear) functions.

Binary - the output has only two values, 0 or 1. A threshold value is set; if the net weighted input exceeds the threshold, the output is 1, otherwise 0.

Sigmoidal / tan-hyperbolic - these functions have an 's'-shaped curve. The tan-hyperbolic function is used to approximate the output from the net input.
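The weighted sum, bias, and activation steps above can be sketched as a single artificial neuron. This is a minimal illustration, not from the slides; the function names and the example weights are ours:

```python
import math

def neuron(inputs, weights, bias, activation):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through an activation (transfer) function."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(net)

# The three activation functions named above
binary = lambda net: 1 if net > 0 else 0        # step: fires when net exceeds threshold 0
sigmoid = lambda net: 1 / (1 + math.exp(-net))  # 's'-shaped, output in (0, 1)
tanh = math.tanh                                # 's'-shaped, output in (-1, 1)

out = neuron([0.5, 1.0], [0.4, 0.6], bias=0.1, activation=sigmoid)
```

With the sigmoid activation the output stays between 0 and 1, which is exactly the "limit the response" role the threshold and activation function play in the description above.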
Structural design of Artificial Neural Networks

• Input layer - contains the units (artificial neurons) which receive input from the outside world, on which the network will learn, recognize, or otherwise process.
• Output layer - contains the units that give the network's response to what it has learned about the task.
• Hidden layer - these units sit between the input and output layers. The job of a hidden layer is to transform the input into something the output units can use.
• Most neural networks are fully connected, which means each hidden neuron is linked to every neuron in its previous (input) layer and in the next (output) layer.
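The fully connected layer structure just described can be sketched as nested weighted sums. A minimal illustration (the layer sizes, weights, and biases are invented for the example):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each unit takes a weighted sum of ALL
    the inputs from the previous layer, plus its bias, then an activation."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# 2 input units -> 3 hidden units -> 1 output unit
x = [0.2, 0.8]
hidden = layer(x, weights=[[0.1, 0.4], [0.5, -0.2], [0.3, 0.9]],
               biases=[0.0, 0.1, -0.1])
output = layer(hidden, weights=[[0.6, -0.4, 0.2]], biases=[0.05])
```

Every hidden unit here reads all the inputs and every output unit reads all the hidden units, which is what "fully connected" means in the bullet above.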

Types of Artificial Neural Network

Based on connection pattern: Feedforward, Recurrent
  Feedforward - the graph has no loops.
  Recurrent - loops occur because of feedback.

Based on the number of hidden layers: Single layer, Multi-layer
  Single layer - having one hidden layer. E.g., the single-layer perceptron.
  Multilayer - having multiple hidden layers. E.g., the multilayer perceptron.

Based on the nature of weights: Fixed, Adaptive
  Fixed - weights are fixed a priori and not changed at all.
  Adaptive - weights are updated and changed during training.

Based on memory unit: Static, Dynamic
  Static - memoryless unit; the current output depends only on the current input. E.g., feedforward network.
  Dynamic - memory unit; the output depends upon the current input as well as previous outputs. E.g., recurrent neural network.
Neural Network Architectures

• Perceptron - a neural network having two input units and one output unit, with no hidden layers. Also known as a 'single-layer perceptron'.

• Radial Basis Function Network - similar to the feedforward neural network, except that a radial basis function is used as the activation function of the neurons.

• Multilayer Perceptron - these networks use one or more hidden layers of neurons, unlike the single-layer perceptron. Also known as deep feedforward neural networks.

• Recurrent Neural Network - a type of neural network in which hidden-layer neurons have self-connections, and which therefore possesses memory. At any instant, a hidden-layer neuron receives activation from the lower layer as well as its own previous activation value.
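The two-input, one-output perceptron described above fits in a few lines. The specific weights below are our choice, picked so the unit computes logical AND as a concrete example:

```python
def perceptron(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
    """Single-layer perceptron: fires (outputs 1) when the weighted sum
    of the two inputs plus the bias exceeds 0.
    With these particular weights (our choice) it computes logical AND."""
    return 1 if x1 * w1 + x2 * w2 + bias > 0 else 0
```

Only when both inputs are 1 does the weighted sum (1 + 1 - 1.5 = 0.5) cross the threshold, so the unit fires on (1, 1) and stays silent otherwise.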
Neural Network Architectures

• Long Short-Term Memory Network (LSTM) - a type of neural network in which a memory cell is incorporated inside each hidden-layer neuron.

• Hopfield Network - a fully interconnected network in which each neuron is connected to every other neuron. The network is trained with an input pattern by setting the values of the neurons to the desired pattern; its weights are then computed.
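The Hopfield training scheme above (set the neurons to the desired pattern, then compute the weights) is commonly realized with the Hebbian outer-product rule. A synchronous-update sketch under that assumption, with an invented 4-neuron bipolar pattern:

```python
def hopfield_weights(patterns):
    """Hebbian rule: w[i][j] = sum over patterns of p[i] * p[j], zero diagonal.
    Patterns use bipolar values (+1 / -1)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Repeatedly update every neuron from the weighted sum of the others
    (synchronous update, used here for simplicity)."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, -1, 1, -1]
w = hopfield_weights([stored])
noisy = [1, 1, 1, -1]   # stored pattern with one flipped bit
# recall(w, noisy) settles back to the stored pattern
```

This is the associative-memory behavior mentioned later under "Association": a noisy pattern is pulled back to the closest stored one.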
Neural Network Architectures

• Boltzmann Machine Network - similar to the Hopfield network, except that some neurons are input neurons while others are hidden. The weights are initialized randomly and learned through a stochastic learning procedure.
Learning Algorithms used in Neural Network

• Gradient descent - the simplest training algorithm, used in supervised training. When the actual output differs from the target output, the difference (error) is found, and the gradient descent algorithm changes the weights of the network in such a manner as to minimize this error.

• Back propagation - an extension of the gradient-based delta learning rule. After the error (the difference between the desired and actual output) is found, it is propagated backward from the output layer to the input layer via the hidden layer(s). It is used for multilayer neural networks.

• Newton's method - a second-order algorithm, because it makes use of the Hessian matrix. The objective of this method is to find better training directions by using the second derivatives of the loss function.
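The gradient descent update described in the first bullet can be sketched for a single linear neuron with a squared-error loss (a minimal illustration; the learning rate and data values are arbitrary):

```python
def gradient_descent_step(weights, inputs, target, lr=0.1):
    """One gradient descent update for a linear neuron with squared error.
    The error gradient with respect to weight w_i is -(target - output) * x_i,
    so each weight moves a small step against that gradient."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + lr * error * x for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
for _ in range(100):
    w = gradient_descent_step(w, inputs=[1.0, 2.0], target=3.0)
# after training, 1.0*w[0] + 2.0*w[1] is very close to the target 3.0
```

Each step shrinks the error; repeated steps drive the network output toward the target, which is exactly the "minimize this error" behavior described above.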
Learning Algorithms used in Neural Network

• Conjugate Gradient Method - this method avoids the information requirements associated with the evaluation, storage, and inversion of the Hessian matrix, as required by Newton's method.

• Quasi-Newton Method - the main idea behind this method is to approximate the inverse Hessian by another matrix G, using only the first partial derivatives of the loss function.

• Levenberg-Marquardt Algorithm - also known as the damped least-squares method, it is designed to work specifically with loss functions that take the form of a sum of squared errors. It works without computing the exact Hessian matrix; instead, it works with the gradient vector and the Jacobian matrix.
Types of Learning in Neural Network

Supervised Learning - the training data is input to the network together with the desired output, and the weights are adjusted until the network produces the desired values.

Unsupervised Learning - the input data is used to train the network without any known output. The network classifies the input data and adjusts the weights by extracting features of the input data.

Reinforcement Learning - the desired output value is unknown, but the network receives feedback on whether its output is right or wrong. It is a form of semi-supervised learning.
Algorithm classification
1. Supervised Algorithms
Perceptron
Back propagation
Radial basis function
2. Unsupervised Algorithms
Self-organizing map
Auto encoders
Hopfield network
Boltzmann machine
Restricted Boltzmann machine
3. Reinforcement Algorithms
Temporal difference learning
Q learning
Learning automata
Monte Carlo
SARSA
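Of the reinforcement algorithms listed, Q-learning has the most compact update rule. A sketch of a single update step (the states, actions, and parameter values below are invented for illustration):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Q-learning update: move Q(s, a) toward
    reward + gamma * max over a' of Q(s', a'),
    with step size alpha and discount factor gamma."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Two states, two actions, all values start at zero
Q = {s: {"left": 0.0, "right": 0.0} for s in ("s0", "s1")}
q_update(Q, "s0", "right", reward=1.0, next_state="s1")
# Q["s0"]["right"] is now 0.5 * (1.0 + 0.9 * 0.0 - 0.0) = 0.5
```

This is learning from feedback alone: no desired output is given for state "s0", only the reward signal, which matches the description of reinforcement learning above.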
Uses of Neural Networks

• Classification - a neural network can be trained to classify a given pattern or data set into predefined classes. Feedforward networks are used for this.

• Prediction - a neural network can be trained to produce the outputs that are expected from a given input. E.g., stock market prediction.

• Clustering - a neural network can be used to identify unique features of the data and classify them into different categories without any prior knowledge of the data.

• Association - a neural network can be trained to remember a particular pattern, so that when a noisy pattern is presented, the network associates it with the closest pattern in memory or discards it. E.g., Hopfield networks, which perform recognition, classification, and clustering.


Applications of Neural Network

Application | Architecture / Algorithm | Activation Function
Process modeling and control | Radial Basis Network | Radial basis
Machine diagnostics | Multilayer Perceptron | Tan-sigmoid function
Portfolio management | Classification (supervised algorithm) | Tan-sigmoid function
Target recognition | Modular Neural Network | Tan-sigmoid function
Medical diagnosis | Multilayer Perceptron | Tan-sigmoid function
Credit rating | Logistic discriminant analysis with ANN, Support Vector Machine | Logistic function
Targeted marketing | Back propagation algorithm | Logistic function
Voice recognition | Multilayer Perceptron, Deep Neural Networks (Convolutional Neural Networks) | Logistic function
Financial forecasting | Back propagation algorithm | Logistic function
Intelligent searching | Deep Neural Network | Logistic function
Fraud detection | Gradient descent algorithm and Least Mean Square (LMS) algorithm | Logistic function
Advantages of Neural Network

• A neural network can perform tasks that a linear program cannot.

• When an element of the neural network fails, the network can continue without any problem because of its parallel nature.

• A neural network learns and does not need to be reprogrammed.

• It can be implemented in any application without any problem.
