ANN Introduction
• An Artificial Neural Network (ANN) is an information processing model inspired by the way biological nervous systems, such as the brain, process information.
• This model tries to replicate only the most basic functions of the
brain.
• An ANN is composed of a large number of highly interconnected processing units (neurons) working in unison to solve specific problems.
• Like human beings, artificial neural networks learn by example.
• An ANN is configured for a specific application, such as classification, face recognition, or pattern recognition, through a learning process.
• Each neuron is connected to the others by connection links.
• Each connection link is associated with a weight that contains information about the input signal.
• This information is used by the neural network to solve a particular problem.
• An ANN’s collective behavior is characterized by its ability to learn, recall, and generalize training patterns or data, similar to a human brain.
• ANNs have the capability to model networks of original neurons as found in the brain. Thus the ANN processing elements are called neurons or artificial neurons.
• Input neurons X1 and X2 are connected to the output neuron Y over weighted interconnection links w1 and w2.
• For this simple neuron net architecture, the net input is calculated as the weighted sum of the inputs:
  y_in = x1·w1 + x2·w2
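As a sketch, the net input above can be computed as a weighted sum; the input and weight values here are illustrative, not from the slide:

```python
def net_input(inputs, weights):
    """Weighted sum of inputs: y_in = x1*w1 + x2*w2 + ..."""
    return sum(x * w for x, w in zip(inputs, weights))

# Two input neurons X1, X2 with weights w1, w2 feeding output neuron Y
print(net_input([1, 2], [3, 4]))  # 1*3 + 2*4 = 11
```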
Interconnections:
Interconnection can be defined as the way processing elements (neurons) in an ANN are connected to each other. Hence, the arrangement of these processing elements and the geometry of the interconnections are essential in an ANN.
Single-layer feed-forward network:
• In this type of network, we have only two layers, the input layer and the output layer, but the input layer does not count because no computation is performed in it.
• The output layer is formed when different weights are applied to the input nodes and the cumulative effect per node is taken. After this, the neurons of the output layer produce the output.
Multi-layer feed-forward network:
• This type of network also has a hidden layer that is internal to the network and has no direct contact with the external layer.
• The existence of one or more hidden layers makes the network computationally stronger.
• It is called a feed-forward network because information flows in one direction only, from the input function through the hidden layers to the output.
• There are no feedback connections in which outputs of the model are fed back into itself.
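A minimal sketch of such a feed-forward pass, using step-activation units. The layer sizes and weights are assumptions for illustration: they are hand-picked (not learned) so that the hidden layer plus output layer compute XOR, which a single layer cannot.

```python
def step(x):
    """Threshold activation: fire (1) when the net input is non-negative."""
    return 1 if x >= 0 else 0

def layer(inputs, rows):
    """One feed-forward layer; each row is (w1, ..., wn, bias)."""
    return [step(sum(x * w for x, w in zip(inputs, row[:-1])) + row[-1])
            for row in rows]

def forward(x, hidden, out):
    """Information flows strictly input -> hidden -> output, no feedback."""
    return layer(layer(x, hidden), out)[0]

hidden = [(1, 1, -0.5),    # OR unit
          (-1, -1, 1.5)]   # NAND unit
out = [(1, 1, -1.5)]       # AND unit

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, '->', forward((a, b), hidden, out))  # prints the XOR of a, b
```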
Single node with its own feedback:
• When outputs can be directed back as inputs to the same layer or preceding layer
nodes, then it results in feedback networks. Recurrent networks are feedback
networks with closed loops. The above figure shows a single recurrent network
having a single neuron with feedback to itself.
• This network is a single-layer network with a feedback connection in which the processing element’s output can be directed back to itself, to another processing element, or to both.
Multilayer recurrent network:
• A recurrent neural network is a class of artificial neural networks where connections between nodes form a directed graph along a sequence. This allows it to exhibit dynamic temporal behavior for a time sequence.
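A minimal single-neuron recurrent update might look like the following sketch; the tanh activation and all weight values are assumptions for illustration. The previous output is fed back as an extra input, which is what gives the network state over a time sequence:

```python
import math

def rnn_step(x, h_prev, w_in, w_rec, b):
    """One recurrent update: the previous output h_prev feeds back in."""
    return math.tanh(w_in * x + w_rec * h_prev + b)

h = 0.0  # initial state
for x in [1.0, 0.0, 0.0]:          # a short input sequence
    h = rnn_step(x, h, w_in=0.5, w_rec=0.9, b=0.0)
    print(h)  # state decays over time as the input goes quiet
```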
Learning:
The ANN (Artificial Neural Network) is based on the BNN (Biological Neural Network), as its primary goal is to imitate the human brain and its functions. Similar to the brain having neurons interlinked with each other, the ANN also has neurons linked to each other in the various layers of the network, which are known as nodes.
ANN Terminology:
Weights: Each neuron is linked to the other neurons through connection links that carry a weight. The weight holds information about the input signal. The output depends solely on the weights and the input signal. The weights can be presented in matrix form, known as the connection matrix.
If there are “n” nodes, with each node having “m” weights, the connection matrix is an n × m matrix whose entry w_ij is the weight on node i’s j-th link.
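For illustration, a hypothetical connection matrix for n = 3 nodes with m = 2 weights each; the values are made up:

```python
# Connection matrix W: row i holds node i's m weights, so W[i][j] is the
# weight on node i's j-th incoming link.
W = [[0.2, 0.7],
     [0.5, 0.1],
     [0.9, 0.3]]

n, m = len(W), len(W[0])
print(n, m)  # 3 nodes, 2 weights each
```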
Threshold: A threshold value is a constant that is compared to the net input to get the output. The activation function is defined based on the threshold value to calculate the output.
For example: Y = 1 if net input >= threshold, else Y = 0.
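The threshold rule above can be sketched directly (the numeric values are illustrative):

```python
def threshold_output(net_input, theta):
    """Y = 1 if net input >= threshold theta, else Y = 0."""
    return 1 if net_input >= theta else 0

print(threshold_output(0.8, 0.5))  # 1
print(threshold_output(0.3, 0.5))  # 0
```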
• Learning rate: The learning rate is denoted by α. It ranges from 0 to 1 and is used to scale the weight adjustments during the learning of the ANN.
• Target value: Target values are the correct values of the output variable, also known simply as targets.
A perceptron is a single-layer neural network; a multi-layer perceptron is called a neural network.
The perceptron is a linear classifier (binary) used in supervised learning. It helps classify the given input data.
The perceptron consists of four parts:
• Input values or One input layer
• Weights and Bias
• Net sum
• Activation Function
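The four parts listed above can be combined into a minimal sketch; the weights and bias here are hand-picked to classify the AND function, not learned:

```python
def perceptron(inputs, weights, bias):
    net = sum(x * w for x, w in zip(inputs, weights)) + bias  # net sum
    return 1 if net >= 0 else 0                               # activation

# Linear binary classifier for AND (illustrative weights and bias):
print(perceptron([1, 1], [1, 1], -1.5))  # 1
print(perceptron([1, 0], [1, 1], -1.5))  # 0
```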
Activation function:
• It is simply the function used to get the output of a node. It is also known as the transfer function.
• It is used to determine the output of a neural network, such as yes or no. It maps the resulting values into a range such as 0 to 1 or -1 to 1, depending on the function.
The activation functions can be divided into two types:
• Linear Activation Function
• Non-linear Activation Functions
• A network with a single linear unit is called Adaline (Adaptive Linear Neuron). A unit with a linear activation function is called a linear unit.
• In Adaline, there is only one output unit, and the output values are bipolar (+1, -1).
• The weights between the input units and the output unit are adjustable. Adaline uses the delta rule.
• The learning rule minimizes the mean square error between the activation and the target values. Adaline consists of trainable weights.
• It compares the actual output with the calculated output, and the training algorithm is applied based on the error.
• First, calculate the net input to the Adaline network, then apply the activation function to obtain its output, and compare it with the target output.
• If both are equal, give the output; otherwise send the error back to the network and update the weights according to the error, which is calculated by the delta learning rule.
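The compare-and-update step described above can be sketched with the delta rule, w_i += α(t − y_in)·x_i; the starting values and the learning rate are illustrative:

```python
def adaline_update(weights, bias, x, target, alpha=0.1):
    """One delta-rule update: adjust each weight by alpha * error * input."""
    y_in = bias + sum(w * xi for w, xi in zip(weights, x))  # net input
    error = target - y_in                                    # t - y_in
    weights = [w + alpha * error * xi for w, xi in zip(weights, x)]
    bias = bias + alpha * error
    return weights, bias

w, b = adaline_update([0.0, 0.0], 0.0, [1, 1], target=1)
print(w, b)  # each weight and the bias move toward the target
```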
• In an adaptive linear network, all the input neurons are directly connected to the output neuron through weighted connection paths. A bias b, whose activation is always 1, is also present.
• When the predicted output and the true value are the same, the weights do not change.
• Step 7: Test the stopping condition. The stopping condition may be no further change in the weights or a specified number of epochs having been completed.
Backpropagation:
• Backpropagation is the essence of neural network training. It is the method of fine-tuning
the weights of a neural network based on the error rate obtained in the previous epoch (i.e.,
iteration).
• Proper tuning of the weights allows you to reduce error rates and make the model reliable
by increasing its generalization.
• This method helps calculate the gradient of a loss function with respect to all the weights in
the network.
• The backpropagation algorithm in a neural network computes the gradient of the loss function for a single weight by the chain rule.
• It efficiently computes one layer at a time, unlike a naive direct computation. It computes the gradient, but it does not define how the gradient is used. It generalizes the computation in the delta rule.

(Figure: backpropagation neural network example diagram)
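A tiny worked example of the chain rule computing the gradient of a squared-error loss with respect to one weight, one factor (layer) at a time; the one-hidden-unit network and all its values are assumptions for illustration:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

x, t = 1.0, 1.0          # input and target
w1, w2 = 0.5, 0.5        # hidden-layer and output-layer weights

# Forward pass
h = sigmoid(w1 * x)      # hidden activation
y = w2 * h               # network output (linear output unit)
loss = 0.5 * (y - t) ** 2

# Backward pass, chain rule: dL/dw1 = dL/dy * dy/dh * dh/dz * dz/dw1
dL_dy = y - t            # derivative of squared-error loss
dy_dh = w2               # output unit is linear in h
dh_dz = h * (1 - h)      # derivative of the sigmoid
dz_dw1 = x               # z = w1 * x
grad_w1 = dL_dy * dy_dh * dh_dz * dz_dw1
print(grad_w1)           # negative: increasing w1 would reduce the loss
```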