Neural Network
In simple words, neural networks are a set of algorithms that try to recognize patterns,
relationships, and information in data through a process that is inspired by, and works
like, the human brain.
Components / Architecture of Neural Network
A simple neural network consists of three components:
• Input layer
• Hidden layer
• Output layer
Input Layer: Also known as input nodes, this is where the inputs/information from the outside
world are provided to the model to learn and derive conclusions from. Input nodes pass the
information to the next layer, i.e. the hidden layer.
Hidden Layer: The hidden layer is the set of neurons where all the computations are performed
on the input data. There can be any number of hidden layers in a neural network. The simplest
network consists of a single hidden layer.
Output Layer: The output layer holds the output/conclusions of the model derived from all the
computations performed. There can be a single node or multiple nodes in the output layer. For
a binary classification problem there is one output node, but in the case of multi-class
classification the output layer can have more than one node.
A typical artificial neural network looks something like the figure above.
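The three layers described above can be sketched in plain Python; the layer sizes used here are hypothetical examples, not values from the text:

```python
# Sketch of layer sizes in a simple network: input -> one hidden layer -> output.
# All sizes are made-up examples for illustration.

def layer_sizes(n_features, n_hidden, n_classes):
    """Return (input, hidden, output) node counts for a simple network."""
    # Binary classification needs a single output node;
    # multi-class classification needs one node per class.
    n_output = 1 if n_classes == 2 else n_classes
    return (n_features, n_hidden, n_output)

print(layer_sizes(4, 8, 2))  # binary classification -> (4, 8, 1)
print(layer_sizes(4, 8, 3))  # multi-class -> (4, 8, 3)
```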
Relationship between a biological neural network and an artificial neural network:
• Dendrites → Inputs
• Synapse → Weights
• Axon → Output
The artificial neural network takes the inputs, computes the weighted sum of the inputs,
and includes a bias. This computation is represented in the form of a transfer function:

z = (x1·w1 + x2·w2 + … + xn·wn) + b

In a forward pass, the following steps are performed:
1. The inputs x1, x2, x3, …, xn are passed to the network.
2. Each hidden layer consists of neurons, and all the inputs are connected to each neuron.
3. W1, W2, W3, W4, W5 are the weights assigned to the inputs In1, In2, In3, In4, In5,
and b is the bias.
4. For each neuron, the weighted sum of the inputs plus the bias is computed and passed
through an activation function, which decides the neuron's output.
5. After getting the predictions from the output layer, the error is calculated,
i.e. the difference between the actual and the predicted output.
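The steps above can be sketched as a single neuron's forward computation. The input, weight, and bias values below are made-up examples, and sigmoid is an illustrative choice of activation:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # z = w1*x1 + ... + wn*xn + b
    return 1 / (1 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Hypothetical values standing in for In1..In5, W1..W5, and b.
x = [0.5, 0.1, 0.9, 0.0, 0.3]
w = [0.4, -0.2, 0.7, 0.1, -0.5]
b = 0.05

prediction = neuron_output(x, w, b)
error = abs(1.0 - prediction)  # step 5: difference between actual (here 1.0) and predicted
print(prediction, error)
```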
If the error is large, steps are taken to minimize it, and for this purpose
Back Propagation is performed.
What is Back Propagation and How Does it Work?
Back Propagation is the process of updating the weights (or coefficients) to find their
optimal values, which helps the model minimize the error, i.e. the difference between
the actual and predicted values.
The weights are updated with the help of optimizers. Optimizers are methods/mathematical
formulations that change the attributes of the neural network, i.e. the weights, in order
to minimize the error.
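As an illustration of what an optimizer does, here is plain gradient descent on a single weight; the loss, learning rate, and data are hypothetical examples, not part of the original text:

```python
# Minimize the squared error (w*x - y)^2 for one weight using gradient descent,
# the simplest optimizer: w <- w - lr * d(error)/dw.

def train_weight(x, y, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of (w*x - y)^2 with respect to w
        w -= lr * grad             # update the weight against the gradient
    return w

# With x = 2 and target y = 6, the optimal weight is 3.
w = train_weight(x=2.0, y=6.0)
print(w)
```

Real optimizers (e.g. SGD with momentum, Adam) refine this same update rule, but the core idea of stepping each weight against its error gradient is unchanged.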
Activation Functions
An activation function decides whether a neuron should be activated or not, based on
whether the neuron's input is relevant for the model's prediction.
Types of Artificial Neural Network
There are various types of Artificial Neural Networks (ANNs). Modeled on the neurons and
network functions of the human brain, an artificial neural network performs tasks in a
similar way, for example segmentation or classification.
• Feedback ANN:
Feedback networks feed information back into themselves and are well suited to solving
optimization problems. Feedback ANNs are used for internal system error corrections.
• Feed-Forward ANN:
A feed-forward network is a basic neural network comprising an input layer, an output layer,
and at least one layer of neurons. By assessing its output with reference to its input, the
strength of the network can be judged from the group behaviour of the connected neurons, and
the output is decided. The primary advantage of this network is that it learns to evaluate
and recognize input patterns.
Advantages of Artificial Neural Network