Artificial Neural Networks

An interactive presentation on artificial neural networks

Uploaded by shailaja.m

Artificial Neural Networks

Introduction to Artificial Neural Networks (ANNs)
• Artificial neural networks are computational models inspired by the
human brain.
• They consist of interconnected units (neurons) that process
information in layers to solve various tasks like classification,
regression, and pattern recognition.
• An artificial neuron (the classic example being the perceptron) is the basic
building block of artificial neural networks (ANNs).
• Example: Recognizing handwritten digits in images.
Components of an Artificial Neuron
• Neurons are the basic units of an ANN: tiny decision-makers that take
inputs, process them, and send outputs to neurons in the next layer.
• Inputs are the data points fed to the input layer of the network.
• Example: each pixel of an image is one input.
• Weights are the numerical values assigned to the connections between
neurons.
• They determine the impact/importance of each input.
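The neuron's core computation above can be sketched in a few lines. The input and weight values here are illustrative, not taken from the slides:

```python
# A single neuron's pre-activation value: the weighted sum of its inputs.
def weighted_sum(inputs, weights):
    """Multiply each input by its connection weight and sum the results."""
    return sum(x * w for x, w in zip(inputs, weights))

inputs = [0.5, 0.3, 0.2]   # e.g. three pixel intensities
weights = [0.4, 0.7, 0.2]  # importance assigned to each input
print(round(weighted_sum(inputs, weights), 2))  # 0.5*0.4 + 0.3*0.7 + 0.2*0.2 = 0.45
```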
Activation Function
• An activation function in a neural network determines whether a
neuron should be activated or not, based on the weighted sum of
inputs it receives.
• It determines the output of the neuron.
Types of Activation Functions:
• Sigmoid function
• Hyperbolic tangent (tanh) function
• Rectified Linear Unit (ReLU)
• Leaky ReLU
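The four activation functions listed above can be written directly from their standard definitions (a minimal sketch; the test inputs are arbitrary):

```python
import math

def sigmoid(x):
    """Squashes any real number into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def tanh(x):
    """Squashes any real number into (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Zero for negative inputs, identity for positive inputs."""
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Like ReLU, but negative inputs keep a small slope alpha."""
    return x if x > 0 else alpha * x

for f in (sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, f(-2.0), f(2.0))
```

Note the difference at negative inputs: ReLU outputs exactly 0, while leaky ReLU keeps a small nonzero value, which avoids "dead" neurons during training.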
Bias
• It is an additional parameter that allows you to shift the output
of an activation function.
• It enables the activation function to better fit the data and
generalize well to unseen examples, thus improving the flexibility
and performance of the model.
• Each neuron typically has its own bias term, which is added to
the weighted sum of inputs before applying the activation
function.
• Bias is like a tuning knob that helps the model fit the data better.
• It allows the network to adjust predictions, making them closer
to the actual outcomes observed in training.
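Putting the pieces together, a complete neuron adds its bias to the weighted sum before applying the activation. The weights and inputs below are made up for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum, shifted by the bias, passed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

inputs, weights = [1.0, 2.0], [0.5, -0.25]
print(neuron(inputs, weights, bias=0.0))  # z = 0.0, so the output is 0.5
print(neuron(inputs, weights, bias=2.0))  # z = 2.0, so the output rises to ~0.88
```

With the same inputs and weights, changing only the bias shifts the output, which is exactly the "tuning knob" role described above.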
Neural Network Representation
A neural network is represented by layers of interconnected nodes
(neurons):
• Input Layer: Receives the initial data.
• Hidden Layers: Intermediate layers that process inputs.
• Output Layer: Produces the final prediction or output.

• Example: A network for image recognition might have an input layer
for pixel values, hidden layers for feature extraction, and an output
layer for classifying objects.
Multilayer Networks (Multilayer Perceptrons)
• Multilayer networks consist of multiple layers of neurons, including
input, hidden, and output layers.
• These networks can model more complex functions than single-layer
perceptrons.
• Example: A multilayer network for image recognition might have
several hidden layers that progressively extract higher-level features
from raw pixel data.
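A forward pass through such a network just chains the layer computation: each layer's outputs become the next layer's inputs. This sketch uses a tiny 2-input, 3-hidden-neuron, 1-output network with made-up weights:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum + bias + sigmoid per neuron."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# Input layer: 2 values. Hidden layer: 3 neurons. Output layer: 1 neuron.
x = [0.5, -0.2]
hidden = layer(x,
               weights=[[0.1, 0.4], [-0.3, 0.8], [0.5, 0.5]],
               biases=[0.0, 0.1, -0.1])
output = layer(hidden,
               weights=[[0.3, -0.6, 0.9]],
               biases=[0.05])
print(output)  # a single sigmoid output, somewhere in (0, 1)
```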
The Backpropagation Algorithm
• Backpropagation is a key algorithm for training multilayer neural
networks.
It involves:
1. Forward Pass: Computing the output of the network for a given input.
2. Backward Pass: Calculating the error by comparing the predicted
output to the actual target.
3. Weight Update: Adjusting the weights to minimize the error using
gradient descent.
• Gradient Descent is an optimization algorithm that helps find the set of
weights that minimizes the error.
• Gradient Descent calculates the gradient (the partial derivative) of the
error with respect to each weight during every iteration.
• Weights are then updated in the direction opposite the gradient to
reduce the error.
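Gradient descent on a toy error function makes the "step opposite the gradient" rule concrete. Here we minimize E(w) = (w - 2)², whose gradient is dE/dw = 2(w - 2), so the minimum sits at w = 2 (the function, learning rate, and iteration count are all illustrative):

```python
# Gradient descent on E(w) = (w - 2)^2.
def gradient(w):
    """dE/dw = 2 * (w - 2)."""
    return 2 * (w - 2)

w, lr = 0.0, 0.1          # start far from the minimum
for _ in range(50):
    w -= lr * gradient(w)  # step opposite the gradient
print(round(w, 4))         # converges toward the minimum at w = 2
```

Each step shrinks the distance to the minimum by a constant factor here; too large a learning rate would instead overshoot and diverge.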
Steps of Backpropagation:
1. Initialization: Start with random weights.
2. Forward Pass: Pass input through the network to get the output.
3. Compute Error: Calculate the difference between the predicted
output and the actual output.
4. Backward Pass: Compute the gradient of the error with respect to
each weight using the chain rule.
5. Update Weights: Adjust weights to reduce the error.
6. Repeat: Iterate through the process for many epochs until the
network converges.
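The six steps above can be sketched end to end for the simplest possible case: a single sigmoid neuron trained on an AND-like toy task (the data, learning rate, and epoch count are illustrative choices, not from the slides):

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

random.seed(0)
# Toy dataset: output 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# 1. Initialization: random weights, zero bias.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b, lr = 0.0, 0.5

for epoch in range(5000):                  # 6. Repeat for many epochs.
    for x, target in data:
        z = w[0]*x[0] + w[1]*x[1] + b      # 2. Forward pass.
        y = sigmoid(z)
        error = y - target                 # 3. Compute error.
        # 4. Backward pass (chain rule): dE/dw_i = error * sigmoid'(z) * x_i,
        #    where sigmoid'(z) = y * (1 - y).
        dz = error * y * (1 - y)
        # 5. Update weights and bias opposite the gradient.
        w[0] -= lr * dz * x[0]
        w[1] -= lr * dz * x[1]
        b    -= lr * dz

for x, target in data:
    print(x, round(sigmoid(w[0]*x[0] + w[1]*x[1] + b), 2))
```

A real multilayer network repeats the same chain-rule computation layer by layer, propagating the error backward from the output, which is where the name backpropagation comes from.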
