Deep Learning
Neural Networks are a class of machine learning models modeled after the structure and
function of the human brain. They are composed of layers of artificial neurons connected by
weighted edges. These connections allow the network to process input data, make predictions, and
learn from feedback.
Feedforward Neural Networks: In these networks, data flows in only one direction, from input to
output. These are the simplest type of Neural Network and are commonly used for pattern
recognition and classification tasks.
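As a concrete illustration, here is a minimal feedforward classifier sketched in PyTorch. The layer sizes (784 inputs, 128 hidden units, 10 classes, e.g. for MNIST-style digit images) are illustrative assumptions, not anything prescribed by the text:

```python
import torch
import torch.nn as nn

# A tiny feedforward classifier: data flows strictly input -> output.
model = nn.Sequential(
    nn.Linear(784, 128),  # weighted edges: input layer -> hidden layer
    nn.ReLU(),            # nonlinearity applied at each hidden neuron
    nn.Linear(128, 10),   # hidden layer -> one score per class
)

x = torch.randn(32, 784)  # a batch of 32 flattened inputs
logits = model(x)         # forward pass only; no feedback connections
print(logits.shape)       # torch.Size([32, 10])
```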
Recurrent Neural Networks: In these networks, the output (hidden state) of the previous time step is
fed back as input to the network at the current time step. This gives the network a form of memory,
allowing it to process sequential data such as time series or natural language.
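A sketch of this feedback over time, using PyTorch's built-in RNN layer; the sizes (10-dimensional inputs, a 20-dimensional hidden state, sequences of 15 steps) are illustrative assumptions:

```python
import torch
import torch.nn as nn

# The hidden state produced at step t is fed back in at step t+1,
# acting as the network's memory over the sequence.
rnn = nn.RNN(input_size=10, hidden_size=20, batch_first=True)

x = torch.randn(4, 15, 10)   # batch of 4 sequences, 15 time steps each
out, h_n = rnn(x)            # out: the hidden state at every time step
print(out.shape, h_n.shape)  # torch.Size([4, 15, 20]) torch.Size([1, 4, 20])
```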
Convolutional Neural Networks: In these networks, the input data is usually an image, and the network
learns to identify features or patterns in the image by convolving a set of learnable filters over the
input. These networks are commonly used for image classification, object detection, and image
segmentation tasks.
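A small convolutional stack in PyTorch, again with illustrative assumptions (RGB input, 16 and 32 filters, 3x3 kernels, a 10-way classifier head):

```python
import torch
import torch.nn as nn

# Each Conv2d slides a set of learnable filters over the image
# to detect local patterns such as edges and textures.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 16 filters over RGB input
    nn.ReLU(),
    nn.MaxPool2d(2),                              # downsample by a factor of 2
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 32 higher-level filters
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                      # global average pooling
    nn.Flatten(),
    nn.Linear(32, 10),                            # 10-way classification head
)

x = torch.randn(8, 3, 64, 64)  # batch of 8 RGB images, 64x64 pixels
print(model(x).shape)          # torch.Size([8, 10])
```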
Q. How do Neural Networks get the optimal weight and bias values?
Neural Networks obtain their weight and bias values through a process called optimization. The
optimization process aims to find the values of the weights and biases that minimize the error, or
loss, function of the network.
Gradient Descent: This algorithm calculates the gradient of the error function with respect to the
weights and biases and updates them in the direction of the negative gradient. This process is
repeated iteratively until the error function converges to a minimum.
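A minimal NumPy sketch of this loop on a toy linear model, fit with a mean squared error loss; the data, learning rate, and step count are illustrative assumptions:

```python
import numpy as np

# Fit y = w*x + b by gradient descent on the mean squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)  # true w = 3, b = 1, plus noise

w, b, lr = 0.0, 0.0, 0.1
for step in range(200):
    err = (w * x + b) - y
    grad_w = 2 * np.mean(err * x)  # dL/dw for the mean squared error
    grad_b = 2 * np.mean(err)      # dL/db
    w -= lr * grad_w               # step in the negative gradient direction
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # approaches w = 3.0, b = 1.0
```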
Stochastic Gradient Descent: This is a variant of Gradient Descent that updates the weights and
biases after each training example (or, in practice, each small mini-batch). This can be faster than
full-batch Gradient Descent but produces noisier, less stable updates.
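The same toy problem as above, updated one example at a time rather than on the full batch (again with illustrative hyperparameters):

```python
import numpy as np

# Stochastic updates: the gradient from a single example is noisy,
# but each update is cheap and many of them are made per epoch.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(5):
    for xi, yi in zip(x, y):
        err = (w * xi + b) - yi
        w -= lr * 2 * err * xi  # gradient from one example only
        b -= lr * 2 * err

print(round(w, 2), round(b, 2))  # a noisier path, but a similar destination
```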
Adam: This is a popular optimization algorithm that combines momentum with per-parameter adaptive
learning rates to update the weights and biases. It is efficient and robust to noisy gradients.
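In a framework, switching optimizers is a one-line change. A sketch using torch.optim.Adam on a toy regression, where the model size, learning rate, and step count are illustrative assumptions:

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)          # a toy model with one weight and one bias
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

x = torch.randn(100, 1)
y = 3.0 * x + 1.0                # noiseless target: y = 3x + 1

for step in range(500):
    opt.zero_grad()              # clear gradients from the previous step
    loss = loss_fn(model(x), y)
    loss.backward()              # backpropagation computes the gradients
    opt.step()                   # Adam applies momentum + adaptive rates

print(loss.item())               # close to 0 once the fit has converged
```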
Forward Propagation and Backward Propagation are two phases in the training of a Neural Network.
Forward Propagation is the process of taking the input data and propagating it forward through the
network to get an output prediction. This involves computing the weighted sum of the inputs at each
neuron, applying an activation function to the result, and passing the output to the next layer.
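A forward pass written out by hand in NumPy, for a two-layer network with sigmoid activations; the sizes (3 inputs, 4 hidden units, 1 output) are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input -> hidden weights
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # hidden -> output weights

x = rng.normal(size=3)
h = sigmoid(W1 @ x + b1)       # weighted sum, then activation, at each neuron
y_hat = sigmoid(W2 @ h + b2)   # hidden output passed to the next layer
print(y_hat)                   # the network's prediction
```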
Backward Propagation, also known as backpropagation, is the process of propagating the error or
loss backwards through the network to update the weights and biases. This involves computing the
gradient of the error function with respect to the weights and biases and updating them using an
optimization algorithm such as Gradient Descent.
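The same two-layer network trained end to end, with the backward pass written out via the chain rule; the squared-error loss, learning rate, and target value are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
x, y, lr = rng.normal(size=3), np.array([1.0]), 0.5

for step in range(100):
    # Forward pass (intermediate values are reused in the backward pass)
    h = sigmoid(W1 @ x + b1)
    y_hat = sigmoid(W2 @ h + b2)

    # Backward pass: apply the chain rule from the loss toward the input
    d_z2 = 2 * (y_hat - y) * y_hat * (1 - y_hat)  # through the output sigmoid
    dW2, db2 = np.outer(d_z2, h), d_z2
    d_z1 = (W2.T @ d_z2) * h * (1 - h)            # through the hidden sigmoid
    dW1, db1 = np.outer(d_z1, x), d_z1

    # Gradient-descent update of every weight and bias
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(float(y_hat[0]))  # the prediction moves toward the target 1.0
```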
In summary, Forward Propagation is the process of passing the input data through the network to
get an output prediction, while Backward Propagation is the process of computing the error and
updating the weights and biases to improve the network's performance.