NNDL Focused Question-Updated
CCS355 NEURAL NETWORKS AND DEEP LEARNING
L T P C: 2 0 2 3
Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks
(SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their
name and structure are inspired by the human brain, mimicking the way that biological neurons
signal to one another.
Neural networks can help computers make intelligent decisions with limited human assistance. This is because they can learn and model relationships between input and output data that are nonlinear and complex. They have several use cases across many industries, such as the following:
Medical diagnosis by medical image classification
Targeted marketing by social network filtering and behavioral data analysis
Financial predictions by processing historical data of financial instruments
Electrical load and energy demand forecasting
Process and quality control
Chemical compound identification
Weights
Bias
Threshold
Learning Rate
Target value
Error
Weights: Each neuron is linked to other neurons through connection links, and each link carries a weight. The weight holds information about the input signal, and the output depends on the weights and the input signal. The weights can be represented in matrix form, known as the connection matrix.
Bias: Bias is a constant that is added to the weighted sum of the inputs to compute the net input. It is used to shift the result to the positive or negative side: a positive bias increases the net input, while a negative bias decreases it.
Threshold: The threshold is the value against which the net input is compared; the neuron fires (produces an output) only when the net input reaches or exceeds this value.
Learning Rate: The learning rate, denoted α, ranges from 0 to 1. It controls how much the weights are adjusted during the training of the ANN.
Target value: Target values are the correct values of the output variable, also known simply as targets.
Error: The error is the deviation of the predicted output values from the target values.
Supervised learning is when we teach or train the machine using data that is well labelled, meaning some of the data is already tagged with the correct answer. After that, the machine is provided with a new set of examples (data), and the supervised learning algorithm analyses the training data (the set of training examples) and produces a correct outcome from the labelled data.
1. Input Layer: The input layer consists of one or more input neurons, which receive input
signals from the external world or from other layers of the neural network.
2. Weights: Each input neuron is associated with a weight, which represents the strength of
the connection between the input neuron and the output neuron.
3. Bias: A bias term is added to the input layer to provide the Perceptron with additional
flexibility in modeling complex patterns in the input data.
4. Activation Function: The activation function determines the output of the Perceptron
based on the weighted sum of the inputs and the bias term. Common activation functions used
in Perceptrons include the step function, sigmoid function, and ReLU function.
5. Output: The output of the Perceptron is a single binary value, either 0 or 1, which
indicates the class or category to which the input data belongs.
6. Training Algorithm: The Perceptron is typically trained using a supervised learning algorithm such as the Perceptron learning rule or backpropagation. During training, the weights and biases of the Perceptron are adjusted to minimize the error between the predicted output and the true output for a given set of training examples (a minimal sketch follows after this list).
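For illustration, here is a minimal sketch of the Perceptron learning rule in Python with NumPy. The AND-gate data, learning rate, and epoch count are assumptions chosen for the example, not part of the definition above.

import numpy as np

def step(z):
    # Step activation: output 1 if the net input is non-negative, else 0.
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])   # one weight per input feature
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            y_pred = step(np.dot(w, xi) + b)
            # Perceptron learning rule: move weights by the signed error.
            error = target - y_pred
            w += lr * error * xi
            b += lr * error
    return w, b

# Assumed toy task: learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)  # converges to a separating line, e.g. w = [0.2, 0.1], b = -0.3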
The activation function decides whether a neuron should be activated or not by calculating the weighted sum and adding the bias to it. Its purpose is to introduce non-linearity into the output of a neuron.
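As a quick illustration, the common activation functions named above (step, sigmoid, ReLU) can be sketched in Python with NumPy as follows; the sample inputs are arbitrary.

import numpy as np

def step(z):
    # Binary step: fires (1) only when the net input crosses zero.
    return np.where(z >= 0, 1, 0)

def sigmoid(z):
    # Sigmoid squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # ReLU passes positive values through and zeroes out negatives.
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(z), sigmoid(z), relu(z))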
Part B
1. Explain briefly about the operation of biological neural network with a neat sketch.
2. Discuss Supervised learning and Unsupervised learning.
3. Explain the evolution of Neural Network.
4. Explain briefly about artificial neural network.
5. Explain in detail supervised learning network.
UNIT II ASSOCIATIVE MEMORY AND UNSUPERVISED LEARNING NETWORKS
An associative memory network can store a set of patterns as memories. When the associative
memory is being presented with a key pattern, it responds by producing one of the stored
patterns, which closely resembles or relates to the key pattern.
There are two algorithms developed for the training of pattern association nets:
1. Hebb Rule
2. Outer Products Rule
In a heteroassociative memory, the training input and the target output vectors are different. The weights are determined so that the network can store a set of pattern associations. Each association is a pair of training input and target output vectors (s(p), t(p)), with p = 1, 2, …, P. Each vector s(p) has n components and each vector t(p) has m components. The weights are determined using either the Hebb rule or the delta rule. The net then finds an appropriate output vector corresponding to an input vector x, which may be either one of the stored patterns or a new pattern.
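A minimal sketch of this training and recall in Python with NumPy, using the outer products (Hebb) rule; the bipolar pattern pairs are assumed toy data.

import numpy as np

# Assumed toy association pairs: two bipolar input vectors s(p) (n = 4)
# paired with two bipolar target vectors t(p) (m = 2).
S = np.array([[ 1,  1, -1, -1],
              [-1, -1,  1,  1]])
T = np.array([[ 1, -1],
              [-1,  1]])

# Outer products (Hebb) rule: W is the sum over p of s(p)^T t(p).
W = S.T @ T

def recall(x, W):
    # Net input followed by a bipolar sign activation.
    return np.where(x @ W >= 0, 1, -1)

print(recall(np.array([1, 1, -1, -1]), W))  # recovers the target [ 1 -1 ]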
8. What is BAM?
BAM (Bidirectional Associative Memory) is a recurrent heteroassociative memory network, introduced by Bart Kosko, that stores pairs of patterns and can recall in both directions: presenting a pattern to one layer retrieves the associated pattern on the other layer, and vice versa.
In these competitive networks, the weights remain fixed even during the training process. Competition among neurons is used to enhance the contrast in their activations. Two such networks are the Maxnet and the Hamming network.
A Self-Organizing Map (also called a Kohonen Map or SOM) is a type of Artificial Neural Network inspired by biological models of neural systems from the 1970s. It follows an unsupervised learning approach and trains its network through a competitive learning algorithm. A SOM is used for clustering and mapping (or dimensionality reduction): it maps multidimensional data onto a lower-dimensional space, which makes complex problems easier to interpret. A SOM has two layers: the input layer and the output layer.
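A hedged sketch of one competitive learning step of a SOM in Python with NumPy: find the best-matching unit, then pull it and its grid neighbors toward the input. The 5x5 grid, learning rate, and neighborhood radius are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3                 # assumed 5x5 map over 3-D inputs
weights = rng.random((grid_h, grid_w, dim))   # output-layer weight vectors

def som_step(weights, x, lr=0.5, radius=1.0):
    # 1. Competition: the best-matching unit (BMU) is the output node
    #    whose weight vector is closest to the input x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 2. Cooperation: a Gaussian neighborhood centered on the BMU.
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2 * radius ** 2))
    # 3. Adaptation: move every node toward x, scaled by its neighborhood value.
    weights += lr * h[..., None] * (x - weights)
    return weights

weights = som_step(weights, np.array([0.2, 0.7, 0.1]))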
Part B

UNIT III THIRD-GENERATION NEURAL NETWORKS
A convolutional neural network (CNN) is a specific kind of neural network with multiple layers. It processes data that has a grid-like arrangement and then extracts important features. One huge advantage of using CNNs is that you don't need to do a lot of pre-processing on images.
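To make the grid-like processing concrete, here is a hedged NumPy sketch of a single-channel "valid" convolution (strictly a cross-correlation, which is what most deep learning libraries compute under that name); the toy image and edge-detecting kernel are assumptions.

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over every valid position and take the
    # elementwise product-and-sum (a cross-correlation).
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # assumed toy image
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])              # responds to vertical edges
print(conv2d(image, kernel))                       # 3x3 feature map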
A deep neural network (DNN) is an ANN with multiple hidden layers between the input and output layers. Like shallow ANNs, DNNs can model complex non-linear relationships. The main purpose of a neural network is to receive a set of inputs, perform progressively complex calculations on them, and produce output to solve real-world problems such as classification. Here we restrict ourselves to feedforward neural networks.
6. What is ELM?
ELMs (Extreme Learning Machines) are feedforward neural networks. ELMs are believed to be able to learn thousands of times faster than networks trained using the backpropagation technique.
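The speed comes from how ELMs are trained: the hidden-layer weights are random and never updated, and only the output weights are solved, in closed form, by a least-squares fit. A minimal sketch in Python with NumPy, assuming toy regression data:

import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=20):
    # Hidden layer: random weights and biases, fixed once and for all.
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)          # hidden-layer activations
    # Output weights in one shot via the Moore-Penrose pseudo-inverse,
    # instead of iterative backpropagation.
    W_out = np.linalg.pinv(H) @ Y
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out

# Assumed toy regression target: y = x0 + x1.
X = rng.random((100, 2))
Y = X.sum(axis=1, keepdims=True)
params = elm_train(X, Y)
print(elm_predict(X[:3], *params))     # close to the true sums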
Part B
1. Explain in detail about Convolutional neural network.
2. Discuss about various convolution operation.
3. Explain Efficient Convolution Algorithms.
4. Discuss about computer vision.
5. Explain image compression using CNN.
UNIT IV DEEP FEED FORWARD NETWORKS
Chain Rule: the chain rule of calculus gives the derivative of a composite function, d/dx f(g(x)) = f'(g(x)) · g'(x). Backpropagation applies this rule repeatedly, layer by layer, to compute the gradient of the loss with respect to every weight in the network.
7. What is back propagation in deep neural network?
Backpropagation is one of the important concepts of a neural network. For a single training example, the backpropagation algorithm calculates the gradient of the error function. Backpropagation can be written as a function of the neural network. Backpropagation algorithms are a set of methods used to efficiently train artificial neural networks following a gradient descent approach that exploits the chain rule.
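A worked sketch of the chain rule inside backpropagation, for a single sigmoid neuron with a squared-error function (Python with NumPy); the inputs, weights, target, and learning rate are assumed values.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed tiny setup: one neuron, one training example.
x = np.array([0.5, -1.0])    # inputs
w = np.array([0.3, 0.8])     # weights
b = 0.1                      # bias
t = 1.0                      # target value

# Forward pass.
z = np.dot(w, x) + b
y = sigmoid(z)
E = 0.5 * (y - t) ** 2       # squared error

# Backward pass, via the chain rule: dE/dw = dE/dy * dy/dz * dz/dw.
dE_dy = y - t
dy_dz = y * (1.0 - y)        # derivative of the sigmoid
grad_w = dE_dy * dy_dz * x   # dz/dw = x
grad_b = dE_dy * dy_dz       # dz/db = 1

# One gradient descent step with an assumed learning rate.
lr = 0.1
w -= lr * grad_w
b -= lr * grad_b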
8. What is Regularization?
Regularization is a technique used in machine learning and deep learning to prevent overfitting
and improve the generalization performance of a model. It involves adding a penalty term to the
loss function during training.
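One common form of the penalty term is L2 regularization (weight decay), which adds λ times the squared norm of the weights to the data loss. A minimal sketch in Python with NumPy, with an assumed λ:

import numpy as np

def l2_regularized_loss(y_pred, y_true, w, lam=0.01):
    # Data term: mean squared error of the predictions.
    mse = np.mean((y_pred - y_true) ** 2)
    # Penalty term: lambda * sum of squared weights, which discourages
    # large weights and thereby reduces overfitting.
    penalty = lam * np.sum(w ** 2)
    return mse + penalty

w = np.array([2.0, -3.0, 0.5])
print(l2_regularized_loss(np.array([1.0]), np.array([0.9]), w))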
Synthetic data is generated artificially rather than taken from the original dataset. It is often produced using deep neural networks (DNNs) and generative adversarial networks (GANs).
Part B
1. Explain the Probabilistic Theory of Deep Learning.
2. Explain in detail about Chain Rule and Backpropagation.
3. Explain in detail about Data augmentation.
UNIT V RECURRENT NEURAL NETWORKS
Recursive Neural Networks (RvNNs) are a class of deep neural networks that can learn detailed and structured information. With an RvNN, you can get a structured prediction by recursively applying the same set of weights to structured inputs. The word recursive indicates that the network is applied to its own output.
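A minimal sketch of the recursive idea in Python with NumPy: one shared weight matrix combines two child vectors into a parent vector, applied bottom-up over an assumed toy binary tree; all sizes and vectors here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
dim = 4
# One shared weight matrix and bias, reused at every internal tree node.
W = rng.normal(scale=0.5, size=(dim, 2 * dim))
bias = np.zeros(dim)

def compose(left, right):
    # Parent vector = tanh(W [left; right] + bias), same W everywhere.
    return np.tanh(W @ np.concatenate([left, right]) + bias)

# Assumed toy parse tree ((a b) c): leaves are random word vectors.
a, b, c = (rng.normal(size=dim) for _ in range(3))
ab = compose(a, b)           # combine the two leftmost leaves
root = compose(ab, c)        # structured representation of the whole tree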
Applications of recurrent neural networks include:
Prediction problems.
Machine Translation.
Speech Recognition.
Language Modelling and Generating Text.
Video Tagging.
Generating Image Descriptions.
Text Summarization.
Call Center Analysis.
6. What is LSTM in deep learning?
Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network (RNN) that is
specifically designed to handle sequential data, such as time series, speech, and text.
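A hedged sketch of a single LSTM cell step in Python with NumPy, showing the forget, input, and output gates that let the network keep or discard information across long sequences; all sizes and weights are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    n = h_prev.shape[0]
    hx = np.concatenate([h_prev, x])   # previous hidden state + current input
    z = W @ hx + b                     # pre-activations for all four gates
    f = sigmoid(z[0:n])                # forget gate: what to drop from memory
    i = sigmoid(z[n:2 * n])            # input gate: what new info to store
    o = sigmoid(z[2 * n:3 * n])        # output gate: what to expose
    g = np.tanh(z[3 * n:4 * n])        # candidate cell state
    c = f * c_prev + i * g             # updated cell (long-term) state
    h = o * np.tanh(c)                 # updated hidden (short-term) state
    return h, c

rng = np.random.default_rng(0)
n, d = 3, 2                            # assumed hidden and input sizes
W = rng.normal(scale=0.1, size=(4 * n, n + d))
b = np.zeros(4 * n)
h, c = lstm_step(rng.normal(size=d), np.zeros(n), np.zeros(n), W, b)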
7. What is an Autoencoder?
Autoencoders are neural networks designed to learn a low-dimensional representation of a given input. Autoencoders typically consist of two components: an encoder, which learns to map input data to a lower-dimensional representation, and a decoder, which learns to map the representation back to the input data.
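A minimal sketch of the encoder/decoder structure in Python with NumPy; the layer sizes are assumptions, and the random weights stand in for parameters that training would fit by minimizing the reconstruction error.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_code = 8, 3   # assumed sizes: 8-D input compressed to a 3-D code

# Randomly initialized weights; training would adjust these to minimize
# the reconstruction error ||x - x_hat||^2.
W_enc = rng.normal(scale=0.3, size=(n_code, n_in))
W_dec = rng.normal(scale=0.3, size=(n_in, n_code))

def encode(x):
    # Encoder: map the input to a lower-dimensional representation.
    return np.tanh(W_enc @ x)

def decode(code):
    # Decoder: map the representation back to the input space.
    return W_dec @ code

x = rng.random(n_in)
x_hat = decode(encode(x))
print(np.mean((x - x_hat) ** 2))   # reconstruction error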
Part B
1. Explain in detail about RNN.
2. Explain Bidirectional RNNs
3. Explain Deep Recurrent Networks
4. Discuss about NLP
5. Discuss about encoder and decoder.