DL Assessment - I
ASSESSMENT – I
UNIT – I
Q1: Discuss the evolution of Artificial Intelligence (AI) and its connection to early neural networks.
How did early neural networks lay the foundation for modern deep learning?
Q2: What is probabilistic modeling in machine learning? Explain its significance and how it differs
from deterministic approaches.
Q3: Compare kernel methods and decision trees in machine learning. What are the key strengths and
limitations of each approach?
Q4: Define the four branches of machine learning (Supervised, Unsupervised, Semi-supervised, and
Reinforcement Learning) and provide an example use case for each.
Q5: What are overfitting and underfitting in machine learning? Discuss strategies to prevent each
issue during model training.
Q6: Why is evaluating machine learning models important? Discuss precision, recall, and F1-score as
evaluation metrics and their relevance in real-world applications.
UNIT – II
Q1: How does biological vision inspire the design of machine vision systems? Explain the similarities
and differences between the two.
Q2: Compare human language processing with machine language processing. How has deep learning
enhanced natural language understanding?
Q3: What is an artificial neural network (ANN)? Describe its basic structure and explain how it
mimics the functioning of the human brain.
Q4: What are the key steps involved in training a deep neural network? Discuss the role of loss
functions, optimization, and backpropagation in this process.
Q5: What is the backpropagation algorithm? How is it used for training a neural network?
Q6: What are the primary challenges in training and improving deep networks? How can issues like
vanishing gradients and overfitting be addressed?
UNIT – III
Q1: What are the primary components of a neural network, and how do they contribute to its
functioning? Discuss layers, weights, activation functions, and biases.
Q2: What is Keras, and how does it simplify the process of building neural networks? Compare Keras
with TensorFlow, Theano, and CNTK.
Q3: What are the key hardware and software requirements for setting up a deep learning workstation?
Explain the role of GPUs in accelerating deep learning tasks.
Q4: How can a neural network be designed for binary classification tasks such as classifying movie
reviews as positive or negative? Discuss the choice of loss functions and activation functions.
Q5: What are the challenges in multiclass classification tasks like classifying newswires? Explain
how to design a neural network for such tasks, including the use of softmax activation.
Q6: Compare TensorFlow, Theano, and CNTK in terms of their features, performance, and
use cases.
Q. Compare TensorFlow, Theano, and CNTK in terms of their features,
performance, and use cases.
Development
- TensorFlow: Developed and maintained by Google.
- Theano: Developed by Université de Montréal; no longer actively maintained.
- CNTK (Microsoft Cognitive Toolkit): Developed and maintained by Microsoft.

Popularity
- TensorFlow: Very popular and widely used for both research and production purposes.
- Theano: Used mainly in research; now largely obsolete due to lack of maintenance.
- CNTK: Gained some traction but not as widely adopted as TensorFlow.

Performance
- TensorFlow: High performance with strong GPU/TPU support and distributed computing.
- Theano: Efficient, but less optimized for large-scale production environments.
- CNTK: Optimized for large-scale distributed processing with good GPU support.

Ease of Use
- TensorFlow: Offers both low-level control and high-level abstractions (via Keras).
- Theano: Low-level API requiring detailed knowledge of mathematics and programming.
- CNTK: API is relatively simple but not as user-friendly as TensorFlow/Keras.

Features
- TensorFlow: Extensive ecosystem (TensorBoard, TensorFlow Lite, TensorFlow.js, etc.).
- Theano: Focused on symbolic differentiation and low-level optimization.
- CNTK: Excellent for speech recognition and other specific use cases.

Scalability
- TensorFlow: Highly scalable with distributed training and production-level deployment.
- Theano: Not designed for large-scale distributed systems.
- CNTK: Scalable, particularly in cloud environments.

Flexibility
- TensorFlow: Extremely flexible; supports multiple backends and advanced customizations.
- Theano: Limited flexibility compared to TensorFlow.
- CNTK: Focused on specific applications like speech and text processing.

Current Status
- TensorFlow: Actively developed with regular updates and a large community.
- Theano: Deprecated; no longer maintained since 2017.
- CNTK: Active but overshadowed by TensorFlow and PyTorch.
Q. What is the backpropagation algorithm? How is it used for training a
neural network?
1. Forward Propagation
The input data is passed through the neural network, layer by layer.
At each layer, weights and biases are applied to calculate the outputs (activations) of the
neurons.
The output of the final layer is compared to the target values, resulting in a loss/error.
2. Loss Computation
A loss function (e.g., Mean Squared Error, Cross-Entropy) is used to quantify the difference
between the predicted output and the actual target.
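The two loss functions named above can be computed directly from predictions and targets; a minimal NumPy sketch (the sample values below are made up purely for illustration):

```python
import numpy as np

# Hypothetical binary targets and model predictions (illustrative values)
y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8, 0.6])

# Mean Squared Error: average squared difference between prediction and target
mse = np.mean((y_true - y_pred) ** 2)

# Binary Cross-Entropy: heavily penalizes confident but wrong predictions
eps = 1e-12  # small constant to avoid log(0)
bce = -np.mean(y_true * np.log(y_pred + eps)
               + (1 - y_true) * np.log(1 - y_pred + eps))

print(f"MSE: {mse:.4f}")
print(f"BCE: {bce:.4f}")
```

Both losses shrink toward zero as predictions approach the targets, which is exactly the quantity backpropagation then minimizes.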
3. Backward Propagation
Error Calculation:
The error signal is propagated backward from the output layer to the input layer.
Gradient Calculation:
Using the chain rule of calculus, the gradient (partial derivatives) of the loss function with
respect to each weight and bias is calculated. This involves two main parts:
o Output Layer: Calculate the gradient of the loss with respect to the output layer
weights and biases.
o Hidden Layers: Calculate the gradient for each hidden layer recursively by
propagating the error backward.
4. Weight Update
The weights and biases are adjusted in the direction that reduces the loss, typically via
gradient descent: w := w − η · ∂L/∂w, where η is the learning rate.
5. Iteration
The process of forward propagation, loss computation, backpropagation, and weight updating
is repeated for several iterations (epochs) until the network converges (i.e., the loss becomes
sufficiently small).
1. Initialization:
o Initialize the weights and biases with small random values.
2. Training Process:
o Feed the input data through the network (forward propagation).
o Calculate the loss using a loss function.
o Propagate the error backward through the network (backpropagation).
o Update the weights and biases using the gradients.
3. Stopping Criteria:
o Training is stopped when:
o The loss becomes very small (convergence).
o The model reaches a specified number of epochs.
o The validation performance stops improving (to avoid overfitting).
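The full training loop described above (forward propagation, loss computation, backpropagation, weight update, iteration) can be sketched from scratch in NumPy. The network size, learning rate, and XOR dataset here are illustrative choices, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (illustrative): 4 samples, 2 features, XOR targets
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1. Initialization: small random weights, zero biases
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros((1, 1))

lr = 1.0          # learning rate (illustrative)
losses = []
for epoch in range(2000):
    # 2. Forward propagation: compute activations layer by layer
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    y_hat = sigmoid(h @ W2 + b2)   # network output

    # 3. Loss computation (Mean Squared Error)
    loss = np.mean((y_hat - y) ** 2)
    losses.append(loss)

    # 4. Backward propagation: gradients via the chain rule
    d_out = 2 * (y_hat - y) / len(X) * y_hat * (1 - y_hat)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_hid = (d_out @ W2.T) * h * (1 - h)   # error propagated to hidden layer
    dW1 = X.T @ d_hid
    db1 = d_hid.sum(axis=0, keepdims=True)

    # 5. Weight update: gradient descent step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Repeating the loop over many epochs drives the loss down, mirroring the iteration and stopping criteria described above; in practice, frameworks such as TensorFlow or Keras compute these gradients automatically.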
Q. What are the key hardware and software requirements for setting up a deep learning
workstation? Explain the role of GPUs in accelerating deep learning tasks.
Hardware Requirements
- GPU: A dedicated NVIDIA GPU with CUDA support and ample VRAM for training models.
- CPU: A modern multi-core processor for data preprocessing and general computation.
- RAM: Enough memory (commonly 16–32 GB) to hold datasets and intermediate results.
- Storage: A fast SSD for quick loading of large datasets.
Software Requirements
- Operating system: Linux is most common, though Windows and macOS are also supported.
- Python with deep learning frameworks such as TensorFlow, Keras, or PyTorch.
- NVIDIA CUDA Toolkit and cuDNN libraries to enable GPU acceleration.
- Supporting libraries (e.g., NumPy, pandas) and tools such as Jupyter Notebook.
Role of GPUs
GPUs contain thousands of cores that execute matrix and vector operations in parallel.
Because deep learning training consists largely of such operations (large matrix
multiplications in the forward and backward passes), GPUs can accelerate training by
orders of magnitude compared to CPUs.