Deep Learning Viva
5. Can you name some key libraries that should be installed along with TensorFlow?
o NumPy
o SciPy
o Matplotlib
o Pandas
o Statsmodels
o Scikit-learn
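A quick sanity check that these libraries are installed (a minimal sketch; printed versions will vary by environment):

import numpy as np
import scipy
import matplotlib
import pandas as pd
import statsmodels
import sklearn
import tensorflow as tf

# Print each library's version to confirm the installation
for mod in (np, scipy, matplotlib, pd, statsmodels, sklearn, tf):
    print(mod.__name__, mod.__version__)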
6. How do you perform a simple computation in TensorFlow?
import tensorflow as tf

a = tf.constant(2)    # scalar constant tensor
result = a + a + a    # executed eagerly in TF 2.x
print(result)         # tf.Tensor(6, shape=(), dtype=int32)
7. How can you evaluate expressions in TensorFlow?
In TensorFlow 2.x, expressions are evaluated eagerly, so you can just print the tensor.
In TF 1.x, you needed to run a session to evaluate.
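A minimal sketch of eager evaluation in TF 2.x:

import tensorflow as tf

x = tf.constant([1.0, 2.0])
y = x * 3.0      # evaluated immediately (eagerly), no session needed
print(y)         # tf.Tensor([3. 6.], shape=(2,), dtype=float32)
print(y.numpy()) # convert the result to a NumPy array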
8. What is the purpose of a TensorFlow session (in v1) or tf.function (in v2)?
In TF 1.x, a session was required to execute the graph. In TF 2.x, tf.function is used to
convert Python functions into TensorFlow graphs for better performance.
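A minimal tf.function sketch (the function name square_sum is illustrative):

import tensorflow as tf

@tf.function    # traces the Python function into a TensorFlow graph
def square_sum(a, b):
    return a * a + b * b

print(square_sum(tf.constant(2.0), tf.constant(3.0)))  # tf.Tensor(13.0, ...)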
9. How can you implement a logic gate like AND using TensorFlow?
Use a small neural network with 2 inputs and 1 output. The inputs are (0,0), (0,1),
(1,0), (1,1), the target outputs are [0, 0, 0, 1], and the output unit uses a sigmoid
activation; see the sketch below.
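A minimal Keras sketch of the AND gate (learning rate and epoch count are illustrative):

import numpy as np
import tensorflow as tf

# Truth table for AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [0], [0], [1]], dtype=np.float32)

# A single sigmoid unit suffices because AND is linearly separable
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(2,))
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
              loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X).round())  # expected: [[0.], [0.], [0.], [1.]]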
10. What activation function would you use in such a logic gate model?
Sigmoid, because it outputs values between 0 and 1, which are suitable for binary
classification.
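For reference, sigmoid(x) = 1 / (1 + e^(-x)); a quick check of its range:

import tensorflow as tf

# Large negative inputs approach 0, large positive inputs approach 1
print(tf.sigmoid(tf.constant([-10.0, 0.0, 10.0])))  # ≈ [0.0, 0.5, 1.0]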
12. What is the difference between 2D and 3D datasets in the context of DNNs?
A 2D dataset has two axes per sample (e.g., a grayscale image of height × width), while a
3D dataset adds a third axis such as color channels (an RGB image of height × width × 3)
or depth/time (video frames, volumetric scans); 3D inputs typically require Conv3D or
channel-aware Conv2D layers.
Common application areas:
o Image classification
o Object detection
o Face recognition
Ways to improve model performance:
o Hyperparameter tuning
o Transfer learning
o Regularization techniques
o Batch size: the number of samples processed before the model updates its weights
o Overfitting: the model fits the training data too closely and generalizes poorly to unseen data
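A sketch of where batch size enters training (random toy data; all shapes are illustrative):

import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))])
model.compile(optimizer="adam", loss="binary_crossentropy")

# batch_size=32 -> weights update after every 32 samples (4 updates per epoch here)
model.fit(X, y, batch_size=32, epochs=1, verbose=0)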
27. What optimizers have you used in your models? (e.g., SGD, Adam)
Common choices are SGD (optionally with momentum), RMSprop, and Adam; Adam is a
frequent default because it adapts the learning rate per parameter. See the sketch after
this list.
Common hyperparameter search strategies:
o Grid search
o Random search
o Bayesian optimization
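A minimal sketch of choosing an optimizer in Keras (learning rates are illustrative):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(2,))
])

opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
# opt = tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9)  # drop-in alternative
model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])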
31. Experiment 2: Compute the function f(x, y) = x² + y² + 2x + y
How do you implement mathematical functions using TensorFlow?
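A minimal sketch computing f (and its gradient, which such experiments often ask for):

import tensorflow as tf

x = tf.Variable(1.0)
y = tf.Variable(2.0)

with tf.GradientTape() as tape:
    f = x**2 + y**2 + 2*x + y   # f(x, y) = x² + y² + 2x + y

print(f.numpy())                     # 1 + 4 + 2 + 2 = 9.0
dfdx, dfdy = tape.gradient(f, [x, y])
print(dfdx.numpy(), dfdy.numpy())    # 2x + 2 = 4.0, 2y + 1 = 5.0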
Q: What is regularization and why is it used?
A: Regularization (like L2) penalizes large weights in the model to prevent overfitting and
improve generalization.
Q: What is the difference between L1 and L2 regularization?
A: L1 adds the absolute values of the weights to the loss (encourages sparse models); L2
adds the squared weights to the loss (encourages small, smooth weights).
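A sketch of attaching a weight penalty to a Keras layer (the 0.01 factor is illustrative):

import tensorflow as tf

layer = tf.keras.layers.Dense(
    64, activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(0.01),  # adds 0.01 * sum(w²) to the loss
)
# kernel_regularizer=tf.keras.regularizers.l1(0.01)     # adds 0.01 * sum(|w|) instead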
Q: What are the main building blocks of a CNN?
A: Convolution layers, pooling layers, flattening, and dense (fully connected) layers.
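A minimal CNN sketch showing those blocks in order (input shape and sizes are illustrative,
e.g. 28×28 grayscale images, 10 classes):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()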
Q: How does 5-fold cross-validation work?
A: The dataset is split into 5 equal parts; in each round, 4 parts are used for training and 1
for validation. The process repeats 5 times so that every part serves as the validation set
once.
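A minimal sketch with scikit-learn's KFold (toy arrays; the model fitting itself is elided):

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy samples
y = np.arange(10)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    # train on X[train_idx], validate on X[val_idx]; average the 5 validation scores
    print(f"fold {fold}: train={len(train_idx)} val={len(val_idx)}")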
Q: Why is cross-validation important?
A: It helps reduce bias and variance in the performance estimate, ensuring the model
performs well on unseen data.
Q: Why are RNNs well suited to natural language processing?
A: RNNs maintain a memory of previous words in a sequence, capturing the context
necessary for language understanding.
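A minimal sketch of a recurrent text classifier (vocabulary and layer sizes are illustrative):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=32),  # token ids -> dense vectors
    tf.keras.layers.SimpleRNN(64),   # the hidden state carries context from earlier tokens
    tf.keras.layers.Dense(1, activation="sigmoid"),
])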
Q: What are the main challenges of training RNNs?
A: The vanishing gradient problem, long training times, and difficulty capturing long-term
dependencies.