Deep Learning Theory Questions

Deep Learning is a subset of Machine Learning that utilizes deep neural networks to learn data representations. It has evolved from AI, which mimics human intelligence, to ML, which learns from data, and finally to Deep Learning for complex tasks. Key concepts include neural network structure, activation functions, loss functions, overfitting and underfitting, transfer learning, and performance metrics.

1. What is Deep Learning? Explain its evolution from Machine Learning.

Deep Learning is a subset of Machine Learning (ML) where models automatically learn

representations from data using neural networks with many layers (hence "deep").

- Evolution:

- AI -> Mimicking human intelligence.

- ML -> Algorithms learning from data.

- Deep Learning -> ML using deep neural networks for complex tasks.

2. Explain the working of a Neural Network with layers and neurons.

A Neural Network consists of:

- Input Layer: Receives features.

- Hidden Layers: Perform weighted sums and activations.

- Output Layer: Produces final result.

Each neuron computes a weighted sum z = w · x + b, then applies an activation function: a = f(z).
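The layer-by-layer computation above can be sketched in NumPy. This is a minimal illustration, not a full framework: the 2-3-1 layer sizes and the random weights are arbitrary choices for the example.

```python
import numpy as np

def forward(x, weights, biases, activation):
    """Forward pass through a stack of fully connected layers.

    Each layer computes z = W @ a + b, then applies the activation."""
    a = x
    for W, b in zip(weights, biases):
        z = W @ a + b      # weighted sum of inputs plus bias
        a = activation(z)  # non-linearity
    return a

relu = lambda z: np.maximum(0.0, z)

# Illustrative 2-3-1 network: 2 inputs, one hidden layer of 3 neurons, 1 output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [np.zeros(3), np.zeros(1)]
out = forward(np.array([1.0, -2.0]), weights, biases, relu)
print(out.shape)  # (1,)
```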

3. Differentiate between CNN, RNN, and GAN. Give 1 use case each.

| Feature | CNN | RNN | GAN |
|:--------|:----|:----|:----|
| Focus | Images | Sequences | Data generation |
| Use case | Image classification | Language modeling | Fake image creation |

4. What are Activation Functions? Explain ReLU and Sigmoid.

- ReLU: max(0, x); fast to compute and mitigates the vanishing-gradient problem.

- Sigmoid: 1/(1+e^(-x)); squashes the output into (0, 1), useful for probabilities.
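Both functions are one-liners in NumPy; a quick sketch with a sample input:

```python
import numpy as np

def relu(x):
    # ReLU: elementwise max(0, x)
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: 1 / (1 + e^(-x)), output strictly between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))        # [0. 0. 3.]
print(sigmoid(0.0))   # 0.5
```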

5. Explain Loss Functions: MSE, Cross-Entropy Loss.

- MSE: Mean Squared Error, used for regression.


- Cross-Entropy: Measures difference between predicted and true distributions (classification).
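Both losses can be written directly from their definitions. A minimal sketch (the `eps` clipping is a standard numerical-stability trick, assumed here to avoid log(0)):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average of squared residuals (regression)
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, p_pred, eps=1e-12):
    # Categorical cross-entropy: y_true is one-hot, p_pred are predicted
    # probabilities; averaged over samples (classification)
    p = np.clip(p_pred, eps, 1.0)
    return -np.sum(y_true * np.log(p)) / len(y_true)

print(mse(np.array([1.0, 2.0]), np.array([1.0, 3.0])))  # 0.5
```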

6. What is Overfitting and Underfitting? How can we prevent them?

- Overfitting: Model learns noise in the training data and fails to generalize. Prevention: regularization, dropout, more data.

- Underfitting: Model is too simple to capture the underlying patterns. Prevention: a more complex model, longer training.

7. What is Transfer Learning? Explain with an example.

Transfer Learning uses a pre-trained model for a new task.

Example: Using ImageNet models for medical image classification.

8. What are the roles of Generator and Discriminator in GANs?

- Generator: Creates fake data.

- Discriminator: Distinguishes real from fake.

9. Explain the concept of Reinforcement Learning with key terms: Agent, Environment, Reward.

- Agent: Decision-maker.

- Environment: Surroundings.

- Reward: Feedback from actions.

10. What is Data Augmentation? Why is it important for small datasets?

Artificially expanding the dataset by applying transformations (e.g., rotation, flipping) to existing samples. Important for small datasets because it exposes the model to more variation and reduces overfitting.
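A minimal sketch of such transformations using NumPy array operations (real pipelines typically use a library's augmentation utilities; this just illustrates the idea on a 2-D image array):

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of a 2-D image array."""
    return [
        image,              # original
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
    ]

img = np.arange(9).reshape(3, 3)
variants = augment(img)
print(len(variants))  # 4
```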

11. What is Gradient Descent? What is the difference between GD and SGD?

- Gradient Descent: Iteratively adjusts weights in the direction of the negative gradient to minimize the loss.

- GD: Computes the gradient over the full training set per update. SGD: Updates using one sample (or a small mini-batch) at a time; noisier, but far cheaper per update.
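The difference can be shown on a toy linear-regression problem, where the loss is the MSE and its gradient has a closed form. The data, learning rates, and epoch counts below are illustrative choices, not prescriptions:

```python
import numpy as np

# Toy noiseless linear regression: y = X @ true_w
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

def gd(X, y, lr=0.1, epochs=200):
    # Batch GD: one update per epoch, gradient over the full dataset
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def sgd(X, y, lr=0.01, epochs=20):
    # SGD: one update per sample, in shuffled order each epoch
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = 2 * X[i] * (X[i] @ w - y[i])
            w -= lr * grad
    return w

print(np.allclose(gd(X, y), true_w, atol=1e-2))
```

Both recover `true_w` here; on large datasets the per-update cost of batch GD is what makes SGD (and mini-batch SGD) the practical default.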

12. What are the advantages of using GPUs and TPUs in Deep Learning?
- GPUs: Parallel computation.

- TPUs: Specialized for deep learning, faster training.

13. What is Cross-Validation? Why is it important?

Splitting the data into multiple train/validation folds and averaging performance across them. It gives a more reliable estimate of generalization than a single split and helps detect overfitting.
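A minimal k-fold split sketch (libraries such as scikit-learn provide this; here it is written out by hand to show the mechanics):

```python
import numpy as np

def k_fold_indices(n, k=5, seed=0):
    """Yield (train, validation) index arrays for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)  # shuffle once
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

for train, val in k_fold_indices(10, k=5):
    print(len(train), len(val))  # 8 2 for each of the 5 folds
```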

14. What are different Performance Metrics in classification problems?

- Accuracy: (TP+TN)/Total

- Precision: TP/(TP+FP)

- Recall: TP/(TP+FN)

- F1-Score: Harmonic mean of Precision and Recall.
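The four formulas above translate directly into code. A sketch with made-up confusion-matrix counts (tp=40, tn=45, fp=5, fn=10 are illustrative):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=40, tn=45, fp=5, fn=10)
print(acc)  # 0.85
```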

15. Explain the structure of a Convolutional Neural Network (CNN) step-by-step.

Input -> Convolution -> Activation (ReLU) -> Pooling -> Fully Connected -> Output.
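The convolution and pooling steps of that pipeline can be sketched in plain NumPy (the 6x6 input and 3x3 all-ones kernel are arbitrary for illustration; note that deep-learning "convolution" is actually cross-correlation, as implemented here):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling with stride equal to the window size."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36.0).reshape(6, 6)
feat = np.maximum(0.0, conv2d(img, np.ones((3, 3))))  # Convolution -> ReLU
pooled = max_pool(feat, size=2)                       # Pooling
print(pooled.shape)  # (2, 2)
```

In a full CNN the pooled feature maps would then be flattened and fed through fully connected layers to produce the output.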
