The document is a question bank covering various topics in deep learning, including TensorFlow, Keras, PyTorch, batch normalization, and neural network architectures. It addresses key concepts such as the XOR problem, gradient-based learning, and techniques like early stopping and dropout for training deep neural networks. Additionally, it discusses hyperparameter tuning methods like grid search and random search.
Unit 2
Question Bank
1. What is TensorFlow, and why is it widely used in deep learning?
2. Describe the basic structure of a TensorFlow model.
3. What is Keras, and how does it relate to TensorFlow?
4. Describe the functional and sequential APIs in Keras (a sketch of both follows this list).
5. What is PyTorch, and what makes it different from TensorFlow?
6. What is batch normalization, and why is it used in deep learning?
7. How does batch normalization help stabilize training in DNNs?
8. How does batch normalization affect the learning rate in a model?
9. Explain where batch normalization layers are typically placed in a DNN architecture (see the placement sketch after this list).
10. Why is the XOR problem significant in the study of neural networks?
11. Explain why a single-layer perceptron cannot solve the XOR problem.
12. Describe the architecture of a neural network that can solve the XOR problem (a minimal example follows the list).
13. What is gradient-based learning, and why is it used in training neural networks?
14. Explain the concept of vanishing and exploding gradients and how they impact training.
15. Explain the difference between batch gradient descent, stochastic gradient descent (SGD), and mini-batch gradient descent.
16. What is early stopping, and why is it used in training DNNs?
17. What is dropout, and how does it help regularize neural networks (a combined dropout/early-stopping sketch follows the list)?
18. What are some limitations of using dropout, especially in large neural networks?
19. Explain grid search and random search in the context of hyperparameter tuning (a sketch comparing the two closes this section).
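
For reference on question 4, here is a minimal sketch of the same two-layer network built with both Keras APIs. The layer widths and the 20-feature input shape are illustrative assumptions, not part of the original questions.

    from tensorflow import keras
    from tensorflow.keras import layers

    # Sequential API: a plain linear stack of layers.
    seq_model = keras.Sequential([
        keras.Input(shape=(20,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

    # Functional API: layers are called on tensors, which also permits
    # branching, shared layers, and multiple inputs or outputs.
    inputs = keras.Input(shape=(20,))
    x = layers.Dense(64, activation="relu")(inputs)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    func_model = keras.Model(inputs=inputs, outputs=outputs)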
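For questions 6-9: one common convention places BatchNormalization between a layer's linear transform and its activation, though placing it after the activation is also seen in practice. A sketch of the first ordering, with the layer widths again assumed for illustration:

    from tensorflow import keras
    from tensorflow.keras import layers

    # Dense (no activation) -> BatchNormalization -> Activation:
    # the pre-activations are normalized before the non-linearity.
    bn_model = keras.Sequential([
        keras.Input(shape=(20,)),
        layers.Dense(64),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Dense(1, activation="sigmoid"),
    ])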
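For questions 10-12: a single-layer perceptron cannot solve XOR because the four points are not linearly separable; one hidden layer provides the needed non-linearity. A minimal sketch, where the hidden width, activations, and training length are assumptions (a network this small may occasionally need a restart to converge):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # The four XOR input/output pairs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
    y = np.array([[0], [1], [1], [0]], dtype="float32")

    # One hidden layer gives the non-linearity a single perceptron lacks.
    xor_model = keras.Sequential([
        keras.Input(shape=(2,)),
        layers.Dense(4, activation="tanh"),
        layers.Dense(1, activation="sigmoid"),
    ])
    xor_model.compile(optimizer="adam", loss="binary_crossentropy")
    xor_model.fit(X, y, epochs=2000, verbose=0)
    print(xor_model.predict(X).round())  # expected: 0, 1, 1, 0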
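For questions 16-17: dropout and early stopping are both regularizers and are often combined. A sketch assuming a generic binary-classification setup; X_train and y_train are placeholders, not data from this document:

    from tensorflow import keras
    from tensorflow.keras import layers

    reg_model = keras.Sequential([
        keras.Input(shape=(20,)),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # randomly zeroes 50% of activations at train time
        layers.Dense(1, activation="sigmoid"),
    ])
    reg_model.compile(optimizer="adam", loss="binary_crossentropy")

    # Stop when validation loss has not improved for 5 epochs, and
    # roll back to the best weights seen so far.
    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)

    # reg_model.fit(X_train, y_train, validation_split=0.2,
    #               epochs=100, callbacks=[early_stop])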
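For question 19: the two searches differ only in how they iterate over the same search space. A sketch with a hypothetical evaluate() standing in for a full train-and-validate run:

    import itertools
    import random

    space = {"learning_rate": [1e-4, 1e-3, 1e-2], "units": [32, 64, 128]}

    def evaluate(learning_rate, units):
        # Hypothetical stand-in: train a model with these settings and
        # return its validation score.
        return random.random()

    # Grid search: evaluate every combination (9 runs here).
    grid_best = max(itertools.product(space["learning_rate"], space["units"]),
                    key=lambda cfg: evaluate(*cfg))

    # Random search: evaluate only a fixed budget of sampled combinations.
    samples = [(random.choice(space["learning_rate"]),
                random.choice(space["units"])) for _ in range(5)]
    random_best = max(samples, key=lambda cfg: evaluate(*cfg))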