
B.E. (Computer Engineering) | DEEP LEARNING (2019 Pattern) | (Semester - VIII) (410251) | May-June 2023 PYQ Solutions

Q1) a) Explain Pooling Layer with its need and different types [6]
• Reduces spatial dimensions
• Helps manage complexity
• Prevents overfitting
• Types:
∟ Max Pooling
∟ Average Pooling
∟ Global Pooling (see the sketch below)
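
A minimal sketch of the three pooling types listed above, assuming PyTorch is available; the tensor shape is an illustrative choice, not part of the original answer.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 32, 32)  # (batch, channels, height, width), illustrative

max_pool = nn.MaxPool2d(kernel_size=2, stride=2)  # keeps the strongest activation per window
avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)  # keeps the average activation per window
global_pool = nn.AdaptiveAvgPool2d(1)             # one value per channel (global pooling)

print(max_pool(x).shape)     # torch.Size([1, 8, 16, 16]) -- spatial dims halved
print(avg_pool(x).shape)     # torch.Size([1, 8, 16, 16])
print(global_pool(x).shape)  # torch.Size([1, 8, 1, 1])
```

Each variant shrinks the feature map, which is exactly why pooling reduces computation and helps against overfitting.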

b) Draw and explain CNN (Convolutional Neural Network) architecture in detail [6]
• Input Layer: Receives the initial data
• Convolution Layer: Feature extraction
• Activation Layer: Adds nonlinearity
• Pooling Layer: Reduces spatial dimensions
• Fully Connected Layer: Makes predictions
• Output Layer: Final prediction (see the sketch below)
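
A minimal sketch of the layer ordering described above, assuming PyTorch; the channel counts, 28x28 input, and 10 output classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer: feature extraction
    nn.ReLU(),                                   # activation layer: adds nonlinearity
    nn.MaxPool2d(2),                             # pooling layer: reduces spatial dimensions
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # fully connected layer: makes predictions
)

logits = cnn(torch.randn(1, 1, 28, 28))          # input layer: one 28x28 image
print(logits.shape)                              # torch.Size([1, 10]) -- output layer
```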

c) Explain ReLU Layer in detail. What are the advantages of ReLU over Sigmoid? [6]
• Rectified Linear Unit (ReLU): Activation function
• ReLU sets negative values to zero
• Advantages over Sigmoid:
∟ Faster computation
∟ Avoids the vanishing gradient problem (see the sketch below)
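
A small sketch contrasting the gradients of the two activations, assuming PyTorch; the sample inputs are arbitrary. Sigmoid's gradient shrinks toward zero at large magnitudes (vanishing gradient), while ReLU passes a constant gradient for positive inputs.

```python
import torch

x = torch.tensor([-4.0, 0.5, 4.0], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)      # tensor([0., 1., 1.]) -- zero only for negative inputs

y = torch.tensor([-4.0, 0.5, 4.0], requires_grad=True)
torch.sigmoid(y).sum().backward()
print(y.grad)      # approx tensor([0.0177, 0.2350, 0.0177]) -- saturates at both ends
```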

Q2) a) Explain all the features of pooling layer [6]
• Reduces spatial dimensions
• Helps in feature selection
• Controls overfitting
• Increases computational efficiency
• Types:
∟ Max Pooling
∟ Average Pooling

b) Explain Dropout Layer in Convolutional Neural Network [6]
• Randomly drops neurons
• Prevents overfitting
• Improves generalization
• Active only during the training phase
• Retains the model's flexibility
• Enhances the model's performance (see the sketch below)
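
A minimal sketch of dropout's train/eval behaviour, assuming PyTorch; the drop probability p=0.5 is an illustrative assumption.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))  # roughly half the values zeroed; survivors scaled by 1/(1-p)

drop.eval()
print(drop(x))  # identity mapping: dropout is active only during training
```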

c) Explain working of Convolution Layer with its features [6]
• Applies filters to the input data
• Performs feature extraction
• Stride moves the filter across the input
• Padding adds zeros around the borders
• Nonlinear activation follows the convolution
• Preserves spatial information (see the sketch below)
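
A minimal sketch of how stride and padding change the output size, assuming PyTorch; sizes are illustrative and follow out = floor((in + 2*pad - kernel) / stride) + 1.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # illustrative RGB-like input

same = nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1)  # padding preserves 32x32
down = nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1)  # stride 2 halves to 16x16

print(same(x).shape)  # torch.Size([1, 8, 32, 32])
print(down(x).shape)  # torch.Size([1, 8, 16, 16])
```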

Q3) a) What is RNN? What is the need of RNN? Explain in brief the working of RNN (Recurrent Neural Network) [6]
• RNN: Recurrent Neural Network
• Handles sequential data
• Maintains memory of past inputs
• Addresses time dependencies
• Loops information over time
• Suitable for time series analysis

b) How LSTM and Bidirectional LSTM work [6]
• LSTM: Long Short-Term Memory
∟ Handles long-term dependencies
∟ Input, forget, and output gates
∟ Cell state maintains memory
• Bidirectional LSTM:
∟ Processes input forward and backward
∟ Captures past and future context
∟ Combines outputs for prediction (see the sketch below)
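
A minimal sketch of LSTM versus bidirectional LSTM, assuming PyTorch; the sequence length and feature sizes are illustrative. The bidirectional output is twice as wide because the forward and backward passes are concatenated.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 5, 10)  # (batch, time steps, features), illustrative

lstm = nn.LSTM(10, 16, batch_first=True)
bilstm = nn.LSTM(10, 16, batch_first=True, bidirectional=True)

out, _ = lstm(x)
print(out.shape)    # torch.Size([1, 5, 16]) -- past context only

out, _ = bilstm(x)
print(out.shape)    # torch.Size([1, 5, 32]) -- past and future context combined
```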

c) Explain Unfolding computational graphs with example [5]
• Unfolding: Repeats the network structure across time
∟ Example: RNN with 3 time steps
• Creates multiple time steps
• Each step is linked to the previous one
• Unrolls recurrent connections
• Facilitates understanding of network behaviour
• Useful for training and analysis (see the sketch below)
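
A minimal sketch of unfolding, assuming PyTorch: the same RNN cell is applied at each of 3 time steps, so the loop below is the unrolled computational graph; all sizes are illustrative.

```python
import torch
import torch.nn as nn

cell = nn.RNNCell(4, 8)        # one shared cell, repeated across time
inputs = torch.randn(3, 1, 4)  # 3 time steps, batch of 1, 4 features
h = torch.zeros(1, 8)          # initial hidden state

for t in range(3):             # unfolding: repeat the structure per time step
    h = cell(inputs[t], h)     # each step is linked to the previous state
print(h.shape)                 # torch.Size([1, 8])
```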

Q4) a) What are the types of RNN (Recurrent Neural Network)? How to train RNN? Explain in brief [6]
• RNN Types:
∟ Vanilla RNN: Basic recurrent network
∟ LSTM: Long Short-Term Memory
∟ GRU: Gated Recurrent Unit
• Training RNN:
∟ BPTT: Backpropagation Through Time
∟ Gradient weight updates: Adjusts weights

b) Explain Encoder-Decoder Sequence to Sequence architecture with its application [6]
• Encoder: Converts input to context
∟ Captures information
• Context Vector: Condensed representation
∟ Contains essential information
• Decoder: Generates output from context
∟ Reconstructs the target sequence from the context
• Application: Machine translation
∟ Translates text between languages (see the sketch below)
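
A minimal sketch of the encoder-decoder idea using GRU layers, assuming PyTorch; all sizes are illustrative, and a real translation model would add token embeddings and step-by-step decoding.

```python
import torch
import torch.nn as nn

encoder = nn.GRU(8, 16, batch_first=True)
decoder = nn.GRU(8, 16, batch_first=True)

src = torch.randn(1, 6, 8)      # source sequence, 6 steps
_, context = encoder(src)       # context vector: condensed representation, (1, 1, 16)

tgt = torch.randn(1, 4, 8)      # target sequence, 4 steps
out, _ = decoder(tgt, context)  # decoder generates output starting from the context
print(out.shape)                # torch.Size([1, 4, 16])
```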

c) Differentiate between Recurrent and Recursive Neural Network [5]
• Recurrent Neural Network (RNN):
∟ Loops input through time
∟ Handles sequential data
∟ Example: Time series prediction
• Recursive Neural Network (RecNN):
∟ Processes data recursively
∟ Handles hierarchical data
∟ Example: Parsing sentences
• Key Differences:
∟ RNN loops over time
∟ RecNN processes recursively
∟ RNN for sequences, RecNN for hierarchies

Q5) a) Explain Boltzmann machine in detail [6]
• Boltzmann machine:
∟ Stochastic neural network
∟ Models a joint probability distribution
∟ Consists of visible and hidden units
∟ Connections have weights
∟ Uses Gibbs sampling for learning
∟ Learning via contrastive divergence

b) Explain GAN (Generative Adversarial Network) architecture with an example [6]
• GAN Architecture:
∟ Generator: Creates fake data
∟ Discriminator: Distinguishes real from fake
• Example:
∟ Generating realistic faces
∟ Generator learns the data distribution
∟ Discriminator distinguishes real vs fake (see the sketch below)
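
A minimal sketch of the two GAN components, assuming PyTorch; the noise size and the 784-dimensional flattened "image" are illustrative assumptions, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

noise = torch.randn(1, 16)
fake = generator(noise)          # generator creates fake data from noise
score = discriminator(fake)      # discriminator scores probability of "real"
print(fake.shape, score.item())  # torch.Size([1, 784]) and a value in (0, 1)
```

In training, the discriminator is pushed to score real samples high and fakes low, while the generator is pushed to raise the score of its fakes; that adversarial pressure is how realism improves over time.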

c) Do GANs (Generative Adversarial Network) find real or fake images? If yes, explain it in detail [6]
• GANs generate images from noise
∟ Generator creates fake images
∟ Discriminator identifies real images
• GANs aim for realism
∟ Learn to generate authentic images
∟ Discriminator improves over time

Q6) a) Differentiate generative and discriminative models in GAN (Generative Adversarial Network) [6]
• Generative models create new data
∟ Create from the learned distribution
∟ No direct classification involved
• Discriminative models classify data
∟ Distinguish between classes
∟ Learn decision boundaries

b) What are applications of GAN (Generative Adversarial Network)? Explain any four in detail [6]
• Image generation:
∟ Creates realistic images
∟ Used in art and design
• Style transfer:
∟ Applies styles to images
∟ Alters art styles
• Super-resolution:
∟ Enhances image resolution
∟ Improves image quality
• Data augmentation:
∟ Generates synthetic data
∟ Enhances training datasets

c) Write Short Note on Deep generative model and Deep Belief Networks [6]
• Deep Generative Model:
∟ Generates complex data
∟ Uses deep learning techniques
• Deep Belief Networks (DBNs):
∟ Stack of Restricted Boltzmann Machines
∟ Unsupervised learning
• DBNs model hierarchical data:
∟ Learn hierarchical representations
∟ Useful for feature learning
• Deep generative models:
∟ Generate realistic data
∟ Capture the data distribution accurately

Q7) a) Explain Markov Decision Process with Markov property [6]
• Markov Decision Process (MDP):
∟ Models sequential decision-making
∟ Comprises states, actions, rewards
∟ Markov Property:
▪ The future depends only on the present state
▪ The full history is not needed
• States: Represent situations
• Actions: Available choices
• Rewards: Outcomes of actions

b) Explain in detail Dynamic programming algorithms for reinforcement learning [6]
• Dynamic Programming (DP) algorithms:
∟ Value Iteration
∟ Policy Iteration
• Value Iteration:
∟ Updates the value function iteratively
∟ Converges to the optimal policy
• Policy Iteration:
∟ Iteratively improves the policy
∟ Alternates evaluation and improvement
• Bellman Equations:
∟ Fundamental in DP
∟ Express value function relationships
• DP optimizes reward policies:
∟ Balances exploration and exploitation
∟ Solves complex RL problems efficiently (see the sketch below)
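
A minimal sketch of value iteration on a tiny two-state MDP invented for illustration; it repeatedly applies the Bellman optimality update V(s) = max_a Σ P(s'|s,a)·(R + γ·V(s')) and then reads off the greedy policy.

```python
# P[s][a] = list of (probability, next_state, reward); an illustrative MDP.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9
V = {0: 0.0, 1: 0.0}

for _ in range(100):  # iterate the Bellman optimality update until (near) convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

# greedy policy with respect to the converged value function
policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
print(V, policy)  # V[0] ~ 5.26, V[1] ~ 4.74; the policy picks "go" in both states
```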

c) Explain Simple reinforcement learning for Tic-Tac-Toe [5]
• Agent learns to play Tic-Tac-Toe:
∟ Environment: Tic-Tac-Toe board
∟ Agent: Learns optimal moves
∟ States: Board configurations
∟ Actions: Placing X or O
∟ Rewards: Win, lose, draw
• Q-table stores state-action values:
∟ Agent updates Q-values
∟ Uses exploration vs exploitation
• Learning through trial and error:
∟ Adjusts strategy based on outcomes
∟ Reinforces successful moves (see the sketch below)
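
A simplified sketch of the tabular approach described above, pitting the learning agent (X) against a random opponent; rewards and hyperparameters are illustrative, and for brevity the final reward is backed up Monte-Carlo style through the episode rather than with the full one-step Q-learning update.

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return "draw" if " " not in b else None

Q = defaultdict(float)          # Q-table: (state, action) -> value
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(20000):          # episodes of play against random moves
    board, history, player = [" "] * 9, [], "X"
    while winner(board) is None:
        moves = [i for i in range(9) if board[i] == " "]
        if player == "X":       # epsilon-greedy: exploration vs exploitation
            state = "".join(board)
            a = (random.choice(moves) if random.random() < eps
                 else max(moves, key=lambda m: Q[(state, m)]))
            history.append((state, a))
            board[a] = "X"
        else:
            board[random.choice(moves)] = "O"
        player = "O" if player == "X" else "X"
    reward = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    for state, a in reversed(history):  # propagate the outcome back through the episode
        Q[(state, a)] += alpha * (reward - Q[(state, a)])
        reward *= gamma
```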

Q8) a) Write Short Note on Q-Learning and Deep Q-Networks [6]
• Q-Learning:
∟ Reinforcement learning algorithm
∟ Learns a value for each action
• Uses value iteration:
∟ Updates Q-values iteratively
∟ Finds the optimal policy
• Deep Q-Networks (DQN):
∟ Combines Q-learning with neural networks
∟ Approximates Q-values with a network
• Uses experience replay:
∟ Stores experiences in memory
∟ Enhances learning efficiency (see the sketch below)
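
A minimal sketch of the DQN ingredients named above, assuming PyTorch: a network that predicts one Q-value per action, plus an experience-replay buffer sampled in minibatches. State/action sizes and hyperparameters are illustrative, and the separate target network and terminal-state masking of a full DQN are omitted for brevity.

```python
import random
from collections import deque
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # 4 state dims, 2 actions
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10000)  # experience replay: stores experiences in memory
gamma = 0.99

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)  # sample stored experiences
    s, a, r, s2 = (torch.stack(x) for x in zip(*batch))
    with torch.no_grad():                      # bootstrap target: r + gamma * max_a' Q(s', a')
        target = r + gamma * q_net(s2).max(dim=1).values
    pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# usage with dummy (state, action, reward, next_state) transitions
for _ in range(64):
    replay.append((torch.randn(4), torch.tensor(random.randrange(2)),
                   torch.tensor(1.0), torch.randn(4)))
train_step()
```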

b) What are the challenges of reinforcement learning? Explain any four in detail [6]
• Exploration vs exploitation dilemma:
∟ Balancing exploration and exploitation
• Credit assignment problem:
∟ Assigning credit to actions accurately
• Non-stationarity of environments:
∟ Environments change constantly
• Reward design complexities:
∟ Designing effective reward functions
• Sample inefficiency:
∟ Needs many training samples
• High computational requirements:
∟ Demands substantial computing power

c) What is deep reinforcement learning? Explain in detail [5]
• Combines RL with deep learning
∟ Learns complex tasks
∟ Uses neural networks for decision-making
• Deep RL learns representations
∟ Extracts features automatically
∟ Handles high-dimensional data efficiently
• Benefits:
∟ Solves complex tasks
∟ Learns from raw data
• Challenges:
∟ Requires substantial computational resources
∟ May suffer from instability during training
