
DL Classtest3

Uploaded by

neelavasavi123

PART A

1. What is the difference between a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN)?

Answer:

 CNNs are designed to process spatial data, like images, by using filters
that detect features such as edges and shapes. They are effective for
structured grid data and capturing spatial hierarchies.

 RNNs are designed for sequential data, such as text or time series, as they
retain information across time steps using recurrent connections. This
makes them suitable for tasks involving sequences, like language
modeling.

2. What is dropout?
Answer:
 Dropout is a regularization technique in neural networks where randomly
selected neurons are "dropped" or deactivated during training. This
prevents overfitting by forcing the network to learn multiple
representations and reducing co-dependency among neurons.
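As a minimal sketch of this idea (function and variable names are illustrative, not from any particular library), inverted dropout can be implemented as a random mask over activations:

```python
import numpy as np

def dropout(activations, rate, training=True, rng=None):
    """Inverted dropout: zero a `rate` fraction of units during training
    and rescale the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

# At inference time the layer is a no-op:
x = np.ones((4, 8))
assert np.array_equal(dropout(x, 0.5, training=False), x)
```

Because each forward pass sees a different random subnetwork, no single neuron can rely on a fixed set of partners, which is the co-dependency reduction described above.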

3. What is the difference between Stochastic Gradient Descent (SGD) and Batch Gradient Descent?
Answer:
 SGD updates model weights using a single sample at a time, which
allows faster updates but with more variance, making it suitable for large
datasets.

 Batch Gradient Descent, on the other hand, uses the entire dataset to
update the weights, which is more stable but can be computationally
expensive and slower on large datasets.
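The contrast can be sketched on a simple least-squares problem (a toy illustration with assumed names and learning rates, not a production optimizer):

```python
import numpy as np

def batch_gd(X, y, lr=0.1, epochs=100):
    """Batch gradient descent: one update per pass over the full dataset."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def sgd(X, y, lr=0.01, epochs=100, seed=0):
    """Stochastic gradient descent: one noisy update per single sample."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient on one sample only
            w -= lr * grad
    return w
```

Note that one SGD "epoch" performs as many weight updates as there are samples, while batch gradient descent performs exactly one, which is why SGD typically makes faster initial progress on large datasets at the cost of noisier steps.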
4. What are some of the hyperparameters that need to be tuned when
using dropout?
Answer:
 Key hyperparameters include the dropout rate (the proportion of
neurons to drop out) and the layer selection (determining which layers to
apply dropout to, usually hidden layers rather than input or output layers).

5. What is a Deep Belief Network (DBN)?
Answer:
 A DBN is a type of generative neural network that consists of multiple
layers of hidden units, often pre-trained layer by layer using Restricted
Boltzmann Machines (RBMs). DBNs are capable of learning complex
feature representations in an unsupervised way before fine-tuning with
supervised learning.

PART B
1. Explain the different layers in a Convolutional Neural Network and
discuss their functions.
Answer:

 CNNs typically include the following layers:

o Convolutional Layer: Applies a set of filters to the input image to extract features such as edges, textures, and patterns. Each filter focuses on specific aspects of the image.

o Activation Layer (e.g., ReLU): Adds non-linearity to the model by transforming the filtered outputs, allowing the network to learn more complex patterns.

o Pooling Layer (e.g., Max Pooling): Reduces the spatial dimensions of the feature maps, preserving essential information while lowering the computation load.
o Fully Connected (Dense) Layer: Connects every neuron in the previous layer to each output neuron, combining the features learned by earlier layers to classify or predict outcomes.

o Softmax/Output Layer: Provides the final output as probabilities, commonly used in classification tasks.
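The layer stack above can be sketched as one forward pass in NumPy (a toy illustration; the kernel, weights, and shapes are assumptions made for the example):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid convolution (strictly cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy forward pass: conv -> ReLU -> pool -> dense -> softmax
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1.0, -1.0], [1.0, 1.0]])        # horizontal-edge filter
features = max_pool(relu(conv2d(image, kernel)))      # pooled feature map
weights = np.ones((features.size, 3)) * 0.01          # dense layer (3 classes)
probs = softmax(features.ravel() @ weights)           # class probabilities
```

Each stage mirrors one bullet above: the filter extracts a feature map, ReLU keeps only positive responses, pooling halves each spatial dimension, and the dense plus softmax layers turn the flattened features into probabilities that sum to one.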

2. Explain how Recurrent Neural Networks can be used to model sequential data.
Answer:

 RNNs model sequential data by allowing information to persist over time steps through recurrent connections. At each step, the hidden state carries forward information from the previous time step, making the network effective at capturing dependencies across sequences. This is beneficial in tasks like language modeling, where the context of previous words influences the current prediction. By maintaining hidden states and training with backpropagation through time (BPTT), RNNs adjust their weights to learn patterns in sequential data effectively.
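A minimal sketch of this recurrence (the weight matrices and dimensions are assumptions chosen for illustration):

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    """Vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b)."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in inputs:                 # one step per element of the sequence
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
        states.append(h)
    return np.array(states)

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.5, size=(4, 3))   # input (3-dim) -> hidden (4-dim)
W_hh = rng.normal(scale=0.5, size=(4, 4))   # hidden -> hidden: the recurrence
b_h = np.zeros(4)
seq = rng.normal(size=(5, 3))               # sequence of 5 three-dim inputs
states = rnn_forward(seq, W_xh, W_hh, b_h)  # one hidden state per time step
```

Because W_hh feeds each hidden state into the next step, perturbing an early input changes every later state, which is exactly the cross-step dependency described above; BPTT trains the same weights by unrolling this loop.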

3. Explain how Deep Belief Networks (DBNs) can be used to learn hierarchical representations of data.
Answer:

 DBNs learn hierarchical data representations by stacking multiple layers of Restricted Boltzmann Machines (RBMs). Each RBM layer learns to represent features in an unsupervised manner, progressing from simple features to more complex ones as the layers go deeper. This hierarchical learning helps DBNs capture high-level abstractions in data, making them valuable in applications like image and speech recognition. After unsupervised pretraining, DBNs can be fine-tuned with supervised backpropagation, improving classification accuracy.
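One contrastive-divergence (CD-1) update for a single RBM layer, the building block of the greedy layer-wise pretraining described above, can be sketched as follows (shapes and the learning rate are illustrative assumptions, and biases are omitted for brevity):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1, rng=None):
    """One CD-1 update of RBM weights W from a batch of visible vectors v0."""
    rng = rng or np.random.default_rng()
    # Positive phase: hidden probabilities given the data
    h0 = sigmoid(v0 @ W)
    h0_sample = (rng.random(h0.shape) < h0).astype(float)
    # Negative phase: one Gibbs step (reconstruct visibles, re-infer hiddens)
    v1 = sigmoid(h0_sample @ W.T)
    h1 = sigmoid(v1 @ W)
    # Approximate log-likelihood gradient: data correlations minus model correlations
    grad = (v0.T @ h0 - v1.T @ h1) / len(v0)
    return W + lr * grad

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 3))            # 6 visible, 3 hidden units
data = (rng.random((32, 6)) < 0.5).astype(float)  # toy binary training batch
W = cd1_step(data, W, rng=rng)
```

In a DBN, once one RBM is trained this way, its hidden activations become the "visible" data for the next RBM in the stack, which is how the deeper layers come to encode progressively more abstract features.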
