
Fundamentals of Artificial Intelligence

MOOCs; July - Dec 2024

Assignment No. 12
10 Marks

Each question carries 01 Mark. There are MORE than ONE correct options for some of
the questions. All correct options must be identified for the answer to be
evaluated as correct.

Q1. Reinforcement Learning is more general than supervised or unsupervised learning; an agent learns
from interaction with the environment to achieve a goal. Learning is based on the ________.
A. multi-layer perceptron.
B. support vector machines.
C. reward hypothesis.
D. knowledge representation and reasoning.

Ans: Question based on Week 12 (Lecture 1) Videos
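
A minimal sketch of the agent-environment interaction loop behind Q1: the agent acts, the environment returns a next state and a scalar reward, and that reward is the only learning signal. The line-world environment and its step function below are hypothetical toy constructs.

```python
import random

# Hypothetical toy environment: states 0..4 on a line, goal at state 4.
def step(state, action):                  # action: -1 (left) or +1 (right)
    next_state = min(max(state + action, 0), 4)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward             # the scalar reward is the learning signal

state, total_reward = 0, 0.0
for t in range(20):                       # agent-environment interaction loop
    action = random.choice([-1, +1])      # an (as yet untrained) behaviour policy
    state, reward = step(state, action)
    total_reward += reward                # no labels anywhere -- only rewards

print("return collected:", total_reward)
```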

Q2. A computational model of a single neuron that can only represent linearly separable
functions is the __________.
A. Perceptron.
B. Restricted Boltzmann Machine.
C. Autoencoder.
D. Convolutional Layer.

Ans: Question based on Week 12 (Lecture 2) Videos
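
A minimal sketch of the single-neuron model in Q2: a Rosenblatt-style perceptron with a step activation, trained with the perceptron update rule. It converges on a linearly separable function such as AND; it cannot represent XOR.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])             # AND: linearly separable

w, b = np.zeros(2), 0.0
for _ in range(10):                        # a few passes over the data suffice
    for xi, target in zip(X, y_and):
        pred = int(w @ xi + b > 0)         # single neuron, step activation
        w += (target - pred) * xi          # perceptron learning rule (lr = 1)
        b += (target - pred)

print([int(w @ xi + b > 0) for xi in X])   # [0, 0, 0, 1]
```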

Q3. A key difference between traditional Machine Learning (ML) and Deep Learning (DL) is in
how features are extracted. Which of the following statements are true?
I. Traditional ML approaches use handcrafted, engineered features obtained by applying
several feature extraction algorithms, and then apply the learning algorithms.
II. In the case of DL, features are learned automatically and represented hierarchically in
multiple levels.
A. Statements I and II
B. Only Statement II
C. Only Statement I
D. None.

Ans: Question based on Week 12 (Lecture 3) Videos
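
A small sketch of the contrast between statements I and II in Q3, on a made-up toy signal dataset: the traditional pipeline extracts handcrafted features (mean, standard deviation, peak) before a classifier is applied, while the multi-layer network consumes the raw signal and learns its own internal representations. The data and feature choices here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# toy raw "signals": class 0 = low-variance noise, class 1 = high-variance noise
X_raw = np.vstack([rng.normal(0, 0.5, (100, 32)), rng.normal(0, 2.0, (100, 32))])
y = np.array([0] * 100 + [1] * 100)

# Statement I: handcrafted feature extraction, then a learning algorithm
def handcrafted(x):
    return np.column_stack([x.mean(axis=1), x.std(axis=1), np.abs(x).max(axis=1)])

clf = LogisticRegression().fit(handcrafted(X_raw), y)

# Statement II: the deep model takes the raw signal and learns hierarchical
# representations in its hidden layers automatically
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_raw, y)

print(clf.score(handcrafted(X_raw), y), net.score(X_raw, y))
```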

Q4. _______________ offer an alternative approach to maximum likelihood estimation techniques:
an unsupervised deep learning approach where two neural networks compete against each
other in a zero-sum game.
A. Convolutional Neural Networks
B. Recurrent Neural Networks
C. Generative Adversarial Networks
D. Autoencoders

Ans: Question based on Week 12 (Lecture 3) Videos
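
The zero-sum game in Q4 is usually written as a minimax objective over the two networks, with the discriminator D maximizing and the generator G minimizing the same value function (notation assumed here: p_data is the data distribution, p_z the noise prior):

```latex
\min_G \max_D V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```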

Q5. A Reinforcement Learning (RL) agent interacting in an environment may include one or more
of these components: Policy, Value function and Model.
Identify the correct statements in the context of an RL agent.
A. A policy is the agent’s behaviour and is a map from state to action.
B. A model is the agent’s representation of the environment; predicts what it will do next.
C. The environment need not be observable.
D. Value function is a prediction of the next state.

Ans: Question based on Week 12 (Lecture 3) Videos
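
A toy sketch making the three components in Q5 concrete on a made-up four-state line task: the policy maps states to actions, the model is the agent's internal prediction of what the environment will do next, and the value function predicts future return rather than the next state.

```python
states = [0, 1, 2, 3]                     # hypothetical line task, goal at state 3

# Policy: the agent's behaviour -- a map from state to action
policy = {0: "right", 1: "right", 2: "right", 3: "stay"}

# Model: the agent's representation of the environment; it predicts the
# next state and reward for a given state/action pair
def model(state, action):
    next_state = min(state + 1, 3) if action == "right" else state
    reward = 1.0 if next_state == 3 and state != 3 else 0.0
    return next_state, reward

# Value function: a prediction of future return (not of the next state),
# here estimated by rolling the model forward under the policy
def value(state, horizon=10):
    total = 0.0
    for _ in range(horizon):
        if state == 3:                    # treat the goal state as terminal
            break
        state, reward = model(state, policy[state])
        total += reward
    return total

print({s: value(s) for s in states})      # {0: 1.0, 1: 1.0, 2: 1.0, 3: 0.0}
```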

Q6. In reinforcement learning, rather than finding a mapping from states to state values,
____________ finds a mapping from state/action pairs to values.

A. Value Iteration
B. Q-learning
C. Reinforcement function
D. Value function

Ans: Question based on Week 12 (Lecture 3) Videos
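
A minimal tabular Q-learning sketch for Q6 on a made-up five-state chain: the table is indexed by state/action pairs rather than by states alone, and each update moves Q(s, a) toward r + gamma * max_a' Q(s', a'). The environment and hyper-parameters are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 5, 2                # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))       # one value per state/action pair
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

def step(s, a):                           # hypothetical chain MDP, goal at the right end
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1      # next state, reward, done

for episode in range(200):
    s, done = 0, False
    for _ in range(100):                  # cap episode length
        if done:
            break
        greedy = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))  # break ties randomly
        a = rng.integers(n_actions) if rng.random() < eps else int(greedy)
        s2, r, done = step(s, a)
        # Q-learning update on the state/action pair (s, a)
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

print(Q.argmax(axis=1))                   # greedy policy: prefers "right" (1) in states 0-3
```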

Q7. Assertion: In Deep Feed-forward Networks, multiple hidden layers help in modelling complex
nonlinear relations more efficiently.
Reason: Backpropagation using gradient descent is the most common learning algorithm
used to train this model.

Mark the correct choice as
A. Both A and R are true and R is the correct explanation for A
B. Both A and R are true but R is not the correct explanation for A
C. A is True but R is False
D. A is false but R is True

Ans: Question based on Week 12 (Lecture 3) Videos
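
A small sketch of the assertion and reason in Q7 together: a feed-forward network with one hidden layer of sigmoid units, trained by backpropagation with plain gradient descent, fits the non-linear XOR function that a single linear unit cannot. The architecture, learning rate and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR: not linearly separable

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)     # hidden layer (4 units)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)     # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backpropagation of the squared-error gradient
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent parameter updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))            # should approach [0, 1, 1, 0]
```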

Q8. Assertion: In a Reinforcement Learning (RL) agent, a policy is the agent’s behaviour and is a
map from state to action.
Reason: A model is the RL agent’s representation of the environment and predicts what the
agent will do next.

Mark the correct choice as
A. Both A and R are true and R is the correct explanation for A
B. Both A and R are true but R is not the correct explanation for A
C. A is True but R is False
D. A is false but R is True
Ans: Question based on Week 12 (Lecture 3) Videos

Q9. Assertion: The layers involved in any CNN model are the convolution layers and the
subsampling / pooling layers, which allow the network to learn filters that are
specific to particular parts of an image.
Reason: The convolution layers help the network retain the spatial arrangement of pixels
present in any image; the pooling layers summarize the pixel information.

Mark the correct choice as
A. Both A and R are true and R is the correct explanation for A
B. Both A and R are true but R is not the correct explanation for A
C. A is True but R is False
D. A is false but R is True

Ans: Question based on Week 12 (Lecture 3) Videos
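
A NumPy sketch of the two layer types in Q9: a (valid, stride-1) 2-D convolution that preserves the spatial arrangement of pixels, followed by 2x2 max pooling that summarizes local responses. The image and the hand-written edge kernel are illustrative assumptions; in a CNN the kernels themselves are learned.

```python
import numpy as np

def conv2d(image, kernel):                 # valid convolution, stride 1
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # each output value looks at a local patch, keeping spatial structure
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool_2x2(x):                       # summarize each 2x2 block by its maximum
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])         # a vertical-edge filter (hand-written here)

feature_map = conv2d(image, kernel)        # shape (4, 4)
pooled = max_pool_2x2(feature_map)         # shape (2, 2)
print(feature_map.shape, pooled.shape)
```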

Q10. Deep Learning (DL), which uses either deep architectures of learning or hierarchical learning
approaches, is a class of Machine Learning (ML). Identify the correct statements below.
A. DL approaches do not require precisely defined features.
B. Traditional ML approaches use handcrafted, engineered features obtained by applying
several feature extraction algorithms, and then apply the learning algorithms.
C. In DL, the features are learned automatically and represented hierarchically in multiple
levels.
D. The DL approach is not scalable.

Ans: Question based on Week 12 (Lecture 3) Videos
