Deep Learning - IIT Ropar - Unit 12 - Week 9

Week 9: Assignment 9
2) Consider the following corpus: "human machine interface for computer applications. user opinion of computer system response time. user interface management system. system engineering for improved response time". What is the size of the vocabulary of the above corpus? 1 point

13
14
15
16
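One way to check the count is to tokenise the corpus and collect the unique word types. A minimal sketch, assuming punctuation is stripped and tokens are split on whitespace:

```python
# Vocabulary size = number of unique word types, assuming periods are
# stripped and tokens are split on whitespace.
corpus = ("human machine interface for computer applications. "
          "user opinion of computer system response time. "
          "user interface management system. "
          "system engineering for improved response time")

vocabulary = set(corpus.replace(".", "").split())
print(sorted(vocabulary))
print(len(vocabulary))  # 15 distinct words under these assumptions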
3) What is the Euclidean distance between CAR and BUS? 1 point

1.414

Yes, the answer is correct.
Score: 1
Accepted Answers:
(Type: Range) 1.40,1.42
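The co-occurrence vectors for CAR and BUS come from the table accompanying this question; any pair of vectors that differ by 1 in exactly two components gives $\sqrt{2} \approx 1.414$, which falls in the accepted range. A minimal sketch with hypothetical vectors standing in for the table values:

```python
import numpy as np

# Hypothetical co-occurrence vectors (stand-ins for the question's table):
# they differ by 1 in two components, so the distance is sqrt(2) ~ 1.414.
car = np.array([2, 1, 0])
bus = np.array([1, 1, 1])

distance = np.linalg.norm(car - bus)
print(round(distance, 3))  # 1.414
```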
4) Let count(w, c) be the number of times the words w and c appear together in the corpus (i.e., occur within a window of a few words around each other). Further, let count(w) and count(c) be the total number of times the words w and c appear in the corpus, respectively, and let N be the total number of words in the corpus. The PMI between w and c is then given by: 1 point

$\log \frac{count(w,c) \cdot count(w)}{N \cdot count(c)}$

$\log \frac{count(w,c) \cdot count(c)}{N \cdot count(w)}$

$\log \frac{count(w,c) \cdot N}{count(w) \cdot count(c)}$

Yes, the answer is correct.
Score: 1
Accepted Answers:
$\log \frac{count(w,c) \cdot N}{count(w) \cdot count(c)}$
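The accepted option follows from the definition of PMI: with $p(w,c) = count(w,c)/N$, $p(w) = count(w)/N$ and $p(c) = count(c)/N$, the ratio $p(w,c)/(p(w)\,p(c))$ simplifies to $count(w,c) \cdot N / (count(w) \cdot count(c))$. A minimal sketch, using toy counts that are illustrative only:

```python
import math

def pmi(count_wc: int, count_w: int, count_c: int, n: int) -> float:
    """PMI(w, c) = log( count(w,c) * N / (count(w) * count(c)) )."""
    return math.log(count_wc * n / (count_w * count_c))

# Toy counts: the pair co-occurs twice as often as independence would
# predict, so PMI = log(2) ~ 0.693.
print(pmi(count_wc=4, count_w=20, count_c=20, n=200))
```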
5) Consider a skip-gram model trained using hierarchical softmax for analyzing scientific literature. We observe that the word embeddings for 'Neuron' and 'Brain' are highly similar. Similarly, the embeddings for 'Synapse' and 'Brain' also show high similarity. Which of the following statements can be inferred? 1 point

'Neuron' and 'Brain' frequently appear in similar contexts
The model's learned representations will indicate a high similarity between 'Neuron' and 'Synapse'
The model's learned representations will not show a high similarity between 'Neuron' and 'Synapse'
According to the model's learned representations, 'Neuron' and 'Brain' have a low cosine similarity

Yes, the answer is correct.
Score: 1
Accepted Answers:
'Neuron' and 'Brain' frequently appear in similar contexts
The model's learned representations will indicate a high similarity between 'Neuron' and 'Synapse'
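The second accepted answer rests on cosine similarity behaving roughly transitively: vectors that both point close to 'Brain' also point close to each other. A minimal sketch with made-up 2-D embeddings:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: 1.0 means identical direction.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up 2-D embeddings for illustration: 'neuron' and 'synapse' both
# lie close to 'brain', so they also lie close to each other.
brain   = np.array([1.0, 1.0])
neuron  = np.array([0.9, 1.1])
synapse = np.array([1.1, 0.9])

print(cosine(neuron, brain))    # ~0.995, high
print(cosine(synapse, brain))   # ~0.995, high
print(cosine(neuron, synapse))  # ~0.980, also high
```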
6) Which of the following is a disadvantage of one-hot encoding? 1 point

It requires a large amount of memory to store the vectors
It can result in a high-dimensional sparse representation
It cannot capture the semantic similarity between words
All of the above

Yes, the answer is correct.
Score: 1
Accepted Answers:
All of the above
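All three disadvantages are visible in code: one-hot vectors have one dimension per vocabulary word, are zero almost everywhere, and every pair of distinct words is orthogonal, so no similarity structure survives. A minimal sketch with a toy vocabulary:

```python
import numpy as np

vocab = ["human", "machine", "interface", "computer"]
index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    # One dimension per vocabulary word: the vector grows with |V| and
    # is zero everywhere except at the word's own index.
    v = np.zeros(len(vocab))
    v[index[word]] = 1.0
    return v

print(one_hot("machine"))                        # [0. 1. 0. 0.]
# The dot product between any two distinct words is 0, so one-hot
# vectors carry no notion of semantic similarity.
print(one_hot("machine") @ one_hot("computer"))  # 0.0
```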
7) Which of the following is true about the input representation in the CBOW model? 1 point

Each word is represented as a one-hot vector
Each word is represented as a continuous vector
Each word is represented as a sequence of one-hot vectors
Each word is represented as a sequence of continuous vectors

Yes, the answer is correct.
Score: 1
Accepted Answers:
Each word is represented as a one-hot vector
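In CBOW, each context word enters the network as a one-hot vector; multiplying it by the input embedding matrix simply selects that word's row, and the selected rows are combined (averaged here) into the hidden representation. A minimal sketch with toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 4                 # toy vocabulary size and embedding dimension
W = rng.normal(size=(V, d))  # input embedding matrix

def one_hot(i, size=V):
    v = np.zeros(size)
    v[i] = 1.0
    return v

# Multiplying a one-hot vector by W selects that word's embedding row;
# the context rows are averaged to form the hidden representation.
context_ids = [2, 5, 7]
hidden = np.mean([one_hot(i) @ W for i in context_ids], axis=0)
print(hidden.shape)  # (4,)
```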
8) What is the role of the softmax function in the skip-gram method? 1 point

To calculate the dot product between the target word and the context words
To transform the dot product into a probability distribution
To calculate the distance between the target word and the context words
To adjust the weights of the neural network during training

Yes, the answer is correct.
Score: 1
Accepted Answers:
To transform the dot product into a probability distribution
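Concretely, skip-gram scores every vocabulary word by a dot product with the target word's representation, and softmax normalises those scores into a probability distribution over context words. A minimal sketch, with random matrices standing in for trained parameters:

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability, then normalise.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

rng = np.random.default_rng(0)
V, d = 10, 4                      # toy sizes
W_in = rng.normal(size=(V, d))    # target-word embeddings
W_out = rng.normal(size=(V, d))   # context-word embeddings

# Dot products score every vocabulary word against the target word;
# softmax turns those scores into a probability distribution.
target = W_in[3]
probs = softmax(W_out @ target)
print(probs.sum())  # 1.0
```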
9) We add incorrect pairs into our corpus to maximize the probability of words that occur in the same context and minimize the probability of words that occur in different contexts. This technique is called: 1 point
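The technique described, training against sampled incorrect (word, context) pairs, is commonly known as negative sampling. A minimal sketch of the skip-gram negative-sampling loss, with randomly generated stand-in vectors:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(v_w, v_c, negatives):
    """Loss to minimise: -log sigma(v_c . v_w) for the true pair, plus
    -log sigma(-v_neg . v_w) for each sampled incorrect context."""
    loss = -np.log(sigmoid(v_c @ v_w))
    for v_neg in negatives:
        loss -= np.log(sigmoid(-(v_neg @ v_w)))
    return loss

rng = np.random.default_rng(0)
d = 4
v_w, v_c = rng.normal(size=d), rng.normal(size=d)
negatives = rng.normal(size=(2, d))  # two sampled "incorrect" contexts
print(negative_sampling_loss(v_w, v_c, negatives))
```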
10) How does Hierarchical Softmax reduce the computational complexity of computing the softmax function? 1 point
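Hierarchical softmax places the vocabulary at the leaves of a binary tree, so a word's probability becomes a product of about $\log_2 |V|$ sigmoid branch decisions along its root-to-leaf path rather than a normalisation over all $|V|$ output scores, cutting the per-prediction cost from O(|V|) to O(log |V|). A minimal sketch with a balanced toy tree; the tree layout and node vectors are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy hierarchical softmax: 8 words at the leaves of a balanced binary
# tree, one parameter vector per internal node. P(word) multiplies the
# sigmoid branch decisions on the root-to-leaf path, touching only
# O(log V) nodes instead of all V output vectors.
V, d = 8, 4                              # 8 leaves -> 7 internal nodes
rng = np.random.default_rng(0)
node_vecs = rng.normal(size=(V - 1, d))  # one vector per internal node

def word_probability(word_id, hidden):
    prob, node = 1.0, 0                  # start at the root (node 0)
    for bit in format(word_id, "03b"):   # 3 = log2(8) branch decisions
        p_left = sigmoid(node_vecs[node] @ hidden)
        prob *= p_left if bit == "0" else (1.0 - p_left)
        node = 2 * node + (1 if bit == "0" else 2)  # heap-style children
    return prob

hidden = rng.normal(size=d)
# The branch probabilities at each node sum to 1, so the leaf
# probabilities form a valid distribution over the vocabulary.
print(sum(word_probability(w, hidden) for w in range(V)))  # ~1.0
```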