CHAPTER 6
Vector Semantics and Embeddings
The asphalt that Los Angeles is famous for occurs mainly on its freeways. But
in the middle of the city is another patch of asphalt, the La Brea tar pits, and this
asphalt preserves millions of fossil bones from the last of the Ice Ages of the Pleis-
tocene Epoch. One of these fossils is the Smilodon, or saber-toothed tiger, instantly
recognizable by its long canines. Five million years ago or so, a completely different
saber-tooth tiger called Thylacosmilus lived
in Argentina and other parts of South Amer-
ica. Thylacosmilus was a marsupial whereas
Smilodon was a placental mammal, but Thy-
lacosmilus had the same long upper canines
and, like Smilodon, had a protective bone
flange on the lower jaw. The similarity of
these two mammals is one of many examples
of parallel or convergent evolution, in which particular contexts or environments
lead to the evolution of very similar structures in different species (Gould, 1980).
The role of context is also important in the similarity of a less biological kind
of organism: the word. Words that occur in similar contexts tend to have similar
meanings. This link between similarity in how words are distributed and similarity
in what they mean is called the distributional hypothesis. The hypothesis was
first formulated in the 1950s by linguists like Joos (1950), Harris (1954), and Firth
(1957), who noticed that words which are synonyms (like oculist and eye-doctor)
tended to occur in the same environment (e.g., near words like eye or examined)
with the amount of meaning difference between two words “corresponding roughly
to the amount of difference in their environments” (Harris, 1954, p. 157).
In this chapter we introduce vector semantics, which instantiates this linguistic
hypothesis by learning representations of the meaning of words, called embeddings,
directly from their distributions in texts. These representations are used in every nat-
ural language processing application that makes use of meaning, and the static em-
beddings we introduce here underlie the more powerful dynamic or contextualized
embeddings like BERT that we will see in Chapter 11.
These word representations are also the first example in this book of representation
learning, automatically learning useful representations of the input text.
Finding such self-supervised ways to learn representations of the input, instead of
creating representations by hand via feature engineering, is an important focus of
NLP research (Bengio et al., 2013).
When one word has a sense whose meaning is identical to a sense of another word, or nearly
identical, we say the two senses of those two words are synonyms. Synonyms include such pairs as
couch/sofa vomit/throw up filbert/hazelnut car/automobile
A more formal definition of synonymy (between words rather than senses) is that
two words are synonymous if they are substitutable for one another in any sentence
without changing the truth conditions of the sentence, the situations in which the
sentence would be true.
While substitutions between some pairs of words like car / automobile or wa-
ter / H2 O are truth preserving, the words are still not identical in meaning. Indeed,
probably no two words are absolutely identical in meaning. One of the fundamental
tenets of semantics, called the principle of contrast (Girard 1718, Bréal 1897, Clark
1987), states that a difference in linguistic form is always associated with some dif-
ference in meaning. For example, the word H2 O is used in scientific contexts and
would be inappropriate in a hiking guide—water would be more appropriate— and
this genre difference is part of the meaning of the word. In practice, the word syn-
onym is therefore used to describe a relationship of approximate or rough synonymy.
Word Similarity While words don’t have many synonyms, most words do have
lots of similar words. Cat is not a synonym of dog, but cats and dogs are certainly
similar words. In moving from synonymy to similarity, it will be useful to shift from
talking about relations between word senses (like synonymy) to relations between
words (like similarity). Dealing with words avoids having to commit to a particular
representation of word senses, which will turn out to simplify our task.
The notion of word similarity is very useful in larger semantic tasks. Knowing
how similar two words are can help in computing how similar the meaning of two
phrases or sentences are, a very important component of tasks like question answer-
ing, paraphrasing, and summarization. One way of getting values for word similarity
is to ask humans to judge how similar one word is to another. A number of datasets
have resulted from such experiments. For example the SimLex-999 dataset (Hill
et al., 2015) gives values on a scale from 0 to 10, like the examples below, which
range from near-synonyms (vanish, disappear) to pairs that scarcely seem to have
anything in common (hole, agreement):
vanish disappear 9.8
belief impression 5.95
muscle bone 3.65
modest flexible 0.98
hole agreement 0.3
Word Relatedness The meaning of two words can be related in ways other than
similarity. One such class of connections is called word relatedness (Budanitsky
and Hirst, 2006), also traditionally called word association in psychology.
Consider the meanings of the words coffee and cup. Coffee is not similar to cup;
they share practically no features (coffee is a plant or a beverage, while a cup is a
manufactured object with a particular shape). But coffee and cup are clearly related;
they are associated by co-participating in an everyday event (the event of drinking
coffee out of a cup). Similarly scalpel and surgeon are not similar but are related
eventively (a surgeon tends to make use of a scalpel).
One common kind of relatedness between words is if they belong to the same
semantic field. A semantic field is a set of words which cover a particular semantic
domain and bear structured relations with each other. For example, words might be
related by being in the semantic field of hospitals (surgeon, scalpel, nurse, anes-
thetic, hospital), restaurants (waiter, menu, plate, food, chef), or houses (door, roof,
kitchen, family, bed). Semantic fields are also related to topic models, like Latent
Dirichlet Allocation, LDA, which apply unsupervised learning on large sets of texts
to induce sets of associated words from text. Semantic fields and topic models are
very useful tools for discovering topical structure in documents.
In Appendix G we’ll introduce more relations between senses like hypernymy
or IS-A, antonymy (opposites) and meronymy (part-whole relations).
Semantic Frames and Roles Closely related to semantic fields is the idea of a
semantic frame. A semantic frame is a set of words that denote perspectives or
participants in a particular type of event. A commercial transaction, for example,
is a kind of event in which one entity trades money to another entity in return for
some good or service, after which the good changes hands or perhaps the service is
performed. This event can be encoded lexically by using verbs like buy (the event
from the perspective of the buyer), sell (from the perspective of the seller), pay
(focusing on the monetary aspect), or nouns like buyer. Frames have semantic roles
(like buyer, seller, goods, money), and words in a sentence can take on these roles.
Knowing that buy and sell have this relation makes it possible for a system to
know that a sentence like Sam bought the book from Ling could be paraphrased as
Ling sold the book to Sam, and that Sam has the role of the buyer in the frame and
Ling the seller. Being able to recognize such paraphrases is important for question
answering, and can help in shifting perspective for machine translation.
Connotation Finally, words have affective meanings or connotations. The word
connotation has different meanings in different fields, but here we use it to mean the
aspects of a word’s meaning that are related to a writer or reader’s emotions, senti-
ment, opinions, or evaluations. For example some words have positive connotations
(wonderful) while others have negative connotations (dreary). Even words whose
meanings are similar in other ways can vary in connotation; consider the difference
in connotations between fake, knockoff, forgery, on the one hand, and copy, replica,
reproduction on the other, or innocent (positive connotation) and naive (negative
connotation). Some words describe positive evaluation (great, love) and others neg-
ative evaluation (terrible, hate). Positive or negative evaluation language is called
sentiment, as we saw in Chapter 4, and word sentiment plays a role in important
tasks like sentiment analysis, stance detection, and applications of NLP to the lan-
guage of politics and consumer reviews.
Early work on affective meaning (Osgood et al., 1957) found that words varied
along three important dimensions of affective meaning: valence (the pleasantness of
the stimulus), arousal (the intensity of emotion provoked by the stimulus), and
dominance (the degree of control exerted by the stimulus).
Thus words like happy or satisfied are high on valence, while unhappy or annoyed
are low on valence. Excited is high on arousal, while calm is low on arousal.
Controlling is high on dominance, while awed or influenced are low on dominance.
Each word is thus represented by three numbers, corresponding to its value on each
of the three dimensions.
6.2 Vector Semantics
Figure 6.1 A two-dimensional (t-SNE) projection of embeddings for some words and
phrases, showing that words with similar meanings are nearby in space. The original 60-
dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. (2015)
with colors added for explanation.
6.3 Words and Vectors

The term-document matrix of Fig. 6.2 was first defined as part of the vector space
model of information retrieval (Salton, 1971). In this model, a document is
represented as a count vector, a column in Fig. 6.3.
To review some basic linear algebra, a vector is, at heart, just a list or array of
numbers. So As You Like It is represented as the list [1,114,36,20] (the first column
vector in Fig. 6.3) and Julius Caesar is represented as the list [7,62,1,2] (the third
column vector). A vector space is a collection of vectors, and is characterized by
its dimension. Vectors in a 3-dimensional vector space have an element for each
dimension of the space. We will loosely refer to a vector in a 4-dimensional space
as a 4-dimensional vector, with one element along each dimension. In the example
in Fig. 6.3, we’ve chosen to make the document vectors of dimension 4, just so they
fit on the page; in real term-document matrices, the document vectors would have
dimensionality |V |, the vocabulary size.
The ordering of the numbers in a vector space indicates the different dimensions
on which documents vary. The first dimension for both these vectors corresponds to
the number of times the word battle occurs, and we can compare each dimension,
noting for example that the vectors for As You Like It and Twelfth Night have similar
values (1 and 0, respectively) for the first dimension.
[Figure 6.4 plot: points for Julius Caesar [1,7] and Henry V [4,13] along dimensions corresponding to the words battle and fool.]
Figure 6.4 A spatial visualization of the document vectors for the four Shakespeare play
documents, showing just two of the dimensions, corresponding to the words battle and fool.
The comedies have high values for the fool dimension and low values for the battle dimension.
A real term-document matrix, of course, wouldn’t just have 4 rows and columns,
let alone 2. More generally, the term-document matrix has |V | rows (one for each
word type in the vocabulary) and D columns (one for each document in the collec-
tion); as we’ll see, vocabulary sizes are generally in the tens of thousands, and the
number of documents can be enormous (think about all the pages on the web).
Information retrieval (IR) is the task of finding the document d from the D
documents in some collection that best matches a query q. For IR we'll therefore also
represent a query by a vector, also of length |V |, and we’ll need a way to compare
two vectors to find how similar they are. (Doing IR will also require efficient ways
to store and manipulate these vectors by making use of the convenient fact that these
vectors are sparse, i.e., mostly zeros).
Later in the chapter we’ll introduce some of the components of this vector com-
parison process: the tf-idf term weighting, and the cosine similarity metric.
For documents, we saw that similar documents had similar vectors, because sim-
ilar documents tend to have similar words. This same principle applies to words:
similar words have similar vectors because they tend to occur in similar documents.
The term-document matrix thus lets us represent the meaning of a word by the doc-
uments it tends to occur in.
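To make the term-document representation concrete, the following short Python sketch (not part of the original text; the toy documents and all variable names are invented for illustration) builds a small term-document count matrix and reads off a document vector (a column) and a word vector (a row):

```python
from collections import Counter

# Toy corpus: a hypothetical stand-in for a collection like the Shakespeare plays.
docs = {
    "doc1": "battle soldier battle king",
    "doc2": "fool clown wit fool fool",
    "doc3": "king battle wit",
}

# Vocabulary: one row per word type.
vocab = sorted({w for text in docs.values() for w in text.split()})
doc_names = sorted(docs)

# Term-document matrix: counts[i][j] = count of vocab[i] in doc_names[j].
counts = [[Counter(docs[d].split())[w] for d in doc_names] for w in vocab]

# A document vector is a column; a word vector is a row.
doc_vector = [counts[i][doc_names.index("doc1")] for i in range(len(vocab))]
word_vector = counts[vocab.index("battle")]

print(vocab)
print(doc_vector)   # how often each word occurs in doc1
print(word_vector)  # how often 'battle' occurs in each document
```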
Note in Fig. 6.6 that the two words cherry and strawberry are more similar to
each other (both pie and sugar tend to occur in their window) than they are to other
words like digital; conversely, digital and information are more similar to each other
than, say, to strawberry. Fig. 6.7 shows a spatial visualization.
[Figure 6.7 plot: the word vectors for digital [1683,1670] and information [3982,3325], shown along two dimensions of the co-occurrence matrix (one corresponding to the context word computer).]
Note that |V |, the dimensionality of the vector, is generally the size of the vo-
cabulary, often between 10,000 and 50,000 words (using the most frequent words
in the training corpus; keeping words after about the most frequent 50,000 or so is
generally not helpful). Since most of these numbers are zero these are sparse vector
representations; there are efficient algorithms for storing and computing with sparse
matrices.
Now that we have some intuitions, let’s move on to examine the details of com-
puting word similarity. Afterwards we’ll discuss methods for weighting cells.
The dot product acts as a similarity metric because it will tend to be high just when
the two vectors have large values in the same dimensions. Alternatively, vectors that
have zeros in different dimensions—orthogonal vectors—will have a dot product of
0, representing their strong dissimilarity.
This raw dot product, however, has a problem as a similarity metric: it favors
long vectors. The vector length is defined as

|v| = \sqrt{\sum_{i=1}^{N} v_i^2}        (6.8)
The dot product is higher if a vector is longer, with higher values in each dimension.
More frequent words have longer vectors, since they tend to co-occur with more
words and have higher co-occurrence values with each of them. The raw dot product
thus will be higher for frequent words. But this is a problem; we’d like a similarity
metric that tells us how similar two words are regardless of their frequency.
We modify the dot product to normalize for the vector length by dividing the
dot product by the lengths of each of the two vectors. This normalized dot product
turns out to be the same as the cosine of the angle between the two vectors, following
from the definition of the dot product between two vectors a and b:
a · b = |a| |b| cos θ

\frac{a · b}{|a| |b|} = cos θ        (6.9)
The cosine similarity metric between two vectors v and w thus can be computed as:
cosine(v, w) = \frac{v · w}{|v| |w|} = \frac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\,\sqrt{\sum_{i=1}^{N} w_i^2}}        (6.10)
The model decides that information is way closer to digital than it is to cherry, a
result that seems sensible. Fig. 6.8 shows a visualization.
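As a concrete illustration of Eq. 6.10, here is a minimal Python sketch of cosine similarity over raw count vectors; the vectors below are illustrative stand-ins, not the chapter's exact co-occurrence counts:

```python
import math

def cosine(v, w):
    """Cosine similarity between two equal-length count vectors (Eq. 6.10)."""
    dot = sum(vi * wi for vi, wi in zip(v, w))
    norm_v = math.sqrt(sum(vi * vi for vi in v))
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    if norm_v == 0 or norm_w == 0:
        return 0.0
    return dot / (norm_v * norm_w)

# Illustrative (made-up) co-occurrence counts with contexts [pie, computer]:
cherry      = [442, 2]
digital     = [5, 1683]
information = [5, 3982]

print(cosine(cherry, information))   # small: the vectors point in different directions
print(cosine(digital, information))  # close to 1: similar direction, hence similar words
```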
Figure 6.8 A (rough) graphical demonstration of cosine similarity, showing vectors for
three words (cherry, digital, and information) in the two dimensional space defined by counts
of the words computer and pie nearby. The figure doesn’t show the cosine, but it highlights the
angles; note that the angle between digital and information is smaller than the angle between
cherry and information. When two vectors are more similar, the cosine is larger but the angle
is smaller; the cosine has its maximum (1) when the angle between two vectors is smallest
(0◦ ); the cosine of all other angles is less than 1.
6.5 TF-IDF: Weighing Terms in the Vector

Raw frequency, however, is not the best measure of association between words. Raw frequency is very skewed
and not very discriminative. If we want to know what kinds of contexts are shared
by cherry and strawberry but not by digital and information, we’re not going to get
good discrimination from words like the, it, or they, which occur frequently with
all sorts of words and aren’t informative about any particular word. We saw this
also in Fig. 6.3 for the Shakespeare corpus; the dimension for the word good is not
very discriminative between plays; good is simply a frequent word and has roughly
equivalent high frequencies in each of the plays.
It’s a bit of a paradox. Words that occur nearby frequently (maybe pie nearby
cherry) are more important than words that only appear once or twice. Yet words
that are too frequent—ubiquitous, like the or good— are unimportant. How can we
balance these two conflicting constraints?
There are two common solutions to this problem: in this section we’ll describe
the tf-idf weighting, usually used when the dimensions are documents. In the next
section we introduce the PPMI algorithm (usually used when the dimensions are
words).
The tf-idf weighting (the ‘-’ here is a hyphen, not a minus sign) is the product
of two terms, each term capturing one of these two intuitions:
The first is the term frequency (Luhn, 1957): the frequency of the word t in the
document d. We can just use the raw count as the term frequency:

tf_{t,d} = count(t, d)        (6.11)
More commonly we squash the raw frequency a bit, by using the log10 of the fre-
quency instead. The intuition is that a word appearing 100 times in a document
doesn’t make that word 100 times more likely to be relevant to the meaning of the
document. We also need to do something special with counts of 0, since we can’t
take the log of 0.2
tf_{t,d} = \begin{cases} 1 + \log_{10} count(t, d) & \text{if } count(t, d) > 0 \\ 0 & \text{otherwise} \end{cases}        (6.12)
If we use log weighting, terms which occur 0 times in a document would have tf = 0,
1 times in a document tf = 1 + log10 (1) = 1 + 0 = 1, 10 times in a document tf =
1 + log10 (10) = 2, 100 times tf = 1 + log10 (100) = 3, 1000 times tf = 4, and so on.
The second factor in tf-idf is used to give a higher weight to words that occur
only in a few documents. Terms that are limited to a few documents are useful
for discriminating those documents from the rest of the collection; terms that occur
frequently across the entire collection aren't as helpful. The document frequency
df_t of a term t is the number of documents it occurs in. Document frequency is
not the same as the collection frequency of a term, which is the total number of
times the word appears in the whole collection in any document. Consider in the
collection of Shakespeare’s 37 plays the two words Romeo and action. The words
have identical collection frequencies (they both occur 113 times in all the plays) but
very different document frequencies, since Romeo only occurs in a single play. If
our goal is to find documents about the romantic tribulations of Romeo, the word
Romeo should be highly weighted, but not action:
          Collection Frequency   Document Frequency
Romeo     113                    1
action    113                    31
2 We can also use this alternative formulation, which we have used in earlier editions: tf_{t,d} = \log_{10}(count(t, d) + 1)
We emphasize discriminative words like Romeo via the inverse document frequency
or idf term weight (Sparck Jones, 1972). The idf is defined using the fraction
N/df_t, where N is the total number of documents in the collection, and df_t is
the number of documents in which term t occurs. The fewer documents in which a
term occurs, the higher this weight. The lowest weight of 1 is assigned to terms that
occur in all the documents. It’s usually clear what counts as a document: in Shake-
speare we would use a play; when processing a collection of encyclopedia articles
like Wikipedia, the document is a Wikipedia page; in processing newspaper articles,
the document is a single article. Occasionally your corpus might not have appropri-
ate document divisions and you might need to break up the corpus into documents
yourself for the purposes of computing idf.
Because of the large number of documents in many collections, this measure
too is usually squashed with a log function. The resulting definition for inverse
document frequency (idf) is thus
idf_t = \log_{10} \frac{N}{df_t}        (6.13)
Here are some idf values for some words in the Shakespeare corpus (along with
the document frequency df values on which they are based) ranging from extremely
informative words which occur in only one play like Romeo, to those that occur in a
few like salad or Falstaff, to those which are very common like fool or so common
as to be completely non-discriminative since they occur in all 37 plays like good or
sweet.3
Word df idf
Romeo 1 1.57
salad 2 1.27
Falstaff 4 0.967
forest 12 0.489
battle 21 0.246
wit 34 0.037
fool 36 0.012
good 37 0
sweet 37 0
The tf-idf weighted value w_{t,d} for word t in document d thus combines term
frequency tf_{t,d} (defined either by Eq. 6.11 or by Eq. 6.12) with idf from Eq. 6.13:

w_{t,d} = tf_{t,d} × idf_t        (6.14)
Fig. 6.9 applies tf-idf weighting to the Shakespeare term-document matrix in Fig. 6.2,
using the tf equation Eq. 6.12. Note that the tf-idf values for the dimension corre-
sponding to the word good have now all become 0; since this word appears in every
document, the tf-idf weighting leads it to be ignored. Similarly, the word fool, which
appears in 36 out of the 37 plays, has a much lower weight.
The tf-idf weighting is the standard way of weighting co-occurrence matrices in infor-
mation retrieval, but it also plays a role in many other aspects of natural language
processing. It's also a great baseline, the simple thing to try first. We'll look at other
weightings like PPMI (Positive Pointwise Mutual Information) in Section 6.6.
3 Sweet was one of Shakespeare’s favorite adjectives, a fact probably related to the increased use of
sugar in European recipes around the turn of the 16th century (Jurafsky, 2014, p. 175).
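The following sketch shows how Eq. 6.12, Eq. 6.13, and the combined tf-idf weight can be computed in Python; the term-document counts below are a toy approximation of the Shakespeare example, not authoritative values:

```python
import math

# Toy term-document counts (illustrative only, loosely following the chapter's example).
counts = {
    "battle": {"As You Like It": 1,   "Twelfth Night": 0,  "Julius Caesar": 7,  "Henry V": 13},
    "good":   {"As You Like It": 114, "Twelfth Night": 80, "Julius Caesar": 62, "Henry V": 89},
    "fool":   {"As You Like It": 36,  "Twelfth Night": 58, "Julius Caesar": 1,  "Henry V": 4},
    "wit":    {"As You Like It": 20,  "Twelfth Night": 15, "Julius Caesar": 2,  "Henry V": 3},
}
docs = ["As You Like It", "Twelfth Night", "Julius Caesar", "Henry V"]
N = len(docs)

def tf(term, doc):
    # Log-scaled term frequency (Eq. 6.12).
    c = counts[term][doc]
    return 1 + math.log10(c) if c > 0 else 0.0

def idf(term):
    # Inverse document frequency (Eq. 6.13).
    df = sum(1 for d in docs if counts[term][d] > 0)
    return math.log10(N / df)

def tfidf(term, doc):
    # Combined tf-idf weight (Eq. 6.14).
    return tf(term, doc) * idf(term)

for term in counts:
    print(term, [round(tfidf(term, d), 3) for d in docs])
# 'good' occurs in every document, so its idf (and hence every tf-idf value) is 0.
```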
6.6 Pointwise Mutual Information (PMI)

An alternative weighting function to tf-idf is PPMI (positive pointwise mutual information),
used for term-term matrices whose dimensions are words. PPMI is based on the intuition
that the best way to weigh the association between two words is to ask how much more the
two words co-occur in our corpus than we would have expected them to appear by chance.
Pointwise mutual information (Fano, 1961)4 is a measure of how often two events x and y
occur, compared with what we would expect if they were independent:

I(x, y) = \log_2 \frac{P(x, y)}{P(x)P(y)}        (6.16)
The pointwise mutual information between a target word w and a context word
c (Church and Hanks 1989, Church and Hanks 1990) is then defined as:
PMI(w, c) = \log_2 \frac{P(w, c)}{P(w)P(c)}        (6.17)
The numerator tells us how often we observed the two words together (assuming
we compute probability by using the MLE). The denominator tells us how often
we would expect the two words to co-occur assuming they each occurred indepen-
dently; recall that the probability of two independent events both occurring is just
the product of the probabilities of the two events. Thus, the ratio gives us an esti-
mate of how much more the two words co-occur than we expect by chance. PMI is
a useful tool whenever we need to find words that are strongly associated.
PMI values range from negative to positive infinity. But negative PMI values
(which imply things are co-occurring less often than we would expect by chance)
tend to be unreliable unless our corpora are enormous. To distinguish whether
two words whose individual probability is each 10−6 occur together less often than
chance, we would need to be certain that the probability of the two occurring to-
gether is significantly less than 10−12 , and this kind of granularity would require an
enormous corpus. Furthermore it’s not clear whether it’s even possible to evaluate
such scores of ‘unrelatedness’ with human judgments. For this reason it is more
4 PMI is based on the mutual information between two random variables X and Y, defined as:

I(X, Y) = \sum_x \sum_y P(x, y) \log_2 \frac{P(x, y)}{P(x)P(y)}        (6.15)
In a confusion of terminology, Fano used the phrase mutual information to refer to what we now call
pointwise mutual information and the phrase expectation of the mutual information for what we now call
mutual information.
common to use Positive PMI (called PPMI) which replaces all negative PMI values
with zero (Church and Hanks 1989, Dagan et al. 1993, Niwa and Nitta 1994)5:

PPMI(w, c) = \max\left(\log_2 \frac{P(w, c)}{P(w)P(c)},\; 0\right)        (6.18)
More formally, let's assume we have a co-occurrence matrix F with W rows (words)
and C columns (contexts), where f_{ij} gives the number of times word w_i occurs with
context c_j. This can be turned into a PPMI matrix where PPMI_{ij} gives the PPMI
value of word w_i with context c_j (which we can also express as PPMI(w_i, c_j) or
PPMI(w = i, c = j)) as follows:

p_{ij} = \frac{f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}, \quad p_{i*} = \frac{\sum_{j=1}^{C} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}, \quad p_{*j} = \frac{\sum_{i=1}^{W} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}        (6.19)

PPMI_{ij} = \max\left(\log_2 \frac{p_{ij}}{p_{i*}\, p_{*j}},\; 0\right)        (6.20)
Let’s see some PPMI calculations. We’ll use Fig. 6.10, which repeats Fig. 6.6 plus
all the count marginals, and let’s pretend for ease of calculation that these are the
only words/contexts that matter.
Fig. 6.11 shows the joint probabilities computed from the counts in Fig. 6.10, and
Fig. 6.12 shows the PPMI values. Not surprisingly, cherry and strawberry are highly
associated with both pie and sugar, and data is mildly associated with information.
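Here is an illustrative Python sketch of Eqs. 6.19-6.20, turning a small word-context count matrix into a PPMI matrix; the counts are toy values loosely modeled on Fig. 6.10, not the chapter's exact numbers:

```python
import numpy as np

# Toy word-context count matrix F (rows = words, columns = contexts); illustrative only.
words    = ["cherry", "strawberry", "digital", "information"]
contexts = ["computer", "data", "pie", "sugar"]
F = np.array([
    [2,    8,    442, 25],
    [0,    0,    60,  19],
    [1670, 1683, 4,   4],
    [3325, 3982, 5,   13],
], dtype=float)

total = F.sum()
p_ij = F / total                          # joint probabilities (Eq. 6.19)
p_i  = p_ij.sum(axis=1, keepdims=True)    # row marginals p_{i*}
p_j  = p_ij.sum(axis=0, keepdims=True)    # column marginals p_{*j}

with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log2(p_ij / (p_i * p_j))
ppmi = np.maximum(pmi, 0)                 # clip negatives and -inf to 0 (Eq. 6.20)
ppmi = np.nan_to_num(ppmi)                # guard against any 0/0 cells

for w, row in zip(words, ppmi.round(2)):
    print(w, dict(zip(contexts, row)))
```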
Fig. 6.11 (excerpt): joint probabilities p(w, context) and marginals p(w), computed from the counts in Fig. 6.10:

             computer   data     result   pie      sugar    p(w)
cherry       0.0002     0.0007   0.0008   0.0377   0.0021   0.0415
strawberry   0.0000     0.0000   0.0001   0.0051   0.0016   0.0068
digital      0.1425     0.1436   0.0073   0.0004   0.0003   0.2942
information  0.2838     0.3399   0.0323   0.0004   0.0011   0.6575

5 Positive PMI also cleanly solves the problem of what to do with zero counts, using 0 to replace the −∞ from log(0).

PMI has the problem of being biased toward infrequent events; very rare words
tend to have very high PMI values. One way to reduce this bias toward low frequency
events is to slightly change the computation for P(c), using a different function Pα (c)
that raises the probability of the context word to the power of α:
PPMI_α(w, c) = \max\left(\log_2 \frac{P(w, c)}{P(w)P_α(c)},\; 0\right)        (6.21)

P_α(c) = \frac{count(c)^α}{\sum_c count(c)^α}        (6.22)
6.7 Applications of the tf-idf or PPMI Vector Models

The vector semantic model we have described so far is sometimes referred to as the
tf-idf model or the PPMI model, after the weighting function.
The tf-idf model of meaning is often used for document functions like deciding
if two documents are similar. We represent a document by taking the vectors of
all the words in the document, and computing the centroid of all those vectors.
The centroid is the multidimensional version of the mean; the centroid of a set of
vectors is a single vector that has the minimum sum of squared distances to each of
the vectors in the set. Given k word vectors w_1, w_2, ..., w_k, the centroid document
vector d is:

d = \frac{w_1 + w_2 + ... + w_k}{k}        (6.23)
Given two documents, we can then compute their document vectors d1 and d2 , and
estimate the similarity between the two documents by cos(d1 , d2 ). Document sim-
ilarity is also useful for all sorts of applications: information retrieval, plagiarism
detection, news recommender systems, and even for digital humanities tasks like
comparing different versions of a text to see which are similar to each other.
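A minimal sketch of the centroid document vector of Eq. 6.23 and document comparison by cosine, assuming some hypothetical precomputed tf-idf (or PPMI) word vectors:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical tf-idf (or PPMI) word vectors, one per word type (invented values).
word_vecs = {
    "cherry":  np.array([0.1, 4.2, 2.0]),
    "pie":     np.array([0.2, 3.9, 1.7]),
    "digital": np.array([3.8, 0.1, 0.4]),
    "data":    np.array([3.5, 0.2, 0.3]),
}

def doc_vector(words):
    """Centroid of the vectors of the words in the document (Eq. 6.23)."""
    return np.mean([word_vecs[w] for w in words], axis=0)

d1 = doc_vector(["cherry", "pie", "pie"])
d2 = doc_vector(["digital", "data"])
d3 = doc_vector(["cherry", "pie"])

print(cosine(d1, d3))  # high: both toy documents are about baking
print(cosine(d1, d2))  # lower: different topics
```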
Either the PPMI model or the tf-idf model can be used to compute word simi-
larity, for tasks like finding word paraphrases, tracking changes in word meaning, or
automatically discovering meanings of words in different corpora. For example, we
can find the 10 most similar words to any target word w by computing the cosines
between w and each of the V − 1 other words, sorting, and looking at the top 10.
6.8 Word2vec
In the previous sections we saw how to represent a word as a sparse, long vector with
dimensions corresponding to words in the vocabulary or documents in a collection.
We now introduce a more powerful word representation: embeddings, short dense
vectors. Unlike the vectors we've seen so far, embeddings are short, with the number
of dimensions d ranging from 50 to 1000, rather than the much larger vocabulary size
|V | or number of documents D we’ve seen. These d dimensions don’t have a clear
interpretation. And the vectors are dense: instead of vector entries being sparse,
mostly-zero counts or functions of counts, the values will be real-valued numbers
that can be negative.
It turns out that dense vectors work better in every NLP task than sparse vectors.
While we don’t completely understand all the reasons for this, we have some intu-
itions. Representing words as 300-dimensional dense vectors requires our classifiers
to learn far fewer weights than if we represented words as 50,000-dimensional vec-
tors, and the smaller parameter space possibly helps with generalization and avoid-
ing overfitting. Dense vectors may also do a better job of capturing synonymy.
For example, in a sparse vector representation, the dimensions for synonyms like car
and automobile are distinct and unrelated; sparse vectors may thus fail
to capture the similarity between a word with car as a neighbor and a word with
automobile as a neighbor.
In this section we introduce one method for computing embeddings: skip-gram
with negative sampling, sometimes called SGNS. The skip-gram algorithm is one
of two algorithms in a software package called word2vec, and so sometimes the
algorithm is loosely referred to as word2vec (Mikolov et al. 2013a, Mikolov et al.
2013b). The word2vec methods are fast, efficient to train, and easily available online
with code and pretrained embeddings. Word2vec embeddings are static embeddings,
meaning that the method learns one fixed embedding for each word in the
vocabulary. In Chapter 11 we’ll introduce methods for learning dynamic contextual
embeddings like the popular family of BERT representations, in which the vector
for each word is different in different contexts.
The intuition of word2vec is that instead of counting how often each word w oc-
curs near, say, apricot, we’ll instead train a classifier on a binary prediction task: “Is
word w likely to show up near apricot?” We don’t actually care about this prediction
task; instead we’ll take the learned classifier weights as the word embeddings.
The revolutionary intuition here is that we can just use running text as implicitly
supervised training data for such a classifier; a word c that occurs near the target
word apricot acts as gold ‘correct answer’ to the question “Is word c likely to show
up near apricot?” This method, often called self-supervision, avoids the need for
any sort of hand-labeled supervision signal. This idea was first proposed in the task
of neural language modeling, when Bengio et al. (2003) and Collobert et al. (2011)
showed that a neural language model (a neural network that learned to predict the
next word from prior words) could just use the next word in running text as its
supervision signal, and could be used to learn an embedding representation for each
word as part of doing this prediction task.
We’ll see how to do neural networks in the next chapter, but word2vec is a
much simpler model than the neural network language model, in two ways. First,
word2vec simplifies the task (making it binary classification instead of word pre-
diction). Second, word2vec simplifies the architecture (training a logistic regression
classifier instead of a multi-layer neural network with hidden layers that demand
more sophisticated training algorithms). The intuition of skip-gram is:
1. Treat the target word and a neighboring context word as positive examples.
2. Randomly sample other words in the lexicon to get negative samples.
3. Use logistic regression to train a classifier to distinguish those two cases.
4. Use the learned weights as the embeddings.
Our goal is to train a classifier that, given a candidate (word, context) pair (w, c),
returns the probability that c is a real context word for w:

P(+|w, c)        (6.24)

The probability that word c is not a real context word for w is just 1 minus Eq. 6.24:

P(−|w, c) = 1 − P(+|w, c)        (6.25)

How does the classifier compute the probability P? The intuition of skip-gram is to
base this probability on embedding similarity: a word is likely to
occur near the target if its embedding vector is similar to the target embedding. To
compute similarity between these dense embeddings, we rely on the intuition that
two vectors are similar if they have a high dot product (after all, cosine is just a
normalized dot product). In other words:
Similarity(w, c) ≈ c · w (6.26)
The dot product c · w is not a probability, it’s just a number ranging from −∞ to ∞
(since the elements in word2vec embeddings can be negative, the dot product can be
negative). To turn the dot product into a probability, we’ll use the logistic or sigmoid
function σ (x), the fundamental core of logistic regression:
σ(x) = \frac{1}{1 + \exp(−x)}        (6.27)
We model the probability that word c is a real context word for target word w as:
P(+|w, c) = σ(c · w) = \frac{1}{1 + \exp(−c · w)}        (6.28)
The sigmoid function returns a number between 0 and 1, but to make it a probability
we’ll also need the total probability of the two possible events (c is a context word,
and c isn’t a context word) to sum to 1. We thus estimate the probability that word c
is not a real context word for w as:
P(−|w, c) = 1 − P(+|w, c) = σ(−c · w) = \frac{1}{1 + \exp(c · w)}        (6.29)
Equation 6.28 gives us the probability for one word, but there are many context
words in the window. Skip-gram makes the simplifying assumption that all context
words are independent, allowing us to just multiply their probabilities:
P(+|w, c_{1:L}) = \prod_{i=1}^{L} σ(c_i · w)        (6.30)

\log P(+|w, c_{1:L}) = \sum_{i=1}^{L} \log σ(c_i · w)        (6.31)
In summary, skip-gram trains a probabilistic classifier that, given a test target word
w and its context window of L words c1:L , assigns a probability based on how similar
this context window is to the target word. The probability is based on applying the
logistic (sigmoid) function to the dot product of the embeddings of the target word
with each context word. To compute this probability, we just need embeddings for
each target word and context word in the vocabulary.
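The following Python sketch implements Eqs. 6.27-6.31 directly; the embeddings are random toy vectors rather than trained word2vec parameters, so the probabilities are only illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # Eq. 6.27

def p_positive(w, c):
    return sigmoid(np.dot(c, w))              # Eq. 6.28: P(+|w, c)

def log_p_window(w, context_vecs):
    # Eq. 6.31: context words are assumed independent, so log-probabilities add.
    return sum(np.log(p_positive(w, c)) for c in context_vecs)

rng = np.random.default_rng(0)
d = 8                                          # toy embedding dimensionality
w_target = rng.normal(size=d)                  # target embedding (random, untrained)
window = [rng.normal(size=d) for _ in range(4)]  # L = 4 context embeddings

print(p_positive(w_target, window[0]))
print(log_p_window(w_target, window))
```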
Fig. 6.13 shows the intuition of the parameters we’ll need. Skip-gram actually
stores two embeddings for each word, one for the word as a target, and one for the
word considered as context. Thus the parameters we need to learn are two matrices
W and C, each containing an embedding for every one of the |V | words in the
vocabulary V .6 Let’s now turn to learning these embeddings (which is the real goal
of training this classifier in the first place).
6 In principle the target matrix and the context matrix could use different vocabularies, but we’ll simplify
by assuming one shared vocabulary V .
Figure 6.13 The embeddings learned by the skipgram model. The algorithm stores two
embeddings for each word, the target embedding (sometimes called the input embedding)
and the context embedding (sometimes called the output embedding). The parameter θ that
the algorithm learns is thus a matrix of 2|V | vectors, each of dimension d, formed by concate-
nating two matrices, the target embeddings W and the context+noise embeddings C.
For training, skip-gram also needs negative examples: noise words, chosen according
to their weighted unigram frequency P_α(w), where α is a weight:

P_α(w) = \frac{count(w)^α}{\sum_{w'} count(w')^α}        (6.32)
Setting α = .75 gives better performance because it gives rare noise words slightly
higher probability: for rare words, Pα (w) > P(w). To illustrate this intuition, it
might help to work out the probabilities for an example with α = .75 and two events,
P(a) = 0.99 and P(b) = 0.01:
P_α(a) = \frac{.99^{.75}}{.99^{.75} + .01^{.75}} = 0.97

P_α(b) = \frac{.01^{.75}}{.99^{.75} + .01^{.75}} = 0.03        (6.33)
Thus using α = .75 increases the probability of the rare event b from 0.01 to 0.03.
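A small sketch of the weighted unigram distribution of Eq. 6.32, reproducing the two-event calculation of Eq. 6.33 (the other counts shown are invented for illustration):

```python
def weighted_unigram(counts, alpha=0.75):
    """P_alpha(w) from Eq. 6.32: unigram counts raised to the power alpha, renormalized."""
    weighted = {w: c ** alpha for w, c in counts.items()}
    total = sum(weighted.values())
    return {w: v / total for w, v in weighted.items()}

# Two-event example mirroring Eq. 6.33: P(a) = 0.99, P(b) = 0.01.
print(weighted_unigram({"a": 99, "b": 1}))          # ~{'a': 0.97, 'b': 0.03}

# Rare words get boosted relative to their raw frequency (toy counts).
print(weighted_unigram({"the": 50000, "apricot": 10, "tolstoy": 2}))
```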
Given the set of positive and negative training instances, and an initial set of
embeddings, the goal of the learning algorithm is to adjust those embeddings to
• Maximize the similarity of the target word, context word pairs (w, cpos ) drawn
from the positive examples
• Minimize the similarity of the (w, cneg ) pairs from the negative examples.
If we consider one word/context pair (w, cpos ) with its k noise words cneg1 ...cnegk ,
we can express these two goals as the following loss function L to be minimized
(hence the −); here the first term expresses that we want the classifier to assign the
real context word cpos a high probability of being a neighbor, and the second term
expresses that we want to assign each of the noise words cnegi a high probability of
being a non-neighbor, all multiplied because we assume independence:

L = −\log\left[ P(+|w, c_{pos}) \prod_{i=1}^{k} P(−|w, c_{neg_i}) \right]
  = −\left[ \log P(+|w, c_{pos}) + \sum_{i=1}^{k} \log P(−|w, c_{neg_i}) \right]
  = −\left[ \log P(+|w, c_{pos}) + \sum_{i=1}^{k} \log\left(1 − P(+|w, c_{neg_i})\right) \right]
  = −\left[ \log σ(c_{pos} · w) + \sum_{i=1}^{k} \log σ(−c_{neg_i} · w) \right]        (6.34)
That is, we want to maximize the dot product of the word with the actual context
words, and minimize the dot products of the word with the k negative sampled non-
neighbor words.
We minimize this loss function using stochastic gradient descent. Fig. 6.14
shows the intuition of one step of learning.
Figure 6.14 Intuition of one step of gradient descent. The skip-gram model tries to shift
embeddings so the target embeddings (here for apricot) are closer to (have a higher dot product
with) context embeddings for nearby words (here jam) and further from (lower dot product
with) context embeddings for noise words that don't occur nearby (here Tolstoy and matrix).

To get the gradient, we need to take the derivative of Eq. 6.34 with respect to
the different embeddings. It turns out the derivatives are the following (we leave the
proof as an exercise at the end of the chapter):
\frac{∂L}{∂c_{pos}} = [σ(c_{pos} · w) − 1]\, w        (6.35)

\frac{∂L}{∂c_{neg}} = [σ(c_{neg} · w)]\, w        (6.36)

\frac{∂L}{∂w} = [σ(c_{pos} · w) − 1]\, c_{pos} + \sum_{i=1}^{k} [σ(c_{neg_i} · w)]\, c_{neg_i}        (6.37)
The update equations going from time step t to t + 1 in stochastic gradient descent
are thus:
c_{pos}^{t+1} = c_{pos}^{t} − η\,[σ(c_{pos}^{t} · w^{t}) − 1]\, w^{t}        (6.38)

c_{neg}^{t+1} = c_{neg}^{t} − η\,[σ(c_{neg}^{t} · w^{t})]\, w^{t}        (6.39)

w^{t+1} = w^{t} − η\left[ [σ(c_{pos}^{t} · w^{t}) − 1]\, c_{pos}^{t} + \sum_{i=1}^{k} [σ(c_{neg_i}^{t} · w^{t})]\, c_{neg_i}^{t} \right]        (6.40)
Just as in logistic regression, then, the learning algorithm starts with randomly ini-
tialized W and C matrices, and then walks through the training corpus using gradient
descent to move W and C so as to minimize the loss in Eq. 6.34 by making the up-
dates in (Eq. 6.38)-(Eq. 6.40).
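The following sketch implements one stochastic gradient descent step of Eqs. 6.38-6.40 on randomly initialized toy matrices W and C; the vocabulary indices, learning rate, and initialization are assumptions for illustration, not the reference word2vec implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(W, C, w_idx, pos_idx, neg_idx, eta=0.1):
    """One SGD step for one (target, positive context, negative samples) example."""
    w = W[w_idx]
    c_pos = C[pos_idx]
    g_pos = sigmoid(c_pos @ w) - 1.0          # gradient scale for c_pos (Eq. 6.38)
    grad_w = g_pos * c_pos                    # start accumulating dL/dw (Eq. 6.40)
    C[pos_idx] = c_pos - eta * g_pos * w
    for j in neg_idx:                         # the k negative samples (Eq. 6.39)
        c_neg = C[j]
        g_neg = sigmoid(c_neg @ w)
        grad_w += g_neg * c_neg
        C[j] = c_neg - eta * g_neg * w
    W[w_idx] = w - eta * grad_w

rng = np.random.default_rng(1)
V, d = 10, 5                                  # toy vocabulary size and dimensionality
W = rng.normal(scale=0.1, size=(V, d))        # target embeddings
C = rng.normal(scale=0.1, size=(V, d))        # context embeddings

# e.g. target "apricot"=0, positive context "jam"=1, noise words "matrix"=2, "Tolstoy"=3
before = sigmoid(C[1] @ W[0])
sgns_step(W, C, w_idx=0, pos_idx=1, neg_idx=[2, 3])
after = sigmoid(C[1] @ W[0])
print(before, "->", after)   # P(+|apricot, jam) should increase after the step
```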
Recall that the skip-gram model learns two separate embeddings for each word i:
the target embedding w_i and the context embedding c_i, stored in two matrices, the
target matrix W and the context matrix C. It's common to just add them together,
representing word i with the vector w_i + c_i. Alternatively we can throw away the C
matrix and just represent each word i by the vector w_i.
As with the simple count-based methods like tf-idf, the context window size L
affects the performance of skip-gram embeddings, and experiments often tune the
parameter L on a devset.
GloVe (Pennington et al., 2014), short for Global Vectors, is another widely used static
embedding model, so named because the model is based on capturing global
corpus statistics. GloVe is based on ratios of probabilities from the word-word co-
occurrence matrix, combining the intuitions of count-based models like PPMI while
also capturing the linear structures used by methods like word2vec.
It turns out that dense embeddings like word2vec actually have an elegant math-
ematical relationship with sparse embeddings like PPMI, in which word2vec can
be seen as implicitly optimizing a function of a PPMI matrix (Levy and Goldberg,
2014c).

6.9 Visualizing Embeddings

"I see well in many dimensions as long as the dimensions are around two."
The late economist Martin Shubik

Visualizing embeddings is an important goal in helping understand, apply, and
improve these models of word meaning. But how can we visualize a (for example)
100-dimensional vector?
The simplest way to visualize the meaning of a word w embedded in a space is to
list the most similar words to w by sorting the vectors for all words in the vocabulary
by their cosine with the vector for w. For example the 7 closest words to frog using
a particular embedding computed with the GloVe algorithm are: frogs, toad, litoria,
leptodactylidae, rana, lizard, and eleutherodactylus (Pennington et al., 2014).
Yet another visualization method is to use a clustering algorithm to show a
hierarchical representation of which words are similar to others in the embedding
space. The uncaptioned figure on the left uses hierarchical clustering of some
embedding vectors for nouns as a visualization.
[Figure: multidimensional scaling and hierarchical clustering of embedding vectors for three noun classes (body parts such as wrist and ankle, animals such as dog and kitten, and place names such as France and Chicago); the embedded panels are captioned "Figure 8: Multidimensional scaling for three noun classes" and "Figure 9: Hierarchical clustering for three noun classes using distances based on vector correlations."]
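A sketch of this nearest-neighbors-by-cosine visualization; the toy random vectors below stand in for real pretrained embeddings such as GloVe vectors loaded from disk:

```python
import numpy as np

def most_similar(target, embeddings, n=5):
    """Return the n words whose embeddings have the highest cosine with the target word."""
    t = embeddings[target]
    t = t / np.linalg.norm(t)
    scores = {}
    for word, vec in embeddings.items():
        if word == target:
            continue
        scores[word] = float(np.dot(t, vec / np.linalg.norm(vec)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Toy stand-ins for pretrained embeddings (real GloVe/word2vec vectors would be loaded from a file).
rng = np.random.default_rng(2)
vocab = ["frog", "toad", "lizard", "keyboard", "monitor", "pie"]
embeddings = {w: rng.normal(size=50) for w in vocab}
embeddings["toad"] = embeddings["frog"] + 0.1 * rng.normal(size=50)  # place toad near frog

print(most_similar("frog", embeddings, n=3))
```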
Figure 6.15 The parallelogram model for analogy problems (Rumelhart and Abrahamson,
1973): the location of the vector for vine can be found by subtracting the vector for apple
from the vector for tree and adding the vector for grape.
For example, applying the parallelogram model to word embeddings, king − man + woman
is a vector close to queen. Similarly, Paris − France + Italy results
in a vector that is close to Rome. The embedding model thus seems to be extract-
ing representations of relations like MALE-FEMALE, or CAPITAL-CITY-OF, or even
COMPARATIVE/SUPERLATIVE, as shown in Fig. 6.16 from GloVe.
Figure 6.16 Relational properties of the GloVe vector space, shown by projecting vectors onto two dimen-
sions. (a) king − man + woman is close to queen. (b) Offsets seem to capture comparative and superlative
morphology (Pennington et al., 2014).
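A sketch of the parallelogram method over embeddings; the toy vectors below are constructed so the analogy works exactly, and, as is standard practice, the three input words are excluded from the candidate answers:

```python
import numpy as np

def analogy(a, b, a2, embeddings, n=1):
    """Solve a : b :: a2 : ?  by finding the word closest to b - a + a2 by cosine."""
    target = embeddings[b] - embeddings[a] + embeddings[a2]
    target = target / np.linalg.norm(target)
    best = []
    for word, vec in embeddings.items():
        if word in (a, b, a2):               # skip the three input words
            continue
        best.append((float(np.dot(target, vec / np.linalg.norm(vec))), word))
    return sorted(best, reverse=True)[:n]

# Toy embeddings built so the gender offset is parallel (real GloVe or word2vec
# vectors would instead be loaded from a pretrained file).
rng = np.random.default_rng(3)
base = {w: rng.normal(size=20) for w in ["royal", "male", "female", "person"]}
embeddings = {
    "king":  base["royal"] + base["male"],
    "queen": base["royal"] + base["female"],
    "man":   base["person"] + base["male"],
    "woman": base["person"] + base["female"],
    "apple": rng.normal(size=20),            # an unrelated distractor word
}

print(analogy("man", "king", "woman", embeddings))  # expect 'queen'
```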
Figure 6.17 A t-SNE visualization of the semantic change of 3 words in English using
word2vec vectors. The modern sense of each word, and the grey context words, are com-
puted from the most recent (modern) time-point embedding space. Earlier points are com-
puted from earlier historical embedding spaces. The visualizations show the changes in the
word gay from meanings related to "cheerful" or "frolicsome" to referring to homosexuality,
the development of the modern "transmission" sense of broadcast from its original sense of
sowing seeds, and the pejoration of the word awful as it shifted from meaning "full of awe"
to meaning "terrible or appalling" (Hamilton et al., 2016).
Embeddings also reproduce the implicit biases and stereotypes latent in the texts they
are trained on. For example, the Implicit Association Test (Greenwald et al., 1998) measures
people's associations between concepts (like 'flowers' or 'insects') and attributes (like
'pleasantness' and 'unpleasantness') by measuring differences in the latency with
which they label words in the various categories.7 Using such methods, people
in the United States have been shown to associate African-American names with
unpleasant words (more than European-American names), male names more with
mathematics and female names with the arts, and old people’s names with unpleas-
ant words (Greenwald et al. 1998, Nosek et al. 2002a, Nosek et al. 2002b). Caliskan
et al. (2017) replicated all these findings of implicit associations using GloVe vectors
and cosine similarity instead of human latencies. For example African-American
names like ‘Leroy’ and ‘Shaniqua’ had a higher GloVe cosine with unpleasant words
while European-American names (‘Brad’, ‘Greg’, ‘Courtney’) had a higher cosine
with pleasant words. These problems with embeddings are an example of a repre-
sentational harm (Crawford 2017, Blodgett et al. 2020), which is a harm caused by
a system demeaning or even ignoring some social groups. Any embedding-aware al-
gorithm that made use of word sentiment could thus exacerbate bias against African
Americans.
Recent research focuses on ways to try to remove these kinds of biases, for
example by developing a transformation of the embedding space that removes gen-
der stereotypes but preserves definitional gender (Bolukbasi et al. 2016, Zhao et al.
2017) or changing the training procedure (Zhao et al., 2018). However, although
these sorts of debiasing may reduce bias in embeddings, they do not eliminate it
(Gonen and Goldberg, 2019), and this remains an open problem.
Historical embeddings are also being used to measure biases in the past. Garg
et al. (2018) used embeddings from historical texts to measure the association be-
tween embeddings for occupations and embeddings for names of various ethnici-
ties or genders (for example the relative cosine similarity of women’s names versus
men’s to occupation words like ‘librarian’ or ‘carpenter’) across the 20th century.
They found that the cosines correlate with the empirical historical percentages of
women or ethnic groups in those occupations. Historical embeddings also repli-
cated old surveys of ethnic stereotypes; the tendency of experimental participants in
1933 to associate adjectives like ‘industrious’ or ‘superstitious’ with, e.g., Chinese
ethnicity, correlates with the cosine between Chinese last names and those adjectives
using embeddings trained on 1930s text. They also were able to document historical
gender biases, such as the fact that embeddings for adjectives related to competence
(‘smart’, ‘wise’, ‘thoughtful’, ‘resourceful’) had a higher cosine with male than fe-
male words, and showed that this bias has been slowly decreasing since 1960. We
return in later chapters to this question about the role of bias in natural language
processing.
6.13 Summary
• In vector semantics, a word is modeled as a vector—a point in high-dimensional
space, also called an embedding. In this chapter we focus on static embed-
dings, where each word is mapped to a fixed embedding.
• Vector semantic models fall into two classes: sparse and dense. In sparse
models each dimension corresponds to a word in the vocabulary V and cells
are functions of co-occurrence counts. The term-document matrix has a
row for each word (term) in the vocabulary and a column for each document.
The word-context or term-term matrix has a row for each (target) word in
the vocabulary and a column for each context term in the vocabulary. Two
sparse weightings are common: the tf-idf weighting which weights each cell
by its term frequency and inverse document frequency, and PPMI (positive
pointwise mutual information), which is most common for word-context
matrices.
• Dense vector models have dimensionality 50–1000. Word2vec algorithms
like skip-gram are a popular way to compute dense embeddings. Skip-gram
trains a logistic regression classifier to compute the probability that two words
are ‘likely to occur nearby in text’. This probability is computed from the dot
product between the embeddings for the two words.
• Skip-gram uses stochastic gradient descent to train the classifier, by learning
embeddings that have a high dot product with embeddings of words that occur
nearby and a low dot product with noise words.
• Other important embedding algorithms include GloVe, a method based on
ratios of word co-occurrence probabilities.
• Whether using sparse or dense vectors, word and document similarities are
computed by some function of the dot product between vectors. The cosine
of two vectors—a normalized dot product—is the most popular such metric.
Exercises
Agirre, E., C. Banea, C. Cardie, D. Cer, M. Diab, Carlson, G. N. 1977. Reference to kinds in English. Ph.D.
A. Gonzalez-Agirre, W. Guo, I. Lopez-Gazpio, M. Mar- thesis, University of Massachusetts, Amherst. Forward.
itxalar, R. Mihalcea, G. Rigau, L. Uria, and J. Wiebe. Church, K. W. and P. Hanks. 1989. Word association norms,
2015. SemEval-2015 task 2: Semantic textual similarity, mutual information, and lexicography. ACL.
English, Spanish and pilot on interpretability. SemEval-
15. Church, K. W. and P. Hanks. 1990. Word association norms,
mutual information, and lexicography. Computational
Agirre, E., M. Diab, D. Cer, and A. Gonzalez-Agirre. 2012. Linguistics, 16(1):22–29.
SemEval-2012 task 6: A pilot on semantic textual simi-
larity. SemEval-12. Clark, E. 1987. The principle of contrast: A constraint on
language acquisition. In B. MacWhinney, ed., Mecha-
Antoniak, M. and D. Mimno. 2018. Evaluating the stability
nisms of language acquisition, 1–33. LEA.
of embedding-based word similarities. TACL, 6:107–119.
Coccaro, N. and D. Jurafsky. 1998. Towards better integra-
Bellegarda, J. R. 1997. A latent semantic analysis framework
tion of semantic predictors in statistical language model-
for large-span language modeling. EUROSPEECH.
ing. ICSLP.
Bellegarda, J. R. 2000. Exploiting latent semantic informa-
tion in statistical language modeling. Proceedings of the Collobert, R. and J. Weston. 2007. Fast semantic extraction
IEEE, 89(8):1279–1296. using a novel neural network architecture. ACL.
Bender, E. M. and A. Koller. 2020. Climbing towards NLU: Collobert, R. and J. Weston. 2008. A unified architecture for
On meaning, form, and understanding in the age of data. natural language processing: Deep neural networks with
ACL. multitask learning. ICML.
Bengio, Y., A. Courville, and P. Vincent. 2013. Represen- Collobert, R., J. Weston, L. Bottou, M. Karlen,
tation learning: A review and new perspectives. IEEE K. Kavukcuoglu, and P. Kuksa. 2011. Natural language
Transactions on Pattern Analysis and Machine Intelli- processing (almost) from scratch. JMLR, 12:2493–2537.
gence, 35(8):1798–1828. Cordier, B. 1965. Factor-analysis of correspondences. COL-
Bengio, Y., R. Ducharme, P. Vincent, and C. Jauvin. 2003. ING 1965.
A neural probabilistic language model. JMLR, 3:1137– Crawford, K. 2017. The trouble with bias. Keynote at
1155. NeurIPS.
Bengio, Y., H. Schwenk, J.-S. Senécal, F. Morin, and J.-L. Cruse, D. A. 2004. Meaning in Language: an Introduction
Gauvain. 2006. Neural probabilistic language models. In to Semantics and Pragmatics. Oxford University Press.
Innovations in Machine Learning, 137–186. Springer. Second edition.
Bisk, Y., A. Holtzman, J. Thomason, J. Andreas, Y. Bengio, Dagan, I., S. Marcus, and S. Markovitch. 1993. Contextual
J. Chai, M. Lapata, A. Lazaridou, J. May, A. Nisnevich, word similarity and estimation from sparse data. ACL.
N. Pinto, and J. Turian. 2020. Experience grounds lan-
guage. EMNLP. Davies, M. 2012. Expanding horizons in historical lin-
guistics with the 400-million word Corpus of Historical
Blei, D. M., A. Y. Ng, and M. I. Jordan. 2003. Latent Dirich- American English. Corpora, 7(2):121–157.
let allocation. JMLR, 3(5):993–1022.
Davies, M. 2015. The Wikipedia Corpus: 4.6 million arti-
Blodgett, S. L., S. Barocas, H. Daumé III, and H. Wallach.
cles, 1.9 billion words. Adapted from Wikipedia. https:
2020. Language (technology) is power: A critical survey
//www.english-corpora.org/wiki/.
of “bias” in NLP. ACL.
Deerwester, S. C., S. T. Dumais, G. W. Furnas, R. A. Harsh-
Bojanowski, P., E. Grave, A. Joulin, and T. Mikolov. 2017.
man, T. K. Landauer, K. E. Lochbaum, and L. Streeter.
Enriching word vectors with subword information. TACL,
1988. Computer information retrieval using latent seman-
5:135–146.
tic structure: US Patent 4,839,853.
Bolukbasi, T., K.-W. Chang, J. Zou, V. Saligrama, and A. T.
Kalai. 2016. Man is to computer programmer as woman Deerwester, S. C., S. T. Dumais, T. K. Landauer, G. W. Fur-
is to homemaker? Debiasing word embeddings. NeurIPS. nas, and R. A. Harshman. 1990. Indexing by latent se-
mantics analysis. JASIS, 41(6):391–407.
Bréal, M. 1897. Essai de Sémantique: Science des significa-
tions. Hachette. Ethayarajh, K., D. Duvenaud, and G. Hirst. 2019a. Towards
understanding linear word analogies. ACL.
Budanitsky, A. and G. Hirst. 2006. Evaluating WordNet-
based measures of lexical semantic relatedness. Compu- Ethayarajh, K., D. Duvenaud, and G. Hirst. 2019b. Under-
tational Linguistics, 32(1):13–47. standing undesirable word embedding associations. ACL.
Bullinaria, J. A. and J. P. Levy. 2007. Extracting seman- Fano, R. M. 1961. Transmission of Information: A Statistical
tic representations from word co-occurrence statistics: Theory of Communications. MIT Press.
A computational study. Behavior research methods, Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin,
39(3):510–526. Z. Solan, G. Wolfman, and E. Ruppin. 2002. Placing
Bullinaria, J. A. and J. P. Levy. 2012. Extracting semantic search in context: The concept revisited. ACM Trans-
representations from word co-occurrence statistics: stop- actions on Information Systems, 20(1):116—-131.
lists, stemming, and SVD. Behavior research methods, Firth, J. R. 1957. A synopsis of linguistic theory 1930–
44(3):890–907. 1955. In Studies in Linguistic Analysis. Philological So-
Caliskan, A., J. J. Bryson, and A. Narayanan. 2017. Seman- ciety. Reprinted in Palmer, F. (ed.) 1968. Selected Papers
tics derived automatically from language corpora contain of J. R. Firth. Longman, Harlow.
human-like biases. Science, 356(6334):183–186.
Exercises 33
Garg, N., L. Schiebinger, D. Jurafsky, and J. Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644.
Girard, G. 1718. La justesse de la langue françoise: ou les différentes significations des mots qui passent pour synonimes. Laurent d'Houry, Paris.
Giuliano, V. E. 1965. The interpretation of word associations. Statistical Association Methods For Mechanized Documentation. Symposium Proceedings. Washington, D.C., USA, March 17, 1964. https://nvlpubs.nist.gov/nistpubs/Legacy/MP/nbsmiscellaneouspub269.pdf.
Gladkova, A., A. Drozd, and S. Matsuoka. 2016. Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn't. NAACL Student Research Workshop.
Glenberg, A. M. and D. A. Robertson. 2000. Symbol grounding and meaning: A comparison of high-dimensional and embodied theories of meaning. Journal of Memory and Language, 43(3):379–401.
Gonen, H. and Y. Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. NAACL HLT.
Gould, S. J. 1980. The Panda's Thumb. Penguin Group.
Greenwald, A. G., D. E. McGhee, and J. L. K. Schwartz. 1998. Measuring individual differences in implicit cognition: the implicit association test. Journal of Personality and Social Psychology, 74(6):1464–1480.
Hamilton, W. L., J. Leskovec, and D. Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. ACL.
Harris, Z. S. 1954. Distributional structure. Word, 10:146–162.
Hellrich, J. and U. Hahn. 2016. Bad company—Neighborhoods in neural embedding spaces considered harmful. COLING.
Hill, F., R. Reichart, and A. Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695.
Hjelmslev, L. 1969. Prolegomena to a Theory of Language. University of Wisconsin Press. Translated by Francis J. Whitfield; original Danish edition 1943.
Hofmann, T. 1999. Probabilistic latent semantic indexing. SIGIR-99.
Huang, E. H., R. Socher, C. D. Manning, and A. Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. ACL.
Jia, S., T. Meng, J. Zhao, and K.-W. Chang. 2020. Mitigating gender bias amplification in distribution by posterior regularization. ACL.
Jones, M. P. and J. H. Martin. 1997. Contextual spelling correction using latent semantic analysis. ANLP.
Joos, M. 1950. Description of language design. JASA, 22:701–708.
Jurafsky, D. 2014. The Language of Food. W. W. Norton, New York.
Jurgens, D., S. M. Mohammad, P. Turney, and K. Holyoak. 2012. SemEval-2012 task 2: Measuring degrees of relational similarity. *SEM 2012.
Katz, J. J. and J. A. Fodor. 1963. The structure of a semantic theory. Language, 39:170–210.
Kiela, D. and S. Clark. 2014. A systematic study of semantic vector space model parameters. EACL 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC).
Kim, E. 2019. Optimize computational efficiency of skip-gram with negative sampling. https://aegis4048.github.io/optimize_computational_efficiency_of_skip-gram_with_negative_sampling.
Lake, B. M. and G. L. Murphy. 2021. Word meaning in minds and machines. Psychological Review. In press.
Landauer, T. K. and S. T. Dumais. 1997. A solution to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104:211–240.
Landauer, T. K., D. Laham, B. Rehder, and M. E. Schreiner. 1997. How well can passage meaning be derived without using word order? A comparison of Latent Semantic Analysis and humans. COGSCI.
Lapesa, G. and S. Evert. 2014. A large scale evaluation of distributional semantic models: Parameters, interactions and model selection. TACL, 2:531–545.
Lee, D. D. and H. S. Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791.
Levy, O. and Y. Goldberg. 2014a. Dependency-based word embeddings. ACL.
Levy, O. and Y. Goldberg. 2014b. Linguistic regularities in sparse and explicit word representations. CoNLL.
Levy, O. and Y. Goldberg. 2014c. Neural word embedding as implicit matrix factorization. NeurIPS.
Levy, O., Y. Goldberg, and I. Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. TACL, 3:211–225.
Li, J., X. Chen, E. H. Hovy, and D. Jurafsky. 2015. Visualizing and understanding neural models in NLP. NAACL HLT.
Lin, Y., J.-B. Michel, E. Lieberman Aiden, J. Orwant, W. Brockman, and S. Petrov. 2012. Syntactic annotations for the Google Books NGram corpus. ACL.
Linzen, T. 2016. Issues in evaluating semantic spaces using word analogies. 1st Workshop on Evaluating Vector-Space Representations for NLP.
Luhn, H. P. 1957. A statistical approach to the mechanized encoding and searching of literary information. IBM Journal of Research and Development, 1(4):309–317.
Manning, C. D., P. Raghavan, and H. Schütze. 2008. Introduction to Information Retrieval. Cambridge.
Mikolov, T., K. Chen, G. S. Corrado, and J. Dean. 2013a. Efficient estimation of word representations in vector space. ICLR 2013.
Mikolov, T., S. Kombrink, L. Burget, J. H. Černocký, and S. Khudanpur. 2011. Extensions of recurrent neural network language model. ICASSP.
Mikolov, T., I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. 2013b. Distributed representations of words and phrases and their compositionality. NeurIPS.
Mikolov, T., W.-t. Yih, and G. Zweig. 2013c. Linguistic regularities in continuous space word representations. NAACL HLT.
Niwa, Y. and Y. Nitta. 1994. Co-occurrence vectors from corpora vs. distance vectors from dictionaries. COLING.
Nosek, B. A., M. R. Banaji, and A. G. Greenwald. 2002a. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice, 6(1):101.
Nosek, B. A., M. R. Banaji, and A. G. Greenwald. 2002b. Math = male, me = female, therefore math ≠ me. Journal of Personality and Social Psychology, 83(1):44.
Osgood, C. E., G. J. Suci, and P. H. Tannenbaum. 1957. The Measurement of Meaning. University of Illinois Press.
Pennington, J., R. Socher, and C. D. Manning. 2014. GloVe: Global vectors for word representation. EMNLP.
Peterson, J. C., D. Chen, and T. L. Griffiths. 2020. Parallelograms revisited: Exploring the limitations of vector space models for simple analogies. Cognition, 205.
Pilehvar, M. T. and J. Camacho-Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. NAACL HLT.
Rehder, B., M. E. Schreiner, M. B. W. Wolfe, D. Laham, T. K. Landauer, and W. Kintsch. 1998. Using Latent Semantic Analysis to assess knowledge: Some technical considerations. Discourse Processes, 25(2-3):337–354.
Rohde, D. L. T., L. M. Gonnerman, and D. C. Plaut. 2006. An improved model of semantic similarity based on lexical co-occurrence. CACM, 8:627–633.
Rumelhart, D. E. and A. A. Abrahamson. 1973. A model for analogical reasoning. Cognitive Psychology, 5(1):1–28.
Salton, G. 1971. The SMART Retrieval System: Experiments in Automatic Document Processing. Prentice Hall.
Schluter, N. 2018. The word analogy testing caveat. NAACL HLT.
Schone, P. and D. Jurafsky. 2000. Knowledge-free induction of morphology using latent semantic analysis. CoNLL.
Schone, P. and D. Jurafsky. 2001a. Is knowledge-free induction of multiword unit dictionary headwords a solved problem? EMNLP.
Schone, P. and D. Jurafsky. 2001b. Knowledge-free induction of inflectional morphologies. NAACL.
Schütze, H. 1992. Dimensions of meaning. Proceedings of Supercomputing '92. IEEE Press.
Schütze, H. 1997. Ambiguity Resolution in Language Learning – Computational and Cognitive Models. CSLI, Stanford, CA.
Schütze, H., D. A. Hull, and J. Pedersen. 1995. A comparison of classifiers and document representations for the routing problem. SIGIR-95.
Schütze, H. and J. Pedersen. 1993. A vector model for syntagmatic and paradigmatic relatedness. 9th Annual Conference of the UW Centre for the New OED and Text Research.
Sparck Jones, K. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11–21.
Sparck Jones, K. 1986. Synonymy and Semantic Classification. Edinburgh University Press, Edinburgh. Republication of 1964 PhD thesis.
Switzer, P. 1965. Vector images in document retrieval. Statistical Association Methods For Mechanized Documentation. Symposium Proceedings. Washington, D.C., USA, March 17, 1964. https://nvlpubs.nist.gov/nistpubs/Legacy/MP/nbsmiscellaneouspub269.pdf.
Tian, Y., V. Kulkarni, B. Perozzi, and S. Skiena. 2016. On the convergent properties of word embedding methods. ArXiv preprint arXiv:1605.03956.
Turian, J., L. Ratinov, and Y. Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. ACL.
Turney, P. D. and M. L. Littman. 2005. Corpus-based learning of analogies and semantic relations. Machine Learning, 60(1-3):251–278.
van der Maaten, L. and G. E. Hinton. 2008. Visualizing high-dimensional data using t-SNE. JMLR, 9:2579–2605.
Wierzbicka, A. 1992. Semantics, Culture, and Cognition: Universal Human Concepts in Culture-Specific Configurations. Oxford University Press.
Wierzbicka, A. 1996. Semantics: Primes and Universals. Oxford University Press.
Wittgenstein, L. 1953. Philosophical Investigations. (Translated by Anscombe, G.E.M.). Blackwell.
Zhao, J., T. Wang, M. Yatskar, V. Ordonez, and K.-W. Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. EMNLP.
Zhao, J., Y. Zhou, Z. Li, W. Wang, and K.-W. Chang. 2018. Learning gender-neutral word embeddings. EMNLP.