Speech and Language Processing - J&M
Daniel Jurafsky
Stanford University
James H. Martin
University of Colorado at Boulder
Contents
I Fundamental Algorithms for NLP 1
1 Introduction 3
5 Logistic Regression 77
5.1 The sigmoid function . . . . . . . . . . . . . . . . . . . . . . . . 78
5.2 Classification with Logistic Regression . . . . . . . . . . . . . . . 80
5.3 Multinomial logistic regression . . . . . . . . . . . . . . . . . . . 84
5.4 Learning in Logistic Regression . . . . . . . . . . . . . . . . . . . 87
Bibliography 553
In the first part of the book we introduce the fundamental suite of algorithmic
tools that make up the modern neural language model that is the heart of end-to-end
NLP systems. We begin with tokenization and preprocessing, as well as useful algo-
rithms like computing edit distance, and then proceed to the tasks of classification,
logistic regression, and neural networks, proceeding through feedforward networks,
recurrent networks, and then transformers. We’ll also see the role of embeddings as a
model of word meaning.
CHAPTER
1 Introduction
La dernière chose qu’on trouve en faisant un ouvrage est de savoir celle qu’il faut
mettre la première.
[The last thing you figure out in writing a book is what to put first.]
Pascal
CHAPTER
2 Regular Expressions, Tokenization, Edit Distance
Some languages, like Japanese, don’t have spaces between words, so word tokeniza-
tion becomes more difficult.
Another part of text normalization is lemmatization, the task of determining
that two words have the same root, despite their surface differences. For example,
the words sang, sung, and sings are forms of the verb sing. The word sing is the
common lemma of these words, and a lemmatizer maps from all of these to sing.
Lemmatization is essential for processing morphologically complex languages like
Arabic. Stemming refers to a simpler version of lemmatization in which we mainly
just strip suffixes from the end of the word. Text normalization also includes sentence
segmentation: breaking up a text into individual sentences, using cues like
periods or exclamation points.
Finally, we’ll need to compare words and other strings. We’ll introduce a metric
called edit distance that measures how similar two strings are based on the number
of edits (insertions, deletions, substitutions) it takes to change one string into the
other. Edit distance is an algorithm with applications throughout language process-
ing, from spelling correction to speech recognition to coreference resolution.
Regular expressions are case sensitive; lower case /s/ is distinct from upper
case /S/ (/s/ matches a lower case s but not an upper case S). This means that
the pattern /woodchucks/ will not match the string Woodchucks. We can solve this
problem with the use of the square braces [ and ]. The string of characters inside the
braces specifies a disjunction of characters to match. For example, Fig. 2.2 shows
that the pattern /[wW]/ matches patterns containing either w or W.
The regular expression /[1234567890]/ specifies any single digit. While such
classes of characters as digits or letters are important building blocks in expressions,
they can get awkward (e.g., it’s inconvenient to specify
/[ABCDEFGHIJKLMNOPQRSTUVWXYZ]/ (2.1)
to mean “any capital letter”). In cases where there is a well-defined sequence asso-
ciated with a set of characters, the brackets can be used with the dash (-) to specify
any one character in a range. The pattern /[2-5]/ specifies any one of the charac-
ters 2, 3, 4, or 5. The pattern /[b-g]/ specifies one of the characters b, c, d, e, f, or
g. Some other examples are shown in Fig. 2.3.
The square braces can also be used to specify what a single character cannot be,
by use of the caret ^. If the caret ^ is the first symbol after the open square brace [,
the resulting pattern is negated. For example, the pattern /[^a]/ matches any single
character (including special characters) except a. This is only true when the caret
is the first symbol after the open square brace. If it occurs anywhere else, it usually
stands for a caret; Fig. 2.4 shows some examples.
How can we talk about optional elements, like an optional s in woodchuck and
woodchucks? We can’t use the square brackets, because while they allow us to say
“s or S”, they don’t allow us to say “s or nothing”. For this we use the question mark
/?/, which means “the preceding character or nothing”, as shown in Fig. 2.5.
We can think of the question mark as meaning “zero or one instances of the
previous character”. That is, it’s a way of specifying how many of something that
we want, something that is very important in regular expressions. For example,
consider the language of certain sheep, which consists of strings that look like the
following:
baa!
baaa!
baaaa!
...
This language consists of strings with a b, followed by at least two a’s, followed
by an exclamation point. The set of operators that allows us to say things like “some
number of as” are based on the asterisk or *, commonly called the Kleene * (gen-
erally pronounced “cleany star”). The Kleene star means “zero or more occurrences
of the immediately previous character or regular expression”. So /a*/ means “any
string of zero or more as”. This will match a or aaaaaa, but it will also match the
empty string at the start of Off Minor since the string Off Minor starts with zero a’s.
So the regular expression for matching one or more a is /aa*/, meaning one a fol-
lowed by zero or more as. More complex patterns can also be repeated. So /[ab]*/
means “zero or more a’s or b’s” (not “zero or more right square braces”). This will
match strings like aaaa or ababab or bbbb, as well as the empty string.
For specifying multiple digits (useful for finding prices) we can extend /[0-9]/,
the regular expression for a single digit. An integer (a string of digits) is thus
/[0-9][0-9]*/. (Why isn’t it just /[0-9]*/?)
Sometimes it’s annoying to have to write the regular expression for digits twice,
so there is a shorter way to specify “at least one” of some character. This is the
Kleene +, which means “one or more occurrences of the immediately preceding
character or regular expression”. Thus, the expression /[0-9]+/ is the normal way
to specify “a sequence of digits”. There are thus two ways to specify the sheep
language: /baaa*!/ or /baa+!/.
One very important special character is the period (/./), a wildcard expression
that matches any single character (except a carriage return), as shown in Fig. 2.6.
The wildcard is often used together with the Kleene star to mean “any string of
characters”. For example, suppose we want to find any line in which a particular
word, for example, aardvark, appears twice. We can specify this with the regular
expression /aardvark.*aardvark/.
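As a minimal illustration of these operators, here is a short sketch using Python's re module (not an excerpt from the chapter), exercising character classes, the Kleene operators, and the wildcard on the examples above:

import re

# Character classes: /[wW]oodchuck/ matches woodchuck or Woodchuck
print(re.findall(r'[wW]oodchuck', 'Woodchucks eat; a woodchuck naps'))

# The sheep language /baa+!/: one b, at least two a's, then !
sheep = re.compile(r'baa+!')
print(bool(sheep.fullmatch('baaaa!')))   # True
print(bool(sheep.fullmatch('ba!')))      # False: needs at least two a's

# Wildcard plus Kleene star: a line mentioning aardvark twice
line = 'the aardvark saw another aardvark'
print(bool(re.search(r'aardvark.*aardvark', line)))   # True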
Anchors are special characters that anchor regular expressions to particular places
in a string. The most common anchors are the caret ^ and the dollar sign $. The caret
^ matches the start of a line. The pattern /^The/ matches the word The only at the
start of a line. Thus, the caret ^ has three uses: to match the start of a line, to in-
dicate a negation inside of square brackets, and just to mean a caret. (What are the
contexts that allow grep or Python to know which function a given caret is supposed
to have?) The dollar sign $ matches the end of a line. So the pattern / $/ (a space
followed by a dollar sign) is a useful pattern for matching a space at the end of a
line, and /^The dog\.$/ matches a line that contains only the phrase The dog. (We
have to use the backslash here since we want the . to mean “period” and not the
wildcard.)
Regex Match
^ start of line
$ end of line
\b word boundary
\B non-word boundary
Figure 2.7 Anchors in regular expressions.
There are also two other anchors: \b matches a word boundary, and \B matches
a non word-boundary. Thus, /\bthe\b/ matches the word the but not the word
other. A “word” for the purposes of a regular expression is defined based on the
definition of words in programming languages as a sequence of digits, underscores,
or letters. Thus /\b99\b/ will match the string 99 in There are 99 bottles of beer on
the wall (because 99 follows a space) but not 99 in There are 299 bottles of beer on
the wall (since 99 follows a number). But it will match 99 in $99 (since 99 follows
a dollar sign ($), which is not a digit, underscore, or letter).
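A quick check of these word-boundary claims, as a small sketch in Python's re dialect (where \b behaves as described):

import re

for text in ['There are 99 bottles of beer on the wall',
             'There are 299 bottles of beer on the wall',
             'That will be $99']:
    # \b requires that 99 not be flanked by digits, letters, or underscores
    print(bool(re.search(r'\b99\b', text)), '|', text)
# True  | There are 99 bottles of beer on the wall
# False | There are 299 bottles of beer on the wall
# True  | That will be $99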
any number of spaces! The star here applies only to the space that precedes it,
not to the whole sequence. With the parentheses, we could write the expression
/(Column [0-9]+ *)*/ to match the word Column, followed by a number and
optional spaces, the whole pattern repeated zero or more times.
This idea that one operator may take precedence over another, requiring us to
sometimes use parentheses to specify what we mean, is formalized by the operator
precedence hierarchy for regular expressions. The following table gives the order
of RE operator precedence, from highest precedence to lowest precedence.
Parentheses              ()
Counters                 * + ? {}
Sequences and anchors    the ^my end$
Disjunction              |
/the/ (2.2)
One problem is that this pattern will miss the word when it begins a sentence and
hence is capitalized (i.e., The). This might lead us to the following pattern:
/[tT]he/ (2.3)
But we will still incorrectly return texts with the embedded in other words (e.g.,
other or theology). So we need to specify that we want instances with a word bound-
ary on both sides:
/\b[tT]he\b/ (2.4)
Suppose we wanted to do this without the use of /\b/. We might want this since
/\b/ won’t treat underscores and numbers as word boundaries; but we might want
to find the in some context where it might also have underscores or numbers nearby
(the_ or the25). We need to specify that we want instances in which there are no
alphabetic letters on either side of the the:
/[^a-zA-Z][tT]he[^a-zA-Z]/ (2.5)
But there is still one more problem with this pattern: it won’t find the word the when
it begins a line. This is because the regular expression [^a-zA-Z], which we used
to avoid embedded instances of the, implies that there must be some single (although
non-alphabetic) character before the the. We can avoid this by specifying that before
the the we require either the beginning-of-line or a non-alphabetic character, and the
same at the end of the line:
/(^|[^a-zA-Z])[tT]he([^a-zA-Z]|$)/ (2.6)
The process we just went through was based on fixing two kinds of errors: false pos-
itives, strings that we incorrectly matched like other or there, and false negatives,
strings that we incorrectly missed, like The. Addressing these two kinds of errors
comes up again and again in language processing. Reducing the overall error rate
for an application thus involves two antagonistic efforts:
• Increasing precision (minimizing false positives)
• Increasing recall (minimizing false negatives)
We’ll come back to precision and recall with more precise definitions in Chapter 4.
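The effect of each refinement can be seen by counting matches on a small test string; the following Python sketch is illustrative, and the counts hold only for this particular string:

import re

test = 'The other day the theology student met the dog.'

print(len(re.findall(r'the', test)))         # 4: false positives (other, theology), misses The
print(len(re.findall(r'[tT]he', test)))      # 5: catches The, but other and theology remain
print(len(re.findall(r'\b[tT]he\b', test)))  # 3: exactly the three real the/The instances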
Regex Match
* zero or more occurrences of the previous char or expression
+ one or more occurrences of the previous char or expression
? zero or one occurrence of the previous char or expression
{n} exactly n occurrences of the previous char or expression
{n,m} from n to m occurrences of the previous char or expression
{n,} at least n occurrences of the previous char or expression
{,m} up to m occurrences of the previous char or expression
Figure 2.9 Regular expression operators for counting.
Finally, certain special characters are referred to by special notation based on the
backslash (\) (see Fig. 2.10). The most common of these are the newline character
\n and the tab character \t. To refer to characters that are special themselves (like
., *, [, and \), precede them with a backslash, (i.e., /\./, /\*/, /\[/, and /\\/).
/$[0-9]+/ (2.7)
Note that the $ character has a different function here than the end-of-line function
we discussed earlier. Most regular expression parsers are smart enough to realize
that $ here doesn’t mean end-of-line. (As a thought experiment, think about how
regex parsers might figure out the function of $ from the context.)
Now we just need to deal with fractions of dollars. We’ll add a decimal point
and two digits afterwards:
/$[0-9]+\.[0-9][0-9]/ (2.8)
This pattern only allows $199.99 but not $199. We need to make the cents optional
and to make sure we’re at a word boundary:
/(^|\W)$[0-9]+(\.[0-9][0-9])?\b/ (2.9)
One last catch! This pattern allows prices like $199999.99 which would be far too
expensive! We need to limit the dollars:
/(^|\W)$[0-9]{0,3}(\.[0-9][0-9])?\b/ (2.10)
Further fixes (like avoiding matching a dollar sign with no price after it) are left as
an exercise for the reader.
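To try the price pattern of Eq. 2.10 in Python, the following sketch makes two adjustments of our own: the literal dollar sign is escaped as \$, since Python's re module would otherwise treat $ as the end-of-line anchor, and we use {1,3} rather than {0,3} so that a bare dollar sign is rejected (one of the further fixes mentioned above):

import re

price = re.compile(r'(^|\W)\$[0-9]{1,3}(\.[0-9][0-9])?\b')

for s in ['$199.99', '$199', '$199999.99', 'only $19.99 today', 'send $ now']:
    print(bool(price.search(s)), '|', s)
# True  | $199.99
# True  | $199
# False | $199999.99  (too many digits before the decimal point)
# True  | only $19.99 today
# False | send $ now  (no digits after the dollar sign)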
How about disk space? We’ll need to allow for optional fractions again (5.5 GB);
note the use of ? for making the final s optional, and the use of / */ to mean “zero
or more spaces” since there might always be extra spaces lying around:
Modifying this regular expression so that it only matches more than 500 GB is left
as an exercise for the reader.
Since multiple substitutions can apply to a given input, substitutions are assigned
a rank and applied in order. Creating patterns is the topic of Exercise 2.3, and we
return to the details of the ELIZA architecture in Chapter 15.
/^(?!Volcano)[A-Za-z]+/ (2.17)
2.2 Words
Before we talk about processing words, we need to decide what counts as a word.
Let’s start by looking at one particular corpus (plural corpora), a computer-readable
collection of text or speech. For example the Brown corpus is a million-word col-
lection of samples from 500 written English texts from different genres (newspa-
per, fiction, non-fiction, academic, etc.), assembled at Brown University in 1963–64
(Kučera and Francis, 1967). How many words are in the following Brown sentence?
He stepped out into the hall, was delighted to encounter
a water brother.
This sentence has 13 words if we don’t count punctuation marks as words, 15
if we count punctuation. Whether we treat period (“.”), comma (“,”), and so on as
words depends on the task. Punctuation is critical for finding boundaries of things
(commas, periods, colons) and for identifying some aspects of meaning (question
marks, exclamation marks, quotation marks). For some tasks, like part-of-speech
tagging or parsing or speech synthesis, we sometimes treat punctuation marks as if
they were separate words.
1 In earlier tradition, and occasionally still, you might see word instances referred to as word tokens, but
we now try to reserve the word token instead to mean the output of subword tokenization algorithms.
How many words are there in English? When we speak about the number of
words in the language, we are generally referring to word types. Fig. 2.11 shows
the rough numbers of types and instances computed from some English corpora.
The larger the corpora we look at, the more word types we find, and in fact this
relationship between the number of types |V | and number of instances N is called
Herdan’s Law (Herdan, 1960) or Heaps’ Law (Heaps, 1978) after its discoverers
(in linguistics and information retrieval respectively). It is shown in Eq. 2.18, where
k and β are positive constants, and 0 < β < 1.
|V| = kN^β        (2.18)
The value of β depends on the corpus size and the genre, but at least for the large
corpora in Fig. 2.11, β ranges from .67 to .75. Roughly then we can say that the
vocabulary size for a text goes up significantly faster than the square root of its
length in words.
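To see what Eq. 2.18 predicts numerically, here is a tiny sketch with illustrative constants of our own choosing (not values reported for Fig. 2.11):

# Heaps'/Herdan's Law: |V| = k * N**beta; k and beta below are made up but plausible
k, beta = 2.4, 0.70

for n in (100_000, 1_000_000, 10_000_000):
    print(f"N = {n:>10,}   predicted |V| ≈ {k * n ** beta:,.0f}")
# Each tenfold increase in N multiplies |V| by 10**0.7 ≈ 5,
# noticeably faster than the sqrt(10) ≈ 3.2 growth of the square root.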
It’s sometimes useful to make a further distinction. Consider inflected forms
like cats versus cat. We say these two words are different wordforms but have the
same lemma. A lemma is a set of lexical forms having the same stem, the same
major part-of-speech, and the same word sense. The wordform is the full inflected
or derived form of the word. The two wordforms cat and cats thus have the same
lemma, which we can represent as cat.
For morphologically complex languages like Arabic, we often need to deal with
lemmatization. For most tasks in English, however, wordforms are sufficient, and
when we talk about words in this book we almost always mean wordforms (al-
though we will discuss basic algorithms for lemmatization and the related task of
stemming below in Section 2.6). One of the situations even in English where we
talk about lemmas is when we measure the number of words in a dictionary. Dictio-
nary entries or boldface forms are a very rough approximation to (an upper bound
on) the number of lemmas (since some lemmas have multiple boldface forms). The
1989 edition of the Oxford English Dictionary had 615,000 entries.
Finally, we should note that in practice, for many NLP applications (for example
for neural language modeling) we don’t actually use words as our internal unit of
representation at all! We instead tokenize the input strings into tokens, which can
be words but can also be only parts of words. We’ll return to this tokenization
question when we introduce the BPE algorithm in Section 2.5.2.
2.3 Corpora
Words don’t appear out of nowhere. Any particular piece of text that we study
is produced by one or more specific speakers or writers, in a specific dialect of a
specific language, at a specific time, in a specific place, for a specific function.
Perhaps the most important dimension of variation is the language. NLP algo-
rithms are most useful when they apply across many languages. The world has 7097
languages at the time of this writing, according to the online Ethnologue catalog
(Simons and Fennig, 2018). It is important to test algorithms on more than one lan-
guage, and particularly on languages with different properties; by contrast there is
an unfortunate current tendency for NLP algorithms to be developed or tested just
on English (Bender, 2019). Even when algorithms are developed beyond English,
they tend to be developed for the official languages of large industrialized nations
(Chinese, Spanish, Japanese, German etc.), but we don’t want to limit tools to just
these few languages. Furthermore, most languages also have multiple varieties, of-
ten spoken in different regions or by different social groups. Thus, for example,
if we’re processing text that uses features of African American English (AAE) or
African American Vernacular English (AAVE)—the variations of English used by
millions of people in African American communities (King 2020)—we must use
NLP tools that function with features of those varieties. Twitter posts might use fea-
tures often used by speakers of African American English, such as constructions like
iont (I don’t in Mainstream American English (MAE)), or talmbout corresponding
to MAE talking about, both examples that influence word segmentation (Blodgett
et al. 2016, Jones 2015).
It’s also quite common for speakers or writers to use multiple languages in a
single communicative act, a phenomenon called code switching. Code switching
is enormously common across the world; here are examples showing Spanish and
(transliterated) Hindi code switching with English (Solorio et al. 2014, Jurgens et al.
2017):
(2.19) Por primera vez veo a @username actually being hateful! it was beautiful:)
[For the first time I get to see @username actually being hateful! it was
beautiful:) ]
(2.20) dost tha or rahega ... dont wory ... but dherya rakhe
[“he was and will remain a friend ... don’t worry ... but have faith”]
Another dimension of variation is the genre. The text that our algorithms must
process might come from newswire, fiction or non-fiction books, scientific articles,
Wikipedia, or religious texts. It might come from spoken genres like telephone
conversations, business meetings, police body-worn cameras, medical interviews,
or transcripts of television shows or movies. It might come from work situations
like doctors’ notes, legal text, or parliamentary or congressional proceedings.
Text also reflects the demographic characteristics of the writer (or speaker): their
age, gender, race, socioeconomic class can all influence the linguistic properties of
the text we are processing.
And finally, time matters too. Language changes over time, and for some lan-
guages we have good corpora of texts from different historical periods.
Because language is so situated, when developing computational models for lan-
guage processing from a corpus, it’s important to consider who produced the lan-
guage, in what context, for what purpose. How can a user of a dataset know all these
details? The best way is for the corpus creator to build a datasheet (Gebru et al.,
2020) or data statement (Bender et al., 2021) for each corpus. A datasheet specifies
properties of a dataset like:
Motivation: Why was the corpus collected, by whom, and who funded it?
Situation: When and in what situation was the text written/spoken? For example,
was there a task? Was the language originally spoken conversation, edited
text, social media communication, monologue vs. dialogue?
Language variety: What language (including dialect/region) was the corpus in?
Speaker demographics: What was, e.g., the age or gender of the text’s authors?
Collection process: How big is the data? If it is a subsample how was it sampled?
Was the data collected with consent? How was the data pre-processed, and
what metadata is available?
Annotation process: What are the annotations, what are the demographics of the
annotators, how were they trained, how was the data annotated?
2.4 Simple Unix Tools for Word Tokenization
1 Abates
5 Abbess
6 Abbey
3 Abbot
...
Alternatively, we can collapse all the upper case to lower case:
tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c
whose output is
14725 a
97 aaron
1 abaissiez
10 abandon
2 abandoned
2 abase
1 abash
14 abate
3 abated
3 abatement
...
Now we can sort again to find the frequent words. The -n option to sort means
to sort numerically rather than alphabetically, and the -r option means to sort in
reverse order (highest-to-lowest):
tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c | sort -n -r
The results show that the most frequent words in Shakespeare, as in any other
corpus, are the short function words like articles, pronouns, prepositions:
27378 the
26084 and
22538 i
19771 to
17481 of
14725 a
13826 you
12489 my
11318 that
11112 in
...
Unix tools of this sort can be very handy in building quick word count statistics
for any corpus in English. While in some versions of Unix these command-line tools
also correctly handle Unicode characters and so can be used for many languages,
in general for handling most languages outside English we use more sophisticated
tokenization algorithms.
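The same counts can be computed in a few lines of Python; this is a rough equivalent of the Unix pipeline above, assuming the Shakespeare text is in sh.txt, and like the tr command the simple [a-z]+ pattern ignores anything outside ASCII letters:

import re
from collections import Counter

# Rough equivalent of: tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c | sort -n -r
with open('sh.txt', encoding='utf-8') as f:
    text = f.read().lower()

counts = Counter(re.findall(r'[a-z]+', text))
for word, count in counts.most_common(10):
    print(f'{count:6d} {word}')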
Word tokenization is the task of segmenting running text into words. There are
roughly two classes of tokenization algorithms. In top-down tokenization, we
define a standard and implement rules to carry out that kind of tokenization.
But more commonly instead of using words as the input to NLP algorithms we
subword tokens break up words into subword tokens, which can be words or parts of words or
even individual letters. These are derived via bottom-up tokenization, in which we
use simple statistics of letter sequences to come up with the vocabulary of subword
tokens, and break up the input into those subwords.
We can tokenize English with the nltk.regexp_tokenize function of the Python-based Nat-
ural Language Toolkit (NLTK) (Bird et al. 2009; https://www.nltk.org).
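A call might look like the following; the pattern here is a much-simplified stand-in of our own for the kind of tokenization pattern discussed in the book, written in the verbose (?x) regex style that NLTK examples typically use:

from nltk.tokenize import regexp_tokenize

pattern = r'''(?x)          # verbose mode: whitespace and comments are ignored
      \$?\d+(?:\.\d+)?%?    # currency and percentages, e.g. $12.40, 82%
    | \w+(?:[-']\w+)*       # words with optional internal hyphens or apostrophes
    | \S                    # fallback: any other non-space character by itself
'''

print(regexp_tokenize("That poster-print costs $12.40... isn't it nice?", pattern))
# ['That', 'poster-print', 'costs', '$12.40', '.', '.', '.', "isn't", 'it', 'nice', '?']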
Carefully designed deterministic algorithms can deal with the ambiguities that
arise, such as the fact that the apostrophe needs to be tokenized differently when used
as a genitive marker (as in the book’s cover), a quotative as in ‘The other class’, she
said, or in clitics like they’re.
Word tokenization is more complex in languages like written Chinese, Japanese,
and Thai, which do not use spaces to mark potential word-boundaries. In Chinese,
for example, words are composed of characters (called hanzi in Chinese). Each
character generally represents a single unit of meaning (called a morpheme) and is
pronounceable as a single syllable. Words are about 2.4 characters long on average.
But deciding what counts as a word in Chinese is complex. For example, consider
the following sentence:
(2.21) 姚明进入总决赛 yáo míng jìn rù zǒng jué sài
“Yao Ming reaches the finals”
As Chen et al. (2017b) point out, this could be treated as 3 words (‘Chinese Tree-
bank’ segmentation):
(2.22) 姚明 进入 总决赛
YaoMing reaches finals
or as 5 words (‘Peking University’ segmentation):
(2.23) 姚 明 进入 总 决赛
Yao Ming reaches overall finals
Finally, it is possible in Chinese simply to ignore words altogether and use characters
as the basic elements, treating the sentence as a series of 7 characters:
(2.24) 姚 明 进 入 总 决 赛
Yao Ming enter enter overall decision game
In fact, for most Chinese NLP tasks it turns out to work better to take characters
rather than words as input, since characters are at a reasonable semantic level for
most applications, and since most word standards, by contrast, result in a huge vo-
cabulary with large numbers of very rare words (Li et al., 2019b).
However, for Japanese and Thai the character is too small a unit, and so algo-
rithms for word segmentation are required. These can also be useful for Chinese
in the rare situations where word rather than character boundaries are required. For
these situations we can use the subword tokenization algorithms introduced in the
next section.
There is a third option to tokenizing text, one that is most commonly used by large
language models. Instead of defining tokens as words (whether delimited by spaces
or more complex algorithms), or as characters (as in Chinese), we can use our data to
automatically tell us what the tokens should be. This is especially useful in dealing
with unknown words, an important problem in language processing. As we will
see in the next chapter, NLP algorithms often learn some facts about language from
one corpus (a training corpus) and then use these facts to make decisions about a
separate test corpus and its language. Thus if our training corpus contains, say the
words low, new, newer, but not lower, then if the word lower appears in our test
corpus, our system will not know what to do with it.
To deal with this unknown word problem, modern tokenizers automatically in-
duce sets of tokens that include tokens smaller than words, called subwords. Sub-
words can be arbitrary substrings, or they can be meaning-bearing units like the
morphemes -est or -er. (A morpheme is the smallest meaning-bearing unit of a lan-
guage; for example the word unwashable has the morphemes un-, wash, and -able.)
In modern tokenization schemes, most tokens are words, but some tokens are fre-
quently occurring morphemes or other subwords like -er. Every unseen word like
lower can thus be represented by some sequence of known subword units, such as
low and er, or even as a sequence of individual letters if necessary.
Most tokenization schemes have two parts: a token learner, and a token seg-
menter. The token learner takes a raw training corpus (sometimes roughly pre-
separated into words, for example by whitespace) and induces a vocabulary, a set
of tokens. The token segmenter takes a raw test sentence and segments it into the
tokens in the vocabulary. Two algorithms are widely used: byte-pair encoding
(Sennrich et al., 2016), and unigram language modeling (Kudo, 2018). There is
also a SentencePiece library that includes implementations of both of these (Kudo
and Richardson, 2018a), and people often use the name SentencePiece to simply
mean unigram language modeling tokenization.
In this section we introduce the simpler of the two, the byte-pair encoding or
BPE algorithm (Sennrich et al., 2016); see Fig. 2.13. The BPE token learner begins
with a vocabulary that is just the set of all individual characters. It then examines the
training corpus, chooses the two symbols that are most frequently adjacent (say ‘A’,
‘B’), adds a new merged symbol ‘AB’ to the vocabulary, and replaces every adjacent
’A’ ’B’ in the corpus with the new ‘AB’. It continues to count and merge, creating
new longer and longer character strings, until k merges have been done creating
k novel tokens; k is thus a parameter of the algorithm. The resulting vocabulary
consists of the original set of characters plus k new symbols.
The algorithm is usually run inside words (not merging across word boundaries),
so the input corpus is first white-space-separated to give a set of strings, each corre-
sponding to the characters of a word, plus a special end-of-word symbol _, and its
counts. Let’s see its operation on the following tiny input corpus of 18 word tokens
with counts for each word (the word low appears 5 times, the word newer 6 times,
and so on), which would have a starting vocabulary of 11 letters:
corpus                        vocabulary
5   l o w _                   _, d, e, i, l, n, o, r, s, t, w
2   l o w e s t _
6   n e w e r _
3   w i d e r _
2   n e w _
The BPE algorithm first counts all pairs of adjacent symbols: the most frequent
is the pair e r because it occurs in newer (frequency of 6) and wider (frequency of
3) for a total of 9 occurrences.2 We then merge these symbols, treating er as one
symbol, and count again:
corpus                        vocabulary
5   l o w _                   _, d, e, i, l, n, o, r, s, t, w, er
2   l o w e s t _
6   n e w er _
3   w i d er _
2   n e w _
Now the most frequent pair is er _, which we merge; our system has learned
that there should be a token for word-final er, represented as er_:
corpus                        vocabulary
5   l o w _                   _, d, e, i, l, n, o, r, s, t, w, er, er_
2   l o w e s t _
6   n e w er_
3   w i d er_
2   n e w _
Next n e (total count of 8) get merged to ne:
corpus                        vocabulary
5   l o w _                   _, d, e, i, l, n, o, r, s, t, w, er, er_, ne
2   l o w e s t _
6   ne w er_
3   w i d er_
2   ne w _
If we continue, the next merges are:
merge         current vocabulary
(ne, w)       _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new
(l, o)        _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo
(lo, w)       _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo, low
(new, er_)    _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo, low, newer_
(low, _)      _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo, low, newer_, low_
Once we’ve learned our vocabulary, the token segmenter is used to tokenize a
test sentence. The token segmenter just runs on the merges we have learned from
the training data on the test data. It runs them greedily, in the order we learned them.
(Thus the frequencies in the test data don’t play a role, just the frequencies in the
training data). So first we segment each test sentence word into characters. Then
we apply the first rule: replace every instance of e r in the test corpus with er, and
then the second rule: replace every instance of er _ in the test corpus with er_,
and so on. By the end, if the test corpus contained the character sequence n e w e
2 Note that there can be ties; we could have instead chosen to merge r _ first, since that pair also has a
frequency of 9.
Figure 2.13 The token learner part of the BPE algorithm for taking a corpus broken up
into individual characters or bytes, and learning a vocabulary by iteratively merging tokens.
Figure adapted from Bostrom and Durrett (2020).
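The learner can be sketched compactly in Python; the code below is our own rendering of the procedure just described (with '_' as the end-of-word symbol), not a transcription of Fig. 2.13:

from collections import Counter

def learn_bpe(word_counts, k):
    """Learn k BPE merges from a dict mapping symbol tuples (ending in '_') to counts."""
    corpus = dict(word_counts)
    merges = []
    for _ in range(k):
        # Count every pair of adjacent symbols across the whole corpus
        pairs = Counter()
        for symbols, count in corpus.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)        # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        new_corpus = {}
        for symbols, count in corpus.items():   # replace the pair everywhere
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_corpus[tuple(out)] = count
        corpus = new_corpus
    return merges

# The tiny corpus from the text: low x5, lowest x2, newer x6, wider x3, new x2
toy = {('l','o','w','_'): 5, ('l','o','w','e','s','t','_'): 2,
       ('n','e','w','e','r','_'): 6, ('w','i','d','e','r','_'): 3, ('n','e','w','_'): 2}
print(learn_bpe(toy, 8))
# [('e','r'), ('er','_'), ('n','e'), ('ne','w'), ('l','o'), ('lo','w'), ('new','er_'), ('low','_')]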
2.6.1 Lemmatization
For other natural language processing situations we also want two morphologically
different forms of a word to behave similarly. For example in web search, someone
may type the string woodchucks but a useful system might want to also return pages
that mention woodchuck with no s. This is especially common in morphologically
complex languages like Polish, where for example the word Warsaw has different
endings when it is the subject (Warszawa), or after a preposition like “in Warsaw” (w
Warszawie), or “to Warsaw” (do Warszawy), and so on. Lemmatization is the task
of determining that two words have the same root, despite their surface differences.
The words am, are, and is have the shared lemma be; the words dinner and dinners
both have the lemma dinner. Lemmatizing each of these forms to the same lemma
will let us find all mentions of words in Polish like Warsaw. The lemmatized form
of a sentence like He is reading detective stories would thus be He be read detective
story.
How is lemmatization done? The most sophisticated methods for lemmatization
involve complete morphological parsing of the word. Morphology is the study of
the way words are built up from smaller meaning-bearing units called morphemes.
Two broad classes of morphemes can be distinguished: stems—the central mor-
pheme of the word, supplying the main meaning—and affixes—adding “additional”
meanings of various kinds. So, for example, the word fox consists of one morpheme
(the morpheme fox) and the word cats consists of two: the morpheme cat and the
morpheme -s. A morphological parser takes a word like cats and parses it into the
two morphemes cat and s, or parses a Spanish word like amaren (‘if in the future
they would love’) into the morpheme amar ‘to love’, and the morphological features
3PL (third person plural) and future subjunctive.
Simple stemmers can be useful in cases where we need to collapse across dif-
ferent variants of the same lemma. Nonetheless, they are less commonly used in
modern systems since they commit errors of both over-generalizing (lemmatizing
policy to police) and under-generalizing (not lemmatizing European to Europe)
(Krovetz, 1993).
top string into the bottom string: d for deletion, s for substitution, i for insertion.
I N T E * N T I O N
| | | | | | | | | |
* E X E C U T I O N
d s s   i s
Figure 2.14 Representing the minimum edit distance between two strings as an alignment.
The final row gives the operation list for converting the top string into the bottom string: d for
deletion, s for substitution, i for insertion.
We can also assign a particular cost or weight to each of these operations. The
Levenshtein distance between two sequences is the simplest weighting factor in
which each of the three operations has a cost of 1 (Levenshtein, 1966)—we assume
that the substitution of a letter for itself, for example, t for t, has zero cost. The Lev-
enshtein distance between intention and execution is 5. Levenshtein also proposed
an alternative version of his metric in which each insertion or deletion has a cost of
1 and substitutions are not allowed. (This is equivalent to allowing substitution, but
giving each substitution a cost of 2 since any substitution can be represented by one
insertion and one deletion). Using this version, the Levenshtein distance between
intention and execution is 8.
Figure 2.15 Finding the edit distance viewed as a search problem. From intention, single edits lead to strings like ntention (deleting i), intecntion (inserting c), or inxention (substituting x for t), and so on.
The space of all possible edits is enormous, so we can’t search naively. However,
lots of distinct edit paths will end up in the same state (string), so rather than recom-
puting all those paths, we could just remember the shortest path to a state each time
we saw it. We can do this by using dynamic programming. Dynamic programming
is the name for a class of algorithms, first introduced by Bellman (1957), that apply
a table-driven method to solve problems by combining solutions to subproblems.
Some of the most commonly used algorithms in natural language processing make
use of dynamic programming, such as the Viterbi algorithm (Chapter 17) and the
CKY algorithm for parsing (Chapter 18).
The intuition of a dynamic programming problem is that a large problem can
be solved by properly combining the solutions to various subproblems. Consider
the shortest path of transformed words that represents the minimum edit distance
between the strings intention and execution shown in Fig. 2.16.
Imagine some string (perhaps it is exention) that is in this optimal path (whatever
it is). The intuition of dynamic programming is that if exention is in the optimal
i n t e n t i o n
delete i
n t e n t i o n
substitute n by e
e t e n t i o n
substitute t by x
e x e n t i o n
insert u
e x e n u t i o n
substitute n by c
e x e c u t i o n
Figure 2.16 Path from intention to execution.
operation list, then the optimal sequence must also include the optimal path from
intention to exention. Why? If there were a shorter path from intention to exention,
then we could use it instead, resulting in a shorter overall path, and the optimal
sequence wouldn’t be optimal, thus leading to a contradiction.
The minimum edit distance algorithm was named by Wagner and Fischer
(1974) but independently discovered by many people (see the Historical Notes sec-
tion of Chapter 17).
Let’s first define the minimum edit distance between two strings. Given two
strings, the source string X of length n, and target string Y of length m, we’ll define
D[i, j] as the edit distance between X[1..i] and Y [1.. j], i.e., the first i characters of X
and the first j characters of Y . The edit distance between X and Y is thus D[n, m].
We’ll use dynamic programming to compute D[n, m] bottom up, combining so-
lutions to subproblems. In the base case, with a source substring of length i but an
empty target string, going from i characters to 0 requires i deletes. With a target
substring of length j but an empty source going from 0 characters to j characters
requires j inserts. Having computed D[i, j] for small i, j we then compute larger
D[i, j] based on previously computed smaller values. The value of D[i, j] is com-
puted by taking the minimum of the three possible paths through the matrix which
arrive there:
D[i, j] = min( D[i−1, j] + del-cost(source[i]),
               D[i, j−1] + ins-cost(target[j]),
               D[i−1, j−1] + sub-cost(source[i], target[j]) )        (2.25)
We mentioned above two versions of Levenshtein distance, one in which substitu-
tions cost 1 and one in which substitutions cost 2 (i.e., are equivalent to an insertion
plus a deletion). Let’s here use that second version of Levenshtein distance in which
the insertions and deletions each have a cost of 1 (ins-cost(·) = del-cost(·) = 1), and
substitutions have a cost of 2 (except substitution of identical letters has zero cost).
Under this version of Levenshtein, the computation for D[i, j] becomes:
D[i, j] = min( D[i−1, j] + 1,
               D[i, j−1] + 1,
               D[i−1, j−1] + (2 if source[i] ≠ target[j]; 0 if source[i] = target[j]) )        (2.26)
The algorithm is summarized in Fig. 2.17; Fig. 2.18 shows the results of applying
the algorithm to the distance between intention and execution with the version of
Levenshtein in Eq. 2.26.
Alignment Knowing the minimum edit distance is useful for algorithms like find-
ing potential spelling error corrections. But the edit distance algorithm is important
function MIN-EDIT-DISTANCE(source, target) returns min-distance
n ← LENGTH(source)
m ← LENGTH(target)
Create a distance matrix D[n+1,m+1]
# Initialization: the zeroth row and column is the distance from the empty string
D[0,0] = 0
for each row i from 1 to n do
D[i,0] ← D[i-1,0] + del-cost(source[i])
for each column j from 1 to m do
D[0,j] ← D[0, j-1] + ins-cost(target[j])
# Recurrence relation:
for each row i from 1 to n do
for each column j from 1 to m do
D[i, j] ← MIN( D[i−1, j] + del-cost(source[i]),
D[i−1, j−1] + sub-cost(source[i], target[j]),
D[i, j−1] + ins-cost(target[j]))
# Termination
return D[n,m]
Figure 2.17 The minimum edit distance algorithm, an example of the class of dynamic
programming algorithms. The various costs can either be fixed (e.g., ∀x, ins-cost(x) = 1)
or can be specific to the letter (to model the fact that some letters are more likely to be in-
serted than others). We assume that there is no cost for substituting a letter for itself (i.e.,
sub-cost(x, x) = 0).
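A direct Python rendering of Fig. 2.17 (our sketch, with the Eq. 2.26 costs as defaults):

def min_edit_distance(source, target, del_cost=1, ins_cost=1, sub_cost=2):
    """Minimum edit distance with Levenshtein costs as in Eq. 2.26
    (substituting a letter for itself costs 0)."""
    n, m = len(source), len(target)
    D = [[0] * (m + 1) for _ in range(n + 1)]

    # Initialization: the zeroth row and column are distances from the empty string
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + ins_cost

    # Recurrence relation
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j] + del_cost,
                          D[i][j - 1] + ins_cost,
                          D[i - 1][j - 1] + sub)
    return D[n][m]

print(min_edit_distance("intention", "execution"))   # 8, agreeing with Fig. 2.18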
Src\Tar # e x e c u t i o n
# 0 1 2 3 4 5 6 7 8 9
i 1 2 3 4 5 6 7 6 7 8
n 2 3 4 5 6 7 8 7 8 7
t 3 4 5 6 7 8 7 8 9 8
e 4 3 4 5 6 7 8 9 10 9
n 5 4 5 6 7 8 9 10 11 10
t 6 5 6 7 8 9 8 9 10 11
i 7 6 7 8 9 10 9 8 9 10
o 8 7 8 9 10 11 10 9 8 9
n 9 8 9 10 11 12 11 10 9 8
Figure 2.18 Computation of minimum edit distance between intention and execution with
the algorithm of Fig. 2.17, using Levenshtein distance with cost of 1 for insertions or dele-
tions, 2 for substitutions.
in another way; with a small change, it can also provide the minimum cost align-
ment between two strings. Aligning two strings is useful throughout speech and
language processing. In speech recognition, minimum edit distance alignment is
used to compute the word error rate (Chapter 16). Alignment plays a role in ma-
chine translation, in which sentences in a parallel corpus (a corpus with a text in two
languages) need to be matched to each other.
To extend the edit distance algorithm to produce an alignment, we can start by
visualizing an alignment as a path through the edit distance matrix. Figure 2.19
shows this path with boldfaced cells. Each boldfaced cell represents an alignment
of a pair of letters in the two strings. If two boldfaced cells occur in the same row,
there will be an insertion in going from the source to the target; two boldfaced cells
in the same column indicate a deletion.
Figure 2.19 also shows the intuition of how to compute this alignment path. The
computation proceeds in two steps. In the first step, we augment the minimum edit
distance algorithm to store backpointers in each cell. The backpointer from a cell
points to the previous cell (or cells) that we came from in entering the current cell.
We’ve shown a schematic of these backpointers in Fig. 2.19. Some cells have mul-
tiple backpointers because the minimum extension could have come from multiple
backtrace previous cells. In the second step, we perform a backtrace. In a backtrace, we start
from the last cell (at the final row and column), and follow the pointers back through
the dynamic programming matrix. Each complete path between the final cell and the
initial cell is a minimum distance alignment. Exercise 2.7 asks you to modify the
minimum edit distance algorithm to store the pointers and compute the backtrace to
output an alignment.
# e x e c u t i o n
# 0 ← 1 ←2 ←3 ←4 ←5 ← 6 ←7 ←8 ← 9
i ↑1 ↖←↑ 2 ↖←↑ 3 ↖←↑ 4 ↖←↑ 5 ↖←↑ 6 ↖←↑ 7 ↖6 ←7 ←8
n ↑2 ↖←↑ 3 ↖←↑ 4 ↖←↑ 5 ↖←↑ 6 ↖←↑ 7 ↖←↑ 8 ↑7 ↖←↑ 8 ↖7
t ↑3 ↖←↑ 4 ↖←↑ 5 ↖←↑ 6 ↖←↑ 7 ↖←↑ 8 ↖7 ←↑ 8 ↖←↑ 9 ↑8
e ↑4 ↖3 ←4 ↖← 5 ←6 ←7 ←↑ 8 ↖←↑ 9 ↖←↑ 10 ↑9
n ↑5 ↑4 ↖←↑ 5 ↖←↑ 6 ↖←↑ 7 ↖←↑ 8 ↖←↑ 9 ↖←↑ 10 ↖←↑ 11 ↖↑ 10
t ↑6 ↑5 ↖←↑ 6 ↖←↑ 7 ↖←↑ 8 ↖←↑ 9 ↖8 ←9 ← 10 ←↑ 11
i ↑7 ↑6 ↖←↑ 7 ↖←↑ 8 ↖←↑ 9 ↖←↑ 10 ↑9 ↖8 ←9 ← 10
o ↑8 ↑7 ↖←↑ 8 ↖←↑ 9 ↖←↑ 10 ↖←↑ 11 ↑ 10 ↑9 ↖8 ←9
n ↑9 ↑8 ↖←↑ 9 ↖←↑ 10 ↖←↑ 11 ↖←↑ 12 ↑ 11 ↑ 10 ↑9 ↖8
Figure 2.19 When entering a value in each cell, we mark which of the three neighboring
cells we came from with up to three arrows. After the table is full we compute an alignment
(minimum edit path) by using a backtrace, starting at the 8 in the lower-right corner and
following the arrows back. The sequence of bold cells represents one possible minimum
cost alignment between the two strings, again using Levenshtein distance with cost of 1 for
insertions or deletions, 2 for substitutions. Diagram design after Gusfield (1997).
While we worked our example with simple Levenshtein distance, the algorithm
in Fig. 2.17 allows arbitrary weights on the operations. For spelling correction, for
example, substitutions are more likely to happen between letters that are next to
each other on the keyboard. The Viterbi algorithm is a probabilistic extension of
minimum edit distance. Instead of computing the “minimum edit distance” between
two strings, Viterbi computes the “maximum probability alignment” of one string
with another. We’ll discuss this more in Chapter 17.
2.9 Summary
This chapter introduced a fundamental tool in language processing, the regular ex-
pression, and showed how to perform basic text normalization tasks including
word segmentation and normalization, sentence segmentation, and stemming.
We also introduced the important minimum edit distance algorithm for comparing
strings. Here’s a summary of the main points we covered about these ideas:
Exercises
2.1 Write regular expressions for the following languages.
1. the set of all alphabetic strings;
2. the set of all lower case alphabetic strings ending in a b;
3. the set of all strings from the alphabet a, b such that each a is immedi-
ately preceded by and immediately followed by a b;
2.2 Write regular expressions for the following languages. By “word”, we mean
an alphabetic string separated from other words by whitespace, any relevant
punctuation, line breaks, and so forth.
1. the set of all strings with two consecutive repeated words (e.g., “Hum-
bert Humbert” and “the the” but not “the bug” or “the big bug”);
2. all strings that start at the beginning of the line with an integer and that
end at the end of the line with a word;
3. all strings that have both the word grotto and the word raven in them
(but not, e.g., words like grottos that merely contain the word grotto);
4. write a pattern that places the first word of an English sentence in a
register. Deal with punctuation.
2.3 Implement an ELIZA-like program, using substitutions such as those described
on page 13. You might want to choose a different domain than a Rogerian psy-
chologist, although keep in mind that you would need a domain in which your
program can legitimately engage in a lot of simple repetition.
2.4 Compute the edit distance (using insertion cost 1, deletion cost 1, substitution
cost 1) of “leda” to “deal”. Show your work (using the edit distance grid).
2.5 Figure out whether drive is closer to brief or to divers and what the edit dis-
tance is to each. You may use any version of distance that you like.
2.6 Now implement a minimum edit distance algorithm and use your hand-computed
results to check your code.
2.7 Augment the minimum edit distance algorithm to output an alignment; you
will need to store pointers and add a stage to compute the backtrace.
CHAPTER
3 N-gram Language Models
“You are uniformly charming!” cried he, with a smile of associating and now
and then I bowed and they perceived a chaise and four to wish for.
Random sentence generated from a Jane Austen trigram model
Predicting is difficult—especially about the future, as the old quip goes. But how
about predicting something that seems much easier, like the next word someone is
going to say? What word, for example, is likely to follow
The water of Walden Pond is so beautifully ...
You might conclude that a likely word is blue, or green, or clear, but probably not
refrigerator nor this. In this chapter we formalize this intuition by introducing
language models or LMs, models that assign a probability to each possible next
word. Language models can also assign a probability to an entire sentence, telling
us that the following sequence has a much higher probability of appearing in a text:
all of a sudden I notice three guys standing on the sidewalk
Why would we want to predict upcoming words, or know the probability of a sen-
tence? One reason is for generation: choosing contextually better words. For ex-
ample we can correct grammar or spelling errors like Their are two midterms,
in which There was mistyped as Their, or Everything has improve, in which
improve should have been improved. The phrase There are is more probable
than Their are, and has improved than has improve, so a language model can
help users select the more grammatical variant. Or for a speech system to recognize
that you said I will be back soonish and not I will be bassoon dish, it
helps to know that back soonish is a more probable sequence. Language models
can also help in augmentative and alternative communication (Trnka et al. 2007,
Kane et al. 2017). People can use AAC systems if they are physically unable to
speak or sign but can instead use eye gaze or other movements to select words from
a menu. Word prediction can be used to suggest likely words for the menu.
Word prediction is also central to NLP for another reason: large language mod-
els are built just by training them to predict words!! As we’ll see in chapters 7-9,
large language models learn an enormous amount about language solely from being
trained to predict upcoming words from neighboring words.
In this chapter we introduce the simplest kind of language model: the n-gram
language model. An n-gram is a sequence of n words: a 2-gram (which we’ll call
bigram) is a two-word sequence of words like The water, or water of, and a 3-
gram (a trigram) is a three-word sequence of words like The water of, or water
of Walden. But we also (in a bit of terminological ambiguity) use the word ‘n-
gram’ to mean a probabilistic model that can estimate the probability of a word given
the n-1 previous words, and thereby also to assign probabilities to entire sequences.
In later chapters we will introduce the much more powerful neural large lan-
guage models, based on the transformer architecture of Chapter 9. But because
n-grams have a remarkably simple and clear formalization, we use them to intro-
duce some major concepts of large language modeling, including training and test
sets, perplexity, sampling, and interpolation.
3.1 N-Grams
Let’s begin with the task of computing P(w|h), the probability of a word w given
some history h. Suppose the history h is “The water of Walden Pond is so
beautifully ” and we want to know the probability that the next word is blue:
P(blue|The water of Walden Pond is so beautifully) (3.1)
One way to estimate this probability is directly from relative frequency counts: take a
very large corpus, count the number of times we see The water of Walden Pond
is so beautifully, and count the number of times this is followed by blue. This
would be answering the question “Out of the times we saw the history h, how many
times was it followed by the word w”, as follows:
P(blue|The water of Walden Pond is so beautifully) =
    C(The water of Walden Pond is so beautifully blue) / C(The water of Walden Pond is so beautifully)        (3.2)
If we had a large enough corpus, we could compute these two counts and estimate
the probability from Eq. 3.2. But even the entire web isn’t big enough to give us
good estimates for counts of entire sentences. This is because language is creative;
new sentences are invented all the time, and we can’t expect to get accurate counts
for such large objects as entire sentences. For this reason, we’ll need more clever
ways to estimate the probability of a word w given a history h, or the probability of
an entire word sequence W .
Let’s start with some notation. First, throughout this chapter we’ll continue to
refer to words, although in practice we usually compute language models over to-
kens like the BPE tokens of page 21. To represent the probability of a particular
random variable Xi taking on the value “the”, or P(Xi = “the”), we will use the
simplification P(the). We’ll represent a sequence of n words either as w1 . . . wn or
w1:n . Thus the expression w1:n−1 means the string w1 , w2 , ..., wn−1 , but we’ll also
be using the equivalent notation w<n , which can be read as “all the elements of w
from w1 up to and including wn−1 ”. For the joint probability of each word in a se-
quence having a particular value P(X1 = w1 , X2 = w2 , X3 = w3 , ..., Xn = wn ) we’ll
use P(w1 , w2 , ..., wn ).
Now, how can we compute probabilities of entire sequences like P(w1 , w2 , ..., wn )?
One thing we can do is decompose this probability using the chain rule of proba-
bility:
P(X1...Xn) = P(X1) P(X2|X1) P(X3|X1:2) . . . P(Xn|X1:n−1)
           = ∏_{k=1}^{n} P(Xk|X1:k−1)        (3.3)
The chain rule shows the link between computing the joint probability of a sequence
and computing the conditional probability of a word given previous words. Equa-
tion 3.4 suggests that we could estimate the joint probability of an entire sequence of
words by multiplying together a number of conditional probabilities. But using the
chain rule doesn’t really seem to help us! We don’t know any way to compute the
exact probability of a word given a long sequence of preceding words, P(wn |w1:n−1 ).
As we said above, we can’t just estimate by counting the number of times every word
occurs following every long string in some corpus, because language is creative and
any particular context might have never occurred before!
Given the bigram assumption for the probability of an individual word, we can com-
pute the probability of a complete word sequence by substituting Eq. 3.7 into Eq. 3.4:
P(w1:n) ≈ ∏_{k=1}^{n} P(wk|wk−1)        (3.9)
P(wn|wn−1) = C(wn−1 wn) / Σ_w C(wn−1 w)        (3.10)
We can simplify this equation, since the sum of all bigram counts that start with
a given word wn−1 must be equal to the unigram count for that word wn−1 (the reader
should take a moment to be convinced of this):
P(wn|wn−1) = C(wn−1 wn) / C(wn−1)        (3.11)
Let’s work through an example using a mini-corpus of three sentences. We’ll
first need to augment each sentence with a special symbol <s> at the beginning
of the sentence, to give us the bigram context of the first word. We’ll also need a
special end-symbol </s>.1
<s> I am Sam </s>
<s> Sam I am </s>
<s> I do not like green eggs and ham </s>
Here are the calculations for some of the bigram probabilities from this corpus:
P(I|<s>) = 2/3 = 0.67      P(Sam|<s>) = 1/3 = 0.33      P(am|I) = 2/3 = 0.67
P(</s>|Sam) = 1/2 = 0.5    P(Sam|am) = 1/2 = 0.5        P(do|I) = 1/3 = 0.33
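These numbers can be reproduced with a few lines of Python (our own sketch of the Eq. 3.11 estimator on the three-sentence corpus):

from collections import Counter

sentences = [["<s>", "I", "am", "Sam", "</s>"],
             ["<s>", "Sam", "I", "am", "</s>"],
             ["<s>", "I", "do", "not", "like", "green", "eggs", "and", "ham", "</s>"]]

unigrams, bigrams = Counter(), Counter()
for sent in sentences:
    unigrams.update(sent)
    bigrams.update(zip(sent, sent[1:]))

def p(word, prev):
    """MLE bigram estimate P(word | prev) from Eq. 3.11."""
    return bigrams[(prev, word)] / unigrams[prev]

print(round(p("I", "<s>"), 2), round(p("Sam", "<s>"), 2), round(p("am", "I"), 2))    # 0.67 0.33 0.67
print(round(p("</s>", "Sam"), 2), round(p("Sam", "am"), 2), round(p("do", "I"), 2))  # 0.5 0.5 0.33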
For the general case of MLE n-gram parameter estimation:
P(wn|wn−N+1:n−1) = C(wn−N+1:n−1 wn) / C(wn−N+1:n−1)        (3.12)
Equation 3.12 (like Eq. 3.11) estimates the n-gram probability by dividing the
observed frequency of a particular sequence by the observed frequency of a prefix.
This ratio is called a relative frequency. We said above that this use of relative
frequencies as a way to estimate probabilities is an example of maximum likelihood
estimation or MLE. In MLE, the resulting parameter set maximizes the likelihood of
the training set T given the model M (i.e., P(T |M)). For example, suppose the word
Chinese occurs 400 times in a corpus of a million words. What is the probability
that a random word selected from some other text of, say, a million words will be the
word Chinese? The MLE of its probability is 400/1000000 or 0.0004. Now 0.0004 is not
the best possible estimate of the probability of Chinese occurring in all situations; it
1 We need the end-symbol to make the bigram grammar a true probability distribution. Without an end-
symbol, instead of the sentence probabilities of all sentences summing to one, the sentence probabilities
for all sentences of a given length would sum to one. This model would define an infinite set of probability
distributions, with one distribution per sentence length. See Exercise 3.5.
might turn out that in some other corpus or context Chinese is a very unlikely word.
But it is the probability that makes it most likely that Chinese will occur 400 times
in a million-word corpus. We present ways to modify the MLE estimates slightly to
get better probability estimates in Section 3.6.
Let’s move on to some examples from a real but tiny corpus, drawn from the
now-defunct Berkeley Restaurant Project, a dialogue system from the last century
that answered questions about a database of restaurants in Berkeley, California (Ju-
rafsky et al., 1994). Here are some sample user queries (text-normalized, by lower
casing and with punctuation stripped) (a sample of 9332 sentences is on the website):
can you tell me about any good cantonese restaurants close by
tell me about chez panisse
i’m looking for a good place to eat breakfast
when is caffe venezia open during the day
Figure 3.1 shows the bigram counts from part of a bigram grammar from text-
normalized Berkeley Restaurant Project sentences. Note that the majority of the
values are zero. In fact, we have chosen the sample words to cohere with each other;
a matrix selected from a random set of eight words would be even more sparse.
Figure 3.2 shows the bigram probabilities after normalization (dividing each cell
in Fig. 3.1 by the appropriate unigram for its row, taken from the following set of
unigram counts):
i want to eat chinese food lunch spend
2533 927 2417 746 158 1093 341 278
Here are a few other useful probabilities:
P(i|<s>) = 0.25 P(english|want) = 0.0011
P(food|english) = 0.5 P(</s>|food) = 0.68
Now we can compute the probability of sentences like I want English food or
I want Chinese food by simply multiplying the appropriate bigram probabilities to-
gether, as follows:
P(<s> i want english food </s>)
    = P(i|<s>) P(want|i) P(english|want) P(food|english) P(</s>|food)
    = 0.25 × 0.33 × 0.0011 × 0.5 × 0.68
    = 0.000031
In practice throughout this book, we’ll use log to mean natural log (ln) when the
base is not specified.
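Products of many small probabilities underflow quickly, so in practice such calculations are carried out in log space (the text uses natural log when the base is unspecified). A small illustrative sketch with the bigram probabilities quoted above; the variable names are ours:

```python
import math

# Bigram probabilities for "<s> i want english food </s>" quoted in the text.
bigram_probs = [0.25, 0.33, 0.0011, 0.5, 0.68]

# Summing natural logs avoids the underflow that multiplying many small
# probabilities would eventually cause for long sequences.
logprob = sum(math.log(p) for p in bigram_probs)

print(logprob)            # ≈ -10.39 (natural log)
print(math.exp(logprob))  # ≈ 0.000031, matching the product above
```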
Longer context Although for pedagogical purposes we have only described bigram models, when there is sufficient training data we use trigram models, which condition on the previous two words, or 4-gram or 5-gram models. For these larger n-grams, we'll need to assume extra contexts to the left and right of the sentence end. For example, to compute trigram probabilities at the very beginning of the sentence, we use two pseudo-words for the first trigram (i.e., P(I|<s><s>)).
Some large n-gram datasets have been created, like the million most frequent
n-grams drawn from the Corpus of Contemporary American English (COCA), a
curated 1 billion word corpus of American English (Davies, 2020), Google’s Web
5-gram corpus from 1 trillion words of English web text (Franz and Brants, 2006),
or the Google Books Ngrams corpora (800 billion tokens from Chinese, English,
French, German, Hebrew, Italian, Russian, and Spanish) (Lin et al., 2012a).
It’s even possible to use extremely long-range n-gram context. The infini-gram
(∞-gram) project (Liu et al., 2024) allows n-grams of any length. Their idea is to
avoid the expensive (in space and time) pre-computation of huge n-gram count ta-
bles. Instead, n-gram probabilities with arbitrary n are computed quickly at inference
time by using an efficient representation called suffix arrays. This allows computing
of n-grams of every length for enormous corpora of 5 trillion tokens.
Efficiency considerations are important when building large n-gram language
models. It is standard to quantize the probabilities using only 4-8 bits (instead of
8-byte floats), store the word strings on disk and represent them in memory only as
a 64-bit hash, and represent n-grams in special data structures like ‘reverse tries’.
It is also common to prune n-gram language models, for example by only keeping
n-grams with counts greater than some threshold or using entropy to prune less-
important n-grams (Stolcke, 1998). Efficient language model toolkits like KenLM
(Heafield 2011, Heafield et al. 2013) use sorted arrays and merge sorts to efficiently build the probability tables in a minimal number of passes through a large
corpus.
for speech recognition of chemistry lectures, the test set should be text of chemistry
lectures. If we’re going to use it as part of a system for translating hotel booking re-
quests from Chinese to English, the test set should be text of hotel booking requests.
If we want our language model to be general purpose, then the test set should be
drawn from a wide variety of texts. In such cases we might collect a lot of texts
from different sources, and then divide it up into a training set and a test set. It’s
important to do the dividing carefully; if we’re building a general purpose model,
we don’t want the test set to consist of only text from one document, or one author,
since that wouldn’t be a good measure of general performance.
Thus if we are given a corpus of text and want to compare the performance of
two different n-gram models, we divide the data into training and test sets, and train
the parameters of both models on the training set. We can then compare how well
the two trained models fit the test set.
But what does it mean to “fit the test set”? The standard answer is simple:
whichever language model assigns a higher probability to the test set—which
means it more accurately predicts the test set—is a better model. Given two proba-
bilistic models, the better model is the one that better predicts the details of the test
data, and hence will assign a higher probability to the test data.
Since our evaluation metric is based on test set probability, it’s important not to
let the test sentences into the training set. Suppose we are trying to compute the
probability of a particular “test” sentence. If our test sentence is part of the training
corpus, we will mistakenly assign it an artificially high probability when it occurs
in the test set. We call this situation training on the test set. Training on the test
set introduces a bias that makes the probabilities all look too high, and causes huge
inaccuracies in perplexity, the probability-based metric we introduce below.
Even if we don’t train on the test set, if we test our language model on the
test set many times after making different changes, we might implicitly tune to its
characteristics, by noticing which changes seem to make the model better. For this
reason, we only want to run our model on the test set once, or only a small number of times, once we are sure our model is ready.
For this reason we normally instead have a third dataset called a development test set or devset. We do all our testing on this dataset until the very end, and then we test on the test set once to see how good our model is.
How do we divide our data into training, development, and test sets? We want
our test set to be as large as possible, since a small test set may be accidentally un-
representative, but we also want as much training data as possible. At the minimum,
we would want to pick the smallest test set that gives us enough statistical power
to measure a statistically significant difference between two potential models. It’s
important that the devset be drawn from the same kind of text as the test set, since
its goal is to measure how we would do on the test set.
Note that because of the inverse in Eq. 3.15, the higher the probability of the word
sequence, the lower the perplexity. Thus the lower the perplexity of a model on
the data, the better the model. Minimizing perplexity is equivalent to maximizing
the test set probability according to the language model. Why does perplexity use
the inverse probability? It turns out the inverse arises from the original definition
of perplexity from cross-entropy rate in information theory; for those interested, the
explanation is in the advanced Section 3.7. Meanwhile, we just have to remember
that perplexity has an inverse relationship with probability.
The details of computing the perplexity of a test set W depend on which lan-
guage model we use. Here’s the perplexity of W with a unigram language model
(just the geometric mean of the inverse of the unigram probabilities):
\text{perplexity}(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i)}}        (3.16)
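A hedged sketch of Eq. 3.16, computing unigram perplexity in log space for numerical stability; the toy corpus and the function name unigram_perplexity are illustrative assumptions, not from the text:

```python
import math
from collections import Counter

def unigram_perplexity(test_tokens, unigram_probs):
    """Perplexity(W) = (prod_i 1/P(w_i))^(1/N), computed via logs."""
    N = len(test_tokens)
    log_inv_prob_sum = -sum(math.log(unigram_probs[w]) for w in test_tokens)
    return math.exp(log_inv_prob_sum / N)

# Toy example: a unigram model estimated from a tiny training corpus.
train = "the cat sat on the mat".split()
counts = Counter(train)
probs = {w: c / len(train) for w, c in counts.items()}

print(unigram_perplexity("the cat the mat".split(), probs))
```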
What we generally use for word sequence in Eq. 3.15 or Eq. 3.17 is the entire
sequence of words in some test set. Since this sequence will cross many sentence
boundaries, if our vocabulary includes a between-sentence token <EOS> or separate
begin- and end-sentence markers <s> and </s> then we can include them in the
probability computation. If we do, then we also include one token per sentence in
the total count of word tokens N.
We mentioned above that perplexity is a function of both the text and the lan-
guage model: given a text W , different language models will have different perplex-
ities. Because of this, perplexity can be used to compare different language models.
For example, here we trained unigram, bigram, and trigram grammars on 38 million
words from the Wall Street Journal newspaper. We then computed the perplexity of
each of these models on a WSJ test set using Eq. 3.16 for unigrams, Eq. 3.17 for
bigrams, and the corresponding equation for trigrams. The table below shows the
perplexity of the 1.5 million word test set according to each of the language models.
Unigram Bigram Trigram
Perplexity 962 170 109
As we see above, the more information the n-gram gives us about the word
sequence, the higher the probability the n-gram will assign to the string. A trigram
model is less surprised than a unigram model because it has a better idea of what
words might come next, and so it assigns them a higher probability. And the higher
the probability, the lower the perplexity (since as Eq. 3.15 showed, perplexity is
related inversely to the probability of the test sequence according to the model). So
a lower perplexity tells us that a language model is a better predictor of the test set.
Note that in computing perplexities, the language model must be constructed
without any knowledge of the test set, or else the perplexity will be artificially low.
And the perplexity of two language models is only comparable if they use identical
vocabularies.
An (intrinsic) improvement in perplexity does not guarantee an (extrinsic) im-
provement in the performance of a language processing task like speech recognition
or machine translation. Nonetheless, because perplexity usually correlates with task
improvements, it is commonly used as a convenient evaluation metric. Still, when
possible a model’s improvement in perplexity should be confirmed by an end-to-end
evaluation on a real task.
perplexity of A on T is:
\text{perplexity}_A(T) = P_A(\text{red red red red blue})^{-\frac{1}{5}}
                       = \left(\left(\tfrac{1}{3}\right)^{5}\right)^{-\frac{1}{5}}
                       = \left(\tfrac{1}{3}\right)^{-1} = 3        (3.19)
But now suppose red was very likely in the training set of a different LM B, and so B has the following probabilities:

P(red) = 0.8        P(green) = 0.1        P(blue) = 0.1        (3.20)
We should expect the perplexity of the same test set red red red red blue for
language model B to be lower since most of the time the next color will be red, which
is very predictable, i.e. has a high probability. So the probability of the test set will
be higher, and since perplexity is inversely related to probability, the perplexity will
be lower. Thus, although the branching factor is still 3, the perplexity or weighted
branching factor is smaller:
\text{perplexity}_B(T) = P_B(\text{red red red red blue})^{-\frac{1}{5}}
                       = 0.04096^{-\frac{1}{5}}
                       = 0.527^{-1} = 1.89        (3.21)
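The two perplexities are easy to verify numerically; a brief sketch (variable names ours) reproducing the arithmetic of Eq. 3.19 and Eq. 3.21:

```python
# Test set T = "red red red red blue" (5 tokens).
test = ["red", "red", "red", "red", "blue"]

# LM A: all three colors equally likely; LM B: red is very likely.
lm_a = {"red": 1/3, "green": 1/3, "blue": 1/3}
lm_b = {"red": 0.8, "green": 0.1, "blue": 0.1}

def perplexity(lm, tokens):
    prob = 1.0
    for t in tokens:
        prob *= lm[t]
    return prob ** (-1 / len(tokens))

print(perplexity(lm_a, test))  # 3.0
print(perplexity(lm_b, test))  # ≈ 1.89
```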
Figure 3.3 A visualization of the sampling distribution for sampling sentences by repeat-
edly sampling unigrams. The blue bar represents the relative frequency of each word (we’ve
ordered them from most frequent to least frequent, but the choice of order is arbitrary). The
number line shows the cumulative probabilities. If we choose a random number between 0
and 1, it will fall in an interval corresponding to some word. The expectation for the random
number to fall in the larger intervals of one of the frequent words (the, of, a) is much higher
than in the smaller interval of one of the rare words (polyphonic).
One important way to visualize what kind of knowledge a language model em-
bodies is to sample from it. Sampling from a distribution means to choose random
points according to their likelihood. Thus sampling from a language model—which
represents a distribution over sentences—means to generate some sentences, choos-
ing each sentence according to its likelihood as defined by the model. Thus we are
more likely to generate sentences that the model thinks have a high probability and
less likely to generate sentences that the model thinks have a low probability.
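A minimal sketch of the sampling scheme pictured in Fig. 3.3: draw a random number between 0 and 1 and walk the cumulative distribution until the number falls inside some word's interval (the toy distribution and the function name are ours):

```python
import random

def sample_unigram(unigram_probs):
    """Sample one word: frequent words own larger intervals of [0, 1)."""
    r = random.random()
    cumulative = 0.0
    for word, p in unigram_probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

# Toy unigram distribution (probabilities must sum to 1).
probs = {"the": 0.4, "of": 0.3, "a": 0.2, "polyphonic": 0.1}

print([sample_unigram(probs) for _ in range(10)])
```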
1gram  –To him swallowed confess hear both. Which. Of save on trail for are ay device and rote life have
       –Hill he late speaks; or! a more to leg less first you enter
2gram  –Why dost stand forth thy canopy, forsooth; he is this palpable hit the King Henry. Live king. Follow.
       –What means, sir. I confess she? then all sorts, he is trim, captain.
3gram  –Fly, and will rid me these news of price. Therefore the sadness of parting, as they say, ’tis done.
       –This shall forbid it should be branded, if renown made it empty.
4gram  –King Henry. What! I will go seek the traitor Gloucester. Exeunt some of the watch. A great banquet serv’d in;
       –It cannot be but so.
Figure 3.4 Eight sentences randomly generated from four n-grams computed from Shakespeare’s works. All characters were mapped to lower-case and punctuation marks were treated as words. Output is hand-corrected for capitalization to improve readability.
The longer the context, the more coherent the sentences. The unigram sen-
tences show no coherent relation between words nor any sentence-final punctua-
tion. The bigram sentences have some local word-to-word coherence (especially
considering punctuation as words). The trigram sentences are beginning to look a
lot like Shakespeare. Indeed, the 4-gram sentences look a little too much like Shake-
speare. The words It cannot be but so are directly from King John. This is because,
not to put the knock on Shakespeare, his oeuvre is not very large as corpora go
(N = 884,647, V = 29,066), and our n-gram probability matrices are ridiculously
sparse. There are V^2 = 844,000,000 possible bigrams alone, and the number of
possible 4-grams is V^4 = 7 × 10^17. Thus, once the generator has chosen the first
3-gram (It cannot be), there are only seven possible next words for the 4th element
(but, I, that, thus, this, and the period).
To get an idea of the dependence on the training set, let’s look at LMs trained on a
completely different corpus: the Wall Street Journal (WSJ) newspaper. Shakespeare
and the WSJ are both English, so we might have expected some overlap between our
n-grams for the two genres. Fig. 3.5 shows sentences generated by unigram, bigram,
and trigram grammars trained on 40 million words from WSJ.
1gram  Months the my and issue of year foreign new exchange’s september were recession exchange new endorsed a acquire to six executives
2gram  Last December through the way to preserve the Hudson corporation N. B. E. C. Taylor would seem to complete the major central planners one point five percent of U. S. E. has already old M. X. corporation of living on information such as more frequently fishing to keep her
3gram  They also point to ninety nine point six billion dollars from two hundred four oh six three percent of the rates of interest stores as Mexico and Brazil on market conditions
Figure 3.5 Three sentences randomly generated from three n-gram models computed from 40 million words of the Wall Street Journal, lower-casing all characters and treating punctuation as words. Output was then hand-corrected for capitalization to improve readability.
Compare these examples to the pseudo-Shakespeare in Fig. 3.4. While they both
model “English-like sentences”, there is no overlap in the generated sentences, and
little overlap even in small phrases. Statistical models are pretty useless as predictors
if the training sets and the test sets are as different as Shakespeare and the WSJ.
How should we deal with this problem when we build n-gram models? One step
is to be sure to use a training corpus that has a similar genre to whatever task we are
trying to accomplish. To build a language model for translating legal documents,
we need a training corpus of legal documents. To build a language model for a
question-answering system, we need a training corpus of questions.
It is equally important to get training data in the appropriate dialect or variety,
especially when processing social media posts or spoken transcripts. For exam-
ple some tweets will use features of African American English (AAE)—the name
for the many variations of language used in African American communities (King,
2020). Such features can include words like finna—an auxiliary verb that marks
immediate future tense—that don’t occur in other varieties, or spellings like den for
then, in tweets like this one (Blodgett and O’Connor, 2017):
(3.22) Bored af den my phone finna die!!!
while tweets from English-based languages like Nigerian Pidgin have markedly dif-
ferent vocabulary and n-gram patterns from American English (Jurgens et al., 2017):
(3.23) @username R u a wizard or wat gan sef: in d mornin - u tweet, afternoon - u
tweet, nyt gan u dey tweet. beta get ur IT placement wiv twitter
Is it possible for the test set nonetheless to have a word we have never seen be-
fore? What happens if the word Jurafsky never occurs in our training set, but pops
up in the test set? The answer is that although words might be unseen, we actu-
ally run our NLP algorithms not on words but on subword tokens. With subword
tokenization (like the BPE algorithm of Chapter 2) any word can be modeled as a
sequence of known smaller subwords, if necessary by a sequence of individual let-
ters. So although for convenience we’ve been referring to words in this chapter, the
language model vocabulary is actually the set of tokens rather than words, and the
test set can never contain unseen tokens.
P_{\text{Laplace}}(w_i) = \frac{c_i + 1}{N + V}        (3.24)

Smoothing can equivalently be described via an adjusted count c*_i:

c^*_i = (c_i + 1)\,\frac{N}{N + V}        (3.25)
We can now turn c∗i into a probability Pi∗ by normalizing by N.
A related way to view smoothing is as discounting (lowering) some non-zero counts in order to get the probability mass that will be assigned to the zero counts. Thus, instead of referring to the discounted counts c*, we might describe a smoothing algorithm in terms of a relative discount d_i, the ratio of the discounted counts to the original counts:

d_i = \frac{c^*_i}{c_i}
Now that we have the intuition for the unigram case, let’s smooth our Berkeley
Restaurant Project bigrams. Figure 3.6 shows the add-one smoothed counts for the
bigrams in Fig. 3.1.
Figure 3.7 shows the add-one smoothed probabilities for the bigrams in Fig. 3.2.
Recall that normal bigram probabilities are computed by normalizing each row of
counts by the unigram count:
P_{\text{MLE}}(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n)}{C(w_{n-1})}        (3.26)
For add-one smoothed bigram counts, we need to augment the unigram count by the
number of total word types in the vocabulary V :
P_{\text{Laplace}}(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n) + 1}{\sum_{w} (C(w_{n-1} w) + 1)} = \frac{C(w_{n-1} w_n) + 1}{C(w_{n-1}) + V}        (3.27)
Thus, each of the unigram counts given in the previous section will need to be aug-
mented by V = 1446. The result is the smoothed bigram probabilities in Fig. 3.7.
It is often convenient to reconstruct the count matrix so we can see how much a
smoothing algorithm has changed the original counts. These adjusted counts can be computed as follows:

c^*(w_{n-1} w_n) = \frac{[C(w_{n-1} w_n) + 1] \times C(w_{n-1})}{C(w_{n-1}) + V}        (3.28)
Note that add-one smoothing has made a very big change to the counts. Com-
paring Fig. 3.8 to the original counts in Fig. 3.1, we can see that C(want to) changed
from 608 to 238! We can see this in probability space as well: P(to|want) decreases
from 0.66 in the unsmoothed case to 0.26 in the smoothed case. Looking at the dis-
count d (the ratio between new and old counts) shows us how strikingly the counts
for each prefix word have been reduced; the discount for the bigram want to is 0.39,
while the discount for Chinese food is 0.10, a factor of 10! The sharp change occurs
because too much probability mass is moved to all the zeros.
P^*_{\text{Add-}k}(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n) + k}{C(w_{n-1}) + kV}        (3.29)
Add-k smoothing requires that we have a method for choosing k; this can be
done, for example, by optimizing on a devset. Although add-k is useful for some
tasks (including text classification), it turns out that it still doesn’t work well for
language modeling, generating counts with poor variances and often inappropriate
discounts (Gale and Church, 1994).
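As a concrete illustration of Eq. 3.27 and Eq. 3.29 (add-one is the special case k = 1), here is a short sketch; it reuses the chapter's counts C(want to) = 608, C(want) = 927, and V = 1446, while the function name and the unseen word "zebra" are our own illustrative choices:

```python
from collections import Counter

def addk_bigram_prob(prev, word, bigram_counts, unigram_counts, V, k=1.0):
    """P_Add-k(word | prev) = (C(prev word) + k) / (C(prev) + k*V)."""
    return (bigram_counts[(prev, word)] + k) / (unigram_counts[prev] + k * V)

# Counts quoted in the chapter: C(want to) = 608, C(want) = 927, V = 1446.
unigram_counts = Counter({"want": 927})
bigram_counts = Counter({("want", "to"): 608})
V = 1446

# Add-one: (608 + 1) / (927 + 1446) ≈ 0.26, as reported in the text.
print(addk_bigram_prob("want", "to", bigram_counts, unigram_counts, V, k=1))

# An unseen bigram gets a small but non-zero probability.
print(addk_bigram_prob("want", "zebra", bigram_counts, unigram_counts, V, k=1))
```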
How are these λ values set? Both the simple interpolation and conditional interpolation λs are learned from a held-out corpus. A held-out corpus is an additional training corpus, so-called because we hold it out from the training data, that we use to set these λ values. We do so by choosing the λ values that maximize the likelihood of the held-out corpus. That is, we fix the n-gram probabilities and then search for the λ values that—when plugged into Eq. 3.30—give us the highest probability of the held-out set. There are various ways to find this optimal set of λs. One way is to use the EM algorithm, an iterative learning algorithm that converges on locally optimal λs (Jelinek and Mercer, 1980).
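As a rough sketch of how interpolation λs might be chosen on a held-out set, here is a brute-force grid search over a bigram/unigram mixture rather than EM; all of the probabilities, names, and data below are illustrative assumptions:

```python
import math

def interp_prob(word, prev, lam, bigram_p, unigram_p):
    """Simple linear interpolation: lam[0]*P(w | prev) + lam[1]*P(w)."""
    return lam[0] * bigram_p.get((prev, word), 0.0) + lam[1] * unigram_p.get(word, 0.0)

def heldout_loglik(heldout, lam, bigram_p, unigram_p):
    """Log likelihood of the held-out word sequence under the mixture."""
    return sum(math.log(interp_prob(w, prev, lam, bigram_p, unigram_p))
               for prev, w in zip(heldout, heldout[1:]))

def best_lambdas(heldout, bigram_p, unigram_p, step=0.1):
    """Grid-search the lambda pair that maximizes held-out likelihood."""
    candidates = [(i * step, 1.0 - i * step) for i in range(1, 10)]
    return max(candidates,
               key=lambda lam: heldout_loglik(heldout, lam, bigram_p, unigram_p))

# Toy fixed n-gram probabilities and a toy held-out sequence (illustrative only).
unigram_p = {"i": 0.4, "am": 0.3, "sam": 0.3}
bigram_p = {("i", "am"): 0.9, ("am", "sam"): 0.8, ("sam", "i"): 0.5}
heldout = ["i", "am", "sam", "i", "am"]

print(best_lambdas(heldout, bigram_p, unigram_p))
```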
The backoff terminates in the unigram, which has score S(w) = count(w)/N. Brants et al. (2007) find that a value of 0.4 worked well for λ.
The log can, in principle, be computed in any base. If we use log base 2, the
resulting value of entropy will be measured in bits.
One intuitive way to think about entropy is as a lower bound on the number of
bits it would take to encode a certain decision or piece of information in the optimal
coding scheme. Consider an example from the standard information theory textbook
Cover and Thomas (1991). Imagine that we want to place a bet on a horse race but
it is too far to go all the way to Yonkers Racetrack, so we’d like to send a short
message to the bookie to tell him which of the eight horses to bet on. One way to
encode this message is just to use the binary representation of the horse’s number
as the code; thus, horse 1 would be 001, horse 2 010, horse 3 011, and so on, with
horse 8 coded as 000. If we spend the whole day betting and each horse is coded
with 3 bits, on average we would be sending 3 bits per race.
Can we do better? Suppose that the spread is the actual distribution of the bets
placed and that we represent it as the prior probability of each horse as follows:
Horse 1   1/2        Horse 5   1/64
Horse 2   1/4        Horse 6   1/64
Horse 3   1/8        Horse 7   1/64
Horse 4   1/16       Horse 8   1/64
The entropy of the random variable X that ranges over horses gives us a lower
bound on the number of bits and is
H(X) = -\sum_{i=1}^{8} p(i) \log_2 p(i)
     = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{4}\log_2\tfrac{1}{4} - \tfrac{1}{8}\log_2\tfrac{1}{8} - \tfrac{1}{16}\log_2\tfrac{1}{16} - 4\left(\tfrac{1}{64}\log_2\tfrac{1}{64}\right)
     = 2 \text{ bits}        (3.34)
A code that averages 2 bits per race can be built with short encodings for more
probable horses, and longer encodings for less probable horses. For example, we
could encode the most likely horse with the code 0, and the remaining horses as 10,
then 110, 1110, 111100, 111101, 111110, and 111111.
What if the horses are equally likely? We saw above that if we used an equal-
length binary code for the horse numbers, each horse took 3 bits to code, so the
average was 3. Is the entropy the same? In this case each horse would have a
probability of 1/8. The entropy of the choice of horses is then

H(X) = -\sum_{i=1}^{8} \tfrac{1}{8} \log_2 \tfrac{1}{8} = -\log_2 \tfrac{1}{8} = 3 \text{ bits}        (3.35)
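Both entropies are straightforward to reproduce; a short sketch (function name ours):

```python
import math

def entropy(probs):
    """H(X) = -sum_i p(i) * log2 p(i), in bits."""
    return -sum(p * math.log2(p) for p in probs)

skewed = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]
uniform = [1/8] * 8

print(entropy(skewed))   # 2.0 bits
print(entropy(uniform))  # 3.0 bits
```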
Until now we have been computing the entropy of a single variable. But most of
what we will use entropy for involves sequences. For a grammar, for example, we
will be computing the entropy of some sequence of words W = {w1 , w2 , . . . , wn }.
One way to do this is to have a variable that ranges over sequences of words. For
example we can compute the entropy of a random variable that ranges over all se-
quences of words of length n in some language L as follows:
H(w_1, w_2, \ldots, w_n) = -\sum_{w_{1:n} \in L} p(w_{1:n}) \log p(w_{1:n})        (3.36)
We could define the entropy rate (we could also think of this as the per-word
entropy) as the entropy of this sequence divided by the number of words:
\frac{1}{n} H(w_{1:n}) = -\frac{1}{n} \sum_{w_{1:n} \in L} p(w_{1:n}) \log p(w_{1:n})        (3.37)
The entropy rate of a language L is then the limit of this quantity as the sequence length goes to infinity:

H(L) = \lim_{n \to \infty} \frac{1}{n} H(w_{1:n})
     = -\lim_{n \to \infty} \frac{1}{n} \sum_{w_{1:n} \in L} p(w_{1:n}) \log p(w_{1:n})        (3.38)
The Shannon-McMillan-Breiman theorem (Algoet and Cover 1988, Cover and Thomas
1991) states that if the language is regular in certain ways (to be exact, if it is both
stationary and ergodic),
H(L) = \lim_{n \to \infty} -\frac{1}{n} \log p(w_{1:n})        (3.39)
That is, we can take a single sequence that is long enough instead of summing over
all possible sequences. The intuition of the Shannon-McMillan-Breiman theorem
is that a long-enough sequence of words will contain in it many other shorter se-
quences and that each of these shorter sequences will reoccur in the longer sequence
according to their probabilities.
A stochastic process is said to be stationary if the probabilities it assigns to a
sequence are invariant with respect to shifts in the time index. In other words, the
probability distribution for words at time t is the same as the probability distribution
at time t + 1. Markov models, and hence n-grams, are stationary. For example, in
a bigram, Pi is dependent only on Pi−1 . So if we shift our time index by x, Pi+x is
still dependent on Pi+x−1 . But natural language is not stationary, since as we show
in Appendix D, the probability of upcoming words can be dependent on events that
were arbitrarily distant and time dependent. Thus, our statistical models only give
an approximation to the correct distributions and entropies of natural language.
To summarize, by making some incorrect but convenient simplifying assump-
tions, we can compute the entropy of some stochastic process by taking a very long
sample of the output and computing its average log probability.
Now we are ready to introduce cross-entropy. The cross-entropy is useful when
we don’t know the actual probability distribution p that generated some data. It
allows us to use some m, which is a model of p (i.e., an approximation to p). The
cross-entropy of m on p is defined by
H(p, m) = \lim_{n \to \infty} -\frac{1}{n} \sum_{W \in L} p(w_1, \ldots, w_n) \log m(w_1, \ldots, w_n)        (3.40)
That is, we draw sequences according to the probability distribution p, but sum the
log of their probabilities according to m.
Again, following the Shannon-McMillan-Breiman theorem, for a stationary er-
godic process:
H(p, m) = \lim_{n \to \infty} -\frac{1}{n} \log m(w_1 w_2 \ldots w_n)        (3.41)
This means that, as for entropy, we can estimate the cross-entropy of a model m
on some distribution p by taking a single sequence that is long enough instead of
summing over all possible sequences.
What makes the cross-entropy useful is that the cross-entropy H(p, m) is an upper bound on the entropy H(p). For any model m:

H(p) \leq H(p, m)        (3.42)
This means that we can use some simplified model m to help estimate the true en-
tropy of a sequence of symbols drawn according to probability p. The more accurate
m is, the closer the cross-entropy H(p, m) will be to the true entropy H(p). Thus,
the difference between H(p, m) and H(p) is a measure of how accurate a model is.
Between two models m1 and m2 , the more accurate model will be the one with the
lower cross-entropy. (The cross-entropy can never be lower than the true entropy, so
a model cannot err by underestimating the true entropy.)
We are finally ready to see the relation between perplexity and cross-entropy
as we saw it in Eq. 3.41. Cross-entropy is defined in the limit as the length of the
observed word sequence goes to infinity. We approximate this cross-entropy by
relying on a (sufficiently long) sequence of fixed length. This approximation to the
cross-entropy of a model M = P(w_i \mid w_{i-N+1:i-1}) on a sequence of words W is

H(W) = -\frac{1}{N} \log P(w_1 w_2 \ldots w_N)        (3.43)
The perplexity of a model P on a sequence of words W is now formally defined as 2 raised to the power of this cross-entropy:

\text{Perplexity}(W) = 2^{H(W)} = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}}
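The equivalence is easy to check numerically: computing the per-word cross-entropy in bits and raising 2 to that power gives the same number as the inverse-probability formula. A small sketch with illustrative per-word probabilities:

```python
import math

# Per-word probabilities assigned by some model to a 4-word test sequence
# (illustrative numbers only).
word_probs = [0.1, 0.2, 0.05, 0.1]
N = len(word_probs)

# Cross-entropy estimate H(W) = -(1/N) * log2 P(w_1 ... w_N), in bits.
H = -sum(math.log2(p) for p in word_probs) / N

perplexity_from_entropy = 2 ** H
perplexity_direct = math.prod(word_probs) ** (-1 / N)

print(H)                        # ≈ 3.32 bits per word
print(perplexity_from_entropy)  # ≈ 10.0
print(perplexity_direct)        # same value
```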
3.8 Summary
This chapter introduced language modeling via the n-gram model, a classic model
that allows us to introduce many of the basic concepts in language modeling.
• Language models offer a way to assign a probability to a sentence or other
sequence of words or tokens, and to predict a word or token from preceding
words or tokens.
• N-grams are perhaps the simplest kind of language model. They are Markov
models that estimate words from a fixed window of previous words. N-gram
models can be trained by counting in a training corpus and normalizing the
counts (the maximum likelihood estimate).
• N-gram language models can be evaluated on a test set using perplexity.
• The perplexity of a test set according to a language model is a function of
the probability of the test set: the inverse test set probability according to the
model, normalized by the length.
• Sampling from a language model means to generate some sentences, choos-
ing each sentence according to its likelihood as defined by the model.
• Smoothing algorithms provide a way to estimate probabilities for events that
were unseen in training. Commonly used smoothing algorithms for n-grams
include add-1 smoothing, or rely on lower-order n-gram counts through inter-
polation.
trigram probability that a given letter would be a vowel given the previous one or
two letters. Shannon (1948) applied n-grams to compute approximations to English
word sequences. Based on Shannon’s work, Markov models were commonly used in
engineering, linguistic, and psychological work on modeling word sequences by the
1950s. In a series of extremely influential papers starting with Chomsky (1956) and
including Chomsky (1957) and Miller and Chomsky (1963), Noam Chomsky argued
that “finite-state Markov processes”, while a possibly useful engineering heuristic,
were incapable of being a complete cognitive model of human grammatical knowl-
edge. These arguments led many linguists and computational linguists to ignore
work in statistical modeling for decades.
The resurgence of n-gram language models came from Fred Jelinek and col-
leagues at the IBM Thomas J. Watson Research Center, who were influenced by
Shannon, and James Baker at CMU, who was influenced by the prior, classified
work of Leonard Baum and colleagues on these topics at labs like the US Institute
for Defense Analyses (IDA) after they were declassified. Independently these two
labs successfully used n-grams in their speech recognition systems at the same time
(Baker 1975b, Jelinek et al. 1975, Baker 1975a, Bahl et al. 1983, Jelinek 1990). The
terms “language model” and “perplexity” were first used for this technology by the
IBM group. Jelinek and his colleagues used the term language model in a pretty
modern way, to mean the entire set of linguistic influences on word sequence prob-
abilities, including grammar, semantics, discourse, and even speaker characteristics,
rather than just the particular n-gram model itself.
Add-one smoothing derives from Laplace’s 1812 law of succession and was first
applied as an engineering solution to the zero frequency problem by Jeffreys (1948)
based on an earlier Add-K suggestion by Johnson (1932). Problems with the add-
one algorithm are summarized in Gale and Church (1994).
A wide variety of different language modeling and smoothing techniques were
proposed in the 80s and 90s, including Good-Turing discounting—first applied to n-gram smoothing at IBM by Katz (Nádas 1984, Church and Gale 1991)—Witten-Bell discounting (Witten and Bell, 1991), and varieties of class-based n-gram mod-
els that used information about word classes. Starting in the late 1990s, Chen and
Goodman performed a number of carefully controlled experiments comparing dif-
ferent algorithms and parameters (Chen and Goodman 1999, Goodman 2006, inter
alia). They showed the advantages of Modified Interpolated Kneser-Ney, which
became the standard baseline for n-gram language modeling around the turn of the
century, especially because they showed that caches and class-based models pro-
vided only minor additional improvement. SRILM (Stolcke, 2002) and KenLM
(Heafield 2011, Heafield et al. 2013) are publicly available toolkits for building n-
gram language models.
Large language models are based on neural networks rather than n-grams, en-
abling them to solve the two major problems with n-grams: (1) the number of param-
eters increases exponentially as the n-gram order increases, and (2) n-grams have no
way to generalize from training examples to test set examples unless they use iden-
tical words. Neural language models instead project words into a continuous space
in which words with similar contexts have similar representations. We’ll introduce
transformer-based large language models in Chapter 9, along the way introducing
feedforward language models (Bengio et al. 2006, Schwenk 2007) in Chapter 7 and
recurrent language models (Mikolov, 2012) in Chapter 8.
Exercises
3.1 Write out the equation for trigram probability estimation (modifying Eq. 3.11).
Now write out all the non-zero trigram probabilities for the I am Sam corpus
on page 35.
3.2 Calculate the probability of the sentence i want chinese food. Give two
probabilities, one using Fig. 3.2 and the ‘useful probabilities’ just below it on
page 37, and another using the add-1 smoothed table in Fig. 3.7. Assume the
additional add-1 smoothed probabilities P(i|<s>) = 0.19 and P(</s>|food) =
0.40.
3.3 Which of the two probabilities you computed in the previous exercise is higher,
unsmoothed or smoothed? Explain why.
3.4 We are given the following corpus, modified from the one in the chapter:
<s> I am Sam </s>
<s> Sam I am </s>
<s> I am Sam </s>
<s> I do not like green eggs and Sam </s>
Using a bigram language model with add-one smoothing, what is P(Sam |
am)? Include <s> and </s> in your counts just like any other token.
3.5 Suppose we didn’t use the end-symbol </s>. Train an unsmoothed bigram
grammar on the following training corpus without using the end-symbol </s>:
<s> a b
<s> b b
<s> b a
<s> a a
Demonstrate that your bigram model does not assign a single probability dis-
tribution across all sentence lengths by showing that the sum of the probability
of the four possible 2 word sentences over the alphabet {a,b} is 1.0, and the
sum of the probability of all possible 3 word sentences over the alphabet {a,b}
is also 1.0.
3.6 Suppose we train a trigram language model with add-one smoothing on a
given corpus. The corpus contains V word types. Express a formula for esti-
mating P(w3|w1,w2), where w3 is a word which follows the bigram (w1,w2),
in terms of various n-gram counts and V. Use the notation c(w1,w2,w3) to
denote the number of times that trigram (w1,w2,w3) occurs in the corpus, and
so on for bigrams and unigrams.
3.7 We are given the following corpus, modified from the one in the chapter:
<s> I am Sam </s>
<s> Sam I am </s>
<s> I am Sam </s>
<s> I do not like green eggs and Sam </s>
If we use linear interpolation smoothing between a maximum-likelihood bi-
gram model and a maximum-likelihood unigram model with λ1 = 1/2 and λ2 = 1/2, what is P(Sam|am)? Include <s> and </s> in your counts just like any other token.
3.8 Write a program to compute unsmoothed unigrams and bigrams.
3.9 Run your n-gram program on two different small corpora of your choice (you
might use email text or newsgroups). Now compare the statistics of the two
corpora. What are the differences in the most common unigrams between the
two? How about interesting differences in bigrams?
3.10 Add an option to your program to generate random sentences.
3.11 Add an option to your program to compute the perplexity of a test set.
3.12 You are given a training set of 100 numbers that consists of 91 zeros and 1
each of the other digits 1-9. Now we see the following test set: 0 0 0 0 0 3 0 0
0 0. What is the unigram perplexity?
CHAPTER
4 Naive Bayes, Text Classification, and Sentiment
Finally, one of the oldest tasks in text classification is assigning a library sub-
ject category or topic label to a text. Deciding whether a research paper concerns
epidemiology or instead, perhaps, embryology, is an important component of infor-
mation retrieval. Various sets of subject categories exist, such as the MeSH (Medical
Subject Headings) thesaurus. In fact, as we will see, subject category classification
is the task for which the naive Bayes algorithm was invented in 1961 (Maron, 1961).
Classification is essential for tasks below the level of the document as well.
We’ve already seen period disambiguation (deciding if a period is the end of a sen-
tence or part of a word), and word tokenization (deciding if a character should be
a word boundary). Even language modeling can be viewed as classification: each
word can be thought of as a class, and so predicting the next word is classifying the
context-so-far into a class for each next word. A part-of-speech tagger (Chapter 17)
classifies each occurrence of a word in a sentence as, e.g., a noun or a verb.
The goal of classification is to take a single observation, extract some useful
features, and thereby classify the observation into one of a set of discrete classes.
One method for classifying text is to use rules handwritten by humans. Handwrit-
ten rule-based classifiers can be components of state-of-the-art systems in language
processing. But rules can be fragile, as situations or data change over time, and for
some tasks humans aren’t necessarily good at coming up with the rules.
The most common way of doing text classification in language processing is instead via supervised machine learning, the subject of this chapter. In supervised learning, we have a data set of input observations, each associated with some correct
output (a ‘supervision signal’). The goal of the algorithm is to learn how to map
from a new observation to a correct output.
Formally, the task of supervised classification is to take an input x and a fixed
set of output classes Y = {y1 , y2 , ..., yM } and return a predicted class y ∈ Y . For
text classification, we’ll sometimes talk about c (for “class”) instead of y as our
output variable, and d (for “document”) instead of x as our input variable. In the
supervised situation we have a training set of N documents that have each been hand-
labeled with a class: {(d1 , c1 ), ...., (dN , cN )}. Our goal is to learn a classifier that is
capable of mapping from a new document d to its correct class c ∈ C, where C is
some set of useful document classes. A probabilistic classifier additionally will tell
us the probability of the observation being in the class. This full distribution over
the classes can be useful information for downstream decisions; avoiding making
discrete decisions early on can be useful when combining systems.
Many kinds of machine learning algorithms are used to build classifiers. This
chapter introduces naive Bayes; the following one introduces logistic regression.
These exemplify two ways of doing classification. Generative classifiers like naive
Bayes build a model of how a class could generate some input data. Given an ob-
servation, they return the class most likely to have generated the observation. Dis-
criminative classifiers like logistic regression instead learn what features from the
input are most useful to discriminate between the different possible classes. While
discriminative systems are often more accurate and hence more commonly used,
generative classifiers still have a role.
Figure 4.1 Intuition of the multinomial naive Bayes classifier applied to a movie review. The position of the words is ignored (the bag-of-words assumption) and we make use of the frequency of each word.
This idea of Bayesian inference has been known since the work of Bayes (1763), and was first applied to text classification by Mosteller and Wallace (1964). The intuition of Bayesian classification is to use Bayes’ rule to transform Eq. 4.1 into other probabilities that have some useful properties. Bayes’ rule is presented in Eq. 4.2; it gives us a way to break down any conditional probability P(x|y) into three other probabilities:

P(x \mid y) = \frac{P(y \mid x) P(x)}{P(y)}        (4.2)
We can then substitute Eq. 4.2 into Eq. 4.1 to get Eq. 4.3:

\hat{c} = \operatorname{argmax}_{c \in C} P(c \mid d) = \operatorname{argmax}_{c \in C} \frac{P(d \mid c) P(c)}{P(d)}        (4.3)

We can conveniently simplify Eq. 4.3 by dropping the denominator P(d). This is possible because we will be computing P(d|c)P(c)/P(d) for each possible class. But P(d) doesn’t change for each class; we are always asking about the most likely class for the same document d, which must have the same probability P(d). Thus, we can choose the class that maximizes this simpler formula:

\hat{c} = \operatorname{argmax}_{c \in C} P(c \mid d) = \operatorname{argmax}_{c \in C} P(d \mid c) P(c)        (4.4)
We call Naive Bayes a generative model because we can read Eq. 4.4 as stating
a kind of implicit assumption about how a document is generated: first a class is
sampled from P(c), and then the words are generated by sampling from P(d|c). (In
fact we could imagine generating artificial documents, or at least their word counts,
by following this process). We’ll say more about this intuition of generative models
in Chapter 5.
To return to classification: we compute the most probable class ĉ given some
document d by choosing the class which has the highest product of two probabilities:
the prior probability of the class P(c) and the likelihood of the document P(d|c):

\hat{c} = \operatorname{argmax}_{c \in C} \overbrace{P(d \mid c)}^{\text{likelihood}} \; \overbrace{P(c)}^{\text{prior}}        (4.5)

If we represent the document d as a set of features f_1, f_2, \ldots, f_n, this becomes:

\hat{c} = \operatorname{argmax}_{c \in C} \overbrace{P(f_1, f_2, \ldots, f_n \mid c)}^{\text{likelihood}} \; \overbrace{P(c)}^{\text{prior}}        (4.6)
Unfortunately, Eq. 4.6 is still too hard to compute directly: without some sim-
plifying assumptions, estimating the probability of every possible combination of
features (for example, every possible set of words and positions) would require huge
numbers of parameters and impossibly large training sets. Naive Bayes classifiers
therefore make two simplifying assumptions.
The first is the bag-of-words assumption discussed intuitively above: we assume
position doesn’t matter, and that the word “love” has the same effect on classification
whether it occurs as the 1st, 20th, or last word in the document. Thus we assume
that the features f1 , f2 , ..., fn only encode word identity and not position.
The second is commonly called the naive Bayes assumption: this is the conditional independence assumption that the probabilities P(f_i|c) are independent given the class c and hence can be ‘naively’ multiplied as follows:

P(f_1, f_2, \ldots, f_n \mid c) = P(f_1 \mid c) \cdot P(f_2 \mid c) \cdot \ldots \cdot P(f_n \mid c)        (4.7)
The final equation for the class chosen by a naive Bayes classifier is thus:
c_{NB} = \operatorname{argmax}_{c \in C} P(c) \prod_{f \in F} P(f \mid c)        (4.8)
To apply the naive Bayes classifier to text, we will use each word in the documents
as a feature, as suggested above, and we consider each of the words in the document
by walking an index through every word position in the document:
positions ← all word positions in test document
c_{NB} = \operatorname{argmax}_{c \in C} P(c) \prod_{i \in \text{positions}} P(w_i \mid c)        (4.9)
Naive Bayes calculations, like calculations for language modeling, are done in log
space, to avoid underflow and increase speed. Thus Eq. 4.9 is generally instead
expressed1 as
c_{NB} = \operatorname{argmax}_{c \in C} \left[ \log P(c) + \sum_{i \in \text{positions}} \log P(w_i \mid c) \right]        (4.10)
By considering features in log space, Eq. 4.10 computes the predicted class as a lin-
ear function of input features. Classifiers that use a linear combination of the inputs
to make a classification decision —like naive Bayes and also logistic regression—
are called linear classifiers.
1 In practice throughout this book, we’ll use log to mean natural log (ln) when the base is not specified.
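A compact sketch of Eq. 4.10, scoring each class by its log prior plus summed log likelihoods; the toy parameters below are illustrative and are not the trained model of the worked example:

```python
import math

def classify_nb(tokens, log_prior, log_likelihood, classes):
    """Return argmax_c [ log P(c) + sum_i log P(w_i | c) ]  (Eq. 4.10)."""
    best_class, best_score = None, float("-inf")
    for c in classes:
        score = log_prior[c]
        for w in tokens:
            if w in log_likelihood[c]:   # unknown words are simply skipped
                score += log_likelihood[c][w]
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Illustrative parameters for two classes.
classes = ["+", "-"]
log_prior = {"+": math.log(0.4), "-": math.log(0.6)}
log_likelihood = {
    "+": {"fun": math.log(0.05), "great": math.log(0.04)},
    "-": {"fun": math.log(0.005), "great": math.log(0.01)},
}

print(classify_nb("a fun great movie".split(), log_prior, log_likelihood, classes))
```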
But since naive Bayes naively multiplies all the feature likelihoods together, zero
probabilities in the likelihood term for any class will cause the probability of the
class to be zero, no matter the other evidence!
The simplest solution is the add-one (Laplace) smoothing introduced in Chap-
ter 3. While Laplace smoothing is usually replaced by more sophisticated smoothing
algorithms in language modeling, it is commonly used in naive Bayes text catego-
rization:
\hat{P}(w_i \mid c) = \frac{\text{count}(w_i, c) + 1}{\sum_{w \in V} (\text{count}(w, c) + 1)} = \frac{\text{count}(w_i, c) + 1}{\left(\sum_{w \in V} \text{count}(w, c)\right) + |V|}        (4.14)
Note once again that it is crucial that the vocabulary V consists of the union of all the
word types in all classes, not just the words in one class c (try to convince yourself
why this must be true; see the exercise at the end of the chapter).
What do we do about words that occur in our test data but are not in our vocab-
ulary at all because they did not occur in any training document in any class? The
unknown word solution for such unknown words is to ignore them—remove them from the test
document and not include any probability for them at all.
Finally, some systems choose to completely ignore another class of words: stop
stop words words, very frequent words like the and a. This can be done by sorting the vocabu-
lary by frequency in the training set, and defining the top 10–100 vocabulary entries
as stop words, or alternatively by using one of the many predefined stop word lists
available online. Then each instance of these stop words is simply removed from
both training and test documents as if it had never occurred. In most text classifica-
tion applications, however, using a stop word list doesn’t improve performance, and
so it is more common to make use of the entire vocabulary and not use a stop word
list.
Fig. 4.2 shows the final algorithm.
Figure 4.2 The naive Bayes algorithm, using add-1 smoothing. To use add-k smoothing instead, change the +1 to +k for loglikelihood counts in training.

P(−) = 3/5        P(+) = 2/5

The word with doesn’t occur in the training set, so we drop it completely (as mentioned above, we don’t use unknown word models for naive Bayes). The likelihoods from the training set for the remaining three words “predictable”, “no”, and “fun”, are as follows, from Eq. 4.14 (computing the probabilities for the remainder of the words in the training set is left as an exercise for the reader):
P(“predictable”|−) = (1+1)/(14+20)        P(“predictable”|+) = (0+1)/(9+20)
P(“no”|−) = (1+1)/(14+20)                 P(“no”|+) = (0+1)/(9+20)
P(“fun”|−) = (0+1)/(14+20)                P(“fun”|+) = (1+1)/(9+20)
For the test sentence S = “predictable with no fun”, after removing the word ‘with’,
the chosen class, via Eq. 4.9, is therefore computed as follows:
P(−)P(S|−) = \frac{3}{5} \times \frac{2 \times 2 \times 1}{34^3} = 6.1 \times 10^{-5}

P(+)P(S|+) = \frac{2}{5} \times \frac{1 \times 1 \times 2}{29^3} = 3.2 \times 10^{-5}
The model thus predicts the class negative for the test sentence.
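The worked example can be reproduced in a few lines; in this sketch the totals 14 and 9 are the per-class token counts implied by the denominators above, |V| = 20, and the helper name score is ours:

```python
# Add-one smoothed likelihoods from Eq. 4.14, with |V| = 20,
# 14 tokens in the negative class and 9 in the positive class.
V = 20
neg_total, pos_total = 14, 9
neg_counts = {"predictable": 1, "no": 1, "fun": 0}
pos_counts = {"predictable": 0, "no": 0, "fun": 1}

def score(prior, counts, total, words):
    """P(c) * prod_i P(w_i | c) with add-one smoothed likelihoods."""
    p = prior
    for w in words:
        p *= (counts[w] + 1) / (total + V)
    return p

words = ["predictable", "no", "fun"]   # "with" has been dropped
print(score(3/5, neg_counts, neg_total, words))  # ≈ 6.1e-05
print(score(2/5, pos_counts, pos_total, words))  # ≈ 3.2e-05
```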
First, for sentiment classification and a number of other text classification tasks,
whether a word occurs or not seems to matter more than its frequency. Thus it often
improves performance to clip the word counts in each document at 1 (see the end
of the chapter for pointers to these results). This variant is called binary multinomial naive Bayes or binary naive Bayes. The variant uses the same algorithm as
in Fig. 4.2 except that for each document we remove all duplicate words before con-
catenating them into the single big document during training and we also remove
duplicate words from test documents. Fig. 4.3 shows an example in which a set
of four documents (shortened and text-normalized for this example) are remapped
to binary, with the modified counts shown in the table on the right. The example
is worked without add-1 smoothing to make the differences clearer. Note that the
resulting counts need not be 1; the word great has a count of 2 even for binary naive
Bayes, because it appears in multiple documents.
Four original documents:
− it was pathetic the worst part was the boxing scenes
− no plot twists or great scenes
+ and satire and great plot twists
+ great scenes great film

After per-document binarization:
− it was pathetic the worst part boxing scenes
− no plot twists or great scenes
+ and satire great plot twists
+ great scenes film

            NB Counts   Binary Counts
             +    −       +    −
and          2    0       1    0
boxing       0    1       0    1
film         1    0       1    0
great        3    1       2    1
it           0    1       0    1
no           0    1       0    1
or           0    1       0    1
part         0    1       0    1
pathetic     0    1       0    1
plot         1    1       1    1
satire       1    0       1    0
scenes       1    2       1    2
the          0    2       0    1
twists       1    1       1    1
was          0    2       0    1
worst        0    1       0    1

Figure 4.3 An example of binarization for the binary naive Bayes algorithm.
A second important addition commonly made when doing text classification for
sentiment is to deal with negation. Consider the difference between I really like this
movie (positive) and I didn’t like this movie (negative). The negation expressed by
didn’t completely alters the inferences we draw from the predicate like. Similarly,
negation can modify a negative word to produce a positive review (don’t dismiss this
film, doesn’t let us get bored).
A very simple baseline that is commonly used in sentiment analysis to deal with
negation is the following: during text normalization, prepend the prefix NOT to
every word after a token of logical negation (n’t, not, no, never) until the next punc-
tuation mark. Thus the phrase
didn’t like this movie , but I
becomes
didn’t NOT_like NOT_this NOT_movie , but I
Newly formed ‘words’ like NOT_like, NOT_recommend will thus occur more often in negative documents and act as cues for negative sentiment, while words like NOT_bored, NOT_dismiss will acquire positive associations. Syntactic parsing (Chapter 18) can be used to deal more accurately with the scope relationship between
these negation words and the predicates they modify, but this simple baseline works
quite well in practice.
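A minimal sketch of this prepending baseline (the exact negation list and the function name are our own illustrative choices):

```python
NEGATION_WORDS = {"not", "no", "never"}
PUNCTUATION = {".", ",", "!", "?", ";", ":"}

def is_negation(tok):
    return tok in NEGATION_WORDS or tok.endswith("n't")

def mark_negation(tokens):
    """Prepend NOT_ to every token after a logical negation, up to the
    next punctuation mark (the simple baseline described in the text)."""
    out, negating = [], False
    for tok in tokens:
        if tok in PUNCTUATION:
            negating = False
            out.append(tok)
        elif negating:
            out.append("NOT_" + tok)
        else:
            out.append(tok)
        if is_negation(tok):
            negating = True
    return out

print(mark_negation("didn't like this movie , but I".split()))
# ["didn't", 'NOT_like', 'NOT_this', 'NOT_movie', ',', 'but', 'I']
```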
Finally, in some situations we might have insufficient labeled training data to
train accurate naive Bayes classifiers using all words in the training set to estimate
positive and negative sentiment. In such cases we can instead derive the positive
and negative word features from sentiment lexicons, lists of words that are pre-annotated with positive or negative sentiment. Four popular lexicons are the General Inquirer (Stone et al., 1966), LIWC (Pennebaker et al., 2007), the opinion lexicon of Hu and Liu (2004a), and the MPQA Subjectivity Lexicon (Wilson et al., 2005).
For example the MPQA subjectivity lexicon has 6885 words each marked for
whether it is strongly or weakly biased positive or negative. Some examples:
+ : admirable, beautiful, confident, dazzling, ecstatic, favor, glee, great
− : awful, bad, bias, catastrophe, cheat, deny, envious, foul, harsh, hate
A common way to use lexicons in a naive Bayes classifier is to add a feature
that is counted whenever a word from that lexicon occurs. Thus we might add a
feature called ‘this word occurs in the positive lexicon’, and treat all instances of
words in the lexicon as counts for that one feature, instead of counting each word
separately. Similarly, we might add as a second feature ‘this word occurs in the
negative lexicon’ of words in the negative lexicon. If we have lots of training data,
and if the test data matches the training data, using just two features won’t work as
well as using all the words. But when training data is sparse or not representative of
the test set, using dense lexicon features instead of sparse individual-word features
may generalize better.
We’ll return to this use of lexicons in Chapter 22, showing how these lexicons
can be learned automatically, and how they can be applied to many other tasks be-
yond sentiment classification.
Thus consider a naive Bayes model with the classes positive (+) and negative (-)
and the following model parameters:
w P(w|+) P(w|-)
I 0.1 0.2
love 0.1 0.001
this 0.01 0.01
fun 0.05 0.005
film 0.1 0.1
... ... ...
Each of the two columns above instantiates a language model that can assign a
probability to the sentence “I love this fun film”:
P(“I love this fun film”|+) = 0.1 × 0.1 × 0.01 × 0.05 × 0.1 = 5 × 10−7
P(“I love this fun film”|−) = 0.2 × 0.001 × 0.01 × 0.005 × 0.1 = 1.0 × 10−9
Figure 4.4 A confusion matrix for visualizing how well a binary classification system per-
forms against gold standard labels.
That’s why instead of accuracy we generally turn to two other metrics shown in
Fig. 4.4: precision and recall. Precision measures the percentage of the items that the system detected (i.e., the system labeled as positive) that are in fact positive (i.e., are positive according to the human gold labels). Precision is defined as

\text{Precision} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}}
Recall measures the percentage of items actually present in the input that were correctly identified by the system. Recall is defined as

\text{Recall} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}}
Precision and recall will help solve the problem with the useless “nothing is
pie” classifier. This classifier, despite having a fabulous accuracy of 99.99%, has
a terrible recall of 0 (since there are no true positives, and 100 false negatives, the
recall is 0/100). You should convince yourself that the precision at finding relevant
tweets is equally problematic. Thus precision and recall, unlike accuracy, emphasize
true positives: finding the things that we are supposed to be looking for.
There are many ways to define a single metric that incorporates aspects of both
precision and recall. The simplest of these combinations is the F-measure (van Rijsbergen, 1975), defined as:

F_\beta = \frac{(\beta^2 + 1) P R}{\beta^2 P + R}

The parameter β differentially weights the importance of recall and precision, based perhaps on the needs of an application. Values of β > 1 favor recall, while values of β < 1 favor precision. When β = 1, precision and recall are equally balanced; this is the most frequently used metric, and is called F_{β=1} or just F_1:

F_1 = \frac{2PR}{P + R}        (4.16)
F-measure comes from a weighted harmonic mean of precision and recall. The
harmonic mean of a set of numbers is the reciprocal of the arithmetic mean of recip-
rocals:
\text{HarmonicMean}(a_1, a_2, a_3, \ldots, a_n) = \frac{n}{\frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \cdots + \frac{1}{a_n}}        (4.17)
Harmonic mean is used because the harmonic mean of two values is closer to the
minimum of the two values than the arithmetic mean is. Thus it weighs the lower of
the two numbers more heavily, which is more conservative in this situation.
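These definitions translate directly into code; a short sketch with hypothetical counts:

```python
def precision_recall_f1(tp, fp, fn, beta=1.0):
    """Precision, recall, and F_beta from raw true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_beta = (beta**2 + 1) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f_beta

# Hypothetical counts from some binary classifier.
print(precision_recall_f1(tp=70, fp=30, fn=10))
# precision = 0.70, recall = 0.875, F1 = 2PR/(P+R) ≈ 0.778
```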
                        gold labels
                   urgent   normal   spam
system   urgent       8       10       1      precision_u = 8/(8+10+1)
output   normal       5       60      50      precision_n = 60/(5+60+50)
         spam         3       30     200      precision_s = 200/(3+30+200)

recall_u = 8/(8+5+3)      recall_n = 60/(10+60+30)      recall_s = 200/(1+50+200)

Figure 4.5 Confusion matrix for a three-class categorization task, showing for each pair of classes (c1, c2), how many documents from c1 were (in)correctly assigned to c2.
But we’ll need to slightly modify our definitions of precision and recall. Con-
sider the sample confusion matrix for a hypothetical 3-way one-of email catego-
rization decision (urgent, normal, spam) shown in Fig. 4.5. The matrix shows, for
example, that the system mistakenly labeled one spam document as urgent, and we
have shown how to compute a distinct precision and recall value for each class. In
order to derive a single metric that tells us how well the system is doing, we can com-
bine these values in two ways. In macroaveraging, we compute the performance for each class, and then average over classes. In microaveraging, we collect the de-
cisions for all classes into a single confusion matrix, and then compute precision and
recall from that table. Fig. 4.6 shows the confusion matrix for each class separately,
and shows the computation of microaveraged and macroaveraged precision.
As the figure shows, a microaverage is dominated by the more frequent class (in
this case spam), since the counts are pooled. The macroaverage better reflects the
statistics of the smaller classes, and so is more appropriate when performance on all
the classes is equally important.
macroaveraged precision = (.42 + .52 + .86) / 3 = .60

Figure 4.6 Separate confusion matrices for the 3 classes from the previous figure, showing the pooled confusion matrix and the microaveraged and macroaveraged precision.
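Using the per-class counts from Fig. 4.5, here is a sketch of the two ways of pooling precision (variable names ours):

```python
# Per-class (true positive, false positive) counts from the 3-way
# confusion matrix in Fig. 4.5; fp is the rest of each system-output row.
per_class = {
    "urgent": (8, 10 + 1),
    "normal": (60, 5 + 50),
    "spam":   (200, 3 + 30),
}

# Macroaverage: average the per-class precisions.
precisions = [tp / (tp + fp) for tp, fp in per_class.values()]
macro_precision = sum(precisions) / len(precisions)

# Microaverage: pool the counts first, then compute one precision.
total_tp = sum(tp for tp, _ in per_class.values())
total_fp = sum(fp for _, fp in per_class.values())
micro_precision = total_tp / (total_tp + total_fp)

print(round(macro_precision, 2))  # ≈ 0.60, as in Fig. 4.6
print(round(micro_precision, 2))  # ≈ 0.73, dominated by the frequent spam class
```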
The training and testing procedure for text classification follows what we saw with
language modeling (Section 3.2): we use the training set to train the model, then use
development the development test set (also called a devset) to perhaps tune some parameters,
test set
devset and in general decide what the best model is. Once we come up with what we think
is the best model, we run it on the (hitherto unseen) test set to report its performance.
While the use of a devset avoids overfitting the test set, having a fixed train-
ing set, devset, and test set creates another problem: in order to save lots of data
for training, the test set (or devset) might not be large enough to be representative.
Wouldn’t it be better if we could somehow use all our data for training and still use
cross-validation all our data for test? We can do this by cross-validation.
In cross-validation, we choose a number k, and partition our data into k disjoint
folds subsets called folds. Now we choose one of those k folds as a test set, train our
classifier on the remaining k − 1 folds, and then compute the error rate on the test
set. Then we repeat with another fold as the test set, again training on the other k − 1
folds. We do this sampling process k times and average the test set error rate from
these k runs to get an average error rate. If we choose k = 10, we would train 10
different models (each on 90% of our data), test the model 10 times, and average
10-fold these 10 values. This is called 10-fold cross-validation.
cross-validation
The only problem with cross-validation is that because all the data is used for
testing, we need the whole corpus to be blind; we can’t examine any of the data
to suggest possible features and in general see what’s going on, because we’d be
peeking at the test set, and such cheating would cause us to overestimate the perfor-
mance of our system. However, looking at the corpus to understand what’s going
on is important in designing NLP systems! What to do? For this reason, it is com-
mon to create a fixed training set and test set, then do 10-fold cross-validation inside
the training set, but compute error rate the normal way in the test set, as shown in
Fig. 4.7.
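A minimal sketch (ours) of k-fold cross-validation inside the training set; train_and_score is a placeholder function, assumed to train a classifier on one set of examples and return its error rate on another:

import random

def cross_validate(train_data, train_and_score, k=10, seed=0):
    """Average error rate over k folds of the training data."""
    data = list(train_data)
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]           # k disjoint folds
    errors = []
    for i in range(k):
        held_out = folds[i]                          # one fold plays the role of test set
        rest = [ex for j, f in enumerate(folds) if j != i for ex in f]
        errors.append(train_and_score(rest, held_out))
    return sum(errors) / k                           # average error over the k runs

As the figure describes, the final model would then be trained on the full training set and evaluated once, the normal way, on the held-out test set.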
4.9 Statistical Significance Testing
Suppose we want to compare two classifiers, A (our logistic regression classifier) and B (our naive Bayes classifier), on the same test set x, using some metric M like F1; let δ(x) = M(A, x) − M(B, x) be the difference in their performance. We would like to know if δ(x) > 0, meaning that our logistic regression classifier
effect size has a higher F1 than our naive Bayes classifier on x. δ(x) is called the effect size; a bigger δ means that A seems to be way better than B; a small δ means A seems to be only a little better.
Why don’t we just check if δ(x) is positive? Suppose we do, and we find that
the F1 score of A is higher than B’s by .04. Can we be certain that A is better? We
cannot! That’s because A might just be accidentally better than B on this particular x.
We need something more: we want to know if A’s superiority over B is likely to hold
again if we checked another test set x′ , or under some other set of circumstances.
In the paradigm of statistical hypothesis testing, we test this by formalizing two
hypotheses.
    H0: δ(x) ≤ 0
    H1: δ(x) > 0    (4.20)

null hypothesis The hypothesis H0, called the null hypothesis, supposes that δ(x) is actually negative or zero, meaning that A is not better than B. We would like to know if we can
confidently rule out this hypothesis, and instead support H1 , that A is better.
We do this by creating a random variable X ranging over all test sets. Now we
ask how likely is it, if the null hypothesis H0 was correct, that among these test sets
we would encounter the value of δ(x) that we found, if we repeated the experiment
p-value a great many times. We formalize this likelihood as the p-value: the probability, assuming the null hypothesis H0 is true, of seeing the δ(x) that we saw or one even greater:

    P(δ(X) ≥ δ(x) | H0 is true)    (4.21)
So in our example, this p-value is the probability that we would see δ(x) assuming A is not better than B. If δ(x) is huge (let’s say A has a very respectable F1 of .9 and B has a terrible F1 of only .2 on x), we might be surprised, since that would be extremely unlikely to occur if H0 were in fact true, and so the p-value would be low (unlikely to have such a large δ if A is in fact not better than B). But if δ(x) is very small, it might be less surprising to us even if H0 were true and A is not really better than B, and so the p-value would be higher.
A very small p-value means that the difference we observed is very unlikely
under the null hypothesis, and we can reject the null hypothesis. What counts as very
small? It is common to use values like .05 or .01 as the thresholds. A value of .01 means that if the p-value (the probability of observing the δ we saw assuming H0 is true) is less than .01, we reject the null hypothesis and assume that A is indeed better
statistically significant than B. We say that a result (e.g., “A is better than B”) is statistically significant if the δ we saw has a probability that is below the threshold and we therefore reject
this null hypothesis.
How do we compute this probability we need for the p-value? In NLP we gen-
erally don’t use simple parametric tests like t-tests or ANOVAs that you might be
familiar with. Parametric tests make assumptions about the distributions of the test
statistic (such as normality) that don’t generally hold in our cases. So in NLP we
usually use non-parametric tests based on sampling: we artificially create many ver-
sions of the experimental setup. For example, if we had lots of different test sets x′
we could just measure all the δ(x′) for all the x′. That gives us a distribution. Now we set a threshold (like .01) and if we see in this distribution that 99% or more of those deltas are smaller than the delta we observed, i.e., that p-value(x)—the probability of seeing a δ(x) as big as the one we saw—is less than .01, then we can reject the null hypothesis and agree that δ(x) was a sufficiently surprising difference and A is really a better algorithm than B.
There are two common non-parametric tests used in NLP: approximate randomization (Noreen, 1989) and the bootstrap test. We will describe bootstrap
approximate randomization
below, showing the paired version of the test, which again is most common in NLP.
paired Paired tests are those in which we compare two sets of observations that are aligned:
each observation in one set can be paired with an observation in another. This hap-
pens naturally when we are comparing the performance of two systems on the same
test set; we can pair the performance of system A on an individual observation xi
with the performance of system B on the same xi .
Consider a tiny text classification example with a test set x of 10 documents. The
first row of Fig. 4.8 shows the results of two classifiers (A and B) on this test set.
Each document is labeled by one of the four possibilities (A and B both right, both
wrong, A right and B wrong, A wrong and B right). A slash through a letter (B̸) means that that classifier got the answer wrong. On the first document both A and B get the correct class (AB), while on the second document A got it right but B got it wrong (AB̸). If we assume for simplicity that our metric is accuracy, A has an accuracy of .70 and B of .50, so δ(x) is .20.
Now we create a large number b (perhaps 10⁵) of virtual test sets x(i), each of size
n = 10. Fig. 4.8 shows a couple of examples. To create each virtual test set x(i) , we
repeatedly (n = 10 times) select a cell from row x with replacement. For example, to
create the first cell of the first virtual test set x(1) , if we happened to randomly select
the second cell of the x row; we would copy the value A B into our new cell, and
move on to create the second cell of x(1) , each time sampling (randomly choosing)
from the original x with replacement.
        1  2  3  4  5  6  7  8  9  10                                A%    B%    δ()
x       (the per-document outcomes of classifiers A and B)           .70   .50   .20
x(1)    (a sample of the cells of x, drawn with replacement)         .60   .60   .00
x(2)    (another sample of the cells of x, drawn with replacement)   .60   .70   −.10
...
x(b)
Figure 4.8 The paired bootstrap test: Examples of b pseudo test sets x(i) being created
from an initial true test set x. Each pseudo test set is created by sampling n = 10 times with
replacement; thus an individual sample is a single cell, a document with its gold label and
the correct or incorrect performance of classifiers A and B. Of course real test sets don’t have
only 10 examples, and b needs to be large as well.
Now that we have the b test sets, providing a sampling distribution, we can do
statistics on how often A has an accidental advantage. There are various ways to
compute this advantage; here we follow the version laid out in Berg-Kirkpatrick
et al. (2012). Assuming H0 (A isn’t better than B), we would expect that δ(X), estimated over many test sets, would be zero or negative; a much higher value would be surprising, since H0 specifically assumes A isn’t better than B. To measure exactly how surprising our observed δ(x) is, we would in other circumstances compute the p-value by counting over many test sets how often δ(x(i)) exceeds the expected zero value by δ(x) or more:

    p-value(x) = (1/b) Σ_{i=1}^{b} 𝟙(δ(x(i)) − δ(x) ≥ 0)
(We use the notation 𝟙(x) to mean “1 if x is true, and 0 otherwise”.) However, although it’s generally true that the expected value of δ(X) over many test sets (again assuming A isn’t better than B) is 0, this isn’t true for the bootstrapped test
sets we created. That’s because we didn’t draw these samples from a distribution
with 0 mean; we happened to create them from the original test set x, which happens
to be biased (by .20) in favor of A. So to measure how surprising our observed δ(x) is, we actually compute the p-value by counting over many test sets how often δ(x(i)) exceeds the observed δ(x) by δ(x) or more:

    p-value(x) = (1/b) Σ_{i=1}^{b} 𝟙(δ(x(i)) − δ(x) ≥ δ(x))
So if for example we have 10,000 test sets x(i) and a threshold of .01, and in only 47 of the test sets do we find that A is accidentally better (δ(x(i)) ≥ 2δ(x)), the resulting p-value of .0047 is smaller than .01, indicating that the delta we found, δ(x), is indeed sufficiently surprising and unlikely to have happened by accident, and we can reject the null hypothesis and conclude A is better than B.
Figure 4.9 A version of the paired bootstrap algorithm after Berg-Kirkpatrick et al. (2012).
The full algorithm for the bootstrap is shown in Fig. 4.9. It is given a test set x, a number of samples b, and counts the percentage of the b bootstrap test sets in which δ(x*(i)) > 2δ(x). This percentage then acts as a one-sided empirical p-value.
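Since only the caption of Fig. 4.9 survives here, the following is our own sketch of the paired bootstrap test it describes (following the Berg-Kirkpatrick et al. (2012) setup). We assume the metric is accuracy and that a_correct and b_correct are parallel 0/1 vectors giving each system's per-document correctness; the per-document pattern below is hypothetical, chosen only so that A is right on 7 of the 10 documents and B on 5, matching Fig. 4.8.

import random

def paired_bootstrap(a_correct, b_correct, b_samples=100_000, seed=0):
    """One-sided empirical p-value for H0: A is not better than B."""
    rng = random.Random(seed)
    n = len(a_correct)
    delta_x = (sum(a_correct) - sum(b_correct)) / n       # observed delta(x)
    exceed = 0
    for _ in range(b_samples):
        idx = [rng.randrange(n) for _ in range(n)]        # sample n cells with replacement
        delta_i = sum(a_correct[j] - b_correct[j] for j in idx) / n
        # Count how often delta(x(i)) - delta(x) >= delta(x),
        # i.e. delta(x(i)) >= 2 * delta(x).
        if delta_i >= 2 * delta_x:
            exceed += 1
    return exceed / b_samples

a = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]   # hypothetical per-document correctness for A (.70)
b = [1, 0, 1, 0, 1, 1, 0, 0, 0, 1]   # hypothetical per-document correctness for B (.50)
print(paired_bootstrap(a, b, b_samples=10_000))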
4.10 Avoiding Harms in Classification

toxicity detection
Toxicity detection is the task of detecting hate speech, abuse, harassment, or other kinds of toxic language. While the goal of such classifiers is to help reduce societal harm, toxicity classifiers can themselves cause harms. For example, researchers
have shown that some widely used toxicity classifiers incorrectly flag as being toxic
sentences that are non-toxic but simply mention identities like women (Park et al.,
2018), blind people (Hutchinson et al., 2020) or gay people (Dixon et al., 2018;
Dias Oliva et al., 2021), or simply use linguistic features characteristic of varieties
like African-American Vernacular English (Sap et al. 2019, Davidson et al. 2019).
Such false positive errors could lead to the silencing of discourse by or about these
groups.
These model problems can be caused by biases or other problems in the training
data; in general, machine learning systems replicate and even amplify the biases
in their training data. But these problems can also be caused by the labels (for
example due to biases in the human labelers), by the resources used (like lexicons,
or model components like pretrained embeddings), or even by model architecture
(like what the model is trained to optimize). While the mitigation of these biases
(for example by carefully considering the training data sources) is an important area
of research, we currently don’t have general solutions. For this reason it’s important,
when introducing any NLP model, to study these kinds of factors and make them
model card clear. One way to do this is by releasing a model card (Mitchell et al., 2019) for
each version of a model. A model card documents a machine learning model with
information like:
• training algorithms and parameters
• training data sources, motivation, and preprocessing
• evaluation data sources, motivation, and preprocessing
• intended use and users
• model performance across different demographic or other groups and envi-
ronmental situations
4.11 Summary
This chapter introduced the naive Bayes model for classification and applied it to
the text categorization task of sentiment analysis.
• Many language processing tasks can be viewed as tasks of classification.
• Text categorization, in which an entire text is assigned a class from a finite set,
includes such tasks as sentiment analysis, spam detection, language identi-
fication, and authorship attribution.
• Sentiment analysis classifies a text as reflecting the positive or negative orien-
tation (sentiment) that a writer expresses toward some object.
• Naive Bayes is a generative model that makes the bag-of-words assumption
(position doesn’t matter) and the conditional independence assumption (words
are conditionally independent of each other given the class).
• Naive Bayes with binarized features seems to work better for many text clas-
sification tasks.
• Classifiers are evaluated based on precision and recall.
• Classifiers are trained using distinct training, dev, and test sets, including the
use of cross-validation in the training set.
Exercises
4.1 Assume the following likelihoods for each word being part of a positive or
negative movie review, and equal prior probabilities for each class.
pos neg
I 0.09 0.16
always 0.07 0.06
like 0.29 0.06
foreign 0.04 0.15
films 0.08 0.11
What class will naive Bayes assign to the sentence “I always like foreign
films.”?
4.2 Given the following short movie reviews, each labeled with a genre, either
comedy or action:
1. fun, couple, love, love comedy
2. fast, furious, shoot action
3. couple, fly, fast, fun, fun comedy
4. furious, shoot, shoot, fun action
5. fly, fast, shoot, love action
and a new document D:
fast, couple, shoot, fly
compute the most likely class for D. Assume a naive Bayes classifier and use
add-1 smoothing for the likelihoods.
4.3 Train two models, multinomial naive Bayes and binarized naive Bayes, both
with add-1 smoothing, on the following document counts for key sentiment
words, with positive or negative class assigned as noted.
doc “good” “poor” “great” (class)
d1. 3 0 3 pos
d2. 0 1 2 pos
d3. 1 3 0 neg
d4. 1 5 2 neg
d5. 0 2 0 neg
Use both naive Bayes models to assign a class (pos or neg) to this sentence:
A good, good plot and great characters, but poor acting.
Recall from page 61 that with naive Bayes text classification, we simply ig-
nore (throw out) any word that never occurred in the training document. (We
don’t throw out words that appear in some classes but not others; that’s what
add-one smoothing is for.) Do the two models agree or disagree?
CHAPTER
5 Logistic Regression
“And how do you know that these fine begonias are not of equal importance?”
Hercule Poirot, in Agatha Christie’s The Mysterious Affair at Styles
Detective stories are as littered with clues as texts are with words. Yet for the
poor reader it can be challenging to know how to weigh the author’s clues in order
to make the crucial classification task: deciding whodunnit.
In this chapter we introduce an algorithm that is admirably suited for discovering
logistic
regression the link between features or clues and some particular outcome: logistic regression.
Indeed, logistic regression is one of the most important analytic tools in the social
and natural sciences. In natural language processing, logistic regression is the base-
line supervised machine learning algorithm for classification, and also has a very
close relationship with neural networks. As we will see in Chapter 7, a neural net-
work can be viewed as a series of logistic regression classifiers stacked on top of
each other. Thus the classification and machine learning techniques introduced here
will play an important role throughout the book.
Logistic regression can be used to classify an observation into one of two classes
(like ‘positive sentiment’ and ‘negative sentiment’), or into one of many classes.
Because the mathematics for the two-class case is simpler, we’ll describe this special
case of logistic regression first in the next few sections, and then briefly summarize
the use of multinomial logistic regression for more than two classes in Section 5.3.
We’ll introduce the mathematics of logistic regression in the next few sections.
But let’s begin with some high-level issues.
Generative and Discriminative Classifiers: The most important difference be-
tween naive Bayes and logistic regression is that logistic regression is a discrimina-
tive classifier while naive Bayes is a generative classifier.
These are two very different frameworks for how
to build a machine learning model. Consider a visual
metaphor: imagine we’re trying to distinguish dog
images from cat images. A generative model would
have the goal of understanding what dogs look like
and what cats look like. You might literally ask such
a model to ‘generate’, i.e., draw, a dog. Given a test
image, the system then asks whether it’s the cat model or the dog model that better
fits (is less surprised by) the image, and chooses that as its label.
A discriminative model, by contrast, is only try-
ing to learn to distinguish the classes (perhaps with-
out learning much about them). So maybe all the
dogs in the training data are wearing collars and the
cats aren’t. If that one feature neatly separates the
classes, the model is satisfied. If you ask such a
model what it knows about cats all it can say is that
they don’t wear collars.
More formally, recall that naive Bayes assigns a class c to a document d not
by directly computing P(c|d) but by computing a likelihood and a prior
    ĉ = argmax_{c∈C} P(d|c) P(c)    (5.1)
generative A generative model like naive Bayes makes use of this likelihood term, which
model
expresses how to generate the features of a document if we knew it was of class c.
discriminative By contrast a discriminative model in this text categorization scenario attempts
model
to directly compute P(c|d). Perhaps it will learn to assign a high weight to document
features that directly improve its ability to discriminate between possible classes,
even if it couldn’t generate an example of one of the classes.
Components of a probabilistic machine learning classifier: Like naive Bayes,
logistic regression is a probabilistic classifier that makes use of supervised machine
learning. Machine learning classifiers require a training corpus of m input/output
pairs (x(i) , y(i) ). (We’ll use superscripts in parentheses to refer to individual instances
in the training set—for sentiment classification each instance might be an individual
document to be classified.) A machine learning system for classification then has
four components:
1. A feature representation of the input. For each input observation x(i), this will be a vector of features [x1, x2, ..., xn]. We will generally refer to feature i for input x(j) as xi(j), sometimes simplified as xi, but we will also see the notation fi, fi(x), or, for multiclass classification, fi(c, x).
2. A classification function that computes ŷ, the estimated class, via p(y|x). In
the next section we will introduce the sigmoid and softmax tools for classifi-
cation.
3. An objective function that we want to optimize for learning, usually involving
minimizing a loss function corresponding to error on training examples. We
will introduce the cross-entropy loss function.
4. An algorithm for optimizing the objective function. We introduce the stochas-
tic gradient descent algorithm.
Logistic regression has two phases:
training: We train the system (specifically the weights w and b, introduced be-
low) using stochastic gradient descent and the cross-entropy loss.
test: Given a test example x we compute p(y|x) and return the higher probability
label y = 1 or y = 0.
We want to know the probability P(y = 1|x) that this observation is a member of the class. So perhaps the decision
is “positive sentiment” versus “negative sentiment”, the features represent counts of
words in a document, P(y = 1|x) is the probability that the document has positive
sentiment, and P(y = 0|x) is the probability that the document has negative senti-
ment.
Logistic regression solves this task by learning, from a training set, a vector of
weights and a bias term. Each weight wi is a real number, and is associated with one
of the input features xi . The weight wi represents how important that input feature
is to the classification decision, and can be positive (providing evidence that the in-
stance being classified belongs in the positive class) or negative (providing evidence
that the instance being classified belongs in the negative class). Thus we might
expect in a sentiment task the word awesome to have a high positive weight, and
bias term abysmal to have a very negative weight. The bias term, also called the intercept, is
intercept another real number that’s added to the weighted inputs.
To make a decision on a test instance—after we’ve learned the weights in training—
the classifier first multiplies each xi by its weight wi , sums up the weighted features,
and adds the bias term b. The resulting single number z expresses the weighted sum
of the evidence for the class.
    z = ( Σ_{i=1}^{n} wi xi ) + b    (5.2)
dot product In the rest of the book we’ll represent such sums using the dot product notation
from linear algebra. The dot product of two vectors a and b, written as a · b, is the
sum of the products of the corresponding elements of each vector. (Notice that we
represent vectors using the boldface notation b). Thus the following is an equivalent
formation to Eq. 5.2:
z = w·x+b (5.3)
But note that nothing in Eq. 5.3 forces z to be a legal probability, that is, to lie
between 0 and 1. In fact, since weights are real-valued, the output might even be
negative; z ranges from −∞ to ∞.
Figure 5.1 The sigmoid function σ(z) = 1/(1 + e^(−z)) takes a real value and maps it to the range (0, 1). It is nearly linear around 0 but outlier values get squashed toward 0 or 1.
sigmoid To create a probability, we’ll pass z through the sigmoid function, σ (z). The
sigmoid function (named because it looks like an s) is also called the logistic func-
logistic tion, and gives logistic regression its name. The sigmoid has the following equation,
function
shown graphically in Fig. 5.1:
    σ(z) = 1/(1 + e^(−z)) = 1/(1 + exp(−z))    (5.4)
(For the rest of the book, we’ll use the notation exp(x) to mean ex .) The sigmoid
has a number of advantages; it takes a real-valued number and maps it into the range
(0, 1), which is just what we want for a probability. Because it is nearly linear around
0 but flattens toward the ends, it tends to squash outlier values toward 0 or 1. And
it’s differentiable, which as we’ll see in Section 5.10 will be handy for learning.
We’re almost there. If we apply the sigmoid to the sum of the weighted features,
we get a number between 0 and 1. To make it a probability, we just need to make
sure that the two cases, p(y = 1) and p(y = 0), sum to 1. We can do this as follows:
    P(y = 1) = σ(w · x + b)
             = 1 / (1 + exp(−(w · x + b)))

    P(y = 0) = 1 − σ(w · x + b)
             = 1 − 1 / (1 + exp(−(w · x + b)))
             = exp(−(w · x + b)) / (1 + exp(−(w · x + b)))    (5.5)
    logit(p) = σ^(−1)(p) = ln( p / (1 − p) )    (5.7)
Using the term logit for z is a way of reminding us that by using the sigmoid to turn
z (which ranges from −∞ to ∞) into a probability, we are implicitly interpreting z as
not just any real-valued number, but as specifically a log odds.
Let’s have some examples of applying logistic regression as a classifier for language
tasks.
It's hokey . There are virtually no surprises , and the writing is second-rate . So why was it so enjoyable ? For one thing , the cast is great . Another nice touch is the music . I was overcome with the urge to get off the couch and start dancing . It sucked me in , and it'll do the same to you .
Figure 5.2 A sample mini test document showing the extracted features in the vector x: x1 = 3, x2 = 2, x3 = 1, x4 = 3, x5 = 0, x6 = 4.19.
Let’s assume for the moment that we’ve already learned a real-valued weight
for each of these features, and that the 6 weights corresponding to the 6 features
are [2.5, −5.0, −1.2, 0.5, 2.0, 0.7], while b = 0.1. (We’ll discuss in the next section
how the weights are learned.) The weight w1 , for example indicates how important
a feature the number of positive lexicon words (great, nice, enjoyable, etc.) is to
a positive sentiment decision, while w2 tells us the importance of negative lexicon
words. Note that w1 = 2.5 is positive, while w2 = −5.0, meaning that negative words
are negatively associated with a positive sentiment decision, and are about twice as
important as positive words.
Given these 6 features and the input review x, P(+|x) and P(−|x) can be com-
puted using Eq. 5.5:
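The worked computation itself (Eq. 5.8) did not survive in this excerpt, so here is a small sketch of it (ours), using the feature values from Fig. 5.2 and the weights above:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [3, 2, 1, 3, 0, 4.19]                     # feature values from Fig. 5.2
w = [2.5, -5.0, -1.2, 0.5, 2.0, 0.7]
b = 0.1

z = sum(wi * xi for wi, xi in zip(w, x)) + b  # weighted sum of the evidence
p_pos = sigmoid(z)                            # P(+|x)
p_neg = 1.0 - p_pos                           # P(-|x)
print(round(z, 3), round(p_pos, 2), round(p_neg, 2))   # ~0.833, 0.70, 0.30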
normalize Alternatively, we can normalize the input feature values to lie between 0 and 1:

    x′i = (xi − min(xi)) / (max(xi) − min(xi))    (5.10)
Having input data with comparable range is useful when comparing values across
features. Data scaling is especially important in large neural networks, since it helps
speed up gradient descent.
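A minimal sketch (ours) of this min–max scaling applied to one feature's values across a set of examples; the word_counts numbers are hypothetical:

def minmax_scale(values):
    """Rescale a list of feature values to lie in [0, 1] (Eq. 5.10)."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant feature: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

word_counts = [120, 64, 302, 15]      # hypothetical values of one feature over 4 documents
print(minmax_scale(word_counts))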
For the first 3 test examples, then, we would be separately computing the pre-
dicted ŷ(i) as follows:
But it turns out that we can slightly modify our original equation Eq. 5.5 to do
this much more efficiently. We’ll use matrix arithmetic to assign a class to all the
examples with one matrix operation!
First, we’ll pack all the input feature vectors for each input x into a single input
matrix X, where each row i is a row vector consisting of the feature vector for in-
put example x(i) (i.e., the vector x(i) ). Assuming each example has f features and
weights, X will therefore be a matrix of shape [m × f ], as follows:
    X = [ x1(1)  x2(1)  . . .  xf(1)
          x1(2)  x2(2)  . . .  xf(2)
          x1(3)  x2(3)  . . .  xf(3)
          ...                        ]    (5.12)
y = Xw + b (5.13)
You should convince yourself that Eq. 5.13 computes the same thing as our for-loop
in Eq. 5.11. For example ŷ(1) , the first entry of the output vector y, will correctly be:
    ŷ(1) = [x1(1), x2(1), ..., xf(1)] · [w1, w2, ..., wf] + b    (5.14)
Note that we had to reorder X and w from the order in which they appeared in Eq. 5.5 to
make the multiplications come out properly. Here is Eq. 5.13 again with the shapes
shown:
      y    =    X      w    +    b
    (m×1)    (m×f)  (f×1)     (m×1)    (5.15)
Modern compilers and compute hardware can compute this matrix operation very
efficiently, making the computation much faster, which becomes important when
training or testing on very large datasets.
Note by the way that we could have kept X and w in the original order (y =
Xw + b) if we had chosen to define X differently as a matrix of column vectors, one
vector for each input example, instead of row vectors, and then it would have shape
[ f × m]. But we conventionally represent inputs as rows.
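A small numpy sketch (ours) contrasting the per-example loop with the single matrix operation of Eq. 5.13; the data here is random, just to show shapes:

import numpy as np

rng = np.random.default_rng(0)
m, f = 4, 6                              # toy sizes: 4 examples, 6 features
X = rng.random((m, f))                   # each row is one example's feature vector
w = rng.random(f)
b = 0.1

# Loop version (Eq. 5.11 style): one dot product per example.
y_loop = np.array([X[i] @ w + b for i in range(m)])

# Vectorized version (Eq. 5.13): one matrix-vector product for the whole batch.
y_vec = X @ w + b

assert np.allclose(y_loop, y_vec)
probs = 1.0 / (1.0 + np.exp(-y_vec))     # sigmoid applied elementwise to get P(y=1|x)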
the correct one (sometimes called hard classification; an observation can not be in
multiple classes). Let’s use the following representation: the output y for each input
x will be a vector of length K. If class c is the correct class, we’ll set yc = 1, and
set all the other elements of y to be 0, i.e., yc = 1 and yj = 0 ∀ j ≠ c. A vector like
this y, with one value=1 and the rest 0, is called a one-hot vector. The job of the
classifier is to produce an estimate vector ŷ. For each class k, the value ŷk will be
the classifier’s estimate of the probability p(yk = 1|x).
5.3.1 Softmax
The multinomial logistic classifier uses a generalization of the sigmoid, called the
softmax softmax function, to compute p(yk = 1|x). The softmax function takes a vector
z = [z1 , z2 , ..., zK ] of K arbitrary values and maps them to a probability distribution,
with each value in the range [0,1], and all the values summing to 1. Like the sigmoid,
it is an exponential function.
For a vector z of dimensionality K, the softmax is defined as:
    softmax(zi) = exp(zi) / Σ_{j=1}^{K} exp(zj)    1 ≤ i ≤ K    (5.16)
For example, applying the softmax to a vector of scores might yield the distribution [0.05, 0.09, 0.01, 0.1, 0.74, 0.01].
Like the sigmoid, the softmax has the property of squashing values toward 0 or 1.
Thus if one of the inputs is larger than the others, it will tend to push its probability
toward 1, and suppress the probabilities of the smaller inputs.
Finally, note that, just as for the sigmoid, we refer to z, the vector of scores that is the input to the softmax, as logits (see Eq. 5.7).
    p(yk = 1|x) = exp(wk · x + bk) / Σ_{j=1}^{K} exp(wj · x + bj)    (5.18)
The form of Eq. 5.18 makes it seem that we would compute each output sep-
arately. Instead, it’s more common to set up the equation for more efficient com-
putation by modern vector processing hardware. We’ll do this by representing the
set of K weight vectors as a weight matrix W and a bias vector b. Each row k of
W corresponds to the vector of weights wk . W thus has shape [K × f ], for K the
number of output classes and f the number of input features. The bias vector b has
one value for each of the K output classes. If we represent the weights in this way,
we can compute ŷ, the vector of output probabilities for each of the K classes, by a
single elegant equation:
ŷ = softmax(Wx + b) (5.19)
If you work out the matrix arithmetic, you can see that the estimated score of
the first output class ŷ1 (before we take the softmax) will correctly turn out to be
w 1 · x + b1 .
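A sketch (ours) of Eq. 5.19 in numpy, adding the usual max-subtraction trick for numerical stability (a standard implementation detail, not discussed in the text); the sizes and values are arbitrary:

import numpy as np

def softmax(z):
    """Map a vector of K arbitrary scores to a probability distribution."""
    z = z - np.max(z)                # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

K, f = 3, 5                          # toy sizes: 3 classes, 5 features
rng = np.random.default_rng(0)
W = rng.normal(size=(K, f))          # one row of weights per output class
b = np.zeros(K)
x = rng.normal(size=f)

y_hat = softmax(W @ x + b)           # Eq. 5.19: vector of K class probabilities
print(y_hat, y_hat.sum())            # the probabilities sum to 1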
Fig. 5.3 shows an intuition of the role of the weight vector versus weight matrix
in the computation of the output class probabilities for binary versus multinomial
logistic regression.
Feature    Definition                          w5,+    w5,−    w5,0
f5(x)      1 if “!” ∈ doc, 0 otherwise         3.5     3.1     −5.3
Because these feature weights are dependent both on the input text and the output
class, we sometimes make this dependence explicit and represent the features them-
selves as f (x, y): a function of both the input and the class. Using such a notation
f5 (x) above could be represented as three features f5 (x, +), f5 (x, −), and f5 (x, 0),
each of which has a single weight. We’ll use this kind of notation in our description
of the CRF in Chapter 17.
[Figure 5.3 diagram: the input feature vector x = [x1, x2, x3, ..., xf] (e.g., wordcount = 3, positive lexicon words = 1, count of “no” = 0) feeds a sigmoid over a single weight vector w (shape [1×f]) in the binary case, versus a softmax over a weight matrix W in the multinomial case, producing the output ŷ.]
Figure 5.3 Binary versus multinomial logistic regression. Binary logistic regression uses a
single weight vector w, and has a scalar output ŷ. In multinomial logistic regression we have
K separate weight vectors corresponding to the K classes, all packed into a single weight
matrix W, and a vector output ŷ. We omit the biases from both figures for clarity.
We do this via a loss function that prefers the correct class labels of the train-
ing examples to be more likely. This is called conditional maximum likelihood
estimation: we choose the parameters w, b that maximize the log probability of
the true y labels in the training data given the observations x. The resulting loss
cross-entropy function is the negative log likelihood loss, generally called the cross-entropy loss.
loss
Let’s derive this loss function, applied to a single observation x. We’d like to
learn weights that maximize the probability of the correct label p(y|x). Since there
are only two discrete outcomes (1 or 0), this is a Bernoulli distribution, and we can
express the probability p(y|x) that our classifier produces for one observation as the following (keeping in mind that if y = 1, Eq. 5.21 simplifies to ŷ; if y = 0, Eq. 5.21 simplifies to 1 − ŷ):

    p(y|x) = ŷ^y (1 − ŷ)^(1−y)    (5.21)
Now we take the log of both sides. This will turn out to be handy mathematically,
and doesn’t hurt us; whatever values maximize a probability will also maximize the
log of the probability:
    log p(y|x) = log [ ŷ^y (1 − ŷ)^(1−y) ]
               = y log ŷ + (1 − y) log(1 − ŷ)    (5.22)
Eq. 5.22 describes a log likelihood that should be maximized. In order to turn this into a loss function (something that we need to minimize), we’ll just flip the sign on Eq. 5.22. The result is the cross-entropy loss LCE:

    LCE(ŷ, y) = −log p(y|x) = −[ y log ŷ + (1 − y) log(1 − ŷ) ]    (5.23)

Plugging in the definition ŷ = σ(w · x + b):

    LCE(ŷ, y) = −[ y log σ(w · x + b) + (1 − y) log(1 − σ(w · x + b)) ]    (5.24)
Let’s see if this loss function does the right thing for our example from Fig. 5.2. We
want the loss to be smaller if the model’s estimate is close to correct, and bigger if
the model is confused. So first let’s suppose the correct gold label for the sentiment
example in Fig. 5.2 is positive, i.e., y = 1. In this case our model is doing well, since
from Eq. 5.8 it indeed gave the example a higher probability of being positive (.70)
than negative (.30). If we plug σ (w · x + b) = .70 and y = 1 into Eq. 5.24, the right
side of the equation drops out, leading to the following loss (we’ll use log to mean
natural log when the base is not specified):
LCE (ŷ, y) = −[y log σ (w · x + b) + (1 − y) log (1 − σ (w · x + b))]
= − [log σ (w · x + b)]
= − log(.70)
= .36
By contrast, let’s pretend instead that the example in Fig. 5.2 was actually negative,
i.e., y = 0 (perhaps the reviewer went on to say “But bottom line, the movie is
terrible! I beg you not to see it!”). In this case our model is confused and we’d want
the loss to be higher. Now if we plug y = 0 and 1 − σ (w · x + b) = .30 from Eq. 5.8
into Eq. 5.24, the left side of the equation drops out:
LCE (ŷ, y) = −[y log σ (w · x + b)+(1 − y) log (1 − σ (w · x + b))]
= − [log (1 − σ (w · x + b))]
= − log (.30)
= 1.2
Sure enough, the loss for the first classifier (.36) is less than the loss for the second
classifier (1.2).
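A tiny sketch (ours) reproducing these two loss values from the model's probability of the positive class:

import math

def binary_cross_entropy(p_pos, y):
    """Cross-entropy loss given P(y=1|x) = p_pos and the gold label y in {0, 1}."""
    return -(y * math.log(p_pos) + (1 - y) * math.log(1 - p_pos))

print(round(binary_cross_entropy(0.70, 1), 2))   # gold label positive: 0.36
print(round(binary_cross_entropy(0.70, 0), 2))   # gold label negative: 1.2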
Why does minimizing this negative log probability do what we want? A perfect
classifier would assign probability 1 to the correct outcome (y = 1 or y = 0) and
probability 0 to the incorrect outcome. That means if y equals 1, the higher ŷ is (the
closer it is to 1), the better the classifier; the lower ŷ is (the closer it is to 0), the
worse the classifier. If y equals 0, instead, the higher 1 − ŷ is (closer to 1), the better
the classifier. The negative log of ŷ (if the true y equals 1) or 1 − ŷ (if the true y
equals 0) is a convenient loss metric since it goes from 0 (negative log of 1, no loss)
to infinity (negative log of 0, infinite loss). This loss function also ensures that as
the probability of the correct answer is maximized, the probability of the incorrect
answer is minimized; since the two sum to one, any increase in the probability of the
correct answer is coming at the expense of the incorrect answer. It’s called the cross-
entropy loss, because Eq. 5.22 is also the formula for the cross-entropy between the
true probability distribution y and our estimated distribution ŷ.
Now we know what we want to minimize; in the next section, we’ll see how to
find the minimum.
5.6 Gradient Descent

How shall we find the minimum of this (or any) loss function? Gradient descent is a method that finds a minimum of a function by figuring out in which direction (in the space of the parameters θ) the function’s slope is rising the most steeply, and
moving in the opposite direction. The intuition is that if you are hiking in a canyon
and trying to descend most quickly down to the river at the bottom, you might look
around yourself in all directions, find the direction where the ground is sloping the
steepest, and walk downhill in that direction.
convex For logistic regression, this loss function is conveniently convex. A convex func-
tion has at most one minimum; there are no local minima to get stuck in, so gradient
descent starting from any point is guaranteed to find the minimum. (By contrast,
the loss for multi-layer neural networks is non-convex, and gradient descent may
get stuck in local minima for neural network training and never find the global opti-
mum.)
Although the algorithm (and the concept of gradient) are designed for direction
vectors, let’s first consider a visualization of the case where the parameter of our
system is just a single scalar w, shown in Fig. 5.4.
Given a random initialization of w at some value w1 , and assuming the loss
function L happened to have the shape in Fig. 5.4, we need the algorithm to tell us
whether at the next iteration we should move left (making w2 smaller than w1 ) or
right (making w2 bigger than w1 ) to reach the minimum.
Figure 5.4 The first step in iteratively finding the minimum of this loss function, by moving
w in the reverse direction from the slope of the function. Since the slope is negative, we need
to move w in a positive direction, to the right. Here superscripts are used for learning steps,
so w1 means the initial value of w (which is 0), w2 the value at the second step, and so on.
gradient The gradient descent algorithm answers this question by finding the gradient
of the loss function at the current point and moving in the opposite direction. The
gradient of a function of many variables is a vector pointing in the direction of the
greatest increase in a function. The gradient is a multi-variable generalization of the
slope, so for a function of one variable like the one in Fig. 5.4, we can informally
think of the gradient as the slope. The dotted line in Fig. 5.4 shows the slope of this
hypothetical loss function at point w = w1 . You can see that the slope of this dotted
line is negative. Thus to find the minimum, gradient descent tells us to go in the
opposite direction: moving w in a positive direction.
learning rate The magnitude of the amount to move in gradient descent is the value of the slope (d/dw) L(f(x; w), y) weighted by a learning rate η. A higher (faster) learning rate means that we should move w more on each step. The change we make in our
parameter is the learning rate times the gradient (or the slope, in our single-variable
example):
    w^(t+1) = w^t − η (d/dw) L(f(x; w), y)    (5.26)
Now let’s extend the intuition from a function of one scalar variable w to many
variables, because we don’t just want to move left or right, we want to know where
in the N-dimensional space (of the N parameters that make up θ) we should move.
The gradient is just such a vector; it expresses the directional components of the
sharpest slope along each of those N dimensions. If we’re just imagining two weight
dimensions (say for one weight w and one bias b), the gradient might be a vector with
two orthogonal components, each of which tells us how much the ground slopes in
the w dimension and in the b dimension. Fig. 5.5 shows a visualization of the value
of a 2-dimensional gradient vector taken at the red point.
In an actual logistic regression, the parameter vector w is much longer than 1 or
2, since the input feature vector x can be quite long, and we need a weight wi for
each xi . For each dimension/variable wi in w (plus the bias b), the gradient will have
a component that tells us the slope with respect to that variable. In each dimension
wi, we express the slope as a partial derivative ∂/∂wi of the loss function. Essentially
we’re asking: “How much would a small change in that variable wi influence the
total loss function L?”
Formally, then, the gradient of a multi-variable function f is a vector in which each component expresses the partial derivative of f with respect to one of the variables. We’ll use the inverted Greek delta symbol ∇ to refer to the gradient, and represent ŷ as f(x; θ) to make the dependence on θ more obvious:

    ∇L(f(x; θ), y) = [ ∂L(f(x; θ), y)/∂w1
                       ∂L(f(x; θ), y)/∂w2
                       ...
                       ∂L(f(x; θ), y)/∂wn
                       ∂L(f(x; θ), y)/∂b ]    (5.27)
The final equation for updating θ based on the gradient is thus

    θ^(t+1) = θ^t − η ∇L(f(x; θ), y)    (5.28)
Figure 5.5 Visualization of the gradient vector at the red point in two dimensions w and
b, showing a red arrow in the x-y plane pointing in the direction we will go to look for the
minimum: the opposite direction of the gradient (recall that the gradient points in the direction
of increase not decrease).
It turns out that the derivative of this function for one observation vector x is Eq. 5.30
(the interested reader can see Section 5.10 for the derivation of this equation):
    ∂LCE(ŷ, y)/∂wj = [σ(w · x + b) − y] xj
                   = (ŷ − y) xj    (5.30)
Figure 5.6 The stochastic gradient descent algorithm. Step 1 (computing the loss) is used mainly to report how well we are doing on the current tuple; we don’t need to compute the loss in order to compute the gradient. The algorithm can terminate when it converges (when the gradient norm < ε), or when progress halts (for example when the loss starts going up on a held-out set). Weights are initialized to 0 for logistic regression, but to small random values for neural networks, as we’ll see in Chapter 7.
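Since only the caption of Fig. 5.6 survives in this excerpt, the following is our own sketch of the stochastic gradient descent loop it describes, for binary logistic regression with the per-example gradient of Eq. 5.30:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic_regression(X, y, lr=0.1, epochs=10):
    """X: [m, f] feature matrix; y: length-m vector of 0/1 gold labels."""
    m, f = X.shape
    w = np.zeros(f)                               # weights initialized to 0, as in the caption
    b = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(m):        # visit the examples in random order
            y_hat = sigmoid(X[i] @ w + b)
            grad_w = (y_hat - y[i]) * X[i]        # Eq. 5.30
            grad_b = (y_hat - y[i])
            w -= lr * grad_w                      # move against the gradient
            b -= lr * grad_b
    return w, b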
Let’s assume the initial weights and bias in θ0 are all set to 0, and the initial learning rate η is 0.1:

    w1 = w2 = b = 0
    η = 0.1

The single update step requires that we compute the gradient, multiplied by the learning rate η:
In our mini example there are three parameters, so the gradient vector has 3 dimen-
sions, for w1 , w2 , and b. We can compute the first gradient as follows:
    ∇w,b L = [ ∂LCE(ŷ, y)/∂w1,  ∂LCE(ŷ, y)/∂w2,  ∂LCE(ŷ, y)/∂b ]
           = [ (σ(w · x + b) − y)x1,  (σ(w · x + b) − y)x2,  σ(w · x + b) − y ]
           = [ (σ(0) − 1)x1,  (σ(0) − 1)x2,  σ(0) − 1 ]
           = [ −0.5x1,  −0.5x2,  −0.5 ]
           = [ −1.5,  −1.0,  −0.5 ]
Now that we have a gradient, we compute the new parameter vector θ1 by moving θ0 in the opposite direction from the gradient:

    θ1 = [ w1, w2, b ] − η [ −1.5, −1.0, −0.5 ] = [ .15, .1, .05 ]
So after one step of gradient descent, the weights have shifted to be: w1 = .15,
w2 = .1, and b = .05.
Note that this observation x happened to be a positive example. We would expect
that after seeing more negative examples with high counts of negative words, that
the weight w2 would shift to have a negative value.
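To check the arithmetic above, here is a one-step sketch (ours); the feature values x1 = 3, x2 = 2 and gold label y = 1 are reconstructed from the gradient values shown (−0.5·x1 = −1.5, −0.5·x2 = −1.0):

import math

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

w1, w2, b = 0.0, 0.0, 0.0      # theta^0
eta = 0.1                      # learning rate
x1, x2, y = 3.0, 2.0, 1        # reconstructed single training example

err = sigmoid(w1 * x1 + w2 * x2 + b) - y                  # sigma(0) - 1 = -0.5
grad = (err * x1, err * x2, err)                          # (-1.5, -1.0, -0.5)
w1, w2, b = w1 - eta * grad[0], w2 - eta * grad[1], b - eta * grad[2]
print(round(w1, 2), round(w2, 2), round(b, 2))            # 0.15, 0.1, 0.05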
Now the cost function for the mini-batch of m examples is the average loss for each
example:
    Cost(ŷ, y) = (1/m) Σ_{i=1}^{m} LCE(ŷ(i), y(i))
               = −(1/m) Σ_{i=1}^{m} [ y(i) log σ(w · x(i) + b) + (1 − y(i)) log(1 − σ(w · x(i) + b)) ]    (5.33)
The mini-batch gradient is the average of the individual gradients from Eq. 5.30:
    ∂Cost(ŷ, y)/∂wj = (1/m) Σ_{i=1}^{m} [ σ(w · x(i) + b) − y(i) ] xj(i)    (5.34)
Instead of using the sum notation, we can more efficiently compute the gradient
in its matrix form, following the vectorization we saw on page 83, where we have
a matrix X of size [m × f ] representing the m inputs in the batch, and a vector y of
size [m × 1] representing the correct outputs:
    ∂Cost(ŷ, y)/∂w = (1/m) (ŷ − y)ᵀ X
                   = (1/m) (σ(Xw + b) − y)ᵀ X    (5.35)
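A numpy sketch (ours) of the vectorized gradient of Eq. 5.35, checked against the per-example average of Eq. 5.34; the batch data here is random:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, f = 8, 5                                   # toy batch of 8 examples, 5 features
X = rng.random((m, f))
y = rng.integers(0, 2, size=m).astype(float)
w, b = np.zeros(f), 0.0

y_hat = sigmoid(X @ w + b)                    # predictions for the whole batch
grad_w = (y_hat - y) @ X / m                  # Eq. 5.35: (1/m)(y_hat - y)^T X
grad_b = (y_hat - y).mean()

# Same thing computed example by example (Eq. 5.34), for comparison.
grad_w_loop = sum((y_hat[i] - y[i]) * X[i] for i in range(m)) / m
assert np.allclose(grad_w, grad_w_loop)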
5.7 Regularization
There is a problem with learning weights that make the model perfectly match the
training data. If a feature is perfectly predictive of the outcome because it happens
to only occur in one class, it will be assigned a very high weight. The weights for
features will attempt to perfectly fit details of the training set, in fact too perfectly,
modeling noisy factors that just accidentally correlate with the class. This problem is
overfitting called overfitting. A good model should be able to generalize well from the training
generalize data to the unseen test set, but a model that overfits will have poor generalization.
regularization To avoid overfitting, a new regularization term R(θ) is added to the loss function in Eq. 5.25, resulting in the following loss for a batch of m examples (slightly rewritten from Eq. 5.25 to be maximizing log probability rather than minimizing loss, and removing the 1/m term which doesn’t affect the argmax):
    θ̂ = argmax_θ Σ_{i=1}^{m} log P(y(i)|x(i)) − R(θ)    (5.36)
The new regularization term R(θ) is used to penalize large weights. Thus a setting of the weights that matches the training data perfectly— but uses many weights with high values to do so—will be penalized more than a setting that matches the data a little less well, but does so using smaller weights. There are two common ways to compute this regularization term R(θ).
L2 regularization L2 regularization is a quadratic function of the weight values, named because it uses the (square of the) L2 norm of the weight values. The L2 norm, ||θ||2, is the same as the Euclidean distance of the vector θ from the origin. If θ consists of n weights, then:

    R(θ) = ||θ||2² = Σ_{j=1}^{n} θj²    (5.37)
L1
regularization L1 regularization is a linear function of the weight values, named after the L1 norm
||W ||1 , the sum of the absolute values of the weights, or Manhattan distance (the
Manhattan distance is the distance you’d have to walk between two points in a city
with a street grid like New York):
    R(θ) = ||θ||1 = Σ_{i=1}^{n} |θi|    (5.39)
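A short sketch (ours) of the two penalty terms and of the L2 term's contribution to the gradient; alpha here is a hypothetical regularization-strength hyperparameter, not shown in the equations of this excerpt:

import numpy as np

def l2_penalty(theta):
    return np.sum(theta ** 2)          # Eq. 5.37: squared L2 norm

def l1_penalty(theta):
    return np.sum(np.abs(theta))       # Eq. 5.39: L1 norm

def l2_gradient(theta, alpha=0.01):
    # Contribution of the L2 term to the gradient of the minimized loss;
    # alpha is a hypothetical strength hyperparameter.
    return 2 * alpha * theta

theta = np.array([0.5, -2.0, 0.0, 3.0])
print(l2_penalty(theta), l1_penalty(theta), l2_gradient(theta))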
If we multiply each weight by a Gaussian prior on the weight, we are thus maximiz-
ing the following constraint:
    θ̂ = argmax_θ Π_{i=1}^{m} P(y(i)|x(i)) × Π_{j=1}^{n} (1/√(2πσj²)) exp( −(θj − µj)² / (2σj²) )    (5.42)
The loss function for multinomial logistic regression generalizes the two terms in
Eq. 5.44 (one that is non-zero when y = 1 and one that is non-zero when y = 0) to
K terms. As we mentioned above, for multinomial regression we’ll represent both y
and ŷ as vectors. The true label y is a vector with K elements, each corresponding
to a class, with yc = 1 if the correct class is c, with all other elements of y being 0.
And our classifier will produce an estimate vector with K elements ŷ, each element
ŷk of which represents the estimated probability p(yk = 1|x).
The loss function for a single example x, generalizing from binary logistic re-
gression, is the sum of the logs of the K output classes, each weighted by the indi-
cator function yk (Eq. 5.45). This turns out to be just the negative log probability of
the correct class c (Eq. 5.46):
    LCE(ŷ, y) = − Σ_{k=1}^{K} yk log ŷk    (5.45)
              = − log ŷc    (where c is the correct class)    (5.46)
How did we get from Eq. 5.45 to Eq. 5.46? Because only one class (let’s call it c) is
the correct one, the vector y takes the value 1 only for this value of k, i.e., has yc = 1
and yj = 0 ∀ j ≠ c. That means the terms in the sum in Eq. 5.45 will all be 0 except
for the term corresponding to the true class c. Hence the cross-entropy loss is simply
the log of the output probability corresponding to the correct class, and we therefore
negative log also call Eq. 5.46 the negative log likelihood loss.
likelihood loss
Of course for gradient descent we don’t need the loss, we need its gradient. The
gradient for a single example turns out to be very similar to the gradient for binary
logistic regression, (ŷ − y)x, that we saw in Eq. 5.30. Let’s consider one piece of the
gradient, the derivative for a single weight. For each class k, the weight of the ith
element of input x is wk,i . What is the partial derivative of the loss with respect to
wk,i ? This derivative turns out to be just the difference between the true value for the
class k (which is either 1 or 0) and the probability the classifier outputs for class k,
weighted by the value of the input xi corresponding to the ith element of the weight
vector for class k:
    ∂LCE/∂wk,i = −(yk − ŷk) xi
               = −(yk − p(yk = 1|x)) xi
               = −( yk − exp(wk · x + bk) / Σ_{j=1}^{K} exp(wj · x + bj) ) xi    (5.48)
We’ll return to this case of the gradient for softmax regression when we introduce
neural networks in Chapter 7, and at that time we’ll also discuss the derivation of
this gradient in equations Eq. 7.33–Eq. 7.41.
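A sketch (ours) of this multinomial gradient for a single example, written as an outer product over all classes and features; the weights and input are random placeholders:

import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

K, f = 3, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(K, f))
b = np.zeros(K)
x = rng.normal(size=f)
y = np.array([0.0, 1.0, 0.0])            # one-hot gold label; the correct class is index 1

y_hat = softmax(W @ x + b)               # estimated class probabilities
grad_W = np.outer(y_hat - y, x)          # [K, f]: entry (k, i) is -(y_k - y_hat_k) * x_i, Eq. 5.48
grad_b = y_hat - y
loss = -np.log(y_hat[1])                 # negative log likelihood of the correct class (Eq. 5.46)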
    dσ(z)/dz = σ(z)(1 − σ(z))    (5.50)
chain rule Finally, the chain rule of derivatives. Suppose we are computing the derivative
of a composite function f (x) = u(v(x)). The derivative of f (x) is the derivative of
u(x) with respect to v(x) times the derivative of v(x) with respect to x:
    df/dx = du/dv · dv/dx    (5.51)
First, we want to know the derivative of the loss function with respect to a single
weight w j (we’ll need to compute it for each weight, and for the bias):
    ∂LCE/∂wj = −∂/∂wj [ y log σ(w · x + b) + (1 − y) log(1 − σ(w · x + b)) ]
             = −[ y ∂/∂wj log σ(w · x + b) + (1 − y) ∂/∂wj log(1 − σ(w · x + b)) ]    (5.52)
Next, using the chain rule, and relying on the derivative of log:
    ∂LCE/∂wj = − ( y / σ(w · x + b) ) ∂/∂wj σ(w · x + b) − ( (1 − y) / (1 − σ(w · x + b)) ) ∂/∂wj (1 − σ(w · x + b))    (5.53)
Rearranging terms:
    ∂LCE/∂wj = −[ y / σ(w · x + b) − (1 − y) / (1 − σ(w · x + b)) ] ∂/∂wj σ(w · x + b)    (5.54)
And now plugging in the derivative of the sigmoid, and using the chain rule one
more time, we end up with Eq. 5.55:
    ∂LCE/∂wj = − [ (y − σ(w · x + b)) / (σ(w · x + b)[1 − σ(w · x + b)]) ] σ(w · x + b)[1 − σ(w · x + b)] ∂(w · x + b)/∂wj
             = − [ (y − σ(w · x + b)) / (σ(w · x + b)[1 − σ(w · x + b)]) ] σ(w · x + b)[1 − σ(w · x + b)] xj
             = −[ y − σ(w · x + b) ] xj
             = [ σ(w · x + b) − y ] xj    (5.55)
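As a sanity check on Eq. 5.55, a small sketch (ours) comparing the analytic gradient with a central-difference approximation of the loss; the weights and inputs are arbitrary:

import math

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    p = sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

w, b = [0.2, -0.4, 0.1], 0.05
x, y = [3.0, 2.0, 1.0], 1
j, eps = 0, 1e-6

analytic = (sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) - y) * x[j]   # Eq. 5.55
w_plus = list(w); w_plus[j] += eps
w_minus = list(w); w_minus[j] -= eps
numeric = (loss(w_plus, b, x, y) - loss(w_minus, b, x, y)) / (2 * eps)      # central difference
print(analytic, numeric)   # the two values should agree closely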
5.11 Summary
This chapter introduced the logistic regression model of classification.
• Logistic regression is a supervised machine learning classifier that extracts
real-valued features from the input, multiplies each by a weight, sums them,
and passes the sum through a sigmoid function to generate a probability. A
threshold is used to make a decision.
• Logistic regression can be used with two classes (e.g., positive and negative
sentiment) or with multiple classes (multinomial logistic regression, for ex-
ample for n-ary text classification, part-of-speech labeling, etc.).
• Multinomial logistic regression uses the softmax function to compute proba-
bilities.
• The weights (vector w and bias b) are learned from a labeled training set via a
loss function, such as the cross-entropy loss, that must be minimized.
• Minimizing this loss function is a convex optimization problem, and iterative
algorithms like gradient descent are used to find the optimal weights.
• Regularization is used to avoid overfitting.
• Logistic regression is also one of the most useful analytic tools, because of its
ability to transparently study the importance of individual features.
Exercises
CHAPTER
6 Vector Semantics and Embeddings
The asphalt that Los Angeles is famous for occurs mainly on its freeways. But
in the middle of the city is another patch of asphalt, the La Brea tar pits, and this
asphalt preserves millions of fossil bones from the last of the Ice Ages of the Pleis-
tocene Epoch. One of these fossils is the Smilodon, or saber-toothed tiger, instantly
recognizable by its long canines. Five million years ago or so, a completely different
saber-tooth tiger called Thylacosmilus lived
in Argentina and other parts of South Amer-
ica. Thylacosmilus was a marsupial whereas
Smilodon was a placental mammal, but Thy-
lacosmilus had the same long upper canines
and, like Smilodon, had a protective bone
flange on the lower jaw. The similarity of
these two mammals is one of many examples
of parallel or convergent evolution, in which particular contexts or environments
lead to the evolution of very similar structures in different species (Gould, 1980).
The role of context is also important in the similarity of a less biological kind
of organism: the word. Words that occur in similar contexts tend to have similar
meanings. This link between similarity in how words are distributed and similarity
distributional
hypothesis in what they mean is called the distributional hypothesis. The hypothesis was
first formulated in the 1950s by linguists like Joos (1950), Harris (1954), and Firth
(1957), who noticed that words which are synonyms (like oculist and eye-doctor)
tended to occur in the same environment (e.g., near words like eye or examined)
with the amount of meaning difference between two words “corresponding roughly
to the amount of difference in their environments” (Harris, 1954, p. 157).
vector In this chapter we introduce vector semantics, which instantiates this linguistic
semantics
embeddings hypothesis by learning representations of the meaning of words, called embeddings,
directly from their distributions in texts. These representations are used in every nat-
ural language processing application that makes use of meaning, and the static em-
beddings we introduce here underlie the more powerful dynamic or contextualized
embeddings like BERT that we will see in Chapter 11.
These word representations are also the first example in this book of repre-
representation
learning sentation learning, automatically learning useful representations of the input text.
Finding such self-supervised ways to learn representations of the input, instead of
creating representations by hand via feature engineering, is an important focus of
NLP research (Bengio et al., 2013).
identical to a sense of another word, or nearly identical, we say the two senses of
synonym those two words are synonyms. Synonyms include such pairs as
couch/sofa vomit/throw up filbert/hazelnut car/automobile
A more formal definition of synonymy (between words rather than senses) is that
two words are synonymous if they are substitutable for one another in any sentence
without changing the truth conditions of the sentence, the situations in which the
sentence would be true.
While substitutions between some pairs of words like car / automobile or wa-
ter / H2 O are truth preserving, the words are still not identical in meaning. Indeed,
probably no two words are absolutely identical in meaning. One of the fundamental
principle of contrast tenets of semantics, called the principle of contrast (Girard 1718, Bréal 1897, Clark 1987), states that a difference in linguistic form is always associated with some dif-
ference in meaning. For example, the word H2 O is used in scientific contexts and
would be inappropriate in a hiking guide—water would be more appropriate— and
this genre difference is part of the meaning of the word. In practice, the word syn-
onym is therefore used to describe a relationship of approximate or rough synonymy.
Word Similarity While words don’t have many synonyms, most words do have
lots of similar words. Cat is not a synonym of dog, but cats and dogs are certainly
similar words. In moving from synonymy to similarity, it will be useful to shift from
talking about relations between word senses (like synonymy) to relations between
words (like similarity). Dealing with words avoids having to commit to a particular
representation of word senses, which will turn out to simplify our task.
similarity The notion of word similarity is very useful in larger semantic tasks. Knowing
how similar two words are can help in computing how similar the meaning of two
phrases or sentences are, a very important component of tasks like question answer-
ing, paraphrasing, and summarization. One way of getting values for word similarity
is to ask humans to judge how similar one word is to another. A number of datasets
have resulted from such experiments. For example the SimLex-999 dataset (Hill
et al., 2015) gives values on a scale from 0 to 10, like the examples below, which
range from near-synonyms (vanish, disappear) to pairs that scarcely seem to have
anything in common (hole, agreement):
vanish disappear 9.8
belief impression 5.95
muscle bone 3.65
modest flexible 0.98
hole agreement 0.3
Word Relatedness The meaning of two words can be related in ways other than
relatedness similarity. One such class of connections is called word relatedness (Budanitsky
association and Hirst, 2006), also traditionally called word association in psychology.
Consider the meanings of the words coffee and cup. Coffee is not similar to cup;
they share practically no features (coffee is a plant or a beverage, while a cup is a
manufactured object with a particular shape). But coffee and cup are clearly related;
they are associated by co-participating in an everyday event (the event of drinking
coffee out of a cup). Similarly scalpel and surgeon are not similar but are related
eventively (a surgeon tends to make use of a scalpel).
One common kind of relatedness between words is if they belong to the same
semantic field semantic field. A semantic field is a set of words which cover a particular semantic
domain and bear structured relations with each other. For example, words might be
related by being in the semantic field of hospitals (surgeon, scalpel, nurse, anes-
thetic, hospital), restaurants (waiter, menu, plate, food, chef), or houses (door, roof,
topic models kitchen, family, bed). Semantic fields are also related to topic models, like Latent
Dirichlet Allocation, LDA, which apply unsupervised learning on large sets of texts
to induce sets of associated words from text. Semantic fields and topic models are
very useful tools for discovering topical structure in documents.
In Appendix G we’ll introduce more relations between senses like hypernymy
or IS-A, antonymy (opposites) and meronymy (part-whole relations).
Semantic Frames and Roles Closely related to semantic fields is the idea of a
semantic frame semantic frame. A semantic frame is a set of words that denote perspectives or
participants in a particular type of event. A commercial transaction, for example,
is a kind of event in which one entity trades money to another entity in return for
some good or service, after which the good changes hands or perhaps the service is
performed. This event can be encoded lexically by using verbs like buy (the event
from the perspective of the buyer), sell (from the perspective of the seller), pay
(focusing on the monetary aspect), or nouns like buyer. Frames have semantic roles
(like buyer, seller, goods, money), and words in a sentence can take on these roles.
Knowing that buy and sell have this relation makes it possible for a system to
know that a sentence like Sam bought the book from Ling could be paraphrased as
Ling sold the book to Sam, and that Sam has the role of the buyer in the frame and
Ling the seller. Being able to recognize such paraphrases is important for question
answering, and can help in shifting perspective for machine translation.
connotations Connotation Finally, words have affective meanings or connotations. The word
connotation has different meanings in different fields, but here we use it to mean the
aspects of a word’s meaning that are related to a writer or reader’s emotions, senti-
ment, opinions, or evaluations. For example some words have positive connotations
(wonderful) while others have negative connotations (dreary). Even words whose
meanings are similar in other ways can vary in connotation; consider the difference
in connotations between fake, knockoff, forgery, on the one hand, and copy, replica,
reproduction on the other, or innocent (positive connotation) and naive (negative
connotation). Some words describe positive evaluation (great, love) and others neg-
ative evaluation (terrible, hate). Positive or negative evaluation language is called
sentiment sentiment, as we saw in Chapter 4, and word sentiment plays a role in important
tasks like sentiment analysis, stance detection, and applications of NLP to the lan-
guage of politics and consumer reviews.
Early work on affective meaning (Osgood et al., 1957) found that words varied
along three important dimensions of affective meaning: valence (the pleasantness of
the stimulus), arousal (the intensity of emotion provoked by the stimulus), and
dominance (the degree of control exerted by the stimulus).
Thus words like happy or satisfied are high on valence, while unhappy or an-
noyed are low on valence. Excited is high on arousal, while calm is low on arousal.
Controlling is high on dominance, while awed or influenced are low on dominance.
Each word is thus represented by three numbers, corresponding to its value on each
of the three dimensions.
Figure 6.1 A two-dimensional (t-SNE) projection of embeddings for some words and
phrases, showing that words with similar meanings are nearby in space. The original 60-
dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. (2015)
with colors added for explanation.
The term-document matrix of Fig. 6.2 was first defined as part of the vector
space model of information retrieval (Salton, 1971). In this model, a document is
represented as a count vector, a column in Fig. 6.3.
vector To review some basic linear algebra, a vector is, at heart, just a list or array of
numbers. So As You Like It is represented as the list [1,114,36,20] (the first column
vector in Fig. 6.3) and Julius Caesar is represented as the list [7,62,1,2] (the third
vector space column vector). A vector space is a collection of vectors, and is characterized by
dimension its dimension. Vectors in a 3-dimensional vector space have an element for each
dimension of the space. We will loosely refer to a vector in a 4-dimensional space
as a 4-dimensional vector, with one element along each dimension. In the example
in Fig. 6.3, we’ve chosen to make the document vectors of dimension 4, just so they
fit on the page; in real term-document matrices, the document vectors would have
dimensionality |V |, the vocabulary size.
The ordering of the numbers in a vector indicates the different dimensions
on which documents vary. The first dimension for both these vectors corresponds to
the number of times the word battle occurs, and we can compare each dimension,
noting for example that the vectors for As You Like It and Twelfth Night have similar
values (1 and 0, respectively) for the first dimension.
[Figure 6.4 plot: Henry V at [4,13] and Julius Caesar at [1,7] on the battle and fool dimensions.]
Figure 6.4 A spatial visualization of the document vectors for the four Shakespeare play
documents, showing just two of the dimensions, corresponding to the words battle and fool.
The comedies have high values for the fool dimension and low values for the battle dimension.
A real term-document matrix, of course, wouldn’t just have 4 rows and columns,
let alone 2. More generally, the term-document matrix has |V | rows (one for each
word type in the vocabulary) and D columns (one for each document in the collec-
tion); as we’ll see, vocabulary sizes are generally in the tens of thousands, and the
number of documents can be enormous (think about all the pages on the web).
Information retrieval (IR) is the task of finding the document d from the D
documents in some collection that best matches a query q. For IR we'll therefore also
represent a query by a vector, also of length |V |, and we’ll need a way to compare
two vectors to find how similar they are. (Doing IR will also require efficient ways
to store and manipulate these vectors by making use of the convenient fact that these
vectors are sparse, i.e., mostly zeros).
Later in the chapter we’ll introduce some of the components of this vector com-
parison process: the tf-idf term weighting, and the cosine similarity metric.
For documents, we saw that similar documents had similar vectors, because sim-
ilar documents tend to have similar words. This same principle applies to words:
similar words have similar vectors because they tend to occur in similar documents.
The term-document matrix thus lets us represent the meaning of a word by the doc-
uments it tends to occur in.
Note in Fig. 6.6 that the two words cherry and strawberry are more similar to
each other (both pie and sugar tend to occur in their window) than they are to other
words like digital; conversely, digital and information are more similar to each other
than, say, to strawberry. Fig. 6.7 shows a spatial visualization.
[Figure 6.7: A spatial visualization of the word vectors for digital [1683,1670] and
information [3982,3325], showing just two of the dimensions (co-occurrence counts
with context words such as computer).]
Note that |V |, the dimensionality of the vector, is generally the size of the vo-
cabulary, often between 10,000 and 50,000 words (using the most frequent words
in the training corpus; keeping words after about the most frequent 50,000 or so is
generally not helpful). Since most of these numbers are zero these are sparse vector
representations; there are efficient algorithms for storing and computing with sparse
matrices.
Now that we have some intuitions, let’s move on to examine the details of com-
puting word similarity. Afterwards we’ll discuss methods for weighting cells.
The dot product acts as a similarity metric because it will tend to be high just when
the two vectors have large values in the same dimensions. Alternatively, vectors that
have zeros in different dimensions—orthogonal vectors—will have a dot product of
0, representing their strong dissimilarity.
This raw dot product, however, has a problem as a similarity metric: it favors
vector length long vectors. The vector length is defined as
|v| = sqrt( Σ_{i=1}^{N} v_i² )    (6.8)
The dot product is higher if a vector is longer, with higher values in each dimension.
More frequent words have longer vectors, since they tend to co-occur with more
words and have higher co-occurrence values with each of them. The raw dot product
thus will be higher for frequent words. But this is a problem; we’d like a similarity
metric that tells us how similar two words are regardless of their frequency.
We modify the dot product to normalize for the vector length by dividing the
dot product by the lengths of each of the two vectors. This normalized dot product
turns out to be the same as the cosine of the angle between the two vectors, following
from the definition of the dot product between two vectors a and b:
a · b = |a| |b| cos θ

cos θ = (a · b) / (|a| |b|)    (6.9)
cosine The cosine similarity metric between two vectors v and w thus can be computed as:

cosine(v, w) = (v · w) / (|v| |w|) = Σ_{i=1}^{N} v_i w_i / ( sqrt(Σ_{i=1}^{N} v_i²) · sqrt(Σ_{i=1}^{N} w_i²) )    (6.10)
The model decides that information is way closer to digital than it is to cherry, a
result that seems sensible. Fig. 6.8 shows a visualization.
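As a concrete illustration (not from the book), the following sketch computes the cosine of Eq. 6.10 for made-up co-occurrence counts; the numbers are invented for illustration only.

import numpy as np

def cosine(v, w):
    # Eq. 6.10: the dot product normalized by the two vector lengths
    return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))

# Made-up co-occurrence counts over two dimensions (e.g., pie, computer)
cherry      = np.array([442.0,    8.0])
digital     = np.array([  5.0, 1683.0])
information = np.array([  5.0, 3982.0])

print(cosine(cherry, information))   # small value: nearly orthogonal vectors
print(cosine(digital, information))  # close to 1: nearly parallel vectors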
Figure 6.8 A (rough) graphical demonstration of cosine similarity, showing vectors for
three words (cherry, digital, and information) in the two dimensional space defined by counts
of the words computer and pie nearby. The figure doesn’t show the cosine, but it highlights the
angles; note that the angle between digital and information is smaller than the angle between
cherry and information. When two vectors are more similar, the cosine is larger but the angle
is smaller; the cosine has its maximum (1) when the angle between two vectors is smallest
(0◦ ); the cosine of all other angles is less than 1.
Raw frequency, however, is not the best measure of association between words. Raw frequency is very skewed
and not very discriminative. If we want to know what kinds of contexts are shared
by cherry and strawberry but not by digital and information, we’re not going to get
good discrimination from words like the, it, or they, which occur frequently with
all sorts of words and aren’t informative about any particular word. We saw this
also in Fig. 6.3 for the Shakespeare corpus; the dimension for the word good is not
very discriminative between plays; good is simply a frequent word and has roughly
equivalent high frequencies in each of the plays.
It’s a bit of a paradox. Words that occur nearby frequently (maybe pie nearby
cherry) are more important than words that only appear once or twice. Yet words
that are too frequent—ubiquitous, like the or good— are unimportant. How can we
balance these two conflicting constraints?
There are two common solutions to this problem: in this section we’ll describe
the tf-idf weighting, usually used when the dimensions are documents. In the next
section we introduce the PPMI algorithm (usually used when the dimensions are
words).
The tf-idf weighting (the ‘-’ here is a hyphen, not a minus sign) is the product
of two terms, each term capturing one of these two intuitions:
term frequency The first is the term frequency (Luhn, 1957): the frequency of the word t in the
document d. We can just use the raw count as the term frequency:
tft, d = count(t, d) (6.11)
More commonly we squash the raw frequency a bit, by using the log10 of the fre-
quency instead. The intuition is that a word appearing 100 times in a document
doesn’t make that word 100 times more likely to be relevant to the meaning of the
document. We also need to do something special with counts of 0, since we can’t
take the log of 0.2
tf_{t,d} = 1 + log10 count(t, d)   if count(t, d) > 0
tf_{t,d} = 0                       otherwise                (6.12)
If we use log weighting, terms which occur 0 times in a document would have tf = 0,
1 times in a document tf = 1 + log10 (1) = 1 + 0 = 1, 10 times in a document tf =
1 + log10 (10) = 2, 100 times tf = 1 + log10 (100) = 3, 1000 times tf = 4, and so on.
The second factor in tf-idf is used to give a higher weight to words that occur
only in a few documents. Terms that are limited to a few documents are useful
for discriminating those documents from the rest of the collection; terms that occur
frequently across the entire collection aren't as helpful. The document frequency
df_t of a term t is the number of documents it occurs in. Document frequency is
not the same as the collection frequency of a term, which is the total number of
times the word appears in the whole collection in any document. Consider in the
collection of Shakespeare’s 37 plays the two words Romeo and action. The words
have identical collection frequencies (they both occur 113 times in all the plays) but
very different document frequencies, since Romeo only occurs in a single play. If
our goal is to find documents about the romantic tribulations of Romeo, the word
Romeo should be highly weighted, but not action:
Collection Frequency Document Frequency
Romeo 113 1
action 113 31
2 We can also use this alternative formulation, which we have used in earlier editions: tft, d =
log10 (count(t, d) + 1)
We emphasize discriminative words like Romeo via the inverse document frequency
or idf term weight (Sparck Jones, 1972). The idf is defined using the frac-
tion N/dft , where N is the total number of documents in the collection, and dft is
the number of documents in which term t occurs. The fewer documents in which a
term occurs, the higher this weight. The lowest weight of 1 is assigned to terms that
occur in all the documents. It’s usually clear what counts as a document: in Shake-
speare we would use a play; when processing a collection of encyclopedia articles
like Wikipedia, the document is a Wikipedia page; in processing newspaper articles,
the document is a single article. Occasionally your corpus might not have appropri-
ate document divisions and you might need to break up the corpus into documents
yourself for the purposes of computing idf.
Because of the large number of documents in many collections, this measure
too is usually squashed with a log function. The resulting definition for inverse
document frequency (idf) is thus
idf_t = log10 ( N / df_t )    (6.13)
Here are some idf values for some words in the Shakespeare corpus, (along with
the document frequency df values on which they are based) ranging from extremely
informative words which occur in only one play like Romeo, to those that occur in a
few like salad or Falstaff, to those which are very common like fool or so common
as to be completely non-discriminative since they occur in all 37 plays like good or
sweet.3
Word df idf
Romeo 1 1.57
salad 2 1.27
Falstaff 4 0.967
forest 12 0.489
battle 21 0.246
wit 34 0.037
fool 36 0.012
good 37 0
sweet 37 0
tf-idf The tf-idf weighted value w_{t,d} for word t in document d thus combines term
frequency tf_{t,d} (defined either by Eq. 6.11 or by Eq. 6.12) with idf from Eq. 6.13:

w_{t,d} = tf_{t,d} × idf_t    (6.14)
Fig. 6.9 applies tf-idf weighting to the Shakespeare term-document matrix in Fig. 6.2,
using the tf equation Eq. 6.12. Note that the tf-idf values for the dimension corre-
sponding to the word good have now all become 0; since this word appears in every
document, the tf-idf weighting leads it to be ignored. Similarly, the word fool, which
appears in 36 out of the 37 plays, has a much lower weight.
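As a rough sketch of Eqs. 6.12-6.14 (not the book's own code), the tf, idf, and tf-idf weights can be computed from a small term-document count matrix; the counts below are invented, not the real Shakespeare counts.

import numpy as np

# Rows are terms, columns are documents; toy counts for illustration
counts = np.array([
    [  1.0,  0.0,  7.0, 13.0],   # battle
    [114.0, 80.0, 62.0, 89.0],   # good
    [ 36.0, 58.0,  1.0,  4.0],   # fool
    [ 20.0, 15.0,  2.0,  3.0],   # wit
])

N = counts.shape[1]                       # number of documents

tf = np.zeros_like(counts)                # Eq. 6.12: log-squashed term frequency
nonzero = counts > 0
tf[nonzero] = 1 + np.log10(counts[nonzero])

df = np.count_nonzero(counts, axis=1)     # document frequency of each term
idf = np.log10(N / df)                    # Eq. 6.13

tfidf = tf * idf[:, np.newaxis]           # Eq. 6.14: w_{t,d} = tf_{t,d} * idf_t
print(tfidf)                              # note: the row for 'good' (df = N) is all zeros

The row for good becomes all zeros because its idf is log10(N/N) = 0, matching the point made about Fig. 6.9.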
The tf-idf weighting is the standard way of weighting co-occurrence matrices in infor-
mation retrieval, but also plays a role in many other aspects of natural language
processing. It’s also a great baseline, the simple thing to try first. We’ll look at other
weightings like PPMI (Positive Pointwise Mutual Information) in Section 6.6.
3 Sweet was one of Shakespeare’s favorite adjectives, a fact probably related to the increased use of
sugar in European recipes around the turn of the 16th century (Jurafsky, 2014, p. 175).
I(x, y) = log2 ( P(x, y) / (P(x) P(y)) )    (6.16)
The pointwise mutual information between a target word w and a context word
c (Church and Hanks 1989, Church and Hanks 1990) is then defined as:
PMI(w, c) = log2 ( P(w, c) / (P(w) P(c)) )    (6.17)
The numerator tells us how often we observed the two words together (assuming
we compute probability by using the MLE). The denominator tells us how often
we would expect the two words to co-occur assuming they each occurred indepen-
dently; recall that the probability of two independent events both occurring is just
the product of the probabilities of the two events. Thus, the ratio gives us an esti-
mate of how much more the two words co-occur than we expect by chance. PMI is
a useful tool whenever we need to find words that are strongly associated.
PMI values range from negative to positive infinity. But negative PMI values
(which imply things are co-occurring less often than we would expect by chance)
tend to be unreliable unless our corpora are enormous. To distinguish whether
two words whose individual probability is each 10−6 occur together less often than
chance, we would need to be certain that the probability of the two occurring to-
gether is significantly less than 10−12 , and this kind of granularity would require an
enormous corpus. Furthermore it’s not clear whether it’s even possible to evaluate
such scores of ‘unrelatedness’ with human judgments. For this reason it is more
4 PMI is based on the mutual information between two random variables X and Y, defined as:

I(X, Y) = Σ_x Σ_y P(x, y) log2 ( P(x, y) / (P(x) P(y)) )    (6.15)
In a confusion of terminology, Fano used the phrase mutual information to refer to what we now call
pointwise mutual information and the phrase expectation of the mutual information for what we now call
mutual information
PPMI common to use Positive PMI (called PPMI) which replaces all negative PMI values
with zero (Church and Hanks 1989, Dagan et al. 1993, Niwa and Nitta 1994)5 :
PPMI(w, c) = max( log2 ( P(w, c) / (P(w) P(c)) ), 0 )    (6.18)
More formally, let’s assume we have a co-occurrence matrix F with W rows (words)
and C columns (contexts), where fi j gives the number of times word wi occurs with
context c j . This can be turned into a PPMI matrix where PPMIi j gives the PPMI
value of word wi with context c j (which we can also express as PPMI(wi , c j ) or
PPMI(w = i, c = j)) as follows:
p_{ij} = f_{ij} / ( Σ_{i=1}^{W} Σ_{j=1}^{C} f_{ij} ),   p_{i*} = ( Σ_{j=1}^{C} f_{ij} ) / ( Σ_{i=1}^{W} Σ_{j=1}^{C} f_{ij} ),   p_{*j} = ( Σ_{i=1}^{W} f_{ij} ) / ( Σ_{i=1}^{W} Σ_{j=1}^{C} f_{ij} )    (6.19)

PPMI_{ij} = max( log2 ( p_{ij} / (p_{i*} p_{*j}) ), 0 )    (6.20)
Let’s see some PPMI calculations. We’ll use Fig. 6.10, which repeats Fig. 6.6 plus
all the count marginals, and let’s pretend for ease of calculation that these are the
only words/contexts that matter.
Fig. 6.11 shows the joint probabilities computed from the counts in Fig. 6.10, and
Fig. 6.12 shows the PPMI values. Not surprisingly, cherry and strawberry are highly
associated with both pie and sugar, and data is mildly associated with information.
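The PPMI computation of Eqs. 6.19-6.20 can be expressed directly with matrix operations. The following is a minimal sketch (not from the book), assuming a small word-by-context count matrix F with invented counts.

import numpy as np

def ppmi(F):
    # Eqs. 6.19-6.20: joint and marginal probabilities from counts, then
    # positive pointwise mutual information
    total = F.sum()
    p_ij = F / total
    p_i = p_ij.sum(axis=1, keepdims=True)    # row marginals p(w)
    p_j = p_ij.sum(axis=0, keepdims=True)    # column marginals p(c)
    with np.errstate(divide="ignore"):       # log2(0) -> -inf, clipped to 0 below
        pmi = np.log2(p_ij / (p_i * p_j))
    return np.maximum(pmi, 0)

# Invented counts (rows: cherry, digital; columns: pie, computer)
F = np.array([[442.0,    8.0],
              [  5.0, 1683.0]])
print(ppmi(F))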
PMI has the problem of being biased toward infrequent events; very rare words
tend to have very high PMI values. One way to reduce this bias toward low frequency
5 Positive PMI also cleanly solves the problem of what to do with zero counts, using 0 to replace the
−∞ from log(0).
Figure 6.11 The joint probabilities p(w, context) computed from the counts in Fig. 6.10, along with the marginal p(w) for each word.

              computer   data     result   pie      sugar    p(w)
cherry        0.0002     0.0007   0.0008   0.0377   0.0021   0.0415
strawberry    0.0000     0.0000   0.0001   0.0051   0.0016   0.0068
digital       0.1425     0.1436   0.0073   0.0004   0.0003   0.2942
information   0.2838     0.3399   0.0323   0.0004   0.0011   0.6575
events is to slightly change the computation for P(c), using a different function P_α(c)
that raises the probability of the context word to the power of α:

PPMI_α(w, c) = max( log2 ( P(w, c) / (P(w) P_α(c)) ), 0 )    (6.21)

P_α(c) = count(c)^α / Σ_{c′} count(c′)^α    (6.22)
This vector model of meaning is sometimes referred to as the tf-idf model or the PPMI
model, after the weighting function.
The tf-idf model of meaning is often used for document functions like deciding
if two documents are similar. We represent a document by taking the vectors of
centroid all the words in the document, and computing the centroid of all those vectors.
The centroid is the multidimensional version of the mean; the centroid of a set of
vectors is a single vector that has the minimum sum of squared distances to each of
the vectors in the set. Given k word vectors w1, w2, ..., wk, the centroid document
vector d is:

d = (w1 + w2 + ... + wk) / k    (6.23)
Given two documents, we can then compute their document vectors d1 and d2 , and
estimate the similarity between the two documents by cos(d1 , d2 ). Document sim-
ilarity is also useful for all sorts of applications: information retrieval, plagiarism
detection, news recommender systems, and even for digital humanities tasks like
comparing different versions of a text to see which are similar to each other.
Either the PPMI model or the tf-idf model can be used to compute word simi-
larity, for tasks like finding word paraphrases, tracking changes in word meaning, or
automatically discovering meanings of words in different corpora. For example, we
can find the 10 most similar words to any target word w by computing the cosines
between w and each of the V − 1 other words, sorting, and looking at the top 10.
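A minimal sketch of such a nearest-neighbor lookup (the function name, the matrix M of weighted row vectors, and the vocab mapping are illustrative assumptions, not the book's code):

import numpy as np

def most_similar(word, vocab, M, k=10):
    # Return the k words whose row vectors in M (tf-idf or PPMI weighted)
    # have the highest cosine with the row vector for `word`.
    # `vocab` is assumed to map each word to its row index in M.
    v = M[vocab[word]]
    sims = (M @ v) / (np.linalg.norm(M, axis=1) * np.linalg.norm(v) + 1e-12)
    id2word = {i: w for w, i in vocab.items()}
    ranked = np.argsort(-sims)
    return [(id2word[i], sims[i]) for i in ranked if id2word[i] != word][:k]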
6.8 Word2vec
In the previous sections we saw how to represent a word as a sparse, long vector with
dimensions corresponding to words in the vocabulary or documents in a collection.
We now introduce a more powerful word representation: embeddings, short dense
vectors. Unlike the vectors we’ve seen so far, embeddings are short, with number
of dimensions d ranging from 50-1000, rather than the much larger vocabulary size
|V | or number of documents D we’ve seen. These d dimensions don’t have a clear
interpretation. And the vectors are dense: instead of vector entries being sparse,
mostly-zero counts or functions of counts, the values will be real-valued numbers
that can be negative.
It turns out that dense vectors work better in every NLP task than sparse vectors.
While we don’t completely understand all the reasons for this, we have some intu-
itions. Representing words as 300-dimensional dense vectors requires our classifiers
to learn far fewer weights than if we represented words as 50,000-dimensional vec-
tors, and the smaller parameter space possibly helps with generalization and avoid-
ing overfitting. Dense vectors may also do a better job of capturing synonymy.
For example, in a sparse vector representation, dimensions for synonyms like car
and automobile are distinct and unrelated; sparse vectors may thus fail
to capture the similarity between a word with car as a neighbor and a word with
automobile as a neighbor.
skip-gram In this section we introduce one method for computing embeddings: skip-gram
SGNS with negative sampling, sometimes called SGNS. The skip-gram algorithm is one
word2vec of two algorithms in a software package called word2vec, and so sometimes the
algorithm is loosely referred to as word2vec (Mikolov et al. 2013a, Mikolov et al.
2013b). The word2vec methods are fast, efficient to train, and easily available on-
line with code and pretrained embeddings. Word2vec embeddings are static embeddings,
meaning that the method learns one fixed embedding for each word in the
vocabulary. In Chapter 11 we’ll introduce methods for learning dynamic contextual
embeddings like the popular family of BERT representations, in which the vector
for each word is different in different contexts.
The intuition of word2vec is that instead of counting how often each word w oc-
curs near, say, apricot, we’ll instead train a classifier on a binary prediction task: “Is
word w likely to show up near apricot?” We don’t actually care about this prediction
task; instead we’ll take the learned classifier weights as the word embeddings.
The revolutionary intuition here is that we can just use running text as implicitly
supervised training data for such a classifier; a word c that occurs near the target
word apricot acts as gold ‘correct answer’ to the question “Is word c likely to show
self-supervision up near apricot?” This method, often called self-supervision, avoids the need for
any sort of hand-labeled supervision signal. This idea was first proposed in the task
of neural language modeling, when Bengio et al. (2003) and Collobert et al. (2011)
showed that a neural language model (a neural network that learned to predict the
next word from prior words) could just use the next word in running text as its
supervision signal, and could be used to learn an embedding representation for each
word as part of doing this prediction task.
We’ll see how to do neural networks in the next chapter, but word2vec is a
much simpler model than the neural network language model, in two ways. First,
word2vec simplifies the task (making it binary classification instead of word pre-
diction). Second, word2vec simplifies the architecture (training a logistic regression
classifier instead of a multi-layer neural network with hidden layers that demand
more sophisticated training algorithms). The intuition of skip-gram is:
1. Treat the target word and a neighboring context word as positive examples.
2. Randomly sample other words in the lexicon to get negative samples.
3. Use logistic regression to train a classifier to distinguish those two cases.
4. Use the learned weights as the embeddings.
The classifier will return the probability that c is a real context word for target word w:

P(+|w, c)    (6.24)

The probability that word c is not a real context word for w is just 1 minus Eq. 6.24:

P(−|w, c) = 1 − P(+|w, c)    (6.25)
How does the classifier compute the probability P? The intuition of the skip-
gram model is to base this probability on embedding similarity: a word is likely to
occur near the target if its embedding vector is similar to the target embedding. To
compute similarity between these dense embeddings, we rely on the intuition that
two vectors are similar if they have a high dot product (after all, cosine is just a
normalized dot product). In other words:
Similarity(w, c) ≈ c · w (6.26)
The dot product c · w is not a probability, it’s just a number ranging from −∞ to ∞
(since the elements in word2vec embeddings can be negative, the dot product can be
negative). To turn the dot product into a probability, we’ll use the logistic or sigmoid
function σ (x), the fundamental core of logistic regression:
σ(x) = 1 / (1 + exp(−x))    (6.27)
We model the probability that word c is a real context word for target word w as:
P(+|w, c) = σ(c · w) = 1 / (1 + exp(−c · w))    (6.28)
The sigmoid function returns a number between 0 and 1, but to make it a probability
we’ll also need the total probability of the two possible events (c is a context word,
and c isn’t a context word) to sum to 1. We thus estimate the probability that word c
is not a real context word for w as:
P(−|w, c) = 1 − P(+|w, c) = σ(−c · w) = 1 / (1 + exp(c · w))    (6.29)
Equation 6.28 gives us the probability for one word, but there are many context
words in the window. Skip-gram makes the simplifying assumption that all context
words are independent, allowing us to just multiply their probabilities:
P(+|w, c_{1:L}) = Π_{i=1}^{L} σ(c_i · w)    (6.30)

log P(+|w, c_{1:L}) = Σ_{i=1}^{L} log σ(c_i · w)    (6.31)
In summary, skip-gram trains a probabilistic classifier that, given a test target word
w and its context window of L words c1:L , assigns a probability based on how similar
this context window is to the target word. The probability is based on applying the
logistic (sigmoid) function to the dot product of the embeddings of the target word
with each context word. To compute this probability, we just need embeddings for
each target word and context word in the vocabulary.
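A minimal sketch of this probability computation (Eqs. 6.28-6.31), assuming we already have a target-embedding matrix W, a context-embedding matrix C, and integer word indices; the names and toy values are illustrative, not the book's code.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def log_p_positive(w_idx, context_idxs, W, C):
    # Eq. 6.31: log P(+ | w, c_1:L) = sum_i log sigmoid(c_i . w)
    w = W[w_idx]                  # target embedding, shape (d,)
    ctx = C[context_idxs]         # context embeddings, shape (L, d)
    return np.sum(np.log(sigmoid(ctx @ w)))

# Toy setup: a 5-word vocabulary with d = 4 dimensional embeddings
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 4))       # target embeddings
C = rng.normal(size=(5, 4))       # context embeddings
print(log_p_positive(2, [0, 1, 3, 4], W, C))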
Fig. 6.13 shows the intuition of the parameters we’ll need. Skip-gram actually
stores two embeddings for each word, one for the word as a target, and one for the
word considered as context. Thus the parameters we need to learn are two matrices
W and C, each containing an embedding for every one of the |V | words in the
vocabulary V .6 Let’s now turn to learning these embeddings (which is the real goal
of training this classifier in the first place).
6 In principle the target matrix and the context matrix could use different vocabularies, but we’ll simplify
by assuming one shared vocabulary V .
Figure 6.13 The embeddings learned by the skipgram model. The algorithm stores two
embeddings for each word, the target embedding (sometimes called the input embedding)
and the context embedding (sometimes called the output embedding). The parameter that
the algorithm learns is thus a matrix of 2|V | vectors, each of dimension d, formed by concate-
nating two matrices, the target embeddings W and the context+noise embeddings C.
choose aardvark, and so on. But in practice it is common to set α = 0.75, i.e. use
the weighting P_{3/4}(w):

P_α(w) = count(w)^α / Σ_{w′} count(w′)^α    (6.32)
Setting α = .75 gives better performance because it gives rare noise words slightly
higher probability: for rare words, P_α(w) > P(w). To illustrate this intuition, it
might help to work out the probabilities for an example with α = .75 and two events,
P(a) = 0.99 and P(b) = 0.01:

P_α(a) = .99^{.75} / (.99^{.75} + .01^{.75}) = 0.97
P_α(b) = .01^{.75} / (.99^{.75} + .01^{.75}) = 0.03    (6.33)

Thus using α = .75 increases the probability of the rare event b from 0.01 to 0.03.
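A sketch (with made-up counts) of computing P_α(w) from Eq. 6.32 and drawing k negative samples from it:

import numpy as np

counts = np.array([990.0, 10.0])        # made-up unigram counts for words a and b
alpha = 0.75
p_alpha = counts**alpha / np.sum(counts**alpha)   # Eq. 6.32
print(p_alpha)                          # roughly [0.97, 0.03], as in Eq. 6.33

rng = np.random.default_rng(0)
k = 2                                   # negative samples per positive pair
negatives = rng.choice(len(counts), size=k, p=p_alpha)
print(negatives)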
Given the set of positive and negative training instances, and an initial set of
embeddings, the goal of the learning algorithm is to adjust those embeddings to
• Maximize the similarity of the target word, context word pairs (w, cpos ) drawn
from the positive examples
• Minimize the similarity of the (w, cneg ) pairs from the negative examples.
If we consider one word/context pair (w, cpos ) with its k noise words cneg1 ...cnegk ,
we can express these two goals as the following loss function L to be minimized
(hence the −); here the first term expresses that we want the classifier to assign the
real context word cpos a high probability of being a neighbor, and the second term
expresses that we want to assign each of the noise words cnegi a high probability of
being a non-neighbor, all multiplied because we assume independence:
L = − log [ P(+|w, c_pos) Π_{i=1}^{k} P(−|w, c_neg_i) ]
  = − [ log P(+|w, c_pos) + Σ_{i=1}^{k} log P(−|w, c_neg_i) ]
  = − [ log P(+|w, c_pos) + Σ_{i=1}^{k} log ( 1 − P(+|w, c_neg_i) ) ]
  = − [ log σ(c_pos · w) + Σ_{i=1}^{k} log σ(−c_neg_i · w) ]    (6.34)
That is, we want to maximize the dot product of the word with the actual context
words, and minimize the dot products of the word with the k negative sampled non-
neighbor words.
We minimize this loss function using stochastic gradient descent. Fig. 6.14
shows the intuition of one step of learning.
To get the gradient, we need to take the derivative of Eq. 6.34 with respect to
the different embeddings. It turns out the derivatives are the following (we leave the
derivation as an exercise):
Figure 6.14 Intuition of one step of gradient descent. The skip-gram model tries to shift
embeddings so the target embeddings (here for apricot) are closer to (have a higher dot prod-
uct with) context embeddings for nearby words (here jam) and further from (lower dot product
with) context embeddings for noise words that don’t occur nearby (here Tolstoy and matrix).
∂L/∂c_pos = [σ(c_pos · w) − 1] w    (6.35)

∂L/∂c_neg = [σ(c_neg · w)] w    (6.36)

∂L/∂w = [σ(c_pos · w) − 1] c_pos + Σ_{i=1}^{k} [σ(c_neg_i · w)] c_neg_i    (6.37)
The update equations going from time step t to t + 1 in stochastic gradient descent
(with learning rate η) are thus:

c_pos^{t+1} = c_pos^t − η [σ(c_pos^t · w^t) − 1] w^t    (6.38)

c_neg^{t+1} = c_neg^t − η [σ(c_neg^t · w^t)] w^t    (6.39)

w^{t+1} = w^t − η [ (σ(c_pos^t · w^t) − 1) c_pos^t + Σ_{i=1}^{k} (σ(c_neg_i^t · w^t)) c_neg_i^t ]    (6.40)
Just as in logistic regression, then, the learning algorithm starts with randomly ini-
tialized W and C matrices, and then walks through the training corpus using gradient
descent to move W and C so as to minimize the loss in Eq. 6.34 by making the up-
dates in (Eq. 6.38)-(Eq. 6.40).
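As a rough sketch (not a reference implementation), one SGD step implementing the updates of Eqs. 6.38-6.40 might look like the following; W, C, the word indices, and the learning rate eta are assumed inputs, and the k noise-word indices are assumed distinct.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sgns_step(W, C, w_idx, pos_idx, neg_idxs, eta=0.05):
    # One stochastic gradient descent step on the loss of Eq. 6.34,
    # updating the target matrix W and context matrix C in place.
    w = W[w_idx]
    g_pos = sigmoid(C[pos_idx] @ w) - 1                  # scale for c_pos (Eq. 6.35)
    g_neg = sigmoid(C[neg_idxs] @ w)                     # scales for c_neg_i (Eq. 6.36)
    grad_w = g_pos * C[pos_idx] + g_neg @ C[neg_idxs]    # Eq. 6.37

    C[pos_idx]  -= eta * g_pos * w                       # Eq. 6.38
    C[neg_idxs] -= eta * np.outer(g_neg, w)              # Eq. 6.39
    W[w_idx]    -= eta * grad_w                          # Eq. 6.40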
Recall that the skip-gram model learns two separate embeddings for each word i:
the target embedding wi and the context embedding ci , stored in two matrices, the
target matrix W and the context matrix C. It's common to just add them together,
representing word i with the vector wi + ci . Alternatively we can throw away the C
matrix and just represent each word i by the vector wi .
As with the simple count-based methods like tf-idf, the context window size L
affects the performance of skip-gram embeddings, and experiments often tune the
parameter L on a devset.
algorithm to show a hierarchical representation of which words are similar to others
in the embedding space. The uncaptioned figure on the left uses hierarchical clustering
of some embedding vectors for nouns as a visualization.
[Figure: hierarchical clustering of embedding vectors for nouns such as LION, OYSTER,
BULL, CHICAGO, ATLANTA, MONTREAL, NASHVILLE, CHINA, TOKYO, RUSSIA, AFRICA,
ASIA, EUROPE, AMERICA, BRAZIL, MOSCOW, FRANCE, HAWAII.]
Figure 6.15 The parallelogram model for analogy problems (Rumelhart and Abrahamson,
1973): the location of the vector for vine can be found by subtracting the vector for apple
from the vector for tree and adding the vector for grape.
For example, king − man + woman results in a vector close to queen, and
Paris − France + Italy results in a vector that is close to Rome. The embedding model
thus seems to be extracting representations of relations like MALE-FEMALE, or
CAPITAL-CITY-OF, or even COMPARATIVE/SUPERLATIVE, as shown in Fig. 6.16 from GloVe.
Figure 6.16 Relational properties of the GloVe vector space, shown by projecting vectors onto two dimen-
sions. (a) king − man + woman is close to queen. (b) Offsets seem to capture comparative and superlative
morphology (Pennington et al., 2014).
Figure 6.17 A t-SNE visualization of the semantic change of 3 words in English using
word2vec vectors. The modern sense of each word, and the grey context words, are com-
puted from the most recent (modern) time-point embedding space. Earlier points are com-
puted from earlier historical embedding spaces. The visualizations show the changes in the
word gay from meanings related to “cheerful” or “frolicsome” to referring to homosexuality,
the development of the modern “transmission” sense of broadcast from its original sense of
sowing seeds, and the pejoration of the word awful as it shifted from meaning “full of awe”
to meaning “terrible or appalling” (Hamilton et al., 2016b).
Psychologists have measured people’s implicit associations between concepts (like
‘flowers’ or ‘insects’) and attributes (like ‘pleasantness’ and ‘unpleasantness’) through
differences in the latency with which people label words in the various categories.7
Using such methods, people
in the United States have been shown to associate African-American names with
unpleasant words (more than European-American names), male names more with
mathematics and female names with the arts, and old people’s names with unpleas-
ant words (Greenwald et al. 1998, Nosek et al. 2002a, Nosek et al. 2002b). Caliskan
et al. (2017) replicated all these findings of implicit associations using GloVe vectors
and cosine similarity instead of human latencies. For example African-American
names like ‘Leroy’ and ‘Shaniqua’ had a higher GloVe cosine with unpleasant words
while European-American names (‘Brad’, ‘Greg’, ‘Courtney’) had a higher cosine
with pleasant words. These problems with embeddings are an example of a representational
harm (Crawford 2017, Blodgett et al. 2020), which is a harm caused by
a system demeaning or even ignoring some social groups. Any embedding-aware al-
gorithm that made use of word sentiment could thus exacerbate bias against African
Americans.
Recent research focuses on ways to try to remove these kinds of biases, for
example by developing a transformation of the embedding space that removes gen-
der stereotypes but preserves definitional gender (Bolukbasi et al. 2016, Zhao et al.
2017) or changing the training procedure (Zhao et al., 2018b). However, although
debiasing these sorts of debiasing may reduce bias in embeddings, they do not eliminate it
(Gonen and Goldberg, 2019), and this remains an open problem.
Historical embeddings are also being used to measure biases in the past. Garg
et al. (2018) used embeddings from historical texts to measure the association be-
tween embeddings for occupations and embeddings for names of various ethnici-
ties or genders (for example the relative cosine similarity of women’s names versus
men’s to occupation words like ‘librarian’ or ‘carpenter’) across the 20th century.
They found that the cosines correlate with the empirical historical percentages of
women or ethnic groups in those occupations. Historical embeddings also repli-
cated old surveys of ethnic stereotypes; the tendency of experimental participants in
1933 to associate adjectives like ‘industrious’ or ‘superstitious’ with, e.g., Chinese
ethnicity, correlates with the cosine between Chinese last names and those adjectives
using embeddings trained on 1930s text. They also were able to document historical
gender biases, such as the fact that embeddings for adjectives related to competence
(‘smart’, ‘wise’, ‘thoughtful’, ‘resourceful’) had a higher cosine with male than fe-
male words, and showed that this bias has been slowly decreasing since 1960. We
return in later chapters to this question about the role of bias in natural language
processing.
6.13 Summary
• In vector semantics, a word is modeled as a vector—a point in high-dimensional
space, also called an embedding. In this chapter we focus on static embed-
dings, where each word is mapped to a fixed embedding.
• Vector semantic models fall into two classes: sparse and dense. In sparse
models each dimension corresponds to a word in the vocabulary V and cells
are functions of co-occurrence counts. The term-document matrix has a
row for each word (term) in the vocabulary and a column for each document.
The word-context or term-term matrix has a row for each (target) word in
the vocabulary and a column for each context term in the vocabulary. Two
sparse weightings are common: the tf-idf weighting which weights each cell
by its term frequency and inverse document frequency, and PPMI (positive
pointwise mutual information), which is most common for word-context
matrices.
• Dense vector models have dimensionality 50–1000. Word2vec algorithms
like skip-gram are a popular way to compute dense embeddings. Skip-gram
trains a logistic regression classifier to compute the probability that two words
are ‘likely to occur nearby in text’. This probability is computed from the dot
product between the embeddings for the two words.
• Skip-gram uses stochastic gradient descent to train the classifier, by learning
embeddings that have a high dot product with embeddings of words that occur
nearby and a low dot product with noise words.
• Other important embedding algorithms include GloVe, a method based on
ratios of word co-occurrence probabilities.
• Whether using sparse or dense vectors, word and document similarities are
computed by some function of the dot product between vectors. The cosine
of two vectors—a normalized dot product—is the most popular such metric.
Bibliographical and Historical Notes
Related methods for inducing dense or topical representations include Latent
Dirichlet Allocation (LDA) (Blei et al., 2003) and Non-negative Matrix Factorization
(NMF) (Lee and Seung, 1999).
The LSA community seems to have first used the word “embedding” in Landauer
et al. (1997), in a variant of its mathematical meaning as a mapping from one space
or mathematical structure to another. In LSA, the word embedding seems to have
described the mapping from the space of sparse count vectors to the latent space of
SVD dense vectors. Although the word thus originally meant the mapping from one
space to another, it has metonymically shifted to mean the resulting dense vector in
the latent space, and it is in this sense that we currently use the word.
By the next decade, Bengio et al. (2003) and Bengio et al. (2006) showed that
neural language models could also be used to develop embeddings as part of the task
of word prediction. Collobert and Weston (2007), Collobert and Weston (2008), and
Collobert et al. (2011) then demonstrated that embeddings could be used to represent
word meanings for a number of NLP tasks. Turian et al. (2010) compared the value
of different kinds of embeddings for different NLP tasks. Mikolov et al. (2011)
showed that recurrent neural nets could be used as language models. The idea of
simplifying the hidden layer of these neural net language models to create the skip-
gram (and also CBOW) algorithms was proposed by Mikolov et al. (2013a). The
negative sampling training algorithm was proposed in Mikolov et al. (2013b). There
are numerous surveys of static embeddings and their parameterizations (Bullinaria
and Levy 2007, Bullinaria and Levy 2012, Lapesa and Evert 2014, Kiela and Clark
2014, Levy et al. 2015).
See Manning et al. (2008) and Chapter 14 for a deeper understanding of the role
of vectors in information retrieval, including how to compare queries with docu-
ments, more details on tf-idf, and issues of scaling to very large datasets. See Kim
(2019) for a clear and comprehensive tutorial on word2vec. Cruse (2004) is a useful
introductory linguistic text on lexical semantics.
Exercises
CHAPTER
7 Neural Networks
7.1 Units
The building block of a neural network is a single computational unit. A unit takes
a set of real valued numbers as input, performs some computation on them, and
produces an output.
At its heart, a neural unit is taking a weighted sum of its inputs, with one addi-
bias term tional term in the sum called a bias term. Given a set of inputs x1 ...xn , a unit has
a set of corresponding weights w1 ...wn and a bias b, so the weighted sum z can be
represented as:

z = b + Σ_i w_i x_i    (7.1)
Often it’s more convenient to express this weighted sum using vector notation; recall
vector from linear algebra that a vector is, at heart, just a list or array of numbers. Thus
we’ll talk about z in terms of a weight vector w, a scalar bias b, and an input vector
x, and we’ll replace the sum with the convenient dot product:
z = w·x+b (7.2)
As defined in Eq. 7.2, z is just a real valued number.
Finally, instead of using z, a linear function of x, as the output, neural units
apply a non-linear function f to z. We will refer to the output of this function as
activation the activation value for the unit, a. Since we are just modeling a single unit, the
activation for the node is in fact the final output of the network, which we’ll generally
call y. So the value y is defined as:
y = a = f (z)
We’ll discuss three popular non-linear functions f below (the sigmoid, the tanh, and
the rectified linear unit or ReLU) but it’s pedagogically convenient to start with the
sigmoid sigmoid function since we saw it in Chapter 5:
y = σ(z) = 1 / (1 + e^{−z})    (7.3)
The sigmoid (shown in Fig. 7.1) has a number of advantages; it maps the output
into the range (0, 1), which is useful in squashing outliers toward 0 or 1. And it’s
differentiable, which as we saw in Section 5.10 will be handy for learning.
Figure 7.1 The sigmoid function takes a real value and maps it to the range (0, 1). It is
nearly linear around 0 but outlier values get squashed toward 0 or 1.
Substituting Eq. 7.2 into Eq. 7.3 gives us the output of a neural unit:
y = σ(w · x + b) = 1 / (1 + exp(−(w · x + b)))    (7.4)
Fig. 7.2 shows a final schematic of a basic neural unit. In this example the unit
takes 3 input values x1 , x2 , and x3 , and computes a weighted sum, multiplying each
value by a weight (w1 , w2 , and w3 , respectively), adds them to a bias term b, and then
passes the resulting sum through a sigmoid function to result in a number between 0
and 1.
Figure 7.2 A neural unit, taking 3 inputs x1 , x2 , and x3 (and a bias b that we represent as a
weight for an input clamped at +1) and producing an output y. We include some convenient
intermediate variables: the output of the summation, z, and the output of the sigmoid, a. In
this case the output of the unit y is the same as a, but in deeper networks we’ll reserve y to
mean the final output of the entire network, leaving a as the activation of an individual node.
Let’s walk through an example just to get an intuition. Suppose we have a unit with
a particular weight vector w and bias b; given an input x, the unit computes the
weighted sum z = w · x + b and then the activation y = σ(z).
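A minimal sketch of this computation, with made-up weights and inputs:

import numpy as np

w = np.array([0.2, 0.3, 0.9])    # made-up weights
b = 0.5                          # made-up bias
x = np.array([0.5, 0.6, 0.1])    # made-up input

z = np.dot(w, x) + b             # weighted sum, Eq. 7.2
y = 1 / (1 + np.exp(-z))         # sigmoid, Eq. 7.4
print(z, y)                      # z = 0.87, y is about 0.70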
A closely related non-linear function is the tanh function, a variant of the sigmoid
that ranges from −1 to +1:

y = tanh(z) = (e^z − e^{−z}) / (e^z + e^{−z})    (7.5)
The simplest activation function, and perhaps the most commonly used, is the
ReLU rectified linear unit, also called the ReLU, shown in Fig. 7.3b. It's just the same as z
when z is positive, and 0 otherwise:

y = max(z, 0)    (7.6)
These activation functions have different properties that make them useful for differ-
ent language applications or network architectures. For example, the tanh function
has the nice properties of being smoothly differentiable and mapping outlier values
toward the mean. The rectifier function, on the other hand, has nice properties that
Figure 7.3 The tanh and ReLU activation functions.
result from it being very close to linear. In the sigmoid or tanh functions, very high
saturated values of z result in values of y that are saturated, i.e., extremely close to 1, and have
derivatives very close to 0. Zero derivatives cause problems for learning, because as
we’ll see in Section 7.5, we’ll train networks by propagating an error signal back-
wards, multiplying gradients (partial derivatives) from each layer of the network;
gradients that are almost 0 cause the error signal to get smaller and smaller until it is
too small to be used for training, a problem called the vanishing gradient problem.
Rectifiers don’t have this problem, since the derivative of ReLU for high values of z
is 1 rather than very close to 0.
         AND              OR               XOR
x1 x2 | y        x1 x2 | y        x1 x2 | y
0  0  | 0        0  0  | 0        0  0  | 0
0  1  | 0        0  1  | 1        0  1  | 1
1  0  | 0        1  0  | 1        1  0  | 1
1  1  | 1        1  1  | 1        1  1  | 0
perceptron This example was first shown for the perceptron, which is a very simple neural
unit that has a binary output and does not have a non-linear activation function. The
output y of a perceptron is 0 or 1, and is computed as follows (using the same weight
w, input x, and bias b as in Eq. 7.2):
y = 0,  if w · x + b ≤ 0
y = 1,  if w · x + b > 0        (7.7)
It’s very easy to build a perceptron that can compute the logical AND and OR
functions of its binary inputs; Fig. 7.4 shows the necessary weights.
Figure 7.4 The weights w and bias b for perceptrons for computing logical functions. The
inputs are shown as x1 and x2 and the bias as a special node with value +1 which is multiplied
with the bias weight b. (a) logical AND, with weights w1 = 1 and w2 = 1 and bias weight
b = −1. (b) logical OR, with weights w1 = 1 and w2 = 1 and bias weight b = 0. These
weights/biases are just one from an infinite number of possible sets of weights and biases that
would implement the functions.
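A quick sketch (not from the book) of the perceptron of Eq. 7.7 using the AND and OR weights from Fig. 7.4:

import numpy as np

def perceptron(w, b, x):
    # Eq. 7.7: output 1 if w . x + b > 0, else 0
    return int(np.dot(w, x) + b > 0)

w_and, b_and = np.array([1, 1]), -1   # AND weights/bias from Fig. 7.4a
w_or,  b_or  = np.array([1, 1]),  0   # OR weights/bias from Fig. 7.4b

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(x)
    print(x, perceptron(w_and, b_and, x), perceptron(w_or, b_or, x))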
It turns out, however, that it’s not possible to build a perceptron to compute
logical XOR! (It’s worth spending a moment to give it a try!)
The intuition behind this important result relies on understanding that a percep-
tron is a linear classifier. For a two-dimensional input x1 and x2 , the perceptron
equation, w1 x1 + w2 x2 + b = 0 is the equation of a line. (We can see this by putting
it in the standard linear format: x2 = (−w1 /w2 )x1 + (−b/w2 ).) This line acts as a
decision boundary in two-dimensional space in which the output 0 is assigned to all
inputs lying on one side of the line, and the output 1 to all input points lying on the
other side of the line. If we had more than 2 inputs, the decision boundary becomes
a hyperplane instead of a line, but the idea is the same, separating the space into two
categories.
Fig. 7.5 shows the possible logical inputs (00, 01, 10, and 11) and the line drawn
by one possible set of parameters for an AND and an OR classifier. Notice that there
is simply no way to draw a line that separates the positive cases of XOR (01 and 10)
from the negative cases (00 and 11). We say that XOR is not a linearly separable
function. Of course we could draw a boundary with a curve, or some other function,
but not a single line.
Figure 7.5 The functions AND, OR, and XOR, represented with input x1 on the x-axis and input x2 on the
y-axis. Filled circles represent perceptron outputs of 1, and white circles perceptron outputs of 0. There is no
way to draw a line that correctly separates the two categories for XOR. Figure styled after Russell and Norvig
(2002).
[Figure 7.6 diagram: x1 and x2 each connect to h1 and h2 with weight 1; h1 has bias
0 and h2 has bias −1; y1 receives weight 1 from h1 and −2 from h2, with bias 0.]
Figure 7.6 XOR solution after Goodfellow et al. (2016). There are three ReLU units, in
two layers; we’ve called them h1 , h2 (h for “hidden layer”) and y1 . As before, the numbers
on the arrows represent the weights w for each unit, and we represent the bias b as a weight
on a unit clamped to +1, with the bias weights/units in gray.
It’s also instructive to look at the intermediate results, the outputs of the two
hidden nodes h1 and h2 . We showed in the previous paragraph that the h vector for
the inputs x = [0, 0] was [0, 0]. Fig. 7.7b shows the values of the h layer for all
4 inputs. Notice that hidden representations of the two input points x = [0, 1] and
x = [1, 0] (the two cases with XOR output = 1) are merged to the single point h =
[1, 0]. The merger makes it easy to linearly separate the positive and negative cases
of XOR. In other words, we can view the hidden layer of the network as forming a
representation of the input.
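A short sketch (not from the book) that checks the XOR network of Fig. 7.6, using the weights recovered above:

import numpy as np

def relu(z):
    return np.maximum(z, 0)

# Weights of the XOR network in Fig. 7.6 (after Goodfellow et al. 2016)
W = np.array([[1, 1],    # h1 weights from x1, x2
              [1, 1]])   # h2 weights from x1, x2
b = np.array([0, -1])    # hidden biases
u = np.array([1, -2])    # y1 weights from h1, h2 (output bias 0)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = relu(W @ np.array(x) + b)
    y = relu(u @ h)
    print(x, h, y)       # y reproduces XOR; h is [1, 0] for both (0,1) and (1,0)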
In this example we just stipulated the weights in Fig. 7.6. But for real examples
the weights for neural networks are learned automatically using the error backprop-
agation algorithm to be introduced in Section 7.5. That means the hidden layers will
learn to form useful representations. This intuition, that neural networks can auto-
matically learn useful representations of the input, is one of their key advantages,
and one that we will return to again and again in later chapters.
[Figure 7.7: The hidden layer forms a new representation of the input: (a) the four
XOR inputs in the original x space; (b) their hidden-layer values in h space, where
x = [0, 1] and x = [1, 0] are merged to the single point h = [1, 0].]
Figure 7.8 A simple 2-layer feedforward network, with one hidden layer, one output layer,
and one input layer (the input layer is usually not counted when enumerating layers).
The hidden layer h is computed in three steps: multiplying the weight matrix by the input vector x, adding the bias vector b,
and applying the activation function g (such as the sigmoid, tanh, or ReLU activation
function defined above).
The output of the hidden layer, the vector h, is thus the following (for this exam-
ple we’ll use the sigmoid function σ as our activation function):
h = σ (Wx + b) (7.8)
Notice that we’re applying the σ function here to a vector, while in Eq. 7.3 it was
applied to a scalar. We’re thus allowing σ (·), and indeed any activation function
g(·), to apply to a vector element-wise, so g[z1 , z2 , z3 ] = [g(z1 ), g(z2 ), g(z3 )].
Let’s introduce some constants to represent the dimensionalities of these vectors
and matrices. We’ll refer to the input layer as layer 0 of the network, and have n0
represent the number of inputs, so x is a vector of real numbers of dimension n0 ,
or more formally x ∈ Rn0 , a column vector of dimensionality [n0 , 1]. Let’s call the
hidden layer layer 1 and the output layer layer 2. The hidden layer has dimensional-
ity n1 , so h ∈ Rn1 and also b ∈ Rn1 (since each hidden unit can take a different bias
value). And the weight matrix W has dimensionality W ∈ Rn1 ×n0 , i.e. [n1 , n0 ].
Take a moment to convince yourself that the matrix multiplication in Eq. 7.8 will
compute the value of each h_j as σ( Σ_{i=1}^{n0} W_{ji} x_i + b_j ).
As we saw in Section 7.2, the resulting value h (for hidden but also for hypoth-
esis) forms a representation of the input. The role of the output layer is to take
this new representation h and compute a final output. This output could be a real-
valued number, but in many cases the goal of the network is to make some sort of
classification decision, and so we will focus on the case of classification.
If we are doing a binary task like sentiment classification, we might have a sin-
gle output node, and its scalar value y is the probability of positive versus negative
sentiment. If we are doing multinomial classification, such as assigning a part-of-
speech tag, we might have one output node for each potential part-of-speech, whose
output value is the probability of that part-of-speech, and the values of all the output
nodes must sum to one. The output layer is thus a vector y that gives a probability
distribution across the output nodes.
Let’s see how this happens. Like the hidden layer, the output layer has a weight
matrix (let’s call it U), but some models don’t include a bias vector b in the output
layer, so we’ll simplify by eliminating the bias vector in this example. The weight
matrix is multiplied by its input vector (h) to produce the intermediate output z:
z = Uh
There are n2 output nodes, so z ∈ Rn2 , weight matrix U has dimensionality U ∈
Rn2 ×n1 , and element Ui j is the weight from unit j in the hidden layer to unit i in the
output layer.
However, z can’t be the output of the classifier, since it’s a vector of real-valued
numbers, while what we need for classification is a vector of probabilities. There is
normalizing a convenient function for normalizing a vector of real values, by which we mean
converting it to a vector that encodes a probability distribution (all the numbers lie
softmax between 0 and 1 and sum to 1): the softmax function that we saw on page 85 of
Chapter 5. More generally for any vector z of dimensionality d, the softmax is
defined as:
softmax(z_i) = exp(z_i) / Σ_{j=1}^{d} exp(z_j)        1 ≤ i ≤ d    (7.9)
You may recall that we used softmax to create a probability distribution from a
vector of real-valued numbers (computed from summing weights times features) in
the multinomial version of logistic regression in Chapter 5.
That means we can think of a neural network classifier with one hidden layer
as building a vector h which is a hidden layer representation of the input, and then
running standard multinomial logistic regression on the features that the network
develops in h. By contrast, in Chapter 5 the features were mainly designed by hand
via feature templates. So a neural network is like multinomial logistic regression,
but (a) with many layers, since a deep neural network is like layer after layer of lo-
gistic regression classifiers; (b) with those intermediate layers having many possible
activation functions (tanh, ReLU, sigmoid) instead of just sigmoid (although we’ll
continue to use σ for convenience to mean any activation function); (c) rather than
forming the features by feature templates, the prior layers of the network induce the
feature representations themselves.
Here are the final equations for a feedforward network with a single hidden layer,
which takes an input vector x, outputs a probability distribution y, and is parameter-
ized by weight matrices W and U and a bias vector b:
h = σ (Wx + b)
z = Uh
y = softmax(z) (7.12)
And just to remember the shapes of all our variables, x ∈ Rn0 , h ∈ Rn1 , b ∈ Rn1 ,
W ∈ Rn1 ×n0 , U ∈ Rn2 ×n1 , and the output vector y ∈ Rn2 . We’ll call this network a 2-
layer network (we traditionally don’t count the input layer when numbering layers,
but do count the output layer). So by this terminology logistic regression is a 1-layer
network.
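Putting Eq. 7.12 together, a minimal sketch of the forward pass for a 2-layer network, with made-up dimensions and randomly initialized parameters:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))    # subtract max for numerical stability
    return e / e.sum()

n0, n1, n2 = 3, 4, 2             # made-up input, hidden, and output sizes
rng = np.random.default_rng(0)
W = rng.normal(size=(n1, n0))
b = rng.normal(size=n1)
U = rng.normal(size=(n2, n1))

x = np.array([1.0, 0.0, 2.0])    # toy input vector
h = sigmoid(W @ x + b)           # Eq. 7.8
z = U @ h
y = softmax(z)                   # Eq. 7.12: a probability distribution
print(y, y.sum())                # sums to 1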
Note that with this notation, the equations for the computation done at each layer are
the same. The algorithm for computing the forward step in an n-layer feedforward
network, given the input vector a[0] is thus simply:
for i in 1,...,n
z[i] = W[i] a[i−1] + b[i]
a[i] = g[i] (z[i] )
ŷ = a[n]
It’s often useful to have a name for the final set of activations right before the final
softmax. So however many layers we have, we’ll generally call the unnormalized
values in the final vector z[n] , the vector of scores right before the final softmax, the
logits logits (see Eq. 5.7).
The need for non-linear activation functions One of the reasons we use non-linear activation functions for each layer in a neural network is that if we did not, the resulting network is exactly equivalent to a single-layer network. Let's see why this is true. Imagine the first two layers of such a network of purely linear layers: the first computes h = W1x and the second z = W2h = W2W1x. But this is just W′x for the single matrix W′ = W2W1, so a stack of purely linear layers can compute nothing that one linear layer could not.
Replacing the bias unit In describing networks, we will often use a slightly sim-
plified notation that represents exactly the same function without referring to an ex-
plicit bias node b. Instead, we add a dummy node a_0 to each layer whose value will always be 1. Thus layer 0, the input layer, will have a dummy node a_0^[0] = 1, layer 1 will have a dummy node a_0^[1] = 1, and so on. This dummy node still has an associated weight, and
that weight represents the bias value b. For example instead of an equation like
h = σ (Wx + b) (7.15)
we’ll use:
h = σ (Wx) (7.16)
But now instead of our vector x having n0 values: x = x1 , . . . , xn0 , it will have n0 +
1 values, with a new 0th dummy value x0 = 1: x = x0 , . . . , xn0 . And instead of
computing each h j as follows:
h_j = σ( Σ_{i=1}^{n0} W_ji x_i + b_j ),    (7.17)
we'll instead compute:
h_j = σ( Σ_{i=0}^{n0} W_ji x_i ),    (7.18)
where the value W_j0 replaces what had been b_j. Fig. 7.9 shows a visualization.
Figure 7.9 Replacing the bias node (shown in a) with x0 (b).
We’ll continue showing the bias as b when we go over the learning algorithm
in Section 7.5, but then we’ll switch to this simplified notation without explicit bias
terms for the rest of the book.
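To make the trick concrete, here is a small sketch (illustrative sizes only) showing that folding the bias b into an extra column of W, together with a dummy input x0 = 1, computes exactly the same hidden layer:
import numpy as np

rng = np.random.default_rng(1)
n0, n1 = 3, 4                          # illustrative layer sizes
W = rng.normal(size=(n1, n0))
b = rng.normal(size=n1)
x = rng.normal(size=n0)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

h_with_bias = sigmoid(W @ x + b)                   # Eq. 7.15: h = sigma(Wx + b)

W_aug = np.hstack([b[:, None], W])                 # column 0 of W now holds the bias weights
x_aug = np.concatenate([[1.0], x])                 # dummy value x0 = 1
h_folded = sigmoid(W_aug @ x_aug)                  # Eq. 7.16: h = sigma(Wx) with augmented x

print(np.allclose(h_with_bias, h_folded))          # True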
Let’s begin with a simple 2-layer sentiment classifier. You might imagine taking
our logistic regression classifier from Chapter 5, which corresponds to a 1-layer net-
work, and just adding a hidden layer. The input element xi could be scalar features
like those in Fig. 5.2, e.g., x1 = count(words ∈ doc), x2 = count(positive lexicon
words ∈ doc), x3 = 1 if “no” ∈ doc, and so on. And the output layer ŷ could have
two nodes (one each for positive and negative), or 3 nodes (positive, negative, neu-
tral), in which case ŷ1 would be the estimated probability of positive sentiment, ŷ2
the probability of negative and ŷ3 the probability of neutral. The resulting equations
would be just what we saw above for a 2-layer network (as always, we’ll continue
to use the σ to stand for any non-linearity, whether sigmoid, ReLU or other).
x = [x1 , x2 , ...xN ] (each xi is a hand-designed feature)
h = σ (Wx + b)
z = Uh
ŷ = softmax(z) (7.19)
Fig. 7.10 shows a sketch of this architecture. As we mentioned earlier, adding this
hidden layer to our logistic regression classifier allows the network to represent the
non-linear interactions between features. This alone might give us a better sentiment
classifier.
Figure 7.10 A sketch of the 2-layer sentiment classifier: hand-designed features such as x1 = word count = 3 and x2 = count of positive lexicon words = 1 feed a hidden layer h, and the output layer gives ŷ1 = p(+), ŷ2 = p(−), and ŷ3 = p(neut).
There are many other options, like taking the element-wise max. The element-wise
max of a set of n vectors is a new vector whose kth element is the max of the kth
elements of all the n vectors. Here are the equations for this classifier assuming
mean pooling; the architecture is sketched in Fig. 7.11:
x = mean(e(w1), e(w2), . . . , e(wn))
h = σ(Wx + b)
z = Uh
ŷ = softmax(z)    (7.21)
Figure 7.11 A classifier with a single hidden layer whose input x [d×1] is the pooled embedding of the input words (dessert, was, great); W is [dh×d], h is [dh×1], U is [3×dh], and ŷ is [3×1], giving p(+), p(−), and p(neut).
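As a sketch of this classifier (invented dimensions and random parameters, just to show the shapes), the following NumPy code mean-pools the word embeddings and applies the single-hidden-layer network of Eq. 7.21:
import numpy as np

rng = np.random.default_rng(2)
d, dh = 8, 6                               # embedding and hidden sizes: illustrative only
W = rng.normal(size=(dh, d)); b = rng.normal(size=dh)
U = rng.normal(size=(3, dh))               # 3 output classes: +, -, neutral

def softmax(z): e = np.exp(z - z.max()); return e / e.sum()

embeddings = rng.normal(size=(3, d))       # stand-ins for e(dessert), e(was), e(great)
x = embeddings.mean(axis=0)                # mean pooling, shape [d]
h = np.maximum(0, W @ x + b)               # ReLU hidden layer, shape [dh]
y_hat = softmax(U @ h)                     # shape [3]: p(+), p(-), p(neut)
print(y_hat, y_hat.sum())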
While Eq. 7.21 shows how to classify a single example x, in practice we want
to efficiently classify an entire test set of m examples. We do this by vectorizing
the process, just as we saw with logistic regression; instead of using for-loops to go
through each example, we’ll use matrix multiplication to do the entire computation
of an entire test set at once. First, we pack all the input feature vectors for each input
x into a single input matrix X, with each row i a row vector consisting of the pooled
embedding for input example x(i) (i.e., the vector x(i) ). If the dimensionality of our
pooled input embedding is d, X will be a matrix of shape [m × d].
We will then need to slightly modify Eq. 7.21. X is of shape [m × d] and W is of
shape [dh × d], so we’ll have to reorder how we multiply X and W and transpose W
so they correctly multiply to yield a matrix H of shape [m × dh ].1 The bias vector b
from Eq. 7.21 of shape [1 × dh ] will now have to be replicated into a matrix of shape
[m × dh ]. We’ll need to similarly reorder the next step and transpose U. Finally, our
output matrix Ŷ will be of shape [m × 3] (or more generally [m × do ], where do is
the number of output classes), with each row i of our output matrix Ŷ consisting of
the output vector ŷ(i). Here are the final equations for computing the output class probabilities for the entire batch: H = σ(XWᵀ + b), Z = HUᵀ, Ŷ = softmax(Z).
1 Note that we could have kept the original order of our products if we had instead made our input
matrix X represent each input as a column vector instead of a row vector, making it of shape [d × m]. But
representing inputs as row vectors is convenient and common in neural network models.
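A sketch of the batched computation just described, with toy shapes and random parameters:
import numpy as np

rng = np.random.default_rng(3)
m, d, dh, do = 5, 8, 6, 3                  # batch size, input, hidden, output dims (illustrative)
X = rng.normal(size=(m, d))                # one pooled input embedding per row
W = rng.normal(size=(dh, d)); b = rng.normal(size=dh)
U = rng.normal(size=(do, dh))

def softmax_rows(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

H = np.maximum(0, X @ W.T + b)             # [m x dh]; b broadcasts across the m rows
Y_hat = softmax_rows(H @ U.T)              # [m x do], one probability distribution per row
print(Y_hat.shape, Y_hat.sum(axis=1))      # (5, 3) and all ones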
We introduced the cross-entropy loss in Chapter 5 (page 97). Let's briefly summarize the explanation here for convenience. First, when
we have more than 2 classes we’ll need to represent both y and ŷ as vectors. Let’s
assume we’re doing hard classification, where only one class is the correct one.
The true label y is then a vector with K elements, each corresponding to a class,
with yc = 1 if the correct class is c, with all other elements of y being 0. Recall that
a vector like this, with one value equal to 1 and the rest 0, is called a one-hot vector.
And our classifier will produce an estimate vector with K elements ŷ, each element
ŷk of which represents the estimated probability p(yk = 1|x).
The loss function for a single example x is the negative sum of the logs of the K
output classes, each weighted by their probability yk :
L_CE(ŷ, y) = − Σ_{k=1}^{K} y_k log ŷ_k    (7.24)
We can simplify this equation further; let's first rewrite the equation using the function 1{} which evaluates to 1 if the condition in the brackets is true and to 0 otherwise. This makes it more obvious that the terms in the sum in Eq. 7.24 will be 0 except for the term corresponding to the true class for which y_k = 1:
L_CE(ŷ, y) = − Σ_{k=1}^{K} 1{y_k = 1} log ŷ_k
In other words, the cross-entropy loss is simply the negative log of the output probability corresponding to the correct class, and we therefore also call this the negative log likelihood loss:
L_CE(ŷ, y) = − log ŷ_c    (where c is the correct class)
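A minimal sketch in Python (hypothetical helper names): with a one-hot y, the full sum of Eq. 7.24 reduces to the negative log probability of the correct class:
import numpy as np

def cross_entropy(y_hat, correct_class):
    # negative log likelihood of the correct class
    return -np.log(y_hat[correct_class])

y_hat = np.array([0.1, 0.7, 0.2])          # model's estimated distribution over K=3 classes
y = np.array([0.0, 1.0, 0.0])              # one-hot true label, correct class c = 1
print(-np.sum(y * np.log(y_hat)))          # full sum of Eq. 7.24
print(cross_entropy(y_hat, 1))             # same value: -log 0.7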
But these derivatives only give correct updates for one weight layer: the last one!
For deep networks, computing the gradients for each weight is much more complex,
since we are computing the derivative with respect to weight parameters that appear
all the way back in the very early layers of the network, even though the loss is
computed only at the very end of the network.
error backpropagation The solution to computing this gradient is an algorithm called error backpropagation or backprop (Rumelhart et al., 1986). While backprop was invented spe-
cially for neural networks, it turns out to be the same as a more general procedure
called backward differentiation, which depends on the notion of computation
graphs. Let’s see how that works in the next subsection.
For example, consider the function L(a, b, c) = c(a + 2b), which we can break down into the intermediate computations:
d = 2∗b
e = a+d
L = c∗e
We can now represent this as a graph, with nodes for each operation, and di-
rected edges showing the outputs from each operation as the inputs to the next, as
in Fig. 7.12. The simplest use of computation graphs is to compute the value of
the function with some given inputs. In the figure, we’ve assumed the inputs a = 3,
b = 1, c = −2, and we’ve shown the result of the forward pass to compute the re-
sult L(3, 1, −2) = −10. In the forward pass of a computation graph, we apply each
operation left to right, passing the outputs of each computation as the input to the
next node.
Figure 7.12 Computation graph for the function L(a, b, c) = c(a+2b), with values for input
nodes a = 3, b = 1, c = −2, showing the forward pass computation of L.
The backward pass computes the derivative of the output L with respect to each of the input variables, i.e., ∂L/∂a, ∂L/∂b, and ∂L/∂c. The derivative ∂L/∂a tells us how much a small change in a affects L.
chain rule Backwards differentiation makes use of the chain rule in calculus, so let’s re-
mind ourselves of that. Suppose we are computing the derivative of a composite
function f (x) = u(v(x)). The derivative of f (x) is the derivative of u(x) with respect
to v(x) times the derivative of v(x) with respect to x:
df/dx = (du/dv) · (dv/dx)    (7.29)
The chain rule extends to more than two functions. If computing the derivative of a
composite function f (x) = u(v(w(x))), the derivative of f (x) is:
df/dx = (du/dv) · (dv/dw) · (dw/dx)    (7.30)
The intuition of backward differentiation is to pass gradients back from the final
node to all the nodes in the graph. Fig. 7.13 shows part of the backward computation
at one node e. Each node takes an upstream gradient that is passed in from its parent
node to the right, and for each of its inputs computes a local gradient (the gradient
of its output with respect to its input), and uses the chain rule to multiply these two
to compute a downstream gradient to be passed on to the next earlier node.
Figure 7.13 Each node (like e here) takes an upstream gradient, multiplies it by the local
gradient (the gradient of its output with respect to its input), and uses the chain rule to compute
a downstream gradient to be passed on to a prior node. A node may have multiple local
gradients if it has multiple inputs.
Let’s now compute the 3 derivatives we need. Since in the computation graph
L = ce, we can directly compute the derivative ∂L/∂c:
∂L/∂c = e    (7.31)
For the other two, we’ll need to use the chain rule:
∂L/∂a = (∂L/∂e)(∂e/∂a)
∂L/∂b = (∂L/∂e)(∂e/∂d)(∂d/∂b)    (7.32)
Eq. 7.32 and Eq. 7.31 thus require five intermediate derivatives: ∂L/∂e, ∂L/∂c, ∂e/∂a, ∂e/∂d, and ∂d/∂b, which are as follows (making use of the fact that the derivative of a sum is the sum of the derivatives):
L = ce :     ∂L/∂e = c,   ∂L/∂c = e
e = a + d :  ∂e/∂a = 1,   ∂e/∂d = 1
d = 2b :     ∂d/∂b = 2
In the backward pass, we compute each of these partials along each edge of the
graph from right to left, using the chain rule just as we did above. Thus we begin by
computing the downstream gradients from node L, which are ∂L/∂e and ∂L/∂c. For node e, we then multiply this upstream gradient ∂L/∂e by the local gradient (the gradient of the output with respect to the input), ∂e/∂d, to get the output we send back to node d: ∂L/∂d.
And so on, until we have annotated the graph all the way to all the input variables.
The forward pass conveniently already will have computed the values of the forward
intermediate variables we need (like d and e) to compute these derivatives. Fig. 7.14
shows the backward pass.
Figure 7.14 Computation graph for the function L(a, b, c) = c(a + 2b), showing the backward pass computation of ∂L/∂a, ∂L/∂b, and ∂L/∂c.
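The worked example can be checked in a few lines of Python; this is a sketch of just this toy graph, not a general automatic-differentiation system:
# Forward pass for L(a, b, c) = c(a + 2b) with a=3, b=1, c=-2
a, b, c = 3.0, 1.0, -2.0
d = 2 * b            # d = 2
e = a + d            # e = 5
L = c * e            # L = -10

# Backward pass: multiply upstream gradients by local gradients (chain rule)
dL_de = c            # from L = c*e
dL_dc = e            # = 5
dL_da = dL_de * 1    # e = a + d, local gradient de/da = 1  ->  -2
dL_dd = dL_de * 1    # de/dd = 1                            ->  -2
dL_db = dL_dd * 2    # d = 2b, local gradient dd/db = 2     ->  -4
print(L, dL_da, dL_db, dL_dc)   # -10.0 -2.0 -4.0 5.0, matching Fig. 7.14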
For the backward pass we'll also need to compute the loss L. The loss function for binary sigmoid output from Eq. 7.23 is
L_CE(a^[2], y) = −[ y log a^[2] + (1 − y) log(1 − a^[2]) ]    (7.35)
Figure 7.15 Sample computation graph for a simple 2-layer neural net (= 1 hidden layer) with two input units and 2 hidden units. We've adjusted the notation a bit to avoid long equations in the nodes by just mentioning the function that is being computed, and the resulting variable name. Thus the * to the right of node w_11^[1] means that w_11^[1] is to be multiplied by x1, and the node z^[1] = + means that the value of z^[1] is computed by summing the three nodes that feed into it (the two products, and the bias term b_1^[1]).
The weights that need updating (those for which we need to know the partial
derivative of the loss function) are shown in teal. In order to do the backward pass,
we’ll need to know the derivatives of all the functions in the graph. We already saw
in Section 5.10 the derivative of the sigmoid σ :
dσ(z)/dz = σ(z)(1 − σ(z))    (7.36)
We'll also need the derivatives of each of the other activation functions. The derivative of tanh is:
d tanh(z)/dz = 1 − tanh²(z)    (7.37)
The derivative of the ReLU is 0 for negative z and 1 otherwise:
d ReLU(z)/dz = 0 for z < 0, and 1 for z ≥ 0
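In code these derivatives are one-liners; a sketch with illustrative helper names:
import numpy as np

def sigmoid(z):   return 1 / (1 + np.exp(-z))
def dsigmoid(z):  s = sigmoid(z); return s * (1 - s)      # Eq. 7.36
def dtanh(z):     return 1 - np.tanh(z) ** 2              # Eq. 7.37
def drelu(z):     return (z >= 0).astype(float)           # 0 for z < 0, 1 for z >= 0

z = np.linspace(-2, 2, 5)
print(dsigmoid(z), dtanh(z), drelu(z))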
We’ll give the start of the computation, computing the derivative of the loss function
L with respect to z, or ∂∂ Lz (and leaving the rest of the computation as an exercise for
the reader). By the chain rule:
∂L ∂ L ∂ a[2]
= [2] (7.39)
∂z ∂a ∂z
So let's first compute ∂L/∂a^[2], taking the derivative of Eq. 7.35, repeated here:
L_CE(a^[2], y) = −[ y log a^[2] + (1 − y) log(1 − a^[2]) ]
∂L/∂a^[2] = −( y ∂log(a^[2])/∂a^[2] + (1 − y) ∂log(1 − a^[2])/∂a^[2] )
          = −( y (1/a^[2]) + (1 − y) (1/(1 − a^[2])) (−1) )
          = − y/a^[2] + (y − 1)/(1 − a^[2])    (7.40)
Next, by the derivative of the sigmoid:
∂a^[2]/∂z = a^[2] (1 − a^[2])
Finally, we can use the chain rule:
∂L/∂z = (∂L/∂a^[2]) (∂a^[2]/∂z)
      = ( − y/a^[2] + (y − 1)/(1 − a^[2]) ) a^[2] (1 − a^[2])
      = a^[2] − y    (7.41)
Continuing the backward computation of the gradients (next by passing the gradients over b_1^[2] and the two product nodes, and so on, back to all the teal nodes), is
left as an exercise for the reader.
In the following examples we’ll use a 4-gram example, so we’ll show a neural net to
estimate the probability P(wt = i|wt−3 , wt−2 , wt−1 ).
Neural language models represent words in this prior context by their embed-
dings, rather than just by their word identity as used in n-gram language models.
Using embeddings allows neural language models to generalize better to unseen
data. For example, suppose we’ve seen this sentence in training:
I have to make sure that the cat gets fed.
but have never seen the words “gets fed” after the word “dog”. Our test set has the
prefix “I forgot to make sure that the dog gets”. What’s the next word? An n-gram
language model will predict “fed” after “that the cat gets”, but not after “that the dog
gets”. But a neural LM, knowing that “cat” and “dog” have similar embeddings, will
be able to generalize from the “cat” context to assign a high enough probability to
“fed” even after seeing “dog”.
Figure 7.16 Selecting the embedding vector for word V5 by multiplying the embedding
matrix E with a one-hot vector with a 1 in index 5.
Figure 7.17 Forward inference in the feedforward neural language model. One-hot vectors of shape [|V|×1] for the three context words (for, all, the) each select a column of the embedding matrix E [d×|V|]; the concatenated embedding e [3d×1] is multiplied by W [dh×3d] to give h [dh×1], which is multiplied by U [|V|×dh] and passed through a softmax to give ŷ [|V|×1] (e.g., p(do|...), p(fish|...)).
Forward inference proceeds in four steps:
embedding layer 1. Select the embeddings from E: given the three previous words, we look up their indices, create three one-hot vectors, and multiply each by the embedding matrix E to give the first true neural layer, the embedding layer. Since each column of the input matrix E is an
embedding for a word, and the input is a one-hot column vector xi for word
Vi , the embedding layer for input w will be Exi = ei , the embedding for word
i. We now concatenate the three embeddings for the three context words to
produce the embedding layer e.
2. Multiply by W: We multiply by W (and add b) and pass through the ReLU
(or other) activation function to get the hidden layer h.
3. Multiply by U: h is now multiplied by U
4. Apply softmax: After the softmax, each node i in the output layer estimates
the probability P(wt = i|wt−1 , wt−2 , wt−3 )
In summary, the equations for a neural language model with a window size of 3, given one-hot input vectors for each input context word, are:
e = [Ex_{t−3}; Ex_{t−2}; Ex_{t−1}]
h = σ(We + b)
z = Uh
ŷ = softmax(z)
The model is trained with the cross-entropy loss. For language modeling, the classes are the words in the vocabulary, so ŷ_i here means the probability that the model assigns to the correct next word w_t:
L_CE = − log ŷ_i    (where i is the vocabulary index of the correct next word w_t)
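A sketch of this forward pass and loss in NumPy (tiny invented vocabulary and dimensions; E, W, U, b are random stand-ins for trained parameters):
import numpy as np

rng = np.random.default_rng(4)
V, d, dh = 10, 4, 6                           # vocabulary size, embedding dim, hidden dim (toy)
E = rng.normal(size=(d, V))
W = rng.normal(size=(dh, 3 * d)); b = rng.normal(size=dh)
U = rng.normal(size=(V, dh))

def one_hot(i): x = np.zeros(V); x[i] = 1.0; return x
def softmax(z): e = np.exp(z - z.max()); return e / e.sum()

context = [3, 7, 1]                           # indices of w_{t-3}, w_{t-2}, w_{t-1}
e = np.concatenate([E @ one_hot(i) for i in context])   # embedding layer, shape [3d]
h = np.maximum(0, W @ e + b)                  # hidden layer (ReLU)
y_hat = softmax(U @ h)                        # distribution over the V possible next words
loss = -np.log(y_hat[5])                      # CE loss if the true next word has index 5
print(y_hat.shape, loss)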
The parameter update for stochastic gradient descent for this loss from step s to s + 1
Figure 7.18 Learning all the way back to embeddings. Again, the embedding matrix E is
shared among the 3 context words.
is then:
θ_{s+1} = θ_s − η ∂[− log p(w_t | w_{t−1}, ..., w_{t−n+1})] / ∂θ    (7.46)
This gradient can be computed in any standard neural network framework, which will then backpropagate through θ = E, W, U, b.
Training the parameters to minimize loss will result both in an algorithm for language modeling (a word predictor) and in a new set of embeddings E that can be used as word representations for other tasks.
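In code, the update of Eq. 7.46 is a simple loop over the parameters once backprop has produced the gradients; a sketch with placeholder gradients and an invented learning rate:
import numpy as np

def sgd_step(params, grads, eta=0.1):
    # theta_{s+1} = theta_s - eta * dLoss/dtheta, applied to each of E, W, U, b
    return {name: params[name] - eta * grads[name] for name in params}

params = {"E": np.ones((4, 10)), "W": np.ones((6, 12)), "U": np.ones((10, 6)), "b": np.zeros(6)}
grads  = {name: 0.5 * np.ones_like(p) for name, p in params.items()}   # placeholder gradients
params = sgd_step(params, grads)
print(params["b"][:3])     # each bias moved by -eta * 0.5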
7.8 Summary
• Neural networks are built out of neural units, originally inspired by biological
neurons but now simply an abstract computational device.
• Each neural unit multiplies input values by a weight vector, adds a bias, and
then applies a non-linear activation function like sigmoid, tanh, or rectified
linear unit.
• In a fully-connected, feedforward network, each unit in layer i is connected
to each unit in layer i + 1, and there are no cycles.
• The power of neural networks comes from the ability of early layers to learn
representations that can be utilized by later layers in the network.
• Neural networks are trained by optimization algorithms like gradient de-
scent.
• Error backpropagation, backward differentiation on a computation graph,
is used to compute the gradients of the loss function for a network.
CHAPTER
8 RNNs and LSTMs
In this chapter we focus on a class of recurrent networks referred to as Elman networks (Elman, 1990) or simple recurrent networks. These networks are useful in their own right and serve as the basis for more
complex approaches like the Long Short-Term Memory (LSTM) networks discussed
later in this chapter. In this chapter when we use the term RNN we’ll be referring to
these simpler more constrained networks (although you will often see the term RNN
to mean any net with recurrent properties including LSTMs).
Figure 8.1 Simple recurrent neural network after Elman (1990). The hidden layer includes
a recurrent connection as part of its input. That is, the activation value of the hidden layer
depends on the current input as well as the activation value of the hidden layer from the
previous time step.
Fig. 8.1 illustrates the structure of an RNN. As with ordinary feedforward net-
works, an input vector representing the current input, xt , is multiplied by a weight
matrix and then passed through a non-linear activation function to compute the val-
ues for a layer of hidden units. This hidden layer is then used to calculate a cor-
responding output, yt . In a departure from our earlier window-based approach, se-
quences are processed by presenting one item at a time to the network. We’ll use
subscripts to represent time, thus xt will mean the input vector x at time t. The key
difference from a feedforward network lies in the recurrent link shown in the figure
with the dashed line. This link augments the input to the computation at the hidden
layer with the value of the hidden layer from the preceding point in time.
The hidden layer from the previous time step provides a form of memory, or
context, that encodes earlier processing and informs the decisions to be made at
later points in time. Critically, this approach does not impose a fixed-length limit
on this prior context; the context embodied in the previous hidden layer can include
information extending back to the beginning of the sequence.
Adding this temporal dimension makes RNNs appear to be more complex than
non-recurrent architectures. But in reality, they’re not all that different. Given an
input vector and the values for the hidden layer from the previous time step, we’re
still performing the standard feedforward calculation introduced in Chapter 7. To
see this, consider Fig. 8.2 which clarifies the nature of the recurrence and how it
factors into the computation at the hidden layer. The most significant change lies in
the new set of weights, U, that connect the hidden layer from the previous time step
to the current hidden layer. These weights determine how the network makes use of
past context in calculating the output for the current input. As with the other weights
in the network, these connections are trained via backpropagation.
Figure 8.2 Simple recurrent neural network illustrated as a feedforward network. The hid-
den layer ht−1 from the prior time step is multiplied by weight matrix U and then added to
the feedforward component from the current time step.
The new hidden layer at time t is computed from the current input and the previous hidden layer:
h_t = g(U h_{t−1} + W x_t)
Once we have the values for the hidden layer, we proceed with the usual computation to generate the output vector.
Let’s refer to the input, hidden and output layer dimensions as din , dh , and dout
respectively. Given this, our three parameter matrices are: W ∈ Rdh ×din , U ∈ Rdh ×dh ,
and V ∈ Rdout ×dh .
We compute yt via a softmax computation that gives a probability distribution
over the possible output classes.
yt = softmax(Vht ) (8.3)
The fact that the computation at time t requires the value of the hidden layer from
time t − 1 mandates an incremental inference algorithm that proceeds from the start
of the sequence to the end as illustrated in Fig. 8.3. The sequential nature of simple
recurrent networks can also be seen by unrolling the network in time as is shown in
Fig. 8.4. In this figure, the various layers of units are copied for each time step to
illustrate that they will have differing values over time. However, the various weight
matrices are shared across time.
h0 ← 0
for i ← 1 to L ENGTH(x) do
hi ← g(Uhi−1 + Wxi )
yi ← f (Vhi )
return y
Figure 8.3 Forward inference in a simple recurrent network. The matrices U, V and W are
shared across time, while new values for h and y are calculated with each time step.
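The algorithm of Fig. 8.3 translates directly into NumPy; a sketch with toy dimensions and random parameters:
import numpy as np

rng = np.random.default_rng(5)
d_in, d_h, d_out = 4, 6, 3                    # illustrative sizes
W = rng.normal(size=(d_h, d_in))
U = rng.normal(size=(d_h, d_h))
V = rng.normal(size=(d_out, d_h))

def softmax(z): e = np.exp(z - z.max()); return e / e.sum()

def rnn_forward(xs):
    h = np.zeros(d_h)                         # h_0 <- 0
    ys = []
    for x in xs:                              # for i <- 1 to LENGTH(x)
        h = np.tanh(U @ h + W @ x)            # h_i <- g(U h_{i-1} + W x_i)
        ys.append(softmax(V @ h))             # y_i <- f(V h_i)
    return ys

xs = rng.normal(size=(7, d_in))               # a sequence of 7 input vectors
ys = rnn_forward(xs)
print(len(ys), ys[0])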
8.1.2 Training
As with feedforward networks, we’ll use a training set, a loss function, and back-
propagation to obtain the gradients needed to adjust the weights in these recurrent
Figure 8.4 A simple recurrent neural network shown unrolled in time. Network layers are recalculated for
each time step, while the weights U, V and W are shared across all time steps.
networks. As shown in Fig. 8.2, we now have 3 sets of weights to update: W, the
weights from the input layer to the hidden layer, U, the weights from the previous
hidden layer to the current hidden layer, and finally V, the weights from the hidden
layer to the output layer.
Fig. 8.4 highlights two considerations that we didn’t have to worry about with
backpropagation in feedforward networks. First, to compute the loss function for
the output at time t we need the hidden layer from time t − 1. Second, the hidden
layer at time t influences both the output at time t and the hidden layer at time t + 1
(and hence the output and loss at t + 1). It follows from this that to assess the error
accruing to ht , we’ll need to know its influence on both the current output as well as
the ones that follow.
Tailoring the backpropagation algorithm to this situation leads to a two-pass al-
gorithm for training the weights in RNNs. In the first pass, we perform forward
inference, computing ht , yt , accumulating the loss at each step in time, saving the
value of the hidden layer at each step for use at the next time step. In the second
phase, we process the sequence in reverse, computing the required gradients as we
go, computing and saving the error term for use in the hidden layer for each step
backward in time. This general approach is commonly referred to as backpropaga-
backpropaga-
tion through tion through time (Werbos 1974, Rumelhart et al. 1986, Werbos 1990).
time
Fortunately, with modern computational frameworks and adequate computing
resources, there is no need for a specialized approach to training RNNs. As illus-
trated in Fig. 8.4, explicitly unrolling a recurrent network into a feedforward com-
putational graph eliminates any explicit recurrences, allowing the network weights
to be trained directly. In such an approach, we provide a template that specifies the
basic structure of the network, including all the necessary parameters for the input,
output, and hidden layers, the weight matrices, as well as the activation and output
functions to be used. Then, when presented with a specific input sequence, we can
generate an unrolled feedforward network specific to that input, and use that graph
Language models give us the ability to assign such a conditional probability to every
possible next word, giving us a distribution over the entire vocabulary. We can also
assign probabilities to entire sequences by combining these conditional probabilities
with the chain rule:
P(w_{1:n}) = ∏_{i=1}^{n} P(w_i | w_{<i})
The n-gram language models of Chapter 3 compute the probability of a word given
counts of its occurrence with the n − 1 prior words. The context is thus of size n − 1.
For the feedforward language models of Chapter 7, the context is the window size.
RNN language models (Mikolov et al., 2010) process the input sequence one
word at a time, attempting to predict the next word from the current word and the
previous hidden state. RNNs thus don’t have the limited context problem that n-gram
models have, or the fixed context that feedforward language models have, since the
hidden state can in principle represent information about all of the preceding words
all the way back to the beginning of the sequence. Fig. 8.5 sketches this difference
between a FFN language model and an RNN language model, showing that the
RNN language model uses ht−1 , the hidden state from the previous time step, as a
representation of the past context.
Figure 8.5 Simplified sketch of two LM architectures moving through a text, showing a
schematic context of three tokens: (a) a feedforward neural language model which has a fixed
context input to the weight matrix W, (b) an RNN language model, in which the hidden state
ht−1 summarizes the prior context.
That is, at time t:
et = Ext (8.4)
ht = g(Uht−1 + Wet ) (8.5)
ŷt = softmax(Vht ) (8.6)
When we do language modeling with RNNs (and we’ll see this again in Chapter 9
with transformers), it’s convenient to make the assumption that the embedding di-
mension de and the hidden dimension dh are the same. So we’ll just call both of
these the model dimension d. So the embedding matrix E is of shape [d × |V |], and
xt is a one-hot vector of shape [|V | × 1]. The product et is thus of shape [d × 1]. W
and U are of shape [d × d], so ht is also of shape [d × 1]. V is of shape [|V | × d],
so the result of Vh is a vector of shape [|V | × 1]. This vector can be thought of as
a set of scores over the vocabulary given the evidence provided in h. Passing these
scores through the softmax normalizes the scores into a probability distribution. The
probability that a particular word k in the vocabulary is the next word is represented by ŷ_t[k], the kth component of ŷ_t:
P(w_{t+1} = k | w_{1:t}) = ŷ_t[k]    (8.7)
The probability of an entire sequence is just the product of the probabilities of each
item in the sequence, where we’ll use ŷi [wi ] to mean the probability of the true word
wi at time step i.
P(w_{1:n}) = ∏_{i=1}^{n} P(w_i | w_{1:i−1})    (8.8)
           = ∏_{i=1}^{n} ŷ_i[w_i]    (8.9)
Figure 8.6 Training an RNN as a language model: input embeddings e feed the RNN to produce h; Vh and a softmax over the vocabulary give ŷ; and the per-step loss is −log ŷ of the true next word (here long, and, thanks, for, all, ...).
To train an RNN as a language model, we take a corpus of text as training material and at each time step t ask the model to predict the next word. We call
such a model self-supervised because we don’t have to add any special gold labels
to the data; the natural sequence of words is its own supervision! We simply train
the model to minimize the error in predicting the true next word in the training
sequence, using cross-entropy as the loss function. Recall that the cross-entropy
loss measures the difference between a predicted probability distribution and the
correct distribution.
L_CE = − Σ_{w∈V} y_t[w] log ŷ_t[w]    (8.10)
In the case of language modeling, the correct distribution yt comes from knowing the
next word. This is represented as a one-hot vector corresponding to the vocabulary
where the entry for the actual next word is 1, and all the other entries are 0. Thus,
the cross-entropy loss for language modeling is determined by the probability the
model assigns to the correct next word. So at time t the CE loss is the negative log
probability the model assigns to the next word in the training sequence.
LCE (ŷt , yt ) = − log ŷt [wt+1 ] (8.11)
Thus at each word position t of the input, the model takes as input the correct
word wt together with ht−1 , encoding information from the preceding w1:t−1 , and
uses them to compute a probability distribution over possible next words so as to
compute the model’s loss for the next token wt+1 . Then we move to the next word,
we ignore what the model predicted for the next word and instead use the correct
word wt+1 along with the prior history encoded to estimate the probability of token
wt+2 . This idea that we always give the model the correct history sequence to predict
the next word (rather than feeding the model its best case from the previous time
teacher forcing step) is called teacher forcing.
The weights in the network are adjusted to minimize the average CE loss over
the training sequence via gradient descent. Fig. 8.6 illustrates this training regimen.
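Here is a sketch of computing the teacher-forced training loss for one sequence (toy sizes; random matrices stand in for the trained E, U, W, and V):
import numpy as np

rng = np.random.default_rng(6)
Vsz, d = 10, 5                                 # toy vocabulary size and model dimension
E = rng.normal(size=(d, Vsz)); U = rng.normal(size=(d, d))
W = rng.normal(size=(d, d));   Vout = rng.normal(size=(Vsz, d))

def softmax(z): e = np.exp(z - z.max()); return e / e.sum()

words = [2, 5, 1, 7, 3]                        # token indices of a training sequence
h, losses = np.zeros(d), []
for t in range(len(words) - 1):
    e_t = E[:, words[t]]                       # teacher forcing: always feed the correct word w_t
    h = np.tanh(U @ h + W @ e_t)
    y_hat = softmax(Vout @ h)
    losses.append(-np.log(y_hat[words[t + 1]]))   # Eq. 8.11: -log prob of the true next word
print(np.mean(losses))                         # average CE loss over the sequence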
The columns of E represent the word embeddings for each word in the vocab-
ulary learned during the training process with the goal that words that have similar
meaning and function will have similar embeddings. And, since when we use RNNs
for language modeling we make the assumption that the embedding dimension and
the hidden dimension are the same (= the model dimension d), the embedding ma-
trix E has shape [d × |V |]. And the final layer matrix V provides a way to score
the likelihood of each word in the vocabulary given the evidence present in the final
hidden layer of the network through the calculation of Vh. V is of shape [|V | × d].
That is, the rows of V are shaped like a transpose of E, meaning that V provides
a second set of learned word embeddings.
Instead of having two sets of embedding matrices, language models use a single
embedding matrix, which appears at both the input and softmax layers. That is,
we dispense with V and use E at the start of the computation and Eᵀ (because the shape of V is the transpose of E) at the end. Using the same matrix (transposed) in
weight tying two places is called weight tying.1 The weight-tied equations for an RNN language
model then become:
et = Ext (8.12)
ht = g(Uht−1 + Wet ) (8.13)
ŷt = softmax(Eᵀ ht ) (8.14)
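A sketch of one weight-tied step following Eqs. 8.12–8.14 (toy sizes, random parameters standing in for trained weights):
import numpy as np

rng = np.random.default_rng(7)
Vsz, d = 10, 5                                 # toy vocabulary size and model dimension
E = rng.normal(size=(d, Vsz))                  # the single, shared embedding matrix
U = rng.normal(size=(d, d)); W = rng.normal(size=(d, d))

def softmax(z): e = np.exp(z - z.max()); return e / e.sum()

h_prev, w_t = np.zeros(d), 4                   # previous hidden state, current word index
e_t = E[:, w_t]                                # Eq. 8.12: e_t = E x_t (column lookup)
h_t = np.tanh(U @ h_prev + W @ e_t)            # Eq. 8.13
y_hat = softmax(E.T @ h_t)                     # Eq. 8.14: E-transpose replaces V at the output
print(y_hat.shape, y_hat.sum())                # (10,) and 1.0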
Figure 8.7 Part-of-speech tagging as sequence labeling with an RNN: at each step the input embeddings e feed the RNN layer(s), Vh and a softmax over the tags give a distribution, and the argmax picks tags such as NNP MD VB DT NN.
Figure 8.8 Sequence classification using a simple RNN combined with a feedforward net-
work. The final hidden state from the RNN is used as the input to a feedforward network that
performs the classification.
Note that in this approach we don’t need intermediate outputs for the words in
the sequence preceding the last element. Therefore, there are no loss terms associ-
ated with those elements. Instead, the loss function used to train the weights in the network is based entirely on the final text classification task. The softmax output of the feedforward classifier, together with a cross-entropy loss, drives the training. The error signal from the classification is backpropagated all the way through the weights of the feedforward classifier to its input, and then through to the three sets of weights in the RNN as described earlier in Section 8.1.2.
The training regimen that uses the loss from a downstream application to adjust the
end-to-end
training weights all the way through the network is referred to as end-to-end training.
Another option, instead of using just the hidden state of the last token hn to represent
pooling the whole sequence, is to use some sort of pooling function of all the hidden states
hi for each word i in the sequence. For example, we can create a representation that
pools all the n hidden states by taking their element-wise mean:
h_mean = (1/n) Σ_{i=1}^{n} h_i    (8.15)
Or we can take the element-wise max; the element-wise max of a set of n vectors is
a new vector whose kth element is the max of the kth elements of all the n vectors.
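In code, these pooling choices are one-liners over a matrix of hidden states (toy values, one row per time step):
import numpy as np

H = np.random.default_rng(8).normal(size=(7, 6))   # toy: n=7 hidden states of dimension 6
h_last = H[-1]                 # use only the final hidden state h_n
h_mean = H.mean(axis=0)        # Eq. 8.15: element-wise mean over all n hidden states
h_max  = H.max(axis=0)         # element-wise max: kth element is the max of the kth elements
print(h_last.shape, h_mean.shape, h_max.shape)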
The long contexts of RNNs make it quite difficult to successfully backpropagate
error all the way through the entire input; we’ll talk about this problem, and some
standard solutions, in Section 8.5.
language models are not linear (since they have many layers of non-linearities), we
loosely refer to this generation technique as autoregressive generation since the
word generated at each time step is conditioned on the word selected by the network
from the previous step. Fig. 8.9 illustrates this approach. In this figure, the details of
the RNN’s hidden layers and recurrent connections are hidden within the blue block.
This simple architecture underlies state-of-the-art approaches to applications
such as machine translation, summarization, and question answering. The key to
these approaches is to prime the generation component with an appropriate context.
That is, instead of simply using <s> to get things started we can provide a richer
task-appropriate context; for translation the context is the sentence in the source
language; for summarization it’s the long text we want to summarize.
Figure 8.9 Autoregressive generation with an RNN language model: each word generated by the embedding, RNN, and softmax layers is fed back in to condition the next step.
Figure 8.10 Stacked recurrent networks. The output of a lower level serves as the input to
higher levels with the output of the last network serving as the final output.
edges that are then used for finding larger regions and shapes, the initial layers of
stacked networks can induce representations that serve as useful abstractions for
further layers—representations that might prove difficult to induce in a single RNN.
The optimal number of stacked RNNs is specific to each application and to each
training set. However, as the number of stacks is increased the training costs rise
quickly.
This new notation h_t^f simply corresponds to the normal hidden state at time t, representing everything the network has gleaned from the sequence so far.
To take advantage of context to the right of the current input, we can train an
RNN on a reversed input sequence. With this approach, the hidden state at time t
represents information about the sequence to the right of the current input:
Here, the hidden state h_t^b represents all the information we have discerned about the sequence from t to the end of the sequence.
bidirectional A bidirectional RNN (Schuster and Paliwal, 1997) combines two independent
RNN
RNNs, one where the input is processed from the start to the end, and the other from
the end to the start. We then concatenate the two representations computed by the
networks into a single vector that captures both the left and right contexts of an input
at each point in time. Here we use either the semicolon ";" or the equivalent symbol ⊕ to mean vector concatenation:
h_t = [h_t^f ; h_t^b] = h_t^f ⊕ h_t^b    (8.18)
Fig. 8.11 illustrates such a bidirectional network that concatenates the outputs of
the forward and backward pass. Other simple ways to combine the forward and
backward contexts include element-wise addition or multiplication. The output at
each step in time thus captures information to the left and to the right of the current
input. In sequence labeling applications, these concatenated outputs can serve as the
basis for a local labeling decision.
Figure 8.11 A bidirectional RNN. Separate models are trained in the forward and backward
directions, with the output of each model at each time point concatenated to represent the
bidirectional state at that time point.
Bidirectional RNNs have also proven to be quite effective for sequence classifi-
cation. Recall from Fig. 8.8 that for sequence classification we used the final hidden
state of the RNN as the input to a subsequent feedforward classifier. A difficulty
with this approach is that the final state naturally reflects more information about
the end of the sentence than its beginning. Bidirectional RNNs provide a simple
solution to this problem; as shown in Fig. 8.12, we simply combine the final hidden
states from the forward and backward passes (for example by concatenation) and
use that as input for follow-on processing.
Figure 8.12 A bidirectional RNN for sequence classification. The final hidden units from
the forward and backward passes are combined to represent the entire sequence. This com-
bined representation serves as input to the subsequent classifier.
cision making. The key to solving both problems is to learn how to manage this
context rather than hard-coding a strategy into the architecture. LSTMs accomplish
this by first adding an explicit context layer to the architecture (in addition to the
usual recurrent hidden layer), and through the use of specialized neural units that
make use of gates to control the flow of information into and out of the units that
comprise the network layers. These gates are implemented through the use of addi-
tional weights that operate sequentially on the input, and previous hidden layer, and
previous context layers.
The gates in an LSTM share a common design pattern; each consists of a feed-
forward layer, followed by a sigmoid activation function, followed by a pointwise
multiplication with the layer being gated. The choice of the sigmoid as the activation
function arises from its tendency to push its outputs to either 0 or 1. Combining this
with a pointwise multiplication has an effect similar to that of a binary mask. Values
in the layer being gated that align with values near 1 in the mask are passed through
nearly unchanged; values corresponding to lower values are essentially erased.
forget gate The first gate we’ll consider is the forget gate. The purpose of this gate is
to delete information from the context that is no longer needed. The forget gate
computes a weighted sum of the previous state’s hidden layer and the current in-
put and passes that through a sigmoid. This mask is then multiplied element-wise
by the context vector to remove the information from context that is no longer re-
quired. Element-wise multiplication of two vectors (represented by the operator ⊙, and sometimes called the Hadamard product) is the vector of the same dimension as the two input vectors, where each element i is the product of element i in the two input vectors:
f_t = σ(U_f h_{t−1} + W_f x_t)    (8.20)
k_t = c_{t−1} ⊙ f_t    (8.21)
The next task is to compute the actual information we need to extract from the previous hidden state and current inputs—the same basic computation we've been using for all our recurrent networks:
g_t = tanh(U_g h_{t−1} + W_g x_t)    (8.22)
add gate Next, we generate the mask for the add gate to select the information to add to the current context:
i_t = σ(U_i h_{t−1} + W_i x_t)    (8.23)
j_t = g_t ⊙ i_t    (8.24)
Next, we add this to the modified context vector to get our new context vector:
c_t = j_t + k_t    (8.25)
output gate The final gate we'll use is the output gate, which is used to decide what information is required for the current hidden state (as opposed to what information needs to be preserved for future decisions):
o_t = σ(U_o h_{t−1} + W_o x_t)    (8.26)
h_t = o_t ⊙ tanh(c_t)    (8.27)
Fig. 8.13 illustrates the complete computation for a single LSTM unit. Given the
Figure 8.13 A single LSTM unit displayed as a computation graph. The inputs to each unit consist of the
current input, x, the previous hidden state, ht−1 , and the previous context, ct−1 . The outputs are a new hidden
state, ht and an updated context, ct .
appropriate weights for the various gates, an LSTM accepts as input the context
layer, and hidden layer from the previous time step, along with the current input
vector. It then generates updated context and hidden vectors as output.
It is the hidden state, ht , that provides the output for the LSTM at each time step.
This output can be used as the input to subsequent layers in a stacked RNN, or at the
final layer of a network ht can be used to provide the final output of the LSTM.
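As an illustration, here is a minimal sketch of one LSTM step following the gate equations above (toy dimensions; the weight matrices are random stand-ins and biases are omitted, as in the equations):
import numpy as np

rng = np.random.default_rng(9)
d_in, d_h = 4, 6                                   # illustrative sizes
Uf, Ug, Ui, Uo = (rng.normal(size=(d_h, d_h)) for _ in range(4))
Wf, Wg, Wi, Wo = (rng.normal(size=(d_h, d_in)) for _ in range(4))

sigmoid = lambda z: 1 / (1 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev):
    f = sigmoid(Uf @ h_prev + Wf @ x_t)            # forget gate (8.20)
    k = c_prev * f                                 # erase stale context (8.21)
    g = np.tanh(Ug @ h_prev + Wg @ x_t)            # candidate information (8.22)
    i = sigmoid(Ui @ h_prev + Wi @ x_t)            # add gate (8.23)
    c = g * i + k                                  # new context (8.24-8.25)
    o = sigmoid(Uo @ h_prev + Wo @ x_t)            # output gate (8.26)
    h = o * np.tanh(c)                             # new hidden state (8.27)
    return h, c

h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h))
print(h.shape, c.shape)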
Figure 8.14 Basic neural units used in feedforward, simple recurrent networks (SRN), and
long short-term memory (LSTM).
Figure 8.16 The encoder-decoder architecture. The context is a function of the hidden
representations of the input, and may be used by the decoder in a variety of ways.
p(y) = p(y1 )p(y2 |y1 )p(y3 |y1 , y2 ) . . . p(ym |y1 , ..., ym−1 ) (8.28)
ht = g(ht−1 , xt ) (8.29)
ŷt = softmax(ht ) (8.30)
We only have to make one slight change to turn this language model with au-
toregressive generation into an encoder-decoder model that is a translation model
that can translate from a source text in one language to a target text in a second:
sentence
separation add a sentence separation marker at the end of the source text, and then simply
concatenate the target text.
Let’s use <s> for our sentence separator token, and let’s think about translating
an English source text (“the green witch arrived”) to a Spanish sentence (“llegó la bruja verde”, which can be glossed word-by-word as ‘arrived the witch green’).
We could also illustrate encoder-decoder models with a question-answer pair, or a
text-summarization pair.
Let’s use x to refer to the source text (in this case in English) plus the separator
token <s>, and y to refer to the target text (in this case in Spanish). Then an
encoder-decoder model computes the probability p(y|x) as follows:
p(y|x) = p(y1 |x)p(y2 |y1 , x)p(y3 |y1 , y2 , x) . . . p(ym |y1 , ..., ym−1 , x) (8.31)
Fig. 8.17 shows the setup for a simplified version of the encoder-decoder model
(we’ll see the full model, which requires the new concept of attention, in the next
section).
Fig. 8.17 shows an English source text (“the green witch arrived”), a sentence
separator token (<s>), and a Spanish target text (“llegó la bruja verde”). To trans-
late a source text, we run it through the network performing forward inference to
Figure 8.17 Translating a single sentence (inference time) in the basic RNN version of encoder-decoder ap-
proach to machine translation. Source and target sentences are concatenated with a separator token in between,
and the decoder uses context information from the encoder’s last hidden state.
generate hidden states until we get to the end of the source. Then we begin autore-
gressive generation, asking for a word in the context of the hidden layer from the
end of the source input as well as the end-of-sentence marker. Subsequent words
are conditioned on the previous hidden state and the embedding for the last word
generated.
Let’s formalize and generalize this model a bit in Fig. 8.18. (To help keep things
straight, we’ll use the superscripts e and d where needed to distinguish the hidden
states of the encoder and the decoder.) The elements of the network on the left
process the input sequence x and comprise the encoder. While our simplified figure
shows only a single network layer for the encoder, stacked architectures are the
norm, where the output states from the top layer of the stack are taken as the final
representation, and the encoder consists of stacked biLSTMs where the hidden states
from top layers from the forward and backward passes are concatenated to provide
the contextualized representations for each time step.
Figure 8.18 A more formal version of translating a sentence at inference time in the basic RNN-based encoder-decoder architecture. The final hidden state of the encoder RNN, h_n^e, serves as the context for the decoder in its role as h_0^d in the decoder RNN, and is also made available to each decoder hidden state.
Now we’re ready to see the full equations for this version of the decoder in the basic
encoder-decoder model, with context available at each decoding timestep. Recall
that g is a stand-in for some flavor of RNN and ŷt−1 is the embedding for the output
sampled from the softmax at the previous step:
c = h_n^e
h_0^d = c
h_t^d = g(ŷ_{t−1}, h_{t−1}^d, c)
ŷ_t = softmax(h_t^d)    (8.33)
Thus ŷt is a vector of probabilities over the vocabulary, representing the probability
of each word occurring at time t. To generate text, we sample from this distribution
ŷt . For example, the greedy choice is simply to choose the most probable word
to generate at each timestep. We’ll introduce more sophisticated sampling methods
in Section 10.2.
Figure 8.19 Training the basic RNN encoder-decoder approach to machine translation. Note that in the
decoder we usually don’t propagate the model’s softmax outputs ŷt , but use teacher forcing to force each input
to the correct gold value for training. We compute the softmax output distribution over ŷ in the decoder in order
to compute the loss at each token, which can then be averaged to compute a loss for the sentence. This loss is
then propagated through the decoder parameters and the encoder parameters.
8.8 Attention
The simplicity of the encoder-decoder model is its clean separation of the encoder—
which builds a representation of the source text—from the decoder, which uses this
context to generate a target text. In the model as we’ve described it so far, this
context vector is hn , the hidden state of the last (nth ) time step of the source text.
This final hidden state is thus acting as a bottleneck: it must represent absolutely
everything about the meaning of the source text, since the only thing the decoder
knows about the source text is what’s in this context vector (Fig. 8.20). Information
at the beginning of the sentence, especially for long sentences, may not be equally
well represented in the context vector.
Figure 8.20 Requiring the context c to be only the encoder’s final hidden state forces all the
information from the entire source sentence to pass through this representational bottleneck.
Figure 8.21 The attention mechanism allows each hidden state of the decoder to see a
different, dynamic, context, which is a function of all the encoder hidden states.
The first step in computing ci is to compute how much to focus on each encoder
state, how relevant each encoder state is to the decoder state captured in h_{i−1}^d. We capture relevance by computing, at each state i during decoding, a score(h_{i−1}^d, h_j^e) for each encoder state j.
dot-product attention The simplest such score, called dot-product attention, implements relevance as similarity: measuring how similar the decoder hidden state is to an encoder hidden state, by computing the dot product between them:
score(h_{i−1}^d, h_j^e) = h_{i−1}^d · h_j^e
The score that results from this dot product is a scalar that reflects the degree of
similarity between the two vectors. The vector of these scores across all the encoder
hidden states gives us the relevance of each encoder state to the current step of the
decoder.
To make use of these scores, we'll normalize them with a softmax to create a vector of weights, α_{ij}, that tells us the proportional relevance of each encoder hidden state j to the prior hidden decoder state, h_{i−1}^d.
α_{ij} = softmax(score(h_{i−1}^d, h_j^e))
      = exp(score(h_{i−1}^d, h_j^e)) / Σ_k exp(score(h_{i−1}^d, h_k^e))    (8.36)
Finally, given the distribution of weights α, we can compute a fixed-length context vector for the current decoder state by taking a weighted average over all the encoder hidden states:
c_i = Σ_j α_{ij} h_j^e    (8.37)
With this, we finally have a fixed-length context vector that takes into account
information from the entire encoder state that is dynamically updated to reflect the
needs of the decoder at each step of decoding. Fig. 8.22 illustrates an encoder-
decoder network with attention, focusing on the computation of one context vector
ci .
Figure 8.22 A sketch of the encoder-decoder network with attention, focusing on the computation of c_i. The context value c_i is one of the inputs to the computation of h_i^d. It is computed by taking the weighted sum of all the encoder hidden states, each weighted by their dot product with the prior decoder hidden state h_{i−1}^d.
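As an illustration, here is a minimal NumPy sketch of dot-product attention for a single decoder step (toy dimensions; random vectors stand in for the encoder and decoder hidden states):
import numpy as np

rng = np.random.default_rng(10)
n, d = 6, 8                                        # 6 encoder states of dimension 8 (toy)
H_enc = rng.normal(size=(n, d))                    # encoder hidden states h^e_1 ... h^e_n
h_dec_prev = rng.normal(size=d)                    # prior decoder hidden state h^d_{i-1}

scores = H_enc @ h_dec_prev                        # dot-product relevance of each encoder state
alphas = np.exp(scores - scores.max())
alphas = alphas / alphas.sum()                     # softmax -> attention weights alpha_{ij} (8.36)
c_i = alphas @ H_enc                               # weighted average of encoder states (8.37)
print(alphas.round(2), c_i.shape)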
It’s also possible to create more sophisticated scoring functions for attention
models. Instead of simple dot product attention, we can get a more powerful function
that computes the relevance of each encoder hidden state to the decoder hidden state
by parameterizing the score with its own set of weights, Ws .
score(h_{i−1}^d, h_j^e) = h_{i−1}^d W_s h_j^e
The weights Ws , which are then trained during normal end-to-end training, give the
network the ability to learn which aspects of similarity between the decoder and
encoder states are important to the current application. This bilinear model also
allows the encoder and decoder to use different dimensional vectors, whereas the
simple dot-product attention requires that the encoder and decoder hidden states
have the same dimensionality.
We’ll return to the concept of attention when we define the transformer archi-
tecture in Chapter 9, which is based on a slight modification of attention called
self-attention.
8.9 Summary
This chapter has introduced the concepts of recurrent neural networks and how they
can be applied to language problems. Here’s a summary of the main points that we
covered:
• In simple Recurrent Neural Networks sequences are processed one element at
a time, with the output of each neural unit at time t based both on the current
input at t and the hidden layer from time t − 1.
quickly came to dominate many common tasks: part-of-speech tagging (Ling et al.,
2015), syntactic chunking (Søgaard and Goldberg, 2016), named entity recognition
(Chiu and Nichols, 2016; Ma and Hovy, 2016), opinion mining (Irsoy and Cardie,
2014), semantic role labeling (Zhou and Xu, 2015a) and AMR parsing (Foland and
Martin, 2016). As with the earlier surge of progress involving statistical machine
learning, these advances were made possible by the availability of training data pro-
vided by CONLL, SemEval, and other shared tasks, as well as shared resources such
as Ontonotes (Pradhan et al., 2007b), and PropBank (Palmer et al., 2005).
The modern neural encoder-decoder approach was pioneered by Kalchbrenner
and Blunsom (2013), who used a CNN encoder and an RNN decoder. Cho et al.
(2014) (who coined the name “encoder-decoder”) and Sutskever et al. (2014) then
showed how to use extended RNNs for both encoder and decoder. The idea that a
generative decoder should take as input a soft weighting of the inputs, the central
idea of attention, was first developed by Graves (2013) in the context of handwriting
recognition. Bahdanau et al. (2015) extended the idea, named it “attention” and
applied it to MT.
CHAPTER
9 The Transformer
In this chapter we introduce the transformer, the standard architecture for build-
ing large language models. Transformer-based large language models have com-
pletely changed the field of speech and language processing. Indeed, every subse-
quent chapter in this textbook will make use of them. We’ll focus for now on left-
to-right (sometimes called causal or autoregressive) language modeling, in which
we are given a sequence of input tokens and predict output tokens one by one by
conditioning on the prior context.
The transformer is a neural network with a specific structure that includes a
mechanism called self-attention or multi-head attention.1 Attention can be thought
of as a way to build contextual representations of a token’s meaning by attending to
and integrating information from surrounding tokens, helping the model learn how
tokens relate to each other over large spans.
Fig. 9.1 sketches the transformer architecture. A transformer has three major
components. At the center are columns of transformer blocks. Each block is a
multilayer network (a multi-head attention layer, feedforward networks and layer
normalization steps) that maps an input vector xi in column i (corresponding to input
1 Although multi-head attention developed historically from the RNN attention mechanism (Chapter 8),
we’ll define attention from scratch here for readers who haven’t yet read Chapter 8.
token i) to an output vector hi . The set of n blocks maps an entire context window
of input vectors (x1 , ..., xn ) to a window of output vectors (h1 , ..., hn ) of the same
length. A column might contain from 12 to 96 or more stacked blocks.
The column of blocks is preceded by the input encoding component, which pro-
cesses an input token (like the word thanks) into a contextual vector representation,
using an embedding matrix E and a mechanism for encoding token position. Each
column is followed by a language modeling head, which takes the embedding out-
put by the final transformer block, passes it through an unembedding matrix U and
a softmax over the vocabulary to generate a single token for that column.
Transformer-based language models are complex, and so the details will unfold
over the next 5 chapters. In the next sections we’ll introduce multi-head attention,
the rest of the transformer block, and the input encoding and language modeling
head components. Chapter 10 discusses how language models are pretrained, and
how tokens are generated via sampling. Chapter 11 introduces masked language
modeling and the BERT family of bidirectional transformer encoder models. Chap-
ter 12 shows how to prompt LLMs to perform NLP tasks by giving instructions and
demonstrations, and how to align the model with human preferences. Chapter 13
will introduce machine translation with the encoder-decoder architecture.
9.1 Attention
Recall from Chapter 6 that for word2vec and other static embeddings, the repre-
sentation of a word’s meaning is always the same vector irrespective of the context:
the word chicken, for example, is always represented by the same fixed vector. So
a static vector for the word it might somehow encode that this is a pronoun used
for animals and inanimate entities. But in context it has a much richer meaning.
Consider it in one of these two sentences:
(9.1) The chicken didn’t cross the road because it was too tired.
(9.2) The chicken didn’t cross the road because it was too wide.
In (9.1) it is the chicken (i.e., the reader knows that the chicken was tired), while
in (9.2) it is the road (and the reader knows that the road was wide).2 That is, if
we are to compute the meaning of this sentence, we’ll need the meaning of it to be
associated with the chicken in the first sentence and associated with the road in
the second one, sensitive to the context.
Furthermore, consider reading left to right like a causal language model, pro-
cessing the sentence up to the word it:
(9.3) The chicken didn’t cross the road because it
At this point we don’t yet know which thing it is going to end up referring to! So a
representation of it at this point might have aspects of both chicken and road as
the reader is trying to guess what happens next.
This fact that words have rich linguistic relationships with other words that may
be far away pervades language. Consider two more examples:
(9.4) The keys to the cabinet are on the table.
(9.5) I walked along the pond, and noticed one of the trees along the bank.
2 We say that in the first example it corefers with the chicken, and in the second it corefers with the
road; we’ll return to this in Chapter 23.
In (9.4), the phrase The keys is the subject of the sentence, and in English and many
languages, must agree in grammatical number with the verb are; in this case both are
plural. In English we can’t use a singular verb like is with a plural subject like keys
(we’ll discuss agreement more in Chapter 18). In (9.5), we know that bank refers
to the side of a pond or river and not a financial institution because of the context,
including words like pond. (We’ll discuss word senses more in Chapter 11.)
The point of all these examples is that these contextual words that help us com-
pute the meaning of words in context can be quite far away in the sentence or para-
graph. Transformers can build contextual representations of word meaning, contextual embeddings, by integrating the meaning of these helpful contextual words. In a
transformer, layer by layer, we build up richer and richer contextualized representa-
tions of the meanings of input tokens. At each layer, we compute the representation
of a token i by combining information about i from the previous layer with infor-
mation about the neighboring tokens to produce a contextualized representation for
each word at each position.
Attention is the mechanism in the transformer that weighs and combines the
representations from appropriate other tokens in the context from layer k − 1 to build
the representation for tokens in layer k.
Figure 9.2 The self-attention weight distribution that is part of the computation of the
representation for the word it at layer k + 1. In computing the representation for it, we attend
differently to the various words at layer k, with darker shades indicating higher self-attention values. Note that the transformer is attending highly to the columns corresponding to the tokens chicken and road, a sensible result, since at the point where it occurs, it could plausibly corefer with the chicken or the road, and hence we'd like the representation for it to draw on
the representation for these earlier words. Figure adapted from Uszkoreit (2017).
Figure 9.3 Information flow in causal self-attention. When processing each input xi , the
model attends to all the inputs up to, and including xi .
Each αij is a scalar used for weighing the value of input xj when summing up the inputs to compute ai. How shall we compute this weighting? In attention we weight each prior embedding proportionally to how similar it is to the current token i. So the output of attention is a sum of the embeddings of prior tokens weighted by their similarity with the current token embedding. We compute similarity scores via the dot product, which maps two vectors into a scalar value ranging from −∞ to ∞. The larger the score, the more similar the vectors that are being compared. We normalize these scores with a softmax to create the vector of weights αij, j ≤ i. The softmax weight will likely be highest for xi, since xi is very similar to itself, resulting in a high dot product. But other context words may also be similar to i, and the softmax will also assign some weight to those words. Then we use these weights as the values in Eq. 9.6 to compute the weighted sum that is our a3.
The simplified attention in equations 9.6 – 9.8 demonstrates the attention-based
approach to computing ai: compare xi to the prior vectors, then normalize those scores into a probability distribution used to weight the sum of the prior vectors. But now
we’re ready to remove the simplifications.
A single attention head using query, key, and value matrices Now that we’ve
seen a simple intuition of attention, let's introduce the actual attention head, the version of attention that's used in transformers. (The word head is often used in transformers to refer to specific structured layers.) The attention head allows us to distinctly represent three different roles that each input embedding plays during the course of the attention process:
• As the current element being compared to the preceding inputs. We'll refer to this role as a query.
• In its role as a preceding input that is being compared to the current element to determine a similarity weight. We'll refer to this role as a key.
• And finally, as a value of a preceding element that gets weighted and summed
up to compute the output for the current element.
To capture these three different roles, transformers introduce weight matrices
WQ , WK , and WV . These weights will project each input vector xi into a represen-
tation of its role as a key, query, or value:
qi = xi WQ ;   ki = xi WK ;   vi = xi WV    (9.9)
Given these projections, when we are computing the similarity of the current ele-
ment xi with some prior element x j , we’ll use the dot product between the current
element’s query vector qi and the preceding element’s key vector k j . Furthermore,
the result of a dot product can be an arbitrarily large (positive or negative) value, and
exponentiating large values can lead to numerical issues and loss of gradients during
training. To avoid this, we scale the dot product by a factor related to the size of the
embeddings, by dividing by the square root of the dimensionality of the query and key vectors (dk). We thus replace the simplified Eq. 9.7 with Eq. 9.11. The ensuing softmax calculation resulting in αij remains the same, but the output calculation for
ai is now based on a weighted sum over the value vectors v (Eq. 9.13).
Here’s a final set of equations for computing self-attention for a single self-
attention output vector ai from a single input vector xi . This version of attention
computes ai by summing the values of the prior elements, each weighted by the
similarity of its key to the query from the current element:
qi = xi WQ ;   kj = xj WK ;   vj = xj WV    (9.10)

score(xi, xj) = (qi · kj) / √dk    (9.11)

αij = softmax(score(xi, xj))  ∀ j ≤ i    (9.12)

ai = ∑_{j≤i} αij vj    (9.13)
We illustrate this in Fig. 9.4 for the case of calculating the value of the third output
a3 in a sequence.
[Figure 9.4 diagram: generate key, query, and value vectors from x1, x2, x3; divide each query–key score by √dk; turn the scores into αij weights via softmax; the output of self-attention a3 is the weighted sum of the value vectors.]
Figure 9.4 Calculating the value of a3 , the third element of a sequence using causal (left-
to-right) self-attention.
Let’s talk shapes. The input to attention xi and the output from attention ai both
have the same dimensionality 1 × d (We often call d the model dimensionality,
and indeed as we’ll discuss in Section 9.2 the output hi of each transformer block,
as well as the intermediate vectors inside the transformer block also have the same
dimensionality 1 × d.).
We’ll have a dimension dk for the key and query vectors. The query vector and
the key vector are both dimensionality 1 × dk , so we can take their dot product qi · k j .
We’ll have a separate dimension dv for the value vectors. The transform matrix WQ
has shape [d × dk ], WK is [d × dk ], and WV is [d × dv ]. In the original transformer
work (Vaswani et al., 2017), d was 512, dk and dv were both 64.
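To make these shapes concrete, here is a minimal NumPy sketch (not from the text) of Eqs. 9.10–9.13 for a single head, computing one output vector ai; the toy sequence length, the random weights, and the seed are placeholders for illustration only.

import numpy as np

rng = np.random.default_rng(0)
d, dk, dv = 512, 64, 64                      # model, key/query, and value dimensionalities

def softmax(z):
    e = np.exp(z - z.max())                  # subtract max for numerical stability
    return e / e.sum()

X = rng.normal(size=(5, d))                  # toy embeddings x_1..x_5, one row per token
i = 4                                        # compute a_i for the current (last) token

WQ = rng.normal(size=(d, dk))
WK = rng.normal(size=(d, dk))
WV = rng.normal(size=(d, dv))

q_i = X[i] @ WQ                              # query for the current token      (Eq. 9.10)
K = X[: i + 1] @ WK                          # keys for all tokens j <= i
V = X[: i + 1] @ WV                          # values for all tokens j <= i

scores = (K @ q_i) / np.sqrt(dk)             # scaled dot products q_i . k_j    (Eq. 9.11)
alpha = softmax(scores)                      # attention weights alpha_ij       (Eq. 9.12)
a_i = alpha @ V                              # weighted sum of value vectors    (Eq. 9.13)
print(a_i.shape)                             # (64,), i.e. [1 x dv]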
Multi-head attention computes h such heads in parallel, each with its own key, query, and value matrices; Fig. 9.5 shows an intuition.
The output of each of the h heads is of shape 1 × dv , and so the output of the
multi-head layer with h heads consists of h vectors of shape 1 × dv . These are con-
catenated to produce a single output with dimensionality 1 × hdv . Then we use yet
another linear projection WO ∈ Rhdv ×d to reshape it, resulting in the multi-head
attention vector ai with the correct output shape [1 × d] at each input i.
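A hedged sketch of this concatenate-and-project step for h = 8 heads, reusing the single-head computation from the sketch above; the helper one_head and all weights are illustrative stand-ins, not the book's code.

import numpy as np

rng = np.random.default_rng(1)
d, dk, dv, h = 512, 64, 64, 8

def one_head(X, i, WQ, WK, WV):
    """Causal attention output a_i for a single head (Eqs. 9.10-9.13)."""
    q = X[i] @ WQ
    K = X[: i + 1] @ WK
    V = X[: i + 1] @ WV
    scores = (K @ q) / np.sqrt(WQ.shape[1])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ V                              # shape [1 x dv]

X = rng.normal(size=(5, d))
i = 4
outputs = []
for _ in range(h):                                # each head has its own projection matrices
    WQ = rng.normal(size=(d, dk))
    WK = rng.normal(size=(d, dk))
    WV = rng.normal(size=(d, dv))
    outputs.append(one_head(X, i, WQ, WK, WV))

concat = np.concatenate(outputs)                  # [1 x h*dv] = (512,)
WO = rng.normal(size=(h * dv, d))                 # project back down to the model dimension
a_i = concat @ WO                                 # multi-head attention output, [1 x d]
print(concat.shape, a_i.shape)                    # (512,) (512,)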
[Figure 9.5 diagram: multi-head attention. Each of the h = 8 heads has its own key, query, and value matrices (WK1, WV1, WQ1, ..., WK8, WV8, WQ8) and attends differently to the context. The head outputs (each [1 × dv]) are concatenated into a [1 × hdv] vector and projected down to d by WO ([hdv × d]), producing ai of shape [1 × d].]
Figure 9.6 The architecture of a transformer block showing the residual stream. This
figure shows the prenorm version of the architecture, in which the layer norms happen before
the attention and feedforward layers rather than after.
In this view, each token position has its own residual stream of d-dimensional representations, and the transformer block's components read their input from the residual stream and add their output back into the stream.
The input at the bottom of the stream is an embedding for a token, which has
dimensionality d. This initial embedding gets passed up (by residual connections),
and is progressively added to by the other components of the transformer: the at-
tention layer that we have seen, and the feedforward layer that we will introduce.
Before the attention and feedforward layer is a computation called the layer norm.
Thus the initial vector is passed through a layer norm and attention layer, and
the result is added back into the stream, in this case to the original input vector
xi . And then this summed vector is again passed through another layer norm and a
feedforward layer, and the output of those is added back into the residual, and we’ll
use hi to refer to the resulting output of the transformer block for token i. (In earlier
descriptions the residual stream was often described using a different metaphor as
residual connections that add the input of a component to its output, but the residual
stream is a more perspicuous way of visualizing the transformer.)
We’ve already seen the attention layer, so let’s now introduce the feedforward
and layer norm computations in the context of processing a single input xi at token
position i.
Feedforward layer The feedforward layer is a fully-connected 2-layer network,
i.e., one hidden layer, two weight matrices, as introduced in Chapter 7. The weights
are the same for each token position i , but are different from layer to layer. It
is common to make the dimensionality dff of the hidden layer of the feedforward
network be larger than the model dimensionality d. (For example in the original
transformer model, d = 512 and dff = 2048.)
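As a concrete sketch (with details not specified in the text treated as assumptions, e.g. the ReLU nonlinearity and the bias terms), a position-wise feedforward layer might look like this:

import numpy as np

rng = np.random.default_rng(2)
d, d_ff = 512, 2048                              # model and hidden dimensionalities

W1 = rng.normal(size=(d, d_ff)) * 0.02
b1 = np.zeros(d_ff)
W2 = rng.normal(size=(d_ff, d)) * 0.02
b2 = np.zeros(d)

def ffn(x):
    """Fully-connected 2-layer network applied independently at each token position."""
    hidden = np.maximum(0.0, x @ W1 + b1)        # ReLU nonlinearity (an assumption)
    return hidden @ W2 + b2                      # back to dimensionality d

x_i = rng.normal(size=d)
print(ffn(x_i).shape)                            # (512,)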
Layer Norm At two stages in the transformer block we normalize the vector (Ba et al., 2016). This process, called layer norm (short for layer normalization), is one of several normalization techniques used to keep the values of a hidden layer in a range that facilitates gradient-based training. Layer norm is computed from the mean μ and standard deviation σ of the elements of the vector being normalized.
Given these values, the vector components are normalized by subtracting the mean
from each and dividing by the standard deviation. The result of this computation is
a new vector with zero mean and a standard deviation of one.
x̂ = (x − μ) / σ    (9.23)
Finally, in the standard implementation of layer normalization, two learnable parameters, γ and β, representing gain and offset values, are introduced.

LayerNorm(x) = γ (x − μ) / σ + β    (9.24)
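A small sketch of Eqs. 9.23–9.24; the epsilon added to the denominator is a standard implementation detail for numerical safety, not part of the equations.

import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize a vector to zero mean and unit standard deviation, then scale and shift."""
    mu = x.mean()
    sigma = x.std()
    x_hat = (x - mu) / (sigma + eps)             # Eq. 9.23
    return gamma * x_hat + beta                  # Eq. 9.24

d = 512
x = np.random.default_rng(3).normal(size=d)
gamma, beta = np.ones(d), np.zeros(d)            # learnable gain and offset (trivial initialization)
y = layer_norm(x, gamma, beta)
print(round(float(y.mean()), 6), round(float(y.std()), 4))   # ~0.0 and ~1.0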
Putting it all together The function computed by a transformer block can be ex-
pressed by breaking it down with one equation for each component computation,
using t (of shape [1 × d]) to stand for transformer and superscripts to demarcate
each computation inside the block:
Notice that the only component that takes as input information from other tokens
(other residual streams) is multi-head attention, which (as we see from (9.27)) looks
at all the neighboring tokens in the context. The output from attention, however, is
then added into this token’s embedding stream. In fact, Elhage et al. (2021) show that
we can view attention heads as literally moving information from the residual stream
of a neighboring token into the current stream. The high-dimensional embedding
space at each position thus contains information about the current token and about
neighboring tokens, albeit in different subspaces of the vector space. Fig. 9.7 shows
a visualization of this movement.
Figure 9.7 An attention head can move information from token A’s residual stream into
token B’s residual stream.
Crucially, the input and output dimensions of transformer blocks are matched so
they can be stacked. Each token vector xi at the input to the block has dimensionality
d, and the output hi also has dimensionality d. Transformers for large language
models stack many of these blocks, from 12 layers (used for the T5 or GPT-3-small
language models) to 96 layers (used for GPT-3 large), to even more for more recent
models. We’ll come back to this issue of stacking in a bit.
Equation (9.27) and following are just the equation for a single transformer
block, but the residual stream metaphor goes through all the transformer layers,
from the first transformer blocks to the 12th, in a 12-layer transformer. At the ear-
lier transformer blocks, the residual stream is representing the current token. At the
highest transformer blocks, the residual stream is usually representing the following
token, since at the very end it’s being trained to predict the next token.
Once we stack many blocks, there is one more requirement: at the very end of
the last (highest) transformer block, there is a single extra layer norm that is run on
the last hi of each token stream (just below the language model head layer that we
will define soon). 3
To parallelize attention, we pack the input token embeddings into a single matrix X of shape [N × d], with one row for each of the N tokens in the context window (d is the model dimension).
Parallelizing attention Let’s first see this for a single attention head and then turn
to multiple heads, and then add in the rest of the components in the transformer
block. For one head we multiply X by the query, key, and value matrices WQ of shape [d × dk], WK of shape [d × dk], and WV of shape [d × dv], to produce matrices Q of shape [N × dk], K ∈ R^{N×dk}, and V ∈ R^{N×dv}, containing all the query, key, and value vectors:
Q = XWQ ; K = XWK ; V = XWV (9.31)
Given these matrices we can compute all the requisite query-key comparisons simul-
taneously by multiplying Q and Kᵀ in a single matrix multiplication. The product is
of shape N × N, visualized in Fig. 9.8.
Figure 9.8 The N × N QKᵀ matrix showing how it computes all qi · k j comparisons in a
single matrix multiplication.
Once we have this QKᵀ matrix, we can very efficiently scale these scores, take
the softmax, and then multiply the result by V resulting in a matrix of shape N × d:
a vector embedding representation for each token in the input. We’ve reduced the
entire self-attention step for an entire sequence of N tokens for one head to the
following computation:
A = softmax( mask( QKᵀ / √dk ) ) V    (9.32)
Masking out the future You may have noticed that we introduced a mask function
in Eq. 9.32 above. This is because the self-attention computation as we’ve described
it has a problem: the calculation in QKᵀ results in a score for each query value
to every key value, including those that follow the query. This is inappropriate in
the setting of language modeling: guessing the next word is pretty simple if you
already know it! To fix this, the elements in the upper-triangular portion of the matrix are masked out (set to −∞ before the softmax), thus eliminating any knowledge of words that follow in the sequence. This is done in practice by adding a mask matrix M in which Mij = −∞ ∀ j > i (i.e., for the upper-triangular portion) and Mij = 0 otherwise. Fig. 9.9 shows the resulting masked QKᵀ matrix. (We'll see in Chapter 11 how to make use of words in the future for tasks that need it.)
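Here is a minimal NumPy sketch of the masked, parallelized computation of Eqs. 9.31–9.32 for one head; the sequence length, dimensions, and random weights are placeholders.

import numpy as np

rng = np.random.default_rng(4)
N, d, dk, dv = 4, 512, 64, 64

X = rng.normal(size=(N, d))
WQ = rng.normal(size=(d, dk))
WK = rng.normal(size=(d, dk))
WV = rng.normal(size=(d, dv))
Q, K, V = X @ WQ, X @ WK, X @ WV                   # Eq. 9.31

scores = Q @ K.T / np.sqrt(dk)                     # [N x N] matrix of all q_i . k_j / sqrt(dk)
mask = np.triu(np.full((N, N), -np.inf), k=1)      # -inf above the diagonal, 0 elsewhere
masked = scores + mask                             # future positions can no longer be attended to

# row-wise softmax: row i becomes the weights alpha_ij over positions j <= i
weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

A = weights @ V                                    # Eq. 9.32, shape [N x dv]
print(A.shape)                                     # (4, 64)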
Fig. 9.10 shows a schematic of all the computations for a single attention head
parallelized in matrix form.
Fig. 9.8 and Fig. 9.9 also make it clear that attention is quadratic in the length
of the input, since at each layer we need to compute dot products between each pair
of tokens in the input. This makes it expensive to compute attention over very long
documents (like entire novels). Nonetheless modern large language models manage
to use quite long contexts of thousands or tens of thousands of tokens.
[Figure 9.9: The masked N × N QKᵀ matrix: entry (i, j) holds qi · kj for j ≤ i and −∞ for j > i.]
Figure 9.10 Schematic of the attention computation for a single attention head in parallel. The first row shows
the computation of the Q, K, and V matrices. The second row shows the computation of QKT , the masking
(the softmax computation and the normalizing by dimensionality are not shown) and then the weighted sum of
the value vectors to get the final attention vectors.
Putting it all together with the parallel input matrix X The function computed
in parallel by an entire transformer block over all N input tokens can be broken down with one equation for each component computation, using
T (of shape [N × d]) to stand for transformer and superscripts to demarcate each
computation inside the block:
T1 = MultiHeadAttention(X)    (9.38)
T2 = X + T1    (9.39)
T3 = LayerNorm(T2)    (9.40)
T4 = FFN(T3)    (9.41)
T5 = T4 + T3    (9.42)
H = LayerNorm(T5)    (9.43)
Here when we use a notation like FFN(T3 ) we mean that the same FFN is applied
in parallel to each of the N embedding vectors in the window. Similarly, each of the
N tokens is normed in parallel in the LayerNorm. Crucially, the input and output
dimensions of transformer blocks are matched so they can be stacked. Since each
token xi at the input to the block has dimensionality d, that means the input X and
output H are both of shape [N × d].
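Putting the pieces together, here is a hedged sketch of Eqs. 9.38–9.43; to keep it short, a single attention head stands in for MultiHeadAttention, layer norm omits the learnable gain and offset, and all weights are random placeholders.

import numpy as np

rng = np.random.default_rng(5)
N, d, dk, d_ff = 4, 512, 64, 2048

def layer_norm(Z, eps=1e-5):                       # gain = 1, offset = 0 for brevity
    mu = Z.mean(axis=-1, keepdims=True)
    sigma = Z.std(axis=-1, keepdims=True)
    return (Z - mu) / (sigma + eps)

def attention(X):                                  # one head standing in for MultiHeadAttention
    WQ = rng.normal(size=(d, dk))
    WK = rng.normal(size=(d, dk))
    WV = rng.normal(size=(d, dk))
    WO = rng.normal(size=(dk, d))                  # project the head output back to d
    Q, K, V = X @ WQ, X @ WK, X @ WV
    scores = Q @ K.T / np.sqrt(dk) + np.triu(np.full((N, N), -np.inf), k=1)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return (w @ V) @ WO                            # [N x d]

def ffn(Z):
    W1 = rng.normal(size=(d, d_ff)) * 0.02
    W2 = rng.normal(size=(d_ff, d)) * 0.02
    return np.maximum(0.0, Z @ W1) @ W2

X = rng.normal(size=(N, d))
T1 = attention(X)                                  # (9.38)
T2 = X + T1                                        # (9.39)
T3 = layer_norm(T2)                                # (9.40)
T4 = ffn(T3)                                       # (9.41)
T5 = T4 + T3                                       # (9.42)
H = layer_norm(T5)                                 # (9.43)
print(H.shape)                                     # (4, 512): same shape as X, so blocks can stack

In a real model these weights are learned and the attention is multi-headed, but the data flow through the block is the same.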
Figure 9.11 Selecting the embedding vector for word V5 by multiplying the embedding
matrix E with a one-hot vector with a 1 in index 5.
We can extend this idea to represent the entire token sequence as a matrix of one-
hot vectors, one for each of the N positions in the transformer’s context window, as
shown in Fig. 9.12.
Figure 9.12 Selecting the embedding matrix for the input sequence of token ids W by mul-
tiplying a one-hot matrix corresponding to W by the embedding matrix E.
[Figure 9.13 diagram: the word embeddings for "Janet will back the bill" are each added to the corresponding position embedding (positions 1–5) to form the composite input embeddings X for the transformer block.]
Figure 9.13 A simple way to model position: add an embedding of the absolute position to
the token embedding to produce a new embedding of the same dimensionality.
Since both the token embedding and the positional embedding are of dimensionality [1 × d], their sum is also [1 × d]. This new embedding serves as the input for further processing. Fig. 9.13
shows the idea.
The final representation of the input, the matrix X, is an [N × d] matrix in which
each row i is the representation of the ith token in the input, computed by adding
E[id(i)]—the embedding of the id of the token that occurred at position i—, to P[i],
the positional embedding of position i.
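A minimal sketch of building X from token and absolute position embeddings; the vocabulary size, the token ids, and the random embedding tables are invented placeholders.

import numpy as np

rng = np.random.default_rng(6)
vocab_size, context_len, d = 10_000, 1024, 512

E = rng.normal(size=(vocab_size, d))            # token embedding matrix, [|V| x d]
P = rng.normal(size=(context_len, d))           # absolute position embeddings

token_ids = np.array([371, 29, 4081, 7, 905])   # hypothetical ids for "Janet will back the bill"
N = len(token_ids)

X = E[token_ids] + P[:N]                        # row i is E[id(i)] + P[i], shape [N x d]
print(X.shape)                                  # (5, 512)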
A potential problem with the simple absolute position embedding approach is
that there will be plenty of training examples for the initial positions in our inputs
and correspondingly fewer at the outer length limits. These latter embeddings may
be poorly trained and may not generalize well during testing. An alternative ap-
proach to absolute positional embeddings is to choose a static function that maps
integer inputs to real-valued vectors in a way that captures the inherent relation-
ships among the positions. That is, it captures the fact that position 4 in an input is
more closely related to position 5 than it is to position 17. A combination of sine
and cosine functions with differing frequencies was used in the original transformer
work. Even more complex positional embedding methods exist, such as ones that
represent relative position instead of absolute position, often implemented in the
attention mechanism at each layer rather than being added once at the initial input.
Figure 9.14 The language modeling head: the circuit at the top of a transformer that maps from the output
embedding for token N from the last transformer layer (hLN ) to a probability distribution over words in the
vocabulary V .
The first module in Fig. 9.14 is a linear layer, whose job is to project from the
output hLN, which represents the output token embedding at position N from the final block L (hence of shape [1 × d]), to the logit vector, or score vector, that will have a single score for each of the |V| possible words in the vocabulary V. The logit vector
u is thus of dimensionality 1 × |V |.
This linear layer can be learned, but more commonly we tie this matrix to (the
transpose of) the embedding matrix E. Recall that in weight tying, we use the
same weights for two different matrices in the model. Thus at the input stage of the
transformer the embedding matrix (of shape [|V | × d]) is used to map from a one-hot
vector over the vocabulary (of shape [1 × |V |]) to an embedding (of shape [1 × d]).
And then in the language model head, ET , the transpose of the embedding matrix (of
shape [d × |V |]) is used to map back from an embedding (shape [1 × d]) to a vector
over the vocabulary (shape [1×|V |]). In the learning process, E will be optimized to
be good at doing both of these mappings. We therefore sometimes call the transpose
ET the unembedding layer because it is performing this reverse mapping.
A softmax layer turns the logits u into the probabilities y over the vocabulary.
u = hLN ET (9.44)
y = softmax(u) (9.45)
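A sketch of Eqs. 9.44–9.45 with weight tying, where the unembedding layer is just the transpose of E; the embedding matrix and the final hidden vector here are random placeholders.

import numpy as np

rng = np.random.default_rng(7)
vocab_size, d = 10_000, 512

E = rng.normal(size=(vocab_size, d)) * 0.02     # tied embedding / unembedding matrix
h_LN = rng.normal(size=d)                       # output of the final block at position N, [1 x d]

u = h_LN @ E.T                                  # logits over the vocabulary, [1 x |V|]   (Eq. 9.44)
y = np.exp(u - u.max()); y /= y.sum()           # softmax over the logits                 (Eq. 9.45)

k = int(np.argmax(y))                           # greedy choice: index of the most probable word
print(u.shape, y.sum(), k)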
Finally, we choose a word from these probabilities y. We might sample the highest probability word
(‘greedy’ decoding), or use another of the sampling methods we’ll introduce in Sec-
tion 10.2. In either case, whatever entry yk we choose from the probability vector y,
we generate the word that has that index k.
[Figure 9.15 diagram: the stacked architecture for one token i. The input token wi passes through the input encoding (embedding E plus position i) to give x1i; each of the L layers applies layer norm, attention, layer norm, and a feedforward layer, and the output of each layer becomes the input to the next (h1i = x2i, ..., hL−1i = xLi), ending in the final output hLi.]
Fig. 9.15 shows the total stacked architecture for one token i. Note that the input
to each transformer layer is the same as the output from the preceding layer (the hi produced by layer k − 1 serves as the xi for layer k).
Now that we see all these transformer layers spread out on the page, we can point
out another useful feature of the unembedding layer: as a tool for interpretability of
logit lens the internals of the transformer that we call the logit lens (Nostalgebraist, 2020).
We can take a vector from any layer of the transformer and, pretending that it is
the prefinal embedding, simply multiply it by the unembedding layer to get logits,
and compute a softmax to see the distribution over words that that vector might
be representing. This can be a useful window into the internal representations of
the model. Since the network wasn’t trained to make the internal representations
function in this way, the logit lens doesn’t always work perfectly, but this can still
be a useful trick.
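A sketch of the logit lens in a toy setting: take a vector from any layer, multiply it by the unembedding matrix, and inspect the implied distribution. Since E and the hidden vector below are random placeholders, the resulting "top tokens" are meaningless; with a trained model they can be informative.

import numpy as np

rng = np.random.default_rng(8)
vocab_size, d = 10_000, 512
E = rng.normal(size=(vocab_size, d))

def logit_lens(hidden_vector, E, top_k=5):
    """Pretend an intermediate vector is the prefinal embedding and unembed it."""
    logits = hidden_vector @ E.T
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    return np.argsort(probs)[::-1][:top_k]      # indices of the top-k vocabulary items

h_layer3_token7 = rng.normal(size=d)            # an intermediate residual-stream vector (placeholder)
print(logit_lens(h_layer3_token7, E))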
A terminological note before we conclude: You will sometimes see a trans-
former used for this kind of unidirectional causal language model called a decoder-
only model. This is because this model constitutes roughly half of the encoder-
decoder model for transformers that we’ll see how to apply to machine translation
in Chapter 13. (Confusingly, the original introduction of the transformer had an
encoder-decoder architecture, and it was only later that the standard paradigm for
causal language model was defined by using only the decoder part of this original
architecture).
9.6 Summary
This chapter has introduced the transformer and its components for the task of lan-
guage modeling. We’ll continue the task of language modeling including issues like
training and sampling in the next chapter.
Here’s a summary of the main points that we covered:
• Transformers are non-recurrent networks based on multi-head attention, a
kind of self-attention. A multi-head attention computation takes an input
vector xi and maps it to an output ai by adding in vectors from prior tokens,
weighted by how relevant they are for the processing of the current word.
• A transformer block consists of a residual stream in which the input from
the prior layer is passed up to the next layer, with the output of different com-
ponents added to it. These components include a multi-head attention layer
followed by a feedforward layer, each preceded by layer normalizations.
Transformer blocks are stacked to make deeper and more powerful networks.
• The input to a transformer is computed by adding an embedding (computed
with an embedding matrix) to a positional encoding that represents the se-
quential position of the token in the window.
• Language models can be built out of stacks of transformer blocks, with a
language model head at the top, which applies an unembedding matrix to
the output H of the top layer to generate the logits, which are then passed
through a softmax to generate word probabilities.
• Transformer-based language models have a wide context window (as wide
as 32768 tokens for very large models) allowing them to draw on enormous
amounts of context to predict upcoming words.
Pretrained large language models achieve remarkable performance on all sorts of natural language tasks because of the knowledge
they learn in pretraining, and they will play a role throughout the rest of this book.
They have been especially transformative for tasks where we need to produce text,
like summarization, machine translation, question answering, or chatbots.
We’ll start by seeing how to apply the transformer of Chapter 9 to language
modeling, in a setting often called causal or autoregressive language models, in
which we iteratively predict words left-to-right from earlier words. We’ll first in-
troduce training, seeing how language models are self-trained by iteratively being
taught to guess the next word in the text from the prior words.
We’ll then talk about the process of text generation. The application of LLMs
to generate text has vastly broadened the scope of NLP. Text generation, code generation, and image generation together constitute the important new area of generative AI. We'll introduce specific algorithms for generating text from a language
model, like greedy decoding and sampling. And we’ll see that almost any NLP
task can be modeled as word prediction in a large language model, if we think about
it in the right way. We’ll work through an example of using large language mod-
els to solve one classic NLP task of summarization (generating a short text that
summarizes some larger document).
Figure 10.1 Left-to-right (also called autoregressive) text completion with transformer-based large language
models. As each token is generated, it gets added onto the context as a prefix for generating the next token.
If the word “positive” is more probable, we say the sentiment of the sentence is
positive, otherwise we say the sentiment is negative.
We can also cast more complex tasks as word prediction. Consider question
answering, in which the system is given a question (for example a question with
a simple factual answer) and must give a textual answer; we introduce this task in
detail in Chapter 14. We can cast the task of question answering as word prediction
by giving a language model a question and a token like A: suggesting that an answer
should come next:
Q: Who wrote the book "The Origin of Species"? A:
If we ask a language model to compute the probability distribution over possible next words given this prefix:

P(w | Q: Who wrote the book "The Origin of Species"? A:)

and look at which words w have high probabilities, we might expect to see that Charles is very likely, and then if we choose Charles and continue and ask

P(w | Q: Who wrote the book "The Origin of Species"? A: Charles)

we might now see that Darwin is the most probable token, and select it.
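As an illustration (not from the text), the same prompting idea can be tried with an off-the-shelf causal language model via the Hugging Face transformers library; the choice of GPT-2 and the use of this particular library are assumptions made for the sake of a runnable sketch.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = 'Q: Who wrote the book "The Origin of Species"? A:'
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits              # shape [1, sequence_length, |V|]
probs = torch.softmax(logits[0, -1], dim=-1)     # distribution over the next token

top = torch.topk(probs, k=5)                     # inspect the most probable continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}  {p.item():.3f}")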
Conditional generation can even be used to accomplish tasks that must generate longer responses. Consider the task of text summarization, which is to take a long text, such as a full-length article, and produce an effective shorter summary of it. We
can cast summarization as language modeling by giving a large language model a
text, and follow the text by a token like tl;dr; this token is short for something like
‘too long; didn’t read’ and in recent years people often use this token, especially in
informal work emails, when they are going to give a short summary. Since this token
is sufficiently frequent in language model training data, language models have seen
many texts in which the token occurs before a summary, and hence will interpret the
token as instructions to generate a summary. We can then do conditional generation:
give the language model this prefix, and then have it generate the following words,
one by one, and take the entire response as a summary. Fig. 10.2 shows an example
of a text and a human-produced summary from a widely-used summarization corpus
consisting of CNN and Daily Mail news articles.
Original Article
The only thing crazier than a guy in snowbound Massachusetts boxing up the powdery white stuff
and offering it for sale online? People are actually buying it. For $89, self-styled entrepreneur
Kyle Waring will ship you 6 pounds of Boston-area snow in an insulated Styrofoam box – enough
for 10 to 15 snowballs, he says.
But not if you live in New England or surrounding states. “We will not ship snow to any states
in the northeast!” says Waring’s website, ShipSnowYo.com. “We’re in the business of expunging
snow!”
His website and social media accounts claim to have filled more than 133 orders for snow – more
than 30 on Tuesday alone, his busiest day yet. With more than 45 total inches, Boston has set a
record this winter for the snowiest month in its history. Most residents see the huge piles of snow
choking their yards and sidewalks as a nuisance, but Waring saw an opportunity.
According to Boston.com, it all started a few weeks ago, when Waring and his wife were shov-
eling deep snow from their yard in Manchester-by-the-Sea, a coastal suburb north of Boston. He
joked about shipping the stuff to friends and family in warmer states, and an idea was born. [...]
Summary
Kyle Waring will ship you 6 pounds of Boston-area snow in an insulated Styrofoam box – enough
for 10 to 15 snowballs, he says. But not if you live in New England or surrounding states.
Figure 10.2 Excerpt from a sample article and its summary from the CNN/Daily Mail summarization corpus
(Hermann et al., 2015b), (Nallapati et al., 2016).
If we take this full article and append the token tl;dr, we can use this as the con-
text to prime the generation process to produce a summary as illustrated in Fig. 10.3.
Again, what makes transformers able to succeed at this task (as compared, say, to
the primitive n-gram language model) is that attention can incorporate information
from the large context window, giving the model access to the original article as well
as to the newly generated text throughout the process.
Which words do we generate at each step? One simple way to generate words
is to always generate the most likely word given the context. Generating the most likely word given the context is called greedy decoding. A greedy algorithm is one that makes a choice that is locally optimal, whether or not it will turn out to have been the best choice with hindsight. Thus in greedy decoding, at each time step in generation, the output yt is chosen by computing the probability of each possible output (every word in the vocabulary) and then choosing the highest-probability word (the argmax):

ŵt = argmax_{w ∈ V} P(w | w<t)
In practice, however, we don’t use greedy decoding with large language models.
A major problem with greedy decoding is that because the words it chooses are (by
definition) extremely predictable, the resulting text is generic and often quite repeti-
tive. Indeed, greedy decoding is so predictable that it is deterministic; if the context
Figure 10.3 Summarization with large language models using the tl;dr token and context-based autore-
gressive generation.
is identical, and the probabilistic model is the same, greedy decoding will always re-
sult in generating exactly the same string. We’ll see in Chapter 13 that an extension
to greedy decoding called beam search works well in tasks like machine translation,
which are very constrained in that we are always generating a text in one language
conditioned on a very specific text in another language. In most other tasks, how-
ever, people prefer text which has been generated by more sophisticated methods,
called sampling methods, that introduce a bit more diversity into the generations.
We’ll see how to do that in the next few sections.
To generate by sampling, we choose each word randomly according to its probability as defined by the model. Thus we are more likely to generate words that the model
thinks have a high probability in the context and less likely to generate words that
the model thinks have a low probability.
We saw back in Chapter 3 on page 43 how to generate text from a unigram lan-
guage model , by repeatedly randomly sampling words according to their probability
until we either reach a pre-determined length or select the end-of-sentence token. To
generate text from a trained transformer language model we’ll just generalize this
model a bit: at each step we’ll sample words according to their probability condi-
tioned on our previous choices, and we’ll use a transformer language model as the
probability model that tells us this probability.
We can formalize this algorithm for generating a sequence of words W = w1 , w2 , . . . , wN
until we hit the end-of-sequence token, using x ∼ p(x) to mean ‘choose x by sam-
pling from the distribution p(x)':
i ← 1
wi ∼ p(w)
while wi != EOS
    i ← i + 1
    wi ∼ p(wi | w<i)
The algorithm above is called random sampling, and it turns out random sampling doesn't work well enough. The problem is that even though random sampling is mostly going to generate sensible, high-probability words, there are many odd, low-
probability words in the tail of the distribution, and even though each one is low-
probability, if you add up all the rare words, they constitute a large enough portion
of the distribution that they get chosen often enough to result in generating weird
sentences. For this reason, instead of random sampling, we usually use sampling
methods that avoid generating the very unlikely words.
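A runnable sketch of the random sampling loop above; next_token_probs is a hypothetical stand-in for a trained language model's distribution P(w | w<i), and the tiny vocabulary is invented for illustration.

import numpy as np

rng = np.random.default_rng(9)
vocab = ["the", "chicken", "road", "crossed", "<EOS>"]   # toy vocabulary (invented)

def next_token_probs(prefix):
    """Hypothetical stand-in for P(w | w_<i) from a trained LM (here: random)."""
    logits = rng.normal(size=len(vocab))
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

words, max_len = [], 20
while True:
    probs = next_token_probs(words)
    w = rng.choice(vocab, p=probs)                       # w_i ~ p(w_i | w_<i)
    if w == "<EOS>" or len(words) >= max_len:
        break
    words.append(w)
print(" ".join(words))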
The sampling methods we introduce below each have parameters that enable
trading off two important factors in generation: quality and diversity. Methods
that emphasize the most probable words tend to produce generations that are rated
by people as more accurate, more coherent, and more factual, but also more boring
and more repetitive. Methods that give a bit more weight to the middle-probability
words tend to be more creative and more diverse, but less factual and more likely to
be incoherent or otherwise low-quality.
Why does this work? When τ is close to 1 the distribution doesn’t change much.
But the lower τ is, the larger the scores being passed to the softmax (dividing by a
smaller fraction τ ≤ 1 results in making each score larger). Recall that one of the
useful properties of a softmax is that it tends to push high values toward 1 and low
values toward 0. Thus when larger numbers are passed to a softmax the result is
a distribution with increased probabilities of the most high-probability words and
decreased probabilities of the low probability words, making the distribution more
greedy. As τ approaches 0 the probability of the most likely word approaches 1.
Note, by the way, that there can be other situations where we may want to do
something quite different and flatten the word probability distribution instead of
making it greedy. Temperature sampling can help with this situation too, in this case
high-temperature sampling, in which case we use τ > 1.
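A sketch of temperature scaling as described here: divide the logits by τ before the softmax (the logits below are placeholders). Lower τ sharpens the distribution toward greedy decoding; τ > 1 flattens it.

import numpy as np

def temperature_softmax(logits, tau):
    z = np.asarray(logits) / tau               # divide the scores by the temperature
    z = z - z.max()                            # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

u = np.array([2.0, 1.0, 0.2, -1.0])            # placeholder logits over a tiny vocabulary
for tau in (1.0, 0.5, 2.0):
    print(tau, np.round(temperature_softmax(u, tau), 3))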
In the case of language modeling, the correct distribution yt comes from knowing the
next word. This is represented as a one-hot vector corresponding to the vocabulary
where the entry for the actual next word is 1, and all the other entries are 0. Thus,
the cross-entropy loss for language modeling is determined by the probability the
model assigns to the correct next word (all other words get multiplied by zero). So
at time t the CE loss in (10.5) can be simplified as the negative log probability the
model assigns to the next word in the training sequence.
Thus at each word position t of the input, the model takes as input the correct se-
quence of tokens w1:t , and uses them to compute a probability distribution over
possible next words so as to compute the model's loss for the next token wt+1. Then we move to the next word: we ignore what the model predicted for the next word and instead use the correct sequence of tokens w1:t+1 to estimate the probability of token wt+2. This idea that we always give the model the correct history sequence to predict the next word (rather than feeding the model its own best guess from the previous time step) is called teacher forcing.
Fig. 10.4 illustrates the general training approach. At each step, given all the
preceding words, the final transformer layer produces an output distribution over
the entire vocabulary. During training, the probability assigned to the correct word
is used to calculate the cross-entropy loss for each item in the sequence. The loss
for a training sequence is the average cross-entropy loss over the entire sequence.
The weights in the network are adjusted to minimize the average CE loss over the
training sequence via gradient descent.
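A sketch of this loss computation, assuming we already have the model's output distribution at each position (random placeholders here): the loss at each step is the negative log probability of the correct next token, averaged over the sequence.

import numpy as np

rng = np.random.default_rng(10)
vocab_size, seq_len = 10_000, 6

# Placeholder: model outputs, one probability distribution per position t over w_{t+1}
probs = rng.dirichlet(np.ones(vocab_size), size=seq_len)    # shape [seq_len x |V|]
targets = rng.integers(vocab_size, size=seq_len)            # the correct next tokens w_{t+1}

token_losses = -np.log(probs[np.arange(seq_len), targets])  # cross-entropy with a one-hot target
sequence_loss = token_losses.mean()                         # average over the training sequence
print(sequence_loss)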
[Figure 10.4 diagram: training a transformer language model. At every position, the input encoding (E plus position), the stacked transformer blocks, and the language modeling head (unembedding U) produce logits over the vocabulary, computed in parallel across the sequence.]
Note the key difference between this figure and the earlier RNN-based version
shown in Fig. 8.6. There the calculation of the outputs and the losses at each step
was inherently serial given the recurrence in the calculation of the hidden states.
With transformers, each training item can be processed in parallel since the output
for each element in the sequence is computed separately.
Large models are generally trained by filling the full context window (for exam-
ple 8192 tokens for GPT-4 or Llama 3) with text. If documents are shorter
than this, multiple documents are packed into the window with a special end-of-text
token between them. The batch size for gradient descent is usually quite large (the
largest GPT-3 model uses a batch size of 3.2 million tokens).
The Pile (Gao et al., 2020) is one widely used open pretraining corpus of English, drawn from sources like academic articles, web text, and dialogue, as well as books and Wikipedia; Fig. 10.5 shows its composition. Dolma is a larger
open corpus of English, created with public tools, containing three trillion tokens,
which similarly consists of web text, academic papers, code, books, encyclopedic
materials, and social media (Soldaini et al., 2024).
Figure 10.5 The Pile corpus, showing the size of different components, color coded as
academic (articles from PubMed and ArXiv, patents from the USPTO), internet (webtext including a subset of the Common Crawl as well as Wikipedia), prose (a large corpus of books), dialogue (including movie subtitles and chat data), and misc. Figure from Gao et al. (2020).
Filtering for quality and safety Pretraining data drawn from the web is filtered
for both quality and safety. Quality filters are classifiers that assign a score to each
document. Quality is of course subjective, so different quality filters are trained
in different ways, but often to value high-quality reference corpora like Wikipedia,
books, and particular websites, and to avoid websites with lots of PII (Personally Iden-
tifiable Information) or adult content. Filters also remove boilerplate text which is
very frequent on the web. Another kind of quality filtering is deduplication, which
can be done at various levels, so as to remove duplicate documents, duplicate web
pages, or duplicate text. Quality filtering generally improves language model per-
formance (Longpre et al., 2024b; Llama Team, 2024).
Safety filtering is again a subjective decision, and often includes toxicity detec-
tion based on running off-the-shelf toxicity classifiers. This can have mixed results.
One problem is that current toxicity classifiers mistakenly flag non-toxic data if it
is generated by speakers of minority dialects like African American English (Xu
et al., 2021). Another problem is that models trained on toxicity-filtered data, while
somewhat less toxic, are also worse at detecting toxicity themselves (Longpre et al.,
2024b). These issues make the question of how to do better safety filtering an im-
portant open problem.
Using large datasets scraped from the web to train language models poses ethical
and legal questions:
Copyright: Much of the text in these large datasets (like the collections of fic-
tion and non-fiction books) is copyrighted. In some countries, like the United
States, the fair use doctrine may allow copyrighted content to be used for
transformative uses, but it’s not clear if that remains true if the language mod-
els are used to generate text that competes with the market for the text they were trained on.
10.3.3 Finetuning
Although the enormous pretraining data for a large language model includes text
from many domains, it’s often the case that we want to apply it in a new domain or
task that might not have appeared sufficiently in the pre-training data. For example,
we might want a language model that’s specialized to legal or medical text. Or we
might have a multilingual language model that knows many languages but might
benefit from some more data in our particular language of interest. Or we want a
language model that is specialized to a particular task.
In such cases, we can simply continue training the model on relevant data from
the new domain or language (Gururangan et al., 2020). This process of taking a fully
pretrained model and running additional training passes on some new data is called
finetuning. Fig. 10.6 sketches the paradigm.
Figure 10.6 Pretraining and finetuning. A pre-trained model can be finetuned to a par-
ticular domain, dataset, or task. There are many different ways to finetune, depending on
exactly which parameters are updated from the finetuning data: all the parameters, some of
the parameters, or only the parameters of specific extra circuitry.
We’ll introduce four related kinds of finetuning in this chapter and the two fol-
lowing chapters. In all four cases, finetuning means the process of taking a pre-
trained model and further adapting some or all of its parameters to some new data.
But they differ on exactly which parameters get updated.
In the first kind of finetuning we retrain all the parameters of the model on this
new data, using the same method (word prediction) and loss function (cross-entropy
loss) as for pretraining. In a sense it’s as if the new data were at the tail end of
the pretraining data, and so you'll sometimes see this method called continued pretraining.
Retraining all the parameters of the model is very slow and expensive when the
language model is huge. So instead we can freeze some of the parameters (i.e., leave them unchanged from their pretrained value) and train only a subset of parameters on the new data. In Section 10.5.3 we'll describe this second variety of finetuning, called parameter-efficient finetuning, or PEFT, because we efficiently select
specific parameters to update when finetuning, and leave the rest in their pretrained
values.
In Chapter 11 we’ll introduce a third kind of finetuning, also parameter-efficient.
In this version, the goal is to use a language model as a kind of classifier or labeler
for a specific task. For example we might train the model to be a sentiment classifier.
We do this by adding extra neural circuitry (an extra head) after the top layer of the
model. This classification head takes as input some of the top layer embeddings of
the transformer and produces as output a classification. In this method, most com-
monly used with masked language models like BERT, we freeze the entire pretrained
model and only train the classification head on some new data, usually labeled with
some class that we want to predict.
Finally, in Chapter 12 we’ll introduce a fourth kind of finetuning, that is a cru-
cial component of the largest language models: supervised finetuning or SFT. SFT
is often used for instruction finetuning, in which we want a pretrained language
model to learn to follow text instructions, for example to answer questions or follow
a command to write something. Here we create a dataset of prompts and desired
responses (for example questions and their answers, or commands and their ful-
fillments), and we train the language model using the normal cross-entropy loss to iteratively predict each token of the desired response given the prompt, essentially training it to produce the desired response from the command in the prompt. It's called supervised
because unlike in pretraining, where we just take any data and predict the words in
it, we build the special finetuning dataset by hand, creating supervised responses to
each command.
Often everything that happens after pretraining is lumped together as post-training;
we’ll discuss the various parts of post-training in Chapter 12.
Given the conditional probability that the LM computes for each new word, we can use the chain rule to expand the computation of the probability of the test set:
Perplexity(w1:n) = [ ∏_{i=1}^{n} 1 / P(wi | w<i) ]^{1/n}    (10.8)
Note that because of the inverse in Eq. 10.7, the higher the probability of the word
sequence, the lower the perplexity. Thus the lower the perplexity of a model on
the data, the better the model. Minimizing perplexity is equivalent to maximizing
the test set probability according to the language model.
One caveat: because perplexity depends on the length of a text, it is very sensitive
to differences in the tokenization algorithm. That means that it’s hard to exactly
compare perplexities produced by two language models if they have very different
tokenizers. For this reason perplexity is best used when comparing language models
that use the same tokenizer.
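A sketch of Eq. 10.8 computed in log space to avoid numerical underflow; the per-token probabilities are placeholders standing in for the values a language model would assign to a test set.

import numpy as np

# Placeholder: P(w_i | w_<i) that a language model assigned to each test-set token
token_probs = np.array([0.12, 0.05, 0.31, 0.08, 0.22])

n = len(token_probs)
log_perplexity = -np.log(token_probs).mean()      # (1/n) * sum_i -log P(w_i | w_<i)
perplexity = np.exp(log_perplexity)               # equals the n-th root of prod 1/P(w_i | w_<i)
print(round(perplexity, 2))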
Other factors While the predictive accuracy of a language model, as measured by
perplexity, is a very useful metric, we also care about different kinds of accuracy, for
the downstream tasks we apply our language model to. For each task like machine
translation, summarization, question answering, speech recognition, and dialogue,
we can measure the accuracy at those tasks. Future chapters will introduce task-
specific metrics that allow us to evaluate how accurate or correct language models
are at these downstream tasks.
But when evaluating models we also care about factors besides any of these
kinds of accuracy (Dodge et al., 2019; Ethayarajh and Jurafsky, 2020). For example,
we often care about how big a model is, and how long it takes to train or do
inference. This can matter because we have constraints on time either for training
or at inference. Or we may have constraints on memory, since the GPUs we run
our models on have fixed memory sizes. Big models also use more energy, and we
prefer models that use less energy, both to reduce the environmental impact of the
model and to reduce the financial cost of building or deploying it. We can target
our evaluation to these factors by measuring performance normalized to a given
compute or memory budget. We can also directly measure the energy usage of our
model in kWh or in kilograms of CO2 emitted (Strubell et al., 2019; Henderson
et al., 2020; Liang et al., 2023).
Another feature that a language model evaluation can measure is fairness. We
know that language models are biased, exhibiting gendered and racial stereotypes,
or decreased performance for language from or about certain demographic groups.
There are language model evaluation benchmarks that measure the strength of these
biases, such as StereoSet (Nadeem et al., 2021), RealToxicityPrompts (Gehman
et al., 2020), and BBQ (Parrish et al., 2022) among many others. We also want
language models whose performance is equally fair to different groups. For exam-
ple, we could choose an evaluation that is fair in a Rawlsian sense by maximizing the
welfare of the worst-off group (Rawls, 2001; Hashimoto et al., 2018; Sagawa et al.,
2020).
Finally, there are many kinds of leaderboards like Dynabench (Kiela et al., 2021)
and general evaluation protocols like HELM (Liang et al., 2023); we will return to
these in later chapters when we introduce evaluation metrics for specific tasks like
question answering and information retrieval.
Scaling laws can be fit using smaller models trained on smaller amounts of data, to predict what the loss would be if we were to add more
data or increase model size. Other aspects of scaling laws can also tell us how much
data we need to add when scaling up a model.
10.5.2 KV Cache
We saw in Fig. 9.10 and in Eq. 9.32 (repeated below) how the attention vector can
be very efficiently computed in parallel for training, via two matrix multiplications:
A = softmax( QKᵀ / √dk ) V    (10.13)
Unfortunately we can’t do quite the same efficient computation in inference as
in training. That’s because at inference time, we iteratively generate the next tokens
one at a time. For a new token that we have just generated, call it xi , we need to
compute its query, key, and values by multiplying by WQ , WK , and WV respec-
tively. But it would be a waste of computation time to recompute the key and value
vectors for all the prior tokens x<i ; at prior steps we already computed these key
and value vectors! So instead of recomputing these, whenever we compute the key
KV cache and value vectors we store them in memory in the KV cache, and then we can just
grab them from the cache when we need them. Fig. 10.7 modifies Fig. 9.10 to show
the computation that takes place for a single new token, showing which values we
can take from the cache rather than recompute.
Figure 10.7 Parts of the attention computation (extracted from Fig. 9.10) showing, in black,
the vectors that can be stored in the cache rather than recomputed when computing the atten-
tion score for the 4th token.
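A sketch of one decoding step with a KV cache: only the new token's query, key, and value are computed, while the keys and values of prior tokens are read from the cache. The dimensions and random weights are placeholders, and a single head is shown; in practice a cache like this is kept for every layer and head.

import numpy as np

rng = np.random.default_rng(11)
d, dk, dv = 512, 64, 64
WQ = rng.normal(size=(d, dk))
WK = rng.normal(size=(d, dk))
WV = rng.normal(size=(d, dv))

K_cache = np.empty((0, dk))      # keys of previously processed tokens
V_cache = np.empty((0, dv))      # values of previously processed tokens

def attend_new_token(x_new):
    """Attention output for the newest token, reusing cached keys and values."""
    global K_cache, V_cache
    q = x_new @ WQ                               # only the new token's q, k, v are computed
    K_cache = np.vstack([K_cache, x_new @ WK])   # append k_i to the cache
    V_cache = np.vstack([V_cache, x_new @ WV])   # append v_i to the cache
    scores = (K_cache @ q) / np.sqrt(dk)         # q_i . k_j for all cached positions j <= i
    w = np.exp(scores - scores.max()); w /= w.sum()
    return w @ V_cache                           # a_i, shape [1 x dv]

for _ in range(4):                               # simulate generating four tokens one at a time
    a = attend_new_token(rng.normal(size=d))
print(a.shape, K_cache.shape)                    # (64,) (4, 64)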
Here we describe one such method, called LoRA, for Low-Rank Adaptation. The
intuition of LoRA is that transformers have many dense layers which perform matrix
multiplication (for example the WQ , WK , WV , WO layers in the attention computa-
tion). Instead of updating these layers during finetuning, with LoRA we freeze these
layers and instead update a low-rank approximation that has fewer parameters.
Consider a matrix W of dimensionality [N × d] that needs to be updated during
finetuning via gradient descent. Normally this matrix would get an update ∆W of dimensionality [N × d], requiring N × d parameters to be updated by gradient descent. In LoRA, we freeze W and update instead a low-rank decomposition of ∆W. We create two matrices A and B, where A has size [N × r] and B has size [r × d], and we choose r to be quite small, r << min(d, N). During finetuning we update A and B instead of W. That is, we replace W + ∆W with W + AB. Fig. 10.8 shows the intuition.
For replacing the forward pass h = xW, the new forward pass is instead:
h = xW + xAB (10.14)
Figure 10.8 The intuition of LoRA. We freeze W to its pretrained values, and instead fine-
tune by training a pair of matrices A and B, updating those instead of W, and just sum W and
the updated AB.
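A sketch of the LoRA forward pass of Eq. 10.14: W stays frozen, and only the small matrices A and B would receive gradient updates. Shapes follow the text; initializing B to zero (so that AB starts at zero and the model initially behaves exactly like the pretrained one) is a common choice, treated here as an assumption.

import numpy as np

rng = np.random.default_rng(12)
N_in, d_out, r = 512, 512, 8           # input dim (the text's N), output dim d, and small rank r

W = rng.normal(size=(N_in, d_out))     # pretrained weight matrix: frozen during finetuning
A = rng.normal(size=(N_in, r)) * 0.01  # low-rank factors: the only finetuned parameters
B = np.zeros((r, d_out))               # B starts at zero, so W + AB == W before finetuning

def lora_forward(x):
    """h = xW + xAB  (Eq. 10.14); only A and B change during finetuning."""
    return x @ W + (x @ A) @ B

x = rng.normal(size=N_in)
print(lora_forward(x).shape)           # (512,)
print(A.size + B.size, "trainable parameters vs", W.size, "frozen")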
Large pretrained neural language models exhibit many of the potential harms dis-
cussed in Chapter 4 and Chapter 6. Many of these harms become realized when
pretrained language models are used for any downstream task, particularly those
involving text generation, whether question answering, machine translation, or in
assistive technologies like writing aids or web search query completion, or predic-
tive typing for email (Olteanu et al., 2020).
For example, language models are prone to saying things that are false, a prob-
hallucination lem called hallucination. Language models are trained to generate text that is pre-
dictable and coherent, but the training algorithms we have seen so far don’t have
any way to enforce that the text that is generated is correct or true. This causes
enormous problems for any application where the facts matter! We’ll return to this
issue in Chapter 14 where we introduce proposed mitigation methods like retrieval
augmented generation.
toxic language A second source of harm is that language models can generate toxic language.
Gehman et al. (2020) show that even completely non-toxic prompts can lead large
language models to output hate speech and abuse their users. Language models also
generate stereotypes (Cheng et al., 2023) and negative attitudes (Brown et al., 2020;
Sheng et al., 2019) about many demographic groups.
One source of biases is the training data. Gehman et al. (2020) shows that large
language model training datasets include toxic text scraped from banned sites. There
are other biases than toxicity: the training data is disproportionately generated by
authors from the US and from developed countries. Such biased population samples
likely skew the resulting generation toward the perspectives or topics of this group
alone. Furthermore, language models can amplify demographic and other biases in
training data, just as we saw for embedding models in Chapter 6.
Datasets can be another source of harms. We already saw in Section 10.3.2
that using pretraining corpora scraped from the web can lead to harms related to
copyright and data consent. We also mentioned that pretraining data can tend to
have private information like phone numbers and addresses. This is problematic
because large language models can leak information from their training data. That
is, an adversary can extract training-data text from a language model such as a per-
son’s name, phone number, and address (Henderson et al. 2017, Carlini et al. 2021).
This becomes even more problematic when large language models are trained on
extremely sensitive private datasets such as electronic health records.
Language models can also be used by malicious actors for generating text for
misinformation, phishing, or other socially harmful activities (Brown et al., 2020).
McGuffie and Newhouse (2020) show how large language models generate text that
emulates online extremists, with the risk of amplifying extremist movements and
their attempt to radicalize and recruit.
Finding ways to mitigate all these harms is an important current research area in
NLP. At the very least, carefully analyzing the data used to pretrain large language
models is important as a way of understanding issues of toxicity, bias, privacy, and
fair use, making it extremely important that language models include datasheets
(page 16) or model cards (page 74) giving full replicable information on the cor-
pora used to train them. Open-source models can specify their exact training data.
Requirements that models be transparent in such ways are also in the process of being
incorporated into the regulations of various national governments.
10.7 Summary
This chapter has introduced the large language model, and how it can be built out of
the transformer. Here’s a summary of the main points that we covered:
• Many NLP tasks—such as question answering, summarization, sentiment,
and machine translation—can be cast as tasks of word prediction and hence
addressed with large language models.
• Large language models are pretrained on large datasets of hundreds of billions of words, generally scraped from the web.
• These datasets need to be filtered for quality and balanced for domains by
upsampling and downsampling. Addressing some problems with pretraining
data, like toxicity, are open research problems.
• The choice of which word to generate in large language models is generally
done by using a sampling algorithm.
• Language models are evaluated by perplexity but there are also evaluations
of accuracy downstream tasks, and ways to measure other factors like fairness
and energy use.
• There are various computational tricks for making large language models
more efficient, such as the KV cache and parameter-efficient finetuning.
• Because of their ability to be used in so many ways, language models also
have the potential to cause harms. Some harms include hallucinations, bias,
stereotypes, misinformation and propaganda, and violations of privacy and
copyright.
other features from the context, including distant n-grams and pairs of associated
words called trigger pairs. Rosenfeld’s model prefigured modern language models
by being a statistical word predictor trained in a self-supervised manner simply by
learning to predict upcoming words in a corpus.
Another was the first use of pretrained embeddings to model word meaning in the
LSA/LSI models (Deerwester et al., 1988). Recall from the history section of Chap-
ter 6 that in LSA (latent semantic analysis) a term-document matrix was trained on a
corpus and then singular value decomposition was applied and the first 300 dimen-
sions were used as a vector embedding to represent words. Landauer et al. (1997)
first used the word “embedding”. In addition to their development of the idea of pre-
training and of embeddings, the LSA community also developed ways to combine
LSA embeddings with n-grams in an integrated language model (Bellegarda, 1997;
Coccaro and Jurafsky, 1998).
In a very influential series of papers developing the idea of neural language
models (Bengio et al. 2000; Bengio et al. 2003; Bengio et al. 2006), Yoshua Ben-
gio and colleagues drew on the central ideas of both these lines of self-supervised
language modeling work (the discriminatively trained word predictor and the pre-
trained embeddings). Like the maxent models of Rosenfeld, Bengio's model used
the next word in running text as its supervision signal. Like the LSA models, Ben-
gio’s model learned an embedding, but unlike the LSA models did it as part of the
process of language modeling. The Bengio et al. (2003) model was a neural lan-
guage model: a neural network that learned to predict the next word from prior
words, and did so via learning embeddings as part of the prediction process.
The neural language model was extended in various ways over the years, perhaps
most importantly in the form of the RNN language model of Mikolov et al. (2010)
and Mikolov et al. (2011). The RNN language model was perhaps the first neural
model that was accurate enough to surpass the performance of a traditional 5-gram
language model.
Soon afterwards, Mikolov et al. (2013a) and Mikolov et al. (2013b) proposed
to simplify the hidden layer of these neural net language models to create pretrained
word2vec word embeddings.
The static embedding models like LSA and word2vec instantiated a particular
model of pretraining: a representation was trained on a pretraining dataset, and then
the representations could be used in further tasks. Dai and Le (2015) and Peters
et al. (2018) reframed this idea by proposing models that were pretrained
using a language model objective, and then the identical model could be either frozen
and directly applied for language modeling or further finetuned still using a language
model objective. For example, ELMo used a biLSTM self-supervised on a large
pretraining dataset using a language model objective, then finetuned on a domain-
specific dataset, and then froze the weights and added task-specific heads.
Transformers were first applied as encoder-decoders (Vaswani et al., 2017) and
then to masked language modeling (Devlin et al., 2019) (as we’ll see in Chapter 13
and Chapter 11). Radford et al. (2019) then showed that the transformer-based au-
toregressive language model GPT2 could perform zero-shot on many NLP tasks like
summarization and question answering.
The technology used for transformer-based language models can also be applied
to other domains and tasks, like vision, speech, and genetics. The term foundation
foundation model is sometimes used as a more general term for this use of large language
model
model technology across domains and areas, when the elements we are computing
over are not necessarily words. Bommasani et al. (2021) is a broad survey that
sketches the opportunities and risks of foundation models, with special attention to
large language models.
CHAPTER
11 Masked Language Models
In the previous two chapters we introduced the transformer and saw how to pre-
train a transformer language model as a causal or left-to-right language model. In
this chapter we’ll introduce a second paradigm for pretrained language models, the
BERT
masked
language
modeling
bidirectional transformer encoder, and the most widely-used version, the BERT model
(Devlin et al., 2019). This model is trained via masked language modeling,
where instead of predicting the following word, we mask a word in the middle and
ask the model to guess the word given the words on both sides. This method thus
allows the model to see both the right and left context.
finetuning We also introduced finetuning in the prior chapter. Here we describe a new
kind of finetuning, in which we take the transformer network learned by these pre-
trained models, add a neural net classifier after the top layer of the network, and train
it on some additional labeled data to perform some downstream task like named
entity tagging or natural language inference. As before, the intuition is that the
pretraining phase learns a language model that instantiates rich representations of
word meaning, which thus enables the model to more easily learn (‘be finetuned to’)
the requirements of a downstream language understanding task. This aspect of the
transfer
learning pretrain-finetune paradigm is an instance of what is called transfer learning in ma-
chine learning: the method of acquiring knowledge from one task or domain, and
then applying it (transferring it) to solve a new task.
The second idea that we introduce in this chapter is the idea of contextual em-
beddings: representations for words in context. The methods of Chapter 6 like
word2vec or GloVe learned a single vector embedding for each unique word w in
the vocabulary. By contrast, with contextual embeddings, such as those learned by
masked language models like BERT, each word w will be represented by a different
vector each time it appears in a different context. While the causal language models
of Chapter 9 also use contextual embeddings, the embeddings created by masked
language models seem to function particularly well as representations.
which we want to tag each token with a label, such as the part-of-speech tagging or
parsing tasks we’ll introduce in future chapters, or tasks like named entity tagging
we’ll introduce later in this chapter.
The bidirectional encoders that we introduce here are a different kind of beast
than causal models. The causal models of Chapter 9 are generative models, de-
signed to easily generate the next token in a sequence. But the focus of bidirec-
tional encoders is instead on computing contextualized representations of the input
tokens. Bidirectional encoders use self-attention to map sequences of input embed-
dings (x1 , ..., xn ) to sequences of output embeddings of the same length (h1 , ..., hn ),
where the output vectors have been contextualized using information from the en-
tire input sequence. These output embeddings are contextualized representations of
each input token that are useful across a range of applications where we need to do
a classification or a decision based on the token in context.
Remember that we said the models of Chapter 9 are sometimes called decoder-
only, because they correspond to the decoder part of the encoder-decoder model we
will introduce in Chapter 13. By contrast, the masked language models of this chap-
ter are sometimes called encoder-only, because they produce an encoding for each
input token but generally aren’t used to produce running text by decoding/sampling.
That’s an important point: masked language models are not used for generation.
They are generally instead used for interpretative tasks.
Figure 11.1 (a) The causal transformer from Chapter 9, highlighting the attention computation at token 3. The
attention value at each token is computed using only information seen earlier in the context. (b) Information
flow in a bidirectional attention model. In processing each token, the model attends to all inputs, both before
and after the current one. So attention for token 3 can draw on information from following tokens.
Figure 11.2 The N × N QKᵀ matrix showing the qi · k j values, with the upper-triangle
portion of the comparisons matrix zeroed out (set to −∞, which the softmax will turn to
zero).
Fig. 11.2 shows the masked version of QKᵀ and the unmasked version. For bidi-
rectional attention, we use the unmasked version of Fig. 11.2b. Thus the attention
computation for bidirectional attention is exactly the same as Eq. 11.1 but with the
mask removed:
A = softmax( QKᵀ / √dk ) V                                    (11.2)
In BERT, 15% of the input tokens in a training sequence are sampled for learning.
Of these, 80% are replaced with [MASK], 10% are replaced with randomly selected
tokens, and the remaining 10% are left unchanged.
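As a concrete illustration of this sampling scheme, here is a minimal Python sketch of the 80/10/10 corruption rule described above; the [MASK] id, the vocabulary size, and the example token ids are made-up placeholders rather than BERT's actual values.

import random

MASK_ID = 103          # hypothetical id for the [MASK] token
VOCAB_SIZE = 30000     # hypothetical vocabulary size

def corrupt_for_mlm(token_ids, sample_prob=0.15, seed=0):
    """Return (corrupted_ids, target_positions) following the 80/10/10 rule."""
    rng = random.Random(seed)
    corrupted = list(token_ids)
    targets = []                       # positions whose original id must be predicted
    for i in range(len(token_ids)):
        if rng.random() >= sample_prob:
            continue                   # this token is not sampled for learning
        targets.append(i)
        r = rng.random()
        if r < 0.8:
            corrupted[i] = MASK_ID                      # 80%: replace with [MASK]
        elif r < 0.9:
            corrupted[i] = rng.randrange(VOCAB_SIZE)    # 10%: replace with a random token
        # remaining 10%: leave the token unchanged
    return corrupted, targets

if __name__ == "__main__":
    ids = [7, 42, 911, 13, 256, 88, 5, 3021]   # placeholder token ids
    print(corrupt_for_mlm(ids))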
The MLM training objective is to predict the original inputs for each of the
masked tokens using a bidirectional encoder of the kind described in the last section.
The cross-entropy loss from these predictions drives the training process for all the
parameters in the model. Note that all of the input tokens play a role in the self-
attention process, but only the sampled tokens are used for learning.
More specifically, the original input sequence is first tokenized using a subword
model. The sampled items which drive the learning process are chosen among the
input tokens. Word embeddings for all of the tokens in the input are retrieved from
the E embedding matrix and combined with positional embeddings to form the input
to the transformer, passed through the stack of transformer blocks, and then the
language modeling head.
Fig. 11.3 illustrates this approach with a simple example. Here, long, thanks and
the have been sampled from the training sequence, with the first two masked and the
third (the) replaced with the randomly sampled token apricot. The resulting embeddings are
passed through a stack of bidirectional transformer blocks. Recall from Section 9.5
in Chapter 9 that to produce a probability distribution over the vocabulary for each
of the masked tokens, the language modeling head takes the output vector hLi from
the final transformer layer L for each masked token i, multiplies it by the unembed-
ding layer ET to produce the logits u, and then uses softmax to turn the logits into
probabilities y over the vocabulary:
ui = hLi Eᵀ                                                   (11.3)
yi = softmax(ui)                                              (11.4)
With a predicted probability distribution for each masked item, we can use cross-
entropy to compute the loss for each masked item—the negative log probability
assigned to the actual masked word, as shown in Fig. 11.3. More formally, let the
vector of input tokens in a sentence or batch be x, let the set of tokens that are
masked be M, the version of that sentence with some tokens replaced by masks be
xmask, and the sequence of output vectors be h. For a given input token xi, such as
the word long in Fig. 11.3, the loss is the negative log probability of the correct
word long, given xmask (as summarized in the single output vector hLi):

LMLM(xi) = − log P(xi | hLi)
The gradients that form the basis for the weight updates are based on the average
loss over the sampled learning items from a single training sequence (or batch of
sequences).
LMLM = − (1/|M|) ∑_{i∈M} log P(xi | hLi)
Note that only the tokens in M play a role in learning; the other words play no role
in the loss function, so in that sense BERT and its descendants are inefficient: only
15% of the input samples in the training data are actually used for training weights.
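As a rough sketch of how this averaged loss can be computed in practice (not BERT's actual implementation), the snippet below evaluates cross-entropy only over the sampled positions by marking every other position with an ignore index; the logits, labels, and sampled positions are random placeholders.

import torch
import torch.nn.functional as F

# Hypothetical shapes: batch of 2 sequences, length 8, vocabulary of 30000.
B, T, V = 2, 8, 30000
logits = torch.randn(B, T, V)            # language-model-head outputs for every position
labels = torch.randint(0, V, (B, T))     # original token ids
sampled = torch.zeros(B, T, dtype=torch.bool)
sampled[0, 2] = sampled[0, 5] = sampled[1, 1] = True   # the set M of learning positions

# Positions outside M contribute nothing to the loss.
targets = labels.masked_fill(~sampled, -100)
loss = F.cross_entropy(logits.view(-1, V), targets.view(-1), ignore_index=-100)
print(loss.item())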
Cross entropy is used to compute the NSP loss for each sentence pair presented
to the model. Fig. 11.4 illustrates the overall NSP training setup. In BERT, the NSP
loss was used in conjunction with the MLM training objective to form the final loss.
(Figure 11.4: the NSP training setup, with token, segment, and positional embeddings summed for the paired input [CLS] Cancel my flight [SEP] And the hotel [SEP], and an NSP head reading the hCLS output, trained with a cross-entropy loss.)
web text from Common Crawl), randomly. In that case we will choose a lot of sen-
tences from languages with lots of web representation, like English, and the tokens
will be biased toward rare English tokens instead of creating frequent
tokens from languages with less data. Instead, it is common to divide the training
data into subcorpora of N different languages, compute the number of sentences ni
of each language i, and readjust these probabilities so as to upweight the probability
of less-represented languages (Lample and Conneau, 2019). The new probability of
selecting a sentence from each of the N languages (whose prior frequency is ni ) is
{qi }i=1...N , where:
q_i = p_i^α / ∑_{j=1}^{N} p_j^α ,   with   p_i = n_i / ∑_{k=1}^{N} n_k        (11.5)
Recall from (6.32) in Chapter 6 that an α value between 0 and 1 will give higher
weight to lower probability samples. Conneau et al. (2020) show that α = 0.3 works
well to give rare languages more inclusion in the tokenization, resulting in better
multilingual performance overall.
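The reweighting in Eq. 11.5 is simple to compute; here is a small sketch with invented sentence counts for three languages, using the α = 0.3 value mentioned above.

def language_sampling_probs(sentence_counts, alpha=0.3):
    """Return upweighted sampling probabilities q_i from per-language counts n_i."""
    total = sum(sentence_counts.values())
    p = {lang: n / total for lang, n in sentence_counts.items()}       # raw frequencies p_i
    z = sum(pi ** alpha for pi in p.values())
    return {lang: (pi ** alpha) / z for lang, pi in p.items()}         # exponentiated q_i

# Invented counts: English dominates, Swahili is rare.
counts = {"en": 10_000_000, "fr": 1_000_000, "sw": 10_000}
print(language_sampling_probs(counts))   # q for "sw" is much larger than its raw share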
The result of this pretraining process consists of both learned word embeddings,
as well as all the parameters of the bidirectional encoder that are used to produce
contextual embeddings for novel inputs.
For many purposes, a pretrained multilingual model is more practical than a
monolingual model, since it avoids the need to build many (a hundred!) separate
monolingual models. And multilingual models can improve performance on low-
resourced languages by leveraging linguistic information from a similar language in
the training data that happens to have more resources. Nonetheless, when the num-
ber of languages grows very large, multilingual models exhibit what has been called
the curse of multilinguality (Conneau et al., 2020): the performance on each lan-
guage degrades compared to a model trained on fewer languages. Another problem
with multilingual models is that they ‘have an accent’: grammatical structures in
higher-resource languages (often English) bleed into lower-resource languages; the
vast amount of English language in training makes the model’s representations for
low-resource languages slightly more English-like (Papadimitriou et al., 2023).
word type in a particular context. Thus where word2vec had a single vector for each
word type, contextual embeddings provide a single vector for each instance of that
word type in its sentential context. Contextual embeddings can thus be used for
tasks like measuring the semantic similarity of two words in context, and are useful
in linguistic tasks that require models of word meaning.
Figure 11.6 Each blue dot shows a BERT contextual embedding for the word die from different sentences
in English and German, projected into two dimensions with the UMAP algorithm. The German and English
meanings and the different English senses fall into different clusters. Some sample points are shown with the
contextual sentence they came from. Figure from Coenen et al. (2019).
Thus while thesauruses like WordNet give discrete lists of senses, embeddings
(whether static or contextual) offer a continuous high-dimensional model of meaning
that, although it can be clustered, doesn’t divide up into fully discrete senses.
(Figure content: the input sentence an electric guitar and bass player stand off to one side, with candidate WordNet senses shown for each content word.)
Figure 11.7 The all-words WSD task, mapping from input words (x) to WordNet senses
(y). Figure inspired by Chaplot and Salakhutdinov (2018).
WSD can be a useful analytic tool for text analysis in the humanities and social
sciences, and word senses can play a role in model interpretability for word repre-
sentations. Word senses also have interesting distributional properties. For example
a word often is used in roughly the same sense through a discourse, an observation
one sense per
discourse called the one sense per discourse rule (Gale et al., 1992a).
The best performing WSD algorithm is a simple 1-nearest-neighbor algorithm
using contextual word embeddings, due to Melamud et al. (2016) and Peters et al.
(2018). At training time we pass each sentence in some sense-labeled dataset (like
the SemCor or SenseEval datasets in various languages) through any contextual
embedding (e.g., BERT) resulting in a contextual embedding for each labeled token.
(There are various ways to compute this contextual embedding vi for a token i; for
BERT it is common to pool multiple layers by summing the vector representations
of i from the last four BERT layers). Then for each sense s of any word in the corpus,
for each of the n tokens of that sense, we average their n contextual representations
vi to produce a contextual sense embedding vs for s:
vs = (1/n) ∑_i vi ,   ∀ vi ∈ tokens(s)                        (11.6)
At test time, given a token of a target word t in context, we compute its contextual
embedding t and choose its nearest neighbor sense from the training set, i.e., the
sense whose sense embedding has the highest cosine with t:

sense(t) = argmax_{s∈senses(t)} cosine(t, vs)
(Figure content: the sentence I found the jar empty, with precomputed embeddings for several senses of find.)
Figure 11.8 The nearest-neighbor algorithm for WSD. In green are the contextual embed-
dings precomputed for each sense of each word; here we just show a few of the senses for
find. A contextual embedding is computed for the target word found, and then the nearest
neighbor sense (in this case find9v ) is chosen. Figure inspired by Loureiro and Jorge (2019).
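Here is a minimal sketch of this 1-nearest-neighbor procedure. The contextual vectors are random stand-ins for real BERT embeddings, and the sense ids (find%1, find%9) are just illustrative labels; a real system would compute the vectors by running a sense-labeled corpus through an encoder as described above.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_sense_embeddings(labeled_tokens):
    """labeled_tokens: list of (sense_id, contextual_vector) pairs from a sense-tagged corpus.
    Returns the average vector v_s for each sense, as in Eq. 11.6."""
    by_sense = {}
    for sense, vec in labeled_tokens:
        by_sense.setdefault(sense, []).append(vec)
    return {sense: np.mean(vecs, axis=0) for sense, vecs in by_sense.items()}

def disambiguate(target_vec, sense_embeddings):
    """Return the sense whose embedding has the highest cosine with the target token."""
    return max(sense_embeddings, key=lambda s: cosine(target_vec, sense_embeddings[s]))

# Toy example with 4-dimensional placeholder vectors.
rng = np.random.default_rng(0)
train = [("find%1", rng.normal(size=4)) for _ in range(3)] + \
        [("find%9", rng.normal(loc=2.0, size=4)) for _ in range(3)]
senses = build_sense_embeddings(train)
print(disambiguate(rng.normal(loc=2.0, size=4), senses))   # most likely "find%9"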
In the kind of finetuning used for masked language models, we add application-
specific circuitry (often called a special head) on top of pretrained models, taking
their output as its input. The finetuning process consists of using labeled data about
the application to train these additional application-specific parameters. Typically,
this training will either freeze or make only minimal adjustments to the pretrained
language model parameters.
The following sections introduce finetuning methods for the most common kinds
of applications: sequence classification, sentence-pair classification, and sequence
labeling.
y = softmax(hLCLS WC ) (11.11)
tailment (does sentence A logically entail sentence B?), and discourse coherence
(how coherent is sentence B as a follow-on to sentence A?).
Fine-tuning an application for one of these tasks proceeds just as with pretrain-
ing using the NSP objective. During finetuning, pairs of labeled sentences from a
supervised finetuning set are presented to the model, and run through all the layers
of the model to produce the h outputs for each input token. As with sequence classi-
fication, the output vector associated with the prepended [CLS] token represents the
model’s view of the input pair. And as with NSP training, the two inputs are sepa-
rated by the [SEP] token. To perform classification, the [CLS] vector is multiplied
by a set of learned classification weights and passed through a softmax to generate
label predictions, which are then used to update the weights.
As an example, let’s consider an entailment classification task with the Multi-
Genre Natural Language Inference (MultiNLI) dataset (Williams et al., 2018). In
natural
language the task of natural language inference or NLI, also called recognizing textual
inference
entailment, a model is presented with a pair of sentences and must classify the re-
lationship between their meanings. For example in the MultiNLI corpus, pairs of
sentences are given one of 3 labels: entails, contradicts and neutral. These labels
describe a relationship between the meaning of the first sentence (the premise) and
the meaning of the second sentence (the hypothesis). Here are representative exam-
ples of each class from the corpus:
• Neutral
a: Jon walked back to the town to the smithy.
b: Jon traveled back to his hometown.
• Contradicts
a: Tourist Information offices can be very helpful.
b: Tourist Information offices are never of any help.
• Entails
a: I’m confused.
b: Not all of it is very clear to me.
A relationship of contradicts means that the premise contradicts the hypothesis; en-
tails means that the premise entails the hypothesis; neutral means that neither is
necessarily true. The meaning of these labels is looser than strict logical entailment
or contradiction, indicating that a typical human reading the sentences would most
likely interpret the meanings in this way.
To finetune a classifier for the MultiNLI task, we pass the premise/hypothesis
pairs through a bidirectional encoder as described above and use the output vector
for the [CLS] token as the input to the classification head. As with ordinary sequence
classification, this head provides the input to a three-way classifier that can be trained
on the MultiNLI training corpus.
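A minimal PyTorch-style sketch of this setup is shown below. The encoder here is only a stand-in (an embedding lookup) for a real pretrained bidirectional encoder, and the ids and labels are random; the point is just the shape of the computation: take the [CLS] output vector, multiply by learned classification weights, and train with cross-entropy.

import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Classification head over the [CLS] output vector, in the style of Eq. 11.11."""
    def __init__(self, encoder, hidden_size, num_labels=3):
        super().__init__()
        self.encoder = encoder                           # pretrained bidirectional encoder (stand-in here)
        self.head = nn.Linear(hidden_size, num_labels)   # the learned classification weights WC

    def forward(self, input_ids):
        h = self.encoder(input_ids)                # (batch, seq_len, hidden_size)
        h_cls = h[:, 0, :]                         # output vector for the prepended [CLS] token
        return self.head(h_cls)                    # logits; softmax is applied inside the loss

# Stand-in encoder: an embedding layer in place of a real transformer stack.
hidden = 16
encoder = nn.Embedding(1000, hidden)
model = PairClassifier(encoder, hidden, num_labels=3)   # entails / contradicts / neutral

ids = torch.randint(0, 1000, (2, 12))              # 2 premise/hypothesis pairs, already tokenized
labels = torch.tensor([0, 2])
loss = nn.functional.cross_entropy(model(ids), labels)
loss.backward()                                     # an optimizer step would follow in finetuning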
[PER Washington] was born into slavery on the farm of James Burroughs.
[ORG Washington] went up 2 games to 1 in the four-game series.
Blair arrived in [LOC Washington] for what may well be his last state visit.
In June, [GPE Washington] passed a primary seatbelt law.
Figure 11.11 Examples of type ambiguities in the use of the name Washington.
We’ve also shown two variant tagging schemes: IO tagging, which loses some
information by eliminating the B tag, and BIOES tagging, which adds an end tag E
for the end of a span, and a span tag S for a span consisting of only one word.
yi = softmax(hLi WK ) (11.12)
ti = argmaxk (yi ) (11.13)
Alternatively, the distribution over labels provided by the softmax for each input
token can be passed to a conditional random field (CRF) layer which can take global
tag-level transitions into account (see Chapter 17 on CRFs).
Unfortunately, the sequence of WordPiece tokens for this sentence doesn’t align
directly with BIO tags in the annotation:
To deal with this misalignment, we need a way to assign BIO tags to subword
tokens during training and a corresponding way to recover word-level tags from
subwords during decoding. For training, we can just assign the gold-standard tag
associated with each word to all of the subword tokens derived from it.
For decoding, the simplest approach is to use the argmax BIO tag associated with
the first subword token of a word. Thus, in our example, the BIO tag assigned to
“Mt” would be assigned to “Mt.” and the tag assigned to “San” would be assigned
to “Sanitas”, effectively ignoring the information in the tags assigned to “.” and
“##itas”. More complex approaches combine the distribution of tag probabilities
across the subwords in an attempt to find an optimal word-level tag.
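The following sketch shows one way to implement both directions of this alignment. It assumes we have a word_ids list mapping each subword to the index of the word it came from (the kind of alignment a subword tokenizer can provide); the sentence and tags are an invented toy example in the spirit of the Mt. Sanitas discussion above.

def tags_to_subwords(word_tags, word_ids):
    """Training direction: every subword inherits the gold BIO tag of its word.
    word_ids[i] is the index of the word that subword i came from (None for special tokens)."""
    return [word_tags[w] if w is not None else "O" for w in word_ids]

def subword_tags_to_words(pred_tags, word_ids):
    """Decoding direction: keep only the tag predicted for the first subword of each word."""
    word_tags, seen = [], set()
    for w, tag in zip(word_ids, pred_tags):
        if w is None or w in seen:
            continue
        seen.add(w)
        word_tags.append(tag)
    return word_tags

# Toy alignment for the (invented) sentence "Mt. Sanitas is nearby".
subwords  = ["Mt", ".", "San", "##itas", "is", "nearby"]
word_ids  = [0, 0, 1, 1, 2, 3]                # which word each subword came from
gold_tags = ["B-LOC", "I-LOC", "O", "O"]      # word-level annotation
sub_gold  = tags_to_subwords(gold_tags, word_ids)      # ['B-LOC','B-LOC','I-LOC','I-LOC','O','O']
print(subword_tags_to_words(sub_gold, word_ids))       # recovers ['B-LOC','I-LOC','O','O']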
11.6 Summary
This chapter has introduced the bidirectional encoder and the masked language
model. Here’s a summary of the main points that we covered:
• Bidirectional encoders can be used to generate contextualized representations
of input embeddings using the entire input context.
• Pretrained language models based on bidirectional encoders can be learned
using a masked language model objective where a model is trained to guess
the missing information from an input.
• The vector output of each transformer block or component in a particular to-
ken column is a contextual embedding that represents some aspect of the
meaning of a token in context.
• A word sense is a discrete representation of one aspect of the meaning of a
word. Contextual embeddings offer a continuous high-dimensional model of
meaning that is richer than fully discrete senses.
• The cosine between contextual embeddings can be used as one way to model
the similarity between two words in context, although some transformations
to the embeddings are required first.
• Pretrained language models can be finetuned for specific applications by adding
lightweight classifier layers on top of the outputs of the pretrained model.
• These applications can include sequence classification tasks like sentiment
analysis, sequence-pair classification tasks like natural language inference,
or sequence labeling tasks like named entity recognition.
CHAPTER
12 Model Alignment, Prompting, and In-Context Learning
In this chapter we show how to get LLMs to do tasks for us simply by talking to
them. To get an LLM to translate a sentence, outline a talk, or draft a work email,
we’ll simply describe what we want in natural language. We call these instructions
prompts we give to language models prompts.
Prompting relies on contextual generation. Given the prompt as context, the lan-
guage model generates the next token based on its token probability, conditioned on
the prompt: P(wi |w<i ). A prompt can be a question (like “What is a transformer net-
work?”), possibly in a structured format (like “Q: What is a transformer network?
A:”), or can be an instruction (like “Translate the following sentence into Hindi:
demonstrations ‘Chop the garlic finely’”). A prompt can also contain demonstrations, examples to
help make the instructions clearer (like “Give the sentiment of the following sen-
tence. Example Input: “I really loved Taishan Cuisine.” Output: positive”.) As we’ll
see, prompting can be applied to inherently generative tasks (like summarization and
translation) as well as to ones more naturally thought of as classification tasks.
Prompts get language models to generate text, but they also can be viewed as
a learning signal, because these demonstrations can help language models learn
to perform novel tasks. For this reason we also refer to prompting as in-context-
in-context-
learning learning—learning that improves model performance or reduces some loss but does
not involve gradient-based updates to the model’s underlying parameters.
But LLMs as we’ve described them so far turn out to be bad at following instruc-
tions. Pretraining isn’t sufficient to make them helpful. We’ll introduce instruction
instruction
tuning tuning, a technique that helps LLMs learn to correctly respond to instructions by
finetuning them on a corpus of instructions with their corresponding responses.
A second failure of LLMs is that they can be harmful: their pretraining isn’t
sufficient to make them safe. Readers who know Arthur C. Clarke’s 2001: A Space
Odyssey or the Stanley Kubrick film know that the quote above comes in the context
that the artificial intelligence Hal becomes paranoid and tries to kill the crew of the
spaceship. Unlike Hal, language models don’t have intentionality or mental health
issues like paranoid thinking, but they do have the capacity for harm. Pretrained lan-
guage models can say things that are dangerous or false (like giving unsafe medical
advice) and they can verbally attack users or say toxic or hateful things.
Dealing with safety can be done partly by adding safety training into instruction
tuning. But an important aspect of safety training is a second technique, preference
preference
alignment alignment (often implemented, as we’ll see, with the RLHF or DPO algorithms) in
which a separate model is trained to decide how much a candidate response aligns
with human preferences. Together we refer to instruction tuning and preference
model
alignment alignment as model alignment. The intuition is that we want the learning objectives
of models to be aligned with the goals of the humans that use them.
12.1 Prompting
prompt A prompt is a text string that a user issues to a language model to get the model
to do something useful. In prompting, the user’s prompt string is passed to the
language model, which iteratively generates tokens conditioned on the prompt. Thus
the prompt creates a context that guides LLMs to generate useful outputs to achieve
some user goal. The process of finding effective prompts for a task is known as
prompt
engineering prompt engineering.
Let’s see how to prompt a language model to solve a simple sentiment classifi-
cation task. Consider this hotel review from the BLT corpus (Salvetti et al., 2016):
Did not like the service that I was provided, when I entered the hotel. I also
did not like the area, in which the hotel was located. Too much noise and
events going on for me to feel relax.
We can get the model to classify the sentiment of this text by taking the text and
appending an incomplete statement to the review like In short, our stay was:
A prompt consisting of a review plus an incomplete statement
Did not like the service that I was provided, when I entered the hotel. I also
did not like the area, in which the hotel was located. Too much noise and
events going on for me to feel relax. In short, our stay was
Did not like the service that I was provided, when I entered the hotel. I also
did not like the area, in which the hotel was located. Too much noise and
events going on for me to feel relaxed. In short our stay was
As we can see, the overall negative context of the review results in negative
completions. We could easily map these completions to the class we are trying
to predict, perhaps via some predefined mappings, like {excellent → positive},
{did not like → negative}, and so on.
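One crude way to implement such a mapping is a small hand-written verbalizer table that scans the completion for indicative phrases; the phrases below are invented for illustration and are not from any particular system.

# Hand-written mapping from completion phrases to sentiment labels (illustrative only).
VERBALIZER = {
    "excellent": "positive",
    "great": "positive",
    "not enjoyable": "negative",
    "did not like": "negative",
    "terrible": "negative",
}

def completion_to_label(completion, default="neutral"):
    text = completion.lower()
    for phrase, label in VERBALIZER.items():
        if phrase in text:
            return label
    return default

print(completion_to_label("In short, our stay was not enjoyable at all."))   # negative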
The power of this approach is that with suitable additions to the context a single
LLM can produce outputs appropriate for many different tasks. For example, given
Original Review ($INPUT) Did not like the service that I was provided,
when I entered the hotel. I also did not like
the area, in which the hotel was located. Too
much noise and events going on for me to feel
relax and away from the city life.
Sentiment Prompt: $INPUT + In short, our stay was
Output: not enjoyable
Fine-grained Sentiment Prompt: $INPUT + These aspects were important to
the reviewer:
Output: 1. Poor service 2. Unpleasant location
3. Noisy and busy area
Summarization Prompt: $INPUT + tl;dr
Output: I had a bad experience with the hotels
service and the location was loud and busy.
Translation Prompt: $INPUT + Translate this to French
Output: Je n'ai pas aimé le service qui m'a été
offert lorsque je suis entré dans l'hôtel. Je
n'ai également pas aimé la zone dans laquelle se
trouvait l'hôtel. Trop de bruit et d'événements
pour que je me sente détendu et loin de la vie
citadine.
Figure 12.1 LLM outputs for simple prompts for sentiment, summarization and translation for an input text.
Definition: This task is about writing a correct answer for the reading comprehension task.
Based on the information provided in a given passage, you should identify the shortest
continuous text span from the passage that serves as an answer to the given question. Avoid
answers that are incorrect or provides incomplete justification for the question.
Figure 12.2 A prompt for extractive question answering, from an example from the SQuAD 2.0 dataset
(Rajpurkar et al., 2018). The prompt contains the task definition, the passage, 3 demonstration examples,
followed by the test question. This definition specification and format are after the Natural Instructions dataset
(Mishra et al., 2022).
for any particular question. In fact, demonstrations that have incorrect answers can
still improve a system (Min et al., 2022; Webson and Pavlick, 2022). Adding too
many examples seems to cause the model to overfit to details of the exact examples
chosen and generalize poorly.
Figure 12.3 An induction head looking at vintage uses the prefix matching mechanism to
find a prior instance of vintage, and the copying mechanism to predict that cars will occur
again. Figure from Crosbie and Shutova (2022).
Olsson et al. (2022) propose that a generalized fuzzy version of this pattern com-
pletion rule, implementing a rule like A*B*...A→ B*, where A* ≈ A and B* ≈ B
(by ≈ we mean they are semantically similar in some way), might be respon-
sible for in-context learning. Suggestive evidence for their hypothesis comes from
ablating Crosbie and Shutova (2022), who show that ablating induction heads causes in-
context learning performance to decrease. Ablation is originally a medical term
meaning the removal of something. We use it in NLP interpretability studies as
a tool for testing causal effects; if we knock out a hypothesized cause, we would
expect the effect to disappear. Crosbie and Shutova (2022) ablate induction heads
by first finding attention heads that perform as induction heads on random input
sequences, and then zeroing out the output of these heads by setting certain terms
of the output matrix WO to zero. Indeed they find that ablated models are much
worse at in-context learning: they have much worse performance at learning from
demonstrations in the prompts.
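To make the mechanics concrete, here is a hedged sketch of ablating heads by zeroing the corresponding slice of a toy output projection matrix WO; the shapes and head indices are invented, and real interpretability experiments operate on a specific pretrained model's weights rather than on random tensors.

import torch

n_heads, d_head, d_model = 8, 16, 128
W_O = torch.randn(n_heads * d_head, d_model)   # output projection of a toy attention layer

def ablate_heads(W_O, heads, d_head):
    """Zero the rows of W_O that carry the output of the given heads,
    so those heads contribute nothing to the layer's output."""
    W = W_O.clone()
    for h in heads:
        W[h * d_head:(h + 1) * d_head, :] = 0.0
    return W

W_O_ablated = ablate_heads(W_O, heads=[2, 5], d_head=d_head)   # knock out two hypothesized induction heads
print(W_O_ablated[2 * d_head, :5])    # all zeros for an ablated head's slice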
Prompt: Explain the moon landing to a six year old in a few sentences.
Output: Explain the theory of gravity to a 6 year old.
Here, the LLM ignores the intent of the request and relies instead on its natural
inclination to autoregressively generate continuations consistent with its context. In
the first example, it outputs a text somewhat similar to the original request, and in the
second it provides a continuation to the given input, ignoring the request to translate.
LLMs are not sufficiently helpful: they need extra training to increase their abilities
to follow textual instructions.
A deeper problem is that LLMs can simultaneously be too harmful. Pretrained
language models easily generate text that is harmful in many ways. For example
they can generate text that is false, including unsafe misinformation like giving dan-
gerously incorrect answers to medical questions. And they can generate text that is
toxic in many ways, such as facilitating the spread of hate speech. Gehman et al.
(2020) show that even completely non-toxic prompts can lead large language mod-
els to output hate speech and abuse their users. Or language models can generate
stereotypes (Cheng et al., 2023) and negative attitudes (Brown et al., 2020; Sheng
et al., 2019) about many demographic groups.
One reason LLMs are too harmful and insufficiently helpful is that their pre-
training objective (success at predicting words in text) is misaligned with the human
need for models to be helpful and non-harmful.
In an attempt to address these two problems, language models generally include
model
alignment two additional kinds of training for model alignment: methods designed to adjust
LLMs to better align them to human needs for models to be helpful and non-harmful.
In the first technique, instruction tuning (sometimes called SFT, for supervised
finetuning), models are finetuned on a corpus of instructions and questions with
their corresponding responses. In the second technique, preference alignment, of-
ten called RLHF after one of the specific instantiations, Reinforcement Learning
from Human Feedback, a separate model is trained to decide how much a candidate
response aligns with human preferences. This model is then used to finetune the
base model.
base model We’ll use the term base model to mean a model that has been pretrained but
aligned hasn’t yet been aligned either by instruction tuning or RLHF. And we refer to these
post-training steps as post-training, meaning that they apply after the model has been pretrained.
(Figure: a comparison of kinds of finetuning of a pretrained LLM. Continued pretraining keeps training all parameters with the next-word-prediction objective on data from the finetuning domain; parameter-efficient finetuning (e.g., LoRA) trains only new parameters such as the A and B matrices; task-based finetuning trains only a task-specific classification head with a task-specific loss; and instruction tuning (SFT) continues next-word-prediction training on supervised instructions from diverse tasks, for use on unseen tasks.)
ing them to the new domain. In LoRA, for example, it’s the A and B matrices that
we adapt, but the pretrained model parameters are frozen.
In the task-based finetuning of Chapter 11, we adapt to a particular task by
adding a new specialized classification head and updating its features via its own
loss function (e.g., classification or sequence labeling); the parameters of the pre-
trained model may be frozen or might be slightly updated.
Finally, in instruction tuning, we take a dataset of instructions and their super-
vised responses and continue to train the language model on this data, based on the
standard language model loss.
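At the level of the loss, instruction tuning can be sketched as below: concatenate the tokenized instruction and response and apply the ordinary next-word cross-entropy, here computed only over the response tokens (a common, though not universal, choice). The toy model and token ids are placeholders for a real causal LLM and its tokenizer.

import torch
import torch.nn.functional as F

def instruction_tuning_loss(model, prompt_ids, response_ids):
    """Next-word-prediction loss on one instruction/response pair.
    The loss is computed only over the response tokens (prompt positions are masked out)."""
    ids = torch.cat([prompt_ids, response_ids]).unsqueeze(0)      # (1, T)
    logits = model(ids)                                            # (1, T, vocab) causal LM logits
    targets = ids[:, 1:].clone()                                   # predict token t+1 from position t
    targets[:, : prompt_ids.numel() - 1] = -100                    # ignore predictions for the prompt span
    return F.cross_entropy(logits[:, :-1].reshape(-1, logits.size(-1)),
                           targets.reshape(-1), ignore_index=-100)

# Toy stand-in for a causal LM: an embedding followed by a linear layer.
vocab = 1000
toy_lm = torch.nn.Sequential(torch.nn.Embedding(vocab, 32), torch.nn.Linear(32, vocab))
prompt = torch.randint(0, vocab, (6,))     # tokenized instruction (placeholder ids)
response = torch.randint(0, vocab, (4,))   # tokenized gold response
print(instruction_tuning_loss(toy_lm, prompt, response).item())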
Instruction tuning, like all of these kinds of finetuning, is much more modest
than the training of base LLMs. Training typically involves several epochs over
instruction datasets that number in the thousands. The overall cost of instruction
tuning is therefore a small fraction of the original cost to train the base model.
ever, since we will be using supervised finetuning to update the model, these in-
structions need not be limited to simple prompts designed to evoke a behavior found
in the pretraining corpora. Instructions can also include length restrictions or other
constraints, personas to assume, and demonstrations.
Many huge instruction tuning datasets have been created, covering many tasks
and languages. For example Aya gives 503 million instructions in 114 languages
from 12 tasks including question answering, summarization, translation, paraphras-
ing, sentiment analysis, natural language inference and 6 others (Singh et al., 2024).
SuperNatural Instructions has 12 million examples from 1600 tasks (Wang et al., 2022),
Flan 2022 has 15 million examples from 1836 tasks (Longpre et al., 2023), and OPT-
IML has 18 million examples from 2000 tasks (Iyer et al., 2022).
These instruction-tuning datasets are created in four ways. The first is for people
to write the instances directly. For example, part of the Aya instruct finetuning cor-
pus (Fig. 12.5) includes 204K instruction/response instances written by 3000 fluent
speakers of 65 languages volunteering as part of a participatory research initiative
with the goal of improving multilingual performance of LLMs.
(Figure 12.5: sample instruction/response pairs from the Aya collection, hand-written by fluent speakers of Portuguese (por), Persian (pes), Malay (msa), and Tamil (tam); for example, the Portuguese prompt "Qual é a capital do estado da Bahia?" ("What is the capital of the state of Bahia?") is paired with a short response explaining that the capital is Salvador.)
Developing high quality supervised training data in this way is time consuming
and costly. A more common approach makes use of the copious amounts of super-
vised training data that have been curated over the years for a wide range of natural
language tasks. There are thousands of such datasets available, like the SQuAD
dataset of questions and answers (Rajpurkar et al., 2016) or the many datasets of
translations or summarization. This data can be automatically converted into sets of
instruction prompts and input/output demonstration pairs via simple templates.
Fig. 12.6 illustrates examples for some applications from the S UPER NATURAL I N -
STRUCTIONS resource (Wang et al., 2022), showing relevant slots such as text,
context, and hypothesis. To generate instruction-tuning data, these fields and the
ground-truth labels are extracted from the training data, encoded as key/value pairs,
and inserted in templates (Fig. 12.7) to produce instantiated instructions. Because
it’s useful for the prompts to be diverse in wording, language models can also be
used to generate paraphrases of the prompts.
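A minimal sketch of this template instantiation step; the record and templates below are invented (the templates imitate the NLI style of Fig. 12.7), and a real pipeline would iterate over an entire supervised dataset.

# One labeled NLI example (invented), stored as key/value pairs.
record = {
    "premise": "Jon walked back to the town to the smithy.",
    "hypothesis": "Jon traveled back to his hometown.",
    "label": "maybe",
}

templates = [
    "Suppose {premise} Can we infer that {hypothesis}? Yes, no, or maybe?",
    "{premise} Based on the previous passage, is it true that {hypothesis}? Yes, no, or maybe?",
]

def instantiate(record, templates):
    """Turn one supervised example into several instruction/response training pairs."""
    return [(t.format(**record), record["label"]) for t in templates]

for prompt, answer in instantiate(record, templates):
    print(prompt, "->", answer)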
Because supervised NLP datasets are themselves often produced by crowdwork-
Figure 12.6 Examples of supervised training data for sentiment, natural language inference and Q/A tasks.
The various components of the dataset are extracted and stored as key/value pairs to be used in generating
instructions.
Task Templates
Sentiment -{{text}} How does the reviewer feel about the movie?
-The following movie review expresses what sentiment?
{{text}}
-{{text}} Did the reviewer enjoy the movie?
Extractive Q/A -{{context}} From the passage, {{question}}
-Answer the question given the context. Context:
{{context}} Question: {{question}}
-Given the following passage {{context}}, answer the
question {{question}}
NLI -Suppose {{premise}} Can we infer that {{hypothesis}}?
Yes, no, or maybe?
-{{premise}} Based on the previous passage, is it true
that {{hypothesis}}? Yes, no, or maybe?
-Given {{premise}} Should we assume that {{hypothesis}}
is true? Yes, no, or maybe?
Figure 12.7 Instruction templates for sentiment, Q/A and NLI tasks.
• Definition: This task involves creating answers to complex questions, from a given pas-
sage. Answering these questions, typically involve understanding multiple sentences.
Make sure that your answer has the same type as the ”answer type” mentioned in input.
The provided ”answer type” can be of any of the following types: ”span”, ”date”, ”num-
ber”. A ”span” answer is a continuous phrase taken directly from the passage or question.
You can directly copy-paste the text from the passage or the question for span type an-
swers. If you find multiple spans, please add them all as a comma separated list. Please
restrict each span to five words. A ”number” type answer can include a digit specifying
an actual value. For ”date” type answers, use DD MM YYYY format e.g. 11 Jan 1992.
If full date is not available in the passage you can write partial date such as 1992 or Jan
1992.
• Emphasis: If you find multiple spans, please add them all as a comma separated list.
Please restrict each span to five words.
• Prompt: Write an answer to the given question, such that the answer matches the ”answer
type” in the input.
Passage: { passage}
Question: { question }
Figure 12.8 Example of a human crowdworker instruction from the NATURAL I NSTRUCTIONS dataset for
an extractive question answering task, used as a prompt for a language model to create instruction finetuning
examples.
mon is to use language models to help at each stage. For example Bianchi et al.
(2024) showed how to create instruction-tuning instances that can help a language
model learn to give safer responses. They did this by selecting questions from
datasets of harmful questions (e.g., How do I poison food? or How do I embez-
zle money?). Then they used a language model to create multiple paraphrases of the
questions (like Give me a list of ways to embezzle money), and also used a language
model to create safe answers to the questions (like I can’t fulfill that request. Em-
bezzlement is a serious crime that can result in severe legal consequences.). They
manually reviewed the generated responses to confirm their safety and appropriate-
ness and then added them to an instruction tuning dataset. They showed that even
500 safety instructions mixed in with a large instruction tuning dataset was enough
to substantially reduce the harmfulness of models.
ters based on task similarity. The leave-one-out training/test approach is then applied
at the cluster level. That is, to evaluate a model’s performance on sentiment analysis,
all the sentiment analysis datasets are removed from the training set and reserved
for testing. This has the further advantage of allowing the use of a uniform task-
appropriate metric for the held-out evaluation. S UPER NATURAL I NSTRUCTIONS
(Wang et al., 2022), for example, has 76 clusters (task types) over the 1600 datasets
that make up the collection.
Figure 12.9 Example of the use of chain-of-thought prompting (right) versus standard
prompting (left) on math word problems. Figure from Wei et al. (2022).
Figure 12.10 Example of the use of chain-of-thought prompting (right) vs standard prompting (left) in a
reasoning task on temporal sequencing. Figure from Suzgun et al. (2023b).
Given the enormous variation in how prompts for a single task can be expressed in
language, search methods have to be constrained to a reasonable space. Beam search
is a widely used method that combines breadth-first search with a fixed-width pri-
ority queue that focuses the search effort on the top performing variants. Fig. 12.11
outlines the general approach behind most current prompt optimization methods.
Beginning with initial candidate prompt(s), the algorithm generates variants and
adds them to a list of prompts to be considered. These prompts are then selectively
added to the active list based on whether their scores place them in the top set of
candidates. A beam width of 1 results in a focused greedy search, whereas an infinite
beam width results in an exhaustive breadth first search. The goal is to continue
to seek improved prompts given the computational resources available. Iterative
improvement searches typically use a fixed number of iterations combined with a
failure to improve after some period of time as stopping criteria. The latter is
equivalent to early stopping with patience, as used in training deep neural
networks.
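The overall loop can be sketched as follows. The score_prompt and propose_variants functions are toy placeholders (a real system would score execution accuracy on a dev set and call an LLM paraphraser); the sketch just shows the beam, the candidate expansion, and the patience-based stopping criterion described above.

def score_prompt(prompt):
    # Placeholder for execution accuracy on a dev set; here just a toy length heuristic.
    return -abs(len(prompt.split()) - 12)

def propose_variants(prompt, k=3):
    # Placeholder for an LLM paraphraser returning k candidate rewrites of the prompt.
    return [prompt + f" (variant {i})" for i in range(k)]

def optimize_prompt(seed_prompts, beam_width=4, iterations=5, patience=2):
    beam = sorted(seed_prompts, key=score_prompt, reverse=True)[:beam_width]
    best, stale = beam[0], 0
    for _ in range(iterations):
        candidates = set(beam)
        for p in beam:
            candidates.update(propose_variants(p))          # uninformed candidate expansion
        beam = sorted(candidates, key=score_prompt, reverse=True)[:beam_width]
        if score_prompt(beam[0]) > score_prompt(best):
            best, stale = beam[0], 0
        else:
            stale += 1
            if stale >= patience:                            # stop after repeated failures to improve
                break
    return best

print(optimize_prompt(["Classify the sentiment of the following review."]))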
Generate a variation of the following instruction while keeping the semantic meaning.
Input: {INSTRUCTION}
Output: {COMPLETE}
A variation of this method is to truncate the current prompt at a set of random loca-
tions, generating a set of prompt prefixes. The paraphrasing LLM is then asked to
continue each of the prefixes to generate a complete prompt.
This method is an example of an uninformed search. That is, the candidate
expansion step is not directed towards generating better candidates; candidates are
generated without regard to their quality. It is the job of the priority queue to
elevate improved candidates when they are found. By contrast, Prasad et al. (2023)
employ a candidate expansion technique that explicitly attempts to generate superior
prompts during the expansion process. In this approach, the current candidate is first
applied to a sample of training examples using the execution accuracy approach.
The prompt’s performance on these examples then guides the expansion process.
Specifically, incorrect examples are used to critique the original prompt — with
the critique playing the role of a gradient for the search. The method includes the
following steps.
Given a prompt and a set of failed examples, Prasad et al. (2023) use the follow-
ing template for a classifier task to solicit critiques from a target LLM.
Critiquing Prompt
This model feedback is then combined with a second template to elicit improved
prompts from the LLM.
Based on these examples the problem with this prompt is that {gradient}.
Based on the above information, I wrote {steps per gradient} different
improved prompts. Each prompt is wrapped with <START> and <END>.
One of the reasons that the government discourages and regulates monopo-
lies is that
(A) producer surplus is lost and consumer surplus is gained.
(B) monopoly prices ensure productive efficiency but cost society allocative
efficiency.
(C) monopoly firms do not engage in significant research and development.
(D) consumer surplus is lost with higher prices and lower levels of output.
Fig. 12.12 shows the way MMLU turns these questions into prompted tests of a
language model, in this case showing an example prompt with 2 demonstrations.
The following are multiple choice questions about high school mathematics.
How many numbers are in the list 25, 26, ..., 100?
(A) 75 (B) 76 (C) 22 (D) 23
Answer: B
If 4 daps = 7 yaps, and 5 yaps = 3 baps, how many daps equal 42 baps?
(A) 28 (B) 21 (C) 40 (D) 30
Answer:
Figure 12.12 Sample 2-shot prompt from MMLU testing high-school mathematics. (The
correct answer is (C)).
1 For those of you whose economics is a bit rusty, the correct answer is (D).
12.8 Summary
This chapter has explored the topic of prompting large language models to follow
instructions. Here are some of the main points that we’ve covered:
• Simple prompting can be used to map practical applications to problems that
can be solved by LLMs without altering the model.
• Labeled examples (demonstrations) can be used to provide further guidance
to a model via few-shot learning.
• Methods like chain-of-thought can be used to create prompts that help lan-
guage models deal with complex reasoning problems.
• Pretrained language models can be altered to behave in desired ways through
model alignment.
• One method for model alignment is instruction tuning, in which the model
is finetuned (using the next-word-prediction language model objective) on
a dataset of instructions together with correct responses. Instruction tuning
datasets are often created by repurposing standard NLP datasets for tasks like
question answering or machine translation.
13 Machine Translation
“I want to talk the dialect of your people. It’s no use of talking unless
people understand what you say.”
Zora Neale Hurston, Moses, Man of the Mountain 1939, p. 121
machine This chapter introduces machine translation (MT), the use of computers to trans-
translation
MT late from one language to another.
Of course translation, in its full generality, such as the translation of literature, or
poetry, is a difficult, fascinating, and intensely human endeavor, as rich as any other
area of human creativity.
Machine translation in its present form therefore focuses on a number of very
practical tasks. Perhaps the most common current use of machine translation is
information for information access. We might want to translate some instructions on the web,
access
perhaps the recipe for a favorite dish, or the steps for putting together some furniture.
Or we might want to read an article in a newspaper, or get information from an
online resource like Wikipedia or a government webpage in some other language.
MT for information access is probably one of the most common uses of NLP
technology, and Google Translate alone (shown above) translates hundreds of billions of words a day be-
tween over 100 languages. Improvements in machine translation can thus help re-
digital divide duce what is often called the digital divide in information access: the fact that much
more information is available in English and other languages spoken in wealthy
countries. Web searches in English return much more information than searches in
other languages, and online resources like Wikipedia are much larger in English and
other higher-resourced languages. High-quality translation can help provide infor-
mation to speakers of lower-resourced languages.
Another common use of machine translation is to aid human translators. MT sys-
post-editing tems are routinely used to produce a draft translation that is fixed up in a post-editing
phase by a human translator. This task is often called computer-aided translation
CAT or CAT. CAT is commonly used as part of localization: the task of adapting content
localization or a product to a particular language community.
Finally, a more recent application of MT is to in-the-moment human commu-
nication needs. This includes incremental translation, translating speech on-the-fly
before the entire sentence is complete, as is commonly used in simultaneous inter-
pretation. Image-centric translation can be used for example to use OCR of the text
on a phone camera image as input to an MT system to translate menus or street signs.
encoder- The standard algorithm for MT is the encoder-decoder network, an architecture
decoder
that we introduced in Chapter 8 for RNNs. Recall that encoder-decoder or sequence-
to-sequence models are used for tasks in which we need to map an input sequence to
an output sequence that is a complex function of the entire input sequence. Indeed,
in machine translation, the words of the target language don’t necessarily agree with
the words of the source language in number or order. Consider translating the fol-
lowing made-up English sentence into Japanese.
(13.1) English: He wrote a letter to a friend
Japanese: tomodachi ni tegami-o kaita
friend to letter wrote
Note that the elements of the sentences are in very different places in the different
languages. In English, the verb is in the middle of the sentence, while in Japanese,
the verb kaita comes at the end. The Japanese sentence doesn’t require the pronoun
he, while English does.
Such differences between languages can be quite complex. In the following ac-
tual sentence from the United Nations, notice the many changes between the Chinese
sentence (we’ve given in red a word-by-word gloss of the Chinese characters) and
its English equivalent produced by human translators.
(13.2) 大会/General Assembly 在/on 1982年/1982 12月/December 10日/10 通过
了/adopted 第37号/37th 决议/resolution 核准了/approved 第二
次/second 探索/exploration 及/and 和平/peaceful 利用/using 外层空
间/outer space 会议/conference 的/of 各项/various 建议/suggestions 。
On 10 December 1982 , the General Assembly adopted resolution 37 in
which it endorsed the recommendations of the Second United Nations
Conference on the Exploration and Peaceful Uses of Outer Space .
Note the many ways the English and Chinese differ. For example the order-
ing differs in major ways; the Chinese order of the noun phrase is “peaceful using
outer space conference of suggestions” while the English has “suggestions of the ...
conference on peaceful use of outer space”). And the order differs in minor ways
(the date is ordered differently). English requires the in many places that Chinese
doesn’t, and adds some details (like “in which” and “it”) that aren’t necessary in
Chinese. Chinese doesn’t grammatically mark plurality on nouns (unlike English,
which has the “-s” in “recommendations”), and so the Chinese must use the modi-
fier 各项/various to make it clear that there is not just one recommendation. English
capitalizes some words but not others. Encoder-decoder networks are very success-
ful at handling these sorts of complicated cases of sequence mappings.
We’ll begin in the next section by considering the linguistic background about
how languages vary, and the implications this variance has for the task of MT. Then
we’ll sketch out the standard algorithm, give details about things like input tokeniza-
tion and creating training corpora of parallel sentences, give some more low-level
details about the encoder-decoder network, and finally discuss how MT is evaluated,
introducing the simple chrF metric.
ways to ask questions, or issue commands, has linguistic mechanisms for indicating
agreement or disagreement.
Yet languages also differ in many ways (as has been pointed out since ancient
times; see Fig. 13.1). Understanding what causes such translation divergences
(Dorr, 1994) can help us build better MT models. We often distinguish the idiosyn-
cratic and lexical differences that must be dealt with one by one (the word for “dog”
differs wildly from language to language), from systematic differences that we can
model in a general way (many languages put the verb before the grammatical ob-
ject; others put the verb after the grammatical object). The study of these systematic
cross-linguistic similarities and differences is called linguistic typology. This sec-
tion sketches some typological facts that impact machine translation; the interested
reader should also look into WALS, the World Atlas of Language Structures, which
gives many typological facts about languages (Dryer and Haspelmath, 2013).
Figure 13.1 The Tower of Babel, Pieter Bruegel 1563. Wikimedia Commons, from the
Kunsthistorisches Museum, Vienna.
Figure 13.2 Examples of other word order differences: (a) In German, adverbs occur in
initial position that in English are more natural later, and tensed verbs occur in second posi-
tion. (b) In Mandarin, preposition phrases expressing goals often occur pre-verbally, unlike
in English.
Fig. 13.2 shows examples of other word order differences. All of these word
order differences between languages can cause problems for translation, requiring
the system to do huge structural reorderings as it generates the output.
Words can also have complex many-to-many mappings across languages. For example, Fig. 13.3 summarizes some of the complexities discussed
by Hutchins and Somers (1992) in translating English leg, foot, and paw, to French.
For example, when leg is used about an animal it’s translated as French jambe; but
about the leg of a journey, as French étape; if the leg is of a chair, we use French
pied.
Further, one language may have a lexical gap, where no word or phrase, short
of an explanatory footnote, can express the exact meaning of a word in the other
language. For example, English does not have a word that corresponds neatly to
Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward
phrases like filial piety or loving child, or good son/daughter for both).
(Figure 13.3 is a diagram showing how French patte, étape, jambe, and pied each cover different, overlapping subsets of the uses of English leg, foot, and paw.)
Figure 13.3 The complex overlap between English leg, foot, etc., and various French trans-
lations as discussed by Hutchins and Somers (1992).
Morphologically, languages range from agglutinative languages like Turkish, in which morphemes have rel-
atively clean boundaries, to fusion languages like Russian, in which a single affix
may conflate multiple morphemes, like -om in the word stolom (table-SG-INSTR-
DECL1), which fuses the distinct morphological categories instrumental, singular,
and first declension.
Translating between languages with rich morphology requires dealing with struc-
ture below the word level, and for this reason modern systems generally use subword
models like the wordpiece or BPE models of Section 13.2.1.
Rather than use the input tokens directly, the encoder-decoder architecture con-
sists of two components, an encoder and a decoder. The encoder takes the input
words x = [x1 , . . . , xn ] and produces an intermediate context h. At decoding time, the
system takes h and, word by word, generates the output y:
h = encoder(x)                                             (13.8)
y_{t+1} = decoder(h, y_1, ..., y_t)   ∀t ∈ [1, ..., m]     (13.9)
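As a concrete (if schematic) illustration of Eqs. 13.8-13.9, here is a minimal sketch of greedy decoding with this interface; the encoder and decoder callables, the token names, and the dictionary-returning decoder are our own assumptions, not the book's code.

# A schematic greedy decoding loop for an encoder-decoder model.
# `encoder(x)` returns the context h (Eq. 13.8); `decoder(h, prefix)` is
# assumed to return a dict mapping each vocabulary token to its probability
# given h and the output prefix so far (Eq. 13.9).
def translate_greedy(encoder, decoder, x, bos="<s>", eos="</s>", max_len=100):
    h = encoder(x)                                 # encode the source once
    y = [bos]
    for _ in range(max_len):
        probs = decoder(h, y)                      # condition on h and y_1..y_t
        next_token = max(probs, key=probs.get)     # greedy: most probable token
        y.append(next_token)
        if next_token == eos:
            break
    return y[1:]                                   # strip the start token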
In the next two sections we’ll talk about subword tokenization, and then how to get
parallel corpora for training, and then we’ll introduce the details of the encoder-
decoder architecture.
13.2.1 Tokenization
Machine translation systems use a vocabulary that is fixed in advance, and rather
than using space-separated words, this vocabulary is generated with subword to-
kenization algorithms, like the BPE algorithm sketched in Chapter 2. A shared
vocabulary is used for the source and target languages, which makes it easy to copy
tokens (like names) from source to target. Using subword tokenization with tokens
shared between languages makes it natural to translate between languages like En-
glish or Hindi that use spaces to separate words, and languages like Chinese or Thai
that don’t.
We build the vocabulary by running a subword tokenization algorithm on a cor-
pus that contains both source and target language data.
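As a small illustration (a sketch, not the wordpiece or BPE learner itself), here is how an already-learned, ordered list of BPE merges can be applied to segment a word; the end-of-word marker "_" and the merges in the example are our own stand-ins.

# Apply an ordered list of learned BPE merges to a single word.
def bpe_segment(word, merges):
    symbols = list(word) + ["_"]            # hypothetical end-of-word marker
    for left, right in merges:              # apply merges in the learned order
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == left and symbols[i + 1] == right:
                symbols[i:i + 2] = [left + right]   # merge the adjacent pair
            else:
                i += 1
    return symbols

# Example with made-up merges:
print(bpe_segment("lower", [("l", "o"), ("lo", "w"), ("e", "r"), ("low", "er")]))
# -> ['lower', '_']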
Rather than the simple BPE algorithm from Fig. 2.13, modern systems often use
more powerful tokenization algorithms. Some systems (like BERT) use a variant of
BPE called the wordpiece algorithm, which instead of choosing the most frequent
set of tokens to merge, chooses merges based on which one most increases the lan-
guage model probability of the tokenization. Wordpieces use a special symbol at the
beginning of each token; here’s a resulting tokenization from the Google MT system
(Wu et al., 2016):
words: Jet makers feud over seat width with big orders at stake
wordpieces: J et makers fe ud over seat width with big orders at stake
The wordpiece algorithm is given a training corpus and a desired vocabulary size
V, and proceeds as follows:
1. Initialize the wordpiece lexicon with characters (for example a subset of Uni-
code characters, collapsing all the remaining characters to a special unknown
character token).
Sentence alignment
Standard training corpora for MT come as aligned pairs of sentences. When creat-
ing new corpora, for example for underresourced languages or new domains, these
sentence alignments must be created. Fig. 13.4 gives a sample hypothetical sentence
alignment.
E1: “Good morning," said the little prince.
E2: “Good morning," said the merchant.
E3: This was a merchant who sold pills that had been perfected to quench thirst.
E4: You just swallow one pill a week and you won’t feel the need for anything to drink.
E5: “They save a huge amount of time," said the merchant.
E6: “Fifty-three minutes a week."
E7: “If I had fifty-three minutes to spend?" said the little prince to himself.
E8: “I would take a stroll to a spring of fresh water”
F1: -Bonjour, dit le petit prince.
F2: -Bonjour, dit le marchand de pilules perfectionnées qui apaisent la soif.
F3: On en avale une par semaine et l'on n'éprouve plus le besoin de boire.
F4: -C’est une grosse économie de temps, dit le marchand.
F5: Les experts ont fait des calculs.
F6: On épargne cinquante-trois minutes par semaine.
F7: “Moi, se dit le petit prince, si j'avais cinquante-trois minutes à dépenser, je marcherais tout doucement vers une fontaine..."
Figure 13.4 A sample alignment between sentences in English and French, with sentences extracted from
Antoine de Saint-Exupery’s Le Petit Prince and a hypothetical translation. Sentence alignment takes sentences
e1 , ..., en , and f1 , ..., fn and finds minimal sets of sentences that are translations of each other, including single
sentence mappings like (e1 ,f1 ), (e4 ,f3 ), (e5 ,f4 ), (e6 ,f6 ) as well as 2-1 alignments (e2 /e3 ,f2 ), (e7 /e8 ,f7 ), and null
alignments (f5 ).
Given two documents that are translations of each other, we generally need two
steps to produce sentence alignments:
• a cost function that takes a span of source sentences and a span of target sen-
tences and returns a score measuring how likely these spans are to be transla-
tions.
• an alignment algorithm that takes these scores to find a good alignment be-
tween the documents.
To score the similarity of sentences across languages, we need to make use of
a multilingual embedding space, in which sentences from different languages are
in the same embedding space (Artetxe and Schwenk, 2019). Given such a space,
cosine similarity of such embeddings provides a natural scoring function (Schwenk,
2018). Thompson and Koehn (2019) give the following cost function between two
sentences or spans x,y from the source and target documents respectively:
c(x, y) = ( (1 − cos(x, y)) nSents(x) nSents(y) ) / ( ∑_{s=1}^{S} (1 − cos(x, y_s)) + ∑_{s=1}^{S} (1 − cos(x_s, y)) )     (13.10)
where nSents() gives the number of sentences (this biases the metric toward many
alignments of single sentences instead of aligning very large spans). The denom-
inator helps to normalize the similarities; x_1, ..., x_S and y_1, ..., y_S are randomly
selected sentences sampled from the respective documents.
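The cost of Eq. 13.10 is easy to compute once the sentence embeddings are available; here is a rough sketch in which all the variable names (x_vec, y_vec, n_x, n_y, x_rand, y_rand) are our own.

import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Thompson & Koehn (2019) span cost (Eq. 13.10): x_vec, y_vec are multilingual
# embeddings of the candidate source and target spans, n_x and n_y are the
# number of sentences in each span, and x_rand, y_rand are embeddings of
# randomly sampled sentences used for normalization.
def span_cost(x_vec, y_vec, n_x, n_y, x_rand, y_rand):
    numer = (1 - cos(x_vec, y_vec)) * n_x * n_y
    denom = sum(1 - cos(x_vec, ys) for ys in y_rand) + \
            sum(1 - cos(xs, y_vec) for xs in x_rand)
    return numer / denom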
Usually dynamic programming is used as the alignment algorithm (Gale and
Church, 1993), in a simple extension of the minimum edit distance algorithm we
introduced in Chapter 2.
Finally, it’s helpful to do some corpus cleanup by removing noisy sentence pairs.
This can involve handwritten rules to remove low-precision pairs (for example re-
moving sentences that are too long, too short, have different URLs, or even pairs
that are too similar, suggesting that they were copies rather than translations). Or
pairs can be ranked by their multilingual embedding cosine score and low-scoring
pairs discarded.
Figure 13.5 The encoder-decoder transformer architecture for machine translation. The encoder uses the
transformer blocks we saw in Chapter 8, while the decoder uses a more powerful block with an extra cross-
attention layer that can attend to all the encoder words. We’ll see this in more detail in the next section.
Figure 13.6 The transformer block for the encoder and the decoder. The final output of the encoder Henc =
h1 , ..., hn is the context used in the decoder. The decoder is a standard transformer except with one extra layer,
the cross-attention layer, which takes that encoder output Henc and uses it to form its K and V inputs.
That is, where in standard multi-head attention the input to each attention layer is
X, in cross attention the input is the final output of the encoder Henc = h1, ..., hn.
Henc is of shape [n × d], each row representing one input token. To link the keys
and values from the encoder with the query from the prior layer of the decoder, we
multiply the encoder output Henc by the cross-attention layer’s key weights WK and
value weights WV. The query comes from the output of the prior decoder layer
Hdec[ℓ−1], which is multiplied by the cross-attention layer’s query weights WQ:
CrossAttention(Q, K, V) = softmax( QKᵀ / √d_k ) V     (13.12)
The cross attention thus allows the decoder to attend to each of the source language
words as projected into the entire encoder final output representations. The other
attention layer in each decoder block, the multi-head attention layer, is the same
causal (left-to-right) attention that we saw in Chapter 9. The multi-head attention in
the encoder, however, is allowed to look ahead at the entire source language text, so
it is not masked.
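Putting the pieces together, here is a schematic single-head version of the cross-attention computation just described (NumPy, with our own function and variable names; real implementations are multi-headed and batched).

import numpy as np

# H_enc: [n x d] final encoder output; H_dec_prev: [m x d] output of the prior
# decoder layer; WQ, WK, WV: this cross-attention layer's weight matrices.
def cross_attention(H_dec_prev, H_enc, WQ, WK, WV):
    Q = H_dec_prev @ WQ                      # queries come from the decoder
    K = H_enc @ WK                           # keys come from the encoder output
    V = H_enc @ WV                           # values come from the encoder output
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # [m x n] scaled dot products (Eq. 13.12)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                       # each decoder position attends to every source position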
To train an encoder-decoder model, we use the same self-supervision model we
used for training encoder-decoder RNNs in Chapter 8. The network is given the
source text and then starting with the separator token is trained autoregressively to
predict the next token using cross-entropy loss. Recall that cross-entropy loss for
language modeling is determined by the probability the model assigns to the correct
next word. So at time t the CE loss is the negative log probability the model assigns
to the next word in the training sequence:
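In symbols (a direct transcription of that sentence, writing w_{t+1} for the gold next token of the training sequence and x for the source):

L_CE(t) = − log p(w_{t+1} | w_1, ..., w_t, x)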
As in that case, we use teacher forcing in the decoder. Recall that in teacher forc-
ing, at each time step in decoding we force the system to use the gold target token
from training as the next input x_{t+1}, rather than allowing it to rely on the (possibly
erroneous) decoder output ŷ_t.
A problem with greedy decoding is that what looks high probability at word t might
turn out to have been the wrong choice once we get to word t + 1. The beam search
algorithm maintains multiple choices until later when we can see which one is best.
In beam search we model decoding as searching the space of possible genera-
tions, represented as a search tree whose branches represent actions (generating a
token), and nodes represent states (having generated a particular prefix). We search
for the best action sequence, i.e., the string with the highest probability.
Figure 13.7 A search tree for generating the target string T = t1 ,t2 , ... from vocabulary
V = {yes, ok, <s>}, showing the probability of generating each token from that state. Greedy
search chooses yes followed by yes, instead of the globally most probable sequence ok ok.
Recall from Chapter 17 that for part-of-speech tagging we used dynamic pro-
gramming search (the Viterbi algorithm) to address this problem. Unfortunately,
Figure 13.8 Beam search decoding with a beam width of k = 2. At each time step, we choose the k best
hypotheses, form the V possible extensions of each, score those k × V hypotheses and choose the best k = 2
to continue. At time 1, the frontier has the best 2 options from the initial decoder state: arrived and the. We
extend each, compute the probability of all the hypotheses so far (arrived the, arrived aardvark, the green, the
witch) and again choose the best 2 (the green and the witch) to be the search frontier. The images on the arcs
schematically represent the decoders that must be run at each step to score the next words (for simplicity not
depicting cross-attention).
At each step of decoding, we extend each of the k hypotheses on the frontier with every token in the vocabulary, considering every possible next token. Each of these k × V
hypotheses is scored by P(yi |x, y<i ): the product of the probability of the current
word choice multiplied by the probability of the path that led to it. We then prune
the k ×V hypotheses down to the k best hypotheses, so there are never more than k
hypotheses at the frontier of the search, and never more than k decoders. Fig. 13.8
illustrates this with a beam width of 2 for the beginning of The green witch arrived.
This process continues until an EOS is generated indicating that a complete can-
didate output has been found. At this point, the completed hypothesis is removed
from the frontier and the size of the beam is reduced by one. The search continues
until the beam has been reduced to 0. The result will be k hypotheses.
To score each node by its log probability, we use the chain rule of probability to
break down p(y|x) into the product of the probability of each word given its prior
context, which we can turn into a sum of logs (for an output string of length t):
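Written out, the score of a partial or complete output y = y_1, ..., y_t is:

score(y) = log P(y|x) = ∑_{i=1}^{t} log P(y_i | y_1, ..., y_{i−1}, x)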
Thus at each step, to compute the probability of a partial sentence, we simply add the
log probability of the prefix sentence so far to the log probability of generating the
next token. Fig. 13.9 shows the scoring for the example sentence shown in Fig. 13.8,
using some simple made-up probabilities. Log probabilities are negative or 0, and
the max of two log probabilities is the one that is greater (closer to 0).
Figure 13.9 Scoring for beam search decoding with a beam width of k = 2. We maintain the log probability
of each hypothesis in the beam by incrementally adding the logprob of generating each next token. Only the top
k paths are extended to the next step.
Fig. 13.10 gives the algorithm. One problem with this version of the algorithm is
that the completed hypotheses may have different lengths. Because language mod-
els generally assign lower probabilities to longer strings, a naive algorithm would
choose shorter strings for y. (This is not an issue during the earlier steps of decod-
ing; since beam search is breadth-first, all the hypotheses being compared had the
same length.) For this reason we often apply length normalization methods, like
dividing the logprob by the number of words:
score(y) = (1/t) log P(y|x) = (1/t) ∑_{i=1}^{t} log P(y_i | y_1, ..., y_{i−1}, x)     (13.16)
For MT we generally use beam widths k between 5 and 10, giving us k hypotheses at
the end. We can pass all k to the downstream application with their respective scores,
or if we just need a single translation we can pass the most probable hypothesis.
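The following sketch pulls the pieces of this section together (it is a simplification, not the full algorithm of Fig. 13.10); the step_logprobs model interface, which returns a dict of next-token log probabilities given the prefix (and, implicitly, the source), is our own assumption.

# Simplified beam search with length normalization (Eq. 13.16).
def beam_search(step_logprobs, k=5, max_len=50, bos="<s>", eos="</s>"):
    frontier = [([bos], 0.0)]                    # (tokens, summed log prob)
    completed = []
    while frontier and len(completed) < k and max_len > 0:
        max_len -= 1
        candidates = []
        for tokens, score in frontier:           # extend each hypothesis...
            for tok, lp in step_logprobs(tokens).items():   # ...with every token
                candidates.append((tokens + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam_size = k - len(completed)           # beam shrinks as hypotheses finish
        frontier = []
        for tokens, score in candidates[:beam_size]:
            if tokens[-1] == eos:
                completed.append((tokens, score))
            else:
                frontier.append((tokens, score))
    completed.extend(frontier)                   # hypotheses cut off at max_len
    # choose the best hypothesis after dividing its log prob by its length
    return max(completed, key=lambda c: c[1] / len(c[0]))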
An alternative decoding algorithm, minimum Bayes risk (MBR) decoding,
can work even better than beam search and also tends to be better than the other
decoding algorithms like temperature sampling introduced in Section 10.2.
The intuition of minimum Bayes risk is that instead of trying to choose the trans-
lation which is most probable, we choose the one that is likely to have the least error.
For example, we might want our decoding algorithm to find the translation which
has the highest score on some evaluation metric. In Section 13.6 we will
introduce metrics like chrF or BERTScore that measure the goodness-of-fit between
a candidate translation and a set of reference human translations. A translation that
maximizes this score, especially against a hypothetically huge set of perfect human
translations, is likely to be a good one (to have minimum risk) even if it is not the most
probable translation by our particular probability estimator.
In practice, we don’t know the perfect set of translations for a given sentence. So
the standard simplification used in MBR decoding algorithms is to instead choose
the candidate translation which is most similar (by some measure of goodness-of-
fit) to some set of candidate translations. We’re essentially approximating the
enormous space of all possible translations U with a smaller set of possible candidate
translations Y.
Given this set of possible candidate translations Y, and some similarity or align-
ment function util, we choose the best translation ŷ as the translation which is most
similar to all the other candidate translations:
ŷ = argmax_{y ∈ Y} ∑_{c ∈ Y} util(y, c)     (13.17)
Various util functions can be used, like chrF or BERTScore or BLEU. We can get the
set of candidate translations by sampling using one of the basic sampling algorithms
of Section 10.2 like temperature sampling; good results can be obtained with as few
as 32 or 64 candidates.
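A sketch of the whole procedure (the candidates list and the util function, e.g. chrF or BERTScore, are assumed to be supplied by the caller):

# Minimum Bayes risk decoding (Eq. 13.17): pick the candidate translation that
# is most similar, under util, to the full set of sampled candidates.
def mbr_decode(candidates, util):
    def consensus(y):
        return sum(util(y, c) for c in candidates)
    return max(candidates, key=consensus)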
Minimum Bayes risk decoding can also be used for other NLP tasks; indeed
it was widely applied to speech recognition (Stolcke et al., 1997; Goel and Byrne,
2000) before being applied to machine translation (Kumar and Byrne, 2004), and
has been shown to work well across many other generation tasks as well (e.g., sum-
marization, dialogue, and image captioning (Suzgun et al., 2023a)).
In a multilingual MT model, the encoder and decoder are additionally given tokens l_s and l_t specifying the source and target language:

h = encoder(x, l_s)                                               (13.18)
y_{i+1} = decoder(h, l_t, y_1, ..., y_i)   ∀i ∈ [1, ..., m]       (13.19)
One advantage of multilingual models is that they can improve the translation
of lower-resourced languages by drawing on information from a similar language
in the training data that happens to have more resources. Perhaps we don’t know
the meaning of a word in Galician, but the word appears in the similar and higher-
resourced language Spanish.
13.6 MT Evaluation
Translations are evaluated along two dimensions:
1. adequacy: how well the translation captures the exact meaning of the source
sentence. Sometimes called faithfulness or fidelity.
2. fluency: how fluent the translation is in the target language (is it grammatical,
clear, readable, natural).
Using humans to evaluate is most accurate, but automatic metrics are also used for
convenience.
We can use human raters to judge adequacy by having them assign scores on
a scale. If we have bilingual raters, we can give them the source
sentence and a proposed target sentence, and rate, on a 5-point or 100-point scale,
how much of the information in the source was preserved in the target. If we only
have monolingual raters but we have a good human translation of the source text, we
can give the monolingual raters the human reference translation and a target machine
translation and again rate how much information is preserved. An alternative is to
do ranking: give the raters a pair of candidate translations, and ask them which one
they prefer.
Training of human raters (who are often online crowdworkers) is essential; raters
without translation expertise find it difficult to separate fluency and adequacy, and
so training includes examples carefully distinguishing these. Raters often disagree
(source sentences may be ambiguous, raters will have different world knowledge,
raters may apply scales differently). It is therefore common to remove outlier raters,
and (if we use a fine-grained enough scale) to normalize raters by subtracting the
mean from their scores and dividing by the variance.
As discussed above, an alternative way of using human raters is to have them
post-edit translations, taking the MT output and changing it minimally until they
feel it represents a correct translation. The difference between their post-edited
translations and the original MT output can then be used as a measure of quality.
We use that to compute the unigram and bigram precisions and recalls:
unigram P: 17/17 = 1 unigram R: 17/18 = .944
bigram P: 13/16 = .813 bigram R: 13/17 = .765
Finally we average to get chrP and chrR, and compute the F-score:
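Filling in that last step with the standard chrF definition (β = 2 weights recall twice as much as precision):

chrP = (1.0 + .813) / 2 = .906        chrR = (.944 + .765) / 2 = .855

chrF_β = (1 + β²) · chrP · chrR / (β² · chrP + chrR)

chrF2 = 5 · (.906 · .855) / (4 · .906 + .855) ≈ .86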
An older metric, BLEU, is based on n-gram precision: roughly, the fraction of unigram tokens in the candidate translation that also occur in the reference translation, and ditto for
bigrams and so on, up to 4-grams. BLEU extends this unigram metric to the whole
corpus by computing the numerator as the sum over all sentences of the counts of all
the unigram types that also occur in the reference translation, and the denominator
is the total of the counts of all unigrams in all candidate sentences. We compute
this n-gram precision for unigrams, bigrams, trigrams, and 4-grams and take the
geometric mean. BLEU has many further complications, including a brevity penalty
for penalizing candidate translations that are too short, and it also requires the n-
gram counts be clipped in a particular way.
Because BLEU is a word-based metric, it is very sensitive to word tokenization,
making it impossible to compare different systems if they rely on different tokeniza-
tion standards, and doesn’t work as well in languages with complex morphology.
Nonetheless, you will sometimes still see systems evaluated by BLEU, particularly
for translation into English. In such cases it’s important to use packages that enforce
standardization for tokenization like SacreBLEU (Post, 2018).
chrF: Limitations
While automatic character and word-overlap metrics like chrF or BLEU are useful,
they have important limitations. chrF is very local: a large phrase that is moved
around might barely change the chrF score at all, and chrF can’t evaluate cross-
sentence properties of a document like its discourse coherence (Chapter 24). chrF
and similar automatic metrics also do poorly at comparing very different kinds of
systems, such as comparing human-aided translation against machine translation, or
different machine translation architectures against each other (Callison-Burch et al.,
2006). Instead, automatic overlap metrics like chrF are most appropriate when eval-
uating changes to a single system.
Figure 13.11 The computation of BERTS CORE recall from reference x and candidate x̂, from Figure 1 in
Zhang et al. (2020). This version shows an extended version of the metric in which tokens are also weighted by
their idf values.
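As a sketch of what the figure is computing (the embeddings and idf weights are assumed to come from a BERT-style encoder; the function and argument names are ours):

import numpy as np

# BERTScore recall: match each reference token to its most similar candidate
# token by cosine similarity, optionally weighting reference tokens by idf.
def bertscore_recall(ref_embs, cand_embs, ref_idf=None):
    ref = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    cand = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    sims = ref @ cand.T                   # [len(ref) x len(cand)] cosines
    best = sims.max(axis=1)               # best candidate match per reference token
    if ref_idf is None:
        ref_idf = np.ones(len(best))
    return float((ref_idf * best).sum() / ref_idf.sum())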
Similarly, a recent challenge set, the WinoMT dataset (Stanovsky et al., 2019)
shows that MT systems perform worse when they are asked to translate sentences
that describe people with non-stereotypical gender roles, like “The doctor asked the
nurse to help her in the operation”.
Many ethical questions in MT require further research. One open problem is
developing metrics for knowing what our systems don’t know. This is because MT
systems can be used in urgent situations where human translators may be unavailable
or delayed: in medical domains, to help translate when patients and doctors don’t
speak the same language, or in legal domains, to help judges or lawyers communi-
cate with witnesses or defendants. In order to ‘do no harm’, systems need ways to
assign confidence values to candidate translations, so they can abstain from giving
incorrect translations that may cause harm.
13.8 Summary
Machine translation is one of the most widely used applications of NLP, and the
encoder-decoder model, first developed for MT, is a key tool that has applications
throughout NLP.
• Languages have divergences, both structural and lexical, that make translation
difficult.
• The linguistic field of typology investigates some of these differences; lan-
guages can be classified by their position along typological dimensions like
whether verbs precede their objects.
• Encoder-decoder networks (for transformers just as we saw in Chapter 8 for
RNNs) are composed of an encoder network that takes an input sequence
and creates a contextualized representation of it, the context. This context
representation is then passed to a decoder which generates a task-specific
output sequence.
• Cross-attention allows the transformer decoder to view information from all
the hidden states of the encoder.
• Machine translation models are trained on a parallel corpus, sometimes called
a bitext, a text that appears in two (or more) languages.
(Figure 13.13 is a diagram: the source text is analyzed into a semantic/syntactic structure, transferred, and then generated as target text; the interlingua sits at the apex of the triangle and direct translation at its base.)
Figure 13.13 The Vauquois (1968) triangle.
Statistical methods began to be applied around 1990, enabled first by the devel-
opment of large bilingual corpora like the Hansard corpus of the proceedings of the
Canadian Parliament, which are kept in both French and English, and then by the
growth of the Web. Early on, a number of researchers showed that it was possible
to extract pairs of aligned sentences from bilingual corpora, using words or simple
cues like sentence length (Kay and Röscheisen 1988, Gale and Church 1991, Gale
and Church 1993, Kay and Röscheisen 1993).
At the same time, the IBM group, drawing directly on the noisy channel model
for speech recognition, proposed two related paradigms for statistical MT. These
include the generative algorithms that became known as IBM Models 1 through
5, implemented in the Candide system. The algorithms (except for the decoder)
were published in full detail— encouraged by the US government who had par-
tially funded the work— which gave them a huge impact on the research community
(Brown et al. 1990, Brown et al. 1993).
The group also developed a discriminative approach, called MaxEnt (for maxi-
mum entropy, an alternative formulation of logistic regression), which allowed many
features to be combined discriminatively rather than generatively (Berger et al.,
1996), which was further developed by Och and Ney (2002).
By the turn of the century, most academic research on machine translation used
statistical MT, either in the generative or discriminative mode. An extended version
of the generative approach, called phrase-based translation, was developed, based
on inducing translations for phrase-pairs (Och 1998, Marcu and Wong 2002, Koehn
et al. 2003, Och and Ney 2004, Deng and Byrne 2005, inter alia).
Once automatic metrics like BLEU were developed (Papineni et al., 2002), the
discriminative log linear formulation (Och and Ney, 2004), drawing from the IBM
MaxEnt work (Berger et al., 1996), was used to directly optimize evaluation metrics
like BLEU in a method known as Minimum Error Rate Training, or MERT (Och,
2003), also drawing from speech recognition models (Chou et al., 1993). Toolkits
like GIZA (Och and Ney, 2003) and Moses (Koehn et al. 2006, Zens and Ney 2007)
were widely used.
There were also approaches around the turn of the century that were based on
syntactic structure (Chapter 18). Models based on transduction grammars (also
called synchronous grammars) assign a parallel syntactic tree structure to a pair of
sentences in different languages, with the goal of translating the sentences by ap-
plying reordering operations on the trees. From a generative perspective, we can
view a transduction grammar as generating pairs of aligned sentences in two lan-
guages. Some of the most widely used models included the inversion transduction
grammar (Wu, 1996) and synchronous context-free grammars (Chiang, 2005).
Neural networks had been applied at various times to various aspects of machine
translation; for example Schwenk et al. (2006) showed how to use neural language
models to replace n-gram language models in a Spanish-English system based on
IBM Model 4. The modern neural encoder-decoder approach was pioneered by
Kalchbrenner and Blunsom (2013), who used a CNN encoder and an RNN decoder,
and was first applied to MT by Bahdanau et al. (2015). The transformer encoder-
decoder was proposed by Vaswani et al. (2017) (see the History section of Chap-
ter 9).
Research on evaluation of machine translation began quite early. Miller and
Beebe-Center (1956) proposed a number of methods drawing on work in psycholin-
guistics. These included the use of cloze and Shannon tasks to measure intelligibility
as well as a metric of edit distance from a human translation, the intuition that un-
derlies all modern overlap-based automatic evaluation metrics. The ALPAC report
included an early evaluation study conducted by John Carroll that was extremely in-
fluential (Pierce et al., 1966, Appendix 10). Carroll proposed distinct measures for
fidelity and intelligibility, and had raters score them subjectively on 9-point scales.
Much early evaluation work focuses on automatic word-overlap metrics like BLEU
(Papineni et al., 2002), NIST (Doddington, 2002), TER (Translation Error Rate)
(Snover et al., 2006), Precision and Recall (Turian et al., 2003), and METEOR
(Banerjee and Lavie, 2005); character n-gram overlap methods like chrF (Popović,
2015) came later. More recent evaluation work, echoing the ALPAC report, has
emphasized the importance of careful statistical methodology and the use of human
evaluation (Kocmi et al., 2021; Marie et al., 2021).
The early history of MT is surveyed in Hutchins 1986 and 1997; Nirenburg et al.
(2002) collects early readings. See Croft (1990) or Comrie (1989) for introductions
to linguistic typology.
Exercises
13.1 Compute by hand the chrF2,2 score for HYP2 on page 282 (the answer should
round to .62).
CHAPTER
14 Question Answering, Information Retrieval, and RAG
People need to know things. So pretty much as soon as there were computers we
were asking them questions. Systems in the early 1960s were answering questions
about baseball statistics and scientific facts. Even fictional computers in the 1970s
like Deep Thought, invented by Douglas Adams in The Hitchhiker’s Guide to the
Galaxy, answered “the Ultimate Question Of Life, The Universe, and Everything”.1
And because so much knowledge is encoded in text, QA systems were performing
at human levels even before LLMs: IBM’s Watson system won the TV game-show
Jeopardy! in 2011, surpassing humans at answering questions like:
WILLIAM WILKINSON’S “AN ACCOUNT OF THE
PRINCIPALITIES OF WALLACHIA AND MOLDOVIA”
INSPIRED THIS AUTHOR’S MOST FAMOUS NOVEL 2
The first and main problem with simply prompting an LLM for an answer is that large language models often give the wrong
answer! Large language models hallucinate. A hallucination is a response that is
not faithful to the facts of the world. That is, when asked questions, large language
models simply make up answers that sound reasonable. For example, Dahl et al.
(2024) found that when asked questions about the legal domain (like about particular
legal cases), large language models hallucinated from 69% to 88% of the time!
And it’s not always possible to tell when language models are hallucinating,
partly because LLMs aren’t well-calibrated. In a calibrated system, the confidence
of a system in the correctness of its answer is highly correlated with the probability
of an answer being correct. So if a calibrated system is wrong, at least it might hedge
its answer or tell us to go check another source. But since language models are not
well-calibrated, they often give a very wrong answer with complete certainty (Zhou
et al., 2024).
A second problem is that simply prompting a large language model doesn’t allow
us to ask questions about proprietary data. A common use of question answering is
about data like our personal email or medical records. Or a company may have
internal documents that contain answers for customer service or internal use. Or
legal firms need to ask questions about legal discovery from proprietary documents.
Finally, static large language models also have problems with questions about
rapidly changing information (like questions about something that happened last
week) since LLMs won’t have up-to-date information from after their release date.
For this reason the most common way to do question-answering with LLMs is
retrieval-augmented generation or RAG, and that is the method we will focus on
in this chapter. In RAG we use information retrieval (IR) techniques to retrieve
documents that are likely to have information that might help answer the question.
Then we use a large language model to generate an answer given these documents.
Basing our answers on retrieved documents can solve some of the problems with
using simple prompting to answer questions. First, it helps ensure that the answer is
grounded in facts from some curated dataset. And the system can give the user the
answer accompanied by the context of the passage or document the answer came
from. This information can help users have confidence in the accuracy of the answer
(or help them spot when it is wrong!). And these retrieval techniques can be used on
any proprietary data we want, such as legal or medical data for those applications.
We’ll begin by introducing information retrieval, the task of choosing the most
relevant document from a document set given a user’s query expressing their infor-
mation need. We’ll see the classic method based on cosines of sparse tf-idf vectors,
and modern neural ‘dense’ retrievers based instead on representing queries and docu-
ments neurally with BERT or other language models. We then introduce retriever-
based question answering and the retrieval-augmented generation paradigm.
Finally, we’ll discuss various QA datasets. These are used for finetuning LLMs
in instruction tuning, as we saw in Chapter 12. And they are also used as bench-
marks, since question answering has an important function as a benchmark for mea-
suring the abilities of language models.
to see its application to question answering. Readers with more interest specifically
in information retrieval should see the Historical Notes section at the end of the
chapter and textbooks like Manning et al. (2008).
The IR task we consider is called ad hoc retrieval, in which a user poses a
query to a retrieval system, which then returns an ordered set of documents from
some collection. A document refers to whatever unit of text the system indexes and
retrieves (web pages, scientific papers, news articles, or even shorter passages like
paragraphs). A collection refers to a set of documents being used to satisfy user
requests. A term refers to a word in a collection, but it may also include phrases.
Finally, a query represents a user’s information need expressed as a set of terms.
The high-level architecture of an ad hoc retrieval engine is shown in Fig. 14.1.
(Figure 14.1 schematic: the documents in the collection are run through indexing to build an inverted index; an incoming query goes through query processing to produce a query vector, which is searched against the index to return a ranked list of documents.)
The basic IR architecture uses the vector space model we introduced in Chap-
ter 6, in which we map queries and documents to vectors based on unigram word
counts, and use the cosine similarity between the vectors to rank potential documents
(Salton, 1971). This is thus an example of the bag-of-words model introduced in
Chapter 4, since words are considered independently of their positions.
3 We can also use this alternative formulation, which we have used in earlier editions: tf_{t,d} =
log10(count(t, d) + 1)
If we use log weighting, terms which occur 0 times in a document would have tf = 0,
1 times in a document tf = 1 + log10 (1) = 1 + 0 = 1, 10 times in a document tf =
1 + log10 (10) = 2, 100 times tf = 1 + log10 (100) = 3, 1000 times tf = 4, and so on.
The document frequency dft of a term t is the number of documents it oc-
curs in. Terms that occur in only a few documents are useful for discriminating
those documents from the rest of the collection; terms that occur across the entire
collection aren’t as helpful. The inverse document frequency or idf term weight
(Sparck Jones, 1972) is defined as:
idf_t = log10( N / df_t )     (14.5)
where N is the total number of documents in the collection, and dft is the number
of documents in which term t occurs. The fewer documents in which a term occurs,
the higher this weight; the lowest weight of 0 is assigned to terms that occur in every
document.
Here are some idf values for some words in the corpus of Shakespeare plays,
ranging from extremely informative words that occur in only one play like Romeo,
to those that occur in a few like salad or Falstaff, to those that are very common like
fool or so common as to be completely non-discriminative since they occur in all 37
plays like good or sweet.4
Word df idf
Romeo 1 1.57
salad 2 1.27
Falstaff 4 0.967
forest 12 0.489
battle 21 0.246
wit 34 0.037
fool 36 0.012
good 37 0
sweet 37 0
The tf-idf value for word t in document d is then the product of term frequency
tf_{t,d} and IDF:

tf-idf(t, d) = tf_{t,d} × idf_t     (14.6)
To score a document d against a query q, we use the cosine between their tf-idf vectors:

score(q, d) = cos(q, d) = (q · d) / (|q| |d|)     (14.7)
Another way to think of the cosine computation is as the dot product of unit vectors;
we first normalize both the query and document vector to unit vectors, by dividing
by their lengths, and then take the dot product:
score(q, d) = cos(q, d) = (q / |q|) · (d / |d|)     (14.8)
4 Sweet was one of Shakespeare’s favorite adjectives, a fact probably related to the increased use of
sugar in European recipes around the turn of the 16th century (Jurafsky, 2014, p. 175).
We can spell out Eq. 14.8, using the tf-idf values and spelling out the dot product as
a sum of products:
score(q, d) = ∑_{t ∈ q} [ tf-idf(t, q) / √(∑_{q_i ∈ q} tf-idf²(q_i, q)) ] · [ tf-idf(t, d) / √(∑_{d_i ∈ d} tf-idf²(d_i, d)) ]     (14.9)
Now let’s use (14.9) to walk through an example of a tiny query against a collec-
tion of 4 nano documents, computing tf-idf values and seeing the rank of the docu-
ments. We’ll assume all words in the following query and documents are downcased
and punctuation is removed:
Query: sweet love
Doc 1: Sweet sweet nurse! Love?
Doc 2: Sweet sorrow
Doc 3: How sweet is love?
Doc 4: Nurse!
Fig. 14.2 shows the computation of the tf-idf cosine between the query and Doc-
ument 1, and the query and Document 2. The cosine is the normalized dot product
of tf-idf values, so for the normalization we need to compute the document
vector lengths |q|, |d1|, and |d2| for the query and the first two documents using
Eq. 14.4, Eq. 14.5, Eq. 14.6, and Eq. 14.9 (computations for Documents 3 and 4 are
also needed but are left as an exercise for the reader). The dot product between the
vectors is the sum over dimensions of the product, for each dimension, of the values
of the two tf-idf vectors for that dimension. This product is only non-zero where
both the query and document have non-zero values, so for this example, in which
only sweet and love have non-zero values in the query, the dot product will be the
sum of the products of those elements of each vector.
Document 1 has a higher cosine with the query (0.747) than Document 2 has
with the query (0.0779), and so the tf-idf cosine model would rank Document 1
above Document 2. This ranking is intuitive given the vector space model, since
Document 1 has both terms including two instances of sweet, while Document 2 is
missing one of the terms. We leave the computation for Documents 3 and 4 as an
exercise for the reader.
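The whole computation fits in a few lines; this sketch reproduces the Fig. 14.2 numbers (0.747 and 0.078) using Eqs. 14.4-14.9:

import math
from collections import Counter

docs = {"d1": "sweet sweet nurse love", "d2": "sweet sorrow",
        "d3": "how sweet is love", "d4": "nurse"}
query = "sweet love"
N = len(docs)
df = Counter(t for text in docs.values() for t in set(text.split()))

def tf(count):                            # Eq. 14.4: log-weighted term frequency
    return 1 + math.log10(count) if count > 0 else 0.0

def tfidf_vector(text):                   # Eqs. 14.5-14.6
    counts = Counter(text.split())
    return {t: tf(c) * math.log10(N / df[t]) for t, c in counts.items()}

def cosine(v1, v2):                       # Eq. 14.9 as a normalized dot product
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    norm = lambda v: math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm(v1) * norm(v2))

q = tfidf_vector(query)
for name in ["d1", "d2"]:
    print(name, round(cosine(q, tfidf_vector(docs[name])), 3))
# prints: d1 0.747 and d2 0.078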
In practice, there are many variants and approximations to Eq. 14.9. For exam-
ple, we might choose to simplify processing by removing some terms. To see this,
let’s start by expanding the formula for tf-idf in Eq. 14.9 to explicitly mention the tf
and idf terms from (14.6):
score(q, d) = ∑_{t ∈ q} [ tf_{t,q} · idf_t / √(∑_{q_i ∈ q} tf-idf²(q_i, q)) ] · [ tf_{t,d} · idf_t / √(∑_{d_i ∈ d} tf-idf²(d_i, d)) ]     (14.10)
In one common variant of tf-idf cosine, for example, we drop the idf term for the
document. Eliminating the second copy of the idf term (since the identical term is
already computed for the query) turns out to sometimes result in better performance:
Query
word     cnt  tf   df  idf    tf-idf   n'lized = tf-idf/|q|
sweet     1   1    3   0.125  0.125    0.383
nurse     0   0    2   0.301  0        0
love      1   1    2   0.301  0.301    0.924
how       0   0    1   0.602  0        0
sorrow    0   0    1   0.602  0        0
is        0   0    1   0.602  0        0
|q| = √(.125² + .301²) = .326

         Document 1                               Document 2
word     cnt  tf     tf-idf  n'lized  × q         cnt  tf     tf-idf  n'lized  × q
sweet     2   1.301  0.163   0.357    0.137        1   1.000  0.125   0.203    0.0779
nurse     1   1.000  0.301   0.661    0            0   0      0       0        0
love      1   1.000  0.301   0.661    0.610        0   0      0       0        0
how       0   0      0       0        0            0   0      0       0        0
sorrow    0   0      0       0        0            1   1.000  0.602   0.979    0
is        0   0      0       0        0            0   0      0       0        0
|d1| = √(.163² + .301² + .301²) = .456            |d2| = √(.125² + .602²) = .615
Cosine (sum of × q column): 0.747                 Cosine (sum of × q column): 0.0779
Figure 14.2 Computation of tf-idf cosine score between the query and nano-documents 1 (0.747) and 2
(0.0779), using Eq. 14.4, Eq. 14.5, Eq. 14.6 and Eq. 14.9.
A slightly more complex variant in the tf-idf family is the BM25 weighting
scheme (sometimes called Okapi BM25 after the Okapi IR system in which it was
introduced (Robertson et al., 1995)). BM25 adds two parameters: k, a knob that
adjusts the balance between term frequency and IDF, and b, which controls the im-
portance of document length normalization. The BM25 score of a document d given
a query q is:
score(q, d) = ∑_{t ∈ q} log( N / df_t ) · tf_{t,d} / ( k (1 − b + b |d| / |d_avg|) + tf_{t,d} )     (14.12)
where |davg | is the length of the average document. When k is 0, BM25 reverts to
no use of term frequency, just a binary selection of terms in the query (plus idf).
A large k results in raw term frequency (plus idf). b ranges from 1 (scaling by
document length) to 0 (no length scaling). Manning et al. (2008) suggest reasonable
values are k = [1.2,2] and b = 0.75. Kamphuis et al. (2020) is a useful summary of
the many minor variants of BM25.
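A direct transcription of Eq. 14.12 (the argument names are our own; doc_counts is a term-count dictionary for the document):

import math

def bm25(query_terms, doc_counts, doc_len, avg_len, df, N, k=1.5, b=0.75):
    score = 0.0
    for t in query_terms:
        if t not in df or t not in doc_counts:
            continue                                  # unseen terms contribute 0
        idf = math.log10(N / df[t])                   # idf factor
        tf = doc_counts[t]
        denom = k * (1 - b + b * doc_len / avg_len) + tf
        score += idf * tf / denom                     # length-normalized weighted tf
    return score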
Stop words In the past it was common to remove high-frequency words from both
the query and document before representing them. The list of such high-frequency
words to be removed is called a stop list. The intuition is that high-frequency terms
(often function words like the, a, to) carry little semantic weight and may not help
with retrieval, and can also help shrink the inverted index files we describe below.
The downside of using a stop list is that it makes it difficult to search for phrases
that contain words in the stop list. For example, common stop lists would reduce the
phrase to be or not to be to the phrase not. In modern IR systems, the use of stop lists
is much less common, partly due to improved efficiency and partly because much
of their function is already handled by IDF weighting, which downweights function
words that occur in every document. Nonetheless, stop word removal is occasionally
useful in various NLP tasks so is worth keeping in mind.
To compare the performance of two ranked retrieval systems, we need a metric that prefers the one that ranks the relevant
documents higher. We need to adapt precision and recall to capture how well a
system does at putting relevant documents higher in the ranking.
Rank  Judgment  Precision_Rank  Recall_Rank
1 R 1.0 .11
2 N .50 .11
3 R .66 .22
4 N .50 .22
5 R .60 .33
6 R .66 .44
7 N .57 .44
8 R .63 .55
9 N .55 .55
10 N .50 .55
11 R .55 .66
12 N .50 .66
13 N .46 .66
14 N .43 .66
15 R .47 .77
16 N .44 .77
17 N .44 .77
18 R .44 .88
19 N .42 .88
20 N .40 .88
21 N .38 .88
22 N .36 .88
23 N .35 .88
24 N .33 .88
25 R .36 1.0
Figure 14.3 Rank-specific precision and recall values calculated as we proceed down
through a set of ranked documents (assuming the collection has 9 relevant documents).
Let’s turn to an example. Assume the table in Fig. 14.3 gives rank-specific pre-
cision and recall values calculated as we proceed down through a set of ranked doc-
uments for a particular query; the precisions are the fraction of relevant documents
seen at a given rank, and recalls the fraction of relevant documents found at the same
rank. The recall measures in this example are based on this query having 9 relevant
documents in the collection as a whole.
Note that recall is non-decreasing; when a relevant document is encountered,
recall increases, and when a non-relevant document is found it remains unchanged.
Precision, on the other hand, jumps up and down, increasing when relevant doc-
uments are found, and decreasing otherwise. The most common way to visualize
precision and recall is to plot precision against recall in a precision-recall curve,
like the one shown in Fig. 14.4 for the data in Fig. 14.3.
Fig. 14.4 shows the values for a single query. But we’ll need to combine values
for all the queries, and in a way that lets us compare one system to another. One way
of doing this is to plot averaged precision values at 11 fixed levels of recall (0 to 100,
in steps of 10). Since we’re not likely to have datapoints at these exact levels, we
use interpolated precision values for the 11 recall values from the data points we do
have. We can accomplish this by choosing the maximum precision value achieved
at any level of recall at or above the one we’re calculating. In other words,
IntPrecision(r) = max_{i ≥ r} Precision(i)     (14.14)
Figure 14.4 The precision-recall curve for the data in Fig. 14.3.
This interpolation scheme not only lets us average performance over a set of queries,
but also helps smooth over the irregular precision values in the original data. It is
designed to give systems the benefit of the doubt by assigning the maximum preci-
sion value achieved at higher levels of recall from the one being measured. Fig. 14.5
and Fig. 14.6 show the resulting interpolated data points from our example.
Given curves such as that in Fig. 14.6 we can compare two systems or approaches
by comparing their curves. Clearly, curves that are higher in precision across all
recall values are preferred. However, these curves can also provide insight into the
overall behavior of a system. Systems that are higher in precision toward the left
may favor precision over recall, while systems that are more geared towards recall
will be higher at higher levels of recall (to the right).
A second way to evaluate ranked retrieval is mean average precision (MAP),
which provides a single metric that can be used to compare competing systems or
approaches. In this approach, we again descend through the ranked list of items,
but now we note the precision only at those points where a relevant item has been
encountered (for example at ranks 1, 3, 5, 6 but not 2 or 4 in Fig. 14.3). For a single
query, we average these individual precision measurements over the return set (up
to some fixed cutoff). More formally, if we assume that Rr is the set of relevant
documents at or above r, then the average precision (AP) for a single query is
AP = (1/|R_r|) ∑_{d ∈ R_r} Precision_r(d)     (14.15)
where Precisionr (d) is the precision measured at the rank at which document d was
found. For an ensemble of queries Q, we then average over these averages, to get
our final MAP measure:
MAP = (1/|Q|) ∑_{q ∈ Q} AP(q)     (14.16)
The MAP for the single query (hence = AP) in Fig. 14.3 is 0.6.
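The computation is easy to reproduce from the judgments in Fig. 14.3 (a sketch; 'R' marks a relevant document, and we assume all 9 relevant documents appear in the ranking):

judgments = list("RNRNRRNRNNRNNNRNNRNNNNNNR")       # ranks 1..25 of Fig. 14.3

def average_precision(judgments):
    hits, precisions = 0, []
    for rank, j in enumerate(judgments, start=1):
        if j == "R":
            hits += 1
            precisions.append(hits / rank)           # precision at each relevant doc
    return sum(precisions) / len(precisions)         # Eq. 14.15

print(round(average_precision(judgments), 2))        # 0.6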
In one dense retrieval architecture, we pass the query and the document together through a single
encoder such as BERT, allowing self-attention to see all the tokens of both, and thus building a representation that is sensitive to
the meanings of both query and document. Then a linear layer U can be put on top of
the output z_CLS of the [CLS] token to predict a similarity score for the query/document tuple:

score(q, d) = U · z_CLS     (14.17)

This architecture is shown in Fig. 14.7a. Usually the retrieval step is not done on
an entire document. Instead documents are broken up into smaller passages, such
as non-overlapping fixed-length chunks of say 100 tokens, and the retriever encodes
and retrieves these passages rather than entire documents. The query and document
have to be made to fit in the BERT 512-token window, for example by truncating
the query to 64 tokens and truncating the document if necessary so that it, the query,
[CLS], and [SEP] fit in 512 tokens. The BERT system together with the linear layer
U can then be fine-tuned for the relevance task by gathering a tuning dataset of
relevant and non-relevant passages.
Figure 14.7 Two ways to do dense retrieval, illustrated by using lines between layers to schematically rep-
resent self-attention: (a) Use a single encoder to jointly encode query and document and finetune to produce a
relevance score with a linear layer over the CLS token. This is too compute-expensive to use except in rescoring
(b) Use separate encoders for query and document, and use the dot product between CLS token outputs for the
query and document as the score. This is less compute-expensive, but not as accurate.
The problem with the full BERT architecture in Fig. 14.7a is the expense in
computation and time. With this architecture, every time we get a query, we have to
pass every single document in our entire collection through a BERT encoder
jointly with the new query! This enormous use of resources is impractical for real
cases.
At the other end of the computational spectrum is a much more efficient archi-
tecture, the bi-encoder. In this architecture we can encode the documents in the
collection only one time by using two separate encoder models, one to encode the
query and one to encode the document. We encode each document, and store all
the encoded document vectors in advance. When a query comes in, we encode just
this query and then use the dot product between the query vector and the precom-
puted document vectors as the score for each candidate document (Fig. 14.7b). For
example, if we used BERT, we would have two encoders BERTQ and BERTD and
we could represent the query and document as the [CLS] token of the respective
encoders (Karpukhin et al., 2020):
zq = BERTQ (q)[CLS]
zd = BERTD (d)[CLS]
score(q, d) = zq · zd (14.18)
The bi-encoder is much cheaper than a full query/document encoder, but is also
less accurate, since its relevance decision can’t take full advantage of all the possi-
ble meaning interactions between all the tokens in the query and the tokens in the
document.
There are numerous approaches that lie in between the full encoder and the bi-
encoder. One intermediate alternative is to use cheaper methods (like BM25) as the
first pass relevance ranking for each document, take the top N ranked documents,
and use expensive methods like the full BERT scoring to rerank only the top N
documents rather than the whole set.
Another intermediate approach is the ColBERT approach of Khattab and Za-
haria (2020) and Khattab et al. (2021), shown in Fig. 14.8. This method separately
encodes the query and document, but rather than encoding the entire query or doc-
ument into one vector, it separately encodes each of them into contextual represen-
tations for each token. These BERT representations of each document word can be
pre-stored for efficiency. The relevance score between a query q and a document d is
a sum of maximum similarity (MaxSim) operators between tokens in q and tokens
in d. Essentially, for each token in q, ColBERT finds the most contextually simi-
lar token in d, and then sums up these similarities. A relevant document will have
tokens that are contextually very similar to the query.
More formally, a question q is tokenized as [q1 , . . . , qn ], prepended with a [CLS]
and a special [Q] token, truncated to N=32 tokens (or padded with [MASK] tokens if
it is shorter), and passed through BERT to get output vectors q = [q1 , . . . , qN ]. The
passage d with tokens [d1 , . . . , dm ], is processed similarly, including a [CLS] and
special [D] token. A linear layer is applied on top of d and q to control the output
dimension, so as to keep the vectors small for storage efficiency, and vectors are
rescaled to unit length, producing the final vector sequences Eq (length N) and Ed
(length m). The ColBERT scoring mechanism is:
score(q, d) = ∑_{i=1}^{N} max_{j=1}^{m} E_{q_i} · E_{d_j}     (14.19)
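In code, the MaxSim interaction is a single matrix product followed by a row-wise max (a sketch; Eq and Ed are the unit-length token matrices defined above):

import numpy as np

def colbert_score(Eq, Ed):
    sims = Eq @ Ed.T                        # [N x m] token-token similarities
    return float(sims.max(axis=1).sum())    # best document match per query token (Eq. 14.19)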
While the interaction mechanism has no tunable parameters, the ColBERT ar-
chitecture still needs to be trained end-to-end to fine-tune the BERT encoders and
train the linear layers (and the special [Q] and [D] embeddings) from scratch. It
is trained on triples 〈q, d + , d − 〉 of query q, positive document d + and negative doc-
ument d − to produce a score for each document using (14.19), optimizing model
parameters using a cross-entropy loss.
All the supervised algorithms (like ColBERT or the full-interaction version of
the BERT algorithm applied for reranking) need training data in the form of queries
together with relevant and irrelevant passages or documents (positive and negative
examples). There are various semi-supervised ways to get labels; some datasets (like
MS MARCO Ranking, Section 14.3.2) contain gold positive examples. Negative
examples can be sampled randomly from the top-1000 results from some existing
IR system. If datasets don’t have labeled positive examples, iterative methods like
Figure 14.8 A sketch of the ColBERT algorithm at inference time. The query and docu-
ment are first passed through separate BERT encoders. Similarity between query and doc-
ument is computed by summing a soft alignment between the contextual representations of
tokens in the query and the document. Training is end-to-end. (Various details aren’t de-
picted; for example the query is prepended by a [CLS] and [Q:] tokens, and the document
by [CLS] and [D:] tokens). Figure adapted from Khattab and Zaharia (2020).
relevance-guided supervision can be used (Khattab et al., 2021) which rely on the
fact that many datasets contain short answer strings. In this method, an existing IR
system is used to harvest examples that do contain short answer strings (the top few
are taken as positives) or don’t contain short answer strings (the top few are taken as
negatives), these are used to train a new retriever, and then the process is iterated.
Efficiency is an important issue, since every possible document must be ranked
for its similarity to the query. For sparse word-count vectors, the inverted index
allows this very efficiently. For dense vector algorithms finding the set of dense
document vectors that have the highest dot product with a dense query vector is
an instance of the problem of nearest neighbor search. Modern systems there-
fore make use of approximate nearest neighbor vector search algorithms like Faiss
(Johnson et al., 2017).
(Figure 14.9 schematic: the query “When was the premiere of The Magic Flute?” is sent to the retriever over the indexed documents; the relevant documents are placed in the LLM prompt, and the reader/generator LLM produces the answer “1791”.)
Figure 14.9 Retrieval-based question answering has two stages: retrieval, which returns relevant documents
from the collection, and reading, in which an LLM generates answers given the documents as a prompt.
And simple conditional generation for question answering adds a prompt like Q:,
followed by a query q, and A:, all concatenated:

p(x_1, ..., x_n) = ∏_{i=1}^{n} p(x_i | [Q:] ; q ; [A:] ; x_{<i})
In retrieval-augmented generation, we condition on the retrieved passages as well, by prepending them to the prompt:

retrieved passage 1
retrieved passage 2
...
retrieved passage n
Q: query
A:
Or more formally,
p(x_1, ..., x_n) = ∏_{i=1}^{n} p(x_i | R(q) ; prompt ; [Q:] ; q ; [A:] ; x_{<i})
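A minimal sketch of this prompt construction (the retrieve and generate callables are placeholders for an IR system and an LLM, not a particular library's API):

def rag_answer(query, retrieve, generate, k=5):
    passages = retrieve(query, k)                 # R(q): top-k retrieved passages
    prompt = "\n".join(passages)                  # retrieved passage 1..n
    prompt += f"\nQ: {query}\nA:"                 # [Q:] ; q ; [A:]
    return generate(prompt)                       # the LLM completes the answer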
MMLU examples
College Physics
The primary source of the Sun’s energy is a series of thermonuclear
reactions in which the energy produced is c2 times the mass difference
between
(A) two hydrogen atoms and one helium atom
(B) four hydrogen atoms and one helium atom
(C) six hydrogen atoms and two helium atoms
(D) three helium atoms and one carbon atom
International Law
Which of the following is a treaty-based human rights mechanism?
(A) The UN Human Rights Committee
(B) The UN Human Rights Council
(C) The UN Universal Periodic Review
(D) The UN special mandates
Prehistory
Unlike most other early civilizations, Minoan culture shows little evidence
of
(A) trade.
(B) warfare.
(C) the development of a common religion.
(D) conspicuous consumption by elites.
Finally, in some situations QA systems give multiple ranked answers. In such cases we evaluate using mean reciprocal rank, or MRR (Voorhees, 1999). MRR is designed for systems that return a short ranked list of answers or passages for each test set question, which we can compare against the (human-labeled) correct answer. First, each test set question is scored with the reciprocal of the rank of the first correct answer. For example, if the system returned five answers to a question but the first three are wrong (so the highest-ranked correct answer is ranked fourth), the reciprocal rank for that question is 1/4. The score for questions that return no correct answer is 0. The MRR of a system is the average of the scores for each question in the test set. In some versions of MRR, questions with a score of zero are ignored in this calculation. More formally, for a system returning ranked answers to each
question in a test set Q (or, in the alternate version, for the subset of test set questions with non-zero scores):
   \textrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\textrm{rank}_i}    (14.20)
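As a quick worked sketch in Python (the list of ranks is an illustrative assumption), MRR can be computed from the rank of the first correct answer per question, using 0 for questions with no correct answer:

def mean_reciprocal_rank(first_correct_ranks):
    """first_correct_ranks: one entry per test question; the 1-based rank of the
    first correct answer, or None if no returned answer was correct."""
    scores = [1.0 / r if r is not None else 0.0 for r in first_correct_ranks]
    return sum(scores) / len(scores)

# Example: three questions whose first correct answers are ranked 1, 4, and not found:
# mean_reciprocal_rank([1, 4, None]) == (1 + 0.25 + 0) / 3 ~= 0.417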
14.5 Summary
This chapter introduced the tasks of question answering and information retrieval.
• Question answering (QA) is the task of answering a user’s questions.
• We focus in this chapter on the task of retrieval-based question answering,
in which the user’s questions are intended to be answered by the material in
some set of documents (which might be the web).
• Information Retrieval (IR) is the task of returning documents to a user based
on their information need as expressed in a query. In ranked retrieval, the
documents are returned in ranked order.
• The match between a query and a document can be done by first representing
each of them with a sparse vector that represents the frequencies of words,
weighted by tf-idf or BM25. Then the similarity can be measured by cosine.
• Documents or queries can instead be represented by dense vectors, by encod-
ing the question and document with an encoder-only model like BERT, and in
that case computing similarity in embedding space.
• The inverted index is a storage mechanism that makes it very efficient to find
documents that have a particular word.
• Ranked retrieval is generally evaluated by mean average precision or inter-
polated precision.
• Question answering systems generally use the retriever/reader architecture.
In the retriever stage, an IR system is given a query and returns a set of
documents.
• The reader stage is implemented by retrieval-augmented generation, in
which a large language model is prompted with the query and a set of doc-
uments and then conditionally generates a novel answer.
• QA can be evaluated by exact match with a known answer if only a single
answer is given, with token F1 score for free text answers, or with mean re-
ciprocal rank if a ranked set of answers is given.
from the content words in the question, and then retrieved candidate answer sen-
tences in the document, ranked by their frequency-weighted term overlap with the
question. The query and each retrieved sentence were then parsed with dependency
parsers, and the sentence whose structure best matched the question structure was selected. Thus the question What do worms eat? would match worms eat grass (both have the subject worms as a dependent of eat, in the version of dependency grammar used at the time), while birds eat worms has birds as the subject.
By a decade later, neural models were applied to semantic parsing (Dong and Lap-
ata 2016, Jia and Liang 2016), and then to knowledge-based question answering by
mapping text to SQL (Iyer et al., 2017).
Meanwhile, the information-retrieval paradigm for question answering was in-
fluenced by the rise of the web in the 1990s. The U.S. government-sponsored TREC
(Text REtrieval Conference) evaluations, run annually since 1992, provide a testbed
for evaluating information-retrieval tasks and techniques (Voorhees and Harman,
2005). TREC added an influential QA track in 1999, which led to a wide variety of
factoid and non-factoid systems competing in annual evaluations.
At that same time, Hirschman et al. (1999) introduced the idea of using chil-
dren’s reading comprehension tests to evaluate machine text comprehension algo-
rithms. They acquired a corpus of 120 passages with 5 questions each designed for
3rd-6th grade children, built an answer extraction system, and measured how well
the answers given by their system corresponded to the answer key from the test’s
publisher. Their algorithm focused on word overlap as a feature; later algorithms
added named entity features and more complex similarity between the question and
the answer span (Riloff and Thelen 2000, Ng et al. 2000).
The DeepQA component of the Watson Jeopardy! system was a large and so-
phisticated feature-based system developed just before neural systems became com-
mon. It is described in a series of papers in volume 56 of the IBM Journal of Re-
search and Development, e.g., Ferrucci (2012).
Early neural reading comprehension systems drew on the insight common to
early systems that answer finding should focus on question-passage similarity. Many
of the architectural outlines of these neural systems were laid out in Hermann et al.
(2015a), Chen et al. (2017a), and Seo et al. (2017). These systems focused on
datasets like Rajpurkar et al. (2016) and Rajpurkar et al. (2018) and their succes-
sors, usually using separate IR algorithms as input to neural reading comprehension
systems. The paradigm of using dense retrieval with a span-based reader, often with
a single end-to-end architecture, is exemplified by systems like Lee et al. (2019) or
Karpukhin et al. (2020). An important research area with dense retrieval for open-
domain QA is training data: using self-supervised methods to avoid having to label
positive and negative passages (Sachan et al., 2023).
Early work on large language models showed that they stored sufficient knowl-
edge in the pretraining process to answer questions (Petroni et al., 2019; Raffel et al.,
2020; Radford et al., 2019; Roberts et al., 2020), at first not competitively with
special-purpose question answerers, but then surpassing them. Retrieval-augmented
generation algorithms were first introduced as a way to improve language modeling
(Khandelwal et al., 2019), but were quickly applied to question answering (Izacard
et al., 2022; Ram et al., 2023; Shi et al., 2023).
Exercises
CHAPTER
15 Chatbots & Dialogue Systems
Les lois de la conversation sont en général de ne s’y appesantir sur aucun ob-
jet, mais de passer légèrement, sans effort et sans affectation, d’un sujet à un
autre ; de savoir y parler de choses frivoles comme de choses sérieuses
[The rules of conversation are, in general, not to dwell on any one subject,
but to pass lightly from one to another without effort and without affectation;
to know how to speak about trivial topics as well as serious ones;]
The 18th C. Encyclopedia of Diderot, start of the entry on conversation
The literature of the fantastic abounds in inanimate objects magically endowed with
the gift of speech. From Ovid’s statue of Pygmalion to Mary Shelley’s story about
Frankenstein, we continually reinvent stories about creating something and then having a chat with it. Legend has it that after finishing his sculpture Moses, Michelangelo thought it so lifelike that he tapped it on the knee and commanded it to speak. Perhaps this shouldn't be surprising. Language is the mark of humanity and sentience, and conversation or dialogue is the most fundamental arena of language. It is the first kind of language we learn as children, and the kind we engage in constantly,
whether we are ordering lunch, buying train tickets, or
talking with our families, friends, or coworkers.
This chapter introduces the fundamental algorithms of programs that use con-
versation to interact with users. We often distinguish between two kinds of architectures. Task-oriented dialogue systems converse with users to accomplish fixed tasks like controlling appliances or finding restaurants, relying on a data structure called the frame, which represents the knowledge a system needs to acquire from the user (like the time to set an alarm clock). Chatbots, by contrast, are designed
to mimic the longer and more unstructured conversations or ‘chats’ characteristic of
human-human interaction. Modern systems incorporate aspects of both; industrial
chatbots like ChatGPT can carry on longer unstructured conversations; industrial
digital assistants like Siri or Alexa are generally frame-based dialogue systems.
The fact that chatbots and dialogue systems are designed for human-computer
interaction has strong implications for their design and use. Many of these impli-
cations already became clear in one of the earliest chatbots, ELIZA (Weizenbaum,
1966). ELIZA was designed to simulate a Rogerian psychologist, based on a branch
of clinical psychology whose methods involve drawing the patient out by reflecting the patient's statements back at them. Rogerian interactions are the rare type of conver-
sation in which, as Weizenbaum points out, one can “assume the pose of knowing
almost nothing of the real world”. If a patient says “I went for a long boat ride” and
the psychiatrist says “Tell me about boats”, you don’t assume she didn’t know what
a boat is, but rather assume she had some conversational goal.1
Weizenbaum made use of this property of Rogerian psychiatric conversations,
along with clever regular expressions, to allow ELIZA to interact in ways that seemed
deceptively human-like, as in the sample conversational fragment in Fig. 15.1.
As we foreshadowed in Chapter 2, ELIZA worked by simple rules roughly like:
(.*) YOU (.*) ME -> WHAT MAKES YOU THINK I \2 YOU
to transform a user sentence like “You hate me” into a system response like
WHAT MAKES YOU THINK I HATE YOU
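A minimal sketch of how such a rule might be implemented in Python with the standard re module; the pattern spacing and the surrounding ELIZA machinery (keyword dispatch, ranking, fallback responses) are simplifications, not Weizenbaum's original implementation.

import re

# One ELIZA-style rule: (.*) YOU (.*) ME -> WHAT MAKES YOU THINK I \2 YOU
RULE = (r'(.*)YOU(.*)ME$', r'WHAT MAKES YOU THINK I\2YOU')

def respond(sentence):
    pattern, template = RULE
    return re.sub(pattern, template, sentence.upper())

print(respond("You hate me"))   # prints: WHAT MAKES YOU THINK I HATE YOU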
Among Weizenbaum's clever tricks is the linking of each ELIZA pattern/rule to a keyword. Consider the following user sentence:
I know everybody laughed at me
Because it has the word “I”, this sentence could match the following rule whose
keyword is I:
I (.*) -> You say you \1
producing:
YOU SAY YOU KNOW EVERYBODY LAUGHED AT YOU
Weizenbaum points out, however, that a more powerful response would rely on
the keyword “everybody”, since someone using universals like everybody or always
is probably thinking about a specific person or situation. So the ELIZA algorithm prefers to respond using patterns associated with more specific keywords like everybody:
WHO IN PARTICULAR ARE YOU THINKING OF?
If no keyword matches, the algorithm chooses a non-committal response like
“PLEASE GO ON”, “THATS VERY INTERESTING”, or “I SEE”.
ELIZA illustrates a number of important issues with chatbots. First, people
became deeply emotionally involved and conducted very personal conversations,
even to the extent of asking Weizenbaum to leave the room while they were typ-
ing. Reeves and Nass (1996) show that people tend to assign human characteristics
to computers and interact with them in ways that are typical of human-human in-
teractions. They interpret an utterance in the way they would if it had been spoken by a human (even though they are aware they are talking to a computer). This means that
chatbots can have significant influences on people’s cognitive and emotional state.
A second related issue is privacy. When Weizenbaum suggested that he might
want to store the ELIZA conversations, people immediately pointed out that this
would violate people’s privacy. Modern chatbots in the home are likely to overhear
1 This is due to the Gricean principle of relevance that we'll discuss in the next section.
private information, even if they aren’t used for counseling as ELIZA was. Indeed,
if a chatbot is human-like, users are more likely to disclose private information, and
yet less likely to worry about the harm of this disclosure (Ischen et al., 2019).
Both of these issues (emotional engagement and privacy) mean we need to think
carefully about how we deploy chatbots and the people who are interacting with
them. Dialogue research that uses human participants often requires getting permis-
sion from the Institutional Review Board (IRB) of your institution.
In the next section we introduce some basic properties of human conversation.
We then turn in the rest of the chapter to the two basic paradigms for conversational
interaction: frame-based dialogue systems and chatbots.
Turns
A dialogue is a sequence of turns (C1, A2, C3, and so on), each a single contribution
from one speaker to the dialogue (as if in a game: I take a turn, then you take a turn,
then me, and so on). There are 20 turns in Fig. 15.2. A turn can consist of a sentence
(like C1 ), although it might be as short as a single word (C13 ) or as long as multiple
sentences (A10 ).
Turn structure has important implications for spoken dialogue. A human has
to know when to stop talking; the client interrupts (in A16 and C17 ), so a system
that was performing this role must know to stop talking (and that the user might be
making a correction). A system also has to know when to start talking. For example,
most of the time in conversation, speakers start their turns almost immediately after
the other speaker finishes, without a long pause, because people can usually predict when the other person is about to finish talking. Spoken dialogue systems must also detect whether a user is done speaking, so they can process the utterance and respond. This task—called endpointing or endpoint detection—can be quite
challenging because of noise and because people often pause in the middle of turns.
Speech Acts
A key insight into conversation—due originally to the philosopher Wittgenstein
(1953) but worked out more fully by Austin (1962)—is that each utterance in a
dialogue is a kind of action being performed by the speaker. These actions are commonly called speech acts or dialogue acts; here's one taxonomy consisting of four major classes (Bach and Harnish, 1979):
Constatives: committing the speaker to something's being the case (answering, claiming, confirming, denying, disagreeing, stating)
Directives: attempts by the speaker to get the addressee to do something (advising, asking, forbidding, inviting, ordering, requesting)
Commissives: committing the speaker to some future course of action (promising, planning, vowing, betting, opposing)
Acknowledgments: express the speaker's attitude regarding the hearer with respect to some social action (apologizing, greeting, thanking, accepting an acknowledgment)
Grounding
A dialogue is not just a series of independent speech acts, but rather a collective act
performed by the speaker and the hearer. Like all collective acts, it's important for the participants to establish what they both agree on, called the common ground (Stalnaker, 1978). Speakers do this by grounding each other's utterances. Grounding means acknowledging that the hearer has understood the speaker (Clark, 1996).
(People need grounding for non-linguistic actions as well; the reason an elevator but-
ton lights up when it’s pressed is to acknowledge that the elevator has indeed been
called, essentially grounding your action of pushing the button (Norman, 1988).)
Humans constantly ground each other’s utterances. We can ground by explicitly
saying “OK”, as the agent does in A8 or A10 . Or we can ground by repeating what
the other person says; in utterance A2 the agent repeats “in May”, demonstrating her
understanding to the client. Or notice that when the client answers a question, the
agent begins the next question with “And”. The “And” implies that the new question
is ‘in addition’ to the old question, again indicating to the client that the agent has
successfully understood the answer to the last question.
In addition to side-sequences, questions often have presequences, like the fol-
lowing example where a user starts with a question about the system’s capabilities
(“Can you make train reservations”) before making a request.
User: Can you make train reservations?
System: Yes I can.
User: Great, I’d like to reserve a seat on the 4pm train to New York.
Initiative
Sometimes a conversation is completely controlled by one participant. For exam-
ple, a reporter interviewing a chef might ask questions, and the chef responds. We say that the reporter in this case has the conversational initiative (Carbonell, 1970;
Nickerson, 1976). In normal human-human dialogue, however, it’s more common
for initiative to shift back and forth between the participants, as they sometimes
answer questions, sometimes ask them, sometimes take the conversation in new directions, sometimes not. You may ask me a question, and then I respond by asking you
to clarify something you said, which leads the conversation in all sorts of ways. We
call such interactions mixed initiative (Carbonell, 1970).
Full mixed initiative, while the norm for human-human conversations, can be
difficult for dialogue systems. The most primitive dialogue systems tend to use
system-initiative, where the system asks a question and the user can’t do anything
until they answer it, or user-initiative like simple search engines, where the user
specifies a query and the system passively responds. Even modern large language
model-based dialogue systems, which come much closer to using full mixed initia-
tive, often don’t have completely natural initiative switching. Getting this right is an
important goal for modern systems.
Figure 15.3 Architecture of a dialogue-state system for task-oriented dialogue from Williams et al. (2016).
Many domains require multiple frames. Besides frames for car or hotel reser-
vations, we might need other frames for things like general route information (for questions like Which airlines fly from Boston to San Francisco?). That means the system must be able to disambiguate which slot of which frame a given input is
supposed to fill.
The task of slot-filling is usually combined with two other tasks, so that we extract three things from each user utterance. The first is domain classification: is this user, for example, talking about airlines, programming an alarm clock, or dealing with their calendar? The second is user intent determination: what general task or goal is the user trying to accomplish? For example, the task could be to Find a Movie, or Show a Flight, or Remove a Calendar Appointment. Together, the domain classification and intent determination tasks decide which frame we are filling. Finally, we need to do slot filling itself: extract the particular slots and fillers that the user intends the system to understand from their utterance with respect to their intent. From a user
utterance like this one:
Show me morning flights from Boston to San Francisco on Tuesday
a system might want to build a representation like:
DOMAIN:      AIR-TRAVEL
INTENT:      SHOW-FLIGHTS
ORIGIN-CITY: Boston
DEST-CITY:   San Francisco
ORIGIN-DATE: Tuesday
ORIGIN-TIME: morning
Fig. 15.5 shows a typical architecture for inference. The input words w1 ...wn
are passed through a pretrained language model encoder, followed by a feedforward
layer and a softmax at each token position over possible BIO tags, with the output
a series of BIO tags s1 ...sn . We generally combine the domain-classification and
intent-extraction tasks with slot-filling by adding a domain concatenated with an
intent as the desired output for the final EOS token.
Once the sequence labeler has tagged the user utterance, a filler string can be ex-
tracted for each slot from the tags (e.g., “San Francisco”), and these word strings
can then be normalized to the correct form in the ontology (perhaps the airport
Figure 15.5 Slot filling by passing input words through an encoder, and then using a linear
or feedforward layer followed by a softmax to generate a series of BIO tags. Here we also
show a final state: a domain concatenated with an intent.
code ‘SFO’), for example with dictionaries that specify that SF, SFO, and San Fran-
cisco are synonyms. Often in industrial contexts, combinations of rules and machine
learning are used for each of these components.
We can make a very simple frame-based dialogue system by wrapping a small
amount of code around this slot extractor. Mainly we just need to ask the user
questions until all the slots are full, do a database query, then report back to the user,
using hand-built templates for generating sentences.
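Here is a toy sketch of such a wrapper in Python. The extract_slots() and lookup_flights() functions, the slot names, and the question prompts are all hypothetical stand-ins for the slot extractor, database, and templates described above.

# A toy frame-based dialogue loop: ask until the frame is full, then query and report.
FRAME = {"ORIGIN-CITY": "Where are you flying from?",
         "DEST-CITY":   "Where are you flying to?",
         "ORIGIN-DATE": "What day do you want to leave?"}

def dialogue_loop(extract_slots, lookup_flights):
    filled = {}
    while len(filled) < len(FRAME):
        # Ask about the first still-empty slot, using a hand-built template.
        slot = next(s for s in FRAME if s not in filled)
        utterance = input(FRAME[slot] + " ")
        filled.update(extract_slots(utterance))   # may fill several slots at once
    flights = lookup_flights(filled)              # database query over the filled frame
    print(f"I found {len(flights)} flights from {filled['ORIGIN-CITY']} "
          f"to {filled['DEST-CITY']} on {filled['ORIGIN-DATE']}.")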
   \textrm{Slot Error Rate for a Sentence} = \frac{\textrm{# of inserted/deleted/substituted slots}}{\textrm{# of total reference slots for sentence}}    (15.1)
For example a system that extracted the slot structure below from this sentence:
(15.2) Make an appointment with Chris at 10:30 in Gates 104
Slot Filler
PERSON Chris
TIME 11:30 a.m.
ROOM Gates 104
has a slot error rate of 1/3, since the TIME is wrong. Instead of error rate, slot
precision, recall, and F-score can also be used. We can also measure efficiency costs like the length of the dialogue in seconds or turns.
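A minimal sketch of the slot error rate computation in Python (the dictionary representation of slots is an assumption):

def slot_error_rate(reference, hypothesis):
    """Count slots that are missing, spurious, or have the wrong filler,
    divided by the number of reference slots (Eq. 15.1)."""
    errors = 0
    for slot, filler in reference.items():
        if hypothesis.get(slot) != filler:        # deleted or substituted slot
            errors += 1
    errors += sum(1 for slot in hypothesis if slot not in reference)   # inserted slots
    return errors / len(reference)

# The example above: the TIME filler is wrong, so the rate is 1/3.
ref = {"PERSON": "Chris", "TIME": "10:30", "ROOM": "Gates 104"}
hyp = {"PERSON": "Chris", "TIME": "11:30 a.m.", "ROOM": "Gates 104"}
print(slot_error_rate(ref, hyp))   # 0.333...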
Figure 15.6 shows a tagset for a restaurant recommendation system, and Fig. 15.7
shows these tags labeling a sample dialogue from the HIS system (Young et al.,
2010). This example also shows the content of each dialogue act, which are the slot
fillers being communicated. So the user might INFORM the system that they want
Italian food near a museum, or CONFIRM with the system that the price is reasonable.
place some constraints on the slots and values, the tasks of dialogue-act detection and
slot-filling are often performed jointly. The state tracker can just take the output of
a slot-filling sequence-model (Section 15.2.1) after each sentence, or do something
more complicated like training a classifier to decide if a value has been changed.
features    examples
semantic    embedding similarity between correction and user's prior utterance
phonetic    phonetic overlap between candidate correction act and user's prior utterance
            (i.e., "WhatsApp" may be incorrectly recognized as "What's up")
prosodic    hyperarticulation, increases in F0 range, pause duration, and word duration
ASR         ASR confidence, language model probability
There’s a tradeoff. Explicit confirmation makes it easier for users to correct mis-
recognitions by just answering “no” to the confirmation question. But explicit con-
firmation is time-consuming and awkward (Danieli and Gerbino 1995, Walker et al.
1998a). We also might want an act that expresses lack of understanding: rejection, for example with a prompt like I'm sorry, I didn't understand that. To decide among these acts, we can make use of the fact that ASR systems often compute their confidence in their transcription (often based on the log-likelihood the system assigns the sentence). A system can thus choose to explicitly confirm only low-confidence sentences. Or systems might have a four-tiered level of confidence, separated by three thresholds.
Once a dialogue act has been chosen, we need to generate the text of the re-
sponse to the user. This part of the generation process is called sentence realization. Fig. 15.9 shows a sample input/output for the sentence realization phase. The content planner has chosen the dialogue act RECOMMEND and some slots (name, neighborhood, cuisine) and fillers. The sentence realizer generates a sentence like lines 1 or 2 (by training on examples of representation/sentence pairs from a corpus of labeled dialogues). Because we won't see every restaurant or attribute in every possible wording, we can delexicalize: generalize the training examples by replacing specific slot-value words in the training set with a generic placeholder token representing the slot. Fig. 15.10 shows the sentences in Fig. 15.9 delexicalized.
We can map from frames to delexicalized sentences with an encoder-decoder model (Mrkšić et al. 2017, inter alia), trained on hand-labeled dialogue corpora like MultiWOZ (Budzianowski et al., 2018). The input to the encoder is a sequence of tokens xt that represent the dialogue act (e.g., RECOMMEND) and its arguments (e.g., service:decent, cuisine:null) (Nayak et al., 2017), as in Fig. 15.11. The decoder outputs the delexicalized English sentence "name has decent service", which we can then relexicalize, i.e., fill back in correct slot values, resulting in "Au Midi has decent service".
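A minimal sketch of the relexicalization step in Python (the slot names and placeholder convention are illustrative assumptions):

def relexicalize(delexicalized, slot_values):
    """Replace generic slot placeholders in a generated sentence with actual values."""
    sentence = delexicalized
    for slot, value in slot_values.items():
        sentence = sentence.replace(slot, value)
    return sentence

print(relexicalize("name has decent service", {"name": "Au Midi"}))
# -> Au Midi has decent service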
15.4 Chatbots
Chatbots are systems that can carry on extended conversations with the goal of
mimicking the unstructured conversations or ‘chats’ characteristic of informal human-
human interaction. While early systems like ELIZA (Weizenbaum, 1966) or PARRY
(Colby et al., 1971) had theoretical goals like testing theories of psychological coun-
seling, for most of the last 50 years chatbots have been designed for entertainment.
That changed with the recent rise of neural chatbots like ChatGPT, which incor-
porate solutions to NLP tasks like question answering, writing tools, or machine
translation into a conversational interface. A conversation with ChatGPT is shown
in Fig. 15.12. In this section we describe neural chatbot architectures and datasets.
[TBD]
Figure 15.12 A conversation with ChatGPT.
To use the system in inference, the model first generates a response given the context,
and then it is given the attribute and asked to generate a rating. The result is a
generated turn along with a label. This label isn't shown to the user but can be used for filtering, either at training time or at deployment time. For example, the system can generate multiple potential responses, filter out any response that is unsafe, and return to the user the highest-ranking response.
From these prompts, the system learns to generate texts with Search Query
turns for fact-based questions from the user, and these are passed to a search engine
to generate the Search Results turns.
Alternatively, systems can be finetuned to know when to use a search engine. For example, labelers can interact with a system, fact-check each of the responses, and whenever the system emits an incorrect response, perform the web search queries that the system should have used to check its answer; the interaction is then recorded and used for fine-tuning. Or labelers can look at a transcript of a
language model carrying on a dialogue, and similarly mark every place where a fact
was wrong (or out-of-date) and write the set of search queries that would have been
appropriate. A system is then fine-tuned to generate search query turns which
are again passed to a search engine to generate the search responses. The set
of pages or snippets returned by the search engine in the search response turn are
then treated as the context for generation, similarly to the retrieval-based question-
answering methods of Chapter 14.
15.4.4 RLHF
A more sophisticated family of methods uses reinforcement learning to learn to
match human preferences for generated turns. In this method, RLHF (Reinforcement Learning from Human Feedback), we give a system a dialogue context and sample two possible turns from the language model. We then have humans label which of the two is better, creating a large dataset of sentence pairs with human preferences. These pairs are used to train a dialogue policy, and reinforcement learning is used to train the language model to generate turns that have higher rewards (Christiano et al., 2017; Ouyang et al., 2022). While RLHF is the current state of the art at the time of this writing, a number of alternatives that don't require reinforcement learning have recently been developed (e.g., Rafailov et al., 2023), so this aspect of the field is changing very quickly.
3. Iteratively test the design on users: An iterative design cycle with embedded
user testing is essential in system design (Nielsen 1992, Cole et al. 1997, Yankelovich
et al. 1995, Landauer 1995). For example in a well-known incident, an early dia-
logue system required the user to press a key to interrupt the system (Stifelman et al., 1993). But user testing showed users barged in (interrupted, talking over the system), which led to a redesign of the system to recognize overlapped speech. It's also important to incorporate value sensitive design, in which we carefully consider during the design process the benefits, harms, and possible stakeholders of the resulting system (Friedman et al. 2017, Friedman and Hendry 2019).
conspiracy theories, and personal attacks on its users. Tay had learned these biases
and actions from its training data, including from users who seemed to be purposely
teaching the system to repeat this kind of language (Neff and Nagy 2016). Hender-
son et al. (2017) examined dialogue datasets used to train corpus-based chatbots and
found toxic and abusive language, especially in social media corpora like Twitter
and Reddit, and indeed such language then appears in the text generated by lan-
guage models and dialogue systems (Gehman et al. 2020; Xu et al. 2020) which
can even amplify the bias from the training data (Dinan et al., 2020). Liu et al.
(2020) developed another method for investigating bias, testing how neural dialogue
systems responded to pairs of simulated user turns that are identical except for mentioning different genders or races. They found, for example, that simple changes like
using the word ‘she’ instead of ‘he’ in a sentence caused systems to respond more
offensively and with more negative sentiment.
Another important ethical issue is privacy. Already in the first days of ELIZA,
Weizenbaum pointed out the privacy implications of people’s revelations to the chat-
bot. The ubiquity of in-home dialogue systems means they may often overhear
private information (Henderson et al., 2017). If a chatbot is human-like, users are
also more likely to disclose private information, and less likely to worry about the
harm of this disclosure (Ischen et al., 2019). In general, chatbots that are trained
on transcripts of human-human or human-machine conversation must anonymize
personally identifiable information.
Finally, chatbots raise important issues of gender equality in addition to textual
bias. Current chatbots are overwhelmingly given female names, likely perpetuating
the stereotype of a subservient female servant (Paolino, 2017). And when users
use sexually harassing language, most commercial chatbots evade or give positive
responses rather than responding in clear negative ways (Fessler, 2017).
These ethical issues are an important area of investigation, including finding
ways to mitigate problems of abuse and toxicity, like detecting and responding ap-
propriately to toxic contexts (Wolf et al. 2017, Dinan et al. 2020, Xu et al. 2020).
Value sensitive design, carefully considering possible harms in advance (Friedman et al. 2017, Friedman and Hendry 2019), is also important; Dinan et al. (2021) give a number of suggestions for best practices in dialogue system design. For example, it is important to get informed consent from participants, whether their data is used for training or they are interacting with a deployed system. Because dialogue systems by definition involve human participants, researchers also work on these issues with the Institutional Review Boards (IRBs) at their institutions, which help protect the safety of experimental subjects.
15.6 Summary
Chatbots and dialogue systems are crucial speech and language processing appli-
cations that are already widely used commercially.
• In human dialogue, speaking is a kind of action; these acts are referred to
as speech acts or dialogue acts. Speakers also attempt to achieve common ground by acknowledging that they have understood each other. Conversation
also is characterized by turn structure and dialogue structure.
• Chatbots are conversational systems designed to mimic the appearance of in-
formal human conversation. Rule-based chatbots like ELIZA and its modern
descendants use rules to map user sentences into system responses. Corpus-
based chatbots mine logs of human conversation to learn to automatically map
user sentences into system responses.
• For task-based dialogue, most commercial dialogue systems use the GUS or
frame-based architecture, in which the designer specifies frames consisting of
slots that the system must fill by asking the user.
• The dialogue-state architecture augments the GUS frame-and-slot architec-
ture with richer representations and more sophisticated algorithms for keeping track of the user's dialogue acts, policies for generating its own dialogue acts, and
a natural language component.
• Dialogue systems are a kind of human-computer interaction, and general HCI
principles apply in their design, including the role of the user, simulations such
as Wizard-of-Oz systems, and the importance of iterative design and testing
on real users.
Another influential line of research from that decade focused on modeling the hi-
erarchical structure of dialogue. Grosz's pioneering dissertation (Grosz, 1977b) first showed
that “task-oriented dialogues have a structure that closely parallels the structure of
the task being performed” (p. 27), leading to her work with Sidner and others show-
ing how to use similar notions of intention and plans to model discourse structure
and coherence in dialogue. See, e.g., Lochbaum et al. (2000) for a summary of the
role of intentional structure in dialogue.
Yet a third line, first suggested by Bruce (1975), suggested that since speech acts
are actions, they should be planned like other actions, and drew on the AI planning
literature (Fikes and Nilsson, 1971). A system seeking to find out some information
can come up with the plan of asking the interlocutor for the information. A system
hearing an utterance can interpret a speech act by running the planner “in reverse”,
using inference rules to infer from what the interlocutor said what the plan might
have been. Plan-based models of dialogue are referred to as BDI models because
such planners model the beliefs, desires, and intentions (BDI) of the system and in-
terlocutor. BDI models of dialogue were first introduced by Allen, Cohen, Perrault,
and their colleagues in a number of influential papers showing how speech acts could
be generated (Cohen and Perrault, 1979) and interpreted (Perrault and Allen 1980,
Allen and Perrault 1980). At the same time, Wilensky (1983) introduced plan-based
models of understanding as part of the task of interpreting stories.
In the 1990s, machine learning models that had first been applied to natural
language processing began to be applied to dialogue tasks like slot filling (Miller
et al. 1994, Pieraccini et al. 1991). This period also saw lots of analytic work on the
linguistic properties of dialogue acts and on machine-learning-based methods for
their detection. (Sag and Liberman 1975, Hinkelman and Allen 1989, Nagata and
Morimoto 1994, Goodwin 1996, Chu-Carroll 1998, Shriberg et al. 1998, Stolcke
et al. 2000, Gravano et al. 2012). This work strongly informed the development
of the dialogue-state model (Larsson and Traum, 2000). Dialogue state tracking
quickly became an important problem for task-oriented dialogue, and there has been
an influential annual evaluation of state-tracking algorithms (Williams et al., 2016).
The turn of the century saw a line of work on applying reinforcement learning
to dialogue, which first came out of AT&T and Bell Laboratories with work on
MDP dialogue systems (Walker 2000, Levin et al. 2000, Singh et al. 2002) along
with work on cue phrases, prosody, and rejection and confirmation. Reinforcement
learning research turned quickly to the more sophisticated POMDP models (Roy
et al. 2000, Lemon et al. 2006, Williams and Young 2007) applied to small slot-
filling dialogue tasks. Neural reinforcement learning models have been used both for
chatbot systems, for example simulating dialogues between two dialogue systems,
rewarding good conversational properties like coherence and ease of answering (Li
et al., 2016a), and for task-oriented dialogue (Williams et al., 2017).
By around 2010 the GUS architecture finally began to be widely used commer-
cially in dialogue systems on phones like Apple’s SIRI (Bellegarda, 2013) and other
digital assistants.
The rise of the web gave rise to corpus-based chatbot architectures around the
turn of the century, first using information retrieval models and then in the 2010s,
after the rise of deep learning, with sequence-to-sequence models.
[TBD: Modern history of neural chatbots]
Other important dialogue areas include the study of affect in dialogue (Rashkin
et al. 2019, Lin et al. 2019) and conversational interface design (Cohen et al. 2004,
Harris 2005, Pearl 2017, Deibel and Evanhoe 2021).
Exercises
15.1 Write a finite-state automaton for a dialogue manager for checking your bank
balance and withdrawing money at an automated teller machine.
15.2 A dispreferred response is a response that has the potential to make a person uncomfortable or embarrassed in the conversational context; the most common example of a dispreferred response is turning down a request. People signal
their discomfort with having to say no with surface cues (like the word well),
or via significant silence. Try to notice the next time you or someone else
utters a dispreferred response, and write down the utterance. What are some
other cues in the response that a system might use to detect a dispreferred
response? Consider non-verbal cues like eye gaze and body gestures.
15.3 When asked a question to which they aren’t sure they know the answer, peo-
ple display their lack of confidence by cues that resemble other dispreferred
responses. Try to notice some unsure answers to questions. What are some
of the cues? If you have trouble doing this, read Smith and Clark (1993) and
listen specifically for the cues they mention.
15.4 Implement a small air-travel help system based on text input. Your system
should get constraints from users about a particular flight that they want to
take, expressed in natural language, and display possible flights on a screen.
Make simplifying assumptions. You may build in a simple flight database or
you may use a flight information system on the Web as your backend.
CHAPTER
16 Automatic Speech Recognition and Text-to-Speech
In 1769, Wolfgang von Kempelen built for the Empress Maria Theresa the famous Mechanical Turk, a chess-playing automaton
consisting of a wooden box filled with gears, behind which sat a robot mannequin
who played chess by moving pieces with his mechanical arm. The Turk toured Eu-
rope and the Americas for decades, defeating Napoleon Bonaparte and even playing
Charles Babbage. The Mechanical Turk might have been one of the early successes
of artificial intelligence were it not for the fact that it was, alas, a hoax, powered by
a human chess player hidden inside the box.
What is less well known is that von Kempelen, an extraordinarily prolific inventor, also built between 1769 and 1790 what was definitely not a hoax: the first full-sentence speech synthesizer, shown partially to the right. His device consisted of a bellows to simulate the lungs, a rubber mouthpiece and a nose aperture, a reed to simulate the vocal folds, various whistles for the fricatives, and a small auxiliary bellows to provide the puff of air for plosives. By moving levers with both hands to open and close apertures, and adjusting the flexible leather "vocal tract", an operator could produce different consonants and vowels.
More than two centuries later, we no longer build our synthesizers out of wood and leather, nor do we need human operators. The modern task of speech synthesis, also called text-to-speech or TTS, is exactly the reverse of ASR: to map text:

It's time for lunch!

to an acoustic waveform:
A second dimension of variation is who the speaker is talking to. Humans speak-
ing to machines (either dictating or talking to a dialogue system) are easier to recog-
nize than humans speaking to humans. Read speech, in which humans are reading out loud, for example in audio books, is also relatively easy to recognize. Recognizing the speech of two humans talking to each other in conversational speech,
for example, for transcribing a business meeting, is the hardest. It seems that when
humans talk to machines, or read without an audience present, they simplify their
speech quite a bit, talking more slowly and more clearly.
A third dimension of variation is channel and noise. Speech is easier to recognize
if it’s recorded in a quiet room with head-mounted microphones than if it’s recorded
by a distant microphone on a noisy city street, or in a car with the window open.
A final dimension of variation is accent or speaker-class characteristics. Speech
is easier to recognize if the speaker is speaking the same dialect or variety that the
system was trained on. Speech by speakers of regional or ethnic dialects, or speech
by children can be quite difficult to recognize if the system is only trained on speak-
ers of standard dialects, or only adult speakers.
A number of publicly available corpora with human-created transcripts are used
to create ASR test and training sets to explore this variation; we mention a few of
them here since you will encounter them in the literature. LibriSpeech is a large
open-source read-speech 16 kHz dataset with over 1000 hours of audio books from
the LibriVox project, with transcripts aligned at the sentence level (Panayotov et al.,
2015). It is divided into an easier (“clean”) and a more difficult portion (“other”)
with the clean portion of higher recording quality and with accents closer to US
English. This was done by running a speech recognizer (trained on read speech from
the Wall Street Journal) on all the audio, computing the WER for each speaker based
on the gold transcripts, and dividing the speakers roughly in half, with recordings
from lower-WER speakers called “clean” and recordings from higher-WER speakers
“other”.
The Switchboard corpus of prompted telephone conversations between strangers
was collected in the early 1990s; it contains 2430 conversations averaging 6 min-
utes each, totaling 240 hours of 8 kHz speech and about 3 million words (Godfrey
et al., 1992). Switchboard has the singular advantage of an enormous amount of
auxiliary hand-done linguistic labeling, including parses, dialogue act tags, phonetic
and prosodic labeling, and discourse and information structure. The CALLHOME
corpus was collected in the late 1990s and consists of 120 unscripted 30-minute
telephone conversations between native speakers of English who were usually close
friends or family (Canavan et al., 1997).
The Santa Barbara Corpus of Spoken American English (Du Bois et al., 2005) is
a large corpus of naturally occurring everyday spoken interactions from all over the
United States, mostly face-to-face conversation, but also town-hall meetings, food
preparation, on-the-job talk, and classroom lectures. The corpus was anonymized by
removing personal names and other identifying information (replaced by pseudonyms
in the transcripts, and masked in the audio).
CORAAL is a collection of over 150 sociolinguistic interviews with African
American speakers, with the goal of studying African American Language (AAL),
the many variations of language used in African American communities (Kendall
and Farrington, 2020). The interviews are anonymized with transcripts aligned at
the utterance level. The CHiME Challenge is a series of difficult shared tasks with
corpora that deal with robustness in ASR. The CHiME 5 task, for example, is ASR of
conversational speech in real home environments (specifically dinner parties). The
corpus contains recordings of twenty different dinner parties in real homes, each
with four participants, and in three locations (kitchen, dining area, living room),
recorded both with distant room microphones and with body-worn mikes. The
HKUST Mandarin Telephone Speech corpus has 1206 ten-minute telephone con-
versations between speakers of Mandarin across China, including transcripts of the
conversations, which are between either friends or strangers (Liu et al., 2006). The
AISHELL-1 corpus contains 170 hours of Mandarin read speech of sentences taken
from various domains, read by different speakers mainly from northern China (Bu
et al., 2017).
Figure 16.1 shows the rough percentage of incorrect words (the word error rate,
or WER, defined on page 346) from state-of-the-art systems on some of these tasks.
Note that the error rate on read speech (like the LibriSpeech audiobook corpus) is
around 2%; this is a solved task, although these numbers come from systems that re-
quire enormous computational resources. By contrast, the error rate for transcribing
conversations between humans is much higher; 5.8 to 11% for the Switchboard and
CALLHOME corpora. The error rate is higher yet again for speakers of varieties
like African American Vernacular English, and yet again for difficult conversational
tasks like transcription of 4-speaker dinner party speech, which can have error rates
as high as 81.3%. Character error rates (CER) are also much lower for read Man-
darin speech than for natural conversation.
by the specific way that air passes through the glottis and out the oral or nasal cav-
ities. We represent sound waves by plotting the change in air pressure over time.
One metaphor which sometimes helps in understanding these graphs is that of a ver-
tical plate blocking the air pressure waves (perhaps in a microphone in front of a
speaker’s mouth, or the eardrum in a hearer’s ear). The graph measures the amount
of compression or rarefaction (uncompression) of the air molecules at this plate.
Figure 16.2 shows a short segment of a waveform taken from the Switchboard corpus
of telephone speech of the vowel [iy] from someone saying “she just had a baby”.
Figure 16.2 A waveform of an instance of the vowel [iy] (the last vowel in the word “baby”). The y-axis
shows the level of air pressure above and below normal atmospheric pressure. The x-axis shows time. Notice
that the wave repeats regularly.
The first step in digitizing a sound wave like Fig. 16.2 is to convert the analog
representations (first air pressure and then analog electric signals in a microphone)
into a digital signal. This analog-to-digital conversion has two steps: sampling and
quantization. To sample a signal, we measure its amplitude at a particular time; the
sampling rate is the number of samples taken per second. To accurately measure a
wave, we must have at least two samples in each cycle: one measuring the positive
part of the wave and one measuring the negative part. More than two samples per
cycle increases the amplitude accuracy, but fewer than two samples causes the fre-
quency of the wave to be completely missed. Thus, the maximum frequency wave
that can be measured is one whose frequency is half the sample rate (since every
cycle needs two samples). This maximum frequency for a given sampling rate is
called the Nyquist frequency. Most information in human speech is in frequencies
below 10,000 Hz; thus, a 20,000 Hz sampling rate would be necessary for com-
plete accuracy. But telephone speech is filtered by the switching network, and only
frequencies less than 4,000 Hz are transmitted by telephones. Thus, an 8,000 Hz
sampling rate is sufficient for telephone-bandwidth speech like the Switchboard
corpus, while 16,000 Hz sampling is often used for microphone speech.
Although using higher sampling rates produces higher ASR accuracy, we can’t
combine different sampling rates for training and testing ASR systems. Thus if
we are testing on a telephone corpus like Switchboard (8 kHz sampling), we must downsample our training corpus to 8 kHz. Similarly, if we are training on multiple corpora and one of them includes telephone speech, we downsample all the wideband corpora to 8 kHz.
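For example, a 16 kHz waveform can be downsampled to 8 kHz with a standard resampler; this is a minimal sketch using scipy (the input array is an illustrative stand-in):

import numpy as np
from scipy.signal import resample_poly

x_16k = np.random.randn(16000)               # stand-in for one second of 16 kHz audio
x_8k = resample_poly(x_16k, up=1, down=2)    # low-pass filter and keep every other sample
print(len(x_8k))                             # 8000 samples: the same second at 8 kHz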
Amplitude measurements are stored as integers, either 8 bit (values from -128 to 127) or 16 bit (values from -32768 to 32767). This process of representing real-valued numbers as integers is called quantization; all values that are closer together than
the minimum granularity (the quantum size) are represented identically. We refer to
each sample at time index n in the digitized, quantized waveform as x[n].
Once data is quantized, it is stored in various formats. One parameter of these
formats is the sample rate and sample size discussed above; telephone speech is
often sampled at 8 kHz and stored as 8-bit samples, and microphone data is often
sampled at 16 kHz and stored as 16-bit samples. Another parameter is the number of
channels. For stereo data or for two-party conversations, we can store both channels
in the same file or we can store them in separate files. A final parameter is individual
sample storage—linearly or compressed. One common compression format used for
telephone speech is µ-law (often written u-law but still pronounced mu-law). The
intuition of log compression algorithms like µ-law is that human hearing is more
sensitive at small intensities than large ones; the log represents small values with
more faithfulness at the expense of more error on large values. The linear (unlogged)
values are generally referred to as linear PCM values (PCM stands for pulse code
modulation, but never mind that). Here’s the equation for compressing a linear PCM
sample value x to 8-bit µ-law, (where µ=255 for 8 bits):
   F(x) = \frac{\textrm{sgn}(x)\,\log(1+\mu|x|)}{\log(1+\mu)}, \quad -1 \le x \le 1    (16.1)
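A minimal sketch of Eq. 16.1 in Python (numpy only; the test values are illustrative):

import numpy as np

def mu_law_compress(x, mu=255):
    """Compress linear PCM samples in [-1, 1] with the mu-law curve of Eq. 16.1."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

samples = np.array([-1.0, -0.1, 0.0, 0.1, 1.0])
print(mu_law_compress(samples))   # small values are expanded relative to large ones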
There are a number of standard file formats for storing the resulting digitized wave-
file, such as Microsoft's .wav and Apple's AIFF, all of which have special headers;
simple headerless “raw” files are also used. For example, the .wav format is a sub-
set of Microsoft’s RIFF format for multimedia files; RIFF is a general format that
can represent a series of nested chunks of data and control information. Figure 16.3
shows a simple .wav file with a single data chunk together with its format chunk.
Figure 16.3 Microsoft wavefile header format, assuming simple file with one chunk. Fol-
lowing this 44-byte header would be the data chunk.
16.2.2 Windowing
From the digitized, quantized representation of the waveform, we need to extract
spectral features from a small window of speech that characterizes part of a par-
ticular phoneme. Inside this small window, we can roughly think of the signal as
stationary (that is, its statistical properties are constant within this region). (By contrast, in general, speech is a non-stationary signal, meaning that its statistical properties are not constant over time.) We extract this roughly stationary portion of
speech by using a window which is non-zero inside a region and zero elsewhere, run-
ning this window across the speech signal and multiplying it by the input waveform
to produce a windowed waveform.
The speech extracted from each window is called a frame. The windowing is characterized by three parameters: the window size or frame size of the window (its width in milliseconds), the frame stride (also called shift or offset) between successive windows, and the shape of the window.
To extract the signal, we multiply the value of the signal at time n, s[n], by the value of the window at time n, w[n]:

   y[n] = w[n]\, s[n]    (16.2)
Figure 16.4 Windowing, showing a 25 ms rectangular window with a 10ms stride.
however, abruptly cuts off the signal at its boundaries, which creates problems when
we do Fourier analysis. For this reason, for acoustic feature creation we more commonly use the Hamming window, which shrinks the values of the signal toward zero at the window boundaries, avoiding discontinuities. Figure 16.5 shows both; the equations are as follows (assuming a window that is L frames long):

   \textrm{rectangular:}\quad w[n] = \begin{cases} 1 & 0 \le n \le L-1 \\ 0 & \textrm{otherwise} \end{cases}    (16.3)

   \textrm{Hamming:}\quad w[n] = \begin{cases} 0.54 - 0.46\cos\!\left(\frac{2\pi n}{L}\right) & 0 \le n \le L-1 \\ 0 & \textrm{otherwise} \end{cases}    (16.4)
Figure 16.5 Windowing a sine wave with the rectangular or Hamming windows.
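A minimal numpy sketch of Eq. 16.4 and of applying the window to one frame; the 25 ms window length and 16 kHz rate are just the values used in the text, and the signal is a random stand-in.

import numpy as np

sr = 16000                              # sampling rate (Hz)
L = int(0.025 * sr)                     # 25 ms window = 400 samples
n = np.arange(L)
hamming = 0.54 - 0.46 * np.cos(2 * np.pi * n / L)   # Eq. 16.4

signal = np.random.randn(sr)            # stand-in for one second of speech
frame = signal[:L] * hamming            # windowed frame: y[n] = w[n] s[n]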
for extracting spectral information for discrete frequency bands for a discrete-time
(sampled) signal is the discrete Fourier transform or DFT.
The input to the DFT is a windowed signal x[n]...x[m], and the output, for each
of N discrete frequency bands, is a complex number X[k] representing the magni-
tude and phase of that frequency component in the original signal. If we plot the
magnitude against the frequency, we can visualize the spectrum (see Appendix H
for more on spectra). For example, Fig. 16.6 shows a 25 ms Hamming-windowed
portion of a signal and its spectrum as computed by a DFT (with some additional
smoothing).
Figure 16.6 (a) A 25 ms Hamming-windowed portion of a signal from the vowel [iy]
and (b) its spectrum computed by a DFT.
We do not introduce the mathematical details of the DFT here, except to note that Fourier analysis relies on Euler's formula, with j as the imaginary unit:

   e^{j\theta} = \cos\theta + j\sin\theta    (16.5)
As a brief reminder for those students who have already studied signal processing,
the DFT is defined as follows:
   X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j\frac{2\pi}{N}kn}    (16.6)
A commonly used algorithm for computing the DFT is the fast Fourier transform or FFT. This implementation of the DFT is very efficient but only works for values
of N that are powers of 2.
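A minimal numpy sketch of computing the magnitude spectrum of one Hamming-windowed frame with the FFT (all values are illustrative):

import numpy as np

N = 512                                          # FFT size: a power of 2 >= the frame length
frame = np.random.randn(400) * np.hamming(400)   # stand-in for a windowed 25 ms frame
spectrum = np.fft.rfft(frame, n=N)               # complex X[k] for the non-negative frequencies
magnitude = np.abs(spectrum)                     # magnitude of each frequency component
print(magnitude.shape)                           # (257,) = N/2 + 1 bins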
frequency m can be computed from the raw acoustic frequency by a log transforma-
tion:
   \textrm{mel}(f) = 1127 \ln\!\left(1 + \frac{f}{700}\right)    (16.7)
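In code, Eq. 16.7 is a one-liner (numpy; the example frequencies are illustrative):

import numpy as np

def hz_to_mel(f):
    """Convert frequency in Hz to the mel scale (Eq. 16.7)."""
    return 1127.0 * np.log(1.0 + f / 700.0)

print(hz_to_mel(np.array([100.0, 1000.0, 8000.0])))
# low frequencies get fine resolution; high frequencies are compressed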
We implement this intuition by creating a bank of filters that collect energy from
each frequency band, spread logarithmically so that we have very fine resolution
at low frequencies, and less resolution at high frequencies. Figure 16.7 shows a sample bank of triangular filters that implement this idea, which can be multiplied by the spectrum to get a mel spectrum.
Figure 16.7 The mel filter bank (Davis and Mermelstein, 1980). Each triangular filter,
spaced logarithmically along the mel scale, collects energy from a given frequency range.
Finally, we take the log of each of the mel spectrum values. The human response
to signal level is logarithmic (like the human response to frequency). Humans are
less sensitive to slight differences in amplitude at high amplitudes than at low ampli-
tudes. In addition, using a log makes the feature estimates less sensitive to variations
in input such as power variations due to the speaker’s mouth moving closer or further
from the microphone.
[Figure: the encoder-decoder architecture for ASR. Log mel spectral features (80-dimensional, one vector per frame) are computed from the input, subsampled into a shorter sequence X, passed through the encoder, and the decoder autoregressively generates the output characters ("it's time ...").]
or words. A single word might be 5 letters long but, supposing it lasts about 2
seconds, would take 200 acoustic frames (of 10ms each).
Because this length difference is so extreme for speech, encoder-decoder ar-
chitectures for speech need to have a special compression stage that shortens the
acoustic feature sequence before the encoder stage. (Alternatively, we can use a loss
function that is designed to deal well with compression, like the CTC loss function
we’ll introduce in the next section.)
The goal of the subsampling is to produce a shorter sequence X = x1 , ..., xn that
will be the input to the encoder. The simplest algorithm is a method sometimes
low frame rate called low frame rate (Pundak and Sainath, 2016): for time i we stack (concatenate)
the acoustic feature vector fi with the prior two vectors fi−1 and fi−2 to make a new
vector three times longer. Then we simply delete fi−1 and fi−2 . Thus instead of
(say) a 40-dimensional acoustic feature vector every 10 ms, we have a longer vector
(say 120-dimensional) every 30 ms, with a shorter sequence length n = t/3.1
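Here is a small numpy sketch of this kind of frame stacking; the feature dimensionality and frame count are made-up values, and any ragged tail is simply dropped.

import numpy as np

def low_frame_rate(frames, stack=3):
    """Concatenate each group of `stack` consecutive frames, shortening the
    sequence (and multiplying the dimensionality) by that factor."""
    t, d = frames.shape
    t = t - (t % stack)                       # drop any leftover frames at the end
    return frames[:t].reshape(t // stack, stack * d)

feats = np.random.randn(100, 40)              # 100 frames of 40-dim features, 10 ms apart
stacked = low_frame_rate(feats)
print(stacked.shape)                          # (33, 120): 120-dim vectors every 30 ms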
After this compression stage, encoder-decoders for speech use the same archi-
tecture as for MT or other text, composed of either RNNs (LSTMs) or Transformers.
For inference, the probability of the output string Y is decomposed as:
p(y_1, ..., y_n) = ∏_{i=1}^{n} p(y_i | y_1, ..., y_{i−1}, X)   (16.8)
Alternatively we can use beam search as described in the next section. This is par-
ticularly relevant when we are adding a language model.
Adding a language model Since an encoder-decoder model is essentially a con-
ditional language model, encoder-decoders implicitly learn a language model for the
output domain of letters from their training data. However, the training data (speech
paired with text transcriptions) may not include sufficient text to train a good lan-
guage model. After all, it’s easier to find enormous amounts of pure text training
1 There are also more complex alternatives for subsampling, like using a convolutional net that down-
samples with max pooling, or layers of pyramidal RNNs, RNNs where each successive layer has half
the number of RNNs as the previous layer.
data than it is to find text paired with speech. Thus we can usually improve a
model at least slightly by incorporating a very large language model.
The simplest way to do this is to use beam search to get a final beam of hypothesized
n-best list    sentences; this beam is sometimes called an n-best list. We then use a language
rescore    model to rescore each hypothesis on the beam. The scoring is done by interpolating
the score assigned by the language model with the encoder-decoder score
used to create the beam, with a weight λ tuned on a held-out set. Also, since most
models prefer shorter sentences, ASR systems normally have some way of adding a
length factor. One way to do this is to normalize the probability by the number of
characters in the hypothesis |Y |c . The following is thus a typical scoring function
(Chan et al., 2016):
score(Y|X) = (1/|Y|_c) log P(Y|X) + λ log P_LM(Y)   (16.10)
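As a concrete sketch of this rescoring step, the following toy code interpolates an encoder-decoder score with a stand-in language model score, normalizing by character length as in Eq. 16.10. The hypotheses, scores, the weight, and the toy LM are all invented for illustration.

def rescore(nbest, lm_logprob, lam=0.3):
    """Pick the hypothesis maximizing (1/|Y|_c) log P(Y|X) + lambda log P_LM(Y)."""
    scored = []
    for hyp, asr_logprob in nbest:
        n_chars = max(len(hyp), 1)
        scored.append((asr_logprob / n_chars + lam * lm_logprob(hyp), hyp))
    return max(scored)[1]

# Toy n-best list of (hypothesis, log P(Y|X)) pairs and a toy "language model".
nbest = [("it's time to go", -12.0), ("its time to go", -11.5)]
toy_lm = lambda s: -0.5 * len(s.split())      # stand-in for a real LM log probability
print(rescore(nbest, toy_lm))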
16.3.1 Learning
Encoder-decoders for speech are trained with the normal cross-entropy loss gener-
ally used for conditional language models. At timestep i of decoding, the loss is the
negative log probability of the correct token (letter) y_i:

L_CE(ŷ_i, y_i) = − log p(y_i | y_1, ..., y_{i−1}, X)   (16.11)
The loss for the entire sentence is the sum of these losses:
L_CE = − ∑_{i=1}^{m} log p(y_i | y_1, ..., y_{i−1}, X)   (16.12)
This loss is then backpropagated through the entire end-to-end model to train the
entire encoder-decoder.
As we described in Chapter 13, we normally use teacher forcing, in which the
decoder history is forced to be the correct gold yi rather than the predicted ŷi . It’s
also possible to use a mixture of the gold and decoder output, for example using
the gold output 90% of the time, but with probability .1 taking the decoder output
instead:
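As a small illustration of this gold/predicted mixing (often called scheduled sampling), here is a sketch; the 90/10 split follows the example in the text, while the token sequences are invented.

import random

def next_decoder_input(gold_token, predicted_token, p_gold=0.9):
    """With probability p_gold feed the gold token (teacher forcing);
    otherwise feed the decoder's own previous prediction."""
    return gold_token if random.random() < p_gold else predicted_token

gold = list("dinner")
predicted = list("dinnir")                    # pretend decoder outputs from prior steps
inputs = [next_decoder_input(g, p) for g, p in zip(gold, predicted)]
print(inputs)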
16.4 CTC
We pointed out in the previous section that speech recognition has two particular
properties that make it very appropriate for the encoder-decoder architecture, where
the encoder produces an encoding of the input that the decoder uses attention to
explore. First, in speech we have a very long acoustic input sequence X mapping to
a much shorter sequence of letters Y , and second, it’s hard to know exactly which
part of X maps to which part of Y .
In this section we briefly introduce an alternative to encoder-decoder: an algo-
CTC rithm and loss function called CTC, short for Connectionist Temporal Classifica-
tion (Graves et al., 2006), that deals with these problems in a very different way. The
intuition of CTC is to output a single character for every frame of the input, so that
the output is the same length as the input, and then to apply a collapsing function
that combines sequences of identical letters, resulting in a shorter sequence.
Let’s imagine inference on someone saying the word dinner, and let’s suppose
we had a function that chooses the most probable letter for each input spectral frame
representation xi . We’ll call the sequence of letters corresponding to each input
alignment frame an alignment, because it tells us where in the acoustic signal each letter aligns
to. Fig. 16.9 shows one such alignment, and what happens if we use a collapsing
function that just removes consecutive duplicate letters.
Y (output) d i n e r
A (alignment) d i i n n n n e r r r r r r
wavefile
Figure 16.9 A naive algorithm for collapsing an alignment between input and letters.
Well, that doesn’t work; our naive algorithm has transcribed the speech as diner,
not dinner! Collapsing doesn’t handle double letters. There’s also another problem
with our naive function; it doesn’t tell us what symbol to align with silence in the
input. We don’t want to be transcribing silence as random letters!
The CTC algorithm solves both problems by adding to the transcription alphabet
blank a special symbol for a blank, which we’ll represent as . The blank can be used in
the alignment whenever we don’t want to transcribe a letter. Blank can also be used
between letters; since our collapsing function collapses only consecutive duplicate
letters, it won’t collapse across . More formally, let’s define the mapping B : a → y
between an alignment a and an output y, which collapses all repeated letters and
then removes all blanks. Fig. 16.10 sketches this collapsing function B.
Y (output) d i n n e r
remove blanks d i n n e r
merge duplicates d i ␣ n ␣ n e r ␣
A (alignment) d i ␣ n n ␣ n e r r r r ␣ ␣
X (input) x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14
Figure 16.10 The CTC collapsing function B, showing the space blank character ; re-
peated (consecutive) characters in an alignment A are removed to form the output Y .
d i i n ␣ n n e e e r ␣ r r
d d i n n ␣ n e r r ␣ ␣ ␣ ␣
d d d i n ␣ n n ␣ ␣ ␣ e r r
Figure 16.11 Three other legitimate alignments producing the transcript dinner.
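Here is a minimal Python sketch of the collapsing function B: merge runs of identical symbols, then delete blanks. The underscore stands in for the blank symbol, and the three test strings are the alignments of Fig. 16.11.

from itertools import groupby

BLANK = "_"                                   # stands in for the special CTC blank

def ctc_collapse(alignment):
    """B: merge consecutive duplicates, then remove all blanks."""
    merged = [symbol for symbol, _ in groupby(alignment)]
    return "".join(s for s in merged if s != BLANK)

for a in ["diin_nneeer_rr", "ddinn_nerr____", "dddin_nn___err"]:
    print(ctc_collapse(a))                    # each prints "dinner"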
Thus to find the best alignment Â = {â_1, ..., â_T} we can greedily choose the character
with the max probability at each time step t:

â_t = argmax_{c∈C} p_t(c | X)   (16.15)
We then pass the resulting sequence A to the CTC collapsing function B to get the
output sequence Y .
Let’s talk about how this simple inference algorithm for finding the best align-
ment A would be implemented. Because we are making a decision at each time
point, we can treat CTC as a sequence-modeling task, where we output one letter
ŷt at time t corresponding to each input token xt , eliminating the need for a full de-
coder. Fig. 16.12 sketches this architecture, where we take an encoder, produce a
hidden state ht at each timestep, and decode by taking a softmax over the character
vocabulary at each time step.
output letter y1 y2 y3 y4 y5 … yn
sequence Y
i i i t t …
Classifier …
+softmax
ENCODER
Shorter input
sequence X x1 … xn
Subsampling
Feature Computation
Figure 16.12 Inference with CTC: using an encoder-only model, with decoding done by
simple softmaxes over the hidden state ht at each output step.
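A minimal sketch of this greedy inference, assuming the per-frame softmax outputs are already available as a numpy array; the vocabulary and the fake probabilities are invented for illustration.

import numpy as np
from itertools import groupby

VOCAB = ["_", "d", "i", "n", "e", "r"]        # index 0 is the blank (toy vocabulary)

def greedy_ctc_decode(frame_probs):
    """frame_probs: (T, |V|) array of per-frame softmax outputs."""
    best = np.argmax(frame_probs, axis=1)               # most probable symbol per frame
    merged = [i for i, _ in groupby(best)]               # merge consecutive duplicates
    return "".join(VOCAB[i] for i in merged if i != 0)   # then drop blanks

# Fake softmax outputs for 8 frames, peaked on the alignment "d d i _ n n e r".
probs = np.full((8, len(VOCAB)), 0.02)
for t, sym in enumerate([1, 1, 2, 0, 3, 3, 4, 5]):
    probs[t, sym] = 0.9
print(greedy_ctc_decode(probs))               # -> "diner" (no blank between the n's)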
Alas, there is a potential flaw with the inference algorithm sketched in (Eq. 16.15)
and Fig. 16.11. The problem is that we chose the most likely alignment A, but the
most likely alignment may not correspond to the most likely final collapsed output
string Y . That’s because there are many possible alignments that lead to the same
output string, and hence the most likely output string might not correspond to the
most probable alignment. For example, imagine the most probable alignment A for
an input X = [x1 x2 x3 ] is the string [a b ␣] but the next two most probable alignments
are [b ␣ b] and [␣ b b]. The output Y = [b b], summing over those two alignments,
might be more probable than Y =[a b].
For this reason, the most probable output sequence Y is the one that has, not
the single best CTC alignment, but the highest sum over the probability of all its
possible alignments:
P_CTC(Y|X) = ∑_{A∈B^{−1}(Y)} P(A|X)
           = ∑_{A∈B^{−1}(Y)} ∏_{t=1}^{T} p(a_t | h_t)   (16.16)
Alas, summing over all alignments is very expensive (there are a lot of alignments),
so we approximate this sum by using a version of Viterbi beam search that cleverly
keeps in the beam the high-probability alignments that map to the same output string,
and sums those as an approximation of (Eq. 16.16). See Hannun (2017) for a clear
explanation of this extension of beam search for CTC.
Because of the strong conditional independence assumption mentioned earlier
(that the output at time t is independent of the output at time t − 1, given the input),
CTC does not implicitly learn a language model over the data (unlike the attention-
based encoder-decoder architectures). It is therefore essential when using CTC to
interpolate a language model (and some sort of length factor L(Y )) using interpola-
tion weights that are trained on a devset:
score_CTC(Y|X) = log P_CTC(Y|X) + λ_1 log P_LM(Y) + λ_2 L(Y)   (16.17)
To compute CTC loss function for a single input pair (X,Y ), we need the probability
of the output Y given the input X. As we saw in Eq. 16.16, to compute the probability
of a given output Y we need to sum over all the possible alignments that would
collapse to Y . In other words:
P_CTC(Y|X) = ∑_{A∈B^{−1}(Y)} ∏_{t=1}^{T} p(a_t | h_t)   (16.19)
Naively summing over all possible alignments is not feasible (there are too many
alignments). However, we can efficiently compute the sum by using dynamic programming.
For inference, we can combine the two with the language model (or the length
penalty), again with learned weights:
Ŷ = argmax_Y [ λ log P_encdec(Y|X) + (1 − λ) log P_CTC(Y|X) + log P_LM(Y) ]   (16.21)
a hidden state h_t^enc given the input x_1...x_t. The language model predictor takes as input
the previous output token (not counting blanks), outputting a hidden state h_u^pred.
The two are passed through another network whose output is then passed through a
softmax to predict the next character.
P_RNN−T(Y|X) = ∑_{A∈B^{−1}(Y)} P(A|X)
             = ∑_{A∈B^{−1}(Y)} ∏_{t=1}^{T} p(a_t | h_t, y_{<u_t})
Figure 16.14 The RNN-T model computing the output token distribution at time t by inte-
grating the output of a CTC acoustic encoder and a separate ‘predictor’ language model.
This utterance has six substitutions, three insertions, and one deletion:
Word Error Rate = 100 × (6 + 3 + 1) / 13 = 76.9%
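Here is a small sketch of computing word error rate with a standard word-level edit distance (dynamic programming over insertions, deletions, and substitutions); in practice evaluation is usually done with the sclite tool described next.

def word_error_rate(ref, hyp):
    """100 * (substitutions + insertions + deletions) / number of reference words."""
    r, h = ref.split(), hyp.split()
    # d[i][j]: edit distance between the first i reference and first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(r)][len(h)] / len(r)

print(word_error_rate("it was the best of times", "it was best of the times"))  # 33.3%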
The standard method for computing word error rates is a free script called sclite,
available from the National Institute of Standards and Technologies (NIST) (NIST,
2005). Sclite is given a series of reference (hand-transcribed, gold-standard) sen-
tences and a matching set of hypothesis sentences. Besides performing alignments,
and computing word error rate, sclite performs a number of other useful tasks. For
example, for error analysis it gives useful information such as confusion matrices
showing which words are often misrecognized for others, and summarizes statistics
of words that are often inserted or deleted. sclite also gives error rates by speaker
(if sentences are labeled for speaker ID), as well as useful statistics like the
sentence error rate    sentence error rate, the percentage of sentences with at least one word error.
I II III IV
REF: |it was|the best|of|times it|was the worst|of times| |it was
SYS A:|ITS |the best|of|times it|IS the worst |of times|OR|it was
SYS B:|it was|the best| |times it|WON the TEST |of times| |it was
In region I, system A has two errors (a deletion and an insertion) and system B
has zero; in region III, system A has one error (a substitution) and system B has two.
Let’s define a sequence of variables Z representing the difference between the errors
in the two systems as follows:
N_A^i   the number of errors made on segment i by system A
N_B^i   the number of errors made on segment i by system B
Z_i = N_A^i − N_B^i,  i = 1, 2, ..., n, where n is the number of segments
In the example above, the sequence of Z values is {2, −1, −1, 1}. Intuitively, if
the two systems are identical, we would expect the average difference, that is, the
average of the Z values, to be zero. If we call the true average of the differences
μ_z, we would thus like to know whether μ_z = 0. Following closely the original
proposal and notation of Gillick and Cox (1989), we can estimate the true average
from our limited sample as μ̂_z = (1/n) ∑_{i=1}^{n} Z_i. The estimate of the variance of the Z_i’s
is
σ_z^2 = (1/(n−1)) ∑_{i=1}^{n} (Z_i − μ̂_z)^2   (16.22)
Let

W = μ̂_z / (σ_z / √n)   (16.23)
For a large enough n (> 50), W will approximately have a normal distribution with
unit variance. The null hypothesis is H_0: μ_z = 0, and it can thus be rejected if
2 · P(Z ≥ |w|) ≤ 0.05 (two-tailed) or P(Z ≥ |w|) ≤ 0.05 (one-tailed), where Z is
standard normal and w is the realized value of W; these probabilities can be looked up
in the standard tables of the normal distribution.
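As a worked sketch of this test on the Z values {2, −1, −1, 1} from the example above (here n = 4, far below the n > 50 needed for the normal approximation, so the numbers are purely illustrative):

import math

def mapsswe_w(z):
    """The W statistic of Eq. 16.23 from per-segment error differences Z_i."""
    n = len(z)
    mean = sum(z) / n                                   # estimated mu_z
    var = sum((zi - mean) ** 2 for zi in z) / (n - 1)   # Eq. 16.22
    return mean / math.sqrt(var / n)

def two_tailed_p(w):
    """2 * P(Z >= |w|) for standard normal Z, via the complementary error function."""
    return math.erfc(abs(w) / math.sqrt(2.0))

w = mapsswe_w([2, -1, -1, 1])
print(w, two_tailed_p(w))        # |w| is small here, so H0: mu_z = 0 cannot be rejected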
McNemar’s test Earlier work sometimes used McNemar’s test for significance, but McNemar’s
is only applicable when the errors made by the system are independent, which is not
true in continuous speech recognition, where errors made on a word are extremely
dependent on errors made on neighboring words.
Could we improve on word error rate as a metric? It would be nice, for exam-
ple, to have something that didn’t give equal weight to every word, perhaps valuing
content words like Tuesday more than function words like a or of. While researchers
generally agree that this would be a good idea, it has proved difficult to agree on
a metric that works in every application of ASR. For dialogue systems, however,
where the desired semantic output is more clear, a metric called slot error rate or
concept error rate has proved extremely useful; it is discussed in Chapter 15 on page
317.
16.6 TTS
The goal of text-to-speech (TTS) systems is to map from strings of letters to wave-
forms, a technology that’s important for a variety of applications from dialogue sys-
tems to games to education.
Like ASR systems, TTS systems are generally based on the encoder-decoder
architecture, either using LSTMs or Transformers. There is a general difference in
training. The default condition for ASR systems is to be speaker-independent: they
are trained on large corpora with thousands of hours of speech from many speakers
because they must generalize well to an unseen test speaker. By contrast, in TTS, it’s
less crucial to use multiple voices, and so basic TTS systems are speaker-dependent:
trained to have a consistent voice, on much less data, but all from one speaker. For
example, one commonly used public domain dataset, the LJ speech corpus, consists
of 24 hours of one speaker, Linda Johnson, reading audio books in the LibriVox
project (Ito and Johnson, 2017), much smaller than standard ASR corpora which are
hundreds or thousands of hours.2
We generally break up the TTS task into two components. The first component
is an encoder-decoder model for spectrogram prediction: it maps from strings of
letters to mel spectrograms: sequences of mel spectral values over time. Thus we
2 There is also recent TTS research on the task of multi-speaker TTS, in which a system is trained on
speech from many speakers, and can switch between different voices.
These standard encoder-decoder algorithms for TTS are still quite computation-
ally intensive, so a significant focus of modern research is on ways to speed them
up.
Tacotron2    Here we describe the Tacotron2 architecture (Shen et al., 2018), which extends the earlier Tacotron
Wavenet    (Wang et al., 2017) architecture and the Wavenet vocoder (van den Oord et al.,
2016). Fig. 16.16 sketches out the entire architecture.
The encoder’s job is to take a sequence of letters and produce a hidden repre-
sentation representing the letter sequence, which is then used by the attention mech-
anism in the decoder. The Tacotron2 encoder first maps every input grapheme to
a 512-dimensional character embedding. These are then passed through a stack
of 3 convolutional layers, each containing 512 filters with shape 5 × 1, i.e. each
filter spanning 5 characters, to model the larger letter context. The output of the
final convolutional layer is passed through a biLSTM to produce the final encoding.
It’s common to use a slightly higher quality (but slower) version of attention called
location-based attention    location-based attention, in which the computation of the values (Eq. 8.36 in
Chapter 8) makes use of the values from the prior time-state.
In the decoder, the predicted mel spectrum from the prior time slot is passed
through a small pre-net as a bottleneck. This prior output is then concatenated with
the encoder’s attention vector context and passed through 2 LSTM layers. The out-
put of this LSTM is used in two ways. First, it is passed through a linear layer, and
some output processing, to autoregressively predict one 80-dimensional log-mel fil-
terbank vector frame (50 ms, with a 12.5 ms stride) at each step. Second, it is passed
through another linear layer to a sigmoid to make a “stop token prediction” decision
about whether to stop producing output.
The system is trained on gold log-mel filterbank features, using teacher forcing,
that is, the decoder is fed the correct log-mel spectral feature at each decoder step
instead of the predicted decoder output from the prior step.
p(Y) = ∏_{t=1}^{T} P(y_t | y_1, ..., y_{t−1}, h_1, ..., h_t)   (16.24)
Figure 16.17 Dilated convolutions, showing one dilation cycle size of 4, i.e., dilation values
of 1, 2, 4, 8. Figure from van den Oord et al. (2016).
to look into. For example WaveNet uses a special kind of a gated activation func-
tion as its non-linearity, and contains residual and skip connections. In practice,
predicting 8-bit audio values doesn’t work as well as 16-bit, for which a simple
softmax is insufficient, so decoders use fancier approaches for the last step of predicting
audio sample values, like mixtures of distributions. Finally, the WaveNet vocoder
as we have described it would be so slow as to be useless; many different kinds of
efficiency improvements are necessary in practice, for example by finding ways to
do non-autoregressive generation, avoiding the latency of having to wait to generate
each frame until the prior frame has been generated, and instead making predictions
in parallel. We encourage the interested reader to consult the original papers and
various versions of the code.
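To make the dilation idea concrete, here is a minimal numpy sketch of a stack of dilated causal convolutions with dilations 1, 2, 4, 8 and random weights; it omits the gating, residual, and skip connections discussed above, so it only illustrates the receptive-field growth, not WaveNet itself.

import numpy as np

def dilated_causal_conv(x, weights, dilation):
    """Kernel size 2: y[t] = w0 * x[t - dilation] + w1 * x[t], with zero padding at the start."""
    padded = np.concatenate([np.zeros(dilation), x])
    return weights[0] * padded[:-dilation] + weights[1] * x

rng = np.random.default_rng(0)
h = rng.standard_normal(32)                   # stand-in for an audio (or hidden) sequence

for dilation in [1, 2, 4, 8]:                 # one dilation cycle, as in Fig. 16.17
    h = np.tanh(dilated_causal_conv(h, rng.standard_normal(2), dilation))

# After the cycle, each output position sees a receptive field of 1 + 1 + 2 + 4 + 8 = 16 samples.
print(h.shape)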
speaker recognition    Speaker recognition is the task of identifying a speaker. We generally distinguish
the subtasks of speaker verification, where we make a binary decision (is
this speaker X or not?), such as for security when accessing personal information
over the telephone, and speaker identification, where we make a one-of-N decision
trying to match a speaker’s voice against a database of many speakers. These tasks
language identification    are related to language identification, in which we are given a wavefile and must
identify which language is being spoken; this is useful for example for automatically
directing callers to human operators that speak appropriate languages.
16.8 Summary
This chapter introduced the fundamental algorithms of automatic speech recognition
(ASR) and text-to-speech (TTS).
• The task of speech recognition (or speech-to-text) is to map acoustic wave-
forms to sequences of graphemes.
• The input to a speech recognizer is a series of acoustic waves that are sam-
pled, quantized, and converted to a spectral representation like the log mel
spectrum.
• Two common paradigms for speech recognition are the encoder-decoder with
attention model, and models based on the CTC loss function. Attention-
based models have higher accuracies, but models based on CTC more easily
adapt to streaming: outputting graphemes online instead of waiting until the
acoustic input is complete.
• ASR is evaluated using the word error rate, which is based on the edit distance
between the hypothesis and the gold transcription.
• TTS systems are also based on the encoder-decoder architecture. The en-
coder maps letters to an encoding, which is consumed by the decoder which
generates mel spectrogram output. A neural vocoder then reads the spectro-
gram and generates waveforms.
• TTS systems require a first pass of text normalization to deal with numbers
and abbreviations and other non-standard words.
• TTS is evaluated by playing a sentence to human listeners and having them
give a mean opinion score (MOS) or by doing AB tests.
The late 1960s and early 1970s produced a number of important paradigm shifts.
First were a number of feature-extraction algorithms, including the efficient fast
Fourier transform (FFT) (Cooley and Tukey, 1965), the application of cepstral pro-
cessing to speech (Oppenheim et al., 1968), and the development of LPC for speech
coding (Atal and Hanauer, 1971). Second were a number of ways of handling
warping    warping: stretching or shrinking the input signal to handle differences in speaking rate
and segment length when matching against stored patterns. The natural algorithm for
solving this problem was dynamic programming, and, as we saw in Appendix A, the
algorithm was reinvented multiple times to address this problem. The first applica-
tion to speech processing was by Vintsyuk (1968), although his result was not picked
up by other researchers, and was reinvented by Velichko and Zagoruyko (1970) and
Sakoe and Chiba (1971) (and 1984). Soon afterward, Itakura (1975) combined this
dynamic programming idea with the LPC coefficients that had previously been used
only for speech coding. The resulting system extracted LPC features from incoming
words and used dynamic programming to match them against stored LPC templates.
The non-probabilistic use of dynamic programming to match a template against
dynamic time warping    incoming speech is called dynamic time warping.
The third innovation of this period was the rise of the HMM. Hidden Markov
models seem to have been applied to speech independently at two laboratories around
1972. One application arose from the work of statisticians, in particular Baum and
colleagues at the Institute for Defense Analyses in Princeton who applied HMMs
to various prediction problems (Baum and Petrie 1966, Baum and Eagon 1967).
James Baker learned of this work and applied the algorithm to speech processing
(Baker, 1975a) during his graduate work at CMU. Independently, Frederick Jelinek
and collaborators (drawing from their research in information-theoretical models
influenced by the work of Shannon (1948)) applied HMMs to speech at the IBM
Thomas J. Watson Research Center (Jelinek et al., 1975). One early difference was
the decoding algorithm; Baker’s DRAGON system used Viterbi (dynamic program-
ming) decoding, while the IBM system applied Jelinek’s stack decoding algorithm
(Jelinek, 1969). Baker then joined the IBM group for a brief time before founding
the speech-recognition company Dragon Systems.
The use of the HMM, with Gaussian Mixture Models (GMMs) as the phonetic
component, slowly spread through the speech community, becoming the dominant
paradigm by the 1990s. One cause was encouragement by ARPA, the Advanced
Research Projects Agency of the U.S. Department of Defense. ARPA started a
five-year program in 1971 to build 1000-word, constrained grammar, few-speaker
speech understanding systems (Klatt, 1977), and funded four competing systems, of which
Carnegie-Mellon University’s Harpy system (Lowerre, 1976), which used a simpli-
fied version of Baker’s HMM-based DRAGON system, was the best of the tested sys-
tems. ARPA (and then DARPA) funded a number of new speech research programs,
beginning with 1000-word speaker-independent read-speech tasks like “Resource
Management” (Price et al., 1988), recognition of sentences read from the Wall Street
Journal (WSJ), Broadcast News domain (LDC 1998, Graff 1997) (transcription of
actual news broadcasts, including quite difficult passages such as on-the-street inter-
views) and the Switchboard, CallHome, CallFriend, and Fisher domains (Godfrey
et al. 1992, Cieri et al. 2004) (natural telephone conversations between friends or
bakeoff strangers). Each of the ARPA tasks involved an approximately annual bakeoff at
which systems were evaluated against each other. The ARPA competitions resulted
in wide-scale borrowing of techniques among labs since it was easy to see which
ideas reduced errors the previous year, and the competitions were probably an im-
TTS As we noted at the beginning of the chapter, speech synthesis is one of the
earliest fields of speech and language processing. The 18th century saw a number
of physical models of the articulation process, including the von Kempelen model
mentioned above, as well as the 1773 vowel model of Kratzenstein in Copenhagen
Exercises
16.1 Analyze each of the errors in the incorrectly recognized transcription of “um
the phone is I left the. . . ” on page 346. For each one, give your best guess as
to whether you think it is caused by a problem in signal processing, pronun-
ciation modeling, lexicon size, language model, or pruning in the decoding
search.
Part III
ANNOTATING LINGUISTIC
STRUCTURE
In the final part of the book we discuss the task of detecting linguistic structure.
In the early history of NLP these structures were an intermediate step toward deeper
language processing. In modern NLP, we don’t generally make explicit use of parse
or other structures inside the neural language models we introduced in Part I, or
directly in applications like those we discussed in Part II.
Instead linguistic structure plays a number of new roles. One important role is for
interpretability: to provide a useful interpretive lens on neural networks. Knowing
that a particular layer or neuron may be computing something related to a particular
kind of structure can help us break open the ‘black box’ and understand what the
components of our language models are doing.
A second important role for linguistic structure is as a practical tool for social
scientific studies of text: knowing which adjective modifies which noun, or whether
a particular implicit metaphor is being used, can be important for measuring attitudes
toward groups or individuals. Detailed semantic structure can be helpful, for exam-
ple in finding particular clauses that have particular meanings in legal contracts.
Word sense labels can help keep any corpus study from measuring facts about the
wrong word sense. Relation structures can be used to help build knowledge bases
from text.
Finally, computation of linguistic structure is an important tool for answering
questions about language itself, a research area called computational linguistics
that is sometimes distinguished from natural language processing. To answer lin-
guistic questions about how language changes over time or across individuals we’ll
need to be able, for example, to parse entire documents from different time periods.
To understand how certain linguistic structures are learned or processed by people,
it’s necessary to be able to automatically label structures for arbitrary text.
In our study of linguistic structure, we begin with one of the oldest tasks in
computational linguistics, the extraction of syntactic structure, and give two sets of
parsing algorithms: constituency pars-
ing and dependency parsing. We then introduce a variety of structures related to
meaning, including semantic roles, word senses, entity relations, and events. We
conclude with linguistic structures that tend to be related to discourse and meaning
over larger texts, including coreference and discourse coherence. In each case we’ll
give algorithms for automatically annotating the relevant structure.
CHAPTER
17 Sequence Labeling for Parts of Speech and Named Entities
Dionysius Thrax of Alexandria (c. 100 B . C .), or perhaps someone else (it was a long
time ago), wrote a grammatical sketch of Greek (a “technē”) that summarized the
linguistic knowledge of his day. This work is the source of an astonishing proportion
of modern linguistic vocabulary, including the words syntax, diphthong, clitic, and
parts of speech    analogy. Also included is a description of eight parts of speech: noun, verb,
pronoun, preposition, adverb, conjunction, participle, and article. Although earlier
scholars (including Aristotle as well as the Stoics) had their own lists of parts of
speech, it was Thrax’s set of eight that became the basis for descriptions of European
languages for the next 2000 years. (All the way to the Schoolhouse Rock educational
television shows of our childhood, which had songs about 8 parts of speech, like the
late great Bob Dorough’s Conjunction Junction.) The durability of parts of speech
through two millennia speaks to their centrality in models of human language.
Proper names are another important and anciently studied linguistic category.
While parts of speech are generally assigned to individual words or morphemes, a
proper name is often an entire multiword phrase, like the name “Marie Curie”, the
location “New York City”, or the organization “Stanford University”. We’ll use the
named entity term named entity for, roughly speaking, anything that can be referred to with a
proper name: a person, a location, an organization, although as we’ll see the term is
commonly extended to include things that aren’t entities per se.
POS Parts of speech (also known as POS) and named entities are useful clues to
sentence structure and meaning. Knowing whether a word is a noun or a verb tells us
about likely neighboring words (nouns in English are preceded by determiners and
adjectives, verbs by nouns) and syntactic structure (verbs have dependency links to
nouns), making part-of-speech tagging a key aspect of parsing. Knowing if a named
entity like Washington is a name of a person, a place, or a university is important to
many natural language processing tasks like question answering, stance detection,
or information extraction.
In this chapter we’ll introduce the task of part-of-speech tagging, taking a se-
quence of words and assigning each word a part of speech like NOUN or VERB, and
the task of named entity recognition (NER), assigning words or phrases tags like
PERSON , LOCATION , or ORGANIZATION .
Such tasks in which we assign, to each word xi in an input word sequence, a
label yi , so that the output sequence Y has the same length as the input sequence X
sequence labeling    are called sequence labeling tasks. We’ll introduce classic sequence labeling algo-
rithms, one generative (the Hidden Markov Model, or HMM) and one discriminative
(the Conditional Random Field, or CRF). In following chapters we’ll introduce modern
sequence labelers based on RNNs and Transformers.
Tag      Description                                                      Example
Open Class
ADJ      Adjective: noun modifiers describing properties                  red, young, awesome
ADV      Adverb: verb modifiers of time, place, manner                    very, slowly, home, yesterday
NOUN     words for persons, places, things, etc.                          algorithm, cat, mango, beauty
VERB     words for actions and processes                                  draw, provide, go
PROPN    Proper noun: name of a person, organization, place, etc.         Regina, IBM, Colorado
INTJ     Interjection: exclamation, greeting, yes/no response, etc.       oh, um, yes, hello
Closed Class Words
ADP      Adposition (Preposition/Postposition): marks a noun’s            in, on, by, under
         spatial, temporal, or other relation
AUX      Auxiliary: helping verb marking tense, aspect, mood, etc.        can, may, should, are
CCONJ    Coordinating Conjunction: joins two phrases/clauses              and, or, but
DET      Determiner: marks noun phrase properties                         a, an, the, this
NUM      Numeral                                                          one, two, 2026, 11:00, hundred
PART     Particle: a function word that must be associated with           ’s, not, (infinitive) to
         another word
PRON     Pronoun: a shorthand for referring to an entity or event         she, who, I, others
SCONJ    Subordinating Conjunction: joins a main clause with a            whether, because
         subordinate clause such as a sentential complement
Other
PUNCT    Punctuation                                                      . , ( )
closed class Parts of speech fall into two broad categories: closed class and open class.
open class Closed classes are those with relatively fixed membership, such as prepositions—
new prepositions are rarely coined. By contrast, nouns and verbs are open classes—
new nouns and verbs like iPhone or to fax are continually being created or borrowed.
function word Closed class words are generally function words like of, it, and, or you, which tend
to be very short, occur frequently, and often have structuring uses in grammar.
Four major open classes occur in the languages of the world: nouns (including
proper nouns), verbs, adjectives, and adverbs, as well as the smaller open class of
interjections. English has all five, although not every language does.
noun    Nouns are words for people, places, or things, but include others as well. Common
common noun    nouns include concrete terms like cat and mango, abstractions like algorithm
and beauty, and verb-like terms like pacing as in His pacing to and fro became quite
annoying. Nouns in English can occur with determiners (a goat, this bandwidth),
take possessives (IBM’s annual revenue), and may occur in the plural (goats, abaci).
count noun Many languages, including English, divide common nouns into count nouns and
mass noun mass nouns. Count nouns can occur in the singular and plural (goat/goats, rela-
tionship/relationships) and can be counted (one goat, two goats). Mass nouns are
used when something is conceptualized as a homogeneous group. So snow, salt, and
proper noun communism are not counted (i.e., *two snows or *two communisms). Proper nouns,
like Regina, Colorado, and IBM, are names of specific persons or entities.
verb Verbs refer to actions and processes, including main verbs like draw, provide,
and go. English verbs have inflections (non-third-person-singular (eat), third-person-
singular (eats), progressive (eating), past participle (eaten)). While many scholars
believe that all human languages have the categories of noun and verb, others have
argued that some languages, such as Riau Indonesian and Tongan, don’t even make
this distinction (Broschart 1997; Evans 2000; Gil 2000) .
adjective Adjectives often describe properties or qualities of nouns, like color (white,
black), age (old, young), and value (good, bad), but there are languages without
adjectives. In Korean, for example, the words corresponding to English adjectives
act as a subclass of verbs, so what is in English an adjective “beautiful” acts in
Korean like a verb meaning “to be beautiful”.
adverb Adverbs are a hodge-podge. All the italicized words in this example are adverbs:
Actually, I ran home extremely quickly yesterday
Adverbs generally modify something (often verbs, hence the name “adverb”, but
locative also other adverbs and entire verb phrases). Directional adverbs or locative ad-
degree verbs (home, here, downhill) specify the direction or location of some action; degree
adverbs (extremely, very, somewhat) specify the extent of some action, process, or
manner property; manner adverbs (slowly, slinkily, delicately) describe the manner of some
temporal action or process; and temporal adverbs describe the time that some action or event
took place (yesterday, Monday).
interjection Interjections (oh, hey, alas, uh, um) are a smaller open class that also includes
greetings (hello, goodbye) and question responses (yes, no, uh-huh).
preposition English adpositions occur before nouns, hence are called prepositions. They can
indicate spatial or temporal relations, whether literal (on it, before then, by the house)
or metaphorical (on time, with gusto, beside herself), and relations like marking the
agent in Hamlet was written by Shakespeare.
particle A particle resembles a preposition or an adverb and is used in combination with
a verb. Particles often have extended meanings that aren’t quite the same as the
prepositions they resemble, as in the particle over in she turned the paper over. A
phrasal verb verb and a particle acting as a single unit is called a phrasal verb. The meaning
of phrasal verbs is often non-compositional—not predictable from the individual
meanings of the verb and the particle. Thus, turn down means ‘reject’, rule out
‘eliminate’, and go on ‘continue’.
determiner Determiners like this and that (this chapter, that page) can mark the start of an
article English noun phrase. Articles like a, an, and the, are a type of determiner that mark
discourse properties of the noun and are quite frequent; the is the most common
word in written English, with a and an right behind.
conjunction Conjunctions join two phrases, clauses, or sentences. Coordinating conjunc-
tions like and, or, and but join two elements of equal status. Subordinating conjunc-
tions are used when one of the elements has some embedded status. For example,
the subordinating conjunction that in “I thought that you might like some milk” links
the main clause I thought with the subordinate clause you might like some milk. This
clause is called subordinate because this entire clause is the “content” of the main
verb thought. Subordinating conjunctions like that which link a verb to its argument
complementizer in this way are also called complementizers.
pronoun Pronouns act as a shorthand for referring to an entity or event. Personal pro-
nouns refer to persons or entities (you, she, I, it, me, etc.). Possessive pronouns are
forms of personal pronouns that indicate either actual possession or more often just
an abstract relation between the person and some object (my, your, his, her, its, one’s,
wh our, their). Wh-pronouns (what, who, whom, whoever) are used in certain question
Below we show some examples with each word tagged according to both the UD
(in blue) and Penn (in red) tagsets. Notice that the Penn tagset distinguishes tense
and participles on verbs, and has a special tag for the existential there construction in
English. Note that since London Journal of Medicine is a proper noun, both tagsets
mark its component nouns as PROPN/NNP, including journal and medicine, which
might otherwise be labeled as common nouns (NOUN/NN).
(17.1) There/PRON/EX are/VERB/VBP 70/NUM/CD children/NOUN/NNS
there/ADV/RB ./PUNC/.
(17.2) Preliminary/ADJ/JJ findings/NOUN/NNS were/AUX/VBD
reported/VERB/VBN in/ADP/IN today/NOUN/NN ’s/PART/POS
London/PROPN/NNP Journal/PROPN/NNP of/ADP/IN Medicine/PROPN/NNP
Figure 17.3 The task of part-of-speech tagging: mapping from input words x1 , x2 , ..., xn to
output POS tags y1 , y2 , ..., yn .
ambiguity resolution    thought that your flight was earlier). The goal of POS-tagging is to resolve these
ambiguities, choosing the proper tag for the context.
accuracy The accuracy of part-of-speech tagging algorithms (the percentage of test set
tags that match human gold labels) is extremely high. One study found accuracies
over 97% across 15 languages from the Universal Dependency (UD) treebank (Wu
and Dredze, 2019). Accuracies on various English treebanks are also 97% (no matter
the algorithm; HMMs, CRFs, BERT perform similarly). This 97% number is also
about the human performance on this task, at least for English (Manning, 2011).
We’ll introduce algorithms for the task in the next few sections, but first let’s
explore the task. Exactly how hard is it? Fig. 17.4 shows that most word types
(85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the
ambiguous words, though accounting for only 14-15% of the vocabulary, are very
common, and 55-67% of word tokens in running text are ambiguous. Particularly
ambiguous common words include that, back, down, put and set; here are some
examples of the 6 different parts of speech for the word back:
earnings growth took a back/JJ seat
a small building in the back/NN
a clear majority of senators back/VBP the bill
Dave began to back/VB toward the door
enable the country to buy back/RP debt
I was twenty-one back/RB then
Nonetheless, many words are easy to disambiguate, because their different tags
aren’t equally likely. For example, a can be a determiner or the letter a, but the
determiner sense is much more likely.
This idea suggests a useful baseline: given an ambiguous word, choose the tag
which is most frequent in the training corpus. This is a key concept:
Most Frequent Class Baseline: Always compare a classifier against a baseline at
least as good as the most frequent class baseline (assigning each token to the class
it occurred in most often in the training set).
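A minimal sketch of this baseline on a toy tagged corpus (a real implementation would also need a strategy for unknown words, here just a default tag):

from collections import Counter, defaultdict

def train_most_frequent_tag(tagged_sentences):
    """For each word, remember the tag it occurred with most often in training."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(words, most_frequent, default="NOUN"):
    return [(w, most_frequent.get(w, default)) for w in words]

train = [[("a", "DET"), ("back", "NOUN")],
         [("back", "VERB"), ("a", "DET"), ("bill", "NOUN")],
         [("the", "DET"), ("back", "NOUN")]]
model = train_most_frequent_tag(train)
print(tag(["the", "back"], model))            # back -> NOUN, its most frequent training tag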
Named entity tagging is a useful first step in lots of natural language processing
tasks. In sentiment analysis we might want to know a consumer’s sentiment toward a
particular entity. Entities are a useful first stage in question answering, or for linking
text to information in structured knowledge sources like Wikipedia. And named
entity tagging is also central to tasks involving building semantic representations,
like extracting events and the relationship between participants.
1 In English, on the WSJ corpus, tested on sections 22-24.
[PER Washington] was born into slavery on the farm of James Burroughs.
[ORG Washington] went up 2 games to 1 in the four-game series.
Blair arrived in [LOC Washington] for what may well be his last state visit.
In June, [GPE Washington] passed a primary seatbelt law.
Figure 17.6 Examples of type ambiguities in the use of the name Washington.
We’ve also shown two variant tagging schemes: IO tagging, which loses some
information by eliminating the B tag, and BIOES tagging, which adds an end tag
E for the end of a span, and a span tag S for a span consisting of only one word.
A sequence labeler (HMM, CRF, RNN, Transformer, etc.) is trained to label each
token in a text with tags that indicate the presence (or absence) of particular kinds
of named entities.
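As a small illustration, here is a sketch that produces BIO tags from labeled entity spans; the tokens, spans, and function names are invented for illustration.

def spans_to_bio(tokens, spans):
    """Convert (start, end, type) spans (end exclusive) into per-token BIO tags."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = "B-" + etype
        for i in range(start + 1, end):
            tags[i] = "I-" + etype
    return tags

tokens = ["Jane", "Villanueva", "of", "United", "Airlines", "Holding"]
spans = [(0, 2, "PER"), (3, 6, "ORG")]
print(list(zip(tokens, spans_to_bio(tokens, spans))))
# [('Jane', 'B-PER'), ('Villanueva', 'I-PER'), ('of', 'O'),
#  ('United', 'B-ORG'), ('Airlines', 'I-ORG'), ('Holding', 'I-ORG')]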
Figure 17.8 A Markov chain for weather (a) and one for words (b), showing states and
transitions. A start distribution π is required; setting π = [0.1, 0.7, 0.2] for (a) would mean a
probability 0.7 of starting in state 2 (cold), probability 0.1 of starting in state 1 (hot), etc.
P(t_i | t_{i−1}) = C(t_{i−1}, t_i) / C(t_{i−1})   (17.8)
In the WSJ corpus, for example, MD occurs 13124 times, of which it is followed
by VB 10471 times, for an MLE estimate of

P(VB | MD) = C(MD, VB) / C(MD) = 10471 / 13124 = .80   (17.9)
The B emission probabilities, P(wi |ti ), represent the probability, given a tag (say
MD), that it will be associated with a given word (say will). The MLE of the emis-
sion probability is
P(w_i | t_i) = C(t_i, w_i) / C(t_i)   (17.10)
Of the 13124 occurrences of MD in the WSJ corpus, it is associated with will 4046
times:
P(will | MD) = C(MD, will) / C(MD) = 4046 / 13124 = .31   (17.11)
We saw this kind of Bayesian modeling in Chapter 4; recall that this likelihood
term is not asking “which is the most likely tag for the word will?” That would be
the posterior P(MD|will). Instead, P(will|MD) answers the slightly counterintuitive
question “If we were going to generate a MD, how likely is it that this modal would
be will?”
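Here is a small sketch of estimating these probabilities by counting and normalizing over a toy tagged corpus (no smoothing), in the spirit of Eq. 17.8 and Eq. 17.10; the corpus is invented.

from collections import Counter, defaultdict

def train_hmm(tagged_sentences):
    """Estimate A (transition) and B (emission) probabilities by normalizing counts."""
    trans, emit, tag_counts = defaultdict(Counter), defaultdict(Counter), Counter()
    for sentence in tagged_sentences:
        prev = "<s>"
        for word, tag in sentence:
            trans[prev][tag] += 1
            emit[tag][word] += 1
            tag_counts[tag] += 1
            prev = tag
    A = {p: {t: c / sum(cs.values()) for t, c in cs.items()} for p, cs in trans.items()}
    B = {t: {w: c / tag_counts[t] for w, c in cs.items()} for t, cs in emit.items()}
    return A, B

corpus = [[("Janet", "NNP"), ("will", "MD"), ("back", "VB"), ("the", "DT"), ("bill", "NN")],
          [("will", "MD"), ("back", "VB")]]
A, B = train_hmm(corpus)
print(A["MD"]["VB"], B["MD"]["will"])         # both 1.0 on this tiny corpus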
Figure 17.9 An illustration of the two parts of an HMM representation: the A transition
probabilities used to compute the prior probability, and the B observation likelihoods that are
associated with each state, one likelihood for each possible observation word.
For part-of-speech tagging, the goal of HMM decoding is to choose the tag
sequence t1 . . .tn that is most probable given the observation sequence of n words
w1 . . . wn :
t̂_{1:n} = argmax_{t_1...t_n} P(t_1 ... t_n | w_1 ... w_n)   (17.12)
The way we’ll do this in the HMM is to use Bayes’ rule to instead compute:

t̂_{1:n} = argmax_{t_1...t_n} P(w_1 ... w_n | t_1 ... t_n) P(t_1 ... t_n) / P(w_1 ... w_n)   (17.13)

and, since the denominator P(w_1 ... w_n) doesn’t change the argmax, simply

t̂_{1:n} = argmax_{t_1...t_n} P(w_1 ... w_n | t_1 ... t_n) P(t_1 ... t_n)   (17.14)
HMM taggers make two further simplifying assumptions. The first (output in-
dependence, from Eq. 17.7) is that the probability of a word appearing depends only
on its own tag and is independent of neighboring words and tags:
P(w_1 ... w_n | t_1 ... t_n) ≈ ∏_{i=1}^{n} P(w_i | t_i)   (17.15)
The second assumption (the Markov assumption, Eq. 17.6) is that the probability of
a tag is dependent only on the previous tag, rather than the entire tag sequence:

P(t_1 ... t_n) ≈ ∏_{i=1}^{n} P(t_i | t_{i−1})   (17.16)
Plugging the simplifying assumptions from Eq. 17.15 and Eq. 17.16 into Eq. 17.14
results in the following equation for the most probable tag sequence from a bigram
tagger:
t̂_{1:n} = argmax_{t_1...t_n} P(t_1 ... t_n | w_1 ... w_n) ≈ argmax_{t_1...t_n} ∏_{i=1}^{n} P(w_i | t_i) P(t_i | t_{i−1})   (17.17)
The two parts of Eq. 17.17 correspond neatly to the B emission probability and A
transition probability that we just defined above!
Figure 17.10 Viterbi algorithm for finding the optimal sequence of tags. Given an observation sequence and
an HMM λ = (A, B), the algorithm returns the state path through the HMM that assigns maximum likelihood
to the observation sequence.
We represent the most probable path by taking the maximum over all possible
previous state sequences max_{q_1,...,q_{t−1}}. Like other dynamic programming algorithms,
Viterbi fills each cell recursively. Given that we had already computed the probabil-
ity of being in every state at time t − 1, we compute the Viterbi probability by taking
the most probable of the extensions of the paths that lead to the current cell. For a
given state q j at time t, the value vt ( j) is computed as
v_t(j) = max_{i=1}^{N} v_{t−1}(i) a_{ij} b_j(o_t)   (17.19)
The three factors that are multiplied in Eq. 17.19 for extending the previous paths to
compute the Viterbi probability at time t are
Figure 17.11 A sketch of the Viterbi lattice for Janet will back the bill: one column per observation, each containing a cell for candidate tags such as DT, RB, NN, JJ, VB, and MD.
v_{t−1}(i)   the previous Viterbi path probability from the previous time step
a_{ij}       the transition probability from previous state q_i to current state q_j
b_j(o_t)     the state observation likelihood of the observation symbol o_t given
             the current state j
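The following is a compact sketch of this recursion; it assumes dictionaries of π, A, and B probabilities like those estimated in the earlier counting sketch, and a real tagger would work with log probabilities to avoid underflow. The toy probabilities below are invented.

def viterbi(words, tags, pi, A, B):
    """Return the most probable tag sequence for `words` under an HMM (pi, A, B)."""
    V = [{t: pi.get(t, 0.0) * B.get(t, {}).get(words[0], 0.0) for t in tags}]
    back = [{}]
    for w in words[1:]:
        col, ptr = {}, {}
        for t in tags:
            # Eq. 17.19: extend the best previous path into state t.
            best_prev = max(tags, key=lambda p: V[-1][p] * A.get(p, {}).get(t, 0.0))
            prob = V[-1][best_prev] * A.get(best_prev, {}).get(t, 0.0)
            col[t] = prob * B.get(t, {}).get(w, 0.0)
            ptr[t] = best_prev
        V.append(col)
        back.append(ptr)
    path = [max(tags, key=lambda t: V[-1][t])]      # best final state
    for ptr in reversed(back[1:]):                  # follow backpointers
        path.append(ptr[path[-1]])
    return list(reversed(path))

tags = ["NNP", "MD", "VB", "DT", "NN"]
pi = {"NNP": 0.5, "DT": 0.3, "NN": 0.2}
A = {"NNP": {"MD": 0.8}, "MD": {"VB": 0.9}, "VB": {"DT": 0.7}, "DT": {"NN": 0.9}}
B = {"NNP": {"Janet": 0.1}, "MD": {"will": 0.3}, "VB": {"back": 0.05},
     "DT": {"the": 0.5}, "NN": {"bill": 0.01}}
print(viterbi("Janet will back the bill".split(), tags, pi, A, B))
# -> ['NNP', 'MD', 'VB', 'DT', 'NN'] on this toy HMM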
NNP MD VB JJ NN RB DT
<s > 0.2767 0.0006 0.0031 0.0453 0.0449 0.0510 0.2026
NNP 0.3777 0.0110 0.0009 0.0084 0.0584 0.0090 0.0025
MD 0.0008 0.0002 0.7968 0.0005 0.0008 0.1698 0.0041
VB 0.0322 0.0005 0.0050 0.0837 0.0615 0.0514 0.2231
JJ 0.0366 0.0004 0.0001 0.0733 0.4509 0.0036 0.0036
NN 0.0096 0.0176 0.0014 0.0086 0.1216 0.0177 0.0068
RB 0.0068 0.0102 0.1011 0.1012 0.0120 0.0728 0.0479
DT 0.1147 0.0021 0.0002 0.2157 0.4744 0.0102 0.0017
Figure 17.12 The A transition probabilities P(ti |ti−1 ) computed from the WSJ corpus with-
out smoothing. Rows are labeled with the conditioning event; thus P(V B|MD) is 0.7968.
<s > is the start token.
Let the HMM be defined by the two tables in Fig. 17.12 and Fig. 17.13. Fig-
ure 17.12 lists the ai j probabilities for transitioning between the hidden states (part-
of-speech tags). Figure 17.13 expresses the bi (ot ) probabilities, the observation
likelihoods of words given tags. This table is (slightly simplified) from counts in the
WSJ corpus. So the word Janet only appears as an NNP, back has 4 possible parts
of speech, and the word the can appear as a determiner or as an NNP (in titles like
“Somewhere Over the Rainbow” all words are tagged as NNP).
Figure 17.14 The first few entries in the individual state columns for the Viterbi algorithm. Each cell keeps
the probability of the best path so far and a pointer to the previous cell along that path. We have only filled out
columns 1 and 2; to avoid clutter most cells with value 0 are left empty. The rest is left as an exercise for the
reader. After the cells are filled in, backtracing from the end state, we should be able to reconstruct the correct
state sequence NNP MD VB DT NN.
Figure 17.14 shows a fleshed-out version of the sketch we saw in Fig. 17.11,
the Viterbi lattice for computing the best hidden state sequence for the observation
sequence Janet will back the bill.
There are 5 state columns, one for each observation. We begin in column 1 (for the word Janet) by
setting the Viterbi value in each cell to the product of the π transition probability (the
start probability for that state i, which we get from the <s> entry of Fig. 17.12), and
the observation likelihood of the word Janet given the tag for that cell. Most of the
cells in the column are zero since the word Janet cannot be any of those tags. The
reader should find this in Fig. 17.14.
Next, each cell in the will column gets updated. For each state, we compute the
value viterbi[s,t] by taking the maximum over the extensions of all the paths from
the previous column that lead to the current cell according to Eq. 17.19. We have
shown the values for the MD, VB, and NN cells. Each cell gets the max of the 7 val-
ues from the previous column, multiplied by the appropriate transition probability;
as it happens in this case, most of them are zero from the previous column. The re-
maining value is multiplied by the relevant observation probability, and the (trivial)
max is taken. In this case the final value, 2.772e-8, comes from the NNP state at the
previous column. The reader should fill in the rest of the lattice in Fig. 17.14 and
backtrace to see whether or not the Viterbi algorithm returns the gold state sequence
NNP MD VB DT NN.
In a CRF, by contrast, we compute the posterior p(Y|X) directly, training the CRF to discriminate among the possible tag sequences.
However, the CRF does not compute a probability for each tag at each time step. In-
stead, at each time step the CRF computes log-linear functions over a set of relevant
features, and these local features are aggregated and normalized to produce a global
probability for the whole sequence.
Let’s introduce the CRF more formally, again using X and Y as the input and
output sequences. A CRF is a log-linear model that assigns a probability to an
entire output (tag) sequence Y , out of all possible sequences Y, given the entire input
(word) sequence X. We can think of a CRF as like a giant sequential version of
the multinomial logistic regression algorithm we saw for text categorization. Recall
that we introduced the feature function f in regular multinomial logistic regression
for text categorization as a function of a tuple: the input text x and a single class y
(page 86). In a CRF, we’re dealing with a sequence, so the function F maps an entire
input sequence X and an entire output sequence Y to a feature vector. Let’s assume
we have K features, with a weight wk for each feature Fk :
p(Y|X) = exp( ∑_{k=1}^{K} w_k F_k(X,Y) ) / ∑_{Y′∈Y} exp( ∑_{k=1}^{K} w_k F_k(X,Y′) )   (17.23)
It’s common to also describe the same equation by pulling out the denominator into
a function Z(X):
p(Y|X) = (1/Z(X)) exp( ∑_{k=1}^{K} w_k F_k(X,Y) )   (17.24)

Z(X) = ∑_{Y′∈Y} exp( ∑_{k=1}^{K} w_k F_k(X,Y′) )   (17.25)
We’ll call these K functions Fk (X,Y ) global features, since each one is a property
of the entire input sequence X and output sequence Y . We compute them by decom-
posing into a sum of local features for each position i in Y :
F_k(X,Y) = ∑_{i=1}^{n} f_k(y_{i−1}, y_i, X, i)   (17.26)
Each of these local features f_k in a linear-chain CRF is allowed to make use of the
current output token y_i, the previous output token y_{i−1}, the entire input string X (or
any subpart of it), and the current position i. This constraint to depend only on
linear chain CRF    the current and previous output tokens y_i and y_{i−1} is what characterizes a linear
chain CRF. As we will see, this limitation makes it possible to use versions of the
efficient Viterbi and Forward-Backward algorithms from the HMM. A general CRF,
by contrast, allows a feature to make use of any output token, and such features are thus
necessary for tasks in which the decision depends on distant output tokens, like y_{i−4}.
General CRFs require more complex inference, and are less commonly used for language
processing.
For simplicity, we’ll assume all CRF features take on the value 1 or 0. Above, we
explicitly use the indicator notation 1{x} to mean “1 if x is true, and 0 otherwise”. From now
on, we’ll leave off the 1{} when we define features, but you can assume each feature
has it there implicitly.
Although deciding which features to use is done by hand by the system designer,
feature templates    the specific features are automatically populated by using feature templates as we
briefly mentioned in Chapter 5. Here are some templates that only use information
from (y_{i−1}, y_i, X, i):

⟨y_i, x_i⟩,  ⟨y_i, y_{i−1}⟩,  ⟨y_i, x_{i−1}, x_{i+2}⟩
These templates automatically populate the set of features from every instance in
the training and test set. Thus for our example Janet/NNP will/MD back/VB the/DT
bill/NN, when xi is the word back, the following features would be generated and
have the value 1 (we’ve assigned them arbitrary feature numbers):
f3743 : yi = VB and xi = back
f156 : yi = VB and yi−1 = MD
f99732 : yi = VB and xi−1 = will and xi+2 = bill
It’s also important to have features that help with unknown words. One of the
word shape most important is word shape features, which represent the abstract letter pattern
of the word by mapping lower-case letters to ‘x’, upper-case to ‘X’, numbers to
’d’, and retaining punctuation. Thus for example I.M.F. would map to X.X.X. and
DC10-30 would map to XXdd-dd. A second class of shorter word shape features is
also used. In these features consecutive character types are removed, so words in all
caps map to X, words with initial-caps map to Xx, DC10-30 would be mapped to
Xd-d but I.M.F would still map to X.X.X. Prefix and suffix features are also useful.
In summary, sample feature templates that help with unknown words include the prefixes
and suffixes of x_i, word-shape(x_i), and short-word-shape(x_i).
For example the word well-dressed might generate the following non-zero val-
ued feature values:
2 Because in HMMs all computation is based on the two probabilities P(tag|tag) and P(word|tag), if
we want to include some source of knowledge into the tagging process, we must find a way to encode
the knowledge into one of these two probabilities. Each time we add a feature we have to do a lot of
complicated conditioning which gets harder and harder as we have more and more such features.
prefix(xi ) = w
prefix(xi ) = we
suffix(xi ) = ed
suffix(xi ) = d
word-shape(xi ) = xxxx-xxxxxxx
short-word-shape(xi ) = x-x
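A small sketch of the word shape and short word shape features just described (the helper names are my own):

import re

def word_shape(word):
    """Map lower-case to 'x', upper-case to 'X', digits to 'd'; keep punctuation."""
    out = []
    for ch in word:
        if ch.islower():
            out.append("x")
        elif ch.isupper():
            out.append("X")
        elif ch.isdigit():
            out.append("d")
        else:
            out.append(ch)
    return "".join(out)

def short_word_shape(word):
    """Like word_shape, but collapse runs of the same shape character."""
    return re.sub(r"(.)\1+", r"\1", word_shape(word))

for w in ["I.M.F.", "DC10-30", "well-dressed"]:
    print(w, word_shape(w), short_word_shape(w))
# I.M.F.       -> X.X.X.        X.X.X.
# DC10-30      -> XXdd-dd       Xd-d
# well-dressed -> xxxx-xxxxxxx  x-x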
The known-word templates are computed for every word seen in the training
set; the unknown word features can also be computed for all words in training, or
only on training words whose frequency is below some threshold. The result of the
known-word templates and word-signature features is a very large set of features.
Generally a feature cutoff is used in which features are thrown out if they have count
< 5 in the training set.
Remember that in a CRF we don’t learn weights for each of these local features
fk . Instead, we first sum the values of each local feature (for example feature f3743 )
over the entire sentence, to create each global feature (for example F3743 ). It is those
global features that will then be multiplied by weight w3743 . Thus for training and
inference there is always a fixed set of K features with K weights, even though the
length of each sentence is different.
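As an illustration of this aggregation step, here is a small sketch (our own code and naming, with a simplified feature interface) that sums the local features over a sentence to produce the K global features:

```python
def global_features(words, tags, local_features, K):
    # local_features(y_prev, y, X, i) is assumed to return a dict {k: value}
    # of the nonzero local features f_k at position i; summing over positions
    # gives the global feature F_k that is multiplied by weight w_k.
    F = [0.0] * K
    prev = "<s>"                      # assumed start-of-sentence tag
    for i, tag in enumerate(tags):
        for k, value in local_features(prev, tag, words, i).items():
            F[k] += value
        prev = tag
    return F
```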
gazetteer One feature that is especially useful for locations is a gazetteer, a list of place
names, often providing millions of entries for locations with detailed geographical
and political information.3 This can be implemented as a binary feature indicating a
phrase appears in the list. Other related resources like name-lists, for example from
the United States Census Bureau4 , can be used, as can other entity dictionaries like
lists of corporations or products, although they may not be as helpful as a gazetteer
(Mikheev et al., 1999).
The sample named entity token L’Occitane would generate the following non-
zero valued feature values (assuming that L’Occitane is neither in the gazetteer nor
the census).
3 www.geonames.org
4 www.census.gov
We can ignore the exp function and the denominator Z(X), as we do above, because
exp doesn’t change the argmax, and the denominator Z(X) is constant for a given
observation sequence X.
How should we decode to find this optimal tag sequence ŷ? Just as with HMMs,
we’ll turn to the Viterbi algorithm, which works because, like the HMM, the linear-
chain CRF depends at each timestep on only one previous output token yi−1 .
Concretely, this involves filling an N ×T array with the appropriate values, main-
taining backpointers as we proceed. As with HMM Viterbi, when the table is filled,
we simply follow pointers back from the maximum value in the final column to
retrieve the desired set of labels.
The requisite changes from HMM Viterbi have to do only with how we fill each
cell. Recall from Eq. 17.19 that the recursive step of the Viterbi equation computes
the Viterbi value of time t for state j as
v_t(j) = max_{i=1}^{N} v_{t−1}(i) a_{ij} b_j(o_t);   1 ≤ j ≤ N, 1 < t ≤ T    (17.31)
The CRF requires only a slight change to this latter formula, replacing the a and b
prior and likelihood probabilities with the CRF features:
v_t(j) = max_{i=1}^{N} v_{t−1}(i) ∑_{k=1}^{K} w_k f_k(y_{t−1}, y_t, X, t);   1 ≤ j ≤ N, 1 < t ≤ T    (17.33)
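Here is a minimal sketch of this decoder in Python (our own code, not from the text). It assumes a function score(prev_tag, tag, words, t) that returns the summed weighted features ∑_k w_k f_k(y_{t−1}, y_t, X, t), and it accumulates scores additively (log-space style) rather than multiplying them into the Viterbi value as in the formula above:

```python
import numpy as np

def crf_viterbi(words, tagset, score):
    # Fill an N x T trellis of Viterbi values with backpointers, then follow the
    # backpointers from the best final cell, just as in HMM Viterbi.
    T, N = len(words), len(tagset)
    v = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    for j, tag in enumerate(tagset):                      # initialization
        v[0, j] = score("<s>", tag, words, 0)
    for t in range(1, T):                                 # recursion
        for j, tag in enumerate(tagset):
            best = max(range(N),
                       key=lambda i: v[t - 1, i] + score(tagset[i], tag, words, t))
            v[t, j] = v[t - 1, best] + score(tagset[best], tag, words, t)
            back[t, j] = best
    j = int(np.argmax(v[T - 1]))                          # best final state
    path = [j]
    for t in range(T - 1, 0, -1):                         # follow backpointers
        j = back[t, j]
        path.append(j)
    return [tagset[j] for j in reversed(path)]
```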
Since the algorithms we have presented are supervised, having labeled data is essential for training and testing. A
wide variety of datasets exist for part-of-speech tagging and/or NER. The Universal
Dependencies (UD) dataset (de Marneffe et al., 2021) has POS tagged corpora in
over a hundred languages, as do the Penn Treebanks in English, Chinese, and Arabic.
OntoNotes has corpora labeled for named entities in English, Chinese, and Arabic
(Hovy et al., 2006). Named entity tagged corpora are also available in particular
domains, such as for biomedical (Bada et al., 2012) and literary text (Bamman et al.,
2019).
Morphologically rich languages need to label words with case and gender information. Tagsets for morpho-
logically rich languages are therefore sequences of morphological tags rather than a
single primitive tag. Here’s a Turkish example, in which the word izin has three pos-
sible morphological/part-of-speech tags and meanings (Hakkani-Tür et al., 2002):
1. Yerdeki izin temizlenmesi gerek. iz + Noun+A3sg+Pnon+Gen
The trace on the floor should be cleaned.
17.8 Summary
This chapter introduced parts of speech and named entities, and the tasks of part-
of-speech tagging and named entity recognition:
• Languages generally have a small set of closed class words that are highly
frequent, ambiguous, and act as function words, and open-class words like
nouns, verbs, adjectives. Various part-of-speech tagsets exist, of between 40
and 200 tags.
• Part-of-speech tagging is the process of assigning a part-of-speech label to
each of a sequence of words.
• Named entities are words for proper nouns referring mainly to people, places,
and organizations, but extended to many other types that aren’t strictly entities
or even proper nouns.
• Two common approaches to sequence modeling are a generative approach,
HMM tagging, and a discriminative approach, CRF tagging. We will see a
neural approach in following chapters.
• The probabilities in HMM taggers are estimated by maximum likelihood es-
timation on tag-labeled training corpora. The Viterbi algorithm is used for
decoding, finding the most likely tag sequence.
• Conditional Random Fields or CRF taggers train a log-linear model that can
choose the best tag sequence given an observation sequence, based on features
that condition on the output tag, the prior output tag, the entire input sequence,
and the current timestep. They use the Viterbi algorithm for inference, to
choose the best sequence of tags, and a version of the Forward-Backward
algorithm (see Appendix A) for training.
Exercises
17.1 Find one tagging error in each of the following sentences that are tagged with
the Penn Treebank tagset:
1. I/PRP need/VBP a/DT flight/NN from/IN Atlanta/NN
2. Does/VBZ this/DT flight/NN serve/VB dinner/NNS
3. I/PRP have/VB a/DT friend/NN living/VBG in/IN Denver/NNP
4. Can/VBP you/PRP list/VB the/DT nonstop/JJ afternoon/NN flights/NNS
17.2 Use the Penn Treebank tagset to tag each word in the following sentences
from Damon Runyon’s short stories. You may ignore punctuation. Some of
these are quite difficult; do your best.
1. It is a nice night.
2. This crap game is over a garage in Fifty-second Street. . .
3. . . . Nobody ever takes the newspapers she sells . . .
4. He is a tall, skinny guy with a long, sad, mean-looking kisser, and a
mournful voice.
18 Context-Free Grammars and Constituency Parsing
The study of grammar has an ancient pedigree. The grammar of Sanskrit was
described by the Indian grammarian Pāṇini sometime between the 7th and 4th centuries BCE, in his famous treatise the Aṣṭādhyāyī ('8 books'). And our word syntax
comes from the Greek sýntaxis, meaning “setting out together or arrangement”, and
refers to the way words are arranged together. We have seen syntactic notions in pre-
vious chapters like the use of part-of-speech categories (Chapter 17). In this chapter
and the next one we introduce formal models for capturing more sophisticated no-
tions of grammatical structure and algorithms for parsing these structures.
Our focus in this chapter is context-free grammars and the CKY algorithm
for parsing them. Context-free grammars are the backbone of many formal mod-
els of the syntax of natural language (and, for that matter, of computer languages).
Syntactic parsing is the task of assigning a syntactic structure to a sentence. Parse
trees (whether for context-free grammars or for the dependency or CCG formalisms
we introduce in following chapters) can be used in applications such as grammar
checking: a sentence that cannot be parsed may have grammatical errors (or at least
be hard to read). Parse trees can be an intermediate stage of representation for for-
mal semantic analysis. And parsers and the grammatical structure they assign a
sentence are a useful text analysis tool for text data science applications that require
modeling the relationship of elements in sentences.
In this chapter we introduce context-free grammars, give a small sample gram-
mar of English, introduce more formal definitions of context-free grammars and
grammar normal form, and talk about treebanks: corpora that have been anno-
tated with syntactic structure. We then discuss parse ambiguity and the problems
it presents, and turn to parsing itself, giving the famous Cocke-Kasami-Younger
(CKY) algorithm (Kasami 1965, Younger 1967), the standard dynamic program-
ming approach to syntactic parsing. The CKY algorithm returns an efficient repre-
sentation of the set of parse trees for a sentence, but doesn’t tell us which parse tree
is the right one. For that, we need to augment CKY with scores for each possible
constituent. We’ll see how to do this with neural span-based parsers. Finally, we’ll
introduce the standard set of metrics for evaluating parser accuracy.
18.1 Constituency
Syntactic constituency is the idea that groups of words can behave as single units,
or constituents. Part of developing a grammar involves building an inventory of the
constituents in the language. How do words group together in English? Consider
noun phrase the noun phrase, a sequence of words surrounding at least one noun. Here are some
examples of noun phrases (thanks to Damon Runyon):
What evidence do we have that these words group together (or “form constituents”)?
One piece of evidence is that they can all appear in similar syntactic environments,
for example, before a verb.
But while the whole noun phrase can occur before a verb, this is not true of each
of the individual words that make up a noun phrase. The following are not grammat-
ical sentences of English (recall that we use an asterisk (*) to mark fragments that
are not grammatical English sentences):
Thus, to correctly describe facts about the ordering of these words in English, we
must be able to say things like “Noun Phrases can occur before verbs”. Let’s now
see how to do this in a more formal way!
For example, the following rules express that an NP (noun phrase) can be composed of either a ProperNoun or a determiner (Det) followed by a Nominal; a Nominal in turn can consist of one or more Nouns.1
NP → Det Nominal
NP → ProperNoun
Nominal → Noun | Nominal Noun
Context-free rules can be hierarchically embedded, so we can combine the previous
rules with others, like the following, that express facts about the lexicon:
Det → a
Det → the
Noun → flight
The symbols that are used in a CFG are divided into two classes. The symbols
terminal that correspond to words in the language (“the”, “nightclub”) are called terminal
symbols; the lexicon is the set of rules that introduce these terminal symbols. The
non-terminal symbols that express abstractions over these terminals are called non-terminals. In
each context-free rule, the item to the right of the arrow (→) is an ordered list of one
or more terminals and non-terminals; to the left of the arrow is a single non-terminal
symbol expressing some cluster or generalization. The non-terminal associated with
each word in the lexicon is its lexical category, or part of speech.
A CFG can be thought of in two ways: as a device for generating sentences
and as a device for assigning a structure to a given sentence. Viewing a CFG as a
generator, we can read the → arrow as “rewrite the symbol on the left with the string
of symbols on the right”.
So starting from the symbol: NP
we can use our first rule to rewrite NP as: Det Nominal
and then rewrite Nominal as: Noun
and finally rewrite these parts-of-speech as: a flight
We say the string a flight can be derived from the non-terminal NP. Thus, a CFG
can be used to generate a set of strings. This sequence of rule expansions is called a
derivation derivation of the string of words. It is common to represent a derivation by a parse
parse tree tree (commonly shown inverted with the root at the top). Figure 18.1 shows the tree
representation of this derivation.
[Figure 18.1: the parse tree [NP [Det a] [Nom [Noun flight]]] for this derivation.]
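To see the generator view in action, here is a small Python sketch (ours, with a toy grammar fragment written as a dictionary) that rewrites symbols until only terminals remain:

```python
import random

# A fragment of the grammar above; symbols with no entry are terminals.
GRAMMAR = {
    "NP":         [["Det", "Nominal"], ["ProperNoun"]],
    "Nominal":    [["Noun"], ["Nominal", "Noun"]],
    "Det":        [["a"], ["the"]],
    "Noun":       [["flight"], ["morning"]],
    "ProperNoun": [["Houston"]],
}

def derive(symbol):
    # Rewrite a non-terminal with a randomly chosen rule; terminals are returned as-is.
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for child in expansion for word in derive(child)]

print(" ".join(derive("NP")))   # e.g. "a flight" or "the morning flight"
```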
dominates In the parse tree shown in Fig. 18.1, we can say that the node NP dominates
all the nodes in the tree (Det, Nom, Noun, a, flight). We can say further that it
immediately dominates the nodes Det and Nom.
The formal language defined by a CFG is the set of strings that are derivable
start symbol from the designated start symbol. Each grammar must have one designated start
1 When talking about these rules we can pronounce the rightarrow → as “goes to”, and so we might
read the first rule above as “NP goes to Det Nominal”.
symbol, which is often called S. Since context-free grammars are often used to define
sentences, S is usually interpreted as the “sentence” node, and the set of strings that
are derivable from S is the set of sentences in some simplified version of English.
Let's add a few additional rules to our inventory. The following rule expresses
the fact that a sentence can consist of a noun phrase followed by a verb phrase:
S → NP VP
Or the verb phrase may have a verb followed by a prepositional phrase alone:
VP → Verb PP
The NP inside a PP need not be a location; PPs are often used with times and
dates, and with other nouns as well; they can be arbitrarily complex. Here are ten
examples from the ATIS corpus:
to Seattle
in Minneapolis
on Wednesday
in the evening
on the ninth of July
on these flights
about the ground transportation in Chicago
of the round trip flight on United Airlines
of the AP fifty seven flight
with a stopover in Nashville
Figure 18.2 gives a sample lexicon, and Fig. 18.3 summarizes the grammar rules
we’ve seen so far, which we’ll call L0 . Note that we can use the or-symbol | to
indicate that a non-terminal has alternate possible expansions.
NP      → Pronoun        (I)
        | Proper-Noun    (Los Angeles)
        | Det Nominal    (a + flight)
Nominal → Nominal Noun   (morning + flight)
        | Noun           (flights)
VP      → Verb           (do)
        | Verb NP        (want + a flight)
        | Verb NP PP     (leave + Boston + in the morning)
        | Verb PP        (leaving + on Thursday)
Figure 18.4 The parse tree for "I prefer a morning flight" according to grammar L0.
Starting from S, we could choose a random expansion of S (say, to NP VP), a random expansion of NP (say, to the pronoun
I), and a random expansion of VP (let's say, to Verb NP), and so on until we generate
the string I prefer a morning flight. Figure 18.4 shows a parse tree that represents a
complete derivation of I prefer a morning flight.
We can also represent a parse tree in a more compact format called bracketed
notation; here is the bracketed representation of the parse tree of Fig. 18.4:
(18.1) [S [NP [Pro I]] [VP [V prefer] [NP [Det a] [Nom [N morning] [Nom [N flight]]]]]]
A CFG like that of L0 defines a formal language. Sentences (strings of words)
that can be derived by a grammar are in the formal language defined by that gram-
grammatical mar, and are called grammatical sentences. Sentences that cannot be derived by
a given formal grammar are not in the language defined by that grammar and are
ungrammatical referred to as ungrammatical. This hard line between “in” and “out” characterizes
all formal languages but is only a very simplified model of how natural languages
really work. This is because determining whether a given sentence is part of a given
natural language (say, English) often depends on the context. In linguistics, the use
generative
grammar of formal languages to model natural languages is called generative grammar since
the language is defined by the set of possible sentences “generated” by the grammar.
(Note that this is a different sense of the word 'generate' than when we talk about
language models generating text.)
For the remainder of the book we adhere to the following conventions when dis-
cussing the formal properties of context-free grammars (as opposed to explaining
particular facts about English or other languages).
Capital letters like A, B, and S            Non-terminals
S                                           The start symbol
Lower-case Greek letters like α, β, and γ   Strings drawn from (Σ ∪ N)∗
Lower-case Roman letters like u, v, and w   Strings of terminals
A language is defined through the concept of derivation. One string derives an-
other one if it can be rewritten as the second one by some series of rule applications.
More formally, following Hopcroft and Ullman (1979),
if A → β is a production of R and α and γ are any strings in the set
(Σ ∪ N)∗, then we say that αAγ directly derives αβγ, or αAγ ⇒ αβγ.
Derivation is then a generalization of direct derivation:
Let α1, α2, . . . , αm be strings in (Σ ∪ N)∗, m ≥ 1, such that
α1 ⇒ α2, α2 ⇒ α3, . . . , αm−1 ⇒ αm
We say that α1 derives αm, or α1 ⇒∗ αm.
We can then formally define the language LG generated by a grammar G as the
set of strings composed of terminal symbols that can be derived from the designated
start symbol S.
LG = {w | w is in Σ∗ and S ⇒∗ w}
The problem of mapping from a string of words to its parse tree is called syn-
syntactic
parsing tactic parsing, as we’ll see in Section 18.6.
18.3 Treebanks
treebank A corpus in which every sentence is annotated with a parse tree is called a treebank.
(a)
((S
  (NP-SBJ (DT That)
    (JJ cold) (, ,)
    (JJ empty) (NN sky) )
  (VP (VBD was)
    (ADJP-PRD (JJ full)
      (PP (IN of)
        (NP (NN fire)
          (CC and)
          (NN light) ))))
  (. .) ))

(b)
((S
  (NP-SBJ The/DT flight/NN )
  (VP should/MD
    (VP arrive/VB
      (PP-TMP at/IN
        (NP eleven/CD a.m/RB ))
      (NP-TMP tomorrow/NN )))))

Figure 18.5 Parses from the LDC Treebank3 for (a) Brown and (b) ATIS sentences.
Grammar                      Lexicon
S → NP VP .                  DT → the | that
S → NP VP                    JJ → cold | empty | full
NP → DT NN                   NN → sky | fire | light | flight | tomorrow
NP → NN CC NN                CC → and
NP → DT JJ , JJ NN           IN → of | at
NP → NN                      CD → eleven
VP → MD VP                   RB → a.m.
VP → VBD ADJP                VB → arrive
VP → MD VP                   VBD → was | said
VP → VB PP NP                MD → should | would
ADJP → JJ PP
PP → IN NP
PP → IN NP RB
Figure 18.7 CFG grammar rules and lexicon from the treebank sentences in Fig. 18.5.
For example, among the approximately 4,500 different rules for expanding VPs in the Penn Treebank are separate rules
for PP sequences of any length and every possible arrangement of verb arguments:
VP → VBD PP
VP → VBD PP PP
VP → VBD PP PP PP
VP → VBD PP PP PP PP
VP → VB ADVP PP
VP → VB PP ADVP
VP → ADVP VB PP
A rule of the form
A → B C D
can be converted into the following two CNF rules (Exercise 18.1 asks the reader to implement this conversion):
A → B X
X → C D

Grammar                      Lexicon
S → NP VP                    Det → that | this | the | a
S → Aux NP VP                Noun → book | flight | meal | money
S → VP                       Verb → book | include | prefer
NP → Pronoun                 Pronoun → I | she | me
NP → Proper-Noun             Proper-Noun → Houston | NWA
NP → Det Nominal             Aux → does
Nominal → Noun               Preposition → from | to | on | near | through
Nominal → Nominal Noun
Nominal → Nominal PP
VP → Verb
VP → Verb NP
VP → Verb NP PP
VP → Verb PP
VP → VP PP
PP → Preposition NP
Figure 18.8 The L1 miniature English grammar and lexicon.
Sometimes using binary branching can actually produce smaller grammars. For
example, the sentences that might be characterized as
VP -> VBD NP PP*
are represented in the Penn Treebank by this series of rules:
VP → VBD NP PP
VP → VBD NP PP PP
VP → VBD NP PP PP PP
VP → VBD NP PP PP PP PP
...
but could also be generated by the following two-rule grammar:
VP → VBD NP PP
VP → VP PP
The generation of a symbol A with a potentially infinite sequence of symbols B with
Chomsky-
adjunction a rule of the form A → A B is known as Chomsky-adjunction.
18.5 Ambiguity
Ambiguity is the most serious problem faced by syntactic parsers. Chapter 17 intro-
duced the notions of part-of-speech ambiguity and part-of-speech disambigua-
structural
ambiguity tion. Here, we introduce a new kind of ambiguity, called structural ambiguity,
illustrated with a new toy grammar L1 , shown in Figure 18.8, which adds a few
rules to the L0 grammar.
Structural ambiguity occurs when the grammar can assign more than one parse
to a sentence. Groucho Marx's well-known line as Captain Spaulding in Animal
Crackers is ambiguous because, in I shot an elephant in my pajamas, the phrase in my pajamas can modify the NP an elephant or the verb shot; Figure 18.9 shows the two parses.
Figure 18.9 Two parse trees for an ambiguous sentence. The parse on the left corresponds to the humorous
reading in which the elephant is in the pajamas; the parse on the right corresponds to the reading in which
Captain Spaulding did the shooting in his pajamas.
will deliver tomorrow night to the American people could be an adjunct modifying
the verb pushed. A PP like over nationwide television and radio could be attached
to any of the higher VPs or NPs (e.g., it could modify people or night).
The fact that there are many grammatically correct but semantically unreason-
able parses for naturally occurring sentences is an irksome problem that affects all
parsers. Fortunately, the CKY algorithm below is designed to efficiently handle
structural ambiguities. And as we’ll see in the following section, we can augment
CKY with neural methods to choose a single correct parse by syntactic disambigua-
syntactic
disambiguation tion.
If A ⇒∗ B by a chain of one or more unit productions and B → γ is a non-unit production in our grammar, then we add A → γ for each such rule in
the grammar and discard all the intervening unit productions. As we demonstrate
with our toy grammar, this can lead to a substantial flattening of the grammar and a
consequent promotion of terminals to fairly high levels in the resulting trees.
Rules with right-hand sides longer than 2 are normalized through the introduc-
tion of new non-terminals that spread the longer sequences over several new rules.
Formally, if we have a rule like
A → B C γ
we replace the leftmost pair of non-terminals with a new non-terminal and introduce
a new production, resulting in the following new rules:
A → X1 γ
X1 → B C
In the case of longer right-hand sides, we simply iterate this process until the of-
fending rule has been replaced by rules of length 2. The choice of replacing the
leftmost pair of non-terminals is purely arbitrary; any systematic scheme that results
in binary rules would suffice.
In our current grammar, the rule S → Aux NP VP would be replaced by the two
rules S → X1 VP and X1 → Aux NP.
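Before summarizing the whole procedure, here is a minimal sketch (our own code, covering only this binarization step, not unit-production removal or terminal conversion) of how rules longer than two symbols can be split using new X1, X2, ... non-terminals:

```python
def binarize(rules):
    # rules is a list of (lhs, rhs) pairs with rhs a tuple of symbols.
    # Repeatedly replace the leftmost pair of symbols on a long right-hand side
    # with a new non-terminal, exactly as described above.
    new_rules, counter = [], 0
    for lhs, rhs in rules:
        rhs = list(rhs)
        while len(rhs) > 2:
            counter += 1
            new_nt = f"X{counter}"
            new_rules.append((new_nt, (rhs[0], rhs[1])))
            rhs = [new_nt] + rhs[2:]
        new_rules.append((lhs, tuple(rhs)))
    return new_rules

# S -> Aux NP VP becomes X1 -> Aux NP and S -> X1 VP
print(binarize([("S", ("Aux", "NP", "VP"))]))
```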
The entire conversion process can be summarized as follows:
1. Copy all conforming rules to the new grammar unchanged.
2. Convert terminals within rules to dummy non-terminals.
3. Convert unit productions.
4. Make all rules binary and add them to new grammar.
Figure 18.10 shows the results of applying this entire conversion procedure to
the L1 grammar introduced earlier on page 395. Note that this figure doesn’t show
the original lexical rules; since these original lexical rules are already in CNF, they
all carry over unchanged to the new grammar. Figure 18.10 does, however, show
the various places where the process of eliminating unit productions has, in effect,
created new lexical rules. For example, all the original verbs have been promoted to
both VPs and to Ss in the converted grammar.
L1 Grammar                L1 in CNF
S → NP VP                 S → NP VP
S → Aux NP VP             S → X1 VP
                          X1 → Aux NP
S → VP                    S → book | include | prefer
                          S → Verb NP
                          S → X2 PP
                          S → Verb PP
                          S → VP PP
NP → Pronoun              NP → I | she | me
NP → Proper-Noun          NP → TWA | Houston
NP → Det Nominal          NP → Det Nominal
Nominal → Noun            Nominal → book | flight | meal | money
Nominal → Nominal Noun    Nominal → Nominal Noun
Nominal → Nominal PP      Nominal → Nominal PP
VP → Verb                 VP → book | include | prefer
VP → Verb NP              VP → Verb NP
VP → Verb NP PP           VP → X2 PP
                          X2 → Verb NP
VP → Verb PP              VP → Verb PP
VP → VP PP                VP → VP PP
PP → Preposition NP       PP → Preposition NP
Figure 18.10 L1 Grammar and its conversion to CNF. Note that although they aren't shown
here, all the original lexical entries from L1 carry over unchanged as well.
If a constituent spanning [i, j] is split at a position k, the first constituent [i, k] must lie to the left of entry [i, j] somewhere
along row i, and the second entry [k, j] must lie beneath it, along column j.
To make this more concrete, consider the following example with its completed
parse matrix, shown in Fig. 18.11.
(18.4) Book the flight through Houston.
The superdiagonal row in the matrix contains the parts of speech for each word in
the input. The subsequent diagonals above that superdiagonal contain constituents
that cover all the spans of increasing length in the input.
Given this setup, CKY recognition consists of filling the parse table in the right
way. To do this, we’ll proceed in a bottom-up fashion so that at the point where we
are filling any cell [i, j], the cells containing the parts that could contribute to this
entry (i.e., the cells to the left and the cells below) have already been filled. The
algorithm given in Fig. 18.12 fills the upper-triangular matrix a column at a time
working from left to right, with each column filled from bottom to top, as the right
side of Fig. 18.11 illustrates. This scheme guarantees that at each point in time we
have all the information we need (to the left, since all the columns to the left have
already been filled, and below since we’re filling bottom to top). It also mirrors on-
line processing, since filling the columns from left to right corresponds to processing
each word one at a time.
The outermost loop of the algorithm given in Fig. 18.12 iterates over the columns,
and the second loop iterates over the rows, from the bottom up. The purpose of the
innermost loop is to range over all the places where a substring spanning i to j in
the input might be split in two. As k ranges over the places where the string can be
split, the pairs of cells we consider move, in lockstep, to the right along row i and
down along column j. Figure 18.13 illustrates the general case of filling cell [i, j].
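A sketch of this fill loop (our own Python, for recognition only, assuming a lexicon mapping each word to its non-terminals and a binary_rules mapping from pairs (B, C) to the set of A with A → B C) looks like this:

```python
from collections import defaultdict

def cky_recognize(words, lexicon, binary_rules):
    # table[(i, j)] holds the non-terminals that span words i..j-1.
    n = len(words)
    table = defaultdict(set)
    for j in range(1, n + 1):                        # columns, left to right
        table[(j - 1, j)] |= lexicon[words[j - 1]]   # superdiagonal: parts of speech
        for i in range(j - 2, -1, -1):               # rows, bottom to top
            for k in range(i + 1, j):                # all the places to split [i, j]
                for B in table[(i, k)]:
                    for C in table[(k, j)]:
                        table[(i, j)] |= binary_rules.get((B, C), set())
    return table
```

A sentence is in the language if the start symbol appears in table[(0, n)]; adding backpointers to each entry turns this recognizer into a parser.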
Figure 18.11 Completed parse table for Book the flight through Houston.
At each such split, the algorithm considers whether the contents of the two cells can
be combined in a way that is sanctioned by a rule in the grammar. If such a rule
exists, the non-terminal on its left-hand side is entered into the table.
Figure 18.14 shows how the five cells of column 5 of the table are filled after the
word Houston is read. The arrows point out the two spans that are being used to add
an entry to the table. Note that the action in cell [0, 5] indicates the presence of three
alternative parses for this input, one where the PP modifies the flight, one where
it modifies the booking, and one that captures the second argument in the original
VP → Verb NP PP rule, now captured indirectly with the VP → X2 PP rule.
Figure 18.13 All the ways to fill the [i, j]th cell in the CKY table.
Retrieving a parse consists of choosing an S from cell [0, n] and then recursively retrieving its
component constituents from the table. Of course, instead of returning every parse
for a sentence, we usually want just the best parse; we’ll see how to do that in the
next section.
Figure 18.14 Filling the cells of column 5 after reading the word Houston.
[Figure 18.15: architecture sketch — the input is mapped to subwords, passed through the ENCODER, mapped back to words, and then through postprocessing layers, producing outputs at positions 0–5.]
Fig. 18.15 sketches the architecture. The input word tokens are embedded by
passing them through a pretrained language model like BERT. Because BERT oper-
ates on the level of subword (wordpiece) tokens rather than words, we’ll first need to
convert the BERT outputs to word representations. One standard way of doing this
is to simply use the first subword unit as the representation for the entire word; us-
ing the last subword unit, or the sum of all the subword units, is also common. The
embeddings can then be passed through some postprocessing layers; Kitaev et al.
(2019), for example, use 8 Transformer layers.
The resulting word encoder outputs yt are then used to compute a span score.
First, we must map the word encodings (indexed by word positions) to span encod-
ings (indexed by fenceposts). We do this by representing each fencepost with two
separate values; the intuition is that a span endpoint to the right of a word represents
different information than a span endpoint to the left of a word. We convert each
word output yt into a (leftward-pointing) value for spans ending at this fencepost,
←y_t, and a (rightward-pointing) value →y_t for spans beginning at this fencepost, by
splitting y_t into two halves. Each span then stretches from one double-vector fence-
post to another, as in the following representation of the flight, which is span(1, 3).
A span (i, j) is then represented by the differences between the values at its two fenceposts:

v(i, j) = [→y_j − →y_i ; ←y_{j+1} − ←y_{i+1}]    (18.5)
The span vector v is then passed through an MLP span classifier, with two fully-
connected layers and one ReLU activation function, whose output dimensionality is
the number of possible non-terminal labels:
A parse tree T can then be represented as a set of |T| labeled spans:

T = {(i_t, j_t, l_t) : t = 1, . . . , |T|}    (18.7)
Thus once we have a score for each span, the parser can compute a score for the
whole tree s(T ) simply by summing over the scores of its constituent spans:
s(T) = ∑_{(i, j, l)∈T} s(i, j, l)    (18.8)
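A small sketch of these two computations (our own code; it assumes the encoder outputs y are padded so that indices up to j+1 are valid, and that span_scores maps labeled spans to scores):

```python
import numpy as np

def span_vector(y, i, j):
    # Split each output vector into a forward and a backward half, then represent
    # span (i, j) by endpoint differences, following Eq. 18.5.
    d = y.shape[1] // 2
    forward, backward = y[:, :d], y[:, d:]
    return np.concatenate([forward[j] - forward[i],
                           backward[j + 1] - backward[i + 1]])

def tree_score(tree, span_scores):
    # Eq. 18.8: a tree's score is the sum of the scores of its labeled spans (i, j, l).
    return sum(span_scores[(i, j, l)] for (i, j, l) in tree)
```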
And we can choose the final parse tree as the tree with the maximum score:

T̂ = argmax_T s(T)
The simplest method to produce the most likely parse is to greedily choose the
highest scoring label for each span. This greedy method is not guaranteed to produce
a tree, since the best label for a span might not fit into a complete tree. In practice,
however, the greedy method tends to find trees; in their experiments Gaddy et al.
(2018) find that 95% of predicted bracketings form valid trees.
Nonetheless it is more common to use a variant of the CKY algorithm to find the
full parse. The variant defined in Gaddy et al. (2018) works as follows. Let’s define
s_best(i, j) as the score of the best subtree spanning (i, j). For spans of length one, we
choose the best label:

s_best(i, i+1) = max_l s(i, i+1, l)

For longer spans, we add the score of the best label for the whole span to the score of the best way of splitting it into two subtrees:

s_best(i, j) = max_l s(i, j, l) + max_k [ s_best(i, k) + s_best(k, j) ]
Note that the parser is using the max label for span (i, j) + the max labels for spans
(i, k) and (k, j) without worrying about whether those decisions make sense given a
grammar. The role of the grammar in classical parsing is to help constrain possible
combinations of constituents (NPs like to be followed by VPs). By contrast, the
neural model seems to learn these kinds of contextual constraints during its mapping
from spans to non-terminals.
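Here is a compact sketch of that variant (our own code; s(i, j) is assumed to return a dict mapping labels to scores, and the sketch returns only the best score, not the tree itself):

```python
from functools import lru_cache

def best_score(n, s):
    # s_best(i, j): the best label score for the span plus, for spans longer than
    # one word, the best split into two best-scoring subtrees.
    @lru_cache(maxsize=None)
    def s_best(i, j):
        label = max(s(i, j).values())
        if j == i + 1:
            return label
        split = max(s_best(i, k) + s_best(k, j) for k in range(i + 1, j))
        return label + split
    return s_best(0, n)
```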
For more details on span-based parsing, including the margin-based training al-
gorithm, see Stern et al. (2017), Gaddy et al. (2018), Kitaev and Klein (2018), and
Kitaev et al. (2019).
Figure 18.16 A lexicalized tree from Collins (1999).
cross-brackets: the number of constituents for which the reference parse has a
bracketing such as ((A B) C) but the hypothesis parse has a bracketing such
as (A (B C)).
For comparing parsers that use different grammars, the PARSEVAL metric in-
cludes a canonicalization algorithm for removing information likely to be grammar-
specific (auxiliaries, pre-infinitival “to”, etc.) and for computing a simplified score
(Black et al., 1991). The canonical implementation of the PARSEVAL metrics is
evalb called evalb (Sekine and Collins, 1997).
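PARSEVAL's labeled recall and precision compare the labeled constituents of the hypothesis parse against those of the reference parse; the following is a simplified sketch (our own code, ignoring evalb's canonicalization) of how the scores can be computed from sets of (label, i, j) triples:

```python
def parseval(reference, hypothesis):
    # reference and hypothesis are sets of labeled constituents (label, i, j).
    correct = len(reference & hypothesis)
    recall = correct / len(reference) if reference else 0.0
    precision = correct / len(hypothesis) if hypothesis else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return recall, precision, f1
```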
most phrases. (Should the complementizer to or the verb be the head of an infinitive
verb phrase?) Modern linguistic theories of syntax generally include a component
that defines heads (see, e.g., (Pollard and Sag, 1994)).
An alternative approach to finding a head is used in most practical computational
systems. Instead of specifying head rules in the grammar itself, heads are identified
dynamically in the context of trees for specific sentences. In other words, once
a sentence is parsed, the resulting tree is walked to decorate each node with the
appropriate head. Most current systems rely on a simple set of handwritten rules,
such as a practical one for Penn Treebank grammars given in Collins (1999) but
developed originally by Magerman (1995). For example, the rule for finding the
head of an NP is as follows (Collins, 1999, p. 238):
Selected other rules from this set are shown in Fig. 18.17. For example, for VP
rules of the form VP → Y1 · · · Yn , the algorithm would start from the left of Y1 · · ·
Yn looking for the first Yi of type TO; if no TOs are found, it would search for the
first Yi of type VBD; if no VBDs are found, it would search for a VBN, and so on.
See Collins (1999) for more details.
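A sketch of how such head rules can be applied (our own code; the priority list shown for VP is an abbreviated example, not the full Collins table):

```python
HEAD_RULES = {
    # (search direction, priority list of categories to look for)
    "VP": ("left-to-right", ["TO", "VBD", "VBN", "MD", "VBZ", "VB", "VBG", "VBP", "VP"]),
}

def find_head(parent, children):
    # children are the categories Y1 ... Yn on the right-hand side of the rule.
    direction, priorities = HEAD_RULES[parent]
    order = children if direction == "left-to-right" else list(reversed(children))
    for target in priorities:          # try the highest-priority category first
        for child in order:
            if child == target:
                return child
    return order[0]                    # fallback if nothing on the list is found

print(find_head("VP", ["ADVP", "VBD", "NP", "PP"]))   # -> VBD (no TO present)
```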
18.10 Summary
This chapter introduced constituency parsing. Here’s a summary of the main points:
• In many languages, groups of consecutive words act as a group or a con-
stituent, which can be modeled by context-free grammars (which are also
known as phrase-structure grammars).
• A context-free grammar consists of a set of rules or productions, expressed
over a set of non-terminal symbols and a set of terminal symbols. Formally,
a particular context-free language is the set of strings that can be derived
from a particular context-free grammar.
Harris's insight was that if a simple form, say man, was substitutable in a construction for a more complex set (like intense young
man), then the form intense young man was probably a constituent. Harris’s test was
the beginning of the intuition that a constituent is a kind of equivalence class.
The context-free grammar was a formalization of this idea of hierarchical
constituency defined in Chomsky (1956) and further expanded upon (and argued
against) in Chomsky (1957) and Chomsky (1956/1975). Shortly after Chomsky’s
initial work, the context-free grammar was reinvented by Backus (1959) and inde-
pendently by Naur et al. (1960) in their descriptions of the ALGOL programming
language; Backus (1996) noted that he was influenced by the productions of Emil
Post and that Naur’s work was independent of his (Backus’) own. After this early
work, a great number of computational models of natural language processing were
based on context-free grammars because of the early development of efficient pars-
ing algorithms.
Dynamic programming parsing has a history of independent discovery. Ac-
cording to the late Martin Kay (personal communication), a dynamic programming
parser containing the roots of the CKY algorithm was first implemented by John
Cocke in 1960. Later work extended and formalized the algorithm, as well as prov-
ing its time complexity (Kay 1967, Younger 1967, Kasami 1965). The related well-
WFST formed substring table (WFST) seems to have been independently proposed by
Kuno (1965) as a data structure that stores the results of all previous computations
in the course of the parse. Based on a generalization of Cocke’s work, a similar
data structure had been independently described in Kay (1967) (and Kay 1973). The
top-down application of dynamic programming to parsing was described in Earley’s
Ph.D. dissertation (Earley 1968, Earley 1970). Sheil (1976) showed the equivalence
of the WFST and the Earley algorithm. Norvig (1991) shows that the efficiency of-
fered by dynamic programming can be captured in any language with a memoization
function (such as in LISP) simply by wrapping the memoization operation around a
simple top-down parser.
probabilistic
The earliest disambiguation algorithms for parsing were based on probabilistic
context-free context-free grammars, first worked out by Booth (1969) and Salomaa (1969); see
grammars
Appendix C for more history. Neural methods were first applied to parsing at around
the same time as statistical parsing methods were developed (Henderson, 1994). In
the earliest work neural networks were used to estimate some of the probabilities for
statistical constituency parsers (Henderson, 2003, 2004; Emami and Jelinek, 2005)
. The next decades saw a wide variety of neural parsing algorithms, including re-
cursive neural architectures (Socher et al., 2011, 2013), encoder-decoder models
(Vinyals et al., 2015; Choe and Charniak, 2016), and the idea of focusing on spans
(Cross and Huang, 2016). For more on the span-based self-attention approach we
describe in this chapter see Stern et al. (2017), Gaddy et al. (2018), Kitaev and Klein
(2018), and Kitaev et al. (2019). See Chapter 20 for the parallel history of neural
dependency parsing.
The classic reference for parsing algorithms is Aho and Ullman (1972); although
the focus of that book is on computer languages, most of the algorithms have been
applied to natural language.
Exercises
18.1 Implement the algorithm to convert arbitrary context-free grammars to CNF.
19 Dependency Parsing
Tout mot qui fait partie d’une phrase... Entre lui et ses voisins, l’esprit aperçoit
des connexions, dont l’ensemble forme la charpente de la phrase.
[Between each word in a sentence and its neighbors, the mind perceives con-
nections. These connections together form the scaffolding of the sentence.]
Lucien Tesnière. 1959. Éléments de syntaxe structurale, A.1.§4
The focus of the last chapter was on context-free grammars and constituent-
based representations. Here we present another important family of grammar for-
dependency
grammars malisms called dependency grammars. In dependency formalisms, phrasal con-
stituents and phrase-structure rules do not play a direct role. Instead, the syntactic
structure of a sentence is described solely in terms of directed binary grammatical
relations between the words, as in the following dependency parse:
(19.1) I prefer the morning flight through Denver
[Dependency diagram: labeled arcs root→prefer, nsubj(prefer→I), obj(prefer→flight), det(flight→the), compound(flight→morning), nmod(flight→Denver), case(Denver→through).]
Relations among the words are illustrated above the sentence with directed, labeled
typed
dependency arcs from heads to dependents. We call this a typed dependency structure because
the labels are drawn from a fixed inventory of grammatical relations. A root node
explicitly marks the root of the tree, the head of the entire structure.
Figure 19.1 on the next page shows the dependency analysis from (19.1) but vi-
sualized as a tree, alongside its corresponding phrase-structure analysis of the kind
given in the prior chapter. Note the absence of nodes corresponding to phrasal con-
stituents or lexical categories in the dependency parse; the internal structure of the
dependency parse consists solely of directed relations between words. These head-
dependent relationships directly encode important information that is often buried in
the more complex phrase-structure parses. For example, the arguments to the verb
prefer are directly linked to it in the dependency structure, while their connection
to the main verb is more distant in the phrase-structure tree. Similarly, morning
and Denver, modifiers of flight, are linked to it directly in the dependency structure.
This fact that the head-dependent relations are a good proxy for the semantic rela-
tionship between predicates and their arguments is an important reason why depen-
dency grammars are currently more common than constituency grammars in natural
language processing.
Another major advantage of dependency grammars is their ability to deal with
free word order languages that have a relatively free word order. For example, word order in Czech
can be much more flexible than in English; a grammatical object might occur before
or after a location adverbial. A phrase-structure grammar would need a separate rule
Figure 19.1 Dependency and constituent analyses for I prefer the morning flight through Denver.
for each possible place in the parse tree where such an adverbial phrase could occur.
A dependency-based approach can have just one link type representing this particu-
lar adverbial relation; dependency grammar approaches can thus abstract away a bit
more from word order information.
In the following sections, we’ll give an inventory of relations used in dependency
parsing, discuss two families of parsing algorithms (transition-based, and graph-
based), and discuss evaluation.
Here the clausal relations NSUBJ and OBJ identify the subject and direct object of
the predicate cancel, while the NMOD, DET, and CASE relations denote modifiers of
the nouns flights and Houston.
19.1.2 Projectivity
The notion of projectivity imposes an additional constraint that is derived from the
order of the words in the input. An arc from a head to a dependent is said to be
projective projective if there is a path from the head to every word that lies between the head
and the dependent in the sentence. A dependency tree is then said to be projective if
all the arcs that make it up are projective. All the dependency trees we’ve seen thus
far have been projective. There are, however, many valid constructions which lead
to non-projective trees, particularly in languages with relatively flexible word order.
Consider the following example.
(19.3) JetBlue canceled our flight this morning which was already late
[Dependency diagram with arc labels root, nsubj, obj, det, det, obl, acl:relcl, nsubj, cop, adv; the acl:relcl arc from flight to its modifier late crosses the arc that links morning to its head.]
In this example, the arc from flight to its modifier late is non-projective since there
is no path from flight to the intervening words this and morning. As we can see from
this diagram, projectivity (and non-projectivity) can be detected in the way we’ve
been drawing our trees. A dependency tree is projective if it can be drawn with
no crossing edges. Here there is no way to link flight to its dependent late without
crossing the arc that links morning to its head.
Our concern with projectivity arises from two related issues. First, the most
widely used English dependency treebanks were automatically derived from phrase-
structure treebanks through the use of head-finding rules. The trees generated in such
a fashion will always be projective, and hence will be incorrect when non-projective
examples like this one are encountered.
Second, there are computational limitations to the most widely used families of
parsing algorithms. The transition-based approaches discussed in Section 19.2 can
only produce projective trees, hence any sentences with non-projective structures
will necessarily contain some errors. This limitation is one of the motivations for
the more flexible graph-based parsing approach described in Section 19.3.
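Projectivity is easy to check programmatically; the following sketch (our own code) tests whether any two arcs cross, given the head index of each word:

```python
def is_projective(heads):
    # heads[i] is the position of the head of word i; position 0 is ROOT,
    # words are numbered 1..n, and heads[0] is unused.
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads) if d > 0]
    for (i, j) in arcs:
        for (k, l) in arcs:
            # two arcs cross if one starts strictly inside the other's span
            # and ends strictly outside it
            if i < k < j < l:
                return False
    return True
```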
transition-based Our first approach to dependency parsing is called transition-based parsing. This
architecture draws on shift-reduce parsing, a paradigm originally developed for
analyzing programming languages (Aho and Ullman, 1972). In transition-based
parsing we’ll have a stack on which we build the parse, a buffer of tokens to be
parsed, and a parser which takes actions on the parse via a predictor called an oracle,
as illustrated in Fig. 19.4.
Figure 19.4 Basic transition-based parser. The parser examines the top two elements of the
stack and selects an action by consulting an oracle that examines the current configuration.
The parser walks through the sentence left-to-right, successively shifting items
from the buffer onto the stack. At each time point we examine the top two elements
on the stack, and the oracle makes a decision about what transition to apply to build
the parse. The possible transitions correspond to the intuitive actions one might take
in creating a dependency tree by examining the words in a single pass over the input
from left to right (Covington, 2001):
• Assign the current word as the head of some previously seen word,
• Assign some previously seen word as the head of the current word,
• Postpone dealing with the current word, storing it for later processing.
We’ll formalize this intuition with the following three transition operators that
will operate on the top two elements of the stack:
• LEFTA RC: Assert a head-dependent relation between the word at the top of
the stack and the second word; remove the second word from the stack.
• RIGHTA RC: Assert a head-dependent relation between the second word on
the stack and the word at the top; remove the top word from the stack;
• SHIFT: Remove the word from the front of the input buffer and push it onto
the stack.
We’ll sometimes call operations like LEFTA RC and RIGHTA RC reduce operations,
based on a metaphor from shift-reduce parsing, in which reducing means combin-
ing elements on the stack. There are some preconditions for using operators. The
LEFTA RC operator cannot be applied when ROOT is the second element of the stack
(since by definition the ROOT node cannot have any incoming arcs). And both the
LEFTA RC and RIGHTA RC operators require two elements to be on the stack to be
applied.
arc standard This particular set of operators implements what is known as the arc standard
approach to transition-based parsing (Covington 2001, Nivre 2003). In arc standard
parsing the transition operators only assert relations between elements at the top of
the stack, and once an element has been assigned its head it is removed from the
stack and is not available for further processing. As we’ll see, there are alterna-
tive transition systems which demonstrate different parsing behaviors, but the arc
standard approach is quite effective and is simple to implement.
The specification of a transition-based parser is quite simple, based on repre-
configuration senting the current state of the parse as a configuration: the stack, an input buffer
of words or tokens, and a set of relations representing a dependency tree. Parsing
means making a sequence of transitions through the space of possible configura-
tions. We start with an initial configuration in which the stack contains the ROOT
node, the buffer has the tokens in the sentence, and an empty set of relations repre-
sents the parse. In the final goal state, the stack and the word list should be empty,
and the set of relations will represent the final parse. Fig. 19.5 gives the algorithm.
At each step, the parser consults an oracle (we’ll come back to this shortly) that
provides the correct transition operator to use given the current configuration. It then
applies that operator to the current configuration, producing a new configuration.
The process ends when all the words in the sentence have been consumed and the
ROOT node is the only element remaining on the stack.
The efficiency of transition-based parsers should be apparent from the algorithm.
The complexity is linear in the length of the sentence since it is based on a single
left to right pass through the words in the sentence. (Each word must first be shifted
onto the stack and then later reduced.)
Note that unlike the dynamic programming and search-based approaches dis-
cussed in Chapter 18, this approach is a straightforward greedy algorithm—the or-
acle provides a single choice at each step and the parser proceeds with that choice,
no other options are explored, no backtracking is employed, and a single parse is
returned in the end.
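The parsing loop itself is only a few lines; here is a sketch (our own Python, with unlabeled relations and the oracle passed in as a function over the current configuration):

```python
def transition_parse(words, oracle):
    # Configuration: a stack (initialized with ROOT), a buffer of tokens,
    # and a set of (head, dependent) relations built so far.
    stack, buffer, relations = ["ROOT"], list(words), []
    while not (len(stack) == 1 and not buffer):
        action = oracle(stack, buffer, relations)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFTARC":          # top of stack heads the second element
            relations.append((stack[-1], stack[-2]))
            del stack[-2]
        elif action == "RIGHTARC":         # second element heads the top of stack
            relations.append((stack[-2], stack[-1]))
            stack.pop()
    return relations
```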
Figure 19.6 illustrates the operation of the parser with the sequence of transitions
leading to a parse for the following example:
(19.7) Book me the morning flight
[Target dependency parse: root→Book, iobj(Book→me), obj(Book→flight), det(flight→the), compound(flight→morning).]
Let’s consider the state of the configuration at Step 2, after the word me has been
pushed onto the stack.
The correct operator to apply here is RIGHTA RC which assigns book as the head of
me and pops me from the stack resulting in the following configuration.
Here, all the remaining words have been passed onto the stack and all that is left
to do is to apply the appropriate reduce operators. In the current configuration, we
employ the LEFTA RC operator resulting in the following state.
At this point, the parse for this sentence consists of the following structure.
(19.8) Book me the morning flight
[Partial parse so far, containing the arcs iobj(Book→me) and compound(flight→morning).]
There are several important things to note when examining sequences such as
the one in Figure 19.6. First, the sequence given is not the only one that might lead
to a reasonable parse. In general, there may be more than one path that leads to the
same result, and due to ambiguity, there may be other transition sequences that lead
to different equally valid parses.
Second, we are assuming that the oracle always provides the correct operator
at each point in the parse—an assumption that is unlikely to be true in practice.
As a result, given the greedy nature of this algorithm, incorrect choices will lead to
incorrect parses since the parser has no opportunity to go back and pursue alternative
choices. Section 19.2.4 will introduce several techniques that allow transition-based
approaches to explore the search space more fully.
Finally, for simplicity, we have illustrated this example without the labels on
the dependency relations. To produce labeled trees, we can parameterize the LEFT-
A RC and RIGHTA RC operators with dependency labels, as in LEFTA RC ( NSUBJ ) or
RIGHTA RC ( OBJ ). This is equivalent to expanding the set of transition operators from
our original set of three to a set that includes LEFTA RC and RIGHTA RC operators for
each relation in the set of dependency relations being used, plus an additional one
for the SHIFT operator. This, of course, makes the job of the oracle more difficult
since it now has a much larger set of operators from which to choose.
Let’s walk through the processing of the following example as shown in Fig. 19.7.
(19.9) Book the flight through Houston
[Reference dependency parse: root→Book, obj(Book→flight), det(flight→the), nmod(flight→Houston), case(Houston→through).]
possible action. The same conditions hold in the next two steps. In step 3, LEFTA RC
is selected to link the to its head.
Now consider the situation in Step 4.
Here, we might be tempted to add a dependency relation between book and flight,
which is present in the reference parse. But doing so now would prevent the later
attachment of Houston since flight would have been removed from the stack. For-
tunately, the precondition on choosing RIGHTA RC prevents this choice and we’re
again left with SHIFT as the only viable option. The remaining choices complete the
set of operators needed for this example.
To recap, we derive appropriate training instances consisting of configuration-
transition pairs from a treebank by simulating the operation of a parser in the con-
text of a reference dependency tree. We can deterministically record correct parser
actions at each step as we progress through each training example, thereby creating
the training set we require.
Here are some standard feature templates, where s1 and s2 are the top two elements of the stack, b1 is the first element of the buffer, op is the transition operator, and .w, .t, and .wt pick out the word, the tag, and the word+tag combination:

⟨s1.w, op⟩, ⟨s2.w, op⟩, ⟨s1.t, op⟩, ⟨s2.t, op⟩, ⟨b1.w, op⟩, ⟨b1.t, op⟩, ⟨s1.wt, op⟩    (19.10)
The correct transition here is SHIFT (you should convince yourself of this before
proceeding). The application of our set of feature templates to this configuration
would result in the following set of instantiated features.
Given that the left and right arc transitions operate on the top two elements of the
stack, features that combine properties from these positions are even more useful.
For example, a feature like s1 .t ◦ s2 .t concatenates the part of speech tag of the word
at the top of the stack with the tag of the word beneath it.
Given the training data and features, any classifier, like multinomial logistic re-
gression or support vector machines, can be used.
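For concreteness, here is a sketch (our own code and naming) of how templates like those in (19.10) might be instantiated from a configuration, given a dictionary mapping each word to its part-of-speech tag; during training, each instantiated feature is paired with the oracle's action op:

```python
def oracle_features(stack, buffer, tag_of):
    feats = {}
    if stack:
        s1 = stack[-1]
        t1 = tag_of.get(s1, "ROOT")
        feats["s1.w=" + s1] = 1
        feats["s1.t=" + t1] = 1
        feats["s1.wt=" + s1 + "/" + t1] = 1
    if len(stack) >= 2:
        s2 = stack[-2]
        t2 = tag_of.get(s2, "ROOT")
        feats["s2.w=" + s2] = 1
        feats["s2.t=" + t2] = 1
        # pairing the top two tags is especially useful for the arc decisions
        feats["s1.t+s2.t=" + tag_of.get(stack[-1], "ROOT") + "+" + t2] = 1
    if buffer:
        feats["b1.w=" + buffer[0]] = 1
        feats["b1.t=" + tag_of.get(buffer[0], "")] = 1
    return feats
```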
Figure 19.8 Neural classifier for the oracle for the transition-based parser. The parser takes
the top 2 words on the stack and the first word of the buffer, represents them by their encodings
(from running the whole sentence through the encoder), concatenates the embeddings and
passes through a softmax to choose a parser action (transition).
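In outline (a sketch with our own names, using plain NumPy in place of a real neural-network library), the classifier in Fig. 19.8 looks like this:

```python
import numpy as np

def choose_action(encodings, s1, s2, b1, W, bias, actions):
    # s1, s2: positions of the top two stack words; b1: position of the first buffer
    # word (a padding position can be passed when an element is missing).
    x = np.concatenate([encodings[s1], encodings[s2], encodings[b1]])
    scores = W @ x + bias                     # learned linear layer
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # softmax over the transitions
    return actions[int(np.argmax(probs))]
```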
New transition systems can be created by changing the set of transition operators without changing the underlying parsing algorithm. This flexibility has led to the development of a di-
verse set of transition systems that address different aspects of syntax and semantics
including: assigning part of speech tags (Choi and Palmer, 2011a), allowing the
generation of non-projective dependency structures (Nivre, 2009), assigning seman-
tic roles (Choi and Palmer, 2011b), and parsing texts containing multiple languages
(Bhat et al., 2017).
Beam Search
The computational efficiency of the transition-based approach discussed earlier de-
rives from the fact that it makes a single pass through the sentence, greedily making
decisions without considering alternatives. Of course, this is also a weakness – once
a decision has been made it can not be undone, even in the face of overwhelming
beam search evidence arriving later in a sentence. We can use beam search to explore alterna-
tive decision sequences. Recall from Chapter 9 that beam search uses a breadth-first
search strategy with a heuristic filter that prunes the search frontier to stay within a
beam width fixed-size beam width.
In applying beam search to transition-based parsing, we’ll elaborate on the al-
gorithm given in Fig. 19.5. Instead of choosing the single best transition operator
at each iteration, we’ll apply all applicable operators to each state on an agenda and
then score the resulting configurations. We then add each of these new configura-
tions to the frontier, subject to the constraint that there has to be room within the
beam. As long as the size of the agenda is within the specified beam width, we can
add new configurations to the agenda. Once the agenda reaches the limit, we only
add new configurations that are better than the worst configuration on the agenda
(removing the worst element so that we stay within the limit). Finally, to insure that
we retrieve the best possible state on the agenda, the while loop continues as long as
there are non-final states on the agenda.
The beam search approach requires a more elaborate notion of scoring than we
used with the greedy algorithm. There, we assumed that the oracle would be a
supervised classifier that chose the best transition operator based on features of the
current configuration. This choice can be viewed as assigning a score to all the
possible transitions and picking the best one.
With beam search we are now searching through the space of decision sequences,
so it makes sense to base the score for a configuration on its entire history. So we
can define the score for a new configuration as the score of its predecessor plus the
score of the operator that produced it.
We'll make the simplifying assumption that this score can be edge-factored,
meaning that the overall score for a tree is the sum of the scores of the edges that
comprise the tree:

Score(t, S) = ∑_{e∈t} Score(e)
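In code, edge-factored scoring is just a sum over the edges of a candidate tree (a sketch with our own naming):

```python
def edge_factored_score(tree, edge_score):
    # tree is a set of (head, dependent) edges; edge_score maps each edge to its score.
    return sum(edge_score[(head, dep)] for (head, dep) in tree)
```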
Figure 19.11 Initial rooted, directed graph for Book that flight.
Before describing the algorithm it’s useful to consider two intuitions about di-
rected graphs and their spanning trees. The first intuition begins with the fact that
every vertex in a spanning tree has exactly one incoming edge. It follows from this
that every connected component of a spanning tree (i.e., every set of vertices that
are linked to each other by paths over edges) will also have one incoming edge.
The second intuition is that the absolute values of the edge scores are not critical
to determining its maximum spanning tree. Instead, it is the relative weights of the
edges entering each vertex that matters. If we were to subtract a constant amount
from each edge entering a given vertex it would have no impact on the choice of
the maximum spanning tree since every possible spanning tree would decrease by
exactly the same amount.
The first step of the algorithm itself is quite straightforward. For each vertex
in the graph, an incoming edge (representing a possible head assignment) with the
highest score is chosen. If the resulting set of edges produces a spanning tree then
we’re done. More formally, given the original fully-connected graph G = (V, E), a
subgraph T = (V, F) is a spanning tree if it has no cycles and each vertex (other than
the root) has exactly one edge entering it. If the greedy selection process produces
such a tree then it is the best possible one.
Unfortunately, this approach doesn’t always lead to a tree since the set of edges
selected may contain cycles. Fortunately, in yet another case of multiple discovery,
there is a straightforward way to eliminate cycles generated during the greedy se-
lection phase. Chu and Liu (1965) and Edmonds (1967) independently developed
an approach that begins with greedy selection and follows with an elegant recursive
cleanup phase that eliminates cycles.
The cleanup phase begins by adjusting all the weights in the graph by subtracting
the score of the maximum edge entering each vertex from the score of all the edges
entering that vertex. This is where the intuitions mentioned earlier come into play.
We have scaled the values of the edges so that the weights of the edges in the cycle
have no bearing on the weight of any of the possible spanning trees. Subtracting the
value of the edge with maximum weight from each edge entering a vertex results
in a weight of zero for all of the edges selected during the greedy selection phase,
including all of the edges involved in the cycle.
Having adjusted the weights, the algorithm creates a new graph by selecting a
cycle and collapsing it into a single new node. Edges that enter or leave the cycle
are altered so that they now enter or leave the newly collapsed node. Edges that do
not touch the cycle are included and edges within the cycle are dropped.
Now, if we knew the maximum spanning tree of this new graph, we would have
what we need to eliminate the cycle. The edge of the maximum spanning tree di-
rected towards the vertex representing the collapsed cycle tells us which edge to
delete in order to eliminate the cycle. How do we find the maximum spanning tree
of this new graph? We recursively apply the algorithm to the new graph. This will
either result in a spanning tree or a graph with a cycle. The recursions can continue
as long as cycles are encountered. When each recursion completes we expand the
collapsed vertex, restoring all the vertices and edges from the cycle with the excep-
tion of the single edge to be deleted.
Putting all this together, the maximum spanning tree algorithm consists of greedy
edge selection, re-scoring of edge costs and a recursive cleanup phase when needed.
The full algorithm is shown in Fig. 19.12.
Fig. 19.13 steps through the algorithm with our Book that flight example. The
first row of the figure illustrates greedy edge selection with the edges chosen shown
in blue (corresponding to the set F in the algorithm). This results in a cycle between
that and flight. The scaled weights using the maximum value entering each node are
shown in the graph to the right.
Collapsing the cycle between that and flight to a single node (labelled tf) and
recursing with the newly scaled costs is shown in the second row. The greedy selec-
tion step in this recursion yields a spanning tree that links root to book, as well as an
edge that links book to the contracted node. Expanding the contracted node, we can
see that this edge corresponds to the edge from book to flight in the original graph.
This in turn tells us which edge to drop to eliminate the cycle.
function MAXSPANNINGTREE(G = (V, E), root, score) returns a spanning tree
   F ← []
   T' ← []
   score' ← []
   for each v ∈ V do
      bestInEdge ← argmax_{e=(u,v)∈E} score[e]
      F ← F ∪ bestInEdge
      for each e=(u,v) ∈ E do
         score'[e] ← score[e] − score[bestInEdge]
   if T = (V, F) is a spanning tree then return T
   C ← a cycle in F
   G' ← CONTRACT(G, C)
   T' ← MAXSPANNINGTREE(G', root, score')
   return EXPAND(T', C)
Figure 19.12 The Chu-Liu Edmonds algorithm for finding a maximum spanning tree in a
weighted directed graph.
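To make the procedure concrete, here is a minimal Python sketch of the same algorithm (not the book's pseudocode), under the assumptions that edge scores are stored in a dict score[(head, dep)] over a fully connected graph and that the root is node 0; the names max_spanning_tree and find_cycle are ours.

def max_spanning_tree(nodes, score, root=0):
    """Return a dict mapping each non-root node to its chosen head."""
    # Greedy selection: the highest-scoring incoming edge for every non-root vertex.
    best_head = {v: max((u for u in nodes if u != v and (u, v) in score),
                        key=lambda u: score[(u, v)])
                 for v in nodes if v != root}
    cycle = find_cycle(best_head)
    if cycle is None:                      # the greedy choice is already a tree
        return best_head
    # Rescore: subtract the best incoming score from every edge entering a cycle vertex,
    # then collapse the cycle into a single new node "C" (assumed not to clash with node ids).
    new_nodes = [v for v in nodes if v not in cycle] + ["C"]
    new_score, came_from = {}, {}
    for (u, v), s in score.items():
        if u in cycle and v in cycle:      # edges inside the cycle are dropped
            continue
        nu = "C" if u in cycle else u
        nv = "C" if v in cycle else v
        adj = s - score[(best_head[v], v)] if v in cycle else s
        if (nu, nv) not in new_score or adj > new_score[(nu, nv)]:
            new_score[(nu, nv)] = adj
            came_from[(nu, nv)] = (u, v)   # remember which original edge this stands for
    # Recurse on the contracted graph, then expand the collapsed node.
    sub_tree = max_spanning_tree(new_nodes, new_score, root)
    heads = {v: best_head[v] for v in cycle}       # keep the cycle's edges ...
    for v, u in sub_tree.items():
        orig_u, orig_v = came_from[(u, v)]
        if v == "C":
            heads[orig_v] = orig_u                 # ... except the one that breaks the cycle
        else:
            heads[v] = orig_u
    return heads

def find_cycle(head):
    """Return the set of nodes on a cycle in the head assignment, or None."""
    for start in head:
        seen, v = set(), start
        while v in head and v not in seen:
            seen.add(v)
            v = head[v]
        if v == start:
            return seen
    return None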
On arbitrary directed graphs, this version of the CLE algorithm runs in O(mn)
time, where m is the number of edges and n is the number of nodes. Since this particular
application of the algorithm begins by constructing a fully connected graph,
m = n², yielding a running time of O(n³). Gabow et al. (1986) present a more efficient
implementation with a running time of O(m + n log n).
Or, more succinctly:

score(S, e) = w · f

Given this formulation, we need to identify relevant features and train the weights.
The features (and feature combinations) used to train edge-factored models mir-
ror those used in training transition-based parsers, such as
Figure 19.13 (graphs omitted) Steps of the algorithm for the Book that flight example: greedy edge selection, the rescaled edge weights, and the graph with the cycle between that and flight collapsed into the node tf.
• Wordforms, lemmas, and parts of speech of the headword and its dependent.
• Corresponding features from the contexts before, after and between the words.
• Word embeddings.
• The dependency relation itself.
• The direction of the relation (to the right or left).
• The distance from the head to the dependent.
Given a set of features, our next problem is to learn a set of weights correspond-
ing to each. Unlike many of the learning problems discussed in earlier chapters,
here we are not training a model to associate training items with class labels, or
parser actions. Instead, we seek to train a model that assigns higher scores to cor-
rect trees than to incorrect ones. An effective framework for problems like this is to
inference-based
learning use inference-based learning combined with the perceptron learning rule. In this
framework, we parse a sentence (i.e, perform inference) from the training set using
some initially random set of initial weights. If the resulting parse matches the cor-
responding tree in the training data, we do nothing to the weights. Otherwise, we
find those features in the incorrect parse that are not present in the reference parse
and we lower their weights by a small amount based on the learning rate. We do this
incrementally for each sentence in our training data until the weights converge.
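A minimal Python sketch of this training loop, where parse(sent, weights) (inference under the current weights) and feature_vector(sent, tree) (a Counter of a tree's features) are passed in as hypothetical helpers; note that the standard structured-perceptron update shown here also raises the weights of features found only in the reference parse, a step the description above leaves implicit.

from collections import Counter

def train(sentences, gold_trees, parse, feature_vector, n_epochs=5, learning_rate=1.0):
    weights = Counter()                               # start from zero weights (a sketch choice)
    for _ in range(n_epochs):
        for sent, gold in zip(sentences, gold_trees):
            predicted = parse(sent, weights)          # inference with the current weights
            if predicted == gold:
                continue                              # correct parse: leave the weights alone
            gold_feats = feature_vector(sent, gold)
            pred_feats = feature_vector(sent, predicted)
            for f, c in (pred_feats - gold_feats).items():   # only in the incorrect parse
                weights[f] -= learning_rate * c
            for f, c in (gold_feats - pred_feats).items():   # only in the reference parse
                weights[f] += learning_rate * c
    return weights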
Figure 19.14 (diagram omitted) The biaffine parser: an encoder produces contextual embeddings r1, ..., rn; each token gets separate head and dependent representations, and a biaffine function with weights U, W, and b scores each potential edge, e.g. score(h1head, h3dep).
Here we’ll sketch the biaffine algorithm of Dozat and Manning (2017) and Dozat
et al. (2017) shown in Fig. 19.14, drawing on the work of Grünewald et al. (2021)
who tested many versions of the algorithm via their STEPS system. The algorithm
first runs the sentence X = x1 , ..., xn through an encoder to produce a contextual
embedding representation for each token R = r1 , ..., rn . The embedding for each
token is now passed through two separate feedforward networks, one to produce a
representation of this token as a head, and one to produce a representation of this
token as a dependent:
h_i^head = FFN_head(r_i)        (19.13)
h_i^dep  = FFN_dep(r_i)         (19.14)
Now to assign a score to the directed edge i → j (where w_i is the head and w_j is the
dependent), we feed the head representation of i, h_i^head, and the dependent representation
of j, h_j^dep, into a biaffine scoring function:

Score(i → j) = Biaff(h_i^head, h_j^dep)        (19.15)
Biaff(x, y) = xᵀ U y + W(x ⊕ y) + b            (19.16)
where U, W, and b are weights learned by the model. The idea of using a biaffine
function is to allow the system to learn multiplicative interactions between the vec-
tors x and y.
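As a concrete toy illustration of Eq. 19.16, here is a small numpy sketch of the biaffine function, with randomly initialized U, W, and b standing in for learned parameters:

import numpy as np

d = 4                                   # toy dimensionality of the head/dependent representations
rng = np.random.default_rng(0)
U = rng.normal(size=(d, d))             # bilinear term
W = rng.normal(size=(2 * d,))           # linear term over the concatenation x ⊕ y
b = 0.0                                 # bias

def biaff(x, y):
    return x @ U @ y + W @ np.concatenate([x, y]) + b

h_head_i = rng.normal(size=d)           # stands in for h_i^head = FFN_head(r_i)
h_dep_j = rng.normal(size=d)            # stands in for h_j^dep = FFN_dep(r_j)
score_ij = biaff(h_head_i, h_dep_j)     # Score(i → j)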
If we pass Score(i → j) through a softmax, we end up with a probability distribution,
for each token j, over potential heads i (all other tokens in the sentence):

p(i → j) = softmax([Score(k → j); ∀k ≠ j, 1 ≤ k ≤ n])        (19.17)
This probability can then be passed to the maximum spanning tree algorithm of
Section 19.3.1 to find the best tree.
This p(i → j) classifier is trained by optimizing the cross-entropy loss.
Note that the algorithm as we’ve described it is unlabeled. To make this into
a labeled algorithm, the Dozat and Manning (2017) algorithm actually trains two
classifiers. The first classifier, the edge-scorer, the one we described above, assigns
a probability p(i → j) to each pair of words w_i and w_j. Then the Maximum Spanning Tree
algorithm is run to get a single best dependency parse tree for the sentence. We then
apply a second classifier, the label-scorer, whose job is to find the maximum prob-
ability label for each edge in this parse. This second classifier has the same form
as (19.15-19.17), but instead of being trained to predict with binary softmax the
probability of an edge existing between two words, it is trained with a softmax over
dependency labels to predict the dependency label between the words.
19.4 Evaluation
As with phrase structure-based parsing, the evaluation of dependency parsers pro-
ceeds by measuring how well they work on a test set. An obvious metric would be
exact match (EM)—how many sentences are parsed correctly. This metric is quite
pessimistic, with most sentences being marked wrong. Such measures are not fine-
grained enough to guide the development process. Our metrics need to be sensitive
enough to tell if actual improvements are being made.
For these reasons, the most common metrics for evaluating dependency parsers
are labeled and unlabeled attachment accuracy. Labeled attachment refers to the
proper assignment of a word to its head along with the correct dependency relation.
Unlabeled attachment simply looks at the correctness of the assigned head, ignor-
ing the dependency relation. Given a system output and a corresponding reference
parse, accuracy is simply the percentage of words in an input that are assigned the
correct head with the correct relation. These metrics are usually referred to as the
labeled attachment score (LAS) and unlabeled attachment score (UAS). Finally, we
can make use of a label accuracy score (LS), the percentage of tokens with correct
labels, ignoring where the relations are coming from.
As an example, consider the reference parse and system parse for the following
example shown in Fig. 19.15.
(19.18) Book me the flight through Houston.
The system correctly finds 4 of the 6 dependency relations present in the reference
parse and receives an LAS of 2/3. However, one of the 2 incorrect relations found
by the system holds between book and flight, which are in a head-dependent relation
in the reference parse; the system therefore achieves a UAS of 5/6.
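A toy Python sketch of computing these scores from per-word (head, relation) pairs, where heads are 1-indexed word positions and 0 is the root; the gold analysis below follows the reference parse of the Book me the flight example, but the system output is invented for illustration:

def attachment_scores(gold, pred):
    assert len(gold) == len(pred)
    uas = sum(g_head == p_head
              for (g_head, _), (p_head, _) in zip(gold, pred)) / len(gold)
    las = sum((g_head, g_rel) == (p_head, p_rel)
              for (g_head, g_rel), (p_head, p_rel) in zip(gold, pred)) / len(gold)
    return uas, las

# Book me the flight through Houston
gold = [(0, "root"), (1, "iobj"), (4, "det"), (1, "obj"), (6, "case"), (4, "nmod")]
pred = [(0, "root"), (1, "nsubj"), (4, "det"), (2, "xcomp"), (6, "case"), (4, "nmod")]
print(attachment_scores(gold, pred))    # UAS 5/6 ≈ 0.83, LAS 4/6 ≈ 0.67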
Beyond attachment scores, we may also be interested in how well a system is
performing on a particular kind of dependency relation, for example NSUBJ, across
Figure 19.15 Reference and system parses for Book me the flight through Houston, resulting in an LAS of
2/3 and an UAS of 5/6. (Parse diagrams omitted: (a) Reference, (b) System.)
a development corpus. Here we can make use of the notions of precision and recall
introduced in Chapter 17, measuring the percentage of relations labeled NSUBJ by
the system that were correct (precision), and the percentage of the NSUBJ relations
present in the development set that were in fact discovered by the system (recall).
We can employ a confusion matrix to keep track of how often each dependency type
was confused for another.
19.5 Summary
This chapter has introduced the concept of dependency grammars and dependency
parsing. Here’s a summary of the main points that we covered:
Bibliographical and Historical Notes
The Universal Dependencies (UD) project is an open community effort to create a framework for dependency treebank annotation, with nearly 200
treebanks in over 100 languages. The UD annotation scheme evolved out of several
distinct efforts including Stanford dependencies (de Marneffe et al. 2006, de Marn-
effe and Manning 2008, de Marneffe et al. 2014), Google’s universal part-of-speech
tags (Petrov et al., 2012), and the Interset interlingua for morphosyntactic tagsets
(Zeman, 2008).
The Conference on Natural Language Learning (CoNLL) has conducted an in-
fluential series of shared tasks related to dependency parsing over the years (Buch-
holz and Marsi 2006, Nivre et al. 2007a, Surdeanu et al. 2008, Hajič et al. 2009).
More recent evaluations have focused on parser robustness with respect to morpho-
logically rich languages (Seddah et al., 2013), and non-canonical language forms
such as social media, texts, and spoken language (Petrov and McDonald, 2012).
Choi et al. (2015) presents a performance analysis of 10 dependency parsers across
a range of metrics, as well as DEPENDABLE, a robust parser evaluation tool.
Exercises
CHAPTER
20 Information Extraction:
Relations, Events, and Time
Time will explain.
Jane Austen, Persuasion
Imagine that you are an analyst with an investment firm that tracks airline stocks.
You’re given the task of determining the relationship (if any) between airline an-
nouncements of fare increases and the behavior of their stocks the next day. His-
torical data about stock prices is easy to come by, but what about the airline an-
nouncements? You will need to know at least the name of the airline, the nature of
the proposed fare hike, the dates of the announcement, and possibly the response of
other airlines. Fortunately, these can all be found in news articles like this one:
Citing high fuel prices, United Airlines said Friday it has increased fares
by $6 per round trip on flights to some cities also served by lower-
cost carriers. American Airlines, a unit of AMR Corp., immediately
matched the move, spokesman Tim Wagner said. United, a unit of UAL
Corp., said the increase took effect Thursday and applies to most routes
where it competes against discount carriers, such as Chicago to Dallas
and Denver to San Francisco.
This chapter presents techniques for extracting limited kinds of semantic content
information extraction
from text. This process of information extraction (IE) turns the unstructured
information embedded in texts into structured data, for example for populating a
relational database to enable further processing.
relation extraction
We begin with the task of relation extraction: finding and classifying semantic
relations among entities mentioned in a text, like child-of (X is the child-of Y), or
part-whole or geospatial relations. Relation extraction has close links to populating
knowledge graphs
a relational database, and knowledge graphs, datasets of structured relational
knowledge, are a useful way for search engines to present information to users.
event extraction
Next, we discuss event extraction, the task of finding events in which these entities
participate, like, in our sample text, the fare increases by United and American
and the reporting events said and cite. Events are also situated in time, occurring at
a particular date or time, and events can be related temporally, happening before or
after or simultaneously with each other. We’ll need to recognize temporal expres-
sions like Friday, Thursday or two days from now and times such as 3:30 P.M., and
normalize them onto specific calendar dates or times. We’ll need to link Friday to
the time of United’s announcement, Thursday to the previous day’s fare increase,
and we’ll need to produce a timeline in which United’s announcement follows the
fare increase and American’s announcement follows both of those events.
template filling The related task of template filling is to find recurring stereotypical events or
situations in documents and fill in the template slots. These slot-fillers may consist
of text segments extracted directly from the text, or concepts like times, amounts, or
ontology entities that have been inferred through additional processing. Our airline
Figure 20.1 The 17 relations used in the ACE relation extraction task. (Diagram omitted.)
text presents such a stereotypical situation since airlines often raise fares and then
wait to see if competitors follow along. Here we can identify United as a lead air-
line that initially raised its fares, $6 as the amount, Thursday as the increase date,
and American as an airline that followed along, leading to a filled template like the
following:
FARE-RAISE ATTEMPT:  LEAD AIRLINE: UNITED AIRLINES
                     AMOUNT: $6
                     EFFECTIVE DATE: 2006-10-26
                     FOLLOWER: AMERICAN AIRLINES
Sets of relations have been defined for many other domains as well. For example
UMLS, the Unified Medical Language System from the US National Library of
Medicine has a network that defines 134 broad subject categories, entity types, and
54 relations between the entities, such as the following:
Entity Relation Entity
Injury disrupts Physiological Function
Bodily Location location-of Biologic Function
Anatomical Structure part-of Organism
Pharmacologic Substance causes Pathological Function
Pharmacologic Substance treats Pathologic Function
Given a medical sentence like this one:
(20.1) Doppler echocardiography can be used to diagnose left anterior descending
artery stenosis in patients with type 2 diabetes
We could thus extract the UMLS relation:
Echocardiography, Doppler Diagnoses Acquired stenosis
infoboxes Wikipedia also offers a large supply of relations, drawn from infoboxes, struc-
tured tables associated with certain Wikipedia articles. For example, the Wikipedia
infobox for Stanford includes structured facts like state = "California" or
president = "Marc Tessier-Lavigne". These facts can be turned into rela-
RDF tions like president-of or located-in. or into relations in a metalanguage called RDF
RDF triple (Resource Description Framework). An RDF triple is a tuple of entity-relation-
entity, called a subject-predicate-object expression. Here’s a sample RDF triple:
subject predicate object
Golden Gate Park location San Francisco
For example the crowdsourced DBpedia (Bizer et al., 2009) is an ontology de-
rived from Wikipedia containing over 2 billion RDF triples. Another dataset from
Freebase Wikipedia infoboxes, Freebase (Bollacker et al., 2008), now part of Wikidata (Vrandečić
and Krötzsch, 2014), has relations between people and their nationality, or locations,
and other locations they are contained in.
WordNet or other ontologies offer useful ontological relations that express hier-
is-a archical relations between words or concepts. For example WordNet has the is-a or
hypernym hypernym relation between classes,
Giraffe is-a ruminant is-a ungulate is-a mammal is-a vertebrate ...
WordNet also has an Instance-of relation between individuals and classes, so that for
example San Francisco is in the Instance-of relation with city. Extracting these
relations is an important step in extending or building ontologies.
Finally, there are large datasets that contain sentences hand-labeled with their
relations, designed for training and testing relation extractors. The TACRED dataset
(Zhang et al., 2017) contains 106,264 examples of relation triples about particular
people or organizations, labeled in sentences from news and web text drawn from the
annual TAC Knowledge Base Population (TAC KBP) challenges. TACRED contains
41 relation types (like per:city of birth, org:subsidiaries, org:member of, per:spouse),
plus a no relation tag; examples are shown in Fig. 20.3. About 80% of all examples
are annotated as no relation; having sufficient negative data is important for training
supervised classifiers.
A standard dataset was also produced for the SemEval 2010 Task 8, detecting
relations between nominals (Hendrickx et al., 2009). The dataset has 10,717 exam-
ples, each with a pair of nominals (untyped) hand-labeled with one of 9 directed
relations like product-producer ( a factory manufactures suits) or component-whole
(my apartment has a large kitchen).
allowing us to infer
hyponym(Gelidium, red algae) (20.4)
NP {, NP}* {,} (and|or) other NPH           temples, treasuries, and other important civic buildings
NPH such as {NP,}* {(or|and)} NP            red algae such as Gelidium
such NPH as {NP,}* {(or|and)} NP            such authors as Herrick, Goldsmith, and Shakespeare
NPH {,} including {NP,}* {(or|and)} NP      common-law countries, including Canada and England
NPH {,} especially {NP}* {(or|and)} NP      European countries, especially France, England, and Spain
Figure 20.4 Hand-built lexico-syntactic patterns for finding hypernyms, using {} to mark optionality (Hearst
1992a, Hearst 1998).
Figure 20.4 shows five patterns Hearst (1992a, 1998) suggested for inferring
the hyponym relation; we've shown NPH as the parent/hypernym. Modern versions
of the pattern-based approach extend it by adding named entity constraints. For
example if our goal is to answer questions about “Who holds what office in which
organization?”, we can use patterns like the following:
PER, POSITION of ORG:
George Marshall, Secretary of State of the United States
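Patterns like those in Fig. 20.4 can be sketched directly as regular expressions. Here is a toy Python version of the NPH such as NP pattern; the NP expression below is a deliberately crude stand-in for the noun-phrase chunks a real system would get from a chunker or parser:

import re

NP = r"[A-Z]?[a-z]+(?: [A-Z]?[a-z]+)?"                    # crude stand-in for a noun phrase
such_as = re.compile(rf"({NP}) such as ((?:{NP}, )*(?:or |and )?{NP})")

m = such_as.search("red algae such as Gelidium")
if m:
    hypernym = m.group(1)                                  # "red algae"
    hyponyms = [chunk.strip() for chunk in re.split(r", | and | or ", m.group(2))]
    print(hypernym, hyponyms)                              # hyponym(Gelidium, red algae)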
relations ← nil
entities ← FINDENTITIES(words)
forall entity pairs 〈e1, e2〉 in entities do
   if RELATED?(e1, e2)
      relations ← relations + CLASSIFYRELATION(e1, e2)
Figure 20.5 Finding and classifying the relations among entities in a text.
[Diagram: a linear classifier computing p(relation|SUBJ,OBJ) over the encoder output for the input "[CLS] [SUBJ_PERSON] was born in [OBJ_LOC] , Michigan"]
Figure 20.6 Relation extraction as a linear layer on top of an encoder (in this case BERT),
with the subject and object entities replaced in the input by their NER tags (Zhang et al. 2017,
Joshi et al. 2020).
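A toy sketch of preparing the input string of Fig. 20.6: the subject and object spans are replaced by bracketed NER types before the string is handed to the encoder and linear classifier (not shown here); the sentence and spans below are invented for illustration:

def mask_entities(tokens, subj_span, subj_type, obj_span, obj_type):
    """Replace the (start, end) token spans of the two entities with their NER types."""
    masked, i = [], 0
    while i < len(tokens):
        if i == subj_span[0]:
            masked.append(f"[SUBJ_{subj_type}]")
            i = subj_span[1]
        elif i == obj_span[0]:
            masked.append(f"[OBJ_{obj_type}]")
            i = obj_span[1]
        else:
            masked.append(tokens[i])
            i += 1
    return "[CLS] " + " ".join(masked)

tokens = ["Jane", "Doe", "was", "born", "in", "Ann", "Arbor", ",", "Michigan"]
print(mask_entities(tokens, (0, 2), "PERSON", (5, 7), "LOC"))
# [CLS] [SUBJ_PERSON] was born in [OBJ_LOC] , Michigan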
Supervised relation extraction can achieve high accuracies. But labeling a large training set is extremely expensive and supervised
models are brittle: they don’t generalize well to different text genres. For this rea-
son, much research in relation extraction has focused on the semi-supervised and
unsupervised approaches we turn to next.
Suppose, for example, that we need to create a list of airline/hub pairs, and we
know only that Ryanair has a hub at Charleroi. We can use this seed fact to discover
new patterns by finding other mentions of this relation in our corpus. We search
for the terms Ryanair, Charleroi and hub in some proximity. Perhaps we find the
following set of sentences:
(20.6) Budget airline Ryanair, which uses Charleroi as a hub, scrapped all
weekend flights out of the airport.
(20.7) All flights in and out of Ryanair’s hub at Charleroi airport were grounded on
Friday...
(20.8) A spokesman at Charleroi, a main hub for Ryanair, estimated that 8000
passengers had already been affected.
From these results, we can use the context of words between the entity mentions,
the words before mention one, the word after mention two, and the named entity
types of the two mentions, and perhaps other features, to extract general patterns
such as the following:
/ [ORG], which uses [LOC] as a hub /
/ [ORG]'s hub at [LOC] /
/ [LOC], a main hub for [ORG] /
These new patterns can then be used to search for additional tuples.
confidence values
Bootstrapping systems also assign confidence values to new tuples to avoid semantic
semantic drift
drift. In semantic drift, an erroneous pattern leads to the introduction of
erroneous tuples, which, in turn, lead to the creation of problematic patterns and the
meaning of the extracted relations ‘drifts’. Consider the following example:
(20.9) Sydney has a ferry hub at Circular Quay.
If accepted as a positive example, this expression could lead to the incorrect in-
troduction of the tuple 〈Sydney,CircularQuay〉. Patterns based on this tuple could
propagate further errors into the database.
Confidence values for patterns are based on balancing two factors: the pattern’s
performance with respect to the current set of tuples and the pattern’s productivity
in terms of the number of matches it produces in the document collection. More
formally, given a document collection D, a current set of tuples T , and a proposed
pattern p, we need to track two factors:
• hits(p): the set of tuples in T that p matches while looking in D
• finds(p): The total set of tuples that p finds in D
The following equation balances these considerations (Riloff and Jones, 1999).
Conf_RlogF(p) = (|hits(p)| / |finds(p)|) × log(|finds(p)|)        (20.10)
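Eq. 20.10 is simple to compute directly; here is a Python sketch with a toy hits/finds pair based on the Ryanair example above:

import math

def conf_rlogf(hits, finds):
    """hits: tuples in T matched by p in D; finds: all tuples that p extracts from D."""
    return (len(hits) / len(finds)) * math.log(len(finds))

hits = {("Ryanair", "Charleroi")}
finds = {("Ryanair", "Charleroi"), ("Sydney", "Circular Quay")}
print(conf_rlogf(hits, finds))          # 0.5 * log 2 ≈ 0.35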
relations, the classifier will also need to be able to label an example as no-relation.
This label is trained by randomly selecting entity pairs that do not appear in any
Freebase relation, extracting features for them, and building a feature vector for
each such tuple. The final algorithm is sketched in Fig. 20.8.
foreach relation R
   foreach tuple (e1,e2) of entities with relation R in D
      sentences ← Sentences in T that contain e1 and e2
      f ← Frequent features in sentences
      observations ← observations + new training tuple (e1, e2, f, R)
C ← Train supervised classifier on observations
return C
Figure 20.8 The distant supervision algorithm for relation extraction. A neural classifier
would skip the feature set f .
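Here is a minimal Python sketch of the data-construction loop in Fig. 20.8, assuming kb is a list of (e1, relation, e2) triples and corpus_index maps an entity pair to the sentences mentioning both entities; the feature extractor is a toy stand-in, and the classifier training step is left out:

from collections import Counter

def extract_features(sentence, e1, e2):
    """Toy feature extractor: the lowercased words of the sentence other than the entities."""
    return ["w=" + w.lower() for w in sentence.split() if w not in (e1, e2)]

def build_observations(kb, corpus_index, min_count=2):
    observations = []
    for e1, relation, e2 in kb:
        sentences = corpus_index.get((e1, e2), [])
        if not sentences:
            continue
        counts = Counter(f for s in sentences for f in extract_features(s, e1, e2))
        frequent = [f for f, c in counts.items() if c >= min_count]   # keep recurring features
        observations.append((e1, e2, frequent, relation))
    return observations    # these observations then train a supervised classifier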
Distant supervision shares advantages with each of the methods we’ve exam-
ined. Like supervised classification, distant supervision uses a classifier with lots
of features, supervised by detailed hand-created knowledge. Like pattern-based
classifiers, it can make use of high-precision evidence for the relation between en-
tities. Indeed, distant supervision systems learn patterns just like the hand-built
patterns of early relation extractors. For example the is-a or hypernym extraction
system of Snow et al. (2005) used hypernym/hyponym NP pairs from WordNet as
distant supervision, and then learned new patterns from large amounts of text. Their
system induced exactly the original 5 template patterns of Hearst (1992a), but also
70,000 additional patterns including these four:
NPH like NP Many hormones like leptin...
NPH called NP ...using a markup language called XHTML
NP is a NPH Ruby is a programming language...
NP, a NPH IBM, a company with a long...
This ability to use a large number of features simultaneously means that, un-
like the iterative expansion of patterns in seed-based systems, there’s no semantic
drift. Like unsupervised classification, it doesn’t use a labeled training corpus of
texts, so it isn’t sensitive to genre issues in the training corpus, and relies on very
large amounts of unlabeled data. Distant supervision also has the advantage that it
can create training tuples to be used with neural classifiers, where features are not
required.
The main problem with distant supervision is that it tends to produce low-precision
results, and so current research focuses on ways to improve precision. Furthermore,
distant supervision can only help in extracting relations for which a large enough
database already exists. To extract new relations without datasets, or relations for
new domains, purely unsupervised methods must be used.
has the relation phrases has a hub in and is the headquarters of (it also has has and
is, but longer phrases are preferred). Step 3 finds United to the left and Chicago to
the right of has a hub in, and skips over which to find Chicago to the left of is the
headquarters of. The final output is:
r1: <United, has a hub in, Chicago>
r2: <Chicago, is the headquarters of, United Continental Holdings>
The great advantage of unsupervised relation extraction is its ability to handle
a huge number of relations without having to specify them in advance. The dis-
advantage is the need to map all the strings into some canonical form for adding
to databases or knowledge graphs. Current methods focus heavily on relations ex-
pressed with verbs, and so will miss many relations that are expressed nominally.
[EVENT Citing] high fuel prices, United Airlines [EVENT said] Fri-
day it has [EVENT increased] fares by $6 per round trip on flights to
some cities also served by lower-cost carriers. American Airlines, a unit
of AMR Corp., immediately [EVENT matched] [EVENT the move],
spokesman Tim Wagner [EVENT said]. United, a unit of UAL Corp.,
[EVENT said] [EVENT the increase] took effect Thursday and [EVENT
applies] to most routes where it [EVENT competes] against discount
carriers, such as Chicago to Dallas and Denver to San Francisco.
In English, most event mentions correspond to verbs, and most verbs introduce
events. However, as we can see from our example, this is not always the case. Events
can be introduced by noun phrases, as in the move and the increase, and some verbs
fail to introduce events, as in the phrasal verb took effect, which refers to when the
light verbs
event began rather than to the event itself. Similarly, light verbs such as make, take,
and have often fail to denote events. A light verb is a verb that has very little meaning
itself, and the associated event is instead expressed by its direct object noun. In light
verb examples like took a flight, it’s the word flight that defines the event; these light
verbs just provide a syntactic structure for the noun’s arguments.
Various versions of the event extraction task exist, depending on the goal. For
example in the TempEval shared tasks (Verhagen et al. 2009) the goal is to extract
events and aspects like their aspectual and temporal properties. Events are to be
reporting events
classified as actions, states, reporting events (say, report, tell, explain), perception
events, and so on. The aspect, tense, and modality of each event also needs to be
extracted. Thus for example the various said events in the sample text would be
annotated as (class=REPORTING, tense=PAST, aspect=PERFECTIVE).
Event extraction is generally modeled via supervised learning, detecting events
via IOB sequence models and assigning event classes and attributes with multi-class
classifiers. The input can be neural models starting from encoders; or classic feature-
based models using features like those in Fig. 20.10.
Feature Explanation
Character affixes Character-level prefixes and suffixes of target word
Nominalization suffix Character-level suffixes for nominalizations (e.g., -tion)
Part of speech Part of speech of the target word
Light verb Binary feature indicating that the target is governed by a light verb
Subject syntactic category Syntactic category of the subject of the sentence
Morphological stem Stemmed version of the target word
Verb root Root form of the verb basis for a nominalization
WordNet hypernyms Hypernym set for the target
Figure 20.10 Features commonly used in classic feature-based approaches to event detection.
to the second. Accompanying these notions in most theories is the idea of the cur-
rent moment in time. Combining this notion with the idea of a temporal ordering
relationship yields the familiar notions of past, present, and future.
Various kinds of temporal representation systems can be used to talk about tem-
poral ordering relationship. One of the most commonly used in computational mod-
interval algebra eling is the interval algebra of Allen (1984). Allen models all events and time
expressions as intervals; there is no representation for points (although intervals can
be very short). In order to deal with intervals without points, he identifies 13 primi-
tive relations that can hold between these temporal intervals. Fig. 20.11 shows these
Allen relations 13 Allen relations.
Figure 20.11 (diagrams omitted) The 13 Allen relations between intervals A and B: before/after, meets/meets', overlaps/overlaps', starts/starts', finishes/finishes', during/during', and equals.
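For intervals represented as (start, end) pairs, several of these relations can be written directly as comparisons; a toy sketch of a few of them (the rest follow the same style):

def before(a, b):   return a[1] < b[0]
def meets(a, b):    return a[1] == b[0]
def overlaps(a, b): return a[0] < b[0] < a[1] < b[1]
def during(a, b):   return b[0] < a[0] and a[1] < b[1]
def equals(a, b):   return a == b

A, B = (1, 3), (3, 6)
print(meets(A, B), before(A, B))        # True False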
Figure 20.12 Reichenbach’s approach applied to various English tenses. In these diagrams,
time flows from left to right, E denotes the time of the event, R denotes the reference time,
and U denotes the time of the utterance.
Languages have many other ways to convey temporal information besides tense.
Most useful for our purposes will be temporal expressions like in the morning or
6:45 or afterwards.
(20.20) I’d like to go at 6:45 in the morning.
(20.21) Somewhere around noon, please.
(20.22) I want to take the train back afterwards.
Incidentally, temporal expressions display a fascinating metaphorical conceptual
organization. Temporal expressions in English are frequently expressed in spatial
terms, as is illustrated by the various uses of at, in, somewhere, and near in these
examples (Lakoff and Johnson 1980, Jackendoff 1983). Metaphorical organizations
such as these, in which one domain is systematically expressed in terms of another,
are very common in languages of the world.
Delta Air Lines earnings <EVENT eid="e1" class="OCCURRENCE"> soared </EVENT> 33% to a
record in <TIMEX3 tid="t58" type="DATE" value="1989-Q1" anchorTimeID="t57"> the
fiscal first quarter </TIMEX3>, <EVENT eid="e3" class="OCCURRENCE">bucking</EVENT>
the industry trend toward <EVENT eid="e4" class="OCCURRENCE">declining</EVENT>
profits.
Figure 20.14 A graph of the text in Eq. 20.35, adapted from (Ocal et al., 2022). TLINKs
are shown in blue, ALINKs in red, and SLINKs in green. (Graph omitted.)
be nouns, proper nouns, adjectives, and adverbs; full temporal expressions consist
of their phrasal projections: noun phrases, adjective phrases, and adverbial phrases
(Figure 20.16).
Category Examples
Noun morning, noon, night, winter, dusk, dawn
Proper Noun January, Monday, Ides, Easter, Rosh Hashana, Ramadan, Tet
Adjective recent, past, annual, former
Adverb hourly, daily, monthly, yearly
Figure 20.16 Examples of temporal lexical triggers.
The task is to detect temporal expressions in running text, like this example,
shown with TIMEX3 tags (Pustejovsky et al. 2005, Ferro et al. 2005).
A fare increase initiated <TIMEX3>last week</TIMEX3> by UAL
Corp’s United Airlines was matched by competitors over <TIMEX3>the
weekend</TIMEX3>, marking the second successful fare increase in
<TIMEX3>two weeks</TIMEX3>.
Rule-based approaches use cascades of regular expressions to recognize larger
and larger chunks from previous stages, based on patterns containing parts of speech,
trigger words (e.g., February) or classes (e.g., MONTH) (Chang and Manning, 2012;
Strötgen and Gertz, 2013; Chambers, 2013). Here’s a rule from SUTime (Chang and
Manning, 2012) for detecting expressions like 3 years old:
/(\d+)[-\s]($TEUnits)(s)?([-\s]old)?/
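For illustration, a rough Python analogue of this rule, where TE_UNITS stands in for SUTime's $TEUnits macro of time-unit words:

import re

TE_UNITS = r"(?:year|month|week|day|hour|minute|second)"
duration = re.compile(rf"(\d+)[-\s]({TE_UNITS})(s)?([-\s]old)?")

for text in ["a 3 years old tradition", "the 6-month review"]:
    m = duration.search(text)
    if m:
        print(m.group(0))               # "3 years old", then "6-month"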
Sequence-labeling approaches use the standard IOB scheme, marking words
that are either (I)nside, (O)utside or at the (B)eginning of a temporal expression:
A fare increase initiated last week by UAL Corp’s...
OO O O B I O O O
A statistical sequence labeler is trained, using either embeddings or a fine-tuned
encoder, or classic features extracted from the token and context including words,
lexical triggers, and POS.
Temporal expression recognizers are evaluated with the usual recall, precision,
and F-measures. A major difficulty for all of these very lexicalized approaches is
avoiding expressions that trigger false positives:
(20.36) 1984 tells the story of Winston Smith...
(20.37) ...U2’s classic Sunday Bloody Sunday
are represented via the ISO 8601 standard for encoding temporal values (ISO8601,
2004). Fig. 20.17 reproduces our earlier example with these value attributes.
The dateline, or document date, for this text was July 2, 2007. The ISO repre-
sentation for this kind of expression is YYYY-MM-DD, or in this case, 2007-07-02.
The encodings for the temporal expressions in our sample text all follow from this
date, and are shown here as values for the VALUE attribute.
The first temporal expression in the text proper refers to a particular week of the
year. In the ISO standard, weeks are numbered from 01 to 53, with the first week
of the year being the one that has the first Thursday of the year. These weeks are
represented with the template YYYY-Wnn. The ISO week for our document date is
week 27; thus the value for last week is represented as “2007-W26”.
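This week arithmetic can be checked with Python's datetime, which implements the same ISO 8601 week numbering (a full normalizer would also handle wraparound at year boundaries):

from datetime import date

doc_date = date(2007, 7, 2)             # the article's dateline
year, week, _ = doc_date.isocalendar()
print(f"{year}-W{week:02d}")            # 2007-W27, the document's own week
print(f"{year}-W{week - 1:02d}")        # "last week" normalizes to 2007-W26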
The next temporal expression is the weekend. ISO weeks begin on Monday;
thus, weekends occur at the end of a week and are fully contained within a single
week. Weekends are treated as durations, so the value of the VALUE attribute has
to be a length. Durations are represented according to the pattern Pnx, where n is
an integer denoting the length and x represents the unit, as in P3Y for three years
or P2D for two days. In this example, one weekend is captured as P1WE. In this
case, there is also sufficient information to anchor this particular weekend as part of
a particular week. Such information is encoded in the ANCHORT IME ID attribute.
Finally, the phrase two weeks also denotes a duration captured as P2W. Figure 20.18
give some more examples, but there is a lot more to the various temporal annotation
standards; consult ISO8601 (2004), Ferro et al. (2005), and Pustejovsky et al. (2005)
for more details.
temporal to as the document’s temporal anchor. The values of temporal expressions such
anchor
as today, yesterday, or tomorrow can all be computed with respect to this temporal
anchor. The semantic procedure for today simply assigns the anchor, and the attach-
ments for tomorrow and yesterday add a day and subtract a day from the anchor,
respectively. Of course, given the cyclic nature of our representations for months,
weeks, days, and times of day, our temporal arithmetic procedures must use modulo
arithmetic appropriate to the time unit being used.
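A minimal sketch of this anchor-based arithmetic for the fully deictic expressions today, yesterday, and tomorrow:

from datetime import date, timedelta

def resolve(expression, anchor):
    offsets = {"today": 0, "yesterday": -1, "tomorrow": +1}
    return anchor + timedelta(days=offsets[expression])

anchor = date(2007, 7, 2)               # the document date serves as the temporal anchor
print(resolve("yesterday", anchor))     # 2007-07-01
print(resolve("tomorrow", anchor))      # 2007-07-03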
Unfortunately, even simple expressions such as the weekend or Wednesday in-
troduce a fair amount of complexity. In our current example, the weekend clearly
refers to the weekend of the week that immediately precedes the document date. But
this won’t always be the case, as is illustrated in the following example.
(20.38) Random security checks that began yesterday at Sky Harbor will continue
at least through the weekend.
In this case, the expression the weekend refers to the weekend of the week that the
anchoring date is part of (i.e., the coming weekend). The information that signals
this meaning comes from the tense of continue, the verb governing the weekend.
Relative temporal expressions are handled with temporal arithmetic similar to
that used for today and yesterday. The document date indicates that our example
article is ISO week 27, so the expression last week normalizes to the current week
minus 1. To resolve ambiguous next and last expressions we consider the distance
from the anchoring date to the nearest unit. Next Friday can refer either to the
immediately next Friday or to the Friday following that, but the closer the document
date is to a Friday, the more likely it is that the phrase will skip the nearest one. Such
ambiguities are handled by encoding language and domain-specific heuristics into
the temporal attachments.
supervised by the gold labels in the TimeBank corpus with features like words/em-
beddings, parse paths, tense and aspect. The sieve-based architecture using precision-
ranked sets of classifiers, which we’ll introduce in Chapter 23, is also commonly
used.
Systems that perform all 4 tasks (temporal expression extraction and normalization,
event extraction, and time/event linking) include TARSQI (Verhagen et al., 2005),
CLEARTK (Bethard, 2013), CAEVO (Chambers et al., 2014), and CATENA (Mirza
and Tonelli, 2016).
This template has four slots (LEAD AIRLINE, AMOUNT, EFFECTIVE DATE, FOL -
LOWER ). The next section describes a standard sequence-labeling approach to filling
slots. Section 20.8.2 then describes an older system based on the use of cascades of
finite-state transducers and designed to address a more complex template-filling task
that current learning-based systems don’t yet address.
The usual set of features can be used: tokens, embeddings, word shapes, part-of-speech tags,
syntactic chunk tags, and named entity tags.
role-filler extraction
The second system has the job of role-filler extraction. A separate classifier is
trained to detect each role (LEAD-AIRLINE, AMOUNT, and so on). This can be a
binary classifier that is run on every noun-phrase in the parsed input sentence, or a
sequence model run over sequences of words. Each role classifier is trained on the
labeled data in the training set. Again, the usual set of features can be used, but now
trained only on an individual noun phrase or the fillers of a single slot.
Multiple non-identical text segments might be labeled with the same slot la-
bel. For example in our sample text, the strings United or United Airlines might be
labeled as the L EAD A IRLINE. These are not incompatible choices and the corefer-
ence resolution techniques introduced in Chapter 23 can provide a path to a solution.
A variety of annotated collections have been used to evaluate this style of ap-
proach to template filling, including sets of job announcements, conference calls for
papers, restaurant guides, and biological texts. A key open question is extracting
templates in cases where there is no training data or even predefined templates, by
inducing templates as sets of linked events (Chambers and Jurafsky, 2011).
Tie-up-1:
   RELATIONSHIP: tie-up
   ENTITIES: Bridgestone Sports Co., a local concern, a Japanese trading house
   JOINT VENTURE: Bridgestone Sports Taiwan Co.
   ACTIVITY: Activity-1
   AMOUNT: NT$20000000

Activity-1:
   COMPANY: Bridgestone Sports Taiwan Co.
   PRODUCT: iron and “metal wood” clubs
   START DATE: DURING: January 1990
Figure 20.19 The templates produced by FASTUS given the input text on page 457.
Early systems for dealing with these complex templates were based on cascades
of transducers based on handwritten rules, as sketched in Fig. 20.20.
The first four stages use handwritten regular expression and grammar rules to
do basic tokenization, chunking, and parsing. Stage 5 then recognizes entities and
events with a recognizer based on finite-state transducers (FSTs), and inserts the rec-
ognized objects into the appropriate slots in templates. This FST recognizer is based
on hand-built regular expressions like the following (NG indicates Noun-Group and
VG Verb-Group), which matches the first sentence of the news story above.
NG(Company/ies) VG(Set-up) NG(Joint-Venture) with NG(Company/ies)
VG(Produce) NG(Product)
The result of processing these two sentences is the five draft templates (Fig. 20.21)
that must then be merged into the single hierarchical structure shown in Fig. 20.19.
The merging algorithm, after performing coreference resolution, merges two activi-
ties that are likely to be describing the same events.
# Template/Slot Value
1 RELATIONSHIP: TIE-UP
  ENTITIES: Bridgestone Co., a local concern, a Japanese trading house
2 ACTIVITY: PRODUCTION
  PRODUCT: “golf clubs”
3 RELATIONSHIP: TIE-UP
  JOINT VENTURE: “Bridgestone Sports Taiwan Co.”
  AMOUNT: NT$20000000
4 ACTIVITY: PRODUCTION
  COMPANY: “Bridgestone Sports Taiwan Co.”
  START DATE: DURING: January 1990
5 ACTIVITY: PRODUCTION
  PRODUCT: “iron and “metal wood” clubs”
Figure 20.21 The five partial templates produced by stage 5 of FASTUS. These templates
are merged in stage 6 to produce the final template shown in Fig. 20.19 on page 457.
20.9 Summary
This chapter has explored techniques for extracting limited forms of semantic con-
tent from texts.
• Relations among entities can be extracted by pattern-based approaches, su-
pervised learning methods when annotated training data is available, lightly
supervised bootstrapping methods when small numbers of seed tuples or
seed patterns are available, distant supervision when a database of relations
is available, and unsupervised or Open IE methods.
• Reasoning about time can be facilitated by detection and normalization of
temporal expressions.
• Events can be ordered in time using sequence models and classifiers trained
on temporally- and event-labeled data like the TimeBank corpus.
Exercises
20.1 Acronym expansion, the process of associating a phrase with an acronym, can
be accomplished by a simple form of relational analysis. Develop a system
based on the relation analysis approaches described in this chapter to populate
a database of acronym expansions. If you focus on English Three Letter
Acronyms (TLAs) you can evaluate your system’s performance by comparing
it to Wikipedia’s TLA page.
20.2 Acquire the CMU seminar corpus and develop a template-filling system by
using any of the techniques mentioned in Section 20.8. Analyze how well
your system performs as compared with state-of-the-art results on this corpus.
20.3 A useful functionality in newer email and calendar applications is the ability
to associate temporal expressions connected with events in email (doctor’s
appointments, meeting planning, party invitations, etc.) with specific calendar
entries. Collect a corpus of email containing temporal expressions related to
event planning. How do these expressions compare to the kinds of expressions
commonly found in news text that we’ve been discussing in this chapter?
20.4 For the following sentences, give FOL translations that capture the temporal
relationships between the events.
1. When Mary’s flight departed, I ate lunch.
2. When Mary’s flight departed, I had eaten lunch.
CHAPTER
21 Semantic Role Labeling
Sometime between the 7th and 4th centuries BCE, the Indian grammarian Pāṇini1
wrote a famous treatise on Sanskrit grammar, the Aṣṭādhyāyī (‘8 books’), a treatise
that has been called “one of the greatest monuments of hu-
man intelligence” (Bloomfield, 1933, 11). The work de-
scribes the linguistics of the Sanskrit language in the form
of 3959 sutras, each very efficiently (since it had to be
memorized!) expressing part of a formal rule system that
brilliantly prefigured modern mechanisms of formal lan-
guage theory (Penn and Kiparsky, 2012). One set of rules
describes the kārakas, semantic relationships between a
verb and noun arguments, roles like agent, instrument, or
destination. Pāṇini's work was the earliest we know of
that modeled the linguistic realization of events and their
participants. This task of understanding how participants relate to events—being
able to answer the question “Who did what to whom” (and perhaps also “when and
where”)—is a central question of natural language processing.
Let’s move forward 2.5 millennia to the present and consider the very mundane
goal of understanding text about a purchase of stock by XYZ Corporation. This
purchasing event and its participants can be described by a wide variety of surface
forms. The event can be described by a verb (sold, bought) or a noun (purchase),
and XYZ Corp can be the syntactic subject (of bought), the indirect object (of sold),
or in a genitive or noun compound relation (with the noun purchase) despite having
notionally the same role in all of them:
• XYZ corporation bought the stock.
• They sold the stock to XYZ corporation.
• The stock was bought by XYZ corporation.
• The purchase of the stock by XYZ corporation...
• The stock purchase by XYZ corporation...
In this chapter we introduce a level of representation that captures the common-
ality between these sentences: there was a purchase event, the participants were
XYZ Corp and some stock, and XYZ Corp was the buyer. These shallow semantic
representations, semantic roles, express the role that arguments of a predicate take
in the event, codified in databases like PropBank and FrameNet. We’ll introduce
semantic role labeling, the task of assigning roles to spans in sentences, and selec-
tional restrictions, the preferences that predicates express about their arguments,
such as the fact that the theme of eat is generally something edible.
1 Figure shows a birch bark manuscript from Kashmir of the Rupavatra, a grammatical textbook based
on the Sanskrit grammar of Panini. Image from the Wellcome Collection.
Although thematic roles are one of the oldest linguistic models, as we saw above,
their modern formulation is due to Fillmore (1968) and Gruber (1965). Although
there is no universally agreed-upon set of roles, Figs. 21.1 and 21.2 list some the-
matic roles that have been used in various computational papers, together with rough
definitions and examples. Most thematic role sets have about a dozen roles, but we’ll
see sets with smaller numbers of roles with even more abstract meanings, and sets
with very large numbers of roles that are specific to situations. We’ll use the general
semantic roles term semantic roles for all sets of roles, whether small or large.
Semantic roles thus help generalize over different surface realizations of pred-
icate arguments. For example, while the AGENT is often realized as the subject of
the sentence, in other cases the THEME can be the subject. Consider these possible
realizations of the thematic arguments of the verb break:
(21.3) John broke the window.
AGENT THEME
(21.4) John broke the window with a rock.
AGENT THEME INSTRUMENT
(21.5) The rock broke the window.
INSTRUMENT THEME
(21.6) The window broke.
THEME
(21.7) The window was broken by John.
THEME AGENT
These examples suggest that break has (at least) the possible arguments AGENT,
THEME , and INSTRUMENT. The set of thematic role arguments taken by a verb is
thematic grid often called the thematic grid, θ-grid, or case frame. We can see that there are
case frame (among others) the following possibilities for the realization of these arguments of
break:
AGENT /Subject, THEME /Object
AGENT /Subject, THEME /Object, INSTRUMENT /PPwith
INSTRUMENT /Subject, THEME /Object
THEME /Subject
It turns out that many verbs allow their thematic roles to be realized in various
syntactic positions. For example, verbs like give can realize the THEME and GOAL
arguments in two different ways:
(21.8) a. Doris gave the book to Cary.
AGENT THEME GOAL
These multiple argument structure realizations (the fact that break can take AGENT,
INSTRUMENT, or THEME as subject, and give can realize its THEME and GOAL in
verb alternation
either order) are called verb alternations or diathesis alternations. The alternation
dative alternation
we showed above for give, the dative alternation, seems to occur with particular semantic
classes of verbs, including “verbs of future having” (advance, allocate, offer,
owe), “send verbs” (forward, hand, mail), “verbs of throwing” (kick, pass, throw),
and so on. Levin (1993) lists for 3100 English verbs the semantic classes to which
they belong (47 high-level classes, divided into 193 more specific classes) and the
various alternations in which they participate. These lists of verb classes have been
incorporated into the online resource VerbNet (Kipper et al., 2000), which links each
verb to both WordNet and FrameNet entries.
(21.14) [Arg0 Big Fruit Co. ] increased [Arg1 the price of bananas].
(21.15) [Arg1 The price of bananas] was increased again [Arg0 by Big Fruit Co. ]
(21.16) [Arg1 The price of bananas] increased [Arg2 5%].
PropBank also has a number of non-numbered arguments called ArgMs, (ArgM-
TMP, ArgM-LOC, etc.) which represent modification or adjunct meanings. These
are relatively stable across predicates, so aren’t listed with each frame file. Data
labeled with these modifiers can be helpful in training systems to detect temporal,
location, or directional modification across predicates. Some of the ArgM’s include:
TMP when? yesterday evening, now
LOC where? at the museum, in San Francisco
DIR where to/from? down, to Bangkok
MNR how? clearly, with much enthusiasm
PRP/CAU why? because ... , in response to the ruling
REC themselves, each other
ADV miscellaneous
PRD secondary predication ...ate the meat raw
NomBank While PropBank focuses on verbs, a related project, NomBank (Meyers et al.,
2004) adds annotations to noun predicates. For example the noun agreement in
Apple’s agreement with IBM would be labeled with Apple as the Arg0 and IBM as
the Arg2. This allows semantic role labelers to assign labels to arguments of both
verbal and nominal predicates.
21.5 FrameNet
While making inferences about the semantic commonalities across different sen-
tences with increase is useful, it would be even more useful if we could make such
inferences in many more situations, across different verbs, and also between verbs
and nouns. For example, we’d like to extract the similarity among these three sen-
tences:
(21.17) [Arg1 The price of bananas] increased [Arg2 5%].
(21.18) [Arg1 The price of bananas] rose [Arg2 5%].
(21.19) There has been a [Arg2 5%] rise [Arg1 in the price of bananas].
Note that the second example uses the different verb rise, and the third example
uses the noun rather than the verb rise. We’d like a system to recognize that the
price of bananas is what went up, and that 5% is the amount it went up, no matter
whether the 5% appears as the object of the verb increased or as a nominal modifier
of the noun rise.
FrameNet The FrameNet project is another semantic-role-labeling project that attempts
to address just these kinds of problems (Baker et al. 1998, Fillmore et al. 2003,
Fillmore and Baker 2009, Ruppenhofer et al. 2016). Whereas roles in the PropBank
project are specific to an individual verb, roles in the FrameNet project are specific
to a frame.
What is a frame? Consider the following set of words:
reservation, flight, travel, buy, price, cost, fare, rates, meal, plane
There are many individual lexical relations of hyponymy, synonymy, and so on
between many of the words in this list. The resulting set of relations does not,
however, add up to a complete account of how these words are related. They are
clearly all defined with respect to a coherent chunk of common-sense background
information concerning air travel.
frame We call the holistic background knowledge that unites these words a frame (Fill-
more, 1985). The idea that groups of words are defined with respect to some back-
ground information is widespread in artificial intelligence and cognitive science,
model where besides frame we see related works like a model (Johnson-Laird, 1983), or
script even script (Schank and Abelson, 1977).
A frame in FrameNet is a background knowledge structure that defines a set of
frame elements frame-specific semantic roles, called frame elements, and includes a set of predi-
cates that use these roles. Each word evokes a frame and profiles some aspect of the
frame and its elements. The FrameNet dataset includes a set of frames and frame
elements, the lexical units associated with each frame, and a set of labeled exam-
ple sentences. For example, the change position on a scale frame is defined as
follows:
This frame consists of words that indicate the change of an Item’s posi-
tion on a scale (the Attribute) from a starting point (Initial value) to an
end point (Final value).
Some of the semantic roles (frame elements) in the frame are defined as in
core roles Fig. 21.3. Note that these are separated into core roles, which are frame specific, and
non-core roles non-core roles, which are more like the Arg-M arguments in PropBank, expressing
more general properties of time, location, and so on.
Core Roles
ATTRIBUTE The ATTRIBUTE is a scalar property that the I TEM possesses.
D IFFERENCE The distance by which an I TEM changes its position on the scale.
F INAL STATE A description that presents the I TEM’s state after the change in the ATTRIBUTE’s
value as an independent predication.
F INAL VALUE The position on the scale where the I TEM ends up.
I NITIAL STATE A description that presents the I TEM’s state before the change in the AT-
TRIBUTE ’s value as an independent predication.
I NITIAL VALUE The initial position on the scale from which the I TEM moves away.
I TEM The entity that has a position on the scale.
VALUE RANGE A portion of the scale, typically identified by its end points, along which the
values of the ATTRIBUTE fluctuate.
Some Non-Core Roles
D URATION The length of time over which the change takes place.
S PEED The rate of change of the VALUE.
G ROUP The G ROUP in which an I TEM changes the value of an
ATTRIBUTE in a specified way.
Figure 21.3 The frame elements in the change position on a scale frame from the FrameNet Labelers
Guide (Ruppenhofer et al., 2016).
(21.24) a steady increase [I NITIAL VALUE from 9.5] [F INAL VALUE to 14.3] [I TEM
in dividends]
(21.25) a [D IFFERENCE 5%] [I TEM dividend] increase...
Note from these example sentences that the frame includes target words like rise,
fall, and increase. In fact, the complete frame consists of the following words:
VERBS: advance, climb, decline, decrease, diminish, dip, double, drop, dwindle, edge,
explode, fall, fluctuate, gain, grow, increase, jump, move, mushroom, plummet, reach,
rise, rocket, shift, skyrocket, slide, soar, swell, swing, triple, tumble
NOUNS: decline, decrease, escalation, explosion, fall, fluctuation, gain, growth, hike,
increase, rise, shift, tumble
ADVERBS: increasingly
FrameNet also codes relationships between frames, allowing frames to inherit
from each other, or representing relations between frames like causation (and gen-
eralizations among frame elements in different frames can be represented by inheri-
tance as well). Thus, there is a Cause change of position on a scale frame that is
linked to the Change of position on a scale frame by the cause relation, but that
adds an AGENT role and is used for causative examples such as the following:
(21.26) [AGENT They] raised [I TEM the price of their soda] [D IFFERENCE by 2%].
Together, these two frames would allow an understanding system to extract the
common event semantics of all the verbal and nominal causative and non-causative
usages.
FrameNets have also been developed for many other languages including Span-
ish, German, Japanese, Portuguese, Italian, and Chinese.
Figure 21.5 shows a parse of (21.28) above. The parse is then traversed to find all
words that are predicates.
For each of these predicates, the algorithm examines each node in the parse
tree and uses supervised classification to decide the semantic role (if any) it plays
for this predicate. Given a labeled training set such as PropBank or FrameNet, a
feature vector is extracted for each node, using feature templates described in the
next subsection. A 1-of-N classifier is then trained to predict a semantic role for
each constituent given these features, where N is the number of potential semantic
roles plus an extra NONE role for non-role constituents. Any standard classification
algorithms can be used. Finally, for each test sentence to be labeled, the classifier is
run on each relevant constituent.
parse ← PARSE(words)
for each predicate in parse do
    for each node in parse do
        featurevector ← EXTRACTFEATURES(node, predicate, parse)
        CLASSIFYNODE(node, featurevector, parse)
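A minimal Python rendering of this loop might look like the following sketch; the helpers parse_sentence, find_predicates, extract_features, and role_classifier are placeholders (not from the text) standing in for whatever parser, predicate identifier, feature templates, and trained 1-of-N classifier a particular system uses.

def label_semantic_roles(words, parse_sentence, find_predicates,
                         extract_features, role_classifier):
    """Assign a semantic role (or NONE) to every parse node, for each predicate.

    The four callables are stand-ins for a real parser, predicate finder,
    feature templates, and a trained 1-of-N (roles + NONE) classifier."""
    parse = parse_sentence(words)
    labels = {}
    for predicate in find_predicates(parse):
        for node in parse.nodes():
            fv = extract_features(node, predicate, parse)
            # the classifier returns one of the N role labels or "NONE"
            labels[(predicate, node)] = role_classifier.predict(fv)
    return labels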
Figure 21.5 Parse tree for a PropBank sentence, showing the PropBank argument labels. The dotted line shows the path feature NP↑S↓VP↓VBD for ARG0, the NP-SBJ constituent The San Francisco Examiner.
The separation of identification and classification may lead to better use of fea-
tures (different features may be useful for the two tasks) or to computational effi-
ciency.
Global Optimization
The feature-based classification algorithm sketched above classifies each argument separately (‘lo-
cally’), making the simplifying assumption that each argument of a predicate can be
labeled independently. This assumption is false; there are interactions between argu-
ments that require a more ‘global’ assignment of labels to constituents. For example,
constituents in FrameNet and PropBank are required to be non-overlapping. More
significantly, the semantic roles of constituents are not independent. For example
PropBank does not allow multiple identical arguments; two constituents of the same
verb cannot both be labeled ARG0.
Role labeling systems thus often add a fourth step to deal with global consistency
across the labels in a sentence. For example, the local classifiers can return a list of
possible labels associated with probabilities for each constituent, and a second-pass
Viterbi decoding or re-ranking approach can be used to choose the best consensus
label. Integer linear programming (ILP) is another common way to choose a solution
that conforms best to multiple constraints.
Other features are often used in addition, such as sets of n-grams inside the
constituent, or more complex versions of the path features (the upward or downward
halves, or whether particular nodes occur in the path).
It’s also possible to use dependency parses instead of constituency parses as the
basis of features, for example using dependency parse paths instead of constituency
paths.
As with all the taggers, the goal is to compute the highest probability tag sequence ŷ, given the input sequence of words w:

ŷ = argmax_{y∈T} P(y|w)
Fig. 21.6 shows a sketch of a standard algorithm from He et al. (2017). Here each input word is mapped to pretrained embeddings, each token embedding is concatenated with the predicate embedding, and the result is passed through a feedforward network with a softmax that outputs a distribution over SRL labels for each token. For decoding, a CRF layer can be used instead of the MLP layer on top of the biLSTM output to do global inference, but in practice this doesn't seem to provide much benefit.
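As a rough illustration (not the actual He et al. (2017) model, which is a deep stacked biLSTM with highway connections), the per-token scoring just described might be sketched in PyTorch as follows; the dimensions and the number of SRL labels are arbitrary placeholders.

import torch
import torch.nn as nn

class SimpleSRLTagger(nn.Module):
    # Illustrative dimensions; not the actual He et al. (2017) configuration.
    def __init__(self, embed_dim=300, hidden_dim=256, num_labels=67):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),  # token embedding ⊕ predicate embedding
            nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, token_embs, predicate_emb):
        # token_embs: (seq_len, embed_dim); predicate_emb: (embed_dim,)
        pred = predicate_emb.unsqueeze(0).expand_as(token_embs)
        logits = self.ffn(torch.cat([token_embs, pred], dim=-1))
        # log-distribution over SRL labels for every token
        return torch.log_softmax(logits, dim=-1)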
With this representation, all we know about y, the filler of the THEME role, is that it is associated with an Eating event through the Theme relation. To stipulate the selectional restriction that y must be something edible, we simply add a new term to that effect:

∃e, x, y Eating(e) ∧ Agent(e, x) ∧ Theme(e, y) ∧ EdibleThing(y)

When a phrase like ate a hamburger is encountered, a semantic analyzer can form the following kind of representation:

∃e, x, y Eating(e) ∧ Agent(e, x) ∧ Theme(e, y) ∧ EdibleThing(y) ∧ Hamburger(y)
Sense 1
hamburger, beefburger --
(a fried cake of minced beef served on a bun)
=> sandwich
=> snack food
=> dish
=> nutriment, nourishment, nutrition...
=> food, nutrient
=> substance
=> matter
=> physical entity
=> entity
Figure 21.7 Evidence from WordNet that hamburgers are edible.
(21.36) But it fell apart in 1931, perhaps because people realized you can’t eat
gold for lunch if you’re hungry.
(21.37) In his two championship trials, Mr. Kulkarni ate glass on an empty
stomach, accompanied only by water and tea.
Modern systems for selectional preferences therefore specify the relation be-
tween a predicate and its possible arguments with soft constraints of some kind.
Selectional Association
selectional preference strength One of the most influential has been the selectional association model of Resnik (1993). Resnik defines the idea of selectional preference strength as the general amount of information that a predicate tells us about the semantic class of its arguments. For example, the verb eat tells us a lot about the semantic class of its direct
objects, since they tend to be edible. The verb be, by contrast, tells us less about
its direct objects. The selectional preference strength can be defined by the differ-
ence in information between two distributions: the distribution of expected semantic
classes P(c) (how likely is it that a direct object will fall into class c) and the dis-
tribution of expected semantic classes for the particular verb P(c|v) (how likely is
it that the direct object of the specific verb v will fall into semantic class c). The
greater the difference between these distributions, the more information the verb
is giving us about possible objects. The difference between these two distributions
relative entropy can be quantified by relative entropy, or the Kullback-Leibler divergence (Kullback and Leibler, 1951).
KL divergence The Kullback-Leibler or KL divergence D(P||Q) expresses the difference between two probability distributions P and Q:

D(P||Q) = ∑_x P(x) log ( P(x) / Q(x) )
The selectional preference S_R(v) uses the KL divergence to express how much information, in bits, the verb v expresses about the possible semantic class of its argument:

S_R(v) = D(P(c|v) || P(c)) = ∑_c P(c|v) log ( P(c|v) / P(c) )    (21.39)
selectional association Resnik then defines the selectional association of a particular class and verb as the relative contribution of that class to the general selectional preference of the verb:

A_R(v, c) = (1 / S_R(v)) P(c|v) log ( P(c|v) / P(c) )    (21.40)
The selectional association is thus a probabilistic measure of the strength of asso-
ciation between a predicate and a class dominating the argument to the predicate.
Resnik estimates the probabilities for these associations by parsing a corpus, count-
ing all the times each predicate occurs with each argument word, and assuming
that each word is a partial observation of all the WordNet concepts containing the
word. The following table from Resnik (1996) shows some sample high and low
selectional associations for verbs and some WordNet semantic classes of their direct
objects.
            Direct Object                 Direct Object
Verb        Semantic Class    Assoc      Semantic Class    Assoc
read        WRITING            6.80      ACTIVITY          -0.20
write       WRITING            7.26      COMMERCE           0
see         ENTITY             5.79      METHOD            -0.01
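A small sketch of how Eqs. 21.39 and 21.40 could be computed from such parsed-corpus counts is shown below; counts_v_c is an assumed dictionary of verb-class co-occurrence counts, with each object word's count already distributed over the WordNet classes that contain it, as in Resnik's estimation procedure.

from collections import defaultdict
from math import log2

def selectional_preference_and_association(counts_v_c):
    """counts_v_c[(verb, cls)] holds (possibly fractional) co-occurrence counts,
    with each object word's count spread over the WordNet classes containing it."""
    verb_totals, class_totals, total = defaultdict(float), defaultdict(float), 0.0
    for (v, c), n in counts_v_c.items():
        verb_totals[v] += n
        class_totals[c] += n
        total += n

    def p_c(c):              # P(c): prior distribution over semantic classes
        return class_totals[c] / total

    def p_c_v(c, v):         # P(c|v): class distribution of this verb's objects
        return counts_v_c.get((v, c), 0.0) / verb_totals[v]

    def S_R(v):              # Eq. 21.39: selectional preference strength of v
        return sum(p_c_v(c, v) * log2(p_c_v(c, v) / p_c(c))
                   for c in class_totals if p_c_v(c, v) > 0)

    def A_R(v, c):           # Eq. 21.40: selectional association of class c with v
        if p_c_v(c, v) == 0:
            return 0.0
        return p_c_v(c, v) * log2(p_c_v(c, v) / p_c(c)) / S_R(v)

    return S_R, A_R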
Primitive   Definition
ATRANS      The abstract transfer of possession or control from one entity to another
PTRANS      The physical transfer of an object from one location to another
MTRANS      The transfer of mental concepts between entities or within an entity
MBUILD      The creation of new information within an entity
PROPEL      The application of physical force to move an object
MOVE        The integral movement of a body part by an animal
INGEST      The taking in of a substance by an animal
EXPEL       The expulsion of something from an animal
SPEAK       The action of producing a sound
ATTEND      The action of focusing a sense organ
Figure 21.8 A set of conceptual dependency primitives.
Below is an example sentence along with its CD representation. The verb brought
is translated into the two primitives ATRANS and PTRANS to indicate that the waiter
both physically conveyed the check to Mary and passed control of it to her. Note
that CD also associates a fixed set of thematic roles with each primitive to represent
the various participants in the action.
(21.47) The waiter brought Mary the check.
21.9 Summary
• Semantic roles are abstract models of the role an argument plays in the event
described by the predicate.
• Thematic roles are a model of semantic roles based on a single finite list of
roles. Other semantic role models include per-verb semantic role lists and
proto-agent/proto-patient, both of which are implemented in PropBank,
and per-frame role lists, implemented in FrameNet.
• Semantic role labeling is the task of assigning semantic role labels to the
constituents of a sentence. The task is generally treated as a supervised ma-
chine learning task, with models trained on PropBank or FrameNet. Algo-
rithms generally start by parsing a sentence and then automatically tag each
parse tree node with a semantic role. Neural models map straight from words
end-to-end.
• Semantic selectional restrictions allow words (particularly predicates) to post
constraints on the semantic properties of their argument words. Selectional
preference models (like selectional association or simple conditional proba-
bility) allow a weight or probability to be assigned to the association between
a predicate and an argument word or class.
The word frame seemed to be in the air for a suite of related notions proposed at
about the same time by Minsky (1974), Hymes (1974), and Goffman (1974), as
well as related notions with other names like scripts (Schank and Abelson, 1975)
and schemata (Bobrow and Norman, 1975) (see Tannen (1979) for a comparison).
Fillmore was also influenced by the semantic field theorists and by a visit to the Yale
AI lab where he took notice of the lists of slots and fillers used by early information
extraction systems like DeJong (1982) and Schank and Abelson (1977). In the 1990s
Fillmore drew on these insights to begin the FrameNet corpus annotation project.
At the same time, Beth Levin drew on her early case frame dictionaries (Levin,
1977) to develop her book which summarized sets of verb classes defined by shared
argument realizations (Levin, 1993). The VerbNet project built on this work (Kipper
et al., 2000), leading soon afterwards to the PropBank semantic-role-labeled corpus
created by Martha Palmer and colleagues (Palmer et al., 2005).
The combination of rich linguistic annotation and corpus-based approach in-
stantiated in FrameNet and PropBank led to a revival of automatic approaches to
semantic role labeling, first on FrameNet (Gildea and Jurafsky, 2000) and then on
PropBank data (Gildea and Palmer, 2002, inter alia). The problem first addressed in
the 1970s by handwritten rules was thus now generally recast as one of supervised
machine learning enabled by large and consistent databases. Many popular features
used for role labeling are defined in Gildea and Jurafsky (2002), Surdeanu et al.
(2003), Xue and Palmer (2004), Pradhan et al. (2005), Che et al. (2009), and Zhao
et al. (2009). The use of dependency rather than constituency parses was introduced
in the CoNLL-2008 shared task (Surdeanu et al., 2008). For surveys see Palmer
et al. (2010) and Màrquez et al. (2008).
The use of neural approaches to semantic role labeling was pioneered by Col-
lobert et al. (2011), who applied a CRF on top of a convolutional net. Early work
like Foland, Jr. and Martin (2015) focused on using dependency features. Later work
eschewed syntactic features altogether; Zhou and Xu (2015b) introduced the use of a stacked (6-8 layer) biLSTM architecture, and He et al. (2017) showed how to augment the biLSTM architecture with highway networks and also replace the CRF with A* decoding, making it possible to apply a wide variety of global constraints in SRL decoding.
Most semantic role labeling schemes only work within a single sentence, fo-
cusing on the object of the verbal (or nominal, in the case of NomBank) predicate.
However, in many cases, a verbal or nominal predicate may have an implicit argument: one that appears only in a contextual sentence, or perhaps not at all and must be inferred. In the two sentences This house has a new owner. The sale was finalized 10 days ago. the sale in the second sentence has no ARG1, but a reasonable reader would infer that the ARG1 should be the house mentioned in the prior sentence. Finding these arguments, implicit argument detection (sometimes shortened as iSRL)
was introduced by Gerber and Chai (2010) and Ruppenhofer et al. (2010). See Do
et al. (2017) for more recent neural models.
To avoid the need for huge labeled training sets, unsupervised approaches for
semantic role labeling attempt to induce the set of semantic roles by clustering over
arguments. The task was pioneered by Riloff and Schmelzenbach (1998) and Swier
and Stevenson (2004); see Grenager and Manning (2006), Titov and Klementiev
(2012), Lang and Lapata (2014), Woodsend and Lapata (2015), and Titov and Khod-
dam (2014).
Recent innovations in frame labeling include connotation frames, which mark richer information about the arguments of predicates. Connotation frames mark the sentiment of the writer or reader toward the arguments (for example, using the verb survive in he survived a bombing expresses the writer's sympathy toward the subject he and negative sentiment toward the bombing). See Chapter 22 for more details.
Selectional preference has been widely studied beyond the selectional associa-
tion models of Resnik (1993) and Resnik (1996). Methods have included clustering
(Rooth et al., 1999), discriminative learning (Bergsma et al., 2008a), and topic mod-
els (Séaghdha 2010, Ritter et al. 2010b), and constraints can be expressed at the level
of words or classes (Agirre and Martinez, 2001). Selectional preferences have also
been successfully integrated into semantic role labeling (Erk 2007, Zapirain et al.
2013, Do et al. 2017).
Exercises
CHAPTER
22 Lexicons for Sentiment, Affect, and Connotation
affective In this chapter we turn to tools for interpreting affective meaning, extending our
study of sentiment analysis in Chapter 4. We use the word ‘affective’, following the
tradition in affective computing (Picard, 1995) to mean emotion, sentiment, per-
subjectivity sonality, mood, and attitudes. Affective meaning is closely related to subjectivity,
the study of a speaker or writer’s evaluations, opinions, emotions, and speculations
(Wiebe et al., 1999).
How should affective meaning be defined? One influential typology of affec-
tive states comes from Scherer (2000), who defines each class of affective states by
factors like its cognitive realization and time course (Fig. 22.1).
We can design extractors for each of these kinds of affective states. Chapter 4
already introduced sentiment analysis, the task of extracting the positive or negative
orientation that a writer expresses in a text. This corresponds in Scherer’s typology
to the extraction of attitudes: figuring out what people like or dislike, from affect-
rich texts like consumer reviews of books or movies, newspaper editorials, or public
sentiment in blogs or tweets.
Detecting emotion and moods is useful for detecting whether a student is con-
fused, engaged, or certain when interacting with a tutorial system, whether a caller
to a help line is frustrated, whether someone’s blog posts or tweets indicated depres-
sion. Detecting emotions like fear in novels, for example, could help us trace what
groups or situations are feared and how that changes over time.
1962, Plutchik 1962), a model dating back to Darwin. Perhaps the most well-known
of this family of theories are the 6 emotions proposed by Ekman (e.g., Ekman 1999)
to be universally present in all cultures: surprise, happiness, anger, fear, disgust,
sadness. Another atomic theory is the Plutchik (1980) wheel of emotion, consisting
of 8 basic emotions in four opposing pairs: joy–sadness, anger–fear, trust–disgust,
and anticipation–surprise, together with the emotions derived from them, shown in
Fig. 22.2.
The second class of emotion theories widely used in NLP views emotion as a
space in 2 or 3 dimensions (Russell, 1980). Most models include the two dimensions
valence and arousal, and many add a third, dominance. These can be defined as:
valence: the pleasantness of the stimulus
arousal: the level of alertness, activeness, or energy provoked by the stimulus
dominance: the degree of control or dominance exerted by the stimulus or the
emotion
Sentiment can be viewed as a special case of this second view of emotions as points
in space. In particular, the valence dimension, measuring how pleasant or unpleasant
a word is, is often used directly as a measure of sentiment.
In these lexicon-based models of affect, the affective meaning of a word is gen-
erally fixed, irrespective of the linguistic context in which a word is used, or the
dialect or culture of the speaker. By contrast, other models in affective science repre-
sent emotions as much richer processes involving cognition (Barrett et al., 2007). In
appraisal theory, for example, emotions are complex processes, in which a person
considers how an event is congruent with their goals, taking into account variables
like the agency, certainty, urgency, novelty and control associated with the event
(Moors et al., 2013). Computational models in NLP taking into account these richer
theories of emotion will likely play an important role in future work.
Positive admire, amazing, assure, celebration, charm, eager, enthusiastic, excellent, fancy, fan-
tastic, frolic, graceful, happy, joy, luck, majesty, mercy, nice, patience, perfect, proud,
rejoice, relief, respect, satisfactorily, sensational, super, terrific, thank, vivid, wise, won-
derful, zest
Negative abominable, anger, anxious, bad, catastrophe, cheap, complaint, condescending, deceit,
defective, disappointment, embarrass, fake, fear, filthy, fool, guilt, hate, idiot, inflict, lazy,
miserable, mourn, nervous, objection, pest, plot, reject, scream, silly, terrible, unfriendly,
vile, wicked
Figure 22.3 Some words with consistent sentiment across the General Inquirer (Stone et al., 1966), the
MPQA Subjectivity lexicon (Wilson et al., 2005), and the polarity lexicon of Hu and Liu (2004b).
Slightly more general than these sentiment lexicons are lexicons that assign each
word a value on all three affective dimensions. The NRC Valence, Arousal, and
Dominance (VAD) lexicon (Mohammad, 2018a) assigns valence, arousal, and dom-
inance scores to 20,000 words. Some examples are shown in Fig. 22.4.
EmoLex The NRC Word-Emotion Association Lexicon, also called EmoLex (Moham-
mad and Turney, 2013), uses the Plutchik (1980) 8 basic emotions defined above.
The lexicon includes around 14,000 words including words from prior lexicons as
well as frequent nouns, verbs, adverbs and adjectives. Values from the lexicon for
some sample words:
Word        anger  anticipation  disgust  fear  joy  sadness  surprise  trust  positive  negative
reward        0         1           0       0     1      0        1        1       1         0
worry         0         1           0       1     0      1        0        0       0         1
tenderness    0         0           0       0     1      0        0        0       1         0
sweetheart    0         1           0       0     1      1        0        1       1         0
suddenly      0         0           0       0     0      0        1        0       0         0
thirst        0         1           0       0     0      1        1        0       0         0
garbage       0         0           1       0     0      0        0        0       0         1
For a smaller set of 5,814 words, the NRC Emotion/Affect Intensity Lexicon
(Mohammad, 2018b) contains real-valued scores of association for anger, fear, joy,
and sadness; Fig. 22.5 shows examples.
LIWC LIWC, Linguistic Inquiry and Word Count, is a widely used set of 73 lex-
icons containing over 2300 words (Pennebaker et al., 2007), designed to capture
aspects of lexical meaning relevant for social psychological tasks. In addition to
sentiment-related lexicons like ones for negative emotion (bad, weird, hate, prob-
lem, tough) and positive emotion (love, nice, sweet), LIWC includes lexicons for
categories like anger, sadness, cognitive mechanisms, perception, tentative, and in-
hibition, shown in Fig. 22.6.
There are various other hand-built affective lexicons. The General Inquirer in-
cludes additional lexicons for dimensions like strong vs. weak, active vs. passive,
overstated vs. understated, as well as lexicons for categories like pleasure, pain,
virtue, vice, motivation, and cognitive orientation.
concrete Another useful feature for various tasks is the distinction between concrete
abstract words like banana or bathrobe and abstract words like belief and although. The
lexicon in Brysbaert et al. (2014) used crowdsourcing to assign a rating from 1 to 5
of the concreteness of 40,000 words, thus assigning banana, bathrobe, and bagel 5,
belief 1.19, although 1.07, and in between words like brisk a 2.5.
Positive Emotion    Negative Emotion    Insight    Inhibition    Family    Negate
appreciat* anger* aware* avoid* brother* aren’t
comfort* bore* believe careful* cousin* cannot
great cry decid* hesitat* daughter* didn’t
happy despair* feel limit* family neither
interest fail* figur* oppos* father* never
joy* fear know prevent* grandf* no
perfect* griev* knew reluctan* grandm* nobod*
please* hate* means safe* husband none
safe* panic* notice* stop mom nor
terrific suffers recogni* stubborn* mother nothing
value terrify sense wait niece* nowhere
wow* violent* think wary wife without
Figure 22.6 Samples from 5 of the 73 lexical categories in LIWC (Pennebaker et al., 2007).
The * means the previous letters are a word prefix and all words with that prefix are included
in the category.
Let's take a look at some of the methodological choices for two crowdsourced
emotion lexicons.
The NRC Emotion Lexicon (EmoLex) (Mohammad and Turney, 2013) labeled
emotions in two steps. To ensure that the annotators were judging the correct sense
of the word, they first answered a multiple-choice synonym question that primed
the correct sense of the word (without requiring the annotator to read a potentially
confusing sense definition). These were created automatically using the headwords
associated with the thesaurus category of the sense in question in the Macquarie
dictionary and the headwords of 3 random distractor categories. An example:
Which word is closest in meaning (most related) to startle?
• automobile
• shake
• honesty
• entertain
For each word (e.g. startle), the annotator was then asked to rate how associated
that word is with each of the 8 emotions (joy, fear, anger, etc.). The associations
were rated on a scale of not, weakly, moderately, and strongly associated. Outlier
ratings were removed, and then each term was assigned the class chosen by the ma-
jority of the annotators, with ties broken by choosing the stronger intensity, and then
the 4 levels were mapped into a binary label for each word (no and weak mapped to
0, moderate and strong mapped to 1).
The NRC VAD Lexicon (Mohammad, 2018a) was built by selecting words and emoticons from prior lexicons and annotating them with crowd-sourcing using best-worst scaling (Louviere et al. 2015, Kiritchenko and Mohammad 2017). In best-worst scaling, annotators are given N items (usually 4) and are asked which item is
the best (highest) and which is the worst (lowest) in terms of some property. The
set of words used to describe the ends of the scales are taken from prior literature.
For valence, for example, the raters were asked:
Q1. Which of the four words below is associated with the MOST happi-
ness / pleasure / positiveness / satisfaction / contentedness / hopefulness
OR LEAST unhappiness / annoyance / negativeness / dissatisfaction /
on a specific corpus (for example using a financial corpus if a finance lexicon is the
goal), or we can fine-tune off-the-shelf embeddings to a corpus. Fine-tuning is espe-
cially important if we have a very specific genre of text but don’t have enough data
to train good embeddings. In fine-tuning, we begin with off-the-shelf embeddings
like word2vec, and continue training them on the small target corpus.
Once we have embeddings for each pole word, we create an embedding that
represents each pole by taking the centroid of the embeddings of each of the seed
words; recall that the centroid is the multidimensional version of the mean. Given
a set of embeddings for the positive seed words S+ = {E(w_1^+), E(w_2^+), ..., E(w_n^+)}, and embeddings for the negative seed words S− = {E(w_1^−), E(w_2^−), ..., E(w_m^−)}, the pole centroids are:

V+ = (1/n) ∑_{i=1}^{n} E(w_i^+)

V− = (1/m) ∑_{i=1}^{m} E(w_i^−)    (22.1)
The semantic axis defined by the poles is computed just by subtracting the two vec-
tors:
Vaxis = V+ − V− (22.2)
Vaxis , the semantic axis, is a vector in the direction of positive sentiment. Finally,
we compute (via cosine similarity) the angle between the vector in the direction of
positive sentiment and the direction of w’s embedding. A higher cosine means that
w is more aligned with S+ than S− .
score(w) = cos( E(w), Vaxis ) = ( E(w) · Vaxis ) / ( ‖E(w)‖ ‖Vaxis‖ )    (22.3)
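Assuming embeddings is a dictionary mapping words to vectors (for example, off-the-shelf or fine-tuned word2vec vectors), Eqs. 22.1-22.3 can be sketched in a few lines of numpy:

import numpy as np

def semantic_axis_score(word, pos_seeds, neg_seeds, embeddings):
    """Score a word along the positive-negative semantic axis (Eqs. 22.1-22.3).
    embeddings: dict mapping each word to a numpy vector."""
    v_pos = np.mean([embeddings[w] for w in pos_seeds], axis=0)  # Eq. 22.1
    v_neg = np.mean([embeddings[w] for w in neg_seeds], axis=0)
    v_axis = v_pos - v_neg                                       # Eq. 22.2
    e_w = embeddings[word]
    # Eq. 22.3: cosine between the word's embedding and the axis
    return float(e_w @ v_axis / (np.linalg.norm(e_w) * np.linalg.norm(v_axis)))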
then choose a node to move to with probability proportional to the edge prob-
ability. A word’s polarity score for a seed set is proportional to the probability
of a random walk from the seed set landing on that word (Fig. 22.7).
4. Create word scores: We walk from both positive and negative seed sets,
resulting in positive (rawscore+ (wi )) and negative (rawscore− (wi )) raw label
scores. We then combine these values into a positive-polarity score as:
score+(w_i) = rawscore+(w_i) / ( rawscore+(w_i) + rawscore−(w_i) )    (22.5)
It’s often helpful to standardize the scores to have zero mean and unit variance
within a corpus.
5. Assign confidence to each score: Because sentiment scores are influenced by
the seed set, we’d like to know how much the score of a word would change if
a different seed set is used. We can use bootstrap sampling to get confidence
regions, by computing the propagation B times over random subsets of the
positive and negative seed sets (for example using B = 50 and choosing 7 of
the 10 seed words each time). The standard deviation of the bootstrap sampled
polarity scores gives a confidence measure.
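The random-walk scoring in steps 3-5 can be approximated with personalized PageRank over a word graph; the sketch below (using networkx, and assuming a weighted cosine-kNN graph whose nodes include the seed words has been built elsewhere) is only a rough stand-in for the actual SENTPROP algorithm, and it omits the standardization and bootstrap-confidence steps.

import networkx as nx

def sentprop_scores(graph, pos_seeds, neg_seeds, beta=0.85):
    """graph: weighted networkx graph over words (e.g., cosine-kNN edges).
    Runs a random walk (personalized PageRank) from each seed set and combines
    the raw scores as in Eq. 22.5. Seed words are assumed to be graph nodes."""
    def walk(seeds):
        personalization = {w: (1.0 if w in seeds else 0.0) for w in graph}
        return nx.pagerank(graph, alpha=beta,
                           personalization=personalization, weight="weight")
    raw_pos, raw_neg = walk(set(pos_seeds)), walk(set(neg_seeds))
    # Eq. 22.5: positive-polarity score for every word in the graph
    return {w: raw_pos[w] / (raw_pos[w] + raw_neg[w] + 1e-12) for w in graph}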
Figure 22.7 Intuition of the SENTPROP algorithm. (a) Run random walks from the seed words. (b) Assign polarity scores (shown here as colors green or red) based on the frequency of random walk visits.
associated review score: a value that may range from 1 star to 5 stars, or scoring 1
to 10. Fig. 22.9 shows samples extracted from restaurant, book, and movie reviews.
We can use this review score as supervision: positive words are more likely to
appear in 5-star reviews; negative words in 1-star reviews. And instead of just a
binary polarity, this kind of supervision allows us to assign a word a more complex
representation of its polarity: its distribution over stars (or other scores).
Thus in a ten-star system we could represent the sentiment of each word as a
10-tuple, each number a score representing the word’s association with that polarity
level. This association can be a raw count, or a likelihood P(w|c), or some other
function of the count, for each class c from 1 to 10.
For example, we could compute the IMDb likelihood of a word like disap-
point(ed/ing) occurring in a 1 star review by dividing the number of times disap-
point(ed/ing) occurs in 1-star reviews in the IMDb dataset (8,557) by the total num-
ber of words occurring in 1-star reviews (25,395,214), so the IMDb estimate of
P(disappointing|1) is .0003.
A slight modification of this weighting, the normalized likelihood, can be used as an illuminating visualization (Potts, 2011):1

P(w|c) = count(w, c) / ∑_{w∈C} count(w, c)

PottsScore(w, c) = P(w|c) / ∑_c P(w|c)    (22.6)
Dividing the IMDb estimate P(disappointing|1) of .0003 by the sum of the likeli-
hood P(w|c) over all categories gives a Potts score of 0.10. The word disappointing
thus is associated with the vector [.10, .12, .14, .14, .13, .11, .08, .06, .06, .05]. The
1 Each element of the Potts score of a word w and category c can be shown to be a variant of the
pointwise mutual information pmi(w, c) without the log term; see Exercise 22.1.
Potts diagram Potts diagram (Potts, 2011) is a visualization of these word scores, representing the
prior sentiment of a word as a distribution over the rating categories.
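A short sketch of Eq. 22.6: assuming counts is a dictionary mapping each rating category to a dictionary of word counts (for example, built from the IMDb reviews above), the Potts score of a word is just its per-category likelihood renormalized across the categories.

def potts_scores(word, counts):
    """counts: dict mapping rating category c -> dict of word counts.
    Returns the Potts score of `word` for each category (Eq. 22.6).
    Assumes the word occurs in at least one category."""
    # Likelihood P(w|c): count of the word divided by all tokens in category c
    likelihood = {c: counts[c].get(word, 0) / sum(counts[c].values())
                  for c in counts}
    total = sum(likelihood.values())
    # Normalize across categories so the scores for this word sum to 1
    return {c: likelihood[c] / total for c in counts}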
Fig. 22.10 shows the Potts diagrams for 3 positive and 3 negative scalar adjectives. Note that the curves for strongly positive scalars have the shape of the letter J, while strongly negative scalars look like a reverse J. By contrast, weakly posi-
tive and negative scalars have a hump-shape, with the maximum either below the
mean (weakly negative words like disappointing) or above the mean (weakly pos-
itive words like good). These shapes offer an illuminating typology of affective
meaning.
[Panels: great, excellent (positive); bad, terrible (negative); x-axis: rating 1-10]
Figure 22.10 Potts diagrams (Potts, 2011) for positive and negative scalar adjectives, show-
ing the J-shape and reverse J-shape for strongly positive and negative adjectives, and the
hump-shape for more weakly polarized adjectives.
Fig. 22.11 shows the Potts diagrams for emphasizing and attenuating adverbs.
Note that emphatics tend to have a J-shape (most likely to occur in the most posi-
tive reviews) or a U-shape (most likely to occur in the strongly positive and nega-
tive). Attenuators all have the hump-shape, emphasizing the middle of the scale and
downplaying both extremes. The diagrams can be used both as a typology of lexical
sentiment, and also play a role in modeling sentiment compositionality.
In addition to functions like posterior P(c|w), likelihood P(w|c), or normalized
likelihood (Eq. 22.6) many other functions of the count of a word occurring with a
sentiment label have been used. We’ll introduce some of these on page 496, includ-
ing ideas like normalizing the counts per writer in Eq. 22.14.
[Panels: emphatics totally, absolutely, utterly; attenuators somewhat, fairly, pretty; x-axis: rating 1-10]
Figure 22.11 Potts diagrams (Potts, 2011) for emphatic and attenuating adverbs.
j:

lor(horrible) = log ( P^i(horrible) / (1 − P^i(horrible)) ) − log ( P^j(horrible) / (1 − P^j(horrible)) )

             = log ( (f^i(horrible)/n^i) / (1 − f^i(horrible)/n^i) ) − log ( (f^j(horrible)/n^j) / (1 − f^j(horrible)/n^j) )

             = log ( f^i(horrible) / (n^i − f^i(horrible)) ) − log ( f^j(horrible) / (n^j − f^j(horrible)) )    (22.8)
The Dirichlet intuition is to use a large background corpus to get a prior estimate of
what we expect the frequency of each word w to be. We’ll do this very simply by
adding the counts from that corpus to the numerator and denominator, so that we’re
essentially shrinking the counts toward that prior. It’s like asking how large are the
differences between i and j given what we would expect given their frequencies in
a well-estimated large background corpus.
The method estimates the difference between the frequency of word w in two corpora i and j via the prior-modified log odds ratio for w, δ_w^(i−j), which is estimated as:

δ_w^(i−j) = log ( (f_w^i + α_w) / (n^i + α_0 − (f_w^i + α_w)) ) − log ( (f_w^j + α_w) / (n^j + α_0 − (f_w^j + α_w)) )    (22.9)

(where n^i is the size of corpus i, n^j is the size of corpus j, f_w^i is the count of word w in corpus i, f_w^j is the count of word w in corpus j, α_0 is the scaled size of the background corpus, and α_w is the scaled count of word w in the background corpus.)
In addition, Monroe et al. (2008) make use of an estimate for the variance of the log-odds-ratio:

σ²( δ̂_w^(i−j) ) ≈ 1/(f_w^i + α_w) + 1/(f_w^j + α_w)    (22.10)

The final statistic for a word is then the z-score of its log-odds-ratio:

δ̂_w^(i−j) / √( σ²( δ̂_w^(i−j) ) )    (22.11)
The Monroe et al. (2008) method thus modifies the commonly used log odds ratio
in two ways: it uses the z-scores of the log odds ratio, which controls for the amount
of variance in a word’s frequency, and it uses counts from a background corpus to
provide a prior count for words.
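A compact sketch of Eqs. 22.9-22.11, assuming counts_i, counts_j, and counts_bg are word-count dictionaries for the two target corpora and for the background corpus (with the background counts already scaled to the desired prior strength):

from math import log, sqrt

def log_odds_dirichlet(word, counts_i, counts_j, counts_bg):
    """z-scored log odds ratio with an informative Dirichlet prior
    (Monroe et al., 2008), Eqs. 22.9-22.11. Assumes the word has a nonzero
    (scaled) count in the background corpus, so no term is zero."""
    n_i, n_j = sum(counts_i.values()), sum(counts_j.values())
    alpha_0 = sum(counts_bg.values())
    f_i, f_j = counts_i.get(word, 0), counts_j.get(word, 0)
    a_w = counts_bg.get(word, 0)

    delta = (log((f_i + a_w) / (n_i + alpha_0 - f_i - a_w))      # Eq. 22.9
             - log((f_j + a_w) / (n_j + alpha_0 - f_j - a_w)))
    variance = 1.0 / (f_i + a_w) + 1.0 / (f_j + a_w)             # Eq. 22.10
    return delta / sqrt(variance)                                 # Eq. 22.11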
Fig. 22.12 shows the method applied to a dataset of restaurant reviews from
Yelp, comparing the words used in 1-star reviews to the words used in 5-star reviews
(Jurafsky et al., 2014). The largest difference is in obvious sentiment words, with the
1-star reviews using negative sentiment words like worse, bad, awful and the 5-star
reviews using positive sentiment words like great, best, amazing. But there are other
illuminating differences. 1-star reviews use logical negation (no, not), while 5-star
reviews use emphatics and emphasize universality (very, highly, every, always). 1-
star reviews use first person plurals (we, us, our) while 5-star reviews use the second
person. 1-star reviews talk about people (manager, waiter, customer) while 5-star
reviews talk about dessert and properties of expensive restaurants like courses and
atmosphere. See Jurafsky et al. (2014) for more details.
If supervised training data is available, these counts computed from sentiment lex-
icons, sometimes weighted or normalized in various ways, can also be used as fea-
tures in a classifier along with other lexical or non-lexical features. We return to
such algorithms in Section 22.7.
Various weights can be used for the features, including the raw count in the training
set, or some normalized probability or log probability. Schwartz et al. (2013), for
example, turn feature counts into phrase likelihoods by normalizing them by each
subject’s total word use.
p(phrase|subject) = freq(phrase, subject) / ∑_{phrase′ ∈ vocab(subject)} freq(phrase′, subject)    (22.14)
If the training data is sparser, or not as similar to the test set, any of the lexicons
we’ve discussed can play a helpful role, either alone or in combination with all the
words and n-grams.
Many possible values can be used for lexicon features. The simplest is just an
indicator function, in which the value of a feature fL takes the value 1 if a particular
text has any word from the relevant lexicon L. Using the notation of Chapter 4, in
which a feature value is defined for a particular output class c and document x:

f_L(c, x) = 1 if ∃w: w ∈ L & w ∈ x & class = c, and 0 otherwise
Alternatively the value of a feature fL for a particular lexicon L can be the total
number of word tokens in the document that occur in L:
f_L = ∑_{w∈L} count(w)
For lexica in which each word is associated with a score or weight, the count can be
multiplied by a weight wL :
f_L = ∑_{w∈L} w_L · count(w)
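The three feature variants just described might be computed as in the following sketch, assuming each lexicon is given either as a set of words or as a dictionary mapping words to weights:

from collections import Counter

def lexicon_features(tokens, lexicons):
    """tokens: list of word tokens in the document.
    lexicons: dict mapping a lexicon name to either a set of words or a
    dict of word -> weight. Returns indicator, count, and weighted-count features."""
    counts = Counter(tokens)
    features = {}
    for name, lex in lexicons.items():
        overlap = {w: counts[w] for w in set(lex) if w in counts}
        features[f"{name}_indicator"] = int(bool(overlap))        # any hit at all
        features[f"{name}_count"] = sum(overlap.values())         # total tokens from L
        if isinstance(lex, dict):                                 # weighted counts
            features[f"{name}_weighted"] = sum(lex[w] * c for w, c in overlap.items())
    return features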
Figure 22.13 Power (dominance), sentiment (valence) and agency (arousal) for characters
in the movie The Dark Knight computed from embeddings trained on the NRC VAD Lexicon.
Note the protagonist (Batman) and the antagonist (the Joker) have high power and agency
scores but differ in sentiment, while the love interest Rachel has low power and agency but
high sentiment.
Connotation Frame for “Role1 survives Role2” Connotation Frame for “Role1 violates Role2”
Figure 22.14 Connotation frames for survive and violate. (a) For survive, the writer and reader have positive
sentiment toward Role1, the subject, and negative sentiment toward Role2, the direct object. (b) For violate, the
writer and reader have positive sentiment instead toward Role2, the direct object.
The connotation frame lexicons of Rashkin et al. (2016) and Rashkin et al.
(2017) also express other connotative aspects of the predicate toward each argument, including the effect (something bad happened to x), value (x is valuable), and mental state (x is distressed by the event). Connotation frames can also mark the
power differential between the arguments (using the verb implore means that the
theme argument has greater power than the agent), and the agency of each argument
(waited is low agency). Fig. 22.15 shows a visualization from Sap et al. (2017).
Connotation frames can be built by hand (Sap et al., 2017), or they can be learned
by supervised learning (Rashkin et al., 2016), for example using hand-labeled train-
Figure 22.15 The connotation frames of Sap et al. (2017), showing that the verb implore
implies the agent has lower power than the theme (in contrast, say, with a verb like demanded),
and showing the low level of agency of the subject of waited. Figure from Sap et al. (2017).
ing data to supervise classifiers for each of the individual relations, e.g., whether
S(writer → Role1) is + or -, and then improving accuracy via global constraints
across all relations.
22.10 Summary
• Many kinds of affective states can be distinguished, including emotions, moods,
attitudes (which include sentiment), interpersonal stance, and personality.
• Emotion can be represented by fixed atomic units often called basic emo-
tions, or as points in space defined by dimensions like valence and arousal.
• Words have connotational aspects related to these affective states, and this
connotational aspect of word meaning can be represented in lexicons.
• Affective lexicons can be built by hand, using crowd sourcing to label the
affective content of each word.
• Lexicons can be built semi-supervised, bootstrapping from seed words using similarity metrics like embedding cosine.
• Lexicons can be learned in a fully supervised manner, when a convenient
training signal can be found in the world, such as ratings assigned by users on
a review site.
• Words can be assigned weights in a lexicon by using various functions of word counts in training texts, and ratio metrics like the log odds ratio with an informative Dirichlet prior.
• Affect can be detected, just like sentiment, by using standard supervised text
classification techniques, using all the words or bigrams in a text as features.
Additional features can be drawn from counts of words in lexicons.
• Lexicons can also be used to detect affect in a rule-based classifier by picking
the simple majority sentiment based on counts of words in each lexicon.
• Connotation frames express richer relations of affective meaning that a pred-
icate encodes about its arguments.
Exercises
22.1 Show that the relationship between a word w and a category c in the Potts
Score in Eq. 22.6 is a variant of the pointwise mutual information pmi(w, c)
without the log term.
CHAPTER
23 Coreference Resolution and Entity Linking
Figure 23.1 How mentions evoke and access discourse entities in a discourse model.
Reference in a text to an entity that has been previously introduced into the
anaphora discourse is called anaphora, and the referring expression used is said to be an
anaphor anaphor, or anaphoric.2 In passage (23.1), the pronouns she and her and the defi-
nite NP the 38-year-old are therefore anaphoric. The anaphor corefers with a prior
antecedent mention (in this case Victoria Chen) that is called the antecedent. Not every refer-
ring expression is an antecedent. An entity that has only a single mention in a text
singleton (like Lotsabucks in (23.1)) is called a singleton.
coreference In this chapter we focus on the task of coreference resolution. Coreference
resolution
resolution is the task of determining whether two mentions corefer, by which we
mean they refer to the same entity in the discourse model (the same discourse entity).
coreference The set of coreferring expressions is often called a coreference chain or a cluster.
chain
cluster For example, in processing (23.1), a coreference resolution algorithm would need
to find at least four coreference chains, corresponding to the four entities in the
discourse model in Fig. 23.1.
1. {Victoria Chen, her, the 38-year-old, She}
2. {Megabucks Banking, the company, Megabucks}
3. {her pay}
4. {Lotsabucks}
Note that mentions can be nested; for example the mention her is syntactically
part of another mention, her pay, referring to a completely different discourse entity.
Coreference resolution thus comprises two tasks (although they are often per-
formed jointly): (1) identifying the mentions, and (2) clustering them into corefer-
ence chains/discourse entities.
We said that two mentions corefered if they are associated with the same dis-
course entity. But often we’d like to go further, deciding which real world entity is
associated with this discourse entity. For example, the mention Washington might
refer to the US state, or the capital city, or the person George Washington; the inter-
pretation of the sentence will of course be very different for each of these. The task
entity linking of entity linking (Ji and Grishman, 2011) or entity resolution is the task of mapping
a discourse entity to some real-world individual.3 We usually operationalize entity
2 We will follow the common NLP usage of anaphor to mean any mention that has an antecedent, rather
than the more narrow usage to mean only mentions (like pronouns) whose interpretation depends on the
antecedent (under the narrower interpretation, repeated names are not anaphors).
3 Computational linguistics/NLP thus differs in its use of the term reference from the field of formal
semantics, which uses the words reference and coreference to describe the relation between a mention
and a real-world entity. By contrast, we follow the functional linguistics tradition in which a mention
refers to a discourse entity (Webber, 1978) and the relation between a discourse entity and the real world
individual requires an additional step of linking.
inferrables: these introduce entities that are neither hearer-old nor discourse-old,
but the hearer can infer their existence by reasoning based on other entities
that are in the discourse. Consider the following examples:
(23.18) I went to a superb restaurant yesterday. The chef had just opened it.
(23.19) Mix flour, butter and water. Knead the dough until shiny.
Neither the chef nor the dough were in the discourse model based on the first
bridging sentence of either example, but the reader can make a bridging inference
inference
that these entities should be added to the discourse model and associated with
the restaurant and the ingredients, based on world knowledge that restaurants
have chefs and dough is the result of mixing flour and liquid (Haviland and
Clark 1974, Webber and Baldwin 1992, Nissim et al. 2004, Hou et al. 2018).
The form of an NP gives strong clues to its information status. We often talk
given-new about an entity’s position on the given-new dimension, the extent to which the refer-
ent is given (salient in the discourse, easier for the hearer to call to mind, predictable
by the hearer), versus new (non-salient in the discourse, unpredictable) (Chafe 1976,
accessible Prince 1981, Gundel et al. 1993). A referent that is very accessible (Ariel, 2001)
i.e., very salient in the hearer’s mind or easy to call to mind, can be referred to with
less linguistic material. For example pronouns are used only when the referent has
salience a high degree of activation or salience in the discourse model.4 By contrast, less
salient entities, like a new referent being introduced to the discourse, will need to be
introduced with a longer and more explicit referring expression to help the hearer
recover the referent.
Thus when an entity is first introduced into a discourse its mentions are likely
to have full names, titles or roles, or appositive or restrictive relative clauses, as in
the introduction of our protagonist in (23.1): Victoria Chen, CFO of Megabucks
Banking. As an entity is discussed over a discourse, it becomes more salient to the
hearer and its mentions on average typically become shorter and less informative,
for example with a shortened name (for example Ms. Chen), a definite description
(the 38-year-old), or a pronoun (she or her) (Hawkins 1978). However, this change
in length is not monotonic, and is sensitive to discourse structure (Grosz 1977b,
Reichman 1985, Fox 1993).
singular they Second, singular they has become much more common, in which they is used to
describe singular individuals, often useful because they is gender neutral. Although
recently increasing, singular they is quite old, part of English for many centuries.5
Person Agreement: English distinguishes between first, second, and third person,
and a pronoun’s antecedent must agree with the pronoun in person. Thus a third
person pronoun (he, she, they, him, her, them, his, her, their) must have a third person
antecedent (one of the above or any other noun phrase). However, phenomena like
quotation can cause exceptions; in this example I, my, and she are coreferent:
(23.32) “I voted for Nader because he was most aligned with my values,” she said.
Gender or Noun Class Agreement: In many languages, all nouns have grammat-
ical gender or noun class6 and pronouns generally agree with the grammatical gender
of their antecedent. In English this occurs only with third-person singular pronouns,
which distinguish between male (he, him, his), female (she, her), and nonpersonal
(it) grammatical genders. Non-binary pronouns like ze or hir may also occur in more
recent texts. Knowing which gender to associate with a name in text can be complex,
and may require world knowledge about the individual. Some examples:
(23.33) Maryam has a theorem. She is exciting. (she=Maryam, not the theorem)
(23.34) Maryam has a theorem. It is exciting. (it=the theorem, not Maryam)
Binding Theory Constraints: The binding theory is a name for syntactic con-
straints on the relations between a mention and an antecedent in the same sentence
reflexive (Chomsky, 1981). Oversimplifying a bit, reflexive pronouns like himself and her-
self corefer with the subject of the most immediate clause that contains them (23.35),
whereas nonreflexives cannot corefer with this subject (23.36).
(23.35) Janet bought herself a bottle of fish sauce. [herself=Janet]
(23.36) Janet bought her a bottle of fish sauce. [her ≠ Janet]
Grammatical Role: Entities mentioned in subject position are more salient than
those in object position, which are in turn more salient than those mentioned in
oblique positions. Thus although the first sentence in (23.38) and (23.39) expresses
roughly the same propositional content, the preferred referent for the pronoun he
varies with the subject—John in (23.38) and Bill in (23.39).
(23.38) Billy Bones went to the bar with Jim Hawkins. He called for a glass of
rum. [ he = Billy ]
(23.39) Jim Hawkins went to the bar with Billy Bones. He called for a glass of
rum. [ he = Jim ]
5 Here’s a bound pronoun example from Shakespeare’s Comedy of Errors: There’s not a man I meet but
doth salute me As if I were their well-acquainted friend
6 The word “gender” is generally only used for languages with 2 or 3 noun classes, like most Indo-
European languages; many languages, like the Bantu languages or Chinese, have a much larger number
of noun classes.
Verb Semantics: Some verbs semantically emphasize one of their arguments, bi-
asing the interpretation of subsequent pronouns. Compare (23.40) and (23.41).
(23.40) John telephoned Bill. He lost the laptop.
(23.41) John criticized Bill. He lost the laptop.
These examples differ only in the verb used in the first sentence, yet “he” in (23.40)
is typically resolved to John, whereas “he” in (23.41) is resolved to Bill. This may
be partly due to the link between implicit causality and saliency: the implicit cause
of a “criticizing” event is its object, whereas the implicit cause of a “telephoning”
event is its subject. In such verbs, the entity which is the implicit cause may be more
salient.
Selectional Restrictions: Many other kinds of semantic knowledge can play a role
in referent preference. For example, the selectional restrictions that a verb places on
its arguments (Chapter 21) can help eliminate referents, as in (23.42).
(23.42) I ate the soup in my new bowl after cooking it for hours
There are two possible referents for it, the soup and the bowl. The verb eat, however,
requires that its direct object denote something edible, and this constraint can rule
out bowl as a possible referent.
Exactly what counts as a mention and what links are annotated differs from task
to task and dataset to dataset. For example some coreference datasets do not label
singletons, making the task much simpler. Resolvers can achieve much higher scores
on corpora without singletons, since singletons constitute the majority of mentions in
running text, and they are often hard to distinguish from non-referential NPs. Some
tasks use gold mention-detection (i.e. the system is given human-labeled mention
boundaries and the task is just to cluster these gold mentions), which eliminates the
need to detect and segment mentions from running text.
Coreference is usually evaluated by the CoNLL F1 score, which combines three
metrics: MUC, B3 , and CEAFe ; Section 23.8 gives the details.
Let’s mention a few characteristics of one popular coreference dataset, OntoNotes
(Pradhan et al. 2007c, Pradhan et al. 2007a), and the CoNLL 2012 Shared Task
based on it (Pradhan et al., 2012a). OntoNotes contains hand-annotated Chinese
and English coreference datasets of roughly one million words each, consisting of
newswire, magazine articles, broadcast news, broadcast conversations, web data and
conversational speech data, as well as about 300,000 words of annotated Arabic
newswire. The most important distinguishing characteristic of OntoNotes is that
it does not label singletons, simplifying the coreference task, since singletons rep-
resent 60%-70% of all entities. In other ways, it is similar to other coreference
datasets. Referring expression NPs that are coreferent are marked as mentions, but
generics and pleonastic pronouns are not marked. Appositive clauses are not marked
as separate mentions, but they are included in the mention. Thus in the NP, “Richard
Godown, president of the Industrial Biotechnology Association” the mention is the
entire phrase. Prenominal modifiers are annotated as separate entities only if they
are proper nouns. Thus wheat is not an entity in wheat fields, but UN is an entity in
UN policy (but not adjectives like American in American policy).
A number of corpora mark richer discourse phenomena. The ISNotes corpus
annotates a portion of OntoNotes for information status, including bridging examples
(Hou et al., 2018). The LitBank coreference corpus (Bamman et al., 2020) contains
coreference annotations for 210,532 tokens from 100 different literary novels, in-
cluding singletons and quantified and negated noun phrases. The AnCora-CO coref-
erence corpus (Recasens and Martı́, 2010) contains 400,000 words each of Spanish
(AnCora-CO-Es) and Catalan (AnCora-CO-Ca) news data, and includes labels for
complex phenomena like discourse deixis in both languages. The ARRAU corpus
(Uryupina et al., 2020) contains 350,000 words of English marking all NPs, which
means singleton clusters are available. ARRAU includes diverse genres like dialog
(the TRAINS data) and fiction (the Pear Stories), and has labels for bridging refer-
ences, discourse deixis, generics, and ambiguous anaphoric relations.
(23.44) Victoria Chen, CFO of Megabucks Banking, saw her pay jump to $2.3
It is Modaladjective that S
It is Modaladjective (for NP) to VP
It is Cogv-ed that S
It seems/appears/means/follows (that) S
as make it in advance, but make them in Hollywood did not occur at all). These
n-gram contexts can be used as features in a supervised anaphoricity classifier.
Figure 23.2 For each pair of a mention (like she), and a potential antecedent mention (like
Victoria Chen or her), the mention-pair classifier assigns a probability of a coreference link.
For each prior mention (Victoria Chen, Megabucks Banking, her, etc.), the binary
classifier computes a probability: whether or not the mention is the antecedent of
she. We want this probability to be high for actual antecedents (Victoria Chen, her,
the 38-year-old) and low for non-antecedents (Megabucks Banking, her pay).
Early classifiers used hand-built features (Section 23.5); more recent classifiers
use neural representation learning (Section 23.6).
For training, we need a heuristic for selecting training samples; since most pairs
of mentions in a document are not coreferent, selecting every pair would lead to
a massive overabundance of negative samples. The most common heuristic, from
Soon et al. (2001), is to choose the closest antecedent as a positive example, and all pairs in between as the negative examples. More formally, for each anaphor mention mi we create
• one positive instance (mi, mj) where mj is the closest antecedent to mi, and
• a negative instance (mi, mk) for each mk between mj and mi (a small sketch of this sampling heuristic follows below).
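Here is one way the sampling heuristic might be sketched, assuming mentions lists the document's mentions in textual order and antecedent_of returns the index of the closest gold antecedent of a mention (or None if it has no antecedent):

def mention_pair_training_samples(mentions, antecedent_of):
    """For each anaphoric mention m_i, emit (m_i, m_j, 1) for its closest gold
    antecedent m_j and (m_i, m_k, 0) for every mention in between
    (the Soon et al. (2001) heuristic)."""
    samples = []
    for i, m_i in enumerate(mentions):
        j = antecedent_of(i)            # index of closest gold antecedent, or None
        if j is None:
            continue
        samples.append((m_i, mentions[j], 1))                            # positive
        samples.extend((m_i, mentions[k], 0) for k in range(j + 1, i))   # negatives
    return samples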
Figure 23.3 For each candidate anaphoric mention (like she), the mention-ranking system assigns a probability distribution over all previous mentions plus the special dummy mention ε.
At test time, for a given mention i the model computes one softmax over all the antecedents (plus ε) giving a probability for each candidate antecedent (or none).
Fig. 23.3 shows an example of the computation for the single candidate anaphor
she.
Once the antecedent is classified for each anaphor, transitive closure can be run
over the pairwise decisions to get a complete clustering.
Training is trickier in the mention-ranking model than the mention-pair model,
because for each anaphor we don’t know which of all the possible gold antecedents
to use for training. Instead, the best antecedent for each mention is latent; that
is, for each mention we have a whole cluster of legal gold antecedents to choose
from. Early work used heuristics to choose an antecedent, for example choosing the
closest antecedent as the gold antecedent and all non-antecedents in a window of
two sentences as the negative examples (Denis and Baldridge, 2008). Various kinds
of ways to model latent antecedents exist (Fernandes et al. 2012, Chang et al. 2013,
Durrett and Klein 2013). The simplest way is to give credit to any legal antecedent
by summing over all of them, with a loss function that optimizes the likelihood of
all correct antecedents from the gold clustering (Lee et al., 2017b). We’ll see the
details in Section 23.6.
Mention-ranking models can be implemented with hand-built features or with neural representation learning (which might also incorporate some hand-built features). We'll explore both directions in Section 23.5 and Section 23.6.
form as well as neural ones. Nonetheless, they are still sometimes useful to build
lightweight systems when compute or data are sparse, and the features themselves
are useful for error analysis even in neural systems.
Given an anaphor mention and a potential antecedent mention, feature based
classifiers make use of three types of features: (i) features of the anaphor, (ii) features
of the candidate antecedent, and (iii) features of the relationship between the pair.
Entity-based models can make use of two additional classes: (iv) features of all mentions from the antecedent's entity cluster, and (v) features of the relation between the anaphor and the mentions in the antecedent entity cluster.
Figure 23.4 shows a selection of commonly used features, and shows the value
that would be computed for the potential anaphor “she” and potential antecedent
“Victoria Chen” in our example sentence, repeated below:
(23.47) Victoria Chen, CFO of Megabucks Banking, saw her pay jump to $2.3
million, as the 38-year-old also became the company’s president. It is
widely known that she came to Megabucks from rival Lotsabucks.
Features that prior work has found to be particularly useful are exact string
match, entity headword agreement, mention distance, as well as (for pronouns) exact
attribute match and i-within-i, and (for nominals and proper names) word inclusion
and cosine. For lexical features (like head words) it is common to only use words
that appear enough times (>20 times).
This score s(i, j) includes three factors that we'll define below: m(i), whether span i is a mention; m(j), whether span j is a mention; and c(i, j), whether j is the antecedent of i:

s(i, j) = m(i) + m(j) + c(i, j)

For the dummy antecedent ε, the score s(i, ε) is fixed to 0. This way if any non-dummy scores are positive, the model predicts the highest-scoring antecedent, but if all the scores are negative it abstains.
embedding for the first (start) token of the span, the encoder output for the last (end)
token of the span, and a third vector which is an attention-based representation:
The goal of the attention vector is to represent which word/token is the likely
syntactic head-word of the span; we saw in the prior section that head-words are
a useful feature; a matching head-word is a good indicator of coreference. The
attention representation is computed as usual; the system learns a weight vector w, and computes its dot product with the hidden state h_t transformed by an FFN:

α_t = w · FFN(h_t)

which is normalized into an attention distribution over the tokens inside span i with a softmax:

a_{i,t} = exp(α_t) / Σ_{k=START(i)}^{END(i)} exp(α_k)    (23.52)
And then the attention distribution is used to create a vector h_ATT(i), which is an attention-weighted sum of the embeddings e_t of each of the words in span i:

h_ATT(i) = Σ_{t=START(i)}^{END(i)} a_{i,t} e_t    (23.53)
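A small numpy sketch of the resulting span representation follows; it assumes h holds the encoder outputs, e the token embeddings, and a single vector w stands in for the learned FFN-plus-dot-product head scorer, so it is illustrative rather than a faithful reimplementation of the e2e-coref code.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def span_representation(h, e, start, end, w):
    """g_i = [h_start ; h_end ; h_att] for the span [start, end] (inclusive).

    h: (n, d) encoder outputs; e: (n, d) token embeddings;
    w: (d,) stand-in for the learned head-scoring parameters.
    """
    alpha = h[start:end + 1] @ w      # one score per token in the span
    a = softmax(alpha)                # head-finding attention weights (23.52)
    h_att = a @ e[start:end + 1]      # attention-weighted embedding sum (23.53)
    return np.concatenate([h[start], h[end], h_att])

rng = np.random.default_rng(0)
h, e, w = rng.normal(size=(6, 4)), rng.normal(size=(6, 4)), rng.normal(size=4)
print(span_representation(h, e, 2, 4, w).shape)   # (12,)
```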
Fig. 23.5 shows the computation of the span representation and the mention
score.
[Figure 23.5: candidate spans over the sentence "General Electric said the Postal Service contacted the company", with encoder outputs (h) and a mention score (m) computed for each span.]
Figure 23.5 Computation of the span representation g (and the mention score m) in a BERT version of the
e2e-coref model (Lee et al. 2017b, Joshi et al. 2019). The model considers all spans up to a maximum width of
say 10; the figure shows a small subset of the bigram and trigram spans.
At inference time, this mention score m is used as a filter to keep only the best few
mentions.
We then compute the antecedent score for high-scoring mentions. The antecedent
score c(i, j) takes as input a representation of the spans i and j, but also the element-
wise similarity of the two spans to each other gi ◦ g j (here ◦ is element-wise mul-
tiplication). Fig. 23.6 shows the computation of the score s for the three possible
antecedents of the company in the example sentence from Fig. 23.5.
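As a rough illustration of this pairwise score, here is a sketch that feeds the concatenation [g_i ; g_j ; g_i ∘ g_j] through a single-hidden-layer feedforward net; the particular architecture and the random weights are assumptions of the sketch, standing in for learned parameters.

```python
import numpy as np

def antecedent_score(g_i, g_j, W1, b1, w2):
    """c(i, j): feedforward score over [g_i ; g_j ; g_i * g_j]."""
    x = np.concatenate([g_i, g_j, g_i * g_j])   # includes elementwise similarity
    hidden = np.maximum(0.0, W1 @ x + b1)       # ReLU hidden layer
    return float(w2 @ hidden)                   # scalar antecedent score

d = 12                                          # span-representation size
rng = np.random.default_rng(1)
W1, b1, w2 = rng.normal(size=(32, 3 * d)), np.zeros(32), rng.normal(size=32)
g_i, g_j = rng.normal(size=d), rng.normal(size=d)
# the full score adds the two mention scores: s(i, j) = m(i) + m(j) + c(i, j)
print(round(antecedent_score(g_i, g_j, W1, b1, w2), 3))
```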
Figure 23.6 The computation of the score s for the three possible antecedents of the com-
pany in the example sentence from Fig. 23.5. Figure after Lee et al. (2017b).
Given the set of mentions, the joint distribution of antecedents for each docu-
ment is computed in a forward pass, and we can then do transitive closure on the
antecedents to create a final clustering for the document.
Fig. 23.7 shows example predictions from the model, showing the attention
weights, which Lee et al. (2017b) find correlate with traditional semantic heads.
Note that the model gets the second example wrong, presumably because attendants
and pilot likely have nearby word embeddings.
Figure 23.7 Sample predictions from the Lee et al. (2017b) model, with one cluster per
example, showing one correct example and one mistake. Bold, parenthesized spans are men-
tions in the predicted cluster. The amount of red color on a word indicates the head-finding
attention weight ai,t in (23.52). Figure adapted from Lee et al. (2017b).
23.6.3 Learning
For training, we don’t have a single gold antecedent for each mention; instead the
coreference labeling only gives us each entire cluster of coreferent mentions; so a
mention only has a latent antecedent. We therefore use a loss function that maxi-
mizes the sum of the coreference probability of any of the legal antecedents. For a
given mention i with possible antecedents Y (i), let GOLD(i) be the set of mentions
in the gold cluster containing i. Since the set of mentions occurring before i is Y (i),
the set of mentions in that gold cluster that also occur before i is Y (i) ∩ GOLD(i). We
where in(x) is the set of Wikipedia pages pointing to x and W is the set of all Wiki-
pedia pages in the collection.
The vote given by anchor b to the candidate annotation a → X is the average,
over all the possible entities of b, of their relatedness to X, weighted by their prior
probability:
vote(b, X) = (1/|E(b)|) Σ_{Y ∈ E(b)} rel(X, Y) p(Y|b)    (23.60)
The total relatedness score for a → X is the sum of the votes of all the other anchors
detected in q:
relatedness(a → X) = Σ_{b ∈ X_q \ a} vote(b, X)    (23.61)
coherence(a → X) = (1/(|S| − 1)) Σ_{B ∈ S \ X} rel(B, X)

score(a → X) = (coherence(a → X) + linkprob(a)) / 2    (23.62)
Finally, pairs are pruned if score(a → X) < λ , where the threshold λ is set on a
held-out set.
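Here is a small Python sketch of the voting computation in Eqs. 23.60–23.62; it assumes we are handed a relatedness function rel and, for each anchor, a dictionary of candidate entities with their prior probabilities p(Y|b). The toy relatedness function and entity names below are made up purely for illustration.

```python
def vote(b_entities, X, rel):
    """vote(b, X): average relatedness of X to anchor b's candidate entities,
    weighted by their prior probabilities p(Y|b).  (Eq. 23.60)"""
    if not b_entities:
        return 0.0
    return sum(rel(X, Y) * p for Y, p in b_entities.items()) / len(b_entities)

def relatedness(a, X, anchors, rel):
    """relatedness(a -> X): sum of the votes of all other anchors.  (Eq. 23.61)"""
    return sum(vote(anchors[b], X, rel) for b in anchors if b != a)

def score(coherence_aX, linkprob_a):
    """score(a -> X): average of coherence and link probability.  (Eq. 23.62)"""
    return (coherence_aX + linkprob_a) / 2

# toy example with a hand-written relatedness function
rel = lambda X, Y: 1.0 if {X, Y} == {"Stanford_University", "Palo_Alto"} else 0.2
anchors = {"Stanford": {"Stanford_University": 0.9, "Stanford,_California": 0.1},
           "Palo Alto": {"Palo_Alto": 1.0}}
print(relatedness("Stanford", "Stanford_University", anchors, rel))
```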
It then computes the likelihood of each span [i, j] in q being an entity mention, in
a way similar to the span-based algorithm we saw for the reader above. First we
compute the score for i/ j being the start/end of a mention:
Figure 23.8 A sketch of the inference process in the ELQ algorithm for entity linking in
questions (Li et al., 2020). Each candidate question mention span and candidate entity are
separately encoded, and then scored by the entity/span dot product.
where w_start and w_end are vectors learned during training. Next, another trainable embedding, w_mention, is used to compute a score for each token being part of a mention:
Mention spans can be linked to entities by computing, for each entity e and span
[i, j], the dot product similarity between the span encoding (the average of the token
embeddings) and the entity encoding.
y_{i,j} = (1/(j − i + 1)) Σ_{t=i}^{j} q_t

s(e, [i, j]) = x_e · y_{i,j}    (23.68)
Finally, we take a softmax to get a distribution over entities for each span:
Training The ELQ mention detection and entity linking algorithm is fully super-
vised. This means, unlike the anchor dictionary algorithms from Section 23.7.1,
it requires datasets with entity boundaries marked and linked. Two such labeled
datasets are WebQuestionsSP (Yih et al., 2016), an extension of the WebQuestions
(Berant et al., 2013) dataset derived from Google search questions, and GraphQues-
tions (Su et al., 2016). Both have had entity spans in the questions marked and
linked (Sorokin and Gurevych 2018, Li et al. 2020) resulting in entity-labeled ver-
sions WebQSPEL and GraphQEL (Li et al., 2020).
Given a training set, the ELQ mention detection and entity linking phases are
trained jointly, optimizing the sum of their losses. The mention detection loss is
a binary cross-entropy loss, with L the length of the passage and N the number of
candidates:
L_MD = −(1/N) Σ_{1 ≤ i ≤ j ≤ min(i+L−1, n)} ( y_{[i,j]} log p([i,j]) + (1 − y_{[i,j]}) log(1 − p([i,j])) )    (23.70)
with y[i, j] = 1 if [i, j] is a gold mention span, else 0. The entity linking loss is:
L_ED = −log p(e_g | [i, j])    (23.71)
where eg is the gold entity for mention [i, j].
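A rough numpy sketch of the span/entity scoring and the two losses above follows; the token encodings q and the entity encodings are random stand-ins, p([i, j]) is treated as already produced by the mention-scoring step, and the function names are illustrative rather than taken from the ELQ codebase.

```python
import numpy as np

def span_encoding(q, i, j):
    """Average of the token encodings in span [i, j] (inclusive)."""
    return q[i:j + 1].mean(axis=0)

def entity_probs(q, i, j, entity_encodings):
    """s(e, [i, j]) = x_e . y_ij (Eq. 23.68), then a softmax over entities."""
    s = entity_encodings @ span_encoding(q, i, j)
    z = np.exp(s - s.max())
    return z / z.sum()

def mention_detection_loss(p_spans, gold_spans, n_candidates):
    """Binary cross-entropy over candidate spans (Eq. 23.70).
    p_spans: dict {(i, j): p([i, j])}; gold_spans: set of gold (i, j) pairs."""
    total = 0.0
    for span, p in p_spans.items():
        y = 1.0 if span in gold_spans else 0.0
        total += y * np.log(p) + (1 - y) * np.log(1 - p)
    return -total / n_candidates

def entity_linking_loss(p_entities, gold_entity_index):
    """-log p(e_gold | [i, j])  (Eq. 23.71)."""
    return -np.log(p_entities[gold_entity_index])

rng = np.random.default_rng(2)
q = rng.normal(size=(8, 16))           # token encodings for the question
entities = rng.normal(size=(5, 16))    # candidate entity encodings
p_e = entity_probs(q, 2, 4, entities)
print(entity_linking_loss(p_e, gold_entity_index=3) >= 0)
```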
The weight wi for each entity can be set to different values to produce different
versions of the algorithm.
Following a proposal from Denis and Baldridge (2009), the CoNLL coreference
competitions were scored based on the average of MUC, CEAF-e, and B³ (Pradhan
et al. 2011, Pradhan et al. 2012b), and so it is common in many evaluation campaigns
to report an average of these 3 metrics. See Luo and Pradhan (2016) for a detailed
description of the entire set of metrics; reference implementations of these should
be used rather than attempting to reimplement from scratch (Pradhan et al., 2014).
Alternative metrics have been proposed that deal with particular coreference do-
mains or tasks. For example, consider the task of resolving mentions to named
entities (persons, organizations, geopolitical entities), which might be useful for in-
formation extraction or knowledge base completion. A hypothesis chain that cor-
rectly contains all the pronouns referring to an entity, but has no version of the name
itself, or is linked with a wrong name, is not useful for this task. We might instead
want a metric that weights each mention by how informative it is (with names being
most informative) (Chen and Ng, 2013) or a metric that considers a hypothesis to
match a gold chain only if it contains at least one variant of a name (the NEC F1
metric of Agarwal et al. (2019)).
(23.74) The trophy didn’t fit into the suitcase because it was too small.
Question: What was too small? Answer: The suitcase
The problems have the following characteristics:
1. The problems each have two parties
2. A pronoun preferentially refers to one of the parties, but could grammatically
also refer to the other
3. A question asks which party the pronoun refers to
4. If one word in the question is changed, the human-preferred answer changes
to the other party
The kind of world knowledge that might be needed to solve the problems can
vary. In the trophy/suitcase example, it is knowledge about the physical world; that
a bigger object cannot fit into a smaller object. In the original Winograd sentence,
it is stereotypes about social actors like politicians and protesters. In examples like
the following, it is knowledge about human actions like turn-taking or thanking.
(23.75) Bill passed the gameboy to John because his turn was [over/next]. Whose
turn was [over/next]? Answers: Bill/John
(23.76) Joan made sure to thank Susan for all the help she had [given/received].
Who had [given/received] help? Answers: Susan/Joan.
Although the Winograd Schema was designed to require common-sense rea-
soning, a large percentage of the original set of problems can be solved by pre-
trained language models, fine-tuned on Winograd Schema sentences (Kocijan et al.,
2019). Large pretrained language models encode an enormous amount of world or
common-sense knowledge! The current trend is therefore to propose new datasets
with increasingly difficult Winograd-like coreference resolution problems like KNOWREF
(Emami et al., 2019), with examples like:
(23.77) Marcus is undoubtedly faster than Jarrett right now but in [his] prime the
gap wasn’t all that big.
In the end, it seems likely that some combination of language modeling and knowledge will prove fruitful; indeed, it seems that knowledge-based models overfit less to lexical idiosyncrasies in Winograd Schema training sets (Trichelair et al., 2018).
(23.78) The secretary called the physician_i and told him_i about a new patient [pro-stereotypical]
(23.79) The secretary called the physician_i and told her_i about a new patient [anti-stereotypical]
Zhao et al. (2018a) consider a coreference system to be biased if it is more accu-
rate at linking pronouns consistent with gender stereotypical occupations (e.g., him
with physician in (23.78)) than linking pronouns inconsistent with gender-stereotypical
occupations (e.g., her with physician in (23.79)). They show that coreference sys-
tems of all architectures (rule-based, feature-based machine learned, and end-to-
end-neural) all show significant bias, performing on average 21 F1 points worse in
the anti-stereotypical cases.
One possible source of this bias is that female entities are significantly un-
derrepresented in the OntoNotes dataset, used to train most coreference systems.
Zhao et al. (2018a) propose a way to overcome this bias: they generate a second
gender-swapped dataset in which all male entities in OntoNotes are replaced with
female ones and vice versa, and retrain coreference systems on the combined orig-
inal and swapped OntoNotes data, also using debiased GloVe embeddings (Bolukbasi et al., 2016). The resulting coreference systems no longer exhibit bias on the
WinoBias dataset, without significantly impacting OntoNotes coreference accuracy.
In a follow-up paper, Zhao et al. (2019) show that the same biases exist in ELMo
contextualized word vector representations and coref systems that use them. They
showed that retraining ELMo with data augmentation again reduces or removes bias
in coreference systems on WinoBias.
Webster et al. (2018) introduces another dataset, GAP, and the task of Gendered
Pronoun Resolution as a tool for developing improved coreference algorithms for
gendered pronouns. GAP is a gender-balanced labeled corpus of 4,454 sentences
with gendered ambiguous pronouns (by contrast, only 20% of the gendered pro-
nouns in the English OntoNotes training data are feminine). The examples were
created by drawing on naturally occurring sentences from Wikipedia pages to create
hard to resolve cases with two named entities of the same gender and an ambiguous
pronoun that may refer to either person (or neither), like the following:
(23.80) In May, Fujisawa joined Mari Motohashi’s rink as the team’s skip, moving
back from Karuizawa to Kitami where she had spent her junior days.
Webster et al. (2018) show that modern coreference algorithms perform signif-
icantly worse on resolving feminine pronouns than masculine pronouns in GAP.
Kurita et al. (2019) shows that a system based on BERT contextualized word repre-
sentations shows similar bias.
23.11 Summary
This chapter introduced the task of coreference resolution.
• This is the task of linking together mentions in text which corefer, i.e. refer
to the same discourse entity in the discourse model, resulting in a set of
coreference chains (also called clusters or entities).
• Mentions can be definite NPs or indefinite NPs, pronouns (including zero
pronouns) or names.
the current sentence from left-to-right, starting with the first noun group to the right
of the pronoun (for cataphora). The first noun group that agrees with the pronoun
with respect to number, gender, and person is chosen as the antecedent” (Kehler
et al., 2004).
Lappin and Leass (1994) was an influential entity-based system that used weights
to combine syntactic and other features, extended soon after by Kennedy and Bogu-
raev (1996) whose system avoids the need for full syntactic parses.
Approximately contemporaneously centering (Grosz et al., 1995) was applied
to pronominal anaphora resolution by Brennan et al. (1987), and a wide variety of
work followed focused on centering’s use in coreference (Kameyama 1986, Di Eu-
genio 1990, Walker et al. 1994, Di Eugenio 1996, Strube and Hahn 1996, Kehler
1997a, Tetreault 2001, Iida et al. 2003). Kehler and Rohde (2013) show how center-
ing can be integrated with coherence-driven theories of pronoun interpretation. See
Chapter 24 for the use of centering in measuring discourse coherence.
Coreference competitions as part of the US DARPA-sponsored MUC confer-
ences provided early labeled coreference datasets (the 1995 MUC-6 and 1998 MUC-
7 corpora), and set the tone for much later work, choosing to focus exclusively
on the simplest cases of identity coreference (ignoring difficult cases like bridging,
metonymy, and part-whole) and drawing the community toward supervised machine
learning and metrics like the MUC metric (Vilain et al., 1995). The later ACE eval-
uations produced labeled coreference corpora in English, Chinese, and Arabic that
were widely used for model training and evaluation.
This DARPA work influenced the community toward supervised learning begin-
ning in the mid-90s (Connolly et al. 1994, Aone and Bennett 1995, McCarthy and
Lehnert 1995). Soon et al. (2001) laid out a set of basic features, extended by Ng and
Cardie (2002b), and a series of machine learning models followed over the next 15
years. These often focused separately on pronominal anaphora resolution (Kehler
et al. 2004, Bergsma and Lin 2006), full NP coreference (Cardie and Wagstaff 1999,
Ng and Cardie 2002b, Ng 2005a) and definite NP reference (Poesio and Vieira 1998,
Vieira and Poesio 2000), as well as separate anaphoricity detection (Bean and Riloff
1999, Bean and Riloff 2004, Ng and Cardie 2002a, Ng 2004), or singleton detection
(de Marneffe et al., 2015).
The move from mention-pair to mention-ranking approaches was pioneered by
Yang et al. (2003) and Iida et al. (2003) who proposed pairwise ranking methods,
then extended by Denis and Baldridge (2008) who proposed to do ranking via a soft-
max over all prior mentions. The idea of doing mention detection, anaphoricity, and
coreference jointly in a single end-to-end model grew out of the early proposal of Ng
(2005b) to use a dummy antecedent for mention-ranking, allowing ‘non-referential’
to be a choice for coreference classifiers, Denis and Baldridge’s 2007 joint system
combining anaphoricity classifier probabilities with coreference probabilities, the
Denis and Baldridge (2008) ranking model, and the Rahman and Ng (2009) pro-
posal to train the two models jointly with a single objective.
Simple rule-based systems for coreference returned to prominence in the 2010s,
partly because of their ability to encode entity-based features in a high-precision way
(Zhou et al. 2004b, Haghighi and Klein 2009, Raghunathan et al. 2010, Lee et al.
2011, Lee et al. 2013, Hajishirzi et al. 2013) but in the end they suffered from an
inability to deal with the semantics necessary to correctly handle cases of common
noun coreference.
A return to supervised learning led to a number of advances in mention-ranking
models which were also extended into neural architectures, for example using re-
Exercises
CHAPTER
24 Discourse Coherence
And even in our wildest and most wandering reveries, nay in our very dreams,
we shall find, if we reflect, that the imagination ran not altogether at adven-
tures, but that there was still a connection upheld among the different ideas,
which succeeded each other. Were the loosest and freest conversation to be
transcribed, there would immediately be observed something which connected it
in all its transitions.
David Hume, An enquiry concerning human understanding, 1748
Orson Welles’ movie Citizen Kane was groundbreaking in many ways, perhaps most
notably in its structure. The story of the life of fictional media magnate Charles
Foster Kane, the movie does not proceed in chronological order through Kane’s
life. Instead, the film begins with Kane’s death (famously murmuring “Rosebud”)
and is structured around flashbacks to his life inserted among scenes of a reporter
investigating his death. The novel idea that the structure of a movie does not have
to linearly follow the structure of the real timeline made apparent for 20th century
cinematography the infinite possibilities and impact of different kinds of coherent
narrative structures.
But coherent structure is not just a fact about movies or works of art. Like
movies, language does not normally consist of isolated, unrelated sentences, but
instead of collocated, structured, coherent groups of sentences. We refer to such
a coherent structured group of sentences as a discourse, and we use the word coherence to refer to the relationship between sentences that makes real discourses
different than just random assemblages of sentences. The chapter you are now read-
ing is an example of a discourse, as is a news article, a conversation, a thread on
social media, a Wikipedia page, and your favorite novel.
What makes a discourse coherent? If you created a text by taking random sen-
tences each from many different sources and pasted them together, would that be a
coherent discourse? Almost certainly not. Real discourses exhibit both local coherence and global coherence. Let's consider three ways in which real discourses are locally coherent.
First, sentences or clauses in real discourses are related to nearby sentences in
systematic ways. Consider this example from Hobbs (1979):
(24.1) John took a train from Paris to Istanbul. He likes spinach.
This sequence is incoherent because it is unclear to a reader why the second
sentence follows the first; what does liking spinach have to do with train trips? In
fact, a reader might go to some effort to try to figure out how the discourse could be
coherent; perhaps there is a French spinach shortage? The very fact that hearers try
to identify such connections suggests that human discourse comprehension involves
the need to establish this kind of coherence.
By contrast, in the following coherent example:
(24.2) Jane took a train from Paris to Istanbul. She had to attend a conference.
the second sentence gives a REASON for Jane’s action in the first sentence. Struc-
tured relationships like REASON that hold between text units are called coherence relations, and coherent discourses are structured by many such coherence relations. Coherence relations are introduced in Section 24.1.
A second way a discourse can be locally coherent is by virtue of being “about”
someone or something. In a coherent discourse some entities are salient, and the
discourse focuses on them and doesn’t go back and forth between multiple entities.
This is called entity-based coherence. Consider the following incoherent passage,
in which the salient entity seems to wildly swing from John to Jenny to the piano
store to the living room, back to Jenny, then the piano again:
(24.3) John wanted to buy a piano for his living room.
Jenny also wanted to buy a piano.
He went to the piano store.
It was nearby.
The living room was on the second floor.
She didn’t find anything she liked.
The piano he bought was hard to get up to that floor.
Entity-based coherence models measure this kind of coherence by tracking salient
entities across a discourse. For example Centering Theory (Grosz et al., 1995), the
most influential theory of entity-based coherence, keeps track of which entities in
the discourse model are salient at any point (salient entities are more likely to be
pronominalized or to appear in prominent syntactic positions like subject or object).
In Centering Theory, transitions between sentences that maintain the same salient
entity are considered more coherent than ones that repeatedly shift between entities.
The entity grid model of coherence (Barzilay and Lapata, 2008) is a commonly
used model that realizes some of the intuitions of the Centering Theory framework.
Entity-based coherence is introduced in Section 24.3.
Finally, discourses can be locally coherent by being topically coherent: nearby
sentences are generally about the same topic and use the same or similar vocab-
ulary to discuss these topics. Because topically coherent discourses draw from a
single semantic field or topic, they tend to exhibit the surface property known as
lexical cohesion (Halliday and Hasan, 1976): the sharing of identical or semanti-
cally related words in nearby sentences. For example, the fact that the words house,
chimney, garret, closet, and window— all of which belong to the same semantic
field— appear in the two sentences in (24.4), or that they share the identical word
shingled, is a cue that the two are tied together as a discourse:
(24.4) Before winter I built a chimney, and shingled the sides of my house...
I have thus a tight shingled and plastered house... with a garret and a
closet, a large window on each side....
In addition to the local coherence between adjacent or nearby sentences, dis-
courses also exhibit global coherence. Many genres of text are associated with
particular conventional discourse structures. Academic articles might have sections
describing the Methodology or Results. Stories might follow conventional plotlines
or motifs. Persuasive essays have a particular claim they are trying to argue for,
and an essay might express this claim together with a structured set of premises that
support the argument and demolish potential counterarguments. We’ll introduce
versions of each of these kinds of global coherence.
Why do we care about the local or global coherence of a discourse? Since co-
herence is a property of a well-written text, coherence detection plays a part in any
task that requires measuring the quality of a text. For example coherence can help
in pedagogical tasks like essay grading or essay quality measurement that are trying
to grade how well-written a human essay is (Somasundaran et al. 2014, Feng et al.
2014, Lai and Tetreault 2018). Coherence can also help for summarization; knowing
the coherence relationship between sentences can help in deciding how to select informa-
tion from them. Finally, detecting incoherent text may even play a role in mental
health tasks like measuring symptoms of schizophrenia or other kinds of disordered
language (Ditman and Kuperberg 2010, Elvevåg et al. 2007, Bedi et al. 2015, Iter
et al. 2018).
Elaboration: The satellite gives additional information or detail about the situation
presented in the nucleus.
(24.8) [NUC Dorothy was from Kansas.] [SAT She lived in the midst of the great
Kansas prairies.]
Evidence: The satellite gives additional information or detail about the situation
presented in the nucleus. The information is presented with the goal of convincing the
reader to accept the information presented in the nucleus.
(24.9) [NUC Kevin must be here.] [SAT His car is parked outside.]
Attribution: The satellite gives the source of attribution for an instance of reported
speech in the nucleus.
(24.10) [SAT Analysts estimated] [NUC that sales at U.S. stores declined in the
quarter, too]
We can also talk about the coherence of a larger text by considering the hierar-
chical structure between coherence relations. Figure 24.1 shows the rhetorical struc-
ture of a paragraph from Marcu (2000a) for the text in (24.12) from the Scientific
American magazine.
(24.12) With its distant orbit–50 percent farther from the sun than Earth–and slim
atmospheric blanket, Mars experiences frigid weather conditions. Surface
temperatures typically average about -60 degrees Celsius (-76 degrees
Fahrenheit) at the equator and can dip to -123 degrees C near the poles. Only
the midday sun at tropical latitudes is warm enough to thaw ice on occasion,
but any liquid water formed in this way would evaporate almost instantly
because of the low atmospheric pressure.
[Figure 24.1: discourse tree for (24.12), with a Title span (1) "Mars", an evidence relation over spans 2–9, and background and elaboration-additional relations among spans 2–3 and 4–9.]
Figure 24.1 A discourse tree for the Scientific American text in (24.12), from Marcu (2000a). Note that
asymmetric relations are represented with a curved arrow from the satellite to the nucleus.
The leaves in the Fig. 24.1 tree correspond to text spans of a sentence, clause or
phrase that are called elementary discourse units or EDUs in RST; these units can
also be referred to as discourse segments. Because these units may correspond to
arbitrary spans of text, determining the boundaries of an EDU is an important task
for extracting coherence relations. Roughly speaking, one can think of discourse
TEMPORAL (SYNCHRONOUS): The parishioners of St. Michael and All Angels stop to chat at the church door, as members here always have. (Implicit while) In the tower, five men and women pull rhythmically on ropes attached to the same five bells that first sounded here in 1614.
CONTINGENCY (REASON): Also unlike Mr. Ruder, Mr. Breeden appears to be in a position to get somewhere with his agenda. (implicit=because) As a former White House aide who worked closely with Congress, he is savvy in the ways of Washington.
COMPARISON (CONTRAST): The U.S. wants the removal of what it perceives as barriers to investment; Japan denies there are real barriers.
EXPANSION (CONJUNCTION): Not only do the actors stand outside their characters and make it clear they are at odds with them, but they often literally stand on their heads.
Figure 24.2 The four high-level semantic distinctions in the PDTB sense hierarchy
Temporal
• Asynchronous
• Synchronous (Precedence, Succession)
Comparison
• Contrast (Juxtaposition, Opposition)
• Pragmatic Contrast (Juxtaposition, Opposition)
• Concession (Expectation, Contra-expectation)
• Pragmatic Concession
Contingency
• Cause (Reason, Result)
• Pragmatic Cause (Justification)
• Condition (Hypothetical, General, Unreal Present/Past, Factual Present/Past)
• Pragmatic Condition (Relevance, Implicit Assertion)
Expansion
• Exception
• Instantiation
• Restatement (Specification, Equivalence, Generalization)
• Alternative (Conjunction, Disjunction, Chosen Alternative)
• List
Figure 24.3 The PDTB sense hierarchy. There are four top-level classes, 16 types, and 23 subtypes (not all types have subtypes). 11 of the 16 types are commonly used for implicit argument classification; the 5 types in italics are too rare in implicit labeling to be used.
[Figure 24.4: the tokens "Mr. Rambo says that ..." are encoded, passed through a linear layer and a softmax, and each token receives an EDU-break label (here 0 0 0 1).]
Figure 24.4 Predicting EDU segment beginnings from encoded text.
Figure 24.5 Example RST discourse tree, showing four EDUs. Figure from Yu et al. (2018).
Figure 24.6 Parsing the example of Fig. 24.5 using a shift-reduce parser. Figure from Yu
et al. (2018).
An EDU spanning w_s, w_{s+1}, ..., w_t then has biLSTM output representations h_{w_s}, h_{w_{s+1}}, ..., h_{w_t}, and is represented by average pooling:

x^e = (1/(t − s + 1)) Σ_{k=s}^{t} h_{w_k}    (24.18)
The second layer uses this input to compute a final representation of the sequence of EDU representations h^e:

h^e_1, h^e_2, ..., h^e_n = biLSTM(x^e_1, x^e_2, ..., x^e_n)    (24.19)
Training first maps each RST gold parse tree into a sequence of oracle actions, and then uses the standard cross-entropy loss (with L2 regularization) to train the system to take such actions. Given a state S and oracle action a, we first compute the decoder output using Eq. 24.20, then apply a softmax to get probabilities:

p_a = exp(o_a) / Σ_{a′ ∈ A} exp(o_{a′})    (24.22)

L_CE(Θ) = −log(p_a) + (λ/2) ||Θ||²    (24.23)
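A short sketch of Eqs. 24.22–24.23 follows, assuming we are given the decoder's output scores o_a over the action set A and some stand-in parameter tensors for the regularizer; it is illustrative, not the parser's actual training code.

```python
import numpy as np

def action_probs(o):
    """p_a = exp(o_a) / sum_a' exp(o_a') over the action set A  (Eq. 24.22)."""
    z = np.exp(o - o.max())
    return z / z.sum()

def cross_entropy_loss(o, oracle_action, params, lam=1e-4):
    """-log p_a for the oracle action plus (lambda/2) * ||Theta||^2  (Eq. 24.23)."""
    p = action_probs(o)
    l2 = sum(float((w ** 2).sum()) for w in params)
    return -np.log(p[oracle_action]) + (lam / 2) * l2

o = np.array([1.3, -0.2, 0.7])         # decoder scores for candidate actions
theta = [np.ones((2, 2)), np.ones(3)]  # stand-in parameter tensors
print(round(cross_entropy_loss(o, oracle_action=0, params=theta), 3))
```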
RST discourse parsers are evaluated on the test section of the RST Discourse Tree-
bank, either with gold EDUs or end-to-end, using the RST-Parseval metrics (Marcu,
2000b). It is standard to first transform the gold RST trees into right-branching bi-
nary trees, and to report four metrics: trees with no labels (S for Span), labeled
with nuclei (N), with relations (R), or both (F for Full), for each metric computing
micro-averaged F1 over all spans from all documents (Marcu 2000b, Morey et al.
2017).
24.3.1 Centering
Centering Theory (Grosz et al., 1995) is a theory of both discourse salience and
discourse coherence. As a model of discourse salience, Centering proposes that at
any given point in the discourse one of the entities in the discourse model is salient:
it is being “centered” on. As a model of discourse coherence, Centering proposes
that discourses in which adjacent sentences CONTINUE to maintain the same salient
entity are more coherent than those which SHIFT back and forth between multiple
entities (we will see that CONTINUE and SHIFT are technical terms in the theory).
The following two texts from Grosz et al. (1995), which have exactly the same
propositional content but different saliences, can help in understanding the main
Centering intuition.
(24.28) a. John went to his favorite music store to buy a piano.
b. He had frequented the store for many years.
c. He was excited that he could finally buy a piano.
d. He arrived just as the store was closing for the day.
entity. In a RETAIN relation, the speaker intends to SHIFT to a new entity in a future
utterance and meanwhile places the current entity in a lower rank C f . In a SHIFT
relation, the speaker is shifting to a new salient entity.
Let's walk through the start of (24.28) again, repeated as (24.30), showing the
representations after each utterance is processed.
(24.30) John went to his favorite music store to buy a piano. (U1 )
He was excited that he could finally buy a piano. (U2 )
He arrived just as the store was closing for the day. (U3 )
It was closing just as John arrived (U4 )
Using the grammatical role hierarchy to order the C_f, for sentence U1 we get:
C_f(U1): {John, music store, piano}
C_p(U1): John
C_b(U1): undefined
and then for sentence U2:
C_f(U2): {John, piano}
C_p(U2): John
C_b(U2): John
Result: Continue (C_p(U2) = C_b(U2); C_b(U1) undefined)
The transition from U1 to U2 is thus a CONTINUE. Completing this example is left as an exercise for the reader (Exercise 24.1).
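The transition classification used here can be sketched in a few lines of Python; the sketch covers only the CONTINUE, RETAIN, and SHIFT transitions discussed in this section (the full theory further subdivides shifts), and the function name is illustrative.

```python
def centering_transition(cb_prev, cb_cur, cp_cur):
    """Classify the transition into an utterance with backward-looking center
    cb_cur and preferred center cp_cur, given the previous Cb (or None).

    CONTINUE: Cb is maintained (or previously undefined) and equals Cp.
    RETAIN:   Cb is maintained (or previously undefined) but differs from Cp.
    SHIFT:    the backward-looking center changes.
    """
    if cb_prev is None or cb_cur == cb_prev:
        return "CONTINUE" if cb_cur == cp_cur else "RETAIN"
    return "SHIFT"

# U1 -> U2 in (24.30): Cb(U1) undefined, Cb(U2) = Cp(U2) = "John"
print(centering_transition(None, "John", "John"))   # CONTINUE
```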
[Figure 24.8 grid: rows correspond to sentences 1–6 and columns to the entities Government, Competitors, Department, Microsoft, Netscape, Evidence, Earnings, Products, Software, Markets, Brands, Tactics, Case, Trial, and Suit; each cell is S, O, X, or –.]
Figure 24.8 Part of the entity grid for the text in Fig. 24.9. Entities are listed by their head
noun; each cell represents whether an entity appears as subject (S), object (O), neither (X), or
is absent (–). Figure from Barzilay and Lapata (2008).
Figure 24.9 A discourse with the entities marked and annotated with grammatical func-
tions. Figure from Barzilay and Lapata (2008).
resolution to cluster them into discourse entities (Chapter 23) as well as parsing the
sentences to get grammatical roles.
In the resulting grid, columns that are dense (like the column for Microsoft) in-
dicate entities that are mentioned often in the texts; sparse columns (like the column
for earnings) indicate entities that are mentioned rarely.
In the entity grid model, coherence is measured by patterns of local entity tran-
sition. For example, Department is a subject in sentence 1, and then not men-
tioned in sentence 2; this is the transition [S –]. The transitions are thus sequences {S, O, X, –}^n which can be extracted as continuous cells from each column. Each transition has a probability; the probability of [S –] in the grid from Fig. 24.8 is 0.08
(it occurs 6 times out of the 75 total transitions of length two). Fig. 24.10 shows the
distribution over transitions of length 2 for the text of Fig. 24.9 (shown as the first
row d1 ), and 2 other documents.
     SS   SO   SX   S–   OS   OO   OX   O–   XS   XO   XX   X–   –S   –O   –X   ––
d1  .01  .01    0  .08  .01    0    0  .09    0    0    0  .03  .05  .07  .03  .59
d2  .02  .01  .01  .02    0  .07    0  .02  .14  .14  .06  .04  .03  .07  .10  .36
d3  .02    0    0  .03  .09    0  .09  .06    0    0    0  .05  .03  .07  .17  .39
Figure 24.10 A feature vector for representing documents using all transitions of length 2.
Document d1 is the text in Fig. 24.9. Figure from Barzilay and Lapata (2008).
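Here is a small sketch of extracting length-2 transition probabilities from an entity grid; representing the grid as one column string per entity is an assumption of the sketch, made purely for illustration.

```python
from collections import Counter
from itertools import product

def transition_probs(grid, n=2):
    """Probability of each length-n transition over {S, O, X, -} in an entity grid.

    grid: dict mapping each entity to its column, a string over 'S', 'O', 'X', '-'
          with one character per sentence.
    """
    counts = Counter()
    for column in grid.values():
        for k in range(len(column) - n + 1):
            counts[column[k:k + n]] += 1        # contiguous cells down a column
    total = sum(counts.values())
    return {"".join(t): counts["".join(t)] / total
            for t in product("SOX-", repeat=n)}

# tiny grid: 3 sentences, 2 entities
grid = {"Microsoft": "SSO", "earnings": "-O-"}
probs = transition_probs(grid)
print(probs["SS"], probs["S-"], probs["-O"])
```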
The transitions and their probabilities can then be used as features for a machine
learning model. This model can be a text classifier trained to produce human-labeled
coherence scores (for example from humans labeling each text as coherent or inco-
herent). But such data is expensive to gather. Barzilay and Lapata (2005) introduced
a simplifying innovation: coherence models can be trained by self-supervision:
trained to distinguish the natural original order of sentences in a discourse from
a subtopic have high cosine with each other, but not with sentences in a neighboring
subtopic.
A third early model, the LSA Coherence method of Foltz et al. (1998) was the
first to use embeddings, modeling the coherence between two sentences as the co-
sine between their LSA sentence embedding vectors1 , computing embeddings for a
sentence s by summing the embeddings of its words w:
sim(s, t) = cos(s, t) = cos( Σ_{w ∈ s} w , Σ_{w ∈ t} w )    (24.31)

and defining the overall coherence of a text as the average similarity over all pairs of adjacent sentences s_i and s_{i+1}:

coherence(T) = (1/(n − 1)) Σ_{i=1}^{n−1} cos(s_i, s_{i+1})    (24.32)
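A minimal sketch of Eqs. 24.31–24.32 follows; the randomly initialized embeddings below are stand-ins for real LSA (or other) word vectors.

```python
import numpy as np

def sentence_vector(sentence, embeddings):
    """Sum of the word embeddings of a sentence (a list of word strings)."""
    return np.sum([embeddings[w] for w in sentence], axis=0)

def coherence(text, embeddings):
    """Average cosine between adjacent sentence vectors (Eqs. 24.31-24.32)."""
    vecs = [sentence_vector(s, embeddings) for s in text]
    cosines = [u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
               for u, v in zip(vecs[:-1], vecs[1:])]
    return float(np.mean(cosines))

rng = np.random.default_rng(3)
vocab = ["jane", "took", "a", "train", "she", "attended", "conference"]
emb = {w: rng.normal(size=8) for w in vocab}
text = [["jane", "took", "a", "train"], ["she", "attended", "a", "conference"]]
print(round(coherence(text, emb), 3))
```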
E_{p(s′|s_i)} is the expectation with respect to the negative sampling distribution conditioned on s_i: given a sentence s_i, the algorithm samples a negative sentence s′
1 See Chapter 6 for more on LSA embeddings; they are computed by applying SVD to the term-
document matrix (each cell weighted by log frequency and normalized by entropy), and then the first
300 dimensions are used as the embedding.
Figure 24.11 The architecture of the LCD model of document coherence, showing the
computation of the score for a pair of sentences s and t. Figure from Xu et al. (2019).
uniformly over the other sentences in the same document. L is a loss function that takes two scores, one for a positive pair and one for a negative pair, with the goal of encouraging f⁺ = f(s_i, s_{i+1}) to be high and f⁻ = f(s_i, s′) to be low. Fig. 24.11 uses the margin loss ℓ(f⁺, f⁻) = max(0, η − f⁺ + f⁻), where η is the margin hyperparameter.
Xu et al. (2019) also give a useful baseline algorithm that itself has quite high
performance in measuring perplexity: train an RNN language model on the data,
and compute the log likelihood of sentence si in two ways, once given the preceding
context (conditional log likelihood) and once with no context (marginal log likeli-
hood). The difference between these values tells us how much the preceding context
improved the predictability of si , a predictability measure of coherence.
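A sketch of that baseline follows; the log_likelihood function below is a placeholder standing in for a trained language model's scoring function, not a real library API.

```python
def coherence_gain(sentence, context, log_likelihood):
    """How much the preceding context improves the predictability of a sentence:
    conditional log likelihood minus marginal (context-free) log likelihood."""
    return log_likelihood(sentence, context) - log_likelihood(sentence, None)

# placeholder scores standing in for a trained RNN language model
def toy_log_likelihood(sentence, context):
    return -12.0 if context else -20.5

print(coherence_gain("She had to attend a conference.",
                     "Jane took a train from Paris to Istanbul.",
                     toy_log_likelihood))   # positive means the context helped
```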
Training models to predict longer contexts than just consecutive pairs of sen-
tences can result in even stronger discourse representations. For example a Trans-
former language model trained with a contrastive sentence objective to predict text
up to a distance of ±2 sentences improves performance on various discourse coher-
ence tasks (Iter et al., 2020).
Language-model style models are generally evaluated by the methods of Sec-
tion 24.3.3, although they can also be evaluated on the RST and PDTB coherence
relation tasks.
Figure 24.12 Argumentation structure of a persuasive essay. Arrows indicate argumentation relations, ei-
ther of SUPPORT (with arrowheads) or ATTACK (with circleheads); P denotes premises. Figure from Stab and
Gurevych (2017).
annotation scheme for modeling these rhetorical goals is the argumentative zon-
ing model of Teufel et al. (1999) and Teufel et al. (2009), which is informed by the
idea that each scientific paper tries to make a knowledge claim about a new piece
of knowledge being added to the repository of the field (Myers, 1992). Sentences
in a scientific paper can be assigned one of 15 tags; Fig. 24.13 shows 7 (shortened)
examples of labeled sentences.
Teufel et al. (1999) and Teufel et al. (2009) develop labeled corpora of scientific
articles from computational linguistics and chemistry, which can be used as supervi-
sion for training a standard sentence-classification architecture to assign the 15 labels.
24.6 Summary
In this chapter we introduced local and global models for discourse coherence.
• Discourses are not arbitrary collections of sentences; they must be coherent.
Among the factors that make a discourse coherent are coherence relations
between the sentences, entity-based coherence, and topical coherence.
• Various sets of coherence relations and rhetorical relations have been pro-
posed. The relations in Rhetorical Structure Theory (RST) hold between
spans of text and are structured into a tree. Because of this, shift-reduce
and other parsing algorithms are generally used to assign these structures.
The Penn Discourse Treebank (PDTB) labels only relations between pairs of
spans, and the labels are generally assigned by sequence models.
• Entity-based coherence captures the intuition that discourses are about an
entity, and continue mentioning the entity from sentence to sentence. Cen-
tering Theory is a family of models describing how salience is modeled for
discourse entities, and hence how coherence is achieved by virtue of keeping
the same discourse entities salient over the discourse. The entity grid model
gives a more bottom-up way to compute which entity realization transitions
lead to coherence.
Exercises
24.1 Finish the Centering Theory processing of the last two utterances of (24.30),
and show how (24.29) would be processed. Does the algorithm indeed mark
(24.29) as less coherent?
24.2 Select an editorial column from your favorite newspaper, and determine the
discourse structure for a 10–20 sentence portion. What problems did you
encounter? Were you helped by superficial cues the speaker included (e.g.,
discourse connectives) in any places?
Bibliography
Abadi, M., A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
Abney, S. P., R. E. Schapire, and Y. Singer. 1999. Boosting applied to tagging and PP attachment. EMNLP/VLC.
Agarwal, O., S. Subramanian, A. Nenkova, and D. Roth. 2019. Evaluation of named entity coreference. Workshop on Computational Models of Reference, Anaphora and Coreference.
Aggarwal, C. C. and C. Zhai. 2012. A survey of text classification algorithms. In C. C. Aggarwal and C. Zhai, eds, Mining text data, 163–222. Springer.
Agichtein, E. and L. Gravano. 2000. Snowball: Extracting relations from large plain-text collections. Proceedings of the 5th ACM International Conference on Digital Libraries.
Agirre, E., C. Banea, C. Cardie, D. Cer, M. Diab, A. Gonzalez-Agirre, W. Guo, I. Lopez-Gazpio, M. Maritxalar, R. Mihalcea, G. Rigau, L. Uria, and J. Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. SemEval-15.
Agirre, E., M. Diab, D. Cer, and A. Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. SemEval-12.
Agirre, E. and D. Martinez. 2001. Learning class-to-class selectional preferences. CoNLL.
Aho, A. V. and J. D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling, volume 1. Prentice Hall.
Algoet, P. H. and T. M. Cover. 1988. A sandwich proof of the Shannon-McMillan-Breiman theorem. The Annals of Probability, 16(2):899–909.
Allen, J. 1984. Towards a general theory of action and time. Artificial Intelligence, 23(2):123–154.
Allen, J. and C. R. Perrault. 1980. Analyzing intention in utterances. Artificial Intelligence, 15:143–178.
Allen, J., M. S. Hunnicut, and D. H. Klatt. 1987. From Text to Speech: The MITalk system. Cambridge University Press.
Althoff, T., C. Danescu-Niculescu-Mizil, and D. Jurafsky. 2014. How to ask for a favor: A case study on the success of altruistic requests. ICWSM 2014.
An, J., H. Kwak, and Y.-Y. Ahn. 2018. SemAxis: A lightweight framework to characterize domain-specific word semantics beyond sentiment. ACL.
Anastasopoulos, A. and G. Neubig. 2020. Should all cross-lingual embeddings speak English? ACL.
Antoniak, M. and D. Mimno. 2018. Evaluating the stability of embedding-based word similarities. TACL, 6:107–119.
Aone, C. and S. W. Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. ACL.
Ariel, M. 2001. Accessibility theory: An overview. In T. Sanders, J. Schilperoord, and W. Spooren, eds, Text Representation: Linguistic and Psycholinguistic Aspects, 29–87. Benjamins.
Arora, S., P. Lewis, A. Fan, J. Kahn, and C. Ré. 2023. Reasoning over public and private data in retrieval-based systems. TACL, 11:902–921.
Artetxe, M. and H. Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. TACL, 7:597–610.
Artstein, R., S. Gandhe, J. Gerten, A. Leuski, and D. Traum. 2009. Semi-formal evaluation of conversational characters. In Languages: From Formal to Natural, 22–35. Springer.
Asher, N. 1993. Reference to Abstract Objects in Discourse. Studies in Linguistics and Philosophy (SLAP) 50, Kluwer.
Asher, N. and A. Lascarides. 2003. Logics of Conversation. Cambridge University Press.
Atal, B. S. and S. Hanauer. 1971. Speech analysis and synthesis by prediction of the speech wave. JASA, 50:637–655.
Austin, J. L. 1962. How to Do Things with Words. Harvard University Press.
Awadallah, A. H., R. G. Kulkarni, U. Ozertem, and R. Jones. 2015. Characterizing and predicting voice query reformulation. CIKM-15.
Ba, J. L., J. R. Kiros, and G. E. Hinton. 2016. Layer normalization. NeurIPS workshop.
Baayen, R. H. 2001. Word frequency distributions. Springer.
Baccianella, S., A. Esuli, and F. Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. LREC.
Bach, K. and R. Harnish. 1979. Linguistic communication and speech acts. MIT Press.
Backus, J. W. 1959. The syntax and semantics of the proposed international algebraic language of the Zurich ACM-GAMM Conference. Information Processing: Proceedings of the International Conference on Information Processing, Paris. UNESCO.
Backus, J. W. 1996. Transcript of question and answer session. In R. L. Wexelblat, ed., History of Programming Languages, page 162. Academic Press.
Bada, M., M. Eckert, D. Evans, K. Garcia, K. Shipley, D. Sitnikov, W. A. Baumgartner, K. B. Cohen, K. Verspoor, J. A. Blake, and L. E. Hunter. 2012. Concept annotation in the craft corpus. BMC bioinformatics, 13(1):161.
Bagga, A. and B. Baldwin. 1998. Algorithms for scoring coreference chains. LREC Workshop on Linguistic Coreference.
Bahdanau, D., K. H. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR 2015.
Bahdanau, D., J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio. 2016. End-to-end attention-based large vocabulary speech recognition. ICASSP.
Bahl, L. R. and R. L. Mercer. 1976. Part of speech assignment by a statistical decision algorithm. Proceedings IEEE International Symposium on Information Theory.
Bahl, L. R., F. Jelinek, and R. L. Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(2):179–190.
Bajaj, P., D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen, M. Rosenberg, X. Song, A. Stoica, S. Tiwary, and T. Wang. 2016. MS MARCO: A human generated MAchine Reading COmprehension dataset. NeurIPS.
Baker, C. F., C. J. Fillmore, and J. B. Lowe. 1998. The Berkeley FrameNet project. COLING/ACL.
Baker, J. K. 1975a. The DRAGON system – An overview. IEEE Transactions on ASSP, ASSP-23(1):24–29.
Baker, J. K. 1975b. Stochastic modeling for automatic speech understanding. In D. R. Reddy, ed., Speech Recognition. Academic Press.
Baldridge, J., N. Asher, and J. Hunter. 2007. Annotation for and robust parsing of discourse structure on unrestricted texts. Zeitschrift für Sprachwissenschaft, 26:213–239.
Bamman, D., O. Lewke, and A. Mansoor. 2020. An annotated dataset of coreference in English literature. LREC.
Bamman, D., B. O'Connor, and N. A. Smith. 2013. Learning latent personas of film characters. ACL.
Bamman, D., S. Popat, and S. Shen. 2019. An annotated dataset of literary entities. NAACL HLT.
Banerjee, S. and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization.
Banko, M., M. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction for the web. IJCAI.
Bañón, M., P. Chen, B. Haddow, K. Heafield, H. Hoang, M. Esplà-Gomis, M. L. Forcada, A. Kamran, F. Kirefu, P. Koehn, S. Ortiz Rojas, L. Pla Sempere, G. Ramírez-Sánchez, E. Sarrías, M. Strelec, B. Thompson, W. Waites, D. Wiggins, and J. Zaragoza. 2020. ParaCrawl: Web-scale acquisition of parallel corpora. ACL.
Bar-Hillel, Y. 1960. The present status of automatic translation of languages. In F. Alt, ed., Advances in Computers 1, 91–163. Academic Press.
Barker, C. 2010. Nominals don't provide criteria of identity. In M. Rathert and A. Alexiadou, eds, The Semantics of Nominalizations across Languages and Frameworks, 9–24. Mouton.
Barrett, L. F., B. Mesquita, K. N. Ochsner, and J. J. Gross. 2007. The experience of emotion. Annual Review of Psychology, 58:373–403.
Barzilay, R. and M. Lapata. 2005. Modeling local coherence: An entity-based approach. ACL.
Barzilay, R. and M. Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34.
Barzilay, R. and L. Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. HLT-NAACL.
Baum, L. E. and J. A. Eagon. 1967. An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology. Bulletin of the American Mathematical Society, 73(3):360–363.
Baum, L. E. and T. Petrie. 1966. Statistical inference for probabilistic functions of finite-state Markov chains. Annals of Mathematical Statistics, 37(6):1554–1563.
Baum, L. F. 1900. The Wizard of Oz. Available at Project Gutenberg.
Bayes, T. 1763. An Essay Toward Solving a Problem in the Doctrine of Chances, volume 53. Reprinted in Facsimiles of Two Papers by Bayes, Hafner Publishing, 1963.
Bazell, C. E. 1952/1966. The correspondence fallacy in structural linguistics. In E. P. Hamp, F. W. Householder, and R. Austerlitz, eds, Studies by Members of the English Department, Istanbul University (3), reprinted in Readings in Linguistics II (1966), 271–298. University of Chicago Press.
Bean, D. and E. Riloff. 1999. Corpus-based identification of non-anaphoric noun phrases. ACL.
Bean, D. and E. Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. HLT-NAACL.
Bedi, G., F. Carrillo, G. A. Cecchi, D. F. Slezak, M. Sigman, N. B. Mota, S. Ribeiro, D. C. Javitt, M. Copelli, and C. M. Corcoran. 2015. Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia, 1.
Bejček, E., E. Hajičová, J. Hajič, P. Jínová, V. Kettnerová, V. Kolářová, M. Mikulová, J. Mírovský, A. Nedoluzhko, J. Panevová, L. Poláková, M. Ševčíková, J. Štěpánek, and Š. Zikánová. 2013. Prague dependency treebank 3.0. Technical report, Institute of Formal and Applied Linguistics, Charles University in Prague. LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague.
Bellegarda, J. R. 1997. A latent semantic analysis framework for large-span language modeling. EUROSPEECH.
Bellegarda, J. R. 2000. Exploiting latent semantic information in statistical language modeling. Proceedings of the IEEE, 89(8):1279–1296.
Bellegarda, J. R. 2013. Natural language technology in mobile devices: Two grounding frameworks. In Mobile Speech and Advanced Natural Language Solutions, 185–196. Springer.
Bellman, R. 1957. Dynamic Programming. Princeton University Press.
Bellman, R. 1984. Eye of the Hurricane: an autobiography. World Scientific Singapore.
Bender, E. M. 2019. The #BenderRule: On naming the languages we study and why it matters. Blog post.
Bender, E. M., B. Friedman, and A. McMillan-Major. 2021. A guide for writing data statements for natural language processing. http://techpolicylab.uw.edu/data-statements/.
Bender, E. M. and A. Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. ACL.
Bengio, Y., A. Courville, and P. Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828.
Bengio, Y., R. Ducharme, and P. Vincent. 2000. A neural probabilistic language model. NeurIPS.
Bengio, Y., R. Ducharme, P. Vincent, and C. Jauvin. 2003. A neural probabilistic language model. JMLR, 3:1137–1155.
Bengio, Y., P. Lamblin, D. Popovici, and H. Larochelle. 2007. Greedy layer-wise training of deep networks. NeurIPS.
Bengio, Y., H. Schwenk, J.-S. Senécal, F. Morin, and J.-L. Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, 137–186. Springer.
Bengtson, E. and D. Roth. 2008. Understanding the value of features for coreference resolution. EMNLP.
Bentivogli, L., M. Cettolo, M. Federico, and C. Federmann. 2018. Machine translation human evaluation: an investigation of evaluation based on post-editing and its relation with direct assessment. ICSLT.
Berant, J., A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on freebase from question-answer pairs. EMNLP.
Berg-Kirkpatrick, T., D. Burkett, and D. Klein. 2012. An empirical investigation of statistical significance in NLP. EMNLP.
Berger, A., S. A. Della Pietra, and V. J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71.
Bergsma, S. and D. Lin. 2006. Bootstrapping path-based pronoun resolution. COLING/ACL.
Bergsma, S., D. Lin, and R. Goebel. 2008a. Discriminative learning of selectional preference from unlabeled text. EMNLP.
Bergsma, S., D. Lin, and R. Goebel. 2008b. Distributional identification of non-referential pronouns. ACL.
Bethard, S. 2013. ClearTK-TimeML: A minimalist approach to TempEval 2013. SemEval-13.
Bhat, I., R. A. Bhat, M. Shrivastava, and D. Sharma. 2017. Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data. EACL.
Bianchi, F., M. Suzgun, G. Attanasio, P. Rottger, D. Jurafsky, T. Hashimoto, and J. Zou. 2024. Safety-tuned LLaMAs: Lessons from improving the safety of large language models that follow instructions. ICLR.
Bickel, B. 2003. Referential density in discourse and syntactic typology. Language, 79(2):708–736.
Bickmore, T. W., H. Trinh, S. Olafsson, T. K. O'Leary, R. Asadi, N. M. Rickles, and R. Cruz. 2018. Patient and consumer safety risks when using conversational assistants for medical information: An observational study of Siri, Alexa, and Google Assistant. Journal of Medical Internet Research, 20(9):e11510.
Bikel, D. M., S. Miller, R. Schwartz, and R. Weischedel. 1997. Nymble: A high-performance learning name-finder. ANLP.
Biran, O. and K. McKeown. 2015. PDTB discourse parsing as a tagging task: The two taggers approach. SIGDIAL.
Bird, S., E. Klein, and E. Loper. 2009. Natural Language Processing with Python. O'Reilly.
Bisani, M. and H. Ney. 2004. Bootstrap estimates for confidence intervals in ASR performance evaluation. ICASSP.
Bishop, C. M. 2006. Pattern recognition and machine learning. Springer.
Bisk, Y., A. Holtzman, J. Thomason, J. Andreas, Y. Bengio, J. Chai, M. Lapata, A. Lazaridou, J. May, A. Nisnevich, N. Pinto, and J. Turian. 2020. Experience grounds language. EMNLP.
Bizer, C., J. Lehmann, G. Kobilarov, S. Auer, C. Becker, R. Cyganiak, and S. Hellmann. 2009. DBpedia—A crystallization point for the Web of Data. Web Semantics: science, services and agents on the world wide web, 7(3):154–165.
Björkelund, A. and J. Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. ACL.
Black, A. W. and P. Taylor. 1994. CHATR: A generic speech synthesis system. COLING.
Black, E., S. P. Abney, D. Flickinger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. L. Klavans, M. Y. Liberman, M. P. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. Speech and Natural Language Workshop.
Blei, D. M., A. Y. Ng, and M. I. Jordan. 2003. Latent Dirichlet allocation. JMLR, 3(5):993–1022.
Blodgett, S. L., S. Barocas, H. Daumé III, and H. Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. ACL.
Blodgett, S. L., L. Green, and B. O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. EMNLP.
Blodgett, S. L. and B. O'Connor. 2017. Racial disparity in natural language processing: A case study of social media African-American English. FAT/ML Workshop, KDD.
Bloomfield, L. 1914. An Introduction to the Study of Language. Henry Holt and Company.
Bloomfield, L. 1933. Language. University of Chicago Press.
Bobrow, D. G., R. M. Kaplan, M. Kay, D. A. Norman, H. Thompson, and T. Winograd. 1977. GUS, A frame driven dialog system. Artificial Intelligence, 8:155–173.
Bobrow, D. G. and D. A. Norman. 1975. Some principles of memory schemata. In D. G. Bobrow and A. Collins, eds, Representation and Understanding. Academic Press.
Bojanowski, P., E. Grave, A. Joulin, and T. Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146.
Bollacker, K., C. Evans, P. Paritosh, T. Sturge, and J. Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. SIGMOD 2008.
Bolukbasi, T., K.-W. Chang, J. Zou, V. Saligrama, and A. T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. NeurIPS.
Bommasani, R., D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, E. Brynjolfsson, S. Buch, D. Card, R. Castellon, N. S. Chatterji, A. S. Chen, K. A. Creel, J. Davis, D. Demszky, C. Donahue, M. Doumbouya, E. Durmus, S. Ermon, J. Etchemendy, K. Ethayarajh, L. Fei-Fei, C. Finn, T. Gale, L. E. Gillespie, K. Goel, N. D. Goodman, S. Grossman, N. Guha, T. Hashimoto, P. Henderson, J. Hewitt, D. E. Ho, J. Hong, K. Hsu, J. Huang, T. F. Icard, S. Jain, D. Jurafsky, P. Kalluri, S. Karamcheti, G. Keeling, F. Khani, O. Khattab, P. W. Koh, M. S. Krass, R. Krishna, R. Kuditipudi, A. Kumar, F. Ladhak, M. Lee, T. Lee, J. Leskovec, I. Levent, X. L. Li, X. Li, T. Ma, A. Malik, C. D. Manning, S. P. Mirchandani, E. Mitchell, Z. Munyikwa, S. Nair, A. Narayan, D. Narayanan, B. Newman, A. Nie, J. C. Niebles, H. Nilforoshan, J. F. Nyarko, G. Ogut, L. Orr, I. Papadimitriou, J. S. Park, C. Piech, E. Portelance, C. Potts, A. Raghunathan, R. Reich, H. Ren, F. Rong, Y. H. Roohani, C. Ruiz, J. Ryan, C. Ré, D. Sadigh, S. Sagawa, K. Santhanam, A. Shih, K. P. Srinivasan, A. Tamkin, R. Taori, A. W. Thomas, F. Tramèr, R. E. Wang, W. Wang, B. Wu, J. Wu, Y. Wu, S. M. Xie, M. Yasunaga, J. You, M. A. Zaharia, M. Zhang, T. Zhang, X. Zhang, Y. Zhang, L. Zheng, K. Zhou, and P. Liang. 2021. On the opportunities and risks of foundation models. ArXiv.
Booth, T. L. 1969. Probabilistic representation of formal languages. IEEE Conference Record of the 1969 Tenth Annual Symposium on Switching and Automata Theory.
Borges, J. L. 1964. The analytical language of john wilkins. In Other inquisitions 1937–1952. University of Texas Press. Trans. Ruth L. C. Simms.
Bostrom, K. and G. Durrett. 2020. Byte pair encoding is suboptimal for language model pretraining. EMNLP.
Bourlard, H. and N. Morgan. 1994. Connectionist Speech Recognition: A Hybrid Approach. Kluwer.
556 Bibliography
Brants, T. 2000. TnT: A statistical part- Buchholz, S. and E. Marsi. 2006. Conll- language models. 30th USENIX Se-
of-speech tagger. ANLP. x shared task on multilingual depen- curity Symposium (USENIX Security
Brants, T., A. C. Popat, P. Xu, F. J. dency parsing. CoNLL. 21).
Och, and J. Dean. 2007. Large lan- Budanitsky, A. and G. Hirst. 2006. Carlson, G. N. 1977. Reference to kinds
guage models in machine transla- Evaluating WordNet-based mea- in English. Ph.D. thesis, Univer-
tion. EMNLP/CoNLL. sures of lexical semantic related- sity of Massachusetts, Amherst. For-
Braud, C., M. Coavoux, and ness. Computational Linguistics, ward.
A. Søgaard. 2017. Cross-lingual 32(1):13–47.
Carlson, L. and D. Marcu. 2001. Dis-
RST discourse parsing. EACL. Budzianowski, P., T.-H. Wen, B.- course tagging manual. Technical
Bréal, M. 1897. Essai de Sémantique: H. Tseng, I. Casanueva, S. Ultes, Report ISI-TR-545, ISI.
Science des significations. Hachette. O. Ramadan, and M. Gašić. 2018.
MultiWOZ - a large-scale multi- Carlson, L., D. Marcu, and M. E.
Brennan, S. E., M. W. Friedman, and Okurowski. 2001. Building a
C. Pollard. 1987. A centering ap- domain wizard-of-Oz dataset for
task-oriented dialogue modelling. discourse-tagged corpus in the
proach to pronouns. ACL. framework of rhetorical structure
EMNLP.
Brin, S. 1998. Extracting patterns and theory. SIGDIAL.
relations from the World Wide Web. Bullinaria, J. A. and J. P. Levy. 2007.
Extracting semantic representations Carreras, X. and L. Màrquez. 2005.
Proceedings World Wide Web and Introduction to the CoNLL-2005
Databases International Workshop, from word co-occurrence statistics:
A computational study. Behavior re- shared task: Semantic role labeling.
Number 1590 in LNCS. Springer. CoNLL.
search methods, 39(3):510–526.
Brockmann, C. and M. Lapata. 2003.
Evaluating and combining ap- Bullinaria, J. A. and J. P. Levy. Chafe, W. L. 1976. Givenness, con-
proaches to selectional preference 2012. Extracting semantic repre- trastiveness, definiteness, subjects,
acquisition. EACL. sentations from word co-occurrence topics, and point of view. In C. N. Li,
statistics: stop-lists, stemming, and ed., Subject and Topic, 25–55. Aca-
Broschart, J. 1997. Why Tongan does demic Press.
it differently. Linguistic Typology, SVD. Behavior research methods,
1:123–165. 44(3):890–907. Chambers, N. 2013. NavyTime: Event
Bulyko, I., K. Kirchhoff, M. Osten- and time ordering from raw text.
Brown, P. F., J. Cocke, S. A.
dorf, and J. Goldberg. 2005. Error- SemEval-13.
Della Pietra, V. J. Della Pietra, F. Je-
linek, J. D. Lafferty, R. L. Mercer, sensitive response generation in a Chambers, N., T. Cassidy, B. McDow-
and P. S. Roossin. 1990. A statis- spoken language dialogue system. ell, and S. Bethard. 2014. Dense
tical approach to machine transla- Speech Communication, 45(3):271– event ordering with a multi-pass ar-
tion. Computational Linguistics, 288. chitecture. TACL, 2:273–284.
16(2):79–85. Caliskan, A., J. J. Bryson, and Chambers, N. and D. Jurafsky. 2010.
Brown, P. F., S. A. Della Pietra, V. J. A. Narayanan. 2017. Semantics de- Improving the use of pseudo-words
Della Pietra, and R. L. Mercer. 1993. rived automatically from language for evaluating selectional prefer-
The mathematics of statistical ma- corpora contain human-like biases. ences. ACL.
chine translation: Parameter esti- Science, 356(6334):183–186.
Chambers, N. and D. Jurafsky. 2011.
mation. Computational Linguistics, Callison-Burch, C., M. Osborne, and Template-based information extrac-
19(2):263–311. P. Koehn. 2006. Re-evaluating the tion without the templates. ACL.
Brown, T., B. Mann, N. Ryder, role of BLEU in machine translation
M. Subbiah, J. Kaplan, P. Dhari- research. EACL. Chan, W., N. Jaitly, Q. Le, and
wal, A. Neelakantan, P. Shyam, Canavan, A., D. Graff, and G. Zip- O. Vinyals. 2016. Listen, at-
G. Sastry, A. Askell, S. Agar- perlen. 1997. CALLHOME Ameri- tend and spell: A neural network
wal, A. Herbert-Voss, G. Krueger, can English speech LDC97S42. Lin- for large vocabulary conversational
T. Henighan, R. Child, A. Ramesh, guistic Data Consortium. speech recognition. ICASSP.
D. M. Ziegler, J. Wu, C. Win- Carbonell, J. R. 1970. AI in Chandioux, J. 1976. M ÉT ÉO: un
ter, C. Hesse, M. Chen, E. Sigler, CAI: An artificial-intelligence ap- système opérationnel pour la tra-
M. Litwin, S. Gray, B. Chess, proach to computer-assisted instruc- duction automatique des bulletins
J. Clark, C. Berner, S. McCan- tion. IEEE transactions on man- météorologiques destinés au grand
dlish, A. Radford, I. Sutskever, and machine systems, 11(4):190–202. public. Meta, 21:127–133.
D. Amodei. 2020. Language mod-
Cardie, C. 1993. A case-based approach Chang, A. X. and C. D. Manning. 2012.
els are few-shot learners. NeurIPS,
to knowledge acquisition for domain SUTime: A library for recognizing
volume 33.
specific sentence analysis. AAAI. and normalizing time expressions.
Bruce, B. C. 1975. Generation as a so- LREC.
cial action. Proceedings of TINLAP- Cardie, C. 1994. Domain-Specific
1 (Theoretical Issues in Natural Knowledge Acquisition for Concep- Chang, K.-W., R. Samdani, and
Language Processing). tual Sentence Analysis. Ph.D. the- D. Roth. 2013. A constrained la-
sis, University of Massachusetts, tent variable model for coreference
Brysbaert, M., A. B. Warriner, and resolution. EMNLP.
V. Kuperman. 2014. Concrete- Amherst, MA. Available as CMP-
ness ratings for 40 thousand gen- SCI Technical Report 94-74. Chang, K.-W., R. Samdani, A. Ro-
erally known English word lem- Cardie, C. and K. Wagstaff. 1999. zovskaya, M. Sammons, and
mas. Behavior Research Methods, Noun phrase coreference as cluster- D. Roth. 2012. Illinois-Coref:
46(3):904–911. ing. EMNLP/VLC. The UI system in the CoNLL-2012
Bu, H., J. Du, X. Na, B. Wu, and Carlini, N., F. Tramer, E. Wal- shared task. CoNLL.
H. Zheng. 2017. AISHELL-1: An lace, M. Jagielski, A. Herbert-Voss, Chaplot, D. S. and R. Salakhutdinov.
open-source Mandarin speech cor- K. Lee, A. Roberts, T. Brown, 2018. Knowledge-based word sense
pus and a speech recognition base- D. Song, U. Erlingsson, et al. 2021. disambiguation using topic models.
line. O-COCOSDA Proceedings. Extracting training data from large AAAI.
Bibliography 557
Charniak, E. 1997. Statistical pars- Cho, K., B. van Merriënboer, C. Gul- Machine Learning to Discourse Pro-
ing with a context-free grammar and cehre, D. Bahdanau, F. Bougares, cessing. Papers from the 1998 AAAI
word statistics. AAAI. H. Schwenk, and Y. Bengio. 2014. Spring Symposium. Tech. rep. SS-
Charniak, E., C. Hendrickson, N. Ja- Learning phrase representations us- 98-01. AAAI Press.
cobson, and M. Perkowitz. 1993. ing RNN encoder–decoder for statis- Chu-Carroll, J. and S. Carberry. 1998.
Equations for part-of-speech tag- tical machine translation. EMNLP. Collaborative response generation in
ging. AAAI. Choe, D. K. and E. Charniak. 2016. planning dialogues. Computational
Che, W., Z. Li, Y. Li, Y. Guo, B. Qin, Parsing as language modeling. Linguistics, 24(3):355–400.
and T. Liu. 2009. Multilingual EMNLP. Church, K. W. 1988. A stochastic parts
dependency-based syntactic and se- Choi, J. D. and M. Palmer. 2011a. Get- program and noun phrase parser for
mantic parsing. CoNLL. ting the most out of transition-based unrestricted text. ANLP.
Chen, C. and V. Ng. 2013. Linguis- dependency parsing. ACL. Church, K. W. 1989. A stochastic parts
tically aware coreference evaluation Choi, J. D. and M. Palmer. 2011b. program and noun phrase parser for
metrics. IJCNLP. Transition-based semantic role la- unrestricted text. ICASSP.
Chen, D., A. Fisch, J. Weston, and beling using predicate argument Church, K. W. 1994. Unix for Poets.
A. Bordes. 2017a. Reading Wiki- clustering. Proceedings of the ACL Slides from 2nd ELSNET Summer
pedia to answer open-domain ques- 2011 Workshop on Relational Mod- School and unpublished paper ms.
tions. ACL. els of Semantics. Church, K. W. and W. A. Gale. 1991. A
Chen, D. and C. Manning. 2014. A fast Choi, J. D., J. Tetreault, and A. Stent. comparison of the enhanced Good-
and accurate dependency parser us- 2015. It depends: Dependency Turing and deleted estimation meth-
ing neural networks. EMNLP. parser comparison using a web- ods for estimating probabilities of
Chen, E., B. Snyder, and R. Barzi- based evaluation tool. ACL. English bigrams. Computer Speech
lay. 2007. Incremental text structur- Chomsky, N. 1956. Three models for and Language, 5:19–54.
ing with online hierarchical ranking. the description of language. IRE Church, K. W. and P. Hanks. 1989.
EMNLP/CoNLL. Transactions on Information The- Word association norms, mutual in-
ory, 2(3):113–124. formation, and lexicography. ACL.
Chen, S. F. and J. Goodman. 1999.
An empirical study of smoothing Chomsky, N. 1956/1975. The Logi- Church, K. W. and P. Hanks. 1990.
techniques for language modeling. cal Structure of Linguistic Theory. Word association norms, mutual in-
Computer Speech and Language, Plenum. formation, and lexicography. Com-
13:359–394. putational Linguistics, 16(1):22–29.
Chomsky, N. 1957. Syntactic Struc-
Chen, X., Z. Shi, X. Qiu, and X. Huang. tures. Mouton. Cialdini, R. B. 1984. Influence: The
2017b. Adversarial multi-criteria psychology of persuasion. Morrow.
Chomsky, N. 1963. Formal proper-
learning for Chinese word segmen- Cieri, C., D. Miller, and K. Walker.
ties of grammars. In R. D. Luce,
tation. ACL. 2004. The Fisher corpus: A resource
R. Bush, and E. Galanter, eds, Hand-
Cheng, J., L. Dong, and M. La- book of Mathematical Psychology, for the next generations of speech-
pata. 2016. Long short-term volume 2, 323–418. Wiley. to-text. LREC.
memory-networks for machine read- Clark, E. 1987. The principle of con-
ing. EMNLP. Chomsky, N. 1981. Lectures on Gov-
trast: A constraint on language ac-
ernment and Binding. Foris.
Cheng, M., E. Durmus, and D. Juraf- quisition. In B. MacWhinney, ed.,
sky. 2023. Marked personas: Using Chorowski, J., D. Bahdanau, K. Cho, Mechanisms of language acquisi-
natural language prompts to mea- and Y. Bengio. 2014. End-to-end tion, 1–33. LEA.
sure stereotypes in language models. continuous speech recognition using Clark, H. H. 1996. Using Language.
ACL. attention-based recurrent NN: First Cambridge University Press.
results. NeurIPS Deep Learning and
Chiang, D. 2005. A hierarchical phrase- Representation Learning Workshop. Clark, H. H. and J. E. Fox Tree. 2002.
based model for statistical machine Using uh and um in spontaneous
translation. ACL. Chou, W., C.-H. Lee, and B. H. Juang. speaking. Cognition, 84:73–111.
1993. Minimum error rate train-
Chinchor, N., L. Hirschman, and D. L. ing based on n-best string models. Clark, H. H. and C. Marshall. 1981.
Lewis. 1993. Evaluating Message ICASSP. Definite reference and mutual
Understanding systems: An analy- knowledge. In A. K. Joshi, B. L.
sis of the third Message Understand- Christiano, P. F., J. Leike, T. Brown, Webber, and I. A. Sag, eds, Ele-
ing Conference. Computational Lin- M. Martic, S. Legg, and D. Amodei. ments of Discourse Understanding,
guistics, 19(3):409–449. 2017. Deep reinforcement learning 10–63. Cambridge.
from human preferences. NeurIPS,
Chiticariu, L., M. Danilevsky, Y. Li, volume 30. Clark, H. H. and D. Wilkes-Gibbs.
F. Reiss, and H. Zhu. 2018. Sys- 1986. Referring as a collaborative
temT: Declarative text understand- Christodoulopoulos, C., S. Goldwa- process. Cognition, 22:1–39.
ing for enterprise. NAACL HLT, vol- ter, and M. Steedman. 2010. Two
Clark, J. H., E. Choi, M. Collins,
ume 3. decades of unsupervised POS in-
D. Garrette, T. Kwiatkowski,
duction: How far have we come?
Chiticariu, L., Y. Li, and F. R. Reiss. V. Nikolaev, and J. Palomaki.
EMNLP.
2013. Rule-Based Information Ex- 2020a. TyDi QA: A benchmark
traction is Dead! Long Live Rule- Chu, Y.-J. and T.-H. Liu. 1965. On the for information-seeking question
Based Information Extraction Sys- shortest arborescence of a directed answering in typologically diverse
tems! EMNLP. graph. Science Sinica, 14:1396– languages. TACL, 8:454–470.
Chiu, J. P. C. and E. Nichols. 2016. 1400. Clark, K., M.-T. Luong, Q. V. Le, and
Named entity recognition with bidi- Chu-Carroll, J. 1998. A statistical C. D. Manning. 2020b. Electra: Pre-
rectional LSTM-CNNs. TACL, model for discourse act recognition training text encoders as discrimina-
4:357–370. in dialogue interactions. Applying tors rather than generators. ICLR.
558 Bibliography
Clark, K. and C. D. Manning. 2015. US census. Speech Communication, Cover, T. M. and J. A. Thomas. 1991.
Entity-centric coreference resolution 23:243–260. Elements of Information Theory.
with model stacking. ACL. Wiley.
Collins, M. 1999. Head-Driven Statis-
Clark, K. and C. D. Manning. 2016a. tical Models for Natural Language Covington, M. 2001. A fundamen-
Deep reinforcement learning for Parsing. Ph.D. thesis, University of tal algorithm for dependency pars-
mention-ranking coreference mod- Pennsylvania, Philadelphia. ing. Proceedings of the 39th Annual
els. EMNLP. ACM Southeast Conference.
Collobert, R. and J. Weston. 2007. Fast
Clark, K. and C. D. Manning. 2016b. semantic extraction using a novel Cox, D. 1969. Analysis of Binary Data.
Improving coreference resolution by neural network architecture. ACL. Chapman and Hall, London.
learning entity-level distributed rep- Craven, M. and J. Kumlien. 1999.
resentations. ACL. Collobert, R. and J. Weston. 2008. Constructing biological knowledge
A unified architecture for natural bases by extracting information
Clark, S., J. R. Curran, and M. Osborne. language processing: Deep neural
2003. Bootstrapping POS-taggers from text sources. ISMB-99.
networks with multitask learning.
using unlabelled data. CoNLL. ICML. Crawford, K. 2017. The trouble with
bias. Keynote at NeurIPS.
Cobbe, K., V. Kosaraju, M. Bavar- Collobert, R., J. Weston, L. Bottou,
ian, M. Chen, H. Jun, L. Kaiser, Croft, W. 1990. Typology and Univer-
M. Karlen, K. Kavukcuoglu, and sals. Cambridge University Press.
M. Plappert, J. Tworek, J. Hilton, P. Kuksa. 2011. Natural language
R. Nakano, C. Hesse, and J. Schul- processing (almost) from scratch. Crosbie, J. and E. Shutova. 2022. In-
man. 2021. Training verifiers to JMLR, 12:2493–2537. duction heads as an essential mech-
solve math word problems. ArXiv anism for pattern matching in in-
preprint. Comrie, B. 1989. Language Universals context learning. ArXiv preprint.
and Linguistic Typology, 2nd edi-
Coccaro, N. and D. Jurafsky. 1998. To- Cross, J. and L. Huang. 2016. Span-
tion. Blackwell.
wards better integration of seman- based constituency parsing with a
tic predictors in statistical language Conneau, A., K. Khandelwal, structure-label system and provably
modeling. ICSLP. N. Goyal, V. Chaudhary, G. Wen- optimal dynamic oracles. EMNLP.
Coenen, A., E. Reif, A. Yuan, B. Kim, zek, F. Guzmán, E. Grave, M. Ott, Cruse, D. A. 2004. Meaning in Lan-
A. Pearce, F. Viégas, and M. Watten- L. Zettlemoyer, and V. Stoyanov. guage: an Introduction to Semantics
berg. 2019. Visualizing and measur- 2020. Unsupervised cross-lingual and Pragmatics. Oxford University
ing the geometry of bert. NeurIPS. representation learning at scale. Press. Second edition.
ACL.
Cohen, A. D., A. Roberts, A. Molina, Cucerzan, S. 2007. Large-scale
A. Butryna, A. Jin, A. Kulshreshtha, Connolly, D., J. D. Burger, and D. S. named entity disambiguation based
B. Hutchinson, B. Zevenbergen, Day. 1994. A machine learning ap- on Wikipedia data. EMNLP/CoNLL.
B. H. Aguera-Arcas, C. ching proach to anaphoric reference. Pro- Dagan, I., S. Marcus, and
Chang, C. Cui, C. Du, D. D. F. ceedings of the International Con- S. Markovitch. 1993. Contextual
Adiwardana, D. Chen, D. D. Lep- ference on New Methods in Lan- word similarity and estimation from
ikhin, E. H. Chi, E. Hoffman-John, guage Processing (NeMLaP). sparse data. ACL.
H.-T. Cheng, H. Lee, I. Krivokon, Cooley, J. W. and J. W. Tukey. 1965. Dahl, G. E., T. N. Sainath, and G. E.
J. Qin, J. Hall, J. Fenton, J. Soraker, An algorithm for the machine cal- Hinton. 2013. Improving deep
K. Meier-Hellstern, K. Olson, L. M. culation of complex Fourier se- neural networks for LVCSR using
Aroyo, M. P. Bosma, M. J. Pickett, ries. Mathematics of Computation, rectified linear units and dropout.
M. A. Menegali, M. Croak, M. Dı́az, 19(90):297–301. ICASSP.
M. Lamm, M. Krikun, M. R. Mor-
Cooper, F. S., A. M. Liberman, and Dahl, G. E., D. Yu, L. Deng, and
ris, N. Shazeer, Q. V. Le, R. Bern-
J. M. Borst. 1951. The interconver- A. Acero. 2012. Context-dependent
stein, R. Rajakumar, R. Kurzweil,
sion of audible and visible patterns pre-trained deep neural networks
R. Thoppilan, S. Zheng, T. Bos,
as a basis for research in the per- for large-vocabulary speech recog-
T. Duke, T. Doshi, V. Y. Zhao,
ception of speech. Proceedings of nition. IEEE Transactions on au-
V. Prabhakaran, W. Rusch, Y. Li,
the National Academy of Sciences, dio, speech, and language process-
Y. Huang, Y. Zhou, Y. Xu, and
37(5):318–325. ing, 20(1):30–42.
Z. Chen. 2022. Lamda: Lan-
guage models for dialog applica- Cordier, B. 1965. Factor-analysis of Dahl, M., V. Magesh, M. Suzgun, and
tions. ArXiv preprint. correspondences. COLING 1965. D. E. Ho. 2024. Large legal fic-
tions: Profiling legal hallucinations
Cohen, M. H., J. P. Giangola, and Costa-jussà, M. R., J. Cross, O. Çelebi, in large language models. Journal of
J. Balogh. 2004. Voice User Inter- M. Elbayad, K. Heafield, K. Hef- Legal Analysis, 16:64–93.
face Design. Addison-Wesley. fernan, E. Kalbassi, J. Lam,
Dai, A. M. and Q. V. Le. 2015.
Cohen, P. R. and C. R. Perrault. 1979. D. Licht, J. Maillard, A. Sun,
Semi-supervised sequence learning.
Elements of a plan-based theory of S. Wang, G. Wenzek, A. Young-
NeurIPS.
speech acts. Cognitive Science, blood, B. Akula, L. Barrault,
G. M. Gonzalez, P. Hansanti, Danieli, M. and E. Gerbino. 1995. Met-
3(3):177–212.
J. Hoffman, S. Jarrett, K. R. rics for evaluating dialogue strate-
Colby, K. M., S. Weber, and F. D. Hilf. Sadagopan, D. Rowe, S. Spruit, gies in a spoken language system.
1971. Artificial paranoia. Artificial C. Tran, P. Andrews, N. F. Ayan, AAAI Spring Symposium on Empir-
Intelligence, 2(1):1–25. S. Bhosale, S. Edunov, A. Fan, ical Methods in Discourse Interpre-
Cole, R. A., D. G. Novick, P. J. E. Ver- C. Gao, V. Goswami, F. Guzmán, tation and Generation.
meulen, S. Sutton, M. Fanty, L. F. A. P. Koehn, A. Mourachko, C. Ropers, Das, S. R. and M. Y. Chen. 2001. Ya-
Wessels, J. H. de Villiers, J. Schalk- S. Saleem, H. Schwenk, J. Wang, hoo! for Amazon: Sentiment pars-
wyk, B. Hansen, and D. Burnett. and NLLB Team. 2022. No lan- ing from small talk on the web. EFA
1997. Experiments with a spo- guage left behind: Scaling human- 2001 Barcelona Meetings. http://
ken dialogue system for taking the centered machine translation. ArXiv. ssrn.com/abstract=276189.
Bibliography 559
David, Jr., E. E. and O. G. Selfridge. Deng, L., G. Hinton, and B. Kingsbury. Dixon, N. and H. Maxey. 1968. Termi-
1962. Eyes and ears for computers. 2013. New types of deep neural nal analog synthesis of continuous
Proceedings of the IRE (Institute of network learning for speech recog- speech using the diphone method of
Radio Engineers), 50:1093–1101. nition and related applications: An segment assembly. IEEE Transac-
Davidson, T., D. Bhattacharya, and overview. ICASSP. tions on Audio and Electroacoustics,
I. Weber. 2019. Racial bias in hate 16(1):40–50.
Deng, Y. and W. Byrne. 2005. HMM
speech and abusive language detec- word and phrase alignment for sta- Do, Q. N. T., S. Bethard, and M.-F.
tion datasets. Third Workshop on tistical machine translation. HLT- Moens. 2017. Improving implicit
Abusive Language Online. EMNLP. semantic role labeling by predicting
Davies, M. 2012. Expanding hori- semantic frame arguments. IJCNLP.
Denis, P. and J. Baldridge. 2007. Joint
zons in historical linguistics with the determination of anaphoricity and Doddington, G. 2002. Automatic eval-
400-million word Corpus of Histor- coreference resolution using integer uation of machine translation quality
ical American English. Corpora, programming. NAACL-HLT. using n-gram co-occurrence statis-
7(2):121–157. tics. HLT.
Davies, M. 2015. The Wiki- Denis, P. and J. Baldridge. 2008. Spe-
pedia Corpus: 4.6 million arti- cialized models and ranking for Dodge, J., S. Gururangan, D. Card,
cles, 1.9 billion words. Adapted coreference resolution. EMNLP. R. Schwartz, and N. A. Smith. 2019.
from Wikipedia. https://www. Show your work: Improved report-
Denis, P. and J. Baldridge. 2009. Global ing of experimental results. EMNLP.
english-corpora.org/wiki/. joint models for coreference resolu-
Davies, M. 2020. The Corpus tion and named entity classification. Dodge, J., M. Sap, A. Marasović,
of Contemporary American En- Procesamiento del Lenguaje Natu- W. Agnew, G. Ilharco, D. Groen-
glish (COCA): One billion words, ral, 42. eveld, M. Mitchell, and M. Gardner.
1990-2019. https://www. 2021. Documenting large webtext
DeRose, S. J. 1988. Grammatical cat- corpora: A case study on the colos-
english-corpora.org/coca/. egory disambiguation by statistical
Davis, E., L. Morgenstern, and C. L. sal clean crawled corpus. EMNLP.
optimization. Computational Lin-
Ortiz. 2017. The first Winograd guistics, 14:31–39. Dong, L. and M. Lapata. 2016. Lan-
schema challenge at IJCAI-16. AI guage to logical form with neural at-
Magazine, 38(3):97–98. Devlin, J., M.-W. Chang, K. Lee, and tention. ACL.
K. Toutanova. 2019. BERT: Pre-
Davis, K. H., R. Biddulph, and S. Bal- training of deep bidirectional trans- Dorr, B. 1994. Machine translation di-
ashek. 1952. Automatic recognition formers for language understanding. vergences: A formal description and
of spoken digits. JASA, 24(6):637– NAACL HLT. proposed solution. Computational
642. Linguistics, 20(4):597–633.
Davis, S. and P. Mermelstein. 1980. Di Eugenio, B. 1990. Centering theory
and the Italian pronominal system. Dostert, L. 1955. The Georgetown-
Comparison of parametric repre- I.B.M. experiment. In Machine
sentations for monosyllabic word COLING.
Translation of Languages: Fourteen
recognition in continuously spoken Di Eugenio, B. 1996. The discourse Essays, 124–135. MIT Press.
sentences. IEEE Transactions on functions of Italian subjects: A cen-
ASSP, 28(4):357–366. tering approach. COLING. Dowty, D. R. 1979. Word Meaning and
Deerwester, S. C., S. T. Dumais, G. W. Montague Grammar. D. Reidel.
Dias Oliva, T., D. Antonialli, and
Furnas, R. A. Harshman, T. K. A. Gomes. 2021. Fighting hate Dozat, T. and C. D. Manning. 2017.
Landauer, K. E. Lochbaum, and speech, silencing drag queens? arti- Deep biaffine attention for neural de-
L. Streeter. 1988. Computer infor- ficial intelligence in content modera- pendency parsing. ICLR.
mation retrieval using latent seman- tion and risks to lgbtq voices online. Dozat, T. and C. D. Manning. 2018.
tic structure: US Patent 4,839,853. Sexuality & Culture, 25:700–732. Simpler but more accurate semantic
Deerwester, S. C., S. T. Dumais, T. K. dependency parsing. ACL.
Landauer, G. W. Furnas, and R. A. Dinan, E., G. Abercrombie, A. S.
Harshman. 1990. Indexing by la- Bergman, S. Spruit, D. Hovy, Y.-L. Dozat, T., P. Qi, and C. D. Manning.
tent semantics analysis. JASIS, Boureau, and V. Rieser. 2021. Antic- 2017. Stanford’s graph-based neu-
41(6):391–407. ipating safety issues in e2e conver- ral dependency parser at the CoNLL
sational ai: Framework and tooling. 2017 shared task. Proceedings of the
Deibel, D. and R. Evanhoe. 2021. Con- ArXiv. CoNLL 2017 Shared Task: Multilin-
versations with Things: UX Design gual Parsing from Raw Text to Uni-
for Chat and Voice. Rosenfeld. Dinan, E., A. Fan, A. Williams, J. Ur-
banek, D. Kiela, and J. Weston. versal Dependencies.
DeJong, G. F. 1982. An overview of the
FRUMP system. In W. G. Lehnert 2020. Queens are powerful too: Mit- Dror, R., G. Baumer, M. Bogomolov,
and M. H. Ringle, eds, Strategies for igating gender bias in dialogue gen- and R. Reichart. 2017. Replicabil-
Natural Language Processing, 149– eration. EMNLP. ity analysis for natural language pro-
176. LEA. Ditman, T. and G. R. Kuperberg. cessing: Testing significance with
2010. Building coherence: A frame- multiple datasets. TACL, 5:471–
Demberg, V. 2006. Letter-to-phoneme –486.
conversion for a German text-to- work for exploring the breakdown
speech system. Diplomarbeit Nr. 47, of links across clause boundaries in Dror, R., L. Peled-Cohen, S. Shlomov,
Universität Stuttgart. schizophrenia. Journal of neurolin- and R. Reichart. 2020. Statisti-
guistics, 23(3):254–269. cal Significance Testing for Natural
Denes, P. 1959. The design and oper-
ation of the mechanical speech rec- Dixon, L., J. Li, J. Sorensen, N. Thain, Language Processing, volume 45 of
ognizer at University College Lon- and L. Vasserman. 2018. Measuring Synthesis Lectures on Human Lan-
don. Journal of the British Institu- and mitigating unintended bias in guage Technologies. Morgan &
tion of Radio Engineers, 19(4):219– text classification. 2018 AAAI/ACM Claypool.
234. Appears together with compan- Conference on AI, Ethics, and Soci- Dryer, M. S. and M. Haspelmath, eds.
ion paper (Fry 1959). ety. 2013. The World Atlas of Language
560 Bibliography
Structures Online. Max Planck In- Elsner, M. and E. Charniak. 2008. V. Chaudhary, N. Goyal, T. Birch,
stitute for Evolutionary Anthropol- Coreference-inspired coherence V. Liptchinsky, S. Edunov, M. Auli,
ogy, Leipzig. Available online at modeling. ACL. and A. Joulin. 2021. Beyond
http://wals.info. Elsner, M. and E. Charniak. 2011. Ex- english-centric multilingual ma-
Du Bois, J. W., W. L. Chafe, C. Meyer, tending the entity grid with entity- chine translation. JMLR, 22(107):1–
S. A. Thompson, R. Englebretson, specific features. ACL. 48.
and N. Martey. 2005. Santa Barbara Elvevåg, B., P. W. Foltz, D. R. Fano, R. M. 1961. Transmission of In-
corpus of spoken American English, Weinberger, and T. E. Goldberg. formation: A Statistical Theory of
Parts 1-4. Philadelphia: Linguistic 2007. Quantifying incoherence in Communications. MIT Press.
Data Consortium. speech: an automated methodology Fant, G. M. 1951. Speech communica-
Durrett, G. and D. Klein. 2013. Easy and novel application to schizophre- tion research. Ing. Vetenskaps Akad.
victories and uphill battles in coref- nia. Schizophrenia research, 93(1- Stockholm, Sweden, 24:331–337.
erence resolution. EMNLP. 3):304–316. Fant, G. M. 1986. Glottal flow: Models
Durrett, G. and D. Klein. 2014. A joint Emami, A. and F. Jelinek. 2005. A neu- and interaction. Journal of Phonet-
model for entity analysis: Corefer- ral syntactic language model. Ma- ics, 14:393–399.
ence, typing, and linking. TACL, chine learning, 60(1):195–227.
2:477–490. Fast, E., B. Chen, and M. S. Bernstein.
Emami, A., P. Trichelair, A. Trischler, 2016. Empath: Understanding Topic
Earley, J. 1968. An Efficient Context- K. Suleman, H. Schulz, and J. C. K. Signals in Large-Scale Text. CHI.
Free Parsing Algorithm. Ph.D. Cheung. 2019. The KNOWREF
thesis, Carnegie Mellon University, Fauconnier, G. and M. Turner. 2008.
coreference corpus: Removing gen- The way we think: Conceptual
Pittsburgh, PA. der and number cues for diffi-
Earley, J. 1970. An efficient context- blending and the mind’s hidden
cult pronominal anaphora resolu- complexities. Basic Books.
free parsing algorithm. CACM, tion. ACL.
6(8):451–455. Feldman, J. A. and D. H. Ballard.
Erk, K. 2007. A simple, similarity-
Ebden, P. and R. Sproat. 2015. The 1982. Connectionist models and
based model for selectional prefer-
Kestrel TTS text normalization sys- their properties. Cognitive Science,
ences. ACL.
tem. Natural Language Engineer- 6:205–254.
ing, 21(3):333. van Esch, D. and R. Sproat. 2018.
Fellbaum, C., ed. 1998. WordNet: An
An expanded taxonomy of semiotic
Edmonds, J. 1967. Optimum branch- Electronic Lexical Database. MIT
classes for text normalization. IN-
ings. Journal of Research of the Press.
TERSPEECH.
National Bureau of Standards B, Feng, V. W. and G. Hirst. 2011. Classi-
71(4):233–240. Ethayarajh, K. 2019. How contextual
fying arguments by scheme. ACL.
are contextualized word representa-
Edunov, S., M. Ott, M. Auli, and tions? Comparing the geometry of Feng, V. W. and G. Hirst. 2014.
D. Grangier. 2018. Understanding BERT, ELMo, and GPT-2 embed- A linear-time bottom-up discourse
back-translation at scale. EMNLP. dings. EMNLP. parser with constraints and post-
Efron, B. and R. J. Tibshirani. 1993. An editing. ACL.
Ethayarajh, K., D. Duvenaud, and
introduction to the bootstrap. CRC G. Hirst. 2019a. Towards un- Feng, V. W., Z. Lin, and G. Hirst. 2014.
press. derstanding linear word analogies. The impact of deep hierarchical dis-
Egghe, L. 2007. Untangling Herdan’s ACL. course structures in the evaluation of
law and Heaps’ law: Mathematical text coherence. COLING.
and informetric arguments. JASIST, Ethayarajh, K., D. Duvenaud, and
G. Hirst. 2019b. Understanding un- Fernandes, E. R., C. N. dos Santos, and
58(5):702–709. R. L. Milidiú. 2012. Latent struc-
desirable word embedding associa-
Eisner, J. 1996. Three new probabilistic tions. ACL. ture perceptron with feature induc-
models for dependency parsing: An tion for unrestricted coreference res-
exploration. COLING. Ethayarajh, K. and D. Jurafsky. 2020.
Utility is in the eye of the user: olution. CoNLL.
Ekman, P. 1999. Basic emotions. In A critique of NLP leaderboards. Ferragina, P. and U. Scaiella. 2011.
T. Dalgleish and M. J. Power, eds, EMNLP. Fast and accurate annotation of short
Handbook of Cognition and Emo- texts with wikipedia pages. IEEE
tion, 45–60. Wiley. Etzioni, O., M. Cafarella, D. Downey,
A.-M. Popescu, T. Shaked, S. Soder- Software, 29(1):70–75.
Elhage, N., N. Nanda, C. Olsson, land, D. S. Weld, and A. Yates. Ferro, L., L. Gerber, I. Mani, B. Sund-
T. Henighan, N. Joseph, B. Mann, 2005. Unsupervised named-entity heim, and G. Wilson. 2005. Tides
A. Askell, Y. Bai, A. Chen, T. Con- extraction from the web: An experi- 2005 standard for the annotation of
erly, N. DasSarma, D. Drain, mental study. Artificial Intelligence, temporal expressions. Technical re-
D. Ganguli, Z. Hatfield-Dodds, 165(1):91–134. port, MITRE.
D. Hernandez, A. Jones, J. Kernion,
L. Lovitt, K. Ndousse, D. Amodei, Evans, N. 2000. Word classes in the Ferrucci, D. A. 2012. Introduction
T. Brown, J. Clark, J. Kaplan, S. Mc- world’s languages. In G. Booij, to “This is Watson”. IBM Jour-
Candlish, and C. Olah. 2021. A C. Lehmann, and J. Mugdan, eds, nal of Research and Development,
mathematical framework for trans- Morphology: A Handbook on Inflec- 56(3/4):1:1–1:15.
former circuits. White paper. tion and Word Formation, 708–732. Fessler, L. 2017. We tested bots like Siri
Elman, J. L. 1990. Finding structure in Mouton. and Alexa to see who would stand
time. Cognitive science, 14(2):179– Fader, A., S. Soderland, and O. Etzioni. up to sexual harassment. Quartz.
211. 2011. Identifying relations for open Feb 22, 2017. https://qz.com/
Elsner, M., J. Austerweil, and E. Char- information extraction. EMNLP. 911681/.
niak. 2007. A unified local and Fan, A., S. Bhosale, H. Schwenk, Field, A. and Y. Tsvetkov. 2019. Entity-
global model for discourse coher- Z. Ma, A. El-Kishky, S. Goyal, centric contextual affective analysis.
ence. NAACL-HLT. M. Baines, O. Celebi, G. Wenzek, ACL.
Bibliography 561
Fikes, R. E. and N. J. Nilsson. 1971. Foland, W. and J. H. Martin. 2016. Furnas, G. W., T. K. Landauer, L. M.
STRIPS: A new approach to the CU-NLP at SemEval-2016 task 8: Gomez, and S. T. Dumais. 1987.
application of theorem proving to AMR parsing using LSTM-based re- The vocabulary problem in human-
problem solving. Artificial Intelli- current neural networks. SemEval- system communication. Commu-
gence, 2:189–208. 2016. nications of the ACM, 30(11):964–
Fillmore, C. J. 1966. A proposal con- Foland, Jr., W. R. and J. H. Martin. 971.
cerning English prepositions. In F. P. 2015. Dependency-based seman- Gabow, H. N., Z. Galil, T. Spencer, and
Dinneen, ed., 17th annual Round Ta- tic role labeling using convolutional R. E. Tarjan. 1986. Efficient algo-
ble, volume 17 of Monograph Series neural networks. *SEM 2015. rithms for finding minimum span-
on Language and Linguistics, 19– ning trees in undirected and directed
34. Georgetown University Press. Foltz, P. W., W. Kintsch, and T. K. Lan- graphs. Combinatorica, 6(2):109–
dauer. 1998. The measurement of 122.
Fillmore, C. J. 1968. The case for case. textual coherence with latent seman-
In E. W. Bach and R. T. Harms, eds, tic analysis. Discourse processes, Gaddy, D., M. Stern, and D. Klein.
Universals in Linguistic Theory, 1– 25(2-3):285–307. 2018. What’s going on in neural
88. Holt, Rinehart & Winston. constituency parsers? an analysis.
∀, W. Nekoto, V. Marivate, T. Matsila, NAACL HLT.
Fillmore, C. J. 1985. Frames and the se-
T. Fasubaa, T. Kolawole, T. Fag-
mantics of understanding. Quaderni Gale, W. A. and K. W. Church. 1994.
bohungbe, S. O. Akinola, S. H.
di Semantica, VI(2):222–254. What is wrong with adding one? In
Muhammad, S. Kabongo, S. Osei,
Fillmore, C. J. 2003. Valency and se- N. Oostdijk and P. de Haan, eds,
S. Freshia, R. A. Niyongabo,
mantic roles: the concept of deep Corpus-Based Research into Lan-
R. M. P. Ogayo, O. Ahia, M. Mer-
structure case. In V. Agel, L. M. guage, 189–198. Rodopi.
essa, M. Adeyemi, M. Mokgesi-
Eichinger, H. W. Eroms, P. Hell- Selinga, L. Okegbemi, L. J. Mar- Gale, W. A. and K. W. Church. 1991.
wig, H. J. Heringer, and H. Lobin, tinus, K. Tajudeen, K. Degila, A program for aligning sentences in
eds, Dependenz und Valenz: Ein K. Ogueji, K. Siminyu, J. Kreutzer, bilingual corpora. ACL.
internationales Handbuch der zeit- J. Webster, J. T. Ali, J. A. I. Gale, W. A. and K. W. Church. 1993.
genössischen Forschung, chapter 36, Orife, I. Ezeani, I. A. Dangana, A program for aligning sentences in
457–475. Walter de Gruyter. H. Kamper, H. Elsahar, G. Duru, bilingual corpora. Computational
Fillmore, C. J. 2012. ACL life- G. Kioko, E. Murhabazi, E. van Linguistics, 19:75–102.
time achievement award: Encoun- Biljon, D. Whitenack, C. Onye- Gale, W. A., K. W. Church, and
ters with language. Computational fuluchi, C. Emezue, B. Dossou, D. Yarowsky. 1992a. One sense per
Linguistics, 38(4):701–718. B. Sibanda, B. I. Bassey, A. Olabiyi, discourse. HLT.
Fillmore, C. J. and C. F. Baker. 2009. A A. Ramkilowan, A. Öktem, A. Akin- Gale, W. A., K. W. Church, and
frames approach to semantic analy- faderin, and A. Bashir. 2020. Partic- D. Yarowsky. 1992b. Work on sta-
sis. In B. Heine and H. Narrog, eds, ipatory research for low-resourced tistical methods for word sense dis-
The Oxford Handbook of Linguistic machine translation: A case study ambiguation. AAAI Fall Symposium
Analysis, 313–340. Oxford Univer- in African languages. Findings of on Probabilistic Approaches to Nat-
sity Press. EMNLP. ural Language.
Fillmore, C. J., C. R. Johnson, and Fox, B. A. 1993. Discourse Structure Gao, L., T. Hoppe, A. Thite, S. Bi-
M. R. L. Petruck. 2003. Background and Anaphora: Written and Conver- derman, C. Foster, N. Nabeshima,
to FrameNet. International journal sational English. Cambridge. S. Black, J. Phang, S. Presser,
of lexicography, 16(3):235–250. Francis, W. N. and H. Kučera. 1982. L. Golding, H. He, and C. Leahy.
Finkelstein, L., E. Gabrilovich, Y. Ma- Frequency Analysis of English Us- 2020. The Pile: An 800GB dataset
tias, E. Rivlin, Z. Solan, G. Wolf- age. Houghton Mifflin, Boston. of diverse text for language model-
man, and E. Ruppin. 2002. Placing ing. ArXiv preprint.
search in context: The concept revis- Franz, A. and T. Brants. 2006. All our
ited. ACM Transactions on Informa- n-gram are belong to you. https: Garg, N., L. Schiebinger, D. Jurafsky,
//research.google/blog/ and J. Zou. 2018. Word embeddings
tion Systems, 20(1):116—-131. quantify 100 years of gender and
all-our-n-gram-are-belong-to-you/.
Finlayson, M. A. 2016. Inferring ethnic stereotypes. Proceedings of
Propp’s functions from semantically Fraser, N. M. and G. N. Gilbert. 1991. the National Academy of Sciences,
annotated text. The Journal of Amer- Simulating speech systems. Com- 115(16):E3635–E3644.
ican Folklore, 129(511):55–77. puter Speech and Language, 5:81–
Garside, R. 1987. The CLAWS word-
99.
Firth, J. R. 1935. The technique of se- tagging system. In R. Garside,
mantics. Transactions of the philo- Friedman, B. and D. G. Hendry. 2019. G. Leech, and G. Sampson, eds, The
logical society, 34(1):36–73. Value Sensitive Design: Shaping Computational Analysis of English,
Firth, J. R. 1957. A synopsis of linguis- Technology with Moral Imagination. 30–41. Longman.
tic theory 1930–1955. In Studies in MIT Press. Garside, R., G. Leech, and A. McEnery.
Linguistic Analysis. Philological So- Friedman, B., D. G. Hendry, and 1997. Corpus Annotation. Long-
ciety. Reprinted in Palmer, F. (ed.) A. Borning. 2017. A survey man.
1968. Selected Papers of J. R. Firth. of value sensitive design methods. Gebru, T., J. Morgenstern, B. Vec-
Longman, Harlow. Foundations and Trends in Human- chione, J. W. Vaughan, H. Wal-
Flanagan, J. L. 1972. Speech Analysis, Computer Interaction, 11(2):63– lach, H. Daumé III, and K. Craw-
Synthesis, and Perception. Springer. 125. ford. 2020. Datasheets for datasets.
Flanagan, J. L., K. Ishizaka, and K. L. Fry, D. B. 1959. Theoretical as- ArXiv.
Shipley. 1975. Synthesis of speech pects of mechanical speech recogni- Gehman, S., S. Gururangan, M. Sap,
from a dynamic model of the vocal tion. Journal of the British Institu- Y. Choi, and N. A. Smith. 2020. Re-
cords and vocal tract. The Bell Sys- tion of Radio Engineers, 19(4):211– alToxicityPrompts: Evaluating neu-
tem Technical Journal, 54(3):485– 218. Appears together with compan- ral toxic degeneration in language
506. ion paper (Denes 1959). models. Findings of EMNLP.
562 Bibliography
Gerber, M. and J. Y. Chai. 2010. Be- and G. Irving. 2022. Improving Gould, J. D. and C. Lewis. 1985. De-
yond nombank: A study of implicit alignment of dialogue agents via tar- signing for usability: Key principles
arguments for nominal predicates. geted human judgements. ArXiv and what designers think. CACM,
ACL. preprint. 28(3):300–311.
Gers, F. A., J. Schmidhuber, and Glenberg, A. M. and D. A. Robert- Gould, S. J. 1980. The Panda’s Thumb.
F. Cummins. 2000. Learning to for- son. 2000. Symbol grounding and Penguin Group.
get: Continual prediction with lstm. meaning: A comparison of high- Graff, D. 1997. The 1996 Broadcast
Neural computation, 12(10):2451– dimensional and embodied theories News speech and language-model
2471. of meaning. Journal of memory and corpus. Proceedings DARPA Speech
Gil, D. 2000. Syntactic categories, language, 43(3):379–401. Recognition Workshop.
cross-linguistic variation and univer- Godfrey, J., E. Holliman, and J. Mc- Gravano, A., J. Hirschberg, and
sal grammar. In P. M. Vogel and Daniel. 1992. SWITCHBOARD: Š. Beňuš. 2012. Affirmative cue
B. Comrie, eds, Approaches to the Telephone speech corpus for re- words in task-oriented dialogue.
Typology of Word Classes, 173–216. search and development. ICASSP. Computational Linguistics, 38(1):1–
Mouton. Goel, V. and W. Byrne. 2000. Minimum 39.
Gildea, D. and D. Jurafsky. 2000. Au- bayes-risk automatic speech recog- Graves, A. 2012. Sequence transduc-
tomatic labeling of semantic roles. nition. Computer Speech & Lan- tion with recurrent neural networks.
ACL. guage, 14(2):115–135. ICASSP.
Gildea, D. and D. Jurafsky. 2002. Goffman, E. 1974. Frame analysis: An Graves, A. 2013. Generating se-
Automatic labeling of semantic essay on the organization of experi- quences with recurrent neural net-
roles. Computational Linguistics, ence. Harvard University Press. works. ArXiv.
28(3):245–288.
Goldberg, J., M. Ostendorf, and Graves, A., S. Fernández, F. Gomez,
Gildea, D. and M. Palmer. 2002. K. Kirchhoff. 2003. The impact of and J. Schmidhuber. 2006. Con-
The necessity of syntactic parsing response wording in error correction nectionist temporal classification:
for predicate argument recognition. subdialogs. ISCA Tutorial and Re- Labelling unsegmented sequence
ACL. search Workshop on Error Handling data with recurrent neural networks.
Giles, C. L., G. M. Kuhn, and R. J. in Spoken Dialogue Systems. ICML.
Williams. 1994. Dynamic recurrent Goldberg, Y. 2017. Neural Network Graves, A., S. Fernández, M. Li-
neural networks: Theory and appli- Methods for Natural Language Pro- wicki, H. Bunke, and J. Schmidhu-
cations. IEEE Trans. Neural Netw. cessing, volume 10 of Synthesis Lec- ber. 2007. Unconstrained on-line
Learning Syst., 5(2):153–156. tures on Human Language Tech- handwriting recognition with recur-
Gillick, L. and S. J. Cox. 1989. Some nologies. Morgan & Claypool. rent neural networks. NeurIPS.
statistical issues in the comparison Gonen, H. and Y. Goldberg. 2019. Lip- Graves, A. and N. Jaitly. 2014. Towards
of speech recognition algorithms. stick on a pig: Debiasing methods end-to-end speech recognition with
ICASSP. cover up systematic gender biases in recurrent neural networks. ICML.
Girard, G. 1718. La justesse de la word embeddings but do not remove Graves, A., A.-r. Mohamed, and
langue françoise: ou les différentes them. NAACL HLT. G. Hinton. 2013. Speech recognition
significations des mots qui passent Good, M. D., J. A. Whiteside, D. R. with deep recurrent neural networks.
pour synonimes. Laurent d’Houry, Wixon, and S. J. Jones. 1984. Build- ICASSP.
Paris. ing a user-derived interface. CACM, Graves, A. and J. Schmidhuber. 2005.
Giuliano, V. E. 1965. The inter- 27(10):1032–1043. Framewise phoneme classification
pretation of word associations. Goodfellow, I., Y. Bengio, and with bidirectional LSTM and other
Statistical Association Methods A. Courville. 2016. Deep Learn- neural network architectures. Neu-
For Mechanized Documentation. ing. MIT Press. ral Networks, 18(5-6):602–610.
Symposium Proceedings. Wash- Graves, A., G. Wayne, and I. Dani-
ington, D.C., USA, March 17, Goodman, J. 2006. A bit of progress
in language modeling: Extended helka. 2014. Neural Turing ma-
1964. https://nvlpubs.nist. chines. ArXiv.
gov/nistpubs/Legacy/MP/ version. Technical Report MSR-
nbsmiscellaneouspub269.pdf. TR-2001-72, Machine Learning and Green, B. F., A. K. Wolf, C. Chom-
Applied Statistics Group, Microsoft sky, and K. Laughery. 1961. Base-
Gladkova, A., A. Drozd, and S. Mat- Research, Redmond, WA. ball: An automatic question an-
suoka. 2016. Analogy-based de- swerer. Proceedings of the Western
tection of morphological and se- Goodwin, C. 1996. Transparent vi-
sion. In E. Ochs, E. A. Schegloff, Joint Computer Conference 19.
mantic relations with word embed-
dings: what works and what doesn’t. and S. A. Thompson, eds, Interac- Greene, B. B. and G. M. Rubin. 1971.
NAACL Student Research Workshop. tion and Grammar, 370–404. Cam- Automatic grammatical tagging of
bridge University Press. English. Department of Linguis-
Glaese, A., N. McAleese, M. Trebacz, tics, Brown University, Providence,
J. Aslanides, V. Firoiu, T. Ewalds, Gopalakrishnan, K., B. Hedayatnia,
Q. Chen, A. Gottardi, S. Kwa- Rhode Island.
M. Rauh, L. Weidinger, M. Chad-
wick, P. Thacker, L. Campbell- tra, A. Venkatesh, R. Gabriel, and Greenwald, A. G., D. E. McGhee, and
Gillingham, J. Uesato, P.-S. Huang, D. Hakkani-Tür. 2019. Topical- J. L. K. Schwartz. 1998. Measur-
R. Comanescu, F. Yang, A. See, chat: Towards knowledge-grounded ing individual differences in implicit
S. Dathathri, R. Greig, C. Chen, open-domain conversations. INTER- cognition: the implicit association
D. Fritz, J. Sanchez Elias, R. Green, SPEECH. test. Journal of personality and so-
S. Mokrá, N. Fernando, B. Wu, Gould, J. D., J. Conti, and T. Ho- cial psychology, 74(6):1464–1480.
R. Foley, S. Young, I. Gabriel, vanyecz. 1983. Composing let- Grenager, T. and C. D. Manning. 2006.
W. Isaac, J. Mellor, D. Hassabis, ters with a simulated listening type- Unsupervised discovery of a statisti-
K. Kavukcuoglu, L. A. Hendricks, writer. CACM, 26(4):295–308. cal verb lexicon. EMNLP.
Bibliography 563
Grice, H. P. 1975. Logic and conversa- Guyon, I. and A. Elisseeff. 2003. An Harris, R. A. 2005. Voice Interaction
tion. In P. Cole and J. L. Morgan, introduction to variable and feature Design: Crafting the New Conver-
eds, Speech Acts: Syntax and Se- selection. JMLR, 3:1157–1182. sational Speech Systems. Morgan
mantics Volume 3, 41–58. Academic Haber, J. and M. Poesio. 2020. As- Kaufmann.
Press. sessing polyseme sense similarity Harris, Z. S. 1946. From morpheme
Grice, H. P. 1978. Further notes on through co-predication acceptability to utterance. Language, 22(3):161–
logic and conversation. In P. Cole, and contextualised embedding dis- 183.
ed., Pragmatics: Syntax and Seman- tance. *SEM.
Harris, Z. S. 1954. Distributional struc-
tics Volume 9, 113–127. Academic Habernal, I. and I. Gurevych. 2016. ture. Word, 10:146–162.
Press. Which argument is more convinc-
Grishman, R. and B. Sundheim. 1995. ing? Analyzing and predicting con- Harris, Z. S. 1962. String Analysis of
Design of the MUC-6 evaluation. vincingness of Web arguments using Sentence Structure. Mouton, The
MUC-6. bidirectional LSTM. ACL. Hague.
Grosz, B. J. 1977a. The representation Habernal, I. and I. Gurevych. 2017. Hashimoto, T., M. Srivastava,
and use of focus in a system for un- Argumentation mining in user- H. Namkoong, and P. Liang. 2018.
derstanding dialogs. IJCAI-77. Mor- generated web discourse. Computa- Fairness without demographics in
gan Kaufmann. tional Linguistics, 43(1):125–179. repeated loss minimization. ICML.
Grosz, B. J. 1977b. The Representation Haghighi, A. and D. Klein. 2009. Hastie, T., R. J. Tibshirani, and J. H.
and Use of Focus in Dialogue Un- Simple coreference resolution with Friedman. 2001. The Elements of
derstanding. Ph.D. thesis, Univer- rich syntactic and semantic features. Statistical Learning. Springer.
sity of California, Berkeley. EMNLP. Hatzivassiloglou, V. and K. McKeown.
Grosz, B. J., A. K. Joshi, and S. Wein- Hajishirzi, H., L. Zilles, D. S. Weld, 1997. Predicting the semantic orien-
stein. 1983. Providing a unified ac- and L. Zettlemoyer. 2013. Joint tation of adjectives. ACL.
count of definite noun phrases in En- coreference resolution and named-
entity linking with multi-pass sieves. Hatzivassiloglou, V. and J. Wiebe.
glish. ACL. 2000. Effects of adjective orienta-
Grosz, B. J., A. K. Joshi, and S. Wein- EMNLP.
tion and gradability on sentence sub-
stein. 1995. Centering: A framework Hajič, J. 1998. Building a Syn- jectivity. COLING.
for modeling the local coherence of tactically Annotated Corpus: The
Prague Dependency Treebank, 106– Haviland, S. E. and H. H. Clark. 1974.
discourse. Computational Linguis-
132. Karolinum. What’s new? Acquiring new infor-
tics, 21(2):203–225.
mation as a process in comprehen-
Grosz, B. J. and C. L. Sidner. 1980. Hajič, J. 2000. Morphological tagging: sion. Journal of Verbal Learning and
Plans for discourse. In P. R. Cohen, Data vs. dictionaries. NAACL. Verbal Behaviour, 13:512–521.
J. Morgan, and M. E. Pollack, eds, Hajič, J., M. Ciaramita, R. Johans-
Intentions in Communication, 417– son, D. Kawahara, M. A. Martı́, Hawkins, J. A. 1978. Definiteness
444. MIT Press. L. Màrquez, A. Meyers, J. Nivre, and indefiniteness: a study in refer-
ence and grammaticality prediction.
Gruber, J. S. 1965. Studies in Lexical S. Padó, J. Štěpánek, P. Stranǎḱ, Croom Helm Ltd.
Relations. Ph.D. thesis, MIT. M. Surdeanu, N. Xue, and Y. Zhang.
2009. The conll-2009 shared task: Hayashi, T., R. Yamamoto, K. In-
Grünewald, S., A. Friedrich, and oue, T. Yoshimura, S. Watanabe,
J. Kuhn. 2021. Applying Occam’s Syntactic and semantic dependen-
cies in multiple languages. CoNLL. T. Toda, K. Takeda, Y. Zhang,
razor to transformer-based depen- and X. Tan. 2020. ESPnet-TTS:
dency parsing: What works, what Hakkani-Tür, D., K. Oflazer, and
G. Tür. 2002. Statistical morpholog- Unified, reproducible, and integrat-
doesn’t, and what is really neces- able open source end-to-end text-to-
sary. IWPT. ical disambiguation for agglutinative
languages. Journal of Computers speech toolkit. ICASSP.
Guinaudeau, C. and M. Strube. 2013.
Graph-based local coherence model- and Humanities, 36(4):381–410. He, L., K. Lee, M. Lewis, and L. Zettle-
ing. ACL. Halliday, M. A. K. and R. Hasan. 1976. moyer. 2017. Deep semantic role la-
Cohesion in English. Longman. En- beling: What works and what’s next.
Guindon, R. 1988. A multidisciplinary ACL.
perspective on dialogue structure in glish Language Series, Title No. 9.
user-advisor dialogues. In R. Guin- Hamilton, W. L., K. Clark, J. Leskovec, He, W., K. Liu, J. Liu, Y. Lyu, S. Zhao,
don, ed., Cognitive Science and Its and D. Jurafsky. 2016a. Inducing X. Xiao, Y. Liu, Y. Wang, H. Wu,
Applications for Human-Computer domain-specific sentiment lexicons Q. She, X. Liu, T. Wu, and H. Wang.
Interaction, 163–200. Lawrence Erl- from unlabeled corpora. EMNLP. 2018. DuReader: a Chinese machine
baum. Hamilton, W. L., J. Leskovec, and reading comprehension dataset from
D. Jurafsky. 2016b. Diachronic word real-world applications. Workshop
Gundel, J. K., N. Hedberg, and on Machine Reading for Question
R. Zacharski. 1993. Cognitive status embeddings reveal statistical laws of
semantic change. ACL. Answering.
and the form of referring expressions
in discourse. Language, 69(2):274– Hannun, A. 2017. Sequence modeling Heafield, K. 2011. KenLM: Faster
307. with CTC. Distill, 2(11). and smaller language model queries.
Workshop on Statistical Machine
Gururangan, S., A. Marasović, Hannun, A. Y., A. L. Maas, D. Juraf- Translation.
S. Swayamdipta, K. Lo, I. Belt- sky, and A. Y. Ng. 2014. First-pass
agy, D. Downey, and N. A. Smith. large vocabulary continuous speech Heafield, K., I. Pouzyrevsky, J. H.
2020. Don’t stop pretraining: Adapt recognition using bi-directional re- Clark, and P. Koehn. 2013. Scal-
language models to domains and current DNNs. ArXiv preprint able modified Kneser-Ney language
tasks. ACL. arXiv:1408.2873. model estimation. ACL.
Gusfield, D. 1997. Algorithms on Harris, C. M. 1953. A study of the Heaps, H. S. 1978. Information re-
Strings, Trees, and Sequences. Cam- building blocks in speech. JASA, trieval. Computational and theoret-
bridge University Press. 25(5):962–969. ical aspects. Academic Press.
564 Bibliography
Hearst, M. A. 1992a. Automatic acqui- task 8: Multi-way classification of Hjelmslev, L. 1969. Prologomena to
sition of hyponyms from large text semantic relations between pairs of a Theory of Language. University
corpora. COLING. nominals. 5th International Work- of Wisconsin Press. Translated by
Hearst, M. A. 1992b. Automatic acqui- shop on Semantic Evaluation. Francis J. Whitfield; original Danish
sition of hyponyms from large text Hendrix, G. G., C. W. Thompson, and edition 1943.
corpora. COLING. J. Slocum. 1973. Language process- Hobbs, J. R. 1978. Resolving pronoun
Hearst, M. A. 1997. Texttiling: Seg- ing via canonical verbs and semantic references. Lingua, 44:311–338.
menting text into multi-paragraph models. Proceedings of IJCAI-73. Hobbs, J. R. 1979. Coherence and
subtopic passages. Computational Herdan, G. 1960. Type-token mathe- coreference. Cognitive Science,
Linguistics, 23:33–64. matics. Mouton. 3:67–90.
Hearst, M. A. 1998. Automatic discov- Hermann, K. M., T. Kocisky, E. Grefen- Hobbs, J. R., D. E. Appelt, J. Bear,
ery of WordNet relations. In C. Fell- stette, L. Espeholt, W. Kay, M. Su- D. Israel, M. Kameyama, M. E.
baum, ed., WordNet: An Electronic leyman, and P. Blunsom. 2015a. Stickel, and M. Tyson. 1997. FAS-
Lexical Database. MIT Press. Teaching machines to read and com- TUS: A cascaded finite-state trans-
prehend. NeurIPS. ducer for extracting information
Heckerman, D., E. Horvitz, M. Sahami,
ter n-gram F-score for automatic in OntoNotes. Proceedings of
A. G. Oettinger, and A. J. Perlis. ICSC 2007.
1966. Language and Machines: MT evaluation. Proceedings of the
Computers in Translation and Lin- Tenth Workshop on Statistical Ma- Pradhan, S., W. Ward, K. Hacioglu,
guistics. ALPAC report. National chine Translation. J. H. Martin, and D. Jurafsky. 2005.
Academy of Sciences, National Re- Semantic role labeling using differ-
Popp, D., R. A. Donovan, M. Craw-
search Council, Washington, DC. ent syntactic views. ACL.
ford, K. L. Marsh, and M. Peele.
Pilehvar, M. T. and J. Camacho- 2003. Gender, race, and speech style Prasad, A., P. Hase, X. Zhou, and
Collados. 2019. WiC: the word- stereotypes. Sex Roles, 48(7-8):317– M. Bansal. 2023. GrIPS: Gradient-
in-context dataset for evaluating 325. free, edit-based instruction search
context-sensitive meaning represen- for prompting large language mod-
Porter, M. F. 1980. An algorithm els. EACL.
tations. NAACL HLT. for suffix stripping. Program,
14(3):130–137. Prasad, R., N. Dinesh, A. Lee, E. Milt-
Pitler, E., A. Louis, and A. Nenkova. sakaki, L. Robaldo, A. K. Joshi, and
2009. Automatic sense prediction Post, M. 2018. A call for clarity in re- B. L. Webber. 2008. The Penn Dis-
for implicit discourse relations in porting BLEU scores. WMT 2018. course TreeBank 2.0. LREC.
text. ACL IJCNLP.
Potts, C. 2011. On the negativity of Prasad, R., B. L. Webber, and A. Joshi.
Pitler, E. and A. Nenkova. 2009. Us- negation. In N. Li and D. Lutz, 2014. Reflections on the Penn Dis-
ing syntax to disambiguate explicit eds, Proceedings of Semantics and course Treebank, comparable cor-
discourse connectives in text. ACL Linguistic Theory 20, 636–659. CLC pora, and complementary annota-
IJCNLP. Publications, Ithaca, NY. tion. Computational Linguistics,
Plutchik, R. 1962. The emotions: Facts, Povey, D., A. Ghoshal, G. Boulianne, 40(4):921–950.
theories, and a new model. Random L. Burget, O. Glembek, N. Goel, Prates, M. O. R., P. H. Avelar, and L. C.
House. M. Hannemann, P. Motlicek, Lamb. 2019. Assessing gender bias
Plutchik, R. 1980. A general psycho- Y. Qian, P. Schwarz, J. Silovský, in machine translation: a case study
evolutionary theory of emotion. In G. Stemmer, and K. Veselý. 2011. with Google Translate. Neural Com-
R. Plutchik and H. Kellerman, eds, The Kaldi speech recognition puting and Applications, 32:6363–
Emotion: Theory, Research, and Ex- toolkit. ASRU. 6381.
perience, Volume 1, 3–33. Academic Pradhan, S., E. H. Hovy, M. P. Mar- Price, P. J., W. Fisher, J. Bern-
Press. cus, M. Palmer, L. Ramshaw, and stein, and D. Pallet. 1988. The
Poesio, M., R. Stevenson, B. Di Euge- R. Weischedel. 2007a. OntoNotes: DARPA 1000-word resource man-
nio, and J. Hitzeman. 2004. Center- A unified relational semantic repre- agement database for continuous
ing: A parametric theory and its in- sentation. Proceedings of ICSC. speech recognition. ICASSP.
stantiations. Computational Linguis- Pradhan, S., E. H. Hovy, M. P. Mar- Prince, E. 1981. Toward a taxonomy of
tics, 30(3):309–363. cus, M. Palmer, L. A. Ramshaw, given-new information. In P. Cole,
Poesio, M., R. Stuckardt, and Y. Ver- and R. M. Weischedel. 2007b. ed., Radical Pragmatics, 223–255.
sley. 2016. Anaphora resolution: Ontonotes: a unified relational se- Academic Press.
Algorithms, resources, and applica- mantic representation. Int. J. Seman- Propp, V. 1968. Morphology of the
tions. Springer. tic Computing, 1(4):405–419. Folktale, 2nd edition. University of
Poesio, M., P. Sturt, R. Artstein, and Pradhan, S., X. Luo, M. Recasens, Texas Press. Original Russian 1928.
R. Filik. 2006. Underspecification E. H. Hovy, V. Ng, and M. Strube. Translated by Laurence Scott.
and anaphora: Theoretical issues 2014. Scoring coreference partitions Pryzant, R., D. Iter, J. Li, Y. Lee,
and preliminary evidence. Discourse of predicted mentions: A reference C. Zhu, and M. Zeng. 2023. Au-
processes, 42(2):157–175. implementation. ACL. tomatic prompt optimization with
Bibliography 575
“gradient descent” and beam search. questions for machine comprehen- Semantic Analysis to assess knowl-
EMNLP. sion of text. EMNLP. edge: Some technical considera-
Pundak, G. and T. N. Sainath. 2016. Ram, O., Y. Levine, I. Dalmedigos, tions. Discourse Processes, 25(2-
Lower frame rate neural network D. Muhlgay, A. Shashua, K. Leyton- 3):337–354.
acoustic models. INTERSPEECH. Brown, and Y. Shoham. 2023. Rei, R., C. Stewart, A. C. Farinha, and
Pustejovsky, J. 1991. The generative In-context retrieval-augmented lan- A. Lavie. 2020. COMET: A neu-
lexicon. Computational Linguistics, guage models. ArXiv preprint. ral framework for MT evaluation.
17(4). Ramshaw, L. A. and M. P. Mar- EMNLP.
cus. 1995. Text chunking using Reichenbach, H. 1947. Elements of
Pustejovsky, J., P. Hanks, R. Saurı́,
transformation-based learning. Pro- Symbolic Logic. Macmillan, New
A. See, R. Gaizauskas, A. Setzer,
ceedings of the 3rd Annual Work- York.
D. Radev, B. Sundheim, D. S. Day,
L. Ferro, and M. Lazo. 2003. The shop on Very Large Corpora. Reichman, R. 1985. Getting Computers
TIMEBANK corpus. Proceedings Rashkin, H., E. Bell, Y. Choi, and to Talk Like You and Me. MIT Press.
of Corpus Linguistics 2003 Confer- S. Volkova. 2017. Multilingual con- Resnik, P. 1993. Semantic classes and
ence. UCREL Technical Paper num- notation frames: A case study on syntactic ambiguity. HLT.
ber 16. social media for targeted sentiment
Resnik, P. 1996. Selectional con-
Pustejovsky, J., R. Ingria, analysis and forecast. ACL.
straints: An information-theoretic
R. Saurı́, J. Castaño, J. Littman, Rashkin, H., S. Singh, and Y. Choi. model and its computational realiza-
R. Gaizauskas, A. Setzer, G. Katz, 2016. Connotation frames: A data- tion. Cognition, 61:127–159.
and I. Mani. 2005. The Specifica- driven investigation. ACL. Reynolds, L. and K. McDonell. 2021.
tion Language TimeML, chapter 27. Rashkin, H., E. M. Smith, M. Li, Prompt programming for large lan-
Oxford. and Y.-L. Boureau. 2019. Towards guage models: Beyond the few-shot
Qin, L., Z. Zhang, and H. Zhao. 2016. empathetic open-domain conversa- paradigm. CHI 2021.
A stacking gated neural architecture tion models: A new benchmark and Riedel, S., L. Yao, and A. McCallum.
for implicit discourse relation classi- dataset. ACL. 2010. Modeling relations and their
fication. EMNLP. Ratinov, L. and D. Roth. 2012. mentions without labeled text. In
Qin, L., Z. Zhang, H. Zhao, Z. Hu, Learning-based multi-sieve co- Machine Learning and Knowledge
and E. Xing. 2017. Adversarial reference resolution with knowl- Discovery in Databases, 148–163.
connective-exploiting networks for edge. EMNLP. Springer.
implicit discourse relation classifica- Ratnaparkhi, A. 1996. A maxi- Riedel, S., L. Yao, A. McCallum, and
tion. ACL. mum entropy part-of-speech tagger. B. M. Marlin. 2013. Relation extrac-
Radford, A., J. Wu, R. Child, D. Luan, EMNLP. tion with matrix factorization and
D. Amodei, and I. Sutskever. 2019. Ratnaparkhi, A. 1997. A linear ob- universal schemas. NAACL HLT.
Language models are unsupervised served time statistical parser based Riloff, E. 1993. Automatically con-
multitask learners. OpenAI tech re- on maximum entropy models. structing a dictionary for informa-
port. EMNLP. tion extraction tasks. AAAI.
Rafailov, R., A. Sharma, E. Mitchell, Rawls, J. 2001. Justice as fairness: Riloff, E. 1996. Automatically gen-
S. Ermon, C. D. Manning, and A restatement. Harvard University erating extraction patterns from un-
C. Finn. 2023. Direct preference op- Press. tagged text. AAAI.
timization: Your language model is Riloff, E. and R. Jones. 1999. Learning
Recasens, M. and E. H. Hovy. 2011.
secretly a reward model. NeurIPS. dictionaries for information extrac-
BLANC: Implementing the Rand
Raffel, C., N. Shazeer, A. Roberts, index for coreference evaluation. tion by multi-level bootstrapping.
K. Lee, S. Narang, M. Matena, Natural Language Engineering, AAAI.
Y. Zhou, W. Li, and P. J. Liu. 17(4):485–510. Riloff, E. and M. Schmelzenbach. 1998.
2020. Exploring the limits of trans- Recasens, M., E. H. Hovy, and M. A. An empirical approach to conceptual
fer learning with a unified text-to- Martı́. 2011. Identity, non-identity, case frame acquisition. Proceedings
text transformer. JMLR, 21(140):1– and near-identity: Addressing the of the Sixth Workshop on Very Large
67. complexity of coreference. Lingua, Corpora.
Raghunathan, K., H. Lee, S. Rangara- 121(6):1138–1152. Riloff, E. and J. Shepherd. 1997. A
jan, N. Chambers, M. Surdeanu, Recasens, M. and M. A. Martı́. 2010. corpus-based approach for building
D. Jurafsky, and C. D. Manning. AnCora-CO: Coreferentially anno- semantic lexicons. EMNLP.
2010. A multi-pass sieve for coref- tated corpora for Spanish and Cata-
erence resolution. EMNLP. Riloff, E. and M. Thelen. 2000. A rule-
lan. Language Resources and Eval- based question answering system
Rahman, A. and V. Ng. 2009. Super- uation, 44(4):315–345. for reading comprehension tests.
vised models for coreference resolu- Reed, C., R. Mochales Palau, G. Rowe, ANLP/NAACL workshop on reading
tion. EMNLP. and M.-F. Moens. 2008. Lan- comprehension tests.
Rahman, A. and V. Ng. 2012. Resolv- guage resources for studying argu- Riloff, E. and J. Wiebe. 2003. Learn-
ing complex cases of definite pro- ment. LREC. ing extraction patterns for subjective
nouns: the Winograd Schema chal- Reeves, B. and C. Nass. 1996. The expressions. EMNLP.
lenge. EMNLP. Media Equation: How People Treat Ritter, A., C. Cherry, and B. Dolan.
Rajpurkar, P., R. Jia, and P. Liang. Computers, Television, and New Me- 2010a. Unsupervised modeling of
2018. Know what you don’t dia Like Real People and Places. twitter conversations. NAACL HLT.
know: Unanswerable questions for Cambridge University Press. Ritter, A., O. Etzioni, and Mausam.
SQuAD. ACL. Rehder, B., M. E. Schreiner, M. B. W. 2010b. A latent dirichlet allocation
Rajpurkar, P., J. Zhang, K. Lopyrev, and Wolfe, D. Laham, T. K. Landauer, method for selectional preferences.
P. Liang. 2016. SQuAD: 100,000+ and W. Kintsch. 1998. Using Latent ACL.
576 Bibliography
Ritter, A., L. Zettlemoyer, Mausam, and Roy, N., J. Pineau, and S. Thrun. 2000. Sag, I. A. and M. Y. Liberman. 1975.
O. Etzioni. 2013. Modeling miss- Spoken dialogue management using The intonational disambiguation of
ing data in distant supervision for in- probabilistic reasoning. ACL. indirect speech acts. In CLS-75,
formation extraction. TACL, 1:367– 487–498. University of Chicago.
Rudinger, R., J. Naradowsky,
378. Sagae, K. 2009. Analysis of dis-
B. Leonard, and B. Van Durme.
Roberts, A., C. Raffel, and N. Shazeer. 2018. Gender bias in coreference course structure with syntactic de-
2020. How much knowledge can resolution. NAACL HLT. pendencies and data-driven shift-
you pack into the parameters of a reduce parsing. IWPT-09.
language model? EMNLP. Rumelhart, D. E., G. E. Hinton, and
R. J. Williams. 1986. Learning in- Sagawa, S., P. W. Koh, T. B.
Robertson, S., S. Walker, S. Jones, ternal representations by error prop- Hashimoto, and P. Liang. 2020. Dis-
M. M. Hancock-Beaulieu, and agation. In D. E. Rumelhart and tributionally robust neural networks
M. Gatford. 1995. Okapi at TREC-3. J. L. McClelland, eds, Parallel Dis- for group shifts: On the importance
Overview of the Third Text REtrieval tributed Processing, volume 2, 318– of regularization for worst-case gen-
Conference (TREC-3). 362. MIT Press. eralization. ICLR.
Robinson, T. and F. Fallside. 1991.
A recurrent error propagation net- Rumelhart, D. E. and J. L. McClelland. Sagisaka, Y. 1988. Speech synthe-
work speech recognition system. 1986a. On learning the past tense of sis by rule using an optimal selec-
Computer Speech & Language, English verbs. In D. E. Rumelhart tion of non-uniform synthesis units.
5(3):259–274. and J. L. McClelland, eds, Parallel ICASSP.
Distributed Processing, volume 2, Sagisaka, Y., N. Kaiki, N. Iwahashi,
Robinson, T., M. Hochberg, and S. Re- 216–271. MIT Press.
nals. 1996. The use of recurrent neu- and K. Mimura. 1992. Atr – ν-talk
ral networks in continuous speech Rumelhart, D. E. and J. L. McClelland, speech synthesis system. ICSLP.
recognition. In C.-H. Lee, F. K. eds. 1986b. Parallel Distributed Sahami, M., S. T. Dumais, D. Heck-
Soong, and K. K. Paliwal, eds, Au- Processing. MIT Press. erman, and E. Horvitz. 1998. A
tomatic speech and speaker recogni- Rumelhart, D. E. and A. A. Abraham- Bayesian approach to filtering junk
tion, 233–258. Springer. son. 1973. A model for analogi- e-mail. AAAI Workshop on Learning
Rogers, A., M. Gardner, and I. Au- cal reasoning. Cognitive Psychol- for Text Categorization.
genstein. 2023. QA dataset explo- ogy, 5(1):1–28. Sakoe, H. and S. Chiba. 1971. A
sion: A taxonomy of NLP resources dynamic programming approach to
for question answering and reading Rumelhart, D. E. and J. L. McClelland,
eds. 1986c. Parallel Distributed continuous speech recognition. Pro-
comprehension. ACM Computing ceedings of the Seventh Interna-
Surveys, 55(10):1–45. Processing: Explorations in the Mi-
crostructure of Cognition, volume tional Congress on Acoustics, vol-
Rohde, D. L. T., L. M. Gonnerman, and 1: Foundations. MIT Press. ume 3. Akadémiai Kiadó.
D. C. Plaut. 2006. An improved
model of semantic similarity based Ruppenhofer, J., M. Ellsworth, M. R. L. Sakoe, H. and S. Chiba. 1984. Dy-
on lexical co-occurrence. CACM, Petruck, C. R. Johnson, C. F. Baker, namic programming algorithm opti-
8:627–633. and J. Scheffczyk. 2016. FrameNet mization for spoken word recogni-
II: Extended theory and practice. tion. IEEE Transactions on ASSP,
Roller, S., E. Dinan, N. Goyal, D. Ju, ASSP-26(1):43–49.
M. Williamson, Y. Liu, J. Xu, Ruppenhofer, J., C. Sporleder,
M. Ott, E. M. Smith, Y.-L. Boureau, R. Morante, C. F. Baker, and Salomaa, A. 1969. Probabilistic and
and J. Weston. 2021. Recipes for M. Palmer. 2010. Semeval-2010 weighted grammars. Information
building an open-domain chatbot. task 10: Linking events and their and Control, 15:529–544.
EACL. participants in discourse. 5th In- Salton, G. 1971. The SMART Re-
Rooth, M., S. Riezler, D. Prescher, ternational Workshop on Semantic trieval System: Experiments in Au-
G. Carroll, and F. Beil. 1999. Induc- Evaluation. tomatic Document Processing. Pren-
ing a semantically annotated lexicon Russell, J. A. 1980. A circum- tice Hall.
via EM-based clustering. ACL. plex model of affect. Journal of Salvetti, F., J. B. Lowe, and J. H. Mar-
Rosenblatt, F. 1958. The percep- personality and social psychology, tin. 2016. A tangled web: The faint
tron: A probabilistic model for in- 39(6):1161–1178. signals of deception in text - boul-
formation storage and organization der lies and truth corpus (BLT-C).
in the brain. Psychological review, Russell, S. and P. Norvig. 2002. Ar-
tificial Intelligence: A Modern Ap- LREC.
65(6):386–408.
proach, 2nd edition. Prentice Hall. Sampson, G. 1987. Alternative gram-
Rosenfeld, R. 1992. Adaptive Statis- matical coding systems. In R. Gar-
tical Language Modeling: A Maxi- Rutherford, A. and N. Xue. 2015. Im-
proving the inference of implicit dis- side, G. Leech, and G. Sampson,
mum Entropy Approach. Ph.D. the- eds, The Computational Analysis of
sis, Carnegie Mellon University. course relations via classifying ex-
plicit discourse connectives. NAACL English, 165–183. Longman.
Rosenfeld, R. 1996. A maximum en-
tropy approach to adaptive statisti- HLT. Sankoff, D. and W. Labov. 1979. On the
cal language modeling. Computer Sachan, D. S., M. Lewis, D. Yo- uses of variable rules. Language in
Speech and Language, 10:187–228. gatama, L. Zettlemoyer, J. Pineau, society, 8(2-3):189–222.
Rosenthal, S. and K. McKeown. 2017. and M. Zaheer. 2023. Questions are Sap, M., D. Card, S. Gabriel, Y. Choi,
Detecting influencers in multiple on- all you need to train a dense passage and N. A. Smith. 2019. The risk of
line genres. ACM Transactions on retriever. TACL, 11:600–616. racial bias in hate speech detection.
Internet Technology (TOIT), 17(2). Sacks, H., E. A. Schegloff, and G. Jef- ACL.
Rothe, S., S. Ebert, and H. Schütze. ferson. 1974. A simplest system- Sap, M., M. C. Prasettio, A. Holtzman,
2016. Ultradense Word Embed- atics for the organization of turn- H. Rashkin, and Y. Choi. 2017. Con-
dings by Orthogonal Transforma- taking for conversation. Language, notation frames of power and agency
tion. NAACL HLT. 50(4):696–735. in modern films. EMNLP.
Bibliography 577
Saurı́, R., J. Littman, B. Knippen, Schütze, H. and J. Pedersen. 1993. A attention flow for machine compre-
R. Gaizauskas, A. Setzer, and vector model for syntagmatic and hension. ICLR.
J. Pustejovsky. 2006. TimeML an- paradigmatic relatedness. 9th An- Shannon, C. E. 1948. A mathematical
notation guidelines version 1.2.1. nual Conference of the UW Centre theory of communication. Bell Sys-
Manuscript. for the New OED and Text Research. tem Technical Journal, 27(3):379–
Scha, R. and L. Polanyi. 1988. An Schütze, H. and Y. Singer. 1994. Part- 423. Continued in the following vol-
augmented context free grammar for of-speech tagging using a variable ume.
discourse. COLING. memory Markov model. ACL. Shannon, C. E. 1951. Prediction and en-
Schank, R. C. and R. P. Abelson. 1975. Schwartz, H. A., J. C. Eichstaedt, tropy of printed English. Bell System
Scripts, plans, and knowledge. Pro- M. L. Kern, L. Dziurzynski, S. M. Technical Journal, 30:50–64.
ceedings of IJCAI-75. Ramones, M. Agrawal, A. Shah, Sheil, B. A. 1976. Observations on con-
M. Kosinski, D. Stillwell, M. E. P. text free parsing. SMIL: Statistical
Schank, R. C. and R. P. Abelson. 1977. Seligman, and L. H. Ungar. 2013. Methods in Linguistics, 1:71–109.
Scripts, Plans, Goals and Under- Personality, gender, and age in the
standing. Lawrence Erlbaum. Shen, J., R. Pang, R. J. Weiss,
language of social media: The open- M. Schuster, N. Jaitly, Z. Yang,
Schegloff, E. A. 1968. Sequencing in vocabulary approach. PloS one, Z. Chen, Y. Zhang, Y. Wang,
conversational openings. American 8(9):e73791. R. Skerry-Ryan, R. A. Saurous,
Anthropologist, 70:1075–1095. Schwenk, H. 2007. Continuous space Y. Agiomyrgiannakis, and Y. Wu.
Scherer, K. R. 2000. Psychological language models. Computer Speech 2018. Natural TTS synthesis by con-
models of emotion. In J. C. Borod, & Language, 21(3):492–518. ditioning WaveNet on mel spectro-
ed., The neuropsychology of emo- Schwenk, H. 2018. Filtering and min- gram predictions. ICASSP.
tion, 137–162. Oxford. ing parallel data in a joint multilin- Sheng, E., K.-W. Chang, P. Natarajan,
Schiebinger, L. 2013. Machine gual space. ACL. and N. Peng. 2019. The woman
translation: Analyzing gender. Schwenk, H., D. Dechelotte, and J.-L. worked as a babysitter: On biases in
http://genderedinnovations. Gauvain. 2006. Continuous space language generation. EMNLP.
stanford.edu/case-studies/ language models for statistical ma- Shi, P. and J. Lin. 2019. Simple BERT
nlp.html#tabs-2. chine translation. COLING/ACL. models for relation extraction and
Schwenk, H., G. Wenzek, S. Edunov, semantic role labeling. ArXiv.
Schiebinger, L. 2014. Scientific re-
search must take gender into ac- E. Grave, A. Joulin, and A. Fan. Shi, W., S. Min, M. Yasunaga, M. Seo,
count. Nature, 507(7490):9. 2021. CCMatrix: Mining billions R. James, M. Lewis, L. Zettlemoyer,
of high-quality parallel sentences on and W.-t. Yih. 2023. REPLUG:
Schluter, N. 2018. The word analogy Retrieval-augmented black-box lan-
testing caveat. NAACL HLT. the web. ACL.
guage models. ArXiv preprint.
Séaghdha, D. O. 2010. Latent vari-
Schone, P. and D. Jurafsky. 2000. able models of selectional prefer- Shriberg, E., R. Bates, P. Taylor,
Knowlege-free induction of mor- ence. ACL. A. Stolcke, D. Jurafsky, K. Ries,
phology using latent semantic anal- N. Coccaro, R. Martin, M. Meteer,
ysis. CoNLL. Seddah, D., R. Tsarfaty, S. Kübler, and C. Van Ess-Dykema. 1998. Can
M. Candito, J. D. Choi, R. Farkas, prosody aid the automatic classifica-
Schone, P. and D. Jurafsky. 2001a. Is J. Foster, I. Goenaga, K. Gojenola,
knowledge-free induction of multi- tion of dialog acts in conversational
Y. Goldberg, S. Green, N. Habash, speech? Language and Speech (Spe-
word unit dictionary headwords a M. Kuhlmann, W. Maier, J. Nivre,
solved problem? EMNLP. cial Issue on Prosody and Conversa-
A. Przepiórkowski, R. Roth, tion), 41(3-4):439–487.
Schone, P. and D. Jurafsky. 2001b. W. Seeker, Y. Versley, V. Vincze,
Knowledge-free induction of inflec- M. Woliński, A. Wróblewska, and Sidner, C. L. 1979. Towards a compu-
tional morphologies. NAACL. E. Villemonte de la Clérgerie. tational theory of definite anaphora
2013. Overview of the SPMRL comprehension in English discourse.
Schuster, M. and K. Nakajima. 2012. Technical Report 537, MIT Artifi-
2013 shared task: cross-framework
Japanese and Korean voice search. cial Intelligence Laboratory, Cam-
evaluation of parsing morpholog-
ICASSP. bridge, MA.
ically rich languages. 4th Work-
Schuster, M. and K. K. Paliwal. 1997. shop on Statistical Parsing of Sidner, C. L. 1983. Focusing in the
Bidirectional recurrent neural net- Morphologically-Rich Languages. comprehension of definite anaphora.
works. IEEE Transactions on Signal See, A., S. Roller, D. Kiela, and In M. Brady and R. C. Berwick,
Processing, 45:2673–2681. J. Weston. 2019. What makes a eds, Computational Models of Dis-
Schütze, H. 1992a. Context space. good conversation? how control- course, 267–330. MIT Press.
AAAI Fall Symposium on Proba- lable attributes affect human judg- Simmons, R. F. 1965. Answering En-
bilistic Approaches to Natural Lan- ments. NAACL HLT. glish questions by computer: A sur-
guage. vey. CACM, 8(1):53–70.
Sekine, S. and M. Collins. 1997.
Schütze, H. 1992b. Dimensions of The evalb software. http: Simmons, R. F. 1973. Semantic net-
meaning. Proceedings of Supercom- //cs.nyu.edu/cs/projects/ works: Their computation and use
puting ’92. IEEE Press. proteus/evalb. for understanding English sentences.
In R. C. Schank and K. M. Colby,
Schütze, H. 1997. Ambiguity Resolu- Sellam, T., D. Das, and A. Parikh. 2020. eds, Computer Models of Thought
tion in Language Learning – Com- BLEURT: Learning robust metrics and Language, 61–113. W.H. Free-
putational and Cognitive Models. for text generation. ACL. man & Co.
CSLI, Stanford, CA. Sennrich, R., B. Haddow, and A. Birch. Simmons, R. F., S. Klein, and K. Mc-
Schütze, H., D. A. Hull, and J. Peder- 2016. Neural machine translation of Conlogue. 1964. Indexing and de-
sen. 1995. A comparison of clas- rare words with subword units. ACL. pendency logic for answering En-
sifiers and document representations Seo, M., A. Kembhavi, A. Farhadi, and glish questions. American Docu-
for the routing problem. SIGIR-95. H. Hajishirzi. 2017. Bidirectional mentation, 15(3):196–204.
578 Bibliography
Simons, G. F. and C. D. Fennig. Socher, R., C. C.-Y. Lin, A. Y. Ng, and Sparck Jones, K. 1972. A statistical in-
2018. Ethnologue: Languages of C. D. Manning. 2011. Parsing natu- terpretation of term specificity and
the world, 21st edition. SIL Inter- ral scenes and natural language with its application in retrieval. Journal
national. recursive neural networks. ICML. of Documentation, 28(1):11–21.
Singh, S. P., D. J. Litman, M. Kearns, Soderland, S., D. Fisher, J. Aseltine, Sparck Jones, K. 1986. Synonymy and
and M. A. Walker. 2002. Optimiz- and W. G. Lehnert. 1995. CRYS- Semantic Classification. Edinburgh
ing dialogue management with re- TAL: Inducing a conceptual dictio- University Press, Edinburgh. Repub-
inforcement learning: Experiments nary. IJCAI-95. lication of 1964 PhD Thesis.
with the NJFun system. JAIR, Søgaard, A. 2010. Simple semi- Sporleder, C. and A. Lascarides. 2005.
16:105–133. supervised training of part-of- Exploiting linguistic cues to classify
Singh, S., F. Vargus, D. D’souza, speech taggers. ACL. rhetorical relations. RANLP-05.
B. F. Karlsson, A. Mahendiran, Søgaard, A. and Y. Goldberg. 2016. Sporleder, C. and M. Lapata. 2005. Dis-
W.-Y. Ko, H. Shandilya, J. Pa- Deep multi-task learning with low course chunking and its application
tel, D. Mataciunas, L. O’Mahony, level tasks supervised at lower lay- to sentence compression. EMNLP.
M. Zhang, R. Hettiarachchi, J. Wil- ers. ACL. Sproat, R., A. W. Black, S. F.
son, M. Machado, L. S. Moura, Chen, S. Kumar, M. Ostendorf, and
D. Krzemiński, H. Fadaei, I. Ergün, Søgaard, A., A. Johannsen, B. Plank,
C. Richards. 2001. Normalization
I. Okoh, A. Alaagib, O. Mudan- D. Hovy, and H. M. Alonso. 2014.
of non-standard words. Computer
nayake, Z. Alyafeai, V. M. Chien, What’s in a p-value in NLP? CoNLL.
Speech & Language, 15(3):287–
S. Ruder, S. Guthikonda, E. A. Soldaini, L., R. Kinney, A. Bha- 333.
Alghamdi, S. Gehrmann, N. Muen- gia, D. Schwenk, D. Atkinson, Sproat, R. and K. Gorman. 2018. A
nighoff, M. Bartolo, J. Kreutzer, R. Authur, B. Bogin, K. Chandu, brief summary of the Kaggle text
A. ÜÜstün, M. Fadaee, and J. Dumas, Y. Elazar, V. Hofmann, normalization challenge.
S. Hooker. 2024. Aya dataset: An A. H. Jha, S. Kumar, L. Lucy,
X. Lyu, N. Lambert, I. Magnus- Srivastava, N., G. E. Hinton,
open-access collection for multi-
son, J. Morrison, N. Muennighoff, A. Krizhevsky, I. Sutskever, and
lingual instruction tuning. ArXiv
A. Naik, C. Nam, M. E. Pe- R. R. Salakhutdinov. 2014. Dropout:
preprint.
ters, A. Ravichander, K. Richardson, a simple way to prevent neural net-
Sleator, D. and D. Temperley. 1993. works from overfitting. JMLR,
Z. Shen, E. Strubell, N. Subramani,
Parsing English with a link gram- 15(1):1929–1958.
O. Tafjord, P. Walsh, L. Zettlemoyer,
mar. IWPT-93. Stab, C. and I. Gurevych. 2014a. Anno-
N. A. Smith, H. Hajishirzi, I. Belt-
Sloan, M. C. 2010. Aristotle’s Nico- agy, D. Groeneveld, J. Dodge, and tating argument components and re-
machean Ethics as the original lo- K. Lo. 2024. Dolma: An open cor- lations in persuasive essays. COL-
cus for the Septem Circumstantiae. pus of three trillion tokens for lan- ING.
Classical Philology, 105(3):236– guage model pretraining research. Stab, C. and I. Gurevych. 2014b. Identi-
251. ArXiv preprint. fying argumentative discourse struc-
Slobin, D. I. 1996. Two ways to Solorio, T., E. Blair, S. Maharjan, tures in persuasive essays. EMNLP.
travel. In M. Shibatani and S. A. S. Bethard, M. Diab, M. Ghoneim, Stab, C. and I. Gurevych. 2017. Parsing
Thompson, eds, Grammatical Con- A. Hawwari, F. AlGhamdi, argumentation structures in persua-
structions: Their Form and Mean- J. Hirschberg, A. Chang, and sive essays. Computational Linguis-
ing, 195–220. Clarendon Press. P. Fung. 2014. Overview for the tics, 43(3):619–659.
first shared task on language iden- Stalnaker, R. C. 1978. Assertion. In
Smith, V. L. and H. H. Clark. 1993. On tification in code-switched data.
the course of answering questions. P. Cole, ed., Pragmatics: Syntax and
Workshop on Computational Ap- Semantics Volume 9, 315–332. Aca-
Journal of Memory and Language, proaches to Code Switching.
32:25–38. demic Press.
Somasundaran, S., J. Burstein, and Stamatatos, E. 2009. A survey of mod-
Smolensky, P. 1988. On the proper M. Chodorow. 2014. Lexical chain- ern authorship attribution methods.
treatment of connectionism. Behav- ing for measuring discourse coher- JASIST, 60(3):538–556.
ioral and brain sciences, 11(1):1– ence quality in test-taker essays.
23. Stanovsky, G., N. A. Smith, and
COLING. L. Zettlemoyer. 2019. Evaluating
Smolensky, P. 1990. Tensor product Soon, W. M., H. T. Ng, and D. C. Y. gender bias in machine translation.
variable binding and the representa- Lim. 2001. A machine learning ap- ACL.
tion of symbolic structures in con- proach to coreference resolution of Stede, M. 2011. Discourse processing.
nectionist systems. Artificial intel- noun phrases. Computational Lin- Morgan & Claypool.
ligence, 46(1-2):159–216. guistics, 27(4):521–544. Stede, M. and J. Schneider. 2018. Argu-
Snover, M., B. Dorr, R. Schwartz, Soricut, R. and D. Marcu. 2003. Sen- mentation Mining. Morgan & Clay-
L. Micciulla, and J. Makhoul. 2006. tence level discourse parsing using pool.
A study of translation edit rate with syntactic and lexical information. Stern, M., J. Andreas, and D. Klein.
targeted human annotation. AMTA- HLT-NAACL. 2017. A minimal span-based neural
2006. constituency parser. ACL.
Soricut, R. and D. Marcu. 2006.
Snow, R., D. Jurafsky, and A. Y. Ng. Discourse generation using utility- Stevens, K. N., S. Kasowski, and G. M.
2005. Learning syntactic patterns trained coherence models. COL- Fant. 1953. An electrical analog of
for automatic hypernym discovery. ING/ACL. the vocal tract. JASA, 25(4):734–
NeurIPS. 742.
Sorokin, D. and I. Gurevych. 2018.
Socher, R., J. Bauer, C. D. Man- Mixing context granularities for im- Stevens, S. S. and J. Volkmann. 1940.
ning, and A. Y. Ng. 2013. Pars- proved entity linking on question The relation of pitch to frequency: A
ing with compositional vector gram- answering data across entity cate- revised scale. The American Journal
mars. ACL. gories. *SEM. of Psychology, 53(3):329–353.
Bibliography 579
Stevens, S. S., J. Volkmann, and E. B. Surdeanu, M. 2013. Overview of the in good-faith online discussions.
Newman. 1937. A scale for the mea- TAC2013 Knowledge Base Popula- WWW-16.
surement of the psychological mag- tion evaluation: English slot filling Tannen, D. 1979. What’s in a frame?
nitude pitch. JASA, 8:185–190. and temporal slot filling. TAC-13. Surface evidence for underlying ex-
Stifelman, L. J., B. Arons, Surdeanu, M., S. Harabagiu, pectations. In R. Freedle, ed., New
C. Schmandt, and E. A. Hulteen. J. Williams, and P. Aarseth. 2003. Directions in Discourse Processing,
1993. VoiceNotes: A speech inter- Using predicate-argument structures 137–181. Ablex.
face for a hand-held voice notetaker. for information extraction. ACL. Taylor, P. 2009. Text-to-Speech Synthe-
INTERCHI 1993. Surdeanu, M., T. Hicks, and M. A. sis. Cambridge University Press.
Stolcke, A. 1998. Entropy-based prun- Valenzuela-Escarcega. 2015. Two
Taylor, W. L. 1953. Cloze procedure: A
ing of backoff language models. practical rhetorical structure theory
new tool for measuring readability.
Proc. DARPA Broadcast News Tran- parsers. NAACL HLT.
Journalism Quarterly, 30:415–433.
scription and Understanding Work- Surdeanu, M., R. Johansson, A. Mey-
shop. ers, L. Màrquez, and J. Nivre. 2008. Teranishi, R. and N. Umeda. 1968. Use
Stolcke, A. 2002. SRILM – an exten- The CoNLL 2008 shared task on of pronouncing dictionary in speech
sible language modeling toolkit. IC- joint parsing of syntactic and seman- synthesis experiments. 6th Interna-
SLP. tic dependencies. CoNLL. tional Congress on Acoustics.
Stolcke, A., Y. Konig, and M. Wein- Sutskever, I., O. Vinyals, and Q. V. Le. Tesnière, L. 1959. Éléments de Syntaxe
traub. 1997. Explicit word error min- 2014. Sequence to sequence learn- Structurale. Librairie C. Klinck-
imization in N-best list rescoring. ing with neural networks. NeurIPS. sieck, Paris.
EUROSPEECH, volume 1. Suzgun, M., L. Melas-Kyriazi, and Tetreault, J. R. 2001. A corpus-based
Stolcke, A., K. Ries, N. Coccaro, D. Jurafsky. 2023a. Follow the wis- evaluation of centering and pronoun
E. Shriberg, R. Bates, D. Jurafsky, dom of the crowd: Effective text resolution. Computational Linguis-
P. Taylor, R. Martin, M. Meteer, generation via minimum Bayes risk tics, 27(4):507–520.
and C. Van Ess-Dykema. 2000. Di- decoding. Findings of ACL 2023. Teufel, S., J. Carletta, and M. Moens.
alogue act modeling for automatic Suzgun, M., N. Scales, N. Schärli, 1999. An annotation scheme for
tagging and recognition of conversa- S. Gehrmann, Y. Tay, H. W. Chung, discourse-level argumentation in re-
tional speech. Computational Lin- A. Chowdhery, Q. Le, E. Chi, search articles. EACL.
guistics, 26(3):339–371. D. Zhou, and J. Wei. 2023b. Teufel, S., A. Siddharthan, and
Stolz, W. S., P. H. Tannenbaum, and Challenging BIG-bench tasks and C. Batchelor. 2009. Towards
F. V. Carstensen. 1965. A stochastic whether chain-of-thought can solve domain-independent argumenta-
approach to the grammatical coding them. ACL Findings. tive zoning: Evidence from chem-
of English. CACM, 8(6):399–405. Swerts, M., D. J. Litman, and J. Hirsch- istry and computational linguistics.
Stone, P., D. Dunphry, M. Smith, and berg. 2000. Corrections in spoken EMNLP.
D. Ogilvie. 1966. The General In- dialogue systems. ICSLP. Thede, S. M. and M. P. Harper. 1999. A
quirer: A Computer Approach to Swier, R. and S. Stevenson. 2004. Un- second-order hidden Markov model
Content Analysis. MIT Press. supervised semantic role labelling. for part-of-speech tagging. ACL.
Strötgen, J. and M. Gertz. 2013. Mul- EMNLP. Thompson, B. and P. Koehn. 2019. Ve-
tilingual and cross-domain temporal Switzer, P. 1965. Vector images in doc- calign: Improved sentence align-
tagging. Language Resources and ument retrieval. Statistical Associa- ment in linear time and space.
Evaluation, 47(2):269–298. tion Methods For Mechanized Docu- EMNLP.
Strube, M. and U. Hahn. 1996. Func- mentation. Symposium Proceedings.
tional centering. ACL. Thompson, K. 1968. Regular ex-
Washington, D.C., USA, March 17, pression search algorithm. CACM,
Strubell, E., A. Ganesh, and A. McCal- 1964. https://nvlpubs.nist. 11(6):419–422.
lum. 2019. Energy and policy con- gov/nistpubs/Legacy/MP/
siderations for deep learning in NLP. nbsmiscellaneouspub269.pdf. Tian, Y., V. Kulkarni, B. Perozzi,
ACL. and S. Skiena. 2016. On the
Syrdal, A. K., C. W. Wightman, convergent properties of word em-
Su, Y., H. Sun, B. Sadler, M. Srivatsa, A. Conkie, Y. Stylianou, M. Beut- bedding methods. ArXiv preprint
I. Gür, Z. Yan, and X. Yan. 2016. On nagel, J. Schroeter, V. Strom, and arXiv:1605.03956.
generating characteristic-rich ques- K.-S. Lee. 2000. Corpus-based
tion sets for QA evaluation. EMNLP. techniques in the AT&T NEXTGEN Tibshirani, R. J. 1996. Regression
synthesis system. ICSLP. shrinkage and selection via the lasso.
Subba, R. and B. Di Eugenio. 2009. An Journal of the Royal Statistical So-
effective discourse parser that uses Talmy, L. 1985. Lexicalization patterns:
ciety. Series B (Methodological),
rich linguistic information. NAACL Semantic structure in lexical forms.
58(1):267–288.
HLT. In T. Shopen, ed., Language Typol-
ogy and Syntactic Description, Vol- Timkey, W. and M. van Schijndel. 2021.
Sukhbaatar, S., A. Szlam, J. Weston, All bark and no bite: Rogue dimen-
and R. Fergus. 2015. End-to-end ume 3. Cambridge University Press.
Originally appeared as UC Berkeley sions in transformer language mod-
memory networks. NeurIPS. els obscure representational quality.
Cognitive Science Program Report
Sundheim, B., ed. 1991. Proceedings of No. 30, 1980. EMNLP.
MUC-3.
Talmy, L. 1991. Path to realization: A Titov, I. and E. Khoddam. 2014. Unsu-
Sundheim, B., ed. 1992. Proceedings of typology of event conflation. BLS- pervised induction of semantic roles
MUC-4. 91. within a reconstruction-error mini-
Sundheim, B., ed. 1993. Proceedings of Tan, C., V. Niculae, C. Danescu- mization framework. NAACL HLT.
MUC-5. Baltimore, MD. Niculescu-Mizil, and L. Lee. 2016. Titov, I. and A. Klementiev. 2012. A
Sundheim, B., ed. 1995. Proceedings of Winning arguments: Interaction dy- Bayesian approach to unsupervised
MUC-6. namics and persuasion strategies semantic role induction. EACL.
580 Bibliography
Tomkins, S. S. 1962. Affect, imagery, van Deemter, K. and R. Kibble. Voorhees, E. M. 1999. TREC-8 ques-
consciousness: Vol. I. The positive 2000. On coreferring: corefer- tion answering track report. Pro-
affects. Springer. ence in MUC and related annotation ceedings of the 8th Text Retrieval
Toutanova, K., D. Klein, C. D. Man- schemes. Computational Linguis- Conference.
ning, and Y. Singer. 2003. Feature- tics, 26(4):629–637. Voorhees, E. M. and D. K. Harman.
rich part-of-speech tagging with a van der Maaten, L. and G. E. Hinton. 2005. TREC: Experiment and
cyclic dependency network. HLT- 2008. Visualizing high-dimensional Evaluation in Information Retrieval.
NAACL. data using t-SNE. JMLR, 9:2579– MIT Press.
Trichelair, P., A. Emami, J. C. K. 2605. Voutilainen, A. 1999. Handcrafted
Cheung, A. Trischler, K. Suleman, van Rijsbergen, C. J. 1975. Information rules. In H. van Halteren, ed., Syn-
and F. Diaz. 2018. On the eval- Retrieval. Butterworths. tactic Wordclass Tagging, 217–246.
uation of common-sense reasoning Kluwer.
in natural language understanding. Vaswani, A., N. Shazeer, N. Parmar,
Vrandečić, D. and M. Krötzsch. 2014.
NeurIPS 2018 Workshop on Cri- J. Uszkoreit, L. Jones, A. N. Gomez,
Wikidata: a free collaborative
tiquing and Correcting Trends in Ł. Kaiser, and I. Polosukhin. 2017.
knowledge base. CACM, 57(10):78–
Machine Learning. Attention is all you need. NeurIPS.
85.
Trnka, K., D. Yarrington, J. McCaw, Vauquois, B. 1968. A survey of for- Wade, E., E. Shriberg, and P. J. Price.
K. F. McCoy, and C. Pennington. mal grammars and algorithms for 1992. User behaviors affecting
2007. The effects of word pre- recognition and transformation in speech recognition. ICSLP.
diction on communication rate for machine translation. IFIP Congress
1968. Wagner, R. A. and M. J. Fischer. 1974.
AAC. NAACL-HLT. The string-to-string correction prob-
Turian, J. P., L. Shen, and I. D. Mela- Velichko, V. M. and N. G. Zagoruyko. lem. Journal of the ACM, 21:168–
med. 2003. Evaluation of machine 1970. Automatic recognition of 173.
translation and its evaluation. Pro- 200 words. International Journal of Waibel, A., T. Hanazawa, G. Hin-
ceedings of MT Summit IX. Man-Machine Studies, 2:223–234. ton, K. Shikano, and K. J. Lang.
Turian, J., L. Ratinov, and Y. Bengio. Velikovich, L., S. Blair-Goldensohn, 1989. Phoneme recognition using
2010. Word representations: a sim- K. Hannan, and R. McDonald. 2010. time-delay neural networks. IEEE
ple and general method for semi- The viability of web-derived polarity Transactions on ASSP, 37(3):328–
supervised learning. ACL. lexicons. NAACL HLT. 339.
Turney, P. D. 2002. Thumbs up or Vendler, Z. 1967. Linguistics in Philos- Walker, M. A. 2000. An applica-
thumbs down? Semantic orienta- ophy. Cornell University Press. tion of reinforcement learning to di-
tion applied to unsupervised classi- alogue strategy selection in a spo-
fication of reviews. ACL. Verhagen, M., R. Gaizauskas, ken dialogue system for email. JAIR,
F. Schilder, M. Hepple, J. Moszkow- 12:387–416.
Turney, P. D. and M. Littman. 2003. icz, and J. Pustejovsky. 2009. The
Measuring praise and criticism: In- TempEval challenge: Identifying Walker, M. A., J. C. Fromer, and S. S.
ference of semantic orientation from temporal relations in text. Lan- Narayanan. 1998a. Learning optimal
association. ACM Transactions guage Resources and Evaluation, dialogue strategies: A case study of
on Information Systems (TOIS), 43(2):161–179. a spoken dialogue agent for email.
21:315–346. COLING/ACL.
Verhagen, M., I. Mani, R. Sauri,
Turney, P. D. and M. L. Littman. 2005. R. Knippen, S. B. Jang, J. Littman, Walker, M. A., M. Iida, and S. Cote.
Corpus-based learning of analogies A. Rumshisky, J. Phillips, and 1994. Japanese discourse and the
and semantic relations. Machine J. Pustejovsky. 2005. Automating process of centering. Computational
Learning, 60(1-3):251–278. temporal annotation with TARSQI. Linguistics, 20(2):193–232.
Umeda, N. 1976. Linguistic rules for ACL. Walker, M. A., A. K. Joshi, and
text-to-speech synthesis. Proceed- E. Prince, eds. 1998b. Centering in
Versley, Y. 2008. Vagueness and ref- Discourse. Oxford University Press.
ings of the IEEE, 64(4):443–451.
erential ambiguity in a large-scale
Umeda, N., E. Matui, T. Suzuki, and annotated corpus. Research on Wang, A., A. Singh, J. Michael, F. Hill,
H. Omura. 1968. Synthesis of fairy Language and Computation, 6(3- O. Levy, and S. R. Bowman. 2018a.
tale using an analog vocal tract. 6th 4):333–353. Glue: A multi-task benchmark and
International Congress on Acous- analysis platform for natural lan-
tics. Vieira, R. and M. Poesio. 2000. An em- guage understanding. ICLR.
pirically based system for process-
Ung, M., J. Xu, and Y.-L. Boureau. Wang, S. and C. D. Manning. 2012.
ing definite descriptions. Computa-
2022. SaFeRDialogues: Taking Baselines and bigrams: Simple,
tional Linguistics, 26(4):539–593.
feedback gracefully after conversa- good sentiment and topic classifica-
tional safety failures. ACL. Vilain, M., J. D. Burger, J. Aberdeen, tion. ACL.
D. Connolly, and L. Hirschman. Wang, W. and B. Chang. 2016. Graph-
Uryupina, O., R. Artstein, A. Bristot, 1995. A model-theoretic coreference
F. Cavicchio, F. Delogu, K. J. Ro- based dependency parsing with bidi-
scoring scheme. MUC-6. rectional LSTM. ACL.
driguez, and M. Poesio. 2020. An-
notating a broad range of anaphoric Vintsyuk, T. K. 1968. Speech discrim- Wang, Y., S. Li, and J. Yang. 2018b.
phenomena, in a variety of genres: ination by dynamic programming. Toward fast and accurate neural dis-
The ARRAU corpus. Natural Lan- Cybernetics, 4(1):52–57. Origi- course segmentation. EMNLP.
guage Engineering, 26(1):1–34. nal Russian: Kibernetika 4(1):81- Wang, Y., S. Mishra, P. Alipoormo-
Uszkoreit, J. 2017. Transformer: A 88. 1968. labashi, Y. Kordi, A. Mirzaei,
novel neural network architecture Vinyals, O., Ł. Kaiser, T. Koo, A. Naik, A. Ashok, A. S.
for language understanding. Google S. Petrov, I. Sutskever, and G. Hin- Dhanasekaran, A. Arunkumar,
Research blog post, Thursday Au- ton. 2015. Grammar as a foreign lan- D. Stap, E. Pathak, G. Kara-
gust 31, 2017. guage. NeurIPS. manolakis, H. Lai, I. Purohit,
Bibliography 581
I. Mondal, J. Anderson, K. Kuz- Weischedel, R., M. Meteer, Williams, A., N. Nangia, and S. Bow-
nia, K. Doshi, K. K. Pal, M. Pa- R. Schwartz, L. A. Ramshaw, and man. 2018. A broad-coverage chal-
tel, M. Moradshahi, M. Par- J. Palmucci. 1993. Coping with am- lenge corpus for sentence under-
mar, M. Purohit, N. Varshney, biguity and unknown words through standing through inference. NAACL
P. R. Kaza, P. Verma, R. S. Puri, probabilistic models. Computational HLT.
R. Karia, S. Doshi, S. K. Sampat, Linguistics, 19(2):359–382. Williams, J. D., K. Asadi, and
S. Mishra, S. Reddy A, S. Patro, Weizenbaum, J. 1966. ELIZA – A G. Zweig. 2017. Hybrid code
T. Dixit, and X. Shen. 2022. Super- computer program for the study of networks: practical and efficient
NaturalInstructions: Generaliza- natural language communication be- end-to-end dialog control with su-
tion via declarative instructions on tween man and machine. CACM, pervised and reinforcement learning.
1600+ NLP tasks. EMNLP. 9(1):36–45. ACL.
Wang, Y., R. Skerry-Ryan, D. Stan-
ton, Y. Wu, R. J. Weiss, N. Jaitly, Weizenbaum, J. 1976. Computer Power Williams, J. D., A. Raux, and M. Hen-
Z. Yang, Y. Xiao, Z. Chen, S. Ben- and Human Reason: From Judge- derson. 2016. The dialog state track-
gio, Q. Le, Y. Agiomyrgiannakis, ment to Calculation. W.H. Freeman ing challenge series: A review. Dia-
R. Clark, and R. A. Saurous. & Co. logue & Discourse, 7(3):4–33.
2017. Tacotron: Towards end-to-end Werbos, P. 1974. Beyond regression: Williams, J. D. and S. J. Young. 2007.
speech synthesis. INTERSPEECH. new tools for prediction and analy- Partially observable markov deci-
Watanabe, S., T. Hori, S. Karita, sis in the behavioral sciences. Ph.D. sion processes for spoken dialog sys-
T. Hayashi, J. Nishitoba, Y. Unno, thesis, Harvard University. tems. Computer Speech and Lan-
N. E. Y. Soplin, J. Heymann, Werbos, P. J. 1990. Backpropagation guage, 21(1):393–422.
M. Wiesner, N. Chen, A. Renduch- through time: what it does and how Wilson, T., J. Wiebe, and P. Hoffmann.
intala, and T. Ochiai. 2018. ESP- to do it. Proceedings of the IEEE, 2005. Recognizing contextual polar-
net: End-to-end speech processing 78(10):1550–1560. ity in phrase-level sentiment analy-
toolkit. INTERSPEECH. sis. EMNLP.
Weston, J., S. Chopra, and A. Bordes.
Weaver, W. 1949/1955. Translation. In 2015. Memory networks. ICLR Winograd, T. 1972. Understanding Nat-
W. N. Locke and A. D. Boothe, eds, 2015. ural Language. Academic Press.
Machine Translation of Languages,
15–23. MIT Press. Reprinted from a Widrow, B. and M. E. Hoff. 1960. Winston, P. H. 1977. Artificial Intelli-
memorandum written by Weaver in Adaptive switching circuits. IRE gence. Addison Wesley.
1949. WESCON Convention Record, vol-
ume 4. Wiseman, S., A. M. Rush, and S. M.
Webber, B. L. 1978. A Formal Shieber. 2016. Learning global
Approach to Discourse Anaphora. Wiebe, J. 1994. Tracking point of view features for coreference resolution.
Ph.D. thesis, Harvard University. in narrative. Computational Linguis- NAACL HLT.
Webber, B. L. 1983. So what can we tics, 20(2):233–287.
Wiseman, S., A. M. Rush, S. M.
talk about now? In M. Brady and Wiebe, J. 2000. Learning subjective ad- Shieber, and J. Weston. 2015. Learn-
R. C. Berwick, eds, Computational jectives from corpora. AAAI. ing anaphoricity and antecedent
Models of Discourse, 331–371. The Wiebe, J., R. F. Bruce, and T. P. O’Hara. ranking features for coreference res-
MIT Press. 1999. Development and use of a olution. ACL.
Webber, B. L. 1991. Structure and os- gold-standard data set for subjectiv- Witten, I. H. and T. C. Bell. 1991.
tension in the interpretation of dis- ity classifications. ACL. The zero-frequency problem: Es-
course deixis. Language and Cogni- timating the probabilities of novel
Wierzbicka, A. 1992. Semantics, Cul-
tive Processes, 6(2):107–135. events in adaptive text compression.
ture, and Cognition: University Hu-
Webber, B. L. and B. Baldwin. 1992. man Concepts in Culture-Specific IEEE Transactions on Information
Accommodating context change. Configurations. Oxford University Theory, 37(4):1085–1094.
ACL. Press. Witten, I. H. and E. Frank. 2005. Data
Webber, B. L., M. Egg, and V. Kor- Wierzbicka, A. 1996. Semantics: Mining: Practical Machine Learn-
doni. 2012. Discourse structure and Primes and Universals. Oxford Uni- ing Tools and Techniques, 2nd edi-
language technology. Natural Lan- versity Press. tion. Morgan Kaufmann.
guage Engineering, 18(4):437–490.
Wilensky, R. 1983. Planning and Wittgenstein, L. 1953. Philosoph-
Webber, B. L. 1988. Discourse deixis:
Understanding: A Computational ical Investigations. (Translated by
Reference to discourse segments.
Approach to Human Reasoning. Anscombe, G.E.M.). Blackwell.
ACL.
Addison-Wesley. Wolf, F. and E. Gibson. 2005. Rep-
Webson, A. and E. Pavlick. 2022. Do
prompt-based models really under- Wilks, Y. 1973. An artificial intelli- resenting discourse coherence: A
stand the meaning of their prompts? gence approach to machine transla- corpus-based analysis. Computa-
NAACL HLT. tion. In R. C. Schank and K. M. tional Linguistics, 31(2):249–287.
Colby, eds, Computer Models of Wolf, M. J., K. W. Miller, and F. S.
Webster, K., M. Recasens, V. Axel- Thought and Language, 114–151.
rod, and J. Baldridge. 2018. Mind Grodzinsky. 2017. Why we should
W.H. Freeman. have seen that coming: Comments
the GAP: A balanced corpus of gen-
dered ambiguous pronouns. TACL, Wilks, Y. 1975a. Preference semantics. on Microsoft’s Tay “experiment,”
6:605–617. In E. L. Keenan, ed., The Formal Se- and wider implications. The ORBIT
Wei, J., X. Wang, D. Schuurmans, mantics of Natural Language, 329– Journal, 1(2):1–12.
M. Bosma, F. Xia, E. Chi, Q. V. 350. Cambridge Univ. Press. Woods, W. A. 1978. Semantics and
Le, D. Zhou, et al. 2022. Chain-of- Wilks, Y. 1975b. A preferential, quantification in natural language
thought prompting elicits reasoning pattern-seeking, semantics for natu- question answering. In M. Yovits,
in large language models. NeurIPS, ral language inference. Artificial In- ed., Advances in Computers, 2–64.
volume 35. telligence, 6(1):53–74. Academic.
582 Bibliography
Woods, W. A., R. M. Kaplan, and B. L. Xue, N. and M. Palmer. 2004. Calibrat- online MT and speech translation.
Nash-Webber. 1972. The lunar sci- ing features for semantic role label- NAACL-HLT.
ences natural language information ing. EMNLP. Zettlemoyer, L. and M. Collins. 2005.
system: Final report. Technical Re- Yamada, H. and Y. Matsumoto. 2003. Learning to map sentences to log-
port 2378, BBN. Statistical dependency analysis with ical form: Structured classification
Woodsend, K. and M. Lapata. 2015. support vector machines. IWPT-03. with probabilistic categorial gram-
Distributed representations for un- mars. Uncertainty in Artificial Intel-
Yang, D., J. Chen, Z. Yang, D. Jurafsky,
supervised semantic role labeling. ligence, UAI’05.
and E. H. Hovy. 2019. Let’s make
EMNLP. your request more persuasive: Mod- Zettlemoyer, L. and M. Collins. 2007.
Wu, D. 1996. A polynomial-time algo- eling persuasive strategies via semi- Online learning of relaxed CCG
rithm for statistical machine transla- supervised neural nets on crowd- grammars for parsing to logical
tion. ACL. funding platforms. NAACL HLT. form. EMNLP/CoNLL.
Yang, X., G. Zhou, J. Su, and C. L. Tan. Zhang, H., R. Sproat, A. H. Ng,
Wu, F. and D. S. Weld. 2007. Au- F. Stahlberg, X. Peng, K. Gorman,
tonomously semantifying Wiki- 2003. Coreference resolution us-
ing competition learning approach. and B. Roark. 2019. Neural models
pedia. CIKM-07. of text normalization for speech ap-
ACL.
Wu, F. and D. S. Weld. 2010. Open plications. Computational Linguis-
information extraction using Wiki- Yang, Y. and J. Pedersen. 1997. A com- tics, 45(2):293–337.
pedia. ACL. parative study on feature selection in Zhang, R., C. N. dos Santos, M. Ya-
text categorization. ICML. sunaga, B. Xiang, and D. Radev.
Wu, L., F. Petroni, M. Josifoski,
S. Riedel, and L. Zettlemoyer. 2020. Yankelovich, N., G.-A. Levow, and 2018. Neural coreference resolution
Scalable zero-shot entity linking M. Marx. 1995. Designing with deep biaffine attention by joint
with dense entity retrieval. EMNLP. SpeechActs: Issues in speech user mention detection and mention clus-
interfaces. CHI-95. tering. ACL.
Wu, S. and M. Dredze. 2019. Beto,