CHAPTER 2
Regular Expressions, Text Normalization, Edit Distance
Some languages, like Japanese, don’t have spaces between words, so word tokeniza-
tion becomes more difficult.
Another part of text normalization is lemmatization, the task of determining
that two words have the same root, despite their surface differences. For example,
the words sang, sung, and sings are forms of the verb sing. The word sing is the
common lemma of these words, and a lemmatizer maps from all of these to sing.
Lemmatization is essential for processing morphologically complex languages like
Arabic. Stemming refers to a simpler version of lemmatization in which we mainly
just strip suffixes from the end of the word. Text normalization also includes
sentence segmentation: breaking up a text into individual sentences, using cues like
periods or exclamation points.
Finally, we’ll need to compare words and other strings. We’ll introduce a metric
called edit distance that measures how similar two strings are based on the number
of edits (insertions, deletions, substitutions) it takes to change one string into the
other. Edit distance is an algorithm with applications throughout language process-
ing, from spelling correction to speech recognition to coreference resolution.
Regular expressions are case sensitive; lower case /s/ is distinct from upper
case /S/ (/s/ matches a lower case s but not an upper case S). This means that
the pattern /woodchucks/ will not match the string Woodchucks. We can solve this
problem with the use of the square braces [ and ]. The string of characters inside the
braces specifies a disjunction of characters to match. For example, Fig. 2.2 shows
that the pattern /[wW]/ matches patterns containing either w or W.
The regular expression /[1234567890]/ specifies any single digit. While such
classes of characters as digits or letters are important building blocks in expressions,
they can get awkward (e.g., it’s inconvenient to specify
/[ABCDEFGHIJKLMNOPQRSTUVWXYZ]/
to mean “any capital letter”). In cases where there is a well-defined sequence asso-
ciated with a set of characters, the brackets can be used with the dash (-) to specify
any one character in a range. The pattern /[2-5]/ specifies any one of the charac-
ters 2, 3, 4, or 5. The pattern /[b-g]/ specifies one of the characters b, c, d, e, f, or
g. Some other examples are shown in Fig. 2.3.
The square braces can also be used to specify what a single character cannot be,
by use of the caret ^. If the caret ^ is the first symbol after the open square brace [,
the resulting pattern is negated. For example, the pattern /[^a]/ matches any single
character (including special characters) except a. This is only true when the caret
is the first symbol after the open square brace. If it occurs anywhere else, it usually
stands for a caret; Fig. 2.4 shows some examples.
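These bracket expressions can be tried directly with Python's re module; a small sketch, with invented example strings:

import re

# Disjunction of characters: match either w or W
print(re.findall(r'[wW]oodchuck', 'Woodchuck and woodchuck'))   # ['Woodchuck', 'woodchuck']

# Ranges: any single digit, any one character from b to g
print(re.findall(r'[0-9]', 'plays 4 and 5'))    # ['4', '5']
print(re.findall(r'[b-g]', 'abcdefgh'))         # ['b', 'c', 'd', 'e', 'f', 'g']

# Negation: any single character that is not an a
print(re.findall(r'[^a]', 'aba!'))              # ['b', '!']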
How can we talk about optional elements, like an optional s in woodchuck and
woodchucks? We can’t use the square brackets, because while they allow us to say
“s or S”, they don’t allow us to say “s or nothing”. For this we use the question mark
/?/, which means “the preceding character or nothing”, as shown in Fig. 2.5.
We can think of the question mark as meaning “zero or one instances of the
previous character”. That is, it’s a way of specifying how many of something that
we want, something that is very important in regular expressions. For example,
consider the language of certain sheep, which consists of strings that look like the
following:
baa!
baaa!
baaaa!
baaaaa!
...
This language consists of strings with a b, followed by at least two a’s, followed
by an exclamation point. The set of operators that allows us to say things like “some
number of as” are based on the asterisk or *, commonly called the Kleene * (gen-
erally pronounced “cleany star”). The Kleene star means “zero or more occurrences
of the immediately previous character or regular expression”. So /a*/ means “any
string of zero or more as”. This will match a or aaaaaa, but it will also match the
empty string at the start of Off Minor since the string Off Minor starts with zero a’s.
So the regular expression for matching one or more a is /aa*/, meaning one a fol-
lowed by zero or more as. More complex patterns can also be repeated. So /[ab]*/
means “zero or more a’s or b’s” (not “zero or more right square braces”). This will
match strings like aaaa or ababab or bbbb.
For specifying multiple digits (useful for finding prices) we can extend /[0-9]/,
the regular expression for a single digit. An integer (a string of digits) is thus
/[0-9][0-9]*/. (Why isn’t it just /[0-9]*/?)
Sometimes it’s annoying to have to write the regular expression for digits twice,
so there is a shorter way to specify “at least one” of some character. This is the
Kleene +, which means “one or more occurrences of the immediately preceding
character or regular expression”. Thus, the expression /[0-9]+/ is the normal way
to specify “a sequence of digits”. There are thus two ways to specify the sheep
language: /baaa*!/ or /baa+!/.
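Both formulations of the sheep language accept the same strings; a small sketch in Python, using fullmatch so the pattern must cover the whole string:

import re

sheep_star = re.compile(r'baaa*!')   # one b, two a's, then zero or more a's
sheep_plus = re.compile(r'baa+!')    # one b, one a, then one or more a's

for s in ['ba!', 'baa!', 'baaaa!', 'moo!']:
    print(s, bool(sheep_star.fullmatch(s)), bool(sheep_plus.fullmatch(s)))
# ba!     False False
# baa!    True  True
# baaaa!  True  True
# moo!    False False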
One very important special character is the period (/./), a wildcard expression
that matches any single character (except a carriage return), as shown in Fig. 2.6.
The wildcard is often used together with the Kleene star to mean “any string of
characters”. For example, suppose we want to find any line in which a particular
word, for example, aardvark, appears twice. We can specify this with the regular
expression /aardvark.*aardvark/.
Anchors are special characters that anchor regular expressions to particular places
in a string. The most common anchors are the caret ^ and the dollar sign $. The caret
^ matches the start of a line. The pattern /^The/ matches the word The only at the
start of a line. Thus, the caret ^ has three uses: to match the start of a line, to in-
dicate a negation inside of square brackets, and just to mean a caret. (What are the
contexts that allow grep or Python to know which function a given caret is supposed
to have?) The dollar sign $ matches the end of a line. So the pattern / $/ (a space
followed by a dollar sign) is useful for matching a space at the end of a line, and /^The dog\.$/ matches a
line that contains only the phrase The dog. (We have to use the backslash here since
we want the . to mean “period” and not the wildcard.)
Regex Match
^ start of line
$ end of line
\b word boundary
\B non-word boundary
Figure 2.7 Anchors in regular expressions.
There are also two other anchors: \b matches a word boundary, and \B matches
a non-boundary. Thus, /\bthe\b/ matches the word the but not the word other.
More technically, a “word” for the purposes of a regular expression is defined as any
sequence of digits, underscores, or letters; this is based on the definition of “words”
in programming languages. For example, /\b99\b/ will match the string 99 in
There are 99 bottles of beer on the wall (because 99 follows a space) but not 99 in
There are 299 bottles of beer on the wall (since 99 follows a number). But it will
match 99 in $99 (since 99 follows a dollar sign ($), which is not a digit, underscore,
or letter).
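The word-boundary anchors behave the same way in Python's re module; a small sketch with invented sentences:

import re

print(re.findall(r'\bthe\b', 'the other theology of the day'))
# ['the', 'the']  -- 'other' and 'theology' are not matched

print(re.findall(r'\b99\b', 'There are 99 bottles, not 299, for $99'))
# ['99', '99']    -- matches 99 and $99 but not the 99 inside 299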
Suppose we want to match a line like Column 1 Column 2 Column 3. The expression
/Column [0-9]+ */ will not match any number of columns; instead, it will match a
single column followed by any number of spaces! The star here applies only to the space that precedes it,
not to the whole sequence. With the parentheses, we could write the expression
/(Column [0-9]+ *)*/ to match the word Column, followed by a number and
optional spaces, the whole pattern repeated zero or more times.
This idea that one operator may take precedence over another, requiring us to
sometimes use parentheses to specify what we mean, is formalized by the operator
precedence hierarchy for regular expressions. The following table gives the order
of RE operator precedence, from highest precedence to lowest precedence.
Parenthesis ()
Counters * + ? {}
Sequences and anchors the ^my end$
Disjunction |
Suppose we want to write a RE to find cases of the English article the. A simple (but
incorrect) pattern might be:
/the/
One problem is that this pattern will miss the word when it begins a sentence and
hence is capitalized (i.e., The). This might lead us to the following pattern:
/[tT]he/
But we will still incorrectly return texts with the embedded in other words (e.g.,
other or theology). So we need to specify that we want instances with a word bound-
ary on both sides:
/\b[tT]he\b/
Suppose we wanted to do this without the use of /\b/. We might want this since
/\b/ won’t treat underscores and numbers as word boundaries; but we might want
to find the in some context where it might also have underlines or numbers nearby
(the or the25). We need to specify that we want instances in which there are no
alphabetic letters on either side of the the:
/[^a-zA-Z][tT]he[^a-zA-Z]/
But there is still one more problem with this pattern: it won’t find the word the
when it begins a line. This is because the regular expression [^a-zA-Z], which
we used to avoid embedded instances of the, implies that there must be some single
(although non-alphabetic) character before the the. We can avoid this by specify-
ing that before the the we require either the beginning-of-line or a non-alphabetic
character, and the same at the end of the line:
/(^|[^a-zA-Z])[tT]he([^a-zA-Z]|$)/
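This final pattern can be verified in Python; a small sketch with invented test lines (the extra parentheses around [tT]he are added here only so we can print the matched article):

import re

the_re = re.compile(r'(^|[^a-zA-Z])([tT]he)([^a-zA-Z]|$)')

for line in ['The dog barked', 'see the dog', 'other theology', 'the_file or the25']:
    m = the_re.search(line)
    print(line, '->', m.group(2) if m else None)
# The dog barked    -> The
# see the dog       -> the
# other theology    -> None
# the_file or the25 -> the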
The process we just went through was based on fixing two kinds of errors: false
positives, strings that we incorrectly matched like other or there, and false nega-
tives, strings that we incorrectly missed, like The. Addressing these two kinds of
errors comes up again and again in implementing speech and language processing
systems. Reducing the overall error rate for an application thus involves two antag-
onistic efforts:
• Increasing precision (minimizing false positives)
• Increasing recall (minimizing false negatives)
We’ll come back to precision and recall with more precise definitions in Chapter 4.
Regex Match
* zero or more occurrences of the previous char or expression
+ one or more occurrences of the previous char or expression
? zero or one occurrence of the previous char or expression
{n} exactly n occurrences of the previous char or expression
{n,m} from n to m occurrences of the previous char or expression
{n,} at least n occurrences of the previous char or expression
{,m} up to m occurrences of the previous char or expression
Figure 2.9 Regular expression operators for counting.
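These counting operators behave the same way in Python's re module; a brief illustrative sketch:

import re

print(bool(re.fullmatch(r'a{3}', 'aaa')))        # True: exactly three a's
print(bool(re.fullmatch(r'a{2,4}', 'aaaaa')))    # False: five a's is too many
print(bool(re.fullmatch(r'ba{2,}!', 'baaaa!')))  # True: at least two a's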
Finally, certain special characters are referred to by special notation based on the
backslash (\) (see Fig. 2.10). The most common of these are the newline character
\n and the tab character \t. To refer to characters that are special themselves (like
., *, [, and \), precede them with a backslash, (i.e., /\./, /\*/, /\[/, and /\\/).
Suppose we want to match dollar amounts like $199. Here's a first pattern, a dollar
sign followed by a string of digits:
/$[0-9]+/
Note that the $ character has a different function here than the end-of-line function
we discussed earlier. Most regular expression parsers are smart enough to realize
that $ here doesn’t mean end-of-line. (As a thought experiment, think about how
regex parsers might figure out the function of $ from the context.)
Now we just need to deal with fractions of dollars. We’ll add a decimal point
and two digits afterwards:
/$[0-9]+\.[0-9][0-9]/
This pattern only allows $199.99 but not $199. We need to make the cents
optional and to make sure we’re at a word boundary:
/(^|\W)$[0-9]+(\.[0-9][0-9])?\b/
One last catch! This pattern allows prices like $199999.99 which would be far
too expensive! We need to limit the dollars:
/(^|\W)$[0-9]{0,3}(\.[0-9][0-9])?\b/
Further fixes (like avoiding matching a dollar sign with no price after it) are left
as an exercise for the reader.
How about disk space? We’ll need to allow for optional fractions again (5.5 GB);
note the use of ? for making the final s optional, and the use of / */ to mean “zero
or more spaces” since there might always be extra spaces lying around:
/\b[0-9]+(\.[0-9]+)? *(GB|[Gg]igabytes?)\b/
Modifying this regular expression so that it only matches more than 500 GB is
left as an exercise for the reader.
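Both patterns can be tried directly in Python; a sketch with invented test strings. Note that in Python's re syntax the literal dollar sign has to be escaped as \$, since a bare $ is the end-of-line anchor:

import re

# Dollar amounts with up to three digits and optional cents
price_re = re.compile(r'(^|\W)\$[0-9]{0,3}(\.[0-9][0-9])?\b')
print(price_re.search('lunch cost $199.99').group())   # ' $199.99' (leading space matched by \W)
print(price_re.search('no dollars here'))              # None

# Disk sizes such as "5.5 GB" or "500 Gigabytes"
gb_re = re.compile(r'\b[0-9]+(\.[0-9]+)? *(GB|[Gg]igabytes?)\b')
print(gb_re.search('comes with 5.5 GB of storage').group())   # '5.5 GB'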
An important use of regular expressions is in substitutions. The substitution operator
s/regexp/pattern/ (used in Python and in Unix commands like vim or sed) allows a
string characterized by a regular expression to be replaced by another string:
s/colour/color/
It is often useful to refer back to a subpart of the string matched by the first pattern.
For example, to put angle brackets around all integers in a text, we can capture the
integer with parentheses and refer back to it with the number operator \1:
s/([0-9]+)/<\1>/
The parenthesis and number operators can also specify that a certain string or
expression must occur twice in the text. For example, suppose we are looking for
the pattern “the Xer they were, the Xer they will be”, where we want to constrain
the two X’s to be the same string. We do this by surrounding the first X with the
parenthesis operator, and replacing the second X with the number operator \1, as
follows:
/the (.*)er they were, the \1er they will be/
Here the \1 will be replaced by whatever string matched the first item in paren-
theses. So this will match the bigger they were, the bigger they will be but not the
bigger they were, the faster they will be.
This use of parentheses to store a pattern in memory is called a capture group.
Every time a capture group is used (i.e., parentheses surround a pattern), the re-
sulting match is stored in a numbered register. If you match two different sets of
parentheses, \2 means whatever matched the second capture group. Thus
/the (.*)er they (.*), the \1er we \2/
will match the faster they ran, the faster we ran but not the faster they ran, the faster
we ate. Similarly, the third capture group is stored in \3, the fourth is \4, and so on.
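The behavior of these registers is easy to check in Python; a brief sketch (the test sentences are invented):

import re

# A capture group stores what it matched in a numbered register; \1 refers
# back to the first group, \2 to the second, and so on.
pat = re.compile(r'the (.*)er they (.*), the \1er we \2')

print(bool(pat.search('the faster they ran, the faster we ran')))   # True
print(bool(pat.search('the faster they ran, the faster we ate')))   # False

# In substitutions, \1 refers to the captured text: angle brackets around integers
print(re.sub(r'([0-9]+)', r'<\1>', 'the 35 boxes'))   # 'the <35> boxes'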
Parentheses thus have a double function in regular expressions; they are used
to group terms for specifying the order in which operators should apply, and they
are used to capture something in a register. Occasionally we might want to use
parentheses for grouping, but don’t want to capture the resulting pattern in a register.
In that case we use a non-capturing group, which is specified by putting the special
commands ?: after the open parenthesis, in the form (?: pattern ).
For example,
/(?:some|a few) (people|cats) like some \1/
will match some cats like some cats but not some cats like some some.
Substitutions and capture groups are very useful in implementing simple chat-
bots like ELIZA (Weizenbaum, 1966). Recall that ELIZA simulates a Rogerian
psychologist by carrying on a conversation with the user, echoing their statements back to them.
Since multiple substitutions can apply to a given input, substitutions are assigned
a rank and applied in order. Creating patterns is the topic of Exercise 2.3, and we
return to the details of the ELIZA architecture in Chapter 15.
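A minimal sketch of such a substitution cascade in Python; the particular rules, their ordering, and the helper name eliza_respond are invented for illustration and are not the original ELIZA script:

import re

# Ordered list of (pattern, response) rules; earlier rules outrank later ones.
RULES = [
    (r'.*\bI AM (DEPRESSED|SAD)\b.*', r'I AM SORRY TO HEAR YOU ARE \1'),
    (r'.*\bALL\b.*',                  r'IN WHAT WAY'),
    (r'.*\bALWAYS\b.*',               r'CAN YOU THINK OF A SPECIFIC EXAMPLE'),
    (r'.*',                           r'PLEASE GO ON'),
]

def eliza_respond(utterance: str) -> str:
    text = utterance.upper().replace("I'M", 'I AM')
    for pattern, response in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            # expand() fills in \1, \2, ... from the capture groups of this match
            return m.expand(response)
    return 'PLEASE GO ON'

print(eliza_respond("I'm depressed much of the time"))   # I AM SORRY TO HEAR YOU ARE DEPRESSED
print(eliza_respond('Men are all alike'))                # IN WHAT WAY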
2.2 Words
Before we talk about processing words, we need to decide what counts as a word.
Let’s start by looking at one particular corpus (plural corpora), a computer-readable
collection of text or speech. For example the Brown corpus is a million-word col-
lection of samples from 500 written English texts from different genres (newspa-
per, fiction, non-fiction, academic, etc.), assembled at Brown University in 1963–64
(Kučera and Francis, 1967). How many words are in the following Brown sentence?
He stepped out into the hall, was delighted to encounter a water brother.
This sentence has 13 words if we don’t count punctuation marks as words, 15
if we count punctuation. Whether we treat period (“.”), comma (“,”), and so on as
words depends on the task. Punctuation is critical for finding boundaries of things
(commas, periods, colons) and for identifying some aspects of meaning (question
marks, exclamation marks, quotation marks). For some tasks, like part-of-speech
tagging or parsing or speech synthesis, we sometimes treat punctuation marks as if
they were separate words.
The Switchboard corpus of American English telephone conversations between
strangers was collected in the early 1990s; it contains 2430 conversations averaging
6 minutes each, totaling 240 hours of speech and about 3 million words (Godfrey
et al., 1992). Such corpora of spoken language introduce other complications with
regard to defining words. Let’s look at one utterance from Switchboard; an utter-
ance is the spoken correlate of a sentence:
I do uh main- mainly business data processing
This utterance has two kinds of disfluencies. The broken-off word main- is
called a fragment. Words like uh and um are called fillers or filled pauses. Should
we consider these to be words? Again, it depends on the application. If we are
building a speech transcription system, we might want to eventually strip out the
disfluencies.
But we also sometimes keep disfluencies around. Disfluencies like uh or um
are actually helpful in speech recognition in predicting the upcoming word, because
they may signal that the speaker is restarting the clause or idea, and so for speech
recognition they are treated as regular words. Because people use different disflu-
encies they can also be a cue to speaker identification. In fact Clark and Fox Tree
(2002) showed that uh and um have different meanings. What do you think they are?
Perhaps most important, in thinking about what is a word, we need to distinguish
two ways of talking about words that will be useful throughout the book. Word types
are the number of distinct words in a corpus; if the set of words in the vocabulary
is V, the number of types is the vocabulary size |V|. Word instances are the total
number N of running words.1
If we ignore punctuation, the following Brown sentence has 16 instances and 14
types:
They picnicked by the pool, then lay back on the grass and looked at the stars.
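The distinction is easy to compute; a small sketch in Python that strips the two punctuation marks and splits on whitespace:

sentence = "They picnicked by the pool, then lay back on the grass and looked at the stars."

instances = sentence.replace(',', '').replace('.', '').split()
types = set(instances)

print(len(instances), len(types))   # 16 14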
We still have decisions to make! For example, should we consider a capitalized
string (like They) and one that is uncapitalized (like they) to be the same word type?
The answer is that it depends on the task! They and they might be lumped together
as the same type in some tasks, like speech recognition, where we might just care
about getting the words in order and don’t care about the formatting, while for other
tasks, such as deciding whether a particular word is a noun or verb (part-of-speech
tagging) or whether a word is a name of a person or location (named-entity tag-
ging), capitalization is a useful feature and is retained. Sometimes we keep around
two versions of a particular NLP model, one with capitalization and one without
capitalization.
How many words are there in English? When we speak about the number of
words in the language, we are generally referring to word types. Fig. 2.11 shows
the rough numbers of types and instances computed from some English corpora.
1 In earlier tradition, and occasionally still, you might see word instances referred to as word tokens, but
we now try to reserve the word token instead to mean the output of word tokenization algorithms.
The larger the corpora we look at, the more word types we find, and in fact this
relationship between the number of types |V | and number of instances N is called
Herdan’s Law (Herdan, 1960) or Heaps’ Law (Heaps, 1978) after its discoverers
(in linguistics and information retrieval respectively). It is shown in Eq. 2.1, where
k and β are positive constants, and 0 < β < 1.
|V| = kN^β            (2.1)
The value of β depends on the corpus size and the genre, but at least for the large
corpora in Fig. 2.11, β ranges from .67 to .75. Roughly then we can say that the
vocabulary size for a text goes up significantly faster than the square root of its
length in words.
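Since Eq. 2.1 is a straight line in log space (log |V| = log k + β log N), β can be estimated from the type and instance counts of two corpora. A sketch with invented counts (the numbers below are made up for illustration):

import math

# Hypothetical (instances, types) counts for two corpora of the same genre
n1, v1 = 1_000_000, 38_000
n2, v2 = 100_000_000, 1_000_000

# beta is the slope of the line through the two points in log-log space
beta = (math.log(v2) - math.log(v1)) / (math.log(n2) - math.log(n1))
k = v1 / n1 ** beta

print(round(beta, 2), round(k, 1))   # roughly 0.71 and 2.1 for these invented counts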
It’s sometimes useful to make a further distinction. Consider inflected forms
like cats versus cat. We say these two words are different wordforms but have the
same lemma. A lemma is a set of lexical forms having the same stem, the same
major part-of-speech, and the same word sense. The wordform is the full inflected
or derived form of the word. The two wordforms cat and cats thus have the same
lemma, which we can represent as cat.
For morphologically complex languages like Arabic, we often need to deal with
lemmatization. For most tasks in English, however, wordforms are sufficient, and
when we talk about words in this book we almost always mean wordforms (al-
though we will discuss basic algorithms for lemmatization and the related task of
stemming below in Section 2.6). One of the situations even in English where we
talk about lemmas is when we measure the number of words in a dictionary. Dictio-
nary entries or boldface forms are a very rough approximation to (an upper bound
on) the number of lemmas (since some lemmas have multiple boldface forms). The
1989 edition of the Oxford English Dictionary had 615,000 entries.
Finally, we should note that in practice, for many NLP applications (for example
for neural language modeling) we don’t actually use words as our internal unit of
representation at all! We instead tokenize the input strings into tokens, which can
be words but can also be only parts of words. We’ll return to this tokenization
question when we introduce the BPE algorithm in Section 2.5.2.
2.3 Corpora
Words don’t appear out of nowhere. Any particular piece of text that we study
is produced by one or more specific speakers or writers, in a specific dialect of a
specific language, at a specific time, in a specific place, for a specific function.
Perhaps the most important dimension of variation is the language. NLP algo-
rithms are most useful when they apply across many languages. The world has 7097
languages at the time of this writing, according to the online Ethnologue catalog
(Simons and Fennig, 2018). It is important to test algorithms on more than one lan-
guage, and particularly on languages with different properties; by contrast there is
an unfortunate current tendency for NLP algorithms to be developed or tested just
on English (Bender, 2019). Even when algorithms are developed beyond English,
they tend to be developed for the official languages of large industrialized nations
(Chinese, Spanish, Japanese, German etc.), but we don’t want to limit tools to just
these few languages. Furthermore, most languages also have multiple varieties, of-
ten spoken in different regions or by different social groups. Thus, for example,
if we’re processing text that uses features of African American English (AAE) or
African American Vernacular English (AAVE)—the variations of English used by
millions of people in African American communities (King 2020)—we must use
NLP tools that function with features of those varieties. Twitter posts might use fea-
tures often used by speakers of African American English, such as constructions like
iont (I don’t in Mainstream American English (MAE)), or talmbout corresponding
to MAE talking about, both examples that influence word segmentation (Blodgett
et al. 2016, Jones 2015).
It’s also quite common for speakers or writers to use multiple languages in a
single communicative act, a phenomenon called code switching. Code switching
is enormously common across the world; here are examples showing Spanish and
(transliterated) Hindi code switching with English (Solorio et al. 2014, Jurgens et al.
2017):
(2.2) Por primera vez veo a @username actually being hateful! it was beautiful:)
[For the first time I get to see @username actually being hateful! it was
beautiful:) ]
(2.3) dost tha or ra- hega ... dont wory ... but dherya rakhe
[“he was and will remain a friend ... don’t worry ... but have faith”]
Another dimension of variation is the genre. The text that our algorithms must
process might come from newswire, fiction or non-fiction books, scientific articles,
Wikipedia, or religious texts. It might come from spoken genres like telephone
conversations, business meetings, police body-worn cameras, medical interviews,
or transcripts of television shows or movies. It might come from work situations
like doctors’ notes, legal text, or parliamentary or congressional proceedings.
Text also reflects the demographic characteristics of the writer (or speaker): their
age, gender, race, socioeconomic class can all influence the linguistic properties of
the text we are processing.
And finally, time matters too. Language changes over time, and for some lan-
guages we have good corpora of texts from different historical periods.
Because language is so situated, when developing computational models for lan-
guage processing from a corpus, it’s important to consider who produced the lan-
guage, in what context, for what purpose. How can a user of a dataset know all these
details? The best way is for the corpus creator to build a datasheet (Gebru et al.,
2020) or data statement (Bender et al., 2021) for each corpus. A datasheet specifies
properties of a dataset like:
Motivation: Why was the corpus collected, by whom, and who funded it?
Situation: When and in what situation was the text written/spoken? For example,
was there a task? Was the language originally spoken conversation, edited
text, social media communication, monologue vs. dialogue?
Language variety: What language (including dialect/region) was the corpus in?
Speaker demographics: What was, e.g., the age or gender of the text’s authors?
Collection process: How big is the data? If it is a subsample how was it sampled?
Was the data collected with consent? How was the data pre-processed, and
what metadata is available?
Annotation process: What are the annotations, what are the demographics of the
annotators, how were they trained, how was the data annotated?
Distribution: Are there copyright or other intellectual property restrictions?
In practice, tokenization needs to be fast, so the standard approach is to use
deterministic algorithms based on regular expressions compiled into efficient
finite state automata. For example, Fig. 2.12 shows an example of a basic regular
expression that can be used to tokenize English with the nltk.regexp_tokenize
function of the Python-based Natural Language Toolkit (NLTK) (Bird et al. 2009;
https://www.nltk.org).
Carefully designed deterministic algorithms can deal with the ambiguities that
arise, such as the fact that the apostrophe needs to be tokenized differently when used
as a genitive marker (as in the book’s cover), a quotative as in ‘The other class’, she
said, or in clitics like they’re.
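A sketch of this style of tokenization using nltk.regexp_tokenize; the pattern below is a simplified stand-in for the fuller pattern of Fig. 2.12, so its details are illustrative rather than definitive:

import nltk

# A simplified tokenization pattern: abbreviations with internal periods,
# currency amounts, words with optional internal hyphens, and ellipses.
pattern = r'''(?x)          # verbose flag: allow comments and whitespace
    (?:[A-Z]\.)+            # abbreviations, e.g. U.S.A.
  | \$?\d+(?:\.\d+)?%?      # currency and percentages, e.g. $12.40, 82%
  | \w+(?:-\w+)*            # words with optional internal hyphens
  | \.\.\.                  # ellipsis
  | [][.,;"'?():_`-]        # punctuation tokens
'''

text = "That U.S.A. poster-print costs $12.40..."
print(nltk.regexp_tokenize(text, pattern))
# ['That', 'U.S.A.', 'poster-print', 'costs', '$12.40', '...']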
Word tokenization is more complex in languages like written Chinese, Japanese,
and Thai, which do not use spaces to mark potential word-boundaries. In Chinese,
for example, words are composed of characters (called hanzi in Chinese). Each
character generally represents a single unit of meaning (called a morpheme) and is
pronounceable as a single syllable. Words are about 2.4 characters long on average.
But deciding what counts as a word in Chinese is complex. For example, consider
the following sentence:
(2.4) 姚明进入总决赛 yáo mı́ng jı̀n rù zǒng jué sài
“Yao Ming reaches the finals”
As Chen et al. (2017) point out, this could be treated as 3 words (‘Chinese Treebank’
segmentation):
(2.5) 姚明 进入 总决赛
YaoMing reaches finals
or as 5 words (‘Peking University’ segmentation):
(2.6) 姚 明 进入 总 决赛
Yao Ming reaches overall finals
Finally, it is possible in Chinese simply to ignore words altogether and use characters
as the basic elements, treating the sentence as a series of 7 characters:
(2.7) 姚 明 进 入 总 决 赛
Yao Ming enter enter overall decision game
In fact, for most Chinese NLP tasks it turns out to work better to take characters
rather than words as input, since characters are at a reasonable semantic level for
most applications, and since most word standards, by contrast, result in a huge vo-
cabulary with large numbers of very rare words (Li et al., 2019).
However, for Japanese and Thai the character is too small a unit, and so algorithms
for word segmentation are required. These can also be useful for Chinese
in the rare situations where word rather than character boundaries are required. The
standard segmentation algorithms for these languages use neural sequence mod-
els trained via supervised machine learning on hand-segmented training sets; we’ll
introduce sequence models in Chapter 8 and Chapter 9.
Figure 2.13 The token learner part of the BPE algorithm for taking a corpus broken up
into individual characters or bytes, and learning a vocabulary by iteratively merging tokens.
Figure adapted from Bostrom and Durrett (2020).
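Figure 2.13 itself is not reproduced above, but the BPE token learner can be sketched in a few lines of Python. This is a simplified sketch in the style of Sennrich et al. (2016), using a toy corpus (low, lowest, newer, wider, new) with an end-of-word symbol _; the helper names are our own:

from collections import Counter

def get_pair_counts(vocab):
    # Count how often each adjacent pair of symbols occurs, weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    # Replace every occurrence of the adjacent pair with a single merged symbol.
    new_vocab = {}
    for word, freq in vocab.items():
        symbols = word.split()
        merged, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                merged.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        new_vocab[' '.join(merged)] = freq
    return new_vocab

# Toy corpus: each word split into characters plus an end-of-word marker _
vocab = {'l o w _': 5, 'l o w e s t _': 2, 'n e w e r _': 6,
         'w i d e r _': 3, 'n e w _': 2}

merges = []
for _ in range(8):                                  # the number of merges k is a parameter
    pair_counts = get_pair_counts(vocab)
    best = max(pair_counts, key=pair_counts.get)    # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    merges.append(best)

print(merges)
# first merges for this toy corpus: ('e','r'), ('er','_'), ('n','e'), ('ne','w'), ...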
2.6.1 Lemmatization
For other natural language processing situations we also want two morphologically
different forms of a word to behave similarly. For example in web search, someone
may type the string woodchucks but a useful system might want to also return pages
that mention woodchuck with no s. This is especially common in morphologically
complex languages like Polish, where for example the word Warsaw has different
endings when it is the subject (Warszawa), or after a preposition like “in Warsaw” (w
Warszawie), or “to Warsaw” (do Warszawy), and so on. Lemmatization is the task
of determining that two words have the same root, despite their surface differences.
The words am, are, and is have the shared lemma be; the words dinner and dinners
both have the lemma dinner. Lemmatizing each of these forms to the same lemma
will let us find all mentions of words in Polish like Warsaw. The lemmatized form
of a sentence like He is reading detective stories would thus be He be read detective
story.
How is lemmatization done? The most sophisticated methods for lemmatization
involve complete morphological parsing of the word. Morphology is the study of
the way words are built up from smaller meaning-bearing units called morphemes.
Two broad classes of morphemes can be distinguished: stems—the central mor-
pheme of the word, supplying the main meaning—and affixes—adding “additional”
meanings of various kinds. So, for example, the word fox consists of one morpheme
(the morpheme fox) and the word cats consists of two: the morpheme cat and the
morpheme -s. A morphological parser takes a word like cats and parses it into the
two morphemes cat and s, or parses a Spanish word like amaren (‘if in the future
they would love’) into the morpheme amar ‘to love’, and the morphological features
3PL and future subjunctive.
The final row of the alignment gives the operation list for converting the
top string into the bottom string: d for deletion, s for substitution, i for insertion.

I N T E * N T I O N
| | | | | | | | | |
* E X E C U T I O N
d s s   i s

Figure 2.14 Representing the minimum edit distance between two strings as an alignment.
The final row gives the operation list for converting the top string into the bottom string: d for
deletion, s for substitution, i for insertion.
We can also assign a particular cost or weight to each of these operations. The
Levenshtein distance between two sequences is the simplest weighting factor in
which each of the three operations has a cost of 1 (Levenshtein, 1966)—we assume
that the substitution of a letter for itself, for example, t for t, has zero cost. The Lev-
enshtein distance between intention and execution is 5. Levenshtein also proposed
an alternative version of his metric in which each insertion or deletion has a cost of
1 and substitutions are not allowed. (This is equivalent to allowing substitution, but
giving each substitution a cost of 2 since any substitution can be represented by one
insertion and one deletion). Using this version, the Levenshtein distance between
intention and execution is 8.
Figure 2.15 Finding the edit distance viewed as a search problem. From intention, a single
edit can produce, for example, ntention (deleting the i), intecntion (inserting a c), or inxention
(substituting x for t); each new string is a state to search from.
The space of all possible edits is enormous, so we can’t search naively. However,
lots of distinct edit paths will end up in the same state (string), so rather than recom-
puting all those paths, we could just remember the shortest path to a state each time
we saw it. We can do this by using dynamic programming. Dynamic program-
ming is the name for a class of algorithms, first introduced by Bellman (1957), that
apply a table-driven method to solve problems by combining solutions to subprob-
lems. Some of the most commonly used algorithms in natural language processing
make use of dynamic programming, such as the Viterbi algorithm (Chapter 8) and
the CKY algorithm for parsing (Chapter 17).
The intuition of a dynamic programming problem is that a large problem can
be solved by properly combining the solutions to various subproblems. Consider
the shortest path of transformed words that represents the minimum edit distance
between the strings intention and execution shown in Fig. 2.16.
Imagine some string (perhaps it is exention) that is in this optimal path (whatever
it is). The intuition of dynamic programming is that if exention is in the optimal
i n t e n t i o n
delete i
n t e n t i o n
substitute n by e
e t e n t i o n
substitute t by x
e x e n t i o n
insert u
e x e n u t i o n
substitute n by c
e x e c u t i o n
Figure 2.16 Path from intention to execution.
operation list, then the optimal sequence must also include the optimal path from
intention to exention. Why? If there were a shorter path from intention to exention,
then we could use it instead, resulting in a shorter overall path, and the optimal
sequence wouldn’t be optimal, thus leading to a contradiction.
The minimum edit distance algorithm was named by Wagner and Fischer
(1974) but independently discovered by many people (see the Historical Notes sec-
tion of Chapter 8).
Let’s first define the minimum edit distance between two strings. Given two
strings, the source string X of length n, and target string Y of length m, we’ll define
D[i, j] as the edit distance between X[1..i] and Y[1..j], i.e., the first i characters of X
and the first j characters of Y. The edit distance between X and Y is thus D[n, m].
We’ll use dynamic programming to compute D[n, m] bottom up, combining so-
lutions to subproblems. In the base case, with a source substring of length i but an
empty target string, going from i characters to 0 requires i deletes. With a target
substring of length j but an empty source going from 0 characters to j characters
requires j inserts. Having computed D[i, j] for small i, j we then compute larger
D[i, j] based on previously computed smaller values. The value of D[i, j] is com-
puted by taking the minimum of the three possible paths through the matrix which
arrive there:
D[i, j] = min( D[i−1, j] + del-cost(source[i]),
               D[i, j−1] + ins-cost(target[j]),
               D[i−1, j−1] + sub-cost(source[i], target[j]) )            (2.8)
If we assume the version of Levenshtein distance in which the insertions and dele-
tions each have a cost of 1 (ins-cost(·) = del-cost(·) = 1), and substitutions have a
cost of 2 (except substitution of identical letters have zero cost), the computation for
D[i, j] becomes:
D[i, j] = min( D[i−1, j] + 1,
               D[i, j−1] + 1,
               D[i−1, j−1] + (2 if source[i] ≠ target[j]; 0 if source[i] = target[j]) )    (2.9)
The algorithm is summarized in Fig. 2.17; Fig. 2.18 shows the results of applying
the algorithm to the distance between intention and execution with the version of
Levenshtein in Eq. 2.9.
Alignment Knowing the minimum edit distance is useful for algorithms like find-
ing potential spelling error corrections. But the edit distance algorithm is important
in another way; with a small change, it can also provide the minimum cost align-
ment between two strings. Aligning two strings is useful throughout speech and
n ← L ENGTH(source)
m ← L ENGTH(target)
Create a distance matrix D[n+1,m+1]
# Initialization: the zeroth row and column is the distance from the empty string
D[0,0] = 0
for each row i from 1 to n do
D[i,0] ← D[i-1,0] + del-cost(source[i])
for each column j from 1 to m do
D[0,j] ← D[0, j-1] + ins-cost(target[j])
# Recurrence relation:
for each row i from 1 to n do
for each column j from 1 to m do
D[i, j] ← M IN( D[i−1, j] + del-cost(source[i]),
D[i−1, j−1] + sub-cost(source[i], target[j]),
D[i, j−1] + ins-cost(target[j]))
# Termination
return D[n,m]
Figure 2.17 The minimum edit distance algorithm, an example of the class of dynamic
programming algorithms. The various costs can either be fixed (e.g., ∀x, ins-cost(x) = 1)
or can be specific to the letter (to model the fact that some letters are more likely to be in-
serted than others). We assume that there is no cost for substituting a letter for itself (i.e.,
sub-cost(x, x) = 0).
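A direct Python rendering of the algorithm in Fig. 2.17, here specialized to the Levenshtein costs of Eq. 2.9 (insertion and deletion cost 1, substitution cost 2):

def min_edit_distance(source: str, target: str) -> int:
    n, m = len(source), len(target)
    # D[i][j] = edit distance between source[:i] and target[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]

    # Initialization: the zeroth row and column are distances from the empty string
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + 1                      # deletions
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + 1                      # insertions

    # Recurrence relation
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub_cost = 0 if source[i - 1] == target[j - 1] else 2
            D[i][j] = min(D[i - 1][j] + 1,             # deletion
                          D[i][j - 1] + 1,             # insertion
                          D[i - 1][j - 1] + sub_cost)  # substitution
    return D[n][m]

print(min_edit_distance('intention', 'execution'))   # 8, the lower-right cell of Fig. 2.18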
Src\Tar # e x e c u t i o n
# 0 1 2 3 4 5 6 7 8 9
i 1 2 3 4 5 6 7 6 7 8
n 2 3 4 5 6 7 8 7 8 7
t 3 4 5 6 7 8 7 8 9 8
e 4 3 4 5 6 7 8 9 10 9
n 5 4 5 6 7 8 9 10 11 10
t 6 5 6 7 8 9 8 9 10 11
i 7 6 7 8 9 10 9 8 9 10
o 8 7 8 9 10 11 10 9 8 9
n 9 8 9 10 11 12 11 10 9 8
Figure 2.18 Computation of minimum edit distance between intention and execution with
the algorithm of Fig. 2.17, using Levenshtein distance with cost of 1 for insertions or dele-
tions, 2 for substitutions.
# e x e c u t i o n
# 0 ←1 ←2 ←3 ←4 ←5 ←6 ←7 ←8 ←9
i ↑1 -←↑ 2 -←↑ 3 -←↑ 4 -←↑ 5 -←↑ 6 -←↑ 7 -6 ←7 ←8
n ↑2 -←↑ 3 -←↑ 4 -←↑ 5 -←↑ 6 -←↑ 7 -←↑ 8 ↑7 -←↑ 8 -7
t ↑3 -←↑ 4 -←↑ 5 -←↑ 6 -←↑ 7 -←↑ 8 -7 ←↑ 8 -←↑ 9 ↑8
e ↑4 -3 ←4 -← 5 ←6 ←7 ←↑ 8 -←↑ 9 -←↑ 10 ↑9
n ↑5 ↑4 -←↑ 5 -←↑ 6 -←↑ 7 -←↑ 8 -←↑ 9 -←↑ 10 -←↑ 11 -↑ 10
t ↑6 ↑5 -←↑ 6 -←↑ 7 -←↑ 8 -←↑ 9 -8 ←9 ← 10 ←↑ 11
i ↑7 ↑6 -←↑ 7 -←↑ 8 -←↑ 9 -←↑ 10 ↑9 -8 ←9 ← 10
o ↑8 ↑7 -←↑ 8 -←↑ 9 -←↑ 10 -←↑ 11 ↑ 10 ↑9 -8 ←9
n ↑9 ↑8 -←↑ 9 -←↑ 10 -←↑ 11 -←↑ 12 ↑ 11 ↑ 10 ↑9 -8
Figure 2.19 When entering a value in each cell, we mark which of the three neighboring
cells we came from with up to three arrows. After the table is full we compute an alignment
(minimum edit path) by using a backtrace, starting at the 8 in the lower-right corner and
following the arrows back. The sequence of bold cells represents one possible minimum cost
alignment between the two strings. Diagram design after Gusfield (1997).
While we worked our example with simple Levenshtein distance, the algorithm
in Fig. 2.17 allows arbitrary weights on the operations. For spelling correction, for
example, substitutions are more likely to happen between letters that are next to
each other on the keyboard. The Viterbi algorithm is a probabilistic extension of
minimum edit distance. Instead of computing the “minimum edit distance” between
two strings, Viterbi computes the “maximum probability alignment” of one string
with another. We’ll discuss this more in Chapter 8.
2.9 Summary
This chapter introduced a fundamental tool in language processing, the regular ex-
pression, and showed how to perform basic text normalization tasks including
word segmentation and normalization, sentence segmentation, and stemming.
We also introduced the important minimum edit distance algorithm for comparing
strings. Here’s a summary of the main points we covered about these ideas:
• The regular expression language is a powerful tool for pattern-matching.
• Basic operations in regular expressions include concatenation of symbols,
disjunction of symbols ([], |, and .), counters (*, +, and {n,m}), anchors (^, $),
and precedence operators ((,)).
Bibliographical and Historical Notes

Bellman (1984) explains why he chose the name dynamic programming:
“...The 1950s were not good years for mathematical research. [the]
Secretary of Defense ...had a pathological fear and hatred of the word,
research... I decided therefore to use the word, “programming”. I
wanted to get across the idea that this was dynamic, this was multi-
stage... I thought, let’s ... take a word that has an absolutely precise
meaning, namely dynamic... it’s impossible to use the word, dynamic,
in a pejorative sense. Try thinking of some combination that will pos-
sibly give it a pejorative meaning. It’s impossible. Thus, I thought
dynamic programming was a good name. It was something not even a
Congressman could object to.”
Exercises
2.1 Write regular expressions for the following languages.
1. the set of all alphabetic strings;
2. the set of all lower case alphabetic strings ending in a b;
3. the set of all strings from the alphabet a, b such that each a is immedi-
ately preceded by and immediately followed by a b;
2.2 Write regular expressions for the following languages. By “word”, we mean
an alphabetic string separated from other words by whitespace, any relevant
punctuation, line breaks, and so forth.
1. the set of all strings with two consecutive repeated words (e.g., “Hum-
bert Humbert” and “the the” but not “the bug” or “the big bug”);
2. all strings that start at the beginning of the line with an integer and that
end at the end of the line with a word;
3. all strings that have both the word grotto and the word raven in them
(but not, e.g., words like grottos that merely contain the word grotto);
4. write a pattern that places the first word of an English sentence in a
register. Deal with punctuation.
2.3 Implement an ELIZA-like program, using substitutions such as those described
on page 10. You might want to choose a different domain than a Rogerian psy-
chologist, although keep in mind that you would need a domain in which your
program can legitimately engage in a lot of simple repetition.
2.4 Compute the edit distance (using insertion cost 1, deletion cost 1, substitution
cost 1) of “leda” to “deal”. Show your work (using the edit distance grid).
2.5 Figure out whether drive is closer to brief or to divers and what the edit dis-
tance is to each. You may use any version of distance that you like.
2.6 Now implement a minimum edit distance algorithm and use your hand-computed
results to check your code.
2.7 Augment the minimum edit distance algorithm to output an alignment; you
will need to store pointers and add a stage to compute the backtrace.
Baayen, R. H. 2001. Word frequency distributions. Springer.
Bellman, R. 1957. Dynamic Programming. Princeton University Press.
Bellman, R. 1984. Eye of the Hurricane: an autobiography. World Scientific Singapore.
Bender, E. M. 2019. The #BenderRule: On naming the languages we study and why it matters. Blog post.
Bender, E. M., B. Friedman, and A. McMillan-Major. 2021. A guide for writing data statements for natural language processing. Available at http://techpolicylab.uw.edu/data-statements/.
Bird, S., E. Klein, and E. Loper. 2009. Natural Language Processing with Python. O’Reilly.
Blodgett, S. L., L. Green, and B. O’Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. EMNLP.
Bostrom, K. and G. Durrett. 2020. Byte pair encoding is suboptimal for language model pretraining. Findings of EMNLP.
Chen, X., Z. Shi, X. Qiu, and X. Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. ACL.
Church, K. W. 1994. Unix for Poets. Slides from 2nd ELSNET Summer School and unpublished paper ms.
Clark, H. H. and J. E. Fox Tree. 2002. Using uh and um in spontaneous speaking. Cognition, 84:73–111.
Egghe, L. 2007. Untangling Herdan’s law and Heaps’ law: Mathematical and informetric arguments. JASIST, 58(5):702–709.
Gebru, T., J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford. 2020. Datasheets for datasets. ArXiv.
Godfrey, J., E. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. ICASSP.
Gusfield, D. 1997. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press.
Heaps, H. S. 1978. Information retrieval. Computational and theoretical aspects. Academic Press.
Herdan, G. 1960. Type-token mathematics. Mouton.
Jones, T. 2015. Toward a description of African American Vernacular English dialect regions using “Black Twitter”. American Speech, 90(4):403–440.
Jurgens, D., Y. Tsvetkov, and D. Jurafsky. 2017. Incorporating dialectal variability for socially equitable language identification. ACL.
King, S. 2020. From African American Vernacular English to African American Language: Rethinking the study of race and language in African Americans’ speech. Annual Review of Linguistics, 6:285–300.
Kiss, T. and J. Strunk. 2006. Unsupervised multilingual sentence boundary detection. Computational Linguistics, 32(4):485–525.
Kleene, S. C. 1951. Representation of events in nerve nets and finite automata. Technical Report RM-704, RAND Corporation. RAND Research Memorandum.
Kleene, S. C. 1956. Representation of events in nerve nets and finite automata. In C. Shannon and J. McCarthy, editors, Automata Studies, pages 3–41. Princeton University Press.
Krovetz, R. 1993. Viewing morphology as an inference process. SIGIR-93.
Kruskal, J. B. 1983. An overview of sequence comparison. In D. Sankoff and J. B. Kruskal, editors, Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison, pages 1–44. Addison-Wesley.
Kudo, T. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. ACL.
Kudo, T. and J. Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. EMNLP.
Kučera, H. and W. N. Francis. 1967. Computational Analysis of Present-Day American English. Brown University Press, Providence, RI.
Levenshtein, V. I. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Cybernetics and Control Theory, 10(8):707–710. Original in Doklady Akademii Nauk SSSR 163(4): 845–848 (1965).
Li, X., Y. Meng, X. Sun, Q. Han, A. Yuan, and J. Li. 2019. Is word segmentation necessary for deep learning of Chinese representations? ACL.
Lovins, J. B. 1968. Development of a stemming algorithm. Mechanical Translation and Computational Linguistics, 11(1–2):9–13.
Manning, C. D., M. Surdeanu, J. Bauer, J. Finkel, S. Bethard, and D. McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. ACL.
NIST. 2005. Speech recognition scoring toolkit (sctk) version 2.1. http://www.nist.gov/speech/tools/.
O’Connor, B., M. Krieger, and D. Ahn. 2010. Tweetmotif: Exploratory search and topic summarization for Twitter. ICWSM.
Packard, D. W. 1973. Computer-assisted morphological analysis of ancient Greek. COLING.
Palmer, D. 2012. Text preprocessing. In N. Indurkhya and F. J. Damerau, editors, Handbook of Natural Language Processing, pages 9–30. CRC Press.
Porter, M. F. 1980. An algorithm for suffix stripping. Program, 14(3):130–137.
Sennrich, R., B. Haddow, and A. Birch. 2016. Neural machine translation of rare words with subword units. ACL.
Simons, G. F. and C. D. Fennig. 2018. Ethnologue: Languages of the world, 21st edition. SIL International.
Solorio, T., E. Blair, S. Maharjan, S. Bethard, M. Diab, M. Ghoneim, A. Hawwari, F. AlGhamdi, J. Hirschberg, A. Chang, and P. Fung. 2014. Overview for the first shared task on language identification in code-switched data. First Workshop on Computational Approaches to Code Switching.
Thompson, K. 1968. Regular expression search algorithm. CACM, 11(6):419–422.
Wagner, R. A. and M. J. Fischer. 1974. The string-to-string correction problem. Journal of the ACM, 21:168–173.