
Language Models

Instructor: Rada Mihalcea



Note: some of the material in this slide set was adapted from an NLP course
taught by Bonnie Dorr at Univ. of Maryland
Slide 1
Language Models
A language model
an abstract representation of a (natural) language phenomenon.
an approximation to real language

Statistical models
predictive
explicative


Slide 2
Claim
A useful part of the knowledge needed to allow letter/word
predictions can be captured using simple statistical
techniques.
Compute:
probability of a sequence
likelihood of letters/words co-occurring
Why would we want to do this?
Rank the likelihood of sequences containing various alternative
hypotheses
Assess the likelihood of a hypothesis


Slide 3
Outline
Applications of language models
Approximating natural language
The chain rule
Learning N-gram models
Smoothing for language models
Distribution of words in language: Zipf's law and Heaps' law
Slide 4
Why is This Useful?
Speech recognition
Handwriting recognition
Spelling correction
Machine translation systems
Optical character recognizers
Slide 5
Handwriting Recognition
Assume a note is given to a bank teller, which the teller reads as "I have
a gub." (cf. Woody Allen)
NLP to the rescue:
"gub" is not a word
"gun", "gum", "Gus", and "gull" are words, but "gun" has a higher probability in the
context of a bank
Slide 6
Real Word Spelling Errors
They are leaving in about fifteen minuets to go to her
house.
The study was conducted mainly be John Black.
Hopefully, all with continue smoothly in my absence.
Can they lave him my messages?
I need to notified the bank of.
He is trying to fine out.

Slide 7
For Spell Checkers
Collect list of commonly substituted words
piece/peace, whether/weather, their/there ...

Example:
On Tuesday, the whether
On Tuesday, the weather
Slide 8
Other Applications
Machine translation
Text summarization
Optical character recognition
Slide 9
Outline
Applications of language models
Approximating natural language
The chain rule
Learning N-gram models
Smoothing for language models
Distribution of words in language: Zipf's law and Heaps' law
Slide 10
Letter-based Language Models

Shannon's Game
Guess the next letter:


Slide 11
Letter-based Language Models

Shannon's Game
Guess the next letter:
W

Slide 12
Letter-based Language Models

Shannon's Game
Guess the next letter:
Wh


Slide 13

Shannon's Game
Guess the next letter:
Wha
Letter-based Language Models
Slide 14

Shannon's Game
Guess the next letter:
What

Letter-based Language Models
Slide 15

Shannon's Game
Guess the next letter:
What d

Letter-based Language Models
Slide 16

Shannon's Game
Guess the next letter:
What do

Letter-based Language Models
Slide 17

Shannon's Game
Guess the next letter:
What do you think the next letter is?


Letter-based Language Models
Slide 18

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:


Letter-based Language Models
Slide 19

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:
What

Letter-based Language Models
Slide 20

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:
What do


Letter-based Language Models
Slide 21

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:
What do you

Letter-based Language Models
Slide 22

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:
What do you think

Letter-based Language Models
Slide 23

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:
What do you think the

Letter-based Language Models
Slide 24

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:
What do you think the next

Letter-based Language Models
Slide 25

Shannon's Game
Guess the next letter:
What do you think the next letter is?
Guess the next word:
What do you think the next word is?


Letter-based Language Models
Slide 26
Approximating Natural Language Words
zero-order approximation: letter sequences are
independent of each other and all equally
probable:

xfoml rxkhrjffjuj zlpwcwkcy ffjeyvkcqsghyd

Slide 27
Approximating Natural Language Words
first-order approximation: letters are independent,
but occur with the frequencies of English text:

ocro hli rgwr nmielwis eu ll nbnesebya th eei alhenhtppa
oobttva nah
Slide 28
second-order approximation: the probability that a
letter appears depends on the previous letter

on ie antsoutinys are t inctore st bes deamy achin d
ilonasive tucoowe at teasonare fuzo tizin andy tobe
seace ctisbe

Approximating Natural Language Words
Slide 29
third-order approximation: the probability that a
certain letter appears depends on the two
previous letters

in no ist lat whey cratict froure birs grocid pondenome of
demonstures of the reptagin is regoactiona of cre
Approximating Natural Language Words
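
As an editorial aside (not from the original slides), a third-order letter approximation like the one above can be generated with a few lines of Python: estimate letter-trigram counts from a sample text, then sample each next letter conditioned on the previous two. The sample_text variable is a placeholder for any English corpus.

import random
from collections import defaultdict, Counter

def train_letter_trigrams(text):
    """Count which letter follows each pair of letters."""
    counts = defaultdict(Counter)
    for i in range(len(text) - 2):
        counts[text[i:i+2]][text[i+2]] += 1
    return counts

def generate(counts, length=60, seed="th"):
    """Sample letters one at a time, conditioned on the previous two letters."""
    out = seed
    for _ in range(length):
        followers = counts.get(out[-2:])
        if not followers:                          # unseen history: restart from a random one
            followers = counts[random.choice(list(counts))]
        letters, weights = zip(*followers.items())
        out += random.choices(letters, weights=weights)[0]
    return out

sample_text = "this is only a placeholder corpus of english text ..."  # hypothetical corpus
model = train_letter_trigrams(sample_text)
print(generate(model))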
Slide 30
Higher frequency trigrams for different languages:
English: THE, ING, ENT, ION
German: EIN, ICH, DEN, DER
French: ENT, QUE, LES, ION
Italian: CHE, ERE, ZIO, DEL
Spanish: QUE, EST, ARA, ADO
Approximating Natural Language Words
Slide 31
Language Syllabic Similarity
Anca Dinu, Liviu Dinu
Languages within the same family are more similar
to each other than to languages outside the family
How similar (sounding) are languages within the
same family?
Syllable-based similarity
Slide 32
Syllable Ranks
Gather the most frequent words in each language
in the family;
Syllabify words;
Rank syllables;
Compute language similarity based on syllable
rankings;
Slide 33
Example Analysis: the Romance Family
Syllables in Romance languages
Slide 34
Latin-Romance Languages Similarity
servus
servus
ciao
Slide 35
Outline
Applications of language models
Approximating natural language
The chain rule
Learning N-gram models
Smoothing for language models
Distribution of words in language: Zipf's law and Heaps' law
Slide 36
Terminology
Sentence: unit of written language
Utterance: unit of spoken language
Word Form: the inflected form that appears in the
corpus
Lemma: lexical forms having the same stem, part of
speech, and word sense
Types (V): number of distinct words that might
appear in a corpus (vocabulary size)
Tokens (N_T): total number of words in a corpus
Types seen so far (T): number of distinct words seen
so far in the corpus (smaller than V and N_T)
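
A quick illustration of these counts (an editorial sketch, using naive whitespace tokenization on a toy corpus):

corpus = "the cat sat on the mat because the mat was warm"  # toy corpus (placeholder)
tokens = corpus.split()          # crude whitespace tokenization
types = set(tokens)              # distinct word forms seen so far

print("tokens (N_T):", len(tokens))   # 11
print("types  (T):  ", len(types))    # 8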
Slide 37
Word-based Language Models
A model that enables one to compute the probability, or
likelihood, of a sentence S, P(S).
Simple: Every word follows every other word w/ equal
probability (0-gram)
Assume |V| is the size of the vocabulary V
Likelihood of sentence S of length n is 1/|V| × 1/|V| × … × 1/|V| = (1/|V|)^n
If English has 100,000 words, the probability of each next word is 1/100,000 =
.00001
Slide 38
Word Prediction: Simple vs. Smart
Smarter: probability of each next word is related to
word frequency (unigram)
Likelihood of sentence S = P(w1) × P(w2) × … × P(wn)
Assumes probability of each word is independent of probabilities of
other words.

Even smarter: Look at probability given previous
words (N-gram)
Likelihood of sentence S = P(w1) × P(w2|w1) × … × P(wn|wn-1)
Assumes probability of each word is dependent on probabilities of
other words.
Slide 39
Chain Rule
Conditional Probability
P(w1, w2) = P(w1) P(w2 | w1)
The Chain Rule generalizes to multiple events
P(w1, …, wn) = P(w1) P(w2 | w1) P(w3 | w1, w2) … P(wn | w1 … wn-1)
Examples:
P(the dog) = P(the) P(dog | the)
P(the dog barks) = P(the) P(dog | the) P(barks| the dog)
Slide 40
Relative Frequencies and Conditional
Probabilities
Relative word frequencies are better than equal
probabilities for all words
In a corpus with 10K word types, each word would have
P(w) = 1/10K
Does not match our intuitions that different words are
more likely to occur (e.g. the)
Conditional probability more useful than
individual relative word frequencies
dog may be relatively rare in a corpus
But if we see barking, P(dog|barking) may be very large
Slide 41
For a Word String
In general, the probability of a complete string of
words w1^n = w1 … wn is
P(w1^n) = P(w1) P(w2 | w1) P(w3 | w1 w2) … P(wn | w1 … wn-1)
        = ∏ (k = 1 to n) P(wk | w1 … wk-1)

But this approach to determining the probability of
a word sequence is not very helpful in general
it gets to be computationally very expensive
Slide 42
Markov Assumption
How do we compute P(wn | w1 … wn-1)?
Trick: instead of P(rabbit | I saw a), we use P(rabbit | a).
This lets us collect statistics in practice
A bigram model: P(the barking dog) =
P(the | <start>) P(barking | the) P(dog | barking)
Markov models are the class of probabilistic models
that assume that we can predict the probability of
some future unit without looking too far into the
past
Specifically, for N=2 (bigram):
P(w1^n) ≈ ∏ (k = 1 to n) P(wk | wk-1),   with w0 = <start>
Order of a Markov model: length of the prior context
bigram is first order, trigram is second order, …
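
A minimal sketch of the bigram decomposition above; the probability values below are invented purely for illustration:

# Hypothetical bigram probabilities P(wk | wk-1); the numbers are made up.
bigram_p = {
    ("<start>", "the"): 0.20,
    ("the", "barking"): 0.01,
    ("barking", "dog"): 0.30,
}

def sentence_probability(words, bigram_p):
    """P(w1..wn) under a first-order Markov (bigram) model."""
    prob = 1.0
    prev = "<start>"
    for w in words:
        prob *= bigram_p.get((prev, w), 0.0)   # unseen bigrams get probability 0 (no smoothing yet)
        prev = w
    return prob

print(sentence_probability(["the", "barking", "dog"], bigram_p))  # 0.2 * 0.01 * 0.3 = 0.0006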
Slide 43
Counting Words in Corpora
What is a word?
e.g., are cat and cats the same word?
September and Sept?
zero and oh?
Is seventy-two one word or two? AT&T?
Punctuation?
How many words are there in English?
Where do we find the things to count?

Slide 44
Outline
Applications of language models
Approximating natural language
The chain rule
Learning N-gram models
Smoothing for language models
Distribution of words in language: Zipf's law and Heaps' law
Slide 45
Simple N-Grams
An N-gram model uses the previous N-1 words to predict
the next one:
P(wn | wn-N+1 wn-N+2 … wn-1)
unigrams: P(dog)
bigrams: P(dog | big)
trigrams: P(dog | the big)
quadrigrams: P(dog | chasing the big)
Slide 46
Using N-Grams
Recall that
N-gram: P(wn | w1 … wn-1) ≈ P(wn | wn-N+1 … wn-1)
Bigram: P(w1^n) ≈ ∏ (k = 1 to n) P(wk | wk-1)

For a bigram grammar
P(sentence) can be approximated by multiplying all the bigram
probabilities in the sequence
Example:
P(I want to eat Chinese food) =
P(I | <start>) P(want | I) P(to | want) P(eat | to)
P(Chinese | eat) P(food | Chinese)
Slide 47
A Bigram Grammar Fragment
Eat on .16 Eat Thai .03
Eat some .06 Eat breakfast .03
Eat lunch .06 Eat in .02
Eat dinner .05 Eat Chinese .02
Eat at .04 Eat Mexican .02
Eat a .04 Eat tomorrow .01
Eat Indian .04 Eat dessert .007
Eat today .03 Eat British .001
Slide 48
Additional Grammar
<start> I .25 Want some .04
<start> I'd .06 Want Thai .01
<start> Tell .04 To eat .26
<start> I'm .02 To have .14
I want .32 To spend .09
I would .29 To be .02
I don't .08 British food .60
I have .04 British restaurant .15
Want to .65 British cuisine .01
Want a .05 British lunch .01
Slide 49
Computing Sentence Probability
P(I want to eat British food) = P(I|<start>) P(want|I)
P(to|want) P(eat|to) P(British|eat) P(food|British) =
.25 × .32 × .65 × .26 × .001 × .60 ≈ .0000081
vs.
P(I want to eat Chinese food) ≈ .00015
Probabilities seem to capture "syntactic" facts and "world
knowledge"
"eat" is often followed by an NP
British food is not too popular
N-gram models can be trained by counting and
normalization
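
The arithmetic can be checked directly in code. Note that P(food | Chinese) does not appear in the grammar fragment above; the value .56 used below is an assumption chosen so the product matches the ≈ .00015 on the slide.

import math

# P(I|<start>), P(want|I), P(to|want), P(eat|to), P(British|eat), P(food|British)
british = [0.25, 0.32, 0.65, 0.26, 0.001, 0.60]
# Same prefix, then P(Chinese|eat) = .02 and an assumed P(food|Chinese) = .56
chinese = [0.25, 0.32, 0.65, 0.26, 0.02, 0.56]

print(math.prod(british))   # ≈ 8.1e-06
print(math.prod(chinese))   # ≈ 1.5e-04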
Slide 50
N-grams Issues
Sparse data
Not all N-grams found in training data, need smoothing
Change of domain
Train on WSJ, attempt to identify Shakespeare won't work
N-grams more reliable than (N-1)-grams
But even more sparse
Generating Shakespeare sentences with random unigrams...
Every enter now severally so, let
With bigrams...
What means, sir. I confess she? then all sorts, he is trim, captain.
Trigrams
Sweet prince, Falstaff shall die.
Slide 51
N-grams Issues
Determine reliable sentence probability estimates
should have smoothing capabilities (avoid zero counts)
apply back-off strategies: if N-grams are not possible, back off to
(N-1)-grams


P(And nothing but the truth) ~ 0.001

P(And nuts sing on the roof) ~ 0
Slide 52
Bigram Counts
          I     Want   To    Eat   Chinese  Food  Lunch
I         8     1087   0     13    0        0     0
Want      3     0      786   0     6        8     6
To        3     0      10    860   3        0     12
Eat       0     0      2     0     19       2     52
Chinese   2     0      0     0     0        120   1
Food      19    0      17    0     0        0     0
Lunch     4     0      0     0     0        1     0
Slide 53
Bigram Probabilities:
Use Unigram Count
Normalization: divide bigram count by unigram count
of first word.




Computing the probability of "I I":
P(I | I) = C(I I) / C(I) = 8 / 3437 = .0023
A bigram grammar is a V×V matrix of probabilities,
where V is the vocabulary size
Unigram counts:
I       Want   To     Eat   Chinese  Food   Lunch
3437    1215   3256   938   213      1506   459
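
A short sketch of this normalization, using a few counts from the tables above:

unigram_counts = {"I": 3437, "Want": 1215, "To": 3256, "Eat": 938,
                  "Chinese": 213, "Food": 1506, "Lunch": 459}

# A few bigram counts taken from the table above.
bigram_counts = {("I", "I"): 8, ("I", "Want"): 1087, ("Want", "To"): 786, ("Eat", "Chinese"): 19}

# P(w2 | w1) = C(w1 w2) / C(w1)
bigram_probs = {(w1, w2): c / unigram_counts[w1] for (w1, w2), c in bigram_counts.items()}

print(round(bigram_probs[("I", "I")], 4))      # 0.0023
print(round(bigram_probs[("I", "Want")], 4))   # 0.3163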
Slide 54
Learning a Bigram Grammar
The formula
P(wn | wn-1) = C(wn-1 wn) / C(wn-1)
is used for bigram parameter estimation
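
A minimal sketch of this estimation from raw text; the toy corpus, whitespace tokenization, and <start> padding are simplifying assumptions:

from collections import Counter

sentences = ["I want to eat Chinese food", "I want to eat lunch"]  # toy training data

unigrams, bigrams = Counter(), Counter()
for s in sentences:
    words = ["<start>"] + s.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def p(w, prev):
    """Maximum-likelihood estimate P(w | prev) = C(prev w) / C(prev)."""
    return bigrams[(prev, w)] / unigrams[prev]

print(p("want", "I"))   # 1.0 in this tiny corpus
print(p("eat", "to"))   # 1.0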

Slide 55
Training and Testing
Probabilities come from a training corpus, which is
used to design the model.
overly narrow corpus: probabilities don't generalize
overly general corpus: probabilities don't reflect task or
domain
A separate test corpus is used to evaluate the
model, typically using standard metrics
held out test set
cross validation
evaluation differences should be statistically significant

Slide 56
Outline
Applications of language models
Approximating natural language
The chain rule
Learning N-gram models
Smoothing for language models
Distribution of words in language: Zipf's law and Heaps' law
Slide 57
Smoothing Techniques
Every N-gram training matrix is sparse, even for
very large corpora (Zipf's law)

Solution: estimate the likelihood of unseen N-
grams
Slide 58
Add-one Smoothing
Add 1 to every N-gram count

Unsmoothed: P(wn | wn-1) = C(wn-1 wn) / C(wn-1)

Smoothed:   P(wn | wn-1) = [C(wn-1 wn) + 1] / [C(wn-1) + V]
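
A small sketch comparing the two estimates, reusing C(I I) = 8 and C(I) = 3437 from the earlier tables and the V = 1500 assumed on the next slide:

def p_mle(c_bigram, c_prev):
    """Unsmoothed maximum-likelihood estimate."""
    return c_bigram / c_prev

def p_add_one(c_bigram, c_prev, V):
    """Add-one (Laplace) smoothed estimate."""
    return (c_bigram + 1) / (c_prev + V)

V = 1500                          # vocabulary size assumed on the next slide
print(p_mle(8, 3437))             # P(I | I) ≈ 0.0023
print(p_add_one(8, 3437, V))      # ≈ 0.0018  (mass shifted toward unseen bigrams)
print(p_add_one(0, 3437, V))      # ≈ 0.0002  (an unseen bigram now gets non-zero probability)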
Slide 59
Add-one Smoothed Bigrams
Unsmoothed: P(wn | wn-1) = C(wn-1 wn) / C(wn-1)
Add-one:    P(wn | wn-1) = [C(wn-1 wn) + 1] / [C(wn-1) + V]
Assume a vocabulary V=1500
Slide 60
Other Smoothing Methods:
Good-Turing
Imagine you are fishing
You have caught 10 Carp, 3 Cod,
2 tuna, 1 trout, 1 salmon, 1 eel.
How likely is it that next species
is new? 3/18
How likely is it that next is tuna?
Less than 2/18
Slide 61
Smoothing: Good Turing
How many species (words) were
seen once? Estimate for how
many are unseen.
All other estimates are adjusted
(down) to give probabilities for
unseen
Slide 62
Smoothing:
Good Turing Example
10 Carp, 3 Cod, 2 tuna, 1 trout, 1 salmon, 1 eel.
How likely is new data (p0)?
Let n1 be the number of species occurring once (3), and N the total count (18).
p0 = n1/N = 3/18
How likely is eel? Use the adjusted count 1*:
n1 = 3, n2 = 1
1* = (1+1) × n2/n1 = 2 × 1/3 = 2/3
P(eel) = 1*/N = (2/3)/18 = 1/27


Notes:
p0 refers to the probability of seeing any new data. The probability of seeing a specific
unknown item is much smaller, p0 / (number of unknown items), under the assumption
that all unknown events occur with equal probability.
For the words with the highest number of occurrences, use the actual probability
(no smoothing).
For the words for which the next count-of-counts n[r+1] is 0, go to the next rank n[r+2].
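
The same calculation as a short sketch in code, applying the general adjusted-count formula c* = (c+1) n[c+1] / n[c] to the count-1 species:

from collections import Counter

catch = ["carp"]*10 + ["cod"]*3 + ["tuna"]*2 + ["trout", "salmon", "eel"]
counts = Counter(catch)                 # species -> how many were caught
N = sum(counts.values())                # 18 fish in total
n = Counter(counts.values())            # n[c] = number of species seen exactly c times

p_new = n[1] / N                        # probability the next fish is a new species: 3/18
c_star_1 = (1 + 1) * n[2] / n[1]        # adjusted count for species seen once: 2 * 1/3 = 2/3
p_eel = c_star_1 / N                    # (2/3)/18 = 1/27

print(p_new, c_star_1, p_eel)           # 0.1666..., 0.666..., 0.0370...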
Slide 63
Back-off Methods
Notice that:
N-grams are more precise than (N-1)-grams (remember
the Shakespeare example)
But also, N-grams are more sparse than (N-1)-grams
How to combine things?
Attempt N-grams and back off to (N-1)-grams if counts are not
available
E.g. attempt prediction using 4-grams, and back off to
trigrams (or bigrams, or unigrams) if counts are not
available
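
A rough sketch of this idea: a simple recursive back-off that ignores the discounting a full Katz back-off would require, with hypothetical count tables:

def backoff_prob(history, word, counts, unigram_total):
    """Back off from the longest available history to shorter ones, finally to the unigram."""
    while history:
        h = tuple(history)
        if counts.get(h + (word,), 0) > 0:              # n-gram (history + word) was seen
            return counts[h + (word,)] / counts[h]
        history = history[1:]                           # drop the oldest word and retry
    return counts.get((word,), 0) / unigram_total       # unigram fallback

# Hypothetical counts: trigram, bigram and unigram counts mixed in one dictionary.
counts = {("to", "eat", "Chinese"): 0, ("eat", "Chinese"): 19, ("eat",): 938, ("Chinese",): 213}
print(backoff_prob(["to", "eat"], "Chinese", counts, unigram_total=10000))  # backs off to P(Chinese | eat)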
Slide 64
Outline
Applications of language models
Approximating natural language
The chain rule
Learning N-gram models
Smoothing for language models
Distribution of words in language: Zipf's law and Heaps' law
Slide 65
Text properties (formalized)
Sample word frequency data
Slide 66
Zipf's Law
Rank (r): the numerical position of a word in a list sorted
by decreasing frequency (f).
Zipf (1949) discovered that:

f × r = k   (for constant k)

If the probability of the word of rank r is p_r and N is the total
number of word occurrences, then:

p_r = f/N = A/r,   where A ≈ 0.1 is a constant independent of the corpus
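
A quick empirical check of rank × frequency ≈ constant (a sketch only; corpus.txt is a placeholder for any tokenized text):

from collections import Counter

text = open("corpus.txt").read().lower().split()    # placeholder corpus file
freqs = Counter(text)

ranked = sorted(freqs.values(), reverse=True)       # frequencies in decreasing rank order
N = sum(ranked)
for r in (1, 10, 100, 1000):
    if r <= len(ranked):
        f = ranked[r - 1]
        print(r, f, r * f / N)                      # Zipf predicts r*f/N ≈ A ≈ 0.1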
Slide 67
Zipf curve
Slide 68
Predicting Occurrence Frequencies
By Zipf, a word appearing n times has rank r_n = AN/n
If several words may occur n times, assume rank r_n applies to the last of
these.
Therefore, r_n words occur n or more times and r_(n+1) words occur n+1 or
more times.
So, the number of words appearing exactly n times is:

I_n = r_n - r_(n+1) = AN/n - AN/(n+1) = AN / (n(n+1))

The fraction of words with frequency n is:

I_n / D = 1 / (n(n+1)),   where D = AN is the total number of distinct words

The fraction of words appearing only once is therefore 1/2.
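
A two-line check of the predicted fractions 1/(n(n+1)):

for n in range(1, 6):
    print(n, 1 / (n * (n + 1)))   # 0.5, 0.167, 0.083, 0.05, 0.033 -> half of all types are expected to occur once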
Slide 69
Zipf's Law Impact on Language Analysis
Good News: Stopwords will account for a large
fraction of text so eliminating them greatly
reduces size of vocabulary in a text

Bad News: For most words, gathering sufficient
data for meaningful statistical analysis (e.g. for
correlation analysis for query expansion) is
difficult since they are extremely rare.
Slide 70
Vocabulary Growth
How does the size of the overall vocabulary
(number of unique words) grow with the size of
the corpus?
This determines how the size of the inverted index
will scale with the size of the corpus.
Vocabulary not really upper-bounded due to
proper names, typos, etc.
Slide 71
Heaps' Law
If V is the size of the vocabulary and n is the length of
the corpus in words, then:

V = K · n^β,   with constants K and 0 < β < 1

Typical constants:
K ≈ 10–100
β ≈ 0.4–0.6 (approximately square-root growth)
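
A small numeric illustration of the growth curve; K = 50 and β = 0.5 are arbitrary choices within the typical ranges above:

K, beta = 50, 0.5   # arbitrary values within the typical ranges above

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    V = K * n ** beta                 # predicted vocabulary size
    print(f"{n:>10,} tokens -> about {int(V):>8,} distinct words")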
Slide 72
Heaps' Law Data
Slide 73
Letter-based models: do we need
them? (a discovery)
Aoccdrnig to rscheearch at an Elingsh uinervtisy, it deosn't mttaer
in waht oredr the ltteers in a wrod are, olny taht the frist and
lsat ltteres are at the rghit pcleas. The rset can be a toatl mses
and you can sitll raed it wouthit a porbelm. Tihs is bcuseae we do
not raed ervey lteter by ilstef, but the wrod as a wlohe.
