Lecture 3: Tolerant Retrieval (handout, 6 slides per page)

This lecture discusses dictionary data structures for inverted indexes in information retrieval systems. There are two main choices for storing dictionaries: hashtables and trees. Hashtables allow for fast lookup time of O(1) but do not support tolerant retrieval techniques like finding minor variants of terms. Trees support techniques like wild-card queries by allowing retrieval of terms within a certain range, but have slower lookup times of O(log M). B-trees specifically are discussed as they allow for efficient wild-card queries by term range lookups while avoiding the rebalancing problems of binary trees. Transforming wild-card queries so that asterisks only occur at the end of terms is suggested to enable their efficient handling by B-trees.


Introduction to Information Retrieval
CS276: Information Retrieval and Web Search
Pandu Nayak and Prabhakar Raghavan
Lecture 3: Dictionaries and tolerant retrieval

Recap of the previous lecture
 The type/token distinction
 Terms are normalized types put in the dictionary
 Tokenization problems:
 Hyphens, apostrophes, compounds, CJK
 Term equivalence classing:
 Numbers, case folding, stemming, lemmatization
 Skip pointers
 Encoding a tree-like structure in a postings list
 Biword indexes for phrases
 Positional indexes for phrases/proximity queries

This lecture (Ch. 3)
 Dictionary data structures
 “Tolerant” retrieval
 Wild-card queries
 Spelling correction
 Soundex

Dictionary data structures for inverted indexes (Sec. 3.1)
 The dictionary data structure stores the term vocabulary, document frequency, pointers to each postings list … in what data structure?

A naïve dictionary (Sec. 3.1)
 An array of struct:
 char[20] (20 bytes) | int (4/8 bytes) | Postings * (4/8 bytes)
 How do we store a dictionary in memory efficiently?
 How do we quickly look up elements at query time?

Dictionary data structures (Sec. 3.1)
 Two main choices:
 Hashtables
 Trees
 Some IR systems use hashtables, some trees
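The array-of-struct layout can be sketched in Python (a rough illustration only; the DictEntry name and the toy postings below are invented for this sketch, not part of the lecture):

```python
from dataclasses import dataclass
from typing import List

# One record per term, mirroring the slide's fixed-width layout:
# char[20] term | int document frequency | Postings* pointer.
@dataclass
class DictEntry:
    term: str            # char[20]: fixed 20 bytes in the slide's layout
    doc_freq: int        # int: 4/8 bytes
    postings: List[int]  # Postings*: a 4/8-byte pointer to the postings list

# The "array of struct", kept sorted by term so it can be binary-searched.
dictionary = sorted(
    [DictEntry("dog", 1, [3]), DictEntry("cat", 2, [1, 5])],
    key=lambda e: e.term,
)
print([e.term for e in dictionary])  # ['cat', 'dog']
```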

Hashtables (Sec. 3.1)
 Each vocabulary term is hashed to an integer
 (We assume you’ve seen hashtables before)
 Pros:
 Lookup is faster than for a tree: O(1)
 Cons:
 No easy way to find minor variants: judgment/judgement
 No prefix search [tolerant retrieval]
 If vocabulary keeps growing, need to occasionally do the expensive operation of rehashing everything

Tree: binary tree (Sec. 3.1)

                 Root
           a-m         n-z
        a-hu  hy-m  n-sh  si-z

Tree: B-tree (Sec. 3.1)

        a-hu  |  hy-m  |  n-z

 Definition: Every internal node has a number of children in the interval [a,b] where a, b are appropriate natural numbers, e.g., [2,4].

Trees (Sec. 3.1)
 Simplest: binary tree
 More usual: B-trees
 Trees require a standard ordering of characters and hence strings … but we typically have one
 Pros:
 Solves the prefix problem (terms starting with hyp)
 Cons:
 Slower: O(log M) [and this requires a balanced tree]
 Rebalancing binary trees is expensive
 But B-trees mitigate the rebalancing problem

WILD-CARD QUERIES

Wild-card queries: * (Sec. 3.2)
 mon*: find all docs containing any word beginning with “mon”.
 Easy with binary tree (or B-tree) lexicon: retrieve all words in range: mon ≤ w < moo
 *mon: find words ending in “mon”: harder
 Maintain an additional B-tree for terms written backwards. Can retrieve all words in range: nom ≤ w < non.
 Exercise: from this, how can we enumerate all terms meeting the wild-card query pro*cent ?
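The range lookup mon ≤ w < moo can be sketched with binary search over a sorted term list, which is effectively what the tree-structured lexicon provides (a minimal sketch; the toy lexicon and the prefix_range name are assumptions of this example):

```python
import bisect

# Toy sorted lexicon, standing in for the B-tree's ordered terms.
lexicon = ["moat", "mob", "mon", "monday", "money", "month", "moo", "moon"]

def prefix_range(lexicon, prefix):
    """All terms w with prefix <= w < successor(prefix), e.g. mon <= w < moo."""
    lo = bisect.bisect_left(lexicon, prefix)
    # Successor of the prefix: bump its last character ("mon" -> "moo").
    hi = bisect.bisect_left(lexicon, prefix[:-1] + chr(ord(prefix[-1]) + 1))
    return lexicon[lo:hi]

print(prefix_range(lexicon, "mon"))  # ['mon', 'monday', 'money', 'month']
```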

Query processing (Sec. 3.2)
 At this point, we have an enumeration of all terms in the dictionary that match the wild-card query.
 We still have to look up the postings for each enumerated term.
 E.g., consider the query: se*ate AND fil*er
 This may result in the execution of many Boolean AND queries.

B-trees handle *’s at the end of a query term (Sec. 3.2)
 How can we handle *’s in the middle of a query term?
 co*tion
 We could look up co* AND *tion in a B-tree and intersect the two term sets
 Expensive
 The solution: transform wild-card queries so that the *’s occur at the end
 This gives rise to the Permuterm Index.

Permuterm index (Sec. 3.2.1)
 For term hello, index under:
 hello$, ello$h, llo$he, lo$hel, o$hell, $hello
 where $ is a special symbol.
 Queries:
 X lookup on X$   X* lookup on $X*
 *X lookup on X$*   *X* lookup on X*
 X*Y lookup on Y$X*   X*Y*Z ??? Exercise!
 Query = hel*o → X=hel, Y=o → lookup o$hel*

Permuterm query processing (Sec. 3.2.1)
 Rotate query wild-card to the right
 Now use B-tree lookup as before.
 Permuterm problem: ≈ quadruples lexicon size
 (Empirical observation for English.)
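The rotations and the query rotation can be sketched directly (a minimal sketch; permuterm_keys and rotate_query are names invented for this example):

```python
def permuterm_keys(term):
    """All rotations of term + '$'; each rotation is indexed, pointing back to term."""
    t = term + "$"
    return [t[i:] + t[:i] for i in range(len(t))]

def rotate_query(query):
    """Rotate a wild-card query with a single '*' so the '*' lands at the end."""
    q = query + "$"
    star = q.index("*")
    return q[star + 1:] + q[:star] + "*"

print(permuterm_keys("hello"))
# ['hello$', 'ello$h', 'llo$he', 'lo$hel', 'o$hell', '$hello']
print(rotate_query("hel*o"))  # 'o$hel*' -- the X*Y -> Y$X* rule
print(rotate_query("mon*"))   # '$mon*' -- the X* -> $X* rule
```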

Bigram (k-gram) indexes (Sec. 3.2.2)
 Enumerate all k-grams (sequence of k chars) occurring in any term
 e.g., from text “April is the cruelest month” we get the 2-grams (bigrams)
 $a,ap,pr,ri,il,l$,$i,is,s$,$t,th,he,e$,$c,cr,ru,ue,el,le,es,st,t$,$m,mo,on,nt,h$
 $ is a special word boundary symbol
 Maintain a second inverted index from bigrams to dictionary terms that match each bigram.

Bigram index example (Sec. 3.2.2)
 The k-gram index finds terms based on a query consisting of k-grams (here k=2).
 $m → mace, madden
 mo → among, amortize
 on → along, among
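Building such a bigram index takes only a few lines (a minimal sketch over a toy vocabulary; it reproduces the postings shown in the example above):

```python
from collections import defaultdict

def bigrams(term):
    t = "$" + term + "$"  # $ marks the word boundary
    return {t[i:i + 2] for i in range(len(t) - 1)}

def build_bigram_index(terms):
    index = defaultdict(set)  # bigram -> set of dictionary terms containing it
    for term in terms:
        for g in bigrams(term):
            index[g].add(term)
    return index

idx = build_bigram_index(["along", "among", "amortize", "mace", "madden"])
print(sorted(idx["$m"]))  # ['mace', 'madden']
print(sorted(idx["mo"]))  # ['among', 'amortize']
print(sorted(idx["on"]))  # ['along', 'among']
```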

Processing wild-cards (Sec. 3.2.2)
 Query mon* can now be run as
 $m AND mo AND on
 Gets terms that match the AND version of our wildcard query.
 But we’d enumerate moon.
 Must post-filter these terms against the query.
 Surviving enumerated terms are then looked up in the term-document inverted index.
 Fast, space efficient (compared to permuterm).

Processing wild-card queries (Sec. 3.2.2)
 As before, we must execute a Boolean query for each enumerated, filtered term.
 Wild-cards can result in expensive query execution (very large disjunctions…)
 pyth* AND prog*
 If you encourage “laziness” people will respond!
 [Search box: “Type your search terms, use ‘*’ if you need to. E.g., Alex* will match Alexander.”]
 Which web search engines allow wildcard queries?
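The $m AND mo AND on intersection, followed by the post-filter that removes false positives like moon, can be sketched as follows (toy lexicon; the function names are invented for this example):

```python
import re
from collections import defaultdict

terms = ["melon", "monday", "month", "moon", "moron"]

def bigrams(s):
    s = "$" + s + "$"  # $ is the word-boundary symbol
    return {s[i:i + 2] for i in range(len(s) - 1)}

# Bigram index over the toy lexicon.
index = defaultdict(set)
for t in terms:
    for g in bigrams(t):
        index[g].add(t)

def wildcard(query):
    # Bigrams of the query, treating * as a gap: mon* -> {$m, mo, on}.
    grams = set()
    for piece in ("$" + query + "$").split("*"):
        grams |= {piece[i:i + 2] for i in range(len(piece) - 1)}
    # AND together the bigram postings ...
    candidates = set(terms)
    for g in grams:
        candidates &= index[g]
    # ... then post-filter against the actual wild-card pattern.
    pattern = re.compile(query.replace("*", ".*"))
    return sorted(candidates), sorted(t for t in candidates if pattern.fullmatch(t))

cands, result = wildcard("mon*")
print(cands)   # ['monday', 'month', 'moon', 'moron'] -- moon, moron survive the AND
print(result)  # ['monday', 'month'] -- post-filtering removes them
```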

SPELLING CORRECTION

Spell correction (Sec. 3.3)
 Two principal uses
 Correcting document(s) being indexed
 Correcting user queries to retrieve “right” answers
 Two main flavors:
 Isolated word
 Check each word on its own for misspelling
 Will not catch typos resulting in correctly spelled words, e.g., from → form
 Context-sensitive
 Look at surrounding words, e.g., I flew form Heathrow to Narita.

Document correction (Sec. 3.3)
 Especially needed for OCR’ed documents
 Correction algorithms are tuned for this: rn/m
 Can use domain-specific knowledge
 E.g., OCR can confuse O and D more often than it would confuse O and I (adjacent on the QWERTY keyboard, so more likely interchanged in typing).
 But also: web pages and even printed material have typos
 Goal: the dictionary contains fewer misspellings
 But often we don’t change the documents and instead fix the query-document mapping

Query mis-spellings (Sec. 3.3)
 Our principal focus here
 E.g., the query Alanis Morisett
 We can either
 Retrieve documents indexed by the correct spelling, OR
 Return several suggested alternative queries with the correct spelling
 Did you mean … ?

Isolated word correction (Sec. 3.3.2)
 Fundamental premise – there is a lexicon from which the correct spellings come
 Two basic choices for this
 A standard lexicon such as
 Webster’s English Dictionary
 An “industry-specific” lexicon – hand-maintained
 The lexicon of the indexed corpus
 E.g., all words on the web
 All names, acronyms etc.
 (Including the mis-spellings)

Isolated word correction (Sec. 3.3.2)
 Given a lexicon and a character sequence Q, return the words in the lexicon closest to Q
 What’s “closest”?
 We’ll study several alternatives
 Edit distance (Levenshtein distance)
 Weighted edit distance
 n-gram overlap

Edit distance (Sec. 3.3.3)
 Given two strings S1 and S2, the minimum number of operations to convert one to the other
 Operations are typically character-level
 Insert, Delete, Replace, (Transposition)
 E.g., the edit distance from dof to dog is 1
 From cat to act is 2 (just 1 with transpose)
 From cat to dog is 3
 Generally found by dynamic programming.
 See http://www.merriampark.com/ld.htm for a nice example plus an applet.

Weighted edit distance (Sec. 3.3.3)
 As above, but the weight of an operation depends on the character(s) involved
 Meant to capture OCR or keyboard errors
 Example: m more likely to be mis-typed as n than as q
 Therefore, replacing m by n is a smaller edit distance than by q
 This may be formulated as a probability model
 Requires a weight matrix as input
 Modify dynamic programming to handle weights
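The dynamic program can be sketched as follows (unweighted Levenshtein; for weighted edit distance the unit costs below would instead come from a weight matrix):

```python
def edit_distance(s1, s2):
    """Minimum number of insert/delete/replace operations turning s1 into s2."""
    m, n = len(s1), len(s2)
    # dp[i][j] = edit distance between s1[:i] and s2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of s1[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            same = s1[i - 1] == s2[j - 1]
            dp[i][j] = min(dp[i - 1][j] + 1,                      # delete s1[i-1]
                           dp[i][j - 1] + 1,                      # insert s2[j-1]
                           dp[i - 1][j - 1] + (0 if same else 1)) # replace / match
    return dp[m][n]

print(edit_distance("dof", "dog"))  # 1
print(edit_distance("cat", "act"))  # 2 (would be 1 if transposition were allowed)
print(edit_distance("cat", "dog"))  # 3
```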

Using edit distances (Sec. 3.3.4)
 Given a query, first enumerate all character sequences within a preset (weighted) edit distance (e.g., 2)
 Intersect this set with the list of “correct” words
 Show terms you found to the user as suggestions
 Alternatively,
 We can look up all possible corrections in our inverted index and return all docs … slow
 We can run with a single most likely correction
 The alternatives disempower the user, but save a round of interaction with the user

Edit distance to all dictionary terms? (Sec. 3.3.4)
 Given a (mis-spelled) query – do we compute its edit distance to every dictionary term?
 Expensive and slow
 Alternative?
 How do we cut the set of candidate dictionary terms?
 One possibility is to use n-gram overlap for this
 This can also be used by itself for spelling correction.

n-gram overlap (Sec. 3.3.4)
 Enumerate all the n-grams in the query string as well as in the lexicon
 Use the n-gram index (recall wild-card search) to retrieve all lexicon terms matching any of the query n-grams
 Threshold by number of matching n-grams
 Variants – weight by keyboard layout, etc.

Example with trigrams (Sec. 3.3.4)
 Suppose the text is november
 Trigrams are nov, ove, vem, emb, mbe, ber.
 The query is december
 Trigrams are dec, ece, cem, emb, mbe, ber.
 So 3 trigrams overlap (of 6 in each term)
 How can we turn this into a normalized measure of overlap?

One option – Jaccard coefficient (Sec. 3.3.4)
 A commonly-used measure of overlap
 Let X and Y be two sets; then the J.C. is |X ∩ Y| / |X ∪ Y|
 Equals 1 when X and Y have the same elements and zero when they are disjoint
 X and Y don’t have to be of the same size
 Always assigns a number between 0 and 1
 Now threshold to decide if you have a match
 E.g., if J.C. > 0.8, declare a match

Matching trigrams (Sec. 3.3.4)
 Consider the query lord – we wish to identify words matching 2 of its 3 bigrams (lo, or, rd)
 lo → alone, lore, sloth
 or → border, lore, morbid
 rd → ardent, border, card
 A standard postings “merge” will enumerate … adapt this to use Jaccard (or another) measure.
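Putting the last two slides together: computing the Jaccard coefficient on the november/december trigram example (a minimal sketch; kgrams and jaccard are names invented here, and $ boundaries are omitted as in the example):

```python
def kgrams(term, k=3):
    """The set of k-grams of a term."""
    return {term[i:i + k] for i in range(len(term) - k + 1)}

def jaccard(x, y):
    """|X intersect Y| / |X union Y|: 1 for identical sets, 0 for disjoint ones."""
    return len(x & y) / len(x | y)

q, t = kgrams("december"), kgrams("november")
print(sorted(q & t))            # ['ber', 'emb', 'mbe'] -- the 3 shared trigrams
print(round(jaccard(q, t), 3))  # 0.333: 3 shared of 9 distinct trigrams overall
```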

Context-sensitive spell correction (Sec. 3.3.5)
 Text: I flew from Heathrow to Narita.
 Consider the phrase query “flew form Heathrow”
 We’d like to respond
 Did you mean “flew from Heathrow”?
 because no docs matched the query phrase.

Context-sensitive correction (Sec. 3.3.5)
 Need surrounding context to catch this.
 First idea: retrieve dictionary terms close (in weighted edit distance) to each query term
 Now try all possible resulting phrases with one word “fixed” at a time
 flew from heathrow
 fled form heathrow
 flea form heathrow
 Hit-based spelling correction: suggest the alternative that has lots of hits.
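Enumerating phrases with one word “fixed” at a time can be sketched as follows (the toy alternatives dictionary is invented for this example):

```python
def one_word_variants(phrase, alternatives):
    """All phrases obtained by correcting exactly one word of the query."""
    words = phrase.split()
    out = []
    for i, w in enumerate(words):
        for alt in alternatives.get(w, []):
            out.append(" ".join(words[:i] + [alt] + words[i + 1:]))
    return out

# Dictionary terms close in (weighted) edit distance to each query term.
alts = {"flew": ["fled", "flea"], "form": ["from"]}
print(one_word_variants("flew form heathrow", alts))
# ['fled form heathrow', 'flea form heathrow', 'flew from heathrow']
```

Each variant would then be tried as a phrase query, and the one with lots of hits suggested.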

Exercise (Sec. 3.3.5)
 Suppose that for “flew form Heathrow” we have 7 alternatives for flew, 19 for form and 3 for heathrow. How many “corrected” phrases will we enumerate in this scheme?

Another approach (Sec. 3.3.5)
 Break the phrase query into a conjunction of biwords (Lecture 2).
 Look for biwords that need only one term corrected.
 Enumerate only phrases containing “common” biwords.

General issues in spell correction (Sec. 3.3.5)
 We enumerate multiple alternatives for “Did you mean?”
 Need to figure out which to present to the user
 The alternative hitting most docs
 Query log analysis
 More generally, rank alternatives probabilistically
 argmax_corr P(corr | query)
 From Bayes’ rule, this is equivalent to
 argmax_corr P(query | corr) * P(corr)
 where P(query | corr) is the noisy channel model and P(corr) is the language model.

SOUNDEX
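The noisy-channel ranking can be sketched with made-up numbers (both probability tables below are invented purely for illustration; real systems estimate P(query | corr) from an error model and P(corr) from query logs or a corpus):

```python
# Candidate corrections for the query "form", with invented probabilities:
# P(query | corr) from the noisy channel, P(corr) from the language model.
candidates = {
    "form":  (0.900, 0.0004),  # perhaps no error at all, but a rare word
    "from":  (0.010, 0.0500),  # plausible typo of a very common word
    "forum": (0.002, 0.0010),
}

def best_correction(cands):
    """argmax over corrections of P(query | corr) * P(corr)."""
    return max(cands, key=lambda c: cands[c][0] * cands[c][1])

print(best_correction(candidates))  # 'from'
```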

Soundex (Sec. 3.4)
 Class of heuristics to expand a query into phonetic equivalents
 Language specific – mainly for names
 E.g., chebyshev → tchebycheff
 Invented for the U.S. census … in 1918
 http://www.creativyst.com/Doc/Articles/SoundEx1/SoundEx1.htm#Top

Soundex – typical algorithm (Sec. 3.4)
 Turn every token to be indexed into a 4-character reduced form
 Do the same with query terms
 Build and search an index on the reduced forms
 (when the query calls for a soundex match)

Soundex – typical algorithm (Sec. 3.4)
1. Retain the first letter of the word.
2. Change all occurrences of the following letters to '0' (zero): 'A', 'E', 'I', 'O', 'U', 'H', 'W', 'Y'.
3. Change letters to digits as follows:
 B, F, P, V → 1
 C, G, J, K, Q, S, X, Z → 2
 D, T → 3
 L → 4
 M, N → 5
 R → 6

Soundex continued (Sec. 3.4)
4. Remove all pairs of consecutive identical digits (i.e., keep one digit of each run).
5. Remove all zeros from the resulting string.
6. Pad the resulting string with trailing zeros and return the first four positions, which will be of the form <uppercase letter> <digit> <digit> <digit>.

E.g., Herman becomes H655.
Will hermann generate the same code?
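The six steps can be sketched directly (a straightforward transcription of the algorithm above; alphabetic input only):

```python
def soundex(word):
    """4-character Soundex code: steps 1-6 from the slides."""
    word = word.upper()
    # Steps 2-3: vowels plus H, W, Y -> '0'; consonants -> digit classes.
    table = str.maketrans("AEIOUHWYBFPVCGJKQSXZDTLMNR",
                          "00000000111122222222334556")
    digits = word.translate(table)
    # Step 4: collapse runs of consecutive identical digits.
    dedup = digits[0]
    for d in digits[1:]:
        if d != dedup[-1]:
            dedup += d
    # Steps 1 and 5: keep the original first letter, drop the zeros after it.
    code = word[0] + dedup[1:].replace("0", "")
    # Step 6: pad with trailing zeros and return the first four positions.
    return (code + "000")[:4]

print(soundex("Herman"))   # H655
print(soundex("hermann"))  # H655 -- so yes, hermann gets the same code
```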

Soundex (Sec. 3.4)
 Soundex is the classic algorithm, provided by most databases (Oracle, Microsoft, …)
 How useful is soundex?
 Not very – for information retrieval
 Okay for “high recall” tasks (e.g., Interpol), though biased to names of certain nationalities
 Zobel and Dart (1996) show that other algorithms for phonetic matching perform much better in the context of IR

What queries can we process?
 We have
 Positional inverted index with skip pointers
 Wild-card index
 Spell-correction
 Soundex
 Queries such as
 (SPELL(moriset) /3 toron*to) OR SOUNDEX(chaikofski)

Exercise
 Draw yourself a diagram showing the various indexes in a search engine incorporating all the functionality we have talked about
 Identify some of the key design choices in the index pipeline:
 Does stemming happen before the Soundex index?
 What about n-grams?
 Given a query, how would you parse and dispatch sub-queries to the various indexes?

Resources (Sec. 3.5)
 IIR 3, MG 4.2
 Efficient spell retrieval:
 K. Kukich. Techniques for automatically correcting words in text. ACM Computing Surveys 24(4), Dec 1992.
 J. Zobel and P. Dart. Finding approximate matches in large lexicons. Software – Practice and Experience 25(3), March 1995. http://citeseer.ist.psu.edu/zobel95finding.html
 Mikael Tillenius. Efficient Generation and Ranking of Spelling Error Corrections. Master’s thesis, Royal Institute of Technology, Sweden. http://citeseer.ist.psu.edu/179155.html
 Nice, easy reading on spell correction:
 Peter Norvig. How to write a spelling corrector. http://norvig.com/spell-correct.html
