Information Retrieval Systems Chap 2
Information Retrieval and Data Mining (AT71.07) Comp. Sc. and Inf. Mgmt. Asian Institute of Technology
Instructor: Dr. Sumanta Guha. Slide sources: Introduction to Information Retrieval book slides from Stanford University, adapted and supplemented. Chapter 2: The term vocabulary and postings lists
Introduction to Information Retrieval
CS276: Information Retrieval and Web Search
Christopher Manning and Prabhakar Raghavan
Lecture 2: The term vocabulary and postings lists
Ch. 1
Postings
Faster merges: skip lists
Positional postings and phrase queries
[Figure: the indexing pipeline: token stream (Friends, Romans, Countrymen) → linguistic modules → modified tokens (friend, roman, countryman) → indexer → postings lists: friend → 2 → 4; roman → 1 → 2; countryman → 13 → 16]
Sec. 2.1
Parsing a document
What format is it in?
pdf/word/excel/html?
Sec. 2.1
Complications: Format/language
Documents being indexed can include docs from many different languages
A single index may have to contain terms of several languages.
Definitions
Word: a delimited string of characters as it appears in the text.
Term: a "normalized" word (case, morphology, spelling, etc.); an equivalence class of words.
Token: an instance of a word or term occurring in a document.
Type: the same as a term in most cases: an equivalence class of tokens.
Sec. 2.2.1
Tokenization
Input: "Friends, Romans and Countrymen"
Output: tokens: Friends | Romans | and | Countrymen
A token is an instance of a sequence of characters. Each such token is now a candidate for an index entry, after further processing, described below.
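A minimal tokenizer sketch in Python, for illustration only: it splits on non-alphanumeric characters and so dodges all the hard cases discussed next.

import re

def tokenize(text):
    # Crude candidate tokenization: split on runs of non-alphanumeric
    # characters. Real tokenizers must handle hyphens, apostrophes,
    # numbers, etc. (see the issues below).
    return [t for t in re.split(r"[^A-Za-z0-9]+", text) if t]

print(tokenize("Friends, Romans and Countrymen"))
# ['Friends', 'Romans', 'and', 'Countrymen']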
Sec. 2.2.1
Tokenization
Issues in tokenization:
Finland's capital → Finland? Finlands? Finland's?
Hewlett-Packard → Hewlett and Packard as two tokens?
state-of-the-art: break up the hyphenated sequence?
co-education?
lowercase, lower-case, lower case?
It can be effective to get the user to put in possible hyphens
Sec. 2.2.1
Numbers
3/20/91    Mar. 12, 1991    20/3/91
55 B.C.
B-52
My PGP key is 324a3df234cb23e
(800) 234-2333
Often have embedded spaces
Older IR systems may not index numbers
But often very useful: think about things like looking up error codes/stacktraces on the web (One answer is using n-grams: Lecture 3)
Sec. 2.2.1
Japanese further complicates tokenization by intermingling multiple writing systems: Katakana, Hiragana, Kanji, Romaji
Sec. 2.2.1
Sec. 2.2.2
Stop words
With a stop list, you exclude from the dictionary entirely the commonest words. Intuition:
They have little semantic content: the, a, and, to, be
There are a lot of them: ~30% of postings for top 30 words
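A toy illustration in Python, assuming a hand-picked five-word stop list (real systems use longer lists, or none at all):

STOP_WORDS = {"the", "a", "and", "to", "be"}

def remove_stop_words(tokens):
    # Drop any token whose lowercased form is on the stop list.
    return [t for t in tokens if t.lower() not in STOP_WORDS]

print(remove_stop_words(["to", "be", "or", "not", "to", "be"]))
# ['or', 'not']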
Sec. 2.2.3
Normalization to terms
We need to normalize words in indexed text as well as query words into the same form
We want to match U.S.A. and USA
Result is terms: a term is a (normalized) word type, which is an entry in our IR system dictionary. We most commonly implicitly define equivalence classes of terms by, e.g.,
deleting periods to form a term: U.S.A., USA → USA
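One possible equivalence-classing rule, sketched in Python; deleting periods plus case-folding is just one choice of normalization:

def normalize(token):
    # Map a token to its term: delete periods, then case-fold.
    return token.replace(".", "").lower()

for t in ["U.S.A.", "USA", "usa"]:
    print(t, "->", normalize(t))
# all three map to the same term: 'usa'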
Sec. 2.2.3
Even in languages that standardly have accents, users often may not type them
Often best to normalize to a de-accented term
Tuebingen, Tübingen, Tubingen → Tubingen
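A common de-accenting trick in Python, using the standard-library unicodedata module: decompose to NFKD, then drop the combining marks. Note it does not fold the transliteration Tuebingen; that needs language-specific rules.

import unicodedata

def deaccent(token):
    # Decompose accented characters, then strip the combining marks.
    decomposed = unicodedata.normalize("NFKD", token)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(deaccent("Tübingen"))  # 'Tubingen'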
Sec. 2.2.3
Tokenization and normalization may depend on the language and so are intertwined with language detection
Morgen will ich in MIT … (German for "Tomorrow I want to … in MIT"; should MIT be tokenized as the English acronym or the German word mit?)
Crucial: need to normalize indexed text as well as query terms into the same form
Sec. 2.2.3
Case folding
Reduce all letters to lower case
exception: upper case in mid-sentence?
e.g., General Motors
Fed vs. fed
SAIL vs. sail
Often best to lower case everything, since users will use lowercase regardless of correct capitalization
Google example:
Query C.A.T. → #1 result is for "cat" (well, Lolcats), not Caterpillar Inc.
Sec. 2.2.3
Normalization to terms
An alternative to equivalence classing is to do asymmetric query expansion. An example of where this may be useful:
Enter: window → Search: window, windows
Enter: windows → Search: Windows, windows, window
Enter: Windows → Search: Windows
Sec. 2.2.4
Lemmatization
Reduce inflectional/variant forms to base form. E.g.:
am, are, is → be
car, cars, car's, cars' → car
the boy's cars are different colors → the boy car be different color
Lemmatization implies doing proper reduction to dictionary headword form
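For instance, with NLTK's WordNet lemmatizer, one off-the-shelf option (assumes the wordnet data has been downloaded):

# pip install nltk; then nltk.download('wordnet') once.
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("cars"))          # 'car'  (default POS is noun)
print(lemmatizer.lemmatize("are", pos="v"))  # 'be'   (needs the verb POS tag)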
Sec. 2.2.4
Stemming
Reduce terms to their "roots" before indexing. "Stemming" suggests crude affix chopping
language dependent
e.g., automate(s), automatic, automation all reduced to automat
for example, compressed and compression are both accepted as equivalent to compress
Sec. 2.2.4
Porter's algorithm
Commonest algorithm for stemming English
Results suggest it's at least as good as other stemming options
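For example, via NLTK's implementation, one of several available ports of Porter's algorithm:

from nltk.stem.porter import PorterStemmer  # pip install nltk

stemmer = PorterStemmer()
for w in ["compressed", "compression", "compress"]:
    print(w, "->", stemmer.stem(w))
# All three reduce to the stem 'compress'.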
Sec. 2.2.4
Other stemmers
Other stemmers exist, e.g., Lovins stemmer
http://www.comp.lancs.ac.uk/computing/research/stemming/general/lovins.htm
Do stemming and other normalizations help? Full morphological analysis yields at most modest benefits for retrieval.
English: very mixed results. Helps recall for some queries but harms precision on others
E.g., operative (dentistry) → oper
Sec. 2.2.4
Language-specificity
Many of the above features embody transformations that are language-specific and, often, application-specific.
These are "plug-in" addenda to the indexing process. Both open-source and commercial plug-ins are available for handling these.
Sec. 2.2
Example dictionary entries, each tagged by language: MIT.english, mit.german, guaranteed.english, entries.english, sometimes.english, tokenization.english
These may be grouped by language (or not). More on this in ranking/query processing.
Sec. 2.3
[Figure: merging the postings lists for Brutus (2 → 4 → 8 → 41 → 48 → 64 → 128) and Caesar (1 → 2 → 3 → 8 → 11 → 17 → 21 → 31)]
If the list lengths are m and n, the merge takes O(m+n) operations.
Can we do better? Yes (if the index isn't changing too fast).
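For reference, the baseline merge sketched in Python over array-based postings lists (illustrative; real postings are linked lists or compressed blocks):

def intersect(p1, p2):
    # Walk two sorted docID lists in lockstep: O(m+n) comparisons.
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i, j = i + 1, j + 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

print(intersect([2, 4, 8, 41, 48, 64, 128], [1, 2, 3, 8, 11, 17, 21, 31]))
# [2, 8]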
Sec. 2.3
[Figure: the same two postings lists augmented with skip pointers at indexing time: skips to 41 and 128 over Brutus's list, and skips to 11 and 31 over Caesar's]
Why? To skip postings that will not figure in the search results. How? Where do we place skip pointers?
Sec. 2.3
Suppose we've stepped through the lists until we process 8 on each list. We match it and advance. We then have 41 on the upper list and 11 on the lower. 11 is smaller. But the skip successor of 11 on the lower list is 31, so we can skip ahead past the intervening postings.
IntersectWithSkips(p1, p2) (IIR Figure 2.10):
  answer ← ⟨⟩
  while p1 ≠ nil and p2 ≠ nil
  do if docID(p1) = docID(p2)
       then ADD(answer, docID(p1))
            p1 ← next(p1)
            p2 ← next(p2)
       else if docID(p1) < docID(p2)
              then if hasSkip(p1) and (docID(skip(p1)) ≤ docID(p2))
                     then while hasSkip(p1) and (docID(skip(p1)) ≤ docID(p2))
                          do p1 ← skip(p1)
                     else p1 ← next(p1)
              else if hasSkip(p2) and (docID(skip(p2)) ≤ docID(p1))
                     then while hasSkip(p2) and (docID(skip(p2)) ≤ docID(p1))
                          do p2 ← skip(p2)
                     else p2 ← next(p2)
  return answer
Sec. 2.3
Placing skips
Simple heuristic: for postings of length L, use √L evenly-spaced skip pointers. This ignores the distribution of query terms. Easy if the index is relatively static; harder if L keeps changing because of updates. This definitely used to help; with modern hardware it may not (Bahle et al. 2002), unless you're memory-based.
The I/O cost of loading a bigger postings list can outweigh the gains from quicker in-memory merging!
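A runnable Python sketch of the skip-pointer merge over array-based lists, under the assumption that every index which is a multiple of ~√L carries a skip pointer √L entries ahead:

import math

def intersect_with_skips(p1, p2):
    skip1 = max(1, int(math.sqrt(len(p1))))
    skip2 = max(1, int(math.sqrt(len(p2))))
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i, j = i + 1, j + 1
        elif p1[i] < p2[j]:
            # Follow skip pointers on p1 while the target doesn't overshoot p2[j].
            start = i
            while i % skip1 == 0 and i + skip1 < len(p1) and p1[i + skip1] <= p2[j]:
                i += skip1
            if i == start:   # no skip taken: plain advance
                i += 1
        else:
            start = j
            while j % skip2 == 0 and j + skip2 < len(p2) and p2[j + skip2] <= p1[i]:
                j += skip2
            if j == start:
                j += 1
    return answer

print(intersect_with_skips([2, 4, 8, 41, 48, 64, 128], [1, 2, 3, 8, 11, 17, 21, 31]))
# [2, 8]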
Skips
Exercise 2.5: Why are skip pointers not useful for queries of the form x OR y?
Exercise 2.6: We have a two-word query. For one term the postings list consists of the 16 entries [4, 6, 10, 12, 14, 16, 18, 20, 22, 32, 47, 81, 120, 122, 157, 180] and for the other it consists of only 1 entry [47]. Work out how many comparisons would be done to intersect the two postings lists with the following strategies.
a. Using standard postings lists.
b. Using postings lists with skip pointers, with the suggested skip length of √P.
Skips
Exercise 2.7: Consider a postings intersection between this postings list with skip pointers:
3, 5, 9, 15, 24, 39, 60, 68, 75, 81, 84, 89, 92, 96, 97, 100, 115
and the following intermediate result postings list (which hence has no skip pointers):
3, 5, 89, 95, 97, 99, 100, 101
Trace through the postings intersection algorithm.
a. How often is a skip pointer followed (i.e., p1 advanced to skip(p1))?
b. How many postings comparisons will be made by this algorithm while intersecting the two lists?
c. How many comparisons would be made if the postings lists are intersected without the use of skip pointers?
Sec. 2.4
Phrase queries
Want to be able to answer queries such as "stanford university" as a phrase. Thus the sentence "I went to university at Stanford" is not a match, and neither is "The inventor Stanford Ovshinsky never went to university".
The concept of phrase queries has proven easily understood by users; it is one of the few "advanced search" ideas that works. Many more queries are implicit phrase queries
Sec. 2.4.1
Biword indexes
Index every consecutive pair of words in the text as a phrase. For example, the text "Friends, Romans, Countrymen" would generate the biwords "friends romans" and "romans countrymen".
Each of these biwords is now a dictionary term. Two-word phrase query processing is now immediate.
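A toy biword indexer in Python (a dict-of-sets sketch that skips the tokenization and normalization steps discussed above):

def biword_index(docs):
    # docs: {doc_id: text}. Every consecutive token pair becomes a term.
    index = {}
    for doc_id, text in docs.items():
        tokens = text.lower().split()
        for w1, w2 in zip(tokens, tokens[1:]):
            index.setdefault(w1 + " " + w2, set()).add(doc_id)
    return index

print(biword_index({1: "friends romans countrymen"}))
# {'friends romans': {1}, 'romans countrymen': {1}}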
Sec. 2.4.1
Extended biwords
Parse the indexed text and perform part-of-speech tagging (POST). Bucket the terms into (say) Nouns (N) and articles/prepositions (X). Call any string of terms of the form NX*N an extended biword. Each such extended biword is now made a term in the dictionary.
Example: catcher in the rye
         N       X  X   N
Query processing: parse the query into Ns and Xs, segment it into enhanced biwords, and look up in the index: catcher rye
Sec. 2.4.1
Biword indexes are not the standard solution (for all biwords) but can be part of a compound strategy
Sec. 2.4.2
Positional indexes
In the postings, store for each term the position(s) in which tokens of it appear.
For phrase queries, we use a merge algorithm recursively at the document level. But we now need to deal with more than just equality.
Sec. 2.4.2
Proximity queries
LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
Again, here, /k means within k words of.
Clearly, positional indexes can be used for such queries; biword indexes cannot. Exercise: Adapt the linear merge of postings to handle proximity queries. Can you make it work for any value of k?
This is a little tricky to do correctly and efficiently; see Figure 2.12 of IIR. There's likely to be a problem on it!
PositionalIntersect(p1, p2, k) (IIR Figure 2.12):
  answer ← ⟨⟩
  while p1 ≠ nil and p2 ≠ nil
  do if docID(p1) = docID(p2)
       then l ← ⟨⟩
            pp1 ← positions(p1)
            pp2 ← positions(p2)
            while pp1 ≠ nil
            do while pp2 ≠ nil
               do if |pos(pp1) − pos(pp2)| ≤ k
                    then ADD(l, pos(pp2))
                    else if pos(pp2) > pos(pp1)
                           then break
                  pp2 ← next(pp2)
               while l ≠ ⟨⟩ and |l[0] − pos(pp1)| > k
               do DELETE(l[0])
               for each ps ∈ l
               do ADD(answer, ⟨docID(p1), pos(pp1), ps⟩)
               pp1 ← next(pp1)
            p1 ← next(p1)
            p2 ← next(p2)
       else if docID(p1) < docID(p2)
              then p1 ← next(p1)
              else p2 ← next(p2)
  return answer
Note: l is a moving window of positions of the second word in the current document which are within k of the current position of the first word. For each successive position of the first word, the algorithm extends the window up to at most k beyond the new position, and deletes the front of the window until it is within k of that position.
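A runnable Python rendering of the same idea (a sketch: postings are dicts of docID → sorted position lists, with a set intersection standing in for the document-level merge):

def positional_intersect(p1, p2, k):
    # p1, p2: {doc_id: sorted list of token positions}.
    # Returns (doc_id, pos1, pos2) triples with |pos1 - pos2| <= k.
    answer = []
    for doc_id in sorted(set(p1) & set(p2)):   # docs containing both terms
        positions2 = p2[doc_id]
        window, idx2 = [], 0   # moving window over term-2 positions
        for pos1 in p1[doc_id]:
            # Grow the window up to pos1 + k ...
            while idx2 < len(positions2) and positions2[idx2] <= pos1 + k:
                window.append(positions2[idx2])
                idx2 += 1
            # ... and drop its head until it is within k of pos1.
            while window and window[0] < pos1 - k:
                window.pop(0)
            for pos2 in window:
                answer.append((doc_id, pos1, pos2))
    return answer

# 'to' at positions 1, 5 and 'be' at positions 2, 6 in doc 7, with k = 1:
print(positional_intersect({7: [1, 5]}, {7: [2, 6]}, 1))
# [(7, 1, 2), (7, 5, 6)]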
Sec. 2.4.2
Rules of thumb
A positional index is 2 to 4 times as large as a non-positional index
Positional index size is 35 to 50% of the volume of the original text
Caveat: all of this holds for English-like languages
Sec. 2.4.3
Combination schemes
These two approaches can be profitably combined
For particular phrases ("Michael Jackson", "Britney Spears") it is inefficient to keep on merging positional postings lists
Even more so for phrases like "The Who"
D. Bahle, H. Williams, and J. Zobel. Efficient phrase querying with an auxiliary index. SIGIR 2002, pp. 215-221.