
Review the previous class

The Term Vocabulary & Posting List

Lecture 10
Tokenization: Language Issues

• French
• L'ensemble  one token or two?
• L ? L’ ? Le ?
• Want l’ensemble to match with un ensemble
• Until at least 2003, it didn’t on Google
• Internationalization!

• German noun compounds are not segmented


• Lebensversicherungsgesellschaftsangestellter
• ‘life insurance company employee’
• German retrieval systems benefit greatly from a compound splitter module
• Can give a 15% performance boost for German (a naive splitter is sketched below)
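A minimal sketch of a dictionary-based compound splitter, using greedy longest-prefix matching with an optional linking "s" (Fugen-s). The three-word vocabulary and the splitting strategy are illustrative assumptions; real splitters use full lexicons and corpus statistics:

```python
# Toy vocabulary; a real splitter would use a full German lexicon
# plus corpus frequencies to rank alternative splits.
VOCAB = {"lebensversicherung", "gesellschaft", "angestellter"}

def split_compound(word, vocab=VOCAB):
    """Split a compound into known words, trying longer prefixes first."""
    word = word.lower()
    if word in vocab:
        return [word]
    for i in range(len(word) - 1, 2, -1):
        prefix, rest = word[:i], word[i:]
        if prefix not in vocab:
            continue
        if rest.startswith("s"):              # allow a linking 's' (Fugen-s)
            tail = split_compound(rest[1:], vocab)
            if tail:
                return [prefix] + tail
        tail = split_compound(rest, vocab)
        if tail:
            return [prefix] + tail
    return None                               # no split found

print(split_compound("Lebensversicherungsgesellschaftsangestellter"))
# ['lebensversicherung', 'gesellschaft', 'angestellter']
```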

Tokenization: Language Issues

• Chinese and Japanese have no spaces between words:


• 莎拉波娃现在居住在美国东南部的佛罗里达。
('Sharapova now lives in Florida, in the southeastern United States.')
• Not always guaranteed a unique tokenization
• Further complicated in Japanese, with multiple alphabets intermingled
• Dates/amounts in multiple formats

フォーチュン500社は情報不足のため時間あた$500K(約6,000万円)

A single sentence can mix katakana, hiragana, kanji, and romaji.

End-users can express a query entirely in hiragana!


Tokenization: Language Issues

• Arabic (or Hebrew) is basically written right to left, but with certain items like
numbers written left to right
• Words are separated, but letter forms within a word form complex ligatures

[Arabic example omitted: the sentence runs right to left, while the numbers 132 and 1962 within it run left to right]
• 'Algeria achieved its independence in 1962 after 132 years of French occupation.'
• With Unicode, the surface presentation is complex, but the stored
form is straightforward

Stop words

• With a stop list, you exclude the commonest words from the dictionary entirely.
Intuition:
• They have little semantic content: the, a, and, to, be
• There are a lot of them: ~30% of postings are for the top 30 words
• Dropping common words (a, an, and, are, as, ...) loses little of value in helping select documents
• The general strategy for determining a stop list is to sort the terms by collection frequency
• Collection frequency: the total number of times the term t appears in the collection (a third statistic, alongside term frequency and document frequency)
Stop Words

When you build an index, you can also keep track of collection frequency, along with document frequency and term frequency, as in the sketch below.
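A minimal sketch of gathering all three statistics in one indexing pass (the two toy documents and the whitespace tokenizer are illustrative assumptions):

```python
from collections import Counter, defaultdict

docs = {
    1: "the cat sat on the mat",
    2: "the dog sat",
}

term_freq = defaultdict(Counter)   # term frequency: per (doc, term) counts
collection_freq = Counter()        # collection frequency: totals over all docs
doc_freq = Counter()               # document frequency: no. of docs containing term

for doc_id, text in docs.items():
    tokens = text.split()          # toy tokenizer; real systems do much more
    term_freq[doc_id].update(tokens)
    collection_freq.update(tokens)
    doc_freq.update(set(tokens))   # each document counted at most once per term

print(collection_freq["the"])      # 3: occurrences across the whole collection
print(doc_freq["the"])             # 2: number of documents containing "the"
print(term_freq[1]["the"])         # 2: occurrences within document 1
```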

Stop Words

• But the trend is away from doing this:


• Good compression techniques mean the space for including stop
words in a system is very small
• Good query optimization techniques mean you pay little at query
time for including stop words.
• You need them for:
• Phrase queries: “King of Denmark”
• Various song titles, etc.: “Let it be”, “To be or not to be”
• “Relational” queries: “flights to London”
Normalization to terms

• We need to “normalize” words in indexed text as well as query words into the
same form
• We want to match U.S.A. and USA
• Result is terms: a term is a (normalized) word type, which is an entry in our IR
system dictionary
• We most commonly implicitly define equivalence classes of terms by, e.g.,
• deleting periods to form a term
• U.S.A., USA → USA
• deleting hyphens to form a term
• anti-discriminatory, antidiscriminatory → antidiscriminatory

Normalization is heavily language dependent
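A minimal sketch of the equivalence classing above (just the two rules shown; a real normalizer is far more involved):

```python
def normalize(token: str) -> str:
    """Map a token to a term by deleting periods and hyphens."""
    return token.replace(".", "").replace("-", "")

assert normalize("U.S.A.") == normalize("USA") == "USA"
assert normalize("anti-discriminatory") == normalize("antidiscriminatory")
```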


Normalization: other languages

• Accents: e.g., French résumé vs. resume.


• Umlauts: e.g., German: Tuebingen vs. Tübingen
• Should be equivalent
• Most important criterion:
• How do users like to write their queries for these words?

• Even in languages that standardly have accents, users often may not type them
• Often best to normalize to a de-accented term
• Tuebingen, Tübingen, Tubingen → Tubingen
Normalization: other languages

• Normalization of things like date forms
• 7月30日 vs. 7/30
• Japanese use of kana vs. Chinese characters

• Tokenization and normalization may depend on the language, and so are intertwined with language detection
• e.g., Morgen will ich in MIT … ('Tomorrow I want [to …] at MIT'): is this the German word "mit"?
• Crucial: Need to "normalize" indexed text as well as query terms into the same form

Case folding

• Reduce all letters to lower case


• exception: upper case in mid-sentence?
• e.g., General Motors
• Fed vs. fed (the Federal Reserve System)
• SAIL vs. sail
• Often best to lower case everything, since users will use lowercase regardless of ‘correct’
capitalization…

• Google example:
• Query C.A.T.
• #1 result is for “cat” (well, Lolcats) not Caterpillar Inc.
Normalization to terms

• An alternative to equivalence classing is to do asymmetric expansion (query expansion)
• An example of where this may be useful
• Enter: window Search: window, windows
• Enter: windows Search: Windows, windows, window
• Enter: Windows Search: Windows
• Potentially more powerful, but less efficient (see the sketch below)
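A minimal sketch of asymmetric expansion at query time (the hand-built expansion table is an illustrative assumption):

```python
# What we search for depends on how the user typed the term.
EXPANSIONS = {
    "window":  {"window", "windows"},
    "windows": {"Windows", "windows", "window"},
    "Windows": {"Windows"},
}

def expand(query_term: str) -> set:
    # Fall back to the term itself when no expansion rule exists.
    return EXPANSIONS.get(query_term, {query_term})

print(expand("window"))   # {'window', 'windows'}
print(expand("Windows"))  # {'Windows'}
```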

Thesauri and Soundex

• How do we handle synonyms and homonyms?


• E.g., by hand-constructed equivalence classes
• car = automobile color = colour
• We can rewrite to form equivalence-class terms
• When the document contains automobile, index it under car-automobile (and vice-
versa)
• Or we can expand a query
• When the query contains automobile, look under car as well
• What about spelling mistakes?
• One approach is soundex, which forms equivalence classes of words based on phonetic
heuristics
• Will see in coming lectures
Lemmatization

• Reduce inflectional/variant forms to the base form

• "Lemmatization" derives from lemma, which refers to the dictionary headword (root) form of a word
• This is a sophisticated NLP technique
• E.g.,
• am, are, is → be
• car, cars, car's, cars' → car
• the boy's cars are different colors → the boy car be different color
• Plural forms are converted into singular form
• Lemmatization implies doing "proper" reduction to dictionary headword form (a sketch follows)
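A minimal sketch using NLTK's WordNetLemmatizer (assumes the nltk package and its WordNet data are installed; note the verb example needs a part-of-speech tag):

```python
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)   # one-time fetch of the WordNet data

lemmatizer = WordNetLemmatizer()

print(lemmatizer.lemmatize("cars"))          # car  (default POS is noun)
print(lemmatizer.lemmatize("are", pos="v"))  # be   (verb POS tag required)
print(lemmatizer.lemmatize("colors"))        # color
```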

Stemming: a cruder form of normalization

• Reduce terms to their “roots” before indexing


• “Stemming” suggests crude affix chopping
• language dependent
• e.g., automate(s), automatic, automation all reduced to automat.

• E.g., the sentence

    for example compressed and compression are both accepted as equivalent to compress.

  stems to

    for exampl compress and compress ar both accept as equival to compress
Porter’s algorithm (developed by Martin Porter)

• Commonest algorithm for stemming English


• Results suggest it’s at least as good as other stemming options

• The algorithm has 5 phases of reductions


• phases applied sequentially
• each phase consists of a set of commands
• sample convention: of the rules in a compound command, select the one that applies to the longest suffix
(A sketch of the stemmer in use follows.)
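A minimal sketch using NLTK's PorterStemmer (assumes the nltk package is installed; the outputs are whatever the stemmer produces, which should collapse the slide's variants to a common stem):

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

# Per the slide, these variants should reduce to (roughly) the same stem.
for word in ["automate", "automates", "automatic", "automation"]:
    print(word, "->", stemmer.stem(word))
```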

FASTER POSTINGS MERGES: SKIP POINTERS / SKIP LISTS
Faster postings merges via Skip pointers/Skip lists

• An extension to the postings list data structure

• A way to increase the efficiency of using postings lists.

Recall Basic Merge

• Walk through the two postings lists simultaneously, in time linear in the total number of postings entries (a sketch follows)
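A minimal sketch of the linear-time merge (intersection) of two sorted postings lists:

```python
def intersect(p1, p2):
    """Linear merge of two sorted postings lists of docIDs."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])   # docID present in both lists
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                 # advance the list with the smaller docID
        else:
            j += 1
    return answer

print(intersect([2, 4, 8, 41, 48, 64, 128],
                [1, 2, 3, 8, 11, 17, 21, 31]))   # [2, 8]
```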
Recall Basic Merge

• Can we do better?
• Yes (if the index isn't changing too fast)
• i.e., no new entries are being added to or deleted from the postings list

• Use a skip list: augment postings lists with skip pointers (at indexing time)
Let's look at an example

• A skip pointer is a pointer from a particular node to some other node farther ahead in the same list.
Benefits of adding skip pointers

• Let's see how we can use these skip pointers to speed up the search, and how we add them.

Augment postings with skip pointers (at indexing time):

List 1: 2 → 4 → 8 → 41 → 48 → 64 → 128 (skips: 2 → 41, 41 → 128)
List 2: 1 → 2 → 3 → 8 → 11 → 17 → 21 → 31 (skips: 1 → 11, 11 → 31)

Having matched 2 and 8, List 1 is at 41 and List 2 is at 11. The skip from 11 points to 31, and 31 < 41, so we can jump straight from 11 to 31: the intervening entries (17, 21) are not useful for the answer.
Augment postings with skip pointers
(at indexing time)

• Two questions need to be answered:


• Where do we place skip pointers?
• How to do efficient merging using skip pointers?

Where do we place skips?

How many skip pointers should we add?


• Tradeoff:
• More skips → shorter skip spans → more likely to skip. But lots of comparisons to skip pointers.

• Fewer skips → fewer pointer comparisons, but then long skip spans → few successful skips.
Placing Skips

• Simple heuristic: for a postings list of length L, use √L evenly-spaced skip pointers (each skip spans about √L entries); see the sketch below.

• Easy if the index is relatively static; harder if L keeps changing because of updates (deleting/inserting elements). A static index works best.
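A minimal sketch of placing √L evenly-spaced skip pointers (the {from_index: to_index} representation is an illustrative choice):

```python
import math

def add_skips(postings):
    """Return skip pointers as {from_index: to_index}, about sqrt(L) apart."""
    L = len(postings)
    step = int(math.sqrt(L))          # span of each skip
    skips = {}
    if step == 0:                     # empty list: no skips to add
        return skips
    for i in range(0, L, step):
        j = min(i + step, L - 1)      # the last skip points at the final entry
        if j > i:
            skips[i] = j
    return skips

postings = [4, 6, 10, 12, 14, 16, 18, 20, 22, 32, 47, 81, 120, 122, 157, 180]
skips = add_skips(postings)
print({postings[i]: postings[j] for i, j in skips.items()})
# {4: 14, 14: 22, 22: 120, 120: 180}  (matches Problem 1 below)
```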

Important Points

• If the index is small → it fits entirely into memory (both the dictionary and the postings lists can fit into main memory)

• If the corpus is large → postings may have to be stored on disk, while the dictionary is kept in memory.

• If your index is entirely in memory → using skip pointers will help
• Because you end up doing fewer operations to traverse a particular postings list if you follow the skip pointers.
Skips

• Only AND Queries


• Does not work with OR queries. Why?

Algorithm: Postings lists intersection with skip pointers
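The algorithm figure itself is not reproduced here; below is a minimal sketch of intersection with skip pointers, reusing the {from_index: to_index} skip representation from the add_skips sketch above (the structure follows the standard textbook algorithm, but the details are my own):

```python
def intersect_with_skips(p1, skips1, p2, skips2):
    """Intersect sorted postings lists, following skips that don't overshoot."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            if i in skips1 and p1[skips1[i]] <= p2[j]:
                while i in skips1 and p1[skips1[i]] <= p2[j]:
                    i = skips1[i]          # follow skips on p1
            else:
                i += 1
        else:
            if j in skips2 and p2[skips2[j]] <= p1[i]:
                while j in skips2 and p2[skips2[j]] <= p1[i]:
                    j = skips2[j]          # follow skips on p2
            else:
                j += 1
    return answer

# Problem 1 data from below: one skip-augmented list intersected with [47].
postings = [4, 6, 10, 12, 14, 16, 18, 20, 22, 32, 47, 81, 120, 122, 157, 180]
skips = {0: 4, 4: 8, 8: 12, 12: 15}   # i.e. 4->14, 14->22, 22->120, 120->180
print(intersect_with_skips(postings, skips, [47], {}))   # [47]
```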
Exercise Problems

• Problem 1:
• We have a two-word query. For one term the postings list consists of the following 16 entries:
[4, 6, 10, 12, 14, 16, 18, 20, 22, 32, 47, 81, 120, 122, 157, 180]
and for the other it is the one-entry postings list
[47]
Work out how many comparisons would be done to intersect the two postings lists with the following two strategies. Briefly justify your answer.
(a) Using a standard postings list.
(b) Using postings lists stored with skip pointers, with a skip length of √L.

Problem 1 Solution

• (a) The number of comparisons would be 11, as shown:

• (4,47), (6,47), (10,47), (12,47), (14,47), (16,47), (18,47), (20,47), (22,47), (32,47), (47,47)
• (b) Total length of the postings list: L = 16
• Skip length: √L = √16 = 4
• Skip pointers: 4 → 14, 14 → 22, 22 → 120, 120 → 180

4 6 10 12 14 16 18 20 22 32 47 81 120 122 157 180

Problem 1 Solution

Skip pointers: 4 → 14, 14 → 22, 22 → 120, 120 → 180

4 6 10 12 14 16 18 20 22 32 47 81 120 122 157 180

• 14 < 47 and 22 < 47, so those skips are followed; but 120 > 47, so the skip from 22 overshoots, and we fall back to stepping through the list. No further comparisons are needed after (32,47) and (47,47).

• The number of comparisons would be 6:

• (4,47), (14,47), (22,47), (120,47), (32,47), (47,47)
(A quick verification sketch follows.)
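A quick check of part (a), instrumenting the basic merge to count one comparison per docID pair examined (the skip-pointer count depends on the exact counting convention, so only the standard merge is verified here):

```python
def count_merge_comparisons(p1, p2):
    """Count docID-pair comparisons made by the basic linear merge."""
    comparisons = 0
    i = j = 0
    while i < len(p1) and j < len(p2):
        comparisons += 1           # one comparison of the pair (p1[i], p2[j])
        if p1[i] == p2[j]:
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return comparisons

p = [4, 6, 10, 12, 14, 16, 18, 20, 22, 32, 47, 81, 120, 122, 157, 180]
print(count_merge_comparisons(p, [47]))   # 11, matching part (a)
```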

Problem 2
• We have a two-word query. For one term the postings list consists of the following 16 entries:
[2, 4, 9, 12, 14, 16, 18, 20, 24, 32, 47, 81, 120, 125, 158, 180]
and for the other it is the one-entry postings list
[81]
Work out how many comparisons would be done to intersect the two postings lists with the following two strategies.
i. Using a standard postings list.
ii. Using postings lists stored with skip pointers, with the suggested skip length of √P.
Problem 2 Solution
i. Using a standard postings list:
12 comparisons
(2,81), (4,81), (9,81), (12,81), (14,81), (16,81), (18,81), (20,81), (24,81), (32,81), (47,81), (81,81)
ii. Using postings lists stored with skip pointers, with the suggested skip length of √P:

7 comparisons
(2,81), (14,81), (24,81), (120,81), (32,81), (47,81), (81,81)

Problem 3
Problem 3 Solution

• (a) The skip pointer is followed only once:

24 → 75
• (b) 19 posting comparisons would be made if the posting lists are intersected
without the use of skip pointers
• (3,3), (5,5), (89,9), (89,15), (89,24), (89, 39), (89,60), (89,68), (89,75), (89,81),
(89,84), (89,89), (95,92), (95,96), (97,96), (97,97), (99,100), (100,100), (101,115)
• (c) 18 posting comparisons will be made by the algorithm in total (with skip
pointers)
• (3,3), (5,5), (9,89), (15,89), (24,89), (75,89), (92,89), (81,89), (84,89), (89,89), (95,92),
(95,115), (95,96), (97,96), (97,97), (99,100), (100,100), (101,115)

Assignment - II

• Why are skip pointers not useful for queries of the form x OR y?
• Exercise 1.6, 1.11, 1.9, 2.2, 2.3
