Introduction to Information Retrieval
Introducing Information Retrieval and Web Search
Information Retrieval
Information Retrieval (IR) is finding material (usually
documents) of an unstructured nature (usually text)
that satisfies an information need from within large
collections (usually stored on computers).
[Bar chart: unstructured vs. structured data, compared by data volume and by market cap.]
[Diagram: the search process.
Info need: "info about removing mice without killing them"
→ (misformulation?) Query: "how trap mice alive"
→ Search engine, run over the Collection → Results
→ Query refinement feeds back into the query.]
Term-document incidence matrices
Each entry is 1 if the play contains the word, 0 otherwise:

              Antony &   Julius   The
              Cleopatra  Caesar   Tempest  Hamlet  Othello  Macbeth
  Antony         1          1        0        0       0        1
  Brutus         1          1        0        1       0        0
  Caesar         1          1        0        1       1        1
  Calpurnia      0          1        0        0       0        0
  Cleopatra      1          0        0        0       0        0
  mercy          1          0        1        1       1        1
  worser         1          0        1        1       1        0
Incidence vectors
So we have a 0/1 vector for each term.
To answer the query Brutus AND Caesar AND NOT Calpurnia: take the vectors for Brutus, Caesar and Calpurnia (complemented) and bitwise AND them:
  Brutus              110100
  AND Caesar          110111
  AND NOT Calpurnia   101111
  =                   100100
The 1s are in positions 1 and 4: Antony and Cleopatra and Hamlet.
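A minimal sketch of this in Python (document names and 0/1 vectors as above; encoding the vectors as integer bit patterns is one possible choice, not the slide's prescription):

    # Incidence vectors as Python integers; leftmost bit = first play in `docs`.
    docs = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
            "Hamlet", "Othello", "Macbeth"]

    incidence = {
        "Brutus":    0b110100,
        "Caesar":    0b110111,
        "Calpurnia": 0b010000,
    }

    n = len(docs)
    mask = (1 << n) - 1  # keep only the six document bits after complementing

    # Brutus AND Caesar AND NOT Calpurnia
    result = incidence["Brutus"] & incidence["Caesar"] & (~incidence["Calpurnia"] & mask)

    # Bit i from the left corresponds to docs[i].
    print([docs[i] for i in range(n) if result & (1 << (n - 1 - i))])
    # ['Antony and Cleopatra', 'Hamlet']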
Answers to query
Antony and Cleopatra, Act III, Scene ii:
Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii:
Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Bigger collections
Consider N = 1 million documents, each with about 1,000 words.
At an average of 6 bytes/word including spaces/punctuation, that is 6 GB of data in the documents.
Say there are M = 500K distinct terms among these.
A 500K × 1M incidence matrix would have half a trillion 0s and 1s, but no more than one billion 1s: the matrix is extremely sparse, so we only want to record the positions of the 1s.
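A quick back-of-the-envelope check of these numbers (a minimal sketch; the figures are the slide's approximations):

    # Back-of-the-envelope sizes: N = 1M docs, ~1000 words/doc, ~6 bytes/word.
    N = 1_000_000           # documents
    words_per_doc = 1_000
    bytes_per_word = 6
    M = 500_000             # distinct terms

    print(N * words_per_doc * bytes_per_word / 1e9)  # ~6.0 GB of raw text
    print(M * N)                                     # 5e11 incidence-matrix cells
    print(N * words_per_doc)                         # <= 1e9 ones: extremely sparse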
The Inverted Index
The key data structure underlying modern IR
Inverted index
For each term t, we must store a list of all documents that contain t.
Identify each doc by a docID, a document serial number.
Can we use fixed-size arrays for this?
  Brutus    → 1 2 4 11 31 45 173 174
  Caesar    → 1 2 4 5 6 16 57 132
  Calpurnia → 2 31 54 101
Inverted index
We need variable-size postings lists.
On disk, a continuous run of postings is normal and best.
In memory, can use linked lists or variable-length arrays; some tradeoffs in size/ease of insertion.
Each docID in a list is a posting; the terms form the dictionary and their lists form the postings.
Postings are sorted by docID (more later on why).
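An in-memory sketch of the dictionary-plus-postings structure in Python (a dict mapping each term to a sorted, variable-length list of docIDs; the sample postings are the ones shown above):

    # Dictionary -> variable-length postings lists, each sorted by docID.
    index = {
        "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
        "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
        "Calpurnia": [2, 31, 54, 101],
    }

    def postings(term):
        """Return the (possibly empty) postings list for a term."""
        return index.get(term, [])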
Inverted index construction (the indexing pipeline):
Documents → Tokenizer → token stream: Friends Romans Countrymen
→ Linguistic modules → modified tokens: friend roman countryman
→ Indexer → inverted index: friend → 2 4; roman → 1 2; countryman → 13 16
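A toy version of this pipeline in Python (the tokenizer and the "linguistic module" are drastically simplified stand-ins for real tokenization and normalization):

    import re
    from collections import defaultdict

    def tokenize(text):
        # Toy tokenizer: lowercase and split on runs of non-letters.
        return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

    def normalize(token):
        # Toy "linguistic module": crude plural stripping (friends -> friend).
        # It misses irregular forms such as countrymen -> countryman.
        return token[:-1] if token.endswith("s") else token

    def build_index(docs):
        """docs: dict of docID -> text. Returns term -> sorted list of docIDs."""
        postings = defaultdict(set)
        for doc_id, text in docs.items():
            for token in tokenize(text):
                postings[normalize(token)].add(doc_id)
        return {term: sorted(ids) for term, ids in postings.items()}

    print(build_index({1: "Friends, Romans, countrymen.", 2: "Roman friends!"}))
    # {'friend': [1, 2], 'roman': [1, 2], 'countrymen': [1]}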
[Worked example: building the index from two short documents, Doc 1 and Doc 2; each dictionary term is stored with a frequency. Why frequency? Will discuss later.]
[Diagram: the two parts of the index: the dictionary (terms and counts) and, via pointers, the postings lists.]
IR system implementation questions:
• How do we index efficiently?
• How much storage do we need?
Query processing with an inverted index
Query processing: AND
Consider processing the query: Brutus AND Caesar
Locate each term in the dictionary, retrieve its postings, then "merge" (intersect) the two postings lists:
  Brutus → 2 4 8 16 32 64 128
  Caesar → 1 2 3 5 8 13 21 34
The merge
Walk through the two postings lists simultaneously, in time linear in the total number of postings entries:
  Brutus → 2 4 8 16 32 64 128
  Caesar → 1 2 3 5 8 13 21 34
If the list lengths are x and y, the merge takes O(x + y) operations.
Crucial: the postings are sorted by docID.
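A sketch of this merge in Python (a two-pointer intersection of two docID-sorted lists, in the spirit of IIR Figure 1.6, though not the book's exact pseudocode):

    def intersect(p1, p2):
        """Intersect two docID-sorted postings lists in O(len(p1) + len(p2))."""
        answer = []
        i = j = 0
        while i < len(p1) and j < len(p2):
            if p1[i] == p2[j]:
                answer.append(p1[i])
                i += 1
                j += 1
            elif p1[i] < p2[j]:
                i += 1
            else:
                j += 1
        return answer

    print(intersect([2, 4, 8, 16, 32, 64, 128], [1, 2, 3, 5, 8, 13, 21, 34]))  # [2, 8]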
Phrase queries and positional indexes
Phrase queries
We want to be able to answer queries such as
“stanford university” – as a phrase
Thus the sentence “I went to university at Stanford”
is not a match.
The concept of phrase queries has proven easily
understood by users; one of the few “advanced search”
ideas that works
Many more queries are implicit phrase queries
For this, it no longer suffices to store only
<term : docs> entries
Positional index example
In the postings, store for each term the positions at which it occurs in each document:
<be: 993427;
  1: 7, 18, 33, 72, 86, 231;
  2: 3, 149;
  4: 17, 191, 291, 430, 434;
  5: 363, 367, …>
Which of docs 1, 2, 4, 5 could contain "to be or not to be"?
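A minimal sketch of a positional index and a phrase-query check in Python (the structure term → {docID: positions} and the sample numbers are illustrative assumptions, not the slide's real data):

    # Positional index sketch: term -> {docID: sorted list of word positions}.
    # The positions below are invented for illustration only.
    positional_index = {
        "to": {1: [2, 5], 4: [8, 16, 190], 7: [3]},
        "be": {1: [7, 18], 4: [17, 191], 5: [363]},
    }

    def phrase_docs(phrase, index):
        """Return docIDs in which the words of `phrase` occur at consecutive positions."""
        words = phrase.split()
        # Candidate docs must contain every word of the phrase.
        candidates = set(index.get(words[0], {}))
        for w in words[1:]:
            candidates &= set(index.get(w, {}))
        matches = []
        for doc in sorted(candidates):
            # The phrase starts at position p if word i appears at p + i for every i.
            if any(all(p + i in index[words[i]][doc] for i in range(len(words)))
                   for p in index[words[0]][doc]):
                matches.append(doc)
        return matches

    print(phrase_docs("to be", positional_index))  # [4]  ("to" at 16, "be" at 17)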
Proximity queries
LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
Again, here, /k means “within k words of”.
Clearly, positional indexes can be used for such
queries; biword indexes cannot.
Exercise: Adapt the linear merge of postings to handle proximity queries. Can you make it work for any value of k?
This is a little tricky to do correctly and efficiently; see Figure 2.12 of IIR. A sketch follows below.
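One possible adaptation, sketched in Python (a simplification in the spirit of IIR Figure 2.12, not the book's exact algorithm; the sample index is made up):

    def within_k(pos1, pos2, k):
        """True if some position in pos1 is within k of some position in pos2.
        Both lists are sorted by position; the two-pointer walk keeps this linear."""
        i = j = 0
        while i < len(pos1) and j < len(pos2):
            if abs(pos1[i] - pos2[j]) <= k:
                return True
            if pos1[i] < pos2[j]:
                i += 1
            else:
                j += 1
        return False

    def proximity_docs(term1, term2, k, index):
        """DocIDs in which term1 and term2 occur within k words of each other."""
        d1, d2 = index.get(term1, {}), index.get(term2, {})
        return [doc for doc in sorted(set(d1) & set(d2)) if within_k(d1[doc], d2[doc], k)]

    idx = {"federal": {3: [10, 40]}, "tort": {3: [12, 95]}}
    print(proximity_docs("federal", "tort", 2, idx))  # [3]  (positions 10 and 12)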
Rules of thumb
A positional index is 2–4 times as large as a non-positional index.
Combination schemes
Biword indexes and positional indexes can be profitably combined.
For particular phrases ("Michael Jackson", "Britney Spears") it is inefficient to keep on merging positional postings lists; even more so for phrases like "The Who".
Williams et al. (2004) evaluate a more sophisticated mixed indexing scheme:
A typical web query mixture was executed in ¼ of the time of using just a positional index.
It required 26% more space than having a positional index alone.