
Introduction to Information Retrieval

Unit 2

Scoring, Term Weighting and the Vector Space Model

Note

Many images, graphs, texts, slides, definitions, etc. are adapted
from various books as well as various sources on the World Wide Web.
This is simply a presentation of concepts based on the original work
of many contributors to the field as well as the WWW.

Dr. Sunita Jahirabadkar


Introduction to Information Retrieval

Topics..
▪ Parametric and Zone indexes
▪ Ranked retrieval
▪ Scoring documents
▪ Term frequency and Weighting
▪ Collection statistics
▪ Weighting schemes
▪ Vector space model for scoring
▪ Variants of tf-idf functions
▪ Components of an “Information Retrieval System”
Introduction to Information Retrieval Ch. 6

Ranked retrieval
▪ Thus far, our queries have all been Boolean.
▪ Documents either match or don’t.
▪ Good for expert users with precise understanding of their
needs and the collection
▪ Not good for the majority of users.
▪ Most users are incapable of writing Boolean queries (or
they are capable, but find it too much work).
▪ Most users don’t want to wade through 1000s of
results.
▪ This is particularly true of web search.
Introduction to Information Retrieval Ch. 6

Problem with Boolean search: feast or famine
▪ Boolean queries often result in either too few (=0) or too
many (1000s) results.
▪ Query 1: “standard user dlink 650” → 200,000 hits
▪ Query 2: “standard user dlink 650 no card found” → 0
hits
▪ It takes a lot of skill to come up with a query that produces
a manageable number of hits.
▪ AND gives too few; OR gives too many
Introduction to Information Retrieval

Ranked retrieval models


▪ Rather than a set of documents satisfying a query
expression, in ranked retrieval, the system returns an
ordering over the (top) documents in the collection for a
query
▪ Free text queries: Rather than a query language of
operators and expressions, the user’s query is just one or
more words in a human language
▪ In principle, there are two separate choices here, but in
practice, ranked retrieval has normally been associated
with free text queries and vice versa

Introduction to Information Retrieval Ch. 6

Feast or famine: not a problem in ranked retrieval
▪ When a system produces a ranked result set, large result
sets are not an issue
▪ Indeed, the size of the result set is not an issue
▪ We just show the top k ( ≈ 10) results
▪ We don’t overwhelm the user

▪ Premise: the ranking algorithm works


Introduction to Information Retrieval Ch. 6

Scoring as the basis of ranked retrieval


▪ We wish to return in order the documents most likely to be
useful to the searcher
▪ How can we rank-order the documents in the collection
with respect to a query?
▪ Assign a score – say in [0, 1] – to each document
▪ This score measures how well document and query
“match”.
Introduction to Information Retrieval

Parametric and zone indexes

Introduction to Information Retrieval

Metadata, Fields, Zones


▪ Documents can have metadata and fields
▪ E.g., title of document, author of document, date of
creation
▪ Zones similar to fields, but can contain arbitrary text
▪ E.g., abstract, introduction, … of a research paper

▪ We can have an index for each field / zone


▪ To support queries like “documents having merchant in
the title and william in the author list”
▪ Either separate index for each field/zone, or part of the
same index
Introduction to Information Retrieval

Weighted zone scoring


▪ Given a Boolean query q and a document d
▪ Compute a ‘zone match score’ in [0,1] for each zone /
field of d with q
▪ Compute a linear combination of the zone match scores,
where each zone is assigned a weight (the weights sum
to 1.0); see the sketch at the end of this slide
▪ Sometimes called ‘ranked Boolean retrieval’
▪ How to decide the weights?
▪ Option 1: Specified by experts, e.g., match in “title”
has higher significance than match in “body”
▪ Option 2: Learn from training examples – application
of Machine Learning
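
A minimal sketch of weighted zone scoring, assuming a Boolean zone
match score (1 if every query term occurs in the zone, else 0); the
function, the dict-based document layout, and the example weights
are all hypothetical:

def weighted_zone_score(query_terms, doc_zones, weights):
    # doc_zones: zone name -> zone text; weights: zone name -> weight (sums to 1.0)
    score = 0.0
    for zone, text in doc_zones.items():
        tokens = set(text.lower().split())
        match = 1.0 if all(t in tokens for t in query_terms) else 0.0
        score += weights.get(zone, 0.0) * match  # linear combination of zone scores
    return score

doc = {"title": "the merchant of venice", "author": "william shakespeare"}
print(weighted_zone_score(["merchant"], doc, {"title": 0.7, "author": 0.3}))  # 0.7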
Introduction to Information Retrieval

Weighting the importance of terms

Introduction to Information Retrieval Ch. 6

Query-document matching scores


▪ We need a way of assigning a score to a query / document
pair
▪ Let’s start with a one-term query
▪ If the query term does not occur in the document: score
should be 0
▪ If the query term occurs in the document: score 1
▪ For a multi-term query
▪ View the query as well as the document as sets of
words
▪ Compute some similarity measure between the two sets
Introduction to Information Retrieval Ch. 6

Jaccard coefficient
▪ A commonly used measure of overlap of two sets A and B
▪ jaccard(A,B) = |A ∩ B| / |A ∪ B|
▪ jaccard(A,A) = 1
▪ jaccard(A,B) = 0 if A ∩ B = ∅
▪ A and B don’t have to be the same size.
▪ Always assigns a number between 0 and 1.
Introduction to Information Retrieval Ch. 6

Jaccard coefficient: Scoring example


▪ What is the query-document match score that the Jaccard
coefficient computes for each of the two documents
below?
▪ Query: ides of march
▪ Document 1: caesar died in march
▪ Document 2: the long march
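
A minimal sketch computing these two match scores, treating the
query and each document as a set of words:

def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

q  = "ides of march".split()
d1 = "caesar died in march".split()
d2 = "the long march".split()
print(jaccard(q, d1))  # 1/6 ≈ 0.17 (only "march" is shared)
print(jaccard(q, d2))  # 1/5 = 0.2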
Introduction to Information Retrieval Ch. 6

Issues with Jaccard for scoring


▪ It doesn’t consider term frequency (how many times a
term occurs in a document)
▪ A document / zone that mentions a query-term more
often intuitively matches the query more
▪ Rare terms in a collection are more informative than
frequent terms. Jaccard doesn’t consider this information
▪ We need a more sophisticated way of normalizing for
length
Introduction to Information Retrieval Sec. 6.2

Recall: Binary term-document incidence matrix

term       | Antony and Cleopatra | Julius Caesar | The Tempest | Hamlet | Othello | Macbeth
Antony     |          1           |       1       |      0      |   0    |    0    |    1
Brutus     |          1           |       1       |      0      |   1    |    0    |    0
Caesar     |          1           |       1       |      0      |   1    |    1    |    1
Calpurnia  |          0           |       1       |      0      |   0    |    0    |    0
Cleopatra  |          1           |       0       |      0      |   0    |    0    |    0
mercy      |          1           |       0       |      1      |   1    |    1    |    1
worser     |          1           |       0       |      1      |   1    |    1    |    0

Each document is represented by a binary vector ∈ {0,1}^|V|


Introduction to Information Retrieval Sec. 6.2

Term-document count matrices


▪ Consider the number of occurrences of a term in a
document:
▪ Each document is a count vector in ℕ^|V|: a column below

term       | Antony and Cleopatra | Julius Caesar | The Tempest | Hamlet | Othello | Macbeth
Antony     |         157          |      73       |      0      |   0    |    0    |    0
Brutus     |          4           |      157      |      0      |   1    |    0    |    0
Caesar     |         232          |      227      |      0      |   2    |    1    |    1
Calpurnia  |          0           |      10       |      0      |   0    |    0    |    0
Cleopatra  |          57          |       0       |      0      |   0    |    0    |    0
mercy      |          2           |       0       |      3      |   5    |    5    |    1
worser     |          2           |       0       |      1      |   1    |    1    |    0
Introduction to Information Retrieval

Bag of words model


▪ Vector representation doesn’t consider the ordering of
words in a document
▪ John is quicker than Mary and Mary is quicker than John
have the same vectors
▪ This is called the bag of words model.
▪ In a sense, this is a step back: The positional index was
able to distinguish these two documents.
▪ For now: bag of words model
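
A two-line check of this point, using the sentences from the slide:

from collections import Counter

# Word order is discarded, so both sentences yield the same count vector.
d1 = "John is quicker than Mary"
d2 = "Mary is quicker than John"
print(Counter(d1.lower().split()) == Counter(d2.lower().split()))  # True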
Introduction to Information Retrieval

Term frequency tf
▪ The term frequency tf_t,d of term t in document d is defined
as the number of times that t occurs in d.
▪ We want to use tf when computing query-document match
scores. But how?
▪ Raw term frequency is not what we want:
▪ A document with 10 occurrences of the term is more
relevant than a document with 1 occurrence of the term.
▪ But not 10 times more relevant.
▪ Relevance does not increase proportionally with term
frequency.

NB: frequency = count in IR


Introduction to Information Retrieval Sec. 6.2

Log-frequency weighting
▪ The log frequency weight of term t in d is
w_t,d = 1 + log10(tf_t,d)   if tf_t,d > 0
      = 0                   otherwise
▪ 0 → 0, 1 → 1, 2 → 1.3, 10 → 2, 1000 → 4, etc.
▪ Score for a document-query pair: sum over terms t in both
q and d:
Score(q,d) = Σ_{t ∈ q∩d} (1 + log10 tf_t,d)

▪ The score is 0 if none of the query terms is present in the


document.
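
A minimal sketch of the log-frequency weight and the resulting
overlap score; the example counts are hypothetical:

import math

def log_tf_weight(tf):
    # 1 + log10(tf) for tf > 0, else 0
    return 1 + math.log10(tf) if tf > 0 else 0.0

def overlap_score(query_terms, doc_tf):
    # doc_tf: term -> raw count in the document
    return sum(log_tf_weight(doc_tf.get(t, 0)) for t in query_terms)

print(log_tf_weight(1000))                             # 4.0
print(overlap_score(["ides", "march"], {"march": 2}))  # ≈ 1.3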
Introduction to Information Retrieval Sec. 6.2.1

Document frequency
▪ Rare terms are more informative than frequent terms
▪ Recall stop words
▪ Consider a term in the query that is rare in the collection
(e.g., arachnocentric)
▪ A document containing this term is very likely to be relevant
to the query arachnocentric
▪ → We want a high weight for rare terms like arachnocentric.
Introduction to Information Retrieval Sec. 6.2.1

Document frequency, continued


▪ Frequent terms are less informative than rare terms
▪ Consider a query term that is frequent in the collection
(e.g., high, increase, line)
▪ A document containing such a term is more likely to be
relevant than a document that doesn’t
▪ But it’s not a sure indicator of relevance.
▪ → For frequent terms, we want positive weights for words
like high, increase, and line
▪ But lower weights than for rare terms.
▪ We will use document frequency (df) to capture this.
Introduction to Information Retrieval Sec. 6.2.1

idf weight
▪ df_t is the document frequency of t: the number of
documents that contain t
▪ df_t is an inverse measure of the informativeness of t
▪ df_t ≤ N
▪ We define the idf (inverse document frequency) of t by
idf_t = log10(N/df_t)
▪ We use log (N/dft) instead of N/dft to “dampen” the effect
of idf.

It will turn out that the base of the log is immaterial.


Introduction to Information Retrieval Sec. 6.2.1

idf example, suppose N = 10 lakh (1,000,000)


term       | df_t      | idf_t
calpurnia  |         1 |   6
animal     |       100 |   4
sunday     |     1,000 |   3
fly        |    10,000 |   2
under      |   100,000 |   1
the        | 1,000,000 |   0

idf_t = log10(N/df_t)


There is one idf value for each term t in a collection.
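
A few lines reproducing the idf column above (N = 1,000,000):

import math

N = 1_000_000
for term, df in [("calpurnia", 1), ("animal", 100), ("sunday", 1_000),
                 ("fly", 10_000), ("under", 100_000), ("the", 1_000_000)]:
    print(term, math.log10(N / df))
# calpurnia 6.0, animal 4.0, sunday 3.0, fly 2.0, under 1.0, the 0.0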
Introduction to Information Retrieval

Effect of idf on ranking


▪ Does idf have an effect on ranking for one-term queries,
like
▪ iPhone
▪ idf has no effect on ranking one-term queries
▪ idf affects the ranking of documents for queries with at
least two terms
▪ For the query capricious person, idf weighting makes
occurrences of capricious count for much more in the
final document ranking than occurrences of person.

Introduction to Information Retrieval Sec. 6.2.1

Collection vs. Document frequency


▪ The collection frequency of t is the number of
occurrences of t in the collection, counting multiple
occurrences.
▪ Example:
word       | collection frequency | document frequency
insurance  |        10440         |        3997
try        |        10422         |        8760

▪ Which word is a better search term (and should get a


higher weight)?
Introduction to Information Retrieval Sec. 6.2.2

tf-idf weighting
▪ The tf-idf weight of a term is the product of its tf weight and
its idf weight.

w_t,d = log10(1 + tf_t,d) × log10(N/df_t)
▪ Best known weighting scheme in information retrieval
▪ Note: the “-” in tf-idf is a hyphen, not a minus sign!
▪ Alternative names: tf.idf, tf x idf
▪ Increases with the number of occurrences of term within a
document
▪ Increases with the rarity of the term in the collection
Introduction to Information Retrieval Sec. 6.2.2

Score for a document given a query

Score(q,d) = Σ_{t ∈ q∩d} tf-idf_t,d

▪ There are many variants


▪ How “tf” is computed (with/without logs)
▪ Whether the terms in the query are also weighted
▪…
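
A small sketch of this scoring rule, using the log10(1 + tf) tf
variant from the previous slide; the df values and collection size
here are hypothetical:

import math

def tf_idf(tf, df, N):
    # tf weight log10(1 + tf) times idf weight log10(N/df)
    return math.log10(1 + tf) * math.log10(N / df)

def score(query_terms, doc_tf, df, N):
    # sum tf-idf weights over query terms (absent terms contribute 0)
    return sum(tf_idf(doc_tf.get(t, 0), df[t], N) for t in query_terms)

N = 1_000_000                            # hypothetical collection size
df = {"caesar": 10_000, "brutus": 100}   # hypothetical document frequencies
print(score(["caesar", "brutus"], {"caesar": 3, "brutus": 1}, df, N))  # ≈ 2.41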

Introduction to Information Retrieval Sec. 6.3

Binary → count → weight matrix


term       | Antony and Cleopatra | Julius Caesar | The Tempest | Hamlet | Othello | Macbeth
Antony     |         5.25         |     3.18      |      0      |   0    |    0    |   0.35
Brutus     |         1.21         |     6.1       |      0      |   1    |    0    |    0
Caesar     |         8.59         |     2.54      |      0      |  1.51  |  0.25   |    0
Calpurnia  |          0           |     1.54      |      0      |   0    |    0    |    0
Cleopatra  |         2.85         |      0        |      0      |   0    |    0    |    0
mercy      |         1.51         |      0        |     1.9     |  0.12  |  5.25   |   0.88
worser     |         1.37         |      0        |     0.11    |  4.15  |  0.25   |   1.95

Each document is now represented by a real-valued vector
of tf-idf weights ∈ ℝ^|V|
Introduction to Information Retrieval Sec. 6.3

Documents as vectors
▪ So we have a |V|-dimensional vector space
▪ Terms are axes of the space
▪ Documents are points or vectors in this space

▪ Very high-dimensional space: tens of millions of


dimensions in case of a web search engine
▪ These are very sparse vectors - most entries are zero.
Introduction to Information Retrieval Sec. 6.3

Queries as vectors
▪ Key idea 1: Do the same for queries: represent queries as
vectors in the space
▪ Key idea 2: Rank documents according to their proximity
to the query in this space
▪ proximity = similarity of vectors
▪ proximity ≈ inverse of distance
▪ Recall: We do this because we want to get away from the
you’re-either-in-or-out Boolean model.
▪ Instead: rank more relevant documents higher than less
relevant documents
Introduction to Information Retrieval Sec. 6.3

Formalizing vector space proximity


▪ First cut: distance between two points
▪ ( = distance between the end points of the two vectors)
▪ Euclidean distance?
▪ Euclidean distance is a bad idea . . .
▪ . . . because Euclidean distance is large for vectors of
different lengths.
▪ Two documents having similar content can have large
Euclidean distance simply because one document is much
longer than the other
Introduction to Information Retrieval Sec. 6.3

Why distance is a bad idea


The Euclidean distance between q and d2 is large even though
the distribution of terms in the query q and the distribution
of terms in the document d2 are very similar.
Introduction to Information Retrieval Sec. 6.3

Use angle instead of distance


▪ Thought experiment: take a document d and append it to
itself. Call this document d′.
▪ “Semantically” d and d′ have the same content
▪ The Euclidean distance between the two documents can be
quite large
▪ The angle between the two documents is 0, corresponding
to maximal similarity.

▪ Key idea: Rank documents according to angle with query.


Introduction to Information Retrieval Sec. 6.3

From angles to cosines


▪ The following two notions are equivalent.
▪ Rank documents in increasing order of the angle
between query and document
▪ Rank documents in decreasing order of cosine(query,
document)
▪ Cosine is a monotonically decreasing function on the
interval [0°, 180°]
Introduction to Information Retrieval Sec. 6.3

cosine(query, document)
cos(q, d) = (q • d) / (|q| |d|) = (q/|q|) • (d/|d|)
          = ( Σ_{i=1}^{|V|} q_i d_i ) / ( √(Σ_{i=1}^{|V|} q_i²) √(Σ_{i=1}^{|V|} d_i²) )

where q • d is the dot product, q/|q| and d/|d| are unit vectors,
q_i is the tf-idf weight of term i in the query, and
d_i is the tf-idf weight of term i in the document.

cos(q,d) is the cosine similarity of q and d … or,
equivalently, the cosine of the angle between q and d.
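
A direct transcription of this formula; the example vectors are
made up:

import math

def cosine(q, d):
    dot = sum(qi * di for qi, di in zip(q, d))
    nq = math.sqrt(sum(qi * qi for qi in q))
    nd = math.sqrt(sum(di * di for di in d))
    return dot / (nq * nd) if nq and nd else 0.0

# Same direction, different lengths: cosine is 1.0 even though
# the Euclidean distance between the two vectors is not 0.
print(cosine([1.0, 0.0, 2.0], [2.0, 0.0, 4.0]))  # 1.0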
Introduction to Information Retrieval

Cosine for length-normalized vectors


▪ For length-normalized vectors, cosine similarity is simply
the dot product (or scalar product):

cos(q, d) = q • d = Σ_{i=1}^{|V|} q_i d_i

for q, d length-normalized.

Introduction to Information Retrieval

Cosine similarity illustrated

Introduction to Information Retrieval Sec. 6.3

Cosine similarity amongst 3 documents


How similar are the novels SaS (Sense and Sensibility),
PaP (Pride and Prejudice), and WH (Wuthering Heights)?

Term frequencies (counts):

term       | SaS | PaP | WH
affection  | 115 |  58 | 20
jealous    |  10 |   7 | 11
gossip     |   2 |   0 |  6
wuthering  |   0 |   0 | 38

Note: To simplify this example, we don’t do idf weighting.


Introduction to Information Retrieval Sec. 6.3

3 documents example contd.


Log frequency weighting:

term       | SaS  | PaP  | WH
affection  | 3.06 | 2.76 | 2.30
jealous    | 2.00 | 1.85 | 2.04
gossip     | 1.30 | 0    | 1.78
wuthering  | 0    | 0    | 2.58

After length normalization:

term       | SaS   | PaP   | WH
affection  | 0.789 | 0.832 | 0.524
jealous    | 0.515 | 0.555 | 0.465
gossip     | 0.335 | 0     | 0.405
wuthering  | 0     | 0     | 0.588

cos(SaS,PaP) ≈
0.789 × 0.832 + 0.515 × 0.555 + 0.335 × 0.0 + 0.0 × 0.0
≈ 0.94
cos(SaS,WH) ≈ 0.79
cos(PaP,WH) ≈ 0.69
Why do we have cos(SaS,PaP) > cos(SaS,WH)?
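
A short script reproducing these numbers from the raw counts
(log-frequency weighting, length normalization, then dot products):

import math

def log_tf(tf):
    return 1 + math.log10(tf) if tf > 0 else 0.0

def normalize(vec):
    norm = math.sqrt(sum(w * w for w in vec))
    return [w / norm for w in vec]

counts = {  # rows: affection, jealous, gossip, wuthering
    "SaS": [115, 10, 2, 0],
    "PaP": [58, 7, 0, 0],
    "WH":  [20, 11, 6, 38],
}
vecs = {k: normalize([log_tf(tf) for tf in v]) for k, v in counts.items()}
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(round(dot(vecs["SaS"], vecs["PaP"]), 2))  # 0.94
print(round(dot(vecs["SaS"], vecs["WH"]), 2))   # 0.79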
Introduction to Information Retrieval Sec. 6.4

tf-idf weighting has many variants

[SMART notation table of tf, df, and normalization variants not reproduced; columns headed 'n' are acronyms for weight schemes.]

Why is the base of the log in idf immaterial?


Introduction to Information Retrieval Sec. 6.4

Weighting may differ in queries vs documents
▪ Many search engines allow for different weightings for
queries vs. documents
▪ SMART Notation: denotes the combination in use in an
engine, with the notation ddd.qqq, using the acronyms
from the previous table
▪ A very standard weighting scheme is: lnc.ltc
▪ Document: logarithmic tf (l as first character), no idf and
cosine normalization
▪ Query: logarithmic tf (l in leftmost column), idf (t in
second column), cosine normalization …
A bad idea?
Introduction to Information Retrieval Sec. 6.4

tf-idf example: lnc.ltc


Document: car insurance auto insurance
Query: best car insurance
Query (ltc):

term       | tf-raw | tf-wt | df    | idf | wt  | n'lize
auto       |   0    |  0    |  5000 | 2.3 | 0   | 0
best       |   1    |  1    | 50000 | 1.3 | 1.3 | 0.34
car        |   1    |  1    | 10000 | 2.0 | 2.0 | 0.52
insurance  |   1    |  1    |  1000 | 3.0 | 3.0 | 0.78

Document (lnc):

term       | tf-raw | tf-wt | wt  | n'lize | Prod (query × doc)
auto       |   1    |  1    | 1   | 0.52   | 0
best       |   0    |  0    | 0   | 0      | 0
car        |   1    |  1    | 1   | 0.52   | 0.27
insurance  |   2    |  1.3  | 1.3 | 0.68   | 0.53
Exercise: what is N, the number of docs?
Doc length = √(1² + 0² + 1² + 1.3²) ≈ 1.92
Score = 0+0+0.27+0.53 = 0.8
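
A sketch reproducing this computation; N = 1,000,000 is assumed
here, consistent with the idf values in the table (e.g.,
idf(car) = log10(N/10000) = 2.0):

import math

def lnc(tf_counts):
    # l: 1 + log10(tf); n: no idf; c: cosine normalization
    w = {t: 1 + math.log10(tf) for t, tf in tf_counts.items() if tf > 0}
    norm = math.sqrt(sum(x * x for x in w.values()))
    return {t: x / norm for t, x in w.items()}

def ltc(tf_counts, df, N):
    # l: 1 + log10(tf); t: idf = log10(N/df); c: cosine normalization
    w = {t: (1 + math.log10(tf)) * math.log10(N / df[t])
         for t, tf in tf_counts.items() if tf > 0}
    norm = math.sqrt(sum(x * x for x in w.values()))
    return {t: x / norm for t, x in w.items()}

N = 1_000_000
df = {"auto": 5000, "best": 50000, "car": 10000, "insurance": 1000}
q = ltc({"best": 1, "car": 1, "insurance": 1}, df, N)
d = lnc({"car": 1, "insurance": 2, "auto": 1})
print(round(sum(qw * d.get(t, 0.0) for t, qw in q.items()), 2))  # 0.8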
Introduction to Information Retrieval

Summary – vector space ranking


▪ Represent the query as a weighted tf-idf vector
▪ Represent each document as a weighted tf-idf vector
▪ Compute the cosine similarity score for the query vector
and each document vector
▪ Rank documents with respect to the query by score
▪ Return the top K (e.g., K = 10) to the user
Introduction to Information Retrieval

Points to note
▪ A document may have a high cosine similarity score for a
query, even if it does not contain all terms in the query
▪ How to speed up vector space retrieval?
▪ Can store the inverse document frequency (e.g., N/df_t)
at the head of the postings list for term t
▪ Store the term frequency (e.g., tf_t,d) in each postings
entry of the postings list for term t
▪ For a multi-word query, the postings lists of the various
query terms can even be traversed concurrently (a small
sketch of this layout follows below)
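
A minimal sketch of this layout with hypothetical postings: idf is
stored once at the head of each list, tf inside each posting, and
documents are scored by traversing the query terms' lists:

import math

postings = {
    "car":       {"idf": 2.0, "postings": [(1, 1), (7, 3)]},   # (docID, tf_t,d)
    "insurance": {"idf": 3.0, "postings": [(1, 2), (4, 1)]},
}

def rank(query_terms, k=10):
    scores = {}
    for t in query_terms:
        entry = postings.get(t)
        if entry is None:
            continue
        for doc_id, tf in entry["postings"]:
            w = (1 + math.log10(tf)) * entry["idf"]   # tf-idf contribution
            scores[doc_id] = scores.get(doc_id, 0.0) + w
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

print(rank(["car", "insurance"]))  # doc 1 scores highest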
