
PV211: Introduction to Information Retrieval


https://www.fi.muni.cz/~sojka/PV211

IIR 11: Probabilistic Information Retrieval


Handout version

Petr Sojka, Hinrich Schütze et al.

Faculty of Informatics, Masaryk University, Brno


Center for Information and Language Processing, University of Munich

2019-03-28


Overview

1 Recap

2 Probabilistic Approach to IR

3 Basic Probability Theory

4 Probability Ranking Principle

5 Appraisal&Extensions


Relevance feedback: Basic idea

The user issues a (short, simple) query.
The search engine returns a set of documents.
User marks some docs as relevant, some as irrelevant.
The search engine computes a new representation of the information need that should be better than the initial query.
The search engine runs the new query and returns new results.
The new results have (hopefully) better recall.


Rocchio illustrated

[Figure: Rocchio in vector space. The optimal query ~q_opt lies along the difference µ_R − µ_NR between the centroid µ_R of the relevant documents and the centroid µ_NR of the nonrelevant documents.]


Types of query expansion

Manual thesaurus (maintained by editors, e.g., PubMed)
Automatically derived thesaurus (e.g., based on co-occurrence statistics)
Query equivalence based on query log mining (common on the web, as in the “palm” example)


Query expansion at search engines

Main source of query expansion at search engines: query logs.
Example 1: After issuing the query [herbs], users frequently search for [herbal remedies].
→ “herbal remedies” is a potential expansion of “herbs”.
Example 2: Users searching for [flower pix] frequently click on the URL photobucket.com/flower. Users searching for [flower clipart] frequently click on the same URL.
→ “flower clipart” and “flower pix” are potential expansions of each other.


Take-away today

Probabilistically grounded approach to IR
Probability Ranking Principle
Models: BIM, BM25
Assumptions these models make


Relevance feedback from last lecture

Previous lecture: in relevance feedback, the user marks documents as relevant/irrelevant.
Given some known relevant and irrelevant documents, we compute weights for non-query terms that indicate how likely they are to occur in relevant documents.
Today: develop a probabilistic approach for relevance feedback and also a general probabilistic model for IR.


Probabilistic Approach to Retrieval

Given a user information need (represented as a query) and a collection of documents (transformed into document representations), a system must determine how well the documents satisfy the query.
An IR system has an uncertain understanding of the user query and makes an uncertain guess of whether a document satisfies the query.
Probability theory provides a principled foundation for such reasoning under uncertainty.
Probabilistic models exploit this foundation to estimate how likely it is that a document is relevant to a query.


Probabilistic IR Models at a Glance

Classical probabilistic retrieval model
  Probability ranking principle
  Binary Independence Model, BestMatch25 (Okapi)
  Bayesian networks for text retrieval
Language model approach to IR
  Important recent work, will be covered in the next lecture
Probabilistic methods are one of the oldest but also one of the currently hottest topics in IR.


Exercise: Probabilistic model vs. other models

Boolean model
Probabilistic models support ranking and thus are better than
the simple Boolean model.
Vector space model
The vector space model is also a formally defined model that
supports ranking.
Why would we want to look for an alternative to the vector
space model?


Probabilistic vs. vector space model

Vector space model: rank documents according to similarity to query.
The notion of similarity does not translate directly into an assessment of “is the document a good document to give to the user or not?”
The most similar document can be highly relevant or completely irrelevant.
Probability theory is arguably a cleaner formalization of what we really want an IR system to do: give relevant documents to the user.


Basic Probability Theory


For events A and B:
Joint probability $P(A \cap B)$ of both events occurring
Conditional probability $P(A \mid B)$ of event A occurring given that event B has occurred
The chain rule gives the fundamental relationship between joint and conditional probabilities:

$$P(AB) = P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)$$

Similarly for the complement $\bar{A}$ of an event:

$$P(\bar{A}B) = P(B \mid \bar{A})\,P(\bar{A})$$

Partition rule: if B can be divided into an exhaustive set of disjoint subcases, then P(B) is the sum of the probabilities of the subcases. A special case of this rule gives:

$$P(B) = P(AB) + P(\bar{A}B)$$
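As a quick sanity check of the chain and partition rules, here is a toy example in Python (the joint probabilities are made up purely for illustration):

```python
# Toy joint distribution over two binary events A and B (values chosen arbitrarily).
joint = {(1, 1): 0.2, (1, 0): 0.1, (0, 1): 0.4, (0, 0): 0.3}

# Partition rule: P(B) = P(AB) + P(A̅B)
p_B = joint[(1, 1)] + joint[(0, 1)]            # 0.6

# Conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_A_given_B = joint[(1, 1)] / p_B              # 0.333...

# Chain rule: P(A ∩ B) = P(A|B) P(B)
assert abs(joint[(1, 1)] - p_A_given_B * p_B) < 1e-12
```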


Basic Probability Theory


Bayes’ Rule for inverting conditional probabilities:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} = \frac{P(B \mid A)}{\sum_{X \in \{A, \bar{A}\}} P(B \mid X)\,P(X)}\,P(A)$$

It can be thought of as a way of updating probabilities:
Start off with the prior probability P(A) (initial estimate of how likely event A is in the absence of any other information).
Derive a posterior probability P(A|B) after having seen the evidence B, based on the likelihood of B occurring in the two cases that A does or does not hold.
Odds of an event provide a kind of multiplier for how probabilities change:

$$O(A) = \frac{P(A)}{P(\bar{A})} = \frac{P(A)}{1 - P(A)}$$
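A minimal sketch of this updating step in Python, with a made-up prior and made-up likelihoods:

```python
def posterior(prior_a: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Bayes' rule: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A̅)P(A̅)]."""
    evidence = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / evidence

def odds(p: float) -> float:
    """O(A) = P(A) / (1 - P(A))."""
    return p / (1 - p)

prior = 0.3                          # prior P(A), chosen arbitrarily
post = posterior(prior, 0.8, 0.2)    # posterior after seeing evidence B: 0.631...

# The odds get multiplied by the likelihood ratio P(B|A)/P(B|A̅) = 0.8/0.2 = 4:
print(odds(prior), odds(post))       # 0.428...  1.714...
```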

The Document Ranking Problem

Ranked retrieval setup: given a collection of documents, the user issues a query, and an ordered list of documents is returned.
Assume a binary notion of relevance: $R_{d,q}$ is a random dichotomous variable such that
  $R_{d,q} = 1$ if document d is relevant w.r.t. query q
  $R_{d,q} = 0$ otherwise
Probabilistic ranking orders documents decreasingly by their estimated probability of relevance w.r.t. the query: $P(R = 1 \mid d, q)$
Assume that the relevance of each document is independent of the relevance of other documents.


Probability Ranking Principle (PRP)

PRP in brief
If the retrieved documents (w.r.t. a query) are ranked decreasingly by their probability of relevance, then the effectiveness of the system will be the best that is obtainable.

PRP in full
“If [the IR] system’s response to each [query] is a ranking of the documents [...] in order of decreasing probability of relevance to the [query], where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose, the overall effectiveness of the system to its user will be the best that is obtainable on the basis of those data.”


Binary Independence Model (BIM)

Traditionally used with the PRP.
Assumptions:
‘Binary’ (equivalent to Boolean): documents and queries are represented as binary term incidence vectors. E.g., document d is represented by the vector $\vec{x} = (x_1, \ldots, x_M)$, where $x_t = 1$ if term t occurs in d and $x_t = 0$ otherwise. Different documents may have the same vector representation.
‘Independence’: no association between terms (not true, but it works in practice; this is the ‘naive’ assumption of Naive Bayes models).


Binary incidence matrix


term      | Anthony and Cleopatra | Julius Caesar | The Tempest | Hamlet | Othello | Macbeth | ...
Anthony   | 1 | 1 | 0 | 0 | 0 | 1
Brutus    | 1 | 1 | 0 | 1 | 0 | 0
Caesar    | 1 | 1 | 0 | 1 | 1 | 1
Calpurnia | 0 | 1 | 0 | 0 | 0 | 0
Cleopatra | 1 | 0 | 0 | 0 | 0 | 0
mercy     | 1 | 0 | 1 | 1 | 1 | 1
worser    | 1 | 0 | 1 | 1 | 1 | 0
...

Each document is represented as a binary vector $\in \{0, 1\}^{|V|}$.
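A minimal sketch of how such incidence vectors can be built (toy documents and naive whitespace tokenization, for illustration only):

```python
docs = {
    "Doc1": "Anthony and Brutus speak of Caesar and mercy",
    "Doc2": "Caesar praises Calpurnia",
}

# Vocabulary V: all distinct terms in the collection, in a fixed order.
vocab = sorted({term for text in docs.values() for term in text.lower().split()})

def incidence_vector(text: str) -> list[int]:
    """Binary vector in {0,1}^|V|: component t is 1 iff term t occurs in the document."""
    terms = set(text.lower().split())
    return [1 if t in terms else 0 for t in vocab]

for name, text in docs.items():
    print(name, incidence_vector(text))
```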


Binary Independence Model

To make a probabilistic retrieval strategy precise, we need to estimate how terms in documents contribute to relevance.
Find measurable statistics (term frequency, document frequency, document length) that affect judgments about document relevance.
Combine these statistics to estimate the probability P(R|d, q) of document relevance.
Next: how exactly we can do this.


Binary Independence Model

$P(R \mid d, q)$ is modeled using term incidence vectors as $P(R \mid \vec{x}, \vec{q})$:

$$P(R = 1 \mid \vec{x}, \vec{q}) = \frac{P(\vec{x} \mid R = 1, \vec{q})\,P(R = 1 \mid \vec{q})}{P(\vec{x} \mid \vec{q})}$$

$$P(R = 0 \mid \vec{x}, \vec{q}) = \frac{P(\vec{x} \mid R = 0, \vec{q})\,P(R = 0 \mid \vec{q})}{P(\vec{x} \mid \vec{q})}$$

$P(\vec{x} \mid R = 1, \vec{q})$ and $P(\vec{x} \mid R = 0, \vec{q})$: probability that if a relevant or irrelevant document is retrieved, then that document’s representation is $\vec{x}$.
Use statistics about the document collection to estimate these probabilities.


Binary Independence Model


$P(R \mid d, q)$ is modeled using term incidence vectors as $P(R \mid \vec{x}, \vec{q})$:

$$P(R = 1 \mid \vec{x}, \vec{q}) = \frac{P(\vec{x} \mid R = 1, \vec{q})\,P(R = 1 \mid \vec{q})}{P(\vec{x} \mid \vec{q})}$$

$$P(R = 0 \mid \vec{x}, \vec{q}) = \frac{P(\vec{x} \mid R = 0, \vec{q})\,P(R = 0 \mid \vec{q})}{P(\vec{x} \mid \vec{q})}$$

$P(R = 1 \mid \vec{q})$ and $P(R = 0 \mid \vec{q})$: prior probability of retrieving a relevant or irrelevant document for a query $\vec{q}$.
Estimate $P(R = 1 \mid \vec{q})$ and $P(R = 0 \mid \vec{q})$ from the percentage of relevant documents in the collection.
Since a document is either relevant or irrelevant to a query, we must have that:

$$P(R = 1 \mid \vec{x}, \vec{q}) + P(R = 0 \mid \vec{x}, \vec{q}) = 1$$


Deriving a Ranking Function for Query Terms (1)

Given a query q, ranking documents by $P(R = 1 \mid d, q)$ is modeled under BIM as ranking them by $P(R = 1 \mid \vec{x}, \vec{q})$.
Easier: rank documents by their odds of relevance (gives the same ranking):

$$O(R \mid \vec{x}, \vec{q}) = \frac{P(R = 1 \mid \vec{x}, \vec{q})}{P(R = 0 \mid \vec{x}, \vec{q})} = \frac{P(R = 1 \mid \vec{q})\,P(\vec{x} \mid R = 1, \vec{q})\,/\,P(\vec{x} \mid \vec{q})}{P(R = 0 \mid \vec{q})\,P(\vec{x} \mid R = 0, \vec{q})\,/\,P(\vec{x} \mid \vec{q})} = \frac{P(R = 1 \mid \vec{q})}{P(R = 0 \mid \vec{q})} \cdot \frac{P(\vec{x} \mid R = 1, \vec{q})}{P(\vec{x} \mid R = 0, \vec{q})}$$

$\frac{P(R = 1 \mid \vec{q})}{P(R = 0 \mid \vec{q})}$ is a constant for a given query and can be ignored.


Deriving a Ranking Function for Query Terms (2)

It is at this point that we make the Naive Bayes conditional independence assumption that the presence or absence of a word in a document is independent of the presence or absence of any other word (given the query):

$$\frac{P(\vec{x} \mid R = 1, \vec{q})}{P(\vec{x} \mid R = 0, \vec{q})} = \prod_{t=1}^{M} \frac{P(x_t \mid R = 1, \vec{q})}{P(x_t \mid R = 0, \vec{q})}$$

So:

$$O(R \mid \vec{x}, \vec{q}) = O(R \mid \vec{q}) \cdot \prod_{t=1}^{M} \frac{P(x_t \mid R = 1, \vec{q})}{P(x_t \mid R = 0, \vec{q})}$$


Exercise

Naive Bayes conditional independence assumption: the presence or absence of a word in a document is independent of the presence or absence of any other word (given the query).
Why is this wrong? Good example?
The PRP assumes that the relevance of each document is independent of the relevance of other documents.
Why is this wrong? Good example?


Deriving a Ranking Function for Query Terms (3)

Since each $x_t$ is either 0 or 1, we can separate the terms:

$$O(R \mid \vec{x}, \vec{q}) = O(R \mid \vec{q}) \cdot \prod_{t:\,x_t=1} \frac{P(x_t = 1 \mid R = 1, \vec{q})}{P(x_t = 1 \mid R = 0, \vec{q})} \cdot \prod_{t:\,x_t=0} \frac{P(x_t = 0 \mid R = 1, \vec{q})}{P(x_t = 0 \mid R = 0, \vec{q})}$$


Deriving a Ranking Function for Query Terms (4)

Let $p_t = P(x_t = 1 \mid R = 1, \vec{q})$ be the probability of a term appearing in a relevant document.
Let $u_t = P(x_t = 1 \mid R = 0, \vec{q})$ be the probability of a term appearing in an irrelevant document.
These can be displayed as a contingency table:

                       | relevant (R = 1) | irrelevant (R = 0)
Term present (x_t = 1) | p_t              | u_t
Term absent  (x_t = 0) | 1 − p_t          | 1 − u_t


Deriving a Ranking Function for Query Terms

Additional simplifying assumption: terms not occurring in the query are equally likely to occur in relevant and irrelevant documents: if $q_t = 0$, then $p_t = u_t$.
Now we need only consider terms in the products that appear in the query:

$$O(R \mid \vec{x}, \vec{q}) = O(R \mid \vec{q}) \cdot \prod_{t:\,x_t = q_t = 1} \frac{p_t}{u_t} \cdot \prod_{t:\,x_t = 0,\,q_t = 1} \frac{1 - p_t}{1 - u_t}$$

The left product is over query terms found in the document and the right product is over query terms not found in the document.


Deriving a Ranking Function for Query Terms


Including the query terms found in the document in the right product, but simultaneously dividing by them in the left product, gives:

$$O(R \mid \vec{x}, \vec{q}) = O(R \mid \vec{q}) \cdot \prod_{t:\,x_t = q_t = 1} \frac{p_t (1 - u_t)}{u_t (1 - p_t)} \cdot \prod_{t:\,q_t = 1} \frac{1 - p_t}{1 - u_t}$$

The left product is still over query terms found in the document, but the right product is now over all query terms, hence constant for a particular query, and it can be ignored.
→ The only quantity that needs to be estimated to rank documents w.r.t. a query is the left product.
Hence the Retrieval Status Value (RSV) in this model:

$$RSV_d = \log \prod_{t:\,x_t = q_t = 1} \frac{p_t (1 - u_t)}{u_t (1 - p_t)} = \sum_{t:\,x_t = q_t = 1} \log \frac{p_t (1 - u_t)}{u_t (1 - p_t)}$$


Deriving a Ranking Function for Query Terms


Equivalently: rank documents using the log odds ratios $c_t$ for the terms in the query:

$$c_t = \log \frac{p_t (1 - u_t)}{u_t (1 - p_t)} = \log \frac{p_t}{1 - p_t} - \log \frac{u_t}{1 - u_t}$$

The odds ratio is the ratio of two odds: (i) the odds of the term appearing if the document is relevant ($p_t/(1 - p_t)$), and (ii) the odds of the term appearing if the document is irrelevant ($u_t/(1 - u_t)$).
$c_t = 0$: the term has equal odds of appearing in relevant and irrelevant docs.
$c_t$ positive: higher odds of appearing in relevant documents.
$c_t$ negative: higher odds of appearing in irrelevant documents.


Term weight ct in BIM

$c_t = \log \frac{p_t}{1 - p_t} - \log \frac{u_t}{1 - u_t}$ functions as a term weight.
Retrieval status value for document d: $RSV_d = \sum_{t:\,x_t = q_t = 1} c_t$.
So BIM and the vector space model are identical on an operational level ...
... except that the term weights are different.
In particular: we can use the same data structures (inverted index etc.) for the two models.
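A minimal sketch of that operational identity: the familiar accumulate-over-postings scoring loop, with precomputed BIM weights c_t in place of tf-idf (the index contents and weight values below are made up for illustration):

```python
from collections import defaultdict

# Toy inverted index: term -> posting list of ids of documents containing the term.
index = {"obama": [1, 2, 3], "health": [1, 3], "plan": [2, 3]}

# Precomputed BIM term weights c_t (made-up values).
c = {"obama": 0.1, "health": 1.2, "plan": 0.9}

def rsv_scores(query_terms):
    """RSV_d = sum of c_t over query terms t that occur in document d."""
    scores = defaultdict(float)
    for t in query_terms:
        for d in index.get(t, []):
            scores[d] += c.get(t, 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rsv_scores(["obama", "health", "plan"]))  # document 3 ranks first
```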


How to compute probability estimates

For each term t in a query, estimate $c_t$ in the whole collection using a contingency table of counts of documents in the collection, where $df_t$ is the number of documents that contain term t:

                       | relevant | irrelevant           | Total
Term present (x_t = 1) | s        | df_t − s             | df_t
Term absent  (x_t = 0) | S − s    | (N − df_t) − (S − s) | N − df_t
Total                  | S        | N − S                | N

$$p_t = s/S \qquad u_t = (df_t - s)/(N - S)$$

$$c_t = K(N, df_t, S, s) = \log \frac{s/(S - s)}{(df_t - s)/((N - df_t) - (S - s))}$$


Avoiding zeros

If any of the counts is zero, then the term weight is not well-defined.
Maximum likelihood estimates do not work for rare events.
To avoid zeros: add 0.5 to each count (expected likelihood estimation = ELE).
For example, use S − s + 0.5 in the formula in place of S − s.
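A sketch of the ELE-smoothed weight computed directly from the contingency-table counts N, df_t, S, s defined above:

```python
import math

def c_t(N: int, df_t: int, S: int, s: int) -> float:
    """ELE-smoothed BIM term weight: add 0.5 to each count of the contingency table."""
    rel_odds = (s + 0.5) / (S - s + 0.5)                        # p_t / (1 - p_t), smoothed
    irr_odds = (df_t - s + 0.5) / ((N - df_t) - (S - s) + 0.5)  # u_t / (1 - u_t), smoothed
    return math.log(rel_odds / irr_odds)

# Even with s = 0 (term never observed in a relevant document) the weight stays finite:
print(c_t(N=1000, df_t=50, S=10, s=0))
```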


Exercise

Query: Obama health plan
Doc1: Obama rejects allegations about his own bad health
Doc2: The plan is to visit Obama
Doc3: Obama raises concerns with US health plan reforms
Estimate the probability that the above documents are relevant to the query. Use a contingency table. These are the only three documents in the collection.


Simplifying assumption

Assuming that relevant documents are a very small percentage of the collection, approximate statistics for irrelevant documents by statistics from the whole collection.
Hence, $u_t$ (the probability of term occurrence in irrelevant documents for a query) is $df_t/N$ and

$$\log \frac{1 - u_t}{u_t} = \log \frac{N - df_t}{df_t} \approx \log \frac{N}{df_t}$$

This should look familiar to you ...
The above approximation cannot easily be extended to relevant documents.
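A quick numeric check of how tight the approximation is (collection sizes chosen arbitrarily):

```python
import math

N, df_t = 1_000_000, 1_000
exact = math.log((N - df_t) / df_t)  # log[(1 - u_t)/u_t] with u_t = df_t/N
approx = math.log(N / df_t)          # the familiar idf weight
print(exact, approx)                 # 6.9068... vs. 6.9078...
```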


Probability estimates in relevance feedback

Statistics of relevant documents ($p_t$) in relevance feedback can be estimated using maximum likelihood estimation or ELE (add 0.5).
Use the frequency of term occurrence in known relevant documents.
This is the basis of probabilistic approaches to relevance feedback weighting in a feedback loop.
The exercise we just did was a probabilistic relevance feedback exercise, since we were assuming the availability of relevance judgments.


Probability estimates in ad hoc retrieval

Ad hoc retrieval: no user-supplied relevance judgments available.
In this case: assume that $p_t$ is constant over all terms $x_t$ in the query and that $p_t = 0.5$.
Each term is equally likely to occur in a relevant document, and so the $p_t$ and $(1 - p_t)$ factors cancel out in the expression for RSV.
This is a weak estimate, but it does not disagree violently with the expectation that query terms appear in many but not all relevant documents.
Combining this method with the earlier approximation for $u_t$, the document ranking is determined simply by which query terms occur in documents, scaled by their idf weighting.
For short documents (titles or abstracts) in one-pass retrieval situations, this estimate can be quite satisfactory.

History and summary of assumptions

Among the oldest formal models in IR.
Maron & Kuhns, 1960: Since an IR system cannot predict with certainty which document is relevant, we should deal with probabilities.
Assumptions for getting reasonable approximations of the needed probabilities (in the BIM):
  Boolean representation of documents/queries/relevance
  Term independence
  Out-of-query terms do not affect retrieval
  Document relevance values are independent


How different are vector space and BIM?

They are not that different.
In either case you build an information retrieval scheme in the exact same way.
For probabilistic IR, at the end, you score queries not by cosine similarity and tf-idf in a vector space, but by a slightly different formula motivated by probability theory.
Next: how to add term frequency and length normalization to the probabilistic model.


Okapi BM25: Overview

Okapi BM25 is a probabilistic model that incorporates term frequency (i.e., it is nonbinary) and length normalization.
The BIM was originally designed for short catalog records of fairly consistent length, and it works reasonably well in these contexts.
For modern full-text search collections, a model should pay attention to term frequency and document length.
BestMatch25 (a.k.a. BM25 or Okapi) is sensitive to these quantities.
BM25 is one of the most widely used and robust retrieval models.


Okapi BM25: Starting point

The simplest score for document d is just idf weighting of the query terms present in the document:

$$RSV_d = \sum_{t \in q} \log \frac{N}{df_t}$$


Okapi BM25 basic weighting

Improve the idf term $\log N/df_t$ by factoring in term frequency and document length:

$$RSV_d = \sum_{t \in q} \log \frac{N}{df_t} \cdot \frac{(k_1 + 1)\,tf_{td}}{k_1\big((1 - b) + b \cdot (L_d / L_{ave})\big) + tf_{td}}$$

$tf_{td}$: term frequency in document d
$L_d$ ($L_{ave}$): length of document d (average document length in the whole collection)
$k_1$: tuning parameter controlling the document term frequency scaling
$b$: tuning parameter controlling the scaling by document length
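A minimal sketch of this score for a single document; the collection statistics and parameter values below are illustrative only:

```python
import math

def bm25_rsv(query_terms, tf_d, L_d, L_ave, N, df, k1=1.5, b=0.75):
    """RSV_d = sum over query terms of idf * (k1+1)tf / (k1((1-b) + b*L_d/L_ave) + tf)."""
    score = 0.0
    for t in query_terms:
        tf = tf_d.get(t, 0)
        if tf == 0 or t not in df:
            continue                                  # absent terms contribute nothing
        idf = math.log(N / df[t])
        norm = k1 * ((1 - b) + b * (L_d / L_ave))     # document length normalization
        score += idf * (k1 + 1) * tf / (norm + tf)
    return score

# Toy example: one document in a collection of N = 1000 documents.
print(bm25_rsv(["obama", "health", "plan"],
               tf_d={"obama": 2, "health": 1}, L_d=120, L_ave=100,
               N=1000, df={"obama": 300, "health": 100, "plan": 150}))
```

Note that for $k_1 = 0$ the term frequency factor collapses to 1 and the score reduces to the pure idf sum from the previous slide, which is one of the cases in the exercise below.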


Exercise

Interpret the BM25 weighting formula for $k_1 = 0$.
Interpret the BM25 weighting formula for $k_1 = 1$ and $b = 0$.
Interpret the BM25 weighting formula for $k_1 \to \infty$ and $b = 0$.
Interpret the BM25 weighting formula for $k_1 \to \infty$ and $b = 1$.


Okapi BM25 weighting for long queries


For long queries, use similar weighting for query terms:

$$RSV_d = \sum_{t \in q} \log \frac{N}{df_t} \cdot \frac{(k_1 + 1)\,tf_{td}}{k_1\big((1 - b) + b \cdot (L_d / L_{ave})\big) + tf_{td}} \cdot \frac{(k_3 + 1)\,tf_{tq}}{k_3 + tf_{tq}}$$

$tf_{tq}$: term frequency in the query q
$k_3$: tuning parameter controlling term frequency scaling of the query
No length normalization of queries (because retrieval is being done with respect to a single fixed query).
The above tuning parameters should ideally be set to optimize performance on a development test collection. In the absence of such optimization, experiments have shown that reasonable values are to set $k_1$ and $k_3$ to a value between 1.2 and 2, and $b = 0.75$.
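The long-query variant only multiplies one more factor into each term's contribution; a sketch of that factor, under the same illustrative assumptions as the earlier snippet:

```python
def query_tf_factor(tf_tq: int, k3: float = 1.5) -> float:
    """Query term-frequency factor (k3 + 1)tf_tq / (k3 + tf_tq); no query length normalization."""
    return (k3 + 1) * tf_tq / (k3 + tf_tq)

# Each per-term BM25 contribution is multiplied by this factor, e.g.
#   score += idf * doc_tf_factor * query_tf_factor(tf_tq)
print(query_tf_factor(1), query_tf_factor(3))  # grows with tf_tq, saturating toward k3 + 1
```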

Which ranking model should I use?

I want something basic and simple → use the vector space model with tf-idf weighting.
I want to use a state-of-the-art ranking model with excellent performance → use language models or BM25 with tuned parameters.
In between: BM25 or language models with no or just one tuned parameter.


Take-away today

Probabilistically grounded approach to IR
Probability Ranking Principle
Models: BIM, BM25
Assumptions these models make


Resources

Chapter 11 of IIR
Resources at https://www.fi.muni.cz/~sojka/PV211/ and http://cislmu.org, materials in MU IS and FI MU library
