
Probabilistic IR

Giorgio Gambosi

Information Retrieval Course
MSc in Computer Science (CdLM in Informatica)
Università di Roma Tor Vergata

Derived from slides produced by C. Manning and by H. Schütze


Overview

1 Probabilistic Approach to IR

2 Basic Probability Theory

3 Probability Ranking Principle

4 Appraisal & Extensions


Probabilistic Approach to Retrieval

Given a user information need (represented as a query) and a collection of documents (transformed into document representations), a system must determine how well the documents satisfy the query.
An IR system has an uncertain understanding of the user query, and makes an uncertain guess of whether a document satisfies the query.
Probability theory provides a principled foundation for such reasoning under uncertainty.
Probabilistic models exploit this foundation to estimate how likely it is that a document is relevant to a query.


Probabilistic IR Models at a Glance

Classical probabilistic retrieval model:
    Probability ranking principle
    Binary Independence Model, BestMatch25 (Okapi)
Bayesian networks for text retrieval
Language model approach to IR: important recent work, will be covered in the next lecture.
Probabilistic methods are one of the oldest but also one of the currently hottest topics in IR.


Probabilistic vs. vector space model

Vector space model: rank documents according to similarity to the query.
The notion of similarity does not translate directly into an assessment of "is the document a good document to give to the user or not?"
The most similar document can be highly relevant or completely nonrelevant.
Probability theory is arguably a cleaner formalization of what we really want an IR system to do: give relevant documents to the user.


Basic Probability Theory


For events A and B:
Joint probability P(A \cap B) of both events occurring.
Conditional probability P(A|B) of event A occurring given that event B has occurred.
The chain rule gives the fundamental relationship between joint and conditional probabilities:

    P(AB) = P(A \cap B) = P(A|B)P(B) = P(B|A)P(A)

Similarly for the complement \bar{A} of an event A:

    P(\bar{A}B) = P(B|\bar{A})P(\bar{A})

Partition rule: if B can be divided into an exhaustive set of disjoint subcases, then P(B) is the sum of the probabilities of the subcases. A special case of this rule gives:

    P(B) = P(AB) + P(\bar{A}B)


Basic Probability Theory


Bayes' Rule for inverting conditional probabilities:

    P(A|B) = \frac{P(B|A)P(A)}{P(B)} = \left[ \frac{P(B|A)}{\sum_{X \in \{A, \bar{A}\}} P(B|X)P(X)} \right] P(A)

Can be thought of as a way of updating probabilities:
Start off with the prior probability P(A) (initial estimate of how likely event A is in the absence of any other information).
Derive a posterior probability P(A|B) after having seen the evidence B, based on the likelihood of B occurring in the two cases that A does or does not hold.
The odds of an event provide a kind of multiplier for how probabilities change:

    O(A) = \frac{P(A)}{P(\bar{A})} = \frac{P(A)}{1 - P(A)}
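As a quick illustration (a minimal Python sketch, not part of the original slides; all numbers are hypothetical), the following code turns a prior into a posterior with Bayes' rule and converts between probabilities and odds:

    def posterior(prior_a, lik_b_given_a, lik_b_given_not_a):
        """Bayes' rule: P(A|B) = P(B|A)P(A) / P(B),
        with P(B) expanded by the partition rule over A and its complement."""
        p_b = lik_b_given_a * prior_a + lik_b_given_not_a * (1 - prior_a)
        return lik_b_given_a * prior_a / p_b

    def odds(p):
        """O(A) = P(A) / (1 - P(A))."""
        return p / (1 - p)

    # Hypothetical numbers: P(A) = 0.3, P(B|A) = 0.8, P(B|not A) = 0.2
    p = posterior(0.3, 0.8, 0.2)
    print(p)          # ~0.632: seeing evidence B raises belief in A
    print(odds(0.3))  # prior odds ~0.43
    print(odds(p))    # posterior odds ~1.71 = prior odds x likelihood ratio (4)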

The Document Ranking Problem

Ranked retrieval setup: given a collection of documents, the user issues a query, and an ordered list of documents is returned.
Assume a binary notion of relevance: R_{d,q} is a dichotomous random variable such that
    R_{d,q} = 1 if document d is relevant w.r.t. query q
    R_{d,q} = 0 otherwise
Probabilistic ranking orders documents decreasingly by their estimated probability of relevance w.r.t. the query: P(R = 1|d, q).
Assume that the relevance of each document is independent of the relevance of other documents.


Probability Ranking Principle (PRP)

PRP in brief
If the retrieved documents (w.r.t. a query) are ranked decreasingly on their probability of relevance, then the effectiveness of the system will be the best that is obtainable.

PRP in full
If [the IR] system's response to each [query] is a ranking of the documents [...] in order of decreasing probability of relevance to the [query], where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose, the overall effectiveness of the system to its user will be the best that is obtainable on the basis of those data.


Binary Independence Model (BIM)

Traditionally used with the PRP.
Assumptions:
'Binary' (equivalent to Boolean): documents and queries are represented as binary term incidence vectors.
E.g., document d is represented by the vector \vec{x} = (x_1, \ldots, x_M), where x_t = 1 if term t occurs in d and x_t = 0 otherwise.
Different documents may have the same vector representation.
'Independence': no association between terms (not true in practice, but it works - the 'naive' assumption of Naive Bayes models).


Binary incidence matrix


              Anthony and   Julius   The       Hamlet   Othello   Macbeth   ...
              Cleopatra     Caesar   Tempest
Anthony       1             1        0         0        0         1
Brutus        1             1        0         1        0         0
Caesar        1             1        0         1        1         1
Calpurnia     0             1        0         0        0         0
Cleopatra     1             0        0         0        0         0
mercy         1             0        1         1        1         1
worser        1             0        1         1        1         0
...

Each document is represented as a binary vector ∈ {0, 1}^{|V|}.
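A minimal Python sketch (not from the original slides) of how such binary incidence vectors can be built, using the three-document toy collection from the exercise later in this lecture:

    # Build binary term incidence vectors for a toy corpus.
    docs = {
        "doc1": "obama rejects allegations about his own bad health",
        "doc2": "the plan is to visit obama",
        "doc3": "obama raises concerns with us health plan reforms",
    }

    vocab = sorted({t for text in docs.values() for t in text.split()})

    def incidence_vector(text):
        """x_t = 1 if term t occurs in the document, 0 otherwise."""
        terms = set(text.split())
        return [1 if t in terms else 0 for t in vocab]

    vectors = {d: incidence_vector(text) for d, text in docs.items()}
    print(vectors["doc2"])  # same length |V| for every document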


Binary Independence Model

To make a probabilistic retrieval strategy precise, we need to estimate how terms in documents contribute to relevance:
Find measurable statistics (term frequency, document frequency, document length) that affect judgments about document relevance.
Combine these statistics to estimate the probability P(R|d, q) of document relevance.
Next: how exactly we can do this.


Binary Independence Model

P(R|d, q) is modeled using term incidence vectors as P(R|\vec{x}, \vec{q}):

    P(R = 1|\vec{x}, \vec{q}) = \frac{P(\vec{x}|R = 1, \vec{q}) \, P(R = 1|\vec{q})}{P(\vec{x}|\vec{q})}

    P(R = 0|\vec{x}, \vec{q}) = \frac{P(\vec{x}|R = 0, \vec{q}) \, P(R = 0|\vec{q})}{P(\vec{x}|\vec{q})}

P(\vec{x}|R = 1, \vec{q}) and P(\vec{x}|R = 0, \vec{q}): the probability that if a relevant (respectively, nonrelevant) document is retrieved, that document's representation is \vec{x}.
Use statistics about the document collection to estimate these probabilities.


Binary Independence Model


P(R|d, q) is modeled using term incidence vectors as P(R|\vec{x}, \vec{q}):

    P(R = 1|\vec{x}, \vec{q}) = \frac{P(\vec{x}|R = 1, \vec{q}) \, P(R = 1|\vec{q})}{P(\vec{x}|\vec{q})}

    P(R = 0|\vec{x}, \vec{q}) = \frac{P(\vec{x}|R = 0, \vec{q}) \, P(R = 0|\vec{q})}{P(\vec{x}|\vec{q})}

P(R = 1|\vec{q}) and P(R = 0|\vec{q}): the prior probability of retrieving a relevant (respectively, nonrelevant) document for a query \vec{q}.
Estimate P(R = 1|\vec{q}) and P(R = 0|\vec{q}) from the percentage of relevant documents in the collection.
Since a document is either relevant or nonrelevant to a query, we must have:

    P(R = 1|\vec{x}, \vec{q}) + P(R = 0|\vec{x}, \vec{q}) = 1


Deriving a Ranking Function for Query Terms (1)

Given a query q, ranking documents by P(R = 1|d, q) is modeled under BIM as ranking them by P(R = 1|\vec{x}, \vec{q}).
Easier: rank documents by their odds of relevance (gives the same ranking):

    O(R|\vec{x}, \vec{q}) = \frac{P(R = 1|\vec{x}, \vec{q})}{P(R = 0|\vec{x}, \vec{q})}
                          = \frac{P(R = 1|\vec{q}) \, P(\vec{x}|R = 1, \vec{q}) / P(\vec{x}|\vec{q})}{P(R = 0|\vec{q}) \, P(\vec{x}|R = 0, \vec{q}) / P(\vec{x}|\vec{q})}
                          = \frac{P(R = 1|\vec{q})}{P(R = 0|\vec{q})} \cdot \frac{P(\vec{x}|R = 1, \vec{q})}{P(\vec{x}|R = 0, \vec{q})}

\frac{P(R = 1|\vec{q})}{P(R = 0|\vec{q})} is a constant for a given query, so it can be ignored for ranking purposes.


Deriving a Ranking Function for Query Terms (2)

It is at this point that we make the Naive Bayes conditional independence assumption: the presence or absence of a word in a document is independent of the presence or absence of any other word (given the query):

    \frac{P(\vec{x}|R = 1, \vec{q})}{P(\vec{x}|R = 0, \vec{q})} = \prod_{t=1}^{M} \frac{P(x_t|R = 1, \vec{q})}{P(x_t|R = 0, \vec{q})}

So:

    O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t=1}^{M} \frac{P(x_t|R = 1, \vec{q})}{P(x_t|R = 0, \vec{q})}


Deriving a Ranking Function for Query Terms (3)

Since each x_t is either 0 or 1, we can separate the terms:

    O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t: x_t = 1} \frac{P(x_t = 1|R = 1, \vec{q})}{P(x_t = 1|R = 0, \vec{q})} \cdot \prod_{t: x_t = 0} \frac{P(x_t = 0|R = 1, \vec{q})}{P(x_t = 0|R = 0, \vec{q})}


Deriving a Ranking Function for Query Terms (4)

Let p_t = P(x_t = 1|R = 1, \vec{q}) be the probability of a term appearing in a relevant document.
Let u_t = P(x_t = 1|R = 0, \vec{q}) be the probability of a term appearing in a nonrelevant document.
This can be displayed as a contingency table:

                             relevant (R = 1)   nonrelevant (R = 0)
    Term present   x_t = 1   p_t                u_t
    Term absent    x_t = 0   1 - p_t            1 - u_t


Deriving a Ranking Function for Query Terms

Additional simplifying assumption: terms not occurring in the query are equally likely to occur in relevant and nonrelevant documents: if q_t = 0, then p_t = u_t.
Now we need only consider the factors for terms that appear in the query:

    O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t: x_t = q_t = 1} \frac{p_t}{u_t} \cdot \prod_{t: x_t = 0, q_t = 1} \frac{1 - p_t}{1 - u_t}

The left product is over query terms found in the document, and the right product is over query terms not found in the document.


Deriving a Ranking Function for Query Terms


Including the query terms found in the document in the right product, while simultaneously dividing by them in the left product, gives:

    O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t: x_t = q_t = 1} \frac{p_t (1 - u_t)}{u_t (1 - p_t)} \cdot \prod_{t: q_t = 1} \frac{1 - p_t}{1 - u_t}

The left product is still over query terms found in the document, but the right product is now over all query terms, hence constant for a particular query and can be ignored.
→ The only quantity that needs to be estimated to rank documents w.r.t. a query is the left product.
Hence the Retrieval Status Value (RSV) in this model:

    RSV_d = \log \prod_{t: x_t = q_t = 1} \frac{p_t (1 - u_t)}{u_t (1 - p_t)} = \sum_{t: x_t = q_t = 1} \log \frac{p_t (1 - u_t)}{u_t (1 - p_t)}
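A minimal Python sketch (not from the original slides) of this RSV computation, assuming the per-term estimates p_t and u_t are already available; the numbers are hypothetical:

    import math

    def rsv(doc_terms, query_terms, p, u):
        """BIM Retrieval Status Value: sum of log odds ratios
        log[p_t(1 - u_t) / (u_t(1 - p_t))] over query terms in the doc."""
        return sum(math.log(p[t] * (1 - u[t]) / (u[t] * (1 - p[t])))
                   for t in query_terms if t in doc_terms)

    p = {"health": 0.7, "plan": 0.6}  # hypothetical P(term | relevant)
    u = {"health": 0.2, "plan": 0.3}  # hypothetical P(term | nonrelevant)
    print(rsv({"obama", "health", "plan"}, ["health", "plan"], p, u))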


Deriving a Ranking Function for Query Terms


Equivalently: rank documents using the log odds ratios c_t for the terms in the query:

    c_t = \log \frac{p_t (1 - u_t)}{u_t (1 - p_t)} = \log \frac{p_t}{1 - p_t} - \log \frac{u_t}{1 - u_t}

The odds ratio is the ratio of two odds: (i) the odds of the term appearing if the document is relevant, p_t / (1 - p_t), and (ii) the odds of the term appearing if the document is nonrelevant, u_t / (1 - u_t).
c_t = 0: the term has equal odds of appearing in relevant and nonrelevant documents.
c_t positive: higher odds of appearing in relevant documents.
c_t negative: higher odds of appearing in nonrelevant documents.


Term weight c_t in BIM

c_t = \log \frac{p_t}{1 - p_t} - \log \frac{u_t}{1 - u_t} functions as a term weight.
Retrieval status value for document d: RSV_d = \sum_{t: x_t = q_t = 1} c_t.
So BIM and the vector space model are identical on an operational level ...
... except that the term weights are different.
In particular: we can use the same data structures (inverted index etc.) for the two models.


How to compute probability estimates

For each term t in a query, estimate c_t in the whole collection using a contingency table of counts of documents in the collection, where df_t is the number of documents that contain term t:

                             relevant   nonrelevant             Total
    Term present   x_t = 1   s          df_t - s                df_t
    Term absent    x_t = 0   S - s      (N - df_t) - (S - s)    N - df_t
    Total                    S          N - S                   N

    p_t = s / S
    u_t = (df_t - s) / (N - S)

    c_t = K(N, df_t, S, s) = \log \frac{s / (S - s)}{(df_t - s) / ((N - df_t) - (S - s))}


Avoiding zeros

If any of the counts is zero, then the term weight is not well-defined.
Maximum likelihood estimates do not work for rare events.
To avoid zeros: add 0.5 to each count (expected likelihood estimation, ELE).
For example, use S - s + 0.5 in the formula in place of S - s.
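A small Python sketch (not from the original slides) of the smoothed c_t estimate from the contingency table above; the counts are hypothetical:

    import math

    def c_t(N, df_t, S, s, k=0.5):
        """Smoothed log odds ratio c_t (ELE: add k = 0.5 to each count)."""
        p_odds = (s + k) / (S - s + k)                        # p_t / (1 - p_t)
        u_odds = (df_t - s + k) / ((N - df_t) - (S - s) + k)  # u_t / (1 - u_t)
        return math.log(p_odds / u_odds)

    # Hypothetical collection: N = 1000 docs, df_t = 100, S = 10 relevant, s = 6
    print(c_t(1000, 100, 10, 6))  # positive: the term indicates relevance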


Exercise

Query: Obama health plan
Doc1: Obama rejects allegations about his own bad health
Doc2: The plan is to visit Obama
Doc3: Obama raises concerns with US health plan reforms
Estimate the probability that the above documents are relevant to the query. Use a contingency table. These are the only three documents in the collection.


Simplifying assumption

Assuming that relevant documents are a very small percentage of the collection, approximate statistics for nonrelevant documents by statistics from the whole collection.
Hence, u_t (the probability of term occurrence in nonrelevant documents for a query) is df_t / N, and

    \log \frac{1 - u_t}{u_t} = \log \frac{N - df_t}{df_t} \approx \log \frac{N}{df_t}

This results in

    c_t = \log \frac{p_t (1 - u_t)}{u_t (1 - p_t)} \approx \log \frac{p_t}{1 - p_t} + \log \frac{N}{df_t}


Probability estimates in ad hoc retrieval

Ad hoc retrieval: no relevance judgments available.
In this case: assume that p_t is constant over all terms x_t in the query, with p_t = 0.5.
Each term is then equally likely to occur in a relevant document, and so the p_t and (1 - p_t) factors cancel out in the expression for RSV.
A weak estimate, but it doesn't disagree violently with the expectation that query terms appear in many but not all relevant documents.


Probability estimates in ad hoc retrieval

Combining this method with the earlier approximation for u_t, the document ranking is determined simply by which query terms occur in documents, scaled by their idf weighting:

    RSV_d = \sum_{t: x_t = q_t = 1} \log \frac{p_t (1 - u_t)}{u_t (1 - p_t)} \approx \sum_{t: x_t = q_t = 1} \log \frac{N}{df_t}

For short documents (titles or abstracts) in one-pass retrieval situations, this estimate can be quite satisfactory.
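A quick Python sketch (not from the original slides) of idf-only RSV scoring under these assumptions, using the document frequencies of the three-document exercise collection above:

    import math

    def idf_rsv(doc_terms, query_terms, df, N):
        """Ad hoc BIM with p_t = 0.5 and u_t = df_t / N:
        RSV reduces to the summed idf of matching query terms."""
        return sum(math.log(N / df[t]) for t in query_terms if t in doc_terms)

    # Document frequencies in the N = 3 exercise collection
    df = {"obama": 3, "health": 2, "plan": 2}
    print(idf_rsv({"obama", "health", "plan"}, ["obama", "health", "plan"],
                  df, N=3))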


How different are vector space and BIM?

They are not that different.
In either case you build an information retrieval scheme in the exact same way.
For probabilistic IR, at the end, you score queries not by cosine similarity and tf-idf in a vector space, but by a slightly different formula motivated by probability theory.
Next: how to add term frequency and length normalization to the probabilistic model.


Okapi BM25: Overview

Okapi BM25 is a probabilistic model that incorporates term frequency (i.e., it is nonbinary) and length normalization.
BIM was originally designed for short catalog records of fairly consistent length, and it works reasonably well in those contexts.
For modern full-text search collections, a model should pay attention to term frequency and document length.
BestMatch25 (a.k.a. BM25 or Okapi) is sensitive to these quantities.
BM25 is one of the most widely used and robust retrieval models.


Okapi BM25: Starting point

The simplest score for document d is just the idf weighting of the query terms present in the document:

    RSV_d = \sum_{t \in q} \log \frac{N}{df_t}


Okapi BM25 first basic weighting

Improve the idf term \log(N/df_t) by factoring in term frequency:

    RSV_d = \sum_{t \in q} \frac{(k_1 + 1) \, tf_{td}}{k_1 + tf_{td}} \cdot \log \frac{N}{df_t}

k_1: tuning parameter controlling the document term frequency scaling.
The (k_1 + 1) factor does not change the ranking, but makes the term frequency factor equal to 1 when tf_{td} = 1.
Similar to tf-idf, but term scores are bounded.


Role of parameter k1

k_1 helps determine the term frequency saturation characteristics: it limits how much a single query term can affect the score of a given document, by approaching an asymptote.
A higher/lower k_1 value changes the slope of the BM25 tf curve, i.e., how much score extra occurrences of a term add.
Usually, values around 1.2-2 are used.
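A small Python sketch (not from the original slides) showing this saturation: the tf factor (k_1 + 1) tf / (k_1 + tf) equals 1 at tf = 1 and approaches the asymptote k_1 + 1 as tf grows:

    def tf_factor(tf, k1):
        """BM25 term frequency saturation: (k1 + 1) * tf / (k1 + tf)."""
        return (k1 + 1) * tf / (k1 + tf)

    for k1 in (0.5, 1.2, 2.0):
        # Each row climbs toward its asymptote k1 + 1
        print(k1, [round(tf_factor(tf, k1), 2) for tf in (1, 2, 4, 8, 16, 32)])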


Exercise

Interpret the weighting formula for k_1 = 0.
Interpret the weighting formula for k_1 = 1.
Interpret the weighting formula for k_1 → ∞.


Document length normalization

Longer documents are likely to have larger tf_{td} values.
Why might documents be longer?
    Verbosity: suggests the observed tf_{td} is too high.
    Larger scope: suggests the observed tf_{td} may be right.
A real document collection probably has both effects, so we should apply some kind of partial normalization.


Document length normalization

Document length:

    L_d = \sum_t tf_{td}

L_{ave}: average document length over the collection.
Length normalization component:

    B = (1 - b) + b \frac{L_d}{L_{ave}}, \qquad 0 \le b \le 1

b = 1: full document length normalization.
b = 0: no document length normalization.


Role of parameter b

b shows up in the denominator and is multiplied by the ratio of the document length to the average length just discussed.
If b is bigger, the effect of the document's length relative to the average length is amplified.
Usually, b has a value around 0.75.


Okapi BM25 basic weighting

Improve the idf term \log(N/df_t) by factoring in term frequency and document length:

    RSV_d = \sum_{t \in q} \frac{(k_1 + 1) \, tf_{td}}{k_1 ((1 - b) + b \frac{L_d}{L_{ave}}) + tf_{td}} \cdot \log \frac{N}{df_t}

tf_{td}: term frequency in document d.
L_d (L_{ave}): length of document d (average document length in the whole collection).
k_1: tuning parameter controlling the document term frequency scaling (k_1 = 0 is the binary model; large k_1 approaches raw term frequency); usually around 1.2-2.
b: tuning parameter controlling the scaling by document length (b = 0 is no normalization, b = 1 is full normalization); usually around 0.75.
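A self-contained Python sketch (not from the original slides) of this BM25 scoring formula, reusing the three-document exercise collection as a toy corpus:

    import math
    from collections import Counter

    docs = {
        "doc1": "obama rejects allegations about his own bad health",
        "doc2": "the plan is to visit obama",
        "doc3": "obama raises concerns with us health plan reforms",
    }

    N = len(docs)
    tfs = {d: Counter(text.split()) for d, text in docs.items()}
    L = {d: sum(tf.values()) for d, tf in tfs.items()}
    L_ave = sum(L.values()) / N
    df = Counter(t for tf in tfs.values() for t in tf)  # document frequency

    def bm25(doc, query, k1=1.2, b=0.75):
        """RSV_d = sum over query terms of
        (k1+1)*tf / (k1*((1-b) + b*L_d/L_ave) + tf) * log(N/df_t)."""
        tf = tfs[doc]
        norm = k1 * ((1 - b) + b * L[doc] / L_ave)
        return sum((k1 + 1) * tf[t] / (norm + tf[t]) * math.log(N / df[t])
                   for t in query.split() if t in tf)

    for d in docs:
        print(d, round(bm25(d, "obama health plan"), 3))

Note that with N = 3 the idf of "obama" is zero, since it occurs in every document, so only "health" and "plan" contribute to the scores.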

Exercise

Interpret the BM25 weighting formula for k_1 = 0.
Interpret the BM25 weighting formula for k_1 = 1 and b = 0.
Interpret the BM25 weighting formula for k_1 → ∞ and b = 0.
Interpret the BM25 weighting formula for k_1 → ∞ and b = 1.


BM25 vs tf-idf

Suppose your query is [machine learning].
Suppose you have 2 documents with term counts:
    doc1: learning 1024; machine 1
    doc2: learning 16; machine 8
Suppose that machine occurs in 1 out of 7 documents in the collection, and that learning occurs in 1 out of 10 documents in the collection.
tf-idf weighting, (1 + \log_{10}(1 + tf)) \cdot \log_{10}(N/df):
    doc1: 41.1
    doc2: 35.8
BM25 with k_1 = 2:
    doc1: 31
    doc2: 42.6


Okapi BM25 weighting for long queries


For long queries, use a similar weighting for query terms:

    RSV_d = \sum_{t \in q} \log \frac{N}{df_t} \cdot \frac{(k_1 + 1) \, tf_{td}}{k_1 ((1 - b) + b (L_d / L_{ave})) + tf_{td}} \cdot \frac{(k_3 + 1) \, tf_{tq}}{k_3 + tf_{tq}}

tf_{tq}: term frequency in the query q.
k_3: tuning parameter controlling term frequency scaling of the query.
No length normalization of queries (because retrieval is being done with respect to a single fixed query).
The above tuning parameters should ideally be set to optimize performance on a development test collection. In the absence of such optimization, experiments have shown that reasonable values are k_1 and k_3 between 1.2 and 2, and b = 0.75.
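Extending the earlier sketch (again an illustration under assumed, hypothetical inputs, not the slides' own code), the query-side saturation factor k_3 can be added as follows:

    import math
    from collections import Counter

    def bm25_long_query(tf_doc, tf_query, df, N, L_d, L_ave,
                        k1=1.2, b=0.75, k3=1.5):
        """BM25 with query term frequency saturation (the k3 factor)."""
        norm = k1 * ((1 - b) + b * L_d / L_ave)
        score = 0.0
        for t, tfq in tf_query.items():
            if t in tf_doc and t in df:
                idf = math.log(N / df[t])
                doc_part = (k1 + 1) * tf_doc[t] / (norm + tf_doc[t])
                query_part = (k3 + 1) * tfq / (k3 + tfq)
                score += idf * doc_part * query_part
        return score

    # Hypothetical counts for one document and a repetitive long query
    tf_doc = Counter({"health": 3, "plan": 1, "obama": 2})
    tf_query = Counter("obama health plan health care".split())
    df = {"obama": 5, "health": 4, "plan": 3}
    print(round(bm25_long_query(tf_doc, tf_query, df, N=10,
                                L_d=60, L_ave=50), 3))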

Which ranking model should I use?

I want something basic and simple → use the vector space model with tf-idf weighting.
I want to use a state-of-the-art ranking model with excellent performance → use BM25 (or language models) with tuned parameters.
In between: BM25 or language models with no or just one tuned parameter.
