
Module 10
Reasoning with Uncertainty: Probabilistic Reasoning

Lesson 27
Probabilistic Inference

10.4 Probabilistic Inference Rules


Two rules in probability theory are important for inference: the product rule and Bayes' rule.
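The product rule states that

P(A, B) = P(A | B) P(B) = P(B | A) P(A)

and dividing through by P(B) gives Bayes' rule:

              P(B | A) P(A)
P(A | B) = -----------------
                 P(B)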

Here is a simple example of the application of Bayes' rule. Suppose you have tested positive for a disease; what is the probability that you actually have the disease? It depends on the accuracy and sensitivity of the test, and on the background (prior) probability of the disease.

Let P(Test=+ve | Disease=true) = 0.95, so the false negative rate, P(Test=-ve | Disease=true), is 5%. Let P(Test=+ve | Disease=false) = 0.05, so the false positive rate is also 5%. Suppose the disease is rare: P(Disease=true) = 0.01 (1%). Let D denote Disease and "T=+ve" denote a positive test result. Then, by Bayes' rule,

                                  P(T=+ve | D=true) * P(D=true)
P(D=true | T=+ve) = ---------------------------------------------------------------
                    P(T=+ve | D=true) * P(D=true) + P(T=+ve | D=false) * P(D=false)

                              0.95 * 0.01
                  = ----------------------------- = 0.161
                     0.95 * 0.01 + 0.05 * 0.99
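The arithmetic above is easy to verify in a few lines of Python (a minimal sketch; the function name posterior is ours, not from the text):

    def posterior(prior, p_pos_given_disease, p_pos_given_healthy):
        """P(D=true | T=+ve) by Bayes' rule."""
        num = p_pos_given_disease * prior
        den = num + p_pos_given_healthy * (1.0 - prior)
        return num / den

    print(posterior(0.01, 0.95, 0.05))   # 0.16101..., i.e. about 0.161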

So the probability of having the disease given that you tested positive is just 16%. This seems too low, but here is an intuitive argument to support it. Of 100 people, we expect only 1 to have the disease, but we also expect about 5% of the remaining 99 healthy people (about 5 people) to test positive. So of the roughly 6 people who test positive, we expect only 1 to actually have the disease; and indeed 1/6 is approximately 0.16. In other words, the reason the number is so small is that you believed this is a rare disease; the test has made it 16 times more likely that you have the disease, but it is still unlikely in absolute terms. If you want to be "objective", you can set the prior to uniform (i.e. effectively ignore the prior), and then get

                      P(T=+ve | D=true) * P(D=true)
P(D=true | T=+ve) = ---------------------------------
                               P(T=+ve)

                          0.95 * 0.5
                  = ------------------------
                     0.95*0.5 + 0.05*0.5

                      0.475
                  = -------  = 0.95
                       0.5

This, of course, is just the true positive rate of the test (the numbers work out this way because the true positive and false positive rates sum to 1 here). However, this conclusion relies on your belief that, if you did not conduct the test, half the people in the world have the disease, which does not seem reasonable.

A better approach is to use a plausible prior (e.g. P(D=true) = 0.01), but then conduct multiple independent tests; if they all come up positive, the posterior will increase. For example, if we conduct two (conditionally independent) tests T1, T2 with the same reliability, and both are positive, we get

                                P(T1=+ve | D=true) * P(T2=+ve | D=true) * P(D=true)
P(D=true | T1=+ve, T2=+ve) = -------------------------------------------------------
                                            P(T1=+ve, T2=+ve)

                                    0.95 * 0.95 * 0.01              0.009025
                             = --------------------------------- = ---------- = 0.785
                               0.95*0.95*0.01 + 0.05*0.05*0.99       0.0115

The assumption that the pieces of evidence are conditionally independent is called the naive Bayes assumption. This model has been used successfully in many applications, including classifying email as spam (D=true) or not (D=false) given the presence of various key words (Ti=+ve if word i is in the text, else Ti=-ve). The words are clearly not independent, even conditioned on spam/not-spam, but the model works surprisingly well nonetheless.
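A short Python sketch (generalizing the one above; again the function name is illustrative) shows how quickly the posterior climbs as independent positive results accumulate:

    def posterior_after_n_positives(prior, tpr, fpr, n):
        """P(D=true | n positive tests) under the naive Bayes assumption."""
        num = (tpr ** n) * prior
        den = num + (fpr ** n) * (1.0 - prior)
        return num / den

    for n in range(1, 4):
        print(n, round(posterior_after_n_positives(0.01, 0.95, 0.05, n), 4))
    # prints: 1 0.161, 2 0.7848, 3 0.9858
    # each positive test multiplies the odds in favour of disease by tpr/fpr = 19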


In many problems, complete independence between variables does not hold, but many variables are conditionally independent. X and Y are conditionally independent given Z iff

P(X | Y, Z) = P(X | Z)

In full: X and Y are conditionally independent given Z iff for any instantiation x, y, z of X, Y, Z we have

P(X=x | Y=y, Z=z) = P(X=x | Z=z)

An example of conditional independence: consider the following three Boolean random variables:

LeaveBy8, GetTrain, OnTime

Suppose we can assume that

P(OnTime | GetTrain, LeaveBy8) = P(OnTime | GetTrain)

but NOT

P(OnTime | LeaveBy8) = P(OnTime)

Then OnTime is dependent on LeaveBy8, but independent of LeaveBy8 given GetTrain. We can represent P(OnTime | GetTrain, LeaveBy8) = P(OnTime | GetTrain) graphically by:

LeaveBy8 -> GetTrain -> OnTime
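To see both claims numerically, here is a small Python sketch that builds a toy joint distribution with exactly this chain factorization, P(l, g, o) = P(l) P(g | l) P(o | g); the conditional probability values are invented for illustration and are not from the text:

    from itertools import product

    # Invented illustrative numbers, not from the text.
    P_L = {True: 0.7, False: 0.3}     # P(LeaveBy8 = l)
    P_G = {True: 0.9, False: 0.2}     # P(GetTrain = true | LeaveBy8 = l)
    P_O = {True: 0.95, False: 0.1}    # P(OnTime = true | GetTrain = g)

    def joint(l, g, o):
        """P(LeaveBy8=l, GetTrain=g, OnTime=o) under the chain factorization."""
        pg = P_G[l] if g else 1 - P_G[l]
        po = P_O[g] if o else 1 - P_O[g]
        return P_L[l] * pg * po

    def p_ontime(fix):
        """P(OnTime=true | fixed values), where fix maps 'l'/'g' to booleans."""
        worlds = [dict(l=l, g=g, o=o) for l, g, o in product([True, False], repeat=3)]
        keep = [w for w in worlds if all(w[k] == v for k, v in fix.items())]
        num = sum(joint(w['l'], w['g'], w['o']) for w in keep if w['o'])
        den = sum(joint(w['l'], w['g'], w['o']) for w in keep)
        return num / den

    print(p_ontime({'g': True, 'l': True}))    # 0.95
    print(p_ontime({'g': True, 'l': False}))   # 0.95 -> LeaveBy8 adds nothing given GetTrain
    print(p_ontime({'l': True}))               # 0.865
    print(p_ontime({'l': False}))              # 0.27 -> but LeaveBy8 matters on its own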

