
Bayes' theorem

In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) is a theorem with two distinct interpretations. In the Bayesian interpretation, it expresses how a subjective degree of belief should rationally change to account for evidence. In the frequentist interpretation, it relates inverse representations of the probabilities concerning two events. In the Bayesian interpretation, Bayes' theorem is fundamental to Bayesian statistics, and has applications in fields including science, engineering, economics (particularly microeconomics), game theory, medicine and law. The application of Bayes' theorem to update beliefs is called Bayesian inference.

Bayes' theorem is named for Thomas Bayes (/ˈbeɪz/; 1701–1761), who first suggested using the theorem to update beliefs. His work was significantly edited and updated by Richard Price before it was posthumously read at the Royal Society. The ideas gained limited exposure until they were independently rediscovered and further developed by Laplace, who first published the modern formulation in his 1812 Théorie analytique des probabilités. Until the second half of the 20th century, the Bayesian interpretation attracted widespread dissent[citation needed] from the mathematics community, which generally held frequentist views,[citation needed] rejecting Bayesianism as unscientific. However, it is now widely accepted. This may have been due to the development of computing, which enabled the successful application of Bayesianism to many complex problems.[1]

Contents

1 Introductory example
2 Statement and interpretation
  2.1 Bayesian interpretation
  2.2 Frequentist interpretation
3 Forms
  3.1 For events
    3.1.1 Simple form
    3.1.2 Extended form
    3.1.3 Three or more events
  3.2 For random variables
    3.2.1 Simple form
    3.2.2 Extended form
  3.3 Bayes' rule
4 Derivation
  4.1 For general events
  4.2 For random variables
5 Examples
  5.1 Frequentist example
  5.2 Drug testing
6 History
7 Notes
  7.1 Further reading
8 External links

Introductory example

Suppose someone told you they had a nice conversation with someone on the train. Not knowing anything
else about this conversation, the probability that they were speaking to a woman is 50%. Now suppose
they also told you that this person had long hair. It is now more likely that they were speaking to a woman,
since most long-haired people are women. Bayes' theorem can be used to calculate the probability that the
person is a woman.

To see how this is done, let W represent the event that the conversation was held with a woman, and L denote the event that the conversation was held with a long-haired person.

It can be assumed that women constitute half the population for this example. So, not knowing anything else, the probability that W occurs is P(W) = 0.5.

Suppose it is also known that 75% of women have long hair, which we denote as P(L|W) = 0.75 (read: the probability of event L given event W is 0.75).

Likewise, suppose it is known that 30% of men have long hair, or P(L|M) = 0.30, where M is the complementary event of W, i.e., the event that the conversation was held with a man (assuming that every human is either a man or a woman).

Our goal is to calculate the probability that the conversation was held with a woman, given the fact that the person had long hair, or, in our notation, P(W|L). Using the formula for Bayes' theorem, we have

P(W|L) = \frac{P(L|W)\,P(W)}{P(L)} = \frac{P(L|W)\,P(W)}{P(L|W)\,P(W) + P(L|M)\,P(M)},

where we have used the law of total probability to expand P(L). The numeric answer can be obtained by substituting the above values into this formula. This yields

P(W|L) = \frac{0.75 \times 0.50}{0.75 \times 0.50 + 0.30 \times 0.50} = \frac{0.375}{0.525} \approx 0.714,

i.e., the probability that the conversation was held with a woman, given that the person had long hair, is about 71%.
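As a quick check, the same calculation can be reproduced in a few lines of Python (a minimal sketch; the variable names are our own labels for the quantities defined above):

```python
# Prior: women are assumed to be half the population.
p_w = 0.5
p_m = 1 - p_w

# Likelihoods: the proportion of long-haired people in each group.
p_l_given_w = 0.75
p_l_given_m = 0.30

# Law of total probability: overall chance the person has long hair.
p_l = p_l_given_w * p_w + p_l_given_m * p_m

# Bayes' theorem: posterior probability the speaker was a woman.
p_w_given_l = p_l_given_w * p_w / p_l
print(f"P(W|L) = {p_w_given_l:.3f}")  # P(W|L) = 0.714
```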

Statement and interpretation

Mathematically, Bayes' theorem gives the relationship between the probabilities of A and B, P(A) and P(B), and the conditional probabilities of A given B and B given A, P(A|B) and P(B|A). In its most common form, it is:

P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.

The meaning of this statement depends on the interpretation of probability ascribed to the terms:

Bayesian interpretation
Main article: Bayesian probability

In the Bayesian (or epistemological) interpretation, probability measures a degree of belief. Bayes' theorem then links the degree of belief in a proposition before and after accounting for evidence. For example, suppose somebody proposes that a biased coin is twice as likely to land heads as tails. Degree of belief in this proposition might initially be 50%. The coin is then flipped a number of times to collect evidence. Belief may rise to 70% if the evidence supports the proposition.

For proposition P and evidence E,

P(P), the prior, is the initial degree of belief in P.

P(P|E), the posterior, is the degree of belief having accounted for E.

P(E|P)/P(E) represents the support E provides for P.

For more on the application of Bayes' theorem under the Bayesian interpretation of probability,
see Bayesian inference.
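The coin example can be made concrete with a short sketch (our own illustration, not part of the original text): the proposition "the coin lands heads twice as often as tails" (P(heads) = 2/3) is compared against a fair coin (P(heads) = 1/2), starting from a 50% prior.

```python
from math import comb

def posterior_biased(heads: int, flips: int, prior: float = 0.5) -> float:
    """Posterior belief that the coin favours heads 2:1,
    after observing `heads` heads in `flips` flips."""
    # Binomial likelihood of the data under each hypothesis.
    lik_biased = comb(flips, heads) * (2 / 3) ** heads * (1 / 3) ** (flips - heads)
    lik_fair = comb(flips, heads) * (1 / 2) ** flips
    # Bayes' theorem over the two-hypothesis partition.
    numerator = lik_biased * prior
    return numerator / (numerator + lik_fair * (1 - prior))

print(round(posterior_biased(heads=13, flips=20), 2))  # 0.71: belief rose from 0.50
```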
Frequentist interpretation

Illustration of the frequentist interpretation with tree diagrams. Bayes' theorem connects conditional probabilities to their inverses.

In the frequentist interpretation, probability is defined with respect to a large number of trials, each producing one outcome from a set of possible outcomes, Ω. An event is a subset of Ω. The probability of event A, P(A), is the proportion of trials producing an outcome in A. Similarly for the probability of B, P(B). If we consider only trials in which A occurs, the proportion in which B also occurs is P(B|A). If we consider only trials in which B occurs, the proportion in which A also occurs is P(A|B). Bayes' theorem is a fixed relationship between these quantities.

This situation may be more fully visualised with tree diagrams, shown to the right. The two diagrams represent the same information in different ways. For example, suppose that A is having a risk factor for a medical condition, and B is having the condition. In a population, the proportion with the condition depends on whether those with or without the risk factor are examined. The proportion having the risk factor depends on whether those with or without the condition are examined. Bayes' theorem links these inverse representations.

Forms

For events

Simple form

For events A and B, provided that P(B) ≠ 0,

P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.

In a Bayesian inference step, the probability of evidence B is constant for all models A. The posterior may then be expressed as proportional to the numerator:

P(A|B) \propto P(B|A)\,P(A).
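A minimal sketch of this proportionality in practice (the two models and their numbers are hypothetical): unnormalized posteriors are computed per model, and normalizing by their sum stands in for dividing by P(B).

```python
# Hypothetical models with priors and likelihoods of the observed evidence B.
priors = {"model_1": 0.7, "model_2": 0.3}       # P(A)
likelihoods = {"model_1": 0.1, "model_2": 0.4}  # P(B|A)

# Unnormalized posteriors: P(A|B) is proportional to P(B|A) * P(A).
unnormalized = {m: likelihoods[m] * priors[m] for m in priors}

# The sum over all models plays the role of P(B).
total = sum(unnormalized.values())
posteriors = {m: u / total for m, u in unnormalized.items()}
print(posteriors)  # {'model_1': 0.368..., 'model_2': 0.631...}
```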

Extended form

Often, for some partition {A_j} of the event space, the event space is given or conceptualized in terms of P(A_j) and P(B|A_j). It is then useful to eliminate P(B) using the law of total probability:

P(A_i|B) = \frac{P(B|A_i)\,P(A_i)}{\sum_j P(B|A_j)\,P(A_j)}.

In the special case of a binary partition,

P(A|B) = \frac{P(B|A)\,P(A)}{P(B|A)\,P(A) + P(B|\neg A)\,P(\neg A)}.
Three or more events

Extensions to Bayes' theorem may be found for three or more events. For example, for three events, two possible tree diagrams branch in the order BCA and ABC. By repeatedly applying the definition of conditional probability:

P(A|B \cap C) = \frac{P(C|A \cap B)\,P(B|A)\,P(A)}{P(C|B)\,P(B)}.

As previously, the law of total probability may be substituted for unknown marginal probabilities.
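A numeric sanity check of the three-event identity (our own sketch, over an arbitrary joint distribution of three binary events):

```python
from itertools import product

# Arbitrary joint distribution P(A, B, C); the eight probabilities sum to 1.
weights = [0.02, 0.08, 0.10, 0.05, 0.20, 0.15, 0.25, 0.15]
joint = dict(zip(product([0, 1], repeat=3), weights))

def prob(pred):
    """Total probability of outcomes (a, b, c) satisfying `pred`."""
    return sum(p for abc, p in joint.items() if pred(*abc))

# Left-hand side: P(A|B and C) computed directly from the joint distribution.
lhs = prob(lambda a, b, c: a and b and c) / prob(lambda a, b, c: b and c)

# Right-hand side: P(C|A,B) P(B|A) P(A) / (P(C|B) P(B)).
p_c_given_ab = prob(lambda a, b, c: a and b and c) / prob(lambda a, b, c: a and b)
p_b_given_a = prob(lambda a, b, c: a and b) / prob(lambda a, b, c: a)
p_a = prob(lambda a, b, c: a)
p_c_given_b = prob(lambda a, b, c: b and c) / prob(lambda a, b, c: b)
p_b = prob(lambda a, b, c: b)
rhs = p_c_given_ab * p_b_given_a * p_a / (p_c_given_b * p_b)

print(abs(lhs - rhs) < 1e-12)  # True
```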

For random variables

Diagram illustrating the meaning of Bayes' theorem as applied to an event space generated by continuous random variables X and Y. Note that there exists an instance of Bayes' theorem for each point in the domain. In practice, these instances might be parametrised by writing the specified probability densities as a function of x and y.

Consider a sample space Ω generated by two random variables X and Y. In principle, Bayes' theorem applies to the events A = {X = x} and B = {Y = y}. However, the terms become 0 at points where either variable has finite probability density (any single point then has probability zero). To remain useful, Bayes' theorem may be formulated in terms of the relevant densities (see Derivation).

Simple form

If X is continuous and Y is discrete,

f_X(x|Y=y) = \frac{P(Y=y|X=x)\,f_X(x)}{P(Y=y)}.

If X is discrete and Y is continuous,

P(X=x|Y=y) = \frac{f_Y(y|X=x)\,P(X=x)}{f_Y(y)}.

If both X and Y are continuous,

f_X(x|Y=y) = \frac{f_Y(y|X=x)\,f_X(x)}{f_Y(y)}.

Extended form

Diagram illustrating how an event space generated by continuous random variables X and Y is often conceptualized.

A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For f_Y(y), this becomes an integral:

f_Y(y) = \int_{-\infty}^{\infty} f_Y(y|X=\xi)\,f_X(\xi)\,d\xi.
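As a sketch of how these density forms are used numerically (our own example, not from the original text), the denominator integral can be approximated by a Riemann sum on a grid. Here a uniform prior density over a coin's heads probability θ is updated after 7 heads in 10 flips:

```python
# Grid approximation of the continuous posterior density f(theta | data).
N = 1000
dx = 1.0 / N
grid = [(i + 0.5) * dx for i in range(N)]  # theta values in (0, 1)

prior = [1.0] * N                               # f(theta): uniform prior density
likelihood = [t**7 * (1 - t)**3 for t in grid]  # P(data|theta), binomial kernel

# Denominator: integral of likelihood * prior, as a Riemann sum.
evidence = sum(l * p for l, p in zip(likelihood, prior)) * dx

# Continuous Bayes' theorem at each grid point.
posterior = [l * p / evidence for l, p in zip(likelihood, prior)]
mean = sum(t * f for t, f in zip(grid, posterior)) * dx
print(f"posterior mean = {mean:.3f}")  # 0.667, the Beta(8, 4) mean
```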
Bayes' rule
Main article: Bayes' rule

Under the Bayesian interpretation of probability, Bayes' rule may be thought of as Bayes' theorem in odds form:

O(A_1 : A_2 | B) = O(A_1 : A_2) \cdot \Lambda(A_1 : A_2 | B),

where

O(A_1 : A_2) = \frac{P(A_1)}{P(A_2)} \quad \text{and} \quad \Lambda(A_1 : A_2 | B) = \frac{P(B|A_1)}{P(B|A_2)}

are the odds and the Bayes factor (likelihood ratio), respectively. So the rule says that the posterior odds are the prior odds times the Bayes factor.
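A brief sketch of the odds-form update, reusing the numbers from the introductory long-hair example:

```python
# Two hypotheses: woman (A1) vs man (A2), with equal priors.
prior_odds = 0.5 / 0.5

# Bayes factor: likelihood ratio of the evidence "long hair".
bayes_factor = 0.75 / 0.30

# Posterior odds = prior odds * Bayes factor.
posterior_odds = prior_odds * bayes_factor
posterior_prob = posterior_odds / (1 + posterior_odds)  # convert odds back
print(f"{posterior_odds:.2f} : 1  ->  {posterior_prob:.3f}")  # 2.50 : 1  ->  0.714
```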

Derivation

For general events

Bayes' theorem may be derived from the definition of conditional probability:

P(A|B) = \frac{P(A \cap B)}{P(B)}, \text{ if } P(B) \neq 0,

and

P(B|A) = \frac{P(A \cap B)}{P(A)}, \text{ if } P(A) \neq 0.

Equating the two expressions for P(A \cap B) and dividing by P(B) yields Bayes' theorem:

P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.

For random variables

For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density:

f_X(x|Y=y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}, \qquad f_Y(y|X=x) = \frac{f_{X,Y}(x,y)}{f_X(x)},

from which

f_X(x|Y=y) = \frac{f_Y(y|X=x)\,f_X(x)}{f_Y(y)}.

Examples

Frequentist example

Tree diagram illustrating the frequentist example. R, C, P and P̄ are the events representing rare, common, pattern and no pattern. Percentages in parentheses are calculated. Note that three independent values are given, so it is possible to calculate the inverse tree (see figure above).

An entomologist spots what might be a rare subspecies of beetle, due to the pattern on its back. In the rare
subspecies, 98% have the pattern. In the common subspecies, 5% have the pattern. The rare subspecies
accounts for only 0.1% of the population. How likely is the beetle to be rare?

From the extended form of Bayes' theorem,

P(\text{Rare}|\text{Pattern}) = \frac{P(\text{Pattern}|\text{Rare})\,P(\text{Rare})}{P(\text{Pattern}|\text{Rare})\,P(\text{Rare}) + P(\text{Pattern}|\text{Common})\,P(\text{Common})} = \frac{0.98 \times 0.001}{0.98 \times 0.001 + 0.05 \times 0.999} \approx 1.9\%.
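The same arithmetic as a short sketch (variable names are our own):

```python
# Prior: the rare subspecies is 0.1% of the population.
p_rare = 0.001
p_common = 1 - p_rare

# Likelihood of showing the pattern under each subspecies.
p_pattern_given_rare = 0.98
p_pattern_given_common = 0.05

# Extended form of Bayes' theorem.
p_pattern = p_pattern_given_rare * p_rare + p_pattern_given_common * p_common
p_rare_given_pattern = p_pattern_given_rare * p_rare / p_pattern
print(f"{p_rare_given_pattern:.3f}")  # 0.019: the beetle is still probably common
```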

Drug testing

Tree diagram illustrating the drug testing example. U, Ū, "+" and "−" are the events representing user, non-user, positive result and negative result. Percentages in parentheses are calculated.

Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability he or she is a user?

P(\text{User}|+) = \frac{P(+|\text{User})\,P(\text{User})}{P(+|\text{User})\,P(\text{User}) + P(+|\text{Non-user})\,P(\text{Non-user})} = \frac{0.99 \times 0.005}{0.99 \times 0.005 + 0.01 \times 0.995} \approx 33\%.

Despite the apparent accuracy of the test, if an individual tests positive, it is more likely that they do not use the drug than that they do.

This surprising result arises because the number of non-users is very large compared to the number of users, such that the number of false positives (0.995%) outweighs the number of true positives (0.495%). To use concrete numbers, if 1000 individuals are tested, there are expected to be 995 non-users and 5 users. From the 995 non-users, 0.01 × 995 ≈ 10 false positives are expected. From the 5 users, 0.99 × 5 ≈ 5 true positives are expected. Out of these 15 positive results, only 5, about 33%, are genuine.
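A short sketch confirming both routes to the answer (the variable names are ours):

```python
# Test characteristics and base rate.
sensitivity = 0.99  # P(+ | user)
specificity = 0.99  # P(- | non-user)
p_user = 0.005

# Route 1: extended form of Bayes' theorem.
p_pos = sensitivity * p_user + (1 - specificity) * (1 - p_user)
p_user_given_pos = sensitivity * p_user / p_pos

# Route 2: expected counts among 1000 tested individuals.
true_pos = sensitivity * (1000 * p_user)                # about 5 genuine positives
false_pos = (1 - specificity) * (1000 * (1 - p_user))   # about 10 false positives
count_based = true_pos / (true_pos + false_pos)

print(f"{p_user_given_pos:.3f}  {count_based:.3f}")  # 0.332  0.332
```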

History

Bayes' theorem was named after the Reverend Thomas Bayes (1702–61), who studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). His friend Richard Price edited and presented this work in 1763, after Bayes' death, as An Essay towards solving a Problem in the Doctrine of Chances.[2] The French mathematician Pierre-Simon Laplace reproduced and extended Bayes' results in 1774, apparently quite unaware of Bayes' work.[3] Stephen Stigler suggested in 1983 that Bayes' theorem was discovered by Nicholas Saunderson some time before Bayes;[4] however, this interpretation has been disputed.[5]

Stephen Fienberg describes the evolution from "inverse probability" at the time of Bayes and Laplace, a term still used by Harold Jeffreys (1939), to "Bayesian" in the 1950s.[6] Ironically, Ronald A. Fisher introduced the "Bayesian" label in a derogatory sense.[citation needed]
