Bayesian Inference

Here is an overview of Bayesian inference, with an example and solution:

**Bayesian inference** is a statistical method for updating beliefs about the parameters of a model in light of observed data. It involves computing the posterior probability distribution of the parameters given the data, using Bayes' theorem.
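
Concretely, for a parameter \( \theta \) and observed data, Bayes' theorem states

\( P(\theta|\text{data}) = \dfrac{P(\text{data}|\theta) \cdot P(\theta)}{P(\text{data})} \)

where \( P(\theta) \) is the prior, \( P(\text{data}|\theta) \) is the likelihood, and \( P(\text{data}) \) is the marginal likelihood, a normalizing constant that does not depend on \( \theta \).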

Example:

Suppose we want to estimate the probability of a coin landing heads up (\( \theta \)), based on the
results of flipping the coin multiple times.

- **Prior**: We start with a prior belief about the probability of the coin landing heads up, represented by a probability distribution. Let's say we have no strong prior opinion about the coin, so we use a Beta(1,1) distribution, which is a uniform distribution between 0 and 1 and treats every value of \( \theta \) as equally plausible.

- **Likelihood**: We then collect data by flipping the coin \( n \) times and observing the outcomes. Let's say we flipped the coin 10 times and observed 7 heads and 3 tails. The likelihood function gives the probability of observing these outcomes given a particular value of \( \theta \). In this case, the likelihood comes from the binomial distribution; up to the binomial coefficient \( \binom{10}{7} \), which does not depend on \( \theta \) and cancels when the posterior is normalized, it is

\( P(\text{data}|\theta) \propto \theta^7 \cdot (1-\theta)^3 \)
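
To make this step concrete, here is a minimal sketch (the variable names and the use of scipy.stats.binom are illustrative choices, not part of the original example) that evaluates the binomial likelihood at a few candidate values of \( \theta \):

```python
import numpy as np
from scipy.stats import binom

n, k = 10, 7                        # 10 flips, 7 heads observed
thetas = np.linspace(0.1, 0.9, 9)   # a few candidate values of theta

# Binomial likelihood P(data | theta) at each candidate value
likelihoods = binom.pmf(k, n, thetas)

for t, l in zip(thetas, likelihoods):
    print(f"theta = {t:.1f}  P(data | theta) = {l:.4f}")
# The likelihood is largest near theta = 0.7, the observed fraction of heads.
```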

- **Posterior**: We use Bayes' theorem to update our prior belief to obtain the posterior
distribution of \( \theta \) given the observed data. The posterior distribution is proportional to the
product of the prior and the likelihood:

\( P(\theta|\text{data}) \propto P(\text{data}|\theta) \cdot P(\theta) \)
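
This proportionality can be checked numerically with a simple grid approximation. The sketch below (assuming the same 10 flips with 7 heads, and using numpy/scipy purely for illustration) multiplies the Beta(1,1) prior density by the binomial likelihood over a grid of \( \theta \) values and normalizes the result:

```python
import numpy as np
from scipy.stats import beta, binom

n, k = 10, 7
thetas = np.linspace(0.0, 1.0, 1001)     # grid of theta values in [0, 1]
dtheta = thetas[1] - thetas[0]

prior = beta.pdf(thetas, 1, 1)           # Beta(1,1) prior: uniform on [0, 1]
likelihood = binom.pmf(k, n, thetas)     # P(data | theta) at each grid point

unnormalized = likelihood * prior        # posterior is proportional to this
posterior = unnormalized / (unnormalized.sum() * dtheta)   # normalize to a density

posterior_mean = np.sum(thetas * posterior) * dtheta
print(posterior_mean)                    # approximately 0.667
```

Because the Beta(1,1) prior is flat, the posterior here has the same shape as the likelihood; the computation amounts to renormalizing the likelihood curve so it integrates to 1.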

Solution:

To compute the posterior distribution, we multiply the Beta(1,1) prior by the likelihood. Because the Beta(1,1) density is constant on \( [0, 1] \), the posterior is proportional to \( \theta^7 \cdot (1-\theta)^3 = \theta^{8-1} \cdot (1-\theta)^{4-1} \), which is the kernel of a Beta distribution with parameters \( \alpha = 1 + 7 = 8 \) and \( \beta = 1 + 3 = 4 \). So the posterior distribution is Beta(8,4), which represents our updated belief about the probability of the coin landing heads up after observing the data.
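
Because the Beta prior is conjugate to the binomial likelihood, this closed-form answer needs no numerical work. As a quick check (a sketch using scipy.stats.beta as one convenient option), we can summarize the Beta(8,4) posterior directly:

```python
from scipy.stats import beta

# Conjugate update: Beta(1, 1) prior + 7 heads and 3 tails -> Beta(8, 4) posterior
alpha_post = 1 + 7
beta_post = 1 + 3
posterior = beta(alpha_post, beta_post)

print(posterior.mean())          # 8 / (8 + 4) = 0.666..., the posterior mean of theta
print(posterior.interval(0.95))  # central 95% credible interval for theta
```

The posterior mean, \( 8/12 \approx 0.67 \), lies between the prior mean of 0.5 and the observed frequency of heads, 0.7, and it agrees with the grid approximation sketched earlier.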

This example demonstrates how Bayesian inference lets us incorporate prior knowledge and update our beliefs in light of observed data, yielding a more informed estimate of a model's parameters.
