Lecture 4

This document contains notes on Bayesian statistical inference presented by Dr. M. Sjölander. The notes were originally prepared by Dr. I. Garisch and edited by Dr. D. Chikobvu. The document defines basic probability concepts like binomial and uniform distributions. It also introduces Bayesian inference concepts such as defining prior and posterior distributions for parameters and how observations are independent and identically distributed. The goal is to develop an understanding of Bayesian statistical methods and computing posterior distributions.


STSM 2626

BAYESIAN STATISTICAL INFERENCE


2023
Notes prepared by Dr I. Garisch. Notes edited by Dr D. Chikobvu and Dr M. Sjölander

Slides (from notes) by Dr M. Sjölander


Presented by Dr M. Sjölander
Coin toss: S = {H,T}. P(H) = n(H)/n(S) = 1/2 = 0.5

Die roll: S = {1,2,3,4,5,6}. P(even) = n(even)/n(S) = 3/6 = 0.5

and P(3) = n(3)/n(S) = 1/6
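The classical probabilities above can be checked with a short sketch, using exact fractions so the results match the notes' values 1/2 and 1/6:

```python
from fractions import Fraction

# Classical (equally likely outcomes) probability: P(A) = n(A) / n(S)
def classical_prob(event, sample_space):
    return Fraction(len(event), len(sample_space))

S_coin = {"H", "T"}                      # coin toss
print(classical_prob({"H"}, S_coin))     # 1/2

S_die = {1, 2, 3, 4, 5, 6}               # die roll
even = {x for x in S_die if x % 2 == 0}
print(classical_prob(even, S_die))       # 1/2
print(classical_prob({3}, S_die))        # 1/6
```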
The parameter θ can have a finite discrete range, e.g. θ = 0, 1, 2, 3, 4;

a continuous range, e.g. θ ∈ [a,b], θ ∈ [a,∞), θ ∈ (−∞,b], θ ∈ (−∞,∞);

or a countably infinite range, e.g. θ = 0, 1, 2, 3, …
Let θ = P(Head).
Fair = {P(Head) = ½} = {θ = ½}, so P(Fair) = P(θ = ½) = p
Not fair = {P(Head) = 1} = {θ = 1}, so P(Not fair) = P(θ = 1) = 1 − p

We can describe θ in either of the following ways:

• 2(θ − ½) ~ Bernoulli(1 − p), since 2(θ − ½) equals 0 with probability p and 1 with probability 1 − p.
• The prior pdf of θ is given by the table:

  θ     :   ½     1
  p(θ)  :   p     1 − p
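A quick simulation sketch of this two-point prior; the value p = 0.7 is illustrative, not from the notes:

```python
import random

rng = random.Random(0)
p = 0.7  # illustrative value for P(theta = 1/2); not from the notes

# Draw theta from the two-point prior: theta = 1/2 w.p. p, theta = 1 w.p. 1 - p
draws = [0.5 if rng.random() < p else 1.0 for _ in range(100_000)]

frac_fair = draws.count(0.5) / len(draws)
print(round(frac_fair, 2))  # close to p = 0.7

# Check the transformed variable: 2*(theta - 1/2) is 0 w.p. p and 1 w.p. 1 - p,
# i.e. Bernoulli(1 - p)
frac_one = sum(2 * (t - 0.5) for t in draws) / len(draws)
print(round(frac_one, 2))   # close to 1 - p = 0.3
```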
If I select 10 items from the lot and X is the number of defective items, then:
X ~ Binomial(10, θ), where θ ~ Uniform(0,1) is the prior distribution of θ.
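A numerical sketch of this model, computing the posterior on a grid as likelihood × prior and normalising; the observed count x = 3 is an illustrative value, not from the notes:

```python
from math import comb

# Sketch: X ~ Binomial(n = 10, theta) with theta ~ Uniform(0, 1) as prior.
n, x = 10, 3  # x = 3 is an illustrative observed count

# Unnormalised posterior on a grid: posterior ∝ likelihood × prior
grid = [i / 1000 for i in range(1, 1000)]
lk = [comb(n, x) * t**x * (1 - t)**(n - x) for t in grid]
pr = [1.0] * len(grid)                  # Uniform(0, 1) density

unnorm = [a * b for a, b in zip(lk, pr)]
Z = sum(unnorm) * 0.001                 # Riemann-sum normalising constant
posterior = [u / Z for u in unnorm]

# With a Uniform prior the posterior is Beta(x + 1, n - x + 1),
# whose mean is (x + 1) / (n + 2) = 4/12 ≈ 0.333
post_mean = sum(t * d for t, d in zip(grid, posterior)) * 0.001
print(round(post_mean, 3))
```

The grid computation makes no use of the Beta closed form, so the near-match of the posterior mean to (x + 1)/(n + 2) serves as a check on both.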

Prior pdf
Remember θ can be a vector; a vector parameter is denoted in bold (θ).
ff means frequency function, i.e. a discrete pdf, denoted p.
df means density function, i.e. a continuous pdf, denoted f.
We will denote the pdf (whether discrete or continuous) by p from here on.
The observations X1, …, Xn are independent and identically distributed (i.i.d.).

The pdf of X is the product of the pdfs of the individual xi because of independence:
p(x|θ) = p(x1|θ) × p(x2|θ) × … × p(xn|θ)
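The i.i.d. factorisation can be sketched directly; the Bernoulli(θ) model for each xi is an illustrative assumption, not from the notes:

```python
import math

# The joint pdf of i.i.d. observations factorises:
#   p(x | theta) = p(x1 | theta) * ... * p(xn | theta)
# Illustrative model (an assumption): xi ~ Bernoulli(theta).

def bernoulli_pdf(xi, theta):
    return theta**xi * (1 - theta)**(1 - xi)

def joint_pdf(xs, theta):
    return math.prod(bernoulli_pdf(xi, theta) for xi in xs)

xs = [1, 0, 1, 1, 0]
print(joint_pdf(xs, 0.5))  # each factor is 0.5, so 0.5**5 = 0.03125
```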


Remember θ can be a vector, denoted in bold as θ.
Remember x can be a vector, denoted in bold as x.

The prior is expressed as p(θ); the observations X1, …, Xn have pdf p(x|θ).

We’ll see how to get the posterior distribution at the start of the next lecture…
