Statistical Decision 3.1

The document outlines the principles of Statistical Decision Theory, focusing on hypothesis testing and statistical inference. It explains the procedures for hypothesis testing, including setting up hypotheses, determining significance levels, and calculating confidence intervals. Additionally, it discusses the concepts of Type I and Type II errors, as well as one-tailed and two-tailed tests, providing an example of calculating a 95% confidence interval for a population mean.


Dr. Md. Atiqul Islam
PhD (UK), MSc (UK)
Professor
Department of Economics

University of Rajshahi

Course: ECO 314 (Statistics for Economics II)


30 May 2024
Statistical Decision Theory

Part I
Statistical Decision:
- Very often, in practice, we make decisions about populations on the basis of sample information.
- Such decisions are called statistical decisions.

Statistical Hypothesis:
- To reach a statistical decision, we first make assumptions about the populations involved.
- Such assumptions, which may or may not be true, are called statistical hypotheses.
- The hypothesis is made about the value of some parameter, but the only facts available to estimate that parameter are those provided by the sample.
Statistical Hypothesis:
- If the sample statistic differs from the hypothesis made about the
population parameter, a decision must be made as to whether or not this
difference is significant.
- If the difference is significant, then the hypothesis is rejected; otherwise it must be accepted.
- This procedure is also called a test of hypothesis.
Procedure of Hypothesis Testing:
(1) Set up a hypothesis:
- Establish the hypothesis to be tested
- Usually an assumption about the value of some unknown parameter
- The conventional approach is to set up two different hypotheses:
  (i) the null hypothesis (𝐻0), and
  (ii) the alternative hypothesis (𝐻1)
- If the sample information leads us to reject 𝐻0, then we must accept 𝐻1.
- Thus, the two hypotheses are constructed so that if one is true, the other is false, and vice versa.
Procedure of Hypothesis Testing….CONTINUED:
(2) Set up a suitable significance level:
- The confidence with which an experimenter rejects or accepts the null hypothesis depends on the significance level adopted.
- The level of significance is denoted by 𝛼.
- Though any level of significance can be adopted, in practice we usually take a 5% or 1% level of significance.
- If a 5% level of significance is taken, there are about 5 chances out of 100 that we would reject the null hypothesis when it should be accepted.
- That is, we are about 95% confident that we have made the right decision.
Procedure of Hypothesis Testing…..CONTINUED:
(3) Determination of a suitable test statistic:
- Determine a suitable test statistic and its distribution.
- One example of a test statistic is

  Test statistic = (Sample statistic − Hypothesised population parameter) / (Standard error of the sample statistic)

(4) Determination of the critical region:
- Determine which values of the test statistic will lead to rejection of 𝐻0 and which lead to acceptance of 𝐻0.
- The set of values for which 𝐻0 is rejected is called the critical region.
- The remaining values form the acceptance region.
Procedure of Hypothesis Testing……….CONTINUED:
(5) Doing computations:
- Compute, from a random sample of size 𝑛, the value of the test statistic obtained in step (3).
- Then see whether the result falls in the critical region or the acceptance region.

(6) Making decisions:


- Draw a statistical conclusion: either accept the hypothesis or reject it.
- If we reject 𝐻0, it means the difference between the sample statistic and the hypothesised population parameter is considered significant.
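To make the six steps concrete, here is a minimal sketch in Python (not part of the original slides; the hypothesised mean, sample figures, and significance level are illustrative assumptions) of a two-tailed one-sample z-test for a population mean with known standard deviation.

```python
import math
from scipy.stats import norm  # assumes SciPy is available

# Step 1: hypotheses (illustrative values): H0: mu = 100, H1: mu != 100
mu_0 = 100.0

# Step 2: significance level
alpha = 0.05

# Step 3: test statistic Z = (sample mean - hypothesised mean) / (sigma / sqrt(n))
x_bar, sigma, n = 104.0, 15.0, 36           # assumed sample mean, known s.d., sample size
z = (x_bar - mu_0) / (sigma / math.sqrt(n))

# Step 4: critical region for a two-tailed test is |Z| > z_crit
z_crit = norm.ppf(1 - alpha / 2)            # 1.96 when alpha = 0.05

# Steps 5-6: computation and decision
if abs(z) > z_crit:
    print(f"z = {z:.2f} lies in the critical region: reject H0")
else:
    print(f"z = {z:.2f} lies in the acceptance region: do not reject H0")
```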
Statistical Inference: Estimation
Estimation can be of two kinds:
(i) point estimation
(ii) interval estimation

Point estimation:
- We assume that we know the theoretical probability distribution of the population but do not know the value of its parameter.
- Suppose 𝜃 is the parameter and 𝜃̂ is an estimate (also known as a statistic) of the true 𝜃.
- Then the estimator 𝜃̂ is known as a point estimator because it provides only a single (point) estimate of 𝜃.
Statistical Inference: Estimation
Interval estimation:
- …instead of obtaining a single estimate of 𝜃, we obtain two estimates, 𝜃̂1 and 𝜃̂2,
- and say with some confidence (i.e. probability) that the interval between 𝜃̂1 and 𝜃̂2 includes the true 𝜃.
- More generally, in interval estimation we construct two estimators 𝜃̂1 and 𝜃̂2 such that

  𝑃(𝜃̂1 ≤ 𝜃 ≤ 𝜃̂2) = 1 − 𝛼 ,  0 < 𝛼 < 1

  i.e. the probability is (1 − 𝛼) that the interval from 𝜃̂1 to 𝜃̂2 contains the true parameter.
- This interval is known as a confidence interval of size (1 − 𝛼) for 𝜃.
- (1 − 𝛼) is known as the confidence coefficient.
Statistical Inference: Estimation
Interval estimation…continued:
- If 𝛼 = 0.05, then 1 − 𝛼 = 0.95, meaning that if we construct a confidence interval with a confidence coefficient of 0.95, then in repeated sampling we shall be right in about 95 out of 100 cases in saying that the interval contains the true 𝜃.
- When the confidence coefficient is 0.95, we often say that we have a 95% confidence interval.
- 𝛼 is known as the level of significance, or the probability of committing a Type I error.
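As a small illustration (a sketch, not from the slides), the critical value that pairs with a given confidence coefficient can be read off the standard normal distribution; for 1 − 𝛼 = 0.95 it is the familiar 1.96 used in the derivation below.

```python
from scipy.stats import norm  # assumes SciPy is available

alpha = 0.05                       # level of significance
z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value
print(round(z_crit, 2))            # prints 1.96
```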
The Confidence Interval:
The sampling distribution of the mean 𝑋̄ is

  𝑋̄ ∼ 𝑁(𝜇, 𝜎²/𝑛)

Therefore,

  𝑍 = (𝑋̄ − 𝜇) / (𝜎/√𝑛) ∼ 𝑁(0, 1)

Then from the standard normal variate table we get

  𝑃(−1.96 ≤ 𝑍 ≤ 1.96) = 0.95

(Standard normal curve: the area between 0 and 1.96 is 0.4750 on each side, so 0.95 in total between −1.96 and 1.96.)

  ⇒ 𝑃(−1.96 ≤ (𝑋̄ − 𝜇)/(𝜎/√𝑛) ≤ 1.96) = 0.95
  ⇒ 𝑃(−1.96 𝜎/√𝑛 ≤ 𝑋̄ − 𝜇 ≤ 1.96 𝜎/√𝑛) = 0.95
  ⇒ 𝑃(−𝑋̄ − 1.96 𝜎/√𝑛 ≤ −𝜇 ≤ −𝑋̄ + 1.96 𝜎/√𝑛) = 0.95
  ⇒ 𝑃(𝑋̄ + 1.96 𝜎/√𝑛 ≥ 𝜇 ≥ 𝑋̄ − 1.96 𝜎/√𝑛) = 0.95
  ⇒ 𝑃(𝑋̄ − 1.96 𝜎/√𝑛 ≤ 𝜇 ≤ 𝑋̄ + 1.96 𝜎/√𝑛) = 0.95

This is a 95% confidence interval for 𝜇.

• In the language of hypothesis testing, the confidence interval that we have established is called the acceptance region, and the area outside the acceptance region is called the critical region, or region of rejection of the null hypothesis.

• If the hypothesized value falls inside the acceptance region, we may not reject the null hypothesis; otherwise we may reject it.
Type I and Type II Error:
While deciding to accept or reject the null hypothesis 𝐻0, we are likely to commit two types of errors:
(1) we may reject 𝐻0 when it is, in fact, true; this is called a Type I error
(2) we may not reject 𝐻0 when it is, in fact, false; this is called a Type II error

                   State of nature
Decision           𝐻0 is true       𝐻0 is false
Reject             Type I error     No error
Do not reject      No error         Type II error

Type I and Type II Error:
- Ideally, we would like to minimize both Type I and Type II errors.
- But, for any given sample, it is not possible to minimize both errors simultaneously.
- However, a Type I error is considered more serious than a Type II error. Therefore, we should keep the probability of committing a Type I error as small as possible (e.g. 0.01 or 0.05).
- The probability of not committing a Type II error is called the power of the test.
- The power of a test is its ability to reject a false null hypothesis.
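As an illustrative sketch (the figures are assumptions, not from the slides), the power of a two-tailed z-test can be computed by finding the probability that the sample mean falls outside the acceptance region when the true mean differs from the hypothesised one.

```python
import math
from scipy.stats import norm  # assumes SciPy is available

mu_0, mu_true = 100.0, 105.0      # hypothesised mean and an assumed true mean
sigma, n, alpha = 15.0, 36, 0.05  # known s.d., sample size, significance level

se = sigma / math.sqrt(n)         # standard error of the sample mean
z_crit = norm.ppf(1 - alpha / 2)

# P(Type II error) = P(sample mean stays inside the acceptance region | mu = mu_true)
beta = (norm.cdf(mu_0 + z_crit * se, loc=mu_true, scale=se)
        - norm.cdf(mu_0 - z_crit * se, loc=mu_true, scale=se))
power = 1 - beta                  # power = P(rejecting a false H0)
print(f"P(Type II error) = {beta:.3f}, power = {power:.3f}")
```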
One Tailed and Two-Tailed Test:

- In testing hypotheses, three kinds of tests can arise:
  (i) two-tailed test,
  (ii) right-tailed test, and
  (iii) left-tailed test.
One Tailed and Two-Tailed Test:
In a two-tailed test the hypothesis is rejected for values falling into either tail of the sampling distribution. For example, in testing a hypothesis about a population mean, if 𝐻0: 𝜇 = 100 and 𝐻1: 𝜇 ≠ 100, then it is a two-tailed test, since under the alternative hypothesis 𝜇 can lie on either side of 100.
One Tailed and Two-Tailed Test:
In a one-tailed test the hypothesis is rejected only for values falling into one of the tails of the sampling distribution. For example, in testing a hypothesis about a population mean, if 𝐻0: 𝜇 = 100 and 𝐻1: 𝜇 > 100 (or 𝐻1: 𝜇 < 100), then it is a one-tailed test, since the alternative hypothesis places 𝜇 only to the right (or only to the left) of 100.
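As a brief illustration (a sketch, not part of the slides), the critical values differ in the two cases: at 𝛼 = 0.05 a two-tailed test splits the 5% between both tails of the standard normal distribution, while a right-tailed test puts all 5% in the upper tail.

```python
from scipy.stats import norm  # assumes SciPy is available

alpha = 0.05
two_tailed = norm.ppf(1 - alpha / 2)   # about 1.96: reject H0 if |Z| > 1.96
right_tailed = norm.ppf(1 - alpha)     # about 1.645: reject H0 if Z > 1.645
print(round(two_tailed, 3), round(right_tailed, 3))
```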
One Tailed and Two-Tailed Test:
Example 2: From past experience, a wire manufacturing company found that the breaking strengths of its wire are normally distributed with a standard deviation of 200 kg. A random sample of 64 specimens gave a mean of 6200 kg. What is the 95% confidence interval for the mean breaking strength of the population?
Ans.
We know that 𝑃(𝑋̄ − 1.96 𝜎/√𝑛 ≤ 𝜇 ≤ 𝑋̄ + 1.96 𝜎/√𝑛) = 0.95.
Here, 𝑋̄ = 6200 kg, 𝜎 = 200 kg, and 𝑛 = 64.
Therefore, the confidence limits are 𝑋̄ ± 1.96 𝜎/√𝑛 = 6200 ± 1.96 × (200/√64) = 6200 ± 49, i.e. 6151 to 6249.

Ans.: The 95% confidence interval for the mean breaking strength is 6151 kg to 6249 kg.
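A quick check of the arithmetic in Example 2 (a sketch using the figures given above):

```python
import math

x_bar, sigma, n = 6200.0, 200.0, 64    # figures from Example 2
margin = 1.96 * sigma / math.sqrt(n)   # 1.96 * 200 / 8 = 49
print(x_bar - margin, x_bar + margin)  # 6151.0 6249.0
```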
