Hypothesis Formulation
Department of Management
Course Name: Business Research Methods
Course Code: 23BAT-625
Diksha Gandhi
Assistant Professor
Chandigarh University
UNIT-1: Formulation of Hypothesis
Learning Objectives
• Foundations of Research
Formulation of Hypothesis
• Once you have identified your research question, it is time to formulate your hypothesis. While the research question is broad and includes all the variables you want your study to consider, the hypothesis is a statement of the specific relationship you expect to find from your examination of these variables. When formulating the hypothesis(es) for your study, there are a few things you need to keep in mind.
A researcher should consider certain points while
formulating a hypothesis
• The expected relationship or difference between the variables.
• The operational definition of each variable.
• Hypotheses are formulated following the review of the literature.
TYPES OF HYPOTHESES
Alternative Hypothesis
Level of Significance
• The level of significance is the probability of rejecting a true null hypothesis, that is, the probability of a "Type I error", and is denoted by α. Frequently used values of α are 0.05, 0.01, and 0.1.
• When α = 0.05, the level of significance is 5%.
• When α = 0.01, the level of significance is 1%.
• When α = 0.1, the level of significance is 10%.
• In fact, α specifies the critical region. A computed value of the test statistic that falls in the critical region (CR) is said to be significant. So α is called the level of significance.
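As an illustrative sketch (not part of the slides), the decision rule "reject H0 when the computed statistic falls in the critical region" is equivalent to rejecting H0 when the p-value is below α. A minimal Python example using scipy, with made-up sample data:

```python
# Minimal sketch: reject H0 when the p-value falls below the chosen
# significance level alpha. The sample data are made up for illustration.
from scipy import stats

alpha = 0.05                      # 5% level of significance
sample = [12.1, 11.8, 12.4, 12.0, 11.7, 12.3, 12.2, 11.9]

# H0: population mean = 12.0 vs H1: population mean != 12.0 (two-tailed)
t_stat, p_value = stats.ttest_1samp(sample, popmean=12.0)

print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
if p_value < alpha:
    print("Significant: reject H0 at the 5% level.")
else:
    print("Not significant: fail to reject H0.")
```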
Critical / Rejection Region
• The critical region (CR) or rejection region (RR) is the area under the curve beyond certain limits, in which the observed value is unlikely to fall by chance alone when the null hypothesis is assumed to be true. If an observed value falls in this region, H0 is rejected and the observed value is said to be significant. In a word, the region for which H0 is rejected is called the critical region or rejection region.
Confidence Interval
• The confidence interval is the interval, marked by limits, within which the population value is expected to lie and the hypothesis is considered tenable. If an observed value falls in the confidence interval, H0 is accepted (not rejected).
Critical Values
• The values of the test statistic which separate the critical region from the confidence (acceptance) region are called critical values.
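As a hedged sketch (not from the slides), the critical values for a two-tailed z-test at α = 0.05 can be read off the standard normal distribution; any computed z outside roughly ±1.96 falls in the rejection region. The computed z value below is hypothetical:

```python
# Sketch: critical values and rejection region for a two-tailed z-test.
# Assumes a standard-normal test statistic; alpha is the significance level.
from scipy.stats import norm

alpha = 0.05
z_lower = norm.ppf(alpha / 2)        # left critical value  (about -1.96)
z_upper = norm.ppf(1 - alpha / 2)    # right critical value (about +1.96)

z_computed = 2.31                    # hypothetical computed test statistic

print(f"Acceptance region: [{z_lower:.2f}, {z_upper:.2f}]")
if z_computed < z_lower or z_computed > z_upper:
    print("z falls in the critical region: reject H0.")
else:
    print("z falls in the acceptance region: do not reject H0.")
```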
One-tailed and Two-tailed Tests
• One-tailed Test: A test in which the critical region is located in one tail of the distribution of the test statistic is called a one-tailed test. There are two types of one-tailed tests:
(a) Right-tailed test
(b) Left-tailed test
Two-tailed Test
• A test in which the critical region is located in both tails of the distribution of the test statistic is called a two-tailed test.
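As an illustrative sketch (not part of the slides), the same computed z statistic gives different p-values depending on whether the test is right-tailed, left-tailed, or two-tailed. The z value below is a made-up example:

```python
# Sketch: one-tailed vs two-tailed p-values for the same z statistic.
from scipy.stats import norm

z = 1.75                               # hypothetical computed z statistic

p_right = 1 - norm.cdf(z)              # right-tailed test: P(Z >= z)
p_left = norm.cdf(z)                   # left-tailed test:  P(Z <= z)
p_two = 2 * (1 - norm.cdf(abs(z)))     # two-tailed test:   P(|Z| >= |z|)

print(f"right-tailed p = {p_right:.3f}")
print(f"left-tailed  p = {p_left:.3f}")
print(f"two-tailed   p = {p_two:.3f}")
```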
Types of Error in testing of hypothesis
• Type I Error: We may reject a hypothesis which is true and should not be rejected.
• When you see a p-value that is less than your significance level, you get excited because your results are statistically significant. However, it could be a Type I error. The supposed effect might not exist in the population. Again, there is usually no warning when this occurs.
• Why do these errors occur? It comes down to sampling error. Your random sample has
overestimated the effect by chance. It was the luck of the draw. This type of error doesn’t
indicate that the researchers did anything wrong. The experimental design, data collection,
data validity, and statistical analysis can all be correct, and yet this type of error still occurs.
• Even though we don’t know for sure which studies have false positive results, we do know
their rate of occurrence. The rate of occurrence for Type I errors equals the significance
level of the hypothesis test, which is also known as alpha (α).
• The significance level is an evidentiary standard that you set to determine whether your
sample data are strong enough to reject the null hypothesis. Hypothesis tests define that
standard using the probability of rejecting a null hypothesis that is actually true. You set
this value based on your willingness to risk a false positive.
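The statement that the Type I error rate equals α can be checked by simulation. The following is a minimal sketch (not from the slides) that repeatedly tests a true null hypothesis; the sample size and number of repetitions are arbitrary assumptions:

```python
# Sketch: simulating the Type I error rate. When H0 is actually true,
# the long-run share of tests with p < alpha is approximately alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_tests = 10_000
false_positives = 0

for _ in range(n_tests):
    # Draw a sample from a population where H0 (mean = 0) is true.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        false_positives += 1

print(f"Observed Type I error rate: {false_positives / n_tests:.3f}")  # ~0.05
```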
Type II Error
• We may accept a hypothesis which is false and should be rejected
• When you perform a hypothesis test and your p-value is greater than your significance
level, your results are not statistically significant. That’s disappointing because your sample
provides insufficient evidence for concluding that the effect you’re studying exists in the
population. However, there is a chance that the effect is present in the population even
though the test results don’t support it. If that’s the case, you’ve just experienced a Type II
error. The probability of making a Type II error is known as beta (β).
• What causes Type II errors? Whereas Type I errors are caused by one thing, sampling error,
there are a host of possible reasons for Type II errors—small effect sizes, small sample
sizes, and high data variability. Furthermore, unlike Type I errors, you can’t set the Type II
error rate for your analysis. Instead, the best that you can do is estimate it before you
begin your study by approximating properties of the alternative hypothesis that you’re
studying. When you do this type of estimation, it’s called power analysis.
• To estimate the Type II error rate, you create a hypothetical probability distribution that
represents the properties of a true alternative hypothesis. However, when you’re
performing a hypothesis test, you typically don’t know which hypothesis is true, much less
the specific properties of the distribution for the alternative hypothesis. Consequently, the
true Type II error rate is usually unknown!
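As a hedged sketch of the power analysis mentioned above (not part of the slides), the Type II error rate β can be approximated once you assume an effect size, a sample size, and α. The values below are illustrative assumptions, and statsmodels is used for convenience:

```python
# Sketch: estimating the Type II error rate (beta) via a power analysis
# for a two-sample t-test. Effect size, sample size, and alpha are
# illustrative assumptions, not values from the slides.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.solve_power(effect_size=0.5,   # assumed Cohen's d
                             nobs1=50,          # assumed per-group sample size
                             alpha=0.05)        # significance level
beta = 1 - power

print(f"Power = {power:.2f}, so Type II error rate beta = 1 - power = {beta:.2f}")
```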
Assessment Pattern
Max. Marks: 10, 10, 10, 4, 4, 2 (Total: 40)
THANK YOU
[email protected]