
AN ASSIGNMENT ON

COMPLETELY RANDOMISED DESIGN (CRD)


Course Title: Advanced Research Methodology
Course Code: HRT 503

SUBMITTED BY:
Sadia Naznin
ID: 2305004
MS in Horticulture
Department of Horticulture
HSTU, Dinajpur

SUBMITTED TO:
Dr. Bidhan Chandra Halder
Professor
Department of Horticulture
Faculty of Agriculture
HSTU, Dinajpur

HAJEE MOHAMMAD DANESH SCIENCE AND TECHNOLOGY UNIVERSITY, DINAJPUR-5200

CONTENTS

01 Introduction
02 Characteristics of a Completely Randomized Design (CRD)
03 Description of the Design (CRD)
04 Example of Randomization
05 Advantages of a CRD
06 Disadvantages
07 The ANOVA Table for a Completely Randomized Design (CRD)
08 Step-by-Step Guide on Analyzing the Data of a CRD
09 ANOVA Table for Completely Randomized Design (CRD) - Agricultural Example
10 The Main Objectives of a Completely Randomized Design (CRD)
11 Fixed vs. Random Effects
12 Conclusion
13 References

COMPLETELY RANDOMISED DESIGN (CRD)

Introduction

The completely randomized design (CRD) is a foundational and widely employed experimental design in scientific research. It is a straightforward approach used to investigate the effects of different treatments or interventions on a particular outcome variable. In a completely randomized design, the subjects or experimental units are randomly assigned to the various treatment groups. The key characteristic of the CRD is the random allocation of treatments, which ensures that each unit has an equal chance of being assigned to any treatment. Randomization helps to minimize bias and the influence of confounding factors, making the comparison between treatment groups more reliable and valid. The primary goal of a completely randomized design is to assess whether there are statistically significant differences in the outcome variable among the treatment groups. By randomly assigning treatments, researchers can attribute any observed differences to the effects of the treatments rather than to pre-existing differences between the units.

One of the advantages of the CRD is its simplicity. It is relatively easy to implement and requires minimal assumptions. The design also allows for straightforward statistical analysis, typically employing techniques such as analysis of variance (ANOVA) to compare the treatment groups.

Characteristics of a completely randomized design (CRD):

Random assignment: Treatments are assigned to the experimental units randomly. This
randomization ensures that each unit has an equal chance of receiving any treatment, minimizing
bias and confounding factors.

Independence: The units or subjects in a CRD are considered independent of each other. The
treatment applied to one unit does not affect or influence the others. Independence is important
for making valid statistical inferences.

Homogeneity: The units or subjects within each treatment group are assumed to be
homogeneous or similar in characteristics that could affect the outcome. This assumption helps
ensure that any observed differences between treatment groups are likely due to the treatments
themselves.

Control group: A CRD may include a control group that does not receive any treatment or
receives a placebo. The control group serves as a baseline for comparison, providing a reference
point to evaluate the effects of the treatments.

Replication: Adequate replication involves having multiple units or subjects within each
treatment group. Replication helps to increase the precision of estimates, improve the reliability
of the results, and allows for statistical analysis.

Statistical analysis: After data collection, statistical techniques such as analysis of variance
(ANOVA) are used to analyze the data. ANOVA allows for comparing the treatment groups and
assessing the statistical significance of observed differences.

Simplicity: CRD is a relatively simple and straightforward experimental design. It is easy to implement and understand, making it a common choice for many research studies.

Generalizability: The results obtained from a completely randomized design can be generalized
to the population from which the experimental units were sampled, assuming the random
assignment and other assumptions are met.

Limited control over extraneous factors: While randomization helps control for known and
unknown factors, a CRD may not be able to fully account for all potential sources of variation.
Other experimental designs, such as randomized block designs or factorial designs, may be more
appropriate for addressing specific sources of variability.

By incorporating these characteristics, a completely randomized design allows for unbiased comparisons between treatment groups and helps to draw valid conclusions regarding the effects of the treatments on the outcome variable.

Description of the Design (CRD)

-Simplest design to use.

-The design can be used when experimental units are essentially homogeneous.

-Because of the homogeneity requirement, it may be difficult to use this design for field experiments.

-The CRD is best suited for experiments with a small number of treatments.

-Treatments are assigned to experimental units completely at random.

-Every experimental unit has the same probability of receiving any treatment.

-Randomization is performed using a random number table, computer, program, etc.

Example of Randomization

-Given you have 4 treatments (A, B, C, and D) and 5 replicates, how many experimental units would you have? 4 treatments x 5 replicates = 20 experimental units.

One possible random layout of the 20 experimental units:

Unit:      1  2  3  4  5  6  7  8  9  10
Treatment: D  D  B  C  D  C  A  A  B  D

Unit:      11 12 13 14 15 16 17 18 19 20
Treatment: C  B  A  B  C  B  C  D  A  A
-Note that there is no “blocking” of experimental units into replicates.

-Every experimental unit has the same probability of receiving any treatment.
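A layout like the one above can be generated in a few lines of code. The following is a minimal sketch in Python; the treatment labels and replicate count match the example, but the specific assignment will differ on each run because the allocation is random:

```python
import random

treatments = ["A", "B", "C", "D"]  # 4 treatments
replicates = 5                     # 5 replicates each

# Build the full list of treatment labels:
# 4 treatments x 5 replicates = 20 experimental units.
layout = [t for t in treatments for _ in range(replicates)]

# Completely random assignment: shuffling gives every experimental
# unit the same probability of receiving any treatment (no blocking).
random.shuffle(layout)

for unit, treatment in enumerate(layout, start=1):
    print(f"Unit {unit:2d}: Treatment {treatment}")
```

In practice the same randomization can also be done with a random number table or any statistical package, as noted above.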

Advantages of a CRD

1. Very flexible design (i.e. number of treatments and replicates is only limited by
the available number of experimental units).

2. Statistical analysis is simple compared to other designs.

3. Loss of information due to missing data is small compared to other designs due to
the larger number of degrees of freedom for the error source of variation.

Disadvantages

1. If experimental units are not homogeneous and you fail to minimize this variation
using blocking, there may be a loss of precision.

2. Usually the least efficient design unless experimental units are homogeneous.

3. Not suited for a large number of treatments.

The analysis of variance (ANOVA) table for a Completely Randomized Design
(CRD) typically includes the following components:

Source of Variation   df      SS                      MS                       F-value        p-value
Between Treatments    k - 1   SS(B)                   MS(B) = SS(B) / (k - 1)  MS(B)/MS(W)    p-value
Within Treatments     N - k   SS(W)                   MS(W) = SS(W) / (N - k)
Total                 N - 1   SS(T) = SS(B) + SS(W)

Where:

-k represents the number of treatment groups.
-N represents the total number of observations or experimental units.
-SS(B) denotes the sum of squares between treatments, which measures the variability among the treatment means.
-SS(W) represents the sum of squares within treatments, which measures the variability within each treatment group.
-SS(T) is the total sum of squares, which accounts for the overall variability in the data.

The mean squares (MS) are obtained by dividing the sum of squares by their respective degrees
of freedom. MS(B) represents the mean squares between treatments, and MS(W) represents the
mean squares within treatments.

The F-value is calculated by dividing MS(B) by MS(W) and measures the ratio of the between-treatments variability to the within-treatments variability. It is used to test the null hypothesis that there are no significant differences among the treatment means.

The p-value associated with the F-value is used to determine the statistical significance of the
treatment effects. It indicates the probability of observing such extreme results under the
assumption of no treatment effects.

The ANOVA table allows researchers to assess the significance of treatment effects, determine
the variability among and within treatment groups, and draw conclusions regarding the impact of
treatments on the outcome variable in a CRD.

To analyze the data from a Completely Randomized Design (CRD), we can use statistical techniques such as analysis of variance (ANOVA). Here's a step-by-step guide on analyzing the data of a CRD:

Step 1: Set up the ANOVA table:

Construct the ANOVA table with the following components:

-Source of Variation: Treatments (between groups) and Residual (within groups)
-Degrees of Freedom (df): the degrees of freedom for treatments (k - 1, where k is the number of treatment groups) and residual (N - k, where N is the total number of observations)
-Sum of Squares (SS): the sum of squares for treatments (SS(B)) and residual (SS(W))
-Mean Squares (MS): the mean squares for treatments (MS(B) = SS(B) / (k - 1)) and residual (MS(W) = SS(W) / (N - k))

Step 2: Perform hypothesis testing:

Null Hypothesis (H0): Assume that there are no significant differences among the treatment
means.
Alternative Hypothesis (Ha): Assert that there are significant differences among the treatment
means.

Step 3: Calculate the F-value:

Compute the F-value by dividing the mean squares for treatments (MS(B)) by the mean squares
for residual (MS(W)).

Step 4: Determine the p-value:

Use the F-value to determine the p-value associated with the F-distribution.
Consult the F-distribution table or use statistical software to find the p-value.

Step 5: Make a decision:

Compare the p-value with the significance level (alpha) to determine the statistical significance
of the treatment effects.
If the p-value is less than the significance level (p < alpha), reject the null hypothesis and
conclude that there are significant differences among the treatment means.

Step 6: Conduct post hoc tests (if necessary):

If the null hypothesis is rejected, conduct post hoc tests to compare specific treatment groups and
identify pairwise differences.
Common post hoc tests include Tukey's Honestly Significant Difference (HSD), Bonferroni
correction, or Scheffé's method.

Step 7: Interpret the results:

Interpret the findings in the context of the research question and objectives.
Consider the treatment means, confidence intervals, and pairwise comparisons to understand the
nature and magnitude of the treatment effects.
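Steps 1 through 3 can be sketched in plain Python, computing the ANOVA quantities from first principles. The yield values below are made-up numbers for three hypothetical treatment groups, used only to illustrate the arithmetic:

```python
# One-way ANOVA quantities for a CRD, computed from first principles.
# The data below are hypothetical, for illustration only.
groups = {
    "A": [20.1, 21.4, 19.8, 22.0, 20.6],
    "B": [23.5, 24.1, 22.8, 23.9, 24.4],
    "C": [19.2, 18.7, 20.0, 19.5, 18.9],
}

k = len(groups)                           # number of treatment groups
N = sum(len(v) for v in groups.values())  # total number of observations
grand_mean = sum(x for v in groups.values() for x in v) / N

# SS(B): variability among the treatment means
ss_between = sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2
                 for v in groups.values())

# SS(W): variability within each treatment group
ss_within = sum((x - sum(v) / len(v)) ** 2
                for v in groups.values() for x in v)

df_between = k - 1   # degrees of freedom for treatments
df_within = N - k    # degrees of freedom for residual
ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_value = ms_between / ms_within

print(f"SS(B) = {ss_between:.3f}, SS(W) = {ss_within:.3f}")
print(f"F({df_between}, {df_within}) = {f_value:.2f}")
# For the p-value (Step 4), compare this F-value against an
# F-distribution table or use statistical software.
```

Note that SS(B) + SS(W) equals the total sum of squares SS(T), which is a useful check on the calculation.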

Example: Let's say we have conducted an agricultural experiment with three different fertilizer
treatments (Treatment A, B, and C) and collected yield data from 30 crop plots (10 plots per
treatment). The ANOVA table for this CRD experiment may look as follows:

ANOVA Table for Completely Randomized Design (CRD) - Agricultural Example

Source of Variation   df   SS                      MS                  F-value        p-value
Between Treatments    2    SS(B)                   MS(B) = SS(B) / 2   MS(B)/MS(W)    p-value
Within Treatments     27   SS(W)                   MS(W) = SS(W) / 27
Total                 29   SS(T) = SS(B) + SS(W)

In this example, the "Between Treatments" row represents the variation due to the fertilizer
treatments, while the "Within Treatments" row represents the variation within each treatment
group. The degrees of freedom for "Between Treatments" is 2 (k - 1), where k is the number of
treatment groups (3 in this case), and the degrees of freedom for "Within Treatments" is 27 (N -
k), where N is the total number of observations (30).
The sum of squares for "Between Treatments" (SS(B)) measures the variability among the
treatment means, and the sum of squares for "Within Treatments" (SS(W)) measures the
variability within each treatment group. The total sum of squares (SS(T)) represents the overall
variability in the yield data.
The mean squares for "Between Treatments" (MS(B)) is obtained by dividing SS(B) by the
degrees of freedom for "Between Treatments" (2), and the mean squares for "Within
Treatments" (MS(W)) is obtained by dividing SS(W) by the degrees of freedom for "Within
Treatments" (27).
The F-value is calculated by dividing MS(B) by MS(W), and the p-value associated with the F-value is used to determine the statistical significance of the treatment effects.

The main objectives of a Completely Randomized Design (CRD) can be summarized as follows:

Treatment comparison: The primary objective of CRD is to compare the effects of different
treatments or interventions on a particular outcome variable. By randomly assigning treatments
to the experimental units, CRD ensures that any observed differences in the outcome variable
among the treatment groups are more likely to be attributed to the treatments themselves rather
than other factors.

Unbiased estimation: CRD aims to provide unbiased estimation of treatment effects. The
random assignment of treatments helps to minimize bias by ensuring that each experimental unit
has an equal chance of being assigned to any treatment. This randomization reduces the potential
for confounding factors and allows for unbiased estimation of treatment effects.

Statistical inference: CRD facilitates valid statistical inference by providing a framework for
hypothesis testing and estimating the significance of treatment effects. By using appropriate
statistical techniques, such as analysis of variance (ANOVA), researchers can determine if the

observed differences in the outcome variable among the treatment groups are statistically
significant or simply due to chance.

Generalizability: CRD aims to generate results that can be generalized to a larger population
from which the experimental units are sampled. By randomly assigning treatments, CRD helps
to ensure that the treatment effects observed in the study are representative of the larger
population and can be applied to similar settings or contexts.

Control of bias and confounding: Randomization in CRD helps to control for bias and
confounding factors that may influence the treatment effects. By randomly assigning treatments,
researchers can minimize the impact of pre-existing differences between the units, ensuring that
any observed differences in the outcome variable are more likely due to the treatments
themselves.

In summary, the main objectives of CRD are to compare treatments, provide unbiased estimation
of treatment effects, facilitate valid statistical inference, enable generalizability of results, and
control bias and confounding factors. By achieving these objectives, CRD allows researchers to
draw reliable conclusions about the effects of treatments on the outcome variable and make
evidence-based decisions in various fields of study.

Fixed vs. Random Effects

-The choice of labeling a factor as a fixed or random effect will affect how you will make the F-test.

-This will become more important later in the course when we discuss interactions.

Fixed Effect

-All treatments of interest are included in your experiment.

-You cannot make inferences to a larger experiment.

Example 1: An experiment is conducted at Fargo and Grand Forks, ND. If location is considered a fixed effect, you cannot make inferences toward a larger area (e.g. the central Red River Valley).

Example 2: An experiment is conducted using four rates (e.g. ½ X, X, 1.5 X, 2 X) of a herbicide to determine its efficacy to control weeds. If rate is considered a fixed effect, you cannot make inferences about what may have occurred at any rates not used in the experiment (e.g. ¼ X, 1.25 X, etc.).

Random Effect

-Treatments are a sample of the population to which you can make inferences.

-You can make inferences toward a larger population using the information from the
analyses.

Example 1: An experiment is conducted at Fargo and Grand Forks, ND. If location is considered a random effect, you can make inferences toward a larger area (e.g. you could use the results to state what might be expected to occur in the central Red River Valley).

Example 2: An experiment is conducted using four rates (e.g. ½ X, X, 1.5 X, 2 X) of a herbicide to determine its efficacy to control weeds. If rate is considered a random effect, you can make inferences about what may have occurred at rates not used in the experiment (e.g. ¼ X, 1.25 X, etc.).

Conclusion

In conclusion, the Completely Randomized Design (CRD) is a widely used experimental design that allows researchers to compare the effects of different treatments or interventions on a particular outcome variable. CRD offers several advantages, including simplicity, unbiased estimation, and statistical inference. By randomly assigning treatments to the experimental units, CRD ensures that any observed differences in the outcome variable among the treatment groups are more likely due to the treatments themselves rather than other factors. This randomization helps minimize bias and confounding factors, making the comparisons between treatment groups more reliable and valid.

CRD facilitates unbiased estimation of treatment effects, enabling researchers to make valid statistical inferences. Through techniques such as analysis of variance (ANOVA),
the significance of treatment effects can be assessed, providing insights into whether the
observed differences among treatment groups are statistically significant or simply due to
chance. Furthermore, CRD allows for generalizability of results. By randomly assigning
treatments, researchers aim to generate findings that can be applied to a larger population
from which the experimental units were sampled. This enhances the external validity of
the study's conclusions.
While CRD has its advantages, it is important to consider its limitations. CRD may not
account for specific sources of variation or individual characteristics that could influence

treatment responses. In such cases, alternative experimental designs, such as randomized
block designs or factorial designs, may be more appropriate.

In summary, CRD provides a robust framework for comparing treatments and drawing
valid conclusions about their effects on the outcome variable. By minimizing bias,
facilitating unbiased estimation, enabling statistical inference, and promoting
generalizability, CRD assists researchers in making evidence-based decisions and
advancing scientific knowledge in various fields of study.

References:

-Montgomery, D. C. (2017). Design and Analysis of Experiments. John Wiley & Sons.
-Kirk, R. E. (2012). Experimental Design: Procedures for the Behavioral Sciences. SAGE Publications.
-Box, G. E. P., Hunter, J. S., & Hunter, W. G. (2005). Statistics for Experimenters: Design, Innovation, and Discovery. John Wiley & Sons.
-Cobb, G. W., & Coffey, L. A. (2018). Introduction to the Design and Analysis of Experiments.
