
SINGLE FACTOR EXPERIMENTS
Single Factor Experiments
Experiments in which only a single factor varies while all others are kept constant are
called single-factor experiments. In such experiments, the treatments consist solely of the
different levels of the single variable factor. All other factors are applied uniformly to all
plots at a single prescribed level. For example, most crop variety trials are single-factor
experiments in which the single variable factor is variety and the factor levels (i.e.,
treatments) are the different varieties. Only the variety planted differs from one
experimental plot to another and all management factors, such as fertilizer, insect control,
and water management, are applied uniformly to all plots. Other examples of single-factor
experiments are:

- Fertilizer trials where several rates of a single fertilizer element are tested.
- Insecticide trials where several insecticides are tested.
- Plant-population trials where several plant densities are tested.
There are two groups of experimental design that are applicable to a single-factor
experiment. One group is the family of complete block designs, which is suited for
experiments with a small number of treatments and is characterized by blocks, each of
which contains at least one complete set of treatments. The other group is the family of
incomplete block designs, which is suited for experiments with a large number of
treatments and is characterized by blocks, each of which contains only a fraction of the
treatments to be tested.
We describe three complete block designs (completely randomized, randomized complete
block, and Latin square designs) and two incomplete block designs (lattice and group
balanced block designs). For each design, we illustrate the procedures for randomization,
plot layout, and analysis of variance with actual experiments.
COMPLETELY RANDOMIZED DESIGN
(CRD)
2.1. COMPLETELY RANDOMIZED DESIGN
A completely randomized design (CRD) is one where the treatments are assigned
completely at random so that each experimental unit has the same chance of receiving
any one treatment. For the CRD, any difference among experimental units receiving the
same treatment is considered as experimental error. Hence, the CRD is only appropriate
for experiments with homogeneous experimental units, such as laboratory experiments,
where environmental effects are relatively easy to control. For field experiments, where
there is generally large variation among experimental plots in such environmental factors
as soil, the CRD is rarely used.
2.1.1. CRD Randomization and Layout
The step-by-step procedures for randomization and layout of a CRD are given here for a
field experiment with four treatments A, B, C, and D, each replicated five times.

STEP 1. Determine the total number of experimental plots (n) as the product of the number of treatments (t) and the number of replications (r); that is, n = (r)(t). For our example, n = (5)(4) = 20.

STEP 2. Assign a plot number to each experimental plot in any convenient manner; for example, consecutively from 1 to n. For our example, the plot numbers 1,..., 20 are assigned to the 20 experimental plots as shown in Figure 2.1.
STEP 3. Assign the treatments to the experimental plots by any of the following randomization schemes:

A. By table of random numbers. The steps are:

STEP A1. Locate a starting point in a table of random numbers (Appendix A) by closing your eyes and pointing a finger to any position in a page. For our example, the starting point is at the intersection of the sixth row and the twelfth (single) column.

STEP A2. Using the starting point obtained in step A1, read downward vertically to obtain n = 20 distinct three-digit random numbers. Three-digit numbers are preferred because they are less likely to include ties than one- or two-digit numbers. For our example, starting at the intersection of the sixth row and the twelfth column, the 20 three-digit numbers are obtained together with their corresponding sequence of appearance.

STEP A3. Rank the n random numbers obtained in step A2 in ascending or descending order. For our example, the 20 random numbers are ranked from the smallest to the largest.

STEP A4. Divide the n ranks derived in step A3 into t groups, each consisting of r numbers, according to the sequence in which the random numbers appeared. For our example, the 20 ranks are divided into four groups, each consisting of five numbers.

STEP A5. Assign the t treatments to the n experimental plots, using the group number of step A4 as the treatment number and the corresponding ranks in each group as the numbers of the plots to which that treatment is assigned. For our example, the first group is assigned to treatment A, and plots numbered 17, 2, 15, 7, and 19 are assigned to receive this treatment; the second group is assigned to treatment B with plots numbered 13, 4, 18, 1, and 8; the third group is assigned to treatment C with plots numbered 16, 14, 6, 9, and 12; and the fourth group to treatment D with plots numbered 10, 20, 3, 11, and 5.
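Steps A1 through A5 (drawing n distinct random numbers and ranking them) amount to taking a random permutation of the plot numbers. The following is a minimal Python sketch of the procedure under that reading; the function name and the use of a seeded generator are illustrative, not part of the original procedure:

```python
import random

def crd_layout(treatments, r, seed=None):
    """Assign each of t treatments to r plots completely at random (CRD).

    Drawing n distinct random numbers and ranking them, as in steps
    A1-A5, is equivalent to a random permutation of the plot numbers,
    which is what shuffle() produces here.
    """
    rng = random.Random(seed)          # seeded generator, for a repeatable sketch
    n = len(treatments) * r            # total plots, n = (r)(t)
    plots = list(range(1, n + 1))      # plot numbers 1..n
    rng.shuffle(plots)                 # random permutation = ranked random numbers
    # steps A4/A5: cut the permuted plot numbers into t groups of r plots
    return {trt: sorted(plots[i * r:(i + 1) * r])
            for i, trt in enumerate(treatments)}

for trt, plot_list in crd_layout(["A", "B", "C", "D"], r=5, seed=1).items():
    print(trt, plot_list)
```

Because the shuffle is unrestricted over all n plots, any treatment may land on any plot, which is precisely the defining property of the CRD.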
2.1.2. CRD ANALYSIS OF VARIANCE (ANOVA)
There are two sources of variation among the n observations obtained from a CRD trial.
One is the treatment variation, the other is experimental error. The relative size of the two
is used to indicate whether the observed difference among treatments is real or is due to
chance. The treatment difference is said to be real if treatment variation is sufficiently
larger than experimental error.

A major advantage of the CRD is the simplicity in the computation of its analysis of
variance, especially when the number of replications is not uniform for all treatments. For
most other designs, the analysis of variance becomes complicated when the loss of data
in some plots results in unequal replications among treatments tested.
2.1.2.1 Equal Replication
The steps involved in the analysis of variance for data from a CRD experiment with an equal number of replications are given below. We use data from an experiment on chemical control of brown planthoppers and stem borers in rice (Table 2.1).
STEP 1. Group the data by treatments and calculate the treatment totals (T) and grand
total (G).

STEP 2. Construct an outline of the analysis of variance, with rows for the sources of variation (treatment, experimental error, and total) and columns for the degree of freedom, sum of squares, mean square, and computed and tabular F values.
STEP 3. Using t to represent the number of treatments and r, the number of replications, determine the degree of freedom (d.f.) for each source of variation as follows:

Total d.f. = (r)(t) - 1
Treatment d.f. = t - 1
Error d.f. = (t)(r - 1)

The error d.f. can also be obtained through subtraction as:

Error d.f. = Total d.f. - Treatment d.f. = [(r)(t) - 1] - (t - 1) = (t)(r - 1)
STEP 4. Using Xi to represent the measurement of the ith plot, Ti as the total of the ith
treatment, and n as the total number of experimental plots [i.e., n = (r)(t)], calculate the
correction factor and the various sums of squares (SS) as:
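The correction factor and sums of squares referred to here follow the standard formulas, reconstructed below in the symbols just defined (the original displayed equations are not reproduced in this text):

```latex
\mathrm{CF} = \frac{G^2}{n}, \qquad
\text{Total SS} = \sum_{i=1}^{n} X_i^2 - \mathrm{CF}, \qquad
\text{Treatment SS} = \frac{\sum T^2}{r} - \mathrm{CF}, \qquad
\text{Error SS} = \text{Total SS} - \text{Treatment SS}
```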
Use the symbol Σ to represent "the sum of." For example, the expression G = X1 + X2 + ... + Xn can be written as G = Σ Xi, where the sum runs from i = 1 to n, or simply G = ΣX. For our example, using the T values and the G value from Table 2.1, the sums of squares are computed as:
STEP 5. Calculate the mean square (MS) for each source of variation by dividing each SS by its corresponding d.f.:

Treatment MS = Treatment SS / (t - 1)
Error MS = Error SS / [(t)(r - 1)]
STEP 6. Calculate the F value for testing significance of the treatment difference as:

F = Treatment MS / Error MS

Note here that the F value should be computed only when the error d.f. is large
enough for a reliable estimate of the error variance. As a general guideline, the F
value should be computed only when the error d.f. is six or more.
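Steps 4 through 6 can be sketched end to end in Python. The measurements below are hypothetical stand-ins for Table 2.1, which is not reproduced in this text; only the formulas follow the procedure above:

```python
# One-way (CRD) analysis of variance with equal replication.
# Hypothetical data: treatment -> list of r plot measurements.

data = {
    "T1": [2.5, 2.2, 2.8, 2.4],
    "T2": [3.6, 3.9, 3.4, 3.8],
    "T3": [4.1, 4.4, 3.9, 4.2],
}

t = len(data)                           # number of treatments
r = len(next(iter(data.values())))      # replications per treatment
n = r * t                               # total number of plots

G = sum(x for xs in data.values() for x in xs)        # grand total (step 1)
CF = G ** 2 / n                                       # correction factor (step 4)
total_ss = sum(x ** 2 for xs in data.values() for x in xs) - CF
treatment_ss = sum(sum(xs) ** 2 for xs in data.values()) / r - CF
error_ss = total_ss - treatment_ss

treatment_ms = treatment_ss / (t - 1)   # step 5
error_ms = error_ss / (t * (r - 1))
F = treatment_ms / error_ms             # step 6: compare with the tabular F
print("F =", round(F, 2))
```

Note that with t = 3 and r = 4, the error d.f. is t(r - 1) = 9, which satisfies the "six or more" guideline above.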
STEP 7. Obtain the tabular F values from Appendix E, with f1 = treatment d.f. = (t - 1)
and f2 = error d.f. = (t)(r - 1). For our example, the tabular F values with f1 = 6 and f2 = 21
degrees of freedom are 2.57 for the 5% level of significance and 3.81 for the 1% level.

STEP 8. Enter all the values computed in steps 3 to 7 in the outline of the analysis of
variance constructed in step 2.
STEP 9. Compare the computed F value of step 6 with the tabular F values of step 7, and
decide on the significance of the difference among treatments using the following rules:

1. If the computed F value is larger than the tabular F value at the 1% level of significance, the
treatment difference is said to be highly significant. Such a result is generally indicated by
placing two asterisks on the computed F value in the analysis of variance.

2. If the computed F value is larger than the tabular F value at the 5% level of significance but
smaller than or equal to the tabular F value at the 1% level of significance, the treatment
difference is said to be significant. Such a result is indicated by placing one asterisk on the
computed F value in the analysis of variance.

3. If the computed F value is smaller than or equal to the tabular F value at the 5% level of
significance, the treatment difference is said to be nonsignificant. Such a result is indicated by
placing ns on the computed F value in the analysis of variance.
For our example, the computed F value of 9.83 is larger than the tabular F value at the
1% level of significance of 3.81. Hence, the treatment difference is said to be highly
significant. In other words, chances are less than 1 in 100 that all the observed
differences among the seven treatment means could be due to chance. It should be
noted that such a significant F test verifies the existence of some differences among the
treatments tested but does not specify the particular pair (or pairs) of treatments that
differ significantly.
STEP 10. Compute the grand mean and the coefficient of variation (cv) as follows:

Grand mean = G / n
cv = (√Error MS / Grand mean) × 100
The cv indicates the degree of precision with which the treatments are compared and is
a good index of the reliability of the experiment. It expresses the experimental error as
percentage of the mean; thus, the higher the cv value, the lower is the reliability of the
experiment. The cv value is generally placed below the analysis of variance table, as
shown in Table 2.2. The cv varies greatly with the type of experiment, the crop grown,
and the character measured. An experienced researcher, however, can make a
reasonably good judgement on the acceptability of a particular cv value for a given type
of experiment. Our experience with field experiments in transplanted rice, for example,
indicates that, for data on rice yield, the acceptable range of cv is 6 to 8% for variety
trials, 10 to 12% for fertilizer trials, and 13 to 15% for insecticide and herbicide trials. The
cv for other plant characters usually differs from that of yield. For example, in a field
experiment where the cv for rice yield is about 10%, that for tiller number would be about
20% and that for plant height, about 3%.
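The cv relation of step 10 can be written as a small helper; the numeric inputs below are hypothetical, chosen only to land in the acceptable range quoted above for variety trials:

```python
import math

def coefficient_of_variation(error_ms, grand_mean):
    """Step 10: cv (%) = (sqrt(Error MS) / grand mean) * 100."""
    return math.sqrt(error_ms) / grand_mean * 100

# hypothetical error mean square and grand mean, for illustration only
print(round(coefficient_of_variation(error_ms=0.0517, grand_mean=3.43), 1))
```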
RANDOMIZED COMPLETE
BLOCK DESIGN
(RCBD)
2.2. RANDOMIZED COMPLETE BLOCK DESIGN
The randomized complete block (RCB) design is one of the most widely used
experimental designs in agricultural research. The design is especially suited for field
experiments where the number of treatments is not large and the experimental area has a
predictable productivity gradient. The primary distinguishing feature of the RCB design is
the presence of blocks of equal size, each of which contains all the treatments.
2.2.1. Blocking Technique
The primary purpose of blocking is to reduce experimental error by eliminating the
contribution of known sources of variation among experimental units. This is done by
grouping the experimental units into blocks such that variability within each block is
minimized and variability among blocks is maximized. Because only the variation within a
block becomes part of the experimental error, blocking is most effective when the
experimental area has a predictable pattern of variability. With a predictable pattern, plot
shape and block orientation can be chosen so that much of the variation is accounted for
by the difference among blocks, and experimental plots within the same block are kept as
uniform as possible.

There are two important decisions that have to be made in arriving at an appropriate and
effective blocking technique. These are:

• The selection of the source of variability to be used as the basis for blocking.
• The selection of the block shape and orientation.
An ideal source of variation to use as the basis for blocking is one that is large and highly
predictable. Examples are:

• Soil heterogeneity, in a fertilizer or variety trial where yield data is the primary character
of interest.
• Direction of insect migration, in an insecticide trial where insect infestation is the
primary character of interest.
• Slope of the field, in a study of plant reaction to water stress.
After identifying the specific source of variability to be used as the basis for blocking, the
size and shape of the blocks must be selected to maximize variability among blocks. The
guidelines for this decision are:

1. When the gradient is unidirectional (i.e., there is only one gradient), use long and narrow blocks.
Furthermore, orient these blocks so their length is perpendicular to the direction of the gradient.
2. When the fertility gradient occurs in two directions with one gradient much stronger than the
other, ignore the weaker gradient and follow the preceding guideline for the case of the
unidirectional gradient.
3. When the fertility gradient occurs in two directions with both gradients equally strong and
perpendicular to each other, choose one of these alternatives:
-Use blocks that are as square as possible.
-Use long and narrow blocks with their length perpendicular to the direction of one gradient
(see guideline 1) and use the covariance technique (see Chapter 10, Section 10.1.1) to take
care of the other gradient.
-Use the Latin square design (see Section 2.3) with two-way blockings, one for each gradient.
4. When the pattern of variability is not predictable, blocks should be as square as possible.
Whenever blocking is used, the identity of the blocks and the purpose for their use must
be consistent throughout the experiment. That is, whenever a source of variation exists
that is beyond the control of the researcher, he should assure that such variation occurs
among blocks rather than within blocks. For example, if certain operations such as
application of insecticides or data collection cannot be completed for the whole experiment
in one day, the task should be completed for all plots of the same block in the same day. In
this way, variation among days (which may be enhanced by weather factors) becomes a
part of block variation and is, thus, excluded from the experimental error. If more than one
observer is to make measurements in the trial, the same observer should be assigned to
make measurements for all plots of the same block. In this way, the variation among
observers, if any, would constitute a part of block variation instead of the experimental
error.
2.2.2. Randomization and Layout
The randomization process for a RCB design is applied separately and independently to
each of the blocks. We use a field experiment with six treatments A, B, C, D, E, F and four
replications to illustrate the procedure.

STEP 1. Divide the experimental area into r equal blocks, where r is the number of
replications, following the blocking technique described in Section 2.2.1. For our example,
the experimental area is divided into four blocks as shown in Figure 2.2. Assuming that
there is a unidirectional fertility gradient along the length of the experimental field, block
shape is made rectangular and perpendicular to the direction of the gradient.
STEP 2. Subdivide the first block into t experimental plots, where t is the number of
treatments. Number the t plots consecutively from 1 to t, and assign t treatments at
random to the t plots following any of the randomization schemes for the CRD described
in Section 2.1.1.
For our example, block I is subdivided into six equal-sized plots, which are numbered
consecutively from top to bottom and from left to right (Figure 2.3); and, the six treatments
are assigned at random to the six plots using the table of random numbers (see Section
2.1.1, step 3A) as follows:
Select six three-digit random numbers. We start at the intersection of the sixteenth row
and twelfth column of Appendix A and read downward vertically, to get the following:
Rank the random numbers from the smallest to the largest, as follows:
Assign the six treatments to the six plots by using the sequence in which the random numbers occurred as the treatment number and the corresponding rank as the plot number to which the particular treatment is to be assigned. Thus, treatment A is assigned to plot 6, treatment B to plot 5, treatment C to plot 1, treatment D to plot 2, treatment E to plot 4, and treatment F to plot 3. The layout of the first block is shown in Figure 2.3.
STEP 3. Repeat step 2 completely for each of the remaining blocks. For our example, the
final layout is shown in Figure 2.4.

It is worthwhile, at this point, to emphasize the major difference between a CRD and a RCB design.
Randomization in the CRD is done without any restriction, but for the RCB design, all treatments must
appear in each block. This difference can be illustrated by comparing the RCB design layout of Figure
2.4 with a hypothetical layout of the same trial based on a CRD, as shown in Figure 2.5. Note that
each treatment in a CRD layout can appear anywhere among the 24 plots in the field. For example, in
the CRD layout, treatment A appears in three adjacent plots (plots 5, 8, and 11). This is not possible in
a RCB layout.
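Because the randomization is applied separately and independently to each block, a sketch of the RCB layout differs from the CRD one only in that the shuffle is repeated once per block. The function name and seeded generator below are illustrative:

```python
import random

def rcb_layout(treatments, r, seed=None):
    """Randomize treatments independently within each of r blocks (RCB).

    Unlike the CRD, every block receives one complete set of
    treatments; only the order of plots within a block is random.
    """
    rng = random.Random(seed)           # seeded generator, for a repeatable sketch
    layout = {}
    for block in range(1, r + 1):
        order = list(treatments)
        rng.shuffle(order)              # a fresh, independent randomization per block
        layout[block] = order           # order[k] is the treatment on plot k+1
    return layout

for block, order in rcb_layout(["A", "B", "C", "D", "E", "F"], r=4, seed=7).items():
    print("block", block, order)
```

The per-block constraint is what makes the adjacent-plot clustering of Figure 2.5 impossible: a treatment can occur only once within any block.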
2.2.3. Analysis of Variance for RCBD
There are three sources of variability in a RCB design: treatment, replication (or block), and experimental error. Note that this is one more than that for a CRD, because of the addition of replication, which corresponds to the variability among blocks.

To illustrate the steps involved in the analysis of variance for data from a RCB design, we use data from an experiment that compared six rates of seeding of a rice variety IR8 (Table 2.5).
STEP 1. Group the data by treatments and replications and calculate treatment totals (T),
replication totals (R), and grand total (G), as shown in Table 2.5.

STEP 2. Outline the analysis of variance, with rows for the sources of variation (replication, treatment, experimental error, and total) and columns for the degree of freedom, sum of squares, mean square, and computed and tabular F values.
STEP 3. Using r to represent the number of replications and t, the number of treatments, determine the degree of freedom for each source of variation as:

Total d.f. = (r)(t) - 1
Replication d.f. = r - 1
Treatment d.f. = t - 1
Error d.f. = (r - 1)(t - 1)

STEP 4. Compute the correction factor and the various sums of squares (SS): the total SS, the replication SS, the treatment SS, and the error SS.

STEP 5. Compute the mean square for each source of variation by dividing each sum of squares by its corresponding degree of freedom: the replication MS, the treatment MS, and the error MS.
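In the notation of step 1 (R for replication totals, T for treatment totals, G for the grand total), the standard formulas behind steps 4 and 5 are, reconstructed here since the original displayed equations are not reproduced in this text:

```latex
\mathrm{CF} = \frac{G^2}{rt}, \qquad
\text{Total SS} = \sum X^2 - \mathrm{CF}, \qquad
\text{Replication SS} = \frac{\sum R^2}{t} - \mathrm{CF}, \qquad
\text{Treatment SS} = \frac{\sum T^2}{r} - \mathrm{CF}
```

```latex
\text{Error SS} = \text{Total SS} - \text{Replication SS} - \text{Treatment SS}
```

```latex
\text{Replication MS} = \frac{\text{Replication SS}}{r-1}, \qquad
\text{Treatment MS} = \frac{\text{Treatment SS}}{t-1}, \qquad
\text{Error MS} = \frac{\text{Error SS}}{(r-1)(t-1)}
```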
STEP 6. Compute the F value for testing the treatment difference as:

F = Treatment MS / Error MS
STEP 7. Compare the computed F value with the tabular F values (from Appendix E) with f1
= treatment d.f. and f2 = error d.f. and make conclusions following the guidelines given in
step 9 of Section 2.1.2.1.

For our example, the tabular F values with f1 = 5 and f2 = 15 degrees of freedom are 2.90
at the 5% level of significance and 4.56 at the 1% level. Because the computed F value of
2.17 is smaller than the tabular F value at the 5% level of significance, we conclude that the
experiment failed to show any significant difference among the six treatments.
STEP 8. Compute the coefficient of variation as:

cv = (√Error MS / Grand mean) × 100
STEP 9. Enter all values computed in steps 3 to 8 in the analysis of variance outline of step
2.
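Steps 3 through 8 can be sketched end to end in Python. The measurements below are hypothetical stand-ins for Table 2.5, which is not reproduced in this text; each row is a treatment and each column position j is replication (block) j:

```python
# RCB analysis of variance on hypothetical data.

data = {
    "rate 1": [5.1, 5.4, 5.0, 5.3],
    "rate 2": [5.3, 6.0, 4.7, 5.6],
    "rate 3": [5.3, 5.7, 5.2, 5.5],
}

t = len(data)                           # treatments
r = len(next(iter(data.values())))      # replications (blocks)
n = r * t

G = sum(x for xs in data.values() for x in xs)        # grand total (step 1)
CF = G ** 2 / n                                       # correction factor (step 4)

total_ss = sum(x ** 2 for xs in data.values() for x in xs) - CF
rep_totals = [sum(xs[j] for xs in data.values()) for j in range(r)]
replication_ss = sum(R ** 2 for R in rep_totals) / t - CF
treatment_ss = sum(sum(xs) ** 2 for xs in data.values()) / r - CF
error_ss = total_ss - replication_ss - treatment_ss

treatment_ms = treatment_ss / (t - 1)                 # step 5
error_ms = error_ss / ((r - 1) * (t - 1))
F = treatment_ms / error_ms                           # step 6
cv = (error_ms ** 0.5) / (G / n) * 100                # step 8
print("F =", round(F, 2), " cv =", round(cv, 1), "%")
```

Compared with the CRD computation, the only change is that the replication SS is split out of the error SS, which is how blocking removes block-to-block variation from the experimental error.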
2.2.4. Block Efficiency
Blocking maximizes the difference among blocks, leaving the difference among plots of the
same block as small as possible. Thus, the result of every RCB experiment should be
examined to see how this objective has been achieved.
