Lecture 1 Definitions & Terminologies in Experimental Design

This document defines key terminology used in the design of experiments (DOE). It discusses factors, levels, response variables, effects, experimental units, treatments, blocking, randomization, and placebos. It also defines concepts like confounding, validity, and types of studies like equivalence and non-inferiority trials. The overall purpose of the document is to introduce students to fundamental concepts and vocabulary used in the statistical design and analysis of experiments.


Lecture 1

Definitions and Terminologies

Prof. Bhaswat Chakraborty


Terminologies (1)
• Design of Experiments (DOE): A statistical approach consisting of a series of experiments, or
runs, in which we purposefully change the input variables and observe the corresponding
responses. In industry and in clinical trials, DOE is used to systematically
investigate product quality or the safety and efficacy of interventions.
• DOE in Statistics: In statistical terms, DOE means examining a process, intervention or product as
follows:
y = f(x)
where y is the dependent variable and x is the set of independent variables. Thus we obtain
information on the effects of many variables, and their potential interactions, at the same time.
• Factors: Independent variables that are controlled and varied during the course of the
experiment and that affect the response. For example, treatment is a factor in a clinical trial
with experimental units randomized to treatment; pressure and
temperature are factors in a chemical experiment.
• Factors are also known as input variables, predictor variables or explanatory variables.
Terminologies (1a)
• Level: A level is a specific setting or value of a factor used during an
experiment, e.g., high and low, or +1 and -1.
• Response Variable: The output variable that records the observed result of an
experimental treatment. Response variables are the quantities we want to optimize;
the response is also known as the dependent or y variable. There may be multiple
response variables in an experimental design.
• Effect: An effect is a relationship between a factor and a response variable (multiple
factors and multiple responses are possible in an experiment). Effects include
main, interaction and dispersion effects, of which we are mainly interested in the first
two.
• Observed Value: A particular value of the response variable observed in an
experiment.
• Noise Factor: An independent variable that cannot be controlled. Noise factors are
nevertheless often included in an experiment to broaden the conclusions about the controlled factors.
Terminologies (2)
• Experimental unit: The smallest entity about which a researcher wants to make inferences
(in the population) based on the sample. The experimental unit (an animal or human subject) is
randomized to a treatment regimen and receives the treatment directly.
• Observational unit: The entity on which the measurements are taken.
• In most pre- or clinical trials, the experimental units and the observational units are
one and the same, namely, the individual patient or animal.
• Treatment: A treatment is a specific setting or a combination of factor levels for an
experimental unit.
• Design Space: A design space is a multidimensional region of potential treatment
combinations formed by the selected factors and their levels.
• Experimental Error: Variability in the response that cannot be accounted for by the
known factors.
• One-Way Designs: Most clinical trials are structured as one-way designs, i.e., only one
factor, treatment, with a few levels.
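The notions of factor, level, treatment and design space can be illustrated with a short sketch (the factor names and level values here are illustrative, not from the lecture):

```python
from itertools import product

# Two controlled factors with their chosen levels (hypothetical values).
factors = {
    "temperature": [150, 200],        # two levels, e.g., coded -1 / +1
    "pressure": [1.0, 2.0, 3.0],      # three levels
}

# The design space is the set of all factor-level combinations;
# each combination is one possible treatment.
design_space = list(product(*factors.values()))

for treatment in design_space:
    print(dict(zip(factors.keys(), treatment)))

# A full two-way factorial design would run all 2 x 3 = 6 treatments.
print(len(design_space))  # 6
```

A one-way design, by contrast, would have a single factor (treatment) whose levels are the treatment arms.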
Terminologies (3)
• Two-way Factorial Design: Some clinical trials may have a two-way factorial
design, such as in oncology where various combinations of doses of two
chemotherapeutic agents comprise the treatments.
• Incomplete Factorial Design: An incomplete factorial design may be useful when it is inappropriate to
assign subjects to some of the possible treatment combinations, such as no
treatment at all (double placebo). We will study factorial designs in a later lesson.
• Crossover Design: A crossover design is a repeated measurements design such
that each experimental unit (patient) receives different treatments during the
different time periods, i.e., the patients cross over from one treatment to another
during the course of the trial.
• Parallel Design: A parallel design refers to a study in which patients are
randomized to a treatment and remain on that treatment throughout the course
of the trial. This typical design is often contrasted with the crossover design.
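The difference between the two designs can be sketched as per-patient treatment schedules (a minimal 2x2 crossover sketch; the sequence labels are illustrative):

```python
# 2x2 crossover: each patient receives both treatments, in one of two
# sequences (AB or BA), across two time periods.
crossover_sequences = {"AB": ["A", "B"], "BA": ["B", "A"]}

def crossover_schedule(sequence):
    """Treatments received in period 1 and period 2."""
    return crossover_sequences[sequence]

# Parallel design: each patient stays on one treatment for the whole trial.
def parallel_schedule(arm, n_periods=2):
    return [arm] * n_periods

print(crossover_schedule("AB"))  # ['A', 'B'] -- patient crosses over
print(parallel_schedule("A"))    # ['A', 'A'] -- patient stays on A
```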
Terminologies (4)
• Randomization: Randomization is a process of randomly assigning experimental
subjects to one of the treatment groups. It is used to remove systematic error
(bias) and to justify Type I error probabilities in experiments. Randomization is
recognized as an essential feature of clinical trials for removing selection bias.
• Selection bias: occurs when a physician decides the treatment assignment and
systematically selects a certain type of patient for a particular treatment.
Suppose the trial consists of an experimental therapy and a placebo. If the trialist
assigns healthier patients to the experimental therapy and the less healthy
patients to the placebo, the study could result in an invalid conclusion that the
experimental therapy is very effective.
• Blocking and stratification: are used to control unwanted variation. For example,
suppose a clinical trial is structured to compare treatments A and B in patients
between the ages of 18 and 65. Suppose that the younger patients tend to be
healthier. It would be prudent to account for this in the design by stratifying with
respect to age. One way to achieve this is to construct age groups of 18-30, 31-50,
and 51-65 and to randomize patients to treatment within each age group.
Age       Treatment A   Treatment B
18-30         12            13
31-50         23            23
51-65          6             7

• It is not necessary to have the same number of patients within each age stratum.
We do, however, want to have a balance in the number on each treatment within
each age group.
• Blocking is a restriction of the randomization process that results in a balanced
number of patients on each treatment after a prescribed number of
randomizations. E.g., blocks of 4 within these age strata would mean that after 4, 8,
12, etc. patients in a particular age group had entered the study, the numbers
assigned to each treatment within that stratum would be equal.
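The stratified, blocked scheme described above can be sketched as permuted-block randomization (block size 4, two treatments; the strata are the age groups from the example, and the patient counts are illustrative):

```python
import random

def permuted_block(treatments=("A", "B"), block_size=4):
    """One randomly permuted block with equal numbers of each treatment."""
    assert block_size % len(treatments) == 0
    block = list(treatments) * (block_size // len(treatments))
    random.shuffle(block)
    return block

def stratified_assignments(stratum_sizes, block_size=4, seed=0):
    """Assign patients within each stratum using permuted blocks."""
    random.seed(seed)
    assignments = {}
    for stratum, n in stratum_sizes.items():
        seq = []
        while len(seq) < n:
            seq.extend(permuted_block(block_size=block_size))
        assignments[stratum] = seq[:n]
    return assignments

plan = stratified_assignments({"18-30": 8, "31-50": 12, "51-65": 4})
# After every 4 randomizations within a stratum, A and B counts are equal.
print(plan["51-65"].count("A"))  # 2
```

Note that balance is guaranteed only at block boundaries: within a partially filled block, the counts may differ by up to half the block size.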
Terminologies (5)
• Placebo: A placebo is anything that seems to be a "real" medical treatment but isn't.
A placebo does not contain an active substance meant to affect health. Placebos are not
ethically acceptable in many situations, e.g., in surgical trials (although there have
been instances where 'sham' surgical procedures took place as the 'placebo' control).
• Placebo Effect: Even ineffective treatments can appear beneficial in some patients.
This may be due to random fluctuations or variability in the disease. If, however, the
improvement is due to the patient's expectation of a positive response, it is called the
placebo effect. This can be problematic when the outcome is subjective, such as a pain
or symptom assessment.
• Treatment masking or blinding: is an effective way to ensure objectivity of the person
measuring the outcome variables. Masking is especially important when the
measurements are subjective or based on self-assessment. Double-masked
trials refer to studies in which both investigators and patients are masked to the
treatment. Single-masked trials refer to the situation when only patients are masked.
In some studies, statisticians are masked to treatment assignment when performing
the initial statistical analyses.
Terminologies (6)
• Confounding is the effect of other relevant factors on the outcome that may be incorrectly
attributed to the difference between study groups. E.g., an investigator plans to assign 10
patients to treatment and 10 patients to control with a 1 week follow-up on each patient.
The first 10 patients will be assigned treatment on March 01 and the next 10 patients will be
assigned control on March 15. The investigator may observe a significant difference between
treatment and control, but is it due to different environmental conditions between early
March and mid-March?
• The obvious way to correct this would be to randomize 5 patients to treatment and 5
patients to control on March 01, followed by another 5 patients to treatment and the 5
patients to control on March 15.
• Equivalence and Non-inferiority Studies: These have different objectives from the usual trial, which
is designed to demonstrate superiority of a new treatment over a control. A study to
demonstrate non-inferiority aims to show that the new treatment is not worse than an
accepted treatment, in terms of the primary response variable, by more than a pre-specified
margin. A study to demonstrate equivalence has the objective of demonstrating that the
response to the new treatment is within a pre-specified margin in both directions. We will
learn more about these studies when we explore sample size calculations.
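The margin logic can be sketched numerically. Assume the primary response is summarized as a difference (new minus reference) with a two-sided confidence interval; the CI and margin values below are hypothetical:

```python
def is_noninferior(ci_lower, margin):
    """Non-inferiority: the lower CI bound for (new - reference) must lie
    above -margin, i.e., new is not worse by more than the margin."""
    return ci_lower > -margin

def is_equivalent(ci_lower, ci_upper, margin):
    """Equivalence: the entire CI must lie within (-margin, +margin)."""
    return -margin < ci_lower and ci_upper < margin

# Hypothetical 95% CI for the treatment difference: (-1.2, 0.8), margin 2.0
print(is_noninferior(-1.2, 2.0))      # True
print(is_equivalent(-1.2, 0.8, 2.0))  # True
# A wider CI of (-2.5, 0.5) would fail both criteria with the same margin.
print(is_noninferior(-2.5, 2.0))      # False
```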
Terminologies (7)
• Validity of an Experiment: A trial is said to possess internal validity if the observed
difference in outcome between the study groups is real and not due to bias, chance, or
confounding. Randomized, placebo-controlled, double-blinded clinical trials have high
levels of internal validity.
• External validity in a human trial refers to how well study results can be generalized to a
broader population. External validity is irrelevant if internal validity is low. External
validity in randomized clinical trials is enhanced by using broad eligibility criteria when
recruiting patients.
• Large simple and pragmatic trials emphasize external validity. A large simple trial
attempts to discover small advantages of a treatment that is expected to be used in a
large population. Large numbers of subjects are enrolled in a study with simplified design
and management.
• There is an implicit assumption that the treatment effect is similar for all subjects with
the simplified data collection. In a similar vein, a pragmatic trial emphasizes the effect of
a treatment in practices outside academic medical centers and involves a broad range of
clinical practices.
QUESTIONS?
