
QUANTITATIVE RESEARCH DESIGNS

There are four main types of quantitative research: Descriptive, Correlational, Causal-Comparative/Quasi-Experimental, and [True] Experimental Research.

DESCRIPTIVE RESEARCH

DEFINITION:
Aims to describe the characteristics, behaviors, or conditions of a phenomenon without manipulating variables. It involves observing, recording, and presenting data in a comprehensive manner to provide a detailed account of existing conditions.

FEATURES:
• Observation and Description: The primary goal is to observe, record, and describe the characteristics, behaviors, or conditions of a phenomenon.
• No Manipulation: Variables are not manipulated, and there is no attempt to establish cause-and-effect relationships. The goal is to provide a comprehensive account of existing conditions.
• Holistic Perspective: It provides a holistic perspective on the subject, aiming to present a comprehensive overview.

DATA ANALYSIS TOOLS:
• Descriptive Statistics: Measures of central tendency (mean, median, mode) and measures of dispersion (range, standard deviation) are commonly used.
• Frequency Distributions: Representing the distribution of scores using tables, graphs, or charts.
• Graphs and Charts: Histograms, bar charts, and pie charts for visual representation of data.

EXAMPLE RESEARCH TITLES:
• "An In-Depth Analysis of Microbial Diversity in Arctic Soil Ecosystems"
• "Characterizing the Microbial Community Structure in Antarctic Ice Cores: A Descriptive Analysis"
• "Profiling Genetic Variations in a Population of Endangered Orchids"

CORRELATIONAL RESEARCH

DEFINITION:
Aims to quantify the degree and direction of associations between variables to identify patterns or trends. It focuses on examining the relationships between two or more variables without manipulating them.

FEATURES:
• Relationship Exploration: The primary aim is to explore and measure the relationships or associations between two or more variables.
• No Causation Inference: Correlational studies do not establish causation; they only assess the strength and direction of associations.
• Quantification of Associations: Statistical measures, such as correlation coefficients, are used to quantify the degree of association between variables.

DATA ANALYSIS TOOLS:
• Correlation Coefficients: Pearson's correlation coefficient (r) or Spearman's rank correlation coefficient for quantifying the strength and direction of relationships.
• Scatter Plots: Visual representation of the relationship between two variables.
• Regression Analysis: Assessing the predictive power of one variable on another.

EXAMPLE RESEARCH TITLES:
• "Examining the Relationship between Technology Usage and Academic Performance in STEM Majors"
• "Correlating Environmental Factors with Species Diversity in Coral Reefs"
• "Investigating the Link between Brain Activity Patterns and Mathematical Problem-Solving Skills"

CAUSAL-COMPARATIVE/QUASI-EXPERIMENTAL RESEARCH

DEFINITION:
Attempts to establish cause-and-effect relationships among variables but lacks full control over them. In this design, the researcher manipulates an independent variable and measures its effects on a dependent variable, but due to practical or ethical constraints, true randomization and control may be limited.

FEATURES:
• Limited Experimental Control: Lacks full control over variables compared to true experimental designs.
• Comparison without Randomization: Involves manipulating an independent variable and measuring its effects, but without true randomization.
• Real-World Context: Often applied in real-world settings where strict experimental controls are challenging, impractical, unethical, or impossible.

DATA ANALYSIS TOOLS:
• Statistical Control: Employing statistical techniques to control for confounding variables, such as analysis of covariance (ANCOVA) or matching.
• Inferential Statistics: T-tests, Analysis of Variance (ANOVA), or Analysis of Covariance (ANCOVA) for assessing the significance of differences between groups.
• Post Hoc Tests: Conducted after ANOVA to identify specific group differences.
• Factorial Analysis: Assessing the impact of multiple independent variables.

EXAMPLE RESEARCH TITLES:
• "A Quasi-Experimental Study on the Effects of Virtual Reality Training on Surgical Skill Acquisition"
• "Evaluating the Impact of a STEM Outreach Program in Urban Schools: A Quasi-Experimental Approach"
• "Assessing the Influence of a New Teaching Method on Student Engagement: A Quasi-Experimental Design"

[TRUE] EXPERIMENTAL RESEARCH

DEFINITION:
Aims to establish cause-and-effect relationships and often involves random assignment of participants to control and experimental group/s. In this design, the researcher manipulates one or more independent variables to observe their effect on a dependent variable.

FEATURES:
• Manipulation of Variables: Involves actively manipulating one or more independent variables to observe their impact on a dependent variable.
• Random Assignment: Participants/samples are randomly assigned to control and experimental groups to control for potential confounding variables.
• Causation Inference: Experimental designs are intended to establish cause-and-effect relationships between variables.

DATA ANALYSIS TOOLS:
• Inferential Statistics: T-tests, Analysis of Variance (ANOVA), or Analysis of Covariance (ANCOVA) for assessing the significance of differences between groups.
• Post Hoc Tests: Conducted after ANOVA to identify specific group differences.
• Factorial Analysis: Assessing the impact of multiple independent variables.

EXAMPLE RESEARCH TITLES:
• "Impact of Nanoparticles on Plant Growth"
• "The Effects of Different Fertilizer Compositions on Crop Yield"
• "An Experimental Investigation into the Impact of Sleep Deprivation on Cognitive Performance"
Sampling and Replication in Experimental Research:

SAMPLING

DEFINITION:
The process of selecting a representative group of participants or units from a larger population for inclusion in an experiment.

PURPOSE:
• Representativeness: To ensure that the selected sample accurately reflects the characteristics of the larger population.
• Generalizability: Enhances the ability to generalize the study's findings to the broader population.

RANDOMIZATION:
• Random Assignment: In experimental designs, participants/units are randomly assigned to different experimental conditions or groups (see the short sketch at the end of this section).

REPLICATION

Replication is one of the key ways scientists build confidence in the scientific merit of results.

Triplicates: In general, a research plan entails three replicates so that the results obtained from them can be verified. Thus, the relative differences of data from the three replicates can be measured and compared.

REASONS FOR REPLICATION:
• Reducing Variability: Replicates help account for variability in experimental outcomes, allowing for a more accurate estimation of true effects.
• Enhancing Precision: Replication increases the precision of measurements and strengthens the robustness of findings.
• Statistical Rigor: Enables the use of statistical tests to assess the significance of observed effects.
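The random-assignment step described above can be sketched in a few lines of code. This is a minimal illustration only: the participant IDs, group labels, and sample size are hypothetical, and a real study would document its randomization procedure in full.

```python
import random

# Hypothetical participant IDs (illustrative only)
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)  # randomize the order of participants

# Split the shuffled list evenly into a control group and an experimental group
midpoint = len(participants) // 2
control_group = participants[:midpoint]
experimental_group = participants[midpoint:]

print("Control group:", control_group)
print("Experimental group:", experimental_group)
```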

WHY TRIPLICATE?
• Statistical Power:
Replication increases statistical power. With three repetitions, researchers can calculate measures of central tendency (such as the mean) more reliably and perform statistical tests with greater confidence.

• Variability Assessment:
Three replicates allow for a better assessment of variability within the data. Researchers can estimate the variance and assess the consistency of results, helping to distinguish between true effects and random variability (a short numeric sketch follows at the end of this list).

• Outlier Detection:
Triplicates provide a means to identify outliers or anomalies in the data. If one result significantly deviates from the others, researchers can investigate the cause
of this discrepancy.

• Practicality and Efficiency:
Three replicates strike a practical balance: while increasing the number of replicates generally improves reliability, triplicates already provide sufficient replication for statistical purposes without becoming overly burdensome in terms of time, cost, or logistical complexity.

• Tradition and Common Practice:
The use of triplicates has become a common practice in experimental research. It is a practical standard that has been established over time and is widely accepted in various scientific disciplines.
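As a concrete illustration of the points above, the short sketch below summarizes a set of triplicate measurements. The values are made up and only show how the mean and standard deviation expose variability and possible outliers.

```python
import statistics

# Hypothetical triplicate measurements from one experimental condition
replicates = [4.8, 5.1, 4.9]

mean = statistics.mean(replicates)    # measure of central tendency
stdev = statistics.stdev(replicates)  # sample standard deviation (variability)

print(f"Mean = {mean:.2f}, SD = {stdev:.2f}")

# If one replicate deviated sharply (e.g., 9.7 instead of 4.9), the standard
# deviation would jump, flagging a possible outlier worth investigating.
```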
Experimental Design – Use of Control Variables:

POSITIVE CONTROL

DEFINITION:
A positive control is an experimental treatment or condition that is expected to produce a known response.

PURPOSE:
It helps validate the experimental setup and confirms that the system is capable of responding as expected.

IMPORTANCE:
• Validation of Experimental Setup: Positive controls are crucial for confirming that the experimental system is functioning as expected. If the positive control fails to produce the anticipated outcome, it may indicate issues with the experimental protocol or equipment.
• Quality Assurance: Positive controls act as a quality assurance measure, helping researchers identify and address any sources of variability or error in the experimental procedure.
• Benchmark for Comparison: Positive controls provide a benchmark against which the experimental results can be compared. They help researchers distinguish between meaningful effects and artifacts that might arise from experimental conditions.

NEGATIVE CONTROL

DEFINITION:
A negative control is an experimental treatment or condition where no response is expected.

PURPOSE:
It serves to ensure that observed effects are specific to the experimental treatment and not due to other factors.

IMPORTANCE:
• Baseline Measurement: Negative controls establish a baseline or reference point for comparison. They help researchers assess background noise or variability in the absence of the experimental treatment.
• Detection of Contamination or Interference: Negative controls are essential for detecting any contamination or interference that might impact the experimental results. They reveal whether observed effects are due to the experimental treatment or external factors.
• Confirmation of Specificity: Negative controls confirm the specificity of the experimental conditions by showing that the observed effects are not merely a result of the experimental procedure or background factors.

EXAMPLES:

Biology
Context: In a cell culture experiment investigating the effects of a new growth factor on cell proliferation.
• Positive control: Treating cells with a well-established growth factor known to induce rapid cell division.
• Negative control: Cells treated with a solution without any growth factor (e.g., distilled water).

Physical Science
Context: In an experiment examining the efficiency of a new solar panel design.
• Positive control: Using a standard, high-efficiency solar panel with known performance characteristics; this establishes a benchmark for optimal solar energy conversion.
• Negative control: Exposing the experimental setup to darkness or covering the solar panels to ensure that any changes in energy production are not due to external factors like ambient light variations.

Chemistry
Context: In a chemical reaction study investigating the effectiveness of a catalyst.
• Positive control: Using a well-established catalyst with known reaction kinetics; this ensures that the experimental conditions are conducive to detecting catalytic effects.
• Negative control: A reaction conducted without any catalyst to establish the baseline rate of the reaction in the absence of catalytic influence.
Inferential statistics play a crucial role in research, enabling researchers to make
inferences about populations based on sample data. This lecture explores various
inferential statistical tools, each serving specific purposes in hypothesis testing and
drawing conclusions.

COMMONLY USED STATISTICAL TOOLS (INFERENTIAL)

1. T-Tests
a. Independent Samples T-Test
Used to compare means between two independent groups. For example, researchers
might apply this test to assess if there is a significant difference in exam scores
between students who received different teaching methods.

b. Paired Samples T-Test
Compares means between two related groups. In a clinical trial, researchers may use this test to determine if there is a significant difference in blood pressure before and after a treatment.
c. One-Sample T-Test
Used to compare the mean of a sample to a known value. For instance, in market
research, a one-sample T-test might be employed to assess if the average customer
satisfaction rating differs significantly from a predefined benchmark.
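All three variants above can be run with SciPy. The sketch below is a minimal illustration only: the scores, blood pressures, ratings, and the benchmark value of 4.0 are invented data chosen to mirror the examples in this section.

```python
from scipy import stats

# Independent samples: exam scores under two teaching methods (hypothetical data)
method_a = [78, 85, 90, 72, 88, 81]
method_b = [70, 75, 80, 68, 77, 73]
t_ind, p_ind = stats.ttest_ind(method_a, method_b)

# Paired samples: blood pressure before and after a treatment (hypothetical data)
before = [140, 135, 150, 145, 138]
after = [132, 130, 141, 139, 133]
t_rel, p_rel = stats.ttest_rel(before, after)

# One sample: customer satisfaction ratings vs. a benchmark of 4.0 (hypothetical)
ratings = [4.2, 3.9, 4.5, 4.1, 3.8, 4.3]
t_one, p_one = stats.ttest_1samp(ratings, popmean=4.0)

# Each p-value is then compared with the chosen significance level (e.g., 0.05)
print(p_ind, p_rel, p_one)
```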

2. Analysis of Variance (ANOVA)
Extending beyond T-Tests, ANOVA assesses mean differences among three or more groups.

a. One-Way ANOVA
Used when comparing means across a single independent variable. In agriculture
research, for example, one-way ANOVA could be employed to analyze crop yields
under different fertilizer conditions.

b. Two-Way ANOVA
Considers the impact of two independent variables on a dependent variable. In
psychology, researchers might use two-way ANOVA to study the effects of both age
and gender on memory performance.
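As a small illustration of the one-way case, the sketch below applies SciPy's f_oneway to made-up crop yields under three hypothetical fertilizer conditions (the two-way case needs additional tooling and is not shown here).

```python
from scipy import stats

# Hypothetical crop yields under three fertilizer conditions
fertilizer_a = [20.1, 22.3, 19.8, 21.5]
fertilizer_b = [24.0, 25.2, 23.7, 24.8]
fertilizer_c = [18.5, 19.0, 17.8, 18.9]

f_stat, p_value = stats.f_oneway(fertilizer_a, fertilizer_b, fertilizer_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# If p < 0.05, at least one fertilizer condition has a different mean yield;
# a post hoc test would then identify which specific groups differ.
```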

3. Regression Analysis

Regression analysis explores relationships between a dependent variable and one or more independent variables.
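For the simple case of one independent variable, a minimal sketch using SciPy's linregress is shown below; the study-hours and exam-score values are invented purely for illustration.

```python
from scipy import stats

study_hours = [1, 2, 3, 4, 5, 6]        # hypothetical independent variable
exam_scores = [55, 60, 64, 70, 73, 80]  # hypothetical dependent variable

result = stats.linregress(study_hours, exam_scores)

print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")

# The slope estimates how much the exam score changes per additional study hour.
```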
HYPOTHESIS TESTING USING T-TEST AND ANOVA

T-TEST

Scenario: Investigating whether there is a significant difference in the mean scores of two groups of students who received different teaching methods.

Hypotheses:
• Null Hypothesis (H0): There is no significant difference between the means of the two groups.
• Alternative Hypothesis (Ha): There is a significant difference between the means.

Significance Level (α): The 5 percent level of significance, that is, α = 0.05, has become the most common in practice.

Conduct the T-Test:
• Obtain the T-statistic and degrees of freedom.
• COMPARE the calculated p-value with the significance level (α).
• REJECT THE NULL HYPOTHESIS IF: p-value < α
Example: 0.020 < 0.05. Since the p-value is LESS THAN α, the null hypothesis is rejected in favor of the alternative hypothesis.
• RETAIN THE NULL HYPOTHESIS IF: p-value > α
Example: 0.084 > 0.05. Since the p-value is MORE THAN α, the null hypothesis is retained.

Interpretation (when the null hypothesis is rejected): There is evidence of a significant difference in mean scores between the two teaching methods.

ANOVA

Scenario: Assessing whether there are significant differences in the mean performance scores of three different training programs.

Hypotheses:
• Null Hypothesis (H0): There is no significant difference in mean scores across the three programs.
• Alternative Hypothesis (Ha): At least one program has a different mean score.

Significance Level (α): The 5 percent level of significance, that is, α = 0.05, has become the most common in practice.

Conduct the ANOVA:
• Obtain the F-statistic and degrees of freedom.
• COMPARE the calculated p-value with the significance level (α).
• REJECT THE NULL HYPOTHESIS IF: p-value < α
Example: 0.038 < 0.05. Since the p-value is LESS THAN α, the null hypothesis is rejected in favor of the alternative hypothesis.
• RETAIN THE NULL HYPOTHESIS IF: p-value > α
Example: 0.072 > 0.05. Since the p-value is MORE THAN α, the null hypothesis is retained.

Interpretation (when the null hypothesis is rejected): There is evidence of a significant difference in mean performance scores among the three training programs.
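The reject/retain rule above is mechanical, so it can be expressed in a few lines of code. The p-values below are the example values quoted in this section (0.020, 0.084, 0.038, 0.072); the helper function name is purely illustrative.

```python
ALPHA = 0.05  # the 5 percent significance level used throughout this section

def decide(p_value: float, alpha: float = ALPHA) -> str:
    """Reject the null hypothesis if the p-value is less than alpha; otherwise retain it."""
    return "reject H0" if p_value < alpha else "retain H0"

# Example p-values quoted in the notes above
for p in (0.020, 0.084, 0.038, 0.072):
    print(f"p = {p:.3f} -> {decide(p)}")
```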

The Pearson correlation coefficient (r) measures the strength and direction of a linear relationship between two continuous variables.
It ranges from -1 to 1, where:

• Positive Correlation (r > 0): Indicates a direct or positive linear relationship. As one variable increases, the other tends to increase as well. A
correlation coefficient close to +1 suggests a strong positive relationship. Example: If r has a value of 0.67, then it has a positive correlation.

• Negative Correlation (r < 0): Indicates an inverse or negative linear relationship. As one variable increases, the other tends to decrease. A
correlation coefficient close to -1 suggests a strong negative relationship. Example: If r has a value of -0.28, then it has a negative correlation.

The STRENGTH of the correlation depends on the DISTANCE OF THE R VALUE FROM 0.
Example: Values near 1 or -1 suggest a strong linear relationship, while values closer to 0 indicate a weaker relationship.

EXAMPLE:

If the value of r is 0.69, it is considered to exhibit a strong relationship since it is closer to 1.

If the value of r is 0.11, it is considered to exhibit a weak relationship since it is closer to 0.

COMBINING THE DIRECTION AND STRENGTH:

If the value of r is -0.88: since it is closer to -1 than to 0, and it is less than 0 (a negative value), it can be interpreted to exhibit a STRONG NEGATIVE correlation.
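A minimal sketch of computing and labeling Pearson's r with SciPy is shown below. The data are made up, and the numeric cutoffs used for the strength labels (0.7 and 0.3) are a common rule of thumb rather than something specified in these notes.

```python
from scipy import stats

# Hypothetical paired observations: y tends to decrease as x increases
x = [2, 4, 5, 7, 9, 11]
y = [10, 9, 8, 6, 4, 3]

r, p_value = stats.pearsonr(x, y)

direction = "positive" if r > 0 else "negative"
strength = "strong" if abs(r) >= 0.7 else "moderate" if abs(r) >= 0.3 else "weak"

print(f"r = {r:.2f}: {strength} {direction} correlation (p = {p_value:.4f})")
# For example, an r of -0.88 would be read as a strong negative correlation.
```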
