
Slide 1: How to Understand the Introduction Section in Quantitative Research

Speaker Notes:

“Good morning everyone, and thank you for being here today. I’m Dr. May Chan Oo from the
College of Public Health Sciences at Chulalongkorn University.

In today’s session, we’ll be focusing on one of the most important parts of any quantitative
research paper — the Introduction section. This part sets the stage for the entire study, and if it’s
not done well, the rest of the paper may lose its impact.

Our goal today is to learn how to critically understand, analyze, and even write a strong
introduction section in quantitative research. Whether you are preparing to conduct your own
study or simply reviewing others’ work, having a clear grasp of this section will enhance your
research skills significantly.

So let’s begin by exploring what exactly makes a good introduction in a quantitative research
paper.”

Slide 2: Learning Objectives

Speaker Notes:

“Before we dive into the content, let me first walk you through the learning objectives of today’s
session. These objectives will guide what you should expect to take away from this lecture.

First, we’ll aim to understand the structure and purpose of the Introduction and Method
sections in a quantitative research paper. These two sections are foundational to any research
article and give us insight into the rationale, design, and justification for the study.

Second, we’ll learn to identify key components within both sections — for example, the
problem statement, research questions, hypotheses, variables, study design, and sampling
method. Being able to recognize these elements helps us assess the quality of a research article.

Lastly, we will critically evaluate the clarity, transparency, and scientific rigor of published
studies. This means asking: Is the methodology clearly written and reproducible? Are the
research questions logically connected to the methods? This skill is particularly important if you
are reading research to inform your own work or evidence-based practice.

By the end of today’s lecture, I hope you’ll feel more confident in reading, analyzing, and even
writing the introduction and methods sections of a quantitative research study.”
Slide 3: What is Quantitative Research?

Speaker Notes:

“Before we go deeper into the introduction section, it’s important to start with a clear
understanding of what we mean by quantitative research.

Quantitative research is a systematic investigation — this means that it follows a clear,
structured process. It involves collecting data that is quantifiable, meaning it can be measured
and expressed numerically.

This type of research commonly uses statistical, mathematical, or computational techniques
to analyze the data. The goal is to describe relationships, identify patterns, or make predictions
based on that numerical data.

In essence, we are using numbers to understand variables — for example, the relationship
between smoking and lung disease, or how age affects physical activity levels.

So keep in mind: quantitative research is focused on objectivity, measurement, and
generalizability. And the introduction section of a quantitative study should help us understand
why the researcher chose to use this kind of approach — and what they aim to measure.”

Slide 4: What is Critical Appraisal?

Speaker Notes:

“Now that we understand what quantitative research is, the next step is to learn how to critically
appraise it.

Critical appraisal is the process of carefully and systematically examining research to
determine how trustworthy it is. We look at whether the methods are valid, the results are
credible, and whether the study is relevant to our own research or practice.

In other words, critical appraisal is about not just accepting research at face value. Instead, we
evaluate its strengths and weaknesses, and judge whether we can rely on the findings in a
specific context.

One widely used tool is the Critical Appraisal Skills Programme, or CASP. This program
provides practical checklists to guide our evaluation. These checklists focus on three main areas:

 Validity: Was the research done well?


 Results: Are the findings meaningful and clearly presented?
 Relevance: Do the results apply to our setting or population?

As future researchers or public health professionals, developing critical appraisal skills will help
you make evidence-based decisions and avoid being misled by poorly conducted studies.”
Slide 5: Structure of a Scientific Paper (IMRaD Format)

Speaker Notes:

“In scientific writing, most journal articles — especially in health and medical sciences —
follow the IMRaD structure. This stands for Introduction, Methods, Results, and Discussion.

Let’s briefly look at each component:

 I – Introduction: This section tells us why the study was conducted. It usually includes a
brief literature review, identification of the research gap, a clear problem statement, and
the study's objectives or hypotheses. It helps readers understand the background and
importance of the study.
 M – Materials and Methods: Here, the researchers explain how the study was done.
This includes the study design, sampling methods, instruments used, and procedures. It
should be written in a way that’s transparent and replicable — so that another
researcher can repeat the study using the same approach.
 R – Results: This is what the researchers found. It includes the data, often in the form of
tables, figures, or descriptive statistics. It’s important that this section reports findings
objectively, without interpretation.
 D – Discussion: Finally, this section answers the question “What does it mean?” It
interprets the findings, connects them with existing literature, and discusses implications,
limitations, and suggestions for future research.

This format was formalized by the International Committee of Medical Journal Editors
(ICMJE), and is widely accepted because it creates a clear and logical flow for presenting
scientific evidence.

Throughout today’s lecture, we’ll focus mainly on the ‘I’ and ‘M’ sections, but it’s important to
understand how they fit into the bigger picture of the full IMRaD structure.”

Slide 6: Key Differences in IMRaD vs Non-IMRaD

Speaker Notes:

“Now let’s look at the differences between the IMRaD format and non-IMRaD formats,
because not all research articles follow the same structure.

Starting with structure, the IMRaD format has a rigid, four-part structure — Introduction,
Methods, Results, and Discussion. It is standardized and ideal for clear and scientific reporting.
On the other hand, non-IMRaD formats are more flexible and often thematic or narrative,
depending on the type of writing or publication.
In terms of clarity, IMRaD provides high clarity because of its structured flow. It’s easy for
readers to follow the logical progression of the study. In contrast, non-IMRaD formats can vary
in clarity — some may be well-written but harder to analyze critically because they don’t follow
a standard structure.

Organization is another major difference. IMRaD is highly structured and easy to navigate.
Non-IMRaD papers, such as narrative reviews or case studies, may lack that same level of
organization.

Finally, looking at types of studies, IMRaD is commonly used in primary studies like
observational or experimental studies, as well as secondary studies such as systematic reviews
and meta-analyses.

Meanwhile, non-IMRaD formats are typical for:

 Narrative literature reviews,


 Case studies — especially qualitative research,
 Toolkits and guidelines,
 Case reports like those published by the CDC,
 Editorials and letters to the editor.

So as readers or researchers, it’s important to recognize the format being used — because it
affects how we interpret and evaluate the content.”

Slide 7: Title page (speaker notes not included).

Slide 8: Introduction – Research Rationale

Speaker Notes:

“Now we’re going to look closely at what makes a strong Introduction in a quantitative research
paper. The ultimate goal of this section is to provide a clear, strong, and complete rationale for
the study.

Let’s break that down:

1. Research Problem or Issue – This answers the question: What is already known? It
must include:
o a clear and specific statement of the issue,
o a detailed description of the background,
o and an explanation of why this issue matters or is a priority.
2. Knowledge Gap – Here, the researcher should identify what is missing in the current
literature. This sets up the justification for doing the study.
3. Research Objectives – After the gap is established, the researcher must explain what the
study aims to achieve. This includes specific aims or hypotheses.
4. Expected Benefits – The introduction should also answer: Why is this research useful?
Whether it contributes to policy, practice, or future research, the value must be stated
clearly.
5. References and Supporting Evidence – The author must support claims with evidence:
o Are there enough references?
o Are they from credible and authoritative sources?
6. Overall Research Rationale – This is the big picture: Does the introduction bring all
these elements together in a coherent and convincing way? Is it both strong and
complete?
7. Clarity – Finally, the writing itself should be clear, well-structured, and easy to follow.

🔍 The Final Step is to make an overall judgment based on all these criteria. When you're
reviewing a paper — or writing your own — this checklist can help ensure the introduction
serves its purpose effectively.”

Slide 9: Research Rationale – Issue

Speaker Notes:

“In this slide, we focus on the first and most essential step in building a strong research
rationale — identifying and describing the issue or problem.

Let’s begin with part (a) Define the Issue or Problem: We start by clarifying what we already
know. The term issue refers to a general topic or area of concern that may not be harmful — for
example, promoting healthy behaviors. In contrast, a problem refers to a harmful or undesirable
condition — like increasing rates of diabetes or maternal mortality.

Now part (b) Describe the Issue or Problem in more detail. This includes several key elements:

1. Magnitude: This means describing how widespread the issue is. Use prevalence,
incidence, and geographic distribution — and try to frame it from global to local levels to
show its relevance.
2. Severity of Impact: Discuss how the issue affects people’s health — for example, in
terms of mortality, morbidity, or disability. Don’t forget the social and economic
burden — like reduced quality of life or increased healthcare costs.
3. Contributing Factors: These are the underlying causes — which may be biological,
behavioral, environmental, or structural. A good introduction links the issue to these
deeper determinants.
4. Existing Solutions: What interventions or policies already exist to deal with the
problem? This shows that the author is aware of current efforts.
5. Implementation Barriers: Finally, explain why these existing solutions are not enough.
Maybe there are funding limitations, access issues, or cultural resistance. These barriers
often justify why more research is needed.
Altogether, this section sets the stage and provides the why behind the study — it shows that the
problem is real, serious, and still unresolved.”

Slide 10: Research Rationale – Priority

Speaker Notes:

“This slide introduces a helpful tool for assessing how much of a priority a particular issue
should be for research.

Once we’ve defined and described the issue, we can use these five criteria to evaluate its
priority strength. The idea is simple: the more criteria are met, the higher the score, and the
stronger the justification for addressing this issue through research.

Let’s briefly go through the five criteria:

1. Large Magnitude of the Problem – Is this issue affecting a large number of people, or
spreading rapidly across different regions?
2. Effects are both severe and common – We’re not just looking for rare or minor effects.
Is the problem frequent and does it cause serious harm?
3. Causes or risk factors are severe and widespread – Are the underlying factors difficult
to control and affecting a large segment of the population?
4. Solutions are difficult to implement – If current strategies aren’t working or are too
complicated or expensive to apply broadly, this adds to the urgency.
5. Barriers to solutions are serious and common – Maybe there are cultural barriers,
funding issues, or health system limitations that stop existing solutions from being
effective.

Each time a criterion is met, you place a ✓ mark. The final priority score ranges from 0 to 5.
This simple scoring system helps you, as a researcher or reviewer, assess whether the topic is
truly worth investigating and allocating resources to.

You can also use this checklist during proposal development, peer review, or classroom
exercises to strengthen critical thinking around research planning.”

Slide 11: Research Rationale – Gap

Speaker Notes:

“After identifying the problem, the next step in building a strong research rationale is to identify
the research gap.
A research gap refers to what is not yet known, unclear, or insufficiently studied in the existing
literature. It can relate to any part of the issue — the magnitude, causes, solutions, or barriers
to a health problem.

Some common signs that a research gap exists include:

 ‘However, few studies have…’


 ‘Nevertheless, this issue remains poorly understood…’
 ‘Most evidence is outdated or conflicting…’
 ‘No studies have been conducted in this setting…’
 ‘Previous studies had small samples or short follow-up durations…’

These phrases usually signal that more research is needed, and they help justify why your study
is important.

We can distinguish two main types of gaps:

1. Research Gaps – These are related to gaps in knowledge. For example, maybe no one
has evaluated a specific intervention in rural populations. This kind of gap can be
addressed directly through new research studies, so it’s what we focus on when writing
a rationale.
2. System or Program Gaps – These refer to operational issues like staff shortages, poor
funding, or medicine stockouts. These are important, but they are not something
researchers can usually solve through a study. So when identifying gaps, be careful to
focus on those that can be addressed with research, not with policy or program changes
alone.

So, when writing your introduction, always aim to clearly define the knowledge gap your study
is filling — and explain why it matters.”

Slide 12: Research Rationale – Objective

Speaker Notes:

“Now that we've identified a research gap, the next step is to transform that gap into an
actionable research objective. This is a key part of building a strong rationale.

Your research objective should directly respond to the gap. It’s the bridge between what is
missing and what your study will do to address it.

Let’s look at some important notes:

Note 1:
 Research gaps and objectives must be clearly linked. Often, they even use similar
language — the difference is that the gap describes what’s missing, and the objective
states what the study will do in response.

Note 2:

 Objectives are typically stated in the Introduction, but sometimes they appear in the
early part of the Methodology section.
 If you can’t find an objective clearly stated, you can usually infer it from the research
question.

Note 3:

 This note reminds us how objectives, research questions, and hypotheses are
connected:
o The research question is just the objective turned into a question.
o The research hypothesis is the expected answer to that question. It may be a
directional hypothesis (e.g., Group A will have higher scores than Group B) or a
null hypothesis (e.g., there will be no difference).

Note 4:

 All these elements — objectives, questions, and hypotheses — should be stated using
consistent terms. That means they should clearly relate to each other, even if one is a
statement, one is a question, and one is a prediction. Inconsistency in language often
weakens the clarity of a research paper.

So in summary, your objective is the heart of your study — it tells us exactly what you aim to
find out, and it must logically follow from the gap you identified earlier.”

Slide 13: Research Rationale – Benefits

Speaker Notes:

“After defining the research problem, gap, and objective, the next part of a strong introduction is
describing the benefits of the study.

So let’s begin with point 4 – Benefits. Ask yourself: Are the potential benefits of this study
clearly described?
These could include:

 Impact on policy or public health practice,


 Contribution to future research, or
 Improvements in health outcomes or service delivery.
Being able to clearly state why the research matters helps readers understand the value of your
work — especially in applied fields like public health.

Then we move to point 5 – Evidence and References.

 Are the major claims in the introduction supported by sufficient references?


 And are these sources credible and authoritative — for example, from peer-reviewed
journals, government databases, or major organizations?

Weak or outdated references can undermine the rationale, so a strong introduction always
includes solid evidence.

Now, in point 6, we bring all these components together with a simple scoring rubric to assess
the strength of a research rationale. You can use this tool when reading articles, reviewing
proposals, or writing your own paper.

The criteria include:

 Priority strength (scored out of 5 based on the earlier checklist),


 A clearly defined research gap (1 point),
 An objective that aligns with the gap (1 point),
 Stated benefits of the study (1 point),
 And strong supporting evidence (2 points).

Add them up for a total score out of 10. The higher the score, the more complete and convincing
the rationale. This is a great checklist for ensuring your introduction meets scientific standards.”
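
As a brief illustration (not part of the original slides), the following minimal Python sketch tallies the 10-point rubric described above; the individual scores are hypothetical.

    # Hypothetical appraisal of one introduction using the rubric above
    rubric = {
        "priority_strength": 4,    # out of 5, from the Slide 10 checklist
        "clear_research_gap": 1,   # 1 point if the research gap is clearly defined
        "aligned_objective": 1,    # 1 point if the objective matches the gap
        "stated_benefits": 1,      # 1 point if the benefits of the study are stated
        "supporting_evidence": 2,  # up to 2 points for credible, sufficient references
    }
    total = sum(rubric.values())
    print(f"Rationale score: {total}/10")  # 9/10 for this hypothetical paper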

Slide 14: Introduction – Clear Text

Speaker Notes:

“Once we’ve structured the content of the introduction, the next thing we need to focus on is
clarity of writing. A well-written introduction is not just about having the right information —
it’s also about presenting that information in a way that’s clear, logical, and easy to read.

There are four key criteria for clarity that apply not only to the introduction, but also to the
methodology, results, and discussion sections of a research paper.

Let’s go through each of them:

7.1 Correct Grammar

 Sentences must follow a basic structure: subject, verb, and object.


 Also, maintain consistent verb tenses. For example, use past tense for describing what
was done in the methods, and present tense when stating facts or what is currently
known.

7.2 Use Simple and Clear Words

 Avoid technical jargon unless it’s absolutely necessary.


 Choose words that are familiar and clear to your audience. Overly complex or academic
language can make the introduction harder to understand — and even intimidating for
some readers.

7.3 Logical Timeline Flow

 Present events in the order that they occurred.


 For example, if one thing led to another, the writing should reflect that timeline. This
helps the reader follow the logical progression of the study background.

7.4 Logical Order: From Big to Small or Small to Big

 You can also organize content by scale.


 For instance, you might start with global statistics, then narrow down to regional or
country-level data — or do the opposite, depending on what you’re emphasizing.
 This is especially useful when describing disease trends or the burden of health problems.

In short, no matter how strong your content is, poor writing can weaken your message. So
always review your introduction for clarity, grammar, and logical flow.”

Slide 15: Introduction – Clear Text (Part 2)

Speaker Notes:

“This slide continues our focus on improving clarity in research writing. Here we look at a
practical strategy for writing clearly and logically, especially in the Introduction section.

We call it the ‘Say It – Say It All – Go to the Next’ strategy, and it helps avoid vague or
confusing writing.

Let me explain the three steps:

7.5.1 – Say It:

 This means clearly stating the main idea you’re introducing. Don’t be vague. Be direct
and to the point. Tell the reader what concept you’re going to explain.
7.5.2 – Say It All:

 Once you introduce the concept, explain it fully. Use the 5Ws and H — Who, What,
When, Where, Why, and How — to make sure your explanation is complete. This gives
your writing depth and makes it easier for readers to understand.

7.5.3 – Go to the Next:

 Once you’ve explained one idea completely, move on. Don’t jump around or revisit
unfinished thoughts. This keeps your text clean and logical.

We also want to avoid what we call 'Butterfly Writing'.

 This is when the writer jumps between topics too quickly — before finishing one idea.
 It confuses the reader and makes the writing feel unorganized and scattered.

And finally, here’s a small writing tip:

 Repetition is not always bad. In fact, sometimes it’s helpful — especially when you're
reinforcing complex ideas or ensuring clarity in educational texts. Just be intentional with
it.

Using this flow — Say it, Say it All, Go to the Next — is one of the best ways to make your
writing more professional and easier to follow.”

Slide 16: Introduction – Clear Text ‘Guidelines’

Speaker Notes:

“This slide provides some final writing tips for ensuring clarity in your introduction —
specifically how to place information where readers naturally expect to find it.

This approach meets the structural expectations of readers and makes your writing feel more
intuitive and easy to follow.

Let’s go over the three micro-strategies under 7.6:

7.6.1 Subject–Verb Placement

 Always keep the subject and verb close together in a sentence. Don’t insert too many
words between them — it breaks the flow and makes comprehension harder.
 For example, avoid: ‘The policy, which was developed after many consultations, aims
to…’
Instead, say: ‘The policy aims to…’
 The verb is the action, and it should be easy to find — readers naturally look for it soon
after the subject.

7.6.2 Topic at the Beginning

 The beginning of a sentence sets the focus. Start with the main concept, key person, or
idea.
 Readers expect you to say what the sentence is about right away.
 If you’re repeating old information for context, place it early so the rest of the sentence
can introduce new or more specific details.

7.6.3 Stress at the End

 The end of a sentence is where readers naturally expect emphasis. That’s where
important conclusions or takeaways should go.
 If you have two important points, place the stronger one at the end — a technique we
call ‘save the best for last’.
 This keeps your writing punchy and helps readers remember the most critical
information.

Together, these writing habits will help you organize your sentences in a way that matches how
people read — making your introduction clearer and more engaging.”

Slide 17: Introduction – Clear Text ‘Guidelines’ (Part 3)

Speaker Notes:

“This is the final part of our clear writing section — and it focuses on linking sentences
smoothly to avoid logical gaps in your writing. When ideas are not connected properly, the flow
breaks, and readers get confused or lost.

We’ll look at three key practices to help with this:

A. Start with Reappearing (Old) Information

 Begin a sentence with something that’s already been mentioned.


 This technique is called ‘backward-linking’ — it helps readers follow the logical flow
by grounding each new sentence in the context of the previous one.
 For example: ‘This method has proven effective. However, its application in rural
areas…’
 You're continuing the thread smoothly.

B. Use Sentence Connectors


 These are the bridges between your ideas.
 Words like however, for example, indeed, in contrast, and therefore guide the reader
through your reasoning and help signal what comes next.
 Don’t leave transitions up to guesswork — help your reader navigate.

C. Provide Context Before New Information

 When introducing new or complex ideas, start with a short sentence that provides some
context.
 For example: ‘In low-resource settings, access to antenatal care is limited.’ Then you can
go deeper into the implications or solutions.
 This allows readers to mentally prepare for unfamiliar material.

And the main takeaway here is at the bottom of the slide:

 Writers should aim to organize ideas clearly for readers.


 Remember, good writing is reader-centered, not writer-centered. That means we write
not just to express ourselves, but to help others understand what we mean.”

Slide 18: Introduction – Overall Assessment

Speaker Notes:

“As we come to the final section of this session, let’s reflect on how to assess whether an
introduction is well written.

Point 8 asks:
Is the introduction well written?
This is a simple but powerful question. You can use the answer choices: Yes, No, or So-so to
rate it. But more importantly, ask yourself: Why?

8.2 – A good introduction should demonstrate two main things:

1. A strong rationale – This includes clearly explaining:


o The issue or problem,
o Its magnitude and severity,
o Underlying causes,
o Current solutions and their limitations,
o Barriers to implementation,
o The research gap,
o And how the study will address that gap through clear objectives.
2. Clear and organized writing – This means the introduction is easy to follow, logically
structured, and free from distractions like butterfly writing, vague phrasing, or poor
grammar.
Point 9 introduces an activity: Peer Sharing
This is where students or participants can reflect and answer the question:
What will you include in your thesis introduction to make it strong and effective?
You can discuss your approach, what challenges you’ve faced, or how the checklist from this
lecture might guide your revisions.

Finally, a note for thesis writers:


The structure and content of a thesis introduction is very similar to that of a research article.
However, in a thesis, you also need to explicitly state:

 General and specific research objectives,


 Research questions,
 And hypotheses — if applicable.

So before closing, take a moment to reflect:


Have you covered all the essential elements?
Is your rationale clear and convincing?
And is your writing structured for your reader — not just for yourself?

These are the questions that will elevate your research writing.”

Slide 19: Title page (no speaker notes).

Slide 20: IMRaD – Methodology Appraisal

Speaker Notes:

“Now we move to the Methodology section of the IMRaD structure — and specifically, how to
critically appraise it.

The Methodology section should give us a clear, detailed explanation of how the study was
designed and conducted. Here’s what we need to evaluate:

1. Study Design

 What type of study is it? For example: cross-sectional, cohort, case-control, or randomized controlled trial (RCT).
 Is the choice of design clearly justified? The design should align with the research
question.

2. Data Collection Methods


 How was the data collected? Was it through interviews, surveys, medical records, or
another method?
 We’re checking for clarity and appropriateness — is the method suitable for the
research question?

3. Instruments or Tools Used and Their Quality


This is where we assess reliability and validity.

 3.1 Reliability:
o Internal consistency (e.g., Cronbach’s alpha for scales), and
o External test-retest reliability — whether repeated measurements produce
consistent results.
 3.2 Internal Validity:
o 3.2.a looks at different types of validity: construct, content, and face validity. Are
the tools measuring what they’re supposed to?
o 3.2.b involves evaluating how the study handled biases and confounding
factors. For example, did they control for age, gender, or other variables that
could affect results?
 3.3 External Validity (Generalizability):
o We ask: Can the findings be applied to other populations or settings?
o 3.3.a Was the sample size calculated? This is essential for statistical power.
o 3.3.b Was the sampling technique clearly described? For example, was it
random sampling, convenience sampling, or purposive?

We include checkbox questions here to help students or reviewers critically assess the quality of
a study's methods. If the answer is “no,” it’s important to ask: Why not? and How might that
affect the results or conclusions?

So, in summary, the Methodology section must provide enough detail for replication,
credibility, and transparency. A well-written methodology builds trust in the research
findings.”

Slide 21: IMRaD – Methodology Appraisal (Part 2)

Speaker Notes:

“Let’s continue our critical appraisal of the Methodology section by looking at additional
elements that ensure quality, transparency, and rigor in quantitative research.

4. Statistical Analysis
 Are the independent and dependent variables clearly defined?
 Are the statistical tests used appropriate for the type of data and research question?
 For example, using chi-square for categorical data or t-tests for comparing means.
 Also check: Are these tests justified with proper reasoning? A study should not just
mention statistical tests — it should explain why they were chosen.
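
As a brief illustration (not from the original slides), here is a minimal Python sketch, using SciPy, of the two tests named above; the counts and scores are invented for demonstration only.

    import numpy as np
    from scipy.stats import chi2_contingency, ttest_ind

    # Chi-square for categorical data: hypothetical 2x2 table of exposure vs. outcome
    table = np.array([[30, 70],    # exposed: with outcome / without outcome
                      [15, 85]])   # unexposed
    chi2, p_chi, dof, expected = chi2_contingency(table)

    # Independent-samples t-test for comparing means between two groups
    group_a = [72, 75, 78, 80, 74, 77]
    group_b = [68, 70, 73, 69, 71, 72]
    t_stat, p_t = ttest_ind(group_a, group_b)

    print(f"chi-square p = {p_chi:.3f}; t-test p = {p_t:.3f}")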

5. Ethical Considerations

 This is a critical component. Has the study mentioned ethical approval from a relevant
committee?
 Were participants asked for informed consent?
 In health-related research, skipping this information is a major flaw — it undermines the
trustworthiness of the study.

6. Clinical Trial Registration

 If the study is a clinical trial, has it been properly registered?


 For example, was it listed in a registry accepted by the ICMJE (like ClinicalTrials.gov)?
 Look for a valid 8-digit NCT number. This promotes transparency and allows the public
to track the study’s integrity and timeline.

7. Clarity of Writing

 Is the methodology written in a way that is easy to follow and logically structured?
 Even if the methods are technically sound, poor writing can make them hard to interpret
or replicate.
 Good writing reflects good thinking — and in methodology, clarity equals credibility.

8. Final Assessment

 Overall, ask: Does the methodology meet standards for rigor, transparency, and
reproducibility?
 Would another researcher be able to replicate the study based on what’s written?
 Does the study provide enough detail to support the validity and reliability of its
findings?

This checklist ensures that the methodology section is not only technically correct but also
ethically responsible and scientifically trustworthy.”
Slide 22: Methodology – 1. Research Design

Speaker Notes:

“Let’s now take a deeper look at the first critical component of the methodology section: the
research design.

A well-chosen and clearly explained research design is essential because it determines how the
study is conducted and whether it will adequately answer the research question.

1. What Type of Research Design Is Used?


The first question we ask when reviewing a study is: What type of design has the researcher
used?

Some common categories include:

 Experimental research, where the researcher controls variables and randomly assigns
participants to groups.
 Correlational designs, which look at the relationships between variables without
implying causation.
 Descriptive research, such as:
o Surveys — either cross-sectional or longitudinal,
o Case studies, which provide in-depth analysis of a single case,
o Observational studies, where researchers simply observe without intervention.
 Cohort studies, either retrospective (looking back in time) or prospective (looking
forward).
 Causal-comparative or quasi-experimental designs, which attempt to assess cause-
effect relationships but without full experimental control.

2. Is the Research Design Clearly Described?


Once the type is identified, we evaluate whether it’s clearly explained:

 Is the design named explicitly in the paper?


 Are important details provided — such as the time frame, whether there are comparison
or control groups, and what kind of intervention or exposure is being studied?
 Have the limitations of the design been acknowledged? For example, if it’s a cross-
sectional study, does the author mention the limitation of not establishing causality?

3. Is the Design Appropriate for the Study?


Finally, we ask: Is the chosen design suitable for answering the research question?
 Does it align with the research gap and objectives identified in the introduction?
 And if the study is testing a hypothesis, is this design strong enough to do that?

So, in summary, a high-quality methodology section not only identifies the design but also
justifies it, explains its structure clearly, and acknowledges its limitations. This builds the
reader’s confidence in the study’s findings.”

Slide 23: Hierarchy of Evidence

Speaker Notes:

“This slide presents the Hierarchy of Evidence, which helps us evaluate the strength and
reliability of different types of research designs. In evidence-based practice, not all studies are
created equal. The design of a study plays a critical role in determining how much confidence we
can place in its findings.

Let’s walk through the pyramid from bottom to top:

 At the base, we have Cross-sectional studies. These provide a snapshot of a population at one point in time. While useful for identifying patterns or associations, they cannot establish cause and effect, and they’re considered the weakest in terms of evidence strength.
 One level up are Case-control studies, which compare people with a condition to those
without. These are retrospective and can suggest associations but are still prone to bias.
 Next are Cohort studies — either prospective (looking forward) or retrospective
(looking back). These follow groups over time and provide stronger evidence, especially
when examining risk factors.
 Above that are Quasi-experimental designs, where researchers intervene but do not
randomly assign subjects. These are better at establishing causality than observational
studies but still have some limitations.
 At the top of the pyramid is the Randomized Controlled Trial (RCT). This is
considered the strongest design for establishing cause-effect relationships. Participants
are randomly assigned to treatment or control groups, reducing bias and confounding.

As you go up the pyramid, the quality of evidence increases — but so does the complexity,
time, and cost of conducting the study. That’s why it’s important to choose the most appropriate
design based on your research question, resources, and ethical considerations.

In critical appraisal, always ask yourself: Where does this study design fall on the evidence
pyramid? and Is the strength of the evidence appropriate for the conclusions being drawn?”

Slide 24: Methodology – Matching Study Objectives to Appropriate Research Designs


Speaker Notes:

“This slide helps us understand how to match the research objective with the most appropriate
study design — which is a critical skill when reviewing or designing studies.

Let’s look at two common research objectives and the designs that fit them best:

Objective 1: Evaluate the efficacy or effectiveness of health interventions or programs


Examples include:

 Testing new therapies or drugs,


 Evaluating health education programs,
 Assessing healthcare delivery models.

For this type of question, the goal is to determine causal impact. So the most appropriate designs
are:

1a. Randomized Controlled Trial (RCT)

 This is the gold standard.


 Participants are randomly assigned to intervention or control groups.
 This helps control for confounders and increases internal validity.

1b. Quasi-Experimental Design

 In cases where randomization isn’t possible, researchers may assign groups (e.g.,
schools, communities) to different conditions.
 These designs still test effects but are more feasible in real-world settings.

Objective 2: Describe or explore associations


This is where the goal is not to test interventions but to explore or describe patterns. Examples
include:

 Studying disease prevalence,


 Examining behaviors or health outcomes,
 Assessing diagnostic test performance,
 Exploring willingness to pay.

Appropriate designs include:

 Descriptive studies – which present distributions of variables (e.g., how common a condition is in different age groups).
 Association studies – which examine relationships between independent and dependent
variables (e.g., smoking and lung function).

In both cases, the most appropriate design is often a Cross-Sectional Survey:

 Data is collected at one point in time,


 It’s useful for both descriptive and correlational analyses,
 But note: it cannot establish causality.

In summary, always start with the research objective — then choose a design that best allows
you to answer the question accurately and ethically.”

Slide 25: Methodology – Matching Study Objectives to Appropriate Research Designs (Part 2)

Speaker Notes:

“Continuing from the previous slide, we now look at three more types of research objectives and
how to match each one to a suitable research design.

1. Study Objective: Prognosis and Risk Estimation


This type of study aims to estimate risk or predict outcomes — for example:

 Life expectancy,
 Risk of developing a disease,
 Incidence rates,
 Relative risk across populations.

The most appropriate design for this type of objective is an Observational Cohort Study:

 Participants are followed over time to observe what happens — for instance, whether
exposure to a certain risk factor leads to disease.
 These studies can be retrospective (based on past records) or prospective (with follow-
up over time).
 Cohort studies are essential for prognosis because they track individuals before the
outcome occurs.

2. Study Objective: Disease Causation or Associations


These studies aim to identify what causes a disease or to explore associated factors — for
example, looking at environmental exposures linked to a rare illness.
⚠️ It’s important to note that causation studies require more rigorous designs than studies that just show correlation.

The appropriate designs are:

 Observational Longitudinal Cohort Studies, which follow groups over time to establish exposure-outcome relationships.
 Or Case-Control (CC) Studies, which compare individuals with a condition to those
without to look for past exposures.

Both designs are strong for studying causality and uncovering risk factors, particularly in cases
where an experimental design is not ethical or feasible.

3. Study Objective: Clinical Observation or Case Illustration


Sometimes the goal is to describe new or rare clinical situations, not test a hypothesis.

Examples include:

 Reporting a new disease or syndrome,


 Documenting an unusual treatment reaction,
 Offering qualitative insight into a rare condition.

The best design here is a Case Report:

 It focuses on individual or small group cases.


 While not generalizable, case reports are valuable for generating hypotheses and early
identification of novel clinical trends.

So to summarize: Always match your study design to the specific objective of your research. A
mismatch — such as using a descriptive design for a causal question — weakens the validity and
impact of your findings.”

Slide 26: Methodology – 2. Methods – Are They Appropriate and Reliable?

Speaker Notes:

“In this part of methodology appraisal, we evaluate whether the methods used to collect data
are appropriate, valid, and reliable for answering the study’s research questions.

Let’s begin with the definition:


Research methods are the systematic procedures researchers use to gather data — grounded in
scientific logic and designed to minimize bias and error.
So, we start by asking:
What types of methods did the authors use?

Some common types include:

 Mixed-methods: This combines both quantitative (e.g., survey data, measurements) and
qualitative (e.g., interviews, narratives) approaches to give a more complete picture.
 Interviewing: Could be face-to-face, phone-based, or even self-administered. Interviews
may be structured, semi-structured, or in-depth — and can also include self-reporting or
proxy reporting (e.g., a parent answering on behalf of a child).
 Recording: Some studies include video, audio, or written records as data sources,
especially in qualitative research or behavioral analysis.
 Observation: Observational methods may be structured (following a checklist or
protocol) or unstructured (open-ended or naturalistic). These are often used in
behavioral or public health studies.
 Laboratory measurements: These include clinical, biomedical, or laboratory-based
indicators like blood pressure, lab tests, or diagnostic tools.

Now we ask the critical question:


Are the methods used appropriate, valid, and reliable?

That includes:

 Do the methods match the research aim?


For instance, if you're measuring knowledge, is a structured questionnaire the best tool?
 Are the methods clearly described and repeatable?
Can another researcher replicate the study based on the description?
 Are they scientifically justified?
Did the authors explain why they chose certain methods, and do those choices align with
best practices in the field?

This section is essential because even the best-designed study can be weakened by poor data
collection methods. Always evaluate the fit between the method and the research question —
and whether the process is transparent enough to ensure rigor.”

Slide 27: Methodology – 3. Instruments/Materials/Tools – Assessing the Instruments and Tools Used in the Study

Speaker Notes:
“In this section, we turn our attention to the tools and instruments used in the study. Even if a
research design and method are sound, the quality of data collection instruments can
significantly affect the reliability and validity of the results.

Let’s walk through three key appraisal questions:

1. What instruments or tools were used?


Start by identifying the types of tools the authors employed:

 These may include checklists, questionnaires, or structured interview guides.


 Interview methods can vary — from face-to-face and phone interviews to self-reports or
in-depth qualitative interviews.
 Other examples include diaries, lab tests, clinical scales, or standardized surveys.
 Additionally, audio/video recorders or electronic devices may have been used for
documentation or observation.

Knowing the tool type helps you assess whether it’s appropriate for the study’s aim.

2. Are they clearly and sufficiently described?


Once tools are identified, we ask:

 Have the sections or components of the tools been explained?


 Is the type of data being collected clear?
 For example, if using a questionnaire, did the authors mention the format — like Likert
scale, open-ended, or binary yes/no options?
 Without this level of detail, it’s difficult to assess how valid or reliable the findings really
are.

3. Are they of good quality?


There are four dimensions of tool quality we consider:

 3.1 Reliability:
o Does the tool consistently yield the same results under the same conditions?
o Some literature uses “reliability” synonymously with precision, which refers to
consistency across repeated measures.
 3.2 Validity:
o Is the tool measuring what it is supposed to measure?
o In some cases, “valid” is equated with “accurate” — meaning how close the result
is to the true value.
 3.3 Sensitivity (in lab-based tests):
o The tool’s ability to correctly identify true positives — that is, those who have
the condition or characteristic.
 3.4 Specificity:
o The ability to correctly identify true negatives — those who do not have the
condition.

These last two (sensitivity and specificity) are especially relevant in diagnostic or biomedical
studies.
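
To make those last two measures concrete, here is a minimal Python sketch based on a hypothetical 2x2 diagnostic table; the counts are invented.

    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity and specificity from a 2x2 table against a gold standard."""
        sensitivity = tp / (tp + fn)  # true positives among everyone who has the condition
        specificity = tn / (tn + fp)  # true negatives among everyone who does not
        return sensitivity, specificity

    # Hypothetical screening test: 90 true positives, 10 false negatives,
    # 160 true negatives, 40 false positives
    print(sensitivity_specificity(tp=90, fn=10, tn=160, fp=40))  # (0.9, 0.8)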

In summary, when appraising a study, always examine whether the instruments used are clearly
described, scientifically sound, and suitable for producing valid, reliable results.”

Slide 28: Methodology – 3. Instruments’ Quality – 3.1 Reliability

Speaker Notes:

“In this slide, we dive deeper into one of the core aspects of instrument quality — reliability.

What is reliability?
It refers to the consistency and stability of a measurement tool. In other words, a reliable tool
should yield the same result under the same conditions.

The importance of reliability is especially high in tools that use multiple items to measure a
single construct, like a knowledge test or an attitude scale.

There are two major types of reliability we commonly evaluate:

1. Internal Reliability (also called internal consistency)


This assesses whether the items within a tool are consistent with one another.

Here are some common methods used:

 Average Inter-item Reliability – Looks at how strongly each item correlates with the
others.
 Average Item-Total Correlation – Each item is correlated with the overall test score.
 Split-Half Reliability – The tool is split into two halves and the results are compared for
consistency.
 KR-20 (Kuder-Richardson Formula 20) – Used for dichotomous items like yes/no or
true/false questions.
 Cronbach’s Alpha (α) – One of the most widely used methods for tools that use Likert
scales or multiple-choice items. A commonly accepted benchmark for alpha is 0.7 or
higher.

All of these focus on whether the tool is internally stable and cohesive.
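
As a brief illustration of the most widely used of these, Cronbach’s alpha, here is a minimal Python sketch of the standard formula; the Likert-scale responses are hypothetical.

    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x items matrix of scores (one row per respondent)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        sum_item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - sum_item_var / total_var)

    # Hypothetical 5-item Likert scale answered by 6 respondents
    scores = [
        [4, 4, 5, 4, 4],
        [3, 3, 3, 2, 3],
        [5, 5, 4, 5, 5],
        [2, 2, 3, 2, 2],
        [4, 5, 4, 4, 4],
        [3, 3, 2, 3, 3],
    ]
    print(round(cronbach_alpha(scores), 2))  # 0.7 or higher is the benchmark cited above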

2. External Reliability
This refers to the consistency across raters or time — essentially, whether the results are
reproducible under varying conditions.

There are three key approaches here:

 Inter-Rater or Inter-Observer Reliability – Agreement between two or more raters/observers (often used in qualitative or observational research).
 Test-Retest Reliability – Measures the stability of results over time. If a test is
administered twice to the same person under the same conditions, the scores should be
similar.
 Parallel-Forms Reliability – Compares two equivalent versions of the same test to
ensure consistency.

So, when appraising a study, you should ask:

 Did the authors mention how they assessed reliability?


 What type of reliability was tested?
 Were the results acceptable?

Strong reliability boosts the trustworthiness of data and ensures that the findings are not due to
chance or inconsistencies in measurement.”

Slide 29: Methodology – 3. Instruments’ Quality – 3.2 Internal Validity

Speaker Notes:

“This slide focuses on internal validity, specifically how well a measurement tool actually
reflects the concept it's intended to measure. In simpler terms, internal validity asks: Are we
measuring what we think we’re measuring?

There are three main types of internal validity we should evaluate:


1. Construct Validity

This type of validity ensures that the tool truly captures the theoretical construct it claims to
measure.
For example, if a questionnaire is supposed to assess self-esteem, do the items really reflect the
concept of self-esteem?

Construct validity is achieved by:

 Consulting with experts,


 Relying on established theories and frameworks,
 Following research guidelines.

It’s about making sure that concepts and definitions match at every stage — from theory to
question wording.

2. Content Validity

Here, we ask: Do the individual items (questions) adequately represent all aspects of the
concept?

It focuses on item-level accuracy — making sure every component of the concept is measured.

How do we achieve content validity?

 By defining clear operational definitions — spelling out exactly what each item is
meant to capture.
 Using Item-Objective Congruence (IOC) scores. This is when three or more experts
rate each item:
o +1 = Congruent, meaning the item aligns with the objective,
o 0 = Questionable,
o -1 = Incongruent.
o Items with an average IOC score below 0.5 should be revised or removed. For example, if three experts rate an item +1, +1, and 0, its IOC is (1 + 1 + 0) / 3 ≈ 0.67, so the item would be kept.
 For translated instruments, back-translation ensures accuracy across languages.
 You can also cross-check responses with objective biological markers when available
— like validating self-reported drug use with lab tests.

3. Face Validity
This is the simplest form of validity. It asks: Does the tool appear to measure what it’s supposed
to?

 It’s based on common sense and surface-level judgment — not deep statistical
analysis.
 Achieved by pilot testing the tool with individuals similar to the study population and
collecting feedback on clarity and appropriateness.

Even though it’s the weakest form of validity, face validity is still important, especially for
participant acceptance and usability.

In summary, internal validity ensures that a tool is both scientifically sound and practically
meaningful.
We must examine construct, content, and face validity to ensure the quality of the instrument and
the accuracy of the findings.”

Slide 30: Minimizing Bias and Controlling Confounders

Speaker Notes:

“Now, let’s continue our discussion on internal validity by focusing on two important threats:
biases and confounders.

What is Internal Validity?

Internal validity refers to how accurately a study measures what it intends to measure, without
being distorted by other influences or variables.

Biases

Biases are systematic errors that lead to consistent deviation from the true value. They’re not
random — they tend to push the results in a particular direction.
Examples of common biases include:

 Selection bias: when the sample isn’t representative of the population,


 Recall bias: especially common in retrospective studies, where participants may not
remember accurately,
 Interviewer bias: when the interviewer’s expectations subtly influence responses.
Bias can make results look stronger or weaker than they actually are.
Most of the time, bias is unintentional, and it often stems from flawed study design or
measurement tools.

Covariates and Confounders

These are external variables that can influence the outcome — but they’re not errors.

 Covariates are variables that independently affect the outcome, such as age, income, or
education level.
 Confounders, on the other hand, are trickier. They influence both the exposure and the
outcome, making the relationship between them appear misleading.

Example:
Let’s say you're studying the effect of a drug on recovery rates.
If patients who are more health-conscious are more likely to receive the drug and also more
likely to recover quickly, then lifestyle becomes a confounder — it influences both the drug
assignment and the recovery outcome.

Controlling for Confounding

To minimize the impact of confounding, researchers must take preventive steps during the study
design and/or during data analysis:

 In study design: Use techniques like randomization, matching, or restriction.


 In data analysis: Apply statistical methods such as multivariable regression,
ANCOVA, or stratification.

Some researchers even consider confounding to be a type of bias, because it systematically distorts the observed relationship.
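
As an illustration of analysis-stage control (not from the original slides), here is a minimal Python sketch using statsmodels that adjusts the drug and recovery example above for the health-consciousness confounder; the data are invented.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented data for the drug/recovery example: health-consciousness influences
    # both who takes the drug and how quickly people recover.
    df = pd.DataFrame({
        "recovery_days":    [5, 7, 12, 6, 11, 6, 9, 13, 5, 8, 12, 7],
        "drug":             [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1],
        "health_conscious": [1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0],
    })

    # Multivariable regression: the 'drug' coefficient now reflects its association
    # with recovery time while holding health-consciousness constant, one common
    # way of controlling for a confounder at the analysis stage.
    adjusted = smf.ols("recovery_days ~ drug + health_conscious", data=df).fit()
    print(adjusted.params)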

In summary:
To protect internal validity, researchers must minimize both bias and confounding — because
even small distortions can significantly alter the conclusions we draw from a study.”

Slide 31: Biases in Study Designs – NRCTs vs. RCTs

Speaker Notes:
“In this slide, we’re going to look more closely at the different types of biases that can affect
internal validity — especially in the context of study design.

First, let’s start with Non-Randomized Controlled Trials (NRCTs).

These studies don’t use random assignment, so they’re naturally more vulnerable to bias than
Randomized Controlled Trials (RCTs).

The key risk in NRCTs is selection bias.


That means the participants in the study might not accurately represent the broader population.

For example:

 If participants are volunteers or recruited through referrals, they might be more motivated
or healthier than the general population.
This skews the findings and makes them less generalizable.

However, if NRCTs are carefully designed and analyzed, they can still yield useful and valid
insights, especially in real-world or resource-limited settings.

Now let’s talk about biases that can still affect Randomized Controlled Trials
(RCTs).

Even though randomization helps reduce selection bias, RCTs aren’t immune to other biases.
Let’s go through three key types:

1. Self-Exclusion Bias
This happens when some participants drop out or refuse to participate, and these
individuals differ significantly from those who stay in.

 Ignoring them can lead to pro-intervention bias, because the remaining group may
respond better to the intervention.

2. Intervention Biases
These occur during the intervention process and can influence the validity of the
comparison:
 Contamination: when participants in different groups share information or receive
similar treatments,
 Co-interventions: when one group receives extra care or treatments that aren’t accounted
for in the analysis,
 Unequal provider expertise or timing: for example, one group might receive care from
more experienced clinicians, or have more time for treatment.

These can all distort the actual effect of the intervention.

3. Outcome/Measurement Biases
These affect how outcomes are measured or assessed, including:

 Recall bias (e.g., participants can’t accurately remember past events),


 Attention bias (e.g., when knowing they’re being observed influences behavior),
 Insensitive measurement tools that don’t detect small but meaningful changes,
 Expectation bias, especially in studies that lack blinding — when researchers or
participants expect a certain outcome, they may (unconsciously) influence results.

To summarize:
Even rigorous study designs like RCTs can still suffer from bias. As researchers, our goal is to
identify, minimize, and control for these biases as much as possible — both during design and
in our analysis.”

Slide 32: Internal Validity – Biases in Retrospective Cohort and Case-Control Studies

Speaker Notes:

“Now we’ll explore how internal validity can be affected by biases specific to retrospective
cohort and case-control study designs.

Retrospective Cohort Studies:

These designs use historical data, which makes them more prone to bias, especially two types:

1. Selection Bias
This arises when the comparison group isn't truly comparable to the exposed group.
 For example, if controls are selected from a different population or setting, it undermines
the comparability.
 This is a common and difficult challenge in retrospective designs since we have no
control over how data was originally collected or who was included.

2. Misclassification Bias
This happens when a participant’s exposure status is wrongly assigned.

 Causes include incomplete or inaccurate records, especially in old datasets.


 The good news is: some of this bias can be corrected at the analysis stage using
statistical tools like adjustment or sensitivity analysis.
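
As an aside for readers comfortable with a little code, here is a minimal Python sketch of a simple quantitative bias analysis for non-differential exposure misclassification; the sensitivity, specificity, and counts are illustrative assumptions, not values from any particular study.

def corrected_exposed(observed_exposed, total, sensitivity=0.85, specificity=0.95):
    # Back-calculate the true number exposed, assuming the same (non-differential)
    # sensitivity and specificity of exposure classification in both groups.
    return (observed_exposed - (1 - specificity) * total) / (sensitivity + specificity - 1)

# Hypothetical case-control data: 120 of 400 cases and 90 of 400 controls recorded as exposed
print(round(corrected_exposed(120, 400)))  # about 125 truly exposed cases
print(round(corrected_exposed(90, 400)))   # about 88 truly exposed controls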

Case-Control Studies:

Here, the main concern is Misclassification Bias, particularly in classifying participants as 'cases' or 'controls'.

This bias often stems from:

 Unclear case definitions: When do we define someone as a case?


 Vague or inconsistent inclusion/exclusion criteria
 Subjectivity in classification, which can happen during record review or interviews

Ask yourself:

 Are the criteria for being a case or a control clearly defined and consistently applied?
 Was there any room for subjective judgment?

Because this bias directly affects how participants are grouped, it can significantly distort
associations and study outcomes.

In summary:
Both retrospective cohorts and case-control studies are valuable but carry higher risk for
certain biases.
Being aware of and planning for these biases can help strengthen internal validity and improve
the trustworthiness of your results.”

Slide 33: External Validity – Focus on Sample Size


Speaker Notes:

“On this slide, we’ll explore how external validity—the ability to generalize study results to a
broader population—relies heavily on sample size and sampling technique.

Why Sample Size Matters:

 A well-calculated sample size ensures that the study has enough power to detect a true
effect, if one exists.
 Without it, we risk drawing conclusions that are not reliable beyond our sample.

Key Points in Sample Size Calculation:

 Is the sample size justified using a proper formula? We’re looking for transparency here—did the researchers specify how the number was derived?
 In experimental designs, the sample size formula should include:
o An expected, clinically meaningful effect size (specified in advance),
o Type I error (alpha, α) – the risk of a false positive, meaning we find an effect
when there isn’t one,
o Type II error (beta, β) – the risk of a false negative, meaning we fail to detect an
actual effect because our sample is too small.
 Ask: What values were input into the formula for alpha and beta? Were they standard or
context-specific?
Also, are these values clearly described in the paper?
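
To make these inputs concrete, the following is a minimal Python sketch of a per-group sample size calculation for comparing two proportions; the outcome rates, alpha, and beta shown are illustrative assumptions only.

import math
from scipy.stats import norm

def n_per_group_two_proportions(p1, p2, alpha=0.05, beta=0.20):
    # z-values for a two-sided Type I error (alpha) and power = 1 - beta
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(1 - beta)
    pooled_variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_variance / (p1 - p2) ** 2)

# Example: expecting 40% vs 25% outcome rates, 5% two-sided alpha, 80% power
print(n_per_group_two_proportions(0.40, 0.25))  # roughly 150 participants per group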

Common Pitfalls:

 Small sample size is one of the most common threats to validity.


o It leads to low statistical power, making it hard to detect real differences or
effects.
o The result? A higher chance of Type II error—we fail to reject a false null
hypothesis.

In summary:
Assess whether the authors used a solid method to determine their sample size. Without enough
participants, even a well-designed study might not give us trustworthy or generalizable
results.”
Slide 34: External Validity – Focus on Sampling Technique

Speaker Notes:

“In this slide, we focus on sampling technique, which plays a key role in determining a study’s
external validity—that is, how well the findings apply to the larger population.

Key Questions to Ask:

 What type of sampling technique did the researchers use?


o If random sampling was used, we want to identify the method—such as:
 Simple random sampling
 Systematic sampling
 Stratified sampling
 Cluster sampling
 Multistage sampling
o If non-random sampling was used, we should note whether it was:
 Purposive sampling
 Convenience sampling

Why This Matters:

 The goal of random sampling is to reduce selection bias and maximize generalizability.
o It allows us to apply the findings confidently to the broader population.
 On the other hand, non-random sampling (like purposive or convenience sampling):
o Can introduce selection bias.
o May limit generalizability, especially if the sample is not representative of the
target population.
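
For readers who like a concrete illustration, here is a small Python sketch contrasting simple random sampling with proportionally allocated stratified sampling; the sampling frame, stratum labels, and sample size are all hypothetical.

import random

# Hypothetical sampling frame of 1,000 units with an assumed 'stratum' attribute
frame = [{"id": i, "stratum": "urban" if i % 3 else "rural"} for i in range(1, 1001)]

# Simple random sampling: every unit has the same chance of selection
simple_sample = random.sample(frame, k=100)

# Stratified sampling with proportional allocation within each stratum
def stratified_sample(frame, key, total_n):
    strata = {}
    for unit in frame:
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for units in strata.values():
        n = round(total_n * len(units) / len(frame))  # proportional share of each stratum
        sample.extend(random.sample(units, n))
    return sample

stratified = stratified_sample(frame, "stratum", 100)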

Final Thought:

As you evaluate a research study, ask yourself—does the sampling technique match the goal
of generalizing the findings?
Random methods are preferred, but if a non-random method is used, the authors should justify it
and discuss its limitations on external validity.”

Slide 35: Evaluating Sampling Techniques – Flow Chart & Eligibility


Speaker Notes:

“In this slide, we highlight two important elements for evaluating sampling methods in a study:
the sample selection flow chart and the inclusion/exclusion criteria.

1. Sample Selection Flow Chart:

 This is especially important in complex or multistage sampling designs, where the selection process can be hard to follow.
 Ask:
o Does the study include a visual flow chart?
 This should show how participants were identified, screened, and selected.
o Is the process clearly outlined step by step?
 A good flow chart improves transparency and makes it easier to assess for
potential bias or dropout.

2. Inclusion and Exclusion Criteria:

 These are essential for understanding who was eligible to participate and who was
excluded—and why.
 Look for:
o Clear justification for eligibility decisions.
o Are the criteria explicitly stated in the methods section?
o Can we tell who qualified, and what reasons were used to exclude others?

Final Thought:

Transparent sampling procedures and well-defined eligibility criteria are critical for ensuring
both internal validity (accuracy of findings) and external validity (generalizability). Always
assess whether these components are clearly reported and logically justified.”

Slide 36: Evaluating Sampling Techniques – Inclusion & Exclusion Criteria

Speaker Notes:

“This slide focuses on a critical aspect of sampling quality: the inclusion and exclusion criteria.
These criteria directly affect internal validity, participant safety, and data quality.
Inclusion Criteria:

 These are the baseline requirements participants must meet to be considered eligible.
 Examples:
o Specific age range
o Having a particular diagnosis or condition
o Being available for the full study duration
 Inclusion criteria ensure the study targets the right population.

Exclusion Criteria:

 These are factors that disqualify someone who otherwise meets inclusion criteria.
 It’s important to note: Exclusion criteria are not simply the opposite of inclusion—
they are used to improve safety, data quality, and ethical compliance.

Common reasons for exclusion include:

1. Inability to provide informed consent


o Due to legal, mental, or cognitive limitations
2. Risk of confounding the data
o Example: Chronic illness that could distort outcomes
o Or participation in another study that might interfere
3. Barriers to study completion
o Migration, dropout risk, or low protocol compliance
4. Potential safety concerns
o For participants or researchers
5. Needs beyond study capacity
o E.g., intensive medical care not covered by the research budget

Final Message:

 Transparent and well-justified inclusion/exclusion criteria help to ensure ethical recruitment, robust results, and scientific credibility.
 If exclusion criteria are vague or just mirror inclusion in reverse, it could be a red flag—
what this slide calls a ‘lousy researcher’ approach.”
Slide 37: Methodology – Statistics (IVs and DVs)

Speaker Notes:

“This slide guides us through evaluating how well a research study identifies and defines its
Independent Variables (IVs) and Dependent Variables (DVs)—which is fundamental for
sound statistical analysis.

Definitions:

 Independent Variables (IVs) are also called predictor or explanatory variables. These
are the variables that the researcher manipulates or categorizes to observe their effect.
 Dependent Variables (DVs) are the outcome or response variables. These are what the
study aims to measure or predict.

Core Evaluation Questions:

1. Can you clearly identify what the IVs and DVs are in the study?
2. How many IVs and DVs are there?
3. Are these variables defined in a clear and specific way?
4. Do we understand how they were measured—for example, what tools or metrics were
used?

Best Practices for Defining IVs and DVs:

 The definitions must be precise and operational—this means they must be specific
enough to be measured or tested.
 Avoid vague phrases like ‘such as’ or ‘for example’. These are acceptable in general
writing, but not in research, where clarity is critical.
 Instead, use evidence-based and professional phrasing like:
o ‘namely’
o ‘as defined by the WHO’
o And always include a proper citation or reference to support your definitions.
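
One practical way to keep definitions operational is a short 'codebook' that ties each variable to its role, definition, and measurement scale. The sketch below is purely illustrative; the variables and instruments are assumptions, not drawn from a specific paper.

codebook = {
    "physical_activity": {  # independent variable (predictor)
        "role": "IV",
        "definition": "minutes per week of moderate-to-vigorous activity, "
                      "namely self-reported minutes on a standard questionnaire",
        "scale": "continuous (minutes/week)",
    },
    "systolic_bp": {        # dependent variable (outcome)
        "role": "DV",
        "definition": "mean of two seated automated readings",
        "scale": "continuous (mmHg)",
    },
}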

Final Note:

 Always ensure that your IV and DV definitions align with your operational definitions
in the methodology section. This alignment is essential for internal consistency and
accurate interpretation of results.”

Slide 38: Methodology – Statistics (Common and Advanced Analysis)


Speaker Notes:

“In this slide, we continue exploring the statistical approaches used in research studies, focusing
on both common types of analysis and more advanced methods relevant to trials and
intervention studies.

Common Types of Statistical Analysis:

 Bivariate analysis looks at the relationship between two variables. A classic example is
the correlation between smoking and lung cancer.
 Multivariate analysis is used when examining multiple variables at once. It’s especially
helpful when trying to identify independent predictors while also adjusting for
confounders—variables that may distort the true relationship.
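
To illustrate the distinction, here is a minimal sketch using simulated data and the statsmodels formula API, comparing a crude bivariate estimate with a confounder-adjusted multivariate estimate; the variable names and effect sizes are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
age = rng.normal(50, 10, n)                                     # potential confounder
smoking = rng.binomial(1, 1 / (1 + np.exp(-(age - 50) / 10)))   # exposure related to age
disease = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * (age - 50) + 0.5 * smoking - 1.0))))
df = pd.DataFrame({"age": age, "smoking": smoking, "disease": disease})

crude = smf.logit("disease ~ smoking", data=df).fit(disp=0)            # bivariate association
adjusted = smf.logit("disease ~ smoking + age", data=df).fit(disp=0)   # adjusted for age
print(np.exp(crude.params["smoking"]), np.exp(adjusted.params["smoking"]))  # odds ratios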

Advanced Analysis Approaches:

Now let’s move to more complex methods often used in intervention studies:

1. Intention-to-Treat (ITT) Analysis:

 This includes all participants who were randomized into the study—even if they didn’t
complete the intervention.
 The denominator for analysis is everyone who started at baseline, not just those who
finished.
 This method is considered the gold standard, particularly in clinical drug trials and
public health interventions, because it preserves the benefits of randomization and
reduces potential bias.

2. Complete Case (CC) / Per-Protocol (PP) Analysis:

 In contrast, this only includes participants who completed the study exactly as planned.
 The denominator is smaller: only the ‘completers’.
 A key downside here is that missing data are not imputed, and the analysis may lose the
effect of randomization, which can introduce selection bias—particularly if those who
drop out differ systematically from those who stay.
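
To see how the two denominators differ in practice, here is a minimal pandas sketch with made-up trial data; counting dropouts as 'not improved' is just one simple, conservative assumption used for illustration.

import pandas as pd

trial = pd.DataFrame({
    "arm":       ["treat", "treat", "treat", "treat", "control", "control", "control", "control"],
    "completed": [True,     True,    False,   True,    True,      False,     True,      True],
    "improved":  [1,        1,       0,       1,       0,         0,         1,         0],
})

# Intention-to-treat: denominator is everyone randomized; dropouts are kept
# and here counted as not improved (improved = 0).
itt = trial.groupby("arm")["improved"].mean()

# Per-protocol / complete-case: denominator is completers only.
per_protocol = trial[trial["completed"]].groupby("arm")["improved"].mean()

print(itt, per_protocol, sep="\n\n")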

Summary: Understanding which analysis method is used—and why—is critical for appraising
the validity and reliability of a study’s findings. ITT is generally more robust for public health
trials, while CC/PP can introduce bias if not carefully managed.”

Slide 39 – table format.

Slide 40: Methodology – Ethics and Overall Assessment

Speaker Notes:
"In this final methodology assessment slide, we focus on three critical elements: ethics
approval, clarity of the methodology section, and an overall evaluation of how well the
methodology is written.

1. Ethics – ERB Approval

 One of the first things to look for is whether Ethical Review Board (ERB) approval is
clearly stated. Ethical approval is essential for any study involving human participants.
 This includes checking for specific reference numbers, institutional names, or a formal
statement confirming ethical clearance.
 Without this, the research could be considered ethically non-compliant.

2. Clarity of Methodology

 Use the same clarity checklist we used for evaluating the Introduction:
o Are the sentences logically structured?
o Is the grammar correct?
o Are the words simple and precise?
o Is the timeline and sequence of methods easy to follow?
o Does each paragraph present a complete idea before moving to the next?

3. Overall Assessment

 Finally, you are asked: Is the methodology well written?


 Use the assessment scale: Yes (Good), So/so (Sufficient), or No (Insufficient).
 Justify your judgment based on six key aspects:
1. Are the methods well defined?
2. Are the instruments reliable and valid?
3. Is generalizability addressed through sampling?
4. Are the statistical tests clearly explained?
5. Is ethics approval clearly stated?
6. And is the text clear and coherent throughout?

Wrap-up: A well-written methodology is the backbone of a credible research study. It ensures transparency, reproducibility, and ethical soundness. Encourage your students or peers to treat this section with the same level of rigor as the results.”
