Module 1 Quarter 2
Determining the correct sample size and how the samples are selected is crucial in ensuring the accuracy and precision of an estimate, which leads to valid research findings. Sampling is the process of securing some of the elements of a population. An element is a member of a population who can provide information about that population. A population consists of all the elements about which you can make inferences based on the data gathered from a determined sample.
The first step in determining the sample size is identifying the population of the topic of interest. The population is the totality of all the objects, elements, persons, and characteristics under consideration. It is understood that this population possesses common characteristics that the research aims to explore.
There are two types of population: the target population and the accessible population. The target population is the actual population, for example, all Senior High School students enrolled in the Science, Technology, Engineering, and Mathematics (STEM) strand in the Division of Cagayan de Oro City. The accessible population, on the other hand, is the portion of the population to which the researcher has reasonable access, for example, all Senior High School students enrolled in the STEM strand at Marayon Science High School – X.
When studying the whole population is too costly, time-consuming, or impractical, a representative sample is identified. Sampling pertains to the systematic process of selecting the group to be analyzed in the research study. The goal is to get information from a group that represents the target population. Once a good sample is obtained, the generalizability and applicability of the findings increase.
The sample is the representative subset of the population. For example, if all 240 Senior High School students enrolled in the Science, Technology, Engineering, and Mathematics (STEM) strand in a school constitute the population, then 60 of those students constitute the sample. A good sample should possess, with fair accuracy, the characteristics of the represented population that are within the scope of the study. Generally, the larger the sample, the more reliable it will be, but this still depends on the scope and delimitation and the research design of the study.
Lunenberg and Irby (2008), as cited by Barrot (2017, p. 107), also suggested different sample sizes for each quantitative research design:
Survey – 800
Experimental – 30 or more
Literature Review. Another approach is to read literature and studies similar or related to your current research study. Since you have already written your review of related literature and studies, you might want to recall how those studies determined their sample sizes. Using this approach increases the validity of your sampling procedure.
Formulas. Formulas have also been established for computing an acceptable sample size. The most common is Slovin's formula, n = N / (1 + Ne²), where n is the sample size, N is the population size, and e is the margin of error.
Example 1. A researcher wants to conduct a survey. If the population of a big university is 35,000, find the sample size if the margin of error is 5%.
n = N / (1 + Ne²) = 35,000 / (1 + 35,000(0.05)²) = 35,000 / (1 + 87.5) ≈ 395
Example 2. Suppose you plan to conduct a study among 1,500 Grade 11 students enrolled in the STEM
Track. How many respondents are needed using a margin of error of 2%? Answer: 938
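To check such computations, Slovin's formula can be translated into a few lines of code. The following is a minimal sketch in Python; the function name and the printed comparisons are illustrative choices, not part of the module.

def slovin_sample_size(population, margin_of_error):
    # Slovin's formula: n = N / (1 + N * e^2)
    return population / (1 + population * margin_of_error ** 2)

print(slovin_sample_size(35_000, 0.05))  # about 395.5, reported as 395 in Example 1
print(slovin_sample_size(1_500, 0.02))   # 937.5, rounded to 938 in Example 2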
Power Analysis. This approach is founded on the principle of power analysis. There are two concepts you need to consider if you are going to use this approach: statistical power and effect size.
Statistical power is the probability of correctly rejecting the null hypothesis, that is, of detecting a relationship between the independent and dependent variables of the research study when one truly exists. The ideal statistical power of a research study is 80%. Statistical power is used to identify the sample size sufficient for measuring the effect size of a certain treatment. Effect size refers to the level of difference between the experimental group and the control group.
If statistical power tells whether there is a relationship between the independent and dependent variables, effect size suggests the extent of that relationship. Hence, the higher the effect size, the greater the level of difference between the experimental and control groups. For example, suppose your research study reveals that there is a difference in the pretest and posttest scores of the students on a given anxiety test after implementing a psychosocial intervention. The effect size gives you an idea of how small or large that difference is.
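If a statistical package is available, the required sample size for a target power can be estimated directly. The sketch below is one possible approach using the statsmodels library (an assumed tool, not one prescribed by the module): it asks how many participants per group an independent-samples t-test would need for a medium effect size of 0.5 at the 0.05 significance level and the ideal 80% power mentioned above.

# A minimal sketch, assuming the statsmodels package is installed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # assumed medium effect size
                                   alpha=0.05,       # significance level
                                   power=0.80)       # the ideal 80% power
print(n_per_group)  # roughly 64 participants per group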
For systematic sampling, the sampling interval is f = N / n = 1,500 / 306 = 4.9 ≈ 5. This means every 5th person from the list of employees is selected.
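As a quick illustration of how that interval is applied, the short sketch below picks every 5th name from a numbered list after a random start. The list of 1,500 hypothetical employees and the random starting point are assumptions made only for this example.

import random

population = [f"Employee {i}" for i in range(1, 1501)]  # hypothetical list of 1,500 employees
interval = 5                                            # f = 1,500 / 306, rounded to 5

start = random.randint(0, interval - 1)  # random start within the first interval
sample = population[start::interval]     # then take every 5th person
print(len(sample))                       # 300 employees selected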
Example: There are 1,200 junior high school students; determine the number of samples to take from each grade level based on a sample size of 300.
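One common way to answer such an example is proportional allocation, where each grade level contributes samples in proportion to its share of the 1,200 students. The per-level enrolment figures below are assumptions for illustration, since the module gives only the total.

# Hypothetical enrolment per grade level (only the total of 1,200 is given in the module).
enrolment = {"Grade 7": 360, "Grade 8": 320, "Grade 9": 280, "Grade 10": 240}
total = sum(enrolment.values())  # 1,200
sample_size = 300

# Proportional allocation: n_h = (N_h / N) * n
allocation = {level: round(count / total * sample_size)
              for level, count in enrolment.items()}
print(allocation)  # {'Grade 7': 90, 'Grade 8': 80, 'Grade 9': 70, 'Grade 10': 60}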
A questionnaire is an instrument for collecting data. It consists of a series of questions to which respondents provide answers for a research study.
The qualities of a good research instrument are (1) validity, (2) reliability, and (3) usability.
Validity
Validity means the degree to which an instrument measures what it intends to measure. The validity of a measuring instrument has to do with its soundness, with what the test or questionnaire measures, with its effectiveness, and with how it can be applied.
Types of Validity
Content validity means the extent to which the content or topics of the test are truly representative of the content of the course. It essentially involves the systematic examination of the research instrument's content to determine whether it covers a representative sample of the behavior domain to be measured. It is commonly used in evaluating achievement tests.
Criterion validity is the degree to which the test agrees or correlates with a criterion set up as an acceptable measure. The criterion is always available at the time of testing. It is applicable to tests employed for the diagnosis of existing status rather than for the prediction of future outcomes.
Predictive validity, as described by Aquino and Garcia (2004), is determined by showing how well predictions made from the test are confirmed by evidence gathered at some subsequent time. The criterion measure against which this type of validity is checked is important because the future outcome of the subjects is predicted.
The construct validity of a test is the extent to which the test measures a theoretical construct or trait. This
involves such tests as those of understanding, appreciation and interpretation of data. Examples are intelligence and
mechanical aptitude tests.
Face Validity. It is also known as "logical validity." It calls for an intuitive judgment of the instrument as it "appears." Just by looking at the instrument, the researcher decides whether it is valid.
Reliability
Reliability means the extent to which a "test is dependable, self-consistent and stable" (Merriam, 1995). In other words, the test agrees with itself. It is concerned with the consistency of responses from moment to moment: even if a person takes the same test twice, the test yields the same results. However, a reliable test may not always be valid.
Test-retest Reliability. It is achieved by giving the same test to the same group of respondents twice. The
consistency of the two scores will be checked.
Equivalent Forms Reliability. It is established by administering to the same group of respondents two tests that are identical except in wording.
Internal Consistency Reliability. It determines how well the items measure the same construct. It is reasonable to expect that when a respondent gets a high score on one item, he or she will also get a high score on similar items. There are three ways to measure internal consistency: the split-half coefficient, Cronbach's alpha, and the Kuder-Richardson formula.
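As a brief illustration of the first of these three measures, the sketch below computes a split-half coefficient: the items are split into odd- and even-numbered halves, the two half-scores are correlated, and the Spearman-Brown formula adjusts the correlation to estimate the reliability of the full test. The score matrix is fabricated purely for illustration.

import numpy as np

# Fabricated scores: 6 respondents (rows) answering 8 items (columns)
scores = np.array([
    [4, 5, 4, 4, 5, 4, 5, 4],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5, 5, 4, 5],
    [2, 3, 2, 2, 3, 2, 2, 3],
    [4, 4, 3, 4, 4, 3, 4, 4],
    [3, 2, 3, 3, 2, 3, 3, 2],
])

odd_half = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5, 7
even_half = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6, 8

r = np.corrcoef(odd_half, even_half)[0, 1]  # correlation between the two halves
split_half = 2 * r / (1 + r)                # Spearman-Brown correction
print(round(split_half, 2))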
Practicality
Practicality, also known as usability, means the degree to which the research instrument can be satisfactorily used by teachers, researchers, supervisors, and school managers without undue expenditure of time, money, and effort. In other words, usability means practicability.
QUESTIONNAIRE
Two Forms:
A. Closed Form – calls for short checkmark responses
Examples: True or False, Yes or No, Multiple Choice, statements with options (strongly agree, agree, etc.)
B. Open Form – unrestricted because it calls for a free response in the respondent's own words.
STANDARDIZED TEST
Ready-to-use research instruments; usually these are products of long years of study.
Validated Tool – A tool used in previous similar studies but not standardized.
Modified Tool – An existing published tool that you modify to suit the nature of the research respondents and locale.
Researcher-Made Tool – A tool crafted by the researcher to answer the problem of the study. The researcher must be an expert in the field or must seek the expertise of an authority or professional for validation. The instrument must then be pre-tested on at least 10-20 respondents who share the same characteristics as the intended respondents. From the pre-test results, the indices of difficulty and discrimination are obtained.
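A rough sketch of how those two indices might be computed from pre-test results is shown below. The scoring rule (1 = correct, 0 = wrong) and the split into upper and lower halves by total score follow common practice; the response matrix itself is invented for illustration.

import numpy as np

# Invented pre-test answers of 10 respondents (rows) to 5 items (1 = correct, 0 = wrong)
responses = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
])

# Index of difficulty: proportion of respondents who answered each item correctly
difficulty = responses.mean(axis=0)

# Index of discrimination: correct proportion in the upper half minus the lower half
order = responses.sum(axis=1).argsort()  # respondents ranked by total score (lowest first)
lower, upper = order[:5], order[-5:]
discrimination = responses[upper].mean(axis=0) - responses[lower].mean(axis=0)

print(difficulty)
print(discrimination)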
There are 3 rigorous phases for developing an instrument that accurately measures the variables of interest (Creswell,
2005).
1. PLANNING
2. CONSTRUCTION
3. QUANTITATIVE EVALUATION
- includes administration of a pilot study to a representative sample. It may be helpful to ask the participants for
feedback to allow for further refinement of the instrument.
The pilot study provides quantitative data that the researcher can test for internal consistency by computing Cronbach's alpha. The reliability coefficient can range from 0.00 to 1.00, with values of 0.70 or higher indicating acceptable reliability (George and Mallery, 2003).
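The coefficient referred to above can be computed without special software. The sketch below implements Cronbach's alpha from its usual formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores); the pilot responses are fabricated solely for illustration.

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: rows are respondents, columns are items
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Fabricated pilot responses: 5 respondents, 4 Likert items
pilot = [[4, 5, 4, 4],
         [3, 3, 4, 3],
         [5, 5, 5, 4],
         [2, 3, 2, 2],
         [4, 4, 3, 4]]
print(round(cronbach_alpha(pilot), 2))  # compare against the 0.70 benchmark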
1. Likert scale. The Likert scale is a very popular rating scale used by researchers to measure behaviors and attitudes quantitatively. It consists of choices that range from one extreme to another, from which respondents choose the degree of their opinion. It is the best tool for measuring the level of opinions.
Examples:
Frequency of occurrence: Very Frequently, Frequently, Occasionally, Rarely, Very Rarely
Frequency of use: Always, Often, Sometimes, Rarely, Never
Degree of importance: Very Important, Important, Moderately Important, Of Little Importance, Not Important
Level of satisfaction: Very Satisfied, Satisfied, Undecided, Unsatisfied, Very Unsatisfied
Agreement: Strongly Agree, Agree, Undecided, Disagree, Strongly Disagree
2. Semantic differential scale. The respondents are asked to rate concepts on a series of bipolar adjectives. It has the advantage of being flexible and easy to construct.
Example: Description of the class president
Competent 5 4 3 2 1 Incompetent
Punctual 5 4 3 2 1 Not punctual
Pleasant 5 4 3 2 1 Unpleasant
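Responses on either type of rating scale are usually coded as numbers before analysis. The short sketch below maps a five-point agreement scale to the values 5 down to 1 and computes the mean rating for one statement; the coding scheme and the sample responses are assumptions for illustration only.

# Assumed numeric coding for a five-point agreement scale
codes = {"Strongly Agree": 5, "Agree": 4, "Undecided": 3,
         "Disagree": 2, "Strongly Disagree": 1}

# Hypothetical responses of ten respondents to one statement
responses = ["Agree", "Strongly Agree", "Agree", "Undecided", "Agree",
             "Disagree", "Strongly Agree", "Agree", "Agree", "Undecided"]

scores = [codes[r] for r in responses]
mean_rating = sum(scores) / len(scores)
print(mean_rating)  # 3.8, leaning toward agreement on the assumed scale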
What’s In
In the previous discussion on quantitative research designs, you were taught about quasi-experimental and experimental designs. Their uniqueness from other research designs was also described. Remember that an experimental research design controls or manipulates the independent variable. This is done by applying particular conditions or treatments, also called the research intervention. In this lesson, the focus is on how to describe your research intervention in your research paper.
What’s New
For example, in a study determining the effects of a special tutorial program on learners at risk of failing (LARF), the researcher decides which group of LARF will receive the intervention. In this example, the special tutorial program is the research intervention. Furthermore, the extent to which the program will be administered to the learners is determined.
Write the Background Information. This is an introductory paragraph that explains the relevance of the intervention to the study conducted. It also includes the context and duration of the treatment.
Describe the Differences and Similarities between the Experimental and Control Group. State what will happen and what will not happen in both the experimental and control groups. This clearly illustrates the parameters of the research groups.
Describe the Procedures of the Intervention. In particular, describe how the experimental group will receive or experience the condition, including how the intervention will be carried out to achieve the desired result of the study. For example, how will the special tutorial program take place?
Explain the Basis of the Procedures. The reasons for choosing the intervention and its procedures should be clear and concrete. The researcher explains why the procedures are necessary. In addition, the theoretical and conceptual basis for choosing the procedures is presented to establish their validity.
Visit the following link and learn further about experimental research.
https://bit.ly/2Xr5zes