Research Designs
When a man decides to build a house, does he not first draw a blueprint before
starting the work? The same is true in the conduct of research: a blueprint for the
collection, measurement, and analysis of data is drawn as a pattern to follow. Furthermore,
a sound research design enables the researcher to obtain more valid, objective, reliable, and
accurate answers to the research questions.
Research design is defined as the logical and coherent overall strategy that the
researcher uses to integrate all the components of the research study (Barrot, 2017, p. 102).
To find meaning in the overall process of doing your research study, a step-by-step
process will be helpful to you.
Descriptive Research. When little is known about the research problem, it is
appropriate to use a descriptive research design. It is a design that is exploratory in
nature. The purpose of descriptive research is basically to answer questions such as who, what,
where, when, and how much. This design is therefore best used when the main objective of the
study is simply to observe and report a certain phenomenon as it is happening.
Ex Post Facto. If the objective of the study is to infer a cause from pre-existing
effects, then an ex post facto research design is more appropriate. In this design, the
researcher has no control over the variables in the research study. Thus, one cannot
conclude that the changes measured happened during the actual conduct of the study.
The last two types of quantitative research designs are distinguished by the presence
of a treatment or intervention applied in the research study. An intervention or treatment
pertains to controlling or manipulating the independent variable in an experiment. It is
assumed that the changes in the dependent variable were caused by the independent variable.
There are also two groups of subjects, participants, or respondents in quasi-
experimental and experimental research. The treatment group refers to the group
subjected to the treatment or intervention. The group not subjected to the treatment or
intervention is called the control group.
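The split into treatment and control groups described above can be sketched in Python. This is a minimal illustration with an invented pool of participants, not a prescribed procedure; random assignment is what gives the comparison its fairness.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical pool of twenty participants.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Randomly assign half to the treatment group and half to the control group.
random.shuffle(participants)
treatment_group = participants[:10]  # receives the intervention
control_group = participants[10:]    # does not receive the intervention

print(len(treatment_group), len(control_group))  # 10 10
```

Because assignment is random, any later difference between the two groups can more credibly be attributed to the intervention rather than to how the groups were formed.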
The first step in determining the sample size is identifying the population of interest.
The population is the totality of all the objects, elements, persons, and
characteristics under consideration. It is understood that this population possesses the common
characteristics that the research aims to explore.
There are two types of population: the target population and the accessible population. The
target population is the actual population, for example, all Senior High School students
enrolled in Science, Technology, Engineering, and Mathematics (STEM) in the Division of
Cagayan de Oro City. The accessible population is the portion of the target population to
which the researcher has reasonable access, for example, all Senior High School students
enrolled in the STEM strand at Marayon Science High School – X.
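The distinction between target and accessible population can be shown as two filters over enrollment records. The records, names, and field labels below are all invented for illustration.

```python
# Hypothetical enrollment records; names and fields are invented.
enrollment = [
    {"name": "A", "strand": "STEM",  "school": "Marayon Science High School - X"},
    {"name": "B", "strand": "STEM",  "school": "Another School"},
    {"name": "C", "strand": "HUMSS", "school": "Marayon Science High School - X"},
]

# Target population: all STEM students in the division.
target = [s for s in enrollment if s["strand"] == "STEM"]

# Accessible population: the STEM students the researcher can actually reach.
accessible = [s for s in target
              if s["school"] == "Marayon Science High School - X"]

print(len(target), len(accessible))  # 2 1
```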
Power Analysis. This approach is founded on the principle of power analysis. There are two
concepts you need to consider if you are going to use this approach: statistical power and
effect size. Statistical power is the probability of correctly rejecting the null hypothesis,
that is, of detecting a relationship between the independent and dependent variables of the
research study when one truly exists. The ideal statistical power of a research study is 80%.
The desired statistical power is then used to identify the sample size sufficient for
measuring the effect size of a certain treatment. Effect size refers to the level of
difference between the experimental group and the control group.
Simple Random Sampling. It is a way of choosing individuals in which all members of the
accessible population are given an equal chance to be selected. There are various ways of
obtaining samples through simple random sampling: the fishbowl technique, a roulette
wheel, or a table of random numbers. Random number generators are also readily available
online; visit https://www.randomizer.org/ to practice.
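The two ideas above, power analysis and simple random sampling, can be sketched together: first estimate a sufficient per-group sample size, then draw that many members at random. This is a minimal Python sketch; the population names are hypothetical, and the sample-size formula is the usual normal approximation for comparing two group means, not a method prescribed by this module.

```python
import math
import random
from statistics import NormalDist

def sample_size_per_group(effect_size, power=0.80, alpha=0.05):
    """Approximate n per group for comparing two means
    (normal approximation; 80% power is the conventional target)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

n = sample_size_per_group(0.5)  # "medium" effect size (Cohen's d = 0.5)
print(n)  # 63 per group under this approximation

# Simple random sampling: every member of the accessible population
# has an equal chance of selection (an electronic "fishbowl").
random.seed(1)  # fixed seed for a reproducible illustration
accessible_population = [f"Student-{i}" for i in range(1, 201)]
sample = random.sample(accessible_population, n)
print(len(sample))  # 63
```

Textbook tables often quote a slightly larger figure (64 per group) because they correct for the t distribution; the normal approximation here is close enough for planning purposes.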
Stratified Random Sampling. Like simple random sampling, stratified random
sampling gives all members of the population an equal chance of being chosen. However,
the population is first divided into strata, or groups, before the samples are selected. The
samples are chosen from these subgroups and not directly from the entire population.
This procedure is best used when the population is grouped into classes
such as gender and grade level.
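Stratified random sampling can be sketched as a simple random sample drawn within each stratum. The strata and student labels below are hypothetical, and the same fraction is sampled from each group so that every member still has an equal chance of selection.

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

# Hypothetical accessible population, already grouped by grade level (strata).
strata = {
    "Grade 11": [f"G11-{i}" for i in range(1, 51)],  # 50 students
    "Grade 12": [f"G12-{i}" for i in range(1, 31)],  # 30 students
}

def stratified_sample(strata, fraction):
    """Draw a simple random sample of the same fraction from each stratum."""
    sample = []
    for members in strata.values():
        k = round(len(members) * fraction)
        sample.extend(random.sample(members, k))
    return sample

chosen = stratified_sample(strata, 0.20)  # 10 from Grade 11, 6 from Grade 12
print(len(chosen))  # 16
```

Sampling the same fraction from every stratum keeps the sample proportional to the population, so no grade level is over- or under-represented.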
What do you think will happen if the tools for building a house are not prepared
meticulously? The same applies when gathering information to answer a research problem:
the tools, or instruments, should be prepared carefully. In constructing a quantitative research
instrument, it is very important to remember that the tool created should elicit responses
or data that can be analyzed numerically.
Research instruments are the basic tools researchers use to gather data on specific
research problems. Common instruments are performance tests, questionnaires,
interviews, and observation checklists. The first two instruments are usually used in
quantitative research, while the last two are often used in qualitative research.
However, interviews and observation checklists can still be used in quantitative research
once the information gathered is translated into numerical data.
Concise. Have you ever tried answering a very long test and, because of its length,
just picked answers without even reading the items? A good research instrument is concise in
length yet can elicit the needed data.
Valid and reliable. The instrument should pass the tests of validity and reliability to
get more appropriate and accurate information.
Face Validity. It is also known as "logical validity." It calls for an intuitive judgment of
the instrument as it "appears." Just by looking at the instrument, the researcher decides
whether it is valid.
Content Validity. An instrument judged to have content validity meets the objectives of
the study. This is done by checking whether the statements or questions elicit the needed
information. Experts in the field of interest can also identify the specific elements that
should be measured by the instrument.
Concurrent Validity. When the instrument can produce results similar to those of similar,
already validated tests, it has concurrent validity.
Predictive Validity. When the instrument is able to produce results similar to those of
similar tests that will be administered in the future, it has predictive validity. This is
particularly useful for aptitude tests.
Reliability of Instrument
Test-retest Reliability. It is achieved by giving the same test to the same group of
respondents twice. The consistency of the two sets of scores is then checked.
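Checking the consistency of the two sets of scores usually means computing their correlation. Below is a minimal sketch using the Pearson correlation coefficient on invented scores from two administrations of the same test; a coefficient near 1.0 indicates high test-retest reliability.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores of five respondents on two administrations of the test.
first_run  = [78, 85, 62, 90, 70]
second_run = [80, 83, 65, 92, 68]

r = pearson_r(first_run, second_run)
print(round(r, 3))  # close to 1.0, suggesting consistent scores
```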
Internal Consistency Reliability. It determines how well the items measure the same
construct. It is reasonable to expect that a respondent who scores high on one item will also
score high on similar items. There are three ways to measure internal consistency: the
split-half coefficient, Cronbach's alpha, and the Kuder-Richardson formula.
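Of the three, Cronbach's alpha is the most commonly reported, and it can be computed by hand from item variances and the variance of the total scores. The sketch below uses an invented 3-item scale answered by five respondents; values above roughly 0.7 are conventionally taken as acceptable internal consistency.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a list of per-item score lists
    (each inner list holds one item's scores across all respondents)."""
    k = len(item_scores)
    item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # total score per respondent
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item Likert-type scale answered by five respondents.
items = [
    [4, 5, 3, 4, 2],  # item 1
    [4, 4, 3, 5, 2],  # item 2
    [5, 4, 2, 4, 3],  # item 3
]
print(round(cronbach_alpha(items), 2))  # 0.86
```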