Chapter 2 and Chapter 3
(Methodology)
Introduction
In a research paper, thesis, dissertation, or research project, the Review of Related Literature is
divided into three parts. These are: 1) Related Legal Bases, 2) Related Literature, and 3) Related
Studies.
The major sources of related legal bases are laws and department directives such as
circulars, orders, memoranda, etc. They all serve as legal bases for the paradigm of the study.
In presenting the related legal bases, the researcher has to arrange them chronologically
from the most recent to the oldest. The relevance of each legal basis should be explained. The
lack of explanation of the legal basis makes the study unscientific. If a study has related legal
bases, it shows that the investigation is important to respond to the government’s thrust.
Related Literature
Flores (2016) defined literature as any written material such as books, journals, magazines,
novels, poetry, yearbooks, and encyclopedias. It is composed of discussions of facts and principles to
which the present study is related.
Related Studies
Related studies are studies, inquiries, or investigations already conducted to which the
present proposed study is related or has some bearing or similarity. They are usually published
and unpublished materials such as manuscripts, theses, and dissertations.
1. Local
a. for literature materials, if printed in the Philippines;
b. for studies, if conducted in the Philippines.
2. Foreign
a. for literature materials, if printed in other lands;
b. for studies, if conducted in a foreign land.
1.2. How to write the Review of Literature
Researchers must exhaust all possible sources, both online and offline; these can
be local, global, or database sources. Peer-reviewed/scholarly outputs to be used
must have been published within the last three (3) to five (5) years.
From the collated literature, paraphrase or restate the authors' thoughts
in condensed form using your own words. For opposing points of view, clearly
explain the nature of and differences between the opposing perspectives.
In-text Citations
Citations used in the body of the study identify the source of information. In-text
parenthetical citations are used to give credit to the authors whose ideas or thoughts are used
within the document. These internal citations allow the reader to identify the source and locate
the information being addressed. APA uses a system that includes the author’s last name and
year of publication. For example: (Small, 2009). If there is a direct quote or a specific part of the
work being referred to, the page numbers are also included. For example, (Small, 2009, p. 23).
Sources may include books and book chapters, journal or magazine articles, dissertations and
theses, conference papers, government reports, films, websites, blogs, and wikis, discussion
boards, personal communications, and more.
Paraphrasing
Paraphrasing is used when you take someone else's direct quote and state the idea in
your own words. Changing a few words here and there is still considered plagiarism even if you
do cite the author. Paraphrasing means that you have expressed the author's information or ideas in
your own words and have given that person credit for the information or idea. You can prevent
plagiarism by closing the document and restating the idea in your own words.
Example:
1. Original Passage: “Signed into law in January 2002 by President George W. Bush,
the No Child Left Behind (NCLB) Act signaled the nation’s most sweeping education
reform of federal education policy in decades” (Smith, 2008, p. 212).
2. Unacceptable Paraphrasing: Enacted into law in 2002 by President Bush, the No
Child Left Behind Act signaled the most sweeping education reform of U.S.
Educational policy in decades.
3. Paraphrased: According to Smith (2008), the No Child Left Behind Act (NCLB)
provided the most all-encompassing reform in US education in almost half a century.
Or
The No Child Left Behind Act (NCLB) provided the most all-encompassing reform
in US education in almost half a century (Smith, 2008).
Assessment No. 8
Introduction
Researchers must describe the methods used in detail so that readers will know exactly
how the study was conducted; a good research study must be replicable and scientifically sound.
Learning Outcomes:
The research design serves as a master plan of the methods and procedures that should be
used to collect and analyze the data needed by the researcher. Determining the most appropriate
research design is a function of the information research objectives and the specific information
requirements. The topics covered by research design, however, are wide-ranging.
Category: Options
1. The degree to which the research question has been crystallized: Exploratory study; Formal study
2. The method of data collection: Monitoring; Communication study
3. The power of the researcher to produce effects in the variables under study: Experimental; Ex post facto
4. The purpose of the study: Reporting; Descriptive; Causal; Explanatory; Predictive
5. The time dimension: Cross-sectional; Longitudinal
6. The topical scope (breadth and depth) of the study: Case study; Statistical study
7. The research environment: Field setting; Laboratory research; Simulation
8. The participants' perceptions of research activity: Actual routine; Modified routine
1. Exploratory Study
Exploratory studies tend toward loose structures with the objective of discovering future
research tasks. The immediate purpose of exploration is usually to develop hypotheses or
questions for further research.
2. Historical Study
Historical research is a scientific, critical inquiry into the whole truth of past events, using
the critical method in the understanding and interpretation of facts, which are applicable
to current issues and problems.
3. Causal Research Study
Causal designs are intended to collect raw data and create data structures and information that will
allow the researcher to model cause-and-effect relationships between two or more
variables. Causal research is most appropriate when the research objectives include the
need to understand why certain phenomena happen as they do. That is to say,
the researcher may have a strong desire to understand which variables are the causes of
the dependent phenomena defined in the research problem.
Causal studies seek to discover the effect that a variable(s) has on another (or others) or
why certain outcomes are obtained. The concept of causality is grounded in the logic of
hypothesis testing, which in turn produces inductive conclusions.
4. Descriptive Study
Descriptive designs use a set of scientific methods and procedures to collect raw data and create data
structures that describe the existing characteristics of a defined target population.
1. Descriptive-survey. This type is suitable whenever the subjects vary among
themselves and one is interested in knowing the extent to which different conditions
and situations obtain among these subjects. The word survey signifies the
gathering of data regarding present conditions. A survey is useful in 1)
providing the value of facts and 2) focusing attention on the most important things to
be reported.
2. Descriptive-normative survey. This type is used to compare local test results
with a national norm.
9. Longitudinal survey. This design involves much time allotted for the investigation of
the same subjects at two or more points in time.
Group Work
Assessment
SAMPLING DESIGNS
Learning Outcomes:
Sampling is necessary especially when the population of the study is too large and the
resources of the investigator are limited. In such cases it is advantageous for the researcher to use a sample
survey rather than the total population.
It is advisable to use the total population if the number of subjects under study is fewer than 100. But if
the total population is 100 or more, it is advisable to draw a sample in order to be
effective, efficient, and economical in gathering data, provided, however, that the sample is a
representative cross-section of the population and is scientifically selected.
Sampling is the selection of a small number of elements from a larger defined target
group of elements and expecting that the information gathered from the group will allow
judgments to be made about the larger group.
There are different reasons for the inclusion of sampling procedures in research. The
main objective is to allow the researchers to make inductive and predictive judgments or
decisions about the total target population on the basis of limited information or in the absence of
perfect knowledge. The concept of sampling involves two basic issues: 1) making the right
decisions in the selection of items (e.g., people, products, or services); and 2) feeling confident
that the data generated by the sample can be transformed into accurate information about the
overall target population.
Advantages of Sampling
1. It saves time, money, and effort.
2. It is more effective.
3. It is faster and cheaper.
4. It is more accurate.
5. It gives more comprehensive information.
Limitations of Sampling
1. Sample data involve more care in preparing detailed sub classifications because of a small
number of subjects.
2. If the sampling is not correctly designed and followed, the results may be misleading.
3. Sampling requires an expert to conduct the study in an area. If this is lacking, the results
could be erroneous.
4. The characteristics to be observed may occur rarely in a population. (e.g teachers over 30
years of teaching)
5. Complicated sampling plans are laborious to prepare.
In determining the sample size for investigation purposes, the subject of the study should
be identified first, including its population. You may calculate the sample size by using either
of the following equations:
1. Slovin's formula
Slovin's formula allows a researcher to sample the population with a desired degree of
accuracy. It gives the researcher an idea of how large the sample size needs to be
to ensure reasonably accurate results.

n = N / (1 + Ne²)

where n is the sample size, N is the population size, and e is the margin of error.

Example, with N = 9,000 and e = 0.02:

n = 9,000 / [1 + 9,000(0.02)²] = 9,000 / 4.6 ≈ 1,957
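The computation above is easy to sketch in code; a minimal illustration (the function name is ours, not from the text):

```python
import math

def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2)."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)  # round up so the sample is at least n

# The worked example from the text: N = 9,000 and e = 0.02
print(slovin(9000, 0.02))  # 9,000 / 4.6 = 1,956.52..., rounded up to 1,957
```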
Ss = {NV + [Se x (1 - p)]} / {NSe + [V² x p(1 - p)]}

where Ss stands for sample size; N, population; V, standard value (2.58) at the 1 percent
level of probability with 0.99 reliability; Se, sampling error (0.01); and p, the largest possible
proportion.
On the basis of representation, sampling designs are classified as probability (scientific)
or non-probability (non-scientific) sampling.
PROBABILITY SAMPLING
1. Simple Random Sampling
This is a process of selecting a sample in such a way that all individuals in the defined
population have an equal and independent chance of being selected for the sample; the process
is called randomization. Under this technique, the following procedures can be used:
Examples: drawing names from a hat; using a table of random numbers
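Randomization of this kind is simple to simulate with the standard library; a sketch, where the population names and sample size are illustrative assumptions:

```python
import random

# Hypothetical sampling frame of 100 members
population = [f"Student{i:03d}" for i in range(1, 101)]

random.seed(42)  # fixed seed only so the draw is reproducible
sample = random.sample(population, k=10)  # each member has an equal chance
print(sample)
```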
2. Systematic Sampling
In this approach, every kth element of the population is sampled: select a random starting point
and then select every kth subject in the population. This method is simple to use, so it is used often.
In using this technique, first identify and define the population; determine the
desired sample size; obtain a list of the population; identify the sampling ratio; start at some
random place at the top of the population list; and take every kth name on the list until the
desired sample size is reached.
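The steps above can be sketched as follows; the population size and interval are illustrative assumptions:

```python
import random

def systematic_sample(frame, k):
    """Take every kth element after a random start within the first interval."""
    start = random.randrange(k)
    return frame[start::k]

# Hypothetical population list of 1,000 members; interval k = N / n = 1000 / 50 = 20
frame = list(range(1, 1001))
random.seed(1)
sample = systematic_sample(frame, k=20)
print(len(sample))  # 50 subjects, evenly spaced through the list
```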
3. Stratified Sampling
o Divide the population into at least two different groups with common
characteristic(s), then draw SOME subjects from each group (each group is called a
stratum; plural, strata).
o This results in a more representative sample.
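A proportional-allocation sketch of the procedure; the strata, their sizes, and the 20% sampling fraction are illustrative assumptions:

```python
import random

# Hypothetical population grouped into strata by year level
strata = {
    "1st year": [f"A{i}" for i in range(40)],
    "2nd year": [f"B{i}" for i in range(30)],
    "3rd year": [f"C{i}" for i in range(30)],
}

random.seed(7)
sample = []
for name, members in strata.items():
    share = round(len(members) * 0.2)        # proportional allocation: 20% per stratum
    sample.extend(random.sample(members, share))

print(len(sample))  # 8 + 6 + 6 = 20 subjects drawn from every stratum
```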
4. Cluster Sampling – a sampling process in which groups, not individuals, are randomly
selected. All members of a selected group have similar characteristics. It results from a
two-stage process in which the population is divided into groups (called clusters), then
groups are randomly selected, and then data are collected from ALL members of the
selected groups. This method is used extensively by government and private research
organizations.
Examples: Exit Polls
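The two-stage process can be sketched as follows; the clusters (intact class sections) and their sizes are illustrative assumptions:

```python
import random

# Hypothetical population divided into clusters, e.g. intact class sections
clusters = {f"Section{s}": [f"S{s}-{i}" for i in range(25)] for s in "ABCDEF"}

random.seed(3)
chosen = random.sample(list(clusters), k=2)        # stage 1: randomly pick whole clusters
sample = [m for c in chosen for m in clusters[c]]  # stage 2: take ALL members of each

print(len(sample))  # 2 clusters x 25 members = 50
```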
NON-PROBABILITY SAMPLING
1. Convenience Sampling – a sampling technique where samples are selected from the
population only because they are conveniently available to the researcher.
Examples: Using family members or students in a classroom, Mall shoppers
3. Snowball – a sampling technique that helps researchers find a sample when subjects are
difficult to locate. Researchers use this technique when the sample size is small and not
easily available. This sampling system works like a referral program: once the researcher
finds suitable subjects, he or she asks them for assistance in seeking similar subjects to form a
considerably good-sized sample.
Clearly describe the target population/s and the context in which the study was conducted,
and remind the reader of the units of analysis. Describe the sampling method and the motivation for it in
detail (disadvantages, sampling frame, sampling units, target sample size, how it was
determined, realized sample size, response rate, number of usable questionnaires, etc.), and
provide a demographic or behavioral profile of the respondents.
Assessment No. 10
Using Slovin's formula, compute the sample size for each of the following populations with .01, .05,
and .10 as margins of error:
1. 150
2. 800
3. 500
4. 680
5. 1500
DATA COLLECTION METHODS AND TECHNIQUES
Many of the techniques in collecting data depend largely on the quality of the
measurement instrument that will be employed in the research process. The significance of any
research paper, or its entirety for that matter, can be put to waste if the instrumentation is
questionable. As a researcher, you are thus cautioned to exercise care in designing the data
collection procedures that will be employed in the research, especially in the construction of the
research instrument.
Learning Outcomes:
Data Collection
Data collection is an extremely important part of any research because the conclusions of
a study are based on what the data reveal. There are several ways of collecting data. The choice
of procedures usually depends on the objectives and design of the study and the availability of
time, money and personnel.
The term data refers to any kind of information researchers obtain on the subjects,
respondents or participants of the study. In research, data are collected and used to answer the
research questions or objectives of the study.
Examples of data:
Demographic information such as age, sex, household size, civil status, or religion. Social
and economic information such as educational attainment, health status, extent of
participation in social organizations, occupation, income, housing condition, and the like.
Scores in exams, grades, etc.
Types of Data:
Research data are generally classified as either quantitative or qualitative. Based on their
source, data fall under two categories, namely primary and secondary.
Quantitative Data
Primary Data - these are information collected directly from the subjects being studied,
such as people, areas, or objects.
Secondary Data - these are information collected from other available sources, like recent
censuses, or data collected by large scale national or world-wide surveys, such as
agriculture and industry surveys, demographic and health surveys, data of completed
studies.
Qualitative Data - these are descriptive information which have no numerical value, e.g.,
attitude or perception towards something, the process used in accomplishing an activity, a person's
experiences, one's ideas about certain concepts, or a situation or phenomenon like drug abuse, family
planning, the Brgy. justice system, etc.
The choice of the best way to collect data depends largely on the type of data to be collected
and the source of the data. Before starting to collect data, a researcher should decide: A. what data
to collect; B. where or from whom the data will be obtained; and C. what instrument/s or device/s to
use in collecting the data.
The two most common means of collecting quantitative information are the self-administered
questionnaire and the structured interview. Quantitative information may also be collected from
secondary sources and service statistics (Fisher et al., 1991).
There is a wide variety of data collection methods available to the researcher. The
following are the methods most commonly utilized by researchers in gathering data:
1. Personal Interview – in an interview, the persons from whom the needed data are obtained,
referred to as respondents or informants, are interviewed face-to-face or through the
phone/online.
There are two types of interviews:
a. Structured / Standardized – this is characterized by a set of questions formulated
in a standardized way, as in questionnaires. It utilizes an instrument called an
interview schedule. This type of instrument is used in well-structured
research problems where the variables are clearly delineated. It is applicable to
quantitative types of research problems.
2. Observation – a data collection method in which the researcher acquires knowledge about
the subjects under study by observing them in various settings or situations. In this method,
the researcher witnesses the event in its natural setting and thus gives a firsthand account of
the event. The account is not mediated by other persons who have witnessed the event, as is
normally the case in interviews and questionnaires, where the personal experience of
respondents is communicated to the researcher (Bautista, 1998).
The main characteristic of the direct observation technique is that researchers must rely
heavily on their powers of observation rather than actually communicating with people to
collect primary data. Basically, the researcher depends on watching and recording what
people or objects do in many different research situations. Examples:
Physical attributes
Expressive behaviors
Verbal behavior
Temporal behavior
Spatial relationships and locations
Physical objects
Types of Observation
a. Structured observation
b. Unstructured observation
3. Interviews and Questionnaires – interviews and questionnaires are methods for asking for
information that utilize questioning tools. Both are very common primary techniques of data
collection, and both entail drawing information from respondents. One advantage of
questionnaires is that they can be administered to a large group of people at the same time.
Another is that respondents are allowed to maintain their anonymity; hence, they will be
more honest in answering the questions.
1. Standardized interview
2. Unstructured interview
Two types of questionnaires:
1. Patterned / Standard questionnaire
2. Self – made questionnaire
a. Open-ended format – in this format, the respondents are allowed to answer the questions
in their own words based on their understanding (e.g., age, monthly income).
b. Multiple-choice format – this type presents a question followed by a set of
options pre-determined by the researcher or based on a pre-survey.
4. Focus Group Discussion (FGD) – when it is not feasible to use the interview or
questionnaire method in gathering data due to money and time constraints, an FGD may be
conducted. Through purposive sampling, the members of the group from
whom the needed information or data will be obtained are selected. The selected group
should more or less represent the characteristics, or a cross-section, of the population from
which it is drawn.
Using secondary sources is complex and challenging. As discussed earlier, there are two
categories of sources available. These are internal and external data. There are also three types of
sources (primary, secondary, and tertiary). Primary sources are original works of research or
raw data without interpretation. Secondary sources are interpretations of primary data. Tertiary
sources are interpretations of secondary sources or more commonly, finding aids such as
indexes, bibliographies, and Internet search engines.
Information sources are generally categorized into three levels. These are:
1. Primary sources – included among the primary sources are memos, letters, complete
interviews or speeches (in audio, video or written transcripts formats), laws, regulations,
court decisions or standards, and most government data, including census, economic and
employment/labor data. Primary sources are always the most authoritative because the
information has not been filtered or interpreted by a second party. Information from primary
sources becomes the secondary literature with which a researcher supports his or her
original research. Internal sources of primary data would include inventory records,
personnel records, purchasing requisition forms, statistical control charts and other similar
data.
2. Secondary sources – these are interpretations of primary data. Encyclopedias, textbooks,
handbooks, magazine and newspaper articles, and most newscasts are considered secondary
information sources; indeed, all reference materials fall into this category. Internally, sales
analysis summaries and investor annual reports would be examples of secondary sources, as
they are compiled from a variety of primary sources. To an outsider, however, the annual
report is viewed as a primary source, as it represents the official position of the company.
3. Tertiary sources – these may be interpretations of a secondary source but are
generally represented by indexes, bibliographies, and other finding aids, e.g., Internet search engines.
NATURE OF MEASUREMENT
There are three major criteria for evaluating a measurement tool. These are:
1. Validity
2. Reliability
3. Usability or practicability.
• Validity - Degree to which a test measures what it intends to measure or the truthfulness
of the response.
• Reliability - The extent to which a test is consistent and dependable.
• Even if the respondent takes the same test twice, the test will yield the same result.
• Usability - A measure has a practical value for the research if it is economical, convenient
and interpretable.
Scale Measurement
This is the process of assigning a set of descriptors to represent the range of possible
responses a person may give in answering a question about a particular object, construct, or
factor. This process aids in determining the amount of raw data that can be obtained from asking
questions and, therefore, indirectly affects the amount of information that can be derived
from the data. Central to the amount-of-data issue is understanding that there are four basic scaling
properties (i.e., assignment, order, distance, and origin) which can be activated through scale
measurement.
1. Nominal scales - these are the most basic and provide the least amount of data. They
activate only the "assignment" scaling property; the raw data do not exhibit relative
magnitudes between categorical subsets of responses. The main data structures (or
patterns) that can be derived from nominal data are in the form of modes and frequency
distributions.
2. Ordinal scales - these require respondents to express their feelings of relative magnitude
about the given topic. Ordinal scales activate both the assignment and order scaling
properties and allow researchers to create a hierarchical pattern among the possible raw data
responses (or scale points) that determines "greater than/less than" relationships. Data
structures that can be derived from ordinal scale measurements are in the form of
medians and ranges as well as modes and frequency distributions.
3. Interval scales - this scale measurement allows the researcher to build into the scale
elements that demonstrate the existence of absolute differences between each scale point.
Normally, the raw scale descriptors will represent a distinct set of numerical ranges as
possible responses to a given question. With interval scaling designs, the distance
between each scale point and response does not have to be equal; disproportional scale
descriptors can be used. With interval raw data, researchers can develop a number of
more meaningful data structures that are based on means and standard deviations, or
create data structures that are based on modes, medians, frequency distributions, and ranges.
4. Ratio Scales - these are the only scale measurements that simultaneously activate all four
scaling properties. Considered the most sophisticated scale design, they allow researchers
to identify absolute differences between each scale point and to make absolute
comparisons between the respondents’ raw responses.
Scoring and Interpretation for Ordinal Scales using Likert Scale
LIKERT SCALE
Example:
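Since Likert data are ordinal, a common 5-point scheme scores responses from 5 (Strongly Agree) down to 1 (Strongly Disagree) and interprets the item mean against equal class intervals. A minimal sketch follows; the response values and the 0.80-wide interpretation ranges are illustrative assumptions, one common convention rather than a prescription:

```python
# Hypothetical responses of 10 respondents to one Likert item
# (5 = Strongly Agree ... 1 = Strongly Disagree)
responses = [5, 4, 4, 3, 5, 4, 2, 4, 5, 3]

mean = sum(responses) / len(responses)

# One common interpretation table with equal intervals of 0.80
if   mean >= 4.21: verbal = "Strongly Agree"
elif mean >= 3.41: verbal = "Agree"
elif mean >= 2.61: verbal = "Neutral"
elif mean >= 1.81: verbal = "Disagree"
else:              verbal = "Strongly Disagree"

print(round(mean, 2), verbal)  # mean of 3.9 falls in the "Agree" range
```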
Assessment No. 11
QUALITIES OF A GOOD RESEARCH INSTRUMENT
Learning Objectives:
VALIDITY
Validity means the degree to which a test or measuring instrument measures what it
intends to measure. The validity of a measuring instrument has to do with its soundness: what the
test or questionnaire measures, its effectiveness, and how well it can be applied.
No test or research instrument can be said to have "high" or "low" validity in the
abstract. Its validity must be determined with reference to the particular use for which the test is
being considered. The validity of a test must always be considered in relation to the purpose it
serves. Validity is always specific to some definite situation. Likewise, a valid test is
always reliable.
Types of Validity:
Validity is classified under four types, namely, content validity, concurrent validity,
predictive validity, and construct validity.
Content validity means the extent to which the content or topic of the test is truly
representative of the content of the course. It involves, essentially, the systematic examination of
the test content to determine whether it covers a representative sample of the behavior domain to
be measured. It is very important that the behavior domain to be tested must be systematically
analyzed to make certain that all major aspects are covered by the test items in correct
proportions. The domain under consideration should be fully described in advance rather than
defined after the test has been prepared.
Content validity is described by the relevance of a test to different types of criteria, such as
thorough judgments and systematic examination of relevant course syllabi and textbooks, pooled
judgments of subject-matter experts, statements of behavioral objectives, and analysis of
researcher-made test questions, among others. Thus, content validity depends on the relevance of
the individual's responses to the behavior area under consideration rather than on the apparent
relevance of the item content.
Concurrent Validity is the degree to which the test agrees or correlates with a criterion set
up as an acceptable measure. The criterion is always available at the time of testing. It is
applicable to tests employed for the diagnosis of existing status rather than for the prediction of
future outcome.
Construct Validity is the extent to which the test measures a theoretical construct or trait.
Tests involving this type of validity are those of understanding, appreciation, and interpretation of data.
RELIABILITY
Reliability means the extent to which a test is "dependable, self-consistent, and stable." In
other words, the test agrees with itself. It is concerned with the consistency of responses from
moment to moment: even if a person takes the same test twice, the test yields the same results.
However, a reliable test may not always be valid.
A. Test-Retest Method - The same test is administered twice to the same group and the
correlation coefficient is determined. The disadvantages of this method are: 1) when the
time interval is short, the respondents may recall their previous responses, which tends
to make the correlation coefficient high; 2) when the time interval is long, such factors as
unlearning and forgetting, among others, may occur and may result in a low correlation of
the test; and 3) regardless of the time interval separating the two administrations,
varying environmental conditions such as noise, temperature, lighting, and other factors
may affect the correlation coefficient of the test.
Take note that in conducting pilot tests, it is very unscientific if the pilot
sample of the study is in the same institution even if they are not subjects or
respondents of the study.
B. Parallel Forms – two forms of a test are administered to the same group, and the paired
observations are correlated. In constructing parallel forms, the two forms of the test must be constructed so
that the content, type of test items, difficulty, and instructions for administration are similar
but not identical. The Pearson product-moment correlation coefficient is the statistical tool used
to determine the correlation of parallel forms.
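The Pearson product-moment coefficient named above can be computed directly from the paired scores; a sketch, where the two sets of form scores are illustrative assumptions:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores of the same group on Form A and Form B
form_a = [78, 85, 90, 72, 88, 81, 69, 94]
form_b = [75, 88, 87, 70, 90, 80, 72, 91]
print(round(pearson_r(form_a, form_b), 3))  # a value near 1 indicates high reliability
```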
C. Split-Half – the test is administered once, but the test items are divided into two halves. The common
procedure is to divide the test into odd and even items. The two halves of the test must be
similar but not identical in content, number of items, difficulty, means, and standard
deviations. Each student thus obtains two scores, one on the odd and the other on the
even items of the same test. The scores obtained on the two halves are correlated; the
result is the reliability coefficient of a half test. Since this reliability holds only for a half test,
the reliability coefficient for the whole test is estimated by using the Spearman-Brown
formula.
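The Spearman-Brown step-up is r_whole = 2r_half / (1 + r_half). A sketch of the whole split-half procedure; the item scores are illustrative assumptions:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x))
                  * sqrt(sum((b - my) ** 2 for b in y)))

def spearman_brown(r_half):
    """Step up a half-test reliability to the whole test: 2r / (1 + r)."""
    return 2 * r_half / (1 + r_half)

# Hypothetical item scores (rows = students, columns = 6 items scored 0/1)
scores = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
]

odd  = [sum(row[0::2]) for row in scores]  # items 1, 3, 5
even = [sum(row[1::2]) for row in scores]  # items 2, 4, 6
r_half = pearson_r(odd, even)              # reliability of a half test
print(round(spearman_brown(r_half), 3))    # estimated whole-test reliability
```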
USABILITY OR PRACTICABILITY
Usability or practicability means the degree to which the research instrument can be
satisfactorily used by teachers, researchers, supervisors and school managers without undue
expenditure of time, money, and effort. In other words, usability means practicability.
Moreover, scoring is easier when all subjects are instructed to write their responses in one
column in numerical form or word and with separate answer sheets for their responses.
3. Ease of interpretation and application. Results of tests are easy to interpret and apply if
tables are provided. All scores must be given meaning from the tables of norms without the
necessity of computation. As a rule, norms should be based on both age and year level, as in
the case of school achievement tests. It is also desirable that achievement tests be
provided with separate norms for rural and urban subjects as well as for learners of various
degrees of mental ability.
4. Low cost. It is more practical if the test is low cost, material-wise. It is more economical also
if the research instrument is of low cost and can be reused by future researchers.
JUSTNESS
This is the degree to which the researcher is fair in evaluating the rating/grade of the
respondents.
MORALITY
This is the degree of confidentiality of the ratings of the respondents. Morality or ethics means that
test results and respondents' answers must be treated with utmost confidentiality.
HONESTY
Researchers must be honest in constructing the research instrument; thus, researchers must
avoid copying verbatim the contents of other books or authors without citing or acknowledging
them.
For validity, five experts in the field of study must be requested to go over the
research instrument to test its validity. Each item in the instrument has columns for 3 (retain), 2
(revise), and 1 (delete). Each expert is requested to check the appropriate option column for each
item. The researcher then computes the weighted mean per item. Items with mean values of 2.5 – 3.0
are retained; items with mean values of 1.5 – 2.4 are revised; and items with mean values of 1.0 – 1.4
are deleted.
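The per-item decision rule above is straightforward to tabulate; a sketch, where the expert ratings themselves are illustrative assumptions:

```python
# Hypothetical ratings of 5 experts for 4 instrument items
# (3 = retain, 2 = revise, 1 = delete)
ratings = {
    "Item 1": [3, 3, 3, 2, 3],
    "Item 2": [2, 2, 3, 2, 1],
    "Item 3": [1, 1, 2, 1, 1],
    "Item 4": [3, 2, 3, 3, 3],
}

def decide(mean):
    """Apply the cut-offs from the text: 2.5-3.0 retain, 1.5-2.4 revise, 1.0-1.4 delete."""
    if mean >= 2.5:
        return "retain"
    if mean >= 1.5:
        return "revise"
    return "delete"

for item, marks in ratings.items():
    mean = sum(marks) / len(marks)
    print(item, round(mean, 2), decide(mean))
```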