Practical Research 2 Q2 Module 3 1
Lesson 1: Constructing a Valid and Reliable Instrument
The Des Moines University Library defines a research instrument as follows: "Research instruments are measurement tools (for example, questionnaires or scales) designed to obtain data on a topic of interest from research subjects." A research instrument is therefore the tool or device to be used by the researcher to get the information needed for the research.
Figure 1. Adapted from Singleton & Straits (2010), retrieved from http://korbedpsych.com/R09DevelopInstruments.html
Korb (2012) adapted the figure above and states that after determining the research design, the researcher has two separate responsibilities: (1) identifying the sample and (2) determining how the variables will be measured. After the sample has been identified, the researcher must think through both the theoretical and practical meanings of the variables in order to properly measure the key variables in the study. Furthermore, Korb reminds researchers that in designing an instrument they must keep in mind the following.
• The conclusions drawn in a research study are only as good as the data that is collected.
• The data that is collected is only as good as the instrument that collects the data.
• A poorly designed instrument will lead to bad data, which will lead to bad conclusions.
• Therefore, developing a good instrument is the most important part of conducting a high-quality research study.
Researchers have two options when it comes to the research instrument: either they develop their own or they use a previously developed one. We will briefly discuss the two options.
When researchers decide to develop their own instrument, they must do a lot of work and advance preparation. Korb (2012) says that "It is very important that the instrument has been thoroughly critiqued, evaluated, and pilot tested by the student, supervisor, and others before it is administered for the actual study." She expounds that "any problems with the instrument after it has been administered will require the student to completely redo the data collection process, wasting considerable time and money."
Before drafting the instrument, Korb suggests the following:
1. Identify Other Research Studies that Study the Key Variable – This will give you a good idea of what to measure and how to effectively measure the different variables in your research.
2. Develop a Construct Definition – Also known as the theoretical definition, it can be obtained by referencing other related studies that have measured the same variable.
3. Choose an Instrument – Each variable has components and characteristics unique to it, and because of this, each variable in the study needs to be measured according to its own components and characteristics. Each variable has its own separate measurement.
Korb (2012) suggested the following steps in the development of a Research Instrument.
Steps in Developing an Instrument
1. Identify the key variables. The first step is to identify the variables you want to study and
measure.
2. Define the key variables. The next step is to define the variables in order to make clear what you are studying, especially to the respondents and readers of the research.
3. Review the literature. It will give you an idea of, or a pattern for, how to construct your instrument.
4. Consider the population. You must consider the characteristics of your population because they will determine how you construct your instrument so that the respondents will understand it.
5. Write a draft. You must write a draft of the instrument for you to present to committee members and/or your inner circle.
6. Revise the draft. After you have written the draft, rescan it for grammatical and other errors. After that, read the draft from the perspective of the respondents, taking note of the things that might confuse them or that they might not understand.
7. Pilot the draft. Have the revised draft pilot tested: give the instrument to a few people belonging to your population and ask them to tell you if any part of the instrument confuses them or is hard to understand. The objective of this step is to recalibrate the instrument.
8. Revise the draft, revise the draft, revise the draft. Revising the instrument several times over is the rule rather than the exception. Revising it before it is administered is a better option than repeating the entire process.
Korb (2012) suggested the following guidelines in developing personal information items.
1. All response options must be exhaustive. Respondents must be able to find their response in one, and only one, category.
Example:
Your respondents are Senior High School students, and you are asking their age. A poor set of options would be 16-17 and 17-18: a respondent who is only 15 years old (turning 16 in a month) or who is more than 18 years old obviously has no option that reveals his or her real age. A better set of options would be 15 and under, 16-17, 18-19, and 20 and over.
2. All response options must be mutually exclusive. The options must not overlap with one another.
Example:
It is incorrect to use 10-15 and 15-20 because those respondents whose age is 15 will
obviously have a problem choosing between your options.
3. All response options must have equal intervals. The intervals used in the options must be uniform. In addition, response categories must be specific; the options must be as detailed as possible.
Example:
If you use 5 as an interval all the options must have an interval of 5.
4. Avoid the social desirability problem. Respondents have the tendency to choose the most desirable option and avoid the most undesirable one. If you really need the information, rephrase or redesign the question and options so that no option is undesirable.
Example:
Poor: What is your father's educational level? Literate/Illiterate
Better: What is the last level of schooling that your father completed? None, Elementary, Secondary, Tertiary, Post-Graduate
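The first three guidelines above (exhaustive, mutually exclusive, equal intervals) can even be checked mechanically. A minimal Python sketch, assuming brackets are written as inclusive (low, high) pairs sorted by the low bound; the function and values here are illustrative, not from the module:

```python
# A small check for response brackets against the guidelines above:
# no overlaps (mutually exclusive), no gaps (exhaustive within the
# range covered), and equal widths (equal intervals).

def check_brackets(brackets):
    issues = []
    for (lo1, hi1), (lo2, hi2) in zip(brackets, brackets[1:]):
        if lo2 <= hi1:
            issues.append(f"overlap: {lo1}-{hi1} and {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            issues.append(f"gap between {hi1} and {lo2}")
    if len({hi - lo for lo, hi in brackets}) > 1:
        issues.append("unequal intervals")
    return issues

# The poor options from guideline 2's example: age 15 fits both brackets.
print(check_brackets([(10, 15), (15, 20)]))  # ['overlap: 10-15 and 15-20']
# Non-overlapping, gap-free, equal-width brackets pass the check.
print(check_brackets([(16, 17), (18, 19)]))  # []
```

A researcher could run such a check on any draft set of age or income brackets before piloting the questionnaire.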
Korb (2012) also suggested the following guidelines in writing items that measure the key variables.
1. Measure the variable directly – If you want to measure the classroom attendance of students, ask, "Which measures the attendance of the students better: an item in the questionnaire, or School Form No. 2 (SF 2)?" Since the construct definition of attendance is the number of days that the student attends class, SF 2 is a better measurement than an item in a questionnaire.
2. Avoid items that are the causes of the variable – Korb (2012) gives an example, “consider
attitude toward educational psychology. A cause of attitude toward educational psychology
might be how one feels about the course lecturer. You may be tempted to write an item: "I
enjoy the educational psychology lecturer." It is possible for somebody to dislike educational
psychology but like the lecturer, so this is a BAD item. All items reflect the variable itself.”
3. Avoid Effects of the variable – Korb (2012) gives an example “Because a student who has a
positive attitude is likely to attend class more, you may be tempted to write an item that says,
"I always attend educational psychology class." It is possible for somebody to strongly agree
to always attending educational psychology, but not like educational psychology class. Again,
this is a BAD item”.
4. Avoid double-barreled items – Double-barreled items carry two points or ideas in a single question; each item must make only one point. Example: "The teacher has the monopoly of knowledge in classroom discussion and the students are loving it." The students may agree that the teacher has the monopoly of knowledge in classroom discussion, but it does not follow that the students are loving it.
5. Consider some of the items to be reversely coded – This will keep the respondents alert and prevent acquiescence bias, the tendency to agree with every statement.
6. Highlight a negative word such as NO or NOT in capital and bold letters – This will help respondents who read the items quickly and might otherwise skip over key words.
7. Consider whether the participant can and/or will answer the question honestly.
8. Ensure that the question is worded in a way that is understandable to individuals in the
population under study.
9. Ensure the item is not biased or leading toward a specific response.
10. Ensure that response categories are specific enough to get similar responses across participants.
11. Write twice as many items per variable as you think are necessary. Many items might be discarded, so it is good to have extras to fill the gap.
12. In using a Likert-scale type of options, try to have an even number of options to avoid the tendency of the respondents to choose the middle.
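The reverse-coding guideline above can be illustrated with a short, generic sketch (not a procedure from the module): on a 1-to-k Likert scale, a reverse-worded item is rescored as (k + 1) minus the response.

```python
# Reverse coding on a 1..k Likert scale: a reverse-worded item is
# rescored as (k + 1) - response, so strong agreement with a negative
# statement scores like strong disagreement with a positive one.

def reverse_code(responses, k):
    return [(k + 1) - r for r in responses]

# 6-point scale, i.e. an even number of options with no middle category.
print(reverse_code([1, 2, 6, 4], k=6))  # [6, 5, 1, 3]
```

Rescoring this way before totalling ensures that every item contributes to the scale in the same direction.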
When using a previously developed instrument, researchers may either adopt it or adapt it.
1. Adopting the instrument means that you will take the instrument as it is, verbatim. You do not need to test the validity and reliability of the instrument, because you can apply the validity and reliability scores reported in the previous study from which you got the instrument.
2. Adapting the instrument means that you significantly alter the instrument. If you alter the instrument, the validity and reliability scores of the original instrument will not apply to your instrument; you have to compute new validity and reliability scores.
Calmorin and Calmorin state that there are three qualities of a good research instrument.
1. Usability
2. Validity
3. Reliability
Calmorin and Calmorin (1995) defined usability as the degree to which the research instrument can be satisfactorily used by anyone without undue expenditure of time, money, and effort. In other words, a usable instrument is practical. The qualities of a usable instrument include:
1. Ease of Administration
2. Ease of Scoring
3. Ease of Interpretation and application
4. Low cost
5. Proper mechanical make-up or lay-out
“Validity means the degree to which a test or measuring instrument measures what it intends to measure,” according to Calmorin and Calmorin, while Subong and Beldia (2005) define validity as “the appropriateness, meaningfulness, and usefulness of the inferences a researcher makes based on the data he collects.” Furthermore, they state that there are three types of validity.
1. Content-related evidence of validity – Refers to the format and content of the instrument. A table of specifications must be prepared to check whether all the topics were covered and evenly distributed, and whether the required aspects were identified, in order to measure what is supposed to be measured in the research.
2. Criterion-related evidence of validity – Refers to the relationship between scores obtained using the instrument and scores obtained using one or more other instruments or measures, known as the criterion. Researchers compare the scores from the instrument being validated to the scores from the independent criterion. If the scores from the instrument being validated parallel the results from the independent criterion, then we can say that there is criterion-related evidence of validity.
3. Construct-related evidence of validity – Refers to the extent to which the instrument measures the psychological construct or trait it is intended to measure.
Reliability refers to the degree to which the result is dependable, stable, and consistent.
The validity coefficient, according to Subong and Beldia (2005), “expresses the relationship that exists between scores of the same individuals on two separate instruments”. On the other hand, the reliability coefficient expresses a relationship between the scores of the same person using the same instrument at two different times, or between two parts of the same instrument. Reliability coefficients range from 0.00 to 1.00.
Methods in Testing the Reliability of an Instrument
1. Test-Retest Method – It involves administering the same test twice to the same group after a predetermined time period has passed.
Spearman rho may be used to correlate data for this method. The equation is:
rs = 1 - [6ΣD²] / [N³ - N]
Calmorin and Calmorin provided a step-by-step process for computing the Spearman rho.
Step 1. Rank the scores of the subjects/respondents from highest to lowest in the first administration.
Step 2. Rank the scores in the second administration in the same manner as in Step 1.
Step 3. Get the difference (D) between each respondent's two ranks, square each difference, and get the sum (ΣD²).
Step 4. Substitute ΣD² and N (the number of respondents) into the formula.
Example: The following two sets of scores in Research were obtained from the same respondents using a researcher-made instrument administered at different times. Compute the reliability coefficient of the instrument. (In this example, N = 10 and ΣD² = 26.)
Rs = 1 - 6(26) / (10³ - 10)
   = 1 - 156/990
   = 1 - .1576
Rs = .84
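The same formula is easy to reproduce in a few lines of Python; this is only a sketch of the computation, using the ΣD² and N from the worked example:

```python
# Spearman rho from the sum of squared rank differences (ΣD²) and the
# number of respondents (N): rs = 1 - 6ΣD² / (N³ - N).

def spearman_rho(sum_d_squared, n):
    return 1 - (6 * sum_d_squared) / (n ** 3 - n)

# The worked example: ΣD² = 26, N = 10 → 1 - 156/990 ≈ 0.84
print(round(spearman_rho(26, 10), 2))  # 0.84
```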
3. Split-half Method – The test items are divided into two halves that are equal in content and difficulty; usually an odd-and-even scheme is used. The scores of the respondents in the two halves are correlated, and the whole-test reliability rwt is then obtained from the half-test correlation rht using:

rwt = 2(rht) / (1 + rht)
For instance, a test is administered to ten students as a pilot sample to test the reliability of the odd and even items. (In this example, ΣD² = 16.50 and N = 10.)
rht = 1 - [6ΣD²] / [N³ - N]
    = 1 - 6(16.50) / (10³ - 10)
    = 1 - 99/990
rht = .90

rwt = 2(rht) / (1 + rht)
    = 2(.90) / (1 + .90)
    = 1.80/1.90
rwt = .95
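The whole-test correction above (the Spearman-Brown formula) is a one-liner; a sketch using the example's half-test correlation of .90:

```python
# Spearman-Brown correction: step up the correlation between two half
# tests (rht) to an estimate of whole-test reliability (rwt).

def spearman_brown(r_half):
    return (2 * r_half) / (1 + r_half)

# rht = .90 as in the example: 2(.90)/1.90 ≈ 0.95
print(round(spearman_brown(0.90), 2))  # 0.95
```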
4. Kuder-Richardson Formula 20 (KR-20) – used when the test items are scored dichotomously (right or wrong). Values range from 0 to 1. A high value indicates reliability, while too high a value (in excess of .90) indicates a homogeneous test. The formula is:

ρKR20 = [k / (k - 1)] × [1 - (Σpjqj) / σ²]

where k is the number of items, pj is the proportion of students who answered item j correctly, qj = 1 - pj, and σ² is the variance of the students' total scores. In the worked example from Real Statistics Using Excel (a test of 11 items answered by 12 students), ρKR20 = .738, which shows that the test has high reliability.

5. Kuder-Richardson Formula 21 (KR-21) – where the questions in a test all have approximately the same difficulty (i.e., the mean score of each question is approximately equal to the mean score of all the questions), a simplified version of KR-20 may be used:

ρKR21 = [k / (k - 1)] × [1 - μ(k - μ) / (kσ²)]

where μ is the population mean score (approximated by the observed mean score). For the same example, μ = 69/12 = 5.75 and σ² = 6.5208, so

ρKR21 = (11/10) × [1 - 5.75(11 - 5.75) / (11 × 6.5208)] = .637

Note that ρKR21 typically underestimates the reliability of a test compared to ρKR20.
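Both Kuder-Richardson formulas can be sketched in Python. This is an illustrative implementation, not the module's own software: kr20 takes a matrix of 0/1 scores (an invented matrix in the test), while kr21 needs only the summary figures quoted above (k = 11, μ = 5.75, σ² = 6.5208).

```python
# KR-20 from a matrix of dichotomous scores (rows = students, columns =
# items), and KR-21 from summary statistics only. Population variance is
# used, matching the worked example above.

def kr20(rows):
    k, n = len(rows[0]), len(rows)
    totals = [sum(r) for r in rows]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(r[j] for r in rows) / n  # proportion correct on item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

def kr21(k, mean, var):
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * var))

# The figures quoted above: k = 11, mean = 5.75, variance = 6.5208
print(round(kr21(11, 5.75, 6.5208), 3))  # 0.637
```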
6. Cronbach alpha (α), or coefficient alpha, was developed by Lee Cronbach in 1951. It is based on the internal consistency of the items in the test. It is flexible and can be used with test formats that have more than one correct answer, and Likert-scale questions are compatible with it. All the above-mentioned tests have software packages that the student can use.
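As a minimal sketch of the standard coefficient-alpha formula, α = [k/(k-1)] × [1 - Σ(item variances)/variance of total scores]; the response matrix below is hypothetical, not data from the module:

```python
# Cronbach's alpha for a matrix of Likert-type responses (rows =
# respondents, columns = items), using population variances.

def cronbach_alpha(rows):
    k, n = len(rows[0]), len(rows)

    def pvar(xs):  # population variance of a list of numbers
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(pvar([r[j] for r in rows]) for j in range(k))
    total_var = pvar([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Four respondents answering three 5-point Likert items (hypothetical).
data = [[5, 4, 5], [4, 4, 4], [3, 2, 2], [2, 1, 2]]
print(round(cronbach_alpha(data), 2))  # 0.98
```

In practice students would use a statistics package, but the formula itself is this short.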
Answer the following questions. Write your answers on a separate sheet of paper.
1. Using your own words, discuss the process of developing a research instrument.
2. In your own words, explain which is the better option in choosing your research instrument: developing your own instrument or looking for a previously made one.
3. When can you possibly adopt or adapt a research instrument?
Scale to be used:
How will it be validated?
Using the information above, write the Research Instrument part of your research paper. Fill in the table below. Use a separate sheet for your answer.
Research Title:
Research Instrument:
References
Book
Calmorin, Laurentina P. and Calmorin, Melchor A. Methods of Research and Thesis Writing. Quezon City: Rex Printing Company, 1995.
Subong Jr., Pablo E. and Beldia, D. Statistics for Research: Application in Research, Thesis and Dissertation Writing, and Statistical Data Management Using SPSS Software. Quezon City: Rex Printing Company, 2005.
Internet
Charles Zaiontz. Real Statistics Using Excel. Retrieved October 30, 2020. https://www.real-statistics.com/
Des Moines University Library. CINAHL (Cumulative Index to Nursing and Allied Health
Literature) (N.D). Retrieved November 19, 2018. https://lib.dmu.edu/
Louie Diangson. Online Food Delivery Apps in the Philippines. Retrieved December 28, 2018.
https://www.yugatech.com
Manila Standard Showbiz. Jessica Soho Makes History at Reader's Digest Trusted Brand Awards 2018 (N.D.). Retrieved April 30, 2018. https://www.manilastandard.net/bel