Practical Research 2 Q2 Module 3 1

This document discusses how to construct a valid and reliable research instrument. It defines a research instrument as a measurement tool designed to obtain data on a topic of interest. The key steps outlined include: 1) identifying the variables to be measured, 2) defining the variables, 3) reviewing literature to inform instrument design, 4) writing draft items and response options, 5) piloting the draft with a sample to identify issues, and 6) revising the instrument based on piloting. Guidelines are provided for writing clear, unbiased items and exhaustive, mutually exclusive response options to accurately measure the intended variables. The goal is to develop an instrument that will produce high-quality data to support valid conclusions.

PRACTICAL RESEARCH 2

Lesson 1: Constructing a Valid and Reliable Instrument

After going through this module, you are expected to:


1. Define what a research instrument is.
2. Identify the characteristics of a good research instrument.
3. Identify each step in constructing a research instrument.
4. Establish the validity of the research instrument.
5. Establish the reliability of the research instrument.
6. Construct a valid and reliable research instrument.

The Des Moines University Library defines research instruments as "measurement tools (for example, questionnaires or scales) designed to obtain data on a topic of interest from research subjects." A research instrument is therefore the tool or device to be used by the researcher to obtain the information needed for the research.

Figure 1 Adapted from Singleton & Straits (2010) retrieved from http://korbedpsych.com/R09DevelopInstruments.html
Korb (2012) adapted the figure above and states that after determining the research design, the researcher has two separate responsibilities: (1) identifying the sample and (2) determining how the variables will be measured. After the sample has been identified, the researcher must think about both the theoretical and practical meaning of the variables in order to properly measure the key variables in the study. Furthermore, Korb reminds researchers to keep the following in mind when designing an instrument:

• The conclusions drawn in a research study are only as good as the data that is collected.
• The data that is collected is only as good as the instrument that collects the data.
• A poorly designed instrument will lead to bad data, which will lead to bad conclusions.
• Therefore, developing a good instrument is the most important part of conducting a high-
quality research study.

Researchers have two options when it comes to the research instrument: either they develop their own or they use a previously developed one. We will briefly discuss the two options.
When researchers decide to develop their own, they must do a great deal of work and advance preparation. Korb (2012) says that "It is very important that the instrument has been thoroughly critiqued, evaluated, and pilot tested by the student, supervisor, and others before it is administered for the actual study." She expounds that "any problems with the instrument after it has been administered will require the student to completely redo the data collection process, wasting considerable time and money."

Activities Before Developing the Research Instrument

Identify Other Research Studies that Study the Key Variable – This will give you a good idea of what to measure and how to effectively measure the different variables in your research.

1. Develop a Construct Definition – This is also known as the theoretical definition, and you can derive it by referencing other related studies that have measured the same variable.

2. Operationalize the Construct Definition – The construct definition is somewhat abstract; therefore, you must translate it into a more concrete way of measuring it quantitatively.

3. Choose an Instrument – Each variable has components and characteristics unique to it, and because of this, each variable in the study needs to be measured according to its own components and characteristics. Each variable has its own separate measurement.

4. Write the Operational Definition – Part of the planning process of developing a research instrument is writing the operational definition. It includes the construct definition, the type of measurement, and the specifics of measurement. The researchers must present the operational definitions of the variables when seeking the adviser's approval of the instrument.

Korb (2012) suggested the following steps in the development of a Research Instrument.
Steps in Developing an Instrument
1. Identify the key variables. The first step is to identify the variables you want to study and
measure.

2. Define the key variables. The next step is to define the variables in order to make clear what you are studying, especially to the respondents and readers of the research.

3. Review the literature. It will give you an idea of, or a pattern for, how to construct your instrument.

4. Consider the population. You must consider the characteristics of your population because they will determine how you construct your instrument so that the respondents will understand it.

5. Write a draft. You must write a draft of the instrument to present to your group members and/or inner circle.

6. Revise the draft. After you have written the draft, check it for grammatical and other errors. After that, read the draft from the perspective of the respondents and take note of anything that might confuse them or that they might not understand.

7. Pilot the draft. Have the revised draft pilot tested. Give the instrument to a few people belonging to your population and ask them to inform you if any part of the instrument confuses them or is hard to understand. The objective of this step is to recalibrate the instrument.

8. Revise the draft, revise the draft, revise the draft. Revising the instrument several times over is the rule rather than the exception. Revising it before it is administered is a better option than repeating the entire process.

Guidelines in Developing a Questionnaire

Korb (2012) suggested the following guidelines in developing personal information items.

1. All response options must be exhaustive. Respondents must be able to find their response in one and only one category.

Example:
Your respondents are Senior High School students, and you are asking their age. A poor set of options would be 16-17 and 17-18. If a respondent is only 15 years old and will be turning 16 in a month, or if the respondent is more than 18 years old, the respondent obviously has no option that reveals his or her real age. A better set of options would be 15 and under, 16-17, 18-19, and 20 and over.
2. All response options must be mutually exclusive. All options must be distinct from one another.

Example:
It is incorrect to use 10-15 and 15-20 because respondents whose age is 15 will obviously have a problem choosing between the two options.

3. All response options must have equal intervals. The intervals used in the options must be uniform, and the response categories must be specific, that is, as detailed as possible.

Example:
If you use 5 as an interval all the options must have an interval of 5.

4. Avoid the social desirability problem. Respondents tend to choose the most desirable option and avoid the most undesirable one. If you really need the information, rephrase or redesign the question and its options so that no option is undesirable.

Example:
Poor: What is your father's educational level? Literate/Illiterate
Better: What is the last level of schooling that your father completed? None, Elementary, Secondary, Tertiary, Post-Graduate
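One way to satisfy both the exhaustive and the mutually exclusive rules at once is to define age brackets as half-open numeric ranges and derive the labels from them. The following Python sketch is only an illustration (the brackets and function names are ours, not prescribed by the module); it shows that every age then maps to exactly one option:

```python
# Hypothetical age brackets written as half-open ranges [lo, hi):
# by construction every age falls in exactly one bracket, so the
# options are both exhaustive and mutually exclusive.
BRACKETS = [
    (0, 16, "15 and under"),
    (16, 18, "16-17"),
    (18, 20, "18-19"),
    (20, None, "20 and over"),  # open-ended top bracket keeps the list exhaustive
]

def bracket_for(age):
    """Return the single response option that matches the given age."""
    for lo, hi, label in BRACKETS:
        if age >= lo and (hi is None or age < hi):
            return label

print(bracket_for(17))  # 16-17
print(bracket_for(19))  # 18-19
```

Note that a closed-range design such as 10-15 and 15-20 would fail this check, because age 15 would match two brackets.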

Tips in Writing Questionnaire Items

1. Measure the variable directly – If you want to measure the classroom attendance of students, ask yourself, "Which measures the attendance of the students better: an item in the questionnaire or School Form No. 2 (SF 2)?" Since the construct definition of attendance is the number of days that the student attends class, SF 2 is a better measurement than an item in a questionnaire.

2. Avoid items that are causes of the variable – Korb (2012) gives an example: "consider attitude toward educational psychology. A cause of attitude toward educational psychology might be how one feels about the course lecturer. You may be tempted to write an item: 'I enjoy the educational psychology lecturer.' It is possible for somebody to dislike educational psychology but like the lecturer, so this is a BAD item. All items must reflect the variable itself."

3. Avoid Effects of the variable – Korb (2012) gives an example “Because a student who has a
positive attitude is likely to attend class more, you may be tempted to write an item that says,
"I always attend educational psychology class." It is possible for somebody to strongly agree
to always attending educational psychology, but not like educational psychology class. Again,
this is a BAD item”.
4. Avoid double-barreled items – Double-barreled items carry two points or ideas in a single question; each item must make only one point. Example: "The teacher has a monopoly of knowledge in classroom discussion and the students love it." The students may agree that the teacher has a monopoly of knowledge in classroom discussion, but it does not follow that they love it.

5. Consider reverse-coding some of the items – This keeps the respondents alert and prevents acquiescence bias, which is the tendency to agree with every statement.

6. Highlight a negative word such as NO or NOT in capital and bold letters – This will help respondents who read the items quickly and may otherwise skip over key words.

7. Consider whether the participant can and/or will answer the question honestly.

8. Ensure that the question is worded in a way that is understandable to individuals in the
population under study.
9. Ensure the item is not biased or leading toward a specific response.

10. Item options must contain all possible responses.

11. Ensure that response categories are specific enough to get similar responses across participants.

12. Write twice as many items per variable as you think are necessary. Many items might be discarded, so it is good to have extras to fill the gap.

13. When using a Likert-scale type of options, try to have an even number of options to avoid the tendency of respondents to choose the middle.

If you choose to use a pre-existing instrument to measure a variable in your research, there are two ways to use that instrument in your study: either you adopt or you adapt the instrument.

1. Adopting the instrument means that you take the instrument as it is, verbatim. You do not need to test the validity and reliability of the instrument, because you can apply the validity and reliability scores of the instrument from the previous study where you obtained it.
2. Adapting the instrument means that you significantly alter the instrument. If you alter the instrument, the validity and reliability scores of the original instrument will not apply to your instrument; you have to compute new validity and reliability scores.

Calmorin and Calmorin state that there are three qualities of a good research instrument.

1. Usability
2. Validity
3. Reliability

Calmorin and Calmorin (1995) defined usability as the degree to which the research instrument can be satisfactorily used by anyone without undue expenditure of time, money, and effort. In other words, the instrument must be practical.

Factors that Determine Usability

1. Ease of Administration
2. Ease of Scoring
3. Ease of Interpretation and application
4. Low cost
5. Proper mechanical make-up or lay-out

"Validity means the degree to which a test or measuring instrument measures what it intends to measure," according to Calmorin and Calmorin, while Subong and Beldia (2005) define validity as "the appropriateness, meaningfulness, and usefulness of the inferences a researcher makes based on the data he collects." Furthermore, they state that there are three types of validity.

1. Content-related evidence of validity – It refers to the format and content of the instrument. A table of specifications must be prepared to check whether all the topics were covered and evenly distributed, and whether the required aspects were identified, in order to measure what is supposed to be measured in the research.

2. Criterion-related evidence of validity – It refers to the relationship between scores obtained using the instrument and scores obtained using one or more other instruments or measures, known as the criterion. Researchers compare the scores from the instrument being validated to the scores from the independent criterion. If the scores from the instrument being validated parallel the results from the independent criterion, then we can say that there is criterion-related evidence of validity.

3. Construct-related evidence of validity – Subong and Beldia refer to it as the psychological construct or characteristic being measured, while Calmorin and Calmorin define it as the extent to which the test measures a theoretical construct or trait. It is like comparing your idea to different sources.

Reliability refers to the degree to which the result is dependable, stable, and consistent.

The validity coefficient, according to Subong and Beldia (2005), "expresses the relationship that exists between scores of the same individuals on two separate instruments". On the other hand, the reliability coefficient expresses the relationship between the scores of the same person using the same instrument at two different times, or between two parts of the same instrument. Reliability coefficients range from 0.00 to 1.00.
Methods in Testing the Reliability of an Instrument

1. Test-Retest Method – It involves administering the same test twice to the same group after a predetermined time period has passed.

Spearman rho may be used to correlate data for this method. The equation is:

rs = 1 - (6ΣD²) / (N³ - N)

where:
rs = Spearman rho
ΣD² = sum of the squared differences between ranks
N = total number of cases

Calmorin and Calmorin provided a step-by-step process for computing the Spearman rho.

Step 1. Rank the scores of the subjects/respondents from highest to lowest in the first administration.

Step 2. Rank the scores in the second administration in the same manner as in Step 1.

Step 3. Determine the difference in ranks for every pair of ranks.

Step 4. Square each difference to get D².

Step 5. Sum the squared differences to find ΣD².

Step 6. Compute the Spearman rho.

Example: The following two sets of scores in Research were obtained from the same respondents using a researcher-made instrument administered at different times. Compute the reliability coefficient of the instrument.

Respondent   Score in 1st Admin.   Score in 2nd Admin.   R1    R2     D    D²
1                  40                    44               7     4     3     9
2                  42                    43               6     5     1     1
3                  43                    42               5     6    -1     1
4                  44                    41               4     7    -3     9
5                  39                    39               8     9    -1     1
6                  38                    40               9     8     1     1
7                  45                    49               3     1     2     4
8                  30                    32              10    10     0     0
9                  46                    47               2     3    -1     1
10                 47                    48               1     2     1     1
Total                                                                      28

rs = 1 - 6(28) / (10³ - 10)
   = 1 - 168/990
   = 1 - 0.1697
   = 0.8303

This indicates a high relationship, meaning the achievement scores in Research are reliable.
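The hand computation above can be checked mechanically. Here is a minimal Python sketch (standard library only; the helper names are ours, not from any cited source) that re-ranks the two sets of scores from the example and applies the Spearman rho formula — a useful guard against small arithmetic slips in hand-worked tables:

```python
def rank_desc(scores):
    # Rank 1 = highest score; tied scores share the average of their positions.
    ordered = sorted(scores, reverse=True)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == s) / ordered.count(s)
            for s in scores]

def spearman_rho(x, y):
    # rs = 1 - 6*sum(D^2) / (N^3 - N), with D = difference between paired ranks.
    rx, ry = rank_desc(x), rank_desc(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n ** 3 - n)

first_admin  = [40, 42, 43, 44, 39, 38, 45, 30, 46, 47]
second_admin = [44, 43, 42, 41, 39, 40, 49, 32, 47, 48]
print(round(spearman_rho(first_admin, second_admin), 4))  # 0.8303
```

The same function can be reused for any two administrations of the same instrument, which is the whole point of the test-retest method.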

2. Parallel-Forms or Equivalent-Forms Method – In this method, two different but equivalent forms of an instrument are administered to the same group of respondents.
The formula for the parallel-forms or equivalent-forms method is the same as in the test-retest method. The only difference is that you are comparing the scores coming from two different but equivalent instruments rather than the scores from the same instrument administered at different times.

3. Split-Half Method – The test items are divided into two halves that are equal in content and difficulty; usually an odd-and-even scheme is used. The scores of the respondents on the two halves are then correlated.

rwt = 2(rht) / (1 + rht)

where:
rwt = reliability of the whole test
rht = reliability of the half test, computed as in the test-retest method

For instance, a test is administered to ten students as a pilot sample to test the reliability of the odd and even items. The results are shown below.

Respondent   Score in Odd Items   Score in Even Items   Rank (Odd)   Rank (Even)     D     D²
1                  23                   30                  9            7.5        1.5   2.25
2                  25                   24                  7.5          9.5       -2     4
3                  27                   30                  6            7.5       -1.5   2.25
4                  35                   40                  5            4          1     1
5                  48                   55                  3            1.5        1.5   2.25
6                  21                   24                 10            9.5        0.5   0.25
7                  25                   35                  7.5          6          1.5   2.25
8                  50                   51                  2            3         -1     1
9                  38                   38                  4            5         -1     1
10                 55                   55                  1            1.5       -0.5   0.25
Total                                                                                    16.5

rht = 1 - 6ΣD² / (N³ - N)
    = 1 - 6(16.50)/990
    = 1 - 0.10
rht = 0.90

rwt = 2(rht) / (1 + rht)
    = 2(0.90) / (1 + 0.90)
rwt = 0.95 (very high relationship)
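The split-half computation extends the Spearman rho check with the Spearman-Brown step-up formula. The sketch below (standard library only; the helper names are ours) reproduces the odd/even example:

```python
def rank_desc(scores):
    # Rank 1 = highest score; tied scores share the average of their positions.
    ordered = sorted(scores, reverse=True)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == s) / ordered.count(s)
            for s in scores]

def spearman_rho(x, y):
    rx, ry = rank_desc(x), rank_desc(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n ** 3 - n)

def spearman_brown(r_half):
    # rwt = 2*rht / (1 + rht): steps the half-test reliability up to the whole test.
    return 2 * r_half / (1 + r_half)

odd_items  = [23, 25, 27, 35, 48, 21, 25, 50, 38, 55]
even_items = [30, 24, 30, 40, 55, 24, 35, 51, 38, 55]
r_ht = spearman_rho(odd_items, even_items)
r_wt = spearman_brown(r_ht)
print(round(r_ht, 2), round(r_wt, 2))  # 0.9 0.95
```

The step-up is needed because correlating two half-length tests understates the reliability of the full-length instrument.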

4. Kuder-Richardson Approach – This is the most frequently used formula for computing the consistency of an instrument. Kuder and Richardson devised it in 1937 for tests in which respondents receive a score of one or zero for each item. The following formula was lifted from real-statistics.com (https://www.real-statistics.com/reliability/internal-consistency-reliability/kuder-richardson-formula-20/) with the intention of explaining the Kuder-Richardson approach:

KR20 = [k / (k - 1)] × [1 - (Σ pjqj) / σ²]

where the sum Σ pjqj runs over all k questions, and
k = number of questions
pj = proportion of people in the sample who answered question j correctly
qj = 1 - pj, the proportion of people in the sample who did not answer question j correctly
σ² = variance of the total scores of all the people taking the test = VARP(R1), where R1 = array containing the total scores of all the people taking the test.

Values range from 0 to 1. A high value indicates reliability, while too high a value (in excess
of .90) indicates a homogeneous test.

Example 1: A questionnaire with 11 questions is administered to 12 students. The results are listed in the upper portion of Figure 1. Determine the reliability of the questionnaire using Kuder and Richardson Formula 20.
Figure 1 – Kuder and Richardson Formula 20 for Example 1

The values of p in row 18 are the percentage of students who answered that question
correctly – e.g. the formula in cell B18 is =B16/COUNT(B4:B15). The values of q in row 19 are
the percentage of students who answered that question incorrectly – e.g. the formula in cell B19
is =1–B18. The values of pq are simply the product of the p and q values, with the sum given in
cell M20.
We can calculate ρKR20 as described in Figure 2.

CELL   ENTITY    FORMULA
B22    k         =COUNTA(B3:L3)
B23    Σpjqj     =M20
B24    σ²        =VARP(M4:M15)
B25    ρKR20     =(B22/(B22-1))*(1-B23/B24)

Figure 2 – Key formulas for the worksheet in Figure 1

The value ρKR20 = 0.738 shows that the test has high reliability.
Real Statistics Function: The Real Statistics Resource Pack contains the following
supplemental function:
KUDER(R1) = KR20 coefficient for the data in range R1.
Observation: For Example 1, KUDER(B4:L15) = .738.
Observation: Where the questions in a test all have approximately the same difficulty (i.e. the
mean score of each question is approximately equal to the mean score of all the questions), then
a simplified version of Kuder and Richardson Formula 20 is Kuder and Richardson Formula 21,
defined as follows:

ρKR21 = [k / (k - 1)] × [1 - μ(k - μ) / (kσ²)]

where μ is the population mean score (approximated by the observed mean score). For Example 1, μ = 69/12 = 5.75, and so

ρKR21 = [11/10] × [1 - 5.75(11 - 5.75) / (11 × 6.5208)] = .637

Note that ρKR21 typically underestimates the reliability of a test compared to ρKR20.
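As an illustration of how KR20 and KR21 are computed, here is a short Python sketch. The pilot data are hypothetical (the Figure 1 data are not reproduced in this module), so the resulting coefficients differ from the .738 and .637 figures above; the function names are ours:

```python
def variance(xs):
    # Population variance, matching Excel's VARP.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def kr20(data):
    # data: one row per respondent, one 0/1 entry per question.
    k = len(data[0])
    n = len(data)
    p = [sum(row[j] for row in data) / n for j in range(k)]  # proportion correct
    sum_pq = sum(pj * (1 - pj) for pj in p)
    totals = [sum(row) for row in data]
    return (k / (k - 1)) * (1 - sum_pq / variance(totals))

def kr21(data):
    # Simplified formula, appropriate when all items have similar difficulty.
    k = len(data[0])
    totals = [sum(row) for row in data]
    mu = sum(totals) / len(totals)
    return (k / (k - 1)) * (1 - mu * (k - mu) / (k * variance(totals)))

# Hypothetical 4-respondent, 3-question pilot data (1 = correct, 0 = incorrect).
data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(round(kr20(data), 2), round(kr21(data), 2))  # 0.75 0.6
```

Consistent with the note above, KR21 (0.6) comes out lower than KR20 (0.75) on this toy data, since the three items here do not have equal difficulty.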

5. Internal-Consistency Method – Subong and Beldia (2005) defined it as follows: "Estimating or determining reliability of an instrument through single administration of an instrument is being called the internal-consistency method." The respondents complete one instrument at a time, that is, requiring only a single administration of an instrument. For this reason, this is the easiest form of reliability to investigate.

6. Cronbach's Alpha – Cronbach's alpha (α), or coefficient alpha, was developed by Lee Cronbach in 1951. It is based on the internal consistency of the items in the test. It is flexible and can be used with test formats that have more than one correct answer; a Likert-scale type of question is compatible with Cronbach's alpha. All of the above-mentioned tests have software packages that students can use.
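Since Cronbach's alpha is usually obtained from a software package, a short sketch helps show what those packages compute. The standard formula is α = [k/(k-1)] × (1 - Σ item variances / variance of totals); the ratings below are hypothetical, and the function names are ours:

```python
def variance(xs):
    # Population variance, matching Excel's VARP.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(data):
    # data: one row per respondent, one Likert rating per item.
    k = len(data[0])
    item_vars = [variance([row[j] for row in data]) for j in range(k)]
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: 4 respondents x 3 Likert items (scale of 1-5).
ratings = [[2, 3, 3], [4, 4, 5], [3, 3, 4], [5, 5, 5]]
print(round(cronbach_alpha(ratings), 2))  # 0.96
```

Unlike KR20, which requires one/zero scoring, this formula accepts any numeric item scores, which is why it suits Likert-scale instruments.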

FOR MODULAR STUDENTS ONLY:

Answer the following questions. Write your answers on a separate sheet of paper.

1. Using your own words, discuss the process of developing a research instrument.

2. In your own words, explain which is the better option in choosing your research instrument: developing your own instrument or looking for a previously made one.

3. When can you adopt or adapt a research instrument?

4. Why is it important to plan the data collection procedure?

GROUP ACTIVITY: FOR ALL MODALITIES


Construct your research instruments and establish validity and reliability.
Directions: Fill out the guide table below to be able to create a good research instrument for your
study. Use a separate sheet of paper for your answer.

• The objective of the research instrument
• Factors to be measured in the instrument
• Number of items per factor
• Scale to be used
• How will it be validated?
• Who will validate the instrument?
• How can you establish the reliability of the instrument?

Using the information above, write the Research Instrument part of your research paper. Fill the
table below. Use a separate sheet for your answer.

Research Title:

Research Instrument:

References
Book

Calmorin, Laurentina P. and Calmorin, Melchor A. Methods of Research and Thesis Writing. Quezon City: Rex Printing Company, 1995.

Subong Jr., Pablo E. and Beldia, D. Statistics for Research: Application in Research, Thesis and Dissertation Writing, and Statistical Data Management Using SPSS Software. Quezon City: Rex Printing Company, 2005.

Internet

Zaiontz, Charles. Real Statistics Using Excel. Retrieved October 30, 2020. https://www.real-statistics.com/
Des Moines University Library. CINAHL (Cumulative Index to Nursing and Allied Health
Literature) (N.D). Retrieved November 19, 2018. https://lib.dmu.edu/


Korb, Katrina A. Conducting Educational Research (2012). http://korbedpsych.com/R09DevelopInstruments.html


