Assignment 02 Procedural Booklet
Information
IMPORTANT NOTE
Assignment 02 and Assignment 03 involve the writing and assessment of psychological questionnaire items, much the same as was the case with Assignment 01. These two assignments constitute a group. You cannot submit Assignment 03 unless you have submitted Assignment 02.
Note that you have to submit Assignment 02 online as a PDF document. Also, you have to submit Assignment 02 via myUnisa (myModules) and NOT via myAdmin or any other platform; otherwise, your assignment will not be included in the peer review process.
Review Scale
Resource material
The review scale is used to assess interview questions as well as questionnaire items. In the
case of an interview, one refers to ‘the question’, and in the case of a questionnaire one refers
to ‘the item’.
Each question (in the case of an interview) or each item (in the case of a questionnaire) is
reviewed. For example, an interview involving 10 questions has 30 review ratings – three
ratings for each question/item used in the interview/questionnaire.
Rate 1 if none of a, b, c
Rate 2 if one of a, b, c
Rate 4 if two of a, b, c
Rate 5 if a+b+c
Interview questions and questionnaire items
The review scale is used to review interview questions as well as questionnaire items. The only
difference between interview questions and questionnaire items is the format or structure of
the questions. Interview questions may be broader questions, requiring more lengthy and
detailed responses, whereas questionnaire items may be more structured and require the
respondent to select answers from predefined lists or scales. In an interview, one may formulate
follow-up questions in light of the respondent’s answer to a previous question, whereas
questionnaires contain a fixed set of items. However, in the case of both interviews and
questionnaires, the interview questions and the questionnaire items should be:
- formulated correctly
- grounded in theory
- suitable for practical use.
The review scale consists of review criteria and rules to determine ratings. The criteria refer to
the aspects or features one has to look for when reviewing an interview question (or
questionnaire item). The rating rules are rules that indicate how criteria have to be combined
to obtain a particular rating.
Review criteria
One needs assessment criteria to review an interview or a questionnaire. Below are the criteria
that are used to assess the quality of an interview question/questionnaire item with reference to
its generic qualities, namely formulation, theoretical groundedness and practical use. The
criteria are numbered a, b, c, etcetera. (When these criteria are used to assess the quality of a
questionnaire item one refers to ‘the item’ instead of ‘the question’).
Rating rules
Ratings range from 1 to 5. A rating of 1 is ‘very poor’ because none (or few) of the criteria are
met. A rating of 5 is ‘very good’ because all the criteria are satisfied.
Here are the rules for combining the criteria that are used to determine whether a question/item
has been formulated correctly:
Rate 1 if none of a, b, c
Rate 2 if one of a, b, c
Rate 4 if two of a, b, c
Rate 5 if a+b+c
- The first rule states that a rating of 1 is assigned if none of the criteria have been met.
- The second rule states that a rating of 2 is assigned if any ONE of criteria ‘a’, ‘b’, or ‘c’ has
been met.
- The third rule states that a rating of 4 is assigned if any TWO of criteria ‘a’, ‘b’, or ‘c’ have
been met.
- The fourth rule states that a rating of 5 is assigned if criteria ‘a’, ‘b’, AND ‘c’ have been
met.
Although a 5-point scale offers five possible ratings, namely 1, 2, 3, 4 and 5, all ratings are not
always used. The lowest value (in this case 1) and the highest value (in this case 5) are always
defined, but some of the values in between may be left undefined. For example, if no rule is
provided for rating 3 (as is the case with this set of criteria), the only possible ratings would be
1, 2, 4 or 5. No rating of 3 can be given.
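To make the logic of these rules concrete, here is a minimal illustrative sketch in Python (the code, the function name and the boolean parameters are additions for illustration only and are not part of the assignment). The rating depends solely on how many of criteria ‘a’, ‘b’ and ‘c’ have been met, and a rating of 3 is never produced:

```python
# Illustrative sketch only: the formulation rating rules as a lookup from the
# number of criteria met (a, b, c) to the rating. A rating of 3 never occurs.

def formulation_rating(a_met: bool, b_met: bool, c_met: bool) -> int:
    """Return the formulation rating (1, 2, 4 or 5)."""
    criteria_met = sum([a_met, b_met, c_met])
    return {0: 1, 1: 2, 2: 4, 3: 5}[criteria_met]

print(formulation_rating(False, False, False))  # 1 - none of a, b, c met
print(formulation_rating(True, False, True))    # 4 - two of a, b, c met
print(formulation_rating(True, True, True))     # 5 - all of a, b, c met
```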
Here are the rules for combining the criteria that are used to determine whether a question/item
has been grounded in theory:
Rate 1 if not a
Rate 2 if a
Rate 4 if a+b
Rate 5 if a+b+c
- The first rule states that a rating of 1 is assigned if criterion ‘a’ is not met.
- The second rule states that a rating of 2 is assigned if criterion ‘a’ is met.
- The third rule states that a rating of 4 is assigned if criteria ‘a’ and ‘b’ are met.
- The fourth rule states that a rating of 5 is assigned if criteria ‘a’, ‘b’ and ‘c’ are met.
As was the case with the first set of criteria related to the formulation of the item/question
above, not all ratings are used with this set of criteria. The lowest value (in this case 1) and the
highest value (in this case 5) are always defined, but some of the values in between may be left
undefined. For example, there is no rule provided for rating 3, so the only possible ratings
would be 1, 2, 4 or 5. No rating of 3 can be given.
Here are the rules for combining the criteria that are used to determine whether a question/item
qualifies for practical use:
Rate 1 if not a + not b
Rate 2 if a or b
Rate 3 if a+b
Rate 4 if a+b+c
Rate 5 if a+b+c+d
The criteria are combined by means of ‘+’ and ‘or’. Thus ‘a + b’ means both criterion ‘a’ and
criterion ‘b’ are met, and ‘a’ or ‘b’ means either criterion ‘a’ or criterion ‘b’ is satisfied. In the
case of ‘not a + not b’ neither criterion ‘a’ nor criterion ‘b’ is met. Hence:
- The first rule states that a rating of 1 is assigned if neither criterion ‘a’ nor criterion ‘b’ is met.
- The second rule states that a rating of 2 is assigned if either criterion ‘a’ or criterion ‘b’ has been met.
- The third rule states that a rating of 3 is assigned if both criteria ‘a’ and ‘b’ have been met.
- The fourth rule states that a rating of 4 is assigned if criteria ‘a’, ‘b’, AND ‘c’ have been met.
- The fifth rule states that a rating of 5 is assigned if ALL of criteria ‘a’, ‘b’, ‘c’, and ‘d’ have been met.
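The practical-use rules can likewise be written out as explicit boolean combinations. The sketch below is purely illustrative (the function name and parameters are hypothetical, not part of the booklet); it simply mirrors the reading of ‘+’, ‘or’ and ‘not’ explained above, with ‘a’ to ‘d’ standing for the practical-use criteria:

```python
# Illustrative sketch only: the practical-use rating rules, where '+' is read
# as 'and', 'or' as 'or', and 'not a' means criterion 'a' is not met.

def practical_use_rating(a: bool, b: bool, c: bool, d: bool) -> int:
    if a and b and c and d:   # Rate 5 if a+b+c+d
        return 5
    if a and b and c:         # Rate 4 if a+b+c
        return 4
    if a and b:               # Rate 3 if a+b
        return 3
    if a or b:                # Rate 2 if a or b
        return 2
    return 1                  # Rate 1 if not a + not b

print(practical_use_rating(a=True, b=False, c=False, d=False))  # 2
print(practical_use_rating(a=True, b=True, c=True, d=False))    # 4
```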
IMPORTANT NOTE:
In order to judge whether a question/item meets the criteria in the rating scale, you first need
to know what each criterion in the scale refers to – in this respect you have to study the ‘Notes
on the review criteria’ (See Assignment 03 Procedural Booklet pp. 8-17) – you cannot judge
whether a question/item is leading if you do not know what a leading question/item looks like.
Similarly, you cannot judge whether a question/item is ambiguous if you don’t know what
ambiguity refers to – all the criteria in the rating scale have been unpacked on pp. 8-17 of
Assignment 03 Procedural Booklet. It is VITAL to understand what the criteria refer to if you
are going to provide correct ratings.
Suppose you want to assess whether a question (in the case of an interview) or an item (in the
case of a questionnaire) is grounded in theory (see Item 2 above). Then note the following:
Firstly, that rating 3 is not defined. Therefore, the only valid ratings are 1, 2, 4 or 5. Secondly,
that the criteria are combined in a manner that sets up prerequisites. Criterion ‘a’ is required
for ratings 2, 4 and 5. Therefore, criterion a is a prerequisite for criteria ‘b’ and ‘c’. Criterion
‘b’ is required for ratings 4 and 5. As such, it is a prerequisite for criterion ‘c’. These
prerequisites are set up for a reason. If the theory that informs the question/item (criterion ‘a’)
is not correct, the question/item cannot be a correct implementation of theory (criterion ‘b’),
and one cannot expect the question/item to function or operate correctly in terms of the theory
it is based on (criterion ‘c’). In other words, if ‘a’ is not correct one cannot consider ‘b’ and ‘c’,
and if ‘b’ is not correct, ‘c’ cannot be considered.
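This prerequisite structure can be made explicit in another short, hypothetical sketch (again not part of the assignment): criterion ‘a’ gates ‘b’, and ‘b’ gates ‘c’, so a question/item whose underlying theory is incorrect can never score higher than 1 for theoretical groundedness, however its implementation or operation might otherwise be judged:

```python
# Illustrative sketch only: the theory-grounding rules as a prerequisite chain.
# 'a' (theory correct) gates 'b' (correct implementation), which gates 'c'
# (correct operation in terms of the theory).

def theory_grounding_rating(a: bool, b: bool, c: bool) -> int:
    if not a:
        return 1   # Rate 1 if not a - 'b' and 'c' cannot be considered
    if not b:
        return 2   # Rate 2 if a - 'c' cannot be considered without 'b'
    if not c:
        return 4   # Rate 4 if a+b
    return 5       # Rate 5 if a+b+c

# Even a favourable judgement of 'c' cannot lift the rating when 'a' fails:
print(theory_grounding_rating(a=False, b=True, c=True))   # 1
print(theory_grounding_rating(a=True, b=True, c=False))   # 4
```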
Rating scales do not only provide ratings. For Assignments 03 and 06 (the peer review assignments) you are also required to provide comments that explain why you gave a particular rating. For example, suppose a question’s grounding in theory is rated 4; this means criteria ‘a’ and ‘b’ are met, but criterion ‘c’ is not. This generates the following comment:
Although the theory the question is based on is correct, and the question implements the theory
correctly, it does not operate correctly in terms of the theory it is based on. It does not operate
correctly because …. (add an explanation to indicate how the question fails to operate
correctly).
How to conduct a review
Now that you are familiar with the elements of the review scale and how to go about assigning ratings, it is essential to study the resource ‘Notes on the review criteria’ so that you know what each criterion in the rating scale refers to. The ‘Notes on the review criteria’ follow below.
Notes on review criteria
Resource material
Note 1.a Questions/items should not be phrased in terms of technical language and
specialist vocabulary. The questions/items used in psychological interviews
and psychological questionnaires are based on psychological theory. Such
questions/items are designed to elicit responses that can be interpreted in terms
of psychological theory. It is not easy to formulate questions/items that are
based on theory. These questions/items should be formulated in ordinary
language, and not make use of psychological jargon. But despite being
formulated in ordinary language the questions/items should still elicit
information that can be interpreted in terms of the underlying theory. The point
to keep in mind is that the person who is interviewed or completes the
psychological questionnaire should understand the question/item without
having any knowledge of psychology.
Note 1.b The question/item should not be leading because leading questions/items
introduce response bias. A leading question/item is a question that hints at or
suggests an answer or guides the respondent to a particular answer. Consider
this question, for example: Do you believe one can use sexually abusive
language even though sexual abuse is prohibited by law? The second part of
the question (that sexual abuse is prohibited by law) informs the respondent
that sexual abuse is not allowed, which alerts the respondent to the fact that
using sexually abusive language may not be a good idea. This guides the
respondent by hinting at a particular answer. The respondent is more likely to
indicate that one should not use abusive language. This response is biased
because the respondent reacts to the hint and not to what he/she truly believes.
Note 1.c The question/item should not be ambiguous. Ambiguity is not resolved when a respondent responds to the question/item because it remains unclear what the respondent is thinking. Ambiguous items/questions include (a) double-response items/questions, (b) items/questions that lack information required for providing a proper answer and (c) items/questions that use words describing an indefinite frequency.
(a) Consider the following ambiguous item:
Mark [yes] or [no] to indicate whether you agree or disagree that South
Africans have high levels of anxiety?
If the respondent marks [yes], it is not clear whether he/she means “Yes, I agree” or “Yes, I disagree”. This also holds when the respondent marks [no].
This is an example of a double-response item (yes/no and agree/disagree). One
of these responses should be removed to make sure the question is not
ambiguous:
Mark [yes] or [no] to indicate whether you agree that South Africans have high
levels of anxiety?
Or: Mark [agree] or [disagree] to indicate whether you agree or disagree that
South Africans have high levels of anxiety?
(b) Here is a second kind of an ambiguous item:
Use the scale [1… 2 … 3 … 4] to indicate how much you feel in control of your
life.
If a respondent marks 1 it is not clear whether the respondent feels totally in
control or totally without control. The same holds for 4, and for any of the other
numbers on the scale, because the meaning of these numbers is not specified.
This is an example of an item that lacks information that is necessary for a
proper answer. The question would not be ambiguous if formulated in the
following way:
Use the scale [1… 2 … 3 … 4] (in which 1 means no control and 4 means total
control) to indicate how much you feel in control of your life.
Or: Use the scale [no control… little control … much control … total control]
to indicate how much you feel in control of your life.
(c) A third source of ambiguity lies in the use of words that describe an
indefinite frequency such as ‘often’, ‘seldom’, ‘many’. These words hold
different meanings for different people; for example, for person A, ‘seldom’ refers to
once a week but for person B, ‘seldom’ refers to once every few months. The
same is true for ‘often’, ‘many’, ‘lots’, etc.
Is the theory that is presented as justification for the
question/item correct?
represents unhealthy movement towards others. If you wanted to implement
this aspect of Horney’s theory you could ask a question such as “Do you need
the constant support, approval and validation of others?” This implementation
will identify someone of the submissive or compliant type whose movement
towards others is excessive and fixated.
Now consider the following question: ‘When things get difficult, do you prefer
to take control?’ This question does not implement the idea of moving towards
people. It implements a different aspect of Horney’s theory, namely the
tendency to move against others in healthy ways (in which people can be
assertive and display the ability to argue and differ from other people). When
movement against others becomes excessive and fixated it represents
unhealthy movement against others. A question that implements this unhealthy
movement against others is, for example, “Do you exploit others in the pursuit
of your own goals?” This is a characteristic behaviour pattern of the hostile
personality type so if the interviewer considers the question as a reflection of
Horney’s hostile personality type, the question would be a correct
implementation of the theory.
An example of a question that does not implement Horney’s psychoanalytic theory correctly is: ‘When things get difficult, do you seek help from your colleagues before you take control?’ According to Horney, although in healthy
development the individual has recourse to all three interpersonal styles on an
alternating basis, the different interpersonal styles are irreconcilable and
preclude one another in any particular situation – the person cannot
simultaneously move towards and away from others. On the grounds of
Horney’s theory, one does not expect somebody to move towards others
(seeking help) only to then move against them (taking control). Thus, the
question does not enable respondents to provide a valid answer in terms of
Horney’s theory and the question does not implement the theory correctly.
Note 2.c There is a difference between how a question/item implements theory and how
it operates in terms of the implemented theory. Implementation refers to how
the underlying theory is represented in the question, how it is translated into
ordinary language. For example, the psychological construct of ‘moving
towards others’ can be translated into ‘seeking help from colleagues’ (an
ordinary language statement). Operation goes further than reflection or
implementation of the underlying theory. It is about how the question works,
what it does with the theory, what distinctions are drawn in terms of the theory
as provided by the responses to the question/item. Think in terms of the
theoretical operation of the item/question. In other words, what does the
information that is elicited by the item/question tell you in terms of the theory?
Here you have to look at the response options that the question/item makes available to respondents. Look at the distinctions that are drawn by the
question/item to see if these distinctions can be meaningfully interpreted in
terms of the theory. The important thing to keep in mind is that this criterion
refers to how the item/question operates in terms of the theory. The focus is
therefore on what the item/question does with the theory – what distinctions the operation of the question draws in terms of the theory.
The following examples of how questions operate are based on Karen Horney’s
theory, which distinguishes three kinds of interpersonal behaviour, namely
‘moving towards others’, ‘moving against others’ and ‘moving away from
others’. Horney sees these interpersonal behaviour types as irreconcilable and
preclusive of one another because a person cannot simultaneously move
towards, against or away from people. Thus, they constitute three categories
into which people can be classified.
If one asks whether a person approaches colleagues for help when things get
difficult, the function of the question, the way in which it operates in terms of
Karen Horney’s psychoanalytic theory, is to identify healthy movement
towards others. The respondents who answer ‘yes’ confirm that they tend to
‘move towards others’. Respondents who deny the tendency to seek help from
colleagues do not tend to ‘move towards others’ and therefore may not display
this interpersonal style. Thus, the way in which the question operates is to
distinguish between people who move towards others in healthy ways and
those who do not. The question identifies movement towards others, but it does
not produce further information about those who do not display this
interpersonal style. Although its function is restricted, the question operates
correctly in terms of the theory it is based on.
Suppose the question used in the previous example is revised as follows:
‘When things get difficult, do you seek help from your colleagues, or do you
take control?’ In this form, the question elicits more information. It
differentiates between movement towards others (do you seek help from your
colleagues) and those who display the interpersonal style of movement against
others (do you take control). The question thus functions (operates) to
distinguish between movement towards others and movement against others.
The question/item qualifies for practical use if:
a  the question/item is comprehensible by those it is intended for
b  the question/item serves the purpose of the interview/questionnaire
c  the question/item is valid in practice
d  the question/item is reliable in practice

Required knowledge and skills: Knowledge of the nature of psychological questions/items, and the ability to judge whether such questions/items are fit for purpose.
b  the question/item serves the purpose of the interview/questionnaire if:
b1  the question/item operates correctly in terms of the theory on which it is based
b2  the question/item generates information that contributes toward achieving the purpose of the interview/questionnaire

Consider: The purpose that the interview/questionnaire is intended to be used for (which is to be found in the scenario).
Ask b1: Does the question/item operate correctly in terms of the theory?
Ask b2: Does the question/item generate information that contributes toward achieving the purpose of the questionnaire/interview?
c2  the question/item is fair to all respondents

Consider: The nature of the respondents that the question/item is intended for.
Ask c2: Are these respondents equally able to respond to this question/item, that is, is the item not biased in any way?
…same responses as before when asked again at a later stage

Consider: The consistency of responses over time.
Ask d2: Are respondents likely to provide the same responses as before when they are asked the question/item again at a later stage?
distinguish between the preference to direct and the preference to comply.
Each of these questions/items contributes information towards the overall
purpose of profiling an individual’s behaviour in terms of these four
behavioural preferences.
It is important to be clear about the overall purpose of an
interview/questionnaire. Suppose the purpose of an interview/questionnaire
is only to differentiate between individuals with a high preference to direct
and persuade versus those with a high preference to conciliate and comply.
Then all questions/items in the interview/questionnaire should aim to provide
information about this particular distinction. Questions/items aimed at
delineating differences between the preference to direct and the preference to
persuade would be wasted because making this distinction does not contribute
to the purpose of the interview/questionnaire.
Note 3.c To be valid, a question/item should be meaningful and appropriate given the
context in which the interview/questionnaire is used. If the purpose is to
explore the experience of stress in a particular kind of situation (e.g., the work
environment), the question/item should refer to the situation in question, and
not mention a different kind of situation (e.g., the home environment). It
should also refer to experiences that are likely to be encountered in the context
in question. If the interview/questionnaire is about experiences in the
workplace, the questions/items should be phrased in terms of behaviour and
events that could reasonably be encountered in the particular work
environment.
The question/item should also be fair to all respondents. All respondents
should find the question/item appropriate and suitable. In other words, the
question/item should not be biased by advantaging some individuals and
disadvantaging others. A question/item is biased if the response given by the
respondent depends on a characteristic of the respondent that does not
concern the purpose of the question/item. Questions/items formulated using
psychological jargon, for example, are not fair to all respondents – those with
knowledge of psychological terminology are in a better position to be able to
respond to questions/items containing psychological jargon than respondents
without knowledge of psychology. Such a question/item has an educational
bias towards people with knowledge of psychological jargon.
Experience is a common source of bias. If individuals with particular
experiences find it easier to respond to an item/question the question is not
fair to those who do not have the experience, which means the question/item
is biased. For example, suppose an interviewer wants to establish whether a
respondent is inclined more towards internal locus of control than external
locus of control in an information technology environment, and asks: ‘Do you
think you prefer to work in XML because it allows you to control the structure
of a database instead of having to work within a predetermined structure?’
Although the question is contextually relevant in dealing with information technology, individuals who are experienced XML programmers are in a
better position to respond to the question than those who may not have XML
programming experience.
Other common sources of bias are gender, age and education. For example,
the purpose of a question/item may be to identify managers’ leadership styles.
However, if men are more likely than women to provide a particular response
the question/item has a gender bias. Questions/items about competence in
information technology often have an age bias because younger people are
more likely than older people to show competence in this regard.
Questions/items involving knowledge about a particular field often have an
educational bias. These kinds of questions are not fair because the responses
given by respondents depend on characteristics of the respondents
(experience, gender, age, education, etc.) that do not concern the purpose of
the questions/items.
Note 3.d Reliability has to do with the consistency of responses. There are two kinds
of consistency, namely consistency across respondents and consistency over
time.
Consistency across respondents is evident when all respondents attach the
same meaning to the item/question. Of course, this does not mean they all
provide the same answer. They have different opinions, and therefore
different answers, but they all have similar understandings of what the
question/item means. In other words, there is consistency in their
understanding of the question/item.
Respondents respond inconsistently when they do not attach the same
meaning to an item/question, which happens when the item/question is open
to different interpretations.
Ambiguous items/questions are unreliable because respondents do not
understand such questions in the same way and therefore their responses
cannot be compared to each other. In the case of double-response
items/questions, two respondents may provide the same response meaning
different things or may give different responses meaning the same thing.
For example, given the item: Select ‘yes’ or ‘no’ to indicate whether the
statement that you are a prolific reader is true or false:
1. Respondent A may say: YES, meaning that he/she is a prolific reader
(confirming that the statement is true)
2. Respondent B may say: YES, meaning that he/she is not a prolific reader
(confirming that the statement is false)
3. Or, Respondent B may say: NO, meaning that he/she is a prolific reader
(denying that the statement is false)
In (1) and (2) respondents A and B give the same response, but the responses
mean different things, and in (1) and (3) they give different responses, but the
responses mean the same thing. Inconsistency also occurs in the case of items
that lack guidelines for responding properly. Respondents’ responses cannot
be compared because there is no way to determine what their responses
actually mean when they make up and follow their own guidelines for
responding to an item.
A second source of inconsistent responses is questions/items that contain
adverbs that describe an indefinite frequency (e.g., often, seldom, and rarely).
Although each respondent offers definite answers to such questions/items the
answers obtained from different respondents may not be comparable because
interpretations of such words may differ among respondents. For example,
the statement ‘I rarely sleep late’ may mean something different to different
individuals. People do not have the same interpretations of what ‘rarely’
means and what time ‘late’ refers to.
A third source of inconsistency is guessing. Guessing occurs when
respondents do not know how to respond to items/questions. Questions/items
that lack proper guidelines are one reason for guessing (respondents guess at
how to respond to such questions/items), but another reason is
incomprehensibility. When questions/items are incomprehensible
respondents have to guess what the questions/items actually mean. Responses
based on guessing are inconsistent.
In addition to consistency across respondents, reliability also refers to
consistency over time. To be reliable a question/item should elicit a similar
response from a respondent when he/she is asked to respond to the
item/question again at a later stage. A question such as: “Do you feel angry?”
depends on the moment and context in which the question is asked. A
respondent who feels angry at the moment the question is put to him will say
‘yes’, but when the question is repeated later, the same respondent may say
‘no’. Thus, the question does not work in practice. A better question would
have been: “Do you feel angry when other people ignore you?” because this
question is likely to elicit the same response from a respondent when repeated
at a later stage.
But not all sources of bias involve the characteristics of the respondent. There
are also other factors that cause random or biased information. For example,
ambiguous items/questions are sources of random information. Respondents
respond randomly (i.e., not systematically) to an ambiguous item/question.
Leading questions/items are sources of biased information because the
respondents’ responses may be influenced by values and perceptions that do
not involve underlying theory.
Now that you are familiar with the review criteria, study the different questionnaire item types in the Assignment 02 Procedural Booklet (below) so that you know what kinds of items can be formulated in a psychological questionnaire.
Questionnaire item types
Resource material
There are different kinds of items that can be included in a questionnaire. This section
describes three kinds of items, namely closed items, open items, and rating scales.
Closed items
Generally speaking, most items in psychological questionnaires are ‘closed’. A closed item is one that offers respondents a limited choice of alternative responses. Closed items allow for comparison across a large group of respondents. An example of a closed item is ‘Do you think the parole system in South Africa is effective?’, where the response options provided for respondents are ‘Yes’ and ‘No’. In this case, the only two responses available to select from are ‘Yes’ and ‘No’.
Although open ended questions can give you a great deal of information, the disadvantage is
that they are difficult to analyse. If you want to compare the responses of a large group of
people, it may be better to use closed items for which all the respondents have the same choice
of answers (as is the case in psychological questionnaires).
Open items
The main advantage of open items is that respondents have the freedom to express their ideas
without the restrictions of set possible answers. Respondents may have ideas and opinions
that you have not thought of, and these might be lost if you ask a closed question.
For example, in a psychological interview, an open-ended question is “What do you think of
the parole system in South Africa?”
However, open-ended questions invariably elicit some irrelevant and repetitious information that cannot be used and that wastes processing time. Answering open-ended questions
also requires a considerable degree of language proficiency and communication skills. For
this reason, they are not suitable for people with language difficulties or low levels of literacy.
Although, generally speaking, the aim of an interview is to obtain rich, detailed information,
one can also make use of closed items in psychological interviews (such as a question that
requires a respondent to indicate a preference between two/more options, for example,
“Explain to me whether you prefer to plan ahead, or do you take every day as it comes?”).
Rating scales
Rating scales are generally regarded as having a closed item format because the
respondent has a limited range of options (ratings) to choose from.
It is seldom useful to use single items or questions to measure complex or non-factual topics
such as opinions, beliefs, attitudes and values. These are complex issues that have to do
with states of mind rather than with behaviour or events in the outside world and are
therefore more difficult to measure. They are usually multifaceted and have to be
approached from different angles. Therefore, instead of requiring single options (like Yes
or No), the tendency is to use rating scales, which allow respondents a wider range of
options (like Absolutely Yes, Possibly Yes, Possibly No, Absolutely No). Such questions
usually take the form of rating items. When a number of rating questions are used to assess
the same topic or concept, the group of rating questions constitutes a rating scale. Thus, rating
scales refer to multi-item scales, that is, a group of items dealing with the same topic, each
item requiring a rated response.
For each item in the rating scale, respondents have to indicate the extent to which they
agree or disagree with a statement by marking a point on a numerical scale. For example,
you could use a rating scale to investigate attitudes toward crime in South Africa.
Respondents would be required to rate their responses (by ticking the appropriate box) to
statements such as the following:
Statement                        Strongly disagree   Disagree a little   Agree a little   Strongly agree
Crime disrupts our lives
Police effectively curb crime
Note that, in the case of the scale above, the use of ‘a little’ (in Disagree a little and Agree a little) is not ambiguous because it is clear that ‘Disagree a little’ indicates a weaker degree of disagreement than ‘Strongly disagree’. Similarly, ‘Agree a little’ indicates a weaker degree of agreement than ‘Strongly agree’.
In other words, the responses across respondents can still be compared to each other because the
order of intensity of agreement/disagreement is the same across respondents. (If the first
statement above was phrased “Crime sometimes disrupts our lives”, the word ‘sometimes’ is
ambiguous as it is likely to hold different meanings for different respondents. Similarly, if the
second statement was phrased “Police often effectively curb crime” the use of the word ‘often’
will also hold different meanings for different people and is therefore considered ambiguous.)
Ratings give a numerical value to some kind of assessment or judgement. We can apply
ratings to anything such as rating entrants in a competition, preferences for certain objects
or differentiating characteristics. For example, you might be asked to rate the service provided
by shop assistants on a scale of 1 to 5, where 1 = very bad service, 2 = poor service, 3 =
adequate service, 4 = good service and 5 = excellent service. You could also rate them in terms
of friendliness, appearance and efficiency.
Ratings are given numbers because it is easier to work with numbers than descriptions when
you have to analyse people’s responses. You can, for example, add together a person’s
ratings on a scale and compare one person’s total with another, or work out an average for a
particular group.
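As a small illustration of this kind of arithmetic (the names and scores below are invented purely for the example), a respondent’s ratings can be totalled and the totals compared or averaged for a group:

```python
# Invented example data: each respondent's ratings on the same three-item scale.
ratings = {
    "Respondent A": [4, 5, 3],
    "Respondent B": [2, 3, 2],
}

# Add together each person's ratings to obtain a total per respondent.
totals = {name: sum(scores) for name, scores in ratings.items()}
print(totals)                         # {'Respondent A': 12, 'Respondent B': 7}

# Work out an average total for the group.
group_average = sum(totals.values()) / len(totals)
print(group_average)                  # 9.5
```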
1 Define the dimension being rated. This means that you have to decide what it is
that you want respondents to rate. Each item or statement to be rated must refer to
only one thing or dimension. For example, if you ask respondents to “Rate the shop
assistant’s friendliness and efficiency” on a scale of 1 (bad) to 5 (good), you are
confusing two different dimensions: friendliness and efficiency. The respondent
might think that the shop assistant was very efficient but not at all friendly. The
respondent then does not know which rating to use.
2 Decide on the number of ratings for the scale. You may only need 3 but there may
be as many as 10. It depends on what is being rated. If you only need respondents
to indicate whether they agree, are neutral or disagree, then you only need 3 ratings.
If, however, you feel that there may be a greater range of opinions, then you need
to provide more options (a range of 5 or more). For example, look at the following
statements regarding people’s reading habits:
How often do you read books?
1 = Never    2 = Once in six months    3 = Once a month    4 = Once a week    5 = More than once a week
These two ratings give you different information. The first one gives you an
indication of how often respondents read books, which is fine if that is all you want
to know. It does not tell you much about the respondents’ reading habits. The
second one gives you a much better idea of their book reading habits. From this,
you can see that the number of ratings depends on what you want to find out.
3 Decide whether to use an even or uneven number of ratings. Many researchers prefer
an uneven number in order to have a neutral category in the middle, but the
problem is that people may tend to choose the neutral one (this is called the error of
central tendency).
4 Define the different rating categories. You must specify criteria for each rating so
that they are mutually exclusive. This means that each rating category should mean
something different so that the respondents do not have the problem of deciding
which rating category their responses fit into. For example, it might be confusing if
your rating scale has the following options:
There is nothing to indicate what the difference is between “agree somewhat” and “agree
a little”. Here is another example: you might want to know how many times a week a
respondent does physical exercise, and you provide the following rating categories 1 =
none, 2 = two to three times, 3 = three or more times. If the respondent exercises three times
a week, what rating will he or she choose: a 2 or a 3? Each category or rating must
specify a particular kind of response that is not described by any other category or rating.
A rating scale is a technique for placing people on a continuum in relation to each other,
in relative and not in absolute terms. These scales are not designed to yield subtle insights
into individual situations. Their main function is to classify people with regard to the topic
being rated. The ratings people provide on a rating scale can be correlated to other
characteristics they may have. For example, if one finds that the ratings provided by men
differ from those provided by women it means there is a relationship between the gender
of the person and how he/she rates the topic in question. Another example: One may find
that different age or different socio-economic groups have different totals on a scale
measuring attitudes to crime.
There are many different types of rating scales. We will only discuss two here, namely Likert
scales and semantic differential scales.
Likert scales
The Likert scale is also known as a summated scale. A summated attitude scale is a rating
scale on which an individual indicates how much he or she agrees (or disagrees) with
statements. These statements may deal with a particular social or political issue or
institution, such as communism, abortion, or a particular political party. However, they may
also deal with personal feelings, experiences and beliefs. Often only the endpoints of such
scales are labelled. The one on the left may be labelled ‘Disagree entirely’ and the one on
the right: ‘Agree entirely’. Subjects mark the point that reflects how much they agree or
disagree with the statement involved.
Here is an example of some items on a Likert scale. In this case personal beliefs
regarding one’s emotional experience are being investigated:
Statement                                 1 (Disagree entirely)   2   3   4   5 (Agree entirely)
I find it easy to control my emotions
I get angry quickly
I often feel sad
The respondent marks the point that best reflects his or her attitude with regard to his/her
personal beliefs or experiences. The scores for each item or statement are then added up to
obtain a total score for the scale – and this is why it is referred to as a summated scale. When
using a summated scale, it is important to ensure that the scale is uni-dimensional, that is,
that all the items measure the same dimension or topic. In this example, all the items
measure opinions regarding personal emotional experience. If the following item were to
be included in the table above, the scale would not be uni-dimensional: “Others describe
me as a quick thinker”. The scale would not be uni-dimensional because the newly added
item refers to cognition (thinking speed) and not to emotion.
Note that the first two statements in the example provided above have been phrased in a
way that requires the respondent to provide ‘opposite’ or ‘reversed’ responses. For example,
people who agree that they can easily control their emotions will probably disagree that they
get angry quickly. It is important to phrase questions in a manner that requires opposite or
reversed responses in order to keep respondents from falling into a response pattern by simply
agreeing (or disagreeing) with every statement, without thinking about the question being
asked.
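The following sketch illustrates this scoring logic with invented scores. It assumes the scale is scored so that a higher total indicates greater reported emotional control, and that reverse-phrased items are re-coded as 6 minus the raw score on the 1-to-5 scale; both assumptions are mine for the sake of the example, not prescriptions from the booklet:

```python
# Invented responses to the three example items (raw score on a 1-5 scale,
# and whether the item is reverse-phrased relative to 'emotional control').
responses = [
    ("I find it easy to control my emotions", 4, False),
    ("I get angry quickly",                   2, True),   # reverse-phrased
    ("I often feel sad",                      1, True),   # reverse-phrased
]

# Summated score: reverse-phrased items are re-coded as 6 - score so that a
# high total consistently means greater reported emotional control.
total = sum((6 - score) if reverse else score for _, score, reverse in responses)
print(total)   # 4 + (6 - 2) + (6 - 1) = 13 out of a possible 15
```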
Likert scales usually offer 5 or 7 ratings. The advantage is that a number of ratings (5 or 7 options) allows the respondent a larger range of opinions than just a yes/no answer. Although most scales employ an uneven number of options (3, 5, or 7), there are
possible disadvantages to using an uneven number because people may tend to choose the
midpoint and then you do not know if the person is neutral, lukewarm, lacks knowledge
about the matter the question refers to, or lacks an attitude toward the issue in question.
Semantic differential
The semantic differential is a type of rating scale in which the scale endpoints are defined by
opposing adjectives (and not by ‘agree’ or ‘disagree’, as in the case of a Likert scale). There
are usually 7 or 9 points in between the two opposing adjectives. The respondents are
required to indicate their positions on the scale by marking one of the points in between the
two adjectives. An example of a semantic differential is given below:
Controlled _ _ _ _ _ _ _ Uncontrolled
Hostile _ _ _ _ _ _ _ Friendly
Emotionally warm _ _ _ _ _ _ _ Emotionally cold
Expressive _ _ _ _ _ _ _ Reserved
The respondent marks the rating that expresses his/her opinion. Using the example of
emotional intelligence again, if the respondent thinks of him/herself as emotionally controlled
rather than emotionally uncontrolled he/she would place a mark on the first or second space
closest to ‘controlled’. If he/she thinks of him/herself as sometimes emotionally reserved and
sometimes emotionally expressive, then he/she is more likely to choose a space in the
middle. In this way a picture can be obtained of people’s opinions about their personal
emotional character in terms of various descriptors of emotion.
When one compiles a semantic differential scale, one should be careful to not always put the
positive extreme on one side and the negative on the other side. The location of positive and
negative poles should be random (sometimes positive on the left and negative on the right,
and sometimes the other way round) to counteract any response patterns that are not based on
the respondents’ assessments of each individual item. If all positive terms are on the left and
all negative terms are on the right, respondents may tick down the left side without paying
close attention to the questions.
It is also important that the two poles define the same construct. For example, if you want to
measure the construct of mood, your pole descriptors may be happy and sad. However, if
the two poles are described by ‘happy’ and ‘clever’, it is not clear whether you are measuring
the construct of mood (happiness) or cognitive ability (intellect). In addition, you should take
care that the two descriptors really are opposites. For example, sad/satisfied are not really
opposites because you can feel satisfied about something while still being sad.
Furthermore, it is important that respondents are clear that they need to respond to the item in
terms of BOTH poles of the construct. For example, if you want to assess job satisfaction, the
two poles of the construct ‘job satisfaction’ could be represented in the item as follows:
Satisfied □ □ □ □ □ □ □ □ Dissatisfied
If a respondent is totally satisfied in his/her job, s/he will place a tick in the leftmost box (the
tick box closest to ‘satisfied’). A tick in the leftmost box indicates BOTH complete satisfaction
and the absence of dissatisfaction. If a respondent is only slightly more satisfied than
dissatisfied s/he could tick the 4th tick box from the left. This response indicates that a
respondent is only marginally more satisfied than dissatisfied with his/her job. Again, the
response is made in terms of both constructs – the 4th tick box from the left is closer to the
‘satisfaction’ pole (4 tick boxes from the ‘satisfied’ pole) and further away from the
dissatisfaction pole (5 tick boxes away from the ‘dissatisfied’ pole). This indicates that a
respondent is (marginally) more satisfied but also somewhat dissatisfied. The response can
therefore be interpreted in terms of BOTH poles of the construct. If, on the other hand, a
respondent is more dissatisfied than satisfied, s/he will provide a tick mark that is closer to the
‘dissatisfied’ pole and further away from the ‘satisfied’ pole. Note that if a semantic differential
item does not provide proper instructions for respondents as to how to complete the item, it is
considered ambiguous. If the instruction to respondents indicates “How well do these statements describe you?”, the respondent is unlikely to know that s/he has to respond in terms of BOTH poles of the differential. Clear instructions should be provided such that respondents are clear on how to respond. For example, “Below are two statements. Indicate which statement describes you best by ticking one of the squares between the two statements. The better a statement describes you, the closer your tick should be to that statement”. This instruction
indicates that a respondent should indicate their response in terms of both poles of the
differential.
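As a purely illustrative way of thinking about this ‘both poles’ reading (the code and the interpret function below are a sketch of my own, not part of the booklet), a tick’s position can be expressed as its distance from each pole of the eight-box Satisfied/Dissatisfied item discussed above:

```python
# Illustrative sketch only: interpreting a semantic-differential tick in terms
# of BOTH poles, for an item with 8 boxes between 'Satisfied' (left) and
# 'Dissatisfied' (right).

N_BOXES = 8

def interpret(box_from_left: int) -> str:
    """Describe a tick in box 1..8 relative to the two poles."""
    from_satisfied = box_from_left                 # box 1 sits next to 'Satisfied'
    from_dissatisfied = N_BOXES + 1 - box_from_left
    if from_satisfied < from_dissatisfied:
        lean = "more satisfied than dissatisfied"
    else:
        lean = "more dissatisfied than satisfied"  # equal distance cannot occur with 8 boxes
    return (f"box {box_from_left}: {from_satisfied} from 'Satisfied', "
            f"{from_dissatisfied} from 'Dissatisfied' -> {lean}")

print(interpret(1))   # leftmost box: complete satisfaction, absence of dissatisfaction
print(interpret(4))   # marginally more satisfied than dissatisfied
```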
Once you have decided what the focus of your scenario should be, you need to formulate the
questionnaire items to elicit the information you are looking for. Design each question with a
particular purpose in mind such that each item contributes to the overall purpose of the
questionnaire (that is, to identify particular psychological characteristics/traits as espoused by
the theory you choose). And don’t try to get too complicated or fancy – keep it simple. Identify
what theoretical concept/construct you want to assess and formulate an item that implements
this concept. At this point you’ve worked with the criteria related to the item’s formulation,
theoretical groundedness and practical use, so use what you’ve learned to scaffold a framework for developing your items (make sure they are not leading, ambiguous, etc.; make sure they implement the theory correctly and operate correctly in terms of the theory; and make sure they meet the criteria for practical use).
You have to provide a justification for each item that you formulate. The item’s justification
must indicate three important aspects pertaining to the theoretical groundedness of an item:
(1) the theory (or aspect of a theory) that underpins the item: Be specific and cite the theoretical
notion that informs the formulation of the item. In other words, if the item taps into Jung’s
taxonomy of personality types in order to identify, for example, the extravert-thinking type,
indicate precisely what the theory says about the extravert-thinking type i.e., ‘The item is based
on Jung’s theory which identifies eight different personality types. The extravert-thinking type
perceives the world as structured and lives according to fixed objective rules and all subjective
feelings are repressed’; (2) how the item implements the (aspect of the) theory, that is how the
theory is represented/translated into the (everyday) language used in the item’s formulation;
and (3) how the item operates in terms of the theory (how it functions in terms of the theory on
which it is based in order to, for example, identify someone as extraverted or to determine
whether someone moves towards or against others). (Please see the resource ‘Assignment 02:
Task’ in this document below [‘Step 3’, pp. 25-26] for an explanation of these 3 aspects.) You
need to indicate all 3 of these aspects in the justification so that when your peers review your
questionnaire items it is clear what the theory says about the psychological concept under
investigation, how the theory is implemented in the item and how the item operates in terms of
the theory.
Important note:
Assignment 02 is due on 29 May. You have to submit Assignment 02 as a PDF document via myUnisa (and NOT via myAdmin or the assignment submission portal (stratus.unisa)). Students are advised that if their Assignment 02 is not received by the due date of 29 May, they will not be able to partake in the peer review process. The peer review is an invaluable learning experience, and the review of your peers’ assignments offers additional opportunities to engage with the prescribed theories and to work with and practise using the review criteria to assess psychological questionnaire items. Please note that NO extensions can be granted because once the submission portal closes, no further assignment submissions are possible.
Again, please be sure to submit your Assignment 02 via the myUnisa (myModules)
platform and NOT via myAdmin or any other platform; otherwise, your assignment will
not be included in the peer review process.
End of resource
Task
Assignment 02
Project: Formulate questionnaire items for a psychological questionnaire.

Agree □    Disagree □
Item justification
Self-transcendence involves moving beyond self-absorption and
focusing rather on developing productive relationships with others
and the world. (2) The item implements ‘self-transcendence’ in the
notion that people who display self-transcendent behaviour are
outward-looking rather than turned in on themselves. (3) The item
operates to distinguish between individuals who display self-
transcendent behaviour (those who agree with the statement) and
those who do not (individuals who disagree with the statement).
As you can see, in the item’s justification above you need to indicate
3 things:
(1) First, provide a theoretical statement (a statement which indicates
the particular aspect of a theory on which the item is based). In this
instance, the item assesses self-transcendence, hence, one has to
indicate what the theory says about self-transcendence.
(2) Second, indicate how the theoretical concept under investigation
(in this case, self-transcendence) is implemented in the item.
Implementation refers to how the theoretical concept of interest is
translated into ordinary, everyday language (study the resource
called ‘Notes on the review criteria’ in this document about
implementing the theory). For example, ‘self-transcendence’ can be
implemented in the notion of ‘being outward-looking rather than
focussed on myself’.
(3) Third, indicate how the item operates in terms of the theory. How
an item operates in terms of the theory refers to how the item
functions in terms of the theory i.e., what the item does with the
theory on which it is based (see the ‘Notes on the Review Criteria’ in
this document). In the example above, the item operates to
distinguish between respondents who either agree with the statement
(i.e., that they are outward looking as opposed to turned in on
themselves) or disagree with the statement (i.e., that they are turned
in on themselves rather than outward looking). In terms of the
theory, respondents who agree with the statement, therefore, display
self-transcendent behaviour and those who disagree with the
statement do not display self-transcendence.
(Note that these are merely examples of the kinds of scenarios you
can use in the construction of your questionnaire items. You can
develop any scenario that lends itself to the development of
questionnaire items. Note, however, that because the questionnaire
items are based on a specific theory, the theory you choose must
lend itself to the information required from the questionnaire items.
In other words, you cannot use a particular theory to assess for
depression if the theory does not tell you anything about depression.
Different theories conceptualise psychological functioning and/or
behaviour in different ways so make sure that the theory you choose
lends itself to whatever your questionnaire items are designed to
assess.)
Note: If your work does not fit on single pages as indicated above,
simply continue to the next page, and increase the page numbers
accordingly.
Remember NOT to submit your assignment via the myAdmin or assignment submission platform (stratus.unisa); otherwise your assignment will not be included in the peer review process.