P-7 Econ Pratishtha
MEASUREMENT AND SCALING TECHNIQUES
PRATISHTHA VERMA
L-2017-HSc-349-M
INTRODUCTION
A scale provides four types of information about the subject being measured. These
components are:
• Description (unique label or descriptors)
• Order (relative position or size of descriptors)
• Distance (absolute difference between descriptors)
• Origin (true zero point in a scale)
TYPES OF MEASUREMENT SCALES
NOMINAL SCALE
• Nominal data are numerical in name only, because they do not share any of the
properties of the numbers we deal with in ordinary arithmetic.
• The numbers serve only as labels or tags for identifying and classifying objects.
• When used for identification, there is a strict one-to-one correspondence between
the numbers and the objects.
• The numbers do not reflect the amount of the characteristic possessed by the
objects.
• The only permissible operation on the numbers in a nominal scale is counting.
• Only a limited number of statistics, all of which are based on frequency counts, are
permissible, e.g., percentages, and mode.
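The nominal-scale operations described above (counting, percentages, and the mode) can be sketched in a few lines of Python; the labels and frequencies below are hypothetical:

```python
from collections import Counter

# Hypothetical nominal data: the numbers are labels only (1 = "urban", 2 = "rural").
responses = [1, 2, 1, 1, 2, 1, 2, 2, 1, 1]

counts = Counter(responses)            # counting is the only permissible operation
percentages = {label: 100 * n / len(responses) for label, n in counts.items()}
mode = counts.most_common(1)[0][0]     # the most frequent label

print(counts)        # Counter({1: 6, 2: 4})
print(percentages)   # {1: 60.0, 2: 40.0}
print(mode)          # 1
```

Computing, say, the arithmetic mean of these codes would be meaningless, since the numbers carry no quantitative information.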
ORDINAL SCALE
• Numbers indicate only the relative positions of objects, not the magnitude of the
differences between them.
• It possesses the description and order characteristics, but neither distance nor a true
origin.
• Any order-preserving (monotonic) transformation of the scale values is permissible.
• Permissible statistics include those allowed for nominal data plus the median,
percentiles, and rank-order correlation.
RATIO SCALE
• Possesses all the properties of the nominal, ordinal, and interval scales.
• It has an absolute zero point.
• It is meaningful to compute ratios of scale values.
• Only proportionate transformations of the form y = bx, where b is a
positive constant, are allowed.
• All statistical techniques can be applied to ratio data.
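A minimal sketch of the proportionate transformation y = bx, using a hypothetical kilograms-to-pounds conversion as b, shows that ratios of ratio-scale values survive the transformation:

```python
# Ratio scale: y = b * x with b > 0 preserves ratios (here b converts kg to lb).
weights_kg = [50.0, 75.0, 100.0]   # hypothetical weights; weight has a true zero
b = 2.20462
weights_lb = [b * x for x in weights_kg]

ratio_kg = weights_kg[2] / weights_kg[0]   # 2.0: "twice as heavy" is meaningful
ratio_lb = weights_lb[2] / weights_lb[0]   # unchanged by the change of units
print(ratio_kg, ratio_lb)
```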
INTERVAL SCALE
• Numerically equal distances on the scale represent equal values in the characteristic being
measured.
• It permits comparison of the differences between objects.
• The location of the zero point is not fixed. Both the zero point and the units of
measurement are arbitrary.
• Any positive linear transformation of the form y = a + bx will preserve the properties of the
scale.
• It is not meaningful to take ratios of scale values.
• Statistical techniques that may be used include all of those that can be applied to nominal
and ordinal data, and in addition the arithmetic mean, standard deviation, and other
statistics commonly used in marketing research.
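A short sketch, using the familiar Celsius-to-Fahrenheit conversion as the linear transformation y = a + bx, shows why differences on an interval scale are comparable while ratios are not:

```python
# Interval scale: a positive linear transformation y = a + b*x (Fahrenheit from
# Celsius: a = 32, b = 1.8) preserves comparisons of differences but not ratios.
temps_c = [10.0, 20.0, 40.0]
temps_f = [32.0 + 1.8 * c for c in temps_c]   # approximately [50, 68, 104]

# The 20->40 gap is twice the 10->20 gap in both units: differences are comparable.
print(round((temps_c[2] - temps_c[1]) / (temps_c[1] - temps_c[0]), 2))  # 2.0
print(round((temps_f[2] - temps_f[1]) / (temps_f[1] - temps_f[0]), 2))  # 2.0

# Ratios are not comparable: 40 looks like "four times" 10 in Celsius only.
print(round(temps_c[2] / temps_c[0], 2))  # 4.0
print(round(temps_f[2] / temps_f[0], 2))  # 2.08
```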
SOURCES OF ERRORS
Measurement should be precise and unambiguous in an ideal research study. The researcher
must therefore be aware of the possible sources of error in measurement, which include the
following.
(a) Respondent: At times the respondent may be reluctant to express strong negative feelings,
or he may have very little knowledge of the subject but be unwilling to admit his ignorance.
Transient factors like fatigue, boredom, and anxiety may also limit his ability to respond
accurately and fully.
(b) Situation: Any condition which places a strain on the interview can have serious effects on
interviewer-respondent rapport. For instance, if someone else is present, he can distort
responses by joining in or merely by being present.
(c) Measurer: The interviewer can distort responses by rewording or reordering questions, and
his behaviour, style, and looks may encourage or discourage certain replies. Careless
mechanical processing, incorrect coding, and faulty tabulation can also introduce errors at the
data-analysis stage.
SOURCES OF ERRORS
(d) Instrument: Error may arise because of a defective measuring instrument. The use of
complex words, beyond the comprehension of the respondent, ambiguous meanings, poor
printing, inadequate space for replies, response choice omissions, etc. are a few things that
make the measuring instrument defective and may result in measurement errors. Another
type of instrument deficiency is the poor sampling of the universe of items of concern.
TESTS OF SOUND MEASUREMENT
• Sound measurement must meet the tests of validity, reliability and practicality.
• Validity refers to the extent to which a test measures what we actually wish to measure.
Validity is of two kinds: internal and external.
• External validity of research findings is their generalizability to populations, settings,
treatment variables and measurement variables. The internal validity of a research
design is its ability to measure what it aims to measure.
• Reliability has to do with the accuracy and precision of a measurement procedure.
• Practicality is concerned with a wide range of factors of economy, convenience, and
interpretability.
TEST OF VALIDITY
• Validity is the most critical criterion and indicates the degree to which an instrument
measures what it is supposed to measure.
• Validity can also be thought of as utility.
• Validity is the extent to which differences found with a measuring instrument reflect
true differences among those being tested.
• Relevance of evidence often depends upon the nature of the research problem and the
judgment of the researcher.
• Three types of validity are taken into account for research -
Content validity, Criterion-related validity and Construct validity.
TYPES OF VALIDITY
• Content validity : The extent to which a measuring instrument provides adequate coverage of
the topic under study. If the instrument contains a representative sample of the universe, the
content validity is good. Its determination is primarily judgmental and intuitive.
• Criterion-related validity : Ability to predict some outcome or estimate the existence of some
current condition. This form of validity reflects the success of measures used for some empirical
estimating purpose. The concerned criterion must possess the following qualities:
a) Relevance: (A criterion is relevant if it is defined in terms we judge to be the proper
measure.)
b) Freedom from bias: (Freedom from bias is attained when the criterion gives each subject an
equal opportunity to score well.)
c) Reliability: (A reliable criterion is stable or reproducible.)
d) Availability: (The information specified by the criterion must be available.)
TYPES OF VALIDITY
• Construct validity : The most complex and abstract form of validity. A measure is said to
possess construct validity to the degree that it conforms to predicted correlations with other
theoretical propositions.
SCALING TECHNIQUES
• Rating scales: the graphic rating scale and the itemized rating scale.
• Ranking scales: the method of paired comparisons and the method of rank order.
RATING SCALES
• The rating scale involves qualitative description of a limited number of aspects of a thing or
of traits of a person.
• Under rating scales (or categorical scales), we judge an object in absolute terms against some
specified criteria, i.e., we judge the properties of objects without reference to other similar objects.
• These ratings may be in such forms as "like-dislike", "above average, average, below average",
or "excellent, good, average, below average, poor" and so on. Rating scales are of two types:
i. The graphic rating scale is quite simple and is commonly used in practice. Under it the
various points are usually put along the line to form a continuum and the rater indicates
his rating by simply making a mark (such as ✓) at the appropriate point on a line that
runs from one extreme to the other.
ii. The itemized rating scale (also known as numerical scale) presents a series of
statements from which a respondent selects one as best reflecting his evaluation.
RANKING SCALES
Under ranking scales (or comparative scales), we make relative judgements against other similar
objects. The respondents under this method directly compare two or more objects and make choices
among them. There are two generally used approaches to ranking scales:
(a) Method of paired comparisons: The respondent can express his attitude by making a choice
between two objects, say between a new flavour of soft drink and an established brand of drink. But
when there are more than two stimuli to judge, the number of judgements required in a paired
comparison is given by the formula:
N=n(n-1)/2, where N = number of judgements and n = number of stimuli or objects to be
judged
(b) Method of rank order: Under this method the respondents are asked to rank their choices.
This method is easier and faster than the method of paired comparisons stated above. For example,
with 10 items the paired-comparison method requires 45 judgements, whereas the method of rank
order simply requires ranking the 10 items.
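The formula N = n(n-1)/2 can be checked directly by enumerating the pairs; the four stimuli below are hypothetical labels:

```python
from itertools import combinations

def paired_judgements(n):
    """Judgements needed when n stimuli are compared in pairs: N = n(n-1)/2."""
    return n * (n - 1) // 2

stimuli = ["A", "B", "C", "D"]          # hypothetical objects to be judged
pairs = list(combinations(stimuli, 2))  # every unordered pair appears once

print(len(pairs))                # 6
print(paired_judgements(4))      # 6: the enumeration agrees with the formula
print(paired_judgements(10))     # 45, as in the 10-item example above
```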
DIFFERENT SCALES FOR MEASURING
ATTITUDES OF PEOPLE
(i) Arbitrary approach: It is an approach where scale is developed on ad hoc basis. This is
the most widely used approach. It is presumed that such scales measure the concepts for
which they have been designed, although there is little evidence to support such an
assumption.
(ii) Consensus approach: Here a panel of judges evaluates the items chosen for inclusion
in the instrument in terms of whether they are relevant to the topic area and unambiguous
in implication.
(iii) Item analysis approach: Under it a number of individual items are developed into a
test which is given to a group of respondents. After administering the test, the total scores
are calculated for everyone. Individual items are then analyzed to determine which items
discriminate between persons or objects with high total scores and those with low scores.
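The item-analysis procedure above can be sketched as follows. The respondents, items, and "favourable answer = 1" coding are hypothetical; each item's discrimination index is the difference between the proportions of favourable answers in the high- and low-scoring groups:

```python
# Hypothetical responses: respondent -> item answers (1 = favourable, 0 = not).
responses = {
    "r1": [1, 1, 1, 0],
    "r2": [1, 1, 0, 1],
    "r3": [1, 0, 1, 0],
    "r4": [0, 0, 1, 0],
    "r5": [0, 1, 0, 0],
    "r6": [0, 0, 0, 1],
}

# Total scores, then split respondents into high- and low-scoring halves.
totals = {r: sum(answers) for r, answers in responses.items()}
ranked = sorted(responses, key=totals.get, reverse=True)
high, low = ranked[:3], ranked[3:]

# An item discriminates well if the high group endorses it far more often.
indices = []
for item in range(4):
    p_high = sum(responses[r][item] for r in high) / len(high)
    p_low = sum(responses[r][item] for r in low) / len(low)
    indices.append(p_high - p_low)
    print(f"item {item}: discrimination index = {p_high - p_low:+.2f}")
```

Here item 0 separates high from low scorers perfectly, while an index near zero (item 3) marks an item that fails to discriminate and would be dropped from the test.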
Contd..