
Module 3 - Sampling Design - Measurement Techniques

The document outlines the concepts of sampling design and measurement techniques, emphasizing the importance of selecting an appropriate sample from a population for research. It distinguishes between census and sample surveys, detailing steps and criteria for effective sampling design, as well as the differences between probability and non-probability sampling methods. Additionally, it discusses measurement scales, sources of error in measurement, and the tests of sound measurement, including validity, reliability, and practicality.

Bhargavi K

Assistant Professor
MODULE-3
Sampling Design & Measurement Techniques

A sampling design is a definite plan for obtaining a sample from a given population.
It refers to the technique or procedure the researcher would adopt in selecting items
for the sample.
The sample design is determined before data is collected. There are many sample designs from
which a researcher can choose. Some designs are relatively more precise and easier to apply
than others. The researcher must select or prepare a sample design that is reliable and
appropriate for the research study.
CENSUS AND SAMPLE SURVEY:
Census Survey: All items in any field of inquiry constitute a ‘Universe’ or ‘Population.’
A complete enumeration of all items in the ‘population’ is known as a census inquiry.
It can be presumed that in such an inquiry, when all items are covered, no element of chance is
left and the highest accuracy is obtained.
However, this type of inquiry involves a great deal of time, money and energy. When the field of
inquiry is large, the method therefore becomes difficult to adopt because of the resources required.
Usually, the government is the only institution that can get a complete enumeration carried out.
Sample Survey: When field studies are undertaken in practice, considerations of time and
cost lead to the selection of only a few respondents. The respondents
selected should be as representative of the total population as possible in order to produce
valid results. The selected respondents constitute what is technically called a 'sample', and the
selection process is called the 'sampling technique'. The survey so conducted is known as a
'sample survey'.
Steps in Sampling Design:
• Type of universe: The first step in developing any sample design is to clearly
define the set of objects, technically called the universe, to be studied.
➢ Finite universe: the population of a city, the workers in a factory, etc.
➢ Infinite universe: the number of stars in the sky, the listeners of a radio programme, etc.
• Sampling unit:
A decision has to be taken concerning the sampling unit before selecting a sample. A sampling
unit may be a geographical one such as a state, district or village. The researcher will have to
decide on one or more such units for the study.
• Source list: Also known as the 'sampling frame', this is the list from which the sample
is to be drawn. It contains the names of all items of the universe.
• Size of sample: This refers to the number of items to be selected from the universe to
constitute a sample. The size of sample should neither be excessively large, nor too small. It
should be optimum. An optimum sample is one which fulfills the requirements of efficiency,
representativeness, reliability and flexibility.
• Parameters of interest: In determining the sample design, one must consider the
question of the specific population parameters which are of interest.
Example: proportion of persons with some characteristics, important sub-groups in the
population, etc.,
• Budgetary constraint: Cost considerations, from practical point of view, have a major
impact upon decisions relating to not only the size of the sample but also to the type of
sample.
• Sampling procedure: The researcher must decide about the technique to be used in
selecting the items for the sample so that the sampling error is small.
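The "size of sample" step above is often made concrete with the standard formula for estimating a population proportion, n = z²·p(1−p)/e². A minimal sketch; the confidence level and margin of error used below are illustrative assumptions, not values from the notes:

```python
import math

def sample_size_for_proportion(z: float, p: float, e: float) -> int:
    """Required sample size to estimate a population proportion.

    z: z-value for the desired confidence level (1.96 for 95%)
    p: anticipated proportion (0.5 is the most conservative choice)
    e: acceptable margin of error (e.g. 0.05 for +/- 5 points)
    """
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n)  # round up so the margin of error is not exceeded

# 95% confidence, worst-case p = 0.5, 5% margin of error
print(sample_size_for_proportion(1.96, 0.5, 0.05))  # 385
```

Tightening the margin of error from 5% to 3% roughly triples the required sample, which is why budgetary constraints and sample size have to be decided together.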

CRITERIA OF SELECTING A SAMPLING PROCEDURE:


Two costs are involved in a sampling analysis viz., the cost of collecting the data and the cost of
an incorrect inference resulting from the data.
Researchers must keep in view the two causes of incorrect inferences viz., systematic bias and
sampling error.
a) A systematic bias results from errors in the sampling procedures, and it cannot be
reduced or eliminated by increasing the sample size.
At best the causes responsible for these errors can be detected and corrected.
Usually a systematic bias is the result of one or more of the following factors:
1. Inappropriate sampling frame
2. Defective measuring device
3. Non-respondents
4. Indeterminacy principle
5. Natural bias in the reporting of data
1. Inappropriate sampling frame: If the sampling frame is inappropriate i.e., a biased
representation of the universe, it will result in a systematic bias.
2. Defective measuring device: If the measuring device is defective, it will result in
systematic bias. In survey work, systematic bias can result if the questionnaire or the
interviewer is biased. Similarly, if the physical measuring device is defective there will be
systematic bias in the data collected through such a measuring device.
3. Non-respondents: If we are unable to sample all the individuals initially included in the
sample, there may arise a systematic bias.
4. Indeterminacy principle: Sometimes individuals act differently when they know they are
being observed than they do in unobserved situations.
For instance, if workers are aware that somebody is observing them during a work study,
on the basis of which the average time to complete a task will be determined and the
piece-work quota set accordingly, they generally tend to work more slowly than they
would if unobserved. Thus, the indeterminacy principle may also be a cause of
systematic bias.
5. Natural bias in the reporting of data: Natural bias of respondents in reporting data is
often the cause of systematic bias in many inquiries. There is usually a downward bias in
income data collected by the government taxation department, whereas there is an upward bias
in income data collected by social organisations. People generally understate their
incomes when asked for tax purposes but overstate them when social status is at stake.
b) Sampling error occurs because we are typically unable to collect data from an
entire population and must instead rely on a sample. Since a sample is only a subset of
the population, it may not perfectly represent the population as a whole, and this
discrepancy is the sampling error.
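Sampling error can be illustrated by repeatedly drawing samples from a synthetic universe and watching the estimate's average error shrink as the sample grows. A sketch with made-up income data (the population parameters below are illustrative assumptions):

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# A synthetic "universe": incomes of 10,000 people
population = [random.gauss(50_000, 12_000) for _ in range(10_000)]
true_mean = statistics.mean(population)

# Sampling error = gap between a sample estimate and the true value.
# Averaged over many repeated samples, it shrinks as the sample grows.
avg_error = {}
for n in (10, 100, 1_000):
    errors = [abs(statistics.mean(random.sample(population, n)) - true_mean)
              for _ in range(200)]
    avg_error[n] = statistics.mean(errors)
    print(f"n={n:>5}: average absolute sampling error = {avg_error[n]:,.0f}")
```

This is the sense in which sampling error, unlike systematic bias, can be reduced by increasing the sample size.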
Characteristics of a good Sample Design:
• The sample design must result in a truly representative sample.
• It must result in only a small sampling error.
• It must be feasible in the context of the funds available for the research study.
• It must allow systematic bias to be controlled.
• The results of the sample study should be applicable, in general, to the
universe with a reasonable level of confidence.

Probability and non- probability sample designs:

Non- probability sampling:


Non-probability sampling is that sampling procedure which does not afford any basis for
estimating the probability that each item in the population has of being included in the
sample.
Non-probability sampling is also known by different names such as deliberate sampling,
purposive sampling and judgement sampling.
Example: If the economic conditions of people living in a state are to be studied, a few towns
and villages may be purposively selected for intensive study on the principle that they can
be representative of the entire state. Thus, the judgement of the organisers of the study
plays an important part in this sampling design.
Quota sampling is also an example of non-probability sampling.
Under quota sampling the interviewers are simply given quotas to be filled from the
different levels, with some restrictions on how they are to be filled.
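Quota sampling can be sketched as an interviewer filling fixed quotas per group with whoever comes along first, which is why the probability of any individual's inclusion is unknown. The names, groups and quotas below are illustrative assumptions:

```python
# Quota sampling sketch: the interviewer fills fixed quotas per group,
# taking respondents in the order they happen to be encountered --
# so inclusion probabilities cannot be estimated.
people = [
    ("Asha", "urban"), ("Ravi", "urban"), ("Meena", "rural"),
    ("Kiran", "rural"), ("Vijay", "urban"), ("Lata", "rural"),
]
quotas = {"urban": 2, "rural": 1}

sample = []
for name, group in people:           # respondents encountered in order
    if quotas.get(group, 0) > 0:     # still room in this group's quota?
        sample.append(name)
        quotas[group] -= 1

print(sample)  # ['Asha', 'Ravi', 'Meena'] -- earliest arrivals fill each quota
```

Later arrivals in an already-full quota group have no chance of inclusion at all, which is the defining weakness of non-probability designs.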

Probability sampling:
Probability sampling is also known as 'random sampling' or 'chance sampling'.
Under this sampling design, every item of the universe has an equal chance of inclusion in
the sample.
Example: lottery method in which individual units are picked up from the whole group not
deliberately but by some mechanical process. Here it is blind chance alone that determines
whether one item or the other is selected.
The results obtained from probability or random sampling can be assured in terms of
probability.
Random sampling rests on the law of statistical regularity, which states that if, on average,
the sample chosen is a random one, it will have the same composition and
characteristics as the universe. This is why random sampling is considered the
best technique for selecting a representative sample.
The implications of random sampling (or simple random sampling) are:
a) It gives each element in the population an equal probability of getting into the sample;
and all choices are independent of one another.
b) It gives each possible sample combination an equal probability of being chosen.
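Property (a) above, that every element has an equal chance of inclusion, can be checked empirically with a small simulation of the lottery method (the universe size and sample size below are illustrative assumptions):

```python
import random
from collections import Counter

random.seed(7)  # reproducible sketch

universe = list(range(1, 11))        # ten items, like slips in a lottery drum
draws = Counter()

# Draw a simple random sample of 3 items, many times over.
trials = 30_000
for _ in range(trials):
    for item in random.sample(universe, 3):
        draws[item] += 1

# Each item should be included in about 3/10 of the samples.
for item in universe:
    share = draws[item] / trials
    print(f"item {item:>2}: inclusion rate {share:.3f}")  # all near 0.300
```

Contrast this with the quota-sampling situation, where some population members have an inclusion probability of zero and the rest an unknown one.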

Measurement and Scaling Techniques:


Measurement is a process of mapping aspects of a domain onto other aspects of a range
according to some rule of correspondence.
Measurement Scales
The most widely used classification of measurement scales is:
(a) Nominal scale
(b) Ordinal scale
(c) Interval scale
(d) Ratio scale
(a) Nominal scale:
A nominal scale is simply a system of assigning number symbols to events in order to label
them.
Example: the assignment of numbers to basketball players in order to identify them.
The nominal scale is the least powerful level of measurement.
(b) Ordinal scale:
The lowest level of the ordered scale that is commonly used is the ordinal scale.
A student’s rank in his graduation class involves the use of an ordinal scale.
(c) Interval scale:
The intervals are adjusted in terms of some rule that has been established as a basis for
making the units equal. The centigrade scale is an example of an interval scale.
(d) Ratio scale:
Ratio scales have an absolute or true zero of measurement.
With a ratio scale one can make statements such as "Jyoti's typing performance was
twice as good as Reetu's." The ratio involved has real significance and permits a
kind of comparison that is not possible with an interval scale.
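The "twice as good" comparison is only meaningful on a ratio scale. A small sketch contrasts the centigrade (interval) scale, whose ratios change under a unit conversion because its zero is arbitrary, with weight (ratio), whose ratios survive any unit change:

```python
# Interval vs ratio scales: ratios are only meaningful with a true zero.
# The centigrade scale is interval -- its zero point is arbitrary -- so
# "40 deg C is twice as hot as 20 deg C" does not survive a unit change.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

ratio_c = 40 / 20                                              # 2.0 in Celsius
ratio_f = celsius_to_fahrenheit(40) / celsius_to_fahrenheit(20)  # ~1.53 in Fahrenheit
print(ratio_c, round(ratio_f, 2))

# A ratio scale (true zero) keeps its ratios under unit changes:
# 40 kg vs 20 kg is a 2:1 ratio in kilograms and in pounds alike.
KG_TO_LB = 2.20462
ratio_kg = 40 / 20
ratio_lb = (40 * KG_TO_LB) / (20 * KG_TO_LB)
print(ratio_kg, ratio_lb)  # both 2.0
```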

Sources of error in measurement:


The following are the possible sources of error in measurement.
(a) Respondent: At times the respondent may be reluctant to express strong negative
feelings, or it is just possible that he may have very little knowledge but may not admit his
ignorance. All this reluctance is likely to result in an interview of ‘guesses.’ Transient factors
like fatigue, boredom, anxiety, etc. may limit the ability of the respondent to respond
accurately and fully.
(b) Situation: Situational factors may also come in the way of correct measurement. Any
condition which places a strain on interview can have serious effects on the interviewer-
respondent rapport. For instance, if someone else is present, he can distort responses by
joining in or merely by being present. If the respondent feels that anonymity is not assured,
he may be reluctant to express certain feelings.
(c) Measurer: The interviewer can distort responses by rewording or reordering questions.
His behaviour, style and looks may encourage or discourage certain replies from
respondents. Careless mechanical processing may distort the findings. Errors may also creep
in because of incorrect coding, faulty tabulation and/or statistical calculations, particularly
in the data-analysis stage.
(d) Instrument: Error may arise because of the defective measuring instrument. The use of
complex words, beyond the comprehension of the respondent, ambiguous meanings, poor
printing, inadequate space for replies, response choice omissions, etc. are a few things that
make the measuring instrument defective and may result in measurement errors. Another
type of instrument deficiency is the poor sampling of the universe of items of concern.

Tests of sound measurement:


Sound measurement must meet the tests of validity, reliability and practicality.
a) Tests of validity: Validity is the most critical criterion; it indicates the degree to which
an instrument measures what it is supposed to measure.
In other words, validity is the extent to which differences found with a measuring instrument
reflect true differences among those being tested.
Three types of validity are:
(i) Content validity - is the extent to which a measuring instrument provides adequate
coverage of the topic under study.
(ii) Criterion-related validity - relates to our ability to predict some outcome or estimate the
existence of some current condition.
(iii) Construct validity - is the most complex and abstract. A measure is said to possess
construct validity to the degree that it confirms predicted correlations with other theoretical
propositions.
b) Tests of reliability: A measuring instrument is reliable if it provides consistent results.
A reliable measuring instrument contributes to validity, but a reliable instrument need
not be a valid instrument.
For instance, a scale that consistently overweighs objects by five kg is a reliable scale, but
it does not give a valid measure of weight. The reverse, however, does hold: a valid
instrument is always reliable.
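The five-kg scale example can be simulated: the readings below are consistent (small spread, hence reliable) yet systematically wrong (large bias, hence not valid). The true weight and the jitter values are illustrative assumptions:

```python
import statistics

true_weight = 60.0  # kg, illustrative

# A faulty scale: it consistently reads 5 kg too high, with tiny jitter.
readings = [true_weight + 5.0 + noise
            for noise in (-0.1, 0.0, 0.1, -0.05, 0.05)]

spread = statistics.pstdev(readings)            # small -> reliable (consistent)
bias = statistics.mean(readings) - true_weight  # large -> not valid (wrong)

print(f"spread of readings: {spread:.2f} kg")   # consistent, hence reliable
print(f"systematic bias:    {bias:.2f} kg")     # ~5 kg off, hence not valid
```

Note that no amount of repeated weighing (i.e., no increase in sample size) removes the 5 kg bias, which mirrors the earlier point that systematic bias cannot be reduced by enlarging the sample.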
The stability aspect is concerned with securing consistent results with repeated
measurements of the same person and with the same instrument.
The equivalence aspect considers how much error may get introduced by different
investigators or different samples of the items being studied.
Reliability can be improved in the following two ways:
• By standardizing the conditions under which the measurement takes place. This will
improve the stability aspect.
• By carefully designing directions for measurement with no variation from group to group,
by using trained and motivated persons to conduct the research, and by broadening the
sample of items used. This will improve the equivalence aspect.
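The stability aspect is commonly quantified as test-retest reliability: the same people take the same test twice, and the two sets of scores are correlated. A minimal sketch; the five pairs of scores below are made-up illustrative data:

```python
import math

# Test-retest reliability sketch: the same five people sit the same
# test twice; a correlation near 1 between the two sittings indicates
# stability. The scores are illustrative, not real data.
first = [12, 15, 9, 20, 17]
second = [13, 14, 10, 19, 18]

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(first, second)
print(f"test-retest reliability r = {r:.2f}")  # close to 1 -> stable results
```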
c) Test of Practicality: The practicality characteristic of a measuring instrument can be judged
in terms of economy, convenience and interpretability.
Economy consideration suggests that some trade-off is needed between the ideal research
project and that which the budget can afford.
Convenience test suggests that the measuring instrument should be easy to administer. For
this purpose, one should give due attention to the proper layout of the measuring
instrument.
The interpretability consideration is especially important when persons other than the
designers of the test are to interpret the results.
The measuring instrument, in order to be interpretable, must be supplemented by
(a) detailed instructions for administering the test
(b) scoring keys
(c) evidence about the reliability and
(d) guides for using the test and for interpreting results.
