Unit 1.2

The research process involves systematic steps to generate knowledge on a specific topic, starting from formulating a research problem to writing a report. Key steps include evaluating existing literature, creating hypotheses, designing research, collecting and analyzing data, and testing hypotheses. A well-structured research design is crucial for effective data collection and analysis, ensuring the reliability and validity of findings.


Research Process

The research process is a set of systematic procedures that enable the researcher to generate
knowledge and focus on the topic of interest.
Steps of research process

Step 1: Formulate the research problem: Initially the problem may be stated in a broad, general way; any ambiguities relating to the problem are then resolved. The formulation of a general topic into a specific research problem thus constitutes the first step in a scientific enquiry. Essentially two steps are involved in formulating the research problem, viz., understanding the problem thoroughly, and rephrasing it into meaningful terms from an analytical point of view. There are two types of research problems, viz., those which relate to states of nature and those which relate to relationships between variables.
The subject of your project can guide you in determining your research objectives and in choosing the research tools and data-collection methods (survey, observation, focus group, interview, SPSS, MATLAB, Tableau).

Step 2: Evaluating the existing literature: Learn more about the problem by reviewing the available information. Study previous research and how those studies were conducted; this will give you an idea of objectives, target populations, gap analysis, future areas for research, and limitations. Literature may be conceptual or empirical and is available as research papers, review papers, case studies, book chapters, academic journals, conference proceedings, government reports, books, etc.; one source will lead to another. Write down the synopsis.

Step 3: Creating a hypothesis: Hypotheses are assumptions about the problem at hand; they also help you identify logical relationships between variables. You can start creating your hypotheses based on the information obtained from your preliminary research. At this stage the null and alternate hypotheses are stated and a level of significance is chosen.
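
As a minimal illustration (hypothetical hypotheses and values, written here in Python), the null hypothesis, the alternate hypothesis and the level of significance combine into a simple decision rule:

# Hypothetical example: testing whether a new teaching method changes mean test scores.
H0 = "mean score with new method == mean score with old method"   # null hypothesis
H1 = "mean score with new method != mean score with old method"   # alternate hypothesis
alpha = 0.05                                                       # level of significance

def decide(p_value: float, alpha: float = 0.05) -> str:
    """Reject H0 when the observed p-value falls below the chosen significance level."""
    return "reject H0 (accept H1)" if p_value < alpha else "fail to reject H0"

print(decide(0.03, alpha))  # -> reject H0 (accept H1)
print(decide(0.20, alpha))  # -> fail to reject H0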

Step 4: Designing the research:

The function of research design is to provide for the collection of relevant evidence with
minimal expenditure of effort, time and money. But how all these can be achieved depends
mainly on the research purpose. Research purposes may be grouped into four categories, viz.,
(i) Exploration (the design should provide the opportunity for considering many different aspects of the problem), (ii) Description (a suitable design is one that minimises bias and maximises the reliability of the data collected and analysed), (iii) Diagnosis, and (iv) Experimentation.
There are several research designs, such as:

Experimental (a variable is manipulated in order to determine its effects, typically by comparison with a control group; such designs are used for hypothesis testing)

Informal designs:
before-and-after without control,
after-only with control,
before-and-after with control

Formal designs:
completely randomized design,
randomized block design,
Latin square design,
simple and complex factorial designs

Non-experimental (no manipulation or control of any variables; descriptive, observational, or correlational data)

Step 5: Determining sample design: A sample design is a definite plan, determined before any data are actually collected, for obtaining a sample from a given population.

Samples can be either probability samples or non-probability samples.

In probability samples each element has a known probability of being included in the sample (simple random sampling, systematic sampling, stratified sampling, cluster/area sampling).

Non-probability samples do not allow the researcher to determine this probability; they include convenience sampling, judgement sampling and quota sampling techniques.
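
The following short Python sketch, using a made-up sampling frame of 100 units and illustrative sample sizes, shows how three of the probability sampling techniques named above could be drawn:

import random

random.seed(42)
population = list(range(1, 101))          # hypothetical sampling frame of 100 units
n = 10                                    # desired sample size

# Simple random sampling: every unit has an equal, known chance of selection.
simple_random = random.sample(population, n)

# Systematic sampling: pick a random start, then every k-th unit thereafter.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: sample proportionally within predefined strata (here, two halves).
strata = {"stratum_A": population[:50], "stratum_B": population[50:]}
stratified = [unit for group in strata.values() for unit in random.sample(group, n // 2)]

print(simple_random, systematic, stratified, sep="\n")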
Step 6: Collecting data: Once you identify the population, you can start to collect data on the subject. Organize the data you gather so that it is easier to work with. Data are critical in providing the information needed to answer the research question. When conducting research, consider the sources you need for your research question.

You can utilize a combination of primary sources and secondary sources.

Primary research: experiment, survey, case study, focus group, interviews. Secondary
research: Literature review, Systematic review, Meta-analysis, Official and unofficial
reports, Library resources (textbooks, journal articles, and research articles)

Step 7: Analysing the data: Data analysis consists of interconnected steps such as identifying categories, applying coding and tabulation to these categories, and drawing statistical conclusions. The coding operation, usually done at this stage, transforms the categories of data into symbols that may be tabulated and counted. Editing is the procedure that improves the quality of the data for coding; once the data are coded, the stage is set for tabulation. Tabulation is the technical procedure by which the classified data are put in the form of tables.
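
A minimal Python sketch of editing, coding and tabulation, using hypothetical survey responses and an assumed codebook:

from collections import Counter

# Hypothetical raw survey responses.
responses = ["agree", "Agree ", "disagree", "neutral", "agree", "disagree", "agree"]

# Editing: normalise case and whitespace so identical answers are treated alike.
edited = [r.strip().lower() for r in responses]

# Coding: transform categories into symbols (numeric codes) that may be counted.
codebook = {"disagree": 1, "neutral": 2, "agree": 3}
coded = [codebook[r] for r in edited]
print(coded)

# Tabulation: put the classified data into table form (here, a frequency table).
frequency_table = Counter(edited)
for category, count in frequency_table.items():
    print(f"{category:<10} {count}")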
Step 8: Hypothesis testing: After analysing the data as stated above, the researcher is in a position to test the hypotheses. Various tests, such as the chi-square test, t-test and F-test, have been developed by statisticians for this purpose. Hypothesis testing will result in either accepting the hypothesis or rejecting it.
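
As an illustration, the sketch below runs an independent-samples t-test on made-up data; it assumes the SciPy library is available:

from scipy import stats

# Hypothetical test scores for an experimental group and a control group.
experimental = [78, 85, 90, 74, 88, 82, 91, 79]
control      = [72, 75, 80, 70, 77, 74, 69, 73]

# H0: the two group means are equal; H1: they differ. Significance level alpha = 0.05.
t_statistic, p_value = stats.ttest_ind(experimental, control)

alpha = 0.05
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")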

Step 9: Generalisation and interpretation: If a hypothesis is tested and upheld several times, it may be possible for the researcher to arrive at a generalisation, i.e., to build a theory. As a matter of fact, the real value of research lies in its ability to arrive at certain generalisations.

Step 10: Writing/ Preparing a report: Finally, the researcher has to prepare the report of
what has been done by him. Writing of report must be done with great care keeping in view
the following:

1. The layout of the report should be as follows: (i) the preliminary pages; (ii) the main
text, and (iii) the end matter.

In its preliminary pages the report should carry:
title and date,
acknowledgements and foreword,
table of contents,
list of tables, and
list of graphs and charts.

The main text of the report should have the following parts:
(a) Introduction: It should contain a clear statement of the objective of the research
and an explanation of the methodology adopted in accomplishing the research.
The scope of the study along with various limitations should as well be stated in
this part.
(b) Summary of findings: After the introduction there would appear a statement of findings
and recommendations in non-technical language. If the findings are extensive, they
should be summarised.

(c) Main report: The main body of the report should be presented in logical sequence and
broken down into readily identifiable sections.

(d) Conclusion: Towards the end of the main text, researcher should again put down the
results of his research clearly and precisely. In fact, it is the final summing up.

At the end of the report, appendices should be given in respect of all technical data. A bibliography, i.e., a list of the books, journals, reports, etc. consulted, should also be given at the end. An index should also be given, especially in a published research report.
Research problem formulation

Research problem formulation is the process of clearly defining the problem that a research
project will address. It involves identifying the problem, its context, and its root cause, and
then proposing a solution.
Steps for formulating a research problem
1. Identify the problem: State the problem as a question or statement.
2. Review the context: Consider the problem's background and any related literature.
3. Identify research gaps: Look for unanswered questions, untested variables, or new
problems.
4. Define variables: Identify and define the variables that will be used in the study.
5. Consider consequences: Think about the potential outcomes of different courses of action.
6. Propose a solution: Outline how the problem will be addressed and what the benefits of the
solution will be.
What makes a good research question?
- It's narrow and specific
- It seeks to improve knowledge on an important topic
- It's relevant to the type of study being conducted

Research Design

Just as for better, economical and attractive construction of a house, we need a blueprint (or
what is commonly called the map of the house) well thought out and prepared by an expert
architect, similarly we need a research design or a plan in advance of data collection and
analysis for our research project.

“A research design is the arrangement of conditions for collection and analysis of data in a
manner that aims to combine relevance to the research purpose with economy in procedure.”
In fact, the research design is the conceptual structure within which research is conducted; it
constitutes the blueprint for the collection, measurement and analysis of data.

Keeping in view the above stated design decisions, one may split the overall research design
into the following parts:

(a) the sampling design which deals with the method of selecting items to be observed for the
given study;
(b) the observational design which relates to the conditions under which the observations are
to be made;
(c) the statistical design which concerns the question of how many items are to be
observed and how the information and data gathered are to be analysed; and
(d) the operational design which deals with the techniques by which the procedures specified
in the sampling, statistical and observational designs can be carried out.

In brief, research design must, at least, contain—(a) a clear statement of the research
problem; (b) procedures and techniques to be used for gathering information; (c) the
population to be studied; and (d) methods to be used in processing and analysing data.

Important concepts relating to research design

1. Dependent / Independent variables


2. Discrete / Continuous variables
3. Extraneous variable
Independent variables that are not related to the purpose of the study, but may
affect the dependent variable are termed as extraneous variables.

Suppose the researcher wants to test the hypothesis that there is a relationship
between children’s gains in social studies achievement and their self-concepts. In this
case self-concept is an independent variable and social studies achievement is a
dependent variable. Intelligence may as well affect the social studies achievement, but
since it is not related to the purpose of the study undertaken by the researcher, it will
be termed as an extraneous variable.

Whatever effect is noticed on the dependent variable as a result of extraneous variable(s) is technically described as an ‘experimental error’.

4. Control: One important characteristic of a good research design is to minimise the influence or effect of extraneous variable(s). The technical term ‘control’ is used when we design the study so as to minimise the effects of extraneous independent variables. In experimental research, the term ‘control’ is used to refer to restraining experimental conditions.

5. Confounded relationship: When the dependent variable is not free from the
influence of extraneous variable(s), the relationship between the dependent and
independent variables is said to be confounded by an extraneous variable(s).

6. Experimental and control groups: In experimental hypothesis-testing research, when a group is exposed to usual conditions it is termed a ‘control group’, but when the group is exposed to some novel or special condition it is termed an ‘experimental group’.
7. Treatment: The different conditions under which experimental and control groups are put are usually referred to as ‘treatments’. If we want to determine through an experiment the comparative impact of three varieties of fertilizer on the yield of wheat, the three varieties of fertilizer will be treated as three treatments.
Different research designs
1. In exploratory research studies
2. In Descriptive and Diagnostic research studies
3. In Hypothesis testing research studies (Experimental research)

Exploratory research design

Exploratory research studies are also termed as formulative research studies.


The main purpose of such studies is that of formulating a problem for more
precise investigation. As such the research design appropriate for such studies
must be flexible enough to provide opportunity for considering different aspects
of a problem under study.

Generally, the following three methods in the context of research design for
such studies are talked about: (a) the survey of concerning literature; (b) the
experience survey and (c) the analysis of ‘insight-stimulating’ examples.

The survey of concerning literature: reviewing the related literature, including hypotheses and findings stated by earlier workers, helps in formulating the research problem more precisely.

Experience survey means the survey of people who have had practical
experience with the problem to be studied. The object of such a survey is to
obtain insight into the relationships between variables and new ideas relating to
the research problem.

Analysis of ‘insight-stimulating’ examples

This is a type of research that aims to uncover deep, meaningful understandings or "insights" about a topic by going beyond surface-level data, often prompting new perspectives, questions and potential solutions through a more exploratory and qualitative approach. It focuses on understanding the "why" behind phenomena rather than just describing "what" is happening.
Descriptive and Diagnostic research studies

Descriptive research studies are those studies which are concerned with
describing the characteristics of a particular individual, or of a group, whereas
diagnostic research studies determine the frequency with which something
occurs or its association with something else.

The studies concerning whether certain variables are associated are examples of
diagnostic research studies. As against this, studies concerned with specific
predictions, with narration of facts and characteristics concerning individual,
group or situation are all examples of descriptive research studies. Most of the
social research comes under this category.
Hypothesis testing research studies (Experimental research)

Hypothesis-testing research studies (generally known as experimental studies) are those where the researcher tests hypotheses of causal relationships between variables. Professor R.A. Fisher’s name is associated with experimental designs; the study of experimental designs has its origin in agricultural research. Professor Fisher found that by dividing agricultural fields or plots into different blocks and conducting experiments within each of these blocks, more reliable results could be obtained.

BASIC PRINCIPLES OF EXPERIMENTAL DESIGNS

Professor Fisher has enumerated three principles of experimental designs:


(1) Principle of Replication;
(2) Principle of Randomization; and the
(3) Principle of Local Control.

According to the Principle of Replication, the experiment should be repeated more than once. Thus, each treatment is applied in many experimental units instead of one.

By doing so the statistical accuracy of the experiments is increased.


For example, suppose we are to examine the effect of two varieties of
rice. For this purpose we may divide the field into two parts and grow
one variety in one part and the other variety in the other part. We can
then compare the yield of the two parts and draw conclusion on that
basis. But if we are to apply the principle of replication to this
experiment, then we first divide the field into several parts, grow one
variety in half of these parts and the other variety in the remaining
parts. We can then collect the yield data of the two varieties and draw conclusions by comparing them.

The Principle of Randomization provides protection against the effects of extraneous factors when we conduct an experiment. In other words, this principle indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of ‘chance’. For instance, if we grow one variety of rice, say, in the first half of the parts of a field and the other variety in the other half, it is quite possible that the soil fertility is different in the first half compared with the other half. If this is so, our results would not be realistic.
The Principle of Local Control is another important principle of
experimental designs. Under it the extraneous factor, the known
source of variability, is made to vary deliberately over as wide a range
as necessary and this needs to be done in such a way that the
variability it causes can be measured and hence eliminated from the
experimental error.
According to the principle of local control, we first divide the field
into several homogeneous parts, known as blocks, and then each such
block is divided into parts equal to the number of treatments. Then the
treatments are randomly assigned to these parts of a block. Dividing
the field into several homogenous parts is known as ‘blocking’. In
general, blocks are the levels at which we hold an extraneous factor
fixed.

Experimental research design

Experimental design refers to the framework or structure of an experiment. Experimental designs may be informal (a less sophisticated form of analysis) or formal (offering relatively more control and using precise statistical procedures for analysis).

Important experiment designs are as follows:

(a) Informal experimental designs:


(i) Before-and-after without control design.
(ii) After-only with control design.
(iii) Before-and-after with control design.
(b) Formal experimental designs:
(i) Completely randomized design (C.R. Design).
(ii) Randomized block design (R.B. Design).
(iii) Latin square design (L.S. Design).
(iv) Factorial designs

1. Before-and-after without control design: In such a design a single test group or area is selected and the dependent variable is
measured before the introduction of the treatment. The treatment
is then introduced and the dependent variable is measured again after
the treatment has been introduced. The effect of the treatment would
be equal to the level of the phenomenon after the treatment minus the
level of the phenomenon before the treatment.
2. After-only with control design: In this design two groups or areas
(test area and control area) are selected and the treatment is
introduced into the test area only. The dependent variable is then
measured in both the areas at the same time. Treatment impact is
assessed by subtracting the value of the dependent variable in the
control area from its value in the test area.

3. Before-and-after with control design: In this design two areas
are selected and the dependent variable is measured in both the
areas for an identical time-period before the treatment. The
treatment is then introduced into the test area only, and the
dependent variable is measured in both for an identical time-
period after the introduction of the treatment. The treatment
effect is determined by subtracting the change in the dependent
variable in the control area from the change in the dependent
variable in test area.
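
A minimal sketch, with made-up measurements, of how the treatment effect is computed under each of the three informal designs described above:

# Hypothetical measurements of the dependent variable (e.g., average weekly sales).

# 1. Before-and-after without control: effect = after - before (single test area).
test_before, test_after = 50, 62
effect_without_control = test_after - test_before            # 12

# 2. After-only with control: effect = test area value - control area value (measured after).
test_value, control_value = 62, 55
effect_after_only = test_value - control_value                # 7

# 3. Before-and-after with control: effect = change in test area - change in control area.
test_before, test_after = 50, 62
control_before, control_after = 48, 53
effect_with_control = (test_after - test_before) - (control_after - control_before)  # 12 - 5 = 7

print(effect_without_control, effect_after_only, effect_with_control)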
(b) Formal experimental designs

(i) Completely randomized design (experimental unit is homogeneous, no blocking required, fit for laboratory research)

Involves only two principles viz., the principle of replication and the
principle of randomization of experimental designs. The essential
characteristic of the design is that subjects are randomly assigned to
experimental treatments (or vice-versa). Such a design is generally used
when experimental areas happen to be homogeneous. Technically, when all the variations due to uncontrolled extraneous factors are included under the heading of chance variation, the design is referred to as a completely randomized (C.R.) design.

(i.a) Two-group simple randomized design

The merit of such a design is that it is simple and randomizes the differences among the sample items. Its limitation is that it does not control the differential effects of the extraneous independent variables (for example, individual differences among those conducting the treatments), and as such the result of the experiment may not depict a correct picture.
(i.b) Random replication design

In the two-group simple randomized design, the effect of differences among those conducting the treatments on the dependent variable is ignored, i.e., the extraneous variable is not controlled. In a random replications design, the effect of such differences is minimised (or reduced) by providing a number of repetitions for each treatment. Each repetition is technically called a ‘replication’.
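
A short Python sketch of random assignment under a completely randomized design, with hypothetical subjects and treatments and several replications per treatment:

import random

random.seed(1)
subjects = [f"S{i}" for i in range(1, 13)]     # 12 hypothetical homogeneous subjects
treatments = ["A", "B", "C"]                   # 3 treatments, so 4 replications each

# Principle of randomization: shuffle the subjects, then deal them out to the treatments.
random.shuffle(subjects)
assignment = {t: subjects[i::len(treatments)] for i, t in enumerate(treatments)}

# Principle of replication: each treatment is applied to several experimental units.
for treatment, units in assignment.items():
    print(treatment, units)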

(ii) Randomized block design (experimental unit is heterogeneous, blocking required, fit for field research)

In the R.B. design the principle of local control can be applied along with the
other two principles of experimental designs. In the R.B. design, subjects are
first divided into groups, known as blocks, such that within each group the
subjects are relatively homogeneous in respect to some selected variable
(variation within block is minimum, while there is variation among blocks).

The number of subjects/plots in a given block would be equal to the number of treatments, and one subject/plot in each block would be randomly assigned to
each treatment. In general, blocks are the levels at which we hold the extraneous
factor fixed, so that its contribution to the total variability of data can be
measured. The main feature of the R.B. design is that in this each treatment
appears the same number of times in each block.
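
A minimal sketch of an R.B. layout with hypothetical blocks and treatments: within each block the treatments are assigned in random order, so each treatment appears exactly once per block:

import random

random.seed(2)
treatments = ["A", "B", "C", "D"]
# Four hypothetical blocks, each internally homogeneous (e.g., plots of similar fertility).
blocks = ["block_1", "block_2", "block_3", "block_4"]

layout = {}
for block in blocks:
    order = treatments[:]          # every treatment appears exactly once in every block
    random.shuffle(order)          # randomization is applied within each block
    layout[block] = order

for block, order in layout.items():
    print(block, order)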

(iii) Latin square design

The L.S. design is used when there are two major extraneous factors, such as varying soil fertility and varying seeds. In a Latin-square design with, say, five fertilizers, each fertilizer appears five times but is used only once in each row and in each column of the design. In other words, the treatments in an L.S. design are so allocated among the plots that no treatment occurs more than once in any one row or any one column. The two blocking factors are represented through the rows and the columns (one through the rows and the other through the columns). A diagrammatic form of such a design, in respect of, say, five types of fertilizers, viz., A, B, C, D and E, and the two blocking factors, viz., the varying soil fertility and the varying seeds, is sketched below.
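
Since the original diagram is not reproduced here, the following sketch constructs one valid 5 × 5 arrangement for treatments A to E (a cyclic arrangement; any arrangement in which no treatment repeats within a row or column would serve equally well):

treatments = ["A", "B", "C", "D", "E"]
n = len(treatments)

# Cyclic construction: row i is the treatment list rotated by i positions,
# so every treatment occurs exactly once in each row and each column.
square = [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]

for row in square:
    print(" ".join(row))

# Sanity check: no treatment repeats within any row or any column.
assert all(len(set(row)) == n for row in square)
assert all(len(set(col)) == n for col in zip(*square))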

(iv) Factorial designs

Factorial designs are used in experiments where the effects of varying more
than one factor are to be determined. They are especially important in several
economic and social phenomena where usually a large number of factors affect
a particular problem. Factorial designs can be of two types: (i) simple factorial
designs and (ii) complex factorial designs.

(i) Simple factorial designs: In the case of simple factorial designs, we consider the effects of varying two factors on the dependent variable; when an experiment is done with more than two factors, we use complex factorial designs. A simple factorial design is also termed a ‘two-factor factorial design’, whereas a complex factorial design is known as a ‘multi-factor factorial design’. The simplest case is the 2 × 2 simple factorial design, in which each of the two factors has two levels.
(ii) Complex factorial designs: Experiments with more than two factors at a time involve the use of complex factorial designs. A design which considers three or more independent variables simultaneously is called a complex factorial design. In the case of three factors, with one experimental variable having two treatments and two control variables each having two levels, the design is termed a 2 × 2 × 2 complex factorial design, which will contain a total of eight cells.
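
A small sketch enumerating the eight cells of such a 2 × 2 × 2 design; the factor names and levels are hypothetical:

from itertools import product

# One experimental variable and two control variables, each at two levels (hypothetical names).
treatment     = ["training", "no training"]
control_var_1 = ["male", "female"]
control_var_2 = ["urban", "rural"]

cells = list(product(treatment, control_var_1, control_var_2))
print(len(cells))          # 2 x 2 x 2 = 8 cells
for cell in cells:
    print(cell)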

Features of a good research design


The four main characteristics of a research design to be considered a good
research design are- reliability, validity, neutrality, and generalizability. A good
design has specific characteristics that allow it to produce reliable, significant
results.

The features of a research design have been explained in detail below.

Focused: The research question and goals are specific and clear. Narrow the
scope to what can be studied. For example, instead of asking, "How do students
learn?" ask, "How does peer tutoring affect chemistry learning for first-year
undergraduate students?"

Logical: The steps of the design follow each other in a reasonable order. For
example, surveys are given after interviews to clarify and quantify initial
findings.

Pragmatic: The design is realistic, given constraints. For example, limiting the number of interviews to what can be transcribed and analyzed within the study timeframe.

Adaptable: Some flexibility is built in to deal with unexpected issues or opportunities. For example, being open to modifying participant recruitment strategies if they are not yielding the needed sample.

Ethical: Protocols ensure participants are treated ethically and privacy is maintained. For example, obtaining informed consent and being transparent about the study's risks/benefits.

Transparent: The design is described in sufficient detail so others can critique and reproduce the research. For example, stating data analysis procedures in the methodology section.

Valid: Measures what it intends to measure. There are no systematic errors that
distort the results. For example, using a validated questionnaire that has been
shown to assess the underlying construct of interest.

Reliable: Can produce consistent results over multiple studies. For example,
pilot testing protocols to identify and resolve sources of measurement error.

Controlled: Extraneous variables that could influence outcomes are controlled or accounted for. For example, random assignment to conditions controls for confounding variables.
Replicable: Methods and procedures are described in enough detail so others
can reproduce the study. For example, providing transcripts of open-ended
interview questions alongside a detailed coding scheme.

Generalizable: Results can be extended to the broader population from which the sample was drawn. For example, using a representative sample of the target population.

Feasible: Can be implemented within available resources and constraints. For example, collecting existing data rather than conducting large-scale primary data collection.

Acknowledges limitations: Recognizes imperfections and biases that still exist in the design. For example, discussing how self-reported measures may suffer from social desirability bias.

Uses triangulation: Employs multiple methods to confirm research findings. For example, comparing themes from interviews to those from an open-ended survey.

Measurement in Research
In our daily life we are said to measure when we use some yardstick to
determine weight, height, or some other feature of a physical object. We also
measure when we judge how well we like a song, a painting or the personalities
of our friends. We, thus, measure physical objects as well as abstract concepts.
Properties like weight, height, etc., can be measured directly with some standard
unit of measurement, but it is not that easy to measure properties like motivation
to succeed, ability to stand stress, and the like.

“By measurement we mean the process of assigning numbers to objects or observations.” Technically speaking, measurement is a process of mapping aspects of a domain onto other aspects of a range according to some rule of correspondence.

Examples of measurement include:
coding a person’s marital status as 1, 2, 3 or 4, depending on whether the person is single, married, widowed or divorced;
Mohs’ scale of hardness, on which, if one mineral can scratch another, it receives a higher hardness number, and the numbers from 1 to 10 are assigned respectively to talc, gypsum, calcite, fluorite, apatite, feldspar, quartz, topaz, sapphire and diamond;
temperature readings in degrees Fahrenheit, such as 58°, 63°, 70°, 95°, 110°, 126° and 135°;
measuring the height and weight of individuals.

From what has been stated above, we can write that scales of measurement can
be considered in terms of their mathematical properties. The most widely used
classification of measurement scales are: (a) nominal scale; (b) ordinal scale; (c)
interval scale; and (d) ratio scale.

Nominal Data
In this artificial or nominal way, categorical data (qualitative or descriptive) can
be made into numerical data and if we thus code the various categories, we refer
to the numbers we record as nominal data. Nominal data are numerical in name
only, because they do not share any of the properties of the numbers we deal in
ordinary arithmetic.

Ordinal Data
In those situations, when we cannot do anything except set up inequalities, we
refer to the data as ordinal data.
For instance, if one mineral can scratch another, it receives a higher hardness
number and on Mohs’ scale the numbers from 1 to 10 are assigned respectively
to talc, gypsum, calcite, fluorite, apatite, feldspar, quartz, topaz, sapphire and
diamond.

With these numbers we can write 5 > 2 or 6 < 9 as apatite is harder than gypsum
and feldspar is softer than sapphire,

but we cannot write, for example, 10 – 9 = 5 – 4, because the difference in hardness between diamond and sapphire is actually much greater than that between apatite and fluorite.

Interval Data
When in addition to setting up inequalities we can also form meaningful
differences, we refer to the data as interval data. Suppose we are given the
following temperature readings (in degrees Fahrenheit): 58°, 63°, 70°, 95°, 110°,
126° and 135°. In this case,

we can write 110° > 70° or 95° < 135°, which simply means that 110° is warmer than 70° and that 95° is cooler than 135°.

We can also write for example 95° – 70° = 135° – 110°, since equal temperature
differences are equal in the sense that the same amount of heat is required to
raise the temperature of an object from 70° to 95° or from 110° to 135°.

On the other hand, it would not mean much if we said that 126° is twice as hot as 63°, even though 126° ÷ 63° = 2. To show the reason, we have only to change to the Centigrade scale, where the first temperature becomes 5/9 (126 – 32) ≈ 52°, the second temperature becomes 5/9 (63 – 32) ≈ 17°, and the first figure is now more than three times the second.

This difficulty arises from the fact that Fahrenheit and Centigrade scales both
have artificial origins (zeros) i.e., the number 0 of neither scale is indicative of
the absence of whatever quantity we are trying to measure.
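
A short sketch verifying the arithmetic above: the ratio of two Fahrenheit readings is not preserved once they are converted to Centigrade, because the zero of each scale is arbitrary:

def f_to_c(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Centigrade."""
    return 5 / 9 * (f - 32)

f1, f2 = 126.0, 63.0
print(f1 / f2)                      # 2.0  -> "twice as hot" on the Fahrenheit scale
print(f_to_c(f1), f_to_c(f2))       # about 52.2 and 17.2
print(f_to_c(f1) / f_to_c(f2))      # about 3.03 -> the ratio is not meaningful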

Ratio Data
When in addition to setting up inequalities and forming differences we can also
form quotients (i.e., when we can perform all the customary operations of
mathematics), we refer to such data as ratio data. In this sense, ratio data
includes all the usual measurement (or determinations) of length, height, money
amounts, weight, volume, area, pressures etc.

Sources of Error in Measurement


(a) Respondent: At times the respondent may be reluctant to express strong
negative feelings or it is just possible that he may have very little
knowledge but may not admit his ignorance. All this reluctance is likely
to result in an interview of ‘guesses.’ Transient factors like fatigue,
boredom, anxiety, etc. may limit the ability of the respondent to respond
accurately and fully.
(b) Situation: Any condition which places a strain on interview can have
serious effects on the interviewer-respondent rapport. For instance, if
someone else is present, he can distort responses by joining in or merely
by being present. If the respondent feels that anonymity is not assured,
he may be reluctant to express certain feelings.
(c) Measurer: The interviewer can distort responses by rewording or
reordering questions. His behaviour, style and looks may encourage or
discourage certain replies from respondents. Careless mechanical
processing may distort the findings. Errors may also creep in because of
incorrect coding, faulty tabulation and/or statistical calculations,
particularly in the data-analysis stage.

(d) Instrument: Error may arise because of the defective measuring instrument. The use of complex words, beyond the comprehension of the
respondent, ambiguous meanings, poor printing, inadequate space for
replies, response choice omissions, etc. are a few things that make the
measuring instrument defective and may result in measurement errors.
Another type of instrument deficiency is the poor sampling of the
universe of items of concern.
Scaling and Important Scaling techniques

In research we quite often face a measurement problem (since we want valid measurement but may not obtain it), especially when the concepts to be measured are complex and abstract and we do not possess standardised measurement tools.

Scaling describes the procedures of assigning numbers to various degrees of opinion, attitude and other concepts. This can be done in two ways viz., (i)
making a judgement about some characteristic of an individual and then placing
him directly on a scale that has been defined in terms of that characteristic and
(ii) constructing questionnaires in such a way that the score of individual’s
responses assigns him a place on a scale. It may be stated here that a scale is a
continuum, consisting of the highest point (in terms of some characteristic e.g.,
preference, favourableness, etc.) and the lowest point along with several
intermediate points between these two extreme points.

Important Scaling techniques

Rating scales (also known as categorical scale)

The rating scale involves qualitative description of a limited number of aspects of a thing or of traits of a person. When we use rating scales (or categorical scales), we judge an object in absolute terms against some specified criteria, i.e., we judge properties of objects without reference to other similar objects. These ratings may be in such forms as “like–dislike” or “above average, average, below average”.

The graphic rating scale

Under it the various points are usually put along a line to form a continuum, and the rater indicates his rating by simply making a mark (such as ✓) at the appropriate point on a line that runs from one extreme to the other.
The itemized rating scale (also known as numerical scale)
presents a series of statements from which a respondent selects one as best
reflecting his evaluation. These statements are ordered progressively in terms of
more or less of some property.

Ranking scales: (or comparative scales)

Under ranking scales we make relative judgements against other similar objects.
The respondents under this method directly compare two or more objects and
make choices among them.

(a) Method of paired comparisons: Under it the respondent can express his
attitude by making a choice between two objects, say between a new flavour of
soft drink and an established brand of drink. But when there are more than two
stimuli to judge, the number of judgements required in a paired comparison is given by the formula N = n(n – 1)/2, where n is the number of stimuli to be judged.
(b) Method of rank order: Under this method of comparative scaling, the
respondents are asked to rank their choices. This method is easier and faster
than the method of paired comparisons stated above. For example, with 10
items it takes 45 pair comparisons to complete the task, whereas the method of
rank order simply requires ranking of 10 items only.
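
A minimal sketch of the paired-comparison count for hypothetical stimuli, confirming that 10 items require 45 judgements:

from itertools import combinations

def paired_comparisons(n: int) -> int:
    """Number of judgements required when every pair of n stimuli is compared once."""
    return n * (n - 1) // 2

items = [f"item_{i}" for i in range(1, 11)]   # 10 hypothetical stimuli
pairs = list(combinations(items, 2))

print(paired_comparisons(10))   # 45
print(len(pairs))               # 45 (the same count, enumerated explicitly)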

Summated scale (Likert-type scale)

Summated scales consist of a number of statements which express either a
favourable or unfavourable attitude towards the given object to which the
respondent is asked to react. The respondent indicates his agreement or
disagreement with each statement in the instrument. Each response is given a
numerical score, indicating its favourableness or unfavourableness, and the
scores are totalled to measure the respondent’s attitude.

In a Likert scale, the respondent is asked to respond to each of the statements in terms of several degrees, usually five degrees (but at times 3 or 7 may also be
used) of agreement or disagreement. For example, when asked to express
opinion whether one considers his job quite pleasant, the respondent may
respond in any one of the following ways: (i) strongly agree, (ii) agree, (iii)
undecided, (iv) disagree, (v) strongly disagree.
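
A small sketch of summated (Likert-type) scoring with hypothetical statements and responses; each response is converted to a numerical score and the scores are totalled:

# Five-degree scoring for favourable statements (reverse the scores for unfavourable ones).
scores = {"strongly agree": 5, "agree": 4, "undecided": 3, "disagree": 2, "strongly disagree": 1}

# One hypothetical respondent's answers to four favourable statements about the job.
responses = ["agree", "strongly agree", "undecided", "agree"]

total = sum(scores[r] for r in responses)
print(total)                     # 16 out of a possible 20
print(total / len(responses))    # mean item score 4.0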

Semantic differential scale: The semantic differential scale, or S.D. scale, developed by Charles E. Osgood, G.J. Suci and P.H. Tannenbaum (1957), is an attempt to measure the psychological meanings of an object to an individual. This scaling consists of a set of bipolar rating scales, usually of 7 points, by which one or more respondents rate one or more concepts on each scale item. For instance, S.D. scale items may be used for analysing candidates for a leadership position, with each candidate rated on every bipolar item.
