
Research

Methodology
M.Com [Semester-II]
1st Edition: 2013-14 onwards

THE MANAGEMENT CONSORTIUM


‘All for knowledge, and knowledge for all’
Research Methodology
Study Material as per new Syllabus [2013-14 onwards]

The Management Consortium


‘All for knowledge, and knowledge for all’

1st Edition: 2013-14 onwards


MRP: ₹ 200/-
Student's Discounted Price: ₹ 150/-
© All Rights Reserved with TMC
Published by: TMC, Nagpur
No part of this book may be reproduced in any form, by photocopy, microfilm, or any other means, or incorporated
into any information retrieval system, electronic or mechanical, without the written permission of the publisher. All
inquiries can be emailed to [email protected]

For retail procurement of TMC study material on BE, M.Com, MBA, DBM & PhD contact:
NAGPUR: Book World, Gokulpeth, 0712-2562999; Computer World, Sitabuldi, 0712-2564444; Shreejee Stationers, Trimurti Nagar, 9422813542; Shri Tirupati Books and Stationers, Congress Nagar, 9860661848, 0712-2456864; Om Sai Publishers & Distributers, Plot No. 29, Old Imamwada Police Station, Behind T.B. Ward, Indira Nagar, 9764673503, 9923693503, 9923693506; Vidarbha Book Distributors, Salpekar Building, Jhansi Rani Square, Sitabuldi, 0712-2524747; Kushal Book Depot, Golcha Marg, Sadar, 0712-2554280, 0992336624; Neha Pustakalay, Sakkardara Square, 8007161421; Chavan Book Depot, Golcha Marg, Sadar; Poineer Books, Plot No 5, Pooja Arcade, Near Petrol Pump, Abhyankar Nagar, 0712-2233577/66146663.
WARDHA: Gandhi Book Depot, 07152-253791, M: 9422904861; Unique Traders, Near Saibaba Mandir, M.G. Road, 07152-243617.
GONDIA: Shri Mahavir Book Depot, Pal Chowk, 07182-253401, M: 9823072632; Sunny Stores, Stadium Shop No 1, M: 09373320867.
CHANDRAPUR: Venkateshwar Stationers and Book Depot, 07172-254086, M: 9422135263; Samarth Book Depot, Near Gandhi Chowk, Main Road, 07172-253125.
For more details contact our Distributor Shri Mukesh Gujarati on 9422864426 or [email protected]



Preface

Research is a part of all systematic knowledge. It has occupied the realm of human understanding, in some form or other, from time immemorial. The thirst for new knowledge and the human urge to find solutions to problems have developed in us a faculty for search, research and re-search. Research has now become an integral part of every area of human activity.

It is in this context that this Study Material, an introduction to the subject of Research Methodology, is presented to students of the postgraduate M.Com degree.

The purpose of this Study Material is to introduce the Research Methodology paper of the M.Com course. The book covers the syllabus from the basics of the subject through to its intricacies. All concepts are explained with relevant examples and diagrams to make the material interesting for readers.

An attempt has been made by the experts at TMC to assist students by providing Study Material as per the curriculum, with non-commercial considerations. It should be understood, however, that this is exam-oriented Study Material; students are advised to attend regular lectures at their institute and to use the reference books available in the library for in-depth knowledge.

We owe much to many websites and their freely available content; we would especially like to acknowledge the content of www.wikipedia.com and the various authors whose writings formed the basis for this book. We record our thanks to them.

Finally, there is always room for improvement in whatever we do. We would appreciate any suggestions regarding this study material from readers, so that the contents can be made more interesting and meaningful. Readers can email their queries and doubts to our authors at [email protected]. We shall be glad to help promptly.

Authors and Compilation by: Team TMC, Nagpur



Syllabus and TMC Contents

Unit 1: Introduction – Meaning, objectives and types of research; research approach; motivation of research; research process; significance of research; features of good research; major problems in the research process; use of advanced technology in research.

Unit 2: Research Design – Research problem selection; problem definition techniques; components of research design; features of good design; steps in sample design; characteristics of a good sample design; probability and non-probability sampling; measurement and scaling techniques; scaling and scale construction techniques.

Unit 3: Collection and Processing of Data – Methods of data collection: primary data (questionnaire, interviews, observation); collection of secondary data; field work; survey plan; survey errors; data coding, editing and tabulation; analysis of data; tools of analysis.

Unit 4: Testing of Hypothesis – Concept of hypothesis; characteristics of hypothesis; hypothesis formulation; procedure for hypothesis testing; use of statistical techniques for testing of hypothesis. Interpretation of data – techniques of interpretation.

Unit 5: Report Writing – Qualities of a good report; layout of a project report; steps in report writing; precautions in research report writing. Research in Commerce – general management, Small Business Innovation Research (SBIR). Research in functional areas – marketing, finance, HR and production. Software packages: SPSS.

 Further Readings and References
 Model Question Paper



Unit 1 : Research Methodology

Introduction
Research comprises creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications. Research can be defined as a search for knowledge, or as any systematic investigation to establish facts. The primary purpose of applied research (as opposed to basic research) is the discovery and interpretation of facts and the development of methods and systems for the advancement of human knowledge on a wide variety of scientific matters of our world and the universe. Research can use the scientific method, but need not do so.
Scientific research relies on the application of the scientific method, a harnessing of curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world around us, and it makes practical applications possible. Scientific research is funded by public authorities, by charitable organisations and by private groups, including many companies. Scientific research can be subdivided into different classifications according to academic and application disciplines.
Research can also be defined as a scientific and systematic search for information and knowledge on a specific topic or phenomenon. In management, research is used extensively in various areas. For example, we all know that marketing is the process of planning and executing the conception, pricing, promotion and distribution of ideas, goods and services to create exchanges that satisfy individual and organizational objectives. Thus we can say that the marketing concept requires customer satisfaction, rather than profit maximization, to be the goal of an organization. The organization should be consumer-oriented and should try to understand consumers' requirements and satisfy them quickly and efficiently, in ways that are beneficial to both the consumer and the organization.
This means that any organization should try to obtain information on consumer needs and gather market intelligence to help satisfy these needs efficiently. This can be done only through research.
Research in common parlance refers to a search for knowledge. It is an endeavour to discover answers to problems (of an intellectual and practical nature) through the application of scientific methods. Research, thus, is essentially a systematic inquiry seeking facts (truths) through objective, verifiable methods in order to discover the relationships among them and to deduce from them broad conclusions. It is thus a method of critical thinking. Any organisation in the globalised environment needs a systematic supply of information, coupled with tools of analysis, for making sound decisions that involve minimum risk.



Meaning
Research in the common context refers to a search for knowledge. It can also be defined as a scientific and systematic search for information and knowledge on a specific topic or phenomenon. In management, research is used extensively in various areas.
For example, we all know that marketing is the process of planning and executing the conception, pricing, promotion and distribution of ideas, goods and services to create exchanges that satisfy individual and organizational objectives. Thus we can say that the marketing concept requires customer satisfaction, rather than profit maximization, to be the goal of an organization. The organization should be consumer-oriented and should try to understand consumers' requirements and satisfy them quickly and efficiently, in ways that are beneficial to both the consumer and the organization.
The Random House Dictionary of the English Language defines the term 'research' as a diligent and systematic inquiry or investigation into a subject in order to discover or revise facts, theories, applications, etc. This definition explains that research involves the acquisition of knowledge. Research means a search for truth. Truth means the quality of being in agreement with reality or facts; it also means an established or verified fact. To do research is to get nearer to truth, to understand reality. Research is the pursuit of truth with the help of study, observation, comparison and experimentation. In other words, the search for knowledge through an objective and systematic method of finding a solution to a problem or an answer to a question is research. There is no guarantee that the researcher will always come out with a solution or answer. Even then, to put it in Karl Pearson's words, "there is no shortcut to truth... no way to gain knowledge of the universe except through the gateway of scientific method".

Let us see some definitions of Research:


L.V. Redman and A.V.H. Mory, in their book The Romance of Research, defined research as "a systematized effort to gain new knowledge".
"Research is a scientific and systematic search for pertinent information on a specific topic." (C.R. Kothari, Research Methodology - Methods and Techniques)
"A careful investigation or inquiry specially through search for new facts in any branch of knowledge." (Advanced Learner's Dictionary of Current English)
Research refers to a process of enunciating the problem, formulating a hypothesis, collecting the facts or data, analyzing the same, and reaching certain conclusions, either in the form of a solution to the problem enunciated or in certain generalizations for some theoretical formulation.
D. Slesinger and M. Stephenson, in the Encyclopedia of Social Sciences, defined research as: "Manipulation of things, concepts or symbols for the purpose of generalizing and to extend, correct or verify knowledge, whether that knowledge aids in the construction of a theory or in the practice of an art".

To understand the term 'research' clearly and comprehensively, let us analyze the above definition.
i) Research is manipulation of things, concepts or symbols
 Manipulation means purposeful handling.
 Things means objects such as balls, rats or a vaccine.
 Concepts mean the terms designating things and the perceptions about them of which science tries to make sense. Examples: velocity, acceleration, wealth, income.
 Symbols may be signs such as +, –, ÷, ×, x, s, S, etc.



 Manipulation of a ball or a vaccine means asking: when the ball is kept on inclines of different degrees, how and at what speed does it move? When the vaccine is used, not used, used with different gaps, or used in different quantities (doses), what are the effects?
ii) Manipulation is for the purpose of generalizing
The purpose of research is to arrive at generalizations, i.e. statements of generality, so that prediction becomes easy. A generalization or conclusion of an enquiry tells us what to expect in a class of things under a class of conditions. Examples: the debt repayment capacity of farmers decreases during drought years; when price increases, demand falls; advertisement has a favourable impact on sales.
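As a rough illustration, the short Python sketch below fits a straight line to a handful of invented price and quantity figures and checks the sign of the slope, which is one simple way a generalization such as "when price increases, demand falls" might be examined against data; the figures and variable names are purely hypothetical.

# Illustrative sketch (hypothetical data): examining the generalization
# "when price increases, demand falls" by fitting a straight line.
import numpy as np

price = np.array([10, 12, 14, 16, 18, 20], dtype=float)      # unit price (invented)
quantity = np.array([95, 88, 80, 74, 65, 58], dtype=float)   # units demanded (invented)

slope, intercept = np.polyfit(price, quantity, 1)             # least-squares line
print(f"Estimated demand line: quantity = {intercept:.1f} + ({slope:.2f}) * price")

# A negative slope is consistent with the generalization; a formal study
# would also test whether the slope differs significantly from zero.
if slope < 0:
    print("The sample is consistent with 'when price increases, demand falls'.")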
iii) The purpose of research (or generalization) is to extend, correct or verify knowledge
Generalization has in turn certain effects on the established corpus or body of knowledge. It may extend or enlarge the
boundaries of existing knowledge by removing inconsistencies if any. It may correct the existing knowledge by
pointing out errors if any. It may invalidate or discard the existing knowledge which is also no small achievement. It
may verify and confirm the existing knowledge which also gives added strength to the existing knowledge. It may also
point out the gaps in the existing corpus of knowledge requiring attempts to bridge these gaps.
iv) This knowledge may be used for construction of a theory or practice of an art
The extended, corrected or verified knowledge has two possible uses to which persons may put it:
a) It may be used for theory building, so as to form a more abstract conceptual system, e.g. the theory of relativity, the theory of full employment, the theory of wages.
b) It may be used for some practical or utilitarian goal. For example, 'salesmanship and advertisement increase sales' is a generalization; from this it follows that if sales have to be increased, salesmanship and advertisement should be used.
Theory and practice are not two independent things; they are interdependent. Theory gives quality and effectiveness to practice. Practice in turn may enlarge, correct, confirm or even reject theory.

Some other definitions of Research are:


1. Redman and Mory define research as a "systematized effort to gain new knowledge".
2. Some people consider research a movement, a movement from the known to the unknown. It is actually a voyage of discovery.
3. According to Clifford Woody, "Research comprises defining and redefining problems, formulating hypotheses or suggested solutions; making deductions and reaching conclusions; and at last carefully testing the conclusions to determine whether they fit the formulating hypothesis".
On evaluating these definitions we can conclude that Research refers to the systematic method consisting of:

 Enunciating the problem,

 Formulating a hypothesis,

 Collecting the facts or data,


 Analyzing the facts and

 Reaching certain conclusions, either in the form of solutions to the problem concerned or in certain generalizations for some theoretical formulation.



Research covers the search for and retrieval of information for a specific purpose. Research has many categories,
from medical research to literary research.
Research is essentially a fact-finding process, which influences decision-making. It is a careful search or inquiry into
any subject or subject matter, which is an endeavour to discover or find out valuable facts, which would be useful for
further application or utilization. Research can be basic research or applied research. Basic research consists of studies conducted on long-range questions or for advancing scientific knowledge.

Objectives of research
Following are the key objectives of research:
1. Exploration – an understanding of an area of concern in very general terms. Example: I want to know how to go about doing more effective research on school violence.
2. Description – an understanding of what is going on. Example: I want to know the attitudes of potential clients toward air-conditioner use.
3. Explanation – an understanding of how things happen; it involves an understanding of cause-and-effect relationships between events. Example: I want to know if a group of people who have gone through a certain program have higher self-esteem than a control group.
4. Prediction – an understanding of what is likely to happen in the future. If I can explain, I may be able to predict. Example: if one group had higher self-esteem, is the same likely to happen with another group?
5. Intelligent intervention – an understanding of what or how, in order to help more effectively.
6. Awareness – an understanding of the world, often gained by a failure to describe or explain.

Significance of Research
Research is the process of systematic and in-depth study or search for a solution to a problem or an answer to a
question backed by collection, compilation, presentation, analysis and interpretation of relevant details, data and
information. It is also a systematic endeavour to discover valuable facts or relationships. Research may involve careful
enquiry or experimentation and result in discovery or invention. There cannot be any research which does not increase
knowledge which may be useful to different people in different ways.
Let us see the need for research to business organizations and their managers and how it is useful to them.
i) Industrial and economic activities have assumed huge dimensions. The size of modern business
organizations indicates that managerial and administrative decisions can affect vast quantities of capital and a large
number of people.
Trial-and-error methods are not appreciated, as mistakes can be tremendously costly. Decisions must be quick but accurate, timely and objective, i.e. based on facts and realities. In this backdrop, business decisions nowadays are mostly influenced by research and research findings. Thus, research helps in quick and objective decisions.
ii) Research, being a fact-finding process, significantly influences business decisions. The business management is
interested in choosing that course of action which is most effective in attaining the goals of the organization. Research
not only provides facts and figures to support business decisions but also enables the business to choose one which is
best.
iii) A considerable number of business problems are now given quantitative treatment with some degree of success
with the help of operations research.



Research into management problems may result in certain conclusions by means of logical analysis which the decision
maker may use for his action or solution.
iv) Research plays a significant role in the identification of a new project, project feasibility and project
implementation.
v) Research helps the management to discharge its managerial functions of planning, forecasting, coordinating,
motivating, controlling and evaluating effectively.
vi) Research facilitates the process of thinking, analysing, evaluating and interpreting the business environment, various business situations and business alternatives, so as to be helpful in the formulation of business policy and strategy.
vii) Research and Development (R & D) helps discovery and invention. Developing new products or modifying the
existing products, discovering new uses, new markets etc., is a continuous process in business.
viii) The role of research in functional areas like production, finance, human resource management and marketing can hardly be overemphasized. Research not only establishes relationships between different variables in each of these functional areas, but also between the various functional areas themselves.
ix) Research is a must in the production area. Product development, new and better ways of producing goods,
invention of new technologies, cost reduction, improving product quality, work simplification, performance
improvement, process improvement etc., are some of the prominent areas of research in the production area.
x) The purchase/material department uses research to frame alternative suitable policies regarding where to buy,
when to buy, how much to buy, and at what price to buy.
xi) Closely linked with the production function is the marketing function. Market research and marketing research provide a major part of the marketing information which influences inventory and production levels. Marketing research studies cover problems and opportunities in the market, product preference, sales forecasting, advertising effectiveness, product distribution, after-sales service, etc.
xii) In the area of financial management, maintaining liquidity, profitability through proper funds management
and assets management is essential. Optimum capital mix, matching of funds inflows and outflows, cash flow
forecasting, cost control, pricing, etc., require some sort of research and analysis. Financial institutions (banking and non-banking) have also found it essential to set up research divisions for collecting and analysing data, both for their internal purposes and for making in-depth studies of the economic conditions of business and people.
xiii) In the area of human resource management, personnel policies have to be guided by research. An individual's motivation to work is associated with his needs and their satisfaction. An effective human resource manager is one who can identify the needs of his workforce and formulate personnel policies to satisfy them, so that employees are motivated to contribute their best to the attainment of organizational goals. Job design, job analysis, job assignment, scheduling of work breaks, etc., have to be based on investigation and analysis.
xiv) Finally, research in business is a must for continuously updating its attitudes, approaches, products, goals, methods and machinery in accordance with the changing environment in which it operates.



Features of good research

The eight features of good research are listed below:

1. Reliability is a subjective term which cannot be measured precisely, but today there are instruments which can estimate the reliability of a piece of research. Reliability is the repeatability of any research, research instrument, tool or procedure. If a study yields similar results each time it is undertaken with a similar population and similar procedures, it is said to be reliable. Suppose a study is conducted on the effects of separation between parents on the class performance of their children. If the results conclude that separation causes low grades in class, these results should also hold for another sample taken from a similar population. The more similar the results, the greater the reliability of the research.
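One common way of putting a number on reliability is the test-retest approach: administer the same instrument to the same respondents on two occasions and correlate the two sets of scores. The following minimal Python sketch illustrates the idea with invented scores; it assumes Python 3.10 or later for statistics.correlation.

# Minimal test-retest reliability sketch (invented scores for illustration).
# The same ten respondents answer the same instrument on two occasions;
# a high positive correlation between the two administrations indicates
# repeatable (reliable) measurement.
from statistics import correlation  # available in Python 3.10+

first_run  = [12, 15, 9, 20, 17, 11, 14, 18, 10, 16]
second_run = [13, 14, 10, 19, 18, 11, 15, 17, 9, 16]

r = correlation(first_run, second_run)   # Pearson's r
print(f"Test-retest reliability (Pearson r) = {r:.2f}")
# Values close to +1 suggest the instrument yields similar results
# each time it is used on a similar population.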

2. Validity is the strength with which we can call research conclusions, assumptions or propositions true or false. Validity determines the applicability of the research. The validity of a research instrument can be defined as the suitability of the instrument to the research problem, or how accurately the instrument measures the problem. Some researchers say that validity and reliability are correlated, but that validity is much more important than reliability. Without validity, research goes in the wrong direction. To keep the research on track, define your concepts in the best possible manner so that no errors occur during measurement.

3. Accuracy is the degree to which each research process, instrument and tool is related to the others. Accuracy also measures whether the research tools have been selected in the best possible manner and whether the research procedures suit the research problem. For example, if research has to be conducted on transgender people, several data collection tools can be used depending on the research problem, but if you find that population less cooperative, the best way is to observe them rather than administer a questionnaire, because with a questionnaire they will either give biased responses or not return the questionnaires at all. Choosing the best data collection tool therefore improves the accuracy of research.
4. Credibility comes with the use of the best sources of information and the best procedures in research. If you are using second-hand information in your research for any reason, your research might be completed in less time, but its credibility will be at stake, because secondary data has been manipulated by other human beings and is therefore not very valid to use in research. A certain percentage of secondary data can be used if a primary source is not available, but basing research completely on secondary data when primary data can be gathered is least credible. When researchers give accurate references, the credibility of the research increases, while fake references decrease it.

5. Generalizability is the extent to which research findings can be applied to a larger population. When a researcher conducts a study, he or she chooses a target population and takes a small sample from this population to conduct the research. This sample is representative of the whole population, so the findings should be as well. If the research findings can be applied to any sample drawn from the population, the results of the research are said to be generalizable.
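To make the idea concrete, the following illustrative Python sketch builds a purely synthetic "population" of scores, draws a simple random sample from it, and compares the sample mean with the population mean; with proper random sampling the two stay close, which is what permits generalization.

# Illustrative sketch: how well does a random sample represent the population?
# The "population" here is synthetic and exists only for demonstration.
import random
from statistics import mean

random.seed(42)
population = [random.gauss(50, 10) for _ in range(10_000)]   # e.g. attitude scores

sample = random.sample(population, 200)                      # simple random sample

print(f"Population mean : {mean(population):.2f}")
print(f"Sample mean     : {mean(sample):.2f}")
# When the sample is drawn at random and is large enough, its mean stays
# close to the population mean, which is what allows findings from the
# sample to be generalized to the larger population.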

6. Empirical nature of research means that the research has been conducted following rigorous scientific methods and procedures. Each step in the research has been tested for accuracy and is based on real-life experience. Quantitative research is easier to prove scientifically than qualitative research; in qualitative research, biases and prejudices can creep in more easily.



7. Systematic approach is the only approach to research. No research can be conducted haphazardly; each step must follow the other. There is a set of procedures that have been tested over a period of time and are thus suitable for use in research. Every piece of research should therefore follow a procedure.

8. Controlled – in real life there are many factors that affect an outcome, and a single event is often the result of several factors. When a similar event is tested in research, because of the broad range of factors that affect it, some factors are taken as controlled factors while others are tested for a possible effect. The controlled factors or variables have to be controlled rigorously. In the pure sciences it is very easy to control such elements, because experiments are conducted in a laboratory, but in the social sciences it becomes difficult to control these factors because of the nature of the research.

In short, the features that good research must possess are summarized below:
1. Should be systematic in nature.
2. Should be logical.
3. Should be empirical and replicable in nature.
4. Should be according to plans.
5. Should be according to the rules, and the assumptions should not be based on false premises or judgments.
6. Should be relevant to what is required.
7. Procedure should be reproducible in nature.
8. Controlled movement of the research procedure.

Types of Research
Research may be classified into different types for the sake of better understanding of the concept. Several bases can be
adopted for the classification such as nature of data, branch of knowledge, extent of coverage, place of investigation,
method employed, time frame and so on. Depending upon the BASIS adopted for the classification, research may be
classified into a class or type. It is possible that a piece of research work can be classified under more than one type,
hence there will be overlapping. It must be remembered that good research uses a number of types, methods, &
techniques. Hence, rigid classification is impossible.
The following is only an attempt to classify research into different types.

i) According to the Branch of Knowledge


Different Branches of knowledge may broadly be divided into two:
a) Life and physical sciences such as Botany, Zoology, Physics and Chemistry.
b) Social Sciences such as Political Science, Public Administration, Economics, Sociology, Commerce and Management.
Research in these fields is also broadly referred to as life and physical science research and social science research.
Business education covers both Commerce and Management, which are part of Social sciences. Business research is a
broad term which covers many areas.

The research carried out in these areas is called management research, production research, personnel research, financial management research, accounting research, marketing research, etc.



a. Management research includes various functions of management such as planning, organizing, staffing,
communicating, coordinating, motivating, controlling. Various motivational theories are the result of research.
b. Production (also called manufacturing) research focuses more on materials and equipment rather than on human
aspects. It covers various aspects such as new and better ways of producing goods, inventing new technologies,
reducing costs, improving product quality.
c. Research in personnel management may range from very simple problems to highly complex problems of all types.
It is primarily concerned with the human aspects of the business such as personnel policies, job requirements, job
evaluation, recruitment, selection, placement, training and development, promotion and transfer, morale and attitudes,
wage and salary administration, industrial relations. Basic research in this field would be valuable as human behaviour
affects organizational behaviour and productivity.
d. Research in Financial Management includes financial institutions, financing instruments (e.g. shares, debentures), financial markets (capital market, money market, primary market, secondary market), financial services (e.g. merchant banking, discounting, factoring), financial analysis (e.g. investment analysis, ratio analysis, funds flow / cash flow analysis), etc.
e. Accounting research, though narrow in scope, is a highly significant area of business management. Accounting information is used as a basis for reports to the management, shareholders, investors, tax authorities, regulatory bodies and other interested parties. Areas for accounting research include inventory valuation, depreciation accounting, generally accepted accounting principles, accounting standards, corporate reporting, etc.
f. Marketing research deals with product development and distribution problems, marketing institutions, marketing
policies and practices, consumer behaviour, advertising and sales promotion, sales management and after sales service
etc. Marketing research is one of the very popular areas and also a well established one. Marketing research includes
market potentials, sales forecasting, product testing, sales analysis, market surveys, test marketing, consumer
behaviour studies, marketing information system etc.
g. Business policy research is basically the research with policy implications. The results of such studies are used as
indices for policy formulation and implementation.
h. Business history research is concerned with the past: for example, how trade and commerce were carried on during the Moghul regime.

ii) According to the Nature of Data


A simple dichotomous classification of research is quantitative research and qualitative (non-quantitative) research.
a. Quantitative research is variables-based, whereas qualitative research is attributes-based. Quantitative research is based on the measurement / quantification of the phenomenon under study. In other words, it is data-based and hence more objective and more popular.
b. Qualitative research is based on the subjective assessment of attributes, motives, opinions, desires, preferences, behaviour, etc. Research in such a situation is a function of the researcher's insights and impressions.

iii) According to the Coverage


According to the number of units covered, research can be a macro study or a micro study. A macro study is a study of the whole, whereas a micro study is a study of a part. For example, working capital management in State Road Transport


Corporations in India is a macro study, whereas working capital management in the Andhra Pradesh State Road Transport Corporation is a micro study.

iv) According to Utility or Application


Depending upon the use of the research results, i.e. whether they contribute to theory building or to problem solving, research can be basic or applied.
a. Basic research is also called pure / theoretical / fundamental research. It includes original investigations for the advancement of knowledge that do not have the specific objective of answering problems of sponsoring agencies.
b. Applied research, also called action research, constitutes research activities on problems posed by sponsoring agencies, for the purpose of contributing to the solution of those problems.

v) According to the place where it is carried out


Depending upon the place where the research is carried out (according to the data generating source), research can be
classified into:
a) Field Studies or field experiments
b) Laboratory studies or Laboratory experiments
c) Library studies or documentary research

vi) According to the Research Methods used


Depending upon the research method used for the investigation, it can be classified as:
a) Survey research, b) Observation research, c) Case research, d) Experimental research, e) Historical research, f)
Comparative research.

vii) According to the Time Frame


Depending upon the time period adopted for the study, it can be:
a) One time or single time period research - e.g. One year or a point of time. Most of the sample studies, diagnostic
studies are of this type.
b) Longitudinal research - e.g. several years or several time periods (a time series analysis) e.g. industrial development
during the five year plans in India.

viii) According to the purpose of the Study


What is the purpose/aim/objective of the study? Is it to describe, analyze, evaluate or explore? Accordingly the studies are known as:
a) Descriptive Study: The major purpose of descriptive research is the description of a person, situation, institution or
an event as it exists. Generally fact finding studies are of this type.
b) Analytical Study: The researcher uses facts or information already available and analyses them to make a critical
examination of the material. These are generally Ex-post facto studies or post-mortem studies.
c) Evaluation Study: This type of study is generally conducted to examine /evaluate the impact of a particular event,
e.g. Impact of a particular decision or a project or an investment.



d) Exploratory Study: When little is known about a particular subject matter, a study is conducted to learn more about it so as to formulate the problem and the procedures of the study. Such a study is called an exploratory or formulative study.

Research Approaches
The researcher has to provide answers, at the end, to the research questions raised at the beginning of the study. For this purpose he investigates and gathers the relevant data and information as a basis or evidence. The procedures adopted for obtaining these are described in the literature as methods of research or approaches to research. In fact, these are the broad methods used to collect the data.
These methods are as follows:
1) Survey Method
2) Observation Method
3) Case Method
4) Experimental Method
5) Historical Method
6) Comparative Method
It is now proposed to explain briefly each of the above-mentioned approaches.

1. Survey Method
The dictionary meaning of 'survey' is to oversee, to look over, to study, to systematically investigate. Survey research is used to study large and small populations (or universes). It is a fact-finding survey; mostly empirical problems are investigated by this approach. It is a critical inspection to gather information, often a study of an area with respect to a certain condition or its prevalence, for example a marketing survey, a household survey, or the All India Rural Credit Survey.
The survey is a very popular branch of social science research. Survey research has developed as a separate research activity along with the development and improvement of sampling procedures, and sample surveys are very popular nowadays. As a matter of fact, the sample survey has become synonymous with the survey. For example, see the following definitions:
Survey research can be defined as "specification of procedures for gathering information about a large number of people by collecting information from a few of them" (Black and Champion). Survey research is "studying samples chosen from populations to discover the relative incidence, distribution, and interrelations of sociological and psychological variables" (Fred N. Kerlinger). In a survey, data and information may be collected by observation, personal interview, mailed questionnaires, administered schedules or telephone enquiries.

Features of Survey method


The important features of survey method are as follows:
i) It is a field study, as it is always conducted in a natural setting.
ii) It solicits responses directly from the respondents or people known to have knowledge about the problem under
study.



iii) Generally, it gathers information from a large population.
iv) A survey covers a definite geographical area e.g. A village / city or a district.
v) It has a time frame.
vi) It can be an extensive survey involving a wide sample, or an intensive study covering a few units in an in-depth and detailed manner.
vii) Survey research is best adapted for obtaining personal and socio-economic facts, beliefs, attitudes and opinions.
Survey research is not a clerical routine of gathering facts and figures. It requires a good deal of research knowledge and sophistication. The competent survey investigator must know sampling procedures, questionnaire / schedule / opinionnaire construction, techniques of interviewing and other technical aspects of the survey. Ultimately the quality of the survey results depends on imaginative planning, representative sampling, reliability of data, and appropriate analysis and interpretation of the data.

2. Observation Method
Observation means seeing or viewing. It is not casual but systematic viewing. Observation may therefore be defined as "a systematic viewing of a specific phenomenon in its proper setting for the purpose of gathering information for the specific study".
Observation is a method of scientific enquiry. We observe a person or an event or a situation or an incident. The body
of knowledge of various sciences such as biology, physiology, astronomy, sociology, psychology, anthropology etc.,
has been built upon centuries of systematic observation.
Observation is also useful in the social and business sciences for gathering information and conceptualizing it. For example: What is the life style of tribals? How do marketing activities take place in regulated markets? How are investment activities carried out in stock exchange markets? How do proceedings take place in the Indian Parliament or the Assemblies? How is a corporate office maintained in a public sector or a private sector undertaking? What is the behaviour of political leaders? What causes traffic jams in Delhi during peak hours?
Observation as a method of data collection has some features:
i) It is not only seeing and viewing but also hearing and perceiving as well.
ii) It is both a physical and a mental activity. The observing eye catches many things which are sighted, but attention is also focused on data that are relevant to the problem under study.
iii) It captures the natural social context in which the person's behaviour occurs.
iv) Observation is selective: the investigator does not observe everything but selects the range of things to be observed depending upon the nature, scope and objectives of the study.
v) Observation is not casual but with a purpose. It is made for the purpose of noting things relevant to the study.
vi) The investigator first of all observes the phenomenon and then gathers and accumulates data.
Observation may be classified in different ways. According to the setting, it can be (a) observation in a natural setting, e.g. observing the live telecast of parliamentary proceedings or watching from the visitors' gallery, or observing electioneering in India through election meetings, or (b) observation in an artificially simulated setting, e.g. business games or a treadmill test.
According to the mode of observation, it may be classified as (a) direct or personal observation, and (b) indirect or mechanical observation. In the case of direct observation, the investigator personally observes the event when it takes place, whereas in the case of indirect observation it is done through mechanical devices such as audio recordings, audio-visual aids, still photography, picturization, etc. According to the participating role of the observer, it can be classified



as (a) participant observation and (b) non-participant observation. In the case of participant observation, the investigator takes part in the activity, i.e. he acts both as an observer and as a participant; for example, studying the customs and life style of tribals by living with them. In the case of non-participant observation, the investigator observes from outside, merely as an onlooker. The observation method is suitable for a variety of research purposes, such as the study of human behaviour, the behaviour of social groups, life styles, customs and traditions, interpersonal relations, group dynamics, crowd behaviour, leadership and management styles, the dressing habits of different social groups in different seasons, the behaviour of living creatures like birds and animals, the layout of a departmental store, a factory or a residential locality, or the conduct of an event like a meeting, a conference or the Afro-Asian Games.

3. Case Method
Case method of study is borrowed from Medical Science. Just like a patient, the case is intensively studied so as to
diagnose and then prescribe a remedy. A firm or a unit is to be studied intensively with a view to finding out problems,
differences, specialties so as to suggest remedial measures. It is an in-depth/intensive study of a unit or problem under
study. It is a comprehensive study of a firm or an industry, or a social group, or an episode, or an incident, or a process,
or a programme, or an institution or any other social unit.
According to P.V. Young, "a comprehensive study of a social unit, be that unit a person, a group, a social institution, a district, or a community, is called a case study".
The case study is one of the popular research methods. A case study aims at studying everything about something rather than something about everything. It examines the complex factors involved in a given situation so as to identify the causal factors operating in it. The case study describes a case in terms of its peculiarities and its typical or extreme features. It also helps to secure a fund of information about the unit under study. It is a most valuable method of study for diagnostic and therapeutic purposes.

4. Experimental Method
Experimentation is the basic tool of the physical sciences like Physics, Chemistry for establishing cause and effect
relationship and for verifying inferences. However, it is now also used in social sciences like Psychology, Sociology.
Experimentation is a research process used to observe cause-and-effect relationships under controlled conditions. In other words, it aims at studying the effect of an independent variable on a dependent variable, by keeping the other relevant variables constant through some type of control. In experimentation, the researcher can manipulate the independent variable and measure its effect on the dependent variable.
The main features of the experimental method are:
i) Isolation of factors or controlled observation.
ii) Replication of the experiment i.e. it can be repeated under similar conditions.
iii) Quantitative measurement of results.
iv) Determination of cause and effect relationship more precisely.
Three broad types of experiments are:
a) The natural or uncontrolled experiment as in case of astronomy made up mostly of observations.
b) The field experiment, the best suited one for social sciences. "A field experiment is a research study in a realistic situation in which one or more independent variables are manipulated by the experimenter under as carefully controlled conditions as the situation will permit". (Fred N. Kerlinger)



c) The laboratory experiment is the exclusive domain of the physical scientist.
"A laboratory experiment is a research study in which the variance of all or nearly all of the possible influential independent variables, not pertinent to the immediate problem of the investigation, is kept at a minimum. This is done by isolating the research in a physical situation apart from the routine of ordinary living and by manipulating one or more independent variables under rigorously specified, operationalized, and controlled conditions". (Fred N. Kerlinger)
The contrast between the field experiment and the laboratory experiment is not sharp; the difference is a matter of degree. The laboratory experiment has a maximum of control, whereas the field experiment must operate with less control.
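By way of illustration only, the Python sketch below mirrors the logic of a simple controlled experiment: one independent variable is manipulated (a treatment group versus a control group), the dependent variable is measured for both groups, and an independent-samples t-test is used to judge whether the observed difference could plausibly be due to chance. The scores are invented, and the SciPy library is assumed to be available.

# Sketch of analysing a simple two-group experiment (invented data).
# Independent variable: treatment vs. control (manipulated by the researcher).
# Dependent variable:   measured outcome score for each participant.
from scipy import stats

treatment_scores = [78, 85, 82, 88, 75, 90, 84, 79]   # group exposed to the manipulation
control_scores   = [72, 74, 69, 80, 71, 76, 73, 70]   # group held under control conditions

t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between groups is unlikely to be due to chance alone.")
else:
    print("No statistically significant difference was detected.")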

5. Historical Method
When research is conducted on the basis of historical data, the researcher is said to have followed the historical
approach. To some extent, all research is historical in nature, because to a very large extent research depends on the
observations / data recorded in the past. Problems that are based on historical records, relics, documents, or
chronological data can conveniently be investigated by following this method. Historical research depends on past
observations or data and hence is non-repetitive; it is therefore only a post facto analysis. However, historians, philosophers, social psychiatrists and literary men, as well as social scientists, use the historical approach. Historical
research is the critical investigation of events, developments, experiences of the past, the careful weighing of evidence
of the validity of the sources of information of the past, and the interpretation of the weighed evidence. The historical
method, also called historiography, differs from other methods in its rather elusive subject matter i.e. the past. In
historical research primary and also secondary sources of data can be used. A primary source is the original repository
of a historical datum, like an original record kept of an important occasion, an eye witness description of an event, the
inscriptions on copper plates or stones, the monuments and relics, photographs, minutes of organization meetings,
documents. A secondary source is an account or record of a historical event or circumstance, one or more steps
removed from an original repository. Instead of the minutes of the meeting of an organization, for example, if one uses
a newspaper account of the meeting, it is a secondary source.
The aim of historical research is to draw explanations and generalizations from the past trends in order to understand
the present and to anticipate the future. It enables us to grasp our relationship with the past and to plan more
intelligently for the future.
For historical data, only authentic sources should be relied upon, and their authenticity should be tested by checking and cross-checking the data from as many sources as possible. It is often of considerable interest to use time series data for assessing progress or for evaluating the impact of policies and initiatives; this can be meaningfully done with the help of historical data.
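As a small illustrative sketch of using time series (historical) data to assess progress, the Python code below fits a linear trend to some invented yearly figures and extrapolates it one year ahead; the years, the figures and the variable names are purely hypothetical.

# Illustrative sketch: fitting a linear trend to historical (time series) data.
# The yearly production figures are invented for demonstration only.
import numpy as np

years      = np.array([2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013], dtype=float)
production = np.array([110, 118, 121, 130, 128, 140, 151, 158], dtype=float)

slope, intercept = np.polyfit(years, production, 1)
print(f"Average change per year: {slope:.1f} units")

# Extrapolating the fitted trend one year ahead, purely to illustrate how
# historical data can be used to anticipate the future (with due caution).
forecast_2014 = intercept + slope * 2014
print(f"Trend-based estimate for 2014: {forecast_2014:.0f} units")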

6. Comparative Method
The comparative method is also frequently called the Evolutionary or Genetic Method. The term comparative method has come about in this way: some sciences have long been known as "Comparative Sciences" - such as comparative philology, comparative anatomy, comparative physiology, comparative psychology, comparative religion etc. Now the method of these sciences came to be described as the "Comparative Method", an abridged expression for "the method of the comparative sciences". When the method of most comparative sciences came to be directed more and more to the determination of evolutionary sequences, it came to be described as the "Evolutionary Method".



The origin and the development of human beings, their customs, their institutions, their innovations and the stages of
their evolution have to be traced and established. The scientific method by which such developments are traced is
known as the Genetic method and also as the Evolutionary method. The science which appears to have been the first to
employ the Evolutionary method is comparative philology. It is employed to "compare" the different languages in existence, to trace the history of their evolution in the light of such similarities and differences as the comparisons disclosed. Darwin's famous work "Origin of Species" is the classic application of the Evolutionary method in comparative anatomy.
The whole theory of biological evolution rests on applications of evolutionary method. This method can be applied not
only to plants, to animals, to social customs and social institutions, to the human mind (comparative psychology), to
human ideas and ideals, but also to the evolution of geological strata, to the differentiation of the chemical elements
and to the history of the solar system. The term comparative method as a method of research is used here in its restricted meaning, as synonymous with the Evolutionary method. To say that the comparative method is a 'method of comparison' is not convincing, for comparison is not a specific method, but something which enters as a factor into every scientific method. Classification requires careful comparison, and every other method of science depends upon a precise comparison of phenomena and the circumstances of their occurrence. All methods are, therefore, "comparative" in a wider sense.



Research Process
Having received the research brief, the researcher responds with a research proposal: a document developed after careful consideration of the contents of the research brief. The research proposal sets out the research design and the procedures to be followed.
The seven steps of the research process are set out in the figure.

Step -I: Defining research problem


The point has already been made that the decision-maker should clearly communicate the purpose of the research to the researcher, but it is often the case that the objectives are not fully explained to the individual carrying out the study. Decision-makers seldom work out their objectives fully or, if they have, they are not willing to fully disclose them. In theory, responsibility for ensuring that the research proceeds along clearly defined lines rests with the decision-maker. In many instances, the researcher has to take the initiative.
In situations in which the researcher senses that the decision-maker is either unwilling or unable to fully articulate the objectives, he/she will have to pursue an indirect line of questioning. One approach is to take the problem statement supplied by the decision-maker, break it down into key components and/or terms, and explore these with the decision-maker. For
example, the decision-maker could be asked what he has in mind when he uses the term market potential. This is a
valid question since the researcher is charged with the responsibility to develop a research design which will provide
the right kind of information. Another approach is to focus the discussions with the person commissioning the
research on the decisions which would be made given alternative findings which the study might come up with. This
process frequently proves of great value to the decision-maker in that it helps him think through the objectives and
perhaps select the most important of the objectives.
Whilst seeking to clarify the objectives of the research it is usually worthwhile having discussions with other levels of
management who have some understanding of the marketing problem and/or the surrounding issues. Other helpful
procedures include brainstorming, reviews of research on related problems and researching secondary sources of
information as well as studying competitive products.



The nature of problems
A decision maker's degree of uncertainty influences decisions about the type of research that will be conducted. A business manager may be completely certain about the situation he or she is facing. Or, at the other extreme, a manager or researcher may describe a decision-making situation as one of absolute ambiguity: the nature of the problem to be solved is unclear, the objectives are vague and the alternatives are difficult to define. This is by far the most difficult decision situation. Most business decisions fall somewhere between these two extremes.

The importance of proper problem definition


Business research is conducted to help solve managerial problems. It is extremely important to define the business
problem carefully because such definition will determine the purpose of the research and, ultimately, the research
design.
Formal quantitative research should not begin until the problem has been clearly defined. However, when a problem or opportunity is discovered, managers may have only vague insights about a complex situation. If quantitative research is conducted before the researchers understand exactly what is important, false conclusions may be drawn from the investigation.
Problem definition indicates a specific business decision area that will be clarified by answering some research
questions.

Problem identification process


The process of defining the problem involves several interrelated steps. They are:
a. Ascertain the decision maker's objectives.
b. Understand the background of the problem.
c. Isolate and identify the problem, not the symptoms.
d. Determine the unit of analysis.
e. Determine the relevant variables.
f. State the research questions (hypotheses), and
g. State the research objectives.



a) Ascertain the decision maker's objectives
The research investigation must attempt to satisfy the decision maker's objectives. Sometimes, decision makers are not
able to articulate precise research objectives. Both the research investigator and the manager requesting the research
should attempt to have a clear understanding of the purpose of undertaking the research. Often, exploratory
research—by illuminating the nature of the business opportunity or problem—helps managers clarify their objectives
and decisions.
The iceberg principle
The dangerous part of any business problem, like the submerged part of an iceberg, is neither visible to nor
understood by the business managers. If the submerged portions of the problem are omitted from the problem
definition, and subsequently from the research design, then the decision based on such research may be less than
optimal.
b) Understand the background of the problem.
The background of the problem is vital. A situation analysis is the logical first step in defining the problem. This
analysis involves the informal gathering of background information to familiarize researchers or managers with the
decision area. Exploratory research techniques have been developed to help formulate clear definitions of the problem.
c) Isolate and identify the problem, not the symptoms.
Anticipating the many influences and dimensions of a problem is impossible for any researcher or executive. Certain
occurrences that appear to be the problem may only be symptoms of a deeper problem. Executive judgment and
creativity must be exercised in identifying a problem.
d) What is the unit of analysis?
The researcher must specify the unit of analysis. Will the individual consumer be the source of information or will it
be the parent-child dyad? Industries, organizations, departments, or individuals, may be the focus for data collection
and analysis. Many problems can be investigated at more than one level of analysis.
e) What are the relevant variables?
One aspect of problem definition is identification of the key variables. A variable is a quality that can exhibit
differences in value, usually magnitude or strength.
In statistical analysis, a variable is identified by a symbol such as X. A categorical or classificatory variable can take only a limited number of distinct values (e.g., sex: male or female). A continuous variable may take any of an infinite range of numerical values (e.g., sales volume).
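To make the distinction concrete, the short sketch below shows one way a categorical and a continuous variable might be set up for analysis in Python using the pandas library. The data and column names are hypothetical and are used only for illustration.

import pandas as pd

# Hypothetical survey responses: 'sex' is a categorical variable,
# 'sales_volume' is a continuous variable.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "sex": ["male", "female", "female", "male"],
    "sales_volume": [1520.5, 980.0, 2310.75, 1745.2],
})

# Declaring 'sex' as categorical restricts analysis to counts and modes,
# whereas 'sales_volume' supports the full range of arithmetic summaries.
responses["sex"] = responses["sex"].astype("category")

print(responses["sex"].value_counts())    # frequency counts for the categorical variable
print(responses["sales_volume"].mean())   # the mean is meaningful only for the continuous variable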

Step II: Formulation of research hypothesis


Meaning of hypothesis
The word hypothesis is made up of two Greek roots which together denote a sort of 'sub-statement', for it is the presumptive statement of a proposition which the investigation seeks to prove. The scientist observes a mass of a special class of phenomena and broods over it until, by a flash of insight, he perceives an order and intelligent harmony in it. This is often referred to as an 'explanation' of the facts he has observed. He has a 'theory' about that particular mass of facts. This theory, when stated as a testable proposition formally and clearly and subjected to empirical or experimental verification, is known as a hypothesis. The hypothesis furnishes the germinal basis of the whole investigation and remains to the end its cornerstone, for the whole research is directed to test it out by facts. At the start of the investigation the hypothesis is a stimulus to critical thought and offers insight into the confusion of phenomena. At the end it comes to prominence as the proposition to be accepted or rejected in the light of the findings.
The word hypothesis consists of two words:
Hypo + thesis = Hypothesis
‗Hypo‘ means tentative or subject to the verification and ‗Thesis‘ means statement about solution of a problem.
The literal meaning of the term hypothesis is thus a tentative statement about the solution of a problem. A hypothesis offers a solution to the problem that is to be verified empirically and that is based on some rationale.
Another meaning of the word hypothesis which is composed of two words:
1. ‗Hypo‘ means composition of two or more variables which is to be verified.
2. ‗Thesis‘ means position of these variables in the specific frame of reference.

Definitions of hypothesis
The term hypothesis has been defined in several ways. Some important definitions have been given in the following
paragraphs:
1. Hypothesis
A tentative supposition or provisional guess ―It is a tentative supposition or provisional guess which seems to explain
the situation under observation.‖ – James E. Greighton
2. Hypothesis
A tentative generalisation. G.A. Lundberg thinks, "A hypothesis is a tentative generalisation, the validity of which remains to be tested. In its most elementary stage the hypothesis may be any hunch, guess, imaginative idea which becomes the basis for further investigation."
3. Hypothesis: Shrewd Guess
According to John W. Best, ―It is a shrewd guess or inference that is formulated and provisionally adopted to explain
observed facts or conditions and to guide in further investigation.‖
4. Hypothesis: Guides the Thinking Process
According to A.D. Carmichael, ―Science employs hypothesis in guiding the thinking process. When our experience
tells us that a given phenomenon follows regularly upon the appearance of certain other phenomena, we conclude that
the former is connected with the latter by some sort of relationship and we form an hypothesis concerning this
relationship.‖
5. Hypothesis
A proposition that is to be put to a test to determine its validity: Goode and Hatt, "A hypothesis states what we are looking for. A hypothesis looks forward. It is a proposition which can be put to a test to determine its validity. It may prove to be correct or incorrect."
6. Hypothesis
An expectation about events based on generalization: Bruce W. Tuckman, ―A hypothesis then could be defined as an
expectation about events based on generalization of the assumed relationship between variables.‖
7. Hypothesis
A tentative statement of the relationship between two or more variables: "A hypothesis is a tentative statement of the relationship between two or more variables. Hypotheses are always in declarative sentence form and they relate, either generally or specifically, variables to variables."



8. Hypothesis
A theory when it is stated as a testable proposition. M. Verma, ―A theory when stated as a testable proposition
formally and clearly and subjected to empirical or experimental verification is known as a hypothesis.‖
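The common thread in these definitions is that a hypothesis is a proposition that can be put to an empirical test. As a minimal, hypothetical sketch of what such a test might look like in practice, the Python code below states a null and an alternative hypothesis about average monthly customer spend and tests them on a small made-up sample using the scipy library; the figures are invented purely for illustration.

from scipy import stats

# H0: the mean monthly spend equals Rs. 1000; H1: it does not (hypothetical figures).
sample_spend = [980, 1120, 1045, 990, 1150, 1010, 1075, 935]

t_stat, p_value = stats.ttest_1samp(sample_spend, popmean=1000)

# Reject H0 at the 5 per cent significance level if p < 0.05.
if p_value < 0.05:
    print(f"Reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")
else:
    print(f"Fail to reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")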

Step III: Decision on type of study


Research can be carried out at one of three levels: a) exploratory research, b) descriptive research and c) experimental research.

Step IV: Decision on data collection method


The next set of decisions concerns the method(s) of data gathering to be employed. The main methods of data collection are secondary data searches, observation, the survey, experimentation and consumer panels. Under ideal conditions the researcher would select the method most appropriate to the research problem, whether field research, survey, experiment or secondary data analysis. In practice, the realities of available money, time, access to information and the researcher's own skills are often decisive factors in design choice and data collection. Once the design is firm, the researcher follows the steps in the design and collects the data.
All of us have collected data, not necessarily precisely and carefully in a scientific manner. Frequently we observe
people in a new situation to determine what is expected of us, such as when we first started college, visited a new city,
or started a new job; this is called participant observation, a particular type of field research. We may ask friends how
and why they are going to vote a certain way in an upcoming election. This is known as interviewing. We may try
different types or amounts of spices in a recipe to find which combination tastes the best. This is called experimenting.
Most of us have investigated sources and data in the library to help us in making a decision about a trip, car, house or
major appliance purchase. This is known as secondary analysis, the analysis of data collected by others.
All of these are research "data collection" techniques, though they lack the rigor, care, and explicitness of scientific research. Some may approach scientific quality for testing statements, while others would be considered primarily acceptable for the generation of hypotheses, but not for drawing conclusions.
Research techniques vary in terms of the formal aspects of their structure. Some are more open-ended and there is less
consensus on structure (field studies, content analysis, focus groups, etc.). Most of these techniques of study are not
really lacking in numbers and counting of observations; where they differ from other techniques is in their more open
approach. Additionally, they frequently lack precise agreed upon data collection techniques and sufficient numbers in
their samples to allow using statistics and generalizing conclusions.



Step V: Development of an analysis plan
Those new to research often intuitively believe that decisions about the techniques of analysis to be used can be left
until after the data has been collected. Such an approach is ill-advised. Before interviews are conducted the following
checklist should be applied:
 Is it known how each and every question is to be analysed? (e.g. which univariate or bivariate descriptive
statistics, tests of association, parametric or nonparametric hypotheses tests, or multivariate methods are to be
used?)
 Does the researcher have a sufficiently sound grasp of these techniques to apply them with confidence and to
explain them to the decision-maker who commissioned the study?
 Does the researcher have the means to perform these calculations? (e.g. access to a computer which has an
analysis program which he/she is familiar with? Or, if the calculations have to be performed manually, is
there sufficient time to complete them and then to check them?)
 If a computer program is to be used at the data analysis stage, have the questions been properly coded?
 Have the questions been scaled correctly for the chosen statistical technique? (e.g. a t-test cannot be used on
data which is only ranked)
There is little point in spending time and money on collecting data which subsequently is not or cannot be analysed. Therefore, consideration has to be given to issues such as these before the fieldwork is undertaken.
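As an illustration of what such advance planning can look like, the hypothetical Python sketch below records, for each question, the intended measurement level and the statistical test planned for it, and then tries the tests on dummy data before fieldwork begins. The question names, levels and figures are assumptions made for the example.

from scipy import stats

# A hypothetical analysis plan: each question is mapped, in advance,
# to the test that matches its measurement level.
analysis_plan = {
    "age_in_years":    {"level": "ratio",   "test": "independent t-test"},
    "preference_rank": {"level": "ordinal", "test": "Mann-Whitney U"},
}

def run_planned_test(question, group_a, group_b):
    """Apply the test specified in the plan to two independent samples."""
    planned = analysis_plan[question]["test"]
    if planned == "independent t-test":
        return stats.ttest_ind(group_a, group_b)
    if planned == "Mann-Whitney U":
        return stats.mannwhitneyu(group_a, group_b)
    raise ValueError(f"No test planned for {question}")

# Trial run on dummy data to confirm the plan is workable before data collection.
print(run_planned_test("age_in_years", [25, 32, 41, 29], [38, 45, 27, 33]))
print(run_planned_test("preference_rank", [1, 2, 2, 3], [3, 4, 2, 4]))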

Step VI: Data collection


Research involves the collection of data to obtain insight and knowledge into the needs and wants of customers and
the structure and dynamics of a market. In nearly all cases, it would be very costly and time-consuming to collect data
from the entire population of a market. Accordingly, in market research, extensive use is made of sampling from
which, through careful design and analysis, researchers can draw information about the market.
i) Sample Design
Sample design covers the method of selection, the sample structure and plans for analysing and interpreting the
results. Sample designs can vary from simple to complex and depend on the type of information required and the way
the sample is selected.
Sample design affects the size of the sample and the way in which analysis is carried out. In simple terms the more
precision the market researcher requires, the more complex will be the design and the larger the sample size.
The sample design may make use of the characteristics of the overall market population, but it does not have to be
proportionally representative. It may be necessary to draw a larger sample than would be expected from some parts of
the population; for example, to select more from a minority grouping to ensure that sufficient data is obtained for
analysis on such groups.
Many sample designs are built around the concept of random selection. This permits justifiable inference from the
sample to the population, at quantified levels of precision. Random selection also helps guard against sample bias in a
way that selecting by judgement or convenience cannot.
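A minimal sketch of such random selection is given below, using Python's standard random module on a hypothetical sampling frame of 1,000 customers; the frame, seed and sample size are assumptions made only for illustration.

import random

# Hypothetical sampling frame: an existing list of 1,000 customer identifiers.
sampling_frame = [f"customer_{i:04d}" for i in range(1, 1001)]

random.seed(42)  # a fixed seed so the draw can be reproduced and audited
sample = random.sample(sampling_frame, k=100)  # draw 100 units without replacement

print(len(sample), sample[:5])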
ii) Defining the Population
The first step in good sample design is to ensure that the specification of the target population is as clear and complete
as possible to ensure that all elements within the population are represented. The target population is sampled using a
sampling frame. Often the units in the population can be identified by existing information; for example, pay-rolls,



company lists, government registers etc. A sampling frame could also be geographical; for example postcodes have
become a well-used means of selecting a sample.
iii) Sample Size
For any sample design deciding upon the appropriate sample size will depend on several key factors.
(1) No estimate taken from a sample is expected to be exact: Any assumptions about the overall population based on
the results of a sample will have an attached margin of error.
(2) To lower the margin of error usually requires a larger sample size. The amount of variability in the population (i.e.
the range of values or opinions) will also affect accuracy and therefore the size of sample.
(3) The confidence level is the likelihood that the results obtained from the sample lie within a required precision. The higher the confidence level, the more certain you can be that the results are not atypical, and the larger the sample needed. Statisticians often use a 95 per cent confidence level to provide strong conclusions.
(4) Population size does not normally affect sample size. In fact the larger the population size the lower the proportion
of that population that needs to be sampled to be representative. It is only when the proposed sample size is more than
5 per cent of the population that the population size becomes part of the formulae to calculate the sample size.
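A hedged worked example pulling these points together is the commonly used formula for the sample size needed to estimate a proportion, n0 = z^2 p(1-p) / e^2, with a finite population correction applied when the proposed sample exceeds about 5 per cent of the population. The population size and other figures below are hypothetical.

import math

z = 1.96   # z-value corresponding to a 95 per cent confidence level
p = 0.5    # assumed population proportion (0.5 gives the most conservative sample size)
e = 0.05   # desired margin of error (plus or minus 5 percentage points)
N = 1200   # hypothetical population size

n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # about 384 before any correction

if n0 / N > 0.05:                        # correction applies when the sample exceeds 5% of the population
    n = n0 / (1 + (n0 - 1) / N)          # finite population correction
else:
    n = n0

print(math.ceil(n))   # roughly 292 respondents for this hypothetical population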
iv) Types of Sampling (Discussed in details in the Unit II: Sampling)

Step VII: Analysis of data


The word 'analysis' has two component parts, the prefix 'ana' meaning 'above' and the Greek root 'lysis' meaning 'to
break up or dissolve'.
Thus data analysis can be described as:
"...a process of resolving data into its constituent components, to reveal its characteristic elements and structure."
Where the data is quantitative there are three determinants of the appropriate statistical tools for the purposes of
analysis. These are the number of samples to be compared, whether the samples being compared are independent of
one another and the level of data measurement.
Suppose a fruit juice processor wishes to test the acceptability of a new drink based on a novel combination of tropical
fruit juices. There are several alternative research designs which might be employed, each involving different numbers
of samples.

Test A: Comparing sales in a test market with the market share of the product it is targeted to replace. (Number of samples = 1)

Test B: Comparing the responses of a sample of regular drinkers of fruit juices with those of a sample of non-fruit-juice drinkers to a trial formulation. (Number of samples = 2)

Test C: Comparing the responses of samples of heavy, moderate and infrequent fruit juice drinkers to a trial formulation. (Number of samples = 3)

The next consideration is whether the samples being compared are dependent (i.e. related) or independent of one another (i.e. unrelated). Samples are said to be independent, or unrelated, when the measurement taken from one sample in no way affects the measurement taken from another sample.
Take for example the outline of test B above. The measurement of the responses of fruit juice drinkers to the trial
formulation in no way affects or influences the responses of the sample of non-fruit juice drinkers. Therefore, the



samples are independent of one another. Suppose however a sample were given two formulations of fruit juice to taste.
That is, the same individuals are asked first to taste formulation X and then to taste formulation Y. The researcher
would have two sets of sample results, i.e. responses to product X and responses to product Y. In this case, the samples
would be considered dependent or related to one another. This is because the individual will make a comparison of the
two products and his/her response to one formulation is likely to affect his/her reaction or evaluation of the other
product.
The third factor to be considered is the level of measurement of the data being used. Data can be nominal, ordinal, interval or ratio scaled. The table below summarises the mathematical properties of each of these levels of measurement.
Once the researcher knows how many samples are to be compared, whether these samples are related or unrelated to
one another and the level of measurement then the selection of the appropriate statistical test is easily made. To
illustrate the importance of understanding these connections consider the following simple, but common, question in
research. In many instances the age of respondents will be of interest. This question might be asked in either of the two
following ways:
Please indicate to which of the following age categories you belong:
(a) 15-21 years ___
22 - 30 years ___
Over 30 years ___
(b) How old are you? ___ Years

Levels of measurement

Measurement scale | Measurement level | Examples | Mathematical properties
Nominal | Frequency counts | Producing grading categories | Confined to a small number of tests using the mode and frequency
Ordinal | Ranking of items | Placing brands of cooking oil in order of preference | Wide range of nonparametric tests which test for order
Interval | Relative differences of magnitude between items | Scoring products on a 10-point scale of like/dislike | Wide range of parametric tests
Ratio | Absolute differences of magnitude | Stating how much better one product is than another in absolute terms | All arithmetic operations

Choosing format (a) would give rise to nominal (or categorical) data and format (b) would yield ratio scaled data.
These are at opposite ends of the hierarchy of levels of measurement. If by accident or design format (a) were chosen
then the analyst would have only a very small set of statistical tests that could be applied and these are not very
powerful in the sense that they are limited to showing association between variables and could not be used to establish
cause-and-effect. Format (b), on the other hand, since it gives the analyst ratio data, allows all statistical tests to be used
including the more powerful parametric tests whereby cause-and-effect can be established, where it exists. Thus a
simple change in the wording of a question can have a fundamental effect upon the nature of the data generated.
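To illustrate how the two formats constrain the analysis, the hypothetical Python sketch below records age once as a category (format (a)) and once in years (format (b)) alongside a made-up outcome variable. The ratio-scaled version supports a parametric measure such as a correlation, while the categorical version limits the analyst to group summaries or tests of association; all names and figures are assumptions for the example.

import pandas as pd
from scipy import stats

# Hypothetical responses: 'age_years' is ratio data (format b),
# 'age_category' is categorical data (format a).
df = pd.DataFrame({
    "age_years":     [18, 24, 35, 42, 19, 28, 51, 33],
    "age_category":  ["15-21", "22-30", "Over 30", "Over 30",
                      "15-21", "22-30", "Over 30", "Over 30"],
    "monthly_spend": [300, 450, 620, 580, 280, 500, 700, 610],
})

# With ratio data a parametric measure such as Pearson's correlation can be computed.
r, p = stats.pearsonr(df["age_years"], df["monthly_spend"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# With categories only, the analyst is limited to group summaries or
# tests of association such as chi-square on a cross-tabulation.
print(df.groupby("age_category")["monthly_spend"].mean())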



Selecting statistical tests
The individual responsible for commissioning the research may be unfamiliar with the technicalities of statistical tests, but he/she should at least be aware that the number of samples, their dependence or independence and the level of measurement all affect how the data can be analysed. Those who submit research proposals involving quantitative data should demonstrate an awareness of the factors that determine the mode of analysis and a capability to undertake such analysis.
Researchers have to plan ahead for the analysis stage. It often happens that data processing begins whilst the data
gathering is still underway. Whether the data is to be analysed manually or through the use of a computer program,
data can be coded, cleaned (i.e. errors removed) and the proposed analytical tests tried out to ensure that they are
effective before all of the data has been collected.
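As a simple, hypothetical sketch of this early coding and cleaning, the Python code below maps verbal questionnaire answers onto the numeric codes agreed in the analysis plan and flags responses that cannot be coded, so that they can be checked against the original forms; the data and codes are invented for illustration.

import pandas as pd

# Hypothetical batch of questionnaires returned from the field.
raw = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "satisfaction":  ["Satisfied", "satisfied ", "Dissatisfied", "N/A"],
})

# Coding: map verbal answers onto the numeric codes agreed in the analysis plan.
codes = {"satisfied": 1, "dissatisfied": 0}
raw["satisfaction_code"] = raw["satisfaction"].str.strip().str.lower().map(codes)

# Cleaning: flag responses that could not be coded so they can be checked
# against the original questionnaires before the full data set arrives.
errors = raw[raw["satisfaction_code"].isna()]
print(errors)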
Another important aspect relates to logistics planning. This includes ensuring that once the task of preparing the data
for analysis has begun there is a steady and uninterrupted flow of completed data forms or questionnaires back from
the field interviewers to the data processors. Otherwise, the whole exercise becomes increasingly inefficient. A second
logistical issue concerns any plan to build up a picture of the pattern of responses as the data comes flowing in. This
may require careful planning of the sequencing of fieldwork. For instance, suppose that research was being undertaken
within a particular agricultural region with a view to establishing the size, number and type of milling enterprises
which had established themselves in rural areas following market liberalisation. It may be that in the West of the district under study mainly wheat is grown, whilst in the East maize is the major crop. It would make sense to
coordinate the fieldwork with data analysis so that the interim picture was of either wheat or maize milling since the
two are likely to differ in terms of the type of mill used (e.g. hammer versus plate mills) as well as screen sizes and end
use (e.g. the proportions prepared for animal versus human food).

Step VIII: Drawing conclusions and making recommendations


The concluding chapters of this textbook are devoted to the topic of research report writing. However, it is perhaps
worth noting that the end products of research are conclusions and recommendations. With respect to the marketing



planning function, research helps to identify potential threats and opportunities, generates alternative courses of
action, provides information to enable marketing managers to evaluate those alternatives and advises on the
implementation of the alternatives.
Too often research reports chiefly comprise a lengthy series of tables of statistics accompanied by a few brief comments
which verbally describe what is already self-evident from the tables. Without interpretation, data remains of potential, as opposed to actual, use. When conclusions are drawn from raw data and when recommendations are made, then data
is converted into information. It is information which management needs to reduce the inherent risks and uncertainties
in management decision making.
Customer oriented researchers will have noted from the outset of the research which topics and issues are of particular
importance to the person(s) who initiated the research and will weight the content of their reports accordingly. That is,
the researcher should determine what the marketing manager's priorities are with respect to the research study. In
particular he/she should distinguish between what the managers:
a. must know
b. should know
c. could know
This means that there will be information that is essential in order for the manager to make the particular decision with
which he/she is faced (must know), information that would be useful to have if time and resources within the budget
allocation permit (should know) and there will be information that it would be nice to have but is not at all directly
related to the decision at hand (could know). In writing a research proposal, experienced researchers would be careful
to limit the information which they firmly promise to obtain, in the course of the study, to that which is considered
'must know' information. Moreover, within their final report, experienced researchers will ensure that the greater part
of the report focuses upon 'must know' type information.

Major problems in Research process


In India, researchers in general, and business researchers in particular are facing several problems. This is all the more
true in case of empirical research.
Some of the important problems are as follows:
i) The lack of scientific training in the business research methodology is a major problem in our country. Many
researchers take a leap in the dark without having a grip over research methodology. Systematic training in business
research methodology is a necessity.
ii) There is paucity of competent researchers and research supervisors. As a result the research results many a time
do not reflect the reality.
iii) Many of the business organizations are not research conscious and feel that investment in research is wastage of
resources and does not encourage research.
iv) The research and Development Department has become a common feature in many medium and large
organizations. But decision makers do not appear to be very keen on implementing the findings of their R & D
departments.
v) At the same time, small organizations which are the majority in our economy, are not able to afford a R & D
department at all. Even engaging a consultant seems to be costly for them. Consequently, they do not take the help of
research to solve their problems.



vi) Many people largely depend on customs, traditions and routine practices in their decision making, as they feel
that research does not have any useful purpose to serve in the management of their business.
vii) There are insufficient interactions between the University departments and business organizations,
government departments and research organizations. There should be some mechanism to develop university and
industry interaction so that both can benefit i.e. the academics can get ideas from the practitioners on what needs to be
researched upon and the practitioners can apply the research results of the academics.
viii) The secrecy of business information is sacrosanct to business organizations. Most business organizations in our country do not part with information to researchers. Except for public sector organizations, which have a culture of encouraging research, many private sector organizations are not willing to provide data.
ix) Even when research studies are undertaken, many a time, they are overlapping, resulting in duplication because
there is no proper coordination between different departments of a university and between different universities.
x) Difficulty in obtaining funds: because of the scarcity of resources, many university departments do not come forward to undertake research.
xi) Poor library facilities at many places, because of which researchers have to spend much of their time and energy in
tracing out the relevant material and information.
xii) Many researchers in our country also face the difficulty of inadequate computational and secretarial assistance, because of which they have to take more time to complete their studies.
xiii) Delayed publication of data: There is difficulty in the timely availability of up-to-date data from published sources. The data available from published sources or governmental agencies is often old; a time lag of at least two to three years exists, as a result of which the data prove less relevant.
xiv) Social Research, especially managerial research, relates to human beings and their behaviour. The observations,
the data collection and the conclusions etc must be valid. There is the problem of conceptualization of these aspects.
xv) Another difficulty in the research arena is that there is no code of conduct for the researchers. There is need for
developing a code of conduct for researchers to educate them about ethical aspects of research, maintaining
confidentiality of information etc.
In spite of all these difficulties and problems, a business enterprise cannot avoid research, especially in the fast
changing world. To survive in the market an enterprise has to continuously update itself, it has to change its attitudes,
approaches, products, technology, etc., through continuous research.

Use of advanced technology in Research


Nearly five decades ago, the first programmable, electronic, digital computer was switched on. That day science
acquired a tool that at first simply facilitated research, then began to change the way research was done. Today these
changes continue, and now amount to a revolution.
Electronic digital computers at first simply replaced earlier technologies. Researchers used computers to do
arithmetic calculations previously done with paper and pencil, slide rules, abacuses, or roomfuls of people running
mechanical calculators. Benefits offered by the earliest computers were more quantitative than qualitative; bigger
computations could be done faster, with greater reliability, and perhaps more cheaply. But computers were large,
expensive, required technically expert operators and programmers, and consequently were accessible only to a
relatively small fraction of scientists and engineers.



One human generation and several computer generations later, with the advent of the integrated circuit (the
semiconductor ―chip‖), computational speed increased by a factor of 1 trillion, computational cost decreased by a
factor of 10 million, and the smallest useful calculator went from the size of a typewriter to the size of a wristwatch. At
present, personal computers selling for a few thousand dollars can put significant computing power on the desk of
every scientist. Meanwhile, advances in the software through which people interact with and instruct computers have
made computers potentially accessible to people with no specific training in computation. More recently, computer
technology has joined telecommunications technology to create a new entity, ―information technology.‖ Information
technology has done much to remove from the researcher the constraints of speed, cost, and distance.
On the whole, information technology has led to improvements in research. New avenues for scientific exploration
have opened. The amount of data that can be analyzed has expanded, as has the complexity of analyses. And
researchers can collaborate more widely and efficiently.

The following are some of the Uses of advanced technology in Research:


1. Data Collection and Analysis
Collecting and analyzing data with computers are among the most widespread uses of information technology in
research. Computer hardware for these purposes comes in all sizes, ranging from personal computers to
microprocessors dedicated to specific instrumentational tasks, large mainframe computers serving a university campus
or research facility, and supercomputers. Computer software ranges from general-purpose programs that compute numeric functions or conduct statistical analyses to specialized applications of all sorts.
2. Communication and Collaboration among Researchers
Researchers cannot work without access to collaborators, to instruments, to information sources and, sometimes, to
distant computers. Computers and communication networks are increasingly necessary for that access. Three
technologies are concerned with communications and collaboration: word processing, electronic mail, and networks.
Word processing and electronic mail are arguably the most pervasive of all the routine uses of computers in research
communication.
Electronic mail—sending text from one computer user to another over the networks—is replacing written and
telephone communication among many communities of scientists, and is changing the ways in which these
communities are defined. Large, collaborative projects, such as oceanographic voyages, use electronic mail to organize
and schedule experiments, coordinate equipment arrivals, and handle other logistical details. With the advent of
electronic publishing tools that help lay out and integrate text, graphics, and pictures, mail systems that allow
interchange of complex documents will become essential.
Networks range in size from small networks that connect users in a certain geographic area, to national and
international networks. Scientists at different sites increasingly use networks for conversations by electronic mail and
for repeated exchanges of text and data files.

3. Information Storage and Retrieval


How information is stored determines how accessible it is. Scientific texts are generally stored in print (in the jargon, in
hard copy) and are accessible through the indices and catalogs of a library. Some texts, along with programs and data,
however, are stored electronically—on disks or magnetic tapes to be run in computers—and are generally more easily
accessible. In addition, collections of data, known as databases, are sometimes stored in a central location. In general,



electronic storage of information holds enormous advantages: it can be stored economically, found quickly without
going to another location, and moved easily.

Motivation of research
Motivation is essential to nearly all behaviour at work. However, it is not easy to define. Motivation can be thought of as the force that drives behaviour. In other words, it can be considered both as the powerhouse behind behaviour and as a person's reasons for doing something (or nothing). Motivation involves both feelings (emotions) and thinking (cognition).
All human behaviour arises in response to some form of internal (physiological) or external (environmental) stimulation. These behaviours are purposeful or goal directed, and they are the result of the arousal of certain motives. Thus motivation can be defined as the process of activating, maintaining and directing behaviour toward a particular goal. The process is terminated once the desired goal is obtained.
The process of initiating action in the organism is technically called motivation. Motivation refers to a state that directs the behaviour of the individual towards certain goals. Motivation is not directly observable; it is described as an inferred process and is so called by psychologists to explain certain behaviours.
When we ask "What motivates a person to do a particular thing?", we usually mean why does he behave as he does. In other words, motivation, as popularly used, refers to the cause or 'why' of behaviour. Since psychology is the study of human behaviour, motivation is an important part of psychology. Motivation refers to a state of a person that directs the behaviour of the individual towards certain goals.
Various types of motivation which lead to effective research work are as follows:
1. Intrinsic motivation - the love of the work itself. Intrinsic motivations include: interest; challenge; learning; meaning; purpose; creative flow. Research has shown that high levels of intrinsic motivation are strongly linked to outstanding creative performance.
2. Extrinsic motivation - rewards for good work or punishments for poor work. Extrinsic motivations include: money; fame; awards; praise; status; opportunities; deadlines; commitments; bribes; threats. Research shows that too much focus on extrinsic motivation can block creativity.
3. Personal motivation - individual values, linked to personality. Examples include: power; harmony; achievement; generosity; public recognition; authenticity; knowledge; security; pleasure.
Each of us prioritizes some values over others; understanding your own values and those of the people around you is key to motivating yourself and influencing others.



4. Interpersonal motivation - influences from other people. Much of our behaviour is a response to people around
us, such as: copying; rebellion; competition; collaboration; commitment; encouragement.
All four of these factors are important to get success in a research activity.

Questions for Review:

1. Define the concept of research and analyze its characteristics.


2. Explain the significance of research.
3. Write an essay on various types of research.
4. What is meant by research process? What are the various stages / aspects involved in the research process?
5. What do you mean by a method of research? Briefly explain different methods of research.
6. Explain the significance of research in various functional areas of commerce.
7. What is Survey Research? How is it different from Observation Research?
8. List the various types of studies according to the purpose of the study.
9. List out five important difficulties faced by business researchers in India.
10. Write short note on:
A. Case Research
B. Experimental Research
C. Historical Research
D. Comparative Method of research
E. Types of research
F. Research Process
G. Define research problem
H. Problem identification process





Unit 2 : Research Design

Research problem selection


1. Formulation of research problem
Formulation of research problem constitutes the first stage in the research process. Essentially, two issues are
involved in formulation of research problem viz., understanding the problem thoroughly, and rephrasing the same
into meaningful terms from an analytical point of view.
The best way of understanding the problem is to discuss it with one‘s own colleagues or with those having some
expertise in the subject. In an academic institution the researcher can seek the help of a teacher, who is usually an experienced person. Often the teacher puts forth the problem in general terms, and it is up to the researcher to narrow it down and phrase the problem in operational terms. In governmental or non-governmental organisations, the problem is usually earmarked by the administrative heads, with whom the researcher can discuss how the problem originally came about and what considerations are involved in its possible solutions.
The researcher must at the same time examine all available literature to get himself/herself acquainted with the
selected problem. He/She may review the conceptual literature concerning the concepts and theories, and the
empirical literature consisting of studies made earlier which are similar to the one proposed. The basic outcome of this
review will be the knowledge as to what research questions have been explored and what were the findings. This will
enable the researcher to specify his/her own research problem in a meaningful context. After this the researcher rephrases the problem in analytical or operational terms, i.e., states the problem as specifically as possible. This task
of defining a research problem is a step of greatest importance in the entire research process. The problem to be
investigated must be defined unambiguously for that will help discriminating relevant data from irrelevant ones. Care
must, however, be taken to verify the authenticity and validity of the facts concerning the problem. The statement of
the problem determines the data which are to be collected, the characteristics of the data which are relevant, relations
between variables which are to be examined, the choice of the method and techniques to be used in these
investigations. If there are certain pertinent terms, the same should be clearly defined along with the task of
formulating the problem. In fact, formulation of the problem often follows a sequential pattern where a number of



formulations are set up, each formulation more specific than the preceding one, each one phrased in more analytical
terms, and each more realistic in terms of the available data and resources.

2. Problem definition techniques


Once a research problem has been identified, the research problem needs to be defined. The definition of a problem
amounts to specifying it in detail and narrowing it down to workable size. Each question and subordinate question to
be answered is specified at this stage and the scope and limits of investigation are determined. In this stage of research
the overall plan for the research project must be set out in logical order to see if it makes sense. Your research topic
should be defined in such a way that it is clearly understood. If you are studying, for example, alcoholism; you need to
put your research question into a framework which suggests that you are very clear and specific about the problem of
alcohol consumption and abuse. In short, topics of research must be grounded in some already-known factual
information which is used to introduce the topic and from which the research questions will emerge. Usually, it is
necessary to review previous studies in order to determine just what is to be done. While defining the problem, it is
necessary to formulate the point of view on which the research study is to be based. In case certain assumptions are
made, they must be explicitly stated.

3. Statement of the Problem


A good statement of a problem must clarify exactly what is to be determined or solved or what is the research question.
It must restrict the scope of the study to specific and workable research questions. So, you are required to describe the
background of the study, its theoretical basis and underlying assumptions, and specify the issues in concrete, specific,
and workable questions. All questions raised must be related to the problem. Each major issue or element should be
separated into a subsidiary or secondary elements, and these should be arranged in a logical order under the major
divisions.

4. Operationalisation of Variables
In stating a problem, the researcher should make sure that it is neither stated in terms so general as to make it vague
nor specified so narrowly as to make it insignificant and trivial. The most important step in this direction is to specify
the variables involved in the problem and define them in operational terms. To illustrate, suppose you state that you
want to study the ―Effectiveness of Self-help Groups on the Empowerment of Rural women‖. This statement is broad
and it communicates in a general way what you want to do. But it is necessary to specify the problem with much
greater precision. For this the first step is to specify the variables involved in the problem and define them in
operational terms.
The variables involved in the problem are ―effectiveness‖ and ―empowerment‖. Please note that these expressions are
to be understood beyond their dictionary meanings. For example, the dictionary meaning of ―effectiveness‖ is
―producing the desired effect‖. This meaning is not sufficient for research purposes. It is important for you to specify
exactly what indicators of effectiveness you will use or what you will do to measure the presence or absence of the
phenomenon denoted by the term ―effectiveness‖. Similarly, you have to define the other variable ―empowerment‖
also in terms of the operations or processes that will be used to measure them. In this study, you might choose to
define ―effectiveness‖ as the improvement made by the rural women in scores on a standardised scale. The term
‗empowerment‘ might refer to the scores on the achievement test in empowerment.
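As a purely hypothetical sketch of such an operational definition, the Python code below treats 'effectiveness' as the improvement in scores on a standardised empowerment scale measured before and after joining a self-help group; the respondents and scores are invented for illustration.

import pandas as pd

# Hypothetical pre- and post-intervention scores on a standardised empowerment scale.
scores = pd.DataFrame({
    "respondent":   ["R1", "R2", "R3", "R4"],
    "score_before": [42, 55, 38, 60],
    "score_after":  [58, 63, 49, 66],
})

# The operational definition turns the abstract variable into a measurable quantity.
scores["effectiveness"] = scores["score_after"] - scores["score_before"]
print(scores[["respondent", "effectiveness"]])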



It is worth noting that the problem should be stated in a way that it indicates a relationship between two or more
variables. It should involve neither philosophical issues, values nor questions of judgement that cannot be answered by
scientific investigations. For example, should television be made more effective in increasing the performance level of students? Such value questions cannot be answered through research. Similarly, the question "what is there in television teaching that enhances performance?" is a philosophical question which cannot be probed easily.
5. Evaluation of the Problem
You, as a researcher, should evaluate a proposed problem in the light of your competence and professional experience,
possible difficulties in the availability of data, the financial and field constraints, and limitations of time. After
evaluating a broad research problem you have to narrow it down to a highly specific research problem. You formulate
the problem by stating specific questions for which you would seek answers through the application of scientific
method.
It is worthwhile for to you to ask yourself a series of questions before you undertake the research. The questions should
be helpful in the evaluation of the problem on various criteria. All such questions must be answered affirmatively
before the study is undertaken. What are the questions that we should ask?
i) Is the problem researchable?
There are certain problems that cannot be effectively solved through the process of research. A researchable problem is
always concerned with the relationship existing between two or more variables that can be defined and measured. The
problem should be capable of being stated in the form of workable research question that can be answered empirically.
ii) Is the problem new?
There is no use in studying a problem which has already been adequately investigated by other researchers. To avoid
such duplication, it is essential to examine very carefully the literature available in the field concerned. The problem
should be selected only when you are convinced that it is really a new problem which has never before been
investigated successfully. However, it must be noted that a researcher may repeat a study when he/she wants to verify
its conclusions or to extend the validity of its findings in a situation entirely different from the previous one.
iii) Is the problem significant?
The problem should be such that it is likely to fill in the gaps in the existing knowledge, to help to solve some of the
inconsistencies in the previous research, or to help in the interpretation of the known facts. The results or findings of a
study should either become a basis for a theory, generalisations or principles. Besides, they should lead to new
problems for further research or have some useful practical applications.
iv) Is the problem feasible for the particular researcher?
a) Research competencies
The problem should be in an area in which the researcher is qualified and competent. He/She must possess the
necessary skills and competencies that may be needed to develop and administer the data gathering tools, and
interpret the data available for analysis. The researcher should also have the necessary knowledge of research design,
qualitative and quantitative techniques of data analysis etc. that may be required to carry out the research to its
completion.
b) Interest and enthusiasm
The researcher should be genuinely interested in and enthusiastic about the problem he/she wants to undertake for
research.



c) Financial considerations and feasibility
The problem should be financially feasible. The researcher should ascertain whether he/she has the necessary financial
and temporal resources to carry on the study. The cost is an important element in feasibility. It is important to estimate
the cost of the project and assess the availability of funds. This will determine whether the project can be actually
executed.
d) Administrative considerations
In addition to personal limitations, financial and time constraints, the researcher should also consider the nature of
data, equipment, specialised personnel, and administrative facilities that are needed to complete the study successfully.
He/She should check whether he/she is able to get the co-operation from various administrative authorities for
collecting various types of data.
e) Time
Projects are a time bound exercise. Most of you, if not all, are already engaged in more than one activity in office, at
home and at social organizations. It is important to assess the time required to complete a study, besides the
assessment of total period, it is necessary to identify the period of the year in relation to the nature of the study.

Research Design
The decisions regarding what, where, when, how much, by what means concerning a research project constitute a
research design. ―A research design is the arrangement of conditions for collection and analysis of data in a manner
that aims to combine relevance to the research purpose with economy in procedure‖. In fact, the research design is the
conceptual structure within which research is conducted; it constitutes the blueprint for the collection, measurement
and analysis of data. As such the design includes an outline of what the researcher will do from writing the hypothesis
and its operational implications to the final analysis of data.
More explicitly, the design decisions happen to be in respect of:

 What is the study about?


 Why is the study being made?
 Where will the study be carried out?
 What type of data is required?
 Where can the required data be found?
 What periods of time will the study include?
 What will be the sample design?
 What techniques of data collection will be used?
 How will the data be analysed?
 In what style will the report be prepared?

Meaning of Research Design


Research design is also known by different names such as research outline, plan, blue print. In the words of Fred N.
Kerlinger, it is the plan, structure and strategy of investigation conceived so as to obtain answers to research questions
and control variance. The plan includes everything the investigator will do from writing the hypothesis and their
operational implications to the final analysis of data. The structure is the outline, the scheme, the paradigms of the



operation of the variables. The strategy includes the methods to be used to collect and analyze the data. At the
beginning this plan (design) is generally vague and tentative. It undergoes many modifications and changes as the
study progresses and insights into it deepen. The working out of the plan consists of making a series of decisions with
respect to what, why, where, when, who and how of the research.
According to Pauline V. Young, "a research design is the logical and systematic planning and directing of a piece of research". According to Roger E. Kirk, "research designs are plans that specify how data should be collected and analyzed".
Research design is the plan, structure and strategy of investigation conceived so as to obtain answers to research
questions and to control variance. (Kerlinger)
A research design is the specification of methods and procedures for acquiring the information needed. It is the overall operational pattern or framework of the project that stipulates what information is to be collected, from which sources and by what procedures. (Green and Tull)
The research has to be geared to the available time, energy, money and to the availability of data. There is no such
thing as a single or correct design. Research design represents a compromise dictated by many practical
considerations that go into research.

Why Research design is required?


Research design is needed because it facilitates the smooth sailing of the various research operations, thereby making
research as efficient as possible yielding maximal information with minimal expenditure of effort, time and money.
Just as for the economical and attractive construction of a house we need a blueprint (or what is commonly called the map of the house), well thought out and prepared by an expert architect, similarly we need a research design or a plan in advance of data collection and analysis for our research project. Research design stands for advance planning of the methods to be adopted for collecting the relevant data and the techniques to be used in their analysis.

Functions of Research Design
Regardless of the type of research design selected by the investigator, all plans perform one or more functions
outlined below.
i) It provides the researcher with a blue print for studying research questions.
ii) It dictates boundaries of research activity and enables the investigator to channel his energies in a specific direction.
iii) It enables the investigator to anticipate potential problems in the implementation of the study.
iv) The common function of designs is to assist the investigator in providing answers to various kinds of research
questions.
A study design includes a number of component parts which are interdependent and which demand a series of
decisions regarding the definitions, methods, techniques, procedures, time, cost and administration aspects.

Features of good Design


A research design basically is a plan of action. Once the research problem is selected, then it must be executed to get
the results. Then how to go about it? What is its scope? What are the sources of data? What is the method of enquiry?
What is the time frame? How to record the data? How to analyze the data? What are the tools and techniques of
analysis? What is the manpower and organization required? What are the resources required? These and many such



are the subject matter of attacking the research problem demanding decisions in the beginning itself to have greater
clarity about the research study. It is similar to having a building plan before the building is constructed.
The following are the main features of a good research design.
a. Simplicity: It should be simple and understandable
b. Economical: It must be economical. The technique selected must be cost effective and less time-consuming
c. Reliability: It should give the smallest experimental error, involve the minimum bias and ensure the reliability of the data collected and analysed.
d. Workability: It must be workable. It should be pragmatic and practicable.
e. Flexibility: It must be flexible enough to permit the consideration of many different aspects of a phenomenon.
f. Accuracy: It must lead to accurate results.
According to P.V. Young the various ―considerations which enter into making decisions regarding what, where, when,
how much, by what means constitute a plan of study or a study design‖.

Usually the features or components of a Research design are as follows:


1) Need for the Study: Explain the need for and importance of this study and its relevance.
2) Review of Previous Studies: Review the previous works done on this topic, understand what they did, identify
gaps and make a case for this study and justify it.
3) Statement of Problem: State the research problem in clear terms and give a title to the study.
4) Objectives of Study: What is the purpose of this study? What are the objectives you want to achieve by this study?
The statement of objectives should not be vague. They must be specific and focused.
5) Formulation of Hypothesis: Conceive possible outcomes or answers to the research questions and formulate them into hypotheses so that they can be tested.
6) Operational Definitions: If the study is using uncommon concepts or unfamiliar tools or using even the familiar
tools and concepts in a specific sense, they must be specified and defined.
7) Scope of the Study: It is important to define the scope of the study, because the scope decides what is within its
purview and what is outside.
Scope includes the geographical scope, content scope and chronological scope of the study. The territorial area to be covered by the study should be decided, e.g. only Delhi, the northern states, or all India. As far as content scope is concerned, it depends on the problem; for example, in a study of industrial relations in a particular organization, it should be clear which aspects are to be studied and which aspects do not come under the study and hence are not studied. Chronological scope, i.e., the selection of the time period and its justification, is also important: whether the study is at a point of time or longitudinal, say 1991-2003.
8) Sources of Data: This is an important stage in the research design. At this stage, keeping in view the nature of
research, the researcher has to decide the sources of data from which the data are to be collected. Basically the sources
are divided into primary source (field sources) and secondary source (documentary sources). The data from primary
source are called as primary data, and data from secondary source are called secondary data. Hence, the researcher has
to decide whether to collect from primary source or secondary source or both sources.
9) Method of Collection: After deciding the sources for data collection, the researcher has to determine the methods
to be employed for data collection, primarily, either census method or sampling method. This decision may depend on
the nature, purpose, scope of the research and also time factor and financial resources.



10) Tools & Techniques: The tools and techniques to be used for collecting data such as observation, interview,
survey, schedule, questionnaire, etc., have to be decided and prepared.
11) Sampling Design: If it is a sample study, the sampling techniques, the size of sample, the way samples are to be
drawn etc., are to be decided.
12) Data Analysis: How are you going to process and analyze the data and information collected? What simple or
advanced statistical techniques are going to be used for analysis and testing of hypothesis, so that necessary care can be
taken at the collection stage.
13) Presentation of the Results of Study: How are you going to present the results of the study? How many
chapters? What is the chapter scheme? The chapters, their purpose, their titles have to be outlined. It is known as
chapterisation.
14) Time Estimates: What is the time available for this study? Is it limited or unlimited time? Generally, it is a time
bound study. The available or permitted time must be apportioned between different activities and the activities to be
carried out within the specified time. For example, preparation of research design one month, preparation of
questionnaire one month, data collection two months, analysis of data two months, drafting of the report two months
etc.,
15) Financial Budget: The design should also take into consideration the various costs involved and the sources
available to meet them. The expenditures like salaries (if any), printing and stationery, postage and telephone,
computer and secretarial assistance etc.
16) Administration of the Enquiry: How is the whole thing to be executed? Who does what and when? All these
activities have to be organized systematically, research personnel have to be identified and trained. They must be
entrusted with the tasks, the various activities are to be coordinated and the whole project must be completed as per
schedule. Research designs provide guidelines for investigative activity and not necessarily hard and fast rules that
must remain unbroken. As the study progresses, new aspects, new conditions and new connecting links come to light
and it is necessary to change the plan / design as circumstances demand. A universal characteristic of any research
plan is its flexibility. Depending upon the method of research, the designs are also known as survey design, case study
design, observation design and experimental design.

Components of research design


Stated in simple language, a research design is a plan of action, a plan for collecting and analysing data in an economical, efficient and relevant manner. Whatever be the nature of the design, the following steps are generally followed.
1. Selection and Definition of a Problem: The problem selected for study should be defined clearly in operational terms so that the researcher knows positively what facts he is looking for and what is relevant to the study.
2. Source of Data: Once the problem is selected it is the duty of the researcher to state clearly the various sources of
information such as library, personal documents, field work, a particular residential group etc.
3. Nature of Study: The research design should be expressed in relation to the nature of study to be undertaken. The
choice of the statistical, experimental or comparative type of study should be made at this stage so that the
following steps in planning may have relevance to the proposed problem.
4. Object of Study: Whether the design aims at theoretical understanding or presupposes a welfare notion must be made explicit at this point. Stating the object of the study helps not only in achieving clarity in the design but also in eliciting sincere responses from the respondents.
5. Social-Cultural Context: The research design must be set in its social-cultural context. For example, in a study of the fertility rate among people of a 'backward' class, the context of the so-called backward class and the conceptual reference must be made clear. Unless the meaning of the term is clearly defined, there tends to be large variation in the study, because the term 'backward' could have religious, economic and political connotations.
6. Temporal context: The temporal and geographical limits of the design should also be referred to at this stage, so that it is clear that the research and its hypothesis apply to a particular social group, place and period only.
7. Dimension: It is physically impossible to analyze the data collected from an entire large universe. Hence the selection of an adequate and representative sample is a basic requirement of any research.
8. Basis of Selection: The mechanics of drawing a random, stratified, purposive, double, cluster or quota sample, when followed carefully, will produce a scientifically valid sample in an unbiased manner.
9. Technique of Data Collection: A technique suitable to the study design has to be adopted for the collection of the required data. The relative merits of observation, interview and questionnaire, when studied together, will help in the choice of a suitable technique. Once the collection of data is complete, analysis, coding and presentation of the report naturally follow.

Types of research designs


There are various designs which are used in research, all with specific advantages and disadvantages. Which one the researcher uses depends on the aims of the study and the nature of the study.
The various types of research design can be classified under three titles, viz.:

I) Exploratory Research Design


II) Descriptive research Design
III) Experimental Research Design

I) Exploratory research design


Exploratory research helps ensure that a rigorous and conclusive study will
not begin with an inadequate understanding of the nature of the business
problem. Most exploratory research designs provide qualitative data which
provides greater understanding of a concept. In contrast, quantitative data
provides precise measurement.
Exploratory research may be a single research investigation or it may be a
series of informal studies; both methods provide background information.
Researchers must be creative in the choice of information sources. They should explore all appropriate inexpensive
sources before embarking on expensive research of their own. However, they should still be systematic and careful at
all times.
Why conduct exploratory research?
There are three purposes for conducting exploratory research; all three are interrelated:
A. Diagnosing a situation: Exploratory research helps diagnose the dimensions of problems so that successive
research projects will be on target.
B. Screening alternatives: When several opportunities arise and budgets restrict the use of all possible options,
exploratory research may be utilized to determine the best alternatives. Certain evaluative information can be obtained
through exploratory research. Concept testing is a frequent reason for conducting exploratory research. Concept
testing refers to those research procedures that test some sort of stimulus as a proxy for a new, revised, or remarketed
product or service. Generally, consumers are presented with an idea and asked whether they like it, whether they would use it, and so on.
Concept testing is a means of evaluating ideas by providing a feel for the merits of the idea prior to the commitment of
any research and development, marketing, etc. Concept testing portrays the functions, uses, and possible situations for
the proposed product.
C. Discovering new ideas: Uncovering consumer needs is a great potential source of ideas. Exploratory research is
often used to generate new product ideas, ideas for advertising copy, etc.

Characteristics of exploratory design


The exploratory design must possess the following characteristics.
a. Business Significance: Unless the problem has a place in the industry or has business significance, its study shall be
useless and meaningless.
b. Practical Aspect: It should be of practical value to the management. If it has no practical value, it shall be useless for business decisions.
c. Combination of Theory: Mere practical significance of the problem has no meaning unless it is based on theory. If a
particular problem is based on certain theoretical aspects, it shall be possible for the researcher to judge its utility and
proceed with his study in the right direction.
d. Reliable and valuable facts: In the absence of reliable and valuable facts, the study of the problem shall be of no managerial significance.

Role/significance of exploratory design


Its role can be emphasized owing to the following aspects:
a. Information about the immediate conditions: The design provides information about the conditions surrounding the problem. When the investigator does not have the resources or capability to test a hypothesis, he is still able, through an exploratory design, to find facts that are in accordance with the hypothesis.
b. Presentations of Important Problems: Through exploratory and formulative designs, it is possible to present
important research problems. Once the problems have been presented, the investigator is automatically attracted
towards the study of the problem that has greater importance for our society.
c. Study of the unknown fields: For research, a theory or hypothesis is indispensable, as it provides the proper basis. In order to formulate a hypothesis, we have to acquire the relevant information, and through exploratory design this task is achieved.
d. Theoretical Base: The research problem deals with our social life and social problems and data about them can only
be collected through exploratory design. This design is helpful in providing a theoretical base to the hypothesis and
theories.
e. Identification of uncertain problems for study: Many research problems are vague or uncertain at the outset; through exploratory designs we are able to define such problems. This method, on the one hand, focuses the attention of the investigator on the problem and, on the other, helps him to collect facts on scientific lines so that research may be carried out correctly.

Categories of exploratory research


The purpose, rather than the technique, of the research determines whether a study is exploratory, descriptive, or
causal. A manager may choose from four general categories of exploratory research:

A. Experience surveys: Concepts may be discussed with top executives and knowledgeable managers who have
had personal experience in the field being researched. This constitutes an informal experience survey. Such a study
may be conducted by the business manager rather than the research department. On the other hand, an experience
survey may be a small number of interviews with experienced people who have been carefully selected from outside
the organization. The purpose of such a study is to help formulate the problem and clarify concepts rather than to
develop conclusive evidence.

B. Secondary data analysis: A quick and economical source of background information is trade literature in the
public library. Searching through such material is exploratory research with secondary data; research rarely begins
without such an analysis. An informal situation analysis using secondary data and experience surveys can be
conducted by business managers. Should the project need further clarification, a research specialist can conduct a pilot
study.

C. Case study method: The purpose of a case study is to obtain information from one, or a few, situations similar
to the researcher's situation. A case study has no set procedures, but often requires the cooperation of the party whose
history is being studied. However, this freedom to research makes the success of the case study highly dependent on
the ability of the researcher. As with all exploratory research, the results of a case study should be seen as tentative.
Case study research excels at bringing us to an understanding of a complex issue or object and can extend experience
or add strength to what is already known through previous research. Case studies emphasize detailed contextual
analysis of a limited number of events or conditions and their relationships. Researchers have used the case study
research method for many years across a variety of disciplines. Social scientists, in particular, have made wide use of
this qualitative research method to examine contemporary real-life situations and provide the basis for the application
of ideas and extension of methods. Researcher Robert K. Yin defines the case study research method as an empirical
inquiry that investigates a contemporary phenomenon within its real-life context; when the boundaries between
phenomenon and context are not clearly evident; and in which multiple sources of evidence are used.
Many well-known case study researchers such as Robert E. Stake, Helen Simons, and Robert K. Yin have written about
case study research and suggested techniques for organizing and conducting the research successfully. This
introduction to case study research draws upon their work and proposes six steps that should be used:
1. Determine and define the research questions
2. Select the cases and determine data gathering and analysis techniques
3. Prepare to collect the data
4. Collect data in the field
5. Evaluate and analyze the data
6. Prepare the report

D. Pilot studies: The term "pilot studies" is used as a collective to group together a number of diverse research
techniques all of which are conducted on a small scale. Thus, a pilot study is a research project which generates
primary data from consumers, or other subjects of ultimate concern. There are four major categories of pilot studies:
1. Focus group interviews: These interviews are free-flowing interviews with a small group of people. They have a
flexible format and can discuss anything from a brand image to the product itself. The group typically consists of six to ten
participants and a moderator. The moderator's role is to introduce a topic and to encourage the group to discuss it
among themselves. There are four primary advantages of the focus group: (1) it allows people to discuss their true
feelings and convictions, (2) it is relatively fast, (3) it is easy to execute and very flexible, (4) it is inexpensive.
One disadvantage is that a small group of people, no matter how carefully they are selected, will not be representative.
Specific advantages of focus group interviews can be categorized as follows:
a) Synergism: the combined effort of the group will produce a wider range of information, insights and ideas than will
the cumulation of separately secured responses.
b) Serendipity: an idea may drop out of the blue, and affords the group the opportunity to develop such an idea to its
full significance.
c) Snowballing: a bandwagon effect occurs. One individual often triggers a chain of responses from the other
participants.
d) Stimulation: respondents want to express their ideas and expose their opinions as the general level of excitement
over the topic increases.
e) Security: the participants are more likely to be candid because they soon realize that the things said are not being
identified with any one individual.
f) Spontaneity: people speak only when they have definite feelings about a subject; not because a question requires an
answer.
g) Specialization: the group interview allows the use of a more highly trained moderator because there are certain
economies of scale when a large number of people are "interviewed" simultaneously.
h) Scientific scrutiny: the group interview can be taped or even videoed for observation. This affords closer scrutiny
and allows the researchers to check for consistency in the interpretations.
i) Structure: the moderator, being part of the group, can control the topics the group discusses.
j) Speed: a number of interviews are, in effect, being conducted at one time.
The ideal size for a focus group is six to ten relatively homogeneous people. This avoids one or two members
intimidating the others, and yet, is a small enough group that adequate participation is allowed. Homogeneous groups
avoid confusion which might occur if there were too many differing viewpoints. Researchers who wish to collect
information from different groups should conduct several different focus groups.
The sessions should be as relaxed and natural as possible. The moderator's job is to develop a rapport with the group
and to promote interaction among its members. The discussion may start out general, but the moderator should be able
to focus it on specific topics.
An effective focus group moderator prepares a discussion guide to help ensure that the focus group will cover all
topics of interest. The discussion guide consists of written prefatory remarks to inform the group about the nature of
the focus group and an outline of topics/questions that will be addressed in the group session.
The focus group technique has two shortcomings:
 Without an experienced moderator, a self-appointed leader will dominate the session resulting in an
abnormal "halo effect" on the interview.
 There may be sampling problems.

2. Interactive Media and Online Focus Groups: When a person uses the Internet, he or she interacts with a computer. It is an interactive medium because the user clicks a command and the computer responds. The use of the Internet for qualitative exploratory research is growing rapidly. The term online focus group refers to qualitative research in which a group of individuals provide unstructured comments by keyboarding their remarks into a computer connected to the Internet. The group participants either keyboard their remarks during a chat-room session or when they are alone at their computers. Because respondents enter their comments into the computer, transcripts of verbatim responses are available immediately after the group session. Online groups can be quick and cost efficient. However, because there is less interaction between participants, group synergy and the snowballing of ideas can suffer.
Research companies often set up a private chat room on their company Web sites for focus group interviews.
Participants in these chat rooms feel their anonymity is very secure. Often they will make statements or ask questions
they would never address under other circumstances. This can be a major advantage for a company investigating
sensitive or embarrassing issues.
Many online focus groups using the chat-room format arrange for a sample of participants to be online at the same time for typically 60 to 90 minutes. Because participants do not have to be together in the same room at a research facility, the number of participants in online focus groups can be much larger than in traditional focus groups. A problem with online focus groups is that the moderator cannot see body language and facial expressions (bewilderment, excitement, interest, etc.) to interpret how people are reacting. Also, the moderator's ability to probe and ask additional questions on the spot is reduced in online focus groups, especially those in which participants are not simultaneously involved. Research that requires tactile touch, such as a new easy-opening packaging design, or taste experiences cannot be performed online.

3. Projective techniques: Individuals may be more likely to give a true answer if the question is disguised. If
respondents are presented with unstructured and ambiguous stimuli and are allowed considerable freedom to
respond, they are more likely to express their true feelings.
A projective technique is an indirect means of questioning that enables respondents to "project their beliefs onto a third
party." Thus, the respondents are allowed to express emotions and opinions that would normally be hidden from
others and even hidden from themselves. Common techniques are as follows:
a) Word association: The subject is presented with a list of words, one at a time, and asked to respond with the first
word that comes to mind. Both verbal and non-verbal responses are recorded. Word association should reveal each
individual's true feelings about the subject. Interpreting the results is difficult; the researcher should avoid subjective
interpretations and should consider both what the subject said and did not say (e.g., hesitations).
b) Sentence completion method: This technique is also based on the assumption of free association. Respondents are
required to complete a number of partial sentences with the first word or phrase that comes to mind. Answers tend to
be more complete than in word association; however, the intention of the study is more apparent.
c) Third-person technique and role playing: Providing a "mask" is the basic idea behind the third-person technique.
Respondents are asked why a third person does what he or she does, or what a third person thinks of a product. The
respondent can transfer his attitudes onto the third person. Role playing is a dynamic reenactment of the third-person
technique in a given situation. This technique requires the subject to act out someone else's behavior in a particular
setting.
d) Thematic apperception test (TAT): This test consists of a series of pictures in which consumers and products are the
center of attention. The investigator asks the subject what is happening in the picture and what the people might do
next. Themes ("thematic") are elicited on the basis of the perceptual-interpretive ("apperception") use of the pictures. The
researcher then analyses the content of the stories that the subjects relate. The picture should present a familiar,
interesting, and well-defined problem, but the solution should be ambiguous. A cartoon test, or picture frustration
version of TAT, uses a cartoon drawing in which the respondent suggests dialogue that the cartoon characters might
say. Construction techniques request that the consumer draw a picture, construct a collage, or write a short story to
express their perceptions or feelings.

4. Depth interviews: Depth interviews are similar to the client interviews of a clinical psychiatrist. The researcher
asks many questions and probes for additional elaboration after the subject answers; the subject matter is usually
disguised.
Depth interviews have lost their popularity recently because they are time-consuming and expensive as they require
the services of a skilled interviewer.

Limitations
The following are some of the limitations of exploratory research design:
 Exploratory research techniques have their limitations. Most of them are qualitative, and the interpretation of their
results is judgmental—thus, they cannot take the place of quantitative, conclusive research.
 Because of certain problems, such as interpreter bias or sample size, exploratory findings should be treated as
preliminary. The major benefit of exploratory research is that it generates insights and clarifies the business
problems for testing in future research.
 If the findings of exploratory research are very negative, then no further research should probably be conducted.
However, the researcher should proceed with caution because there is a possibility that a potentially good idea
could be rejected because of unfavorable results at the exploratory stage.
 In other situations, when everything looks positive in the exploratory stage, there is a temptation to market the
product without further research. In this situation, business managers should determine the benefits of further
information versus the cost of additional research. When a major commitment of resources is involved, it is often
well worth conducting a quantitative study.
II) Descriptive Research Design
This is intended to describe certain factors that management is likely to be interested in, such as market conditions, customers' feelings or opinions toward a particular company, purchasing behaviour and so forth. Such research is not intended to allow the researcher to establish causal relationships between marketing variables and sales or consumer behaviour, or to enable the researcher to predict likely future conditions. Descriptive research merely examines 'what is'. Such research, just like exploratory research, usually forms part of an ongoing research programme. Once the researcher has established the present situation in terms of market size, main segments, main competitors, etc., they may then proceed to types of research of a more predictive and/or conclusive nature. Descriptive research usually makes use of descriptive statistics to help the user understand the structure of the data and any significant patterns that may be found in the data. All measures of central tendency, such as the mean, median and mode, are often used along with measures of dispersion such as the variance and standard deviation. Descriptive research results are often presented using pictorial methods such as graphs, pie charts and histograms.
When the nature of the initial decision problem is either to describe specific characteristics of existing market
phenomena or to evaluate current marketing mix strategies of a defined target population or market structure, then a
descriptive research design is appropriate.
Or
If the research question(s) is linked to answering specified questions concerning who, what, where, when, and how
about known members or elements of the target population or market structures under investigation, then the
researcher should consider using a descriptive research design to gather the needed primary data.
Remember, there are two basic ways to gather the primary data needed: observation and asking questions. When the
researcher needs to ask questions, the different approaches used are referred to as survey methods.
Over time, descriptive research designs have come to be viewed and acknowledged as the different survey methods
available to researchers for collecting quantitative primary data from large groups of people through the question and
answer protocol process.
Examples of questions for descriptive research:
1. Do teachers hold favorable attitudes toward using computers in schools?
2. What kinds of activities that involve technology occur in 6th-grade classrooms and how frequently do they occur?
3. Is there a relationship between experience with multimedia computers and problem solving skills?

Descriptive research can be either quantitative or qualitative.


Descriptive research involves gathering data that describe events and then organizes, tabulates, depicts, and describes
the data collection. Descriptive statistics are very important in reducing the data to manageable form.

The Nature of Descriptive Research


1. The descriptive function of research is heavily dependent on instrumentation for measurement and observation
2. Once the instruments are developed, they can be used to describe phenomena of interest to the researchers.
3. The intent of some descriptive research is to produce statistical information about aspects of education that interest policymakers and educators.
4. There has been an ongoing debate among researchers about the value of quantitative vs. qualitative research, with some saying descriptive research is less pure than traditional experimental, quantitative designs.
Some of the Descriptive Techniques

The descriptive techniques that are commonly used include:


 Graphical description
o use graphs to summarize data
o examples: histograms, scatter diagrams, bar charts, pie charts
 Tabular description
o use tables to summarize data
o examples: frequency distribution schedule, cross tabs
 Parametric description
o estimate the values of certain parameters which summarize the data
o measures of location or central tendency: arithmetic mean, median, mode, interquartile mean
o measures of statistical dispersion: standard deviation, statistical range
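To make the parametric measures above concrete, the following short Python sketch (not part of the original syllabus material; the marks data are invented purely for illustration) computes the common measures of central tendency and dispersion using only the standard library.

import statistics
from collections import Counter

marks = [45, 52, 52, 61, 58, 70, 49, 52, 66, 58]   # hypothetical sample of marks

print("Arithmetic mean:", statistics.mean(marks))                 # measure of central tendency
print("Median:", statistics.median(marks))                        # middle value
print("Mode:", statistics.mode(marks))                            # most frequently occurring value
print("Standard deviation:", round(statistics.stdev(marks), 2))   # dispersion around the mean
print("Statistical range:", max(marks) - min(marks))              # largest value minus smallest value

# A simple frequency distribution, i.e., a tabular description of the same data
print("Frequency distribution:", dict(Counter(marks)))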
III) Experimental Research Design
Experimental research design can also be called hypothesis-testing research design. It refers to that research process in which one or more variables are manipulated under conditions that permit the collection of data showing the effect, if any, of such variables in an unconfounded fashion.

Basic Principles of Experimental Designs


There are three principles of experimental designs:
1. Principle of Replication;
2. Principle of Randomization
3. Principle of Local Control
Now let us discuss each one of these principles of experimental design.
1. Principle of Replication
According to this principle, the experiment should be repeated more than once. Thus, each treatment is applied in many
experimental units instead of one. By doing so the statistical accuracy of the experiments is increased. For example,
suppose we are to examine the effect of two varieties of rice.
For this purpose, we may divide the field into two parts and grow one variety in one part and the other variety in the
other part. We can then compare the yield of the two parts and draw conclusion on that basis. But if we are to apply
the principle of replication to this experiment, then we first divide the field into several parts, grow one variety in half
of these parts and the other variety in the remaining parts. We can then collect the data of yield of the two varieties and
draw conclusion by comparing the same. The result so obtained will be more reliable in comparison to the conclusion
we draw without applying the principle of replication. The entire experiment can even be repeated several times for
better results.
Conceptually, replication does not present any difficulty, but computationally it does. For example, if an experiment requiring a two-way analysis of variance is replicated, it will then require a three-way analysis of variance, since
replication itself may be a source of variation in the data. However, it should be remembered that replication is
introduced in order to increase the precision of a study; that is to say, to increase the accuracy with which the main
effects and interactions can be estimated.
2. Principle of Randomization
This principle indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of "chance." For example, if we grow one variety of rice in the first half of the parts of a field and the other variety in the other half, then it is just possible that the soil fertility may be different in the first half in comparison to the other half. If this is so, our results would not be realistic. In such a situation, we may assign the variety of rice to be grown in different parts of the field on the basis of some random sampling technique, i.e., we may apply the randomization principle and protect ourselves against the effects of extraneous factors (differences in soil fertility in the given case).
3. The Principle of Local Control
It is another important principle of experimental designs. Under it the extraneous factor, the known source of
variability, is made to vary deliberately over as wide a range as necessary, and this needs to be done in such a way
that the variability it causes can be measured and hence eliminated from the experimental error.
This means that we should plan the experiment in a manner that we can perform a two-way analysis of variance, in
which the total variability of the data is divided into three components attributed to treatments (varieties of rice in our
case), the extraneous factor (soil fertility in our case) and experimental error.
In other words, according to the principle of local control, we first divide the field into several homogeneous parts,
known as blocks, and then each such block is divided into parts equal to the number of treatments. Then the treatments
are randomly assigned to these parts of a block.
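The three principles can be illustrated together with a small Python sketch. This is only an illustrative layout, assuming a hypothetical field trial of two rice varieties: the field is divided into homogeneous blocks (local control), every treatment appears in each block (replication), and treatments are allocated to plots at random within each block (randomization).

import random

treatments = ["Variety A", "Variety B"]   # the two rice varieties (hypothetical)
number_of_blocks = 4                      # homogeneous parts of the field

layout = {}
for block in range(1, number_of_blocks + 1):
    plots = treatments.copy()
    random.shuffle(plots)                 # random assignment of treatments within the block
    layout["Block " + str(block)] = plots

for block, plots in layout.items():
    print(block, "->", plots)             # e.g. Block 1 -> ['Variety B', 'Variety A']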
Sampling Design
What is a Sample?
A sample is a finite part of a statistical population whose properties are studied to gain information about the whole
(Webster, 1985). When dealing with people, it can be defined as a set of respondents (people) selected from a larger
population for the purpose of a survey.
A population is a group of individuals, persons, objects, or items from which samples are taken for measurement, for example a population of presidents or professors, books or students.
Market research involves the collection of data to obtain insight and knowledge into the needs and wants of
customers and the structure and dynamics of a market. In nearly all cases, it would be very costly and time-consuming
to collect data from the entire population of a market. Accordingly, in market research, extensive use is made of
sampling from which, through careful design and analysis, marketers can draw information about the market.
Sampling is the key to survey research. No matter how well a study is done in other ways, if the sample has not been
properly selected, the results cannot be regarded as correct. It applies mainly to surveys, but is also important for
planning other types of research.
Sampling is the process of selecting units (e.g., people, organizations) from a population of interest so that by studying
the sample we may fairly generalize our results back to the population from which they were chosen. Let's begin by
covering some of the key terms in sampling like "population" and "sampling frame." Then, because some types of
sampling rely upon quantitative models, we'll talk about some of the statistical terms used in sampling. Finally, we'll
discuss the major distinction between probability and Non-probability sampling methods and work through the major
types in each. Sampling is often used when conducting a census is impossible or unreasonable.

What is sampling?
Sampling is the act, process, or technique of selecting a suitable sample, or a representative part of a population for the
purpose of determining parameters or characteristics of the whole population.
What is the purpose of sampling? To draw conclusions about populations from samples, we must use inferential
statistics which enables us to determine a population's characteristics by directly observing only a portion (or sample)
of the population. We obtain a sample rather than a complete enumeration (a census) of the population for many
reasons. Obviously, it is cheaper to observe a part rather than the whole, but we should prepare ourselves to cope with
the dangers of using samples. Some are better than others but all may yield samples that are inaccurate and unreliable.
We will learn how to minimize these dangers, but some potential error is the price we must pay for the convenience
and savings the samples provide.

Why sampling?
One of the decisions to be made by a researcher in conducting a survey is whether to go for a census or a sample
survey. We obtain a sample rather than a complete enumeration (a census) of the population for many reasons. The
most important considerations for this are: cost, size of the population, accuracy of data, accessibility of population,
timeliness, and destructive observations.
1) Cost: The cost of conducting surveys through census method would be prohibitive and sampling helps in
substantial cost reduction of surveys. Since most often the financial resources available to conduct a survey are scarce,
it is imperative to go for a sample survey than census.
2) Size of the Population: If the size of the population is very large it is difficult to conduct a census if not impossible.
In such situations sample survey is the only way to analyse the characteristics of a population.
3) Accuracy of Data: Although reliable information can be obtained through a census, sometimes the accuracy of
information may be lost because of a large population. Sampling involves a small part of the population and a few
trained people can be involved to collect accurate data. On the other hand, a lot of people are required to enumerate all
the observations. Often it becomes difficult to involve trained manpower in large numbers to collect the data thereby
compromising accuracy of data collected. In such a situation a sample may be more accurate than a census. A sloppily
conducted census can provide less reliable information than a carefully obtained sample.
4) Accessibility of Population: There are some populations that are so difficult to get access to that only a sample can
be used, e.g., people in prison, birds migrating from one place to another place etc. The inaccessibility may be economic
or time related. In a particular study, population may be so costly to reach, like the population of planets, that only a
sample can be used.
5) Timeliness: Since we are covering a small portion of a large population through sampling, it is possible to collect
the data in far less time than covering the entire population. Not only does it take less time to collect the data through
sampling but the data processing and analysis also takes less time because fewer observations need to be covered.
Suppose a company wants to get a quick feedback from its consumers on assessing their perceptions about a new
improved detergent in comparison to an existing version of the detergent. Here the time factor is very significant. In
such situations it is better to go for a sample survey rather than census because it reduces a lot of time and product
launch decision can be taken quickly.
6) Destructive Observations: Sometimes the very act of observing the desired characteristics of a unit of the population
destroys it for the intended use. Good examples of this occur in quality control. For example, to test the quality of a
bulb, to determine whether it is defective, it must be destroyed. To obtain a census of the quality of a lorry load of
bulbs, you have to destroy all of them. This is contrary to the purpose served by quality-control testing. In this case,
only a sample should be used to assess the quality of the bulbs. Another example is blood test of a patient.

The disadvantages of sampling are few, but the researcher must be cautious. These are risk, lack of representativeness and insufficient sample size, each of which can cause errors. If the researcher does not pay attention to these flaws, they may invalidate the results.
1) Risk: Using a sample from a population and drawing inferences about the entire population involves risk. In other
words the risk results from dealing with a part of a population. If the risk is not acceptable in seeking a solution to a
problem then a census must be conducted.
2) Lack of representativeness: Determining the representativeness of the sample is the researcher's greatest problem. By definition, 'sample' means a representative part of an entire population. It is necessary to obtain a sample that meets the requirement of representativeness, otherwise the sample will be biased. The inferences drawn from non-representative samples will be misleading and potentially dangerous.
3) Insufficient sample size: The other significant problem in sampling is to determine the size of the sample. The size
of the sample for a valid sample depends on several factors such as extent of risk that the researcher is willing to accept
and the characteristics of the population itself.
What is Sampling Design?
Sample design covers the method of selection, the sample structure and plans for analysing and interpreting the
results. Sample designs can vary from simple to complex and depend on the type of information required and the way
the sample is selected.
Sample design affects the size of the sample and the way in which analysis is carried out. In simple terms the more
precision the market researcher requires, the more complex will be the design and the larger the sample size.
The sample design may make use of the characteristics of the overall market population, but it does not have to be
proportionally representative. It may be necessary to draw a larger sample than would be expected from some parts of
the population; for example, to select more from a minority grouping to ensure that sufficient data is obtained for
analysis on such groups.
Many sample designs are built around the concept of random selection. This permits justifiable inference from the
sample to the population, at quantified levels of precision. Random selection also helps guard against sample bias in a
way that selecting by judgement or convenience cannot.

Characteristics of a good sample Design


It is important that the sampling results must reflect the characteristics of the population. Therefore, while selecting the
sample from the population under investigation it should be ensured that the sample has the following characteristics:
1) A sample must represent a true picture of the population from which it is drawn.
2) A sample must be unbiased by the sampling procedure.
3) A sample must be taken at random so that every member of the population of data has an equal chance of selection.
4) A sample must be sufficiently large but as economical as possible.
5) A sample must be accurate and complete. It should not leave any information incomplete and should include all the
respondents, units or items included in the sample.
6) Adequate sample size must be taken considering the degree of precision required in the results of inquiry.
What are the Steps involved in sample Design?
The sampling design process consists of five stages:
1. Definition of population of concern
2. Specification of a sampling frame, a set of items or events that it is possible to measure
3. Specification of sampling method for selecting items or events from the frame
4. Sampling and data collecting
5. Review of sampling process

1) Population (Universe) definition:
The first concept you need to understand is the difference between a population and a sample. To make a sample, you first need a population. In non-technical language, population means "the number of people living in an area." This meaning of population is also used in survey research, but this is only one of many possible definitions of population. The word universe is sometimes used in survey research, and means exactly the same in this context as population.
The unit of population is whatever you are counting: there can be a population of people, a population of households, a population of events, institutions, transactions, and so forth. Anything you can count can be a population unit. But if you can't get information from it, and you can't measure it in some way, it's not a unit of population that is suitable for survey research.
For a survey, various limits (geographical and otherwise) can be placed on a population. Some populations that could
be covered by surveys are...
 All people living in India.
 All people aged 18 and over.
 All households in Nagpur.
 All schools in Maharashtra.
 All instances of tuning in to FM radio station in the last seven days
...and so on. If you can express it in a phrase beginning "All," and you can count it, it's a population of some kind. The
commonest kind of population used in survey research uses the formula:
 All people aged X years and over, who live in area Y.
The "X years and over" criterion usually rules out children below a certain age, both because of the difficulties involved
in interviewing them and because many research questions don't apply to them.
Even though some populations can't be questioned directly, they're still populations. For example, schools can't fill in
questionnaires, but somebody can do so on behalf of each school. The distinction is important when finding the
answers to questions like "What proportions of Primary schools have libraries?" You need only one questionnaire from
each school - not one from each teacher, or one from each student.
Often, the population you end up surveying is not the population you really wanted, because some part of the
population cannot be surveyed. For example, if you want to survey opinions among the whole population of an area,
and choose to do the survey by telephoning people at home, the population you actually survey will be people with a
telephone in their home. If the people with no telephone have different opinions, you will not discover this.
As long as the surveyed population is a high proportion of the wanted population, the results obtained should also be
true for the larger population. For example, if 90% of homes have a telephone, the 10% without a phone would have to
be very different, for the survey's results not to be true for the whole population.

2. Sampling frames
A sampling frame can be one of two things: either a list of all members of a population, or a method of selecting any
member of the population. The term general population refers to everybody in a particular geographical area.
Common sampling frames for the general population are electoral rolls, street directories, telephone directories, and
customer lists from utilities which are used by almost all households: water, electricity, sewerage, and so on.
It is best to use the list that is most accurate, most complete, and most up to date. This differs from country to country.
In some countries, the best lists are of households, in other countries, they are of people. For most surveys, a list of
households is more useful than a list of people. Another commonly used sampling frame (which is not recommended
for sampling people) is a map.

Samples
A sample is a part of the population from which it was drawn. Survey research is based on sampling, which involves
getting information from only some members of the population.
If information is obtained from the whole population, it is not a sample, but a census. Some surveys, based on very
small populations (such as all members of an organization) in fact are censuses and not sample surveys. When you do
a census, the techniques given in this book still apply, but there is no sampling error - as long as the whole group
participates in the census.
Samples can be drawn in several different ways, e.g. probability samples, quota samples, purposive samples etc.

Sample size
Contrary to popular opinion, sample sizes do not have to be particularly large. Their size is not, as commonly
thought, determined by the size of the population they are to represent. The U.S., for example, contains more than 250 million people, yet the General Social Survey, a highly valued yearly interview survey of the U.S.
population, is based on a sample of around 1500 cases. Political and attitudinal polls, such as the California Poll,
typically draw a sample of around 1000, and some local polls obtain samples of 500 or less. The determiners of sample
size are the variability within the population and the degree of accuracy of population estimates the researcher is
willing to accept (pay for). If you are, for example, interested in the gender distribution of crime victims, the sample
could be relatively small with limited variability of only two possibilities (male and female) compared to the size of the
sample needed to make the same level of accuracy statement about the ethnicity of crime victims (Germans, Italians,
Irish, Poles, Canadians, etc.). To make a statement about the gender makeup of crime victims that would be within 3%
of the population parameter that we would be 95% confident in making would require a sample of 1200, while a
similar statement about the ethnic makeup of victims, would require a much larger sample due to the variability.
For any sample design, deciding upon the appropriate sample size will depend on several key factors:
(1) No estimate taken from a sample is expected to be exact: Any assumptions about the overall population based on
the results of a sample will have an attached margin of error.
(2) To lower the margin of error usually requires a larger sample size. The amount of variability in the population
(i.e. the range of values or opinions) will also affect accuracy and therefore the size of sample.
(3) The confidence level is the likelihood that the results obtained from the sample lie within a required precision; the higher the confidence level, the more certain you wish to be that the results are not atypical. Statisticians often use a 95 per cent confidence level to provide strong conclusions.
(4) Population size does not normally affect sample size. In fact, the larger the population size, the lower the proportion of that population that needs to be sampled to be representative. It is only when the proposed sample size is more than 5 per cent of the population that the population size becomes part of the formula used to calculate the sample size. (A worked calculation of sample size is sketched just after this list.)
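As a rough illustration of factors (1) to (3) above, the widely used formula for estimating a proportion is n = z^2 * p * (1 - p) / e^2, where z reflects the confidence level, p the assumed variability and e the margin of error. The Python sketch below is illustrative only; it ignores finite population corrections and design effects.

import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    # z = 1.96 corresponds to a 95 per cent confidence level;
    # p = 0.5 assumes maximum variability in the population.
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# A +/-3% margin of error at 95 per cent confidence needs roughly a thousand respondents.
print(required_sample_size(0.03))   # prints 1068
print(required_sample_size(0.05))   # prints 385 for a +/-5% margin of error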
Sampling error is the error in sample estimates of a population. Of course you would like to precisely know the
population characteristics from your sample, but that is not likely. Suppose that you wanted to know about the
students at a school of 1000 students and you choose a random sample of 100. With any variability in the population at all, it is unlikely that your sample of 100 would have exactly the same characteristics as another sample of 100 from the same
1000 students. This variation in samples is called sampling error. It is at this point that statistics enters the picture.
We know from the logic of statistics that if we took all possible samples of 100 from our population the distribution of
characteristics such as means and standard deviations of the samples would be "normal," with the mean and standard
deviation of the samples collectively equal to the population mean and standard deviation.
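The idea of sampling error can be demonstrated with a short simulation. The sketch below is illustrative only; the scores are randomly generated, not real data. It draws repeated samples of 100 from a notional population of 1,000 students and shows how the sample means vary around the population mean.

import random
import statistics

random.seed(1)                                            # for a reproducible illustration
population = [random.gauss(60, 10) for _ in range(1000)]  # 1,000 hypothetical student scores

# Draw 500 different random samples of 100 students and record each sample mean
sample_means = [statistics.mean(random.sample(population, 100)) for _ in range(500)]

print("Population mean:        ", round(statistics.mean(population), 2))
print("Average of sample means:", round(statistics.mean(sample_means), 2))
print("Spread of sample means: ", round(statistics.stdev(sample_means), 2))  # the sampling error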

3) Sampling method
The difference between non-probability and probability sampling is that non-probability sampling does not involve
random selection and probability sampling does. Does that mean that non-probability samples aren't representative of
the population? Not necessarily. But it does mean that non-probability samples cannot depend upon the rationale of
probability theory. At least with a probabilistic sample, we know the odds or probability that we have represented the
population well. We are able to estimate confidence intervals for the statistic. With non-probability samples, we may or
may not represent the population well, and it will often be hard for us to know how well we've done so. In general,
researchers prefer probabilistic or random sampling methods over non-probabilistic ones, and consider them to be
more accurate and rigorous. However, in applied social research there may be circumstances where it is not feasible,
practical or theoretically sensible to do random sampling. Here, we consider a wide range of non-probabilistic
alternatives.
Probability sampling, or random sampling, is a sampling technique in which the probability of getting any particular
sample may be calculated. Non-probability sampling does not meet this criterion and should be used with caution.
Non-probability sampling techniques cannot be used to infer from the sample to the general population. Any generalizations obtained from a non-probability study must be filtered through one's knowledge of the topic being studied. Performing non-probability sampling is, however, considerably less expensive than doing probability sampling.
A) Probability sampling methods
Each subject or unit in the population has a known non-zero probability of being included in the sample. This
allows the application of probability theory to estimate how likely it is that the sample reflects the target
population. In statistical terms, a calculation of sampling error can be made. Probability sampling method is any
method of sampling that utilizes some form of random selection. In order to have a random selection method, you
must set up some process or procedure that assures that the different units in your population have equal probabilities
of being chosen. Humans have long practiced various forms of random selection, such as picking a name out of a hat,
or choosing the short straw. These days, we tend to use computers as the mechanism for generating random numbers
as the basis for random selection.
General advantages
 A high degree of representativeness is likely
 The sampling error can be calculated
General disadvantages
 Expensive
 Time consuming
 Relatively complicated

Definition of basic terms:-


These are:
N = the number of cases in the sampling frame
n = the number of cases in the sample
NCn = the number of combinations (subsets) of n from N
f = n/N = the sampling fraction
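A brief numerical illustration of these terms in Python (the figures N = 1000 and n = 100 are hypothetical):

import math

N = 1000                             # cases in the sampling frame
n = 100                              # cases in the sample
f = n / N                            # sampling fraction = 0.1
possible_samples = math.comb(N, n)   # NCn: number of distinct samples of size n

print("Sampling fraction f =", f)
print("Number of possible samples NCn has", len(str(possible_samples)), "digits")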

In Probability sampling, all items have some chance of selection that can be calculated. Probability sampling
technique ensures that bias is not introduced regarding who is included in the survey.

Five common Probability sampling or random sampling techniques are:


1) Simple random sampling,
2) Systematic sampling,
3) Stratified sampling,
4) Cluster sampling, and
5) Multi-stage sampling

1) Simple random sampling


With simple random sampling, each item in a population has an equal chance of inclusion in the sample. For example,
each name in a telephone book could be numbered sequentially. If the sample size was to include 2,000 people, then
2,000 numbers could be randomly generated by computer or numbers could be picked out of a hat. These numbers
could then be matched to names in the telephone book, thereby providing a list of 2,000 people.
Example: - A lotto draw is a good example of simple random sampling. A sample of 6 numbers is randomly generated
from a population of 45, with each number having an equal chance of being selected.
The advantage of simple random sampling is that it is simple and easy to apply when small populations are
involved. However, because every person or item in a population has to be listed before the corresponding random
numbers can be read, this method is very cumbersome to use for large populations.
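A minimal Python sketch of simple random sampling, assuming a hypothetical numbered directory; random.sample gives every entry the same chance of inclusion, much like drawing numbered slips from a hat.

import random

directory = ["Person " + str(i) for i in range(1, 50001)]   # hypothetical telephone book
sample = random.sample(directory, 2000)                     # 2,000 people chosen at random
print(len(sample), "people selected, e.g.", sample[:3])

# The lotto example above: 6 numbers drawn at random from a population of 45
lotto_draw = sorted(random.sample(range(1, 46), 6))
print("Lotto draw:", lotto_draw)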

2) Systematic sampling
Systematic sampling, sometimes called interval-sampling, means that there is a gap, or interval, between each
selection. This method is often used in industry, where an item is selected for testing from a production line (say,
every fifteen minutes) to ensure that machines and equipment are working to specification.
Alternatively, the manufacturer might decide to select every 20th item on a production line to test for defects and
quality. This technique requires the first item to be selected at random as a starting point for testing and, thereafter,
every 20th item is chosen.
This technique could also be used when questioning people in a sample survey. A market researcher might select
every 10th person who enters a particular store, after selecting a person at random as a starting point; or interview
occupants of every 5th house in a street, after selecting a house at random as a starting point.
It may be that a researcher wants to select a fixed size sample. In this case, it is first necessary to know the whole
population size from which the sample is being selected. The appropriate sampling interval, I, is then calculated by
dividing population size, N, by required sample size, n, as follows: I = N/n
Example:-If a systematic sample of 500 students were to be carried out in a university with an enrolled population of
10,000, the sampling interval would be: I = N/n = 10,000/500 =20
Note: if I is not a whole number, then it is rounded to the nearest whole number.
All students would be assigned sequential numbers. The starting point would be chosen by selecting a random number
between 1 and 20. If this number was 9, then the 9th student on the list of students would be selected along with every
following 20th student. The sample of students would be those corresponding to student numbers 9, 29, 49, 69, ........
9929, 9949, 9969 and 9989.
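The same university example can be sketched in Python (illustrative only): the sampling interval is I = N/n = 10,000/500 = 20, a random starting point between 1 and 20 is chosen, and every 20th student thereafter is selected.

import random

N, n = 10000, 500
interval = N // n                          # sampling interval I = 20
start = random.randint(1, interval)        # random starting point, e.g. 9

selected = list(range(start, N + 1, interval))   # every 20th student from the starting point
print("Sample size:", len(selected))             # 500
print("First few selected:", selected[:5])       # e.g. [9, 29, 49, 69, 89]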
The advantage of systematic sampling is that it is simpler to select one random number and then every 'Ith' (e.g.
20th) member on the list, than to select as many random numbers as sample size. It also gives a good spread right
across the population. A disadvantage is that you may need a list to start with, if you wish to know your sample
size and calculate your sampling interval.

3) Stratified sampling
A general problem with random sampling is that you could, by chance, miss out a particular group in the sample.
However, if you form the population into groups, and sample from each group, you can make sure the sample is
representative.
In stratified sampling, the population is divided into groups called strata. A sample is then drawn from within these
strata. Some examples of strata commonly used by research organisations are states, age and sex. Other strata may be religion, academic ability or marital status.
Example: - The committee of a school of 1,000 students wishes to assess any reaction to the reintroduction of rural Care
into the school timetable. To ensure a representative sample of students from all year levels, the committee uses the
stratified sampling technique.
In this case the strata are the year levels. Within each stratum the committee selects a sample. So, in a sample of 100
students, all year levels would be included. The students in the sample would be selected using simple random
sampling or systematic sampling within each stratum.
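A minimal Python sketch of the school example, assuming hypothetical year-level sizes adding up to 1,000 students: a proportional share of the 100-student sample is drawn at random from within each stratum (year level).

import random

year_levels = {"Year 8": 260, "Year 9": 250, "Year 10": 240, "Year 11": 150, "Year 12": 100}
total_students = sum(year_levels.values())          # 1,000 students in all
sample_size = 100

stratified_sample = {}
for year, size in year_levels.items():
    stratum = [year + " student " + str(i) for i in range(1, size + 1)]
    share = round(size / total_students * sample_size)       # proportional allocation
    stratified_sample[year] = random.sample(stratum, share)  # simple random sampling within the stratum

for year, chosen in stratified_sample.items():
    print(year, "->", len(chosen), "students sampled")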
Stratification is most useful when the stratifying variables are simple to work with, easy to observe and closely related
to the topic of the survey.
An important aspect of stratification is that it can be used to select more of one group than another. You may do this if
you feel that responses are more likely to vary in one group than another. So, if you know everyone in one group has
much the same value, you only need a small sample to get information for that group; whereas in another group, the
values may differ widely and a bigger sample is needed.
If you want to combine group-level information to get an answer for the whole population, you have to take account of what proportion you selected from each group.
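For instance (a hypothetical sketch, not drawn from the text), if different proportions were sampled from each group, the overall population figure is obtained by weighting each group's result by that group's share of the population rather than by its share of the sample.

# Hypothetical illustration of combining group-level results with population weights
groups = {
    # group: (population size, mean score observed in that group's sample)
    "Group A": (8000, 4.2),
    "Group B": (1500, 3.1),
    "Group C": (500, 2.5),
}

total_population = sum(size for size, _ in groups.values())
weighted_estimate = sum(size / total_population * mean for size, mean in groups.values())

print("Weighted population estimate:", round(weighted_estimate, 2))
print("Unweighted average of groups:", round(sum(m for _, m in groups.values()) / len(groups), 2))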
When stratified sampling designs are to be employed, there are 3 key questions which have to be immediately
addressed:
1 The bases of stratification, i.e. what characteristics should be used to subdivide the universe/population into
strata?
2 The number of strata, i.e. how many strata should be constructed and what stratum boundaries should be used?
3 Sample sizes within strata, i.e. how many observations should be taken in each stratum?

1) Bases of stratification
Intuitively, it seems clear that the best basis would be the frequency distribution of the principal variable being
studied. For example, in a study of coffee consumption we may believe that behavioural patterns will vary according
to whether a particular respondent drinks a lot of coffee, only a moderate amount of coffee or drinks coffee very
occasionally. Thus we may consider that to stratify according to "heavy users", "moderate users" and "light users"
would provide an optimum stratification. However, two difficulties may arise in attempting to proceed in this way.
First, there is usually interest in many variables, not just one, and stratification on the basis of one may not provide
the best stratification for the others. Secondly, even if one survey variable is of primary importance, current data on
its frequency is unlikely to be available. The latter difficulty can, however, be dealt with, since it is possible to stratify after the data have been collected and before the analysis is undertaken. The practical approach is to create strata on the basis of variables for which information is, or can be made, available and which are believed to be highly correlated with the principal survey characteristics of interest, e.g. age, socio-economic group, sex, farm size, firm size, etc.
In general, it is desirable to make up strata in such a way that the sampling units within strata are as similar as
possible. In this way a relatively limited sample within each stratum will provide a generally precise estimate of the
mean of that stratum. Similarly it is important to maximise differences in stratum means for the key survey variables
of interest. This is desirable since stratification has the effect of removing differences between stratum means from
the sampling error.
Total variance within a population comprises two types of natural variation: between-strata variance and within-strata variance. Stratification removes the between-strata variance from the calculation of the standard error. Suppose, for example, we stratified students in a particular university by subject specialty - marketing, engineering, chemistry,



computer science, mathematics, history, geography etc. and questioned them about the distinctions between training
and education. The theory goes that variation exists both within a specialty (say, among marketing students) and between specialties (marketing students as a whole versus engineering students as a whole). Stratification ensures that the between-strata variation does not enter into the standard error, by taking account of this source when drawing the sample.
2) Number of strata
The next question is that of the number of strata and the construction of stratum boundaries. As regards number of
strata, as many as possible should be used. If each stratum could be made as homogeneous as possible, its mean
could be estimated with high reliability and, in turn, the population mean could be estimated with high precision.
However, some practical problems limit the desirability of a large number of strata:
a) No stratification scheme will completely "explain" the variability among a set of observations. Past a certain point,
the "residual" or "unexplained" variation will dominate, and little improvement will be effected by creating more strata.
b) Depending on the costs of stratification, a point may be reached quickly where creation of additional strata is
economically unproductive.
If a single overall estimate is to be made (e.g. the average per capita consumption of coffee) we would normally use
no more than about 6 strata. If estimates are required for population subgroups (e.g. by region and/or age group),
then more strata may be justified.
3) Sample sizes within strata
Proportional allocation: Once strata have been established, the question becomes, "How big a sample must be drawn
from each?" Consider a situation where a survey of a two-stratum population is to be carried out:
Stratum Number of Items in Stratum

A 10,000

B 90,000

If the budget is fixed at ` 3,000 and the cost per observation is ` 6 in each stratum, the available total sample size is 500. The most common approach would be to sample the same proportion of items in each stratum. This is termed proportional allocation. In this example, the overall sampling fraction is:

n/N = 500/100,000 = 0.005, i.e. 0.5%

Thus, this method of allocation would result in:


Stratum A (10,000 × 0.5%) = 50

Stratum B (90,000 × 0.5%) = 450

The major practical advantage of proportional allocation is that it leads to estimates which are computationally simple. Where proportional sampling has been employed the sample is self-weighting: the overall sample mean needs no special weighting of the individual stratum means, because it is already equivalent to the stratified estimate

x̄st = W1x̄1 + W2x̄2 + W3x̄3 + ... + Wkx̄k

where Wh = Nh/N is the proportion of the population in stratum h and x̄h is the sample mean of stratum h.
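For illustration, both the allocation and the stratified mean can be computed as follows. The stratum sample means at the end are invented purely to show the formula at work; everything else follows the two-stratum example above:

    # Proportional allocation for the two-stratum example
    strata_sizes = {"A": 10_000, "B": 90_000}
    budget, cost_per_observation = 3_000, 6
    total_sample = budget // cost_per_observation            # 500 observations

    N = sum(strata_sizes.values())
    sampling_fraction = total_sample / N                      # 500 / 100,000 = 0.5%
    allocation = {h: round(Nh * sampling_fraction) for h, Nh in strata_sizes.items()}
    print(allocation)                                         # {'A': 50, 'B': 450}

    # Stratified mean: x̄st = W1*x̄1 + W2*x̄2 + ..., with weights Wh = Nh / N.
    # The stratum sample means below are hypothetical.
    stratum_means = {"A": 4.2, "B": 3.8}
    x_bar_st = sum((Nh / N) * stratum_means[h] for h, Nh in strata_sizes.items())
    print(round(x_bar_st, 2))                                 # 0.1*4.2 + 0.9*3.8 = 3.84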

Optimum allocation: Proportional allocation is advisable when all we know of the strata is their sizes. In situations
where the standard deviations of the strata are known it may be advantageous to make a disproportionate allocation.



Suppose that, once again, we had stratum A and stratum B, but we know that the individuals assigned to stratum A
were more varied with respect to their opinions than those assigned to stratum B. Optimum allocation minimises the
standard error of the estimated mean by ensuring that more respondents are assigned to the stratum within which
there is greatest variation.
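The usual formal version of this rule, assuming equal sampling costs in each stratum, is Neyman allocation, in which each stratum's share of the sample is proportional to Nh × σh (stratum size times stratum standard deviation). The text does not give the formula explicitly, so the sketch below, with invented standard deviations, should be read only as an illustration of the general idea:

    def neyman_allocation(sizes, std_devs, total_sample):
        """Allocate total_sample across strata in proportion to N_h * sigma_h."""
        weights = {h: sizes[h] * std_devs[h] for h in sizes}
        total_weight = sum(weights.values())
        return {h: round(total_sample * w / total_weight) for h, w in weights.items()}

    # Stratum A is assumed to be far more varied (sigma = 8) than stratum B (sigma = 2)
    sizes = {"A": 10_000, "B": 90_000}
    std_devs = {"A": 8, "B": 2}
    print(neyman_allocation(sizes, std_devs, 500))
    # {'A': 154, 'B': 346} - far more than A's proportional share of 50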

4) Cluster sampling
It is sometimes expensive to spread your sample across the population as a whole. For example, travel can become
expensive if you are using interviewers to travel between people spread all over the country. To reduce costs you may
choose a cluster sampling technique.
Cluster sampling divides the population into groups, or clusters. A number of clusters are selected randomly to
represent the population, and then all units within selected clusters are included in the sample. No units from non-
selected clusters are included in the sample. They are represented by those from selected clusters. This differs from
stratified sampling, where some units are selected from each group.
Examples of clusters may be factories, schools and geographic areas such as electoral sub-divisions. The selected
clusters are then used to represent the population.
Example:- Suppose an organisation wishes to find out which sports 11 Std students are participating in across
Maharashtra. It would be too costly and take too long to survey every student, or even some students from every
school. Instead, 100 schools are randomly selected from all over Maharashtra.
These schools are considered to be clusters. Then, every 11 Std student in these 100 schools is surveyed. In effect,
students in the sample of 100 schools represent all 11 Std students in Maharashtra.
Cluster sampling has several advantages: reduced costs, simplified fieldwork and more convenient administration. Instead of having a sample scattered over the entire coverage area, the sample is more localised in relatively few centres (clusters).
Cluster sampling's disadvantage is that less accurate results are often obtained due to higher sampling error than for
simple random sampling with the same sample size. In the above example, you might expect to get more accurate
estimates from randomly selecting students across all schools than from randomly selecting 100 schools and taking
every student in those chosen.
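A minimal sketch of the school-cluster example follows. The number of schools in the frame and the class sizes are hypothetical; only the choice of 100 schools and the rule of surveying everyone in each selected school come from the example:

    import random

    # Hypothetical sampling frame: 2,000 school identifiers across Maharashtra
    all_schools = [f"SCH-{i:04d}" for i in range(1, 2001)]

    # Randomly select 100 clusters (schools) ...
    selected_schools = random.sample(all_schools, 100)

    # ... then every Std 11 student in each selected school is included.
    def students_of(school):
        """Placeholder: in practice this would return the school's Std 11 roll."""
        return [f"{school}-student-{j}" for j in range(1, random.randint(41, 121))]

    sample = [student for school in selected_schools for student in students_of(school)]
    print(len(selected_schools), "schools;", len(sample), "students in the cluster sample")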

5) Multi-stage sampling
Multi-stage sampling is like cluster sampling, but involves selecting a sample within each chosen cluster, rather than
including all units in the cluster. Thus, multi-stage sampling involves selecting a sample in at least two stages. In the
first stage, large groups or clusters are selected. These clusters are designed to contain more population units than are
required for the final sample.
In the second stage, population units are chosen from selected clusters to derive a final sample. If more than two stages
are used, the process of choosing population units within clusters continues until the final sample is achieved.
Example:- An example of multi-stage sampling is where, firstly, electoral sub-divisions (clusters) are sampled from a
city or state. Secondly, blocks of houses are selected from within the electoral sub-divisions and, thirdly, individual
houses are selected from within the selected blocks of houses.
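The three-stage example can be sketched as nested random selections. The numbers of sub-divisions, blocks and houses below are invented for the sake of illustration:

    import random

    random.seed(1)  # for a reproducible illustration

    # Hypothetical frame: 50 electoral sub-divisions, each with 30 blocks of 20 houses
    frame = {
        f"subdiv-{s}": {f"block-{b}": [f"house-{h}" for h in range(1, 21)]
                        for b in range(1, 31)}
        for s in range(1, 51)
    }

    # Stage 1: 5 sub-divisions; stage 2: 4 blocks per sub-division; stage 3: 5 houses per block
    final_sample = []
    for subdiv in random.sample(list(frame), 5):
        for block in random.sample(list(frame[subdiv]), 4):
            for house in random.sample(frame[subdiv][block], 5):
                final_sample.append((subdiv, block, house))

    print(len(final_sample), "houses selected")   # 5 x 4 x 5 = 100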



The advantages of multi-stage sampling are convenience, economy and efficiency. Multi-stage sampling does not
require a complete list of members in the target population, which greatly reduces sample preparation cost. The list of
members is required only for those clusters used in the final stage.
The main disadvantage of multi-stage sampling is the same as for cluster sampling: lower accuracy due to higher
sampling error.

B) Non-probability sampling techniques


The selection of subjects or units is left to the discretion of the researcher and methods are less structured and less
strict. Probability theory cannot be used to estimate sampling error.
Non-probability sampling methods are usually used for qualitative research when the purpose is exploratory or
interpretative.
We can divide non-probability sampling methods into two broad types: accidental or purposive. Most sampling
methods are purposive in nature because we usually approach the sampling problem with a specific plan in mind.
The most important distinctions among these types of sampling methods are the ones between the different types of
purposive sampling approaches.
General advantages
• Typicality of subjects is aimed for
• Permits exploration
General disadvantage
• Unrepresentative
Examples of non-probability sampling include:

1) Accidental, Haphazard or Convenience Sampling


Members of the population are chosen based on their relative ease of access. To sample friends, co-workers, or
shoppers at a single mall, are all examples of Convenience sampling.
Accidental, convenience, available samples are all names for non-purposive non-probability samples. In these, people
in the samples are those who simply agreed to take part, were around and available at the time. They are quick and
cheap, but their use is really limited to pilot or exploratory work; or, if one is used because there is no alternative
form of sampling available, caution must be exercised in the analysis of the results. Tempting though it may be, you
cannot assume the sample is representative.

2) Purposive Sampling
In purposive sampling the people/units/ elements/ in the sample are selected because they are regarded as having
similar characteristics to the people in the designated research population. So, for example, in research investigating
the management skills of owner/managers of small enterprises, the researcher might select some typical owner
managers to take part in the study. They will not be selected randomly. One advantage of this kind of sample is that
it is usually possible to get a targeted sample together very quickly - and hence cheaply.
All of the methods that follow can be considered subcategories of purposive sampling methods. We might sample
for specific groups or types of people as in modal instance, expert, or quota sampling. We might sample for



diversity as in heterogeneity sampling. Or, we might capitalize on informal social networks to identify specific
respondents who are hard to locate otherwise, as in snowball sampling. In all of these methods we know what we
want -- we are sampling with a purpose.

a) Modal Instance Sampling


In statistics, the mode is the most frequently occurring value in a distribution. In sampling, when we do a modal
instance sample, we are sampling the most frequent case, or the "typical" case. In a lot of informal public opinion polls,
for instance, they interview a "typical" voter. There are a number of problems with this sampling approach. First, how
do we know what the "typical" or "modal" case is? We could say that the modal voter is a person who is of average age,
educational level, and income in the population. But, it's not clear that using the averages of these is the fairest
(consider the skewed distribution of income, for instance). And, how do you know that those three variables -- age,
education, income -- are the only or even the most relevant for classifying the typical voter? What if religion or
ethnicity is an important discriminator? Clearly, modal instance sampling is only sensible for informal sampling
contexts.

b) Expert Sampling
Expert sampling involves the assembling of a sample of persons with known or demonstrable experience and expertise
in some area. Often, we convene such a sample under the auspices of a "panel of experts." There are actually two
reasons you might do expert sampling. First, because it would be the best way to elicit the views of persons who have
specific expertise. In this case, expert sampling is essentially just a specific sub case of purposive sampling. But the
other reason you might use expert sampling is to provide evidence for the validity of another sampling approach
you've chosen. For instance, let's say you do modal instance sampling and are concerned that the criteria you used for
defining the modal instance are subject to criticism. You might convene an expert panel consisting of persons with
acknowledged experience and insight into that field or topic and ask them to examine your modal definitions and
comment on their appropriateness and validity. The advantage of doing this is that you aren't out on your own trying
to defend your decisions -- you have some acknowledged experts to back you. The disadvantage is that even the
experts can be, and often are, wrong.

c) Quota Sampling
In quota sampling, you select people non-randomly according to some fixed quota. There are two types of quota
sampling: proportional and non proportional.
i) In proportional quota sampling you want to represent the major characteristics of the population by sampling a
proportional amount of each. For instance, if you know the population has 40% women and 60% men, and that you
want a total sample size of 100, you will continue sampling until you get those percentages and then you will stop.
So, if you've already got the 40 women for your sample, but not the sixty men, you will continue to sample men but
even if legitimate women respondents come along, you will not sample them because you have already "met your
quota." The problem here (as in much purposive sampling) is that you have to decide the specific characteristics on
which you will base the quota. Will it be by gender, age, education race, religion, etc.?
ii) Non-proportional quota sampling is a bit less restrictive. In this method, you specify the minimum number of
sampled units you want in each category. Here, you're not concerned with having numbers that match the



proportions in the population. Instead, you simply want to have enough to assure that you will be able to talk about
even small groups in the population. This method is the non-probabilistic analogue of stratified random sampling in
that it is typically used to assure that smaller groups are adequately represented in your sample.
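The proportional quota procedure from point (i) can be sketched as follows. Respondents are simulated as arriving at random and are accepted only while their category's quota remains unfilled; the simulation itself is our own illustration, not part of the text:

    import random

    quotas = {"female": 40, "male": 60}      # 40% women, 60% men in a sample of 100
    counts = {"female": 0, "male": 0}

    while sum(counts.values()) < sum(quotas.values()):
        person = random.choice(["female", "male"])   # the next person who comes along
        if counts[person] < quotas[person]:
            counts[person] += 1                      # accept: quota not yet met
        # otherwise the person is turned away, even if willing to respond

    print(counts)                            # {'female': 40, 'male': 60}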

d) Heterogeneity Sampling
We sample for heterogeneity when we want to include all opinions or views, and we aren't concerned about
representing these views proportionately. Another term for this is sampling for diversity. In many brainstorming or
nominal group processes (including concept mapping), we would use some form of heterogeneity sampling because
our primary interest is in getting a broad spectrum of ideas, not identifying the "average" or "modal instance" ones. In
effect, what we would like to be sampling is not people, but ideas. We imagine that there is a universe of all possible
ideas relevant to some topic and that we want to sample this population, not the population of people who have the
ideas. Clearly, in order to get all of the ideas, and especially the "outlier" or unusual ones, we have to include a broad
and diverse range of participants. Heterogeneity sampling is, in this sense, almost the opposite of modal instance
sampling.

e) Snowball sampling
In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study. You then
ask them to recommend others who they may know who also meet the criteria. Although this method would hardly
lead to representative samples, there are times when it may be the best method available. Snowball sampling is
especially useful when you are trying to reach populations that are inaccessible or hard to find. For instance, if you are
studying the homeless, you are not likely to be able to find good lists of homeless people within a specific geographical area; in such cases, starting from a few known individuals and asking each to refer others may be the only practical way of building a sample.



The following are the characteristics of the most popular sampling techniques:

Cluster Sampling
Definition: Units in the population can often be found in certain geographic groups or "clusters" (e.g. primary school children in Chandrapur). A random sample of clusters is taken, then all units within the selected clusters are examined.
Uses: Quick and easy; does not require complete population information; good for face-to-face surveys.
Limitations: Expensive if the clusters are large; greater risk of sampling error.

Convenience Sampling
Definition: Uses those who are willing to volunteer.
Uses: Readily available; a large amount of information can be gathered quickly.
Limitations: Cannot extrapolate from the sample to infer about the population; prone to volunteer bias.

Judgement Sampling
Definition: A deliberate choice of a sample - the opposite of random.
Uses: Good for providing illustrative examples or case studies.
Limitations: Very prone to bias; samples often small; cannot extrapolate from the sample.

Quota Sampling
Definition: The aim is to obtain a sample that is "representative" of the overall population; the population is divided ("stratified") by the most important variables (e.g. income, age, location) and a required quota sample is drawn from each stratum.
Uses: Quick and easy way of obtaining a sample.
Limitations: Not random, so some risk of bias remains; need to understand the population to be able to identify the basis of stratification.

Simple Random Sampling
Definition: Ensures that every member of the population has an equal chance of selection.
Uses: Simple to design and interpret; can calculate estimates of the population and the sampling error.
Limitations: Need a complete and accurate population listing; may not be practical if the sample requires lots of small visits all over the country.

Systematic Sampling
Definition: After randomly selecting a starting point from the population between 1 and "n", every nth unit is selected, where n equals the population size divided by the sample size.
Uses: Easier to extract the sample than via simple random sampling; ensures the sample is spread across the population.
Limitations: Can be costly and time-consuming if the sample is not conveniently located.



Measurement & scaling techniques
Introduction
The data consists of quantitative variables like price, income, sales etc., and qualitative variables like knowledge,
performance, character etc. The qualitative information must be converted into numerical form for further analysis.
This is possible through measurement and scaling techniques. A common feature of survey based research is to have
respondents' feelings, attitudes, opinions, etc. expressed in some measurable form. For example, a bank manager may be interested in knowing the opinion of the customers about the services provided by the bank. Similarly, a fast food company with a network of outlets in a city may be interested in assessing the quality of the food and service it provides. As a
researcher you may be interested in knowing the attitude of the people towards the government announcement of a
metro rail in Nagpur. In this unit we will discuss the issues related to measurement, different levels of measurement
scales, various types of scaling techniques and also selection of an appropriate scaling technique.

Measurement and scaling


Before we proceed further it will be worthwhile to understand the following two terms: (a) Measurement, and (b)
Scaling.

a) Measurement: Measurement is the process of observing and recording the observations that are collected as part
of research. The recording of the observations involves assigning numbers or other symbols to characteristics of objects according to certain prescribed rules. The respondents' characteristics may be feelings, attitudes, opinions, etc. For example, you may assign '1' for Male and '2' for Female respondents. In response to a question on whether he/she is using the ATM provided by a particular bank branch, the respondent may say 'yes' or 'no'. You may wish to assign the number '1' for the response yes and '2' for the response no. We assign numbers to these characteristics for two reasons. First,
the numbers facilitate further statistical analysis of data obtained. Second, numbers facilitate the communication of
measurement rules and results. The most important aspect of measurement is the specification of rules for assigning
numbers to characteristics. The rules for assigning numbers should be standardised and applied uniformly. This must
not change over time or objects.

b) Scaling: Scaling is the assignment of objects to numbers or semantics according to a rule. In scaling, the objects are
text statements, usually statements of attitude, opinion, or feeling. For example, consider a scale locating customers of a
bank according to the characteristic "agreement that the quality of service provided by the branch is satisfactory". Each customer interviewed may respond with a semantic like 'strongly agree', 'somewhat agree', 'somewhat disagree', or 'strongly disagree'. We may even assign each of the responses a number. For example, we may assign 'strongly agree' as 1, 'somewhat agree' as 2, 'somewhat disagree' as 3, and 'strongly disagree' as 4. Therefore, each of the respondents may be assigned 1, 2, 3 or 4.
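The assignment rules described above amount to a fixed coding scheme applied uniformly to every questionnaire. A small sketch follows; the dictionaries and the function are our own illustration, built from the two examples just given:

    # Fixed coding rules, applied uniformly to every questionnaire
    GENDER_CODES = {"male": 1, "female": 2}
    AGREEMENT_CODES = {
        "strongly agree": 1,
        "somewhat agree": 2,
        "somewhat disagree": 3,
        "strongly disagree": 4,
    }

    def code_response(raw_answer, codebook):
        """Convert a verbal response into its numeric code; reject anything off-scale."""
        key = raw_answer.strip().lower()
        if key not in codebook:
            raise ValueError(f"Response {raw_answer!r} is not in the codebook")
        return codebook[key]

    print(code_response("Female", GENDER_CODES))                 # 2
    print(code_response("Somewhat disagree", AGREEMENT_CODES))   # 3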

Issues in measurement
When a researcher is interested in measuring the attitudes, feelings or opinions of respondents he/she should be clear
about the following:
a) What is to be measured?
b) Who is to be measured?
c) The choices available in data collection techniques
The first issue that the researcher must consider is ‗what is to be measured‘?



The definition of the problem, based on our judgments or prior research indicates the concept to be investigated. For
example, we may be interested in measuring the performance of a fast food company. We may require a precise
definition of the concept on how it will be measured. Also, there may be more than one way that we can measure a
particular concept. For example, in measuring the performance of a fast food company we may use a number of
measures to indicate the performance of the company. We may use sales volume in terms of value of sales or number
of customers or spread of network of the company as measures of performance. Further, the measurement of concepts
requires assigning numbers to the attitudes, feelings or opinions. The key question here is: on what basis do we assign the numbers to the concept? For example, if the task is to measure the agreement of customers of a fast food company with the opinion that the food served by the company is tasty, we create five categories: (1) strongly agree, (2) agree, (3) undecided, (4) disagree, (5) strongly disagree. Then we may measure the response of each respondent. If a respondent states 'disagree' with the statement that 'the food is tasty', the measurement is 4.
The second important issue in measurement is who is to be measured, that is, who are the people we are
interested in. The characteristics of the people such as age, sex, education, income, location, profession, etc. may have a
bearing on the choice of measurement. The measurement procedure must be designed keeping in mind the
characteristics of the respondents under consideration.

Types of measurement scales


We know that the level of measurement is a scale by which a variable is measured. For 50 years, with few detractors, science has used the Stevens (1951) typology of measurement levels (scales). There are three things which you need to remember about this typology: anything that can be measured falls into one of the four types; the higher the level of measurement, the greater the precision of measurement; and every level up contains all the properties of the previous level. The four levels of measurement, from lowest to highest, are as follows:

1. Nominal
2. Ordinal
3. Interval
4. Ratio

1) Nominal scales
This, the crudest of measurement scales, classifies individuals, companies, products, brands or other entities into
categories where no order is implied. Indeed it is often referred to as a categorical scale. It is a system of classification
and does not place the entity along a continuum. It involves a simple count of the frequency of the cases assigned to
the various categories, and if desired numbers can be nominally assigned to label each category as in the example
below:
An example of a nominal scale

Which of the following food items do you tend to buy at least once per month? (Please tick)

Okra Palm Oil Milled Rice

Peppers Prawns Pasteurised milk

The numbers have no arithmetic properties and act only as labels. The only measure of average which can be used is
the mode because this is simply a set of frequency counts. Hypothesis tests can be carried out on data collected in the



nominal form. The most likely would be the Chi-square test. However, it should be noted that the Chi-square is a test
to determine whether two or more variables are associated and the strength of that relationship. It can tell nothing
about the form of that relationship, where it exists, i.e. it is not capable of establishing cause and effect.
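As an illustration of the kind of test mentioned above, the sketch below applies a Chi-square test of association to a hypothetical cross-tabulation of gender against purchase of a food item. It relies on the scipy library, which is our own choice and is not referred to in the text:

    from scipy.stats import chi2_contingency

    # Hypothetical counts: rows = gender, columns = bought okra / did not buy okra
    observed = [
        [45, 55],   # male respondents
        [70, 30],   # female respondents
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
    # A small p-value suggests the two variables are associated, but the test
    # says nothing about the form of the relationship or about cause and effect.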

2) Ordinal scales
Ordinal scales involve the ranking of individuals, attitudes or items along the continuum of the characteristic being
scaled. For example, if a researcher asked farmers to rank 5 brands of pesticide in order of preference he/she might
obtain responses like those in table below.
An example of an ordinal scale used to determine farmers' preferences among 5 brands of pesticide.
Order of preference Brand

1 Rambo

2 Harpic

3 DDT

4 Bagyone

5 Rat kill

From such a table the researcher knows the order of preference but nothing about how much more one brand is preferred to another; that is, there is no information about the interval between any two brands. All of the
information a nominal scale would have given is available from an ordinal scale. In addition, positional statistics such
as the median, quartile and percentile can be determined.
It is possible to test for order correlation with ranked data. The two main methods are Spearman's Ranked Correlation
Coefficient and Kendall's Coefficient of Concordance. Using either procedure one can, for example, ascertain the
degree to which two or more survey respondents agree in their ranking of a set of items. Consider again the ranking of
pesticides example in the table given above. The researcher might wish to measure similarities and differences in the rankings
of pesticide brands according to whether the respondents' farm enterprises were classified as "arable" or "mixed" (a
combination of crops and livestock). The resultant coefficient takes a value in the range 0 to 1. A zero would mean that
there was no agreement between the two groups, and 1 would indicate total agreement. It is more likely that an answer
somewhere between these two extremes would be found.
The only other permissible hypothesis testing procedures are the runs test and the sign test. The runs test (also known as the Wald-Wolfowitz test) is used to determine whether a sequence of binomial data - meaning it can take only one of two possible values, e.g. African/non-African, yes/no, male/female - is random or contains systematic 'runs' of one or
other value. Sign tests are employed when the objective is to determine whether there is a significant difference
between matched pairs of data. The sign test tells the analyst if the number of positive differences in ranking is
approximately equal to the number of negative rankings, in which case the distribution of rankings is random, i.e.
apparent differences are not significant. The test takes into account only the direction of differences and ignores their
magnitude and hence it is compatible with ordinal data.
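As a brief illustration, Spearman's coefficient can be computed directly from the textbook formula rs = 1 - 6Σd²/(n(n² - 1)). The two farmers' rankings of the five pesticide brands below are hypothetical:

    def spearman_rho(rank_x, rank_y):
        """Spearman's rank correlation: r_s = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
        n = len(rank_x)
        d_squared = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
        return 1 - 6 * d_squared / (n * (n ** 2 - 1))

    # Hypothetical rankings of Rambo, Harpic, DDT, Bagyone and Rat kill (1 = most preferred)
    arable_farmer = [1, 2, 3, 4, 5]
    mixed_farmer  = [2, 1, 3, 5, 4]

    print(round(spearman_rho(arable_farmer, mixed_farmer), 2))   # 0.8 - strong agreement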



3) Interval scales
It is only with interval scaled data that researchers can justify the use of the arithmetic mean as the measure of
average. The interval or cardinal scale has equal units of measurement, thus making it possible to interpret not only the
order of scale scores but also the distance between them. However, it must be recognised that the zero point on an
interval scale is arbitrary and is not a true zero. This of course has implications for the type of data manipulation and
analysis we can carry out on data collected in this form. It is possible to add or subtract a constant to all of the scale
values without affecting the form of the scale but one cannot multiply or divide the values. It can be said that two
respondents with scale positions 1 and 2 are as far apart as two respondents with scale positions 4 and 5, but not that a
person with score 10 feels twice as strongly as one with score 5. Temperature is interval scaled, being measured either
in Centigrade or Fahrenheit. We cannot speak of 50°F being twice as hot as 25°F since the corresponding temperatures
on the centigrade scale, 10°C and -3.9°C, are not in the ratio 2:1.
Interval scales may be either numeric or semantic. Study the examples below.
Examples of interval scales in numeric and semantic formats:
Please indicate your views on Balkan Olives by scoring them on a scale of 5 down to 1 (i.e. 5 = Excellent; 1 = Poor) on
each of the criteria listed

Balkan Olives are: Circle the appropriate score on each line

Succulence 5 4 3 2 1

Fresh tasting 5 4 3 2 1

Free of skin blemish 5 4 3 2 1

Good value 5 4 3 2 1

Attractively packaged 5 4 3 2 1

(a)

Please indicate your views on Balkan Olives by ticking the appropriate responses below:

Excellent Very Good Good Fair Poor

Succulent

Freshness

Freedom from skin blemish

Value for money

Attractiveness of packaging

(b)
Most of the common statistical methods of analysis require only interval scales in order that they might be used. These
are not recounted here because they are so common and can be found in virtually all basic texts on statistics.

4) Ratio scales
The highest level of measurement is a ratio scale. This has the properties of an interval scale together with a fixed
origin or zero point. Examples of variables which are ratio scaled include weights, lengths and times. Ratio scales
permit the researcher to compare both differences in scores and the relative magnitude of scores. For instance the



difference between 5 and 10 minutes is the same as that between 10 and 15 minutes, and 10 minutes is twice as long as
5 minutes.
Given that sociological and management research seldom aspires beyond the interval level of measurement, it is not
proposed that particular attention be given to this level of analysis. Suffice it to say that virtually all statistical
operations can be performed on ratio scales.

Measurement error
In principle, every operation of a survey is a potential source of measurement error. Some examples of causes of
measurement error are non-response, badly designed questionnaires, respondent bias and processing errors.
Measurement errors can be grouped into two main types: systematic errors and random errors. Systematic error
(called bias) makes survey results unrepresentative of the target population by distorting the survey estimates in one
direction. For example, if the target population is the entire population in a country but the sampling frame is just the
urban population, then the survey results will not be representative of the target population due to systematic bias in
the sampling frame. On the other hand, random error can distort the results on any given occasion but tends to
balance out on average. Some of the types of measurement error are outlined below:
1. Failure to identify the target population
Failure to identify the target population can arise from the use of an inadequate sampling frame, imprecise definition
of concepts, and poor coverage rules. Problems can also arise if the target population and survey population do not
match very well. Failure to identify and adequately capture the target population can be a significant problem for
informal sector surveys. While establishment and population censuses allow for the identification of the target
population, it is important to ensure that the sample is selected as soon as possible after the census is taken so as to
improve the coverage of the survey population.
2. Non-response bias
Non-respondents may differ from respondents in relation to the attributes/variables being measured. Non-response
can be total (where none of the questions were answered) or partial (where some questions may be unanswered owing
to memory problems, inability to answer, etc.). To improve response rates, care should be taken in training
interviewers, assuring the respondent of confidentiality, motivating him or her to cooperate, and revisiting or calling
back if the respondent has been previously unavailable. 'Call backs' are successful in reducing non-response but can
be expensive. It is also important to ensure that the person who has the information required can be contacted by the
interviewer; that the data required are available and that an adequate follow up strategy is in place.
3. Questionnaire design
The content and wording of the questionnaire may be misleading and the layout of the questionnaire may make it
difficult to accurately record responses. Questions should not be misleading or ambiguous, and should be directly
relevant to the objectives of the survey. In order to reduce measurement error relating to questionnaire design, it is
important to ensure that the questionnaire:
• can be completed in a reasonable amount of time;
• can be properly administered by the interviewer;
• uses language that is readily understood by both the interviewer and the respondent; and
• can be easily processed.



In designing questionnaires and training interviewers in the case of informal sector survey where there is a strong
potential for inaccurate information being provided by respondents, consideration should be given to the use of
random question sequencing, derived or imputed results, and the use of partial questionnaires. The random question
sequencing approach involves the interviewer asking the survey respondent a number of questions about the relevant
data items (e.g. input costs and quantities, output prices and output units sold) in a random order. The interviewer
would use a deck of questionnaire cards. The cards would be shuffled and then the interviewer would ask a series of
questions out of sequence, record each answer and then reassemble the questions in the right sequence to get the final
response (e.g. profit or value added information) as a derived result. Another approach to consider, where particular responding businesses form a reasonably homogeneous group operating with similar cost structures and market conditions, is aggregating results from sample measures of inputs and outputs. This approach involves using separate
but representative random samples of businesses to collect information about different data items. The data are then
brought together to produce imputed aggregate level estimates.
4. Interviewer bias
The way a respondent answers questions can be influenced by the interviewer's behaviour, choice of clothes, sex, accent and
prompting when a respondent does not understand a question. A bias may also be introduced if interviewers receive
poor training as this may have an effect on the way they prompt for, or record, the answers. The best way to minimise
interviewer bias is through effective training and by ensuring manageable workloads.
Training can be provided in the form of manuals, formal training courses on questionnaire content and interviewing
techniques, and on-the-job training in the field. Topics that should be covered in interviewer training include - the
purpose of the survey; the scope and coverage of the survey; a general outline of the survey design and sampling
approach being used; the questionnaire; interviewing techniques and recording answers; ways to avoid or reduce non-
response; how best to maintain respondent co-operation; field practice; quality assurance and editing of data; planning
workloads; and administrative arrangements.
5. Respondent bias
Refusals and inability to answer questions, memory biases and inaccurate information will lead to a bias in the
estimates. An increasing level of respondent burden (due to the number of times a person is included in surveys) can
also make it difficult to get the potential respondent to participate in a survey. When designing a survey it should be
remembered that uppermost in the respondent's mind will be protecting their own personal privacy, integrity and
interests. Also, the way the respondent interprets the questionnaire and the wording of the answer the respondent
gives can cause inaccuracies to enter the survey data. Careful questionnaire design, effective training of interviewers
and adequate survey testing can overcome these problems to some extent.
6. Processing errors
There are four stages in the processing of the data where errors may occur: data grooming, data capture, editing and
estimation. Data grooming involves preliminary checking before entering the data onto the processing system in the
capture stage. Inadequate checking and quality management at this stage can introduce data loss (where data are not
entered into the system) and data duplication (where the same data are entered into the system more than once).
Inappropriate edit checks and inaccurate weights in the estimation procedure can also introduce errors to the data at
the editing and estimation stage. To minimise these errors, processing staff should be given adequate training and
realistic workloads. Training material for processing staff should cover similar topics to those for interview staff,
however, with greater emphasis on editing techniques and quality assurance practices.



7. Misinterpretation of results
This can occur if the researcher is not aware of certain factors that influence the characteristics under investigation. A
researcher or any other user not involved in the data collection process may be unaware of trends built into the data
due to the nature of the collection (e.g. interviews always conducted at a particular time on weekdays could result in only particular types of householders being interviewed). Researchers should carefully investigate the
methodology used in any given survey.
8. Non-response
Non-response results when data are not collected from respondents. The proportion of these non-respondents in the
sample is called the non-response rate. It is important to make all reasonable efforts to maximise the response rate as
non-respondents may have differing characteristics to respondents. Significant non-response can bias the survey
results. When a respondent replies to the survey answering some but not all questions then it is called partial non-
response. Partial non-response can arise due to memory problems, inadequate information or an inability to answer a
particular question. The respondent may also refuse to answer questions if they find questions particularly sensitive;
or have been asked too many questions (the questionnaire is too long). Total non-response can arise if a respondent
cannot be contacted (the frame contains inaccurate or out-of-date contact information or the respondent is not at
home), is unable to respond (may be due to language difficulties or illness) or refuses to answer any questions.

Scaling
In research we quite often face measurement problem (since we want a valid measurement but may not obtain it),
especially when the concepts to be measured are complex and abstract and we do not possess the standardised
measurement tools. Alternatively, we can say that while measuring attitudes and opinions, we face the problem of
their valid measurement. Similar problem may be faced by a researcher, of course in a lesser degree, while measuring
physical or institutional concepts. As such we should study some procedures which may enable us to measure
abstract concepts more accurately. This brings us to the study of scaling techniques.

Meaning of Scaling
Scaling describes the procedures of assigning numbers to various degrees of opinion, attitude and other concepts. This
can be done in two ways viz., (i) making a judgement about some characteristic of an individual and then placing him
directly on a scale that has been defined in terms of that characteristic and (ii) constructing questionnaires in such a
way that the score of individual‘s responses assigns him a place on a scale. It may be stated here that a scale is a
continuum, consisting of the highest point (in terms of some characteristic e.g., preference, favourableness, etc.) and
the lowest point along with several intermediate points between these two extreme points. These scale-point positions
are so related to each other that when the first point happens to be the highest point, the second point indicates a
higher degree in terms of a given characteristic as compared to the third point and the third point indicates a higher
degree as compared to the fourth and so on. Numbers for measuring the distinctions of degree in the
attitudes/opinions are, thus, assigned to individuals corresponding to their scale-positions. All this is better
understood when we talk about scaling technique(s). Hence the term 'scaling' is applied to the procedures for attempting to determine quantitative measures of subjective abstract concepts. Scaling has been defined as a "procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question."



Scaling is the measurement of a variable in such a way that it can be expressed on a continuum. Rating your
preference for a product from 1 to 10 is an example of a scale.
With comparative scaling, the items are directly compared with each other (example: Do you prefer Pepsi or Coke?).
In non-comparative scaling each item is scaled independently of the others (example: How do you feel about Coke?).

Scale construction decisions


• What level of data is involved (nominal, ordinal, interval, or ratio)?
The type of information collected can influence scale construction. Different types of information are measured in
different ways.
1. Some data is measured at the nominal level. That is, any numbers used are mere labels: they express no mathematical properties. Examples are SKU inventory codes and UPC bar codes.
2. Some data is measured at the ordinal level. Numbers indicate the relative position of items, but not the
magnitude of difference. An example is a preference ranking.
3. Some data is measured at the interval level. Numbers indicate the magnitude of difference between items,
but there is no absolute zero point. Examples are attitude scales and opinion scales.
4. Some data is measured at the ratio level. Numbers indicate magnitude of difference and there is a fixed zero
point. Ratios can be calculated. Examples include: age, income, price, costs, sales revenue, sales volume, and
market share.
• What will the results be used for?
• Should you use a scale, index, or typology?
• What types of statistical analysis would be useful?
• Should you use a comparative scale or a non-comparative scale?
• How many scale divisions or categories to use (1 to 10; 1 to 7; -3 to +3)?
• Odd or even number of divisions - odd gives a neutral center value; even forces respondents to take a non-neutral position
• The nature and descriptiveness of the scale labels?
• The physical form or layout of the scale? (Graphic, simple linear, vertical, horizontal)
• Forced versus optional response?

Classification of scales
The number assigning procedures or the scaling procedures may be broadly classified on one or more of the following
bases: (a) subject orientation; (b) response form; (c) degree of subjectivity; (d) scale properties; (e) number of
dimensions and (f) scale construction techniques.
We take up each of these separately.
(a) Subject orientation: Under it a scale may be designed to measure characteristics of the respondent who completes it
or to judge the stimulus object which is presented to the respondent. In respect of the former, we presume that the
stimuli presented are sufficiently homogeneous so that the between stimuli variation is small as compared to the
variation among respondents. In the latter approach, we ask the respondent to judge some specific object in terms of
one or more dimensions and we presume that the between-respondent variation will be small as compared to the
variation among the different stimuli presented to respondents for judging.



(b) Response form: Under this we may classify the scales as categorical and comparative. Categorical scales are also
known as rating scales. These scales are used when a respondent scores some object without direct reference to other
objects. Under comparative scales, which are also known as ranking scales, the respondent is asked to compare two or
more objects. In this sense the respondent may state that one object is superior to the other or that three models of pen
rank in order 1, 2 and 3. The essence of ranking is, in fact, a relative comparison of a certain property of two or more
objects.
(c) Degree of subjectivity: With this basis the scale data may be based on whether we measure subjective personal
preferences or simply make non-preference judgements. In the former case, the respondent is asked to choose which
person he favours or which solution he would like to see employed, whereas in the latter case he is simply asked to
judge which person is more effective in some aspect or which solution will take fewer resources without reflecting any
personal preference.
(d) Scale properties: Considering scale properties, one may classify the scales as nominal, ordinal, interval and ratio
scales. Nominal scales merely classify without indicating order, distance or unique origin. Ordinal scales indicate
magnitude relationships of 'more than' or 'less than', but indicate no distance or unique origin. Interval scales have
both order and distance values, but no unique origin. Ratio scales possess all these features.
(e) Number of dimensions: In respect of this basis, scales can be classified as 'unidimensional' and 'multidimensional' scales. Under the former we measure only one attribute of the respondent or object, whereas multidimensional scaling recognizes that an object might be described better by using the concept of an attribute space of 'n' dimensions, rather than a single-dimension continuum.
(f) Scale construction techniques: Following are the five main techniques by which scales can be developed.
(i) Arbitrary approach: It is an approach where a scale is developed on an ad hoc basis. This is the most widely used
approach. It is presumed that such scales measure the concepts for which they have been designed, although there is
little evidence to support such an assumption.
(ii) Consensus approach: Here a panel of judges evaluate the items chosen for inclusion in the instrument in terms of
whether they are relevant to the topic area and unambiguous in implication.
(iii) Item analysis approach: Under it a number of individual items are developed into a test which is given to a group
of respondents. After administering the test, the total scores are calculated for everyone. Individual items are then
analysed to determine which items discriminate between persons or objects with high total scores and those with low
scores.
(iv) Cumulative scales are chosen on the basis of their conforming to some ranking of items with ascending and
descending discriminating power. For instance, in such a scale the endorsement of an item representing an extreme
position should also result in the endorsement of all items indicating a less extreme position.
(v) Factor scales may be constructed on the basis of inter-correlations of items which indicate that a common factor
accounts for the relationship between items. This relationship is typically measured through factor analysis method.

Scale construction techniques


The various types of scales used in research fall into two broad categories: comparative and non-comparative. In
comparative scaling, the respondent is asked to compare one brand or product against another. With non-comparative
scaling respondents need only evaluate a single product or brand. Their evaluation is independent of the other product
and/or brands which the researcher is studying.



Non-comparative scaling is frequently referred to as monadic scaling and this is the more widely used type of scale in
commercial research studies.

I) Comparative scales
a) Paired comparison: It is sometimes the case that researchers wish to find out which are the most important factors
in determining the demand for a product. Conversely they may wish to know which are the most important factors
acting to prevent the widespread adoption of a product. Take, for example, the very poor farmer response to the first
design of an animal-drawn mould board plough. A combination of exploratory research and shrewd observation
suggested that the following factors played a role in the shaping of the attitudes of those farmers who feel negatively
towards the design:
• Does not ridge
• Does not work for inter-cropping
• Far too expensive
• New technology too risky
• Too difficult to carry.
Suppose the organisation responsible wants to know which factor is foremost in the farmer's mind. It may well be the case that, if the factors most important to the farmer can be dealt with, the others, being of a relatively minor nature, will cease to prevent widespread adoption. The alternatives are to abandon the product's re-development or to completely re-design it, which is not only expensive and time-consuming, but may well be subject to a new set of objections.
The process of rank ordering the objections from most to least important is best approached through the questioning
technique known as 'paired comparison'. Each of the objections is paired by the researcher so that with 5 factors, as in
this example, there are 10 pairs-

In 'paired comparisons' every factor has to be paired with every other factor in turn. However, only one pair is ever put
to the farmer at any one time.
The question might be put as follows:
Which of the following was the more important in making you decide not to buy the plough?

In most cases the question, and the alternatives, would be put to the farmer verbally. He/she then indicates which of
the two was the more important and the researcher ticks the box on his questionnaire. The question is repeated with a
second set of factors and the appropriate box ticked again. This process continues until all possible combinations are
exhausted, in this case 10 pairs. It is good practice to mix the pairs of factors so that there is no systematic bias. The
researcher should try to ensure that any particular factor is sometimes the first of the pair to be mentioned and
sometimes the second. The researcher would never, for example, take the first factor (on this occasion 'Does not ridge')
and systematically compare it to each of the others in succession. That is likely to cause systematic bias.
Below labels have been given to the factors so that the worked example will be easier to understand. The letters A - E
have been allocated as follows:



A = Does not ridge

B= Far too expensive

C = New technology too risky

D = Does not work for inter-cropping

E= Too difficult to carry.


The data is then arranged into a matrix. Assume that 200 farmers have been interviewed and their responses are
arranged in the grid below. Further assume that the matrix is so arranged that we read from top to side. This means,
for example, that 164 out of 200 farmers said the fact that the plough was too expensive was a greater deterrent than
the fact that it was not capable of ridging. Similarly, 174 farmers said that the plough's inability to inter-crop was more
important than the inability to ridge when deciding not to buy the plough.

A preference matrix
A B C D E

A 100 164 120 174 180

B 36 100 160 176 166

C 80 40 100 168 124

D 26 24 32 100 102

E 20 34 76 98 100

If the grid is carefully read, it can be seen that the rank order of the factors is -
Most important E Too difficult to carry

D Does not inter crop

C New technology/high risk

B Too expensive

Least important A Does not ridge.

It can be seen that it is more important for designers to concentrate on improving transportability and, if possible, to
give it an inter-cropping capability rather than focusing on its ridging capabilities (remember that the example is
entirely hypothetical).
One major advantage to this type of questioning is that whilst it is possible to obtain a measure of the order of
importance of five or more factors from the respondent, he is never asked to think about more than two factors at any
one time. This is especially useful when dealing with illiterate farmers. Having said that, the researcher has to be
careful not to present too many pairs of factors to the farmer during the interview. If he does, he will find that the
farmer will quickly get tired and/or bored. It is as well to remember the formula of n(n - 1)/2. For ten factors, brands
or product attributes this would give 45 pairs. Clearly the farmer should not be asked to subject himself to having the
same question put to him 45 times. For practical purposes, six factors is possibly the limit, giving 15 pairs.
It should be clear from the procedures described in these notes that the paired comparison scale gives ordinal data.
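The ordering in the worked example can be recovered mechanically from the preference matrix by counting, for each factor, how many of its pairwise contests it wins (i.e. where more than half of the 200 farmers rated it the more important). The tallying below is our own sketch of that bookkeeping, not a procedure prescribed by the text:

    from itertools import combinations

    factors = ["A", "B", "C", "D", "E"]
    # prefer[(row, col)] = number of farmers (out of 200) saying 'col' mattered more than 'row'
    prefer = {
        ("A", "B"): 164, ("A", "C"): 120, ("A", "D"): 174, ("A", "E"): 180,
        ("B", "C"): 160, ("B", "D"): 176, ("B", "E"): 166,
        ("C", "D"): 168, ("C", "E"): 124,
        ("D", "E"): 102,
    }

    wins = {f: 0 for f in factors}
    for row, col in combinations(factors, 2):   # the 10 pairs, n(n - 1)/2
        if prefer[(row, col)] > 100:            # a majority rated 'col' more important
            wins[col] += 1
        else:
            wins[row] += 1

    ranking = sorted(factors, key=lambda f: wins[f], reverse=True)
    print(ranking)   # ['E', 'D', 'C', 'B', 'A'] - the order given in the text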



b) Rupee Metric Comparisons: This type of scale is an extension of the paired comparison method in that it requires
respondents to indicate both their preference and how much they are willing to pay for their preference. This scaling
technique gives the researcher an interval-scaled measurement. An example is given below:

An example of a Rupee metric scale


Which of the following types of fish do you prefer?          How much more (in `) would you be prepared to pay for your preferred fish?

Fresh Fresh (gutted) ` 0.70

Fresh (gutted) Smoked 0.50

Frozen Smoked 0.60

Frozen Fresh 0.70

Smoked Fresh 0.20

Frozen(gutted) Frozen

From the data above the preferences shown below can be computed as follows:

Fresh fish: 0.70 + 0.70 + 0.20 =1.60

Smoked fish: 0.60 + (-0.20) + (-0.50) = (-0.10)

Fresh fish(gutted): (-0.70) + 0.30 + 0.50 =0.10

Frozen fish: (-0.60) + (-0.70) + (-0.30) =(-1.60)
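These scores can be reproduced by giving the preferred fish in each pair a positive premium and the other fish a matching negative one. Because the tick marks indicating which fish was preferred do not reproduce well in this format, the pair directions below are inferred from the totals shown above, so treat this as an illustrative sketch only:

    from collections import defaultdict

    # (preferred fish, other fish, extra rupees the respondent would pay for the preferred one)
    pairs = [
        ("Fresh", "Fresh (gutted)", 0.70),
        ("Fresh (gutted)", "Smoked", 0.50),
        ("Smoked", "Frozen", 0.60),
        ("Fresh", "Frozen", 0.70),
        ("Fresh", "Smoked", 0.20),
        ("Fresh (gutted)", "Frozen", 0.30),
    ]

    scores = defaultdict(float)
    for preferred, other, premium in pairs:
        scores[preferred] += premium   # the preferred item gains the premium
        scores[other] -= premium       # the other item records it as a negative amount

    for fish, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{fish:15s} {score:+.2f}")
    # Fresh +1.60, Fresh (gutted) +0.10, Smoked -0.10, Frozen -1.60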

c) The Unity-sum-gain technique: A common problem with launching new products is one of reaching a decision as to
what options, and how many options one offers. Whilst a company may be anxious to meet the needs of as many
market segments as possible, it has to ensure that the segment is large enough to enable him to make a profit. It is
always easier to add products to the product line but much more difficult to decide which models should be deleted.
One technique for evaluating the options which are likely to prove successful is the unity-sum-gain approach.
The procedure is to begin with a list of features which might possibly be offered as 'options' on the product, and
alongside each you list its retail cost. A third column is constructed and this forms an index of the relative prices of
each of the items. The table below will help clarify the procedure. For the purposes of this example the basic reaper is
priced at ` 20,000 and some possible 'extras' are listed along with their prices.
The total value of these hypothetical 'extras' is ` 7,460 but the researcher tells the farmer he has an equally
hypothetical ` 3,950 or similar sum. The important thing is that he should have considerably less hypothetical money
to spend than the total value of the alternative product features. In this way the farmer is encouraged to reveal his
preferences by allowing researchers to observe how he trades one additional benefit off against another. For example,
would he prefer a side rake attachment on a 3 metre head rather than have a transporters trolley on either a standard
or 2.5m wide head? The farmer has to be told that any unspent money cannot be retained by him so he should seek the
best value-for-money he can get.
In cases where the researcher believes that mentioning specific prices might introduce some form of bias into the results, the index can be used instead. This is constructed by taking the price of each item over the total of ` 7,460 and multiplying by 100. Survey respondents might then be given a maximum of 60 points and, as before, asked how they would spend these 60 points. In this crude example the index numbers are not too easy to work with for most respondents, so one would round them as has been done in the adjusted column. It is the relative and not the absolute value of the items which is important, so the precision of the rounding need not overly concern us.

The unity-sum-gain technique

Item                                          Additional Cost (`)   Index   Adjusted Index
2.5m wide head rather than standard 2m        2,000                 27      30
Self-lubricating chain rather than belt       200                   47      50
Side rake attachment                          350                   5       10
Polymer heads rather than steel               250                   3       5
Double rather than single edged cutters       210                   2.5     5
Transporter trolley for reaper attachment     650                   9       10
Automatic levelling of table                  300                   4       5


The unity-sum-gain technique is useful for determining which product features are more important to farmers. The
design of the final market version of the product can then reflect the farmers' needs and preferences. Practitioners treat
data gathered by this method as ordinal.
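The arithmetic behind the index column is simply each item's price expressed as a percentage of the total value of the 'extras', rounded for convenience. The Python sketch below (prices, total and rounding base are illustrative assumptions) shows how such an index could be computed; it demonstrates the calculation method rather than reproducing the exact figures in the table above.

```python
# Illustrative 'extras' and their additional retail prices (hypothetical figures).
extras = {
    "2.5m head rather than standard 2m": 2000,
    "Self-lubricating chain rather than belt": 200,
    "Side rake attachment": 350,
    "Polymer heads rather than steel": 250,
    "Double rather than single edged cutters": 210,
    "Transporter trolley for reaper attachment": 650,
    "Automatic levelling of table": 300,
}

total = sum(extras.values())            # total value of all the listed 'extras'

def round_to(value, base=5):
    """Round an index value to the nearest 'base' so it is easy to work with."""
    return base * round(value / base)

for item, price in extras.items():
    index = price / total * 100         # relative price as a share of the total
    print(f"{item:45s} index {index:5.1f}  adjusted {round_to(index):3d}")

# Respondents would then be given a points budget (say 60) that is considerably
# smaller than the 100-point total and asked how they would 'spend' it.
```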

II) Non-comparative scales


a) Continuous rating scales: The respondents are asked to give a rating by placing a mark at the appropriate position on a continuous line. The scale can be written on a card and shown to the respondent during the interview. Two versions of a continuous rating scale are depicted in the figure. When version B is used, the respondent's score is determined either by dividing the line into as many categories as desired and assigning the respondent a score based on the category into which his/her mark falls, or by measuring the distance, in millimetres or inches, from either end of the scale. Whichever of these forms of the continuous scale is used, the results are normally analysed as interval scaled.
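As a small illustration of the scoring described for version B, the sketch below (the line length, category count and example mark are all assumed values) converts a mark measured in millimetres from the left-hand end of the line into a category score.

```python
def continuous_scale_score(mark_mm, line_length_mm=100, n_categories=10):
    """Convert a mark on a continuous rating line into a category score.

    mark_mm        -- distance of the respondent's mark from the left end (mm)
    line_length_mm -- total length of the printed line (assumed 100 mm here)
    n_categories   -- number of equal categories the line is divided into
    """
    mark_mm = max(0, min(mark_mm, line_length_mm))      # keep the mark on the line
    category_width = line_length_mm / n_categories
    score = int(mark_mm // category_width) + 1          # categories numbered 1..n
    return min(score, n_categories)

print(continuous_scale_score(73))   # a mark 73 mm along a 100 mm line -> category 8
```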

b) Line marking scale: The line marked scale is typically used to measure perceived similarity or differences between products, brands or other objects. Technically, such a scale is a form of what is termed a semantic differential scale, since each end of the scale is labelled with a word/phrase (or semantic) that is opposite in meaning to the other. The figure provides an illustrative example of such a scale, based on the instruction: consider the products below which can be used when frying food; in the case of each pair, indicate how similar or different they are in the flavour which they impart to the food.
For some types of respondent, the line scale is an easier format because they do not feel that discrete numbers (e.g. 5, 4, 3, 2, 1) best reflect their attitudes/feelings. The line marking scale is a continuous scale.

c) Itemised rating scales: With an itemised scale, respondents are provided with a scale having numbers and/or brief descriptions associated with each category and are asked to select one of the limited number of categories, ordered in terms of scale position, that best describes the product, brand, company or product attribute being studied. Examples of the itemised rating scale are illustrated in the figure. Itemised rating scales can take a variety of innovative forms, as demonstrated by the two graphic versions illustrated in the figure. Whichever form of itemised scale is applied, researchers usually treat the data as interval level.

d) Semantic scales: This type of scale makes extensive use of words rather than numbers. Respondents describe their feelings about the products or brands on scales with semantic labels. When bipolar adjectives are used at the end points of the scales, these are termed semantic differential scales. The semantic scale and the semantic differential scale are illustrated in the figure.

e) Likert scales: A Likert scale is what is termed a summated instrument scale. This means that the items making up a Likert scale are summed to produce a total score. In fact, a Likert scale is a composite of itemised scales. Typically, each scale item will have 5 categories, with scale values ranging from -2 to +2 and 0 as the neutral response.



This explanation may be clearer from the example in the table below.
The Likert scale: each statement is scored -2, -1, 0, 1 or 2 for Strongly Agree, Agree, Neither, Disagree and Strongly Disagree respectively.

• If the price of raw materials fell, firms would reduce the price of their food products.
• Without government regulation the firms would exploit the consumer.
• Most food companies are so concerned about making profits they do not care about quality.
• The food industry spends a great deal of money making sure that its manufacturing is hygienic.
• Food companies should charge the same price for their products throughout the country.

Likert scales are treated as yielding Interval data by the majority of researchers. The scales which have been described
in this chapter are among the most commonly used in research. Whilst there are a great many more forms which scales
can take, if students are familiar with those described in this chapter they will be well equipped to deal with most types
of survey problem.
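Because the scale is summated, each respondent's item scores are simply added to give a total. The sketch below follows the scoring shown in the table (-2 for Strongly Agree through +2 for Strongly Disagree); the respondent's answers are hypothetical.

```python
# Scores follow the table above: Strongly Agree = -2 ... Strongly Disagree = +2.
scale = {"Strongly Agree": -2, "Agree": -1, "Neither": 0,
         "Disagree": 1, "Strongly Disagree": 2}

# Hypothetical answers from one respondent to the five statements.
responses = ["Agree", "Strongly Agree", "Neither", "Disagree", "Agree"]

item_scores = [scale[r] for r in responses]
total_score = sum(item_scores)          # the summated Likert score
print(item_scores, "->", total_score)   # [-1, -2, 0, 1, -1] -> -3
```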

Selecting a measurement scale: some practical questions


There is no best scale that applies to all research projects. The choice of scale will be a function of the nature of the
attitudinal object to be measured, the manager's problem definition, and the backward and forward linkages to other
choices that have already been made (e.g., telephone survey versus mail survey). There are several issues that will be
helpful to consider:
• Is a ranking, sorting, rating, or choice technique best? The answer to this question is largely determined by the
problem definition and especially by the type of statistical analysis that is desired.
• Should a monadic or comparative scale be used? If a scale is other than a ratio scale, the researcher must make a
decision whether to use a standard of comparison. A monadic rating scale uses no such comparison; it asks a
respondent to rate a single concept in isolation. A comparative rating scale asks a respondent to rate a concept in
comparison with a benchmark—in many cases, "the ideal situation" presents a reference for comparison with the actual
situation.
• What type of category labels, if any, will be used for the rating scale? We have discussed verbal labels, numerical
labels, and unlisted choices. The maturity and educational levels of the respondents and the required statistical
analysis will influence this decision.
• How many scale categories or response positions are required to accurately measure an attitude? The researcher
must determine the number of meaningful positions that is best for each specific project.
• Should a balanced or unbalanced rating scale be chosen? The fixed-alternative format may be balanced—with a neutral or indifferent point at the center of the scale—or unbalanced. Unbalanced scales may be used when the responses are expected to be distributed at one end of the scale; an unbalanced scale may eliminate this type of "end piling."
• Should respondents be given a forced-choice scale or a non-forced-choice scale? In many situations, a respondent has
not formed an attitude towards a concept, and simply cannot provide an answer. If many respondents in the sample
are expected to be unaware of the attitudinal object under investigation, this problem may be eliminated by using a
non-forced-choice scale that provides a "no opinion" category. The argument for forced choice is that people really do
have attitudes, even if they are unfamiliar with the attitudinal object.
• Should a single measure or an index measure be used? The researcher's conceptual definition will be helpful in
making this choice. The researcher has many scaling options. The choice is generally influenced by what is planned for
the later stages of the research project.

Questions for Review:

1. What is a research design? Explain the functions of a research design.


2. Define a research design and explain its contents.
3. What are the various features/components of a research design?
4. What is experimental design? Elucidate its benefits and drawbacks.
5. What essential characteristics distinguish a true experiment from other research designs?
6. Discuss the different experimental designs. Illustrate the same.
7. What is the meaning of measurement in research? What difference does it make whether we measure
in terms of a nominal, ordinal, interval or ratio scale? Explain giving examples.
8. Describe the different methods of scale construction, pointing out the merits and demerits of each.
9. "Scaling describes the procedures by which numbers are assigned to various degrees of opinion, attitude and other concepts." Discuss. Also point out the bases for scale classification.
10. Are the following nominal, ordinal, interval or ratio data? Explain your answers.
(a) Temperatures measured on the Kelvin scale.
(b) Military ranks.
(c) Social security numbers.
(d) Number of passengers on buses from Delhi to Mumbai.
11. Write Short Notes on:
A. Features of good Design
B. Exploratory Research Design
C. Descriptive Research Design
D. Experimental Research Design
E. Classifications of experimental designs





Unit 3 : Collection and Processing data

Introduction
A research design is a blueprint which directs the plan of action to complete the research work. The collection of data is an important part of the research process. The quality and credibility of the results derived from the application of research methodology depend upon relevant, accurate and adequate data.
In this unit, we shall study about the various sources of data and methods of
collecting primary and secondary data with their merits and limitations and also
the choice of suitable method for data collection.

Methods of data collection


Data is required to make a decision in any business situation. The researcher is faced with one of the most difficult
problems of obtaining suitable, accurate and adequate data. Utmost care must be exercised while collecting data
because the quality of the research results depends upon the reliability of the data.
Suppose you are the Director of your company. Your Board of Directors has asked you to find out why the profit of the company has decreased over the last two years. Your Board wants you to present facts and figures. What are you going to do?
The first and foremost task is to collect the relevant information to make an analysis of the above problem. The information collected from various sources for a specific purpose, which can be expressed in quantitative form, is called data. The rational decision maker seeks to evaluate information in order to select the course of action that maximizes objectives. For decision making, the input data must be appropriate, and this depends on the appropriateness of the method chosen for data collection. The application of a statistical technique is possible when the questions can be answered in quantitative terms, for instance, the cost of production and profit of the company measured in rupees, or the age of workers measured in years. Therefore, the first step in statistical activities is to gather data. The data may be classified as primary and secondary data. Let us now discuss these two kinds of data in detail.



Primary and Secondary data
The Primary data are original data which are collected for the first time for a specific purpose. Such data are published
by authorities who themselves are responsible for their collection. The Secondary data on the other hand, are those
which have already been collected by some other agency and which have already been processed. Secondary data may
be available in the form of published or unpublished sources. For instance, population census data collected by the
Government in a country is primary data for that Government.
But the same data becomes secondary for those researchers who use it later. In case you have decided to collect
primary data for your investigation, you have to identify the sources from where you can collect that data. For
example, if you wish to study the problems of the workers of X Company Ltd., then the workers who are working in
that company are the source. On the other hand, if you have decided to use secondary data, you have to identify the secondary sources that have already collected the related data for their own study purposes. From the above discussion, we can understand that the difference between primary and secondary data is only one of degree: data which is primary in the hands of one becomes secondary in the hands of another.

Primary data
Primary data can be obtained by communication or by observation. Communication involves questioning respondents
either verbally or in writing. This method is versatile, since one need only ask for the information; however, the
response may not be accurate. Communication usually is quicker and cheaper than observation. Observation involves
the recording of actions and is performed by either a person or some mechanical or electronic device. Observation is
less versatile than communication since some attributes of a person may not be readily observable, such as attitudes,
awareness, knowledge, intentions, and motivation. Observation also might take longer since observers may have to
wait for appropriate events to occur, though observation using scanner data might be quicker and more cost effective.
Observation typically is more accurate than communication.
Some common types of primary data are:

 demographic and socioeconomic characteristics


 psychological and lifestyle characteristics

 attitudes and opinions

 awareness and knowledge - for example, brand awareness

 Intentions - for example, purchase intentions. While useful, intentions are not a reliable indication of actual
future behaviour

 motivation - a person's motives are more stable than his/her behaviour, so motive is a better predictor of
future behaviour than is past behaviour



Methods of primary data collection
The data collection method is an integral part of the research design. There are various methods of data collection, and each method has its own advantages and disadvantages. Selection of an appropriate method of data collection may enhance the value of the research, while a wrong choice may lead to questionable research findings. Data collection methods include interviews, self-administered questionnaires, observation and other methods.
The choice of a method depends on the following factors:
• Nature, scope and objectives of the research
• Availability of resources
• Degree of accuracy required
• Expertise of the researcher
• Time span of the study
• Cost involved and the like
If the available secondary data does not meet the requirements of the present study, the researcher has to collect primary data. As mentioned earlier, data which is collected for the first time by the researcher for his own purpose is called primary data. There are several methods of collecting primary data, such as observation, interviews, questionnaires and schedules.
Let us study about them in detail.

1. Observation Method
The Concise Oxford Dictionary defines observation as 'accurate watching and noting of phenomena as they occur in nature with regard to cause and effect or mutual relations'. Thus observation is not only systematic watching; it also involves listening and reading, coupled with consideration of the seen phenomena. It involves three processes: sensation, attention or concentration, and perception.
Under this method, the researcher collects information directly through observation rather than through the reports of
others. It is a process of recording relevant information without asking anyone specific questions and in some cases,
even without the knowledge of the respondents. This method of collection is highly effective in behavioural surveys, for instance, studies of the behaviour of visitors at trade fairs, the attitude of workers on the job, or the bargaining strategies of customers. Observation can be participant observation or non-participant observation. In Participant
Observation Method, the researcher joins in the daily life of informants or organisations, and observes how they
behave. In the Non-participant Observation Method, the researcher will not join the informants or organisations but
will watch from outside.

Merits
1) This is the most suitable method when the informants are unable or reluctant to provide information.
2) This method provides deeper insights into the problem and generally the data is accurate and quicker to process.
Therefore, this is useful for intensive study rather than extensive study.



Limitations
Despite the above merits, this method suffers from the following limitations:
1) In many situations, the researcher cannot predict when the events will occur. So when an event occurs there may not
be a ready observer to observe the event.
2) Participants may be aware of the observer and as a result may alter their behaviour.
3) Observer, because of personal biases and lack of training, may not record specifically what he/she observes.
4) This method cannot be used extensively if the inquiry is large and spread over a wide area.

2. Interview Method
Interview is one of the most powerful tools and most widely used method for primary data collection in business
research. In our daily routine, we see interviews on T.V. channels on various topics related to social, business, sports,
budget etc. In the words of C. William Emory, 'personal interviewing is a two-way purposeful conversation initiated by an interviewer to obtain information that is relevant to some research purpose'. Thus an interview is, basically, a meeting between two persons to obtain information related to the proposed study. The person who is interviewing is called the interviewer and the person who is being interviewed is called the informant. It is to be noted that the research data/information collected through this method does not come from the conversation between the investigator and the informant alone; the glances, gestures, facial expressions, level of speech etc. are all part of the process.
Through this method, the researcher can collect varied types of data intensively and extensively. Interviews can be classified as direct personal interviews and indirect personal interviews. Under the technique of direct personal interview, the investigator meets the informants (who come under the study) personally, asks them questions pertaining to the enquiry and collects the desired information. Thus if a researcher intends to collect data on the spending habits of Nagpur University (NU) students, he/she would go to the NU, contact the students, interview them and collect the required information.
Indirect personal interview is another technique of interview method where it is not possible to collect data directly
from the informants who come under the study. Under this method, the investigator contacts third parties or
witnesses, who are closely associated with the persons/situations under study and are capable of providing necessary
information. For example, consider an investigation regarding the pattern of bribery in an office. In such a case it is necessary to get the desired information indirectly from other people who may know about it. Similarly, clues about crimes are gathered by the CBI. Utmost care must be exercised to ensure that the persons who are being questioned are fully aware of the facts of the problem under study, and are not motivated to give a twist to the facts.
Another classification of interviews under this method is into structured and unstructured interviewing. In a structured interview, set questions are asked and the responses are recorded in a standardised form. This is useful in large scale interviews where a number of investigators are assigned the job of interviewing, and it helps minimise interviewer bias. This technique is also known as a formal interview. In an unstructured interview, the investigator does not have a set of questions but only a number of key points around which to build the interview. Normally, such interviews are conducted in the case of an exploratory survey where the researcher is not completely sure about the type of data he/she needs to collect. It is also known as an informal interview. Generally, this method is used as a supplementary method of data collection in conducting research in business areas.



Nowadays, telephone or cellphone interviews are widely used to obtain the desired information for small surveys. For instance, banks interview credit card holders about the level of service they are receiving. This technique is used in industrial surveys, especially in developed regions.

Merits
The major merits of this method are as follows:
1) People are more willing to supply information if approached directly. Therefore, personal interviews tend to yield
high response rates.
2) This method enables the interviewer to clarify any doubt that the interviewee might have while asking him/her
questions. Therefore, interviews are helpful in getting reliable and valid responses.
3) The informant's reactions to questions can be properly studied.
4) The researcher can adapt the language of communication to the standard of the informant, and can also obtain personal information about informants which is helpful in interpreting the results.

Limitations
The limitations of this method are as follows:
1) There is a chance that subjective factors or the views of the investigator may creep in, either consciously or unconsciously.
2) The interviewers must be properly trained, otherwise the entire work may be spoiled.
3) It is a relatively expensive and time-consuming method of data collection especially when the number of persons to
be interviewed is large and they are spread over a wide area.
4) It cannot be used when the field of enquiry is large (large sample).

Precautions : While using this method, the following precautions should be taken:
1. Obtain thorough details of the theoretical aspects of the research problem.
2. Identify who is to be interviewed.
3. The questions should be simple, clear and limited in number.
4. The investigator should be sincere, efficient and polite while collecting data.
5. The investigator should be of the same area (field of study, district, state etc.).

3. Questionnaire and Schedule Methods


Questionnaire and schedule methods are the popular and common methods for collecting primary data in business
research. Both the methods comprise a list of questions arranged in a sequence pertaining to the investigation. Let us
study these methods in detail one after another.
i) Questionnaire Method
Under this method, questionnaires are sent personally or by post to various informants with a request to answer the questions and return the questionnaire. If the questionnaire is posted to informants, it is called a mail questionnaire. Sometimes questionnaires may also be sent through e-mail, depending upon the nature of the study and the availability of time and resources. After receiving the questionnaires, the informants read the questions and record their responses in the space meant for the purpose on the questionnaire. It is desirable to send the questionnaire with a self-addressed envelope for a quick and high rate of response.



Merits
1) You can use this method in cases where informants are spread over a vast geographical area.
2) Respondents can take their own time to answer the questions. So the researcher can obtain original data by this
method.
3) This is a cheap method because its mailing cost is less than the cost of personal visits.
4) This method is free from bias of the investigator as the information is given by the respondents themselves.
5) Large samples can be covered and thus the results can be more reliable and dependable.

Limitations
1) Respondents may not return the filled-in questionnaires, or may delay in replying to them.
2) This method is useful only when the respondents are educated and co-operative.
3) Once the questionnaire has been despatched, the investigator cannot modify the questionnaire.
4) It cannot be ensured whether the respondents are truly representative.

ii) Schedule Method


As discussed above, a Schedule is also a list of questions, which is used to collect the data from the field. This is
generally filled in by the researcher or the enumerators. If the scope of the study is wide, then the researcher appoints
people who are called enumerators for the purpose of collecting the data. The enumerators go to the informants, ask
them the questions from the schedule in the order they are listed and record the responses in the space meant for the
answers in the schedule itself. For example, the population census all over the world is conducted through this
method. The difference between a questionnaire and a schedule is that the former is filled in by the informants, while the latter is filled in by the researcher or enumerator.

Merits
1) It is a useful method in case the informants are illiterate.
2) The researcher can overcome the problem of non-response as the enumerators go personally to obtain the
information.
3) It is very useful in extensive studies and can obtain more reliable data.

Limitations
1) It is a very expensive and time-consuming method as enumerators are paid persons and also have to be trained.
2) Since the enumerator is present, the respondents may not respond to some personal questions.
3) Reliability depends upon the sincerity and commitment of the enumerators in data collection.
The success of data collection through the questionnaire method or schedule method depends on how the
questionnaire has been designed.



4. Personal interview method
Personal interviews or face to face communication is a two-way conversation initiated by the interviewer to obtain
information from the participants. The interviewer and the participants may be strangers. The interviewer controls the
topic and pattern of discussion. The participant or the respondents may not gain anything out of their participation in
the interview.
The success of the personal interview depends, among other things, on the respondent's ability to provide the information needed and on his understanding of the importance of the information he provides. The researcher should take the necessary steps to motivate the respondents to cooperate so as to ensure the successful conduct of the interview.

Increasing participation
The researcher can enhance the respondents' participation by explaining the kind of answers sought, the terms in which they should be expressed, and the depth and clarity of information needed. Coaching can be provided to the participants, but care should be taken to avoid introducing bias. The interviewer can make the session an interesting and enjoyable experience by administering adequate motivation techniques.
Some of the techniques for successful interviewing of the participants are listed below:
 The interviewer should introduce himself by name and state the organization to which he is affiliated. He can identify himself with introductory letters or other information that confirms the legitimacy of the work. Enough detail regarding the work to be done should be given and, wherever demanded, more information may be provided. The interviewer should be able to kindle the interest of the respondent.
 If the participant is busy, the interviewer should try to stimulate interest so as to arrange for an interview at
another time.
 The successful conduct of interview requires a good rapport and understanding between the interviewer and
participant. The interviewer should earn the confidence of the respondent so as to elicit response without censure,
coercion or pressure.
 In the process of gathering data the interviewer should ensure that the objective of each question is achieved and
the needed response is obtained. The interviewer can resort to probing, but steps should be taken to avoid the bias.
 The interviewer should record the answers of the participant in an efficient manner. Responses should be recorded as they occur; recording them later will lead to loss of information. Shorthand mechanisms, such as recording only the keywords, can be used where time is a constraint.
 Interviewers should have good communication skills, should be able to adapt to flexible schedules, be willing to work during intermittent work hours and should be mobile. If the interview is conducted by the researcher himself, there is no need for much training; otherwise proper training should be provided so that the interviewer is able to understand the objective of the study, the purpose of each question, the possible responses, and an outline of the research work and its importance. Written instructions can be provided wherever needed.
 Proper questioning techniques should be followed by the interviewer. A funnelling approach can be practised, i.e. at the beginning of an unstructured interview open-ended questions can be asked to get a broad idea and form an impression of the situation. Care should be taken to see that the questions are unbiased.
 The interviewer should restate or rephrase important information so as to ensure that the issues are recorded as the respondent intends to represent them. The researcher can also help the respondent to verbalise his perceptions.



Problems in conducting personal interview
The two main problems in conducting personal interviews are the increased cost and the problem of biased results. Biased results arise out of three types of errors, viz. sampling, response and non-response errors.
i. Sampling error
One of the major criteria of a good sample design is the precision of estimate made with the samples. The sample
respondents selected for conducting the interview may not fully represent the population in all respects. The numerical descriptors that describe the sample may differ from those that describe the population because of the random fluctuations inherent in the sampling process. This is called sampling error. Sampling error reflects the influence of chance in drawing the sample members; it is the error that remains after accounting for all known sources of systematic variance.
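As a rough numerical illustration of sampling error, the sketch below computes the standard error of a sample mean, a common measure of how much a sample estimate is likely to fluctuate around the population value purely by chance; the sample figures are invented.

```python
import statistics

# Hypothetical sample of farm sizes (in acres) drawn from a larger population.
sample = [2.5, 3.0, 4.2, 1.8, 2.9, 3.6, 2.2, 4.0, 3.1, 2.7]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)                 # sample standard deviation
standard_error = sd / (len(sample) ** 0.5)    # estimate of the sampling error of the mean

print(f"sample mean = {mean:.2f}, standard error = {standard_error:.2f}")
```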
ii. Non-response error
Non-response error occurs when the responses of participants differ in some systematic way from the responses of
non-participants. The error occurs when the researcher is unable to locate and access a selected sample respondent, or when the selected respondent is not willing to participate in the interview. This problem arises particularly when samples are selected through probability sampling methods. The problem can be tackled by attempting to contact the respondent again. Another approach is to treat all the remaining non-participants as a new sub-population after a few callbacks.
A random sample is then drawn from this non-participant group and an attempt is made to contact and complete this sample at a hundred per cent success rate. Findings from the non-participant sample can then be weighted into the total population estimate. The researcher can also try to substitute the missing participant, but care should be taken to see that the substitute possesses the significant characteristics of the replaced participant; for example, the substitute should belong to the same occupation, educational status, income level etc.
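A minimal sketch of the weighting idea just described: the estimate from the follow-up sample of initial non-participants is combined with the participants' estimate in proportion to each group's share of the original sample (all figures are hypothetical).

```python
# Hypothetical survey of 1,000 selected respondents.
n_participants = 700          # responded at the first attempt
n_nonparticipants = 300       # initially did not respond

mean_participants = 4.2       # e.g. average satisfaction score of participants
mean_followup = 3.1           # average score in a follow-up sample drawn from the
                              # non-participants and pursued to completion

# Weight each group's estimate by its share of the original sample.
total = n_participants + n_nonparticipants
weighted_estimate = (n_participants / total) * mean_participants \
                    + (n_nonparticipants / total) * mean_followup

print(f"weighted population estimate = {weighted_estimate:.2f}")   # 3.87
```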
iii. Response error
Response error occurs when the data reported differ from the actual data. The error can be caused by the respondent or
the interviewer or during the preparation of data for analysis. Participant initiated error occurs when the participant
fails to answer accurately either by choice or due to lack of knowledge. Interviewer error arises due to the inability to
conduct the interview in a controlled manner. This may take many forms like the failure to secure cooperation, lack of
consistent interview procedures, inability to establish appropriate interview environment, bias due to physical
presence, failure to record answers correctly. These errors affect the quality of the data collected.
iv. Cost
To conduct a personal interview, the respondents have to be met individually. They might be scattered geographically, and the time and cost involved in administrative and travel tasks are high. Sometimes the respondents may not be available and repeated contacts have to be made, which adds to the cost. In addition, the researcher may employ interviewers who have to be paid. To reduce the cost, telephonic interviews and self-administered surveys can be attempted.

Advantages and drawbacks


The major advantage of personal interviewing is the ability to secure in-depth information and detail. The ability to harness information is greater in personal interviewing than in telephone, mail or internet surveys. The researcher can adapt the questioning technique to the respondent's ability to understand, and further clarification can be made immediately by repeating or rephrasing the questions concerned. The researcher can also get information from the non-verbal clues exhibited through the body language of the respondent.
However, personal interviewing involves cost in terms of both money and time. Costs may escalate where the study covers a wide geographic area or has a large sample to be covered. The chance of the outcome being affected by the interviewer's bias is greater in the case of personal interviews. The respondents may also feel uneasy about the secrecy of their responses in a face-to-face interaction.

Primary data collection methods: Some Advantages & Disadvantages

Spoken surveys
Advantages: Effective in all situations, e.g. when the literacy level is low.
Disadvantages: Need a lot of organization.

Face to face surveys
Advantages: Usually provide very accurate results. Any question can be asked. Can include observation and visual aids.
Disadvantages: Expensive, especially when large areas are covered.

Face to face surveys at respondents' home/work/etc.
Advantages: Can cover the entire population.
Disadvantages: Expensive; much organization needed.

Face to face surveys in public places
Advantages: Can do lots of interviews in a short time.
Disadvantages: Samples are usually not representative of the whole population.

Telephone surveys
Advantages: High accuracy obtainable if most members of the population have telephones.
Disadvantages: No visual aids possible. Only feasible with high telephone saturation.

Written surveys
Advantages: Cheaper than face-to-face surveys.
Disadvantages: Hard to tell if questions are not correctly understood. More chance of question wording causing problems.

Mail surveys
Advantages: Cheap. Allow anonymity.
Disadvantages: Require a high level of literacy and a good postal system. Slow to get results.

Self-completion questionnaires, collected and delivered
Advantages: Cheap. Give respondents time to check documents.
Disadvantages: Respondents must be highly literate.

Fax surveys
Advantages: Fast. Cheap.
Disadvantages: Questionnaires with more than one page are often only partially returned.

Email surveys
Advantages: Very cheap. Quick results.
Disadvantages: Samples not representative of the whole population. Some respondents lie. High computer skills needed.

Web surveys
Advantages: More easily processed than email questionnaires.
Disadvantages: Many people don't have good web access.

Informal methods
Advantages: Fast. Flexible.
Disadvantages: Can't produce accurate figures. Experience needed for comparisons. Subjective. Most suitable for preliminary studies.

Monitoring
Advantages: Little work required. Cheap.
Disadvantages: Often not completely relevant. Samples often not representative. Most suitable when assessing progress.

Observation (can be combined with surveys)
Advantages: More accurate than asking people about their behaviour.
Disadvantages: Only works in limited situations.

Meters
Advantages: More accurate than asking people about their behaviour.
Disadvantages: Very expensive to set up; measure equipment rather than people. Can't find out reasons for behaviour.

Panels
Advantages: Ability to discover changes in individuals' preferences and behaviour.
Disadvantages: Need to maintain records of previous contact, etc.

Depth interviews
Advantages: Provide insights not available with most other methods.
Disadvantages: Expensive; need highly skilled interviewers.

Focus groups
Advantages: Provide insights not available with most other methods.
Disadvantages: Need a highly skilled moderator, trained in psychology etc.

Consensus groups
Advantages: Instant results. Clear wording. Cheap.
Disadvantages: Secretary and/or moderator need strong verbal skills. Don't work well in some cultures, e.g. Buddhist.

Internet qualitative research
Advantages: Easy for a geographically dispersed group to meet. Low cost.
Disadvantages: Doesn't provide the subtlety of personal interaction. Very new, so few experts available to help with problems.



Collection of secondary data
Introduction
Before going through the time and expense of collecting primary data, one should check for secondary data that
previously may have been collected for other purposes but that can be used in the immediate study. Secondary data
may be internal to the firm, such as sales invoices and warranty cards, or may be external to the firm such as
published data or commercially available data. The government census is a valuable source of secondary data.
Secondary data has the advantage of saving time and reducing data gathering costs. The disadvantages are that the
data may not fit the problem perfectly and that the accuracy may be more difficult to verify for secondary data than for
primary data.
Some secondary data is republished by organizations other than the original source. Because errors can occur and
important explanations may be missing in republished data, one should obtain secondary data directly from its source.
One also should consider who the source is and whether the results may be biased.

The nature of secondary sources of information


Secondary data is data, which has been collected by individuals or agencies for purposes other than those of our
particular research study. For example, if a government department has conducted a survey of, say, family food
expenditures, then a food manufacturer might use this data in the organisation's evaluation of the total potential
market for a new product. Similarly, statistics prepared by a ministry on agricultural production will prove useful to a
whole host of people and organisations, including those marketing agricultural supplies.
No research study should be undertaken without a prior search of secondary sources (also termed desk research).

There are several grounds for making such a bold statement.


 Secondary data may be available which is entirely appropriate and wholly adequate to draw conclusions and
answer the question or solve the problem. Sometimes primary data collection simply is not necessary.
 It is far cheaper to collect secondary data than to obtain primary data. For the same level of research budget a
thorough examination of secondary sources can yield a great deal more information than can be had through a
primary data collection exercise.
 The time involved in searching secondary sources is much less than that needed to complete primary data collection.
 Secondary sources of information can yield more accurate data than that obtained through primary research. This is
not always true but where a government or international agency has undertaken a large scale survey, or even a
census, this is likely to yield far more accurate results than custom designed and executed surveys when these are
based on relatively small sample sizes.
 It should not be forgotten that secondary data can play a substantial role in the exploratory phase of the research
when the task at hand is to define the research problem and to generate hypotheses. The assembly and analysis of
secondary data almost invariably improves the researcher's understanding of the marketing problem, the various
lines of inquiry that could or should be followed and the alternative courses of action which might be pursued.
 Secondary sources help define the population. Secondary data can be extremely useful both in defining the
population and in structuring the sample to be taken. For instance, government statistics on a country's agriculture
will help decide how to stratify a sample and, once sample estimates have been calculated, these can be used to
project those estimates to the population.




Precaution in Using Secondary Data
From the above discussion, we can understand that there are many published and unpublished sources from which the researcher can get secondary data. However, the researcher must be cautious in using this type of data, because such data may be full of errors due to bias, inadequate sample size, errors of definition, etc. Bowley observed that it is never safe to take published or unpublished statistics at their face value without knowing their meaning and limitations. Hence, before using secondary data, you must examine the following points.
1. Suitability of Secondary Data
Before using secondary data, you must ensure that the data are suitable for the purpose of your enquiry. For this, you
should compare the objectives, nature and scope of the given enquiry with those of the original investigation. For example, suppose the objective of our enquiry is to study the salary pattern of a firm, including the perks and allowances of employees, but the secondary data available covers only basic pay. Such data is not suitable for the purpose of the study.
2. Reliability of Secondary Data
The reliability of secondary data can be tested by examining: i) the un-biasedness of the collecting person or agency, ii) whether proper checks were made on the accuracy of the field work, iii) whether the editing, tabulating and analysis were done carefully, iv) the reliability of the source of information, and v) the methods used for the collection and analysis of the data. If the data collecting organisations are government, semi-government or international bodies, the secondary data are generally more reliable than data collected by individuals and private organisations.
3. Adequacy of Secondary Data
Adequacy of secondary data is to be judged in the light of the objectives of the research. For example, suppose our objective is to study the growth of industrial production in India, but the published reports provide information on only a few states; then the data would not serve the purpose. Adequacy of the data may also be considered in the light of the duration of time for which the data is available. For example, for studying the trend of per capita income of a country we may need data for the last 10 years, but if the information is available for the last 5 years only, it would not serve our objective. Hence, we should use secondary data only if it is reliable, suitable and adequate.

Sources of information
Secondary sources of information may be divided into two categories: internal sources and external sources.

1) Internal sources of secondary information
a. Sales data: All organisations collect information in the course of their everyday operations. Orders are received and delivered, costs are recorded, sales personnel submit visit reports, invoices are sent out, returned goods are recorded, and so on. Much of this information is of potential use in marketing research, but surprisingly little of it is actually used. Organisations frequently overlook this valuable resource by not beginning their search of secondary sources with an internal audit of sales invoices, orders, inquiries about products not stocked, returns from customers and sales force customer calling sheets. For example, consider how much information can be obtained from sales orders and invoices:
 Sales by territory
 Sales by customer type
 Prices and discounts
 Average size of order by customer, customer type, geographical area
 Average sales by sales person and
 Sales by pack size and pack type, etc.
This type of data is useful for identifying an organisation's most profitable products and customers. It can also serve to track trends within the enterprise's existing customer group.
b. Financial data: An organisation has a great deal of data within its files on the cost of producing, storing,
transporting and marketing each of its products and product lines. Such data has many uses in research including
allowing measurement of the efficiency of marketing operations. It can also be used to estimate the costs attached to
new products under consideration, of particular utilisation (in production, storage and transportation) at which an
organisation's unit costs begin to fall.
c. Transport data: Companies that keep good records relating to their transport operations are well placed to establish
which are the most profitable routes, and loads, as well as the most cost effective routing patterns. Good data on
transport operations enables the enterprise to perform trade-off analysis and thereby establish whether it makes
economic sense to own or hire vehicles, or the point at which a balance of the two gives the best financial outcome.
d. Storage data: Records of the rate of stock turn and stock handling costs help in assessing the efficiency of certain marketing operations and of the marketing system as a whole. More sophisticated accounting systems assign costs to the cubic
space occupied by individual products and the time period over which the product occupies the space. These systems
can be further refined so that the profitability per unit, and rate of sale, are added. In this way, the direct product
profitability can be calculated.

2) External sources of secondary information


The researcher who seriously seeks out useful secondary data is more often surprised by its abundance than by its scarcity. Too often, the researcher has secretly (sometimes subconsciously) concluded from the outset that his/her topic of study is so unique or specialised that a search of secondary sources is futile. Consequently, only a cursory search is made, with no real expectation of finding useful sources. Cursory searches become a self-fulfilling prophecy. Dillon et al. give the following advice:
"You should never begin a half-hearted search with the assumption that what is being sought is so unique that no one
else has ever bothered to collect it and publish it. On the contrary, assume there are ample secondary data that should help in providing definition and scope for the primary research effort."
The same authors support their advice by citing the large numbers of organisations that provide marketing
information including national and local government agencies, quasi-government agencies, trade associations,
universities, research institutes, financial institutions, specialist suppliers of secondary marketing data and professional
research enterprises. Dillon et al further advise that searches of printed sources of secondary data begin with referral



texts such as directories, indexes, handbooks and guides. These sorts of publications rarely provide the data in which
the researcher is interested but serve in helping him/her locate potentially useful data sources.

The main external sources of secondary data are: (1) government (central, state and local), (2) trade associations, (3) commercial services, and (4) national and international institutions.

Government statistics: These may include all or some of the following: population censuses; social surveys and family expenditure surveys; import/export statistics; production statistics; agricultural statistics.

Trade associations: Trade associations differ widely in the extent of their data collection and information dissemination activities. However, it is worth checking with them to determine what they publish. At the very least one would normally expect them to produce a trade directory and, perhaps, a yearbook.

Commercial services: Published market research reports and other publications are available from a wide range of organisations which charge for their information. Typically, marketing people are interested in media statistics and consumer information which has been obtained from large scale consumer or farmer panels. The commercial organisation funds the collection of the data, which is wide ranging in its content, and hopes to make its money from selling this data to interested parties.

National and international institutions: Bank economic reviews, university research reports, journals and articles are all useful sources to consult. International agencies such as the World Bank, IMF, UNDP, ITC, FAO and ILO produce an abundance of secondary data which can prove extremely useful to the researcher.

Merits and Limitations of Secondary Data


Merits
1) Secondary data is much more economical and quicker to collect than primary data, as we need not spend time and
money on designing and printing data collection forms (questionnaire/schedule), appointing enumerators, editing and
tabulating data etc.
2) It is impossible for an individual or a small institution to collect primary data on some subjects, such as population census, imports and exports of different countries, national income data etc., but such information can be obtained from secondary data.
Limitations
1) The use of secondary data is risky because data which exactly fits the needs of the present investigation may not be suitable, reliable or adequate, and may also be difficult to find.
2) It is difficult to judge whether the secondary data is sufficiently accurate or not for our investigation.
3) Secondary data may not be available for some investigations. For example, bargaining strategies in live products
marketing, impact of T.V. advertisements on viewers, opinion polls on a specific subject, etc. In such situations we have
to collect primary data.



Collection of secondary Data
As already mentioned, secondary data involves the use of published or unpublished data. Published data are available in: a) publications of the central, state and local governments, b) publications of foreign governments or of international bodies, c) technical and trade journals, and d) reports prepared by research scholars and universities in different fields, etc. The sources of unpublished data are many; such data may be found in diaries, letters, biographies and autobiographies, and with trade associations etc.

Types of Secondary Published data

Newspapers
What they are: Published daily, weekly or monthly; written by journalists, freelancers and staff who are usually paid; written for the general public (although some target specific groups).
Why they might be useful: Provide immediate news, local news, editorials and photographs; excellent for contemporary reactions.
Where to access: Electronic databases, print indexes; some newspapers have free websites.
Examples: Economic Times, Times of India, Employment News.
Note: Because newspapers are meant to provide immediate information, some facts might not be accurate or will change over time.

Popular Magazines
What they are: Published weekly, monthly, etc.; written for a wide, general (non-academic) audience by journalists, staff and freelancers who are usually paid; slick appearance, variety of formats; lots of advertising which may be tied to editorial content.
Why they might be useful: Usually provide general information in short articles (can provide analysis); lots of graphics, photographs and illustrations; can also be a source of public opinion; rarely provide in-depth background information, overviews of topics, statistics, bibliographies or cited references.
Where to access: Electronic databases, print indexes; some magazines have free websites and some exist solely online.
Examples: India Today, Business World, Sportsweek.
Note: Popular magazines, in general, exist to entertain, sell products, express a particular point of view, or provide news summaries of current events.

Scholarly Journals
What they are: Published monthly, quarterly or yearly; written by scholars/researchers for scholars, researchers and students, assuming a scholarly background; use the language of a specific discipline; generally peer-reviewed (articles are evaluated by experts who make publication recommendations); serious appearance with few images or graphics.
Why they might be useful: Cite sources and provide full bibliographies; provide in-depth articles; report the results of original research and experimentation; often a preliminary step before publishing research in book format.
Where to access: Electronic databases, print indexes; some scholarly journals have websites and some exist solely online.
Examples: Journal of Marketing, The Strategist, Western Criminology Review.
Note: Scholarly journals are often published by scholarly societies and organizations or by publishers of other scholarly information.

Books (Monographs)
What they are: Written by and for a variety of audiences; generally take a longer time to be published; often provide citations and bibliographies.
Why they might be useful: Can provide very in-depth coverage; can be primary resources; can present multiple viewpoints in compilations and anthologies.
Where to access: Use a library catalogue to find out what a library owns; some are published in electronic format (e-Books) and are accessible through library catalogues.
Examples: Marketing Management (Kotler), Organization Behaviour (Robbins).

Reference Sources
What they are: Encyclopaedias, dictionaries, chronologies and thesauri, usually written by scholars/experts in a field.
Why they might be useful: Provide general or in-depth information; background information and overviews of topics; statistics; bibliographies; facts and timelines; names, addresses and biographical information; definitions of terms.
Where to access: Use a library catalogue to find out what a library owns; some are available online via library subscriptions; some are only available in the library.
Examples: Encyclopaedia Britannica, Oxford Dictionary, Manorama Yearbook.

Statistics (Census)
What they are: Data on population, demographics, crime, health care, education, income, public opinion etc.
Why they might be useful: Provide a statistical look at a particular population or topic.
Where to access: Use a library catalogue to find out what a library owns; some available via the web; some only available in the library.
Examples: Statistical Abstract of India, Indian Bureau of the Census.

Websites
What they are: All kinds of information: full-text books, government documents, online shopping, greeting cards.
Why they might be useful: The web is an infinite array of information sources in a variety of formats.
Where to access: Internet connection.
Examples: www.google.com, www.yahoo.com.

Tip: Keep a research notebook or log of databases searched



Analysis of data
Data analysis is the process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data. According to Shamoo and Resnik (2003), various analytic procedures "provide a way of drawing inductive inferences from data and distinguishing the signal (the phenomenon of interest) from the noise (statistical fluctuations) present in the data".
While data analysis in qualitative research can include statistical procedures, many times analysis becomes an ongoing
iterative process where data is continuously collected and analyzed almost simultaneously. Indeed, researchers
generally analyze for patterns in observations through the entire data collection phase (Savenye, Robinson, 2004).
An essential component of ensuring data integrity is the accurate and appropriate analysis of research findings.
Improper statistical analyses distort scientific findings, mislead casual readers, and may negatively influence the public
perception of research. Integrity issues are just as relevant to the analysis of non-statistical data.

Considerations/issues in data analysis


There are a number of issues that researchers should be aware of with respect to data analysis. These include:

 Having the necessary skills to analyze


 Concurrently selecting data collection methods and appropriate analysis
 Drawing unbiased inference
 Inappropriate subgroup analysis
 Following acceptable norms for disciplines
 Determining statistical significance
 Lack of clearly defined and objective outcome measurements
 Providing honest and accurate analysis
 Manner of presenting data
 Environmental/contextual issues
 Data recording method
 Partitioning ‗text‘ when analyzing qualitative data
 Training of staff conducting analyses
 Reliability and Validity
 Extent of analysis

Whether statistical or non-statistical methods of analyses are used, researchers should be aware of the potential for
compromising data integrity. While statistical analysis is typically performed on quantitative data, there are numerous
analytic procedures specifically designed for qualitative material including content, thematic, and ethnographic
analysis. Regardless of whether one studies quantitative or qualitative phenomena, researchers use a variety of tools to
analyze data in order to test hypotheses, discern patterns of behavior, and ultimately answer research questions.
Failure to understand or acknowledge data analysis issues presented can compromise data integrity.



Data processing
Data are raw facts. When organised and presented properly, they become information. Turning data into information
involves several steps. These steps are known as data processing. This section looks at data processing and the use of
computers to do it easily and quickly.
In a simplified view of the procedure for turning data into information, data, in a range of forms and from various sources, may be entered into a computer where it can be manipulated to produce useful information (output).
The data, after collection, has to be processed and analysed in accordance with the outline laid down for the purpose at
the time of developing the research plan. This is essential for a scientific study and for ensuring that we have all
relevant data for making contemplated comparisons and analysis.
Technically speaking, processing implies editing, coding, classification and tabulation of collected data so that they are
amenable to analysis. The term analysis refers to the computation of certain measures along with searching for patterns
of relationship that exist among data-groups. Thus, ―in the process of analysis, relationships or differences supporting
or conflicting with original or new hypotheses should be subjected to statistical tests of significance to determine with
what validity data can be said to indicate any conclusions‖.

Data processing includes the following steps:


1. Data coding,
2. Data input,
3. Data editing,
4. Data manipulation, and
5. Data tabulation.

1) Data coding
Coding refers to the process of assigning numerals or other symbols to answers so that responses can be put into a
limited number of categories or classes. Such classes should be appropriate to the research problem under
consideration. They must also possess the characteristic of exhaustiveness (i.e., there must be a class for every data
item) and also that of mutual exclusivity, which means that a specific answer can be placed in one and only one cell in a
given category set. Another rule to be observed is that of unidimensionality by which is meant that every class is
defined in terms of only one concept.
Coding is necessary for efficient analysis and through it the several replies may be reduced to a small number of
classes which contain the critical information required for analysis. Coding decisions should usually be taken at the
designing stage of the questionnaire. This makes it possible to precode the questionnaire choices, which in turn is helpful for computer tabulation, as one can key punch straight from the original questionnaires. But in case of hand coding, some standard method may be used. One such standard method is to code in the margin with a coloured
pencil. The other method can be to transcribe the data from the questionnaire to a coding sheet. Whatever method is
adopted, one should see that coding errors are altogether eliminated or reduced to the minimum level.
Coding is placing data in a usable form. Researcher must make decisions about the level of measurement needed and
assign numbers to variables, including codes for variables where the data is missing or unusable. This is likely already
done if the researcher is using a pre-coded questionnaire, but for other data collection techniques, such as using public
records, this is a step that has to be taken.
Before raw data is entered into a computer it may need to be coded. Coding involves labelling the responses in a
unique and abbreviated way (often by simple numerical codes). The reason raw data are coded is that it makes data
entry and data manipulation easier. Coding can be done by interviewers in the field or by people in an office.
A closed question implies that only a fixed number of predetermined responses are allowed, and these responses can
have codes affixed on the form. An open question implies that any response is allowed, making subsequent coding
more difficult. One may select a sample of responses, and design a code structure which captures and categorizes most
of these.
Each variable should be carefully examined in terms of research problem. In general the level of measurement for a
variable should be the highest level possible to retain the most information and allow the most powerful statistics to be
used. For example, education could be classified into categories such as (1) less than 12 years, (2) high school degree,
(3) some college, and (4) college degree. This may be perfectly acceptable for the research problem as long as we are
examining differences based on degrees. Frequently, a research hypothesis is modified in the process, but while the
original categories worked for the original hypothesis, the new hypothesis might need more specific data. For example,
we may find we need specific number of years of education and not just degrees, because degrees alone do not seem to
be the relevant categories of education. Thus, it is preferable to code at the highest level of measurement possible. You
can always recode data into simpler categories for testing hypotheses if the original data are there, but you can't create higher-level data from lower-level measurement.
Level of measurement: the issue of measurement levels is very complex. Luckily we don't have to become experts but
we do have to know enough to define our variables and later to choose appropriate statistics. A simple outline of
levels of measurement: -
We can demonstrate these levels by defining sex/gender in two different ways.
(1) A self-selected choice on a questionnaire
What is your gender, please check the appropriate selection!
(1) Female: - ______
(2) Male: - ______
The first definition of gender is a nominal level measure, a simple classification system with limited statistics
appropriate for analysis: only the mode would be acceptable for measuring central tendency. Incidentally, while
gender is our variable, the choices 1 and 2 are referred to as attributes or values of the variable gender.
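A minimal sketch of pre-coding and recoding in Python may make these ideas concrete. The codebook values, variable names and cut-off points below are purely illustrative assumptions, not part of any standard:

# Codebook: numeric codes assigned to questionnaire responses,
# including a code for missing or unusable answers.
GENDER_CODES = {"Female": 1, "Male": 2, "No answer": 9}

raw_responses = ["Female", "Male", "", "Female"]
coded = [GENDER_CODES.get(r, GENDER_CODES["No answer"]) for r in raw_responses]
print(coded)  # [1, 2, 9, 1]

# Education recorded at the highest level available (years of schooling)
# can always be recoded downward into broader categories later.
def recode_education(years):
    if years < 12:
        return 1   # less than 12 years
    elif years == 12:
        return 2   # high school degree
    elif years < 16:
        return 3   # some college
    else:
        return 4   # college degree

print([recode_education(y) for y in [10, 12, 14, 17]])  # [1, 2, 3, 4]

Because the years of schooling are retained, the simpler degree categories can be recreated at any time, whereas the reverse is not possible.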

2) Data input
The keyboard of a computer is one of the more commonly known input, or data entry, devices in current use. In the
past, punched cards or paper tapes have been used.



Other input devices in current use include light pens, trackballs, scanners, mice, optical mark readers and bar code
readers. Some common everyday examples of data input devices are:
 Bar code readers used in shops, supermarkets or libraries, and
 Scanners used in desktop publishing.

3) Data editing
Editing of data is a process of examining the collected raw data (especially in surveys) to detect errors and omissions
and to correct these when possible. Before being presented as information, data should be put through a process called
editing. This process checks for accuracy and eliminates problems that can produce disorganised or incorrect
information. Data editing may be performed by clerical staff, computer software, or a combination of both; depending
on the medium in which the data is submitted.
As a matter of fact, editing involves a careful scrutiny of the completed questionnaires and/or schedules. Editing is
done to assure that the data are accurate, consistent with other facts gathered, uniformly entered, as completed as
possible and have been well arranged to facilitate coding and tabulation.
With regard to points or stages at which editing should be done, one can talk of field editing and central editing. Field
editing consists in the review of the reporting forms by the investigator for completing (translating or rewriting) what
the latter has written in abbreviated and/or in illegible form at the time of recording the respondents‘ responses. This
type of editing is necessary in view of the fact that individual writing styles often can be difficult for others to decipher.
This sort of editing should be done as soon as possible after the interview, preferably on the very day or on the next
day.
While doing field editing, the investigator must restrain himself and must not correct errors of omission by simply
guessing what the informant would have said if the question had been asked. Central editing should take place when
all forms or schedules have been completed and returned to the office. This type of editing implies that all forms
should get a thorough editing by a single editor in a small study and by a team of editors in case of a large inquiry.
Editor(s) may correct the obvious errors such as an entry in the wrong place, entry recorded in months when it should
have been recorded in weeks, and the like. In case of inappropriate or missing replies, the editor can sometimes
determine the proper answer by reviewing the other information in the schedule. At times, the respondent can be
contacted for clarification. The editor must strike out the answer if the same is inappropriate and he has no basis for
determining the correct answer or the response. In such a case an editing entry of ‗no answer‘ is called for. All the
wrong replies, which are quite obvious, must be dropped from the final results, especially in the context of mail
surveys.
Editors must keep in view several points while performing their work: (a) They should be familiar with instructions
given to the interviewers and coders as well as with the editing instructions supplied to them for the purpose. (b)
While crossing out an original entry for one reason or another, they should just draw a single line on it so that the same
may remain legible. (c) They must make entries (if any) on the form in some distinctive colour and that too in a
standardised form. (d) They should initial all answers which they change or supply. (e) Editor‘s initials and the date of
editing should be placed on each completed form or schedule.
Some editing processes are:
Validity check: ensures that data fall within set limits. For example, alphabetic characters do not appear in a field that
should have only numerical characters, or that the month of the year is not greater than 12.



Verification check: checks the accuracy of entered data by entering it again and comparing the two results.
Consistency check: checks the logical consistency of answers. For example, an answer stating never married should
not be followed by one stating divorced.
Data editing should detect and minimise errors such as:
 questions not asked by interviewers,
 answers not recorded, and
 inaccurate responses.
Inaccuracy in responses may result from carelessness or a deliberate effort to give misleading answers. Answers
needing mental calculations may result in errors, for example: when converting days into hours, or annual income into
weekly income.
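The validity and consistency checks described above can be partly automated. The following Python sketch uses hypothetical field names and limits; it only illustrates the idea and is not a standard editing routine:

def validity_check(record):
    # Values must fall within set limits, e.g. the month must lie between 1 and 12.
    return 1 <= record.get("month", 0) <= 12 and record.get("age", -1) >= 0

def consistency_check(record):
    # Answers must be logically consistent, e.g. a respondent recorded as
    # 'never married' cannot at the same time be recorded as 'divorced'.
    return not (record.get("marital_status") == "never married"
                and record.get("previous_status") == "divorced")

records = [
    {"month": 5, "age": 34, "marital_status": "married", "previous_status": "single"},
    {"month": 14, "age": 28, "marital_status": "never married", "previous_status": "divorced"},
]

for i, rec in enumerate(records):
    if not (validity_check(rec) and consistency_check(rec)):
        print("Record", i, "flagged for manual editing")  # flags the second record

A verification check would, in addition, compare two independent entries of the same form and flag any field where they disagree.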

4) Data manipulation
After editing, data may be manipulated by computer to produce the desired output. The software used to manipulate
data will depend on the form of output required.
Software applications such as word processing, desktop publishing, graphics (including graphing and drawing),
databases and spreadsheets are commonly used. Following are some ways that software can manipulate data:
 Spreadsheets are used to create formulas that automatically add columns or rows of figures, calculate means and
perform statistical analyses. They can be used to create financial worksheets such as budgets or expenditure
forecasts, balance accounts and analyse costs (a small sketch after this list illustrates such operations in code).
 Databases are electronic filing cabinets: systematically storing data for easy access to produce summaries,
stocktakes or reports. A database program should be able to store, retrieve, sort, and analyse data.
 Charts can be created from a table of numbers and displayed in a number of ways, to show the significance of a
selection of data. Bar, line, pie and other types of charts can be generated and manipulated to advantage.
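As a small illustration of such manipulation, the following sketch performs spreadsheet-style totals and a database-style summary in Python; it assumes the pandas library is available and uses invented expenditure figures:

import pandas as pd

data = pd.DataFrame({
    "department": ["Sales", "Sales", "HR", "HR"],
    "quarter":    ["Q1", "Q2", "Q1", "Q2"],
    "spend":      [1200, 1350, 800, 950],
})

# Spreadsheet-style formulas: column total and mean
print(data["spend"].sum(), data["spend"].mean())

# Database-style summary: total spend per department
print(data.groupby("department")["spend"].sum())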
Processing data provides useful information called output. Computer output may be used in a variety of ways. It may
be saved in storage for later retrieval and use. It may be laser printed on paper as tables or charts, put on a transparent
slide for overhead projector use, saved on floppy disk for portable use in other computers, or sent as an electronic file
via the internet to others.
Types of output are limited only by the available output devices, but their form is usually governed by the need to
communicate information to someone. For whom is output being produced? How will they best understand it? The
answers to these questions help determine one's output type.

5. Data Tabulation
Before analysis can be performed, raw data must be transformed into the right format. First, it must be edited so that
errors can be corrected or omitted. The data must then be coded; this procedure converts the edited raw data into
numbers or symbols. A codebook is created to document how the data was coded. Finally, the data is tabulated to
count the number of samples falling into various categories. Simple tabulations count the occurrences of each variable
independently of the other variables. Cross tabulations, also known as contingency tables or cross tabs, treat two or
more variables simultaneously. However, since the variables are in a two-dimensional table, cross tabbing more than
two variables is difficult to visualize since more than two dimensions would be required. Cross tabulation can be
performed for nominal and ordinal variables.



Cross tabulation is the most commonly utilized data analysis method in research. Many studies take the analysis no
further than cross tabulation. This technique divides the sample into sub-groups to show how the dependent variable
varies from one subgroup to another. A third variable can be introduced to uncover a relationship that initially was not
evident.
Tabulation is an orderly arrangement of data in columns and rows. It is a systematic presentation of classified data
on the basis of the nature of analysis & investigation.
Tabulation refers to the orderly arrangement of data in a table or other summary format. Counting the number of
responses to a question and putting them into a frequency distribution is a simple tabulation, or marginal tabulation,
which provides the most basic form of information for the researcher. Often such simple tabulation is presented in the
form of a frequency table. A frequency table is the arrangement of statistical data in a row and column format that
exhibits the count of responses or observations for each of the categories or codes assigned to a variable. Large samples
generally require computer tabulation of the data.
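A small sketch of simple tabulation and cross tabulation, assuming Python with the pandas library and a handful of invented survey responses:

import pandas as pd

survey = pd.DataFrame({
    "gender":     ["F", "M", "F", "F", "M", "M"],
    "preference": ["Brand A", "Brand B", "Brand A", "Brand B", "Brand A", "Brand B"],
})

# Simple (marginal) tabulation: a frequency distribution of one variable
print(survey["preference"].value_counts())

# Cross tabulation (contingency table): two variables treated simultaneously
print(pd.crosstab(survey["gender"], survey["preference"]))

Each cell of the cross tabulation counts the respondents who fall into that particular combination of categories.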

Tabulation is important because:-


1) It conserves space and reduces explanatory and descriptive statements to the minimum
2) It facilitates the process of comparison
3) It saves time, and interpretation, induction, deduction and conclusion become easier.
Tabulation may be simple or complex. Simple tabulation gives information about one or more groups of independent questions. A complex tabulation gives information or shows the division of data in two or more categories. A complex table generally results in two-way tables (which give information about two interrelated characteristics of data), three-way tables or still higher order tables, which supply information about several interrelated characteristics of data.

Principles of tabulation:
1) A clear, brief and self explanatory title is necessary for a table.
2) Stubs (row headings) and captions (column headings) should be clearly mentioned.
3) The body of the table must show all the relevant information according to their description.
4) Data should be arranged systematically; that is, chronologically, alphabetically or geographically.
5) Adequate spacing should be given in between the columns and rows.
6) Abbreviation should be avoided to the extent possible.

7) Always mention the source of data at the foot of the table.



Tools of Data Analysis
The following is the list of important tools used in analysis of data:

1. Factor Analysis
2. Cluster Analysis
3. Discriminant Analysis
4. Conjoint Analysis
5. Multidimensional Scaling

1) Factor Analysis
Factor analysis is a statistical technique that originated in mathematical psychology. It is used in the social sciences and
in marketing, product management, operations research, and other applied sciences that deal with large quantities of
data. The objective is to discover patterns among variations in the values of multiple variables. This is done by
generating artificial dimensions (called factors) that correlate highly with the real variables.
Factor analysis is a very popular technique to analyze interdependence. Factor analysis studies the entire set of
interrelationships without defining variables to be dependent or independent. Factor analysis combines variables to
create a smaller set of factors. Mathematically, a factor is a linear combination of variables. A factor is not directly
observable; it is inferred from the variables. The technique identifies underlying structure among the variables,
reducing the number of variables to a more manageable set. Factor analysis groups variables according to their
correlation.
The factor loading can be defined as the correlations between the factors and their underlying variables. A factor
loading matrix is a key output of the factor analysis. An example of such a matrix (with the loading values left blank) is shown below:

                           Factor 1      Factor 2      Factor 3
Variable 1
Variable 2
Variable 3
Column's Sum of Squares:

Each cell in the matrix represents correlation between the variable and the factor associated with that cell. The square
of this correlation represents the proportion of the variation in the variable explained by the factor. The sum of the
squares of the factor loadings in each column is called an eigenvalue. An eigenvalue represents the amount of variance
in the original variables that is associated with that factor. The communality is the amount of the variable variance
explained by common factors.
A rule of thumb for deciding on the number of factors is that each included factor must explain at least as much
variance as does an average variable. In other words, only factors for which the eigenvalue is greater than one are
used. Other criteria for determining the number of factors include the Scree plot criteria and the percentage of
variance criteria.
To facilitate interpretation, the axis can be rotated. Rotation of the axis is equivalent to forming linear combinations of
the factors. A commonly used rotation strategy is the varimax rotation. Varimax attempts to force the column entries
to be either close to zero or one.
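The eigenvalue criterion can be illustrated with a short sketch. The ratings below are randomly generated stand-ins for real respondent data, and the correlation-matrix decomposition shown is the principal-component style of extraction; a dedicated factor analysis routine would add rotation and communality estimates:

import numpy as np

# Hypothetical ratings of 6 attributes by 100 respondents (1 to 5 scale)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(100, 6)).astype(float)

corr = np.corrcoef(ratings, rowvar=False)      # 6 x 6 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted largest first

# Retain only factors that explain more variance than an average variable
n_factors = int(np.sum(eigenvalues > 1))
print(eigenvalues.round(2), "-> retain", n_factors, "factor(s)")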



The basic steps are:
 Identify the salient attributes consumers use to evaluate products in this category.
 Use quantitative research techniques (such as surveys) to collect data from a sample of potential customers
concerning their ratings of all the product attributes.
 Input the data into a statistical program and run the factor analysis procedure. The computer will yield a set
of underlying attributes (or factors).
 Use these factors to construct perceptual maps and other product positioning devices.

Information collection
The data collection stage is usually done by research professionals. Survey questions ask the respondent to rate a
product from one to five (or 1 to 7, or 1 to 10) on a range of attributes. Anywhere from five to twenty attributes are
chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The
attributes chosen will vary depending on the product being studied. The same question is asked about all the products
in the study. The data for multiple products is codified and input into a statistical program such as SPSS or SAS.

Analysis
The analysis will isolate the underlying factors that explain the data. Factor analysis is an interdependence technique.
The complete set of interdependent relationships are examined. There is no specification of either dependent variables,
independent variables, or causality. Factor analysis assumes that all the rating data on different attributes can be
reduced down to a few important dimensions. This reduction is possible because the attributes are related. The rating
given to any one attribute is partially the result of the influence of other attributes. The statistical algorithm
deconstructs the rating (called a raw score) into its various components, and reconstructs the partial scores into
underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a
factor loading. There are two approaches to factor analysis: "principal component analysis" (the total variance in the
data is considered); and "common factor analysis" (the common variance is considered).
The use of principal components in a semantic space can vary somewhat because the components may only "predict" but not "map" to the vector space. This produces a statistical principal-component use where the most salient words or themes represent the preferred basis.

Advantages
1. both objective and subjective attributes can be used
2. it is fairly easy to do, inexpensive, and accurate
3. it is based on direct inputs from customers
4. there is flexibility in naming and using dimensions



Disadvantages
1. Usefulness depends on the researcher‘s ability to develop a complete and accurate set of product attributes - if important attributes are missed, the procedure is valueless.
2. Naming of the factors can be difficult - multiple attributes can be highly correlated with no apparent reason.
3. Factor analysis will always produce a pattern between variables, no matter how random.

Decision situations and application suitability of Factor Analysis:


1. Model building for new product development: As pointed out earlier, a real-life situation is highly complex and it
consists of several variables. A model for the real life situation can be built by incorporating as many features of the
situation as possible. But then, with a multitude of features, it is very difficult to build such a highly idealistic model. A
practical way is to identify the important variables and incorporate them in the model. Factor analysis seeks to identify
those variables which are highly correlated among themselves and find a common factor which can be taken as a
representative of those variables. Based on the factor loading, some of variables can be merged together to give a
common factor and then a model can be built by incorporating such factors. Identification of the most common features
of a product preferred by the consumers will be helpful in the development of new products.
2. Model building for consumer groups: Another application of factor analysis is to carry out a similar exercise for the
respondents instead of the variables themselves. Using the factor loading, the respondents in a research survey can be
sorted out into various groups in such a way that the respondents in a group have more or less homogeneous
opinions on the topics of the survey. Thus a model can be constructed on the groups of consumers. The results
emanating from such an exercise will guide the management in evolving appropriate strategies towards market
segmentation.

2) Cluster analysis
Cluster analysis is a technique that is used in order to segment a market. The objective is to find out a group of
customers in the market place that are homogeneous i.e., they share some characteristics so that they can be classified
into one group. The cluster/group so found out should be large enough so that the company can develop it profitably,
as the ultimate objective of a company is to serve the customer and earn profits. The group of customers that the
company hopes to serve should be large enough for a company so that it is an economically viable proposition for the
company. This is also true for the customer, as a customer would not be willing to pay beyond a certain price for a
particular product (price of course is a function of positioning of product, cost of production etc.).
As an example, let us consider the Watch Industry. There could be many ways in which the Watch Industry could be
segmented which are as follows
a. Gender (Male/Female)
b. Technology (Digital/Analog)
c. Design Features
d. Occasion of Use (Formal/Casual/Party)
e. Price (Low/Medium/High/Jewellery)

Some of the above segmentation factors are demographic (price, gender) whereas some are psychographic factors (occasion of use).



This, therefore, presents a problem to the market researcher/company, as to how to identify combination of factors
that can be used to segment the market place. It is not always possible to segment a market on the basis of one single
factor. Thus, a combination of factors must be used to segment the market place. And this is where the Cluster Analysis technique comes in: it deals specifically with how objects (people, places, products) should be assigned to groups, so that there is similarity within the groups and as much difference between the groups as possible.
Cluster analysis is a class of statistical techniques that can be applied to data that exhibits ―natural‖ groupings.
Cluster analysis sorts through the raw data and groups them into clusters. A cluster is a group of relatively
homogeneous cases or observations. Objects in a cluster are similar to each other. They are also dissimilar to objects
outside the cluster, particularly objects in other clusters.
Cluster analysis, like factor analysis and multi dimensional scaling, is an interdependence technique : it makes no
distinction between dependent and independent variables. The entire set of interdependent relationships is examined.
It is similar to multi dimensional scaling in that both examine inter-object similarity by examining the complete set of
interdependent relationships. The difference is that multi dimensional scaling identifies underlying dimensions, while
cluster analysis identifies clusters. Cluster analysis is the obverse of factor analysis. Whereas factor analysis reduces
the number of variables by grouping them into a smaller set of factors, cluster analysis reduces the number of
observations or cases by grouping them into a smaller set of clusters.

The basic procedure is:


1. Formulate the problem - select the variables that you wish to apply the clustering technique to
2. Select a distance measure - various ways of computing distance (illustrated in the sketch after this list):
o Squared Euclidean distance - the sum of the squared differences in value for each variable (taking the square root gives the ordinary Euclidean distance)
o Manhattan distance - the sum of the absolute differences in value for any variable
o Chebychev distance - the maximum absolute difference in values for any variable
3. Select a clustering procedure (see below)
4. Decide on the number of clusters
5. Map and interpret clusters - draw conclusions - illustrative techniques like perceptual maps, icicle plots, and
dendrograms are useful
6. Assess reliability and validity - various methods:
o repeat analysis but use different distance measure
o repeat analysis but use different clustering technique
o split the data randomly into two halves and analyze each part separately
o repeat analysis several times, deleting one variable each time
o repeat analysis several times, using a different order each time
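The sketch below illustrates the three distance measures and a clustering step. The data points are invented, and k-means is used only as one convenient clustering procedure (assuming the scikit-learn library is installed); other procedures from step 3 could be substituted:

import numpy as np
from sklearn.cluster import KMeans

# Step 2: distance measures between two objects described by two variables
a = np.array([2.0, 5.0])
b = np.array([4.0, 1.0])
print(np.sum((a - b) ** 2))      # squared Euclidean distance: 20.0
print(np.sum(np.abs(a - b)))     # Manhattan distance: 6.0
print(np.max(np.abs(a - b)))     # Chebychev distance: 4.0

# Steps 3-5: cluster six objects into a chosen number of clusters
X = np.array([[1, 2], [1, 3], [2, 2], [8, 8], [9, 9], [8, 9]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # two homogeneous groups: the first three vs. the last three objects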

Decision situations and application suitability of Cluster Analysis


The concept of cluster analysis has applications in a variety of areas. A few examples are listed below:
1. A marketing manager can use it to find out which brands of products are perceived to be similar by the consumers.
2. A doctor can apply this method to find out which diseases follow the same pattern of occurrence.
3. An agriculturist may use it to determine which parts of his land are similar as regards the crop being cultivated.



4. Once a set of objects have been put in different clusters, the top level management can take a policy decision as to
which cluster has to be paid more attention and which cluster needs less attention, etc. Thus it will help the
management in the decision on market segmentation. In short, cluster analysis finds applications in so many contexts.
Some practical area where cluster analysis can be used is explained below;
i. Segmenting the market: The consumers may be clustered on the basis of the benefits sought from the purchase of a
product. Each cluster would consist of consumers who are relatively homogeneous in terms of the benefits they seek.
This is called benefit segmentation.
ii. Understanding buyer behaviour: Cluster analysis can be used to identify homogeneous groups of buyers. The
buying behaviour of each group may be examined separately.
iii. Identifying new product opportunities: Clustering brands and products enables the researcher to identify the competitive sets within the market. Brands within the same cluster compete more fiercely with each other than with brands in other clusters. A firm can examine its current offerings compared to those of the competitors to identify potential new product opportunities.
iv. Selecting test markets: Clustering geographical areas enables the selection of comparable cities for testing the various marketing strategies.
v. Reducing data: Cluster analysis can be used as a data reduction tool to develop clusters or subgroups of data that are more manageable than individual observations.

Limitations of using cluster analysis technique


The following aspects should be kept in mind while using cluster analysis method:
a) A number of clusters may emerge after doing the analysis. However, there is a limit to the number of clusters that a
company can consider due to
− Limitation of market potential within a cluster
− Difference between clusters not sharply defined
b) Cluster analysis provides a way of segmenting the market but these segments are not water tight compartments.
Products that are developed for a particular segment may attract people from other segments too.
c) The characteristics of a cluster may change over time; as the consumer‘s economic status, education, lifestyle, etc., change, the company has to take a relook at the market place.
d) The clusters that have been identified are used for developing further marketing strategies in areas like product development, advertising research, distribution strategies, pricing strategies, etc.
e) The most important assumption in cluster analysis is that the basic measure of similarity on which clustering is
based is a valid measure of the similarity between objects. A second major assumption is that there is theoretical
justification and basis for structuring objects into clusters. As with other multivariate techniques, there should be
theory and logic underlying the cluster analysis.
f) The major limitation of cluster analysis is the difficulty in evaluating the quality of the clusters. It is very difficult to
know exactly which clusters are very similar and which objects are dissimilar, and also difficult to select clustering
criterion.
In conclusion, cluster analysis is a scientific method that helps in understanding consumer groups with their differing needs and perceptions.



3) Discriminant analysis
Discriminant analysis is a statistical technique used in marketing and the social sciences. It is applicable when there is
only one dependent variable but multiple independent variables (similar to ANOVA and regression). But unlike
ANOVA and regression analysis, the dependent variable must be categorical. It is similar to factor analysis in that
both look for underlying dimensions in responses given to questions about product attributes. But it differs from
factor analysis in that it builds these underlying dimensions based on differences rather than similarities.
Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction
between independent variables and dependent variables (also called criterion variables) must be made.
Discriminant analysis works by creating a new variable called the Discriminant function score which is used to predict
to which group a case belongs.
Discriminant function scores are computed similarly to factor scores, i.e. using eigenvalues. The computations find the
coefficients for the independent variables that maximize the measure of distance between the groups defined by the
dependent variable.
The discriminant function is similar to a regression equation in which the independent variables are multiplied by
coefficients and summed to produce a score.
The data structure for DFA is a single grouping variable that is predicted by a series of other variables. The grouping
variable must be nominal, which might also be a reclassification of a continuous variable into groups. The function is
presented thus:
Y‘ = X1W1 + X2W2 + X3W3 + ... + XnWn + Constant
This is essentially identical to a multiple regression, but in reality the two techniques are quite different. Regression is
built on a linear combination of variables that maximizes the regression relationship, i.e., the least squares regression,
between a continuous dependent variable and the regression variate. In DFA, the dependent variable consists of
discrete groups, and what you want to do with the function is to maximize the distance between those groups, i.e., to
come up with a function that has strong discriminatory power among the groups. Although logit regression does somewhat the same thing when you have a binary (two-group) variable, the way in which the two techniques compute their functions is quite different.

The objective of discriminant analysis


The objective of discriminant analysis is to separate a population (or samples from the population) into two distinct
groups or two distinct conditionalities. After such a separation is made, we should be able to discriminate one group
against the other. In other words, if some sample data is given, it should be possible for us to say with certainty
whether that sample data has come from the first group or the second group. For this purpose, a function called
‗Discriminant function‘ is constructed. It is a linear function and it is used to describe the differences between two
groups.
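A minimal two-group sketch, assuming the scikit-learn library and invented data on two predictors (say, income and age) for buyers and non-buyers; the fitted coefficients play the role of the weights W1, W2 in the discriminant function given above:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical predictors (e.g. income in '000s and age) and a two-group variable
X = np.array([[20, 25], [22, 30], [25, 28], [55, 50], [60, 45], [58, 52]])
y = np.array([0, 0, 0, 1, 1, 1])   # 0 = non-buyer, 1 = buyer

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_, lda.intercept_)   # weights and constant of the discriminant function
print(lda.predict([[30, 35]]))     # assigns a new case to one of the two groups
print(lda.score(X, y))             # proportion correctly classified (a 'hit ratio',
                                   # here computed on the estimation sample for brevity)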

Discriminant Analysis Involves:


1. Formulate the problem and gather data - Identify the salient attributes consumers use to evaluate products in
this category - Use quantitative research techniques (such as surveys) to collect data from a sample of potential
customers concerning their ratings of all the product attributes. The data collection stage is usually done by
research professionals. Survey questions ask the respondent to rate a product from one to five (or 1 to 7, or 1 to 10) on a range of attributes chosen by the researcher. Anywhere from five to twenty attributes are chosen. They
could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes
chosen will vary depending on the product being studied. The same question is asked about all the products in
the study. The data for multiple products is codified and input into a statistical program such as SPSS or SAS.
(This step is the same as in Factor analysis).
2. Estimate the Discriminant Function Coefficients and determine the statistical significance and validity -
Choose the appropriate discriminant analysis method. The direct method involves estimating the discriminant
function so that all the predictors are assessed simultaneously. The stepwise method enters the predictors
sequentially. The two-group method should be used when the dependent variable has two categories or states.
The multiple discriminant method is used when the dependent variable has three or more categorical states.
Use Wilks‘s Lambda to test for significance in SPSS or F stat in SAS. The most common method used to test
validity is to split the sample into an estimation or analysis sample, and a validation or holdout sample. The
estimation sample is used in constructing the discriminant function. The validation sample is used to construct a
classification matrix which contains the number of correctly classified and incorrectly classified cases. The
percentage of correctly classified cases is called the hit ratio.
3. Plot the results on a two dimensional map, define the dimensions, and interpret the results. The statistical
program (or a related module) will map the results. The map will plot each product (usually in two dimensional
space). The distance of products from each other indicates how different they are. The dimensions must be
labelled by the researcher. This requires subjective judgement and is often very challenging.

Decision situations and application suitability of Discriminant Function Analysis


General Purpose
Discriminant function analysis is used to determine which variables discriminate between two or more naturally
occurring groups. For example, an educational researcher may want to investigate which variables discriminate
between high school graduates who decide (1) to go to college, (2) to attend a trade or professional school, or (3) to
seek no further training or education. For that purpose the researcher could collect data on numerous variables prior to
students‘ graduation. After graduation, most students will naturally fall into one of the three categories.
Discriminant Analysis could then be used to determine which variable(s) are the best predictors of students‘
subsequent educational choice.
A medical researcher may record different variables relating to patients‘ backgrounds in order to learn which variables
best predict whether a patient is likely to recover completely (group 1), partially (group 2), or not at all (group 3).
A biologist could record different characteristics of similar types (groups) of flowers, and then perform a discriminant
function analysis to determine the set of characteristics that allows for the best discrimination between the types.

4) Conjoint analysis
Conjoint analysis, also called multi-attribute compositional models, is a statistical technique that originated in
mathematical psychology. Today it is used in many of the social sciences and applied sciences including marketing,
product management, and operations research. The objective of conjoint analysis is to determine what combination
of a limited number of attributes is most preferred by respondents. It is used frequently in testing customer acceptance of new product designs and assessing the appeal of advertisements. It has been used in product
positioning, but there are some problems with this application of the technique.
When asked to do so outright, many consumers are unable to accurately determine the relative importance that they
place on product attributes. For example, when asked which attributes are the more important ones, the response may
be that they all are important. Furthermore, individual attributes in isolation are perceived differently than in the
combinations found in a product. It is difficult for a survey respondent to take a list of attributes and mentally
construct the preferred combinations of them. The task is easier if the respondent is presented with combinations of
attributes that can be visualized as different product offerings. However, such a survey becomes impractical when
there are several attributes that result in a very large number of possible combinations.
Fortunately, conjoint analysis can facilitate the process. Conjoint analysis is a tool that allows a subset of the possible
combinations of product features to be used to determine the relative importance of each feature in the purchasing
decision. Conjoint analysis is based on the fact that the relative values of attributes considered jointly can better be
measured than when considered in isolation.
In a conjoint analysis, the respondent may be asked to arrange a list of combinations of product attributes in decreasing
order of preference. Once this ranking is obtained, a computer is used to find the utilities of different values of each
attribute that would result in the respondent's order of preference. This method is efficient in the sense that the survey
does not need to be conducted using every possible combination of attributes. The utilities can be determined using a
subset of possible attribute combinations. From these results one can predict the desirability of the combinations that
were not tested.

We can best understand conjoint analysis with the help of an example:


Example 1
Suppose we have to design a public transport system. We wish to test the relative desirability of three attributes:
The company aims to provide a service. They wish to test three levels of frequency and three levels of fares. Further, they want to test the weightage given by consumers to add-on features such as AC and music. The conjoint problem can
be presented as follows:
Fare (three levels ` 10, ` 15, ` 20)
Frequency of service (10 minutes, 15 minutes, 20 minutes)
Add-on features: AC vs. non-AC vs. music (AC & music, AC, music, nothing)
A sample of 500 respondents is selected and asked to rank their preferences for all possible combinations and for each level. These rankings, along with one respondent‘s sample rankings, can be presented as trade-off information in the form of a table.
Basically, the respondent‘s preference rankings help reveal how desirable a particular feature is to a respondent. Features respondents are unwilling to give up from one preference ranking to the next are given a higher utility. Thus, in the above example, the respondent gives a high weightage to frequency of service, followed by AC. The offer of music is clearly not very important, as he ranks it below AC. However, he is not willing to trade off frequency of service with either AC or music.
Conjoint analysis uses preference rankings to calculate a set of utilities for each respondent where one utility is
calculated for each respondent for each attribute or feature. The calculation of utilities is such that the sum of utilities
for a particular combination shows a good correspondence with that combination‘s position in the individual‘s original preference rankings. The utilities basically show the importance of each level of each attribute to respondents. We
can also identify the more important attributes by looking at the range of utilities for each of the different levels.

For Example
 Frequency of service has utilities ranging from 1.6 to 0.4; the range is therefore equal to 1.2. A high range implies that the respondent is more sensitive to changes in the level of this attribute.
 These utilities are calculated across all respondents for all attributes and for different levels of each attribute.
At the end of the analysis, 3-4 of the most popular combinations would be identified, for which the relative costs and benefits can be worked out.
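A minimal sketch of how part-worth utilities might be estimated for a single respondent, assuming preference ratings rather than rankings, a simple additive model, and dummy coding of the fare and frequency levels; the profile ratings below are invented, and dedicated conjoint software would normally handle this step:

import numpy as np

# Nine profiles: three fare levels x three frequency levels (level 0 is the base)
profiles = [(f, q) for f in range(3) for q in range(3)]
ratings = np.array([9, 7, 5, 8, 6, 4, 6, 4, 2], dtype=float)   # hypothetical ratings

def dummy_row(fare, freq):
    row = [0.0] * 4                 # dummies for fare levels 1-2 and frequency levels 1-2
    if fare > 0:
        row[fare - 1] = 1.0
    if freq > 0:
        row[1 + freq] = 1.0
    return row + [1.0]              # constant term

X = np.array([dummy_row(f, q) for f, q in profiles])
part_worths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(part_worths.round(2))         # utilities relative to the base levels, plus the constant

The range of the utilities for an attribute (as in the frequency-of-service example above) then indicates how sensitive the respondent is to that attribute.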

Steps in Developing a Conjoint Analysis


1. Choose product attributes, for example, appearance, size, or price.
2. Choose the values or options for each attribute. For example, for the attribute of size, one may choose the
levels of 5", 10", or 20". The higher the number of options used for each attribute, the more burden that is
placed on the respondents.
3. Define products as a combination of attribute options. The set of combinations of attributes that will be used
will be a subset of the possible universe of products.
4. Choose the form in which the combinations of attributes are to be presented to the respondents. Options
include verbal presentation, paragraph description, and pictorial presentation.
5. Decide how responses will be aggregated. There are three choices - use individual responses, pool all
responses into a single utility function, or define segments of respondents who have similar preferences.
6. Select the technique to be used to analyze the collected data. The part-worth model is one of the simpler
models used to express the utilities of the various attributes. There also are vector (linear) models and ideal-
point (quadratic) models.
The data is processed by statistical software written specifically for conjoint analysis.
Conjoint analysis was first used in the early 1970's and has become an important research tool. It is well-suited for
defining a new product or improving an existing one.

Decision situations and application suitability of Conjoint Analysis


1. An idea of consumer‘s preferences for combinations of attributes will be useful in designing new products or
modification of an existing product.
2. A forecast of the profits to be earned by a product or a service.
3. A forecast of the market share for the company‘s product.
4. A forecast of the shift in brand loyalty of the consumers.
5. A forecast of differences in responses of various segments of the product.
6. Formulation of marketing strategies for the promotion of the product.
7. Evaluation of the impact of alternative advertising strategies.
8. A forecast of the consumers‘ reaction to pricing policies.
9. A forecast of the consumers‘ reaction on the channels of distribution.
10. Evolving an appropriate marketing mix.



11. Even though the technique of conjoint analysis was developed for the formulation of corporate strategy, this
method can be used to have a comprehensive knowledge of a wide range of areas such as family decision making
process, pharmaceuticals, tourism development, public transport system, etc.

Advantages of Conjoint Analysis


1. The analysis can be carried out on physical variables.
2. Preferences by different individuals can be measured and pooled together to arrive at a decision.

Disadvantages of Conjoint Analysis


1. When more and more attributes of a product are included in the study, the number of combinations of attributes
also increases, rendering the study highly difficult. Consequently, only a few selected attributes can be included in the
study.
2. Gathering of information from the respondents will be a tough job.
3. Whenever novel combinations of attributes are included, the respondents will have difficulty in comprehending such combinations.
4. The psychological measurements of the respondents may not be accurate.

5) Multidimensional scaling
Multidimensional scaling (MDS) is a series of techniques that helps the analyst to identify key dimensions
underlying respondents‘ evaluations of objects. It is often used in Marketing to identify key dimensions underlying
customer evaluations of products, services or companies.
Multidimensional scaling (MDS) is a statistical technique often used in marketing and the social sciences. It is a
procedure for taking the preferences and perceptions of respondents and representing them on a visual grid. These
grids, called perceptual maps are usually two-dimensional, but they can represent more than two. Potential customers
are asked to compare pairs of products and make judgements about their similarity. Whereas other techniques (such
as factor analysis, discriminant analysis, and conjoint analysis) obtain underlying dimensions from responses to
product attributes identified by the researcher, MDS obtains the underlying dimensions from respondents‘
judgements about the similarity of products. This is an important advantage. It does not depend on researchers‘
judgments. It does not require a list of attributes to be shown to the respondents. The underlying dimensions come
from respondents‘ judgements about pairs of products. Because of these advantages, MDS is the most common
technique used in perceptual mapping.

Multidimensional Scaling Procedure


There are several steps in conducting MDS research (a brief computational sketch of steps 3 and 4 follows this list):
1. Formulating the problem - What brands do you want to compare? How many brands do you want to compare?
More than 20 is cumbersome. Less than 8 (4 pairs) will not give valid results. What purpose is the study to be
used for?
2. Obtaining Input Data - Respondents are asked a series of questions. For each product pair they are asked to rate
similarity (usually on a 7 point Likert scale from very similar to very dissimilar). The first question could be for
Coke/Pepsi for example, the next for Coke/Hires rootbeer, the next for Pepsi/Dr Pepper, the next for Dr Pepper/Hires rootbeer, etc. The number of questions is a function of the number of brands and can be calculated
as Q = N (N - 1) / 2 where Q is the number of questions and N is the number of brands. This approach is referred
to as the ―Perception data : direct approach‖. There are two other approaches. There is the ―Perception data :
derived approach‖ in which products are decomposed into attributes which are rated on a semantic differential
scale. The other is the ―Preference data approach‖ in which respondents are asked their preference rather than
similarity.
3. Running the MDS statistical program - Software for running the procedure is available in most of the better
statistical applications programs. Often there is a choice between Metric MDS (which deals with interval or ratio
level data), and Nonmetric MDS (which deals with ordinal data). The researchers must decide on the number of
dimensions they want the computer to create. The more dimensions, the better the statistical fit, but the more
difficult it is to interpret the results.
4. Mapping the results and defining the dimensions - The statistical program (or a related module) will map the
results. The map will plot each product (usually in two dimensional space). The proximity of products to each
other indicate either how similar they are or how preferred they are, depending on which approach was used.
The dimensions must be labelled by the researcher. This requires subjective judgement and is often very
challenging. The results must be interpreted.
5. Test the results for reliability and Validity - Compute R-squared to determine what proportion of variance of
the scaled data can be accounted for by the MDS procedure. An R-square of .6 is considered the minimum
acceptable level. Other possible tests are Kruskal‘s Stress, split data tests, data stability tests (i.e.: eliminating one
brand), and test-retest reliability.
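As a brief computational sketch of steps 3 and 4, the following assumes the scikit-learn library, uses the four brands from the example above, and invents an averaged dissimilarity matrix (0 means identical):

import numpy as np
from sklearn.manifold import MDS

brands = ["Coke", "Pepsi", "Dr Pepper", "Hires"]
D = np.array([                     # hypothetical averaged dissimilarity judgements
    [0.0, 1.0, 4.0, 5.0],
    [1.0, 0.0, 4.5, 5.5],
    [4.0, 4.5, 0.0, 2.0],
    [5.0, 5.5, 2.0, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)      # one (x, y) point per brand for the perceptual map
for brand, (x, y) in zip(brands, coords):
    print(brand, round(x, 2), round(y, 2))
print("stress:", round(mds.stress_, 3))   # lack-of-fit measure (cf. Kruskal's Stress)

Note that with four brands, Q = 4 x 3 / 2 = 6 similarity questions would have been asked to obtain such a matrix.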

Applications of multi-dimensional scaling


Some of the typical marketing applications that emerge from the MDS technique are:
1) Market Segmentation
Market segmentation is the technique of trying to identify groups of consumers who exhibit commonality of perception of products and preferences. One can use MDS techniques to identify present perceptions of products by consumers, and use them to modify the company's product, package, advertising and additional features, so that the product offering of the company moves closer and closer to the 'ideal' requirement of the consumer.
2) Advertisement Evaluation
The MDS technique could be used at the stage of advertisement pre-testing. Once an advertisement has been developed, it could be tested for similarity/dissimilarity with other advertisements in the same product category. The ultimate objective of an advertisement is to communicate with the target consumer effectively, and this is possible only if the advertisement is distinct in its message from the other competing advertisements.
3) Product Re-positioning Studies
If a company is interested in re-positioning its product/service (in the mind of the consumer), the first and foremost activity is to assess the current perception of the product in the mind of the consumer. The classic re-positioning case is that of Cadbury chocolates, which kept on assessing its positioning platform, and successfully moved chocolates from a product perceived as one for children to a product which could be consumed by a person of any age, at any time of the day, and for varied occasions.



4) New Product Development
The MDS technique shows us the perceptions of the different brands. Spaces/gaps in the product perceptions could be used to develop new offerings for the target consumer.
5) Test Marketing
MDS technique can be used to identify cities that have similar demographic characteristics, and one could then identify
a city which could represent a national character, and use that city for test marketing.
One can thus observe that MDS is a very useful technique to help understand the market place and develop strategies
for the future.

Advantage of MDS
The advantage of MDS methods is not in the measurement of physical distances, but rather of "psychological distances", also called 'dissimilarities'. In MDS, we assume that every individual has a 'mental map' of products, people, places, events and companies, and that individuals keep on evaluating their external environment on a continuous basis.
We also assume that the respondent is able to provide either a numerical measure of his or her perceived degree of
similarity/dissimilarity between pairs of objects, or can rank pairs of objects (ordinal scale of measurement) in terms of
similarity/dissimilarity to each other.
We can then make use of the methodology of MDS to construct a physical map in one or more dimensions whose inter-point distances (or ranks of distances) are most consistent with the input data.
Nowadays a number of software programmes are available for conducting MDS analysis. These programmes provide for a variety of input data. Some of the widely used packages include MDPREF, MDSCAL, INDSCAL, PREFMAP, PROFIT and KYST.



Field work
Introduction
When interviewers talk about the field, it's not a farm, but the place where they go to do their interviews: whether at
people's homes, public places, or even a call centre where they do telephone interviewing. The term fieldwork includes
all interviewing, as well as the activities that go with it: preparation, interviewer supervision, and verification.
There are two main forms of fieldwork: face-to-face interviews, and telephone interviews. Telephone interviews are
much less laborious (no walking - just ring a number) but also more restrictive because nothing visible can pass
between the interviewer and the respondent.
Field research is essentially studying something by "going where it is happening and watching it happen". One version of this method, referred to as "participant observation", obviously involves participating and observing at the
same time. Field research develops a fuller understanding by providing insight and understanding of meanings,
motivations and processes in a holistic natural setting. Studies using these techniques have covered topics from
anthropological studies of other cultures, to studies of student demonstrations in the 60's to examining jury actions in
courtroom proceedings, to the study of rituals in a devil-worshiping group.
The aims of fieldwork have traditionally been implicit within the dominant methodologies of fieldwork practice. The
traditional approaches, sometimes termed 'fieldwork excursions' have aims rooted in the development of content
knowledge. The data collection/hypothesis testing and field enquiry approaches extend the learning opportunities
available and promote the application of learning objectives to the planning of fieldwork.
A more effective, but time-consuming, approach is one that incorporates the processes of field research. Incorporating
the elements of observation, description and explanation, it adopts a problem-solving focus. The researcher identifies a
problem as a result of his or her observations or studies, formulates a hypothesis, designs a research methodology,
collects and records data, processes and analyses the information, and draws conclusions that result in the acceptance or
rejection of the original hypothesis.

The three possible approaches to fieldwork are: -


1) A 'hypothetico-deductive' approach: -
Where the researcher generates aims and hypotheses based upon prior theoretical knowledge, selects appropriate
methods, collects data and carries out the analysis.
2) An 'enquiry' approach: -
Issues are introduced, key questions raised, and students select methods to investigate and develop possible solutions
to these.
3) An 'Individual Inquiry' approach: -
Whereby the researcher has the opportunity to select his or her own topic, adopt his or her own approach and complete an
independent project or field investigation. Staff act as supervisors and advisors, providing equipment and advice and
ensuring safe working.

The Usefulness of Fieldwork


 Improving observation skills and developing a better understanding of processes.
 Experiential learning: fieldwork provides opportunities to learn through direct, concrete experiences, enhancing the
understanding that comes from observing 'real world' manifestations of theoretical concepts and processes.



 Developing and applying analytical skills: fieldwork relies on a range of skills, many of which are not used in
desk-based research.
 Experiencing real-life research: developing investigative, communicative and participatory skills.
 Teamwork: fieldwork experiences provide an important teamwork element.
 Skill development: observation, synthesis, evaluation, reasoning, instrumentation skills, practical problem solving,
adaptability to new demands that call upon creative solutions, etc.
 Uses of technology: applying technology to investigate problems and issues.

Effective Fieldwork
To be effective fieldwork should:
 be well planned, interesting, cost effective and represent an effective use of the time available
 target specific issues and topic outcomes
 provide opportunities for the researcher to develop a range of cognitive and manipulative skills
 be integrated with the subject matter to ensure that researchers take full advantage of the enhanced understanding that is
achieved through direct observation, data collection/recording and inquiry learning.
 be supported by pre-and post- expedition classroom activities that establish the context for learning and provide the
necessary follow-up and reinforcement.

Common sources of error in Fieldwork:


Five common sources of error in fieldwork are identified in the following discussion: 1) errors in selecting
respondents; 2) non-response errors (i.e. failure to get data from selected respondents); 3) errors created by the
method of seeking data; 4) errors resulting from interviewers misinterpreting or mis-recording answers; and 5)
interviewer cheating.
1) Error in respondent selection means that the interviewer at times uses his own biased judgement in selecting the
respondents, e.g. he may choose a respondent who looks more friendly and appears easy to interview. Errors also occur
in classification; interviewers who classify the same respondents on the basis of income may differ in as many as 30%
of the cases. Interviewers also tend to select the more accessible individuals in the household in both telephone and
personal interviews.
2) Non-response errors: In almost every study no response is obtained from a certain part of the sample – that is,
from those who refuse to cooperate, those who cannot be located and those who are unsuitable for interview. If the non-
response group is large, it may easily bias the results of the study.
3) The third type of error which occurs during fieldwork is error in stimulating responses. Whether data is recorded
by telephone, personal interview or mail, the information obtained will be influenced by the questioning process. The
wording of the question and the manner in which it is presented cause problems. Problems regarding the purpose of the
survey and what is expected of the respondent are the most common ones. The interviewer's method of asking a
question also influences the results in several ways. Apart from this, the respondent's perception of the interviewer also
influences the responses. The age, race and income of the interviewer all tend to influence the responses obtained in
personal interviews.



4) Errors in interpreting and recording answers arise from the interviewers themselves: interviewers may influence
results not only by the way they ask questions but also by the way they react to respondents. Differences in the
characteristics of interviewers, such as experience, attitudes and opinions, also affect the recorded answers.
5) Interviewer cheating implies that falsification of data is done during data collection, e.g. an interviewer who fills
out questionnaires without actually conducting the interviews.

Minimising Fieldwork Errors


There are general administrative and control procedures, which can improve the overall quality of fieldwork while
holding costs at acceptable levels. Most research organizations pay particular attention to five factors:
1) Selection and training of field-workers – interviewers or observers.
2) Administrative procedures for handling projects in the field.
3) Supervision of field-workers and the data-collection process.
4) Quality and cost control procedures.
5) Validation of fieldwork.

Survey plan
Surveys are quantitative information collection techniques used in marketing, political polling, and social science
research.
All surveys involve questions of some sort. When the questions are asked by a researcher, the survey is called
an interview or a researcher-administered survey. When the questions are completed by the respondent on his or her own, the
survey is referred to as a questionnaire or a self-administered survey.

Advantages of surveys
The advantages of survey techniques include:
 It is an efficient way of collecting information from a large number of respondents. Very large samples are possible.
Statistical techniques can be used to determine validity, reliability, and statistical significance.

 Surveys are flexible in the sense that a wide range of information can be collected. They can be used to study
attitudes, values, beliefs, and past behaviours.

 Because they are standardized, they are relatively free from several types of errors.
 They are relatively easy to administer.
 There is an economy in data collection due to the focus provided by standardized questions. Only questions of
interest to the researcher are asked, recorded, codified, and analyzed. Time and money are not spent on tangential
questions.

Disadvantages of surveys
Disadvantages of survey techniques include:
 They depend on subjects' motivation, honesty, memory, and ability to respond. Subjects may not be aware of their
reasons for any given action. They may have forgotten their reasons. They may not be motivated to give accurate
answers; in fact, they may be motivated to give answers that present themselves in a favorable light.



 Surveys are not appropriate for studying complex social phenomena. The individual is not the best unit of analysis
in these cases. Surveys do not give a full sense of social processes and the analysis seems superficial.

 Structured surveys, particularly those with closed ended questions, may have low validity when researching
affective variables.

Survey Methods
Once the researcher has decided on the size of sample, the next step is to decide on the method of data collection. Each
method has advantages and disadvantages.

a) Personal Interviews
An interview is called personal when the Interviewer asks the questions face-to-face with the Interviewee. Personal
interviews can take place in the home, at a shopping mall, on the street, outside a movie theatre or polling place, and so
on.
Advantages
1. The ability to let the Interviewee see, feel and/or taste a product.
2. The ability to find the target population. For example, you can find people who have seen a film much more easily
outside a theatre in which it is playing than by calling phone numbers at random.
3. Longer interviews are sometimes tolerated, particularly with in-home interviews that have been arranged in
advance. People may be willing to talk longer face-to-face than to someone on the phone.
Disadvantages
1. Personal interviews usually cost more per interview than other methods. This is particularly true of in-home
interviews, where travel time is a major factor.
2. Each mall has its own characteristics. It draws its clientele from a specific geographic area surrounding it, and its
shop profile also influences the type of client. These characteristics may differ from the target population and create a
non-representative sample.

b) Telephone Surveys
Surveying by telephone is the most popular interviewing method in most of the country. This is made possible by
nearly universal coverage (approx. 70% of homes in urban areas have a telephone).
Advantages
1. People can usually be contacted faster over the telephone than with other methods. If the Interviewers are using
CATI (computer-assisted telephone interviewing), the results can be available minutes after completing the last
interview.
2. You can dial random telephone numbers when you do not have the actual telephone numbers of potential
respondents.
3. CATI software, such as The Survey System, makes complex questionnaires practical by offering many logic options.
It can automatically skip questions, perform calculations and modify questions based on the answers to earlier
questions. It can check the logical consistency of answers and can present questions or answers choices in a random
order (the last two are sometimes important for reasons described later).



4. Skilled interviewers can often elicit longer or more complete answers than people will give on their own to mail,
email surveys (though some people will give longer answers to Web page surveys). Interviewers can also ask for
clarification of unclear responses.
5. Some software, such as The Survey System, can combine survey answers with pre-existing information you have
about the people being interviewed.
Disadvantages
1. Many telemarketers have given legitimate research a bad name by claiming to be doing research when they start a
sales call. Consequently, many people are reluctant to answer phone interviews and use their answering machines to
screen calls.
2. The growing number of working women often means that no one is home during the day. This limits calling time to
a "window" of about 6-9 p.m. (when you can be sure to interrupt dinner or a favourite TV program).
3. You cannot show or sample products by phone.

c) Mail Surveys
One way of improving response rates to mail surveys is to mail a postcard telling your sample to watch for a
questionnaire in the next week or two. Another is to follow up a questionnaire mailing after a couple of weeks with a
card asking people to return the questionnaire. The downside is that this doubles or triples your mailing cost. If you
have purchased a mailing list from a supplier, you may also have to pay a second (and third) use fee - you often cannot
buy the list once and re-use it.
Another way to increase responses to mail surveys is to use an incentive. One possibility is to send a dollar bill (or
more) along with the survey (or offer to donate the dollar to a charity specified by the respondent). If you do so, be
sure to say that the dollar is a way of saying "thanks," rather than payment for their time. Many people will consider
their time worth more than a dollar. Another possibility is to include the people who return completed surveys in a
drawing for a prize. A third is to offer a copy of the (non-confidential) result highlights to those who complete the
questionnaire. Any of these techniques will increase the response rates.
Remember that if you want a sample of 1,000 people, and you estimate a 10% response level, you need to mail 10,000
questionnaires. You may want to check with your local post office about bulk mail rates - you can save on postage
using this mailing method. However, most researchers do not use bulk mail, because many people associate "bulk"
with "junk" and will throw it out without opening the envelope, lowering your response rate. Also bulk mail moves
slowly, increasing the time needed to complete your project.
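The mailing arithmetic described above can be expressed as a small helper function; the sketch below is only illustrative and uses the same figures as the text.

# Minimal sketch: how many questionnaires to mail for a target number of
# completed returns, given an estimated response rate.
import math

def questionnaires_to_mail(target_sample: int, expected_response_rate: float) -> int:
    """Number of questionnaires to mail to expect `target_sample` returns."""
    return math.ceil(target_sample / expected_response_rate)

# Figures from the text: 1,000 completed surveys at an estimated 10% response level.
print(questionnaires_to_mail(1000, 0.10))   # prints 10000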
Advantages
1. Mail surveys are among the least expensive.
2. This is the only kind of survey you can do if you have the names and addresses of the target population, but not
their telephone numbers.
3. The questionnaire can include pictures - something that is not possible over the phone.
4. Mail surveys allow the respondent to answer at their leisure, rather than at the often inconvenient moment they are
contacted for a phone or personal interview. For this reason, they are not considered as intrusive as other kinds of
interviews.
Disadvantages
1. Time! Mail surveys take longer than other kinds. You will need to wait several weeks after mailing out
questionnaires before you can be sure that you have gotten most of the responses.



2. In populations of lower educational and literacy levels, response rates to mail surveys are often too small to be
useful. This, in effect, eliminates many immigrant populations that form substantial markets in many areas. Even in
well-educated populations, response rates vary from as low as 3% up to 90%. As a rule of thumb, the best response
levels are achieved from highly-educated people and people with a particular interest in the subject (which, depending
on your target population, could lead to a biased sample).

d) Computer Direct Interviews


These are interviews in which the Interviewees enter their own answers directly into a computer. They can be used at
malls, trade shows, offices, and so on. The Survey System's optional Interviewing Module and Interview Stations can
easily create computer-direct interviews. Some researchers set up a Web page survey for this purpose.
Advantages
1. The virtual elimination of data entry and editing costs.
2. You will get more accurate answers to sensitive questions. Recent studies of potential blood donors have shown
respondents were more likely to reveal HIV-related risk factors to a computer screen than to either human interviewers
or paper questionnaires. The National Institute of Justice has also found that computer-aided surveys among drug
users get better results than personal interviews. Employees are also more often willing to give more honest answers to
a computer than to a person or paper questionnaire.
3. The elimination of interviewer bias. Different interviewers can ask questions in different ways, leading to different
results. The computer asks the questions the same way every time.
4. Ensuring skip patterns are accurately followed. The Survey System can ensure people are not asked questions they
should skip based on their earlier answers. These automatic skips are more accurate than relying on an Interviewer
reading a paper questionnaire.
5. Response rates are usually higher. Computer-aided interviewing is still novel enough that some people will answer
a computer interview when they would not have completed another kind of interview.
Disadvantages
1. The Interviewees must have access to a computer or one must be provided for them.
2. As with mail surveys, computer direct interviews may have serious response rate problems in populations of lower
educational and literacy levels. This method may grow in importance as computer use increases.

e) Email Surveys
Email surveys are both very economical and very fast. More people have email than have full Internet access. This
makes email a better choice than a Web page survey for some populations. On the other hand, email surveys are
limited to simple questionnaires, whereas Web page surveys can include complex logic.
Advantages
1. Speed. An email questionnaire can gather several thousand responses within a day or two.
2. There is practically no cost involved once the set up has been completed.
3. You can attach pictures and sound files.
4. The novelty element of an email survey often stimulates higher response levels than ordinary "snail" mail surveys.
Disadvantages
1. You must possess (or purchase) a list of email addresses.



2. Some people will respond several times or pass questionnaires along to friends to answer. Many programs have no
check to eliminate people responding multiple times, which can bias the results. The Survey System's Email Module will only
accept one reply from each address sent the questionnaire. It eliminates duplicate and pass-along questionnaires and
checks to ensure that respondents have not ignored instructions (e.g., giving 2 answers to a question requesting only
one). A simple de-duplication sketch is given after this list.
3. Many people dislike unsolicited email even more than unsolicited regular mail. You may want to send email
questionnaires only to people who expect to get email from you.
4. You cannot use email surveys to generalize findings to the whole populations. People who have email are different
from those who do not, even when matched on demographic characteristics, such as age and gender.
5. Email surveys cannot automatically skip questions or randomize question or answer choice order or use other
automatic techniques that can enhance surveys the way Web page surveys can.
Although use of email is growing very rapidly, it is not universal - and is even less so outside the urban areas. Many
"average" citizens still do not possess email facilities, especially older people and those in lower income and education
groups. So email surveys do not reflect the population as a whole. At this stage they are probably best used in a
corporate environment where email is common or when most members of the target population are known to have
email.
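As a rough idea of how duplicate replies can be screened out in the absence of such a module, the Python sketch below keeps only the first reply received from each address; the data and field names are hypothetical.

# Minimal de-duplication sketch: keep only the first reply from each address.
# The sample replies are invented for illustration.
replies = [
    {"email": "a@example.com", "answer": "Yes"},
    {"email": "b@example.com", "answer": "No"},
    {"email": "a@example.com", "answer": "Yes"},   # a duplicate reply
]

seen = set()
unique_replies = []
for reply in replies:
    address = reply["email"].strip().lower()
    if address not in seen:            # accept only one reply per address
        seen.add(address)
        unique_replies.append(reply)

print(len(unique_replies))             # prints 2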

f) Internet/Intranet (Web Page) Surveys


Web surveys are rapidly gaining popularity. They have major speed, cost, and flexibility advantages, but also
significant sampling limitations. These limitations make software selection especially important and restrict the groups
you can study using this technique.
Advantages
1. Web page surveys are extremely fast. A questionnaire posted on a popular Web site can gather several thousand
responses within a few hours. Many people who will respond to an email invitation to take a Web survey will do so the
first day, and most will do so within a few days.
2. There is practically no cost involved once the set up has been completed. Large samples do not cost more than
smaller ones (except for any cost to acquire the sample).
3. You can show pictures. Some Web survey software can also show video and play sound.
4. Web page questionnaires can use complex question skipping logic, randomisations and other features not possible
with paper questionnaires or most email surveys. These features can assure better data.
5. Web page questionnaires can use colours, fonts and other formatting options not possible in most email surveys.
6. A significant number of people will give more honest answers to questions about sensitive topics, such as drug use
or sex, when giving their answers to a computer, instead of to a person or on paper.
7. On average, people give longer answers to open-ended questions on Web page questionnaires than they do on
other kinds of self-administered surveys.
8. Some Web survey software, such as The Survey System, can combine the survey answers with pre-existing
information you have about individuals taking a survey.



Disadvantages
1. Current use of the Internet is far from universal. Internet surveys do not reflect the population as a whole. This is
true even if a sample of Internet users is selected to match the general population in terms of age, gender and other
demographics.
2. People can easily quit in the middle of a questionnaire. They are not as likely to complete a long questionnaire on
the Web as they would be if talking with a good interviewer.
3. If your survey pops up on a web page, you often have no control over who replies - anyone from Antarctica to
Zanzibar, cruising that web page may answer.
4. Depending on your software, there is often no control over people responding multiple times to bias the results.

g) Scanning Questionnaires
Scanning questionnaires is a method of data collection that can be used with paper questionnaires that have been
administered in face-to-face interviews; mail surveys or surveys completed by an Interviewer over the telephone. The
Survey System can produce paper questionnaires that can be scanned using Remark Office OMR (Optical Mark
Reader). Other software can scan questionnaires and produce ASCII Files that can be read into The Survey System.
Advantages
1. Scanning can be the fastest method of data entry for paper questionnaires.
2. Scanning is more accurate than a person in reading a properly completed questionnaire.
Disadvantages
1. Scanning is best-suited to "check the box" type surveys and bar codes. Scanning programs have various methods to
deal with text responses, but all require additional data entry time.
2. Scanning is less forgiving (less accurate) than a person in reading a poorly marked questionnaire. It also requires
investment in additional hardware to do the actual scanning.
Summary of Survey Methods
The choice of survey method will depend on several factors. These include:

 Speed: Email and Web page surveys are the fastest methods, followed by telephone interviewing. Mail surveys are the slowest.
 Cost: Personal interviews are the most expensive, followed by telephone and then mail. Email and Web page surveys are the least expensive for large samples.
 Internet Usage: Web page and Email surveys offer significant advantages, but you may not be able to generalize their results to the population as a whole.
 Literacy Levels: Illiterate and less-educated people rarely respond to mail surveys.
 Sensitive Questions: People are more likely to answer sensitive questions when interviewed directly by a computer in one form or another.
 Video, Sound, Graphics: A need to get reactions to video, music or a picture limits your options. You can play a video on a Web page, in a computer-direct interview, or in person. You can play music when using these methods or over a telephone. You can show pictures in those first methods and in a mail survey.



Errors in survey
The following are the two major types of errors in a survey: (A) Random sampling error, and
(B) Systematic error.
A. Random sampling error: Most surveys try to portray a representative cross section of a particular target
population, but even with technically proper random probability samples, statistical errors will occur because of
chance variation. Without increasing sample size, these statistical problems are unavoidable.
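The effect of chance variation can be illustrated with a short simulation; the population figures below are invented, and the point is simply that sample means scatter around the true mean by chance, with the scatter shrinking (but never vanishing) as the sample size grows.

# Illustrative simulation of random sampling error. All figures are invented.
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=50, scale=10, size=100_000)   # "true" mean is 50

for n in (30, 300, 3000):
    # Draw 200 independent samples of size n and record each sample mean.
    sample_means = [rng.choice(population, size=n, replace=False).mean()
                    for _ in range(200)]
    print(f"n = {n:5d}: spread (std) of sample means = {np.std(sample_means):.3f}")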

B. Systematic error: Systematic errors result from some imperfect research design or from a mistake in the
execution of the research. These errors are also called non-sampling errors. A sample bias exists when the results of a
sample show a persistent tendency to deviate in one direction from the true value of the population parameter.
The two general categories of systematic error are respondent error and administrative error.
1. Respondent error: If the respondents do not cooperate or do not give truthful answers then two types of error
may occur.
a) Non-response error: To utilize the results of a survey, the researcher must be sure that those who did respond
to the questionnaire were representative of those who did not. If only those who responded are included in the survey,
then non-response error will occur. Non-respondents are most common in mail surveys, but may also occur in
telephone and personal surveys in the form of no contacts (not-at-homes) or refusals. The number of no contacts has
been increasing because of the proliferation of answering machines and growing usage of Caller ID to screen telephone
calls. Self-selection may also occur in self-administered questionnaires; in this situation, only those who feel strongly
about the subject matter will respond, causing an over-representation of extreme positions. Comparing demographics
of the sample with the demographics of the target population is one means of inspecting for possible biases. Additional
efforts should be made to obtain data from any underrepresented segments of the population. For example, call-backs
can be made on the not-at-homes.
b) Response bias: Response bias occurs when respondents tend to answer in a certain direction. This bias may be
caused by an intentional or inadvertent falsification or by a misrepresentation of the respondent‘s answer.
(1) Deliberate falsification: People may misrepresent answers in order to appear intelligent, to avoid embarrassment,
to conceal personal information, to "please" the interviewer, etc. It may be that the interviewees preferred to be viewed
as average and they will alter their responses accordingly.
(2) Unconscious misrepresentation: Response bias can arise from question format, question ambiguity or content.
Time-lapse may lead to best-guess answers.
Types of response bias: There are five specific categories of response bias. These categories overlap and are by no
means mutually exclusive.
(i) Agreement bias: This is a response bias caused by a respondent's tendency to concur with a particular position, for
example, "yea-sayers" who accept all statements they are asked about.
(ii) Extremity bias: Some individuals tend to use extremes when responding to questions which may cause extremity
bias.
(iii) Interviewer bias: If an interviewer‘s presence influences respondents to give untrue or modified answers, the
survey will contain interviewer bias. Respondents may wish to appear wealthy or intelligent, or they may try to give
the "right" answer or the socially acceptable answer.



(iv) Patronage bias: The answers to a survey may be deliberately or unintentionally misrepresented because the
respondent is influenced by the organization conducting the survey.
(v) Social desirability bias: This may occur consciously or subconsciously. Answers to questions that seek factual
information or matters of public knowledge are usually quite accurate, but the interviewer‘s presence may increase a
respondent‘s tendency toward an inaccurate response to a sensitive question in an attempt by the respondent to gain
prestige in the interviewer‘s mind.
2. Administrative error: The results of improper administration or execution of the research task are examples of
administrative error. Such errors are inadvertently caused by confusion, neglect, omission, or some other blunder.
There are four types of administrative error:
a) Data processing error: The accuracy of the data processed by computer depends on correct data entry and
programming. Mistakes can be avoided if verification procedures are employed at each processing stage.
b) Sample selection error: This type of error is a systematic error that results in an unrepresentative sample because of
an error in either the sample design or execution of the sampling procedure.
c) Interviewer error: Interviewers may record an answer incorrectly or selective perception may influence them to
record data supportive of their own attitudes.
d) Interviewer cheating: To avoid possible cheating, it is wise to inform the interviewers that a small sample of
respondents will be re-contacted to confirm that the interview actually took place.

Questions for Review:

1. What do you mean by data? Why is it needed for research?


2. Distinguish between primary and secondary data. Illustrate your answer with examples.
3. Write names of five web sources of secondary data which have not been included in the above table.
4. Explain the merits and limitations of using secondary data.
5. What precautions must a researcher take before using the secondary data?
6. In the following situations, indicate whether data from a census should be taken:
i) A TV manufacturer wants to obtain data on customer preferences with respect to size of TV.
ii) RTMNU wants to determine the acceptability of its employees for subscribing to a new employee insurance
programme.
7. How can data be collected through the Observation Method?
8. Distinguish between the observation and the interview method of data collection.
9. Discuss the different data sources, explaining their usefulness and disadvantages?
10. Discuss the important issues to be considered in designing a questionnaire.
11. What is an electronic survey? Discuss the issues to be considered in designing an electronic questionnaire.
12. Write short notes on :
A. Primary and secondary data B. Data collection methods





Unit 4 : Testing of hypothesis

Introduction
Many a time, we strongly believe some results to be true. But after taking a sample, we notice that the sample data
do not wholly support the result. The difference is due to (i) the original belief being wrong, or (ii) the sample being
slightly one-sided.
Tests are, therefore, needed to distinguish between the two possibilities. These tests tell about the likely possibilities
and reveal whether or not the difference can be due to only chance elements. If the difference is not due to chance
elements, it is significant and, therefore, these tests are called tests of significance. The whole procedure is known as
Testing of Hypothesis.
Setting up and testing hypotheses is an essential part of statistical inference. In order to formulate such a test, usually
some theory has been put forward, either because it is believed to be true or because it is to be used as a basis for
argument, but has not been proved. For example, the hypothesis may be the claim that a new drug is better than the
current drug for treatment of a disease, diagnosed through a set of symptoms.
In each problem considered, the question of interest is simplified into two competing claims/hypotheses between
which we have a choice; the null hypothesis, denoted by H0, against the alternative hypothesis, denoted by H1. These
two competing claims / hypotheses are not however treated on an equal basis; special consideration is given to the null
hypothesis.
We have two common situations:
(i) The experiment has been carried out in an attempt to disprove or reject a particular hypothesis, the null hypothesis;
thus we give that one priority so it cannot be rejected unless the evidence against it is sufficiently strong. For example,
null hypothesis H0: there is no difference in taste between coke and diet coke, against the alternate hypothesis H1: there
is a difference in the tastes.
(ii) If one of the two hypotheses is 'simpler', we give it priority so that a more 'complicated' theory is not adopted
unless there is sufficient evidence against the simpler one. For example, it is 'simpler' to claim that there is no
difference in flavour between coke and diet coke than it is to say that there is a difference.
The hypotheses are often statements about population parameters like the expected value and the variance. For example, H0
might be the statement that the expected value of the height of ten-year-old boys in the Indian population is not


different from that of ten-year-old girls. A hypothesis might also be a statement about the distributional form of a
characteristic of interest; for example, that the height of ten-year-old boys is normally distributed within the Indian
population.

Concept of hypothesis
A hypothesis is an assumption that we make about a population parameter. This can be any assumption about a
population parameter, not necessarily based on statistical data. For example, it can also be based on the gut feel of a
manager. Managerial hypotheses are based on intuition; the market place decides whether the manager's intuitions
were in fact correct.
In fact, managers propose and test hypotheses all the time. For example:
 If a manager says 'if we drop the price of this car model by ` 15000, we'll increase sales by 25000 units', that is a
hypothesis. To test it in reality we have to wait until the end of the year and count sales.
 A manager's estimate that sales per territory will grow on average by 30% in the next quarter is also an assumption
or hypothesis.
To understand the meaning of a hypothesis, let us see some definitions:
"A hypothesis is a tentative generalization, the validity of which remains to be tested. In its most elementary stage
the hypothesis may be any guess, hunch, imaginative idea, which becomes the basis for action or investigation"
(G. A. Lundberg).
"It is a proposition which can be put to test to determine validity" (Goode and Hatt).
"A hypothesis is a question put in such a way that an answer of some kind can be forthcoming" (Rummel and
Ballaine).
These definitions lead us to conclude that a hypothesis is a tentative solution or explanation or a guess or assumption
or a proposition or a statement to the problem facing the researcher, adopted on a cursory observation of known and
available data, as a basis of investigation, whose validity is to be tested or verified.

How would the manager go about testing this assumption?


Suppose he has 70 territories under him.
 One option for him is to audit the results of all 70 territories and determine whether the average growth is
greater than or less than 30%. This is a time-consuming and expensive procedure.
 Another way is to take a sample of territories and audit sales results for them. Once we have our sales growth
figure, it is likely that it will differ somewhat from our assumed rate. For example we may get a sample rate of
27%. The manager is then faced with the problem of determining whether his assumption or hypothesized rate of
growth of sales is correct or the sample rate of growth is more representative. To test the validity of our
assumption about the population we collect sample data and determine the sample value of the statistic.
We then determine whether the sample data support our hypothesized value of the average sales growth.

How is this Done?


If the difference between our hypothesized value and the sample value is small, then it is more likely that our
hypothesized value of the mean is correct. The larger the difference the smaller the probability that the hypothesized
value is correct. In practice however very rarely is the difference between the sample mean and the hypothesized



population value large enough or small enough for us to be able to accept or reject the hypothesis prima facie. We
cannot accept or reject a hypothesis about a parameter simply on intuition; instead we need to use objective criteria
based on sampling theory to accept or reject the hypothesis. Hypothesis testing is the process of making inferences
about a population based on a sample. The key question therefore in hypothesis testing is: how likely is it that a
population such as the one we have hypothesized would produce a sample such as the one we are looking at?

Hypothesis Formulation
When research is conducted, hypothesis formulation is one of the most preliminary steps. Hypothesis formulation
helps in formulating the research problem. Hypothesis formulation is not a necessary but an important step of research. A
valid and reasonable piece of research can be conducted without any hypothesis. There can be a single hypothesis or as many
as required.
A hypothesis is an expected answer to a research question; it provides direction to the research study. A hypothesis in
reality determines the focal point of the study; in the absence of a valid and testable hypothesis the researcher cannot
concentrate on one direction. Many new researchers face the problem of summing up their research question in a
precise and concise manner. The reason behind this problem is mainly the lack of a hypothesis or the presence of an irrelevant
hypothesis.
In some studies, though, hypothesis is not required and a valid as well as reliable study can be conducted in the
absence of hypothesis. These studies do not require testing of interaction between variables. In most of the other
studies this is not so. In addition, a single hypothesis can be enough for many studies but some may require the
formulation of more than one hypothesis. Such studies are just as valid, reliable and generalizable as studies that
have a single hypothesis, but it takes more time to test more than one hypothesis.
The purpose of hypothesis formulation and the method by which a hypothesis has to be formulated differ for different
research studies. In one type of research the hypothesis is an integral part of the whole study, while in another hypothesis testing
is not compulsory, and in still another the purpose is to build hypotheses for future studies rather than to test them in
reality.
Basically there are four types of research studies; the purpose of hypothesis formulation and testing in each
category is explained below.
1. Experimental Research Studies
Experimental research studies are based on critical scientific methods. In these the purpose of hypothesis formulation

is to build a proposition or to give a possible reason for a phenomenon that occurs. On the basis of this proposition the
hypothesis is tested to find out whether a particular phenomenon occurs due to this reason or not. Hypothesis
formulation in experimental research thus helps in taking the study forward. In the absence of any suitable hypothesis
the experimenter has to test various possible reasons for the phenomenon to be interpreted. Hypothesis formulation in
experimental research gives direction to the study and it also clears the clutter so that the researcher can think
directionally and with greater confidence. It should be noted that the possible rejection or acceptance of the hypothesis
does not influence the credibility of the hypothesis; it rather concludes whether the reason being tested was true or false. In case
the hypothesis does not prove to be true, the researcher gets new ideas or new directions for further research to test
what was the actual reason for the particular phenomenon.



Hypothesis formulation in experimental studies also helps in removing any biases, uncertainties, prejudices and myths
linked with a particular subject area.
2. Descriptive Research Studies
These studies often do not have any hypothesis because the purpose of the study is to describe a fact, figure,
phenomenon, situation or person. Such a study does not need a hypothesis; most of the facts are already there, and the
researcher brings those facts together rather than actually experimenting and finding answers.
3. Exploratory Research Studies
The purpose of exploratory research is to develop a better understanding of a phenomenon to be tested in future.
Exploratory research studies are pure research rather than applied research, which can be applied to the population at
the current time. In many of the exploratory research studies there is no hypothesis testing, but hypotheses can be
formulated in exploratory research studies. Actually these research studies help the researcher in formulating
hypotheses. Exploratory research is characterized by its flexibility because the purpose is not to reach a conclusion but
to provide a platform in the form of better and more suitable hypotheses, research design, and clarified concepts.
4. Explanatory Research Studies
The term explanatory is in reality self-explanatory; these studies determine cause and effect relationships. Explanatory
research studies explain why there is a correlation between two or more phenomena; thus they explain the
reason why rather than what or which. While experimental studies explain what the relationship is or which two
phenomena are related, explanatory studies determine why there is a relationship and test this kind of hypothesis.

Characteristics of hypothesis
A hypothesis controls and directs the research study. When a problem is felt, we require the hypothesis to explain it.
Generally, there is more than one hypothesis which aims at explaining the same fact. But all of them cannot be equally
good. Therefore, how can we judge a hypothesis to be true or false, good or bad? Agreement with facts is the sole and
sufficient test of a true hypothesis. Therefore, certain conditions can be laid down for distinguishing a good
hypothesis from bad ones.
The formal conditions laid down by thinkers provide the criteria for judging a hypothesis as good or valid. These
conditions are as follows:

i) A hypothesis should be empirically verifiable: The most important condition for a valid hypothesis is
that it should be empirically verifiable. A hypothesis is said to be verifiable, if it can be shown to be either true or false
by comparing with the facts of experience directly or indirectly. A hypothesis is true if it conforms to facts and it is false
if it does not. Empirical verification is the characteristic of the scientific method.

ii) A hypothesis should be relevant: The purpose of formulating a hypothesis is always to explain some facts.
It must provide an answer to the problem which initiated the enquiry. A hypothesis is called relevant if it can explain
the facts of enquiry.

iii) A hypothesis must have predictive and explanatory power: Explanatory power means that a good
hypothesis, over and above the facts it proposes to explain, must also explain some other facts which are beyond its
original scope. A wide range of observable facts should be deducible from the hypothesis. The wider the range, the
greater is its explanatory power.



iv) A hypothesis must furnish a base for deductive inference on consequences: In the process of
investigation, we always pass from the known to the unknown. It is impossible to infer anything from the absolutely
unknown. We can only infer what would happen under supposed conditions by applying the knowledge of nature we
possess. Hence, our hypothesis must be in accordance with our previous knowledge.

v) A hypothesis does not go against the traditionally established knowledge: As far as possible, a
new hypothesis should not go against any previously established law or knowledge. The new hypothesis is expected to
be consistent with the established knowledge.
vi) A hypothesis should be simple: A simple hypothesis is preferable to a complex one. It sometimes happens
that there are two or more hypotheses which explain a given fact equally well. Both of them are verified by observable
facts. Both of them have a predictive power and both are consistent with established knowledge. All the important
conditions of hypothesis are thus satisfied by them. In such cases the simpler one is to be accepted in preference to the
complex one.

vii) A hypothesis must be clear, definite and certain: It is desirable that the hypothesis must be simple
and specific to the point. It must be clearly defined in a manner commonly accepted. It should not be vague or
ambiguous.
viii) A hypothesis should be related to available techniques: If tools and techniques are not available
we cannot test the hypothesis. Therefore, the hypothesis should be formulated only after due thought is given to the
methods and techniques that can be used to measure the concepts and variables related to the hypothesis.

Testing of Hypothesis
When the hypothesis has been framed in the research study, it must be verified as true or false. Verifiability is one of
the important conditions of a good hypothesis. Verification of hypothesis means testing of the truth of the hypothesis
in the light of facts. If the hypothesis agrees with the facts, it is said to be true and may be accepted as the explanation
of the facts. But if it does not agree it is said to be false. Such a false hypothesis is either totally rejected or modified.
Verification is of two types, viz., direct verification and indirect verification.
1. Direct verification may be either by observation or by experiments. When direct observation shows that the
supposed cause exists where it was thought to exist, we have a direct verification. When a hypothesis is verified by an
experiment in a laboratory it is called direct verification by experiment. When the hypothesis is not amenable for direct
verification, we have to depend on indirect verification.
2. Indirect verification is a process in which certain possible consequences are deduced from the hypothesis and
they are then verified directly. Two steps are involved in indirect verification. (i) Deductive development of hypothesis:
By deductive development certain consequences are predicted and (ii) finding whether the predicted consequences
follow. If the predicted consequences come true, the hypothesis is said to be indirectly verified. Verification may be
done directly or indirectly or through logical methods.
Testing of a hypothesis is done by using statistical methods. Testing is used to accept or reject an assumption or
hypothesis about a random variable using a sample from the distribution. The assumption is the null hypothesis (H0),
and it is tested against some alternative hypothesis (H1). Statistical tests of hypothesis are applied to sample data. The
procedure involved in testing a hypothesis is: A) select a sample and collect the data; B) convert the variables or
attributes into statistical form, such as a mean or proportion; C) formulate the hypotheses; D) select an appropriate test for the



data, such as a t-test or Z-test; E) perform the computations; and F) finally draw the inference of accepting or rejecting the null
hypothesis.

Procedure for hypothesis testing


Hypothesis testing involves the following steps:
1. Formulate the null and alternative hypotheses.
2. Choose the appropriate test.
3. Choose a level of significance (alpha) - determine the rejection region.
4. Gather the data and calculate the test statistic.
5. Determine the probability of the observed value of the test statistic under the null hypothesis
given the sampling distribution that applies to the chosen test.
6. Compare the value of the test statistic to the rejection threshold.
7. Based on the comparison, reject or do not reject the null hypothesis.
8. Make the research conclusion.
In order to analyze whether research results are statistically significant or arose simply by chance, a test of statistical
significance can be run.
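The steps above can be illustrated with a one-sample z-test in Python, assuming the population standard deviation is known; all the figures are hypothetical and loosely follow the sales-growth example discussed earlier in this unit.

# Hypothetical one-sample z-test following the steps listed above.
# Assumes the population standard deviation is known; every number is invented.
import math
from scipy import stats

mu_0 = 30.0          # Step 1: H0: mean sales growth = 30%; H1: mean != 30%
sigma = 8.0          # assumed known population standard deviation
alpha = 0.05         # Step 3: level of significance

sample_mean = 27.0   # Step 4: statistic computed from the sample
n = 35

z = (sample_mean - mu_0) / (sigma / math.sqrt(n))   # test statistic
p_value = 2 * stats.norm.sf(abs(z))                 # Step 5: two-tailed probability
z_critical = stats.norm.ppf(1 - alpha / 2)          # Step 6: rejection threshold

print(f"z = {z:.2f}, p-value = {p_value:.4f}, critical value = ±{z_critical:.2f}")
if abs(z) > z_critical:                             # Step 7
    print("Reject H0: the growth rate differs significantly from 30%")
else:
    print("Do not reject H0")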

How do we use Sampling to accept or Reject Hypothesis?


Again we go back to the normal sampling distribution. We use the result that there is a certain fixed probability
associated with intervals around the mean defined in terms of the number of standard deviations from the mean. Therefore
our problem of testing a hypothesis reduces to determining the probability that a sample statistic such as the one we
have obtained could have arisen from a population with a hypothesized mean μ. In a hypothesis test we need two
numbers to make our decision whether to accept or reject the null hypothesis:
 an observed value computed from the sample
 a critical value defining the boundary between the acceptance and rejection regions.
Instead of measuring the variables in their original units, we calculate a standardized z variable, which follows a standard
normal distribution with mean 0 and standard deviation 1. The z statistic tells us how many standard deviations above
(z > 0) or below (z < 0) the standardized mean our observation falls. We can convert the observed data into the
standardized scale using the transformation

z = (x̄ − μ) / (σ / √n)

The z statistic measures the number of standard deviations by which the sample mean lies away from the hypothesized mean.
From the standard normal tables we can calculate the probability of the sample mean differing from the true
population mean by a specified number of standard deviations.
For example:
o We can find the probability that the sample mean differs from the population mean by two or more standard
deviations.
It is this probability value that will tell us how likely it is that a given sample mean could have been obtained from a
population with a hypothesized mean μ.



o If the probability is low, for example less than 5%, it can perhaps be reasonably concluded that the difference
between the sample mean and the hypothesized population mean is too large and the chance that the population
would produce such a random sample is too low.
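For instance, the probability mentioned in the first point above can be read from standard normal tables or computed directly; a minimal sketch using scipy is shown here.

# Probability that a sample mean lies two or more standard errors away from the
# hypothesized population mean (two-tailed), under the standard normal distribution.
from scipy import stats

prob = 2 * stats.norm.sf(2)        # P(|Z| >= 2)
print(f"{prob:.4f}")               # about 0.0455, i.e. roughly 4.6%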
What probability constitutes too low or acceptable level is a judgment for decision makers to make. Certain situations
demand that decision makers be very sure about the characteristics of the items being tested and even a 2% probability
that the population produces such a sample is too high. In other situations there is greater latitude and a decision
maker may be willing to accept a hypothesis with a 5% probability of chance variation.
In each situation what needs to be determined are the costs resulting from an incorrect decision and the exact level of
risk we are willing to assume. Our minimum standard for an acceptable probability, say, 5%, is also the risk we run of
rejecting a hypothesis that is true.

Hypothesis errors:
 type I error (also called alpha error)
o the study results lead to the rejection of the null hypothesis even though it is actually true
 type II error (also called beta error)
o the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false
The choice of significance level affects the ratio of correct and incorrect conclusions which will be drawn. Given a
significance level there are four alternatives to consider:
Type I and Type II errors:
 Correct conclusion: accept a correct (true) null hypothesis.
 Correct conclusion: reject an incorrect (false) null hypothesis.
 Incorrect conclusion: reject a correct null hypothesis (Type I error).
 Incorrect conclusion: accept an incorrect null hypothesis (Type II error).

Consider the following example. In a straightforward test of two products, we may decide to market product A if, and
only if, 60% of the population prefer the product. Clearly we can set a sample size so as to reject the null hypothesis of
A = B = 50% at, say, a 5% significance level. If we get a sample which yields 62% (and there will be 5 chances in a hundred
that we get a figure greater than 60%) and the null hypothesis is in fact true, then we make what is known as a Type I
error.
If, however, the real population preference is A = 62%, then we shall accept the null hypothesis A = 50% on nearly half the
occasions, as shown in the diagram overleaf. In this situation we shall be saying "do not market A" when in fact there is
a market for A. This is the Type II error. We can of course increase the chance of making a Type I error, which will
automatically decrease the chance of making a Type II error.
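A rough numerical sketch of this example is given below, using the normal approximation to the binomial; the sample size is derived from the stated 60% cut-off and the 5% level, so it is an assumption of the sketch rather than a figure from the text.

# Illustrative Type I / Type II error calculation for the two-product example,
# using the normal approximation to the binomial. The derived sample size is an
# assumption of this sketch.
import math
from scipy import stats

p0, cutoff, alpha = 0.50, 0.60, 0.05   # H0: preference = 50%; market A only above 60%

# Sample size at which P(sample proportion > cutoff | p = p0) is exactly alpha.
z_alpha = stats.norm.ppf(1 - alpha)
n = math.ceil((z_alpha * math.sqrt(p0 * (1 - p0)) / (cutoff - p0)) ** 2)

# Type II error: probability of NOT rejecting H0 when the true preference is 62%.
p1 = 0.62
se1 = math.sqrt(p1 * (1 - p1) / n)
beta = stats.norm.cdf((cutoff - p1) / se1)

print(f"sample size n = {n}")
print(f"Type I error (alpha) = {alpha:.2f}")
print(f"Type II error (beta) = {beta:.3f}, power = {1 - beta:.3f}")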
Obviously some sort of compromise is required. This depends on the relative importance of the two types of error. If it
is more important to avoid rejecting a true hypothesis (Type I error), a high confidence coefficient (a low value of α) will
be used. If it is more important to avoid accepting a false hypothesis, a low confidence coefficient may be used. An
analogy with the legal profession may help to clarify the matter. Under our system of law, a man is presumed innocent
of murder until proved otherwise. Now, if a jury convicts a man when he is, in fact, innocent, a type I error will have
been made: the jury has rejected the null hypothesis of innocence although it is actually true. If the jury absolves the
man, when he is, in fact, guilty, a type II error will have been made: the jury has accepted the null hypothesis of



innocence when the man is really guilty. Most people will agree that in this case, a type I error, convicting an innocent
man, is the more serious.
In practice, of course, researchers rarely base their decisions on a single significance test. Significance tests may be
applied to the answers to every question in a survey, but the results will only be convincing if consistent patterns
emerge. For example, we may conduct a product test to find out consumers' preferences. We do not usually base our
conclusions on the results of one particular question; we ask several, make statistical tests on the key questions and
look for consistent significances. We must remember that when one makes a series of tests, some of the correct
hypotheses will be rejected by chance. For example, if 20 questions were asked in our "before" and "after" survey and
we test each question at the 5% level, then one of the differences is likely to give significant results even if there is no
real difference in the population.
No mention is made in these notes of considerations of costs of incorrect decisions. Statistical significance is not always
the only criterion for basing action. Economic considerations of alternative actions are often just as important.
These, therefore, are the basic steps in the statistical testing procedure. The majority of tests are likely to be parametric
tests where researchers assume some underlying distribution like the normal or binomial distribution. Researchers
will obtain a result, say a difference between two means, calculate the standard error of the difference and then ask
"How far away from the zero difference hypothesis is the difference we have found from our samples?"
To enable researchers to answer this question, they convert their actual difference into "standard errors" by dividing it
by its standard deviation, then refer to a chart to ascertain the probability of such a difference occurring.

Uses of Hypothesis
If a clear scientific hypothesis has been formulated, half of the research work is already done. The advantages/utility of
having a hypothesis are summarized below:
i) It is a starting point for many a research work.
ii) It helps in deciding the direction in which to proceed.
iii) It helps in selecting and collecting pertinent facts.
iv) It is an aid to explanation.
v) It helps in drawing specific conclusions.
vi) It helps in testing theories.
vii) It works as a basis for future knowledge.

Use of statistical techniques for testing of hypothesis


A hypothesis test is a statistical method that uses sample data to evaluate a hypothesis about a population parameter.
The hypothesis-testing procedure is standard and follows a specific order:
(i) first state a hypothesis about a population (a population parameter, e.g. the mean µ),
(ii) obtain a random sample from the population and find its mean x̄, and
(iii) compare the sample data with the hypothesis on the standard normal (z) scale.
A hypothesis test is typically used in the context of a research study, i.e. a researcher completes one round of a field
investigation and then uses a hypothesis test to evaluate the results. Depending on the type of research and the type of
data, the details will differ from one research situation to another.



The following are some of the statistical techniques for testing of hypothesis

1. Z-Score Statistics
The Z-score is called a test statistic. The purpose of a test statistic is to determine whether the result of a research study
(the obtained difference) is more than what would be expected by chance alone.

z = Obtained difference / Difference due to chance

Now suppose a manufacturer produces articles of good quality. A purchaser selects a sample at random, and it so
happens that the sample contains many defective articles, which leads the purchaser to reject the whole lot. The
manufacturer then suffers a loss even though the lot as a whole is of good quality. This Type-I error is therefore called
the "producer's risk".
On the other hand, if we accept the entire lot on the basis of a sample and the lot is not really good, the consumer bears
the loss. This Type-II error is therefore called the "consumer's risk".
In practical situations, still other aspects are considered while accepting or rejecting a lot. The risks involved for both
producer and consumer are compared, the Type-I and Type-II error levels are fixed, and a decision is reached.
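The producer's and consumer's risks described above can be illustrated with a small acceptance-sampling sketch; the sample size, acceptance number and lot qualities below are hypothetical assumptions, and the binomial model is one common simplification.

from scipy.stats import binom

n = 50   # hypothetical sample size drawn from the lot
c = 2    # the lot is accepted if at most c defectives are found

p_good = 0.01   # assumed defective rate of a genuinely good lot
p_bad = 0.10    # assumed defective rate of a poor lot

# Producer's risk (Type-I error): a good lot is rejected
producers_risk = 1 - binom.cdf(c, n, p_good)
# Consumer's risk (Type-II error): a poor lot is accepted
consumers_risk = binom.cdf(c, n, p_bad)

print(f"Producer's risk = {producers_risk:.3f}")
print(f"Consumer's risk = {consumers_risk:.3f}")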

2. Student's t-distribution
This concept was introduced by W. S. Gosset (1876 - 1937), who adopted the pen name "Student". The
distribution is therefore known as Student's t-distribution.

It is used to establish confidence limits and test the hypothesis when the population variance is not known and sample
size is small (< 30).

If a random sample x1, x2, . . . , xn of n values is drawn from a normal population with mean μ and standard deviation
σ, then the sample mean is

x̄ = (Σ xi) / n

and the statistic t = (x̄ – μ) / (s / √n), where s is the sample standard deviation, follows the t-distribution with n – 1
degrees of freedom.
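A brief sketch of a one-sample t-test for a small sample with unknown population variance is given below, using scipy.stats; the data values and the hypothesised mean are invented for illustration.

from scipy import stats

# Hypothetical small sample (n < 30), population variance unknown
sample = [12.1, 11.8, 12.4, 12.0, 11.6, 12.3, 12.2, 11.9]
mu0 = 12.5   # hypothesised population mean

t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, df = {len(sample) - 1}")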

3. Chi-square test
Tests like z-score and t are based on the assumption that the samples were drawn from normally distributed
populations or more accurately that the sample means were normally distributed. As these tests require assumptions
about the type of population or parameters, these tests are known as ‗parametric tests‘.
There are many situations in which it is impossible to make any rigid assumption about the distribution of the
population from which samples are drawn. This limitation led to search for non-parametric tests. Chi-square (Read as
Ki-square) test of independence and goodness of fit is a prominent example of a non-parametric test. The chi-square
(2) test can be used to evaluate a relationship between two nominal or ordinal variables.
2 (chi-square) is measure of actual divergence of the observed and expected frequencies. In sampling studies, we
never expect that there will be a perfect coincidence between actual and observed frequencies and the question that we
have to tackle is about the degree to which the difference between actual and observed frequencies can be ignored as
arising due to fluctuations of sampling. If there is no difference between actual and observed frequencies then 2 = 0. If
there is a difference, then 2 would be more than 0. But the difference may also be due to sample fluctuation and thus

[© 2013-14: TMC Study Material on RM] Page 134


the value of 2 should be ignored in drawing the inference. Such values of 2 under different conditions are given in the
form of tables and if the actual value is greater than the table value, it indicates that the difference is not solely due to
sample fluctuation and that there is some other reason.
On the other hand, if the calculated 2 is less than the table value, it indicates that the difference may have arisen due to
chance fluctuations and can be ignored. Thus 2-test enables us to find out the divergence between theory and fact or
between expected and actual frequencies.
If the calculated value of 2 is very small, compared to table value, then expected frequencies are very little and the fit
is good.
If the calculated value of 2 is very large as compared to table value then divergence between the expected and the
observed frequencies is very big and the fit is poor.
We know that the degrees of freedom (df) are determined by the number of independent constraints in a set of data.
Suppose there is a 2 × 2 association table and the actual frequencies of the various classes are as follows:

              A          a        Total
   B       (AB) 22    (aB) 38       60
   b       (Ab)  8    (ab) 32       40
   Total        30         70      100

Now the formula for calculating the expected frequency of any class (cell) is:

Expected frequency = (Row total for the row containing the cell × Column total for the column containing the cell) / Total number of observations

In notation: Expected frequency = (R × C) / N

For example, if we have two attributes A and B that are independent, then the expected frequency of the class (cell) AB
would be (30 × 60) / 100 = 18.

Once the expected frequency of cell (AB) is decided the expected frequencies of remaining three classes are
automatically fixed.

Thus for class (aB) it would be 60 – 18 = 42

for class (Ab) it would be 30 – 18 = 12

for class (ab) it would be 70 – 42 = 28

This means that, so far as a 2 × 2 association (contingency) table is concerned, there is only 1 degree of freedom.

In such tables, the degrees of freedom are given by the formula df = (c – 1)(r – 1),

where c = number of columns and r = number of rows.

Thus in a 2 × 2 table, df = (2 – 1)(2 – 1) = 1
in a 3 × 3 table, df = (3 – 1)(3 – 1) = 4
in a 4 × 4 table, df = (4 – 1)(4 – 1) = 9, etc.

If the data are not in the form of a contingency table but are a series of individual observations, or a discrete or
continuous series, then the degrees of freedom are calculated as df = n – 1, where n is the number of frequencies or the
number of independent observations.

χ² = Σ [ (Observed frequency – Expected frequency)² / Expected frequency ]

   = Σ [ (O – E)² / E ]

where O = Observed frequency and E = Expected frequency.
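The 2 × 2 example above can be checked with scipy's chi-square test of independence; this is only a sketch, and correction=False is passed so that the expected frequencies match the hand calculation (whether to apply Yates' continuity correction is a separate judgement).

from scipy.stats import chi2_contingency

# Observed frequencies from the 2 x 2 table above
observed = [[22, 38],   # B:  AB, aB
            [ 8, 32]]   # b:  Ab, ab

chi2, p_value, df, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p_value:.4f}")
print("Expected frequencies:", expected)   # 18, 42, 12 and 28, as worked out above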

Interpretation of data
Introduction
Statistics are not an end in themselves but they are a means to an end, the end being to draw certain conclusions from
them. This has to be done very carefully, otherwise misleading conclusions may be drawn and the whole purpose of
doing research may get vitiated. A researcher/statistician, besides the collection and analysis of data, has to draw
inferences and explain their significance. Through interpretation the meanings and implications of the study become
clear. Analysis is not complete without interpretation, and interpretation cannot proceed without analysis. Both are,
thus, inter-dependent. In this unit, therefore, we will discuss the interpretation of analysed data, summarizing the
interpretation and statistical fallacies.

Meaning of interpretation
The following definitions can explain the meaning of interpretation:

• "The task of drawing conclusions or inferences and of explaining their significance after a careful analysis of selected data is known as interpretation."
• "It is an inductive process, in which you make generalizations based on the connections and common aspects among the categories and patterns."
• "Scientific interpretation seeks relationship between the data of a study and between the study findings and other scientific knowledge."
• "Interpretation in a simple way means the translation of a statistical result into an intelligible description."

Thus, analysis and interpretation are central steps in the research process. The purpose of analysis is to summarize the
collected data, whereas interpretation is the search for the broader meaning of the research findings. In interpretation,
the researcher goes beyond the descriptive data to extract meaning and insights from the data.

Why interpretation?
A researcher/ statistician is expected not only to collect and analyse the data but also to interpret the results of his/ her
findings. Interpretation is essential for the simple reason that the usefulness and utility of research findings lie in
proper interpretation. It is only through interpretation that the researcher can expose relations and patterns that
underlie his findings. In case of hypothesis testing studies the researcher may arrive at generalizations. In case the
researcher had no hypothesis to start with, he would try to explain his findings on the basis of some theory. It is only
through interpretation that the researcher can appreciate why his findings are what they are, and can make others
understand the real significance of his research findings.
Interpretation is not a mechanical process. It calls for a critical examination of the results of one‘s analysis in the light of
all the limitations of data gathering. For drawing conclusions you need a basis. Some of the common and important
bases of interpretation are: relationships, ratios, rates and percentages, averages and other measures of comparison.

Essentials for interpretation
Certain points should be kept in mind before proceeding to draw conclusions from statistics. It is essential that:
a) The data are homogeneous: It is necessary to ascertain that the data are strictly comparable. We must be careful to
compare the like with the like and not with the unlike.
b) The data are adequate: Sometimes it happens that the data are incomplete or insufficient and it is neither possible to
analyze them scientifically nor is it possible to draw any inference from them. Such data must be completed first.
c) The data are suitable: Before considering the data for interpretation, the researcher must confirm the required
degree of suitability of the data. Inappropriate data are like no data. Hence, no conclusion is possible with unsuitable
data.
d) The data are properly classified and tabulated: Every care is to be taken as a pre-requisite, to base all types of
interpretations on systematically classified and properly tabulated data and information.
e) The data are scientifically analyzed: Before drawing conclusions, it is necessary to analyze the data by applying
scientific methods. Wrong analysis can play havoc with even the most carefully collected data. If interpretation is based
on uniform, accurate, adequate, suitable and scientifically analyzed data, there is every possibility of attaining a better
and representative result. Thus, from the above considerations we may conclude that it is essential to have all the pre-
requisites/pre-conditions of interpretation satisfied to arrive at better conclusions.

Precautions in interpretation
It is important to recognize that errors can be made in interpretation if proper precautions are not taken. The
interpretation of data is a very difficult task and requires a high degree of skill, care, judgement and objectivity. In the
absence of these, there is every likelihood of data being misused to prove things that are not true. The following
precautions are required before interpreting the data.
1) The interpreter must be objective.
2) The interpreter must understand the problem in its proper perspective.
3) He / she must appreciate the relevance of various elements of the problem.
4) See that all relevant, adequate and accurate data are collected.
5) See that the data are properly classified and analyzed.
6) Find out whether the data are subject to limitations; if so, what are they?
7) Guard against the sources of errors.
8) Do not make interpretations that go beyond the information / data.
9) Factual interpretation and personal interpretation should not be confused. They should be kept apart.
If these precautions are taken at the time of interpretation, reasonably good conclusions can be arrived at.

Techniques of Interpretation
There are many different interpretation techniques, such as graphs and charts, but most are not used in business research.
Those used most often include:

1. pie charts
2. vertical bar charts (histograms)
3. horizontal bar charts (also histograms)
4. pictograms
5. line charts
6. area charts
Some other types of charts, well suited to audience research, but less often used, include

7. perceptual maps
Though many different kinds of graph are possible, if a report includes too many types it is often confusing for readers,
who must work out how to interpret each new type of graph and why it differs from an earlier one. It is
recommended to use as few types of graph as necessary.
If you have a spreadsheet or graphics program, such as Excel or Deltagraph, it‘s very easy to produce graphs. You
simply enter the numbers and labels in a table, click a symbol to show which type of graph you want, and it appears
before your eyes. These graphs are usually not very clear when first produced, but the software has many options for
changing headings, scales, and graph layout. You can waste a lot of time perfecting these graphs. Excel (actually,
Microsoft Graph, which Excel uses) has dozens of options, and it takes a lot of clicking of the right-hand mouse button
to discover them all. If you don‘t have a recent and powerful computer, Excel can be a very slow and frustrating
program to use.
The main types of graph include pie charts, bar charts (histograms), line charts, area charts, and several others.

1) Pie chart
A round graph, cut (like a pie) into slices of varying size, all adding to 100%. Because a pie chart is round, it is useful for
communicating data which takes a "round" form: for example, the answers to "How many minutes in each hour would
you like FM RADIOMIRCHI to spend on each of the following types of program...?" In this case, the pie corresponds to
a clock face, and the slices can be interpreted as fractions of an hour.
Pie charts are easily understood when the slices are similar in size, but if several slices are less than 5%, or lots of
different colours are used, it can be quite difficult to read a pie chart. In that case the chart has to be very big, taking
perhaps half a page to convey one set of numbers, which is not a very efficient way to display information.

2) Vertical bar chart
Also known as a histogram. A very common type of graph, easily understood. But when one of these charts has more
than about 6 vertical bars, there‘s very little space below each bar to explain what it‘s measuring.

3) Horizontal bar chart


Exactly like a vertical bar chart, but turned sideways. The big advantage of the horizontal bar chart is that you can
easily read a description with more than one word. Unfortunately, most graphics software displays the bars upside
down: you are expected to read from the bottom, upwards to the top. (Like the two charts above, a standard horizontal
bar chart can be created with Excel.)
You don't need graphics software to produce a horizontal bar chart: you can do it easily with a word processing
program. One of the easiest ways to do this is to use the | symbol to produce the bars. This symbol is usually found on
the \ key; it is not a lower-case L, an upper-case I or the number 1. It stands out best in bold type. This is what we call a
blobbogram.
For example:

Q14. SEX OF RESPONDENT

Male 47.4% ||||||||||||||||||||||||

Female 52.6% ||||||||||||||||||||||||||

Total 100.0% = 325 cases

If each symbol represents 2% of the sample, you can usually fit the graph on a single line. Round each figure to the
nearest 2% to work out how many times to press the symbol key. In the above example, 47.4% is closer to 48% than to
46%, so I pressed the | key 24 times to graph the percentage of men. This is a very clear layout, and quick to produce,
so it is well suited to a preliminary report.
A more elaborate-looking graph can be made by using special symbols. For example, if you have the font Zapf
Dingbats or Wingdings, you can use the shaded-box symbol. This is wider than the | symbol, and no more than about
20 will fit on a normal-width line if half the line is taken up by the labels, so round each figure to the nearest 5% and let
each symbol stand for 5%:

Q14. SEX OF RESPONDENT

Male 47.4% █████████

Female 52.6% ███████████

Total 100.0% = 325 cases
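Text-style bars like the ones above can also be generated mechanically. The small Python sketch below rounds each percentage to the nearest 2% and prints one | per 2%, as described earlier; the function name and layout are simply one possible choice.

def blobbogram(label, pct, unit=2):
    """Print a one-line text bar: one '|' for every `unit` per cent, rounded."""
    bars = "|" * round(pct / unit)
    print(f"{label:<10}{pct:>6.1f}%  {bars}")

blobbogram("Male", 47.4)     # prints 24 bars (47.4% rounds to 48%)
blobbogram("Female", 52.6)   # prints 26 bars (52.6% rounds to 52%)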

4) Pictograms
Like a bar chart, a pictogram can be either vertical or horizontal, but instead of showing a solid bar, a pictogram shows
a number of small symbols, such as little human figures. If each symbol represents 10%, and the number to be graphed
is 45%, you see four and a half little men...

5) Line chart
This is used when the variable you are graphing is a numeric one. In audience research, most variables are nominal,
not numeric, so line charts aren't needed much. But to plot the answers to a question such as "How many people live in
your household?" you could produce a line graph of the distribution.
It is normal to show the measurement (e.g. percentage) on the vertical scale, and the quantity being measured (e.g.
hours per week) on the horizontal scale. Unlike a bar chart, it will confuse people if the scales are exchanged. You will
find that almost every line chart has a peak in the middle and falls off to each side, reflecting what is known as the
"normal curve".
A line chart is really another form of a vertical bar chart. You could turn a vertical bar chart into a line chart by drawing
a line connecting the top of each bar, then deleting the bars.
A line chart can have more than one line. For example, you could have a line chart comparing the number of hours per
week that men and women watch TV. There would be two lines, one for each sex. Each line needs to be shown with a
different style, or a different colour. With more than 3 or 4 lines, a line chart becomes very confusing, especially when
the lines cross each other.

6) Area chart
In a line chart with several lines, such as the above example with two sexes, each line starts from the bottom of the
chart. That way, you can compare the height of the lines at any point. An area chart is a little different, in that each line
starts from the line below it. So you don't compare the height of the lines, but the areas between them. These areas
always add up to 100%. You can think of an area chart as a lot of pie charts, flattened out and laid end-to-end.
A common use of area charts in audience research is to show how people's behaviour changes across the 24 hours of
the day. The horizontal scale runs from midnight to midnight, and the vertical scale from 0 to 100%. One such area
chart, taken from a survey in Vietnam, showed how people divide their day into sleep, work, watching TV, listening to
radio, and everything else.
An area chart needs to be studied closely: the results aren't obvious at a glance. However, area charts provide a lot of
information in a small space.

Which type of graph is best?


There are dozens of other chart types not mentioned above, and also dozens of variations on the above types, especially
bar charts. However, the above graph types cover most situations. It becomes confusing to readers of reports if many
different types of graph are presented, so it is recommended that any report should include no more graph types than
necessary.
The most appropriate type of graph depends on the number of variables being displayed, and on whether these are
nominal variables (with a limited number of separate values) or metric variables (whose value can be any number).
It is suggested to use a horizontal bar chart whenever possible. In a normal audience survey, fewer than a third of the
graphs are unsuited to being shown as horizontal bar charts.
Number of variables   Type of variables        Recommended chart type
1                     nominal                  bar chart, pictogram, or pie chart
1                     metric                   line graph, or box and whisker plot
2                     both nominal             multiple bar chart, or domino chart
2                     both metric              bubble chart, or scattergram
2                     1 metric, 1 nominal      box and whisker plot, or area chart
3-D charts can look very impressive, but it is strongly suggested to avoid using them: it is just too easy to misread
them. The simpler a graph is, the more effective it is at communicating.
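As an illustration of the horizontal bar chart recommended above, here is a minimal matplotlib sketch; matplotlib is only one possible charting library, and the category labels and percentages are invented for the example.

import matplotlib.pyplot as plt

# Hypothetical survey result: preferred programme types (illustrative figures)
labels = ["News", "Film songs", "Talk shows", "Sports", "Drama"]
percentages = [34, 27, 18, 12, 9]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(labels, percentages, color="steelblue")
ax.invert_yaxis()                       # show the first category at the top, not the bottom
ax.set_xlabel("Per cent of respondents")
ax.set_title("Q7. Preferred programme type")

for y, pct in enumerate(percentages):   # label each bar with its value
    ax.text(pct + 0.5, y, f"{pct}%", va="center")

plt.tight_layout()
plt.show()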

Questions for Review:

1. Define the term ‗Hypothesis‘. Differentiate among assumption, postulate and hypothesis.
2. Explain the nature and functions of a hypothesis in a research process.
3. Enumerate the significance and importance of hypotheses in scientific research.
4. There are various kinds of hypotheses. Mention some important types. Why do researchers prefer non-directional
hypotheses?
5. A hypothesis is a statement which involves a relationship between variables. Enumerate the types of variables
included in stating a hypothesis.
6. Explain the procedure for testing a statistical hypothesis.
7. Describe a situation where you can apply t-distribution.
8. How would you distinguish between a t-test for independent sample and a paired t-test?
9. State any five precautionary steps to be taken before interpretation.
10. What is meant by interpretation of statistical data? What precautions should be taken while interpreting the data?
11. What do you understand by interpretation of data? Illustrate the types of mistakes which frequently occur in
interpretation.
12. Explain the need, meaning and essentials of interpretation.
13. Write a short note on:
A. Cross tabulation   B. Z-test   C. t-test   D. F-test



Unit 5 : Report writing

Introduction
The last and final phase of the journey in research is writing of the report. After the collected data has been analyzed
and interpreted and generalizations have been drawn the report has to be prepared. The task of research is incomplete
till the report is presented.
Writing of a report is the last step in a research study and requires a set of skills somewhat different from those called
for in respect of the earlier stages of research. This task should be accomplished by the researcher with utmost care.

Purpose of a report
The report may be meant for the people in general, when the investigation has not been carried out at the instance of
any third party. Research is essentially a cooperative venture and it is essential that every investigator should know
what others have found about the phenomena under study. The purpose of a report is thus the dissemination of
knowledge, the broadcasting of generalizations so as to ensure their widest use.
A report of research has only one function: "it must inform". It has to propagate knowledge. Thus, the purpose of a
report is to convey to the interested persons the results and findings of the study in sufficient detail, and so arranged as
to enable each reader to comprehend the data and to determine for himself the validity of the conclusions. Research
results must invariably enter the general store of knowledge. A research report is always an addition to knowledge. All
this explains the significance of writing a report. In a broader sense, report writing is common to both academics and
organizations, though the purpose may differ: in academics, reports are used for comprehensive and
application-oriented learning, whereas in organizations, reports form the basis for decision making.

Meaning
Reporting simply means communicating or informing through reports. The researcher has collected some facts and
figures, analyzed the same and arrived at certain conclusions. He has to inform or report the same to the parties
interested. Therefore, "reporting is communicating the facts, data and information through reports to the persons for
whom such facts and data are collected and compiled".
A report is not a complete description of what has been done during the period of survey/research. It is only a
statement of the most significant facts that are necessary for understanding the conclusions drawn by the investigator.
Thus, "a report, by definition, is simply an account". The report thus is an account describing the procedure adopted,
the findings arrived at and the conclusions drawn by the investigator of a problem.

Qualities of good report
Research report is a channel of communicating the research findings to the readers of the report. A good report is
one which does this task efficiently and effectively. As such it should have the following characteristics/qualities.
i) It must be clear in informing the what, why, who, whom, when, where and how of the research study.
ii) It should be neither too short nor too long. One should keep in mind the fact that it should be long enough to cover
the subject matter but short enough to sustain the reader‘s interest.
iii) It should be written in an objective style and simple language, correctness, precision and clarity should be the
watchwords of the scholar. Wordiness, indirection and pompous language are barriers to communication.
iv) A good report must combine clear thinking, logical organization and sound interpretation.
v) It should not be dull. It should be such as to sustain the reader‘s interest.
vi) It must be accurate. Accuracy is one of the requirements of a report. It should be factual with objective presentation.
Exaggerations and superlatives should be avoided.
vii) Clarity is another requirement of presentation. It is achieved by using familiar words and unambiguous
statements, explicitly defining new concepts and unusual terms.
viii) Coherence is an essential part of clarity. There should be logical flow of ideas (i.e. continuity of thought), sequence
of sentences. Each sentence must be so linked with other sentences so as to move the thoughts smoothly.
ix) Readability is an important requirement of good communication. Even a technical report should be easily
understandable. Technicalities should be translated into language understandable by the readers.
x) A research report should be prepared according to the best composition practices. Ensure readability through proper
paragraphing, short sentences, illustrations, examples, section headings, use of charts, graphs and diagrams.
xi) Draw sound inferences/conclusions from the statistical tables. But don‘t repeat the tables in text (verbal) form.
xii) Footnote references should be in proper form. The bibliography should be reasonably complete and in proper
form.
xiii) The report must be attractive in appearance, neat and clean whether typed or printed.
xiv) The report should be free from mistakes of all types viz. language mistakes, factual mistakes, spelling mistakes,
calculation mistakes etc.,
The researcher should try to achieve these qualities in his report as far as possible.

Precautions in research report writing


Research report is a channel of communicating the research findings to the readers of the report. A good research
report is one which does this task efficiently and effectively.
As such it must be prepared keeping the following precautions in view:
1. While determining the length of the report (since research reports vary greatly in length), one should keep in view
the fact that it should be long enough to cover the subject but short enough to maintain interest. In fact, report-writing
should not be a means to learning more and more about less and less.
2. A research report should not, if this can be avoided, be dull; it should be such as to sustain reader‘s interest.

3. Abstract terminology and technical jargon should be avoided in a research report. The report should be able to
convey the matter as simply as possible. This, in other words, means that report should be written in an objective style
in simple language, avoiding expressions such as ―it seems,‖ ―there may be‖ and the like.
4. Readers are often interested in acquiring a quick knowledge of the main findings and as such the report must
provide a ready availability of the findings. For this purpose, charts, graphs and the statistical tables may be used for
the various results in the main report in addition to the summary of important findings.
5. The layout of the report should be well thought out and must be appropriate and in accordance with the objective
of the research problem.
6. The reports should be free from grammatical mistakes and must be prepared strictly in accordance with the
techniques of composition of report-writing such as the use of quotations, footnotes, documentation, proper
punctuation and use of abbreviations in footnotes and the like.
7. The report must present the logical analysis of the subject matter. It must reflect a structure wherein the different
pieces of analysis relating to the research problem fit well.
8. A research report should show originality and should necessarily be an attempt to solve some intellectual problem.
It must contribute to the solution of a problem and must add to the store of knowledge.
9. Towards the end, the report must also state the policy implications relating to the problem under consideration. It is
usually considered desirable if the report makes a forecast of the probable future of the subject concerned and indicates
the kinds of research that still need to be done in that particular field.
10. Appendices should be provided for all the technical data in the report.
11. Bibliography of sources consulted is a must for a good report and must necessarily be given.
12. Index is also considered an essential part of a good report and as such must be prepared and appended at the end.
13. Report must be attractive in appearance, neat and clean, whether typed or printed.
14. Calculated confidence limits must be mentioned and the various constraints experienced in conducting the research
study may also be stated in the report.
15. Objective of the study, the nature of the problem, the methods employed and the analysis techniques adopted must
all be clearly stated in the beginning of the report in the form of introduction.

Presentation of research report


The material of your presentation should be concise, to the point and tell an interesting story. In addition to the
obvious things like content and visual aids, the following are just as important as the audience will be subconsciously
taking them in:
 Your voice - how you say it is as important as what you say
 Body language - a subject in its own right and something about which much has been written and said. In essence,
your body movements express what your attitudes and thoughts really are.
 Appearance - first impressions influence the audience's attitudes to you. Dress appropriately for the occasion.
As with most personal skills oral communication cannot be taught. Instructors can only point the way. So as always,
practice is essential, both to improve your skills generally and also to make the best of each individual presentation
you make.

Preparation
Prepare the structure of the talk carefully and logically, as you would for a written report. Ask yourself:
• What are the objectives of the talk?
• What are the main points you want to make?
Make a list of these two things as your starting point.
Write out the presentation in rough, just like a first draft of a written report. Review the draft. You will find things that
are irrelevant or superfluous - delete them. Check the story is consistent and flows smoothly. If there are things you
cannot easily express, possibly because of doubt about your understanding, it is better to leave them unsaid.
Never read from a script. It is also unwise to have the talk written out in detail as a prompt sheet - the chances are you
will not locate the thing you want to say amongst all the other text. You should know most of what you want to say - if
you don't then you should not be giving the talk! So prepare cue cards which have key words and phrases (and
possibly sketches) on them. Postcards are ideal for this. Don't forget to number the cards in case you drop them.
Remember to mark on your cards the visual aids that go with them, so that the right OHP or slide is shown at the right
time.
Rehearse your presentation - to yourself at first and then in front of some colleagues. The initial rehearsal should
consider how the words and the sequence of visual aids go together. How will you make effective use of your visual
aids?

Making the Oral presentation


Greet the audience (for example, 'Good morning, ladies and gentlemen'), and tell them who you are. Good
presentations then follow this formula:
 tell the audience what you are going to tell them,
 then tell them,
 at the end tell them what you have told them.

1. Keep to the time allowed. If you can, keep it short. It's better to under-run than over-run. As a rule of thumb, allow 2
minutes for each general overhead transparency or PowerPoint slide you use, but longer for any that you want to use
for developing specific points. 35 mm slides are generally used more sparingly and stay on the screen longer.
However, the audience will get bored with something on the screen for more than 5 minutes, especially if you are not
actively talking about it. So switch the display off, or replace the slide with some form of 'wallpaper' such as a
company logo.
2. Stick to the plan for the presentation, don't be tempted to digress - you will eat up time and could end up in a dead-
end with no escape!
3. Unless explicitly told not to, leave time for discussion - 5 minutes is sufficient to allow clarification of points. The
session chairman may extend this if the questioning becomes interesting.
4. At the end of your presentation ask if there are any questions - avoid being terse when you do this, as the audience
may find it intimidating (a curt "Any questions?" can come across as implying that anyone who asks was not paying
attention). If questions are slow in coming, you can start things off by asking a question of the audience - so have one
prepared.

Delivery
1. Speak clearly. Don't shout or whisper - judge the acoustics of the room.
2. Don't rush, or talk deliberately slowly. Be natural - although not conversational.
3. Deliberately pause at key points - this has the effect of emphasising the importance of a particular point you are
making.
4. Avoid jokes - they are almost always disastrous unless you are a natural expert.
5. Use your hands to emphasise points, but don't indulge in too much hand waving. People can, over time, develop
irritating habits. Ask colleagues occasionally what they think of your style.
6. Look at the audience as much as possible, but don't fix on an individual - it can be intimidating. Pitch your
presentation towards the back of the audience, especially in larger rooms.
7. Don't face the display screen behind you and talk to it. Other annoying habits include:
 Standing in a position where you obscure the screen. In fact, positively check for anyone in the audience who may be
disadvantaged and try to accommodate them.
 Muttering over a transparency on the OHP projector plate and not realising that you are blocking the projection of
the image. It is preferable to point to the screen than the foil on the OHP (apart from the fact that you will probably
dazzle yourself with the brightness of the projector)
8. Avoid moving about too much. Pacing up and down can unnerve the audience, although some animation is
desirable.
9. Keep an eye on the audience's body language. Know when to stop and also when to cut out a piece of the
presentation.

Visual Aids
Visual aids significantly improve the interest of a presentation. However, they must be relevant to what you want to
say. A careless design or use of a slide can simply get in the way of the presentation. What you use depends on the
type of talk you are giving. Here are some possibilities:
 Overhead projection transparencies (OHPs)
 35mm slides
 Computer projection (PowerPoint, applications such as Excel, etc)
 Video, and film,
 Real objects - either handled from the speaker's bench or passed around
 Flipchart or blackboard - possibly used as a 'scratch-pad' to expand on a point

PowerPoint Presentation Do's and Don'ts


1. Keep it simple though - a complex set of hardware can result in confusion for speaker and audience. Make sure you
know in advance how to operate the equipment and also when you want particular displays to appear. Sometimes a
technician will operate the equipment. Arrange beforehand, what is to happen and when and what signals you will
use. Edit your slides as carefully as your talk - if a slide is unnecessary then leave it out. If you need to use a slide twice,
duplicate it. And always check your slides - for typographical errors, consistency of fonts and layout.

2. Slides and OHPs should contain the minimum information necessary. To do otherwise risks making the slide
unreadable or will divert your audience's attention so that they spend time reading the slide rather than listening to
you.
3. Try to limit words per slide to a maximum of 10. Use a reasonable size font and a typeface which will enlarge well.
Typically use a minimum 18 pt Times Roman on OHPs, and preferably larger. A guideline is: if you can read the OHP
from a distance of 2 metres (without projection), then it's probably OK.
4. Avoid using a diagram prepared for a technical report in your talk. It will be too detailed and difficult to read.
5. Use colour on your slides but avoid orange and yellow which do not show up very well when projected. For text
only, white or yellow on blue is pleasant to look at and easy to read. Books on presentation techniques often have quite
detailed advice on the design of slides. If possible, consult an expert such as the Audio Visual Centre.
6. Avoid adding to OHPs with a pen during the talk - it's messy and the audience will be fascinated by your shaking
hand! On this point, this is another good reason for pointing to the screen when explaining a slide rather than pointing
to the OHP transparency.
7. Room lighting should be considered. Too much light near the screen will make it difficult to see the detail. On the
other hand, a completely darkened room can send the audience to sleep. Try to avoid having to keep switching lights
on and off, but if you do have to do this, know where the light switches are and how to use them.

Types of research reports


Broadly speaking reporting can be done in two ways:
a) Oral or Verbal Report: reporting verbally in person, for example; presenting the findings in a conference or seminar
or reporting orally to the superiors.
b) Written Report: Written reports are more formal, authentic and popular. Written reports can be presented in
different ways as follows.
i) Sentence form reports: Communicating in sentence form
ii) Tabular reports: Communicating through figures in tables
iii) Graphic reports: Communicating through graphs and diagrams
iv) Combined reports: Communicating using all three of the above. Generally, this is the most popular form.
Research reports vary greatly in length and type. In each individual case, both the length and the form are largely
dictated by the purpose of the study and problems at hand. For example, business organizations generally prefer
reports in letter form, that too short in length. Banks, insurance and other financial institutions generally prefer figure
form in tables. The reports prepared by government bureaus, enquiry commissions etc., are generally very
comprehensive on the issues involved. Similarly research theses/dissertations usually prepared by students for Ph.D.
degree are also elaborate and methodical.
It is thus clear that the results of a research enquiry can be presented in a number of ways. They may be termed as a
technical report, a popular report, an article, or a monograph.
1) Technical Report: A technical report is used whenever a full written report (ex: Ph.D. thesis) of the study is
required either for evaluation or for record keeping or for public dissemination. The main emphasis in a technical
report is on:
a) the methodology employed.
b) the objectives of the study.

c) the assumptions made / hypotheses formulated in the course of the study.
d) how and from what sources the data are collected and how have the data been analyzed.
e) the detailed presentation of the findings with evidence, and their limitations.
2) Popular Report: A popular report is one which gives emphasis on simplicity and attractiveness. Its aim is to make
the general public understand the findings and implications. Generally, it is simple. Simplicity is sought to be achieved
through clear language and minimization of technical details. Attention of the readers is sought to be achieved through
attractive layout, liberal use of graphs, charts, diagrams and pictures. In a popular report emphasis is given on practical
aspects and policy implications.
3) Research Article: Sometimes the findings of a research study can be published in the form of a short paper called
an article. This is one form of dissemination. The research papers are generally prepared either to present in seminars
and conferences or to publish in research journals. Since one of the objectives of doing research is to make a positive
contribution to knowledge, in the field, publication (publicity) of the work serves the purpose.
4) Monograph: A monograph is a treatise or a long essay on a single subject. For the sake of convenience, reports may
also be classified either on the basis of approach or on the basis of the nature of presentation such as:
i) Journalistic Report
ii) Business Report
iii) Project Report
iv) Dissertation
v) Enquiry Report (Commission Report), and
vi) Thesis
Reports prepared by journalists for publication in the media may be journalistic reports. These reports have news and
information value. A business report may be defined as report for business communication from one departmental
head to another, one functional area to another, or even from top to bottom in the organizational structure on any
specific aspect of business activity. These are observational reports which facilitate business decisions.
A project report is the report on a project undertaken by an individual or a group of individuals relating to any
functional area or any segment of a functional area or any aspect of business, industry or society. A dissertation, on the
other hand, is a detailed discourse or report on the subject of study.
Dissertations are generally used as documents to be submitted for the acquisition of higher research degrees from a
university or an academic institution. The thesis is an example in point.
An enquiry report or a commission of enquiry report is a detailed report prepared by a commission appointed for the
specific purpose of conducting a detailed study of any matter of dispute or of a subject requiring greater insight. These
reports facilitate action, since they contain expert opinions.

Steps in Report Writing
Research reports are the product of slow and painstaking and accurate work. Therefore, the preparation of the report
may be viewed in the following major stages.

1) The logical understanding and analysis of the subject matter.


2) Planning/designing the final outline of the report.
3) Write up/preparation of rough draft.
4) Polishing/finalization of the Report.

1. Logical Understanding of the Subject Matter:


It is the first stage which is primarily concerned with the development of a subject. There are two ways to develop a
subject viz. a. logically and b. chronologically. The logical development is done on the basis of mental connections and
associations between one aspect and another by means of logical analysis. Logical treatment often consists of
developing material from the simple to the most complex. Chronological development is based on a connection or
sequence in time or happening of the events. The directions for doing something usually follow the chronological
order.

2. Designing the Final Outline of the Report:


It is the second stage in writing the report. Having understood the subject matter, the next stage is structuring the
report and ordering the parts and sketching them. This stage can also be called as planning and organization stage.
Ideas may pass through the author‘s mind. Unless he first makes his plan/sketch/design he will be unable to achieve a
harmonious succession and will not even know where to begin and how to end. Better communication of research
results is partly a matter of language but mostly a matter of planning and organizing the report.
3. Preparation of the Rough Draft:
The third stage is the write up/drafting of the report. This is the most crucial stage to the researcher, as he/she now
sits to write down what he/she has done in his/her research study and what and how he/she wants to communicate
the same. Here the clarity in communicating/reporting is influenced by some factors such as who the readers are, how
technical the problem is, the researcher‘s hold over the facts and techniques, the researcher‘s command over language
(his communication skills), the data and completeness of his notes and documentation and the availability of analyzed
results. Depending on the above factors some authors may be able to write the report with one or two drafts. Some
people who have less command over language, no clarity about the problem and subject matter may take more time
for drafting the report and have to prepare more drafts (first draft, second draft, third draft, fourth draft etc.,)

4. Finalization of the Report:


This is the last stage, perhaps the most difficult stage of all formal writing. It is easy to build the structure, but it takes
more time for polishing and giving finishing touches. Take for example the construction of a house. Up to roofing
(structure) stage the work is very quick but by the time the building is ready, it takes up a lot of time. The rough draft
(whether it is second draft or ‗n‘ th draft) has to be rewritten, polished in terms of requirements. The careful revision of
the rough draft makes the difference between a mediocre and a good piece of writing. While polishing and finalizing
one should check the report for its weaknesses in logical development of the subject and presentation cohesion. He/she
should also check the mechanics of writing — language, usage, grammar, spelling and punctuation.

Guidelines for effective report
Research report is a channel of communicating the research findings to the readers of the report. A good report is one
which does this task efficiently and effectively. As such it should follow the qualities of a good report already listed
earlier in this unit: clarity about the what, why, who, whom, when, where and how of the study; appropriate length; an
objective style in simple language; clear thinking, logical organization and sound interpretation; accuracy and freedom
from exaggeration; coherence and readability; sound inferences drawn from the statistical tables; footnotes and a
bibliography in proper form; a neat and attractive appearance; and freedom from mistakes of language, fact, spelling
and calculation. The researcher should try to achieve these qualities in his report as far as possible.

Layout and format of the research report


The following outline is the suggested layout and format for writing the research report:
The contents of a report can broadly be divided into three parts as:
1) The front matter or prefactory items.
2) The body or text of the report.
3) The back matter or terminal items.
The following outline summarizes the broad sequence of the contents of a research report.

A) Front Matter
1. Title Page
2. Certificate
3. Declaration
4. Acknowledgments
5. Executive Summary
6. Table of Contents
7. List of Illustrations and List of Tables
8. List of abbreviations used

B) Main Text
1. Introduction
2. Research methodology
3. Background to the research problem
4. Objectives and hypotheses
5. Data collection
6. Sample and sampling method
7. Statistical or qualitative methods used for data analysis
8. Sample description
9. Tabulation and analysis of data
10. Findings of the study
11. Conclusions
12. Recommendations of the study

C) Reference Matter
1. Bibliography
2. Appendices (optional)
3. Glossary (optional)
4. References (optional)

A) Front Pages
1) Title Page
The cover page should display the full name of the researcher and of the guide, along with their qualifications, and the title of the report.
2) Certificate
Format for same given in sample page below
3) Declaration
Format for same given in sample page below
4) Acknowledgments
The researcher may wish to acknowledge people who helped in the preparation of the report. For example, you may
wish to thank someone you interviewed, or someone who provided you with some special information.

5) Table of Contents and List of Figures


The report should have a Table of Contents that lists the report's sections and page numbers. If figures are included in
the report (charts, tables, diagrams), one must also include a list of figures, indicating their titles and page numbers.
Figures should be numbered, titled, and mentioned in the text preceding them.
6) List of tables and illustrations used
7) Executive Summary
One of the most important components of the report is the Executive Summary. It answers the question, "What does
the report contain?" and should be written after the rest of the report is complete. The Executive Summary should be
complete in itself and may be consulted by readers who wish to determine whether they need to read the whole report.
Limit the Executive Summary to two or three pages and discuss:
 Purpose and extent of the report
 Major points contained in the body of the report
 Highlights of key conclusions
 Highlights of key recommendations

B) Main Text
1) Introduction:-The Introduction should establish the purpose of the report and should convey what is in the body of
the report. One should provide the reader with the following information:
a. Necessary background information
b. major points that will be covered in the report
c. the situation or problem that will be analyzed
d. what your aims are in compiling the report
Analysis: questions to consider include:
e. Why does a problem exist?
f. How does the problem affect the environment?
g. What efforts may solve this problem?
h. What aspects of the problem have been measured and improved? How?
i. What problems does the potential solution not solve? Why not?
j. What could be improved?
2) Research Methodology:
• Goals of the study, specific objectives, and purpose of the study.
• Statistical design: universe of the study, sampling method, sample size and unit, secondary data sources, and
limitations of the study.
• Tools of data collection, and the response rate.
3) Tabulation and Analysis: -
Analysis is the most important part of the report because it contains the "workings out" - how one reaches the
conclusions. The analysis should contain the thoughts, reasons and judgements based on the facts, figures and data
collected. In the analysis one makes INFERENCES - conclusions that are drawn from the research.
4) Finding, Conclusions and Recommendations of study: -
The conclusions are the final results of analysis. They should be brief and should contain no new information. They
should not make direct reference to sources, figures, or tables. The conclusions should be listed and numbered, with
brief explanation for each. Each conclusion should follow logically from the facts and arguments presented in the main
text (body). RECOMMENDATIONS are suggestions, based on the conclusions reached from the research. These should
be brief and should follow logically from the conclusions.

C) Reference Matter
i) Bibliography
A bibliography is an alphabetical list of all materials consulted in the preparation of research.
ii) Appendices containing copies of the questionnaires, etc.
Why do a bibliography?
Some reasons:
1. To acknowledge and give credit to sources of words, ideas, diagrams, illustrations, and quotations borrowed, or any
materials summarized or paraphrased.
2. To show that you are respectfully borrowing other people‘s ideas, not stealing them, i.e. to prove that you are not
plagiarizing (Copying).
3. To offer additional information to readers who may wish to further pursue the topic.

4. To give readers an opportunity to check out the sources for accuracy. An honest bibliography inspires reader
confidence in writing.

What must be included in a bibliography?


1. Author
2. Title
3. Place of publication
4. Publisher
5. Date of publication
6. Page number(s) (for articles from magazines, journals, periodicals, newspapers, encyclopaedias etc.)

1. Author
Ignore any titles, designations or degrees, etc. which appear before or after the name, e.g., The Honourable, Dr., Mr.,
Mrs., Ms., Rev., S.J., Esq., Ph.D., M.D., Q.C., etc. Exceptions are Jr. and Sr. Do include Jr. and Sr. as John Smith, Jr. and
John Smith, Sr. are two different individuals. Include also I, II, III, etc. for the same reason.
Examples:
a) Last name, first name:
Kotlar, Philip.
Christensen, Asger.
Wilson-Smith, Anthony.
b) Last name, first and middle names:
Wyse, Cassandra Ann Lee.
c) Last name, first name and middle initial:
Schwab, Charles R.
d) Last name, initial and middle name:
Holmes, A. William.
e) Last name, initials:
Meister, F.A.
f) Last name, first and middle names, Jr. or Sr. designation:
Davis, Benjamin Oliver, Jr.
g) Last name, first name, I, II, III, etc.:
Stilwell, William E., IV.

2. Title and subtitle


a) If the title on the front cover or spine of the book differs from the title on the title page, use the title on the title page
for your citation.
b) UNDERLINE the title and subtitle of a book, magazine, journal, periodical, newspaper, or encyclopaedia, e.g., What
to Do When Things Go Wrong, Sports Illustrated, New York Times, Encyclopaedia Britannica.
c) If the title of a newspaper does not indicate the place of publication, add the name of the city or town after the title
in square brackets, e.g. National Post [Toronto].
Freeze, Colin. "Illinois Puts the Death Penalty Itself on Trial." Globe and Mail [Toronto] 29 Oct. 2002: A3.

Furuta, Aya. "Japan Races to Stay Ahead in Rice-Genome Research." Nikkei Weekly [Tokyo] 5 June 2000: 1+.
d) DO NOT UNDERLINE the title and subtitle of an article in a magazine, journal, periodical, newspaper, or
encyclopedia; put the title and subtitle between quotation marks: Baker, Peter, and Susan B. Glasser. "No Deals with
Terrorists: Putin." Toronto Star 29 Oct. 2002: A1+.
Fisher, Dennis. "Safe Data: At What Price?" eWeek 21 Oct. 2002: 26.
Penny, Nicholas B. "Sculpture, The History of Western." New Encyclopaedia Britannica.1998 ed.
e) CAPITALIZE the first word of the title, the first word of the subtitle, as well as all important words except for
articles, prepositions, and conjunctions, e.g., Flash and XML: A Developer's Guide, or The Red Count: The Life and
Times of Harry Kessler.
f) Use LOWER CASE letters for conjunctions such as and, because, but, and however; for prepositions such as in, on,
of, for, and to; as well as for articles: a, an, and the, unless they occur at the beginning of a title or subtitle, or are being
used emphatically, e.g., "And Now for Something Completely Different: A Hedgehog Hospital," "Court OKs Drug
Tests for People on Welfare," or "Why Winston Churchill Was The Man of The Hour."
g) Separate the title from its subtitle with a COLON (:), e.g. "Belfast: A Warm Welcome Awaits."

3. Place of publication - for books only


a) DO NOT use the name of a country, state, or province as a Place of Publication, e.g. do not list India, Australia,
Canada, United Kingdom, Great Britain, United States of America, California, or Maharashtra as a place of
publication.
b) Use only the name of a city or a town.
c) Choose the first city or town listed if more than one Place of Publication is indicated in the book.
d) It is not necessary to indicate the Place of Publication when citing articles from major encyclopaedias, magazines,
journals, or newspapers.
e) If the city is well known, it is not necessary to add the State or Province after it, e.g.:
New Delhi:
Mumbai:
London:
New York:
f) If the city or town is not well known, or if there is a chance that the name of the city or town may create confusion,
add the abbreviated letters for State, Province, or Territory after it for clarification. Example:
Amravati, MS:
Hyderabad, AP:
Austin, TX:
g) Use "n.p." to indicate that no place of publication is given.

4. Publisher - for books only


a) Be sure to write down the Publisher, NOT the Printer.
b) If a book has more than one publisher, not one publisher with multiple places of publication, list the publishers in
the order given each with its corresponding year of publication, e.g.:
Conrad, Joseph. Lord Jim. 1920. New York: Doubleday; New York: Signet, 1981.
c) Shorten the Publisher's name, e.g. use Macmillan, not Macmillan Publishing Co., Inc.



d) No need to indicate Publisher for encyclopaedias, magazines, journals, and newspapers.
e) If you cannot find the name of the publisher anywhere in the book, use "n.p." to indicate there is no publisher listed.

5. Date of publication
a) For a book, use the copyright year as the date of publication, e.g.: 2003, not ©2003 or Copyright 2003, i.e. do not draw
the symbol © for copyright or add the word Copyright in front of the year.
b) For a monthly or quarterly publication use month and year, or season and year. Spell out the months May, June, and July; for all other months (those with five or more letters), use the abbreviations: Jan., Feb., Mar., Apr., Aug., Sept., Oct., Nov., and Dec. Note that spelled-out months take no period; the period after an abbreviation such as Jan. belongs to the abbreviation itself. If no months are stated, use Spring, Summer, Fall, Winter, etc. as given, e.g.:
Alternatives Journal Spring 2004.
Classroom Connect Dec. 2003/Jan. 2004.
Discover July 2003.
Scientific American Apr. 2004.
c) For a weekly or daily publication use date, month, and year, e.g.:
Newsweek 11 Aug. 2003.
d) Use the most recent Copyright year if two or more years are listed, e.g., ©1988, 1990, 2004. Use 2004.
e) Do not confuse Date of Publication with Date of Printing, e.g., 7th Printing 2004, or Reprinted in 2004. These are not
publication dates.
f) If you cannot find a publication date anywhere in the book, use "n.d." to indicate there is "No Date" listed for this
publication.
g) If there is no publication date, but you are able to find out from reliable sources the approximate date of publication,
use [c. 2004] for circa 2004, or use [2003?]. Always use square brackets [ ] to indicate information that is not given but is
supplied by you.

6. Page number(s)
a) Page numbers are not needed for a book, unless the citation comes from an article or essay in an anthology, i.e. a
collection of works by different authors.
Example of a work in an anthology (page numbers are for the entire essay or piece of work):
Fish, Barry, and Les Kotzer. "Legals for Life." Death and Taxes: Beating One of the Two Certainties in Life. Ed. Jerry
White. Toronto: Warwick, 1998. 32-56.
b) If there is no page number given, use "n. pag."
(Works Cited example)
Schulz, Charles M. The Meditations of Linus. N.p.: Hallmark, 1967.
(Footnote or Endnote example)
1 Charles M. Schulz, The Meditations of Linus (N.p.: Hallmark, 1967) n. pag.
c) To cite a source with no author, no editor, no place of publication or publisher stated, no year of publication, but you
know where the book was published, follow this example:
Full View of Temples of Taiwan - Tracks of Pilgrims. [Taipei]: n.p., n.d.



d) Frequently, page numbers are not printed on some pages in magazines and journals. Where page numbers may be
counted or guessed accurately, count the pages and indicate the page number or numbers.
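Taken together, the six elements above can be assembled mechanically into a citation string. The following is a minimal, illustrative Python sketch (the function name and field handling are our own assumptions, not part of any citation standard or library); it simply strings the elements together in the 'Author. Title. Place: Publisher, Year.' pattern described above, falling back on "N.p.", "n.p." and "n.d." for missing details:

def book_citation(author, title, place, publisher, year):
    """Assemble 'Author. Title. Place: Publisher, Year.' as described above.

    Missing details follow the conventions given in the text: 'N.p.' when no
    place is stated, 'n.p.' for no publisher, 'n.d.' for no date. Underlining
    the title is a typesetting step and cannot be shown in plain text.
    """
    place = place if place else "N.p."
    publisher = publisher if publisher else "n.p."
    year = year if year else "n.d."
    return "{}. {}. {}: {}, {}.".format(author, title, place, publisher, year)

# Reproduces the Works Cited example in point 6(b) above:
print(book_citation("Schulz, Charles M.", "The Meditations of Linus",
                    None, "Hallmark", "1967"))
# Schulz, Charles M. The Meditations of Linus. N.p.: Hallmark, 1967.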

Working list of books (bibliography)

Working list of journals, magazines and newspapers (bibliography)

Presentation of research report


The research report should be typed following the requirements detailed below:
1. Use Executive bond A4-size paper; type on one side of the paper only.
2. Use double spacing.
3. Include margins: left-hand 3.8 cm (1½ inches); right-hand 2.5 cm (1 inch).
4. Paragraphs should not be indented.
5. Pages should be numbered.
6. Tables should be numbered.
7. Figures (e.g. diagrams and graphs) should be treated in a similar way to tables but should be numbered "Figure 1", "Figure 2", etc.
8. Headings: Section heading: upper case (e.g. INTRODUCTION); subsection heading: lower case, underlined, numbered 1.1, 1.2, etc., indented to the start of the lettering of the main heading.
Example:
1. INTRODUCTION: Technological advances have opened many doors in education.....
1.1 The model presented: In the final year the occupational therapy course is being developed.....



1.2 The task: A tutorial workbook.....
1.2.1 Using the programs: The programs designed are very varied....

9. Length of project: The project should be approximately 15,000 - 22,000 words (for a project at post-graduate level).
10. Submitted copies of the project should be in hard-bound volumes only.
11. If you wish to acknowledge any individual's contribution to the project, this should be stated on a separate
acknowledgement page.
12. Your project should contain a list of contents which states the page number of each section of the project.
13. Appendices should not be considered part of the project report (for example, raw data could be included in this
way). Appendices should be placed at the very end of the project and referred to in the contents section.
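As an illustration only, the page set-up in requirements 1 to 4 and the heading scheme in point 8 can be reproduced programmatically. The sketch below uses the python-docx library (an assumption: the library must be installed separately, and the file name and sample sentences are placeholders taken from the example above); the exact appearance of the headings (upper case, underlining) is still governed by the rules listed above.

from docx import Document
from docx.shared import Cm

doc = Document()

# Page set-up: A4 paper with the prescribed margins (points 1 and 3).
section = doc.sections[0]
section.page_width = Cm(21.0)
section.page_height = Cm(29.7)
section.left_margin = Cm(3.8)    # 1.5 inches
section.right_margin = Cm(2.5)   # 1 inch

# Section heading in upper case, then double-spaced, unindented body text
# (points 2, 4 and 8); heading numbers are typed as part of the text here.
doc.add_heading("1. INTRODUCTION", level=1)
para = doc.add_paragraph("Technological advances have opened many doors in education...")
para.paragraph_format.line_spacing = 2.0
para.paragraph_format.first_line_indent = Cm(0)

doc.add_heading("1.1 The model presented", level=2)
doc.save("research_report.docx")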

Research in Commerce
Commerce is the whole system of an economy that constitutes an environment for business. The system includes legal,
economic, political, social, cultural and technological systems that are in operation in any country. Thus, commerce is a
system or an environment that affects the business prospects of an economy or a nation-state. It can also be defined as a
component of business which includes all activities, functions and institutions involved in transferring goods from
producers to consumer.
The term commerce refers to the process of buying and selling (wholesale, retail, import, export and entrepot trade) and all those activities which facilitate or assist such buying and selling, such as storing, grading, packaging, financing, transporting, insuring, communicating, warehousing, etc.
The main function of commerce is to remove the hindrances of (i) persons, through trade; (ii) place, through transportation, insurance and packaging; (iii) time, through warehousing and storage; and (iv) knowledge, through salesmanship, advertising, etc., arising in connection with the distribution of goods and services until they reach the consumers.
The concept of commerce includes two types namely: (i) Trade and (ii) Aids to trade which are explained in

the following paragraphs.


(i) Trade: The term trade refers to the sale, transfer or exchange of goods and services and constitutes the central

activity around which the ancillary functions like Banking, Transportation, Insurance, Packaging, Warehousing and
Advertising cluster.
Trade may be classified into two broad categories as follows:
(a) Internal or Domestic Trade: It consists of buying and selling of goods within the boundaries of a country and
the payment for the same is made in national currency either directly or through the banking system. Internal trade
may be further sub-classified into wholesale trade and retail trade.
(b) International or Foreign Trade: It refers to the exchange of goods and services between two or more countries.
International trade involves the use of foreign currency (called foreign exchange): the domestic exporter receives payment for the exported goods and services in domestic currency, while payment for imported goods and services is made to the foreign exporter in that country's national currency (foreign exchange). To facilitate these payments, which involve exchange transactions, a highly developed system of international banking under the overall control and supervision of the central bank of the concerned country (the Reserve Bank of India in our case) is



involved.
International trade is carried on mostly in larger quantities both on Government account and on private account
involving both individuals and business houses.

(ii) Auxiliary to Trade or Aids to Trade: As mentioned above, there are certain functions such as banking, transportation, insurance, warehousing, advertising, etc. which constitute the main auxiliary functions helping trade, both internal and international. These auxiliary functions have been briefly discussed hereunder:
(a) Banking: Banks provide a device through which payments for goods bought and sold are made thereby
facilitating the purchase and sale of goods on credit. Banks serve the useful economic function of collecting the savings
of the people and business houses and making them available to those who may profitably use them. Thus, banks may
be regarded as traders in money and credit.
(b) Transportation: Transport performs the function of carrying goods from producers
to wholesalers, retailers, and finally customers. It provides the wheels of commerce. It has linked all parts of the world
by facilitating international trade.
(c) Warehousing: There is generally a time lag between the production and consumption of goods. This problem can be solved by storing the goods in a warehouse. Storage creates time utility and removes the hindrance of time in trade. It performs the useful function of holding the goods for the period they move from one point to another. Thus, warehousing discharges the function of storing the goods both for manufacturers and traders until they decide to move the goods from one point to another.
(d) Insurance: Insurance provides a cover against the loss of goods in the process of transit and storage. An insurance company performs the useful service of compensating for losses arising from damage caused to goods through fire, pilferage, theft and the hazards of sea transportation, and thus protects traders from the fear of loss of goods. It charges an insurance premium for the risk covered.
(e) Advertising: Advertising performs the function of bridging the information gap about the availability and uses of
goods between traders and consumers. In the absence of advertising, goods would not have been sold to a widely
scattered market and customers would not have come to know about many of the new products because of the paucity
of time, physical-spatial distance, etc.
Knowledge and research in all the above functional areas of commerce are essential for smooth
functioning of businesses.

Research in various functional areas


Through research, an executive can quickly get a synopsis of the current scenario, which improves his information base
for making sound decisions affecting future operations of the enterprise. The following are the major areas in which
research plays a key role in making effective decisions.
There are many topics that benefit from business research. Some major topics are: general business, economic, and
corporate research; financial and accounting research; management and organizational research; sales and
marketing research; information systems research; and corporate responsibility research.



Few of the above important areas are covered in detail below:

1. Marketing
Marketing research is undertaken to assist the marketing function. Marketing research stimulates the flow of
marketing data from the consumer and his environment to the marketing information system of the enterprise. Market
research involves the process of
 Systematic collection
 Compilation
 Analysis
 Interpretation of relevant data for marketing decisions
This information goes to the executive in the form of data. On the basis of this data the executive develops plans and programmes. Advertising research, packaging research, performance evaluation research, sales analysis, distribution channel research, etc., may also be considered part of marketing research.
Research tools are applied effectively for studies involving:
1. Demand forecasting
2. Consumer buying behaviour
3. Measuring advertising effectiveness
4. Media selection for advertising
5. Test marketing
6. Product positioning
7. Product potential

Marketing Research
i. Product Research: Assessment of suitability of goods with respect to design and price.
ii. Market Characteristics Research (Qualitative): Who uses the product? Relationship between buyer and user,
buying motive, how a product is used, analysis of consumption rates, units in which product is purchased, customs
and habits affecting the use of a product, consumer attitudes, shopping habits of consumers, brand loyalty, research of
special consumer groups, survey of local markets, basic economic analysis of the consumer market, etc.
iii. Size of Market (Quantitative): Market potential, total sales quota, territorial sales quota, quota for individuals,
concentration of sales and advertising efforts; appraisal of efficiency, etc.
iv. Competitive position and Trends Research
v. Sales Research: Analysis of sales records.
vi. Distribution Research: Channels of distribution, distribution costs.
vii. Advertising and Promotion Research: Testing and evaluating advertising and promotion.
viii. New product launching and Product Positioning.



2. Production
Research helps you in an enterprise to decide in the field of production on:
 What to produce
 How much to produce
 When to produce
 For whom to produce
Some of the areas you can apply research are:
 Product development
 Cost reduction
 Work simplification
 Profitability improvement
 Inventory control
Materials
The materials department uses research to frame suitable policies regarding:
 Where to buy
 How much to buy
 When to buy
 At what prices to buy?

3. Human Resource Development


You must be aware that the Human Resource Development department uses research to study wage rates, incentive schemes, cost of living, employee turnover rates, employment trends, and performance appraisal. It also uses research effectively for its most important activity, namely manpower planning.

4. Solving Various Operational and Planning Problems of Business and Industry


Various types of research, e.g., market research, operations research and motivational research, when combined, help in solving various complex problems of business and industry in a number of ways. These techniques help in replacing intuitive business decisions with more logical and scientific decisions.
i. Government and Economic System
Research helps a decision maker in a number of ways, e.g., it can help in examining the consequences of each
alternative and help in bringing out the effect on economic conditions. Various examples can be quoted, such as: problems of big and small industries due to various factors (upgradation of technology and its impact on labour and supervisory deployment), the effect of the government's liberal policy, the WTO and its new guidelines, ISO 9000/14000 standards and their impact on our exports, allocation of national resources on a national priority basis, etc. Research lays the foundation for all Government policies in our economic system.
We are all aware of the fact that research is applied in bringing out the union finance budget and the railway budget every year. The Government also uses research for economic planning and optimum utilization of resources for the development
of the country. For systematic collection of information on the economic and social structure of the country, you need
Research. Such types of information indicate what is happening to the national economy and what changes are taking
place.



ii. Social Relationships
Research in social sciences is concerned with both-knowledge for self and knowledge for helping in solving immediate
problems of human relations. It is a sort of formal training, which helps an individual in a better way, e.g.
 It helps professionals to earn their livelihood
 It helps students to know how to write and report various findings.
 It helps philosophers and thinkers in their new thinking and ideas.
 It helps in developing new styles for creative work.
 It may help researchers, in general, to generalize new theories.

Small business innovation research (SBIR)


The Small Business Innovation Research (or SBIR) program is a United States Government program, coordinated by
the Small Business Administration, in which a portion of the extramural research budgets of several government
agencies is reserved for contracts and grants to small businesses. Started with the passing of the Small Business
Innovation Development Act in 1982, the goal of the program is to assist small businesses, providing competitive
opportunities and stimulating innovation.
For the purposes of the SBIR program, the term "small business" is defined as an American-owned for-profit business
with fewer than 500 employees.
A similar program, the Small Business Technology Transfer Program (STTR), uses a similar approach to the SBIR
program to expand public/private sector partnerships between small businesses and nonprofit U.S. research
institutions. The Tibbetts Awards are handed out annually to deserving SBIR participants.
Participating Agencies
Currently, SBIR programs are in place at the following agencies:

 Department of Agriculture
 Department of Commerce
 Department of Defense
 Department of Education
 Department of Energy
 Department of Health and Human Services
 Department of Homeland Security
 Department of Transportation
 Environmental Protection Agency
 National Aeronautics and Space Administration
 National Science Foundation
 National Institutes of Health

Participating agencies publish one or more SBIR solicitations per year. The solicitation is essentially a grocery list of
topics and areas where they are interested in sponsoring research. In the case of some agencies such as the
Departments of Defense and Homeland Security the topics are very specific. These agencies have some very real,
specific and immediate problems that they need your help in solving. At the other end of the specificity spectrum, the



National Institutes of Health (NIH) and Department of Agriculture publish broader categories of interest and leave it
to the applicant small business to specify the topic. Beyond those categories, NIH will entertain any proposal related to
improving the nation‘s health and is the only SBIR agency to consider unsolicited proposals.
Companies that think they have a technology that will address an agency‘s problem or interests can develop and
submit a Phase I proposal. Proposals are evaluated competitively and awards are made based upon relative merit.
Emphasis is placed on technologies that both address the sponsoring agency's interest and also have commercial
application.

Eligibility
To be eligible to participate, a company must be 51% owned and controlled by individuals who are U.S. citizens or
permanent resident aliens. It must also be a small business with no more than 500 employees including affiliates. All
Phase I and Phase II work must be performed in the U.S.

Three Phase Program


There are three phases to SBIR.
1. The purpose of Phase I is to demonstrate the technical, scientific and increasingly commercial merit and feasibility of
the proposed technology. Phase I grant awards vary in size by agency. They are typically up to $100,000, but
sometimes more.
Upon successful completion of Phase I, companies can apply for Phase II. In the case of the Department of Defense,
companies must be specifically invited to apply for Phase II. Awards are made based upon the results and potential of
the Phase I work and the sponsoring agency's interest in the developing technology.
2. Phase II supports the main R&D effort and may include the development of a prototype. Phase II awards also vary
by agency. They are typically up to $750,000, but sometimes more. The government is placing increasing emphasis on
the commercialization of Phase II technologies and agencies now require the submission of a commercialization plan as
part of the Phase II proposal. One of the frustrations of working with the government is that an agency may do the
same thing somewhat differently than other agencies. This is characteristic of SBIR where there are essentially 11
different SBIR programs rather than a single uniform program. The chances of Phase I funding generally range
between 1 in 5 and 1 in 8. These chances increase significantly for Phase II to approximately 2 in 5. In combination,
Phases I and II provide substantial risk capital for developing a new technology. Beyond the immediate value of a
much needed cash infusion, however, the real underlying value of the SBIR program is that it can serve as a pathway to
equity financing to help support technology commercialization activities.
Entrepreneurs face the temptation of focusing too narrowly and almost exclusively on the development of their
technology during the SBIR contract period. If, however, they simultaneously give attention to removing as much
market and business risk from their venture as possible, particularly during Phase II, they can reach the point where
they can begin to get on the radar screens of prospective equity investors and commercialization partners.
3. Phase III is commercialization. Companies that successfully complete Phases I and II are expected to commercialize
their technology. There are, however, no additional cash awards for Phase III. Companies are generally expected by
that point to be able to raise the funding they need privately, or through a government customer.
Participating small businesses typically retain the worldwide patent rights to any new technology. The sponsoring
agency receives a royalty-free license, reserves the right to require the patent holder to license others under certain



circumstances, and generally requires that the commercialized technology be manufactured in the US. Agencies‘
licenses are rarely invoked and are for the most part not a risk or threat.
The SBIR program is faculty unfriendly. Although university collaborations are allowed and encouraged, a full time
faculty member cannot serve as a project‘s principal investigator (PI). The PI must be employed more than 50% of their
time by the small business during the project period and cannot work full time for another organization. However, not
all of that 50% is required to be spent on the SBIR project.

What is a statistical software package?


A statistical software package consists of a series of pre-written and pre-tested programs that perform a number of
specified operations on data. For example, the software package may have a program that calculates the mean and
median for a set of data. Many statistical software packages are currently available. Since these packages are merely
large software programs, they are purchased separately from the computer and separately from each other. They are
then stored on disk as part of the secondary or auxiliary memory of the main computer or server. Or if you're using a
single-user (microcomputer) system, they may be stored directly on the hard drive. The cost or "annual subscription" to
these packages may range anywhere from several hundred to thousands of dollars. Each statistical package has its own
set of unique capabilities and commands. However, there are common elements to the logic behind almost all
statistical packages, so as you learn one system, you'll also find it easier to work with other systems.
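For instance, the mean-and-median routine mentioned above amounts to only a few lines in a general-purpose language; a minimal Python illustration (the data values are arbitrary):

import statistics

data = [12, 15, 11, 19, 15, 22, 14]        # arbitrary illustrative values
print("Mean:  ", statistics.mean(data))    # arithmetic average (about 15.43)
print("Median:", statistics.median(data))  # middle value of the sorted data (15)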
There are many items of software available which are capable of carrying out statistical analyses. It is not possible to
recommend a single piece of software as being the best to use for all applications, as the choice depends a great deal on
exactly what you want to do and how you want to do it.
This Chapter is intended to help anyone needing to carry out statistical analysis to choose the most appropriate
software for their needs. Careful consideration of which software to use may slightly delay the start of the analysis, but
experience has shown that in the long term this will save time and effort and also provide a much better chance of
obtaining valid results from the analysis.
There are from time to time new releases of packages which add features or remove existing problems.
Each package handles one type of situation better than the other. The reason for this is that the packages were written
to tackle the specific types of problem encountered by a group of researchers who were involved in either experimental
or observational science. There is considerable overlap in the range of techniques used by the two types of researcher,
but there are also techniques which are largely confined to one group or the other. The consequence of this is that these
techniques are better implemented and have more facilities in the package that is oriented towards that type of data
analysis.

Software Application in SPSS


SPSS has been around for many years, both as software and as a company. The original SPSS was a statistical package
written for mainframe computers in the late 1960s. It was soon distinguished from competitors by
its ease of use and clearly written documentation. The program has evolved considerably since; it
has been rewritten several times, and has adapted to new hardware and new ways of working. It
has developed from a statistical engine, for which you had to prepare your data in accord with
fairly strict rules, into a very general package for data manipulation and analysis. Its data handling
features rival many dedicated 'database' products, while its analysis capabilities far exceed them. Recent developments



allow software writers to incorporate the SPSS processor into their own applications; a user may be faced with a
customised interface and not be aware that SPSS is working in the background.
Despite all this development, there are features of SPSS that can be traced back to its origins. That's reasonable; tasks
that were appropriate when analysing data in the 1960s are still used. It has always been a feature that you could give
SPSS a set of data and the minimum of commands and let it generate default but well laid-out tables summarising the
values. The classic application is a questionnaire survey. Each questionnaire forms a 'case' or row of data within SPSS
and each point where the respondent could reply forms a 'variable' or column. The data have the appearance of a
spreadsheet; each column can contain numeric or text values and is referred to by a name.
A simple command like:
FREQUENCIES VARIABLES= age, sex.
generates tables for each of these columns. It doesn't matter that one might be numeric and the other text ('M', 'F'). It
doesn't matter how many distinct values there are - indeed, the classic criticism of SPSS was that it would tabulate
everything and only stopped when the printer ran out of paper. You avoid the ecological disaster now by sending the
output to a file and viewing it on a screen before printing. SPSS is an excellent tool for making initial data scans to
confirm that values have been read correctly (right formats for prepared data files) and that the values are acceptable
(detecting the 'Yes please' answer to sex).
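The 'one case per row, one variable per column' layout and the resulting frequency tables can be illustrated outside SPSS as well. The sketch below is a rough Python/pandas equivalent of the FREQUENCIES command shown above (pandas is an assumption here, not an SPSS component, and the sample responses are invented):

import pandas as pd

# Each completed questionnaire is a 'case' (row); each question is a 'variable' (column).
survey = pd.DataFrame({
    "age": [21, 34, 34, 45, 21, 60],
    "sex": ["M", "F", "F", "M", "F", "M"],
})

# One frequency table per variable, much like FREQUENCIES VARIABLES= age, sex.
for column in ["age", "sex"]:
    print(survey[column].value_counts(), "\n")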

SPSS and Data


SPSS recognises three data types, although one of these has to be stored in a file rather than in memory. SPSS's major
data types are the numeric and character variables. All numeric variables are stored as real numbers even if they are
integers. There is no distinction between interval, ordinal and categorical data; they are all just variables. This
arrangement both simplifies things and creates problems. If you have an ordered categorical variable with values 1, 2, 3
and 4, and you wish to select the cases with values less than or equal to 3, you should select values less than 3.5 to be
sure of getting the correct cases. This is because when 3 is stored as a real number it is not stored as exactly 3 and may
be stored as 2.9999999. The same problem occurs with recoding. The problem would not arise if there was a data type
in SPSS which used integers for categorical variables. The moral is to be careful and is especially applicable to survey
data where a large number of variables are categorical.
SPSS is entirely case orientated and will only see data as being a set of variables measured or recorded on each of a
number of cases. Frequency distributions and contingency tables can only be handled if they are generated internally
from raw case data, except in a few limited circumstances.
SPSS will also handle matrices under certain circumstances, but these are stored in files not as variables, and they are
only available to and from a limited range of procedures.
The area of data structures is where SPSS is most in need of improvement, as a package which is targeted at survey data
needs to have ordered (ordinal) and non-ordered (nominal) categorical data types.
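The advice to select values less than 3.5 rather than less than or equal to 3 is ordinary floating-point caution and can be demonstrated in any language; a small Python illustration (the imprecisely stored code is simulated deliberately):

# Category codes stored as real numbers may not be exactly integral.
codes = [1.0, 2.0, 2.9999999, 4.0]   # the third value is meant to be category 3

# A cut-off halfway between categories is robust: a code stored as 2.9999999
# or 3.0000001 is captured either way, whereas 'less than or equal to 3'
# would silently miss a 3 stored as 3.0000001.
selected = [c for c in codes if c < 3.5]
print(selected)   # [1.0, 2.0, 2.9999999]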

Strengths of SPSS
1. Cross-tabulation in SPSS is very good indeed, and with the addition of the TABLES product, the written output
can be made to look extremely professional. You can have your data subdivided into categories in several
dimensions and then get a whole range of descriptive statistics for each cell in the categorisation (an illustrative cross-tabulation sketch follows this list). This is no more and no less than you would expect from a survey analysis package. There is also a very good range of simple



hypothesis tests of both parametric and non-parametric types. Most forms of regression from simple to multiple
and linear to non-linear and log-linear are well implemented, and all the bells and whistles are there. In addition
such multivariate techniques as factor analysis and discriminant analysis are available in a fully featured form
(SPSS uses the term factor analysis in its generic sense to cover a range of related techniques, including principal
components analysis).
2. The one major disappointment in the area of multivariate techniques is cluster analysis, where all the similarity
measures offered are designed for interval data and there are none for mixed or binary data. Much data in
survey analysis is mixed or binary and there is little option but to turn to other packages for cluster analysis.
There is also an additional option called TRENDS which does various forms of time series analysis. A macro
facility is available, but the SPSS syntax and data structures do not give sufficient flexibility to make it really
useful.
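As promised above, here is a small illustrative cross-tabulation outside SPSS. It uses Python/pandas (an assumption, chosen only to show the idea of subdividing cases into categories in two dimensions; it does not reproduce the formatted output of the TABLES product):

import pandas as pd

# Illustrative survey data: one row per respondent.
survey = pd.DataFrame({
    "sex":    ["M", "F", "F", "M", "F", "M", "F"],
    "region": ["North", "North", "South", "South", "North", "South", "South"],
})

# Counts of respondents in each sex-by-region cell.
print(pd.crosstab(survey["sex"], survey["region"]))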

Weaknesses of SPSS
1. The major weakness of SPSS is in its handling of designed experiments. It either does things badly or in an
extremely convoluted manner. There are probably very few people in the world who fully understand the
MANOVA command and how to make it do all the things that it is supposed to do. It tries to do too much in one
command and ends up doing almost everything in a totally counter-intuitive way.
2. Until recently there were very few techniques in the package designed for data which is at best ordinal, which is
surprising for a package that is targeted at survey data, though the recent inclusion of multi-dimensional scaling
and the optional extra CATEGORIES, which includes correspondence analysis, has improved matters.
3. In terms of the philosophy of statistics, SPSS will lead the unwary astray. The statistical philosophy demands that
you make your assumptions explicit before making a hypothesis test. In most packages you have to make your
assumptions clear in subcommands or options and will therefore be making a specific test. In SPSS you get a
series of answers which have different assumptions attached to them, and then you choose the answer that you
like best. SPSS does not prevent you from establishing a priori assumptions but it does not encourage you to do it.
4. In addition almost all SPSS commands have defaults for most of the choices between methods, and so if you do
not specify anything you get an analysis which may well be inappropriate to your type of data and situation.
Considerable care is needed, especially with some of the more sophisticated techniques, in order to specify an
appropriate form of analysis. In order to do this a set of manuals is essential. They are well written and they are
the only place where you can find out exactly what the analysis is going to do to your data and what assumptions
are to be made. Many people remark that SPSS is easy to use, that they understand it and that one doesn't need
manuals to use it. It is in reality easy to misuse, many of the techniques are extremely difficult to understand, and
if you use it without manuals you are in grave danger of seriously undermining your academic credibility.
5. Under most circumstances it is very difficult in SPSS to pass the results of one analysis as input to another, as it
does not support the data structures to do this. This reduces the flexibility of the package quite considerably.
6. SPSS is very poor at assumption checking; if it warns you about problems with your data you are in serious
trouble, as such warnings are few and far between. Much the same, in this respect, applies to SPSS as to MINITAB.
The warnings and the checks are all described in the manuals, but the checks have to be carried out by you on a
preliminary analysis or analyses of the data, and then you need to modify the options accordingly - none of this
will be done for you automatically.



Graphics Facilities
SPSS offers a range of character graphics facilities in all its versions. These enable you to plot crude scatter diagrams
etc. High-resolution graphics is provided in most versions via third party packages. When you issue an SPSS command
to produce the graphics, it writes a file of commands and passes this to the other package (which is started up
automatically by SPSS) to produce the graph. This way of providing graphics has both advantages and disadvantages.
It means that SPSS Inc do not have to write graphics software, as they can just use someone else's, which might be
better written than anything that they could write themselves. It also means however that they can only use features
that are available in the third party software (e.g. neither Harvard Graphics nor Microsoft Chart, which are the
packages used by the PC version, can produce error bars). You have to buy the graphics package separately (which can
be as much as six times the cost of a site licence copy of SPSS). There is also no guarantee that the package that you
have bought will continue to be the one that SPSS uses in the future. No high-resolution graphics is implemented on some mainframe versions of SPSS.

Graphical User Interfaces


The versions of SPSS for the PC, OS/2 and Apple Macintosh have what are referred to as graphical user interfaces. These have help systems and separate windows for commands, output, logs etc. You can choose commands from menus and 'paste' them into the commands window, and thus build up standard SPSS commands from menus. This is a two-edged sword; it makes it easier to get the command syntax right, but makes it even more tempting not to use the manuals, which are the only source of essential information about methods and assumptions. These interfaces do make it easier to construct SPSS commands, to correct errors and tidy up output, and are much 'friendlier' than the mainframe systems, but the pitfalls are still there and are easier to fall into.
SPSS can handle very large data sets because it does not load all the data into memory at once; it just gets it from disk when it wants it. It was designed for survey data, and that design philosophy has not changed much over the years. It is excellent for sorting out and tabulating survey data, and it can handle a competent range of univariate and multivariate techniques. However, if you have designed experiments to analyse then it is not for you.

[Figure: the SPSS Output Viewer]



Comparison of Various Statistical Packages

| Product | Publisher | Web-site | Price | Package type [4] | Survival analysis / base series processing (differencing, smoothing) | Cluster analysis | Discriminant analysis | Base data processing (sorting etc.) | Extended data processing (sampling, transformation) |
| AcaStat | AcaStat | www.acastat.com | $29 | S | - | - | - | - | - |
| Analyse-it | Analyse-it | www.analyse-it.com | $149 [6] | X | - | - | - | + | + |
| BioStat | AnalystSoft | www.analystsoft.com | $100 [6][7] | S | + | - | - | + | + |
| EasyReg | Herman J. Bierens | econ.la.psu.edu/~hbierens/EASYREG.HTM | $300 | S | - | - | - | + | + |
| Gauss | Aptech Systems | www.aptech.com | Unknown | SC [8] | + | - | - | + | + |
| Mathematica | Wolfram Research | www.wolfram.com | $1880 [6] | S | + | + | - | + | + |
| MedCalc | Frank Schoonjans | www.medcalc.com | $299 | S | - | - | - | + | - |
| Minitab | Minitab Inc. | www.minitab.com | $1195 [6] | S | + | + | + | + | + |
| NCSS Statistical Software | NCSS | www.ncss.com | >$399 | S | + | + | + | + | + |
| Origin | OriginLab | www.originlab.com | $699 | S | + | - | - | + | + |
| RATS | Estima | www.estima.com | $500 | S | + | - | - | + | + |
| Statistica | StatSoft | www.statsoft.com | >$695 | S | + | + | + | + | + |
| Statit | Statit | www.statit.com | >$295 | S | + | + | + | N/A | N/A |
| StatPlus | AnalystSoft | www.analystsoft.com | $150 [6][7] | S | + | - | - | + | + |
| SPlus | Insightful Inc. | www.splus.com | Unknown (on request, >$1000?) | SC [8] | + | + | + | + | + |
| SPSS | SPSS Inc. | www.spss.com | $1599 [6] | S | + | + | + | + | + |
| StatsDirect | StatsDirect | www.statsdirect.com | $179 | S | + | - | - | N/A | N/A |
| Statistix | Statistix | www.statistix.com | $695 | S | + | - | - | + | + |
| SYSTAT | | | | | | | | | |
| UNISTAT | UNISTAT Ltd | www.unistat.com | $895 | S | + | + | + | + | + |
| VisualStat | VisualStat Computing | www.visualstat.com | $195 | S | - | - | - | + | - |
| XLStat | Kovach Computing | www.kovcomp.co.uk | $395 [6] | X | - | + | + | N/A | N/A |

Notes:
[4] S = standalone; X = Excel add-in.
[6] Academic discounts available.
[7] Promotional price; check for availability. Regular prices are higher by up to 50%.
[8] Command prompt is used.



Questions for Review:

1. What do you mean by a report? What is the purpose of a report?


2. What do you mean by verbal reporting?
3. List the stages involved in the preparation of a report.
4. What are the ways of developing a subject?
5. What is meant by outlining the report?
6. Enumerate the characteristics of a good report.
7. What is reporting? What are the different stages in the preparation of a report?
8. What is a report? What are the characteristics/qualities of a good report?
9. Briefly describe the structure of a report.
10. What are the various aspects that have to be checked before going for final typing?
11. What are the points to be kept in mind in revising the draft report?
12. What are the various items that will find a place in the text / body of the report?
13. Describe briefly how a research report should be presented.
14. Describe the considerations and steps involved in planning report writing work.
15. Write short notes on:
a) Characteristics of a good report b) Sources of data c) Chapter plan
16. What is Small Business Innovation Research (SBIR)?
17. Explain the use of the SPSS software package in research analysis.





Suggested Readings and Bibliography

 Aaker D A, Kumar V & Day G S - Marketing Research (John Wiley & Sons Inc., 6th ed.)
 Agresti A., Categorical Data Analysis. New York: John Wiley & Sons 1990.

 Albrecht J. , "Measuring Application Development Productivity", Proceedings of IBM


 B.N.Agarwal. Basic Statistics, Wiley Eastern Ltd.
 B.N.Gupta. Statistics. Sahitya Bhavan, Agra.

 C.R. Kothari, Research Methodology (Methods and Techniques), New Age International Pvt. Ltd., New Delhi

 Cauvery – Research Methodology – (S. Chand & Co.)

 Dwivedi – Research Methods in Behavioral Science, ( Macmillan)


 Flower , Floyed J. Jr. : Survey methods, Sage Publication 1993
 Fred N. Kerlinger. Foundations of Behavioural Research, Surjeet Publications, Delhi

 Golde, Biddle, Koren : Composing Qualitative Research, Sage Publication


 Green, P.E. and Srinivasan, V., Conjoint Analysis in Consumer Research: Issues and Outlook, Journal of
Consumer Research, 5, 1978, 103 – 123.

 Gupta S.P. : Statistical Methods, Sultan Chand, New Delhi 2001


 Gy, P (1992) Sampling of Heterogeneous and Dynamic Material Systems: Theories of Heterogeneity, Sampling
and Homogenizing
 J.F. Rummel & W.C. Ballaine. Research Methodology in Business, Harper & Row, Publishers, New York
 Kothari, C.R., Quantitative Techniques, Vikas Publishing House Private Ltd., New Delhi, 1997.

 Kendall, P. C., & Grove, W. (1988). Normative comparisons in therapy outcome. Behavioral Assessment, 10, 147-
158.
 Levin R I & Rubin DS - Statistics for Management (Prentice Hall of India, 2002)
 Marrison, D.F., Multivariate Statistical Methods, McGraw Hill, New York, 1986.
 Nowak, R. (1994). Problems in clinical trials go far beyond misconduct. Science. 264(5165): 1538-41.

 P.Saravanavel. Research Methodology, Kitab Mahal, Allahabad.

 P.V. Young. Scientific Social Surveys and Research, Prentice-Hall of India, New Delhi


 Panneerselvam, R., Research Methodology, Prentice Hall of India, New Delhi, 2004.
 Rencher, A.V., Methods of Multivariate Analysis, Wiley Inter-science, Second Edition, New Jersey, 2002.

 Resnik, D. (2000). Statistics, ethics, and research: an agenda for educations and reform. Accountability in
Research. 8: 163-88
 Romesburg, H.C., Cluster Analysis for Researchers, Lifetime Learning Publications, Belmont, California, 1984.

 T.S. Wilkinson & P.L. Bhandarkar. Methodology and Techniques of Social Research, Himalaya Publishing House,
Mumbai

 Zikmund : Business Research Methods, ( Thomson Learning Books)





TMC Question Bank

1. Explain the stages in the research process with the help of a flow chart of research process.
2. A researcher is interested in knowing the answer to a why question, but does not know what sort of answer will
be satisfying. Is this exploratory, descriptive, or causal research? Explain.
3. What is the task of problem definition? The city police wishes to understand its image from the public‘s point of
view. Define the business problem.
4. Give the categories of exploratory research would you suggest in each of the following situations?
(a) A product manager suggests that a non-tobacco cigarette, blended from wheat, cocoa, and citrus, be
developed.
(b) A manager needs to determine the best site for a departmental store in urban area.
5. With the help of examples, classify survey research methods.
6. Discuss the use of self – administered questionnaires along with their classifications.
7. Design a complete questionnaire to evaluate job satisfaction of entry level marketing executives.
8. Outline the step – by – step procedure to select following:-
(a) A sample of 150 students at your school,
(b) A sample of 50 mechanical engineers, 40 electrical engineers, and 40 civil engineers, from the subscriber list
of an engineering journal,
(c) A sample of two-wheeler and four-wheeler owners in a 'Big Bazaar' intercept sample,
(d) A sample of male and female workers to compare hourly wages of drill press operators.
9. What is a hypothesis? Write the general procedure for hypothesis testing. Differentiate between α (Type I) and β (Type II) errors.
10. Define and classify secondary data. Discuss the process of evaluating secondary data.
11. Discuss in detail the application of Research Methodology in Business Management.
12. Discuss various contents required in the layout of Internet questionnaire.
13. Compare sampling techniques in detail. Differentiate between the t-distribution and the z-distribution. Write a detailed
note on Total Survey Error.
14. Discuss various factors that influence the validity of experimental studies in research.
15. A company manufacturing readymade snacks introduced its new product with different flavours in the Indian market. The company looks forward to noting the preferences of consumers for the offered flavours. The company is also
interested in developing new flavours that can do well in the market.
16. What type of research should be conducted? Give reasons to support your answer.
17. Design the research process in detail. Support your answer with flow diagram.
18. Give meaning of research and describe the stages of development of Research.
19. State the meaning and importance of Hypothesis with examples.
20. What are the major characteristics in sampling? State the type of sampling with suitable illustrations.
21. Discuss briefly the various methods of data collection. What steps will you follow while writing a Research
Report?
22. Write notes on any two:-
(a) Scaling Techniques, (c) Presentation of Data ,
(b) Processing of Data,
23. What do you understand by the term 'Research'? What are the various stages in the development of a research?



24. Define a hypothesis. Discuss the importance of hypothesis in research and the process of formation of a hypothesis.
25. How do you recognize a research problem? Describe the criteria of a good research problem.
26. Define sampling. State briefly the various methods of sampling.
27. Explain the significance of statistical tools in the interpretation of data. What are their limitations?
28. Define the interviewing and the questionnaire techniques of data collection.
29. Define Research Report. Explain the characteristics of a good research report
30. Write notes on any two:-
(a) Observation Method
(b) Scaling technique.
31. What is the meaning of hypothesis? Discuss the importance and functions of hypothesis in research.
32. What is meant by a research problem? Elaborate the guiding principles in the selection of a research problem.
33. Define the terms 'Interviewing' and 'Questionnaire'. Explain the interviewing and questionnaire techniques in data collection.
34. Which guiding principles should be followed by a researcher while writing the research report? Explain.
35. Discuss the problem of measurement of attitudes and scale construction.
36. Write note on any two:-
(a) Scaling Techniques, (c) Research Design
(b) Tabulation, (d) Presentation
37. What is 'Exploratory Research'? Discuss the stages involved in carrying out exploratory research.
38. Discuss the concept of measurement and scaling. What are the criteria for good measurement?
39. Discuss the nature and scope of satisfaction surveys.
40. Construct a questionnaire for measuring the attitudes of DBM students towards social issues.
41. Explain following terms in relation to data analysis
i) Coding ii) Tabulation and iii) Cross Tabulation
42. Explain how a research follow-up is undertaken. What is the need for a research follow-up? List and explain the steps in the report writing process.
43. Explain what is meant by a semantic differential scale.
44. Summarise the qualities of a good questionnaire.
45. Where should interviewer instructions pertaining to responses to a particular question be placed on the
questionnaire?
46. The textbook says that one does not start by writing questions. How should the researcher begin?
47. What are the two occasions when apparently "redundant" questions should be found in a questionnaire?
48. Name the three advantages of open-ended questions.
49. What are the three reasons why a respondent is unable to answer a question?
50. What is the recommended duration of interviews carried out in rural situations?
51. What are the key characteristics of opening questions in a questionnaire?
52. Define the term 'random sampling'. Name the 3 non-probability sampling methods shown in the opening section
of the chapter.
53. What are the 3 key questions to be posed when employing stratified sampling?
54. Explain the term 'primary sampling units' (PSUs). Define the term 'null hypothesis'.



55. What are the 2 types of statistical tests? Explain the meaning of a 'type I error'.
56. Define the term 'hypothesis'.
57. What are the 3 types of research design?
58. What are the main items of information which should be included in a research brief?
59. Name the 3 factors which determine which is the appropriate statistical test to conduct on data obtained from a
random sample.
60. What is the aim of exploratory research? Name 4 characteristics of a good research brief.
61. Why is it important to devise a data analysis plan before collecting the data?
62. What steps can the researcher take to increase the probability of obtaining the respondent's cooperation?
63. What are the causes of respondent bias in personal interviews?
64. How many participants should be involved in a focus group session?
65. What name is given to the interviewer leading a focus group session?
66. Outline the possible problems that can arise from using focus groups
67. What is meant by a 'structured interview'?
68. What is Business Research? Explain the ‗research-process‘ with suitable block diagram.
69. Explain the various types of Research Designs and compare qualitative and quantitative research.
70. What do you mean by sampling? Compare the probability and non-probability sampling techniques with
suitable examples.
71. Explain the various methods of data collection.
72. Design an appropriate questionnaire for the following (any TWO)
(a) Consumer Preferences of Airtel Mobile Service
(b) Market Share of ‗cold-drinks‘ in Nagpur
(c) Insurance preferences on ‗motor vehicle insurance‘ in Nagpur
(d) Changing investment pattern of households in last decade
73. Explain the concepts of hypothesis and ‗hypothesis testing‘ along with the procedure for hypothesis testing.
74. Explain the layout of a research report used in report writings.
75. How do you interpret the collected data during a research? Explain various techniques involved therein.





Model Question Paper for Summer: 2013

Second Semester of Master of Commerce (M.Com) Examination
RESEARCH METHODOLOGY

Time: Three Hours]    [Max. Marks: 80

NB:-
1) All the Five Questions are compulsory.
2) All questions carry equal marks. (16 marks)

Q1. (a) Briefly describe the different steps involved in a research process.
(b) Discuss major problems in the research process.
or
(c) What are the significant features of good research? Explain the use of advanced technology in research.

Q2. (a) "Research design in exploratory studies must be flexible but in descriptive studies, it must minimize bias and maximize reliability." Discuss.
(b) What are the characteristics of a good sample design?
or
(c) What do you mean by 'Sample Design'? What points should be taken into consideration by a researcher in developing a sample design for a research project?

Q3. (a) What is a survey plan? Explain various survey errors.
(b) "Processing of data implies editing, coding, classification and tabulation." Comment.
or
(c) What is the purpose of analysis of data? Explain the advantages and disadvantages of various tools of analysis.

Q4. (a) What is hypothesis formulation?
(b) Define a hypothesis. State the characteristics of a good hypothesis.
or
(c) Explain the procedure for hypothesis testing. What are the statistical techniques used for testing a hypothesis?

Q5. (a) Explain the significance of a research report.
(b) Describe the various steps involved while writing reports.
or
(c) What are the various functional areas in Commerce in which research can be of great significance to the organization?

