RESEARCH METHODOLOGY
M.Com [Semester-II]
1st Edition: 2013-14 onwards
An attempt has been made here by the experts of TMC to assist students by providing study material as per the curriculum, with non-commercial considerations. However, it is implicit that this is exam-oriented study material, and students are advised to attend regular lectures in the Institute and utilize the reference books available in the library for in-depth knowledge.
We owe much to many websites and their free content; we would especially like to acknowledge the content of the website www.wikipedia.com and the various authors whose writings formed the basis for this book. We extend our thanks to them.
Finally, we would like to say that there is always room for improvement in whatever we do. We would appreciate any suggestions from readers regarding this study material so that its contents can be made more interesting and meaningful. Readers can email their queries and doubts to our authors at [email protected]; we shall be glad to help immediately.
5. Report writing: qualities of a good report, layout of a project report, steps in report writing, precautions in research report writing.
Research in Commerce: general management, Small Business Innovation Research (SBIR).
Research in functional areas: marketing, finance, HR and production.
Software packages: SPSS.
Introduction
Research comprises creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications. Research can be defined as a search for knowledge, or as any systematic investigation to establish facts. The primary purpose of applied research (as opposed to basic research) is the discovery, interpretation, and development of methods and systems for the advancement of human knowledge on a wide variety of scientific matters of our world and the universe. Research can use the scientific method, but need not do so.
Scientific research relies on the application of the scientific method, a harnessing of curiosity. This research provides
scientific information and theories for the explanation of the nature and the properties of the world around us. It makes
practical applications possible. Scientific research is funded by public authorities, by charitable organisations and by
private groups, including many companies. Scientific research can be subdivided into different classifications
according to their academic and application disciplines.
Research can also be defined as a scientific and systematic search for information and knowledge on a specific topic or phenomenon. In management, research is used extensively in various areas. For example, we know that marketing is the process of planning and executing the conception, pricing, promotion and distribution of ideas, goods and services to create exchanges that satisfy individual and organizational objectives. Thus we can say that the marketing concept requires customer satisfaction, rather than profit maximization, to be the goal of an organization. The organization should be consumer-oriented and should try to understand consumers' requirements and satisfy them quickly and efficiently, in ways that are beneficial to both the consumer and the organization.
This means that any organization should try to obtain information on consumer needs and gather market intelligence to help satisfy these needs efficiently. This can be done only through research.
Research in common parlance refers to a search for knowledge. It is an endeavour to discover answers to problems (of an intellectual and practical nature) through the application of scientific methods. Research, thus, is essentially a systematic inquiry seeking facts (truths) through objective, verifiable methods in order to discover the relationships among them and to deduce from them broad conclusions. It is thus a method of critical thinking. Any type of organisation in the globalised environment needs a systematic supply of information, coupled with tools of analysis, for making sound decisions that involve minimum risk.
To understand the term 'research' clearly and comprehensively, let us analyze the following definition.
i) Research is the manipulation of things, concepts or symbols:
- manipulation means purposeful handling;
- things means objects such as balls, rats, or vaccines;
- concepts mean the terms designating things and our perceptions of them, of which science tries to make sense, e.g. velocity, acceleration, wealth, income;
- symbols may be signs such as +, –, ÷, ×, x̄, s, S, etc.
ii) Research involves formulating a hypothesis and reaching certain conclusions, either in the form of solutions to the problem concerned or as generalisations for some theoretical formulation.
Objectives of research
Following are the key objectives of research:
1. Exploration- an understanding of an area of concern in very general terms. Example: I want to know how to go
about doing more effective research on school violence.
2. Description - an understanding of what is going on. Example: I want to know the attitudes of potential clients
toward Air-Conditioner use.
3. Explanation - an understanding of how things happen. Involves an understanding of cause and effect relationships
between events. Example: I want to know if a group of people who have gone through a certain program have higher
self-esteem than a control group.
4. Prediction - an understanding of what is likely to happen in the future. If I can explain, I may be able to predict.
Example: If one group had higher self-esteem, is it likely to happen with another group?
5. Intelligent intervention - an understanding of what or how in order to help more effectively.
6. Awareness - an understanding of the world, often gained by a failure to describe or explain.
Significance of Research
Research is the process of systematic and in-depth study or search for a solution to a problem or an answer to a
question backed by collection, compilation, presentation, analysis and interpretation of relevant details, data and
information. It is also a systematic endeavour to discover valuable facts or relationships. Research may involve careful
enquiry or experimentation and result in discovery or invention. There cannot be any research which does not increase
knowledge which may be useful to different people in different ways.
Let us see the need for research to business organizations and their managers and how it is useful to them.
i) Industrial and economic activities have assumed huge dimensions. The size of modern business
organizations indicates that managerial and administrative decisions can affect vast quantities of capital and a large
number of people.
Trial-and-error methods are not appreciated, as mistakes can be tremendously costly. Decisions must be quick but accurate and timely, and should be objective, i.e. based on facts and realities. Against this backdrop, business decisions nowadays are mostly influenced by research and research findings. Thus, research helps in quick and objective decision-making.
ii) Research, being a fact-finding process, significantly influences business decisions. The business management is
interested in choosing that course of action which is most effective in attaining the goals of the organization. Research
not only provides facts and figures to support business decisions but also enables the business to choose one which is
best.
iii) A considerable number of business problems are now given quantitative treatment with some degree of success
with the help of operations research.
1. Reliability is the repeatability of any research, research instrument, tool or procedure. Reliability cannot be measured with complete precision, but it can be estimated. If any research yields similar results each time it is undertaken with a similar population and similar procedures, it is said to be reliable. Suppose a research study is conducted on the effects of separation between parents on the class performance of their children. If the results conclude that separation causes low grades in class, these results should also hold for another sample taken from a similar population. The more similar the results, the more reliable the research.
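The repeatability idea can be sketched numerically: administer the same instrument to the same respondents twice and correlate the two sets of scores (test-retest reliability). A minimal sketch in Python, with hypothetical class-performance scores (all numbers invented for illustration):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical class-performance scores of the same ten children,
# measured twice with the same instrument.
first_run = [62, 55, 70, 48, 80, 66, 59, 73, 51, 68]
second_run = [60, 57, 72, 45, 78, 65, 61, 70, 53, 66]

r = pearson_r(first_run, second_run)
print(round(r, 3))  # a value close to 1 suggests high test-retest reliability
```

A coefficient near 1 indicates that repeated measurement yields similar results, which is exactly the repeatability described above.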
2. Validity is the strength with which we can call research conclusions, assumptions or propositions true or false. Validity determines the applicability of the research. The validity of a research instrument can be defined as the suitability of the instrument to the research problem, or how accurately the instrument measures the problem. Some researchers say that validity and reliability are correlated, but that validity is much more important than reliability. Without validity, research goes in the wrong direction. To keep the research on track, define your concepts in the best possible manner so that no errors occur during measurement.
3. Accuracy is the degree to which the research processes, instruments and tools are suited to one another. Accuracy also measures whether the research tools have been selected in the best possible manner and whether the research procedures suit the research problem. For example, if research has to be conducted on transgender people, several data collection tools can be used depending on the research problem; but if you find the population less cooperative, the best way is to observe them rather than administer a questionnaire, because with a questionnaire they will either give biased responses or not return the questionnaires at all. Choosing the best data collection tool thus improves the accuracy of research.
4. Credibility comes with the use of the best sources of information and the best procedures in research. If you use second-hand information in your research for any reason, your research might be completed in less time, but its credibility will be at stake, because secondary data has already been processed by others and may carry their errors or biases. A certain proportion of secondary data can be used if a primary source is not available, but basing a research study completely on secondary data when primary data can be gathered is least credible. When a researcher gives accurate references, the credibility of the research increases; fake references decrease it.
5. Generalizability is the extent to which research findings can be applied to a larger population. When a researcher conducts a study, he or she chooses a target population and takes a small sample from it to conduct the research. The sample is representative of the whole population, so the findings should be as well. If the research findings can be applied to any sample from the population, the results of the research are said to be generalizable.
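The logic of generalizability can be illustrated with a small simulation: if a sample is drawn at random from the population, its mean should lie close to the population mean. A sketch in Python (all figures hypothetical):

```python
import random
import statistics

random.seed(7)  # fixed seed so the example is repeatable

# Hypothetical target population: monthly incomes (in Rs. '000)
# of 10,000 households.
population = [random.gauss(30, 8) for _ in range(10_000)]

# A simple random sample of 400 households drawn from it.
sample = random.sample(population, 400)

pop_mean = statistics.mean(population)
sample_mean = statistics.mean(sample)

# A representative sample's mean should be close to the population
# mean; this closeness is what justifies generalizing the findings.
print(round(pop_mean, 2), round(sample_mean, 2))
```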
6. Empirical nature of research means that the research has been conducted following rigorous scientific methods and procedures. Each step in the research has been tested for accuracy and is based on real-life experiences. Quantitative research is easier to verify scientifically than qualitative research; in qualitative research, biases and prejudices can easily occur.
7. Controlled - in real-life experience there are many factors that affect an outcome; a single event is often the result of several factors. When a similar event is tested in research, owing to the broad range of factors that affect it, some factors are treated as controlled factors while others are tested for possible effect. The controlled factors or variables have to be controlled rigorously. In the pure sciences it is very easy to control such elements because experiments are conducted in a laboratory, but in the social sciences it becomes difficult to control these factors because of the nature of the research.
In short, the features that good research must possess are summarized below:
1. It should be systematic in nature.
2. It should be logical.
3. It should be empirical and replicable in nature.
4. It should proceed according to plans.
5. It should follow the rules, and its assumptions should not rest on false bases or judgments.
6. It should be relevant to what is required.
7. Its procedure should be reproducible.
8. The research procedure should move in a controlled manner.
Types of Research
Research may be classified into different types for the sake of better understanding of the concept. Several bases can be
adopted for the classification such as nature of data, branch of knowledge, extent of coverage, place of investigation,
method employed, time frame and so on. Depending upon the basis adopted for the classification, research may be classified into a class or type. It is possible for a piece of research work to be classified under more than one type; hence there will be overlapping. It must be remembered that good research uses a number of types, methods and techniques, so rigid classification is impossible.
The following is only an attempt to classify research into different types.
The research carried out, in these areas, is called management research, production research, personnel research,
financial management research, accounting research, Marketing research etc.
Research Approaches
The researcher has to provide answers at the end, to the research questions raised in the beginning of the study. For
this purpose he has investigated and gathered the relevant data and information as a basis or evidence. The procedures
adopted for obtaining the same are described in the literature as methods of research or approaches to research. In fact,
these are the broad methods used to collect the data.
These methods are as follows:
1) Survey Method
2) Observation Method
3) Case Method
4) Experimental Method
5) Historical Method
6) Comparative Method
It is now proposed to explain briefly each of the above-mentioned approaches.
1. Survey Method
The dictionary meaning of 'survey' is to oversee, to look over, to study, to systematically investigate. Survey research
is used to study large and small populations (or universes). It is a fact finding survey. Mostly empirical problems are
investigated by this approach. It is a critical inspection to gather information, often a study of an area with respect to a
certain condition or its prevalence. For example: a marketing survey, a household survey, All India Rural Credit
Survey.
Survey is a very popular branch of social science research. Survey research has developed as a separate research activity along with the development and improvement of sampling procedures. Sample surveys are very popular nowadays; as a matter of fact, the sample survey has become synonymous with the survey. For example, see the following definitions:
Survey research can be defined as the "specification of procedures for gathering information about a large number of people by collecting information from a few of them" (Black and Champion). Survey research is "studying samples chosen from populations to discover the relative incidence, distribution, and interrelations of sociological and psychological variables" (Fred N. Kerlinger). In survey research, information may be collected by observation, personal interview, mailed questionnaires, administered schedules or telephone enquiries.
2. Observation Method
Observation means seeing or viewing. It is not casual but systematic viewing. Observation may therefore be defined as "a systematic viewing of a specific phenomenon in its proper setting for the purpose of gathering information for the specific study".
Observation is a method of scientific enquiry. We observe a person or an event or a situation or an incident. The body
of knowledge of various sciences such as biology, physiology, astronomy, sociology, psychology, anthropology etc.,
has been built upon centuries of systematic observation.
Observation is also useful in the social and business sciences for gathering information and conceptualizing it. For example: What is the lifestyle of tribals? How do marketing activities take place in regulated markets? How are investment activities conducted in stock exchange markets? How do proceedings take place in the Indian Parliament or the Assemblies? How is a corporate office maintained in a public sector or a private sector undertaking? What is the behaviour of political leaders? How do traffic jams build up in Delhi during peak hours?
Observation as a method of data collection has some features:
i) It involves not only seeing and viewing but hearing and perceiving as well.
ii) It is both a physical and a mental activity. The observing eye catches many things that are sighted, but attention is focused on data that are relevant to the problem under study.
iii) It captures the natural social context in which the person's behaviour occurs.
iv) Observation is selective: the investigator does not observe everything but selects the range of things to be observed depending upon the nature, scope and objectives of the study.
v) Observation is not casual but purposive. It is made for the purpose of noting things relevant to the study.
vi) The investigator first observes the phenomenon and then gathers and accumulates the data.
Observation may be classified in different ways. According to the setting, it can be (a) observation in a natural setting, e.g. observing the live telecast of parliamentary proceedings or watching from the visitors' gallery, or electioneering in India through election meetings; or (b) observation in an artificially stimulated setting, e.g. business games or a treadmill test. According to the mode of observation, it may be classified as (a) direct or personal observation, and (b) indirect or mechanical observation. In direct observation, the investigator personally observes the event when it takes place, whereas in indirect observation it is done through mechanical devices such as audio recordings, audio-visual aids, still photography, picturization, etc. According to the participating role of the observer, it can be classified as (a) participant and (b) non-participant observation.
3. Case Method
Case method of study is borrowed from Medical Science. Just like a patient, the case is intensively studied so as to
diagnose and then prescribe a remedy. A firm or a unit is to be studied intensively with a view to finding out problems,
differences, specialties so as to suggest remedial measures. It is an in-depth/intensive study of a unit or problem under
study. It is a comprehensive study of a firm or an industry, or a social group, or an episode, or an incident, or a process,
or a programme, or an institution or any other social unit.
According to P.V. Young, "a comprehensive study of a social unit, be that unit a person, a group, a social institution, a district, or a community, is called a Case Study".
The case study is one of the popular research methods. A case study aims at studying everything about something rather than something about everything. It examines the complex factors involved in a given situation so as to identify the causal factors operating in it. The case study describes a case in terms of its peculiarities and its typical or extreme features. It also helps to build a fund of information about the unit under study. It is a most valuable method of study for diagnostic and therapeutic purposes.
4. Experimental Method
Experimentation is the basic tool of the physical sciences like Physics, Chemistry for establishing cause and effect
relationship and for verifying inferences. However, it is now also used in social sciences like Psychology, Sociology.
Experimentation is a research process used to observe cause and effect relationship under controlled conditions. In
other words it aims at studying the effect of an independent variable on a dependent variable, by keeping the other
interdependent variables constant through some type of control. In experimentation, the researcher can manipulate the
independent variables and measure its effect on the dependent variable.
The main features of the experimental method are:
i) Isolation of factors or controlled observation.
ii) Replication of the experiment i.e. it can be repeated under similar conditions.
iii) Quantitative measurement of results.
iv) Determination of cause and effect relationship more precisely.
Three broad types of experiments are:
a) The natural or uncontrolled experiment, as in astronomy, made up mostly of observations.
b) The field experiment, the type best suited to the social sciences. "A field experiment is a research study in a realistic situation in which one or more independent variables are manipulated by the experimenter under as carefully controlled conditions as the situation will permit." (Fred N. Kerlinger)
c) The laboratory experiment, in which the variables are manipulated under fully controlled, artificial conditions.
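The idea of manipulating an independent variable and measuring its effect on a dependent variable, while other factors are held constant, can be sketched as a simulated controlled experiment (all data invented for illustration):

```python
import random
import statistics

random.seed(1)  # fixed seed so the example is repeatable

# Independent variable: attending a training programme (yes/no).
# Dependent variable: test score. Other factors are "controlled"
# here by drawing both groups from the same score distribution,
# shifted only by the assumed treatment effect.
control = [random.gauss(60, 5) for _ in range(50)]    # no training
treatment = [random.gauss(65, 5) for _ in range(50)]  # with training

# The difference between the group means estimates the effect of
# the manipulated independent variable.
effect = statistics.mean(treatment) - statistics.mean(control)
print(round(effect, 2))
```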
5. Historical Method
When research is conducted on the basis of historical data, the researcher is said to have followed the historical
approach. To some extent, all research is historical in nature, because to a very large extent research depends on the
observations / data recorded in the past. Problems that are based on historical records, relics, documents, or
chronological data can conveniently be investigated by following this method. Historical research depends on past
observations or data and hence is non-repetitive, therefore it is only a post facto analysis. However, historians,
philosophers, social psychiatrists, literary men, as well as social scientists use the historical approach. Historical
research is the critical investigation of events, developments, experiences of the past, the careful weighing of evidence
of the validity of the sources of information of the past, and the interpretation of the weighed evidence. The historical
method, also called historiography, differs from other methods in its rather elusive subject matter i.e. the past. In
historical research primary and also secondary sources of data can be used. A primary source is the original repository
of a historical datum, like an original record kept of an important occasion, an eye witness description of an event, the
inscriptions on copper plates or stones, the monuments and relics, photographs, minutes of organization meetings,
documents. A secondary source is an account or record of a historical event or circumstance, one or more steps
removed from an original repository. Instead of the minutes of the meeting of an organization, for example, if one uses
a newspaper account of the meeting, it is a secondary source.
The aim of historical research is to draw explanations and generalizations from the past trends in order to understand
the present and to anticipate the future. It enables us to grasp our relationship with the past and to plan more
intelligently for the future.
For historical data, only authentic sources should be relied upon, and their authenticity should be tested by checking and cross-checking the data against as many sources as possible. It is often of considerable interest to use time series data for assessing progress or for evaluating the impact of policies and initiatives. This can be done meaningfully with the help of historical data.
6. Comparative Method
The comparative method is also frequently called the evolutionary or genetic method. The term comparative method has come about in this way: some sciences have long been known as "comparative sciences" - such as comparative philology, comparative anatomy, comparative physiology, comparative psychology, comparative religion, etc. The method of these sciences then came to be described as the "comparative method", an abridged expression for "the method of the comparative sciences". When the method of most comparative sciences came to be directed more and more to the determination of evolutionary sequences, it came to be described as the "evolutionary method".
Definitions of hypothesis
The term hypothesis has been defined in several ways. Some important definitions have been given in the following
paragraphs:
1. Hypothesis: A Tentative Supposition or Provisional Guess
"It is a tentative supposition or provisional guess which seems to explain the situation under observation." - James E. Creighton
2. Hypothesis: A Tentative Generalization
G.A. Lundberg thinks, "A hypothesis is a tentative generalisation, the validity of which remains to be tested. In its most elementary stage the hypothesis may be any hunch, guess, or imaginative idea which becomes the basis for further investigation."
3. Hypothesis: Shrewd Guess
According to John W. Best, "It is a shrewd guess or inference that is formulated and provisionally adopted to explain observed facts or conditions and to guide further investigation."
4. Hypothesis: Guides the Thinking Process
According to A.D. Carmichael, "Science employs hypothesis in guiding the thinking process. When our experience tells us that a given phenomenon follows regularly upon the appearance of certain other phenomena, we conclude that the former is connected with the latter by some sort of relationship, and we form a hypothesis concerning this relationship."
5. Hypothesis: A Proposition to Be Put to Test to Determine Its Validity
According to Goode and Hatt, "A hypothesis states what we are looking for. A hypothesis looks forward. It is a proposition which can be put to a test to determine its validity. It may prove to be correct or incorrect."
6. Hypothesis: An Expectation About Events Based on Generalization
According to Bruce W. Tuckman, "A hypothesis then could be defined as an expectation about events based on generalization of the assumed relationship between variables."
7. Hypothesis: A Tentative Statement of the Relationship Between Two or More Variables
"A hypothesis is a tentative statement of the relationship between two or more variables. Hypotheses are always in declarative sentence form, and they relate, either generally or specifically, variables to variables."
Test A: Comparing sales in a test market with the market share of the product it is targeted to replace. (Number of samples = 1)
Test B: Comparing the responses of a sample of regular drinkers of fruit juices with those of a sample of non-fruit-juice drinkers to a trial formulation. (Number of samples = 2)
Test C: Comparing the responses of samples of heavy, moderate and infrequent fruit juice drinkers to a trial formulation. (Number of samples = 3)
The next consideration is whether the samples being compared are dependent (i.e. related) or independent of one another (i.e. unrelated). Samples are said to be independent, or unrelated, when the measurement taken from one sample in no way affects the measurement taken from another sample. Take, for example, the outline of Test B above. The measurement of the responses of fruit juice drinkers to the trial formulation in no way affects or influences the responses of the sample of non-fruit-juice drinkers. Therefore, the two samples are independent.
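The distinction matters in analysis because unrelated samples are compared group against group, while related samples are compared pair by pair. A small sketch with hypothetical taste-test scores (all numbers invented for illustration):

```python
import statistics

# Independent (unrelated) samples, as in Test B: two different groups
# of respondents rate a trial formulation on a 0-10 scale.
regular_drinkers = [7, 8, 6, 9, 7, 8]
non_drinkers = [5, 6, 4, 6, 5, 7]

# Unrelated groups: compare the two group means directly.
diff_independent = statistics.mean(regular_drinkers) - statistics.mean(non_drinkers)

# Dependent (related) samples: the SAME respondents rate the old and
# the new formulation, so the scores are compared pair by pair.
old_formula = [5, 6, 5, 7, 6, 5]
new_formula = [7, 7, 6, 9, 8, 6]
paired_diffs = [new - old for new, old in zip(new_formula, old_formula)]
mean_paired_diff = statistics.mean(paired_diffs)

print(diff_independent, mean_paired_diff)  # → 2.0 1.5
```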
Levels of measurement

Measurement scale | Measurement level                               | Examples                                                              | Mathematical properties
Nominal           | Frequency counts                                | Producing grading categories                                          | Confined to a small number of tests using the mode and frequency
Ordinal           | Ranking of items                                | Placing brands of cooking oil in order of preference                  | Wide range of nonparametric tests which test for order
Interval          | Relative differences of magnitude between items | Scoring products on a 10-point scale of like/dislike                  | Wide range of parametric tests
Ratio             | Absolute differences of magnitude               | Stating how much better one product is than another in absolute terms | All arithmetic operations
Choosing format (a) would give rise to nominal (or categorical) data and format (b) would yield ratio scaled data.
These are at opposite ends of the hierarchy of levels of measurement. If by accident or design format (a) were chosen
then the analyst would have only a very small set of statistical tests that could be applied and these are not very
powerful in the sense that they are limited to showing association between variables and could not be used to establish
cause-and-effect. Format (b), on the other hand, since it gives the analyst ratio data, allows all statistical tests to be used
including the more powerful parametric tests whereby cause-and-effect can be established, where it exists. Thus a
simple change in the wording of a question can have a fundamental effect upon the nature of the data generated.
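The effect of measurement level on the available statistics can be shown concretely: nominal data supports only counts and the mode, while ratio data supports full arithmetic and hence the parametric tests. A minimal sketch (category labels and quantities are invented for illustration):

```python
import statistics
from collections import Counter

# Format (a) style data: nominal (categorical) brand choices.
# Only frequencies and the mode are meaningful here.
brand_chosen = ["A", "B", "A", "C", "A", "B", "A"]
mode_brand = Counter(brand_chosen).most_common(1)[0][0]

# Format (b) style data: ratio-scaled quantities (litres bought per
# month). All arithmetic operations, and hence parametric tests,
# are available.
litres_per_month = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0, 2.0]
mean_litres = statistics.mean(litres_per_month)

print(mode_brand, round(mean_litres, 2))  # → A 2.57
```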
Motivation of Research
Motivation is essential to nearly all behaviour at work; however, it is not easy to define. Motivation can be thought of as the force that drives behaviour. In other words, it can be considered both the powerhouse behind behaviour and a person's reasons for doing something (or nothing). Motivation involves both feelings (emotions) and thinking (cognition).
All human behaviour arises in response to some form of internal (physiological) or external (environmental) stimulation. These behaviours are purposeful or goal-directed, and are the result of the arousal of certain motives. Thus motivation can be defined as the process of activating, maintaining and directing behaviour toward a particular goal. The process is terminated once the desired goal is obtained.
The process of initiating action in the organism is technically called motivation. Motivation refers to a state that directs the behaviour of the individual towards certain goals. Motivation is not directly observable; it is described as an inferred process, so called by psychologists to explain certain behaviours.
When we ask, "What motivates a person to do a particular thing?", we usually mean: why does he behave as he does? In other words, motivation, as popularly used, refers to the cause or the "why" of behaviour. Since psychology is the study of human behaviour, motivation is an important part of psychology. Motivation refers to a state of a person that directs his or her behaviour towards certain goals.
Various types of motivation lead to effective research work, as follows:
1. Intrinsic motivation - the love of the work itself. Intrinsic motivations include interest, challenge, learning, meaning, purpose and creative flow. Research has shown that high levels of intrinsic motivation are strongly linked to outstanding creative performance.
2. Extrinsic motivation - rewards for good work or punishments for poor work. Extrinsic motivations include money, fame, awards, praise, status, opportunities, deadlines, commitments, bribes and threats. Research shows that too much focus on extrinsic motivation can block creativity.
3. Personal motivation - individual values, linked to personality. Examples include power, harmony, achievement, generosity, public recognition, authenticity, knowledge, security and pleasure.
Each of us prioritizes some values over others; understanding your own values and those of people around you is key
to motivating yourself and influencing others.
4. Operationalisation of Variables
In stating a problem, the researcher should make sure that it is neither stated in terms so general as to make it vague
nor specified so narrowly as to make it insignificant and trivial. The most important step in this direction is to specify
the variables involved in the problem and define them in operational terms. To illustrate, suppose you state that you
want to study the "Effectiveness of Self-help Groups on the Empowerment of Rural Women". This statement is broad
and it communicates in a general way what you want to do. But it is necessary to specify the problem with much
greater precision. For this the first step is to specify the variables involved in the problem and define them in
operational terms.
The variables involved in the problem are "effectiveness" and "empowerment". Please note that these expressions are
to be understood beyond their dictionary meanings. For example, the dictionary meaning of "effectiveness" is
"producing the desired effect". This meaning is not sufficient for research purposes. It is important for you to specify
exactly what indicators of effectiveness you will use, or what you will do to measure the presence or absence of the
phenomenon denoted by the term "effectiveness". Similarly, you have to define the other variable, "empowerment",
in terms of the operations or processes that will be used to measure it. In this study, you might choose to
define "effectiveness" as the improvement made by the rural women in scores on a standardised scale. The term
'empowerment' might refer to the scores on an achievement test of empowerment.
Research Design
The decisions regarding what, where, when, how much, by what means concerning a research project constitute a
research design. "A research design is the arrangement of conditions for collection and analysis of data in a manner
that aims to combine relevance to the research purpose with economy in procedure." In fact, the research design is the
conceptual structure within which research is conducted; it constitutes the blueprint for the collection, measurement
and analysis of data. As such the design includes an outline of what the researcher will do from writing the hypothesis
and its operational implications to the final analysis of data.
More explicitly, the design stage often begins with exploratory research, which may rely on the following techniques:
A. Experience surveys: Concepts may be discussed with top executives and knowledgeable managers who have
had personal experience in the field being researched. This constitutes an informal experience survey. Such a study
may be conducted by the business manager rather than the research department. On the other hand, an experience
survey may be a small number of interviews with experienced people who have been carefully selected from outside
the organization. The purpose of such a study is to help formulate the problem and clarify concepts rather than to
develop conclusive evidence.
B. Secondary data analysis: A quick and economical source of background information is trade literature in the
public library. Searching through such material is exploratory research with secondary data; research rarely begins
without such an analysis. An informal situation analysis using secondary data and experience surveys can be
conducted by business managers. Should the project need further clarification, a research specialist can conduct a pilot
study.
C. Case study method: The purpose of a case study is to obtain information from one, or a few, situations similar
to the researcher's situation. A case study has no set procedures, but often requires the cooperation of the party whose
history is being studied. This freedom makes the success of the case study highly dependent on
the ability of the researcher. As with all exploratory research, the results of a case study should be seen as tentative.
Case study research excels at bringing us to an understanding of a complex issue or object and can extend experience
or add strength to what is already known through previous research. Case studies emphasize detailed contextual
analysis of a limited number of events or conditions and their relationships. Researchers have used the case study
research method for many years across a variety of disciplines. Social scientists, in particular, have made wide use of
this qualitative research method to examine contemporary real-life situations and provide the basis for the application
of ideas and extension of methods. Researcher Robert K. Yin defines the case study research method as an empirical
inquiry that investigates a contemporary phenomenon within its real-life context; when the boundaries between
phenomenon and context are not clearly evident; and in which multiple sources of evidence are used.
Many well-known case study researchers such as Robert E. Stake, Helen Simons, and Robert K. Yin have written about
case study research and suggested techniques for organizing and conducting the research successfully. This
introduction to case study research draws upon their work and proposes six steps that should be used:
1. Determine and define the research questions
2. Select the cases and determine data gathering and analysis techniques
3. Prepare to collect the data
4. Collect data in the field
5. Evaluate and analyse the data
6. Prepare the report
D. Pilot studies: The term "pilot study" is used collectively for a number of diverse research
techniques, all of which are conducted on a small scale. Thus, a pilot study is a research project which generates
primary data from consumers, or other subjects of ultimate concern. There are four major categories of pilot studies:
1. Focus group interviews: These interviews are free-flowing interviews with a small group of people. They have a
flexible format and can discuss anything from brand to a product itself. The group typically consists of six to ten
participants and a moderator. The moderator's role is to introduce a topic and to encourage the group to discuss it
among themselves. There are four primary advantages of the focus group: (1) it allows people to discuss their true
feelings and convictions, (2) it is relatively fast, (3) it is easy to execute and very flexible, (4) it is inexpensive.
One disadvantage is that a small group of people, no matter how carefully they are selected, will not be representative.
Specific advantages of focus group interviews can be categorized as follows:
a) Synergism: the combined effort of the group will produce a wider range of information, insights and ideas than will
the accumulation of separately secured responses.
b) Serendipity: an idea may drop out of the blue; the group setting affords the opportunity to develop such an idea to
its full significance.
c) Snowballing: a bandwagon effect occurs. One individual often triggers a chain of responses from the other
participants.
d) Stimulation: respondents want to express their ideas and expose their opinions as the general level of excitement
over the topic increases.
e) Security: the participants are more likely to be candid because they soon realize that the things said are not being
identified with any one individual.
f) Spontaneity: people speak only when they have definite feelings about a subject; not because a question requires an
answer.
g) Specialization: the group interview allows the use of a more highly trained moderator because there are certain
economies of scale when a large number of people are "interviewed" simultaneously.
h) Scientific scrutiny: the group interview can be taped or even videoed for observation. This affords closer scrutiny
and allows the researchers to check for consistency in the interpretations.
i) Structure: the moderator, being part of the group, can control the topics the group discusses.
j) Speed: a number of interviews are, in effect, being conducted at one time.
The ideal size for a focus group is six to ten relatively homogeneous people. This avoids one or two members
intimidating the others, yet keeps the group small enough to allow adequate participation. Homogeneous groups
avoid confusion which might occur if there were too many differing viewpoints. Researchers who wish to collect
information from different groups should conduct several different focus groups.
The sessions should be as relaxed and natural as possible. The moderator's job is to develop a rapport with the group
and to promote interaction among its members. The discussion may start out general, but the moderator should be able
to focus it on specific topics.
2. Interactive Media and online Focus Group: When a person uses the Internet, he or she interacts with a computer. It
is an interactive medium because the user clicks a command and the computer responds. The use of the Internet for
qualitative exploratory research is growing rapidly. The term online focus group refers to qualitative research where a
group of individuals provide unstructured comments by keyboarding their remarks into a computer connected to the
Internet. The group participants either keyboard their remarks during a chat room format or when they are alone at
their computers. Because respondents enter their comments into the computer, transcripts of verbatim responses are
available immediately after the group session. Online groups can be quick and cost-efficient. However, because
there is less interaction between participants, group synergy and snowballing of ideas can suffer.
Research companies often set up a private chat room on their company Web sites for focus group interviews.
Participants in these chat rooms feel their anonymity is very secure. Often they will make statements or ask questions
they would never address under other circumstances. This can be a major advantage for a company investigating
sensitive or embarrassing issues.
Many online focus groups using the chat room format arrange for a sample of participants to be online at the same time,
typically for 60 to 90 minutes. Because participants do not have to be together in the same room at a research
facility, the number of participants in online focus groups can be much larger than traditional focus groups. A problem
with online focus groups is that the moderator cannot see body language and facial expressions (bewilderment,
excitement, interest, etc.) to interpret how people are reacting. Also, the moderator's ability to probe and ask additional
questions on the spot is reduced in online focus groups, especially those in which participants are not simultaneously
involved. Research that requires touch, such as handling a new easy-opening packaging design, or taste experiences,
cannot be performed online.
3. Projective techniques: Individuals may be more likely to give a true answer if the question is disguised. If
respondents are presented with unstructured and ambiguous stimuli and are allowed considerable freedom to
respond, they are more likely to express their true feelings.
A projective technique is an indirect means of questioning that enables respondents to "project their beliefs onto a third
party." Thus, the respondents are allowed to express emotions and opinions that would normally be hidden from
others and even hidden from themselves. Common techniques are as follows:
a) Word association: The subject is presented with a list of words, one at a time, and asked to respond with the first
word that comes to mind. Both verbal and non-verbal responses are recorded. Word association should reveal each
individual's true feelings about the subject. Interpreting the results is difficult; the researcher should avoid subjective
interpretations and should consider both what the subject said and did not say (e.g., hesitations).
4. Depth interviews: Depth interviews are similar to the client interviews of a clinical psychiatrist. The researcher
asks many questions and probes for additional elaboration after the subject answers; the subject matter is usually
disguised.
Depth interviews have lost their popularity recently because they are time-consuming and expensive as they require
the services of a skilled interviewer.
Limitations
The following are some of the limitations of exploratory research design:
Most exploratory techniques are qualitative, and the interpretation of their results is judgmental; thus, they
cannot take the place of quantitative, conclusive research.
Because of certain problems, such as interpreter bias or sample size, exploratory findings should be treated as
preliminary. The major benefit of exploratory research is that it generates insights and clarifies the business
problems for testing in future research.
If the findings of exploratory research are very negative, then no further research should probably be conducted.
However, the researcher should proceed with caution because there is a possibility that a potentially good idea
could be rejected because of unfavorable results at the exploratory stage.
In other situations, when everything looks positive in the exploratory stage, there is a temptation to market the
product without further research. In this situation, business managers should determine the benefits of further
information versus the cost of additional research. When a major commitment of resources is involved, it is often
well worth conducting a quantitative study.
What is sampling?
Sampling is the act, process, or technique of selecting a suitable sample, or a representative part of a population for the
purpose of determining parameters or characteristics of the whole population.
What is the purpose of sampling? To draw conclusions about populations from samples, we must use inferential
statistics which enables us to determine a population's characteristics by directly observing only a portion (or sample)
of the population. We obtain a sample rather than a complete enumeration (a census) of the population for many
reasons. Obviously, it is cheaper to observe a part rather than the whole, but we should prepare ourselves to cope with
the dangers of using samples. Some sampling methods are better than others, but all may yield samples that are inaccurate and unreliable.
We will learn how to minimize these dangers, but some potential error is the price we must pay for the convenience
and savings the samples provide.
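The idea of inferring a population's characteristics from only a portion of it can be illustrated with a minimal Python sketch. The income figures below are entirely invented for illustration; the point is only that a small sample yields an estimate close to, but not identical to, the census value:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: 10,000 monthly incomes (invented values)
population = [random.gauss(20000, 4000) for _ in range(10_000)]

# Observe only a 2% sample instead of conducting a full census
sample = random.sample(population, 200)

pop_mean = statistics.mean(population)
sample_mean = statistics.mean(sample)

# The gap between the two figures is the sampling error the text warns about
print(f"population mean: {pop_mean:.0f}")
print(f"sample estimate: {sample_mean:.0f}")
```

The estimate is cheap to obtain, and the price paid is a small, quantifiable error.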
Why sampling?
One of the decisions to be made by a researcher in conducting a survey is whether to go for a census or a sample
survey. We obtain a sample rather than a complete enumeration (a census) of the population for many reasons. The
most important considerations for this are: cost, size of the population, accuracy of data, accessibility of population,
timeliness, and destructive observations.
The disadvantages of sampling are few, but the researcher must be cautious. These are risk, lack of
representativeness and insufficient sample size, each of which can cause errors. If the researcher does not pay attention
to these flaws, they may invalidate the results.
1) Risk: Using a sample from a population and drawing inferences about the entire population involves risk. In other
words the risk results from dealing with a part of a population. If the risk is not acceptable in seeking a solution to a
problem then a census must be conducted.
2) Lack of representativeness: Determining the representativeness of the sample is the researcher's greatest problem.
By definition, a 'sample' is a representative part of an entire population. It is necessary to obtain a sample that meets
the requirement of representativeness, otherwise the sample will be biased. The inferences drawn from
non-representative samples will be misleading and potentially dangerous.
3) Insufficient sample size: The other significant problem in sampling is to determine the size of the sample. The size
required for a valid sample depends on several factors, such as the extent of risk the researcher is willing to accept
and the characteristics of the population itself.
2. Sampling frames
A sampling frame can be one of two things: either a list of all members of a population, or a method of selecting any
member of the population. The term general population refers to everybody in a particular geographical area.
Common sampling frames for the general population are electoral rolls, street directories, telephone directories, and
customer lists from utilities which are used by almost all households: water, electricity, sewerage, and so on.
It is best to use the list that is most accurate, most complete, and most up to date. This differs from country to country.
In some countries, the best lists are of households, in other countries, they are of people. For most surveys, a list of
households is more useful than a list of people. Another commonly used sampling frame (which is not recommended
for sampling people) is a map.
Samples
A sample is a part of the population from which it was drawn. Survey research is based on sampling, which involves
getting information from only some members of the population.
If information is obtained from the whole population, it is not a sample, but a census. Some surveys, based on very
small populations (such as all members of an organization) in fact are censuses and not sample surveys. When you do
a census, the techniques given in this book still apply, but there is no sampling error - as long as the whole group
participates in the census.
Samples can be drawn in several different ways, e.g. probability samples, quota samples, purposive samples etc.
Sample size
Contrary to popular opinion, sample sizes do not have to be particularly large. Their size is not, as commonly
thought, determined by the size of the population they are to represent. The U.S., for example, contains more than
300 million people, yet the General Social Survey, a highly valued interview survey of the U.S.
population, is based on a sample of around 1,500 cases. Political and attitudinal polls, such as the California Poll,
typically draw a sample of around 1000, and some local polls obtain samples of 500 or less. The determiners of sample
size are the variability within the population and the degree of accuracy of population estimates the researcher is
willing to accept (pay for). If you are, for example, interested in the gender distribution of crime victims, the sample
could be relatively small, since there is limited variability with only two possibilities (male and female), compared to
the size of the sample needed to make a statement of the same accuracy about the ethnicity of crime victims (Germans,
Italians, Irish, Poles, Canadians, etc.). To make a statement about the gender makeup of crime victims that would be
within 3% of the population parameter, with 95% confidence, would require a sample of about 1,200, while a
similar statement about the ethnic makeup of victims would require a much larger sample due to the greater variability.
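As a rough illustration of how variability and desired accuracy drive sample size, the familiar textbook formula for estimating a proportion, n = z²p(1−p)/e², can be sketched as below. Note it yields about 1,068 for a ±3% margin at 95% confidence; published figures such as the 1,200 quoted above typically include an extra allowance (for non-response, for example):

```python
import math

def sample_size(margin_of_error: float, p: float = 0.5, z: float = 1.96) -> int:
    """Required sample size for estimating a proportion at ~95% confidence.

    p = 0.5 is the worst case (maximum variability), matching the gender
    example above where only two outcomes are possible.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# Within 3 percentage points at 95% confidence:
print(sample_size(0.03))  # -> 1068
```

Widening the acceptable margin to ±5% drops the requirement to 385, which shows why accuracy, not population size, is the main cost driver.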
3) Sampling method
The difference between non-probability and probability sampling is that non-probability sampling does not involve
random selection and probability sampling does. Does that mean that non-probability samples aren't representative of
the population? Not necessarily. But it does mean that non-probability samples cannot depend upon the rationale of
probability theory. At least with a probabilistic sample, we know the odds or probability that we have represented the
population well. We are able to estimate confidence intervals for the statistic. With non-probability samples, we may or
may not represent the population well, and it will often be hard for us to know how well we've done so. In general,
researchers prefer probabilistic or random sampling methods over non-probabilistic ones, and consider them to be
more accurate and rigorous. However, in applied social research there may be circumstances where it is not feasible,
practical or theoretically sensible to do random sampling. Here, we consider a wide range of non-probabilistic
alternatives.
Probability sampling, or random sampling, is a sampling technique in which the probability of getting any particular
sample may be calculated. Non-probability sampling does not meet this criterion and should be used with caution.
Non-probability sampling techniques cannot be used to infer from the sample to the general population. Any
generalizations obtained from a non-probability study must be filtered through one's knowledge of the topic being
studied. Performing non-probability sampling is, however, considerably less expensive than probability sampling.
In Probability sampling, all items have some chance of selection that can be calculated. Probability sampling
technique ensures that bias is not introduced regarding who is included in the survey.
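For instance, in simple random sampling, the most basic probability design, every unit in the frame has the same calculable chance of selection, n/N. A minimal sketch, using a hypothetical frame of 1,000 customer IDs:

```python
import random

random.seed(7)  # reproducible illustration

# A hypothetical sampling frame of 1,000 customer IDs
frame = list(range(1, 1001))

# Simple random sampling: every possible sample of size 50 is equally
# likely, so each unit's selection probability is n/N = 50/1000 = 5%
sample = random.sample(frame, 50)

print(len(sample), len(set(sample)))  # 50 distinct units, no repeats
```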
2) Systematic sampling
Systematic sampling, sometimes called interval-sampling, means that there is a gap, or interval, between each
selection. This method is often used in industry, where an item is selected for testing from a production line (say,
every fifteen minutes) to ensure that machines and equipment are working to specification.
Alternatively, the manufacturer might decide to select every 20th item on a production line to test for defects and
quality. This technique requires the first item to be selected at random as a starting point for testing and, thereafter,
every 20th item is chosen.
This technique could also be used when questioning people in a sample survey. A market researcher might select
every 10th person who enters a particular store, after selecting a person at random as a starting point; or interview
occupants of every 5th house in a street, after selecting a house at random as a starting point.
It may be that a researcher wants to select a fixed size sample. In this case, it is first necessary to know the whole
population size from which the sample is being selected. The appropriate sampling interval, I, is then calculated by
dividing population size, N, by required sample size, n, as follows: I = N/n
Example:-If a systematic sample of 500 students were to be carried out in a university with an enrolled population of
10,000, the sampling interval would be: I = N/n = 10,000/500 =20
Note: if I is not a whole number, then it is rounded to the nearest whole number.
All students would be assigned sequential numbers. The starting point would be chosen by selecting a random number
between 1 and 20. If this number was 9, then the 9th student on the list of students would be selected along with every
following 20th student. The sample of students would be those corresponding to student numbers 9, 29, 49, 69, ........
9929, 9949, 9969 and 9989.
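The selection procedure in the worked example above can be sketched as follows; the random start is drawn between 1 and the interval, and every Ith student thereafter is taken:

```python
import random

random.seed(1)  # reproducible illustration

# Systematic sample of 500 students from an enrolment of 10,000,
# mirroring the worked example above
N, n = 10_000, 500
interval = N // n                    # I = N/n = 20
start = random.randint(1, interval)  # random starting point between 1 and 20

# The random start plus every 20th student number after it
sample = list(range(start, N + 1, interval))

print(interval, start, len(sample), sample[:4])
```

Whatever the random start, exactly 500 students are selected, spread evenly across the whole list.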
The advantage of systematic sampling is that it is simpler to select one random number and then every Ith (e.g.
20th) member on the list than to select as many random numbers as the sample size. It also gives a good spread right
across the population. A disadvantage is that you may need a list to start with, if you wish to know your sample
size and calculate your sampling interval.
3) Stratified sampling
A general problem with random sampling is that you could, by chance, miss out a particular group in the sample.
However, if you form the population into groups, and sample from each group, you can make sure the sample is
representative.
In stratified sampling, the population is divided into groups called strata. A sample is then drawn from within these
strata. Some examples of strata commonly used by research organisations are states, age and sex. Other strata may
be religion, academic ability or marital status.
1) Bases of stratification
Intuitively, it seems clear that the best basis would be the frequency distribution of the principal variable being
studied. For example, in a study of coffee consumption we may believe that behavioural patterns will vary according
to whether a particular respondent drinks a lot of coffee, only a moderate amount of coffee or drinks coffee very
occasionally. Thus we may consider that to stratify according to "heavy users", "moderate users" and "light users"
would provide an optimum stratification. However, two difficulties may arise in attempting to proceed in this way.
First, there is usually interest in many variables, not just one, and stratification on the basis of one may not provide
the best stratification for the others. Secondly, even if one survey variable is of primary importance, current data on
its frequency are unlikely to be available. However, the latter complaint can be attended to, since it is possible to
stratify after the data collection has been completed and before the analysis is undertaken. The only practicable
approach is to create strata on the basis of variables for which information is, or can be made, available and that are
believed to be highly correlated with the principal survey characteristics of interest, e.g. age, socio-economic group,
sex, farm size, firm size, etc.
In general, it is desirable to make up strata in such a way that the sampling units within strata are as similar as
possible. In this way a relatively limited sample within each stratum will provide a generally precise estimate of the
mean of that stratum. Similarly it is important to maximise differences in stratum means for the key survey variables
of interest. This is desirable since stratification has the effect of removing differences between stratum means from
the sampling error.
Total variance within a population has two components: between-strata variance and within-strata variance.
Stratification removes the between-strata component from the calculation of the standard error, since differences
between stratum means no longer contribute to sampling error. Suppose, for example, we stratified students in a
particular university by subject specialty (marketing, engineering, chemistry, and so on). For illustration, consider
just two strata with the following numbers of students:
Stratum A: 10,000
Stratum B: 90,000
If the budget is fixed at ₹3,000 and we know the cost per observation is ₹6 in each stratum, the available total
sample size is 500. The most common approach would be to sample the same proportion of items in each stratum.
This is termed proportional allocation. In this example, the overall sampling fraction is 500/100,000 = 0.5%, giving
50 observations from stratum A and 450 from stratum B.
The major practical advantage of proportional allocation is that it leads to estimates which are computationally
simple. Where proportional sampling has been employed we do not need to re-weight the means of the individual
strata when calculating the overall mean, which coincides with the stratified estimate:
x̄st = W1x̄1 + W2x̄2 + W3x̄3 + ... + Wkx̄k
where Wi is the proportion of the population falling in stratum i and x̄i is the sample mean within stratum i.
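The proportional allocation and the weighted (stratified) mean can be sketched as follows; the stratum means used at the end are invented purely to show the arithmetic:

```python
# Proportional allocation across the two strata in the example above:
# the overall sampling fraction n/N is applied to every stratum, and the
# stratified mean is the weighted sum of the stratum means.
strata_sizes = {"A": 10_000, "B": 90_000}   # stratum population sizes
n = 500                                      # total sample (budget 3000 / cost 6)
N = sum(strata_sizes.values())

allocation = {s: round(n * size / N) for s, size in strata_sizes.items()}
print(allocation)  # -> {'A': 50, 'B': 450}

# Stratified estimate of the mean: x̄st = Σ Wi·x̄i
# (the stratum means below are illustrative numbers, not from the text)
stratum_means = {"A": 120.0, "B": 80.0}
weights = {s: size / N for s, size in strata_sizes.items()}
x_bar_st = sum(weights[s] * stratum_means[s] for s in strata_sizes)
print(x_bar_st)  # -> 84.0
```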
Optimum allocation: Proportional allocation is advisable when all we know of the strata is their sizes. In situations
where the standard deviations of the strata are known it may be advantageous to make a disproportionate allocation.
4) Cluster sampling
It is sometimes expensive to spread your sample across the population as a whole. For example, travel can become
expensive if you are using interviewers to travel between people spread all over the country. To reduce costs you may
choose a cluster sampling technique.
Cluster sampling divides the population into groups, or clusters. A number of clusters are selected randomly to
represent the population, and then all units within selected clusters are included in the sample. No units from non-
selected clusters are included in the sample. They are represented by those from selected clusters. This differs from
stratified sampling, where some units are selected from each group.
Examples of clusters may be factories, schools and geographic areas such as electoral sub-divisions. The selected
clusters are then used to represent the population.
Example:- Suppose an organisation wishes to find out which sports 11 Std students are participating in across
Maharashtra. It would be too costly and take too long to survey every student, or even some students from every
school. Instead, 100 schools are randomly selected from all over Maharashtra.
These schools are considered to be clusters. Then, every 11 Std student in these 100 schools is surveyed. In effect,
students in the sample of 100 schools represent all 11 Std students in Maharashtra.
Cluster sampling has several advantages: reduced costs, simplified fieldwork and more convenient
administration. Instead of having a sample scattered over the entire coverage area, the sample is more localised in
relatively few centres (clusters).
Cluster sampling's disadvantage is that less accurate results are often obtained due to higher sampling error than for
simple random sampling with the same sample size. In the above example, you might expect to get more accurate
estimates from randomly selecting students across all schools than from randomly selecting 100 schools and taking
every student in those chosen.
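The one-stage cluster design in the schools example above, where whole schools are selected and then every student within them is surveyed, can be sketched as follows (the school frame is fabricated):

```python
import random

random.seed(3)  # reproducible illustration

# Hypothetical frame: 500 schools, each holding a roster of Std 11 students
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(random.randint(40, 120))]
           for i in range(500)}

# Stage 1: randomly select 100 whole clusters (schools) ...
chosen = random.sample(list(schools), 100)

# ... then survey EVERY student in each chosen school
# (one-stage cluster sampling, as in the example above)
sample = [student for school in chosen for student in schools[school]]

print(len(chosen), len(sample))
```

The fieldwork is concentrated in 100 sites rather than scattered across all 500, which is exactly the cost advantage the text describes.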
5) Multi-stage sampling
Multi-stage sampling is like cluster sampling, but involves selecting a sample within each chosen cluster, rather than
including all units in the cluster. Thus, multi-stage sampling involves selecting a sample in at least two stages. In the
first stage, large groups or clusters are selected. These clusters are designed to contain more population units than are
required for the final sample.
In the second stage, population units are chosen from selected clusters to derive a final sample. If more than two stages
are used, the process of choosing population units within clusters continues until the final sample is achieved.
Example:- An example of multi-stage sampling is where, firstly, electoral sub-divisions (clusters) are sampled from a
city or state. Secondly, blocks of houses are selected from within the electoral sub-divisions and, thirdly, individual
houses are selected from within the selected blocks of houses.
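The staged selection just described (sub-divisions, then blocks of houses, then individual houses) can be sketched as follows, using a fabricated frame:

```python
import random

random.seed(9)  # reproducible illustration

# Hypothetical frame: 50 electoral sub-divisions, each with 20 blocks
# of 30 houses
city = {f"subdiv_{i}": {f"block_{i}_{j}": [f"house_{i}_{j}_{k}"
                                           for k in range(30)]
                        for j in range(20)}
        for i in range(50)}

# Stage 1: sample sub-divisions (the clusters)
subdivs = random.sample(list(city), 10)

houses = []
for sd in subdivs:
    # Stage 2: sample blocks within each chosen sub-division
    for block in random.sample(list(city[sd]), 4):
        # Stage 3: sample houses within each chosen block
        houses.extend(random.sample(city[sd][block], 5))

print(len(houses))  # 10 sub-divisions x 4 blocks x 5 houses -> 200
```

Unlike one-stage cluster sampling, only a sample of units is taken within each selected cluster.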
2) Purposive Sampling
In purposive sampling the people (units, elements) in the sample are selected because they are regarded as having
similar characteristics to the people in the designated research population. So, for example, in research investigating
the management skills of owner-managers of small enterprises, the researcher might select some typical
owner-managers to take part in the study. They will not be selected randomly. One advantage of this kind of sample is
that it is usually possible to get a targeted sample together very quickly, and hence cheaply.
All of the methods that follow can be considered subcategories of purposive sampling. We might sample
for specific groups or types of people, as in modal instance, expert, or quota sampling; we might sample for
diversity, as in heterogeneity sampling; or we might rely on referrals, as in snowball sampling.
b) Expert Sampling
Expert sampling involves the assembling of a sample of persons with known or demonstrable experience and expertise
in some area. Often, we convene such a sample under the auspices of a "panel of experts." There are actually two
reasons you might do expert sampling. First, because it would be the best way to elicit the views of persons who have
specific expertise. In this case, expert sampling is essentially just a specific sub case of purposive sampling. But the
other reason you might use expert sampling is to provide evidence for the validity of another sampling approach
you've chosen. For instance, let's say you do modal instance sampling and are concerned that the criteria you used for
defining the modal instance are subject to criticism. You might convene an expert panel consisting of persons with
acknowledged experience and insight into that field or topic and ask them to examine your modal definitions and
comment on their appropriateness and validity. The advantage of doing this is that you aren't out on your own trying
to defend your decisions -- you have some acknowledged experts to back you. The disadvantage is that even the
experts can be, and often are, wrong.
c) Quota Sampling
In quota sampling, you select people non-randomly according to some fixed quota. There are two types of quota
sampling: proportional and non-proportional.
i) In proportional quota sampling you want to represent the major characteristics of the population by sampling a proportional amount of each. For instance, if you know the population has 40% women and 60% men, and you want a total sample size of 100, you will continue sampling until you reach those percentages and then stop. So, if you already have the 40 women for your sample but not the 60 men, you will continue to sample men, and even if legitimate women respondents come along, you will not sample them because you have already "met your quota". The problem here (as in much purposive sampling) is that you have to decide the specific characteristics on which to base the quota. Will it be by gender, age, education, race, religion, etc.?
ii) Non-proportional quota sampling is a bit less restrictive. In this method, you specify the minimum number of sampled units you want in each category. Here, you are not concerned with having numbers that match the proportions in the population; you simply want enough cases in each category to be able to talk about even small groups in the population.
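The stopping rule for proportional quota sampling can be sketched in Python. This is an illustrative sketch, not a prescribed procedure; the 40/60 split comes from the example above, and the respondent stream and function name are hypothetical:

```python
def quota_sample(respondents, quotas):
    """Accept respondents in arrival order until each category's quota is met.

    respondents: iterable of (id, category) pairs, in the order they arrive.
    quotas: dict mapping category -> required count.
    Returns the list of accepted respondent ids.
    """
    counts = {cat: 0 for cat in quotas}
    accepted = []
    for rid, cat in respondents:
        # Accept only if this category's quota is not yet met.
        if cat in counts and counts[cat] < quotas[cat]:
            counts[cat] += 1
            accepted.append(rid)
        if all(counts[c] >= quotas[c] for c in quotas):
            break  # every quota met; stop sampling
    return accepted

# Proportional quota from the example: 40 women and 60 men in a sample of 100.
stream = [(i, "woman" if i % 2 == 0 else "man") for i in range(300)]
sample = quota_sample(stream, {"woman": 40, "man": 60})
```

Note that once the 40 women are accepted, further women in the stream are skipped while sampling of men continues, exactly as described in the text.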
d) Heterogeneity Sampling
We sample for heterogeneity when we want to include all opinions or views, and we aren't concerned about
representing these views proportionately. Another term for this is sampling for diversity. In many brainstorming or
nominal group processes (including concept mapping), we would use some form of heterogeneity sampling because
our primary interest is in getting a broad spectrum of ideas, not identifying the "average" or "modal instance" ones. In
effect, what we would like to be sampling is not people, but ideas. We imagine that there is a universe of all possible
ideas relevant to some topic and that we want to sample this population, not the population of people who have the
ideas. Clearly, in order to get all of the ideas, and especially the "outlier" or unusual ones, we have to include a broad
and diverse range of participants. Heterogeneity sampling is, in this sense, almost the opposite of modal instance
sampling.
e) Snowball sampling
In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study. You then
ask them to recommend others who they may know who also meet the criteria. Although this method would hardly
lead to representative samples, there are times when it may be the best method available. Snowball sampling is
especially useful when you are trying to reach populations that are inaccessible or hard to find. For instance, if you are
studying the homeless, you are not likely to be able to find good lists of homeless people within a specific
geographical area. However, if you go to that area and identify one or two homeless persons, you may well find that they know who the other homeless people in their vicinity are and how you can find them.
The main sampling methods can be compared in terms of their advantages and disadvantages as follows:

Convenience Sampling: uses those who are willing to volunteer. Advantages: readily available; a large amount of information can be gathered quickly. Disadvantages: cannot extrapolate from the sample to infer about the population; prone to volunteer bias.

Judgement Sampling: a deliberate choice of a sample, the opposite of random. Advantages: good for providing illustrative examples or case studies. Disadvantages: very prone to bias; samples are often small; cannot extrapolate from the sample.

Quota Sampling: the aim is to obtain a sample that is "representative" of the overall population; the population is divided ("stratified") by the most important variables (e.g. income, age, location) and a required quota sample is drawn from each stratum. Advantages: a quick and easy way of obtaining a sample. Disadvantages: not random, so some risk of bias remains; one needs to understand the population to be able to identify the basis of stratification.

Simple Random Sampling: ensures that every member of the population has an equal chance of selection. Advantages: simple to design and interpret; an estimate of the population and of the sampling error can be calculated. Disadvantages: needs a complete and accurate population listing; may not be practical if the sample requires many visits all over the country.

Systematic Sampling: after randomly selecting a starting point from the population between 1 and "n", every nth unit is selected, where n equals the population size divided by the sample size. Advantages: easier to extract the sample than simple random sampling; ensures the sample is spread across the population. Disadvantages: can be costly and time-consuming if the sample is not conveniently located.
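The two probability methods in the comparison above can be sketched in Python. This is a minimal illustration under simplifying assumptions (the population size is an exact multiple of the sample size); the function names are my own:

```python
import random

def simple_random_sample(population, k, seed=None):
    """Every member of the population has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, k)

def systematic_sample(population, k, seed=None):
    """Randomly pick a starting point within the first interval, then take
    every nth unit, where n = population size // sample size."""
    n = len(population) // k              # the sampling interval
    rng = random.Random(seed)
    start = rng.randrange(n)              # random start between 0 and n - 1
    return [population[start + i * n] for i in range(k)]

# A population of 1,000 numbered units, sample size 10 (interval n = 100):
population = list(range(1000))
chosen = systematic_sample(population, 10, seed=1)
```

The systematic sample is guaranteed to be spread evenly across the population (successive selected units are exactly n apart), which is precisely the property the table credits it with.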
a) Measurement: Measurement is the process of observing and recording the observations that are collected as part of research. The recording of the observations may be in terms of numbers or other symbols assigned to characteristics of objects according to certain prescribed rules. The respondent's characteristics may be feelings, attitudes, opinions, etc. For example, you may assign '1' for male and '2' for female respondents. In response to a question on whether he/she is using the ATM provided by a particular bank branch, the respondent may say 'yes' or 'no'. You may wish to assign the number '1' for the response yes and '2' for the response no. We assign numbers to these characteristics for two reasons. First, the numbers facilitate further statistical analysis of the data obtained. Second, numbers facilitate the communication of measurement rules and results. The most important aspect of measurement is the specification of rules for assigning numbers to characteristics. The rules for assigning numbers should be standardised and applied uniformly, and must not change over time or across objects.
b) Scaling: Scaling is the assignment of objects to numbers or semantics according to a rule. In scaling, the objects are text statements, usually statements of attitude, opinion, or feeling. For example, consider a scale locating customers of a bank according to the characteristic "agreement with the satisfactory quality of service provided by the branch". Each customer interviewed may respond with a semantic like 'strongly agree', 'somewhat agree', 'somewhat disagree', or 'strongly disagree'. We may even assign each of the responses a number. For example, we may assign strongly agree as '1', agree as '2', disagree as '3', and strongly disagree as '4'. Therefore, each of the respondents may be assigned 1, 2, 3 or 4.
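The number-assignment rule described above is just a fixed lookup table applied uniformly to every response. A minimal sketch using the codes from the examples in this section (the helper function name is my own):

```python
# Coding schemes from the examples above: strongly agree = 1 ... strongly
# disagree = 4, and male = 1, female = 2.
AGREEMENT_CODES = {
    "strongly agree": 1,
    "somewhat agree": 2,
    "somewhat disagree": 3,
    "strongly disagree": 4,
}

GENDER_CODES = {"male": 1, "female": 2}

def code_response(response, scheme):
    """Apply a fixed, uniform coding rule to a verbal response."""
    return scheme[response.strip().lower()]

responses = ["Strongly agree", "somewhat disagree", "Strongly disagree"]
coded = [code_response(r, AGREEMENT_CODES) for r in responses]
# coded == [1, 3, 4]
```

Because the scheme is a single dictionary applied to every response, the rule is standardised and cannot drift over time or across objects, which is the requirement stated in the text.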
Issues in measurement
When a researcher is interested in measuring the attitudes, feelings or opinions of respondents he/she should be clear
about the following:
a) What is to be measured?
b) Who is to be measured?
c) The choices available in data collection techniques
The first issue that the researcher must consider is 'what is to be measured'? Closely related to this is the level of measurement to be used; four levels of measurement scale are commonly distinguished:
1. Nominal
2. Ordinal
3. Interval
4. Ratio
1) Nominal scales
This, the crudest of measurement scales, classifies individuals, companies, products, brands or other entities into
categories where no order is implied. Indeed it is often referred to as a categorical scale. It is a system of classification
and does not place the entity along a continuum. It involves a simple count of the frequency of the cases assigned to
the various categories, and if desired numbers can be nominally assigned to label each category as in the example
below:
An example of a nominal scale
Which of the following food items do you tend to buy at least once per month? (Please tick)
The numbers have no arithmetic properties and act only as labels. The only measure of average which can be used is the mode, because this is simply a set of frequency counts. Hypothesis tests can be carried out on data collected in the nominal form; the most common of these is the chi-square test.
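Because nominal data support only frequency counts, the analysis reduces to tallying categories and reporting the mode. A minimal sketch with hypothetical grocery responses (the food items are illustrative, not from the original questionnaire):

```python
from collections import Counter

# Hypothetical nominal responses: the food item each respondent ticked.
responses = ["bread", "rice", "bread", "milk", "bread", "rice"]

counts = Counter(responses)          # frequency count per category
mode = counts.most_common(1)[0][0]   # the only legitimate "average" for nominal data
# mode == "bread"; counts["bread"] == 3
```

Note that computing a mean of the category codes would be meaningless here: the numbers assigned to nominal categories are labels with no arithmetic properties.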
2) Ordinal scales
Ordinal scales involve the ranking of individuals, attitudes or items along the continuum of the characteristic being
scaled. For example, if a researcher asked farmers to rank 5 brands of pesticide in order of preference he/she might
obtain responses like those in table below.
An example of an ordinal scale used to determine farmers' preferences among 5 brands of pesticide.
Order of preference Brand
1 Rambo
2 Harpic
3 DDT
4 Bagyone
5 Rat kill
From such a table the researcher knows the order of preference but nothing about how much more one brand is preferred to another; that is, there is no information about the interval between any two brands. All of the
information a nominal scale would have given is available from an ordinal scale. In addition, positional statistics such
as the median, quartile and percentile can be determined.
It is possible to test for order correlation with ranked data. The two main methods are Spearman's Ranked Correlation
Coefficient and Kendall's Coefficient of Concordance. Using either procedure one can, for example, ascertain the
degree to which two or more survey respondents agree in their ranking of a set of items. Consider again the ranking of
pesticides example in given figure. The researcher might wish to measure similarities and differences in the rankings
of pesticide brands according to whether the respondents' farm enterprises were classified as "arable" or "mixed" (a
combination of crops and livestock). The resultant coefficient takes a value in the range 0 to 1. A zero would mean that
there was no agreement between the two groups, and 1 would indicate total agreement. It is more likely that an answer
somewhere between these two extremes would be found.
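Spearman's ranked correlation can be computed directly from the two sets of ranks using the standard formula rho = 1 - 6 * sum(d^2) / (n(n^2 - 1)), where d is the difference between the ranks given to the same item. Note that Spearman's coefficient ranges from -1 to +1; the 0-to-1 range described above applies to Kendall's coefficient of concordance. A sketch using hypothetical rankings of the five pesticide brands by the two farm-enterprise groups:

```python
def spearman_rho(rank_x, rank_y):
    """Spearman's rank correlation for two rankings of the same items (no ties).

    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the difference
    between the two ranks given to item i.
    """
    n = len(rank_x)
    d_squared = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical ranks assigned to the five brands by each group of farmers.
arable = [1, 2, 3, 4, 5]   # arable farmers' order of preference
mixed  = [2, 1, 3, 5, 4]   # mixed farmers' order of preference
rho = spearman_rho(arable, mixed)   # close to +1 => strong agreement
```

Here the squared rank differences are 1, 1, 0, 1, 1, giving rho = 1 - 6(4)/120 = 0.8, i.e. substantial but not total agreement between the two groups.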
The only other permissible hypothesis testing procedures are the runs test and sign test. The runs test (also known as the Wald-Wolfowitz test) is used to determine whether a sequence of binomial data - meaning it can take only one of
two possible values e.g. African/non-African, yes/no, male/female - is random or contains systematic 'runs' of one or
other value. Sign tests are employed when the objective is to determine whether there is a significant difference
between matched pairs of data. The sign test tells the analyst if the number of positive differences in ranking is
approximately equal to the number of negative rankings, in which case the distribution of rankings is random, i.e.
apparent differences are not significant. The test takes into account only the direction of differences and ignores their
magnitude and hence it is compatible with ordinal data.
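The sign test logic can be sketched directly: under the null hypothesis each non-tied pair is equally likely to differ in either direction, so the count of positive differences follows a Binomial(n, 0.5) distribution. A minimal two-sided version (the paired data below are hypothetical):

```python
from math import comb

def sign_test_p(pairs):
    """Two-sided sign test for matched pairs.

    Only the direction of each difference is used; ties are dropped.
    Under H0 the number of positive differences is Binomial(n, 0.5).
    """
    diffs = [a - b for a, b in pairs if a != b]   # drop tied pairs
    n = len(diffs)
    k = sum(1 for d in diffs if d > 0)            # count of positive differences
    # Probability of a result at least as extreme as k, in either direction.
    extreme = min(k, n - k)
    p = sum(comb(n, i) for i in range(extreme + 1)) * 2 / 2 ** n
    return min(p, 1.0)

# Hypothetical matched rankings (e.g. before/after scores for eight subjects).
pairs = [(3, 1), (4, 2), (5, 2), (2, 1), (4, 3), (3, 2), (5, 4), (4, 1)]
p_value = sign_test_p(pairs)   # a small p suggests the differences are not random
```

As the text says, the magnitudes of the differences never enter the calculation, only their signs, which is why the test is compatible with ordinal data.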
3) Interval scales
An interval scale has equal units of measurement, so that the distances between scale positions, and not merely their order, can be interpreted. Itemised rating scales of the kind illustrated below are commonly treated as yielding interval data.
(a) Please circle the number that best reflects your view of Balkan Olives on each attribute:
Succulence            5 4 3 2 1
Fresh tasting         5 4 3 2 1
Good value            5 4 3 2 1
Attractively packaged 5 4 3 2 1
(b) Please indicate your views on Balkan Olives by ticking the appropriate responses below:
Succulent
Freshness
Attractiveness of packaging
Most of the common statistical methods of analysis require only interval scales in order that they might be used. These
are not recounted here because they are so common and can be found in virtually all basic texts on statistics.
4) Ratio scales
The highest level of measurement is a ratio scale. This has the properties of an interval scale together with a fixed
origin or zero point. Examples of variables which are ratio scaled include weights, lengths and times. Ratio scales
permit the researcher to compare both differences in scores and the relative magnitude of scores. For instance, the difference between 15 kg and 20 kg is the same as the difference between 5 kg and 10 kg, and 20 kg is twice as heavy as 10 kg.
Measurement error
In principle, every operation of a survey is a potential source of measurement error. Some examples of causes of
measurement error are non-response, badly designed questionnaires, respondent bias and processing errors.
Measurement errors can be grouped into two main types: systematic errors and random errors. Systematic error
(called bias) makes survey results unrepresentative of the target population by distorting the survey estimates in one
direction. For example, if the target population is the entire population in a country but the sampling frame is just the
urban population, then the survey results will not be representative of the target population due to systematic bias in
the sampling frame. On the other hand, random error can distort the results on any given occasion but tends to
balance out on average. Some of the types of measurement error are outlined below:
1. Failure to identify the target population
Failure to identify the target population can arise from the use of an inadequate sampling frame, imprecise definition
of concepts, and poor coverage rules. Problems can also arise if the target population and survey population do not
match very well. Failure to identify and adequately capture the target population can be a significant problem for
informal sector surveys. While establishment and population censuses allow for the identification of the target
population, it is important to ensure that the sample is selected as soon as possible after the census is taken so as to
improve the coverage of the survey population.
2. Non-response bias
Non-respondents may differ from respondents in relation to the attributes/variables being measured. Non-response
can be total (where none of the questions were answered) or partial (where some questions may be unanswered owing
to memory problems, inability to answer, etc.). To improve response rates, care should be taken in training
interviewers, assuring the respondent of confidentiality, motivating him or her to cooperate, and revisiting or calling
back if the respondent has been previously unavailable. 'Call backs' are successful in reducing non-response but can
be expensive. It is also important to ensure that the person who has the information required can be contacted by the
interviewer; that the data required are available and that an adequate follow up strategy is in place.
3. Questionnaire design
The content and wording of the questionnaire may be misleading and the layout of the questionnaire may make it
difficult to accurately record responses. Questions should not be misleading or ambiguous, and should be directly
relevant to the objectives of the survey. In order to reduce measurement error relating to questionnaire design, it is
important to ensure that the questionnaire:
can be completed in a reasonable amount of time;
can be properly administered by the interviewer;
uses language that is readily understood by both the interviewer and the respondent; and
can be easily processed.
Scaling
In research we quite often face measurement problems (since we want a valid measurement but may not obtain it),
especially when the concepts to be measured are complex and abstract and we do not possess the standardised
measurement tools. Alternatively, we can say that while measuring attitudes and opinions, we face the problem of
their valid measurement. Similar problem may be faced by a researcher, of course in a lesser degree, while measuring
physical or institutional concepts. As such we should study some procedures which may enable us to measure
abstract concepts more accurately. This brings us to the study of scaling techniques.
Meaning of Scaling
Scaling describes the procedures of assigning numbers to various degrees of opinion, attitude and other concepts. This
can be done in two ways viz., (i) making a judgement about some characteristic of an individual and then placing him
directly on a scale that has been defined in terms of that characteristic and (ii) constructing questionnaires in such a
way that the score of an individual's responses assigns him a place on a scale. It may be stated here that a scale is a
continuum, consisting of the highest point (in terms of some characteristic e.g., preference, favourableness, etc.) and
the lowest point along with several intermediate points between these two extreme points. These scale-point positions
are so related to each other that when the first point happens to be the highest point, the second point indicates a
higher degree in terms of a given characteristic as compared to the third point and the third point indicates a higher
degree as compared to the fourth and so on. Numbers for measuring the distinctions of degree in the
attitudes/opinions are, thus, assigned to individuals corresponding to their scale-positions. All this is better
understood when we talk about scaling technique(s). Hence the term 'scaling' is applied to the procedures for attempting to determine quantitative measures of subjective abstract concepts. Scaling has been defined as a "procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question."
Classification of scales
The number assigning procedures or the scaling procedures may be broadly classified on one or more of the following
bases: (a) subject orientation; (b) response form; (c) degree of subjectivity; (d) scale properties; (e) number of
dimensions and (f) scale construction techniques.
We take up each of these separately.
(a) Subject orientation: Under it a scale may be designed to measure characteristics of the respondent who completes it
or to judge the stimulus object which is presented to the respondent. In respect of the former, we presume that the
stimuli presented are sufficiently homogeneous so that the between stimuli variation is small as compared to the
variation among respondents. In the latter approach, we ask the respondent to judge some specific object in terms of
one or more dimensions and we presume that the between-respondent variation will be small as compared to the
variation among the different stimuli presented to respondents for judging.
I) Comparative scales
a) Paired comparison: It is sometimes the case that researchers wish to find out which are the most important factors
in determining the demand for a product. Conversely they may wish to know which are the most important factors
acting to prevent the widespread adoption of a product. Take, for example, the very poor farmer response to the first
design of an animal-drawn mould board plough. A combination of exploratory research and shrewd observation
suggested that the following factors played a role in the shaping of the attitudes of those farmers who feel negatively
towards the design:
Does not ridge
Does not work for inter-cropping
Far too expensive
New technology too risky
Too difficult to carry.
Suppose the organisation responsible wants to know which factor is foremost in the farmer's mind. It may well be the case that if those factors that are most important to the farmer can be addressed, then the others, being of a relatively minor nature, will cease to prevent widespread adoption. The alternatives are to abandon the product's re-development, or to completely re-design it, which is not only expensive and time-consuming but may well be subject to a new set of objections.
The process of rank ordering the objections from most to least important is best approached through the questioning
technique known as 'paired comparison'. Each of the objections is paired by the researcher so that with 5 factors, as in
this example, there are 10 pairs.
In 'paired comparisons' every factor has to be paired with every other factor in turn. However, only one pair is ever put
to the farmer at any one time.
The question might be put as follows:
Which of the following was the more important in making you decide not to buy the plough?
In most cases the question, and the alternatives, would be put to the farmer verbally. He/she then indicates which of
the two was the more important and the researcher ticks the box on his questionnaire. The question is repeated with a
second set of factors and the appropriate box ticked again. This process continues until all possible combinations are
exhausted, in this case 10 pairs. It is good practice to mix the pairs of factors so that there is no systematic bias. The
researcher should try to ensure that any particular factor is sometimes the first of the pair to be mentioned and
sometimes the second. The researcher would never, for example, take the first factor (on this occasion 'Does not ridge')
and systematically compare it to each of the others in succession. That is likely to cause systematic bias.
Below labels have been given to the factors so that the worked example will be easier to understand. The letters A - E
have been allocated as follows:
A preference matrix

      A    B    C    D    E
 D   26   24   32  100  102
 E   20   34   76   98  100

If the grid is carefully read, it can be seen that the rank order of the factors is:

Most important  E  Too difficult to carry
                B  Too expensive
It can be seen that it is more important for designers to concentrate on improving transportability and, if possible, to
give it an inter-cropping capability rather than focusing on its ridging capabilities (remember that the example is
entirely hypothetical).
One major advantage of this type of questioning is that whilst it is possible to obtain a measure of the order of
importance of five or more factors from the respondent, he is never asked to think about more than two factors at any
one time. This is especially useful when dealing with illiterate farmers. Having said that, the researcher has to be
careful not to present too many pairs of factors to the farmer during the interview. If he does, he will find that the
farmer will quickly get tired and/or bored. It is as well to remember the formula of n(n - 1)/2. For ten factors, brands
or product attributes this would give 45 pairs. Clearly the farmer should not be asked to subject himself to having the
same question put to him 45 times. For practical purposes, six factors is possibly the limit, giving 15 pairs.
It should be clear from the procedures described in these notes that the paired comparison scale gives ordinal data.
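The pairing logic can be sketched in Python: itertools.combinations generates every pair exactly once, giving the n(n - 1)/2 pairs mentioned above, and shuffling both the order of the pairs and the order within each pair guards against the systematic bias the text warns about. The factor names are those from the plough example:

```python
import itertools
import random

# The five factors from the plough example:
factors = ["Does not ridge",
           "Does not work for inter-cropping",
           "Far too expensive",
           "New technology too risky",
           "Too difficult to carry"]

# Every factor paired with every other factor exactly once: n(n - 1)/2 pairs.
pairs = list(itertools.combinations(factors, 2))

# Shuffle both the order of the pairs and the order within each pair, so that
# no factor is systematically mentioned first (avoiding systematic bias).
rng = random.Random(0)
presented = [tuple(rng.sample(pair, len(pair))) for pair in rng.sample(pairs, len(pairs))]
```

For five factors this yields the 10 pairs used in the example; for ten factors the same formula gives 45 pairs, which is why the text suggests six factors (15 pairs) as a practical upper limit.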
c) The Unity-sum-gain technique: A common problem with launching new products is one of reaching a decision as to
what options, and how many options one offers. Whilst a company may be anxious to meet the needs of as many
market segments as possible, it has to ensure that each segment is large enough to enable it to make a profit. It is
always easier to add products to the product line but much more difficult to decide which models should be deleted.
One technique for evaluating the options which are likely to prove successful is the unity-sum-gain approach.
The procedure is to begin with a list of features which might possibly be offered as 'options' on the product, and
alongside each you list its retail cost. A third column is constructed and this forms an index of the relative prices of
each of the items. The table below will help clarify the procedure. For the purposes of this example the basic reaper is
priced at Rs 20,000 and some possible 'extras' are listed along with their prices.
The total value of these hypothetical 'extras' is Rs 7,460 but the researcher tells the farmer he has an equally
hypothetical Rs 3,950 or similar sum. The important thing is that he should have considerably less hypothetical money
to spend than the total value of the alternative product features. In this way the farmer is encouraged to reveal his
preferences by allowing researchers to observe how he trades one additional benefit off against another. For example,
would he prefer a side rake attachment on a 3 metre head rather than have a transporters trolley on either a standard
or 2.5m wide head? The farmer has to be told that any unspent money cannot be retained by him so he should seek the
best value-for-money he can get.
In cases where the researcher believes that mentioning specific prices might introduce some form of bias into the
results, then the index can be used instead. This is constructed by taking the price of each item over the total of Rs 7,460
and multiplying by 100. Survey respondents might then be given a maximum of 60 points and then, as before, are
asked how they would spend these 60 points. In this crude example the index numbers are not too easy to work with, so in practice one would round them to more convenient figures.
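The index construction can be sketched as follows. The item names echo the extras mentioned in the text, but the individual prices are hypothetical, chosen only so that they sum to the Rs 7,460 figure used above:

```python
# Hypothetical optional extras and retail prices in Rs; the figures are
# illustrative and sum to the Rs 7,460 total used in the text.
extras = {
    "Side rake attachment": 1960,
    "3 metre cutting head": 2400,
    "Transporter trolley": 1700,
    "2.5 m wide head": 1400,
}

total = sum(extras.values())   # 7460

# Index of relative prices: each item's price over the total, times 100.
index = {item: round(price / total * 100) for item, price in extras.items()}

# Respondents are then given a fixed budget of points (e.g. 60) -- considerably
# less than the index total -- and asked to "spend" them across the extras,
# revealing how they trade one benefit off against another.
```

Because the budget is deliberately smaller than the sum of the index values, a respondent cannot simply take everything and must reveal his or her priorities.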
b) Line marking scale: The line marked scale is typically used to measure perceived similarity differences between products, brands or other objects. Technically, such a scale is a form of what is termed a semantic differential scale, since each end of the scale is labelled with a word/phrase (or semantic) that is opposite in meaning to the other. The following figure provides an illustrative example of such a scale.
Consider the products below which can be used when frying food. In the case of each pair, indicate how similar or dissimilar you consider them to be by placing a mark on the line between the two extremes.
e) Likert scales: A Likert scale is what is termed a summated instrument scale. This means that the items making up a Likert scale are summed to produce a total score. In fact, a Likert scale is a composite of itemised scales. Typically, each scale item will have 5 categories, with scale values ranging from -2 to +2 and 0 as the neutral response.
If the price of raw materials fell, firms would reduce the price of their food products.        -2 -1 0 1 2
The food industry spends a great deal of money making sure that its manufacturing is hygienic.  -2 -1 0 1 2
Food companies should charge the same price for their products throughout the country.          -2 -1 0 1 2
Likert scales are treated as yielding interval data by the majority of researchers. The scales which have been described
in this chapter are among the most commonly used in research. Whilst there are a great many more forms which scales
can take, if students are familiar with those described in this chapter they will be well equipped to deal with most types
of survey problem.
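The summation that gives a Likert scale its "summated" character can be sketched as follows, using the three statements above and the -2 to +2 scale values; the respondent's answers are hypothetical:

```python
# The three Likert items from the example above.
items = [
    "If the price of raw materials fell, firms would reduce the price of their food products.",
    "The food industry spends a great deal of money making sure that its manufacturing is hygienic.",
    "Food companies should charge the same price for their products throughout the country.",
]

def likert_total(responses):
    """Sum the item scores to give the summated scale score."""
    assert all(-2 <= r <= 2 for r in responses), "each response must be on the -2..+2 scale"
    return sum(responses)

# One hypothetical respondent's answers, one per item, on the -2..+2 scale.
respondent = [1, 2, -1]
score = likert_total(respondent)   # total ranges from -6 to +6 for three items
```

Treating the total as interval data, as most researchers do, then permits means, standard deviations and the other common statistical methods mentioned earlier.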
Introduction
A research design is a blueprint which directs the plan of action to complete the research work. The collection of data is an important part of the process of research work. The quality and credibility of the results derived from the application of research methodology depend upon relevant, accurate and adequate data.
In this unit, we shall study the various sources of data and the methods of collecting primary and secondary data, with their merits and limitations, as well as the choice of a suitable method for data collection.
Primary data
Primary data can be obtained by communication or by observation. Communication involves questioning respondents
either verbally or in writing. This method is versatile, since one need only ask for the information; however, the
response may not be accurate. Communication usually is quicker and cheaper than observation. Observation involves
the recording of actions and is performed by either a person or some mechanical or electronic device. Observation is
less versatile than communication since some attributes of a person may not be readily observable, such as attitudes,
awareness, knowledge, intentions, and motivation. Observation also might take longer since observers may have to
wait for appropriate events to occur, though observation using scanner data might be quicker and more cost effective.
Observation typically is more accurate than communication.
Some common types of primary data are:
Intentions - for example, purchase intentions. While useful, intentions are not a reliable indication of actual future behaviour.
Motivation - a person's motives are more stable than his/her behaviour, so motive is a better predictor of future behaviour than is past behaviour.
1. Observation Method
The Concise Oxford Dictionary defines observation as 'accurate watching and noting of phenomena as they occur in nature with regard to cause and effect or mutual relations'. Thus observation is not only a systematic watching but it
also involves listening and reading, coupled with consideration of the seen phenomena. It involves three processes.
They are: sensation, attention or concentration and perception.
Under this method, the researcher collects information directly through observation rather than through the reports of
others. It is a process of recording relevant information without asking anyone specific questions and in some cases,
even without the knowledge of the respondents. This method of collection is highly effective in behavioural surveys.
For instance, it is well suited to studying the behaviour of visitors at trade fairs, the attitudes of workers on the job, the bargaining strategies of customers, etc. Observation can be participant observation or non-participant observation. In Participant
Observation Method, the researcher joins in the daily life of informants or organisations, and observes how they
behave. In the Non-participant Observation Method, the researcher will not join the informants or organisations but
will watch from outside.
Merits
1) This is the most suitable method when the informants are unable or reluctant to provide information.
2) This method provides deeper insights into the problem and generally the data is accurate and quicker to process.
Therefore, this is useful for intensive study rather than extensive study.
2. Interview Method
Interview is one of the most powerful tools and most widely used method for primary data collection in business
research. In our daily routine, we see interviews on T.V. channels on various topics related to social, business, sports,
budget etc. In the words of C. William Emory, 'personal interviewing is a two-way purposeful conversation initiated by an interviewer to obtain information that is relevant to some research purpose'. Thus an interview is basically a
meeting between two persons to obtain the information related to the proposed study. The person who is interviewing
is named the interviewer and the person who is being interviewed is named the informant. It is to be noted that the research data/information collected through this method come not from the conversation between the investigator and the informant alone; glances, gestures, facial expressions, level of speech, etc., are all part of the process.
Through this method, the researcher can collect varied types of data intensively and extensively. Interviews can be
classified as direct personal interviews and indirect personal interviews. Under the techniques of direct personal
interview, the investigator meets the informants (who come under the study) personally, asks them questions
pertaining to enquiry and collects the desired information. Thus if a researcher intends to collect the data on spending
habits of Nagpur University (NU) students, he/ she would go to the NU, contact the students, interview them and
collect the required information.
Indirect personal interview is another technique of interview method where it is not possible to collect data directly
from the informants who come under the study. Under this method, the investigator contacts third parties or
witnesses, who are closely associated with the persons/situations under study and are capable of providing necessary
information. For example, consider an investigation regarding the pattern of bribery in an office. In such a case it is necessary to get the desired information indirectly from other people who may know the facts. Similarly, clues about crimes are gathered by the CBI. Utmost care must be exercised that the persons being questioned are fully aware of the facts of the problem under study, and are not motivated to give a twist to the facts.
Another technique for data collection through this method can be structured and unstructured interviewing. In the
Structured interview set questions are asked and the responses are recorded in a standardised form. This is useful in
large scale interviews where a number of investigators are assigned the job of interviewing. The researcher can
minimise the bias of the interviewer. This technique is also named as formal interview. In Un-structured interview, the
investigator may not have a set of questions but have only a number of key points around which to build the interview.
Normally, such types of interviews are conducted in the case of an explorative survey where the researcher is not
completely sure about the type of data he/ she collects. It is also named as informal interview. Generally, this method
is used as a supplementary method of data collection in conducting research in business areas.
Merits
The major merits of this method are as follows:
1) People are more willing to supply information if approached directly. Therefore, personal interviews tend to yield
high response rates.
2) This method enables the interviewer to clarify any doubt that the interviewee might have while asking him/her
questions. Therefore, interviews are helpful in getting reliable and valid responses.
3) The informant's reactions to questions can be properly studied.
4) The researcher can adjust the language of communication to the level of the informant, and so obtain personal information about informants which is helpful in interpreting the results.
Limitations
The limitations of this method are as follows:
1) Subjective factors or the views of the investigator may enter, consciously or unconsciously, and bias the results.
2) The interviewers must be properly trained, otherwise the entire work may be spoiled.
3) It is a relatively expensive and time-consuming method of data collection especially when the number of persons to
be interviewed is large and they are spread over a wide area.
4) It cannot be used when the field of enquiry is large (large sample).
Precautions: While using this method, the following precautions should be taken:
1. Obtain thorough details of the theoretical aspects of the research problem.
2. Identify who is to be interviewed.
3. The questions should be simple, clear and limited in number.
4. The investigator should be sincere, efficient and polite while collecting data.
5. The investigator should be of the same area (field of study, district, state etc.).
Limitations of the Questionnaire Method
1) Respondents may not return the filled-in questionnaires, or may delay in replying.
2) This method is useful only when the respondents are educated and co-operative.
3) Once the questionnaire has been despatched, the investigator cannot modify the questionnaire.
4) It cannot be ensured whether the respondents are truly representative.
Merits of the Schedule Method
1) It is a useful method when the informants are illiterate.
2) The researcher can overcome the problem of non-response as the enumerators go personally to obtain the
information.
3) It is very useful in extensive studies and can obtain more reliable data.
Limitations
1) It is a very expensive and time-consuming method as enumerators are paid persons and also have to be trained.
2) Since the enumerator is present, the respondents may not respond to some personal questions.
3) Reliability depends upon the sincerity and commitment in data collection.
The success of data collection through the questionnaire method or schedule method depends on how the
questionnaire has been designed.
Increasing participation
The researcher can enhance the respondent's participation by explaining the kind of answer sought, the terms
in which it should be expressed, the depth and clarity of information needed, and so on. Coaching can be provided to the participants,
but care should be taken to avoid introducing bias. The interviewer can make the session an interesting and enjoyable
experience by administering adequate motivation techniques.
Some of the techniques for successful interviewing of the participants are listed below:
The interviewer should introduce himself by name and state the organization to which he is affiliated. The
interviewer can establish his identity with introductory letters or other information that confirms the legitimacy of
the work. Enough details regarding the work to be done should be given; wherever demanded, more information
may be provided. The interviewer should be able to kindle the interest of the respondent.
If the participant is busy, the interviewer should try to stimulate interest so as to arrange for an interview at
another time.
The successful conduct of an interview requires a good rapport and understanding between the interviewer and
the participant. The interviewer should earn the confidence of the respondent so as to elicit responses without censure,
coercion or pressure.
In the process of gathering data the interviewer should ensure that the objective of each question is achieved and
the needed response is obtained. The interviewer can resort to probing, but steps should be taken to avoid the bias.
The interviewer should record the answers of the participant in an efficient manner. The interviewer should record
responses as they occur; recording responses later will lead to loss of information. Under time constraints,
shorthand mechanisms such as recording only keywords can be used.
Interviewers should have good communication skills, should be able to adapt to flexible schedules, be willing to
work during intermittent work hours and should be mobile. If the interview is conducted by the researcher
himself, there is no need for much training; otherwise proper training should be provided so that the interviewer is able
to understand the objective of the study, the purpose of each question, the possible responses and an outline of the
research work conducted, its importance etc. Written instructions can be provided wherever needed.
Questioning techniques should be followed by the interviewer. A funnelling approach can be practised, i.e. at the
beginning of an unstructured interview open-ended questions are asked to get a broad idea and form an
impression of the situation. Care should be taken to see that the questions are unbiased.
The interviewer should restate or rephrase important information so as to ensure that the issues are recorded as
the respondent intends to represent them. The researcher can also help the respondent to verbalize his or her
perceptions.
Sources of information
Secondary sources of information may be divided into two categories: internal sources and external sources.
The main external sources of secondary data are (1) government (Central, state and local), (2) trade associations, (3)
commercial services, and (4) national and international institutions.
Trade associations: Trade associations differ widely in the extent of their data collection and information
dissemination activities. However, it is worth checking with them to determine what they do publish. At the very
least one would normally expect that they would produce a trade directory and, perhaps, a yearbook.
Commercial services: Published market research reports and other publications are available from a wide range of
organisations which charge for their information. Typically, marketing people are interested in media statistics and
consumer information which has been obtained from large-scale consumer or farmer panels. The commercial
organisation funds the collection of the data, which is wide-ranging in its content, and hopes to make its money
from selling this data to interested parties.
National and international institutions: Bank economic reviews, university research reports, journals and articles
are all useful sources to contact. International agencies such as the World Bank, IMF, UNDP, ITC, FAO and ILO
produce an abundance of secondary data which can prove extremely useful to the researcher.
Note: Because newspapers are meant to provide immediate information, some facts might not be
accurate or will change over time.
Note: Scholarly journals are often published by scholarly societies and organizations or by publishers
of other scholarly information.
Books (Monographs): written by and for a variety of audiences; generally take longer to be published; often provide
citations and bibliographies; can provide very in-depth coverage; can be primary resources; can present multiple
viewpoints in compilations and anthologies. Use a library catalogue to find out what a library owns; many books are
published in electronic format (e-Books) and are accessible through library catalogues. Examples: Marketing
Management by Kotler; Organization Behaviour by Robbins.
Whether statistical or non-statistical methods of analyses are used, researchers should be aware of the potential for
compromising data integrity. While statistical analysis is typically performed on quantitative data, there are numerous
analytic procedures specifically designed for qualitative material including content, thematic, and ethnographic
analysis. Regardless of whether one studies quantitative or qualitative phenomena, researchers use a variety of tools to
analyze data in order to test hypotheses, discern patterns of behavior, and ultimately answer research questions.
Failure to understand or acknowledge data analysis issues presented can compromise data integrity.
1) Data coding
Coding refers to the process of assigning numerals or other symbols to answers so that responses can be put into a
limited number of categories or classes. Such classes should be appropriate to the research problem under
consideration. They must also possess the characteristic of exhaustiveness (i.e., there must be a class for every data
item) and also that of mutual exclusivity, which means that a specific answer can be placed in one and only one cell in a
given category set. Another rule to be observed is that of unidimensionality, which means that every class is
defined in terms of only one concept.
Coding is necessary for efficient analysis; through it, the several replies may be reduced to a small number of
classes which contain the critical information required for analysis. Coding decisions should usually be taken at the
designing stage of the questionnaire. This makes it possible to precode the questionnaire choices, which in turn is
helpful for computer tabulation, as one can key the data directly from the original questionnaires. In case of hand
coding, a standard method should be adopted and followed consistently.
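As a small sketch of the idea, the coding rules above can be expressed in a few lines of Python. The category scheme and the numeric codes below are hypothetical, purely for illustration:

```python
# A minimal sketch of precoding survey answers into numeric classes.
# The categories and codes are invented for illustration only.
CODEBOOK = {
    "yes": 1,
    "no": 2,
    "don't know": 3,
}
OTHER = 9  # catch-all class keeps the scheme exhaustive

def code_response(answer):
    """Map a raw answer to exactly one numeric code (mutual exclusivity)."""
    return CODEBOOK.get(answer.strip().lower(), OTHER)

raw = ["Yes", "no", "Don't know", "maybe"]
codes = [code_response(a) for a in raw]
print(codes)  # -> [1, 2, 3, 9]
```

Every answer falls into exactly one class (mutual exclusivity), and the catch-all code ensures a class exists for every data item (exhaustiveness).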
2) Data input
The keyboard of a computer is one of the more commonly known input, or data entry, devices in current use. In the
past, punched cards or paper tapes have been used.
3) Data editing
Editing of data is a process of examining the collected raw data (especially in surveys) to detect errors and omissions
and to correct these when possible. Before being presented as information, data should be put through a process called
editing. This process checks for accuracy and eliminates problems that can produce disorganised or incorrect
information. Data editing may be performed by clerical staff, computer software, or a combination of both; depending
on the medium in which the data is submitted.
As a matter of fact, editing involves a careful scrutiny of the completed questionnaires and/or schedules. Editing is
done to assure that the data are accurate, consistent with other facts gathered, uniformly entered, as complete as
possible and have been well arranged to facilitate coding and tabulation.
With regard to the points or stages at which editing should be done, one can talk of field editing and central editing. Field
editing consists in the review of the reporting forms by the investigator for completing (translating or rewriting) what
has been written in abbreviated and/or illegible form at the time of recording the respondents' responses. This
type of editing is necessary because individual writing styles can often be difficult for others to decipher.
This sort of editing should be done as soon as possible after the interview, preferably on the very day or on the next
day.
While doing field editing, the investigator must restrain himself and must not correct errors of omission by simply
guessing what the informant would have said if the question had been asked. Central editing should take place when
all forms or schedules have been completed and returned to the office. This type of editing implies that all forms
should get a thorough editing by a single editor in a small study and by a team of editors in case of a large inquiry.
Editor(s) may correct the obvious errors such as an entry in the wrong place, entry recorded in months when it should
have been recorded in weeks, and the like. In case of inappropriate or missing replies, the editor can sometimes
determine the proper answer by reviewing the other information in the schedule. At times, the respondent can be
contacted for clarification. The editor must strike out an answer if it is inappropriate and he has no basis for
determining the correct answer or response. In such a case an editing entry of 'no answer' is called for. All
obviously wrong replies must be dropped from the final results, especially in the context of mail
surveys.
Editors must keep in view several points while performing their work: (a) They should be familiar with instructions
given to the interviewers and coders as well as with the editing instructions supplied to them for the purpose. (b)
While crossing out an original entry for one reason or another, they should just draw a single line on it so that the same
may remain legible. (c) They must make entries (if any) on the form in some distinctive colour, and that too in a
standardised form. (d) They should initial all answers which they change or supply. (e) The editor's initials and the date of
editing should be placed on each completed form or schedule.
Some editing processes are:
Validity check: ensures that data fall within set limits. For example, alphabetic characters do not appear in a field that
should have only numerical characters, or the month of year is not greater than 12.
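A validity check of the kind just described can be sketched as follows. The two checks mirror the examples in the text (digits-only fields, month not greater than 12); the function names are illustrative assumptions:

```python
# A minimal sketch of validity checks: data must fall within set limits.
def numeric_only(field):
    """A numeric field must contain digits only - no alphabetic characters."""
    return field.isdigit()

def valid_month(month):
    """The month of the year must fall between 1 and 12."""
    return 1 <= month <= 12

print(numeric_only("2013"))  # True
print(numeric_only("20a3"))  # False - stray letter detected
print(valid_month(13))       # False - month greater than 12 rejected
```

In practice such checks would be run over every field of every record before the data is accepted for tabulation.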
4) Data manipulation
After editing, data may be manipulated by computer to produce the desired output. The software used to manipulate
data will depend on the form of output required.
Software applications such as word processing, desktop publishing, graphics (including graphing and drawing),
databases and spreadsheets are commonly used. Following are some ways that software can manipulate data:
Spreadsheets are used to create formulas that automatically add columns or rows of figures, calculate means and
perform statistical analyses. They can be used to create financial worksheets such as budgets or expenditure
forecasts, balance accounts and analyse costs.
Databases are electronic filing cabinets: systematically storing data for easy access to produce summaries,
stocktakes or reports. A database program should be able to store, retrieve, sort, and analyse data.
Charts can be created from a table of numbers and displayed in a number of ways, to show the significance of a
selection of data. Bar, line, pie and other types of charts can be generated and manipulated to advantage.
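The spreadsheet-style operations described above (summing columns of figures, calculating means) can be sketched in a few lines; the expenditure figures below are invented for illustration:

```python
# A small sketch of spreadsheet-style data manipulation: column totals
# and means over rows of figures. The budget data is made up.
from statistics import mean

expenditure = {            # hypothetical monthly expense items
    "Jan": [120, 80, 45],
    "Feb": [130, 75, 50],
}

totals = {month: sum(items) for month, items in expenditure.items()}
averages = {month: mean(items) for month, items in expenditure.items()}

print(totals)  # {'Jan': 245, 'Feb': 255}
```

A charting library could then turn `totals` into the bar or pie charts mentioned above.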
Processing data provides useful information called output. Computer output may be used in a variety of ways. It may
be saved in storage for later retrieval and use. It may be laser printed on paper as tables or charts, put on a transparent
slide for overhead projector use, saved on floppy disk for portable use in other computers, or sent as an electronic file
via the internet to others.
Types of output are limited only by the available output devices, but their form is usually governed by the need to
communicate information to someone. For whom is output being produced? How will they best understand it? The
answers to these questions help determine one's output type.
5. Data Tabulation
Before analysis can be performed, raw data must be transformed into the right format. First, it must be edited so that
errors can be corrected or omitted. The data must then be coded; this procedure converts the edited raw data into
numbers or symbols. A codebook is created to document how the data was coded. Finally, the data is tabulated to
count the number of samples falling into various categories. Simple tabulations count the occurrences of each variable
independently of the other variables. Cross tabulations, also known as contingency tables or cross tabs, treat two or
more variables simultaneously. However, since the variables are in a two-dimensional table, cross tabbing more than
two variables is difficult to visualize since more than two dimensions would be required. Cross tabulation can be
performed for nominal and ordinal variables.
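The difference between simple and cross tabulation can be sketched with the standard library; the gender-by-preference responses below are invented:

```python
# Simple vs cross tabulation on invented nominal data.
from collections import Counter

responses = [
    ("male", "digital"), ("male", "analog"), ("female", "digital"),
    ("female", "digital"), ("male", "digital"), ("female", "analog"),
]

# Simple tabulation: each variable counted independently of the other.
by_gender = Counter(g for g, _ in responses)

# Cross tabulation: the two variables treated simultaneously.
cross = Counter(responses)

print(by_gender["male"])                 # 3
print(cross[("female", "digital")])      # 2
```

Each cell of the contingency table is the count for one combination of the two nominal variables.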
Principles of tabulation:
1) A clear, brief and self-explanatory title is necessary for a table.
2) Stubs (row headings) and captions (column headings) should be clearly mentioned.
3) The body of the table must show all the relevant information according to their description.
4) Data should be arranged systematically; that is, chronologically, alphabetically or geographically.
5) Adequate spacing should be given in between the columns and rows.
6) Abbreviation should be avoided to the extent possible.
1. Factor Analysis
2. Cluster Analysis
3. Discriminant Analysis
4. Conjoint Analysis
5. Multidimensional Scaling
1) Factor Analysis
Factor analysis is a statistical technique that originated in mathematical psychology. It is used in the social sciences and
in marketing, product management, operations research, and other applied sciences that deal with large quantities of
data. The objective is to discover patterns among variations in the values of multiple variables. This is done by
generating artificial dimensions (called factors) that correlate highly with the real variables.
Factor analysis is a very popular technique to analyze interdependence. Factor analysis studies the entire set of
interrelationships without defining variables to be dependent or independent. Factor analysis combines variables to
create a smaller set of factors. Mathematically, a factor is a linear combination of variables. A factor is not directly
observable; it is inferred from the variables. The technique identifies underlying structure among the variables,
reducing the number of variables to a more manageable set. Factor analysis groups variables according to their
correlation.
Factor loadings can be defined as the correlations between the factors and their underlying variables. A factor
loading matrix is a key output of the factor analysis. An example of such a matrix is shown below.
Factor 1 Factor 2 Factor 3
Variable 1
Variable 2
Variable 3
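As a rough sketch of the principal component approach described below, the loadings of the first factor can be extracted from a correlation matrix by power iteration. The 3x3 correlation matrix here is invented; a real analysis would start from the survey ratings themselves and typically use a statistical package:

```python
# Sketch: extract first-factor loadings from an invented correlation
# matrix via power iteration (principal component approach).
import math

R = [  # variables 1 and 2 correlate strongly; variable 3 stands apart
    [1.0, 0.8, 0.1],
    [0.8, 1.0, 0.2],
    [0.1, 0.2, 1.0],
]

def first_factor(R, iters=200):
    """Return loadings of the dominant factor: sqrt(eigenvalue) * eigenvector."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue for the converged vector.
    eig = sum(v[i] * sum(R[i][j] * v[j] for j in range(n)) for i in range(n))
    return [math.sqrt(eig) * x for x in v]

loadings = first_factor(R)
# variables 1 and 2 load heavily on the first factor; variable 3 loads weakly
```

The loadings are the correlations between the factor and each variable: the two highly correlated variables group onto one factor, exactly the data reduction the text describes.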
Information collection
The data collection stage is usually done by research professionals. Survey questions ask the respondent to rate a
product from one to five (or 1 to 7, or 1 to 10) on a range of attributes. Anywhere from five to twenty attributes are
chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The
attributes chosen will vary depending on the product being studied. The same question is asked about all the products
in the study. The data for multiple products is codified and input into a statistical program such as SPSS or SAS.
Analysis
The analysis will isolate the underlying factors that explain the data. Factor analysis is an interdependence technique.
The complete set of interdependent relationships is examined. There is no specification of dependent variables,
independent variables, or causality. Factor analysis assumes that all the rating data on different attributes can be
reduced down to a few important dimensions. This reduction is possible because the attributes are related. The rating
given to any one attribute is partially the result of the influence of other attributes. The statistical algorithm
deconstructs the rating (called a raw score) into its various components, and reconstructs the partial scores into
underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a
factor loading. There are two approaches to factor analysis: "principal component analysis" (the total variance in the
data is considered); and "common factor analysis" (the common variance is considered).
The use of principal components in a semantic space can vary somewhat because the components may only "predict"
but not "map" to the vector space. This produces a statistical principal-component use where the most salient words or
themes represent the preferred basis.
Advantages
1. both objective and subjective attributes can be used
2. it is fairly easy to do, inexpensive, and accurate
3. it is based on direct inputs from customers
4. there is flexibility in naming and using dimensions
2) Cluster analysis
Cluster analysis is a technique that is used in order to segment a market. The objective is to find out a group of
customers in the market place that are homogeneous i.e., they share some characteristics so that they can be classified
into one group. The cluster/group so found out should be large enough so that the company can develop it profitably,
as the ultimate objective of a company is to serve the customer and earn profits. The group of customers that the
company hopes to serve should be large enough to be an economically viable proposition. This is also true for the
customer, as a customer would not be willing to pay beyond a certain price for a particular product (price, of course,
is a function of the positioning of the product, cost of production etc.).
As an example, let us consider the Watch Industry. There are many ways in which the Watch Industry could be
segmented, for example:
a. Gender (Male/Female)
b. Technology (Digital/Analog)
c. Design Features
d. Occasion of Use (Formal/Casual/Party)
e. Price (Low/Medium/High/Jewellery)
Some of the above segmentation factors are demographic (price, gender) whereas some are psychographic
(occasion of use).
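The text does not name a specific algorithm, but one common clustering technique is k-means. The toy sketch below groups hypothetical willingness-to-pay figures for watches into price tiers; the prices and starting centres are invented:

```python
# Toy one-dimensional k-means: assign each point to its nearest centre,
# move each centre to the mean of its group, repeat.
def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            groups[nearest].append(p)
        # drop any centre whose group has emptied out
        centers = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centers)

prices = [5, 7, 8, 60, 65, 70, 400, 450]   # low / medium / jewellery buyers
centers = kmeans_1d(prices, centers=[0, 100, 500])
# three centres emerge, one per price tier
```

Each resulting centre represents a homogeneous group of customers, which the company can then assess for economic viability.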
4) Conjoint analysis
Conjoint analysis, also called multi-attribute compositional models, is a statistical technique that originated in
mathematical psychology. Today it is used in many of the social sciences and applied sciences including marketing,
product management, and operations research. The objective of conjoint analysis is to determine what combination
of a limited number of attributes is most preferred by respondents. It is frequently used in testing customer preferences.
For Example
Frequency of service has a range from 1.6 to 0.4. The range is therefore equal to 1.6 - 0.4 = 1.2. A high range implies that the
respondent is more sensitive to changes in the level of this attribute.
These utilities are calculated across all respondents for all attributes and for different levels of each attribute.
At the end of the analysis, 3-4 of the most popular combinations would be identified, for which the relative costs and
benefits can be worked out.
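The range-based calculation above can be sketched as follows. Apart from the range of 1.2 for frequency of service quoted in the example, the utilities and attribute names are invented:

```python
# Sketch: turn part-worth utility ranges into relative attribute importances.
utilities = {  # attribute -> utilities of its levels (hypothetical values)
    "frequency of service": [1.6, 0.9, 0.4],   # range = 1.2, as in the example
    "price":                [1.0, 0.5, 0.2],   # range = 0.8
    "comfort":              [0.7, 0.3],        # range = 0.4
}

ranges = {a: max(u) - min(u) for a, u in utilities.items()}
total = sum(ranges.values())
importance = {a: r / total for a, r in ranges.items()}

print(importance)  # frequency of service dominates: 1.2 / 2.4 = 50%
```

A larger range means the respondent is more sensitive to that attribute, so it receives a larger share of the total importance.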
5) Multidimensional scaling
Multidimensional scaling (MDS) is a series of techniques that helps the analyst to identify key dimensions
underlying respondents' evaluations of objects. It is often used in marketing to identify key dimensions underlying
customer evaluations of products, services or companies.
Multidimensional scaling (MDS) is a statistical technique often used in marketing and the social sciences. It is a
procedure for taking the preferences and perceptions of respondents and representing them on a visual grid. These
grids, called perceptual maps, are usually two-dimensional, but they can represent more than two. Potential customers
are asked to compare pairs of products and make judgements about their similarity. Whereas other techniques (such
as factor analysis, discriminant analysis, and conjoint analysis) obtain underlying dimensions from responses to
product attributes identified by the researcher, MDS obtains the underlying dimensions from respondents'
judgements about the similarity of products. This is an important advantage: it does not depend on the researcher's
judgements and does not require a list of attributes to be shown to the respondents. The underlying dimensions come
from respondents' judgements about pairs of products. Because of these advantages, MDS is the most common
technique used in perceptual mapping.
only if the advertisement is distinct in its message from the other competing advertisements,
3) Product Re-positioning Studies
If a company is interested in re-positioning its product/service (in the mind of the consumer), the first and foremost
activity is to assess the current perception of the product in the mind of the consumer. The classic re-positioning
case is that of Cadbury chocolates, which kept assessing its positioning platform and successfully moved
chocolates from a product perceived as one for children to a product which could be consumed by a person of
any age, at any time of the day, and for varied occasions.
Advantage of MDS
The advantage of MDS methods lies not in the measurement of physical distances, but rather of "psychological
distances", also called 'dissimilarities'. In MDS, we assume that every individual has a 'mental map' of products, people,
places, events and companies, and that individuals keep evaluating their external environment on a continuous basis.
We also assume that the respondent is able to provide either numerical measure of his or her perceived degree of
similarity/dissimilarity between pairs of objects, or can rank pairs of objects (ordinal scale of measurement) in terms of
similarity/dissimilarity to each other.
We can then make use of the methodology of MDS to construct a physical map in one or more dimensions whose
inter-point distances (or ranks of distances) are most consistent with the input data.
Nowadays a number of software programs are available for conducting MDS analysis. These programs
provide for a variety of input data. Some of the widely used packages include MDPREF, MDSCAL 5M, INDSCAL,
PREFMAP, PROFIT and KYST.
Effective Fieldwork
To be effective fieldwork should:
be well planned, interesting, cost effective and represent an effective use of the time available
target specific issues and topic outcomes
provide opportunities for the researcher to develop a range of cognitive and manipulative skills
be integrated with the subject matter to ensure that researchers take full advantage of the enhanced understanding that is
achieved through direct observation, data collection/recording and inquiry learning.
be supported by pre- and post-expedition classroom activities that establish the context for learning and provide the
necessary follow-up and reinforcement.
Survey plan
Surveys are quantitative information collection techniques used in marketing, political polling, and social science
research.
All surveys involve questions of some sort. When the questions are asked by a researcher, the survey is called
an interview or a researcher-administered survey. When the questions are read and answered by the respondent
alone, the survey is referred to as a questionnaire or a self-administered survey.
Advantages of surveys
The advantages of survey techniques include:
It is an efficient way of collecting information from a large number of respondents. Very large samples are possible.
Statistical techniques can be used to determine validity, reliability, and statistical significance.
Surveys are flexible in the sense that a wide range of information can be collected. They can be used to study
attitudes, values, beliefs, and past behaviours.
Because they are standardized, they are relatively free from several types of errors.
They are relatively easy to administer.
There is an economy in data collection due to the focus provided by standardized questions. Only questions of
interest to the researcher are asked, recorded, codified, and analyzed. Time and money are not spent on tangential
questions.
Disadvantages of surveys
Disadvantages of survey techniques include:
They depend on subjects' motivation, honesty, memory, and ability to respond. Subjects may not be aware of their
reasons for any given action. They may have forgotten their reasons. They may not be motivated to give accurate
answers; in fact, they may be motivated to give answers that present themselves in a favorable light.
Structured surveys, particularly those with closed ended questions, may have low validity when researching
affective variables.
Survey Methods
Once the researcher has decided on the size of sample, the next step is to decide on the method of data collection. Each
method has advantages and disadvantages.
a) Personal Interviews
An interview is called personal when the Interviewer asks the questions face-to-face with the Interviewee. Personal
interviews can take place in the home, at a shopping mall, on the street, outside a movie theatre or polling place, and so
on.
Advantages
1. The ability to let the Interviewee see, feel and/or taste a product.
2. The ability to find the target population. For example, you can find people who have seen a film much more easily
outside a theatre in which it is playing than by calling phone numbers at random.
3. Longer interviews are sometimes tolerated, particularly with in-home interviews that have been arranged in
advance. People may be willing to talk longer face-to-face than to someone on the phone.
Disadvantages
1. Personal interviews usually cost more per interview than other methods. This is particularly true of in-home
interviews, where travel time is a major factor.
2. Each mall has its own characteristics. It draws its clientele from a specific geographic area surrounding it, and its
shop profile also influences the type of client. These characteristics may differ from the target population and create a
non-representative sample.
b) Telephone Surveys
Surveying by telephone is the most popular interviewing method in most of the country. This is made possible by
nearly universal coverage (approx. 70% of homes in urban areas have a telephone).
Advantages
1. People can usually be contacted faster over the telephone than with other methods. If the Interviewers are using
CATI (computer-assisted telephone interviewing), the results can be available minutes after completing the last
interview.
2. You can dial random telephone numbers when you do not have the actual telephone numbers of potential
respondents.
3. CATI software, such as The Survey System, makes complex questionnaires practical by offering many logic options.
It can automatically skip questions, perform calculations and modify questions based on the answers to earlier
questions. It can check the logical consistency of answers and can present questions or answers choices in a random
order (the last two are sometimes important for reasons described later).
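The skip logic described above can be sketched as a simple routing table. The questionnaire, question identifiers and answers below are invented; real CATI packages such as The Survey System implement far richer logic (calculations, randomised order, consistency checks):

```python
# Toy sketch of CATI-style skip logic: the next question depends on
# earlier answers. The questionnaire itself is invented.
questionnaire = {
    "q1": {"text": "Do you own a car?", "next": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "Which brand?",      "next": {"any": "q3"}},
    "q3": {"text": "Your age group?",   "next": {"any": None}},
}

def route(answers, start="q1"):
    """Return the sequence of questions asked, skipping as answers dictate."""
    asked, q = [], start
    while q is not None:
        asked.append(q)
        nxt = questionnaire[q]["next"]
        q = nxt.get(answers.get(q, ""), nxt.get("any"))
    return asked

print(route({"q1": "no"}))                  # ['q1', 'q3'] - q2 skipped
print(route({"q1": "yes", "q2": "Brand"}))  # ['q1', 'q2', 'q3']
```

Because the routing is data-driven, the interviewer never has to work out manually which question comes next.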
c) Mail Surveys
One way of improving response rates to mail surveys is to mail a postcard telling your sample to watch for a
questionnaire in the next week or two. Another is to follow up a questionnaire mailing after a couple of weeks with a
card asking people to return the questionnaire. The downside is that this doubles or triples your mailing cost. If you
have purchased a mailing list from a supplier, you may also have to pay a second (and third) use fee - you often cannot
buy the list once and re-use it.
Another way to increase responses to mail surveys is to use an incentive. One possibility is to send a dollar bill (or
more) along with the survey (or offer to donate the dollar to a charity specified by the respondent). If you do so, be
sure to say that the dollar is a way of saying "thanks," rather than payment for their time. Many people will consider
their time worth more than a dollar. Another possibility is to include the people who return completed surveys in a
drawing for a prize. A third is to offer a copy of the (non-confidential) result highlights to those who complete the
questionnaire. Any of these techniques can increase the response rate.
Remember that if you want a sample of 1,000 people, and you estimate a 10% response level, you need to mail 10,000
questionnaires. You may want to check with your local post office about bulk mail rates - you can save on postage
using this mailing method. However, most researchers do not use bulk mail, because many people associate "bulk"
with "junk" and will throw it out without opening the envelope, lowering your response rate. Also bulk mail moves
slowly, increasing the time needed to complete your project.
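The arithmetic above can be wrapped in a small helper; the function name is illustrative:

```python
# How many questionnaires must be mailed for a desired number of
# completed responses, given an estimated response rate in percent.
import math

def mailings_needed(target_responses, response_rate_percent):
    return math.ceil(target_responses * 100 / response_rate_percent)

print(mailings_needed(1000, 10))  # -> 10000, matching the example above
```

Rounding up (rather than down) ensures the sample target is still met when the division is not exact.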
Advantages
1. Mail surveys are among the least expensive.
2. This is the only kind of survey you can do if you have the names and addresses of the target population, but not
their telephone numbers.
3. The questionnaire can include pictures - something that is not possible over the phone.
4. Mail surveys allow the respondent to answer at their leisure, rather than at the often inconvenient moment they are
contacted for a phone or personal interview. For this reason, they are not considered as intrusive as other kinds of
interviews.
Disadvantages
1. Time! Mail surveys take longer than other kinds. You will need to wait several weeks after mailing out
questionnaires before you can be sure that you have gotten most of the responses.
e) Email Surveys
Email surveys are both very economical and very fast. More people have email than have full Internet access. This
makes email a better choice than a Web page survey for some populations. On the other hand, email surveys are
limited to simple questionnaires, whereas Web page surveys can include complex logic.
Advantages
1. Speed. An email questionnaire can gather several thousand responses within a day or two.
2. There is practically no cost involved once the set up has been completed.
3. You can attach pictures and sound files.
4. The novelty element of an email survey often stimulates higher response levels than ordinary "snail" mail surveys.
Disadvantages
1. You must possess (or purchase) a list of email addresses.
g) Scanning Questionnaires
Scanning questionnaires is a method of data collection that can be used with paper questionnaires that have been
administered in face-to-face interviews, mail surveys, or surveys completed by an interviewer over the telephone. The
Survey System can produce paper questionnaires that can be scanned using Remark Office OMR (Optical Mark
Reader). Other software can scan questionnaires and produce ASCII Files that can be read into The Survey System.
Advantages
1. Scanning can be the fastest method of data entry for paper questionnaires.
2. Scanning is more accurate than a person in reading a properly completed questionnaire.
Disadvantages
1. Scanning is best-suited to "check the box" type surveys and bar codes. Scanning programs have various methods to
deal with text responses, but all require additional data entry time.
2. Scanning is less forgiving (accurate) than a person in reading a poorly marked questionnaire. Requires investment
in additional hardware to do the actual scanning.
Summary of Survey Methods
The choice of survey method will depend on several factors. These include:
Speed: Email and Web page surveys are the fastest methods, followed by telephone interviewing. Mail surveys are the slowest.
Cost: Personal interviews are the most expensive, followed by telephone and then mail. Email and Web page surveys are the least expensive for large samples.
Internet Usage: Web page and email surveys offer significant advantages, but you may not be able to generalize their results to the population as a whole.
Literacy Levels: Illiterate and less-educated people rarely respond to mail surveys.
Sensitive Questions: People are more likely to answer sensitive questions when interviewed directly by a computer in one form or another.
Video, Sound, Graphics: A need to get reactions to video, music or a picture limits your options. You can play a video on a Web page, in a computer-direct interview, or in person. You can play music when using these methods or over a telephone. You can show pictures in those first methods and in a mail survey.
B. Systematic error: Systematic errors result from some imperfect research design or from a mistake in the
execution of the research. These errors are also called non-sampling errors. A sample bias exists when the results of a
sample show a persistent tendency to deviate in one direction from the true value of the population parameter.
The two general categories of systematic error are respondent error and administrative error.
1. Respondent error: If the respondents do not cooperate or do not give truthful answers then two types of error
may occur.
a) Non-response error: To utilize the results of a survey, the researcher must be sure that those who responded
to the questionnaire are representative of those who did not. If only those who responded are included in the survey
then non-response error will occur. Non-response is most common in mail surveys, but may also occur in
telephone and personal surveys in the form of no contacts (not-at-homes) or refusals. The number of no contacts has
been increasing because of the proliferation of answering machines and growing usage of Caller ID to screen telephone
calls. Self-selection may also occur in self-administered questionnaires; in this situation, only those who feel strongly
about the subject matter will respond, causing an over-representation of extreme positions. Comparing demographics
of the sample with the demographics of the target population is one means of inspecting for possible biases. Additional
efforts should be made to obtain data from any underrepresented segments of the population. For example, call-backs
can be made on the not-at-homes.
b) Response bias: Response bias occurs when respondents tend to answer in a certain direction. This bias may be
caused by an intentional or inadvertent falsification or by a misrepresentation of the respondent's answer.
(1) Deliberate falsification: People may misrepresent answers in order to appear intelligent, to avoid embarrassment,
to conceal personal information, to "please" the interviewer, etc. Interviewees may also prefer to be viewed
as average, and will alter their responses accordingly.
(2) Unconscious misrepresentation: Response bias can arise from question format, question ambiguity or content.
Time-lapse may lead to best-guess answers.
Types of response bias: There are five specific categories of response bias. These categories overlap and are by no
means mutually exclusive.
(i) Agreement bias: This is a response bias caused by a respondent's tendency to concur with a particular position: for
example, "yea-sayers" who accept all statements they are asked about.
(ii) Extremity bias: Some individuals tend to use extremes when responding to questions which may cause extremity
bias.
(iii) Interviewer bias: If an interviewer's presence influences respondents to give untrue or modified answers, the
survey will contain interviewer bias. Respondents may wish to appear wealthy or intelligent, or they may try to give
the "right" answer or the socially acceptable answer.
Introduction
Many a time, we strongly believe some result to be true, but after taking a sample we notice that the sample data
do not wholly support it. The difference may be due to (i) the original belief being wrong, or (ii) the sample being
slightly one-sided.
Tests are, therefore, needed to distinguish between the two possibilities. These tests tell about the likely possibilities
and reveal whether or not the difference can be due to only chance elements. If the difference is not due to chance
elements, it is significant and, therefore, these tests are called tests of significance. The whole procedure is known as
Testing of Hypothesis.
Setting up and testing hypotheses is an essential part of statistical inference. In order to formulate such a test, usually
some theory has been put forward, either because it is believed to be true or because it is to be used as a basis for
argument, but has not been proved. For example, the hypothesis may be the claim that a new drug is better than the
current drug for treatment of a disease, diagnosed through a set of symptoms.
In each problem considered, the question of interest is simplified into two competing claims/hypotheses between
which we have a choice; the null hypothesis, denoted by H0, against the alternative hypothesis, denoted by H1. These
two competing claims / hypotheses are not however treated on an equal basis; special consideration is given to the null
hypothesis.
We have two common situations:
(i) The experiment has been carried out in an attempt to disprove or reject a particular hypothesis, the null hypothesis;
thus we give that one priority so it cannot be rejected unless the evidence against it is sufficiently strong. For example,
null hypothesis H0: there is no difference in taste between coke and diet coke, against the alternate hypothesis H1: there
is a difference in the tastes.
(ii) If one of the two hypotheses is 'simpler', we give it priority so that a more 'complicated' theory is not adopted
unless there is sufficient evidence against the simpler one. For example, it is 'simpler' to claim that there is no
difference in flavour between coke and diet coke than it is to say that there is a difference.
The hypotheses are often statements about population parameters like expected value and variance. For example, H0
might be a statement about the expected value of the height of ten-year-old boys in the Indian population.
Concept of hypothesis
A hypothesis is the assumption that we make about the population parameter. This can be any assumption about a
population parameter, not necessarily based on statistical data; for example, it can also be based on the gut feel of a
manager. Managerial hypotheses are based on intuition; the marketplace decides whether the manager's intuitions
were in fact correct.
In fact, managers propose and test hypotheses all the time. For example:
A manager's claim that "if we drop the price of this car model by ₹15,000, we will increase sales by 25,000 units" is a
hypothesis. To test it in reality, we have to wait until the end of the year and count sales.
A manager's estimate that sales per territory will grow on average by 30% in the next quarter is also a hypothesis.
To understand the meaning of a hypothesis, let us see some definitions:
"A hypothesis is a tentative generalization, the validity of which remains to be tested. In its most elementary stage
the hypothesis may be any guess, hunch, imaginative idea, which becomes the basis for action or investigation."
(G. A. Lundberg)
"It is a proposition which can be put to test to determine validity." (Goode and Hatt)
"A hypothesis is a question put in such a way that an answer of some kind can be forthcoming." (Rummel and
Ballaine)
These definitions lead us to conclude that a hypothesis is a tentative solution or explanation or a guess or assumption
or a proposition or a statement to the problem facing the researcher, adopted on a cursory observation of known and
available data, as a basis of investigation, whose validity is to be tested or verified.
Hypothesis Formulation
When research is conducted, hypothesis formulation is one of the most preliminary steps. Hypothesis formulation
helps in formulating the research problem. It is not a necessary step of research, but it is an important one: a valid
and reasonable study can be conducted without any hypothesis, and a study may have a single hypothesis or
several.
A hypothesis is an expected answer to a research question; it provides direction to the research study. A hypothesis in
reality determines the focal point of the study: in the absence of a valid and testable hypothesis the researcher cannot
concentrate on one direction. Many new researchers face the problem of summing up their research question in a
precise and concise manner, mainly because of the lack of a hypothesis or the presence of an irrelevant
hypothesis.
In some studies, though, a hypothesis is not required, and a valid as well as reliable study can be conducted in its
absence. These studies do not require testing of interaction between variables. In most other studies this is not so.
In addition, a single hypothesis can be enough for many studies, but some may require the formulation of more
than one hypothesis. Such studies are just as valid, reliable and generalizable as studies with a single hypothesis,
but testing several hypotheses takes more time.
The purpose of hypothesis formulation, and the method by which it is formulated, differ across research studies. In
one study the hypothesis is an integral part of the whole study; in another, hypothesis testing is not compulsory;
and in still another, the purpose is to build hypotheses for future studies rather than to test them in
reality.
Basically there are four types of research studies; purpose of hypothesis formulation and testing in each
category is explained below.
1. Experimental Research Studies
Experimental research studies are based on critical scientific methods. In these, the purpose of hypothesis formulation
is to build a proposition, or to give a possible reason for a phenomenon that occurs. On the basis of this proposition,
the hypothesis is tested to find out whether the phenomenon occurs for this reason or not. Hypothesis
formulation in experimental research thus helps in taking the study forward. In the absence of a suitable hypothesis,
the experimenter has to test various possible reasons for the phenomenon to be interpreted. Hypothesis formulation in
experimental research gives direction to the study, and it also clears the clutter so that the researcher can think
directionally and with greater confidence. It should be noted that rejection or acceptance of the hypothesis
does not affect the credibility of the hypothesis; it rather concludes that the reason tested was true or false. In case
the hypothesis does not prove to be true, the researcher gets new ideas or new directions for further research to test
what the actual reason for the particular phenomenon was.
Characteristics of hypothesis
A hypothesis controls and directs the research study. When a problem is felt, we require the hypothesis to explain it.
Generally, there is more than one hypothesis which aims at explaining the same fact. But all of them cannot be equally
good. Therefore, how can we judge a hypothesis to be true or false, good or bad? Agreement with facts is the sole and
sufficient test of a true hypothesis. Therefore, certain conditions can be laid down for distinguishing a good
hypothesis from bad ones.
The formal conditions laid down by thinkers provide the criteria for judging a hypothesis as good or valid. These
conditions are as follows:
i) A hypothesis should be empirically verifiable: The most important condition for a valid hypothesis is
that it should be empirically verifiable. A hypothesis is said to be verifiable, if it can be shown to be either true or false
by comparing with the facts of experience directly or indirectly. A hypothesis is true if it conforms to facts and it is false
if it does not. Empirical verification is the characteristic of the scientific method.
ii) A hypothesis should be relevant: The purpose of formulating a hypothesis is always to explain some facts.
It must provide an answer to the problem which initiated the enquiry. A hypothesis is called relevant if it can explain
the facts of enquiry.
iii) A hypothesis must have predictive and explanatory power: Explanatory power means that a good
hypothesis, over and above the facts it proposes to explain, must also explain some other facts which are beyond its
original scope. A wide range of observable facts should be deducible from the hypothesis: the wider the range, the
greater its explanatory power.
iv) A hypothesis should not go against traditionally established knowledge: As far as possible, a
new hypothesis should not go against any previously established law or knowledge. The new hypothesis is expected to
be consistent with the established knowledge.
v) A hypothesis should be simple: A simple hypothesis is preferable to a complex one. It sometimes happens
that there are two or more hypotheses which explain a given fact equally well. Both of them are verified by observable
facts, both have predictive power, and both are consistent with established knowledge. All the important
conditions of a hypothesis are thus satisfied by them. In such cases the simpler one is to be accepted in preference to the
complex one.
vi) A hypothesis must be clear, definite and certain: It is desirable that the hypothesis be simple
and specific to the point. It must be clearly defined in a manner commonly accepted. It should not be vague or
ambiguous.
vii) A hypothesis should be related to available techniques: If tools and techniques are not available,
we cannot test the hypothesis. Therefore, the hypothesis should be formulated only after due thought is given to the
methods and techniques that can be used to measure the concepts and variables related to the hypothesis.
Testing of Hypothesis
When the hypothesis has been framed in the research study, it must be verified as true or false. Verifiability is one of
the important conditions of a good hypothesis. Verification of hypothesis means testing of the truth of the hypothesis
in the light of facts. If the hypothesis agrees with the facts, it is said to be true and may be accepted as the explanation
of the facts. But if it does not agree it is said to be false. Such a false hypothesis is either totally rejected or modified.
Verification is of two types, viz., direct verification and indirect verification.
1. Direct verification may be either by observation or by experiments. When direct observation shows that the
supposed cause exists where it was thought to exist, we have a direct verification. When a hypothesis is verified by an
experiment in a laboratory it is called direct verification by experiment. When the hypothesis is not amenable for direct
verification, we have to depend on indirect verification.
2. Indirect verification is a process in which certain possible consequences are deduced from the hypothesis and
are then verified directly. Two steps are involved in indirect verification: (i) deductive development of the hypothesis,
by which certain consequences are predicted, and (ii) finding whether the predicted consequences follow. If the
predicted consequences come true, the hypothesis is said to be indirectly verified. Verification may be
done directly or indirectly or through logical methods.
Testing of a hypothesis is done by using statistical methods. Testing is used to accept or reject an assumption or
hypothesis about a random variable using a sample from the distribution. The assumption is the null hypothesis (H0),
and it is tested against some alternative hypothesis (H1). Statistical tests of hypothesis are applied to sample data. The
procedure involved in testing a hypothesis is: a) select a sample and collect the data; b) convert the variables or
attributes into statistical form, such as a mean or proportion; c) formulate the hypotheses; d) select an appropriate test for the hypothesis.
The z statistic measures the number of standard deviations away from the hypothesized mean the sample mean lies.
From the standard normal tables we can calculate the probability of the sample mean differing from the true
population mean by a specified number of standard deviations.
For example:
o We can find the probability that the sample mean differs from the population mean by two or more standard
deviations.
It is this probability value that will tell us how likely it is that a given sample mean can be obtained from a
population with a hypothesized mean μ.
Hypothesis errors:
type I error (also called alpha error)
o the study results lead to the rejection of the null hypothesis even though it is actually true
type II error (also called beta error)
o the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false
The choice of significance level affects the ratio of correct and incorrect conclusions which will be drawn. Given a
significance level there are four alternatives to consider:
Type I and type II errors
                 H0 is true                        H0 is false
Accept H0        Correct conclusion                Incorrect conclusion (Type II error)
Reject H0        Incorrect conclusion (Type I error)  Correct conclusion
Consider the following example. In a straightforward test of two products, we may decide to market product A if, and
only if, 60% of the population prefer the product. Clearly we can set a sample size, so as to reject the null hypothesis of
A = B = 50% at, say, a 5% significance level. If we get a sample which yields 62% (and there will be 5 chances in a 100
that we get a figure greater than 60%) and the null hypothesis is in fact true, then we make what is known as a Type I
error.
If however, the real population is A = 62%, then we shall accept the null hypothesis A = 50% on nearly half the
occasions as shown in the diagram overleaf. In this situation we shall be saying "do not market A" when in fact there is
a market for A. This is the type II error. We can of course increase the chance of making a type I error which will
automatically decrease the chance of making a type II error.
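The trade-off described above can be illustrated with a small simulation. The sample size (400), the number of simulated experiments, and the one-sided 5% critical value (z = 1.645) are illustrative assumptions, not figures fixed by the text; note that with a large sample the Type II error against a true 62% preference becomes small:

```python
import random

random.seed(42)  # reproducible illustration

def one_sided_reject(n, p_true, p0=0.5, z_crit=1.645):
    """Draw a sample of size n and test H0: p = p0 against H1: p > p0
    at the 5% level (one-sided critical value z = 1.645)."""
    successes = sum(random.random() < p_true for _ in range(n))
    p_hat = successes / n
    se = (p0 * (1 - p0) / n) ** 0.5  # standard error under H0
    return (p_hat - p0) / se > z_crit

trials, n = 2000, 400
# Type I error: H0 is actually true (p = 0.5) but we reject it anyway.
alpha_hat = sum(one_sided_reject(n, 0.50) for _ in range(trials)) / trials
# Type II error: the real preference is 62% but we fail to reject H0.
beta_hat = sum(not one_sided_reject(n, 0.62) for _ in range(trials)) / trials
print(alpha_hat, beta_hat)  # alpha_hat comes out near 0.05; beta_hat is small
```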
Obviously some sort of compromise is required. This depends on the relative importance of the two types of error. If it
is more important to avoid rejecting a true hypothesis (type I error), a high confidence coefficient (low value of α) will
be used. If it is more important to avoid accepting a false hypothesis, a low confidence coefficient may be used. An
analogy with the legal profession may help to clarify the matter. Under our system of law, a man is presumed innocent
of murder until proved otherwise. Now, if a jury convicts a man when he is, in fact, innocent, a type I error will have
been made: the jury has rejected the null hypothesis of innocence although it is actually true. If the jury absolves the
man when he is, in fact, guilty, a type II error will have been made: the jury has accepted the null hypothesis of
innocence although it is actually false.
Uses of Hypothesis
If a clear scientific hypothesis has been formulated, half of the research work is already done. The advantages/utility of
having a hypothesis are summarized below:
i) It is a starting point for many a research work.
ii) It helps in deciding the direction in which to proceed.
iii) It helps in selecting and collecting pertinent facts.
iv) It is an aid to explanation.
v) It helps in drawing specific conclusions.
vi) It helps in testing theories.
vii) It works as a basis for future knowledge.
1. Z-Score Statistics
The z-score is called a test statistic. The purpose of a test statistic is to determine whether the result of a research study
(the obtained difference) is more than what would be expected by chance alone:

    z = (Obtained difference) / (Difference due to chance)
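For a sample mean, this ratio takes the familiar form z = (x̄ − μ)/(σ/√n). A minimal sketch, using illustrative figures that are not from the text:

```python
import math

def z_score(sample_mean, pop_mean, pop_sd, n):
    """z = (obtained difference) / (standard error of the mean)."""
    standard_error = pop_sd / math.sqrt(n)
    return (sample_mean - pop_mean) / standard_error

# Illustrative figures: H0 says mu = 100 with sigma = 15,
# and a sample of 36 observations has mean 105.
z = z_score(105, 100, 15, 36)
print(round(z, 2))  # 2.0 -> the sample mean lies 2 standard errors above mu
```

The standard normal tables then give the probability of a z value this large arising by chance alone.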
Now suppose a manufacturer produces articles of good quality. A purchaser selects a sample randomly, and it so
happens that the sample contains many defective articles, leading the purchaser to reject the whole product. The
manufacturer suffers a loss even though he has produced articles of good quality. Therefore, this
Type-I error is called "producer's risk".
On the other hand, if we accept the entire lot on the basis of a sample and the lot is not really good, the consumers are
put to loss. Therefore, this Type-II error is called "consumer's risk".
In practical situations, still other aspects are considered while accepting or rejecting a lot. The risks involved for both
producer and consumer are compared. Then Type-I and Type-II errors are fixed; and a decision is reached.
2. Student‘s t-distribution
This concept was introduced by W. S. Gosset (1876 - 1937). He adopted the pen name "Student". Therefore, the
distribution is known as "Student's t-distribution".
It is used to establish confidence limits and test the hypothesis when the population variance is not known and sample
size is small (< 30).
If a random sample x1, x2, . . . , xn of n values is drawn from a normal population with mean μ and standard deviation σ,
then the mean of the sample is

    x̄ = (Σ xi) / n
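With the sample mean in hand, the t statistic is t = (x̄ − μ)/(s/√n) with n − 1 degrees of freedom, where s is the sample standard deviation. A sketch with illustrative data (the sample values are not from the text):

```python
import math

def t_statistic(sample, mu0):
    """One-sample t = (x_bar - mu0) / (s / sqrt(n)), with df = n - 1."""
    n = len(sample)
    x_bar = sum(sample) / n
    # Sample standard deviation with n - 1 in the denominator (Bessel's correction).
    s = math.sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))
    return (x_bar - mu0) / (s / math.sqrt(n)), n - 1

# Illustrative small sample (n < 30), testing H0: mu = 10.
t, df = t_statistic([12, 9, 11, 10, 13, 11, 12, 10], 10)
print(round(t, 2), df)  # 2.16 7
```

The computed t is then compared with the tabulated t value for the given degrees of freedom and significance level.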
3. Chi-square test
Tests like z-score and t are based on the assumption that the samples were drawn from normally distributed
populations or more accurately that the sample means were normally distributed. As these tests require assumptions
about the type of population or parameters, these tests are known as ‗parametric tests‘.
There are many situations in which it is impossible to make any rigid assumption about the distribution of the
population from which samples are drawn. This limitation led to the search for non-parametric tests. Chi-square (read as
'ki-square') test of independence and goodness of fit is a prominent example of a non-parametric test. The chi-square
(χ²) test can be used to evaluate a relationship between two nominal or ordinal variables.
χ² (chi-square) is a measure of the actual divergence of the observed and expected frequencies. In sampling studies, we
never expect a perfect coincidence between observed and expected frequencies, and the question we
have to tackle is the degree to which the difference between observed and expected frequencies can be ignored as
arising due to fluctuations of sampling. If there is no difference between observed and expected frequencies then χ² = 0. If
there is a difference, then χ² will be more than 0. But the difference may also be due to sample fluctuation, and thus
its significance must be tested. Consider the following 2 × 2 table of two attributes A and B:
              A        a     Total
B            22       38        60
b             8       32        40
Total        30       70       100
Now the formula for calculating the expected frequency of any class (cell) is:

    Expected frequency = (R × C) / N

where R is the total of the row in which the cell lies, C is the total of its column, and N is the grand total. For
example, if the two attributes A and B are independent, the expected frequency of the class (cell) AB
would be (60 × 30) / 100 = 18.
Once the expected frequency of cell (AB) is decided the expected frequencies of remaining three classes are
automatically fixed.
Thus in a 2 × 2 table, df = (2 − 1)(2 − 1) = 1; in a 3 × 3 table, df = (3 − 1)(3 − 1) = 4; in a 4 × 4 table,
df = (4 − 1)(4 − 1) = 9; and so on.
If the data are not in the form of a contingency table but form a series of individual observations, or a discrete or
continuous series, then df = n − 1, where n is the number of frequencies or independent values.
    χ² = Σ (Observed frequency − Expected frequency)² / Expected frequency

       = Σ (O − E)² / E
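Applying χ² = Σ(O − E)²/E to the 2 × 2 table above (observed frequencies 22, 38, 8, 32), a sketch:

```python
def chi_square(table):
    """chi2 = sum((O - E)^2 / E) over all cells of a contingency table
    given as a list of rows; also returns the degrees of freedom."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # E = (R x C) / N
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

chi2, df = chi_square([[22, 38],
                       [8, 32]])
print(round(chi2, 3), df)  # 3.175 1
```

With 1 degree of freedom the 5% critical value of χ² is 3.841, so a value of about 3.17 would not be judged significant at that level: the divergence could be due to sampling fluctuation.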
Meaning of interpretation
The following definition explains the meaning of interpretation:
"The task of drawing conclusions or inferences and of explaining their significance after a careful analysis of selected
data is known as interpretation."
Why interpretation?
A researcher/ statistician is expected not only to collect and analyse the data but also to interpret the results of his/ her
findings. Interpretation is essential for the simple reason that the usefulness and utility of research findings lie in
proper interpretation. It is only through interpretation that the researcher can expose relations and patterns that
underlie his findings. In case of hypothesis testing studies the researcher may arrive at generalizations. In case the
researcher had no hypothesis to start with, he would try to explain his findings on the basis of some theory. It is only
through interpretation that the researcher can appreciate why his findings are what they are, and can make others
understand the real significance of his research findings.
Interpretation is not a mechanical process. It calls for a critical examination of the results of one's analysis in the light of
all the limitations of data gathering. For drawing conclusions you need a basis. Some of the common and important
bases of interpretation are: relationships, ratios, rates and percentages, averages and other measures of comparison.
Precautions in interpretation
It is important to recognize that errors can be made in interpretation if proper precautions are not taken. The
interpretation of data is a very difficult task and requires a high degree of skill, care, judgement and objectivity. In the
absence of these, there is every likelihood of data being misused to prove things that are not true. The following
precautions are required before interpreting the data.
1) The interpreter must be objective.
2) The interpreter must understand the problem in its proper perspective.
3) He / she must appreciate the relevance of various elements of the problem.
4) See that all relevant, adequate and accurate data are collected.
5) See that the data are properly classified and analyzed.
6) Find out whether the data are subject to limitations; if so, what are they?
7) Guard against the sources of errors.
8) Do not make interpretations that go beyond the information / data.
9) Factual interpretation and personal interpretation should not be confused. They should be kept apart.
If these precautions are taken at the time of interpretation, reasonably good conclusions can be arrived at.
The most common types of chart used in reports include:
1. pie charts
2. vertical bar charts (histograms)
3. horizontal bar charts (also histograms)
4. pictograms
5. line charts
6. area charts
Some other types of charts, well suited to audience research, but less often used, include
7. perceptual maps
Though many different kinds of graph are possible, if a report includes too many types it's often confusing for readers,
who must work out how to interpret each new type of graph, and why it is different from an earlier one. It is
recommended to use as few types of graph as are necessary.
If you have a spreadsheet or graphics program, such as Excel or Deltagraph, it's very easy to produce graphs. You
simply enter the numbers and labels in a table, click a symbol to show which type of graph you want, and it appears
before your eyes. These graphs are usually not very clear when first produced, but the software has many options for
changing headings, scales, and graph layout. You can waste a lot of time perfecting these graphs. Excel (actually,
Microsoft Graph, which Excel uses) has dozens of options, and it takes a lot of clicking of the right-hand mouse button
to discover them all. If you don't have a recent and powerful computer, Excel can be a very slow and frustrating
program to use.
The main types of graph include pie charts, bar charts (histograms), line charts, area charts, and several others.
1) Pie chart
A round graph, cut (like a pie) into slices of varying size, all adding to 100%. Because a pie chart is round, it's useful for
communicating data which takes a "round" form: for example, the answers to "How many minutes in each hour would
you like FM RADIOMIRCHI to spend on each of the following types of program...?" In this case, the pie corresponds to a
clock face, and the slices can be interpreted as fractions of an hour.
Pie charts are easily understood when the slices are similar in size, but if several slices are less than 5%, or lots of
different colours are used, it can be quite difficult to read a pie chart. In that case the chart has to be very big, taking
perhaps half a page to convey one set of numbers: not a very efficient way to display information.
4) Pictogram
If each symbol represents 2% of the sample, you can usually fit the graph on a single line. Round each figure to the
nearest 2% to work out how many times to press the symbol key. In the example below, 47.4% is closer to 48% than to
46%, so the | key is pressed 24 times to graph the percentage of men:
Male 47.4%
Female 52.6%
This is a very clear layout, and quick to produce, so it is well suited to a preliminary report.
A more elaborate-looking graph can be made by using special symbols. For example, if you have the font Zapf
Dingbats or Wingdings, you can use a shaded symbol such as a little man. This is wider than the | symbol, and no
more than about 20 will fit on a normal-width line if half the line is taken up by labels. If each symbol represents
10%, and the number to be graphed is 45%, you see four and a half little men...
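The rounding rule described above (one | symbol per 2% of the sample) can be sketched as:

```python
def pictogram_line(label, percent, unit=2):
    """Round percent to the nearest `unit` and print one | per unit."""
    bars = round(percent / unit)
    return f"{label:<8}{percent:>5.1f}%  " + "|" * bars

# 47.4% is closer to 48% than to 46%, so the | symbol appears 24 times.
print(pictogram_line("Male", 47.4))
print(pictogram_line("Female", 52.6))
```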
5) Line chart
This is used when the variable you are graphing is a numeric one. In audience research, most variables are nominal,
not numeric, so line charts aren't needed much. But to plot the answers to a question such as "How many people live
in your household?" you could produce a graph like this:
It's normal to show the measurement (e.g. percentage) on the vertical axis, and the scale (e.g. hours per week) on the
horizontal axis. Unlike a bar chart, it will confuse people if the scales are exchanged. You'll find that almost every
line chart has a peak in the middle and falls off to each side, reflecting what's known as the "normal curve."
A line chart is really another form of a vertical bar chart. You could turn a vertical bar chart into a line chart by drawing
a line connecting the top of each bar, then deleting the bars.
A line chart can have more than one line. For example, you could have a line chart comparing the number of hours per
week that men and women watch TV. There would be two lines, one for each sex. Each line needs to be shown with a
different style or a different colour. With more than 3 or 4 lines, a line chart becomes very confusing, especially when
the lines cross each other.
6) Area chart
In a line chart with several lines (such as the above example, with two sexes) each line starts from the bottom of the chart. That way, you can compare the height of the lines at any point. An area chart is a little different, in that each line starts from the line below it. So you don't compare the height of the lines, but the areas between them. These areas always add up to 100%. You can think of an area chart as a lot of pie charts, flattened out and laid end-to-end.
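The stacking rule (each line starts from the line below it, so the top line runs along 100%) can be sketched in a few lines of Python. The series data and labels here are hypothetical:

```python
def stack_series(series):
    """Turn raw percentage series into cumulative 'area chart' lines.

    `series` maps a label to a list of percentages, one per time point.
    Each returned line is the previous line plus the new series, so the
    last line is the running total (100% when the series are exhaustive).
    """
    stacked, running = {}, None
    for label, values in series.items():
        if running is None:
            running = list(values)
        else:
            running = [a + b for a, b in zip(running, values)]
        stacked[label] = list(running)
    return stacked

# Hypothetical shares of men's and women's viewing at three time points:
layers = stack_series({"Men": [20, 40, 55], "Women": [80, 60, 45]})
print(layers["Women"])   # the top line sits at 100 throughout
```

The area between the bottom of the chart and the "Men" line is the men's share; the area between the "Men" and "Women" lines is the women's share.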
A common use of area charts in audience research is to show how people's behaviour changes across the 24 hours of the day. The horizontal scale runs from midnight to midnight.
1. Define the term 'Hypothesis'. Differentiate among assumption, postulate and hypothesis.
2. Explain the nature and functions of a hypothesis in a research process.
3. Enumerate the significance and importance of hypotheses in scientific research.
4. There are various kinds of hypotheses. Mention some important hypotheses. Why do researchers prefer non-directional hypotheses?
5. A hypothesis is a statement which involves a relationship between variables. Enumerate the types of variables included in stating a hypothesis.
6. Explain the procedure for testing a statistical hypothesis.
7. Describe a situation where you can apply t-distribution.
8. How would you distinguish between a t-test for independent sample and a paired t-test?
9. State any five precautionary steps to be taken before interpretation.
10. What is meant by interpretation of statistical data? What precautions should be taken while interpreting the data?
11. What do you understand by interpretation of data? Illustrate the types of mistakes which frequently occur in
interpretation.
12. Explain the need, meaning and essentials of interpretation.
13. Write a short note on:
A. Cross tabulation
B. Z test
C. T test
D. F test
Introduction
The last and final phase of the journey in research is the writing of the report. After the collected data has been analyzed and interpreted and generalizations have been drawn, the report has to be prepared. The task of research is incomplete till the report is presented.
Writing of a report is the last step in a research study and requires a set of skills somewhat different from those called
for in respect of the earlier stages of research. This task should be accomplished by the researcher with utmost care.
Purpose of a report
The report may be meant for the people in general, when the investigation has not been carried out at the instance of
any third party. Research is essentially a cooperative venture and it is essential that every investigator should know
what others have found about the phenomena under study. The purpose of a report is thus the dissemination of knowledge, the broadcasting of generalizations so as to ensure their widest use.
A report of research has only one function: "it must inform". It has to propagate knowledge. Thus, the purpose of a
report is to convey to the interested persons the results and findings of the study in sufficient detail, and so arranged as
to enable each reader to comprehend the data, and to determine for himself the validity of conclusions. Research
results must invariably enter the general store of knowledge. A research report is always an addition to knowledge. All
this explains the significance of writing a report. In a broader sense, report writing is common to both academics and
organizations. However, the purpose may be different. In academics, reports are used for comprehensive and
application-oriented learning, whereas in organizations reports form the basis for decision making.
Meaning
Reporting simply means communicating or informing through reports. The researcher has collected some facts and
figures, analyzed the same and arrived at certain conclusions. He has to inform or report the same to the parties
interested. Therefore "reporting is communicating the facts, data and information through reports to the persons for whom such facts and data are collected and compiled".
A report is not a complete description of what has been done during the period of survey/research. It is only a
statement of the most significant facts that are necessary for understanding the conclusions drawn by the investigator.
Thus, "a report, by definition, is simply an account". The report thus is an account describing the procedure adopted,
the findings arrived at and the conclusions drawn by the investigator of a problem.
1. Keep to the time allowed. If you can, keep it short. It's better to under-run than over-run. As a rule of thumb, allow 2
minutes for each general overhead transparency or PowerPoint slide you use, but longer for any that you want to use
for developing specific points. 35 mm slides are generally used more sparingly and stay on the screen longer.
However, the audience will get bored with something on the screen for more than 5 minutes, especially if you are not
actively talking about it. So switch the display off, or replace the slide with some form of 'wallpaper' such as a
company logo.
2. Stick to the plan for the presentation, don't be tempted to digress - you will eat up time and could end up in a dead-
end with no escape!
3. Unless explicitly told not to, leave time for discussion - 5 minutes is sufficient to allow clarification of points. The
session chairman may extend this if the questioning becomes interesting.
4. At the end of your presentation ask if there are any questions - avoid being terse when you do this, as the audience may find it intimidating (a curt "Any questions?" can come across as a challenge, implying that anyone who has a question was not paying attention). If questions are slow in coming, you can start things off by asking a question of the audience - so have one prepared.
Visual Aids
Visual aids significantly improve the interest of a presentation. However, they must be relevant to what you want to
say. A careless design or use of a slide can simply get in the way of the presentation. What you use depends on the
type of talk you are giving. Here are some possibilities:
Overhead projection transparencies (OHPs)
35mm slides
Computer projection (PowerPoint, applications such as Excel, etc)
Video, and film,
Real objects - either handled from the speaker's bench or passed around
Flipchart or blackboard - possibly used as a 'scratch-pad' to expand on a point
2. Research methodology
3. Background to the research problem
4. Objectives and hypotheses
5. Data collection
6. Sample and sampling method
7. Statistical or qualitative methods used for data analysis

C) Reference Matters
1. Bibliography
2. Appendices (optional)
3. Glossary (optional)
4. References (optional)
A) Front Pages
1) Title Page
The cover page should display the full names of the researcher and the guide, along with their qualifications, and the title of the report.
2) Certificate
The format for this is given in the sample page below.
3) Declaration
The format for this is given in the sample page below.
4) Acknowledgments
The researcher may wish to acknowledge people who helped in preparation of report. For example, you may wish to
thank someone you interviewed, or someone who provided you with some special information.
C) Reference Matter
i) Bibliography
A bibliography is an alphabetical list of all materials consulted in the preparation of research.
ii) Appendices containing copies of the questionnaires, etc.
Why do a bibliography?
Some reasons:
1. To acknowledge and give credit to sources of words, ideas, diagrams, illustrations, and quotations borrowed, or any
materials summarized or paraphrased.
2. To show that you are respectfully borrowing other people's ideas, not stealing them, i.e. to prove that you are not plagiarizing (copying).
3. To offer additional information to readers who may wish to further pursue the topic.
1. Author
Ignore any titles, designations or degrees, etc. which appear before or after the name, e.g., The Honourable, Dr., Mr.,
Mrs., Ms., Rev., S.J., Esq., Ph.D., M.D., Q.C., etc. Exceptions are Jr. and Sr. Do include Jr. and Sr. as John Smith, Jr. and
John Smith, Sr. are two different individuals. Include also I, II, III, etc. for the same reason.
Examples:
a) Last name, first name:
Kotler, Philip.
Christensen, Asger.
Wilson-Smith, Anthony.
b) Last name, first and middle names:
Wyse, Cassandra Ann Lee.
c) Last name, first name and middle initial:
Schwab, Charles R.
d) Last name, initial and middle name:
Holmes, A. William.
e) Last name, initials:
Meister, F.A.
f) Last name, first and middle names, Jr. or Sr. designation:
Davis, Benjamin Oliver, Jr.
g) Last name, first name, I, II, III, etc.:
Stilwell, William E., IV.
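The inversion rules above (drop titles and degrees, keep Jr., Sr. and numerals) can be sketched as a small function. This is a simplified illustration, not a complete citation formatter; the title and suffix lists are deliberately partial:

```python
TITLES = {"Dr.", "Mr.", "Mrs.", "Ms.", "Rev.", "Esq.", "Ph.D.", "M.D.", "Q.C."}
SUFFIXES = {"Jr.", "Sr.", "I", "II", "III", "IV"}   # kept, per the rules above

def bib_author(name):
    """Invert 'First [Middle] Last[, Suffix]' to 'Last, First Middle[, Suffix].'

    Titles and degrees are dropped; Jr./Sr. and numerals are kept,
    since they distinguish different individuals.
    """
    parts = [p for p in name.replace(",", " ").split() if p not in TITLES]
    suffix = ""
    if parts and parts[-1] in SUFFIXES:
        suffix = ", " + parts.pop()
    last = parts.pop()
    out = f"{last}, {' '.join(parts)}{suffix}"
    return out if out.endswith(".") else out + "."

print(bib_author("Dr. Benjamin Oliver Davis, Jr."))
print(bib_author("William E. Stilwell IV"))
```

Run against examples (f) and (g) above, this reproduces "Davis, Benjamin Oliver, Jr." and "Stilwell, William E., IV."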
5. Date of publication
a) For a book, use the copyright year as the date of publication, e.g.: 2003, not ©2003 or Copyright 2003, i.e. do not draw
the symbol © for copyright or add the word Copyright in front of the year.
b) For a monthly or quarterly publication use month and year, or season and year. Spell out the months May, June, and July; abbreviate all other months: Jan., Feb., Mar., Apr., Aug., Sept., Oct., Nov., and Dec. Note that no extra period follows the month in a citation; the period after Jan., for instance, belongs to the abbreviation of January only. See Abbreviations of Months of the Year, Days of the Week, and Other Time Abbreviations. If no months are stated, use Spring, Summer, Fall, Winter, etc. as given, e.g.:
Alternatives Journal Spring 2004.
Classroom Connect Dec. 2003/Jan. 2004.
Discover July 2003.
Scientific American Apr. 2004.
c) For a weekly or daily publication use date, month, and year, e.g.:
Newsweek 11 Aug. 2003.
d) Use the most recent Copyright year if two or more years are listed, e.g., ©1988, 1990, 2004. Use 2004.
e) Do not confuse Date of Publication with Date of Printing, e.g., 7th Printing 2004, or Reprinted in 2004. These are not
publication dates.
f) If you cannot find a publication date anywhere in the book, use "n.d." to indicate there is "No Date" listed for this
publication.
g) If there is no publication date, but you are able to find out from reliable sources the approximate date of publication,
use [c. 2004] for circa 2004, or use [2003?]. Always use square brackets [ ] to indicate information that is not given but is
supplied by you.
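The month rule above (spell out May, June and July; abbreviate the rest) is mechanical enough to capture in a small lookup helper. A sketch, with an illustrative function name:

```python
def mla_month(month):
    """Return a month name formatted per the rules above:
    May, June and July are spelled out; all others are abbreviated."""
    spelled_out = {"May", "June", "July"}
    abbr = {"January": "Jan.", "February": "Feb.", "March": "Mar.",
            "April": "Apr.", "August": "Aug.", "September": "Sept.",
            "October": "Oct.", "November": "Nov.", "December": "Dec."}
    return month if month in spelled_out else abbr[month]

print(mla_month("December"))  # Dec.
print(mla_month("June"))      # June
```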
6. Page number(s)
a) Page numbers are not needed for a book, unless the citation comes from an article or essay in an anthology, i.e. a
collection of works by different authors.
Example of a work in an anthology (page numbers are for the entire essay or piece of work):
Fish, Barry, and Les Kotzer. "Legals for Life." Death and Taxes: Beating One of the Two Certainties in Life. Ed. Jerry
White. Toronto: Warwick, 1998. 32-56.
b) If there is no page number given, use "n. pag."
(Works Cited example)
Schulz, Charles M. The Meditations of Linus. N.p.: Hallmark, 1967.
(Footnote or Endnote example)
1 Charles M. Schulz, The Meditations of Linus (N.p.: Hallmark, 1967) n. pag.
c) To cite a source with no author, no editor, no place of publication or publisher stated, no year of publication, but you
know where the book was published, follow this example:
Full View of Temples of Taiwan - Tracks of Pilgrims. [Taipei]: n.p., n.d.
9. Length of project: The project should be approximately 15,000 - 22,000 words (for a project at Post-graduate level).
10. Submitted copies of the project should be hard-bound volume only.
11. If you wish to acknowledge any individual's contribution to the project, this should be stated on a separate
acknowledgement page.
12. Your project should contain a list of contents which states the page number of each section of the project.
13. Appendices should not be considered part of the project report (for example, raw data could be included in this
way). Appendices should be placed at the very end of the project and referred to in the contents section.
Research in Commerce
Commerce is the whole system of an economy that constitutes an environment for business. The system includes legal,
economic, political, social, cultural and technological systems that are in operation in any country. Thus, commerce is a
system or an environment that affects the business prospects of an economy or a nation-state. It can also be defined as a
component of business which includes all activities, functions and institutions involved in transferring goods from
producers to consumers.
The term commerce refers to the process of buying and selling (wholesale, retail, import, export and entrepot trade) and all those activities which facilitate or assist in such buying and selling, such as storing, grading, packaging, financing, transporting, insuring, communicating, warehousing, etc.
The main function of commerce is to remove the hindrances of (i) persons, through trade; (ii) place, through transportation, insurance and packaging; (iii) time, through warehousing and storage; and (iv) knowledge, through salesmanship, advertising, etc., arising in connection with the distribution of goods and services until they reach the consumers.
The concept of commerce includes two types, namely: (i) Trade and (ii) Aids to trade, which are explained below. Trade is the central activity around which the ancillary functions like Banking, Transportation, Insurance, Packaging, Warehousing and Advertising cluster.
Trade may be classified into two broad categories as follows:
(a) Internal or Domestic Trade: It consists of buying and selling of goods within the boundaries of a country and
the payment for the same is made in national currency either directly or through the banking system. Internal trade
may be further sub-classified into wholesale trade and retail trade.
(b) International or Foreign Trade: It refers to the exchange of goods and services between two or more countries. International trade involves the use of foreign currency (called foreign exchange), ensuring the payment of the price of the exported goods and services to the domestic exporters in domestic currency, and the payment of the price of the imported goods and services to the foreign exporter in that country's national currency (foreign exchange). To facilitate this payment, involving exchange transactions, a highly developed system of international banking under the overall control and supervision of the central bank of the concerned country (the Reserve Bank of India in our case) is essential.
(ii) Auxiliary to Trade or Aids to Trade: As mentioned above, there are certain functions such as banking, transportation, insurance, warehousing, advertising, etc. which constitute the main auxiliary functions helping trade, both internal and international. These auxiliary functions are briefly discussed hereunder:
(a) Banking: Banks provide a device through which payments for goods bought and sold are made thereby
facilitating the purchase and sale of goods on credit. Banks serve the useful economic function of collecting the savings
of the people and business houses and making them available to those who may profitably use them. Thus, banks may
be regarded as traders in money and credit.
(b) Transportation: Transport performs the function of carrying goods from producers
to wholesalers, retailers, and finally customers. It provides the wheels of commerce. It has linked all parts of the world
by facilitating international trade.
(c) Warehousing: There is generally a time lag between the production and consumption of goods. This problem can be solved by storing the goods in a warehouse. Storage creates time utility and removes the hindrance of time in trade. It performs the useful function of holding the goods for the period they move from one point to another. Thus, warehousing discharges the function of storing the goods for both manufacturers and traders until they decide to move the goods from one point to another.
(d) Insurance: Insurance provides a cover against the loss of goods in the process of transit and storage. An insurance company performs the useful service of compensating for losses arising from damage caused to goods through fire, pilferage, theft and the hazards of the sea and transportation, and thus protects traders from the fear of loss of goods. It charges an insurance premium for the risk covered.
(e) Advertising: Advertising performs the function of bridging the information gap about the availability and uses of
goods between traders and consumers. In the absence of advertising, goods would not have been sold to a widely
scattered market and customers would not have come to know about many of the new products because of the paucity
of time, physical-spatial distance, etc.
Knowledge and research in all the above functional areas of commerce are essential for smooth
functioning of businesses.
1. Marketing
Marketing research is undertaken to assist the marketing function. Marketing research stimulates the flow of marketing data from the consumer and his environment to the marketing information system of the enterprise. Market research involves the process of:
Systematic collection
Compilation
Analysis
Interpretation of relevant data for marketing decisions
This information goes to the executive in the form of data. On the basis of this data the executive develops plans and programmes. Advertising research, packaging research, performance evaluation research, sales analysis, distribution channel research, etc., may also be considered part of marketing research.
Research tools are applied effectively for studies involving:
1. Demand forecasting
2. Consumer buying behaviour
3. Measuring advertising effectiveness
4. Media selection for advertising
5. Test marketing
6. Product positioning
7. Product potential
Marketing Research
i. Product Research: Assessment of suitability of goods with respect to design and price.
ii. Market Characteristics Research (Qualitative): Who uses the product? Relationship between buyer and user,
buying motive, how a product is used, analysis of consumption rates, units in which product is purchased, customs
and habits affecting the use of a product, consumer attitudes, shopping habits of consumers, brand loyalty, research of
special consumer groups, survey of local markets, basic economic analysis of the consumer market, etc.
iii. Size of Market (Quantitative): Market potential, total sales quota, territorial sales quota, quota for individuals,
concentration of sales and advertising efforts; appraisal of efficiency, etc.
iv. Competitive position and Trends Research
v. Sales Research: Analysis of sales records.
vi. Distribution Research: Channels of distribution, distribution costs.
vii. Advertising and Promotion Research: Testing and evaluating, advertising and promotion
viii. New product launching and Product Positioning.
Department of Agriculture
Department of Commerce
Department of Defense
Department of Education
Department of Energy
Department of Health and Human Services
Department of Homeland Security
Department of Transportation
Environmental Protection Agency
National Aeronautics and Space Administration
National Science Foundation
National Institutes of Health
Participating agencies publish one or more SBIR solicitations per year. The solicitation is essentially a grocery list of
topics and areas where they are interested in sponsoring research. In the case of some agencies such as the
Departments of Defense and Homeland Security the topics are very specific. These agencies have some very real,
specific and immediate problems that they need your help in solving. At the other end of the specificity spectrum, the
Eligibility
To be eligible to participate, a company must be 51% owned and controlled by individuals who are U.S. citizens or
permanent resident aliens. It must also be a small business with no more than 500 employees including affiliates. All
Phase I and Phase II work must be performed in the U.S.
Strengths of SPSS
1. Cross-tabulation in SPSS is very good indeed, and with the addition of the TABLES product, the written output
can be made to look extremely professional. You can have your data subdivided into categories in several
dimensions and then get a whole range of descriptive statistics for each cell in the categorisation. This is no more
and no less than you would expect from a survey analysis package. There are also a very good range of simple
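Cross-tabulation itself is straightforward to prototype outside any statistics package. A minimal pure-Python sketch using `collections.Counter` (the survey records and field names here are hypothetical, invented for illustration):

```python
from collections import Counter

def crosstab(records, row_key, col_key):
    """Count joint occurrences of two categorical variables,
    the core operation of cross-tabulation."""
    return Counter((r[row_key], r[col_key]) for r in records)

# Hypothetical survey records:
survey = [
    {"sex": "M", "watches_tv": "yes"},
    {"sex": "F", "watches_tv": "yes"},
    {"sex": "F", "watches_tv": "no"},
    {"sex": "M", "watches_tv": "yes"},
]
table = crosstab(survey, "sex", "watches_tv")
print(table[("M", "yes")])  # count of men who answered "yes"
```

A package like SPSS adds the descriptive statistics per cell and the polished written output; the counting itself is this simple.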
Weaknesses of SPSS
1. The major weakness of SPSS is in its handling of designed experiments. It either does things badly or in an
extremely convoluted manner. There are probably very few people in the world who fully understand the
MANOVA command and how to make it do all the things that it is supposed to do. It tries to do too much in one
command and ends up doing almost everything in a totally counter-intuitive way.
2. Until recently there were very few techniques in the package designed for data which is at best ordinal, which is
surprising for a package that is targeted at survey data, though the recent inclusion of multi-dimensional scaling
and the optional extra CATEGORIES, which includes correspondence analysis, has improved matters.
3. In terms of the philosophy of statistics, SPSS will lead the unwary astray. The statistical philosophy demands that
you make your assumptions explicit before making a hypothesis test. In most packages you have to make your
assumptions clear in subcommands or options and will therefore be making a specific test. In SPSS you get a
series of answers which have different assumptions attached to them, and then you choose the answer that you
like best. SPSS does not prevent you from establishing a priori assumptions but it does not encourage you to do it.
4. In addition almost all SPSS commands have defaults for most of the choices between methods, and so if you do
not specify anything you get an analysis which may well be inappropriate to your type of data and situation.
Considerable care is needed, especially with some of the more sophisticated techniques, in order to specify an
appropriate form of analysis. In order to do this a set of manuals is essential. They are well written and they are
the only place where you can find out exactly what the analysis is going to do to your data and what assumptions
are to be made. Many people remark that SPSS is easy to use, that they understand it and that one doesn't need
manuals to use it. It is in reality easy to misuse, many of the techniques are extremely difficult to understand, and
if you use it without manuals you are in grave danger of seriously undermining your academic credibility.
5. Under most circumstances it is very difficult in SPSS to pass the results of one analysis as input to another, as it
does not support the data structures to do this. This reduces the flexibility of the package quite considerably.
6. SPSS is very poor at assumption checking; if it warns you about problems with your data you are in serious trouble, as such warnings are few and far between. Much the same, in this respect, applies to SPSS as to MINITAB.
The warnings and the checks are all described in the manuals, but the checks have to be carried out by you on a
preliminary analysis or analyses of the data, and then you need to modify the options accordingly - none of this
will be done for you automatically.
1. Explain the stages in the research process with the help of a flow chart of research process.
2. A researcher is interested in knowing the answer to a why question, but does not know what sort of answer will be satisfying. Is this exploratory, descriptive, or causal research? Explain.
3. What is the task of problem definition? The city police wishes to understand its image from the public's point of view. Define the business problem.
4. Which categories of exploratory research would you suggest in each of the following situations?
(a) A product manager suggests that a non-tobacco cigarette, blended from wheat, cocoa, and citrus, be
developed.
(b) A manager needs to determine the best site for a departmental store in an urban area.
5. With the help of examples, classify survey research methods.
6. Discuss the use of self – administered questionnaires along with their classifications.
7. Design a complete questionnaire to evaluate job satisfaction of entry level marketing executives.
8. Outline the step – by – step procedure to select following:-
(a) A sample of 150 students at your school,
(b) A sample of 50 mechanical engineers, 40 electrical engineers, and 40 civil engineers, from the subscriber list
of an engineering journal,
(c) A sample of two-wheeler and four-wheeler owners in a 'Big Bazaar' intercept sample,
(d) A sample of male and female workers to compare hourly wages of drill press operators.
9. What is a hypothesis? Write the general procedure for hypothesis testing. Differentiate between Type I and Type II errors.
10. Define and classify secondary data. Discuss the process of evaluating secondary data.
11. Discuss in detail the application of Research Methodology in Business Management.
12. Discuss various contents required in the layout of Internet questionnaire.
13. Compare sampling techniques in detail. Differentiate between t-distribution and z-distribution. Write a detailed note on Total Survey Error.
14. Discuss various factors that influence the validity of experimental studies in research.
15. Company manufacturing readymade snacks introduced its new product with different flavours in Indian market.
The company looks forward to note the preferences of consumers for the offered flavours. The company is also
interested in developing new flavours that can do well in the market.
16. What type of research should be conducted? Give reasons to support your answer.
17. Design the research process in detail. Support your answer with flow diagram.
18. Give meaning of research and describe the stages of development of Research.
19. State the meaning and importance of Hypothesis with examples.
20. What are the major characteristics in sampling? State the type of sampling with suitable illustrations.
21. Discuss briefly the various methods of data collection. What steps will you follow while writing a Research
Report?
22. Write notes on any two:-
(a) Scaling Techniques,
(b) Processing of Data,
(c) Presentation of Data
23. What do you understand by the term 'Research'? Which are the various stages in the development of a research?
24. What are the various functional areas in Commerce in which Research can be of great significance to the organization?
Q2. (a) "Research design in exploratory studies must be flexible, but in descriptive studies it must minimize bias and maximize reliability." Discuss.
(b) What are the characteristics of a good sample design?
or
(c) What do you mean by 'Sample Design'? What points should be taken into consideration by a researcher in developing a sample design for a research project?