Research Designs in Marketing Research
Research designs
A research design is a framework or blueprint for conducting the business research project. It details the procedures necessary for obtaining the information needed to structure or solve business research problems.
Research Design
The research design is the master plan specifying the methods and procedures for collecting and analyzing the needed information.
Specify the data collection procedures. Specify the sampling process and sample size. Develop a plan of data analysis.
Three traditional categories of research design: exploratory, descriptive, and causal/experimental. The choice of the most appropriate design depends largely on the objectives of the research and on how much is known about the problem.
Descriptive Research
Causal Research
Cross-Sectional Design
Longitudinal Design
The choice spans a continuum of certainty: under absolute ambiguity the design is exploratory; as the problem approaches complete certainty, a causal or descriptive design is appropriate.
Conclusive Research
Characteristics: to test specific hypotheses and examine relationships. Information needed is clearly defined. Research process is formal and structured. Sample is large and representative. Data analysis is quantitative. Results are conclusive, as opposed to the tentative results of exploratory research.
Exploratory vs. Descriptive vs. Causal
Exploratory: discovery of ideas and insights; flexible, versatile.
Descriptive: describes market characteristics or functions; marked by the prior formulation of specific hypotheses; preplanned and structured design.
Causal: determines cause-and-effect relationships; manipulation of one or more independent variables; control of other mediating variables; method: experiments.
Characteristics: often the front end of the total research design.
Methods: expert surveys, pilot surveys, secondary data, qualitative research.
Descriptive questions: What kind of people are buying our product? Who buys our competitors' product? What features do buyers prefer in our product?
Causal questions: Will buyers purchase more of our products in a new package? Which of two advertising campaigns is more effective?
Exploratory: to gain background information, define terms, clarify problems and develop hypotheses, establish research priorities, and develop questions to be answered.
Descriptive: to describe and measure marketing phenomena at a point in time.
Causal/Experimental: to determine causality, test hypotheses, make if-then statements, and answer questions.
Exploratory research is used in a number of situations: to gain background information, to define terms, to clarify problems and hypotheses, and to establish research priorities.
Uses of exploratory research: define the problem more precisely; identify alternative courses of action; develop hypotheses; isolate key variables and relationships for further examination; gain insights for developing an approach to the problem; establish priorities for further research.
Panels
Observational and other data
Cross-sectional Designs
Cross-sectional designs involve the collection of information from any given sample of population elements only once. In single cross-sectional designs, there is only one sample of respondents, and information is obtained from this sample only once. In multiple cross-sectional designs, there are two or more samples of respondents, and information from each sample is obtained only once; often, information from different samples is obtained at different times. Cohort analysis consists of a series of surveys conducted at appropriate time intervals, where the cohort serves as the basic unit of analysis. A cohort is a group of respondents who experience the same event within the same time interval.
Experiments
An experiment is defined as a research process that allows the study of one or more variables, which can be manipulated under conditions that permit the collection of data showing the effect of such variables in an unconfounded fashion.
Independent variables: those over which the researcher has control and wishes to manipulate, e.g., package size, ad copy, price.
Dependent variables: those over which the researcher has little or no direct control but a strong interest in testing, e.g., sales, profit, market share.
Extraneous variables: those that may affect a dependent variable but are not independent variables.
Experimental Design
An experimental design is a procedure for devising an experimental setting such that a change in the dependent variable may be attributed solely to a change in an independent variable. Symbols of an experimental design:
O = measurement of a dependent variable
X = manipulation, or change, of an independent variable
R = random assignment of subjects to experimental and control groups
E = experimental effect
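Using this notation, the before-after with control group design (R O1 X O2 / R O3 O4) estimates the experimental effect as E = (O2 - O1) - (O4 - O3). A minimal sketch, with all measurements hypothetical illustration values:

```python
# Before-After with Control Group design (R O1 X O2 / R O3 O4).
# The experimental effect E is the change in the experimental group
# minus the change in the control group, which nets out extraneous factors.

def experimental_effect(o1, o2, o3, o4):
    """E = (O2 - O1) - (O4 - O3)."""
    return (o2 - o1) - (o4 - o3)

# Hypothetical coupon experiment: purchase-intent scores (0-100)
o1, o2 = 30, 45   # experimental group: before, after exposure to X
o3, o4 = 31, 36   # control group: before, after (no exposure)

e = experimental_effect(o1, o2, o3, o4)
print(e)  # 10: of the 15-point gain, 5 points reflect extraneous factors
```

The subtraction of the control group's change is what separates this design from the weaker before-after design, whose estimate X2 - X1 still contains history and maturation effects.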
Causal research objectives: to determine which variables are the cause (independent variables) and which variables are the effect (dependent variables) of a phenomenon; to determine the nature of the relationship between the causal variables and the effect to be predicted. Method: experiments.
Informal experimental designs: use a less sophisticated form of analysis; allow study of one variable at a time.
Formal experimental designs: use precise statistical methods for analysis; allow study of more than one variable at a time; interaction between variables can be studied.
After Only
Before-After
Before-After with Control Group
After Only with Control Group
After Only: the dependent variable is measured only after the independent variable is introduced, e.g., purchase of Pepsi through coupon redemption in an advertisement. Not a strong design, because the before-event response is not measured. Sometimes used for new products, when the before measurement is known to be zero.
Before-After Design
Before measurement: X1
Experimental variable introduced: yes
After measurement: X2
Effect of experimental variable = X2 - X1
Weaknesses: history effect, maturation, pre-test effect, variation introduced by the researcher.
Completely Randomized Design (describe the experiment of regular vs. special training). Always use simple random selection. Involves two principles: replication and randomization. One-way ANOVA is used for analysis. Shortcoming: cannot control the effect of extraneous variables, e.g., the quality of training delivered by different trainers.
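The one-way ANOVA used to analyze a completely randomized design can be sketched in pure Python; the trainee test scores and the three-trainer grouping below are hypothetical:

```python
# One-way ANOVA for a completely randomized design.
# Hypothetical scores of trainees randomly assigned to three trainers.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of score groups."""
    all_scores = [x for g in groups for x in g]
    n = len(all_scores)
    grand_mean = sum(all_scores) / n

    # Between-group SS: variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group SS: variation of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = len(groups) - 1
    df_within = n - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

trainer_a = [72, 75, 78]
trainer_b = [68, 70, 69]
trainer_c = [80, 82, 84]

f, dfb, dfw = one_way_anova([trainer_a, trainer_b, trainer_c])
print(round(f, 2), dfb, dfw)  # a large F suggests the trainings differ
```

A large F relative to the critical F(df_between, df_within) value leads to rejecting the hypothesis that all trainings have the same effect, but, as noted above, this design cannot separate the training effect from the extraneous trainer-quality effect.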
[Diagram: the population is randomly assigned to an experimental group and a control group; the independent variable is administered as Treatment A and Treatment B.]
The principle of local control is also applied; ANOVA is used for analysis. Describe the experiment: to measure the effect of four different test forms on students with different IQ levels. Divide the students into groups according to IQ level; one student from each group is selected randomly. The tests can be taken in random order, which eliminates the effect of fatigue or of experience gained by taking repeated exams.
Scores of five students on four test forms:

Student    Form 1    Form 2    Form 3    Form 4
1            82        90        86        93
2            67        68        83        77
3            57        54        51        60
4            71        70        69        65
5            73        81        84        71

Each form is taken by each of the 5 students, with the scores obtained above. If each student separately randomizes the order in which he or she takes the forms, we refer to the design as a randomized block (RB) design. The purpose is to take care of the extraneous variable (fatigue or experience) arising from test repetition.
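A randomized block ANOVA on these scores (students as blocks, forms as treatments) can be sketched as follows; the F-ratio for forms tests whether the four forms differ once student-to-student ability differences are blocked out:

```python
# Randomized block ANOVA: rows = students (blocks), columns = forms (treatments).
scores = [
    [82, 90, 86, 93],
    [67, 68, 83, 77],
    [57, 54, 51, 60],
    [71, 70, 69, 65],
    [73, 81, 84, 71],
]

b = len(scores)       # number of blocks (students)
t = len(scores[0])    # number of treatments (forms)
grand_mean = sum(sum(row) for row in scores) / (b * t)

# Sums of squares for treatments (forms), blocks (students), and total
col_means = [sum(row[j] for row in scores) / b for j in range(t)]
row_means = [sum(row) / t for row in scores]
ss_treat = b * sum((m - grand_mean) ** 2 for m in col_means)
ss_block = t * sum((m - grand_mean) ** 2 for m in row_means)
ss_total = sum((x - grand_mean) ** 2 for row in scores for x in row)
ss_error = ss_total - ss_treat - ss_block

# F for forms: MS_treat / MS_error with (t-1) and (b-1)(t-1) df
f_forms = (ss_treat / (t - 1)) / (ss_error / ((b - 1) * (t - 1)))
print(round(f_forms, 2))  # a small F: forms differ little once IQ is blocked out
```

Removing the block sum of squares from the error term is what gives the RB design its power: most of the variation in these data comes from differences between students, not between forms.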
Latin Square Design
In addition to store patronage, the researcher also wants to account for interest in the store.

                    Interest in the store
Store patronage    High    Medium    Low
High                A        C        B
Medium              C        B        A
Low                 B        A        C

The design allows control of two non-interacting external variables in addition to the manipulation of the independent variable. Two-way ANOVA is used for analysis.
Reliability
Practicality
Validity
Validity has external and internal aspects; internal validity is one aspect. Validity is the ability of a research instrument to measure what it is purported to measure: does the instrument really measure what its designer claims it does?
Classification of validity
Content validity, criterion-related validity, and construct validity
Validity
Content Validity
The degree to which the content of the items adequately represents the universe of all relevant items under study. For example, to measure corporate image, decide which knowledge, attitudes, and opinions are relevant to include in the measurement. Assessment is judgmental or by panel evaluation.
Validity
Criterion-Related Validity
Reflects the success of measures used for prediction or estimation. Two forms: predictive and concurrent. A measure that can forecast the outcome of a union election has predictive validity; an observational method that can correctly classify families into income classes has concurrent validity. This may sound easy, but for some variables it can prove difficult to secure a correct criterion figure, e.g., the income of a family.
Validity
Construct Validity
Measuring attitudes, aptitudes, or personality and drawing inferences from such tests is very difficult, and no direct empirical validation seems possible; this is the typical setting for construct validity. To evaluate construct validity, both the theory and the measuring instrument need to be considered, e.g., to determine the effect of ceremony on organizational culture.
Reliability
Example: a weighing scale. It is valid and reliable if it measures your weight correctly. If it consistently overweighs you by 3 kg, it is reliable but not valid. If it weighs erratically, it is neither valid nor reliable. Reliable instruments are free from random errors and work well at different times and under different conditions.
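The weighing-scale example can be sketched numerically (all readings hypothetical): a reliable-but-invalid scale gives tightly clustered readings that sit systematically off the true value, while an erratic scale shows a wide spread.

```python
# Reliability vs. validity with hypothetical scale readings.
# True weight is 70 kg; each list holds five repeated measurements.
true_weight = 70
biased_scale  = [73, 73, 73, 73, 73]   # consistent but 3 kg high: reliable, not valid
erratic_scale = [64, 78, 69, 81, 58]   # inconsistent: neither reliable nor valid

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):
    """Range of readings: a rough indicator of (un)reliability."""
    return max(xs) - min(xs)

print(mean(biased_scale) - true_weight, spread(biased_scale))    # 3.0 0
print(mean(erratic_scale) - true_weight, spread(erratic_scale))  # 0.0 23
```

Note that the erratic scale happens to be right on average, which is exactly why validity cannot be judged from the mean of a few readings when reliability is absent.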
Reliability
Three perspectives on reliability: stability, equivalence, and internal consistency.
Stability: producing consistent results with repeated measurements under a similar environment; concerned with personal and situational variations from one time to another. Method: test-retest.
Reliability
Equivalence: how much error may be introduced by different investigators or different samples; concerned with variations at one point in time among observers or samples. Method: parallel forms.
Internal consistency: tests the homogeneity among the items. Methods: split-half, Kuder-Richardson Formula 20, Cronbach's alpha.
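Of these internal-consistency measures, Cronbach's alpha can be computed directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on hypothetical 3-item questionnaire data:

```python
# Cronbach's alpha for internal-consistency reliability.
# Rows = respondents, columns = items; hypothetical 1-5 ratings.
data = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(data[0])                                     # number of items
items = [[row[j] for row in data] for j in range(k)] # one column per item
totals = [sum(row) for row in data]                  # each respondent's total score

alpha = (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))
print(round(alpha, 2))  # values near 1 indicate high internal consistency
```

The same formula underlies KR-20, which is the special case of alpha for dichotomous (0/1) items.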
Practicality
The measurement process has to be scientifically valid and reliable, and operationally practical. Practicality has been defined in terms of economy, convenience, and interpretability.
Types of Experiments
Two broad classes:
Laboratory experiments: the independent variable is manipulated and measures of the dependent variable are taken in a contrived, artificial setting, for the purpose of controlling the many possible extraneous variables that may affect the dependent variable.
Field experiments: the independent variables are manipulated and measurements of the dependent variable are made on test units in their natural setting.
Research Design: Dr. Dey
Limitations of experiments: without adequate data, hypothesis formulation may not be possible; the time factor; the cost of conducting experiments is high; administrative problems of coordination and execution.
Standard test market: one in which the firm tests the product and/or marketing-mix variables through the company's normal distribution channels. Controlled test markets: those conducted by outside research firms that guarantee distribution of the product through pre-specified types and numbers of distributors.
Test Markets
Test marketing is used in consumer markets and in industrial or B2B markets. Lead country test market: test marketing conducted in specific foreign countries that seem to be good predictors for an entire continent.
Representativeness: do demographics match the total market?
Degree of isolation: is the city an isolated media market, or not?
Ability to control distribution and promotion: are there pre-existing arrangements to distribute the new product in the selected channels of distribution? Are local media suited to testing variations in promotional messages?
Test Marketing
Pros: the most accurate method of forecasting future sales; gives firms the opportunity to pretest marketing-mix variables. Cons: does not yield infallible results; is expensive; exposes the new product or service to competitors; takes time to conduct.
Non-sampling Error
Response Error
  Researcher error: surrogate information error, measurement error, population definition error, sampling frame error, data analysis error
  Interviewer error
  Respondent error: inability error, unwillingness error
Non-response Error
a) Surrogate information error: the variation between the info needed and sought by the researcher (e.g., instead of info on consumer choices, the researcher obtains info on consumer preferences because the choice process cannot be easily observed) b) Measurement error: the variation between the info sought and info generated (e.g., measuring perceptions rather than preferences)
c) Population definition error: the variation between the actual population relevant to the problem at hand and the pop. as defined by the researcher (e.g., how to define a population of affluent households?)
d) Sampling frame error: the variation between the population defined by the researcher and the population as implied by the sampling frame (e.g., the telephone directory used to generate a list of telephone numbers does not accurately represent the pop. of potential consumers due to unlisted, disconnected, and new numbers in service) e) Data analysis error: e.g., when an inappropriate statistical procedure is used
a) Respondent selection error: respondents are selected other than those specified by the sampling design (e.g., a nonreader of a journal is selected rather than a reader to meet a difficult quota requirement) b) Questioning error: e.g., the interviewer does not use the exact wording given in the questionnaire c) Recording error: errors in hearing, interpreting, and recording the answers d) Cheating error: the interviewer fabricates the answers (e.g., does not ask about income, but then fills in the answer based on personal assessment)
a) Inability error: because of unfamiliarity, fatigue, boredom, faulty recall, question format, question content, etc. (e.g., a respondent cannot recall the brand of yogurt purchased four weeks ago) b) Unwillingness error: no answer, or an intentionally wrong answer (e.g., the respondent declares himself a reader of a prestigious magazine rather than a tabloid)
Thanks