ASEMAP in Detail

This document describes the Adaptive Self-Explication of Multi-Attribute Preferences (ASEMAP) method for measuring attribute importance. ASEMAP uses a three-step process: 1) collecting ratings of attribute levels, 2) ranking attributes by importance, and 3) conducting constant-sum paired comparisons of attributes. It then uses log-linear regression and interpolation to estimate attribute importances. The method adaptively selects which attribute interval to explore next so as to minimize the maximum interpolation error, and it iterates paired comparisons and importance estimates until the importance values stabilize.


Prof. "Seenu" Srinivasan, Optimal Strategix Group, July 2, 2020


Adaptive Self-Explication of
Multi-Attribute Preferences (ASEMAP)

[Figure: part-worth plots for two attributes, with part-worth on the vertical axis: Resolution (6, 9, 12, 15 megapixels) and Warranty (0, 1, 2, 3 years).]

• Within-attribute information
• Across-attribute information
Self-Explicated Approach
Example: Resolution Part-worths

[Figure: desirability ratings (0-10 scale) for Resolution, multiplied by the attribute's importance (60%), yield the part-worths: 10 × 60% = 6, 8 × 60% = 4.8, 2 × 60% = 1.2.]

Desirability Ratings × Importance = Part-worths
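To make the arithmetic concrete, here is a minimal Python sketch; the 60% importance and the 0-10 ratings come from the example, while the mapping of ratings to megapixel levels is assumed from the plot:

```python
importance = 0.60                                   # Resolution's importance (60%)
desirability = {6: 0.0, 9: 2.0, 12: 8.0, 15: 10.0}  # assumed 0-10 ratings by megapixels

# Part-worth of each level = desirability rating x attribute importance
part_worth = {mp: rating * importance for mp, rating in desirability.items()}
print(part_worth)  # {6: 0.0, 9: 1.2, 12: 4.8, 15: 6.0}
```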
ASEMAP Step 1 – Relative Desirabilities
• Ratings of levels of each attribute, one attribute at a time
• How desirable to you are the options below for Size relative to each other? Assume that all other attributes are held constant.
• E.g., Fannypack: o o o o o o o o o o o (0 = Lo Relative Desirability, 10 = Hi)
• For ordered attributes (preference ordering of levels is known), 10 and 0 are pre-filled for the most and least preferred levels. Two-level ordered attributes are skipped in this step.
Measurement of Attribute Importance
• Rating scales (Self-Explicated (SE) Approach)
• Respondents tend to say that every attribute is important, thereby reducing the usefulness of rating scales
Sawtooth – Adaptive Conjoint Analysis (ACA)

• Start with the Self-Explicated Approach. Augment it with 15-21 (adaptive) paired comparisons, similar to the example below:

[Figure: example ACA paired-comparison question.]
Constant-Sum Method
(Improved method for measuring importances)

• Constant-sum method: allocate (say) 100 points across attributes

                       Levels
Attribute        Least        Most         Importance
Attribute a      Level l      Level m      ____
Attribute b      Level l      Level m      ____
…                …            …            ____
                                           Total: 100

• Information overload when the number of attributes is large
Attribute Importance – Ranking
• ASEMAP Step 2 – Ranking attributes by importance
ASEMAP Step 3: Constant-Sum Paired Comparisons

Which of the improvements below is more valuable to you? How much more?

Assume that price and all other attributes remain the same.

[Example point allocations (each pair summing to 100): Resolution (6MP → 15MP) vs. Warranty (None → 3 Yrs.): 55/45, 50/50, 60/40.]
ASEMAP Third Step:
Adaptive Measurement of Attribute
Importance – Initial Step
(Example – 12 Attributes)

• Three paired comparisons of attributes:
  • 1 vs. 12 (most important vs. least important attribute)
  • 1 vs. 6 (most important vs. middle-importance attribute)
  • 6 vs. 12 (middle-importance vs. least important attribute)
• Estimate the importances V1, V6, and V12 by log-linear multiple regression. (Details on the next page.)
Log-Linear Regression Estimation
of Attribute Importance
• Example: V1/V6 = 2, V6/V12 = 3, V1/V12 = 5 (some inconsistency)
• (Log V1) − (Log V6) = Log 2 = 0.30 (logs are taken to base 10)
• Without loss of generality we can set V1 = 100, so that Log V1 = 2
• Thus 2 − (Log V6) = 0.30, so that −(Log V6) = 0.30 − 2 = −1.70
• Similarly (Log V6) − (Log V12) = Log 3 = 0.48, and
• (Log V1) − (Log V12) = Log 5 = 0.70, so that −(Log V12) = 0.70 − 2 = −1.30
Log-Linear Regression
• The three equations on the previous page can be written in matrix form as:

      Log V6   Log V12   Dep. Var.
        -1        0        -1.70
         1       -1         0.48
         0       -1        -1.30

• This can be thought of as a multiple regression with three observations, two independent variables, and no intercept. The two regression coefficients are Log V6 and Log V12. Adjusted R-squared gives a measure of the consistency of the data. Taking antilogs, i.e., 10 raised to the power of the regression coefficients, we get V6 and V12. We had set V1 = 100.

• Given V1 and V6, we can interpolate the values for V2, V3, V4, and V5. Likewise, we can interpolate the values for V7, V8, …, V11 from V6 and V12.

• Normalize all V1, V2, …, V12 to add to 100.
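The estimation above is easy to reproduce. Below is a minimal sketch in Python (not the ASEMAP production code) that solves the three log-ratio equations by least squares, interpolates the intermediate attributes, and normalizes:

```python
import numpy as np

# Design matrix for the two unknowns (columns: log10 V6, log10 V12),
# one row per paired comparison; V1 is fixed at 100, so log10 V1 = 2.
X = np.array([[-1.0,  0.0],    # -log10(V6)              = log10(2) - 2
              [ 1.0, -1.0],    #  log10(V6) - log10(V12) = log10(3)
              [ 0.0, -1.0]])   # -log10(V12)             = log10(5) - 2
y = np.array([np.log10(2) - 2, np.log10(3), np.log10(5) - 2])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # regression with no intercept
v1, v6, v12 = 100.0, 10 ** coef[0], 10 ** coef[1]
print(round(v6, 1), round(v12, 1))             # approx. 53.3 and 18.8

# Linearly interpolate ranks 2-5 and 7-11, then normalize to sum to 100.
ranks = np.arange(1, 13)
v = np.interp(ranks, [1, 6, 12], [v1, v6, v12])
v = 100 * v / v.sum()
```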
Adaptive Measurement of
Attribute Importance (ASEMAP)
• Suppose we have measured the relative importances of the most important (V1), middle (V6), and least important (V12) attributes.
• Consider the following two scenarios:

[Figure: Scenario A and Scenario B each plot attribute importance against importance rank for attributes 1, 6, and 12, with interval I between ranks 1 and 6 and interval II between ranks 6 and 12.]

• In Scenario A, determine the importance of an attribute in interval I.
• In Scenario B, determine the importance of an attribute in interval II.
Criterion for Selecting the Interval to Explore
Each interval has a top attribute, a bottom attribute, and
one or more intermediate attributes.

[Figure: importance plotted against the attributes in an interval (top, intermediate, bottom), showing the linear interpolation line and the worst-error scenario.]

Maximum possible error = (difference in importance between the top and bottom attributes) × (# of intermediate attributes) / 2
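A small sketch of the selection rule, assuming we hold current importance estimates for the directly measured attributes (values taken from the regression example above; names are illustrative): score each candidate interval by its maximum possible error and explore the worst one.

```python
def max_possible_error(importances, top, bottom):
    # Slide's formula: importance gap x number of intermediate attributes / 2
    n_intermediate = bottom - top - 1
    return (importances[top] - importances[bottom]) * n_intermediate / 2

importances = {1: 100.0, 6: 53.3, 12: 18.8}   # directly measured so far
intervals = [(1, 6), (6, 12)]
next_interval = max(intervals, key=lambda iv: max_possible_error(importances, *iv))
print(next_interval)  # (1, 6): its larger importance gap dominates
```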
Step 3 (Contd.) Adaptive Paired Comparisons
• Suppose the interval between attributes 1 and 6 is the one chosen. We choose an attribute in the middle of the interval to minimize the maximum interpolation error.

• Suppose attribute 3 is chosen. The respondent provides data for two additional paired comparisons, (1, 3) and (3, 6).

• We combine the original three paired comparisons with the two additional paired comparisons, for a total of five paired comparisons.

• This leads to a multiple regression with five observations and three independent variables, resulting in regression coefficients for V3, V6, and V12.

• Knowing V1 = 100 and V3, V6, and V12, we interpolate the remaining importances, then normalize all 12 importances to add to 100.
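Generalizing the earlier regression sketch, the following illustration (my own, not the published ASEMAP code) builds the log-linear regression from any set of constant-sum pairs (rank i, rank j, points for i, points for j), anchors the most important attribute at 100, interpolates, and normalizes:

```python
import numpy as np

def estimate_importances(pairs, ranks, anchor=1):
    """pairs: (rank_i, rank_j, points_i, points_j); only the point ratio matters."""
    unknowns = sorted({r for p in pairs for r in p[:2] if r != anchor})
    col = {r: k for k, r in enumerate(unknowns)}
    X = np.zeros((len(pairs), len(unknowns)))
    y = np.zeros(len(pairs))
    for row, (i, j, pts_i, pts_j) in enumerate(pairs):
        y[row] = np.log10(pts_i / pts_j)       # log10 of the importance ratio
        for r, sign in ((i, 1.0), (j, -1.0)):
            if r == anchor:
                y[row] -= sign * 2.0           # move log10(100) = 2 to the RHS
            else:
                X[row, col[r]] += sign
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    known_ranks = [anchor] + unknowns
    known_vals = [100.0] + list(10 ** coef)    # antilogs of the coefficients
    v = np.interp(ranks, known_ranks, known_vals)
    return 100 * v / v.sum()                   # normalize to add to 100

# The slide's five pairs: the initial (1,12), (1,6), (6,12) with ratios 5, 2, 3,
# plus (1,3) and (3,6) with made-up 60/40 and 55/45 point allocations.
pairs = [(1, 12, 5, 1), (1, 6, 2, 1), (6, 12, 3, 1), (1, 3, 60, 40), (3, 6, 55, 45)]
print(estimate_importances(pairs, np.arange(1, 13)).round(1))
```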
Iterative Method (ASEMAP)
• Suppose we have the following attribute rankings, with importance estimates after 3, 5, and 7 paired comparisons:

Attribute           Pairs 1-3   Pairs 1-5   Pairs 1-7
Resolution            21.36       25.96       23.32
Optical zoom
Price                 12.70       11.72
Shot delay
Video Clip
Battery life           8.48        7.98        7.57
Warranty
Brand
Camera size            5.08
LCD size
Memory
Light sensitivity      0.70        0.76        0.62
Step 3 (Contd.) Adaptive Paired Comparisons
• The number of paired comparisons is chosen so as to keep the interpolation error small.

• Using computer simulation (and consistent with empirical evidence) we recommend the following (see the sketch below):

# of attributes     # of paired comparisons
≤ 10                9 (for 3, 4, 5 attributes: 3, 5, 7 pairs)
11 to 15            11
16 to 50            13

• Under these guidelines, the interpolation error is less than 5%, i.e., average interpolation error / average importance ≤ 0.05.
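The guideline table can be encoded as a small lookup (an illustration, not official ASEMAP code):

```python
def recommended_pairs(n_attributes: int) -> int:
    """Recommended number of paired comparisons, per the table above."""
    if n_attributes <= 5:
        return {3: 3, 4: 5, 5: 7}[n_attributes]
    if n_attributes <= 10:
        return 9
    if n_attributes <= 15:
        return 11
    return 13  # 16 to 50 attributes

print(recommended_pairs(12))  # 11
```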
ASEMAP: Step 4 (Linking Utility
to Likelihood of Purchase)
• The market researcher chooses the number of most important attributes (e.g., 6) out of all attributes (e.g., 12).

• The respondent is shown five profiles on only his/her most important attributes and asked for his/her likelihood of purchase if that profile were the best available product/service in the market (assuming that all remaining attributes are at their best levels). The five profiles are:
(1) all important attributes at their best levels
(2) half the important attributes at best, the other half at medium
(3) all important attributes at medium levels
(4) half the important attributes at medium, the other half at worst
(5) all important attributes at worst levels
Linking Likelihood to Total Utility
• We can determine the total utility U for each of the five profiles from the utility function determined in stages 1 through 3.
• Likelihood L is linked to total utility U by the S-shaped logit curve:
  Ln[L/(1−L)] = b0 + b1·U   (five observations)
• Delete the respondent if the logit adjusted R-squared < 0.25.
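A minimal sketch of this step: transform the stated likelihoods with the logit, regress on total utility, and apply the screening rule. The utilities and likelihoods below are made up for illustration; only the transformation and the 0.25 cutoff come from the slides.

```python
import numpy as np

U = np.array([100.0, 75.0, 50.0, 25.0, 0.0])   # total utilities of the 5 profiles
L = np.array([0.90, 0.70, 0.45, 0.20, 0.05])   # stated purchase likelihoods (made up)

z = np.log(L / (1 - L))                        # ln[L / (1 - L)]
X = np.column_stack([np.ones_like(U), U])      # intercept b0 and slope b1
(b0, b1), res, *_ = np.linalg.lstsq(X, z, rcond=None)

# Adjusted R-squared with n = 5 observations and k = 1 predictor.
r2 = 1 - res[0] / ((z - z.mean()) ** 2).sum()
adj_r2 = 1 - (1 - r2) * (5 - 1) / (5 - 1 - 1)
keep_respondent = adj_r2 >= 0.25               # delete the respondent otherwise
```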
Empirical Comparison
• Digital cameras with 12 attributes
• Respondents randomly divided into two groups:
  • ASEMAP with up to 21 paired-comparison questions (mean: 18 questions)
  • ACA with 10 two-factor paired comparisons and 11 three-factor paired comparisons
• Procedure:
  • Validation task → ASEMAP/ACA → post-survey evaluation
  • Validation task includes 2 (out of 4) choice sets with 4 alternatives each
  • ASEMAP (n = 52); ACA (n = 49)
• Survey duration not significantly different between the ASEMAP and ACA methods (each averaging 15 minutes)
Research Design
ASEMAP: 2 Validation Choice Sets → Attribute-Level Desirabilities → Rank Order of Attribute Importance → Paired Comparisons of Attribute Importances → Post-Survey Evaluation

ACA: 2 Validation Choice Sets → Attribute-Level Desirabilities → Attribute Importance Ratings → Paired Comparisons of Partial Product Profiles on 2 & 3 Attributes → Post-Survey Evaluation

[Diagram note: the self-explicated (SE) data come from the ACA flow prior to the paired comparisons.]
Prediction Accuracy
Percent of (first) choices correctly predicted

ASEMAP   60.6%   (Adaptive Self-Explication)
ACA      39.8%   (Adaptive Conjoint Analysis)

• The improvement is statistically significant (p < .01)
• 50% improvement in predictive accuracy of choices
Does ASEMAP Provide a Significant
Improvement over (Standard)
Self-Explicated Procedure?
ASEMAP   60.6%   (Adaptive Self-Explication)
SE       44.9%   (Self-Explication)

• The improvement is large and highly statistically significant (p < .01)
Does Hierarchical Bayes Provide
Significant Improvement?
            Individual Estimates   Hierarchical Bayes
ASEMAP            60.6%                  64.4%
ACA               39.8%                  40.8%

• HB marginally improves the predictive ability of both methods
• ASEMAP/HB significantly outperforms ACA/HB
Choice Share Prediction
Mean |Predicted Choice Share − Actual Choice Share|*

            Individual Estimates   Hierarchical Bayes
ASEMAP            0.067                  0.068
ACA               0.113                  0.122
SE                0.082                  N.A.

* Mean absolute deviation averaged over four brands in each of four choice sets. Smaller numbers are better.
Effect of # of Paired Comparison Questions
(Adaptive Self-Explication)
# of Pairs   ASEMAP Hit Rate
 0           43.7%   (uniform importances)
 1           51.9%
 3           57.7%
 5           60.6%
 7           60.6%
 9           61.5%
11           63.5%
13           64.4%
15           62.5%
17           60.6%
19           62.5%
21           60.6%
Attribute Importance:
Coefficient of Variation
(= Std.dev./Mean)
            Individual Estimates   Hierarchical Bayes
ASEMAP            0.849                  0.728
ACA               0.487                  0.430
SE                0.403                  N.A.

• ASEMAP shows more variation in importances
• HB "shrinks" the importance variation
A Replication of the Previous Study
Comparing ASEMAP-C to ACA

• Replication in the laptop category
• 14 attributes, n = 60 per method
• Choice-set hit rates:
  • ASEMAP = 54.2%
  • ACA = 46.2%
• ASEMAP statistically significantly better (p < .05)
Comparing ASEMAP-C to ACBC
% hits (first choices correctly predicted)

            Headphones   Laptops   Average
ASEMAP        56.2%       64.1%     60.1%
ACBC          52.2%       65.0%     58.6%

12 attributes for each product category; 150 respondents/cell.
5-6% reduction in time taken by the ASEMAP survey.
Differences are not statistically significant.
Comparing ASEMAP-I to Constant Sum

• Marketing Science Institute prioritization of research topics
• 15 topics, n = 160 managers/method
• Average percent of pairs correctly predicted:
  • ASEMAP = 81.6%
  • CSUM (Constant Sum) = 59.6%
• ASEMAP statistically significantly better (p < .05)
Other Advantages of ASEMAP
• ACBC is limited to 12 attributes; if you run a study with 20 attributes, it will zero out the importances of each respondent's 8 less important attributes.

• Sawtooth recommends Adaptive Conjoint Analysis (ACA) for more than 12 attributes. ASEMAP predicts better than ACA by 14.2 percentage points on average (across two product categories, as reported earlier).

• Importances and part-worths are available immediately, as soon as each respondent completes the ASEMAP survey; with ACBC you must wait for all the data and then run a hierarchical Bayes analysis (additional researcher time). This is an important advantage for connected surveys.

• ASEMAP provides an unbiased estimate of each respondent's preferences; ACBC provides a biased estimate because of the need to conduct hierarchical Bayes estimation, thereby biasing benefit-segmentation results.
Comparing ASEMAP-I to MaxDiff: #1

• Presidential priorities (prior to the 2008 elections)
• 17 topics, n = 102/method
• Average absolute error (based on a constant sum over 4 topics); smaller numbers are better:
  • ASEMAP = 8.73
  • MaxDiff (Maximum Difference Scaling) = 11.31
• ASEMAP's improvement over MaxDiff = 22.8%
• ASEMAP statistically significantly better (p < .05)
Comparing ASEMAP-I to MaxDiff: #2
Mean Absolute Error (smaller is better)

Mean Absolute Error of Prediction (MAE):
ASEMAP-I   11.5   (32.5% improvement)
MaxDiff    17.0

MAE difference is highly statistically significant (p < .001).
Context: importance of issues for voters (2020 Democratic Presidential Primary in the U.S.). Approx. 155 respondents/method; the ASEMAP-I survey takes a little longer (2 min. 51 sec. vs. 2 min. 14 sec. for MaxDiff).
