Chapter 3


How people make decisions involving multiple objectives: SMART
Introduction
• As we saw in the last chapter, when decision problems involve a number of objectives, unaided decision makers tend to avoid making trade-offs between these objectives.
• This can lead to the selection of options that
perform well on only one objective.
• These problems arise because the unaided
decision maker has ‘limited information-
processing capacity’ (Wright).
• This chapter will explore how decision
analysis can be used to support decision
makers who have multiple objectives.
Basic terminology
Objectives and attributes:
• An objective has been defined (by Keeney and Raiffa) as an indication of the preferred direction of movement.
• An attribute is used to measure performance in relation to an objective. For example, if we have the objective 'maximize the exposure of a television advertisement', we might use the attribute 'number of people who see the advertisement'.
Value and utility
• For each course of action facing the decision
maker we will be deriving a numerical score
to measure its attractiveness to him. If the
decision involves no element of risk and
uncertainty we will refer to this score as the
value of the course of action.
• Alternatively, where the decision involves
risk and uncertainty, we will refer to this
score as the utility of the course of action.
Example: An office location problem
An overview of the analysis
• The Simple Multi-attribute Rating Technique (SMART), developed by Edwards in 1971, proceeds through the stages listed below; a compact sketch of how the stages fit together follows the list.

Stage 1: Identify the decision maker (or decision makers).
Stage 2: Identify the alternative courses of action.
Stage 3: Identify the attributes which are relevant to the decision problem.
Stage 4: For each attribute, assign values to measure the performance of the alternatives on that attribute.
Stage 5: Determine a weight for each attribute.
Stage 6: For each alternative, take a weighted average of the values assigned to that alternative.
Stage 7: Make a provisional decision.
Stage 8: Perform sensitivity analysis.
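As a rough illustration, the Python sketch below walks through these stages for a small hypothetical problem; the offices, attributes, values and weights are invented for illustration and are not the figures from the chapter's worked example.

```python
# A compact sketch of the SMART stages, using purely hypothetical data.

# Stages 1-3: decision maker identified; alternatives and benefit attributes listed.
alternatives = ["Office A", "Office B", "Office C"]
attributes = ["closeness to customers", "office size", "image"]

# Stage 4: values (0 = worst performance, 100 = best) on each attribute.
values = {
    "Office A": [100, 75, 0],
    "Office B": [20, 30, 100],
    "Office C": [80, 0, 70],
}

# Stage 5: swing weights for the attributes, normalized to sum to 1.
raw_weights = [100, 60, 40]
weights = [w / sum(raw_weights) for w in raw_weights]

# Stage 6: weighted average of each alternative's value scores.
overall = {
    alt: sum(w * v for w, v in zip(weights, values[alt]))
    for alt in alternatives
}

# Stage 7: provisional decision - the highest-scoring alternative.
best = max(overall, key=overall.get)
print(overall, "->", best)

# Stage 8, sensitivity analysis, is sketched later in the chapter.
```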
Constructing a value tree

We start constructing the tree by addressing the attributes which represent the general concerns of the decision maker. (A simple representation of such a tree is sketched after the list of criteria below.)
Five criteria which can be used to judge
the tree

1. Completeness.
2. Operationality.
3. Decomposability.
4. Absence of redundancy.
5. Minimum size.
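To make this concrete, a value tree can be represented as a simple nested structure, as in the sketch below; the cost and benefit attributes shown are illustrative choices for an office location problem, not a definitive tree.

```python
# A hypothetical value tree for an office location problem, represented as
# nested dictionaries; the attribute names are illustrative only.
value_tree = {
    "Costs": ["rent", "electricity", "cleaning"],
    "Benefits": {
        "Turnover": ["closeness to customers", "visibility", "image"],
        "Working conditions": ["size", "comfort", "car parking"],
    },
}

def leaf_attributes(node):
    """Return the lowest-level attributes of the tree - the ones we actually score."""
    if isinstance(node, list):
        return list(node)
    return [leaf for child in node.values() for leaf in leaf_attributes(child)]

print(leaf_attributes(value_tree["Benefits"]))
```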
Measuring how well the options perform
on each attribute

• Judgments about how costs should be traded off against benefits have been described as 'the least secure and most uncomfortable to make' of all the judgments required in decisions involving multiple objectives.
• Because of this, we will ignore the costs until the end of our analysis and simply concentrate on the benefit attributes.
Measuring the performance
1. Direct rating: rank the options and then rate them, giving the most-preferred option a value of 100 and the least-preferred a value of 0. It is the interval (or improvement) between the points on the scale which we compare.
2. Value functions

Bisection:
• This method requires the owner to identify an office area whose value is halfway between that of the least-preferred area (400 ft²) and the most-preferred area (1500 ft²).
• Having identified this midpoint area, the decision maker is then asked to identify the 'quarter points'. The first of these is the office area whose value lies halfway between those of the least-preferred area and the midpoint area. (A sketch of the resulting value function follows the swing-weights bullet below.)
• Swing weights: These are derived by asking
the decision maker to compare a change (or
swing) from the least-preferred to the most-
preferred value on one attribute to a similar
change in another attribute.
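A rough sketch of how the bisection judgments might be turned into a value function for office area is shown below. The 400 ft² and 1500 ft² end points come from the text, while the intermediate areas (such as a 700 ft² midpoint) are assumed purely for illustration; the function simply interpolates linearly between the elicited points.

```python
# Piecewise-linear value function for office area, built from bisection points.
# The 400 ft² and 1500 ft² end points come from the text; the intermediate
# areas (for example a 700 ft² midpoint) are assumed purely for illustration.

# (area in ft², value on a 0-100 scale) pairs elicited by bisection
elicited_points = [(400, 0), (550, 25), (700, 50), (1000, 75), (1500, 100)]

def value_of_area(area):
    """Linearly interpolate between the elicited (area, value) points."""
    points = sorted(elicited_points)
    if area <= points[0][0]:
        return points[0][1]
    if area >= points[-1][0]:
        return points[-1][1]
    for (a0, v0), (a1, v1) in zip(points, points[1:]):
        if a0 <= area <= a1:
            return v0 + (v1 - v0) * (area - a0) / (a1 - a0)

print(value_of_area(850))  # an 850 ft² office scores 62.5 on this illustrative scale
```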
Aggregating the benefits using the additive
model
We have now completed two tasks:
1. obtained measures of how well each office performs on each attribute, and
2. derived weights which enable us to compare the values allocated to one attribute with the values allocated to the others.
This means that we are now in a position to find out how well each office performs overall by combining the six value scores allocated to that office.
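For instance, a minimal sketch of the additive model with six hypothetical value scores and swing weights (normalized so that they sum to 1) might look like this; the figures are invented, not those of the worked example.

```python
# Additive model: overall benefit = sum of (normalized weight x value score).
# The six attribute scores and weights below are hypothetical.

scores  = [100, 20, 80, 70, 0, 60]   # one office's values on six benefit attributes
weights = [32, 16, 16, 8, 8, 20]     # swing weights (here they already sum to 100)

normalized = [w / sum(weights) for w in weights]
overall_benefit = sum(w * s for w, s in zip(normalized, scores))
print(round(overall_benefit, 1))     # -> 65.6 on the 0-100 benefit scale
```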
Trading benefits against costs
Sensitivity analysis
• Sensitivity analysis is used to examine how
robust the choice of an alternative is to
changes in the figures used in the analysis.
• Carrying out sensitivity analysis should
contribute to the decision maker’s
understanding of his problem and it may lead
him to reconsider some of the figures he has
supplied.
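As an illustration, a simple sensitivity check can vary the weight placed on one attribute (re-scaling the others so that the weights still sum to 100) and record where the preferred option switches; the scores and weights below are hypothetical.

```python
# Sensitivity analysis sketch: vary the weight on one attribute from 0 to 100,
# share the remaining weight between the other attributes in their original
# proportions, and see where the best alternative switches.

values = {
    "Office A": {"closeness": 100, "size": 75, "image": 0},
    "Office B": {"closeness": 20,  "size": 30, "image": 100},
}
base_weights = {"closeness": 30, "size": 15, "image": 55}

def best_office(weight_on_image):
    remaining = 100 - weight_on_image
    other_total = base_weights["closeness"] + base_weights["size"]
    weights = {
        "image": weight_on_image,
        "closeness": remaining * base_weights["closeness"] / other_total,
        "size": remaining * base_weights["size"] / other_total,
    }
    def overall(office):
        return sum(weights[a] * values[office][a] for a in weights) / 100
    return max(values, key=overall)

for w in range(0, 101, 10):
    print(w, best_office(w))   # shows the weight at which the choice changes
```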
Theoretical considerations
• The axioms of the method:
1. Decidability
2. Transitivity
3. Summation
4. Solvability
5. Finite upper and lower bounds for value
Assumptions made when aggregating values
• As we pointed out, the use of this model is not appropriate where there is an interaction between the scores on the attributes. In technical terms, in order to apply the model we need to assume that mutual preference independence exists between the attributes: for example, the decision maker's relative preference for different office areas should not depend on how close the office is to customers.
• In the office example, the analysis clearly suggests that the decision maker should choose office P.
Conflicts between intuitive and analytic
results

• The larger the problem, the less reliable holistic judgments may be.
• Alternatively, discrepancies between holistic
and analytic results may result when the
axioms are not acceptable to the decision
maker.
• Thus the requisite modeling process does not
attempt to obtain an exact representation of
the decision maker’s beliefs and preferences,
or to prescribe an optimal solution to his
problem.
• However, by exploiting the conflicts between the results of the analysis and his intuitive judgments, the decision maker can resolve conflicts and inconsistencies in his thinking.
Variants of SMART

Value-focused thinking:
• In this approach you first determine your
‘values’ – that is what objectives (and hence
what attributes) are important to you. Only
then do you create alternatives that might
help you to achieve these objectives.
• These alternatives are then evaluated in the
same way as for alternative-focused thinking.
SMARTER (SMART Exploiting Ranks)
• One of the main attractions of SMART is its
relative simplicity.
SMARTER differs from SMART in two ways:
• First, value functions are normally assumed
to be linear.
• The second difference between SMART and
SMARTER relates to the elicitation of the
swing weights.
• In SMARTER we still have to compare swings,
but the process is made easier by simply
asking the decision maker to rank the swings
in order of importance, rather than asking for
a number to represent the relative
importance. SMARTER then uses what are
known as ‘rank order centroid’, or ROC,
weights to convert these rankings into a set
of approximate weights.
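The ROC weights themselves follow a simple formula: with n attributes, the attribute whose swing is ranked k receives the weight (1/n)(1/k + 1/(k+1) + ... + 1/n). A brief sketch:

```python
# Rank order centroid (ROC) weights: for n ranked attributes, the weight of
# the attribute ranked k is (1/n) * (1/k + 1/(k+1) + ... + 1/n).
def roc_weights(n):
    return [sum(1 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

print([round(w, 4) for w in roc_weights(4)])
# -> [0.5208, 0.2708, 0.1458, 0.0625] for attributes ranked 1st to 4th
```

For four attributes this gives weights of roughly 0.52, 0.27, 0.15 and 0.06, so the top-ranked attribute receives just over half of the total weight.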
Reservations about SMARTER
• First, in problems where it has been necessary to separate costs from benefits, you might obtain a different efficient frontier if you use SMARTER rather than SMART. This means we should be very careful before we exclude dominated options from further consideration. (A sketch of the dominance check behind the efficient frontier follows this list.)
• Finally, the ROC weights themselves raise a
number of concerns. The method through which
they are derived involves some sophisticated
mathematics, which means that they will lack
transparency to most decision makers.
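For reference, the efficient frontier mentioned in the first reservation consists of the options that are not dominated, that is, options for which no other option offers at least as much benefit at no greater cost. A minimal sketch of this dominance check, using hypothetical figures:

```python
# Dominance check on (cost, benefit) pairs: an option is dominated if some other
# option costs no more, offers at least as much benefit, and is strictly better
# on one of the two. All figures below are hypothetical.
options = {
    "Office A": {"cost": 30_000, "benefit": 72.5},
    "Office B": {"cost": 15_000, "benefit": 39.0},
    "Office C": {"cost": 30_500, "benefit": 54.0},
}

def dominates(x, y):
    """True if option x is at least as good as y on cost and benefit,
    and strictly better on at least one of them."""
    at_least_as_good = x["cost"] <= y["cost"] and x["benefit"] >= y["benefit"]
    strictly_better = x["cost"] < y["cost"] or x["benefit"] > y["benefit"]
    return at_least_as_good and strictly_better

efficient_frontier = [
    name for name, opt in options.items()
    if not any(dominates(other, opt)
               for other_name, other in options.items() if other_name != name)
]
print(efficient_frontier)   # ['Office A', 'Office B'] - Office C is dominated
```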