Planning Under Uncertainty: Regret Theory

MINIMAX REGRET ANALYSIS
Motivating Example

Profit ($ millions) by scenario:

        s1 (High)   s2 (Medium)   s3 (Low)   Average
A          19           14           -3         10
B          16            7            4          9
C          20            8           -4          8
D          10            6            5          7
Max      20 (C)       14 (A)        5 (D)     10 (A)
[Figure: profit ($ millions) of alternatives A-D across scenarios s1-s3]
Traditional approach (maximize the average profit): select A.
Optimistic decision maker (MaxiMax): select C.
Pessimistic decision maker (MaxiMin): select D.
For each scenario s, regret is the profit forgone relative to the best choice:

    Regret(s) = Profit(Best Alternative, s) - Profit(Chosen Alternative, s)

If the chosen decision is the best in a scenario, its regret there is zero; and since no alternative beats the best, regret is never negative.
Motivating Example: Regret

Regret ($ millions) by scenario:

        s1 (High)   s2 (Medium)   s3 (Low)   Maximum Regret
A           1            0            8             8
B           4            7            1             7
C           0            6            9             9
D          10            8            0            10

Maximum regrets: A = 8 (low market), B = 7 (medium market), C = 9 (low market), D = 10 (high market).

Minimax regret: select B. In general, minimax regret gives a conservative decision, but not a pessimistic one.
[Figure: regret ($ millions) of alternatives A-D across scenarios s1-s3]
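The regret calculation above is mechanical, so it is easy to sketch in code. A minimal Python illustration using the profit table from the motivating example:

```python
# Profit ($ millions) for alternatives A-D under the High/Medium/Low scenarios.
profits = {
    "A": [19, 14, -3],
    "B": [16, 7, 4],
    "C": [20, 8, -4],
    "D": [10, 6, 5],
}
n_scen = 3

# Best achievable profit in each scenario (the column maxima: 20, 14, 5).
best = [max(row[s] for row in profits.values()) for s in range(n_scen)]

# Regret = best profit in the scenario minus the chosen alternative's profit.
regret = {a: [best[s] - row[s] for s in range(n_scen)]
          for a, row in profits.items()}

# Minimax regret: pick the alternative whose worst-case regret is smallest.
max_regret = {a: max(r) for a, r in regret.items()}
choice = min(max_regret, key=max_regret.get)
print(max_regret)   # {'A': 8, 'B': 7, 'C': 9, 'D': 10}
print(choice)       # B
```

Running this reproduces the regret table and the selection of B.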
Two-Stage Stochastic Programming Using Regret Theory

Wait & See (WS): the uncertainty-free optimal profit, obtained by optimizing each scenario separately as if it were known in advance.
Here & Now (HN): the optimal profit of the two-stage model, where first-stage decisions are fixed before the scenario is revealed.
The minimax regret problem chooses the first-stage decision x to minimize the worst-case gap between the two:

    Minimize_x  MR(x) = Max_{s in S} | WS_s - HN(x, s) |

where:

    WS_s     = Max_{x, y_s} { c^T x + q_s^T y_s }
    HN(x, s) = Max_{y_s}    { c^T x + q_s^T y_s }    (x fixed before the scenario is known)

subject to (in both problems):

    A x = b,   x >= 0
    T_s x + W y_s = h_s,   y_s >= 0
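The WS/HN regret comparison can be illustrated on a toy problem. The sketch below is a hypothetical one-product capacity example, not the document's model: all numbers (price, cost, demand scenarios) are invented, and the minimax-regret decision is found by simple grid search rather than an LP solver.

```python
# Hypothetical two-stage example: choose capacity x before demand is known
# (first stage); sell min(x, d_s) once scenario s reveals demand d_s.
price, cost = 5.0, 2.0
demands = [40.0, 70.0, 120.0]   # three demand scenarios (invented numbers)

def profit(x, d):
    """Profit of capacity x when demand turns out to be d."""
    return price * min(x, d) - cost * x

# Wait & See: with scenario s known in advance, build exactly d_s.
ws = [(price - cost) * d for d in demands]

def max_regret(x):
    """Worst-case regret max_s [WS_s - HN(x, s)] for a fixed capacity x."""
    return max(w - profit(x, d) for w, d in zip(ws, demands))

# Here & Now decision by minimax regret, via grid search over capacities.
grid = [i / 10 for i in range(0, 1501)]
x_star = min(grid, key=max_regret)
print(x_star, max_regret(x_star))   # 88.0 96.0
```

Here the minimax-regret capacity (88) sits strictly between the scenario optima, balancing the regret of over-building against that of under-building.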
A larger example: five designs (d1-d5) under five scenarios (s1-s5).

NPV by scenario:

        s1      s2      s3      s4      s5     ENPV    Max     Min
d1     19.01   10.38   10.57   15.48   10.66   13.22   19.01   10.38
d2     11.15   14.47    8.87   20.54   10.58   13.12   20.54    8.87
d3     12.75    7.81   16.02   22.25    9.16   13.60   22.25    7.81
d4      5.41    9.91   12.63   32.02    8.08   13.61   32.02    5.41
d5     15.09    7.40    8.81   12.48   15.05   11.77   15.09    7.40
Max    19.01   14.47   16.02   32.02   15.05   13.61   32.02   10.38

Regret by scenario:

        s1      s2      s3      s4      s5     Max
d1      0.00    4.09    5.45   16.54    4.39   16.54
d2      7.86    0.00    7.15   11.48    4.47   11.48
d3      6.26    6.66    0.00    9.77    5.89    9.77
d4     13.60    4.56    3.39    0.00    6.97   13.60
d5      3.92    7.07    7.21   19.54    0.00   19.54
                                       Min     9.77

Minimax regret selects d3 (maximum regret 9.77).

[Figure: NPV of designs d1-d5 across scenarios s1-s5]
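The tables above also show how the different criteria disagree. A short Python check, using the NPV table as given:

```python
# NPV of designs d1-d5 across scenarios s1-s5 (from the table above).
npv = {
    "d1": [19.01, 10.38, 10.57, 15.48, 10.66],
    "d2": [11.15, 14.47, 8.87, 20.54, 10.58],
    "d3": [12.75, 7.81, 16.02, 22.25, 9.16],
    "d4": [5.41, 9.91, 12.63, 32.02, 8.08],
    "d5": [15.09, 7.40, 8.81, 12.48, 15.05],
}
S = 5

enpv = {d: sum(v) / S for d, v in npv.items()}                    # expected NPV
best = [max(v[s] for v in npv.values()) for s in range(S)]        # column maxima
regret = {d: max(best[s] - v[s] for s in range(S)) for d, v in npv.items()}

print(max(enpv, key=enpv.get))               # d4  (maximize expected NPV)
print(max(npv, key=lambda d: max(npv[d])))   # d4  (MaxiMax)
print(max(npv, key=lambda d: min(npv[d])))   # d1  (MaxiMin)
print(min(regret, key=regret.get))           # d3  (minimax regret)
```

Four criteria, three different selections: d4 by average and by MaxiMax, d1 by MaxiMin, and d3 by minimax regret.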
Limitations of Regret Theory

The criterion may rank alternatives differently if one of the alternatives is excluded or a new alternative is added (it violates independence of irrelevant alternatives).

Regret table:

        s1    s2    s3    Max Regret
A      100     0     5       100
B       99    95    40        99
C        0   100   200       200
D      150    85     0       150

It is not necessarily true that equal differences in profit correspond to equal amounts of regret: $1050 - $1000 = $50 and $150 - $100 = $50 are the same absolute difference, but they are unlikely to feel like the same loss.

A small advantage in one scenario may lead to the loss of larger advantages in other scenarios.
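The sensitivity to the set of alternatives can be demonstrated in code. The profit table below is a hypothetical reconstruction (the regret table above only fixes profits up to each scenario's best value; the scenario maxima 200, 100, 200 are assumed) chosen so that the resulting regrets match the table:

```python
# Hypothetical profits consistent with the regret table above
# (assumed scenario maxima: 200, 100, 200).
profits = {
    "A": [100, 100, 195],
    "B": [101, 5, 160],
    "C": [200, 0, 0],
    "D": [50, 15, 200],
}

def minimax_regret(table):
    """Return the minimax-regret choice for a profit table."""
    n = len(next(iter(table.values())))
    best = [max(row[s] for row in table.values()) for s in range(n)]
    worst = {a: max(best[s] - row[s] for s in range(n))
             for a, row in table.items()}
    return min(worst, key=worst.get)

print(minimax_regret(profits))                           # B
without_c = {a: r for a, r in profits.items() if a != "C"}
print(minimax_regret(without_c))                         # A
```

Dropping C, which was never going to be chosen, changes the recommendation from B to A: removing C lowers the best profit in s1, which shrinks A's regret there from 100 to 1.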
CONCLUSION

Suggested improvements to the minimax-regret criterion:

- Minimize the average regret instead of the maximum regret.
- Minimize the average of the upper regrets instead of the maximum only (the listed values equal the average of each alternative's two largest regrets).
- Measure relative regret instead of absolute regret:

      (1050 - 1000) / 1050 = 4.8%   versus   (150 - 100) / 150 = 33%

  instead of:

      1050 - 1000 = 50   versus   150 - 100 = 50

        s1    s2    s3    Max     Upper    Avg.
                          Regret  Regret   Regret
A      100     0     5    100      52.5     35
B       99    95    40     99      97       78
C        0   100   200    200     150      100
D      150    85     0    150     117.5     78.3
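The alternative criteria above can be applied to the same regret table in a few lines; "upper regret" is read here as the average of the two largest regrets per alternative, which matches the listed values:

```python
# Regret table for alternatives A-D (from the limitations example).
regret = {
    "A": [100, 0, 5],
    "B": [99, 95, 40],
    "C": [0, 100, 200],
    "D": [150, 85, 0],
}

max_regret = {a: max(r) for a, r in regret.items()}
avg_regret = {a: sum(r) / len(r) for a, r in regret.items()}
# "Upper regret": average of the two largest regrets (an interpretation
# consistent with the table's values 52.5, 97, 150, 117.5).
upper_regret = {a: sum(sorted(r)[-2:]) / 2 for a, r in regret.items()}

print(min(max_regret, key=max_regret.get))     # B  (99)
print(min(avg_regret, key=avg_regret.get))     # A  (35.0)
print(min(upper_regret, key=upper_regret.get)) # A  (52.5)
```

The maximum-only criterion picks B, while both averaging variants pick A, whose single bad scenario no longer dominates the comparison.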