DecideIT Manual
www.preference.bz [email protected]
Copyright Notice
Copyright © 2006–2011 M.A.D. Preference AB. All rights reserved. No part of this manual may be reproduced in any manner or translated into another language without the written permission of Preference AB.
Trademarks
DecideIT is a trademark of Preference AB. Windows is a registered trademark of Microsoft Corporation.
Reservation
The information in this manual has been carefully reviewed and is believed to be accurate. The vendor assumes no responsibility for any inaccuracies that may be contained in this document, makes no commitment to update or to keep current the information in this manual, or to notify any person or organization of the updates. Preference AB reserves the right to make changes to the product described in this manual at any time and without notice. This product, including software, and documentation may not, in whole or in part, be copied, photocopied, reproduced, translated or reduced to any medium or machine without prior written consent. IN NO EVENT WILL PREFERENCE AB BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING FROM THE USE OR INABILITY TO USE THIS PRODUCT OR DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Page ii of 148
Table of Contents

System Requirements ...... 8
Installation ...... 8
Contact ...... 8

1 The Problem ...... 10
   1.1 Mapping of Product and Information Flow ...... 10
   1.2 Stock Levels ...... 11
   1.3 Administration ...... 12
   1.4 Production ...... 12
   1.5 Price of the System ...... 12
   1.6 Quality Aspects ...... 12
   1.7 Modelling the Problem ...... 13
2 Using DecideIT ...... 14
   2.1 Start DecideIT ...... 14
   2.2 Create a Tree and Label Alternatives ...... 14
   2.3 Define Consequences ...... 16
   2.4 Saving and Naming the Tree ...... 16
   2.5 Label Consequences ...... 17
   2.6 Define and Label Sub-Consequences ...... 17
   2.7 Assign Probabilities ...... 19
   2.8 Define Templates ...... 21
   2.9 Assign Templates ...... 23
   2.10 Assign Values ...... 25
   2.11 Evaluate the Decision Problem ...... 27
   2.12 Total Ranking ...... 30
   2.13 Cardinal Ranking ...... 31
   2.14 Study Critical Probabilities and Values ...... 32
   2.15 Security Thresholds ...... 33
   2.16 Extreme Values ...... 35
   2.17 Cumulative Risk Profile ...... 36
   2.18 Define Criteria ...... 36
   2.19 Assign Criteria Weights ...... 38
   2.20 Assign a Tree to a Criterion ...... 40
   2.21 Define New Decision Models ...... 41
   2.22 Asserting Value Relations ...... 41
   2.23 Evaluating Multi-Criteria Models ...... 44
   2.24 A Rough Sensitivity Analysis ...... 45
   2.25 Conclusions ...... 47

2 Menus and Toolbars; File ...... 50
   2.1 Create Model ...... 50
   2.2 Open an Existing Model ...... 50
   2.3 Close a Model ...... 50
   2.4 Save a Model ...... 51
   2.5 Save a Copy of Current Tree ...... 51
   2.6 Print Model ...... 51
3 Menus and Toolbars; Edit ...... 52
   3.1 Undo ...... 52
   3.2 Redo ...... 52
   3.3 Alternative Properties ...... 52
   3.4 Set Value/Weight Relations ...... 53
   3.5 Set Value Scale ...... 54
      3.5.1 Value Scales and Multi-Criteria Decision Problems ...... 54
   3.6 Set Background Color ...... 57
4 Menus and Toolbars; View ...... 57
   4.1 Overview ...... 57
   4.2 Hide/Show All Evaluation Windows ...... 57
   4.3 Update Model ...... 58
5 Menus and Toolbars; Templates ...... 58
6 Menus and Toolbars; Evaluation ...... 59
   6.1 Security Thresholds ...... 59
   6.2 Total Ranking ...... 60
   6.3 Cardinal Ranking ...... 61
   6.4 Expected Value Graph ...... 62
   6.5 Cumulative Risk Profile ...... 63
   6.6 Risk Profile ...... 64
   6.7 Critical Probabilities/Values/Weights ...... 64
   6.8 Total Ranking - All Criteria ...... 65
   6.9 Cardinal Ranking - All Criteria ...... 66
   6.10 Expected Value Graph - All Criteria ...... 66
   6.11 Extreme Values ...... 66
   6.12 Preference Order ...... 68
7 Menus and Toolbars; Tools ...... 69
   7.1 Document History ...... 69
   7.2 Choose Active Excel Spreadsheet ...... 69
   7.3 Settings ...... 69
8 Menus and Toolbars; Help ...... 70
   8.1 About ...... 70
   8.2 Contents and Index ...... 70
   8.3 Enter License Key ...... 70
9 Node Property Frame ...... 71
   9.1 Identify Decision Alternatives ...... 71
   9.2 Identify Event and Consequence Nodes ...... 73
10 Evaluation Windows ...... 79
   10.1 File ...... 79
   10.2 Edit ...... 80
   10.3 View ...... 80
   10.4 Update ...... 82
11 Multiple and Sequential Decisions ...... 83
12 Historical Background ...... 87
   12.1 Decision Analysis ...... 89
   12.2 Perspectives on Decision Theory ...... 92
13 Probability Theory ...... 94
14 Utility Theory ...... 96
15 Decision Modelling ...... 98
   15.1 Decisions under Certainty ...... 99
   15.2 Decisions under Strict Uncertainty ...... 100
      Laplace ...... 100
      Wald ...... 100
      Hurwicz ...... 101
      Savage ...... 101
   15.3 Decisions under Risk - Bayesian Decision Analysis ...... 102
   15.4 Assumptions and Axioms in Utility Theory ...... 103
      Axiom Systems ...... 103
      Some Criticism Against the Utility Theory ...... 108
      Risk Attitudes ...... 109
      Security Thresholds ...... 110
16 Multiple and Conflicting Objectives ...... 111
17 Elicitation Techniques ...... 114
   17.1 Assessing Utilities ...... 114
   17.2 Assessing Probabilities ...... 115
   17.3 Assessing Weights ...... 116
18 Imprecise Domains ...... 117
   18.1 Measurable and Immeasurable Uncertainties ...... 117
   18.2 Imprecise Probability ...... 118
   18.3 Imprecise Utility ...... 119
   18.4 Second-Order Beliefs ...... 120
19 Graph Models ...... 122
   19.1 Decision Trees ...... 122
   19.2 Influence Diagrams ...... 123
      19.2.1 Relationship between Influence Diagrams and Trees ...... 126
20 The Method of DecideIT ...... 127
   20.1 Information Gathering ...... 128
   20.2 Modelling ...... 128
   20.3 Information and Decision Frames ...... 128
   20.4 Frame Structure ...... 130
   20.5 Bases ...... 131
   20.6 Probability Bases ...... 132
   20.7 Value Bases ...... 132
   20.8 Frames ...... 133
   20.9 Sanity Checks ...... 133
   20.10 Security Thresholds ...... 133
   20.11 Evaluations ...... 134
   20.12 Cutting the Hull ...... 136
   20.13 Sensitivity Analyses ...... 138
   20.14 Decision Process Results ...... 138
References ...... 140
Index ...... 144
DecideIT is a user-friendly tool for decision analysis developed by Preference AB. It has several features, such as:

- Good overview to yield a better overall picture
- Easy to document, review, and adjust the underlying data
- Hard problems are solvable within reasonable time
- Supports evaluation of imprecise probability and value estimates
- Supports comparative statements of values
- Supports evaluation of multiple criteria decision problems
- Simple ways of detecting lack of information
- Applicable within both decision and risk analysis
This manual describes how to use DecideIT for modelling and evaluating decision situations. This version of the manual is written for DecideIT 2.5 and may lack some of the more recent features of version 2.69. The manual consists of three main parts:

- A tutorial that introduces the terminology and working procedures in DecideIT
- A reference manual for DecideIT
- An introduction to the area of decision analysis
System Requirements
Operating system: Windows XP, Windows Vista, or Windows 7 (with Java Runtime Environment)
Processor: 500 MHz or faster (may depend on operating system)
RAM: 512 MB (may depend on operating system)
Hard disk: 100 MB of free space
Installation
If you downloaded DecideIT from our homepage, run the file DecideIT_Setup.exe by double-clicking it. If you received DecideIT on CD-ROM, insert the installation CD-ROM into your CD/DVD reader. If the installation procedure does not start automatically, follow this procedure:
1. Run DecideIT_Setup.exe from the installation CD and follow the instructions.
2. Restart the computer if needed.
Note that you might need administrative rights on your operating system to install this software. Contact your system administrator if you need such rights. If you are running Windows XP or 2000, this step might not be necessary.
For portability reasons, the graphical user interface of DecideIT is developed in the Java Programming Language, so the Java Runtime Environment is required.
Contact
For questions and comments concerning this manual, please contact [email protected]
This part describes how to model and analyze a decision problem. It starts by presenting a problem, followed by step-by-step instructions on how to use DecideIT to analyze the options.
1 The Problem
The decision problem is whether or not a new system for logistic control should be implemented in a company.
1.1 Mapping of Product and Information Flow

Figure 2-1: Current product and information flow (physical flow and info. flow, with order confirmation daily and replenishment weekly).
With a new system for logistic control, the forecasts will still be made, but they are entered directly into the system and, together with information regarding stock levels and order and production planning, transferred in real time to the main factory. This means that all information about the central warehouse will be available to this factory, which can then check stock levels, planned production, and orders already made. The order administration will thereby be replaced by a demand-driven information system. With this information visible, the possibilities to replenish in a more efficient way increase vastly. The flow can be seen in Figure 2-2.
Figure 2-2: Product and information flow with the new system (physical flow and info. flow between factories A and B, with daily and weekly replenishment).
1.2 Stock Levels

Figure 2-3: Stock level over time, January to October 2001.
The production has to be reliable for a possible reduction in the stock levels. As can be seen from Figure 2-3, the stock level decreases to zero on a number of occasions. This is largely due to production disturbances and planned production stops that were delayed. Assuming that the production can be stabilized, stock level reductions are possible. This depends foremost on the better forecasts that can be made when customer needs are followed in real time. Better forecasts lead to better production planning, which in turn facilitates lower stock levels. No change in the customers' stock levels will occur. The possible effects of introducing the system for logistic control are:

- Reduction in stock levels by 40%, with a probability between 0% and 20%.
- Reduction in stock levels by 25%, with a probability between 25% and 55%.
- Reduction in stock levels by 15%, with a probability between 20% and 40%.
- No change in stock levels, with a probability between 10% and 30%.
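These four interval statements constrain, but do not uniquely determine, the expected stock-level reduction. DecideIT evaluates such interval assignments with its own algorithms; purely as an illustrative sketch (hypothetical helper code, not part of the tool), the bounds on the expected reduction over all probability distributions consistent with the intervals can be computed like this:

```python
def expected_bounds(values, p_bounds):
    """Min and max of sum(p_i * v_i) over all p with lo_i <= p_i <= hi_i
    and sum(p) == 1. With a single sum constraint this small LP can be
    solved greedily: start every probability at its lower bound, then
    hand the remaining mass to the most favourable outcomes first."""
    def extreme(sign):
        p = [lo for lo, _ in p_bounds]
        slack = 1.0 - sum(p)
        order = sorted(range(len(values)), key=lambda i: sign * values[i], reverse=True)
        for i in order:
            add = min(p_bounds[i][1] - p[i], slack)
            p[i] += add
            slack -= add
        return sum(pi * v for pi, v in zip(p, values))
    return extreme(-1), extreme(+1)  # (minimum, maximum)

# Stock-level reductions (%) and their interval probabilities from above.
reductions = [40, 25, 15, 0]
p_bounds = [(0.00, 0.20), (0.25, 0.55), (0.20, 0.40), (0.10, 0.30)]
lo, hi = expected_bounds(reductions, p_bounds)
print(f"Expected reduction between {lo:.1f}% and {hi:.1f}%")
```

Under these assumptions the expected reduction lies between 13.5% and 23.5%, so even the most pessimistic distribution consistent with the intervals still yields a noticeable reduction.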
The storage time is 16 days on average, which is equivalent to approximately 4,200 tons. The price per ton of these articles is around 400 to 500 USD, and the stock-keeping cost is 15% to 25% of the stock value.
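These figures pin down the annual stock-keeping cost only to an interval. As a quick sanity check (plain interval arithmetic, independent of DecideIT):

```python
# Interval arithmetic on the figures above: all quantities are nonnegative,
# so interval products are obtained by multiplying the endpoints.
tons = 4200                 # average stock, tons
price = (400, 500)          # USD per ton
keep_rate = (0.15, 0.25)    # stock-keeping cost as a share of stock value

stock_value = (tons * price[0], tons * price[1])
cost = (stock_value[0] * keep_rate[0], stock_value[1] * keep_rate[1])
print(f"Stock value: {stock_value[0]:,} to {stock_value[1]:,} USD")
print(f"Annual stock-keeping cost: {cost[0]:,.0f} to {cost[1]:,.0f} USD")
```

The stock tied up is thus worth roughly 1.7 to 2.1 million USD, and keeping it costs somewhere between about 252,000 and 525,000 USD per year, which is what makes the stock-level reductions above financially interesting.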
1.3 Administration
The administration cost will probably be reduced after the introduction of the system. Because of the automated process of order receiving and order confirmation, these resources can be redirected to more qualified work, and there are three possibilities:

- Reduction of two employees, with a probability between 10% and 30%.
- Reduction of one employee, with a probability between 50% and 70%.
- No change, with a probability between 10% and 30%.
One employee corresponds to a cost between 40,000 and 60,000 USD to the company.
1.4 Production
Savings in production costs can probably not be achieved, but through more efficient production planning, the production can become demand driven. This will not result in any direct cost reductions, but it will affect the stock levels.
1.6 Quality Aspects

The quality criterion is divided into two sub-criteria: process quality and product quality. The quality criterion is considered important, but definitely not of equal importance to the financial aspect; the financial aspect is considered at least four to five times more important. The criterion Finance has a weight between 0.8 and 1, whereas the weight of the criterion Quality is between 0 and 0.2. Furthermore, Product quality is considered to have a weight between 0.7 and 1.0, whereas Process quality has a weight between 0 and 0.3.
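To get a feel for what these interval weights mean for a sub-criterion's overall influence, the global weight of a sub-criterion can be bounded by multiplying its local weight interval with its parent criterion's weight interval, endpoint by endpoint. This is only a rough outer bound (the weights on each level must also sum to one, a constraint DecideIT handles properly), not the tool's own computation:

```python
def mul(a, b):
    """Endpoint product of two nonnegative intervals (lo, hi)."""
    return (a[0] * b[0], a[1] * b[1])

quality = (0.0, 0.2)          # weight of the Quality criterion
product_local = (0.7, 1.0)    # local weight of Product quality within Quality
process_local = (0.0, 0.3)    # local weight of Process quality within Quality

product_global = mul(quality, product_local)
process_global = mul(quality, process_local)
print(f"Product quality, global weight: {product_global[0]:.2f} to {product_global[1]:.2f}")
print(f"Process quality, global weight: {process_global[0]:.2f} to {process_global[1]:.2f}")
```

So product quality can carry at most 0.20 of the total weight and process quality at most 0.06, which makes precise the statement that the financial aspect dominates.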
1.7 Modelling the Problem

As Figure 2-4 shows, two areas with improvement potential have been identified: changes in stock levels in one of the factories (factory B) and changes in administration in the other (factory A). The cost of the system has been adjusted to include the actual purchase together with an annual fee, as well as the development cost for the interface between the business system and the new system.
As can be seen, we discuss only two alternatives: alternative 1 is to invest in the system, and alternative 2 is to not invest. Other alternatives, such as considering other suppliers, are omitted.
2 Using DecideIT
Consider the tree in Figure 2-4. You will now construct this tree in DecideIT and perform various kinds of analyses.
After the program has loaded, your screen will look as in Figure 2-5.
2. Click OK. A rudimentary decision tree opens, containing two alternatives and two consequences. See Figure 2-7.
As you saw in Figure 2-4, the decision problem consists of two alternatives, labelled Invest and Not Invest. You will now label the alternatives in the tree. 1. Left-click the upper yellow box. The dialog box Node Properties opens. Here you can label the first alternative. See Figure 2-8.
2. Write the text Invest in the text field Scenario and click OK. The text Invest now appears in the upper yellow box in the tree. 3. Open the Node Properties dialog box for the other alternative and label the alternative Not Invest (short for Do Not Invest).
2. Write the number 4 in the text field and click OK. The tree now contains four consequences of the alternative Invest. See Figure 2-10.
1. Select Save As from the File menu. The Save dialog box opens. Here you can save the tree under a name. 2. Write the text Investment in the text field and click OK. The tree is now saved in the file Investment.tree.
Figure 2-12: Decision tree after adding consequences to C1, converting it to the event node E1.
3. Name the three sub-consequences No, -1 Person, and -2 Persons, respectively. You will now add these sub-consequences to each of the remaining direct consequences of the alternative Invest. You can repeat the above procedure or copy the branch to the other nodes: 1. Right-click the node E1 and select Copy Branch from the pop-up menu. 2. Right-click the node C4 and select Paste Branch from the pop-up menu. A copy of the branch from node E1 now replaces the node C4. 3. Define the remaining sub-consequences in the tree. The tree should now look as in Figure 2-13.
Figure 2-13: Decision tree with twelve consequences associated with Alt. 1.
You will now assert these probabilities for the direct consequences of the alternative Invest. 1. Left-click the yellow box 40% Reduction. The Node Properties dialog box opens. Here you can define a probability interval for this event. 2. Click the tab Probability (%). The Node Properties dialog box now looks as in Figure 2-14.
There are four radio buttons in the dialog box. These are used for choosing the type of probability you want to assign to an event. Check the leftmost radio button when you want to define a precise probability. The second radio button is used when you want to assign a probability interval, as in this case. You will now assign the probability interval 0% to 20% to the event. 3. Click the second radio button from the left. You can now define a probability interval for the event. See Figure 2-15.
4. Write the number 0 in the left text field and the number 20 in the right text field. Then click OK. The event now has the probability interval 0% to 20%. 5. Assign probability intervals to the remaining direct consequences of the alternative Invest. The tree should now look as in Figure 2-16.
These options are the same for all sub-consequences. Therefore, it is useful to define templates for the probability assignments.
1. Select Probability Templates from the Templates menu. The Probability Templates dialog box opens. Here you define templates that can be used to simplify the handling of probability assignments. See Figure 2-17.
You will define three templates: None, One, and Two. 2. Write the text None in the textbox Name of Probability Templates. 3. Click the second radio button. You can now assign an interval probability to this template. See Figure 2-18.
The probability for No change is between 10% and 30%. 4. Write the number 10 in the left textbox and the number 30 in the right textbox.
5. Click the button Add. The template is now defined. See Figure 2-19.
6. Define the two remaining templates in the same way. The templates should now be One (50-70%) and Two (10-30%).
7. Click OK. You have now defined the three templates.
3. Assign probability templates to the remaining consequences. The tree should now look as in Figure 2-21.
C10: USD -100,000 to -70,000
C11: USD -60,000 to -10,000
C12: USD 20,000 to 50,000
C13: USD 0
Before you assert these values, you must define a value scale.
1. Select Set Value Scale in the Edit menu. The Value Scale Settings dialog box opens. In this dialog box you can set the value scale. See Figure 2-22.
You will let the scale be between -100 and 260. This means that the minimal and maximal values will be the values of the worst and the best consequences (divided by 1,000), respectively.
2. Write the number -100 in the upper text box and the number 260 in the lower text box and click OK. You will now assert values for the consequences.
1. Left-click the uppermost yellow box No. The Node Properties dialog box opens. In this dialog box you can define a value interval for this consequence.
2. Click the tab Value. The Node Properties dialog box now looks as in Figure 2-23.
There are three radio buttons in this dialog box. As for probabilities, these are used for asserting the different types of values you want to assign to a consequence. You check the leftmost radio button when you want to assign a precise value. The second radio button is used when you want to assign a value interval, as in this case. You will now assign the value interval USD 0 to 140,000 to the consequence.
3. Click the second radio button. You can now define a value interval for the consequence.
4. Write the number 0 in the left text field and the number 140 in the right text field. Then click OK. The consequence is now assigned the value interval 0 to 140 (i.e., USD 0 to 140,000, since values are given in thousands).
5. Assign value intervals to the remaining consequences. The tree should now look as in Figure 2-24.
You will first analyze the alternatives with the PMEU rule.
1. Select Expected Value Graph from the Evaluation menu. The Evaluation Properties dialog box is shown. See Figure 2-25.
In this dialog box, you have various options. Perform pair-wise comparisons; if there are more than two alternatives, pair-wise comparisons can be carried out between all of them. Compare alternatives against a mean value of all the others; this is only applicable if you have more than two alternatives. Evaluate the alternatives separately; here you should keep in mind that this means that all relations between the alternatives are dismissed during the analysis. Set how many evaluation steps will be shown; usually 20% is sufficient. This is further described below. You can also choose the preferred contraction mode: before the evaluation, you have the choice to contract only the probability base or only the value base, or to pre-contract one of them to the most probable point first. The default is to contract both the probability and value bases. Normally you just use the default choice.
You will now perform a pair-wise evaluation of the two alternatives.
2. Click OK in the Evaluation Properties dialog box. A window presenting the result is shown. See Figure 2-26.
The result of the analysis is shown when the depreciation rate is five years. The upper graph of Figure 2-26 shows the maximal possible difference between the alternatives Invest and Not Invest, i.e., when the former is made as good as possible compared to the latter. The lower graph shows the opposite, i.e., when the alternative Not Invest is made as good as possible compared to the alternative Invest.

The result also shows the values in relation to the degree of contraction of the decision frame. The values for various degrees of contraction demonstrate the stability of the decision. Values near the boundaries of the constraint intervals are likely less reliable than the centre values, since the intervals are deliberately imprecise. If the decision problem is evaluated on a sequence of ever-smaller intervals, a good appreciation of the solution's dependency on boundary values can be obtained. This is taken into account by cutting off the dominated regions indirectly, using reductions of the probability and value bases, and is denoted cutting the bases. The amount of cutting is indicated as a percentage, which can range from 0% to 100%. In the figures, the numerical difference in expected value is shown for each 20% contraction step. The intuition behind contractions is to zoom in on increasingly believable sub-intervals of the deliberately imprecise original statements, thus forming a succession of belief-denser solution sub-spaces. Before the evaluation, you also have the choice to contract only the probability base or only the value base, or to pre-contract one of them to the most probable point first. The default is to contract both the probability and value bases.

In this case, the alternative Invest is significantly better than the alternative Not Invest. The easiest way to see this is to consider the sizes of the respective areas above and below the Contraction axis.
In the case of Figure 2-26, you can see that the area above the axis is considerably larger than the area below it. This means that Alt. 1, i.e., the alternative Invest, is better. You can also see other aspects of the result. For instance, you can have the result presented in numerical format.
3. Select Numerical in the View menu. A window presenting the result numerically is shown. See Figure 2-27.
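The contraction idea described above, zooming in on sub-intervals around a focal point, can be sketched in a few lines of Python. This fragment is illustrative only: the focal point of 10% is an assumption for the example, and DecideIT's kernel contracts entire probability and value bases jointly rather than single intervals in isolation.

```python
def contract(lo, hi, focal, k):
    """Shrink the interval [lo, hi] toward its focal point.

    k is the contraction level on the 0..1 scale: k = 0 keeps the full
    interval, k = 1 collapses it to the focal (most likely) point.
    """
    return lo + k * (focal - lo), hi - k * (hi - focal)

# The tutorial's probability interval 0%..20%, with an assumed focal
# point of 10%, contracted in 20% steps:
for step in range(6):
    k = step / 5
    print(f"{k:.0%}", contract(0.0, 0.20, 0.10, k))
```

Each 20% step produces a narrower, more believable sub-interval, which is what the evaluation graphs plot the expected-value difference against.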
Considering the alternative to invest in relation to not investing: if the worst case occurs (from the perspective of the alternative to invest), the difference in expected monetary value is a loss of about USD 34,000. On the other hand, if the best case occurs, the difference is a gain of USD 125,000. The most likely difference in expected monetary value is a positive result of approximately USD 37,000, as can be seen at 100% contraction.
In this dialog box, you set how large a percentage of the expected value, at a given contraction level, the alternatives must differ by in order to be considered different. The default is a 5% difference at 100% contraction.
2. Click OK. The Total Ranking dialog box is shown. See Figure 2-29.
Here it can clearly be seen that the Invest alternative is ranked higher than Not invest.
In this dialog box, you set the contraction level and the contraction mode. The default contraction level is 0% contraction, i.e., the entire intervals are taken into account.
2. Click OK. The Total Ranking dialog box is shown. See Figure 2-31.
In this it can be seen that there is an overlap between the alternatives, but that the Invest alternative should be preferred to the alternative Not invest.
The window presents the possible variation of the expected value when the respective probabilities and values vary within their admissible intervals. For instance, the value of consequence C6 can affect the expected value with an impact of USD -5,480 to 5,480, and the probability of the same consequence can affect the expected value with an impact of USD -3,000 to 3,000. This type of information can be vital when the alternatives under consideration are close to equal. In such cases, it is important to know which consequences affect the situation most, i.e., which are the most critical for the analysis. When more information has to be collected and resources have to be allocated, the consequences that are critical in this respect should be focused on primarily.
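As a rough illustration of this kind of sensitivity information, the sketch below computes how far a single consequence's value can pull the expected value when it varies over its interval, with everything else held at interval midpoints. The consequence names and numbers are hypothetical, and the calculation is a deliberate simplification of DecideIT's actual analysis.

```python
# Hypothetical consequences: name -> (probability midpoint, value lo, value hi),
# values in USD.
consequences = {
    "C5": (0.15, -60_000, -10_000),
    "C6": (0.20,  20_000,  50_000),
}

def value_impact(prob, v_lo, v_hi):
    """Swing in the expected value when this consequence's value moves
    across its interval, measured relative to the interval midpoint
    (all other quantities held fixed)."""
    mid = (v_lo + v_hi) / 2
    return prob * (v_lo - mid), prob * (v_hi - mid)

for name, (p, lo, hi) in consequences.items():
    print(name, value_impact(p, lo, hi))
```

Consequences with the widest swing are the critical ones: refining their estimates first buys the most stability in the evaluation.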
In this dialog box, you make settings for when you consider a consequence, or a combination of consequences, too dire. In the upper text box you set the value threshold. The lower text box is used for defining the probability threshold. We assume that you do not accept that an alternative has consequences with values lower than USD -30,000 if they can occur with probabilities greater than 40%.
2. Write -30 in the value text box and 40 in the probability text box. See Figure 2-34.
3. Click OK. A window presenting the result of the analysis is shown. See Figure 2-35.
The window shows that there are consequences of the alternative Invest that are unacceptable. This is shown in red. However, already after about 10% contraction this is no longer the case. It is a matter of your own risk attitude whether this is acceptable or whether the alternative should be dismissed. However, since the alternative Invest becomes acceptable after just a few contractions, it seems reasonable to still consider it the best candidate for selection.
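A naive version of the security threshold test for individual consequences can be sketched as follows. The consequence data are hypothetical, and DecideIT additionally considers combinations of consequences, which this fragment does not.

```python
# Hypothetical alternative: each consequence given as
# (probability upper bound, value lower bound in USD).
invest = [(0.20, -100_000), (0.45, -40_000), (0.35, 140_000)]

def violates(consequences, value_thr, prob_thr):
    """Return the consequences that can be worse than value_thr while
    their probability can exceed prob_thr (a rough, per-consequence
    reading of the security threshold rule)."""
    return [(p, v) for (p, v) in consequences
            if v < value_thr and p > prob_thr]

print(violates(invest, -30_000, 0.40))  # -> [(0.45, -40000)]
```

An alternative with a non-empty violation list would be flagged (shown in red) at the given contraction level.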
In this dialog box, you can see how the alternatives look considering the best and worst consequences only. There is obviously a consequence (-100) that can make the Invest alternative worse than Not invest, but, on the other hand, the best consequence is much better for the Invest alternative (260). The Not invest alternative is of course constant zero. The values indicated in the dialog box are the weighted averages between the best and the worst consequences of the respective alternatives, in the same way as for the Hurwicz criterion (see Part III).
In this dialog box, several graphs can be seen. The (blue) graphs for the Invest alternative show the cumulative risk profiles for the minimum and maximum possible values (0% contraction) as well as the average of these two. Since the alternative Not invest has a constant value, all the (green) graphs coincide. In this case, none of the alternatives statistically dominates the other; not even when considering the extreme (lower and upper) graphs.
times more important. Thus, the criterion Finance has a weight between 0.8 and 1, whereas the weight of the criterion Quality is between 0 and 0.2. Furthermore, Product quality is considered to have a sub-weight between 0.7 and 1.0, whereas Process quality has a sub-weight between 0 and 0.3 (i.e., relative to the Quality criterion). In this section, you will build a multi-criteria model, consisting of a criteria tree and connected decision trees. Let us first build the criteria tree.
1. Select New from the File menu. You are now asked whether you want to create a decision tree or a multi-criteria model. See Figure 2-38. You will create a multi-criteria model.
2. Click the radio button Multi-criteria model and then click OK. A criteria tree opens. See Figure 2-39.
You will now create the multi-criteria tree.
1. Left-click the node Cr. 1. The dialog box Add Criteria opens. In this dialog box you can define the number of sub-criteria.
2. Write the number 2 in the text field and click OK.
3. Confirm the message. The tree should now look as in Figure 2-40.
You are now asked to confirm that you cannot use weight relations.
4. Put in the names as in Figure 2-41. This is done in exactly the same way as for decision trees.
1. Left-click the yellow box Quality. The Node Properties dialog box opens. In this dialog box you can define the weight of this criterion.
2. Click the tab Weight (%). The Node Properties dialog box now looks as in Figure 2-42.
There are three radio buttons in the dialog box. These are used for asserting the type of weights you want to assign to a criterion. You check the leftmost radio button when you want to define a precise weight. The second radio button is used when you want to assign a weight interval (as in your model).
3. Click the second radio button from the left. You can now enter a weight interval for the criterion.
4. Write the number 0 in the left text field and the number 20 in the right text field. Then click OK. The criterion is now assigned the weight interval 0% to 20%.
5. Assign intervals to the remaining criteria. The tree should now look as in Figure 2-43.
2. Click the radio button Connect decision model, choose T1:Investment.tree in the pop-up menu and click OK. The decision tree that was created earlier is now connected to the criteria model. See Figure 2-45.
3. Save the criteria tree under the name MCDM Investment. The tree is now saved in the file MCDM Investment.ch.
The assumption, from the process quality perspective, is that two employed persons are preferred to one, which in turn is preferred to none. Such a decision tree is shown in Figure 2-46. Note that the scale is chosen as in the financial decision model to make these trees comparable.
1. Define a new decision model as shown above and label it Process quality. This is done in the same way as for the financial decision model.
Figure 2-47: Value relations.
2. Select C1 in the left upper combo box and C2 in the right upper combo box. See Figure 2-48.
3. Select C2 in the 2nd left upper combo box and C3 in the 2nd right upper combo box.
4. Select C4 in the 3rd left upper combo box, = in the 3rd pop-up menu, and C1 in the 3rd right upper combo box. Note that you can add more cells by clicking the button Extra relations. The Value Relations dialog box should now appear as in Figure 2-49.
5. Click OK. The relations are now defined.
6. Save the model under the name Process quality.
The decision tree for the product quality is very simple. See Figure 2-50.
Recall that an investment will raise the quality of the product. Thus the only assertion to be made is that C1 is preferred to C2.
1. Create the decision tree above and set the scale between -100 and 260.
2. Select Value Relations from the Edit menu.
3. Select C1 in the left upper combo box and C2 in the right upper combo box.
4. Click OK.
5. Save the model under the name Product quality.
Now the decision model for product quality is defined. The only thing that remains is to connect the decision models to the multi-criteria tree.
1. Right-click the node Cr. 1 in the tree MCDM Investment.ch and assign the tree Product quality.tree.
2. Right-click the node Cr. 2 in the tree MCDM Investment.ch and assign the tree Process quality.tree.
The multi-criteria tree should now look like Figure 2-51.
3. Save the criteria tree. The entire model is now saved under the name MCDM Investment.ch.
It is still the case that the alternative Invest is the best, but with a slightly lower degree of dominance.
5. Then click OK. The new weights are assigned to the criteria. You will now again perform a total evaluation of the alternatives.
6. Select Expected Value Graph - All Criteria from the Evaluation menu and perform a pair-wise comparison between the alternatives. The result of this analysis should now look as in Figure 2-54.
The result for the alternative Not Invest changes only slightly compared to Invest. The result is therefore not very sensitive to changes in the criteria weights for Quality and Finance. This is further emphasized when the criteria are set to be equal. The result can be seen in Figure 2-55.
As can be seen from the result, the alternative Invest is still much better than Not Invest.
2.25 Conclusions
You have now modelled an investment decision problem under two criteria and performed various kinds of analyses. All weights, probabilities, and values have been taken into account in the evaluation phase. Despite the impreciseness of the input data, important results could be obtained. In short, given the original estimates, the alternative Invest should definitely be chosen.
This part contains instructions on how to install DecideIT and information about the different parts of the application. The reference guide also contains descriptions of all menus and commands.
1 Preparations
1.1 Install DecideIT
If you downloaded DecideIT from our homepage on the web, run the file DecideIT_Setup.exe by double-clicking it. If you received a CD-ROM with DecideIT, insert the installation CD-ROM in your CD/DVD reader. If the installation procedure does not start automatically, proceed as follows.
1. Double-click the file DecideIT_Setup.exe on the installation CD and follow the instructions.
2. Restart the computer if needed.
Note that you might need administrative rights on your operating system to be allowed to install this software. Contact your system administrator if you need such rights. If you are running Windows XP or 2000, this step might not be necessary. For portability reasons, the graphical user interface of DecideIT is developed in the Java programming language; the Java Runtime Environment is needed.
not saved, DecideIT will ask if the model should be saved.
1a. Close by using the Close button in the window corner. The Close button in a model window corner corresponds to Close in the File menu.
1b. Close DecideIT via Exit in the File menu. This command will be followed by a question asking whether you want to save any open models. The Close button in the corner of the program session corresponds to the Exit command in the File menu and is likewise followed by a question asking whether you want to save any open models.
2. Select Page Setup to optimize the printing properties. Orient the structure horizontally or vertically and choose margin settings for a suitable printout.
3.1 Undo
The command Undo reverts the latest action.
3.2 Redo
The command Redo reapplies the latest action undone by the latest Undo command.
In the screenshot above, C1 is better than C2, C3 is at least 250 value units better than C4, and the value of C5 is between 100 and 180 value units greater than the value of C7. The Value Relations and Weight Relations buttons in the toolbar correspond to Value Relations and Weight Relations, respectively, in the Edit menu. If a criteria tree contains more than one level, weight relations cannot be asserted.
3.5.1
Changing the value scale of any criterion in the decision problem will affect the evaluation. It is therefore extremely important that, prior to any multi-criteria evaluation, all criteria have well-defined value scales: the greatest value in each value scale represents the best possible outcome with respect to the given criterion, and the lowest value represents the worst possible outcome. Manipulating a value scale in a multi-criteria decision problem means that the given best and/or worst possible outcomes are assigned new values, making the already defined values of the different consequences less good and/or less bad relative to the best and/or worst cases.

Example: Consider a decision situation with two criteria: ROI and Research. ROI will be measured in monetary units and Research in the number of active researchers. A pre-investigation leads the decision analyst to state that the worst possible ROI is to lose 1 million, and the best possible ROI is to gain 5 million. Another investigation leads the analyst to state that the worst possible research outcome is to employ zero researchers, and the best possible outcome is to have ten active full-time researchers. Now, assume that the company holds the two criteria to be equally important, thus assigning them both the weight 0.5, and that the following two alternatives are considered: Alt. 1, which means an ROI of 4 million and the number of researchers set to 4, and Alt. 2, which means an ROI of 3 million and the number of researchers set to 8.
According to the semantics of the additive utility function described in section 5, the utility of 5 million will be one, and the utility of -1 million will be zero. A transformation1 of 4 million onto the [0,1] scale will then be (4E6 - (-1E6)) / |5E6 - (-1E6)| ≈ 0.833, and a transformation of 4 researchers onto the [0,1] scale will be 4 / |10 - 0| = 0.4. Building this decision situation in DecideIT and performing a multi-criteria pairwise comparison of the alternatives yields the evaluation window below.
1 The given example is made under the assumption that the utility is linear with the size of ROI and the number of researchers.
Now, suppose we change the scale for the criterion Research to [0,30]. The multi-criteria evaluation of the same decision problem would then yield the result in Figure 3-6.
As can be seen, the result of the evaluation now says that Alt. 1 is the better option, and all we have done is change the scale for one criterion in the decision context! To clarify this, recall that the utility of 4 researchers mapped to 0.4 using the first scale [0,10]. Now, since the scale is set to [0,30], the utility of 4 researchers will map to 4 / |30 - 0| ≈ 0.133, and thus the change of scale
implies a depreciation of the utility of 4 researchers, implicitly making the values in the ROI criterion greater relative to the Research criterion. The important conclusion of this example is that the individual value scales for all criteria must be well defined prior to any multi-criteria evaluation; when a scale is adjusted, all values defined on that scale must be re-set relative to the new worst and best cases represented by the lower and upper bounds.
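The scale-change effect in this example can be reproduced with a few lines of Python, using the standard linear normalization (x - min) / (max - min). This is a sketch of the additive model only, not DecideIT's interval-based kernel.

```python
def utility(x, lo, hi):
    """Linear utility on the [0,1] scale."""
    return (x - lo) / (hi - lo)

def score(roi, researchers, research_scale):
    # Equal criteria weights of 0.5; the ROI scale is fixed at [-1e6, 5e6].
    return (0.5 * utility(roi, -1e6, 5e6)
            + 0.5 * utility(researchers, *research_scale))

# With the Research scale [0, 10], Alt. 2 comes out better ...
a1 = score(4e6, 4, (0, 10)); a2 = score(3e6, 8, (0, 10))
print(a1 < a2)  # True
# ... but after stretching the Research scale to [0, 30], Alt. 1 wins.
b1 = score(4e6, 4, (0, 30)); b2 = score(3e6, 8, (0, 30))
print(b1 > b2)  # True
```

Nothing about the alternatives changed between the two evaluations; only the scale did, which is exactly the pitfall the section warns about.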
4.1 Overview
Use this command to zoom out the view of a decision tree, so that a better overview of large trees is obtained. Use the command again to restore the view to normal. Note that it is still possible to click with the mouse button on the nodes in the tree when overview is chosen. The Overview button in the toolbar corresponds to Overview in the view menu.
In the screenshot above, a probability template Prob. of rain has been added, with a probability between 0.15 and 0.25 and no explicitly given most likely point. A probability template Prob. of snow will be added if the user clicks Add. The Probability Templates button in the toolbar corresponds to Probability Templates in the Edit menu.
Note: When the values in a decision tree are modified, the evaluation windows of the decision tree need to be updated. To mark that an evaluation window needs to be updated, it will turn grey (shaded).
In the screenshot above, the given security threshold finds that Alt. 2 fits the specified risk profile. Alt. 1 might be at risk even though it fulfils the thresholds at an early contraction level. The Security Threshold button in the toolbar corresponds to Security Threshold in the Evaluate menu.
Total Ranking presents an overview of a preference order of the alternatives based on the alternatives' expected values at a specified level of contraction. The ranking is obtained through the following procedure:
1) Pick the alternative with the greatest expected value at the specified contraction level.
2) Let any remaining alternatives whose expected values do not differ by more than the indifference interval percentage of the value scale receive the same rank as the alternative picked in the previous step.
3) Remove the alternatives picked in the previous steps.
4) If there are remaining alternatives, go to step 1.
A result of such an analysis is shown in Figure 3-11.
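Assuming precise expected values are available, the four-step procedure above can be sketched as follows. The alternative names and numbers are made up, and the indifference interval is given here directly in value units rather than as a percentage of the scale.

```python
def total_ranking(expected, indifference):
    """Rank alternatives by expected value; alternatives within
    `indifference` of the best remaining one share its rank.

    `expected` maps alternative name -> expected value at the chosen
    contraction level.
    """
    remaining = dict(expected)
    ranks, rank = {}, 1
    while remaining:                       # step 4: repeat while any remain
        best = max(remaining.values())     # step 1: best remaining value
        tied = [a for a, v in remaining.items()
                if best - v <= indifference]   # step 2: within indifference
        for a in tied:                     # step 3: remove the ranked ones
            ranks[a] = rank
            del remaining[a]
        rank += 1
    return ranks

# "Wait" is a hypothetical third alternative for illustration.
print(total_ranking({"Invest": 37.0, "Not invest": 0.0, "Wait": 35.5}, 5.0))
# -> {'Invest': 1, 'Wait': 1, 'Not invest': 2}
```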
Cardinal Ranking presents an overview of the respective range of the alternatives' expected values at a specified level of contraction. A result of such an analysis is shown in Figure 3-13, where the ranges of the expected values can be seen at 0% contraction level.
The comparisons take account of value relations between consequences belonging to different alternatives. If there are no such relations between the alternatives, the result is the same as taking the difference between two single evaluations.
a. Compare alternatives pair-wise (compare two alternatives against each other).
b. Compare alternative to average (compare one alternative against an average of the other alternatives).
c. Single alternative (study the expected value of a single alternative).
Size of evaluation step sets the number of calculated points in the evaluation graph, i.e., how fast the interval contractions should be performed. An evaluation step of 20% gives the smallest number of evaluation steps, and the calculation process will run quicker. Before the evaluation, you choose the Contraction Mode, i.e., whether to contract only the probability base or only the value base, or to pre-contract one of them to the most probable point first. The default is to contract both the probability and value bases; normally you just use the default choice. When OK is pressed, the evaluation graphs are calculated and presented in a new window. The Expected Value Graph button in the toolbar corresponds to Expected Value Graph in the Evaluate menu.
In this dialog box, several graphs can be seen. The graphs show the cumulative risk profiles for the minimum and maximum possible values (0% contraction) as well as the average of these two. In Figure 3-15, none of the alternatives statistically dominates the other. However, in Figure 3-16, alternative 1 (middle graph) statistically dominates alternative 2.
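For precise probabilities and values, a cumulative risk profile is simply the running sum of probability in increasing order of value. The following sketch illustrates the concept with made-up numbers; DecideIT works with interval bases and therefore produces lower, upper and average profiles rather than a single one.

```python
def risk_profile(consequences):
    """Cumulative risk profile: P(outcome <= x) at each distinct value x.

    `consequences` is a list of (probability, value) pairs with
    probabilities summing to one.
    """
    points, cumulative = [], 0.0
    for p, v in sorted(consequences, key=lambda c: c[1]):
        cumulative += p
        points.append((v, cumulative))
    return points

# Hypothetical alternative with three consequences:
print(risk_profile([(0.25, -100), (0.5, 60), (0.25, 140)]))
# -> [(-100, 0.25), (60, 0.75), (140, 1.0)]
```

One alternative statistically dominates another when its profile lies entirely on or below the other's, meaning less probability mass on poor outcomes at every value level.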
Red colour indicates that the expected value is influenced in a negative way, and green colour indicates a positive influence on the expected value. A representation of this is shown in Figure 3-17.
In the screenshot above, consequence C4 has the most critical probability assignment and consequence C5 the most critical value assignment, considering the impact on the expected value. The Critical probabilities button in the toolbar corresponds to Critical probabilities in the Evaluate menu. Critical probabilities apply to decision trees having event nodes (red circles). The Critical values button in the toolbar corresponds to Critical values in the Evaluate menu. Critical values apply to both decision trees and multi-criteria models; for the latter, the values are also weighted. The Critical weights button in the toolbar corresponds to Critical weights in the Evaluate menu.
c. Pessimism-Optimism Index. This rule can be regarded as a mixture of maximin and maximax. Let the number A in the interval [0,1] be the index: when A = 1 we are just as pessimistic as in maximin, and when A = 0 we are just as optimistic as in maximax. Slightly more formally: let A in [0,1] be the PO-index, let Pi denote the value of the best consequence of each alternative i, and let Qi be the value of the worst consequence of each alternative i. The decision rule chooses the alternative whose A * Qi + (1 - A) * Pi is greatest. Again, note that when A = 1 the rule is the same as maximin, and when A = 0 the rule is the same as maximax. In the graphs to the left in the window, DecideIT lets the index assume the values 0, 0.2, 0.4, 0.6, 0.8 and 1 for all evaluated alternatives, and each graph corresponds to an alternative.
d. Value Span. Choose the alternative whose maximum consequence value span is lowest.
e. Principle of Insufficient Reason. This rule is based on the assumption that if the probabilities of the different consequences are completely unknown, they can be assumed to be equal. Choose the alternative such that the average most likely point value of the possible consequences is maximized.
The Extreme Values button in the toolbar corresponds to Extreme Values in the Evaluate menu.
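The pessimism-optimism rule can be sketched directly from its definition. The extreme values -100 and 260 for Invest and the constant 0 for Not invest are taken from the investment example earlier in this manual; the implementation itself is an illustrative sketch, not DecideIT's.

```python
def po_choice(alternatives, A):
    """Pessimism-optimism rule: A = 1 reduces to maximin, A = 0 to maximax.

    `alternatives` maps name -> (worst value Q, best value P).
    """
    return max(alternatives,
               key=lambda a: A * alternatives[a][0]
                             + (1 - A) * alternatives[a][1])

alts = {"Invest": (-100, 260), "Not invest": (0, 0)}
print(po_choice(alts, 1.0))  # maximin  -> Not invest
print(po_choice(alts, 0.0))  # maximax  -> Invest
```

Sweeping A from 0 to 1, as the DecideIT graphs do, shows at which degree of pessimism the recommended alternative flips.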
If the value intervals of two consequences overlap and no ordering value relation is set between the consequences, they will be assumed to be indifferent in this evaluation and will not be present in the window. The order is presented from top to bottom, i.e., the most preferred consequence(s) is at the top and the least preferred consequence(s) is at the bottom. See Figure 3-19. Unordered consequences will not be present in the window. The Preference Order button in the toolbar corresponds to Preference Order in the Evaluate menu.
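A preference order of this kind is a topological ordering of the relation graph. The sketch below groups consequences level by level, so that mutually unordered consequences share a level; the relations are hypothetical, and the code assumes the relation set is acyclic.

```python
# Hypothetical strict relations: each pair means (better, worse).
relations = [("C1", "C2"), ("C2", "C3"), ("C1", "C4")]

def preference_order(relations):
    """Group consequences into preference levels, most preferred first."""
    nodes = {c for pair in relations for c in pair}
    beats = {c: set() for c in nodes}          # c -> consequences c beats
    for better, worse in relations:
        beats[better].add(worse)
    order = []
    while nodes:
        # "Top" consequences: nothing remaining is preferred to them.
        top = {c for c in nodes
               if not any(c in beats[o] for o in nodes if o != c)}
        order.append(sorted(top))
        nodes -= top
    return order

print(preference_order(relations))  # -> [['C1'], ['C2', 'C4'], ['C3']]
```

Here C2 and C4 end up on the same level because no relation orders them against each other, mirroring the indifference treatment described above.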
7.3 Settings
Use this command to edit general settings for each decision model.
a. Default Settings. Press this button to restore the default settings.
b. State probability in percentage. If set, probabilities are stated as percentages (0%-100%) instead of on the 0-1 scale.
c. Allow usage of solver optimization. If this is set (highly recommended), the computational kernel of DecideIT performs approximations in cases where there are dependencies between alternatives (value relations) and the values of the consequences within each alternative cannot be strictly ordered.
d. Use approximate contraction point. If this is set (highly recommended), approximations are allowed in the computational steps of computing contraction points, resulting in a significant increase in speed.
e. Auto save every X minutes. If set, your model will be automatically saved every X minutes.
f. Explanation width. Sets the width of the yellow rectangles in the tree. Increase this size if the sentences are too wide to fit within the rectangle.
Figure 3-21: Right-click pop-up menu when right-clicking on the initial decision node (green square).
a. Increase the number of alternatives. Left-click the green box to open the dialog box Add Alternatives, or right-click the green box (which then turns yellow) and select Add alternative(s). Enter the identified number of alternatives in the problem structure. The minimum number of alternatives is two and the maximum is eight. During the design phase it is of course possible to add and remove alternatives from the decision structure.
b. Copy the tree to another location. Right-click a node and choose Copy tree. Choose a node in another tree and select Paste.
c. Hide Sub-nodes. The command Hide sub-nodes is useful for large tree structures. Right-click the green box (which then turns yellow) and select Hide sub-nodes. The command will collapse (hide) the tree structure from the specific node you are editing. To unfold the collapsed tree structure, repeat the procedure with the command Show sub-nodes.
d. Label the identified decision structure. Left- or right-click the light yellow box to open the dialog box Node Properties: D1, or right-click the green box (which then turns yellow) and select Node properties. It is also possible to select Decision from the Open Node menu to open the Node Properties dialog box for the specific node. The decision structure can be labelled both with a short name under Tree name, shown in the yellow box, and with an extended description of the structure under Decision.
a. Identify the number of sub-nodes (consequences) for each specific event. Left-click the blue triangle to open the dialog box Add Nodes, or right-click the blue triangle (which then turns yellow) and select Add node(s). Enter the identified number of sub-nodes in the problem structure. The minimum number of sub-nodes is two and the maximum is 512; the maximum number of sub-nodes altogether is around 900. During the design phase it is of course possible to add and remove sub-nodes from the decision structure.
b. Convert nodes. Right-click a node and choose Convert to probability/decision node. The command shown depends on the selected node type.
c. Delete a branch. Right-click a node (which then turns yellow) to open the node menu and select Delete branch. A Delete branch dialog box will open, asking you to confirm the deletion.
d. Move branches up and down. Right-click a node and choose Move up/Move down. This enables you to move a branch up or down in the tree.
e. Copy node/branch. If two nodes are identified as containing identical values, the command Copy node/branch is useful. Right-click the blue triangle (which then turns yellow) to open the node menu and select Copy node/branch. Select the specific node where you want to Paste the previously copied node.
f. Hide Sub-nodes. The command Hide sub-nodes is useful for large and complex tree structures. Right-click the green box (which then turns yellow) and select Hide sub-nodes.
g. Choose/Disregard/Regard alternative. The commands Choose/Disregard/Regard alternative are primarily used when handling multiple decisions in the same tree. Multiple decisions are described below. Right-click the green box (which then turns yellow) and select Choose/Disregard/Regard alternative.
h. Label the identified event nodes and consequence nodes. Left- or right-click the light yellow box to open the dialog box Node Properties, or right-click a node (which then turns yellow) and select Node properties. The event node/consequence node can be labelled partly with a short name under Scenario, which appears in the yellow box, and partly with an extended description of the structure under Extended explanation of scenario. It is also possible to select Event or Consequence from the Open Node menu to open the dialog box Node Properties: D1. i. Edit the probabilities of an event node. Left- or right-click the light yellow box to open the dialog box Node Properties, or right-click the blue box (which then turns yellow) and select Node properties. The valid probability statements are: a precise probability P, an interval I, an interval with a most likely point I+P, and a probability template PT. The probabilities must be consistent with the given constraints, and the fully consistent probabilities are shown to the right as hull probabilities. The consistency checks are performed when OK or Apply is pressed in the node property frame. If the assigned probabilities cannot be made consistent, you will be asked to modify your probability statements. The probabilities may be given as a percentage between 0% and 100% or on the 0-1 scale; see Settings.
j. Edit the value of an event node. Left- or right-click the light yellow box to open the dialog box Node Properties, or right-click the blue box (which then turns yellow) and select Node properties. The valid value statements are: a precise value P, an interval I, and an interval with a most likely point I+C. The values must be consistent with the given constraints, and the fully consistent values are shown at the bottom as hull values. The consistency checks are performed when OK or Apply is pressed in the node property frame. If the assigned values cannot be made consistent with the given value relations, you will be asked to modify your value statements.
10 Evaluation Windows
The evaluation windows are Security Thresholds, Total Ranking, Cardinal Ranking, Expected Value Graph, Cumulative Risk Profile, Critical Probabilities/Values, Total Ranking All Alternatives, Cardinal Ranking All Alternatives, Expected Value Graph All Alternatives, Extreme Values, and Preference Order. These windows contain menus, a toolbar, and a frame showing the result. The choices available in the menu and toolbar differ slightly depending on the evaluation window. The menus contained in the windows are:
10.1 File
a. Select Export Analysis to JPEG-format from the File menu. The Export button in the toolbar corresponds to Export Analysis to JPEG-format in the File menu. b. Name the image in the File name panel. c. Click Save. The main purpose of the command Export Analysis to JPEG-format is to facilitate further documentation and representation of problem structures.
10.2 Edit
The Edit menu contains the following commands:
Set y-scale (Expected value graphs and Cardinal ranking)
Reset y-scale (Expected value graphs and Cardinal ranking)
Set Color
a. Set y-scale. Use this command to set the vertical scale in the evaluation window. This will only change the graphical presentation of the graph. b. Reset y-scale. This command sets the vertical scale in the evaluation window to be set automatically, which is the default. c. Set Color. Use this command to change the colour of the background and some objects.
10.3 View
The View menu contains the following commands:
Hide window
Compare positive graphs (Expected value graphs)
Numerical (Expected value graphs)
Contraction (Expected value graphs)
Size (Expected value graphs)
a. Hide window. Use this command to hide all evaluation windows, e.g., windows containing evaluation graphs, security thresholds, critical values, etc. The Hide/Show button in the toolbar corresponds to Hide window in the View menu. Use this button to show the evaluation window again. b. Compare Positive Graphs. Use this command to compare the positive graphs of two alternatives instead of one positive and one negative. Consider a comparison between Alt. 1 and Alt. 2. Comparing the positive graphs means that we compare the upper line in the comparison Alt. 1 against Alt. 2 with the upper line in the comparison Alt. 2 against Alt. 1 in the default evaluation graphs. The corresponding button in the toolbar matches Compare Positive Graphs in the View menu. c. Numerical. Use this command to show some of the calculated values in the graph.
d. Contraction. Use this command to show the level of contraction along the x-axis. e. Size. Use this command to set the size of the evaluation windows. The options are small and large, where small is the default size. The Show Small/Large Graph button in the toolbar corresponds to Small/Large in the View menu.
10.4 Update
The Update menu contains the following commands:
a. Update. Use this command to update the evaluation according to new parameters in the decision tree. The Update button in the toolbar corresponds to Update in the Update menu. b. Automatic update. When Automatic Update is selected, the evaluation graph will automatically be updated according to new parameters in the decision tree. By default the evaluation windows are updated manually, since updating many open evaluation windows at once may take some time.
11 Multiple Decisions
In Figure 3-34, you can see that there are three decisions involved. Two of them are crossed. This means that you have to evaluate decision D2 first. Obviously, you should choose Alt. 1 High, since it has a higher value. 1. Right-click the node C2 and select Choose alternative. This means that you decide that the alternative High in decision D2 is the preferred one. The other alternative (Low) then becomes grey and the cross in decision node D3 disappears. This means that you can evaluate decision node D3. See Figure 3-35.
Obviously, you should choose Alt. 1 Modify, since it has a higher value (60) than Not modify (25). 2. Right-click the node D2 and select Choose alternative. The other alternative (Not modify) then becomes grey and the cross in decision node D1 disappears. This means that you can make a decision also in node D1. Now, you should choose Alt. 2 Buy, since it has a higher value (60) than Sell (50). 3. Right-click the node D3 and select Choose alternative. The other alternative (Sell) then becomes grey and you have finished the analysis. The decision sequence to make is then [Buy, Modify, High]. See Figure 3-36.
You can also reset a selection by right-clicking a grey node and selecting Regard Alternative. Instead of choosing an alternative, you can also select Disregard Alternative for the other ones. A further way of making these selections is to right-click a node, select Node Properties, and then click the Alt. status tab in the dialog box that appears. Here you can change the status. See Figure 3-37.
This part describes general concepts and procedures in the area of decision analysis and the theoretical background of the tool DecideIT. The content is provided as general background to the area, and parts of it might be hard to grasp at a first reading. However, a complete understanding of the details is not necessary when working with DecideIT, and this part can be skipped. The background theory of DecideIT is provided in Chapter 20.
12 Historical Background
Cogito, ergo sum. Following these words, Descartes concluded the existence of free will without the presence of pre-determinism. In a non-deterministic world, we are capable of choosing for ourselves from the possible courses of action we identify. But with the right to choose comes the responsibility for the consequences of our actions. It is up to ourselves to discriminate between the different alternatives, and we are expected to do the right thing. The majority of such discriminations are trifling little choices, natural parts of our everyday lives, but some are of such importance that a structured approach is desired and a careful analysis is undertaken before choosing and implementing a particular course of action. However, the origin of the field of decision analysis can be traced back beyond any Descartes meditation, since the theory evolved from the statistical aspects of games. Fibonacci's Liber Abaci (1202) and Paccioli's Summa de arithmetica, geometria et proportionalità (1494) constitute crucial early written work on such questions. Paccioli raises the question of how the stakes should be divided between two players of balla, who have agreed to play until one of them wins six rounds, but are interrupted and cannot continue when one player has won five rounds and his counterpart has won three ([David, 1962], p. 37). Later, Gerolamo Cardano (1501-1576) tried to answer the question in his Liber de ludo aleae (1663), in which he formulated the fundamental concept of solving a probability problem by identifying a sample space with equally likely outcomes. Pierre Rémond de Montmort further stimulated the early work on probability theory in his Essay d'Analyse sur les Jeux de Hazard (1708), where he wanted to show superstitious gamblers how to behave rationally.
Other important early contributors to a general theory of probability include Blaise Pascal (1623-1662) and Pierre de Fermat (1601-1665), who, after they encountered a gambling question from the French nobleman Antoine Gombaud (a.k.a. Chevalier de Méré, 1607-1684), initiated an exchange of letters in which fundamental principles of probability theory were formulated. Gombaud's game consisted in throwing two six-sided dice 24 times, and the problem was to decide whether or not to bet even money on the occurrence of at least one pair of sixes among the 24 throws. A seemingly well-established but deceptive gambling rule had led Gombaud to believe that betting on a double six in 24 throws would be profitable; however, his calculations had indicated the opposite.
The importance of statistics grew in the 17th and 18th centuries with the introduction of life annuities and insurance. Mortality statistics and life annuities were research areas of Abraham de Moivre (1667-1754), and in his Doctrine of Chances (1718) de Moivre defines statistical independence. Later, in Miscellanea Analytica (1730), the same de Moivre introduced the normal distribution as an approximation of the binomial distribution for use in the prediction of gambles. In the second edition of Miscellanea Analytica (1738), de Moivre improved the formula for the normal distribution with the support of James Stirling (1692-1770). Furthermore, Reverend Thomas Bayes (1702-1761), an English Presbyterian minister, famous for his posthumously published An Essay Towards Solving a Problem in the Doctrine of Chances (1763), introduced the widely applied Bayes' theorem and the concept of Bayesian updating. As a result, Bayes is credited with the introduction of subjective probability theory as well as the theory of information. Bayes' conclusions were later accepted by Pierre-Simon Laplace (1749-1827), and published in his double volume Théorie Analytique des Probabilités (1812). In this comprehensive work, Laplace investigated generating functions, approximations to various expressions occurring in probability theory, methods of finding probabilities of compound events when the probabilities of their simple components are known, and a discussion of the method of least squares. Alongside the early development of a theory of probability, the Swiss physician and mathematician Daniel Bernoulli (1700-1782) wrote a landmark paper, Specimen Theoriae Novae de Mensura Sortis (1738), in which a motivation for the concept of utility is given, commonly referred to as his solution to the famous St. Petersburg Paradox posed in 1713 by Daniel Bernoulli's cousin, Nicolaus Bernoulli. The name St. Petersburg Paradox is due to the fact that the distinguished Bernoulli family was in many ways connected to St. Petersburg. In this paradox, Nicolaus Bernoulli considered a fair coin, defined by the property that the probability of heads is 1/2. This coin is tossed until heads appears. The gambler is rewarded with 2^n ducats if the first heads appears on the n:th trial. The expected monetary value of this game is

EMV = Σ (n=1 to ∞) (1/2^n)·2^n = (1/2)·2 + (1/4)·2^2 + (1/8)·2^3 + ... = 1 + 1 + 1 + ... = ∞   (emv)

Thus, it is infinite. It is nevertheless difficult to believe that any gambler would be willing to pay an infinite amount of money to participate in such a game. Bernoulli therefore concluded that the expected monetary value is inappropriate as a decision rule. Bernoulli's solution to this paradox involved two ideas that have had great impact on economic theory. Firstly, he
stated that the utility of money cannot be linearly related to the amount of money; it rather increases at a decreasing rate.
To make this clear it is perhaps advisable to consider the following example: Somehow a very poor fellow obtains a lottery ticket that will yield with equal probability either nothing or twenty thousand ducats. Will this man evaluate his chance of winning at ten thousand ducats? Would he not be ill-advised to sell this lottery ticket for nine thousand ducats? To me it seems that the answer is in the negative. On the other hand I am inclined to believe that a rich man would be ill-advised to refuse to buy the lottery ticket for nine thousand ducats. If I am not wrong then it seems clear that all men cannot use the same rule to evaluate the gamble [...] the value of an item must not be based on its price, but rather on the utility it yields. The price of the item is dependent only on the thing itself and is equal for everyone; the utility, however, is dependent on the particular circumstances of the person making the estimate. (Bernoulli, 1954, p.23)
Bernoulli identified the value of the consequences of a choice as being different from the objective economic outcome, commonly referred to as the idea of diminishing marginal utility. Bernoulli's second idea is that a person's valuation of a risky prospect is not the expected return of that prospect, but rather the prospect's expected utility,

E(u | p, X) = Σ (x∈X) p(x)u(x)

where X is the set of possible outcomes, p(x) is the probability of a particular outcome x ∈ X, and u: X → R is a utility function over the outcomes X on the real numbers. Thus, expected utility is the mathematically expected value when subjective utility is taken into account. In the St. Petersburg Paradox, the value of the game is finite due to the principle of diminishing marginal utility. Originally Bernoulli employed a logarithmic utility function, u(x) = log(α + x), where α depends on the gambler's wealth prior to the gamble itself, and x is the outcome. Substituting u(2^n) for 2^n in (emv) yields a finite number. Consequently, people would only be willing to pay a finite amount of money to participate, even though the expected monetary value of the game is infinite.
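The convergence argument can be illustrated numerically. The following Python sketch (purely illustrative, not part of DecideIT) sums truncations of the St. Petersburg series: under the identity (monetary) utility every term contributes exactly 1 and the partial sums grow without bound, while under a logarithmic utility u(x) = log x the series converges to the finite value 2·ln 2.

```python
import math

def st_petersburg(n_terms, utility):
    """Partial expected utility of the St. Petersburg game: the first heads
    on trial n (probability 1/2^n) pays 2^n ducats."""
    return sum((0.5 ** n) * utility(2.0 ** n) for n in range(1, n_terms + 1))

# Expected monetary value: every term is exactly 1, so the partial sums diverge.
assert st_petersburg(100, lambda x: x) == 100.0
assert st_petersburg(200, lambda x: x) == 200.0

# Expected log-utility: the series converges to 2*ln(2), roughly 1.386.
assert abs(st_petersburg(100, math.log) - 2 * math.log(2)) < 1e-9
```

Doubling the number of terms doubles the expected monetary value but leaves the expected log-utility essentially unchanged, which is Bernoulli's point.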
axioms, he could justify a procedure to measure a person's degree of belief from preferences between acts of certain forms. Preceding Ramsey's work, the concept of degree of belief as an approach to subjective probability had been introduced by John Maynard Keynes (1883-1946) in his A Treatise on Probability (1921). Subjective probability, as opposed to objective probability, means that the different values reflect the decision-maker's actual beliefs; thus they are a measure of the degree of belief in a statement. These beliefs are not necessarily logical or rational, but they should be interpreted in terms of the willingness to act in a certain way.
[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed. (Keynes, 1937)
In contrast, an objective or classic view on probabilities, as defined by Laplace, says that probabilities are exogenously given by nature. In Probability, Statistics and Truth (1928), Richard von Mises (1883-1953) introduced the relative frequency view, which argues that the probability of a specific event in a particular trial is the relative frequency of occurrence of that event in an infinite sequence of similar trials. The modern and formal approach to game theory is attributed to John von Neumann (1903-1957), who in Zur Theorie der Gesellschaftsspiele (1928) laid the foundation of a theory of games and conflicting interests. Later he wrote, together with Oskar Morgenstern (1902-1976), the important book Theory of Games and Economic Behaviour (1947), in which they introduced a considerable number of important elements, such as the axiomatization of utility theory per se and a formalization of the expected utility hypothesis. This axiomatization is sometimes deemed reasonable to a rational decision-maker, and it is demonstrated that the decision-maker is obliged to prefer the alternative with the highest expected utility in order to act rationally, given that she acts in accordance with the axioms. Of further importance, through this work von Neumann and Morgenstern bridged the gap between the mathematics of rationality and social science. However, von Neumann and Morgenstern did not take subjective probability into account, since they regarded probability in an objective sense and thus the decision-maker could not influence the probabilities. Leonard J. Savage (1917-1971) combined the ideas of Ramsey with those of von Neumann and Morgenstern
in The Theory of Statistical Decision (1951). Savage here gives a thorough treatment of a complete theory of subjective expected utility and associated utility functions. In Statistical Decision Functions (1950), Abraham Wald (1902-1950) makes use of loss functions and an expected loss criterion, as opposed to utility functions and expected utility criteria. Loss functions and expected loss criteria later became standard basic elements in what is commonly referred to as Bayesian or statistical decision theory. The name Bayesian derives from the fact that this theory utilizes prior information and non-experimental sources of information. However, in the general case it is easy to adjust Wald's statistical decision theory to include utilities (cf. Savage, 1972, p.159). Further, Wald had an objective view on probabilities. His concern focused on characterizing admissible acts and strategies for experimentation, where an act or strategy is admissible if no other act is better. Hence, Wald's decision analysis could result in a family of admissible strategies, i.e., the non-dominated set of strategies. In recent literature, many modern characterizations of decision theory and decision analysis have been suggested. Simon French, Ralph Keeney, Michael D. Resnik, and Peter Gärdenfors and Nils-Eric Sahlin, respectively, have given their, more or less technical, views on the area as follows:
Decision analysis is the term used to refer to the careful deliberation that precedes a decision. More particularly it refers to the quantitative aspects of that deliberation. (French, 1988, p.27)

A philosophy, articulated by a set of logical axioms, and a methodology and collection of systematic procedures, based upon those axioms, for responsibly analyzing the complexities inherent in decision problems. (Keeney, 1982, p.806)

Decision theory is the product of the joint efforts of economists, mathematicians, philosophers, social scientists, and statisticians toward making sense of how individuals and groups make or should make decisions. (Resnik, 1987, p.3)

The main aims of a decision theory are, first, to provide models for how we handle our wants and our beliefs and, second, to account for how they combine into rational decisions. (Gärdenfors and Sahlin, 1988, p.1)
Solving decision problems computationally is usually categorized as belonging to the area of optimization, and in particular linear optimization subject to some linear constraints. Typically, such questions are of the form: what is the maximum/minimum value of this variable subject to these constraints? When discussing decision problems, such constraints typically include economic, time, or personnel aspects. The use of formal methods and mathematics for evaluating possible strategies had an important upswing during the Second World War, and following this war the terms operations analysis and operations research became closely related to decision analysis and optimization techniques. Later, the originally military area of operational research came to be studied together with topics such as management science, industrial engineering, and mathematical programming. At the present time, the widespread use of computers and the rise of the graphical user interface have made it possible to bring decision analytic techniques to a wider group of users. The growth of operational research since it began is, to a large extent, the result of the increasing computational power and widespread availability of desktop computers. Finally, due to the well-foundedness of decision theory, research in artificial intelligence has merged classical theories of decision making with other techniques for handling uncertainty into a sub-field of artificial intelligence commonly referred to as uncertain reasoning.
hypothesis occurs when gains are replaced by losses in choosing between alternatives with uncertain outcomes; people tend to be less keen on risk taking when gains are involved rather than losses (Markowitz, 1952). However, the perspective of main interest here is of the normative kind. The aim of normative decision theory is to recommend various decision procedures and decision rules that imply rational decision making when followed. In this case, the logical foundations and the validity of the model do matter. The proponents of such models often argue for them by constructing axiom systems (like the one of Savage presented below), and then deduce some decision rules, which induce a (normative) preference order on a set of alternatives. The area of decision tools is clearly derived from the normative kind of decision theory. According to Danielson (1997, p.2), this area contains approaches which deal with mechanizing the structuring and analysis of decision situations. A salient idea is to model the situations according to a normative model of rational behaviour. Presuming the decision-maker to be rational, the mechanical model can devise suitable courses of action given the supplied information. A decision analytic tool then handles a smaller number of alternative courses of action and supports the evaluation and selection of those alternatives. Such a tool aids the human decision-maker in her search for a preference order of a set of alternatives and in her striving for rationality. Prescriptive decision theory is a more recent perspective. The prescriptive theory focuses on identifying the discrepancies between how decisions are made (descriptive) and how the normative theory suggests they should be made (Riabacke, 2002). One purpose of the theory is to bridge the gap between decision analysis and actual decision making.
13 Probability Theory
Something that is not certain is a matter of some uncertainty. When a decision-maker has to act in situations where uncertainty prevails, and this uncertainty can be quantified in terms of a probability measure, it is said that the decision is made under risk. In Bayesian decision theory, probabilities are used to capture and model beliefs. Thus, they are considered to be measures of degrees of belief. Needless to say, performing statistical investigations to obtain these degrees of belief is recommended, but in many real-life situations historical data is not available and the probability assessment has to be made on more subjective grounds. Although the theories of probability can be traced back to the 16th century, the foundations of modern probability theory were laid by Andrey Nikolaevich Kolmogorov (1902-1987). Kolmogorov rigorously constructed a probability theory from fundamental axioms, defined conditional expectation, and laid the foundations of Markov random processes in Grundbegriffe der Wahrscheinlichkeitsrechnung (1933) and in Analytic Methods in Probability Theory (1938). The basic formulas of probability calculus usually take the form P(A) = pA, read as the probability of the uncertain event A is pA, where pA ∈ [0,1] is a real number. For example, A can be the statement it will not rain on your next birthday and you will receive at least ten gifts. Every event is a subset of a sample space Ω, supposed to capture every possible event. The Kolmogorov axioms are usually stated as follows:
1. 0 ≤ P(A) ≤ 1, for all events A
2. P(Ω) = 1
3. If A and B are mutually exclusive events, then P(A ∪ B) = P(A) + P(B), and P(A ∩ B) = 0.
The second axiom can be interpreted as saying that it is certain that one of the events in the sample space will be the true outcome, i.e., a condition of exhaustiveness. Conditional probability arises when additional information is obtained, and is formulated as P(A | B), which can be interpreted as: the probability of A given B.
Thus, the decision-maker knows that B is true and this might have impact on the probability of A. For example in medical applications, a test yields a positive result, which in turn implies some probability of an actual disease.
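The axioms are easy to check mechanically for a discrete distribution. The following Python sketch is purely illustrative (not part of DecideIT); the fair-die distribution is a made-up example.

```python
def is_probability_distribution(p, tol=1e-9):
    """Check axioms 1 and 2 for a discrete distribution over a sample space:
    every probability lies in [0, 1] and the probabilities sum to 1."""
    nonneg = all(0.0 <= v <= 1.0 for v in p.values())
    exhaustive = abs(sum(p.values()) - 1.0) < tol
    return nonneg and exhaustive

die = {face: 1.0 / 6.0 for face in range(1, 7)}  # a fair six-sided die
assert is_probability_distribution(die)

# Axiom 3: for mutually exclusive events, P(A union B) = P(A) + P(B).
p_even = sum(die[f] for f in (2, 4, 6))
p_one = die[1]
p_union = sum(die[f] for f in (1, 2, 4, 6))
assert abs((p_even + p_one) - p_union) < 1e-9
```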
Definition: Conditional probability: P(A | B) = P(A ∩ B) / P(B).

Definition: Independence: Events A, with outcomes {A1, ..., An}, and B, with outcomes {B1, ..., Bm}, are independent if and only if P(Ai | Bj) = P(Ai) for all Ai and Bj.
Definition: Conditional independence: Events A and B are conditionally independent given event C if and only if P(Ai | Bj, Ck) = P(Ai | Ck).
Theorem: Bayes' Theorem: P(B | A) = P(A | B)P(B) / (P(A | B)P(B) + P(A | ¬B)P(¬B)), where ¬B denotes not B.
It follows from these definitions that two mutually exclusive events cannot be independent. The set of probabilities associated with all possible outcomes is a probability distribution. When the sample space consists of a discrete set of outcomes, the probability distribution on it is discrete.
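Continuing the medical-test example above, Bayes' theorem can be applied directly. The sketch below is illustrative Python with made-up numbers (1% disease prevalence, 95% true-positive rate, 5% false-positive rate), not part of DecideIT.

```python
def bayes_posterior(p_b, p_a_given_b, p_a_given_not_b):
    """Bayes' theorem: P(B|A) = P(A|B)P(B) / (P(A|B)P(B) + P(A|not B)P(not B))."""
    numerator = p_a_given_b * p_b
    return numerator / (numerator + p_a_given_not_b * (1.0 - p_b))

# B = "patient has the disease", A = "test is positive" (made-up rates).
posterior = bayes_posterior(p_b=0.01, p_a_given_b=0.95, p_a_given_not_b=0.05)
assert abs(posterior - 0.0095 / 0.059) < 1e-12
```

Despite a positive test, the posterior probability of disease is only about 16%, because the disease is rare: the prior P(B) dominates the update.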
14 Utility Theory
The term utility can be regarded as a measure of some degree of satisfaction, and a utility function is a mapping from outcomes, i.e., losses or gains, to real numbers representing this degree of satisfaction. The logarithmic utility function defined by Bernoulli was in itself considered adequate for almost two hundred years. However, Karl Menger (1902-1985) showed in his Das Unsicherheitsmoment in der Wertlehre (1934) that the Bernoulli function was heuristic and ad hoc, since the function was unsatisfactory already on formal grounds. Menger showed the existence of a game, related to the game presented in the St. Petersburg Paradox, in which the subjective expectation of the gambler is infinite when additions to a fortune are evaluated by any unbounded utility function (Menger, 1934, p.264). The implication of this is that it is always possible to construct a paradox, in the important respects equivalent to the St. Petersburg Paradox, which cannot be resolved through the idea of diminishing marginal utility alone. Menger also showed the inadequacy of mathematical utility functions of the type suggested by Bernoulli's contemporary Gabriel Cramer (1704-1752). Consequently, we have to elaborate a bit more on utility theory. Before we continue, we first present some notation: a ≻p b means that the decision-maker holds alternative a to be strictly preferred to alternative b. This binary relation is transitive and asymmetric, thus it is a strict order. a ≽p b means that the decision-maker holds alternative a to be at least as good as alternative b, i.e., a is weakly preferred to b. This binary relation is complete and transitive, thus it is a weak order. a ∼p b means that the decision-maker is indifferent between alternative a and alternative b. This binary relation is reflexive, transitive, and symmetric, thus it is an equivalence relation.
If the decision-maker can assign numbers such that u(a) ≥ u(b) if and only if a ≽p b, then it is said that there exists a utility function over a and b. Utility functions are defined on an interval scale, i.e., they are unique up to a positive affine transformation; such transformations are the only admissible transformations of utility functions. In formal terms: let U be a utility function on a set C of consequences; then there exists α > 0 and β such that W(x) = αU(x) + β is a utility function representing the same preferences, i.e., two different interval scales count as equivalent if and only if they can be obtained from each other by means of positive affine transformations. Apart from ratio scales, interval scales do not have an absolute zero (e.g., zero length); nor do they represent the ratio of some measured entity to some standard unit of measurement (e.g., meters or seconds). Thus, on an interval scale, the difference between two degrees has a meaning, while the ratio between two degrees does not. In general, people are willing to pay more money for what they consider to be more desirable. In this respect a monetary scale can at least be expected to be an ordinal scale, i.e., a scale measuring a preference ordering without the possibility to state, e.g., magnitudes of desires. For a majority of business decisions, the use of monetary scales is considered a reasonable and acceptable measure of utility. However, it is not uncommon that monetary values are used to scale non-monetary outcomes, such as public health and environmental damage. In many cases, this problem is due to a lack of means and usable tools for representing and evaluating intangibles and vague valuations. This is particularly troublesome when aggregating ordinal information and can be severely misleading. Ordinal scales are described further below.
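That positive affine transformations preserve the represented preferences can be checked mechanically. The following Python sketch (illustrative only; the utilities are made up) confirms that U and W = αU + β induce the same preference order.

```python
def preference_order(utilities):
    """Indices of the alternatives sorted from most to least preferred."""
    return sorted(range(len(utilities)), key=lambda i: utilities[i], reverse=True)

u = [10.0, 35.0, 20.0, 5.0]            # utilities of four consequences (made up)
alpha, beta = 3.0, 7.0                 # any alpha > 0 and any beta will do
w = [alpha * x + beta for x in u]      # positive affine transformation

assert preference_order(u) == preference_order(w)  # both are [1, 2, 0, 3]
```

With a negative α the order would reverse, which is why only positive affine transformations are admissible.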
15 Decision Modelling
A world can be modelled as having different possible future states, and in many situations it is beyond the capabilities of the decision-maker to tell in advance which state will be the true state. In this world, the decision-maker is an entity facing a choice between a set of alternatives. Every alternative in turn has a set of consequences connected to the states via the alternatives, i.e., given an alternative and a state there is a consequence of the performed alternative. The concern of the decision-maker is to choose the best alternative given the sets of consequences and states. Given this, there are at least four basic types of difficulties:
How should the decision-maker compare the alternatives with respect to the different multiple objectives of the decision?2
How should the decision-maker compare the alternatives for each objective?
How should the decision-maker estimate the probabilities that the given states occur, given that a certain act is performed?
How should the decision-maker estimate the different values of the consequences?
If multiple objectives are not considered, a decision table such as the one in Figure 4-1 is a frequently used representation of a decision problem.

      s1    s2    ...   sn
a1    c11   c12   ...   c1n
a2    c21   c22   ...   c2n
...   ...   ...   ...   ...
am    cm1   cm2   ...   cmn
2 Typical perspectives can be environmental, financial, security, etc. Such concerns will be further demonstrated in section 5.
The possible states (s1,...,sn) describe a set of mutually exclusive (disjoint) and complete descriptions of the world, not leaving any relevant state out. These determine the consequences (such as cij) of the different alternatives (a1,...,am). The true state is the state that does in fact occur. Thus, if the decision-maker selects the alternative a2, and if s3 turns out to be the true state, consequence c23 will occur. An immediate question is which world to use as an adequate frame, and how this world serves as the description of the actual world the decision-maker perceives. Depending on the purpose of the model, this world has to be large enough. To build such a world, with all its relevant state descriptions, usually requires a thorough investigation and analysis. Luce and Raiffa (1957, p.13) provided a useful classification of decision situations, noting that an important factor in every decision problem is the decision-maker's knowledge and beliefs about the situation. They distinguish between the following three types of decision situations:
Decisions under certainty
Decisions under strict uncertainty
Decisions under risk
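A decision table maps an (alternative, state) pair to a consequence. As an illustration only (not part of DecideIT), the table in Figure 4-1 might be represented in Python as a nested mapping, using the placeholder names from the figure:

```python
# Rows are alternatives, columns are states; entries are the consequences c_ij.
decision_table = {
    "a1": {"s1": "c11", "s2": "c12", "s3": "c13"},
    "a2": {"s1": "c21", "s2": "c22", "s3": "c23"},
}

def consequence(alternative, true_state):
    """The consequence that occurs when `alternative` is chosen and
    `true_state` turns out to be the true state of the world."""
    return decision_table[alternative][true_state]

# If the decision-maker selects a2 and s3 is the true state, c23 occurs.
assert consequence("a2", "s3") == "c23"
```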
such a function is all that is needed, since it is enough in this context only to treat cases involving a finite number of consequences.3 Because an ordinal value function can always be constructed, it makes sense to talk about the value of a consequence. This holds also when P is an arbitrary set of objects over which a decision-maker can have preferences.
15.2.1 Laplace
The decision rule of Laplace is based on the assumption that if the probabilities of the different states are completely unknown, then they can be assumed to be equal. This idea is commonly referred to as the principle of insufficient reason. Choose the alternative ak such that the average value of the possible outcomes of this alternative is maximized: maxk (Σj vkj)/n, where 1 ≤ k ≤ m, and where vij denotes the value of cij.
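Under equal state probabilities, the Laplace rule reduces to picking the row with the highest mean value. A minimal sketch in Python, using as input the value matrix from Milnor's example discussed later in this section (Figure 4-2):

```python
def laplace(values):
    """Laplace rule: treat all states as equally likely and choose the
    alternative (row) whose average value over the states is greatest."""
    means = [sum(row) / len(row) for row in values]
    return max(range(len(values)), key=lambda i: means[i])

# Milnor's example matrix (Figure 4-2): rows a1..a4, columns s1..s4.
v = [[2, 2, 0, 1],
     [1, 1, 1, 1],
     [0, 4, 0, 0],
     [1, 3, 0, 0]]
print(laplace(v))  # means are [1.25, 1.0, 1.0, 1.0], so a1 (index 0) is chosen
```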
15.2.2 Wald
Wald's rule can be expressed as follows:
1. Set a security level for each alternative ai by choosing an index pi = min{vij : j = 1,...,n}.
2. Choose ak such that its index pk = max{pi}.
As can be seen, Wald's view on strict uncertainty was not an optimistic one, since according to Wald, you should always choose the alternative that gives the best result if the worst
3 Uncountable sets are treated in (Debreu, 1952) (which demands that you are comfortable with topological arguments) as well as in (Krantz, 1971), Chapter 4. The corresponding result for countable sets can be found in (French, 1988), p.98, together with a simple induction argument.
possible outcome occurs for each alternative. Hence the name, the maximin utility criterion, which originated from Wald's work in game theory.
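The maximin rule above can be sketched in a few lines; the input is again the value matrix from Milnor's example (Figure 4-2):

```python
def wald(values):
    """Wald's maximin rule: for each alternative take the worst (minimum)
    value over all states, then choose the alternative maximizing it."""
    security = [min(row) for row in values]
    return max(range(len(values)), key=lambda i: security[i])

# Milnor's example matrix (Figure 4-2): rows a1..a4, columns s1..s4.
v = [[2, 2, 0, 1],
     [1, 1, 1, 1],
     [0, 4, 0, 0],
     [1, 3, 0, 0]]
print(wald(v))  # security levels are [0, 1, 0, 0], so a2 (index 1) is chosen
```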
15.2.3 Hurwicz
In contrast to Wald, the rule of Hurwicz takes a less pessimistic approach. Hurwicz recommends a mixture of an optimistic and a pessimistic attitude:
1. Select a constant α ∈ [0,1] as the pessimism-optimism index.
2. Let oi = max{vij : j = 1,...,n} and pi = min{vij : j = 1,...,n}.
3. Choose ak such that αpk + (1 − α)ok = max{αpi + (1 − α)oi}.
Note that if α = 1 this is again the maximin utility criterion, whereas if α = 0, it is the so-called maximax utility criterion. Different ways of choosing appropriate pessimism-optimism indices have been presented, but we will not enter into that discussion here.
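The three steps above can be sketched as follows, again using Milnor's example matrix (Figure 4-2); the index value 0.5 is an arbitrary illustration:

```python
def hurwicz(values, alpha):
    """Hurwicz rule with pessimism-optimism index alpha in [0, 1]:
    alpha = 1 reduces to maximin (Wald), alpha = 0 to maximax."""
    def index(row):
        return alpha * min(row) + (1 - alpha) * max(row)
    return max(range(len(values)), key=lambda i: index(values[i]))

# Milnor's example matrix (Figure 4-2); with a fairly optimistic index,
# a3 with its high best-case value 4 wins.
v = [[2, 2, 0, 1],
     [1, 1, 1, 1],
     [0, 4, 0, 0],
     [1, 3, 0, 0]]
print(hurwicz(v, alpha=0.5))  # index values [1.0, 1.0, 2.0, 1.5] -> a3 (index 2)
```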
15.2.4 Savage
In Savage's own words:

[...] the minimax rule recommends the choice of such an act that the greatest loss that can possibly accrue to it shall be as small as possible. (Savage, 1972, p.164)

Informally speaking, the decision-maker should choose the alternative giving the smallest possible regret.
1. Let rij = max{vsj : s = 1,...,m} − vij.
2. Let pi = max{rij : j = 1,...,n}.
3. Choose ak such that pk = min{pi}.
This minimax risk criterion was first suggested as an improvement over Wald's maximin utility criterion. Figure 4-2 shows Milnor's example (Milnor, 1954, p.50) of a decision problem where all of the above criteria give different results.

          s1   s2   s3   s4
    a1     2    2    0    1    Laplace
    a2     1    1    1    1    Wald
    a3     0    4    0    0    Hurwicz (with optimism index > 1/4)
    a4     1    3    0    0    Savage
The question remains: to act rationally, which one of the above rules should be employed? Milnor proved that no decision criterion is compatible with all ten seemingly reasonable axioms that constitute his test set (Milnor, 1954, p.53). As it turns out, it is relatively easy to show that it is impossible to find a decision rule that fulfils all desirable properties. Further, Ackoff (1962) argues that the very concept of strict uncertainty is inappropriate, i.e., strict uncertainty implies that some information or some beliefs are always being disregarded. In DecideIT, it is nevertheless possible, but not recommended, to employ the decision rules suggested by Laplace, Wald, and Hurwicz if they are felt to be appropriate in certain situations.
4 Note here that probabilities are assigned to consequences instead of being assigned to states of the world. These two models are fully compatible when considering only a finite number of states and consequences.
Definition:
If a is an alternative with consequences {ci} occurring with probabilities {pi}, and Va is a real-valued function on {ci}, then a has a value equal to Σi piVa(ci), denoted by EV(a).
Definition:
A decision-maker accepts the utility principle if and only if she assigns the value Σi piVa(ci) to a, given that she has assigned the value Va(ci) to each ci.
Definition:
An ordering p of the alternatives is compatible with the principle of maximizing the expected utility if and only if a p b implies EV(a) ≥ EV(b).
Definition:
A decision-maker accepts the principle of maximizing the expected utility if and only if her ordering of the alternatives is compatible with that principle.
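The utility principle above amounts to valuing an alternative by its probability-weighted value sum. A minimal sketch (the probabilities and values are hypothetical illustrations):

```python
def expected_value(probs, values):
    """EV(a) = sum over i of p_i * V_a(c_i) for the consequences of a."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to one"
    return sum(p * v for p, v in zip(probs, values))

# Two hypothetical alternatives; maximizing expected value picks a over b.
ev_a = expected_value([0.3, 0.7], [100, 10])  # 0.3*100 + 0.7*10 = 37.0
ev_b = expected_value([0.5, 0.5], [50, 20])   # 0.5*50  + 0.5*20 = 35.0
print(ev_a > ev_b)  # True
```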
15.4.1 Axiom Systems
The idea is to define, in a systematic way, the meaning of rationality. The point is that if a decision rule can be deduced from an indisputable axiomatization, then this rule should be the natural and obvious rule for a rational entity, provided that the necessary information is available. Føllesdal (1984, p.268) suggests the following conditions for a decision rule:
- A decision rule should recommend an alternative with valuable consequences before an alternative with less valuable consequences.
- A decision rule should recommend an alternative with high probability of valuable consequences before an alternative with low probability of valuable consequences.
5 Cf., e.g., (Savage, 1972), (Herstein, 1953), (Suppes, 1956), (Luce, 1971), and (Jeffrey, 1983). Surveys over a wide variety of axiomatizations are given in (Fishburn, 1981) and (Malmnäs, 1990b).
- A decision rule should recommend an alternative with low probability of bad consequences before an alternative with high probability of bad consequences.
This seems reasonable, but it is too vague to fill the needs of a normative decision theory and has to be elaborated a bit. For this, we introduce the technique of axiomatization. The axiom systems that will be presented consist of primitives, and of axioms constructed from the primitives. Typical primitives include states, sets of states, and ordering relations such as p. The axioms then imply a numerical representation of probabilities and preferences, i.e., the axioms imply the existence of a probability distribution and a utility function. Although Ramsey (1931) and von Neumann and Morgenstern (1947) are credited with the axiomatic foundation of utility theory, we present the axiom system of Luce and Raiffa (1957), which is very similar to the aforementioned, and later the axiomatic justification of the utility principle according to Savage (1972). At first glance, the two systems seem dissimilar, but the important implications boil down to the same central results. We start with Luce and Raiffa, in which alternatives (or gambles) with uncertain outcomes are called lotteries. An alternative is denoted (p1v1, ..., pivi, ..., prvr), which can be considered as a lottery with probability pi for the outcome vi. All the probabilities are supposed to sum to one. For example, the alternative a with uncertain outcomes v1 and v2, associated with probabilities p1 and (1 − p1) respectively, is represented as the lottery a = (p1v1, (1 − p1)v2).
Axiom 1: Ordering of alternatives and transitivity: For any two alternatives a and b, either a p b or b p a, and if a p b and b p c, then a p c.
Axiom 2: Reduction of compound lotteries: Any compound lottery6 is indifferent to a simple lottery with v1, v2, ..., vr as prizes, in which the probabilities for the prizes in the simple lottery are computed according to ordinary probability calculus.
Axiom 3: Continuity: Each prize vi is indifferent to some lottery involving just v1 and vr.
Thus, there exists some number (or probability) pi ∈ [0,1] such that vi is indifferent to the lottery (piv1, 0v2, ..., 0vr-1, (1 − pi)vr).
6 A compound lottery may be thought of as a mixture of lotteries, i.e., the prize of a lottery consists of another lottery.
Axiom 4: Substitutability (independence of irrelevant alternatives): In any lottery L, vi′ is substitutable for vi, that is, (p1v1, ..., pivi, ..., prvr) is indifferent to (p1v1, ..., pivi′, ..., prvr) whenever vi is indifferent to vi′.
Axiom 5: Monotonicity: (piv1, (1 − pi)vr) p (pi′v1, (1 − pi′)vr) if and only if pi ≥ pi′.
Note that nothing is explicitly said about the origin of the probability distributions; they are just assumed to exist, and thus the view on probabilities is of the objective kind. From these axioms, the principle of maximizing the expected utility, as well as some other important results in utility theory, are readily derived. Shifting our attention to the system of Savage, he argues7 that if utility is regarded as affecting only consequences (rather than acts), then for a weakly ordered consequence set C, the following is valid: φ1(x) and φ2(x) are numerical order-preserving functions representing the ordering relation between the consequences if and only if there is a strictly increasing function r such that, for every ci ∈ C, φ1(ci) = r(φ2(ci)). This shows that φ(ci) is just an ordinal scale: it cannot be interpreted as quantitatively measuring the strength of preferences in any meaningful way.
The probability-less idea of utility of economics has been completely discredited in the eyes of almost all economists, the following argument against it [...] being widely accepted. If utility is regarded as controlling only consequences, rather than acts, it is not true as it is when acts, or at least gambles, are considered and the formal definition in 3,8 is applied that utility is determined except for a linear transformation. Indeed, confining attention to consequences, any strictly monotonically increasing function of one utility is another utility. Under these circumstances there is little, if any, value in talking about utility at all [...] In particular, utility as a function of wealth can have any shape whatsoever in the probability-less context, provided only that the function in question is increasing with increasing wealth, the provision following from the casual observation that almost nobody throws money away. (Savage, 1972, p.96).
7 Savage adopted this argument from Vilfredo Pareto (1848-1923).
8 (Savage, 1972, p.73).
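Savage's point about ordinal scales can be illustrated numerically: a strictly increasing transform of a value function preserves the ordering of the consequences, yet it can change which lottery has the higher expectation. A small sketch with hypothetical numbers:

```python
# Two order-preserving value functions related by the strictly increasing
# transform r(x) = x**2 on [0, 1]: they order the consequences identically.
phi1 = [0.0, 0.6, 1.0]          # hypothetical values of c1, c2, c3
phi2 = [x ** 2 for x in phi1]   # r(phi1): same ordinal information
assert phi2[0] < phi2[1] < phi2[2]

ev = lambda probs, vals: sum(p * v for p, v in zip(probs, vals))
# Lottery A yields c2 for sure; lottery B yields c1 or c3 with equal chance.
a, b = [0.0, 1.0, 0.0], [0.5, 0.0, 0.5]
print(ev(a, phi1) > ev(b, phi1))  # True  (0.6  > 0.5)
print(ev(a, phi2) > ev(b, phi2))  # False (0.36 < 0.5)
```

The two functions carry the same ordinal content but disagree on the lotteries, which is exactly why an ordinal scale cannot support expected-utility comparisons.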
The primitives building up the axiom system of Savage9 differ slightly from those of Luce and Raiffa. Savage proposes the following primitives: (i) the binary preference relation p, (ii) a set S = {s1, s2, ...} of states, (iii) a set C = {c1, c2, ...} of consequences, and (iv) a set F = {f: S → C} of all possible mappings from S to C, where such a mapping is called an act. Now, Savage defines E as the power set of S, where the elements of E are called events, denoted by A, B, C, ..., and further defines the following concepts:
1. For f, g, f′, g′ ∈ F and B, Bc ∈ E, f p g given B if and only if f′ p g′ for every f′ and g′ that agree with f and g respectively on B, and with each other on Bc, and also g′ p f′ either for all such pairs or for no such pair (where Bc is the complement of B).
2. ci p cj if and only if f p f′ when f(s) = ci and f′(s) = cj, for all s ∈ S.
3. B is null (B = ∅) if and only if f p g given B, for all f, g ∈ F.
4. A is not more probable than B (A ≼ B) if and only if fA p fB for every fA, fB, ci, cj such that ci p cj, fA(s) = ci for s ∈ A, fA(s) = cj for s ∈ Ac, fB(s) = ci for s ∈ B, fB(s) = cj for s ∈ Bc.
5. f p ci given B (ci p f given B) if and only if f p h given B (h p f given B), when h(s) = ci for all s ∈ S.
To clarify some of these concepts: in the first, when act f′ agrees with act f on B, performing f′ yields the same consequence as performing f given the event (set of states) B, i.e., f′(s) = f(s) for all s ∈ B. The third concept says that if weak preference holds regardless of which pair of acts is compared given the event B, implying that all acts are indifferent given B, then B is an empty set of states (and vice versa). In the fourth concept, fA and fB are bets that yield the preferred consequence on A and on B respectively, and the less preferred consequence otherwise; comparing such bets reveals which of the two events the decision-maker holds to be more probable (and vice versa).
Now Savage proposes the following assumptions:
Axiom 1: Transitivity: The relation p is a weak order.
Axiom 2: Completeness: For every f, g, and B, f p g or g p f given B.
Axiom 3: Resolution independence: If f(s) = ci and f′(s) = cj for every s ∈ B, with B ≠ ∅, then f p f′ given B if and only if ci p cj.
Axiom 4: Qualitative probability: For every A, B ∈ E, A ≼ B or B ≼ A.
Axiom 5: Minimal strict preference: It is false that, for every ci and cj, ci p cj.
Axiom 6: Continuity: Suppose h p g (strictly); then for every ci there is a finite partition {Bi} of S such that, if g′ = ci on some Bi and g′ = g elsewhere, and h′ = ci on some Bi and h′ = h elsewhere, then h p g′ and h′ p g.
Axiom 7: Dominance: If f p g(s) given B (g(s) p f given B) for every s ∈ B, then f p g given B (g p f given B).
The second axiom says that when two acts have the same consequences, the relation between f and f′ must be independent of the states. Furthermore, the third axiom says that the knowledge of an event cannot overturn a preference between two consequences. Together, axioms 2 and 3 constitute Savage's debated sure-thing principle. Informally, if a decision-maker does not prefer f to g, either knowing that the event B obtained, or knowing that B did not obtain, then the decision-maker does not prefer f to g (Savage, 1972, p.21). Further, from axiom 3 we can deduce that preferences between acts depend only on realized consequences, not on possible ones. The fourth axiom says that ≼ is a qualitative probability: ≼ is a weak order, and B ≼ C if and only if (B ∪ D) ≼ (C ∪ D) when (B ∩ D) = (C ∩ D) = ∅. Furthermore, ∅ ≼ B and ∅ ≺ S (all events are at least as probable as the impossible event, and the universal event S must not be regarded as impossible). Axiom 5 says that there is at least one pair of consequences such that one is strictly preferred to the other, and axiom 6 implies the existence of a unique probability measure P on E. This probability measure is consistent with the qualitative probability in that E is not more probable than E′ if and only if P(E) ≤ P(E′). The last axiom says that if f p g(s) for all consequences of g on a set of states B, then f p g given B; of further importance, this axiom implies that the utility function is bounded (nothing is infinitely bad or infinitely good). Given these assumptions, Savage proved the existence of a real-valued utility function u on C with the following property: let {Li} be a partition of S and let f be an act with consequences {f(si)} on {Li}, and let {Li′} be another partition of S and let g be an act with consequences {g(si)} on {Li′}. Then f p g if and only if Σ piu(f(si)) ≥ Σ qiu(g(si)), where pi = P(Li) and qi = P(Li′), i.e., the principle of maximizing the expected utility. Looking back at the system of Luce and Raiffa, it was proved by von Neumann and Morgenstern (1947) that if a decision-maker has preferences between lotteries, i.e., given that the assumptions in the axiom system are fulfilled, then there is a real-valued utility function, unique up to a positive affine transformation, on the set of lotteries. Furthermore, let Lc = {L1, L2, ...} be a set of lotteries on C (alternatives with uncertain outcomes in the consequence set C); they then showed that the utility function u: Lc → R has the representation u(Li) = Σ pi(ci)u(ci), and Li p Lj if and only if u(Li) ≥ u(Lj). Thus, both axiom systems serve as attempts at a formal justification of the utility principle and the principle of maximizing the expected utility. Due to the subjective vein in the approach of Savage, his theory is often referred to as subjective expected utility (SEU).
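Uniqueness up to a positive affine transformation means that rescaling a utility function as a·u + b with a > 0 can never change which lottery is preferred, in contrast to the arbitrary monotone transforms allowed on ordinal scales. A small numeric sketch with hypothetical numbers:

```python
ev = lambda probs, utils: sum(p * u for p, u in zip(probs, utils))

u = [0.0, 0.6, 1.0]          # hypothetical utilities of c1, c2, c3
w = [3 * x + 7 for x in u]   # positive affine transform a*u + b with a = 3 > 0

# Lottery A: c2 for sure; lottery B: c1 or c3 with equal chance.
a, b = [0.0, 1.0, 0.0], [0.5, 0.0, 0.5]
print(ev(a, u) > ev(b, u))   # True under u (0.6 > 0.5) ...
print(ev(a, w) > ev(b, w))   # ... and still True under w (8.8 > 8.5)
```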
15.4.2
The assumptions in both systems may seem reasonable at first glance, but they have been subject to severe controversy. Human decision-makers tend, under given circumstances, to behave inconsistently with the utility principle. Famous so-called paradoxes include Allais' paradox and Ellsberg's paradox. Allais' paradox shows that people tend to act inconsistently with the sure-thing principle. This paradox derives from a common human behaviour of preferring a good outcome for certain to having a chance between something not as good and something even better. Ellsberg's paradox is quite similar, but it shows people's tendency to prefer known risks to unknown uncertainties, thereby violating the utility principle. Paradoxes of these kinds are often resolved by arguing that even intelligent beings make mistakes, and that after some explanation of the inconsistency in their choices, they change their minds. However, an empirical study by Slovic (1974) has shown that as many as about 30% refuse to change their opinion and conform to the utility principle. Tversky (1981) tries to answer why this is the case, and his conclusion is that irrelevant contextual effects often influence people, making them act inconsistently with the utility principle, i.e., the framing process. Further, it can be argued that it is impossible for any normative theory of decision making to embrace all inherent peculiarities in a free world of heterogeneous decision-making inhabitants.
Furthermore, and independently of this, in real-life decision making the requirements of precise probability and utility estimates are often too strong, making utility theory in this format inapplicable.
15.4.3 Risk Attitudes
Defenders of classical Bayesian decision theory often argue that the concept of utility captures different risk attitudes. The assumption is that to each expected utility there corresponds a certainty monetary equivalent xce. The decision-maker is indifferent between having this monetary value with certainty and performing an alternative with uncertain outcomes, i.e., u(xce) = Σ piu(xi), where u(xi) is the utility of gaining the monetary value xi. The risk premium, p, of an act is now defined as the premium that a decision-maker demands for carrying out the act instead of having the monetary equivalent xce for certain, i.e., p = Σ pixi − xce. With respect to the risk premium p, decision-makers can be classified into three classes: a decision-maker is risk averse if p > 0, risk prone if p < 0, and risk neutral if p = 0. As an example, assume that a decision-maker is in desperate need of a certain amount of money, and any lesser amount would be of no use. For instance, a person may need money for medical treatment of a disease that, if not cured, will result in death. If this person seizes the opportunity of entering a bet with her last funds that gives her a chance of winning an amount sufficient for the treatment to be affordable, this person would probably not be labelled irrational. In this situation, the risk premium p is probably negative. However, some argue that it will never be possible to formalize the decision process with all reasonable risk attitudes by a utility function and an associated risk premium. Many critics emphasize that a majority of the mathematical models of decision analysis are oversimplified. Consider, e.g., the reasons for gambling. Most people would agree that there is a pleasure involved in the pure act of participating in a game with uncertain outcomes.
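With a concave utility function, the certainty equivalent falls below the expected monetary value, giving a positive risk premium. A sketch using u(x) = sqrt(x) as a hypothetical risk-averse utility and a hypothetical 50/50 gamble:

```python
import math

probs = [0.5, 0.5]
payoffs = [0.0, 100.0]   # hypothetical gamble: nothing or 100

# Expected utility of the gamble: 0.5*sqrt(0) + 0.5*sqrt(100) = 5.0.
eu = sum(p * math.sqrt(x) for p, x in zip(probs, payoffs))
x_ce = eu ** 2           # certainty equivalent: u(x_ce) = eu, so x_ce = 25.0
emv = sum(p * x for p, x in zip(probs, payoffs))   # expected monetary value = 50.0
risk_premium = emv - x_ce
print(risk_premium)      # 25.0 > 0, i.e., risk averse
```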
If mathematical expectation were the only criterion for gambling, no games would ever be arranged by rational beings, since whenever the rules of the game make it rational for the gambler to bet, it would be irrational for the arranger to offer the bet. However, people still arrange and participate in games, although either the gambler or the bookmaker must be on the irrational side. Furthermore, it has also been argued that humans tend to disregard very small probabilities, even in games with finite mathematical expectations (like nation-wide lotteries), and that, in the case of very high probabilities, a gambler is not willing to risk arbitrary amounts (Menger, 1934).
15.4.4 Security Thresholds
In many decision contexts, decision-makers wish to avoid strategies which involve some risk of ending up in a consequence that the decision-maker considers a catastrophe, or at least highly undesirable. Even if the probability of such an event is estimated as extremely low, it is simply not a risk the decision-maker is willing to be exposed to. An insurance company serves as a pertinent example: insurance companies probably find it irrational to let their clients insure themselves against nuclear war, meteorites, acts of terrorism, and similar catastrophes. Although the insurance company might find such events highly improbable, the occurrence of any such event would without doubt imply bankruptcy. Having such concerns in mind, a decision theory should be sensitive to different risk attitudes and provide the decision-maker with means to express her risk attitudes in a number of different ways. As indicated above, the possibility to shape a utility function is not alone a sufficient model in this respect. One way to express such attitudes is the ability to define security thresholds, together with procedures for handling the vagueness in the estimates of probabilities and values that is often inherent in decision modelling.
16 Objectives
In the previous section, the question was raised how the decision-maker should compare the alternatives with respect to different types of objectives of the decision. Keeney and Raiffa (1976) present four adequate examples of decision situations where the decision-maker cannot hide from the fact that there are multiple objectives in conflict with each other. One of the examples considers the choice of a site for a new airport near Mexico City, where the head of the Ministry of Public Works was obliged to balance objectives such as minimizing costs, maximizing the capacity of airport facilities, improving regional development, and minimizing access time for travellers. Such decision problems are the concern of multi-attribute utility theory (MAUT), or multi-criteria decision analysis. In MAUT, each objective is referred to as one attribute in the decision context, and the approach is to define one individual utility function for each attribute. These are then aggregated into a global utility function, in which weights express the relative importance of each attribute. Each consequence Ci may be thought of as a vector of achievement levels regarding the identified attributes; in the case of n attributes, the consequence Ci = (ci1, ci2, ..., cin). There is a vast body of literature on decision making with multiple objectives; some of the literature uses the terms criteria or perspective instead of attribute, but from the decision-maker's point of view we can use these terms interchangeably. A number of approaches to aggregating utility functions under a variety of attributes have been suggested, such as (Keeney and Raiffa, 1976), (Keeney, 1980), (Saaty, 1980), and (von Winterfeldt and Edwards, 1986). The most widely employed method is the additive utility function, sometimes referred to as the weighted sum. Some conditions must be fulfilled in order for the additive utility function to serve properly as an aggregated utility function.
Firstly, the assumption of mutual preferential independence must hold: when a subset of alternatives differs only on a subset Gi ⊆ G of the set of attributes G, the preferences between those alternatives must not depend on their common performance levels on G \ Gi. Secondly, the condition of additive independence must hold, meaning that changes in the uncertain outcomes (the probability distribution) in one attribute will not affect preferences for lotteries in other attributes.
The weights are restricted by a normalization constraint Σ wj = 1, wj ∈ [0,1], where wj denotes the weight of attribute Gj. A global utility function U using the additive utility function is then expressed as

    U(x) = Σi=1..n wi·ui(xi),
where wi is the weight representing the relative importance of attribute Gi, ui: Xi → [0,1] is the increasing individual utility function for attribute Gi, and Xi is the state space for attribute Gi. It is assumed that the ui's map to zero for the worst possible state regarding the i:th attribute, and to one for the best. Another global utility function is the multiplicative utility function, introduced in (Keeney and Raiffa, 1976). The multiplicative model requires that every attribute be mutually utility independent of all other attributes, meaning that changes in sure levels of one attribute do not affect preferences for lotteries in the other attributes. In contrast to additive independence, the condition of utility independence allows the decision-maker to consider two attributes to be substitutes or complements of each other. In this respect, it is a weaker preference condition than additive independence. The global utility function is then usually expressed as
    1 + K·U(x) = Πi=1..n [K·ki·ui(xi) + 1],
where ui: Xi → [0,1] is the increasing individual utility function for attribute Gi, and Xi is the state space for attribute Gi. As for the additive function, the ui's map to zero for the worst possible state regarding the i:th attribute, and to one for the best. The scaling constant K is the nonzero solution to
    1 + K = Πi=1..n (1 + K·ki),
where the ki represent scaling constants, similar in their meaning to weights, but without the normalization requirement. Other formal methods of decision evaluation under multiple objectives include the outranking approach (Roy, 1991), (Vincke, 1992), often referred to as the European/French school of decision aid. This approach is based on a search for outranking relations deduced from a set of binary preference relations. However, these approaches do not incorporate the modelling of uncertainty in the probabilistic sense, and thus do not capture the risk associated with different courses of action. Nevertheless, the approach has proved useful in a number of applications.
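The additive and multiplicative aggregation models described above can be sketched as follows. The attribute utilities, weights, and scaling constants are hypothetical, and the nonzero K is found numerically by bisection (assuming Σ ki < 1, which forces K > 0):

```python
import math

def additive_utility(weights, utils):
    """Weighted-sum model: U(x) = sum_i w_i * u_i(x_i), with sum_i w_i = 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * u for w, u in zip(weights, utils))

def solve_K(k, iters=200):
    """Nonzero solution of 1 + K = prod_i (1 + K*k_i), found by bisection.
    The bracket below assumes sum(k_i) < 1, in which case K > 0."""
    f = lambda K: math.prod(1 + K * ki for ki in k) - (1 + K)
    lo, hi = 1e-9, 1e6
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def multiplicative_utility(k, K, utils):
    """Multiplicative model: 1 + K*U(x) = prod_i (1 + K*k_i*u_i(x_i))."""
    prod = 1.0
    for ki, ui in zip(k, utils):
        prod *= 1 + K * ki * ui
    return (prod - 1) / K

weights = [0.5, 0.3, 0.2]
print(additive_utility(weights, [1.0, 0.0, 0.5]))  # 0.6

k = [0.4, 0.3, 0.2]   # hypothetical scaling constants, sum < 1
K = solve_K(k)
# Sanity check: the outcome that is best on every attribute has utility 1.
print(round(multiplicative_utility(k, K, [1.0, 1.0, 1.0]), 6))  # 1.0
```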
17 Elicitation Techniques
In any model for decision analysis the input parameters matter. All input parameters must be elicited carefully, since they are to reflect the attitudes and beliefs of the decision-maker. Consequently, much deliberation must go into the elicitation of these input parameters, and methods for such processes have been suggested by a number of authors.
10 There may be several least preferred as well as several most preferred consequences.
consistency checks. However, one drawback of assessing utilities in this way is the requirement for eminent introspective powers. For instance, when fixing the probabilities in the reference alternatives, the procedure results in a precise utility measure of a certain consequence. Although this consequence may be well defined in terms of which state of the modelled world it represents, the actual implications of this consequence may be very hard to grasp due to insufficient or imperfect information. Under such circumstances, a decision-maker may feel more comfortable defining utilities in vague or imprecise statements.
In its simplest subjective form, assessing a probability is done by asking the question: What is my belief regarding the probability of A? If the phenomenon is known to occur in a given number of trials, or an investigation regarding relative frequencies has been carried out, the decision-maker may feel confident when answering such a question. Another way of assessing probabilities is to investigate the decision-maker's attitude to placing bets concerning the uncertain event that is to be assigned a probability estimate. This technique assumes that the decision-maker has accepted the principle of maximizing the expected utility, since it is based on a search for indifference between two alternatives with uncertain outcomes, which then implies equality in the mathematical expectations. As an example, when assessing the probability of A, consider the following two bets:
1. Win x if A is true, lose y if A is false.
2. Win y if A is false, lose x if A is true.
When indifference holds, then P(A)x − P(not-A)y = P(not-A)y − P(A)x, and the subjective probability of A is P(A) = y/(x+y). However, the assessed probabilities must be consistent with the axioms of probability theory. If not, they must be modified until they are. Assessing probabilities in this way is commonly referred to as the reference lottery approach. As in the case of utility assessment, the introspection demanded of the decision-maker has been discussed, and a common understanding is that in real-life decision problems, a decision-maker in many situations finds it too difficult to define precisely when indifference holds.
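The indifference condition above can be checked numerically: at P(A) = y/(x+y) the two bets have equal (zero) expectation. A small sketch with hypothetical stakes:

```python
def reference_bet_probability(x, y):
    """Subjective probability of A implied by indifference between
    'win x if A, lose y if not-A' and 'win y if not-A, lose x if A'."""
    return y / (x + y)

x, y = 30.0, 10.0                       # hypothetical stakes
p = reference_bet_probability(x, y)     # 10 / 40 = 0.25
bet1 = p * x - (1 - p) * y              # expectation of bet 1
bet2 = (1 - p) * y - p * x              # expectation of bet 2
print(p, bet1 == bet2)                  # 0.25 True (both expectations are 0.0)
```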
where ui+ is the best utility for attribute Gi and uj- the worst utility for attribute Gj. If the weights are assessed in this manner, but the decision-maker cannot agree to assigning weights consistent with Σ wj = 1, the multiplicative model is more appropriate. In this model, the assessment of the scaling constants can be done through the reference lottery approach, such that
    ki = U(u1-, u2-, ..., ui+, ..., un-),
where ui+ is the best utility and ui- the worst utility for attribute i. In the case of two attributes with scaling constants k1 and k2 such that k1 + k2 < 1, the attributes can be said to complement each other; if k1 + k2 > 1, they can be considered substitutes for each other. Assessing weights through the swing weighting technique begins with defining the worst-case scenario, which is used as a benchmark with a value of zero. The term swing derives from the hypothetical outcomes that are constructed, in which each attribute swings from worst to best, and the decision-maker creates a preference order on this set of hypothetical outcomes.
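A common way to turn a swing-weighting preference order into weights is to give the most important swing 100 points, score the remaining swings relative to it, and normalize. A sketch with hypothetical attribute names and point values:

```python
def swing_weights(points):
    """Normalize swing points (most important swing = 100) into weights
    summing to one."""
    total = sum(points.values())
    return {attr: p / total for attr, p in points.items()}

# Hypothetical ranking of the swings: cost matters most, then quality, then time.
points = {"cost": 100, "quality": 60, "time": 40}
weights = swing_weights(points)
print(weights)  # {'cost': 0.5, 'quality': 0.3, 'time': 0.2}
```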
18 Imprecise Domains
In a vast majority of real-life decision situations, the decision-maker does not have access to the significant amount of statistical data required to aggregate precise numerical values and probabilities, nor does the decision-maker have the ability to perform precise estimations of utilities. Furthermore, people find it hard to discriminate between probabilities ranging from approximately 0.3 to 0.7 (Shapira, 1995). A great deal of attention has been given to problems of imprecise information as a source of decision uncertainty. Morgan and Henrion (1990) identify two main types of uncertainty. The first type derives from lack of historical data, and takes its form in statistical variation, subjective judgments, linguistic imprecision, variability, inherent randomness, disagreement, and approximation. For example, in experiments, errors in the measurements of quantities give rise to statistical variation. The second type of uncertainty arises from the model chosen, for example a utility function. Furthermore, uncertainty due to biases in communication and value differences is unavoidable in the use of expertise in policy processes. Instead of addressing the sources of uncertainty, Funtowicz and Ravetz (1990) discuss different types of uncertainty, including inexactness (or technical uncertainty), unreliability (or methodological uncertainty), and border with ignorance (or epistemological uncertainty). These authors consider ignorance to be endemic to scientific research. Finally, Wynne (1992) addresses uncertainty in the foundations of information and knowledge, as well as in the processing of information.
systems. He does not object to the use of the principle of maximizing the expected utility, but suggests that the underlying axiomatic systems should not be applied in situations where the available information is to some extent not precisely defined. Doyle and Thomason (1997) give an approach where impreciseness is modelled using only qualitative data. However, in many cases this restriction yields too narrow an outlook on a decision problem; numerical estimates should still play a role. In the words of Ekenberg:
A useful theory for decision analysis should include procedures for handling such qualitative aspects in connection with a quantitative evaluation. (Ekenberg, 1994, p. 39)
There is a wide variety of mathematical models for the representation of imprecise probability. According to Walley (1997), most research in imprecise probabilities has been concerned with different types of upper and lower probability. However, some common and useful kinds of uncertainty cannot be modelled through the use of upper and lower probability models; in particular, commonly used comparative statements of the form "A is at least as probable as B" cannot be allowed for.11 Walley's highly influential Statistical Reasoning with Imprecise Probabilities (1991) introduces the concept of upper and lower previsions. Briefly speaking, the lower prevision of a gamble is defined from the amount a gambler is willing to pay for a lottery ticket; the upper prevision is defined from how much he is willing to sell the same ticket for. When more than one probability distribution defined on the same set of outcomes is reasonable given the information obtained, we speak in terms of sets of probability distributions. The American philosopher Isaac Levi gives three conditions such sets of probability measures B must satisfy. These imply (among other things) that the probability distributions in B for a given state of nature form an interval; in the literature such sets are commonly referred to as convex sets of probability measures. The significance of Levi's work is emphasized when Levi compares the different alternatives in decision situations. He gives an example in which two similar decision situations with different sets of probability measures yield results different from his theory, even if the generated intervals are the same (Levi, 1974, pp. 416-418). Many attempts have been made to express imprecise probabilities in terms of intervals. In Choquet (1953) the concept of capacities is introduced. These capacities can be used for defining a framework for modelling imprecise probabilities as intervals (Huber, 1973). The use of interval-valued probability functions, by means of classes of probability measures, has also been integrated into classical probability theory by, e.g., Good (1962) and Smith (1961). A similar approach was taken by Dempster (1967), where a framework for modelling upper and lower probabilities is investigated. This was further developed in (Shafer, 1976), where the concept of basic probability assignments was also introduced. Within the field of artificial intelligence, the Dempster-Shafer theory for quantifying subjective judgments has received a lot of attention, but it seems to be unnecessarily strong with respect to interval representation (Weichselberger and Pöhlmann, 1990).
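The idea of a convex set of probability distributions can be illustrated with a short sketch. The code below is our own illustration (not Walley's formal theory or DecideIT code): it computes the lower and upper expectation of a gamble over the vertices of a finite credal set, which for a convex set is where the extrema are attained. All names and numbers are assumptions made for the example.

```python
# Sketch: lower and upper expectations ("previsions") of a gamble over a
# finite set of candidate probability distributions (the vertices of a
# convex credal set). Illustrative only.

def expectation(probs, gamble):
    """Expected value of a gamble under one probability distribution."""
    return sum(p * x for p, x in zip(probs, gamble))

def lower_upper_prevision(distributions, gamble):
    """Min and max expectation over all candidate distributions.

    For a convex set of distributions, the extrema are attained at the
    vertices, so checking the vertex distributions suffices.
    """
    values = [expectation(p, gamble) for p in distributions]
    return min(values), max(values)

# Three outcomes; the decision-maker cannot settle on one distribution.
credal_vertices = [
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
    [0.3, 0.3, 0.4],
]
gamble = [100.0, 0.0, -50.0]   # payoff per outcome

low, high = lower_upper_prevision(credal_vertices, gamble)
print(low, high)   # lower and upper prevision of the gamble
```

The spread between the two numbers reflects the imprecision in the probability assessment: a single precise distribution would collapse them into one expected value.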
Weichselberger's theory of interval probability argues in favour of an axiom system for interval probabilities closely related to that of Kolmogorov. In his own words:
Altogether, theory of interval-probability comes nearer to the classical understanding of probability assignment than those approaches relying on more general types of assessment. (Weichselberger, 1999)
instead of a predicted fixed number, which in almost every case will be more or less incorrect. Furthermore, many types of decisions involve utility measures of non-monetary outcomes, which then must be measured on some precisely defined interval scale; such measurements are often hard to motivate, e.g., due to underlying ethical responsibilities and democratic values. Levi uses a set G of permissible utility functions, which do not obey the classical Bayesian requirement that all elements in G are positive affine transformations of each other. He then stipulates the following definitions:

Definition: An alternative A is E-admissible if and only if there is a probability distribution p in B and a utility function u in G, such that E(A), defined relative to p and u, is optimal among all alternatives.

Definition: An alternative A is S-admissible if and only if it is E-admissible and there is a function u in G such that the minimum u-value assigned to some possible consequence is at least as great as the u-values assigned to the consequences of any other of the remaining alternatives.

However, a problem with Levi's approach is the violation of the independence of irrelevant alternatives.
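E-admissibility over finite candidate sets can be sketched in a few lines. The code below is a simplified illustration under our own assumptions (finite sets B and G represented as probability and utility tables indexed by alternative and consequence); it is not Levi's general formulation and not part of DecideIT.

```python
# Sketch of Levi's E-admissibility over finite candidate sets:
# B is a list of candidate probability tables p[alt][cons],
# G is a list of candidate utility tables u[alt][cons].
# An alternative is E-admissible if some pairing (p, u) makes its
# expected utility optimal. Names and numbers are illustrative.

def e_admissible(B, G):
    """Return the set of E-admissible alternative indices."""
    admissible = set()
    n_alts = len(B[0])
    for p in B:
        for u in G:
            evs = [sum(pj * uj for pj, uj in zip(p[i], u[i]))
                   for i in range(n_alts)]
            best = max(evs)
            # Collect every alternative that attains the optimum.
            admissible.update(i for i, ev in enumerate(evs)
                              if ev >= best - 1e-12)
    return admissible

# Two alternatives with two consequences each; two candidate probability
# tables and one candidate utility table (illustrative numbers).
B = [
    [[0.8, 0.2], [0.5, 0.5]],   # probability candidate 1
    [[0.4, 0.6], [0.5, 0.5]],   # probability candidate 2
]
G = [
    [[1.0, 0.0], [0.6, 0.4]],
]
print(sorted(e_admissible(B, G)))
```

Here each alternative is optimal under some candidate distribution, so both are E-admissible, mirroring the situation in the text where imprecision leaves more than one alternative in play.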
contestants is an excellent tennis player, although she does not know anything about which player it is, and the second player is indeed an amateur, so that everybody considers the outcome of the match a foregone conclusion. (Gärdenfors and Sahlin, 1982, p. 362)
There are, however, several complications with this theory that are solved in (Ekenberg and Thorbiörnson, 2001), where another approach to second-order decision analysis is suggested. Their theory not only supports the use of interval statements to model imprecise information, but also takes into account various belief distributions over the intervals as measures of the epistemic reliabilities concerning the different probability and utility distributions on a set of outcomes. In DecideIT, it is possible to model and evaluate second-order beliefs by explicitly defining points with higher density of belief within the given intervals.
19 Graph Models
Graphical models are often of intuitive appeal to humans. They serve well as an instrument for communication, they are dynamic, and they are easy to manipulate (especially with the assistance of a graphical user interface). Two important graph models that have proliferated within the area of risk and decision analysis are the decision tree and the influence diagram. Generally, a graph model G is a structure consisting of a set of nodes N and a set of edges E between these nodes, thus G = (N, E). A directed graph is a graph in which the edges have a direction, i.e., an edge from ni to nj does not imply an edge in the opposite direction. A tree graph is a set of straight line segments connected at their ends with no cycles; thus it is an acyclic graph, and a tree with m nodes has m−1 edges. In a rooted tree, each node nj one edge further away from a given node ni is called a child of ni, and nodes connected to the same node at the same distance from the root node are referred to as siblings. The root node is the node without a parent, nodes without children are called leaves, and there is a unique path from the root node to any leaf.
Figure 4-3: A decision tree in DecideIT (note that this screenshot is a zoomed-out view and shows less detail than the default view).
Usually, the root node is a decision node representing the initial decision, as in Figure 4-3. The tree often indicates a temporal order in which the events take place, i.e., if event Ei is said to occur before Ej, then Ej usually does not precede Ei in the model. This is especially the case for decision nodes, i.e., all outcomes related to preceding nodes are known prior to the actual decision the decision node represents. Furthermore, the tree is a representation of a conditional expansion order. For example, the probability of C1 in Figure 4-3 is a conditional probability, P(C1) = P(E11 | E21, D11). Decision trees are usually evaluated by pruning the tree, sometimes called rolling back or folding back the tree. This technique creates a preference ordering according to PMEU and is quite straightforward: start at the consequence nodes and move towards the root node. Calculate the expected values of chance nodes when such are encountered, and replace each chance node with its expected value. When a decision node is encountered, choose the branch with the highest value, discarding the branches with lower expected values. When this algorithm terminates, the path that remains is the one to choose. This is the evaluation algorithm of DecideIT, generalized for imprecise input parameters.
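The rollback procedure described above can be sketched as follows. This is a minimal illustration with precise numbers only (DecideIT generalizes the same idea to interval parameters); the node classes and names are our own, not DecideIT internals.

```python
# A minimal sketch of rollback ("folding back") evaluation of a decision
# tree with precise probabilities and values. Illustrative only.

class Consequence:
    def __init__(self, value):
        self.value = value

class Chance:
    def __init__(self, branches):
        # branches: list of (probability, subtree) pairs
        self.branches = branches

class Decision:
    def __init__(self, options):
        # options: list of (label, subtree) pairs
        self.options = options

def rollback(node):
    """Return (expected value, list of chosen decision labels)."""
    if isinstance(node, Consequence):
        return node.value, []
    if isinstance(node, Chance):
        # Replace the chance node with its expected value.
        ev, choices = 0.0, []
        for p, sub in node.branches:
            sub_ev, sub_choices = rollback(sub)
            ev += p * sub_ev
            choices.extend(sub_choices)
        return ev, choices
    # Decision node: keep the branch with the highest expected value.
    best_label, (best_ev, best_choices) = max(
        ((label, rollback(sub)) for label, sub in node.options),
        key=lambda item: item[1][0])
    return best_ev, [best_label] + best_choices

tree = Decision([
    ("invest", Chance([(0.6, Consequence(100.0)),
                       (0.4, Consequence(-50.0))])),
    ("wait",   Consequence(20.0)),
])
print(rollback(tree))   # expected value and label of the best strategy
```

Running the sketch on the small tree above selects "invest", whose expected value (0.6·100 − 0.4·50 = 40) exceeds the certain 20 of "wait".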
making. According to Howard and Matheson (1984, p. 721), the influence diagram is "a formal description of a problem that can be treated by computers and a representation easily understood by people in all walks of life and degrees of technical proficiency". Shachter (1986) continues in an analogous fashion:
An influence diagram...is an intuitive framework in which to formulate problems as perceived by decision-makers and to incorporate the knowledge of experts. At the same time, it is a precise description of information that can be stored and manipulated by a computer. (Shachter, 1986, p.871)
The classic influence diagram is a network with three types of nodes: decision nodes, chance nodes (event nodes), and one value node (utility node, payoff node, consequence node). In an influence diagram, squares represent the decisions to be made, circles represent chance nodes, and a rounded rectangle represents the value node. There are two types of directed arcs: conditional arcs, with chance nodes or the value node as successors, and informational arcs, with decision nodes as successors. An example of an influence diagram is shown in Figure 4-4.
Figure 4-4: Example of an influence diagram and corresponding symmetric decision tree if every node contains two outcomes.
With respect to any given node in an influence diagram, Howard and Matheson (1984, p. 737) provide the following definitions:

- The predecessor set of a node is the set of all nodes having a path leading to the given node.
- The direct predecessor set of a node is the set of nodes having an arc connected directly to the given node.
- The indirect predecessor set of a node is the set formed by removing from its predecessor set all elements of its direct predecessor set.
- The direct successor set of a node is the set of nodes having an arc connected directly from the given node.
- The indirect successor set of a node is the set formed by removing from its successor set all elements of its direct successor set.

Each chance node can be thought of as a random variable with an assigned probability distribution. There is an underlying joint probability distribution for all chance nodes. "This joint distribution can be decomposed into a set of conditional distributions, to be assessed by the analyst, with conditioning represented by arcs in the influence diagram. If there are no undirected paths between two nodes, then they must be independent. If a chance node has no arcs into it, then its probability distribution is a marginal distribution." (Shachter, 1986, p. 872) For chance nodes, the diagram partially constrains the probabilistic conditioning order. Let Nx be the set of all non-successors of node x, and Dx be the set of direct predecessors of x, so that Dx ⊆ Nx. The influence diagram then asserts that the probability assignment to x given Nx is the same as to x given Dx, so that {x | Nx} = {x | Dx}. With respect to x, Dx is a sufficient statistic for Nx (Howard and Matheson, 1984, p. 739). This assertion is noticeable in the DecideIT implementation (not in version 2.5). When setting the conditional probabilities of a conditionally dependent chance node, the number of expansions in the probabilistic conditioning order for x equals the size of Dx.
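The predecessor-set definitions above are simple to compute on a directed graph. The sketch below uses a plain edge-list representation of our own choosing; it is an illustration of the definitions, not DecideIT code.

```python
# Sketch of the predecessor-set definitions on a small directed graph,
# represented as a list of (from, to) edges. Illustrative only.

def direct_predecessors(node, edges):
    """Nodes with an arc connected directly to `node`."""
    return {a for a, b in edges if b == node}

def predecessors(node, edges):
    """All nodes having a directed path leading to `node`."""
    result, frontier = set(), {node}
    while frontier:
        new = set()
        for n in frontier:
            new |= direct_predecessors(n, edges)
        frontier = new - result
        result |= new
    return result

def indirect_predecessors(node, edges):
    """Predecessor set minus the direct predecessor set."""
    return predecessors(node, edges) - direct_predecessors(node, edges)

# a -> b -> d and c -> d
edges = [("a", "b"), ("b", "d"), ("c", "d")]
print(predecessors("d", edges))            # {'a', 'b', 'c'}
print(direct_predecessors("d", edges))     # {'b', 'c'}
print(indirect_predecessors("d", edges))   # {'a'}
```

The successor sets are obtained symmetrically by following arcs in the opposite direction.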
The influence diagram asserts that the only information available when a decision is made is represented by the direct predecessors of the relevant decision node; hence the name informational arcs. A common practice is therefore the use of no-forgetting arcs in diagrams with more than one decision node. This makes it explicit in the structure that the decision-maker does not forget the information from an earlier decision in the same problem.12 The original way of evaluating influence diagrams is to convert them into a corresponding decision tree and evaluate the tree (Howard and Matheson, 1984). The influence diagram is then only used to formulate the situation. There are also methods for evaluating influence diagrams without converting them into decision trees, the most famous being the method developed by Shachter (1986), based on node removal and arc reversal. Due to the workings of the DELTA method (Danielson and Ekenberg, 1998), DecideIT uses the original method to evaluate influence diagrams. Influence diagrams are not supported in DecideIT 2.5 and 2.6; however, work is being done, and future releases of DecideIT are planned to support this type of graph model.
19.2.1
As stated above, an influence diagram is a compact representation of a symmetric decision tree. Decision trees and influence diagrams are isomorphic structures, i.e., any properly built influence diagram can be converted into a corresponding decision tree and vice versa (Clemen, 1996, p. 74). The conversion from an influence diagram to its corresponding decision tree is very useful for the implementation of influence diagrams in the DecideIT software. Unlike the nodes in a decision tree, the nodes in an influence diagram do not have to be totally ordered, nor do they have to depend directly on all predecessors. To convert an influence diagram into a corresponding decision tree, two main requirements must be fulfilled in the diagram:

- The influence diagram must imply a total ordering over the decision nodes.
- Each decision node in the influence diagram and its direct predecessors must directly influence all successor decision nodes.

The first condition is quite obvious, as a decision tree with multiple decisions clearly defines a sequence order in which the decision-maker makes his/her choices. The second condition is the no-forgetting condition, which assures that a single decision-maker does not forget information. According to Howard and Matheson (1984, p. 744), these two conditions
12 No-forgetting arcs also fill a purpose when converting an influence diagram into a decision tree.
guarantee that a decision tree can be constructed. However, for a proper conversion some probabilistic processing may be necessary. This is because a non-direct predecessor z of a decision node x in an influence diagram does not imply that the decision rule in x depends on z; it simply implies that z is used in the probability assignment model (Howard and Matheson, 1984, p. 740). When such a situation is converted into a decision tree, the tree would imply that the decision rule in x depends on z.
20
Suppose a decision-maker wants to evaluate a specific decision situation. In order to approach the problem in a reasonable way, given available resources, a decision process such as the following could be employed, not necessarily in the exact order given:

- Clarify the problem, divide it into sub-problems if necessary
- Decide which information is a prerequisite for the decision
- Collect and compile the information
- Define possible courses of action
- For each alternative: identify possible consequences
- For each consequence: estimate how probable it is, and estimate the value of it occurring for each criterion
- Disregard obviously bad courses of action
- Based on the above, evaluate the remaining alternatives
- Carry out a sensitivity analysis
The method described in the following should be seen in the context of such a decision process. The process above is supported by DecideIT and is carried out in a number of steps. The first step is a bit special, since there is much information to collect. The initial information is gathered from different sources. Then it is formulated in statements and entered into DecideIT. Following that, an iterative process commences where step by step the decision-makers gain further insights. During this process, decision-makers receive help in
realizing which information is missing, is too vague, or is too precise. They can also change the problem structure by adding or removing consequences or even entire alternatives, as more information becomes available.
20.2 Modelling
After the data collection phase, a modelling task commences where the decision-maker structures and orders the information. Given the set of criteria, she tries to compile a smaller number of reasonable courses of action and identify the consequences belonging to each alternative. For instance, simulation results can be clustered into meaningful sets. There is no requirement for the alternatives to have the same number of consequences. However, within any given alternative, it is required that the consequences are exclusive and exhaustive, i.e. whatever the result, it should be covered by the description of exactly one consequence. This is unproblematic, since a residual consequence can be added to take care of unspecified events. The probability and value statements plus the weights are represented by interval constraints and core intervals described later. Intervals are a natural form in which to express such imprecise statements. It is not required that the consequence sets are determined from the outset. A new consequence may be added at a later stage, thus facilitating an incremental style of working.
Sentences such as "The probability of cij lies between the numbers ak and bk" are translated into pij ∈ [ak, bk]. Similarly, sentences such as "The probability of cij is greater than the probability of ckl" are translated into inequalities such as pij > pkl. In this way, each statement is represented by one or more constraints. The conjunction of such constraints, together with Σj pij = 1 for each strategy Ai involved,13 is called the probability base (P). The utility base (V) consists of similar translations of utility estimates. The collection of probability and utility statements constitutes the decision frame. The idea with such a frame is to collect all information necessary for the model in one structure. This structure is then filled in with user statements. All the probability statements in a decision problem share a common structure because they are all made relative to the same decision frame. The correspondence between the user model and the representation is summarized in Table 1.
Table 1: Correspondence between the user model and the representation

User model                   Representation
Decision problem             Decision frame
Alternative                  Consequence set
Consequence, event           Consequence
Collection of statements     Base
Interval statement           Core interval, interval constraint
In practice, a model of the situation is created with criteria, relevant courses of action, and their consequences when specific events occur. The courses of action are called alternatives in the user model, and they are represented by consequence sets in the decision frame. If the problem contains more than one decision level, it is internally transformed into an alternative consequence form (AC-form), a one-level decision tree that is a computationally equivalent representation. In the user interface, all levels are kept as they were originally entered. Following the establishment of a decision frame in the tool, the probabilities of the events and the values of the consequences are subsequently filled in.
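How interval statements might be collected into bases and a decision frame can be sketched with a few plain data structures. The field names and layout below are our own assumptions for illustration; they are not DecideIT's internal representation.

```python
# Sketch: interval statements collected into probability and utility
# bases forming a "decision frame". Illustrative structures only.

# "The probability of c11 lies between 0.2 and 0.5" -> p11 in [0.2, 0.5]
probability_base = {
    "interval": {"p11": (0.2, 0.5), "p12": (0.5, 0.8)},
    "comparative": [],                       # e.g. ("p11", ">", "p12")
    # normalization: p11 + p12 = 1 for the consequences of alternative A1
    "normalized_groups": [["p11", "p12"]],
}

utility_base = {
    "interval": {"v11": (0.0, 0.4), "v12": (0.6, 1.0)},
    "comparative": [("v12", ">", "v11")],
}

decision_frame = {
    "alternatives": {"A1": ["c11", "c12"]},
    "P": probability_base,
    "V": utility_base,
}

def consistent_point(frame, p, v):
    """Check one precise assignment against the frame's interval and
    normalization constraints (comparatives omitted for brevity)."""
    ok = all(lo <= p[k] <= hi for k, (lo, hi) in frame["P"]["interval"].items())
    ok = ok and all(lo <= v[k] <= hi
                    for k, (lo, hi) in frame["V"]["interval"].items())
    for group in frame["P"]["normalized_groups"]:
        ok = ok and abs(sum(p[k] for k in group) - 1.0) < 1e-9
    return ok

print(consistent_point(decision_frame,
                       {"p11": 0.3, "p12": 0.7},
                       {"v11": 0.2, "v12": 0.9}))   # a consistent point
```

A consistent assignment of precise numbers, as checked here, is exactly what the evaluation rules later in the appendix quantify over.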
13 The normalization constraint is added because the consequences are assumed to be exhaustive as well as pairwise disjoint.
The orthogonal hull is a concept that in each dimension signals which parts are definitely incompatible with the constraint set. The orthogonal hull can be pictured as the result of wrapping the smallest orthogonal hyper-cube around the constraint set. Definition: Given a consistent constraint set X in {xi}i∈I, the set of pairs {⟨Xmin(xi), Xmax(xi)⟩} is the orthogonal hull of the set and is denoted ⟨Xmin(xi), Xmax(xi)⟩n.
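For the special case of a probability base containing only interval constraints plus the normalization constraint, the orthogonal hull has a simple closed form: the others' intervals bound how much probability mass remains for each variable. The sketch below is our own illustration of this special case; general constraint sets (e.g. with comparative statements) require linear programming instead.

```python
# Sketch: orthogonal hull of a probability base with only interval
# constraints p_i in [a_i, b_i] plus normalization sum(p_i) = 1.
# Assumes the base is consistent, i.e. sum(a_i) <= 1 <= sum(b_i).
# Illustrative only; general constraint sets need an LP solver.

def orthogonal_hull(intervals):
    """intervals: list of (a_i, b_i). Returns tightened (min, max) pairs."""
    total_a = sum(a for a, _ in intervals)
    total_b = sum(b for _, b in intervals)
    hull = []
    for a, b in intervals:
        lo = max(a, 1.0 - (total_b - b))   # others pushed to their maxima
        hi = min(b, 1.0 - (total_a - a))   # others pushed to their minima
        hull.append((lo, hi))
    return hull

# Three consequences with loose interval statements:
intervals = [(0.5, 0.9), (0.3, 0.6), (0.0, 0.1)]
print(orthogonal_hull(intervals))
```

Here the stated intervals [0.5, 0.9] and [0.3, 0.6] are tightened to [0.5, 0.7] and [0.3, 0.5]: the parts cut away are exactly those that are incompatible with normalization, which is what the hull signals.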
Constraints and core intervals have different roles in specifying a decision situation. The constraints represent negative information: they specify which vectors are not part of the solution sets, i.e., which ranges are infeasible, by excluding them from the solutions. This is in contrast to core intervals, which represent positive information in the sense that the decision-maker enters information about sub-intervals that are felt to be the most central ones and within which no further discrimination is possible.

Definition: Given a constraint set X in {xi} and the orthogonal hull ⟨ai, bi⟩n of X, a core interval of xi is an interval [ci, di] such that ai ≤ ci ≤ di ≤ bi. A core [ci, di]n of {xi} is a set of core intervals {[ci, di]}, one for each xi.

As for constraint sets, the core might not be meaningful in the sense that it may contain no possible variable assignments able to satisfy all the inequalities. This is quite similar to the concept of consistency for constraint sets, but for core intervals the requirement is slightly different: it is required that the most likely point is contained within the core.

Definition: Given a consistent constraint set X in {xi} and a most likely point r = (r1, …, rn), the core [ci, di]n of {xi} is permitted with respect to r iff ci ≤ ri ≤ di.

Together, constraint sets and cores delimit the shape of the belief in the numerical values for the variables. See Figure 4-5.
Figure 4-5: The hull, core and most likely point for a variable (belief plotted against value).
20.5 Bases
A base consists of a constraint set for a set of variables together with a core, i.e., a collection of constraints and the core that belongs to the variables in the set. The idea with a base is to represent a class of functions over a finite, discrete set of consequences.

Definition: Given a set {xi} of variables and a most likely point r, a base X in {xi} consists of a constraint set XC in {xi} and a core XK of {xi}. The base X is consistent if XC is consistent and XK is permitted with respect to r.
The normalization constraint requires the probabilities of exhaustive and mutually exclusive consequences to sum to one. No such dimension-reducing constraint exists for the value variables.

Definition: Given a set {Cik} of disjoint and exhaustive consequences, a base V in {vik}, and a discrete, finite value function v : C → [0, 1], let vik denote the function value v(Cik). Because of the range of v, vik ∈ [0, 1] are default constraints in the constraint set VC. Then V is a value base.

Similar to probability bases, a value base can be seen as characterizing a set of value functions. The value core VK can be seen as an attempt to estimate a class of value functions.
The probability and value bases together with structural information constitute the decision frame.
20.8 Frames
Using the above concepts of consequence, constraint, core, and base, it is possible to model the decision-maker's situation in a decision frame. Compare the decision frame to Table 1 at the beginning of the appendix. The frame captures a decision problem on AC-form, a one-level tree problem in normal form. The frame is also the key data structure in the tool implementation, holding references to other structural information and to the bases containing most of the information. All statements entered via the tool user interface are collected in the decision frame. When all statements in the current state of the problem have been entered, the data entry phase is over for the time being. As the insights into the decision problem accumulate during the following phases, it is possible to add new information and to alter or delete information already entered.
20.10 Security Thresholds
Many decisions are one-off decisions, or are important enough not to allow too undesirable an outcome, regardless of its having a very low probability. The common aggregate decision rules will not rule out an alternative with such a consequence, provided it has a very low probability. If the probability of a very undesirable consequence is larger than some security level, it seems reasonable to require that the alternative should not be considered, regardless
of whether the expected value shows it to be a good course of action. If the security level is violated by one or more consequences in an alternative, and this persists beyond a predetermined rate of cutting (described below), then the alternative is unsafe and should be disregarded. An example of the use of security levels is an insurance company that does not wish to enter into insurance agreements where the profitability is high but there is a very small, yet not negligible, risk of a loss large enough to put the company's existence at stake. The security analysis requires some parameters to be set, and security thresholds serve as an important supplement to the expected value.
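A security-threshold screen of this kind can be sketched in a few lines. The parameter names (a security value v* and a security level s) and the numbers below are our own illustrative assumptions, not DecideIT's interface.

```python
# Sketch of a security-threshold screen: discard an alternative if some
# consequence is worse than a security value v_star while its probability
# exceeds a security level s. Illustrative names and numbers.

def violates_security(consequences, v_star, s):
    """consequences: list of (probability, value) pairs."""
    return any(p > s and v < v_star for p, v in consequences)

alternatives = {
    "A1": [(0.97, 120.0), (0.03, -900.0)],   # profitable but risky
    "A2": [(0.90, 60.0),  (0.10, -10.0)],
}
v_star, s = -500.0, 0.01   # ruinous outcomes may have probability <= 1%

safe = [name for name, cs in alternatives.items()
        if not violates_security(cs, v_star, s)]
print(safe)   # A1 is screened out: its ruinous loss has probability 0.03 > 0.01
```

Note that A1 has the higher expected value yet is disregarded, which is exactly the point of the security analysis: the thresholds act before, and independently of, the expected-value comparison.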
20.11 Evaluations
After having taken security thresholds into account, which value does a particular decision have? In cases where the outcomes can be assigned monetary values, it seems natural that the value of the decision should be some kind of aggregation of the values of the individual consequences. The ultimate comparing rule of an evaluation in DecideIT, as in many other methods, is the expected value (EV), sometimes instantiated as the expected utility or the expected monetary value. Since neither probabilities nor values are fixed numbers, the evaluation of the expected value yields quadratic (bilinear) objective functions of the form

EV(Ai) = pi1vi1 + … + pinvin,

where the piks and viks are variables. Further complicating the picture is the presence of multiple criteria. For s criteria, this leads to the expression

EV(Ai) = w1(p1i1v1i1 + … + p1inv1in) + … + ws(psi1vsi1 + … + psinvsin),

where wk is the weight of criterion k. Maximization of such expressions is a computationally demanding problem to solve in the general case, using techniques from the area of non-linear programming. This leaves us with differing values and weights. By multiplying in the weights and making the probabilities common, the expression can be rewritten as

EV(Ai) = pi1(w1v1i1 + … + wsvsi1) + … + pin(w1v1in + … + wsvsin),
thus permitting local (at consequence level) culling of weighted values. Maximization of such expressions is less, but still computationally, demanding, using techniques from the area of quadratic programming. In (Danielson, 1998) there are discussions about, and proofs of, the existence of computational procedures that reduce the problem to systems with linear objective functions, solvable with ordinary linear programming methods. When a rule for calculating the EV for decision frames containing interval statements has been established, the next question is how to compare the courses of action using this rule. This is not a trivial task, since usually the possible EVs of several alternatives overlap. The most favourable assignments of numbers to variables for each alternative usually render that alternative the preferred one. The first step towards a usable decision rule is to establish some concepts that tell when one alternative is preferable to another.

Definition: The alternative A1 is at least as good as A2 if EV(A1) ≥ EV(A2) for all consistent assignments of the probability and value variables. The alternative A1 is better than A2 if it is at least as good as A2 and, further, EV(A1) > EV(A2) for some consistent assignment of the probability and value variables. The alternative A1 is admissible if no other alternative is better.

If there is only one admissible alternative, it is obviously the preferred choice. Usually there is more than one, since apparently good or bad alternatives are normally dealt with manually long before decision tools are brought into use. All non-admissible alternatives are removed from the considered set and take no further part in the evaluation. The existence of more than one admissible alternative means that for different consistent assignments of numbers to the probability and value variables, different courses of action are preferable.
When this occurs, how is it possible to find out which alternative to prefer? Let δ12 = EV(A1) − EV(A2) be the difference in expected value between the alternatives. The strength of A1 compared to A2, given as a number max(δ12) ∈ [−1, 1], shows how the most favourable consistent assignments of numbers to the probability and value variables lead to the greatest difference in expected value between A1 and A2. In the same manner, A2 is compared to A1. These two strengths need not sum to one or to any other constant; the first might, for example, be 0.2 and the second 0.4. If there are more than two alternatives, pair-wise comparisons are carried out between all of them.
Furthermore, there is a strong element of comparison inherent in a decision procedure. For example, statements such as v11 > v22 are not taken into account when calculating the maximal and minimal EV(Ai) unless they influence the hull. As the results are interesting only in comparison to other alternatives, it is reasonable to consider the differences in strength as well. Therefore, it makes sense to evaluate the relative strength of A1 compared to A2 in addition to the strengths themselves, since such strength values would be compared to some other strengths anyway in order to rank the alternatives. The relative strength between the two alternatives A1 and A2 is calculated using the formula

mid(δ12) = [max(δ12) + min(δ12)]/2 = [max(δ12) − max(δ21)]/2.

The concept of strength is actually somewhat more complicated. Dominance means that one consequence set is superior to another, at least in a part of the solution space of the bases. The weakest relation would be if a part refers to a single solution vector. A more reasonable interpretation of a part is that it is superior in a substantial fraction of the solutions. Dominance in the strongest sense would require that the part consist of all solution vectors. This idea is captured in the concepts of strong, marked, and weak dominance, corresponding to the minimal, medium, and maximal differences. Alternative A1 is said to strongly dominate alternative A2 if min(δ12) > 0, to markedly dominate if mid(δ12) > 0, and finally to weakly dominate if max(δ12) > 0. This is further explained in (Danielson, 2003). In DecideIT, the relative strength is shown as the middle line in the evaluation graphs.
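The strength and dominance definitions above can be sketched directly. Given the two extreme differences max(δ12) and max(δ21), and using the identity min(δ12) = −max(δ21) from the formula in the text, the classification is a few comparisons; the code is our own illustration, not DecideIT's implementation.

```python
# Sketch of the strength and dominance concepts: classify the dominance
# of A1 over A2 from the extreme expected-value differences. Follows the
# definitions in the text; note that min(d12) = -max(d21).

def mid_strength(max_d12, max_d21):
    """Relative strength: mid(d12) = [max(d12) - max(d21)] / 2."""
    return (max_d12 - max_d21) / 2.0

def dominance(max_d12, max_d21):
    min_d12 = -max_d21
    if min_d12 > 0:
        return "strong"     # A1 better under every consistent assignment
    if mid_strength(max_d12, max_d21) > 0:
        return "marked"
    if max_d12 > 0:
        return "weak"
    return "none"

# A1 can be at most 0.4 better than A2, and A2 at most 0.2 better than A1:
print(dominance(0.4, 0.2))    # marked: mid = 0.1 > 0, but min = -0.2 < 0
print(dominance(0.3, -0.1))   # strong: min(d12) = 0.1 > 0
```

Strong dominance implies marked dominance, which in turn implies weak dominance, matching the ordering of the three lines in the evaluation graphs.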
20.12
The hull cut (in DecideIT called contraction) is a generalized sensitivity analysis carried out in a large number of dimensions. In non-trivial decision situations, when a decision frame contains numerically imprecise information, the different principles suggested above are often too weak to yield a conclusive result by themselves. Studying only the differences in expected value for the complete bases often gives too little information about the mutual strengths of the alternatives. Thus, after the elimination of undesirable consequence sets, the decision-maker may still find that no conclusive decision can be made. One way to proceed is to determine the stability of the relation between the consequence sets under consideration. A natural way to investigate this is to consider values near the boundaries of the constraint intervals as less reliable than the core, the former being deliberately imprecise. Hence, it is important to be able to study the strengths (or dominances) between the alternatives on sub-parts of the bases. If dominance is evaluated on a sequence of ever-smaller sub-bases, a good appreciation of the strengths' dependency on boundary values can be obtained. This is taken into account by indirectly cutting off the dominated regions using the hull cut operation. This is denoted cutting the bases, and the amount of cutting is indicated as a percentage π, which can range from 0% to 100%. For a 100% cut, if no core is specified, the bases are transformed into single points, and the evaluation becomes the calculation of the ordinary expected value.

Definition: Given a base X with hull ⟨ai, bi⟩n and core [ci, di]n, and a real number π ∈ [0, 1], a π-cut of X is to replace the hull by [ci − (1−π)(ci − ai), di + (1−π)(bi − di)]n.

It is possible to regard the hull cut as an automated kind of sensitivity analysis. In order to maintain consistency, the cut decreases the bases in predefined ways. Since the belief in peripheral values is somewhat lower, the interpretation of the cut is to zoom in on more believable values that are more centrally located. The principle can also be motivated by the difficulties of performing simultaneous sensitivity analyses in several dimensions. It can be hard to gain real understanding of the solutions to large decision problems using only low-dimensional analyses, since different combinations of dimensions can be critical to the evaluation results. Investigating all such combinations would lead to a procedure of high combinatorial complexity in the number of cases to investigate. Using hull cuts, such difficulties are circumvented. The evaluation idea behind the principle is to investigate how much the hull can be cut before dominance appears between the consequence sets compared. If there is no dominance even in the original core, the core may be further cut towards the most likely point in order to achieve dominance.
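The arithmetic of the cut can be sketched for a single hull interval. We write the cut level as π (our notation for the percentage in the definition); the hull [a, b] shrinks linearly toward the core [c, d] as π goes from 0 to 1. Illustrative code, not DecideIT internals.

```python
# Sketch of the hull cut (contraction) for one variable: for cut level
# pi in [0, 1], the hull interval [a, b] is shrunk linearly toward the
# core [c, d]. At pi = 0 the full hull remains; at pi = 1 only the core.

def cut_interval(a, b, c, d, pi):
    """Replace [a, b] by [c - (1 - pi)*(c - a), d + (1 - pi)*(b - d)]."""
    lo = c - (1.0 - pi) * (c - a)
    hi = d + (1.0 - pi) * (b - d)
    return lo, hi

# Hull [0.1, 0.9] with core [0.4, 0.6]:
print(cut_interval(0.1, 0.9, 0.4, 0.6, 0.0))   # the full hull
print(cut_interval(0.1, 0.9, 0.4, 0.6, 0.5))   # halfway toward the core
print(cut_interval(0.1, 0.9, 0.4, 0.6, 1.0))   # only the core remains
```

In an evaluation, the same contraction is applied to every interval simultaneously, so that a single parameter π sweeps the whole frame from the full statement intervals down to the cores, which is what the x-axis of the Delta diagrams traces.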
The cut avoids the complexity inherent in combinatorial analyses, but it is still possible to study the stability of a result by gaining a better understanding of how important the constraint boundaries really are. By co-varying the cut of an arbitrary set of intervals, it is possible to gain much better insight into the influence of the structure of the decision frame on the solutions. Consequently, a cut can be regarded as a focus parameter that zooms in from the full statement intervals to central sub-intervals (the core). The results of the comparisons can be displayed either in a diagram for each pair of alternatives (Delta diagrams) or as a summary for each alternative (Gamma diagrams). Figure 4-6 below deals only with Delta diagrams.
In the figure, the evaluation of three alternatives is shown as three pair-wise comparisons between the alternatives. The x-axis shows the cut in per cent, ranging from 0 to 100. The y-axis is the expected value difference δij for the pairs. The cone (which need not be linear if comparative statements are involved) consists of three lines. For the comparison of alternatives A1 and A2, the upper line is max(δ12), the middle is mid(δ12), and the lower is min(δ12). Thus, one can see from which cut level an alternative dominates weakly, markedly, and strongly. As the cut progresses, one of the alternatives eventually dominates strongly. The cut level necessary for that to occur shows the separability between the expected values.
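To make the cut-level reading concrete, here is a toy sketch (Python; a linear cone with a point core is our own simplification, and the function names are ours, not DecideIT output) that finds the cut level at which min(δ12) reaches zero, i.e. the level from which A1 strongly dominates A2.

```python
def min_delta(pi, min0, core_val):
    # Lower edge of a linear Delta cone: interpolates from the full-hull
    # minimum min0 toward the point-core value as the cut pi increases.
    return (1.0 - pi) * min0 + pi * core_val

def strong_dominance_cut(min0, core_val):
    """Smallest cut level at which min(delta_12) reaches 0, i.e. from
    which A1 strongly dominates A2 (toy linear model, point core)."""
    if min0 >= 0.0:
        return 0.0                 # already dominant with no cut at all
    if core_val <= 0.0:
        return None                # never dominant, not even at a 100% cut
    return -min0 / (core_val - min0)

pi_star = strong_dominance_cut(min0=-0.10, core_val=0.15)
print(f"A1 strongly dominates A2 from a {100 * pi_star:.0f}% cut")
```

The smaller this cut level is, the better separated the expected values of the two alternatives are, which is exactly what the Delta diagram shows graphically.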
(ii b) Remove the markedly dominated consequence sets
(ii c) A combination of (ii a) and (ii b)
(iii) If only one consequence set remains:
(iii a) Uncut the frame until other consequence sets appear
(iii b) Study the markedly dominated consequence sets
(iii c) A combination of (iii a) and (iii b)

Before a new iteration starts, alternatives found to be undesirable or obviously inferior on the basis of other information could be removed from the decision process. Likewise, a new alternative can be added, should the information gathered indicate the need for it. Consequences in an alternative can be added or removed as necessary to reflect changes in the model. Often a number of cycles are necessary to produce an interesting and reliable result. After the appropriate number of iterations has been completed, both the decision problem and its proposed solution(s), in the form of preferred courses of action, will be fairly well understood and documented. Anyone interested, with access to the information, can afterwards check, verify (and criticize) the decision based on the output documentation, which, because all consequences are clearly presented, shows how all the alternative courses of action have been valued. Also, during the decision process, the analysis is open for comments and can become the basis for further discussions.
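The iterative cycle described above can be sketched roughly as follows (Python; `is_markedly_dominated` is a hypothetical predicate standing in for the evaluation step, and the loop structure is our simplification, not DecideIT's implementation):

```python
def iterate_analysis(alternatives, is_markedly_dominated, max_rounds=5):
    """Rough sketch of the evaluation cycle: repeatedly remove markedly
    dominated alternatives, stopping when at most one remains or when a
    round removes nothing (at which point one would refine statements,
    uncut the frame, or add/remove alternatives and consequences)."""
    for _ in range(max_rounds):
        remaining = [a for a in alternatives if not is_markedly_dominated(a)]
        if len(remaining) <= 1 or remaining == alternatives:
            return remaining          # study the result, or uncut the frame
        alternatives = remaining      # refine the model, then iterate
    return alternatives

# Toy run: A3 is markedly dominated, A1 and A2 survive the cycle.
survivors = iterate_analysis(["A1", "A2", "A3"], lambda a: a == "A3")
print(survivors)   # → ['A1', 'A2']
```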