
Introduction to Multi-Criterion Decision Analysis

Often in engineering we are faced with making a choice among various options. If the
only consideration is cost, we can use economic principles to guide our selection. Many
times, however, the choice involves both quantitative measures (such as costs) and
qualitative measures (such as ease of use or political viability). In such cases we must
use a process to approximately “quantify” all measures on a similar, numerical scale so
that we can perform mathematical calculations. Multi-Criterion Decision Analysis
(MCDA) is a numerical process to compare or “score” alternatives on a comparable
scale. This handout describes the methods used in the general MCDA spreadsheet
model.

The steps of the method are as follows:

1) Determine the main criteria that should be considered in choosing the best option.
These criteria should be reasonably independent. For example, if the criteria are
identified as construction cost, operation cost, and maintenance cost, they all relate
to cost and are not truly independent. It would be better to treat them as sub-criteria
of a general cost criterion.
2) Determine the relative importance of the criteria to each other. A common
approach is to select the least important criterion and assign it a value of 1. Then,
for each of the other criteria, ask the question "How many times more important is this
criterion than the least important criterion?" The answer determines the value
assigned: if the selected criterion is twice as important as the least important
criterion, it receives a value of 2; if it is equally important, it receives a value
of 1. Fractions are permissible; for example, a value of 1.5 indicates that the
selected criterion is one and one-half times as important as the least important
criterion. It is also necessary to limit the maximum value assigned to any criterion.
A maximum value of 3 or 4 is a good choice; if the maximum value is too large, it has
the numerical effect of reducing the problem to a single-criterion problem. After a
relative importance value is obtained for each criterion, a normalized importance
"weight" for each criterion is obtained by dividing the individual relative importance
value by the sum of all the relative importance values. This produces a set of
"importance weights" that sum to one.
3) Use a process similar to Step 2) to assign normalized importance weights to sub-
criteria that have been defined.
4) Select the alternatives to consider. For each alternative, evaluate the performance of
that alternative with respect to each criterion or sub-criterion. This performance
might be described as a number (such as construction cost) or it might be a word
(such as good or poor).
5) Convert the evaluations of Step 4) to a common numerical score called a "rating". A
commonly used scale is 1 to 5, where 5 represents the best condition and 1 represents
the worst condition. A 1-to-5 scale fits word descriptions such as: Poor (1), Fair (2),
OK (3), Good (4), Excellent (5). Summarize the results of Steps 2) – 5) into a table
called a payoff or impact matrix:

Criterion Importance Wgts Alternative 1 Alternative 2 Alternative 3
C1 W1 R1,1 R1,2 R1,3
C2 W2 R2,1 R2,2 R2,3
C3 W3 R3,1 R3,2 R3,3
C4 W4 R4,1 R4,2 R4,3

6) It is important to make sure that no alternative is completely
"dominated" by the others. An alternative is completely dominated by another if its
rating on every criterion is lower than the corresponding rating of the other
alternative. For example, if all the ratings for alternative 1 are less than those for
alternative 2, Ri,1 < Ri,2 (for i = 1, 2, 3, 4), then there is no need to consider alternative 1.
7) Now the ratings in the payoff or impact matrix must be combined into a final score
for each alternative. One of the most common MCDA methods used to do this is
called the weighted average method (WAM). The score for an alternative is defined
as the summation of the products of the normalized weights times the rating for each
criteria. For example, the overall score for alternative 1 would be computed as:

    S1 = Σ (i = 1 to 4) Wi * Ri,1

Where i represents the various criteria. In general then, the score for alternative j is
found by:

    Sj = Σ (i = 1 to 4) Wi * Ri,j

Where j = 1, 2, 3

The alternative with the highest score (maximum value of Sj) is the preferred
alternative; it is said to have a "rank" of 1. The alternative with the second highest
score is the second preference (rank 2), and so forth. This should be a very familiar
process, because it is the method most commonly used to assign grades in school, to
score performances at sporting events (such as the Olympics), and to rank the best
choices for cars, computers, etc. in popular magazines.

8) If sub-criteria are used, the ratings for the sub-criteria are combined using the sub-
criteria weights in the manner described in Step 7) to produce an overall rating for the
main criteria. These overall ratings are then combined as described in Step 7).
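
The numbered steps above can be sketched in a few lines of Python. This is a minimal sketch: the three criteria, their relative-importance values, and the 1-5 ratings are made-up illustrations, not data from this handout or its spreadsheet.

```python
# Sketch of Steps 2), 6), and 7): normalize importance values to weights,
# check for dominated alternatives, and compute weighted-average scores.

def normalize(relative_importance):
    """Step 2: convert relative-importance values into weights that sum to 1."""
    total = sum(relative_importance)
    return [v / total for v in relative_importance]

def is_dominated(ratings_a, ratings_b):
    """Step 6: True if alternative a is rated below b on every criterion."""
    return all(ra < rb for ra, rb in zip(ratings_a, ratings_b))

def wam_scores(weights, payoff):
    """Step 7: S_j = sum over i of W_i * R_i,j,
    where payoff[i][j] is the rating of alternative j on criterion i."""
    n_alts = len(payoff[0])
    return [sum(w * row[j] for w, row in zip(weights, payoff))
            for j in range(n_alts)]

weights = normalize([3, 1, 2])       # least important criterion gets a 1
payoff = [[4, 2, 5],                 # illustrative ratings on a 1-5 scale
          [3, 5, 1],
          [2, 3, 3]]
scores = wam_scores(weights, payoff)
ranks = [sorted(scores, reverse=True).index(s) + 1 for s in scores]
# the highest score receives rank 1; here ranks == [2, 3, 1]
```

The weights sum to one by construction, so each score stays on the same 1-5 scale as the ratings.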

There are many MCDA methods; the basic difference among them is the way the scoring
process is handled. The Discrete Compromise Programming Method (CP) is very similar
to the WAM, except in the way it defines its ratings. Instead of using a scale such as 1 to
5, it uses the following ratio equation (metric) to determine the rating as a measure of the
relative performance of an alternative with respect to the best and worst alternatives for a
specific criterion:

    Ri,j = [ (Actuali,j - Worsti) / (Besti - Worsti) ]^p

Notice that if a particular alternative is the best, it will receive a rating of 1, and if it is
the worst, it will receive a rating of 0. The exponent p in the equation is used to put increasing
stress on the better rating values. If p = 1, the results are very close to the WAM, only
on a different scale. If p = 2, then the larger the ratio, the less its square is reduced
(compare 0.9² = 0.81 with 0.2² = 0.04). This has the effect of giving more weight to the better ratings.
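
As a quick check on the metric, here is a small sketch; the construction-cost figures are illustrative assumptions, not data from the example later in this handout.

```python
# Sketch of the CP rating metric: R = ((actual - worst)/(best - worst))**p.

def cp_rating(actual, best, worst, p=2):
    """Rating of 1 for the best alternative, 0 for the worst."""
    return ((actual - worst) / (best - worst)) ** p

# For a "minimize" criterion such as construction cost, best < worst:
best, worst = 100_000, 250_000
r_best  = cp_rating(100_000, best, worst)   # best alternative  -> 1.0
r_worst = cp_rating(250_000, best, worst)   # worst alternative -> 0.0
r_mid   = cp_rating(175_000, best, worst)   # ratio 0.5, squared -> 0.25
```

Note how the halfway alternative drops from 0.5 to 0.25 when p = 2, while the best alternative keeps its full rating of 1.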

Both the Weighted Average Method and the Discrete Compromise Programming Method
are called “Value-based Methods.” This means that the actual value of the rating is used
to find the final score. For example, a rating of 4 is exactly twice as good as a rating of 2.
A different group of MCDA methods is called "Out-ranking Methods." In an out-
ranking method, the actual value of the rating is much less important. What matters
is whether one rating is preferred over another.

The out-ranking method used in the spreadsheet is the PROMETHEE II Method. In this
method, we begin by making pair-wise comparisons of all the alternatives. For example,
suppose we begin by comparing alternative A1 to alternative A2 with respect to criterion
1. We ask the question "Is A1 preferred to A2 with respect to C1?" If it is preferred,
we assign a value of 1 in a preference table; if it is not preferred, we assign a value of
0. A value of 0 is also assigned if the rating values are equal (R1,1 = R1,2), since equal
ratings imply that neither is better than the other. The method also allows a range of
indifference to be considered. This means that if the difference between the ratings of A1
and A2 is less than some range of indifference (for example, within 5% of each other),
then no preference is considered significant and a value of 0 is assigned for the comparison. If
this range of indifference is set to zero for a certain criterion, this implies a "strict
preference structure." The pair-wise comparisons yield a preference
table with a row for each criterion and a number of columns equal to the square of the
number of alternatives. For example, the pair-wise comparison of 5
alternatives with respect to 5 criteria will produce a preference table with 5 rows and 25
columns. An example of such a table is shown below:

          A1-A1 A1-A2 A1-A3 A1-A4 A1-A5 A2-A1 A2-A2 A2-A3 A2-A4 A2-A5 A3-A1 ...
C1   W1     0     0     0     0     0     1     0     1     1     1     0
C2   W2     0     1     1     1     1     0     0     1     1     1     0
C3   W3     0     0     1     0     1     0     0     1     0     1     0
C4   W4     0     0     1     1     1     0     0     1     1     1     0
C5   W5     0     0     0     0     0     0     0     0     0     0     0

It is important to explain why all the comparisons in the table are made. First, it is not
absolutely necessary to include the columns where an alternative is compared to itself: an
alternative cannot be preferred to itself, so the preference values
must equal 0 for all criteria. These columns are included in the spreadsheet model
because they provide a symmetrical structure that is easier to understand.

Next it is important to point out that the relationship of A1-A2 and A2-A1 is not simply
complementary. For example, if A1 is preferred to A2 with respect to C1, then the
corresponding preference value of A1-A2 is equal to 1, while the preference value of A2-
A1 is 0. If A2 had been preferred to A1, then the preference value of A1-A2 is equal to
0, while the preference value of A2-A1 is equal to 1. This appears to suggest a
complementary relationship. However, suppose that A1 is equal to A2. Then the
preference value of A1-A2 is equal to 0 and the preference value of A2-A1 is also equal
to 0. It is for this reason that all comparisons should be made.
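
This tie behavior can be captured in a one-line preference function. The sketch below uses an absolute indifference threshold for simplicity, whereas the spreadsheet expresses indifference as a percentage of the ratings.

```python
# Preference value of alternative a over b on one criterion: 1 only if
# a's rating exceeds b's by more than the indifference range, else 0.
# A tie therefore yields 0 in BOTH directions, so the preference table
# is not simply complementary.
def pref(r_a, r_b, indifference=0.0):
    return 1 if (r_a - r_b) > indifference else 0

pref(4, 2), pref(2, 4)   # (1, 0): looks complementary
pref(3, 3), pref(3, 3)   # (0, 0): a tie is 0 both ways
```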

The format of the preference table is similar to the format of a WAM table. For each
column in the preference table, if we sum up the products of the criteria weights and their
corresponding preference value, we will get a weighted average preference score. For
our example of 5 alternatives, we would have 25 scores. These scores are then input into
a second table, called an “out-ranking” table. This table has the following format:

A1 A2 A3 A4 A5
A1 Score for Score for Score for Score for Score for
A1-A1 A1-A2 A1-A3 A1-A4 A1-A5
A2 Score for Score for Score for Score for Score for
A2-A1 A2-A2 A2-A3 A2-A4 A2-A5
A3 … … … … …
A4 … … … … …
A5 … … … … …

The format of this table is such that the rows represent the amount by which an
alternative is preferred to each of the other alternatives. If we were to sum up or take the
average of the values across the row (over the columns), this would represent the amount
by which an alternative is preferred to all of the other alternatives. The PROMETHEE
Method uses an average over all the values in the row, except for the column containing
the comparison of an alternative to itself. Notice that this is mathematically the same as
summing up all values in the row and then dividing by the number of alternatives minus
1.

In a similar manner the columns represent the amount by which each of the other
alternatives is preferred to a given alternative. If we take the average over the column,
this represents the amount by which all of the other alternatives are preferred to a given
alternative.

If we perform these averaging operations, we can append an additional column and row
to the previous table:

A1 A2 A3 A4 A5 Φ+
A1 Score for Score for Score for Score for Score for Avg over
A1-A1 A1-A2 A1-A3 A1-A4 A1-A5 row 1
A2 Score for Score for Score for Score for Score for Avg over
A2-A1 A2-A2 A2-A3 A2-A4 A2-A5 row 2
A3 … … … … … …
A4 … … … … … …
A5 … … … … … …
Φ- Avg over Avg over … … …
col A1 col A2

Now to determine the final ranking we compute the net outranking, Φ, by:

    Φ = Φ+ − Φ-

The larger the value of Φ, the better. Notice that as a result of the way Φ is defined, a
positive value indicates that the degree of outranking exceeds the degree of being outranked, a
negative value indicates the reverse, and a value of 0 indicates that the degree of
outranking equals the degree of being outranked.

The spreadsheet contains two implementations of the PROMETHEE II method. They
differ in the way they handle the sub-criteria. In the pure application of the method, the
procedure is applied to each of the sub-criteria. Since the spreadsheet allows 5 main
criteria, each with up to 5 sub-criteria, this requires 5 tables, each with 5 rows (one for
each sub-criterion) and 25 columns. The net outrankings based upon the results of each
of these 5 tables represent the "scores" for the main criteria. An additional table of
5 rows and 25 columns is then required to process the results for the main criteria and
arrive at the final outrankings.

To reduce the size of the problem, one option is to first combine the sub-criteria using the
weighted average method. This will produce “scores” for the main criteria that can then
be combined using the PROMETHEE II method. This requires only a single table of 5
rows and 25 columns. This method is a hybrid combination of PROMETHEE II and the
WAM.

An Example: Suppose that you are offered three jobs upon completion of your degree.
You want to use MCDA to help you decide which of the job offers you should accept. First
you identify the important criteria and their associated sub-criteria, and then you develop
the basic value data as summarized in the table below:

ALTERNATIVES
CRITERIA 1 2 3 Best Worst
1. Salary
Direct Pay Max 48000 50000 48000 50000 48000
Benefits Max 2000 6000 4000 6000 2000
2. City Population Min 40000 125000 50000 40000 125000
3. Location
Distance from Home Min 100 1000 250 100 1000
Recreational Opportunities Max Few Some Many Many Few
4. Job Appeal Max Good Fair Poor Good Poor
Table 1: Basic Job Selection Data

The next step is to convert this data to a common 1 to 5 scale, where 5 represents the best
option. Also you need to assign weights to the main criteria and sub-criteria. The results
of this step are shown below:
                                                ALTERNATIVES      Rating Scale
CRITERIA                      Weights  S.C. Wts.  1     2     3     Best  Worst
1. Salary                     0.25               1     5     1.6
   Direct Pay                          0.7       1     5     1      5     1
   Benefits                            0.3       1     5     3      5     1
2. City Population            0.25               5     1     4.53   5     1
3. Location                   0.25               3     2     4.665
   Distance from Home                  0.5       5     1     4.33   5     1
   Recreational Opportunities          0.5       1     3     5      5     1
4. Job Appeal                 0.25               4     3     2      4     2
Table 2: Conversion to a Common Rating Scale (1-5)

In the development of Table 2, the scale applied to the Job Appeal criterion was
chosen as Excellent (5), Good (4), Fair (3), Poor (2), and Terrible (1). This was done to
illustrate that the ratings do not necessarily have to include the end points of 1 or 5.
Further note that the ratings shown for the main criterion Salary were derived by
combining the ratings for its sub-criteria using the sub-criteria weights. For instance, the
rating for alternative 3 is 1.6, which is equal to [0.7*1 + 0.3*3].

Now the ratings for the main criteria can be combined with the main criteria weights as
shown below:

                                 ALTERNATIVES
CRITERIA              Weights    1       2       3
1. Salary             0.25       1.00    5.00    1.60
2. City Population    0.25       5.00    1.00    4.53
3. Location           0.25       3.00    2.00    4.67
4. Job Appeal         0.25       4.00    3.00    2.00
Weighted Score =                 3.25    2.75    3.20
Rank (1 = Best) =                1       3       2

Table 3: Final Rankings by the Weighted Average Method

If all the main criteria were of equal importance then alternative 1 would appear to be the
best choice. However, suppose that the criteria are not equally important, as shown in the
following table of relative importance factors:
Criteria              Relative Importance
1. Salary             3
2. City Population    2
3. Location           1
4. Job Appeal         2
Table 4: Relative Importance Factors

This table indicates that the most important criterion is salary, followed by city population
and job appeal, with location least important. Converting these relative importance
factors to normalized weights (dividing each by the sum of all the factors) yields:

                                 ALTERNATIVES
CRITERIA              Weights    1       2       3
1. Salary             0.38       1.00    5.00    1.60
2. City Population    0.25       5.00    1.00    4.53
3. Location           0.13       3.00    2.00    4.67
4. Job Appeal         0.25       4.00    3.00    2.00
Weighted Score =                 3.00    3.13    2.82
Rank (1 = Best) =                2       1       3

Table 5: Final Rankings by the Weighted Average Method

This table indicates that now alternative 2 appears to be the best choice.

In the Compromise Programming method, we first convert the rating data in table 2 using
the compromise programming metric. Using a power of 2 (p=2) produces the following
table:

                                                ALTERNATIVES        Rating Scale
CRITERIA                      Weights  S.C. Wts.  1      2      3     Best  Worst
1. Salary                     0.375              0      1      0.08
   Direct Pay                          0.7       0      1      0      1     0
   Benefits                            0.3       0      1      0.25   1     0
2. City Population            0.25               1      0      0.78   1     0
3. Location                   0.125              0.5    0.125  0.85
   Distance from Home                  0.5       1      0      0.69   1     0
   Recreational Opportunities          0.5       0      0.25   1      1     0
4. Job Appeal                 0.25               1      0.25   0      1     0

Table 6: Compromise Programming Rating Table

Now using the same steps as in the weighted average method, we can compute the final
scores and rankings as shown below:

                                 ALTERNATIVES
CRITERIA              Weights    1       2       3
1. Salary             0.375      0       1       0.08
2. City Population    0.25       1       0       0.78
3. Location           0.125      0.50    0.13    0.85
4. Job Appeal         0.25       1       0.25    0
Weighted Score =                 0.56    0.45    0.33
Rank (1 = Best) =                1       2       3

Table 7: Final Rankings by the Compromise Programming Method

To apply the PROMETHEE II – WAM method, we begin with the rating
information shown in Table 5. Applying the outranking procedure to this data
produces the following results:

Preference Table for the Main Criteria

Criteria            Indiff. %  Weights  A1-A1 A1-A2 A1-A3 A2-A1 A2-A2 A2-A3 A3-A1 A3-A2 A3-A3
1. Salary           0.0%       0.38     0     0     0     1     0     1     1     0     0
2. City Population  0.0%       0.25     0     1     1     0     0     0     0     1     0
3. Location         0.0%       0.13     0     1     0     0     0     0     1     1     0
4. Job Appeal       0.0%       0.25     0     1     1     0     0     1     0     0     0

Preference Index                        0     0.625 0.5   0.375 0     0.625 0.5   0.375 0

Outranking Table    A-1     A-2     A-3     φ+
A-1                 0       0.625   0.5     0.5625
A-2                 0.375   0       0.625   0.5
A-3                 0.5     0.375   0       0.4375
φ−                  0.4375  0.5     0.5625

φ = φ+ − φ−         0.125   0.000   -0.125
Rank                1       2       3

Table 8: Final Rankings by the PROMETHEE II – WAM Method
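
The outranking calculation in Table 8 can be reproduced with a short Python sketch. It uses the weights and main-criteria ratings of Table 5 and assumes a strict preference structure (indifference range of zero), as in the table.

```python
# Sketch of the PROMETHEE II - WAM calculation, reproducing Table 8
# from the weights and main-criteria ratings of Table 5.

weights = [0.375, 0.25, 0.125, 0.25]       # salary, population, location, appeal
ratings = [[1.00, 5.00, 1.60],             # rows = criteria, columns = alternatives
           [5.00, 1.00, 4.53],
           [3.00, 2.00, 4.67],
           [4.00, 3.00, 2.00]]
n = 3                                      # number of alternatives

# Preference index pi[a][b]: weighted sum over criteria on which a beats b.
pi = [[sum(w for w, row in zip(weights, ratings) if row[a] > row[b])
       for b in range(n)] for a in range(n)]

# Row averages give phi+, column averages give phi-. The self-comparison
# term is 0, so dividing by n - 1 matches the "exclude self" rule in the text.
phi_plus  = [sum(pi[a]) / (n - 1) for a in range(n)]
phi_minus = [sum(pi[b][a] for b in range(n)) / (n - 1) for a in range(n)]
phi = [p - m for p, m in zip(phi_plus, phi_minus)]   # net outranking
# phi comes out as [0.125, 0.0, -0.125]: A1 ranks first, A2 second, A3 third
```

The preference indices match the Preference Index row of the table (for example, pi for A1 over A2 is 0.25 + 0.125 + 0.25 = 0.625).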

The final possibility would be to apply the outranking method to all of the sub-criteria
and main criteria. This would require the development of four tables similar to table 8,
one for each of the criteria. If a criterion had sub-criteria they would be used in the table.
If a criterion had no sub-criteria, it would be treated in a table by itself. The final scores
from each of these tables would then be combined in a summary table as shown below:
ALTERNATIVES
CRITERIA Weights 1 2 3
1. Salary 0.38 -0.65 1.00 -0.35
2. City Population 0.25 1.00 -1.00 0.00
3. Location 0.13 0.00 -0.50 0.50
4. Job Appeal 0.25 1.00 0.00 -1.00
Table 9: Result of the PROMETHEE Method applied to All Criteria and Sub-Criteria

Applying the PROMETHEE II method to the data in Table 9 will yield the following
results:
Preference Table for All the Main Criteria

Criteria            Indiff. %  Weights  A1-A1 A1-A2 A1-A3 A2-A1 A2-A2 A2-A3 A3-A1 A3-A2 A3-A3
1. Salary           0.0%       0.38     0     0     0     1     0     1     1     0     0
2. City Population  0.0%       0.25     0     1     1     0     0     0     0     1     0
3. Location         0.0%       0.125    0     1     0     0     0     0     1     1     0
4. Job Appeal       0.0%       0.25     0     1     1     0     0     1     0     0     0

Preference Index                        0     0.625 0.5   0.375 0     0.625 0.5   0.375 0

Outranking Table    A-1     A-2     A-3     φ+
A-1                 0       0.625   0.5     0.5625
A-2                 0.375   0       0.625   0.5
A-3                 0.5     0.375   0       0.4375
φ−                  0.4375  0.5     0.5625

φ = φ+ − φ−         0.125   0.000   -0.125
Rank                1       2       3

Table 10: Final Rankings by the PROMETHEE II Method

The results of the PROMETHEE II methods indicate that alternative 1 would be the
preferred choice. Reviewing the information from our example, you should notice that
the results can be sensitive to the weights as well as to the methodology used.
Sensitivity analysis is therefore very important in an MCDA application.

References:
1) Goicoechea, Hansen and Duckstein (1982) Multi-Objective Decision Analysis with
Engineering and Business Applications, John Wiley and Sons.
2) Brans, J.P. and Vincke, P. (1985) 'A preference ranking organisation method: The
PROMETHEE method for MCDM', Management Science, 31(6), pp. 647-656.
3) Brans, J.P., Mareschal, B. and Vincke, P. (1986) 'How to select and how to rank
projects: The PROMETHEE method', European Journal of Operational Research, 24, pp. 228-238.
4) Website describing PROMETHEE: http://www.promethee-gaia.com/
