TOPSIS

From Wikipedia, the free encyclopedia

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method, which was originally developed by Hwang and Yoon in 1981[1] with further developments by Yoon in 1987,[2] and Hwang, Lai and Liu in 1993.[3] TOPSIS is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution and the longest geometric distance from the negative ideal solution. It is a method of compensatory aggregation that compares a set of alternatives by identifying weights for each criterion, normalising scores for each criterion and calculating the geometric distance between each alternative and the ideal alternative, which is the best score in each criterion. An assumption of TOPSIS is that the criteria are monotonically increasing or decreasing. Normalisation is usually required as the parameters or criteria are often of incongruous dimensions in multi-criteria problems.[4][5] Compensatory methods such as TOPSIS allow trade-offs between criteria, where a poor result in one criterion can be negated by a good result in another criterion. This provides a more realistic form of modelling than non-compensatory methods, which include or exclude alternative solutions based on hard cut-offs.[6]

TOPSIS method[edit]

The TOPSIS process is carried out as follows:

Step 1
Create an evaluation matrix consisting of m alternatives and n criteria. The intersection of each alternative and criterion is given as x_{ij}, so that we have a matrix ( x_{ij} )_{m \times n}.
Step 2
The matrix ( x_{ij} )_{m \times n} is then normalised to form the matrix

R = ( r_{ij} )_{m \times n}, using the normalisation method  r_{ij} = \frac  {x_{ij}} {\sqrt{\sum_{i=1}^{m} x_{ij}^2 }}, i = 1, 2, . . ., m, j = 1, 2, . . ., n
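As a sketch of this normalisation step in pure Python (the 3 × 2 evaluation matrix below is hypothetical):

```python
import math

def vector_normalise(x):
    """Divide each entry by the Euclidean norm of its column:
    r_ij = x_ij / sqrt(sum_i x_ij^2)."""
    m, n = len(x), len(x[0])
    norms = [math.sqrt(sum(x[i][j] ** 2 for i in range(m))) for j in range(n)]
    return [[x[i][j] / norms[j] for j in range(n)] for i in range(m)]

# Hypothetical evaluation matrix: 3 alternatives, 2 criteria
x = [[3.0, 4.0],
     [0.0, 3.0],
     [4.0, 0.0]]
r = vector_normalise(x)  # each column of r now has unit Euclidean norm
```

After this step, criteria measured in different units become comparable, since each column is rescaled by the same factor.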


Step 3
Calculate the weighted normalised decision matrix

T = (t_{ij})_{m \times n} = ( w_j r_{ij} )_{m \times n}, i = 1, 2, . . .,  m

where w_j = W_j / \sum_{j=1}^{n}W_j, j = 1, 2, . . ., n, so that  \sum_{j=1}^{n} w_j = 1, and W_j is the original weight given to the criterion v_j, j = 1, 2, . . ., n.
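A minimal Python sketch of this weighting step (the normalised matrix and the raw criterion weights below are hypothetical):

```python
def weight_matrix(r, W):
    """Multiply each normalised column j by w_j = W_j / sum(W),
    so the applied weights sum to one."""
    total = sum(W)
    w = [Wj / total for Wj in W]
    return [[w[j] * rij for j, rij in enumerate(row)] for row in r]

# Hypothetical normalised matrix and raw criterion weights
r = [[0.6, 0.8],
     [0.0, 0.6],
     [0.8, 0.0]]
t = weight_matrix(r, W=[3.0, 1.0])  # applied weights: w = (0.75, 0.25)
```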
Step 4
Determine the worst alternative (A_w) and the best alternative (A_b):

 A_w = \{ \langle max(t_{ij} | i = 1,2,...,m)| j \in J_- \rangle, \langle min(t_{ij} | i = 1,2,...,m)| j \in J_+ \rangle \} \equiv \{ t_{wj} | j= 1,2,...,n \},

 A_b = \{ \langle min(t_{ij} | i = 1,2,...,m)| j \in J_- \rangle, \langle max(t_{ij} | i = 1,2,...,m)| j \in J_+ \rangle \} \equiv \{ t_{bj} | j= 1,2,...,n \},

where,
 J_+ = \{ j = 1,2,...,n | j associated with the criteria having a positive impact \}, and
 J_- = \{ j = 1,2,...,n | j associated with the criteria having a negative impact \}.
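A Python sketch of Step 4, with a hypothetical weighted matrix in which criterion 0 has a positive impact (j in J_+) and criterion 1 a negative impact (j in J_-):

```python
def ideal_solutions(t, benefit):
    """Column-wise best (A_b) and worst (A_w) values of the weighted
    normalised matrix t; benefit[j] is True for j in J_+ and False
    for j in J_-."""
    n = len(t[0])
    a_b, a_w = [], []
    for j in range(n):
        col = [row[j] for row in t]
        a_b.append(max(col) if benefit[j] else min(col))
        a_w.append(min(col) if benefit[j] else max(col))
    return a_b, a_w

# Hypothetical weighted matrix; criterion 0 is a benefit, criterion 1 a cost
t = [[0.45, 0.20],
     [0.00, 0.15],
     [0.60, 0.00]]
a_b, a_w = ideal_solutions(t, benefit=[True, False])
```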
Step 5
Calculate the L2-distance between the target alternative i and the worst condition A_w
 d_{iw} = \sqrt{\sum_{j=1}^{n}(t_{ij} - t_{wj})^2}, i = 1, 2, . . ., m ,

and the distance between the alternative i and the best condition A_b

 d_{ib} = \sqrt{\sum_{j=1}^{n}(t_{ij} - t_{bj})^2}, i = 1, 2, . . ., m
where d_{iw} and d_{ib} are L2-norm distances from the target alternative i to the worst and best conditions, respectively.
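The two distances can be sketched as follows (the weighted matrix and ideal solutions here are hypothetical values):

```python
import math

def l2_distances(t, a_b, a_w):
    """Euclidean (L2) distance of every alternative to the best ideal
    solution (d_ib) and to the worst ideal solution (d_iw)."""
    d_ib = [math.sqrt(sum((tij - bj) ** 2 for tij, bj in zip(row, a_b)))
            for row in t]
    d_iw = [math.sqrt(sum((tij - wj) ** 2 for tij, wj in zip(row, a_w)))
            for row in t]
    return d_ib, d_iw

# Hypothetical weighted matrix and ideal solutions
t = [[0.45, 0.20],
     [0.00, 0.15],
     [0.60, 0.00]]
d_ib, d_iw = l2_distances(t, a_b=[0.60, 0.00], a_w=[0.00, 0.20])
```

Note that the third alternative coincides with the hypothetical best solution, so its distance d_ib is zero.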
Step 6
Calculate the similarity to the worst condition:

 s_{iw}= d_{iw} / (d_{iw} + d_{ib}), 0 \le s_{iw} \le 1, i = 1, 2, . . ., m .

s_{iw} = 1 if and only if the alternative solution has the best condition; and

s_{iw} = 0 if and only if the alternative solution has the worst condition.
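The similarity score can be sketched in Python; by construction it equals 1 when the distance to the best condition is zero, and 0 when the distance to the worst condition is zero (the distances below are hypothetical):

```python
def similarity_to_worst(d_iw, d_ib):
    """s_iw = d_iw / (d_iw + d_ib); equals 1 when d_ib = 0 (the best
    condition) and 0 when d_iw = 0 (the worst condition)."""
    return d_iw / (d_iw + d_ib)

s = similarity_to_worst(d_iw=0.45, d_ib=0.25)  # hypothetical distances
```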

Step 7
Rank the alternatives according to s_{iw} (i = 1, 2, . . ., m): the higher the value of s_{iw}, the better the alternative.
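As a minimal end-to-end sketch, the seven steps can be combined in pure Python (the 3 × 2 decision matrix, weights, and benefit/cost labels are hypothetical):

```python
import math

def topsis(x, W, benefit):
    """Rank m alternatives on n criteria by similarity to the ideal
    solution; benefit[j] is True for positive-impact criteria."""
    m, n = len(x), len(x[0])
    # Step 2: vector normalisation of each column
    norms = [math.sqrt(sum(x[i][j] ** 2 for i in range(m))) for j in range(n)]
    r = [[x[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Step 3: apply weights scaled to sum to one
    w = [Wj / sum(W) for Wj in W]
    t = [[w[j] * r[i][j] for j in range(n)] for i in range(m)]
    # Step 4: best (A_b) and worst (A_w) ideal solutions per column
    a_b, a_w = [], []
    for j in range(n):
        col = [t[i][j] for i in range(m)]
        a_b.append(max(col) if benefit[j] else min(col))
        a_w.append(min(col) if benefit[j] else max(col))
    # Step 5: Euclidean distances to the best and worst solutions
    d_ib = [math.sqrt(sum((t[i][j] - a_b[j]) ** 2 for j in range(n)))
            for i in range(m)]
    d_iw = [math.sqrt(sum((t[i][j] - a_w[j]) ** 2 for j in range(n)))
            for i in range(m)]
    # Step 6: similarity to the worst condition (1 = best, 0 = worst)
    s = [d_iw[i] / (d_iw[i] + d_ib[i]) for i in range(m)]
    # Step 7: indices of the alternatives, best first
    return sorted(range(m), key=lambda i: s[i], reverse=True), s

# Hypothetical problem: criterion 0 is a benefit, criterion 1 a cost
ranking, scores = topsis([[3.0, 4.0], [2.0, 3.0], [4.0, 5.0]],
                         W=[1.0, 1.0], benefit=[True, False])
```

In this hypothetical example the third alternative wins: it is best on the benefit criterion, and its disadvantage on the cost criterion is traded off in the compensatory aggregation.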

Normalisation[edit]

Two methods of normalisation that have been used to deal with incongruous criteria dimensions are linear normalisation and vector normalisation.

Linear normalisation divides each score by a fixed value for that criterion, most commonly the criterion's maximum, r_{ij} = x_{ij} / max_i(x_{ij}). Vector normalisation was incorporated with the original development of the TOPSIS method,[1] and is the method used in Step 2 of the TOPSIS process above:

 r_{ij} = \frac  {x_{ij}} {\sqrt{\sum_{i=1}^{m} x_{ij}^2 }}, i = 1, 2, . . ., m, j = 1, 2, . . ., n

In using vector normalisation, the non-linear distances between single dimension scores and ratios should produce smoother trade-offs.[7]
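A quick numeric contrast of the two normalisations on a single hypothetical criterion column (linear normalisation taken here in its common max-division form):

```python
import math

# Hypothetical scores for one criterion across three alternatives
col = [2.0, 4.0, 6.0]

# Linear (max) normalisation: divide by the column maximum
linear = [v / max(col) for v in col]

# Vector normalisation: divide by the column's Euclidean norm
norm = math.sqrt(sum(v ** 2 for v in col))
vector = [v / norm for v in col]
```

Both methods preserve the ratios between scores in a column; they differ in the scale factor, which in turn changes the relative contribution of each criterion to the distance calculations.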

Assumptions[edit]

1. The value and suitability of each criterion should be monotonically increasing or decreasing.
2. The criteria should be independent.

Advantages[edit]

1. Allows straightforward decision making with both negative (cost) and positive (benefit) criteria.
2. Any number of criteria can be applied during the decision process.
3. Simpler and faster than methods such as AHP, FDAHP and SAW.

References[edit]

  1. ^ a b Hwang, C.L.; Yoon, K. (1981). Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag. 
  2. ^ Yoon, K. (1987). "A reconciliation among discrete compromise situations". Journal of the Operational Research Society 38: 277–286. doi:10.1057/jors.1987.44. 
  3. ^ Hwang, C.L.; Lai, Y.J.; Liu, T.Y. (1993). "A new approach for multiple objective decision making". Computers & Operations Research 20: 889–899. doi:10.1016/0305-0548(93)90109-v. 
  4. ^ Yoon, K.P.; Hwang, C. (1995). Multiple Attribute Decision Making: An Introduction. California: SAGE publications. 
  5. ^ Zavadskas, E.K.; Zakarevicius, A.; Antucheviciene, J. (2006). "Evaluation of Ranking Accuracy in Multi-Criteria Decisions". Informatica 17 (4): 601–618. 
  6. ^ Greene, R.; Devillers, R.; Luther, J.E.; Eddy, B.G. (2011). "GIS-based multi-criteria analysis". Geography Compass 5/6: 412–432. 
  7. ^ Huang, I.B.; Keisler, J.; Linkov, I. (2011). "Multi-criteria decision analysis in environmental science: ten years of applications and trends". Science of the Total Environment 409: 3578–3594. doi:10.1016/j.scitotenv.2011.06.022. 
  8. ^ Beg, I.; Rashid, T. (2014). "Multi-criteria trapezoidal valued intuitionistic fuzzy decision making with Choquet integral based TOPSIS". OPSEARCH 51 (1): 98–129.