# TOPSIS

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method, which was originally developed by Ching-Lai Hwang and Yoon in 1981 with further developments by Yoon in 1987, and Hwang, Lai and Liu in 1993. TOPSIS is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution (PIS) and the longest geometric distance from the negative ideal solution (NIS).

## Description

It is a method of compensatory aggregation that compares a set of alternatives by identifying weights for each criterion, normalising scores for each criterion, and calculating the geometric distance between each alternative and the ideal alternative, which has the best score in each criterion. An assumption of TOPSIS is that the criteria are monotonically increasing or decreasing. Normalisation is usually required, as the parameters or criteria are often of incongruous dimensions in multi-criteria problems. Compensatory methods such as TOPSIS allow trade-offs between criteria, where a poor result in one criterion can be negated by a good result in another criterion. This provides a more realistic form of modelling than non-compensatory methods, which include or exclude alternative solutions based on hard cut-offs. An example of its application to nuclear power plants is available in the literature.

## TOPSIS method

The TOPSIS process is carried out as follows:

Step 1
Create an evaluation matrix consisting of m alternatives and n criteria, with the intersection of each alternative and criterion given as $x_{ij}$, so that we have a matrix $(x_{ij})_{m\times n}$.
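As a concrete sketch, Step 1 (and the vector normalisation applied in Step 2 below) can be expressed in Python with NumPy; the scores here are invented purely for illustration:

```python
import numpy as np

# Step 1: hypothetical evaluation matrix, m = 3 alternatives (rows)
# by n = 2 criteria (columns); x[i, j] is alternative i's score on criterion j.
x = np.array([
    [250.0, 7.0],   # alternative 1
    [200.0, 9.0],   # alternative 2
    [300.0, 6.0],   # alternative 3
])
m, n = x.shape

# Step 2: vector normalisation — divide each column by its Euclidean norm,
# so every normalised column has unit L2 length.
r = x / np.sqrt((x ** 2).sum(axis=0))
```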
Step 2
The matrix $(x_{ij})_{m\times n}$ is then normalised to form the matrix $R=(r_{ij})_{m\times n}$, using the normalisation method
$r_{ij}={\frac {x_{ij}}{\sqrt {\sum _{k=1}^{m}x_{kj}^{2}}}},\quad i=1,2,\ldots ,m,\quad j=1,2,\ldots ,n.$
Step 3
Calculate the weighted normalised decision matrix
$t_{ij}=r_{ij}\cdot w_{j},\quad i=1,2,\ldots ,m,\quad j=1,2,\ldots ,n,$
where $w_{j}=W_{j}{\Big /}\sum _{k=1}^{n}W_{k},\quad j=1,2,\ldots ,n,$ so that $\sum _{j=1}^{n}w_{j}=1$, and $W_{j}$ is the original weight given to the indicator $v_{j},\quad j=1,2,\ldots ,n.$
Step 4
Determine the worst alternative $(A_{w})$ and the best alternative $(A_{b})$ :
$A_{w}=\{\langle \max(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{-}\rangle ,\langle \min(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{+}\rangle \}\equiv \{t_{wj}\mid j=1,2,\ldots ,n\},$
$A_{b}=\{\langle \min(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{-}\rangle ,\langle \max(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{+}\rangle \}\equiv \{t_{bj}\mid j=1,2,\ldots ,n\},$
where
$J_{+}=\{j=1,2,\ldots ,n\mid j{\text{ is associated with the criteria having a positive impact}}\}$, and
$J_{-}=\{j=1,2,\ldots ,n\mid j{\text{ is associated with the criteria having a negative impact}}\}$.
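Continuing with hypothetical numbers, Steps 3 and 4 might be sketched as follows; the boolean mask `benefit` (an invented helper name) marks which columns belong to $J_{+}$:

```python
import numpy as np

# Hypothetical normalised matrix r (3 alternatives x 2 criteria) and raw weights W.
r = np.array([
    [0.57, 0.54],
    [0.46, 0.70],
    [0.68, 0.47],
])
W = np.array([3.0, 1.0])

# Step 3: normalise the weights so they sum to 1, then weight the columns.
w = W / W.sum()
t = r * w

# True marks a positive-impact criterion (J+), False a negative-impact one (J-).
benefit = np.array([True, False])

# Step 4: the best alternative takes the column maximum on J+ and the
# column minimum on J-; the worst alternative is the reverse.
t_best = np.where(benefit, t.max(axis=0), t.min(axis=0))
t_worst = np.where(benefit, t.min(axis=0), t.max(axis=0))
```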
Step 5
Calculate the L2-distance between the target alternative $i$ and the worst condition $A_{w}$,
$d_{iw}={\sqrt {\sum _{j=1}^{n}(t_{ij}-t_{wj})^{2}}},\quad i=1,2,\ldots ,m,$
and the distance between the alternative $i$ and the best condition $A_{b}$,
$d_{ib}={\sqrt {\sum _{j=1}^{n}(t_{ij}-t_{bj})^{2}}},\quad i=1,2,\ldots ,m,$
where $d_{iw}$ and $d_{ib}$ are L2-norm distances from the target alternative $i$ to the worst and best conditions, respectively.
Step 6
Calculate the similarity to the worst condition:
$s_{iw}=d_{iw}/(d_{iw}+d_{ib}),\quad 0\leq s_{iw}\leq 1,\quad i=1,2,\ldots ,m.$
Here $s_{iw}=1$ if and only if the alternative solution has the best condition; and
$s_{iw}=0$ if and only if the alternative solution has the worst condition.
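A sketch of Steps 5 and 6, again using hypothetical values for the weighted matrix and its best and worst conditions:

```python
import numpy as np

# Hypothetical weighted normalised matrix t, with best/worst condition rows.
t = np.array([
    [0.4275, 0.1350],
    [0.3450, 0.1750],
    [0.5100, 0.1175],
])
t_best = np.array([0.5100, 0.1175])
t_worst = np.array([0.3450, 0.1750])

# Step 5: L2 distances of each alternative to the worst and best conditions.
d_worst = np.sqrt(((t - t_worst) ** 2).sum(axis=1))
d_best = np.sqrt(((t - t_best) ** 2).sum(axis=1))

# Step 6: similarity to the worst condition; always lies in [0, 1],
# reaching 1 only when the alternative coincides with the best condition
# and 0 only when it coincides with the worst.
s = d_worst / (d_worst + d_best)
```

In this invented example the third alternative equals the best condition, so its score is 1, while the second equals the worst condition and scores 0.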
Step 7
Rank the alternatives according to $s_{iw}\,\,(i=1,2,\ldots ,m).$

## Normalisation

Two methods of normalisation that have been used to deal with incongruous criteria dimensions are linear normalisation and vector normalisation.

The formula given in Step 2 of the TOPSIS process above is the vector normalisation, which was incorporated with the original development of the TOPSIS method:

$r_{ij}={\frac {x_{ij}}{\sqrt {\sum _{k=1}^{m}x_{kj}^{2}}}},\quad i=1,2,\ldots ,m,\quad j=1,2,\ldots ,n.$

A linear normalisation instead rescales each criterion by a single reference value, commonly the column maximum, $r_{ij}=x_{ij}/\max _{k}x_{kj}$. In using vector normalisation, the non-linear distances between single-dimension scores and ratios should produce smoother trade-offs.
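Putting the steps together, a minimal end-to-end sketch of the method (vector normalisation as in Step 2; the data, weights, and the function name `topsis` are invented for illustration) might look like:

```python
import numpy as np

def topsis(x, W, benefit):
    """Score alternatives with TOPSIS (vector normalisation, L2 distances).

    x: (m, n) evaluation matrix; W: raw criterion weights;
    benefit: boolean mask, True for positive-impact criteria (J+).
    Returns the similarity-to-worst scores s (higher is better).
    """
    r = x / np.sqrt((x ** 2).sum(axis=0))                    # Step 2
    t = r * (W / W.sum())                                    # Step 3
    t_best = np.where(benefit, t.max(axis=0), t.min(axis=0))  # Step 4
    t_worst = np.where(benefit, t.min(axis=0), t.max(axis=0))
    d_worst = np.sqrt(((t - t_worst) ** 2).sum(axis=1))      # Step 5
    d_best = np.sqrt(((t - t_best) ** 2).sum(axis=1))
    return d_worst / (d_worst + d_best)                      # Step 6

# Hypothetical example: cost (negative impact) and quality (positive impact).
x = np.array([[250.0, 7.0],
              [200.0, 9.0],
              [300.0, 6.0]])
s = topsis(x, W=np.array([1.0, 1.0]), benefit=np.array([False, True]))
ranking = np.argsort(-s)  # Step 7: alternative indices, best first
```

Here the second alternative is both the cheapest and the highest quality, so it coincides with the best condition and ranks first.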

## Online tools

• Decision Radar: a free online TOPSIS calculator written in Python.
• Yadav, Vinay; Karmakar, Subhankar; Kalbar, Pradip P.; Dikshit, A.K. (January 2019). "PyTOPS: A Python based tool for TOPSIS". SoftwareX. 9: 217–222. doi:10.1016/j.softx.2019.02.004.