# Hungarian algorithm

The Hungarian method is a combinatorial optimization algorithm that solves the assignment problem in polynomial time and which anticipated later primal–dual methods. It was developed and published in 1955 by Harold Kuhn, who gave the name "Hungarian method" because the algorithm was largely based on the earlier works of two Hungarian mathematicians: Dénes Kőnig and Jenő Egerváry.[1][2]

James Munkres reviewed the algorithm in 1957 and observed that it is (strongly) polynomial.[3] Since then the algorithm has also been known as the Kuhn–Munkres algorithm or Munkres assignment algorithm. The original algorithm ran in ${\displaystyle O(n^{4})}$ time; however, Edmonds and Karp, and independently Tomizawa, noticed that it can be modified to achieve an ${\displaystyle O(n^{3})}$ running time.[4][5] One of the most popular ${\displaystyle O(n^{3})}$ variants is the Jonker–Volgenant algorithm.[6] Ford and Fulkerson extended the method to general maximum flow problems in the form of the Ford–Fulkerson algorithm. In 2006, it was discovered that Carl Gustav Jacobi had solved the assignment problem in the 19th century, and that the solution had been published posthumously in 1890 in Latin.[7]

## The problem

### Example

In this simple example there are three workers: Paul, Dave, and Chris. One of them has to clean the bathroom, another to sweep the floors, and the third to wash the windows, but each demands different pay for the various tasks. The problem is to find the lowest-cost way to assign the jobs. It can be represented in a matrix of the costs of the workers doing the jobs. For example:

|       | Clean bathroom | Sweep floors | Wash windows |
|-------|----------------|--------------|--------------|
| Paul  | $2             | $3           | $3           |
| Dave  | $3             | $2           | $3           |
| Chris | $3             | $3           | $2           |

The Hungarian method, when applied to the above table, gives the minimum cost: $6, achieved by having Paul clean the bathroom, Dave sweep the floors, and Chris wash the windows.
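For an instance this small, the optimum can be checked directly by enumerating all 3! = 6 possible assignments. The sketch below is a brute-force check, not the Hungarian method itself; the function name `cheapest_assignment` is ours.

```python
from itertools import permutations

# Cost matrix from the example: rows = Paul, Dave, Chris;
# columns = clean bathroom, sweep floors, wash windows.
costs = [
    [2, 3, 3],  # Paul
    [3, 2, 3],  # Dave
    [3, 3, 2],  # Chris
]

def cheapest_assignment(costs):
    """Try every one-to-one assignment of workers to jobs, keep the cheapest."""
    n = len(costs)
    best = min(permutations(range(n)),
               key=lambda p: sum(costs[i][p[i]] for i in range(n)))
    return best, sum(costs[i][best[i]] for i in range(n))

assignment, total = cheapest_assignment(costs)
print(assignment, total)  # (0, 1, 2) 6 — the diagonal assignment, total cost $6
```

This exhaustive search takes O(n!) time, which is exactly what the Hungarian method avoids.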

### Matrix formulation

In the matrix formulation, we are given a nonnegative n×n matrix, where the element in the i-th row and j-th column represents the cost of assigning the j-th job to the i-th worker. We have to find an assignment of jobs to workers such that each job is assigned to exactly one worker, each worker is assigned exactly one job, and the total cost of the assignment is minimized.

This can be expressed as permuting the rows of a cost matrix C to minimize the trace of a matrix,

${\displaystyle \min _{P}\operatorname {Tr} (PC)\;,}$

where P is a permutation matrix. (Equivalently, the columns can be permuted using CP.)

If the goal is to find the assignment that yields the maximum cost, the problem can be solved by negating the cost matrix C.
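The trace formulation can be made concrete by explicitly building each permutation matrix P and evaluating Tr(PC). The brute-force sketch below (the helper name `trace_min` is ours) is only practical for tiny n, but it shows term by term why minimizing the trace is the assignment problem.

```python
from itertools import permutations

def trace_min(C):
    """Minimize Tr(PC) over all n×n permutation matrices P.

    For the permutation sigma with P[i][sigma(i)] = 1, row i of PC is
    row sigma(i) of C, so Tr(PC) = sum_i C[sigma(i)][i].
    """
    n = len(C)
    best = float("inf")
    for sigma in permutations(range(n)):
        # Build P explicitly and compute the matrix product PC.
        P = [[1 if j == sigma[i] else 0 for j in range(n)] for i in range(n)]
        PC = [[sum(P[i][k] * C[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        best = min(best, sum(PC[i][i] for i in range(n)))  # Tr(PC)
    return best

print(trace_min([[2, 3, 3], [3, 2, 3], [3, 3, 2]]))  # 6
```

Maximization can be handled the same way after negating C, as noted above.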

### Bipartite graph formulation

The algorithm can equivalently be described by formulating the problem using a bipartite graph. We have a complete bipartite graph ${\displaystyle G=(S,T;E)}$ with n worker vertices (S) and n job vertices (T), and each edge has a nonnegative cost ${\displaystyle c(i,j)}$. We want to find a perfect matching with a minimum total cost.

## The algorithm in terms of bipartite graphs

Let us call a function ${\displaystyle y:(S\cup T)\to \mathbb {R} }$ a potential if ${\displaystyle y(i)+y(j)\leq c(i,j)}$ for each ${\displaystyle i\in S,j\in T}$. The value of potential y is the sum of the potential over all vertices: ${\displaystyle \sum _{v\in S\cup T}y(v)}$.

The cost of each perfect matching is at least the value of each potential: the total cost of the matching is the sum of costs of all edges; the cost of each edge is at least the sum of potentials of its endpoints; since the matching is perfect, each vertex is an endpoint of exactly one edge; hence the total cost is at least the total potential.

The Hungarian method finds a perfect matching and a potential such that the matching cost equals the potential value. This proves that both of them are optimal. In fact, the Hungarian method finds a perfect matching of tight edges: an edge ${\displaystyle ij}$ is called tight for a potential y if ${\displaystyle y(i)+y(j)=c(i,j)}$. Let us denote the subgraph of tight edges by ${\displaystyle G_{y}}$. The cost of a perfect matching in ${\displaystyle G_{y}}$ (if there is one) equals the value of y.
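These definitions are easy to check numerically. The sketch below (with hypothetical helper names) verifies a candidate potential for the 3×3 cost matrix of the earlier example and lists its tight edges; the tight edges here happen to form a perfect matching whose cost equals the value of the potential, which certifies that both are optimal.

```python
def is_potential(c, y_s, y_t):
    """Check y(i) + y(j) <= c(i, j) for every worker i and job j."""
    n = len(c)
    return all(y_s[i] + y_t[j] <= c[i][j] for i in range(n) for j in range(n))

def tight_edges(c, y_s, y_t):
    """Edges where the potential constraint holds with equality."""
    n = len(c)
    return [(i, j) for i in range(n) for j in range(n)
            if y_s[i] + y_t[j] == c[i][j]]

# Cost matrix of the earlier example; y is a candidate potential:
# 2 on every worker vertex, 0 on every job vertex.
c = [[2, 3, 3], [3, 2, 3], [3, 3, 2]]
y_s, y_t = [2, 2, 2], [0, 0, 0]

assert is_potential(c, y_s, y_t)
print(sum(y_s) + sum(y_t))       # value of the potential: 6
print(tight_edges(c, y_s, y_t))  # [(0, 0), (1, 1), (2, 2)] — a perfect matching
```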

During the algorithm we maintain a potential y and an orientation of ${\displaystyle G_{y}}$ (denoted by ${\displaystyle {\overrightarrow {G_{y}}}}$) which has the property that the edges oriented from T to S form a matching M. Initially, y is 0 everywhere, and all edges are oriented from S to T (so M is empty). In each step, either we modify y so that its value increases, or modify the orientation to obtain a matching with more edges. We maintain the invariant that all the edges of M are tight. We are done if M is a perfect matching.

In a general step, let ${\displaystyle R_{S}\subseteq S}$ and ${\displaystyle R_{T}\subseteq T}$ be the vertices not covered by M (so ${\displaystyle R_{S}}$ consists of the vertices in S with no incoming edge and ${\displaystyle R_{T}}$ consists of the vertices in T with no outgoing edge). Let Z be the set of vertices reachable in ${\displaystyle {\overrightarrow {G_{y}}}}$ from ${\displaystyle R_{S}}$ by a directed path only following edges that are tight. This can be computed by breadth-first search.

If ${\displaystyle R_{T}\cap Z}$ is nonempty, then reverse the orientation of a directed path in ${\displaystyle {\overrightarrow {G_{y}}}}$ from ${\displaystyle R_{S}}$ to a vertex of ${\displaystyle R_{T}\cap Z}$. Thus the size of the corresponding matching increases by 1.

If ${\displaystyle R_{T}\cap Z}$ is empty, then let

${\displaystyle \Delta :=\min\{c(i,j)-y(i)-y(j):i\in Z\cap S,j\in T\setminus Z\}.}$

Δ is well defined because at least one such edge ${\displaystyle ij}$ must exist whenever the matching is not yet of maximum possible size (see the following section); it is positive because there are no tight edges between ${\displaystyle Z\cap S}$ and ${\displaystyle T\setminus Z}$. Increase y by Δ on the vertices of ${\displaystyle Z\cap S}$ and decrease y by Δ on the vertices of ${\displaystyle Z\cap T}$. The resulting y is still a potential, and although the graph ${\displaystyle G_{y}}$ changes, it still contains M (see the next subsections). We orient the new edges from S to T. By the definition of Δ the set Z of vertices reachable from ${\displaystyle R_{S}}$ increases (note that the number of tight edges does not necessarily increase).

We repeat these steps until M is a perfect matching, in which case it gives a minimum cost assignment. The running time of this version of the method is ${\displaystyle O(n^{4})}$: M is augmented n times, and within a phase where M is unchanged there are at most n potential changes (since Z grows each time), each of which can be carried out in ${\displaystyle O(n^{2})}$ time.
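The potential-based description above can be realized compactly in code. The sketch below is a standard shortest-augmenting-path formulation maintaining dual potentials u and v, in the spirit of the ${\displaystyle O(n^{3})}$ variants mentioned in the introduction; the array names and 1-based bookkeeping are implementation choices, not part of the original method.

```python
INF = float("inf")

def hungarian(cost):
    """Minimum-cost assignment for a square cost matrix.

    Maintains dual potentials u (workers) and v (jobs) and repeatedly
    finds a shortest augmenting path of tight edges. Helper arrays are
    1-based, with index 0 used as a dummy column.
    """
    n = len(cost)
    u = [0] * (n + 1)
    v = [0] * (n + 1)
    match = [0] * (n + 1)  # match[j] = worker currently assigned to job j
    way = [0] * (n + 1)    # way[j] = previous job on the alternating path
    for i in range(1, n + 1):      # insert worker i into the matching
        match[0] = i
        j0 = 0
        minv = [INF] * (n + 1)     # minimal reduced cost seen for each job
        used = [False] * (n + 1)
        while True:
            used[j0] = True
            i0, delta, j1 = match[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):  # dual update keeps matched edges tight
                if used[j]:
                    u[match[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if match[j0] == 0:      # reached an unmatched job: augment
                break
        while j0:                   # flip the alternating path
            j1 = way[j0]
            match[j0] = match[j1]
            j0 = j1
    assign = [0] * n                # assign[worker] = job
    for j in range(1, n + 1):
        assign[match[j] - 1] = j - 1
    return assign, sum(cost[i][assign[i]] for i in range(n))

print(hungarian([[2, 3, 3], [3, 2, 3], [3, 3, 2]]))  # ([0, 1, 2], 6)
```

Each of the n workers triggers at most n dual updates of ${\displaystyle O(n)}$ work each, giving the ${\displaystyle O(n^{3})}$ bound.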

### Proof that the algorithm makes progress

We must show that as long as the matching is not of maximum possible size, the algorithm is always able to make progress — that is, to either increase the number of matched edges, or tighten at least one edge. It suffices to show that at least one of the following holds at every step:

• M is of maximum possible size.
• ${\displaystyle G_{y}}$ contains an augmenting path.
• G contains a loose-tailed path: a path from some vertex in ${\displaystyle R_{S}}$ to a vertex in ${\displaystyle T\setminus Z}$ that consists of any number (possibly zero) of tight edges followed by a single loose edge. The trailing loose edge of a loose-tailed path is thus from ${\displaystyle Z\cap S}$, guaranteeing that Δ is well defined.

If M is of maximum possible size, we are of course finished. Otherwise, by Berge's lemma, there must exist an augmenting path P with respect to M in the underlying graph G. However, this path may not exist in ${\displaystyle G_{y}}$: Although every even-numbered edge in P is tight by the definition of M, odd-numbered edges may be loose and thus absent from ${\displaystyle G_{y}}$. One endpoint of P is in ${\displaystyle R_{S}}$, the other in ${\displaystyle R_{T}}$; w.l.o.g., suppose it begins in ${\displaystyle R_{S}}$. If every edge on P is tight, then it remains an augmenting path in ${\displaystyle G_{y}}$ and we are done. Otherwise, let ${\displaystyle uv}$ be the first loose edge on P. If ${\displaystyle v\notin Z}$ then we have found a loose-tailed path and we are done. Otherwise, v is reachable from some other path Q of tight edges from a vertex in ${\displaystyle R_{S}}$. Let ${\displaystyle P_{v}}$ be the subpath of P beginning at v and continuing to the end, and let ${\displaystyle P'}$ be the path formed by travelling along Q until a vertex on ${\displaystyle P_{v}}$ is reached, and then continuing to the end of ${\displaystyle P_{v}}$. Observe that ${\displaystyle P'}$ is an augmenting path in G with at least one fewer loose edge than P. P can be replaced with ${\displaystyle P'}$ and this reasoning process iterated (formally, using induction on the number of loose edges) until either an augmenting path in ${\displaystyle G_{y}}$ or a loose-tailed path in G is found.

### Proof that adjusting the potential y leaves M unchanged

To show that every edge in M remains after adjusting y, it suffices to show that for an arbitrary edge in M, either both of its endpoints, or neither of them, are in Z. To this end let ${\displaystyle vu}$ be an edge in M from T to S. It is easy to see that if v is in Z then u must be too, since every edge in M is tight. Now suppose, toward contradiction, that ${\displaystyle u\in Z}$ but ${\displaystyle v\notin Z}$. u itself cannot be in ${\displaystyle R_{S}}$ because it is the endpoint of a matched edge, so there must be some directed path of tight edges from a vertex in ${\displaystyle R_{S}}$ to u. This path must avoid v, since that is by assumption not in Z, so the vertex immediately preceding u in this path is some other vertex ${\displaystyle v'\in T}$. ${\displaystyle v'u}$ is a tight edge from T to S and is thus in M. But then M contains two edges that share the vertex u, contradicting the fact that M is a matching. Thus every edge in M has either both endpoints or neither endpoint in Z.

### Proof that y remains a potential

To show that y remains a potential after being adjusted, it suffices to show that no edge has its total potential increased beyond its cost. This is already established for edges in M by the preceding paragraph, so consider an arbitrary edge uv from S to T. If ${\displaystyle y(u)}$ is increased by Δ, then either ${\displaystyle v\in Z\cap T}$, in which case ${\displaystyle y(v)}$ is decreased by Δ, leaving the total potential of the edge unchanged, or ${\displaystyle v\in T\setminus Z}$, in which case the definition of Δ guarantees that ${\displaystyle y(u)+y(v)+\Delta \leq c(u,v)}$. Thus y remains a potential.

## Matrix interpretation

Given n workers and tasks, the problem is written in the form of an n×n matrix

```
 a1  a2  a3  a4
 b1  b2  b3  b4
 c1  c2  c3  c4
 d1  d2  d3  d4
```

where a, b, c and d are workers who have to perform tasks 1, 2, 3 and 4. a1, a2, a3, and a4 denote the penalties incurred when worker "a" does task 1, 2, 3, and 4 respectively.

The problem is equivalent to assigning each worker a unique task such that the total penalty is minimized. Note that each task can only be worked on by one worker.

### Step 1

For each row, its minimum element is subtracted from every element in that row. This leaves all elements non-negative. Therefore, if an assignment with a total penalty of 0 exists, it is by definition a minimum assignment.

This also leads to at least one zero in each row. As such, a naive greedy algorithm can attempt to assign all workers a task with a penalty of zero. This is illustrated below.

```
 0   a2  a3  a4
 b1  b2  b3  0
 c1  0   c3  c4
 d1  d2  0   d4
```

The zeros above would be the assigned tasks.

In the worst case there are n! combinations to try, since a row can contain multiple zeros when several elements tie for the row minimum. So at some point this naive algorithm must be cut short.
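Step 1 and the naive greedy attempt can be sketched as follows (the helper names are ours; note that greedy first-fit can fail to find a zero assignment even when one exists, which is why the later steps are needed):

```python
def row_reduce(matrix):
    """Step 1: subtract each row's minimum from every element of that row."""
    return [[x - min(row) for x in row] for row in matrix]

def greedy_zero_assignment(matrix):
    """Naively assign each row a zero in an unused column, if possible.

    Returns {row: column} on success, None if some row cannot be served.
    """
    used_cols = set()
    assignment = {}
    for i, row in enumerate(matrix):
        for j, x in enumerate(row):
            if x == 0 and j not in used_cols:
                assignment[i] = j
                used_cols.add(j)
                break
    return assignment if len(assignment) == len(matrix) else None

m = row_reduce([[2, 3, 3], [3, 2, 3], [3, 3, 2]])
print(m)                          # [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(greedy_zero_assignment(m))  # {0: 0, 1: 1, 2: 2}
```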

### Step 2

Sometimes it may turn out that the matrix at this stage cannot be used for assigning, as is the case for the matrix below.

```
 0   a2  0   a4
 b1  0   b3  0
 0   c2  c3  c4
 0   d2  d3  d4
```

To overcome this, we repeat the above procedure for all columns (i.e. the minimum element in each column is subtracted from all the elements in that column) and then check if an assignment with penalty 0 is possible.

In most situations this yields a zero-penalty assignment, but if it is still not possible, the procedure must continue with the steps below.
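Column reduction is the same operation applied column-wise. In the sketch below (a hypothetical numeric example; `column_reduce` is our helper name), even after both reductions no zero-penalty assignment exists, because two rows have their only zero in the same column, so Steps 3–5 are still required:

```python
def column_reduce(matrix):
    """Step 2: subtract each column's minimum from every element of that column."""
    mins = [min(col) for col in zip(*matrix)]
    return [[x - m for x, m in zip(row, mins)] for row in matrix]

# A row-reduced matrix (every row already contains a zero):
m = [[0, 1, 2],
     [0, 2, 3],
     [0, 3, 4]]
print(column_reduce(m))  # [[0, 0, 0], [0, 1, 1], [0, 2, 2]]
# Rows 2 and 3 both have their only zero in column 1, so no
# zero-penalty assignment exists yet.
```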

### Step 3

All zeros in the matrix must be covered by marking as few rows and/or columns as possible. Steps 3 and 4 form one way to accomplish this.

For each row, try to assign an arbitrary zero. Assigned tasks are represented by starring a zero. Note that no two assignments can share a row or a column.

• We assign the first zero of Row 1. The second zero of Row 1 can't be assigned.
• We assign the first zero of Row 2. The second zero of Row 2 can't be assigned.
• Zeros on Row 3 and Row 4 can't be assigned, because they are on the same column as the zero assigned on Row 1.

Choosing another ordering of the rows and columns could end with a different, equally valid, assignment.

```
 0*  a2  0   a4
 b1  0*  b3  0
 0   c2  c3  c4
 0   d2  d3  d4
```

### Step 4

Cover all columns containing a (starred) zero.

```
 ×   ×
 0*  a2  0   a4
 b1  0*  b3  0
 0   c2  c3  c4
 0   d2  d3  d4
```

(An × above a column marks it as covered.)

Find a non-covered zero and prime it. (If all zeroes are covered, skip to step 5.)

• If the zero is on the same row as a starred zero, cover the corresponding row, and uncover the column of the starred zero.
• Then, GOTO "Find a non-covered zero and prime it."
• Here, the second zero of Row 1 is uncovered. Because there is another zero starred on Row 1, we cover Row 1 and uncover Column 1.
• Then, the second zero of Row 2 is uncovered. We cover Row 2 and uncover Column 2.
```
     ×
 0*  a2  0'  a4   ×
 b1  0*  b3  0
 0   c2  c3  c4
 0   d2  d3  d4
```

(An × at the end of a row marks it as covered.)
```
 0*  a2  0'  a4   ×
 b1  0*  b3  0'   ×
 0   c2  c3  c4
 0   d2  d3  d4
```
• Else the non-covered zero has no assigned zero on its row. We make a path starting from the zero by performing the following steps:
1. Substep 1: Find a starred zero on the corresponding column. If there is one, go to Substep 2, else, stop.
2. Substep 2: Find a primed zero on the corresponding row (there should always be one). Go to Substep 1.

The zero on Row 3 is uncovered. We add to the path the first zero of Row 1, then the second zero of Row 1, then we are done.

```
 0*  a2  0'  a4   ×
 b1  0*  b3  0'   ×
 0'  c2  c3  c4
 0   d2  d3  d4
```
• (Else branch continued) For all zeros encountered during the path, star primed zeros and unstar starred zeros.
• Because the path begins and ends with a primed zero, swapping the starred and primed zeros along it yields one more assigned zero than before.
```
 0   a2  0*  a4
 b1  0*  b3  0
 0*  c2  c3  c4
 0   d2  d3  d4
```
• (Else branch continued) Unprime all primed zeroes and uncover all lines.
• Repeat the previous steps (continue looping until the above "skip to step 5" is reached).
• We cover columns 1, 2 and 3. The second zero on Row 2 is uncovered, so we cover Row 2 and uncover Column 2:
```
 ×       ×
 0   a2  0*  a4
 b1  0*  b3  0'   ×
 0*  c2  c3  c4
 0   d2  d3  d4
```

All zeros are now covered with a minimal number of rows and columns.

The detailed description above is just one way to draw a minimum number of lines covering all the zeros; other methods work as well.

### Step 5

Find the lowest uncovered value. Subtract it from every uncovered element and add it to every element covered by two lines (a row line and a column line).

This is equivalent to subtracting a number from all rows which are not covered and adding the same number to all columns which are covered. These operations do not change optimal assignments.
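The Step 5 adjustment can be sketched directly on a small example (the matrix and its cover below are hypothetical, and `adjust` is our helper name):

```python
def adjust(matrix, covered_rows, covered_cols):
    """Step 5: let d be the smallest uncovered value; subtract d from
    every uncovered element and add d to every doubly covered element."""
    n = len(matrix)
    d = min(matrix[i][j] for i in range(n) for j in range(n)
            if i not in covered_rows and j not in covered_cols)
    out = [row[:] for row in matrix]
    for i in range(n):
        for j in range(n):
            if i not in covered_rows and j not in covered_cols:
                out[i][j] -= d
            elif i in covered_rows and j in covered_cols:
                out[i][j] += d
    return out

# A reduced matrix whose zeros are all covered by row 0 and column 0
# (only 2 lines for a 3×3 matrix, so no assignment is possible yet):
m = [[0, 0, 0],
     [0, 1, 1],
     [0, 2, 2]]
print(adjust(m, covered_rows={0}, covered_cols={0}))
# [[1, 0, 0], [0, 0, 0], [0, 1, 1]] — a zero assignment now exists
```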

Repeat steps 4–5 until an assignment is possible; this is the case when the minimum number of lines needed to cover all the zeros equals min(number of workers, number of tasks). If the numbers of workers and tasks differ, the matrix is first squared with dummy rows or columns, usually filled with the maximum cost.

If following this specific version of the algorithm, the starred zeros form the minimum assignment.

By Kőnig's theorem,[8] the minimum number of covering lines (a minimum vertex cover[9] of the bipartite graph of zeros) equals n exactly when the zeros admit a maximum matching[10] of size n. Thus, once n lines are required, a minimum-cost assignment can be found by looking only at the zeros of the matrix.

## Bibliography

• R.E. Burkard, M. Dell'Amico, S. Martello: Assignment Problems (Revised reprint). SIAM, Philadelphia (PA.) 2012. ISBN 978-1-61197-222-1
• M. Fischetti, "Lezioni di Ricerca Operativa", Edizioni Libreria Progetto Padova, Italia, 1995.
• R. Ahuja, T. Magnanti, J. Orlin, "Network Flows", Prentice Hall, 1993.
• S. Martello, "Jeno Egerváry: from the origins of the Hungarian algorithm to satellite communication". Central European Journal of Operational Research 18, 47–58, 2010

## References

1. ^ Harold W. Kuhn, "The Hungarian Method for the assignment problem", Naval Research Logistics Quarterly, 2: 83–97, 1955. Kuhn's original publication.
2. ^ Harold W. Kuhn, "Variants of the Hungarian method for assignment problems", Naval Research Logistics Quarterly, 3: 253–258, 1956.
3. ^ J. Munkres, "Algorithms for the Assignment and Transportation Problems", Journal of the Society for Industrial and Applied Mathematics, 5(1):32–38, 1957 March.
4. ^ Edmonds, Jack; Karp, Richard M. (1 April 1972). "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems". Journal of the ACM. 19 (2): 248–264. doi:10.1145/321694.321699. S2CID 6375478.
5. ^ Tomizawa, N. (1971). "On some techniques useful for solution of transportation network problems". Networks. 1 (2): 173–194. doi:10.1002/net.3230010206. ISSN 1097-0037.
6. ^ Jonker, R.; Volgenant, A. (December 1987). "A shortest augmenting path algorithm for dense and sparse linear assignment problems". Computing. 38 (4): 325–340. doi:10.1007/BF02278710. S2CID 7806079.
7. ^ "Presentation". Archived from the original on 16 October 2015.
8. ^ See Kőnig's theorem (graph theory).
9. ^ See minimum vertex cover.
10. ^ See matching (graph theory).