# Fine-grained reduction

In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems. Intuitively, it provides a method for solving one problem efficiently by using an algorithm for the other problem as a subroutine. If problem ${\displaystyle A}$ can be solved in time ${\displaystyle a(n)}$ and problem ${\displaystyle B}$ can be solved in time ${\displaystyle b(n)}$, then the existence of an ${\displaystyle (a,b)}$-reduction from problem ${\displaystyle A}$ to problem ${\displaystyle B}$ implies that any significant speedup for problem ${\displaystyle B}$ would also lead to a speedup for problem ${\displaystyle A}$.

## Definition

Let ${\displaystyle A}$ and ${\displaystyle B}$ be computational problems, specified as the desired output for each possible input. Let ${\displaystyle a}$ and ${\displaystyle b}$ both be time-constructible functions that take an integer argument ${\displaystyle n}$ and produce an integer result. Usually, ${\displaystyle a}$ and ${\displaystyle b}$ are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as ${\displaystyle n^{2}}$.[1]

Then ${\displaystyle A}$ is said to be ${\displaystyle (a,b)}$-reducible to ${\displaystyle B}$ if, for every real number ${\displaystyle \epsilon >0}$, there exist a real number ${\displaystyle \delta >0}$ and an algorithm that solves instances of problem ${\displaystyle A}$ by transforming them into a sequence of instances of problem ${\displaystyle B}$, taking time ${\displaystyle O{\bigl (}a(n)^{1-\delta }{\bigr )}}$ for the transformation on instances of size ${\displaystyle n}$, and producing a sequence of instances whose sizes ${\displaystyle n_{i}}$ are bounded by ${\displaystyle \sum _{i}b(n_{i})^{1-\epsilon }<a(n)^{1-\delta }}$.[1]

An ${\displaystyle (a,b)}$-reduction is given by the mapping from ${\displaystyle \epsilon }$ to the pair of an algorithm and ${\displaystyle \delta }$.[1]
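As a concrete illustration of the size bound in the definition, the sketch below checks it numerically for a hypothetical ${\displaystyle (n^{2},n^{2})}$-reduction that splits one instance of size ${\displaystyle n}$ into two instances of size ${\displaystyle n/2}$ (the reduction, the choice ${\displaystyle \delta =\epsilon /2}$, and all names here are illustrative assumptions, not from the source):

```python
def a(n):
    """Naive time bound for problem A (here n^2)."""
    return n ** 2

def b(n):
    """Naive time bound for problem B (here n^2)."""
    return n ** 2

def subinstance_sizes(n):
    """Hypothetical reduction: one size-n instance -> two size-n/2 instances."""
    return [n // 2, n // 2]

def bound_holds(n, epsilon, delta):
    """Check the definition's requirement: sum_i b(n_i)^(1-eps) < a(n)^(1-delta)."""
    lhs = sum(b(m) ** (1 - epsilon) for m in subinstance_sizes(n))
    rhs = a(n) ** (1 - delta)
    return lhs < rhs

epsilon = 0.1
delta = epsilon / 2  # the delta this toy reduction "produces" for the given epsilon
for n in [100, 1000, 10000]:
    assert bound_holds(n, epsilon, delta)
```

Here ${\displaystyle \delta =\epsilon /2}$ plays the role of the value the reduction must supply for each ${\displaystyle \epsilon }$; since ${\displaystyle 2\cdot (n/2)^{2(1-\epsilon )}=2^{2\epsilon -1}n^{2-2\epsilon }}$, the bound holds for all sufficiently large ${\displaystyle n}$ whenever ${\displaystyle 0<\delta <\epsilon }$.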

## Speedup implication

Suppose ${\displaystyle A}$ is ${\displaystyle (a,b)}$-reducible to ${\displaystyle B}$, and there exists ${\displaystyle \epsilon >0}$ such that ${\displaystyle B}$ can be solved in time ${\displaystyle O{\bigl (}b(n)^{1-\epsilon }{\bigr )}}$. Then there also exists ${\displaystyle \delta >0}$ such that ${\displaystyle A}$ can be solved in time ${\displaystyle O{\bigl (}a(n)^{1-\delta }{\bigr )}}$. Namely, let ${\displaystyle \delta }$ be the value given by the ${\displaystyle (a,b)}$-reduction, and solve ${\displaystyle A}$ by applying the transformation of the reduction and using the fast algorithm for ${\displaystyle B}$ for each resulting subproblem.[1]
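Unpacking the time bounds makes the accounting explicit: the composed algorithm pays once for the transformation and once for each subproblem, and the definition's bound on the subproblem sizes absorbs the second term into the first (a sketch, with ${\displaystyle T_{A}(n)}$ denoting the total running time on size-${\displaystyle n}$ instances of ${\displaystyle A}$):

```latex
T_A(n) \;\le\; O\bigl(a(n)^{1-\delta}\bigr)
      \;+\; \sum_i O\bigl(b(n_i)^{1-\epsilon}\bigr)
      \;=\; O\bigl(a(n)^{1-\delta}\bigr)
      \;+\; O\Bigl(\,\sum_i b(n_i)^{1-\epsilon}\Bigr)
      \;=\; O\bigl(a(n)^{1-\delta}\bigr),
```

where the last step uses the definition's requirement that ${\displaystyle \sum _{i}b(n_{i})^{1-\epsilon }<a(n)^{1-\delta }}$.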

Equivalently, if ${\displaystyle A}$ cannot be solved in time significantly faster than ${\displaystyle a(n)}$, then ${\displaystyle B}$ cannot be solved in time significantly faster than ${\displaystyle b(n)}$.[1]

## History

Fine-grained reductions were defined, in the special case that ${\displaystyle a}$ and ${\displaystyle b}$ are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010. They also showed the existence of ${\displaystyle (n^{3},n^{3})}$-reductions between several problems including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.[2]

The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.[1]

Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.[3]

## References

1. Williams, Virginia V. (2015), "Hardness of easy problems: basing hardness on popular conjectures such as the Strong Exponential Time Hypothesis", 10th International Symposium on Parameterized and Exact Computation, LIPIcs. Leibniz Int. Proc. Inform., 43, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, pp. 17–29, MR 3452406
2. Williams, Virginia Vassilevska; Williams, R. Ryan (2018), "Subcubic equivalences between path, matrix, and triangle problems", Journal of the ACM, 65 (5): A27:1–A27:38, doi:10.1145/3186893, MR 3856539. A preliminary version of these results, including the definition of a "subcubic reduction", a special case of a fine-grained reduction, was presented at the 2010 Symposium on Foundations of Computer Science.
3. Carmosino, Marco L.; Gao, Jiawei; Impagliazzo, Russell; Mihajlin, Ivan; Paturi, Ramamohan; Schneider, Stefan (2016), "Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility", ITCS'16—Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, ACM, New York, pp. 261–270, MR 3629829