Quadratic unconstrained binary optimization (QUBO), also known as unconstrained binary quadratic programming (UBQP), is a combinatorial optimization problem with a wide range of applications from finance and economics to machine learning.[1] QUBO is an NP-hard problem, and embeddings into QUBO have been formulated for many classical problems from theoretical computer science, such as maximum cut, graph coloring and the partition problem.[2][3] Embeddings for machine learning models include support-vector machines, clustering and probabilistic graphical models.[4] Moreover, due to its close connection to Ising models, QUBO constitutes a central problem class for adiabatic quantum computation, where it is solved through a physical process called quantum annealing.[5]

## Definition

Let ${\displaystyle f_{Q}:\mathbb {B} ^{n}\rightarrow \mathbb {R} }$ be a quadratic polynomial over binary variables,

${\displaystyle f_{Q}(x)=\sum _{i=1}^{n}\sum _{j=1}^{i}q_{ij}x_{i}x_{j}}$

with ${\displaystyle x_{i}\in \mathbb {B} }$ for ${\displaystyle i\in [n]}$ and coefficients ${\displaystyle q_{ij}\in \mathbb {R} }$ for ${\displaystyle 1\leq j\leq i\leq n}$. Here ${\displaystyle [n]}$ denotes the set of strictly positive integers less than or equal to ${\displaystyle n}$, and ${\displaystyle \mathbb {B} =\lbrace 0,1\rbrace }$. The QUBO problem consists of finding a binary vector ${\displaystyle x^{*}}$ that minimizes ${\displaystyle f_{Q}}$ over all binary vectors, namely

${\displaystyle x^{*}={\underset {x\in \mathbb {B} ^{n}}{\arg \min }}~f_{Q}(x)}$

Sometimes, QUBO is defined as a maximization instead of a minimization problem, which has no effect on the problem's complexity class, as maximizing ${\displaystyle f_{Q}}$ is the same as minimizing ${\displaystyle f_{-Q}=-f_{Q}}$ (see below). Another, more compact way to formulate ${\displaystyle f_{Q}}$ is using matrix notation,

${\displaystyle f_{Q}(x)=x^{\top }Qx}$

where ${\displaystyle Q\in \mathbb {R} ^{n\times n}}$ is the lower triangular matrix containing the coefficients ${\displaystyle q_{ij}}$, with zeros above the diagonal. Equivalently, ${\displaystyle Q}$ can be chosen symmetric by splitting each off-diagonal coefficient evenly between the entries ${\displaystyle q_{ij}}$ and ${\displaystyle q_{ji}}$, which leaves the value ${\displaystyle x^{\top }Qx}$ unchanged.
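For small ${\displaystyle n}$, the definition can be illustrated directly by enumerating all ${\displaystyle 2^{n}}$ binary vectors. The following is a minimal sketch; the function name and the instance values are our own, chosen purely for illustration:

```python
import itertools

import numpy as np

def solve_qubo_brute_force(Q):
    """Return a binary vector minimizing x^T Q x by exhaustive search."""
    n = Q.shape[0]
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x          # f_Q(x) in matrix notation
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy instance in lower triangular form:
# f_Q(x) = -x1 - x2 - x3 + 2*x1*x2 + 2*x2*x3
Q = np.array([[-1.0,  0.0,  0.0],
              [ 2.0, -1.0,  0.0],
              [ 0.0,  2.0, -1.0]])
x_star, f_star = solve_qubo_brute_force(Q)
# x_star = [1, 0, 1], f_star = -2.0
```

Exhaustive search is of course exponential in ${\displaystyle n}$ and only feasible for toy instances; it serves here solely to make the definition concrete.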

## Properties

• Multiplying the coefficients ${\displaystyle q_{ij}}$ by a positive factor ${\displaystyle \alpha >0}$ scales the output of ${\displaystyle f}$ accordingly, leaving the optimum ${\displaystyle x^{*}}$ unchanged:
${\displaystyle f_{\alpha Q}(x)=\sum _{j\leq i}(\alpha q_{ij})x_{i}x_{j}=\alpha \sum _{j\leq i}q_{ij}x_{i}x_{j}=\alpha f_{Q}(x)}$
• Flipping the sign of all coefficients flips the sign of ${\displaystyle f}$'s output, making ${\displaystyle x^{*}}$ the binary vector that maximizes ${\displaystyle f_{-Q}}$:
${\displaystyle f_{-Q}(x)=\sum _{j\leq i}(-q_{ij})x_{i}x_{j}=-\sum _{j\leq i}q_{ij}x_{i}x_{j}=-f_{Q}(x)}$
• If all coefficients are positive, the optimum is trivially ${\displaystyle x^{*}=(0,\dots ,0)}$. Similarly, if all coefficients are negative, the optimum is ${\displaystyle x^{*}=(1,\dots ,1)}$.
• If ${\displaystyle \forall i\neq j:~q_{ij}=0}$, then the corresponding QUBO problem is solvable in ${\displaystyle {\mathcal {O}}(n)}$, the optimal variable assignments ${\displaystyle x_{i}^{*}}$ simply being 1 if ${\displaystyle q_{ii}<0}$ and 0 otherwise.
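The last property above, the linear-time solution of a purely diagonal instance, can be sketched in a few lines (the function name is ours, not standard):

```python
import numpy as np

def solve_diagonal_qubo(diag):
    """Solve a QUBO with all off-diagonal coefficients zero in O(n) time:
    each variable contributes q_ii * x_i independently, so the optimum
    sets x_i = 1 exactly when q_ii < 0."""
    diag = np.asarray(diag, dtype=float)
    x = (diag < 0).astype(int)
    return x, float(diag[x == 1].sum())

x_star, f_star = solve_diagonal_qubo([-2.0, 3.0, -0.5, 0.0])
# x_star = [1, 0, 1, 0], f_star = -2.5
```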

## Connection to Ising models

QUBO is very closely related and computationally equivalent to the Ising model, whose Hamiltonian function is defined as

${\displaystyle H(\sigma )=-\sum _{\langle i~j\rangle }J_{ij}\sigma _{i}\sigma _{j}-\mu \sum _{j}h_{j}\sigma _{j}}$

with real-valued parameters ${\displaystyle h_{j},J_{ij},\mu }$ for all ${\displaystyle i,j}$. The spin variables ${\displaystyle \sigma _{j}}$ are binary with values from ${\displaystyle \lbrace -1,+1\rbrace }$ instead of ${\displaystyle \mathbb {B} }$. Moreover, in the Ising model the variables are typically arranged in a lattice where only neighboring pairs of variables ${\displaystyle \langle i~j\rangle }$ can have non-zero coefficients. Applying the change of variables ${\displaystyle \sigma _{j}=2x_{j}-1}$, which maps ${\displaystyle \mathbb {B} }$ onto ${\displaystyle \lbrace -1,+1\rbrace }$, yields an equivalent QUBO problem:[2]

{\displaystyle {\begin{aligned}f(x)&=-\sum _{\langle i~j\rangle }J_{ij}(2x_{i}-1)(2x_{j}-1)-\sum _{j}\mu h_{j}(2x_{j}-1)\\&=\sum _{\langle i~j\rangle }\left(-4J_{ij}x_{i}x_{j}+2J_{ij}x_{i}+2J_{ij}x_{j}-J_{ij}\right)-\sum _{j}\left(2\mu h_{j}x_{j}-\mu h_{j}\right)&&{\text{using }}x_{j}=x_{j}x_{j}\\&=\sum _{i=1}^{n}\sum _{j=1}^{i}q_{ij}x_{i}x_{j}+C\end{aligned}}}

where

{\displaystyle {\begin{aligned}q_{ij}&={\begin{cases}-4J_{ij}&{\text{if }}i\neq j\\\sum _{\langle k~i\rangle }2J_{ki}+\sum _{\langle i~\ell \rangle }2J_{i\ell }-2\mu h_{i}&{\text{if }}i=j\end{cases}}\\C&=-\sum _{\langle i~j\rangle }J_{ij}+\sum _{j}\mu h_{j}\end{aligned}}}

As the constant ${\displaystyle C}$ does not change the position of the optimum ${\displaystyle x^{*}}$, it can be neglected during optimization and is only important for recovering the original Hamiltonian function value.
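The conversion can be sketched directly from the substitution ${\displaystyle \sigma _{j}=2x_{j}-1}$; the exhaustive check at the end confirms that ${\displaystyle H(\sigma )}$ and ${\displaystyle x^{\top }Qx+C}$ agree on every configuration. Function names, the upper-triangular convention for ${\displaystyle J}$, and the instance values are our own assumptions for illustration:

```python
import itertools

import numpy as np

def ising_to_qubo(J, h, mu=1.0):
    """Derive QUBO coefficients (lower triangular Q) and offset C by applying
    sigma_j = 2*x_j - 1 to the Hamiltonian
    H(sigma) = -sum_{<i j>} J_ij sigma_i sigma_j - mu * sum_j h_j sigma_j."""
    n = len(h)
    Q = np.zeros((n, n))
    C = 0.0
    for j in range(n):                  # linear terms: -mu*h_j*(2x_j - 1)
        Q[j, j] -= 2.0 * mu * h[j]
        C += mu * h[j]
    for i in range(n):
        for j in range(i + 1, n):       # couplings, J given as upper triangle
            Q[j, i] -= 4.0 * J[i, j]    # quadratic term (lower triangle)
            Q[i, i] += 2.0 * J[i, j]    # linear remainders, folded in
            Q[j, j] += 2.0 * J[i, j]    # via x_j = x_j * x_j
            C -= J[i, j]
    return Q, C

def ising_energy(s, J, h, mu=1.0):
    """The Hamiltonian H(sigma) as defined above."""
    pair = sum(-J[i, j] * s[i] * s[j]
               for i in range(len(s)) for j in range(i + 1, len(s)))
    return pair - mu * sum(h[j] * s[j] for j in range(len(s)))

# Example instance (values chosen for illustration only)
J = np.array([[0.0, 1.5,  0.0],
              [0.0, 0.0, -0.5],
              [0.0, 0.0,  0.0]])
h = [0.3, -1.0, 0.7]
Q, C = ising_to_qubo(J, h)

# Exhaustive check: H(sigma) equals x^T Q x + C under sigma = 2x - 1
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits)
    assert abs(ising_energy(2 * x - 1, J, h) - (x @ Q @ x + C)) < 1e-9
```

Because the check holds for every configuration, minimizing the QUBO objective and minimizing the Hamiltonian select the same configuration, up to the bijection between ${\displaystyle x}$ and ${\displaystyle \sigma }$.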

## References

1. ^ Kochenberger, Gary; Hao, Jin-Kao (2014). "The unconstrained binary quadratic programming problem: a survey" (PDF). Journal of Combinatorial Optimization. 28: 58–81. doi:10.1007/s10878-014-9734-0. S2CID 16808394.
2. ^ a b Glover, Fred; Kochenberger, Gary (2019). "A Tutorial on Formulating and Using QUBO Models". arXiv:1811.11538 [cs.DS].
3. ^ Lucas, Andrew (2014). "Ising formulations of many NP problems". Frontiers in Physics. 2: 5. arXiv:1302.5843. Bibcode:2014FrP.....2....5L. doi:10.3389/fphy.2014.00005.
4. ^ Mücke, Sascha; Piatkowski, Nico; Morik, Katharina (2019). "Learning Bit by Bit: Extracting the Essence of Machine Learning" (PDF). LWDA. S2CID 202760166. Archived from the original (PDF) on 2020-02-27.
5. ^ Tom Simonite (8 May 2013). "D-Wave's Quantum Computer Goes to the Races, Wins". MIT Technology Review. Retrieved 12 May 2013.