Quadratic unconstrained binary optimization (QUBO), also known as unconstrained binary quadratic programming (UBQP), is a combinatorial optimization problem with a wide range of applications from finance and economics to machine learning. QUBO is an NP-hard problem, and for many classical problems from theoretical computer science, like maximum cut, graph coloring and the partition problem, embeddings into QUBO have been formulated. Embeddings for machine learning models include support-vector machines, clustering and probabilistic graphical models. Moreover, due to its close connection to Ising models, QUBO constitutes a central problem class for adiabatic quantum computation, where it is solved through a physical process called quantum annealing.

## Definition

The set of binary vectors of a fixed length $n>0$ is denoted by $\mathbb {B} ^{n}$ , where $\mathbb {B} =\lbrace 0,1\rbrace$ is the set of binary values (or bits). We are given a real-valued upper triangular matrix $Q\in \mathbb {R} ^{n\times n}$ , whose entries $Q_{ij}$ define a weight for each pair of indices $i,j\in \lbrace 1,\dots ,n\rbrace$ within the binary vector. We can define a function $f_{Q}:\mathbb {B} ^{n}\rightarrow \mathbb {R}$ that assigns a value to each binary vector through

$f_{Q}(x)=x^{\top }Qx=\sum _{i=1}^{n}\sum _{j=i}^{n}Q_{ij}x_{i}x_{j}$

Intuitively, the weight $Q_{ij}$ is added if both $x_{i}$ and $x_{j}$ have value 1. When $i=j$, the values $Q_{ii}$ are added if $x_{i}=1$, as $x_{i}x_{i}=x_{i}$ for all $x_{i}\in \mathbb {B}$.
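As a small illustration, the following sketch (using NumPy, with purely illustrative values for $Q$) evaluates $f_{Q}$ both in matrix form and via the explicit double sum over the upper triangle:

```python
import numpy as np

# Illustrative upper-triangular weight matrix for n = 3
Q = np.array([[ 2.0, -1.0,  3.0],
              [ 0.0,  1.0, -4.0],
              [ 0.0,  0.0, -2.0]])

def f_Q(x, Q):
    """Evaluate f_Q(x) = x^T Q x = sum over i <= j of Q_ij * x_i * x_j."""
    return x @ Q @ x

x = np.array([1, 0, 1])
# The matrix form and the explicit double sum agree;
# the diagonal entry Q_ii is added whenever x_i = 1, since x_i * x_i = x_i.
double_sum = sum(Q[i, j] * x[i] * x[j] for i in range(3) for j in range(i, 3))
print(f_Q(x, Q), double_sum)   # both evaluate to 2 + 3 - 2 = 3
```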

The QUBO problem consists of finding a binary vector $x^{*}$ that is minimal with respect to $f_{Q}$ , namely

$x^{*}={\underset {x\in \mathbb {B} ^{n}}{\arg \min }}~f_{Q}(x)$

In general, $x^{*}$ is not unique, meaning there may be a set of minimizing vectors with equal value w.r.t. $f_{Q}$. The complexity of QUBO arises from the number of candidate binary vectors to be evaluated, as $|\mathbb {B} ^{n}|=2^{n}$ grows exponentially in $n$.

Sometimes, QUBO is defined as the problem of maximizing $f_{Q}$ , which is equivalent to minimizing $f_{-Q}=-f_{Q}$ .
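The exponential search space can be made concrete with a brute-force solver. The sketch below (a minimal NumPy example, only practical for small $n$) enumerates all $2^{n}$ binary vectors; the maximization variant is handled by negating $Q$:

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q):
    """Return a minimizer of f_Q by enumerating all 2^n binary vectors.
    Only feasible for small n, since the search space grows exponentially."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

Q = np.array([[ 2.0, -1.0,  3.0],
              [ 0.0,  1.0, -4.0],
              [ 0.0,  0.0, -2.0]])
x_star, f_star = solve_qubo_brute_force(Q)
print(x_star, f_star)          # [0 1 1] with value 1 - 4 - 2 = -5

# Maximizing f_Q is equivalent to minimizing f_{-Q}:
x_max, _ = solve_qubo_brute_force(-Q)
```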

## Properties

• Multiplying the coefficients $Q_{ij}$ by a positive factor $\alpha >0$ scales the output of $f$ accordingly, leaving the optimum $x^{*}$ unchanged:
$f_{\alpha Q}(x)=\sum _{i\leq j}(\alpha Q_{ij})x_{i}x_{j}=\alpha \sum _{i\leq j}Q_{ij}x_{i}x_{j}=\alpha f_{Q}(x)$
• Flipping the sign of all coefficients flips the sign of $f$'s output, making $x^{*}$ the binary vector that maximizes $f_{-Q}$:
$f_{-Q}(x)=\sum _{i\leq j}(-Q_{ij})x_{i}x_{j}=-\sum _{i\leq j}Q_{ij}x_{i}x_{j}=-f_{Q}(x)$
• If all coefficients are positive, the optimum is trivially $x^{*}=(0,\dots ,0)$. Similarly, if all coefficients are negative, the optimum is $x^{*}=(1,\dots ,1)$.
• If $\forall i\neq j:~Q_{ij}=0$, i.e., the bits can be optimized independently, then the corresponding QUBO problem is solvable in ${\mathcal {O}}(n)$: the optimal assignment is simply $x_{i}^{*}=1$ if $Q_{ii}<0$ and $x_{i}^{*}=0$ otherwise (see the sketch below).
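The last property translates directly into a linear-time procedure. The following sketch (NumPy, illustrative values) sets each bit independently from the sign of its diagonal entry:

```python
import numpy as np

def solve_diagonal_qubo(Q):
    """If all off-diagonal entries are zero, each bit is optimized independently:
    set x_i = 1 exactly when Q_ii < 0. Runs in O(n) time."""
    return (np.diag(Q) < 0).astype(int)

Q = np.diag([1.5, -0.5, 0.0, -2.0])
print(solve_diagonal_qubo(Q))  # [0 1 0 1]
```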

## Applications

QUBO is a structurally simple, yet computationally hard optimization problem. It can be used to encode a wide range of optimization problems from various scientific areas.

### Cluster Analysis

Figure: Binary clustering with QUBO. Visual representation of a clustering problem with 20 points: circles of the same color belong to the same cluster, and each circle can be understood as a binary variable in the corresponding QUBO problem.

As an illustrative example of how QUBO can be used to encode an optimization problem, we consider the problem of cluster analysis. Here, we are given a set of 20 points in 2D space, described by a matrix $D\in \mathbb {R} ^{20\times 2}$, where each row contains two Cartesian coordinates. We want to assign each point to one of two classes or clusters, such that points in the same cluster are similar to each other. For two clusters, we can assign a binary variable $x_{i}\in \mathbb {B}$ to the point corresponding to the $i$ -th row in $D$, indicating whether it belongs to the first ($x_{i}=0$ ) or second cluster ($x_{i}=1$ ). Consequently, we have 20 binary variables, which form a binary vector $x\in \mathbb {B} ^{20}$ that corresponds to a cluster assignment of all points (see figure).

One way to derive a clustering is to consider the pairwise distances between points. Given a cluster assignment $x$, the expression $x_{i}x_{j}+(1-x_{i})(1-x_{j})$ evaluates to 1 if points $i$ and $j$ are in the same cluster, while $x_{i}(1-x_{j})+(1-x_{i})x_{j}$ evaluates to 1 if they are in different clusters. Let $d_{ij}\geq 0$ denote the Euclidean distance between points $i$ and $j$. To define a cost function to minimize, we add the positive distance $d_{ij}$ when points $i$ and $j$ are in the same cluster, and subtract it when they are in different clusters. This way, an optimal solution tends to place points which are far apart into different clusters, and points that are close into the same cluster. The cost function thus comes down to

{\begin{aligned}f(x)&=\sum _{i<j}d_{ij}\left(x_{i}x_{j}+(1-x_{i})(1-x_{j})\right)-d_{ij}\left(x_{i}(1-x_{j})+(1-x_{i})x_{j}\right)\\&=\sum _{i<j}d_{ij}\left(4x_{i}x_{j}-2x_{i}-2x_{j}\right)+\underbrace {\sum _{i<j}d_{ij}} _{\text{const.}}\end{aligned}}

From the second line, the QUBO parameters can easily be found by re-arranging (the constant term does not depend on $x$ and can be dropped):

{\begin{aligned}Q_{ij}&={\begin{cases}4d_{ij}&{\text{if }}i\neq j\\-2\left(\sum \limits _{k=1}^{i-1}d_{ki}+\sum \limits _{\ell =i+1}^{n}d_{i\ell }\right)&{\text{if }}i=j\end{cases}}\end{aligned}}

Using these parameters, the optimal QUBO solution will correspond to an optimal clustering w.r.t. the above cost function.
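The construction can be sketched in a few lines of NumPy. The example below uses randomly generated points (rather than the 20 points of the figure) and a reduced problem size so that brute-force enumeration stays fast; it builds $Q$ from the pairwise distances according to the formula above and recovers a cluster assignment:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 10                                         # kept small so brute force stays quick
D = np.vstack([rng.normal(0, 1, (n // 2, 2)),  # two loose groups of 2D points
               rng.normal(5, 1, (n - n // 2, 2))])
d = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)   # pairwise distances d_ij

Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = -2.0 * d[i].sum()                # diagonal: -2 * (sum of distances involving i)
    for j in range(i + 1, n):
        Q[i, j] = 4.0 * d[i, j]                # off-diagonal: 4 * d_ij for each pair i < j

# Brute-force minimization; for larger n one would use a heuristic or an annealer instead.
x_star = min((np.array(b) for b in itertools.product([0, 1], repeat=n)),
             key=lambda x: x @ Q @ x)
print(x_star)                                  # 0/1 cluster label per point
```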

## Connection to Ising models

QUBO is very closely related and computationally equivalent to the Ising model, whose Hamiltonian function is defined as

$H(\sigma )=-\sum _{\langle i~j\rangle }J_{ij}\sigma _{i}\sigma _{j}-\mu \sum _{j}h_{j}\sigma _{j}$

with real-valued parameters $h_{j},J_{ij},\mu$ for all $i,j$. The spin variables $\sigma _{j}$ are binary with values from $\lbrace -1,+1\rbrace$ instead of $\mathbb {B}$. Moreover, in the Ising model the variables are typically arranged in a lattice where only neighboring pairs of variables $\langle i~j\rangle$ can have non-zero coefficients. Substituting $\sigma _{j}=2x_{j}-1$ yields an equivalent QUBO problem:

{\begin{aligned}f(x)&=\sum _{\langle i~j\rangle }-J_{ij}(2x_{i}-1)(2x_{j}-1)-\sum _{j}\mu h_{j}(2x_{j}-1)\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j}+2J_{ij}x_{i}+2J_{ij}x_{j}-J_{ij})-\sum _{j}(2\mu h_{j}x_{j}-\mu h_{j})\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{\langle i~j\rangle }2J_{ij}x_{i}+\sum _{\langle i~j\rangle }2J_{ij}x_{j}-\sum _{j}2\mu h_{j}x_{j}-\sum _{\langle i~j\rangle }J_{ij}+\sum _{j}\mu h_{j}\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{\langle j~i\rangle }2J_{ji}x_{j}+\sum _{\langle i~j\rangle }2J_{ij}x_{j}-\sum _{j}2\mu h_{j}x_{j}-\sum _{\langle i~j\rangle }J_{ij}+\sum _{j}\mu h_{j}&&{\text{using }}\sum _{\langle i~j\rangle }=\sum _{\langle j~i\rangle }\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{j}\sum _{\langle k=j~i\rangle }2J_{ki}x_{j}+\sum _{j}\sum _{\langle i~k=j\rangle }2J_{ik}x_{j}-\sum _{j}2\mu h_{j}x_{j}-\sum _{\langle i~j\rangle }J_{ij}+\sum _{j}\mu h_{j}\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{j}\left(\sum _{\langle i~k=j\rangle }(2J_{ki}+2J_{ik})-2\mu h_{j}\right)x_{j}-\sum _{\langle i~j\rangle }J_{ij}+\sum _{j}\mu h_{j}&&{\text{using }}\sum _{\langle k=j~i\rangle }=\sum _{\langle i~k=j\rangle }\\&=\sum _{i\leq j}Q_{ij}x_{i}x_{j}+C&&{\text{using }}x_{j}=x_{j}x_{j}\end{aligned}}

where

{\begin{aligned}Q_{ij}&={\begin{cases}-4J_{ij}&{\text{if }}i\neq j\\\sum _{\langle i~k=j\rangle }(2J_{ki}+2J_{ik})-2\mu h_{j}&{\text{if }}i=j\end{cases}}\\C&=-\sum _{\langle i~j\rangle }J_{ij}+\sum _{j}\mu h_{j}\end{aligned}}

Here the linear terms have been absorbed into the diagonal of $Q$, using the fact that $x_{j}=x_{j}x_{j}$ for any binary variable.

As the constant $C$ does not change the position of the optimum $x^{*}$ , it can be neglected during optimization and is only important for recovering the original Hamiltonian function value.
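The conversion can be written compactly in code. The sketch below (NumPy; the couplings $J_{ij}$ are stored as a dictionary over edges with $i<j$, an assumed convention) builds $Q$ and $C$ from given Ising parameters and checks on a small random instance that $H(\sigma )=f_{Q}(x)+C$ under $\sigma _{j}=2x_{j}-1$:

```python
import itertools
import numpy as np

def ising_to_qubo(J, h, mu=1.0):
    """Convert Ising parameters to an upper-triangular QUBO matrix Q and offset C,
    so that H(sigma) = f_Q(x) + C with sigma = 2x - 1."""
    n = len(h)
    Q = np.zeros((n, n))
    C = 0.0
    for (i, j), Jij in J.items():       # J: dict over edges (i, j) with i < j
        Q[i, j] += -4.0 * Jij           # quadratic term
        Q[i, i] += 2.0 * Jij            # linear terms fold onto the diagonal
        Q[j, j] += 2.0 * Jij
        C += -Jij
    for j in range(n):
        Q[j, j] += -2.0 * mu * h[j]
        C += mu * h[j]
    return Q, C

def ising_energy(J, h, mu, sigma):
    """Evaluate the Ising Hamiltonian H(sigma) for spins sigma in {-1, +1}."""
    return -sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items()) \
           - mu * sum(hj * sj for hj, sj in zip(h, sigma))

# Sanity check on a small random instance (complete graph for simplicity)
rng = np.random.default_rng(0)
n = 4
J = {(i, j): rng.normal() for i in range(n) for j in range(i + 1, n)}
h, mu = rng.normal(size=n), 1.0
Q, C = ising_to_qubo(J, h, mu)
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    sigma = 2 * x - 1
    assert np.isclose(x @ Q @ x + C, ising_energy(J, h, mu, sigma))
```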