# Non-negative least squares

In mathematical optimization, the problem of non-negative least squares (NNLS) is a constrained version of the least squares problem where the coefficients are not allowed to become negative. That is, given a matrix A and a (column) vector of response variables y, the goal is to find[1]

$\arg\min_\mathbf{x} \|\mathbf{Ax} - \mathbf{y}\|_2$ subject to x ≥ 0.

Here, x ≥ 0 means that each component of the vector x should be non-negative and ‖·‖₂ denotes the Euclidean norm.
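For instance, the problem can be solved numerically with SciPy's `optimize.nnls` routine (a minimal sketch; the matrix and vector below are arbitrary illustration data):

```python
import numpy as np
from scipy.optimize import nnls

# Illustration data: an overdetermined 3x2 system.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
y = np.array([1.0, -1.0, 1.0])

x, rnorm = nnls(A, y)  # x minimizes ||Ax - y||_2 subject to x >= 0
# Every component of x is non-negative, unlike the unconstrained
# least-squares solution, which may contain negative entries.
```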

## Quadratic programming version

The NNLS problem is equivalent to a quadratic programming problem

$\arg\min_\mathbf{x \ge 0} \frac{1}{2} \mathbf{x}^\mathsf{T} \mathbf{Q}\mathbf{x} + \mathbf{c}^\mathsf{T} \mathbf{x}$,

where Q = AᵀA and c = −Aᵀy. This problem is convex, as Q is positive semidefinite and the non-negativity constraints form a convex feasible set.[2]
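The equivalence follows from expanding the squared residual: ½‖Ax − y‖² = ½xᵀQx + cᵀx + ½‖y‖², so the two objectives differ only by a constant and share the same minimizers. A quick numerical check with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
y = rng.standard_normal(5)

Q = A.T @ A        # positive semidefinite
c = -A.T @ y

x = rng.random(3)  # an arbitrary non-negative point
qp_obj = 0.5 * x @ Q @ x + c @ x
ls_obj = 0.5 * np.sum((A @ x - y) ** 2)

# The objectives differ by the constant 0.5*||y||^2 for every x,
# so minimizing one is the same as minimizing the other.
assert np.isclose(qp_obj, ls_obj - 0.5 * y @ y)
```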

## Algorithms

The first widely used algorithm for solving this problem is an active set method published by Lawson and Hanson in their 1974 book Solving Least Squares Problems.[3]:291 In pseudocode, this algorithm looks as follows:[1][4]

```
# Inputs
A : matrix of shape (m, n)
y : vector of length m
tol : tolerance for the stopping criterion

# Initialization
P ← ∅
R ← {1, ..., n}
x ← zero-vector of length n
w ← Aᵀ(y − Ax)

# Main loop
while R ≠ ∅ and max(wᴿ) > tol:
    j ← index in R of the largest value in wᴿ
    move j from R to P
    # Aᴾ is A restricted to the columns indexed by P
    sᴾ ← ((Aᴾ)ᵀ Aᴾ)⁻¹ (Aᴾ)ᵀ y
    sᴿ ← 0
    while min(sᴾ) ≤ 0:
        α ← min(xᵢ / (xᵢ − sᵢ) for i in P where sᵢ ≤ 0)
        x ← x + α(s − x)
        move from P to R every index i in P with xᵢ = 0
        sᴾ ← ((Aᴾ)ᵀ Aᴾ)⁻¹ (Aᴾ)ᵀ y
        sᴿ ← 0
    x ← s
    w ← Aᵀ(y − Ax)
```


This algorithm takes a finite number of steps to reach a solution and smoothly improves its candidate solution as it goes (so it can find good approximate solutions when cut off at a reasonable number of iterations), but is very slow in practice, owing largely to the computation of the pseudoinverse ((Aᴾ)ᵀ Aᴾ)⁻¹.[1] Variants of this algorithm are available in MATLAB as the routine lsqnonneg[1] and in SciPy as optimize.nnls.[5]
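The pseudocode above can be sketched in Python. This is an illustrative implementation, not a tuned solver: it uses NumPy's least-squares routine for the subproblems instead of forming the pseudoinverse explicitly, and variable names follow the pseudocode.

```python
import numpy as np

def lawson_hanson(A, y, tol=1e-10, max_iter=500):
    """Illustrative active-set NNLS sketch following the pseudocode above."""
    m, n = A.shape
    P = np.zeros(n, dtype=bool)     # passive set (True = index in P)
    x = np.zeros(n)                 # R is the complement ~P
    w = A.T @ (y - A @ x)           # w is the negative gradient of 0.5*||Ax-y||^2
    for _ in range(max_iter):
        if P.all() or w[~P].max() <= tol:
            break
        j = np.argmax(np.where(~P, w, -np.inf))
        P[j] = True                 # move j from R to P
        s = np.zeros(n)
        s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
        while P.any() and s[P].min() <= 0:
            mask = P & (s <= 0)
            alpha = np.min(x[mask] / (x[mask] - s[mask]))
            x = x + alpha * (s - x)
            P[x <= tol] = False     # indices whose value hit zero return to R
            s = np.zeros(n)
            if P.any():
                s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
        x = s.copy()
        w = A.T @ (y - A @ x)
    return x
```

On small well-conditioned problems this sketch agrees with SciPy's `optimize.nnls`, which implements the same active-set strategy.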

Many improved algorithms have been suggested since 1974.[1] Fast NNLS (FNNLS) is an optimized version of the Lawson–Hanson algorithm.[4] A sequential, coordinate-wise algorithm based on the quadratic programming problem above was published in 2005.[2] Lawson and Hanson themselves generalized the algorithm to handle bounded-variable least squares (BVLS) problems, with upper and lower bounds α ≤ x ≤ β.[3]:291
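For the bounded-variable case, SciPy's `optimize.lsq_linear` accepts per-component lower and upper bounds; its `method='bvls'` option selects a BVLS-style active-set solver. A sketch with arbitrary illustration data:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
y = rng.standard_normal(6)

lower = np.zeros(3)      # the bounds α and β from the text
upper = np.full(3, 0.5)

res = lsq_linear(A, y, bounds=(lower, upper), method='bvls')
# res.x minimizes ||Ax - y||_2 subject to lower <= x <= upper.
```

NNLS itself is recovered as the special case `bounds=(0, np.inf)`.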

## Applications

Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC[4] and non-negative matrix/tensor factorization.[6] The latter can be considered a generalization of NNLS.[1]
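To illustrate NNLS as a subproblem, non-negative matrix factorization can be sketched as alternating NNLS solves, one per column or row of each factor. This is an illustrative alternating-least-squares scheme, not a production NMF implementation:

```python
import numpy as np
from scipy.optimize import nnls

def nmf_by_nnls(V, r, n_iter=30, seed=0):
    """Factor V ~ W @ H with W, H >= 0 via alternating NNLS subproblems."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # With W fixed, each column of H is an independent NNLS problem.
        for j in range(n):
            H[:, j], _ = nnls(W, V[:, j])
        # With H fixed, each row of W is an NNLS problem against H transposed.
        for i in range(m):
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H
```

Each half-step solves its NNLS subproblems exactly, so the reconstruction error ‖V − WH‖ never increases from one sweep to the next.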

## References

1. Chen, Donghui; Plemmons, Robert J. (2009). "Nonnegativity constraints in numerical analysis". Symposium on the Birth of Numerical Analysis.
2. Franc, V.; Hlaváč, V.; Navara, M. (2005). "Sequential Coordinate-Wise Algorithm for the Non-negative Least Squares Problem". Computer Analysis of Images and Patterns. Lecture Notes in Computer Science 3691. p. 407. doi:10.1007/11556121_50. ISBN 978-3-540-28969-2.
3. Lawson, Charles L.; Hanson, Richard J. (1995). Solving Least Squares Problems. SIAM.
4. Bro, R.; De Jong, S. (1997). "A fast non-negativity-constrained least squares algorithm". Journal of Chemometrics 11 (5): 393. doi:10.1002/(SICI)1099-128X(199709/10)11:5<393::AID-CEM483>3.0.CO;2-L.
5. "scipy.optimize.nnls". SciPy v0.13.0 Reference Guide. Retrieved 25 January 2014.
6. Lin, Chih-Jen (2007). "Projected Gradient Methods for Nonnegative Matrix Factorization". Neural Computation 19 (10): 2756–2779. doi:10.1162/neco.2007.19.10.2756. PMID 17716011.