# Landweber iteration

The Landweber iteration or Landweber algorithm is a method for solving ill-posed linear inverse problems; it has been extended to non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber,[1] and it can now be viewed as a special case of many other more general methods.[2]

## Basic algorithm

The original Landweber algorithm [1] attempts to recover a signal x from (noisy) measurements y. The linear version assumes that ${\displaystyle y=Ax}$ for a linear operator A. When the problem is in finite dimensions, A is just a matrix.

When A is nonsingular, an explicit solution is ${\displaystyle x=A^{-1}y}$. However, if A is ill-conditioned, the explicit solution is a poor choice since it is sensitive to any noise in the data y; if A is singular, it does not exist at all. The Landweber algorithm is an attempt to regularize the problem, and is one of the alternatives to Tikhonov regularization. We may view the Landweber algorithm as solving:

${\displaystyle \min _{x}\|Ax-y\|_{2}^{2}/2}$

using an iterative method. The algorithm is given by the update

${\displaystyle x_{k+1}=x_{k}-\omega A^{*}(Ax_{k}-y).}$

where the relaxation factor ${\displaystyle \omega }$ satisfies ${\displaystyle 0<\omega <2/\sigma _{1}^{2}}$. Here ${\displaystyle \sigma _{1}}$ is the largest singular value of ${\displaystyle A}$. If we write ${\displaystyle f(x)=\|Ax-y\|_{2}^{2}/2}$, then the update can be written in terms of the gradient

${\displaystyle x_{k+1}=x_{k}-\omega \nabla f(x_{k})}$

and hence the algorithm is a special case of gradient descent.
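The update above can be sketched directly in code. The following is a minimal illustration (the helper name `landweber`, the default stepsize ${\displaystyle \omega =1/\sigma _{1}^{2}}$, and the toy matrix are illustrative choices, not part of the original text):

```python
import numpy as np

def landweber(A, y, omega=None, n_iter=100):
    """Landweber iteration x_{k+1} = x_k - omega * A.T @ (A @ x_k - y).

    If omega is not given, use 1 / sigma_1^2, which satisfies the
    convergence condition 0 < omega < 2 / sigma_1^2.
    """
    if omega is None:
        sigma1 = np.linalg.norm(A, 2)  # largest singular value of A
        omega = 1.0 / sigma1**2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - omega * A.T @ (A @ x - y)  # gradient step on ||Ax - y||^2 / 2
    return x

# Small well-conditioned example: iterates approach the solution of A x = y.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([4.0, 3.0])
x = landweber(A, y, n_iter=500)
```

For a well-conditioned A this simply converges to the least-squares solution; the regularizing behaviour only matters in the ill-conditioned case discussed next.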

For ill-posed problems, the iterative method needs to be stopped at a suitable iteration index, because it semi-converges: the iterates approach a regularized solution during the first iterations but become unstable as the iteration continues. The reciprocal of the iteration index ${\displaystyle 1/k}$ acts as a regularization parameter. A suitable stopping index is found when the mismatch ${\displaystyle \|Ax_{k}-y\|_{2}^{2}}$ approaches the noise level (the discrepancy principle).
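This stopping rule can be sketched as follows. The safety factor `tau` and the diagonal toy problem are illustrative assumptions, not part of the original text:

```python
import numpy as np

def landweber_discrepancy(A, y, noise_level, omega, tau=1.1, max_iter=10000):
    """Landweber updates stopped by the discrepancy principle:
    return at the first k with ||A x_k - y||_2 <= tau * noise_level.
    The safety factor tau > 1 is an illustrative choice."""
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = A @ x - y
        if np.linalg.norm(r) <= tau * noise_level:
            return x, k
        x = x - omega * A.T @ r
    return x, max_iter

# Ill-conditioned toy problem: the exact solution A^{-1} y amplifies the
# noise in the second component; early stopping returns a stable estimate.
A = np.diag([1.0, 1e-3])
noise = np.array([0.01, -0.01])
y = A @ np.array([1.0, 1.0]) + noise
x, k = landweber_discrepancy(A, y, np.linalg.norm(noise), omega=1.0)
```

Here the exact solution would set the second component to roughly ${\displaystyle -9}$, driven entirely by noise; the early-stopped iterate keeps it near zero instead, which is the regularizing effect of the stopping index.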

Using the Landweber iteration as a regularization algorithm has been discussed in the literature.[3][4]

## Nonlinear extension

In general, the updates ${\displaystyle x_{k+1}=x_{k}-\tau \nabla f(x_{k})}$ generate a sequence ${\displaystyle f(x_{k})}$ that converges to the minimum of f whenever f is convex, its gradient is Lipschitz continuous with constant ${\displaystyle L}$, and the stepsize satisfies ${\displaystyle 0<\tau <2/L}$. For the linear least-squares objective ${\displaystyle f(x)=\|Ax-y\|_{2}^{2}/2}$, the constant is ${\displaystyle L=\|A\|^{2}=\sigma _{1}^{2}}$, where ${\displaystyle \|\cdot \|}$ is the spectral norm, recovering the condition above.
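A minimal sketch of the nonlinear iteration, using an illustrative smooth convex objective of my own choosing (not from the original text) whose gradient is 1-Lipschitz:

```python
import numpy as np

# Illustrative objective: f(x) = sum(log(cosh(x - c))), minimized at x = c.
# Its gradient tanh(x - c) is Lipschitz continuous with constant L = 1.
c = np.array([0.5, -1.0, 2.0])

def grad_f(x):
    return np.tanh(x - c)

tau = 1.0  # satisfies 0 < tau < 2/L with L = 1
x = np.zeros_like(c)
for _ in range(200):
    x = x - tau * grad_f(x)  # nonlinear Landweber / gradient descent step
# the iterates approach the minimizer x = c
```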

Since this is a special case of gradient descent, there is currently not much benefit to analyzing it on its own as the nonlinear Landweber iteration, but such analysis was performed historically by many communities unaware of unifying frameworks.

The nonlinear Landweber problem has been studied in many papers across many communities; see, for example, the convergence analysis of Hanke, Neubauer and Scherzer.[5]

## Extension to constrained problems

If f is a convex function and C is a convex set, then the problem

${\displaystyle \min _{x\in C}f(x)}$

can be solved by the constrained, nonlinear Landweber iteration, given by:

${\displaystyle x_{k+1}={\mathcal {P}}_{C}(x_{k}-\tau \nabla f(x_{k}))}$

where ${\displaystyle {\mathcal {P}}_{C}}$ is the projection onto the set C. Convergence is guaranteed when ${\displaystyle 0<\tau <2/\|A\|^{2}}$.[6] This is again a special case of projected gradient descent (which is a special case of the forward–backward algorithm), as discussed in [2].
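For the least-squares objective the constrained iteration is a one-line change to the basic update. A minimal sketch, where the box constraint ${\displaystyle C=\{x:lo\leq x\leq hi\}}$ (with its componentwise projection `np.clip`) is an illustrative choice of convex set, not part of the original text:

```python
import numpy as np

def projected_landweber(A, y, lo, hi, tau=None, n_iter=200):
    """Constrained Landweber: x_{k+1} = P_C(x_k - tau * A.T @ (A @ x_k - y)),
    where C is the box {x : lo <= x <= hi} and the projection P_C is the
    componentwise clipping np.clip (an illustrative choice of C)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2  # within (0, 2/||A||^2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.clip(x - tau * A.T @ (A @ x - y), lo, hi)  # step, then project
    return x

# The unconstrained minimizer is [2, -0.5]; constrained to the box [0, 1]^2
# the iterates settle on its projection-feasible optimum [1, 0].
A = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, -0.5])
x = projected_landweber(A, y, lo=0.0, hi=1.0)
```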

## Applications

Since the method has been around since the 1950s, it has been adopted and rediscovered by many scientific communities, especially those studying ill-posed problems. In X-ray computed tomography it is called SIRT (simultaneous iterative reconstruction technique). It has also been used in the computer vision community[7] and the signal restoration community.[8] It is also used in image processing, since many image problems, such as deconvolution, are ill-posed. Variants of this method have also been used in sparse approximation and compressed sensing settings.[9]

## References

1. Landweber, L. (1951): "An iteration formula for Fredholm integral equations of the first kind". Amer. J. Math. 73, 615–624.
2. Combettes, P. L., Pesquet, J.-C. (2011): "Proximal splitting methods in signal processing", in: Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, Editors), pp. 185–212. Springer, New York.
3. Louis, A.K. (1989): Inverse und schlecht gestellte Probleme [Inverse and Ill-Posed Problems]. Stuttgart, Teubner.
4. Vainikko, G.M., Veretennikov, A.Y. (1986): Iteration Procedures in Ill-Posed Problems. Moscow, Nauka (in Russian).
5. Hanke, M., Neubauer, A., Scherzer, O. (1995): "A convergence analysis of the Landweber iteration for nonlinear ill-posed problems". Numerische Mathematik 72(1), 21–37. doi:10.1007/s002110050158
6. Eicke, B. (1992): "Iteration methods for convexly constrained ill-posed problems in Hilbert space". Numer. Funct. Anal. Optim. 13, 413–429.
7. Johansson, B., Elfving, T., Kozlov, V., Censor, Y., Forssén, P.E., Granlund, G. (2006): "The application of an oblique-projected Landweber method to a model of supervised learning". Math. Comput. Modelling 43, 892–909.
8. Trussell, H.J., Civanlar, M.R. (1985): "The Landweber iteration and projection onto convex sets". IEEE Trans. Acoust., Speech, Signal Process. 33, 1632–1634.
9. Kyrillidis, A., Cevher, V. (2011): "Recipes on hard thresholding methods". pp. 353–356. doi:10.1109/CAMSAP.2011.6136024. ISBN 978-1-4577-2105-2.