Total variation denoising
In signal processing, total variation denoising, also known as total variation regularization, is a process, most often used in digital image processing, with applications in noise removal. It is based on the principle that signals with excessive and possibly spurious detail have high total variation: that is, the integral of the absolute gradient of the signal is high. According to this principle, reducing the total variation of the signal, subject to it being a close match to the original signal, removes unwanted detail whilst preserving important details such as edges. The concept was pioneered by Rudin et al. in 1992.
This noise removal technique has advantages over simple techniques such as linear smoothing or median filtering which reduce noise but at the same time smooth away edges to a greater or lesser degree. By contrast, total variation denoising is remarkably effective at simultaneously preserving edges whilst smoothing away noise in flat regions, even at low signal-to-noise ratios.
Mathematical exposition for 1D digital signals
For a digital signal $y_n$, we can, for example, define the total variation as:

$$V(y) = \sum_n |y_{n+1} - y_n|.$$
Given an input signal $x_n$, the goal of total variation denoising is to find an approximation, call it $y_n$, that has smaller total variation than $x_n$ but is "close" to $x_n$. One measure of closeness is the sum of square errors:

$$E(x, y) = \frac{1}{2} \sum_n (x_n - y_n)^2.$$
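As a concrete illustration, the two quantities above can be computed directly with NumPy (a minimal sketch; the function names are our own, not standard):

```python
import numpy as np

def total_variation(y):
    """Sum of absolute differences between neighbouring samples."""
    return np.sum(np.abs(np.diff(y)))

def sum_square_error(x, y):
    """Half the sum of squared errors between signals x and y."""
    return 0.5 * np.sum((x - y) ** 2)

# A noisy step signal has much higher total variation than the clean step,
# even though the two are close in the sum-of-squares sense.
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(100)
```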
So the total variation denoising problem amounts to minimizing the following discrete functional over the signal $y_n$:

$$E(x, y) + \lambda V(y).$$
By differentiating this functional with respect to $y_n$, we can derive a corresponding Euler–Lagrange equation, which can be numerically integrated with the original signal $x_n$ as the initial condition. This was the original approach. Alternatively, since this is a convex functional, techniques from convex optimization can be used to minimize it and find the solution $y_n$.
The regularization parameter $\lambda$ plays a critical role in the denoising process. When $\lambda = 0$, there is no denoising and the result is identical to the input signal. As $\lambda \to \infty$, the total variation term plays an increasingly strong role, which forces the result to have smaller total variation, at the expense of being less like the input (noisy) signal. Thus, the choice of regularization parameter is critical to achieving just the right amount of noise removal. See Regularization (mathematics) for details.
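One way to carry out the convex minimization is sketched below: a projected-gradient iteration on the dual of the 1D problem, in the spirit of Chambolle's method. The function name, step size, and iteration count are illustrative choices, not part of the original formulation:

```python
import numpy as np

def tv_denoise_1d(x, lam, n_iter=2000):
    """Approximately minimise 0.5*sum((y - x)**2) + lam*sum(|y[n+1] - y[n]|)
    by projected gradient ascent on the dual problem."""
    x = np.asarray(x, dtype=float)
    u = np.zeros(len(x) - 1)   # dual variable, one entry per first difference
    tau = 0.25                 # safe step: 1 / ||D D^T||, with (Dy)_n = y[n+1]-y[n]
    y = x.copy()
    for _ in range(n_iter):
        # primal estimate y = x - D^T u
        dtu = np.zeros_like(x)
        dtu[:-1] -= u
        dtu[1:] += u
        y = x - dtu
        # ascent step on the dual objective, then project onto [-lam, lam]
        u = np.clip(u + tau * np.diff(y), -lam, lam)
    return y
```

With `lam = 0` the projection forces the dual variable to zero, so the output equals the input, matching the discussion above; larger `lam` yields an output of smaller total variation.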
2D digital signals
We now consider 2D signals $y$, such as images. The total variation norm proposed by the 1992 paper is

$$V(y) = \sum_{i,j} \sqrt{|y_{i+1,j} - y_{i,j}|^2 + |y_{i,j+1} - y_{i,j}|^2}.$$
The standard total variation denoising problem is still of the form

$$\min_y \left[ E(x, y) + \lambda V(y) \right],$$

where $E$ is now the 2D sum of square errors.
Due in part to much research in compressed sensing in the mid-2000s, there are many algorithms, such as the split-Bregman method, that solve variants of this problem.
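As a rough illustration of the 2D problem (not one of the specialised algorithms just mentioned), one can run plain gradient descent on a smoothed version of the functional, replacing the non-differentiable TV term with $\sqrt{|\nabla y|^2 + \epsilon}$. Everything here, including the smoothing parameter and the step-size rule, is an illustrative choice:

```python
import numpy as np

def tv_denoise_2d(x, lam, n_iter=300, eps=1e-2):
    """Gradient descent on the smoothed functional
    0.5*||y - x||^2 + lam * sum sqrt(dv**2 + dh**2 + eps)."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    # conservative step size from a Lipschitz bound on the smoothed gradient
    step = 1.0 / (1.0 + 8.0 * lam / np.sqrt(eps))
    for _ in range(n_iter):
        # forward differences with replicated boundary (last row/column -> 0)
        dv = np.diff(y, axis=0, append=y[-1:, :])
        dh = np.diff(y, axis=1, append=y[:, -1:])
        mag = np.sqrt(dv**2 + dh**2 + eps)
        pv, ph = dv / mag, dh / mag
        # negative divergence of (pv, ph): the gradient of the smoothed TV term
        g = -pv - ph
        g[1:, :] += pv[:-1, :]
        g[:, 1:] += ph[:, :-1]
        y -= step * ((y - x) + lam * g)
    return y
```

This sketch trades accuracy for simplicity; the dedicated solvers above handle the non-smooth term exactly and converge much faster.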
See also

- Total variation
- Anisotropic diffusion
- Signal processing
- Digital image processing
- Noise reduction
- Non-local means
External links

- TVDIP: Full-featured Matlab 1D total variation denoising implementation.
- Demonstration of the original Rudin, Osher, Fatemi approach and some advances.
References

- Rudin, L. I.; Osher, S.; Fatemi, E. (1992). "Nonlinear total variation based noise removal algorithms". Physica D 60: 259–268. doi:10.1016/0167-2789(92)90242-f.
- Strong, D.; Chan, T. (2003). "Edge-preserving and scale-dependent properties of total variation regularization". Inverse Problems 19: S165–S187. doi:10.1088/0266-5611/19/6/059.
- Little, M. A.; Jones, Nick S. (2010). "Sparse Bayesian Step-Filtering for High-Throughput Analysis of Molecular Machine Dynamics". ICASSP 2010 Proceedings. 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.
- Chambolle, A. (2004). "An algorithm for total variation minimization and applications". Journal of Mathematical Imaging and Vision 20: 89–97. doi:10.1023/b:jmiv.0000011321.19549.88. CiteSeerX: 10.1.1.160.5226.