# Matching pursuit

*Figure: A signal and its wavelet representation. Each pixel in the heat map (top) represents an atom (a wavelet centered in time according to the horizontal position and with frequency corresponding to height). The color of the pixel gives the inner product of the corresponding wavelet atom with the signal (bottom). Matching pursuit should represent the signal by just a few atoms, such as the three at the centers of the clearly visible ellipses.*

Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary ${\displaystyle D}$. The basic idea is to approximately represent a signal ${\displaystyle f}$ from Hilbert space ${\displaystyle H}$ as a weighted sum of finitely many functions ${\displaystyle g_{\gamma _{n}}}$ (called atoms) taken from ${\displaystyle D}$. An approximation with ${\displaystyle N}$ atoms has the form

${\displaystyle f(t)\approx {\hat {f}}_{N}(t):=\sum _{n=1}^{N}a_{n}g_{\gamma _{n}}(t)}$

where ${\displaystyle g_{\gamma _{n}}}$ is the ${\displaystyle \gamma _{n}}$th column of the matrix ${\displaystyle D}$ and ${\displaystyle a_{n}}$ is the scalar weighting factor (amplitude) of the atom ${\displaystyle g_{\gamma _{n}}}$. Normally, not every atom in ${\displaystyle D}$ is used in this sum. Instead, matching pursuit chooses the atoms one at a time so as to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the largest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., until the norm of the residual is small. The residual after computing ${\displaystyle \gamma _{N}}$ and ${\displaystyle a_{N}}$ is denoted by

${\displaystyle R_{N+1}=f-{\hat {f}}_{N}}$.

If ${\displaystyle R_{n}}$ converges quickly to zero, then only a few atoms are needed to get a good approximation to ${\displaystyle f}$. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is

${\displaystyle \min _{x}\|f-Dx\|_{2}^{2}\ {\text{ subject to }}\ \|x\|_{0}\leq N,}$

where ${\displaystyle \|x\|_{0}}$ is the ${\displaystyle L_{0}}$ pseudo-norm (i.e. the number of nonzero elements of ${\displaystyle x}$). In the previous notation, the nonzero entries of ${\displaystyle x}$ are ${\displaystyle x_{\gamma _{n}}=a_{n}}$. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used.
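To make the combinatorial nature of this problem concrete, the following sketch (all names and dimensions are illustrative, not from any particular library) solves the ${\displaystyle L_{0}}$-constrained problem exactly by exhaustive search over supports on a toy dictionary. The search over all ${\displaystyle {\binom {|D|}{N}}}$ supports is precisely what makes the exact problem intractable at realistic scales.

```python
import numpy as np
from itertools import combinations

# Exact solution of  min ||f - Dx||^2  s.t.  ||x||_0 <= N
# by brute force; feasible only for tiny problems (NP-hard in general).
rng = np.random.default_rng(1)
D = rng.standard_normal((6, 12))
D /= np.linalg.norm(D, axis=0)          # normalize the atoms
f = 1.5 * D[:, 2] - 0.8 * D[:, 9]       # a signal that is exactly 2-sparse in D
N = 2

best_err, best_support = np.inf, None
for support in combinations(range(D.shape[1]), N):
    # Least-squares fit of f restricted to this candidate support.
    x, *_ = np.linalg.lstsq(D[:, list(support)], f, rcond=None)
    err = np.linalg.norm(f - D[:, list(support)] @ x)
    if err < best_err:
        best_err, best_support = err, support
```

Since the signal was built from atoms 2 and 9, the exhaustive search recovers that support with essentially zero error; matching pursuit avoids this enumeration by choosing atoms greedily instead.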

For comparison, consider the Fourier representation of a signal: it can be described in the terms given above, with a dictionary built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of a signal and does not adapt to the analysed signal ${\displaystyle f}$. A highly redundant dictionary, by contrast, can be searched for the atoms (functions) that best match a signal ${\displaystyle f}$.

## The algorithm

If ${\displaystyle D}$ contains a large number of vectors, searching for the sparsest representation of ${\displaystyle f}$ is computationally intractable for practical applications. In 1993, Mallat and Zhang[1] proposed a greedy solution that they named "Matching Pursuit." For any signal ${\displaystyle f}$ and any dictionary ${\displaystyle D}$, the algorithm iteratively generates a sorted list of atom indices and weighting scalars, which form a sub-optimal solution to the sparse signal representation problem.

Algorithm Matching Pursuit
Input: signal ${\displaystyle f(t)}$, dictionary ${\displaystyle D}$ with normalized columns ${\displaystyle g_{i}}$.
Output: list of coefficients ${\displaystyle (a_{n})_{n=1}^{N}}$ and indices of the corresponding atoms ${\displaystyle (\gamma _{n})_{n=1}^{N}}$.
Initialization:
  ${\displaystyle R_{1}\,\leftarrow \,f(t)}$;
  ${\displaystyle n\,\leftarrow \,1}$;
Repeat:
  Find ${\displaystyle g_{\gamma _{n}}\in D}$ with maximum inner product ${\displaystyle |\langle R_{n},g_{\gamma _{n}}\rangle |}$;
  ${\displaystyle a_{n}\,\leftarrow \,\langle R_{n},g_{\gamma _{n}}\rangle }$;
  ${\displaystyle R_{n+1}\,\leftarrow \,R_{n}-a_{n}g_{\gamma _{n}}}$;
  ${\displaystyle n\,\leftarrow \,n+1}$;
Until stop condition (for example: ${\displaystyle \|R_{n}\|<\mathrm {threshold} }$)
return ${\displaystyle (a_{n})_{n=1}^{N}}$, ${\displaystyle (\gamma _{n})_{n=1}^{N}}$

• "←" denotes assignment. For instance, "largest ← item" means that the value of largest changes to the value of item.
• "return" terminates the algorithm and outputs the value that follows it.
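The pseudocode above translates almost line for line into NumPy. The sketch below is a minimal illustration (the function name and stopping parameters are our own, not from the original paper), assuming the dictionary is a matrix whose columns are normalized atoms.

```python
import numpy as np

def matching_pursuit(f, D, n_atoms=10, tol=1e-6):
    """Greedily approximate f as a sparse combination of the
    normalized columns of the dictionary D."""
    residual = np.asarray(f, dtype=float).copy()   # R_1 <- f
    indices, coeffs = [], []
    for _ in range(n_atoms):
        products = D.T @ residual                  # inner products with all atoms
        gamma = int(np.argmax(np.abs(products)))   # best-matching atom
        a = products[gamma]                        # its amplitude a_n
        residual = residual - a * D[:, gamma]      # R_{n+1} <- R_n - a_n g_gamma
        indices.append(gamma)
        coeffs.append(a)
        if np.linalg.norm(residual) < tol:         # stop condition
            break
    return indices, coeffs, residual
```

On an orthonormal dictionary such as the identity matrix, each step simply picks off the largest remaining coefficient, so a 2-sparse signal is recovered exactly in two iterations.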

In signal processing, the concept of matching pursuit is related to statistical projection pursuit, in which "interesting" projections are found; ones that deviate more from a normal distribution are considered to be more interesting.

## Properties

• The algorithm converges (i.e. ${\displaystyle R_{n}\to 0}$) for any ${\displaystyle f}$ that is in the space spanned by the dictionary.
• The error ${\displaystyle \|R_{n}\|}$ decreases monotonically.
• Because at each step the residual is orthogonal to the selected atom, the energy conservation equation is satisfied for each ${\displaystyle N}$:
${\displaystyle \|f\|^{2}=\|R_{N+1}\|^{2}+\sum _{n=1}^{N}{|a_{n}|^{2}}}$.
• If the vectors in ${\displaystyle D}$ are orthonormal, rather than redundant, MP is a form of principal component analysis.
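Because each update subtracts a component orthogonal to the new residual, the energy conservation identity can be checked numerically. The following sketch (dimensions and random seed are arbitrary) runs five greedy steps and compares the two sides of the equation.

```python
import numpy as np

# Numerical check of  ||f||^2 = ||R_{N+1}||^2 + sum_n |a_n|^2  after N = 5 steps.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)          # normalize the atoms
f = rng.standard_normal(8)

residual, energy = f.copy(), 0.0
for _ in range(5):
    p = D.T @ residual                  # inner products with all atoms
    g = int(np.argmax(np.abs(p)))       # best-matching atom
    a = p[g]
    residual = residual - a * D[:, g]
    energy += a ** 2                    # accumulate |a_n|^2

lhs = f @ f                             # ||f||^2
rhs = residual @ residual + energy      # ||R_{N+1}||^2 + sum |a_n|^2
```

The two sides agree to machine precision at every iteration count, since each step satisfies ${\displaystyle \|R_{n}\|^{2}=\|R_{n+1}\|^{2}+|a_{n}|^{2}}$ exactly.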

## Applications

Matching pursuit has been applied to signal, image[2] and video coding,[3][4] shape representation and recognition,[5] 3D object coding,[6] and interdisciplinary applications like structural health monitoring.[7] At low bit rates it has been shown to outperform DCT-based coding in both coding efficiency and image quality.[8] The main problem with matching pursuit is the computational complexity of the encoder: in the basic version of the algorithm, the large dictionary must be searched at each iteration. Improvements include the use of approximate dictionary representations and suboptimal ways of choosing the best match at each iteration (atom extraction).[9] The matching pursuit algorithm is also used in MP/SOFT, a method of simulating quantum dynamics.[10]

MP is also used in dictionary learning.[11][12] In this algorithm, atoms are learned from a database (in general, natural scenes such as usual images) and not chosen from generic dictionaries.

## Extensions

A popular extension of Matching Pursuit (MP) is its orthogonal version: Orthogonal Matching Pursuit[13][14] (OMP). The main difference from MP is that after every step, all the coefficients extracted so far are updated, by computing the orthogonal projection of the signal onto the subspace spanned by the set of atoms selected so far. This can lead to results better than standard MP, but requires more computation.
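A minimal sketch of this re-fitting step, assuming a NumPy dictionary with normalized columns (the function name and signature are our own): after each atom selection, all coefficients are recomputed by least squares on the selected atoms, so the residual stays orthogonal to the entire selected subspace, not just the latest atom.

```python
import numpy as np

def omp(f, D, n_atoms):
    """Orthogonal matching pursuit: after each atom selection,
    re-fit all coefficients by least squares on the chosen atoms."""
    f = np.asarray(f, dtype=float)
    residual = f.copy()
    support, coeffs = [], np.array([])
    for _ in range(n_atoms):
        gamma = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if gamma in support:                             # no new atom helps
            break
        support.append(gamma)
        # Orthogonal projection of f onto the span of the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], f, rcond=None)
        residual = f - D[:, support] @ coeffs
    return support, coeffs, residual
```

The extra least-squares solve per iteration is the added computational cost that buys the improved approximation.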

Extensions such as Multichannel MP[15] and Multichannel OMP[16] allow multicomponent signals to be processed. A natural extension of matching pursuit is to multiple positions and scales, obtained by augmenting the dictionary to a wavelet basis. This can be implemented efficiently using the convolution operator without changing the core algorithm.[17]

Matching pursuit is related to the field of compressed sensing and has been extended by researchers in that community. Notable extensions are Orthogonal Matching Pursuit (OMP),[18] Stagewise OMP (StOMP),[19] compressive sampling matching pursuit (CoSaMP),[20] Generalized OMP (gOMP),[21] and Multipath Matching Pursuit (MMP).[22]

## References

1. ^ Mallat, S. G.; Zhang, Z. (1993). "Matching Pursuits with Time-Frequency Dictionaries". IEEE Transactions on Signal Processing. 41 (12): 3397–3415. Bibcode:1993ITSP...41.3397M. doi:10.1109/78.258082.
2. ^ Perrinet, L. (2015). "Sparse models for Computer Vision". Biologically inspired computer vision. 14. arXiv:1701.06859. doi:10.1002/9783527680863.ch14.
3. ^ Bergeaud, F.; Mallat, S. (1995). "Matching pursuit of images". Proc. International Conference on Image Processing. 1: 53–56. doi:10.1109/ICIP.1995.529037.
4. ^ Neff, R.; Zakhor, A. (1997). "Very low bit-rate video coding based on matching pursuits". IEEE Transactions on Circuits and Systems for Video Technology. 7 (1): 158–171. doi:10.1109/76.554427.
5. ^ Mendels, F.; Vandergheynst, P.; Thiran, J.P. (2006). "Matching pursuit-based shape representation and recognition using scale-space". International Journal of Imaging Systems and Technology. 16: 162–180. doi:10.1002/ima.20078.
6. ^ Tosic, I.; Frossard, P.; Vandergheynst, P. (2005). "Progressive coding of 3D objects based on over-complete decompositions". IEEE Transactions on Circuits and Systems for Video Technology. 16: 1338–1349. doi:10.1109/tcsvt.2006.883502.
7. ^ Chakraborty, Debejyo; Kovvali, Narayan; Wei, Jun; Papandreou-Suppappola, Antonia; Cochran, Douglas; Chattopadhyay, Aditi (2009). "Damage Classification Structural Health Monitoring in Bolted Structures Using Time-frequency Techniques". Journal of Intelligent Material Systems and Structures. 20 (11): 1289–1305. doi:10.1177/1045389X08100044.
8. ^ Perrinet, L. U.; Samuelides, M.; Thorpe, S. (2002). "Sparse spike coding in an asynchronous feed-forward multi-layer neural network using Matching Pursuit". Neurocomputing. 57C: 125–34. doi:10.1016/j.neucom.2004.01.010.
9. ^ Lin, Jian-Liang; Hwang, Wen-Liang; Pei, Soo-Chang (2007). "Fast matching pursuit video coding by combining dictionary approximation and atom extraction". IEEE Transactions on Circuits and Systems for Video Technology. 17 (12): 1679–1689. doi:10.1109/tcsvt.2007.903120.
10. ^ Wu, Yinghua; Batista, Victor S. (2003). "Matching-pursuit for simulations of quantum processes". Chem. Phys. 118 (15): 6720–6724. Bibcode:2003JChPh.118.6720W. doi:10.1063/1.1560636.
11. ^ Perrinet, L. P. (2010). "Role of homeostasis in learning sparse representations". Neural Computation. 22 (7): 1812–1836. arXiv:0706.3177. doi:10.1162/neco.2010.05-08-795.
12. ^ Aharon, M.; Elad, M.; Bruckstein, A.M. (2006). "The K-SVD: An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation". IEEE Transactions on Signal Processing. 54: 4311–4322. Bibcode:2006ITSP...54.4311A. doi:10.1109/tsp.2006.881199.
13. ^ Pati, Y.; Rezaiifar, R.; Krishnaprasad, P. (1993). "Orthogonal Matching Pursuit: recursive function approximation with application to wavelet decomposition". Asilomar Conf. on Signals, Systems and Comput. doi:10.1109/acssc.1993.342465.
14. ^ Davis, G.; Mallat, S.; Zhang, Z. (1994). "Adaptive time-frequency decompositions with matching pursuits". Optical Engineering. 33: 2183. Bibcode:1994OptEn..33.2183D. doi:10.1117/12.173207.
15. ^ "Piecewise linear source separation", R. Gribonval, Proc. SPIE '03, 2003
16. ^ Tropp, Joel; Gilbert, A.; Strauss, M. (2006). "Algorithms for simultaneous sparse approximations ; Part I : Greedy pursuit". Signal Proc. – Sparse approximations in signal and image processing. 86: 572–588. doi:10.1016/j.sigpro.2005.05.030.
17. ^ Perrinet, L. (2015). "Sparse models for Computer Vision". Biologically inspired computer vision. 14. arXiv:1701.06859. doi:10.1002/9783527680863.ch14.
18. ^ Tropp, Joel A.; Gilbert, Anna C. (2007). "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit" (PDF). IEEE Transactions on Information Theory. 53 (12): 4655–4666. doi:10.1109/tit.2007.909108.
19. ^ Donoho, David L.; Tsaig, Yaakov; Drori, Iddo; Jean-luc, Starck (2006). "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit". IEEE Transactions on Information Theory. 58: 1094–1121. doi:10.1109/tit.2011.2173241.
20. ^ Needell, D.; Tropp, J.A. (2009). "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples". Applied and Computational Harmonic Analysis. 26: 301–321. doi:10.1016/j.acha.2008.07.002.
21. ^ Wang, J.; Kwon, S.; Shim, B. (2012). "Generalized Orthogonal Matching Pursuit". IEEE Transactions on Signal Processing. 60 (12): 6202–6216. arXiv:1111.6664. Bibcode:2012ITSP...60.6202J. doi:10.1109/TSP.2012.2218810.
22. ^ Kwon, S.; Wang, J.; Shim, B. (2014). "Multipath Matching Pursuit". IEEE Transactions on Information Theory. 60 (5): 2986–3001. doi:10.1109/TIT.2014.2310482.