# Talk:Low-rank approximation

WikiProject Articles for creation (Rated C-class)
This article was created via the article wizard and reviewed by member(s) of WikiProject Articles for creation, a project that helps unregistered users contribute quality articles and media files to the encyclopedia and tracks their progress. It was accepted on 11 January 2012 by reviewer Ktr101 (talk · contribs).

WikiProject Mathematics (Rated C-class, Low-importance; Field: Algebra)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia.

It seems to me that in Recommender system applications, the data being approximated may be categorical, but that is not necessarily the case.

Similarly, in Machine Learning (including Recommender systems), the data may be nonlinearly structured, but that is not necessarily the case. — Preceding unsigned comment added by AndrewMcN (talk · contribs) 07:20, 25 December 2013 (UTC)

## Proof of matrix approximation theorem

While the theorem is stated in terms of the Frobenius norm, the proof is given for the spectral norm. This should be fixed.

It is also possible to formulate it as a direct proof: take ${\displaystyle {\mathcal {V}}:=\operatorname {span} \{v_{1},\ldots ,v_{k+1}\}}$ and the null space ${\displaystyle {\mathcal {N}}:=\{x\ :\ {\widehat {D}}x=0\}}$. Since ${\displaystyle \dim {\mathcal {V}}=k+1}$ and ${\displaystyle \dim {\mathcal {N}}\geq n-k}$ (because ${\displaystyle \operatorname {rank} {\widehat {D}}\leq k}$), the dimension formula shows that the intersection is non-trivial, so we can choose ${\displaystyle x\in {\mathcal {V}}\cap {\mathcal {N}}}$ with ${\displaystyle \|x\|_{2}=1}$. This leads directly to ${\displaystyle \|(D-{\widehat {D}})x\|_{2}=\|Dx\|_{2}\geq \sigma _{k+1}\|x\|_{2}}$, proving ${\displaystyle \|D-{\widehat {D}}\|_{2}\geq \sigma _{k+1}}$.
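As a numerical sanity check of this bound (an illustrative sketch only, not part of any proof; the matrix `D` and rank `k` below are arbitrary choices), the rank-k SVD truncation attains the lower bound exactly, i.e. its spectral-norm error equals ${\displaystyle \sigma _{k+1}}$:

```python
import numpy as np

# Illustrative check: the best rank-k approximation (truncated SVD)
# attains spectral-norm error exactly sigma_{k+1}, matching the lower
# bound ||D - Dhat||_2 >= sigma_{k+1} for any rank-k Dhat.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 6))  # arbitrary test matrix
k = 3                            # arbitrary target rank

U, s, Vt = np.linalg.svd(D, full_matrices=False)
D_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k SVD truncation

err = np.linalg.norm(D - D_k, 2)  # spectral norm of the residual
print(np.isclose(err, s[k]))      # s[k] is sigma_{k+1} (0-based index)
```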

I assume that proving the theorem for the Frobenius norm might pose a greater challenge.

The proof for the Frobenius norm can be found in the 1936 paper "The approximation of one matrix by another of lower rank" listed in the reference section (the PDF can be found via Google Scholar). However, I'm not sure whether this is the simplest proof known to date. Also, what should we do about the proof for the spectral norm? Should we modify the problem description to account for both cases? Bbbbbbbbba (talk) 03:26, 20 November 2014 (UTC)

The current proof for the Frobenius norm is wrong; there is no quick justification for the "clearly..." step. One nice (valid) proof is given here: http://math.stackexchange.com/a/759174/81360 via Weyl's inequalities. Bengski68 (talk) 09:33, 21 June 2016 (UTC)
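For what it's worth, the Frobenius-norm statement is easy to check numerically (an illustrative sketch only; the matrix and rank below are arbitrary): the truncated SVD's Frobenius error equals the root-sum-of-squares of the discarded singular values, which is the claimed minimum.

```python
import numpy as np

# Illustrative check of the Frobenius-norm version of the theorem:
# ||D - D_k||_F = sqrt(sigma_{k+1}^2 + ... + sigma_n^2) for the
# rank-k SVD truncation D_k.
rng = np.random.default_rng(1)
D = rng.standard_normal((7, 5))  # arbitrary test matrix
k = 2                            # arbitrary target rank

U, s, Vt = np.linalg.svd(D, full_matrices=False)
D_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

err_F = np.linalg.norm(D - D_k, "fro")
print(np.isclose(err_F, np.sqrt(np.sum(s[k:] ** 2))))
```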

## Proof of uniqueness?

The theorem statement mentions uniqueness, but there is no uniqueness argument in the proof. This also needs to be fixed. Uniqueness holds only up to some orthogonal rotations when some of the (r largest) singular values are repeated. Jfessler (talk) 22:27, 9 January 2016 (UTC)
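A small concrete example of the non-uniqueness (an illustrative sketch; the diagonal matrices below are chosen for the purpose): when the singular value at the cutoff is repeated, distinct rank-r matrices attain the same optimal error.

```python
import numpy as np

# D has singular values (2, 1, 1): sigma_2 = sigma_3, so the best
# rank-2 approximation is not unique. Truncating either copy of the
# repeated singular value gives the same optimal error sqrt(sigma_3^2) = 1.
D = np.diag([2.0, 1.0, 1.0])
A = np.diag([2.0, 1.0, 0.0])  # drop the second copy of sigma = 1
B = np.diag([2.0, 0.0, 1.0])  # drop the first copy instead

err_A = np.linalg.norm(D - A, "fro")
err_B = np.linalg.norm(D - B, "fro")
# A != B, yet both achieve the minimal Frobenius error 1.
print(np.isclose(err_A, 1.0) and np.isclose(err_B, 1.0))
```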