This is the talk page for discussing improvements to the Low-rank approximation article.
WikiProject Articles for creation (Rated C-class)
WikiProject Mathematics (Rated C-class, Low-importance)
A Wikipedia contributor, Imarkovs (talk · contribs), may be personally or professionally connected to the subject of this article. This user's editing has included contributions to this article. Relevant guidelines include Wikipedia:Conflict of interest, Wikipedia:Autobiography, and Wikipedia:Neutral point of view.
It seems to me that in recommender system applications, the low-rank approximation may consist of categorical data, but that is not necessarily the case.
Similarly, in machine learning (including recommender systems), the data may be non-linearly structured, but that is not necessarily the case. — Preceding unsigned comment added by AndrewMcN (talk • contribs) 07:20, 25 December 2013 (UTC)
Proof of matrix approximation theorem
While the theorem is stated in terms of the Frobenius norm, the proof is given for the spectral norm. This should be fixed.
It is also possible to formulate it as a direct proof: take <math>W = \operatorname{span}\{v_1, \ldots, v_{k+1}\}</math> (spanned by the first <math>k+1</math> right singular vectors of <math>A</math>) and the null space <math>\ker(B)</math> of a competing rank-<math>k</math> matrix <math>B</math>. By the dimension formula, the intersection <math>W \cap \ker(B)</math> is non-trivial, so we can choose a unit vector <math>x \in W \cap \ker(B)</math> with <math>Bx = 0</math>. This leads directly to <math>\|A - B\|_2 \ge \|(A - B)x\|_2 = \|Ax\|_2 \ge \sigma_{k+1}</math>, proving <math>\|A - B\|_2 \ge \|A - A_k\|_2</math>.
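As a quick sanity check of the spectral-norm claim (not intended for the article itself), here is a small NumPy sketch: the truncated SVD achieves error exactly <math>\sigma_{k+1}</math>, and a randomly chosen rank-<math>k</math> competitor does no better. The dimensions and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 6, 2

A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Truncated SVD: keep only the k largest singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Spectral-norm error of the truncated SVD equals sigma_{k+1}.
err_opt = np.linalg.norm(A - A_k, 2)
assert np.isclose(err_opt, s[k])

# Any other rank-k matrix B has error at least sigma_{k+1}.
B = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
assert np.linalg.norm(A - B, 2) >= s[k]
```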
I assume that proving the theorem for the Frobenius norm might pose a greater challenge.
- The proof for the Frobenius norm can be found in the 1936 paper "The approximation of one matrix by another of lower rank" in the reference section (the PDF can be found via Google Scholar). However, I'm not sure if this is the simplest proof known to date. Also, what should we do about the proof for the spectral norm? Should we modify the problem description to account for both cases? Bbbbbbbbba (talk) 03:26, 20 November 2014 (UTC)
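For the Frobenius-norm case under discussion, the corresponding optimality statement is that the truncated SVD achieves error <math>\sqrt{\sigma_{k+1}^2 + \cdots + \sigma_r^2}</math>. A minimal NumPy check of that identity (dimensions and seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 7, 5, 2

A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Best rank-k approximation in the Frobenius norm (Eckart-Young, 1936).
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Its error is the root-sum-of-squares of the discarded singular values.
err_opt = np.linalg.norm(A - A_k, 'fro')
assert np.isclose(err_opt, np.sqrt(np.sum(s[k:] ** 2)))

# A random rank-k competitor is no better.
B = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
assert np.linalg.norm(A - B, 'fro') >= err_opt
```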