- 1 Vertex reordering
- 2 Cholesky factorization
- 3 References
- 4 Sparse polynomials?
- 5 too much emphasis on band matrices
- 6 Row Bandwidth def'n incorrect
- 7 Banker's algorithm
- 8 Orthogonal List?
- 9 Bandwidth and invariants
- 10 Why are academic explanations so hard to comprehend?
- 11 Storing a sparse matrix
- 12 Sparse Matrix with consecutive zero rows
- 13 More on fill-in
- 14 Linked Scheme
- 15 Please correct Matlab
- 16 Error in example of Yale Format
- 17 Confusing statement in the end of Yale Format
Vertex reordering
While the Cuthill–McKee algorithm is a fine algorithm, A. George showed as early as 1971 that the reverse Cuthill–McKee algorithm is often better. Many arguably better algorithms have appeared in the last 30 years.
I added some [incomplete, sketchy] remarks.
-- User:Nahaj 2005-08-28
Cholesky factorization
Cholesky factorization usually only shows up (in my experience) for large sparse matrices in solutions of least squares problems via the Normal Equations.
There have been many alternatives since the 1960s and Householder's work. [Lawson and Hanson, 1974] covered several, for example. Reference: Lawson, Charles L. and Hanson, Richard J., 1974, "Solving Least Squares Problems", Prentice-Hall.
The article seems to me to have a very old-fashioned bias. I added some remarks against that bias.
-- User:Nahaj 2005-08-28
- My first sentence shows how small an area I play in. In Tim Davis' note below under "too much emphasis on band matrices" he corrects this, and in email has sent me a pointer to his http://www.cise.ufl.edu/research/sparse/matrices which has lots and lots of symmetric positive definite matrices that arise in non least squares contexts and for which Cholesky factorization would be appropriate. Nahaj 14:54, 27 September 2007 (UTC)
References
A pointer to one guy's implementation does not make for a reasonable reference list. There are many good books on the subject.
I added one.
-- User:Nahaj 2005-08-28
Sparse polynomials?
In algebraic geometry, sparse (multivariate) polynomials are sometimes mentioned, although I do not know an explicit definition. Assuming somebody here knows a bit about this, it could be a nice topic to add.
- Well, looking forward to seeing a sparse polynomial article! :) I know nothing of the topic, so I can't help. Oleg Alexandrov 21:10, 14 September 2005 (UTC)
too much emphasis on band matrices
This article has too much emphasis on band matrices.
"Band" matrices are often thought of as arising from the discretization of a 2D mesh. These are not truly "banded" matrices, however, since the optimal ordering (nested dissection) results in a matrix that is far from banded. Bandwidth reducing orderings are not suitable for a matrix arising from the discretization of a 2D or 3D square mesh (for example).
For example, an n-by-n matrix arising from an s-by-s 2D mesh can be factorized in O(s^3) time, where n = s^2, with only O(n log n) entries in the factor L, or about O(s^2 log s).
On the other hand, if the "natural" ordering is used (the one that "looks banded"), the time taken is O(n^2), or O(s^4). That's quite a bit higher. The number of entries in the factor is O(n sqrt(n)), or O(s^3).
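To put rough numbers on the gap Tim describes, here is a quick back-of-the-envelope sketch. Constants and lower-order terms are ignored, so this is purely illustrative, not a benchmark:

```python
import math

# s-by-s 2D mesh, so n = s^2 unknowns.
s = 1000
n = s * s

# Nested dissection: O(s^3) factorization work, O(n log n) fill.
nested_flops = s ** 3
nested_fill = n * math.log(n)

# Natural ("banded-looking") ordering: O(n^2) = O(s^4) work, O(n^1.5) fill.
natural_flops = n ** 2
natural_fill = n * math.sqrt(n)

print(natural_flops // nested_flops)       # 1000 -- the work gap grows like s
print(round(natural_fill / nested_fill))   # 72 -- fill gap grows like sqrt(n)/log n
```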
Cholesky factorization arises in more problems than the Normal Equations (using Cholesky factorization for the Normal Equations is usually a bad idea; QR factorization is more accurate).
-- Tim Davis, Univ. of Florida
- I agree it is a bad idea... but it is still being promoted in current Surveying texts, and still used by the National Geodetic Survey for the national adjustments. So I see it a lot. Nahaj 15:02, 27 September 2007 (UTC)
Row Bandwidth def'n incorrect
I don't have a reference in front of me, but I think the definition of row bandwidth is wrong. Something like n-m, where m is the minimizer, is within 1 of the lower bandwidth. But it doesn't appear to capture upper bandwidth. Jonas August 16:20, 30 April 2007 (UTC)
- Yes, the definition was rather strange; thanks for bringing that to our attention. I replaced it by another one. Let me know if the new one doesn't make sense to you either. -- Jitse Niesen (talk) 02:00, 1 May 2007 (UTC)
Banker's algorithm
I removed the link to a "Banker's algorithm" since it points to another algorithm of the same name, not Dr. Snay's algorithm. Nahaj 14:52, 27 September 2007 (UTC)
Orthogonal List?
As I remember, Orthogonal List (or a similar name) is a common data structure to store a sparse matrix. Is it correct? Btw, I can't even find any info about Orthogonal List in Wikipedia. Could anyone more knowledgeable on this issue write some stuff? Took 05:10, 9 November 2007 (UTC)
Bandwidth and invariants
Is this correct?: It seems to me that the bandwidth of a matrix describes the matrix but not the underlying linear operator. That is, by changing the basis in which you write a matrix, you can change the matrix's bandwidth. As such, bandwidth is not an invariant the way trace and determinant are.
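A tiny sketch of the point above, using toy matrices of my own (zero-based indexing): a symmetric permutation represents the same operator in a reordered basis, yet it changes the bandwidth:

```python
def lower_bandwidth(m):
    """Largest i - j over nonzero entries (at or below the diagonal)."""
    return max((i - j for i, row in enumerate(m)
                for j, v in enumerate(row) if v != 0), default=0)

# A tridiagonal matrix: lower bandwidth 1.
T = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]

# The same operator with basis indices 1 and 2 swapped
# (a symmetric permutation of T): lower bandwidth 2.
P = [[2, 0, 1],
     [0, 2, 1],
     [1, 1, 2]]

print(lower_bandwidth(T))  # 1
print(lower_bandwidth(P))  # 2
```

So yes: bandwidth is a property of a particular matrix representation, not of the underlying operator, unlike trace or determinant.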
Why are academic explanations so hard to comprehend?
"It stores an initial sparse N×N matrix M in row form using three arrays, A, IA, JA"
I admit I'm not academic. I believe I have a need for a sparse matrix. I understand the basic theory, but wonder how they deal with insertions. I search the web and find nothing but incomprehensible academic articles. I decide to search Wikipedia, which I find almost always answers my questions, except now. I see this article, try to understand this explanation, and get nowhere. What is wrong with academics that they cannot communicate clearly? Problems with this statement:
- Does N*N mean rows and columns? If so, say "rows and columns"!
- Does reusing the letter N mean that rows and columns are the same size? If so, say "where the number of rows and columns are the same"!
- Why is A reused in A, IA, JA? If there is no reason, then DON'T DO IT! If there is, then say so, IN PLAIN ENGLISH! It confused the hell out of me as to what the relationship was, given that A is everywhere.
Why do academics make it so hard to understand their writing? It seems like a college social fraternity, with all its social-pressure glory. REBEL AGAINST THE PRESSURE TO WRITE INCOMPREHENSIBLE DOUBLETHINK! Please write so us dumb-dumbs can understand. I believe that is in the spirit of Wikipedia! —Preceding unsigned comment added by 126.96.36.199 (talk) 17:01, 12 April 2008 (UTC)
- Thanks for the suggestions. NxN means it's square. mxn means it's m rows and n columns; this is standard matrix notation that could be referenced but is common knowledge for anyone attempting a sparse-matrix implementation. I cleaned it up a bit. Is that clearer? Cheers. —Ben FrantzDale (talk) 23:47, 14 April 2008 (UTC)
Yes, it's clearer, but the IA row values should be explained more than saying they are the "index of the first nonzero element of row i in A, which is of length m + 1 (i.e., one entry per row, plus one)". [1 3 5 7] do not appear to be indexes (in the sense of an array index) of any nonzeros in rows 1 to 3 (what's the +1 for?). The first row has a nonzero at position 1. The second row, however, has a nonzero at position 2, as does the third row. Clearly I am not a mathematician and don't understand what is being explained here, but I would like to understand, because it has an impact on my practical understanding of EAV structure and database modelling. I suppose the IA and JA arrays have nothing to do with Cartesian coordinates and are more complex for purposes of compression, but at the risk of appearing stupid, I don't get it. —Preceding unsigned comment added by 188.8.131.52 (talk) 09:28, 3 October 2009 (UTC)
- Please help! I have tried now for 15 minutes to understand the second array, and asked several colleagues, and no one seems to know what is meant by those indices :/ This is REALLY frustrating :/ Thanks in advance! — Preceding unsigned comment added by 184.108.40.206 (talk) 14:46, 29 May 2013 (UTC)
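Since this thread keeps tripping over IA, here is a minimal sketch on a toy 4x4 matrix of my own (zero-based indexing). IA has one entry per row plus one: IA[i] is the position in A where row i starts, and the extra final entry marks where the last row ends:

```python
def dense_to_csr(m):
    """Convert a dense matrix (list of lists) to the three CSR arrays.

    A  -- the nonzero values, scanned row by row
    JA -- the column index of each value in A
    IA -- row pointers: row i's values are A[IA[i]:IA[i+1]]
    """
    a, ja, ia = [], [], [0]
    for row in m:
        for j, v in enumerate(row):
            if v != 0:
                a.append(v)
                ja.append(j)
        ia.append(len(a))  # cumulative nonzero count = start of next row
    return a, ia, ja

def csr_row(a, ia, ja, i, ncols):
    """Reconstruct dense row i from the CSR arrays."""
    row = [0] * ncols
    for k in range(ia[i], ia[i + 1]):
        row[ja[k]] = a[k]
    return row

M = [[0, 0, 0, 0],
     [5, 8, 0, 0],
     [0, 0, 3, 0],
     [0, 6, 0, 0]]
A, IA, JA = dense_to_csr(M)
print(A)   # [5, 8, 3, 6]
print(IA)  # [0, 0, 2, 3, 4] -- note the repeated 0 for the all-zero row
print(JA)  # [0, 1, 2, 1]
```

The "+1" is exactly so that the slice A[IA[i]:IA[i+1]] works for every row, including the last one.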
Storing a sparse matrix
Please change the example so that the sparse format actually takes up less space than the naive format. (Make the matrix bigger.)
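To see what "bigger" needs to mean, here is a rough sketch of the break-even arithmetic, assuming 8-byte double values and 4-byte integer indices (those sizes are an assumption; other choices shift the numbers but not the shape of the comparison):

```python
def dense_bytes(m, n, value_bytes=8):
    # Naive format: one stored value per entry.
    return m * n * value_bytes

def csr_bytes(m, nnz, value_bytes=8, index_bytes=4):
    # A: nnz values; JA: nnz column indices; IA: m + 1 row pointers.
    return nnz * value_bytes + nnz * index_bytes + (m + 1) * index_bytes

# A 1000 x 1000 matrix with about 5 nonzeros per row:
print(dense_bytes(1000, 1000))   # 8000000
print(csr_bytes(1000, 5 * 1000)) # 64004 -- over 100x smaller
```

A tiny 4x4 example, by contrast, can easily cost more in CSR than as a plain array, which is presumably what prompted the complaint.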
Sparse Matrix with consecutive zero rows
- Storing an entirely zero row is not usually useful in linear algebra, since then the matrix is singular (non-invertible). However, if need be, just store a zero entry as if it were a nonzero. That is, store a 0 in the value array. The diagonal is a reasonable choice. —Preceding unsigned comment added by 220.127.116.11 (talk) 18:27, 13 February 2010 (UTC)
More on fill-in
The section on fill-in seems very brief. I'm not an expert on this, but I would guess it should refer to things like minimal degree permutation and other reordering algorithms. See e.g.
Also, a few comments on special cases could be very helpful. E.g. I read in Gockenbach's textbook (p. 228) that the Cholesky factor of a banded matrix retains the same (half-)bandwidth (though zeros within the band can still fill in). It also seems to be the case that block-diagonal matrices do not fill in at all, but I have not seen this stated/proven... Can anyone help with this and other special cases? Thanks--Ged.R (talk) 16:09, 22 July 2010 (UTC)
Linked Scheme
Suppose that an m x n matrix with t non-zero terms is represented. How small must t be so that the linked scheme uses less space than an m x n array uses? —Preceding unsigned comment added by 18.104.22.168 (talk) 15:42, 8 March 2011 (UTC)
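One hedged way to work this out, assuming each linked node stores (row, col, value, next) at one machine word per field plus one head pointer per row (the exercise this comes from may intend a different node layout, which would change the constant):

```python
# Space in machine words for each scheme under the stated assumptions.
def linked_words(m, t, words_per_node=4):
    # m head pointers plus 4 words per nonzero node.
    return m + t * words_per_node

def dense_words(m, n):
    # One word per entry of the m x n array.
    return m * n

m, n = 100, 100
# Linked wins when m + 4t < m*n, i.e. t < (m*n - m) / 4.
threshold = (m * n - m) // 4
print(threshold)  # 2475 -- t must be strictly below this for 100 x 100
```

So under these assumptions the linked scheme pays off once fewer than roughly a quarter of the entries are nonzero.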
Please correct Matlab
I've corrected the mistake: Matlab's sparse uses compressed column form. Please include the proper reference (the referenced article of Moler requires sign-up) http://www.mathworks.com/help/pdf_doc/otherdocs/simax.pdf — Preceding unsigned comment added by Dimacq (talk • contribs) 18:06, 24 June 2011 (UTC)
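For readers comparing with the row form discussed above, here is a minimal sketch of compressed column storage (the scheme Matlab's sparse uses), on a toy 2x3 matrix of my own. It is the exact mirror of CSR: values are scanned column by column, with row indices and column pointers:

```python
def dense_to_csc(m):
    """Compressed column form: values scanned column by column.

    Returns (vals, row_ind, col_ptr), where column j's values are
    vals[col_ptr[j]:col_ptr[j+1]] and row_ind gives each value's row.
    """
    nrows, ncols = len(m), len(m[0])
    vals, row_ind, col_ptr = [], [], [0]
    for j in range(ncols):
        for i in range(nrows):
            if m[i][j] != 0:
                vals.append(m[i][j])
                row_ind.append(i)
        col_ptr.append(len(vals))  # cumulative count = start of next column
    return vals, row_ind, col_ptr

M = [[1, 0, 4],
     [0, 3, 5]]
print(dense_to_csc(M))  # ([1, 3, 4, 5], [0, 1, 0, 1], [0, 1, 2, 4])
```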
Error in example of Yale Format
Hi, I think the examples in the Yale format section are wrong in the IA array. The instruction for the IA array reads "(array of index of first nonzero element of row i)", so the IA array must have the same number of elements as the number of rows of the original matrix. That means IA = [0 2 4] for the first example and IA = [0 2 4 8] for the second. I corrected that; please verify it. Thank you
Confusing statement in the end of Yale Format
Fellows, I found the following statement confusing:
(Note that in this format, the first value of IA will always be zero and the last will always be NNZ: these two cells may not be useful.)
There is no explicit reason given for the last value, equal to NNZ, to exist in this array. If there is a reason for the last value NNZ that is linked to the sparse data structure, then it must be made explicit in the article. Hugs. — Preceding unsigned comment added by Antromindopofagist (talk • contribs) 23:14, 16 April 2013 (UTC)
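One concrete reason, shown with a toy illustration using made-up numbers (zero-based indexing): the nonzeros of row i are the slice A[IA[i]:IA[i+1]], so extracting the last row only works if IA carries a final entry equal to NNZ:

```python
# A 3x3 matrix with nonzeros 10 (row 0, col 0), 20 and 30 (row 1,
# cols 1 and 2), and 40 (row 2, col 2).
A  = [10, 20, 30, 40]
JA = [0, 1, 2, 2]
IA = [0, 1, 3, 4]   # 3 rows; the final entry IA[3] == len(A) == NNZ

def row_values(i):
    # Uniform rule for every row, including the last one.
    return A[IA[i]:IA[i + 1]]

print(row_values(0))  # [10]
print(row_values(1))  # [20, 30]
print(row_values(2))  # [40] -- only possible because IA[3] == NNZ
```

Without the trailing NNZ, code reading the last row would need a special case, which is why the "may not be useful" remark in the article is misleading for that cell.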