WikiProject Mathematics (Rated C-class, High-importance)
Recent addition of section: Matrix_multiplication#Multiplication_of_n-dimensional_matrix
This section contains unsourced content which may not be immediately recognizable as such. The English is also a little broken, as is some of the organization. I think the section is badly in need of a haircut and some citations if it is going to remain. Rschwieb (talk) 14:16, 6 March 2013 (UTC)
- (edit conflict) Is "multidimensional arrays" the best terminology? "N-dimensional matrices" usually refers to an n × n square matrix (which in the terminology of the new section would be a 2d matrix, which may confuse people). Rschwieb and I asked the editor on his/her talk page. M∧Ŝc2ħεИτlk 14:22, 6 March 2013 (UTC)
I've removed the section. If this is not complete original research, then to be treated in the main article on the topic of matrix multiplication would obviously require very good sources (WP:WEIGHT). As it stands, the proposed matrix product seems to be completely novel. If it is studied in the literature, then sources are needed to determine where and how an encyclopedia can deal with the topic. Sławomir Biały (talk) 22:50, 6 March 2013 (UTC)
- I tried to relate it to tensor contraction, but failed. It involves a cyclic permutation of indices in addition to the contractions. The product ends up being non-associative, even for powers of a single matrix. Tensors are also defined by their transformation properties under a coordinate change and, IIRC, there was no mention of transformation properties in the section. --Mark viking (talk) 23:07, 7 March 2013 (UTC)
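The removed section's definition is no longer on the page, so the exact product can't be reproduced here; but purely to illustrate Mark viking's point that a contraction combined with a permutation of the remaining indices breaks associativity, here is a hypothetical toy product (the name `twisted_product` and the index convention are my own invention, not the removed section's):

```python
import numpy as np

def twisted_product(A, B):
    # Contract the shared inner index, then cyclically permute the two
    # surviving indices (for 2-index arrays that permutation is a transpose).
    # Hypothetical illustration only, NOT the removed section's definition.
    return np.einsum('ij,jk->ki', A, B)

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

left = twisted_product(twisted_product(A, B), C)
right = twisted_product(A, twisted_product(B, C))
print(np.allclose(left, right))  # almost surely False: non-associative
```

Algebraically `twisted_product(A, B)` is `(A @ B).T`, so the two groupings give `C.T @ A @ B` versus `B @ C @ A.T`, which generically differ.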
Ideas for Improvement
- I am no expert on article rating, but in my opinion, the lead is completely inadequate in introducing matrix multiplication, giving the reader context, or in summarizing the article. There are relatively few citations and some sections have no citations at all. There is no history section. Check out Wikipedia:Good article criteria as a guide to what editors look for in a good article. --Mark viking (talk) 22:01, 7 April 2013 (UTC)
I'd like to say that regardless of this page's rating as a "Wikipedia page" it is fantastic as an educational tool. It has explained matrix products perfectly for me -- where most maths wiki pages seem to completely smother the reader in unexplained jargon. — Preceding unsigned comment added by 220.127.116.11 (talk • contribs)
- The article is still rather incomplete and sources are still lacking, but at least now we have a lead section. Hope my resectioning is not controversial.
- Sbenzell's question should be repeated: apart from sources and a history section (and the empty sections I introduced, not inclined to fill them in for now), what more could be improved or added? M∧Ŝc2ħεИτlk 17:37, 5 January 2014 (UTC)
Sources for a history section
Let's compile them here:
- Computation, algorithms
- Virginia Vassilevska (2008). Efficient Algorithms for Path Problems in Weighted Graphs. ProQuest. ISBN 0-549-758-852.
- I've just added Binet as the inventor to the lead of the article, with MacTutor as the source, without seeing this section. Feel free to revert if you don't trust MacTutor. QVVERTYVS (hm?) 22:43, 22 October 2013 (UTC)
- Agreed, MacTutor is not reliable for WP. It's pretty hard to find reliable sources on the history of matrix multiplication: every book I've seen on matrix algebra or linear algebra seems to just define the matrix operations without providing any historical background. M∧Ŝc2ħεИτlk 15:47, 23 October 2013 (UTC)
Matrix powers and discrete difference equations
Matrix powers of the form <math>A^k</math> are useful for solving linear difference equations that look like:

:<math>x_{k+1} = A x_k + B u_k</math>

where <math>x_k</math> is the state vector, <math>u_k</math> is the input, <math>A</math> is the state-space matrix, and <math>B</math> is the input matrix.

This method was taught to me by a Hungarian mathematician and I'm not even sure what it is called. The problem is to find <math>A^k</math>.

Anyway, I will include an example and steps below. Given a matrix <math>A</math>, first compute the eigenvalues; say they are <math>\lambda_1</math> and <math>\lambda_2</math>, each with an algebraic multiplicity of two.

Now take each eigenvalue raised to the power <math>k</math>: <math>\lambda_i^k</math>. Multiply it by an unknown matrix <math>C_i</math>. If an eigenvalue has an algebraic multiplicity greater than 1, then repeat the process, but multiply by a factor of <math>k</math> for each repetition. If one eigenvalue had a multiplicity of three, then there would be the terms <math>C_1 \lambda^k,\; C_2 k \lambda^k,\; C_3 k^2 \lambda^k</math>. Sum all terms.

In our example we get:

:<math>A^k = C_1 \lambda_1^k + C_2 k \lambda_1^k + C_3 \lambda_2^k + C_4 k \lambda_2^k</math>

So how can we get enough equations to solve for all of the unknown matrices? Increment <math>k</math>. Since these equations must be true regardless of the value of <math>k</math>, we can set <math>k = 0, 1, 2, 3</math> and solve for the four unknown matrices. This can be done using linear algebra (don't let the fact that the variables are matrices confuse you). Plugging the resulting <math>C_i</math> back into the sum gives the final closed-form answer for <math>A^k</math>.

This method was taught as it is nearly the same procedure used to calculate <math>e^{At}</math> (useful for linear differential equations), except that instead of incrementing <math>k</math> to get additional equations and using <math>\lambda^k</math> in the terms, one takes the derivative with respect to <math>t</math> to generate additional equations and uses <math>e^{\lambda t}</math> in the terms. Mouse7mouse9 05:38, 7 December 2013 (UTC)
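The steps above can be sketched numerically. The sketch below restricts to distinct eigenvalues for simplicity (so none of the k-multiplied repeat terms appear); the function name and variable names are illustrative only:

```python
import numpy as np

def matrix_power_closed_form(A, k):
    """Compute A^k via the ansatz A^k = sum_i C_i * lambda_i^k
    (assumes the n eigenvalues of A are distinct)."""
    n = A.shape[0]
    lams = np.linalg.eigvals(A)
    # Setting k = 0, 1, ..., n-1 gives n matrix equations
    #   A^j = sum_i C_i * lambda_i^j,
    # a Vandermonde system in the unknown matrices C_i.
    V = np.vander(lams, n, increasing=True).T            # V[j, i] = lambda_i**j
    powers = np.stack([np.linalg.matrix_power(A, j) for j in range(n)])
    # Each entry of the C_i satisfies the same Vandermonde system,
    # so solve for all n*n entries at once.
    C = np.linalg.solve(V, powers.reshape(n, -1)).reshape(n, n, n)
    return sum(C[i] * lams[i] ** k for i in range(n))

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # distinct eigenvalues 2 and 3
print(np.allclose(matrix_power_closed_form(A, 5),
                  np.linalg.matrix_power(A, 5)))          # True
```

Handling repeated eigenvalues would add the <math>k \lambda^k, k^2 \lambda^k, \dots</math> columns to the linear system, as described in the post.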
- I made a package for the computer algebra system Maxima; it is still under development but can be found here (https://github.com/Mickle-Mouse/Maxima-Matrix-Power). It handles repeated zero eigenvalues and repeated nonzero eigenvalues using tricks similar to Buchheim's generalization of Sylvester's formula, except in a discrete domain. To see this trick used in its appropriate context, see (https://en.wikipedia.org/wiki/Talk:Matrix_exponential#A_method_I_was_taught.2C_but_cannot_find_external_references). Mouse7mouse9 23:21, 29 December 2013 (UTC)
- The method seems unnecessarily complicated. Why not go directly from:
- ? In any case, I am almost certain it is published somewhere, and it's not really contentious, so I might be in favor of inclusion. Would it help if I posted the methods somewhere on the Internet, noting that my expertise would make it allowable under WP:SPS, until such time as a (normally) published reliable source can be found? — Arthur Rubin (talk) 02:27, 30 December 2013 (UTC)
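Arthur Rubin's expression was lost from the page, so the "direct" route he had in mind can't be recovered here. One common direct shortcut, offered only as a guess at what was meant and valid only when <math>A</math> is diagonalizable with distinct eigenvalues, is <math>A^k = V \,\mathrm{diag}(\lambda_i^k)\, V^{-1}</math>:

```python
import numpy as np

def power_by_diagonalization(A, k):
    # A^k = V diag(lambda_i**k) V^{-1}; assumes A is diagonalizable.
    # Illustrative guess at the elided "direct" method, not a reconstruction of it.
    lams, V = np.linalg.eig(A)
    return (V * lams ** k) @ np.linalg.inv(V)   # columns of V scaled by lambda_i**k

A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.allclose(power_by_diagonalization(A, 5),
                  np.linalg.matrix_power(A, 5)))   # True
```

This fails for defective matrices, which may be why the undetermined-coefficients method above (and the Maxima package's handling of repeated eigenvalues) is more general.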
- @Maschen: Perhaps you are correct, Maschen. This is more useful as an application for solving discrete linear systems, so I made a cross-post to the talk page for the recurrence relation article: https://en.wikipedia.org/wiki/Talk:Recurrence_relation#A_method_for_discrete_linear_systems. Mouse7mouse9
New asymptotic complexity by Le Gall (2014)
Cited from the lead of Coppersmith–Winograd algorithm:
In 2014, François Le Gall simplified the methods of Williams and obtained an improved bound of O(n2.3728639).
The section Matrix multiplication#Algorithms for efficient matrix multiplication should be updated accordingly, by an expert. Also, the picture File:Bound on matrix multiplication omega over time.svg should be updated, by someone who has Mathematica installed. - Jochen Burghardt (talk) 18:30, 27 June 2014 (UTC)
- I updated the text, but I don't have Mathematica so someone else needs to do that part. Le Gall's paper is appearing in ISSAC 2014 so it should count as a reliable source (the version previously cited on the Coppersmith–Winograd article was just a preprint). —David Eppstein (talk) 20:59, 27 June 2014 (UTC)