# Talk:Matrix multiplication

WikiProject Mathematics (Rated C-class, High-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: C-Class, High-importance (Field: Algebra)
One of the 500 most frequently viewed mathematics articles.

## Recent addition of section: Matrix_multiplication#Multiplication_of_n-dimensional_matrix

This section contains unsourced content, which may not be immediately recognizable as such. The English is also a little broken, as is some of the organization. I think the section is badly in need of a haircut and some citations if it is going to remain. Rschwieb (talk) 14:16, 6 March 2013 (UTC)

(edit conflict) Is "multidimensional arrays" the best terminology? "N-dimensional matrices" usually refers to an n × n square matrix (which in the terminology of the new section would be a 2d matrix... which may confuse people). Rschwieb and I asked the editor on his/her talk page. M∧Ŝc2ħεИτlk 14:22, 6 March 2013 (UTC)

I've removed the section. If this is not complete original research, then to be treated in the main article on the topic of matrix multiplication would obviously require very good sources (WP:WEIGHT). As it stands, the proposed matrix product seems to be completely novel. If it is studied in the literature, then sources are needed to determine where and how an encyclopedia can deal with the topic. Sławomir Biały (talk) 22:50, 6 March 2013 (UTC)

Fine enough, wasn't sure if it was OR. M∧Ŝc2ħεИτlk 23:03, 6 March 2013 (UTC)
You can have tensors with more than two indices, and contract over several of those indices at once. Does that relate to anything in the deleted section? Jheald (talk) 22:44, 7 March 2013 (UTC)
I tried to relate it to tensor contraction, but failed. It involves a cyclic permutation of indices in addition to the contractions. The product ends up being non-associative, even for powers of a single matrix. Tensors are also defined by their transformation properties under a coordinate change and, IIRC, there was no mention of transformation properties in the section. --Mark viking (talk) 23:07, 7 March 2013 (UTC)

## Ideas for Improvement

Does anyone know why this page is still rated C-level? What do people want to see in an improved version? Sbenzell (talk) 19:53, 7 April 2013 (UTC)

I am no expert on article rating, but in my opinion, the lead is completely inadequate in introducing matrix multiplication, giving the reader context, or in summarizing the article. There are relatively few citations and some sections have no citations at all. There is no history section. Check out Wikipedia:Good article criteria as a guide to what editors look for in a good article. --Mark viking (talk) 22:01, 7 April 2013 (UTC)
History and citations are always nice. But the lead does say what matrix multiplication is, and states clearly enough that there is more than one way to multiply matrices. An extension would be fitting, though I simply don't know how... M∧Ŝc2ħεИτlk 22:35, 7 April 2013 (UTC)

I'd like to say that, regardless of this page's rating as a "Wikipedia page", it is fantastic as an educational tool. It explained matrix products perfectly for me -- where most maths wiki pages seem to completely smother the reader in unexplained jargon.

## Sources for a history section

Let's compile them here:

General
Computation, algorithms

The book may be OK, less sure about the webpages... Feel free to add to the list. M∧Ŝc2ħεИτlk 11:49, 12 July 2013 (UTC)

I've just added Binet as the inventor to the lead of the article, with MacTutor as the source, without seeing this section. Feel free to revert if you don't trust MacTutor. QVVERTYVS (hm?) 22:43, 22 October 2013 (UTC)
I'm afraid I do not consider MacTutor reliable. However, I'm not an expert in math history. — Arthur Rubin (talk) 10:17, 23 October 2013 (UTC)
Agreed MacTutor is not reliable for WP. It's pretty hard to find reliable sources on the history of matrix multiplication - every book I've seen on matrix algebra or linear algebra seems to just define the matrix operations without providing any historical background. M∧Ŝc2ħεИτlk 15:47, 23 October 2013 (UTC)

## Matrix powers and discrete difference equations

Matrix powers of the form $A^t$ are useful for solving linear difference equations of the form:

$x(t+1) = A x(t) + B u(t)$,

where $x(t)$ is the state vector, $u(t)$ is the input, $A$ is the state matrix, and $B$ is the input matrix.
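To see why $A^t$ is what's needed here: with zero input, the recurrence unrolls to $x(t) = A^t x(0)$. A minimal sketch (the 2×2 matrix below is purely illustrative, not from the example that follows):

```python
import numpy as np

# With u = 0, x(t+1) = A x(t) unrolls to x(t) = A^t x(0),
# which is why a closed form for A^t is useful.
A = np.array([[1.0, 1.0],
              [0.0, 0.5]])  # illustrative state matrix
x0 = np.array([1.0, 1.0])

x = x0
for _ in range(5):       # step the recurrence five times
    x = A @ x

# Same result as applying A^5 directly
assert np.allclose(x, np.linalg.matrix_power(A, 5) @ x0)
```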

This method was taught to me by a Hungarian mathematician and I'm not even sure what it is called. The problem is to find $A^t$.

Anyway, I will include an example and steps below. Given $A = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & -1/8 \\ 0 & 0 & 1/2 & 1/2 \end{pmatrix}$

First, compute the eigenvalues: $\lambda_1 = 3/4$ and $\lambda_2 = 1$, each with algebraic multiplicity two.

Now, raise each eigenvalue to the power $t$, giving $\lambda_i^t$, and multiply by an unknown matrix $B_i$. If an eigenvalue has algebraic multiplicity greater than 1, repeat the process, multiplying by an extra factor of $t$ for each repetition. If one eigenvalue had a multiplicity of three, there would be the terms $B_{i_1} \lambda_i^t, B_{i_2} t \lambda_i^t, B_{i_3} t^2 \lambda_i^t$. Sum all terms.

In our example we get:

$A^{t} = B_{1_1} \lambda_1^t + B_{1_2} t \lambda_1^t + B_{2_1} \lambda_2^t + B_{2_2} t \lambda_2^t$

$A^{t} = B_{1_1} (3/4)^t + B_{1_2} t (3/4)^t + B_{2_1} 1^t + B_{2_2} t 1^t$.

So how can we get enough equations to solve for all of the unknown matrices? Increment $t$.

\begin{align} I A^{t} =& B_{1_1} (3/4)^t + B_{1_2} t (3/4)^t + B_{2_1} + B_{2_2} t \\ A A^{t} =& B_{1_1} (3/4)^{(t+1)} + B_{1_2} {(t+1)} (3/4)^{(t+1)} + B_{2_1} + B_{2_2} {(t+1)} \\ A^2 A^{t} =& B_{1_1} (3/4)^{(t+2)} + B_{1_2} {(t+2)} (3/4)^{(t+2)} + B_{2_1} + B_{2_2} {(t+2)} \\ A^3 A^{t} =& B_{1_1} (3/4)^{(t+3)} + B_{1_2} {(t+3)} (3/4)^{(t+3)} + B_{2_1} + B_{2_2} {(t+3)} \end{align}

Since these equations must hold regardless of the value of $t$, we can set $t=0$. Then we can solve for the unknown matrices.

\begin{align} I =& B_{1_1} + B_{2_1} \\ A =& (3/4) B_{1_1} + (3/4) B_{1_2} + B_{2_1} + B_{2_2} \\ A^2 =& (3/4)^2 B_{1_1} + 2 (3/4)^2 B_{1_2} + B_{2_1} + 2 B_{2_2} \\ A^3 =& (3/4)^3 B_{1_1} + 3 (3/4)^3 B_{1_2} + B_{2_1} + 3 B_{2_2} \end{align}

This can be solved using linear algebra (don't let the fact that the unknowns are matrices confuse you). Solving the system gives:

\begin{align} B_{1_1} =& 128 A^3 - 336 A^2 + 288 A - 80 I \\ B_{1_2} =& \frac{64}{3} A^3 - \frac{176}{3} A^2 + \frac{160}{3} A - 16 I \\ B_{2_1} =&-128 A^3 + 336 A^2 - 288 A + 81 I\\ B_{2_2} =& 16 A^3 - 40 A^2 + 33 A - 9 I \end{align}
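Since the $B$'s enter linearly, this is really just a 4×4 scalar system whose right-hand sides happen to be the matrices $I, A, A^2, A^3$. The inversion step can be sanity-checked numerically; a minimal sketch, assuming numpy:

```python
import numpy as np

# Rows: the equations at t = 0, 1, 2, 3.
# Columns: coefficients of B_11, B_12, B_21, B_22.
M = np.array([[1.0,    0.0,        1.0, 0.0],
              [3/4,    3/4,        1.0, 1.0],
              [9/16,   2*(9/16),   1.0, 2.0],
              [27/64,  3*(27/64),  1.0, 3.0]])

# Each row of M^{-1} expresses one B as a combination of (I, A, A^2, A^3).
C = np.linalg.inv(M)

# First row should reproduce B_11 = -80 I + 288 A - 336 A^2 + 128 A^3,
# i.e. coefficients (-80, 288, -336, 128).
assert np.allclose(C[0], [-80, 288, -336, 128])
```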

Plugging in the value for $A$ gives:

\begin{align} B_{1_1} =& \begin{pmatrix}0 & 0 & 48 & -16\\ 0 & 0 & -8 & 2\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}\\ B_{1_2} =& \begin{pmatrix}0 & 0 & \frac{16}{3} & -\frac{8}{3}\\ 0 & 0 & -\frac{4}{3} & \frac{2}{3}\\ 0 & 0 & \frac{1}{3} & -\frac{1}{6}\\ 0 & 0 & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\\ B_{2_1} =& \begin{pmatrix}1 & 0 & -48 & 16\\ 0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}\\ B_{2_2} =& \begin{pmatrix}0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix} \end{align}

So the final answer would be:

${A}^{t}=\begin{pmatrix}1 & t & \frac{\left( 24\,t-144\right) \,{2}^{2\,t}+\left( 16\,t+144\right) \,{3}^{t}}{3\,{2}^{2\,t}} & -\frac{\left( 6\,t-48\right) \,{2}^{2\,t}+\left( 8\,t+48\right) \,{3}^{t}}{3\,{2}^{2\,t}}\\ 0 & 1 & -\frac{\left( 4\,t+24\right) \,{3}^{t}-3\,{2}^{2\,t+3}}{3\,{2}^{2\,t}} & \frac{\left( 2\,t+6\right) \,{3}^{t}-3\,{2}^{2\,t+1}}{3\,{2}^{2\,t}}\\ 0 & 0 & \frac{\left( t+3\right) \,{3}^{t-1}}{{2}^{2\,t}} & -t\,{2}^{-2\,t-1}\,{3}^{t-1}\\ 0 & 0 & t\,{3}^{t-1}\,{2}^{1-2\,t} & -\frac{\left( t-3\right) \,{3}^{t-1}}{{2}^{2\,t}}\end{pmatrix}$

This method was taught because it is nearly the same procedure as calculating $\exp(At)$ (useful for linear differential equations), except that instead of incrementing $t$ to get additional equations and using $\lambda_i^t$ in the terms, one takes derivatives with respect to $t$ to generate additional equations and uses $e^{\lambda_i t}$ in the terms. Mouse7mouse9 05:38, 7 December 2013 (UTC)
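For what it's worth, the whole procedure above can be checked numerically. The following sketch (assuming numpy) builds the $B$ matrices from the polynomial expressions solved above and compares the resulting closed form for $A^t$ against direct repeated multiplication:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0,  0.0],
              [0.0, 1.0, 1.0,  0.0],
              [0.0, 0.0, 1.0, -1/8],
              [0.0, 0.0, 1/2,  1/2]])
I = np.eye(4)
A2, A3 = A @ A, A @ A @ A

# B matrices from the polynomial-in-A expressions solved above
B11 =  128*A3 - 336*A2 + 288*A - 80*I
B12 =  (64/3)*A3 - (176/3)*A2 + (160/3)*A - 16*I
B21 = -128*A3 + 336*A2 - 288*A + 81*I
B22 =   16*A3 -  40*A2 +  33*A -  9*I

def closed_form(t):
    """A^t from the ansatz B11*(3/4)^t + B12*t*(3/4)^t + B21 + B22*t."""
    return B11*(3/4)**t + B12*t*(3/4)**t + B21 + B22*t

# Compare against direct repeated multiplication for several values of t
for t in range(10):
    assert np.allclose(closed_form(t), np.linalg.matrix_power(A, t))
```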