Talk:Matrix (mathematics)/Archive 1


Inverses of matrices

There appears to be nothing on Wikipedia about finding the inverse of a matrix manually -- I think this is a pretty major omission, and I only started the module of my course on matrices last week. There is some sparse mention of uses of a matrix inverse, but the methods for finding inverses and, I believe, determinants are lacking, or just hard to find.

Perhaps someone will advise me, otherwise I shall upload some stuff on finding inverses and determinants on Sunday [probably]. EdJ343 15:28, 14 March 2007 (UTC)

See Invertible matrix. In the future please add new comments at the end of the talk page. MathMartin 15:50, 14 March 2007 (UTC)
If you go to the section Square matrices and related definitions in the article, you'll see all these concepts briefly mentioned. There are links to invertible matrix (as MathMartin said) and determinant where you can find more information. -- Jitse Niesen (talk) 00:45, 15 March 2007 (UTC)
Thank you, my mistake. I have learnt something. EdJ343 07:20, 19 March 2007 (UTC)

Notation issues

What does: "The notation A = (aij) means that A[i,j] = aij for all indices i and j. " mean? What is a, is it another matrix, or a contant or what? -- SGBailey 22:15 Jan 17, 2003 (UTC)

There is no a, just a11, a32, etc. -- Wshun Jan 21

There are several ways of notating the (i,j)th element. A[i,j] is one; aij is another, which is easier on the eye. The small a is used to emphasize that it is a number. Also, Aij is used for the matrix A with some sort of manipulation to the (i,j)th element or ith row & jth column. This probably needs adding to the article -- Tarquin 23:19 Jan 21, 2003 (UTC)

You mean I could rephrase the original quote as:

" The notation A = (A[i,j]) means that A[i,j] = A[i,j] for all indices i and j. " -- It seems overcomplicated to introduce an alternative set of nomenclature for this "one page" article. I suggest we either stick to one method throughout to explain matrices or we consider both nomenclatures important enough to be explained as part of the article and explain them and give an example in each case. -- SGBailey 23:32 Jan 21, 2003 (UTC)

More notation issues

I'm confused about notation here. What's with the (parentheses) and [brackets]? When do we use one notation, and when the other? What's the difference, if any, between (aij) and [aij]? MathWorld uses the same notation, but doesn't explain well either.
Herbee 01:05, 2004 Feb 26 (UTC)

The notation here seems consistent: for example, with a vector it's a[i] for the i-th component of the vector, and (ai) for the whole vector written as a list of indexed numbers.
Charles Matthews 09:01, 26 Feb 2004 (UTC)
Yes, I see it now—thanks for kicking my eyes open. I was looking for a deeper meaning in a badly designed page...
Wikipedia turns out to be inconsistent on matrix notation, so there is little point in fixing this one page. We should really convert everything to standard mathematical notation. I might even volunteer, except that I wouldn't know how to track down all the relevant pages. Anyone?
Herbee 12:50, 2004 Feb 26 (UTC)
You could take that to Wikipedia talk:WikiProject Mathematics. Paradoxically (or perhaps not) the maths here grows apace, but the standardisation of how it's written is pretty much neglected.

Charles Matthews 13:43, 26 Feb 2004 (UTC)

Equivalence relations

Could someone do a section on the different equivalence relations that are defined on matrices? There's similarity, but I'm sure I remember one that worked with the transpose.

The latter is in relation to bilinear forms, where A^T M A can replace M by a change of basis. I forget the name for it.

Charles Matthews 09:03, 26 Feb 2004 (UTC)

Matrix multiplication

Perhaps there should be some explanation of why multiplication works the way it does? It seems somewhat arbitrary to me.

Historically it was certainly discovered in relation to choosing new variables in simultaneous linear equations. These days we'd probably say that it is a question of having matrix multiplication match up with composition of linear transformations.

Charles Matthews 09:47, 28 Jan 2004 (UTC)
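To make the composition point concrete, here is a small NumPy sketch (matrices chosen arbitrarily) checking that applying f after g agrees with multiplying by the product of their matrices:

  import numpy as np

  F = np.array([[1, 2],
                [3, 4]])      # matrix of a linear map f
  G = np.array([[0, 1],
                [1, 1]])      # matrix of a linear map g
  x = np.array([5, -2])

  # f(g(x)) computed step by step equals the product F.G applied to x
  assert np.array_equal(F @ (G @ x), (F @ G) @ x)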

I find it particularly obvious when you start by writing a system of 3 equations in 3 variables so that each equation has its terms ordered (say, first the x term, then the y term and last the z term) and terms are aligned vertically. And then you just "factor out" the variables as a vector which multiplies the coefficients on the right. Well, an image is worth a thousand words:
 1 x + 5 y - 1 z  =  9
 2 x - 4 y + 1 z  =  9
-3 x + 3 y - 1 z  =  9

Becomes:

  (  1   5  -1 )   ( x )   ( 9 )
  (  2  -4   1 ) * ( y ) = ( 9 )
  ( -3   3  -1 )   ( z )   ( 9 )

If you wonder about matrix-by-matrix, and not matrix-by-vector, well, it's just like forming the right-hand matrix and the result matrix by juxtaposing column vectors. --euyyn 19:17, 3 February 2007 (UTC)
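In code, euyyn's example can be checked directly; a NumPy sketch using the coefficients above:

  import numpy as np

  A = np.array([[ 1,  5, -1],
                [ 2, -4,  1],
                [-3,  3, -1]])    # the factored-out coefficients
  b = np.array([9, 9, 9])
  xyz = np.linalg.solve(A, b)     # recover (x, y, z)
  assert np.allclose(A @ xyz, b)  # A times the vector gives the system back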

Rotation matrix

3D rotation of any vector (x,y,z) around an axis of direction (a,b,c) by an angle @

We reduce the vector of the axis direction to length 1:
(1/sqrt(a^2+b^2+c^2)) * (a,b,c) = (A,B,C).
Compute the following and you get the result of the rotation:
  [ ( 1 0 0 )          (  0 -C  B )              (  0 -C  B )^2 ]   ( x )
  [ ( 0 1 0 ) + sin@ * (  C  0 -A ) + (1-cos@) * (  C  0 -A )   ] * ( y )
  [ ( 0 0 1 )          ( -B  A  0 )              ( -B  A  0 )   ]   ( z )

(Notice that the third matrix must be squared and then multiplied by (1-cos@).)
Imagine a plane to which the axis is normal and in which the tip of the arrow
(that is, the picture of the vector) lies. In this plane you add an arrow
from the tip in the direction of travel; that is the orientation of the rotation.
And from this you add another one in this plane, at 90 degrees
to the left of the previous one.
The vector (x,y,z) and the result of the formula above are of the same length.
The angle between these two is not the angle of rotation; the tip of the arrow
is rotated in the plane which is perpendicular to the axis.
Extracting axis and angle from a rotation matrix

A rotation matrix D has the properties det D = 1 and D * D^T = E, where D^T
is the transposed matrix (that is, rows and columns interchanged) and
E is the unit matrix.
A matrix can be split into a symmetric part (a(ik) = a(ki)) and an
antisymmetric part (a(ik) = -a(ki)).

So:
  ( a  d  e )           ( 2a   d+g  e+h )           (  0   d-g  e-h )
  ( g  b  f ) = 1/2 * ( d+g   2b   f+i ) + 1/2 * ( g-d   0    f-i )
  ( h  i  c )           ( e+h  f+i  2c  )           ( h-e  i-f   0  )
The antisymmetric part gives the direction of the axis: (i-f, e-h, g-d) * 1/2.
The length of this vector is sin@.
The main diagonal of the matrix gives the trace ("Spur"): a+b+c, and this equals
1 + 2*cos@. From these you get @.
An extra bonus: the affine mappings (if this is the right word), that is, here,
the 3-by-3 matrices, can be split into a symmetric and an antisymmetric
part. The first you explore by means of principal-axis transformation, and the
antisymmetric ones, applied to a vector, correspond to the
cross product:
  (  0  -c   b )   ( x )
  (  c   0  -a ) * ( y ) = (a, b, c) x (x, y, z)
  ( -b   a   0 )   ( z )
Hero van Jindelt
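The recipe above is Rodrigues' rotation formula; here is a NumPy sketch transcribing it (with @ written as angle), together with the axis/angle extraction described afterwards. Function and variable names are invented for the example:

  import numpy as np

  def rotation_matrix(axis, angle):
      A, B, C = axis / np.linalg.norm(axis)   # reduce the axis to length 1
      K = np.array([[ 0, -C,  B],
                    [ C,  0, -A],
                    [-B,  A,  0]])
      # I + sin@ * K + (1 - cos@) * K^2
      return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

  D = rotation_matrix(np.array([1.0, 2.0, 2.0]), 0.5)
  assert np.isclose(np.linalg.det(D), 1)      # det D = 1
  assert np.allclose(D @ D.T, np.eye(3))      # D * D^T = E

  # axis direction from the antisymmetric part, angle from the trace
  axis = 0.5 * np.array([D[2,1] - D[1,2], D[0,2] - D[2,0], D[1,0] - D[0,1]])
  angle = np.arccos((np.trace(D) - 1) / 2)    # trace = 1 + 2*cos@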

Block diagonal matrices

Block diagonal matrices / diagonal block matrices: should there be a separate entry for this type of matrix, or could it be added to diagonal matrix? Chris Wood 20:09, 9 Mar 2004 (UTC)

In the sequel

Under the category of "Linear transformations, ranks and transpose," the second paragraph begins "Here and in the sequel we identify..." In the sequel? SWAdair | Talk 11:46, 24 Mar 2004 (UTC)

Jargon

This page is full of words that someone who doesn't remember this stuff from math class, or never learned it, would not understand... and my math textbook explains a lot of this stuff much more clearly than this page does. Argh. Some changes need to be made, but I'm not sure how to go about that. Braaropolis | Talk 00:13, 28 Jun 2004 (UTC)

  • Well, I disagree that it necessarily needs to be changed a great deal, since it would be pointless to only include the information that the "average" person knows about matrices (which is pretty close to zero, in my opinion). I understand pretty much everything on this page, and well I should, but I think it should stay pretty much as is. Quandaryus 19:38, 5 Sep 2004 (UTC)

Refactoring of article

I agree with User:Braaropolis that the article is in bad shape. It is too long and the scope is too wide. The basic article on matrices should be as accessible as possible, as the topic is so central to linear algebra. I tried reordering the material to make it clearer and moved the content of Partitioning matrices to block matrix. But the article is still too long. Perhaps we should put square matrices into a separate article and move some topics of the matrix article into matrix theory (in the same way graph_(mathematics) is related to graph theory) MathMartin 15:09, 26 Sep 2004 (UTC)

Rings vs. semirings as foundation

The current revision states that the entries of a matrix are generally elements of a ring. This is too specific. Matrix addition and multiplication, as defined here, do not require additive inverses. In fact, these definitions apply unchanged if the underlying algebraic structure R is a semiring. This is of crucial importance in graph theory and formal language theory, since e.g. the algebraic structures underlying weighted graphs can often be arbitrary semirings and do not have to be rings (for example, Kleene algebras). I know this is getting far afield, but the generality of matrices over semirings is essential in many cases, and the distinction of matrices over rings vs. matrices over semirings is often crucial. For example, all sub-cubic-time algorithms for matrix multiplication I'm aware of assume at least matrices over rings and do not generally apply to matrices over semirings. --MarkSweep 07:56, 30 Sep 2004 (UTC)
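To illustrate the semiring point with running code: matrix "multiplication" over the (min, +) semiring computes shortest-path lengths, even though that structure has no additive inverses, so nothing below requires a ring. A minimal Python sketch (the function name is invented for the example):

  import numpy as np

  def minplus_matmul(A, B):
      # the ordinary matrix product with (min, +) in place of (+, *)
      n, k = A.shape
      m = B.shape[1]
      C = np.full((n, m), np.inf)
      for i in range(n):
          for j in range(m):
              C[i, j] = np.min(A[i, :] + B[:, j])
      return C

  W = np.array([[0., 1., np.inf],
                [np.inf, 0., 2.],
                [5., np.inf, 0.]])   # weighted adjacency matrix of a graph
  W2 = minplus_matmul(W, W)          # shortest paths using at most 2 edges
  print(W2[0, 2])                    # 3.0, via the middle vertex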

Reordering sections

Can we move Matrices with entries in arbitrary rings to the bottom, since it's more abstruse than the rest?

Since history has only one entry (the link to matrix theory) can we incorporate it into something near the beginning? RJFJR 16:32, Dec 24, 2004 (UTC)

I moved the history. First, the way it was before, in front of the definition, was inappropriate (you don't get to write the history of things before you define them!). Maybe the history can be moved up, but where? Maybe after Examples, because the sections below it fit together very well. On the other hand, I think in a math article the history should be the last entry. Not because history is not important, but because in math the properties of things are more important than history.
I want to mention that the article Matrix theory advertised as "Main article" is a very poor article. It has no theory, only history and an elementary introduction. I would suggest the history be moved back to the main page, the elementary introduction too, and then, having all the stuff in one place, do lots of thinking about how to organize things better, because the way things are now is not good. Too much stuff!
I agree with you, the entry Matrices with entries in arbitrary rings needs to go towards the bottom. This section is not as elementary as others.
Looking forward to feedback! --Oleg Alexandrov 01:56, 25 Dec 2004 (UTC)


About the definition

The definition currently says that a matrix is a rectangular array of numbers. I'm not sure that's accurate. Isn't a matrix an abstract concept with certain properties? And we represent a matrix as a rectangular array of numbers? (Am I arguing about what the meaning of 'IS' is? :) )

For that matter, can a matrix have something else as a value? In a partitioned matrix do we have a matrix that is a rectangular array of matrices? RJFJR 00:28, Dec 27, 2004 (UTC)


I think the current definition is fine the way it is. Making things more abstract will make things more confusing for the general public, and this is not what we want.
Yes, a matrix can have anything as value. But again, let's keep things simple. Oleg Alexandrov 02:01, 27 Dec 2004 (UTC)

I agree with the objection. If a matrix "is rectangular", then number 0 "is round", say. Avoiding saying that zero is round isn't being "too abstract", just rigorous. 81.36.11.45 (talk) 02:02, 8 July 2008 (UTC)

Removed

The material: Matrix storage uses two conventions: row major and column major ordering. The former means that the matrix is stored such that row elements are packed together contiguously, the latter means that the matrix is stored such that column elements are packed together contiguously.

Belongs in array. It does not refer to the mathematical nature of a matrix but rather to how the values of a matrix are stored in a computer. I am removing it from this article. RJFJR 04:25, Dec 31, 2004 (UTC)

Agree! I also thought that was suspicious (and poorly explained, in addition). Oleg Alexandrov 05:26, 31 Dec 2004 (UTC)
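For reference, the removed convention boils down to a small piece of index arithmetic; a Python sketch (the helper names are made up):

  # storing an m-by-n matrix in one flat list of length m*n
  def row_major_index(i, j, n_cols):
      return i * n_cols + j    # row elements packed contiguously (C style)

  def col_major_index(i, j, n_rows):
      return j * n_rows + i    # column elements packed contiguously (Fortran style)

  flat = [1, 2, 3, 4, 5, 6]    # the 2-by-3 matrix ((1,2,3),(4,5,6)), row major
  assert flat[row_major_index(1, 2, 3)] == 6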


Matrix (mathematics) vs. matrix theory

As of now, there exist two articles on matrices in mathematics, namely Matrix (mathematics) and matrix theory. There is some overlap between them, the logic of splitting the article into two is not clear, and the article matrix theory is very badly written. I suggest that the article Matrix (mathematics) be introductory, listing the definition, examples, and basic properties. The article matrix theory could be the more abstract one. So I think some of the material in these sections needs to be interchanged. I will think more on this. Feedback welcome. Oleg Alexandrov 23:06, 31 Dec 2004 (UTC)


I interpreted it as 'matrix' is the noun and 'matrix theory' is what to do with a matrix. RJFJR 04:06, Jan 7, 2005 (UTC)

I replied on the Talk:Matrix theory page. Oleg Alexandrov 04:36, 7 Jan 2005 (UTC)

template:matrices

template:matrices - for some cohesion among terms. -SV|t 15:27, 27 Apr 2005 (UTC)

A section on encrypting

Shouldn't there be a section here on encryption, since that is one use of matrices?

On Applications

You make no mention of the uses of matrices in computing.

Viewing problem

The problem, somehow, has been solved. Thank you. 203.91.132.17 09:50, 8 November 2006 (UTC)

Error in example

The first example is said to be a 4*3 matrix, but by the above definition, where it states it should be row*column, it is a 3*4 matrix. It's been like that for a long time. 212.108.17.165 10:14, 27 November 2006 (UTC)

Huh? The matrix has 4 rows and 3 columns, so it is a 4-by-3 matrix. -- Jitse Niesen (talk) 11:45, 27 November 2006 (UTC)

On history

What was the contribution of the developers of quantum mechanics to matrices? I've read books that claim matrices were practically their invention but, as explained in the history section here, they're obviously not... --euyyn 19:19, 3 February 2007 (UTC)

Why are magic squares relevant in the history of matrices? If they are, the article fails to point out the historical connection between magic squares and matrices. —Preceding unsigned comment added by 62.194.143.164 (talk) 01:33, 16 February 2009 (UTC)

Further reading...

The introductory paragraph mentions the use of matrices with elements from rings. What's a good reference for further reading? --HappyCamper 00:35, 15 April 2007 (UTC)

Prehistory

Were magic squares really around in prehistoric times, or did someone just get carried away with their terms? Does anyone have a reference? 71.204.151.203 02:49, 15 May 2007 (UTC)

It seems rather unlikely. Prehistory is characterized by an absence of written records, so it's not clear how we would know that magic squares were around at that time. Anyway, there is no reference. I replaced it by something copied from magic square. Thanks for your comment. -- Jitse Niesen (talk) 04:16, 15 May 2007 (UTC)

Still more notational woes!

All seemed to be going well until I ran into the following:

Here and in the sequel we identify R^n with the set of "columns" or n-by-1 matrices. For every linear map f : R^n → R^m ....


This is another example of the problem of mathematical notation. R^n would normally imply that n is the exponent of R, but above it's given a new, overloaded definition. The "identify with" usage would be clearer if it were changed to "define as". The word "with" implies aggregation, which is not what the author intended.

What exactly is a "linear map", and what does f : mean? What does that arrow mean?

All the math on Wikipedia should be translated into C++. I doubt that anybody would try to overload pow(R, n) to mean "the set of n-by-1 matrices", so the math would become much more readable. True, true... I might even say a C+.

216.23.105.3 09:34, 23 May 2007 (UTC)

R^n is an exponentiation: the Cartesian product of n copies of R.
A linear map is a function which is linear, i.e., obeys the superposition principle: f(x+y) = f(x) + f(y) (and is compatible with scaling: f(ax) = a f(x)).
"f:X→Y" means "f, a function from X to Y".
Kaoru Itou (talk) 22:38, 4 February 2009 (UTC)

`m-by-n' or `m-cross-n'

I used to read and pronounce m×n as `m-cross-n'. Is `m-by-n' commonly used in the math community? Shouldn't we make a note that it can be read and pronounced as `m-cross-n' also? 59.178.126.93 17:20, 18 July 2007 (UTC)

For what it's worth, my linear algebra textbooks and professor say by not cross. The former is also consistent with non-mathematical usage (we say two-by-three table not *two-cross-three table). 208.106.1.215 (talk) 18:28, 24 December 2007 (UTC)

Oh my darling

The tune to the ballad Oh My Darling, Clementine is frequently adapted by secondary school teachers worldwide to teach about matrix multiplication:

Row by column, row by column
Multiply them line by line
Add the products, form a matrix
Now you're doing it just fine.

The authorship of this version has been disputed, but is most frequently attributed to the mathematician/musician Aaron B. Barnett, who published this teaching tool in his seminal work, It's Reciprocal.


The text above the line was recently added to the article. There is no reference, I have never heard about this seminal work and I couldn't find anything about it, so I'm moving it here. Does somebody know a reference confirming this? -- Jitse Niesen (talk) 02:06, 9 August 2007 (UTC)

Problem Viewing

Is there an editor of this article who can make the formulas easier to read, especially the part under Sum, i.e.

  A + B = (a_{i,j})_{1≤i≤m; 1≤j≤n} + (b_{i,j})_{1≤i≤m; 1≤j≤n} = (a_{i,j} + b_{i,j})_{1≤i≤m; 1≤j≤n}

As you can see it is readable when you copy and paste, but in the article it is very hard to read. Thanks BigDunc 17:58, 18 September 2007 (UTC)

Notation for Matrix Addition (Sum)

In my opinion, the notation in the explanation of matrix addition is incorrect. The first line of the displayed definition could (or maybe should) be interpreted as "take any a_{i,j}, where i is between 1 and m and j between 1 and n, and add it to any b_{k,l}, where again k is between 1 and m and l between 1 and n" - since i and j appear twice in different parts of the equation, and are free variables in neither of them, they can be renamed.

Therefore, I believe that the first line should be deleted, and only the second should remain. The second line binds both i-s and j-s at the same time, so they cannot be renamed independently.

If nobody answers in 3 days, I will edit the article.

Tom Primožič —Preceding unsigned comment added by 193.77.126.73 (talk) 09:52, 22 December 2007 (UTC)

The definition of addition is given in the second equality. The first line of the definition as you quoted above is notation-setting: it just names the entries of matrices A and B. Notice in particular that it does not name the entries of the matrix A + B; that is the job of the right-hand side of the definition. Michael Slone (talk) 12:08, 22 December 2007 (UTC)
Oh, I get it. Thank you for your explanation! Tom Primožič —Preceding unsigned comment added by 193.77.126.73 (talk) 10:24, 26 December 2007 (UTC)
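For readers who find the index notation opaque, the content of the definition is just entrywise addition; a throwaway NumPy sketch:

  import numpy as np

  A = np.array([[1, 2], [3, 4]])
  B = np.array([[5, 6], [7, 8]])
  # (A + B)[i, j] is A[i, j] + B[i, j] for every i and j
  assert np.array_equal(A + B, np.array([[ 6,  8],
                                         [10, 12]]))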

Generalization to more than 2 dimensions

In programming we would use, say, a 'three-dimensional array' as a sort of matrix with three axes rather than the standard two considered in this article. Such a matrix could be used to hold some values from a three-dimensional space, for example. The article on matrices does not seem to make any reference to any kind of extension from two dimensions to three or more, but surely some results regarding matrices would apply to similar structures with more than just two dimensions. And of course, a one-dimensional 'matrix' would be the same thing as a vector. It seems like a good idea to emphasize common features rather than ignore them, to promote higher levels of understanding and economy of conceptualization. —Preceding unsigned comment added by 220.253.113.241 (talk) 13:47, 13 March 2008 (UTC)

The article you are looking for is Tensor. Though I agree, it seems a bit odd we don't mention tensors at all in this article. Mdwh (talk) 00:28, 5 April 2008 (UTC)
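For what it's worth, numerical libraries support such higher-dimensional arrays directly; a NumPy sketch contrasting the two (shapes chosen arbitrarily):

  import numpy as np

  M = np.arange(6).reshape(2, 3)       # an ordinary 2-by-3 matrix
  T = np.arange(24).reshape(2, 3, 4)   # a 'three-dimensional' array
  print(M.ndim, T.ndim)                # 2 and 3: only M is a matrix
                                       # in the sense of this article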

When is a matrix a vector?

The page currently says: A matrix where one of the dimensions equals one is often called a vector. This was supposed to be the same as the previous version (according to Oleg Alexandrov): "A matrix where only one of the dimensions is higher than one". However, the two phrases do not say the same thing, because the first one only works for two-dimensional matrices; but what happens with matrices that have three dimensions? The second sentence is therefore more correct. Example: a 3x2x2x1 matrix would be called a 'vector' according to the first sentence; however, it is not. —Preceding unsigned comment added by 88.6.21.192 (talk) 23:39, 23 March 2008 (UTC)

A matrix has only two dimensions. Certainly that is the meaning of matrix used in the whole article. A 3x2x2x1 "matrix" would usually be called a tensor (or an array in computer science). -- Jitse Niesen (talk) 23:57, 23 March 2008 (UTC)
Actually, the new version is more correct, because the old definition excluded vectors with one entry. Dcoetzee 17:59, 29 October 2008 (UTC)

Square matrix section

At the top of the article:

''For the square matrix section, see [[Matrix (mathematics)#Square matrices and related definitions|square matrix]]''. <!-- please do not remove this sentence. The page [[square matrix]] redirects here. -->

Contrary to the comment, I don't think this should stay. Square matrix already redirects straight to that section.

CRGreathouse (t | c) 13:05, 9 April 2008 (UTC)

About different notations used in different articles

The following notations are used in the articles Matrix, Minor, Cofactor, and Adjugate matrix:

Notation 1 (most commonly used?) | Notation 2 (most formal)                          | Notation 3            | Notation 4
3,2 entry of A                   | (3,2)th or (3,2)-th (or (3,2)nd?) entry of A      | (3,2) entry of A (*)  | (3,2)-entry of A (**)
3,2 minor of A                   | (3,2)th or (3,2)-th (or (3,2)nd?) minor of A      | (3,2) minor of A      | (3,2)-minor of A
3,2 cofactor of A                | (3,2)th or (3,2)-th (or (3,2)nd?) cofactor of A   | (3,2) cofactor of A   | (3,2)-cofactor of A

(*) Not used in this article. (**) Never used, neither in this article nor in the others.

Syntactic drawback of notation 1. Consider the sentence "the 3,2 entry of a matrix", as opposed to "the (3, 2) entry of a matrix". In the first sentence, the comma may appear to divide "the 3" from "2 entry of a matrix". In the second, this syntactic drawback is avoided and you can even insert a space after the comma (as I suggest to do). The space after the comma can be used only for notations with parentheses (2, 3, 4). It is not a good idea to use it for notation 1.

Doubt about notation 2. (3,2)-th or (3,2)-nd or both? And if (3,2)-th is correct, how do you read it? "Three, two-th" or "third, second"...? The answer is difficult to find in books.

Notation 3 is useful. Notation 3 is not presented in the definition section of this article, although it is based on the standard notation for an ordered pair: (a, b) or <a, b>. Because of the above described syntactic drawback, I would rather use notation 3 than notation 1. Thus, I propose to mention notation 3 in the definition section.

Rationale for notation 4. I cannot see the rationale behind notation 4 (used in the articles Minor, Cofactor, and Adjugate matrix). Is it used in the literature?

Please let me know your opinion. Paolo.dL (talk) 08:15, 19 June 2008 (UTC)

Matrix as general

Do you have an article on matrices in general? tq ..che (talk) 06:09, 13 September 2008 (UTC)

Hello.. anyone? che (talk) 22:32, 13 September 2008 (UTC)

Further reading

Under this heading we have:

A more advanced article on matrices is Matrix theory.

Is that meant to be a joke? The article does little more than list applications and links to other pages. SpinningSpark 22:51, 4 October 2008 (UTC)

Error

The definition of orthogonal matrix is wrong. Boris Tsirelson (talk) 15:09, 29 October 2008 (UTC)

Indeed; I removed it. Of course, I could have corrected it, but I want to make sure that the list is kept short, so I also removed the entries for rotation matrix and idempotent matrix which I deemed less important. -- Jitse Niesen (talk) 15:33, 29 October 2008 (UTC)

Matrices without entries

Does this article really need to contain this section? What is so special about a matrix with zero elements? The formula allows it, and the only interesting thing that could possibly be said about them is that their determinant is formed by the sum of zero elements (which is, by definition, the identity of addition = 0)

The whole "you need to keep track of how many cols and rows there are" is directly implied by the formula and is no more special than needing to remember that a 2x3 matrix is not the same as a 1x6 matrix or a 3x2 matrix

I personally feel that matrices with zero elements are a specific (and boring) case of a normal matrix, and since they have no interesting special properties, they do not deserve the prominence that they hold on this page, since the entire section doesn't say anything that any idiot couldn't work out for themselves from the formula.

Just my 2c worth.

MattTait (talk) 16:07, 8 November 2008 (UTC)


Moved section here:

Matrices without entries

A subtle question that is hardly ever posed is whether there is such a thing as a 3-by-0 matrix. That would be a matrix with 3 rows but without any columns, which seems absurd. However, if one wants to be able to have matrices for all linear maps between finite dimensional vector spaces, one needs such matrices, since there is nothing wrong with linear maps from a 0-dimensional space to a 3-dimensional space (in fact if the spaces are fixed there is one such map, the zero map). So one is led to admit that there is exactly one 3-by-0 matrix (which has 3×0=0 entries; not null entries but none at all). Similarly there are matrices with a positive number of columns but no rows.

Even in absence of entries, one must still keep track of the number of rows and columns, since the product BC where B is the 3-by-0 matrix and C is a 0-by-4 matrix is a perfectly normal 3-by-4 matrix, all of whose 12 entries are 0 (as they are given by an empty sum). Note that this computation of BC justifies the criterion given above for the rank of a matrix in terms of possible expressions as a product: the 3-by-4 matrix with zero entries certainly has rank 0, so it should be the product of a 3-by-0 matrix and a 0-by-4 matrix.[1]

To allow and distinguish between matrices without entries, matrices should formally be defined, in a somewhat pedantic computer science style, as quadruples (A, r, c, M), where A is the set in which the entries live, r and c are the (natural) numbers of rows and columns, and M is the rectangular collection of rc elements of A (the matrix in the usual sense).

0-by-0 matrix

(I'm starting a new subsection to distinguish my comments from the previous unsigned section.)

Because of the connection with linear transformations, it is important to allow 0-by-n and n-by-0 matrices.

In response to MattTait, the determinant of the unique 0-by-0 matrix is not 0, but 1. There are several ways to see this:

  • If one defines the determinant as a sum over permutations of products of entries, there is one permutation of the empty set, and the corresponding product is the empty product, which is 1.
  • If one wants the computation of determinants by expansion by minors to work on nonzero 1-by-1 matrices, the determinant of the 0-by-0 matrix must be 1.
  • One can also compute how the corresponding linear transformation multiplies 0-dimensional volume.
  • The determinant of a block matrix

      ( A  0 )
      ( 0  B )

    is the product of det(A) and det(B). For this to hold when A is 0-by-0 and B is a nonsingular matrix, one must define det(A)=1. --FactSpewer (talk) 06:42, 17 November 2008 (UTC)
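Both conventions discussed above can be checked numerically; at least in recent versions, NumPy implements exactly these (a quick sketch):

  import numpy as np

  B = np.zeros((3, 0))        # the 3-by-0 matrix: no entries at all
  C = np.zeros((0, 4))        # a 0-by-4 matrix
  print((B @ C).shape)        # (3, 4): a zero matrix, each entry an empty sum

  E = np.zeros((0, 0))        # the 0-by-0 matrix
  print(np.linalg.det(E))     # 1.0, the empty product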

Order of a matrix

I don't think this use of "order" is common in current mathematical literature. To me, a matrix of order 3 would more likely be a matrix M such that M^3 = I. I think it is better to stick with the standard terminology "dimensions". --FactSpewer (talk) 06:42, 17 November 2008 (UTC)

F^{n x m}

I think it is more common to write F^{m×n}, not F^{n×m}, for the space of m-by-n matrices with entries in F. (I do understand that it is in bijection with linear maps from F^n to F^m.) --FactSpewer (talk) 06:42, 17 November 2008 (UTC)

External Links

I wish to add an external link to this and other related articles. Please see the Talking Picture Book version of this article. It is just another way of presenting information. It is completely free to the user. I have spent a lot of time creating these articles. Maybe after my death I will get a reward for my hard work, but in the meantime it could be of use to the Wiki users/donors. Wayp123 (talk) 22:40, 7 December 2008 (UTC)

Can you please provide a precise URL to the said article at your page? From a first glance at the site I'm not really in favour. I will also post at Wikipedia talk:Wikiproject Mathematics about your suggestion. Please don't reinsert the links until consensus is reached there. Thanks, Jakob.scholbach (talk) 06:25, 8 December 2008 (UTC)
On the web page the links are labeled Matrices and related articles and Eigenvalue, eigenvector and eigenspace. They are download links to *.bkk files. To view these files you also need to download and install the viewer program called bookbuddi. After the first run of bookbuddi, you can close it and click on the *.bkk files you have downloaded to view them. To get bookbuddi, go to bookbuddi download site 1 or bookbuddi download site 2. Wayp123 (talk) 09:13, 8 December 2008 (UTC)
Our external link guidelines say that we should avoid linking to documents which need external applications. In this case, very few people will have installed the bookbuddi application. I can't even run the application because I don't have Windows. So, I also think that the link is not appropriate. -- Jitse Niesen (talk) 11:34, 8 December 2008 (UTC)
Maybe one day it will be integrated into web browsers, so that it can run without Windows and an external app. How many wiki links are not like this? Wayp123 (talk) 14:03, 8 December 2008 (UTC)
Consensus at Wikipedia talk:WikiProject Mathematics#Some external links is that it is not appropriate to add links to your web-site to Wikipedia articles. Gandalf61 (talk)

2009

References

I cleaned up some typos in the reference citation in the first paragraph of the Basic operations section. While doing so, I noticed that there are many references to "Brown, 1991," but nothing identifying that reference in full. Somebody needs to identify it. Lou Sander (talk) 13:50, 13 January 2009 (UTC)

Further examination shows that there are MANY references in such a condition -- citations referred to only by the author's name and year, with no further detail. Please tell us where to find these things. Lou Sander (talk) 13:56, 13 January 2009 (UTC)

Well, simply click on the year in the reference and the browser jumps to the full reference. Alternatively, scroll down to the "References" section. Jakob.scholbach (talk) 14:12, 13 January 2009 (UTC)
Aha! I see. This is a different scheme than I have encountered on Wikipedia, though I suppose the footnotes and references can be separate. Perhaps one could keep the existing scheme and add the full reference info to the first occurrence of "Brown, 1991." Lou Sander (talk) 16:15, 13 January 2009 (UTC)
The method of reference used in this article is rather standard in featured articles and in fact simply uses the Harvard citation template. I rather enjoy its uncluttered nature. RobHar (talk) 16:40, 13 January 2009 (UTC)
I don't doubt you, and I agree it's uncluttered, but it's still very unfamiliar to me to go to a footnote and then to a reference. Looking at the Harvard citation template didn't help, as it seemed to be talking about cases where the brief citation is in the text, rather than a footnote. I'm new to this Harvard stuff, and I think it could help a lot in some of the articles I work on. Can you point me to some other articles that use the "footnote then reference" scheme? Lou Sander (talk) 17:01, 13 January 2009 (UTC)
For documentation of this method of referencing see this note on "shortened footnotes" and this note on linking footnotes to full references. Gandalf61 (talk) 17:10, 13 January 2009 (UTC)
Thanks! I've been looking for this kind of flexibility for quite a while. Just never came across it before. Now I've got to go and redo dozens and dozens of references. Sheesh! ;-) Lou Sander (talk) 02:44, 14 January 2009 (UTC)

Linear Transformation Section

{{editsemiprotected}}

In the Linear Transformation section, there is a subtle error that can be very confusing. The second sentence: "Any n-by-m matrix A gives rise to a linear transformation R^n → R^m, by assigning to any vector x in R^n the (matrix) product Ax, which is an element in R^m", is incorrect. The statement is true but for an m-by-n matrix, instead of an n-by-m matrix. I hope you can change it because it can be very confusing for people trying to study linear algebra. --Erwing80 (talk) 11:37, 23 January 2009 (UTC)

Done. Jakob.scholbach (talk) 11:46, 23 January 2009 (UTC)

Another error in labeling

The image at the start of the article also has a labeling error. Its heading says "m-by-matrix n." It should say "m-by-n matrix." Lou Sander (talk) 12:54, 23 January 2009 (UTC)

Yeah, that's weird. If you look at the file [1] it is displayed correctly, but on the commons page it is not. Does anybody have a clue how to fix this? Jakob.scholbach (talk) 14:11, 23 January 2009 (UTC)
It looks like there is Wikipedia:SVG Help for this kind of problem. Cenarium (Talk) 15:40, 23 January 2009 (UTC)
Fixed. Converted the relevant text to a path. RobHar (talk) 01:14, 5 February 2009 (UTC)

Every finite group is isomorphic to a matrix group.

Can anybody provide a reference for this claim? I know it's true, but fail to find a reference. Jakob.scholbach (talk) 07:44, 13 April 2009 (UTC)

Isn't it a consequence of Cayley's theorem? If you consider that every permutation is a fr:Matrice de passage (a change-of-basis matrix; sorry, I lack the vocabulary). --El Caro (talk) 10:02, 13 April 2009 (UTC)
I think it is pretty obvious that the regular representation is faithful, but anyway, Google Books has given me the following reference: example 19.2 on page 198 of Louis Halle Rowen's "Graduate algebra: noncommutative view". RobHar (talk) 15:49, 13 April 2009 (UTC)
Good. I will try to round off the history section in the next few days. Do you see any further obstructions for a Good Article nomination? Jakob.scholbach (talk) 19:44, 13 April 2009 (UTC)

Re: Good Article nomination

The article is looking good. One issue that should probably be resolved first is the issue of whether Matrix theory should continue to co-exist with Matrix, or whether one should be merged into the other. See Talk:Matrix_theory#Matrix_vs_matrix_theory.

A couple of other thoughts:

1) It would make sense to move the History section further down, say immediately before or after the Applications section, because

  • Probably most people seeking this page will be mainly looking for the definition and basic properties of matrices, so it would be good for the article to discuss these as early as possible.
  • The History section alludes to many topics that are treated only much later in the article. Readers who have not yet learned, say, the connection between matrices and systems of linear equations, are unlikely to appreciate this section.

2) It may be worth preceding the "Definiteness" subsection with a subsection entitled "Symmetry conditions" (or something like that) into which the definitions and properties of symmetric, skew-symmetric, and Hermitian matrices would be moved. For instance, it is strange that the spectral theorem is mentioned only for positive definite matrices, when in fact it applies to symmetric and Hermitian matrices (and even more) and has little to do with positive definiteness; the mention of this theorem could be moved into that new subsection.

--FactSpewer (talk) 03:38, 19 April 2009 (UTC)

Thanks for the input. I realized 2) right away. For 1), I'm not terribly strongly attached to the history section in that place, but I do think that it provides a leisurely introduction to some of the notions, while still being roughly accessible to a non-math person, thereby also serving as a kind of motivation section. You are right that a student wishing to learn about matrix addition might be less interested in this, though. Finally, history is quite orthogonal to applications, so that would be somewhat of a breach. At groups we did the same structure. But, surely, if you feel strongly, just move it.
About merging: the discussion you link to is relatively old; I guess it was conducted when there was far less content on both pages. Now the matrix theory page does have a big overlap with this page. I personally think that there should be a separate page about more advanced stuff, which should be matrix theory IMO, but given the little content on that page, I guess we should merge. I will set up a merger proposal. Jakob.scholbach (talk) 11:19, 19 April 2009 (UTC)
OK, since there seem to have been no objections to moving History further down, I'll do so now, for the reasons I gave above. --FactSpewer (talk) 04:23, 23 April 2009 (UTC)

Suggestions for future development

I haven't yet had the chance to read this article thoroughly, but it seems good. Here are some deferential suggestions that you might consider for its future development to Featured Article.

  • First and foremost, I would work on improving the organization and writing. Almost every topic relevant to matrices is mentioned here, but I think the presentation and order of topics could be improved so that average readers get more out of the article. In particular, the lead could really benefit. I would try to organize the article to be simple earlier and grow gradually more complicated, and to offer strong guidance to your reader, so that they see where you're headed.
  • Sure. I usually tend to avoid working on the lead too early in the process, since it is likely to change when the article evolves, and also since it is the most difficult part of the article. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)
  • The applications section is good, but perhaps too focused on typical undergraduate topics in physics. It would help to give a broader view, I think. Here are some examples:
    • Matrices in statistical mechanics, e.g., the transfer-matrix method by which Lars Onsager solved the two-dimensional Ising model.
    • Linear dynamical systems of the form dc/dt = M· c, important for modeling the flow of dynamical systems in the neighborhood of fixed points. Normal modes can be viewed as a special case.
    • The normal-mode discussion could be coupled with the discussion of the Hessian, the covariance matrix, and modeling infrared spectra of molecules. Decaying modes (complex eigenvalues) would be a good addition.
    • Discrete Fourier transforms and other discrete transforms, such as the Hadamard.
    • In structural biology, we use matrices to find the closest possible way to overlap two molecules.
    • In engineering, the behavior of linear four-port systems, as in microwave systems.
    • In addition to their leading role in solving linear problems, matrices can be used in non-linear problems as in Prony's method. The general strategy is to concentrate the non-linearity into finding the roots of the characteristic polynomial.

Notice that many of these applications are based on an eigenanalysis of the matrix. That might be worth stressing.

  • All right. I think we should not try to come up with an exhaustive list of possible further applications. (Perhaps create Applications of matrices instead?) Highlighting general principles such as eigenanalysis, and underlining this with a few examples seems better. Another point I deem important: we should not be guided by the nicety of the example, but by the relevance to matrices. In addition, we have to keep a limited overall length in mind. I guess we should not expand the applications section by more than at most 20% of its current length (assuming that the rest of the article stays as long as it is), which means we need to find ways to trim down the current material, probably mostly the physics part. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)
  • A picture of an elliptical Gaussian scatterplot with a superimposed eigenanalysis of its covariance matrix might help a lot in explaining covariance matrices. A picture/animation of an oscillating molecule with the corresponding Hessian would be an excellent tie-in.
  • Elasticity theory might help in explaining linearity along different dimensions. Imagine a jello that is stiffer in one direction than in another...
  • Another helpful example for understanding eigenanalysis might be the inertia tensor of a rigid object. People can intuitively see the eigenaxes and understand that the eigenvalues report on the difficulty of rotating the object about the corresponding eigenaxis. This might lead nicely into a discussion of the gyration tensor, which is often used in molecular dynamics.
  • Yes. However, we don't have much space. Perhaps convey a good amount of motivation by a nice image? Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)
  • I would've liked to have seen more about singular value decomposition, both the theory and its applications, such as the (pseudo)inversion of rectangular matrices, least-squared solutions of overdetermined systems, identification of null spaces, deconvolution of functions into linear combinations of basis functions (done for far-UV circular dichroism spectra), etc.
  • Sure. I have to say I can't fully tell the importance and interdependencies of the various decomposition methods. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)
  • Something about the invariance of eigenvalues and their derived quantities (such as trace, determinant, axiality, rhombicity, etc.) under similarity transforms?
  • Sounds reasonable. At the moment, the abstract linear algebraic side (that is, vector spaces and bases) gets short shrift, but I'm not sure we should spend much more space on that. Perhaps a little. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)
  • Perhaps include something about resultants and differential resultants and elimination theory, which I've used in my own research more than once?
  • You might consider having an early "definitions" section where you describe matrices by types: normal, symmetric, antisymmetric, unitary, Hermitian, diagonal, tridiagonal, band diagonal, defective, triangular, block, sparse, etc. You could show that any matrix can be decomposed into the sum of a symmetric and an antisymmetric part. A few basic facts might be good to add, such as the transposition of a matrix product, the invariance of the eigenvalues under similarity transforms, the distinction between right and left eigenvectors, Cholesky decomposition, etc.
  • Hm. We have List of matrices. Pure definitions sections tend to be only loosely integrated to the rest of the article, and potentially boring, too. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)
  • Mention some "speciality" matrices such as Hilbert matrices, Vandermonde matrices and Toeplitz matrices? Of course, there are so many, it'll be difficult to choose which ones to mention.
  • I'd say we should avoid arid collections of information. We have list of matrices (which I'm currently trying to give more structure). Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)
  • I think the material on Gaussian elimination/elementary matrices might be understandable by many readers, if you fleshed it out and perhaps illustrated it more. The same is true for determinants, which I would almost count as basic as addition, multiplication and transposition.
  • Right. By the 2nd, do you mean we should merge determinants into the basic section? (I would not do that.) Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)

I hope these suggestions are helpful. I'll re-read the article again tonight and think more about it. Good work! Proteins (talk) 18:57, 22 April 2009 (UTC)

Thanks muchly. (I levelled down your heading per the guidelines here.) I will have to learn about many topics you mention. We will have trouble fitting all this into the article, though. Jakob.scholbach (talk) 19:14, 22 April 2009 (UTC)
Now that the article has been promoted to GA status, we can move forward. Are you or anybody else willing to work together on these points (and possible further ones) in order to bring the article to FA status?
While I like all of the above suggestions, I want to emphasize again: for space reasons, we cannot afford to write about topics that are very nice but only loosely related to matrices. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)

Explanation of a few edits to the introduction

The data indexed by matrices depends on two parameters only, not multiple parameters; anyway, is this worth saying?

"Multiple linear" sounds a little too much like "multilinear", and anyway, "multiple linear equations" are more commonly called a "system of linear equations".

I made the "connection" between matrices and linear transformations more specific by saying that matrices represent linear transformations.

It is strange to suggest that "matrix multiplication" is not elementary.

I mentioned briefly the motivation behind matrix multiplication.

To avoid dwelling too long in this introduction on the noncommutativity, I shortened the discussion and instead linked to commutative.

I deferred the definition of square matrices to later in the article, by linking there.

"Refined" is a strange adjective to apply to the notions of determinant, eigenvalues, etc.

Not every square matrix has an inverse.

Merged a one-sentence paragraph into the previous one.

WardenWalk (talk) 03:50, 28 April 2009 (UTC)

Fair enough. Probably I should have spent more care on the lead. Jakob.scholbach (talk) 11:56, 28 April 2009 (UTC)

Introduction

I feel that the introduction goes into too many details regarding applications. I would advocate moving most of these details into the applications section. Also, regarding the sentence about sparse matrices, it seems strange to single out the finite element method, when in truth sparse matrices occur in almost all applications of matrices. --FactSpewer (talk) 17:14, 30 May 2009 (UTC)

Disagree. The lead should be an introduction and also a summary of the article. Applications should be in the lead. But I think we should move that sparse matrices sentence into the application part in the first paragraph. Visit me at Ftbhrygvn (Talk|Contribs|Log|Userboxes) 18:00, 30 May 2009 (UTC)
Dear Ftbhrygvn, I agree with you that the lead should be an introduction and also a summary of the article. Just to clarify, I was not advocating eliminating all mention of applications in the introduction. I just feel that the introduction should serve to say what the article contains as briefly as possible, and then readers can continue reading the details that interest them. So I would be happy if the part about applications in the introduction were abbreviated to something like
"Matrices find applications throughout the sciences and engineering. They appear even in some branches of music theory."
with a link to the later section on applications. This also has the advantage that it removes the implication that matrices are used only in some specific sciences. --FactSpewer (talk) 19:37, 30 May 2009 (UTC)
I agree that it is a little bit overly detailed. However, if we are aiming for FA, the content of the applications section will probably have to be thoroughly revised anyway, accordingly the lead will change too. I usually tend to work on the lead only at the very end of the editing process. Jakob.scholbach (talk) 10:11, 31 May 2009 (UTC)
Yes, that's reasonable. --FactSpewer (talk) 14:17, 31 May 2009 (UTC)

Help improving this article

Fellow Wikipedians:
For my first time I am planning to go for some big contributions to Wikipedia, and my first step is to bring a GA article to FA. I have quite a number of choices and I decided to go for Matrix, which I am familiar with. I want some SERIOUS Wikipedians to work together with me on this article until it reaches FA status. Visit me at Ftbhrygvn (Talk|Contribs|Log|Userboxes) 07:31, 29 May 2009 (UTC)
Current Group:(Feel free to add yourself)

  1. Ftbhrygvn
  2. Jakob.scholbach
  3. FactSpewer —Preceding undated comment added 17:09, 30 May 2009 (UTC).
I'm concerned about the intro. According to WP:LEAD, the lead section "should be written in a clear, accessible style." For a basic math article like this, there is no reason not to make the intro understandable to someone unfamiliar with the topic. We should not presume readers know double-subscript notation, or understand what is meant by "the usual identities", or use obscure constructions like "the identity AB=BA fails." I also think we should not introduce side topics like vectors and tensors before explaining the basics. I made some edits in an attempt to improve the intro which were promptly reverted en masse [2]. I don't wish to get into an edit war here, but I don't find the current intro acceptable for a GA, let alone an FA.--agr (talk) 14:27, 22 July 2009 (UTC)
I apologize for reverting your edit without much talk. You are absolutely free to re-revert it and work on it; I was just overly lazy. I disliked the formatting (introducing "see below" links in the lead, line breaks) and, more importantly, the addition of a mistake ("AB and BA are both defined only for square matrices"). But I now realize that putting in the concrete example is probably helpful. It is just now that I see there has been a peer review, with a couple of good comments. I'm personally very busy now, but the article definitely could and should become better! Jakob.scholbach (talk) 15:49, 22 July 2009 (UTC)
Thanks for the apology. Of course I have no problem with your correcting my foolish error. I'll make another pass with your other suggestions in mind.--agr (talk)

Infinitesimal Matrix

Where can I find information on infinitesimal matrices? mezzaninelounge (talk) 02:12, 23 July 2009 (UTC)

Definition

Is a matrix really a rectangular array of numbers? That's a description of a particular typographic representation, not of what a matrix is. It's an object with internal structure that gives it varying algebraic properties... or something. There has to be a better definition. As it stands, isn't it sort of like saying "Addition is putting two numbers together with a little cross between them"? —Preceding unsigned comment added by 76.175.72.51 (talk) 04:18, 30 July 2009 (UTC)

Matrices are two-dimensional arrays, which gives them an implicitly rectangular shape. From Merriam-Webster.com:

5 a: a rectangular array of mathematical elements (as the coefficients of simultaneous linear equations) that can be combined to form sums and products with similar arrays having an appropriate number of rows and columns b: something resembling a mathematical matrix especially in rectangular arrangement of elements into rows and columns c: an array of circuit elements (as diodes and transistors) for performing a specific function

I cannot find any definition that does not refer to matrices as "rectangular array". DKqwerty (talk) 04:39, 30 July 2009 (UTC)
I think one could say an n×m matrix is an n-tuple of m-tuples (or vice versa) generally organised as a rectangular array of numbers, but that seems a bit pedantic. The analogue would be saying addition is putting two numbers together generally denoted by placing a cross between them. And addition IS just putting two numbers together; it just so happens that it satisfies other properties, too: one might say addition is an associative commutative binary operation. A matrix however has various interpretations (linear transformations, coefficients of systems of linear equations, etc), and these come after and are not intrinsic to the notion of a matrix. I do sympathise with the feeling that "rectangular array" is somehow a cop out. However, I think it's the truth. My two cents. RobHar (talk) 04:50, 30 July 2009 (UTC)

Rectangular is not just typographical. It means that each row has the same number of entries. Many computer languages represent two dimensional arrays as a vector of vectors and do not require each element vector to have the same number of entries. These would not be matrices in general.--agr (talk) 17:43, 2 August 2009 (UTC)
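agr's point is easy to see in code; a Python sketch (values arbitrary):

  ragged = [[1, 2, 3], [4, 5]]      # a legal vector of vectors, but not a matrix
  rect   = [[1, 2, 3], [4, 5, 6]]   # rectangular: every row has length 3

  import numpy as np
  M = np.array(rect)                # fine: a 2-by-3 matrix, M.shape == (2, 3)
  # np.array(ragged) would not yield a two-dimensional array, precisely
  # because the rows have different numbers of entries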

Theoretically, a more abstract definition could be made. In practice, the closest to this I've met is defining matrices as special kinds of tensors. (This possible approach was mentioned in Shilov's classical Introduction to the theory of linear spaces.) However, I do not think that either the classical treatment of tensors or a more modern tensor (intrinsic definition) approach would be of any use here. (Perhaps one could add a few words at the end of the matrix article about matrices being treated as special cases of tensors, with a link, though, if this isn't done already.)
The more "algebraic" abstract definition would be to consider matrices as (indexed) families, where the indices run over a cartesian product of two sets. The advantage with that is of course that it immediately gives a sense to the term "infinite matrices". However, "matrices" thus defined would just form a rather special case of families in general. I assume that this is a reason why mathematicians do not seem to bother overmuch with such an abstract definition.
WP has no reason to bother more than mathematicians in general do, I think. JoergenB (talk) 21:26, 5 November 2009 (UTC)
Note also that a matrix is not a tensor. It can be used to represent a particular type of tensor (most commonly linear maps). In the end a matrix is just what it is: a rectangular array, with no particular algebraic properties. (TimothyRias (talk) 11:07, 6 November 2009 (UTC))
... or a doubly indexed family (a_{ik}) with (i,k) ranging over I×K, yes. (However, while a tensor has more structure than the matrix that, given a basis, might represent it, it is not uncommon to consider tensors as a kind of generalisation of, inter alia, matrices, as this article actually does. On the other hand, the mention in Matrix (mathematics)#Abstract algebraic aspects and generalizations is IMHO sufficient; but I'll add a "See also" link.) JoergenB (talk) 22:54, 6 November 2009 (UTC)
How about the following definition (not that I've seen it in print anywhere). A matrix is a quadruplet (r, c, E, f) where r, c ∈ N (with of course 0 ∈ N; how tiresome that this basic truth is not yet believed by everyone), E is a set (the one where the entries live), and f is a map [r]×[c]→E (where [r] designates your favorite fixed r-element interval of N). This dispenses with the graphical representation of the matrix, and makes a certain number of other points clear, like the fact that the indices always range over a finite rectangular (but possibly empty) set, that matrices can be distinguished just by the domain of their entries (a zero binary matrix (with entries in {0,1}) is not automatically identified with a matrix of the same size with complex or polynomial or whatever entries; of course people may choose to identify them nonetheless, but this is much easier than to "unidentify" things that are equal by definition), and most importantly it allows one to distinguish various types of empty matrices (which would not be the case if the matrix were defined just as the function f); see the discussion of such matrices in the article. Marc van Leeuwen (talk) 12:44, 6 November 2009 (UTC)
Wow, this is actually rather close to Bourbaki's definition, though, clearly, they think requiring [r] and [c] to be subsets of N is needlessly restrictive (their definition is in Algebra I, Section 10). Of course, I still think "rectangular array" is the best way to go. Though mentioning a definition that doesn't require visualizing something is appropriate. RobHar (talk) 13:12, 6 November 2009 (UTC)
There is a fundamental difference in approach, because Bourbaki does not define what an unqualified matrix is, just what a matrix of type (I,K) over H is; in other words, you have to specify the dimensions and the kind of elements before you can even start introducing a matrix. It means, for instance, that the question of what type some matrix has is answered trivially by (I,K), since the matrix must have been introduced as a matrix of type (I,K). However, I notice that Bourbaki does not himself live up to this discipline, and frequently discusses matrices without specifying their type first, though usually they are then given by an expression that suggests the type. Moreover, and in conflict with this usage, he states that any matrix over H of some type (I,K) where either I or K is empty is identified with the family of elements of H indexed by the empty set (bottom of the cited page). This makes it impossible to recover the other (non-empty) index set from the empty matrix. It does not take long for this to lead into trouble, though Bourbaki does not seem to notice. If you look at the definition of matrix product (on the next page, 339), you can see that in particular the product of a matrix of type (I,Ø) with one of type (Ø,L) is one of type (I,L) with (independently of the multiplication operation f used) all entries zero (because given by an empty sum). In other words, the product of the empty matrix with itself has multiple conflicting definitions, equating it to a zero matrix of any dimensions one wishes. Marc van Leeuwen (talk) 13:40, 7 November 2009 (UTC)
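To spell out the empty-sum point with the usual entrywise formula: for A of type (I,Ø) and B of type (Ø,L),

    (AB)_{i,l} = \sum_{k \in \emptyset} a_{i,k} b_{k,l} = 0 \qquad \text{for all } i \in I,\ l \in L,

so AB is forced to be the I×L zero matrix, whatever I and L happen to be.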
I don't actually view this as a fundamental difference. I believe it's clear from their definition that a "matrix" (unqualified) is a matrix of some type (I,K) over some H; it's rather common to leave that unstated. Furthermore, if you want to multiply an empty matrix by another, you know they both have some type, (I,Ø) and (Ø,L) respectively, so their product should be of type (I,L), and there's nothing else their entries should be but zero. It's never impossible to recover the other index set, because it's part of the data of saying the word "matrix". I fail to see the problem. This is not the forum to discuss the understanding of Bourbaki, though; it's to discuss the wiki article "Matrix". RobHar (talk) 15:39, 7 November 2009 (UTC)
The problem is that a matrix of some type (I,K) can simultaneously be of some other type (I′,K′), and this happens (only) for empty matrices. If one defines, as Bourbaki does, a matrix as no more than a family of entries (or equivalently as a map from an index set to the domain H of entries), then this means one can distinguish only a single empty matrix, because there is only a single empty family of elements of H (which in turn is because there is only a single empty set Ø; any Cartesian product I×Ø or Ø×L is equal to Ø, and there is no way to tell them apart). Quoting Bourbaki:

Every matrix over H for which one of the indexing sets I, K is empty is identical with the empty family of elements of H; it is also called the empty matrix.

So the empty matrix has not one type (I,Ø) or (Ø,L), but all those types at once. This means one cannot, as Bourbaki does, talk of the indexing set of the rows (or columns) of a given matrix; this is where his definition of matrix multiplication is broken. As I said, one could insist that a matrix is never considered by itself but is always introduced as one of a previously specified type; this requires extreme discipline (more than Bourbaki brings up) and also makes it impossible to ever consider a collection of matrices with varying types. In my opinion it is much better to avoid all this mess, and simply make the type part of the matrix data itself; then every matrix "knows" what type it has, and one can distinguish empty matrices of different types. Marc van Leeuwen (talk) 09:07, 13 November 2009 (UTC)
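A minimal sketch in Python of the "type as part of the matrix data" idea; the class and field names are invented for illustration, not taken from any library.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Matrix:
        rows: int          # r
        cols: int          # c
        entries: tuple     # the map f, flattened in row-major order

    # Two empty matrices of different types stay distinguishable:
    a = Matrix(3, 0, ())   # type (3, 0): three rows, no columns
    b = Matrix(0, 5, ())   # type (0, 5): no rows, five columns
    print(a == b)          # False, even though both have no entries at all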

2010

Definiteness

"negative to its transpose: A = AT, respectively" should change to "negative to its transpose: A = -AT, respectively" —Preceding unsigned comment added by Sassanh (talkcontribs) 10:08, 7 January 2009 (UTC)

Fixed. Gandalf61 (talk) 11:06, 7 January 2009 (UTC)
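(A concrete instance of the corrected condition: $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ satisfies $A = -A^\mathsf{T}$, so it is skew-symmetric.)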


The sentence "A matrix is positive definite if and only if all its eigenvalues are positive." is wrong: there are matrices which have only positive eigenvalues but aren't positive definite. Consider for example the matrix

    A = \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix}.

The matrix has only positive eigenvalues, but if

    x = \begin{pmatrix} 1 \\ -1 \end{pmatrix},

then

    x^\mathsf{T} A x = 1 - 3 + 1 = -1 < 0.
I think the sentence should be replaced by:

"A symmetric matrix is positive definite if and only if all its eigenvalues are positive." —Preceding unsigned comment added by 130.83.228.13 (talk) 09:53, 28 January 2009 (UTC)

The definition of definite matrices applies to symmetric matrices only, so your example is not admissible, so to speak. Jakob.scholbach (talk) 10:28, 28 January 2009 (UTC)
For clarity, I will add the word "symmetric". (If interpreted literally as written, one half of the statement says that "a matrix whose eigenvalues are positive is positive definite", which, as both of you have pointed out, is either false or meaningless if the matrix is not assumed to be symmetric. Since the premise of the statement does not assume that the matrix is positive definite, it cannot be viewed as assuming that the matrix is symmetric automatically.) FactSpewer (talk) 21:44, 18 April 2009 (UTC)
I actually disagree. Your example does NOT have all positive numbers: it has 3, and "0". Zero is NOT positive, and therefore your example is not a counterexample to the conjecture raised in the article. Are there any other examples you know of that contradict what it originally stated? Because otherwise, it's more accurate to keep it the original way. —Preceding unsigned comment added by 64.233.227.21 (talk) 02:05, 13 March 2010 (UTC)
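(For the record, with the upper triangular example above the eigenvalues are read off from the characteristic polynomial:

    \det(A - \lambda I) = (1 - \lambda)^2,

so both eigenvalues equal 1 and are positive; the 3 and the 0 are entries of the matrix, not eigenvalues.)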

Electronics

Under "Applications", I have added a subsection Electronics, very briefly describing the use of matrices in electronic circuit calculations. I first wrote this for the Dutch Wikipedia (nl:Matrix (wiskunde)#Elektronica: vierpoolmodel). However, the only sources I have available are in Dutch. If anyone is able to add a source in English, or to further improve this subsection, I would appreciate that.

--HHahn (Talk) 10:43, 6 January 2010 (UTC)

I can't find any sources for the type of matrix that you describe. The nearest thing I can find is the impedance matrix described here and at Equivalent impedance transforms#2-terminal, n-element, 3-element-kind networks. However, that is not the same as the matrix you describe, as it transforms a current vector into a voltage vector. Also, surely Kirchhoff's circuit laws say that for a component with one input and one output, input current = output current, so the bottom row of the matrix you describe will always be (0 1), won't it? Gandalf61 (talk) 12:04, 6 January 2010 (UTC)
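A sketch under the usual two-port "chain" (ABCD) convention, which may or may not be the convention HHahn's Dutch source uses:

    \begin{pmatrix} V_1 \\ I_1 \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} V_2 \\ I_2 \end{pmatrix}.

A single series impedance Z gives $\begin{pmatrix} 1 & Z \\ 0 & 1 \end{pmatrix}$, whose bottom row is indeed (0 1) because the input and output currents coincide; a shunt admittance Y, however, gives $\begin{pmatrix} 1 & 0 \\ Y & 1 \end{pmatrix}$, so the bottom row is not always (0 1).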

Lacking

The page is lacking several things. For example: let A be any square matrix; then what is sin(A)? Or let f(t) be any function; then what is f(A)? A general way to compute these things is missing. —Preceding unsigned comment added by 128.100.86.53 (talk) 20:58, 3 February 2010 (UTC)
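One standard way to make sense of f(A), sketched here for the diagonalizable case only: if $A = P D P^{-1}$ with $D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, then

    f(A) = P \operatorname{diag}(f(\lambda_1), \dots, f(\lambda_n)) P^{-1},

which defines $\sin(A)$, $e^A$, and so on; alternatively one can substitute A into the power series of f when it converges. See matrix function.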

Ordering

How does one order matrices? There are some rules, although not all matrices are orderable. Jackzhp (talk) 03:27, 7 March 2010 (UTC)

The question is unclear, since a matrix is not a set. The matrix entries can be ordered in many ways, like by rows, by columns, or by increasing size. Probably this is not what you meant though. Marc van Leeuwen (talk) 06:17, 7 March 2010 (UTC)



WHY?

I understand the basic principles of a matrix, but I still don't understand, WHY?

Why isn't multiplication as simple as addition? What usefulness does multiplication of matrices even HAVE?

If someone could clarify it here, or in the article, that would be great. —Preceding unsigned comment added by 64.233.227.21 (talk) 02:07, 13 March 2010 (UTC)

Your question is answered in the article, but if you still don't understand, ask your teacher. Here's a short answer. If we lived in a one-dimensional universe, we wouldn't need matrices. But we live in a 3 (or 4) dimensional universe, so we need matrices. Rick Norwood (talk) 12:41, 13 March 2010 (UTC)

Multiplication of matrices is defined in the rather non-intuitive way that it is so that, when matrices are used to represent linear transformations, multiplication of matrices corresponds to composition of transformations. Gandalf61 (talk) 14:32, 13 March 2010 (UTC)
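A small worked instance of the composition point: take $B = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$ (scaling by 2) and $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ (rotation by 90°). Then

    AB = \begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix},

which is exactly "scale, then rotate" as a single map: $(AB)x = A(Bx)$ for every vector $x$. The entrywise product $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$, the "as simple as addition" candidate, would not represent the composition of the two transformations.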

Matrix subscripts

I noticed that all the subscripts were like $a_{i,j}$. However, I was always taught to do $a_{ij}$ unless either the column or row in the index was multiple digits, such as $a_{11,2}$. Maybe it's too minor to worry about, but I still feel that a scientific wiki should be notated perfectly. Thoughts?

BrainFRZ (talk) 02:19, 19 March 2010 (UTC)


Edit: Oh, and we should probably also point out the difference in notation so that someone who doesn't know about matrices doesn't mistake it for a vector subscript. —Preceding unsigned comment added by BrainFRZ (talkcontribs) 02:23, 19 March 2010 (UTC)

The perfect notation is the one with a comma. Omission of commas is an abbreviation inspired by laziness (the economy of space obtained is totally negligible, and readability actually decreases) that can be unambiguous if used in restricted situations as you indicated, but creates difficulty for a consistent presentation when it has to be abandoned (if a matrix has more than 9 rows or columns, do you put in commas everywhere, or only past row/column 9?). A similar situation exists for symbolic indices. Many authors write $a_{ij}$, but to me that looks like a product, even more so in $a_{2j}$, which in principle is not ambiguous, but looks awful. So putting in commas always seems the most consistent policy. Marc van Leeuwen (talk) 08:39, 19 March 2010 (UTC)
I strongly agree with Marc. Rick Norwood (talk) 11:56, 20 March 2010 (UTC)
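(Concretely, once an index exceeds 9 the comma-free form becomes genuinely ambiguous: $a_{112}$ could mean either $a_{1,12}$ or $a_{11,2}$.)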

External Links - Source Code

Hi,

I've got an article with code in C++ which is very useful for people reviewing these concepts.

http://www.oriontransfer.co.nz/blog/2009-05/matrix-mathematics

This could be a useful external link.

Kind regards, Samuel—Preceding unsigned comment added by 60.234.246.33 (talkcontribs)

I don't know by which section of WP:NOT, maybe WP:NOTREPOSITORY, but this is not the place for posting such source code. Marc van Leeuwen (talk) 15:20, 13 April 2010 (UTC)

"Vertical shear" really horizontal?

The figure labeled "vertical shear", and the associated transform matrix, shows what I think is usually described as a horizontal shear. Gwideman (talk) 21:14, 25 September 2010 (UTC)

I agree, and our article on shear mappings agrees too. I have fixed the description. Gandalf61 (talk) 07:48, 26 September 2010 (UTC)
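For reference, with the convention used at shear mapping, a horizontal shear is

    \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + ky \\ y \end{pmatrix},

moving each point parallel to the x-axis by an amount proportional to its y-coordinate.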

Assessment comment

The comment(s) below were originally left at Talk:Matrix (mathematics)/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

Good, well explained; needs more on history and applications. Tompw 14:06, 5 October 2006 (UTC) And also more references. Geometry guy 22:02, 9 June 2007 (UTC)

Last edited at 22:02, 9 June 2007 (UTC). Substituted at 21:35, 3 May 2016 (UTC)
