
Talk:Matrix (mathematics)


Matrix (mathematics) has been listed as one of the Mathematics good articles under the good article criteria. If you can improve it further, please do so. If it no longer meets these criteria, you can reassess it.

Article milestones
April 27, 2009 – Good article nominee – Listed
June 21, 2009 – Peer review – Reviewed
Current status: Good article

This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as GA-class on Wikipedia's content assessment scale.
This article has been rated as Top-priority on the project's priority scale.


External Links

I wish to add an external link to this and other related articles. Please see the Talking Picture Book version of this article. It is just another way of presenting information. It is completely free to the user. I have spent a lot of time creating these articles. Maybe after my death, I will get a reward for my hard work, but in the meantime it could be of use to the Wiki users/donors. Wayp123 (talk) 22:40, 7 December 2008 (UTC)[reply]

Can you please provide a precise URL to the said article at your page? From a first glance at the site I'm not really in favour. I will also post at Wikipedia talk:Wikiproject Mathematics about your suggestion. Please don't reinsert the links until consensus is reached there. Thanks, Jakob.scholbach (talk) 06:25, 8 December 2008 (UTC)[reply]
On the web page the links are labeled "Matrices and related articles" and "Eigenvalue, eigenvector and eigenspace". They are download links to *.bkk files. To view these files you also need to download and install the viewer program called bookbuddi. After the first run of bookbuddi, you can close it and click on the *.bkk files you have downloaded to view them. To get bookbuddi, go to bookbuddi download site 1 or bookbuddi download site 2. Wayp123 (talk) 09:13, 8 December 2008 (UTC)[reply]
Our external link guidelines say that we should avoid linking to documents which need external applications. In this case, very few people will have installed the bookbuddi application. I can't even run the application because I don't have Windows. So, I also think that the link is not appropriate. -- Jitse Niesen (talk) 11:34, 8 December 2008 (UTC)[reply]
Maybe, one day, it will be integrated into web browsers, so that it can run without Windows or an external app. How many wiki links are not like this? Wayp123 (talk) 14:03, 8 December 2008 (UTC)[reply]
Consensus at Wikipedia talk:WikiProject Mathematics#Some external links is that it is not appropriate to add links to your web-site to Wikipedia articles. Gandalf61 (talk)

Definiteness

"negative to its transpose: A = AT, respectively" should change to "negative to its transpose: A = -AT, respectively" —Preceding unsigned comment added by Sassanh (talkcontribs) 10:08, 7 January 2009 (UTC)[reply]

Fixed. Gandalf61 (talk) 11:06, 7 January 2009 (UTC)[reply]


The sentence "A matrix is positive definite if and only if all its eigenvalues are positive." is wrong: There are matrices which have only positive eigenvalues but aren't positive definite. Consider for example the matrix

The matrix has only positive eigenvalues but if

then


I think the sentence should be replaced by:

"A symmetric matrix is positive definite if and only if all its eigenvalues are positive." —Preceding unsigned comment added by 130.83.228.13 (talk) 09:53, 28 January 2009 (UTC)[reply]

The definition of definite matrices is applied to symmetric matrices only, so that your example is not admissible, so to say. Jakob.scholbach (talk) 10:28, 28 January 2009 (UTC)[reply]
For clarity, I will add the word "symmetric". (If interpreted literally as written, one half of the statement says that "a matrix whose eigenvalues are positive is positive definite", which as both of you have pointed out, is either false or meaningless if the matrix is not assumed to be symmetric. Since the premise of the statement does not assume that the matrix is positive definite, it cannot be viewed as assuming that the matrix is symmetric automatically.) FactSpewer (talk) 21:44, 18 April 2009 (UTC)[reply]
I actually disagree. Your example does NOT have all positive numbers: it has 3, and "0". Zero is NOT positive, and therefore your example is not a counterexample to the conjecture raised in the article. Are there any other examples you know of that contradict what it originally stated? Because otherwise, it's more accurate to keep it the original way.
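
For later readers, a standard example of the phenomenon discussed above (an illustration supplied here; the original poster's matrix may well have been a different one) is the non-symmetric matrix

    A = \begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix}, \qquad
    x = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
    x^T A x = 1 - 4 + 1 = -2 < 0.

Both eigenvalues of A equal 1 and are therefore positive, yet the quadratic form takes a negative value, which is exactly why the proposed wording restricts the statement to symmetric matrices.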

References

I cleaned up some typos in the reference citation in the first paragraph of the Basic operations section. While doing so, I noticed that there are many references to "Brown, 1991," but nothing identifying that reference in full. Somebody needs to identify it. Lou Sander (talk) 13:50, 13 January 2009 (UTC)[reply]

Further examination shows that there are MANY references in such a condition -- citations referred to only by the author's name and year, with no further detail. Please tell us where to find these things. Lou Sander (talk) 13:56, 13 January 2009 (UTC)[reply]

Well, simply click on the year in the reference and the browser jumps to the full reference. Alternatively, scroll down to the "References" section. Jakob.scholbach (talk) 14:12, 13 January 2009 (UTC)[reply]
Aha! I see. This is a different scheme than I have encountered on Wikipedia, though I suppose the footnotes and references can be separate. Perhaps one could keep the existing scheme and add the full reference info to the first occurrence of "Brown, 1991." Lou Sander (talk) 16:15, 13 January 2009 (UTC)[reply]
The method of reference used in this article is rather standard in featured articles and in fact simply uses the Harvard citation template. I rather enjoy its uncluttered nature. RobHar (talk) 16:40, 13 January 2009 (UTC)[reply]
I don't doubt you, and I agree it's uncluttered, but it's still very unfamiliar to me to go to a footnote and then to a reference. Looking at the Harvard citation template didn't help, as it seemed to be talking about cases where the brief citation is in the text, rather than a footnote. I'm new to this Harvard stuff, and I think it could help a lot in some of the articles I work on. Can you point me to some other articles that use the "footnote then reference" scheme? Lou Sander (talk) 17:01, 13 January 2009 (UTC)[reply]
For documentation of this method of referencing see this note on "shortened footnotes" and this note on linking footnotes to full references. Gandalf61 (talk) 17:10, 13 January 2009 (UTC)[reply]
Thanks! I've been looking for this kind of flexibility for quite a while. Just never came across it before. Now I've got to go and redo dozens and dozens of references. Sheesh! ;-) Lou Sander (talk) 02:44, 14 January 2009 (UTC)[reply]

Linear Transformation Section

{{editsemiprotected}}

In the Linear Transformation section, there is a subtle error that can be very confusing. The second sentence, "Any n-by-m matrix A gives rise to a linear transformation Rn → Rm, by assigning to any vector x in Rn the (matrix) product Ax, which is an element in Rm", is incorrect. The statement is true for an m-by-n matrix, not for an n-by-m matrix. I hope you can change it because it can be very confusing for people trying to study linear algebra. --Erwing80 (talk) 11:37, 23 January 2009 (UTC)[reply]

Done. Jakob.scholbach (talk) 11:46, 23 January 2009 (UTC)[reply]
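
For later readers, the corrected dimension count in display form (a routine check, spelled out here only for illustration):

    A \in \mathbb{R}^{m \times n}, \quad x \in \mathbb{R}^{n}
    \;\Longrightarrow\; Ax \in \mathbb{R}^{m}, \qquad
    (Ax)_i = \sum_{j=1}^{n} A_{ij} x_j \quad (i = 1, \dots, m),

so an m-by-n matrix indeed gives a linear transformation from Rn to Rm.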

Another error in labeling

The image at the start of the article also has a labeling error. Its heading says "m-by-matrix n." It should say "m-by-n matrix." Lou Sander (talk) 12:54, 23 January 2009 (UTC)[reply]

Yeah, that's weird. If you look at the file [1] it is displayed correctly, but on the commons page it is not. Does anybody have a clue how to fix this? Jakob.scholbach (talk) 14:11, 23 January 2009 (UTC)[reply]
It looks like there is Wikipedia:SVG Help for this kind of problem. Cenarium (Talk) 15:40, 23 January 2009 (UTC)[reply]
Fixed. Converted the relevant text to a path. RobHar (talk) 01:14, 5 February 2009 (UTC)[reply]

A better introduction than "rectangular array of numbers"

Does anyone have a good idea what to write in the introduction? In higher-level mathematics, and mostly in system theory, one of the most important things is that a matrix should not be viewed as an array of numbers; instead, numbers should be viewed as 1×1 matrices. --131.188.3.21 (talk) 16:54, 15 March 2009 (UTC)[reply]

I don't think this is a good explanation for somebody who does not yet know matrices. If you define numbers via matrices, how do you define matrices then (without using numbers)? Jakob.scholbach (talk) 07:23, 16 March 2009 (UTC)[reply]
You can define them, for example, as a collection of vectors, or as a description of a transformation. But you're right, the current introduction is the best for people unfamiliar with matrices. --131.188.3.21 (talk) 19:20, 16 March 2009 (UTC)[reply]

Notation

Can anybody confirm this? Jakob.scholbach (talk) 17:04, 27 March 2009 (UTC)[reply]

An alternate convention is to annotate matrices with their dimensions in small type underneath the symbol, for example \underset{m \times n}{A} for an m-by-n matrix.

Every finite group is isomorphic to a matrix group.

Can anybody provide a reference for this claim? I know it's true, but fail to find a reference. Jakob.scholbach (talk) 07:44, 13 April 2009 (UTC)[reply]

Isn't it a consequence of Cayley's theorem? If you consider that every permutation is a fr:Matrice de passage (a change-of-basis matrix; sorry, I lack the vocabulary). --El Caro (talk) 10:02, 13 April 2009 (UTC)[reply]
I think it is pretty obvious that the regular representation is faithful, but anyways, Google Books has given me the following reference: example 19.2 on page 198 of Louis Halle Rowen's "Graduate algebra: noncommutative view". RobHar (talk) 15:49, 13 April 2009 (UTC)[reply]
Good. I will try to round off the history section in the next few days. Do you see any further obstructions for a Good Article nomination? Jakob.scholbach (talk) 19:44, 13 April 2009 (UTC)[reply]
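
To make the permutation-matrix argument sketched above concrete (an illustration supplied here, not taken from the cited reference): each permutation σ in S_n yields a permutation matrix P_σ defined by P_σ e_i = e_{σ(i)}, for example

    \sigma = (1\,2\,3) \in S_3 \quad \longmapsto \quad
    P_\sigma = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},

and σ ↦ P_σ is an injective group homomorphism S_n → GL_n. Combined with Cayley's theorem, which embeds any finite group G into S_{|G|}, this realizes every finite group as a matrix group.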

Re: Good Article nomination

The article is looking good. One issue that should probably be resolved first is the issue of whether Matrix theory should continue to co-exist with Matrix, or whether one should be merged into the other. See Talk:Matrix_theory#Matrix_vs_matrix_theory.

A couple of other thoughts:

1) It would make sense to move the History section further down, say immediately before or after the Applications section, because

  • Probably most people seeking this page will be mainly looking for the definition and basic properties of matrices, so it would be good for the article to discuss these as early as possible.
  • The History section alludes to many topics that are treated only much later in the article. Readers who have not yet learned, say, the connection between matrices and systems of linear equations, are unlikely to appreciate this section.

2) It may be worth preceding the "Definiteness" subsection with a subsection entitled "Symmetry conditions" (or something like that) into which the definitions and properties of symmetric, skew-symmetric, and Hermitian matrices would be moved. For instance, it is strange that the spectral theorem is mentioned only for positive definite matrices, when in fact it applies to symmetric and Hermitian matrices (and even more) and has little to do with positive definiteness; the mention of this theorem could be moved into that new subsection.

--FactSpewer (talk) 03:38, 19 April 2009 (UTC)[reply]

Thanks for the input. I realized 2) right away. For 1), I'm not terribly strongly attached to the history section in that place, but I do think that it provides a leisurely introduction to some of the notions, while still being roughly accessible to a non-math person, thereby also serving as a kind of motivation section. You are right that a student wishing to learn about matrix addition might be less interested in this, though. Finally, history is quite orthogonal to applications, so that placement would be somewhat of a break. At groups we used the same structure. But, surely, if you feel strongly, just move it.
About merging: the discussion you link to is relatively old, I guess conducted at a time when there was far less content on both pages. Now the matrix theory page does have a big overlap with this page. I personally think that there should be a separate page about more advanced material, which should be matrix theory IMO, but given the little content on that page, I guess we should merge. I will set up a merger proposal. Jakob.scholbach (talk) 11:19, 19 April 2009 (UTC)[reply]
OK, since there seem to have been no objections to moving History further down, I'll do so now, for the reasons I gave above. --FactSpewer (talk) 04:23, 23 April 2009 (UTC)[reply]

Suggestions for future development

I haven't yet had the chance to read this article thoroughly, but it seems good. Here are some deferential suggestions that you might consider for its future development to Featured Article.

  • First and foremost, I would work on improving the organization and writing. Almost every topic relevant to matrices is mentioned here, but I think the presentation and order of topics could be improved so that average readers get more out of the article. In particular, the lead could really benefit. I would try to organize the article to be simple earlier and grow gradually more complicated, and to offer strong guidance to your reader, so that they see where you're headed.
  • Sure. I usually tend to avoid working on the lead too early in the process, since it is likely to change when the article evolves, and also since it is the most difficult part of the article. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]
  • The applications section is good, but perhaps too focused on typical undergraduate topics in physics. It would help to give a broader view, I think. Here are some examples:
    • Matrices in statistical mechanics, e.g., the transfer-matrix method by which Lars Onsager solved the two-dimensional Ising model.
    • Linear dynamical systems of the form dc/dt = M· c, important for modeling the flow of dynamical systems in the neighborhood of fixed points. Normal modes can be viewed as a special case.
    • The normal-mode discussion could be coupled with the discussion of the Hessian, the covariance matrix, and modeling infrared spectra of molecules. Decaying modes (complex eigenvalues) would be a good addition.
    • Discrete Fourier transforms and other discrete transforms, such as the Hadamard.
    • In structural biology, we use matrices to find the closest possible way to overlap two molecules.
    • In engineering, the behavior of linear four-port systems, as in microwave systems.
    • In addition to their leading role in solving linear problems, matrices can be used in non-linear problems as in Prony's method. The general strategy is to concentrate the non-linearity into finding the roots of the characteristic polynomial.

Notice that many of these applications are based on an eigenanalysis of the matrix. That might be worth stressing.

  • All right. I think we should not try to come up with an exhaustive list of possible further applications. (Perhaps create Applications of matrices instead?) Highlighting general principles such as eigenanalysis, and underlining this with a few examples seems better. Another point I deem important: we should not be guided by the nicety of the example, but by the relevance to matrices. In addition, we have to keep a limited overall length in mind. I guess we should not expand the applications section by more than at most 20% of its current length (assuming that the rest of the article stays as long as it is), which means we need to find ways to trim down the current material, probably mostly the physics part. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]
  • A picture of an elliptical Gaussian scatterplot with a superimposed eigenanalysis of its covariance matrix might help a lot in explaining covariance matrices. A picture/animation of an oscillating molecule with the corresponding Hessian would be an excellent tie-in.
  • Elasticity theory might help in explaining linearity along different dimensions. Imagine a jello that is stiffer in one direction than in another...
  • Another helpful example for understanding eigenanalysis might be the inertia tensor of a rigid object. People can intuitively see the eigenaxes and understand that the eigenvalues report on the difficulty of rotating the object about the corresponding eigenaxis. This might lead nicely into a discussion of the gyration tensor, which is often used in molecular dynamics.
  • Yes. However, we don't have much space. Perhaps convey a good amount of motivation by a nice image? Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]
  • I would've liked to have seen more about singular value decomposition, both the theory and its applications, such as the (pseudo)inversion of rectangular matrices, least-squares solutions of overdetermined systems, identification of null spaces, deconvolution of functions into linear combinations of basis functions (done for far-UV circular dichroism spectra), etc.
  • Sure. I have to say I can't fully tell the importance and interdependencies of the various decomposition methods. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]
  • Something about the invariance of eigenvalues and their derived quantities (such as trace, determinant, axiality, rhombicity, etc.) under similarity transforms?
  • Sounds reasonable. At the moment, the abstract linear algebraic side (that is, vector spaces and bases) get short shrift, but I'm not sure we should spend much more space on doing that. Perhaps a little. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]
  • Perhaps include something about resultants and differential resultants and elimination theory, which I've used in my own research more than once?
  • You might consider having an early "definitions" section where you describe matrices by types: normal, symmetric, antisymmetric, unitary, Hermitian, diagonal, tridiagonal, band diagonal, defective, triangular, block, sparse, etc. You could show that any matrix can be decomposed into the sum of a symmetric and an antisymmetric part. A few basic facts might be good to add, such as the transposition of a matrix product, the invariance of the eigenvalues under similarity transforms, the distinction between right and left eigenvectors, Cholesky decomposition, etc.
  • Hm. We have List of matrices. Pure definitions sections tend to be only loosely integrated to the rest of the article, and potentially boring, too. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]
  • Mention some "speciality" matrices such as Hilbert matrices, Vandermonde matrices and Toeplitz matrices? Of course, there are so many, it'll be difficult to choose which ones to mention.
  • I think the material on Gaussian elimination/elementary matrices might be understandable by many readers, if you fleshed it out and perhaps illustrated it more. The same is true for determinants, which I would almost count as basic as addition, multiplication and transposition.
  • Right. By the 2nd, do you mean we should merge determinants into the basic section? (I would not do that.) Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]

I hope these suggestions are helpful. I'll re-read the article again tonight and think more about it. Good work! Proteins (talk) 18:57, 22 April 2009 (UTC)[reply]

Thanks muchly. (I levelled down your heading per the guidelines here). I will have to learn about many topics you mention. We will have trouble fitting all this into the article, though. Jakob.scholbach (talk) 19:14, 22 April 2009 (UTC)[reply]
Now that the article has been promoted to GA status, we can move forward. Are you or anybody else willing to work together on these points (and possible further ones) in order to bring the article to FA status?
While I like all of the above suggestions, I want to emphasize again: for space reasons, we cannot afford to write about topics that are very nice but only loosely related to matrices. Jakob.scholbach (talk) 20:11, 27 April 2009 (UTC)[reply]

Explanation of a few edits to the introduction

The data indexed by matrices depends on two parameters only, not multiple parameters; anyway, is this worth saying?

"Multiple linear" sounds a little too much like "multilinear", and anyway, "multiple linear equations" are more commonly called a "system of linear equations".

I made the "connection" between matrices and linear transformations more specific by saying that matrices represent linear transformations.

It is strange to suggest that "matrix multiplication" is not elementary.

I mentioned briefly the motivation behind matrix multiplication.

To avoid dwelling too long in this introduction on the noncommutativity, I shortened the discussion and instead linked to commutative.

I deferred the definition of square matrices to later in the article, by linking there.

"Refined" is a strange adjective to apply to the notions of determinant, eigenvalues, etc.

Not every square matrix has an inverse.

Merged a one-sentence paragraph into the previous one.

WardenWalk (talk) 03:50, 28 April 2009 (UTC)[reply]

Fair enough. Probably I should have spent more care on the lead. Jakob.scholbach (talk) 11:56, 28 April 2009 (UTC)[reply]

Introduction

I feel that the introduction goes into too many details regarding applications. I would advocate moving most of these details into the applications section. Also, regarding the sentence about sparse matrices, it seems strange to single out the finite element method, when in truth sparse matrices occur in almost all applications of matrices. --FactSpewer (talk) 17:14, 30 May 2009 (UTC)[reply]

Disagree. The lead should be an introduction and also a summary of the article. Applications should be in the lead. But I think we should move that sparse matrices sentence into the application part in the first paragraph. Visit me at Ftbhrygvn (Talk|Contribs|Log|Userboxes) 18:00, 30 May 2009 (UTC)[reply]
Dear Ftbhrygvn, I agree with you that the lead should be an introduction and also a summary of the article. Just to clarify, I was not advocating eliminating all mention of applications in the introduction. I just feel that the introduction should serve to say what the article contains as briefly as possible, and then readers can continue reading the details that interest them. So I would be happy if the part about applications in the introduction were abbreviated to something like
"Matrices find applications throughout the sciences and engineering. They appear even in some branches of music theory."
with a link to the later section on applications. This also has the advantage that it removes the implication that matrices are used only in some specific sciences. --FactSpewer (talk) 19:37, 30 May 2009 (UTC)[reply]
I agree that it is a little bit overly detailed. However, if we are aiming for FA, the content of the applications section will probably have to be thoroughly revised anyway, accordingly the lead will change too. I usually tend to work on the lead only at the very end of the editing process. Jakob.scholbach (talk) 10:11, 31 May 2009 (UTC)[reply]
Yes, that's reasonable. --FactSpewer (talk) 14:17, 31 May 2009 (UTC)[reply]

Help improving this article

Fellow Wikipedians:
For the first time I am planning to make some big contributions to Wikipedia, and my first step is to bring a GA article to FA. I have quite a number of choices, and I decided to go for Matrix, which I am familiar with. I want some SERIOUS Wikipedians who will work together on this article until it reaches FA status to help me. Visit me at Ftbhrygvn (Talk|Contribs|Log|Userboxes) 07:31, 29 May 2009 (UTC)[reply]
Current Group:(Feel free to add yourself)

  1. Ftbhrygvn
  2. Jakob.scholbach
  3. FactSpewer —Preceding undated comment added 17:09, 30 May 2009 (UTC).[reply]
I'm concerned about the intro. According to WP:LEAD, the lead section "should be written in a clear, accessible style." For a basic math article like this, there is no reason not to make the intro understandable to someone unfamiliar with the topic. We should not presume readers know double subscript notation, or understand what is meant by "the usual identities", or use obscure constructions like "the identity AB=BA fails." I also think we should not introduce side topics like vectors and tensors before explaining the basics. I made some edits in an attempt to improve the intro, which were promptly reverted en masse [2]. I don't wish to get into an edit war here, but I don't find the current intro acceptable for a GA, let alone an FA.--agr (talk) 14:27, 22 July 2009 (UTC)[reply]
I apologize for reverting your edit without much talk. You are absolutely free to re-revert it and work on it, I was just overly lazy. I disliked the formatting (introducing see below links in the lead, line breaks) and, more importantly the addition of a mistake ("AB and BA are both defined only for square matrices"). But I now realize that putting the concrete example is probably helpful. It is just now that I see there has been a peer review, with a couple of good comments. I'm personally very busy now, but the article definitely could and should become better! Jakob.scholbach (talk) 15:49, 22 July 2009 (UTC)[reply]
Thanks for the apology. Of course I have no problem with your correcting my foolish error. I'll make another pass with your other suggestions in mind.--agr (talk)

Infinitesimal Matrix

Where can I find information on infinitesimal matrices? mezzaninelounge (talk) 02:12, 23 July 2009 (UTC)[reply]

Definition

Is a matrix really a rectangular array of numbers? That's a description of a particular typographic representation, not what a matrix is. It's an object with internal structure that gives it varying algebraic properties... or something. There has to be a better definition. As it stands, isn't it sort of like saying "Addition is putting two numbers together with a little cross between them"? —Preceding unsigned comment added by 76.175.72.51 (talk) 04:18, 30 July 2009 (UTC)[reply]

Matrices are two-dimensional arrays, therefore giving them an implicitly rectangular shape. From Merriam-Webster.com:

5 a: a rectangular array of mathematical elements (as the coefficients of simultaneous linear equations) that can be combined to form sums and products with similar arrays having an appropriate number of rows and columns b: something resembling a mathematical matrix especially in rectangular arrangement of elements into rows and columns c: an array of circuit elements (as diodes and transistors) for performing a specific function

I cannot find any definition that does not refer to matrices as "rectangular array". DKqwerty (talk) 04:39, 30 July 2009 (UTC)[reply]
I think one could say an n×m matrix is an n-tuple of m-tuples (or vice versa) generally organised as a rectangular array of numbers, but that seems a bit pedantic. The analogue would be saying addition is putting two numbers together generally denoted by placing a cross between them. And addition IS just putting two numbers together; it just so happens that it satisfies other properties, too: one might say addition is an associative commutative binary operation. A matrix however has various interpretations (linear transformations, coefficients of systems of linear equations, etc), and these come after and are not intrinsic to the notion of a matrix. I do sympathise with the feeling that "rectangular array" is somehow a cop out. However, I think it's the truth. My two cents. RobHar (talk) 04:50, 30 July 2009 (UTC)[reply]

Rectangular is not just typographical. It means that each row has the same number of entries. Many computer languages represent two dimensional arrays as a vector of vectors and do not require each element vector to have the same number of entries. These would not be matrices in general.--agr (talk) 17:43, 2 August 2009 (UTC)[reply]
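
A minimal sketch of this point in Python (illustrative only; the function name is made up here): a "vector of vectors" counts as a matrix exactly when all rows have the same length.

    def is_rectangular(rows):
        # A matrix in the sense discussed above: every row must have
        # the same number of entries, i.e. the array is rectangular.
        return len({len(row) for row in rows}) <= 1

    print(is_rectangular([[1, 2, 3], [4, 5, 6]]))  # True: a 2-by-3 matrix
    print(is_rectangular([[1, 2], [3, 4, 5]]))     # False: ragged, not a matrix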

Theoretically, a more abstract definition could be made. In practice, the closest to this I've met is defining matrices as special kinds of tensors. (This possible approach was mentioned in Shilov's classical Introduction to the theory of linear spaces.) However, I do not think that either the classical treatment of tensors or a more modern tensor (intrinsic definition) approach would be of any use here. (Perhaps one could add a few words at the end of the matrix article about matrices being treated as special cases of tensors, with a link, though, if this isn't done already.)
The more "algebraic" abstract definition would be to consider matrices as (indexed) families, where the indices run over a cartesian product of two sets. The advantage with that is of course that it immediately gives a sense to the term "infinite matrices". However, "matrices" thus defined would just form a rather special case of families in general. I assume that this is a reason why mathematicians do not seem to bother overmuch with such an abstract definition.
WP has no reason to bother more than mathematicians in general do, I think. JoergenB (talk) 21:26, 5 November 2009 (UTC)[reply]
Note also that a matrix is not a tensor. It can be used to represent a particular type of tensor (most commonly linear maps). In the end a matrix is just what it is: a rectangular array, with no particular algebraic properties. (TimothyRias (talk) 11:07, 6 November 2009 (UTC))[reply]
... or a doubly indexed family, yes. (However, while a tensor has more structure than a matrix that, given a basis, might represent it, it is not uncommon to consider tensors as a kind of generalisation of inter alia matrices, as this article actually does. On the other hand, the mention in Matrix (mathematics)#Abstract algebraic aspects and generalizations IMHO is sufficient; but I'll add a "See also" link.) JoergenB (talk) 22:54, 6 November 2009 (UTC)[reply]
How about the following definition (not that I've seen it in print anywhere). A matrix is a quadruplet (r, c, E, f) where r, c ∈ N (with of course 0 ∈ N; how tiresome that this basic truth is not yet believed by everyone), E is a set (the one where the entries live), and f is a map [r]×[c] → E (where [r] designates your favorite fixed r-element interval of N). This dispenses with the graphical representation of the matrix, and makes a certain number of other points clear, like the fact that the indices always range over a finite rectangular (but possibly empty) set, that matrices can be distinguished just by the domain of their entries (a zero binary matrix (with entries in {0,1}) is not automatically identified with a matrix of the same size with complex or polynomial or whatever entries; of course people may choose to identify them nonetheless, but this is much easier than to "unidentify" things that are equal by definition), and most importantly it allows one to distinguish various types of empty matrices (which would not be the case if the matrix were defined just as the function f); see the discussion of such matrices in the article. Marc van Leeuwen (talk) 12:44, 6 November 2009 (UTC)[reply]
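
A throwaway sketch of this quadruplet definition in executable form (Python chosen arbitrarily; all names are made up for illustration):

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Matrix:
        # The quadruplet (r, c, E, f): the type (r, c) and the entry
        # domain E are part of the data, so empty matrices of different
        # types, or over different domains, stay distinguishable.
        r: int                          # number of rows
        c: int                          # number of columns
        E: Any                          # tag for the set where entries live
        f: Callable[[int, int], Any]    # entry map [r] x [c] -> E

        def entry(self, i, j):
            # Indices range over a finite rectangular (possibly empty) set.
            assert 0 <= i < self.r and 0 <= j < self.c
            return self.f(i, j)

    a = Matrix(0, 5, "integers", lambda i, j: 0)  # empty matrix of type (0, 5)
    b = Matrix(3, 0, "integers", lambda i, j: 0)  # empty matrix of type (3, 0)
    print((a.r, a.c) == (b.r, b.c))               # False: the types differ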
Wow, this is actually rather close to Bourbaki's definition, though, clearly, they think requiring [r] and [c] to be subsets of N is needlessly restrictive (their definition is in Algebra I, Section 10). Of course, I still think "rectangular array" is the best way to go. Though mentioning a definition that doesn't require visualizing something is appropriate. RobHar (talk) 13:12, 6 November 2009 (UTC)[reply]
There is a fundamental difference in approach, because Bourbaki does not define what an unqualified matrix is, just what a matrix of type (I,K) over H is; in other words, you have to specify the dimensions and the kind of elements before you can even start introducing a matrix. It means for instance that the question of what type a given matrix has is answered trivially by (I,K), since the matrix was introduced (as it has to be) as a matrix of type (I,K). However, I notice that Bourbaki does not himself live up to this discipline, and frequently discusses matrices without specifying their type first, though usually they are then given by an expression that suggests the type. However, and in conflict with this usage, he states that any matrix over H of some type (I,K) where either I or K is empty is identified with the family of elements of H indexed by the empty set (bottom of the cited page). This makes it impossible to recover the other (non-empty) index set from the empty matrix. It does not take long for this to lead into trouble, though Bourbaki does not seem to notice. If you look at the definition of matrix product (next page, 339) you can see that in particular the product of a matrix of type (I,Ø) with one of type (Ø,L) is one of type (I,L) with (independently of the multiplication operation f used) all entries zero (because given by an empty sum). In other words, the product of the empty matrix with itself has multiple conflicting definitions, equating it to a zero matrix of any dimension one wishes. Marc van Leeuwen (talk) 13:40, 7 November 2009 (UTC)[reply]
I don't actually view this as a fundamental difference. I believe it's clear from their definition that a "matrix" (unqualified) is a matrix of some type (I,K) over some H. It's rather common to leave that unstated. Furthermore, if you want to multiply an empty matrix by another, you know they have some types (I,Ø) and (Ø,L), respectively, so their product should be of type (I,L), and there's nothing else their entries should be but zero. It's never impossible to recover the other index set because it's part of the data of saying the word "matrix". I fail to see the problem. Though this is not the forum to discuss the understanding of Bourbaki; it's to discuss the wiki article "Matrix". RobHar (talk) 15:39, 7 November 2009 (UTC)[reply]
The problem is that a matrix of some type (I,K) can simultaneously be of some other type (I′,K′), and this happens (only) for empty matrices. If one defines, as Bourbaki does, a matrix as no more than a family of entries (or equivalently as a map from an index set to the domain H of entries), then this means one can distinguish only a single empty matrix, because there is only a single empty family of elements of H (which in turn is because there is only a single empty set Ø; any Cartesian product I×Ø or Ø×L is equal to Ø and there is no way to tell them apart). Quoting Bourbaki:

Every matrix over H for which one of the indexing sets I, K is empty is identical with the empty family of elements of H; it is also called the empty matrix.

So the empty matrix has not one type (I,Ø) or (Ø,L), but all those types at once. This means one cannot, as Bourbaki does, talk of the indexing set of the rows (or columns) of a given matrix; this is where his definition of matrix multiplication is broken. As I said, one could insist that a matrix is never considered by itself but is always introduced as one of a previously specified type; this requires extreme discipline (more than Bourbaki brings up) and also makes it impossible to ever consider a collection of matrices with varying types. In my opinion it is much better to avoid all this mess, and simply make the type part of the matrix data itself; then every matrix "knows" what type it has, and one can distinguish empty matrices of different types. Marc van Leeuwen (talk) 09:07, 13 November 2009 (UTC)[reply]
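
In formulas, the collision described above (spelled out here for illustration): for A of type (I, Ø) and B of type (Ø, L), every entry of the product is an empty sum,

    (AB)_{i\ell} = \sum_{k \in \emptyset} a_{ik} b_{k\ell} = 0 \qquad (i \in I,\ \ell \in L),

so once all empty matrices are identified with one another, one and the same empty matrix acquires products that are zero matrices of every type (I, L) at once.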

Entry or element

In all and every (printed) book or article on matrices in English I've seen, the thingos at given positions in a matrix are called "entries". The same is true for this article, but without defining "entry" explicitly. I've heard rumours of the usage of "elements" instead; and I notice that this usage - inconsistently - is employed in some other articles in Category:Matrix theory.

E.g., in the beginning of the section Matrix#Matrix multiplication, linear equations and linear transformations the ordinary definition of the entries of a product matrix is given. The section, however, refers to matrix product as its main article; and there the same rule is given for the elements of the product matrix, in the section matrix multiplication#Ordinary product. Actually, at a glance, I found no direct mention of "entries" anywhere in that article. However, there was an indirect reference, which actually might be a bit confusing for non-expert readers. In the section matrix product#Hadamard product, the concept entrywise product is mentioned, but again as defined by a multiplication of "elements".

Moreover, there seems to be no way for a non-expert to find a reference to matrix entries; at least, I found none. The page Entry is a redirect to Entrance, which is a disambiguation page. This page does not mention "entry" in any usage; not even the accountant's entries are mentioned. (Actually, I think that the accountant terminology is the historical reason for the term entry in connection with matrices.)

What I wonder is, first: is "elements" actually used nowadays in lieu of "entries" in some textbooks or articles in English, at lower or higher level? And second: is there any article on matrix entries anywhere in en:wiki? JoergenB (talk) 21:53, 5 November 2009 (UTC)[reply]

Actually, looking in Shilov's Introduction to the theory of linear spaces for matrices as tensors, I found that there the entries indeed are called "elements". Now, it's about 40 years since I had Shilov as a text book; but I have looked in it since then, and ought to have remembered.
So, both "entry" and "element" do be represented in English text-books. Which term is nowadays most common? JoergenB (talk) 22:33, 6 November 2009 (UTC)[reply]

Electronics

Under "Applications", I have added a subsection Electronics, very briefly describing the use of matrices in calculating electronic circuits. I first wrote this for the Dutch Wikipedia (nl:Matrix (wiskunde)#Elektronica: vierpoolmodel). However, the only sources I have available are in Dutch. If anyone is able to add a source in English, or to further improve this subsection, I would appreciate that.

--HHahn (Talk) 10:43, 6 January 2010 (UTC)[reply]

I can't find any sources for the type of matrix that you describe. The nearest thing I can find is the impedance matrix described here and at Equivalent impedance transforms#2-terminal, n-element, 3-element-kind networks. However, that is not the same as the matrix you describe, as it transforms a current vector into a voltage vector. Also, surely Kirchhoff's circuit laws say that for a component with one input and one output, input current = output current, so the bottom row of the matrix you describe will always be (0 1), won't it? Gandalf61 (talk) 12:04, 6 January 2010 (UTC)[reply]
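
For comparison, one common English-language formulation is the two-port transmission (ABCD) matrix; for a single series impedance Z it reads (a textbook example, offered here only as a possible point of contact with English sources, not as a claim about the Dutch subsection):

    \begin{pmatrix} V_{\mathrm{in}} \\ I_{\mathrm{in}} \end{pmatrix}
    = \begin{pmatrix} 1 & Z \\ 0 & 1 \end{pmatrix}
    \begin{pmatrix} V_{\mathrm{out}} \\ I_{\mathrm{out}} \end{pmatrix},

where the bottom row (0 1) encodes exactly the Kirchhoff point above: input current equals output current.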

Lacking

The page is lacking several things. For example: let A be any square matrix; then what is sin(A)? Or let f(t) be any function; then what is f(A)? A general way to compute such things is missing. —Preceding unsigned comment added by 128.100.86.53 (talk) 20:58, 3 February 2010 (UTC)[reply]
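
For what it's worth, one standard recipe, assuming A is diagonalizable (the general case uses the Jordan form or a convergent power series):

    A = P D P^{-1}, \quad D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)
    \;\Longrightarrow\;
    f(A) = P \operatorname{diag}\big(f(\lambda_1), \dots, f(\lambda_n)\big) P^{-1},

so in particular sin(A) = P diag(sin λ_1, ..., sin λ_n) P^{-1}.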

Ordering

How does one order matrices? There are some rules, although not all matrices are orderable. Jackzhp (talk) 03:27, 7 March 2010 (UTC)[reply]

The question is unclear, since a matrix is not a set. The matrix entries can be ordered in many ways, like by rows, by columns, or by increasing size. Probably this is not what you meant though. Marc van Leeuwen (talk) 06:17, 7 March 2010 (UTC)[reply]
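
One notion the question may be aiming at (a guess, offered only for illustration) is the Loewner partial order on Hermitian matrices:

    A \preceq B \quad :\Longleftrightarrow \quad B - A \text{ is positive semidefinite},

which is only a partial order: for A = diag(1, 0) and B = diag(0, 1), neither A ⪯ B nor B ⪯ A holds, matching the remark that not all matrices are comparable.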