# Talk:Determinant/Archive 2

## Restatement of concerns based on further reading

As a courtesy to the editors who encouraged me to follow up my previous efforts regarding this article, I spent some time yesterday consulting further sources. I am restating my concerns here, taking this further material into account, but I do not plan any further action. If anyone else thinks any of my concerns have merit, they can act. Otherwise I accept that I have misunderstood the role of Wikipedia.

As a non-mathematician who does know a little mathematics, particularly what is taught to science students, I think the lead takes a far more abstract and esoteric approach to the subject than has been customary in authoritative accounts from the time determinants were invented to the present, and uses mathematics outside the material typically taught to science students, let alone what is known to the general public. The standard approach can be understood by anyone who can add and multiply. I have questioned and still question the verifiability of some statements in the lead. Also, I question further the neutrality of "the geometric aspect ... is the key to understanding ... and is not adequately represented in the current article" which already presents this idea disproportionately in comparison with sources I cite below. Several statements in the lead are misleading.

Books that I have consulted include those listed in the following collapsed paragraph.


### Kreyszig's text

The textbook Advanced Engineering Mathematics by Erwin O. Kreyszig in collaboration with Herbert Kreyszig and Edward J. Normington, John Wiley & Sons, Inc. 10th edition, 2011 [1] contains extensive coverage of determinants. This book is being shipped in large volume to university bookstores at present. The WK article about Kreyszig begins:

Erwin O. Kreyszig ... (1922 ... 2008) was ... a pioneer in the field of applied mathematics: non-wave replicating linear systems. He was also a distinguished author, having written the textbook Advanced Engineering Mathematics, the leading textbook for civil, mechanical, electrical, and chemical engineering undergraduate engineering mathematics.

The "Purpose and Structure of the Book" (p. vii) states:

"This book provides a comprehensive, thorough and up-to-date treatment of engineering mathematics. It is intended to introduce students of engineering, physics, mathematics, computer science and related fields to those areas of applied mathematics that are most relevant for solving practical problems. A course in elementary calculus is the sole prerequisite. ... (the book adopts) a modern approach ... "

This book contains a substantial section on Determinants. Geometrical interpretations are introduced as a team project in the penultimate exercise, which takes up less than 1% of the entire section. It is couched in terms of points, lines, squares, circles and spheres in their common usage, rather than in measure theory.

### The texts by Bretscher, by Strang and by Artin

Otto Bretscher, Linear Algebra with Applications, 4th ed. Prentice-Hall, 2008. On p. ix, under Continuing Text Features, the author states "Linear transformations are introduced early ... Visualization and geometrical interpretation are emphasized extensively throughout".

Gilbert Strang, Linear Algebra and its Applications. The author makes similar remarks in front matter. His first diagram is on p. 3.

It is essential to note, however, that the first geometrical topics mentioned in these texts are utterly different from the geometrical topic in the lead. These examples relate to the solution of linear systems as the intersection of lines and planes. Bretscher introduces a simple case of the idea that comprises the second sentence of the lead, but not until p. 85, where he proves it as a formal Theorem -- not a definition, not an explanation and without comment on how it helps understanding.

Michael Artin, Algebra. This DOES give the transformation of a unit square on p. 18. However, this does NOT try to encompass multidimensionality, which the present lead sentence does. And Artin writes from the specialized viewpoint of an algebraic geometer.

### Encyclopedic works

In my previous posting I mentioned the consistent treatment of determinants in the pre-1990 works in the list that follows. These CANNOT be ignored. These (in particular Encyclopedia Britannica), are amongst the reference works in a vast number of public, university and school libraries. They express the knowledge learned by older readers who seek refreshment of their recollections. The Oxford University Press and the Cambridge University Press mathematics encyclopedias published in 2003 and 2011 treat determinants traditionally, and the introductions in each can be understood by anyone who can add and multiply. The CUP compendium mentions the 3-dimensional geometrical interpretation, using vector notation (not measure theory) in a short, final paragraph.

• 1. Encyclopedia Britannica
• 2. Encyclopedic Dictionary of Mathematics of the Mathematical Society of Japan, English translation, published by MIT Press, with forewords by the President of that society and the President of the American Mathematical Society
• 3. Compendium of Applicable Mathematics, edited by Karel Rektorys
• 4. Eberhard Zeidler, Oxford User's Guide to Mathematics, Oxford University Press, 2003.
• 5. K.F.Riley and M.P.Hobson, Essential Mathematical Methods for the Physical Sciences, Cambridge University Press, 2011.

Because editors who are not familiar with the topic may be following this discussion, I think the following collapsed explanation may be helpful.


### Some simple examples

I begin with some simple examples. Starting in this way is consistent with how many published accounts of determinants have begun, from their invention in the 1700s to major texts and encyclopedias published in 2011.

The following tabulation of four numbers, enclosed within a pair of vertical lines, is called a determinant. The number of rows equals the number of columns, and this common number is called the order of the determinant. The individual items are called the elements of the determinant.

${\displaystyle {\begin{vmatrix}1&2\\3&4\end{vmatrix}}}$

This is a shorthand for 1 × 4 − 2 × 3 = 4 − 6 = −2. Correspondingly,

${\displaystyle {\begin{vmatrix}x&y\\u&v\end{vmatrix}}=x\times v-y\times u}$

The determinant of order 3 that consists of the elements 1, 2, …, 9 is written

${\displaystyle {\begin{vmatrix}1&2&3\\4&5&6\\7&8&9\end{vmatrix}}}$

This is a shorthand for

${\displaystyle 1\times {\begin{vmatrix}5&6\\8&9\end{vmatrix}}-2\times {\begin{vmatrix}4&6\\7&9\end{vmatrix}}+3\times {\begin{vmatrix}4&5\\7&8\end{vmatrix}}=}$

${\displaystyle 1\times (5\times 9-6\times 8)-2\times (4\times 9-6\times 7)+3\times (4\times 8-5\times 7)=}$

${\displaystyle 1\times (45-48)-2\times (36-42)+3\times (32-35)=1\times (-3)-2\times (-6)+3\times (-3)=-3+12-9=0}$

This calculation follows the prescription below:

• 1. Take the first element in the first row, in this case 1.
• 2. Strike out the row and column that contain the first element in the determinant that we started with. This leaves the determinant (of order 2) ${\displaystyle {\begin{vmatrix}5&6\\8&9\end{vmatrix}}}$.
• 3. Multiply the results of steps 1 and 2.
• 4. Take the actions corresponding to steps 1 to 3, using the second element in the first row. This gives ${\displaystyle 2\times {\begin{vmatrix}4&6\\7&9\end{vmatrix}}}$.
• 5. Subtract the result of step 4 from the result of step 3.
• 6. Take the actions corresponding to steps 1 to 3, using the third element in the first row. This gives ${\displaystyle 3\times {\begin{vmatrix}4&5\\7&8\end{vmatrix}}}$.
• 7. Add the result of step 6 to the result of step 5.

This fits the following general pattern that holds for determinants of any order. Work through the first row, element by element. For each of these elements in turn:

• 1. Multiply it by the determinant obtained by striking out the row and column in which it is located. (In this prescription, the row is always the first.)
• 2. Subtract the second of these results from the first.
• 3. Add the next, if the order is more than 2.
• 4. Subtract the next, if the order is more than 3.
• 5. Continue in this way, alternately adding and subtracting, until the end of the first row has been reached.

This prescription converts a determinant of order 100 into a sum that contains 100 determinants of order 99. The prescription turns each of these into a sum that contains 99 determinants of order 98. And so on. If the determinant contains a lot of zeroes, the situation is not so bad, but allowance has to be made for when this is not the case. There are simple notations for writing the prescription concisely, but these do not need to be explained yet.
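The prescription above (expansion along the first row, with alternating signs) can be sketched as a short recursive function. This is only an illustrative sketch, not code from any of the cited texts; the function name `det` and the list-of-lists representation of the elements are my own.

```python
def det(m):
    """Determinant by first-row cofactor expansion, following the prescription above.

    m is a square array of numbers given as a list of rows.
    """
    n = len(m)
    if n == 1:
        return m[0][0]  # a determinant of order 1 is just its single element
    total = 0
    for j, elem in enumerate(m[0]):
        # Strike out the first row and column j to form the order n-1 determinant.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # Alternately add and subtract along the first row.
        total += (-1) ** j * elem * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -> -2, as in the first example
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # -> 0, as in the order-3 example
```

As the text notes, this recursion is expensive for large orders; practical computation instead reduces the determinant to one with many zeroes first.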

A question my students in a graduate course on computer literacy for humanists would have asked at this point was "why bother us with this -- why are determinants useful?" My answer would have fallen back on the problem: "2 oranges and 3 apples cost 60 cents. 3 oranges and 2 apples cost 65 cents. Can we calculate the cost of an orange and the cost of an apple from this information?" The question is not "what are the prices", but "can we determine them". The answer is yes. Next question: "how do you know that the answer is 'yes'?" Answer: "because the determinant ${\displaystyle {\begin{vmatrix}2&3\\3&2\end{vmatrix}}}$ is not zero. Suppose the information was: 2 oranges and 3 apples cost 60 cents. 4 oranges and 6 apples cost $1.20. Then we would not have enough information. I can tell this because the determinant ${\displaystyle {\begin{vmatrix}2&3\\4&6\end{vmatrix}}}$ is zero." Next question: "But suppose you were given 20 shopping receipts for different amounts of the same 20 kinds of food; it would take ages to compute the determinant by the process you described." Answer: "There are several theorems that enable the conversion of one determinant to another that has the same value and takes less time to compute because it contains more zeroes."
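The determinant test in the dialogue above takes only a couple of lines; the helper name `det2` is my own illustration.

```python
def det2(a, b, c, d):
    # 2x2 determinant |a b; c d| = a*d - b*c
    return a * d - b * c

# 2 oranges + 3 apples = 60 cents, 3 oranges + 2 apples = 65 cents:
print(det2(2, 3, 3, 2))  # -> -5, nonzero, so the prices are determined
# 2 oranges + 3 apples = 60 cents, 4 oranges + 6 apples = 120 cents:
# the second receipt is just double the first, so it adds no information.
print(det2(2, 3, 4, 6))  # -> 0, the prices cannot be determined
```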

I did not actually take this route in class. The module on matrices focused on linear transformations, using word problems and worked solutions that consisted of simple sentences to explain:

• 1. The respective costs of an orange and an apple can be found "graphically": by plotting on squared paper the straight lines with the formulas 2x+3y=60 and 3x+2y=65, and observing where they intersect.
• 2. The meaning of "linear expression" (such as 2x+3y), "linear equation" (such as 2x+3y=60), "linear transformation" (such as: convert the number of jam tarts and muffins to be shipped into the pounds of sugar and flour these require) and "linear system" (lists of linear equations that can be solved collectively), that all have names that follow from the fact that they deal with expressions that give straight lines when plotted.
• 3. The definition and basic properties of matrices.
• 4. The multiplication of matrices to express compositions of transformations (for example, to find the number of pounds of flour and sugar needed to supply Dainty Teas and Hearty Arty's with jam tarts and muffins, using the numbers of each that they require, respectively).

(I mention this for readers who seek clarification of "linear" that pervades Determinants and related topics. I am not trying to fork or sneak in unpublished work. It has been reported.)
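The matrix-as-transformation idea in points 2 to 4 above can be sketched concretely. The recipe numbers below are invented purely for illustration and are not taken from the course described.

```python
# Hypothetical recipe matrix: each row gives the pounds of one ingredient,
# each column the amount needed per jam tart or per muffin (made-up numbers).
recipe = [[0.1, 0.2],   # sugar: per tart, per muffin
          [0.2, 0.3]]   # flour: per tart, per muffin

def matvec(m, v):
    # Apply the linear transformation m to the vector v (matrix-vector product).
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

order = [30, 20]  # one shop orders 30 tarts and 20 muffins
needs = matvec(recipe, order)
print(needs)  # pounds of sugar and flour needed for this order
```

Composing two such transformations (orders of several shops, then ingredients) is exactly the multiplication of matrices mentioned in point 4.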

### The lead, sentence by sentence

• 1. "characteristic number" has a specialized technical meaning in algebra that is irrelevant to this article.
• 2. "volume" is being used here with a specialized meaning; for an explanation, the article links to measure theory, which is outside the content of the Kreyszig book (see collapsed data above) and many other math texts for non-mathematicians.
• 4. "straight forward" -- to whom?
• 5. The feedback to my request for a reference supporting the second sentence claims it is verified clearly in the body of the article. WHERE???
• 6. "are important both in calculus where they appear in the Jacobian ... and in multilinear algebra". This suggests that besides an incomprehensible (to the common science graduate) use that needs knowledge of measure theory to understand, the two other uses are for Jacobian in calculus (ignoring other uses in calculus) and in multilinear algebra, ignoring the VAST utilization in linear algebra that is the basis of its central role in scientific and engineering computation.
• 7. The lead now goes on to use further terminology that is completely unnecessary and off-putting to readers who want the kind of information in any of the sources I cite in the collapsed data above (except Artin's book).

The societal need to make mathematics intelligible has a considerable literature, that includes reports of the National Academy of Sciences, educational agencies and educational literature. Need WK be responsive? Michael P. Barnett (talk) 02:38, 21 May 2011 (UTC)

The meaning of "characteristic" here is the typical dictionary meaning:
"1. Also, char·ac·ter·is·ti·cal. pertaining to, constituting, or indicating the character or peculiar quality of a person or thing; typical; distinctive: Red and gold are the characteristic colors of autumn."
You are welcome to try and say it better but there is no jargon there.
I agree having measure there is overkill. I can see a problem with just saying volume, as it is area for 2-dimensional matrices, hypervolume for n-dimensional ones, and just a scale factor for a scalar. The standard term is just volume for the lot, but it might be better to expand a bit here.
The first few bits of articles should be pitched at a level where people with a grasp of most of the basics for an elementary introduction can read it to get the basics. The lead may contain some more advanced material at the end as it has to also summarize the article.
Straightforward to practically anyone who will be able to grasp the basics.
By practically any book on it in the references I'd have thought, do you actually really doubt this or what's your point? Why do you think they occur in the denominator when inverting matrices or why there are all those pictures of skewed parallelograms or parallelepipeds about?
The bit about the Jacobian is part of the statement of notability for the topic, all topics should say why they are notable in the lead. As far as I'm aware the uses as stated are the major reasons why determinants are useful besides just being used as an intermediary in inverting matrices which is hardly a reason for major notability. There's lots of other important determinants but the Jacobian is by far the most important in engineering and science.
I agree the second paragraph should be moved to the end of the lead as it is a summary of more advanced uses and should be marked as more advanced.
Are you sure you're not conflating the large treatment of matrices in books with the treatment of determinants? Dmcq (talk) 12:16, 21 May 2011 (UTC)
OK I volunteer to be the dummy. I'm an engineer, my work is in engineering / scientific fields, I devour technical writings. I learned a little about matrices and determinants in school and forgot it all and never used it. I just read the lead and learned absolutely zero / got no clue from it regarding what a determinant is. IMHO it is just a jargon-loaded dance around the edges of it without really defining what it is. In order to not jeopardize my dummy status, I'll avoid reading and learning the body of the article.  :-) North8000 (talk) 12:34, 21 May 2011 (UTC)
[Reply to initial post, pushed down below other replies due to edit conflict] Dear Michael P. Barnett, please allow me to suggest sincerely that you are putting too much effort into this issue. I sympathise, but so much text on a talk page makes it virtually impossible to reply (it also risks errors, as in your computation of a 3×3 determinant that by inspection should have given 0, but which WP policy does not allow me to correct). I'll try to reply just to the point-by-point list.
1. "characteristic" is of course a silly word. It should go.
2. "volume" is being used with its everyday meaning, assuming the space is of dimension 3. The link to measure theory could be useful (depending on the quality of that article) for those wondering how volume can be defined in general, but this is not a prerequisite to understanding this sentence. I already commented earlier on unfortunate aspects of the "measure of volume" phrase, and would welcome improvement. If determinants have been around since the 1700s (which was news to me, and makes them about twice as old as matrices), concerns about area and volume have been around since the earliest signs of civilization (of course linear transformations are not quite so old).
3. I don't really understand which part of the lead is targeted by this question. The answer is of course anybody interested in determinants and their use, including but not limited to students of science and engineering.
4. "straight-forward" is a kind of apologetic term that usually contributes little. This case is no exception. But calling "this transformation multiplies area by 2" a straight-forward statement does not shock me.
5. I don't recall any discussion about verification of the second sentence. Do you doubt that it is correct? The body of the article does discuss this point under "Applications", as I think I mentioned before. That is about as late in the article as the cited sentence is early, very curious. But that was not true the first time I mentioned this, a month ago or so.
6. The question suggests that understanding the fact that area or volume are multiplied by some factor is incomprehensible to the common science graduate. This would not be my assessment of their mental capabilities. Why are you so obstinate about this statement? My experience is that it causes absolutely no difficulty. Saying this is incomprehensible suggests that you take an extreme formalistic point of view (yes, defining volume properly in a very general sense is hard, but it is a very intuitive notion that directly inspired differential calculus, and only led to measure theory much later). The multilinear algebra reference is weird; I never really understood what is meant by that. From my personal experience, the major reason for introducing determinants in the undergraduate math curriculum appears to be twofold (I think the sentence attempts to address those two points, in the opposite order):
1. They are needed to define characteristic polynomials
2. They are needed for doing change of variables in multivariate integration
7. Is what you find off-putting the talk about fields and commutative rings? This does address a somewhat more mathematically mature audience, but does touch on the essence of determinants (unlike the volume interpretation, IMO). One could make this more accessible to a broad public by saying that a linear system of equations with as many equations as unknowns is uniquely solvable if and only if its determinant is nonzero. Personally I feel however that viewing determinants as expressions where the coefficients are assumed to lie in a field goes somewhat against their nature; the most fundamental properties of determinants are related to the fact that they only require a commutative ring (i.e., they do not involve any divisions). The definition of the characteristic polynomial is a good example of this; there is absolutely no need to consider rational functions (nor interpreting polynomials as polynomial functions) to understand that definition. I'm not sure how this should be reflected in the lead however.
I'd like to have some other opinions about this though, before tinkering with the lead. Marc van Leeuwen (talk) 12:41, 21 May 2011 (UTC)
I've put in a couple of examples to cut down the impact of field and ring and removed the reference to measure theory. The uniqueness seemed unnecessary in the lead and I moved the symbolism bit a bit up.
There's a bit of wordsmithing that could be done on things like characteristic. I guess if you know a lot of maths you might think it had a formal meaning here, so yes it should go, but what should it be replaced with? The bit about straightforward could go too without any loss, as it is explained immediately, but I haven't figured out how to rephrase that bit either. Dmcq (talk) 13:04, 21 May 2011 (UTC)
I've made some further changes, including some trimming-down, but also mentioning the relation to the solvability of linear systems, which I think is the prime motivation for determinants in linear algebra. I also included a reference to Cramer's rule, which is maybe not that crucial (and is likely to provoke some hostility from the crowd convinced that Cramer's is a bad bad rule, and by contagion determinants are bad), but which I felt was useful to counter the obvious "what's the relevance of determinants if all you care about is whether they are zero or not". Marc van Leeuwen (talk) 14:29, 21 May 2011 (UTC)
The last paragraph already talks about inverting matrices so there's duplication there. Plus I think you've put in rather a lot of padding words which don't really help much. My belief is that you don't need determinants to solve linear equations; they just appear in passing if you solve them using, for instance, Gaussian elimination. Dmcq (talk) 14:43, 21 May 2011 (UTC)
Very grateful to respondents. IMHO article is vastly improved. Have non-contentious (I hope) information about infinite determinants (in series solution of Mathieu equation re lunar motion, Schrödinger equation for two-electron atoms -- and I do mean determinants as well as matrices), symbolic calculation of determinants (the so-called Markov algorithm, actually published earlier by a physical chemist) etc. Sorry about 4 times 8 = 48. Hope you believe it was just exhaustion. More anon, with less verbosity. Michael P. Barnett (talk) 14:59, 21 May 2011 (UTC)
I don't see that anything about atoms or Markov algorithms would be suitable for the article. They just are not really relevant to the topic, they're just some algorithms or uses. There's a mention of infinite determinants at the end. Dmcq (talk) 15:33, 21 May 2011 (UTC)
I consider this edit a serious step backward for the following reasons:
1. The mention of the volume interpretation is back to the place of the second sentence, and volume is all that is mentioned about determinants in the first paragraph (Jacobians being volume as well). It is like there is interdiction to mention anything else.
2. The first two sentences are transformed to the point of incoherence: the determinant is no longer associated to a matrix, which just serves as a supply of coefficients; the linear transformation in the next sentence comes out of the blue.
3. The point of mentioning that determinants can be computed from the entries is to indicate how this happens; the rank of a matrix is also computed from the entries (what else is there?) but not by arithmetic.
4. "Finding a unique solution to a system of linear equations involves inverting a matrix given by the coefficients" is just plain false. I've solved many linear systems either manually or by computer, sometimes by inspection or by substitution methods, often using Gaussian elimination, but hardly ever by inverting a matrix. The opposite claim could be justified: inverting a matrix amounts to solving (several) systems of linear equations, but linear systems and inverting matrices are simply not the same thing, in spite of their well known relation. One does not need linear algebra to understand or solve linear systems (and I think systems of equations are often taught without a linear algebra perspective), which is why I think they should be mentioned separately.
So I think I'll undo this edit and make a stab at obtaining the intended changes in a better way. Marc van Leeuwen (talk) 05:37, 22 May 2011 (UTC)
Just an afterthought, this article does seriously lack discussion of linear systems in the body, which are only mentioned in the history section (and in passing in the, somewhat curious looking, Cramer's rule section). Marc van Leeuwen (talk) 05:43, 22 May 2011 (UTC)

## Another look by the semi-dummy

Further to my comments above, I took a second look. The lead is better, but it and the article are still very weak on defining and explaining "what is a determinant?" I see a lot of other discussions, (uses, sidebar commentary, characterizations, classifications, discussion of special cases and what special cases mean, what they are used for etc. ) but the lead and the article are very weak on defining and explaining what a determinant is. Sincerely, Semi-Dummy Emeritus :-) North8000 (talk) 14:37, 23 May 2011 (UTC)

## Inverse if determinant is non-zero in lead

I removed the bit before about the matrix having an inverse if the determinant was non-zero and put the correct condition into the last paragraph of the lead, but it has been reverted. The matrix can only be inverted if the determinant can be inverted, not just if it is non-zero, so if we're working just with integers the determinant must be 1 or -1 as said in the last paragraph. Also I think the statement with the business about the non-zero in the first paragraph is rather long with latter and former in it. I think one could probably say something interesting instead about the determinant being zero in the first paragraph instead of the invertibility business. Dmcq (talk) 10:50, 22 May 2011 (UTC)

I am very well aware of the fact that a square matrix over a commutative ring is invertible if and only if its determinant is invertible, not if it is nonzero. However to state that one must talk about rings, and about an inverse with entries in the same ring. The large majority of people reading this will be acquainted (if at all) with matrices over real, maybe complex numbers, and in this context it is customary (and correct) to say nonzero, not invertible. Moreover the current statement is strictly correct, since it talks about a linear transformation of a vector space, both of which terms imply scalars in a field (while hopefully it won't distract those who have never heard about fields). The more subtle distinction is correctly made in the final paragraph of the lead, in accordance with the policy of increasing abstraction gradually; I don't see too much of a problem. In the final part of your remark it is not so clear what you propose (apart from avoiding "former" and "latter", which in itself would only increase length); would you prefer to not mention inversion of matrices or transformations at all in the first paragraph? One could move that to the last paragraph, with some care, although I think the notion of invertibility in itself is not that hard to grasp. Maybe you want to put emphasis on the determinant zero case rather than the determinant nonzero case, in the style of "if the determinant of a matrix is zero, then the corresponding linear transformation will map some nonzero vectors to zero (it is not injective) while on the other hand not attaining all vectors (it is not surjective either)". That could be helpful, though not shorter than the current text, nor easier to understand (and the reverse implications, equally important, are missing). Marc van Leeuwen (talk) 14:52, 22 May 2011 (UTC)
I have to agree that recent edits have introduced a lot of verbal padding that makes the prose pretty flabby. We need to avoid the temptation to prolixity! Having said that, I admit I had momentarily forgotten that vector spaces require fields, and I had the same gut reaction as Dmcq. Could we not change the second sentence as follows:
current: The determinant provides important information when the matrix is that of the coefficients of a system of linear equations, or when it corresponds to a linear transformation of a vector space: in the former case the system has a unique solution if and only if the determinant is nonzero, in the latter case that same condition means that the transformation has an inverse operation.
proposed: A system of linear equations has a unique solution if and only if the determinant of the matrix of coefficients is nonzero. Similarly, a linear transformation of a vector space over a field (like the real numbers) has an inverse if and only if the determinant of the corresponding matrix is nonzero.
or maybe: ... of a vector space (over the reals or any other field) ...
Again, farther down:
current: Thus for instance the determinant of a matrix with integer coefficients will be an integer, and the matrix has an inverse with integer coefficients if and only if this determinant is 1 or −1 (these being the only invertible elements of the integers).
proposed: For example, the determinant of a matrix with integer entries will be an integer, and the matrix has an inverse with integer entries if and only if this determinant is +1 or −1 (the only invertible integers).
I also think the preservation of multiplication belongs somewhere near the top of the lede (along the line of my earlier suggestions under "Discussions elsewhere" way above somewhere).
-- Elphion (talk) 17:33, 22 May 2011 (UTC)
I've still got problems with the system of linear equations. They may not be square and there is no requirement that the solutions are in a field. I can see it is an important use but I've not been able to rephrase to anything definite, I think a little handwaving might be called for. Dmcq (talk) 13:48, 23 May 2011 (UTC)
To Elphion, I approve the breakup of the sentence in the first proposition. However, one needs to be careful how it interacts with the context. The original formulation implicitly only talks about "square" systems, because its starting point is the matrix that was clearly supposed to be square; however the alternative proposed can rightly be read to be a statement about arbitrary linear systems, in which case it often makes no sense (determinant undefined), which would justify Dmcq's critique. The second sentence has a similar problem; here the ambiguity could be removed by mentioning a linear operator rather than a linear transformation (but I think linear operator sounds too technical at this point). Finally I really don't like "linear transformation of a vector space over a field (like the real numbers)" which really just says "linear map" (which implies vector space which implies a field, and "like the real numbers" is really not doing much to help those who have never heard of fields, and maybe even never really considered the real numbers). Really, I don't think the "all nonzero numbers are supposed to be invertible" point needs any stressing in the first paragraph (and again, it is implicit in the terms used, which the interested reader can discover in the linked-to articles); it is hard to construe that somebody thinks this is about systems of diophantine equations, which is the simplest context where that assumption would be false.
For the second proposal, I could not bring myself to write that +1 or −1 are the only invertible integers, since this is only true if inversion is considered only within the integers themselves; my somewhat longer formulation tries to force this interpretation. But I agree the alternative would not confuse many.
I am all for mentioning the multiplicative property as a fundamental aspect of determinants. Somewhat along the lines that the determinant is very useful in studying the multiplication of square matrices, as it provides a map to the much simpler (and commutative) structure of multiplying scalars, which respects multiplication (and is essentially the only map with this property). This requires much effort for a good formulation though. Marc van Leeuwen (talk) 06:35, 24 May 2011 (UTC)
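The two facts under discussion here, that the determinant respects multiplication and that an integer matrix has an integer inverse exactly when its determinant is ±1, can be checked on a small concrete case. The matrices below are arbitrary illustrative choices, not examples from the article.

```python
def det2(m):
    # Determinant of a 2x2 matrix [[a, b], [c, d]] = a*d - b*c.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    # Product of two 2x2 matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]   # det(A) = 1, an invertible integer
B = [[3, 5], [1, 2]]   # det(B) = 1

# The determinant respects multiplication: det(AB) = det(A) * det(B).
assert det2(matmul2(A, B)) == det2(A) * det2(B)

# Because det(A) = 1, A has an inverse with integer entries:
A_inv = [[1, -1], [-1, 2]]
assert matmul2(A, A_inv) == [[1, 0], [0, 1]]
print("checks passed")
```

Over the reals every nonzero determinant is invertible, which is why "nonzero" suffices in that setting, as the discussion above explains.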

## Use of |A| notation

The mention of a notation "|A|" instead of "det A" for the determinant of a matrix named A has been added to and removed from this article a couple of times in the past. I've now put a {{fact}} tag on it. At the time I made some effort in trying to find the notation in the literature, and did not come across any, while the other notations mentioned here abound (notably the use of vertical bars around the matrix entries, possibly in a condensed form |ai,j|, but that is not the same as |A|). Of course I could have easily missed some text that does use it, but my impression is that it is not a common notation for determinants. The mentioned use is potentially confusing with the use of bars for (operator) norm or absolute value. The latter cannot be directly applied to a matrix, but consider the absolute value of a determinant, which is frequently used (our opening paragraph discusses it, also consider Jacobian determinants in integration); should it be written "||A||" or "|(|A|)|"? Neither is very attractive, compared to "|det A|". (I realise that the same could be said about absolute values of determinants of matrices written out in full, but the need in such cases of an absolute value is not frequent.) So I seriously doubt that the notation "|A|" is sufficiently common that we should mention it in the lead of this article; WP should not be used as a means to promote rare notational practices. Marc van Leeuwen (talk) 05:48, 4 June 2011 (UTC)

I'm surprised you found it difficult to find as it is quite common and I've used it myself. I believe other people have made your complaint about it so I'll see if I can find a citation with such a complaint. Personally I have never had to get the absolute value of a determinant so I have never come across that problem. Dmcq (talk) 09:26, 4 June 2011 (UTC)
I've cited something on Google Books that talks about this; it says that fortunately it will usually be clear from the context which is intended. Dmcq (talk) 09:43, 4 June 2011 (UTC)
Thank you Marc for your contribution. I inserted the notation |A| not because I have seen it in the literature, but because I found it in MathWorld. I know that MathWorld is not a very reliable source, but the article already stated that the two vertical bars were sometimes used "especially in the case where...", which clearly means they can be used also (less frequently) in other cases, such as |A|. However, my first goal was to correct the poor logic of that sentence (see my last edit summary). If you don't mind, in the future try not to undo an edit only because you don't like a part of it. Paolo.dL (talk) 14:51, 4 June 2011 (UTC)
It's a common notation, perhaps more in actual use than in texts, which probably favor "det" to keep things as unambiguous as possible. But I've seen it at least mentioned in many linear algebra texts, including Lang's classic. -- Elphion (talk) 15:00, 4 June 2011 (UTC)
It is interesting to quote the cited reference explicitly:

The determinant of a matrix A is sometimes also denoted |A|, so for the 2×2 matrix ${\displaystyle A={\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}}}$ we may also write

${\displaystyle |A|={\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}}=a_{11}a_{22}-a_{12}a_{21}.}$
and then goes on (as far as I can tell from the displayed pages) never to use the notation |A| again in the roughly 20 pages about determinants that follow, while both "det A" and bars around displayed entries are legion. For instance, the multiplicative property is stated as
${\displaystyle \det(AB)=(\det A)(\det B)\quad {\text{rather than as}}\quad |AB|=|A|\,|B|}$
(p 272) although the latter looks less cumbersome. So agreed, you have found a book that states that |A| can be used to denote the determinant of A, but the book also demonstrates that this is in fact hardly ever done. The two occurrences of |A| that it actually contains seem to be just an introduction to the notation actually used, pretending that it consists of writing vertical bars around the matrix (rather than replacing brackets by bars). I still believe there is no strong case for mentioning this notation in the lead. Marc van Leeuwen (talk) 15:19, 4 June 2011 (UTC)

(outdent) I'm sorry to have to point this out, but this really goes over the line. The book mentions a usage, and then does not use it itself. From this you conclude that this "demonstrates" (your term) that this is hardly ever done, presumably anywhere else? This logic is not exactly sound. I assure you that it is a common usage -- I see it all the time in the classroom and the seminar room, and several texts mention it even if they do not subsequently use it (perhaps, as I suggested above, to avoid ambiguity in a situation where the reader cannot seek clarification from the author). Your suggestion that authors introduce |A| only to prepare the stage for the "actual" use smacks of special pleading. Any author actually doing that would indicate that the "special notation" is not standard -- but this is not the case. -- Elphion (talk) 15:51, 4 June 2011 (UTC)

For heaven's sake, I selected that citation specially because it had a comment agreeing that it could be confused with absolute value. Why on earth would they then use it after they said such a thing? Do you want me instead to find a citation which uses it often? I don't think I'll find one that makes that sort of comment and also uses it often! Dmcq (talk) 17:13, 4 June 2011 (UTC)

The reason I commented on this book is that it was offered as evidence that "the notation |A| sufficiently common..." (see above). So I did not choose it, but it seems reasonable that I considered it in the light of that question. If anybody can find a reference that uses |A| regularly, that can replace the current footnote and settle the matter (it need not make remarks about absolute value). I really don't want to make a fuss about it, just to clear up the matter. If I'm being a bit insistent, that may be because on another WP page a notation was recently thrown out after discussion as being insufficiently widespread, even though its use and origin were well established by notable authors in a notable book, which is "only" about 25 years old. Marc van Leeuwen (talk) 18:06, 4 June 2011 (UTC)
I think the Poole reference is perfect. Look, we've provided citations indicating that |A| is a recognized notation. It is, as you imply, an older usage, but it is still current, as the citations show, and something that readers will encounter in the literature. A quick non-scientific sampling of my books (at least those not in storage) suggests that it is more common among analysts than algebraists, and in applied fields than pure -- but still used. If you want to supply a reference stating that it is currently deprecated by the scientific community, go for it. -- Elphion (talk) 18:28, 4 June 2011 (UTC)
PS -- (re Marc's example of underlined exponents). Indeed, I've never seen that notation before. Combinatorics is not my field, but I do use binomial coefficients, and that looks like a useful notation. If it is established in reputable authors, I see no problem with introducing it. In any event, |A| is (in my experience) far more common, and is mentioned in elementary texts. -- Elphion (talk) 20:32, 4 June 2011 (UTC)
The main problem there was that there is a much more common notation for doing the same thing. I think a couple of other people have copied the notation in the book mentioned there so possibly it may become common in the future, however it's not Wikipedia's job to proselytise for anyone. Dmcq (talk) 20:39, 4 June 2011 (UTC)
Well, I'll show how I did this. I selected Google Books, put in the search query "determinants matrices", and looked at the first three books it would let me preview, searching for 'adjoint' in each. I was most surprised, as I expected to have to search around, but all three used the |A| notation instead of det A.
The three, after searching for 'adjoint', were: [2] page 26, [3] page 38, [4] page 275. I don't feel like sticking them in instead, but I think it illustrates that the notation is not strange. Dmcq (talk) 20:39, 4 June 2011 (UTC)
OK, that's fine, I'm convinced the notation is used a lot. Just to have a last word, I'll note that the examples were found by searching for a wrong term: all these books are calling "adjoint matrix" what is in fact (according to WP) an adjugate matrix. But that's a different discussion. Marc van Leeuwen (talk) 20:54, 4 June 2011 (UTC)

## Definition

Looking at the definition section I see sentences such as

The determinant of a square matrix A, one with the same number of rows and columns, is a value that can be obtained by...

or

The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula.

Neither of these is a rigorous mathematical definition. I think the rigorous general definition should be at the top of the section named "Definition". Afterwards, name equivalent definitions and then give some examples for 2x2 matrices etc. — Preceding unsigned comment added by 88.74.35.205 (talk) 09:15, 22 July 2011 (UTC)
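For concreteness, the Laplace formula named in the quoted sentence can be stated as a short recursion; a minimal Python sketch (illustrative only, with no claim to efficiency):

```python
def det_laplace(A):
    # Laplace (cofactor) expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

# recovers the familiar 2x2 formula a11*a22 - a12*a21
assert det_laplace([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3
```

The Leibniz formula (a signed sum over all n! permutations) gives the same values; the cofactor recursion is simply a regrouping of its terms.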

I disagree. The lede gives a description, not a definition, which is appropriate for the lede. The article gives a more formal definition. Reproducing that in the lede would make it overlong and not particularly useful for people who are just trying to get an idea about what determinants are. -- Elphion (talk) 20:08, 22 July 2011 (UTC)

The problem with putting a general definition at the start is that there won't necessarily be agreement on whether it is "the" general definition, and it would be too much for newcomers to digest at first sight. But other generalizations certainly deserve a mention, so the definition section is inevitably going to be quite long. The current order of the definitions (2x2, 3x3, nxn, generalizations) also seems consistent with the early historical development of the theory. I've added a section on the implicit and geometric definitions, as well as a short overview that aims for a non-technical explanation and motivates the various competing definitions. I hope someone can add information on less well-known topics such as determinants for rectangular matrices or matrices with entries that are non-commutative or divisors of zero. BPets (talk) 08:37, 3 September 2011 (UTC)

I'm sorry to say, but your edits, while no doubt in good faith, have made the definition section into a complete mess. There is confusion between definition, characterization and computational methods, and some of the things that aim at generality are just nonsense. A definition should fix the object it defines unambiguously; apart from being transparent and concise (if possible) it does not need to meet any criteria of efficiency. A characterization serves to make sure one has the object one intends (if certain properties can be verified), but does not in itself say what that object is. A computational method is any way to compute a given (defined) function; it could use all kinds of tricks and properties that are not directly related to the definition of the function. Here is a list of things I find the current definition mixes up:
1. "arbitrary square matrices of real or complex numbers, such as 2-by-2 and 3-by-3": being 2-by-2 or 3-by-3 contradicts "arbitrary", and "real or complex" is totally irrelevant in these explicit formulas
2. "typically arise as expressions..." has nothing to do with these definitions; they do pretty well by themselves
3. "for computational and theoretical purposes it is often more convenient to state the definition in terms of its essential properties" is the confusion I hinted at: for computation one does not need to replace the definition, on the contrary one needs a definition to be sure that what is to be computed is something definite. One can use certain properties (which characterize the determinant) to avoid the lengthy evaluation of the defining expression, but this does not replace the definition.
4. "For example, the equivalence of the implicit and explicit definitions holds true when the matrix entries are expressions from a division ring with unity (such as polynomials)" is messed up; in fact a definition and characterization valid over commutative rings can be given easily (for the latter see remarks in the "Exterior algebra" section) but "division ring with unity" is not the right notion: division is irrelevant to the definition of the determinant, but commutativity (the lack of which distinguishes division rings from fields) is essential (and having a unity is required only to define the 0×0 determinant); finally polynomials do not define a division ring.
5. "Special care is required, for example, if the matrix elements are themselves matrices (which arise when defining the characteristic polynomial) or not commutative or divisors of zero, or if the matrix is rectangular or infinite." Determinants of matrices with matrix entries are not defined (unless you just consider everything as one large matrix with scalar entries, have luck that it is square, and take its determinant). Matrices with matrix entries do not arise when defining the characteristic polynomial (entries are polynomials there). Non-commutative entries: no determinant defined. Divisors of zero: no problem at all. Rectangular matrices: no determinant defined (none of the definitions even suggest what to do for this case, other than to make the determinant identically zero). Infinite matrices: no determinant defined algebraically (in very rare cases analysis can produce some sensible limit, but this goes way beyond "special care is required"). Total score: 0 points.
6. Geometric definition. I won't go into details but this is neither geometric (in spite of the recurrence of the phrase "volume of the parallelogram", it is purely algebraic) nor a definition (it is basically the characterization as n-linear alternating form taking unit value at the chosen basis that follows just below it).
Since there does not seem to be much to recover easily from it, I think the edit is best reverted. Marc van Leeuwen (talk) 15:02, 3 September 2011 (UTC)

## No determinants over a non-commutative ring

For matrices over non-commutative rings, the equivalence remains true provided the scalar multiplication is consistently applied (say, on the left); this can be verified by a similar process[1].

1. ^ Lang, Serge (1969). "Section 13.4". Algebra (1st ed.). Addison-Wesley.

This surprised me a lot, notably the claim that Lang would write such nonsense. He doesn't: chapter XIII clearly states that throughout R will be a commutative ring (my emphasis), and there is absolutely no mention of non-commutativity of scalars or the distinction of left and right multiplication in section 4 of that chapter. The sentence "the equivalence remains true" does not even make sense until a determinant is defined in this setting, which would require a specification of the order of products in (for instance) the Leibniz formula (and which would make the equivalence fail). Would it be too much to ask checking whether a reference actually supports a claim before inserting the claim and the reference into an article? To avoid all doubt, I'll explain why over a non-commutative ring there is no such thing as an n-linear form on matrices taking the value 1 on the identity matrix, for n ≥ 2. Here the linearity in the columns is taken in the same sense, say left-linear: multiplying any column on the left by a scalar should multiply the result of the function on the left by the same scalar. Now let a, b be two non-commuting scalars. Then

${\displaystyle ab=ab\left|{\begin{matrix}1&0\\0&1\end{matrix}}\right|=a\left|{\begin{matrix}1&0\\0&b\end{matrix}}\right|=\left|{\begin{matrix}a&0\\0&b\end{matrix}}\right|=b\left|{\begin{matrix}a&0\\0&1\end{matrix}}\right|=ba\left|{\begin{matrix}1&0\\0&1\end{matrix}}\right|=ba,}$

contradicting the assumption on a, b. The best one can do in a non-commutative setting is define a function on 2×2 matrices that is (say) left-linear in the first column and right-linear in the second column (the function ${\displaystyle {\tbinom {a~c}{b~d}}\mapsto ad-bc}$ is an example), but even then it would not be alternating, nor linear of any kind in the rows. There are fruitful attempts to define some substitute for determinants in a non-commutative setting (which for instance allow special linear groups to be defined for the quaternions; interesting detail: it is of real codimension 1 in the set of all matrices, not 4 as one would expect by analogy with the complex special linear groups), but it is not by way of an expression taking values in the ring, as in the commutative case. Marc van Leeuwen (talk) 10:04, 12 September 2011 (UTC)
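The order-dependence at the heart of this argument can be made concrete with 2×2 integer matrices standing in for non-commuting scalars (the particular matrices below are my own illustrative choices): even a single Leibniz-style product such as ad depends on the order of its factors, so an expression like ad − bc is not well defined without fixing an ordering.

```python
def mul(X, Y):
    # 2x2 matrix product: our stand-in for multiplication
    # in a non-commutative ring
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# two non-commuting "scalars"
a = [[0, 1], [0, 0]]
d = [[0, 0], [1, 0]]

# a*d and d*a differ, so "ad - bc" versus "da - cb" already
# give different answers for the same 2x2 matrix of entries
assert mul(a, d) != mul(d, a)
assert mul(a, d) == [[1, 0], [0, 0]]
assert mul(d, a) == [[0, 0], [0, 1]]
```

This only illustrates the obstruction; the displayed chain of equalities above is the actual proof that no left-linear alternating form can exist.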

## New mnemonic

What do people think of this alternative mnemonic for a 3 × 3 (only) determinant? It's referenced, and perhaps clearer about the signs and permutations of the expanded terms than the current one (no offence to the author of that diagram...).

Visual mnemonic for a 3 × 3 (only - no other order) determinant, to help obtain the correct sign and permutation of expanded terms. (REF TO ADD: Linear Algebra, S. Lipschutz, M. Lipson, Schaum's Outline Series, McGraw Hill (USA), 2009, ISBN 978-0-07-154352-1)

--Maschen (talk) 00:01, 4 December 2011 (UTC)

If you're going to stick in arrows you should at least put the terms in the order encountered on the arrows. The order the terms are written in seems better than the order in the diagrams. I think it is a bad mnemonic if they ignore it and use some other system of their own. Dmcq (talk) 00:34, 4 December 2011 (UTC)

Thanks for feedback. I made the change (in the same place as above). Any chance now?--Maschen (talk) 08:28, 4 December 2011 (UTC)

Facts can be verifiable but still trivia so having a reference isn't sufficient for inclusion. Mnemonics in general have, at best, marginal encyclopedic value (see WP:NOTTEXTBOOK), so I'd like to see at least two independent sources to show that someone beside the person who made it up thinks it's useful. If it's in common use then it shouldn't be too hard to find more than one source. Btw, the diagram shown only works for 3x3, the 2x2 rule is slightly different and may need a separate image.--RDBury (talk) 12:37, 4 December 2011 (UTC)

You're correct about the 2 × 2 determinant; I seem to have looked at the paths incorrectly - the caption has been repaired (but the 2 × 2 det is trivial anyway - no real need for an image).

I included the reference since that's where I found it, and it's in a renowned series (Schaum's), so I fail to see how it can be hard to find. Also, the reference is not for persuasion or to "add value for inclusion" etc.

This was just a suggestion by the way - I'm not forcefully trying to include it. If anyone wants/allows it - take it and use it. If not - leave it and forget it...--Maschen (talk) 13:25, 4 December 2011 (UTC)

This looks like a different visual presentation of Sarrus' rule to me. It corresponds roughly to what I do mentally for a 3×3 determinant, so it's fine with me, but it does not seem to be much of a big deal. Another mnemonic that works for n=3 (but not for any other n (except n=0 :-)) is that the sign is negative iff exactly one main diagonal entry is chosen. Marc van Leeuwen (talk) 13:34, 4 December 2011 (UTC)
I find the diagram that's already in the article more intuitively clear. -- Elphion (talk) 15:31, 4 December 2011 (UTC)
Same here, and in addition I wonder if even that might give the erroneous impression that 4th-order determinants follow the same pattern; it'd be better if they got to this via the adjugates first, I think. However, when push comes to shove, what matters is whether there is a lot of support for it out there or not. In this, though, while I think Schaum certainly gives very good evidence, I'd like to see two instances, as the circular path doesn't strike me as something that good as a mnemonic. Dmcq (talk) 17:54, 4 December 2011 (UTC)
The article already warns the reader that the Rule of Sarrus does not generalize, though perhaps we should say it louder. I disagree about introducing adjugates first; the typical reader is interested primarily in 2 and 3 dimensions, and those should be covered first with their simpler rules. The general rule will just scare people away. (I'm always struck by how our natural tendency to generalize right out of the gate manages to make us generally unintelligible!) -- Elphion (talk) 19:49, 4 December 2011 (UTC)
I found another source for the diagram, see Thomas Muir's classic text. Also there are probably those who prefer this to expanding the matrix as in Sarrus. It might be better to add the image to Rule of Sarrus rather than this article though.--RDBury (talk) 16:47, 5 December 2011 (UTC)
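For reference, the six terms the mnemonics above encode are just the Rule of Sarrus written out; a minimal Python sketch (illustrative only):

```python
def det_sarrus(A):
    # Rule of Sarrus (3x3 only): three "down" diagonals, wrapping around,
    # minus three "up" diagonals
    return (A[0][0] * A[1][1] * A[2][2]
          + A[0][1] * A[1][2] * A[2][0]
          + A[0][2] * A[1][0] * A[2][1]
          - A[0][2] * A[1][1] * A[2][0]
          - A[0][0] * A[1][2] * A[2][1]
          - A[0][1] * A[1][0] * A[2][2])

# Marc's n=3 sign mnemonic holds here: each negative term uses exactly
# one main-diagonal entry (e.g. -A[0][0]*A[1][2]*A[2][1] uses only A[0][0])
assert det_sarrus([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
```

Writing the terms in this fixed order also makes it easy to match them against whichever arrow diagram ends up in the article.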

## C++ code

The C++ implementations recently added by Mkhan3189 (talk · contribs) look like spam links to me. Should the article include any such code directly? And do these particular implementations come with any vetting regarding numerical accuracy or stability? (Much of this code was also added to LU decomposition -- even if we keep the code, a link here ought to suffice.) -- Elphion (talk) 15:00, 15 May 2012 (UTC)

I agree, this material is quite inappropriate. I'm not even sure if a link is appropriate. McKay (talk) 03:12, 24 May 2012 (UTC)
I've removed the code in question. This has come up in other articles before. I doubt the user meant any harm, but robust, tested implementations of common algorithms such as these are freely available on-line under a variety of licenses and in a variety of languages. Wikipedia is a particularly poor place to put source code, precisely because it is a wiki -- even if the code is correct and compiles when you add it to an article, not every editor is an equally good programmer and subsequent edits to the source code are likely to break it. So that's one issue. The other is that languages have aficionados, and if you implement algorithm X in language Y someone will eventually come along and think, "Wouldn't it be good to have this same code in language Z? After all, language Y sucks." These articles quickly become dominated by varying implementations of the algorithm, many of which don't actually work, and which don't really further a casual reader's knowledge of the concept in question at all.
As relates to this particular implementation, it depended on the definition of a matrix class that was not defined, so it didn't even compile. This is part of the reason that algorithms are best implemented in simple pseudocode, rather than in a particular language, particularly one like C++ that doesn't have a matrix type in its standard library.
So for all these reasons I've simply nuked the code in question. If the original user or someone else wants to add a link to a well-tested, well-supported C++ library that implements determinant calculation by LU decomposition, well, that's a different story altogether. But WP is not a programmer's cookbook. Eniagrom (talk) 12:39, 6 August 2012 (UTC)

## Alternating form section

The new section, "Transformation on alternating multilinear n-forms", seems like a restatement of the definition using exterior algebras. It's also unsourced, so I'm wondering if it's OR. I'm leaning toward deleting it but am willing to listen to arguments to keep it first. RDBury (talk) 10:47, 6 January 2013 (UTC)

I share your concern, but think there is an element of merit that should be considered. In particular, the new text seems to be merely a particularly neat (and understandable) summary of that subsection, which if rephrased better would not benefit from such a summary.
On a related point, the Exterior algebra subsection contains a statement "...considering all but one column of A fixed..." that IMO does not belong there at all, as it relates to matrices and not to the exterior algebra per se, unless it is a very long lead-in to "This fact also implies that every other n-linear alternating function...". I would prefer to see Exterior algebra reworded in a way that does not relate back to matrices (i.e. it should use a coordinate-free approach throughout).
In summary, I think the Exterior algebra should be rewritten to be coordinate-independent (and preferably more concise) taking its cue from the new subsection, and the new subsection should vanish. — Quondum 13:36, 6 January 2013 (UTC)

## Extension of the Rule of Sarrus

See this unsourced edit, claiming the Sarrus rule extends to n × n matrices, and changes made to the 4 × 4 determinant. Is it true? Thanks, M∧Ŝc2ħεИτlk 07:12, 1 April 2013 (UTC)

I'm concerned by the reference "Ramazi, p., Shoeiby, B. and Abbasian, T. (2012) The extension of Sarrus’ Rule for finding the determinant of a 4×4 matrix. The American Mathematical Monthly. April V." added by an anonymous IP editor [5]. No such article is listed in the table of contents [6], and none of the three authors has ever been reviewed by ZMATH. Deltahedron (talk) 08:32, 1 April 2013 (UTC)
True. The only book on linear algebra I have to hand is Schaum's; it states there is no analogue for n × n matrices with n > 3. That's not likely to be wrong... M∧Ŝc2ħεИτlk 09:18, 1 April 2013 (UTC)
Schaum probably meant that an easy-to-remember rule like this is not possible, since the number of terms grows so quickly. But the terms in the sum for the determinant are all just products of diagonals; the trick is arranging the columns in a scheme so that you can find all of them. So in principle the Rule of Sarrus can be extended; in practice it's just too tedious. -- Elphion (talk) 13:14, 2 April 2013 (UTC)
The cited article does not seem to exist. Not only is it not in the Monthly, but I can't find an article by that title anywhere. Sławomir Biały (talk) 11:03, 1 April 2013 (UTC)
The source I used was the very article linked, published in the American Mathematical Monthly. I read the article in question after developing a very similar method for finding the determinant of matrices. (I can provide a link to an image of the article, though I am unaware as to the legal requirements for doing so. I am uninterested in becoming bogged down in some sort of legal dispute. I will link to the image here, hosted on an external site for the purpose of discussion. http://imgur.com/a/EraXu The first image in that album is the article, the second an illustration of the method applied to 5x5 and 6x6 matrices).
In the article, a rigorous proof is not provided, but it can be verified by performing the standard method for obtaining a determinant of a matrix with many rows.
I am unable to provide a rigorous proof of this method, but it can be observed through the use of mathematical software capable of performing matrix operations with variables. I used Mathematica.
The article is available on JSTOR, in the April Edition of the Journal, under the section "Back Matter" I also had difficulty in finding it, but it is there. http://www.jstor.org/stable/10.4169/amer.math.monthly.119.04.bm E290341 (talk) 23:44, 1 April 2013 (UTC)
The JSTOR link cited points to an image of a brief note by Rittaud and Vivier on a different subject. Deltahedron (talk) 06:29, 2 April 2013 (UTC)
It is there, look at page 3 of the PDF file. It is a letter to the editor, half a page, not an article. I don't think it belongs here. A reference (but not more than that) might be justified at Rule of Sarrus. McKay (talk) 07:44, 2 April 2013 (UTC)
Oh. I don't have access beyond page 1. Deltahedron (talk) 17:35, 2 April 2013 (UTC)
Agreed: the extension is not a practical rule. Worth a footnote at Rule of Sarrus (with a proper bibliographic entry, not the hash given earlier here), but not here. -- Elphion (talk) 13:14, 2 April 2013 (UTC)
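To back up the point that the 3 × 3 diagonal picture does not carry over verbatim, here is a small Python check (my own illustrative counterexample, not taken from the cited note): the eight wrapped diagonals of a 4 × 4 matrix cover only 8 of the 24 Leibniz terms, and can even assign one of them the wrong sign.

```python
def det_laplace(A):
    # true determinant via cofactor expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det_laplace([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def sarrus4_naive(A):
    # naive "Rule of Sarrus" for 4x4: wrapped down-diagonals
    # minus wrapped up-diagonals
    n = 4
    down = sum(A[0][j] * A[1][(j + 1) % n] * A[2][(j + 2) % n] * A[3][(j + 3) % n]
               for j in range(n))
    up = sum(A[0][j] * A[1][(j - 1) % n] * A[2][(j - 2) % n] * A[3][(j - 3) % n]
             for j in range(n))
    return down - up

# permutation matrix for (1 2)(3 4): an even permutation, so det = +1,
# but its entries lie on a wrapped "up" diagonal, which the naive
# rule counts with a minus sign
P = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
assert det_laplace(P) == 1
assert sarrus4_naive(P) == -1  # wrong sign
```

So any workable 4 × 4 extension has to rearrange columns into several schemes, as Elphion describes, rather than just wrapping the one matrix.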

"The determinant provides important information when the matrix is that of the coefficients of a system of linear equations" — Preceding unsigned comment added by 92.84.114.170 (talk) 16:47, 28 June 2013 (UTC)

Not bad grammar, but awkward writing. Revised -- Elphion (talk) 20:19, 28 June 2013 (UTC)

The lede (2nd sentence) states: "It [the determinant] can be computed from the entries of the matrix by a specific arithmetic expression, while other ways to determine its value exist as well." I have a couple of problems with this. The easiest one is whether the "other ways" clause is in any way useful or meaningful. (You can "determine" its value by inspection or comparison, but these seem so obvious as to not be helpful.) The second problem I see is the phrase "a specific arithmetic expression". An expression doesn't compute anything. (One USES the expression to compute...) Worse, there IS no "specific" expression (depending on what is meant by the term) which concretely and specifically expresses all additions & multiplications required for all matrices (of any size). The specific expression depends on the number of elements in the matrix under consideration. Isn't the term 'algorithm' better here? There is a GENERAL 'algorithm' which can be used to compute the value of any determinant, although there are many more efficient special-case algorithms also. I also wonder whether the determinant is (as claimed in the first sentence of the lede) a value? It certainly is if the matrix is composed of numbers - but what if it is composed of vectors? functions? operators? (any abstract mathematical object)? There are also matrices which are composed of more than one type of object (for instance both a set of variables and their numerical coefficients in some set of linear equations). Although these are generalizations of the definition of what a basic matrix is, not allowing the generalizations to enter into the explanation is equivalent to claiming, in an explanation of numbers, that they are (all) integers. 173.189.76.20 (talk) 15:11, 13 August 2014 (UTC)

## Geometric interpretation section

I have temporarily hidden this newly added section because it is a) poorly written, b) not geometric in nature, c) very trivial, and d) unsourced and likely to be WP:OR. I could be wrong, so I didn't just delete it. Other opinions? Bill Cherowitzo (talk) 03:04, 11 September 2014 (UTC)

Agreed. It could have been intended as a geometric interpretation of specific properties of the determinant, but it got the illustrations wrong, and in any event would add little of value. The impression one gets is that the article is being treated as a sandbox to develop ideas in the absence of a source. —Quondum 06:13, 11 September 2014 (UTC)

## Property #12 is WRONG!

The text says: "Interchanging two columns of a matrix multiplies its determinant by −1." This is so wrong as to make me suspect vandalism. This paragraph goes on to call the interchange a permutation (I don't know why it uses that term - "a permutation of the rows" is a bit ambiguous - does it mean a permutation within a row (or column) or a permutation of the rows (or columns)? Yeah, I know it means the latter, but I'm talking about what a typical reader might think). Any matrix can be thought of as either a set of rows Ri or a set of columns Cj (of course i = j if it's square). Anyway, the permutation switching any two Ri's (or Cj's) results in a determinant with the same magnitude and whose sign is sgn(i,i'). Switching two adjacent rows or columns results in multiplication of the determinant by -1. I will fix it, but feel free to reword my attempt in order to improve its clarity, simplicity, and elegance. Abitslow (talk) 16:42, 14 January 2015 (UTC)

The property was not incorrect and you provided the correct reason why this is so. Any interchange is a transposition (as a permutation) and applying a transposition changes the sign of a permutation, so interchanging any two columns will change the sign of every permutation and hence multiply the determinant by -1. The argument for rows follows by taking the transpose. I also changed your reference of property 8 to property 9, as 8 didn't seem correct. If I am wrong in this, it probably means that the comment needs more amplification. Bill Cherowitzo (talk) 18:47, 14 January 2015 (UTC)
This is correct: interchanging any two columns (rows) involves an odd number of adjacent interchanges. E.g.:
ABC to CBA (interchange A and C):
ABC => BAC => BCA => CBA (3 adjacent interchanges)
More generally, you have to use some number of adjacent interchanges to move the first column up to the second, interchange them, then move the second column down the same number you moved the first one up. So it's always an odd number.
-- Elphion (talk) 19:07, 14 January 2015 (UTC)
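The disputed property is also easy to check mechanically; a small Python sketch swapping two non-adjacent columns of a sample 3 × 3 matrix (illustrative only):

```python
def det3(A):
    # direct 3x3 cofactor expansion along the first row
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def swap_cols(A, i, j):
    order = list(range(len(A[0])))
    order[i], order[j] = order[j], order[i]
    return [[row[c] for c in order] for row in A]

A = [[2, 0, 1], [1, 3, 2], [4, 1, 1]]
# columns 0 and 2 are not adjacent: the swap decomposes into
# 3 adjacent interchanges (an odd number), so the sign flips
assert det3(swap_cols(A, 0, 2)) == -det3(A)
```

One matrix is of course not a proof, but it shows the property as stated (any two columns, adjacent or not) holding exactly as Bill and Elphion argue.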

## Symbol / Convention confusing?

I'm not familiar with wikipedia formatting guidelines, but it is confusing to me that Matrices are A in text, but ${\displaystyle A}$ in formulas. — Preceding unsigned comment added by 141.3.42.183 (talk) 18:21, 7 April 2015 (UTC)

Yeah, the font distinction is a little problematic. It is that, or sizing/placement problems. There is another intermediate format for inline use: A, which is used in some articles and should be more similar to the format in standalone formulae. —Quondum 23:51, 7 April 2015 (UTC)