Talk:Adjugate matrix
adjoint is not used?
The adjugate has sometimes been called the "adjoint", but that terminology is ambiguous and is not used in Wikipedia. Today, "adjoint" normally refers to the conjugate transpose.
A more generic example is this: given a matrix A, its adjoint is ...
The article is clear to me: "adjugate" resolves an ambiguity in the literature. But the literature is full of ambiguities! None of my references mention the word "adjugate" at all: they unabashedly use the word "adjoint" without worrying about ambiguities. I'm all for fixing our language: it is just another tool after all. But it would be much more helpful to the cause if several references that recommend this particular word to fix the ambiguity in the literature were cited. As it is, I'm not yet convinced that the literature recommends "adjugate". Does it? Where?
Cjfsyntropy (talk) 21:55, 21 April 2010 (UTC)
transpose required
After the cofactor computation is done as shown in the picture, you need to take the transpose of that entire matrix to get the real adjoint; this is not mentioned in the picture or the text.
I'm a mere student using this site to help with coursework, so take this with a pinch of salt. But when you say
"Today, "adjoint" normally refers to the conjugate transpose."
Might I suggest that you mean the conjugate transpose of the cofactor matrix?
Please ignore if I'm wrong or if you feel this is implied.
ta,
Th
--I don't think they mean conjugate transpose of the cofactor matrix. The cofactor matrix is only involved in the *classical adjoint* or adjugate, whereas the *adjoint* is precisely the conjugate (i.e. take the complex conjugate of each element in the matrix) transpose of the matrix. 74.104.1.193 06:08, 4 February 2007 (UTC) Jordan
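To make the two usages concrete, here is a minimal sketch assuming sympy is available (adjugate(), cofactor_matrix(), and the .H conjugate-transpose property are sympy's Matrix API):

 # The two meanings of "adjoint" discussed above, illustrated with sympy.
 from sympy import Matrix, I

 A = Matrix([[1, 2*I], [3, 4]])

 # Classical adjoint / adjugate: the transpose of the cofactor matrix.
 print(A.adjugate())               # Matrix([[4, -2*I], [-3, 1]])
 print(A.cofactor_matrix().T)      # same matrix
 # Modern "adjoint": the conjugate transpose (Hermitian adjoint).
 print(A.H)                        # Matrix([[1, 3], [-2*I, 4]])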
correction
The last adjugate example, the 3 by 3 with A subscripts, is incorrect: the result needs to be transposed.
Fixed. TooMuchMath 19:14, 28 January 2006 (UTC)
q(A)
I read "If p(t) = det(A - tI) is the characteristic polynomial of A and we define the polynomial q(t) = (p(0) - p(t))/t, then adj(A) = q(A)", but q(A) = (p(0) - p(A))/A and p(A) = det(A - AI) = 0. Maybe you mean q_A(t)? --151.28.36.120 07:43, 27 September 2006 (UTC)
There was actually no problem here. I clarified this in the text by noting that the standard way to understand q(A), for q a polynomial, is as the sum q_0 I + q_1 A + ... + q_n A^n, where the q_k are the coefficients of q(t). You are correct that p(A) = 0, but this doesn't imply q(A) = 0; rather, A q(A) = p(0) I - p(A) = det(A) I, so q(A) = det(A) A^(-1) = adj(A) for invertible A (and hence in general, since both sides are polynomial in the entries of A). (Incidentally, the argument that p(A) = 0 because det(A - A I) = 0 is incorrect, as there is a priori no reason that p(A) = det(A - A I). To explain: for an arbitrary matrix B one defines p(B) = p_0 I + p_1 B + ... + p_n B^n, with p_j the coefficients of p(t) = det(A - tI), and it is not necessarily true that p(B) = det(A - B). A simple example is
A = ( 0 1 // 0 0 ) and B = ( 0 1 // 1 0 ), where // marks a new row.
Then p(t) = det(A - tI) = t^2, so p(B) = B^2 = I; however, det(A - B) = 0.) --Jhschenker 16:30, 18 October 2006 (UTC)
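For anyone reading along, the claim adj(A) = q(A) is easy to sanity-check. A minimal sketch, assuming sympy (Matrix, Poly, and adjugate() are sympy's API; the test matrix is arbitrary):

 # Check adj(A) = q(A) with q(t) = (p(0) - p(t))/t, where q(A) means the
 # matrix polynomial q_0*I + q_1*A + ... as explained above.
 from sympy import Matrix, Poly, cancel, eye, symbols

 t = symbols('t')
 A = Matrix([[1, 2, 3], [0, 4, 5], [1, 0, 6]])
 n = A.rows

 p = (A - t*eye(n)).det()                # p(t) = det(A - t*I)
 q = cancel((p.subs(t, 0) - p) / t)      # a polynomial in t
 coeffs = Poly(q, t).all_coeffs()[::-1]  # q_0, q_1, ..., q_{n-1}
 qA = sum((c * A**k for k, c in enumerate(coeffs)), Matrix.zeros(n, n))
 print(qA == A.adjugate())               # True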
What is the derivative with respect to A of adj(A)?
Please give the formula using both matrix notation and index notation.
Umm... What exactly do you want? You can take the derivative of each element of a matrix, and matrices can be used as linear transformations of polynomials to produce the derivative; there is the Wronskian matrix. However, taking the derivative with respect to an entire matrix is not something you can do. --Cronholm144 05:02, 13 July 2007 (UTC)
Yes you can... The result will be a tensor-valued function. In other words, adj' will be a function which takes a matrix (a 2-tensor, i.e. an n x n array of numbers) as input and gives a 4-tensor (an n x n x n x n array) as its output. This is because adj is a function from 2-tensors to 2-tensors. In general, the derivative of a function from j-tensors to k-tensors is a function from j-tensors to (j+k)-tensors. Anyway, sorry, I'm at a loss for what adj' actually is :) --Wikimorphism 02:20, 13 August 2007 (UTC)
- I think the OP means something like
- But this is valid for invertible A only, and I have no idea how to find a more general formula. David 09:10, 16 May 2008 (UTC)
I think the formula may be:
--Liuyifourfire (talk) 17:34, 22 March 2009 (UTC)
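Since the object in question is a 4-tensor, the most concrete thing to do is compute it entrywise. A minimal sketch, assuming sympy (fully symbolic entries, differentiated entry by entry, matching Wikimorphism's description):

 # adj' as a 4-tensor: entry [i][j][k][l] is d adj(A)[i,j] / d A[k,l].
 from sympy import Matrix, symbols

 n = 2
 A = Matrix([[symbols(f'a_{i}{j}') for j in range(n)] for i in range(n)])
 adjA = A.adjugate()

 dadj = [[[[adjA[i, j].diff(A[k, l]) for l in range(n)]
           for k in range(n)]
          for j in range(n)]
         for i in range(n)]
 print(dadj)  # for n = 2: mostly zeros, with +/-1 where adj picks an entry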
Multiplicative property of the adjugate
- from page Wikipedia:Reference desk/Mathematics
Hi! I'm looking for a proof of the identity adj(AB) = adj(B) adj(A).
The article Adjugate matrix says nothing about the proof, and the proof is not contained in the reference 'Gilbert Strang: Linear Algebra and its Applications'. When A and B are invertible the proof is easy, but otherwise? I can't handle cofactors well. Would you be so kind as to help me find a real reference or a proof? Thanks, Mozó (talk) 14:21, 15 December 2008 (UTC)
- For real or complex square matrices you may get it by continuity, because invertible matrices are dense. --PMajer (talk) 19:57, 15 December 2008 (UTC)
- Uuuuh, what a good idea, thanks! I should have thought of it :) or of how to say it in English :S But what about the engineering courses? I'd like to show them directly that trace(adj(A)) is the 2nd scalar invariant of the (classical) tensor A, that is, for every orthogonal transformation O (and for orthonormal bases), trace adj(A) = trace adj(O^T A O). And (I realised that) I need the identity above. Mozó (talk) 22:23, 15 December 2008 (UTC)
- Yeah, trying to explain topology to a bunch of engineers is probably a bad idea! If you're only interested in a particular number of dimensions, then you could do it as a direct calculation (or, rather, set it as an exercise - it will be a horrible mess!). There is probably a better way I'm just not thinking of, though. --Tango (talk) 23:04, 15 December 2008 (UTC)
- Actually, we do teach topology to engineers (especially the argument above), when we find the continuous solutions of the functional equation |f| = e^x (or |f(x)| = |x|), so PMajer's idea could work. And of course, proving the identity manually (term by term) as homework may cause bad feelings about math :) Mozó (talk) 07:17, 16 December 2008 (UTC)
- Here's a way that avoids topology, though it probably doesn't qualify as "direct". Each element of adj(AB) − adj(B) adj(A) is a polynomial in the elements of A and B and is zero whenever A and B are both invertible. The pairs of invertible n×n matrices form a Zariski-dense subset of all pairs over the reals, so a polynomial vanishing on them must be identically zero over the reals, and hence over any commutative ring, since its construction is independent of the ring. -- BenRG (talk) 08:28, 16 December 2008 (UTC)
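BenRG's observation is easy to check symbolically. A minimal sketch, assuming sympy, with fully symbolic (not necessarily invertible) matrices:

 # Each entry of adj(A*B) - adj(B)*adj(A) expands to the zero polynomial
 # in the entries of A and B, so the identity needs no invertibility.
 from sympy import Matrix, expand, symbols

 n = 2  # n = 3 works too, just slower
 A = Matrix([[symbols(f'a_{i}{j}') for j in range(n)] for i in range(n)])
 B = Matrix([[symbols(f'b_{i}{j}') for j in range(n)] for i in range(n)])

 D = (A * B).adjugate() - B.adjugate() * A.adjugate()
 print(D.applyfunc(expand))  # the zero matrix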
Here's a fairly direct proof. Let B1, ..., Bn be the rows of B, and A1, ..., An the columns of A. Now examine the i,j entry of each side of the matrix identity. Each side is a function that:
- does not depend on the row Bj or the column Ai;
- is an alternating multilinear form with respect to the remaining rows of B;
- is an alternating multilinear form with respect to the remaining columns of A.
Thus it is enough to check equality when A and B are both permutation matrices. (Proof: Fix i, j. Because each of the two quantities is linear with respect to each row of B/column of A other than Bj and Ai, you can assume each of these is one of the standard basis vectors, and that none are repeated. Then set Bj (resp. Ai) to be the remaining standard basis vector, since this doesn't affect the i,j entry.) Now check that if the identity is true for A, B, then it remains true when two consecutive columns of A (resp. rows of B) are exchanged. (This results from the fact that if you exchange, say, two consecutive rows of C, this has the effect of exchanging the corresponding columns of adj(C) and multiplying it by -1.) This reduces the problem to checking the identity when A, B are both the identity matrix. 67.150.253.60 (talk) 13:04, 16 December 2008 (UTC)
- This is obviously "the right way" to do the problem. I had wanted to say something along the lines that it is true as a formal identity in the polynomial ring where the entries of A and B had been adjoined and that passing to a suitable transcendental field extension then implies the result. But that was too complicated. Good answer, siℓℓy rabbit (talk) 20:01, 16 December 2008 (UTC)
- Just another option, in case your students don't like extension of polynomial identities in several variables: prove it first for A, B invertible as suggested by others above. Now let A be arbitrary and B be invertible. All but a finite number of the matrices A + tI are invertible. The identity to be proved is then a polynomial identity in the single variable t, true for all but finitely many t, and therefore for all t. Now let A, B be arbitrary and do the same thing with B + sI. 67.150.246.75 (talk) 03:22, 17 December 2008 (UTC) —Preceding unsigned comment added by Mozó (talk • contribs)
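The row-swap fact used in the direct proof above (exchanging two consecutive rows of C exchanges the corresponding columns of adj(C) and flips the sign) can also be verified symbolically. A minimal sketch, assuming sympy:

 # Swapping rows 0 and 1 of C swaps columns 0 and 1 of adj(C), up to sign.
 from sympy import Matrix, expand, symbols

 n = 3
 C = Matrix([[symbols(f'c_{i}{j}') for j in range(n)] for i in range(n)])
 P = Matrix.eye(n)
 P[0, 0] = P[1, 1] = 0
 P[0, 1] = P[1, 0] = 1                 # left-multiplying by P swaps rows 0, 1

 lhs = (P * C).adjugate()              # adj of C with rows 0, 1 swapped
 rhs = -(C.adjugate() * P)             # minus adj(C) with columns 0, 1 swapped
 print((lhs - rhs).applyfunc(expand))  # the zero matrix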
3x3 example
If you follow the "3x3 numeric matrix" example it doesn't make sense. The definition of adj(A) (the big matrix with 9 cofactors) clearly shows the bottom middle entry as -det( 1 3 // 4 6 ), which would be -det( -3 -5 // -1 -2 ) in the numeric example. This conflicts with the claim that the submatrix is ( -3 2 // 3 -4 ). There seems to be some confusion over whether the answer is transposed or not. I don't really care, but the Wikipedia page shouldn't contradict itself. Either change the example so it fits the definition of adj(A) as given above, or transpose the definition of adj(A) so it matches the example. 131.215.143.14 (talk) 23:54, 26 August 2009 (UTC)
Was all quite wrong - Fixed.
Stormcloud51090 (talk) 07:05, 16 June 2010 (UTC)
Fixed?
The problem with the internal contradiction mentioned above is still there; I fail to see how it has been fixed. I made the corrections once so that it did not conflict with the 1-9 matrix in the example above, but apparently someone "fixed" it so that it now contradicts itself again. Someone needs to pick a definition, stick with it, and make sure that definition is followed throughout the article.
Elevent 2010-08-13 —Preceding unsigned comment added by 83.251.231.9 (talk) 08:38, 13 August 2010 (UTC)
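For future editors of that example: whichever layout the article settles on, it should satisfy adj(A) = (cofactor matrix of A)^T and A adj(A) = det(A) I. A minimal sketch for checking a candidate example, assuming sympy (the test matrix here is arbitrary):

 # The adjugate is the *transpose* of the cofactor matrix; an example that
 # skips the transpose will contradict the definition.
 from sympy import Matrix

 A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 10]])       # any test matrix
 print(A.adjugate() == A.cofactor_matrix().T)         # True
 print(A * A.adjugate() == A.det() * Matrix.eye(3))   # True: defining identity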
Left & right Adj?
The article assumes that the matrix is square, but this does not appear to be a requirement. That is, the SVD handles non-square matrices, so left and right pseudo-inverses are easily defined for non-square matrices. So also for adj, correct?
What is the definition of a left-adj or a right-adj? (FWIW, my guess is that much of the complicated explanation in terms of cofactors becomes trivial when expressed in terms of the SVD -- but I don't know, since I can't find any useful (non-functor) definition of left/right-adj.) Jmacwiki (talk) 21:00, 7 July 2012 (UTC)
Formula correct
This edit marked the formula below dubious:
The formula is correct, provided is a unit. Here's a proof:
Cancel one from each side to get the result. I've edited the article accordingly. 75.76.162.89 (talk) 08:56, 11 August 2012 (UTC)