Wikipedia:Reference desk/Archives/Mathematics/2008 December 15

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 15

Integral question

How does one solve ? My textbook says to split the terms out into , and use a Pythagorean identity substitution. The answer it ultimately gives is . However, when I try to differentiate that answer, I don't seem to get , rather . What am I missing? seresin ( ¡? )  01:24, 15 December 2008 (UTC)[reply]

You need to differentiate and then apply the Pythagorean identity:
Cheers, siℓℓy rabbit (talk) 01:35, 15 December 2008 (UTC)[reply]
Hi seresin, put -sin into brackets: , and follow Silly rabbit's solution. Mozó (talk) 14:41, 15 December 2008 (UTC)[reply]
Thanks both of you. I see what I did wrong. seresin ( ¡? )  23:13, 15 December 2008 (UTC)[reply]
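The formulas in this thread were images that did not survive archiving. Purely as an illustration, here is a short SymPy sketch assuming the textbook problem was the common example ∫ sin³x dx, whose antiderivative cos³x/3 − cos x matches the hints above; differentiating and applying the Pythagorean identity recovers the integrand.

 import sympy as sp
 # Hedged check: the integral below is an assumption, chosen because it fits the
 # hints in the thread (split off one factor of sin x, use sin^2 = 1 - cos^2).
 x = sp.symbols('x')
 candidate = sp.cos(x)**3 / 3 - sp.cos(x)        # the (assumed) textbook answer
 derivative = sp.diff(candidate, x)              # -sin(x)*cos(x)**2 + sin(x)
 print(derivative)
 print(sp.simplify(derivative - sp.sin(x)**3))   # 0: the Pythagorean identity closes the gap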

A property of the adjugate

Hi! I'm looking for a proof of the identity:

adj(AB) = adj(B) adj(A)
The article Adjugate matrix says nothing about the proof, and the proof is not contained in the reference (Gilbert Strang: Linear Algebra and its Applications). When A and B are invertible the proof is easy, but otherwise? I can't handle cofactors well. Would you be so kind as to help me find a real reference or a proof? Thanks, Mozó (talk) 14:21, 15 December 2008 (UTC)[reply]

For real or complex square matrices you may get it by continuity, because invertible matrices are dense. --PMajer (talk) 19:57, 15 December 2008 (UTC)[reply]
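A numerical illustration of this density argument (my own sketch, not from the thread): compute the adjugate by cofactors, replace a singular A by the invertible perturbation A + kI, and watch the identity hold all the way down as k → 0.

 import numpy as np
 def adjugate(M):
     """Adjugate via cofactors; works for singular matrices too."""
     n = M.shape[0]
     C = np.zeros_like(M, dtype=float)
     for i in range(n):
         for j in range(n):
             minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
             C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
     return C.T  # adjugate = transpose of the cofactor matrix
 A = np.array([[1., 2., 3.], [2., 4., 6.], [0., 1., 1.]])  # singular (rank 2)
 B = np.array([[0., 1., 0.], [1., 0., 2.], [3., 0., 0.]])
 for k in (1e-1, 1e-3, 1e-6):
     Ak = A + k * np.eye(3)                                # invertible for all but finitely many k
     diff = adjugate(Ak @ B) - adjugate(B) @ adjugate(Ak)
     print(k, np.max(np.abs(diff)))                        # stays ~0 as k -> 0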
Uuuuh, what a good idea, thanks! I should have thought of it :) or however you say that in English :S But what about the engineering courses? I'd like to show them directly that trace(adj(A)) is the 2nd scalar invariant of the (classical) tensor A, that is, that for every orthogonal transformation O (and for orthonormal bases) trace adj(A) = trace adj(O^T A O). And I realised that for this I need the identity above. Mozó (talk) 22:23, 15 December 2008 (UTC)[reply]
Yeah, trying to explain topology to a bunch of engineers is probably a bad idea! If you're only interested in a particular number of dimensions, then you could do it as a direct calculation (or, rather, set it as an exercise - it will be a horrible mess!). There is probably a better way I'm just not thinking of, though. --Tango (talk) 23:04, 15 December 2008 (UTC)[reply]
Actually, we do teach topology to engineers (especially arguments like the one above), for example when we find the continuous solutions of the functional equation |f(x)| = e^x (or |f(x)| = |x|), so PMajer's idea could work. And of course proving the identity by hand, term by term, as homework might give them bad feelings about math :) Mozó (talk) 07:17, 16 December 2008 (UTC)[reply]
Here's a way that avoids topology, though it probably doesn't qualify as "direct". Each entry of adj(AB) − adj(B) adj(A) is a polynomial in the entries of A and B, and it vanishes whenever A and B are both invertible, i.e. whenever det(A)det(B) ≠ 0. Multiplying such a polynomial by det(A)det(B) therefore gives a polynomial that vanishes at every real point, hence the zero polynomial; since the polynomial ring over the reals has no zero divisors, the original polynomial is identically zero. The identity then holds over any commutative ring, since the construction of the polynomials is independent of the ring. -- BenRG (talk) 08:28, 16 December 2008 (UTC)[reply]
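For small n this can be checked symbolically in a few lines; a minimal SymPy sketch (mine, not from the thread) with fully generic entries:

 import sympy as sp
 n = 2  # raise to 3 if you don't mind a slower run
 A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'a{i}{j}'))
 B = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'b{i}{j}'))
 diff = (A * B).adjugate() - B.adjugate() * A.adjugate()
 print(diff.applyfunc(sp.expand))  # the zero matrix, identically in the entries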

Here's a fairly direct proof. Let B_1, ..., B_n be the rows of B, and A_1, ..., A_n the columns of A. Now examine the i,j entry of each side of the matrix identity. Each side is a function that:

  • does not depend on the row B_j or the column A_i;
  • is an alternating multilinear form with respect to the remaining rows of B; and
  • is an alternating multilinear form with respect to the remaining columns of A.

Thus it is enough to check equality when A and B are both permutation matrices. (Proof: Fix i, j. Because each of the two quantities is linear with respect to each row of B/column of A other than B_j and A_i, you can assume each of these is one of the standard basis vectors, and that none are repeated. Then set B_j (resp. A_i) to be the remaining standard basis vector, since this doesn't affect the i,j entry.) Now check that if the identity is true for A, B, then it remains true when two consecutive columns of A (resp. rows of B) are exchanged. (This results from the fact that if you exchange, say, two consecutive rows of C, this has the effect of exchanging the corresponding columns of adj(C) and multiplying it by −1.) This reduces the problem to checking the identity when A, B are both the identity matrix. 67.150.253.60 (talk) 13:04, 16 December 2008 (UTC)[reply]
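The base case of this reduction, pairs of permutation matrices, is small enough to check by brute force; a SymPy sketch (my own, with n = 3 to keep the run short):

 from itertools import permutations
 import sympy as sp
 n = 3
 def perm_matrix(p):
     # permutation matrix with a 1 in position (i, p[i])
     return sp.Matrix(n, n, lambda i, j: 1 if p[i] == j else 0)
 perms = list(permutations(range(n)))
 ok = all(
     (perm_matrix(p) * perm_matrix(q)).adjugate()
     == perm_matrix(q).adjugate() * perm_matrix(p).adjugate()
     for p in perms for q in perms
 )
 print(ok)  # True: adj(AB) = adj(B) adj(A) for all 36 pairs of 3x3 permutation matrices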

This is obviously "the right way" to do the problem. I had wanted to say something along the lines that it is true as a formal identity in the polynomial ring where the entries of A and B had been adjoined and that passing to a suitable transcendental field extension then implies the result. But that was too complicated. Good answer, siℓℓy rabbit (talk) 20:01, 16 December 2008 (UTC)[reply]

Dear Colleagues, thanks all of you! I found the answers and the presented proofs very useful. The section should be moved to the discussion page of Adjugate matrix. Regards, Mozó (talk) 19:43, 16 December 2008 (UTC)[reply]

Just another option, in case your students don't like extension of polynomial identities in several variables: prove it first for A, B invertible, as suggested by others above. Now let A be arbitrary and B be invertible. All but finitely many of the matrices A + tI are invertible, so the identity to be proved becomes, entry by entry, a polynomial identity in the single variable t that holds for all but finitely many t, and therefore for all t. Now let A, B be arbitrary and do the same thing with B + sI. 67.150.246.75 (talk) 03:22, 17 December 2008 (UTC)[reply]

Also: a simpler version of the density argument, for engineers (after Tango's remark :) ). Let E_ij be the matrix whose only nonzero entry is a 1 in position ij. Then, just by expanding, det(A + sE_ij) = det(A) + s·adj(A)_ji. Next, observe that if either rank(A) < n−1 or rank(B) < n−1, then also rank(AB) < n−1, and the identity to prove becomes the trivial 0 = 0. So the only case left to consider is A and B of rank n or n−1. In this case they are easily approximated by invertible matrices by perturbing a single entry: if rank(A) = n−1 then adj(A) ≠ 0, so there is at least one ij such that A + sE_ij is nonsingular for all small s ≠ 0, by the formula above (and similarly for B). I think you can convince the engineers, without writing anything else, that the identity for the matrices A, B passes to the limit as s → 0, since the entries involved are built from sums and products. They only need to know the rules for limits of sums and products of sequences. --PMajer (talk) 09:54, 17 December 2008 (UTC)
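The expansion det(A + sE_ij) = det(A) + s·adj(A)_ji used above is easy to confirm symbolically; a short SymPy sketch (mine):

 import sympy as sp
 n, s = 3, sp.Symbol('s')
 A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'a{i}{j}'))
 adjA = A.adjugate()
 for i in range(n):
     for j in range(n):
         E = sp.zeros(n, n)
         E[i, j] = 1                      # E_ij: single 1 in position (i, j)
         lhs = (A + s * E).det()
         rhs = A.det() + s * adjA[j, i]   # adj(A)_ji is the (i, j) cofactor
         assert sp.expand(lhs - rhs) == 0
 print("det(A + s*E_ij) = det(A) + s*adj(A)_ji holds for all i, j")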
67.150.246.75's is probably the easiest proof, since students are familiar with solving equations of the type det(A − λI) = 0. Let me spell it out. Let A be singular and B a regular (invertible) matrix. q(t) = det(A + tI) is a monic polynomial, so it is not the zero polynomial and has only finitely many zeros, say the set F. The matrix function
m: R → R^(n×n); t ↦ adj((A+tI)B) − adj(B) adj(A+tI)
is continuous (it maps into the normed space (R^(n×n), ||·||_op)), and m(t) = 0 for all t ∈ R \ F. Hence m ≡ 0, and in particular m(0) = 0. The next step is to do the same with a singular B.
It would also make a nice question on an exam in calculus of several variables :) However, 67.150.253.60's proof is the right one for those interested in the algebra of matrices. Mozó (talk) 10:47, 17 December 2008 (UTC)[reply]
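A SymPy version of the map m(t) for one concrete singular A and regular B (my own sketch; the matrices are arbitrary choices): every entry of m(t) comes out as the zero polynomial in t, so in particular m(0) = 0.

 import sympy as sp
 t = sp.Symbol('t')
 A = sp.Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])  # singular (rank 2)
 B = sp.Matrix([[0, 1, 0], [1, 0, 2], [3, 0, 0]])  # det(B) = 6, so B is regular
 I = sp.eye(3)
 m = ((A + t * I) * B).adjugate() - B.adjugate() * (A + t * I).adjugate()
 print(m.applyfunc(sp.expand))  # the zero matrix, identically in t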

Problem

34x+3+4x-2-8*44x/42*34x-2 —Preceding unsigned comment added by 193.77.182.61 (talk) 19:05, 15 December 2008 (UTC) Please help, thank you! —Preceding unsigned comment added by 193.77.182.61 (talk) 19:06, 15 December 2008 (UTC)[reply]

The answer is probably 'may be'. But I don't know what the problem is. --CiaPan (talk) 19:25, 15 December 2008 (UTC)[reply]
I don't understand your notation. What does 4x-2-8 mean? And how much is supposed to be on top of the fraction? Just the last term before the / or the whole thing? And what is the question, how to simplify the expression? --Tango (talk) 19:58, 15 December 2008 (UTC)[reply]
Try using some parens (to show which terms and operations go together). I don't see an equals sign, so this isn't an equation and can't be solved. Is the goal to reduce it in complexity? Please show us the work you've done so far. Hint: You can also put the expression on two lines or more, if needed, with a space in front, like this random complex expression:
 (-8x^(3/4)/9y^(2/-3))/(2x^(4/5)/6y^(-7/8))
 -------------------------------------------
 (8x^(-3/4)/-9y^(2/3))/(2x^(4/5)/-6y^(7/8))
StuRat (talk) 20:30, 15 December 2008 (UTC)[reply]

Function of best fit for these couples; and a general question

I don't know how to solve things like this beyond just looking and seeing what pops into my mind or trial and error. Neither of these has worked for me, so I thought I would ask here.

I'm looking for a function, f(x), such that f(0)=0; f(6/16)=2/16; f(8/16)=5/16; f(10/16)=8/16; and f(1)=1.

Those are the five couples I have. It was my understanding that any two points can be joined by a linear function, and any three by a quadratic equation, and so on. So I thought that there would be, at the maximum necessary degree, a quintic equation which fits these points. Is this wrong?

Thanks for reading.--Atethnekos (talk) 23:09, 15 December 2008 (UTC)[reply]

You are correct. See Polynomial_interpolation#Constructing_the_interpolation_polynomial. --Tango (talk) 23:15, 15 December 2008 (UTC)[reply]

No; that is NOT correct. Fourth degree is enough. Michael Hardy (talk) 00:18, 16 December 2008 (UTC)[reply]
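For the record, the unique degree-4 interpolant through the five given points is one NumPy call; a minimal sketch (not part of the original thread):

 import numpy as np
 xs = np.array([0, 6/16, 8/16, 10/16, 1])
 ys = np.array([0, 2/16, 5/16, 8/16, 1])
 p = np.poly1d(np.polyfit(xs, ys, deg=4))  # unique interpolating quartic
 print(p(xs))           # reproduces ys up to rounding
 print(p.deriv()(0.0))  # about -1.6: the quartic slopes downward just after x = 0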

Thanks for the info, Tango. Not being a maths guy, I did not know the terminology (interpolation) or where to start (matrices and Gaussian elimination). And thanks Michael; I wrote quintic when I should have said quartic.
With the right terminology, I searched Google for more information on how to interpolate this, and someone has made a Java applet which solves it. It gave me the solution, but then I realized I had more constraints than just the five points I mentioned: the curve cannot have a negative slope at any point between x=0 and x=1. The solution the program gave, however, initially dips down to negative values of f(x) after f(0)=0. Obviously this makes the problem more complex and I probably can't go about trying to solve it with my limited knowledge. Thanks anyway, kind persons. (If anyone wants to try to come up with a function which has no negative slope between x=0 and x=1 and fits those points mentioned, feel free to solve it for me! I'm sure you have more time than me... j/k) --Atethnekos (talk) 02:20, 16 December 2008 (UTC)[reply]
There is a unique polynomial of degree 4 or less going through 5 fixed points (with distinct x-values). So, if you pick another point in there and then look for a polynomial of degree 5, maybe it will turn out correct by luck. Again, there would be a unique polynomial of degree 5 or less going through 6 fixed points, but you can change the 6th point around to get many different possibilities. You could add f(3/16), which clearly needs to be something bigger than 0 and smaller than 2/16; maybe 1/16 works. You can also vary the 3/16 to 2/16, 4/16 or 5/16. Try a few options and see if you get lucky. If these do not work, try f(7/16) or f(9/16) or f(11/16) with various values. Off the top of my head, I do not know how to just solve such a thing. StatisticsMan (talk) 03:56, 16 December 2008 (UTC)[reply]
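A brute-force take on this suggestion (my own sketch; the candidate x- and y-values below are guesses, and the search may well turn up nothing):

 import numpy as np
 xs = [0, 6/16, 8/16, 10/16, 1]
 ys = [0, 2/16, 5/16, 8/16, 1]
 grid = np.linspace(0, 1, 1001)
 for x6 in (3/16, 7/16, 9/16, 11/16):
     for y6 in np.linspace(0.01, 0.99, 99):
         p = np.poly1d(np.polyfit(xs + [x6], ys + [y6], deg=5))
         if np.all(p.deriv()(grid) >= 0):      # keep only quintics with nonnegative slope
             print(f"monotone quintic found with f({x6}) = {y6:.2f}")
             break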

You are looking for a function having those values at those particular points? I don't see why you need a polynomial (maybe for smoothness), but why not just define f so that it has those values at those points and is 0 otherwise? If you really want continuity, then make the function piecewise-defined so that its graph between 0 and 1 is the union of finitely many line segments (in the obvious way, so that the function passes through the points you have given and the vertices of the graph are just those points), it takes the value 0 for real numbers less than 0, and the value 1 for real numbers greater than 1. Topology Expert (talk) 09:35, 16 December 2008 (UTC)[reply]
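This piecewise-linear interpolant is essentially a one-liner in NumPy; a small sketch (mine). Because the given y-values increase with x, the result also has no negative slope, which addresses the extra constraint above.

 import numpy as np
 xs = [0, 6/16, 8/16, 10/16, 1]
 ys = [0, 2/16, 5/16, 8/16, 1]
 def f(x):
     # straight lines between the data points; 0 left of x=0, 1 right of x=1
     return np.interp(x, xs, ys)
 print(f(np.array([-0.5, 0.2, 0.45, 0.9, 1.5])))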

A function such as y = g(x) = ln(x/(−x)) transforms the x-interval [0...1] to the y-interval [−∞ ... +∞]. So find an increasing function h satisfying h(g(x)) ~ g(y) for (x,y) = (6/16,2/16), (8/16,5/16), and (10/16,8/16). Then f(x) ~ g^(−1)(h(g(x))) will solve your problem. The conditions f(0)=0 and f(1)=1 are automatically satisfied. The linear function h(z) = az+b, where a = 1.90467 and b = −0.911456, gives a good approximation. Bo Jacoby (talk) 13:01, 16 December 2008 (UTC).[reply]
You did not really mean g(x) = ln(x/(−x)), did you? -- Jao (talk) 13:28, 16 December 2008 (UTC)[reply]
I think it should be (1-x) in the denominator. --Tango (talk) 13:37, 16 December 2008 (UTC)[reply]
Sorry, it should have been g(x) = ln(x/(1−x)) . Bo Jacoby (talk) 13:48, 16 December 2008 (UTC).[reply]
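A sketch of this construction with the corrected g(x) = ln(x/(1−x)) (my own code, not from the thread). A least-squares line through the three transformed points reproduces the quoted a and b, and since g, h and g⁻¹ are all increasing, the resulting f never has negative slope.

 import numpy as np
 g    = lambda x: np.log(x / (1 - x))           # logit: (0,1) -> (-inf, +inf)
 ginv = lambda z: 1 / (1 + np.exp(-z))          # inverse logit
 pts_x = np.array([6/16, 8/16, 10/16])
 pts_y = np.array([2/16, 5/16, 8/16])
 a, b = np.polyfit(g(pts_x), g(pts_y), deg=1)   # least-squares line h(z) = a*z + b
 print(a, b)                                    # about 1.90467 and -0.911456
 f = lambda x: ginv(a * g(x) + b)               # increasing, f -> 0 at x=0 and -> 1 at x=1
 print(f(pts_x), pts_y)                         # approximately equal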
People commonly use splines to get a smooth function through some points. The problem with fitting a high-degree polynomial is that you can get huge swings between the points, and the ends diverge rapidly. Dmcq (talk) 06:27, 17 December 2008 (UTC)[reply]
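Along these lines, SciPy's PCHIP interpolator (a monotonicity-preserving piecewise cubic, a close relative of a spline) gives a smooth function through all five points with no negative slope; a short sketch (mine):

 import numpy as np
 from scipy.interpolate import PchipInterpolator
 xs = [0, 6/16, 8/16, 10/16, 1]
 ys = [0, 2/16, 5/16, 8/16, 1]
 f = PchipInterpolator(xs, ys)          # smooth, shape-preserving interpolant
 grid = np.linspace(0, 1, 1001)
 print(np.min(f.derivative()(grid)))    # >= 0: slope never goes negative on [0, 1]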