Wikipedia:Reference desk/Archives/Mathematics/2011 March 28

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 28

Even and odd permutations

I am trying to prove the equivalence of the following two definitions: 1. A permutation is called even if the total number of inversions is even, and odd if the total number of inversions is odd.

2. A permutation is called even if it is a product of an even number of transpositions, and odd if it is a product of an odd number of transpositions.

How should I proceed? I feel that there should be a simple proof for this but I can't put my finger on it. Thanks-Shahab (talk) 05:49, 28 March 2011 (UTC)[reply]

Use the properties of the Vandermonde determinant. Sławomir Biały (talk) 11:32, 28 March 2011 (UTC)[reply]
Which property? That it is alternating in the entries? How do I use that to prove my result? Can you be a little more explicit please? Thanks-Shahab (talk) 11:51, 28 March 2011 (UTC)[reply]
Act by permutations on the variables $x_1, \ldots, x_n$. Show that a permutation changes the sign of the Vandermonde product $\Delta = \prod_{i<j}(x_j - x_i)$ if and only if there is an odd number of inversions. Then show that applying an individual transposition also changes the sign of $\Delta$. I can give more hints if necessary. Note that there are two definitions of the Vandermonde determinant, and these can both be useful. Sławomir Biały (talk) 12:00, 28 March 2011 (UTC)[reply]
The first definition is clearly well-defined, whereas the second is not obviously so (a priori there might be a permutation that could arise as a product of either an even or an odd number of transpositions).
What I'd do would be to take the first definition as primitive and then prove by induction that any product of an odd (or even) number of transpositions happens to be an odd (or even) permutation according to the first definition. (Namely, any transposition applied to some preexisting permutation causes the number of inversions to increase or decrease by an odd number. See this by a case analysis on whether the two elements were initially in the wrong or right order, parameterized by the number of elements that currently lie between them and compare differently to the two of them.) This simultaneously proves that the second definition defines anything at all, and that what it defines agrees with the first one. –Henning Makholm (talk) 13:58, 28 March 2011 (UTC)[reply]
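Henning's case analysis is easy to check by brute force. Below is a minimal Python sketch (the function names are just for illustration) that counts inversions and verifies, for every permutation of a small set and every transposition, that applying the transposition changes the inversion count by an odd number:

    from itertools import permutations, combinations

    def inversions(p):
        # Number of pairs (i, j) with i < j but p[i] > p[j].
        return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

    def apply_transposition(p, a, b):
        # Swap the entries at positions a and b.
        q = list(p)
        q[a], q[b] = q[b], q[a]
        return tuple(q)

    # Check the claim for all permutations of {0,...,4} and all transpositions.
    n = 5
    for p in permutations(range(n)):
        for a, b in combinations(range(n), 2):
            change = inversions(apply_transposition(p, a, b)) - inversions(p)
            assert change % 2 == 1, (p, a, b, change)
    print("Every transposition changes the inversion count by an odd amount.")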
By the way, the Vandermonde approach also does show that the second definition is well-defined, since $\sigma \mapsto (\sigma \cdot \Delta)/\Delta$ defines a group homomorphism into $\{\pm 1\}$, so the sign of a product of transpositions is the product of the signs. Sławomir Biały (talk) 15:12, 28 March 2011 (UTC)[reply]
However, that does seem much more abstract and involved than one needs for such an elementary result as we're after here. For example, one would need to make sure that one had not already used the desired result to derive properties of determinants. The presentation in the Vandermonde determinant article certainly does assume familiarity with the sign of a permutation -- I don't doubt that the relevant properties can be developed without using it, but it's less clear to me that they should or that one gains anything from such a strategy. –Henning Makholm (talk) 16:05, 28 March 2011 (UTC)[reply]
A proof is fairly straightforward just using the formula $\Delta = \prod_{i<j}(x_j - x_i)$ as definition, not using anything about determinants at all. (These would just make the proof slightly easier.) As for nothing being gained by such an approach, I couldn't disagree more. Acting with a group on another space is one of the most important techniques in group theory. Sławomir Biały (talk) 16:27, 28 March 2011 (UTC)[reply]
Just to illustrate what I mean by a proof that doesn't use any properties of determinants, to show that the transposition (ab) (with, say, a<b) changes the sign of $\Delta$, just write

$\Delta = \pm\,(x_b - x_a)\,\prod_{k \neq a,b}\left[(x_k - x_a)(x_k - x_b)\right]\,\Delta',$

where $\Delta'$ does not involve $x_a$ or $x_b$. Each term in brackets is invariant under the transposition (ab). Sławomir Biały (talk) 16:56, 28 March 2011 (UTC)[reply]
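For anyone who wants to experiment with this, here is a small Python sketch (names are illustrative) of the $\Delta$ approach: it evaluates $\Delta$ at the permuted variables with $x_k = k$, takes the sign, and checks that this agrees with the parity of the inversion count:

    from itertools import permutations, combinations
    from math import prod

    def sign_via_delta(p):
        # Delta evaluated at x_k = k and then at the permuted variables:
        # sgn(p) is the sign of prod_{i<j} (p[j] - p[i]).
        return 1 if prod(p[j] - p[i] for i, j in combinations(range(len(p)), 2)) > 0 else -1

    def sign_via_inversions(p):
        # (-1) raised to the number of inversions.
        inv = sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])
        return -1 if inv % 2 else 1

    # The two notions of sign agree on every permutation of {0, 1, 2, 3}.
    for p in permutations(range(4)):
        assert sign_via_delta(p) == sign_via_inversions(p)
    print("Sign via Delta and sign via inversions agree.")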
This looks to me exactly like what I proposed (find out how the number of inversions changes, and see whether it is odd or even), except that you have disguised it as taking differences and multiplying them, and only afterwards ignoring the magnitude of the final product. I don't see what you get from the "subtract and multiply" approach that I don't get from "count inversions modulo 2". Is this technique smarter or more illuminating in some way that I have overlooked? –Henning Makholm (talk) 22:52, 28 March 2011 (UTC)[reply]
Without having done the details of your approach, I'd say it's probably the same, but here the calculation is organized very neatly. I didn't mean to suggest that your approach was inferior, but you seemed quite frankly hostile to my suggestion. There are many good ways to do this problem. Yours is explicitly combinatorial, and mine involves identifying the sign of a permutation via an explicit group homomorphism. These are each good things to have in a solution: they each illustrate a very different feature of the problem. (I should add that my approach is substantially simpler—basically just a few lines—if one is allowed to use the fact that Δ is the determinant of the Vandermonde matrix, and some elementary facts about determinants. This is why I originally suggested it: it is the simplest proof that I know of.) Sławomir Biały (talk) 00:09, 29 March 2011 (UTC)[reply]
Sorry for sounding hostile. I must confess that your proposal irritates me more than I can explain completely rationally. It still seems to prove an elementary fact by reference to something more general and advanced, which I think generally is not helpful to those who need help with the elementary fact. Since permutations are one of the core motivating examples for group theory in general, I feel that it ought to be developed without use of abstract concepts such as homomorphisms. The sign of a permutation is itself an important early example of a homomorphism -- which works better when its properties can be understood independently of the abstract notion.
I suppose your position would be that the problem can be an excuse to introduce the OP to some more abstract techniques. I don't have really decisive arguments against that; perhaps it comes down to whether we're here just to help people with their concrete problems, or to promote mathematical learning in general. –Henning Makholm (talk) 03:05, 29 March 2011 (UTC)[reply]
I do, however, have this nightmare where someone asks "Can the FOIL rule be proved or is it just something that works by experience?", and the first reply starts: "It's very simple, actually. Suppose that you have a monoidal closed category where the tensor distributes over a certain commutative bifunctor ..." –Henning Makholm (talk) 03:05, 29 March 2011 (UTC)[reply]
Just curious... What do you specifically mean by "product"? -- kainaw 15:02, 28 March 2011 (UTC)[reply]
See permutation group. Sławomir Biały (talk) 15:33, 28 March 2011 (UTC)[reply]
All that says is: "Every permutation can be written as a product of simple transpositions." It doesn't define what the word "product" means in this sense. I am asking because it is very vague due to the fact that "product" has one meaning in mathematics, another in computer science, and another in regular speech. -- kainaw 15:39, 28 March 2011 (UTC)[reply]
In this context "product" unambiguously means the permutation that results from applying two given permutations in sequence.
Sławomir's link to permutation group does imply this: when a group is concerned, it is standard and unproblematic to call the group operation a "product", and the very first sentence of the article says that the "group operation is the composition of permutations in G (which are thought of as bijective functions from the set M to itself)". –Henning Makholm (talk) 16:05, 28 March 2011 (UTC)[reply]
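As a concrete illustration (a minimal sketch, with made-up names), the "product" is just composition of functions, e.g. a 3-cycle written as a product of two transpositions:

    def compose(p, q):
        # "Product" p*q means: first apply q, then apply p, i.e. (p*q)(i) = p(q(i)).
        return tuple(p[q[i]] for i in range(len(p)))

    # Permutations of {0, 1, 2} written as tuples of images.
    t1 = (1, 0, 2)   # transposition swapping 0 and 1
    t2 = (0, 2, 1)   # transposition swapping 1 and 2

    print(compose(t1, t2))  # (1, 2, 0): the 3-cycle 0 -> 1 -> 2 -> 0, a product of two transpositions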
Thanks. It makes sense. I just didn't want to assume what it meant. Whenever something appears to be simple on the math desk, it usually has a very cryptic hidden meaning. -- kainaw 18:34, 28 March 2011 (UTC)[reply]

Delay Differential Equations of the Retarded Type

I have a delay differential equation of the retarded type:

$\frac{dx(t)}{dt} = a\,x(t) + b\,x(t-\tau)$

I try to solve with $x(t) = e^{\lambda t}$,

giving the characteristic equation

$\lambda = a + b\,e^{-\lambda\tau}$
This is a transcendental equation with infinite roots.

I have a proof that depending on the parameters there will be a maximum of two real roots, and that there may be zero real roots.

I would like to know what determines whether the dynamics of the system are oscillatory or not.

I have one paper that seems to imply that if there are two real roots then there are no oscillations, and another that seems to imply that there are oscillations even if two real roots exist.

Is this the case?

Or does anybody know of a good introduction to this topic? —Preceding unsigned comment added by 130.102.158.21 (talk) 07:54, 28 March 2011 (UTC)[reply]
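As a rough numerical check, assuming the characteristic equation has the standard linear form $\lambda = a + b\,e^{-\lambda\tau}$, one can look for real roots by scanning for sign changes and bisecting. A minimal Python sketch (all names and parameter values are illustrative, not from the original post):

    import numpy as np

    def real_roots(a, b, tau, lo=-30.0, hi=30.0, n=120_000):
        # Real solutions of the characteristic equation lam = a + b*exp(-lam*tau),
        # found by scanning f(lam) = lam - a - b*exp(-lam*tau) for sign changes
        # and refining each bracket by plain bisection.
        f = lambda lam: lam - a - b * np.exp(-lam * tau)
        grid = np.linspace(lo, hi, n)
        vals = f(grid)
        roots = []
        for i in np.nonzero(vals[:-1] * vals[1:] < 0)[0]:
            x0, x1 = grid[i], grid[i + 1]
            for _ in range(60):
                mid = 0.5 * (x0 + x1)
                if f(mid) * f(x0) <= 0:
                    x1 = mid
                else:
                    x0 = mid
            roots.append(0.5 * (x0 + x1))
        return roots

    # Illustrative parameter values only:
    print(real_roots(a=-1.0, b=0.5,  tau=1.0))   # exactly one real root
    print(real_roots(a=-1.0, b=-0.1, tau=1.0))   # two real roots
    print(real_roots(a=-1.0, b=-5.0, tau=1.0))   # no real roots: only complex (oscillatory) modes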

$e^{\lambda t}$ is an undamped oscillation when $\lambda$ is purely imaginary. If the imaginary part is zero there is no oscillation. The motion is damped when the real part of $\lambda$ is negative. Does this answer your question? Bo Jacoby (talk) 10:31, 28 March 2011 (UTC).[reply]
Thanks, but I was under the impression that this was the case for ordinary differential equations, but delay differential equations are a bit different: they always have infinite roots (they have a transcendental characteristic equation with infinite possible solutions), of which at most two roots are real.
So there will always be infinite values of $\lambda$ which are complex.
However delay differential equations are not always oscillatory! So does that mean that if they possess real roots (in addition to their infinite complex roots) that they are not then oscillatory? —Preceding unsigned comment added by 118.208.158.249 (talk) 09:33, 30 March 2011 (UTC)[reply]
The transcendental equation has an infinite number of roots (assuming $b \neq 0$). The roots themselves are not infinite. Together with the complex root $\lambda$, the complex conjugate $\overline{\lambda}$ is also a root, because all the coefficients are real. If the real parts of the complex roots are negative, then the oscillations die out quickly. Bo Jacoby (talk) 10:58, 30 March 2011 (UTC).[reply]
Ah yes, by "infinite roots" what I meant was "infinite number of roots". My problem is that this infinite number of roots includes an infinite number of complex roots. So a delay differential equation will always have an infinite number of complex roots - does that mean that delay differential equations always have oscillatory solutions (even if the oscillations die out quickly when there is a negative real part to the roots)? From simulations I am doing it seems that some delay differential equations have no oscillations in their solutions at all. But still, they have to have an infinite number of complex roots just by virtue of being delay differential equations. So I'm not sure what the relationship is. —Preceding unsigned comment added by 130.102.158.15 (talk) 20:54, 30 March 2011 (UTC)[reply]
Yes, it does mean that delay differential equations always have (damped) oscillatory solutions. Did you compute the roots for the examples you simulated? Choose the unit of time such that $\tau = 1$. Then the characteristic equation is simplified to $\lambda = a + b\,e^{-\lambda}$. This equation is still transcendental. Expand the exponential and solve the algebraic equation $\lambda = a + b \sum_{k=0}^{n} \frac{(-\lambda)^k}{k!}$. Any root $\lambda$ for which the last term is negligible is an approximate solution to the transcendental equation. Bo Jacoby (talk) 21:50, 30 March 2011 (UTC).[reply]
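Bo's suggestion can be tried numerically. A minimal Python sketch, assuming the simplified characteristic equation $\lambda = a + b\,e^{-\lambda}$ (names, tolerances and parameter values are illustrative): replace $e^{-\lambda}$ by its degree-$n$ Taylor polynomial, find the polynomial's roots with numpy, and keep only those that also satisfy the original transcendental equation to within a tolerance:

    import numpy as np
    from math import factorial

    def approximate_roots(a, b, n=15, tol=1e-6):
        # Simplified characteristic equation (tau = 1): lam = a + b*exp(-lam).
        # Replace exp(-lam) by its Taylor polynomial of degree n, solve the
        # resulting polynomial, and keep only the roots that also satisfy the
        # original transcendental equation (the small-modulus ones).
        coeffs = np.zeros(n + 1)                               # highest degree first
        for k in range(n + 1):
            coeffs[n - k] += -b * (-1.0) ** k / factorial(k)   # -b*(-lam)^k / k!
        coeffs[n - 1] += 1.0                                   # the 'lam' term
        coeffs[n] -= a                                         # the '-a' constant term
        candidates = np.roots(coeffs)
        # Discard the large spurious roots of the truncated series, then check
        # the residual of the true transcendental equation.
        return [lam for lam in candidates
                if abs(lam) < 10 and abs(lam - a - b * np.exp(-lam)) < tol]

    # Illustrative values: a = -1, b = -5 has no real roots, and the
    # approximation picks up the complex-conjugate pair of smallest modulus.
    for lam in approximate_roots(-1.0, -5.0):
        print(lam)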
They are always oscillatory, cool. Thank you. —Preceding unsigned comment added by 130.102.158.15 (talk) 06:05, 31 March 2011 (UTC)[reply]

Physics of insect wings

Though this is not math, there are equations involved. Could someone translate the equations from here into math equations on Wikipedia, so I can include them in the article Wing (insect)? Bugboy52.4 ¦ =-= 17:03, 28 March 2011 (UTC)[reply]

Do you just want the ones on that particular page? Michael Hardy (talk) 19:39, 28 March 2011 (UTC)[reply]

Is this what you meant?:



Michael Hardy (talk) 19:43, 28 March 2011 (UTC)[reply]

Thanks, and the ones on page 79 too, please, and thank you again, these will come in handy. Bugboy52.4 ¦ =-= 19:51, 28 March 2011 (UTC)[reply]
Page 79 is not available in the google preview. Michael Hardy (talk) 03:38, 30 March 2011 (UTC)[reply]
Amusingly, the first equation actually contains an error. (The error is in the book; the version above does match the book.) It's pretty clear that should not be squared. Looie496 (talk) 04:09, 29 March 2011 (UTC)[reply]
Yeah this is a duplicate of this thread on the Science desk where the error is also noted. Grandiose (me, talk, contribs) 16:17, 29 March 2011 (UTC)[reply]

confusion regarding the definition of sample space.

Sir, I have read the article on 'sample space' on the pages of Wikipedia. It goes as 'the set containing all the possible outcomes of an experiment....'. But sir, if all the events listed in the sample space are possible, why are the probabilities of a few events calculated as '0'? I rather feel that the definition should go as 'the set containing all the known outcomes of an experiment', because I understand probability = 0 indicates that the event cannot occur at all. In other words it is impossible, and thus it should not belong to the sample space. Is my understanding correct? If not, please correct it, sir. Maheedhara (talk) 16:28, 28 March 2011 (UTC)[reply]

Probability = 0 doesn't necessarily imply that the event cannot occur at all. Much depends on how you define the probability function. Unless I am wrong, the probability of choosing any particular number x lying between 0 and 1 (uniform distribution) is 0, yet the event can obviously occur.-115.118.206.185 (talk) 17:01, 28 March 2011 (UTC)[reply]
In short, your understanding is incorrect. Keep in mind that this description of the sample space is a formal definition, and is distinct from other statistical notions of sampling. Thus, the sample space contains all possible outcomes by definition. There can be issues when the sample space is used as a mathematical model for a real-world problem. For example, I may define the sample space for a fair coin-toss to be {H,T} for Heads and Tails. According to my model, the event 'coin lands on edge' has probability 0, because it is not in the sample space. The fact that 'coin lands on edge' does not appear in my space is not a shortcoming of the concept of a sample space, but merely a shortcoming of my model design. Taking another tack, now consider the sample space of an experiment which is an infinite series of coin tosses. Then the sample space consists of all infinite strings of H and T. In this case, the probability of tossing infinitely many heads in a row, "HHHH...", is zero. This does not mean it is theoretically impossible for such an event to occur. In fact, the string "HHH..." is in the sample space because it is technically possible. Instead, probability zero means in this case that the chances of this happening are infinitely small in some sense. The rigorous treatment of my statements above requires familiarity with the foundations of measure theory. Intuitively, what I'm saying is that almost every series of infinite coin tosses will contain at least one 'tails', so the complementary event "HHH..." happens almost never. Thus, the probability of "HHH..." is zero, even though it is not "impossible." SemanticMantis (talk) 18:31, 28 March 2011 (UTC)[reply]
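To make "probability zero but not impossible" concrete, here is a small Python sketch (an illustration, not part of the original reply): the probability that the first n tosses of a fair coin are all heads is 2^{-n}, which is positive for every finite n but tends to 0, and a simulation essentially never observes the event already for moderate n:

    import random

    def estimate_all_heads(n, trials=200_000):
        # Monte Carlo estimate of P(first n tosses of a fair coin are all heads).
        hits = sum(all(random.random() < 0.5 for _ in range(n)) for _ in range(trials))
        return hits / trials

    for n in (1, 5, 10, 20):
        print(n, estimate_all_heads(n), "exact:", 2.0 ** -n)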

Throw a dart at a target. It hits a point, whose area is 0. What was the probability that it would hit that particular point? Michael Hardy (talk) 19:38, 28 March 2011 (UTC)[reply]

1.  :-D Sławomir Biały (talk) 20:55, 28 March 2011 (UTC)[reply]