Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia


Welcome to the mathematics section of the Wikipedia reference desk.

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


July 12

order of growth

To show that log x grows more slowly than x^c (c > 0), is it sufficient to show that log x/x^c goes to 0? How does that mathematically imply that log x is less than x^c for sufficiently large x? Thanks-Shahab (talk) 06:38, 12 July 2010 (UTC)[reply]

I can answer the second part. If lim_{x→∞} log x / x^c = 0, then for every ε > 0 there exists N such that for all x > N, log x / x^c < ε. Choosing ε = 1 gives us that for x large enough (larger than a finite positive real number N), log x / x^c < 1, which implies log x < x^c (since x^c is positive whenever x is positive). RJaguar3 | u | t 06:44, 12 July 2010 (UTC)[reply]
Thanks for answering both parts.-Shahab (talk) 06:50, 12 July 2010 (UTC)[reply]
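
A quick numerical illustration of this (a sketch of my own, not from the thread; the exponent c = 0.1 and the sample points are arbitrary):

    import math

    c = 0.1  # any fixed c > 0 works; a small c makes the effect slow enough to see
    for x in (1e2, 1e4, 1e8, 1e16, 1e32, 1e64):
        print(f"x = {x:.0e}   log(x)/x^c = {math.log(x) / x**c:.6f}")
    # The ratio eventually decreases towards 0, so in particular it drops below 1,
    # i.e. log(x) < x^c for all sufficiently large x.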

Calculus/Derivatives

Find the equations of both lines that are tangent to the curve y=1+x^2 and are parallel to the line 12x-y=1


I need some help finding the equation. Could I have an explanation please? Thanks. —Preceding unsigned comment added by KRmiwaD93 (talk • contribs) 07:38, 12 July 2010 (UTC)[reply]

What have you tried so far? 71.141.88.179 (talk) 08:08, 12 July 2010 (UTC)[reply]
The question didn't actually ask for both lines did it? Are you sure it didn't say the line that is both tangent to.. and parallel to ...? Dmcq (talk)
That's the way I read it. —Anonymous DissidentTalk 08:45, 12 July 2010 (UTC)[reply]
If the line has to be parallel to 12x - y = 1 then it must have the same gradient. You then need to find out at which value of x the curve has that gradient. Readro (talk) 09:17, 12 July 2010 (UTC)[reply]
Well one can't have 'both lines' as no two points on the quadratic have the same tangent direction whereas a straight line only has the one tangent direction. Dmcq (talk) 11:32, 12 July 2010 (UTC)[reply]
In general, there is only one tangent to a given parabola that is parallel to a given line. The exception is that there are no tangents parallel to the axis of the parabola.--RDBury (talk) 13:36, 12 July 2010 (UTC)[reply]
Rewrite the linear part as y = 12x - 1, and then, since you mentioned derivatives in the title, what does differentiating y = 1 + x^2 with respect to x give and what use is that? Dmcq (talk) 14:33, 12 July 2010 (UTC)[reply]
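
For later readers, a sketch of the approach Readro and Dmcq describe, worked in sympy (my own addition, not part of the original exchange):

    import sympy as sp

    x = sp.symbols('x')
    curve = 1 + x**2
    slope = 12  # gradient of 12x - y = 1, i.e. y = 12x - 1

    # Where does the curve have that gradient?
    x0 = sp.solve(sp.Eq(sp.diff(curve, x), slope), x)[0]
    y0 = curve.subs(x, x0)
    tangent = sp.expand(slope * (x - x0) + y0)
    print(x0, y0, tangent)  # expected: 6, 37, 12*x - 35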

Trig question

Resolved

I'm having a bit of trouble trying to solve the following identity verification:

[(sec x - tan x)^2 + 1] / [(sec x)(csc x) - (tan x)(csc x)] = 2 tan x

I have gotten as far as converting the denominator in terms of sine and cosine, but I'm a bit stuck from there. Any help is appreciated! Thanks, WordyGirl90 12:07, 12 July 2010 (UTC)[reply]

If you express the numerator in terms of cos and sin too it works out pretty simply - don't forget a famous identity involving the squares of cos and sin. AndrewWTaylor (talk) 12:57, 12 July 2010 (UTC)[reply]
If you divide said famous identity by the square of cos x then you get another identity in terms of tan and sec, which you can use if you fancy cancelling some terms first before converting to sin and cos. Readro (talk) 13:06, 12 July 2010 (UTC)[reply]
Thanks guys. A classmate solved it basically through factoring. I ended up using his method. Thanks anyway! WordyGirl90 19:58, 12 July 2010 (UTC)[reply]
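
For the record, a machine check of the identity (my own sketch using sympy, not part of the thread):

    import sympy as sp

    x = sp.symbols('x')
    lhs = ((sp.sec(x) - sp.tan(x))**2 + 1) / (sp.sec(x)*sp.csc(x) - sp.tan(x)*sp.csc(x))
    print(sp.simplify(lhs - 2*sp.tan(x)))  # should print 0, confirming the identity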

integers solutions

Resolved

Hello. I have the following equation in three variables: 3x + 4y = 7z. I want to find out solutions to it which satisfy the following criteria: first, they are all positive integers; secondly, they are all distinct; thirdly, x, y, z are all less than 30. The only ways I know are to set it up as an integer programming problem and use software which I have, or to write a computer program which uses brute force. I want to then find out solutions for 2x + 5y = 7z, and in general I want to find out a way to solve ax + by = (a+b)z. What would be the appropriate way? Thanks-Shahab (talk) 14:25, 12 July 2010 (UTC)[reply]

x ≡ y (mod 7). For the general case, x ≡ y (mod a + b). Gandalf61 (talk) 14:40, 12 July 2010 (UTC)[reply]
...assuming (without loss of generality) that a and b are coprime.—Emil J. 14:47, 12 July 2010 (UTC)[reply]
To write it more explicitly, integer solutions of the equation are exactly triples of the form (x, x + k(a + b), x + kb), where x and k are integers.—Emil J. 14:51, 12 July 2010 (UTC)[reply]
Gandalf61 how do I solve x ≡ y (mod 7) and from where did you get it? Thanks to you both for responding so fast.--Shahab (talk) 15:18, 12 July 2010 (UTC)[reply]
x ≡ y (mod 7) just means that x and y leave the same remainder when you divide them by 7 - see modular arithmetic. For a parametric solution, just pick two integers x and k and use Emil J.'s solution (x, x + 7k, x + 4k). Geometrically, these solutions all lie on the plane spanned by the vectors (1,1,1) and (0,7,4). Gandalf61 (talk) 15:46, 12 July 2010 (UTC)[reply]
No, no...I mean how did you deduce that x, y such that x ≡ y (mod 7) give the solutions.-Shahab (talk) 16:37, 12 July 2010 (UTC)[reply]
On the one hand, if 3x + 4y = 7z, then 7 | 3x + 4y, hence also 7 | 4(y − x). As 7 and 4 are coprime, this implies 7 | y − x, i.e., x ≡ y (mod 7). On the other hand, if this congruence holds, then z = (3x + 4y)/7 = x + 4(y − x)/7 is an integer, and solves 3x + 4y = 7z.—Emil J. 16:43, 12 July 2010 (UTC)[reply]
Thanks, it's all clear now-Shahab (talk) 16:51, 12 July 2010 (UTC)[reply]
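
Since the original question mentioned writing a brute-force program, here is a short sketch of one (my own addition), which also checks the x ≡ y (mod 7) characterization given above against the brute-force results:

    # Brute force all distinct positive x, y, z below 30 with 3x + 4y = 7z,
    # then check that every solution satisfies x ≡ y (mod 7),
    # equivalently z = x + 4*(y - x)//7.
    solutions = [(x, y, z)
                 for x in range(1, 30)
                 for y in range(1, 30)
                 for z in range(1, 30)
                 if 3*x + 4*y == 7*z and len({x, y, z}) == 3]

    assert all((y - x) % 7 == 0 for x, y, z in solutions)
    print(len(solutions), solutions[:5])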


July 13

the value of 0/0

I have had this doubt for a long time: what is the value of 0/0? To my knowledge, I have guessed that this will have three values, as shown below. Could you please give me the right answer?


case (i):

 0/0=0

reason: zero divided by any number is equal to zero.


case(ii):

 0/0=1

reason: any number divided by the same number is equal to one.


case(iii):

 0/0=infinite

reason: any number divided by zero is equal to infinite.


I have asked this question to many teachers. All said that the answer is infinite, but they can't explain why we should not consider the remaining cases. Please explain this to me. —Preceding unsigned comment added by 122.183.208.130 (talk) 14:35, 13 July 2010 (UTC)[reply]

Division by zero is undefined. So, it does not have a value. You can argue that it is any value you like. So, in a sense, division by zero produces EVERY answer possible. -- kainaw 14:37, 13 July 2010 (UTC)[reply]
You mean division of zero by zero, I suppose. It does not make sense to make the value of x/0 a finite real number for nonzero x.—Emil J. 14:44, 13 July 2010 (UTC)[reply]
See indeterminate form. --Tango (talk) 14:40, 13 July 2010 (UTC)[reply]
In computing, dividing one floating point 0 by another will give a special value called Not a Number, which basically is saying it is indeterminate. In reasoning about values in automatic proofs it is liable to be set to the Bottom type, which is yet another way of saying the same thing - but then again one might mean any value rather than no value. In some circumstances a value can be defined for it: for instance x/x is indeterminate when x = 0, but it has a limit value of 1 at that point. You might as well ask how many angels can dance on the head of a pin? Dmcq (talk) 14:58, 13 July 2010 (UTC)[reply]
If someone with computer graphing skills can upload a graph showing together the three equations 0/x = y, x/x = y, and x/0 = y, then we might see where they lead for 0/0 = y.—Wavelength (talk) 15:11, 13 July 2010 (UTC)[reply]
There is nothing to graph here. For we have , and in the real projective line . You also have and . All of these approach the indeterminate form 0/0, and because of this, there is no natural definition for it. -- Meni Rosenfeld (talk) 15:23, 13 July 2010 (UTC)[reply]
Division by zero is also relevant. -- Meni Rosenfeld (talk) 15:24, 13 July 2010 (UTC)[reply]
Your teacher, even allowing them to be a bit sloppy, isn't right to say "infinite". More often the answer is something like "unknown". Imagine that you're trying to calculate the speed of an object (which is travelling at a steady speed) using the equation speed = distance / time. If you check how far its travelled after 2 seconds and find it has travelled half a metre, then its speed is 0.5 / 2 = 0.25 metres per second. If you had checked after just half a second you would find it had travelled an eighth of a metre, so its speed is still 0.125 / 0.5 = 0.25 metres per second. If you check how far it has travelled after zero seconds i.e. after no time at all, then you would find it hasn't travelled at all, so your calculation would look like 0/0. But this is what you would get no matter how fast or slow the object was travelling, so really "the object travels 0 metres in 0 seconds" gives no information. (What your teacher was thinking of was x/0 for any x other than 0 e.g. if an object travelled 2 metres in 0 seconds then it must have been going infinitely fast.) Quietbritishjim (talk) 00:36, 14 July 2010 (UTC)[reply]

A (slightly) more rigorous way of looking at it is to look at the graph. The page inverse function shows how to get from an equation like y=(something involving x) to x=(something involving y): by flipping about the diagonal line shown on that graph. In our case we want to go from y = 0 x to x = y / 0. But y = 0 x is a horizontal line, so x = y / 0 becomes a vertical line. This backs up the idea that if you try to take 0 / 0 you have any (or all) values, whereas y / 0 for any other y doesn't make sense (or is infinite, if you allow that). Quietbritishjim (talk) 00:43, 14 July 2010 (UTC)[reply]

The fraction a/b is 'the' solution to the equation bx=a. If b≠0 then this equation uniquely determines x, (because (b≠0 and bx=a and by=a) → (b≠0 and bx−by=a−a) → (b≠0 and b(x−y)=0) → (x−y=0) → (x=y)). If a=b=0 then the equation bx=a is 0x=0 which is true independent of the value of x, and so the equation does not define x, and so 0/0 is undefined. If a≠0 and b=0, then the equation bx=a is 0=a, which is false independent of the value of x. Bo Jacoby (talk) 03:20, 14 July 2010 (UTC).[reply]

time to break out truth

In case the above isn't convincing to you, let's break out actual reality - you know, truth. So, what is 0 - it is the absence of a single one or fraction of whatever you're counting, but not negative - you don't owe any either. For example, 0 apples is the complete absence of a single Apple. Now, I am about to do strictly integer division - no fractions. Let's say you drive to my house, pick up 2 apples, drive home, put them in your basket at home, and repeat until you can't make your next drive, since there aren't enough apples left. How many times can you make that drive (physically - we are talking about reality here, not math). You can physically do that half as many times (INTEGER DIVISION) as I have apples. If I have 8 Apples, you can drive to my house, pick up two, drive home, and repeat, 4 times (8/2 = 4). After the fourth time, you can't do that again, there not being enough apples at my house to make the trip. So, that is how many times you can do it. Now what if instead of coming to my house and picking up 2 apples, you come to my house and pick up 0 apples, drive home, and repeat. After how many trips will you have to stop, there not being enough apples at my house for you to make your next trip? There is no answer to "after how many do you stop" because you do not stop. It's not that you stop after infinite times - you do not stop; the division is not exhaustive: you can always do one more. So, if the normal definition of integer division would be subtracting the divisor one time until you can't do that any more* (because you run out of the dividend), then you will never meet that definition: you will never "run out" of the dividend when you're taking 0 from it. So, there is no solution to how many times you are going to be able to do that, if the process is never finished. This is like asking: if every time you say something, your wife says something, and every time your wife says something, you say something, then if you are the first to speak, how many times will the last person to speak have spoken? There is no "last person to speak" by that definition, so it is a meaningless question, it fails to refer. Asking what 5/0 is, is the same as asking what the value of the last prime is. There is no last prime, and there is no end result to the division. Saying 5/0 is "infinite" is just as wrong as saying the "last prime" is "infinite". The last prime is not infinite, since there is none.84.153.208.32 (talk) 10:13, 14 July 2010 (UTC)[reply]

  • obviously this is the normal definition. When I ask you how many times you can remove 2 apples from 8, your answer isn't "Oh, you have a lot of choices. You can do it 0 times, 1 time, 2 times, 3 times, or 4 times - these are the physically possible alternatives facing you when you are looking at 8 apples and are to remove 2 a certain number of times, these are the possibilities of how many times you can do that.". Yes, "linguistically" the question "how many times can you remove 2 apples from 8" has the answer "0, 1, 2, 3 or 4", but that's not how we understand it. We understand "how many times until you can't do it again." That's the definition, and by that definition when you divide by 0 you do not get "any answer you want", any more than when you divide 8 by 2 you get "any answer you want, from the choices 0, 1, 2, 3, or 4" -- the definition is to do it until you can't do it one more time, and since there is no such point in the calculation, there is no answer when dividing by 0. It's like me telling you how to calculate Mango's number: you start with 2, then keep doubling (2, 4, 8, 16, 32) until you get to a negative number: that's Mango's number. What is the value of Mango's number? (Obviuosly it is not "infinite", or "any number you want", or architecture-dependent). No, there is no Mango's number. It's the same if we say Mango's number is defined as "4/0": there is no value, the definition does not admit of a point at which you can say, "Okay, now I have Mango's number", just as you can't reach a point where you can say "Okay, now I have the largest prime." You can't say "Okay, now I have the value of 4/0".
You are correct in saying that there is no answer in the real numbers, but many mathematicians and also many non-mathematicians are happy with R, the Extended real number line, where the answer to x/0 is plus or minus infinity for non-zero x. Dbfirs 08:25, 16 July 2010 (UTC)[reply]
The extended real number line is not suitable for division by zero. For this you need the real projective line, where 1/0 is unsigned infinity. -- Meni Rosenfeld (talk) 09:07, 16 July 2010 (UTC)[reply]
Another way round is used in IEEE floating point where they have Signed zero. So you either have plus infinity equal to minus infinity or you have plus zero being different from minus zero, take your pick ;-) Dmcq (talk) 11:47, 16 July 2010 (UTC)[reply]
I considered linking to the real projective line article, but most people are happier with plus infinity being distinct from minus infinity. Dbfirs 17:06, 17 July 2010 (UTC)[reply]
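
A small sketch (my own, illustrating Dmcq's points above about IEEE floating point): numpy follows IEEE 754, giving NaN for 0/0 and signed infinities when dividing by signed zeros, whereas plain Python floats raise ZeroDivisionError instead.

    import numpy as np

    zero = np.float64(0.0)
    one = np.float64(1.0)

    with np.errstate(divide='ignore', invalid='ignore'):
        print(zero / zero)            # nan: 0/0 is an indeterminate form
        print(one / zero)             # inf: positive number divided by +0.0
        print(one / -zero)            # -inf: same division, but by -0.0 (signed zero)
        print(np.isnan(zero / zero))  # True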


July 14

Free stuff

What's the free ring over an abelian group? Is it this: given an abelian group A, take the free ring generated by A then quotient it by the ideal generated by all the i(a)+i(b)-i(a!b), where i is insertion of generators and ! is the addition in A?

Also I was reading about Freyd's adjoint functor theorem. If you run through the whole proof it seems to be somewhat constructive. My book (MacLane's 2nd edition) on page 125 proves the forgetful functor from compact Hausdorff spaces to SET has a left adjoint using the adjoint functor theorem; he finds a small solution set explicitly then applies the theorem. Did this suggest how to construct an explicit Stone–Čech compactification? Similarly for Grp. Does the adjoint functor theorem suggest ways to construct free objects explicitly? Because free groups are so much more complicated than free abelian groups (you could put the abelian relation on the free group but why do that when you can use the set of all cofinally 0 functions from X to the integers) I wonder how they managed to get free groups in the first place. Money is tight (talk) 04:17, 14 July 2010 (UTC)[reply]

Regarding the first question, I don't specifically know that terminology, but I think it would be the group ring of Z over A. Using your phrasing I guess that would be the free ring generated by A modded out by the ideal generated by all the i(a)*i(b) - i(a!b). Rckrone (talk) 06:11, 14 July 2010 (UTC)[reply]
I think what you described is the tensor algebra over A with A considered as a Z-module. (Can anyone confirm if that is correct?) Rckrone (talk) 06:26, 14 July 2010 (UTC)[reply]
Sorry for the confusion, I was in a rush. What I wanted was the left adjoint to the forgetful functor Rng->Ab, sending each ring to its underlying additive abelian group. I've never heard of this construction before. Also my 2nd question is more clearly stated as: How do people explicitly construct free objects for the first time? Is it raw ingenuity or is there some algorithm for doing so? Money is tight (talk) 09:20, 14 July 2010 (UTC)[reply]
For algebraic categories like the examples you mentioned there is a uniform "algorithm". If U and V are varieties such that the signature σ of U includes the signature τ of V and V includes all reducts of algebras from U, then the forgetful functor U → V has a left adjoint F: V → U defined as follows: F(A) is the quotient of the free U-algebra (which can itself be constructed as the quotient of an absolutely free σ-algebra, e.g. the term algebra, over the congruence generated by the defining identities of U) in generators A over the congruence ~ generated by the atomic diagram of A (i.e., f(a1,...,ak) ~ a for each k-ary operation f ∈ τ and each a1,...,ak,a ∈ A such that a = f^A(a1,...,ak)). (Incidentally, the coverage of universal algebra on Wikipedia is extremely poor.)—Emil J. 12:57, 14 July 2010 (UTC)[reply]
Oh, and the answer to your first question is yes, your construction is in fact a special case of the general construction above (more precisely, the general construction would also put i(−a) + i(a) into the ideal, but that's easily seen to be redundant).—Emil J. 13:22, 14 July 2010 (UTC)[reply]

Thanks! I've never had any exposure to universal algebra so I don't exactly know what you mean. I had a look at the online pdf version "A course in universal algebra" page 73, and I think this is how: Say we want the free group over a set X (with signature: multiplication binary, inverse unary, and 1 constant). Take the term algebra T(X), which is like the free magma except it's got terms with inverse and 1. Next take the intersection E of all congruences on T(X) such that the quotient is actually a group (since T(X) is the "absolutely free algebra" you described, it has no relations on it, not even associativity), and an intersection of congruences is a congruence. The quotient under E is a group too, the free group on X.

I really have no idea what's going on lol. I'll take a proper look some other time.

Another question: this "uniform algorithm" is not the standard way to construct free groups. Is it because this construction is so much more difficult to use practically than the standard construction using equivalence classes of words? The free abelian group can be constructed from the free group by using further identifications, but why do that when we can use all cofinally 0 functions to the integers. So sure, this algorithm proves every forgetful functor on some type of algebras has a left adjoint, but it might as well just be a pure existence result because of the insane complexity in its construction. Money is tight (talk) 05:19, 15 July 2010 (UTC)[reply]

The congruence E can also be described "from below": t E s iff there exist n ≥ 0 and terms t0 = t, t1, ..., tn = s such that for each i < n, ti = ui(vi) and ti+1 = ui(wi) for some terms ui, vi, wi, where either vi = wi or wi = vi is an instance of one of the defining identities of groups (for example, we may have vi = 1 and wi = r−1·r for some term r). The standard group-specific construction is actually equivalent to this: another description of E is that t E s iff (the flattened forms of) t and s are equivalent as words. In this way it gives more information, as it provides an explicit description of representatives of the equivalence classes. This is useful for various group-theoretic considerations, but I don't think it is any easier than the general construction if you just need to prove that the free group is a free group. Of course, in the case of abelian groups the explicit description of free objects is so simple that it beats any generic construction.—Emil J. 12:28, 15 July 2010 (UTC)[reply]

Polish notation

What's the point of Polish notation? Not only is it harder to read, it's more difficult to work with algebraically. For example, compare the normal version ab+cd to its Polish notation equivalent, + * a b * c d. --138.110.206.101 (talk) 16:45, 14 July 2010 (UTC)[reply]

It removes ambiguity. How do you KNOW that ab+cd=(ab)+(cd) and not ab+cd=a(b+c)d? You are using an assumption that multiplication precedes addition. Polish notation never requires assumptions or parentheses. -- kainaw 16:59, 14 July 2010 (UTC)[reply]
It isn't really an assumption is it, when everyone is using previously agreed-upon rules? There isn't really any ambiguity to ab+cd is there? mislih 20:07, 14 July 2010 (UTC)[reply]
It's very easy to evaluate without ever rereading any of the input (which is helpful if the input is large). You just note each operation as you come to it and then evaluate it with the next one or two values (depending on the operator). One or both of those might itself be the result of an operator, so you proceed recursively. In your example you would say "Add... multiply a and b..." (at which point you have the two things + and ab to remember; you need not remember a and b separately or that you multiplied them) "...multiply c and d" (at which point you obtain cd and then immediately the overall result). In practice, RPN is used for this purpose because then only numbers (and never operators) have to be temporarily remembered (and the stack that remembers them can often then be smaller). --Tardis (talk) 18:59, 14 July 2010 (UTC)[reply]
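
A minimal stack-based RPN evaluator of the kind just described (my own sketch; the example values are arbitrary):

    def eval_rpn(tokens):
        """Evaluate a reverse-Polish expression left to right with a stack of numbers."""
        ops = {'+': lambda a, b: a + b,
               '-': lambda a, b: a - b,
               '*': lambda a, b: a * b,
               '/': lambda a, b: a / b}
        stack = []
        for tok in tokens:
            if tok in ops:
                b = stack.pop()   # the operand pushed last is the right-hand one
                a = stack.pop()
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))
        return stack.pop()

    # ab + cd with a, b, c, d = 2, 3, 4, 5, written in RPN as "a b * c d * +":
    print(eval_rpn("2 3 * 4 5 * +".split()))  # 26.0
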
So Polish or reverse Polish would be better for determining the value of a given expression, but isn't it worse for algebraic manipulation? --138.110.206.101 (talk) 19:01, 14 July 2010 (UTC)[reply]
Not really. You are just assuming it would be because you aren't used to seeing it. If you only learned Polish notation, you would look at all the precedence rules and parentheses used in inline notation and think it was a confusing mess. -- kainaw 19:04, 14 July 2010 (UTC)[reply]
How would you, for example, solve x 2 ^ x 1 + 2 ^ + x - 1 - 0 = without converting to regular notation? It's a lot harder. --138.110.206.101 (talk) 19:09, 14 July 2010 (UTC)[reply]
You don't include = in the notation. Keep it as two separate expressions, one being x 2 ^ x 1 + 2 ^ + x - 1 - and the other being just 0. Anything you do to one you do to the other. ie: To get rid of "1 -", you use "1 +" and get x 2 ^ x 1 + 2 ^ + x - and 0 1 +. Since there are no parenthesis or order of operations, we had no issues with deciding what we could and could not do. The next thing to work on is the trailing x -. But, it is apparent that you will hit a point at which you want to have one side simply 0 and the other side will contain a squared x. You will have to simplify. How do you simplify? You memorize patterns. You've probably memorized them as inline patterns, but you could have memorized them as Polish notation patterns just as well. -- kainaw 19:25, 14 July 2010 (UTC)[reply]
English is like the standard arithmetic expressions if you consider verbs as operators. Latin is reverse Polish, as in "Jim the ball hit", and Irish is Polish, with the equivalent of "hit Jim the ball". I guess in them Polish or reverse Polish might appear a bit more natural. Dmcq (talk) 20:54, 14 July 2010 (UTC)[reply]
In Hebrew, all three are correct. And the subject doesn't even have to precede the object, giving a total of 6 variations! (Though I'm unsure of the "Hit the ball Jim" variant). -- Meni Rosenfeld (talk) 09:50, 15 July 2010 (UTC)[reply]
Verb Object Subject doesn't seem to be all that common, and Object Verb Subject even less so - it says Klingon uses that order because it is so strange sounding :) Dmcq (talk) 11:15, 15 July 2010 (UTC)[reply]
To avoid potential misunderstanding, all six variations are possible in Polish as well, though the default word order is SVO as in English. Polish notation has nothing to do with Polish language, it's named after members of the Polish logical school who invented it.—Emil J. 11:41, 15 July 2010 (UTC)[reply]

Rubik's cube group

How many conjugacy classes has the Rubik's cube group? --84.61.131.18 (talk) 20:05, 14 July 2010 (UTC)[reply]

How large is the largest order of any element in the Rubik's cube group? --84.61.131.18 (talk) 20:41, 14 July 2010 (UTC)[reply]

We have some info on the group in Rubik's cube group. The answer to your questions is not there, but some of the stuff may be useful. I suppose it shouldn't be hard to compute conjugacy classes and maximal orders using some computer algebra system.—Emil J. 15:38, 15 July 2010 (UTC)[reply]
Off the top of my head there are elements of order 8·3·5·7 = 840. A complete listing of conjugacy classes would be rather lengthy; I'm guessing there are at least a thousand. Conjugacy classes of wreath products of the symmetric groups are fairly easy to work with, so a computer algebra system may not be necessary.--RDBury (talk) 16:50, 15 July 2010 (UTC)[reply]
According to the Magma computer algebra system there are 81120 conjugacy classes and the largest element order is 1260. The group has elements of the following orders: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 18, 20, 21, 22, 24, 28, 30, 33, 35, 36, 40, 42, 44, 45, 48, 55, 56, 60, 63, 66, 70, 72, 77, 80, 84, 90, 99, 105, 110, 112, 120, 126, 132, 140, 144, 154, 165, 168, 180, 198, 210, 231, 240, 252, 280, 315, 330, 336, 360, 420, 462, 495, 504, 630, 720, 840, 990, 1260. Most orders are made up of many classes of several sizes. Ignoring class sizes, and just sorting by order, one gets the following table:
Order table
Order NrClasses NrElems
1 1 1
2 74 170911549183
3 119 33894540622394
4 439 4346957030144256
5 5 133528172514624
6 7542 140621059298755526
7 3 153245517148800
8 316 294998638981939200
9 219 55333752398428896
10 136 34178553690432192
11 1 44590694400
12 20899 2330232827455554048
14 54 23298374383021440
15 190 14385471333209856
16 35 150731886270873600
18 4739 1371824848124089632
20 315 151839445189791744
21 66 39337151559333120
22 5 927085127270400
24 7590 3293932519796244480
28 114 97419760907673600
30 5155 1373347158867028224
33 23 15874019662233600
35 7 65526218912563200
36 8703 3768152294808760320
40 130 835897246403788800
42 1428 737199776831097600
44 4 100120377950208000
45 144 197329441659727104
48 739 911497647410380800
55 1 4854321355161600
56 47 671205306846412800
60 7371 4199961633799421952
63 57 264371433705308160
66 103 404051175250329600
70 37 210461722916290560
72 3289 1981453794190295040
77 2 187238109413376000
80 7 13349383726694400
84 1664 1697725818678067200
90 2177 1764876446897050368
99 27 104367909135974400
105 70 232824419423354880
110 1 4854321355161600
112 5 128726200221696000
120 1776 1947044011463147520
126 679 854783686296207360
132 36 637129677864960000
140 28 223125413717606400
144 330 714192029378150400
154 2 187238109413376000
165 12 213590139627110400
168 274 1050269239266508800
180 2015 2320395168471367680
198 55 759701292082790400
210 388 1053174509332070400
231 4 374476218826752000
240 84 407156203664179200
252 468 689877080447385600
280 6 68653973451571200
315 35 99309879652515840
330 12 213590139627110400
336 10 257452400443392000
360 460 571019888909352960
420 182 961155628321996800
462 4 374476218826752000
495 4 174755568785817600
504 70 238381852262400000
630 87 395380140162416640
720 10 120144453540249600
840 24 240288907080499200
990 4 174755568785817600
1260 8 51490480088678400
By hand calculations of the orders/sizes of all conjugacy classes are likely much too difficult due to the length, but David Joyner's Adventures in Group Theory (chapter 11, I believe) works out the elements of order 2 in quite some detail by hand. I believe Joyner also includes the by hand calculation of the maximum order, though it might be in one of his other books. JackSchmidt (talk) 17:25, 15 July 2010 (UTC)[reply]

How many sporadic subgroups has the Rubik's cube group? --84.61.131.18 (talk) 21:53, 15 July 2010 (UTC)[reply]

Which sporadic groups can be subgroups of the Rubik's cube group? --84.61.131.18 (talk) 16:24, 16 July 2010 (UTC)[reply]

M11 and M12 are subgroups of A12, which is a subgroup of the cube group. Every simple subgroup is isomorphic to a section of some simple composition factor, in other words, is contained as a section of A12 (or A8, but A8 ≤ A12, so this adds nothing new). The only sporadic simple groups contained in the cube group are M11 and M12. JackSchmidt (talk) 02:19, 17 July 2010 (UTC)[reply]

Are there any sporadic groups which have the Rubik's cube group as a subgroup? --84.61.131.18 (talk) 16:24, 16 July 2010 (UTC)[reply]

No. The only possibility not ruled out by Lagrange's theorem is the monster group, but the cube group is not contained in any of the monster's maximal subgroups, again by Lagrange's theorem. JackSchmidt (talk) 02:19, 17 July 2010 (UTC)[reply]

Help with a definite integral

I am trying to solve this integral,

which, I believe, has the solution

.

However I am having a hard time getting this answer. Using integration by parts, I get

.

This gives the correct answer when x is any integer other than zero, but is infinite when x = 0. I could get the correct result for x = 0 just by plugging that in at the beginning. Is there something wrong with my derivation above? Is there a more elegant way to arrive at the correct answer? Thanks mislih 20:03, 14 July 2010 (UTC)[reply]

, but because of the possibility, in which case you have , of course. If you write (which is just using a different C), then the limit works out properly, so you might try that as an intuitive approach, but it's still technically undefined. So when you do integration by parts you have to split off the x = 0 case just as if you were dividing by x on both sides of an equation. Then of course you're allowed to assume x ≠ 0 later in the other branch, so you would avoid bringing in the Kronecker delta. --Tardis (talk) 23:36, 14 July 2010 (UTC)[reply]
Thanks for clearing that up. mislih 21:14, 15 July 2010 (UTC)[reply]

Denesting Radicals

Hello. Why does have to be an integer if some nested radical, , can be denested into a sum of surds, ? If I rewrite in terms of and , I get . Thanks in advance. --Mayfare (talk) 22:17, 14 July 2010 (UTC)[reply]

If

then squaring both sides yields

Now if a, b, d, e are rational and √c is not, then we have

Therefore

and if everything in sight is nonnegative, we can then say

Finally we have these two products:

Therefore

So if d − e is an integer then we're done. Michael Hardy (talk) 23:50, 14 July 2010 (UTC)[reply]

July 15

Testing the bias on a coin

If a coin is flipped a number of times, and the results are n heads and m tails, what is the probability that the next flip will be heads?115.178.29.142 (talk) 03:22, 15 July 2010 (UTC)[reply]

There's no way to know. But the maximum likelihood estimate is n/(n+m). 67.119.12.184 (talk) 05:11, 15 July 2010 (UTC)[reply]
Assuming of course that you have no other information regarding the fairness of the coin, the probability that the next flip will be heads is P=(n+1)/(n+m+2). This formula also applies when n=m=0, unlike the approximate estimate P ≈ n/(n+m). Bo Jacoby (talk) 09:33, 15 July 2010 (UTC).[reply]
If you were pretty certain there should be little or no bias either way then (n+10)/(n+m+20) might be better. You can see it depends on your prior probability. Dmcq (talk) 09:48, 15 July 2010 (UTC)[reply]
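
Both formulas correspond to posterior predictive probabilities under a Beta prior on the coin's heads-probability: (n+1)/(n+m+2) comes from a uniform Beta(1,1) prior (Laplace's rule of succession) and (n+10)/(n+m+20) from a Beta(10,10) prior concentrated near fairness. A short sketch of my own:

    def prob_next_heads(n, m, alpha=1.0, beta=1.0):
        """P(next flip is heads) after n heads and m tails, under a Beta(alpha, beta)
        prior on the coin's probability of heads (Laplace's rule when alpha = beta = 1)."""
        return (n + alpha) / (n + m + alpha + beta)

    print(prob_next_heads(0, 0))          # 0.5    -- no data, uniform prior
    print(prob_next_heads(1, 0))          # 2/3    -- one toss, one head (discussed below)
    print(prob_next_heads(1, 0, 10, 10))  # ~0.524 -- prior biased towards a fair coin
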
Dmcq's result assumes that the coin has been flipped n+m+18 times given n+9 heads and m+9 tails. The prior probability can not be chosen freely. No information about the fairness of the coin means that the prior probability of heads, (H|), is 1/2, and the prior probability of tails, (T|), is also 1/2. The wanted conditional probability of heads, giving the facts, F, (that there were already n heads and m tails), is computed by Bayes' formula together with the formula for the hypergeometric distribution
Bo Jacoby (talk) 11:11, 15 July 2010 (UTC).[reply]
That is a pretty good prior if you'd never come across a coin before, but most people recognize that coins are usually pretty unbiased. Now suppose we tossed the coin once and it came up heads. That would give the chance of the coin coming up heads as 2/3. That means if I bet 100 quatloos on tails and the other person 200 on heads then we should break even over a number of throws - in three goes on average I'd lose twice and they'd lose once. Now would you really be willing to bid against me if I put 150 quatloos on tails to your 200 on heads? Dmcq (talk) 12:00, 15 July 2010 (UTC)[reply]
Indeed. Bo is assuming we have no information about the fairness of the coin, but that's not the case. We know it's a coin, which means it is very likely to be very close to fair. --Tango (talk) 12:42, 15 July 2010 (UTC)[reply]
If you already know that the coin is fair, then don't test it, just tell the OP that the probability is 1/2 because you know that it is a fair coin. But the OP is not sure, and that's why he tests it. If you have noticed (or have been told) that coins are usually pretty unbiased, it must be because coins have been flipped before, and then the values of n and m are high. It doesn't need to have been the same coin every time. Now the OP clearly assumes that "a coin is flipped a number of times, and the results are n heads and m tails", so we are justified in assuming that n and m are the correct counts. Or perhaps the OP is using the word coin figuratively meaning some device providing random values called heads and tails. In any case (n+1)/(n+m+2) is the correct formula for the probability that the next flip show head. Bo Jacoby (talk) 14:46, 15 July 2010 (UTC).[reply]
Previous coin tosses aren't the only kind of evidence you could have. You could have the word of the person that gave it to you that it isn't a fixed coin, for example. Or simply the knowledge that most coins in existence are fair. Knowing that a different coin was tossed and got certain results isn't the same as knowing that the coin in your hand was tossed and got certain results. --Tango (talk) 14:53, 15 July 2010 (UTC)[reply]
I feel this is rapidly heading towards at least one side of at least one sheep seems black territory. :) Dmcq (talk) 15:33, 15 July 2010 (UTC)[reply]
That's nonsense. There is no dichotomy between "know for certain the coin is fair" and "have no idea what a coin is, and hence use a maximum-entropy prior". If you know that 99% of coins are fair, then when given a random coin (which you have no reason to suspect was chosen based on its fairness or lack thereof), you'll say there's a 99% chance that this coin is fair. If you then toss it 3 times and it's all heads, you'll still have a high credence to it being fair (the exact amount depends on the continuous prior distribution for the bias of biased coins), and thus your credence for the next toss being heads will be close to 50%. As more evidence is collected where this coin lands heads more than it should, your posterior will shift towards the coin having the bias implied by the Heads:Tails ratio.
Even if past coin tosses is the only knowledge of coins you have (as opposed to, say, proficiency in metallurgy and rigid body mechanics), you can't compress this knowledge to 2 numbers m and n. If you sample 2 coins and toss each many times, then "one coin always lands heads, one always tails" is very different, wrt your conclusions about the bias of coins, from "each coin lands a roughly equal number of heads or tails". Your probability distribution for the bias of coins can take any form depending on what observations you have made. -- Meni Rosenfeld (talk) 15:51, 15 July 2010 (UTC)[reply]

The arcsin for numbers >1

How do you find the arcsin for numbers greater than 1 (which I believe is a complex value) —Preceding unsigned comment added by 115.178.29.142 (talk) 04:44, 15 July 2010 (UTC)[reply]

By the way, I know the real part is pi/2 . but what is the imaginary part?115.178.29.142 (talk) 05:15, 15 July 2010 (UTC)[reply]
Sin(z) is defined in terms of the exponential function exp(z) as sin(z)=(exp(iz)-exp(-iz))/2i, and exp(z) is in turn defined in terms of a power series (see article Exponential_function). So when you want to find arcsin of x, you solve for z in x=(exp(iz)-exp(-iz))/2i Money is tight (talk) 05:24, 15 July 2010 (UTC)[reply]
Which will give you the expression in Inverse trigonometric functions#Logarithmic forms.—Emil J. 11:51, 15 July 2010 (UTC)[reply]
Gandalf61 (talk) 13:00, 15 July 2010 (UTC)[reply]
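
As a practical note (my own addition, not from the thread): Python's cmath.asin accepts arguments outside [−1, 1] and returns the complex value directly. For real x > 1 the real part is π/2 and the imaginary part has magnitude arccosh(x); its sign depends on the library's branch conventions.

    import cmath, math

    z = cmath.asin(2)
    print(z)                                          # real part pi/2, imaginary part about ±1.3170
    print(math.isclose(z.real, math.pi / 2))          # True
    print(math.isclose(abs(z.imag), math.acosh(2)))   # True: |Im| = arccosh(2)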

Comparing two averages

If you are comparing two samples from completely separate populations, you can use a two sample t-test. Is there any way to compare a population's mean to a subset's mean? For instance, you look at 2000 students with an average weight of 150 pounds. Then you choose 50 of those students and get their average weight. Is there any way to check that their average weight is not significantly different from the population's average of 150 pounds? Like: an average of 149 might not be significant, but an average of 220 is significant. Remember, you don't have the average and standard deviation of the whole population without those students.

Basically you're trying to show that the group of students you chose as a whole is not significantly heavier than the other students at the school. 160.10.98.106 (talk) 15:21, 15 July 2010 (UTC)[reply]

The whole population without those students is (2000×150−50×220) / (2000−50) = 148.2 pounds per student. Your problem is treated by analysis of variance. Bo Jacoby (talk) 09:46, 17 July 2010 (UTC).[reply]
It depends how you know the population's average. If you've measured the entire population then there's no uncertainty in its mean, so to test the null hypothesis that the weights of the 50 students could be a random sample from the population then you can use a one-sample t-test with the population mean as the specified value μ0, and you'd gain precision by using the population standard deviation if you know it rather than the sample standard deviation. If on the other hand you found the population mean from a random sample of the population then you have a two sample t-test provided none of that sample were also members of your subset of 50 students. If they were, it's going to get a bit trickier—simplest to exclude those students from one group or the other, e.g. from the larger group. Qwfp (talk) 11:14, 17 July 2010 (UTC)[reply]
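
A sketch of the one-sample t-test Qwfp describes (my own addition; the subset weights below are simulated placeholders, not real data):

    import numpy as np
    from scipy import stats

    population_mean = 150.0   # mean weight of all 2000 students (treated as known)

    rng = np.random.default_rng(42)
    subset = rng.normal(loc=158.0, scale=25.0, size=50)   # hypothetical weights of the 50 students

    # One-sample t-test of H0: the 50 students are a random sample from a
    # population whose mean weight is 150 pounds.
    t_stat, p_value = stats.ttest_1samp(subset, population_mean)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")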

Theoretical computer science

Suppose F is a boolean function with n variables, and I want to determine whether there exists an assignment of true or false to those variables such that F is true. With a nondeterministic Turing machine, this can clearly be solved in time n, so this is in NP. However, a deterministic Turing machine requires time 2^n to check all 2^n possible assignments of variables, so this is not in P. Therefore, P != NP. What's the problem with this proof? --138.110.206.101 (talk) 16:25, 15 July 2010 (UTC)[reply]

You're assuming there's no better method for a deterministic Turing machine to use than straightforward brute force search. You're also taking "polynomial-time" to mean polynomial in the number of variables, while I think the only sensible meaning for it here is polynomial in the length of the proposition. Algebraist 16:31, 15 July 2010 (UTC)[reply]
[ec] The problem is that you have assumed that brute-forcing all possibilities is the most efficient algorithm to solve this. To prove that a problem is not in P you have to show that each and every algorithm that solves it is superpolynomial, not that one particular algorithm is superpolynomial. -- Meni Rosenfeld (talk) 16:33, 15 July 2010 (UTC)[reply]
Are there any polynomial-time deterministic algorithms for determining whether a boolean equation has a solution? --138.110.206.101 (talk) 16:40, 15 July 2010 (UTC)[reply]
We don't know. That's exactly what the P =? NP problem asks for. However, there are SAT solvers which do better than the brute force assignment search (they achieve time complexity O(c^n) for some c < 2), which should at least persuade you that the problem is nontrivial.—Emil J. 16:43, 15 July 2010 (UTC)[reply]
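
For concreteness, the brute-force 2^n search described in the question looks like the sketch below (my own addition, with an arbitrary example formula); the point of the replies is that nobody has shown this is the best a deterministic machine can do.

    from itertools import product

    def brute_force_sat(f, n):
        """Try all 2**n truth assignments of an n-variable Boolean function f."""
        return any(f(*bits) for bits in product((False, True), repeat=n))

    # Example: (x or y) and (not x or z) is satisfiable, e.g. by x=False, y=True, z=False.
    print(brute_force_sat(lambda x, y, z: (x or y) and (not x or z), 3))  # True
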
This is a very common mistake for those beginning to study P/NP. You are not using a standard nondeterministic Turing machine. You are using an infinite-length nondeterministic Turing machine. What if the maximum number of values that your machine can store is m. You have n values to check. If m>n, no problem. If m<n, you have a problem. You cannot do this in simply polynomial time. A basic rule: If your "polynomial" time solution begins with "get a tape long enough to store all the values" or "get enough memory to store all the variables" or "get a rope long enough to stretch around all the posts", it is not a polynomial solution. -- kainaw 17:24, 15 July 2010 (UTC)[reply]
What on earth are you talking about? The standard Turing machine, as used in the definition of NP, has (at least potentially) an infinite tape. It can get away with a polynomially bounded tape as it is a poly-time machine here, but it certainly cannot be bounded by a constant. That would be equivalent to a nondeterministic finite automaton, which can only recognize regular languages, a tiny subset of P (let alone NP). The OP's NP algorithm is correct, and it is in fact a textbook example (SAT being among the most popular NP-complete problems).—Emil J. 18:04, 15 July 2010 (UTC)[reply]
I was attempting to use language to make it understood to the questioner. When you say "infinite tape" and you know what you are talking about, those just learning about P/NP hear "the tape starts out with an infinite amount of data already written on it and all I have to do is read it quickly from one end to the other". I have argued for many years that the language used in describing P/NP leads to more confusion than anything else. -- kainaw 18:07, 15 July 2010 (UTC)[reply]
Whereas telling the OP that he or she made a mistake when they actually did not (in this part of the argument, that is) is supposed to reduce confusion, right?—Emil J. 18:15, 15 July 2010 (UTC)[reply]
I figured he was confused enough already because he most likely intended to describe a 3-SAT but didn't. A 2-SAT (or 1-SAT, which is what he described) has a simple solution. 3-SAT is where the trouble starts. Further, it appeared that he was going towards the standard "I can't believe I solved the 3-SAT problem" solution of: 1. Get enough memory to store every 3-variable set in the 3-SAT. 2. Calculate the possibility of each of those being true. 3. Solve those solutions as a 1-SAT. Step 1 is where it immediately gets thrown out of P. -- kainaw 21:54, 15 July 2010 (UTC)[reply]
But the OP was not trying to prove that SAT (or 3-SAT, though I don't see anything in their post which would indicate that they intend such a restriction) is in P. On the contrary, they were trying to prove that it is not in P (whereas it is in NP, that's what the part about nondeterministic TM does).—Emil J. 10:27, 16 July 2010 (UTC)[reply]

Implicit function theoem

My text book writes "...note that the chain rule applied to F(x,g(x)) = 0 gives

D_x F(x, g(x)) + D_y F(x, g(x)) · Dg(x) = 0

which is equivalent to

Dg(x) = −[D_y F(x, g(x))]^(−1) · D_x F(x, g(x))."

I was wondering how the first equation can be arrived at. Oh, and F:ℝn+1 → ℝ. PS Do you have to bold the D every time you write the derivative matrix? Because it's hard to bold on paper...74.15.137.192 (talk) 19:31, 15 July 2010 (UTC)[reply]
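
As a sanity check of the displayed formula, here is a small sympy verification of the n = 1 case, using F(x, y) = x^2 + y^2 - 1 and g(x) = sqrt(1 - x^2) (my own example, not from the textbook):

    import sympy as sp

    x, y = sp.symbols('x y')
    F = x**2 + y**2 - 1
    g = sp.sqrt(1 - x**2)          # solves F(x, g(x)) = 0 on (-1, 1)

    lhs = sp.diff(g, x)                                    # Dg(x)
    rhs = -(sp.diff(F, x) / sp.diff(F, y)).subs(y, g)      # -(D_y F)^(-1) * D_x F at (x, g(x))
    print(sp.simplify(lhs - rhs))  # 0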

Riddle (of my own)

I apologize if you think it's to easy. Additionaly, user:EmilJ is hereby asked not to respond, unless no response has been given by Sunday morning. If even EmilJ won't respond by Monday morning, then I'll provide the solution. The riddle goes as follows:

By "arithmetical" mapping - we shall hereby mean: a mapping expressible in the arithmetical language: (+,×,0,1), i.e. a function for which there exists a formula comprising of logical symbols as well as the symbols: +,×,0,1, so that for every a in the domain there is a unique b in the codomain such that .
Now, one has to find a field F (+,× being its corresponding addition and multiplication, 0,1 being its units respectively), with respect to which the relation: " there is an arithmetical mapping from S to T " - is not transitive, i.e. one has to find three sub-sets of F, let's call them A,B,C, such that there's an arithmetical mapping from A to B, and there's an arithmetical mapping from B to C, but there's no arithmetical mapping from A to C.

HOOTmag (talk) 21:20, 15 July 2010 (UTC)[reply]

I'll bow out too then with EmilJ and ponder deeply on "How much wood could a woodchuck chuck if a woodchuck could chuck wood?" Dmcq (talk) 23:33, 15 July 2010 (UTC)[reply]
If you think my riddle is too easy, you may Email me. Thanks.HOOTmag (talk) 23:48, 15 July 2010 (UTC)[reply]
Well, I was supposed not to respond to this question, but let me just say that I don't understand the description. What do you mean by "expressible in the arithmetical language"? I would read it as "definable by a first-order formula in the structure (F,+,×,0,1)", but then the riddle would have no solution, because if two surjections are defined by φ(x,y) and ψ(y,z), their composition is a surjection definable by ∃y(φ(x,y) ∧ ψ(y,z)). You thus may want to clarify the statement of the problem.—Emil J. 10:39, 16 July 2010 (UTC)[reply]
Yes: by "expressible in the arithmetical language" I really mean just as you've interpreted it.
No: although there are surjections f and g expressible in the arithmetical language, the surjection may be not expressible in the arithmetical language. This is my riddle, and it does have a solution.
To make things even more provocative, I've replaced "surjection" by "mapping", as you can see now in the current version of my riddle.
Good luck. HOOTmag (talk) 14:49, 16 July 2010 (UTC)[reply]
You contradict yourself. What I wrote above is a (standard) proof that the composition of two definable functions is definable, so you obviously cannot mean it as I've interpreted it. You have to explain your terminology, otherwise your "riddle" is just gobbledygook.—Emil J. 15:07, 16 July 2010 (UTC)[reply]
No, I don't.
Yes, what you wrote above is really a (standard) proof that the composition of two [arithmetically] definable functions is [arithmetically] definable.
However, the composition of two [arithmetically] definable functions is not necessarily an [arithmetically] definable function.
No, my riddle is not gobbledygook.
Good luck. HOOTmag (talk) 15:27, 16 July 2010 (UTC)[reply]
Then your terminology must be even more puzzling than I expected. The composition of two functions is always a function, by definition. What you write makes no sense.—Emil J. 15:31, 16 July 2010 (UTC)[reply]
No, my terminology is just like yours.
Yes, The composition of two functions is always a function, and the composition of two [arithmetically] definable functions is [arithmetically] definable.
However, the composition of two [arithmetically] definable functions is not necessarily an [arithmetically] definable function.
No, what I wrote does make sense.
Good luck. HOOTmag (talk) 15:38, 16 July 2010 (UTC)[reply]
Let me try it in a different way. Here's the standard terminology:
  • a function f: A → B is, depending on whom you ask, either a relation R ⊆ A × B such that for every a in A there exists a unique b in B such that a R b, or the triple (R, A, B) where R is as above,
  • the composition of functions f: A → B and g: B → C is the function g∘f: A → C such that (g∘f)(a) = g(f(a)) for every a in A (sometimes f and g are written in the opposite order),
  • a function f: A → B, where A and B are subsets of the domain of a structure M, is definable in M, if there exists a first-order formula φ(x,y) in the language of M such that for every a and b in M, φ(a,b) holds in M iff a is in A and f(a) = b.
Tell us which of these your terminology disagrees with. In particular, you seem to make a distinction between a definable function and a function which is definable, but these two are synonyms in the standard terminology.—Emil J. 15:49, 16 July 2010 (UTC)[reply]
Your third terminology of "definability in M" has nothing to do with my riddle.
To put things clearer: A, B being sub-sets of the field F, the expression "arithmetical mapping" f: A → B should be interpreted here as follows: there exists a formula φ comprising logical symbols as well as the symbols +,×,0,1 (which are interpreted as the addition, the multiplication and the units of both operations of F respectively), so that for every a in A and for every b in B, φ(a,b) holds iff f(a) = b. I'll be back on Sunday morning. HOOTmag (talk) 16:15, 16 July 2010 (UTC)[reply]
Aha, so you don't want the function itself to be arithmetically definable, but that there exists an arithmetically definable relation whose intersection with A × B is (the graph of) the function. Then the problem becomes sensible.—Emil J. 16:29, 16 July 2010 (UTC)[reply]
So others don't have to read through all this, you should move this part of the conversation to the talk page (to preserve it) and just change the original phrasing so it is clear, as finally realized by EmilJ. 92.229.14.255 (talk) 21:26, 16 July 2010 (UTC)[reply]

Let F = GF(4) = {0, 1, a, a + 1}, and put A = {0}, B = {a}, C = {a, a + 1}, f(0) = a, g(a) = a. Then f: AB and g: BC are arithmetical mappings in HOOTmag's sense (f by the true formula, g by x = y), but no mapping from A to C is arithmetical, because it is not invariant under the Frobenius automorphism (which swaps a and a + 1). Pretty much the same thing can be done with any field which carries any nonidentical automorphism.—Emil J. 12:00, 18 July 2010 (UTC)[reply]
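
A small computational check of this example (my own sketch; I encode the GF(4) element c1·a + c0 as the bit pair (c1, c0), with a² = a + 1):

    # GF(4) = {0, 1, a, a+1} with a^2 = a + 1; elements encoded as bit pairs (c1, c0).
    def gf4_mul(u, v):
        c2 = u[0] & v[0]                      # coefficient of a^2
        c1 = (u[0] & v[1]) ^ (u[1] & v[0])    # coefficient of a
        c0 = u[1] & v[1]                      # constant term
        return (c1 ^ c2, c0 ^ c2)             # reduce using a^2 = a + 1

    ZERO, ONE, A, A1 = (0, 0), (0, 1), (1, 0), (1, 1)
    names = {ZERO: '0', ONE: '1', A: 'a', A1: 'a+1'}

    # The Frobenius map x -> x^2 is a field automorphism fixing 0 and 1 and swapping
    # a and a+1, which is what blocks any arithmetical mapping from {0} to {a, a+1}.
    for u in (ZERO, ONE, A, A1):
        print(names[u], '->', names[gf4_mul(u, u)])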

Excellent. Now try to solve the same riddle but with my original "surjection" (and also with "bijection") instead of "mapping". (By "arithmetical surjection", I mean: an "arithmetical mapping" - which is also a surjection. By "arithmetical bijection" from A to B, I mean: an "arithmetical mapping" from A to B - which is also an "arithmetical mapping" from B to A). HOOTmag (talk 12:00, 18 July 2010 (UTC)[reply]
I'm also not sure in what way f is arithmetical. As clever as using an ifexpr statement is, it doesn't really work since it's visible when editing... --Tango (talk) 20:26, 17 July 2010 (UTC)[reply]
It's arithmetical since the formula uses no symbols other than the logical ones and the arithmetical ones. HOOTmag (talk 12:00, 18 July 2010 (UTC)[reply]
But how can you write it without using the symbol a? I thought we were only allowed addition, multiplication and identities. --Tango (talk) 22:53, 17 July 2010 (UTC)[reply]
f is expressed (e.g.) by the formula: "0=0" (or by any true formula, as EmilJ indicated). You should remember that B= {a}, so the only object in B satisfying the formula "0=0" is a. If you still have any doubt, look again at the very definition of "arithmetical mapping", above. HOOTmag (talk 12:00, 18 July 2010 (UTC)[reply]
I've boldly removed the time delays because they were getting ridiculous and Emil J.'s answer wasn't correct anyway (not being a surjection). Ah, I see. That kind of trick will only work if A and B are singleton sets, though, which will make it difficult to use it with surjections (if there is a surjection from B to C, and B is a singleton set, then C must be too, which means any mapping from A to C is trivially arithmetical by the same trick). --Tango (talk) 23:45, 17 July 2010 (UTC)[reply]
I've boldly put back the time delays, because EmilJ who decided to use them didn't allow you to remove them. and Emil J.'s answer was correct (because the current version of my riddle is not about surjections but rather about mappings). If you want to remove the time delays from your posts, you can do that, but please don't touch Emilj's posts and my posts :). My riddle has a solution even for surjections. For more details, read my response to EmilJ's last response, and wait for EmilJ's response, or wait untill Monday morning when I give the full answer if EmilJ hasn't responded by then. HOOTmag (talk 12:00, 18 July 2010 (UTC)[reply]
I've removed them again. Please do not put them back without a consensus on the talk page. Hiding answers to questions goes against the intentions of the reference desk, which is to answer people's questions. This isn't a competition page. --Tango (talk) 12:00, 18 July 2010 (UTC)[reply]
I've put them back again. If you want to remove them from your posts, you can do that, but please do not remove them from other editors' posts - without a consensus on the talk page. Changing other editors' posts without getting their explicit permission - goes against Wikipedia rules. The current discussion has nothing to do with competitions. This page welcomes riddles, as well as solutions for riddles, but it isn't a quarrel page. HOOTmag (talk 12:00, 18 July 2010 (UTC)[reply]

Consensus on the talk page seems to be that this timed-hiding of posts is inappropriate for the ref desk. It's disruptive, so that stuff can be removed. If you don't remove them yourself, I'm sure someone else will. :) ←Baseball Bugs What's up, Doc? carrots16:12, 18 July 2010 (UTC)[reply]

Which I have now done. Put them back again, and I will ask that you be blocked for disruption. Have a nice day. :) ←Baseball Bugs What's up, Doc? carrots16:17, 18 July 2010 (UTC)[reply]
Unfortunately, your removing the time delays, was done too late, when it was already not actual, since the time delays had expired more than four hours before you removed them.
Similarly, your advice not to put them back, was given too late, due to the same reason mentioned above.
Unfortunately, your redundant warning about what you may do if the time delays (which are already not actual) are put back, does not assume good faith. I'm sure that if you had been aware of the sequence of events, you wouldn't have hurried to warn innocent wikipedians, who acted only according to what Wikipedia rules permit (in my opinion and in other editors' opinion, like DMacks). If you disagree with me (and with DMacks) and think that Wikipedia allows the wikipedians to edit other editors' posts, then your opinion is legitimate and should be discussed gravely solemnly and seriously (in other cases in the future since our case is already not actual after the time delays had expired at 12:00 before they were removed), because as far as I'm concerned - I do assume good faith from all sides.
have a nice day too. :)
HOOTmag (talk) 21:24, 18 July 2010 (UTC)[reply]
There is no consensus with regard to the time delays, because two editors (Dmacks and I) think that Wikipedia rules don't permit any editor to change other editors' posts without getting their permission.
You advise me to "remove the stuff", but from what?
  1. From my posts only? Note that I responded to hidden responses, and I couldn't present my post as an unhidden response to a hidden one: just imagine how our page would look if it contained an unhidden response to a hidden post...
  2. Did you advise me to remove the time delays from the other users' posts to which I responded? In my opinion, I was not permitted by Wikipedia to do so, and this opinion is not mine alone: it is also what DMacks thinks.
If you disagree with me (and with DMacks) and think that Wikipedia permits me to edit other editors' posts in our specific circumstances, then your opinion is legitimate and should be discussed gravely, solemnly and seriously, but you cannot advise me to act in a way that I (and another editor) consider a violation of Wikipedia rules.
Anyway, your advice to remove the time delays is moot, since you gave it after the time delays had expired (they expired at 12:00, i.e. more than four hours before your advice was given).
HOOTmag (talk) 21:24, 18 July 2010 (UTC)[reply]

July 16

Fixed Point

Studying some old analysis exams, I came across the following problem and it has resisted all efforts by everyone I know (including myself) so I appeal here. Here is the problem verbatim.

"The continuous map from to satisfies
a) for
b) F maps the set into itself.
Show that F has a unique fixed point in ".

Ok, several issues with this question. The only fixed point theorem we are supposed to know is the contraction mapping theorem so I am guessing that the only way to do this problem is to show that F is a strict contraction (with the contraction constant being strictly less than one) on the unit square. Condition a) doesn't give us a contraction because the two metrics being used on the opposite sides of the inequality are different (left is using the 2-norm and the right is using the 1-norm). I have shown that F continuous implies that it maps the closed unit square to itself but I can't get a strict contraction for any of the metrics I can think of. The contraction constant always comes out to one, which is useless. In fact I proved that for any of the p-metrics (including the max-metric). I think that as long as I find one metric (for which R^2 is complete) for which F is a contraction, it's good because the actual fixed point doesn't depend on the metric. Any help?-Looking for Wisdom and Insight! (talk) 02:46, 16 July 2010 (UTC)[reply]

"left is using the 2-norm." Really? There is no square or square root involved. Condition a) is the two inequalities |f_1(x_1, y_1) - f_1(x_2, y_2)| < (1/2)(|x_1 - x_2| + |y_1 - y_2|) and |f_2(x_1, y_1) - f_2(x_2, y_2)| < (1/2)(|x_1 - x_2| + |y_1 - y_2|);
when added together you get |f_1(x_1, y_1) - f_1(x_2, y_2)| + |f_2(x_1, y_1) - f_2(x_2, y_2)| < |x_1 - x_2| + |y_1 - y_2|,
or d_1(F(P_1), F(P_2)) < d_1(P_1, P_2),
which looks like a contraction. PS. The notation in the problem is confusing. Indexes 1 and 2 refer sometimes to different points and sometimes to different coordinates of the same point. A notation that separates the points from their coordinates is better. Bo Jacoby (talk) 07:41, 16 July 2010 (UTC).[reply]

Yes, true, but this is still just d_1(F(P_1), F(P_2)) < d_1(P_1, P_2) and the constant of contraction is still one (which I already knew). The thing is that it needs to be a number strictly less than one. Contraction constant being one doesn't say anything. There may be a fixed point (like the identity map which has many fixed points) or there may be no fixed points. The constant needs to be less than one for the existence and uniqueness of a fixed point. I already know that any of the standard p-norms (including the infinity norm) give us one. I even tried more exotic things and I still get the contraction constant as 1. Is there a metric which would work? How would we go about finding it? Or are we going about this completely the wrong way and the approach is completely different? I was also thinking, this problem of having the contraction constant c=1 happens because of the diagonal y=x so maybe if I could take the diagonal, take a little band around it (width=epsilon) and remove it from the closed unit square. Then I would have a c<1 and then maybe take a limit as epsilon goes to zero or something. Would that work?-Looking for Wisdom and Insight! (talk) 07:58, 16 July 2010 (UTC)[reply]

(Edit Conflict) You are also right about the notation being confusing. It's basically: pick two points in R^2. And F has the two scalar components f1 and f2. I agree this is not the best choice but that's what it said.-Looking for Wisdom and Insight! (talk) 07:58, 16 July 2010 (UTC)[reply]

A square with side a is mapped into a square with side a/2? No, sorry. Bo Jacoby (talk) 08:07, 16 July 2010 (UTC).[reply]
There's a wee bit of a problem in that if you're talking about the open square there isn't any fixed point - it could all tend to one corner. If you're talking about a closed square then the contraction factor doesn't have to be less than 1, it just needs to always be less than 1 for any pair of points you look at but could have a limit of 1. A map like that, sending everything to a diagonal line that then goes towards a corner, is an example.
The important thing for a closed square is that it is compact. Dmcq (talk) 09:59, 16 July 2010 (UTC)[reply]

Stupid me! Just substitute zeroes in all coordinates and obtain the relation 0<0, which is false. No mapping satisfies the required conditions, and so every mapping that does satisfy the condition has a fixed point. Bo Jacoby (talk) 13:30, 16 July 2010 (UTC).[reply]

Ah, but is it a unique fixed point :) You're quite right, the theorem for the compact case is the one where the inequality is only required to hold whenever the two points are different. Dmcq (talk) 13:39, 16 July 2010 (UTC)[reply]
I don't see the need for any separate "compact case". In the original case, f: R^2 → R^2 maps Ω to itself and is continuous, therefore it also maps the closure of Ω to itself. Thus by your argument it has a fixed point in the closure of Ω and a fortiori in R^2. Uniqueness is straightforward.—Emil J. 14:05, 16 July 2010 (UTC)[reply]
Oh I see what you're talking about, it is a fixed point of the map of the whole plane, I was just thinking of the open square. Um, yes I should have read it better, thanks. Dmcq (talk) 15:51, 16 July 2010 (UTC)[reply]

Wait, maybe it's just me being stupid but what's the verdict here? How do we know that there is a fixed point and how do we know that it's unique? Maybe it's irrelevant but can we say anything about where (or what) the fixed point would be? Is it in omega? Is it in the closure of omega (like the origin)? The only thing I can understand from this is that if I pick two points in omega, then their images will be closer in the L1 norm. But we don't even know specifically how close.-Looking for Wisdom and Insight! (talk) 23:25, 16 July 2010 (UTC)[reply]

Yep there's always a unique fixed point. I was using the Brouwer fixed point theorem rather than the Banach fixed point theorem on the closure of the unit square. As EmilJ pointed out the closure of an open area will always transform to within the closure of the transformed open area. Plus you can't have two fixed points if the distance after the mapping is always smaller. Dmcq (talk) 00:02, 17 July 2010 (UTC)[reply]
The theorem that a distance-reducing map on a compact metric space has a fixed point is much easier than Brouwer (just consider a point x minimizing the continuous function x↦d(x,f(x))). Algebraist 09:11, 17 July 2010 (UTC)[reply]
Yes that's better, compact sets map to compact sets, or even invoke the extreme value theorem from analysis, but topology is easier here. I was trying to work in the poster's contraction mapping theorem and I didn't manage anyway. Looking at Banach fixed point theorem I see it mentions this bit about a compact set and not needing the contraction to have a definite factor less than 1. I don't see how to fit in the poster's bit about having just done the contraction mapping theorem, it sounds like it should be relevant but just doesn't seem to be, or is the bit about compact sets a part of the contraction mapping theorem? Dmcq (talk) 10:35, 17 July 2010 (UTC)[reply]
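For anyone following along, the minimization argument can be spelled out in a few lines (a sketch, with K the closed unit square, F mapping K into K, d the L1 distance, and the inequality read as holding for distinct points, as above):

Let g(x) = d(x, F(x)). Then g is continuous and K is compact, so g attains its minimum at some x_0 in K.
If g(x_0) > 0, i.e. x_0 ≠ F(x_0), then g(F(x_0)) = d(F(x_0), F(F(x_0))) < d(x_0, F(x_0)) = g(x_0), contradicting minimality; hence F(x_0) = x_0.
If x and y were two distinct fixed points, then d(x, y) = d(F(x), F(y)) < d(x, y), which is absurd, so the fixed point is unique.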

No, the verdict is that the problem is a joke. Condition a) is a contradiction because for point 1 equal to point 2 condition a) says that 0<0, which is simply not the case. You do not need any fixed point theorems at all. There exists no mapping satisfying condition a). The set of mappings satisfying condition a) is the empty set = { } = Ø. What can be said about the elements of the empty set? The answer is that anything can be said about the elements of the empty set, because the empty set has got no elements. So all the mappings satisfying condition a), (there are none), have got unique fixed points, Q.E.D. Bo Jacoby (talk) 04:01, 17 July 2010 (UTC).[reply]

That's true but I'm reading it as meaning the inequality refers to any two distinct points as I said above. It's not a joke, it's just a missing condition. Dmcq (talk) 09:06, 17 July 2010 (UTC)[reply]

The original problem may be modified in several ways to make more sense. The sign '<' may be replaced by '≤' to avoid the contradiction 0<0. Then the identity map, having plenty of fixed points, is a possibility. Then the constants 1/2 could be lowered a little to, say, 499/1000. Then the Banach theorem applies and there is a unique fixed point. But the problem as written assumes a contradictory condition. Bo Jacoby (talk) 11:20, 17 July 2010 (UTC).[reply]

If a person looks around a house and says "I looked and there was nobody there" we don't go around saying "Ha ha you're trying to trick us, you were there so it couldn't have been empty". Let's just assume a bit of good faith and a straightforward problem and not jump at things like that. Dmcq (talk) 12:58, 17 July 2010 (UTC)[reply]

An old analysis exam needs no good faith for its interpretation. The purpose is to check that the student can read and think. Bo Jacoby (talk) 18:50, 17 July 2010 (UTC).[reply]


Summarizing the above discussion :

  • (b) is already sufficient to get a fixed point in the closed square [0, 1] × [0, 1], which is an invariant set of the map: by the Brouwer fixed point thm.
  • Condition (a) as it is, is empty, so it was possibly not properly stated. One may modify it requiring (a'): the strict inequality holds only for pairs P ≠ Q. Summing the two inequalities one gets d_1(F(P), F(Q)) < d_1(P, Q) for all P ≠ Q in the square, where d_1 is the L1 distance (sum of absolute values of the coordinates).
  • Under the assumptions (a') and (b) the conclusion follows by minimizing d_1(P, F(P)) on the compact square, or, if you want to use the contraction principle, taking a limit point of a sequence of fixed points of the contractions F_n := (1 - 1/n) F. (Unicity is obvious).

--pma 18:16, 19 July 2010 (UTC)[reply]
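As a concrete illustration of the situation in pma's summary, here is a toy example in Python. The map below is made up purely for the illustration (it is not the exam's F): it satisfies (a') with constant 1/2, maps the open unit square into itself, and simple iteration converges to its unique fixed point.

def F(p):
    # a made-up map satisfying (a'): each component moves by less than half
    # the L1 distance between the inputs, and the open unit square maps into itself
    x, y = p
    s = (x + y) / 4.0
    return (s + 0.1, s + 0.2)

p = (0.9, 0.9)
for _ in range(60):
    p = F(p)
print(p)  # converges to the unique fixed point (0.25, 0.35)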

Linear independence over complex numbers of modular forms of different weights

Hello, I am trying to show that n nonzero modular forms of n distinct integer weights are linearly independent over the complex numbers. I thought induction might work and the base case is easy as it is just the case n = 1. But, I can also prove it for n = 2 directly:

Assume f_1 and f_2 are nonzero modular forms of distinct integer weights k_1 ≠ k_2. Assume also that there exist c_1, c_2, not both zero, such that c_1 f_1 + c_2 f_2 = 0. WLOG, assume both c_1 and c_2 are nonzero (if one of them were zero the other form would have to be zero). Then, I can solve for f_2 = -(c_1/c_2) f_1. Since these are both modular forms, they are invariant under the stroke operator for any γ = (a, b; c, d) in SL_2(Z) for their respective weights. Thus,

-(c_1/c_2) f_1(γz) = f_2(γz) = (cz + d)^(k_2) f_2(z) = -(c_1/c_2) (cz + d)^(k_2) f_1(z),

which gives

f_1(γz) = (cz + d)^(k_2) f_1(z),

which gives

(cz + d)^(k_1) f_1(z) = (cz + d)^(k_2) f_1(z),

so that

(cz + d)^(k_1 - k_2) = 1 at any point z where f_1(z) ≠ 0.

Now, since γ (and hence cz + d) can be anything and this exact equation must hold for all such matrices, the only conclusion is k_1 = k_2, a contradiction. Thus, they are linearly independent.

I tried extending this same argument to the general case of n but I am not sure where to go. Any suggestions? Thanks StatisticsMan (talk) 13:26, 16 July 2010 (UTC)[reply]

Just to be clear, when I tried it with n, hoping to use the induction hypothesis that n-1 are linearly independent, I got Σ_{i=1}^{n-1} c_i (cz + d)^(k_i - k_n) f_i(z) = Σ_{i=1}^{n-1} c_i f_i(z),
and this does not seem to have any great way to cancel. I put everything on one side to get Σ_{i=1}^{n-1} c_i ((cz + d)^(k_i - k_n) - 1) f_i(z) = 0,
which is almost a linear combination of n-1 of them, well it is, but it's in complex variables instead of complex numbers. StatisticsMan (talk) 13:57, 16 July 2010 (UTC)[reply]

Millennium Prize Problems

Would you get the prize money for disproving one of the conjectures or proving it independent? --138.110.206.101 (talk) 16:12, 16 July 2010 (UTC)[reply]

The rules say

In the case of the P versus NP problem and the Navier-Stokes problem, the SAB will consider the award of the Millennium Prize for deciding the question in either direction. In the case of the other problems if a counterexample is proposed, the SAB will consider this counterexample after publication and the same two-year waiting period as for a proposed solution will apply. If, in the opinion of the SAB, the counterexample effectively resolves the problem then the SAB may recommend the award of the Prize. If the counterexample shows that the original problem survives after reformulation or elimination of some special case, then the SAB may recommend that a small prize be awarded to the author. The money for this prize will not be taken from the Millennium Prize Problem fund, but from other CMI funds.

Independence proofs aren't mentioned as far as I can see. Algebraist 16:16, 16 July 2010 (UTC)[reply]
For good reason, I'd say. It's always important to keep in mind that independence is not a truth value. All you can hope to show — well, at least all anyone heretofore has shown — is that a proposition is independent of some specific formal theory.
Granted, the same objection could be applied to a proof, period; I'm not sure what would happen if, say, someone proved that P!=NP, but had to assume the existence of a supercompact cardinal. Well I can tell you one thing that would happen, which is it would make me very happy. But would they award the prize? In my opinion they ought to, but I don't know whether they have a settled decision on the point. --Trovatore (talk) 20:57, 18 July 2010 (UTC)[reply]
If they can prove it by assuming the existence of such a cardinal and prove that it is false if there is no such cardinal (i.e. they've proven it is independent of any axiomatic system that doesn't precisely determine whether or not supercompact cardinals exist) then I'd say they have solved the problem and should get the money. If they've only proven the first half, then I'd say they haven't solved it. They've just solved a special case of the problem. If I were in charge, I would give them a smaller prize, as per the rules for counterexamples due to technicalities. If you give them the full prize, then what do you do when someone else comes along and solves the whole problem? The money would already be gone. --Tango (talk) 21:27, 18 July 2010 (UTC)[reply]
You're going to require a sharp estimate of the exact large-cardinal strength needed? It seems a bit arbitrary to apply that criterion only at consistency strengths higher than ZFC. For example the axiom of replacement can be thought of as a large-cardinal axiom in a way. If someone proved P!=NP in a way that used replacement, but didn't show that replacement was necessary, would you withhold the prize? --Trovatore (talk) 21:44, 18 July 2010 (UTC)[reply]
I read your scenario as suggesting someone proved LC=>(P!=NP) for some large cardinal LC, but didn't rule out the possibility that P!=NP is provable in, say, Peano arithmetic (it has already been proven independent of some fragments of PA). Of course it would be different (and astounding) if P vs NP was proved independent of ZFC. Note that there is a theorem that if P vs NP is provably independent of PA, then in "practical" terms P is equal to NP. Specifically, SAT can be solved in running time O(n^f(n)) where f(n) is an unbounded but ultra-slow-growing function, like the inverse of the Ackermann function, i.e. effectively constant for all problems that can fit in a computer. This is mentioned on p. 15 of [1] which is a good paper on the possible independence of P vs NP. 67.122.211.208 (talk) 02:36, 19 July 2010 (UTC)[reply]
Yes, you read my scenario correctly. Since I am of the Cabal mindset that large-cardinal axioms may be used freely, I would consider that scenario to settle the question.
Now, as to the theorem you mention, I would like to get this straight because we should really be accurate about it in the articles. If I remember correctly, at least one of our articles mentions it in the form you state, namely that if P!=NP is not provable in PA, then your conclusion holds. But someone claimed on a talk page that you actually need to assume that P!=NP is not provable in PA by means of the sort of proof that has up to now been used (possibly Aaronson's "natural proofs"). And Aaronson himself states the conclusion if P!=NP is not provable in the theory he calls "PA+Π1", that is, PA plus all true Π^0_1 sentences.
Note that the latter theory is quite a bit stronger than PA as regards the natural numbers. For example the statement "the theory 'ZFC+there exists a supercompact cardinal' is consistent" is a true Π^0_1 statement, and therefore provable in PA+Π1. So independence from PA+Π1 would be, in my subjective estimation, not really all that far from independence from "ZFC+there exists a supercompact cardinal". --Trovatore (talk) 03:32, 19 July 2010 (UTC)[reply]
Sorry-- I've added the word "provably" above, but I think that means any sort of proof, not just a "natural" proof. That is explained in Aaronson's article, pp. 14-15. 67.122.211.208 (talk) 04:53, 19 July 2010 (UTC)[reply]
It means "any sort of proof" if you're assuming independence from the stronger theory PA+Π1. I haven't been over Aronson's paper with a fine-tooth comb, but I see nothing that would enable the conclusion merely if P!=NP is independent from PA.
Now, he does observe at the top of page 15 that, if you can prove P!=NP is independent of PA, then you can prove that SAT can be computed in nlog n time, or much less (your inverse Ackermann etc). But it doesn't follow that you can prove this in PA! Not unless your independence proof is also formalizable in PA. --Trovatore (talk) 05:12, 19 July 2010 (UTC)[reply]
Sorry, I got confused — that's not what he observes at the top of page 15. If that were true it would pretty much prove your point, but it isn't. What he shows is that if PA doesn't prove that SAT can't can be computed in nlog n time, well then it can cannot be.
But that doesn't exclude the following situation: Suppose ZFC (just for example) proves that P!=NP is independent of PA, but true, and that SAT cannot be computed in n^(log n) time. As far as I can see, nothing that Aaronson writes excludes that possibility. If you see something that does, or otherwise know a reason that that can't be true, please let me know. --Trovatore (talk) 05:36, 19 July 2010 (UTC)[reply]
I should say: I'm not sure the above is quite right, when talking about time. Aaronson actually talks about "circuits" rather than time. I haven't read his paper closely enough to know what a "circuit" is. So please don't take my analysis as necessarily a precisely correct representation of what Aaronson means. --Trovatore (talk) 07:34, 19 July 2010 (UTC)[reply]
See circuit complexity. Basically, circuits are a nonuniform computation model where the size of the circuit roughly corresponds to time on a Turing machine; for example, SAT is computable by circuits of size n^(O(log n)) iff it is computable by an algorithm running in time n^(O(log n)) with access to an extra "advice string" A_n, which has to be the same for all inputs of length n (and itself has length n^(O(log n)), but that's implicit in that the algorithm would not have the time to read it otherwise).—Emil J. 11:40, 19 July 2010 (UTC)[reply]
Separately from P vs NP as above, I wonder what the current state of the Millennium Prize problem fund is, since one of the problems has been solved but the solver declined the prize. 67.122.211.208 (talk) 07:20, 19 July 2010 (UTC)[reply]

July 17

Half pennies on credit cards

Can half pennies be saved on British credit cards? --84.61.131.18 (talk) 13:54, 17 July 2010 (UTC)[reply]

Half pennies are no longer legal tender. The miscellaneous reference desk is probably better for this sort of thing. Dmcq (talk) 14:29, 17 July 2010 (UTC)[reply]
British banks have never dealt in decimal half pennies (except in pairs), even when they were legal tender. Some credit cards are generous and round up to the nearest penny. Dbfirs 16:59, 17 July 2010 (UTC)[reply]

Can farthings be saved on British credit cards? --84.61.131.18 (talk) 10:21, 18 July 2010 (UTC)[reply]

As the article Farthing (British coin) notes, they ceased to be legal tender on 31 December 1960. That is, they didn't even make it to decimalisation. The only way you could save them on a (post-decimalisation) credit card would be with sellotape. -- 174.24.196.51 (talk) 15:31, 18 July 2010 (UTC)[reply]

Can the old half pennies be saved on British credit cards? --84.61.131.18 (talk) 17:30, 18 July 2010 (UTC)[reply]

No. You can't save old Nigerian 1/10 pennies either on them or Euros. Except of course with sellotape or some such analogue as 174.24.196.51 suggested. Dmcq (talk) 18:16, 18 July 2010 (UTC)[reply]

What subdivision is the smallest which could be stored on the oldest British credit cards? --84.61.131.18 (talk) 19:16, 18 July 2010 (UTC)[reply]

What does it mean to "save" or "store" money on a credit card? You borrow money on credit cards, you don't save it to them... --Tango (talk) 21:28, 18 July 2010 (UTC)[reply]
Perhaps there is a misconception that the magnetic stripe on a credit card stores the balance of the card. As far as I know, it stores only the account number, and perhaps some other identifying information. —Bkell (talk) 09:14, 19 July 2010 (UTC)[reply]
Even prepaid debit cards are normally just treated by the bank like a normal debit card with a hard limit. There are some cards which are more equivalent to money, for instance cards used for paying fares typically store the actual amount remaining on the card. Dmcq (talk) 10:06, 19 July 2010 (UTC)[reply]

Greedy tricolorability

The section called "How to color a knot" in the Tricolorability article seems to imply that a greedy, nonbacktracking algorithm will always succeed in properly tricoloring a knot if the knot is indeed tricolorable. But surely this is not the case—in the fourth step of the "example of a tricolorable knot", if the lower right strand is selected and colored blue (rather than the L-shaped strand on the left) then it is impossible to properly complete the tricoloring. Right? —Bkell (talk) 14:17, 17 July 2010 (UTC)[reply]

In the example given, each step is forced except for the initial color choices. So it's not so much greedy as take the only choice available at each step. The article doesn't go into it, but I assume there are more complex knots where you have multiple choices available at an intermediate step, and have to use some sort of backtracking. The section is a bit WP:HOWTO for my taste but it seems correct mathematically.--RDBury (talk) 14:34, 17 July 2010 (UTC)[reply]
The colors for the top right weren't forced, they could be swapped round. Notice the 'and color it with another color' in box 3. The only available different color for that lower right strand is red. Dmcq (talk) 14:36, 17 July 2010 (UTC)[reply]
Sure, the colors are forced if the strands are chosen in the order given. But if I choose to color the lower right strand third rather than fourth, and I choose to color it blue, then I arrive at a conflict that cannot be resolved without backtracking (because I cannot then color the L-shaped strand on the left). —Bkell (talk) 15:26, 17 July 2010 (UTC)[reply]

Anyway, my point is that I think the section is subtly flawed. It (imprecisely) describes an algorithm for coloring the strands—a greedy, nonbacktracking algorithm—and implicitly makes the claim that if this algorithm gets stuck then the knot is not tricolorable. I think this claim is false. —Bkell (talk) 15:29, 17 July 2010 (UTC)[reply]

Ah, okay, I see what you're saying, RDBury. In these examples the strands are chosen in an order that is carefully prepared so that there is only one possible color in each step, thus making backtracking unnecessary. (That's an important point that is omitted in the "how-to".) However, even this is not exactly the case: the "rules" of tricolorability described in the preceding section allow all of the strands incident upon a crossing to be given the same color, so, for example, the second strand colored in each of these examples could have been red. —Bkell (talk) 15:38, 17 July 2010 (UTC)[reply]
If you followed their rules you could have chosen different colors for the top right but the only choice for the bottom right was red. The color chosen according to the rule is another color. It can't be the same color as the other side unless the string over was the same color too. Dmcq (talk) 15:42, 17 July 2010 (UTC)[reply]
Yes, I know. I'm talking about the point in time when you have just colored the first strand red and you are pondering what color to use for the second strand. You have two choices here: either color the second strand red, so that it matches the first strand, or green (say), so that it doesn't match. Either is a priori allowable by the rules of tricolorability, even though the description of the algorithm says that a different color must be chosen. —Bkell (talk) 15:45, 17 July 2010 (UTC)[reply]
In other words, the entire lower left corner of the first example could properly be colored red. Or the entire upper right corner. (But not both, because the rules require at least two colors be used.) —Bkell (talk) 15:47, 17 July 2010 (UTC)[reply]
If your comment was a reply to what I wrote at 15:26, then let me try to explain it again. First operation is the same as the example: coloring the upper left strand red. Second operation is also the same as the example: coloring the J-shaped strand green. Third operation is different from the example: we are going to color the lower right strand now, not the lower left strand. This seems to be a legal choice to make. We choose to color this strand blue. This also seems to be a legal choice to make. It does not conflict with the coloring of the lower left L-shaped strand, because we haven't colored the lower left L-shaped strand yet. However, if we now choose to color the lower left L-shaped strand as our fourth operation, we arrive at a conflict and must backtrack. —Bkell (talk) 15:52, 17 July 2010 (UTC)[reply]
I see: instead of following the string across, choose a string that was crossed over. That certainly gives problems. It might be that the statement of the algorithm is incomplete, but anyway I had a quick look on the web and I couldn't see any reference to an algorithm to three-color a knot if it is possible, so on a quick guess I'd have to agree it was made up, or in wiki speak original research. Let's just see if anyone else comes up with a citation within a day since there are nice diagrams, and otherwise just remove that stuff entirely and note its removal and date on the talk page. Dmcq (talk) 16:32, 17 July 2010 (UTC)[reply]
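Since the point at issue is greedy versus exhaustive coloring, here is a minimal brute-force sketch in Python (not the procedure from the article). A diagram is given as a list of crossings, each naming the over-arc and the two under-arcs; a coloring is valid if every crossing sees either one color or all three, and at least two colors are used overall. The trefoil data at the end is included only as a check.

from itertools import product

def tricolorable(num_arcs, crossings):
    # try every assignment of the 3 colors to the arcs
    for colors in product(range(3), repeat=num_arcs):
        if len(set(colors)) < 2:
            continue  # at least two colors must actually be used
        if all(len({colors[a], colors[b], colors[c]}) != 2 for a, b, c in crossings):
            return True  # every crossing is monochromatic or uses all three colors
    return False

# the standard trefoil diagram: 3 arcs and 3 crossings, each crossing meeting all 3 arcs
print(tricolorable(3, [(0, 1, 2), (1, 2, 0), (2, 0, 1)]))  # True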

Interest calculations

My credit card company has just sent me an offer of zero percent interest for one year, and no transfer fee. I am not at all in debt apart from an interest-only mortgage. How should I calculate the amount of money that I can obtain from the credit card to pay off part of my mortgage? The two conditions are i) there must be zero money outstanding on the credit card when the interest-free year ends, and ii) the combined payments on the credit card and the interest-only mortgage must be no more than they are now. The credit card small-print says I should pay at least 2.25% of the outstanding balance off each month, or £5, whichever is greater. Thanks 92.29.117.202 (talk) 15:54, 17 July 2010 (UTC)[reply]

Without thinking about the particulars my first impulse would be to borrow as much as possible and stick 73% into bonds for 11 months and pay back the 2.25% with the remainder. Then walk away with the interest when paying the bank back. I doubt they'd lend you enough to pay any decent part of the mortgage back but you might be able to buy yourself a nice dinner. Dmcq (talk) 16:57, 17 July 2010 (UTC)[reply]

The interest rate on the mortgage would be greater than that on bonds. 92.29.117.202 (talk) 18:59, 17 July 2010 (UTC)[reply]

The problem with using borrowed capital to pay off a mortgage is that the company will not normally give the money back (increasing the mortgage again) at the end of the year (or at least not without an extra fee). In these circumstances, Dmcq's solution seems optimal, but if you happen to have a very flexible mortgage, then pay it off with 73% of your loan and borrow it back again after 12 months, but be very careful, because the slightest glitch in timing could cost you a lot of money in charges. Personally, I don't think this is worth the risk, but you might make a profit if you get the timing right. Dbfirs 08:49, 18 July 2010 (UTC)[reply]

Sorry I have not made my point clear enough. If I used the credit card offer to pay off some of the mortgage, then (assuming it is adjusted immediately) the monthly payment from the mortgage will reduce, but I will have to make credit card payments also. What would in theory be the amount I should borrow from the credit card given the two constraints that i) there must be zero money outstanding on the credit card when the interest-free year ends, and ii) the combined payments on the credit card and the interest-only mortgage must be no more than they are now? It has never been my intention to increase the mortgage again after the year is up - the amount borrowed on the credit card must drop down to zero after a year due to its normal monthly repayments.

If I borrowed enough money from the credit card to reduce my mortgage payments by £100 a month, then the maximum I could borrow under the constraints would be 12 x £100 = £1200. Now I think of it, any amount I borrow would reduce my monthly mortgage payments by only a small amount, so the amount I could pay off on the credit card would be even less than that, which by a circular argument indicates that the two criteria above cannot be fulfilled for any amount more than zero pounds. Thanks 92.24.178.184 (talk) 10:47, 18 July 2010 (UTC)[reply]
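To put rough numbers on that constraint (using a hypothetical 5% interest-only mortgage rate, chosen only for illustration):

rate = 0.05                       # hypothetical annual mortgage rate
B = 1200.0                        # amount moved from the mortgage to the card, in pounds
mortgage_saving = B * rate / 12   # monthly mortgage interest saved: about 5 pounds
card_repayment = B / 12           # monthly payment needed to clear the card in a year: 100 pounds
print(mortgage_saving, card_repayment)

The repayments needed to clear the card exceed the mortgage saving unless the mortgage rate were at least 100% a year, which is the circular argument above: under constraints (i) and (ii) the only possible amount is zero.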

By the way I'd have another close look at the terms. Is the zero percent on the transfer or on new purchases? And you'll very likely find that it is at most on credit card purchases and you can't borrow money on it. Dmcq (talk) 12:37, 18 July 2010 (UTC)[reply]

Any money transferred from somewhere else is charged zero interest for one year, and there is no transfer fee. 92.28.244.168 (talk) 19:10, 18 July 2010 (UTC)[reply]

When they say "transferred from somewhere else", they usually mean from another credit card, not from a mortgage! Dbfirs 11:17, 19 July 2010 (UTC)[reply]

ACT practice test question

This is an extremely elementary geometry question, I'm just asking because there is a surprising amount of disagreement over it. I was taking an ACT practice test, and there is a question, which I've copied below:

The rectangles ABCD and EFGH shown below are similar. Using the given information, what is the length of side EH, to the nearest tenth of an inch? (Rectangle drawing summary follows: ABCD: AD labelled 3 inches, CD labelled 8 inches, others unlabelled; EFGH: GH labelled 2 inches; AB and CD appear to be the longer sides of ABCD; EH and FG appear to be the longer sides of EFGH) Attempt at ASCII art rendering follows

                       E _______F
   A_____________ B     |       |         
   |             |    ? |       |
 3 |             |      |       |
   |_____________|      |_______|
  D       8      C     H   2 in  G

The answer choices are: A. 0.8
B. 1.3
C. 5.3
D. 7
E. 8

I saw that EH and FG were longer than GH and EF, but the instructions in the ACT said that Figures are not necessarily drawn to scale, so I disregarded this. I reasoned that since ABCD~EFGH then EH and AD must be proportional, as must be GH and CD. Setting up the proportion EH/3=2/8 I got EH=.75 which rounds up to .8 and thus picked A. However in the answers section they claim the correct answer is C. The explanation they provided makes no sense to me. They said that 3/8=2/EH therefore 3(EH)=16 and EH=5.3. Is this a mistake in the book? I thought corresponding segments in similar figures have to be proportional, i.e. A corresponds to E, B to F, C to G, D to H. Thanks for any help. Sorry about the TLDR. 76.229.159.199 (talk) 17:25, 17 July 2010 (UTC)[reply]

Apparently they want you to decide that EFGH is a rotated and scaled version of ABCD (thus still similar). The idea that they're not to scale is meant to prevent you from measuring EH and GH directly on the paper with a ruler to get the length; it seems it is not meant to prevent you from noticing which sides are longer. --Tardis (talk) 17:37, 17 July 2010 (UTC)[reply]
I don't get it. If you pick GH as the long side then EH = 0.75, whereas if you pick GH to be the short side then EH = 5.33. Both are included (rounded) in the choices so what was the clue to treat GH as the short side? hydnjo (talk) 19:47, 17 July 2010 (UTC)[reply]

GH (in the drawing) appears shorter. 76.229.159.199 (talk) 03:49, 18 July 2010 (UTC)[reply]

Poorly-set questions like this reflect badly on the intelligence of the author of the book. The question should have said that rectangle ABCD is similar to rectangle FGHE, and it should also have mentioned that answers are rounded to one decimal place. Dbfirs 08:35, 18 July 2010 (UTC)[reply]
I took the ACT in high school and remember that one of the questions definitely had an error, such that one of the answer choices was obviously correct, but because of how the question was worded, one of the other choices was also correct. The developers of the SAT are famously careful about that sort of thing (the questions get a ton of expert review and closed trials before going "live") and the ACT was sloppy by comparison. 67.122.211.208 (talk) 02:42, 19 July 2010 (UTC)[reply]

Algorithm to combine trees?

Is there an algorithm to combine two similar trees please? The trees would be broadly similar, except some of the nodes (and their sub-nodes) would be missing in one of the trees, and there would be extra nodes (and their sub-nodes) in one of the trees. The trees start from the same root.

If it makes things clearer, then the two trees-to-be-merged could be TreeA: Food>breakfast,lunch. breakfast>boiledegg,toast,marmalade. lunch>pasta,apple.

TreeB: Food>breakfast,lunch, dinner. breakfast>toast,kipper,marmalade. lunch>sandwich,fruit. fruit>apple,orange,banana. dinner>lentils,rice,strawberries.

TreeCombined: Food>breakfast,lunch,dinner. breakfast>boiledegg,toast,marmalade,kipper. lunch>pasta,apple,sandwich,fruit. fruit>apple,orange,banana. dinner>lentils,rice,strawberries.

Thanks 92.29.117.202 (talk) 18:26, 17 July 2010 (UTC)[reply]

The Computing reference desk is probably better for this sort of thing. Dmcq (talk) 18:46, 17 July 2010 (UTC)[reply]
By the way I believe this would be called a merge of the trees and you have named nodes. Dmcq (talk) 18:52, 17 July 2010 (UTC)[reply]

Here is a sketch:

Represent the trees as a sorted line-numbered file
10000 TreeA
11000 Food
11100 breakfast
11110 boiledegg
11120 toast
11130 marmalade
11200 lunch
11210 pasta
11220 apple
20000 TreeB
21000 Food
21100 breakfast
21110 toast
21120 kipper
21130 marmalade
21200 lunch
21210 sandwich
21220 fruit
21221 apple
21222 orange
21223 banana
21300 dinner
21310 lentils
21320 rice
21330 strawberries
Move the first digit to last position and sort
00001 TreeA
00002 TreeB
10001 Food
10002 Food
11001 breakfast
11002 breakfast
11101 boiledegg
11102 toast
11201 toast
11202 kipper
11301 marmalade
11302 marmalade
12001 lunch
12002 lunch
12101 pasta
12102 sandwich
12201 apple
12202 fruit
12212 apple
12222 orange
12232 banana
13002 dinner
13102 lentils
13202 rice
13302 strawberries
Delete the last digit and clean up extras
0000 TreeA
0000 TreeB
1000 Food
1100 breakfast
1110 boiledegg
1110 toast
1120 kipper
1130 marmalade
1200 lunch
1210 pasta
1210 sandwich
1220 fruit
1221 apple
1222 orange
1223 banana
1300 dinner
1310 lentils
1320 rice
1330 strawberries
Renumbering
000 Food
100 breakfast
110 boiledegg
120 toast
130 kipper
140 marmalade
200 lunch
210 pasta
220 sandwich
230 fruit
231 apple
232 orange
233 banana
300 dinner
310 lentils
320 rice
330 strawberries

Bo Jacoby (talk) 11:53, 18 July 2010 (UTC).[reply]

Interesting, but I don't get how you can see what the parents or children of a node are. It may be some node-numbering convention I am not aware of. Thanks. 92.28.244.168 (talk) 19:16, 18 July 2010 (UTC)[reply]

  1. The parent of 233 'banana' is 230 'fruit', the parent of 230 is 200 'lunch', and the parent of 200 is 000 'Food'. So the parent of a node is obtained by zeroing the least significant (nonzero) digit.
  2. The node-numbering convention is published in Bo Jacoby: ORDINAL FRACTIONS - THE ALGEBRA OF DATA, Konferensdokumentation NordDATA 90, vol 3, page 357-367 (Göteborg 11-14 juni 1990). I invented the method in 1980 and have used it ever since. My wikipedia article on ordinal fraction was by my learned fellow editors identified as original research and deleted. (Most of my inventions are reinventions, but apparently not this one). Bo Jacoby (talk) 22:35, 18 July 2010 (UTC).[reply]
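The parent rule described above (zero the least significant nonzero digit) can be written out mechanically; a small sketch treating the node codes as digit strings:

def parent(code):
    # zero the least significant nonzero digit of a code like '233'
    digits = list(code)
    for i in range(len(digits) - 1, -1, -1):
        if digits[i] != '0':
            digits[i] = '0'
            return ''.join(digits)
    return code  # the root's code is already all zeros

print(parent('233'), parent('230'), parent('200'))  # 230 200 000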
Yeah, it would appear that the example worked only because TreeA was missing dinner, not lunch. So, if TreeA and TreeB each have a node with the same value (or name) then are those nodes guaranteed to be at the same depth? If so, it would seem easy to do simultaneous tree traversals, either adding missing items to one tree or creating a third tree with the merged content of both. -- 110.49.193.95 (talk) 20:30, 18 July 2010 (UTC)[reply]

Sorry, they are not guaranteed to be the same depth. They probably will not be. Thanks 92.15.26.94 (talk) 21:58, 18 July 2010 (UTC)[reply]

It seems to me that the simplest method is to pick one tree to be the "new" tree. For each node in the other tree, insert the node in the new tree. As you attempt to insert the node, the standard insert algorithm will look where to insert it. If it finds the node, just don't insert it. Once finished, you may want to balance the new tree using a standard balancing algorithm. -- kainaw 22:03, 18 July 2010 (UTC)[reply]
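That "copy one tree, then insert the other's nodes" idea can be sketched in Python if the trees are stored as nested dictionaries keyed by node name (a representation chosen just for this sketch); with the food example from the question it reproduces TreeCombined:

def merge(tree_a, tree_b):
    # merge two trees given as {name: children-dict}; nothing in either tree is lost
    merged = dict(tree_a)
    for name, subtree in tree_b.items():
        if name in merged:
            merged[name] = merge(merged[name], subtree)  # shared node: merge its children
        else:
            merged[name] = subtree                       # node only in tree_b: add it
    return merged

tree_a = {"Food": {"breakfast": {"boiledegg": {}, "toast": {}, "marmalade": {}},
                   "lunch": {"pasta": {}, "apple": {}}}}
tree_b = {"Food": {"breakfast": {"toast": {}, "kipper": {}, "marmalade": {}},
                   "lunch": {"sandwich": {}, "fruit": {"apple": {}, "orange": {}, "banana": {}}},
                   "dinner": {"lentils": {}, "rice": {}, "strawberries": {}}}}
print(merge(tree_a, tree_b))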

Thanks. I'm thinking of how to copy two similar computer directory trees to make a combined tree where nothing is lost. In that situation you would start at the top of the trees and have no knowledge of what was lower down. I think it's easiest to copy one tree and then add the other to it. The procedure at each directory is not too difficult, but as a non-mathematician and non-programmer it's the traversing of the source tree that I find difficult. 92.28.240.114 (talk) 09:28, 19 July 2010 (UTC)[reply]

Traversing the trees is easy, but have you really thought of what you want to do at each level? What do you do if the same directory (as identified by name and level) in each tree each has a file of the same name, but the files differ in content, or if one is a file and the other is a subdirectory? In a directory tree a subdirectory may have the same name as its parent (or earlier ancestor), and you said earlier that matching nodes (directories, or possibly files as well?) may not be at the same depth in each tree, so how do you know if you really have a match or not? If you are not concerned about such special cases, then a merge is trivial. -- 110.49.193.24 (talk) 11:24, 19 July 2010 (UTC)[reply]

"Traversing trees is easy" - tell me how then please? I have already thought of the various choices you mention. Thanks 92.15.4.196 (talk) 21:22, 19 July 2010 (UTC)[reply]

Mathematical game

Player B chooses a positive integer 1 < n < 20. Then, player A writes down any n consecutive positive integers. Starting with A, players alternate crossing off numbers until there are two left. If the two numbers are relatively prime, A wins; otherwise, B wins. Can one player guarantee victory? --138.110.206.102 (talk) 21:17, 17 July 2010 (UTC)[reply]

Yes, according to Zermelo's theorem (game theory) and the fact that a draw is not possible. I have no idea which one it would be though. ––Alphador (talk) 06:41, 18 July 2010 (UTC)[reply]
By the way, I assume you meant 2 < n < 20.--Alphador (talk) 06:42, 18 July 2010 (UTC)[reply]
Why assume that? Allowing n=2 just gives B a chance to lose very quickly. Algebraist 09:51, 18 July 2010 (UTC)[reply]
Well the game is trivial with n = 2 and there is a very simple winning strategy up to n=7 for A if he has a free choice of where to start his integers. A can still win for n=8 to 11, but has to be careful where to start. My intuition says that A always has a winning strategy, but intuition is sometimes wrong and, even if it is correct, I'm not sure how to prove it other than by trial and error. Has anyone the time to do a full analysis? For some values of n, A can be generous and allow B to go first in crossing off. Dbfirs 07:07, 18 July 2010 (UTC)[reply]
I don't think A needs to be careful where he starts at all. Whatever n integers he picks, he can use a simple strategy to ensure the last two are consecutive (and hence coprime). Algebraist 10:00, 18 July 2010 (UTC)[reply]
I must be missing something as I see a winning strategy for B choosing n=18 (giving B the last move, with both A and B crossing off 8 numbers each). Of A's 18 consecutive positive integers, there will be 9 even numbers and 3 odd multiples of three. In order for A to win, 8 of the 9 and 2 of the 3 must be crossed off, something that A cannot accomplish alone. B's winning strategy is to avoid crossing off multiples of 2 or 3 until all other numbers have been crossed off. With only 6 numbers not multiples of 2 or 3, B will have succeeded in crossing them off with at least two more moves to go. Thus, B's final move will be to cross off one of three numbers which are all either multiples of 2 or 3, and can ensure that the final two numbers belong to one group or the other. -- 117.47.248.208 (talk) 12:21, 18 July 2010 (UTC)A similar strategy should work for all even n≥12.-- 117.47.248.208 (talk) 13:29, 18 July 2010 (UTC)[reply]
I must also be missing something, because I'm pretty sure A is incapable of ensuring that the numbers are consecutive in the case of n=3. Sure, he can ensure that they are coprime, but they may have to be odd n, n+2.
Can you be more explicit about your simple strategy, Algebraist? --173.174.22.246 (talk) 12:27, 18 July 2010 (UTC)[reply]
Remember that A goes first, so it is easy for A to ensure two consecutive numbers in the case of n=3. It seems clear that B can ensure that there are not two consecutive numbers when making the last move (even n>2), but I don't know about odd n. -- 117.47.248.208 (talk) 12:42, 18 July 2010 (UTC)[reply]
Sorry, I meant n=4. Dunno how I made that mistake. --173.174.22.246 (talk) 15:43, 18 July 2010 (UTC)[reply]
I miscalculated, sorry. My scheme works only for n odd. It would work for all n if B had to remove first. Algebraist 18:45, 18 July 2010 (UTC)[reply]
What is your scheme and is there a simple proof that it always works? It seems to me that for n odd, A choosing to cross off the smallest integer that is not part of an isolated pair, when possible, will result in a consecutive pair, but I've not thought of a proof yet. -- 117.47.213.137 (talk) 08:29, 19 July 2010 (UTC)[reply]
I think Algebraist's scheme is simpler than that. For their first move A removes either the lowest number if that is odd, or the highest number if the lowest number is even. This leaves an even number of numbers starting at an even number. Thereafter, if B removes an even number k then A removes k+1, and if B removes an odd number k then A removes k-1. This ensures that after A's last move the final two remaining numbers are consecutive and hence coprime. Gandalf61 (talk) 08:51, 19 July 2010 (UTC)[reply]
So it remains to check whether A always has a winning choice for n = 12, 14, 16 and 18 -- unless Algebraist and Gandalf61 can come up with another clever strategy. Dbfirs 11:10, 19 July 2010 (UTC)[reply]
I must be missing something too just reading this. As far as I can see the argument by 117.47.248.208 above could be applied to n=10 rather than 18 so A ends up either with 3 even numbers and an odd number or two even numbers and two odd numbers divisible by 3. And yet someone above said the start number can be chosen so A wins with n=10. Dmcq (talk) 11:22, 19 July 2010 (UTC)[reply]
With n=10 each player gets to cross off 4, so A should choose the numbers {4, 5, 6, 7, 8, 9, 10, 11, 12, 13} and cross off {6, 8, 10, 12}. (Starting at 4 and crossing off evens larger than 2 is A's winning strategy for even n ≤ 10.) The strategy I described above for B to win does not work for n = 10 because there will be only 5 evens and as few as 1 odd multiple of three, leaving as many as 4 numbers divisible by neither 2 nor 3, so B cannot guarantee them crossed off prior to B's final move. This strategy requires an even n ≥ 12 = 6 * 2, giving at least 2 odd multiples of three and thus at most n/2 - 2 numbers divisible by neither 2 nor 3 compared with n/2 -1 moves for B. -- 110.49.193.24 (talk) 11:46, 19 July 2010 (UTC)[reply]
Sorry yes I miscounted, but it should work for 12 or more so that's the problem done and dusted. Dmcq (talk) 11:55, 19 July 2010 (UTC)[reply]
Just to summarise the answer to the OP's question: A has a winning strategy for all n except 12, 14, 16 and 18, and if B chooses one of these then A should be "generous" and allow B to go first. Dbfirs 12:17, 19 July 2010 (UTC)[reply]
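For anyone who wants to machine-check the small cases, here is a brute-force sketch in Python. It takes the actual list of n consecutive integers, so it can only test particular starting values (it cannot by itself confirm that B wins for every possible start).

from math import gcd
from functools import lru_cache

def a_wins(numbers):
    # True if A (who crosses off first) can force the last two numbers to be coprime
    start = frozenset(numbers)

    @lru_cache(maxsize=None)
    def win(remaining, a_to_move):
        if len(remaining) == 2:
            x, y = tuple(remaining)
            return gcd(x, y) == 1
        moves = (win(remaining - {k}, not a_to_move) for k in remaining)
        return any(moves) if a_to_move else all(moves)

    return win(start, True)

print(a_wins(range(4, 14)))  # n = 10, starting at 4: the discussion above says A wins here
print(a_wins(range(1, 13)))  # n = 12: the strategy above says B wins whatever the start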

July 18

Hyperbolic geometry

It's been proved equiconsistent to Euclidean geometry, but has anyone successfully proved that Euclidean geometry is consistent? --138.110.206.101 (talk) 16:00, 18 July 2010 (UTC)[reply]

Euclid's axioms are rather imprecise. Hilbert rephrased some of them and proved the consistency of his axiomatization (Hilbert's axioms) in 1899. 67.122.211.208 (talk) 02:51, 19 July 2010 (UTC)[reply]
I don't think Hilbert actually proved consistency, but using analytic geometry the consistency of geometry can be regarded as consequence of the consistency of arithmetic.--RDBury (talk) 03:54, 19 July 2010 (UTC)[reply]

Ancient Greek rejection of the fourth power

I think I've read the following somewhere: Since the Greeks thought of numbers geometrically, they viewed exponentiation—in particular, squaring and cubing—as geometrical processes, forming a square or a cube out of a line segment, respectively, and so they rejected the concept of a fourth power (or in fact the multiplication of any four quantities) as geometrically absurd. Heron's formula involves multiplying four lengths together in an intermediate step, which is remedied by the square root at the end to obtain a geometrically meaningful result, but is itself "meaningless". Either the Greeks accepted this as an "imaginary" quantity (similar to the reaction in the 16th century to Tartaglia's method of solving cubic equations, which occasionally requires the extraction of the square root of a negative number), or they rewrote the formula to avoid the problem:

.

This is an interesting piece of mathematical history if true, but now I cannot find a reference for it. Does anyone know of a source for this story? —Bkell (talk) 20:10, 18 July 2010 (UTC)[reply]

Heron's formula is from 300 years after Euclid, and Diophantus, who lived another century later, certainly was quite happy to do arithmetic without reference to geometry and used fourth powers. So perhaps they had given up their inhibitions by then. I believe you're right about the ancient Greeks, and that even the Pythagoreans, who were more arithmetically inclined, liked to make their numbers exist in the real world as figures. My guess is there probably is some study around saying something like what you say. Dmcq (talk) 20:51, 18 July 2010 (UTC)[reply]

It seems rather bigoted to speak of "inhibitions". I don't think Euclid et al. were "inhibited" about fourth powers, nor that they "viewed exponentiation geometrically". Rather, they had a concept of a square whose side had a particular length, and similarly a cube, and the square and the cube had an area and a volume respectively. They didn't have an "inhibition" about fourth powers; rather, the concept simply didn't make sense in the context in which they worked. They had no concept of real number. When we do Euclidean geometry today, we often think of lengths, areas, and volumes as real numbers. Euclid did not have that concept and did not use it in doing geometry. He did have a concept of what it meant to say that the ratio of areas of two polygons is the same as the ratio of lengths of two segments, etc.

Off hand, I don't know exactly what Heron did; I'll see if I can find it...... Michael Hardy (talk) 00:36, 19 July 2010 (UTC)[reply]

Could you be a bit more careful with the 'bigoted' word please? Thanks. Dmcq (talk) 08:57, 19 July 2010 (UTC)[reply]

The Greeks did not use modern notation, such as the expression √(s(s − a)(s − b)(s − c)).

My guess would be that they spoke of a line segment that is the side of a square whose area is the same as that of a rectangle whose sides have lengths s and s − a, etc. Except that they wouldn't have written those last two expressions in modern notation either. Michael Hardy (talk) 00:45, 19 July 2010 (UTC)[reply]

The Greeks had terminology that would be understood by few mathematicians today, so it would be a mistake to underestimate how flexible their terminology was even though they did not use modern formulas. Heath is probably the best authority on Greek mathematics. I have to disagree about them having no concept of real numbers; they had ratios which played much the same role in their math as real numbers do in ours. In fact, if you read Euclid's definition of the equality of ratios you'll find a striking similarity with Dedekind cuts.--RDBury (talk) 03:41, 19 July 2010 (UTC)[reply]
That's not identical to the real numbers. In particular, they didn't multiply them. They could, however, in effect multiply two lengths and get an area. Michael Hardy (talk) 17:07, 19 July 2010 (UTC)[reply]
They did compound ratios to give another ratio. I guess really what they had was dimensional analysis. As a matter of interest, did they have anything like velocity as a ratio of distance over time, i.e. where two different things were used? Dmcq (talk) 17:39, 19 July 2010 (UTC)[reply]
Did the Ancient Greeks use maths to describe physical quantities like velocity? Pretty much all the Ancient Greek maths I've seen has been abstract. --Tango (talk) 19:00, 19 July 2010 (UTC)[reply]
Archimedes did concrete physical mechanics, hydrostatics, and optics. Bo Jacoby (talk) 20:51, 19 July 2010 (UTC).[reply]

July 19

Finding text in all of the universe

Hello, smart people, can any of you please figure out this for me? Say I took every atom in the universe (let's use 10^80 from the article Observable universe) and assigned each of them a random letter or punctuation (let's assume 30 possible characters). For what length of a string of characters would I have a 50% confidence of it being somewhere in the 10^80 characters? For example, could I be 50% sure of my name being in there (22 characters)? What about the lyrics of A Hard Day's Night (song) (1,160 characters) or Hamlet (200,000 characters)? For what length could I be 95% confident of it being in there? Thank you. —Preceding unsigned comment added by 142.104.215.130 (talk) 02:17, 19 July 2010 (UTC)[reply]

Infinite monkey theorem has information on this kind of thing.--RDBury (talk) 03:44, 19 July 2010 (UTC)[reply]
BTW, 10^80 is about 30^54. So I think the most you'd expect to find is about 54 characters. In other words you wouldn't get much farther than "To be or not to be, that is the question."--RDBury (talk) 04:04, 19 July 2010 (UTC)[reply]
Thanks, RDBury. What confidence is there that you would find a 54 character string? I'm guessing that it's more than 50%, because the chance of a die rolling at least one six in six rolls is more than 50%, so if 6 chances at a 1/6 probability give more than 50%, then 30^54 chances at a 1/30^54 probability should give more than 50%. I think. 142.104.215.130 (talk) 04:23, 19 July 2010 (UTC)[reply]

Each n-character string is expected to occur (10^80 - n + 1)/30^n ≈ 30^(54.16 - n) times. Each 54-character string is expected to occur 30^0.16 ≈ 1.7 times, and each 53-character string is expected to occur 30^1.16 ≈ 52.7 times. The confidence that you would find a 54-character string is, by the Poisson distribution, 1 - e^(-1.7) ≈ 82%, and the confidence that you would find a 53-character string is 1 - e^(-52.7) ≈ 100%. So you will most certainly find "to be or not to be, that is the question" somewhere in the long string, and "the slings and arrows of outrageous fortune" somewhere else. Bo Jacoby (talk) 08:56, 19 July 2010 (UTC). [reply]
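The same Poisson estimate is easy to reproduce numerically (it ignores overlaps between occurrences, as above):

from math import exp

N = 10**80        # one character per atom
ALPHABET = 30

def p_found(n):
    # probability that a fixed n-character string occurs at least once
    expected = (N - n + 1) / ALPHABET**n
    return 1 - exp(-expected)

for n in (22, 53, 54, 55):
    print(n, p_found(n))
# prints probabilities of roughly 1.0, 1.0, 0.82 and 0.06, so the 50% threshold
# falls between 54 and 55 characters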

"Fundamental theorem of arithmetic"?

Merely a question about conventional nomenclature:

Does this term refer to

  • Both the existence and the uniqueness of prime factorizations; or
  • the uniqueness of prime factorizations?

Existence seems trivial. Uniqueness is the "hard" part. Somehow I was raised under the impression that what is conventionally called the FTA is about uniqueness. But I sometimes see people writing "by the fundamental theorem of arithmetic, we have...." etc., when existence is what they're talking about. Michael Hardy (talk) 18:33, 19 July 2010 (UTC)[reply]

What makes you say that "existence seems trivial, uniqueness is the hard part"? Uniqueness of factorization is provable already in VTC0 (i.e., it is trivial, as VTC0 is what one needs even to prove that multiplication is total), or rather in TV0, whereas the weakest theory known to prove existence is V1, and that probably cannot be significantly improved (under some cryptographic assumptions on hardness of factoring). Not that I would expect anyone to be familiar with these theories of bounded arithmetic, but the important point is that the latter is stronger than the former; it's something like a theory with the computational power of TC0 (or rather of P) vs. a theory with the computational power of NP.—Emil J. 18:49, 19 July 2010 (UTC)[reply]
On second thought, I'm not sure about the TC0 part, but it's weaker anyway.—Emil J. 18:57, 19 July 2010 (UTC)[reply]
I suspect the OP means "trivial" as in "extremely easy to prove within ordinary mathematics" rather than any reference to formal axiomatic systems. Either a number is prime, or it can be decomposed, so clearly you can continue decomposing until you have all primes. The only thing to prove is that the process eventually terminates, which it clearly does since the product of infinitely many positive integers would be infinite (or, more precisely, would diverge to infinity). As David points out below, technically I've only proven factorisation into irreducibles, but whether or not that is enough depends on your definition of "prime". When we're just talking about integers, a prime is usually defined as a number that has no factors other than 1 and itself, rather than using the more general definition of a prime (p|ab => p|a or p|b). If you want the more general definition, then you need to prove that all irreducible integers are prime, which isn't hard. --Tango (talk) 19:07, 19 July 2010 (UTC)[reply]

Existence of a factorization into Irreducible elements is trivial enough, and prime numbers are often defined as being irreducible elements over the integers, but it seems to me that it's not fair to call it a prime factorization until one knows that the irreducible elements really are prime elements. —David Eppstein (talk) 18:54, 19 July 2010 (UTC)[reply]
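To illustrate the "easy" existence half being discussed here, a trial-division sketch in Python that mirrors the repeated-decomposition argument (it says nothing about uniqueness, which is the part that needs Euclid's lemma):

def prime_factors(n):
    # peel off the smallest factor greater than 1 (necessarily prime) until nothing is left
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]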

Vector Spaces

Let be finite dimensional vector spaces with dimensions respectively. It is easy to show that . Assume that

Given , by definition if and only if Writing in terms of the basis:

and so if and only if for all It follows that

and by the same argument we see that

Thus and since it follows that

  • What happens in the infinite dimensional case? Is it still true that , even when are infinite dimensional vector spaces? If so, then how might one prove such a thing? •• Fly by Night (talk) 19:28, 19 July 2010 (UTC)[reply]
The relevant case is when P=0. To show that R is isomorphic to the direct sum of Q and S=R/Q, one needs to construct a splitting S -> R. This always exists but it does depend on the axiom of choice. The general framework for this is the notion of a projective module over a ring. When the ring is a field the property is automatic (again, modulo AC). Tkuvho (talk) 20:51, 19 July 2010 (UTC)[reply]