
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia


Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.



January 29

Why is the perimeter of a spherical triangle less than 2 pi?

Hi all - my question is pretty much as in the title: why is it that, assuming we take the shorter of the two arc lengths between any two points on S^2, the perimeter of any given spherical triangle is strictly bounded above by 2π? I tried using Gauss-Bonnet (overkill?) but to no avail. I've managed to prove the triangle inequality, which is fairly trivial, if that's any use!

Thanks very much, Delaypoems101 (talk) 00:51, 29 January 2010 (UTC)[reply]

perhaps I'm misreading what you're asking, but it seems to me that (on a sphere with radius of 1), the largest possible triangle would be where all three vertices line up along a diameter of the sphere - the triangle turns into a circle, and would thus have a perimeter of 2π. --Ludwigs2 01:06, 29 January 2010 (UTC)[reply]

I guess my point is that I'm wondering how to actually show that that constitutes the largest possible triangle - for example, a triangle with one point 'A' at the north pole and two antipodal points on the equator would also have perimeter length pi+pi/2+pi/2=2pi, though I would consider these degenerate cases since 'A' isn't really a vertex and I'm not sure 3 collinear points is considered a triangle on the sphere - how would I go about showing that for any side lengths a, b, c, of a non-degenerate spherical triangle, we have a+b+c < 2pi? (Rather than saying 'this case looks like it should be a maximum, and has perimeter 2pi, hence any other triangle probably has a smaller perimeter than that'?) Thanks for all responses, Delaypoems101 (talk) 04:29, 29 January 2010 (UTC)[reply]

Here's one way with coordinates that is not pretty. The distance d between points A and B on the sphere satisfies cos(d) = A.B, and since we are requiring 0 ≤ d ≤ π where cosine is strictly decreasing, if the dot product is smaller then the distance is larger. For arbitrary points A and B, we can let A be (1, 0, 0) and B be (x, y, 0) with y ≥ 0 without loss of generality. For any point C = (u, v, w), let C' = (u, −(v² + w²)^(1/2), 0). A.C = A.C' and B.C ≥ B.C'. So the perimeter of ABC is ≤ that of ABC' and ABC' is a triangle with points on a great circle so has perimeter ≤ 2π. Rckrone (talk) 05:03, 29 January 2010 (UTC)[reply]
Spherical triangles satisfy the triangle inequality because their sides are geodesics. Now for a spherical triangle ABC, consider the triangle A'BC where A' is the antipodal point to A. Then BC <= A'B + A'C = (π - AB) + (π - AC) ... Gandalf61 (talk) 17:20, 29 January 2010 (UTC)[reply]
That is pretty. Rckrone (talk) 20:39, 29 January 2010 (UTC)[reply]
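Spelling out the step left implicit above: since A and A' are antipodal, A'B = π − AB and A'C = π − AC, so the triangle inequality applied to A'BC gives

```latex
BC \le A'B + A'C = (\pi - AB) + (\pi - AC)
\quad\Longrightarrow\quad
AB + AC + BC \le 2\pi,
```

with equality only when A' lies on the arc BC, i.e. in the degenerate case where A, B and C lie on a single great circle.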

Is there a mathematically optimal strategy (in some sense) for unique bid auctions, e.g. a Nash equilibrium strategy? Has any work been done on optimal strategies in real-life unique bid auctions? -- The Anome (talk) 01:02, 29 January 2010 (UTC)[reply]

Is it likely you will participate in one in the near future? I found one reference for this type of auction, a Danish company doing an iPod promotion.--RDBury (talk) 01:51, 29 January 2010 (UTC)[reply]
No, my interest is currently entirely hypothetical; it just seems like a perfect subject for auction theory. There certainly seem to be a lot of these things: see http://www.google.com/search?q=unique+bid+auction -- The Anome (talk) 01:59, 29 January 2010 (UTC)[reply]

Update: An earlier version of the article suggests that there can't be any deterministic optimal strategy: if more than one player uses it, they will all select the same bid, and thus all lose, contradicting the premise. However, I can't see any reason for a probabilistic strategy not to exist. -- The Anome (talk) 02:16, 29 January 2010 (UTC)[reply]

OK, there does seem to be some academic literature on this topic: see for example http://ideas.repec.org/p/cca/wpaper/112.html -- The Anome (talk) 02:20, 29 January 2010 (UTC)[reply]
and this: http://mpra.ub.uni-muenchen.de/4185/ and http://zs.thulb.uni-jena.de/receive/jportal_jparticle_00141919?lang=en, which look like they pretty much answer my question. -- The Anome (talk) 02:28, 29 January 2010 (UTC)[reply]
Thanks for pointing it out, looks to me like a subject that could be developed into a quite interesting article if someone wanted to give it a go. Dmcq (talk) 11:19, 29 January 2010 (UTC)[reply]

Cardinality

I'm taking a computer science course in Discrete Structures. One of the questions is asking about cardinality, which is defined on wikipedia and my text book as the number of elements in a set. Therefore: {x} = 1
{{x}} = 1
{x, {x}} = 2
{x, {x}, {x, {x}}} = 3 but why is this true? shouldn't the answer be 4??
and this one:
{2, {3, 4, 10, {4, 0}, 6, 3}, 6, 12} the answer is listed as 4...but why? That makes no sense.

-- penubag  (talk) 08:51, 29 January 2010 (UTC)[reply]

I think what you're really confused about is the notion of "an element of a set". Given some set A and an object x, either x is an element of A (written x ∈ A) or it is not (x ∉ A). The objects in question can themselves be sets - given sets A and B, it is meaningful to ask whether A ∈ B.
The relation ∈ is not transitive. From x ∈ A and A ∈ B it does not follow that x ∈ B.
You can specify a set by describing exactly which objects are its elements. For example, there is a unique set which includes 3, includes 5 and does not include any other object. This set is denoted {3,5}.
Similarly, there is a unique set which includes {3,5} but does not include any other object. This set is denoted { {3,5} }. Note that 3 ∈ {3,5} and {3,5} ∈ { {3,5} }, but 3 ∉ { {3,5} } - when I defined { {3,5} } I explicitly said that any object which is not {3,5} is not an element of { {3,5} }. Obviously 3 is not the same as {3,5}, so 3 is not an element of { {3,5} }.
Now we can go back to cardinality, which is how many different objects are elements of a set. There is exactly one object which is an element of { {3,5} } - this object is {3,5}. Thus, the cardinality of { {3,5} } is 1. Likewise, there are exactly 3 objects in {x, {x}, {x, {x}}}, which are x, {x} and {x,{x}}. There are exactly 4 objects in {2, {3, 4, 10, {4, 0}, 6, 3}, 6, 12} - one of them is 2, another one is {3, 4, 10, {4, 0}, 6, 3}, another is 6 and another is 12. -- Meni Rosenfeld (talk) 09:13, 29 January 2010 (UTC)[reply]
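If it helps to see the same counting done mechanically, here is a small Python sketch of the examples above; the name x is just a placeholder string, and frozenset is used so that sets can themselves be elements of sets:

```python
x = "x"                                   # stand-in for the abstract object x
s1 = frozenset({x})                       # {x}
s2 = frozenset({s1})                      # {{x}}
s3 = frozenset({x, s1})                   # {x, {x}}
s4 = frozenset({x, s1, s3})               # {x, {x}, {x, {x}}}
print(len(s1), len(s2), len(s3), len(s4))                 # 1 1 2 3

inner = frozenset({3, 4, 10, frozenset({4, 0}), 6, 3})    # the duplicate 3 collapses
outer = frozenset({2, inner, 6, 12})
print(len(outer))                         # 4: its elements are 2, inner, 6 and 12
```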
I've been reading up on cardinality for an hour but you explained it very concisely! Thanks very much, I understand it now. -- penubag  (talk) 09:22, 29 January 2010 (UTC)[reply]

ratio inequality

At: http://en.wikipedia.org/wiki/Demographics_of_Australia#Population_growth_rate There appears to be a numerical error in the following section:


As of the end of June 2009 the population growth rate was 2.1%.[7] This rate was based on estimates of:[8]

   * one birth every 1 minute and 45 seconds,
   * one death every 3 minutes and 40 seconds,
   * a net gain of one international migrant every 1 minutes and 51 seconds leading to
   * an overall total population increase of one person every 1 minutes and 11 seconds.

In 2009 the estimated rates were:

   * Birth rate - 12.47 births/1,000 population (Rank 164)
   * Mortality rate - 6.68 deaths/1,000 population (Rank 146)
   * Net migration rate - 6.23 migrant(s)/1,000 population. (Rank 15)

The ratio between: one birth every 1 minute and 45 seconds and a net gain of one international migrant every 1 minute and 51 seconds is approximately 1:1.06

whereas the ratio between: Birth rate - 12.47 births/1,000 and Net migration rate - 6.23 migrant(s)/1,000 is approximately 1:0.5

Should these ratios not be equal?

I am unable to discern which figures are correct and which are wrong, so am in no position to edit the article, but would like an answer to satisfy my interpretation of the figures or show why I am wrong to expect actual or near equality between the two ratios. —Preceding unsigned comment added by Briandcjones (talkcontribs) 10:55, 29 January 2010 (UTC)[reply]

I think you're right, the figures don't seem consistent. One set is coming from the Australian government and the other is coming from the CIA so maybe there is a difference in the way each organization defines its terms. Reporting numbers from different sources side by side as is done in the article is bound to lead to confusion. Having worked on database reports many years I can say from experience that there are many ways that figures can be right but appear wrong because of differences in terminology.--RDBury (talk) 16:17, 29 January 2010 (UTC)[reply]
Note that in the first case, the rate of migration is roughly equal to the rate of birth (1:51 and 1:45), whereas in the second case the net migration rate is roughly half of the birth rate (6.23 to 12.47). this is probably due to different forms or criteria of measurement - estimations of illegals, types of immigration papers, accommodations for exit and re-entry, etc. you'd have to look into the methodology behind the statistics more closely to figure out the difference. --Ludwigs2 16:31, 29 January 2010 (UTC)[reply]

The page Talk:Demographics_of_Australia is the proper place for your question, I think. Bo Jacoby (talk) 16:41, 29 January 2010 (UTC).[reply]

What algorithm?

Basically, say I had a list of characters in a production, and knew which scenes each character appeared in. Then say I needed to assign actors to characters, with each character only needing one actor, and each actor being able to play any number of characters as long as none of their characters share scenes with each other. Assuming none of the actors need any specific requirements to play each character, how would I find the minimum number of actors required? (Note: I only know discrete maths at A-Level standard - I tried looking at stuff about P and NP and didn't understand, so if you could make your explanation as simple as possible, I'd appreciate it.) Thanks! Anthrcer (click to talk to me) 12:17, 29 January 2010 (UTC)[reply]

Would graph coloring do what you want, with characters at each vertex and 'appears in the same scene' as the edges ? If so there's a section on algorithms.--JohnBlackburnewordsdeeds 12:54, 29 January 2010 (UTC)[reply]
Graph coloring does seem to be the simplest way to solve this. Look into chromatic polynomial, which can be computed by hand for smallish graphs once you learn the recurrence relation for it. — Carl (CBM · talk) 13:00, 29 January 2010 (UTC)[reply]

I see that this method would be what I need, but I really don't understand the articles, and the Simple English Wikipedia doesn't have an article specifically about graph colouring. Can the method be explained in words I can understand? Or can an appropriate website be linked? Anthrcer (click to talk to me) 20:27, 29 January 2010 (UTC) (edited by Anthrcer (click to talk to me) 22:08, 29 January 2010 (UTC))[reply]

Draw a bunch of circles on a page, each circle representing a character from the play. Then for every pair of characters that appears on stage at the same time, draw a line or arc between the circles corresponding to those characters. That is a graph (mathematics). Next, color the circles so that no two connected circles are the same color. That is graph coloring. Each color corresponds to an actor. A trivial coloring is to make each circle a different color, but you want to do it in fewer colors. The graph coloring problem (to decide whether a graph is colorable with k colors for arbitrary k) is NP-complete, which means you can solve it by brute force (enumerate all possible colorings and see whether any fit the requirement), but no computationally tractable method is known in general. For an actual play with not too many characters you might be able to use that approach. Otherwise greedy coloring is a reasonable approximation, though it might not always get you the absolute best solution. A book like CLRS will explain this stuff in detail. 66.127.55.192 (talk) 22:10, 29 January 2010 (UTC)[reply]
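As a rough illustration of the greedy approach mentioned above, here is a short Python sketch; it is not guaranteed to find the true minimum number of actors, and the character names and conflicts dictionary are made up for the example:

```python
def greedy_colouring(conflicts):
    """conflicts: dict mapping each character to the set of characters sharing a scene with it."""
    colour = {}                                       # character -> actor number
    for character in sorted(conflicts, key=lambda c: -len(conflicts[c])):
        used = {colour[other] for other in conflicts[character] if other in colour}
        actor = 0
        while actor in used:                          # smallest actor number not already
            actor += 1                                # taken by a conflicting character
        colour[character] = actor
    return colour

# Hypothetical cast: Alice shares scenes with Bob and with Carol; Bob and Carol never meet.
conflicts = {"Alice": {"Bob", "Carol"}, "Bob": {"Alice"}, "Carol": {"Alice"}}
print(greedy_colouring(conflicts))    # e.g. {'Alice': 0, 'Bob': 1, 'Carol': 1} -> 2 actors
```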

I see. Thanks for helping! Anthrcer (click to talk to me) 13:08, 30 January 2010 (UTC)[reply]

Proof

If you were asked to prove that the square root of 2 is an irrational number with proofs, how would it be done? 198.188.150.134 (talk) 13:02, 29 January 2010 (UTC)[reply]

Probably it would look like one of Square root of 2#Proofs of irrationality... --CiaPan (talk) 13:07, 29 January 2010 (UTC)[reply]
Assume that there exist two positive coprime integers, a and b, such that a² = 2b². Now compute modulo 3. Then either a ≡ 0 or a ≡ 1 or a ≡ 2, and consequently a² ≡ 0² ≡ 0 or a² ≡ 1² ≡ 1 or a² ≡ 2² ≡ 4 ≡ 1, but a² is not congruent with 2. Similarly 2b² is not congruent with 1. So the only possibility is that b ≡ a ≡ 0, which is incompatible with the assumption. Bo Jacoby (talk) 15:00, 29 January 2010 (UTC).[reply]
Strictly speaking, most of the standard proofs only show that there is no rational number whose square is 2. Some more work is needed to construct the reals and show that they contain such a number. AndrewWTaylor (talk) 18:06, 29 January 2010 (UTC)[reply]
If you mean to ask how one would prove that the square root of 2 is irrational (i.e. the technique), I would say that proof by contradiction is typically employed for such a proof. Thus the first step would be to assume that √2 = a/b for coprime integers a and b (if √2 can be written as a fraction, it can of course be written as a fraction in "lowest terms"). Instinct and intuition then suggest to square both sides of the equation and "somehow end up with a contradiction"; because such a contradiction is encountered, the initial hypothesis, namely that √2 is rational, must be false. Thus, the square root of 2 is irrational, as desired. --PST 12:23, 30 January 2010 (UTC)[reply]
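For reference, one standard version of the contradiction sketched above (the parity argument, rather than Bo Jacoby's mod-3 variant) runs:

```latex
\sqrt{2} = \tfrac{a}{b},\ \gcd(a,b)=1
\;\Rightarrow\; a^2 = 2b^2
\;\Rightarrow\; 2 \mid a,\ \text{say } a = 2k
\;\Rightarrow\; 4k^2 = 2b^2
\;\Rightarrow\; b^2 = 2k^2
\;\Rightarrow\; 2 \mid b,
```

contradicting gcd(a, b) = 1.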

Derivative question

Hi, I'm having a little trouble finishing this one differentiation problem I'm trying to do. I have right now:

y^2 = sqrt(b^2- (b^2x/a^2))

for positive y values only. Thanks, --Fbv65edeltc // 20:00, 29 January 2010 (UTC)[reply]

that's
one way is to differentiate both sides wrt x to give I think

--JohnBlackburnewordsdeeds 20:12, 29 January 2010 (UTC)[reply]

You lost a minus sign: . Algebraist 20:26, 29 January 2010 (UTC)[reply]

The function y is implicitly defined anyway, so why not get rid of the square root and the fraction: , and differentiate: Bo Jacoby (talk) 20:47, 29 January 2010 (UTC).[reply]

Oops, yes, and I agree that's even neater. --JohnBlackburnewordsdeeds 20:51, 29 January 2010 (UTC)[reply]
Or alternatively, since you are assuming y is positive, simply note . No need for implicit differentiation. Nm420 (talk) 20:56, 29 January 2010 (UTC)[reply]

Solving equations by algebra alone is often hard and sometimes impossible. Differentiation of polynomials is easy. So if you are not explicitly requested to provide an explicit expression, implicit differentiation is sufficient. For example, the function y = f(x), defined implicitly by the equation y5+y = x, cannot be expressed explicitly, but the equation can easily be differentiated. Bo Jacoby (talk) 16:36, 30 January 2010 (UTC)[reply]

Great, thanks for all your help! I appreciate it. --Fbv65edeltc // 06:17, 31 January 2010 (UTC)[reply]

We know the probability is 1/3 if you don't switch your guess and 50% if you do, so what's wrong with this reasoning: let's look at what happens from the perspective of what was under your original guess. In 1/3 of the cases there was a car under it; if you switch your guess when Monty has removed one goat from the possibilities, you will be guaranteed to switch to the other goat. But in 2/3 of the cases there is a goat under your original door: in these cases, Monty removed the other one, leaving you guaranteed to switch to a car. Therefore, from this perspective, 2/3 of the time you will get a car with the switching strategy. (But this is false; in fact it is only 50% of the time - see Monty Hall problem.) So, what's wrong with the line of reasoning I outlined above? Where is the logical error? Thank you. 80.187.105.225 (talk) 22:54, 29 January 2010 (UTC)[reply]

No, the probability of getting the car if you switch is 2/3 - 50% is the popular-but-incorrect solution. Read the article page more carefully; they do a pretty good job with it. --Ludwigs2 23:06, 29 January 2010 (UTC)[reply]
The easy way to understand why switching is 2/3 is this: If you don't switch, you are stuck with 1 in 3 doors. If you do switch, you get both of the other doors. Just because you know what is behind one of them doesn't mean you don't get it. So, switching gives you 2 in 3 odds of winning. -- kainaw 00:02, 30 January 2010 (UTC)[reply]
Hey, clever — nice one, kainaw. That's easier to understand than any explanation I've heard. Comet Tuttle (talk) 01:01, 30 January 2010 (UTC)[reply]
I feel the need to point out how dated this problem is. Given the current economy and the price of gas, many people nowadays would be aiming to get the goat. --Ludwigs2 02:37, 30 January 2010 (UTC)[reply]
LOL! Baccyak4H (Yak!) 04:53, 30 January 2010 (UTC)[reply]
The 50:50 paradox isn't so confusing. It would be easy to understand if the guest could choose "one door, or two doors", alternatively. Double chance for a pair of 2 doors: One unavoidable goat there indeed, but the second goat there in just 1/3 only, and the car there after all in considerable 2/3, really so privileged! Of course, the car never is guaranteed, but significantly double chance of 2/3. Just easy! In effect that's the fundament of the game.
Confusion results right from opening these two doors not simultaneously. That's it.
Everyone knows: Inevitably given in each and every pair of two doors is at least one goat there, this pair of two doors having double chance of 2/3, though. Double chance! Opening both doors simultaneously and removing a given goat there will only in 1/3 leave the second goat there, but in sizeable 2/3 it will leave the one and only car there. Clearly arranged. But: Opening those two doors not simultaneously and showing only the one unavoidably given goat within this pair of two doors, leaving the "privileged partner-door" still closed, gives birth to confusion. This confusion could easily be rectified, if facts were represented clearly. And if math would stop nebulizing :-)   Kind regards   Gerhardvalentin (talk) 11:38, 30 January 2010 (UTC)[reply]
The even easier way to understand it (for me) is that the host has to pick a goat. Therefore the likelier choice (you pick a goat, 2/3) reduces to 50:50 as the host has picked the other goat for you. So in the likelier case, the poor host eliminates all uncertainty for you. If you're "unlucky" enough to pick the car, of course, it doesn't really help. x42bn6 Talk Mess 16:18, 4 February 2010 (UTC)[reply]
You are right, any door having a risk of 2/3 each, both refused doors must contain at least one unavoidable goat. A second goat there never is "unavoidably given", but the risk for the second goat there is still 1/3. And you do not know which one of those two doors contains the "given" goat (risk 3/3, chance 0/3) and which one is the "privileged partner-door" with a risk of only 1/3 but with a chance of 2/3. Gerhardvalentin (talk) 19:58, 4 February 2010 (UTC)[reply]
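For anyone who prefers an empirical check, here is a quick Monte Carlo sketch of the game discussed above (assuming the host always opens a goat door the contestant didn't pick):

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # close to 0.333
print("switch:", play(switch=True))    # close to 0.667
```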


January 30

percentages, round-offs and number-based systems

Please, I would like to know how any of the topics above can help in the world of business, how they can help solve problems in sports, health or fisheries, and their economic importance to a country. —Preceding unsigned comment added by Ericasante (talkcontribs) 11:25, 30 January 2010 (UTC)[reply]

Perhaps you should ask this question at Wikipedia:Reference desk/Miscellaneous; this question does not really require much mathematics to answer, depending on the sorts of responses you expect (you will receive responses here, of course, but you will probably receive a greater number of responses at the miscellaneous reference desk). In the context of this question you might like to see financial mathematics, but in any case, most school mathematics textbooks should give you some indication of how to use these concepts in the "real world". As for the much deeper mathematics in economics, you might like to use calculus, or even the theory of manifolds. Thus you should realize that the mathematics in economics is far deeper than "percentages, round-offs and number-based systems"; it can involve, for instance, Ergodic theory and Stochastic calculus. --PST 12:04, 30 January 2010 (UTC)[reply]

could there be ANY change made in the rules of logic without immediate inconsistency?

Could there be any, even the slightest, change in the fundamental laws of logic without the new logic being totally inconsistent - i.e. one in which you can prove anything and its opposite? Thanks. 84.153.213.154 (talk) 14:07, 30 January 2010 (UTC)[reply]

Yes. There are many different formal logical systems. Algebraist 14:10, 30 January 2010 (UTC)[reply]
See Non-classical logic. Buddy431 (talk) 16:27, 30 January 2010 (UTC)[reply]
Doublethink and Cognitive dissonance indicate being able to prove anything and its opposite should pose no problems :) Dmcq (talk) 16:44, 30 January 2010 (UTC)[reply]

Hexadecimal

Why are large hexadecimal numbers converted into negative decimal numbers on a calculator?--Mikespedia (talk) 14:22, 30 January 2010 (UTC)[reply]

Because of a bug in the calculator, probably related to an overflow when the number gets bigger than can be stored in the number of bits available. What calculator is it? Can you give an example of a hexadecimal number and the negative decimal number it gets converted to? --Tango (talk) 14:44, 30 January 2010 (UTC)[reply]
It could be it's using a particular signed integer representation. E.g. with 32-bit signed integers the largest number that can be represented is 2^31 - 1, and the smallest is -2^31. But if you represent the numbers as hex you get
2^31 - 1 = 0x7fffffff
-2^31 = 0x80000000
i.e. hex numbers larger than 0x7fffffff represent negative numbers. There's a little info here, more here (it uses binary but the same can be applied to hex)--JohnBlackburnewordsdeeds 15:34, 30 January 2010 (UTC)[reply]
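A small Python sketch of the signed interpretation described above; to_signed_32 is just an illustrative helper, mimicking what a 32-bit two's-complement calculator presumably does:

```python
def to_signed_32(value):
    """Interpret a 32-bit pattern as a two's-complement signed integer."""
    value &= 0xFFFFFFFF                      # keep only the low 32 bits
    return value - 0x100000000 if value >= 0x80000000 else value

for h in (0x7FFFFFFF, 0x80000000, 0xFFFFFFFF):
    print(f"{h:#010x}  unsigned = {h}  signed = {to_signed_32(h)}")
# 0x7fffffff  unsigned = 2147483647  signed = 2147483647
# 0x80000000  unsigned = 2147483648  signed = -2147483648
# 0xffffffff  unsigned = 4294967295  signed = -1
```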

See Calculator#Mechanical_calculators_reach_their_zenith. Old mechanical calculators had a fixed number of decimal digit positions but no sign. Zero was represented by 00000000 and one by 00000001. Subtracting one from zero gave 99999999, which thus represented minus one. So negative numbers were represented in the same way as large numbers. This convention was inherited by electronic computers, which have a fixed number of binary digit positions but no sign. Bo Jacoby (talk) 17:45, 30 January 2010 (UTC).[reply]

The hex calculators are probably using Two's-complement arithmetic since that's what most computers use. Of course that means they have to operate at a specific (maybe user selectable) word length. 66.127.55.192 (talk) 20:17, 30 January 2010 (UTC)[reply]


January 31

Normal vectors

For homework I have this problem: "You start at the point (1,0) and walk along the vector v = 4i + 2j. If you want to end up on the point (1,6) and only make 1 turn of 90 degrees, what are the coordinates of the turning point?" I know that the dot product of the two vectors must be 0, so I substituted y=(1/4)x+1 into vectors a and b to get a·b = (17/16)x² - (9/4)x. Solving for x I got (36/17,26/17) for the point to turn, which isn't correct. What am I doing wrong? 24.116.192.195 (talk) 00:37, 31 January 2010 (UTC)[reply]

I can't make any sense of what you've done. Where did y=(1/4)x+1 come from? You're starting at (1,0), then walking in the direction (4,2) for a while, then walking in the orthogonal direction (i.e. the direction (-2,4)) for a while. So you need to solve (1,6)=(1,0)+a(4,2)+b(-2,4), which has a unique solution. Algebraist 00:42, 31 January 2010 (UTC)[reply]
This seems a problem of unnecessary complication, introducing a vector for no good reason. If the wording was "You start ... walk along the straight line of slope 1/2 ...", you'd never think of a dot product and (I hope) get a solution in such a way as Algebraist has shown.→86.164.73.234 (talk) 01:07, 31 January 2010 (UTC)[reply]
Perhaps the student has not met the "product of gradients = -1" rule, and so needs to find a vector perpendicular to v = 4i + 2j using the scalar product. I agree though that Algebraist's method is much simpler than trying to use the vector equation of a line. Dbfirs 00:23, 2 February 2010 (UTC)[reply]
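For what it's worth, carrying Algebraist's equation through (a sketch of the remaining arithmetic):

```latex
(1,6) = (1,0) + a(4,2) + b(-2,4)
\;\Longrightarrow\;
\begin{cases} 4a - 2b = 0 \\ 2a + 4b = 6 \end{cases}
\;\Longrightarrow\; a = \tfrac{3}{5},\ b = \tfrac{6}{5},
```

so the turning point is (1,0) + (3/5)(4,2) = (17/5, 6/5).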

Rhombicosidodecahedron

Someone told me that you can't go round a rhombicosidodecahedron to visit every face once, but you can with any other Archimedean solid. Is it right? 4 T C 03:32, 31 January 2010 (UTC)[reply]

It's true there's not a way to do it for a rhombicosidodecahedron. If you look at the diagram with the colored faces in the article, the blue and red faces are only adjacent to yellow faces and vice versa, so every path has to alternate yellow and blue/red. But there are 32 blue/reds and only 30 yellows so that doesn't work. I don't know about the other Archimedean solids. Rckrone (talk) 07:09, 31 January 2010 (UTC)[reply]
Actually after a quick look it seems that icosidodecahedron, truncated dodecahedron, truncated cube and rhombicuboctahedron don't work either by the same argument. There may be others also. For each of these I assume you mean that you're only allowed to visit each face exactly once. Rckrone (talk) 07:13, 31 January 2010 (UTC)[reply]

Strange use of log

The last sentence of Hand sanitizer#Hand alcohol says:

Alcohol rub sanitizers containing 70% alcohol kill 3.5 log10 (99.9%) of the bacteria on hands 30 seconds after application and 4 to 5 log10 (99.99 to 99.999%) of the bacteria on hands 1 minute after application.

This is a very strange notation to me. It seems to be similar to nines (engineering). Is it actually in common usage in any field, or should it be changed to something more common? —Bkell (talk) 10:00, 31 January 2010 (UTC)[reply]

The reference quoted talked about a reduction of 3 log in bacteria meaning 99.9% killed or 0.1% remaining, it didn't put in the subscript 10 and I don't see that the 10 is needed. Dmcq (talk) 13:54, 31 January 2010 (UTC)[reply]
Correct or not, the notation is inappropriate for an article of that type. Encyclopedia articles are supposed to help people understand, not confuse them with arcane notation.--RDBury (talk) 14:55, 31 January 2010 (UTC)[reply]
I'm not so sure. I'd have thought wikipedia should be just reporting things as they are rather than trying to impose its own ideas of what measurements are allowable. Dmcq (talk) 16:41, 31 January 2010 (UTC)[reply]
... but we should not reproduce faulty notation. (I presume that the original document made more sense?) I see that the offending nonsense has now been removed. Dbfirs 09:43, 1 February 2010 (UTC)[reply]
Why not? It isn't Wikipedia's job to decide on correct notation. I'd have thought it should at least be documented in that article if it is a commonly used notation there, after all it measures what the article is all about. That's the sort of the thing the original document said, a reduction of 3 log in the bacteria. Dmcq (talk) 11:53, 1 February 2010 (UTC)[reply]
It depends on whether we view Hand sanitizer as primarily a medical article, or as primarily a non-technical "general interest" article. In the latter case, we should use the sort of standard English that we would see in a professional newspaper. In the former case, we should use the standard terminology from medical papers (whatever that is). If the medical literature usually says "a 5 log reduction" then there is no reason to avoid it here just because it sounds strange to mathematicians. — Carl (CBM · talk) 12:10, 1 February 2010 (UTC)[reply]
If we use a non-standard notation in a general article, we should explain its meaning, just as we should provide a translation if we quote a foreign language in an article written in English. The expression "a reduction of 3 log" has no meaning in mathematics or general science. Perhaps the medical article explains elsewhere that the phrase means a reduction of 3 on a (base 10) logarithmic scale graphing the number of bacteria? Dbfirs 00:17, 2 February 2010 (UTC)[reply]
Dmcq is correct in saying that medical sites use "3 log" in this context (though some use "3-log" to distinguish the usage from the usual scientific meaning). (Personally, I would prefer to see "3log" for this alternative meaning of "log", but this is unlikely to be taken up, so I'll forget it.) I've added a footnote explaining the medical notation. Dbfirs 20:08, 2 February 2010 (UTC)[reply]

Function harmonic on a strip

Hi all,

I've been trying to finish off this problem but I'm not quite sure where I'm going wrong - if anyone could give me any suggestions for how to get going, I'm happy to finish it all off myself, just need to know where I'm headed first!

This is the problem:

Let g(z) = exp(πz/a), h(z) = sin(πz/a) .

Show that g maps onto

and h maps onto .

Find a conformal map of onto . (Done up to this point!) Find a function v which is harmonic on the strip −a/2 < x < a/2, y > 0 with limiting values on the boundaries given by: v = 0 on parts of the boundary in the left half plane (x < 0) and v = 1 on parts of the boundary in the right half plane. Is there only one such function?


Right, so the first few parts were fine, and I did what they obviously wanted and used the product for the conformal map (product of conformal maps conformal etc) - now I know that if k(z) is conformal, it is also holomorphic, and the real part of any holomorphic function is harmonic, so then I tried to take the real part of , where log is on the principal branch - I got something like (IIRC) , which at the boundaries x=±a/2 is equal to , obviously not what we want - for one thing there's a y-dependence which if I understand the question correctly shouldn't be there. My only thought is that perhaps I shouldn't be taking the principal branch of log, since it looks like moving along the y=0 part of the boundary along the x-axis we have a discontinuity (0->1) at 0, and I can't see how else I might solve that part of the problem. Following that, how on earth would I be expected to confirm the uniqueness of such a function? I at least know what I'm meant to do, roughly, for the rest of the question, but when it comes to the uniqueness I'm totally stuck.

Thanks very much in advance, 82.6.96.22 (talk) 14:52, 31 January 2010 (UTC)[reply]

Try using arg instead of log (i.e. Im instead of Re of the log). As for uniqueness, I seem to remember there are a bunch of theorems on the uniqueness of harmonic functions, so maybe one of them applies here. Not sure though, need to think about it more.--RDBury (talk) 15:24, 31 January 2010 (UTC)[reply]
Actually Re(sin(πz/a) would be 0 on the boundary, so the answer to the uniqueness question is no.--RDBury (talk) 15:29, 31 January 2010 (UTC)[reply]
Is it? I can see why it would be 0 when x=±a/2, but what about when y=0? Surely we get a cosh term in y which will never be '0', and sin(pi x/a) is only 0 when x=±na, not ±a/2? Unless I'm missing something here... 82.6.96.22 (talk) 07:42, 1 February 2010 (UTC)[reply]

Intersecting cylinders

I've come across these results: the common volume of two cylinders of unit diameter whose axes intersect at right angles is 2/3, while that of three such cylinders whose axes intersect mutually at right angles is 2-√2. How can these be derived? The first one had a footnote saying that the result can be found without the use of calculus, rather suggesting that the second result cannot.→86.132.162.4 (talk) 15:08, 31 January 2010 (UTC)[reply]

There's a page about the shapes here - Steinmetz solid. In the first case how I'd do it is consider the sphere diameter 1 at the centre of the intersection. This has volume π/6, and by considering slices of the shape and sphere the ratio of the volumes is the ratio of the area of the circle and square. It should be possible to do the other shape in the same way, except in pieces, perhaps in three directions, which is where the sqrt 2 comes in, though it'd be a bit more work. --JohnBlackburnewordsdeeds 15:19, 31 January 2010 (UTC)[reply]
The MathWorld page linked to from that article has derivations for both and gives formulas for a couple more complex intersections as well.--RDBury (talk) 15:38, 31 January 2010 (UTC)[reply]
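Making the slicing argument explicit for the two-cylinder case (unit diameter, so radius 1/2): the horizontal cross-section at height z is a square of side 2√(1/4 − z²), so

```latex
V \;=\; \int_{-1/2}^{1/2} \left(2\sqrt{\tfrac14 - z^2}\right)^{2} dz
  \;=\; \int_{-1/2}^{1/2} \left(1 - 4z^2\right) dz
  \;=\; \left[z - \tfrac{4z^3}{3}\right]_{-1/2}^{1/2}
  \;=\; \tfrac{2}{3}.
```

Equivalently, each square slice has 4/π times the area of the corresponding circular slice of the inscribed sphere, so V = (4/π)(π/6) = 2/3, which is the calculus-free route.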

Thanks - helped to know the name of the shapes.→86.132.162.4 (talk) 20:08, 31 January 2010 (UTC)[reply]

I already know what the faces, edges and vertices of a 3D shape are, but what do I need to do to the number of edges on any 3D shape (Except the cylinder) to get the same number as adding the number of faces and vertices together? Chevymontecarlo (talk) 16:16, 31 January 2010 (UTC)[reply]

Euler characteristic -- SGBailey (talk) 16:26, 31 January 2010 (UTC)[reply]

Thanks. Your link eventually led me to the answer. Chevymontecarlo (talk) 16:43, 31 January 2010 (UTC)[reply]

February 1

welcome to the palindrome day 01022010 --pma 01:33, 1 February 2010 (UTC)[reply]

What's the standard of proof to tell if an operation on a vector yields a vector?

I know that just returning a triplet of components doesn't mean an operation has yielded a vector. So given some operation @ where A @ B yields a triplet of numbers Cx,Cy,Cz, what's a property these three numbers will have only if C is a vector?71.161.63.23 (talk) 02:29, 1 February 2010 (UTC)[reply]

I don't understand. What is the difference between three components and a vector in three-dimensional space? You can usually treat them as equivalent. —Bkell (talk) 04:16, 1 February 2010 (UTC)[reply]
It seems the OP is using the definition of "vector" described here. --Tango (talk) 11:38, 1 February 2010 (UTC)[reply]

The question seems somewhat unclear, but here's a guess as to what is meant: a triple of numbers represents a vector relative to some basis (or "coordinate system"). Suppose the input is one or more triples of scalars, and the output is a triple of scalars. Suppose you change the coordinate system (or the "basis") and then put in the same input but in the new coordinate system. Look at the output. Does it or doesn't it represent the same vector that you got before, but in the new coordinate system?

One could ask this about cross-products, for example. For those, the answer would be a bit of algebraic definition chasing. Before going into that in detail, maybe we should await further clarification of the question. Michael Hardy (talk) 04:22, 1 February 2010 (UTC)[reply]

My best-guess interpretation of the question agrees with Michael's. Another way to look at it is to say that the result of an operation is a vector if and only if the operator commutes with the linear transformations L(Ax,Ay,Az) that represent changes of basis when they act on the co-ordinates of vectors. In other words L(A @ B) = L(A) @ L(B).
So the result of A @ B = (Ax + Bx, Ay + By, Az + Bz) produces the vector A+B, but the result of A @ B = (AxBx, AyBy, AzBz) is not a vector because it depends on the choice of coordinate system. The 3D cross-product is a special (and somewhat confusing) case because it commutes with linear transformations that have positive determinant but anticommutes with transformations that have negative determinant. The result of the 3D cross-product is called a pseudo-vector. Gandalf61 (talk) 10:03, 1 February 2010 (UTC)[reply]
This is the OP. I was asking because I was reading chapter one of Richard Feynman's book Six Not-So-Easy Pieces where he says, "Suppose we multiply a vector by a number α, what does this mean? We define it to mean a new vector whose components are αax,αay, and αaz. We leave it as a problem for the student to prove that it is a vector." Well, in order to prove something you need to know how to prove it. —Preceding unsigned comment added by 20.137.18.50 (talk) 13:26, 1 February 2010 (UTC)[reply]
Indeed, and Feynman's point here was to get his students to think about just what it means for something to be a physically meaningful vector. The key quality is that a physical vector should not depend on your arbitrary choice of co-ordinate system. So if an object C is defined as the result of applying operation A to vector B, then if we change our co-ordinate system from P to Q, apply operation A in co-ordinate system Q, then change co-ordinate system back from Q to P, we should get the same result. Re-arranging this gives:
(Change co-ordinate system from P to Q)(Apply operation A) = (Apply operation A)(Change co-ordinate system from P to Q)
In other words, operation A and the change of co-ordinate system commute. Multiplying a vector's co-ordinates by α satisfies this rule, so it is a physically meaningful vector operation. Squaring a vector's co-ordinates does not satisfy this rule, so this is not a physically meaningful vector operation. Gandalf61 (talk) 13:47, 1 February 2010 (UTC)[reply]
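A numerical way to see the same criterion (a sketch; the rotation and the two example operations are just illustrative choices, not from the thread):

```python
import numpy as np

def rotation_z(t):
    # An orthonormal change of basis: rotation by angle t about the z-axis.
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

A = np.array([1.0, 2.0, 3.0])
L = rotation_z(0.7)

def scale(v):
    return 2.5 * v        # multiply each component by a fixed number alpha

def square(v):
    return v ** 2         # square each component

# "Transform then operate" should equal "operate then transform" for a genuine vector.
print(np.allclose(scale(L @ A), L @ scale(A)))    # True  -> behaves like a vector
print(np.allclose(square(L @ A), L @ square(A)))  # False -> depends on the coordinates
```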
Right. This is another example of the great caution with which one has to read physics texts.
To expand on the previous example: let V be any 2 dimensional vector space and let a and b be fixed, independent vectors in V, which form an ordered basis. Consider the map f from V to V that proceeds by writing a vector in coordinates according to this ordered basis, squaring each of those coordinates, and then finding the vector in V that has those new coordinates. Certainly f is a well defined map from V to V, for each vector x in V, the result f(x) is (trivially) another vector in V. Also, it makes no difference "in which coordinate system we apply f" - because the coordinates of a vector in the basis {a,b} are the same no matter what other coordinate system one might consider. — Carl (CBM · talk) 14:22, 1 February 2010 (UTC)[reply]

square root of x equals negative one

Solve for x? —Preceding unsigned comment added by 220.253.218.157 (talk) 03:25, 1 February 2010 (UTC)[reply]

By definition, is equivalent to so that . Now, can you solve for x in ? --PST 03:42, 1 February 2010 (UTC)[reply]
Well, by definition, the radical sign refers to the principal square root, which is never negative, so √x = −1 has no solution. On the other hand, it is true that −1 is a square root of 1, and if −1 is going to be the square root of anything that thing had better be 1; but −1 is not the square root of 1 denoted by √1. So the equation √1 = −1 is not true. —Bkell (talk) 04:13, 1 February 2010 (UTC)[reply]

What is the rate of data input on a wiki article over time?

What I am interested in knowing is, for a given wiki article [so let us say, on average], is there a pattern for data input, and if so what is it? By data input, I mean the content of a wiki article, such as alterations, additions, corrections, deletions etc...

I would suggest not including pictures, as they require data that is out of proportion to text data.

I am assuming that there would be an oscillating pattern relating to the amount of data available on the subject that hasn't yet been put up in the article. So something like lots of data input initially, then tapering off as new data on the subject becomes scarce, only to repeat whenever there is any substantial new amount of data made available on a subject.

I assume also that this pattern would vary due to controversies within a wiki article, such that so long as the controversy is 'hot' the rate of data input would be increased, and so too decrease as the controversy 'cools'.

For context, I am working on a project utilizing open source organization for the creation of specific projects and would like to have an idea as to any patterns in data input that might give a clue as to when a given 'open source project' might be either finishing up, or more likely, ending a cycle. In short, some point when one would be able to say that the project is basically done for now.

I don't necessarily need the detailed mathematics behind the pattern [though that would be nice I suppose], so much as an understanding on if there is a pattern, and if so, what is the pattern and how might it be used to determine when and if there is a point when one could say something like "this piece is done for now" or at least "nothing much new is going to be added to this piece in the near future".

Any help or info would be appreciated, thanks bunches EAshe (talk) 04:14, 1 February 2010 (UTC)[reply]

I don't know if this will help. The problem as you've laid it out has some serious difficulties, because it depends on editing style and article type issues that are difficult to quantify. For instance, on Wikipedia I could say you need to distinguish between mainspace and talkspace changes (in some cases mainspace changes can be predicted from talk space volume, in other cases talk space volume follows brief flurries of changes in mainspace). Further, you'd need to separate the (fairly minor but ongoing) process of link updates, citation fixes, bot entries and cleanup efforts from actual substantive content changes. I'd just scratch the effort to analyze it directly, and take a month's worth of raw data (e.g., pull every edit made for an entire month straight off the Wikipedia servers) and analyze it statistically for determinable patterns. You may not be able to determine the causation of such patterns, but you can probably generalize that the pattern itself will translate across similar constructs. --Ludwigs2 08:04, 1 February 2010 (UTC)[reply]
The idea of data mining may be useful here, really more of a computing question than a math question though.--RDBury (talk) 09:34, 1 February 2010 (UTC)[reply]
It varies by article. You can download the complete history of any article through the m:API. You can download the complete edit history of most of the Wikipedias from m:dumps, but unfortunately no history dumps of the English Wikipedia have been released in the past couple of years, supposedly due to its size. There are some older enwiki history dumps floating around on the internet, and you can get more recent ones for most of the non-English wikis. There are various people who study the stuff you are asking about, but I don't know of anything published. Some qualitative discussion is in Ray Rosenzweig's well-known article about Wikipedia's history-related content.[1] Some other materials from that same site may also be of interest. 66.127.55.192 (talk) 16:19, 1 February 2010 (UTC)[reply]
WP:Statistics is a good place to start seeing what other people have done that way with Wikipedia. Dmcq (talk) 16:24, 1 February 2010 (UTC)[reply]

Closed subspace

I wish to show that the range of the operator T: ℓ∞ → ℓ∞, where ℓ∞ has the norm ||x|| = sup|x_n| and T((x_n)) = (x_n/n), is not a closed set. I tried taking a limit point of the range and a sequence converging to it, and thereby tried to show that the limit point has an inverse image, but nothing came out of it. What would be the correct approach? Thanks-Shahab (talk) 07:01, 1 February 2010 (UTC)[reply]

Consider the element y ∈ ℓ∞ such that y_n := 1/√n. Prove that y is not in the image of T although it is in its closure. --pma 07:51, 1 February 2010 (UTC)[reply]
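In case it helps, a sketch of why that choice works: y is a limit of images of truncated sequences, but any preimage of y would have to be unbounded,

```latex
x^{(k)}_n := \begin{cases} \sqrt{n}, & n \le k,\\ 0, & n > k, \end{cases}
\qquad
\|T x^{(k)} - y\|_\infty = \sup_{n>k} \tfrac{1}{\sqrt{n}} = \tfrac{1}{\sqrt{k+1}} \to 0,
```

while Tx = y would force x_n = √n for all n, which is not a bounded sequence.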
Here is a hint for a more abstract proof. The range of T certainly contains the space c_c of all sequences with compact support, and certainly is contained in the space c_0 of all sequences vanishing at infinity. Note that the former is dense in the latter. So, were the range of T closed, it would be c_0. But then we would have a linear continuous bijection T: ℓ∞ → c_0, hence invertible by the open mapping theorem, which is impossible, because ℓ∞ and c_0 are not even homeomorphic (the latter is separable, whereas the former is not).
The second proof, or other similar indirect arguments may be convenient or even necessary for more difficult cases; however note that for the present problem it would be considered somehow out of place. We feel it mathematically impolite using indirect arguments and general principles (recall that there is the axiom of choice behind the open mapping theorem) in order to prove the existence of an object that could be easily exhibited. On the other hand, as soon as you are a bit acquainted with the basic of functional analysis, the second proof is what should naturally come to your mind (as it first came to mine) -there are no computations in it, but it's just a simple organization of known facts.
Moral: abstract functional analysis, as well as category theory and other general theories, is not a remedy for solving all concrete problems in mathematics; it is rather a guidance that tells you what should be true and why, and which direction you should take. --pma 08:55, 1 February 2010 (UTC)[reply]
Your moral raises an interesting point; namely whether mathematics is about "pure problem solving" (that is, the formulation and solution of a given problem), or whether it is something deeper than that. Some mathematicians with whom I have collaborated have occasionally stated that they believe "problem solving" to be the whole story behind mathematics, but I feel otherwise. As you pointed out, mathematics seems more to be about developing intuition about the connections between different concepts; a good mathematician should have a feel for how certain "principles", for instance, are connected, and should be able to use this feel to "do mathematics" (in the realm of functional analysis, one could note, in some basic sense, that the theory of Von Neumann algebras is about the connection between the algebraic and topological structure of a *-algebra). Thus I believe that mathematics does not really undermine the procedure of "taking a problem, breaking it into simpler problems, and solving the simpler problems" (not in full generality but this possibly may work in concrete cases). I am probably delving into a controversial topic here, but I do agree with your moral, if I have interpreted some aspects of it correctly. PST 11:39, 1 February 2010 (UTC)[reply]
Bill Thurston wrote a well-known essay on that topic,[2] plus there are books like "The Mathematical Experience" (which I haven't read). 66.127.55.192 (talk) 16:24, 1 February 2010 (UTC)[reply]
Thank you all.-Shahab (talk) 03:55, 2 February 2010 (UTC)[reply]

are there inaccessible integers?

I'm trying to make sense of Edward Nelson's concept of predicative arithmetic. My question:

Is there a sentence T of the form ∃x φ(x), where φ is an arithmetic predicate, where T is a theorem of Peano arithmetic, but there is no PA theorem of the form φ(n) for any numeral n? This basically says a certain integer x exists but it's impossible to count up to it and know when you've gotten there. (And just to be sure: I think there is obviously no such sentence if φ is required to be recursive, but am I mistaken?) Thanks. 66.127.55.192 (talk) 18:22, 1 February 2010 (UTC)[reply]

There are many such sentences. Just take ∃x ((x = 0 ∧ ψ) ∨ (x = 1 ∧ ¬ψ)), where ψ is any sentence undecidable in PA. By a more complicated argument, you can also arrange that PA does not even prove φ(0) ∨ φ(1) ∨ … ∨ φ(n) for any n. The property that a counterexample you want does not exist is called the numerical existence property; while the argument above shows that no reasonable classical arithmetic can have it, intuitionistic theories like Heyting arithmetic usually do have it. — Emil J. 18:42, 1 February 2010 (UTC)[reply]
Oh, and φ can't be recursive, as you say. — Emil J. 18:45, 1 February 2010 (UTC)[reply]
Hmm, thanks, I guess my question didn't capture what I was trying to get at, which is whether there are numbers (like the enormous ones that appear in Ramsey theory), that are finite according to PA, but that are too large to count to. I'll see if I can figure out a more accurate way to formalize this notion, without making it imply that PA is omega-inconsistent (although, hmm, maybe it really does imply exactly that). 66.127.55.192 (talk) 19:48, 1 February 2010 (UTC)[reply]
I don't know if you've completely grasped Nelson's point. When he says you can't write such a natural number as or whatever, he means you literally can't write it. You don't have enough time, enough chalk, enough space.
What we usually say is that "in principle" the number could be written down, if we don't have to pay for chalk or space and are given enough time. But Nelson challenges you to figure out what this "in principle" actually means. What does it mean? If you're a formalist like Nelson, and don't accept (or at least don't rely on) the existence of ideal objects apart from our formalized reasoning about them, it's very hard to give a defensible account of what "in principle" means here. --Trovatore (talk) 19:55, 1 February 2010 (UTC)[reply]
I think he goes further. He doesn't like the induction scheme of PRA because the induction step is φ(n)→φ(n+1) for formulas φ that range over all the integers including the ones not yet shown to be numerals (i.e. PRA is an impredicative theory). He has been trying to prove PA is actually inconsistent (why he hopes to find an inconsistency even if PA is false, I'm not sure). He does say that multiplication is a legitimate operation (though exponentiation is not), so numbers like 1000*1000*1000*1000*1000*1000*1000 exist, even though there is not enough chalk in the world to write down that number in unary. I.e. he allows proofs "in principle", it's just a weaker principle than PRA. But, I'm having a hard time coming up with an example of a PA integer that he would say doesn't exist. 66.127.55.192 (talk) 20:27, 1 February 2010 (UTC)[reply]
I don't think he's literally accepting "in principle" in that case. Rather, he can see concretely enough that if you had a proof of a contradiction by allowing multiplication, you could violate intuitions he can actually check about accessible physical objects, that he's convinced multiplication is OK.
As to a specific example, I think he explicitly says that there is no justifiable way to get to . But it has been quite a long time since I looked at his stuff, so you may be more up-to-date on that. --Trovatore (talk) 21:13, 1 February 2010 (UTC)[reply]
I believe predicative arithmetic is something like PRA but with a weaker induction schema, so you can't use it on arbitrary formulas, you can only use it on formulas with a certain syntactic characteristic that fits Nelson's concept of predicativity, and it turns out from this that multiplication is total. The crucial difference between multiplication and exponentiation is that multiplication is associative. It is pretty interesting stuff. I haven't tried to read his book but have read some of his expository papers from his site. This one is just 9 pages: [3]. 66.127.55.192 (talk) 05:47, 2 February 2010 (UTC)[reply]

Derivation of volume of a pyramid

I've seen a derivation which embedded three pyramids of demonstrably equal volume into a prism, demonstrating that the volume of each is one third that of the prism. I looked for the derivation online, and couldn't find it. Does anyone know if there's a Wikipedia article or other website that illustrates this derivation?

Thanks, --129.116.47.49 (talk) 18:48, 1 February 2010 (UTC)[reply]

You can draw one yourself. Label A, B, and C the corners of the triangular base of a prism, and A', B', and C' the corresponding corners of the opposite triangular face. Then mark out three pyramids: The one linking A, B, C, and A', the one linking A', B, C, and B', and the one linking A', B', C', and C. You can show these are all the same volume if you assume that skewing a solid shape doesn't change its volume. The pyramid A'B'C'C can be made into a reflection of ABCA' by sliding the corner C to the point A, so they have the same volume. The pyramid A'B'CB you can slide the corner C to the point C', then the corner B to the point A and the pyramid is congruent to A'B'C'A, which is the same volume as ABCA'. Black Carrot (talk) 19:17, 1 February 2010 (UTC)[reply]

Cut a cube into three square based pyramids having a common summit. Bo Jacoby (talk) 22:44, 1 February 2010 (UTC).[reply]

What are the formulas for a Mercator Projection?

The article didn't have formulas for these situations. Perhaps they should be added.

Let's say Theta is the distance in miles (or kilometers) between two points on the same line of latitude. Let's also say Theta sub zero (in inches or centimeters) is the distance Theta on a Mercator map at the equator. For latitude Phi, what is the distance in inches or centimeters for Theta miles (or kilometers) compared to Theta sub zero?

A related question: what is the distance Phi in inches (centimeters) representing the distance between two lines of latitude Theta sub one and Theta sub two, both either above or below the equator?

And then what is the Pythagorean theorem? Start at latitude Phi sub one and end at Phi sub two (both above or below the equator), start at longitude Theta sub one and end at Theta sub two?

I promise I'm not in school and haven't been for twenty-five years. I saw a Mercator map on TV and just started wondering, and these formulas don't appear to be in the article.Vchimpanzee · talk · contributions · 19:02, 1 February 2010 (UTC)[reply]

I think you mean "for latitude Phi" rather than longitude in your first question, and I think the answer is just csc φ times θ0. The second answer is also given by the difference between cosecants. I don't think the Pythagorean theorem applies. 66.127.55.192 (talk) 19:58, 1 February 2010 (UTC)[reply]
You are correct on the latitude. I fixed it. Thanks.Vchimpanzee · talk · contributions · 20:42, 1 February 2010 (UTC)[reply]
The scaling factor for distances measured along lines of constant latitude φ (horizontal lines on the map) is 1 / cos(φ), also known as sec(φ) - this gives a scaling factor that is 1 at the equator (φ=0) and approaches infinity as you approach the poles (φ = +/- 90 degrees). The vertical distance on the map between two points with the same longitude is more complex and depends on their respective latitudes - it is:
Gandalf61 (talk) 10:46, 2 February 2010 (UTC)[reply]
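A small sketch of these formulas in code (mercator_xy is a hypothetical helper, not from the article; it uses the standard y = R·ln tan(π/4 + φ/2) form of the projection):

```python
import math

def mercator_xy(lat_deg, lon_deg, R=1.0):
    """Map (latitude, longitude) in degrees to Mercator map coordinates."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = R * lon
    y = R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

print(mercator_xy(0, 90))                  # on the equator: y = 0
print(mercator_xy(60, 90))                 # same longitude, 60 degrees north
print(1 / math.cos(math.radians(60)))      # east-west stretch factor at 60 N: 2.0
```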
Can these formulas be added to the article?Vchimpanzee · talk · contributions · 15:53, 2 February 2010 (UTC)[reply]
They are already there, more or less - see Mercator Projection#Mathematics of the projection. Gandalf61 (talk) 16:01, 2 February 2010 (UTC)[reply]

Okay, the new formula may be there. The ones above it aren't. I just noticed lambda was used for longitude. When I took a math class that dealt with related issues, we used Theta and Phi.Vchimpanzee · talk · contributions · 18:46, 2 February 2010 (UTC)[reply]

In spherical coordinate systems mathematicians traditionally use θ for elevation and φ for azimuth. In a geographic coordinate system, on the other hand, cartographers use φ for latitude and λ for longitude. Gandalf61 (talk) 09:41, 3 February 2010 (UTC)[reply]

Capacitance between 3 concentric spherical shell capacitors

Hi all,

I was just wondering if I could get a quick answer to this: if I have 3 spherical shells (concentric), at radii a, b and c (a < b < c), then how do I calculate the capacitance of the system? The shells are at potentials (respectively) 0, V, 0, and I've managed to obtain a general formula for both the potential and the electric field: I'm just not really sure what formula I use to calculate C - I know C=Q/V in 2-capacitor situations, but what do I do here? Do i treat a-b and b-c as 2 separate capacitor pairs and then add their capacitances after, for example, or what?

Many thanks - no great detail of explanation is needed, I just need to know how I should be calculating it so don't go out of your way with a long answer!

Otherlobby17 (talk) 22:16, 1 February 2010 (UTC)[reply]

Sounds like two capacitors in parallel, described by the usual formula. Maybe I'm missing something. 66.127.55.192 (talk) 00:23, 2 February 2010 (UTC)[reply]
Capacitance is always defined between precisely two points: you add +Q here and -Q there, measure the voltage between those two points (more precisely, its change when you added the charges), and divide. The two is very important; when we speak of the "capacitance of an object" (like a capacitor), we're implicitly talking about the capacitance between its two terminals. When we speak of the capacitance of "two capacitors in series", we mean the capacitance between the two terminals that aren't connected to the other (constituent) capacitor. When we speak of the capacitance of one electrically-connected object (like a sphere), we usually mean the capacitance between it and "infinity" (the limit of the capacitance between it and an enclosing sphere whose radius grows without bound). (The common components called capacitors do not involve a "capacitor pair"; the word you may be looking for for "half a capacitor" is "plate".)
In your case, my guess (based on the potentials you mentioned) would be that it's the capacitance between the two spheres and the middle sphere. Since we're treating them as one object, we're constraining them to have the same voltage in this thought experiment; that's equivalent to running a very thin wire between them (through a tiny hole in the middle sphere). So it's just Q/V again in your case, bearing in mind that the charges on the plates are not -Q/+Q/-Q, but are rather a/+Q/b where a + b = -Q. --Tardis (talk) 02:23, 2 February 2010 (UTC)[reply]
Thanks ever so much, that's been a great help :) Otherlobby17 (talk) 22:47, 3 February 2010 (UTC)[reply]
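In case a numerical check is useful, here is a small Python sketch of that reading of the problem (inner and outer shells grounded, middle shell driven to V, so the a-b and b-c shell capacitors share the middle shell as a plate and their capacitances add in parallel; the radii below are made up):

 import math
 
 eps0 = 8.854e-12  # vacuum permittivity, F/m
 
 def shell_capacitance(r_inner, r_outer):
     """Spherical capacitor with concentric plates of radii r_inner < r_outer."""
     return 4 * math.pi * eps0 * r_inner * r_outer / (r_outer - r_inner)
 
 a, b, c = 0.05, 0.10, 0.20  # metres, made-up values
 
 # Middle shell at V, inner and outer shells both at 0: the two capacitors
 # share the middle shell as one plate, so their capacitances add.
 C_total = shell_capacitance(a, b) + shell_capacitance(b, c)
 print(C_total)  # the charge on the middle shell is then Q = C_total * V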

February 2

Asymptotics of Euler Numbers

I was looking at the Taylor series of hyperbolic functions, particularly of sech(x). I am confused now about the Euler numbers, which are defined by sech(x) = Σ_{n=0}^∞ (E_n/n!)·x^n. This series is supposed to have radius of convergence π/2.

The Euler number article claims |E_2n| ~ 8·√(n/π)·(4n/(πe))^(2n). It would all make sense to me with a factorial term instead of that n to the 2n term-- isn't that way too powerful? Won't that overwhelm the n factorial in the denominator of the Taylor series, along with the remaining exponential pieces, so that terms of the Taylor series grow arbitrarily large for any nonzero x?

What am I missing here? 207.68.113.232 (talk) 02:16, 2 February 2010 (UTC)[reply]

You should--I surmise--be able to see that the answer to your question is No--and be able to derive the radius of convergence--from reading Stirling's approximation.Julzes (talk) 03:14, 2 February 2010 (UTC)[reply]
Perhaps you're missing the subscript 2n (not n) on the left-hand side, as I first did? —Bkell (talk) 07:18, 2 February 2010 (UTC)[reply]
Details. Notice that you can very easily derive asymptotics for the coefficients of f(z)=sech(z) by a standard procedure. The poles of f(z) are the solutions of exp(2z) = -1, that is z_n = iπ(2n+1)/2 for all n ∈ Z. The values n=0 and n=-1 give the poles of minimum modulus π/2, which is therefore the radius of convergence for the expansion of f(z) at z=0. So you can write f(z) = a/(z-iπ/2) + b/(z+iπ/2) + h(z), where a, b are respectively the residues of f(z) at iπ/2 and -iπ/2 (here notice that since f(z) is an even function so is its principal part, and it has to be a = -b), and the function h(z) has a larger radius of convergence, actually 3π/2, corresponding to the next poles of f(z) from the origin. As a consequence the coefficients of h(z) have growth O((2/(3π))^n), and the coefficients of the power series expansion of f(z) at z=0 are asymptotically those of its principal part, a/(z-iπ/2) + b/(z+iπ/2) = 2a·z_0/(z² - z_0²), which is a geometric series. Note also that the residue of f(z) at z_0 is the limit of (z-z_0)f(z) as z→z_0, that is, the reciprocal of the limit of cosh(z)/(z-z_0), and the last limit is the derivative of cosh(z) at z=z_0. To get E_n of course you have to multiply by n!, using the Stirling formula. So now you should be able to compute that formula and even more precise asymptotics if you wish (consider the Laurent expansions at the next poles). Also note that the rough estimate E_n = O(n!·(2/π)^n) is immediately available once you know the minimum modulus of the poles, and that the fact that the E_n vanish for odd n is a consequence of f(z) being even.--pma 09:11, 2 February 2010 (UTC)[reply]
Actually in this case the residues of f(z) at all poles are easily computed, giving rise to a classic convergent series of the form (I couldn't find it in wikipedia but I'm sure it's there). Then you can expand each term and rearrange into a power series within the radius of convergence π/2. This gives an exact expression for the coefficients of the power series expansion of sech(z); in particular you may derive more refined asymptotics and bounds. --pma 12:26, 2 February 2010 (UTC)[reply]
Thanks! (from the OP) I knew the nearest poles would be pi over 2 away. Stirling's approximation is precisely what I was missing. 146.186.131.95 (talk) 13:17, 2 February 2010 (UTC)[reply]
Good. Btw, following the above lines one immediately finds the exact expression for E_n in terms of "S_n" reported here. --pma 14:15, 2 February 2010 (UTC)[reply]
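A quick numerical check of that asymptotic, using SymPy for the exact Euler numbers (the formula in asym() is the Stirling-based form discussed above, 8·√(n/π)·(4n/(πe))^(2n), written out as my own transcription):

 import math
 from sympy import euler  # exact Euler numbers, e.g. euler(4) == 5
 
 def asym(n):
     """Stirling-based asymptotic for |E_{2n}|."""
     return 8 * math.sqrt(n / math.pi) * (4 * n / (math.pi * math.e)) ** (2 * n)
 
 for n in (1, 2, 5, 10):
     exact = abs(int(euler(2 * n)))
     print(n, exact / asym(n))   # ratios approach 1 as n grows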

P-value: What is the connection between significance level of 5% and likelihood of 30%

In the Wikipedia article P-value it says:

"Generally, one rejects the null hypothesis if the p-value is smaller than or equal to the significance level,[1] often represented by the Greek letter α (alpha). If the level is 0.05, then results that are only 30% likely or less are deemed extraordinary, given that the null hypothesis is true."


This confuses me. I thought that

-if the significance level is 0.05, results with a p-value of 0.05 or less are deemed extraordinary enough,

and that

-a p-value of 0.05 means that the results are 5% likely (to have arisen by chance, considering that the null hypothesis is true), and not 30%.

Georg Stillfried (talk) 14:56, 2 February 2010 (UTC)[reply]

This is probably an error in the article. A p-value of 5% means that the probability of observing a result at least as extreme as what you observed when, in fact, the null hypothesis is true is 5%. Wikiant (talk) 15:00, 2 February 2010 (UTC)[reply]
Uncaught vandalism from the 11th of January. Fixed now. Algebraist 15:03, 2 February 2010 (UTC)[reply]
Thanks Georg Stillfried (talk) 15:48, 2 February 2010 (UTC)[reply]
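To see the definition in action, here is a toy computation (a made-up example: 100 tosses of a supposedly fair coin with 60 heads observed; SciPy assumed):

 from scipy.stats import binom
 
 n, heads = 100, 60
 
 # One-sided p-value: probability, under the fair-coin null hypothesis,
 # of observing at least as many heads as we actually saw.
 p_value = binom.sf(heads - 1, n, 0.5)   # P(X >= 60) = 1 - P(X <= 59)
 print(p_value)                          # roughly 0.028
 
 alpha = 0.05
 print(p_value <= alpha)                 # True: reject the null at the 5% level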

Comparing vectors

I've been writing a survey paper for a few months and I want to see if there are any other areas of research I can include. The topic is comparing vectors. In this realm, a vector is a set of discrete items in a specific order. The first one is always the first one. The vectors can grow, so a new last one can be added at any time. I've covered a lot of research into comparing the vectors using a cosine function and using Levenshtein-based algorithms. I've tried to find adaptations of BLAST/FASTA used in protein strands, but found nothing. Is there a fundamental method of comparing vectors that I'm missing? There has to be more than two methods. -- kainaw 15:22, 2 February 2010 (UTC)[reply]

Are these vectors supposed to be representing something specific? What do you want to achieve by comparing them? What comparison methods are sensible will depend crucially on these things. Algebraist 15:39, 2 February 2010 (UTC)[reply]
By "discrete", I mean that a value in one vector indicates the same thing as that value showing up in another vector. Some examples: vectors of URLs visited by users. Vectors of UPC codes on foods purchased by customers. Vectors of numbers showing up on a lottery. The values have meaning, but what is being compared is the similarity (or lack of similarity) of vectors. -- kainaw 15:43, 2 February 2010 (UTC)[reply]
I should have clarified that by stating "survey paper", I am interested in bad methods of comparison as well as optimal methods. I already have over 200 pages of detail on methods I've studied and plan to add another 300 pages or so. -- kainaw 15:58, 2 February 2010 (UTC)[reply]
The first problem is that what you are talking about is not really a vector in the sense most commonly used in mathematics. It is really a sequence; or a multiset, if order is not important; or a set, if repetition is impossible/not important. -- Meni Rosenfeld (talk) 16:46, 2 February 2010 (UTC)[reply]
I found a good survey here with a couple algorithms that I haven't studied (yet). From these, I expect to find a few more algorithms that I can include in my survey. -- kainaw 05:47, 3 February 2010 (UTC)[reply]
Order is very important (the main point) and repetition is expected. Therefore, it is not a set. Each of the sequences has an origin that does not change (the first item) and continues to the next item and the next item and the next item... In computer science (where the comparison theories are applied), they are called arrays. I don't know of any concept of arrays in mathematics. -- kainaw 16:53, 2 February 2010 (UTC)[reply]
I think a finite Sequence is the exact analog of an array. -- Meni Rosenfeld (talk) 17:14, 2 February 2010 (UTC)[reply]
Searching for "sequence similarity" brings up bioinformatics (BLAST/FASTA), which I've already covered in depth. -- kainaw 16:56, 2 February 2010 (UTC)[reply]
I don't know if you're going to get a good answer because the question is sort of vague. The method you would want for comparing two sequences simply depends on how you might want to define closeness. You could really pick any function you wanted. If you're looking for commonly used functions, that's tied to what the sequences are common used to represent. Besides proteins and DNA sequences, strings of words, or vectors in some n dimensional space, what might you want to represent with sequences and compare? Rckrone (talk) 06:32, 3 February 2010 (UTC)[reply]
I am purposely making it vague because I'm not interested in what the similarity is measuring. I am collecting, categorizing, and describing in high detail as many methods for comparing the similarity of sequences as possible. I'm focusing on sequences of FILL_IN_THE_BLANK in time right now. I haven't found a lot of methods that take time ordering into consideration. -- kainaw 06:37, 3 February 2010 (UTC)[reply]
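In case concrete baselines help the survey, here is a minimal sketch of the two families mentioned above, written from scratch rather than taken from any particular library (cosine similarity on value counts, which ignores order, and Levenshtein distance, which respects it; the example sequences are invented):

 import math
 from collections import Counter
 
 def cosine_similarity(a, b):
     """Cosine of the angle between the value-count vectors of two sequences."""
     ca, cb = Counter(a), Counter(b)
     dot = sum(ca[k] * cb[k] for k in ca)
     na = math.sqrt(sum(v * v for v in ca.values()))
     nb = math.sqrt(sum(v * v for v in cb.values()))
     return dot / (na * nb) if na and nb else 0.0
 
 def levenshtein(a, b):
     """Minimum number of insertions, deletions and substitutions turning a into b."""
     prev = list(range(len(b) + 1))
     for i, x in enumerate(a, 1):
         cur = [i]
         for j, y in enumerate(b, 1):
             cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
         prev = cur
     return prev[-1]
 
 visits1 = ["home", "news", "sports", "news"]
 visits2 = ["home", "sports", "news"]
 print(cosine_similarity(visits1, visits2))  # order-free similarity
 print(levenshtein(visits1, visits2))        # order-aware edit distance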

Units Problem

I'm in an intro physics class and completely confused about a unit conversion problem---any help, not the answer, but pointing me in the right direction would be appreciated!


Suppose x=ay^(3/2) where a=7.81 g/Tm. Find the value of y when x=61.7 Eg (fm)^2/(ms)^3


note that that's femtometers squared OVER milliseconds cubed


I'm just so confused as to how to combine these units! 209.6.54.248 (talk) 17:23, 2 February 2010 (UTC)[reply]

First solve the equation for y using algebra. Then see what that does to the units. 66.127.55.192 (talk) 17:56, 2 February 2010 (UTC)[reply]
OP here, I've solved for y using algebra to come up with y= cube root of (3806.89 x 10³⁶ g² f⁴ m⁴ Ym²) ALL DIVIDED BY cube root of (60.9961 g² m⁶ s⁶)

I'm still stuck!209.6.54.248 (talk) 19:37, 2 February 2010 (UTC)[reply]

First change the units to metres and seconds, then you can divide the numbers, and you can cancel units that occur in both numerator and denominator before taking the cube root of the whole expression as the last step. (Divide powers by 3 to get the cube root). I'm puzzled by the units you give in the question. Could you explain them in words? What are "f" & "Y" in your answer? Perhaps it would help if you looked at some really simple examples first. Dbfirs 21:41, 2 February 2010 (UTC)[reply]
All problems of change of unit use the same principle, as in the simple example of 6 secs to be converted to millisecs. 6 secs X (millisecs per sec) = 6 X 1000 = 6000 millisecs. Note how the "unit A per unit B" acts as a fraction to cancel the multiplying "unit B". This conversion can be done in both numerator and denominator, so that g/sec could be changed to kg/min by applying the separate factors 1000 and 60, as appropriate.→86.152.78.134 (talk) 23:23, 2 February 2010 (UTC)[reply]
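The same "one conversion factor per unit" idea, written out programmatically on the two examples just given (plain numbers only, and only the handful of factors needed here):

 # Factors into base units: grams, seconds.
 to_base = {
     "g": 1.0, "kg": 1e3,
     "s": 1.0, "ms": 1e-3, "min": 60.0,
 }
 
 # 6 seconds in milliseconds: multiply by (ms per s) = 1 / 1e-3 = 1000.
 print(6 * to_base["s"] / to_base["ms"])        # 6000.0
 
 # 5 g/s in kg/min: factor 1/1000 for the numerator, factor 60 for the denominator.
 print(5 * (to_base["g"] / to_base["kg"]) * (to_base["min"] / to_base["s"]))  # 0.3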

February 3

Transcendental Galois extension

Resolved

Is there such a thing? I'm especially unsure about the possible topology. It cannot be (as I understand) the usual profinite one (since a Galois group may not be compact). Algebraically, there is a notion of separability for transcendental extensions (due to Mac Lane?) but I don't know if it is a good idea to define Galois = (Mac Lane) separable + normal, with "normal" being the usual one. I would be delighted if you know something more. -- Taku (talk) 02:43, 3 February 2010 (UTC)[reply]

I've encountered no such notion myself and Wikipedia seems unaware of it, but it doesn't seem impossible. The article Galois extension characterizes a Galois extension as an algebraic extension K/F whose automorphism group has fixed field F; if we drop the condition of being algebraic, the definition still seems sensible on the face of it. For example, take F as a field of characteristic other than 2, and consider the transcendental extension F(X)/F; the automorphism of F(X) that fixes F and maps X to -X has fixed field F, so therefore that extension would be Galois. Plausibly such a notion might be useful. Others? Eric. 131.215.159.171 (talk) 06:31, 3 February 2010 (UTC)[reply]
Are you sure about your example? The polynomial X² is mapped onto itself under the automorphism you mentioned, and thus should lie in the automorphism's fixed field. PST 13:04, 3 February 2010 (UTC)[reply]
Indeed. The fixed field in this case is F(x²), and the extension F(x)/F(x²) is algebraic. — Emil J. 13:22, 3 February 2010 (UTC)[reply]
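To spell out why the fixed field is F(x²) (a short check, with char F ≠ 2 as in the example above): F(x²) is certainly fixed, x itself is not, and degree 2 leaves no room in between.

 <math>[F(x):F(x^{2})] = 2, \quad\text{since } x \text{ is a root of } T^{2}-x^{2}\in F(x^{2})[T] \text{ and } x\notin F(x^{2}),</math>

so the fixed field, which contains F(x²) but does not contain x, can only be F(x²) itself; in particular F(x)/F(x²) is algebraic.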
The whole point of Galois extensions is that they satisfy the fundamental theorem of Galois theory, i.e., there is a dual correspondence of intermediate extensions to closed subgroups of the Galois group. If the extension is transcendental, merely requiring that Fix(Aut(K/F)) = F comes nowhere near ensuring this goal for whatever definition of "closed" (though I can't remember the specific counterexample ATM), and is unlikely to be a very useful property by itself. — Emil J. 13:18, 3 February 2010 (UTC)[reply]
Well, of course you could argue that for the purposes of "pure field theory", the existence of a Galois connection should be somehow related to the notion of a "Galois extension", and I agree. But field extensions also occur often in other branches of mathematics, and although an example does not immediately come to mind (I vaguely recall one, and if I remember, I will note it here), perhaps some definition of "Galois extension" for transcendental extensions may be useful in algebraic geometry (for instance). If Taku is looking for an example of how to use the theory of transcendental extensions in Galois theory, one striking example is the fact that if F is a purely transcendental finitely generated extension of Q, and if E is Galois over F (in the usual sense), there is a Galois extension K of Q, such that the Galois group of K over Q is isomorphic to the Galois group of E over F; most standard proofs of this fact involve Hilbert's irreducibility theorem. The result may be of interest if you were looking for connections between "transcendental extensions" and "Galois theory". PST 13:38, 3 February 2010 (UTC)[reply]

Well, I wasn't thinking anything fancy. I had a very innocent example like . (It's not a Galois extension since it's not algebraic.) I thought, in application, it makes sense to start with the top field ; since this way you can apply analytic results (e.g., Lie groups). The problem is that there are transcendental elements when we go down. (Or maybe I'm missing the story completely.) Hence, my somehow rhetorical question. I think we can agree that it would be nice if there is such a thing as a transcendental Galois extension. (I don't know any possible applications to algebraic geometry that PST mentioned.) Of course it is possible to define an extension to be Galois by the closure property: i.e., where * means taking a Galois group and taking fixed field respectively. As Emil J. pointed out, such a definition is vacuous. (And, if I understand correctly, if we require a Galois group to be profinite and further assume that there is a Galois connection, then it would follow that the extension is algebraic, since the union of finite extensions is algebraic.) -- Taku (talk) 02:23, 4 February 2010 (UTC)[reply]

This is the answer to myself. Apparently there is "transcendental Galois theory": a lecture note by J.S. Milne has a section on this. [4] As I noted above, a Galois group is not compact. But then what is it? Do we know? -- Taku (talk) 02:47, 4 February 2010 (UTC)[reply]

This book connects moduli spaces and Galois theory. In general, most of the connections between Galois theory and geometry involve the inverse Galois problem (as far as I know). PST 03:48, 4 February 2010 (UTC)[reply]

Smooth maps

Resolved

I'm working on a problem where I am supposed to give necessary and sufficient conditions for a map, f, from one smooth manifold on the reals to another to be a smooth map. With very little work, I showed that this is equivalent to showing f³ is smooth in the normal calculus sense. Here smooth means C^∞. Now, do you think my answer should be that the cube of f is smooth on the reals? I mean that is necessary and sufficient, but I don't know if there's more that can be said, like "Every such function would look like ...". I can see that products and sums of smooth functions produce smooth functions, so if f is smooth, then f³ should be smooth. But, the opposite is not true as x^(1/3) is not smooth but its cube is. Any thoughts? Thanks. StatisticsMan (talk) 05:03, 3 February 2010 (UTC)[reply]

I asked my professor and he said that is all he is looking for. StatisticsMan (talk) 21:02, 3 February 2010 (UTC)[reply]

Induction

Everyone knows how to do induction and recursive definition on well-ordered sets. Is there a generalized notion of well ordering to partially ordered sets so we can do things similar to induction and recursion on them? Money is tight (talk) 07:37, 3 February 2010 (UTC) Nvm I found what I was searching for in Well-founded_induction Money is tight (talk) 07:46, 3 February 2010 (UTC)[reply]

I'd say the Zorn lemma. --pma 09:36, 3 February 2010 (UTC)[reply]

Winning eight games out of ten

Sometimes I play a series of ten Reversi games online against ten different opponents, selected at random. If I win eight of the ten games, and there are no draws, could I use that to calculate or estimate at what percentile in the Reversi-player ability range I am? Ignoring that it's a small sample size. Thanks. 78.146.251.66 (talk) 12:27, 3 February 2010 (UTC)[reply]

If on average you win eight of the ten games you play, and if everything stated in your question is assumed, you should effectively defeat 80% of the population in Reversi. But that implies that your percentile rank is 80. PST 12:58, 3 February 2010 (UTC)[reply]
Given that you had a uniform chance of any ranking beforehand, the probability density afterwards of where you are, from 0 to 1, is x⁸(1−x)² normalized so it all adds up to 1, which is the Beta distribution with parameters 9 and 3. That expression is x⁸ − 2x⁹ + x¹⁰. The antiderivative is x⁹/9 − 2x¹⁰/10 + x¹¹/11. Its value between 0 and 1 is 1/9 − 2/10 + 1/11 = 2/990, so that's what you divide by to get your final result. The limits of the eighth decile are I believe 0.7 and 0.8 but you might want something else. You work out the antiderivative at the two end figures, subtract, and divide by that total between 0 and 1. And that gives your chance of being in that decile. But I'm afraid I can't do that in my head. Dmcq (talk) 13:42, 3 February 2010 (UTC)[reply]
BTW by that article the average (arithmetic mean) of your ranking is 75% and the most likely value (mode (statistics)) is 80%. Dmcq (talk) 13:51, 3 February 2010 (UTC)[reply]
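Putting numbers on that (a small sketch with SciPy; Beta(9, 3) as above, taking 0.7 to 0.8 as the band of interest):

 from scipy.stats import beta
 
 # Posterior for the win probability after 8 wins in 10 games, uniform prior.
 posterior = beta(9, 3)
 
 print(posterior.mean())                         # 0.75, the 9/(9+3) above
 print(posterior.cdf(0.8) - posterior.cdf(0.7))  # probability of lying in [0.7, 0.8]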
No, no, no. All these calculations confuse "percentile of Reversi ability" with "probability to defeat a random opponent".
First, playing ability probably cannot be placed on a one-dimensional scale. It's very possible to have cycles where player A dominates B (defeats him with probability > 0.5), B dominates C and C dominates A. It's possible that the world champion of Reversi, who dominates all other powerful players, is so confused by the cluelessness of weaker players that he beats them less often than he should. Thus the player in the 100th percentile would have a relatively low probability to defeat a random opponent.
Even if we ignore this and choose "probability to defeat a random opponent" as our measure for ability, this quantity will only be monotonic with percentile, not identical to it. It's very possible that the best player, at the 100th percentile, is only able to defeat a random opponent 70% of the time.
So we see that the OP's 80% winning record, even if measured for a large sample, does not tell us what the percentile is. It is possible that he is really the world champion, and it is possible that he is on the 60th percentile (I think it can't be lower).
For estimating "probability to defeat a random opponent", Dmcq gives the right ideas, but the prior needn't be uniform. It depends on the distribution of this parameter among all players, and your objective estimates about your own ability. -- Meni Rosenfeld (talk) 15:44, 3 February 2010 (UTC)[reply]
Sorry you're quite right, it doesn't give the percentiles at all and one would need a lot more information to do that. Thanks for pointing that out for me, very silly. Dmcq (talk) 17:10, 3 February 2010 (UTC)[reply]
Any attempt to assess your relative ranking depends upon some other things that aren't being stated. First and foremost, if you are playing where there are ratings, you should use that as a guide rather than your performance in specific games against players who may be at either end of the spectrum. Generally, stronger players will avoid playing much weaker ones so as to waste little of their time. If you played a truly random selection of opponents, the percentage of games you would win would roughly match up with the percentage of players you are stronger than; but, for example, in chess the player generally considered strongest historically, Garry Kasparov, only had about a 70-30 record, since he only played other chess players in the top hundredth of a percentile through most of his career. [Incidentally, one of the strongest Reversi/Othello players is Imre Leader, the godson of Imre Lakatos (recently mentioned at this desk for his book, Proofs and Refutations).]
One thing said here is absolutely false. A (very) strong player will beat a weak one 100% of the time unless s/he falls asleep during the game. It's not a subject for this desk, but it shouldn't go unchallenged.Julzes (talk) 17:59, 3 February 2010 (UTC)[reply]
You misunderstood me. I have no knowledge about Reversi, and I wasn't trying to make a statement about how likely a strong Reversi player is to beat a weak one. I was talking about games in general, using "Reversi" as a placeholder. It requires specific domain knowledge to show that Reversi does not exhibit any of the scenarios I mentioned (if this is so).
For professionally played games that do not involve randomness, the probability in question is indeed usually close to 100%; for games that do, it is usually less.
"The percentage of games you would win would roughly match up with the percentage of players you are stronger than" is false in general, and a strong statement about Reversi in particular. -- Meni Rosenfeld (talk) 19:40, 3 February 2010 (UTC)[reply]
Reversi doesn't use dice or cards. The question was not about games with randomness, but about random selection of opponents. The matchup between proportion of games won and percentile of skill is greater the more highly graded skill-levels are in a game without randomness. In general, one won't enjoy success over opponents one is stronger than 100% of the time, so it is certainly true that the matchup is not perfect. It is close, and, given a large sample of games against truly randomly chosen opponents, I think it's probably the best estimator for someone in the middle ranks (I don't know if this question has been researched). This, however, probably excludes players who only barely know the game. Someone with only a modicum of skill may very well have a hard time getting good results against a rank beginner (somewhat as you said about strong versus weak players). At any rate, one thing that can be argued is that all games exhibit some randomness, to the extent that weak players may choose their moves with no more skill (and sometimes less) than a random-move selector would. And also, as I said, opponent selection cannot possibly be random and the rating systems that are available are a better guide to determining skill level. There is a lack of transitivity in the ranking question as well, and I think this is the point that Mr. Rosenfeld was trying to get across. Such things as variations in styles of play and specific preparation for specific opponents can either make comparisons impossible or yield false results. In some cases comparisons can be made but are not made well with head-to-head results, and in others comparison is effectively impossible. And then there is the question of which game of Reversi or chess one is talking about, from 1 minute per game to postal.Julzes (talk) 21:36, 3 February 2010 (UTC)[reply]
The article about the Elo rating system (used in chess and tennis) might be of some help here. 66.127.55.192 (talk) 18:07, 3 February 2010 (UTC)[reply]

distance of a hyperplane from the origin and the norm

My question is this: Show that the norm ||f|| of a bounded linear functional f (non-zero) on a normed space X can be interpreted geometrically as the reciprocal of the distance D = inf{||x||: f(x)=1} of the hyperplane H = {x : f(x)=1} from the origin. Firstly, what is the meaning of hyperplane? The book I am reading doesn't define hyperplane; it defines a hyperplane parallel to a subspace Y as an element of X/Y. Secondly, how should I prove the result? Thanks.-Shahab (talk) 13:30, 3 February 2010 (UTC)[reply]

A hyperplane is an affine subspace of codimension 1. In other words, it's a set of the form {x : f(x)=a} for some nonzero linear functional f and scalar a. What do you mean by "recipient"? Algebraist 15:24, 3 February 2010 (UTC)[reply]
"Recipient" should perhaps be "reciprocal" ? Gandalf61 (talk) 15:29, 3 February 2010 (UTC)[reply]
Yes, that was a mistake. Corrected now. So how do I proceed? Also I believe we can think of H as x/f(x)+N where N = null space of f and x is a fixed element in X−N. But in general how do I interpret a hyperplane as an element of X/Y (what element, and what is the subspace Y)? Thanks-Shahab (talk) 17:10, 3 February 2010 (UTC)[reply]
Y = {x: f(x) = 0} = your N, and H is literally an element of X/Y if the latter is defined in the obvious way as a set of equivalence classes, it's not necessary to "interpret" it. Anyway, all this talk about hyperplanes and X/Y is just a red herring. The result that ||f|| = 1/inf{||x||: f(x) = 1} = sup{1/||x||: f(x) = 1} follows fairly trivially from the definition of ||f|| = sup{|f(x)|: ||x|| = 1}, just show that the two suprema are taken over the same set (well, except for 0). — Emil J. 17:26, 3 February 2010 (UTC)[reply]
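For the record, the computation being pointed at amounts to nothing more than rescaling x (assuming f is nonzero):

 <math>\|f\| = \sup_{x\neq 0}\frac{|f(x)|}{\|x\|} = \sup_{f(x)\neq 0}\frac{|f(x)|}{\|x\|} = \sup_{f(y)=1}\frac{1}{\|y\|} = \frac{1}{\inf\{\|y\| : f(y)=1\}} = \frac{1}{D},</math>

where the third equality substitutes y = x/f(x): every y with f(y) = 1 arises this way, and ||y|| = ||x||/|f(x)|.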

Discrete Mathematics: Do quantifier orders matter?

There's a question in my discrete mathematics book that asks us to write in English: ∀x ∃y P(x, y)

and then asks us to write: ∃y ∀x P(x, y)

My question is: are these equivalent?

Thanks for the help! Sebsile, an alternate account of Saebjorn 16:17, 3 February 2010 (UTC)[reply]

Try letting your quantifiers range over people, and letting P(x,y) mean "x loves y". Algebraist 16:19, 3 February 2010 (UTC)[reply]
If, unlike xkcd, you don't want to mix math and romance, try . -- Meni Rosenfeld (talk) 16:25, 3 February 2010 (UTC)[reply]
What about this (a kind of variation on Algebraist's example): "for any x there exists a y that can screw x" vs "there exists a y that can screw any x". pma 17:09, 3 February 2010 (UTC)[reply]

No they're not equivalent. In the first case the statement is true even if the y-value is different for different x-values. In the second case the statement is not true unless the same y-value works regardless of what the x-value is.

This is precisely the difference between pointwise convergence and uniform convergence. Michael Hardy (talk) 19:52, 3 February 2010 (UTC)[reply]

Is there still a distinction between and ? It's a long time since I used these symbols. Dbfirs 22:37, 3 February 2010 (UTC)[reply]
Those colons, as far as I'm aware, mean absolutely nothing. So the difference persists. Algebraist 23:32, 3 February 2010 (UTC)[reply]
the colons are old-schoolish symbols for 'such that'. they are kind of unnecessary, so I think modern usage tends to drop them. but to answer the question in english (rather than mathese), the difference would be between for every X there exists (some) Y where P(x,y) as opposed to there exists a (particular) Y for all X where P(x,y). in the first case y can be different for different x's; in the second it's the same y for all x's. --Ludwigs2 00:08, 4 February 2010 (UTC)[reply]
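If it helps to see the difference mechanically, here is a toy check over a finite domain (the predicate is chosen arbitrarily, just to make the two readings disagree):

 # Toy domain and predicate, picked so the two quantifier orders come apart.
 domain = [0, 1, 2]
 def P(x, y):
     return y == x            # "y equals x"
 
 forall_exists = all(any(P(x, y) for y in domain) for x in domain)  # for all x there exists y
 exists_forall = any(all(P(x, y) for x in domain) for y in domain)  # there exists y for all x
 
 print(forall_exists)   # True:  for each x we can pick y = x
 print(exists_forall)   # False: no single y equals every x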

Homoeomeric curves

I don't really need help with this, just thought it would be interesting. The ancient Greeks studied homoeomeric curves, that is curves for which any part can be made to coincide with any other part. Or in more modern language, a connected 1-manifold embedded in Euclidean space so that its symmetry group under isometries of Euclidean space is transitive. Geminus showed there are only three homoeomeric curves (in 3-space): the line, the circle, and the circular helix. It appears though that there is a fourth type of curve in 4-space and in general there are n types in n-space. It appears that there are three types of homoeomeric surfaces in 3-space: the plane, the sphere, and the circular cylinder. It seems natural to ask, can the homoeomeric m-submanifolds of Euclidean n-space be classified?--RDBury (talk) 16:19, 3 February 2010 (UTC)[reply]

For the case of curves in three-space, they can be classified in terms of the Frenet–Serret formulas: one with zero curvature (the line), one with nonzero curvature but zero torsion (the circle) and one with nonzero curvature and torsion (the helix). It seems likely that Jordan's extension to n dimensions will nicely classify the homoeomeric curves in dimension n. Algebraist 16:28, 3 February 2010 (UTC)[reply]
I'm not sure there are only 4 curves in 4D. Off the top of my head I can think of
The first three are line, spiral and circle - there's only one spiral as the first two components, at·e_0 + bt·e_1, are orthogonal to the other two whatever the values of a and b. Obviously a = b = 0 gives a circle, c = 0 a straight line.
The last three (including the circle) are related to simple, isoclinic and double rotations: they come from thinking of the paths of points under those rotations, as such path will map to itself via that rotation and powers of it.
The last gives more than one curve as values of m and n can be chosen to generate different closed curves, much like a Lissajous curve, except these are homoeomeric. E.g. m = 1, n = 2 is the simplest one that's different from the fourth/isoclinic one. In theory there are as many as there are pairs of relatively prime (m, n), i.e. infinitely many. Then there is a class of non-closed curves when m/n is not rational, so the curve loops forever and is dense but never joins up.
So it gets a lot more complex just for curves in four dimensions. I can't even think what will happen for surfaces, or for more general m-manifolds in n-dimensions, which gain far more degrees of freedom in spaces with far more complex transformations.
Actually scrub the fourth one. With a suitable change of basis it's just a circle radius . I think my thinking on the paths generated by the general double rotation still makes sense though.--JohnBlackburnewordsdeeds 20:42, 3 February 2010 (UTC)[reply]

Clarification on 'Electric Potential Energy' article

Hi all,

further to my question on charged spheres as capacitors a few days ago, I was discussing electromagnetism with a friend a few years older than me and he pointed out that, in the derivation of the alternate formula for calculating energy (Electric Potential Energy - under 'Energy stored in an electrostatic field distribution'), when we derive the |E|² formula for energy, we throw away a surface integral over a sphere of infinite radius because φ → 0 as r → ∞: however, as the potential tends to 0, it's also true that the surface area of the sphere tends to infinity.

Now I can appreciate the basic concept of what's going on here - as we take the limiting values of phi and the surface area for r → ∞, phi tends to 0 sufficiently fast so that the integral over the surface tends to 0 despite the surface area becoming arbitrarily large, but what I want to know is why? I've consulted 2 textbooks (and the above article) on this, and the only answer I seem to be able to find is a mumbled 'oh, well, the potential just goes to 0 faster...' without any actual justification. Why can we be certain that the potential drops off sufficiently fast that the surface integral becomes negligible? Can anyone give me a proper answer without just hiding behind the fact that '0 times infinity = 0 in this case'?

I greatly appreciate any help you're able to provide, all I want is a proper justified answer or a decent explanation - many thanks, Otherlobby17 (talk) 22:59, 3 February 2010 (UTC)[reply]

Far from all charges (you have to assume the charge density is localized, or at least itself eventually falls off with radius "sufficiently fast"), they act like a single point charge, whose potential goes as 1/r and whose field goes as 1/r². The area of the bounding surface goes as r², so the product of the potential, the field, and the area goes as 1/r and vanishes as r grows without bound. (If the total charge is 0, the potential and field will drop off at an even faster rate determined by the precise geometry of the charges.) --Tardis (talk) 01:57, 4 February 2010 (UTC)[reply]
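Making the powers of r explicit (for a localized distribution with total charge q; the constants are kept only to show the scaling):

 <math>\varphi \sim \frac{q}{4\pi\varepsilon_0 r},\qquad |\mathbf{E}| \sim \frac{q}{4\pi\varepsilon_0 r^{2}},\qquad \oint_{S_r} \varphi\,\mathbf{E}\cdot d\mathbf{A} \;\sim\; \frac{q}{4\pi\varepsilon_0 r}\cdot\frac{q}{4\pi\varepsilon_0 r^{2}}\cdot 4\pi r^{2} = \frac{q^{2}}{4\pi\varepsilon_0^{2}\, r} \longrightarrow 0 \quad (r\to\infty).</math>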

February 4

Scientific Notation

If 6.02 × 10²³ atoms of carbon have a mass of 12 g, then what is the mass of 1 atom? Express your answer in scientific notation.

I don't even know where to start on this one. I'm pretty sure it has something to do with dividing the exponent and 6.02.

Explaining how you got your answer would be great.

174.112.38.185 (talk) 01:58, 4 February 2010 (UTC)[reply]

If two cars weigh 2 tonnes, how much does one car weigh? —Preceding unsigned comment added by 129.67.39.49 (talk) 02:07, 4 February 2010 (UTC)[reply]
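If the mechanics of the notation are the sticking point, the same kind of division on different numbers looks like this (divide the decimal parts, subtract the exponents, then renormalize so the first factor is between 1 and 10):

 <math>\frac{8.4\times 10^{7}}{4.2\times 10^{3}} = \frac{8.4}{4.2}\times 10^{7-3} = 2.0\times 10^{4}, \qquad \frac{3.0\times 10^{2}}{6.0\times 10^{5}} = 0.5\times 10^{-3} = 5.0\times 10^{-4}.</math>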

Statistics course (titled Stochastic processes)

I'm taking a statistics course (titled stochastic processes). It's like no other stats course I've taken previously because the prof covers in lecture many proofs and mathematical theorems. I've taken only calculus 1 to 3 and I don't have any background in proof. I don't know why but I also have proof-phobia. Proofs just never appealed to me, or they were never possible for me to understand or reproduce by myself. I don't know how I should ace this course. Even the homework is really hard. In my past stats courses, I prepared for exams by doing chapter review questions at the end of every chapter. But the prof's questions are nothing like the ones in the text. What should I do? —Preceding unsigned comment added by 142.58.129.94 (talk) 02:29, 4 February 2010 (UTC)[reply]

Is the course a requirement? If not, drop it. If the course is required, do any other professors teach it? Check to see if they are better suited to your skills. If so, switch courses. If it is required and he is the only professor, meet him after class - as much as possible - and ask tons and tons of questions. The more questions you ask, the more answers you will get. -- kainaw 02:31, 4 February 2010 (UTC)[reply]
It's not a requirement. But if I drop the course now, I'll get no refund back for the course tuition (around $400). I'll also have "W" mark in my transcript. I don't have a single "W" right now, but I heard it doesn't look good. —Preceding unsigned comment added by 142.58.129.94 (talk) 02:41, 4 February 2010 (UTC)[reply]
In any case, there is little chance that we can assist you (effectively) regarding this issue; you make your own future. But we can offer you some advice, and with enough perseverance on your part, this advice may be useful for you later on. Firstly, it is a big mistake to practice "reproducing proofs"; proofs should come naturally to you, but if they do not, reproducing them is a bad habit and can be detrimental to your understanding of the subject in question. The best option available, if you do not appreciate proofs in stochastic calculus, is to attempt to appreciate them in "lower-level analogues". For instance, since a proof is nothing but a series of logical implications, practice "logical implications" by manipulating some basic trigonometric identities (take out a calculus book, and try to appreciate some theoretical proofs, such as that of the fundamental theorem of calculus, as well; look at the underlying intuition of the proof rather than the proof itself). Personally, you do not necessarily have to "know how to do proofs in university" to become a great mathematician, but your professor will probably tell you otherwise.
When you say that your professor's questions are nothing like those in the text, it is likely (but not necessarily the case) that the professor's questions test an understanding of the material rather than a routine memorization of the material. Thus, instead of attempting to have the ability to "reproduce the textbook in exams", attempt to understand the textbook; be in the position where you have a feel for the material that would permit you to engage in a 1-hour discussion with any expert of stochastic calculus, and be interested in that which is being discussed! It is difficult to attain good grades if you are not interested in what you are doing, but some students do have the ability to do exactly this (and this is an extremely difficult alternative; I am sure that many professors of mathematics would fail mathematics exams if they followed this procedure).
At the end of the day, you make your own future; instead of feeling helpless about your course, try to take it step by step. You do not necessarily need to attain an A; thoroughly understand whatever material you can and enjoy what you are doing! If there happens to be a few concepts that you do not understand, try to enjoy thinking about these concepts, and maybe you will understand them eventually. Finally, when the final exam is imminent, do not spend too much time solving textbook problems; if you have taken the course, you have probably solved enough of those types of problems anyhow. Rather, try to discuss stochastic calculus with a friend or fellow student, and by this I mean engage in a lengthy discussion covering most of the topics in the syllabus (use paper and pen as well, if necessary) (and forget memorized speeches; discuss the material as if you were discussing what you did on the weekend). To summarize, the most important advice in this instance is to enjoy what you are doing; if you are enjoying it, everything else will come naturally to you. Try to determine the aspects of stochastic calculus that interest you the most and develop your appreciation of the entire subject from there. PST 03:43, 4 February 2010 (UTC)[reply]
Not much I can add except proofs aren't really something you can learn overnight. Most math majors take an elementary analysis course to get the basics. But there aren't any easy recipes; otherwise there wouldn't be any unproven conjectures, and there are plenty of those. There are books devoted to the basic mechanics, e.g. How to Read and Do Proofs: An Introduction to Mathematical Thought Processes by D. Solow comes to mind. Keep in mind also that proofs come in all levels of difficulty, so with this course the proofs may just be a matter of applying rules of algebra to known formulas.--RDBury (talk) 04:03, 4 February 2010 (UTC)[reply]

Really simple algebra problem.

The surface area of a sphere is A = 4πr². Now I want to replace the radius in the formula with diameter. Since r = d/2, I can rewrite the equation as A = 4π(d/2)². Correct? Then reducing, it follows A = 2πd²... Is this correct? Assuming it is, why am I being told that the answer is A = πd²??? 198.188.150.134 (talk) 04:57, 4 February 2010 (UTC)[reply]
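For what it's worth, the slip is in the squaring step: (d/2)² is d²/4, not d²/2, so the 4 in front cancels exactly:

 <math>A = 4\pi\left(\frac{d}{2}\right)^{2} = 4\pi\cdot\frac{d^{2}}{4} = \pi d^{2}.</math>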