
Wikipedia:Reference desk/Mathematics



Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 6

How did mathematicians figure out trigonometric ratios?

Like how to use sin, cos, tan, etc. I always thought it was amazing that they can find a side or an angle that way.

Thanks a lot. — Preceding unsigned comment added by 139.62.223.182 (talk) 00:23, 6 January 2012 (UTC)[reply]

Basically by thinking. Sine, cosine, and so on are fairly simple - they are defined as the ratio between two sides of a right triangle. The core insight is that this ratio is the same for every right triangle with the same angles, no matter what the absolute size of the triangle is. The field of maths is called trigonometry, and the first systematic exposition (still extant) was Euclid's Elements, written around 300 BCE. --Stephan Schulz (talk) 00:36, 6 January 2012 (UTC)[reply]
Exactly. I would say that the only difficulty would have come from computing the values of the functions. I'm not sure what method they would have used to create the tables that they used, but you can calculate them to any degree of accuracy by evaluating sufficiently many terms in their Taylor series. Fly by Night (talk) 00:53, 6 January 2012 (UTC)[reply]
See CORDIC for a modern method of calculating the values of trig functions. AndrewWTaylor (talk) 01:04, 6 January 2012 (UTC)[reply]
Presumably the first trig tables were produced just by measuring accurately and dividing lengths in large right-angled triangles, as is sometimes done in schools to introduce trigonometry. Dbfirs 10:47, 6 January 2012 (UTC)[reply]
Thanks for the link Andrew. I hadn't heard of the CORDIC method before. I was quite surprised reading the CORDIC article. It seems, or at least the article makes it seem, that the CORDIC evaluation is far more complicated than the evaluation of a polynomial (i.e. a truncated power series). But I suppose that that reflects the state of affairs: computers are very good at doing the same thing a trillion times in a row. Fly by Night (talk) 21:37, 6 January 2012 (UTC)[reply]
Computing a table of trig functions suggests different methods than computing just one or a few values. As an example, let me consider antilogarithms first, because they're simpler to imagine.
If you have a single number x and want to compute exp(x) to ten digits, then indeed a method involving power series is the best. But suppose that instead you want to compile a full table of antilogarithms to four or five digits of precision. For this, you could start from exp(c), where c is a small number, then compute exp(nc) for each natural number n by repeated multiplication of precise results. (The obvious method would be to compute exp((n+1)c) = exp(c)exp(nc) each time, but this is wrong: it loses precision very quickly. Instead you keep doing something like exp((n+2^k)c) = exp(2^k c)exp(nc), where k is the greatest natural number such that 2^k < n, because then each number in your table is the result of only O(log n) multiplications, so you get precise results.)
If you want a table of base-10 antilogarithms, you don't in fact need to be able to compute even a single starting value 10^c in the first place. Instead you just choose a small positive number C, assume 10^c = 1 + C, and compute the values of 10^(nc) by repeated multiplication as above, all without knowing the actual value of c. Finally, when you get near the end of the table, where 10^(nc) ≈ 10, you solve 10^(xc) = 10 by interpolation from your table, and thus you know c = 1/x.
Computing sines and cosines works similarly to the above. If you need just one value, then a solution based on power series is probably the best. If you need a full table, you repeatedly multiply by a unit-magnitude complex number (represented by its real and imaginary parts; be sure to normalize each product back to unit magnitude). Here too, you can start from any unit-magnitude complex number, and you will compute its angle when you get to the end of the table near exp(iπ/4).
b_jonas 16:54, 9 January 2012 (UTC)[reply]
Update: clarified what k is.
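
A minimal Python sketch of the table-building idea just described (function names are invented; this uses the naive sequential product with renormalization, so the binary-splitting product above would be more accurate still):

```python
import cmath
import math

# March around the unit circle by repeatedly multiplying a unit-magnitude
# complex number by exp(ic), renormalizing each product as described above.

def sine_cosine_table(steps, c):
    """Return [(cos(n*c), sin(n*c)) for n = 0..steps]."""
    z = complex(1.0, 0.0)        # exp(0i)
    step = cmath.exp(1j * c)     # exp(ic), the only trig evaluation needed
    table = [(z.real, z.imag)]
    for _ in range(steps):
        z *= step
        z /= abs(z)              # normalize back to unit magnitude
        table.append((z.real, z.imag))
    return table

table = sine_cosine_table(9000, math.radians(0.01))
cos45, sin45 = table[4500]       # the entry for 45 degrees
print(sin45, math.sin(math.radians(45)))
```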

solving all quintics by algebraic means

I read an article dated 5 October 2011 on this Wikipedia about solving all quintics by algebraic means. I am the discoverer, and I am introducing new mathematical methods which will help scientists and mathematicians in the next generations. Right now I am about to publish the paper, and I invite mathematicians to challenge the research paper. Also, to AndrewWTaylor: you said something about a NOBEL PRIZE awaiting such a discovery. I am really interested. — Preceding unsigned comment added by NII AFRAH (talkcontribs) 12:20, 6 January 2012 (UTC)[reply]

I charge a standard fee of $1000 (US) to provide a detailed analysis of such "discoveries". Sławomir Biały (talk) 12:30, 6 January 2012 (UTC)[reply]
Sławomir Biały makes a very reasonable offer, but here is a short check-list that you might want to run through yourself:
  1. Does your method truly apply to all quintics, with no exclusions? Will it work for, say, x^5 − x + 1 = 0?
  2. Does it only involve addition, subtraction, multiplication, division and taking roots, and no other operations or functions (so no trigonometric functions or Bring radicals, for example)?
  3. Does it always terminate in a finite number of steps (so no unbounded iteration loops, for example)?
  4. Does it result in an exact solution or set of solutions, rather than an approximate solution with an error term that can be reduced by additional calculations?
  5. Can you provide full and explicit details for each step? For example, anywhere that you have said "obviously", "it is clear that" or "anyone can see that", can you fill in the missing details?
  6. Can you explain how your method is consistent with the Abel–Ruffini theorem? Or, alternatively, why the Abel–Ruffini theorem is wrong?
If you are absolutely sure that you can answer "yes" to all these questions then you may have discovered something genuinely new and notable, and your next step should be to write a paper and submit it to a reputable journal. Gandalf61 (talk) 13:28, 6 January 2012 (UTC)[reply]
Very nice answer. I'll add that this proposed general solution for all quintics would not only contradict Abel-Ruffini, but also would indicate that there are severe problems with the correctness of all of Galois theory, which allows for very concise modern proofs of A-R (given in that article). A key concept here is that, unlike science, mathematics generally does not move forward by undermining previous claims. Once a proof is published and accepted (say for ~50 years), it is (almost) never falsified by later work. SemanticMantis (talk) 16:10, 6 January 2012 (UTC)[reply]
Just show us your solution to x^5 − x + 1 = 0. Bo Jacoby (talk) 16:32, 6 January 2012 (UTC).[reply]
I'd say the closest example to claims being undermined is the creation of spherical and hyperbolic geometries, but that disproved only those who believed that no consistent structure could be built with the 5th postulate changed. And the only reason it took so long for Fermat to be disbelieved in terms of Fermat's Last Theorem was that he was arguably one of the most famous mathematicians in Europe, though there were certainly those who challenged him for the proof within his lifetime. As for a Nobel Prize, Wiles didn't get one and FLT is much more famous. Nash's work that did get him the Nobel in Economics was actually useful to economists directly; I'm not sure this one (even if correct) would be. Naraht (talk) 16:58, 6 January 2012 (UTC)[reply]
As the OP mentioned my name, I should perhaps point out that my main contribution to the previous discussion was to point out that such a discovery would be more likely to lead to a Fields Medal than a Nobel prize. AndrewWTaylor (talk) 17:13, 6 January 2012 (UTC)[reply]

MY ANSWERS ARE ALL YES. My discovery is going to change the view of ALGEBRA if it enters the mathematics domain. Galois, Abel and others missed two key equations during their research, and I would say that Galois and Abel were gifted with this discovery but died too early. Abel and Ruffini came out with the "impossibility theorem" because they too missed the key equations, before Galois concluded that "a polynomial can be solved by means of a general algebraic formula only if the polynomial has a degree less than five". I see Galois's conclusion and the "impossibility theorem" as algebraically false statements. After their deaths, mathematicians researched this, but they focused more on the existing mathematical tools. Now I have the solution and am ready for a challenge. — Preceding unsigned comment added by NII AFRAH (talkcontribs) 17:17, 6 January 2012 (UTC)[reply]

You could start by uploading your paper to the arxiv, so that we all (and others) could read it. This is often done by researchers just before (or at the same time) they submit the manuscript to a publisher. Staecker (talk) 17:48, 6 January 2012 (UTC)[reply]
Or, as said above, give a solution for x^5 − x + 1 = 0. I'd find that extremely interesting and pretty convincing. Dmcq (talk) 18:48, 6 January 2012 (UTC)[reply]
  • It has been really nice to read all of the replies given (especially Gandalf's). Unfortunately it seems, to me at least, that this is a classic case of trolling. The OP was given, more than once, specific problems to apply his new theory to, and yet failed to even acknowledge the questions. I know that it's very tempting to think "what an idiot" and then to throw lots of detailed mathematical theory at the OP in the hope of making him realise the error of his ways. This would work if the OP were an arrogant, yet well-meaning, novice; but the OP does not appear to be well meaning (although s/he is clearly a novice). I would ask you all to deny recognition and to not make any further posts. Fly by Night (talk) 21:50, 6 January 2012 (UTC)[reply]

I would like to upload it to arxiv.org, so look out for it. — Preceding unsigned comment added by NII AFRAH (talkcontribs) 23:35, 6 January 2012 (UTC)[reply]

I have a better idea: post a link on here when you've uploaded it. --COVIZAPIBETEFOKY (talk) 00:25, 7 January 2012 (UTC)[reply]
I completely agree with Fly by Night's suggestion; this post is just SPAM. --pma 12:09, 8 January 2012 (UTC)[reply]

What would it cost to wall off Pakistan from Afghanistan?

The reason is that the Taliban are free to roam between the countries; that's why there's a constant problem with Taliban resurgence.

Could the DoD take Israel's example? They walled off Palestinian-held territories, and look at how that stifled attacks by the Palestinians.

Therefore, couldn't we better control the Taliban menace by walling off the border? --70.179.174.101 (talk) 14:03, 6 January 2012 (UTC)[reply]

This is not a math question. You might try again at another desk.--RDBury (talk) 14:56, 6 January 2012 (UTC)[reply]
See Durand Line for the history of the Pakistan-Afghanistan border, and a figure for its length. See Israeli West Bank barrier and Israel and Egypt–Gaza Strip barrier for information on the Israeli barriers to which you refer. Qwfp (talk) 15:36, 6 January 2012 (UTC)[reply]
You can't just build a barrier and leave the area, since they would just put holes in it, tunnel under, or climb over. A barrier only works if it's continuously guarded. Of course, you can also continuously guard the border without a barrier. The barrier itself isn't necessary, although it does help by slowing crossings to give border guards time to catch those crossing.
The barrier probably shouldn't be right at the border, both because that may not be the most defensible position, and because those building it may come under fire from Taliban in Pakistan. On the surface, perhaps just razor wire might be best, since it's cheap and you can easily add more to patch holes put in by those trying to cross.
Ideally we could put in both infrared cameras and remote-controlled guns aimed at the barrier, so we could watch and open fire on anyone trying to cross. Of course, the Taliban on the Afghan side would try to destroy these, so they would need to be protected from that, too. Perhaps they could be placed on mountain peaks in locations only accessible by helicopter.
Land mines are also a possibility. Of course, they are not politically popular because they can kill or wound innocent people long after a war ends. If they were placed between sections of barbed wire, that might keep innocent people out. They can also be designed to inactivate after some period of time, although I expect this war to last for decades. It might be better to put in low-power landmines only designed to wound, since those missing legs would become ineffective fighters, would be a more visible warning to others, would make identifying networks of Taliban easier, and might be "worse than death" for those who want to become a martyr.
Another problem is caves crossing the border. We need to find all of those, and collapse them with explosives. Bribing smugglers and others who know about them is one way to find them. Perhaps some technology like ground-penetrating radar could help to locate them. StuRat (talk) 17:05, 6 January 2012 (UTC)[reply]
I suspect some of these ideas may have already come to the U.S. government's mind ([1], [2]) -- clearly someone has seen Terminator too many times, and thought "that's f'n great, I'll have me some of that!" Honestly, that's a bad message to take home from that film. -- The Anome (talk) 19:20, 6 January 2012 (UTC)[reply]
I should point out that much of the border between Pakistan and Afghanistan runs along a series of ridges in the western Himalayas; it would be quite challenging to build a wall all the way, even if it's just a razor-wire fence. Along the way it reaches a number of peaks, including Kohanha, Kohe Baba Tangi, Sakar Sar, Rahozon Zom, Lunkho e Dosare, Akher Tsagh, Kohe Urgunt, Langula E-Barfi, Kohe Shakhawr, Kohe Mandaras, Gumbaz E-Safed and, most importantly, Noshaq. These are so thoroughly unknown to Westerners that most of them don't even have Wikipedia entries, even though each one is taller than the tallest mountain in North America, Denali. It is very difficult to get to the top of any one of these mountains (let alone carry supplies to build a wall there): even the most specialized helicopters can't land or take off at that altitude, gale-force winds blow nine months of the year, and heavy snowfall and frequent avalanches threaten to take out your fence.
But at least you didn't propose to build a fence along the border between Pakistan and China - which goes over K2.--Itinerant1 (talk) 23:04, 8 January 2012 (UTC)[reply]

I need a mathematical function that scores 0 for "infinitely incorrect estimate" or "no answer" and 1 for "exactly precise answer"

I'm helping a friend with an experimental design for a cognitive test. (She's the psych student, not me!) For the test, she has to score groups (based on whether they work together or individually) on how accurate their estimates are. Sample questions include "the number of miles between this building and the nearest Dunkin Donuts" or "the number of rooms in the local Red Roof Inn", etc. (fairly arbitrary, and yes, I realise there are issues). Her predecessor used Mean absolute percentage error, but there are problems with this: someone who answers "eight million" to "number of miles between the Earth and the sun" is basically an order of magnitude off, but far, far off in terms of MAPE, and this will distort the subject's overall score even if he gets every other question perfectly right. He might even score worse than subjects who answer 90 million for that question but are grossly off by an order of magnitude on every other question, if the other questions use small quantities.

We'd like an answer that is way off base to score zero, the same as giving no answer at all, and an answer that is perfectly precise to have a score of one.

I thought of the normal distribution, but then the variance parameter would be somewhat arbitrary. How would one decide how quickly the score drops off? I also thought of the properties of the Gini coefficient. How would I use that? elle vécut heureuse à jamais (be free) 18:32, 6 January 2012 (UTC)[reply]

It appears to me that your problem is with scale. You need to normalize the answers/results of all of the questions before doing the calculations. I regularly normalize values in the 0.00-3.00 range together with values in the 30-300 range. For the 0.00-3.00 values, I divide by 3 and get values in 0.00-1.00. For the 30-300 range, I subtract 30 and divide by 270 to get values in 0.00-1.00. I can do this because both of my measures have a normal distribution. If you don't have a normal distribution (which I get with some things, like serum-creatinine measures), I take all my values, square them, sum the squares, take the square root of that sum, and then divide all values by that square root. That shifts the values so that they are between -1 and 1 (but since mine are all positive, I get 0 to 1). Once everything is between 0 and 1, you won't have issues with scale. -- kainaw 18:45, 6 January 2012 (UTC)[reply]
Use the log of the value of the guess over the actual value, then just order the results on that and use non-parametric statistics for the rest. You probably have a statistician around who can help you with setting it all up.
I second the approach of using non-parametric statistics. This kind of thing is what it exists for. Unfortunately, it does require a statistician to understand it properly, but if you're doing real modern science, you should have one on tap already: or at least now you know to go and have a look for one. -- The Anome (talk) 19:24, 6 January 2012 (UTC)[reply]
I don't think she can simply scale the MAPE score as is being described; the issue is disparities within a single question. For example, a subject who answers "77 million" for "number of people in China" compared with subjects who answer "1.4 billion" has a percentage error of 94.2%, but a person who answers "90 million" for "population of Israel" might have a percentage error of over 1200%.
Re: non-parametric statistics -- I'll go tell her that. elle vécut heureuse à jamais (be free) 19:36, 6 January 2012 (UTC)[reply]
The log of the guess over the actual value tells if someone is closer or further away than another person for a particular question like the population of Israel. It simply is saying that double the figure is as bad as half the figure. The non-parametric bit then just gives an ordering for each question so it doesn't matter if some questions tend to be answered less accurately than others. Dmcq (talk) 22:24, 6 January 2012 (UTC)[reply]
Why not devise a discrete marking system? Use ratios to mark each question, i.e. (their answer) divided by (the correct answer). Then set up a marking scheme of, say, {0,1,2,3}, where you assign a score depending on the answer given. You would need to decide how accurate you want them to be, e.g. give them a 0 if the ratio between their answer and the correct distance to the sun is of the same order, a one if it is ±1 order of magnitude, and a two if it is ±2 orders of magnitude. You can change the scoring system as much as you like depending on the question. Just choose a mark scheme for each question. Fly by Night (talk) 22:08, 6 January 2012 (UTC)[reply]
At first glance, it seems the issue is merely a matter of normalization; as Kainaw explained above, any set of data can be normalized to a standard range for direct comparison.
But the problem with that approach is more subtle; the entire concept is ill-formed. The request is to construct a simple function that is applicable to a wide variety of data-domains. This can't be done properly; numerical error in one question is not directly comparable to numerical error in another question, even when normalized.
To avoid being overly mathematical (given that the OP's friend is a psychologist, I presume), I constructed a "counter-example" series of estimation questions. Hopefully this will elucidate the problem:
  • "What year was the Declaration of Independence drafted?"
  • "How distant is the moon from the Earth?"
A "correct" answer to the Orbit of the Moon should accept the variance of the moon's elliptic orbit. At its closest, the moon is about 360,000 km; and at its most distant, it is about 400,000 km. So a "correct" answer really ought to be acceptable to a range within ±5%. If the same degree of error is acceptable for the first question, any year between 1687 and 1864 should be acceptable! If these two questions are scored on the same scale, mixing up the American Revolution and the American Civil War is numerically equivalent to a minute error of astrophysics! The test is ill-formed; the questions can't be compared on a numerical basis. In fact, the units of time (years since a particular calendar-date) and the units of distance are so dissimilar, there's no sensible way to normalize the values. At best, you can normalize to the distribution of answers provided by a control-set of individuals - much the way a standardized test is constructed.
It is implausible to construct a simple mathematical formula that can account for the huge variations in domain-specific acceptable tolerances. The field of computational heuristics is widely studied in artificial intelligence and computer science; it's very difficult. We have an article on Fermi problems, which may help give you some insight. Different scales have different units, so trying to compare "percentage-error in years" and "percentage error in kilometers" is a fundamental failure of dimensional analysis. It can't be done.
Your friend needs to construct a set of standard criteria for each question to determine an appropriate score. Nimur (talk) 22:26, 6 January 2012 (UTC)[reply]
The "classic" way of dealing with outliers with respect to averages is to use the median rather than the mean. Wikipedia doesn't have an article on the "median absolute percentage error", but a quick Google search shows that use of MdAPE is not unheard of, though I can't speak to how frequently, or if it would be well regarded in a cognitive psychology context. -- 140.142.20.101 (talk) 22:29, 6 January 2012 (UTC)[reply]

A mathematical function that scores 0 for an "infinitely incorrect estimate" and 1 for an "exactly precise answer" is exp(-(ln(x/x0)/ln(σ))^2), where x (> 0) is the actual answer, x0 (> 0) is the correct answer, and σ (> 1) is a factor describing the required precision. Bo Jacoby (talk) 11:11, 7 January 2012 (UTC).[reply]
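
For later readers, here is a Python sketch of a score along these lines, combining the log-ratio error suggested earlier in the thread with a Gaussian drop-off of the kind just described; the width parameter is a free choice, not something the thread fixes:

```python
import math

# log(guess/actual) is 0 for an exact answer and unbounded for an
# infinitely wrong one; a Gaussian squashes it into (0, 1].
# sigma = 10 makes "one order of magnitude off" score 1/e.

def score(guess, actual, sigma=10.0):
    """1.0 for guess == actual; tends to 0.0 as guess/actual -> 0 or inf."""
    if guess is None or guess <= 0 or actual <= 0:
        return 0.0                      # "no answer" scores 0
    err = math.log(guess / actual)      # symmetric: 2x off == 0.5x off
    return math.exp(-(err / math.log(sigma)) ** 2)

print(score(1_400_000_000, 1_347_000_000))  # near-miss on China's population
print(score(77_000_000, 1_347_000_000))     # over an order of magnitude off
print(score(None, 1_347_000_000))           # no answer
```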


January 7

standard scrabble points

E=1, M=3, O=1, R=1, Y=4

so why E=1, M=2, O=1, R=1 Y=10? http://www.bbc.co.uk/news/health-16425522 — Preceding unsigned comment added by 81.147.58.70 (talk) 12:28, 7 January 2012 (UTC)[reply]

Scrabble has many editions for different languages, each with different letter distributions and points. The photo clearly isn't of an English-language edition. From the points shown, it could be a French edition. Qwfp (talk) 12:50, 7 January 2012 (UTC)[reply]
In fact, of the Scrabble editions mentioned, French is the only one that matches up. French and German both have 10-point 'Y's, but an 'M' in German is 3 points, not 2.

Two- or three-dimensional chaotic system or map with at least two parameters

The section header pretty much says it. I would prefer the system or map (I don't know if there is a difference between the two terms) to be continuous. I am currently taking calculus in high school, so I don't understand how to apply terms like dyadic transformation and eigenvalues; I would prefer the system to be in an explicit form. --Melab±1 15:26, 7 January 2012 (UTC)[reply]

What have you attempted and where did you get stuck? Bo Jacoby (talk) 20:38, 7 January 2012 (UTC).[reply]
I am not stuck, because this is not an assignment. I am interested in experimenting with initial conditions, and I would rather use a continuous system with an exact and explicit solution than any of the discrete systems I could find that have exact and explicit solutions. Connecting points generated discretely using a continuous equation just isn't satisfactory. --Melab±1 20:50, 7 January 2012 (UTC)[reply]
Your question was posed and answered here[3], remember. Bo Jacoby (talk) 08:23, 8 January 2012 (UTC).[reply]
Perhaps we could help you better if you explained some of your goals. There is plenty of fun to be had playing with chaotic systems, but why the need for a three dimensional state variable and continuity? Why do you wish for exact solutions? Exact solutions for chaotic systems are usually found only in simple "toy" models, and even then are not so common. There is a reason most of this stuff was not studied well until modern computation for numeric integration became cheap. As an aside, I'd highly recommend this book, "Nonlinear dynamics and chaos" [4]. You should be able to handle at least the first few chapters, and it gives some very nice intuitive approaches. It deals with chaos, but also other important features of nonlinear dynamics, such as stable limit cycles. SemanticMantis (talk) 15:09, 8 January 2012 (UTC)[reply]

The classical analysis of the Geiger-Marsden experiment may be relevant. The orbit in 3-dimensional space depends critically on the impact parameter, so the actual orbit is unpredictable, even if the formula is well known. Bo Jacoby (talk) 12:42, 9 January 2012 (UTC).[reply]

I want it to be continuous because I want to be able to evaluate it at any point, not just at specific values, and I want an exact solution simply because I do not want errors to have any chance of propagating. --Melab±1 13:10, 9 January 2012 (UTC)[reply]
I just checked the book; some of the notation looks to be beyond me. For example, on page 235 I don't understand what the displayed equation solves, or how its parameters fit into it. --Melab±1 22:06, 9 January 2012 (UTC)[reply]
The system doesn't need to model a physical process. --Melab±1 22:16, 9 January 2012 (UTC)[reply]
Your enthusiasm is inspiring, but we must walk before we run. Strogatz starts from a (somewhat) elementary perspective, but you need to read (and fully understand) the first 234 pages before you tackle p. 235. Make sure you are confident with ch.1-3, and feel free to ask a new question if you need help with the book. As to the original question, I don't have the time to search out what you're looking for right now, nor am I totally sure that it exists. Perhaps after you've read a bit more (and made sure you ace your HS math class this semester), you can re-post the question :) SemanticMantis (talk) 22:59, 9 January 2012 (UTC)[reply]
Thank you, very much. :) I want to say that I think I may have come up with a discrete chaotic system a while ago, it just doesn't satisfy my requirements. --Melab±1 23:16, 9 January 2012 (UTC)[reply]
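
For later readers: the canonical continuous example with the requested dimensions and parameters is the Lorenz system. A minimal SciPy sketch follows; note that it is numerically integrated, since exact closed-form solutions of the kind asked for above are essentially unavailable for continuous chaotic flows:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The Lorenz system: a continuous three-dimensional flow with three
# parameters (sigma, rho, beta), chaotic at the classic values below.

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Two nearby initial conditions diverge exponentially, the hallmark of chaos.
sol_a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], dense_output=True)
sol_b = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0 + 1e-9], dense_output=True)
for t in (0.0, 10.0, 20.0, 30.0, 40.0):
    print(t, np.linalg.norm(sol_a.sol(t) - sol_b.sol(t)))
```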


January 8

Cardinality

I'm not sure if I've asked this question before. If I have, then forgive me. Given two differentiable manifolds X and Y, I want to know the cardinality of the space of smooth maps from X to Y, in terms of the cardinalities of X and Y. Forgive me if my question doesn't make perfect sense; I'm a novice in set theory. I have an idea that I'm trying to make sense of. I'm trying to express that the space of smooth maps from two-space to three-space is "much bigger" than the space of planes in three-space (which is diffeomorphic to the real projective plane RP2). I'm trying to make sense of "how many more" smooth surfaces there are than planes. Cardinality is an obvious way of doing it, but it seems very crude. Maybe there is another way of doing it? Hopefully you understand what I'm trying to get at. Fly by Night (talk) 18:21, 8 January 2012 (UTC)[reply]

Cardinality is rarely a useful measure of size, except in discrete mathematics and naive set theory. In the case of smooth functions from one manifold to another, this always has the cardinality of the continuum, as do both X and Y, unless Y is zero dimensional (i.e., discrete points). Indeed, one can see that the cardinality of the set of smooth maps is not less than the continuum by considering just the constant maps into Y. For the opposite inequality, Y can be embedded into a ball in R^n for sufficiently large n (by the Whitney embedding theorem). Smooth mappings from X to Y are thus contained in a separable Hilbert space of square-integrable maps into R^n, whose cardinality is the continuum.
There are many more useful ways to measure the "size" of a space than cardinality. Usually these involve introducing some topology on the space of interest, or taking some kind of quotient by an equivalence relation, or both. For instance, rather than counting mappings from X to Y, it might make more sense to count the homotopy classes of mappings. In the specific problem you are interested in, you can define a topology on the space of mappings in terms of which it becomes an infinite dimensional Frechet manifold. You probably also want to quotient by the group of automorphisms of X to get a meaningful comparison to the set of planes in R3, and that will complicate things. But, at any rate, one will be an infinite dimensional space, and the other a finite dimensional one. Sławomir Biały (talk) 11:52, 9 January 2012 (UTC)[reply]
Sławomir Biały has already given a nice answer, but let me expand a bit. I'd like to explain how you can see that the cardinality of the set of smooth maps is at most the continuum. The trick is that a manifold X is a separable space, which means that it has a dense subset of countable cardinality. Choose such a dense subset, say K. Now if you have a continuous map f from X to Y, then g = f|K (the restriction of f to K) uniquely determines f. As this g has to be chosen as a function from a countable set (K) to a set of at most continuum cardinality (Y), the cardinality of such functions is also at most the continuum. Note that all we are using here about the maps is that they're continuous; that is, we don't need to restrict to smooth maps. – b_jonas 15:37, 9 January 2012 (UTC)[reply]
Veering slightly off-topic here — whether a manifold has to be separable depends on the details of your definition of "manifold". An example of a non-separable manifold is the long line. Which by the way is one of my favorite topological spaces — I love the way that the construction itself can be pushed as far as you like through the ordinals, but if you go past ω1, it's no longer a manifold --Trovatore (talk) 10:15, 10 January 2012 (UTC)[reply]
Thanks Sławomir and b_jonas. I've worked out what I need to do. I use the Whitney topologies on the space of map germs from X to Y. Assuming dim(X) < dim(Y), at a point p of X, I can write the image of X locally as the graph of a function. I get a map into a jet space, where any plane gets mapped to a point. Fly by Night (talk) 17:39, 10 January 2012 (UTC)[reply]

Four color theorem

Help me understand this theorem: does it mean that if you use fewer than four colors on a map, two adjacent sections will have the same color? If so, the next question is: would a checkered pattern be accepted as a 'map'? Because you can do a checkered pattern with three colors. MahAdik usap 19:20, 8 January 2012 (UTC)[reply]

No it doesn't. It says that every separation of the plane can be coloured (i.e. assigned colours so that no two adjacent sections have the same colour) with at most four colours. The trivial separation, i.e. no boundaries, can be coloured with a single colour. If you split the plane into two or three pieces then you can colour it using two or three colours respectively. The contrapositive of the theorem says that if you have less than four colours, say n, then there exists a separation which cannot be coloured with n colours. Fly by Night (talk) 20:57, 8 January 2012 (UTC)[reply]
The contrapositive is that if a graph is not 4-colorable then it's not planar, or if a map requires more than 4 colors then its regions aren't contiguous. The statement you said, while true, is not equivalent to the four color theorem. Rckrone (talk) 01:45, 9 January 2012 (UTC)[reply]
Actually, that is the wrong contrapositive in this case. Fly By Night is saying that, given a separation of the plane, "if you have four or more colours, you can colour it." The contrapositive would be "if you cannot colour it, then you have less than four colours", which is basically the same thing as his sentence (you would certainly be right if he were taking graphs as the basic object). Sorry for being contentious... Phoenixia1177 (talk) 16:44, 9 January 2012 (UTC)[reply]
I agree that the contrapositive depends on exactly how you state the theorem. However, what you said is also not the same as what Fly By Night said. You said that if a map is not n-colorable, then n < 4. Fly By Night's statement is a partial converse to that: if n < 4, then there exists a map that is not n-colorable. The 4 color theorem gives an upper bound on the number of colors needed, while his/her statement is that this bound is tight. Rckrone (talk) 02:09, 10 January 2012 (UTC)[reply]
I suppose you are technically correct, but that the bound needs to be tight is rather obvious. Moreover, your version of the contrapositive is still off for the reasons I said. That said, I'm starting to feel really nit-picky and am going to stop :-) Phoenixia1177 (talk) 04:40, 10 January 2012 (UTC)[reply]
The issue isn't what statements are obvious, it's about what statements are logically equivalent. The statement "if you have less than four colours, say n, then there exists a separation which cannot be coloured with n colours," is not equivalent to the 4-color theorem. As a result it's not the contrapositive of any formulation of the 4-color theorem. Rckrone (talk) 05:29, 10 January 2012 (UTC)[reply]
...Wow, I'm feeling embarrassed; the entire time I've been reading that as something different. For some reason my brain kept interpreting it as "if you have less than four colours, only then, and definitely then, do you have a separation that cannot be coloured", and I'm really not sure why (probably not paying close attention...). At any rate, it seemed like you were complaining that it would claim not only that you can always get by with four colours, but that sometimes they are required, which seemed unreasonable; obviously, you weren't being unreasonable, my apologies. Though I still stand by my whining about whether you are talking about graphs or maps, etc. Sorry again :-) Phoenixia1177 (talk) 10:22, 10 January 2012 (UTC)[reply]

January 9

series acceleration

Hello. What is the best form of series acceleration to apply to the (real-valued) Taylor series for e^x (which already converges pretty fast, but hey)? I had a glance at your series acceleration article, but it doesn't seem to say which method is best used when. This is preferably something that is easy to write a computer program for. Thanks. 24.92.85.35 (talk) 22:56, 9 January 2012 (UTC)[reply]

I don't think there are many specific rules, since it depends on factors independent of the specific series. For example, if you're only computing a few values to relatively low accuracy and the series converges reasonably quickly, then it may be simpler just to go with the original series. Otherwise it's generally a tradeoff between accuracy, speed of convergence, and how complex an algorithm you're willing to program. The series for e^x already converges pretty rapidly unless x is large, and there are simple identities you can apply to reduce to the case where x is within given bounds. Instead of trying to accelerate the series it might be better to use a different method, such as continued fractions, to get more accuracy, but in many cases adding a few more terms to the series is just as effective.--RDBury (talk) 14:44, 10 January 2012 (UTC)[reply]
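
A Python sketch of the range-reduction identity mentioned above (the base-2 reduction and the term count here are illustrative choices, not prescriptions):

```python
import math

# e^x = 2^k * e^r with r = x - k*ln(2), so |r| <= ln(2)/2 and a short
# Taylor series of fixed length suffices for any x.

def exp_taylor(r, terms=15):
    total, term = 1.0, 1.0
    for i in range(1, terms):
        term *= r / i          # term is now r^i / i!
        total += term
    return total

def exp_reduced(x):
    k = round(x / math.log(2.0))          # nearest integer
    r = x - k * math.log(2.0)             # |r| <= ln(2)/2, about 0.347
    return math.ldexp(exp_taylor(r), k)   # 2^k * e^r, exact scaling

print(exp_reduced(50.0), math.exp(50.0))
```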

January 10

How soon would a dishwasher pay for itself in savings?

I was told that machine dishwashers are more efficient than washing by hand.

In this case, how much water do we use on average to wash dishes by hand?

On the other... "hand", how efficient is a new dishwasher made this year, and by how much is it more efficient than hand washing? How much does such a dishwasher cost (and from what store)?

Therefore, assuming regular usage, how soon would the machine dishwasher pay for itself in savings? --70.179.174.101 (talk) 00:19, 11 January 2012 (UTC)[reply]

A comparison of the article and its comments here gives some indication of the gap between the spin and the common experience. --Tagishsimon (talk) 00:58, 11 January 2012 (UTC)[reply]
Another factor is the amount of detergent used up. Hand dishwashing detergent seems cheaper to me, and you might use less, since you can apply it directly to the dishes, and only as needed. StuRat (talk) 16:50, 12 January 2012 (UTC)[reply]

Polynomials

Hello. I am trying to prove succinctly but rigorously that if, for some polynomial P, P(x+c) - P(x) = k for all x, with c and k constant, then P must be a polynomial of degree at most one. I already have a proof, considering an indeterminate degree n, that involves sigma summation and binomial expansion, but it is very ugly. Can anybody provide a hint? Thanks. 24.92.85.35 (talk) 01:06, 11 January 2012 (UTC)[reply]

Assume that P has degree n, and say P(x) = a_n x^n + ... + a_0. What is the coefficient of x^(n-1) in P(x+c) - P(x)? --COVIZAPIBETEFOKY (talk) 01:43, 11 January 2012 (UTC)[reply]
Take a derivative of both sides of your equation and conclude that P' is a periodic function (and therefore it is bounded). The only bounded polynomials are constant, so P' must be constant. Sławomir Biały (talk) 10:52, 11 January 2012 (UTC)[reply]
Sławomir, that's a neat little proof. It is intuitively obvious that the only bounded polynomials are constant but how might one prove that? The sine function can be represented as an infinite power series. If you took the polynomial consisting of the first googol to the power of a googol terms in that power series then you'd have a polynomial. It's tempting to think that this polynomial might be periodic over some large interval. Presumably you'd need an argument involving the convergence of power series? What would you do next? Fly by Night (talk) 19:53, 11 January 2012 (UTC)[reply]
You just consider x large enough and the polynomial will be dominated by the largest power. n times the largest coefficient (or 2 if smaller) will be quite big enough and more. Dmcq (talk) 20:11, 11 January 2012 (UTC)[reply]
I'm aware of the method, but I was asking Sławomir to give explicit details about what he would do. Fly by Night (talk) 23:04, 11 January 2012 (UTC)[reply]
Actually, there's a simple proof that P′ is constant that doesn't rely on this fact. First note that for each integer j (by the argument I just gave) P′((j+1)c) = P′(jc). By Rolle's theorem, this implies that P″ has a zero in (jc, (j+1)c). Thus P″ is a polynomial with infinitely many zeros, and is therefore the zero polynomial.
To answer your question about how I would prove that the only bounded polynomials are constant: by the squeeze theorem, if P is bounded and n ≥ 1, then P(x)/x^n → 0 as x → ∞. But if P had degree n ≥ 1, then the leading coefficient of P would be the (nonzero) limit of P(x)/x^n as x → ∞, a contradiction. Sławomir Biały (talk) 00:08, 12 January 2012 (UTC)[reply]
It's obvious when you put it like that :o) Fly by Night (talk) 17:53, 12 January 2012 (UTC)[reply]

Thank you everybody, this was really helpful! Thank you especially, Slawomir, for such a clever proof. Just to be sure I understand you: there is no real requirement that the j in your argument be an integer, correct? It could in fact be any number? Thanks again! 24.92.85.35 (talk) 03:10, 12 January 2012 (UTC)[reply]

Right, j need not be an integer for the identity to hold, but it simplifies the argument a bit to assume this because it ensures that the intervals do not overlap. Sławomir Biały (talk) 10:06, 12 January 2012 (UTC)[reply]
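
A quick symbolic sanity check of the thread's claims, for anyone who wants to see the coefficient from the first hint appear explicitly (a sketch using sympy, not part of the original discussion):

```python
import sympy as sp

# For degree n >= 2 the difference P(x+c) - P(x) still depends on x, with
# x^(n-1) coefficient n*a_n*c, while for degree 1 it collapses to the
# constant a1*c, consistent with "degree at most one".

x, c = sp.symbols('x c')
for n in range(1, 5):
    a = sp.symbols(f'a0:{n + 1}')              # coefficients a0..an
    P = sum(a[i] * x**i for i in range(n + 1))
    diff = sp.expand(P.subs(x, x + c) - P)
    print(n, diff)
```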

January 11

Matrix multiplication on integers and overflow

While reading the description of YUV, this occurred to me. Normally, multiplication by a square, nonsingular matrix is one-to-one and onto, but because RGB and YUV values are constrained to an interval, the transform can overflow. This led me to the following question:

For the matrix equation A·x = b, for which values of x, with each component x1, x2, x3, ... in the interval [0, 1], will b also have each component b1, b2, b3, ... in the interval [0, 1]? — Preceding unsigned comment added by 68.40.57.1 (talk) 05:35, 11 January 2012 (UTC)[reply]

All possible values of x for which Ax falls within a Cartesian product of intervals [0, 1] (a hypercube) are found by inverting the matrix and applying it to the hypercube, which gives a distorted hypercube, parts of which may fall outside the source. You'd then need to find the intersection of that and the source hypercube, which doesn't sound too nice. Dmcq (talk) 11:12, 11 January 2012 (UTC)[reply]
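
A numerical sketch of this reply (the matrix is a rounded RGB-to-YUV-style example, and the helper names are invented):

```python
import itertools
import numpy as np

# Membership is easy to test directly; the region itself is the unit cube
# intersected with the image of the unit cube under the inverse matrix
# (the polytope spanned by the mapped corners).

A = np.array([[0.299, 0.587, 0.114],
              [-0.147, -0.289, 0.436],
              [0.615, -0.515, -0.100]])

def stays_in_range(x):
    b = A @ x
    return bool(np.all(b >= 0.0) and np.all(b <= 1.0))

A_inv = np.linalg.inv(A)
corners = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
preimage_vertices = corners @ A_inv.T   # A^-1 applied to each corner
print(preimage_vertices)
print(stays_in_range(np.array([0.5, 0.5, 0.5])))
```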

Rapid calculation of standard deviation in a time series

I came up with a rapid algorithm that calculates a running variance of the last N values of a noisy (chaotic) time series as each new data sample comes in, without relying on any loops. Basically it maintains two running sums:

  • S = the sum of the last N values of x
  • T = the sum of the last N values of x².

Each new data value x is added to S, and its square is added to T, while the value that arrived N samples ago is subtracted from each. In this way the running sums require no loops. Based on this identity for variance:

σ² = T/N − (S/N)² = (N·T − S²)/N²

...the standard deviation for my time series (N weighted, not N−1 weighted) is simply:

σ = √(N·T − S²)/N

So far this is working OK for me. However, I am concerned about errors that can occur when the two terms in the numerator are many orders of magnitude larger than their difference (or worse, the difference due to roundoff is negative). This hasn't happened yet in my application, but the possibility is there.

So I've spent several hours looking for rapid calculation techniques for time series, and found nothing. I do find single-pass methods for calculating variance (see Standard deviation#Rapid calculation methods for example, based on Welford's classic paper of 1962), but for a fixed-length variance of a time series, this would still require a loop every time the series gets a new data point.

Does anybody know of a loop-less rapid calculation of standard deviation of a time series that doesn't introduce the possibility of roundoff error? The only alternative I know of is the exponentially weighted moving standard deviation, with update (for smoothing factor α and d = x − μ):

μ ← μ + αd,  σ² ← (1 − α)(σ² + αd²)

...but this has some undesirable settling time properties for me, so I'd prefer not to use it.

Anyone know of any other alternatives? ~Amatulić (talk) 21:37, 11 January 2012 (UTC)[reply]

Would this Algorithms_for_calculating_variance#Compute_running_.28continuous.29_variance work? --NorwegianBlue talk 22:20, 11 January 2012 (UTC)[reply]
No, that's Welford's algorithm that I mentioned above. That section title is somewhat misleading. You could run that algorithm continuously on a time series but your result would be the variance of all the data contained within the time series, not the last N values as I'm trying to calculate.
My problem can be restated like this:
Calculate the variance or standard deviation inside a fixed-length window that slides over an infinitely long data set, using a rapid calculation algorithm that doesn't require loops AND doesn't introduce potentially catastrophic roundoff errors.
I have solved the first part but not the last part. ~Amatulić (talk) 23:23, 11 January 2012 (UTC)[reply]
Try searching for "recursive calculation of variance". Does it help? --HappyCamper 23:37, 11 January 2012 (UTC)[reply]
You can avoid roundoff errors in the additions and subtractions by doing the rounding before that stage. For instance, in computing terms, if you round each square to a float instead of a double before adding it to a double, and you don't have too wide a range of values, then the double value will be the exact sum of the float values. Dmcq (talk) 12:54, 12 January 2012 (UTC)[reply]

HappyCamper: Thanks. My searches for recursive variance turned up mixed results: either a method that does what I'm already doing, a description of the exponentially weighted variance I described above, or a method like Welford's algorithm, which isn't a sliding-window algorithm.

Dmcq: Intuitively, rounding off a double to a float seems like it would introduce even more error than one would get by starting out with floats in the first place. After thinking about it, though, I see how errors in the low significant digits would get lopped off by rounding to floats, so that might work. What I'm doing now, just for safety, is to use max(0, N·T − S²) as my numerator. At least that will prevent the possibility of a negative argument in the square root. ~Amatulić (talk) 20:33, 12 January 2012 (UTC)[reply]
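
For later readers, a Python sketch of the sliding-window scheme from this thread, including the max(0, ...) guard; the class and its interface are invented for illustration, and the per-sample update itself is loop-free:

```python
import math
from collections import deque

class RunningStd:
    def __init__(self, n):
        self.n = n
        self.window = deque()
        self.s = 0.0   # S: sum of the last n values
        self.t = 0.0   # T: sum of the squares of the last n values

    def add(self, x):
        self.window.append(x)
        self.s += x
        self.t += x * x
        if len(self.window) > self.n:
            old = self.window.popleft()   # the value from N samples ago
            self.s -= old
            self.t -= old * old

    def std(self):
        n = len(self.window)
        if n == 0:
            return 0.0
        # sigma = sqrt(N*T - S^2) / N, clamped against negative roundoff
        return math.sqrt(max(0.0, n * self.t - self.s * self.s)) / n

rs = RunningStd(100)
for i in range(1000):
    rs.add(math.sin(i) + 1e6)   # a large offset stresses the cancellation
print(rs.std())                 # roughly 0.707; roundoff may be visible
```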

January 12

Power series reloaded

Hey again, sorry for so many questions! I will help someone else when I can, I promise! My question is: where does the power series for an exponential function a^x arise naturally? Alternatively put, assuming we know nothing about Taylor series but know that infinite series "work", how could we discover the series without constructing it from its derivatives? (I think the series for this particular function was known before Taylor series were developed, so someone apparently did.) What I'm thinking of is something along the lines of a formula or discovery where the series just "appears" (possibly without our knowing it is the series for a^x). Thanks, and sorry if I wasn't really clear. 24.92.85.35 (talk) 02:53, 12 January 2012 (UTC)[reply]

The exponential function y = e^x satisfies the linear differential equation y′ = y and the initial condition y(0) = 1. Substituting the power series y = Σ a_i x^i into the initial condition gives a_0 = 1. Substituting the power series into the differential equation gives Σ i a_i x^(i−1) = Σ a_i x^i = Σ a_(i−1) x^(i−1). So i a_i = a_(i−1), so a_i = a_(i−1)/i = a_0/i! = 1/i!. So e^x = Σ x^i/i!. Bo Jacoby (talk) 08:22, 12 January 2012 (UTC).[reply]
Looks like the OP was asking about a^x for a general a. But this is kind of moot, since to expand this you usually first translate it to e^(x ln a) and use the power series for exp. And the series for that can be found, for example, using exp′ = exp or the limit of (1 + x/n)^n. Both ways are fairly elegant; I don't think you'll find something more "natural". -- Meni Rosenfeld (talk) 10:38, 12 January 2012 (UTC)[reply]
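
A small Python sketch of this derivation, assuming nothing beyond the recurrence itself (function names are invented):

```python
import math

# The recurrence a_i = a_(i-1)/i builds the series term by term, and
# a^x follows as e^(x ln a).

def exp_series(x, terms=30):
    total, term = 0.0, 1.0      # term starts at a_0 * x^0 = 1
    for i in range(terms):
        total += term
        term *= x / (i + 1)     # a_(i+1) x^(i+1) = (a_i x^i) * x/(i+1)
    return total

def power(a, x):
    return exp_series(x * math.log(a))

print(power(2.0, 10.0), 2.0 ** 10)
```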

How advantageous is it to replace a paper-towel dispenser with a wall dryer?

In a dorm restroom, how often does an average stack of paper towels get used up, how much does a new stack cost, and therefore how much will it cost per month and per year to keep the typical paper towel dispenser restocked?

On the other hand, if it gets replaced with a heated wall dryer, how much electricity would a typical new 2012-model wall dryer consume in each typical use, in a month, and in a year? (Let us assume 9¢ a kilowatt-hour, as it was 8.44¢/kWh in Kansas last time I checked, in May.)

(The initial purchase of a dryer will be made by donation, so the recipient won't have to worry about the procurement cost.)

Therefore, how far ahead will the dorm come out by switching from a paper towel dispenser to a heated wall-dryer? Thanks! --70.179.174.101 (talk) 13:23, 12 January 2012 (UTC)[reply]

How many paper towels are in an average stack? It depends. How often is an average stack used up? It depends. How often is the stack refilled? It depends. How much does a new stack cost? It depends. How much does the electricity for a heated dryer cost? It depends. That is the only thing that you gave values to work with. You need to make assumptions about everything else. Otherwise, any result is simply useless. -- kainaw 13:51, 12 January 2012 (UTC)[reply]
You could get an energy monitor to measure actual energy usage, or estimate it by taking the power rating of the device and multiplying it by the time the dryer is on. Finding out how much energy is used to make a paper towel is harder; you could start at Carbon footprint to try to find out.--Salix (talk): 14:22, 12 January 2012 (UTC)[reply]
Another option I thought of - if this is real life - is to simply install the air dryer next to the paper towel dispenser. If the air dryer saves money and some people actually use it, then there will be an overall savings. Of course, there would be more savings if everyone simply stopped washing their hands! -- kainaw 14:27, 12 January 2012 (UTC)[reply]

Shouldn't questions like this be under Miscellaneous rather than maths? The work here is looking up things and knowing about the objects, why would anyone on this reference desk know anything like that? Dmcq (talk) 14:35, 12 January 2012 (UTC)[reply]

Well, I did a bit of looking about and it's tricky to find comprehensible data. The European Commission gives full lifecycle inputs and outputs for some materials such as corrugated board boxes; slightly more useful is the UK Environment Agency, which has a spreadsheet of CO2 equivalents for the construction industry[5]. The nearest equivalent to a paper towel I could find on that is particle board, at 0.54 tonnes of CO2 per tonne of material. Grid electricity is 0.00059368 tonnes of CO2 per kWh, or 593.68 g CO2 per kWh. I estimate 1 paper towel is 5 g, so 2.7 g of CO2 equivalent (excluding transport costs). I estimate an air dryer runs at about 3 kW (same as my kettle) and I spend 30 seconds drying my hands with it, so that's 0.025 kWh, or 15 g of CO2. My estimates could easily be off by an order of magnitude, so I can't draw any conclusion.--Salix (talk): 15:48, 12 January 2012 (UTC)[reply]
The OP didn't raise any question about carbon, but about cost of the $ & £ sort. --Tagishsimon (talk) 15:54, 12 January 2012 (UTC)[reply]
Another issue is that paper towels can do things which hot-air blowers can't, like allowing you to open the bathroom door without having germs redeposited on your hand, or cleaning up water splashed all over the sink. So, if you remove the paper towels, you may find that toilet paper is used for those purposes instead, and its increased usage must then enter into the calculations. So, I second the idea of offering both the blower and towels. Paper towels can also be recycled and made from recycled paper, so that's a plus. StuRat (talk) 16:42, 12 January 2012 (UTC)[reply]
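
Since this is the maths desk, here is the payback arithmetic as a Python sketch; every number in it is a placeholder assumption, as the replies stress, so substitute measured values:

```python
towel_cost_per_use = 2 * 0.01           # 2 towels at about 1 cent each
dryer_kwh_per_use = 3.0 * 30 / 3600     # a 3 kW dryer running 30 seconds
dryer_cost_per_use = dryer_kwh_per_use * 0.09   # at 9 cents/kWh
uses_per_day = 200                      # a busy dorm restroom
dryer_price = 500.0                     # ignored if the dryer is donated

saving_per_day = (towel_cost_per_use - dryer_cost_per_use) * uses_per_day
print(f"saving per use:  ${towel_cost_per_use - dryer_cost_per_use:.4f}")
print(f"saving per year: ${saving_per_day * 365:.2f}")
print(f"payback without donation: {dryer_price / saving_per_day:.0f} days")
```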

January 13

Help with integral

How do you evaluate this integral? Could someone help?

deeptrivia (talk) 00:12, 13 January 2012 (UTC)[reply]