
Wikipedia:Reference desk/Mathematics



Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 4

statistics

I'm having difficulty with my thesis. Can "knowledge or awareness of people to statistics" be subjected to study? What statistical tool can be used? — Preceding unsigned comment added by Rionsgeo (talkcontribs) 02:06, 4 January 2011 (UTC)[reply]

If I understand you, you want to measure people's knowledge and awareness of statistics, using statistical methods. I suppose you could do a poll/quiz where you ask people how often they use statistics, then ask them to solve some statistics problems, and use that data to calculate standard deviations, confidence intervals, etc.
One suggestion for a refinement of your thesis: "Resolved, that people who are ignorant or distrustful of statistics tend to engage in statistically unhealthy habits and thus shorten their lives." You could design a way to test this assertion and either prove or disprove it. StuRat (talk) 05:43, 4 January 2011 (UTC)[reply]
Why does this remind me of Correlation from xkcd? – b_jonas 10:12, 7 January 2011 (UTC)[reply]

Permutations

How can I compute the number of ways to choose n elements in sets of size k (with replacement), so that no element occurs in each set more than x times? 70.162.9.144 (talk) 07:30, 4 January 2011 (UTC)[reply]

I don't know if this helps, but it should be equal to the coefficient of t^k in (1 + t + t^2 + … + t^x)^n, if I followed your notation correctly. I'd guess there isn't any nice closed-form solution. Are you looking for a way to efficiently compute it? Eric. 82.139.80.114 (talk) 01:39, 5 January 2011 (UTC)[reply]
Sorry, could you explain what you mean by the coefficient and how it is derived from (1 + t + t^2 + … + t^x)^n? 70.162.9.144 (talk) 04:37, 5 January 2011 (UTC)[reply]
(1 + t + t^2 + … + t^x)^n is expanded by the Multinomial theorem. Bo Jacoby (talk) 12:56, 5 January 2011 (UTC).[reply]
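For anyone wanting to check the generating-function answer numerically, here is a minimal Python sketch (the function names are mine, not from the thread) that multiplies out the polynomial and compares against brute-force enumeration:

  import itertools

  def count_by_gf(n, k, x):
      # coefficient of t^k in (1 + t + ... + t^x)^n,
      # built up one factor at a time
      poly = [1]
      for _ in range(n):
          new = [0] * (len(poly) + x)
          for i, a in enumerate(poly):
              for j in range(x + 1):
                  new[i + j] += a
          poly = new
      return poly[k] if k < len(poly) else 0

  def count_brute(n, k, x):
      # direct enumeration of multisets of size k from n elements,
      # each element used at most x times
      return sum(1 for c in itertools.combinations_with_replacement(range(n), k)
                 if all(c.count(e) <= x for e in set(c)))

  print(count_by_gf(4, 3, 2), count_brute(4, 3, 2))   # both print 16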

Identifying a quotient

Let A be free abelian on 3 generators a, b, c and K the subgroup of all elements (n+m)a + (n−m)b + (m−n)c for integers n, m. Is A/K just Z × Z/2Z? This seems like a very trivial task a computer should be able to do; is there any software to identify stuff like this? Money is tight (talk) 08:26, 4 January 2011 (UTC)[reply]

K is generated by the elements a + b − c and 2a; it thus consists of the elements of the form na + mb + kc where k = −m and n ≡ m (mod 2). From this it follows easily that K = Ker(f), where f: A → Z × (Z/2Z) is defined by f(na + mb + kc) = (m + k, n + m mod 2). Since f is clearly onto, A/K is indeed isomorphic to Z × (Z/2Z).—Emil J. 12:41, 4 January 2011 (UTC)[reply]
Thanks, I'm used to doing "show/prove" questions and this question wasn't one of those, just needed some confirmation to check my understanding is correct. Money is tight (talk) 00:56, 5 January 2011 (UTC)[reply]
I think the convention is to use the symbol ⊕ to denote the direct sum of abelian groups. (Instead of using the direct product, which is often used with non-abelian groups.) So it would be normal to write Z ⊕ Z/2Z. If A1, …, An are abelian groups then the direct sum A1 ⊕ … ⊕ An is the set of n-tuples (a1, …, an), where a1 ∈ A1, …, an ∈ An, under the binary operation
(a1, …, an) + (b1, …, bn) = (a1 + b1, …, an + bn).
This turns A1 ⊕ … ⊕ An into an abelian group. Fly by Night (talk) 13:43, 5 January 2011 (UTC)[reply]
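On the software question: computer algebra systems identify such quotients via the Smith normal form of the relation matrix. A minimal sketch, assuming a reasonably recent SymPy (the zero row is only there to keep the matrix square):

  from sympy import Matrix, ZZ
  from sympy.matrices.normalforms import smith_normal_form

  # rows are the generators of K, from (n,m) = (1,0) and (0,1)
  R = Matrix([[1, 1, -1],
              [1, -1, 1],
              [0, 0, 0]])
  print(smith_normal_form(R, domain=ZZ))
  # diagonal entries 1, 2, 0, so A/K = Z/1 (+) Z/2 (+) Z = Z (+) Z/2Z

Dedicated systems such as GAP or Magma can of course answer this kind of question directly as well.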


January 5

Special magic squares

I'm reading a book that has a chapter on magic squares, and it gives the following special magic square:

 5 22 18
28 15  2
12  8 25

If you write the names for the numbers out in English and then count the number of letters in each name, you get another magic square:

 4  9  8
11  7  3
 6  5 10

It then says that, for totals of less than 200, English has seven of these squares, while French only has one. What are they? --75.60.13.19 (talk) 03:08, 5 January 2011 (UTC)[reply]
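The letter-count property of the given square is easy to verify mechanically; a small Python sketch (names written without spaces or hyphens, e.g. "twenty-two" counted as 9 letters):

  names = {5: "five", 22: "twentytwo", 18: "eighteen",
           28: "twentyeight", 15: "fifteen", 2: "two",
           12: "twelve", 8: "eight", 25: "twentyfive"}

  square = [[5, 22, 18], [28, 15, 2], [12, 8, 25]]
  letters = [[len(names[v]) for v in row] for row in square]

  def is_magic(m):
      s = sum(m[0])
      return (all(sum(r) == s for r in m)
              and all(sum(m[i][j] for i in range(3)) == s for j in range(3))
              and sum(m[i][i] for i in range(3)) == s
              and sum(m[i][2 - i] for i in range(3)) == s)

  print(is_magic(square), is_magic(letters))   # True True

Extending this to search all squares with totals under 200 would need a full English number-to-words routine, which is left out here.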

non-trivial irreducible character of primitive permutation character is faithful?

Hello,

I am struggling with what should be an easy exercise...

let G be a group acting primitively on a set Ω. Let χ be the permutation character (let's say over ℂ). Hence χ maps every g ∈ G to the number of elements in Ω it fixes.

Let ψ be any irreducible constituent of χ, different from the trivial character. Prove that ψ is faithful (i.e. the corresponding representation of G maps only the trivial element of G to the trivial matrix).

I think it should be easy, but it appears I am missing a crucial observation. I already know why primitivity is important, because otherwise the non-faithful permutation character on the blocks of imprimitivity would be contained in the character.

Groups of order 24

Do all the 15 groups of order 24 have a subgroup of order 12? How can I prove or disprove this? -Shahab (talk) 09:41, 5 January 2011 (UTC)[reply]

Take a look at the Sylow theorems article. As the article says, they are "a collection of theorems... that give detailed information about the number of subgroups of fixed order that a given finite group contains." Since 24 = 2^3·3, Burnside's theorem tells us that G is solvable. There's lots more information to be had in the Solvable group article. Fly by Night (talk) 15:48, 5 January 2011 (UTC)[reply]
I think that the semidirect product G = Q8 ⋊ C3 has no subgroup of order 12, where Q8 is the quaternion group, and C3 acts on Q8 by cyclic permutation of i, j, k. Any subgroup of G of index 2 would be normal, hence it would induce a nontrivial homomorphism f: G → C2. However, f vanishes on C3 (as it has order 3) and on at least one of i, j, k (as ij = k), and therefore on all of i, j, k (as they are conjugate in G, and C2 is abelian). Thus f is trivial, a contradiction.—Emil J. 16:00, 5 January 2011 (UTC)[reply]
This is a central extension of A4 by C2. A4 has no subgroups of order 6, and this implies that the extension has no subgroups of order 12. This is also the only group of order 24 that has no subgroup of order 12. I got this by checking generators and relations for all 15 groups; these are well known, but I suppose it wouldn't be too hard to prove it from scratch using Sylow etc.--RDBury (talk) 16:54, 5 January 2011 (UTC)[reply]
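Emil J.'s argument can also be checked by machine: Q8 ⋊ C3 is isomorphic to SL(2,3) (a standard fact), and a subgroup of index 2 would force the abelianization to have even order. A brute-force Python sketch (all names mine):

  def mul(a, b):
      # multiply 2x2 matrices (a0 a1; a2 a3) over Z/3
      return ((a[0]*b[0] + a[1]*b[2]) % 3, (a[0]*b[1] + a[1]*b[3]) % 3,
              (a[2]*b[0] + a[3]*b[2]) % 3, (a[2]*b[1] + a[3]*b[3]) % 3)

  def closure(gens):
      elems = {(1, 0, 0, 1)} | set(gens)
      while True:
          new = {mul(a, b) for a in elems for b in elems} - elems
          if not new:
              return elems
          elems |= new

  G = closure([(1, 1, 0, 1), (1, 0, 1, 1)])   # elementary matrices generate SL(2,3)
  inv = {a: next(b for b in G if mul(a, b) == (1, 0, 0, 1)) for a in G}
  D = closure({mul(mul(a, b), mul(inv[a], inv[b])) for a in G for b in G})
  print(len(G), len(G) // len(D))             # 24 3: the abelianization has odd order 3,
                                              # so no map onto C2, hence no index-2 subgroup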

1+1=2

What were the definitions and axioms used in Principia Mathematica that required such a long proof of 1+1=2 and other basic arithmetic facts? A link to a page or website describing the foundations is sufficient; I just don't know where to find such a reference. --24.27.16.22 (talk) 15:54, 5 January 2011 (UTC)[reply]

At the very end of that page about Principia Mathematica you can find proposition ∗54.43, the one about 1+1=2, validated using a modern theorem checker, Metamath. Dmcq (talk) 16:06, 5 January 2011 (UTC)[reply]
Qualitatively speaking, the issue is that the axioms are very low-level. For instance, 1+1=2 does not require proof if you are using the Peano axioms. But if you don't want to do that, you could start with Zermelo–Fraenkel set theory and derive the Peano system from there. This would be a much longer proof than that quoted on the PM page. Both the Peano and ZFC schemes pre-date PM, and Whitehead even used much of Peano's notation. I think that the PM axioms are lower-level / weaker than ZFC, but hopefully someone else can elaborate on that. I can't easily find a short itemized list of axioms in the PM, but you can download the whole book through Google Books here [1]. SemanticMantis (talk) 17:03, 5 January 2011 (UTC)[reply]
If we start with ZFC, and identify natural numbers with finite cardinals and define addition as a disjoint union of sets (using some silly trick like A + B = (A×{∅}) ∪ (B×{{∅}})), would a complete formal proof of 1+1=2 require a similar length? --24.27.16.22 (talk) 17:49, 5 January 2011 (UTC)[reply]
No, but that's an awful lot of "if"s. Russell and Whitehead were working at a time when there were no settled notions of things such as a logical theory, and so they had to build up everything from the ground up. Many tricks and tools that would be used in a modern work to simplify and streamline the presentation simply hadn't been invented yet (or had been used once or twice but their general applicability not appreciated). Furthermore, the theory they designed is/was markedly more cumbersome to work with and reason about than ZFC, and as a result didn't really catch on.
Also, nobody's saying that all of the 360 pages that preceded ∗54.43 were necessary prerequisites to that particular proposition. As a comparison with a newer text, Mendelson's Introduction to Mathematical Logic (4th ed., 1997) reaches page 258 before it defines "+". However, that includes 70 pages that develop formal number theory from the Peano axioms and are not used for the axiomatic set theory, so "number of pages before addition is defined" is not really a meaningful metric. –Henning Makholm (talk) 22:08, 5 January 2011 (UTC)[reply]
Does anyone know any source where real everyday mathematics is developed formally in ZFC? By real math I mean calculus, algebra, analysis etc. Money is tight (talk) 23:22, 5 January 2011 (UTC)[reply]
http://us.metamath.org/mpegif/mmset.html 67.122.209.190 (talk) 08:50, 6 January 2011 (UTC)[reply]

What constitutes a valid solution?

One of my profs made an interesting remark today. He was doing an example on the board, and the answer came out to √2. But he stopped, and said that he didn't feel this was a complete solution, and that the fullest answer possible would be to say that √2 = 1.4142….

As an example, he said, suppose we had to solve the equation x^3 = 4. Saying that x = ∛4 is just tautological. The only meaningful solution would be to write out the cube root of 4 as a decimal expansion.

At first I agreed, but thinking about it now, I'm not so sure. A decimal expansion of a number is, by definition, the expression of the number as a finite or infinite series of fractions whose denominators are powers of ten. How is this any more valid an answer? —Preceding unsigned comment added by 74.15.138.87 (talk) 16:56, 5 January 2011 (UTC)[reply]

What constitutes a "complete solution" in mathematics is somewhat arbitrary, and most professors would be perfectly happy to accept √2 in my experience, since that is the most precise answer one can provide. It's really not clear exactly what your professor is looking for in a complete solution, except possibly that he expects final answers to be written out in decimal notation? --COVIZAPIBETEFOKY (talk) 17:14, 5 January 2011 (UTC)[reply]

√2 is exact, whereas the decimal form is approximate. Of course all solutions of equations are in a sense tautological. Suppose the equation had been 3x = 5.

Would he object that x = 5/3 was "just tautological"? Michael Hardy (talk) 18:30, 5 January 2011 (UTC)[reply]

Right on Michael. Much of math can be seen as a search for tautologies. In my experience, "just a tautology" is used as a dismissal much more commonly in fields outside of math. SemanticMantis (talk) 19:02, 5 January 2011 (UTC)[reply]
Incidentally: I took two classes taught by Alphonse Vasquez, and remember he used to say that everything that's been proven in math is tautological, and if one doesn't see something as such then he hasn't sufficiently wrapped his head around it.—msh210 19:16, 5 January 2011 (UTC)[reply]

When I teach math, I ask students for exact answers. Partially this is to make it easier on the grader, and partially because I think it is important to grasp that root two cannot be expressed exactly in decimal notation. But this is just personal preference. Also, asking for decimal approximations implicitly encourages students to use calculators when they are not required, and I believe this is counterproductive to really learning math. --But on to your prof's comments. I disagree completely that the decimal approximation is the "fullest possible answer", but perhaps this is not a direct quote from the instructor. Rather than talk of 'meaningful' or 'valid', we may consider whether an answer is *informative*. Consider an application where two quantities are to be compared. If solved exactly, it is not easy to see how 2^(1/5) compares to 5^(1/10). However, it's quite easy to see that 1.1487... < 1.1746... So really, the best form for an answer depends on why you're quantifying something in the first place. SemanticMantis (talk) 18:55, 5 January 2011 (UTC)[reply]

But before a student could reach for a calculator, she should notice that 2^(1/5) = 4^(1/10) < 5^(1/10). -- 119.31.126.69 (talk) 16:38, 8 January 2011 (UTC)[reply]
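A quick numeric confirmation of that comparison (assuming the two radicals are the fifth root of 2 and the tenth root of 5):

  print(2 ** 0.2, 5 ** 0.1)   # 1.1486... < 1.1746...
  print(2 ** 2, 5)            # the calculator-free route: 10th powers give 4 < 5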

There's an important point that my prof made that I forgot to mention. In the final solution, he doesn't want decimals; he wants radicals, because decimals aren't exact. His point was that the only reason √2 is an acceptable answer is because someone could look up the decimal expansion to the desired precision. Presumably, he also means to say that if mathematicians were dumber, and had made the notation √2 but couldn't figure out how to expand it, then √2 would be meaningless. But I don't see why decimal representation should have more validity than other representations. At the same time, if we accept, as Michael Hardy said (and which I agree with), that all solutions are tautological, then that would mean that when we say x = √2, we are really saying "I have found that the solution to such-and-such equation also happens to be the positive solution to the equation x^2 = 2". I guess this makes sense, but it makes the whole business of solving equations kinda...arbitrary, no? 74.15.138.87 (talk) 20:53, 5 January 2011 (UTC)[reply]

It depends why you're solving something. Do you need to know how many people it will take to change your lightbulb? Then you need a decimal representation (actually, the ceiling of one, usually). Do you need a number you can substitute into another expression? Then a form like √2 is usually best. Do you need it as an element of ℂ? Then you want (perhaps) √2 + 0i. Etc.—msh210 21:00, 5 January 2011 (UTC)
That's very dismissive of tautologies. I try and make everything I say logical and as nearly tautological as possible. Like for instance 'If I don't get some sleep I'll never wake up in the morning' ;-) Dmcq (talk) 21:30, 5 January 2011 (UTC)[reply]
I grappled with the same problem when initially introduced to square roots and logarithms at school (and arcsin... in fact, ANY inverse function). It seemed retarded and meaningless to say that the solution of 10^x = a is x = log a (decimal log obviously). What's the point of just inventing notation and calling these answers "solutions"? They are no more useful than the original equations. In a sense, ALL inverse functions are just tautologies, useless for ACTUALLY solving anything. That was until I reached university and learned about Taylor series, which allow you to actually compute and evaluate those expressions in a meaningful way. That was when it all made sense. Zunaid 12:20, 6 January 2011 (UTC)[reply]

Math and prejudice

How many cases should you consider before you come to the conclusion that an ethnic group has this or that feature? For example, if you take nations of 300 million or 100 million people, is my personal experience of 200 interactions each year enough? Quest09 (talk) 17:03, 5 January 2011 (UTC)[reply]

There's a question above about elections that is related. What if the 200 people you meet all had a property that no-one else in the population had? I guess you want to answer the following question: if p% of a sample has a certain property, then what's the probability that (p±d)% of the whole population has that property? You need to decide what percentage of the sample/population needs to possess a property before you call it a characteristic. Take a look at my question, and Meni's answer, here. Fly by Night (talk) 17:19, 5 January 2011 (UTC)[reply]
This is a question of statistics. A simple random sample of 1000 or so people is generally enough to establish with reasonable certainty the rough proportion of people in a given population who possess a feature/hold a particular opinion/etc, independently of the size of the population (assuming only that it is significantly larger than 1000). Your personal experiences most likely do not constitute a simple random sample, and therefore cannot be used for this purpose. --COVIZAPIBETEFOKY (talk) 17:24, 5 January 2011 (UTC)[reply]
What COVIZAPIBETEFOKY said. We all have social interactions strongly biased by our income, locality, profession, etc. Our own experience is never a good source from which to extrapolate to the general population. See Sampling bias#Historical examples for some famous cases where samples of millions that were not truly random and unbiased were insufficient. RayTalk 18:35, 5 January 2011 (UTC)[reply]
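To put a number on "1000 or so is generally enough": a rough sketch of the 95% margin of error for an estimated proportion under the normal approximation (the function name is mine):

  import math

  def margin(p, n):
      # 95% margin of error for a proportion p from a simple random sample of size n
      return 1.96 * math.sqrt(p * (1 - p) / n)

  print(margin(0.5, 1000))   # ~0.031: about +/-3 points, independent of population size
  print(margin(0.5, 200))    # ~0.069: 200 interactions give only +/-7 points at best,
                             # and everyday interactions are not a random sample anyway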

The Real Answer

Let me put it this way. Let's say your IQ is 300, and you've just made a handful of major breakthroughs in your chosen field, however you are just an undergraduate at a huge state school. Let's say that you are able to prove yourself to be an extremely valuable researcher to a professor at your University, if he will speak to you for 10 minutes. Then he would be convinced by your ideas, be instantly swayed, and want to publish with you. As a result of this, you would be able to get admitted to the graduate program of your choice, even if you did nothing more than flesh out the ideas you just published as an undergrad, you would be set to get tenure based on that, if not at Harvard, then at least in some respectable state school such as the one you're attending. There's just one little problem: your state school is in Missouri, has low standards of admission, you are of a minority race, and the professor has had a LOT of experience with semi-literate members of that minority!! He might refuse the 10-minute interview on those grounds alone, just from remembering your face among the 300 faces he teaches at any one time!! So, let me ask you the question this way: how many members of your race that he had experiences with, who were semi-literate and had an IQ of more like 60 (one fifth of yours), would make you say: You know what, he shouldn't waste 10 minutes on an interview with me, it's just not a reasonable request. 100 such people? 1000? A million? How about if you're Indian, and there are one BILLION people who are all different from you, and you're the only Indian who can do Italian opera in all the world? Would you agree that the director of La Scala should refuse to even listen to you on that basis? 87.91.6.33 (talk) 19:05, 5 January 2011 (UTC)[reply]

I would argue that such an interview should never be refused. However, there are cases where the "judge" is in such demand that he can't spend the time, so then underlings should be enlisted to do a "pretest", to determine if you are anything worth bothering the "judge" about. StuRat (talk) 21:56, 5 January 2011 (UTC)[reply]
Exactly. The OP should come to the same conclusion, and, therefore, realize that he should not become racist even after a billion examples confirming his suspicions about a group. 87.91.6.33 (talk) 22:02, 5 January 2011 (UTC)[reply]
(ec) Please, there is no need for emotional language; just state your assertion clearly and with justification (and without fallacies like appeal to consequences).
The question Quest09 asked ("is my personal experience of 200 interactions each year enough [... to] come to the conclusion that an ethnic group has this or that feature") is a simple question of statistics for which COVIZAPIBETEFOKY gave a direct answer ("Your personal experiences most likely do not constitute a simple random sample, and therefore cannot be used for this purpose."). Eric. 82.139.80.114 (talk) 22:05, 5 January 2011 (UTC)[reply]
Yes, this was a question about mathematics in the math RD. I was not asking about moral implications and actually not even thinking about discriminating people. Quest09 (talk) 23:04, 5 January 2011 (UTC)[reply]
Of course there also is an implicit assumption about why the interview was rejected. Maybe the professor can only grant so many interviews and cuts it off at an arbitrary number. Or maybe he grants no non-class-related interviews to undergrads at all. Or maybe he has already seen common misconceptions during the initial request for the interview. All of these are arguably beyond the realm of mathematics, of course. Unless you consider "There are only 16 working hours per day; if I grant one interview to every one of my 600 students I'll never finish that research" as maths ;-) --Stephan Schulz (talk) 10:41, 6 January 2011 (UTC)[reply]
600 students won't ask you for an interview. And even if they do, if you spend 15 minutes with each, that would only amount to about 30 minutes a day over a year. Quest09 (talk) 12:10, 6 January 2011 (UTC)[reply]
With 600 what I'd do is I wouldn't even bother reading the first line of the application for most. I'd do a random draw of a selection to check through and then winnow down to an even smaller number to interview. Actually I think interviews are very overrated, so it's mainly to eliminate the unsuitable rather than to pick the best. Dmcq (talk) 13:15, 6 January 2011 (UTC)[reply]

let me spell it out for you guys

"the conclusion that an ethnic group has this or that feature" is the definition of racism. Sorry. That thought is the definition of racism. In other words, this is a question about what level of statistical confidence justifies racism. The answer is: none. Even if you have a hundred billion examples of a member of an ethnic group with a certain feature, you still can't come to "the conclusion that an ethnic group has this or that feature". Is that clear enough for you? How about this way: I grew up in a very poor part of Boston. I met literally thousands of black kids who were way below the required level in their grade. How many should I have met before I concluded that a black kid has features that make them, say, not qualified for a Presidential-track education? The answer is, there is no such number. (Math: Not a Number, NaN). Because even if you have 300,000,000 black kids who can't be president, because they're too stupid and all the ritalin in the world would not make them smart enough: it only takes one. You can never induce the rule. The rule is the definition of racism. Clear enough for you? 87.91.6.33 (talk) 20:38, 6 January 2011 (UTC)[reply]

in yet different terms, an ethnic group can't have any features (besides the tautological ones*). Except in the mind of a racist. A racist can list dozens of features for any given ethnic group. Go try a racist sometime, I am not making this up. 87.91.6.33 (talk) 20:43, 6 January 2011 (UTC)[reply]
* tautological = obviously if you group people in races based on criteria, the "race" will have members meeting those criteria... but this says nothing about them, and only about you and your grouping choices -- it's "begging the question".
It's not racist to believe or state a fact about demographics if there's truth behind it. Chances are, however, that the true statement will not be a universal; it will merely be that a certain percentage of the population shares a characteristic, and that this percentage happens to be unusually high compared to other demographics. Also, statement of a fact does not necessitate any particular response, positive or negative, to the fact.
For instance, it is definitely true that a higher percentage of black people in the United States are below the poverty line than the US population as a whole. This does not necessarily mean that black people are somehow inherently poor, and that we should discriminate against them for being incompetent or, at the other end of the spectrum, that we should send more money to the cause of improving the welfare of black people; it is merely a statement of fact.
We may then speculate as to whether black people are in some way rendered incapable of finding and keeping a self-sustaining job as a consequence of genetic predispositions not directly/obviously related to the color of their skin, or if there are social and environmental pressures that cause them to fall below the poverty line, and they could have done better if they had been brought up differently. I think most people agree that the latter explanation is the more significant factor, in this case. We would also want to speculate what the proper response would be to encourage improvement in the welfare of black people, a problem which remains unsolved.
It is also important to be able to make observations about various characteristics of a population in order to better understand a variety of phenomena. For instance, again in the US, non-whites as a direct percentage are more likely to vote democrat than whites are. But if you correct for socioeconomic status, which is also seen to affect a person's vote, you actually find that non-whites are more likely to vote republican than whites are. Direct percentages are misleading, but you can't see this until you are willing to take in 'racist' data. --COVIZAPIBETEFOKY (talk) 02:40, 7 January 2011 (UTC)[reply]
It is important to separate what is true from what we believe. For example, I know it is true that black Americans are dumber -- have a lower IQ -- than white Americans. See the book called "The Bell Curve". However, even though I am aware of this fact, I do not believe it. I do not believe that black Americans are dumber than white Americans. Why? Because I'm not a racist. It's that simple guys. When it comes to groups of people, you simply can't internalize statistically "true" statements about that group, unless you're a racist. Only a racist would agree that black Americans are dumber than white Americans. Your very first sentence "It's not racist to believe or state a fact about demographics if there's truth behind it" is, simply false. Put another way, when you hear a racist statement, you need to realize that you are being asked to participate in racism -- and not react by questioning whether the statement is true. You have a store. "Well, we've just gotten through the first round of interviews, who should we call back for a second interview?" If someone says "let's not call back the black guys, because blacks are far poorer and more likely to steal from us, this will increase our chances of finding someone reliable." It doesn't matter what city in the world you're living in. It doesn't matter if 20%, 50%, 75%, or 98% of blacks will, in fact, steal from your store. (short the till). You can't believe that statement, because you are not a racist. (You can turn this around. If you're the first black guy in your family to get a college degree, and yours is in art history, what percent of black people in your city would be likely to be unreliable manpower in the art gallery, and short the till -- steal from their employers -- before you agreed that they shouldn't call you in for a second interview because you're black? 50%? 70%? 99%? No: the answer is, even if every single black man in the whole city except you would make a terrible employee in that shop, you still do not want the shop to make the conclusion that black people will steal from them). Frankly this whole discussion is extremely dirty, and I can't believe we're having it in English in 2011. Philosophically, everything I've written above could be interesting in 1947. It's 2011. All of this is way in the past. We have a black President. I don't know why we're even talking about why you can't generalize about a group of people no matter how "true" such a statement would be. 87.91.6.33 (talk) 11:32, 7 January 2011 (UTC)[reply]
This is the kind of discussion I expect in Germany. I recently lived in Munich for a year, and that city was the most racist, and thus awful, place of any location I've ever lived. Of course, if someone is from Munich, I will not generalize and say they are a racist. It's simply my experience of that awful city. It's no wonder that Hitler, who actually wasn't German but Austrian, had to travel all the way to Munich, Germany before he could find an audience for his spiels. 87.91.6.33 (talk) 11:49, 7 January 2011 (UTC)[reply]
See this article. The guy is the next Hitler, right down to the mustache. And, according to you, as long as his book doesn't make factual mistakes in its statistics, it is "right". Do you even have any idea what the consequences of what you're promoting are? 87.91.6.33 (talk) 11:54, 7 January 2011 (UTC)[reply]
This is a maths reference desk. Please take unrelated stuff elsewhere. It is not a soapbox see WP:SOAP. Dmcq (talk) 13:39, 7 January 2011 (UTC)[reply]
Actually, the question was entitled "Math and prejudice", and consists in whole of two short questions. The first reads "How many cases should you consider before you come to the conclusion that an ethnic group has this or that feature?". The answer to this question is "Not a Number" - there is no number that justifies the conclusion. The second simple question is: "For example, if you take nations of 300 million or 100 million people, is my personal experience of 200 interactions each year enough?". The answer is "no". In fact, for a nation of 300 million, not even personal interaction with 299,999,999 of them is enough to come to a conclusion about the one you didn't meet. The reason Obama got a shot at the presidency despite being elected by people who have met plenty of African-Americans who were not qualified to be president (very far from it) is that they understood this fact. Sorry, there is no mathematical question here, the original poster wants to know what sample size justifies racism or prejudice, and the answer is that there is no such sample size. Even if you meet EVERY single member of a race, you still can't make any statements about that race. What? How??? Because someone new can be born into that race who is the first counterexample. Sorry, your "prejudice" (OP's word from the title) has no justification in mathematics or statistics at any sample size. 87.91.6.33 (talk) 14:39, 7 January 2011 (UTC)[reply]
Dmcq is right. If you must continue this discussion, I've responded to your silly post on my talk page. --COVIZAPIBETEFOKY (talk) 15:44, 7 January 2011 (UTC)[reply]
This is ridiculous. Believing things without sufficient evidence is the reason that racism exists, not the solution for it. The OP should not come to racist conclusions because A) Observed statistical differences between the races are due to sample bias and external conditions, not inherent genetic differences and B) Averages are averages. Even if a group did have an inherently lower average IQ (which is not the case) there would still be many intelligent members of that group and there are many ways that someone can contribute positively to society for which IQ play no or a very small role. We cannot solve any problems with society by believing things unjustified by evidence, only by looking at the world and coming up with informed solutions. Racism has no justification in fact, and we should point to the facts rather than making tolerance something entirely separate from critical thinking. 76.67.79.61 (talk) 01:23, 10 January 2011 (UTC)[reply]
You've missed the point. I'd restate it, but I don't think I was unclear to begin with. --COVIZAPIBETEFOKY (talk) 03:15, 10 January 2011 (UTC)[reply]
I was not addressing you; I was addressing 87.91.6.33, who said "I know it is true that black Americans are dumber -- have a lower IQ -- than white Americans. See the book called "The Bell Curve". However, even though I am aware of this fact, I do not believe it." I'm sorry if this was unclear. 74.14.110.15 (talk) 07:07, 10 January 2011 (UTC)[reply]
The fact that black Americans have a lower IQ than white Americans is frequently pointed out as an indication of a flaw in the standardized tests for IQ. In short we can't really draw any meaningful conclusion from it because we can't know how well our tests actually measure intelligence. However there are several meaningful facts we can draw about a population. For example we know that fewer black Americans take higher education than white Americans. And I believe this holds even when you adjust for parents' income. We can't draw any conclusions about genetics from this, and when we deal with individuals there will be better indicators. But we can still state that black Americans tend to be less educated than white Americans. This is a fact that we should be careful not to ignore because it doesn't just say something about the ethnic group, it also says something about American society as a whole. (Actually I can safely ignore it, but that's because I come from Norway.) Note that the OP did not ask how much you would need to sample a group in order to draw conclusions about individuals of the group. He asked how much must you sample before you can say something about the group. And that has nothing to do with racism, it's a pure question of statistics. (Although as the example of IQ shows we should be careful about what we are actually testing for.) Taemyr (talk) 20:10, 11 January 2011 (UTC)[reply]


January 6

Tetrahedral angles

If I take a regular tetrahedron and draw segments from each of the vertices to the center, what's the angle between two of those segments? --75.60.13.19 (talk) 00:54, 6 January 2011 (UTC)[reply]

If you look at the Tetrahedron article, i.e. click here, then you'll find out everything you need to know, and many things you don't. Fly by Night (talk) 01:33, 6 January 2011 (UTC)[reply]

By symmetry, the average of the four vectors is zero; hence the sum of the four is zero; hence the sum of their x-coordinates is zero if one of them points in the direction of the x-axis. Again by symmetry, the x-coordinates of the other three are equal to each other. Since the x-coordinate of the one pointing in the axis direction is 1, the others must each be −1/3. Hence the angle is arccos(−1/3). Michael Hardy (talk) 02:49, 11 January 2011 (UTC)[reply]
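A numeric check with explicit coordinates (this particular vertex choice is mine; any regular tetrahedron centred at the origin works):

  import math

  v = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]   # vertices, centre at origin
  a, b = v[0], v[1]
  cos = sum(p*q for p, q in zip(a, b)) / (math.sqrt(3) * math.sqrt(3))
  print(cos, math.degrees(math.acos(cos)))   # -1/3, about 109.47 degrees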

Learning to read and write proofs

What books would you recommend for learning the general techniques of mathematical proofs? 74.14.111.188 (talk) 07:17, 6 January 2011 (UTC)[reply]

I suppose it really depends on the level of mathematics you're concerned with, but one course I found very helpful in the early years of my undergraduate studies used the textbook "A Transition to Higher Mathematics," by Smith, Eggen, and St. Andre. I found it to be very illuminating at the time, particularly in regards to learning methods of proof. Nm420 (talk) —Preceding undated comment added 15:41, 6 January 2011 (UTC).[reply]
How to read and do proofs by Daniel Solow is another one to look at. There are probably others in the same vein.--RDBury (talk) 00:54, 7 January 2011 (UTC)[reply]

I think there's one by Daniel Velleman. And some others...... Michael Hardy (talk) 02:46, 11 January 2011 (UTC)[reply]

Websites first postings

I need to find out when the sites listed below first established a website/page on the internet. Thanks

Taylor & Francis Group: an informa business - http://www.taylorandfrancisgroup.com/

- www.tandf.co.uk

Association for Childhood Education International - http://acei.org/

- www.acei.org/cehp.htm

National Council of Teachers of Mathematics - http://www.nctm.org/

- http://my.nctm.org

National Association for the Education of Young Children - http://www.naeyc.org/yc/

- www.journal.naeyc.org —Preceding unsigned comment added by 24.210.25.124 (talk) 13:30, 6 January 2011 (UTC)[reply]
Question reformatted for legibility. The Wayback machine should be able to help - e.g. it suggests that www.tandf.co.uk first appeared in early 1997. AndrewWTaylor (talk) 20:50, 6 January 2011 (UTC)[reply]

Remainder term

If the Taylor series becomes arbitrarily close to the original function for all analytic functions (in other words, all functions that we are normally interested in), what is the purpose of the remainder function? 24.92.70.160 (talk) 21:34, 6 January 2011 (UTC)[reply]

Analytic functions form a very small subset of the space of all functions. Asking a function to have continuous derivatives of all orders and then asking for a series to converge is a very, very big ask. High school functions like 1/x aren't analytic; 1/x fails to be continuous at x = 0. Take a look at the article on flat functions. The exponential exp(−1/x²) is an example of a flat function. It is well defined at x = 0 because both the positive and negative limits x → 0 give ƒ(x) → 0. In fact, it is a smooth function because each of its derivatives exists and is continuous for all x (particularly at x = 0). But you will find that each and every derivative vanishes at x = 0. So the function is always contained in the remainder term, no matter how far along the Taylor series you go. As for non-smooth functions, only the first few derivatives may be continuous, so we can only define the Taylor series up to a certain, finite order. Then the remainder term takes up the slack. Fly by Night (talk) 22:03, 6 January 2011 (UTC)[reply]
Basically its purpose is to provide you with language for reasoning about whether or not the function you're considering at any given time happens to be one of the (in practice) rarely occurring exceptions to analyticity. –Henning Makholm (talk) 23:15, 6 January 2011 (UTC)[reply]
Numerical analysts often use a finite series as an approximation for a function; having an upper bound on the remainder term allows them to draw conclusions about how accurate their final results will be. Knowing that the infinite series converges exactly is not good enough if you can't compute an infinite series. Eric. 82.139.80.114 (talk) 01:54, 7 January 2011 (UTC)[reply]
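A small illustration of that point, using the Lagrange form of the remainder for exp on [0, 1] (the choice of function and cutoff is mine):

  import math

  x, n = 1.0, 8
  partial = sum(x**k / math.factorial(k) for k in range(n + 1))   # Taylor polynomial
  bound = math.e / math.factorial(n + 1)                          # |R_n| <= e/(n+1)! on [0,1]
  print(abs(math.exp(x) - partial), bound)   # actual error ~3.1e-06, bound ~7.5e-06

The bound is what lets you decide in advance how many terms are needed for a given accuracy.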
Just to clarify a few points: analytic functions are a small subset theoretically (measure zero in L^2 I think...), but a huge subset of functions commonly used (i.e. in practice outside pure math research). Another thing to consider is the domain. Everyone seems to be assuming the whole real line, but 1/x and exp(-1/x^2) are both analytic on R\{0}. SemanticMantis (talk) 02:06, 7 January 2011 (UTC)[reply]
As a tangent here, are there smooth functions that are not analytic almost everywhere? The smooth function article claims that they exist (even nowhere analytic ones), but does not give details. Can they be constructed without the axiom of choice? –Henning Makholm (talk) 03:14, 7 January 2011 (UTC)[reply]
Here for instance is an explicit example. Algebraist 03:18, 7 January 2011 (UTC)[reply]
And it turns out we have an article on it. Algebraist 03:25, 7 January 2011 (UTC)[reply]
Oops, didn't notice you had already added a link to the smooth function article. It wasn't my intention to have two of them. –Henning Makholm (talk) 07:36, 7 January 2011 (UTC)[reply]
Another example. Algebraist 03:30, 7 January 2011 (UTC)[reply]
Interesting, thanks. Those appear to be impeccably constructible. –Henning Makholm (talk) 03:42, 7 January 2011 (UTC)[reply]

LaTeX

Resolved

Can someone suggest a good LaTeX writing program for Windows? I mean a program where I type the code and it compiles it, turns it into .dvi, .ps or .pdf. I use Kile on Linux but it needs some fiddling with to run on Windows and I don't know how. A nice user friendly interface would be perfect, with some symbol buttons that substitute the LaTeX code when you click them, etc. Fly by Night (talk) 22:25, 6 January 2011 (UTC)[reply]

There aren't any. Just install MiKTeX like everyone else. 87.91.6.33 (talk) 22:42, 6 January 2011 (UTC)[reply]


There's a commercial program called PCTeX that some people like. Personally I don't like it; if I recall correctly (but this was years ago) it has its own style and/or class files, and it's a bit of a pain to produce TeX source that other people can use without the program. But maybe they've fixed that for all I know. --Trovatore (talk) 22:48, 6 January 2011 (UTC)[reply]
I use TeXnicCenter; some of my friends use LEd. But there are plenty of editors (free or not) listed here. The Menu for Inserting Symbols column may be of interest to you. Invrnc (talk) 22:52, 6 January 2011 (UTC)[reply]
I use LyX, a quite good and free editor and writer. It can give output in .dvi, .ps and .pdf formats. Anyway, the list given above indicates many options. I haven't used it yet, but have heard good comments about Scientific WorkPlace. Pallida  Mors 00:53, 7 January 2011 (UTC)[reply]

Thanks for all the suggestions. I went for TeXnicCenter in the end. Fly by Night (talk) 12:54, 7 January 2011 (UTC)[reply]

January 7

Differentiation w/ respect to complex conjugate

My prof defined partial differentiation of a (single-variable) complex function with respect to the complex conjugate as follows:

If z = x + iy, and f(z) = u(x,y) + iv(x,y), then

∂f/∂z̄ := ∂f/∂x + i ∂f/∂y = (∂u/∂x − ∂v/∂y) + i(∂u/∂y + ∂v/∂x).

Is there an intuitive way of seeing the origin of this definition, other than an after-the-fact observation that it behaves as a partial derivative w/ respect to z̄? 74.15.138.87 (talk) 01:56, 7 January 2011 (UTC)[reply]

I suppose the observation you refer to is something like

f(z + h) ≈ f(z) + (∂f/∂z)·h + (∂f/∂z̄)·h̄

given a suitable companion definition of ∂f/∂z (beware: I haven't checked whether this is in fact true; some factors of -1 or 2 or 1/2 may be needed to make it true). There's nothing wrong with "after-the-fact observations"; they are only "after-the-fact" by virtue of the more or less arbitrary order in which your text presents things. The symbol ∂f/∂z̄ could equally well be defined as the complex number that makes the equation above hold, except in that case you would still need to prove that such a number is unique if it exists, etc. etc. Most authors seem to feel that, all other things being equal, it is easiest to understand the formal development if definitions are chosen such that there is minimal doubt about the definition actually defining something.
As an alternative to either of these two characterizations, you could interpret ∂f/∂z̄ as a way to quantify how far f is from satisfying the Cauchy-Riemann equations. –Henning Makholm (talk) 02:54, 7 January 2011 (UTC)[reply]
There might not be anything wrong with an after-the-fact definition, but it's nice to have different perspectives on things, and certainly being able to see the logic behind the definition (before seeing its consequences) has some advantages.
I was looking for a way to go from

lim_{h→0} [f(z, z̄ + h) − f(z, z̄)] / h

to the above equation (h is, obviously, a complex number). Would you know how? 74.15.138.87 (talk) 03:43, 7 January 2011 (UTC)[reply]
I'm not sure your limit even makes sense to me; previously you said that f is a single-variable function but here you give it two arguments? In any case, a limit of this kind would probably not exist unless the function f happened to be an ordinary holomorphic function composed with an explicit conjugation, which is not a very interesting situation.
I would suggest that the property of ordinary differentiation that your definition generalizes is not the high-school limit definitions, but more the property of being the coefficient in a linear approximation. Does the characterization I gave above make sense to you? –Henning Makholm (talk) 03:56, 7 January 2011 (UTC)[reply]
(Also, I think this is one of the not uncommon cases where "the logic behind the definition" is that it happens to be what gives the desired consequences) –Henning Makholm (talk) 04:00, 7 January 2011 (UTC)[reply]
The equation you wrote above is familiar to me from real-valued differentiation, but my problem for ∂f/∂z̄ is the same as for ∂f/∂z (which, just to make sure I understand, is different from f′(z), right?). At any rate, I haven't seen that formalism for complex numbers, so I can't say I understand it entirely.
As for my differentiation thing, what I meant is something like this: suppose f(z) = z·z̄. Evidently, f is a function of z alone, but you could pretend that z and z̄ are separate variables. Then, ∂f/∂z = z̄ and ∂f/∂z̄ = z. Does that make sense? Probably not ... I'm a physics major, so there's a good chance I just broke like ten rules of math. But I like to have an intuitive understanding of the math I'm using, and when I see a symbol like ∂f/∂z̄, this is what I think of. So, for me at least, it's nice to see how this perhaps non-rigorous understanding of the math fits into the overall picture. 74.15.138.87 (talk) 04:40, 7 January 2011 (UTC)[reply]
I'm not sure how well your idea of pretending that z and z* are different works. What if f is given by some arbitrary expressions for u(x,y) and v(x,y)? Then we couldn't say which x's and y's came from z and which came from z*. It might work OK in those cases where you can express f as a complex differentiable function of z and z*, if you add a factor of 1/2 to your definition as I suggest below (or at least it seemed to work in the few examples I worked out), but it still seems a bit shifty to me. –Henning Makholm (talk) 07:26, 7 January 2011 (UTC)[reply]
Okay, first beware that I've actually never seen the notation ∂f/∂z̄ before; I'm making this up as I go along! If you're doing this for the purpose of physics, it may be that it's all meant to be used for some kind of Hermitean-ish form and my suggestions are completely off. But:
Usually, f′(z) is only defined when f satisfies the Cauchy-Riemann equations, and in that case your ∂f/∂z̄ would be identically zero. So I'm assuming that there is a ∂f/∂z to go along with it, such that both are somehow meaningful for a non-differentiable f.
My idea now is to go back to multivariate real analysis and look at the real functions u and v. Let's keep z fixed and look at the differential

Δf = (∂u/∂x + i ∂v/∂x)Δx + (∂u/∂y + i ∂v/∂y)Δy

(from the definition of f, and the chain rule). Now, if we can write the left-hand side of this in the form

Δf = A·Δz + B·Δz̄

for some appropriate complex numbers A and B (which depend on z but not on Δx and Δy), then it would make some sense to call A and B ∂f/∂z and ∂f/∂z̄, respectively, because then the whole thing would look sort of like the chain rule. Expressing A and B in terms of the partial derivatives of u and v is a matter of simple (real) linear algebra. Calculate, calculate ... it turns out that B becomes half of your definition for ∂f/∂z̄. No matter; this just means that the pseudo-chain rule that works for your definitions will be

Δf = ½ (∂f/∂z · Δz + ∂f/∂z̄ · Δz̄)

which is not quite an unreasonable convention either, though it does have the strange consequence that ∂f/∂z is two times f′(z) when the latter is defined. Alternatively, perhaps there is a 1/2 in your notes that you forgot to copy?
Clearer now? –Henning Makholm (talk) 06:45, 7 January 2011 (UTC)[reply]
Yes, there was a missing 1/2 factor, and yes it is clear now. Thanks! 74.15.138.87 (talk) 15:29, 7 January 2011 (UTC)[reply]
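A symbolic check of the corrected definition ∂f/∂z̄ = ½(∂f/∂x + i ∂f/∂y), on the example f(z) = z·z̄ = x² + y² (the example function is mine), using SymPy:

  import sympy as sp

  x, y = sp.symbols('x y', real=True)
  f = x**2 + y**2                       # u = x^2 + y^2, v = 0
  dfdzbar = sp.simplify((sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2)
  print(dfdzbar)                        # x + I*y, i.e. z, matching d(z zbar)/dzbar = z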

limit

how would I prove that lim_{x→∞} x!/n^x = ∞ for any n, thereby proving that the factorial function grows faster than any exponential? Is this even true? 24.92.70.160 (talk) 02:43, 7 January 2011 (UTC)[reply]

See Factorial#Rate_of_growth. Staecker (talk) 02:45, 7 January 2011 (UTC)[reply]


Basic idea: When you increase x by 1, the numerator increases by a factor of about x, whereas the denominator increases by a constant factor of n. Eventually x is greater than n. Work it out from there. --Trovatore (talk) 02:48, 7 January 2011 (UTC)[reply]
If part of your quandary is how to deal with the factorial, you might try converting Stirling's approximation into a bound on x!, and use that to derive the limit. Alternatively, you can substitute the Gamma function for the factorial, as the Gamma function is the continuous version of the factorial. -- 174.21.250.227 (talk) 03:06, 7 January 2011 (UTC)[reply]
I don't think that is any easier than keeping it as a discrete sequence and working directly from the definition. One easily sees that from a certain x₀ onwards, the sequence x!/n^x is strictly increasing, and it is then also easy for any M to find an x such that x!/n^x > M (note that it is not necessary to be able to pinpoint the first such x). –Henning Makholm (talk) 03:22, 7 January 2011 (UTC)[reply]
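Watching the ratio grow makes the point concrete; a quick sketch for n = 10 (the parameter choice is mine):

  import math

  n = 10
  for x in (10, 25, 50, 100):
      print(x, math.factorial(x) / n**x)
  # 10 0.00036..., 25 1.55..., 50 3.04e+14, 100 9.33e+57
  # each step from x to x+1 multiplies the ratio by (x+1)/n, which exceeds 1 once x >= n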

Question: what is the meaning of R superscript n, subscript ++.

Description of the question: In general in mathematics, R with superscript n and subscript + means a Cartesian space of real numbers of n dimensions or n coordinates. The subscript + indicates that all the values are ≥ 0. However, in the book Jehle, G. A. & Reny, P. J. (2009), Advanced Microeconomic Theory, 2nd ed., low price ed., Pearson Education, on page 36 the notation R superscript n, subscript ++ has been used. The meaning of this new notation is not clear. Please help. —Preceding unsigned comment added by 218.248.80.62 (talk) 11:48, 7 January 2011 (UTC)[reply]

R^n_++ is like R^n_+, but the coordinates are required to be strictly positive, i.e. all of them greater than zero. Pallida  Mors 14:01, 7 January 2011 (UTC)[reply]
The notation may sound strange for other areas, but it is more or less widespread in Mathematical Economics. R^n_++ is sometimes called the strictly positive orthant, see for instance this source, page 2. Pallida  Mors 14:11, 7 January 2011 (UTC)[reply]


distance measure

Hi. I have two vectors, x and y, whose elements each sum to one: ∑_i x_i = ∑_i y_i = 1. All elements are non-negative. I need to define a "distance" between these and I am sure that there is a better way than just the Euclidean distance √(∑_i (x_i − y_i)²). The correct term is eluding me. Anyone? Robinh (talk) 15:50, 7 January 2011 (UTC)[reply]

There are many ways of defining distances between vectors. Which is best depends on the situation. What are you trying to do with these vectors and this distance? Algebraist 15:55, 7 January 2011 (UTC)[reply]
(edit conflict) The term you want is probably metric. I can't say specifically what metric would be "better" than the standard Euclidean metric—it depends on what you're going to use it for. —Bkell (talk) 15:56, 7 January 2011 (UTC)[reply]
I think you want vector cosine - a common and efficient method of defining the distance between two vectors with the same number of elements, all between -1 and 1. -- kainaw 15:57, 7 January 2011 (UTC)[reply]
(e/c) Obvious choices are the Lp-norms (‖x − y‖_p = (∑_i |x_i − y_i|^p)^{1/p} for 1 ≤ p < ∞, ‖x − y‖_∞ = max_i |x_i − y_i| for p = ∞). There is no telling what is "better" unless you specify a bit more what kind of application you have in mind.—Emil J. 15:59, 7 January 2011 (UTC)[reply]
thanks guys. The context is Dirichlet distribution, but vector cosine takes me to Hamming distance, which is more-or-less what I want (most of the elements of the vector are zero). Cheers, Robinh (talk) 16:04, 7 January 2011 (UTC)[reply]
I suggested cosine instead of Hamming because Hamming will give you a headache if you have values that are not 0 or 1. Cosine will give the same results as Hamming for binary (0/1) values, but a more accurate result for a collection of values between 0 and 1. -- kainaw 16:08, 7 January 2011 (UTC)[reply]
Thanks for this. I'll use both and report back. Best wishes, Robinh (talk) 16:11, 7 January 2011 (UTC)[reply]
Kainaw, I don't understand what you mean when you say, "Cosine will give the same results as Hamming for binary (0/1) values, but a more accurate result for a collection of values between 0 and 1." The vector cosine will always be a real number between −1 and 1, whereas the Hamming distance will always be a nonnegative integer. —Bkell (talk) 16:18, 7 January 2011 (UTC)[reply]
Indeed, I'm not sure the concept of Hamming distance has any meaning at all in the context of real-valued vectors, other than being generalized into one of the Lp norms. -- The Anome (talk) 16:45, 7 January 2011 (UTC)[reply]
I didn't mean to imply it will give the same "value". I meant the same "result" as in a general idea of distance. To be more specific, if the vectors are binary 0/1 values, vector cosine produces a Jaccard index (or a Tanimoto coefficient, depending on exactly how you implement it). Jaccard index is, in general, a measure of how many elements between the vectors are the same. Hamming distance is also a measure of how many elements between the vectors are the same. So, the result is the same in concept even though the exact value will be different. -- kainaw 17:19, 7 January 2011 (UTC)[reply]
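For concreteness, a minimal sketch of the vector-cosine measure kainaw describes (function and variable names mine):

  import math

  def cosine(a, b):
      dot = sum(p*q for p, q in zip(a, b))
      return dot / (math.sqrt(sum(p*p for p in a)) * math.sqrt(sum(q*q for q in b)))

  x = [0.5, 0.5, 0.0, 0.0]
  y = [0.4, 0.4, 0.2, 0.0]
  print(cosine(x, y))   # ~0.943; 1.0 would mean identical direction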

√(∑_i (x_i − y_i)²) is the most common definition of distance. Why shouldn't it be good enough? Bo Jacoby (talk) 21:56, 7 January 2011 (UTC).[reply]

(OP). Well, none of the suggestions "use" the fact that the total of the elements is unity, nor the fact that each element is non-negative. It's on the tip of my tongue that there is a distance measure out there that "uses" these features of my vectors, which has found applications in statistics. I'm sure that the distance measure I'm thinking of has some nice properties in the context of the Dirichlet distribution... but I just can't remember what it's called. I have a vague sense that it's someone's name. Smith's distance? The Jones distance? thanks everyone, Robinh (talk) 22:45, 7 January 2011 (UTC)[reply]

"In the context of the Dirichlet distribution". The natural measuring stick for a random variable is the standard deviation, which is proportional to √(z(1 − z)), so you may like to use ∑_i (x_i − y_i)² / (z_i(1 − z_i)) as a measure of distance between x and y, where z = (x + y)/2. Bo Jacoby (talk) 11:15, 8 January 2011 (UTC).[reply]

What does it mean to raise big-O to a power?

I apologise if this question is foolish, but I am rather confused. In our article Time complexity, the table at the start claims that polynomial time is 2^O(log n). In the text of the article, it says that polynomial time is O(n^k). Now, if someone asked me (as someone just did, which led me to the article to check I was right) I would have defined polynomial time as O(n^k). But what on earth does 2^O(log n) mean? O(...) is a measure of complexity, not a number; how do you raise it to a power? And why does the article say both P=2^O(log n) and P=O(n^k), without any explanation as to the difference? Marnanel (talk) 18:08, 7 January 2011 (UTC)[reply]

"f(n)=2^O(log n)" means there exists a function g such that g(n)=O(log n) and f(n)=2^g(n). This is indeed equivalent to f being O(n^k) for some k. I don't know why the table uses one form rather than the other. Algebraist 18:15, 7 January 2011 (UTC)[reply]
I'd say 2^O(log n) is unnecessarily confusing. If the idea is to have a single expression instead of a union like ∪_k O(n^k), then the fairly common notation n^O(1) is simpler and easier to understand.—Emil J. 18:26, 7 January 2011 (UTC)[reply]
If I may (not an expert) offer a counterexample, g(n) = k·log n + k′·log(log n), which is still O(log n). Then f(n) = 2^g(n) = n^k (log n)^k′, which is definitely not O(n^k) (unless we redefine O in this problem). Therefore the two are not equivalent. SamuelRiv (talk) 07:59, 10 January 2011 (UTC)[reply]
How is n^k (log n)^k′ "definitely not" O(n^m) for some m? By the definitions I'm familiar with, it is O(n^(k+1)) for any k′. –Henning Makholm (talk) 12:00, 10 January 2011 (UTC)[reply]

Inverse Function Theorem

I have been reading Spivak's Calculus on Manifolds, and just got through the proof of the inverse function theorem. The statement is as follows:

Let a ∈ U ⊆ R^n, where U is open, and let f: U → R^n be continuously differentiable. Assume that f ′(a) is invertible. Then f defines a bijection of some open neighbourhood V of a onto an open neighbourhood W of f(a), and V and W can be chosen so that f⁻¹ is differentiable on W.

I have been over this several times, and it appears to me that it is only necessary in the proof to assume that f is differentiable on U, and f ′ is continuous at a. My question is, am I correct?

If not, please give a counterexample or a reference. If so, please give a reference that states the result in at least that generality. 86.205.29.53 (talk) 20:02, 7 January 2011 (UTC)[reply]

Your version is true. Here is a reference. Algebraist 21:53, 7 January 2011 (UTC)[reply]
Hello. Unfortunately, I can't get the Preview to work on Google Books. However, now I'll know where to look! Thank you very much. 86.205.29.53 (talk) 23:24, 7 January 2011 (UTC)[reply]
It's working now. 86.205.29.53 (talk) 23:33, 7 January 2011 (UTC)[reply]

January 8

Pretty stupid question about statistics

What are the WP articles for broken line graph, frequency polygon, frequency curve, cumulative frequency polygon and cumulative frequency curve? (I bet WP uses fancy names for the titles. :P ) Kayau Voting IS evil HI AGAIN 13:37, 8 January 2011 (UTC)[reply]

Chart should lead to most of that. Dmcq (talk) 13:50, 8 January 2011 (UTC)[reply]
Thanks, I found the broken line graph and made an RDR. Unfortunately, I could not find the others. Kayau Voting IS evil HI AGAIN 13:54, 8 January 2011 (UTC)[reply]
List of graphical methods gives a few more. Also, just putting the terms in the search box should help. For instance, putting 'cumulative frequency curve' in the search, the first return was cumulative frequency analysis. It's probably a good idea to add a few redirects between names that have graph, chart or plot in them; I'll have a look at that. Dmcq (talk) 14:00, 8 January 2011 (UTC)[reply]
Thanks! I did know about the cumulative frequency analysis article, although I didn't understand a word of it. :P By the way, class boundary, class limit, and class width are also redlinks, you may want to redirect them to something useful. Kayau Voting IS evil HI AGAIN 14:07, 8 January 2011 (UTC)[reply]
There seems to be very little about bunching data into classes other than via classifiers. It isn't used much in actual statistics nowadays, just in displaying the data in histograms. In fact the only thing I could find was an article I mainly wrote myself called assumed mean. The closest to that nowadays would be quantization error. I notice the histogram article switches between calling them categories, classes and bins. Dmcq (talk) 21:40, 8 January 2011 (UTC)[reply]
Cumulative distribution function is much better for cumulative frequency curve. I hate to say this, but Wolfram MathWorld seems to have articles for most of what you said. Dmcq (talk) 21:48, 8 January 2011 (UTC)[reply]
I know this is probably a very stupid thing to ask, but both the CF analysis page and the cumulative distribution function page seem to be about probability (that's what the cats say...) However, the CF polygon/curve I have in mind is, like that described in Wolfram Mathworld, a chart that presents continuous data in a way similar to a histogram. Are they actually the same thing? Kayau Voting IS evil HI AGAIN 00:51, 9 January 2011 (UTC)[reply]

Cryptography

At my bank, online access to one's bank account is made secure through the use of a small cryptography device.

It works as follows:

the bank website provides a random number, the *challenge*. You put your debit card into the cryptography device and enter the challenge and the secret code of the debit card. If the secret code is correct, the device replies with another number, which you type into the bank website to log in.

I always thought this was a nifty way of verifying that the user has his debit card and knows its secret code, without actually sending the secret code over the internet.

Today I learned, much to my surprise, that the cryptography device is non-deterministic: it gives different replies for the same inputs. I am baffled. What is going on here? 83.134.178.145 (talk) 14:39, 8 January 2011 (UTC)[reply]

Could it have a clock and use the date/time as another input ? This would mean the results would "expire" if the website doesn't get the code within whatever time frame they allow. Why do this ? Let's say someone has a key-logging program on your computer, and gets the code you typed in. If they try to use it fraudulently some time later, hopefully the code would have expired by then. StuRat (talk) 15:41, 8 January 2011 (UTC)[reply]
He can't use it some time later anyway, because the bank website will give a different challenge number next time, which ensures a different input to the device at every login. Furthermore, the device is only as big as a common cell phone and has worked for years without battery replacement – in fact it doesn't even have an opening to replace the battery, presumably to prevent tampering. I suppose it could contain a very low-power clock, but in that case what would be the point of the challenge number? The device maps (challenge number, secret code) → password number, and the bank website provides a new challenge number at every login. The challenge number and password number are both up to 8 digits long. 83.134.178.145 (talk) 16:16, 8 January 2011 (UTC)[reply]
"he can't use it sometime later anyway, because the bank website will give a different challenge number next time" -> Well, if the person with the key-logging program is an untrustworthy room-mate, and uses your own computer, and you have failed to log off, he could gain access that way, if the web site hasn't yet timed out. The time-out period for the website might be longer (say half an hour), than the time-out to enter the validation code (say 2 minutes). StuRat (talk) 22:49, 8 January 2011 (UTC)[reply]
Banks I've used with a similar system (but no challenge number, just a non-deterministic map from secret code -> password number) gave results that did expire after a few minutes. Sometimes if I dallied in copying the number I would get rejected. Eric. 82.139.80.114 (talk) 18:10, 8 January 2011 (UTC)[reply]
And since the device took no input other than my card and PIN, it would have to have had an internal clock to have that behavior. Eric. 82.139.80.114 (talk) 18:11, 8 January 2011 (UTC)[reply]
Many cryptography algorithms allow for random numbers. Take a VERY simplistic example. You give me a number. I will give you a number such that if I add it to the number you gave me and reduce mod 11, it will produce a result of 5. So, if you give me 7, I can give you any number n such that (7+n)%11 = 5. In a real algorithm, the restrictions are more complex, but they still allow for multiple answers. -- kainaw 18:16, 8 January 2011 (UTC)[reply]
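As a toy illustration of that example (a hypothetical sketch; the function names and the choice of 1000 are mine), any of infinitely many responses verifies correctly:

```python
import random

TARGET, MODULUS = 5, 11

def respond(challenge: int) -> int:
    # Any n with (challenge + n) % MODULUS == TARGET is valid; pick one at random.
    return (TARGET - challenge) % MODULUS + MODULUS * random.randrange(1000)

def verify(challenge: int, response: int) -> bool:
    return (challenge + response) % MODULUS == TARGET

c = 7
r = respond(c)
print(r, verify(c, r))  # a different r on each run, yet it always verifies
```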
The output expired after a few minutes, which meant that the number was produced by a system that knew what time it was. Since the inputs (card and PIN) were constant, that meant the device had an internal clock. Eric. 82.139.80.114 (talk) 01:44, 9 January 2011 (UTC)[reply]
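One plausible construction for such a device – purely a guess at the mechanism, not any bank's actual algorithm; the key, digit count and 30-second step are all assumptions – mixes the card's secret key, the challenge and the current time step, in the spirit of OATH-style one-time passwords:

```python
import hmac, hashlib, time

def device_response(card_key: bytes, challenge: str, step_seconds: int = 30) -> str:
    """Hypothetical sketch: derive an 8-digit reply from key, challenge and time step."""
    timestep = int(time.time() // step_seconds)  # changes every 30 s, so replies expire
    msg = challenge.encode() + timestep.to_bytes(8, "big")
    digest = hmac.new(card_key, msg, hashlib.sha1).digest()
    return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)

# The bank, holding the same key, recomputes the value for the current (and
# perhaps the previous) time step and accepts a match.
print(device_response(b"secret-card-key", "12345678"))
```

This would explain both observations above: the reply depends on the challenge, yet it also expires and differs between runs because the time step enters the computation.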

Group theory: what is the precise definition of A:B and A.B?

Hello,

I have been confused about this for quite some time. In many articles I see the notations A:B and A.B used for groups.

I am aware of the notion of (external and internal) semidirect products, and highly suspect that there is a relation. I remember from my own undergraduate course in group theory that A.B should mean that G has a normal subgroup isomorphic to A with the quotient isomorphic to B. It also said that A:B should mean that there is a normal subgroup isomorphic to A and another subgroup isomorphic to B, trivially intersecting and generating the entire group.

So my questions are:

1) Is this the correct standard notation? (I often see it being used in articles without any name, explanation or reference).

2) Is this sufficient information to determine the entire group? It seems not, because both the cyclic group of order 10 and the dihedral group of order 10 could then be written as 5:2. But then why is this notation used like that?

Many thanks in advance! — Preceding unsigned comment added by Evilbu (talkcontribs)

Could you be more specific about which articles you're seeing this in? My understanding is A:B denotes the set of (left or right depending on the author) cosets of B in a group A, with [A:B] or something similar meaning the index of B in A when B is a subgroup. I think the dot notation is sometimes used for the subgroup (of a permutation group) that fixes a letter. Group theory is still young enough that different notations are often used by different authors.--RDBury (talk) 23:04, 8 January 2011 (UTC)[reply]
I find it hard to give examples that are publicly available. It seems these authors used the Atlas or GAP. This is an example from the online Atlas where both notations are used when giving maximal subgroups: Atlas: Maximal subgroups of M24. The first interpretation you give (of A:B) looks like what I was taught. But apart from the notation, my second question remains as well: does this make completely clear to readers what isomorphism class this group is in? — Preceding unsigned comment added by Evilbu (talkcontribs)
It looks like this is notation I'm not familiar with but from the examples it appears to be telling how the permutation group breaks down into orbits. If you know that the group is a maximal subgroup of a specific permutation group then such information would indeed determine the isomorphism class. If no one here knows then I'd suggest looking at the paper version of the Atlas and some of the references there. Sorry not to be more help.--RDBury (talk) 17:10, 9 January 2011 (UTC)[reply]


January 9

Alternate proof of the Weierstrass approximation theorem

Hello everyone,

I have been asked to provide ('complete') an alternate proof of the Weierstrass approximation theorem - that any continuous function on [a, b] can be uniformly approximated by polynomials - which begins as follows (the 'original' was the common proof using Bernstein polynomials):

Let 0 < a < b < 1, and let f : [a, b] → R be the continuous function we wish to approximate by polynomials. Fix any continuous extension of f to all of R such that the function is identically zero outside of [0, 1], and denote this again by f...

Problem is, I have absolutely no idea how to continue. How does extending f to all of R help? Surely that only makes it harder to approximate. Could anyone please help me? Thank you very much! 178.176.2.17 (talk) 00:07, 9 January 2011 (UTC)[reply]

Is that all you have to go by? No indications even of which kind of theory you're supposed to use?
The extension appears to indicate that you're supposed to do something with the function that requires f to be defined on the entire real line. My first hunch would be something like Fourier transform it, then Taylor approximate each frequency component and then sum them back together. But I don't actually know whether that would work, and even if it did, it would seem to be much easier to extend f periodically and use a plain Fourier series. So it's probably a false scent. –Henning Makholm (talk) 02:45, 9 January 2011 (UTC)[reply]
Extending the function to all of R may be a device to allow convolution. If you take the convolution product of f with a polynomial P, the resulting function will be a polynomial. If P is taken to be a good approximation of the Dirac delta function, then the convolution product will be uniformly close to f. Of course, to do this, you need to cut off part of the polynomial P. But since you've extended f by zero, the part of P that you've cut off doesn't matter, and the convolution is still a polynomial. 82.120.58.206 (talk) 03:28, 9 January 2011 (UTC)[reply]
That is, it's a polynomial on the interval you're interested in. 82.120.58.206 (talk) 03:35, 9 January 2011 (UTC)[reply]
That makes much more sense than my wild guessing above. –Henning Makholm (talk) 05:15, 9 January 2011 (UTC)[reply]
What you wrote is the proof I first learned for Stone–Weierstrass, and still the one I like best, although I'm having trouble remembering sufficient conditions for uniform convergence of Fourier series at the moment. The function f would probably first need to be approximated by a function satisfying those conditions. (Either that, or replace the Fourier series with its Cesàro sum and use Fejér's theorem.) 82.120.58.206 (talk) 06:54, 9 January 2011 (UTC)[reply]
That makes sense to me - the second approach certainly sounds good - but if our polynomials are going to tend to the delta function, don't we need the 'width' to get smaller and smaller? But at the same time, we presumably need the polynomial to be positive over [0,1], otherwise we can't guarantee effects at the edges will disappear in the convolution. So, does the polynomial in the convolution get sharper and sharper as it tends to the delta function, or does it stay the same width, i.e. remain positive on [0,1]? Clarification would be much appreciated, thank you! 178.176.6.47 (talk) 04:36, 10 January 2011 (UTC)[reply]
Consider, for example, (1−x²)ⁿ scaled such that its integral over [-1,1] becomes 1. The peak becomes narrower with increasing n, but the useful domain of the function stays the same. –Henning Makholm (talk) 12:10, 10 January 2011 (UTC)[reply]
Yes, I agree with Henning's suggestion. Extend the polynomial by 0 outside [-1,1]. 82.120.58.206 (talk) 12:23, 10 January 2011 (UTC)[reply]
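Making the convolution step explicit (a sketch using Henning's kernel): set

$$P_n(x) = c_n\,(1-x^2)^n \text{ on } [-1,1], \quad P_n = 0 \text{ elsewhere}, \qquad c_n^{-1} = \int_{-1}^{1}(1-t^2)^n\,dt,$$

and for x in [a,b] put

$$(f * P_n)(x) = \int_{0}^{1} f(t)\,P_n(x-t)\,dt.$$

Since f vanishes outside [0,1] and |x − t| ≤ 1 for such x and t, one may expand (1 − (x−t)²)ⁿ in powers of x, so the right-hand side is a polynomial in x of degree at most 2n; and since P_n concentrates at 0, f ∗ P_n → f uniformly on [a,b].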
Got it, thank you very much! 178.176.10.8 (talk) 14:45, 10 January 2011 (UTC)[reply]

January 10

Where was Gödel's original Gödel sentence on the arithmetic hierarchy? In general, what is the lowest level of the arithmetic hierarchy where a Gödel sentence for Peano arithmetic can be constructed? 76.67.79.61 (talk) 01:28, 10 January 2011 (UTC)[reply]

The Goedel sentence for any c.e. theory in first-order logic is Π^0_1. At least I'm pretty sure it is. It essentially says "for every n, n is not the Goedel number of a proof of me". Of course you can't literally say "me"; you have to use the recursion theorem, and I'd have a bit of work to do to be sure that doesn't increase the complexity, but I don't think it does. --Trovatore (talk) 01:34, 10 January 2011 (UTC)[reply]
Isn't the idea of "is a proof of" nontrivial enough to raise its position? 74.14.110.15 (talk) 07:12, 10 January 2011 (UTC)[reply]
No. To say that n is the Goedel number of a proof of proposition σ is primitive recursive. --Trovatore (talk) 07:14, 10 January 2011 (UTC)[reply]
Oh yeah. Thanks for your help. 74.14.110.15 (talk) 08:40, 10 January 2011 (UTC)[reply]
And to answer the second part of the question, Π^0_1 is the best possible for a true unprovable sentence, since all true Σ^0_1-sentences are provable already in Robinson's Q.—Emil J. 13:45, 10 January 2011 (UTC)[reply]
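Spelled out schematically (a sketch, writing Prf_T for the primitive recursive proof predicate of the theory T), the sentence Trovatore describes has the shape

$$\sigma \;\leftrightarrow\; \forall n\,\neg\,\mathrm{Prf}_T\!\left(n, \ulcorner\sigma\urcorner\right),$$

a single universal number quantifier over a decidable matrix, which is exactly the Π^0_1 form.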

Groups

Let a, b be linearly independent in Z^2. Is there an automorphism of Z^2 so that a, b get turned into the form (x,0), (y,0)? i.e. their first and second coordinates vanish resp. Is it possible to do this for one element as well? Money is tight (talk) 14:13, 10 January 2011 (UTC)[reply]

No -- consider a=(1,1), b=(-1,1). Since neither a nor b is a proper multiple of any element, x and y must both be units. But then there is no possible image of (0,1), which must be halfway between the images of a and b. –Henning Makholm (talk) 14:41, 10 January 2011 (UTC)[reply]
Damn. I noticed I made a mistake: (x,0), (y,0) should be (x,0), (0,y), but nevertheless your example still works (I have to say it's pretty clever). My real problem was this: let A be a group whose order is a power of a prime, generated by 2 torsion elements; are there only two factors in A's primary decomposition into cyclic groups (the structure theorem for finitely generated abelian groups)? Also, if A is generated by a torsion-free element and a torsion element, is A of the form Z × Z/nZ for some n? Thanks. Money is tight (talk) 15:06, 10 January 2011 (UTC)[reply]
I assume you're talking only about Abelian groups. For the second question, it should be easy to prove that the subgroups generated by your two elements form a direct sum. For the first, write your group as a product of cyclic groups of order a power of the prime number p, and assume there are at least three factors. Then there is some quotient of it that is isomorphic to (Z/pZ)^3. This quotient must be generated by the projections of your two generators. But now this is a vector space question! 82.120.58.206 (talk) 16:41, 10 January 2011 (UTC)[reply]
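To flesh out the second question (a sketch, under the assumptions above: A abelian, a torsion-free, b of order n, and A generated by a and b): the map

$$\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z} \;\longrightarrow\; A, \qquad (i, \bar{j}) \mapsto ia + jb$$

is a well-defined surjective homomorphism, and if ia + jb = 0 then ia = −jb is a torsion element of ⟨a⟩, forcing i = 0 and then n | j; so the kernel is trivial and A ≅ Z ⊕ Z/nZ.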
As I mentioned before, you ought to use the direct sum ⊕ for abelian groups instead of the more general direct product ×. Using ⊕ emphasises the additive structure of an abelian group. So you want Z ⊕ Z/nZ instead of Z × Z/nZ. – Fly by Night (talk) 19:17, 10 January 2011 (UTC)[reply]

Earth vs. human vs. rhinovirus

How many average-sized humans would fit into the Earth? And how many rhinoviruses would fit into an average-sized human? I could figure it out but my cold is so bad I can't even remember how standard notation works. —Preceding unsigned comment added by 93.96.113.87 (talk) 20:41, 10 January 2011 (UTC)[reply]

Before attempting to answer, you must state what condition the humans must be in. If you allow them to be puréed first, you can get a lot more in. Not only will the liquid remains fill in nicer, it will also remove the empty air cavities. -- kainaw 20:44, 10 January 2011 (UTC)[reply]
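As a rough volumes-only sketch (ignoring packing and shape, and using round figures: Earth ≈ 1.08×10²¹ m³, an average human ≈ 0.07 m³, a rhinovirus ≈ 30 nm across, so radius 1.5×10⁻⁸ m):

$$\frac{V_{\text{Earth}}}{V_{\text{human}}} \approx \frac{1.08\times10^{21}}{7\times10^{-2}} \approx 1.5\times10^{22}, \qquad \frac{V_{\text{human}}}{V_{\text{virus}}} \approx \frac{7\times10^{-2}}{\tfrac{4}{3}\pi\,(1.5\times10^{-8})^{3}} \approx 5\times10^{21}.$$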

January 11

Squared differences

When fitting a line to a sample of data in linear regression, why are squared differences minimized instead of the absolute value of the differences? --41.213.125.249 (talk) 12:37, 11 January 2011 (UTC)[reply]

I've sometimes asked myself the same question, and I don't have a complete answer. Mathematically, what you're suggesting is to measure distances between certain vectors using the L1 norm instead of the L2 norm. From the point of view of interpreting the data in the real world, it's not clear to me why one would be better than the other. For example, it's not clear why it's better to be off by 2 in two cases than off by 3 in one and off by 1 in the other. It depends whether you want to penalize big discrepancies.
But mathematically, the L2 norm leads to a simpler theory, since there is a geometric interpretation in terms of the Euclidean scalar product. For example, unless I'm mistaken, with the L1 norm, you could never get simple formulas for the best fit the way you do using the squares of the distances. On the other hand, with computers, I imagine it wouldn't be too much of a problem to find the best L1 fit if you wanted to, or at least an approximation of it. 82.120.58.206 (talk) 13:18, 11 January 2011 (UTC)[reply]
See Ordinary least squares#Geometric approach. It's a bit sketchy, but it will give you an idea. 82.120.58.206 (talk) 13:31, 11 January 2011 (UTC)[reply]
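To illustrate the last point numerically – a quick sketch (the data, function names and the use of scipy's generic minimizer are my own choices, not a canonical method):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)
y[5] += 20.0  # one gross outlier to make the two fits differ

# L2 fit (ordinary least squares) has a closed form:
A = np.column_stack([x, np.ones_like(x)])
beta2 = np.linalg.lstsq(A, y, rcond=None)[0]

# L1 fit (least absolute deviations) has no closed form, so minimize numerically:
def l1_loss(beta):
    return np.abs(y - A @ beta).sum()

beta1 = minimize(l1_loss, x0=beta2, method="Nelder-Mead").x

print("L2 slope, intercept:", beta2)  # noticeably dragged toward the outlier
print("L1 slope, intercept:", beta1)  # stays closer to the true line y = 2x + 1
```

Running this shows the practical difference: the squared loss lets one outlier pull the whole line, while the absolute loss largely ignores it.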

Series and Integration

Can anyone explain why the sum is equivalent to the integral? Visit me at Ftbhrygvn (Talk|Contribs|Log|Userboxes) 13:00, 11 January 2011 (UTC)[reply]

What you have written is a Riemann sum for the integral. Look at the part of the article about right sums, and try and figure out what f, Q, a, b and n need to be in the formula. 82.120.58.206 (talk) 13:24, 11 January 2011 (UTC)[reply]
Thanks for your quick answer! Visit me at Ftbhrygvn (Talk|Contribs|Log|Userboxes) 14:47, 11 January 2011 (UTC)[reply]
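For a concrete instance of the pattern (my own example, not necessarily the OP's sum): with f(x) = x², a = 0, b = 1 and Q = 1/n, the right sum is

$$\sum_{i=1}^{n} \frac{1}{n}\left(\frac{i}{n}\right)^{2} = \frac{(n+1)(2n+1)}{6n^{2}} \;\longrightarrow\; \frac{1}{3} = \int_{0}^{1} x^{2}\,dx \quad (n \to \infty).$$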

How to visualize integer sequences?

Hello,

is there a good way to visualize integer sequences up to a given upper bound? I know, if for example I wanted to visualize the sequence of prime numbers up to, let's say, 100, I could simply draw an x-axis and mark the position of each prime. Is there another way of visualizing sequences of integers? Toshio Yamaguchi (talk) 13:27, 11 January 2011 (UTC)[reply]

Is your sequence increasing? 82.120.58.206 (talk) 13:32, 11 January 2011 (UTC)[reply]
Yes, most of the sequences I have in mind are subsequences of the prime numbers. Toshio Yamaguchi (talk) 13:41, 11 January 2011 (UTC)[reply]
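One quick way to get a couple of different pictures of such a sequence – a sketch assuming Python with matplotlib; the function names and styling are my own:

```python
import matplotlib.pyplot as plt

def primes_up_to(n: int) -> list[int]:
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(100)

fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.eventplot(primes)  # tick marks on a number line, like the x-axis idea above
ax1.set_title("primes up to 100 as ticks")
ax2.step(range(len(primes)), primes, where="post")  # n-th term vs n shows the growth rate
ax2.set_title("n-th prime vs n")
plt.tight_layout()
plt.show()
```

The second panel generalises well to any increasing subsequence of the primes: plotting the n-th term against n makes gaps and density changes visible at a glance.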