Wikipedia:Reference desk/Archives/Mathematics/2008 October 3
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
October 3
Imaginary Numbers
Hello. Does i^{4a} = 1 because (i^4)^a = 1^a = 1? No constraints are given on which set of numbers a must belong to. Is there any ambiguity? Thanks in advance. --Mayfare (talk) 00:10, 3 October 2008 (UTC)
- 1^{3/2} = (e^{2iπ})^{3/2} = −1 (just kidding ;), 1^a = 1 for any a. hydnjo talk 02:16, 3 October 2008 (UTC)
- (e/c) i^{4a} = 1 only if a is an integer. Going from i^{4a} to (i^4)^a does not follow order of operations. If a = 3/4, we get i^{4·3/4} = i^3 = −i. Paragon12321 02:53, 3 October 2008 (UTC)
- Parentheses matter. hydnjo talk 03:05, 3 October 2008 (UTC)
- This is another question where Wikipedia really ought to have a special article. I just had a look for a paradox based on it by Thomas Clausen (mathematician) because I was feeling evil, but I can't even find it using Google. Dmcq (talk) 09:19, 3 October 2008 (UTC)
- Actually, there's nothing wrong with i^{4·3/4} = (i^4)^{3/4} = 1^{3/4}. You just have to remember that the complex exponential is multivalued, and you have to take the right value of the right-hand side. The values of 1^{3/4} are 1, −1, i and −i. Algebraist 09:23, 3 October 2008 (UTC)
- Hmm perhaps I'll just put in a version of that paradox by Clausen:
- e^{2πin} = 1 for all integer n
- e^{1+2πin} = e
- (e^{1+2πin})^{1+2πin} = e
- e^{1+4πin−4π²n²} = e
- e·e^{−4π²n²} = e
- e^{−4π²n²} = 1
- This is plainly false for any integer n except 0, but the original line was true for all integer n. Dmcq (talk) 09:47, 3 October 2008 (UTC)
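The step being exploited is the rule (e^a)^b = e^{ab}, which does not hold in general for complex exponents once a single (principal) value is used. A minimal Python sketch, assuming the standard library's principal-branch complex power as the point of comparison (n = 1 is an arbitrary nonzero choice):

```python
import cmath
import math

n = 1                              # any nonzero integer shows the effect
z = 1 + 2j * math.pi * n           # the exponent 1 + 2*pi*i*n

# Principal-branch evaluation: Python's ** on complex numbers computes
# exp(w * Log(base)) with the principal logarithm Log.
base = cmath.exp(z)                # equals e, since e^(2*pi*i*n) = 1
principal = base ** z              # Log(base) = 1, so this is exp(z) = e

# Naive exponent multiplication, as in the paradox above.
naive = cmath.exp(z * z)           # e^(1 + 4*pi*i*n - 4*pi^2*n^2)

print(principal)                   # ~ 2.718 + 0j, i.e. e
print(naive)                       # ~ 1.9e-17, i.e. e * e^(-4*pi^2)
```

The two disagree because Log(e^{1+2πin}) is 1, not 1 + 2πin, so multiplying the exponents silently switches to a different value of the multivalued power.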
- I think I'll go and put something into the 'Powers of complex numbers' section of the exponentiation article as it doesn't treat the multiple results of powers very well - which is basically what Clausen's paradox exploits. Also, whilst the Logarithm article does say the complex logarithm is multivalued, it doesn't explicitly put in the 2πin. Dmcq (talk) 09:32, 8 October 2008 (UTC)
Margin of error question...?
If I ask a question to a simple random sample of 8 individuals out of a population of 40, and 1 out of these 8 says yes, then the percentage of yesses is 12.5%. But how do I calculate the margin of error for, let's say, 90% confidence? What formula could I plug all those numbers into?--Sonjaaa (talk) 14:59, 3 October 2008 (UTC)
- Statistics isn't my area, so I don't feel confident giving you a definite answer, but Sample size#Estimating proportions looks like the place to start. --Tango (talk) 15:28, 3 October 2008 (UTC)
The population size is N = 40 and the sample size is n = 8. The sample number of yeas is 1 and the sample number of nays is 7. The population number of yeas, K, assumes values from 1 to 33, and the population number of nays, 40 − K, assumes values from 7 to 39. The number of ways to obtain K is the product of binomial coefficients C(K, 1)·C(40 − K, 7). These 33 numbers are 15380937 25240512 30886416 33390720 33622600 32277696 29904336 26926848 23666175 20358000 17168580 14208480 11544390 9209200 7210500 5537664 4167669 3069792 2209320 1550400 1058148 700128 447304 274560 160875 89232 46332 22176 9570 3600 1116 256 33. Their sum is 350343565, which happens to be equal to C(41, 9). The general distribution function is P(K = k) = C(k, x)·C(N − k, n − x)/C(N + 1, n + 1), where x is the sample number of yeas, but I am not going to prove it here. The accumulated values are 0.04 0.12 0.2 0.3 0.4 0.49 0.57 0.65 0.72 0.78 0.82 0.86 0.9 0.92 0.94 0.96 0.97 0.98 0.99 0.99 0.99 1 1 1 1 1 1 1 1 1 1 1 1 for K = 1..33. So a 90% confidence interval is 1 ≤ K ≤ 13, or 2.5% ≤ K/40 ≤ 32.5%. Bo Jacoby (talk) 19:21, 4 October 2008 (UTC).
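The counts above can be reproduced with a short Python sketch (standard library only; the names N, n, x and K are mine, for population size, sample size, yeses in the sample and yeses in the population):

```python
from math import comb

N, n, x = 40, 8, 1                  # population size, sample size, sample yeses

# Number of ways a population holding K yeses could have produced the observed
# sample of 1 yes and 7 nays: C(K, 1) * C(40 - K, 7), for K = 1..33.
ways = [comb(K, x) * comb(N - K, n - x) for K in range(x, N - (n - x) + 1)]
total = sum(ways)                   # 350343565, which equals C(41, 9)

# Accumulated distribution of K given the sample, matching the list above.
cumulative, running = [], 0
for w in ways:
    running += w
    cumulative.append(round(running / total, 2))

print(ways[:3], total == comb(N + 1, n + 1))   # [15380937, 25240512, 30886416] True
print(cumulative)                              # 0.04, 0.12, ..., reaching 0.9 at K = 13
```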
Please help in German–English maths translation
Borsuk's conjecture#Conjecture status waits for someone to verify a translation from German. --CiaPan (talk) 15:01, 3 October 2008 (UTC)
I can provide you the final English translation of that quotation on Saturday (tomorrow) if you answer my stats question above today... :) --Sonjaaa (talk) 15:21, 3 October 2008 (UTC)
- I do not trade my skills here. And I really don't care what you can provide me, but would be glad if you can – and want to! – help to improve the article.
Anyway I hate statistics. --CiaPan (talk) 06:26, 7 October 2008 (UTC)
Grading on a Curve: Percents, Percentiles, and Z-Scores
Let us say that there is a college class with 100 students, and this group is being co-taught by two professors. Philosophically, Professor X and Professor Y agree that ideal grade distributions are: 10% A, 20% B, 40% C, 20% D, and 10% F. At the end of the year, each professor independently grades the class and this is how they grade. Professor X rank-orders the students' scores, high to low, from 1 to 100. Then, Professor X assigns an "A" grade to students ranked 1 through 10; a "B" grade to students ranked 11 through 30; a "C" grade to students ranked 31 through 70; a "D" grade to students ranked 71 through 90; and an "F" grade to students ranked 91 through 100. Professor Y approaches the task differently. Professor Y calculates a z-score for each student. A student whose z-score is at or above the 90th percentile (z = 1.28) earns an "A" grade. A student whose z-score is at or below the 10th percentile (z = -1.28) earns an "F" grade. And so forth with the appropriate z-score cut-offs for the "B" and "C" and "D" students. (To make the conversation easier, let us assume that there are no tie-scores and no tie-ranks at all.) Question 1: Will Professor X and Professor Y ultimately have the same final grades for the same students ... or will each method give different results? Question 2: If the results are different, why is that exactly? Question 3: If the class scores are (or are not) normally distributed, does that make any difference or not? Thanks. (Joseph A. Spadaro (talk) 16:58, 3 October 2008 (UTC))
- It all depends on the distribution of marks. Consider the case where 1 student gets 100%, 1 gets 99.99%, 1 gets 99.98% and so on for the first 50 down to 99.51%, and the other 50 have a similar distribution in the range 0% to 0.49%. Prof X will get the ideal grade distribution, but Prof Y will get half B's and half D's (if I've calculated it correctly). The difference is because Prof X forces the grades to be exactly the ideal distribution, whereas Prof Y calculates the grades based on individual marks and weights them so that the expected distribution for a normally distributed set of marks will be the ideal distribution. The actual grades will depend on the actual marks; just because a variable is normally distributed doesn't mean a sample will follow that distribution exactly (or even at all for a small sample). I would say Prof Y has the better method, since how good a student you are doesn't actually depend on how good your peers are - some years are just cleverer than other years due to all kinds of influences (including random fluctuation). As long as you try to keep the exam difficulty the same from year to year (not always easy, admittedly), you'll get fair grades. --Tango (talk) 00:11, 4 October 2008 (UTC)
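Tango's example can be checked directly; a minimal Python sketch (standard library only; the 0.52 cut-off for the B/C and C/D boundaries is the 70th/30th-percentile z-value that the original question leaves implicit):

```python
import statistics
from collections import Counter

# Tango's example: 50 marks just below 100% and 50 marks between 0% and 0.49%.
marks = [100 - 0.01 * i for i in range(50)] + [0.49 - 0.01 * i for i in range(50)]

# Professor X: rank the class and force the 10/20/40/20/10 split.
ranked = sorted(marks, reverse=True)
def grade_x(mark):
    rank = ranked.index(mark) + 1            # 1 = best; no ties by assumption
    if rank <= 10: return "A"
    if rank <= 30: return "B"
    if rank <= 70: return "C"
    if rank <= 90: return "D"
    return "F"

# Professor Y: z-score against the class mean and standard deviation,
# with cut-offs at the 90th/70th/30th/10th percentiles of a normal curve.
mu, sigma = statistics.mean(marks), statistics.pstdev(marks)
def grade_y(mark):
    z = (mark - mu) / sigma
    if z >= 1.28: return "A"
    if z >= 0.52: return "B"
    if z >= -0.52: return "C"
    if z >= -1.28: return "D"
    return "F"

print(Counter(grade_x(m) for m in marks))    # A:10 B:20 C:40 D:20 F:10
print(Counter(grade_y(m) for m in marks))    # B:50 D:50 -- half B's and half D's
```

Every mark in the top cluster sits about one standard deviation above the class mean and every mark in the bottom cluster about one below, so none of them reaches the ±1.28 cut-offs.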
- Thank you. Three follow-ups to your above comment. (1) If indeed the particular scores of this class were normally distributed, then Professor X's method and Professor Y's method will and should yield the exact same results ... is that correct? (2) To paraphrase, are you saying something along the lines of the following? Professor Y is basing his system on an idealized (normal) distribution, and then superimposing his actual student performance over that theoretical student performance (normal distribution). Professor X is artificially assuming that his group is indeed ideal (normal), whether they are or not. Is that what you are in essence saying? (3) You make the statement that: "Prof Y has the better method, since how good a student you are doesn't actually depend on how good your peers are." Under Professor Y's system, your entire grade (via your z-score) is indeed based on the performance of your peers, no? That is the very definition of your z-score ... comparing you with those being evaluated along with you. A student's z-score derives from the overall class mean and standard deviation. And thus, how well I do (via my z-score) is indeed contingent upon my comparison to my peers and how well they do. Is that not correct? Thanks. (Joseph A. Spadaro (talk) 01:19, 4 October 2008 (UTC))
- 1) Yes, I think so (to the extent that it makes sense to say that a sample is normally distributed - I think it's better to say that it's representative of a normal distribution, although that may just be me). 2) Not really, Prof X's method doesn't involve a normal distribution in any way. 3) Yes, good point. It does depend on your peers, but only via the class performance as a whole rather than individual peers, which is why it is better. It would be better still to calculate the z-scores compared to the last few years' data, which would account for differences between cohorts (although it requires consistent exams, which can be hard to achieve, especially if the syllabus changes). --Tango (talk) 12:17, 4 October 2008 (UTC)
- Re (2). No, professor X is making no assumption about whether the distribution is a normal distribution or not. He simply forces the 'ideal grade distribution' to be true each year, without regard to whether the students this year are better or worse than the students last year. If the answers and grades from two years are compared, the results may be unfair. The ambitious student selects a class of stupid peers in order to obtain a high grade. Basically, both professors are cheating. The grade should reflect the competence of the student, and not that of his peers. Bo Jacoby (talk) 04:38, 4 October 2008 (UTC).
- An interesting case of normalizing scores happens with setting up IQ tests where the normalization involves ranking the scores for a test sample of people and projecting them onto a normal distribution. So the z-score will be exactly the same for that sample. The distribution of types of questions can also be altered so different groups of people get the same average - this is to avoid bias! Dmcq (talk) 12:23, 4 October 2008 (UTC)
- That's what I was trying to avoid thinking about, too. IQ tests are done like that also. They grade people like Prof Y. For example, the IQ test which I'm studying, and have trouble accepting, is vulnerable to the scenario mentioned. It would be possible for the clusters of scores to skew the z-scores in a way that makes no intuitive sense. Sentriclecub (talk) 13:18, 4 October 2008 (UTC)
- For an IQ test to work they need to use a large enough sample for the normalisation so that the risk of clusters like that is minimal. The IQ reported is basically a z-score, just expressed in a different way (z-scores are arranged so the mean is 0 and the standard deviation is 1; for an IQ the mean is 100 and the standard deviation is fixed, although what it's fixed to varies from test to test). --Tango (talk) 15:19, 4 October 2008 (UTC)
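A minimal sketch of the rank-and-project normalisation described by Dmcq and Tango (Python standard library; the sample scores are made up for illustration, and 15 is just one common choice for the fixed standard deviation):

```python
from statistics import NormalDist

# Made-up raw scores from a normalisation sample.
raw = [12, 15, 19, 23, 24, 27, 31, 35, 40, 44]

# Rank each score, turn the rank into a percentile, project it onto a
# standard normal curve to get a z-score, then rescale to the IQ convention
# of mean 100 (the standard deviation, 15 here, varies between tests).
ranked = sorted(raw)
for score in raw:
    percentile = (ranked.index(score) + 0.5) / len(ranked)   # midpoint rule
    z = NormalDist().inv_cdf(percentile)
    print(score, round(z, 2), round(100 + 15 * z))
```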
Follow up
Thanks to all for the above comments. Here is my follow-up question. In the world of education (specifically, higher education) ... or even in the world of math / statistics / testing / evaluation / etc. ... is there any accepted "standard" of grade distribution? In the above example, I simply (and conveniently) "made up" the distribution of 10%-20%-40%-20%-10% for the A-B-C-D-F grade ranges. Are there any statistically sound or generally accepted distributions? Thanks. (Joseph A. Spadaro (talk) 14:46, 4 October 2008 (UTC))
- Not really. See grade inflation for one thing that stops such a distribution from existing. --Tango (talk) 15:19, 4 October 2008 (UTC)
- If you want a mathematically justified choice of grade distribution, you could choose a uniform distribution (i.e., 20%-20%-20%-20%-20%). This choice of distribution maximizes the information conveyed by a single grade. However, this choice ignores the non-mathematical issues which dominate the choice of a grade distribution. Eric. 131.215.159.210 (talk) 10:20, 5 October 2008 (UTC)
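Eric's information claim can be made concrete with Shannon entropy; a minimal Python sketch (reading "information conveyed by a single grade" as entropy in bits is my interpretation):

```python
from math import log2

def entropy_bits(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(q * log2(q) for q in p if q > 0)

uniform = [0.20] * 5                        # 20-20-20-20-20
curved = [0.10, 0.20, 0.40, 0.20, 0.10]     # the 10-20-40-20-10 split above

print(entropy_bits(uniform))    # log2(5), about 2.32 bits per grade
print(entropy_bits(curved))     # about 2.12 bits per grade
```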
- I guess that I would like to know ... what percent of a distribution would (statistically) be considered "exceptional", what percent would be considered "above average", "average", "below average", etc. Alas ... it is probably circular. The grade of A ("exceptional") means whatever X% the professor considers to be exceptional. Just wondered if there were any standards across fields of statistics / testing / psychology / measurements & evaluation / etc. Thanks. (Joseph A. Spadaro (talk) 21:47, 5 October 2008 (UTC))
- I don't think there is a standard definition. You may find outlier interesting; it gives some of the definitions people use. --Tango (talk) 12:58, 6 October 2008 (UTC)
- Thanks. I will look at that article. (Joseph A. Spadaro (talk) 14:57, 6 October 2008 (UTC))
- My impression, based on a few years of teaching nights at a local college, is that there is a three-part distribution: the A students identify themselves and so do the F students; the B-C-D folks might follow a Gaussian distribution. Gzuckier (talk) 15:40, 6 October 2008 (UTC)
Thanks to all for your input on my question. Much appreciated. (Joseph A. Spadaro (talk) 01:10, 7 October 2008 (UTC))