Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
February 18
Percentage/Sum calculation
Say I have 1000. I would like to give 20 percent to A, 20 percent to B, and 20 percent to C. How do I calculate (by pen and paper) how much I've given A, B and C? Step by step guide in simple terms please. -- Apostle (talk) 18:32, 18 February 2016 (UTC)
- The value of 20% is equivalent to the expression 0.20 in decimal notation. So, you would take 1,000 and multiply it by 0.20, which will yield 200. So, "A" gets 200; "B" gets 200; and "C" gets 200. Joseph A. Spadaro (talk) 21:31, 18 February 2016 (UTC)
- Do you mean that after you've given 20% to A, you give 20% of what's left to B, and then 20% of what's left again to C? If so, then you give 200 to A, as above, 0.2 x 800 = 160 to B, and 0.2 x 640 = 128 to C. Rojomoke (talk) 23:38, 18 February 2016 (UTC)
- Thank you guys -- Apostle (talk) 07:24, 19 February 2016 (UTC)
- BTW, on most calculators you would type:
1000 × 20 %
- Oddly, you don't want to hit the equals sign, as that might then multiply the 200 answer by 1000. StuRat (talk) 18:34, 19 February 2016 (UTC)
- Yes, I got it. I was mainly confused with deduction part... -- Apostle (talk) 19:43, 19 February 2016 (UTC)
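The two readings of the question discussed above can be sketched in Python (a minimal illustration; the function names are mine):

```python
def split_of_total(total, share, n):
    """Each recipient gets the same share of the original total."""
    return [total * share] * n

def split_of_remainder(total, share, n):
    """Each recipient gets a share of what remains after the earlier gifts."""
    gifts = []
    for _ in range(n):
        gift = total * share
        gifts.append(gift)
        total -= gift
    return gifts
```

`split_of_total(1000, 0.20, 3)` gives 200 to each of A, B and C, while `split_of_remainder(1000, 0.20, 3)` gives 200, 160 and 128, matching the two answers in the thread.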
February 19
Cartesian product in constructive mathematics
Is there a constructive proof (i.e. a proof in constructive mathematics) of the fact that if a Cartesian product of sets is a singleton, then all of the sets are singletons? Classically, if (x_i)_{i∈I} is the unique element of the Cartesian product ∏_{i∈I} X_i, and y ∈ X_j, then one can consider the family (z_i)_{i∈I} where z_j = y and z_i = x_i if i ≠ j, and from this deduce that y = x_j, showing that X_j is a singleton for all j. GeoffreyT2000 (talk) 23:25, 19 February 2016 (UTC)
- I may be missing something stupid but it seems like your proof works constructively. Let the Cartesian product be P = ∏_{i∈I} X_i, where an element f ∈ P is the tuple regarded as a function on I. I'll say a set S is a singleton if there is an x ∈ S such that every y ∈ S equals x. Then you want to prove that each X_j is a singleton. The proof is: let f be the unique element of P; given y ∈ X_j, define g by g(j) = y and g(i) = f(i) for i ≠ j; then g ∈ P, so g = f, so y = f(j). -- BenRG (talk) 03:16, 22 February 2016 (UTC)
February 21
Logical Ambiguity in Expression
Apparently there is a logical ambiguity in "Someone who smokes can’t appreciate this wine.", but I'm currently unable to see it. Thoughts?
- Belongs on language desk. The ambiguity is that "someone" might refer to one person whose name is unknown or otherwise unstated, or "someone" might be intended to mean "anyone". Loraof (talk) 01:26, 21 February 2016 (UTC)
- There are two different interpretations of the sentence. The first is "there is at least one person who smokes and can't appreciate this wine"; the second is "any person who smokes can't appreciate this wine". These would be expressed in the predicate calculus in different ways. — Preceding unsigned comment added by 88.105.123.227 (talk) 20:42, 21 February 2016 (UTC)
- Right, this comes loosely under Interpretation_(logic), and hence is basically the domain of the math desk. Some related info at Ambiguity#Mathematical_interpretation_of_ambiguity, see also perhaps vagueness. SemanticMantis (talk) 20:26, 22 February 2016 (UTC)
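The two readings identified above can be written in predicate calculus (the predicate names and the constant w for the wine are mine):

```latex
% Reading 1 (existential): some particular smoker cannot appreciate the wine
\exists x\,\bigl(\mathrm{Smokes}(x)\wedge\neg\,\mathrm{Appreciates}(x,w)\bigr)

% Reading 2 (universal): no smoker can appreciate the wine
\forall x\,\bigl(\mathrm{Smokes}(x)\rightarrow\neg\,\mathrm{Appreciates}(x,w)\bigr)
```

The ambiguity is precisely the choice between the existential and the universal quantifier over x.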
Correlation
When Pearson's r = 1.0 (perfect correlation), is the covariance always equal to the variance of both data sets? I think so. Also, is the standard deviation of both data sets always equal to the square root of 2? Schyler (exquirere bonum ipsum) 20:39, 21 February 2016 (UTC)
- No and no. — Preceding unsigned comment added by 88.105.123.227 (talk) 20:48, 21 February 2016 (UTC)
- The formula is covariance = r × [variance(first data set)]^{1/2} × [variance(second data set)]^{1/2}. If r=1 and the two variances are the same, then the covariance equals that variance. If the variances are different from each other, then the question cannot be interpreted. The variances could be anything, not just 2. Loraof (talk) 23:22, 21 February 2016 (UTC).
- Covariance and Pearson_product-moment_correlation_coefficient are our main articles, but also see Anscombe's quartet. SemanticMantis (talk) 20:34, 22 February 2016 (UTC)
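The formula above can be checked numerically. With y a perfect linear function of x, r = 1 and the covariance equals the product of the two standard deviations, which need not equal the variance of either data set (helper names are mine; population rather than sample statistics):

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    """Population covariance; covariance(xs, xs) is the variance."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def pearson_r(xs, ys):
    return covariance(xs, ys) / sqrt(covariance(xs, xs) * covariance(ys, ys))
```

For xs = [1, 2, 3, 4] and ys = [3, 5, 7, 9] (so y = 2x + 1), r = 1 and the covariance is 2.5, while the variances are 1.25 and 5.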
February 22
Robot baseball players...
If a baseball team were to consist of robot baseball players that struck out 70% of the time and walked 30% of the time, what is the average number of runs per inning they would get? Naraht (talk) 07:21, 22 February 2016 (UTC)
- Generalize a bit and say you get a walk with probability p and strike out with probability 1-p=q. The number of walks W you get before the third strikeout follows a negative binomial distribution; in this case
- 0: q³
- 1: 3pq³
- 2: 6p²q³
- 3: 10p³q³
- 4: 15p⁴q³
- etc.
- The number of runs is max(0, W-3) and we're looking for the expected value of this or
- 1⋅15p⁴q³ + 2⋅21p⁵q³ + 3⋅28p⁶q³ + ...
- which, according to my calculations, is
- p⁴q⁻¹(15 − 18p + 6p²).
- --RDBury (talk) 10:44, 22 February 2016 (UTC)
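The closed form above can be checked against a Monte Carlo simulation (function names are mine; for the question's p = 0.3 both give about 0.117 runs per inning):

```python
import random

def simulate_inning(p, rng):
    """Walks-only robots: each plate appearance is a walk (probability p)
    or a strikeout (probability 1 - p). A run scores for every walk after
    the first three, i.e. runs = max(0, walks - 3)."""
    walks = outs = 0
    while outs < 3:
        if rng.random() < p:
            walks += 1
        else:
            outs += 1
    return max(0, walks - 3)

def mean_runs(p, innings=200_000, seed=0):
    rng = random.Random(seed)
    return sum(simulate_inning(p, rng) for _ in range(innings)) / innings

def closed_form(p):
    """RDBury's expected runs per inning."""
    q = 1 - p
    return p**4 / q * (15 - 18*p + 6*p**2)
```

With p = 0.3, `closed_form(0.3)` is about 0.1173 and the simulation agrees to within sampling error.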
Zenithal map projection questions
Our article on the subject implies that it is possible to have a map projection of a sphere onto a plane that preserves direction from two points to every other point. However, I have found no link to a map projection that preserves directions for more than one point. Is our article wrong?--Leon (talk) 21:40, 22 February 2016 (UTC)
- I don't know if it has a name, but such a projection is easy to describe mathematically. Call your two special points p1 and p2. To know where a third point q on the sphere appears on the projection, measure the azimuth of q from p1 and from p2 on the sphere. On the projection, draw a line from p1 at the measured azimuth, and draw a line from p2 at the other measured azimuth. q lies at the intersection of the two lines. Egnau (talk) 16:03, 23 February 2016 (UTC)
- Just to clarify a bit, the article states this in the first bullet point in the section "Classification". Also note that this projection would only work for a patch, not the entire sphere. For example if p1 and p2 are on the equator and q is one of the poles then the directions would both be north/south and the third vertex of the triangle in the plane would be at infinity. In fact antipodal points would always map to the same point in the plane, so the projection would only work for a half-sphere at most. Also also note that p1 and p2 can't themselves be antipodal, otherwise the directions from q would be parallel again and the intersection in the plane would not be defined. Perhaps for these reasons, and the fact that such a projection would be highly distorted near the edges, this projection is rarely used in cartography and not notable in that sense. --RDBury (talk) 17:46, 23 February 2016 (UTC)
- Interesting. Thanks!
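Egnau's construction can be sketched numerically. The plane positions of p1 and p2 are arbitrary up to similarity; the initial-bearing formula is the standard one for a sphere, but the helper names and the overall sketch are my own, not a named projection:

```python
import math

def azimuth(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2 on a sphere
    (radians, 0 = north, increasing eastward)."""
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.atan2(x, y)

def two_point_azimuthal(q, p1, p2, P1=(0.0, 0.0), P2=(1.0, 0.0)):
    """Project q so the bearings from p1 and p2 are preserved.
    P1, P2 are the chosen plane positions of p1 and p2, north 'up'."""
    a1 = azimuth(*p1, *q)
    a2 = azimuth(*p2, *q)
    # Plane direction for a bearing a: north = +y, east = +x.
    d1 = (math.sin(a1), math.cos(a1))
    d2 = (math.sin(a2), math.cos(a2))
    # Intersect the rays: solve P1 + t*d1 = P2 + s*d2 by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = P2[0] - P1[0], P2[1] - P1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (P1[0] + t * d1[0], P1[1] + t * d1[1])
```

As RDBury notes, the division fails (det = 0) exactly when the two bearings are parallel, e.g. for antipodal points or a pole seen from two equatorial points.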
- My next question (the title was "questions"): retroazimuthal projections allow someone to find the direction from any point B to a special point A. But, it's not clear how this would work with the Hammer retroazimuthal projection, for example, as the meridians are not straight lines. How would you go about taking an angle to define a bearing towards your chosen point? It's obvious with the Craig retroazimuthal projection as the meridians are vertical, but with the others I'm not so sure.--Leon (talk) 18:50, 23 February 2016 (UTC)
- You're supposed to align the "up" direction of the map with north, and get your bearing that way. The drawn meridians are only a distraction. In the Hammer example, if you take B = 15°S, 165°W and instead align the meridian at B with north, then the map would tell you to travel due north to reach A = 45°N, 90°W which is nonsense. Egnau (talk) 15:09, 24 February 2016 (UTC)
February 23
tensor product 0 means coprime annihilators
Let M, N be two finitely generated modules over a commutative ring R, such that Ann(M) + Ann(N) is a proper ideal. I try to see why this implies that M ⊗_R N ≠ 0. How is it done? Is there some bilinear function defined on M × N that we can show not to be the zero function?--46.117.106.166 (talk) 18:59, 23 February 2016 (UTC)
- Quick and dirty approach: quotient down by a maximal ideal that contains Ann(M) + Ann(N). Then M ⊗_R N quotients down to a tensor product of nonzero vector spaces over the same field. Sławomir Biały 20:02, 23 February 2016 (UTC)
- I'm not sure I understood your suggestion. Is it the following argument? Supposing to the contrary that M ⊗_R N = 0, we tensor it twice with R/I, where I is a maximal ideal containing both annihilators. Then we get (M/IM) ⊗_{R/I} (N/IN) = 0, which is a tensor product of two vector spaces over R/I, hence WLOG M/IM = 0, so M = IM, hence by Nakayama's lemma there is some r ≡ 1 (mod I) such that for all m, we have rm = 0. Hence r ∈ Ann(M) ⊆ I and r − 1 ∈ I, so 1 ∈ I, contradiction.--46.117.106.166 (talk) 20:52, 23 February 2016 (UTC)
Open the longest
What organization anywhere in the world has been continuously open the longest? The Facebook page for this mental hospital says it has been open continuously since June 15, 1841 — Preceding unsigned comment added by Diddlesticks355454646dddddddd (talk • contribs) 19:02, 23 February 2016 (UTC)
- That doesn't answer the question at all. I ask for ones that have been "continuously open" the longest. As you can see from the link I provided, the mental hospital claims to have been open 24 hours a day 7 days a week since 1841. Is that the longest or does something else beat it? I am NOT asking for oldest companies, I am asking for ones that have been continuously open the longest, ie 24/7 the longest without ever shutting down for Christmas or Thanksgiving or Sundays etc Diddlesticks355454646dddddddd (talk) 19:42, 23 February 2016 (UTC)
- Define "open". Define "continuously". Define "organisation". You might think in terms of, say, the Royal Navy, which has been plodging around in boats for quite a while. Or the Catholic Church. Or Judaism. In the absence of definitions, it may be difficult to assist you. --Tagishsimon (talk) 20:33, 23 February 2016 (UTC)
Probability of duplicated filenames
I asked this question because I store all the pictures I have taken with my Olympus E-620 DSLR camera in a directory structure in the form AAA/BBBB, where AAA is a running number from 100 onwards and BBBB is a number from 0 to 9800, in steps of 200. Olympus digital cameras name their photographs in the form mddnnnn, where m is the month, from 1 to c (hexadecimal is used to avoid spending an extra digit), dd is the day of the month and nnnn is a running number from 1 to 9999. I store all the pictures in the order of the running numbers, making a new AAA directory every time the counter resets to 1. If a month or year changes in between, I don't care about it.
What is the probability of a single filename occurring in multiple directories, in terms of the number of AAA directories and the number of years I've been using the camera? JIP | Talk 20:24, 23 February 2016 (UTC)
- How many pictures do you take in a year? 175.45.116.60 (talk) 22:51, 23 February 2016 (UTC)
- It varies between fifty and one hundred thousand. JIP | Talk 04:51, 24 February 2016 (UTC)
- I think that you are definitely likely to have a problem with different files having the same name. I had that problem with my first digital camera which gave file names DSC_xxxx, where xxxx is a four-digit number. I got duplications after it rolled over 9999. I wrote a program to rename the files by changing the "DSC" to encoding the year and month. With my current camera, I can set what it assigns for the first three characters of the file name. Right now it is set to "A16", the 16 is for the year, and "A" is the first set of 10,000. If I reach 10,000 in "A" while still in 2016, I'll change the setting to "B16", etc. Bubba73 You talkin' to me? 05:07, 24 February 2016 (UTC)
- I already know I have files with the same name. That's not a problem as they go to different directories. I want to know, based on this information, what is the average probability of more than one file in different directories having the same name. JIP | Talk 06:08, 24 February 2016 (UTC)
- I can't answer your question, but surely if there is one match between two directories (eg DSC12345) there are likely to be several more sequential matches (DSC12346, DSC12347, DSC12348 etc) unless the names are more random. -- SGBailey (talk) 14:24, 24 February 2016 (UTC)
- On average, you take roughly 200 pictures per day. That means that in principle you could go maybe 50 years without duplicating a label, but in practice it will happen much sooner -- it's a birthday problem type question, but more complicated than the usual one in that you are packing intervals of length 200 into a larger list of length 10000. That's for a single day; this is happening simultaneously for every day of the year, although of course the outcomes for nearby days are related (if, in year 2, you didn't have overlap with the previous year for pictures on Jan 1, you also probably won't have overlap on Jan 2). I think that, realistically, it would be easier and more reliable to produce an empirical answer by simulating 10000 copies of yourself on a computer than it would be to try to do anything analytic. (Also this will allow you to tinker with the model of how you take photos to find something realistic, even if it's not analytically nice.) --JBL (talk) 14:52, 24 February 2016 (UTC)
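Along the lines suggested above, a simulation is straightforward to sketch. The photo-count model below (a Gaussian number of shots per day) is my own assumption, not JIP's actual data, and the names are mine. A duplicate filename mddnnnn requires the same month/day and the same counter value, so within a single simulated year there are none:

```python
import random

DAYS_PER_YEAR = 365
COUNTER_MAX = 9999

def has_duplicate(years, photos_per_day=200, rng=None):
    """True if any (calendar day, counter value) pair - i.e. any filename
    mddnnnn - repeats across the simulated years."""
    rng = rng or random.Random()
    seen = set()
    counter = rng.randrange(1, COUNTER_MAX + 1)
    for _ in range(years):
        for day in range(DAYS_PER_YEAR):
            shots = max(0, round(rng.gauss(photos_per_day, photos_per_day / 4)))
            for _ in range(shots):
                if (day, counter) in seen:
                    return True
                seen.add((day, counter))
                counter = counter % COUNTER_MAX + 1  # wrap 9999 -> 1
    return False

def duplicate_probability(years, trials=100, seed=1):
    rng = random.Random(seed)
    return sum(has_duplicate(years, rng=rng) for _ in range(trials)) / trials
```

Varying `photos_per_day` and its spread is exactly the "tinker with the model" step: the probability depends strongly on how far the counter drifts from one year's calendar day to the next.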
February 24
Help me with question of reliability of mixing frequency probability with bayesian probability
I need your help because I cannot find the answer in any Mathematics textbook. There are two types of probability. Frequency probability and Bayesian probability. I have no problems with using both of them. I trust the result of the outcomes of both of them. But the problem I have is that I have full confidence in them only when I am using them by themselves.
My problem arises when I have a mathematical problem where half the probabilities are derived from frequency probabilities, the other half are derived from Bayesian probabilities, and the final result comes from a procedure that utilizes both kinds of probability. Now I am completely unsure of how much confidence I can place in the result of such a calculation. No textbook tells me what would happen when both these types of probabilities are mixed together.
Can someone please enlighten me? 175.45.116.60 (talk) 03:14, 24 February 2016 (UTC)
- Can you give an example where they both appear in the same problem? Loraof (talk) 15:08, 24 February 2016 (UTC)
- You are talking about using methods from both frequentist and Bayesian schools. These are classified as Probability_interpretations. Both have ways of estimating some sort of confidence in a result. They are called confidence interval and credible interval, note the sections Credible_interval#Confidence_interval and Confidence_interval#Credible_interval. Anyway, these are notions of certainty that work within the rules of an interpretation, but they have nothing to do with your confidence in the validity of the method, or your confidence that you performed the method correctly, etc. I'm not sure if that gets at the source of your concern, but there is in principle nothing wrong with using e.g. Bayesian methods to estimate a probability distribution and then using that distribution as part of some additional non-Bayesian methods. At the same time, there are tons of ways you can mix and match Bayesian and frequentist inference that are totally meaningless and useless. So there is no general rule for or against using methods associated with the different interpretations, and your confidence in such a method is not addressable within the scope of those methods, but rather lies in your own approach to epistemology and doxastic logic. Maybe an example will help: define statement S="X is in (0,100) with 95% confidence". Now, S is a statement that may be derived from a frequentist approach, but no frequentist method will allow you to say "I believe statement S is true with 90% confidence", or "I am 85% confident that I made no errors when deriving statement S". SemanticMantis (talk) 15:31, 24 February 2016 (UTC)
- To amplify what Loraof asked above, what do you mean in particular by "half the probabilities are derived from frequency probabilities" ? Are you simply referring to tabulations of observed frequencies? (In which case you may need some kind of smoothing method for items or categories that have low observed counts). Or are you talking about the outputs of frequentist procedures -- which are mostly not probabilities? Most practical statisticians these days in practice are "eclectic", open to using a variety of Bayesian and frequentist and empirical methods, depending on the problem at hand. But you do need to give us more information about what sort of things you are trying to combine, and why. Jheald (talk) 16:08, 24 February 2016 (UTC)
Divisible abelian groups
Let G be an abelian group and let H be the intersection of the subgroups nG where n ranges over the positive integers. Is H always divisible? GeoffreyT2000 (talk) 03:33, 24 February 2016 (UTC)
- Yes. If x is in H, then for any n, there exists y such that x=ny, by definition of H. Sławomir Biały 14:11, 24 February 2016 (UTC)
- But y need not be in H. If ny = x and nmz = x, then nmz = ny but this need not imply mz = y unless G is torsion-free. GeoffreyT2000 (talk) 18:05, 24 February 2016 (UTC)
- Hmm... right. That suggests perhaps a counterexample is possible. Sławomir Biały 18:54, 24 February 2016 (UTC)
Simple monotonic functions that asymptotically approach a value from below
I'm looking for simple smooth monotonically increasing functions f(x) that have all the following properties:
- f(0) = 0
- as x approaches infinity, f(x) asymptotically approaches, from below, a positive constant c
What are the simplest functions you can think of that fit these conditions? Thanks.
—SeekingAnswers (reply) 13:18, 24 February 2016 (UTC)
- Provided you are satisfied with monotonically increasing for x > 0, there is the family of rational functions f(x) = cx/(x + k) with k > 0. The smaller the value of k, the more rapidly the function approaches c. Gandalf61 (talk) 13:32, 24 February 2016 (UTC)
- (ec) In general, for k > 1, k^(−x) asymptotically approaches 0, so c − k^(−x) approaches the constant. Now you need to shift that graph left until f(0) = 0. The simplest I can come up with is f(x) = c(1 − k^(−x)). The smaller k, the more gradual the asymptotic approach. --Stephan Schulz (talk) 13:36, 24 February 2016 (UTC)
- How about f(x) = k * ( 1 - c^x ) ? -- SGBailey (talk) 14:19, 24 February 2016 (UTC)
- is a standard one. Sławomir Biały 15:04, 24 February 2016 (UTC)
- f(x) = c(1 − e^(−kx)), for an exponential decay in the difference from the constant, very common as a solution to physical applications. Jheald (talk) 15:14, 24 February 2016 (UTC)
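Two of the suggested families can be checked numerically against the stated conditions (the parameter choices below are mine):

```python
import math

def rational_approach(x, c=2.0, k=1.0):
    """f(x) = c*x/(x + k): f(0) = 0, increasing, -> c from below for x >= 0."""
    return c * x / (x + k)

def exp_approach(x, c=2.0, k=1.0):
    """f(x) = c*(1 - e^(-k*x)): f(0) = 0, increasing, -> c from below."""
    return c * (1 - math.exp(-k * x))
```

The rational family approaches c only polynomially fast (the gap is ck/(x + k)), while the exponential family closes the gap like ce^(−kx).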
Integer Sequences with all levels of differences increasing?
For a sequence A, define dA as the sequence made up of the differences between consecutive terms. So if A is 1, 3, 5, 8, 100, ..., dA is 2, 2, 3, 92, ... and ddA is 0, 1, 89, ... . I'm looking for how to generate a sequence A where for all n, d^nA has only positive values in it. Setting A equal to the powers of 2 does so because A = dA = ddA, etc. However, are there integer sequences which grow more slowly than this for which this is true? (I'm thinking not.) Naraht (talk) 16:34, 24 February 2016 (UTC)
- Invert the transformation to write a_n in terms of the first entries of the iterated difference sequences, a_n = Σ_{k=0}^{n−1} C(n−1, k)·(d^kA)_1, and it's easy to see that positivity of every d^kA implies a_n ≥ Σ_k C(n−1, k) = 2^(n−1). --JBL (talk) 16:46, 24 February 2016 (UTC)
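Numerically, the difference transform is inverted by Newton's forward-difference formula, a_n = Σ_k C(n−1, k)·(d^kA)_1, which is why all-positive integer difference levels force at least power-of-two growth. A sketch (helper names mine):

```python
from math import comb  # Python 3.8+

def diff(seq):
    """One level of differences: dA."""
    return [b - a for a, b in zip(seq, seq[1:])]

def first_entries(seq):
    """First entry of A, dA, ddA, ... (finite sequence)."""
    out = []
    while seq:
        out.append(seq[0])
        seq = diff(seq)
    return out

def newton_reconstruct(firsts, n):
    """Newton's forward-difference formula: recover a_n from the first
    entries of the iterated difference sequences."""
    return sum(comb(n - 1, k) * firsts[k] for k in range(min(n, len(firsts))))
```

If every first entry is an integer ≥ 1, the sum is at least Σ_k C(n−1, k) = 2^(n−1), matching the powers of 2.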
Calculating percent below X on normal distribution curve
I think this is a very easy problem, just one I haven't encountered. I have the average and standard deviation for a normal distribution curve. For this example, assume it is avg=123 and standard deviation=16. I want to know what percent of the population being measured is below 140. I started with trying to calculate the value of the curve at 140. I used a rather nasty looking formula: (1/(sdev * sqrt(2*PI)))*exp(-1*(pow(140-avg,2)/(2*pow(sdev,2)))). However, that gives me 0.0142. I expect it to be much higher. So, I checked the value at the mean, 123. I got 0.0249. This tells me that the max height of the curve is 0.0249 or that the formula I am using is completely wrong. So, I thought I'd ask here. Am I on the right track and my formula is wrong, or do I need to tackle this in a completely different way? 209.149.114.211 (talk) 19:44, 24 February 2016 (UTC)
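The two values computed in the question (0.0249 at the mean, 0.0142 at 140) are heights of the density curve, which is why they look small; the fraction of the population below 140 is the area under the curve up to 140, i.e. the cumulative distribution function. A sketch using the error function (helper names mine):

```python
from math import erf, exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Height of the bell curve at x (what the question's formula computes)."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def normal_cdf(x, mu, sigma):
    """Fraction of the population below x (area under the curve up to x)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
```

`normal_cdf(140, 123, 16)` is about 0.856, i.e. roughly 85.6% of the population falls below 140 (z = 17/16 ≈ 1.06 standard deviations above the mean).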