Wikipedia:Reference desk/Mathematics: Difference between revisions
Deeptrivia (talk | contribs) →Matrix Eigenvalues: added
where O is the zero matrix, I is the identity matrix, B is a circulant tridiagonal matrix with elements (-1,2,-1) and C and D are diagonal matrices with constant diagonal terms (in other words, a scalar times the identity matrix). S is a circulant matrix (or a diagonal matrix, if that helps). I am hoping that the presence of a large number of O's, I's and simple matrices would lead to a closed-form solution for the eigenvalues and the eigenvectors. Any help will be sincerely appreciated. [[User:deeptrivia|deeptrivia]] ([[User talk:deeptrivia|talk]]) 03:35, 22 February 2012 (UTC)
Revision as of 03:37, 22 February 2012
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
February 14
Number naming system
Why is the system for naming large numbers built around 1000 rather than some other power of 10 (e.g. 10000)? --108.225.117.174 (talk) 01:04, 14 February 2012 (UTC)
- Completely arbitrary choice, I think. Ancient Egyptians had dedicated names for powers of 10 up to 1,000,000. Greeks had a system based on 10,000. East Asians use 10,000 as well: see Myriad#History and usage.--Itinerant1 (talk) 01:24, 14 February 2012 (UTC)
- I suspect it's related to why we put commas (or decimals) every three digits. That is, you can remember only so many digits at a time, maybe around 3-4. So, dividing numbers up in groups of 3 or 4 makes sense. I occasionally need to copy a number without divisions every few digits (like with some web sites that want a credit card number but don't allow spaces) and find it quite irritating. StuRat (talk) 02:09, 14 February 2012 (UTC)
- See Digit grouping. Note that "groups of three" is not a cultural universal, even today. Lakh describes how in India the least three significant digits of an integer are grouped, and subsequent digits are grouped in pairs. As StuRat suggests, their naming is reflected by this grouping. Powers of ten in Thai all have their own names, and are not grouped. Where you would call 2,345,678 "two million, three hundred forty-five thousand, six hundred seventy-eight", in Thai ๒๓๔๕๖๗๘ would be spoken as "song lan, sam saen, si muen, ha phan, hok roi, chet sip paet" -- literally "two million, three hundred-thousand, four ten-thousand, five thousand, six hundred, seventy-eight". -- 110.49.235.82 (talk) 06:38, 14 February 2012 (UTC) (I've used our article's Thai transliterations here. What little Thai I know I learned phonetically, and have come up with my own transliteration which often fails to precisely match what I see written.)
- Apparently they do group the last two digits. Otherwise it would end "...seven tens and eight ones". StuRat (talk) 21:57, 14 February 2012 (UTC)
- Only in the same sense as with English where, "Spelled-out two-word numbers from 21 to 99 are hyphenated (e.g. fifty-six)" (from WP:Numbers#Typography -- do we have a real article which gives a name to this practice?) and certain two digit numbers have their own special names ("eleven, twelve, ... nineteen" in English and "sip et, song sip et, ... kao sip et" (eleven, twenty-one, ... ninety-one, where "et" is used for one instead of the regular "nueng") in Thai). While we write and speak it this way, we don't usually think of this as a "group of two" in English, do we? Also note that the commas I put in the transliteration wouldn't be present in the actual Thai as they don't even put spaces between words. -- 203.82.91.156 (talk) 02:59, 15 February 2012 (UTC) (Previously posted as 110.49.235.82 -- Thailand yesterday, Malaysia today.)
Is there an accepted name for systems of naming numbers where powers of 1000 play an important role, say, rather than 10000? Also, the metric system seems awfully unfair on East Asians. 96.46.204.126 (talk) 00:18, 15 February 2012 (UTC)
- I would say "chiliadic" and "myriadic", but that's just me. I believe that the Japanese have names for the two systems, though, but I don't remember exactly what those names are. 75.40.137.93 (talk) 10:33, 23 February 2012 (UTC)
RANSAC number of iterations
In the article on RANSAC it is said that the k (number of iterations required) in this formula
k = log(1 - p) / log(1 - w^n)
is for the case where the n points are selected independently and that "the derived value for k should be taken as an upper limit in the case that the points are selected without replacement". Is that saying that k, as calculated by this formula is slightly bigger than it needs to be or smaller than it needs to be? In my algorithm I need to select without replacement and I want to play it safe, so is k large enough? 41.164.7.242 (talk) 11:21, 14 February 2012 (UTC) Eon
- Slightly larger than it needs to be. If you sample without replacement, then the likelihood that a later iteration selects a set of points which are all inliers is higher than for earlier iterations. If you sample with replacement, it's constant over all iterations. 131.111.255.9 (talk) 17:07, 14 February 2012 (UTC)
- I'm pretty sure the article talks about replacement after each datum selection within a single iteration. After each iteration the state (other than a random seed) resets. Hence neither the "with replacement" nor the "without replacement" scenarios they are referring to has the current iteration affecting future iterations. What puzzles me is that it instinctively feels like k should be larger if one (like I have to do) doesn't do replacement. 196.215.142.253 (talk) 20:16, 14 February 2012 (UTC) Eon
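As a rough sketch (not taken from the article itself), assuming the standard RANSAC relation k = log(1 - p) / log(1 - w^n) with w the inlier ratio and p the desired probability that at least one sample is outlier-free, a couple of lines of Python make the "upper limit" reading concrete:

```python
import math

def ransac_iterations(p, w, n):
    """Iterations k so that, with probability p, at least one sample of n
    points is all inliers, when the inlier ratio is w and points are drawn
    independently (i.e. with replacement)."""
    return math.log(1 - p) / math.log(1 - w ** n)

# e.g. 99% confidence, 60% inliers, minimal sample of 4 points
print(ransac_iterations(0.99, 0.6, 4))   # about 33; sampling without replacement only
                                         # raises the per-iteration success chance,
                                         # so this k errs on the safe (large) side
```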
Diophantine equations
I'm in the throes of doing Project Euler problem 66 which involves solving x^2 - D*y^2 = 1.
I have ascertained that continued fractions of D^0.5 are involved and it often works. You evaluate the continued fraction till just before the repeat
EG 7
7^0.5 = [2;1,1,1,4]
2+ 1/ (1+ 1/ (1+ 1/ 1))) = (I've put cont fract coeffs in bold)
2+ 1/ (1+ 1/ (1+ 1))) =
2+ 1/ (1+ 1/ 2)) =
2+ 1/ (3/ 2)) =
2+ 2/3 =
8/3
And we have 8^2 - 7*3^2 = 64 - 7*9 = 64-63 = 1. OK
EG 19
19^0.5 = [4;2,1,3,1,2,8]
4+ 1/ (2+ 1/ (1+ 1/ (3+ 1/ (1+ 1/2 )))) =
4+ 1/ (2+ 1/ (1+ 1/ (3+ 1/ (3/2 )))) =
4+ 1/ (2+ 1/ (1+ 1/ (3+ 2/3 ))) =
4+ 1/ (2+ 1/ (1+ 1/ (11/3 ))) =
4+ 1/ (2+ 1/ (1+ 3/11 )) =
4+ 1/ (2+ 1/ (14/11 )) =
4+ 1/ (2+ 11/14 ) =
4+ 1/ (39/14 ) =
4+ 14/39 =
170/39
And we have 170^2 - 19*39^2 = 28900 - 19*1521 = 28900-28899 = 1. OK
But it doesn't seem to work for 13
IE 13
13^0.5 = [3;1,1,1,1,6]
3+ 1/ (1+ 1/ (1+ 1/ (1+ 1/ 1))) =
3+ 1/ (1+ 1/ (1+ 1/ (1+ 1))) =
3+ 1/ (1+ 1/ (1+ 1/2)) =
3+ 1/ (1+ 1/ ( 3/2)) =
3+ 1/ (1+ 2/3) =
3+ 1/ ( 5/3) =
3+ 3/5 =
18/5 =
And we have 18^2 - 13*5^2 = 324 - 13*25 = 324-325 = -1. WHY -1?
What am I doing wrong? -- SGBailey (talk) 11:26, 14 February 2012 (UTC)
- You might find the article on Pell's equation useful with this. Basically the problem you're running up against is that the period of the continued fraction for √13 has odd length and the technique you're trying only works for even period lengths. There is a simple work-around though: just use two periods instead of one to force the length to be even.--RDBury (talk) 12:24, 14 February 2012 (UTC)
- Thanks - worked a treat. -- SGBailey (talk) 16:06, 14 February 2012 (UTC)
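A sketch of the procedure described above in Python, assuming the usual integer recurrence for the continued fraction of √D, and doubling the period when its length is odd as RDBury suggests:

```python
from fractions import Fraction
import math

def sqrt_cf(D):
    """Continued fraction of sqrt(D): returns (a0, periodic part)."""
    a0 = math.isqrt(D)
    if a0 * a0 == D:
        raise ValueError("D must not be a perfect square")
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:                      # the period ends when a = 2*a0
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period

def pell_solution(D):
    """Smallest (x, y) with x^2 - D*y^2 = 1: the convergent just before the
    end of one period (two periods if the period length is odd)."""
    a0, period = sqrt_cf(D)
    terms = period * (1 if len(period) % 2 == 0 else 2)
    value = Fraction(0)
    for a in reversed(terms[:-1]):          # evaluate the truncated fraction
        value = 1 / (a + value)
    value = a0 + value
    return value.numerator, value.denominator

for D in (7, 13, 19):
    x, y = pell_solution(D)
    print(D, x, y, x * x - D * y * y)       # last column should be 1
```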
Planar graphs and coloring
I was trying to read the following proof I found on the internet and could not understand the first line. The graph in question is planar (and that's all we know about it):
" Without loss of generality, we may assume that all the faces of our graph are triangular, because adding edges just makes a graph harder to color. (We are coloring vertices.) Let V, E and F be the number of vertices, edges and faces. Every edge is in two faces, and every face has three edges, so F=(2/3)E. We have V−E+F=2 by Euler's relation, so V=E/3+2. Since the sum of all vertex degrees is 2E, the average degree of a vertex is 2E/V=(2E)/(E/3+2)<6. "
My question is how can one assume without loss of generality that the faces of our graph are triangular? Thanks-Shahab (talk) 14:55, 14 February 2012 (UTC)
- Suppose you had a face that wasn't triangular; maybe a square. Add an edge going across it, dividing the face into two triangles. Since adding edges makes coloring harder, if you can color the new graph, you can color the old graph, and you've gotten rid of one non-triangle face. Repeat for all faces.--121.74.118.113 (talk) 15:08, 14 February 2012 (UTC)
- (edit conflict) What they are claiming is that assuming triangular faces does not affect their claim about coloring planar graphs. The rationale is that adding edges makes a graph harder to color, because more edges means more conditions for each vertex's color to satisfy. Look at it this way: if given a graph with non-triangular faces, add in edges until it has all triangular faces. Then, apply the coloring algorithm (if it's a constructive proof), or convince yourself that the valid coloring exists (if the proof is existential). When you are done with the coloring, you can remove the edges that you added, and still have a valid coloring for the original graph. Does that help? SemanticMantis (talk) 15:10, 14 February 2012 (UTC)
- It does. Thank you both-Shahab (talk) 15:16, 14 February 2012 (UTC)
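A tiny sketch of the edge-adding step described above, assuming a face is represented as its cycle of vertices: "fan" chords from one vertex triangulate it, and since extra edges only add adjacency constraints, any proper colouring of the augmented graph is also a proper colouring of the original.

```python
def fan_triangulate(face):
    """Given a face as a list of vertices in cyclic order, return the chords
    to add so every resulting face inside it is a triangle (a fan from face[0])."""
    return [(face[0], face[i]) for i in range(2, len(face) - 1)]

print(fan_triangulate(["a", "b", "c", "d", "e"]))   # [('a', 'c'), ('a', 'd')]
```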
February 15
A Quick Proof
Just a quick question here to all, my algebraic topology is a bit rusty. I know that a 2-torus is not homeomorphic to a 3-torus. And my argument is that because their fundamental groups are not isomorphic there is no way to continuously deform one onto another. Is this right? And I am pretty sure there is a theorem like this (which I am using) but I can't remember its name. Oh and just for curiosity, what are their fundamental groups? And yes, the context stems from Valentine's day, having an argument whether human male and female bodies are topologically equivalent or not. I say they aren't. :-) - Looking for Wisdom and Insight! (talk) 03:48, 15 February 2012 (UTC)
- The fundamental group is a homeomorphism invariant; that means that if two spaces are homeomorphic, they have the same fundamental group. Topologists would never have bothered studying the fundamental group if it weren't homeomorphism invariant, just as algebraists are only interested in properties of groups which are isomorphism invariant. So yes, your reasoning is correct. I don't think this fact has a name; it's a straightforward check.
- The fundamental group of a product is the product of the fundamental groups. Since the 2-torus is the product of two circles, while the 3-torus is the product of three, that makes their fundamental groups Z×Z and Z×Z×Z, respectively.--121.74.100.56 (talk) 05:05, 15 February 2012 (UTC)
Wait, now I am confused by something in your response. I thought that the 1-torus is the ordinary torus (a doughnut with one hole). A 2-torus is a double torus and it has two holes, from attaching two 1-tori together. And a 3-torus (triple torus) has three holes. So I thought that the fundamental group of an ordinary 1-torus is just Z×Z. So wouldn't the fundamental group of a 2-torus be (Z×Z)×(Z×Z) and the fundamental group of a 3-torus be (Z×Z)×(Z×Z)×(Z×Z)? - Looking for Wisdom and Insight! (talk) 08:14, 15 February 2012 (UTC)
- Sorry, I interpreted 2-torus and 3-torus to refer to the number of dimensions, not the number of holes. In retrospect, that doesn't make much sense when discussing humans.
- The fundamental groups of the double and triple torus aren't that simple; you need to use the Seifert–van Kampen theorem to compute them (the computation is on that page). In particular, they're not abelian. For the n-torus, the fundamental group is Z^n.--121.74.100.56 (talk) 08:35, 15 February 2012 (UTC)
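For reference, the two computations mentioned above, written out (the n-dimensional torus as a product of circles, and the Seifert–van Kampen presentation of the closed orientable genus-g surface Σ_g):

```latex
\pi_1(T^n) \;\cong\; \pi_1(S^1)\times\cdots\times\pi_1(S^1) \;\cong\; \mathbb{Z}^n,
\qquad
\pi_1(\Sigma_g) \;\cong\; \bigl\langle a_1,b_1,\dots,a_g,b_g \;\bigm|\; [a_1,b_1][a_2,b_2]\cdots[a_g,b_g]=1 \bigr\rangle .
```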
- Maybe I am missing something here, but rather than calculating fundamental groups, wouldn't it be simpler to argue that the 2-torus cannot be homeomorphic to the 3-torus because they have different Euler characteristics ? Gandalf61 (talk) 09:04, 15 February 2012 (UTC)
Well, for me the simplest argument would be the intuitive one. If two tori have a different number of holes, then one must have more holes than the other, which means that the one with the fewer holes must be torn/cut to create more holes, which is not a continuous deformation. But for a "formal" proof, I just thought of fundamental groups because I don't know much about algebraic topology and fundamental groups is all I remember from the introductory class I took. - Looking for Wisdom and Insight! (talk) 09:42, 15 February 2012 (UTC)
- Looking for Wisdom: that's the same problem as you have with distinguishing a sphere from a torus. It might seem intuitively clear that the torus has a hole and a sphere does not, but it's not easy to give a precise proof that they're not homeomorphic, because it's not easy to define precisely what a "hole" of this kind means. That's why we need fundamental groups or some other algebraic invariant. – b_jonas 10:04, 15 February 2012 (UTC)
- The dimension is homeomorphism invariant. 74.98.35.216 (talk) 11:35, 15 February 2012 (UTC)
- (An aside on motivation): Why do you think that human males and females have different topological genera? I'd say they both have genus 1, due to the connection between the mouth and anus. Other bits of plumbing (male or female) don't really connect in the same direct manner. There are all sorts of finer scale connections, but if you start counting that way, you'd have to include tear ducts and sweat glands, etc., etc. and end up with a very large genus. I don't want to debate finer points of human physiology here; just curious how you're thinking of the problem. SemanticMantis (talk) 14:10, 15 February 2012 (UTC)
- Hmmm ... "My dog has genus 1" ... "How does he smell ?" ...Gandalf61 (talk) 16:25, 15 February 2012 (UTC)
- Ha--Ok, so the genus of a mammal is rather ill-defined. There's all kinds of flaps and switches inside that change the connectivity; I was thinking of the body in the position where the flaps lead air to the lungs, and not the stomach. If we consider that nostrils can be connected to the stomach, then I guess the dog is topologically equivalent to a block with a Y-shaped hole in it. Doesn't that have genus 2? I guess my point was that, if you can agree on a genus for a representation of a human male, it should be the same as that of a human female. It's not as though there's two different ways to get to open air from inside a uterus. SemanticMantis (talk) 18:22, 15 February 2012 (UTC)
Central tendency and usefulness of measurements
If measures of central tendency and dispersion are given, then how can these measurements be useful? — Preceding unsigned comment added by 180.211.216.154 (talk) 14:13, 15 February 2012 (UTC)
- I created a section header for you. This will happen automatically if you hit the "ask a new question" button at the top of the page. SemanticMantis (talk) 15:01, 15 February 2012 (UTC)
- I don't understand your question. If you are asking why measures of central tendency are used, they are used to indicate the "middle" value of a set of data. If you are asking when one type or another should be used, here are some ideas:
- 1) An arithmetic mean (arithmetic average) is perhaps the most common. It's useful when data is symmetrically distributed, like in a bell curve. For example, you might find the arithmetic mean of class grades, to determine how well the students are getting the material. If the mean is 95%, then they are getting it. If it's 50%, then they aren't.
- 2) A weighted arithmetic mean is useful when some data points need to be weighted more than others. For example, in the US Presidential election, each state gets a different number of electoral college votes. If you were doing polling to try to determine who would win the election, you might want to weight the state results by the number of electoral votes in that state.
- 3) For data which is not symmetrically distributed, like incomes and wealth, an arithmetic mean is meaningless. Imagine Bill Gates in a room with 1000 minimum wage workers. The average wealth of each person in the room would be millions of dollars, which would make you think the room is full of rich people. Dealing with percentiles is more useful here. You could say 99.9% of the people earn minimum wage, and 0.1% earn billions.
- 4) The mode is just the most common number in a set. Let's say you run a restaurant and want to add one item to your menu. If you keep track of the things people request, the most often requested item, which you don't already have, might be a good choice to add.
- 5) Geometric means, logarithmic means, etc., are useful when data varies in those specific ways. See central tendency for others. StuRat (talk) 19:45, 15 February 2012 (UTC)
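A quick sketch of points 1 and 3 above with Python's statistics module; the numbers are invented purely for illustration:

```python
import statistics

grades = [88, 92, 95, 97, 99, 93, 91]            # roughly symmetric data
wealth = [25_000] * 1000 + [60_000_000_000]      # 1000 minimum-wage workers plus one billionaire

print(statistics.mean(grades), statistics.median(grades))   # both about 93: the mean is representative
print(statistics.mean(wealth))     # ~60 million: dominated by the single outlier
print(statistics.median(wealth))   # 25,000: the "typical" person in the room
```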
- Re #3, the arithmetic mean is definitely not meaningless for asymmetric distributions. It's just not always what we want to look at. It's also useful to explicitly mention the median, which is the 50th percentile, and useful as a central tendency measure in some situations. -- Meni Rosenfeld (talk) 05:45, 16 February 2012 (UTC)
- Do you have an example of when the arithmetic mean for an asymmetrical distribution is meaningful ? StuRat (talk) 16:55, 18 February 2012 (UTC)
- To be honest your suggestion that the arithmetic mean (or its counterpart for probability distributions, expectation) can be in some situations meaningless is so alien to me that I'm not sure what kind of example you're looking for. Unlike median, mode, geometric mean and many other measures, the mean is additive, and as addition is such a basic concept, so is arithmetic mean. Similarly, variance is additive for independent variables. This is why expectation and variance are the key attributes of distributions that are at the heart of all probabilistic and statistical analysis. When you want to learn about a distribution, first learn its expectation and variance. Anything else is a bonus which only in specific cases is of much use (one such case is your income example, where the median better captures our intuitive notion of what a "typical" person earns).
- All that said, here's an example from a domain I'm involved in. For Bitcoin mining, the time to find a block (for the whole network or for any individual miner) follows the exponential distribution, which is pretty much asymmetrical (skewness 2). But the mean of this distribution translates directly to the average rate of finding blocks, and hence, the profitability of mining. As anyone who has invested thousands of dollars into mining hardware will tell you, this is pretty important.
- Another important example is Von Neumann–Morgenstern utility. A rational agent will maximize his expected utility. In general there is no utility function for which maximizing the median (or any other measure) isn't irrational.
- And a simpler example - A restaurant receives 100 orders per day, with amounts distributed in some asymmetric way. Given that, what you need to know to deduce the total revenue of the restaurant is the order amount mean, not some other quantity. -- Meni Rosenfeld (talk) 19:41, 18 February 2012 (UTC)
- I don't follow. Surely to get the mean, the total revenue was divided by the number of orders in the first place. Or did you have in mind that the historic mean would be multiplied by the number of orders one night to come up with an estimate of that night's receipts ? I'm not sure how useful that would be, say if the asymmetrical distribution is due to different demographics, like drunken businessmen paying on their company account (spending $$$), versus families trying to save money (spending $). In that case, you would really need to at least weight the averages by the number of each type of order, to get a reasonable estimate. StuRat (talk) 23:47, 18 February 2012 (UTC)
- Yes, you could for example base future mean estimates on history. But that doesn't really matter, the point is that the mean is a meaningful thing to say about the order amounts, other measures don't carry any useful information. Of course each amount should be weighted by its probability/frequency. -- Meni Rosenfeld (talk) 06:49, 19 February 2012 (UTC)
Limit conditions
Under what conditions on the functions f and g does the following statement hold:
lim_{x→a} (f(x) * g(x)) = (lim_{x→a} f(x)) * (lim_{x→a} g(x)),
where * denotes either addition, subtraction, multiplication or division. I understand that there may be different conditions depending upon the operation. Let's assume that the left and right limits of f are the same, and likewise for g. Is the statement always true? I'm really looking for the weakest conditions possible. — Fly by Night (talk) 15:30, 15 February 2012 (UTC)
- For addition, subtraction and multiplication you just need that the limits in the RHS exist; this is for any * that is continuous. For division you need the denominator to be non-zero to get continuity so add that the limit of g is not 0, which you would need for the RHS to be defined anyway. So the short answer is if the RHS is defined then the LHS is defined and they are equal.--RDBury (talk) 15:54, 15 February 2012 (UTC)
- ... but of course the LHS may be defined when the RHS is not. This seems to relate directly to a subset of the classic indeterminate forms. — Quondum☏✎ 16:15, 15 February 2012 (UTC)
- When you say "are defined", what are the explicit conditions? Do we simply need f and g to be continuous, and for each function's own left and right limit to be equal? If so then what is the proof for this? Which articles contain this information? — Fly by Night (talk) 16:09, 15 February 2012 (UTC)
- Continuity at the limit is not required, only the two-sided limit (one-sided for limit at ∞), and that the limits on the right are finite (for them to be defined). — Quondum☏✎ 16:24, 15 February 2012 (UTC)
- Thank you for your reply, but I don't find your reply particularly illuminating. Please supply more details and, as we are at a reference desk, links to the appropriate articles. — Fly by Night (talk) 17:15, 15 February 2012 (UTC)
- When RDBury says "defined", the limit is being referred to, not the functions f and g. Implied is that the limits are finite. Continuity of a function is not a prerequisite at any point of its domain for the limit to exist; I can provide a simple example of an everywhere-discontinuous function that nevertheless has a limit. The domain can even be discrete. So the problem statement is a little sparse on detail, and the natural thing is to assume that we are dealing with real numbers, that the limit meant is the ε–δ definition, and that the limiting value ℓ is either −∞, finite or +∞. Anyhow, perhaps Limit of a function#Properties will answer much of your question. — Quondum☏✎ 18:44, 15 February 2012 (UTC)
- I'm sorry, but having read your reply and followed your link, I can't really say that it helped. Thank you very much for your efforts, but I would be very grateful if someone with a friendly nature were to answer my question. I am very sorry to have to say this, but I find your replies somewhat condescending. If people are to learn then they should feel welcome and comfortable. Your replies to this post, and the post below, do neither of these. Let me thank you once again, but request that you do not involve yourself any further. — Fly by Night (talk) 22:32, 15 February 2012 (UTC)
- I think RDBury's last sentence nails it. I'm not sure, but I think Spivak's textbook "Calculus" (really an elementary real analysis text) covers this if you want to get into the gritty details. SemanticMantis (talk) 22:54, 15 February 2012 (UTC)
- Just to clarify, f and g don't have to be continuous, what I'm saying is the equality follows from operation * being a continuous function R×R→R. Also I wasn't considering infinite limits but the statement would still be true excluding the well-known indeterminate forms ∞−∞, ∞/∞, etc. Again it's nothing really to do with f and g but whether the operation * can be extended continuously to include infinite values. For example × is a continuous function of R×R→R which can be extended continuously to R∪{∞}×R∪{∞}→R∪{∞} if you exclude the points (0,∞) and (∞,0).--RDBury (talk) 01:21, 16 February 2012 (UTC)
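A small check of the point about indeterminate forms (the left-hand side can exist while the right-hand side is undefined), using sympy with the hypothetical choice f(x) = x, g(x) = 1/x and multiplication:

```python
import sympy as sp

x = sp.symbols('x')
f, g = x, 1 / x

print(sp.limit(f * g, x, 0))   # 1: the LHS limit exists
print(sp.limit(f, x, 0))       # 0
print(sp.limit(g, x, 0))       # oo (right-hand limit; the two-sided limit does not exist)
# so the RHS "0 * oo" is undefined even though the LHS equals 1
```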
Non-Standard Proof
Does anyone know a proof of the statement:
lim_{n→∞} (1 + r/n)^n = e^r
which does not use the binomial expansion and the fact that
lim_{n→∞} (1 + 1/n)^n = e?
- — Fly by Night (talk) 16:06, 15 February 2012 (UTC)
- Substitute n=mr. — Quondum☏✎ 16:19, 15 February 2012 (UTC)
- Thanks for your reply, but I fail to see how this answers my question. Your suggestion leads to much the same proof. I was hoping for a qualitatively different approach. As we find ourselves on a reference desk, links to articles would be greatly appreciated. (Forgive me for not making this explicit; I had assumed it to be a tacit assumption.) — Fly by Night (talk) 17:17, 15 February 2012 (UTC)
- Forgive my brevity – perhaps I was making unwarranted assumptions based on your user page, and it is not clear what level of rigour you need. A proof will depend heavily on how you define the exponential function or exponentiation. The statement can simply be a definition of er (def #1 of link), rather than a proof. But first, I think you should give the required starting point: the definition of exponentiation and/or of the exponential function you choose to start with, else it is guesswork what you want. — Quondum☏✎ 18:18, 15 February 2012 (UTC)
- The only unwarranted assumptions you made were those about my mathematical understanding. Because I didn't like your reply, and because I asked you to answer a reference desk question in the spirit of the reference desk (please read the preamble at the very top of this page) does not mean I am some mathematical dimwit that does not understand the exponential function. Please re-read my post, and especially its title. I wanted a non-standard, i.e. non-trivial proof. — Fly by Night (talk) 22:24, 15 February 2012 (UTC)
Q.E.D. Bo Jacoby (talk) 19:34, 15 February 2012 (UTC).
- Bo Jacoby's "proof" is exactly what I had in mind, but one should note two steps that one should not consider rigorous (which is why I made reference to rigour before): the premises that z^(mr) = (z^m)^r and that lim_{m→∞} w^r = (lim_{m→∞} w)^r, so it is more a consistency check or plausibility demonstration than a proof. — Quondum☏✎ 20:11, 15 February 2012 (UTC)
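Spelled out, the substitution argument being referred to presumably runs along these lines, using exactly the two premises above (the index law z^(mr) = (z^m)^r, and pulling the limit inside the r-th power):

```latex
\left(1+\frac{r}{n}\right)^{n}
  \;\overset{n=mr}{=}\;
\left(1+\frac{1}{m}\right)^{mr}
  \;=\;
\left(\left(1+\frac{1}{m}\right)^{m}\right)^{r}
  \;\longrightarrow\;
\left(\lim_{m\to\infty}\left(1+\frac{1}{m}\right)^{m}\right)^{r}
  \;=\; e^{r}
  \qquad (m\to\infty).
```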
Thanks chaps. But this proof is no different in method from the proof I gave when I asked for another proof. I wanted a proof using other methods from different areas of mathematics. Thank you both for your effort, but it isn't quite what I'm looking for. I'm sure you'll agree that making a single substitution of a variable does not give a qualitatively different proof. Maybe the answer is that none exists. Thanks again folks. — Fly by Night (talk) 22:17, 15 February 2012 (UTC)
- In all good faith, I think to get the answers you want, you'll need to specify exactly how/where you want to start. As you are probably aware, there are several different schemes for what is a definition and what is a result in this area. It's nothing personal, just an admission that, even in textbooks, you will find different perspectives on what which parts are definitions and which parts are logical implications of those definitions. SemanticMantis (talk) 22:49, 15 February 2012 (UTC)
You got what you asked for: "a proof which does not use the binomial expansion and the fact that lim_{n→∞} (1 + 1/n)^n = e". If that isn't quite what you are looking for, you should ask for what you are looking for. Bo Jacoby (talk) 23:03, 15 February 2012 (UTC).
Alternative approach
Let f_n(x) = (1 + x/n)^n. Show that for n large the sequences f_n and f_n′ are uniformly bounded on compact sets. Conclude that f = lim_{n→∞} f_n exists, is differentiable, and satisfies f′ = f and f(0) = 1. Sławomir Biały (talk) 00:19, 16 February 2012 (UTC)
- (The above is for . For , replace by . Sławomir Biały (talk) 10:42, 16 February 2012 (UTC))
- Sławomir meant to say . Bo Jacoby (talk) 07:29, 16 February 2012 (UTC).
- Corrected.Sławomir Biały (talk) 10:42, 16 February 2012 (UTC)
- Thus the function we started with satisfies the DE f′ = f and f(0) = 1. It must therefore be the exponential function. There is some work to be done to justify swapping the limit and the derivative. Tinfoilcat (talk) 10:00, 16 February 2012 (UTC)
- Yes, this is the idea. It follows if the function and its derivative converge uniformly on compact sets, which the above argument shows. Sławomir Biały (talk) 10:43, 16 February 2012 (UTC)
- I hadn't read your post properly, now I see that my idea was the same as yours. Tinfoilcat (talk) 11:14, 16 February 2012 (UTC)
Spectral radius of a derivative
I ask for help in understanding the following condition for the use of iterative method in numerical solving of a non-linear equation, given in the relevant article:
...a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point.
What does "spectral radius" mean and how to calculate it? I've found lots of information both on the Wikipedia and on the web related to matrices and the systems of linear equations, but I am not knowledgeable enough to bind these topics together in my mind. --Esmu Igors (talk) 16:23, 15 February 2012 (UTC)
- If you type spectral radius into Wikipedia's search bar then you will find the spectral radius article. If you take the modulus of all of the eigenvalues then the S.R. is defined to be the greatest of these. — Fly by Night (talk) 17:24, 15 February 2012 (UTC)
- Thank you, Fly by Night, for your help, but I have already found this article previously and understand almost nothing in it. Furthermore, while I know how to use matrices for solving simple linear equations, I have absolutely no idea about their usage for non-linear equations (not systems). I know I don't actually know math, but I really hope somebody would point my attention to where actually information about matrices for non-linear equations can be found...--Esmu Igors (talk) 19:45, 16 February 2012 (UTC)
- (I could be wrong here, I'm rather rusty on this topic) I think the article must be talking about the spectral radius of the Jacobian matrix for f, which is a matrix of partial derivatives that captures how a point x_n in phase space will move in a small neighborhood of a point x (where the partial derivatives are evaluated), when subjected to the dynamics described by the function f. In effect, the Jacobian serves to linearize the system at a point. This is similar to finding a tangent line to a curve. The same concept applies: in a small enough neighborhood, everything is well approximated by linear relationships. The radius being bounded by 1 has the effect that there is some small neighborhood around x, in which nearby points will converge to x. This is because the spectral radius is the dominant eigenvalue. Thus, if it is less than 1, then the displacement between x_n+1 and x will be smaller than the displacement between x_n and x, and the sequence x_n will approach x. Conceptually, this is roughly analogous to Newton's method. Does that help at all? If not, you might get better answers by explaining in more detail what you are trying to do, and what types of math you are comfortable with. SemanticMantis (talk) 21:03, 16 February 2012 (UTC)
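A numerical sketch of that picture, assuming the condition refers to the Jacobian of the fixed-point map g in x_{n+1} = g(x_n); the map below is a made-up two-dimensional example, and the Jacobian is estimated by finite differences:

```python
import numpy as np

def g(x):
    # toy fixed-point map; the solution of x = g(x) is the fixed point
    return np.array([0.5 * np.cos(x[1]), 0.3 * np.sin(x[0]) + 0.1])

def jacobian(f, x, h=1e-6):
    """Numerical Jacobian of f at x (forward differences)."""
    J = np.zeros((len(x), len(x)))
    fx = f(x)
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

x = np.array([0.0, 0.0])
for _ in range(50):            # plain fixed-point iteration
    x = g(x)

rho = max(abs(np.linalg.eigvals(jacobian(g, x))))
print("fixed point ~", x, " spectral radius ~", rho)   # rho < 1 here, so the iteration converges
```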
February 16
drawbacks to riemann & riemann stieltjes integral
Why do we need to introduce a generalisation of the Riemann integral? Also, what are the drawbacks of RS integration? And what do we introduce to overcome those drawbacks? — Preceding unsigned comment added by 14.139.120.178 (talk) 09:09, 16 February 2012 (UTC)
- The main drawback of both the Riemann and Riemann-Stieltjes integrals is the lack of easy theorems that allow the interchange of limits with integration. The Lebesgue integral has better properties from this perspective, in part because it is able to integrate more functions (possible limit functions of sequences). The Riemann-Stieltjes integral is still useful in many situations. In probability theory, for instance, sometimes it is necessary to take a Stieltjes integral with respect to a CDF (a PDF may not exist). Sławomir Biały (talk) 10:53, 16 February 2012 (UTC)
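To make the last point concrete, here is a small numerical sketch with a hypothetical mixed distribution: a Riemann–Stieltjes sum for E[X] = ∫ x dF(x) against a CDF that has a jump, so no density exists.

```python
import numpy as np

def F(x):
    # CDF of a mixed distribution: an atom of mass 0.5 at x = 0.5
    # plus 0.5 * Uniform(0,1); the jump means there is no PDF.
    return 0.5 * np.clip(x, 0.0, 1.0) + 0.5 * (x >= 0.5)

xs = np.linspace(-1.0, 2.0, 200001)            # grid over an interval containing the support
mids = 0.5 * (xs[:-1] + xs[1:])
print(np.sum(mids * np.diff(F(xs))))           # ~0.5 = 0.25 (atom) + 0.25 (continuous part)
```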
Differential equations describing Euler's three-body problem
In Grapher on Mac OS X, explicit two-dimensional differential equations take the form of . What would be the general form describing a restricted three-body system containing , , , , , , , , , , and , where
- and are the masses of two objects fixed in space,
- is the mass of the movable object,
- , , , and are the and coordinates of their respective fixed objects,
- and are the initial and coordinates of the movable object,
- and and are the component vectors of the movable object's initial velocity?
--Melab±1 ☎ 20:24, 16 February 2012 (UTC)
- The equations of motion are second order differential equations (i.e. they involve the second derivatives of x and y with respect to time), so if Grapher can only plot first order ODEs (as your description suggests) then you have hit a fundamental obstacle. Gandalf61 (talk) 09:27, 17 February 2012 (UTC)
- They are second order as well. --Melab±1 ☎ 21:33, 17 February 2012 (UTC)
- Not sure what you mean by "They are second order as well", but here is an outline of how you find the equations of motion. First you write down expressions for the forces on the movable object due to the gravitational attraction between it and the other two objects - let's call these forces F1 and F2. Note that these are vectors, and the magnitude of each one will be a function of the square of the distance between the movable object and each of the other objects i.e.
- Then you resolve these forces into x and y components, so you have F1x, F1y, F2x and F2y. Then your equation of motion in the x direction is
- and you have a similar equation of motion in the y direction, involving the y components of the force instead of the x components. I am not going to write out all these expressions for you because that is very tedious, but it is conceptually straightforward once you understand the underlying physics.
- As you see, this is not in the format that you requested because it is a pair of second order ODEs, and your format was a pair of first order ODEs, as I pointed out above. Gandalf61 (talk) 10:33, 18 February 2012 (UTC)
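For what it's worth, the usual workaround when a tool only accepts lower-order systems is to introduce the velocities as extra variables; below is a sketch with SciPy in which all masses, positions and initial conditions are arbitrary placeholders, and the movable object's own mass cancels out of its acceleration:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                                                  # gravitational constant (convenient units)
m1, m2 = 1.0, 0.5                                        # the two fixed masses
p1, p2 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])     # their fixed positions

def rhs(t, state):
    """First-order form of the pair of second-order equations: state = (x, y, vx, vy)."""
    x, y, vx, vy = state
    pos = np.array([x, y])
    r1, r2 = p1 - pos, p2 - pos
    acc = G * m1 * r1 / np.linalg.norm(r1) ** 3 + G * m2 * r2 / np.linalg.norm(r2) ** 3
    return [vx, vy, acc[0], acc[1]]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.5, 1.0, 0.0], max_step=0.01)
print(sol.y[0][-1], sol.y[1][-1])                        # final x, y of the movable object
```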
February 17
February 18
Topological relationship between a sphere and a cone?
Specifically, for a story I'm writing, I'm curious if there's any special mathematical relationship between the surface of the sphere of Earth and the hollow stepped cone of Dante's Hell. Is there some clever topological way one can be folded to produce the other? Please remember, I'm a lay questioner - keep it simple...
Adambrowne666 (talk) 07:58, 18 February 2012 (UTC)
- Well, for the simplest topological relationship, homeomorphism, sharp edges and spikes don't matter, so you can smooth them out (our article has a picture of a coffee cup being folded and stretched to make a doughnut). You could easily stretch a closed stepped cone to make a sphere (you could even flip it, so the inside of the cone becomes the outside of the sphere, as long as you're talking about mathematical abstractions rather than solid rock). The problem is that the pit is, if I recall correctly, open at the top - certainly it is in pictures like this: File:Stradano Inferno Map Lower.jpg. You could flatten out the steps of the pit, and stretch it around into a kind of ball shape, but you'd still have a hole that you couldn't close. I don't think there's a way to fold that hole away (or to be more rigorous, I don't think there's a way to map every point on a solid sphere onto a punctured ball continuously), since you fundamentally change the properties of the shape (a ball with a hole in it doesn't have a clearly defined interior - a sphere does). Smurrayinchester 11:52, 18 February 2012 (UTC)
- Thanks, Smurray - that's interesting - so you'd end up with a sphere with a hole; makes me think of the 19th Century speculation on the idea of there being an entry to the hollow Earth somewhere around the Arctic. I must admit, I'm being sort of plagiaristic here, referring to the great old Christopher Priest novel The Inverted World, where the protagonists' altered perceptions cause them to see the Earth and Sun etc as pseudospheres. Thanks again Adambrowne666 (talk) 13:14, 18 February 2012 (UTC)
- If you allow an imaginary radius as you go up past the north pole you get a paraboloid going off to infinity there from . Dmcq (talk) 15:37, 18 February 2012 (UTC)
- Thanks, Dmcq, that's a sort of magnificent answer, but I can't see it - I see from your profile you're interested in the visualisation of mathematics - maybe you could take pity on me and try and depict it to someone mathematically shortsighted? (BTW, I'm fascinated by your link to EPI - gonna look into that further.) Adambrowne666 (talk) 05:07, 19 February 2012 (UTC)
- How about this then? If Chariots of the Gods can make money there must be something in trying to sell Dante's hell at the poles with this. ;-) Dmcq (talk) 14:56, 19 February 2012 (UTC)
- Ultracool! Thanks heaps. I can certainly use that.
Strange voting system
I'm not sure if I read about this somewhere or made it up. Does the following system have a name? Every eligible voter is also a potential candidate. All votes cast for a particular person X get "paid forward" to whoever X votes for. Anyone wishing to be elected votes for themself and receives all votes cast directly for them as well as any that were paid forward from other voters. Votes are paid forward until they arrive at a self-voter, and otherwise (if there's a loop of votes) they never get counted. Sound familiar to anyone? Staecker (talk) 21:37, 18 February 2012 (UTC)
- Delegated voting, also known as liquid democracy, is close, although it doesn't quite involve the same pooling of votes that goes on here (people nominate themselves as candidates, rather than voting for themselves, though apart from preventing loops that is just a semantic quibble really). Smurrayinchester 21:58, 18 February 2012 (UTC)
- Thanks! That's close enough to probably be what I was thinking of. Staecker (talk) 22:34, 18 February 2012 (UTC)
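A minimal sketch of the tallying rule described in the question, with hypothetical data: follow each chain of votes until it reaches a self-voter, and discard chains that loop.

```python
def tally(votes):
    """votes: dict mapping each voter to the person they voted for.
    A vote counts for the first self-voter reached by following the chain;
    chains that loop without reaching a self-voter are discarded."""
    counts = {}
    for voter in votes:
        current, seen = voter, set()
        while votes[current] != current:       # keep paying the vote forward
            if current in seen:                # a loop with no self-voter
                current = None
                break
            seen.add(current)
            current = votes[current]
        if current is not None:
            counts[current] = counts.get(current, 0) + 1
    return counts

# A votes for B, B votes for themselves, C and D vote for each other (a loop)
print(tally({"A": "B", "B": "B", "C": "D", "D": "C"}))   # {'B': 2}
```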
3y-2x=5+9y-2x
How do I graph this equation? I'm just curious. — Preceding unsigned comment added by 65.92.151.169 (talk) 23:59, 18 February 2012 (UTC)
- First solve for X or Y (in this case, only Y is possible, and even that will make for a rather dull graph). StuRat (talk) 00:04, 19 February 2012 (UTC)
- Add 2x to both sides to get 3y=5+9y. Subtract 3y from each side and subtract 5 from each side to get -5=6y. Divide both sides by 6 and switch the sides to get y=-5/6. The graph will be a straight horizontal line at -5/6.--Mattmatt1987 (talk) 17:34, 19 February 2012 (UTC)
- I was hoping they would do their own homework, after I told them what they needed to do. StuRat (talk) 22:26, 19 February 2012 (UTC)
February 19
Geometric algebra question: degrees of freedom to a k-blade
I am well aware that for a geometric algebra over the reals, the number of (real) degrees of freedom of a k-vector is given by the appropriate binomial coefficient, i.e. C(n,2) is the number of real numbers required to specify a general 2-vector for a geometric algebra over an n-dimensional real space. However, clearly the number of real numbers required to specify a 2-blade is often less than this; given that every pair of vectors can be specified by 2n real numbers, and any 2-blade can be expressed as the exterior product of 2 vectors, a 2-blade cannot represent more degrees of freedom than this, yet the space of 2-vectors is larger for n > 5. So, how many degrees of freedom are required to specify a general k-blade?--Leon (talk) 14:55, 19 February 2012 (UTC)
- For a general k-vector, it's n choose k (see exterior algebra). The subvariety of k-blades is (projectively) the Grassmannian of k-dimensional subspaces of the n-dimensional space and has dimension k(n-k) (projectively) or k(n-k)+1 (nonprojectively). Sławomir Biały (talk) 17:11, 19 February 2012 (UTC)
- The number of degrees of freedom for picking 2 vectors would be n^2, not 2n, so it's not less than the binomial coeff. This corresponds (sort of) to the dimension of the tensor space being n^2; the space of 2-vectors is a quotient of this.--RDBury (talk) 21:22, 19 February 2012 (UTC)
- But the question isn't how many degrees of freedom are involved in picking two vectors. A 2-blade is the wedge product of two vectors (not the tensor product.) The question is equivalent to asking how many degrees of freedom there are in picking two dimensional subspaces (projectively, add one for a scale). In R3 for instance, this is equal to 2+1: a two-dimensional subspace is determined by its unit normal, plus one for scaling. Similarly a k-blade is the wedge product of k vectors. These are in one-to-one correspondence with a trivial line bundle over a Grassmannian. See exterior algebra for a discussion. Sławomir Biały (talk) 02:06, 20 February 2012 (UTC)
- Sławomir, I think this is a pretty useful result in general, since blades serve a significant role in GA. Do you know of a suitable reference, so that it could be added perhaps to Geometric_algebra#Representation_of_subspaces or some similar section? — Quondum☏✎ 06:17, 20 February 2012 (UTC)
- I don't know about any references that focus on geometric algebra, but for Grassmannians (including the result about dimension) a good book is: Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-05059-9, MR1288523. The proof of the dimensionality is actually straightforward. Take k vectors and wedge them together and perform elementary column operations on these (factoring the pivots out) until the top block are elementary basis vectors of . The wedge product is then parametrized by the product of the pivots and the lower block. Sławomir Biały (talk) 11:29, 20 February 2012 (UTC)
- Thanks for the detail; I've taken the liberty of copying the reference and your explanation into Blade (geometry) in the interim. Later, we could copy it to the Geometric algebra article. — Quondum☏✎ 13:59, 20 February 2012 (UTC)
- It looked to me like there were two questions, first how many degrees of freedom are there in a picking a blade; and second, how can this number be greater than 2n if that's the degrees of freedom in picking the vectors individually. I was answering the second question.--RDBury (talk) 09:17, 20 February 2012 (UTC)
- There was one question, and I (think I) knew that a k-blade could not have more than kn degrees of freedom, for an underlying basis consisting of n basis vectors. Anyway, thank you, Slawomir, you have answered my question completely! As for adding the result to the geometric algebra article, I have several textbooks on the matter that don't answer the question (otherwise I wouldn't have had to ask), and given that these are fairly "standard" textbooks on GA I must conclude that it may be difficult to find a reference on GA specifically that is suitable. Is there a name for this result? If so, I can have a glance through Google Scholar to find a citation.--Leon (talk) 12:04, 20 February 2012 (UTC)
- I didn't really understand the last part of the question, and it looked like you were replying to me instead of the original poster. (The reply did seem strange to me.) Sorry for my confusion. Sławomir Biały (talk) 11:29, 20 February 2012 (UTC)
- No problem, confusion often abounds with text based dialog and I was a bit confused myself.--RDBury (talk) 13:41, 20 February 2012 (UTC)
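A small numerical illustration of the dimension count discussed above, for n = 4 and k = 2 (so C(4,2) = 6 components for a 2-vector but only k(n−k)+1 = 5 degrees of freedom for a 2-blade): the blades satisfy the Plücker relation p01·p23 − p02·p13 + p03·p12 = 0, which a generic 2-vector such as e0∧e1 + e2∧e3 violates.

```python
import itertools
import numpy as np

def wedge2(u, v):
    """Components p_ij (i < j) of the 2-blade u ^ v in R^n."""
    n = len(u)
    return {(i, j): u[i] * v[j] - u[j] * v[i] for i, j in itertools.combinations(range(n), 2)}

def plucker(p):
    """Plucker relation for n = 4; zero exactly when p comes from a 2-blade."""
    return p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]

rng = np.random.default_rng(0)
u, v = rng.standard_normal(4), rng.standard_normal(4)
print(plucker(wedge2(u, v)))          # ~0 up to rounding: any u ^ v is a blade

not_a_blade = {(i, j): 0.0 for i, j in itertools.combinations(range(4), 2)}
not_a_blade[(0, 1)] = not_a_blade[(2, 3)] = 1.0     # e0^e1 + e2^e3
print(plucker(not_a_blade))           # 1.0: this 2-vector is not a single blade
```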
describe conditions that would allow you to martingale
here is an example of a system that allows you to martingale: let's simplify roulette to be a game of red, black, or green, with red and black having a nearly 50%, and equal, chance each of occurring (e.g. 48%), and green being the house's edge (e.g. 4%). The green pays off only slightly better than red or black (e.g. 4x, 10x, whatever), and red or black pay off at an "even" rate (ten dollars gets you twenty plus change - whatever makes it an even bet at 48% chance of winning - if you win, zero if you lose).
a casino feels that roulette players are put off by excessive strings of reds or blacks, so it tweaks the random number generator to work thusly: rather than pick each red/black/green in succession based on a probability, instead it allots 12 red, 12 black, and 1 green token randomly into a block of 25, doles out the 25 results in succession, then repeats this random allotment. This has the advantage, that, for example, nobody could bet green 50 times in succession without a payoff, nor would it be common for anyone to observe, joining the table, that out of 20 bets on red, only three or four pay off. This would otherwise be a common occurrence.
naturally, although this "seems" more random to the observers or players, it is in fact less random. this is so because rather than being a random walk, the system in some sense has memory.
it is hard to game, as it is difficult to know where the block boundaries are. To really have some confidence in a bet on red, you would have to join after observing 24 blacks: 12 at the end of the first block, and 12 at the beginning of the next. Then you could be sure the halfway point was the 25-block boundary, and now you know the rest of the block will be reds or greens. You can wait for a green if you want to have 100% payoff on a red.
All right, then, it's a bit difficult to game, but nevertheless eminently possible. The reason it is possible is obvious: given sufficient observation and the nature of the system, there are locations wherein a bet has positive expectation.
Now suppose that some stock on the stock market were literally cyclic (like a sine wave) and you had some guarantee of this. Naturally, if you have a guarantee that it will reach 0 again, any bet below zero will have a positive expectation (you can just hold the stock until it reaches 0). Of course, the time value may be such that this is not a worthwhile investment. So, we must consider stocks guaranteed to follow sine waves and guaranteed to do so with high frequency. For best results, with high amplitude as well, though of course you can repeat smaller bets by buying low and selling high.
Thus, were any stock to be a sine wave of high frequency, we would be guaranteed to make money by placing bets with positive expectation.
returning to games. I would like a generalization of the sufficient and necessary conditions a system must meet, in as general terms as possible, to allow an observer to join and make bets with positive expectation. I gave two systems above: a stock roughly following a sine wave in price, and a casino that doles out roulette results by allotting them in chunks instead of randomly one after the other. I would like a generalization that extends to every system that can be gamed, and how one can prove or disprove that in such a system, an observer can place bets with probability 1 of a positive payoff, and probability 0 of a negative payoff: and we are talking a single bet. (So that someone with this in proof who is infinitely risk-averse will still place the bet, provided he or she is a good mathematician and believes the premises or conditions are really a correct model of the system in question). --80.99.254.208 (talk) 15:14, 19 February 2012 (UTC)
- What you're describing is basically the martingale characterization of algorithmic randomness. In this case, the system must not be computably random.--121.74.109.179 (talk) 20:15, 19 February 2012 (UTC)
- Thanks for the link, feel free to make my ramblings leading to it smaller. Could you tell me what it means to be "computably random" and general strategies for deciding this. Could you give me a formalism that I could apply to arbitrary systems to determine whether they are "computably random"... --80.99.254.208 (talk) 20:32, 19 February 2012 (UTC)
- What I mean here, is I would like to be able to describe the properties of a system, and apply whatever you tell me to, to decide whether I can ever game it as described. Of course, my assumptions are very important, but so are the tools I'm asking from you in order to be able to follow through on deciding the consequences of those assumptions. Thanks for anything you might have along these lines. --80.99.254.208 (talk) 20:35, 19 February 2012 (UTC)
- Unfortunately, no. The definition of computable randomness is basically "can't be gamed". So I'd have a hard time giving you an alternate way to recognize it. A good rule of thumb, though, is to consider the law of large numbers; is every sequence of outputs possible? In the roulette case, 49 blacks isn't possible, so you know right away that the system can be beaten.--121.74.109.179 (talk) 21:16, 19 February 2012 (UTC)
- To get back to the hypothetical roulette game, it would actually be relatively easy to game. The probability that a green follows another green would drop from 1 in 50 to 1 in 2500, and it's certain that someone would notice. Knowing this, even without knowing where the boundaries are, you'd just have to wait for a green to appear and bet on a color other than green on the next roll to beat the house advantage. This is actually a variation of what card counters do in blackjack; in fact it would be much simpler than card counting.--RDBury (talk) 21:42, 19 February 2012 (UTC)
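Here is a minimal Python sketch of that effect. It assumes, as in the description above, blocks of 25 outcomes containing exactly 12 reds, 12 blacks and one green, shuffled within each block; the exact wheel composition changes the numbers (RDBury's 1 in 50 corresponds to a different green frequency), but not the effect being illustrated.

```python
import random

def spins(n_blocks):
    """One long run of the hypothetical wheel: shuffled blocks of 12 R, 12 B, 1 G."""
    out = []
    for _ in range(n_blocks):
        block = ['R'] * 12 + ['B'] * 12 + ['G']
        random.shuffle(block)
        out.extend(block)
    return out

s = spins(100_000)
greens = sum(1 for x in s if x == 'G')
greens_after_green = sum(1 for a, b in zip(s, s[1:]) if a == 'G' and b == 'G')

print(greens / len(s))              # unconditional green frequency, about 1/25
print(greens_after_green / greens)  # green-after-green frequency, about 1/625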
Followup question
In this hypothetical roulette game, after joining at a random point, how many turns would you have to wait on average before you were 100% sure of the boundary? 84.2.147.177 (talk) 15:06, 20 February 2012 (UTC)
- A number n is a possible boundary if the preceding 25 outcomes contain exactly 12 reds, 12 blacks, and 1 green. The probability for this to happen by chance can be computed, see multinomial distribution. It is <math>\frac{25!}{12!\,12!\,1!}\left(\tfrac{12}{25}\right)^{12}\left(\tfrac{12}{25}\right)^{12}\left(\tfrac{1}{25}\right) \approx 0.06</math>. So the first 50 outcomes contain one true boundary and, with about 6% probability, one additional false boundary. The true boundary is reproduced after the next 25 outcomes, while the false one is eliminated with 94% probability. Bo Jacoby (talk) 06:53, 21 February 2012 (UTC).
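A quick way to reproduce that roughly 6% figure in Python, under the same approximation of 25 independent draws with probabilities 12/25, 12/25 and 1/25:

```python
from math import comb

# Multinomial probability that 25 independent draws with P(R) = P(B) = 12/25 and
# P(G) = 1/25 come out as exactly 12 reds, 12 blacks and 1 green.
p = comb(25, 12) * comb(13, 12) * (12 / 25) ** 12 * (12 / 25) ** 12 * (1 / 25)
print(p)  # roughly 0.06
```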
- Sorry if I misunderstand you, but I am not interested in confidence levels, only in 100% certainty. Let me give you an example. If you join and the first two results are greens, you have just gained, in two turns, 100% certitude of where the boundary is. (It's between the two: the first green is at the end of the previous group of 25, and the second is the first result of the next group of 25.) So this is ONE case where you reach 100% certitude. This case has length 2.
- And so when you join, there is always some length of observation at which you reach 100% certitude of the boundaries.
- What is the average of all of these lengths? (In other words, once I have joined, there is some average wait time of x turns before I can become 100% sure of where the boundaries are. In practice it could turn out that I reach certitude after only 2 turns, but before I have seen a single result, what is my expected wait for 100% certitude?) --80.99.254.208 (talk) 10:59, 21 February 2012 (UTC)
- I do understand your question, but I did not provide the complete answer. I merely provided a step. After two results you know the answer (2) with probability 1/25^2 = 0.0016, so the product 2*0.0016 = 0.0032 contributes to the average length. After three results you know the answer (3) with probability 24/25^3 = 0.001536, and the product 3*0.001536 = 0.004608 contributes to the average length. The calculations become increasingly complicated, so it is tempting to make some approximations. The probability that you know the answer when n=25 is very low, and the probability that you know the answer when n=50 is very high, so the average is somewhere in between. Bo Jacoby (talk) 15:19, 21 February 2012 (UTC).
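If an exact value is hard to pin down, a Monte Carlo estimate is easy to sketch. The assumptions (stated explicitly, since they matter): blocks of 25 with exactly 12 reds, 12 blacks and one green, shuffled uniformly within each block, and an observer who joins at a uniformly random point. A boundary "phase" survives as long as every complete aligned window of 25 observed outcomes has exactly the 12/12/1 counts and no partial window exceeds them; you are 100% certain once only the true phase survives.

```python
import random

LIMITS = {'R': 12, 'B': 12, 'G': 1}

def observed_stream(offset, n_blocks=20):
    """Outcomes seen by someone who joins `offset` spins into a block."""
    out = []
    for _ in range(n_blocks):
        block = ['R'] * 12 + ['B'] * 12 + ['G']
        random.shuffle(block)
        out.extend(block)
    return out[offset:]

def phase_consistent(obs, phase):
    """Could boundaries fall just before observed positions phase, phase+25, ...?"""
    n = len(obs)
    cuts = sorted(set([0, n] + list(range(phase, n, 25))))
    for a, b in zip(cuts, cuts[1:]):
        seg = obs[a:b]
        for c in 'RBG':
            cnt = seg.count(c)
            # A complete block must hit the exact quota; a partial one must not exceed it.
            if cnt > LIMITS[c] or (b - a == 25 and cnt != LIMITS[c]):
                return False
    return True

def turns_until_certain():
    offset = random.randrange(25)           # join at a uniformly random point in a block
    obs = observed_stream(offset)
    true_phase = (25 - offset) % 25         # spins until the first real boundary
    for t in range(1, len(obs) + 1):
        alive = [p for p in range(25) if phase_consistent(obs[:t], p)]
        if alive == [true_phase]:
            return t
    return len(obs)                         # practically never reached with 20 blocks

trials = [turns_until_certain() for _ in range(2000)]
print(sum(trials) / len(trials))            # Monte Carlo estimate of the expected wait
```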
f(1/x)
If I have the graph for a function f(x), what happens to the graph when I turn the function into f(1/x)? 190.24.187.123 (talk) 15:39, 19 February 2012 (UTC)
- First off, I'd think of the fixed points of the transformation, that is, where x = 1/x. Those occur at x = 1 and x = -1, and the graph at exactly those points will be the same. As the transformation on the x coordinate is continuous and differentiable in those regions, the region around the fixed points before the transformation will still be around the fixed points after it. However, since x -> 1/x flips the ordering (0.99 < 1 before, but 1.0101... > 1 after), the graph will likewise be flipped around the points x = 1 and x = -1. So if the graph is increasing through x = 1 before, afterwards it will be decreasing, and vice versa. The other place to look is at discontinuities. The behavior around x = 0 will be interesting. Points just to the right of zero will be thrown right, toward positive infinity, and points just to the left of zero will be thrown further left, toward negative infinity (the zero point itself will drop off the graph). As the transformation is self-inverse, we can also conclude the reverse will happen: points near positive and negative infinity will be brought in toward zero. As the transformation on the x coordinate is continuous and differentiable on each side, from very near zero out to positive and negative infinity, you can envision the transformation as a flipping and stretching.
- In summary, take each of the positive and negative sides of the graph, flip them around ±1 as appropriate, and then stretch or squash them to fit in their new ranges (non-uniformly, so the most stretching/squashing happens near zero/infinity). Given the non-uniformity in the scaling and the fact that the infinities don't stay out at infinity, it's difficult to visualize exactly what the resulting graph will look like, but that's the general idea. -- 67.40.215.173 (talk) 18:57, 19 February 2012 (UTC)
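If it helps to see it, here is a minimal matplotlib sketch; the choice f(x) = x^3 - x is arbitrary, any example function will do, and only the positive side is plotted to keep away from x = 0 where 1/x blows up.

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: x**3 - x              # arbitrary example function

x = np.linspace(0.05, 4, 800)       # positive side only; avoid x = 0
fig, ax = plt.subplots()
ax.plot(x, f(x), label='f(x)')
ax.plot(x, f(1 / x), label='f(1/x)')
ax.axvline(1, color='gray', linestyle=':', label='fixed point x = 1')
ax.legend()
plt.show()
```

The two curves agree at x = 1, and the behaviour of f near 0 reappears far to the right, and vice versa, which is the flipping and stretching described above.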
February 20
Trig functions
Hey, I am in precalculus and need help (not answers) with solving two problems.
My job is to find the solutions of each equation that are in the interval [0, 2π) [couldn't find pi in the special characters].
- sin 2t + sin t = 0 -- I know this can simplify to "2 sin t cos t + sin t = 0" but if I was trying to find solutions within 2π, where would I go from here? Am I allowed to factor sin t out?
- cos u + cos 2u = 0 -- same problem. I know it simplifies to "cos u + cos^2 u = 0" or "cos u + 1 - 2 sin^2 u = 0", but as with the first problem, can I factor cos u out?
How would I solve these problems? Thanks for your help!--Prowress (talk) 16:35, 20 February 2012 (UTC)
- You can factor something, but not simply remove a common factor. So:
- 2 sin t cos t + sin t = 0 ⇒ sin t (2 cos t + 1) = 0 ⇒ sin t = 0 or 2 cos t + 1 = 0
- The first equation of the last pair tells you t = nπ are solutions, n ∈ ℤ. The second equation of the pair gives further solutions. Simply discarding a factor would hide half the solutions. — Quondum☏✎ 16:53, 20 February 2012 (UTC)
- Double-check your first step in the second problem. --COVIZAPIBETEFOKY (talk) 17:13, 20 February 2012 (UTC)
- So for #1, I got sin t = 0 and cos t = -1/2 but I cannot think of any configuration for both of them. --Prowress (talk) 23:08, 20 February 2012 (UTC)
- Sorry, I meant by "configuration" that I cannot find any radians of pi that would fit both the answers.--Prowress (talk) 23:10, 20 February 2012 (UTC)
- But you don't need a solution to both of them. You have two things which you're multiplying together, and they're supposed to make 0. So it's enough that one of them be 0.--130.195.2.100 (talk) 23:26, 20 February 2012 (UTC)
- Meaning that for #1 you get sin t = 0 or cos t = -1/2 rather than sin t = 0 and cos t = -1/2 . Bo Jacoby (talk) 05:59, 21 February 2012 (UTC).
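Once you have candidate values from "sin t = 0 or cos t = -1/2", a quick numerical check (assuming Python/NumPy is handy, and not a substitute for the algebra) confirms that each of them satisfies the original equation on [0, 2π):

```python
import numpy as np

# Candidates in [0, 2*pi): sin t = 0 gives t = 0, pi; cos t = -1/2 gives t = 2*pi/3, 4*pi/3.
t = np.array([0, 2 * np.pi / 3, np.pi, 4 * np.pi / 3])
print(np.sin(2 * t) + np.sin(t))   # every entry is 0 (up to rounding)
```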
February 21
phi(n) question
On page 224 of Handbook of Number Theory II, by Sandor and Crstici, it says that C. A. Nicol proved that there exist infinitely many numbers n such that phi(n) <= phi(n-k) for all 1 <= k <= n-1. (It references problem E2590 in AMM, vol 83, p 656, with the solution in vol 85, p. 654, but I don't have access to those.) But the statement doesn't make sense to me - when k=n-1, you have phi(n) <= phi(1) = 1, and only phi(1) and phi(2) equal 1. Is there an error? Bubba73 You talkin' to me? 03:46, 21 February 2012 (UTC)
- I don't know whether this link is stable but it displays problem E2590 which asks to show that there are infinitely many numbers n such that phi(n) <= phi(k) + phi(n-k) for 1 <= k <= n-1. It sounds like Sandor and Crstici forgot phi(k). PrimeHunter (talk) 04:17, 21 February 2012 (UTC)
Thanks, that makes sense because the next line says that there are infinitely many n such that phi(n) >= phi(k) + phi(n-k) for that range of k. Bubba73 You talkin' to me? 04:41, 21 February 2012 (UTC)
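As an empirical sanity check of the corrected statement (an illustration with sympy on small n, not a proof of the infinitude, of course):

```python
from sympy import totient

def corrected_nicol(n):
    """phi(n) <= phi(k) + phi(n - k) for every 1 <= k <= n - 1."""
    return all(totient(n) <= totient(k) + totient(n - k) for k in range(1, n))

# Prints the n below 100 that satisfy the corrected inequality.
print([n for n in range(2, 100) if corrected_nicol(n)])
```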
Statistics and entailment
On page 173 of Cognitive Strategy Research by McCormick, Miller and Pressley, there is an F-statistic table that evaluates causal inferences, but I've never come across this use of the F-statistic. Part of the table is given here:
Entailment | BETA | F |
---|---|---|
Importance -> learning | 0.13 | F(2, 503) = ... |
Importance -> attention -> learning | 0.12 | F(3, 502) = ... |
In the experiment (a study of attention-focusing strategies), students had to read a technical passage, then sit a brief test. Those parts of the text that were examined on the test were deemed important by the experimenters (the "importance" part of the table). Students were measured on their ability to focus their attention on these important parts of the text (the "attention" part of the table), as well as their actual performance (the "learning" part). So intuitively, I hope that is clear: the causal chain involving attention (line 2) describes the link between "importance of the text" and "learning of the text", when it is mediated by conscious attention on the part of the reader. The chain without attention (line 1) is blind to this intermediate step.
So I get the general idea, but not how the table works. The F-statistic is being used for some kind of test where the numerator steals a degree of freedom from the denominator whenever an extra link is added to the causal chain. Can anyone explain? IBE (talk) 19:43, 21 February 2012 (UTC)
- F-test is the article you want. That's got some typical examples of its use.--Salix (talk): 21:22, 21 February 2012 (UTC)
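Here is a hedged sketch of the mechanism described in the question (the numerator gaining and the denominator losing one degree of freedom per extra link). The variables are synthetic, and n = 506 is only an assumption chosen so the degrees of freedom match the table (2 + 503 + 1 = 3 + 502 + 1 = 506); this is not a reconstruction of the book's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 506                                           # assumed sample size
x1, x2, x3 = rng.normal(size=(3, n))              # hypothetical predictors
y = 0.4 * x1 + 0.2 * x2 + 0.1 * x3 + rng.normal(size=n)

restricted = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2, x3]))).fit()

print(restricted.df_model, restricted.df_resid)   # 2.0 503.0 -> an F(2, 503) statistic
print(full.df_model, full.df_resid)               # 3.0 502.0 -> an F(3, 502) statistic
print(full.compare_f_test(restricted))            # F-test for the added term (df_diff = 1)
```

Each extra regressor moves one degree of freedom from the residual (denominator) to the model (numerator), which is exactly the F(2, 503) to F(3, 502) pattern in the table.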
February 22
Radians and calculus
Is there an intuitive/easy way to see why measuring angles in radians leads to nicer formulas in calculus? 74.15.139.132 (talk) 01:30, 22 February 2012 (UTC)
Matrix Eigenvalues
I need to find the eigenvalues and eigenvectors of the matrix
where O is the zero matrix, I is identity matrix, B is a circulant tridiagonal matrix with elements (-1,2,-1) and C and D are diagonal matrices with constant diagonal terms (in other words, a scalar times the identity matrix.) S is a circulant matrix (or a diagonal matrix, if that helps.) I am hoping that the presence of large number of O's , I's and simple matrices would lead to a closed form solution for the eigenvalues and the eigenvectors. Any help will be sincerely appreciated. deeptrivia (talk) 03:35, 22 February 2012 (UTC)
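The full block matrix isn't reproduced above, so the following only addresses the B block, but since B is circulant it is diagonalized by the DFT: the circulant tridiagonal matrix with stencil (-1, 2, -1) has eigenvalues 2 - 2cos(2πk/n), with the Fourier vectors as eigenvectors. A quick numerical check (n = 8 is arbitrary):

```python
import numpy as np
from scipy.linalg import circulant

n = 8
c = np.zeros(n)
c[[0, 1, -1]] = [2, -1, -1]                 # first column (2, -1, 0, ..., 0, -1)
B = circulant(c)                            # circulant tridiagonal with stencil (-1, 2, -1)

numeric = np.sort(np.linalg.eigvalsh(B))
closed_form = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(n) / n))
print(np.allclose(numeric, closed_form))    # True
```

Note that O, I, scalar multiples of I, and any circulant S are diagonalized by that same DFT basis, so if the block layout allows all blocks to be simultaneously diagonalized, the eigenvalue problem should reduce to one small (block-count sized) problem per Fourier mode; whether that applies here depends on the arrangement of the blocks, which isn't shown above.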