Wikipedia:Reference desk/Mathematics: Difference between revisions
→Evaluating very small natural logs: log (e^x + e^y)
::<math>\log (e^x + e^y) \approx \begin{cases} y + \log (1 + e^{x-y}) & x \approx y \\ \mathrm{max}(x,y) & \text{otherwise.} \end{cases}</math>
:-- [[User:BenRG|BenRG]] ([[User talk:BenRG|talk]]) 11:50, 29 April 2009 (UTC)
@CiaPan. Thanks, but unfortunately Lx-Ly is not computable, as it is too small.
@Tango. Yes, but I am dealing with the probability of data given a set of parameters. Likelihood is a better word. So they won't add to 1.
@BenRG. Thanks, that helps a lot, I think that's the solution.
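BenRG's approximation above is the standard "log-sum-exp" stabilisation. A common robust variant (a sketch of mine, not quoted from the thread) shifts by the maximum so both of BenRG's cases are handled by one formula:

```python
import math

def log_sum_exp(x, y):
    """Compute log(e**x + e**y) without ever forming e**x or e**y.

    Shifting by max(x, y) keeps the remaining exponential in [0, 1],
    so the result stays accurate even for very negative x and y.
    """
    m = max(x, y)
    return m + math.log1p(math.exp(-abs(x - y)))
```

For x = y = -1000 the naive `math.log(math.exp(x) + math.exp(y))` underflows to `log(0)`, while this returns -1000 + log 2 as intended.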
Revision as of 12:33, 29 April 2009
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
April 22
The Floor Function
I am trying to evaluate
which is basically the floor function in an n-fold integral over the unit cube. I tried to start with a simple case and go step by step.
that is easy to see.
can be rewritten as
where
which has an area of 1/2 so the entire integral is 1/2. And similarly for three dimensions, I got
as
where
My question is: I can't find the volume of these regions (because I can't set up the triple integral correctly). Can someone please shed some light on how to find the volume of both of these regions, and then how to generalize this to n dimensions? Thanks! 69.224.116.142 (talk) 18:08, 22 April 2009 (UTC)
- I guess it is . Change variable in the integral putting: and sum the two, so you get that twice the integral is the integral on the n-cube of ; then use the identities and (a.e.) --pma (talk) 20:33, 22 April 2009 (UTC)
- As to the volume of the sets here they are [1] pma (talk) 22:22, 22 April 2009 (UTC)
- By the way, "a.e" as in pma's post stands for "almost everywhere" in case you were wondering... --PST 03:19, 23 April 2009 (UTC)
This is great! Now my question is: how can I show that making that change of variables in the integral still equals ? It is pretty easy to show for the n=1 case, but for higher n it doesn't seem to work out. 130.166.159.98 (talk) 02:09, 24 April 2009 (UTC)
- It's just the change of variables formula. But here you need only a very particular case of it, since the change-of-variable map φ(x) := (1,1,..,1) - x is quite an elementary isometry and the integrand is a simple function. So, if you prefer, write your integral as you did,
- ,
- where
- and observe that and have the same measure, because they are obtained from each other (up to a null set) by a symmetry (the change of sign) and a translation. For instance in your computation above for , so . pma (talk) 10:45, 24 April 2009 (UTC)
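pma's symmetry argument gives the value (n-1)/2 for the integral of the floor of the coordinate sum over the unit n-cube. A quick Monte Carlo sanity check in Python (a sketch of mine, not part of the thread):

```python
import math
import random

def mc_floor_integral(n, samples=200_000, seed=0):
    # Monte Carlo estimate of the integral of floor(x1 + ... + xn)
    # over the unit n-cube [0,1]^n.
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        s = sum(rng.random() for _ in range(n))
        total += math.floor(s)
    return total / samples
```

The estimates hover near 1/2 for n = 2 and near 1 for n = 3, matching (n-1)/2.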
Factoring a cubic
Does the cubic factor nicely? If it does, what is the factorization? Lucas Brown 42 (talk) 19:23, 22 April 2009 (UTC)
- The first step would be to get rid of the pi in the equation. Introduce a substitution and the cubic reduces to
- . Readro (talk) 19:48, 22 April 2009 (UTC)
- Which polynomial is irreducible over the rationals. Algebraist 19:58, 22 April 2009 (UTC)
- What about the irrationals? 72.197.202.36 (talk) 04:35, 23 April 2009 (UTC)
- ,
- ,
- ,
- ,
- .
- --78.13.138.117 (talk) 06:53, 23 April 2009 (UTC)
- In other words: No, it doesn't factorise nicely! --Tango (talk) 16:50, 23 April 2009 (UTC)
You can see right away that it has at least one positive root. If you find such a root, then u minus that root is a factor of the polynomial. If you divide the polynomial by that factor, you get a quadratic polynomial, and then it's just a matter of solving a quadratic equation. But whether the positive root you find can be expressed "nicely" is another question. If "Tango" has the details right, then it's no nicer than the messiest you could expect under the circumstances. Michael Hardy (talk) 22:07, 23 April 2009 (UTC)
- The anon did the calculation (using the cubic formula, by the looks of it), I just concluded that that wasn't "nice" (it clearly doesn't simplify significantly). --Tango (talk) 10:24, 24 April 2009 (UTC)
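The original cubic did not survive in this copy, but Michael Hardy's procedure (locate one real root, divide it out, then solve the remaining quadratic) can be sketched generically in Python. The sample cubic u^3 - 6u^2 + 11u - 6 below is mine, chosen only because it factors nicely as (u-1)(u-2)(u-3):

```python
import cmath

def cubic(a, b, c, d):
    # Horner evaluation of a*u**3 + b*u**2 + c*u + d.
    return lambda u: ((a * u + b) * u + c) * u + d

def solve_cubic(a, b, c, d, lo, hi, iters=200):
    # Step 1: bisection for a real root (assumes f changes sign on [lo, hi]).
    f = cubic(a, b, c, d)
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    r = (lo + hi) / 2
    # Step 2: synthetic division by (u - r) leaves A*u**2 + B*u + C.
    A = a
    B = b + r * A
    C = c + r * B
    # Step 3: quadratic formula for the other two (possibly complex) roots.
    disc = cmath.sqrt(B * B - 4 * A * C)
    return r, (-B + disc) / (2 * A), (-B - disc) / (2 * A)
```

Whether the root found this way has a "nice" closed form is, as the thread notes, a separate question entirely.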
apparent error in pages on kernel smoothing
Hi, there appears to be an inconsistency in the pages Kernel smoother and Kernel (statistics). The second of these gives the requirement for a K function that whereas the Kernel smooth article gives the following as a K function:
I add that the full notation for kernel smoothers, using the K function, is:
This, however, doesn't seem to make any difference, since I assume the K function is to be interpreted as a function of X, not X-nought. In this case, it looks like the integral of the D function is 2, which is incompatible with the requirement that it be 1. Am I making a fundamental oversight, and if not, what is the resolution of the inconsistency? Regards, It's been emotional (talk) 23:57, 22 April 2009 (UTC)
- I suspect it should have said 1/2 if |t| ≤ 1. Michael Hardy (talk) 04:26, 23 April 2009 (UTC)
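Hardy's fix can be checked numerically: a kernel must integrate to 1, and a uniform kernel on [-1, 1] only does so with height 1/2. A small Python check (a sketch; the function names are mine):

```python
def integrate(f, a, b, n=100_000):
    # Midpoint-rule quadrature, plenty for checking a kernel's normalisation.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def uniform_kernel(t, height=0.5):
    # Uniform (boxcar) kernel on [-1, 1] with the given height.
    return height if abs(t) <= 1 else 0.0
```

With height 1 instead of 1/2 the total mass comes out as 2, reproducing exactly the inconsistency the question points out.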
April 23
In geometry, is a square with rounded corners still considered a square? If not, what is it called, mathematically?
Normally I can answer any question my daughter asks, but this one stumped me! —Preceding unsigned comment added by 216.61.187.254 (talk) 21:30, 23 April 2009 (UTC)
- Might be Squircle. Zain Ebrahim (talk) 21:35, 23 April 2009 (UTC)
- In practical use it's more probably a composition of line segments and circular arcs, though, which is simply called a "rounded square" in the same article. But the squircle is obviously more mathematically interesting. —JAO • T • C 21:41, 23 April 2009 (UTC)
- Yes, right. Squircle is not what the OP is looking for, but the OP may be pleasantly surprised to find a close match to the rounded square actually defined by an algebraic equation. The squircle nowhere has straight edges, which the OP's question seems to insist on. Though, as a side note, even a square may not have straight edges in Non-Euclidean geometry. But I hope we are dealing with Euclidean geometry here. - DSachan (talk) 21:52, 23 April 2009 (UTC)
- Also rounded square, or smoothed square, or similar. The fact is that not every object in mathematics has a standard acknowledged name; just the most common and most-used ones (for instance: the trigonometric functions sin(x), sin(x)/cos(x) and 1/sin(x) all have special names, but sin(cos(x)) has none). If in a given context (a book, a paper, a theorem) one needs to use frequently something which is not otherwise common enough to deserve a special name, one may just introduce a name to be used there. --pma (talk) 21:53, 23 April 2009 (UTC)
Quotient spaces, and cell complexes
I'm currently reading Hajime Sato's Algebraic Topology: An Intuitive Approach, but I'm having a bit of trouble understanding how to understand what particular quotient spaces look like (despite the book's name). I've taken an introductory course in Algebraic Topology, but was hoping for a bit more rigour regarding quotient spaces, but the book hasn't really helped. It's easy enough to visualise simple examples (such as a closed two-dimensional ball which has its boundary identified to a single point being homeomorphic to S2), but for more complicated examples, I'm just not comfortable stretching these shapes around in my head - I'd like some sort of rigorous method to determine what the quotient space looks like, but haven't been able to find one.
Another (somewhat related) issue I'm having with Sato is the idea of cell complexes. I've come across these before too, but after describing how to construct general spaces using them, he bounds into examples without really explaining what to do. He, for example, says that the torus, T2, can be written as where the are closed i-balls, and the are 'attaching maps', which take the boundary of one part of the complex onto a lower-dimensional part, and whose details he does not actually give. (For instance, I'm interpreting h1 to be ). I've covered similar ground to this before, having studied n-simplices, which are each obviously homeomorphic to (and with boundary operators in place of the 'attaching maps', although the boundary operators were never used to glue), but I can't seem to get the two ideas to gel.
In my mind, the above example starts with two disjoint closed intervals, and whose boundaries (end points) are attached/glued by h1 to a single point, , giving a sort of figure of eight shape. This is then glued by h2 to the boundary of , S2 (edit: obviously I meant S1) ...which I again have no idea how to visualise, or see how this could possibly give a torus.
Sorry for this massive outpouring, I just really can't get my head around it. Thanks a lot! Icthyos (talk) 22:27, 23 April 2009 (UTC)
Don't apologize - we are here to help you. Topology is a subject on which there are too few textbooks, and at least half of those try to oversimplify the subject. I have attempted to understand why this is the case (when I first saw the definition of a topology (a long time ago), I did not see the purpose of having such a complex set-theoretic definition, but after a week I got used to it and saw the definition more intuitively than ever - the definition of a topology is really ingenious, and I respect Hausdorff for this). Quotient spaces are rigorously defined as follows: Let X be a topological space and ~ an equivalence relation on X. Let X* be the set of all equivalence classes under this relation. Define U to be open in X* iff the union of the equivalence classes that belong to U is open as a subset of X. Then X* is a topological space with this topology - it is called the quotient space of X by the equivalence relation ~ (verify that this is a topological space, if you have not done so already). Note that some authors prefer to define the quotient space using the quotient map (which is probably a less confusing way to define it), but I think this definition is alright too. Think about the following - it should improve your intuition of the concept:
a) Consider the natural projection . Is this map continuous? Is the inverse of this map continuous? Is this map surjective? Deduce some properties of X*, given properties of X, using this map.
b) Suppose X is homogeneous (i.e the homeomorphism group acts transitively on X) (equivalently, if x is in X and y is in X, there exists a homeomorphism of X taking x to y). Under what conditions is X* homogeneous?
c) Consider the circle with ~ defined by - x ~ y iff x is antipodal to y (i.e x = - y). Consider the (two-point) equivalence classes (actually viewed as a single entity)- one equivalence class ((1,0) and (-1,0)) essentially determines the x - axis. As you move clockwise around the circle, the equivalence classes are distinct until you have rotated 180 degrees. So essentially, to consider the quotient space, you have to examine the geometry of the equivalence classes - i.e points on the upper semicircle. Essentially, the equivalence classes "close" to (-1,0) are also close to (0,1) because both points are equivalent. What is the quotient space X* homeomorphic to?
d) With a similar equivalence relation defined on the sphere, note that the resulting quotient space is a two-manifold. However, it is impossible to embed the resulting quotient space in R3. What is the smallest n, such that this quotient space can be embedded in Rn? What is the smallest n for which the resulting quotient space can be embedded via a diffeomorphism in Rn (define a smooth structure on the quotient space using the natural smooth structure on the sphere, and the projection (i.e quotient) map).
e) Define x ~ y on an arbitrary topological space iff there is a homeomorphism of that topological space carrying x to y. Find a space for which the resulting quotient space is the Sierpinski space. Is there a topological space for which this is homeomorphic to the countable discrete space?
f) Is the quotient space of a manifold, again a manifold? If not, under what conditions is it a manifold? Under what conditions does it have the same dimension (as a manifold) as the original manifold?
g) Let G act on a topological space X. Use this action to define a quotient space of X. Prove that the resulting quotient map to the quotient space, is a covering map, under certain conditions.
I think that these questions are useful to develop your intuition. As I don't know your exact level, I can't say whether these questions will be challenging for you or not. However, most should be relatively easy. If you have any other questions, or any difficulties, feel free to ask again. I will help you on your other question, if no-one else will do so, but I have got to go now. --PST 03:00, 24 April 2009 (UTC)
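PST's open-set definition of the quotient topology above can be restated compactly through the quotient (projection) map; this block is just a LaTeX transcription of what that paragraph says, not an addition to it:

```latex
q \colon X \to X^{*}, \qquad q(x) = [x]_{\sim},
\qquad
\tau_{X^{*}} \;=\; \bigl\{\, U \subseteq X^{*} \;:\; q^{-1}(U)\ \text{open in}\ X \,\bigr\}.
```

Equivalently, this is the finest topology on X* that makes q continuous.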
- For the torus example in particular, I think you can consider this by looking at the fundamental polygon for the torus, which is realising the torus by the following construction . You can see that this has only one point (), two 1-cells and one 2-cell. So you attach these together as the above diagram says, and you get the torus.
- I don't think there really can be a general method for determining what the quotient space looks like, you need to consider what the attaching maps involved actually are and see if you can relate that to anything you know; you can realise many spaces by a similar construction as above, for example I remember a nice example of taking a cube and identifying opposite faces with a quarter-twist, you can put a cell structure on this quite easily by counting points, edges, faces and the whole volume, and you end up with some space with fundamental group , the quaternion group, which I thought was quite nifty. - XediTalk 02:56, 24 April 2009 (UTC)
Thanks a lot for your responses, they've been a great help. Xedi: I see now how those attaching maps on those cells give the torus, I'm just a bit miffed that I couldn't build it from the ground up, without using that diagram. Thanks!
PST: I've not come across a lot of those definitions before. Ordinarily I'd dive right in, but it's exam crunch time at the moment, and I feel guilty when I wander away from curricular topics! It's definitely helped consolidate the ideas I have about quotient topologies, though. What you said about understanding the set-theoretic definition of a topology struck a chord with me. As I assume is the usual way, I was first taught the subject from the point of view of metric spaces, and once comfortable with that, the notion of distance was stripped away, and we defined a topology to be a certain collection of subsets of a space that maintained the 'nice' properties of the open sets in a metric space. What I don't understand is, why do we define a topology to have those specific properties? (unions and finite intersections of open sets being open, etc.) Just because they allow us to prove certain theorems? If the subject were taught without the lead-in from metric spaces, how would one justify the definition of a topology? I suppose I just don't see how a 'structure' is defined on the space in such abstract terms - without some notion like distance, it's harder to visualise. Thanks again, Icthyos (talk) 20:28, 25 April 2009 (UTC)
- If you feel comfortable with metric spaces, and if you find them reasonable objects, you will sooner or later accept topological spaces as well. There are several reasons to abstract and consider topological spaces, even if one is only interested in metric spaces. One is that, even in a metric space, certain properties are stated and studied better in terms of open sets rather than distances (compactness, connectedness, continuity of functions... every topological property, in one word). Another reason is that, especially if you like metric spaces more, you would be happy to put a suitable distance on a set and make it a metric space (example: is a certain notion of convergence in functional analysis a convergence w.r.t. a suitable distance?). As a matter of fact, the problem of metrizability was initially the big motivation to study general topological spaces -- in the meanwhile people became acquainted with the generalized notion, and learnt to do without distances. A third reason that comes to my mind is that certain natural and useful constructions that one may perform on metric spaces, to build other ones, in some cases give rise naturally to a topological space, but maybe not a metric space (uncountable products, quotients, spaces of mappings, weak topologies in functional analysis...). So you are quite naturally led to go out of the class of metric spaces, in much the same way you are led to consider rational numbers when making operations with natural numbers. --pma (talk) 15:49, 26 April 2009 (UTC)
- Let me note that a topological space is essentially a metric space - although not in the way that most people expect. For example, I can derive a metric out of any topological space using other mathematical disciplines, although this "metric" will not necessarily satisfy the axioms. But who cares? :) Mathematics is a general subject and although working with objects that satisfy so many similar properties to Euclidean space is easy, it is interesting to study objects which don't. After all, people like Euclidean space just because that is our universe. But what if we were living in another Banach space? Actually, throw away that idea, because Banach spaces are again too similar to Euclidean space. The point I am trying to make is that, in my view, a lot of mathematics is still to come - in particular, I strongly feel a new (crucially important) field of mathematics will be invented. After all, it was only in the last century that many famous areas of mathematics were fully axiomatized. However, it does not seem that this will occur in our lifetime. --PST 02:12, 27 April 2009 (UTC)
- On a different note, I feel that exams are not a good thing. Rather than letting students pick up interest themselves, they are rather forced to "do it for good marks". This is not to say that all exams are bad. Nowadays there are take-home exams which allow one to actually think rather than do it in a few hours. However, mathematics is not about the time it takes to prove something, but rather the quality of that which you prove, no matter how long it takes. --PST 02:17, 27 April 2009 (UTC)
- there are take-home exams which allow one to actually think...and to put a post here 82.84.117.68 (talk) 18:51, 28 April 2009 (UTC)
April 24
Growth of a bacterial colony
Hi, I was wondering if I could obtain some help with the following question: Starting with 1 bacterium that divides every 15 minutes, how many bacteria would there be after 6 hours? Assume none die, etc. I thought, OK, so first of all, how many sets of 15 minutes in 6 hours? There are 4 lots of 15 mins in 1 hour, therefore 4 x 6 = 24 fifteen-minute intervals in 6 hours. Nothing too difficult there. I then thought that to work it out I would do e^(original number x time), making the equation e^(1x24), which gives an answer of 2.649 x 10^10. I had a look at the answer and it said 16,777,216, which is obtained by the equation 2^24, but I have no idea where this equation 2^24 comes from. I'm also unsure why my calculation involving e was incorrect. If anyone could explain these two things to me it would be a great help. Thanks. —Preceding unsigned comment added by 92.22.189.144 (talk) 08:24, 24 April 2009 (UTC)
- Dividing every 15 minutes means doubling, so there will be twice as many as before. Hence it is the appropriate power of 2 which is wanted.81.154.108.6 (talk) 08:52, 24 April 2009 (UTC)
- Right. After 15 minutes, there will be 2 (2^1) bacteria. They will both divide after 30 minutes (15x2), and there will be 4 (2^2) bacteria. After 45 minutes (15x3), these 4 bacteria will double and the number of bacteria will be 8 (2^3), so you see there is a pattern. After 15xn minutes, there will be 2^n bacteria. In your case n is 24, so there will be 2^24 bacteria after 15x24 = 360 minutes = 6 hours. I have no idea why you think there should be a connection with 'e'. - DSachan (talk) 11:21, 24 April 2009 (UTC)
Many thanks for the reply. I am sure there is an equation of exponential growth involving e, something like e^(kt)? Not sure. Thanks anyway! —Preceding unsigned comment added by 92.21.233.141 (talk) 13:08, 24 April 2009 (UTC)
- The article on exponential growth is quite helpful in this regard. We could write the number of bacteria as a function of the number of minutes that have passed as
- but as the original poster guessed, we could just as well write this formula using base e.
- To get the value of T, we set the two formulas equal and just take the log of both sides.
- .
- The number of bacteria at a given time can now be written as . (any suggestions to make my math look better would be appreciated). mislih 13:48, 24 April 2009 (UTC)
- a suggestion to make the math look better: try \scriptstyle : . Bo Jacoby (talk) 14:13, 24 April 2009 (UTC).
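The two formulas in this thread, base 2 and base e, can be checked against each other in a few lines of Python (a sketch; the names are mine):

```python
import math

PERIODS_PER_HOUR = 4  # one division every 15 minutes

def bacteria_base2(hours):
    # Doubling once per 15-minute period: 2**(number of periods).
    return 2 ** (PERIODS_PER_HOUR * hours)

def bacteria_base_e(hours):
    # The same growth law rewritten with base e: rate k = ln 2 per period.
    k = math.log(2)
    return math.exp(k * PERIODS_PER_HOUR * hours)
```

This also shows where the original attempt went wrong: e^(1x24) ≈ 2.65 x 10^10 uses a rate of 1 per period, but the correct rate constant for doubling is ln 2 ≈ 0.693 per period.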
sci.mathresearch newsreader question
What's an easy newsreader to use for posting there? (They say you have to use a newsreader now, and I don't know a newsreader from Adam.) thanks, Rich (talk) 09:28, 24 April 2009 (UTC)
- Just go to groups.google.com and follow instructions. McKay (talk) 12:02, 24 April 2009 (UTC)
- thanksRich (talk) 06:07, 25 April 2009 (UTC)
Polynomial division
This is actually a homework problem. I tried as many ways as possible but I didn't get an answer. The question is:
If the polynomial is divided by another polynomial , the remainder comes out to be . Find k and a.
I tried by the following method:
putting the remainder on the L.H.S.,
If I divide the L.H.S. by g(x), I get and . I equate r(x) to zero because g(x) is a factor of the L.H.S. I get . I am unable to continue. Please help me --harish (talk) 13:51, 24 April 2009 (UTC)
- The remainder must be the zero polynomial, meaning that both coefficients are zero. . Bo Jacoby (talk) 14:07, 24 April 2009 (UTC).
- (e/c) If is the zero polynomial, this means that all its coefficients are zero, i.e., and . You can extract k from the first equation, and then you get a from the second equation. However, I think that you made a numerical error, I got a different result for q and r. — Emil J. 14:09, 24 April 2009 (UTC)
- solve(identity(x^4-6*x^3+16*x^2-25*x+10 = (x^2-2*x+k)*(x^2+b*x+c) + (x+a),x),{a,b,c,k}); McKay (talk) 00:40, 25 April 2009 (UTC)
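McKay's computer-algebra one-liner preserves the actual polynomials: f(x) = x^4 - 6x^3 + 16x^2 - 25x + 10, g(x) = x^2 - 2x + k, remainder x + a. A plain-Python long division (a sketch of mine) recovers the answer, which works out to k = 5, a = -5:

```python
def polydiv(num, den):
    # Long division of polynomials given as coefficient lists,
    # highest degree first; returns (quotient, remainder).
    out = [float(c) for c in num]
    q = []
    for i in range(len(num) - len(den) + 1):
        f = out[i] / den[0]
        q.append(f)
        for j, d in enumerate(den):
            out[i + j] -= f * d
    return q, out[len(num) - len(den) + 1:]

def find_k_and_a():
    # Search integer k for which the remainder has the form x + a,
    # i.e. its leading coefficient is exactly 1.
    for k in range(-20, 21):
        _, (r1, r0) = polydiv([1, -6, 16, -25, 10], [1, -2, k])
        if abs(r1 - 1.0) < 1e-9:
            return k, r0
    return None
```

With k = 5 the quotient is x^2 - 4x + 3 and the remainder is x - 5, exactly the "remainder x + a" form with a = -5.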
Holomorphic mapping
Hello, I am trying to prove that any one-to-one and onto holomorphic mapping of with a removable singularity at 0 satisfies and I am stuck.
I assumed it was something else, c. Then, there must exist some other point that also maps to c, say p. I assume I need to do something with an integral to count the number of zeros of the function f(z) - c, which has two (if I just abuse notation and let this f now represent f with the removable singularity removed). But, I am not sure what I can do. Form a circle big enough to contain 0 and p, call it D. Then, consider the function
which is a function of q. At c, the value is 2. I am guessing that I want to prove that for some small neighborhood around c, all numbers in that disc must also have value 2 for this integral, which will contradict the fact that the original map was one-to-one.
So, is this integral function a continuous function of q at c? I am pretty sure I know it is continuous in z but that's not what I want here. I am not sure exactly how this works.
Thanks StatisticsMan (talk) 21:52, 24 April 2009 (UTC)
- Your argument is correct, but just observe that the extended function on is a nonconstant holomorphic function, hence it is an open map. Therefore, if the pre-image of c has at least 2 points, the same is true in a neighbourhood of c, losing the injectivity on . --pma (talk) 06:56, 25 April 2009 (UTC)
- Let me see if I understand this. If I take a disc around 0, maybe with radius |p|/2 to ensure p is not in it. Then I map this with the holomorphic function, it must be an open set and it contains c. Then, by continuity, the inverse image of that is an open set. But, it now contains p and also at least a small disc around p. But, that means every point in that disc is an inverse image of some point (I didn't name these discs so hard to talk about discs now), which means every non-p point in that disc around p goes to the same value as some nonzero point in the disc around 0, contradiction. Okay, this makes sense and it's much simpler.
Just for my knowledge, can someone please help me understand why that integral is continuous in q?(Nevermind on that, I actually found this in my book finally... and it's in the proof of the open mapping theorem, which makes sense.) Thanks for the help! StatisticsMan (talk) 14:08, 25 April 2009 (UTC)
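The counting integral referred to in this thread was lost from this copy of the page; from the surrounding description (counting the zeros of f(z) - q inside the circle D) it is presumably the standard argument-principle integral:

```latex
N(q) \;=\; \frac{1}{2\pi i}\oint_{\partial D}\frac{f'(z)}{f(z)-q}\,dz ,
```

which counts the zeros of f - q inside D with multiplicity, and is locally constant in q as long as q stays off the curve f(∂D); that local constancy is what makes the "value 2 in a neighbourhood of c" step work.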
How to compute statistical significance of correlation
I have a list of correlations R[] and the associated number of samples for each correlation, N[]. I am not sure how to calculate a weight for each correlation which is proportional to the chance that the correlation is non-spurious. Right now I have the weight set to sqrt(N)*R^2, but this does a lousy job of discriminating one pair of (R, N) from another.
- I believe you mean "statistically insignificant" instead of "non spurious." Spurious (at least in econometrics) refers to a correlation that is statistically significant due to random chance. Determining whether a correlation is spurious is, at best, non-trivial and, depending on your particular circumstances, quite possibly impossible. Wikiant (talk) 23:22, 24 April 2009 (UTC)
If you have an assumption of joint normality, and if you mean what I suspect you might mean (but I can't be sure, given what you've said and what you haven't said), an F-test should do it. Michael Hardy (talk) 03:21, 26 April 2009 (UTC)
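Under joint normality, the usual test for H0: rho = 0 uses the statistic t = r * sqrt((n - 2)/(1 - r^2)) with n - 2 degrees of freedom (Hardy's F-test is equivalent, with F = t^2). A minimal Python sketch, with names of my own choosing:

```python
import math

def corr_t_stat(r, n):
    # t statistic for testing rho = 0; compare against the
    # Student t distribution with n - 2 degrees of freedom.
    return r * math.sqrt((n - 2) / (1 - r * r))
```

For example, r = 0.5 with n = 27 gives t ≈ 2.89, which exceeds the two-sided 5% critical value (≈ 2.06 for 25 degrees of freedom), so such a correlation would be judged significant.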
April 25
higher derivatives
Hi, I read the following surprising comment in a maths textbook: "Let be a function such that exists. Then exists in an interval around for " Surely this is false. Take the function for rational x, and for some integer n > 2, for all irrational x. At x = 0, all derivatives below the nth will exist, but the function isn't even continuous around 0. Have I got this right? It's been emotional (talk) 00:21, 25 April 2009 (UTC)
- It looks like the first derivative exists, but do the higher ones? The first derivative is only defined at 0, don't you need your function to be defined on an interval to differentiate it? --Tango (talk) 00:34, 25 April 2009 (UTC)
<sheepish grin> oops - I was only thinking with reference to the squeeze theorem, in which case the idea makes sense, but I think you are quite right, and there is no such thing as the higher derivatives. However, if instead of using in the definition of , you use the definition of in terms of , it might be a different story. Then I suspect the squeeze theorem would apply, and the result would follow. Am I right now? It's been emotional (talk) 04:24, 25 April 2009 (UTC)
- The comment in your book is correct, and it's not a surprising result but rather a remark on the definitions. The n-th order derivative of f at c is the derivative at c of the function , therefore, as Tango says, you need the latter to exist in a neighbourhood of c. The same for all . Maybe you have in mind a weaker definition of the second derivative, like ? --pma (talk) 07:16, 25 April 2009 (UTC)
Exactly what I was thinking. was in fact the form I came up with, and in the function I gave, you get an easy limit, since f(0) = 0 and 2h is rational exactly when h is, so they are taken from the same subcomponent of f(x) for any h. Is there anything wrong with using this as the definition of the second derivative? If not, it would seem advantageous, since differentiation is usually considered a "good" property for a function to have. Thanks for the help; I have always got good marks in maths, but never really had my rigour checked, so I make careless errors like this. It's good getting things "peer reviewed" so to speak. Much appreciated, It's been emotional (talk) 00:15, 26 April 2009 (UTC)
- I think the primary difficulty is the loss of the property mentioned. It would be a bit of a problem if taking the derivative of something twice did not produce the second derivative, which would be the case if the second derivative was defined in that way. Black Carrot (talk) 02:47, 26 April 2009 (UTC)
- Yes, it seems a notion too weak to be useful, as your example shows (nice!). Also, in the symmetric form I wrote, any even function would have "". A somewhat richer property, closer to what you have in mind, is maybe having an n-th order polynomial expansion: with as . So n=0 is continuity at c; n=1 is differentiability at c; however, for any n it doesn't even imply continuity at points other than c (therefore in particular it doesn't imply the existence of ). An interesting result here is that the Taylor theorem has a converse: is of class if and only if has n-th order expansions at all points x, with continuous coefficients and remainder as , locally uniformly. Then the coefficients are of course the derivatives. The analogous characterization of maps holds true in the case of Banach spaces. So "having an n-th order polynomial expansion at a point" is a reasonable alternative notion to "having the n-th order derivative at a point"; they are not equivalent, but having them everywhere continuously is indeed the same. --pma (talk) 10:56, 26 April 2009 (UTC)
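The weak definition discussed in this thread (its display was lost in this copy, but from pma's description it is presumably the symmetric quotient (f(c+h) - 2f(c) + f(c-h))/h^2) is easy to probe numerically. As a sketch of why it is "too weak to be useful": for the badly discontinuous sign function (my example), the quotient at 0 is identically zero, so this weak "second derivative" exists there:

```python
def symmetric_second_quotient(f, c, h):
    # The weak "second derivative" candidate discussed in the thread:
    # (f(c+h) - 2*f(c) + f(c-h)) / h**2.
    return (f(c + h) - 2 * f(c) + f(c - h)) / h ** 2

def sign(x):
    # Discontinuous at 0, yet its symmetric second quotient there
    # is 0 for every h, since sign is odd and sign(0) = 0.
    return (x > 0) - (x < 0)
```

For a genuinely twice-differentiable function such as x^2, the same quotient correctly returns the second derivative 2.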
dice
If I roll 64 standard non-weighted 6-sided dice, what are my odds of rolling at most one 6?
- Could you explain this further, please? Are you referring to your score on each die individually, or your total score, or what? It's been emotional (talk) 04:25, 25 April 2009 (UTC)
I mean what are the odds that 63 of the 64 dice will not have 6 pips facing up. —Preceding unsigned comment added by 173.25.242.33 (talk) 04:39, 25 April 2009 (UTC)
If you mean 0 or 1 of them showing a 6, the probability is which is about 5.068 E-48, or 0.0000...005068, with 47 zeros after the decimal point. If you mean exactly 1 six, then that's just , or 5.02 E-48. It's been emotional (talk) 05:44, 25 April 2009 (UTC)
- You surely mean . Bikasuishin (talk) 10:05, 25 April 2009 (UTC)
- And about 1.2*10^-4 for the other possibility. Algebraist 14:33, 25 April 2009 (UTC)
- Thanks, that was my thesis doing things to my brain ;) It's been emotional (talk) 23:56, 25 April 2009 (UTC)
Poker theory and cardless poker
Inspired by This question, I was wondering if any math(s) types could give me an idea of how this game would have to be played. The idea is you pick a poker hand at the start, write it down and then play as if you had been dealt that hand. Clearly to prevent everyone from choosing royal flushes or allowing one hand to become the optimum pick, there need to be rules limiting payment to high hands etc. What rules would be a good start? Any help is hugely appreciated 86.8.176.85 (talk) 05:07, 25 April 2009 (UTC)
- The simplest variant would be to give everyone a pack, and they choose whatever cards, but when you've chosen a hand, that gets discarded. If you need to avoid having a pack, just get everyone to write the cards down, and strike out your cards as they are used, then you can't use them again for that game. With 52 cards, of course that's 10 hands, with two for each player that don't get used, then you start over. That limits a proper game to multiples of 10 rounds, or you just accept that you finish when everyone gets sick of it, and if Fred's been saving his royal flush up, and Sally has already played hers, too bad for him. It's been emotional (talk) 05:50, 25 April 2009 (UTC)
A large slice of pie
The 50,366,472nd digit onwards of pi is 31415926 (it's true, I'm not making it up). I was wondering what the expectation value is of the digit in pi after which all the digits up to that point are repeated. I think the probability of this happening must be
<math>\sum_{n=1}^{\infty} \left(\frac{1}{10}\right)^n</math>
which limits to
<math>\frac{1}{9}</math>
which is less than 0.5. Does this mean that expectation value is meaningless here? SpinningSpark 13:06, 25 April 2009 (UTC)
- Your first assumption would have to be that pi is a normal number, which is not actually known. —JAO • T • C 13:48, 25 April 2009 (UTC)
- Yes, I was making that assumption (and indeed I knew that it was an assumption - forgive my sloppiness, I am an engineer, not a mathematician). I would still like to know how this can have a finite probability but not have an expectation value. SpinningSpark 14:19, 25 April 2009 (UTC)
- Expectations are only meaningful for random events, and the digits of pi are not random, so you're never going to get a meaningful expectation here. Furthermore, even if pi is normal, it's not obvious (at least to me) that it must contain such a repetition. If instead of considering pi, we consider a random number (between 0 and 1, with independent uniformly-distributed digits, say), then the probability that such a repetition occurs is not 1 (it's not 1/9, either; your calculation is an overestimate due to some double counting). Thus to get a meaningful expectation value for the point at which the first repetition occurs, you need to decide what value this variable should take if no such repetition occurs. The obvious value to choose is infinity, in which case the expectation is also infinite. Algebraist 14:31, 25 April 2009 (UTC)
- The digits of pi certainly are random in the Bayesian paradigm. Robinh (talk) 20:55, 25 April 2009 (UTC)
One could say the sequence of digits itself is random (regardless of whether π is normal or not) in the sense that it defines a probability measure, thus: the probability assigned to any sequence abc...x of digits is the limit of its relative frequency of occurrence as consecutive digits in the whole decimal expansion. Then one could ask about the expected value. However, if it is not known that π is a normal number, then finding such expected values could be a very hard problem, whose answer one would publish in a journal rather than posting here.
As for being "random in the Bayesian paradigm", from one point of view the probability that they are what they are is exactly 1. Bayesianism usually takes provable mathematical propositions to have probability 1 even though in reality there may be reasonable uncertainty about conjectures. The Bayesian approach to uncertainty is quite mathematical in that respect. Michael Hardy (talk) 03:12, 26 April 2009 (UTC)
- (@MH) As to the frequencies, note that at the moment it is not even known if they have a limit.--pma (talk) 08:27, 26 April 2009 (UTC)
- The digits of pi are random in the Bayesian paradigm because it identifies uncertainty with randomness. In my line of research, we treat deterministic computer programs as having 'random' output (the application is climate models that take maybe six months to run). Sure, the output is knowable in principle but the fact is that if one is staring at a computer monitor waiting for a run to finish, one does not know what the answer will be. One can imagine people taking bets on the outcome. This qualifies the output to be a random variable from a Bayesian perspective. I see no difference between a pi-digits-program and a climate-in-2100 program (some people would take bets on the googol-th digit of pi, presumably). I write papers using this perspective, and it is a useful and theoretically rigorous approach. Best, Robinh (talk) 19:59, 26 April 2009 (UTC)
- I wouldn't advise such a bet due to existence of these algorithms. I'm not sure what the constants are for the running time, so it is difficult to know if calculating the googolth digit is practical, but I suspect it is. --Tango (talk) 21:54, 26 April 2009 (UTC)
- In 1999 they computed the 4*10^13th binary digit of pi (and some subsequent ones) this way. It took more than one year of computing. I don't know of the further results. It's a 0, btw. --pma (talk) 23:13, 26 April 2009 (UTC)
Upper bound on the number of topologies of a finite set
Hi there - I'm looking to prove that the number of topologies on a finite set ({1,2,...,n} for example) doesn't have an upper bound of the form k^n (assuming this is true!), probably by contradiction, having proved 2^n is a lower bound (n>1) - but I'm not sure how to get started - could anyone give me a hand please?
Thanks very much, Otherlobby17 (talk) 20:48, 25 April 2009 (UTC)
- 2^n is an upper bound, isn't it? A topology has to be a subset of the power set, which has cardinality 2^n. --20:58, 25 April 2009 (UTC)
- Which, of course, means an upper bound of 2^2^n. I apologise for my idiocy. --Tango (talk) 21:09, 25 April 2009 (UTC)
- A topology on a finite set is the same thing as a preorder (OEIS:A000798). But even the number of total orders is already more than kn for any k. —David Eppstein (talk) 21:03, 25 April 2009 (UTC)
D. J. Kleitman and B. L. Rothschild, The number of finite topologies, Proc. Amer. Math. Soc., 25 (1970), 276-282 showed that the logarithm (base 2) of the number of topologies on an n-set is asymptotic to n^2/4. So it is smaller than any expression 2^(k^n) for any k>1, which is probably the question you meant to ask. McKay (talk) 01:31, 26 April 2009 (UTC)
Very helpful, thank you - but I did mean to ask about 2^(k^n) rather than k^n: having already known that 2^(2^n) is a (crude) upper bound, I was wondering if it could be improved to the extent of being of that form for some k - since I generally see it quoted as 2^(2^n) I assumed there was no such form, hence my question. Thanks very much for the information! How do we know it's a preorder? Is the number of total orders smaller than the number of topologies then? Thanks again, Otherlobby17 (talk) 01:58, 26 April 2009 (UTC)
- How do we know: simply define a preorder: <math>x \leq y</math> iff every open set containing x also contains y; see the quoted link.--pma (talk) 08:09, 26 April 2009 (UTC)
Homework problem
As the title suggests, this is a homework problem but I only need to be told what a question means. Given the equation of an ellipse I am told that "The point N is the foot of the perpendicular from the origin, O, to the tangent to the ellipse at P.' I'm confused by the use of the word foot because if it's being used in the way I've seen it used before then in this case it would mean the origin but that can't be right. What is it meant to mean? Thanks 92.3.150.200 (talk) 21:33, 25 April 2009 (UTC)
- I would guess it simply means the point where the two lines (L1 = the tangent and L2 = the line through the origin and perpendicular to the tangent) intersect. This MathWorld article seems to think the same. —JAO • T • C 21:45, 25 April 2009 (UTC)
- Yes, that's what the foot of a perpendicular usually means. --Tango (talk) 21:50, 25 April 2009 (UTC)
- So if this makes it clear, you would also find that if the tangent was a vertical or horizontal line, then N and P would be the same point, on the ellipse. Otherwise, N would be outside the ellipse. It's been emotional (talk) 00:02, 26 April 2009 (UTC)
- I think it's not whether it's vertical or horizontal but whether dy/dx at P corresponds to the slope of a circle going through P. The slope of a circle at (x,y) is -x/y. The slope of an ellipse at (x,y) is -x/ay. So when their slopes times a particular constant factor are equal, the OP's N and P are the same point. I guess an ellipse's tangent is equal to a circle's tangent (at the same point) either 4 times (there's your "vertical or horizontal") or at every point (if the ellipse is a circle). .froth. (talk) 03:46, 26 April 2009 (UTC)
April 26
Convert a computer file to a non-negative Integer number
How do I convert an ordinary computer file into a non-negative Integer number. What I'm interested in is the contents of the file. The name of the file is irrelevant and need not be saved.
For example: a file with a size of 1 byte and the value 16 can easily be converted to the binary number 10000, which is the decimal value 16.
However, I realize that this method does not work, because a file with a size of 2 bytes and the hexadecimal value "00 10" would also convert to the decimal value 16.
Clearly I must somehow also encode the size of the files in bytes as well as the actual value of the file.
What is the best way of converting an ordinary computer file to a non-negative Integer number which uses the least amount of numerical digits? 122.107.207.98 (talk) 00:36, 26 April 2009 (UTC)
- Quick and dirty fix; tack a 1 in front of every file. So the first file would become x110, in decimal 272. The second file becomes x10010, or in decimal 65552. Taemyr (talk) 00:42, 26 April 2009 (UTC)
- Since the total number of files with n or fewer bits is 2^(n+1)-1, Taemyr's method is exceedingly close to optimal. McKay (talk) 01:37, 26 April 2009 (UTC)
- I think that's wrong. Try it for n=3 or n=4; your constant is off by one. I think . .froth. (talk) 04:24, 26 April 2009 (UTC)
- You forget the empty file. I have lots of them on my computer so I know they exist. :) McKay (talk) 08:23, 26 April 2009 (UTC)
- Oh and have a look at arithmetic encoding
although it doesn't really help your integer situation (struck out: curse you tiredness, it works fine). I suspect this is the optimal that taemyr's bit of waste approaches. .froth. (talk) 04:25, 26 April 2009 (UTC)
Taemyr, your method is brilliant.
- no file has the decimal representation value of 0
- a file with zero bytes has the decimal representation value of 1
- a file with 1 byte and hex value of 00 has the decimal representation value of 256
- a file with 2 bytes and hex value of 00 00 has the decimal representation value of 65536
- a file with 1 byte has the decimal representation value ranging from 256-511
- a file with 2 bytes has the decimal representation value ranging from 65536-131071
eh? what kind of file has the decimal representation value of 2? It does seem to me that the range 2-255 is not being used. And the range 512-65535 is also not being used. 122.107.207.98 (talk) 10:46, 26 April 2009 (UTC)
- Here's a better way: treat the file as a base-256 number, but with the bytes representing digit values of 1–256 instead of 0-255. This maps the empty file to 0, the one-byte files to 1–256, the two-byte files to 257–65792, and so on, and it's easy to compute:
Integer int_of_file(FILE* f) {
    Integer n = 0, place = 1;
    int c;
    while ((c = getc(f)) != EOF) {
        n += place * (c + 1);
        place *= 256;
    }
    return n;
}
void file_of_int(FILE* f, Integer n) {
    while (n) {
        putc((n - 1) % 256, f);
        n = (n - 1) / 256;
    }
}
- That treats the file as little-endian. Big-endian is a bit more tricky as you have to search for the maximum place value in
file_of_int
. -- BenRG (talk) 11:04, 26 April 2009 (UTC)
Birthday paradox
If A doesn't share a birthday with B and B doesn't share a birthday with C, then there's no way A can share a birthday with C. Doesn't this ruin the calculations that "prove" the unintuitive result of the birthday paradox? Also there are circular relationships with 4 people, and 5 people, and n people. What's the actual graph look like, or at least what's the 50/50 point? .froth. (talk) 03:26, 26 April 2009 (UTC)
- Yes there is - if A and C are born on the 1st January, B is born on the 2nd of January, then A doesn't share a birthday with B who doesn't share a birthday with C, but A shares a birthday with C. 'Not sharing a birthday' does not have transitivity, whereas sharing a birthday does, so in fact sharing a birthday is an equivalence relation - don't expect to see it turning up on exams any time soon though... Otherlobby17 (talk) 03:45, 26 April 2009 (UTC)
- Oh right >_< But sharing is transitive so should the graph actually be higher? .froth. (talk) 03:52, 26 April 2009 (UTC)
- What makes you think that somehow the calculation would ignore the possibility of more than two sharers? It doesn't. (The actual, real-life, 50/50 point is a little higher due to leap days but may also be influenced by systematic biases in when people are born—but I'm pretty sure it's still above 23 and below 24.) —JAO • T • C 09:39, 26 April 2009 (UTC)
The Gateaux derivative - how is it defined?
Our article on the Gateaux derivative defines it this way:
- A function f : U ⊂ V → W is called Gâteaux differentiable at <math>x \in U</math> if f has a directional derivative along all directions at x. This means that there exists a function g : V → W such that
- <math>g(h)=\lim_{t\to 0}\frac{f(x+th)-f(x)}{t}</math>
- for any chosen vector h in V, and where t is from the scalar field associated with V (usually, t is real).
My question is whether anyone can confirm that this is correct, because I thought it required that t approach 0 from above, i.e. t is always positive. The reason is that the influence function is considered a special Gateaux derivative, and that definitely requires that t approach from above. Thanks in advance, It's been emotional (talk) 08:35, 26 April 2009 (UTC)
- Well, the standard definition (for V, W Banach spaces or also TVS, U open in V) is, f is Gâteaux differentiable at <math>x \in U</math> iff what you wrote happens, with g a linear continuous operator; in this case you may equivalently take t positive in the definition, for g(-h)=-g(h). (By the way, I do not like so much the distinction between G-derivative and G-differential as it is made in the link; if g is not a linear continuous operator, people would just say "f has directional derivatives in all directions h", with no further names for this). --pma (talk) 09:06, 26 April 2009 (UTC)
- Aren't you describing the Frechet derivative? My understanding was that the Gateaux derivative differed from the Frechet derivative in that the derivative did not have to be linear.76.126.116.54 (talk) 20:51, 26 April 2009 (UTC)
- No, both differentials are linear continuous operators, but Fréchet differentiability is a stronger condition, in that it is required that f(x+h)-f(x)-Lh=o(h) as h tends to 0. A standard example of a function on that is differentiable in the origin in Gâteaux but not in Fréchet sense, is , which is not even continuous. --pma (talk) 21:26, 26 April 2009 (UTC)
Thanks, pma that's much clearer (though I'm still trying to work out if the function you gave really is discontinuous, rather than just not Frechet differentiable). I got the stuff I cut and pasted from our article Frechet derivative, which seems completely wrong. If you can confirm that for me, I'll get to work editing it, either soon or at least I'll make a note of it for when my thesis is done (I'm under the hammer at the moment). I'll add an acknowledgement of you and the ref desk for the help too. Thanks also to 76.126, because I also had the same question. It's been emotional (talk) 08:27, 27 April 2009 (UTC)
- Well, maybe it's not completely wrong, but it uses a definition of Gâteaux derivative that is not the standard one. Also, usually differentiability or derivability are synonymous (both in the F. and in the G. context). At most, some authors distinguish between "differential" and "derivative", preferring the latter for functions of one (real or complex) variable, so that the differential is always the linear map, and the derivative is the usual limit vector, the two being linked by the identities df(x)[h]=f'(x)h and f'(x)=df(x)[1]). Great books on differential calculus in Banach spaces: Cartan; Dieudonné; also, the first chapter of Hörmander has a short but complete and perfect introduction. Going back to the function of the example, I think it is constant on the graph of any parabola (x, cx^2), x>0, with a constant depending on c (this shows the discontinuity at the origin). --pma (talk) 11:57, 27 April 2009 (UTC)
Thx, all clear now! I think I'll at least tag that page, because it does need editing - it's inconsistent with the page on Gateaux derivatives. cheers, It's been emotional (talk) 02:35, 29 April 2009 (UTC)
Pólya enumeration theorem
I am having trouble in reconciling the Pólya enumeration theorem I am reading from my book, and what is given here on wikipedia.
My book states: Suppose S is a set of n objects and G is a subgroup of the symmetric group Sn. Let <math>P_G(x_1,\ldots,x_n)</math> be the cycle index of G. Then the pattern inventory for the nonequivalent colorings of S under the action of G using colors y1, y2...ym is <math>P_G\bigl(y_1+\cdots+y_m,\; y_1^2+\cdots+y_m^2,\; \ldots,\; y_1^n+\cdots+y_m^n\bigr).</math>
Here a pattern inventory of the colorings of n objects using the m colors is the generating function <math>\sum a_{n_1 n_2 \cdots n_m} y_1^{n_1} y_2^{n_2} \cdots y_m^{n_m}</math>. The sum here runs over all vectors <math>(n_1, n_2, \ldots, n_m)</math> of nonnegative integers satisfying <math>n_1+n_2+\cdots+n_m=n</math>; <math>a_{n_1 n_2 \cdots n_m}</math> represents the number of nonequivalent colorings of the n objects where the color <math>y_i</math> occurs precisely <math>n_i</math> times. For example, by looking at the pattern inventory of the colorings of 4 objects (beads) by 3 colors (r, g and b) and taking G=D4, we can see that as the coefficient of r^2gb is 2, there are two necklaces with 4 beads possible using these three colors. I understand this fully.
The wikipedia article however seems to take a more general approach. It starts off with two sets X and Y. I assume that X stands for the 4 beads and Y the colors {r,g,b}. The colors are then accorded some weights. Then the colorings are also assigned weights. Now it defines a generating function c(t) whose coefficients are the number of colors of a particular weight. I am having difficulty in reconciling this with what I have understood from my book. Specifically it would help if someone could clarify these things to me:
- Does the WP article take the approach that the number of possible colors is infinite?
- What are the weights in the necklace problem that I have outlined?
- What do weights signify in general?
- How is my book's PET equivalent (or a special case of) WP's PET?
Thanks--Shahab (talk) 09:44, 26 April 2009 (UTC)
Example in the article about the Hermite interpolation
I looked at the article about Hermite interpolation [1] and I tried to reproduce the example; however, I could not figure out where the 28 comes from (third row, fourth column).
It's also not clear to me what the columns (after the x and f(x) values) contain. The column which starts with -8 contains the first derivative; if the values on the left are equal to the values one row above, then it seems to contain the derivative, but what otherwise?
It would be great if you could explain this a bit, so we could edit the example and make it easier to understand.
Thx in advance!
--F7uffyBunny (talk) 18:27, 26 April 2009 (UTC)
--- http://en.wikipedia.org/wiki/Hermite_interpolation
Figured it out and updated the article —Preceding unsigned comment added by F7uffyBunny (talk • contribs) 21:06, 26 April 2009 (UTC)
- While waiting for a more specific answer to your question, notice that you can easily make free links using double square brackets. As to the Hermite interpolation, also have a look at Chinese theorem#Applications. --pma (talk) 21:10, 26 April 2009 (UTC)
Digit distribution
Periodically in "detective stories" you get a plot involving someone faking a set of accounts or some other list of numbers and they don't meet the normal statistical usage of digits wherein 1 is much more frequent than 9. Two parts to this: (1) does this apply for item sales records where I would expect an excess of ".99" to come up since stores love this price break. (2) If the list is the sort where that analysis applies, how should you fake it from a list of random numbers wherein each digit is equally likely - no I'm not planning anything fraudulent. -- SGBailey (talk) 22:10, 26 April 2009 (UTC)
- Benford's law is about the leading digit. I haven't seen statistics about prices but I would guess it partially applies there. Selection of prices just below a round number could very well cause a deviation from it. PrimeHunter (talk) 23:09, 26 April 2009 (UTC)
- For realistic distribution of the first digit, use 10**random(). Then you might want to round the result to an integer, and subtract .01 or .05. You'll also need to adjust this formula somehow for a distribution of magnitudes (number of digits). —Tamfang (talk) 23:42, 26 April 2009 (UTC)
April 27
The Greatest Integer Function
Hello, I am trying to prove that
<math>\left[\sqrt{n} + \sqrt{n+1}\right] = \left[\sqrt{4n+1}\right]</math>
is true for all positive integers n, where the square brackets represent the greatest integer function. I reasoned that if I can show that
<math>m \le \sqrt{n} + \sqrt{n+1} < m+1 \quad\text{and}\quad m \le \sqrt{4n+1} < m+1</math>
for an integer m, then I am done because that is a definition of the floor function. So in order to prove this, I have shown that the difference between these functions is always between zero and one. Furthermore, two of the inequalities are easy to show but the other two are hard. Any ideas?--68.121.32.160 (talk) 03:08, 27 April 2009 (UTC)
- If the claim is not true, there is some integer m and some value a between 1 and 2 such that
- <math>m = \sqrt{4n+a}.</math>
- The solution to this equation is
- <math>n = \frac{m^2 - a}{4}.</math>
- For a in (1,2) the fractional part of the right side lies in
- <math>\left(\tfrac{1}{2}, \tfrac{3}{4}\right) \text{ and } \left(\tfrac{3}{4}, 1\right)</math>
- for even and odd m, respectively, so it can't be an integer. McKay (talk) 04:12, 27 April 2009 (UTC)
I understand everything perfectly except for the fractional part. How did you arrive at those bounds for the fractional part? Why does it matter if the integer is odd or even and how did you get those intervals? Thanks!68.126.127.36 (talk) 07:51, 28 April 2009 (UTC)
- If m is even, say m=2k, then
- <math>n = \frac{(2k)^2 - a}{4} = k^2 - \frac{a}{4}.</math>
- Now you can check that for a in (1,2), that value is always strictly between k^2-1 and k^2.
- If m is odd, say m=2k+1, then
- <math>n = \frac{(2k+1)^2 - a}{4} = k^2 + k + \frac{1-a}{4},</math>
- which is strictly between k^2+k-1 and k^2+k. McKay (talk) 08:20, 28 April 2009 (UTC)
Why is "dense-in-itself" a useful notion?
After committing the embarrassing rookie's mistake of linking the phrase "dense in itself" in the sentence "a nowhere dense set is always dense in itself" to dense-in-itself I got myself thinking: why is the notion of a set being dense-in-itself at all useful? Are there any interesting non-trivial properties of topological spaces without isolated points? The topology books I know may define the notion but only to never mention it again. — Tobias Bergemann (talk) 07:06, 27 April 2009 (UTC)
- Perfect sets are, by definition, closed dense-in-itself sets, and they appear in various contexts, see e.g. Cantor–Bendixson theorem. — Emil J. 10:27, 27 April 2009 (UTC)
- A complete metric space with no isolated points essentially has a subspace that is the continuous injective image of the Cantor space. A similar result that (possibly) applies to a larger class of spaces asserts that any locally compact Hausdorff space without isolated points has cardinality at least that of the continuum. The proofs of these facts are relatively simple if you were to attempt the proofs yourself. The idea embedded within the previous assertions is that spaces with no isolated points, having certain properties, must essentially be "large". --PST 10:39, 27 April 2009 (UTC)
- Thank you both for your answers. I really should have thought of perfect sets and the Cantor-Bendixson theorem myself. I just couldn't think of anything interesting to say about a topological space about which it only is known that it has no isolated points and nothing else (a perfect space). — Tobias Bergemann (talk) 12:29, 27 April 2009 (UTC)
Proving or disproving a homeomorphism with [0,1]
Hi there guys - I was wondering about how to go about showing that <math>[0,1]</math> is or is not homeomorphic to <math>\{0,1\}^{\mathbb{N}}</math>? I don't want you to tell me how to do it, but what would you suggest to get started? I know the 2 sets have the same cardinality so that won't rule the possibility of bijection out, but I'm less certain about continuity - could anyone suggest anything to get me going? I imagine if they aren't homeomorphic I'll simply want to find a topological property they don't share, but I'm not sure where to start looking or whether that's even the case...
Also, I'm trying to find a homeomorphism between and - is the function continuous in the topological sense between these 2 sets?
I hope I'm not asking too much - Thanks a lot! Spamalert101 (talk) 08:39, 27 April 2009 (UTC)
- (On a formatting note, how do I get my 2 sets to display in the same sized font?) Spamalert101 (talk) 08:40, 27 April 2009 (UTC)
- To begin my response, let me stress that much of topology is intuitive. I do not think that it is worth it to worry too much about proving whether two spaces are homeomorphic or not if you absolutely see it intuitively - an exception being when the equivalence of two spaces may be of crucial importance in a theorem. If you have first learnt the concept however, it is nice to construct a few homeomorphisms.
- To expand on my previous point is equivalent to solving the first problem. Initially, the idea is recall the connection between the factors of a product space (assuming the product topology - not that it matters in this case, even if you were to choose the box topology) and the product itself. Often one can say a lot about the product given information about its factors; this assertion lies within the continuous projection maps onto the factors. Therefore, it is necessary to find a property that is preserved under continuous maps, and that is shared by [0,1] but not by a finite discrete space.
- Secondly, to check the continuity of the map given, is equivalent to checking continuity of its restriction to each "piece". This is because, essentially, the two pieces are "far from each other" (or more precisely, their closures are disjoint), and since continuity is "points close together get mapped to points close together", we only need to consider the map defined on each piece separately. Doing so is simply basic calculus.
- Hope this helps. Let me add that it is nice to have a question on topology, once in a while! Rarely are questions on fields outside calculus, asked. --PST 10:24, 27 April 2009 (UTC)
- To the first question: [0,1] is connected, whereas is disconnected (indeed, totally disconnected), hence they are not homeomorphic.
- To the second question: yes, your function is continuous, and in fact a homeomorphism. However, you are making it unnecessarily complicated: works just as well. — Emil J. 10:19, 27 April 2009 (UTC)
- Let me note, Emil J., that the OP requested specifically that only a hint be given (to get him/her started) rather than the answer. --PST 10:25, 27 April 2009 (UTC)
- Don't worry, having the answers will be useful to check my own suggestions against; having (oddly) read upwards from the bottom of the post I managed to avoid the given solutions themselves whilst reading, but will certainly come back to them after attempting the rest of the problem - Thank you both for the help, and I'll be sure to bring a couple more topology questions your way in the future! ;) Spamalert101 (talk) 11:26, 27 April 2009 (UTC)
Can every unit algebraic number be expressed as a root of unity?
What I mean by "unit algebraic number" is an algebraic number which has an absolute value of 1. Root of unity, of course, means a solution to Z^n - 1 = 0 for some positive integer n.
I believe this is equivalent (in light of the fundamental theorem of algebra and closure of algebraic numbers under multiplication) to saying that every polynomial with rational coefficients can be expressed, by multiplying it by some other polynomial and then factoring, as a product of polynomials of the form (aZ)^n - 1 for some quadratic a and positive integer n, in addition to some polynomial of the form Z^n for positive integer n, for the zero roots. Of course, it doesn't matter whether a is a coefficient of Z^n or directly with Z, but in the latter case, it has the more intuitive meaning of being the reciprocal of the magnitude of Z.
Is that statement true for polynomials with any complex coefficients, letting a be any real number?
All responses appreciated. --COVIZAPIBETEFOKY (talk) 12:47, 27 April 2009 (UTC)
- I think the answer to your first question is "no". Consider the ring of algebraic integers in Q(sqrt(2)). Then 3+2sqrt(2) is a unit in this ring, because its minimal polynomial is x^2 - 6x + 1 (its associate is 3-2sqrt(2)). But 3+2sqrt(2) is clearly not a root of unity - all its positive integer powers are greater than 1. Gandalf61 (talk) 13:17, 27 April 2009 (UTC)
- Note that the OP uses nonstandard terminology. The absolute value of 3+2sqrt(2) is not 1, so it is not a "unit number" the way he defined it. — Emil J. 13:21, 27 April 2009 (UTC)
- Nevertheless, the answer is still "no". The algebraic number (3 + 4i)/5 has absolute value 1, but it is not a root of unity, as its minimal polynomial is 5x2 − 6x + 5. — Emil J. 13:34, 27 April 2009 (UTC)
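Emil J.'s counterexample is easy to check numerically. A quick sketch in plain Python (built-in complex arithmetic only; the power check is a sanity check, not a proof):

```python
# z = (3 + 4i)/5 lies on the unit circle and is a root of the non-monic
# primitive polynomial 5x^2 - 6x + 5 quoted above.
z = complex(3, 4) / 5

assert abs(abs(z) - 1.0) < 1e-12          # |z| = 1
assert abs(5 * z**2 - 6 * z + 5) < 1e-12  # 5z^2 - 6z + 5 = 0

# A root of unity would satisfy z^n = 1 for some n; the first few powers
# never come close to 1 (consistent with z not being a root of unity).
powers = [z**n for n in range(1, 21)]
assert all(abs(w - 1) > 0.1 for w in powers)
```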
- This result gives a large number of counterexamples, namely any a+bi where (a/b)^2 is rational and not in the set {0, 1/3, 1, 3}. -- BenRG (talk) 14:00, 27 April 2009 (UTC)
- A simple example is u := 2+i. It is easy to see (by induction) that for all even natural numbers n, 5 divides u^n + u (i.e. it divides both the real and the imaginary part). Therefore no positive integer power of u is a real number. Hence u/|u| is an algebraic number of modulus 1, not a root of unity. --pma (talk) 14:32, 27 April 2009 (UTC)
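pma's divisibility claim can be machine-checked for small even n with exact Gaussian-integer arithmetic. A sketch using plain Python tuples (the helper names are mine):

```python
# Verify: with u = 2 + i, for even n both the real and the imaginary part
# of u^n + u are divisible by 5 (checked exactly for n = 2, 4, ..., 40).
def gmul(p, q):
    # exact product of Gaussian integers (a + bi)(c + di)
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def gpow(p, n):
    r = (1, 0)
    for _ in range(n):
        r = gmul(r, p)
    return r

u = (2, 1)  # 2 + i
for n in range(2, 41, 2):
    a, b = gpow(u, n)
    assert (a + u[0]) % 5 == 0 and (b + u[1]) % 5 == 0
```

For instance u^2 + u = (3 + 4i) + (2 + i) = 5 + 5i, matching the claim.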
- I'm not sure I understand how you can use the minimal polynomial to predict whether a number will be a root of unity; after all, the minimal polynomial of -1/2+i√(3)/2 is X^2 + X + 1, but it is a 3rd root of unity.
- However, I do have an understanding of why (3+4i)/5 wouldn't be a root of unity, unrelated to its minimal polynomial: because its angle (roughly 0.927295 radians or 53.130102 degrees) is an irrational multiple of 2π, adding the angle to itself many times will never give a multiple of 2π. Thus, multiplying the number by itself will never yield 1. This sheds some light on BenRG's set of counterexamples. I also might understand pma's explanation if I mull it over a bit.
- Also, apologies for using "nonstandard terminology". I haven't taken an actual class on the material, and I thought I could get away with it if I actually explained what I meant thoroughly in the text. But who reads the actual text? Silly me...
- Thanks for the help. I suppose I should have been able to figure this out by myself, but I was thinking about it in terms of polynomials (the latter half of my question) rather than the numbers, which seems to have muddied things up a bit. Any pointers to getting a better understanding of the behavior of the polynomial side of the question? --COVIZAPIBETEFOKY (talk) 15:00, 27 April 2009 (UTC)
- Well, yes, the angle atan(4/3) is an irrational multiple of 2π, but how do you prove that? It's no easier than showing that (3 + 4i)/5 is not a root of unity in the first place.
- As for minimal polynomials: sorry I wasn't more clear on this point. Roots of unity are algebraic integers, hence their primitive minimal polynomials are monic (or equivalently, their monic minimal polynomials have integer coefficients). 5x^2 − 6x + 5 is a primitive irreducible polynomial and it is not monic, hence its roots are not roots of unity. — Emil J. 15:13, 27 April 2009 (UTC)
Disjoint balls in R^n
Hi there, since you enjoyed the last topology question so much I figured I might send another one or two your way! I'm revising it right now so you might end up getting a good few if you don't mind lending me a little more help!
I've shown that there do not exist 2 disjoint closed balls of radius 1 inside a closed ball of radius 2 in Euclidean space, but I'm now trying to find how many disjoint closed unit balls fit inside balls of radii 3.001 and 2.001 - the first is apparently for some k>0, but how do I go about beginning to prove it? Thanks very much again for the help, and if I'm asking too much just say!
Spamalert101 (talk) 14:54, 27 April 2009 (UTC)
- So you want to pack k unit n-dimensional Euclidean balls into a ball of radius r. I took the liberty of using open unit balls instead of closed, since the problem remains essentially the same, and things are easier to describe. Now, although these problems are generally difficult, at least in this case the situation is quite simple: take the balls pairwise tangent, that is, put their centers at a distance 2 from each other. A small computation gives the radius r(k) of the minimal ball containing them. Also, it is not immediate, but not hard, to prove that this is actually the least r such that there are k disjoint unit open balls inside a ball of radius r: in other words, the minimizing configuration for the balls is the one above, where they are pairwise tangent. In particular, in any dimension at least 2, there are three disjoint open unit balls inside a ball of radius r iff r ≥ r(3). Another consequence is that if r < 1+√2 the number of unit balls inside is bounded independently from the dimension. On the other hand, as soon as r > 1+√2, in dimension n there are at least n+1 unit balls inside, so the number of balls is unbounded as dimension increases, and very difficult to count exactly. I guess that the maximum number of unit balls is obtained with one ball with the same center as the large ball and all the others tangent to this one (as far as I see, it could be trivially true or trivially false, or an open problem). If so, the max number of unit balls should be 1 plus the kissing number, still a topic of current research. PS: Your question has the following nice Hilbert space version: there are infinitely many disjoint unit open balls inside the ball of radius 1+√2 of an infinite dimensional Hilbert space (just take them centered at √2·e_i, where (e_i) is an orthonormal basis). But if you take the radius any smaller, r < 1+√2, then only finitely many disjoint unit open balls can be located in a ball of radius r. Life in Hilbert space is curious... --pma (talk) 21:47, 27 April 2009 (UTC)
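The tangent configuration above can be made concrete: k unit balls with centers at mutual distance 2 sit at the vertices of a regular simplex with edge 2, whose circumradius is sqrt(2(k−1)/k), so the smallest enclosing ball has radius 1 + sqrt(2(k−1)/k). A sketch assuming that formula (which follows from the standard circumradius of a regular simplex):

```python
import math

def enclosing_radius(k):
    """Radius of the smallest ball containing k pairwise tangent unit
    balls (centers at the vertices of a regular simplex with edge 2)."""
    return 1 + math.sqrt(2 * (k - 1) / k)

# Two unit balls need exactly radius 2; three need 1 + 2/sqrt(3) ≈ 2.1547,
# which is why no three fit in a ball of radius 2.001.
assert abs(enclosing_radius(2) - 2.0) < 1e-12
assert enclosing_radius(3) > 2.001

# As k grows the radius tends to 1 + sqrt(2) ≈ 2.414 < 3.001, consistent
# with the remark that a radius above 1 + sqrt(2) admits arbitrarily many
# unit balls once the dimension is large enough.
assert enclosing_radius(10**6) < 1 + math.sqrt(2)
```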
April 28
Łukasiewicz notation for propositional functions
Jan Łukasiewicz used C to denote implication, K to denote conjunction, A for disjunction, and E for logical equivalence, as noted at Polish_notation#Polish_notation_for_logic. Why these letters? Are they the initial letters of some relevant Polish words? If so, what words? —Dominus (talk) 04:15, 28 April 2009 (UTC)
- Hmm, I always had a vague impression that the letters were based on Latin, but Polish actually makes more sense now that you mention it. Koniunkcja, alternatywa, ekwiwalencja (though równoważność appears to be the more common name), negacja, możliwość and dysjunkcja (which, strangely enough, does not mean disjunction in Polish, but Sheffer stroke) are transparent. I do not understand the source of C for implication (implikacja) and L for necessity (konieczność). — Emil J. 11:47, 28 April 2009 (UTC)
- I suppose C might have come from czyni (makes), but I can't figure out the source for L, either. --CiaPan (talk) 14:56, 28 April 2009 (UTC)
- Thanks. The article is probably wrong when it says that Łukasiewicz originated the use of L and M for modal operators. It is certainly wrong when it says that he originated the use of Σ and Π for quantifiers. —Dominus (talk) 15:04, 28 April 2009 (UTC)
- Well, the article does not actually claim that Łukasiewicz originated all the notation, so it is not wrong. You may be right that L and M may come from a different source. But then the question remains, what is the source and what does it mean. According to modal logic#Axiomatic Systems, the usual □ and ◇ notation was already used by the founder of modern modal logic, C. I. Lewis. J. J. Zeman confirms it in the case of ◇, but he notes that □ is a later addition. Either way, M and L must have been introduced when ◇ was already in use, which seems to suggest that they indeed originated in the context of the Polish prefix notation, even though nowadays they are also used in infix notation. — Emil J. 15:42, 28 April 2009 (UTC)
Easy probability question
- Event occurs: probability 1
- Result 1 occurs: probability 1/x
- Result 2 occurs: probability (x-1)/x
What are the chances of result 1 happening if the event occurs x times? Vimescarrot (talk) 15:51, 28 April 2009 (UTC)
- Sounds a bit like homework. Assuming the trials are independent, you can calculate the probability that only event 2 ever occurs using the multiplication rule, from which you can derive the result you want. You may also have a look at e to turn it into a neatly-looking approximation for large x. — Emil J. 16:08, 28 April 2009 (UTC)
- And have a look at binomial distribution in case the number of occurrences of Result 1 is of interest. 81.132.236.12 (talk) 16:14, 28 April 2009 (UTC)
- I realised I forgot to specify "once or more", but never mind. It's not homework, it's just that... well, this applies in computer games a lot (if the chances of this monster dropping this item are 1/100, what are the chances of getting it after killing it 100 times?). Anyways, thanks very much for the help. Vimescarrot (talk) 18:03, 28 April 2009 (UTC)
What event occurs x times? Could it be that you meant that there are x trials, and on each trial the probability of success is 1/x? It gets confusing when you don't use terminology in a standard way. And why do you mention "Result 2" if it has nothing to do with your question? Michael Hardy (talk) 19:33, 28 April 2009 (UTC)
- I think the question was quite clear. An event occurs and can have one of two outcomes: the probability of outcome 1 is 1/x, the probability of outcome 2 is 1 − 1/x (= (x−1)/x). What is the probability of outcome 1 occurring at least once in x (independent) trials? The question has been answered by EmilJ, and the OP seems to be happy, so another question successfully resolved! --Tango (talk) 19:45, 28 April 2009 (UTC)
- ...OK, further guesses: What you meant was that "Result 2" was the complement of Result 1, i.e. to say that "Result 2" happens just means "Result 1" doesn't happen. Really, it wouldn't have hurt to say so, but even better would have been not to mention Result 2 at all. At any rate, if my guesses are right then the probability that "Result 1" never occurs in x trials is (1 − 1/x)^x. That number approaches 1/e as x grows (where e is the base of natural logarithms). So the probability that "Result 1" occurs at least once is 1 minus that. Michael Hardy (talk) 19:48, 28 April 2009 (UTC)
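The formula above is easy to check numerically; a small sketch using the monster-drop example from earlier in the thread (1/100 drop chance, 100 kills; the function name is mine):

```python
import math

def p_at_least_once(x):
    """Probability that an event with chance 1/x happens at least once
    in x independent trials: 1 - (1 - 1/x)^x."""
    return 1 - (1 - 1 / x) ** x

# With a 1/100 drop chance and 100 kills, the drop appears about 63% of
# the time, not 100%.
p = p_at_least_once(100)
assert 0.63 < p < 0.64

# As x grows, the miss probability (1 - 1/x)^x tends to 1/e.
assert abs((1 - p_at_least_once(10**7)) - math.exp(-1)) < 1e-6
```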
- I didn't use standard terminology because I don't know standard terminology. Vimescarrot (talk) 21:54, 28 April 2009 (UTC)
- Your terminology was fine, I don't know what Michael is complaining about. Perhaps he missed the fact that (x-1)/x=1-1/x, which makes it clear that one is the complement of the other? --Tango (talk) 11:09, 29 April 2009 (UTC)
The set of complex numbers is the largest possible set of numbers?
I've once seen a proof that claimed that the set of all complex numbers was the largest possible set of numbers that can be conceived of, but do not remember the details of that proof. Does anyone know if this is true, and if so, have a link to a proof? JIP | Talk 18:50, 28 April 2009 (UTC)
- Probably you are thinking of the fundamental theorem of algebra, which says that the only proper algebraic field extension of the field of real numbers is the field of complex numbers. Similar results for systems of numbers larger than the complex numbers are described in the articles Frobenius theorem (real division algebras) and Hurwitz's theorem#Hurwitz's theorem for composition algebras. JackSchmidt (talk) 19:06, 28 April 2009 (UTC)
The definition of "number" is not fully standard. Sometimes things like non-standard real numbers are considered numbers. Transfinite cardinal and ordinal numbers are called "numbers". Sometimes things like quaternions or members of finite fields are considered "numbers".
Maybe Jack Schmidt's guess as to what you remember is right. The term "fundamental theorem of algebra" is something of a misnomer. It says you don't need to extend your set of "numbers" beyond the complex numbers in order to have solutions of all algebraic (i.e. polynomial) equations. Michael Hardy (talk) 19:36, 28 April 2009 (UTC)
April 29
Evaluating very small natural logs
I am interested in the ratio of 2 probabilities, where the numerator is the probability associated with one value of x (x~), and the denominator is the sum of the probabilities of all possible values of x (Σ over x).
I apologise for not being able to show this in LaTex notation.
The problem is that the probability for every value of x is tiny and is given as a natural log, e.g. ln(P(x)) = −50000. I cannot see how to evaluate this ratio without calculating the exponential of each term and then taking the ratio. But the numbers are too small to be dealt with... can I use algebra to express the answer in terms of natural logs?
Thanks!
Ironick (talk) 10:02, 29 April 2009 (UTC)
- There is something wrong here: you can't take the log of a negative number. Are you saying that you take the log of a small number (e.g. 10^−50) and the log of that is a large negative number? But again, a negative number isn't right for a probability, which should be between 0 and 1. Please clarify -- SGBailey (talk) 10:30, 29 April 2009 (UTC)
Sorry, my mistake. Edited for clarity (I hope). The outputs of the natural log terms are around −50000, so the natural log of the true probability is −50000, making the true probability too small to deal with. Thanks. Ironick (talk) 10:38, 29 April 2009 (UTC)
- I understand you have some events, say x, y, z..., with probabilities Px, Py, Pz... respectively.
- And the probabilities are not known explicitly, instead you have their logs, ie. values: Lx=log(Px), Ly=log(Py), Lz=log(Pz)... which are 'large negative numbers'.
- And you say you are interested in ratios, say Px/Py or Px/Pz — is that right?
- If so, utilize the most important property of logarithms, that they reduce multiplication to addition:
- log( a·b ) = log( a ) + log( b )
- and consequently division to subtraction:
- log( a/b ) = log( a ) − log( b )
- Thus your ratios can be calculated by
- Lxy = log( Px/Py ) = log( Px ) − log( Py ) = Lx − Ly
- and finally
- Px/Py = exp( Lxy ) = exp( Lx − Ly )
- CiaPan (talk) 11:04, 29 April 2009 (UTC)
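CiaPan's point in code: even when Px and Py themselves underflow to zero in floating point, their ratio is perfectly computable from the logs alone. A minimal sketch (values chosen to match the magnitudes in the question):

```python
import math

# Log-probabilities of the kind described: exp() of either underflows
# to zero in double precision...
Lx, Ly = -50000.0, -50001.0
assert math.exp(Lx) == 0.0 and math.exp(Ly) == 0.0

# ...but the ratio Px/Py = exp(Lx - Ly) works fine, since Lx - Ly is a
# small number.
ratio = math.exp(Lx - Ly)
assert abs(ratio - math.e) < 1e-12
```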
- Isn't the sum of the probabilities of all possible values of x simply 1? That's part of the definition of probability. --Tango (talk) 11:07, 29 April 2009 (UTC)
- I guess you're trying to calculate log(e^x + e^y) where x and y are too small (or huge) to evaluate the exponentials directly. In that case, a good approximation is
- log(e^x + e^y) ≈ y + e^(x−y) when x ≪ y;  y + log(1 + e^(x−y)) when x ≈ y;  x + e^(y−x) when x ≫ y.
- The second case is exact, and you can calculate it directly on your floating point unit when x ≈ y. The first case is obtained from the second by using the fact that log(1 + z) ≈ z when z is small. The third is obtained by swapping the variables in the second and doing the same thing. Chances are you can ignore the exponential part of the first and third formulas and just use
- log(e^x + e^y) ≈ y + log(1 + e^(x−y)) when x ≈ y, and max(x, y) otherwise.
- -- BenRG (talk) 11:50, 29 April 2009 (UTC)
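BenRG's trick is the standard "log-sum-exp" device, and folding it over a list also gives the log of the denominator in the original question (the sum over all values of x) entirely in log space. A sketch, with function names of my own choosing:

```python
import math

def log_add_exp(x, y):
    """log(e^x + e^y), stable even when x and y are hugely negative."""
    hi, lo = max(x, y), min(x, y)
    return hi + math.log1p(math.exp(lo - hi))

def log_sum_exp(logs):
    """Log of a sum of terms given by their logs (a fold of log_add_exp)."""
    acc = logs[0]
    for L in logs[1:]:
        acc = log_add_exp(acc, L)
    return acc

# Log-likelihoods around -50000: exp() would underflow, but the log of
# the normalized ratio P(x~) / sum over x of P(x) is still computable.
logs = [-50000.0, -50001.0, -50002.5]
log_ratio = logs[0] - log_sum_exp(logs)
assert -1.0 < log_ratio < 0.0   # the ratio itself is exp(log_ratio)
```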
@CiaPan. Thanks, but unfortunately Lx-Ly is not computable, as too small.
@Tango. Yes, but I am dealing with the probability of data given a set of parameters. Likelihood is a better word. So they won't add to 1.
@BenRG. Thanks, that helps a lot, I think that's the solution.