Wikipedia:Reference desk/Mathematics

Welcome to the mathematics section of the Wikipedia reference desk.

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


July 16

What do you call it in statistics when you correct the data?

I mean it in a good way: for example, social group A has a higher number of criminals than social group B, but is also less affluent. When you put the data in this perspective, you discover that both would be equally criminal if they had the same economic status. OsmanRF34 (talk) 00:45, 16 July 2012 (UTC)[reply]

I think you could use "adjusted for..." or "corrected for...", followed by the factor in question (e.g. "adjusted for social group"). (Of course, you need a legitimate method of "adjustment", otherwise you could just make up any old data, heaven forbid...) 86.148.153.100 (talk) 01:34, 16 July 2012 (UTC)[reply]
Maybe also see regression analysis. Rckrone (talk) 05:10, 16 July 2012 (UTC)[reply]
The term I've heard statisticians use (not being one myself) is "controlling for ...". — Quondum 06:40, 16 July 2012 (UTC)[reply]
I think statisticians would not so much "correct" such data as analyse it in such a way as to estimate the sizes of the various effects involved. The techniques of Analysis of variance and Analysis of covariance might be appropriate. Thincat (talk) 10:55, 16 July 2012 (UTC)[reply]
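For a concrete (if cartoonish) illustration of "adjusting for" a confounder by regression, here is a minimal sketch in Python with made-up data: the two groups behave identically once income is taken into account, so the raw group difference is large but the income-adjusted group coefficient is near zero. All names and numbers below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Made-up data: "crime" depends only on income, but group 0 is poorer on
    # average, so the raw group means differ even though the groups behave alike.
    group = rng.integers(0, 2, n)                    # 0 or 1
    income = 20 + 10 * group + rng.normal(0, 5, n)
    crime = 50 - 1.5 * income + rng.normal(0, 5, n)

    raw_diff = crime[group == 1].mean() - crime[group == 0].mean()
    print("raw group difference:", raw_diff)          # about -15, driven by income

    # "Adjusted for income": regress crime on an intercept, the group indicator
    # and income; the group coefficient is the difference *controlling for* income.
    X = np.column_stack([np.ones(n), group, income])
    coef, *_ = np.linalg.lstsq(X, crime, rcond=None)
    print("income-adjusted group difference:", coef[1])   # close to 0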

"n-body problem" singularities

I was reading the n-body problem article, and in the "Singularities" section it talks of "singularities in which a collapse does not occur, but q(t) does not remain finite". Firstly, does "collapse" mean "collision"? Secondly, does q(t) refer to the position of the particles? Thirdly, if it does, how does the position of the particles ever become infinite? I can see how position could grow without limit if some of the bodies have sufficient velocity to escape the system, but that wouldn't be a singularity, would it? I really don't understand what this is referring to. Can anyone explain? — Preceding unsigned comment added by 86.148.153.100 (talk) 01:42, 16 July 2012 (UTC)[reply]

This paper is instructive. As far as I can see the answers are:
  • Yes, a collapse is a collision, having 2 bodies at the same place at the same time.
  • Yes, $q(t)$ is the position.
  • To be a singularity the position (or velocity) has to become infinite in finite time. I don't really understand it either but apparently this is known to be possible with 5 bodies, and the paper attempts to build an example with 4 bodies.
-- Meni Rosenfeld (talk) 04:20, 16 July 2012 (UTC)[reply]
It may be worth noting that the potential energy of gravitation in the Newtonian model is unlimited as two bodies approach each other, e.g. (intuitively from conservation of energy) potentially allowing infinite velocity via a slingshot effect. — Quondum 07:14, 16 July 2012 (UTC)[reply]
But infinite velocity would require a collision (of point masses), wouldn't it, which the scenario specifically excludes? I do not see how any finite separation could possibly impart infinite velocity. 86.176.211.101 (talk) 11:38, 16 July 2012 (UTC)[reply]
Ok, I think I understand where the magic happens, though maybe it's more of a technicality. It's only considered a collision if it happens at some specific point in space. In the construction in the paper, there are two bodies whose distance converges to 0 (thus unbounded kinetic energy), but they both escape to infinity, so there is no point at which they collide. -- Meni Rosenfeld (talk) 12:00, 16 July 2012 (UTC)[reply]
Escape to infinity in finite time, colliding "at infinity"? 86.176.211.101 (talk) 12:04, 16 July 2012 (UTC)[reply]
Exactly. -- Meni Rosenfeld (talk) 12:06, 16 July 2012 (UTC)[reply]
Cool, thanks! 86.176.211.101 (talk) 12:18, 16 July 2012 (UTC)[reply]
This is somewhat perplexing (but technically I agree) – a "collision at infinity" not being called a collision is such a fine technical point as to be uninteresting. Redefine a collision as the separation of two bodies diminishing to zero in finite time, and this "collision at infinity" would still be called a collision. A far more interesting case would be two objects merging in finite time with zero rate of reduction of separation. — Quondum 19:20, 16 July 2012 (UTC)[reply]
I suppose it is a moot point, but if you can't point to an actual place where a collision occurs then maybe it's fair to say that it doesn't exist. However, notwithstanding this, it seems to me that this is an interesting and distinct case simply because the bodies can escape to infinity within a finite time (and without the system having already blown up because of a collision). Incidentally, if I may ask another question, I notice that in the paper Meni linked to (which is mostly beyond my ability), it says:
"Meanwhile, in 1974, Mather and McGehee showed that if the solution is allowed to be continued through an infinite number of binary collisions, then there exist noncollision singularities with four bodies on the line."
Does this imply that there is some mathematically feasible way of continuing the process through a collision? I thought that at a collision everything just blew up to infinity and it was game over as far as the maths was concerned. 86.176.211.101 (talk) 20:57, 16 July 2012 (UTC)[reply]
Take the case of 2 bodies; for simplicity one of them has infinitesimal mass so the other doesn't move. The motion of the moving particle can be given by $x(t) = (t_0 - t)^{2/3}$. What happens after the singularity at $t = t_0$ is underspecified, but it's logical to extend it smoothly as $x(t) = |t - t_0|^{2/3}$, so the body continues in its path and catapults past the large body in motion that mirrors its approach. The only ways that energy and momentum are conserved are either this or that the velocities are reversed as in a "normal" elastic collision of two bodies. The same can be done with more complicated scenarios. -- Meni Rosenfeld (talk) 14:18, 17 July 2012 (UTC)[reply]
I see. Thanks for your help. 86.129.16.198 (talk) 22:46, 17 July 2012 (UTC)[reply]
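To make the finite-time blow-up concrete in the very simplest setting (the two-body case discussed above), here is a small numerical sketch in Python; the units (GM = 1, starting radius 1, starting at rest) are arbitrary choices for illustration. The particle reaches the centre at the finite time t0 = π/(2√2), its speed grows without bound as it gets there, and near the collision the separation follows the same (t0 − t)^(2/3) law used in the extension above.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Radial free fall onto a fixed point mass (GM = 1): x'' = -1/x**2, x(0)=1, x'(0)=0.
    def rhs(t, y):
        x, v = y
        return [v, -1.0 / x**2]

    stop = lambda t, y: y[0] - 1e-3        # stop shortly before the collision singularity
    stop.terminal = True

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], events=stop, rtol=1e-10, atol=1e-12)
    t_end = sol.t[-1]
    x_end, v_end = sol.y[:, -1]

    t0 = np.pi / (2 * np.sqrt(2))          # exact collision time for these initial conditions
    print("collision time t0 =", t0, "; integration stopped at t =", t_end)
    print("speed just before collision:", abs(v_end))   # ~sqrt(2/x), unbounded as x -> 0
    # Near the collision, x(t) is approximately (9/2)**(1/3) * (t0 - t)**(2/3):
    print(x_end / (t0 - t_end) ** (2 / 3), "vs", (9 / 2) ** (1 / 3))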

(simple) Example of design with non-fixed intersection numbers?

Hello,

I am considering block designs and I had this question: given a fixed block, can I compute how many elements of the design will intersect it in x points? Does it have to be independent of the block I started with? I know that the answer is "yes" for Steiner systems and for symmetric designs but I suspect it isn't true in general. But then the design would have no group acting transitively on its blocks, so it can't be a very well known design. Are there simple counterexamples?

Many thanks,Evilbu (talk) 19:34, 16 July 2012 (UTC)[reply]

An N by M block intersects at N*M points, or about (((N-1)*M)/2)+N points in the case of triangular blocks. 71.212.249.178 (talk) 22:16, 16 July 2012 (UTC)[reply]
I'm sorry, what do you mean by N by M block? (I am assuming that the blocks in the design do have the same size)Evilbu (talk) 07:54, 17 July 2012 (UTC)[reply]
What's the application? 75.166.200.250 (talk) 03:20, 18 July 2012 (UTC)[reply]
Well, N^2 for square blocks. 75.166.200.250 (talk) 23:06, 18 July 2012 (UTC)[reply]


July 17

Power in the Zero Channel

If I have an arbitrary signal and I want to estimate its power spectral density, what is the best way to "detrend" the data so that there is no power in the zero channel? The problem is that the signal I am working with has a very large DC offset, so when I get a PSD estimate as it is, there is too much leakage into the neighboring frequencies and I get far more power than there should be. It just dominates everything and is completely useless. So everyone keeps telling me to "detrend", but what does that mean? Should I just subtract the constant arithmetic average of the signal from the signal? I tried that but it doesn't seem to work. Should I fit a straight line to the signal using a linear least squares fit and then subtract that? That reduces the power in the zero channel but doesn't really make it as small as I think it should be. Maybe subtracting the line is good enough because the power becomes relatively small compared with other channels, but it still seems a lot to me in absolute magnitude.

The signal kind of looks like a decaying exponential with tiny oscillations around the "middle". So next I thought of just fitting an exponential and then subtracting that, but would that take away power from other channels? The signal is only about a hundred measurements and I am using the multitaper method with 7 Slepian functions from MATLAB's built-in pmtm function, with nfft being the length of the signal itself (not padding up to the next power of two). Using more Slepian functions seems to decrease the DC (although there seems to be a lower bound and I can't get it below that) but it takes forever. If increasing the signal length makes a difference I can do that. But I can't change the resolution of the signal. Any ideas/suggestions/insights would be helpful. Thanks! 174.56.103.148 (talk) 07:15, 17 July 2012 (UTC)[reply]

It sure would be helpful if you could provide a pic of the graph, as it is now, and another of how you want it to look after processing. A picture is worth a thousand words and all. If you have pics and need help displaying them here, just let us know. StuRat (talk) 07:17, 17 July 2012 (UTC)[reply]
Take the discrete Fourier transform, $Y_k = \sum_j \overline{X_j}\,1^{jk/N}/\sqrt N$ (where $\overline{X_j}$ is the complex conjugate of $X_j$, $j = 0,\ldots,N-1$, $k = 0,\ldots,N-1$, and $1^x$ is shorthand for $e^{2\pi\sqrt{-1}\,x}$). Detrend by setting $Y_0 = 0$. Transform back: $X_k = \sum_j \overline{Y_j}\,1^{jk/N}/\sqrt N$. Bo Jacoby (talk) 16:14, 17 July 2012 (UTC).[reply]
I agree with Bo Jacoby. The multitaper method is not appropriate for this sort of signal. You should use a Fourier transform instead -- it's actually a lot simpler, and the Matlab signal toolbox supplies it to you pretty straightforwardly. With a Fourier transform, if you subtract the DC offset, you are guaranteed not to have any power in the zero band. Given the nature of your signal, it's not likely that any sort of frequency analysis will be very informative, but at least a Fourier transform will allow you to do what you say you want to do. Looie496 (talk) 18:12, 17 July 2012 (UTC)[reply]

OP here. Bo, your idea is pretty cool. I don't know why I didn't think of it. So I can just take the FFT of the original signal straight up, then set the first element to zero and then take the IFFT? Then on this new signal, can I use the multitaper method to estimate the PSD? The reason I am using the multitaper method is because I thought multitaper was a bit more "modern" and "sophisticated" than good old-fashioned basic FFT and/or Welch-like windows. I just need to get the PSD, and if you guys think something besides MTM is more appropriate, let me know! 65.100.24.211 (talk) 19:59, 17 July 2012 (UTC)[reply]

A good principle of data analysis is to always use the least sophisticated tool that does the job. Getting the PSD from the FFT is a very simple operation and does not require an IFFT. I don't have access to Matlab presently but I'm sure it gives you a simple function to do this. Looie496 (talk) 20:19, 17 July 2012 (UTC)[reply]

If your data points by nature cannot be negative (as the signal looking like an exponential decay indicates) then you should take the logarithm first. If you take the complex conjugate and divide by the square root of the number of data points, then the backward FFT is exactly the same as the forward FFT, which is nice. If your data set is symmetric, then the Fourier transform is real, which is nice. The n-sized data set $(x_0,\ldots,x_{n-1})$ is extended to the symmetric $(2n-2)$-sized data set $(x_0,\ldots,x_{n-1},x_{n-2},\ldots,x_1)$ and the transformed data set $(y_0,\ldots,y_{n-1},y_{n-2},\ldots,y_1)$ is truncated to $(y_0,\ldots,y_{n-1})$. The element $y_0$ is the DC amplitude and the element $y_{n-1}$ is the VHF amplitude. (PS. What is PSD?). Bo Jacoby (talk) 03:26, 18 July 2012 (UTC).[reply]

PSD is Power spectral density. I think all that is too complicated -- computing a power spectrum is everyday stuff, and the Matlab signal processing toolkit makes it easy. Looie496 (talk) 03:44, 18 July 2012 (UTC)[reply]
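For what it's worth, here is a minimal sketch (Python/NumPy/SciPy, with synthetic data standing in for the described signal) of the plain-FFT route suggested above: removing the mean zeroes the DC bin exactly, but with a decaying trend you also need to fit and subtract the trend itself before the small oscillation shows up cleanly. The sampling rate, record length and signal shape are all invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    fs = 100.0                                   # made-up sampling rate
    t = np.arange(200) / fs
    # Synthetic stand-in: large offset + decaying exponential + a small 12 Hz oscillation.
    x = 5.0 + 2.0 * np.exp(-t / 0.5) + 0.05 * np.sin(2 * np.pi * 12.0 * t)

    def periodogram(y):
        Y = np.fft.rfft(y)
        return np.fft.rfftfreq(len(y), 1 / fs), np.abs(Y) ** 2 / (fs * len(y))

    # Subtracting the mean zeroes the DC channel, but the decaying trend still
    # dominates the low-frequency channels:
    f, p = periodogram(x - x.mean())
    print("DC power:", p[0], "  strongest nonzero channel:", f[1 + np.argmax(p[1:])], "Hz")

    # Fit and subtract the trend itself (offset + exponential); the 12 Hz
    # component is then what remains:
    trend = lambda t, a, b, tau: a + b * np.exp(-t / tau)
    popt, _ = curve_fit(trend, t, x, p0=(5.0, 2.0, 0.5))
    f, p = periodogram(x - trend(t, *popt))
    print("DC power:", p[0], "  strongest nonzero channel:", f[1 + np.argmax(p[1:])], "Hz")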

two problems with tangent circles

I was solving this [note: this is not homework], but there are a few questions by which I am completely puzzled. Can anyone explain to me how to solve Q67 and Q39 on this page? Thanks, extra999 (talk) 11:26, 17 July 2012 (UTC)[reply]

Q39. Let the diameter of the big semicircle be AB = 4. The radius of the small circle is x. Pythagoras on triangle OPS says $1^2 + (2-x)^2 = (1+x)^2$. Now you can finish yourself. Bo Jacoby (talk) 12:22, 17 July 2012 (UTC).[reply]
Q67. Let the unknown radius of the little circle be x. So the ratio between the small radius and the big radius is q = x/2. So x = 2q. The distance from the centre of the big circle to the corner includes an infinite sum of diameters of decreasingly small circles: $2\sqrt2 = 2 + 2x + 2xq + 2xq^2 + \cdots = 2 + 4q + 4q^2 + 4q^3 + \cdots$. So $(1+\sqrt2)/2 = 1 + q + q^2 + q^3 + \cdots = 1/(1-q) = 2/(2-x)$. So $x = 2 - 4/(1+\sqrt2) = 2 - 4(\sqrt2 - 1) = 6 - 4\sqrt2$. Bo Jacoby (talk) 12:22, 17 July 2012 (UTC).[reply]
67: Because the two circles are externally tangent (one does not contain the other), the distance between their centers is obviously the sum of the radii. Now consider the bounding lines as coordinate axes, find the Cartesian coordinates of the centers, and use the coordinates to find the Pythagorean distance between them. With a tiny bit of algebraic massage, you have a quadratic equation. Notice, by the way, the geometric mean of its two solutions. —Tamfang (talk) 20:28, 17 July 2012 (UTC)[reply]
I can not see a quadratic equation here. Draw two slope radii of the circles and a vertical radius of the small one. Imagine some squares and find out that $R\sqrt2 = R + r + r\sqrt2$, which implies $r = R\,\frac{\sqrt2 - 1}{\sqrt2 + 1}$. Multiply both numerator and denominator by $\sqrt2 - 1$, expand, plug in $R = 2$, reduce, done. --CiaPan (talk) 10:21, 18 July 2012 (UTC)[reply]
My quadratic equation is $r^2 - 12r + 4 = 0$. —Tamfang (talk) 04:48, 20 July 2012 (UTC)[reply]
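A quick numerical sanity check of the answers above (Python; coordinates as in Tamfang's setup for Q67, with the bounding lines as axes, so the big circle of radius 2 sits at (2, 2) and a circle of radius r in the corner sits at (r, r)):

    import math

    # Q67: r = 6 - 4*sqrt(2) should make the corner circle externally tangent to
    # the big circle, i.e. the distance between centres equals the sum of the radii.
    R = 2.0
    r = 6 - 4 * math.sqrt(2)
    dist = math.hypot(R - r, R - r)
    print(dist, R + r, math.isclose(dist, R + r))        # True

    # Q39: Bo's equation 1^2 + (2 - x)^2 = (1 + x)^2 is solved by x = 2/3.
    x = 2 / 3
    print(math.isclose(1 + (2 - x) ** 2, (1 + x) ** 2))  # True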
I hope it's not a problem that I changed the title of this section to something more meaningful than "Help". —Tamfang (talk) 20:11, 17 July 2012 (UTC)[reply]

A weird kind of 20 sided dice...

Suppose I make one of these out of three identical, thin, intersecting, "golden-ratio" rectangles. I want to be able to roll it like one of those icosahedral 20-sided dice that Dungeons & Dragons players are so fond of...except that the 'facets' of the icosahedron are implied rather than physically existing.

The question is: Is it a "fair" dice? Will there be an equal probability of it coming up with any of the 20 "invisible facets" uppermost? (Presuming the three rectangles have non-zero mass of course).

I can see that there are two 'classes' of triangular face:

  • Eight faces have their three vertices taken one from each of the three different rectangles - like the face that's foremost in the diagram (let's call these "Class A faces").
  • Twelve faces have two vertices from one rectangle and one from another ("Class B faces") which connect between the class A faces.

Since there is symmetry here, all class A faces are equally probable - and so are all class B faces - so I presume that the problem boils down to whether there is some preferential weighting of Class A faces compared to Class B.

Bonus question: If it's not "fair", then how bad is the unfairness? (I presume that depends on the masses of the rectangles...but maybe if I make them out of light materials, the error will be negligible?)

Any ideas before I have to resort to making one and rolling it about a thousand times to check?  :-)

TIA SteveBaker (talk) 19:58, 17 July 2012 (UTC)[reply]

Its moment of inertia is spherically symmetric (the inertia tensor of any one of the three rectangles wrt aligned coordinates is diag(x, y, z), and the sum of all three is (x+y+z) I), so I think it is probably fair, but I may be missing something. -- BenRG (talk) 20:17, 17 July 2012 (UTC)[reply]
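BenRG's inertia claim is easy to check numerically; here is a minimal sketch (Python/NumPy) treating each rectangle as a uniform thin plate of unit mass and using the standard thin-lamina second moments. Incidentally, the proportionality to the identity does not depend on the golden ratio at all, only on the three congruent plates being arranged cyclically.

    import numpy as np

    PHI = (1 + np.sqrt(5)) / 2

    def plate_inertia(a, b, axes):
        """Inertia tensor (unit mass, centre of mass at the origin) of a thin
        rectangular plate with half-sides a and b lying along the two coordinate
        axes in `axes`.  Standard lamina result: <u^2> = (half-side)^2 / 3."""
        second = np.zeros(3)                     # second moments <x^2>, <y^2>, <z^2>
        second[axes[0]] = a ** 2 / 3
        second[axes[1]] = b ** 2 / 3
        return np.diag(second.sum() - second)    # I_ii = <r^2> - <x_i^2>; off-diagonals vanish

    # Three mutually perpendicular golden-ratio rectangles (half-sides 1 and PHI),
    # with the long side of each along a different coordinate axis:
    I = (plate_inertia(1.0, PHI, (0, 1))         # in the xy-plane, long side along y
         + plate_inertia(1.0, PHI, (1, 2))       # in the yz-plane, long side along z
         + plate_inertia(PHI, 1.0, (0, 2)))      # in the zx-plane, long side along x

    print(I)
    print(np.allclose(I, I[0, 0] * np.eye(3)))   # True: proportional to the identity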
Other than experimentally (and hence imperfectly), how do you even define "fair"? * 86.129.16.198 (talk) 23:05, 17 July 2012 (UTC) (* Other than the case of indistinguishable faces, which must be fair by symmetry)[reply]
I would say that "fair" dice are dice where the theoretical probability of each possible result is the same (i.e. equal to 1/n, n being the number of possible results). This is accomplished when all the faces are the exact same size, the density of the thing is consistent throughout (so one face does not weigh more than another, for example), etc. Technically, even some small action like scratching a die with one's car keys is enough to upset this balance, not to mention all of the tiny, tiny, imperceptible errors that probably occur during the manufacture of even those nice, sharp-edged casino dice. Of course, such errors are not likely to have any noticeable influence on the game, so they can be safely discounted.  dalahäst (let's talk!) 03:01, 18 July 2012 (UTC)[reply]
Some cases are clearly fair by symmetry, and I suppose some are clearly unfair by obvious lack of symmetry, but in the general case a procedure for calculating the theoretical probability of each possible result would need to be established, and it is not very obvious how to go about doing that. 86.129.16.198 (talk) 03:15, 18 July 2012 (UTC)[reply]
According to the presumption of innocence the dice is fair until proven unfair. The burden of proof is on the prosecution. According to the principle of insufficient reason the dice is fair. Bo Jacoby (talk) 04:01, 18 July 2012 (UTC).[reply]
The possibilities are not indistinguishable, as Steve pointed out in the original question. He was already applying the principle of indifference where it does apply. You need Newton's laws to answer this question any further than that. It's really more of a science desk question. -- BenRG (talk) 05:18, 18 July 2012 (UTC)[reply]
I tend to believe the die is unfair, and this shouldn't be hard to show by simulation. Take a random orientation of the die; find the lowest vertex; find the location of the center of mass with respect to it, and rotate it in that direction until there is a second vertex touching the table; then do the same for the third vertex. This is a fairly good approximation for what happens when you roll it. By running enough iterations of this, or by deriving it analytically, you can find something which is similar to the actual probabilities. -- Meni Rosenfeld (talk) 04:08, 18 July 2012 (UTC)[reply]
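Here is a rough sketch of that kind of simulation (Python/NumPy/SciPy). It is strictly the quasi-static "tip about the lowest vertex, then about the resulting edge" approximation described above, with a uniformly random initial orientation and no bouncing, sliding, air or elasticity, so treat whatever fraction it prints as a property of this model only, not of a real throw.

    import numpy as np
    from scipy.spatial import ConvexHull

    PHI = (1 + np.sqrt(5)) / 2
    # 12 vertices = the corners of three mutually perpendicular golden-ratio rectangles.
    V = np.array([(s, u * PHI, 0) for s in (-1, 1) for u in (-1, 1)] +
                 [(0, s, u * PHI) for s in (-1, 1) for u in (-1, 1)] +
                 [(u * PHI, 0, s) for s in (-1, 1) for u in (-1, 1)], float)
    FACES = [frozenset(map(int, f)) for f in ConvexHull(V).simplices]   # 20 triangles
    RECT = np.argmin(np.abs(V), axis=1)          # which rectangle each vertex belongs to

    def face_class(face):                        # 'A': one vertex from each rectangle
        return 'A' if len(set(RECT[list(face)])) == 3 else 'B'

    def rodrigues(axis, angle):
        a = axis / np.linalg.norm(axis)
        K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

    def tip(W, contacts, axis):
        """Pivot the body (vertices W, centre of mass at the origin) about the
        horizontal line through the first contact vertex with direction `axis`,
        in the sense that lowers the centre of mass, until one more vertex
        reaches the table height.  Returns rotated vertices and the new contact."""
        p = W[contacts[0]]
        a = axis / np.linalg.norm(axis)
        if np.cross(a, -p)[2] > 0:               # flip so the centre of mass descends
            a = -a
        best, theta_best = None, np.inf
        for i in range(len(W)):
            if i in contacts:
                continue
            u = W[i] - p                         # height above table: A*cos(th) + B*sin(th)
            A, B = u[2], np.cross(a, u)[2]
            theta = np.arctan2(B, A) + np.pi / 2 # first positive angle where it hits 0
            if theta < theta_best:
                theta_best, best = theta, i
        return p + (W - p) @ rodrigues(a, theta_best).T, best

    def roll(rng):
        q, r = np.linalg.qr(rng.normal(size=(3, 3)))
        W = V @ (q * np.sign(np.diag(r))).T      # uniformly random orientation
        contacts = [int(np.argmin(W[:, 2]))]     # the lowest vertex touches first
        p = W[contacts[0]]
        d = np.array([-p[0], -p[1], 0.0])        # horizontal offset of the centre of mass
        if np.linalg.norm(d) < 1e-12:            # balanced exactly on a vertex: ignore
            return None
        W, j = tip(W, contacts, np.cross([0.0, 0.0, 1.0], d))     # vertex -> edge
        contacts.append(j)
        W, j = tip(W, contacts, W[contacts[1]] - W[contacts[0]])  # edge -> face
        contacts.append(j)
        f = frozenset(contacts)
        return face_class(f) if f in FACES else None

    rng = np.random.default_rng(1)
    ok = [c for c in (roll(rng) for _ in range(10000)) if c is not None]
    print("class A fraction:", ok.count('A') / len(ok), "(8/20 = 0.4 for a fair die)")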
Ah, I see a few of us know that a "die" is singular and "dice" are plural, but, alas, I lament that, if put up for a vote, the plurality may be against us. Perhaps we should take everyone who makes this mistake and toss them into the center of a black hole ?  :-) StuRat (talk) 05:00, 18 July 2012 (UTC) [reply]
I've always said "die", but I think singular "dice" is so widespread and so old that it's rather ridiculous to treat it as anything but correct. This is how languages change. -- BenRG (talk) 05:18, 18 July 2012 (UTC)[reply]
It's probably "correct" by now in the UK. In the States it's still jarring. To me it's a nails-on-chalkboard thing; I will never accept it as correct. --Trovatore (talk) 06:58, 18 July 2012 (UTC)[reply]
(ec). I stand corrected. English is not my first language and sometimes I spell latin rooted words correctly, writing 'excentricity' for eccentricity and 'exspect' for expect and so on. In order not to spell more correctly than the English I wrote 'the dice' rather than 'the die'. Bo Jacoby (talk) 05:41, 18 July 2012 (UTC). [reply]
The vertices form a regular icosahedron, and the ellipsoid of inertia is a sphere. I can't see what else would bias it given the sort of idealizing assumptions one usually makes in mathematics. I guess air resistance could bias it, but your simulation as described won't catch that. -- BenRG (talk) 05:18, 18 July 2012 (UTC)[reply]
From a mechanical point of view the faces are indistinguishable. The center of mass is at the geometrical center. When thrown on a plane table only the shape of the convex hull matters, and it is regular. If the table is not plane the die may be unfair. Bo Jacoby (talk) 05:41, 18 July 2012 (UTC).[reply]
"From a mechanical point of view the faces are indistinguishable."?? Maybe in a spherical cow sense of an ideal plane, "convex hull" and ideal die or some likeness of these, but I doubt, mechanically, this to be the case here. Unlike the A face which has only contact points at the corners, the B face has an edge that can become flush with the table where friction and drag will be created as the die comes to a rest. --Modocc (talk) 06:45, 18 July 2012 (UTC)[reply]
The suggested iterated simulation should not differ from that of an ordinary 20-sided die having the same vertices and mass center. I also agree with Ben with regard to air resistance: it's likely unfair because of a substantial difference in drag (thus stability) with respect to the air (one can sort of get a feel for this by imagining blowing on the die while it is at rest in different positions), and in the asymmetric contact with the table of its edges with respect to the two face types A (no edge, just three corners) and B (with one edge and a corner). But since this die's vertices are equidistant from the mass center, which is located at the center of the intersection of the three rectangles, it might be more or less fair if the edges are recessed slightly to prevent them from contacting the table and these dice are tossed in a vacuum; but even then there might be a slight bias simply because of small differences in how the die's kinetic energy is dissipated. --Modocc (talk) 05:59, 18 July 2012 (UTC)[reply]


This is a fascinating example, and it's great to see Steve back, by the way.
One thing no one has touched on (unless I missed it) — are we assuming that the rectangles are perfectly rigid? If the rectangles flex and dissipate energy that way when the die bounces, I would expect that to contribute some sort of asymmetry.
It seems to me that there's a chance the die might be "fair" if everything were perfectly rigid, it's in a vacuum, and there's no friction. But then one has to ask, why would it ever stop bouncing? So maybe in that case it would be "fair" because the probability of it coming to rest on any of the implied faces is the same as for any other face, namely zero. --Trovatore (talk) 07:05, 18 July 2012 (UTC)[reply]

Unfairness is not proven merely by pointing out that perfect symmetry does not exist in this imperfect world. The model of fairness implies that each of the 20 outcomes has the same probability. Any other model should show and argue for some other distribution of the probability. It is not sufficient to "expect some sort of asymmetry". Bo Jacoby (talk) 09:23, 18 July 2012 (UTC).[reply]

It is also not sufficient to expect fairness, thus when it comes to correctness, assigning equal probabilities can be just as blind as assigning unequal probabilities, hence I take issue with the principle of indifference or insufficient reason as being too prejudicial. --Modocc (talk) 11:01, 18 July 2012 (UTC)[reply]
Apparently you missed the fact that the faces are not mutually indistinguishable, having nothing to do with imperfections as they exist in the real world. Sławomir Biały (talk) 12:27, 18 July 2012 (UTC)[reply]

What then, Sławomir and Modocc, is the correct probability distribution function ? Bo Jacoby (talk) 13:29, 18 July 2012 (UTC).[reply]

I don't know Bo. But I am not quite so arrogant as to think that my lack of knowledge implies that all outcomes are equally likely. I also don't know what the probability that a coin will come up "edge" is, but I don't therefore conclude that the three possible outcomes "heads", "tails" and "edge" are equally likely. Sławomir Biały (talk) 13:51, 18 July 2012 (UTC)[reply]
I haven't done the sums, but as Steve says there are A faces and B faces. Because of symmetry, the A faces are going to be equilateral triangles, and the distance from the centre of the triangle to the centre of mass is readily calculable - though I haven't done it. Is the B face equilateral or just isosceles? What is the distance from the centre of a B face to the centre of mass - if it differs from the face A case then it isn't fair. If the triangle isn't equilateral then I guess that it isn't fair. Even if distances are equal and the B triangles equilateral there may still be unfairness relating to face-face angles along edges. -- SGBailey (talk) 13:37, 18 July 2012 (UTC)[reply]
The (virtual) faces form an icosahedron, so they are all equilateral, all equidistant from the centre of mass, and all indistinguishable in every sense except with respect to the geometry of the rectangles. 86.160.212.146 (talk) 13:45, 18 July 2012 (UTC)[reply]
The faces are not indistinguishable. Certainly some moment of the mass distribution (maybe the third moment, although I'm not convinced that even the second moments are the same) will be different at some of the faces. The question is, will these different moments influence the probability, or do they somehow conspire to create a fair die? Sławomir Biały (talk) 13:51, 18 July 2012 (UTC)[reply]
The ways in which they are distinguishable involve the geometry or disposition of the rectangles. I was trying to explain that all the other aspects, such as length of edges, distance to centre, dihedral angles, etc., that SGBailey was concerned about, are not points of difference since the faces form an icosahedron. 86.160.212.146 (talk) 13:57, 18 July 2012 (UTC)[reply]

Wow! I started off more of a debate than I expected! I would prefer to ignore air resistance and concerns about the friction of edges rather than corners because the dice will be pretty heavy and those things will likely be negligible. I just love the suggestion to make the short edges of the rectangles slightly concave so that contact with a planar tabletop only happens at the vertices! That's obviously a good idea... providing that making those edges slightly concave doesn't make it less fair for reasons of center of gravity. No real-world dice are perfectly fair - they all have numbers or spots etched into them, for example... but I'm only really concerned about whether this is likely to be significantly less "fair" than a conventional 20 sided dice.

Incidentally... I'm also thinking about making a more normal 6-sided dice from two interlocking rectangles (each having sides of length one and root-two)... similar problem - some faces are formed from two parallel edges and others by an X-shaped intersection of two edges. Are these fair?

Thanks again! This is great stuff. SteveBaker (talk) 14:40, 18 July 2012 (UTC)[reply]

The alternative construction of using six rigid wires to connect pairs of opposite vertices to the centre would indubitably give fairness - and the inevitable slight springiness would give a pleasing degree of bounce when the artefact was thrown onto a table. It would be harder to make, though. I'd do it by marking the spherical coordinates of each vertex on a foam ball, pushing the wires through, temporarily tying the vertices together, burning the ball away, securing the wires at the centre then cutting the temporary ties ←86.139.64.77 (talk) 15:43, 18 July 2012 (UTC)[reply]
Hard to mark the numbers though... 86.160.212.146 (talk) 17:17, 18 July 2012 (UTC)[reply]
The device in the OP didn't have faces to mark, either ←86.139.64.77 (talk) 23:55, 18 July 2012 (UTC)[reply]
You could write the numbers on the rectangles according to some suitable scheme. 86.160.212.146 (talk) 00:50, 19 July 2012 (UTC)[reply]
Or you could have the numbers removed, for luck. It is OK, as long as you remember where they used to be. --Trovatore (talk) 02:37, 19 July 2012 (UTC) [reply]

SteveBaker, your drawing of the icosahedral dice (or die?) is very nice (or nie?). Show us a drawing of the hexahedral thing you have in mind! You ask if it is fair. It has less symmetry than the first one because the moments of inertia are not obviously equal. But that does not imply that it is unfair, so the answer is that it is fair until proven otherwise. Bo Jacoby (talk) 04:44, 19 July 2012 (UTC).[reply]

I'm sorry, but I really do think that "fair until proven otherwise" argument is nonsense. 86.146.110.153 (talk) 10:35, 19 July 2012 (UTC)[reply]

Don't be sorry, you are entitled to be mistaken. The fair probability distribution has maximum entropy reflecting complete ignorance. An unfair probability distribution reflects some knowledge about why some outcome is less probable than another outcome. That's why. Bo Jacoby (talk) 10:53, 19 July 2012 (UTC).[reply]

I'm afraid it's you who are mistaken. Assuming that an arbitrary dice is fair just because no one has proved it unfair is quite clearly incorrect. 86.146.110.153 (talk) 11:19, 19 July 2012 (UTC)[reply]
See indifference principle. If you have n alternatives, and know nothing about them, the best assumption is to treat them as equally likely. While dice may not be fair as a general rule, usually your best estimate of the behavior of an unknown die is to treat it as fair. This is likely wrong, but less wrong than an arbitrary other assumption. --Stephan Schulz (talk) 13:19, 19 July 2012 (UTC)[reply]
Yes, and it has already been explained why the indifference principle does not apply. FTA (emphasis mine): "The principle of indifference states that if the n possibilities are indistinguishable except for their names, then each possibility should be assigned a probability equal to 1/n." But the sides are not indistinguishable. Indeed, the mass has distinct moments about the two different sets of faces. If anything, that should constitute a proof that the die is not fair, unless somehow these moments conspire to create a fair die. That should require proof. As 86 says, the "'fair until proven otherwise' argument is nonsense". I might as well claim that the three outcomes of a coin toss (heads, tails, and edge) are equally likely. Sławomir Biały (talk) 13:52, 19 July 2012 (UTC)[reply]
The logical fallacy that you and Bo are committing is argument from ignorance. Sławomir Biały (talk) 15:05, 19 July 2012 (UTC)[reply]
Some pseudo-mathematical musings: To a first order approximation, the die will be fair, since the convex hull is a regular icosahedron whose center of mass is at its geometric center. Hence all of the equilibria of the die have the same first moment. The higher moments will be different, so the die (most likely) will not be fair when it is actually rolled, since this involves the mechanics of rotation about various axes, etc. It would probably be difficult to detect this lack of fairness in practice, since the deviation from a fair die will be small, although one should in principle be able to estimate it by a calculation. I lack the particular expertise to do this, but many similar calculations have been performed in classical mechanics by people like Richard Montgomery and Jerrold Marsden. Sławomir Biały (talk) 15:21, 19 July 2012 (UTC)[reply]

The stated condition for fairness that "the n possibilities are indistinguishable except for their names" is sufficient but not necessary. Probabilities express our state of knowledge, and we do not know any reason why type A outcomes should be less or more frequent than type B outcomes. Maybe people like Richard Montgomery and Jerrold Marsden can contribute to our knowledge, but so far they haven't.

The three outcomes of a coin toss (heads, tails, and edge) are not equally likely because edge is an excited state while heads and tails are ground states. The Boltzmann factor estimates the ratio between the probabilities. Bo Jacoby (talk) 16:04, 19 July 2012 (UTC).[reply]

If a coin's edge is allowed to be as wide as or wider than its faces, your argument fails because you do not know what kind of coin you have. In general, we use empirical knowledge to assert fairness. --Modocc (talk) 16:56, 19 July 2012 (UTC)[reply]
we do not know any reason why type A outcomes should be less or more frequent than type B: Actually, we do. There is asymmetry present. This implies that, unless a proof is given to the contrary, one outcome will be more likely than another. Our computational inability to determine otherwise does not mean that suddenly both outcomes are equally likely. I hope you can see that this is a ridiculous argument! If someone were to pick a random mass distribution supported in the icosahedron, with center of mass at the geometrical center, the resulting die would almost surely not be fair. So why is this particular mass distribution special? Sławomir Biały (talk) 16:41, 19 July 2012 (UTC)[reply]
Are you guys having a Bayesian/frequentist smackdown? It sounds like Bo and Stephan might be operating with the Bayesian definition of probability, which I'll crudely summarize here as "probability is a statement about our current state of knowledge. Absent any additional information, you can assign equal priors to all of the equivalent states. Then, if and when you get new information, you can just calculate updated posteriors". It also sounds like Sławomir is operating from a more "frequentist" perspective, which I'll crudely summarize as "events have fixed, constant and objective probabilities. Just because we're not aware of what they should be, doesn't mean that they change as we learn more." - I think that explains your perspectives, where Bo and Stephan are saying "okay, there are different faces, but we can't think up a way that would make a difference - we can just call them equivalent until someone points why they're not", and Sławomir is saying "The faces are different! We can't say anything about the probabilities until we can determine how that affects the (true, underlying and unchanging) probabilities." -- 71.217.5.199 (talk) 16:56, 19 July 2012 (UTC)[reply]
My position is that there is an absolute – allegorically, it's the elephant in blind men and an elephant – and I prefer the Bayesian probability approach since it takes into account the knowledge of the not-so-blind men. The best assumptions are usually going to be those that happen to be correct, but they are not always necessary. --Modocc (talk) 18:16, 19 July 2012 (UTC)[reply]
The thing is though that I'm not interested in our present state of knowledge about a particular dice - it's useless to me. The problem here is to predict the future: What statistical distribution of numbers will this dice produce? We don't have any lack of knowledge here. The system is fully described. So saying "we don't know the answer so we're going to just guess that it's fair" is a viable philosophical position - but absolutely useless for producing the desired result. In answering this WP:RD question, "The math is too complicated, so I don't know" is a better answer than "I assume it must be fair". SteveBaker (talk) 16:30, 20 July 2012 (UTC)[reply]
Bayesian statistics is superior to frequentist statistics. But Bo is misusing it by applying it to the wrong question. If the question was "I've just rolled the die, what is the probability that it landed on face 13?" the answer is 1/20. But the question is "for a given idealized probabilistic process of rolling the die, what is the probability of the event that the die lands on 13?", the answer isn't 1/20, it's a prior distribution over the collection of possible probabilities, with a mean of 1/20. If I spend more time gaining knowledge by studying the probabilistic process in question, I can obtain a more precise answer, and if I solve it completely I will be able to give a single number as an answer (and at that point my answer to the first question will also be different). -- Meni Rosenfeld (talk) 05:16, 22 July 2012 (UTC)[reply]

Assume you know nothing about the die except that the outcomes are 1,2,3,...,20.

  • If the question was "I've just rolled the die, what is the probability that it landed on face 13?" the answer is "1.000000 if it landed on 13 and 0.000000 otherwise".
  • If the question was "what is the probability that it will land on face 13 the first time I toss it?" the answer is "0.050000".
  • If the question was "what is the probability that it will land on face 13 the second time I toss it, provided it landed on face 13 the first time?" the answer is "0.095238".
  • If the question was "what is the probability that it will land on face 13 the second time I toss it, provided it didn't land on face 13 the first time?" the answer is "0.047619".

Bo Jacoby (talk) 12:46, 22 July 2012 (UTC).[reply]

It might be easier to guess the reasoning behind these numbers if they were presented as 1/20, 2/21, 1/21 rather than 0.050000, 0.095238, 0.047619. —Tamfang (talk) 18:30, 22 July 2012 (UTC)[reply]
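For the record, these are exactly the numbers given by Laplace's rule of succession with a uniform prior over the 20 faces; a sketch of the calculation, assuming that prior:

    % Uniform (Dirichlet) prior over the 20 faces -- Laplace's rule of succession
    % generalised to 20 outcomes: after n throws of which s showed face 13,
    % the posterior predictive probability of face 13 on the next throw is (s+1)/(n+20).
    \begin{align*}
    P(\text{13 on 1st throw}) &= \tfrac{0+1}{0+20} = \tfrac{1}{20} = 0.05\\
    P(\text{13 on 2nd} \mid \text{13 on 1st}) &= \tfrac{1+1}{1+20} = \tfrac{2}{21} \approx 0.095238\\
    P(\text{13 on 2nd} \mid \text{not 13 on 1st}) &= \tfrac{0+1}{1+20} = \tfrac{1}{21} \approx 0.047619
    \end{align*}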
Bayesian probabilities represent a subjective state of knowledge. If you rolled the die and I know it landed on 13 then the probability is 1. If you rolled the die and I don't know the result, the probability (for me) is 1/20. -- Meni Rosenfeld (talk) 03:49, 23 July 2012 (UTC)[reply]

Two persons sharing knowledge should agree on the Bayesian probabilities. In that sense it is objective. Frequentists only consider probabilities for future events such that nobody knows the result. They consider the probability that the die will land on 13, not the probability that the die did land on 13. Bo Jacoby (talk) 12:36, 23 July 2012 (UTC).[reply]

Can moments of the mass distribution over order 2 be ignored?

Suppose we assume simple Newtonian physics, and, in particular, a constant gravitational acceleration in the region of interest. Can we then say that the dynamics of a perfectly rigid object such as an idealized version of this die, in that environment, in interactions that never penetrate its convex hull, are completely determined by its mass, center of mass, and inertia tensor? If not, can anyone give an example of a phenomenon where third- or higher-order moments have some sort of physical effect, given the restrictions of the model above? -- The Anome (talk) 18:48, 19 July 2012 (UTC)[reply]

  1. 71.217.5.199 is right. I am a Bayesian.
  2. Anome is right. The hamiltonian of a rigid body does not depend on third- or higher-order moments.
  3. Modocc is right. If a cylindrical 'coin' is thicker than its diameter then the edge outcome is probable.
  4. Sławomir, is face type A or face type B the more probable ?
  5. Perhaps the probability that a convex polyhedron will land on face number i may be approximated by (the solid angle of face number i as seen from the center of mass)/(4π). (I retract the suggestion to use Boltzmanns law because die tossing is far from thermodynamic equilibrium).

Bo Jacoby (talk) 22:11, 19 July 2012 (UTC).[reply]

I have computed the inertia tensor. It's proportional to the identity, so the Hamiltonian is spherically symmetric and the die is fair. Sławomir Biały (talk) 23:11, 19 July 2012 (UTC)[reply]
Luke 15:7. Bo Jacoby (talk) 06:57, 20 July 2012 (UTC).[reply]
Well, even as a Bayesian you can still work on your prior... The Hamiltonian is not spherically symmetric because the di(c)e has to land on a flat surface and come to rest in a stable or metastable position; the surface (and the gravitational force) breaks the symmetry. Imagine taking a cube and chipping off a bit at the corners, so you end up with a polyhedron with large octagonal and small triangular (trigonal?) sides. The moment of inertia is still spherically symmetric, but the thing will more often come to rest on one of the octagonal sides and only rarely on a triangular side (Bo's fifth point). For the icosahedron in question, the situation is less obvious and from a statistical point of view the assumption of fairness (or complete ignorance) is justified if you don't want to spend a lot of effort on the physics, even more so when you're dealing with a material realization of the thing with faults and imbalances and everything. --Wrongfilter (talk) 07:51, 20 July 2012 (UTC)[reply]
Wrongfilter, what is your suggestion for an improved prior? Bo Jacoby (talk) 08:33, 20 July 2012 (UTC).[reply]
I don't know. But as Steve said in the original question, we do know that there are two classes of triangular faces. The question was, and still is, how to use that knowledge to predict more accurate probabilities for whether the icosahedron will fall on one or the other type of face. --Wrongfilter (talk) 11:10, 20 July 2012 (UTC)[reply]
Imagine that the convex hull of Steve's die is filled with some mysterious opaque and massless substance. Then you have a perfect regular icosahedron with center of mass located in the geometrical center and inertia tensor proportional to the identity tensor. There is no way that you can distinguish A-faces from B-faces any more. This thing has the same convex hull and the same Hamiltonian as Steve's die, so it moves in exactly the same way, both when tossed and when hitting the table. The regular icosahedron is fair, and so is Steve's die. Q.E.D.. Bo Jacoby (talk) 14:30, 20 July 2012 (UTC).[reply]
Aha! That's the answer I needed! Many thanks! SteveBaker (talk) 16:30, 20 July 2012 (UTC)[reply]
Perfected your prior, I see. Now if we could only assume that such approximations are sufficiently valid. Given that golf balls are dimpled to improve flight, and that the seams of heavy baseballs affect their flight (see knuckle ball), I would want Steve's dice tested for systemic bias due to drag effects, and perhaps modified to eliminate any, before using them in a lottery. The die has to settle into a stable non-rotating state where it cannot roll any more, and if one configuration has less drag as it rolls it will tend not to lose momentum as quickly and settle down. Small random imperfections may not affect the die, but this systematic one might be significant, and dice are generally fairly lightweight (for practical reasons), thus the problem might be significant. Modocc (talk) 15:40, 20 July 2012 (UTC)[reply]
We could roll the die in a vacuum. Rckrone (talk) 16:02, 20 July 2012 (UTC)[reply]
I'll grant that air resistance and different friction (and 'stiction') effects might come about because of the different amount of contact with the ground between class A and class B surfaces. However, for my purposes, where the die is small and fairly heavy, I don't think they'll be significant enough to concern me. After all, no dice is ever perfectly constructed - they have dimples or numbers on them that alter the center of gravity, and the air drag on the facets and the friction for each face is different... the real world is complicated. However, I don't need that kind of perfection - I just need people who play Dungeons & Dragons not to notice that class A faces come up twice as often as class B faces! SteveBaker (talk) 16:30, 20 July 2012 (UTC)[reply]
It's been decades since I've played, but I have enjoyed the game, and thus hope these do work out well. With a single, standard die, most minor mass and surface imperfections will tend to cancel each other out, of course, and people normally assume that their dice are not loaded in any obvious special way; certainly with any game that is not as high-stakes as the lottery, and with a small sample size and limited use, no one will notice or care about anything as small as a one percent difference. That said, should there ever be a larger audience, and a larger tested sample size of these unique dice with a proven bias, even if the difference is negligible, there will be folks that will tend to favor the luckier sides, or favor the standard dice if those are perceived as being fairer. I do think this die looks really cool, and it should be relatively easy for a manufacturer to test and modify. :-) Modocc (talk) 17:47, 20 July 2012 (UTC)[reply]

Experimental results

  • I have made one of these out of card, and I observe a bias towards "Class A" faces (currently 181 out of 300 throws). When you throw it, it also "feels" as if that result will be more likely. I encourage others to try this and see if they get the same results. 86.179.1.131 (talk) 19:23, 20 July 2012 (UTC)[reply]

The die has 8 A-faces and 12 B-faces, so 300 throws should give 120±8 A-faces and 180±8 B-faces. Are you sure you haven't switched labels A and B? Bo Jacoby (talk) 00:06, 21 July 2012 (UTC).[reply]

The case I am calling "A" is the case when the dice is resting on three points of three different rectangles, per the OP's original description. 86.179.1.131 (talk) 00:35, 21 July 2012 (UTC)[reply]

Your result 181:119 indicates that the A-probability is 0.603±0.028 (beta distribution). The A-probability for a fair icosahedron is 0.400 (= 8/20), which is 7.25 standard deviations below the mean (0.603 − 7.25·0.028 = 0.40). So you have - beyond reasonable doubt - proven unfairness of your die! What are the dimensions of your cards? Bo Jacoby (talk) 07:04, 21 July 2012 (UTC).[reply]
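Checking Bo's figures with SciPy (the posterior for the A-face probability after 181 A-results in 300 throws, with a uniform prior, is Beta(182, 120)):

    from scipy.stats import beta

    a, b = 181 + 1, 119 + 1                    # Beta(182, 120) posterior, uniform prior
    mean, std = beta.mean(a, b), beta.std(a, b)
    print(mean, std)                           # ~0.603 and ~0.028, as above
    print((mean - 8 / 20) / std)               # ~7.2 standard deviations above 0.400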


July 18

Polygon triangulation

The article on Polygon triangulation states that "In the strict sense, these triangles may have vertices only at the vertices of P." I want to know whether it is always possible to triangulate a polygon in this fashion, and if so, why. Thanks---Shahab (talk) 12:54, 18 July 2012 (UTC)[reply]

Provided, for any polygon, it is always possible to draw at least one straight line between two vertices that lies entirely inside the polygon, then any polygon can be dissected into two smaller ones, and the process continued until only triangles are left. So, for the triangulation not to be possible, there would have to be a polygon for which it wasn't possible to draw such a straight line. 86.160.212.146 (talk) 20:55, 18 July 2012 (UTC)[reply]
Okay. Basically if the polygon is convex. But I don't know of a necessary condition.-Shahab (talk) 15:12, 19 July 2012 (UTC)[reply]
I don't exactly understand what you mean by that, but this argument works equally for convex and concave polygons. I cannot conceive of any polygon, concave or convex, where it is not possible to draw at least one line connecting vertices that lies entirely within the polygon. I think it is certain that such a polygon cannot exist,* but I don't know how to actually prove it. 86.146.110.153 (talk) 19:29, 19 July 2012 (UTC) * Other than a triangle, obviously...[reply]
Here's a (messy) proof for if the polygon is not convex: The polygon has at least one "inner" corner, call it B, and suppose the adjacent vertices are A and C. Sweep a ray from B along all the angles between BA and BC through the interior of the polygon. For any given angle, the ray will hit the boundary of the polygon somewhere, and just consider the first such point it hits. If any of the rays hit a vertex, you're done. If none of them hit a vertex, then they all must hit the same face (this part would take some effort to flesh out). This is impossible because the interior angle at B is greater than 180°. I hope there's a nicer proof though. Rckrone (talk) 06:23, 20 July 2012 (UTC)[reply]
It is always possible to find a triangle which is entirely contained in a given polygon and has all its vertices at vertices of the polygon. Start with any convex angle of the polygon, say at a vertex $B$ with neighbours $A$ and $C$, like in this polygon. Consider the triangle $ABC$:
  • if the line segment $AC$ lies entirely in the polygon, cut the triangle off the polygon,
  • otherwise there are some polygon vertices in the triangle – find one of them (e.g. $D$) such that the line segment $BD$ lies entirely in the polygon, and recursively consider the triangle $ABD$ in the same way.
In a finite sequence of such steps you find a desired triangle.
Removing it may, however, split the polygon into pieces touching only in their common vertex (or vertices). Then proceed with each piece separately. --CiaPan (talk) 07:02, 20 July 2012 (UTC)[reply]
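The recursive argument above is, in effect, what the standard "ear clipping" method (see the two ears theorem) turns into an algorithm: repeatedly cut off a triangle whose third side is a diagonal lying inside the polygon. A minimal sketch in Python, assuming a simple polygon given counter-clockwise and ignoring degenerate collinear configurations:

    def cross(o, a, b):
        """z-component of (a-o) x (b-o); positive means a left (counter-clockwise) turn."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def contains(p, a, b, c):
        """Is p inside (or on the boundary of) the CCW triangle abc?"""
        return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

    def triangulate(poly):
        """Ear-clipping triangulation of a simple CCW polygon; returns index triples."""
        idx = list(range(len(poly)))
        triangles = []
        while len(idx) > 3:
            for k in range(len(idx)):
                i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
                a, b, c = poly[i], poly[j], poly[l]
                if cross(a, b, c) <= 0:                      # reflex corner: not an ear
                    continue
                if any(contains(poly[m], a, b, c)
                       for m in idx if m not in (i, j, l)):  # another vertex inside: not an ear
                    continue
                triangles.append((i, j, l))                  # clip the ear off
                del idx[k]
                break
            else:
                raise ValueError("no ear found (degenerate polygon?)")
        return triangles + [tuple(idx)]

    # A concave example: an L-shaped hexagon splits into 4 triangles, using only its vertices.
    print(triangulate([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]))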

Collective term for interpolation and extrapolation

Interpolation and extrapolation are obviously closely linked, so what is the collective term for them? Obviously you could describe them with something very generic like "prediction", but is there a more specific word? It seems odd that e.g. linear interpolation and linear extrapolation have exactly the same formula (if y > x and 0 < p < 1 then we can interpolate a fraction p between x and y using lerp (x, y, p) = x + p(y-x) = (1-p)x + py ... but if p is greater than 1 or below zero this is actually just linear extrapolation) but for them not to have a common name. The obvious answer is "polation" of course, but the dictionaries show no such word (though I've come across some computer scientists using it). For general results about both processes, I've seen academic papers with "inter/extrapolation" in the title. It's really crying out for a single word that describes both! But is there one? ManyQuestionsFewAnswers (talk) 21:15, 18 July 2012 (UTC)[reply]
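For what it's worth, in code the two really are the same one-liner (purely illustrative):

    def lerp(x, y, p):
        """Interpolates between x and y for 0 <= p <= 1; the same formula
        extrapolates beyond them for p < 0 or p > 1."""
        return (1 - p) * x + p * y

    print(lerp(10, 20, 0.5), lerp(10, 20, 1.5), lerp(10, 20, -0.5))   # 15.0 25.0 5.0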

I don't think there is such a word, and I don't think "prediction" is a very good choice, at least in the usual sense of guessing what will happen in the future, as that's really only an extrapolation forward in time. However, there's often a lack of a single word for paired concepts. Is there a word that means either "enter" or "exit" ? StuRat (talk) 21:20, 18 July 2012 (UTC)[reply]
"Prediction" would be a poor choice in my opinion, but I wanted to get it in there so that nobody else said it! It's true that quite a lot of paired concepts lack a single word. But I can't think of another case where two mathematical concepts share exactly the same formula without having a common word - mathematicians usually like to generalize too much for that! ManyQuestionsFewAnswers (talk) 21:45, 18 July 2012 (UTC)[reply]

Alternate definition?

Would the standard definition of Regular Polyhedra (including the Regular Star Polyhedra) be equivalent to the following: 1) All edges are the same length and 2) All faces have the same area. If not, can someone please give me a counter example?

Do you mean like Rhombohedron? 86.160.212.146 (talk) 20:58, 18 July 2012 (UTC)[reply]
Thanx. The Rhombic dodecahedron also works as a counter example.Naraht (talk) 13:45, 19 July 2012 (UTC)[reply]

July 19

Number that isn't a scalar

As I understand it, a scalar is a number which doesn't depend on the coordinate system. For example, no matter what coordinate system you use, the temperature of an object remains the same, so temperature is a scalar.

Are there examples of numbers which aren't scalars? 65.92.7.148 (talk) 03:12, 19 July 2012 (UTC)[reply]

The x-component of a vector is not a scalar. Bo Jacoby (talk) 03:31, 19 July 2012 (UTC).[reply]
I think you're talking about the definition in physics. Scalar has another meaning in mathematics, see Scalar (mathematics) and Scalar multiplication Fly by Night (talk) 03:44, 19 July 2012 (UTC)[reply]
Indeed. I think what the OP is really asking about is cases where the mathematical definition and the physics definition disagree. Bo's example is a good one. Speed is another example. --Tango (talk) 03:55, 19 July 2012 (UTC)[reply]
Also agree. When physicists say "scalar" they (often) mean a scalar field. So:
  1. A temperature distribution is a scalar field that associates a scalar (in the mathematical sense) to every point in space.
  2. As Bo says, the x-component of a vector field associates a scalar (in the mathematical sense) to every point in space but is not a scalar field (unless it means "x component relative to a given fixed co-ordinate system" in which case it is a scalar field, but not a very natural one).
  3. On the other hand, x-component² + y-component² + z-component² of a vector field is a scalar field because it represents the square of the magnitude of the vector field at each point, and so is invariant under co-ordinate transformations.
  4. To be really pedantic, you have to keep unit length the same in the previous example. If you allow transformations that change the unit length then the magnitude of a vector field is a scalar density or relative scalar. Gandalf61 (talk) 08:37, 19 July 2012 (UTC)[reply]
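A small numerical illustration of points 2 and 3 (Python/NumPy; the rotation angle is arbitrary): rotating the coordinate axes changes the x-component of a vector but leaves its squared magnitude alone.

    import numpy as np

    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])

    v = np.array([1.0, 2.0, 3.0])     # components of some vector in the original coordinates
    w = R @ v                         # components of the same vector relative to rotated axes

    print(v[0], w[0])                 # x-components differ: not a scalar
    print(v @ v, w @ w)               # squared magnitudes agree: a scalar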
Scalars are quantities which do not depend on a certain group of symmetry transforms (see Symmetry for the mathematical context and Symmetry (physics) for the physical one). But the term "scalar" is confusing because it is not always clear from the context which symmetry group is assumed. For example, a function corresponding to what physicists call a scalar field is a scalar with respect to the space transforms (such as changes of coordinate system), and hence is a scalar function from the PoV of differential geometry. But if the physical theory provides a gauge symmetry, the function is not a scalar value with respect to that gauge symmetry. For example, the complex-valued ψ from Ginzburg–Landau theory is a geometrical scalar, but it is not a gauge scalar – only its absolute value, a real number, is a scalar in both senses. Oppositely, the magnetic field $F_{12}$ from the same theory (2-dimensional + 1 time) is a gauge scalar ($F$ is a 2-form of curvature), but it is not a geometrical scalar (relative to Lorentz transforms). So, the notion of "scalar" is not absolute. Incnis Mrsi (talk) 11:53, 19 July 2012 (UTC)[reply]

How are they related? Or are they the same thing? Rich (talk) 08:08, 19 July 2012 (UTC)[reply]

rotating a sphere in higher dimensions

how many dimensions is this tesseract rotating in?

If you rotate a circular object such as a bicycle tire about its axis (the axis at its center and normal to it), a force will be exerted that will tend to make it want to expand outward equally in all directions in the plane normal to the axis of rotation. If you spin an elastic spherical object about an axis passing through its center, it will expand outward at its equator, in the plane normal to its axis of rotation. Would it be mathematically possible to spin an expandable sphere in some higher dimension so that all points on its surface would move outward equally in three dimensions (like a balloon expanding), away from the point at its center, rather than just at its equator? If so, what would this rotation be about (obviously not about a two-dimensional linear axis), and in what dimension would the rotation have to be? Thanks. μηδείς (talk) 19:57, 19 July 2012 (UTC)[reply]

The first thing I will say is that rotation in even and odd dimensional spaces behaves very differently. I think you'll need to consider the even- and odd-dimensional cases separately. Fly by Night (talk) 21:00, 19 July 2012 (UTC)[reply]
You can do this in four dimension. It's enough to cook up a rotation that moves each point on the sphere by the same amount. You can find such a rotation by representing rotations in four dimensions by left and right multiplication by unit quaternions. Sławomir Biały (talk) 21:18, 19 July 2012 (UTC)[reply]
Thanks. I have to say that I am not familiar with anything more than one year of high school Euclidean geometry; I can understand how a tesseract works from Sagan and Flatland, and can comprehend its rotation from this wonderful animation. But the meaning of the terms used above, and of quaternion, are entirely unfamiliar to me. I will read the link to quaternion. My assumption is that if you can rotate a sphere in two dimensions about a one-dimensional axis in a three-dimensional space and have its equator expand in a plane, then by analogy you can rotate a sphere in three dimensions about a two-dimensional plane in a 4D hyperspace and have its surface expand in three dimensions. Is that right? If so, can someone help me visualize what it is to rotate about a plane? And would it actually be a plane, or just a circle that bisects the sphere? What articles should I read? μηδείς (talk) 21:39, 19 July 2012 (UTC)[reply]
You might look at Plane of rotation and Rotations in 4-dimensional Euclidean space. To answer your question: in four dimensions an isoclinic rotation has the property you desire; every point on the 3D surface of a sphere in four dimensions rotates at the same speed with such a rotation. The tesseract in the animation is actually rotating isoclinally in 4D, though I don't know if that helps visualise it: both the shape of the tesseract and the projection make it hard to see what's going on.
In higher even dimensions you get analogous rotations, I guess also called isoclinic, where every point on a hypersphere is rotating with the same speed. In odd dimensions you always have a non-rotating axis so at least two fixed points.--JohnBlackburnewordsdeeds 21:53, 19 July 2012 (UTC)[reply]
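A small numerical check of the quaternion picture (Python/NumPy, nothing specific to the animation): left-multiplying unit quaternions, i.e. points of the 3-sphere, by a fixed unit quaternion g moves every point through the same angle arccos(Re g), which is the isoclinic behaviour described above.

    import numpy as np

    def qmul(p, q):
        """Hamilton product of quaternions stored as (w, x, y, z)."""
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    rng = np.random.default_rng(0)
    g = rng.normal(size=4); g /= np.linalg.norm(g)      # a fixed unit quaternion

    pts = rng.normal(size=(5, 4))                       # random points on the unit 3-sphere
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)

    for x in pts:                                       # every point moves through the same angle
        print(np.degrees(np.arccos(np.clip(x @ qmul(g, x), -1.0, 1.0))))
    print("expected:", np.degrees(np.arccos(g[0])))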
You can visualize the "isoclinic" rotations in 4 dim easily with the aid of the Hopf fibration, but this sort of exceeds my ability to explain. Sławomir Biały (talk) 22:06, 19 July 2012 (UTC)[reply]
Ok, so here's how you can visualize a sphere in four dimensions. At each point of a spherical globe, place a circle (a "clock") flat against the globe (so the face is pointing outward from the center). The four dimensional sphere is then parametrized by picking a point on the sphere and a clock value at that point. Then simultaneous rotation of every clock by the same amount is an isoclinic rotation ("passage of time", if you like). Sławomir Biały (talk) 00:22, 20 July 2012 (UTC)[reply]
Isn't that $S^2 \times S^1$? That's not the same as $S^3$, is it? (In the same way that $S^1 \times S^1$ is a torus, not a sphere.) --Tango (talk) 17:04, 20 July 2012 (UTC)[reply]
No, what I have described is the unit tangent bundle of S^2, which is the 3-sphere. Note that you cannot smoothly orient all of the clocks, so there is no preferred global time. This is something of a small technical point that is likely to be of little interest to the OP though. Sławomir Biały (talk) 19:42, 20 July 2012 (UTC)[reply]
And I thought the OP was talking about rotating a regular 3D sphere in a 4-dimensional space, i.e. $S^2$ embedded in $\mathbb{R}^4$. Even after re-reading, the wording is ambiguous... SemanticMantis (talk) 19:21, 20 July 2012 (UTC)[reply]
That's how I read it too. 86.179.1.131 (talk) 20:18, 20 July 2012 (UTC)[reply]

July 20

If you know the average of a group of size greater than or equal to 2 ...

...you do not know anything about its elements (right?). Is there a name for this 'rule'? OsmanRF34 (talk) 17:54, 20 July 2012 (UTC)[reply]

Well, you might know some things. For example, if you have 2 elements and the average is 2.75, you know that at least one of the numbers isn't an integer. You also know at least one of the numbers is larger than or equal to the average and one is smaller or equal. So, for example, if the average is negative, at least one of the elements is negative. StuRat (talk) 18:53, 20 July 2012 (UTC)[reply]
"at least one of the numbers is larger than the average and one is smaller" -- being very pedantic, unless the numbers are equal... 86.179.1.131 (talk) 20:20, 20 July 2012 (UTC)[reply]
Fixed. StuRat (talk) 23:44, 20 July 2012 (UTC)[reply]
Such a rule wouldn't be too useful. First, you would have to restrict it, maybe to positive numbers like age, size, and so on. Second, if you do this, you end up with other additional information – you know that ages and sizes have a certain range. If I give you a group of 2 humans whose average height is 2 metres, you can deduce that neither of them is very far from 2 m. Third, in the same way that you obtained the average (from a series of measures), you can obtain other statistics – the median, the mode, and all sorts of distribution patterns. 88.9.110.244 (talk) 23:35, 20 July 2012 (UTC)[reply]
If you just know that the average is at least 2 (but not what it is), there are some things you can say. Some element is greater than or equal to 2. If any element is less than 2, then some element is greater than 2. Maybe in a stretch you could say that these are the pigeonhole principle.
If you know the average precisely and that all the values are not negative, then you get Markov's inequality, which says that for any value x, at most a fraction of average/x elements have value greater than or equal to x. Rckrone (talk) 16:58, 21 July 2012 (UTC)[reply]
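A quick illustration of the Markov bound on arbitrary non-negative data (Python/NumPy; the sample is made up):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.exponential(scale=2.0, size=100_000)     # non-negative values, mean about 2

    for x in (4.0, 8.0, 16.0):
        print(f"fraction >= {x}: {np.mean(data >= x):.4f}  "
              f"(Markov bound: mean/x = {data.mean() / x:.4f})")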

July 21

System of bilinear equations

We know that a system of linear equations can be solved in polynomial time (in terms of input bits). I want to know how we can solve a system of bilinear equations and, if we can, whether it can be done in polynomial time (in terms of input bits). — Preceding unsigned comment added by Karun3kumar (talkcontribs) 15:32, 21 July 2012 (UTC)[reply]

See System of polynomial equations. Bo Jacoby (talk) 18:45, 21 July 2012 (UTC).[reply]
Here is a PDF of a 1997 paper called Systems of bilinear equations that discusses the general problem and how it can be solved. Looie496 (talk) 19:19, 21 July 2012 (UTC)[reply]

(2x)^y=x

what is y?

example: 9^y=4.5

thank you — Preceding unsigned comment added by 79.180.141.120 (talk) 19:13, 21 July 2012 (UTC)[reply]

$y = \frac{\ln x}{\ln 2x}$ (for x > 0)--Wrongfilter (talk) 20:24, 21 July 2012 (UTC)[reply]
$y = \log_{2x} x$ --CiaPan (talk) 20:40, 21 July 2012 (UTC)[reply]
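A quick numerical check of the closed form on the example given (Python):

    import math

    x = 4.5
    y = math.log(x) / math.log(2 * x)     # y = ln x / ln 2x, valid for x > 0 and 2x != 1
    print(y, (2 * x) ** y)                # y ~ 0.6845, and 9**y recovers 4.5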

Function inversion via control theory

I have an unknown function ; writing , we have and everywhere (a sort of monotonicity). I suspect that f is convex in the same sense, but doing without that assumption would of course be more powerful.

I would like to evaluate (with a precision dependent on the computational effort expended) and the simpler where . The numerical tool I have available takes the form of a dynamical system on (where the dots indicate many additional dimensions). In this dynamical system (the reason for calling them dimensions will be given in a moment), and there exist known functions and such that . However, and do not exist (they oscillate chaotically and thus serve as something like pink noise), and they may differ significantly from for some time after whatever initial state.

So far, the obvious approach is to choose a for each of a number of pairs , evaluate by integrating for some period of time (dependent on desired accuracy), and then obtain some sort of fit and thence .

However, the reason x and y were included in is that there may exist a better algorithm that approaches the solution continuously (a sort of optimal control and/or stochastic filter) by varying them during one (long) integration. The system will take time to "recover" towards after such a change, and it's easy to overcontrol by reacting to fluctuations in and , so what's the best approach here? --Tardis (talk) 22:53, 21 July 2012 (UTC)[reply]

or ? 75.166.200.250 (talk) 06:41, 22 July 2012 (UTC)[reply]

July 22