Wikipedia:Reference desk/Archives/Mathematics/2009 October 12

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 12

Sphere volume and surface area

So, years ago I noticed that the volume of a sphere is (4/3)πr³ and the derivative of this, with respect to r, is 4πr², which is the surface area of a sphere. Is there some reason for this, or is it just coincidence? Another easy example to try, which does not have this relationship, is a cube of side s, with the derivative of the volume being 3s² and the surface area being 6s². And, many other surfaces rely on more than one variable. StatisticsMan (talk) 02:51, 12 October 2009 (UTC)[reply]

It's not a coincidence. Imagine the sphere is like an onion, made up of lots of shells. You can get the volume of the sphere by adding up the volumes of the shells, and the volume of a thin shell is roughly (exactly, for an infinitesimally thin shell) the surface area at that radius times the thickness. When you change the radius by a small amount you remove the outermost shell, so the volume decreases by the volume of that shell, which is the surface area of the sphere times an infinitesimal thickness, dr. --Tango (talk) 02:59, 12 October 2009 (UTC)[reply]
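A quick numerical illustration of the shell argument (a minimal sketch in Python; the radius and step size are arbitrary choices):

import math

def V(r):  # volume of a sphere of radius r
    return 4.0 / 3.0 * math.pi * r**3

def A(r):  # surface area of a sphere of radius r
    return 4.0 * math.pi * r**2

r, dr = 2.0, 1e-6
# (V(r + dr) - V(r)) / dr is the volume of a very thin outer shell divided by
# its thickness, which should be close to the surface area at radius r.
print((V(r + dr) - V(r)) / dr)   # ~50.2655
print(A(r))                      # 16*pi ~ 50.2655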
For a cube, if you take 2r = s, you get a volume of 8r³ and a surface area of 24r². Also the derivative of 8r³ is 24r². So it can make a difference which parameterization of the family of cubes you use. There is some information about which shapes make this possible at [1] and [2]. You can find a lot more with google. — Carl (CBM · talk) 03:09, 12 October 2009 (UTC)[reply]
Indeed - using the OP's parameterisation, the derivative of volume is the area of 3 sides, since rather than a nested family of cubic shells you have a family formed by gradually adding more and more 3-sided caps - the 2D analogue would look something like this:
_____
____ |
___ ||
__ |||
_ ||||
.|||||
Bear in mind that the gaps between each cap should be infinitesimal, so there wouldn't be any gaps on the left or bottom sides. Does that make any sense? --Tango (talk) 03:35, 12 October 2009 (UTC)[reply]
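Spelled out, the two cube parameterisations mentioned above give:

With side length s: V = s³ and dV/ds = 3s², but the surface area is 6s² (three faces, not six).
With half-side r = s/2: V = (2r)³ = 8r³ and dV/dr = 24r² = 6·(2r)², which is the surface area.

In the second parameterisation each face sits at distance r from the centre, so growing r by dr adds a thin shell covering all six faces at once, exactly as in the sphere case.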

Let's see if this way of looking at it sheds some light: Say the sphere is expanding. The rate at which the surface moves outward is the rate at which the radius r increases, so it is dr/dt. Now:

(the rate at which the boundary moves) × (the size of the boundary)
= the rate at which the volume increases.

For example, suppose at some instant:

the boundary has an area of 20 square feet; and
the boundary is moving at 3 feet per minute.

Then:

20 square feet × 3 feet per minute
= 60 cubic feet per minute.

That's how fast the volume is increasing. But of course the volume V increases at a rate of dV/dt. The size of the boundary is the surface area A. So:

A × dr/dt = dV/dt = (dV/dr) × (dr/dt).

Canceling dr/dt from both sides we get:

A = dV/dr.

So no, it's not a "coincidence"; it's just what we should expect. Michael Hardy (talk) 05:08, 12 October 2009 (UTC)[reply]


And if SM wishes some more evidence of it not being a coincidence, let's mention some facts on the variation of area and volume, along the lines of what Carl said. Let A be a bounded convex body in ℝ³; let E be the three-dimensional unit Euclidean ball; so A+rE is the set of points at Euclidean distance less than r from A (e.g. for a sphere of radius R it's a sphere of radius R+r, and for a cube, it's a cube with smoothed edges). Its volume can be written as a third degree polynomial in r,
vol(A + rE) = vol(A) + area(∂A)·r + c·r² + (4π/3)·r³,
where
c = surface integral of the mean curvature of A.
For instance, if A is a sphere of radius R, its surface area is 4πR²; its mean curvature is constant, 1/R at any point, and the surface integral of it is just 4πR, so we get the expansion of (4π/3)(R+r)³, as it has to be. Note also that the convex body A need not be smooth; its mean curvature can be defined as a measure, and c is just the total mass of it. For instance, if A is a polyhedron the curvature is concentrated on the edges, and one has
c = (1/2) Σᵢ ℓᵢ·θᵢ,
where ℓᵢ is the length of the i-th edge, and θᵢ is the angle between the outer normal directions of the corresponding adjacent faces. --pma (talk) 08:02, 12 October 2009 (UTC)[reply]
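As a concrete illustration of the polyhedron formula (taking A to be a cube of side s, so this is just a check, not part of the argument above): the cube has 12 edges of length s, each with angle π/2 between the outer normals of its two faces, so

c = (1/2) Σᵢ ℓᵢ·θᵢ = (1/2)·12·s·(π/2) = 3πs,

and the expansion becomes

vol(A + rE) = s³ + 6s²·r + 3πs·r² + (4π/3)·r³,

i.e. the cube itself, six slabs over the faces, twelve quarter-cylinders along the edges, and eight eighth-balls at the corners.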

Note also the two-dimensional case. The area of a circle is A = πr² and the circumference 2πr equals dA/dr. Bo Jacoby (talk) 08:08, 12 October 2009 (UTC).[reply]

  • I like to think of it in terms of a ball that is repainted. An extremely thin new layer is added, whose volume is (to first order) the ball's surface area times its thickness: dV/dr = surface area. —Anonymous DissidentTalk 11:01, 12 October 2009 (UTC)[reply]

Ring Extension

Resolved

--Shahab (talk) 05:16, 13 October 2009 (UTC)[reply]

I have two questions related to ring extensions. A ring extension R' of a ring R is a ring R' which contains R (or a ring isomorphic to it) as a subring. The rings are commutative and with identity in my questions.

  • Given a ring R and a nonmonic polynomial over it, is it always possible to extend R so that it contains a root of the polynomial? If so, how? The canonical way does not always work: during the adjoining of the root we might kill some elements of the ring. For example: if I want to adjoin the root of 2x-1 defined over Z/(4) to it, then the ring (Z/(4))[x]/(2x-1) doesn't seem to work, as the process of adjoining the inverse of 2 actually kills 2 itself (since now 2 ∈ (2x-1)). In fact, if ab=0, adjoining a⁻¹ always kills b.
  • Regarding the last sentence: if a ring has no zero divisors, and so is an integral domain, will adjoining a⁻¹ never kill any element? If so, why? Hence is it possible to obtain the fraction field of an integral domain just by adjoining the inverses, and not through the usual procedure of taking equivalence classes?

Thanks.--Shahab (talk) 04:50, 12 October 2009 (UTC)[reply]

I'm still thinking about the first question, but for the second how would you construct the inverses? As the roots of polynomials of the form ax-1? I guess you could do that, but I think the standard way with equivalence classes is easier. For an infinite ring you would have to adjoin an infinite number of elements and (I think), a priori, the result of that depends on the order you adjoin them in. You would need to prove that the order doesn't matter in this case (or define a specific order, which would be rather arbitrary). --Tango (talk) 05:08, 12 October 2009 (UTC)[reply]
Why can't I add them all at once by considering R[x,y,z,...]/(ax-1, by-1, cz-1, ...), and even if we add them one at a time, doesn't the third isomorphism theorem ensure that (R/(a))/([b]) (here [b] stands for b+(a)) is isomorphic to R/(a,b), and so the order won't matter? I know the standard way is easier; my question was only whether this way is valid or not?--Shahab (talk) 05:16, 12 October 2009 (UTC)[reply]
I don't know, maybe you can do that... I don't think the 3rd isom. thm. works - you have to apply it an infinite number of times, which complicates matters (it can probably be done, but I expect there are technicalities to deal with - there usually are). User:Algebraist will know, I'm sure he'll be along soon. --Tango (talk) 05:26, 12 October 2009 (UTC)[reply]
The expression is meaningful and behaves the way you expect it to; in particular the "order" does not matter, as there is no "limiting" operation going on here -- the set {x,y,z...} is being adjoined "all at once". I'm not sure what you are asking about the third isomorphism theorem, but whatever it is Tango is probably right that it can't be used here. Eric. 131.215.159.109 (talk) 11:10, 12 October 2009 (UTC)[reply]
How do you define R[x,y,z,...]? I would define it as (...((R[x])[y])[z])..., although it is easy enough to prove that the order doesn't matter for that (it does have to be proven, though), so once you've done that I suppose you might be able to mod out by the entire ideal at once. --Tango (talk) 11:50, 13 October 2009 (UTC)[reply]
R[x,y,z,...] is the ring of multivariate polynomials over R with variables x, y, z, .... There's no need to do it one variable at a time (and it's rather counterproductive). — Emil J. 11:54, 13 October 2009 (UTC)[reply]
Ok, how do you (rigorously) define "multivariate polynomials over R with variables x, y, z, ..."? By far the easiest way I can see is to define it one variable at a time. For a finite number of variables you can do it all at once, but for an infinite number it gets more difficult. I could probably do it, but it would be a mess. --Tango (talk) 12:34, 13 October 2009 (UTC)[reply]
For example, as in polynomial ring#The polynomial ring in several variables, except that the field can be any ring, and since we need an infinite number of variables, we have to add the condition that all but finitely many coordinates of a multi-index are zero. — Emil J. 12:42, 13 October 2009 (UTC)[reply]
Or equivalently: if V is a set of variables, let M be the free commutative monoid with generators V (which is just the direct sum of |V| copies of (N,+)), then R[V] can be defined as the monoid ring R[M]. — Emil J. 12:51, 13 October 2009 (UTC)[reply]
My query regarding the third isomorphism was this. Let R be a ring and I be an ideal containing some other ideal J. Let R' be the ring R/J. Now by the 3rd isomorphism theorem the ideal I corresponds to an ideal I' in R', and moreover R/I is isomorphic to R'/I'. Now if we take a, b to be any elements of R and R' = R/(a), then, since (a,b) and ([b]) are corresponding ideals, R/(a,b) is isomorphic to R'/([b]). In other words, first killing a and then killing b is the same as killing a and b together. Hence I concluded that the order won't matter even if we adjoin inverses one at a time. What I was asking Tango is whether this is correct or not.--Shahab (talk) 12:58, 13 October 2009 (UTC)[reply]
For the first question, your example of 2x-1 defined over Z/(4) is a good one. Suppose R is an extension of Z/(4) containing a root r. Then 2r = 1 and so 0 = (2+2)r = 2r + 2r = 1 + 1 = 2. --Matthew Auger (talk) 05:50, 12 October 2009 (UTC)[reply]
Yes. In fact, the ring (Z/(4))[x]/(2x-1) is isomorphic to Z[x]/(2x-1,4) (i.e. what we get when we make 2x-1=0 and 4=0 in Z[x]). Now 2x-1=0 implies 4x=2; 4=0 implies 4x=0, which implies 2=0; and finally 2=0 implies 2x=1=0, so essentially we end up with the zero ring. This is true in general from what you pointed out. So I guess the answer to the first part is no. Thanks--Shahab (talk) 06:12, 12 October 2009 (UTC)[reply]
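A small computational sanity check of this collapse (a sketch; polynomials over Z/4 are represented as coefficient lists, lowest degree first):

def polymul_mod(p, q, m):
    """Multiply two polynomials (coefficient lists, lowest degree first) over Z/m."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % m
    return out

# In (Z/4)[x]:  (2x - 1) * (-2)  =  -4x + 2  =  2,
# so 2 lies in the ideal (2x - 1) and is killed in the quotient (Z/4)[x]/(2x - 1).
# Here 2x - 1 is represented as [3, 2] (i.e. 3 + 2x) and -2 as [2].
print(polymul_mod([3, 2], [2], 4))   # [2, 0], i.e. the constant polynomial 2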
For the second question, first part, the answer is yes. The usual homomorphism from R to R[x]/(ax-1) is injective if a is not a zero divisor. For the second part, the answer is yes. It requires a little bit of care to adjoin an arbitrary number of elements but it is fundamentally no different from adjoining a finite number of elements. You will probably find localization of a ring of interest. You can show that the adjoining elements method is equivalent to localizing but it is tedious. Eric. 131.215.159.109 (talk) 11:02, 12 October 2009 (UTC)[reply]
Thanks--Shahab (talk) 05:16, 13 October 2009 (UTC)[reply]

Cardinality

What is the cardinality of the set of infinite strings of real numbers? NeonMerlin[3] 06:23, 12 October 2009 (UTC)[reply]

Define "infinite". If you mean sequences of real numbers indexed by natural numbers then it is , which I think is the same as , Beth-two. --Tango (talk) 06:35, 12 October 2009 (UTC)[reply]
Tango, can you explain? My instinct is that ℝ^ℕ should have the cardinality of the continuum. I'm not sure the following construction works, but it might go something like this - consider a function from the real line to the set of all countable sequences of real numbers: Take the decimal expansion of the 1st coordinate to be all the even digits, the decimal expansion of the 2nd to be all those digits divisible by 3 that are not even, and so on - the decimal expansion of the nth real number to be all those digits in the decimal expansion of your original number divisible by the nth prime number not previously taken, etc, etc. RayTalk 06:54, 12 October 2009 (UTC)[reply]
I've got it upside down, haven't I? I meant (2^ℵ₀)^ℵ₀ = 2^ℵ₀, which is, indeed, the cardinality of the continuum. --Tango (talk) 06:59, 12 October 2009 (UTC)[reply]
Yeah, I think we've got it now. By the way, thanks for introducing me to cardinal arithmetic - I was familiar with the standard arguments in real analysis, but had never gone further. RayTalk 07:08, 12 October 2009 (UTC)[reply]
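Written out, the cardinal arithmetic for sequences of reals indexed by the naturals is

|ℝ^ℕ| = (2^ℵ₀)^ℵ₀ = 2^(ℵ₀·ℵ₀) = 2^ℵ₀,

the cardinality of the continuum (Beth-one), not Beth-two.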

Curve fitting

Hi, when fitting a curve to a set of data, why is the sum of squared errors ('least squares') minimized? What will happen if we minimize Sum(abs(y_i - f(x_i))) instead? And why is this quantity not a goodness-of-fit parameter?

thanks! Re444 (talk) 06:44, 12 October 2009 (UTC)[reply]

For various reasons, Least squares is the most common (summarizing the lead of the article - least squares fitting corresponds to the maximum likelihood estimator for most common models of noise). The method you describe is actually known as Least absolute deviation, and is also used in certain scenarios. The article has a lot more. Good luck! RayTalk 07:16, 12 October 2009 (UTC)[reply]
To expand: there are two main reasons that least squares methods are so popular:
  1. If your measured data deviates from the "curve" because of additive Gaussian noise, then the least squares solution corresponds to the maximum likelihood (ML) solution to the problem (the least absolute deviation solution corresponds to the ML solution if the noise is Laplacian). Since Gaussian noise assumptions are reasonable in many practical scenarios (thanks to the Central limit theorem), using the least squares approach is often the "right" choice.
  2. The least squares solution is relatively easy to compute for some families of curves, for example when f depends linearly on the fit parameters.
That said, one is not limited to just the least squares or the least absolute deviation methods. Several other goodness-of-fit measures, such as other ℓp norms or even general convex/non-convex functionals, can be and have been used depending upon the application. Abecedare (talk) 07:50, 12 October 2009 (UTC)[reply]
And note that p = 1 or p = infinity, although quite natural, in general give a minimization problem without uniqueness, because the corresponding norms are not uniformly convex. Lack of uniqueness is annoying if one wants the solution (here, f) to depend continuously on the data (here the x_i and y_i). What makes life with the L2 norm so nice is that the minimizer even depends linearly on the data. pma --131.114.72.230 (talk) 10:01, 12 October 2009 (UTC)[reply]
I'll try to give a naive reason. The sum of squares represents, roughly, the error between the estimated curve and the data, so minimizing it ensures that the curve we end up with has very little error. However, the sum of the absolute values of y_i - f(x_i) also represents the error, and minimizing it should intuitively achieve the same end. In practice the method of least squares is usually preferred to absolute deviation because the sum of absolute values does not penalize the magnitude of each individual error strongly: with the absolute-value method you may end up with one big error in one of the deviations and almost no other errors. It is usually better to have a curve with several small errors rather than one with a single large error; small errors are reasonably attributed to noise, but it isn't easy to explain a large error away. To penalize large errors, the deviations are squared before the minimization. This is of course only a naive explanation. The mathematical reasons are as given above.--Shahab (talk) 08:01, 12 October 2009 (UTC)[reply]
The least squares method can be solved with calculus while least total error (absolute values) can't. However, with the advent of the Simplex method and modern computers it's perfectly feasible to do least total error if you want to. Least squares will go out of its way to accommodate outlying data points, which may be something you want or something to avoid depending on the situation.--RDBury (talk) 15:02, 12 October 2009 (UTC)[reply]
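A rough sketch of both fits on the same data (using numpy and scipy; the straight-line model, noise level and outlier are made-up values for illustration):

import numpy as np
from scipy.optimize import minimize

# Synthetic data from y = 2x + 1 with small noise, plus one large outlier.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 21)
y = 2 * x + 1 + rng.normal(0, 0.2, x.size)
y[5] += 15  # the outlier

# Least squares fit (closed form, via polyfit).
ls_slope, ls_intercept = np.polyfit(x, y, 1)

# Least absolute deviation fit (numerical minimization of the L1 error).
def l1_error(params):
    a, b = params
    return np.abs(y - (a * x + b)).sum()

lad_slope, lad_intercept = minimize(l1_error, x0=[1.0, 0.0], method="Nelder-Mead").x

# The LAD line stays close to y = 2x + 1; least squares is pulled toward the outlier.
print("least squares:", ls_slope, ls_intercept)
print("least abs dev:", lad_slope, lad_intercept)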

The mean value between, say, 1 and 3, is x = (1+3)/2 = 2. The sum of squares of deviations, (x−1)² + (x−3)², is minimized for x = 2, while the sum of absolute deviations |x−1| + |x−3| is constantly = 2 for 1 ≤ x ≤ 3. So the minimum sum of absolute deviations does not select the mean value. Bo Jacoby (talk) 15:25, 12 October 2009 (UTC).[reply]

Thanks a lot every body! Re444 (talk) 23:04, 14 October 2009 (UTC)[reply]

Interesting sounds

As we all know, a pure sine tone with frequency ν can be produced by sending the waveform

y(t) = A·sin(2πνt)

to a speaker. More interesting sound effects are obtained if the argument of the sine function is replaced by a less trivial function of t, e.g. a polynomial, or if an even more complicated expression y(t) is chosen. I am seeking suggestions for interesting sounds. I would appreciate all individual suggestions, as well as links to external (WWW) documents discussing different waves. --Andreas Rejbrand (talk) 10:33, 12 October 2009 (UTC)[reply]

Note I changed y(x) to y(t) in your equation, since the x didn't make sense in that place. Define "interesting"? Are you just trying to produce nice sounds or is there an engineering application? A few examples from electrical engineering: amplitude modulation and frequency modulation as heard on your radio. A chirp signal (I'll blue-link it to chirp now), A·sin[at + bt²], is often used in interferometry and acoustic signal processing to determine the frequency response of a system over a given frequency range. I'm sure there are other examples. Zunaid 13:22, 12 October 2009 (UTC)[reply]
No, I am merely interested in sounds that sound interesting. I will try your examples. --Andreas Rejbrand (talk) 17:25, 12 October 2009 (UTC)[reply]
You will want to investigate synthesis techniques. Refer to Miller Puckette's "Theory and Technique of Electronic Music". Note that simple models such as compositions of periodic functions generally don't create interesting sounds over time. You need sounds that vary in timbre, amplitude and phase over time to really get things going. Subtractive and additive synthesis models are a good place to start. —Preceding unsigned comment added by 94.171.225.236 (talk) 19:19, 12 October 2009 (UTC)[reply]
Try Shepard tone -- SGBailey (talk) 08:38, 13 October 2009 (UTC)[reply]
This is what I call an interesting sound: A⋅sin(ω⋅(sin(t)⋅t)). --Andreas Rejbrand (talk) 20:08, 18 October 2009 (UTC)[reply]
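For anyone who wants to listen to these, a rough Python sketch (the sample rate, amplitude and the particular frequencies are arbitrary choices):

import numpy as np
from scipy.io import wavfile

rate = 44100                      # samples per second
t = np.linspace(0, 5, 5 * rate)   # 5 seconds of time values

pure = np.sin(2 * np.pi * 440 * t)                  # pure 440 Hz sine tone
chirp = np.sin(2 * np.pi * (200 * t + 40 * t**2))   # frequency sweeps upward over time
weird = np.sin(200 * np.sin(t) * t)                 # the A*sin(omega*sin(t)*t) example above

for name, wave in [("pure.wav", pure), ("chirp.wav", chirp), ("weird.wav", weird)]:
    # scale to 16-bit integers before writing
    wavfile.write(name, rate, (0.5 * wave * 32767).astype(np.int16))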

No solutions to equation similar to FLT

Hi all,

I'm looking for the easiest/quickest way to show a² + b² = 3c² has no nonzero solutions in ℚ, unless I'm being stupid in which case somebody please point out a solution! (This is indirectly related to a question paper but not an actual question on it, just me trying to satiate my own curiosity)

Thanks very much :) —Preceding unsigned comment added by 82.6.96.22 (talk) 15:57, 12 October 2009 (UTC)[reply]

First, show that if it has a nonzero rational solution, then it also has a nonzero integer solution, and furthermore one where gcd(a, b, c) = 1 (which in particular implies that a and b can't both be even). Then consider the remainder of both sides modulo 4. — Emil J. 16:06, 12 October 2009 (UTC)[reply]
(edit conflict) It might go something like this. There are two cases: either a and b are both divisible by 3, or neither is (because the right hand side is divisible by 3, so the left hand side must be too). If they are both divisible by 3, then you have a contradiction, because you have an even number of powers of three on the left hand side and an odd number of powers of 3 on the right hand side. If neither is, then one of a², b² must be 1 mod 3, and the other must be 2 mod 3. This is impossible, since there are no numbers whose squares are 2 mod 3 (1 mod 3 squared is just 1 mod 3, but so is 2 mod 3 squared). RayTalk 16:08, 12 October 2009 (UTC)[reply]
oops. I realized I addressed the question in ℤ. Yes, first you must show a rational solution implies an integer solution, which is not difficult :) RayTalk 16:09, 12 October 2009 (UTC)[reply]
If c is even, you do have an even number of powers of 3 on the right-hand side. — Emil J. 16:57, 12 October 2009 (UTC)[reply]
Oh, I see, you mean that the maximal power of 3 which divides the RHS has odd exponent. But then it does not follow so trivially that the exponent for the LHS has to be even if both a and b are divisible by 3. You have to apply a generalization of the reasoning used in the case when they are not divisible (to rule out the possibility that a = 3^k·c, b = 3^k·d, where 3 does not divide c, d, but it divides c² + d²). — Emil J. 17:23, 12 October 2009 (UTC)[reply]
Right. Oops :) RayTalk 17:50, 12 October 2009 (UTC)[reply]
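Filling in the modulo-4 hint (assuming, as the discussion suggests, that the equation is a² + b² = 3c² with gcd(a, b, c) = 1): squares are ≡ 0 or 1 (mod 4), so a² + b² ≡ 0, 1 or 2 (mod 4), while 3c² ≡ 0 or 3 (mod 4). The two sides can only agree when both are ≡ 0 (mod 4), which forces a, b and c all to be even, contradicting gcd(a, b, c) = 1. Hence there is no nonzero integer solution, and by clearing denominators no nonzero rational one either.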

Brilliant, thanks very much all :) —Preceding unsigned comment added by 82.6.96.22 (talk) 16:24, 12 October 2009 (UTC)[reply]

Markov chain-like model

In our article on Markov chain, it states that you do not need to know the past to predict the future: everything is recorded in the present state. What is the name of a similar model in which the past is required to be known to predict the future? Is there a Markov model which considers the "present" to be the last few states, but not the entire history? -- kainaw 19:27, 12 October 2009 (UTC)[reply]

Such a random process can be treated as a Markov chain by redefining it slightly, so that what was the last few states becomes the present. This is how Markov text generators work, for example: a process that was Markov on individual words wouldn't be able to generate real-looking text, but one that's Markov on the level of sequences of a few words can. Algebraist 19:36, 12 October 2009 (UTC)[reply]
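A minimal sketch of that trick in Python: an order-2 word model becomes an ordinary (order-1) Markov chain whose states are pairs of consecutive words. The sample text and starting pair are, of course, just placeholders:

import random
from collections import defaultdict

text = "the cat sat on the mat and the cat ran off the mat".split()

# Build transitions whose states are pairs of consecutive words, so the chain
# is order 1 on pairs of words but effectively order 2 on individual words.
transitions = defaultdict(list)
for w1, w2, w3 in zip(text, text[1:], text[2:]):
    transitions[(w1, w2)].append(w3)

state = ("the", "cat")
output = list(state)
for _ in range(10):
    choices = transitions.get(state)
    if not choices:
        break
    nxt = random.choice(choices)
    output.append(nxt)
    state = (state[1], nxt)   # slide the window: only the current pair is remembered

print(" ".join(output))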
Isn't that just a "Markov chain of order m" as defined in the article? -- Meni Rosenfeld (talk) 20:47, 12 October 2009 (UTC)[reply]
Damn, I knew I should've looked at the article more closely. Algebraist 20:59, 12 October 2009 (UTC)[reply]
Thanks. I didn't read the whole article. I read the beginning and it makes it very clear that Markov chains do not look at past states. I didn't expect that halfway down it would sneak in that sometimes they do look at past states. -- kainaw 23:25, 12 October 2009 (UTC)[reply]
Note that Markov chains of order m are not Markov chains, any more than Gaussian integers are integers. Our articles often mention variations on their main subject. -- Meni Rosenfeld (talk) 20:16, 13 October 2009 (UTC)[reply]
I don't see why it's a departure. To turn your Markov chain of order m into a chain of order 1, you can just expand what you define as the current state to include the information about the last m choices. Rckrone (talk) 17:56, 14 October 2009 (UTC)[reply]
Of course you can reduce MCOOm to normal Markov chains. That does not mean they are the same thing. Mathematics is full of concepts which can be easily reduced to simpler / more common ones, but are still introduced because they are interesting, offer additional insight, and are a more natural way to think of some problems. -- Meni Rosenfeld (talk) 21:18, 14 October 2009 (UTC)[reply]
Any Markov chain of order m is isomorphic to some Markov chain of order 1. There's no stronger form of identity than that. You said "Markov chains of order m are not Markov chains" but that seems to be false since they form a subset, or at the very least there's a natural embedding. On the other hand some Gaussian integers aren't integers. A more apt comparison might be to say that all integers are Gaussian integers. Rckrone (talk) 22:12, 14 October 2009 (UTC)[reply]
I mean I understand what you mean, that the conventional way to think about Markov chains of order m breaks the "rules" of Markov chains. But the truth is that the distinction is entirely superficial, since structurally they still fall into the definition of a Markov chain. Rckrone (talk) 22:18, 14 October 2009 (UTC)[reply]
[ec] My point with the Gaussian integers example was to highlight a tricky usage of language. The name "Gaussian integer" seems to suggest that they are a special case of integers, which is of course false - i is a Gaussian integer which is not an integer. Likewise, the name "Markov chain of order m" might suggest that they are a special case of Markov chains, which is also false - in a typical MCOO 2, Xₙ₊₁ depends on Xₙ₋₁ given Xₙ, which makes it not a Markov chain.
kainaw's statement "sometimes [Markov chains] do look at past states" suggested he was fooled by this, and I wanted to clarify that Markov chains never look at past states.
The existence of an embedding of MCOOm in Markov chains is irrelevant to that point. -- Meni Rosenfeld (talk) 22:31, 14 October 2009 (UTC)[reply]

What is a Mexican pyramid?

Resolved

If you've seen them, they look sort of like an Egyptian pyramid without its top. Actually, the structures are more like steps, but I'm saying the shape would be like an Egyptian pyramid as seen from a distance, with the top part removed to make a smaller pyramid.Vchimpanzee · talk · contributions · 19:31, 12 October 2009 (UTC)[reply]

Frustum. —JAOTC 19:35, 12 October 2009 (UTC)[reply]
Thank you. I looked under pyramid (geometry) and didn't see anything.Vchimpanzee · talk · contributions · 19:45, 12 October 2009 (UTC)[reply]
... and if you want to read a non-mathematical article, we have Step pyramid. Dbfirs 07:19, 13 October 2009 (UTC)[reply]