
Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia


:First, your notation does not make much sense, as <math>\mathbb{Z}_p</math> is not a subset of <math>\mathbb{Z}_{p^n}</math>. Assuming you wanted to write <math>u_i\in\{0,1,\dots,p-1\}</math>, existence follows immediately from the fact that any natural number can be written in a base-''p'' representation. As for uniqueness, the sets <math>\{0,1,\dots,p-1\}^n</math> and <math>\mathbb{Z}_{p^n}</math> have the same finite number of elements, and we have just established that the mapping from the former to the latter sending each sequence <math>\langle u_0,u_1,\dots,u_{n-1}\rangle</math> to the sum <math>\sum_{i<n}u_ip^i\,</math> is surjective, hence it is also injective. —&nbsp;[[User:EmilJ|Emil]]&nbsp;[[User talk:EmilJ|J.]] 10:32, 16 July 2009 (UTC)
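The counting argument above can be checked computationally for a small case; the choices p = 3 and n = 4 below are illustrative, not from the original question.

```python
from itertools import product

def digits_to_number(u, p):
    # Send the digit sequence <u_0, ..., u_{n-1}> to the sum u_i * p^i.
    return sum(d * p**i for i, d in enumerate(u))

p, n = 3, 4
images = {digits_to_number(u, p) for u in product(range(p), repeat=n)}
# Surjectivity onto {0, ..., p^n - 1}, plus the two sets having equal
# (finite) cardinality, forces the map to be a bijection, as argued above.
print(images == set(range(p**n)))  # True
```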

== Rope cutting formula? ==

My math knowledge is hopeless, and this particular problem can be solved by most teenagers even though I am an adult.
Consider a rope that is about 30 thousand feet long. It is cut exactly into two equal-length ropes. Those two ropes are cut again into 4 equal ropes. This goes on and on until the ropes are about 3 hundred feet long. The question is: what formula gives the number of cuts so that the rope is around 3 hundred feet long? Maybe the formula will not work when the original rope is too long or too short; I have no idea.
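One way to read the question: each round of cutting halves every piece, so after k rounds the pieces have length 30000/2^k, and the answer is the smallest k with 30000/2^k at or below the target. A sketch, using the numbers from the question:

```python
import math

def rounds_of_halving(start_len, target_len):
    # After k rounds of cutting every piece in half, each piece has
    # length start_len / 2**k; solve start_len / 2**k <= target_len.
    return math.ceil(math.log2(start_len / target_len))

k = rounds_of_halving(30_000, 300)
print(k, 30_000 / 2**k)  # 7 rounds leave pieces 234.375 ft long
```

Since log2(100) is about 6.64, six rounds leave pieces of roughly 469 ft and seven leave roughly 234 ft; "about 300 feet" could arguably mean either, which is where the rounding choice comes in.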

Revision as of 12:00, 16 July 2009

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.



July 8

Raise a matrix to a matrix power

Can a matrix be raised to the power of another matrix? NeonMerlin 02:38, 8 July 2009 (UTC)[reply]

I have never heard of such an operation, and I can't yet think of any useful way of giving meaning to it. Why do you ask? Algebraist 02:41, 8 July 2009 (UTC)[reply]
Well, A^n obviously makes sense for a square matrix A and integer n, and there is a matrix exponential exp(A) defined as the obvious power series in A, so why not A^B = exp(B log A) for some suitable "log A" that comes out the right shape? 208.70.31.206 (talk) 02:50, 8 July 2009 (UTC)[reply]
Or A^B = exp((log A)B). Yeah, I thought of that, but I can't see what use it would be. Algebraist 03:02, 8 July 2009 (UTC)[reply]
But math isn't about being useful, especially when it comes to abstract data structures! That's why engineering is taught in a different department. NeonMerlin 03:10, 8 July 2009 (UTC)[reply]
I think what Algebraist meant is that this concept may not have any application within mathematics. Although I agree that the purpose of mathematics is not for mere application in physics and other sciences, mathematical concepts are invented because they are interesting and shed some light on other concepts. As an example, I could define the concept of dimension of a vector space. This is interesting because vector spaces are characterized by their dimension (up to isomorphism). On the other hand, I could define another "concept" - the "product" of two vectors. Just define this product to be zero for any two vectors. This certainly makes the vector space into an algebra but is not nearly as interesting as the concept of dimension. This is not to say that the general idea of multiplying two vectors is not interesting (it has already been defined, in fact) but that some possible products, although amusing, are not so interesting to study. On the other hand, if you can find some interesting properties that the concept of A^B satisfies, and how it applies in matrix theory, I have no doubt that it is useful. --PST 04:00, 8 July 2009 (UTC)[reply]
Well, talking of single functions, people usually prefer to consider a generic exponential function in the form e^cx, rather than a^x, because it is somehow of better use, especially for the purposes of calculus. As to the general definition of a matrix f(A) or f(A,B) &c, as a function of one or more matrices, note that we do it also with operators in place of matrices, or even more generally, in abstract Banach algebras; the function f need not be analytic, nor even continuous: a Borel function may be sufficient, depending on the context. You may check Functional calculus. However, not every nice property of the initial function f need hold for the corresponding f(A); for example, recall that in general exp(A)exp(B) is not such a simple thing as exp(A+B), if A and B do not commute. For the same reason A^B is not as nice an operation as it is with numbers (Algebraist alluded to this point, if you note). --pma (talk) 06:27, 8 July 2009 (UTC)[reply]
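pma's point that exp(A)exp(B) need not equal exp(A+B) for non-commuting A and B is easy to check numerically. The sketch below uses a truncated power series for the matrix exponential, which is adequate only for small matrices of modest norm (for real work one would reach for a library routine such as scipy.linalg.expm):

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def expm(A, terms=40):
    # Truncated power series exp(A) = sum_k A^k / k!
    # -- a demonstration, not a production algorithm.
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        result = mat_add(result, term)
    return result

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]
lhs = mat_mul(expm(A), expm(B))   # exp(A) exp(B)
rhs = expm(mat_add(A, B))         # exp(A + B)
# A and B do not commute (AB != BA), and the results differ:
print(lhs[0][0], rhs[0][0])  # 2.0 versus cosh(1) ~ 1.543
```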
The log of a matrix does sound interesting, so X = log A is such that A = eX. There's all sorts of questions one can ask, for instance the complex logarithm has a number of different possible valid values. Dmcq (talk) 10:21, 8 July 2009 (UTC)[reply]

Note that there are Wikipedia articles on matrix exponentiation and logarithms. The logarithm article addresses the question Dmcq was interested in: "Not all matrices have a logarithm and those matrices that do have a logarithm may have more than one logarithm."

I have yet to find an article discussing Neon's idea, and I certainly don't have the background to figure out what nuances might lie behind such a definition myself. I'm sure the same idea must have been discussed somewhere before; perhaps someone can hunt down an article or discussion, possibly outside Wikipedia. I'll look where I can, but others can pitch in as well. --COVIZAPIBETEFOKY (talk) 14:48, 8 July 2009 (UTC)[reply]

Thinking about this a bit, it occurred to me that in certain circumstances, you could plausibly define a matrix logarithm to a matrix base, i.e., log_A(X) = log(X)/log(A). Readro (talk) 15:05, 8 July 2009 (UTC)[reply]
What is that expression supposed to mean? I can think of at least four possibilities. There's also the alternate pair of definitions: log_A(X) is any Y with exp((log A)Y) = X, or any Y with exp(Y log A) = X. Algebraist 15:27, 8 July 2009 (UTC)[reply]
Take the logarithm of both sides of the right-hand equality there, and you get either log(X) = (log A)Y or log(X) = Y log(A), depending on your definition of A^Y. So your definitions are equivalent, and the ambiguity of Readro's definition is the exact same ambiguity as that of the definition of A^Y. --COVIZAPIBETEFOKY (talk) 14:56, 9 July 2009 (UTC)[reply]
You're assuming log(A) is invertible (you're also ignoring multivaluedness when you take logs; I haven't bothered checking if this changes things). One might also want to interpret Readro's A/B for A and B matrices as another potentially multivalued expression with value any C such that A=BC (or A=CB). Algebraist 15:09, 9 July 2009 (UTC)[reply]
Let's assume for a moment that all the possible values of log(A^Y) match all the possible values of Y log(A) (I'm not going to keep listing the alternative log(A)Y, because I'm lazy, but I haven't forgotten about it and neither should you); I honestly don't know if this is always the case. But if it is, and if we interpret Readro's fraction the way you suggested, then your definitions yield identical equations.
Removing that assumption, there may possibly be a logarithm of A^Y which is not of the form Y log(A) for some choice of logarithm of A (vice versa is not possible, due to our definition of A^Y). That means that Algebraist's definition actually probably encompasses more values of log_A(X) than Readro's. --COVIZAPIBETEFOKY (talk) 19:46, 9 July 2009 (UTC)[reply]
Oh, and I should be the first to admit that, in accordance with the above, I was wrong about your definitions being equivalent. They are equivalent only if the assumption I made in the first paragraph above is valid, but I see no reason to make that assumption. --COVIZAPIBETEFOKY (talk) 19:50, 9 July 2009 (UTC)[reply]
Not all matrices have a logarithm and those matrices that do have a logarithm may have more than one logarithm.... That's OK if we think of a "logarithm of A" just as a matrix B in the pre-image of the map exp, that is, any B such that exp(B)=A. But then one does not speak of a logarithm; one would just say: a point of the pre-image. Usually "logarithm" is used to mean a function, that is, a function that in some domain selects continuously an element in the pre-image of exp. In this sense the quoted sentence is a bit misleading: it is not that there is a function (log) having "more values" at any point, as sometimes people say. A function is a function, and has exactly one value at each point. (Yes, one can consider multifunctions, but the context where multifunctions are of use is quite different from complex analysis, for multifunctions have quite a poor algebra: what is sqrt(1)+sqrt(1)? The set {-2,0,2} maybe?) Let's say that exp(z), as any other complex function, has a local section at any regular point. That is, a function g:U→C such that exp(g(z))=z for all z in some domain U. The idea is very simple and geometrically clear. Each section can be extended to a maximal domain, and some call it "a determination of the logarithm". Each of these functions, as any holomorphic function, may be used in the functional calculus to define g(A), provided spec(A) is a subset of U, for any matrix A, or whatever else we like. --pma (talk) 18:38, 8 July 2009 (UTC)[reply]
Personally I prefer to think in terms of covering spaces if at all possible instead of either cuts or multifunctions. Dmcq (talk) 18:16, 10 July 2009 (UTC)[reply]


July 9

Calculating time to full battery recharge

I need a little help calculating how long it would take to charge the battery pack of an electric vehicle with the following parameters:

The vehicle has six 12-volt flooded electrolyte batteries, and an on-board 72-volt DC charger that plugs into a standard 110-volt AC 15-amp outlet.

The manufacturer of this particularly odious Neighborhood electric vehicle (they're all a little odious) doesn't mention the time to full charge in its published specs- a rather telling omission, I think.

Thanks Wolfgangus (talk) 07:19, 9 July 2009 (UTC)[reply]

This question is already asked on the Science Desk. Please don't post the same question on multiple desks. Rkr1991 (talk) 11:15, 9 July 2009 (UTC)[reply]


July 10

Practical uses of very big numbers

Just ran across Skewes' number while deleting a nonsense page, and I'm just amazed by it. What is the practical benefit of theorising such a large number? Reading Ramsey theory, I can understand (slightly) that Graham's number, because it helps Ramsey's theory, helps us in predicting sequences of some sort of events, although I'm not sure what kind of events. I gather that the two numbers are somehow related (more than by just being very very very big numbers), but I can't see how Skewes' benefits anything "in the real world" [no slam on higher mathematics intended]. Understand, by the way, that I was a history major in college, so I'm (1) altogether unfamiliar with higher mathematics, and (2) accustomed to being asked about the utility of my field of study. Nyttend (talk) 12:20, 10 July 2009 (UTC)[reply]

As explained in the article you linked, the various things called Skewes' number were introduced because Skewes could prove that something rather interesting and unexpected happens at some point lower than that number. It is now known that this phenomenon in fact occurs at a much lower point, so Skewes' original numbers are now just historical curiosities relating to his specific proofs. Algebraist 14:04, 10 July 2009 (UTC)[reply]
Do you mean the section "Skewes' numbers"? Looking at that, I didn't realise that it was an example of the formula given in the intro; for all I knew, those were two proofs that he had done on other topics. Because there's no pi in either expression, and because the greater-than expression included e, I thought it was something different. Assuming that I understand you rightly, I can now see the point of these numbers; thanks. Nyttend (talk) 14:27, 10 July 2009 (UTC)[reply]

One practical area where theories about extremely large numbers can be useful is program verification. Say you have a computer program that defines three functions: 1) f(n) appears to compute something complicated and it's hard to tell quite what it's doing. 2) g(n) = f(n) + 1, and 3) h(n) = g(n) - f(n). You'd like to use equational reasoning to prove that h(n)=1 for all n, regardless of what f is. The problem is this reasoning can fail if f never returns. For example you could give the recursive "definition"

    f(n) = f(n) + 1

and subtracting f(n) from both sides, you get 1=0, not a good basis for sound proofs of anything ;). Of course if you try to treat that definition as an executable program and actually run it, it will simply recurse forever, not giving you an opportunity to show that it's wrong. So your "proof" that h(n)=1 is only valid if you can also prove that f actually terminates and returns a value for each n (i.e. it is a total function). If f doesn't always terminate, incorrectly assuming that it does can likewise let you conclude that 1=0, and the error may be much less obvious than in the blatant recursive example I gave, so it could silently screw up the results of some fancy automated theorem prover trying to reason about the program. For example, if f(n) is defined as the smallest counterexample to Goldbach's conjecture that is greater than n and it works by searching upwards from n, then determining whether even f(0) halts is a famous unsolved math problem.

So here's where the big numbers come in. In general, deducing whether some arbitrary function terminates is called the halting problem and it is unsolvable (there is provably no algorithm that can do it, as has even been proved in verse). You are OK only if f turns out to be one of the functions whose termination you can prove. And the termination might take an extremely large number of steps: for example, f might compute the Ackermann function or even a Goodstein sequence while still being provably total. Numbers like Skewes' and Graham's are pretty big by most everyday standards, but at least you can write down formulas for computing them. A Goodstein sequence grows so fast that you can only prove nonconstructively that it does eventually finish--the number of steps in it, even for fairly small n, makes Graham's number look tiny.
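A concrete toy example of the phenomenon above: the Ackermann function is provably total, but its values (and hence its running time) outgrow every primitive recursive bound, so a verifier that only knows primitive recursion cannot certify its termination.

```python
import sys
sys.setrecursionlimit(100_000)

def ackermann(m, n):
    # Total (always terminates -- the pair (m, n) decreases in a
    # well-ordering at each call), yet not primitive recursive.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))  # 9 61
```

Even ackermann(4, 2) already has 19,729 decimal digits, so the small arguments here are as far as a direct computation can sensibly go.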

So, you've got a situation where you can make a valid and useful deduction (e.g. h(n)=1) about a piece of software only if you can prove that for every n, there's a number t so that f(n) finishes computing in under t steps, where t might be unimaginably enormous. But you don't need to compute t or care how large it is; all you have to do is prove that t exists, to ensure that your reasoning about some other part of the program is actually sound. That, then, is a practical use of theories involving enormous numbers.

Harvey Friedman has written some "Lecture notes on enormous integers" [1]. The math is fairly technical but you might be able to get a sense of the topic just from the English descriptions. 208.70.31.206 (talk) 05:23, 11 July 2009 (UTC)[reply]

Added: and an anecdote by Friedman about the topic of the article I linked. 208.70.31.206 (talk) 21:32, 11 July 2009 (UTC)[reply]
Thanks much for a detailed explanation! I hadn't expected from the first explanation that there was a continuing modern use for these numbers. And the poem was entertaining, too :-) Nyttend (talk) 20:59, 13 July 2009 (UTC)[reply]

Complex logarithm

Hello, I am looking at a solution to the problem "Suppose f is analytic and non-vanishing on an open set U. Prove that log |f| is harmonic on U." The solution here is to show that Log(f(z)) is holomorphic on some neighborhood of each point of U; log |f(z)| is then the real part of it, hence harmonic. But I do not understand the log function very well, especially where it is holomorphic. So the basic idea makes sense, but the details of why the composite is holomorphic do not. I've looked through my undergrad book and it does not seem to say where log is holomorphic (we can assume the principal branch). I also looked at the [[Complex logarithm]] article and it does not help me understand much either. It says, under the section Logarithms of holomorphic functions, that

If f is a holomorphic function on a connected open subset U of ℂ, then a branch of log f on U is a continuous function g on U such that e^g(z) = f(z) for all z in U. Such a function g is necessarily holomorphic with g′(z) = f′(z)/f(z) for all z in U.

What I don't get is, f(z) could be 0 at some point and then this makes no sense for two reasons, e^z is never 0, and the derivative as shown would have a 0 in the denominator. Is the article wrong or do I not understand? Maybe a better question is, "Is the article wrong?" I think I do not understand either way. Any help would be much appreciated! StatisticsMan (talk) 15:02, 10 July 2009 (UTC)[reply]

Your remark is correct, and the article is right. The point of that definition is that it does not state that every f on a domain U admits such a g (maybe it could be good to add a small remark there on this point). As you observe, a first condition is that the image of f should be included in the image of exp (i.e. C\{0}, meaning that f does not vanish in U). You may check e.g. Rudin's Real & Complex Analysis for the full story and the connection with the Riemann mapping theorem, to solve the problem globally on a domain U. As to log|f(z)|, note that the Log in the argument you quoted is a sort of auxiliary function that you just need to have locally, so there is no topology to consider: locally you have your Log for free, because exp is locally invertible. Also, note that you can prove in a less elegant but more elementary way that log|f(z)| is harmonic by direct computation (you can try it if you haven't already done it). Write f(z)=u(x,y)+iv(x,y) where z=x+iy, and log|f(z)|= (1/2)log(u2+v2); then differentiate and use CR. --pma (talk) 15:41, 10 July 2009 (UTC)[reply]
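For the record, the "less elegant but more elementary" computation suggested above can be written out as follows (a sketch: f = u + iv holomorphic and nonvanishing, so u² + v² > 0, Δu = Δv = 0, and the Cauchy-Riemann equations u_x = v_y, u_y = -v_x hold):

```latex
% Set h := \log|f| = \tfrac12 \log(u^2+v^2).  First derivatives:
h_x = \frac{u u_x + v v_x}{u^2+v^2}, \qquad
h_y = \frac{u u_y + v v_y}{u^2+v^2}.
% Differentiating once more and summing, using \Delta u = \Delta v = 0:
\Delta h = \frac{(|\nabla u|^2 + |\nabla v|^2)(u^2+v^2)
  - 2\left[(u u_x + v v_x)^2 + (u u_y + v v_y)^2\right]}{(u^2+v^2)^2}.
% Cauchy-Riemann (v_x = -u_y,\ v_y = u_x) gives |\nabla v|^2 = |\nabla u|^2
% and, expanding the squares, the cross terms \pm 2uv\,u_x u_y cancel:
(u u_x + v v_x)^2 + (u u_y + v v_y)^2
  = (u^2+v^2)\,|\nabla u|^2.
% Hence the numerator is 2|\nabla u|^2(u^2+v^2) - 2(u^2+v^2)|\nabla u|^2 = 0,
% so \Delta h = 0 and \log|f| is harmonic.
```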
In my study group, the guy who did the problem did it the way you mention and this way is simpler and is probably what I would come up with if I tried. But, in my complex analysis class, the professor did it the way I mentioned. I want to understand more so I am trying to understand this one as well. So, are you saying basically that as long as we have a small disc around f(p) where f is not 0 at all in the disk (would we need not 0 in the closure of the disk?), then Log of f is holomorphic on that disk? StatisticsMan (talk) 16:38, 10 July 2009 (UTC)[reply]
Exactly. If a holomorphic function f:U → C has f '(p)≠0 at p in U, then it is locally invertible, meaning that there is an open nbd V subset of U such that f(V) is open and f:V → f(V) has a holomorphic inverse. It is a particular case of the inverse function theorem if you want. Here you just need a local inverse of exp(z) defined in a nbd of a given point p≠0. Such a p is therefore in the image of exp, say p=exp(a) (of course there are many such points a; we just choose one); moreover exp is locally invertible everywhere because its derivative never vanishes; so there is a local inverse of exp, call it Log, defined on a nbd of p, with values in a nbd of a. It satisfies the relation exp(Log(z))=z in the domain of Log (the nbd of p), which is all you need to conclude that Re(Log(z))=log|z| (remember |exp(z)|=exp(Re(z)) ). Going back to the harmonicity of log|f|, look at any z0 in U; take a Log defined in a nbd of p:=f(z0) and write log(|f(z)|)=Re Log(f(z)), where Log(f(z)) is defined in a nbd of z0. There is possibly no Log such that Log(f(z)) is globally defined in the whole of U, but that's no problem at all.--pma (talk) 17:23, 10 July 2009 (UTC)(I've re-edited to change notations or correct, sorry)[reply]
I'm still not understanding this completely but I've thought about it a lot and I understand it better. It's starting to make sense. Thanks for the help! StatisticsMan (talk) 19:58, 10 July 2009 (UTC)[reply]
Summarizing: all you need in order to prove the harmonicity of your log|f(z)| at each z0 in U, is a holomorphic function "Log" defined in a nbd of f(z0) such that exp(Log(w))=w, hence Re Log(w)=log|w|, so log(|f|)=Re Log f(z) in a nbd of z0.--pma (talk) 20:13, 10 July 2009 (UTC)[reply]

This is probably not a good answer (since it uses ideas beyond undergraduate curricula), but this is my solution: if f is analytic on some open set U, then log|f| is subharmonic there (f doesn't have to be non-vanishing). If, in addition, f is nonvanishing, then 1/f is analytic, so log|1/f| is subharmonic and so -log|f| is subharmonic. Hence, log|f| is harmonic. I guess my point is that it is possible to approach the problem from real analysis. (This is a very good problem, so I couldn't resist.) -- Taku (talk) 22:29, 10 July 2009 (UTC)[reply]

Another possibility is: u(x,y):=log|z| is harmonic (by an easy direct check, or because it is the real part of the complex logarithm). It's a general fact that a conformal change of variables in any harmonic function u is still harmonic; that is, u(f(x,y)) is harmonic if f is holomorphic, just because any two-variable harmonic function u is locally the real part of a holomorphic function. --pma (talk) 07:41, 11 July 2009 (UTC)[reply]

Quaternion algebra

The article Quaternion algebra claims:

One illustration of the strength of this analogy concerns unit groups in an order of a rational quaternion algebra: it is infinite if the quaternion algebra splits at ∞ and it is finite otherwise, just as the unit group of an order in a quadratic ring is infinite in the real quadratic case and finite otherwise.

Is this correct? I would expect the unit group of a rational quaternion algebra to be infinite in both cases, since a splitting quadratic field can be embedded in the quaternion algebra in infinitely many ways. --Roentgenium111 (talk) 15:59, 10 July 2009 (UTC)[reply]

Actually, looks right to me. The norm should be positive definite in the nonsplit case, and the order should form a lattice in the algebra, and the units of the order should have norm 1, as in the Hurwitz quaternion situation. (There are only a couple imaginary quadratic fields with nontrivial units, too.) No time to think more (and be more correct. :-))John Z (talk) 23:05, 16 July 2009 (UTC)[reply]

non base 10 math

What is the purpose for non base 10 math? I can understand the uses of Hex, or binary, but are there practical uses for base 5, or base 28 etc? Googlemeister (talk) 19:54, 10 July 2009 (UTC)[reply]

There's nothing particularly useful about them as far as I'm aware. Of course, there's nothing particularly useful about base 10, either; it just so happens that we're using it. Apparently some languages count in quinary. Algebraist 20:29, 10 July 2009 (UTC)[reply]
(ec) Well, maybe not such a big use as base 10, 2, 16 &c. But, for instance, they can be of use in arithmetic computations by hand, for it is very easy to reduce a number mod p^k when written in base p. They give representations for p-adic extensions (via unbounded sequences of digits to the left). In any case, we have them all for free, with no particular storage problems. Even the number 12931/3 is not as used as 3 or 4 are, but we know it is there at any need. --pma (talk) 20:39, 10 July 2009 (UTC)[reply]
Base 28 may be practical for some purposes when you work with an alphabet of 28 symbols, for example 26 letters, space and period. PrimeHunter (talk) 22:04, 10 July 2009 (UTC)[reply]
A base-85 encoding is used in some file formats. -- BenRG (talk) 08:54, 11 July 2009 (UTC)[reply]
Once upon a time I encountered an algorithm for determining base-10 square roots on mechanical calculators (i.e. from the old days before electronic calculators took off). This algorithm actually used base-20 arithmetic internally because it helped to minimize the number of mechanical components involved. Dragons flight (talk) 22:13, 10 July 2009 (UTC)[reply]
I learned that technique a long time ago as the "twenty method", but that does not appear to be a common term based on my lack of search results. It is actually a method for calculating square roots in base ten using a technique similar to long division, described here.
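The "twenty method" name presumably comes from the factor of 20 in the pencil-and-paper algorithm: at each step one finds the largest digit d with (20·root + d)·d not exceeding the current remainder. A sketch for perfect-square and integer inputs (it returns the floor of the square root):

```python
def isqrt_longhand(n):
    # Digit-by-digit (long-division style) square root in base 10.
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits  # work on pairs of digits
    root = remainder = 0
    for i in range(0, len(digits), 2):
        remainder = remainder * 100 + int(digits[i:i + 2])
        d = 9
        # Largest digit d with (20*root + d)*d <= remainder -- the
        # "twenty" that gives the method its name.
        while (20 * root + d) * d > remainder:
            d -= 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root

print(isqrt_longhand(54756))  # 234, since 234**2 == 54756
```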
As for other bases, base 60 has been historically popular and is still present in daily life on clocks (hours, minutes, and seconds) and in angular measurements (degrees, minutes, and seconds). Binary is the natural base for digital circuitry, and thus computers, but for convenience binary digits are often grouped in threes or fours, yielding octal and hexadecimal. As for decimal, the obvious reason that we commonly use that system has to do with the number of fingers we have. It's no coincidence that the word digit refers both to a number and to a part of your anatomy. If we had evolved from three-toed sloths, we might be using base 6 or base 12 on a daily basis, but mathematics as a whole would be the same. -- Tcncv (talk) 00:35, 11 July 2009 (UTC)[reply]
There have been computers that actually used hexadecimal internally, although a hexadecimal digit was still represented as four bits. I'm thinking of the implementation of floating-point numbers on the IBM 360, which stored a mantissa M and exponent E in order to represent the value M×16^E. There was also at least one computer that used base 3 internally, the Russian-built Setun. Some computers from Digital Equipment Corporation, in the days when internal and external memory were both far more expensive than today, used software that was able to store 3 characters of text in a 16-bit word by using a character set with just 40 possible characters and treating the 3 characters as a number in base 40, which was then translated to base 2 to be stored. (40^3 = 64,000 < 65,536 = 2^16.) DEC called this RADIX-50, where the "50" meant 40 but was written in octal (base 8)!! --Anonymous, 07:20 UTC, July 12, 2009.
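The RADIX-50 trick above is just base conversion; a sketch of the packing and unpacking (the 40-symbol alphabet below is an illustrative assumption, not DEC's exact character table):

```python
# 40 symbols: space, A-Z, three punctuation marks, digits 0-9.
# This is an assumed alphabet for illustration, not DEC's exact one.
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def pack(three_chars):
    # Treat the 3 characters as a 3-digit base-40 number.
    a, b, c = (ALPHABET.index(ch) for ch in three_chars)
    return (a * 40 + b) * 40 + c

def unpack(word):
    word, c = divmod(word, 40)
    a, b = divmod(word, 40)
    return ALPHABET[a] + ALPHABET[b] + ALPHABET[c]

w = pack("ABC")
print(w, w < 2**16, unpack(w))  # 1683 True ABC
```

The largest packed value is 40**3 - 1 = 63999, which indeed fits in 16 bits.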
The most useful bases from the point of view of doing common mental arithmetic are those with a lot of factors, especially low factors. Dividing a given number by some divisor is often easier if the divisor is a factor of the base in which you have the number represented - so, for instance, it's easier to divide numbers in decimal by 2 or 5 than by 3. Thus a base with lots of low factors like 2, 3, 4, 5 will make it easier to divide numbers into halves, thirds, quarters, etc. This is why base 60 is used a lot - it's a highly composite number. Maelin (Talk | Contribs) 13:18, 11 July 2009 (UTC)[reply]
IMO the greatest significance of non-decimal representations is to demonstrate that they are possible. As noted above, base 10 is completely arbitrary, but most people don't really understand this fact. They think that 9 must be followed by a two-digit number by some sort of cosmic edict. Some then go and attribute all kinds of significances to the digits constituting a number's decimal representation. Familiarity with writing numbers in different bases helps to understand why this is absurd. -- Meni Rosenfeld (talk) 18:46, 11 July 2009 (UTC)[reply]
Very true. I once heard one guy claiming that the fact that we have ten fingers is possibly due to the Lord wanting to provide men with a kind of hand calculator "not eight, nor twelve, you see". --pma (talk) 15:18, 12 July 2009 (UTC)[reply]
Base64 is widely used on the internet, and in other computer applications. Here, the base is chosen by the number of different characters (“digits”) that are safely available in the ASCII character set, and is also a power of 2, which is handy for software efficiency and simplicity. Red Act (talk) 12:19, 13 July 2009 (UTC)[reply]
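Base64 is available directly in the Python standard library; a minimal round trip makes the point about the base being a power of 2 concrete (3 bytes, i.e. 24 bits, become 4 symbols of 6 bits each, so encoding is pure bit-regrouping rather than general base conversion):

```python
import base64

encoded = base64.b64encode(b"Man")   # 3 bytes -> 4 ASCII-safe characters
print(encoded)                       # b'TWFu'
print(base64.b64decode(encoded))     # b'Man'
```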

question about 0.333...

That's a number, right? Am I out of my mind? I was discussing 0.999... repeating and someone said that 0.333... is a limit, and I told him that 0.333... isn't a limit, it's a number. He then told me he has a BS in mathematics and asked for my credentials... am I going crazy? I know it can be expressed as a limit (as well as an infinite series), but is 0.333... itself a limit? Thanks--12.48.220.130 (talk) 20:04, 10 July 2009 (UTC)[reply]

Sure, it's a limit. It's also a number. Why do you think a limit shouldn't be a number, or a number shouldn't be a limit? --Trovatore (talk) 20:14, 10 July 2009 (UTC)[reply]
(ec)Yes, 0.333.. is a number, 1/3, and a limit (of real numbers) is itself a number. In fact you may also say that 0.333.. is not a number, nor a limit, it is a representation of a number. --pma (talk) 20:21, 10 July 2009 (UTC)[reply]
No, you're not crazy. As many other people have indicated, 0.333... represents both a number and a limit. As a side note, your acquaintance ought to be less defensive and arrogant. Asking for your credentials for asserting correctly that 0.333... is a number, or even for asserting incorrectly that it is not a limit, is obnoxious. Michael Slone (talk) 03:36, 11 July 2009 (UTC)[reply]

But how can it be a limit? The mathematical limit article says that it has to be a function in order to be a limit. And if it was a limit, wouldn't it mean that 2 or pi would also be limits?--12.48.220.130 (talk) 20:47, 10 July 2009 (UTC)[reply]

That's not what the article says at all. Functions (and sequences, and suchlike things) can have limits, but the limits themselves are simply numbers (at least in the cases we're talking about). Any given real number is the limit of any one of many real sequences or real-valued functions. The notation '0.333…' denotes the number 1/3 by giving a specific sequence (0, 0.3, 0.33, 0.333, 0.3333,…) of which 1/3 is the limit. Similarly, I could if I wanted to refer to the number 2 with the curious notation '1.999…', using the fact that 2 is the limit of the sequence 1, 1.9, 1.99, 1.999, …. Algebraist 20:55, 10 July 2009 (UTC)[reply]
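Exact rational arithmetic makes the point concrete: the partial sums 0.3, 0.33, 0.333, ... close in on 1/3, with the gap shrinking tenfold at each step.

```python
from fractions import Fraction

partial = Fraction(0)
for k in range(1, 6):
    partial += Fraction(3, 10**k)          # append another digit 3
    print(partial, Fraction(1, 3) - partial)
# The gap after k digits is exactly 1 / (3 * 10**k); the limit of the
# sequence of partial sums is the number 1/3, which is what the
# notation 0.333... denotes.
```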

Ok, but if I showed a bunch of mathematicians the symbol "2", the first thing that would come to their mind is "oh, that's a number", not "oh, that's a limit".--12.48.220.130 (talk) 21:35, 10 July 2009 (UTC)[reply]

I think you have some fundamental misunderstanding of what a limit is, or maybe you just expect it to carry more baggage than it does. Saying that something "is a limit" conveys exactly nothing -- anything at all can be a limit. --Trovatore (talk) 21:43, 10 July 2009 (UTC)[reply]
But if you showed them 2.000... they might think of limit first, and if you showed them 2/1 they might think of a fraction. A limit can be a way to express a number. You would normally only use the word limit about a number when the number is expressed in a way referring directly or indirectly to a limit. Compare to the possibly simpler term "sum". Is 2.0 a number or is it a sum (for example 2 + 0/10)? It can be viewed as both, and as several other things. PrimeHunter (talk) 21:56, 10 July 2009 (UTC)[reply]
Let me try an analogy: Consider <math>f(x)=x^2</math>. Clearly <math>f</math> is a function, but <math>f(3)</math> isn't a function, it's a number, it's 9. In the same way (0.9, 0.99, 0.999, ...) is a sequence, but the limit of that sequence is just a number, 1. --Tango (talk) 22:22, 10 July 2009 (UTC)[reply]
I probably do this a lot, but let me make clear what is important in this context. Mathematicians are not particularly concerned with what 2, or 1, or 0.333... actually means but rather, they treat these "numbers" as a collection of symbols which together form a set (of symbols). But do not interpret this incorrectly - it is not that mathematicians simply remove meaning from these symbols, but rather, they study the relations between them (in fields such as number theory and calculus, for instance). For instance, assuming I am a mathematician, if you told me just the number 2, I would frankly not think of anything in particular. However, if you told me every integer, I would start thinking about prime numbers and all sorts of concepts in the realm of number theory. Therefore, a number alone does not mean anything (whether a limit, or "a function"), but if you tell me every number (that is, give me a context), then I can talk about limits and other such concepts. --PST 03:07, 11 July 2009 (UTC)[reply]
Well, the axiomatization of real numbers I was forced to not understand as an undergrad defined the reals as (the limits of) (equivalence classes of) Cauchy sequences over the rationals - see Construction_of_the_real_numbers#Construction_from_Cauchy_sequences. So in that sense, every real number very directly is a limit. --Stephan Schulz (talk) 08:04, 11 July 2009 (UTC)[reply]
That's not an axiomatization, that's a construction. Axiomatizations of the reals don't define them to be anything at all. Algebraist 12:12, 11 July 2009 (UTC) [reply]
Anyway, I think that the doubts of the OP are due to a slight ambiguity of language. Strictly speaking a number and a limit are two different concepts (if not, why two different terms?). But a limit of a sequence (of real numbers) is itself a number by definition, that is, a certain number with a special property with respect to that sequence. And any number is, of course, a limit of some sequence, and, according to some constructions of the real numbers, it is a certain limit by definition. --pma (talk) 09:42, 11 July 2009 (UTC)[reply]
The expression "0.333..." is shorthand for the series <math>\sum_{n=1}^{\infty}\frac{3}{10^n}</math>, because of the way decimal numbers are defined. Since you can't really evaluate the sum directly with that infinity there, it's also taken to mean <math>\lim_{N\to\infty}\sum_{n=1}^{N}\frac{3}{10^n}</math> (explained in more detail at infinite series). You can evaluate this limit, which turns out to be identical to <math>\tfrac{1}{3}</math>. So 0.333... represents a limit, and also the number which is the result of evaluating that limit. (The last part is fairly standard terminology - when a mathematician refers to a "limit", she can be either referring to the limit expression itself, or to the numerical result of evaluating that expression.) -- 128.104.112.84 (talk) 22:01, 11 July 2009 (UTC)[reply]

But that's when I think it gets messy. Because if you say "oh, 0.333... is just shorthand for <math>\sum_{n=1}^{\infty}\frac{3}{10^n}</math>", then what's to stop someone from saying that <math>\pi</math> is shorthand for some series that sums to <math>\pi</math>, or that two is shorthand for some series that sums to 2? Does it really matter that 0.333... is a repeating decimal in deciding whether to call it a limit or not?--12.48.220.130 (talk) 13:33, 12 July 2009 (UTC)[reply]

Let's back up a little. We use a positional number system with ten as the base. That is, the '2' in "20" and the '2' in "200" mean different things. It tends to be easier to discuss these things when using a different base, so let's look at the value "258<sub>hex</sub>" in hexadecimal (base 16). Each position to the left of the decimal point is worth the next power of 16. So "258<sub>hex</sub>" in hexadecimal is equivalent to 2*16<sup>2</sup> + 5*16<sup>1</sup> + 8*16<sup>0</sup>. That's the way a hexadecimal number is defined. The same holds true for decimal numbers. 376<sub>dec</sub> is defined to mean 3*10<sup>2</sup> + 7*10<sup>1</sup> + 6*10<sup>0</sup>. This scheme holds for numbers to the right of the decimal point too, but in that case you're reducing the exponent by one for each position to the right. "0.02" is defined to mean 2*10<sup>−2</sup>. So when you write something like "0.333...", by the definition of what that sort of decimal representation means, you're writing a shorthand for "0*10<sup>0</sup> + 3*10<sup>−1</sup> + 3*10<sup>−2</sup> + 3*10<sup>−3</sup> ...", which is equivalent to the more compact <math>\sum_{n=1}^{\infty}\frac{3}{10^n}</math>. That series is the most straightforward way of transforming "0.333..." into a form which is mathematically tractable (that is, into a form which you can then use in subsequent calculations). The other examples you give do not proceed directly from definitions. Pi is defined as the ratio of the circumference to the diameter of a circle in Euclidean geometry - the series is one of the ways to calculate that ratio. The other one is just a series which evaluates to two, and has nothing to do with the definition of two. When you start writing things like "0.333...", you're playing with notation and in doing so you must be aware of what the notation means. The most straightforward, minimal translation of the concept of "0.333..." is the above infinite series - and if you want to know what the value of an infinite series is, you use a limit. So the value of "0.333..." is defined by the limit simply because it is a repeating decimal, which is equivalent - by definition - to an infinite series.
-- 128.104.112.84 (talk) 16:31, 12 July 2009 (UTC)[reply]
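The positional evaluation spelled out above is mechanical enough to code; a small sketch (the function `place_value` is a hypothetical helper, not a standard one):

```python
def place_value(digits, base):
    """Evaluate an integer string positionally: each digit times base**position,
    counting positions from the right, exactly as described above."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += int(digit, base) * base**position
    return total

print(place_value("258", 16))  # 600, i.e. 2*16^2 + 5*16^1 + 8*16^0
print(place_value("376", 10))  # 376
```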

But any number can be, by definition, an infinite series. Just because you chose to define 0.333... as an infinite series doesn't mean that someone can't arbitrarily decide to define 2 or <math>\pi</math> as an infinite series. That's just the number system we use. In base 10, 0.333... is an infinite series. But in other number bases, such as base 4, 0.333... = 1. And numbers that are infinite series in base 4 might not be infinite series in base 10. So defining numbers by their decimal representation is flawed, especially considering that fractions came first.--12.48.220.130 (talk) 22:28, 14 July 2009 (UTC)[reply]

1/3 represents an element of the rational numbers, and we don't need the concept of a limit to understand what element we're talking about. There are sequences that have 1/3 as a limit, just like any other number. In the simplest case, for any element x of a metric space, the sequence x, x, x, x,... has a limit of x, so being a limit is not a particularly notable property. However, the decimal representation of 1/3, "0.333...", implicitly uses the concept of the limit to describe what value we're talking about. That notation can be thought of as describing the sequence 0.3, 0.33, 0.333,... which has 1/3 as its limit. Since 1/3 can't be described as a terminating decimal, we allude to it by describing a sequence of terminating decimals that approaches it. Note that limits are only involved in the notation, and have nothing to do with the number itself. So it could be said that the expression "0.333..." refers to a limit, but you're also right that the number it describes is just a number.
An important side note though is that there are cases where there are "limits" that aren't "numbers". If M is a metric space that isn't complete, then there are Cauchy sequences where the "limit" isn't actually in the set M. For example, pi isn't in the rational numbers, but we can construct a sequence of rationals that has pi as a limit. So in the space of rationals, we can say that pi isn't a "number", even though it is a "limit" (quotation marks on "limit" because technically such a sequence doesn't have a limit). If we extend the rational numbers by adding elements that correspond to Cauchy sequences that didn't have limits before, then we get the real numbers, which are said to be the completion of the rationals. Back to the example of 1/3, we can define a metric space M that is the set of numbers that can be expressed as terminating decimals. 1/3 is not a number in M, but it is a "limit". That is, we can identify 1/3 with certain Cauchy sequences that don't have a limit in M. In particular, the sequence 0.3, 0.33, 0.333... has the properties we're looking for. The completion of M turns out to also be the real numbers. On the other hand 1/3 is obviously an element of both the reals and the rationals. Rckrone (talk) 19:00, 15 July 2009 (UTC)[reply]
Not sure how much this will add to the above responses, but let's back up a little more. I'll assume we have already constructed the real numbers somehow (say, using Cauchy sequences), and start with defining a decimal expansion (other people might give a different, but ultimately equivalent definition):
A decimal expansion is a function <math>f:\mathbb{Z}\to\{0,1,\dots,9\}</math> with the property that there is some <math>N</math> such that for every <math>i>N</math> we have <math>f(i)=0</math>.
Next, I will define the real number represented by a decimal expansion:
Let f be a decimal expansion. The real number represented by f is defined to be the sum of the infinite series <math>\sum_{i=-\infty}^{N}f(i)\,10^i</math>.
That this is well-defined is left as an exercise. It can also be proven that every positive real number is represented by at least one, and at most two, decimal expansions.
Next we have a convention to use a string of ASCII characters to denote both any terminating decimal expansion (having only finitely many nonzero entries) and the number it represents. You start with writing the first nonzero digit, write all subsequent ones up to the digit at position 0 (and write it even if the number has no integer part), write a dot, and then write all subsequent digits in order until the last nonzero one. So the decimal expansion with <math>f(2)=f(1)=f(0)=6</math>, <math>f(-1)=f(-2)=f(-3)=f(-4)=6</math>, and <math>f(i)=0</math> otherwise is denoted by the string 666.6666. This string also denotes the number represented by the decimal expansion.
Next we have a convention to denote repeating decimal expansions. The formal convention uses overbars, dots or parentheses, while the informal convention says "write the repeating part enough times to make it clear what it is, and then write an ellipsis". According to this convention, the string 0.333... denotes the decimal expansion with <math>f(i)=3</math> for every <math>i\le-1</math> and <math>f(i)=0</math> for every <math>i\ge0</math>.
The string also denotes the number represented by this expansion, which is <math>\sum_{i=1}^{\infty}3\cdot10^{-i}</math>, which is 1/3.
So, it is in this sense that 0.333... is "defined" to be <math>\sum_{i=1}^{\infty}3\cdot10^{-i}</math>. It's not that 1/3 somehow has the property of being an infinite series (as you have correctly noted, all real numbers are the sum of some infinite series). It's just that the decryption of what the notation "0.333..." means involves the use of an infinite series. The decryption of the notation "<math>\pi</math>", however, does not involve infinite series at all - it only involves our definition of what <math>\pi</math> is (which is usually the ratio of the circumference and diameter of a circle). -- Meni Rosenfeld (talk) 22:46, 15 July 2009 (UTC)[reply]

Distance between points -- using lat/lon

OK, I've got two points on the surface of the earth, identified only by their lat/long coordinates in DMS. I'm wanting to know the "straight-line" distance between them, in miles or km.

First, I have to convert everything to one unit, logically degrees, but seconds might actually make the endgame easier since I recall 1 second of lat is close to 1 mile. Further, I recall that longitudinal distance has to be reduced by the cosine of the latitude ... but I can't work out the rest of it.

BUT, maybe I don't have to think much harder than that, i.e. Pythagoras is Close Enough. For "small" distances, the corrections for a spherical or even ellipsoidal surface won't make a noticeable difference from a plane. For example, if the planar distance is 1000 km and the spherical distance is 999 or 1001, that's 1/10 of 1%. At what point does "small" become significant? --DaHorsesMouth (talk) 22:20, 10 July 2009 (UTC)[reply]

1. 1 nautical mile is about 1 minute of arc at the equator, not 1 second.

2. The simplest approach to finding spherical distance is to convert the lat/lon to rectangular coordinates where the center of the earth is at the origin. You get two 3-dimensional vectors from which you can easily compute the dot product, which gives you the angle between the vectors. Since you know the earth's radius, the angle lets you figure out the distance. 208.70.31.206 (talk) 03:17, 11 July 2009 (UTC)[reply]
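That recipe can be sketched directly, assuming a spherical earth of mean radius 6371 km (the function name and the radius constant are my choices):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean radius; a simplifying assumption

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance from two lat/lon pairs (decimal degrees),
    via the dot product of the unit position vectors."""
    def unit_vector(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo),
                math.cos(la) * math.sin(lo),
                math.sin(la))
    a, b = unit_vector(lat1, lon1), unit_vector(lat2, lon2)
    dot = sum(x * y for x, y in zip(a, b))
    angle = math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding
    return EARTH_RADIUS_KM * angle

print(great_circle_km(0, 0, 0, 90))  # a quarter of the equator, ~10007.5 km
```

For DMS input, convert to decimal degrees first (degrees + minutes/60 + seconds/3600).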

Great-circle distance should be relevant here. Michael Hardy (talk) 05:19, 11 July 2009 (UTC)[reply]
Great circle is usually sufficient for most cases, but as the earth isn't an exact sphere (it is more an oblate spheroid), a more accurate method is Vincenty's formulae. —3mta3 (talk) 07:52, 11 July 2009 (UTC)
Actually the article you want is Geographical distance. It is all covered there. —3mta3 (talk) 11:21, 11 July 2009 (UTC)[reply]

Proves once again that knowing what something is properly called is the single best starting point for learning about it. Thanks to all! Issue is resolved.

July 11

July 12

July 13

Points on square grid

I presume that this is a well-known result, but I couldn't find it. What is the greatest number of points which can be marked on an n by n square grid so that no three are collinear? Drawing successive cases suggests that for n=0, 1, 2, 3 and 4, the answer is 0, 1, 4, 5 and 6 - is this correct so far, and what's the general result?—86.132.235.208 (talk) 18:21, 13 July 2009 (UTC)[reply]

You're wrong with the last two values: for n=3, the answer is 6 (take all except one diagonal) and for n=4, it's 8 (take the inner two of each side of the square). For arbitrary n, the number can be no higher than 2n (since every line or column can only contain two marked points), but I'm not sure if this value can always be achieved for n>4 (for n=5, I get no further than 9 marked points). --Roentgenium111 (talk) 22:02, 13 July 2009 (UTC)[reply]
10 points are possible for <math>n=5</math>:
**---
*--*-
---**
-**--
--*-*
For <math>n=6</math>, 12:
**----
---*-*
-*--*-
*-*---
----**
--**--
And it does not seem completely implausible that this generalizes to every n. -- Meni Rosenfeld (talk) 15:29, 14 July 2009 (UTC)[reply]
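The 5×5 pattern above can be verified by brute force: three points are collinear exactly when a cross product vanishes, so checking all triples suffices (a quick sketch):

```python
from itertools import combinations

grid = ["**---",
        "*--*-",
        "---**",
        "-**--",
        "--*-*"]
points = [(r, c) for r, row in enumerate(grid)
          for c, ch in enumerate(row) if ch == "*"]

def collinear(p, q, r):
    # The three points lie on one line iff the cross product of (q-p), (r-p) is 0.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

ok = not any(collinear(*t) for t in combinations(points, 3))
print(len(points), ok)  # 10 marked points, and no three are collinear
```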
Some Google-fu turned up this summary from Research Problems in Discrete Geometry by Peter Brass, W. O. J. Moser, János Pach, p417: "In a long sequence of papers ... many examples were constructed, up to n = 52, for which the bound 2n is obtained. Most of these sets were found by computer search and no general pattern has emerged". Gandalf61 (talk) 16:22, 14 July 2009 (UTC)[reply]

Average of the first billion digits of the decimal expansion of Pi

Is it possible to know what the answer to that is - in which case what is it and how did you work it out - without laboriously adding them up? -- JackofOz (talk) 20:58, 13 July 2009 (UTC)[reply]

In other words, you add the numerical values of the digits and divide by 1,000,000,000? If that's what you mean, couldn't you just put them all into a spreadsheet? Nyttend (talk) 21:02, 13 July 2009 (UTC)[reply]


Do you want an exact answer? I doubt there's any clever trick to speed this calculation up faster than simply adding up the digits. (Well, actually there is — just print out the right answer. But I suppose you want a justified method.)
On the other hand, if you're OK with an answer that's good to 3 or 4 decimal digits, the answer is 4.5. --Trovatore (talk) 21:09, 13 July 2009 (UTC)[reply]
That's close enough. Thanks for the quick responses. -- JackofOz (talk) 21:18, 13 July 2009 (UTC)[reply]
Using the table here, the first billion digits of <math>\pi</math> add up to 4500057062. Also to be found in this OEIS sequence. Fredrik Johansson 21:28, 13 July 2009 (UTC)[reply]

"Trovatore" didn't really state his reasoning, only the bottom-line number. Here's the rest: The average of the ten digits 0 through 9 is

(0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9)/10 = 45/10 = 4.5.

That's Trovatore's answer.

That's approximately correct if the ten digits occur equally frequently in the long run. But here's a hard question: Do the ten digits really occur equally frequently?

For that we have only statistical evidence, not a mathematical proof.

In one sense, the answer is clearly "no": if the sum is 4500057062 instead of 4500000000 (i.e. 4.5 billion) then it deviates slightly from exact equality. But it is conjectured that you can get as close as you want to equality of those ten frequencies by making the number of digits big enough.

If you want to get into statistical evidence, then we'd also talk about pairs of consecutive digits, and triples of consecutive digits, and so on. The number cited above, 4500057062, does not deviate from 4.5 billion by more than would be predicted by the full-fledged conjecture dealing with pairs, triples, etc. occurring equally frequently. One could get into details of how that conclusion was reached as well.

But the way Trovatore came up with 4.5 is just that it's the average of the ten digits 0 through 9. Michael Hardy (talk) 23:25, 13 July 2009 (UTC)[reply]

It is conjectured but unproven that pi is a normal number (Trovatore's answer being wrong would have been interesting evidence against the conjecture). However, if you just want the billionth digit without having to compute all the preceding ones, there is a beautiful spigot algorithm for obtaining it, at least if you don't mind a hexadecimal rather than decimal expansion. 70.90.174.101 (talk) 01:08, 14 July 2009 (UTC)[reply]

I'm supremely indifferent to what the billionth digit is, but thanks anyway. I was too naive in accepting Trovatore's answer as the answer to the question I asked. It's probably very close to 4.5, but just for the fun of it, can anyone produce the actual average I'm seeking, to, say, 9 decimal places? -- JackofOz (talk) 09:23, 14 July 2009 (UTC)[reply]
4.500057062 - see Fredrik Johansson's response above. Gandalf61 (talk) 09:46, 14 July 2009 (UTC)[reply]
Why were you "too naive"? I was right, wasn't I? --Trovatore (talk) 21:46, 14 July 2009 (UTC)[reply]
Note that Trovatore's answer was right even with respect to the precision: three or four decimal digits. --pma (talk) 14:26, 15 July 2009 (UTC)[reply]
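Arithmetic on the digit sum cited above gives the exact average, and also shows that the deviation from 4.5 is unremarkable on the scale expected for a billion uniform digits (a back-of-the-envelope check; the uniform-digit model is the conjecture discussed above, not a proven fact):

```python
import math

N = 10**9
digit_sum = 4_500_057_062  # sum of the first billion decimal digits of pi, cited above
average = digit_sum / N
print(average)  # 4.500057062

# If the digits behaved like i.i.d. uniform draws from 0..9, one digit has
# variance (10**2 - 1)/12, so the sample mean typically deviates from 4.5
# by about sigma/sqrt(N); the observed deviation is well inside that scale.
sigma = math.sqrt((10**2 - 1) / 12)  # ~2.872
print((average - 4.5) / (sigma / math.sqrt(N)))  # ~0.63 standard errors
```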

Randomly choosing one object out of a countably infinite set

During a physical-world discussion of a subject that has caused much distress here, the person I was discussing it with suggested that it would be much easier to look at from the point of view of picking a door from an infinite set of doors. This has caused me great distress independent of the problem. Let's look at the odds of randomly selecting one element from N with equal probability, that is P(1) = P(k), for any k in N. By the definition of probability, P(1) is either 0 or an element of (0..1]. The sum of all P(x) (=P(1)), x in N, equals 1, but by the Archimedean property, if P(1) > 0, there exists some n such that n * P(1) > 1, so P(1) = 0. Thus P(1..n) = n * P(1) = 0, and the limit as n goes to infinity of 0 is 0. The only conclusion I can come to is that it's not meaningful to speak of selecting an element from N with equal probability?!? I'm not sure what I'm missing here, but I think I'm missing something.--Prosfilaes (talk) 23:17, 13 July 2009 (UTC)[reply]

You're missing nothing. There is no uniform countably additive probability measure on the natural numbers. Algebraist 00:07, 14 July 2009 (UTC)[reply]
In many cases, such as the German tank problem, a probability distribution f(k)=1/Ω, for k=1...Ω, and f(k)=0 elsewhere, for some large value of Ω, may serve as a prior distribution. After computation of a posterior distribution you may take the limit of infinite Ω. It is incorrect to take the limit first and compute afterwards. Bo Jacoby (talk) 10:15, 14 July 2009 (UTC).[reply]
As a consequence of what is stated in the first reply, if you want the distribution to be uniform you have to renounce countable additivity. There are translation-invariant, finitely additive probability measures on N; they are all in the dual of <math>\ell^\infty</math>. -pma (talk) 16:13, 14 July 2009 (UTC)[reply]

July 14

Proving inequalities

Say <math>a+b+c=1</math>. Due to the AM-GM inequality, <math>\sqrt[3]{abc}\le\frac{a+b+c}{3}=\frac{1}{3}</math>, which means that the maximum value of abc is 1/27. Now, what values of a, b and c give this maximum value? Obviously it's when they're all 1/3, but is there some way to prove this? On a related topic, if I want to prove a symmetric inequality in a, b and c where <math>a+b+c=1</math>, is it fair to say that each summand can be treated the same (since you can swap the variables around...?) and somehow prove the inequality using this? Is there a name for the kinds of things I'm talking about, where you have inequalities with permutations of variables? --wj32 t/c 11:27, 14 July 2009 (UTC)[reply]

The AM-GM inequality says that the AM and GM are equal if and only if all terms are equal, which in your example gives <math>a=b=c=\tfrac{1}{3}</math>. For the second question, what you say isn't true in general. AndrewWTaylor (talk) 11:38, 14 July 2009 (UTC)[reply]
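The equality case can also be probed numerically: an exhaustive search over an exact rational grid with a + b + c = 1 peaks at exactly 1/27, attained at a = b = c = 1/3 (a sketch; the grid resolution n = 30 is an arbitrary choice):

```python
from fractions import Fraction
from itertools import product

n = 30  # grid resolution: a, b, c range over multiples of 1/30 summing to 1
best, argbest = Fraction(-1), None
for i, j in product(range(n + 1), repeat=2):
    k = n - i - j
    if k < 0:
        continue
    a, b, c = Fraction(i, n), Fraction(j, n), Fraction(k, n)
    if a * b * c > best:
        best, argbest = a * b * c, (a, b, c)

print(best, argbest)  # maximum product 1/27 at (1/3, 1/3, 1/3)
```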
You can only "swap variables around" when there's symmetry. Suppose you start with the assumption <math>a+b+c=1</math>. So far a, b and c are symmetrical. Then you evaluate your sum, starting with a summand involving a and b. After this step, a and b are still symmetrical, but c is not symmetrical with them. You have included a summand which has a and b, but not one that contains c. So from this point forth, c is very different from a and b and you can't interchange them.
By the way, in your first question you have forgotten to state that <math>a,b,c>0</math> (not sure if this is required for the second). -- Meni Rosenfeld (talk) 12:55, 14 July 2009 (UTC)[reply]

My favorite way to see the AGM inequality is like this. Say x=abc where a+b+c is fixed. Now let u=(a+b)/2 and v=(a−b)/2, so that a=u+v and b=u−v. Suppose you adjust a and b while keeping their sum constant. That means that u doesn't change, while the product ab=u<sup>2</sup>−v<sup>2</sup> is clearly at maximum when v=0, which means a=b. By symmetry (since you could have permuted the variables in any way) the product is at maximum when a=b=c. —Preceding unsigned comment added by 70.90.174.101 (talk) 12:24, 14 July 2009 (UTC)[reply]

Lagrange multipliers shows the general technique for this sort of problem. HTH, Robinh (talk) 19:45, 14 July 2009 (UTC)[reply]
Thanks for all your answers. Say I changed the problem to proving a similar symmetric inequality where <math>a+b+c=1</math> and <math>a,b,c>0</math>. Could I use symmetry to prove it by rewriting the sum as three times a single summand? --wj32 t/c 01:20, 15 July 2009 (UTC)[reply]
No, are you sure you have read my explanation above?
Symmetry allows you to do things like assuming, without loss of generality, an ordering on the variables (say <math>a\le b\le c</math>). Since both the condition and the result are, as a whole, symmetrical, and there must be some ordering, you may as well assume that a is the least and c is the greatest. But you can't just change the inequality arbitrarily.
The whole point in such theorems is that to satisfy the condition, if one variable increases, another must decrease. Thus an increase in one summand is offset by a decrease in another. You destroy all that if you keep only one summand - you can change its value by playing with the variables without any consequences.
Anyway, neither of these statements can be valid, since the LHS is unbounded under the condition. -- Meni Rosenfeld (talk) 10:02, 15 July 2009 (UTC)[reply]

regarding Bernoulli trials

Dear Wikipedians:

The question reads "Bernoulli trials with probability p of success and q of failure, what is the probability of 3rd success on 7th trial and 5th success on 12th trial?"

I reasoned that 3rd success on 7th trial means exactly 2 successes in first 6 trials, and 5th success on 12th trial means exactly 1 success among the 8th, 9th, 10th and 11th trials (4 trials in total), therefore my answer to this question is:

(6 choose 2)p²q⁴ + (4 choose 1)pq³.

How sound is my reasoning?

Thanks for the help.

70.31.152.197 (talk) 16:46, 14 July 2009 (UTC)[reply]

First, 3rd success on the 7th trial means exactly 2 successes in the first 6 trials and a success on the 7th trial, so it's <math>\binom{6}{2}p^3q^4</math>. Similarly you need a success on the 12th trial, so the second part is <math>\binom{4}{1}p^2q^3</math>. Finally, you want both to happen, so you need to multiply the parts rather than add them. So it should be <math>\binom{6}{2}\binom{4}{1}p^5q^7</math>. -- Meni Rosenfeld (talk) 17:02, 14 July 2009 (UTC)[reply]
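The answer above can be double-checked by enumerating all 2^12 outcome sequences and summing the probabilities of the qualifying ones (a brute-force sketch; p = 0.3 is an arbitrary test value):

```python
from itertools import product
from math import comb, isclose

p = 0.3
q = 1 - p

# Keep exactly the outcomes whose 3rd success lands on trial 7
# and whose 5th success lands on trial 12.
total = 0.0
for seq in product((0, 1), repeat=12):
    if (seq[6] == 1 and sum(seq[:7]) == 3
            and seq[11] == 1 and sum(seq) == 5):
        total += p**sum(seq) * q**(12 - sum(seq))

formula = comb(6, 2) * comb(4, 1) * p**5 * q**7
print(isclose(total, formula))  # True
```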

Simplifying this sum

Is there a simple expression for this sum?
<math>\sum_{n=1}^{N}\frac{1}{n^2}</math>
—Bromskloss (talk) 20:46, 14 July 2009 (UTC)[reply]

(pi^2)/6 --pma (talk) 20:56, 14 July 2009 (UTC)[reply]
That's the limit as N tends to infinity. I don't think there's a nice expression for the partial sums. Algebraist 21:00, 14 July 2009 (UTC)[reply]
Oops, too much in a hurry, didn't notice the "N". However, we can easily write an analytic function S(z) such that the sum above is S(N). Could that be of any interest? --pma (talk) 21:07, 14 July 2009 (UTC)[reply]
Maple gives <math>\frac{\pi^2}{6}-\psi^{(1)}(N+1)</math>, where <math>\psi^{(n)}</math> is the nth polygamma function. Whether or not you consider that simple is up to you. It's not elementary so I wouldn't consider it simple. Irish Souffle (talk) 21:05, 14 July 2009 (UTC)[reply]
If you want to exhibit your sum as a value of an analytic function at z=N, just write
<math>S(z)=\sum_{n=1}^{\infty}\left(\frac{1}{n^2}-\frac{1}{(n+z)^2}\right)</math>.
You can further expand each term in a geometric series and rearrange into a power series; you can do it by the absolute convergence (however, within a radius of convergence 1). This way in fact you find the quoted polygamma. Added note: the above S(x), restricted to the positive semi-axis, is the unique increasing solution to the functional equation S(x)=S(x−1)+1/x<sup>2</sup> for all x>1. --pma (talk) 21:28, 14 July 2009 (UTC)[reply]
PS: Irish Souffle, I don't agree with your "not elementary hence not simple"; there are elementary things that are not at all simple and simple things that are not at all elementary; in fact usually in math we make things less elementary exactly in order to make them simpler --pma (talk) 21:34, 14 July 2009 (UTC)[reply]
It's simple in the sense that it is nice, neat, and compact, but if he was planning on calculating things by hand I'm not sure that he'd think of it as simple (unless there's an easy way to evaluate the polygamma function at integers; I've never used it before). Irish Souffle (talk) 21:46, 14 July 2009 (UTC)[reply]
It's also called a generalized harmonic number, specifically <math>H_N^{(2)}</math>. The Euler–Maclaurin formula can probably be used to evaluate it numerically. -- Meni Rosenfeld (talk) 22:09, 14 July 2009 (UTC)[reply]
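Numerically, Euler–Maclaurin does work well here: the direct partial sum plus a short tail correction reproduces π²/6 to many digits (a sketch; truncating the tail at three terms is my choice):

```python
import math

def h2(N):
    """Generalized harmonic number H_N^(2) by direct summation."""
    return sum(1.0 / k**2 for k in range(1, N + 1))

def tail(N):
    # Euler-Maclaurin estimate of sum_{k>N} 1/k^2; the next term is O(1/N^5).
    return 1 / N - 1 / (2 * N**2) + 1 / (6 * N**3)

N = 100
print(h2(N) + tail(N) - math.pi**2 / 6)  # on the order of 1e-12
```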

July 15

Bounded variation

So, I'm working on a qualifier problem, as usual. Here, I just want to know if there is a typo. It asks about a function being of bounded variation on (0, 1):

If f is uniformly continuous on the open interval (0, 1), then f is of bounded variation on (0, 1).

Prove or disprove. See, the thing is, Royden, de Barra, and Wikipedia all define bounded variation only on closed intervals. So, is it even defined on an open interval? And, if so, what does it mean? There seems to be some sort of minor/major error on every qualifying exam, so I would not be surprised if it's wrong. But, I could also see the definition just being f is BV(a, b) when f is BV[c, d] for every a < c < d < b. StatisticsMan (talk) 01:40, 15 July 2009 (UTC)[reply]

A uniformly continuous function on (0, 1) extends uniquely to a uniformly continuous function on [0, 1], so it might just mean that the latter function is of BV. Algebraist 01:46, 15 July 2009 (UTC)[reply]
Exactly. The variation of f on an open interval I is defined the same way as in the case of closed bounded intervals. If you (SM) are OK with the definitions on closed intervals, you can just say it is the sup of the variation of f on all bounded closed subintervals. For the easy counterexample on (0,1), the hint is: such an f should oscillate a lot, so as to have unbounded variation, still not too much, because it needs to have limits at 0 and 1. So, look for the right α in f(x):=x<sup>α</sup>sin(1/x)..... Note: the definition you (SM) suggest is a weaker thing; it means: f is BV<sub>loc</sub>(I), locally BV.--pma (talk) 07:59, 15 July 2009 (UTC)[reply]
Also notice that if f is both continuous and of bounded variation on (0,1), then it is uniformly continuous; in any case, if f is BV on (0,1) then it admits limits at 0 and at 1; the corresponding extended function F on [0,1], chosen so that it is continuous at 0 and 1, has of course the same variation as f on (0,1) (the variation of F on [0,x] is continuous at x if F is). So, this gives you another equivalent definition of a BV function f on (0,1): it is the restriction to (0,1) of a BV function on [0,1], which you can assume continuous at the endpoints. --pma (talk) 09:11, 15 July 2009 (UTC)[reply]
BV functions are differentiable almost everywhere, so a continuous nowhere-differentiable function will fail spectacularly. Or, without using known properties or standard weird functions, you could build a counterexample by hand by just making a function with a spike of height 1/n centred at 1/n. Algebraist 10:03, 15 July 2009 (UTC)[reply]
nice example... but be careful not to sit on it ;-) --pma (talk) 10:13, 15 July 2009 (UTC)[reply]
I already know that x sin(1/x) is the counterexample, though I do not have a proof. But, I just looked at the page for uniform continuity and I found the Heine-Cantor theorem, which says a continuous function on a compact space is uniformly continuous. If I define f to be x sin(1/x) except at 0, and 0 at 0, then it is continuous on [0, 1] and thus uniformly continuous. It seems pretty clear from the definition that if I now delete the endpoints from the domain it's still uniformly continuous, as the definition on [0, 1] says for any epsilon, there exists a delta such that for any x, y with d(x, y) < delta, then d(f(x), f(y)) < epsilon. In particular, it is true for any x, y in (0, 1). Then, bounded variation I have seen before and it's just to pick a sequence of points that give the max value of each oscillation. You get a sequence of partial sums that is divergent so it's not of bounded variation.
Speaking of that (since I was thinking it was BV at first but it's not), Royden does absolute continuity only with finite sums. But, the Wiki article says "(finite or infinite)". So, my question is, are the three definitions equivalent if you do it with 1) finite sums only, 2) infinite sums only, 3) both? I think they're all equivalent. StatisticsMan (talk) 14:31, 15 July 2009 (UTC)[reply]
Yes, they're all pretty obviously equivalent. Algebraist 14:44, 15 July 2009 (UTC)[reply]
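The unbounded variation of x sin(1/x) discussed above can be seen numerically: summing the jumps between consecutive extremum points x_k = 2/((2k+1)π) gives a lower bound for the variation that keeps growing, roughly like log K (a quick sketch):

```python
import math

def f(x):
    return x * math.sin(1 / x)

def variation_lower_bound(K):
    # x_k = 2/((2k+1)*pi) are the points in (0,1) where sin(1/x) = +/-1;
    # the sum of |f(x_{k+1}) - f(x_k)| bounds the total variation from below.
    xs = [2 / ((2 * k + 1) * math.pi) for k in range(K)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(K - 1))

print(variation_lower_bound(100), variation_lower_bound(10_000))
```

Each jump equals x_k + x_{k+1}, so the bound behaves like a harmonic series and diverges, confirming that f is not of bounded variation on (0, 1).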

elliptical tube cuts a plane

The cutting line of an elliptical tube and a plane, is it an ellipsis? 88.72.242.144 (talk) 02:17, 15 July 2009 (UTC)[reply]

Yes, unless the axis of the tube is parallel to the plane, in which case you get either nothing, one line, or two parallel lines. --Spoon! (talk) 04:23, 15 July 2009 (UTC)[reply]
Yes, though it's an ellipse, because an ellipsis is something else... —Preceding unsigned comment added by 83.100.250.79 (talk) 16:13, 15 July 2009 (UTC)[reply]
Which is strange enough, given that both come from the same Greek word (ἔλλειψις). — Emil J. 16:56, 15 July 2009 (UTC)[reply]

Thank you Spoon!, Emil and 83.100.250.79 for the quick answers. Do you know where to find a proof or reference for this result? 88.72.242.144 (talk) 21:24, 15 July 2009 (UTC)[reply]

Well, here's a crude sketch: If the tube were perpendicular to the plane, then the cross section is obviously an ellipse. As you tilt the plane, it just "stretches" the cross section (the cross section as you look down the tube is still the same, but now it lies on a tilted plane, so distances are farther in one direction). A stretched ellipse is still an ellipse, because stretching preserves the sign of the discriminant of a conic section. --Spoon! (talk) 05:01, 16 July 2009 (UTC)[reply]
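The stretching argument can be checked directly on the cylinder x²/a² + y²/b² = 1 cut by the plane z = mx (the numbers below are arbitrary illustration values): in the plane's own coordinates u = x·√(1+m²), v = y, every point of the section satisfies an ellipse equation.

```python
import math

a, b, m = 3.0, 2.0, 0.75  # arbitrary semi-axes and plane slope
for t in [i * 0.1 for i in range(63)]:
    x, y = a * math.cos(t), b * math.sin(t)   # point on the cylinder wall
    u, v = x * math.sqrt(1 + m**2), y         # coordinates within the cutting plane
    lhs = u**2 / (a**2 * (1 + m**2)) + v**2 / b**2
    assert abs(lhs - 1) < 1e-9  # u^2/A^2 + v^2/B^2 = 1: an ellipse
print("section lies on an ellipse")
```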

functions

please can anyone give me hints on these questions?

2f(x-1) - f(1/x - 1) = x

what is the value of f(x)?

next....

f(1) = 2005; f(1) + f(2) + f(3) + …… + f(n) = n² · f(n), n > 1

what is the value of f(2004)? —Preceding unsigned comment added by 122.50.137.12 (talk) 12:47, 15 July 2009 (UTC)[reply]

What is the domain of f supposed to be for the first question? Algebraist 12:59, 15 July 2009 (UTC)[reply]
As for the second one, rearranging the equation to (n² − 1)f(n) = f(1) + f(2) + f(3) + … + f(n − 1) gives you a recursive definition of f. In order to solve the recurrence, compute the first few values of the function, and see whether a pattern emerges. (Note that all values of the function are linear in the value of f(1); the picture will be much more clear if you start with f(1) = 1 at first, and only obfuscate it by multiplying it with 2005 after you solve it.) Then prove that your pattern is correct by induction on n, and plug in n = 2004 to get the result. — Emil J. 13:15, 15 July 2009 (UTC)[reply]
Better yet, subtracting the equation for n and n − 1 gives
f(n) = n²f(n) − (n − 1)²f(n − 1).
This recurrence is much easier to solve than the one above. — Emil J. 14:58, 15 July 2009 (UTC)[reply]
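Carrying Emil J.'s simpler recurrence through: rearranging gives (n² − 1)f(n) = (n − 1)²f(n − 1), i.e. f(n) = ((n − 1)/(n + 1))·f(n − 1), which telescopes to f(n) = 2f(1)/(n(n + 1)). A quick check with exact rational arithmetic (the closed form is worked out here, not stated in the thread):

```python
from fractions import Fraction

def f(n, f1=Fraction(2005)):
    # f(n) = (n-1)/(n+1) * f(n-1), starting from f(1);
    # telescoping gives f(n) = 2*f(1) / (n*(n+1)).
    val = f1
    for k in range(2, n + 1):
        val = val * (k - 1) / (k + 1)
    return val
```

With f(1) = 2005 this gives f(2004) = 2·2005/(2004·2005) = 1/1002.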
As for the first one, substitute 1/x for x to get
2f(1/x − 1) − f(x − 1) = 1/x.
Together with the original equation, you now have a system of two linear equations in two unknowns f(x − 1) and f(1/x − 1). Solve it. — Emil J. 13:34, 15 July 2009 (UTC)[reply]
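Carrying Emil J.'s elimination through: doubling the original equation and adding the substituted one gives 3f(x − 1) = 2x + 1/x, so f(t) = (2(t + 1) + 1/(t + 1))/3 for t ≠ −1. A quick verification with exact arithmetic (the closed form is derived here, not stated in the thread):

```python
from fractions import Fraction

def f(t):
    # From 3*f(x-1) = 2x + 1/x with the substitution t = x - 1:
    # f(t) = (2*(t+1) + 1/(t+1)) / 3, defined for t != -1.
    s = Fraction(t) + 1
    return (2 * s + 1 / s) / 3
```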

Equation solving

I have just added the Wikiproject Mathematics template to the talk page of Equation solving. The article seems to have been pretty much ignored until now and it needs a lot of work. I have filled in the bits on ratings etc.. If someone wants to do a more official assessment then please do. Yaris678 (talk) 16:44, 15 July 2009 (UTC)[reply]

Wikipedia talk:WikiProject Mathematics is a more appropriate place for such messages than the Reference desk. — Emil J. 16:49, 15 July 2009 (UTC)[reply]
OK. Thanks. I will post it there. Yaris678 (talk) 18:01, 15 July 2009 (UTC)[reply]
On the wider point, am I right in thinking that Wikipedia talk:WikiProject Mathematics is for bringing up issues with mathematics articles, whereas the helpdesk is for questions about mathematics itself? Perhaps that should be made clear at the top of the article. Yaris678 (talk) 18:21, 15 July 2009 (UTC)[reply]
Yes, that is correct. At the top of which article? If you mean the Mathematics Reference desk (which is not an article), then since the header is shared with the other desks, it should probably be discussed in Wikipedia talk:Reference desk (the irony is not lost on me). -- Meni Rosenfeld (talk) 20:20, 15 July 2009 (UTC)[reply]

test edit

TeX is dead while the image server is down, I think. Algebraist 20:46, 15 July 2009 (UTC)[reply]

Is this alleged fact that is stated by saying "the image server is down" supposed to be somehow known to the Wikipedia public? Where does one find such information? Michael Hardy (talk) 20:48, 15 July 2009 (UTC)[reply]

There should be a note at the top of the page related to the downage, though it doesn't mention maths rendering specifically: 'Uploads and Thumbnails generation have been temporarily halted while we upgrade our image store.' The best way to have up-to-date technical information is via IRC. Algebraist 20:51, 15 July 2009 (UTC)[reply]

The top of the page is where I'd never think of looking. One edits talk pages at the bottom of the page. Michael Hardy (talk) 20:54, 15 July 2009 (UTC)[reply]

Well, it's the established place for sitewide notices. More details here, btw. Algebraist 20:58, 15 July 2009 (UTC)[reply]

Thank you. I'd never have guessed that "techblog" exists. I do have some vague suspicion that the hardware and software that make Wikipedia work were not actually brought down from Heaven by an archangel at the beginning of Time, but it's only a vague suspicion. Michael Hardy (talk) 21:06, 15 July 2009 (UTC)[reply]

And now the initial line I posted above is getting properly rendered. Maybe we're back to normal. Michael Hardy (talk) 21:12, 15 July 2009 (UTC)[reply]

July 16

How do I solve a V = πr²h problem for r —Preceding unsigned comment added by 71.244.44.98 (talk) 00:13, 16 July 2009 (UTC)[reply]

sorry, I meant for h, with r = 6 and V = 72π —Preceding unsigned comment added by 71.244.44.98 (talk) 00:15, 16 July 2009 (UTC)[reply]
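Assuming the question means V = 72π with r = 6 (reading "pi=72pi" as the volume), rearranging V = πr²h gives h = V/(πr²) = 72π/(36π) = 2. A sketch (the function name is mine):

```python
from math import pi

def cylinder_height(V, r):
    # Solve V = pi * r**2 * h for h:  h = V / (pi * r**2).
    return V / (pi * r ** 2)
```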

Show that the function

f(x) = |2x − 3| · [x], for x ≥ 1

f(x) = sin(πx/2), for x < 1

is continuous but not differentiable at x = 1, where π = 180° and [x] is the greatest integer function. —Preceding unsigned comment added by 122.174.90.103 (talk) 05:27, 16 July 2009 (UTC)[reply]
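For what it's worth, the claim can be checked numerically: f(1) = |2 − 3|·[1] = 1 agrees with the left limit sin(π/2) = 1, so f is continuous at 1; but the right-hand difference quotient is −2 (since f(x) = (3 − 2x)·1 just right of 1), while the left-hand one tends to 0 (since (d/dx) sin(πx/2) = (π/2)cos(πx/2) vanishes at x = 1). A sketch:

```python
from math import sin, pi, floor

def f(x):
    # f(x) = |2x - 3| * [x] for x >= 1, sin(pi*x/2) for x < 1,
    # where [x] is the greatest-integer (floor) function.
    return abs(2 * x - 3) * floor(x) if x >= 1 else sin(pi * x / 2)

def diff_quotient(x0, h):
    # One-sided difference quotient at x0; h < 0 probes the left side.
    return (f(x0 + h) - f(x0)) / h
```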

Unique representation of an element of a local ring

What I want to show is that an element u of Z_{p^n} (p is prime, n is a natural number) can be written uniquely in the form u = u_0 + u_1·p + … + u_{n−1}·p^{n−1} where u_i ∈ Z_p. I vaguely feel that this is something related to p-adic numbers but I never studied those. What I do know is that Z_p is the field on p elements. If someone can point me in the right direction I will be grateful.--Shahab (talk) 09:27, 16 July 2009 (UTC)[reply]

First, your notation does not make much sense, as Z_p is not a subset of Z_{p^n}. Assuming you wanted to write u_i ∈ {0, 1, …, p − 1}, existence follows immediately from the fact that any natural number can be written in a base-p representation. As for uniqueness, the sets {0, 1, …, p − 1}^n and Z_{p^n} have the same finite number of elements, and we have just established that the mapping from the former to the latter sending each sequence ⟨u_0, u_1, …, u_{n−1}⟩ to the sum ∑_{i<n} u_i·p^i is surjective, hence it is also injective. — Emil J. 10:32, 16 July 2009 (UTC)[reply]
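Emil J.'s counting argument can be checked exhaustively for small cases: the map (u_0, …, u_{n−1}) ↦ Σ u_i·p^i from {0, …, p − 1}^n to Z_{p^n} hits every residue exactly once. A sketch:

```python
from itertools import product

def to_number(digits, p):
    # Send (u_0, ..., u_{n-1}) with u_i in {0, ..., p-1} to sum u_i * p**i.
    return sum(u * p ** i for i, u in enumerate(digits))

def is_bijection(p, n):
    # Domain and codomain both have p**n elements, so checking that every
    # residue 0, ..., p**n - 1 is hit exactly once verifies bijectivity.
    images = [to_number(d, p) for d in product(range(p), repeat=n)]
    return sorted(images) == list(range(p ** n))
```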

Rope cutting formula?

My math knowledge is hopeless and this particular problem can be solved by most teenagers even though I am an adult.

Consider a rope that is about 30,000 feet long. It is cut exactly into two equal-length ropes. Those two ropes are cut again into 4 equal ropes. This goes on and on until the ropes are about 300 feet long. The question is: what would be the formula that gives the number of cuts needed so that the ropes are around 300 feet long? Maybe the formula will not work when the original rope is too long or too short; I have no idea.
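A sketch of one reading of the question: after k rounds of halving every piece, each piece has length L/2^k, so the number of rounds needed to get pieces no longer than the target is k = ⌈log₂(L/target)⌉. For L = 30,000 ft and a 300 ft target, k = ⌈log₂ 100⌉ = 7 rounds (pieces of about 234 ft; after 6 rounds they are still about 469 ft). Note this counts rounds of cutting; the number of individual cuts after k rounds is 2^k − 1. A sketch (function name is mine):

```python
from math import ceil, log2

def halving_rounds(start_len, target_len):
    # After k rounds every piece has length start_len / 2**k; solving
    # start_len / 2**k <= target_len gives k = ceil(log2(start_len / target_len)).
    return ceil(log2(start_len / target_len))
```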