# Wikipedia:Reference desk/Archives/Mathematics/2007 December 25

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

# December 25

## Graphing a Fourier Transform

This has been bugging me for a while. I'd like to get a FT to output a graph of the frequency components of a function on my Mac's Grapher utility, and I have been using the function ${\displaystyle f(t)=\sin(t)}$ as my function of time. But Grapher doesn't seem to want to work with me... So I understand that a FT looks like (I'll use x here for the more conventional lowercase omega) ${\displaystyle F(x)=\int _{0}^{x}f(t)e^{-i\pi xt}dt}$ and I put this into Grapher, but it doesn't output the FT with peaks at 1 and -1 like I expect it to. Maybe I just have too rudimentary a knowledge of calculus to attempt this? Is sin(t) Lebesgue integrable? I'm just curious about the whole thing; maybe Grapher's just not the right program to use. 72.219.143.150 (talk) 03:45, 25 December 2007 (UTC)

See Fourier transform. The formula is ${\displaystyle F(x)=\int _{-\infty }^{\infty }f(t)e^{-2i\pi xt}dt}$ and not as you have written (the important difference is the bounds of the integral). Also, you shouldn't expect to find peaks exactly, you should expect to find Dirac deltas which are messier. sin(t) is not Lebesgue integrable over the entire real line. If you want to get a sane result, try a function which tends to 0 as ${\displaystyle t\to \pm \infty }$. -- Meni Rosenfeld (talk) 08:37, 25 December 2007 (UTC)
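The corrected formula is easy to sanity-check numerically outside Grapher. Here is a minimal sketch in Python/NumPy (my own illustration, not part of the original discussion), using ${\displaystyle f(t)=e^{-\pi t^{2}}}$, a function that tends to 0 at ${\displaystyle \pm \infty }$ and is known to be its own Fourier transform under this convention:

```python
import numpy as np

# Approximate F(x) = integral of f(t) e^{-2 pi i x t} dt over the whole
# line, truncated to [-10, 10] (the integrand is negligible outside).
# f(t) = e^{-pi t^2} is its own Fourier transform in this convention,
# so we have something to check the numbers against.
t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)

def fourier(x):
    return np.sum(f * np.exp(-2j * np.pi * x * t)) * dt

for x in (0.0, 0.5, 1.0):
    print(x, fourier(x).real, np.exp(-np.pi * x**2))
```

A function like sin(t) fails here exactly as described above: the truncated integral never settles down as the limits grow.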

Yep, I bet the problem's mainly in the fact that sin(t) isn't Lebesgue integrable. I've seen the formula written both the way you present it and the way I put it (I guess my source is wrong!). Thanks for the info. 72.219.143.150 (talk) 19:12, 25 December 2007 (UTC)

Well, no, the problem is mainly that the formula you originally used is wrong. I don't think Lebesgue integrability has anything to do with the Fourier transform. You can take the Fourier transform of sin(t), but what you get is not something that can be graphed (since it is periodic). You can take a function like ${\displaystyle {\frac {\sin(t)}{t}}}$, which if I'm not mistaken is not Lebesgue integrable, and graph its Fourier transform (it will be a rectangle). -- Meni Rosenfeld (talk) 19:34, 25 December 2007 (UTC)
There are two kinds of transformations going on here. One transformation applies to L2 [or square-integrable] functions on the circle, which can be identified with locally-L2 periodic functions on the line. Given such a function f, we define the Fourier series F by ${\displaystyle F(n)=\int _{0}^{1}f(t)e^{-i2\pi nt}dt}$, where n ranges over the integers. This corresponds to the fact that every such periodic function can be written as a convergent sum of the form ${\displaystyle f(t)=\sum a_{k}\sin(2\pi kt)+\sum b_{k}\cos(2\pi kt)}$. The Fourier series F is basically just a list of numbers. The Fourier transform applies to any L1 [integrable] function on the line -- not necessarily periodic -- and is defined by a similar formula: ${\displaystyle F(x)=\int _{-\infty }^{\infty }f(t)e^{-i2\pi xt}dt}$, but here x ranges over the whole real line, and F is a function in its own right. The original question seemed to refer to the Fourier transform, but your aside about sin(t) refers to the Fourier series. It's important not to confuse the two. Tesseran (talk) 06:56, 31 December 2007 (UTC)

Ah yes. Pardon me, my ignorance is showing...Grapher doesn't seem to want to cooperate with the infinite limits of integration, or maybe I'm parsing something wrong in it. Either way, thanks for your help, MR.72.219.143.150 (talk) 00:21, 26 December 2007 (UTC)

## Circles

Hey everybody, can anyone please provide me all the BASIC THEOREMS regarding circles in coordinate geometry? It's really tough to find them all in one place!! —Preceding unsigned comment added by GK ROCKS (talkcontribs) 12:29, 25 December 2007 (UTC)

Circle contains some theorems. -- Meni Rosenfeld (talk) 12:38, 25 December 2007 (UTC)

## Coordinate axes

What are the slopes of the x-axis and the y-axis? Please give a proof too.... —Preceding unsigned comment added by GK ROCKS (talkcontribs) 12:37, 25 December 2007 (UTC)

Do you know what a slope is? -- Meni Rosenfeld (talk) 12:43, 25 December 2007 (UTC)
Well, the old saying is that "slope equals rise over run." So by that ideology, slope is y/x, so what happens when x is zero, or when y is zero? A math-wiki (talk) 16:06, 25 December 2007 (UTC)
If anything, slope is Δy/Δx. -- Meni Rosenfeld (talk) 16:25, 25 December 2007 (UTC)
Well, the equation for the x-axis is y = 0x + 0, therefore the gradient of the x-axis is 0 (this is obvious). As for the y-axis, the equation is simply x = 0, which we can't differentiate. Instead, consider the family of lines y = nx as ${\displaystyle n\rightarrow \infty }$. The gradient of each line is n, and as n tends to infinity, the line in question tends towards the y-axis. Alternatively, just consider that the gradient of the y-axis is r/0 for some r, which can be regarded as infinite (strictly speaking, undefined). mattbuck (talk) 16:35, 25 December 2007 (UTC)
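The limiting argument above can be seen numerically; a small Python sketch (an illustration, not from the discussion) showing that the angle of the line y = nx approaches 90° as the slope n grows:

```python
import math

# The line y = n*x makes an angle arctan(n) with the x-axis.  As n
# grows, the angle approaches 90 degrees, i.e. the line approaches the
# vertical y-axis; in that sense the y-axis has "infinite" slope.
for n in (1, 10, 100, 1000):
    print(n, math.degrees(math.atan(n)))
```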

## Irrational exponents

I know that to raise a number to, for example, the power of 1.5, you cube that number and take its square root, in either order. Since irrational numbers by definition cannot be expressed as a quotient of integers, how do you go about raising a number to, say, the power of pi? 19:41, 25 December 2007 (UTC)

See Exponential function and Exponentiation#Real powers of positive real numbers. We want ${\displaystyle b^{x}}$ to be a continuous function and that makes it unique for real x when it has been defined for rational x. There are different ways to formulate the definition and to perform the calculation but the end result is the same, making ${\displaystyle b^{x}}$ the unique number such that if x is between y and z then ${\displaystyle b^{x}}$ is between ${\displaystyle b^{y}}$ and ${\displaystyle b^{z}}$. PrimeHunter (talk) 20:11, 25 December 2007 (UTC)
Note that this last criterion has less to do with ${\displaystyle b^{x}}$ being continuous, and more with it being monotone. I'll elaborate on that principle: you know that ${\displaystyle 2^{3}=8}$ and ${\displaystyle 2^{4}=16}$, and since ${\displaystyle 3<\pi <4}$ you have ${\displaystyle 8<2^{\pi }<16}$. Furthermore, ${\displaystyle 2^{3.1}=8.5741...}$ and ${\displaystyle 2^{3.2}=9.1895...}$, so ${\displaystyle 8.5741<2^{\pi }<9.1896}$. You can continue this: ${\displaystyle 2^{3.14}=8.8152...}$ and ${\displaystyle 2^{3.15}=8.8765...}$, so ${\displaystyle 8.8152<2^{\pi }<8.8766}$, and so on. The values converge to a unique value, ${\displaystyle 2^{\pi }=8.8249778...}$. -- Meni Rosenfeld (talk) 20:47, 25 December 2007 (UTC)
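The squeezing above is mechanical enough to script; a small Python sketch (an illustration, not part of the original thread) bracketing ${\displaystyle 2^{\pi }}$ between powers with rational exponents, using successive decimal truncations of ${\displaystyle \pi }$:

```python
import math

# Squeeze 2^pi between powers with rational (finite-decimal) exponents,
# using more and more digits of pi, exactly as in the reply above.
for digits in range(1, 8):
    lo_exp = math.floor(math.pi * 10**digits) / 10**digits   # 3.1, 3.14, ...
    hi_exp = lo_exp + 10**-digits
    print(f"{2**lo_exp:.7f} < 2^pi < {2**hi_exp:.7f}")

# Both bounds converge to the same value.
print(2**math.pi)   # 8.824977827...
```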
So does that mean that there is no way of manually calculating the exact value? Or when you take an irrational exponent of a number is that also irrational? 172.159.54.87 (talk) 21:41, 25 December 2007 (UTC)
That's not what it means. There are zillions of ways to calculate anything - we were just demonstrating the principle behind it. Of course, it's not that easy to do with pen and paper - for that matter, how do you manually calculate ${\displaystyle 2^{0.1}}$?
Even when the exponent is rational, the power is usually irrational. There is no direct relation between the irrationality of the exponent and the irrationality of the power. -- Meni Rosenfeld (talk) 21:52, 25 December 2007 (UTC)
The short version: ${\displaystyle a^{b}}$ can be defined to be ${\displaystyle \exp(b\ln a)\;\!}$, where ${\displaystyle \ln x=\sum _{n=0}^{\infty }{\frac {2}{2n+1}}\left({\frac {x-1}{x+1}}\right)^{2n+1}}$ and ${\displaystyle \exp x=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}}$. You can calculate these to any wanted precision using only basic arithmetic operations. -- Meni Rosenfeld (talk) 21:58, 25 December 2007 (UTC)
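The two series above translate directly into code; a Python sketch (illustrative only, with term counts roughly chosen for double precision rather than carefully tuned):

```python
import math

# Sketch of the "short version": a^b computed as exp(b * ln(a)) using
# only the two series quoted above and basic arithmetic.
def ln(x, terms=60):
    # ln(x) = sum 2/(2n+1) * ((x-1)/(x+1))^(2n+1), valid for x > 0
    u = (x - 1.0) / (x + 1.0)
    return sum(2.0 / (2*n + 1) * u**(2*n + 1) for n in range(terms))

def exp(x, terms=60):
    # exp(x) = sum x^n / n!
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

def power(a, b):
    return exp(b * ln(a))

print(power(2.0, math.pi))   # compare with 2**math.pi
```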
Coming at it from a very different angle: Any real number can be expressed as the limit of a sequence of rationals (see Continued fraction#Best rational approximations), so you can think of ${\displaystyle x^{y}}$ as the limit of ${\displaystyle x^{y_{j}}}$ where ${\displaystyle y_{j}}$ are a sequence of rational approximations to ${\displaystyle y}$. —Tamfang (talk) 21:05, 29 December 2007 (UTC)
Not that different. This is exactly the requirement that ${\displaystyle b^{x}}$ is continuous - it's just not as easy to work with as monotonicity. -- Meni Rosenfeld (talk) 00:00, 30 December 2007 (UTC)
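For concreteness, a small Python sketch (my illustration, not from the thread) of Tamfang's suggestion, using the first few continued-fraction convergents of ${\displaystyle \pi }$ as the rational exponents:

```python
from fractions import Fraction
import math

# The continued-fraction convergents of pi are its best rational
# approximations; raising 2 to each of them gives a sequence that
# converges to 2^pi.  Convergents listed by hand for illustration.
convergents = [Fraction(3, 1), Fraction(22, 7), Fraction(333, 106),
               Fraction(355, 113), Fraction(103993, 33102)]
for q in convergents:
    print(q, 2.0 ** float(q))
print(2.0 ** math.pi)
```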

## Integration

Can the function x^-1 be integrated? --Seans Potato Business 23:07, 25 December 2007 (UTC)

Yes, its antiderivative is the natural logarithm of x, usually written ln(x). mattbuck (talk) 23:21, 25 December 2007 (UTC)
While I try to understand that section: is it appropriate to end equations with a full-stop? Is there a policy against this, 'cause I think there should be:
${\displaystyle \ln(x)\equiv \int _{1}^{x}{\frac {dt}{t}}.}$
The floating dot looks like a mathematical thing (obviously not to someone well versed in mathematical things, but to someone who is learning, it does). I managed to deduce that it was a poorly placed full-stop. --Seans Potato Business 23:42, 25 December 2007 (UTC)
Yes, the full stop ought to be there if it is the end of a sentence, unless the sentence was a question (as in your next comment), in which case it should be a question mark. In case of an exclamation, use a bang, although there might be a risk of confusion, as in "This computation results in the value 6!". The fact that the formula is in a "display" instead of inline text does not make a difference to this general rule of punctuation. If the sentence runs on after the formula, you'll see a comma or semicolon, whichever is grammatically appropriate.  --Lambiam 02:32, 26 December 2007 (UTC)
Does that section actually explain why ${\displaystyle {\frac {d}{dx}}\ln(x)={\frac {1}{x}}}$? --Seans Potato Business 23:53, 25 December 2007 (UTC)
The section states that the function ln can be defined as the integral of the reciprocal function. If you indeed define it that way, then by the fundamental theorem of calculus d/dx ln(x) = 1/x is an immediate consequence.  --Lambiam 02:32, 26 December 2007 (UTC)
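Both halves of that statement are easy to check numerically; a small Python sketch (illustrative, using a plain midpoint rule) approximating ${\displaystyle \int _{1}^{x}{\frac {dt}{t}}}$ and then differencing it:

```python
import math

# Midpoint-rule approximation of F(x) = integral from 1 to x of dt/t.
# Two checks: F(x) agrees with ln(x), and a centered difference of F
# agrees with 1/x, as the fundamental theorem of calculus promises.
def integral_recip(x, steps=100_000):
    h = (x - 1.0) / steps
    return sum(h / (1.0 + (k + 0.5) * h) for k in range(steps))

x = 5.0
print(integral_recip(x), math.log(x))   # should agree closely

eps = 1e-5
deriv = (integral_recip(x + eps) - integral_recip(x - eps)) / (2 * eps)
print(deriv, 1 / x)                     # should agree closely
```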
Is it fair to state that the function ln can be defined as the integral of the reciprocal function? Can't it be mathematically demonstrated? It's not obvious to me... :/ --Seans Potato Business 19:37, 26 December 2007 (UTC)
ln has several popular definitions which can be shown to be equivalent. -- Meni Rosenfeld (talk) 19:38, 26 December 2007 (UTC)
It can only be mathematically demonstrated if you have another definition already. Here are three facts about ln and exp:
(DL) ln is the antiderivative of the reciprocal function such that ln(1) = 0;
(DE) exp is the solution of the equation f' = f such that exp(0) = 1;
(LE) ln and exp are each other's inverse.
You can use (DL) and (DE) as definitions of ln and exp, and then prove (LE);
You can use (DL) as the definition of ln, next (LE) as the definition of exp, and finally prove (DE);
You can use (DE) as the definition of exp, next (LE) as the definition of ln, and finally prove (DL).
None of these approaches is significantly better than the others; you basically end up with equally difficult (actually fairly easy) proofs.  --Lambiam 20:30, 26 December 2007 (UTC)
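As an illustration of how short these proofs are, here is a sketch (mine, not Lambiam's) of the first approach: assume (DL) and (DE), and derive one half of (LE).

```latex
% Assume (DL) and (DE); show ln(exp x) = x.
\frac{d}{dx}\,\ln(\exp x)
  = \frac{\exp'(x)}{\exp x}     % chain rule plus (DL): ln'(t) = 1/t
  = \frac{\exp x}{\exp x} = 1,  % by (DE): exp' = exp
\qquad
\ln(\exp 0) = \ln 1 = 0.        % (DE) gives exp(0) = 1; (DL) gives ln(1) = 0
```

So ln(exp x) and x have the same derivative and the same value at 0, hence agree everywhere; the other half of (LE) follows because exp is strictly increasing.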
I am happy with LE and I want to prove DL, but I don't understand DE. Anything to the power of zero is 1... when you say that f' = f, what are f and f'? I know that they are functions but what functions? --Seans Potato Business 21:02, 26 December 2007 (UTC)
This definition says: exp is defined to be the unique function satisfying exp' = exp and exp(0) = 1 (exp' is the derivative of exp).
Of course there are other possible definitions. But this one is probably the simplest. It doesn't assume a priori that exp is any sort of power, so we need to specify what exp(0) is. -- Meni Rosenfeld (talk) 21:13, 26 December 2007 (UTC)

Couldn't the function ${\displaystyle F(x)=\int _{a}^{x}{\frac {dt}{t}},\quad a\in \mathbb {R} }$ be better represented as ${\displaystyle F(x)=\ln |x|}$? 72.219.143.150 (talk) 00:17, 26 December 2007 (UTC)

No, not quite. Because you have a definite integral. If you set a = 0, then that formula would be valid, but otherwise you'd need a constant of integration of the value -ln|a|. mattbuck (talk) 02:40, 26 December 2007 (UTC)
If anything, it should be ${\displaystyle a=1}$. But even this doesn't work for all ${\displaystyle x\neq 0}$, since for ${\displaystyle x<0}$ you get a definite integral which does not exist in the usual sense. -- Meni Rosenfeld (talk) 08:56, 26 December 2007 (UTC)
(after edit conflict) Since you use a definite integral, you should have something like F(x) = ln|x| − ln|a| = ln|x/a|; in particular, F(a) = 0. For this to be true, a and x must both be non-zero and have the same sign, so you can as well leave out the absolute value bars in the second form: F(x) = ln(x/a).
For the indefinite integral, we have indeed:
${\displaystyle \int {\frac {dx}{x}}=\ln |x|+C,\quad x\neq 0.}$
Curiously enough, you can actually use two distinct constants of integration, say ${\displaystyle C_{-1}}$ and ${\displaystyle C_{+1}}$, and use instead:
${\displaystyle \int {\frac {dx}{x}}=\ln |x|+C_{\operatorname {sgn}(x)},\quad x\neq 0,}$
where sgn denotes the sign function.  --Lambiam 02:49, 26 December 2007 (UTC)
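A quick numerical illustration (the constants below are arbitrary, picked just for the demonstration): with different constants on the two branches, the function still differentiates to 1/x everywhere on its domain:

```python
import math

# Different constants on the two branches of x != 0 still give an
# antiderivative of 1/x, since the two branches never meet.  The
# constants here are arbitrary choices for the demo.
C = {1: 2.5, -1: -7.0}   # stands in for C_{+1} and C_{-1}

def F(x):
    return math.log(abs(x)) + C[int(math.copysign(1, x))]

eps = 1e-6
for x in (2.0, -2.0):
    deriv = (F(x + eps) - F(x - eps)) / (2 * eps)
    print(x, deriv, 1 / x)
```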

Whoops, I seem to have been caught up in it all and gone definite, when I think the original question was presumably asking about indefinite integrals...my bad.72.219.143.150 (talk) 03:16, 26 December 2007 (UTC)

No, the original also used a definite integral, which was a special case of the one you gave, namely with the variable a set to 1. So if you take the equation F(x) = ln(x/a) I gave above, and set a := 1, you get F(x) = ln(x/1) = ln(x). Look ma, no absolute bars!  --Lambiam 07:49, 26 December 2007 (UTC)