Talk:Fourier analysis

From Wikipedia, the free encyclopedia


This talk page has been archived at Talk:Fourier transform/Archive1. I moved the page, so the edit history is preserved with the archive page. I've copied back the most recent thread. Wile E. Heresiarch 23:58, 20 Sep 2004 (UTC)

Question about Notation

Wouldn't the correct notation be:


Since f(t) is in fact the inverse Fourier transform of F, which is itself a function of ω?

Also, as a precedent, from the Laplace transform article:

I'd say that the most correct notation, as Michael Hardy says below, is
The inverse Fourier transform operator works on a function, and the function is F. In contrast, F(ω) denotes the function F evaluated at a particular point, namely ω.
However, alternative notations are used quite frequently. -- Jitse Niesen (talk) 01:49, 13 March 2006 (UTC)

Yes, but couldn't one argue that ω is not a specific point, but rather a locus of all of the points ω such that ω extends from minus infinity to plus infinity? Also, it is helpful to show F(ω) rather than plain old F as a convenience so that we all can keep track of the fact that F is a function of ω and not, say, γ or β or whatever. Or t for that matter. -- Metacomet 02:00, 13 March 2006 (UTC)

It would also be helpful to use a letter other than F to denote the function of interest, since we are already using F (in calligraphy style) to represent the Fourier transform operator itself. Perhaps we could use, oh I don't know, maybe X or Y to represent the transformed function...

So, for example:

seems less potentially ambiguous to me. -- Metacomet 02:10, 13 March 2006 (UTC)

I think using is slightly ambiguous since it could be misinterpreted as the inverse Fourier transform multiplied by t. It would seem extraneous, since the inverse Fourier transform itself, as a result of the integral, generates a function of t... -- Tristan Jones (talk) 11:16, 13 March 2006 (MT)

Metacomet wrote: "Also, it is helpful to show F(ω) rather than plain old F as a convenience so that we all can keep track of the fact that F is a function of ω". This may well be a difference between mathematicians and engineers. For mathematicians, F is not a function of ω; F is a function which maps numbers to numbers. We can replace the dummy variable ω by another variable, say s:
I guess that for engineers or physicists, ω is not just a number but it also has a meaning (frequency, or something like that). In that context, I can see why you'd like to mention the argument.
Similarly, from a mathematical perspective, the inverse Fourier transform does not generate a function of t. It just generates a function.
Tristan is right that is slightly ambiguous, but the same goes for a(b+c): is this a times (b+c) or the function a applied to (b+c)? This ambiguity is just a feature of mathematical notation, a price we pay for its succinctness.
I agree that using f for the function to be Fourier transformed is indeed a bit confusing, given that we use curly F for the Fourier transform operator. But it is rather customary to denote arbitrary functions by f. However, I won't complain if you were to change it. -- Jitse Niesen (talk) 07:07, 13 March 2006 (UTC)

In that case couldn't it be written as: ? I think there is value in showing the omega (or any other variable) even if it is not "technically" correct, as the meaning is the same and it shows in a more explicit manner that the transform changes the function into a different 'space'... Also, since both methods could be considered correct (the omega merely shows that F is a function of some single variable, in this case called omega, but it could be anything else), to someone who has not previously seen the transform, writing is more obvious and easier to understand... -- Tristan Jones (talk) 11:16, 13 March 2006 (MT)

When there is danger of ambiguity, use a·(b+c) for multiplication and a(b+c) for evaluating the function a at the point (b+c). Jitse, you want to replace all three occurrences of ω by s. And there is no need to use curly brackets. And the outer pair of parentheses is redundant.

Bo Jacoby 18:12, 13 March 2006 (UTC)

There are other forms of the continuous Fourier transform, which have different advantages and are favoured by different people because of their mathematical or practical simplicity, or because they make the inverse transform look more like the transform. I mean, it's not just a few recalcitrant nutbars with a chip on their shoulder; I'm talking widespread usage. Most discussions of the Fourier transform mention this, so maybe we ought to as well. For example,

gets rid of the normalization factor out the front, and gets s in terms of frequency rather than rads. This form is used in the course notes for Signal Processing here at Adelaide Uni. N-gon 10:08, 22 August 2006 (UTC)

If you had written , you would have been more likely to notice that is the wrong variable of integration. (Redundancy is a good thing.)   I hope you also unintentionally omitted a factor of 2 with your . Thus:
But the suggestion to dispense with both the normalization factors and the radian/sec units (for cycles/sec) in one fell swoop is of course very sensible. If I were king, I would decree it and one more thing. I would switch f and s; i.e. f works well for frequency, leaving s available for (say) signal (and S for Spectrum):
--Bob K 23:45, 22 August 2006 (UTC)
It's clear that widespread alternative conventions should be mentioned. However, it is best to keep this article a short summary and leave such details to continuous Fourier transform. Otherwise we end up with a lot of redundancy. —Steven G. Johnson 16:03, 23 August 2006 (UTC)
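As a numerical aside on the convention N-gon describes: with the ordinary-frequency definition S(f) = ∫ s(t)·e^(−2πift) dt there are no normalization factors at all, and the Gaussian e^(−πt²) is its own transform. A minimal Python sketch (the helper name ft_hz and the integration grid are arbitrary choices, not from the thread):

```python
import math, cmath

def ft_hz(s, f, t_min=-8.0, t_max=8.0, n=16000):
    """Approximate S(f) = integral of s(t)*e^(-2*pi*i*f*t) dt by the
    trapezoid rule, under the ordinary-frequency (Hz) convention,
    which needs no 1/(2*pi) or 1/sqrt(2*pi) prefactor."""
    dt = (t_max - t_min) / n
    total = 0j
    for k in range(n + 1):
        t = t_min + k * dt
        w = 1.0 if 0 < k < n else 0.5  # trapezoid end-point weights
        total += w * s(t) * cmath.exp(-2j * math.pi * f * t) * dt
    return total

gauss = lambda t: math.exp(-math.pi * t * t)  # e^(-pi t^2), its own transform
print(abs(ft_hz(gauss, 0.0) - 1.0))                 # small
print(abs(ft_hz(gauss, 1.0) - math.exp(-math.pi)))  # small
```

The absence of any 2π bookkeeping in the code is exactly the simplicity being argued for above.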

Recommended Book

Delete this if you like but it will help lots of people. If you want to *truly* understand the Fourier Transform and where it really comes from then read the book "Who is Fourier? A mathematical adventure" ISBN 0964350408. It's excellent. Lecturers can't teach this subject for toffee. It's a shame I only found this book after my course involving Fourier but it more than makes up for it now. You can get it off Chris, Wales UK 16:14 25th November 2005

F notation

I don't know where else to ask. What's the difference between and . Are they used correctly in this article? - Omegatron 01:56, Sep 19, 2004 (UTC)

Seems clear to me from the article. is the function whose Fourier transform is to be found, and is the transformed function, so that is the transform itself, i.e., the mapping from one space of functions to another. Michael Hardy 22:25, 19 Sep 2004 (UTC)
If you know it, explain it in the integral transform article, as well. - Omegatron 02:02, Sep 19, 2004 (UTC)
The function is the Fourier transform of the function . is the Fourier transform operator. CSTAR 02:18, 19 Sep 2004 (UTC)
Ok. But when do you use the ? ? hmm is there an article for operator? ... Yes, but it doesn't mention the notation. - Omegatron 02:28, Sep 19, 2004 (UTC)
Strictly speaking, is the right way to parse the thing, and is a solecism, often used by engineers and sometimes used even by mathematicians. When you use alone would include such things as when you say "the Fourier transform is a 90° rotation of the space of square-integrable functions". I seem to recall that the article titled operator was something of an Augean stable, but that was months ago; I don't know if it's improved. Michael Hardy 22:30, 19 Sep 2004 (UTC)
f(x) denotes the value of f at x. f denotes a function, x denotes a real number. would make sense only if were defined for numbers, but it's defined for functions. CSTAR 02:34, 19 Sep 2004 (UTC)
Oh oh I see. So

Yeah. If you're picky you will note that t has no real purpose in the above formula. It really should be within the scope of a binding operator such as .; this, however, by the rules of lambda calculus reduces to the term f. CSTAR 02:47, 19 Sep 2004 (UTC)

is used in the article. Is this the correct way to say the above, are they both incorrect, or is the t just extraneous but it doesn't really matter? - Omegatron 02:54, Sep 19, 2004 (UTC)
The variable t occurs on the right hand side as well as the left hand side. The RHS is a term (in this case an integral of an exponential); you can't get rid of the t. Try it. What would you get? CSTAR 03:12, 19 Sep 2004 (UTC)
In that case, I don't understand. I thought you meant that it was correct to use . - Omegatron 03:30, Sep 19, 2004 (UTC)
If you think of as meaning two things: (a) and (b) a suggestion that the symbol is reserved to name the independent variable of the function F, since is often thought of (for instance by physicists or engineers) as frequency. CSTAR 03:40, 19 Sep 2004 (UTC)
I thought I understood, but I guess not. The term "scope of a binding operator" would probably help. You don't have to teach it to me if it's something I don't already know. Just point me where to look. Regardless, is the article notation right? - Omegatron 14:42, Sep 19, 2004 (UTC)
See variable-binding operator or some such thing. There is a link to this somewhere in wikipedia. CSTAR 15:36, 19 Sep 2004 (UTC)
It's at free variables and bound variables. Michael Hardy 22:14, 19 Sep 2004 (UTC)

I'm used to engineering notation, where , so you can use f for frequency and avoid confusing Fs. Then of course there's .  :-) - Omegatron 02:40, Sep 19, 2004 (UTC)

In fact, I vote that we change f(t)->F(ω) into some other letter (I know you won't use X, but maybe g?), to avoid confusing newcomers to the Fourier transform. - Omegatron 02:42, Sep 19, 2004 (UTC)

I didn't make the choice of notation here. I'll leave your suggestion to somebody else. CSTAR 02:46, 19 Sep 2004 (UTC)

One thing the "engineering notation" does not allow for is the idea that functions have values. E.g., if f(x) = x³ for all values of x, then f(2) is the value of that function at 2, and is equal to 8. If you say f(ω) is the function to be transformed, and g(t) is the transformed function, then g(2) should be the value of the transformed function at the point t = 2. But if you use the "engineers' notation" and write , then you cannot plug 2 into the left side. But watch this: is the result of plugging 2 into the transformed function.

One thing to be said for the difficulties introduced by the engineers' notation that are avoided by the cleaner, simpler, but more abstract "mathematicians' notation", is that perhaps sometimes one ought not to be evaluating these functions pointwise anyway! But that's a slightly bigger can of worms than what I want to open at this moment ... Michael Hardy 22:38, 19 Sep 2004 (UTC)

All I meant by "engineering notation" was not using the letter f as a function, since it would too easily get confused with frequency as a variable in lowercase and script F for the Fourier transform in uppercase, leading to protracted discussions about why there are two capital Fs and why one is script and the other is not and what it all means. :-)
*Reads the recommended articles on bound variables* - Omegatron 03:40, Sep 20, 2004 (UTC)
Let me drop in w/ my $0.02 -- I agree w/ Mike Hardy that conventional engineering notation (which puts function arguments in inappropriate places) is imprecise & misleading, and the usual "pure math" notation is superior. That said, it is certainly confusing & unnecessary to have different kinds of "F" running around; making technical distinctions based on font types is problematic IMHO. So, Omegatron, I'm not opposed to replacing f by g, likewise F by G, throughout the article. I'll consider doing that myself. Regards & happy editing, Wile E. Heresiarch 15:11, 20 Sep 2004 (UTC)
Check Continuous Fourier transform as well. - Omegatron 15:31, Sep 20, 2004 (UTC)
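The distinction drawn in this thread — 𝓕 acts on a function and returns a function; only the result is evaluated at a point — maps directly onto languages with first-class functions. A Python sketch (the name fourier, the test function e^(−|t|), and the integration grid are illustrative assumptions, not from the article):

```python
import math, cmath

def fourier(s, t_min=-20.0, t_max=20.0, n=4000):
    """The operator F: takes a *function* s and returns a *function* S,
    with S(omega) = integral of s(t)*e^(-i*omega*t) dt
    (angular-frequency form), approximated by the trapezoid rule."""
    dt = (t_max - t_min) / n
    def S(omega):
        acc = 0j
        for k in range(n + 1):
            t = t_min + k * dt
            w = 1.0 if 0 < k < n else 0.5
            acc += w * s(t) * cmath.exp(-1j * omega * t) * dt
        return acc
    return S  # a function -- this is F(s), not F(s(t))

s = lambda t: math.exp(-abs(t))  # s is a function
S = fourier(s)                   # F(s) is also a function
# Known pair: the transform of e^(-|t|) is 2/(1 + omega^2)
print(abs(S(0.0) - 2.0))  # small
print(abs(S(1.0) - 1.0))  # small
```

Here S(1.0) plays the role of 𝓕(s)(ω) at ω = 1: the operator is applied to the function first, and the point comes last.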

Involutory Fourier transform

The formula for the discrete Fourier transform

can be simplified because


This convention, that an n'th root of unity is written 1^(1/n) rather than e^(2πi/n), is convenient. Introducing the analytical concepts e, π, and i into an algebraic context is confusing and pointless.

The transform can be modified a little by taking the square root of n and by taking the complex conjugate of f.

Now this transformation is an involution. The procedure that transforms f into x is exactly the same as the procedure that transforms x into f. This is nice.

PS: I inserted the note "Moreover: reality in one domain implies symmetry in the conjugate domain."

Bo Jacoby 13:18, 8 September 2005 (UTC)
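Bo's construction is easy to check numerically. A Python sketch of the transform as described (unitary 1/√N normalization, conjugation of the input, exponent e^(2πi jk/N); the function name and test data are arbitrary choices) confirms that applying it twice returns the original sequence:

```python
import math, cmath

def invol_dft(x):
    """Involutory variant: conjugate the input and use a unitary
    1/sqrt(N) normalization with exponent e^(2*pi*i*j*k/N)."""
    N = len(x)
    return [sum(z.conjugate() * cmath.exp(2j * math.pi * j * k / N)
                for j, z in enumerate(x)) / math.sqrt(N)
            for k in range(N)]

x = [1 + 2j, 0.5, -1j, 3.0]
xx = invol_dft(invol_dft(x))                 # apply the same map twice
err = max(abs(a - b) for a, b in zip(x, xx))
print(err)  # essentially zero: the transform is its own inverse
```

Note the map is antilinear (because of the conjugation), which is how it manages to be its own inverse without any sign change in the exponent.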

Why no involution?

15:55, 21 September 2005 Stevenj (tricks to compute the inverse DFT in terms of the forward DFT are well known, but belong in the DFT article (they don't define a fundamentally different FT); "involutary" is just a general adjective)

Stevenj removed my entry on the involutory Fourier transform. I think it should be put back. It makes life easier if you don't have to worry about the difference between forwards and backwards. Note that it is not merely "a trick to compute the inverse DFT in terms of the forward DFT" but a true identification of the inverse and the forward transform. The Fourier integral and Fourier series should be considered limiting cases of the DFT, so the involutory version is perfectly general and not specific to the discrete Fourier transform. Please explain your point of view more carefully. See involution. Bo Jacoby 12:23, 22 September 2005 (UTC)

It is closely related to well-known tricks to compute the forward in terms of the inverse transform, which are themselves trivial consequences of the close relationship between the FT and inverse FT due to unitarity. Moreover, I can't find any outside use of the term "Involutory Fourier Transform" (or the corresponding definition) so I don't think we should highlight this particular term as if it were widespread—do you have any references? (In any case, conceptually it doesn't really change the features of the transform.)
I added a mention of it, however, to the discrete Fourier transform article, which I also expanded to give several well-known tricks to compute the inverse DFT from the forward DFT. (Yes, other Fourier variants have similar properties, but the point is that this is essentially a property of the transforms, and belongs on their respective pages, rather than a fundamentally different transform.) What you called the "involutory Fourier transform" is a consequence of a standard conjugation trick. In fact, it is actually related by a factor of 1+i to the discrete Hartley transform, which is well known to be involutory. —Steven G. Johnson 18:41, 22 September 2005 (UTC)

Vote for new external link

Here's my site full of Fourier transform example problems. Someone please put it in the external links if you think it's helpful!

Excuse me!

The introductory paragraph for any section in this encyclopedia should be accessible to all readers.

What was wrong with my input? Was it too easy to understand?:

" It is a mathematical technique for expressing a waveform as a weighted sum of sines and cosines."

Some introductory info is appropriate as:

It usually happens in mathematical analysis that certain given functions are to be operated on, and other functions or numbers are thus obtained. Often, the operations used involve integration, and here is found a very general class of operators, the so-called integral transforms. The simplest integral transform is the operation of integrating.

---Voyajer 04:34, 30 December 2005 (UTC)

What was wrong is that it was repetitive and redundant. The introduction already says that the transform re-expresses a function in terms of sinusoidal basis functions, i.e. as a sum or integral of sinusoidal functions multiplied by some coefficients ("amplitudes"). —Steven G. Johnson 19:41, 30 December 2005 (UTC)
Please try to make the intro more accessible like Voyajer's version. — Omegatron 19:51, 30 December 2005 (UTC)


I concur; the introduction hardly explains what the FT is to the layman.

*rush to a square-wave

Can anyone tell me why the first term of the Fourier series is zero when expanding a square wave?

I have done the exercise about it..

thanks a lot--HydrogenSu 14:53, 19 January 2006 (UTC)

a₀ is the DC (zero frequency) offset. If the average value of a square wave is zero, a₀ will be zero. Madhu 19:37, 10 August 2006 (UTC)
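Madhu's point can be illustrated in a couple of lines of Python: the zeroth Fourier coefficient is just the average over one period, so a zero-mean square wave has a DC term of 0, and adding an offset shifts it by exactly that amount (the helper name and sample counts are arbitrary choices):

```python
def a0(samples):
    """The DC Fourier term is the average value over one period."""
    return sum(samples) / len(samples)

period = [1.0] * 500 + [-1.0] * 500        # zero-mean square wave
print(a0(period))                          # 0.0
print(a0([v + 2.0 for v in period]))       # 2.0: the DC offset
```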

DFT definition given is actually inverse DFT?

If you consider this article: [1], it appears that the definition of DFT you have here on this page is actually the inverse DFT. Whose page is wrong?

  • Answer: Neither. The DFT (or FT) can be defined with either sign in the exponent as long as the inverse is defined such that applying transform and inverse in succession gives the original sequence (or function). There is also some freedom as to whether to put the , , or on the transform or inverse for discrete, finite, and infinite cases respectively. I suppose I should add this to the page.--Kevmitch 02:39, 1 April 2006 (UTC)
  • Yeah, it's tough to say. This seems to be covered on most of the pages covering the specific types of Fourier transforms, such as the beautiful table in Continuous Fourier transform. How specific do we want to get on the page that covers the Fourier transform in general?--Kevmitch 03:04, 1 April 2006 (UTC)
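Kevmitch's point about sign freedom can be sketched in a few lines of Python: either sign in the exponent works, provided the inverse uses the opposite one (here the 1/N is placed on the inverse; the function names and test data are illustrative):

```python
import math, cmath

def dft(x, sign):
    """DFT with a caller-chosen sign in the exponent."""
    N = len(x)
    return [sum(z * cmath.exp(sign * 2j * math.pi * j * k / N)
                for j, z in enumerate(x))
            for k in range(N)]

def idft(X, sign):
    """Inverse: opposite sign, with the 1/N factor placed here."""
    N = len(X)
    return [v / N for v in dft(X, -sign)]

x = [1.0, 2.0, 3.0 - 1j, 0.25j]
errs = []
for s in (-1, +1):  # either sign works if the pair is consistent
    y = idft(dft(x, s), s)
    errs.append(max(abs(a - b) for a, b in zip(x, y)))
print(errs)  # both round-trip errors essentially zero
```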

Definition and transformations

James Nilsson and Susan Riedel's book Electric Circuits, seventh edition, defines the Fourier series like this:

and inverse:

I couldn't find a matching type of Fourier transform on here, and they just call it "Fourier transform" as if there's no other types. It goes on to give this table of transforms:

f(t)              F(ω)
A (constant)      2πA·δ(ω)
sgn(t)            2/(jω)
u(t)              π·δ(ω) + 1/(jω)
etc.

Is there something I'm missing? Fresheneesz 02:16, 5 June 2006 (UTC)

You didn't see it because you didn't click on continuous Fourier transform. That paragraph does warn you that you have to follow that link to see it, but maybe it should be explicitly displayed here. I'll be back. Michael Hardy 18:49, 5 June 2006 (UTC)
Actually, I did click on that link. Problem is, it defines it differently than I did above. Neither the tables, nor the definition quite match right. Everything except the inverse transform seems to be multiplied by in the example I give above (compared to the one you gave a link to). I'm still a little confused as to why Nilsson and Riedel's definition is a constant multiple of the one on wikipedia. I'm even more confused as to why the inverse transform does not match that pattern. Fresheneesz 02:39, 7 June 2006 (UTC)

Strange introduction

Why does this article start off with writing down the inverse transform?!?

It's not my doing, but I understand the thinking. What it actually starts off with is the assertion that s(t) can be represented as a continuous sum of complex exponentials, and the amplitudes and phases of those exponentials are given by a function, S(ω). It is of course appropriate to formalize that concept mathematically, which also happens to be the formula we call an inverse transform. Thus the inverse transform (for lack of a better name) is actually an equation to be solved (for S(ω)), and the Fourier transform is the solution to that equation. It's actually more logical than the traditional approach of pulling a Fourier transform definition out of thin air. --Bob K 16:56, 13 July 2006 (UTC)

Also there is no mention of PDE's anywhere. This is where the subject found its birth and much of its most fruitful application. I will try to add something to this effect but input would be appreciated. Also, as defined here (as with any other definition I am familiar with) the DFT takes a finite vector and produces a vector of the same length.

That is not how I read it. I know you are looking at the part. But that is explained in the preceding paragraph which states: "Due to periodicity [of the DTFT], the number of unique coefficients (N) to be evaluated is always finite, leading to ... ". --Bob K 04:39, 14 July 2006 (UTC)
And please sign your entries with "~~~~"

But in the section "Family of Fourier transforms" we include it on this list as periodic. There should be some explanation there as to what is meant. Are we thinking of the DFT living on ?

Notation once again

This article establishes that the Fourier transform of a function s(t) is a new function and this relation is written

However, a frequently appearing notation (at several places in Wikipedia) is to write it simply as

There has already been a discussion above about the validity of the former notation. But it should also be recognized that the latter notation, while formally incorrect, has the advantage of allowing us to insert an arbitrary function expression into the transform operator. For example, in a compact way we can express that the Fourier transform of a sinc function is a rectangular function, as

The meaning of the operator has here changed into a transform table lookup, and this way of using appears handy and is probably the motivation for its frequent use. There are limitations to this notation as well, for instance writing

implicitly assumes that the function being transformed is a function of t, not of a.

Here are my points

  • Should THE NOTATION ISSUE be completely ignored in the article even though it appears to be a source of controversy on the talk pages of several articles in Wikipedia? Maybe a section which explains THE NOTATION ISSUE rather than to have it on this and other talk pages?
  • It is possible to use a notation according to
    • is the Fourier transform operator as defined in this article
    • is a transformation lookup operator which explicitly gives the transform of f as a function of t? This implies that .

The advantage of the second form of the operator is that it allows a formalization of the lookup functionality which also makes explicit which variable is used in the transformation. --KYN 09:34, 4 August 2006 (UTC)
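The sinc/rect pair used as the example above can be sanity-checked numerically in the inverse direction, where the integral runs over a finite interval: ∫ from −1 to 1 of e^(iωt) dω = 2·sin(t)/t, independent of how one normalizes the forward transform. A Python sketch (trapezoid rule; the function name and step count are arbitrary choices):

```python
import math, cmath

def inv_ft_rect(t, n=2000):
    """Integrate e^(i*omega*t) over omega in (-1, 1) by the trapezoid
    rule; analytically this equals 2*sin(t)/t."""
    dw = 2.0 / n
    acc = 0j
    for k in range(n + 1):
        w = -1.0 + k * dw
        wt = 1.0 if 0 < k < n else 0.5
        acc += wt * cmath.exp(1j * w * t) * dw
    return acc

for t in (0.5, 1.0, 3.0):
    print(abs(inv_ft_rect(t) - 2 * math.sin(t) / t))  # all small
```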


Using an arrow instead of a comma makes it more readable.

The function f, defined by y = f(x), can be written f = (y←x), pronounced: "y as a function of x". This nice notation allows you to distinguish the power function, (x^y←x), from the exponential function, (x^y←y).

The transform

F = (S←s) = ((S(ω)←ω)←(s(t)←t))

means that

S = F(s)
(S(ω)←ω) = F(s(t)←t)


S(ω) = (F(s))(ω) = (F(s(t)←t))(ω)

This notation does not identify the function f with the function value f(x), using a special letter x to identify the independent variable. Bo Jacoby 12:07, 4 August 2006 (UTC)

Sure, this also solves the problem and if it is more established, I will go for it. Next issue: is it relevant to present this somewhere in the article, giving an example of a notation which both is formally correct and can be used for the occasions when an explicit function of a specific variable needs to be transformed? --KYN 18:23, 5 August 2006 (UTC)

Where to put it

The natural place to put the notation is in the article function (mathematics), where presently the sum of two functions is defined by (f+g)(x) = f(x)+g(x) rather than by f+g = (f(x)+g(x)←x), thus defining the function value (f+g)(x) rather than the function f+g. For functions of several variables and for functions whose arguments or values are functions rather than simple variables, the notation y = f(x) instead of f = (y←x) is severely insufficient. The lambda calculus writes the independent variable to the left, f = λx.y, rather than to the right as in f = (y←x). I prefer the latter convention because the formula for composition of functions,

(g o f)(x) = g(f(x)) for all x in X

is written without breaking the order like this:

g o f = (g(f(x))←x) for all x in X .

Anyway, when the arrow points from the independent to the dependent variable there is not much room for misunderstanding.

Note also the elegant notations for inverse functions such as the natural logarithm and for the square root:

f⁻¹ = (x←f(x))
log = (x←e^x)
sqrt = (x←x^2)

I will make a note on Talk:Function (mathematics) to ask the advice of the mathematicians there.

Bo Jacoby 11:40, 7 August 2006 (UTC)

I don't mind that a more extensive discussion about the notation of functions and their values goes into function (mathematics), but I also believe that a short summary applied to the notation of Fourier transforms is useful in the Fourier transform article. The unorthodox notation is wide-spread and it would then be easier to justify changes by just referring to that summary. Otherwise the justification lies one hyper-link further away and is easier to ignore. --KYN 22:12, 7 August 2006 (UTC)

I agree. Is this understandable? :

The Fourier transform F = (S←s) transforms a function s = (s(t)←t) of time t into a function S = (S(ω)←ω) of frequency ω.
S = F(s)
The amplitude is
S(ω) = F(s)(ω)

Your examples look like this:

sinc = (sin(t)/t←t),
rect = ([−1<ω<+1]←ω), using Iverson bracket,
F(sinc) = rect.

There is no need to use special names for dummy variables, nor is there any need for special function names:

F(sin(x)/x←x) = ([−1<x<+1]←x).

Bo Jacoby 09:16, 8 August 2006 (UTC)

Wikipedia is not the place to promote new/obscure notations

The problem with the notation that Bo is suggesting for the Fourier transform, and for functions in general, is that it is much less common than the usual notations which are already employed in the articles. If the notation is established but obscure, it might be worth mentioning in function (mathematics), but a wholesale adoption in other articles is not a good idea.

Given that Bo has a history of trying to promote his own invented nonstandard notations in Wikipedia, however, I would also request that he provide a mainstream citation for his notation before inserting it into any article.

—Steven G. Johnson 17:03, 8 August 2006 (UTC)

Steven seems to have an excellent point here. I agree. Thenub314 23:50, 8 August 2006 (UTC)

To StevenJ the word standard means known to me, and the word nonstandard means unknown to me; he makes no reference to standardization documents. StevenJ has a history of deleting everything that is new to him. KYN's complaint that StevenJ's notation is unsatisfactory makes no impression on StevenJ. Still StevenJ does me too much honor by believing that I am the original inventor of the trivial shorthand notation 1^x = (e^(2πi))^x for e^(2πi x), or of the straightforward explicit notation f = (y←x) for the function defined by y = f(x), or of the involutory N'th order Fourier transform: F_N = ( (N^(−1/2) Σ_j x_j* 1^(jk/N)←k) ← (x_j←j) ). The article is very bad, but, according to StevenJ, it is standard, and that is all that matters. Progress is suspended until StevenJ gets majorized by fellow wikipedians. Meanwhile, study the notation for functions used in the J programming language. (I am the inventor, however, of the ordinal fraction technology, and my WP article on Ordinal fraction was deleted on StevenJ's request). Bo Jacoby 07:30, 9 August 2006 (UTC)
Hi Bo, should I be surprised by the vehemence of your invective? (Your protests seem rather strained, as in both the 1^x and "involutary" DFT cases, in addition to ordinal fraction, you admitted that you didn't know of any publication that used your proposed notations/definitions. Why is Wikipedia policy so hard for you to comprehend?) By the way, I'm aware that x^2←x is sometimes used as function notation (although I maintain that it is uncommon in this context), but I don't think one often sees it mixed with "=" in the way you suggest, as in f = (x^2 ← x) instead of f(x)=x^2. And notice that I simply asked you to provide mainstream citations, which would have supported your case far better than a diatribe. (If the best you can do is an uncommon programming language that appears to only loosely match the notation you suggest...) —Steven G. Johnson 04:55, 11 August 2006 (UTC)
Hi Steven. You are aware that x^2←x is sometimes used as function notation although you maintain that it is uncommon in this context. Has the wikipedia policy requirement changed from known to common? If a is a mathematical expression, then of course you can name it: b=(a). So if x^2←x is a function, of course you can name it if you want: f=(x^2←x). The point is that you do not need to name it, as you do when you write f(x)=x^2. You can write a function value elegantly: (x^2+yx+1←x)(4) = 4y+17. (Oh I forgot, you are not interested in good or bad, you are interested in standard, although you never refer to standardization documents.) Bo Jacoby 11:07, 11 August 2006 (UTC)

I am unconvinced that we have a severe notational problem

I am unconvinced that we have a severe notational problem. It seems to me that in this case the proposed cure is worse than the disease (for the reasons StevenJ gave). Thank you for suspending progress. Also, I don't agree to the relevance of a programming language, because it is created for a completely different set of constraints. --Bob K 18:24, 9 August 2006 (UTC)

Hi Bob. What will you do to help KYN? Bo Jacoby 11:07, 11 August 2006 (UTC)
Hi. KYN is not proposing a change in notation (see below). No notation is perfect. I have suggested that we focus on specific formulas that seem unclear. I expect they can be remedied by supplementary annotation (unlike a computer language). KYN is free to ask my opinion. If I have one, I will give it. --Bob K 13:55, 11 August 2006 (UTC)
Yes, there is a notation problem. Severe or not depends on the reader/writer. What about writing something like
This is, ..., what? In the context of this article, it is a Fourier transform, but is it a transform of rect being a function of x, or of t, or is it a 2D transform of rect being a function of both x and t? One answer to the posed problem would be: Don't use this notation unless it is unambiguous which variables are involved in the transformation. Another answer would be, in the case that you want to transform a function of A SPECIFIC variable, use notation XYZ, where XYZ is whatever can be agreed upon. A third alternative, which appears to be the result of the above discussion, is that no established notation exists for this case, even though it appears to be useful (see example above). My point is that regardless of which answer this discussion provides it should be made explicit in the article simply because it appears to be a source of questions in various areas of Wikipedia. --KYN 10:48, 11 August 2006 (UTC)
I agree with you. And no matter what we do there will always be confused readers. If we switch notations now, I will probably be one of them. The example you gave above is devoid of context, which never happens here. Or if it does, then that is the problem, not the notation. And let's not compare technical writing to computer languages, because compilers are ignorant of context, use a more limited symbol set, and ignore annotations. More helpful would be to cite specific articles that seem to be problematic, and discuss all the possible specific remedies. --Bob K 11:34, 11 August 2006 (UTC)
My point is not necessarily that we should change notation, but that this discussion should be included in the article. Since I haven't seen anyone in support of this idea, I guess we shouldn't? --KYN 12:00, 11 August 2006 (UTC)
In general, yes, annotation and clarification is a good thing. In this case, my opinion is that the article is good enough. But that is based on my reading of the article, not on my reading of this page. So maybe I am missing an important point. What specific formula are we trying to clarify? --Bob K 13:55, 11 August 2006 (UTC)
Here are some examples of the informal use of (and other operators on function spaces) which can be found in Wikipedia:

In consequence of this confusion I suggest a section like:
Section About notation:
The Fourier transform is a mapping on a function space. This mapping is here denoted and is used to denote the Fourier transform of the function s. This mapping is linear, which means that can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the signal s) can be used to write instead of . Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value for its variable, and this is denoted either as or as . Notice that in the former case, it is implicitly understood that is applied first to s and then the resulting function is evaluated at , not the other way around.
In mathematics and various applied sciences it is often necessary to distinguish between a function s and the value of s when its variable equals t, denoted s(t). This means that a notation like 𝓕{s(t)} formally must be interpreted as the Fourier transform of the value of s at t, which must be considered an ill-formed expression. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, 𝓕{rect(t)} = sinc(ω) is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, and 𝓕{s(t + t₀)} = 𝓕{s(t)} e^{iωt₀} is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of t, not of t₀. If possible, this informal usage of the 𝓕 operator should be avoided, in particular when it is not perfectly clear which variable the function to be transformed depends on.
Well, something along these lines, anyway. --KYN 21:47, 11 August 2006 (UTC)
Thank you. I like it. I think many readers will appreciate it. --Bob K 21:57, 11 August 2006 (UTC)

I agree that "informal usage of the 𝓕 operator should be avoided, in particular when it is not perfectly clear which variable the function to be transformed depends on."

The formula tells which variable the function to be transformed depends on.

There is no need for calligraphy. The formula is: F(s(t+u)←t) = F(s)·(e^{i·ω·u}←ω).

Linearity is irrelevant in the context.

Bo Jacoby 14:31, 13 August 2006 (UTC)

Besides being unusual, that notation doesn't even make it clear for me what the F(s) part means. E.g., it could be a constant.
For the record, my preference is this:
And if the audience is really too green to understand that, we probably need more words rather than fewer definitions.
--Bob K 19:46, 13 August 2006 (UTC)

Hi Bob. The symbols t and t0 enter symmetrically on the left hand side of , but asymmetrically on the right hand side. What is ? The answer depends on whether u or v is the independent variable, and the readers are clueless. The explicit multiplication sign improves clarity: F(s(t+u)←t) = F(s)·(e^{i·ω·u}←ω). Rather than a formula for the function, you might prefer a formula for the function value: F(s(t+u)←t)(ω) = F(s)(ω)·e^{i·ω·u}. The expression F(s) = F(s(t)←t) is the transform of the function s. People are acquainted with expressions giving function values, while the Fourier transform is about functions. You need some notation for a function f apart from the function value f(x) or f(t) or f(ω). Bo Jacoby 09:04, 14 August 2006 (UTC)

Hi Bo. Yes, I understand that is ambiguous. All I am saying is that the ambiguity can usually be resolved by context and/or by some commentary. And I think the latter is a fairly rare necessity. Also, most of the articles here use t₀ instead of u or v, and that is sufficient for most readers. But if stand-alone non-ambiguity is really necessary, then I would prefer something like this:
Also, if I had to choose between F(s)(ω)·e^{i·ω·u} and F(s)·(e^{i·ω·u}←ω), I would choose the former. I just don't like that darn arrow.   :-)
And by the way, these html expressions are nasty to type!
Have a great day.
--Bob K 11:52, 14 August 2006 (UTC)

Thanks, Bob. When it comes to liking and disliking there is no point arguing. I trust that time will settle the question, so that either you will learn to love the arrow (which you seriously doubt because it looks strange), or I will learn to love the ambiguity (which I seriously doubt because unambiguous notation makes commentary unnecessary, which is lovely). The html is nasty mainly because of the italics. I write first without the italics, F(s)(ω)·e^{i·ω·u}, and then I insert two apostrophes around each variable name: F(s)(ω)·e^{i·ω·u}. Bo Jacoby 12:14, 14 August 2006 (UTC)

Actually, for an encyclopedia article, I think most concepts should be explained in commentary if possible, even if it is redundant with the mathematics. And especially Wikipedia, where anybody can write an article or edit one. You've heard the old saying... "If you think you understand something, try explaining it to someone who doesn't." --Bob K 17:14, 14 August 2006 (UTC)
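Since the thread above keeps returning to the shift property, here is a small numerical sketch of its discrete (DFT) analogue: if y[n] = s[n + u] circularly, then Y[k] = S[k]·e^{+2πiku/N}. This is my own illustration, not from the discussion; the Gaussian test signal and the shift amount u = 5 are arbitrary choices.

```python
import numpy as np

# Numerical check of the shift property in DFT form:
# y[n] = s[n + u] (circular)  =>  Y[k] = S[k] * exp(+2*pi*i*k*u/N).
N = 64
n = np.arange(N)
s = np.exp(-0.5 * ((n - N / 2) / 4.0) ** 2)   # arbitrary test signal

u = 5                                          # shift amount, in samples
s_shifted = np.roll(s, -u)                     # s[n + u], circular shift

lhs = np.fft.fft(s_shifted)
rhs = np.fft.fft(s) * np.exp(2j * np.pi * n * u / N)

print(np.allclose(lhs, rhs))                   # → True
```

Whether one writes the continuous statement as 𝓕{s(t+u)} = 𝓕{s}·e^{iωu} or with Bo's arrow, the numerical content is the same identity.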

The dot notation

Hi, the topic of this thread is twofold: (1) is there an established notation which can be used for saying, e.g., that the FT of sinc is rect in a formally correct way, and (2) should the notation issue be mentioned in the article. Bo has proposed a notation which addresses (1) and which is OK with me, but since I haven't seen any evidence that it is established I don't want to write about it, since Wikipedia should not be the place to introduce new notations which do not exist in the literature. An alternative to Bo's arrow can be found in Daubechies' "Ten Lectures on Wavelets", where the anonymous variable is simply denoted with a dot: F(x(·)). To my surprise the informal notation can be found in hard-core math textbooks, for example see Debnath & Mikusinsky, "Introduction to Hilbert Spaces with Applications" (2ed, page 194). About (2), I will try to figure out something based on the text I wrote above and put it into the article shortly. --KYN 15:48, 14 August 2006 (UTC)

Hi KYN. A dot as a placeholder for the independent variable is worse than x or t or ω or k. The formula F(s(t+u)←t) = F(s)·(e^{i·ω·u}←ω) becomes F(s(·+u)) = F(s)·e^{i···u}, which is spooky. Bo Jacoby 09:15, 15 August 2006 (UTC)

The dot notation has a place. But it needs to know that place and stay there. When the point is simply that the Fourier transform of sinc is rect, it avoids the irrelevant question of whether we're talking about time or space or some other dimension. --Bob K 11:31, 15 August 2006 (UTC)

When stating that the Fourier transform of sinc is rect, no dot is needed: F(sinc) = rect. I know of no place where the dot notation is sufficient. It is limited to functions of one variable only, and it looks like a multiplication sign. It has no future. Bo Jacoby 12:52, 15 August 2006 (UTC)
F(x(·)) confirms for me that x is a function. I am not so sure about F(x). --Bob K 14:42, 15 August 2006 (UTC)

Standard notation for F(x(·)) is F(x(t)). You cannot tell, however, whether you are referring to the value F(x(t)) or the function (F(x(t))←t) or the operator value F(x) = F(x(t)←t) or the operator itself F = (F(x)←x) = (F(x(t)←t)←(x(t)←t)) . Bo Jacoby 10:08, 16 August 2006 (UTC)

"the value F(x(t))" ... that ambiguity is avoided by F(x(·)).   In certain situations, F(x(·)) is better than either F(x(t)) or F(x). --19:49, 16 August 2006 (UTC)


My conclusion of this discussion is:

  • there is no consensus about notation, in particular about how, e.g., "Fourier transform of rect is sinc" should be written in a concise and formally correct way. (Yes, we have seen proposals, but no consensus).
  • the informal/ill-formed 𝓕{s(t)} appears to be frequently used, even in math textbooks. It becomes meaningful only with proper context.
  • there are several proposals for a formal way to rewrite this expression, e.g., Bo's arrow and Daubechies' dot. The arrow notation, although perfectly valid, appears not to be in use in the literature, and the dot notation is also not in frequent use (Daubechies' book is the only example that I have encountered). Given that Wikipedia explicitly should have verifiable content, which I interpret in this particular case as "describe notations which are established in the literature or in common use", I suggest leaving the arrow and the dot out until more evidence of their usage can be demonstrated.

--KYN 19:50, 15 August 2006 (UTC)

The following notation is supposed to be standard, t ↦ s(t), according to the recommendation at Talk:Function (mathematics)#Standard notation. The maps-to arrow must have the special shape ↦, and it must point to the right, otherwise it is not standard. Bo Jacoby 12:30, 17 August 2006 (UTC)

Here is my favorite excerpt from that reference: "In most written mathematics, it is enough to just say (in prose) what the bound variable is when confusion could arise. Since the entire community of mathematicians seems to have agreed that this is precise enough, new notation is apparently not necessary." --Bob K 14:04, 17 August 2006 (UTC)

Bob, you cannot be serious. Which reference are you talking about? Bo Jacoby 12:25, 18 August 2006 (UTC)

Yours! (Talk:Function (mathematics)#Standard notation. ) --Bob K 13:50, 18 August 2006 (UTC)

Don't you consider me and KYN part of the community of mathematicians? Don't you think that Wikipedia should be readable to others than the few mathematicians who can reliably guess the meaning of, say,  ? Bo Jacoby 12:25, 18 August 2006 (UTC)

I did not claim to know the community of mathematicians. Your reference did that. --Bob K 13:50, 18 August 2006 (UTC)

Sorry. I agree that there is no point in solving a nonexistent problem. It is no problem to write f(x) = x² rather than λx.x². However, the notation f(x(t)) is ambiguous, as has been pointed out. We cannot pretend that there is no problem of notation. Bo Jacoby 14:31, 20 August 2006 (UTC)

Is someone pretending that? Nothing is without disadvantages. So finding one is not sufficient justification for change. That just leads us into circles. --Bob K 15:44, 20 August 2006 (UTC)

Yes, BobK, you wrote: "I am unconvinced that we have a severe notational problem". Now we agree that there is a severe notational problem. KYN suggests "to leave the arrow and the dot until more evidence of their usages can be demonstrated". Now we have evidence for the maps-to arrow and the lambda-dot. Is there any reason why we should not start clarifying the article using these notations? Bo Jacoby 16:50, 20 August 2006 (UTC)

Clearly, I did not agree to that. I said nothing is without disadvantages, and that includes your proposals. What I also said to you, 13:55, 11 August 2006, and subsequently echoed by your own reference (CMummert), is to use supplementary annotation (aka prose) where clarification is needed. --Bob K 17:48, 20 August 2006 (UTC)

Somehow, people in mathematics and engineering have learned to live with this "severe notational problem," and the reason is obvious. The meaning of 𝓕{s(t)} is perfectly clear, taken in context, since it only has one reasonable interpretation if you know that 𝓕 stands for the Fourier transform. In the rare cases where clarification is needed, it can be given in prose. This is no reason to depart from the most established and widely understood notation for this subject. "A foolish consistency is the hobgoblin of little minds." —Steven G. Johnson 17:53, 20 August 2006 (UTC)

No reason at all, except to make the article comprehensible to new readers, the so-called 'little minds'. Bo Jacoby 05:59, 21 August 2006 (UTC)

Graphs and pictures

It would be nifty if there were a small graph of the sinc function, etc. in the tables of equations. The continuous graphs could be plotted with nice smooth curves to emphasize that the function is continuous, while discrete graphs could be plotted as more of a bar chart or set of spikes, to emphasize discreteness. Do we really need 4 sets of equations and graphs:

? (If there is some significant difference in the equations and graphs, it would be more visible if they were plotted all on one page, rather than scattered across all 4 articles).

But there are also advantages to scattering, and they outweigh the disadvantages. One advantage is divide-and-conquer. I.e., it allows better focus on small parts of a large picture. Scattering, linking, and search-engines are the advantages of online technology. "web" is a good thing. --Bob K 15:25, 10 August 2006 (UTC)

Normal distribution?

I understand that the normal distribution is an eigenfunction of the Fourier transform. Is this the case? Should the article mention this? —Ben FrantzDale 19:32, 9 August 2006 (UTC)

To me, that seems perfectly appropriate for inclusion in this article. Michael Hardy 23:19, 9 August 2006 (UTC)
More precisely: the density function of the normal distribution is an eigenfunction of the Fourier transform. Michael Hardy 23:19, 9 August 2006 (UTC)
Added. —Ben FrantzDale 00:53, 10 August 2006 (UTC)
Just keep in mind that there are an infinite number of eigenfunctions of the Fourier transform, like sinc()+rect(). Any even-symmetry function added to the F.T. of that (which also has even symmetry) is an eigenfunction. But the Gaussian function is a nice, simple, and intrinsic eigenfunction, and I can't think of another that is. r b-j 05:27, 10 August 2006 (UTC)
We also have polynomials times Gaussians, which as a class provide eigenfunctions of the FT. Or maybe an irreducible representation? There is also the shah or comb function. Anyway, if this subject is to be developed at some length, I suggest doing it in a separate article, "Eigenfunctions of the Fourier transform", rather than putting it into this article, since it is rather long as it is. --KYN 10:30, 11 August 2006 (UTC)

Rbj is correct in that there are uncountably many choices of eigenfunctions, because there are only four eigenvalues (the 4th roots of unity), each with infinite multiplicity. However, arguably the most important eigenfunctions of the continuous Fourier transform are the Hermite functions (of which the Gaussian is a special case). These functions are orthogonal and have some optimal localization properties, I believe. The most appropriate place to mention them, however, would be under continuous Fourier transform. Unfortunately, they don't generalize in an easy way to the other Fourier variants. For example, it is not obvious what is the appropriate discrete analogue of the Hermite functions for the DFT, and this is still being debated in the literature. —Steven G. Johnson 17:59, 15 August 2006 (UTC)
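The Gaussian eigenfunction property discussed in this thread is easy to check numerically. The sketch below (my own illustration; the grid t_n = n/√N is chosen so that the time and frequency sample points coincide for the self-dual pair e^{-πt²} ↔ e^{-πf²}) shows that a sampled, centered Gaussian comes back as itself under a suitably scaled, centered DFT.

```python
import numpy as np

# Sketch: a sampled Gaussian as an (approximate) eigenfunction of the DFT.
# On the grid t_n = n / sqrt(N), the transform pair exp(-pi t^2) <-> exp(-pi f^2)
# makes the time-domain and frequency-domain sample values coincide.
N = 32
n = np.arange(-N // 2, N // 2)
g = np.exp(-np.pi * (n / np.sqrt(N)) ** 2)

# Centered DFT, scaled by the sample spacing 1/sqrt(N) to mimic the integral.
G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))) / np.sqrt(N)

print(np.allclose(G, g))   # → True: the transform reproduces the Gaussian
```

The agreement is only up to aliasing and truncation error, but for this width those errors are far below the default tolerance of `np.allclose`.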

Asymptotic expansion?

Hello, my question is why nobody has pointed out the "asymptotic" behavior of a Fourier transform, in the sense:


Linear Combination?

I wish to raise an issue about the use of the expression "linear combination" in relation to the continuous Fourier transform. Much as I appreciate that the transformed function is similar to the coefficients of a linear combination, it's not what the transformed function IS. For one thing, we could change many of the values of the transformed function, and as long as we did it over finitely many values, or over a similarly null set, we would change really nothing about the essential properties of the transformed function, i.e. it would generate the same function when the inverse transform was applied. So we're not really talking about the individual contributions made by different frequencies, because each frequency taken in isolation contributes nothing. Also, and perhaps this is a pointless thing to argue about, I have repeatedly had it drilled into me by more than one person that linear combinations have to be finite, not infinite and definitely not continuous.

OK, I removed that phrase. If you can improve on the new intro, please feel free.
--Bob K 01:55, 5 September 2006 (UTC)
At Talk:Fourier_transform, I asked if others agree with you. Here is one response:
I disagree with that. While there may be a formal context in which you might shy from using the term "linear combination" to refer to an infinite series or continuous integral, colloquial usage has no such restriction. And while each "individual" frequency has measure zero in the integral unless it is weighted by a delta function, conceptually this is the simplest way for a nontechnical person to think of the Fourier transform (essentially, as a limit of a Fourier series). And after all, the current text talks about "frequency components", which has much the same problem except that it is more vague about what the "components" are. The topic paragraph should be handwavy and conceptual, should be informal for the benefit of the nontechnical reader, and should try to convey a picture of what is going on without worrying about distributions and measurable sets. All of those precise distinctions can be made later in the article. Calling it "a certain linear operator that maps functions to other functions" conveys nothing at all (those readers who know what "linear operator" means will doubtless already have a better idea what the Fourier transform is). —Steven G. Johnson 18:09, 14 August 2007 (UTC)
--Bob K 19:33, 14 August 2007 (UTC)
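The "limit of a Fourier series" picture invoked above can be made concrete with a small sketch (my own illustration, not part of the discussion): synthesizing a square wave from finitely many sinusoids and watching the mean-square error shrink as terms are added, which is the precise sense in which the signal is a limit of linear combinations of sinusoids.

```python
import numpy as np

# Partial sums of the Fourier series of a square wave:
# square(t) = (4/pi) * sum over odd k of sin(k t) / k.
t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
square = np.sign(np.sin(t))

def partial_sum(t, n_terms):
    # Sum the first n_terms odd harmonics.
    return sum(4 / (np.pi * k) * np.sin(k * t) for k in range(1, 2 * n_terms, 2))

# Mean-square error with 1, 5, and 50 sinusoids.
errs = [np.mean((square - partial_sum(t, m)) ** 2) for m in (1, 5, 50)]
print(errs[0] > errs[1] > errs[2])   # → True: more terms, smaller error
```

The convergence is in mean square, not pointwise (the Gibbs overshoot near the jumps never disappears), which is exactly the measure-theoretic caveat raised at the top of this section.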


See Talk:Continuous Fourier transform --catslash 12:19, 4 September 2006 (UTC)

Fourier integral

Fourier integral redirects here, and I was reading Partial differential equation which suggested it is somewhat different from a Fourier transform. Can anyone explain this? - Rainwarrior 16:53, 27 September 2006 (UTC)

It seems strange to me that Fourier integral redirects here. Often in partial differential equations the term Fourier integral operator comes up, which is indeed different from the Fourier transform; could this be what the book was discussing? Thenub314 00:27, 28 September 2006 (UTC)
It wasn't in a book, it was in the Partial differential equation article. Specifically, it was in the Heat equation in one space dimension section. Anyhow, from the way it was described it sounds that the Fourier integral is related to, but distinct from, the Fourier transform. I would like to know more about it, but there is no information about the Fourier integral here.
I went and read it; here they mean Fourier transform. I will edit the article to make that more clear. Thenub314 23:08, 28 September 2006 (UTC)
I'm suggesting that someone who knows about it should amend this article to explain the difference, or at least acknowledge the existence of a "Fourier integral", otherwise the redirect isn't very useful. - Rainwarrior 03:36, 28 September 2006 (UTC)
An article on Fourier integral operators would be nice, but would be an endeavor to write. Perhaps sometime when I am feeling brave. :) Thenub314 23:08, 28 September 2006 (UTC)

Variants of Fourier analysis

This section used to be called "Variants of Fourier transforms", when the whole article was called "Fourier transform". Is it really the analysis that differs, or is it only the transforms?

On another note, I feel that much of this section is redundant, which is discussed in Talk:Fourier transform. I'll see if I can change the section so that it expresses what I meant in that discussion. — Sebastian (talk) 00:25, 8 October 2006 (UTC)

Consistency between formulas

I was trying to standardize the formulas so the comparison is easy. Here's what I ended up with:

Name transform inverse transform
(Continuous) Fourier transform
Fourier series
Discrete-time Fourier transform
Discrete Fourier transform

Now I feel gypped: The formula for the DTFT is basically the same as that for the Fourier series. Both transforms go from a continuous, periodic function to a discrete, aperiodic one. Calling one variable "time" and the other "frequency" is an arbitrary convention that has nothing to do with mathematical reality. Is that really the way it is defined, or are we only looking at the inverse transformation here? I rather expected something like this:

Name transform inverse transform
(Continuous) Fourier transform
Fourier series
Discrete-time Fourier transform
Discrete Fourier transform

Sebastian (talk) 03:28, 8 October 2006 (UTC)

Yes, the DTFT is just an inverse transform. I always wondered why Wikipedia made this distinction but decided not to fight about it too much. Also, from a mathematical point of view I would reverse the titles on your table. Thenub314 12:42, 8 October 2006 (UTC)
The reason for the distinction is that the DTFT is a forward transform (time-to-frequency), as its name correctly states. It converts a discrete-time function into a periodic frequency function. Fourier series converts a periodic [time] function into a set of discrete [frequency] components. --Bob K 22:03, 9 October 2006 (UTC)
I understand this; my point was: given a discrete function, simply forget that you call its domain time and decide instead to call it frequency; then the transform we are talking about is the same as the inverse transform for the Fourier series. I am not suggesting any change to the article, just letting Sebastian know I agree that "Calling one variable 'time' and the other 'frequency' is an arbitrary convention that has nothing to do with mathematical reality." But I understand it may be useful for intuition to make such a convention. Thenub314 23:47, 9 October 2006 (UTC)
Sebastian, if you are just learning about this stuff for the first time, and you don't even know which one is the forward transform and which is the inverse, I'm not sure if you should be the one to make wholesale changes in notation. And it's not Wikipedia that makes the distinction between the DTFT and the Fourier series, it's the real world — because the practical applications are quite different. I'm not sure we should try so hard for consistency in this article that we adopt notations that conflict with what everyone uses in practice and with what is used in the main Wikipedia articles on each subject. —Steven G. Johnson 14:22, 8 October 2006 (UTC)
Note that the comparison table already in the article is now messed up, too, because the third column mixes up the forward and the inverse transforms. —Steven G. Johnson 16:06, 8 October 2006 (UTC)

This is why I didn't make a fuss. Stevenj is more of an expert than I am on what the real world does. I didn't mean to imply this article doesn't make the distinction for a good reason; I simply never understood what it was. Thenub314 23:47, 9 October 2006 (UTC)

Sebastian says "Calling one variable "time" and the other "frequency" is an arbitrary convention that has nothing to do with mathematical reality. "
I'm going to say yes, it has mathematical reality, and no, because it doesn't have to be time. Integral transformations are bijective maps between two CONJUGATE (not complex conjugate) domains, and co-conjugate domains are dimensioned as mutual inverses of each other. So in the case of time signals (t domain), the conjugate domain is dimensioned 1/t (whether rad/s or Hz). In the 3-D space domain, the dimension length (m) would have a conjugate domain dimensioned 1/m (whether rad/m or cycles/m).
This conjugate mapping holds true for the other integral transforms (DFT, DCT, Laplace and Z-transform): the conjugate domains are mutually inversely dimensioned. The Laplace and Z-transforms are more complicated because the transform of a real function (graphed in 2-D), say for example in time t, is a complex function of a complex variable (the 'complex frequency' domain), graphed in 4-D. However, the imaginary and real parts of the co-conjugate domain are still dimensioned as 1/t. The imaginary part of each point in the domain appears in the exponent of e in the inverse transformation, which generates an infinitesimal complex sinusoid, paired as a cofactor of t in the exponent, if the point is in the integration path of the inverse transformation. The real part of the domain point (in the integration path) is also in the exponent of e, and when paired with t, generates a damping exponential starting at t=0 for the previous sinusoid. Notice that the real and imaginary parts of the domain point are both multiplied by t to generate an infinitesimal sinusoid, and so must be dimensioned as 1/t to appear in the exponent for the inverse transformation. In this way the 'complex frequency' domain is characterized as complex frequency determined only by the imaginary part, with the real part not associated with frequency per se, but rather with damping rate. Groovamos (talk) 00:27, 11 May 2016 (UTC)
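The point raised earlier in this section, that the DTFT formula is the Fourier-series machinery read in the opposite direction, can be demonstrated numerically. The sketch below (my own illustration; the random test sequence and grid size are arbitrary choices) builds the periodic function F(ω) = Σ c[n] e^{-iωn} from a sequence c, then recovers c by numerically integrating the Fourier-series analysis formula over one period.

```python
import numpy as np

# Duality sketch: a sequence c[n] and the periodic function
# F(w) = sum_n c[n] exp(-i w n) form a Fourier-series pair, so the
# DTFT "forward" formula is Fourier-series synthesis read backwards.
rng = np.random.default_rng(0)
n = np.arange(-3, 4)                    # indices of a short sequence
c = rng.standard_normal(len(n))

M = 1024                                # quadrature grid over one period
w = 2 * np.pi * np.arange(M) / M
F = np.exp(-1j * np.outer(w, n)) @ c    # DTFT of c, sampled on the grid

# Fourier-series analysis: c[m] = (1/2pi) * integral of F(w) exp(i w m) dw
c_rec = np.array([np.mean(F * np.exp(1j * m * w)) for m in n])

print(np.allclose(c_rec.real, c))       # → True: the sequence is recovered
```

Nothing in the computation cares whether the discrete index is called "time" or "frequency", which is Sebastian's and Thenub's observation in code form.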

Informal Language

Considering this is such a technical article, why do I come across the word "jaggies"? 19:22, 24 November 2006 (UTC)

Added basic explanation

16-Oct-2007: To handle concerns of "too technical" (or "too simple"), I have expanded the article with a new bottom section, Basic explanation. That section gives a simple explanation for "Fourier analysis" (plus the applications and branches of mathematics used), without cluttering the main text of the article. I think such a bottom section is about the best way to explain many novice issues about the subject, without annoying the more advanced readers about the background basics, such as using "formulas in integral calculus" and "algebra". I have removed the "{technical}" tag at top (for this year). -Wikid77 23:17, 16 October 2007 (UTC)

I don't agree with the explanation of DFT in terms of numerical integration.
--Bob K 15:38, 17 October 2007 (UTC)
In addition to the fact that the explanation is not very correct, it's also strange how it's included with the funny "Also see bottom" note; is this some standard thing I was not aware of, or a new hack invented for the purpose? Anyway, I'd rather not see simple explanations if simple means wrong. Dicklyon 17:45, 17 October 2007 (UTC)
25-Oct-2007: Use of the top hatnote often ties to name-related articles (such as "see Fourier rock band"), but it has been used in German-related articles to note alternate German spellings. Rather than tie to another article, I linked the hatnote to the new bottom "Basic explanation" to avoid cluttering the intro section, from the viewpoint of advanced readers. -Wikid77 12:59, 25 October 2007 (UTC)
I edited it down some; the one on the DFT page I took out completely, but we could consider fixing it and putting it back. However, to have the DFT represented as an approximation to the FT is just not right at all; let's don't go there. Dicklyon 05:07, 18 October 2007 (UTC)
  • 25-Oct-2007: Actually, books explaining the DFT do refer to the concept of the square signals as being approximations to the sine-wave signals, but they also note the exact fit for discrete signals, so I added that clarification. I used the term "approximation" and the explanation from numerical integration to explain the connection between the discrete and continuous forms of Fourier analysis. I have been explaining the math (for both calculus and finite math), as well as the engineering applications, in that bottom section "Basic explanation", which has space for more explanation and examples; since it is at the bottom, it does not clutter the view for more advanced readers. The explanation is intended to cover several different aspects, not just one person's view of the subject. I also want to explain why the DFT formula uses exponential functions, rather than "sine(x)" as might be expected with sine-waves. -Wikid77 12:59, 25 October 2007 (UTC)

I appreciate what you are attempting to do. And I appreciate that you are trying to keep the intro uncluttered. If it were up to me, I would put it in a separate place completely, such as another article or a subcategory (Fourier_analysis/Basic_explanation_1 "1" because there are bound to be other perspectives). But that is not how Wikipedia resolves these things. (Pity, because I think it would save a lot of editor time.) Anyhow, I perused your reference for the numerical integration analogy, and it appears to come from chapter 8.5, where he poses and tries to answer the question "how do we determine the bandwidth of each of the discrete frequencies in the frequency domain?" His answer is "the bandwidth can be defined by drawing dividing lines between the samples." That might give the desired answer, but in mathematics, the end does not justify the means. IMO, he should clearly state that it is not to be taken literally. And it bothers me to refer to that fabrication as a "basic explanation". It is an incorrect explanation that just happens to fit the formula, apparently.

--Bob K 15:10, 25 October 2007 (UTC)

Also, in chapter 8.6, pertaining to "calculating the DFT", he does not mention the numerical integration analogy. It turns into "correlation", which I think is a much better description of what's going on.

Another issue is the statement in the article: "The calculation computes the total area of thin rectangles aligned under a similar skewed curve,". Does the reference say anything about a "skewed curve"? What does that mean?

--Bob K 15:18, 25 October 2007 (UTC)
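The "correlation" description favored above is easy to show in a few lines (my own sketch; the two-tone test signal is an arbitrary choice): each DFT bin is simply the correlation of the signal against a complex sinusoid of k cycles per N samples, with no numerical-integration story needed.

```python
import numpy as np

# Correlation view of the DFT: bin X[k] correlates the signal with
# a complex sinusoid exp(-2*pi*i*k*n/N) of k cycles per N samples.
N = 32
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N + 0.7) + 0.5 * np.sin(2 * np.pi * 5 * n / N)

def dft_bin(x, k):
    """Correlate x against the k-th complex sinusoid."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * n / len(x)))

X = np.array([dft_bin(x, k) for k in range(N)])
print(np.allclose(X, np.fft.fft(x)))   # → True: matches the FFT bin-for-bin
```

The FFT computes exactly these correlations, just with a faster algorithm, so the correlation picture is a description of what the DFT is, not an approximation to it.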

Bob K's simple explanation

Bob, your explanation is rather a detailed mechanical one, but it's actually a better explanation of the [[periodogram]] (including Lomb's method; see Least-squares spectral analysis). That's because it doesn't describe the use of orthogonal basis functions, nor the notion of decomposition, from which derives inversion. By focusing on how to "measure the amplitude and phase of a particular frequency component" it presupposes the idea of a particular frequency component and misses the idea of a transform, that a signal can be expressed in terms of such components.

Coming up with simple explanations is tricky; we've all tried, and I think all failed. Maybe we can find something in a book? Dicklyon 16:04, 28 October 2007 (UTC)

"synthesis is not mentioned in that article"

The reason I changed "inverse transform" to "inverse transform (synthesis)" was not because synthesis is mentioned in the other article (which it's not). It's because inverse transform is not previously mentioned in this article. It just appears out of nowhere.

--Bob K 14:50, 1 December 2007 (UTC)

That should be easy to fix in the lead. Dicklyon 16:29, 1 December 2007 (UTC)

Expanding the applications section

I think instead of just listing the applications, it would help a lot to include exactly what function the Fourier analysis has in each application. --Sylvestersteele (talk) 10:12, 20 January 2008 (UTC)

That information would be more appropriate in the individual articles. Divide and conquer.
--Bob K (talk) 12:40, 15 September 2008 (UTC)
How many articles deep should people search? For example, the article on physics doesn't (and probably shouldn't) mention the word Fourier. Neither does number theory, combinatorics, signal processing, probability, statistics, option pricing, cryptography, oceanography, numerical analysis, acoustics, diffraction, nor geometry. I think a single sentence pointing to just the major uses would be appropriate. Thenub314 (talk) 13:25, 15 September 2008 (UTC)
If it's that important to you, consider a separate article called "Applications of Fourier Analysis". I don't care enough to debate it anymore.
--Bob K (talk) 00:00, 16 September 2008 (UTC)
I don't think there is enough content for a separate article. If it comes into reasonable shape here, then I will put it in the article. Thenub314 (talk) 10:57, 16 September 2008 (UTC)

Not a bad idea, let's expand the list here and see if we get something of good enough quality to put in the article.

Fourier analysis has many scientific applications:


  • Chan, Ngai Hang (2002), Time series: Applications to Finance, John Wiley & Sons, Inc. 
  • Chow, Tai L. (2000), Mathematical Methods for Physicists: A Concise Introduction, Cambridge University Press 
  • Evans, Lawrence (1998), Partial Differential Equations, American Mathematical Society 
  • Krylov, N. V. (1995), Introduction to the Theory of Diffusion Processes, American Mathematical Society 
  • MacCluer, Charles R. (2000), Industrial Mathematics: Modeling in Industry, Science, Government, Prentice Hall 
  • Stein, Elias; Shakarchi, Rami (2003), Fourier Analysis: An introduction, Princeton University Press, ISBN 0-691-11384-X .
  • Stein, Elias (1970), Singular integrals and differentiability properties of functions, Princeton University Press .
  • Lee, Roger (2004), "Option Pricing by Transform Methods: Extensions, Unification, and Error Control", Journal of Computational Finance, 7 (3): 51–86 
  • Massey, J. L. (1998), "The discrete Fourier transform in coding and cryptography", Proc. ITW, San Diego, CA (Feb.) 
  • Mills, D. L. (1991), Nonlinear Optics: Basic Concepts, Springer-Verlag 
  • Tao, Terence; Vu, Van (2006), Additive Combinatorics, Cambridge University Press, ISBN 0521853869 .
  • Thomas, William (1995), Numerical Partial Differential Equations: Finite Difference Methods, Springer, ISBN 0387979999 

It seems that in some sense I could not really do justice to the magnitude of applicability even in cases where I know the subject. But here is a start at least; I hope other people add their two cents. Thenub314 (talk) 11:20, 15 September 2008 (UTC)

Any such list or expansion should be based on sources, otherwise it's pointless. Dicklyon (talk) 19:43, 15 September 2008 (UTC)
Absolutely! I will do my best to reference the sentences I have added. Thenub314 (talk) 10:57, 16 September 2008 (UTC)


Why do we separate "Applications in signal processing" from "Applications"? Thenub314 (talk) 13:59, 18 September 2008 (UTC)

New lead[edit]

The new lead by User:Thenub314 seems a bit vacuous, whereas the old one had at least a bit of a definition in the opening sentence. I don't understand the attempt to make it less technical, but in any case I think it is not a success. Comments? Dicklyon (talk) 04:37, 19 September 2008 (UTC)

My problems with the old lead were that Fourier analysis encompasses much more than breaking a function into trigonometric functions; it is an entire subject area. Frequently one wants to analyze more general objects, such as measures or distributions. It is also common to want to use more general functions than trigonometric functions, in subjects such as Fourier analysis on locally compact groups. Also, if you're not in L2, then what was meant by basis functions needs some clarification and citation. Lastly, the first two paragraphs also introduced what I felt was a lot of jargon, such as sinusoidal, basis, amplitude, phase, and frequency, which made me feel it was not "suitable for a high school student or first year undergraduate". Thenub314 (talk) 06:26, 19 September 2008 (UTC)
On the one hand I agree that the new version is an improvement. The first paragraph of the article should not attempt to give a definition if no facile definition is available. However, this is a problem endemic to the entire article, which adheres rigidly to this definition of Fourier analysis throughout. The first thing I think needs to be done is that an Introduction section should be written. In addition to introducing briefly the historical elements of the theory, such as Fourier's own theory of heat, this section can describe in more depth the methods and aims of Fourier analysis. When the article begins to take shape, then I think the lead section will also fall into step. siℓℓy rabbit (talk) 11:22, 19 September 2008 (UTC)
I don't have a suggestion at the moment as to how technical or nontechnical the lead should be, but there is a lot of language in the new lead that I think should be revised:
  • "Today the term Fourier analysis encompasses a vast spectrum of mathematics with parts that at first glance may appear quite different." bothers me, partly because we should not be telling the reader how something appears to them (it's condescending), and partly because "Fourier analysis encompasses a vast spectrum of mathematics" contains more or less no information about Fourier analysis ("Fourier analysis is important" would be about as informative). Either give a quick list of applications, or don't bother trying to hype it up. Please don't make vague suggestions about its importance.
  • "At the heart of" is unnecessary and distracting to the sentence it begins.
  • The continual use of the analogy of "breaking into pieces" I think would be confusing for someone unfamiliar with the topic. If you think of taking an object, say a cookie, and breaking it into pieces with your hands, this image really does not facilitate understanding of a Fourier transform. Furthermore, no text on Fourier analysis is going to refer to the various parts of the signal as "pieces". I would recommend keeping the language a little more abstract (to avoid misleading analogy) and refer to them as "component parts" or something else that doesn't suggest that a Fourier transform could be done with a knife.
  • Referring to the "pieces" as "basic" is not a good idea in the lead, especially since the term "basis function" no longer appears in the lead at all. I am unfamiliar with the use of the word "basic" as meaning "pertaining to a basis" (though it may be used in Fourier analysis somewhere; I will give you the benefit of the doubt here), but doing so before explaining basis functions will leave the reader with the dictionary definition of "basic" and wrongly suggest that there is something more fundamental about a Fourier series than the signal it came from.
Anyhow, keep working on it. I'll give more suggestions if I have them. - Rainwarrior (talk) 17:00, 19 September 2008 (UTC)
I am not opposed to changing language. Perhaps I should try to explain my ideas differently so that a better way to describe them will become clear; I think they may not have come across the way I intended. By "encompasses a vast spectrum of mathematics" I basically meant to describe that Fourier analysis is a very large subject, with many different parts. It was not meant to hype up the importance; perhaps all of it is bunk, just a lot of different bunk. By "at the heart of" I simply meant that this is the central theme that makes the different subjects related. I am glad you mention cookies; if I understand the pieces of the cookie better, I am fairly happy with the analogy. The term basic was meant to be understood as simple (though fundamental would have worked equally well) but in no way meant to have anything to do with a basis. I was hoping to suggest there was something fundamental about the pieces (trigonometric functions, characters, what have you), not necessarily the whole series. Thenub314 (talk) 18:40, 19 September 2008 (UTC)
I don't quite understand how on the one hand the intent can be to make it less technical, and on the other hand to make it more applicable to "general objects, such as measures or distributions." What's left is meaningless abstractions that give the reader little clue what it's talking about, as in "the attempt to understand functions (or other objects) by breaking them into better understood pieces". It would be much better to introduce the topic in terms of concrete concepts like functions and sums of sinusoids, and to keep the generalizations for a later section. The old intro did that, if not perfectly. Dicklyon (talk) 18:10, 19 September 2008 (UTC)
I must admit I do not see how the phrase "the attempt to understand functions (or other objects) by breaking them into better understood pieces" is meaningless. At a simple level, how else would you describe the subject? There is no problem with describing things in concrete terms. Do you have a suggestion of something more concrete that is still accessible to a general audience? The difficulty is that the more concrete you attempt to get, the more background you require from the reader. Also, the previous lead took a very narrow view of the subject as a whole. I think siℓℓy rabbit has an excellent point that if we focus our effort on creating a good introduction, then a better way to rewrite the lead may become apparent. Thenub314 (talk) 18:40, 19 September 2008 (UTC)
To someone not in the know, the phrase "other objects" is completely meaningless; and so is "better understood pieces"; so is the verb "breaking" here. It would be better to talk about approximating functions as sum of sinusoids. Dicklyon (talk) 01:06, 20 September 2008 (UTC)
Approximating functions by trigonometric functions gives people "not in the know" a very limited point of view on the subject. "Other objects" is meant to imply the study includes things which are not functions, without trying to give a list. I don't think the verb breaking is meaningless here, it just doesn't have a precise mathematical meaning. The definition from my dictionary that is closest to what I mean is: "to split into smaller units, parts, or processes". Thenub314 (talk) 07:27, 20 September 2008 (UTC)
Found new lead still a bit too narrow, tried to find a compromise. Thenub314 (talk) 12:44, 21 September 2008 (UTC)
I still found the lead that Dicklyon reverted to a bit narrow. I still object to the way the term basis is used. I tried to use the terms simpler as well as basic to take into account some of the comments about my language. I used the verb "break" at some point, which is no more or less meaningful than decompose. This is an honest attempt at a compromise; comments are more appreciated than reverts. Thenub314 (talk) 06:47, 22 September 2008 (UTC)
OK, I'm trying too. But to open without mentioning sums of sinusoids seems to totally miss the mark. You can generalize from there, rather than leaving the opening so general as to be meaningless. Dicklyon (talk) 14:55, 22 September 2008 (UTC)


My first, most basic attempt at an intro section... many changes to be made. Thenub314 (talk) 14:55, 20 September 2008 (UTC)

The subject of Fourier analysis began with the study of Fourier series. In 1807, Joseph Fourier showed how one could attempt to express general functions as sums of trigonometric functions. The attempts to understand the full validity and generality of Fourier's method led to many important discoveries in mathematics, including Dirichlet's definition of a function, Riemann's integral in its full generality, Cantor's study of cardinality, and Lebesgue's definition of the integral (cite Zygmund's intro here). At the same time, from its inception with Fourier's work on heat propagation, the study has been useful in many applications; so much so that the study of the discrete Fourier transform and the FFT algorithm predates the invention of the computer (cite Gauss's work).
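To make the draft's central claim concrete, here is a minimal Python sketch (not part of the draft itself; the square wave is an arbitrary example): the partial sums of a Fourier series really do approach a fairly general function, away from its jump discontinuities.

```python
import math

# Square wave on [0, 2*pi): +1 on [0, pi), -1 on [pi, 2*pi).
# Its Fourier series is (4/pi) * sum over odd k of sin(k*x)/k.

def square_wave(x):
    return 1.0 if (x % (2 * math.pi)) < math.pi else -1.0

def partial_sum(x, n_terms):
    """Sum the first n_terms odd harmonics of the square wave's Fourier series."""
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, 2 * n_terms, 2))

# Away from the discontinuities at 0 and pi, the partial sums approach the square wave:
print(abs(partial_sum(1.0, 2000) - square_wave(1.0)))  # small
```

(Convergence at the jumps themselves is more delicate, which is exactly the sort of question that drove the discoveries listed above.)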

Actually, the study of the discrete Fourier transform and FFT algorithms (note that "the" FFT is a misnomer) predate Fourier's work too. —Steven G. Johnson (talk) 16:10, 20 September 2008 (UTC)
That is interesting. Do you have a reference I could look up and learn more (and maybe include something)? Thenub314 (talk) 19:02, 20 September 2008 (UTC)
"In 1807, Joseph Fourier showed how one could attempt to express general functions as sums of trigonometric functions." is the most useful sentence in there. The lead needs to describe what Fourier Analysis is, which is not done anywhere else in the paragraph.
"The attempts to understand the full validity and generality of Fourier's method lead to many important discoveries in mathematics." This is way too general a statement, and makes a poor consequential connection. Is Dirichlet and Riemann's work really because they were trying to understand Fourier's method? Isn't it more directly the result of understanding it? Why not just say "Important advances in mathematics that made use of Fourier Analysis include Dirichlet's..." etc.? Additionally, why should "Fourier Analysis" be "born" from the "Fourier Series"? They are parts of the same idea; I don't think we need to set up one as a parent of the other. - Rainwarrior (talk) 16:16, 20 September 2008 (UTC)
Well, the definitions of function, integral, or cardinality did not make use of Fourier's work. I consider the subject of Fourier Analysis much larger than simply "Fourier Transforms." Examples of subjects in Fourier analysis that are not directly part of Fourier Series are the study of sets of convergence for trigonometric series that are not Fourier series, and the study of multipliers, singular integrals, weighted norm inequalities, etc.: things that are intimately related to the Fourier Transform but not directly about it. I have a feeling we have different ideas about what the term Fourier Analysis means, which is why we disagree so much about what the article should say. Thenub314 (talk) 19:02, 20 September 2008 (UTC)
I don't think we really disagree about what "Fourier Analysis" means. My objection was about how the genesis of Fourier Analysis is portrayed. To conceive of the Fourier Series is to conceive of Fourier Analysis at the same time. I don't mean the whole field of what is considered Fourier Analysis, but the beginning idea. It might be fair to say that "Fourier Analysis began with the Fourier Series", but not that "Fourier Analysis was born from the Fourier Series". The latter suggests that the idea of a Fourier Series could have persisted without Fourier Analysis, which I don't think is correct. The relationship between these two things is very different from, say, Quaternions to Complex Numbers, which is a case that you could definitely say that one preceded the other with an independent life of its own and became the foundation of the other, and that's where the birth metaphor could be more properly applied. I don't really care for that metaphor in general, but aside from that I think it suggests the wrong thing about Fourier Analysis. (Please keep working on the lead though. Don't let my criticism slow you down; I am very glad that someone is working to improve it.) - Rainwarrior (talk) 03:59, 21 September 2008 (UTC)
The early development of the FFT is not worth mentioning in the lead. You'd have to already know what the FFT is and why it's important to understand the importance this is trying to assign to Fourier Analysis (and if you're coming here to know what Fourier Analysis is, the FFT probably isn't something you already understand fully).
History in the lead should be fairly minimal. The real history belongs in its own history section, not the lead. I think the content of the lead should be something like: 1. basic description (this is the most important part), 2. basic history (just enough to hit the most major 2 or 3 points), 3. short list of applications (i.e. various areas of math, engineering, sound, other signal processing, etc). (Not necessarily in that order.) - Rainwarrior (talk) 16:16, 20 September 2008 (UTC)
Just to respond to the last part of your remark, I think the idea is that this would be an introductory section distinct from the lead section, which attempts to situate the subject in some kind of appropriate historical context. For an example of what I had in mind in suggesting this, see the article Hilbert space. siℓℓy rabbit (talk) 16:46, 20 September 2008 (UTC)
This was the intention I had. I also realize that what I had put down is nowhere near good enough for "show time", which is why I started it here instead of in the article. But I did want some discussion as to what might be good to include. Thenub314 (talk) 19:02, 20 September 2008 (UTC)
The Hilbert Space article is reasonably well structured. It would be an okay model for what to do here. - Rainwarrior (talk) 03:40, 21 September 2008 (UTC)

Maybe going the other direction will be helpful. I've put a plain simple statement of what it is for the lead sentence. I can't see how starting with something more abstract, more general, or less definite than that is going to help anybody, but I'm sure there are still ways it can be improved. Once we can say what it is, it should be easier to write an introduction... Dicklyon (talk) 05:06, 21 September 2008 (UTC)

Thenub, please respond here if you disagree, instead of just changing it. I very much disagree with the approach "is a subject area" in a lead sentence, instead of a plain simple statement of what it is. Dicklyon (talk) 21:48, 21 September 2008 (UTC)

Dicklyon, I did respond here, but I put it under the thread above, which is about the lead, instead of the thread about the introduction. Thenub314 (talk) 06:41, 22 September 2008 (UTC)

Fourier Series[edit]

I am changing the revision to the Fourier Series section of the Variants of the Fourier transform. It seems to me that using the Poisson summation formula to introduce Fourier series is too complicated. It is also circular and requires more of the function. Perhaps what is there could be more straightforward, but I didn't see the change as an improvement. Thenub314 (talk) 06:06, 24 September 2008 (UTC)

I can make it more straightforward with a little less brevity, and I would like to try. I disagree that the PSF is "circular". Fourier series is not a prerequisite to the PSF. It is simply a special case, where the shifted f(t)'s don't happen to overlap. Interestingly, if they did overlap, you could define a new function, g(t), that is one period of the overlapped f(t)'s and instantly conclude:
But I digress. Getting back to your comments, I don't know why you say it "requires more of the function".

If that is still germane, please clarify.

Sure, the Poisson summation formula does not always hold. You would need, for example, that the periodic function is continuous. Thenub314 (talk) 16:16, 24 September 2008 (UTC)
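For what it's worth, when the hypotheses are satisfied the Poisson summation formula is easy to check numerically. A minimal Python sketch (my own illustration, not from the discussion above), using a Gaussian only because its transform under the e^(−2πiνt) convention is known in closed form; the scale a is an arbitrary choice:

```python
import math

a = 0.7  # arbitrary scale parameter; any a > 0 works

def f(t):
    """Gaussian f(t) = exp(-pi*(a*t)^2): smooth and rapidly decaying, so the PSF applies."""
    return math.exp(-math.pi * (a * t) ** 2)

def f_hat(nu):
    """Its Fourier transform under the e^{-2*pi*i*nu*t} convention: (1/a)*exp(-pi*(nu/a)^2)."""
    return math.exp(-math.pi * (nu / a) ** 2) / a

# Poisson summation: the sum of integer samples of f equals the sum of integer
# samples of its transform.  Both sums converge very rapidly here.
lhs = sum(f(n) for n in range(-50, 51))
rhs = sum(f_hat(k) for k in range(-50, 51))
print(abs(lhs - rhs))  # ~ 0
```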

I have tried to rewrite the section to be a bit clearer. Perhaps it should be the first of the "Variants of Harmonic analysis" that we list, seeing as it is usually the variant that students encounter first. Thenub314 (talk) 08:46, 24 September 2008 (UTC)

My thinking is to use the Fourier transform to introduce F(ν), and leverage that notation in the Fourier series section, since the FS coefficients are also samples of an FT. It ties them together nicely and is notationally efficient. This is not the place to teach FS to new students. This is a great place to summarize each concept and to relate them to each other. To that end, I would strongly suggest a more consistent notation.
Nowhere in Wikipedia is appropriate for teaching (excluding maybe the Reference desk). We could also try to be notationally efficient by first introducing the Fourier transform on locally compact Abelian groups; then all of these variants are special cases. Thenub314 (talk) 16:16, 24 September 2008 (UTC)
I of course prefer s(t) or x(t) and S(f) or X(f), but in the interest of harmony I can live with f(x) and F(ν). The one thing I really dislike is Do you think we can compromise/agree on consistent use of f(x) and F(ν)? I would also change the DTFT and DFT sections to be consistent. Then we should also update their individual articles to be consistent. It would be a beautiful thing.
--Bob K (talk) 12:24, 24 September 2008 (UTC)
If you'd like to make this article self-consistent, that would be fine. But is by far the more common notation in mathematics, at least in any branch I am familiar with. And in my brief time as an engineer I came across this often enough to believe I know what it meant. I don't think it would be a beautiful thing to see it disappear from the articles here. It would be a disservice to the readers. Thenub314 (talk) 16:16, 24 September 2008 (UTC)
This serves to illustrate how big the gulf is between engineering and mathematics notations. Bob K and I are engineers; I for one have not often encountered the hat notation, and would prefer to NOT see the contents of this article cloaked in math-speak, like locally compact Abelian groups. Dicklyon (talk) 16:51, 24 September 2008 (UTC)
I don't think that it would be a good idea to make the article more complicated. My point was that we should take the most concise, straightforward description of each "Variant" we choose to list (but at some point I hope to add more about Fourier analysis on LCA groups). I was just arguing against "leveraging" more complex ideas to explain simpler ones. Thenub314 (talk) 17:25, 24 September 2008 (UTC)
Thenub314 said "Poisson summation formula needlessly complicates the exposition." But it's just the Fourier series formula, because I constrained f(t) to an interval of length τ. I merely noted that it is also a case of Poisson summation. The article got "more complicated" when I introduced fτ(t) in response to your complaint about clarity. (And I think it is an improvement.)
--Bob K (talk) 17:30, 24 September 2008 (UTC)
I didn't find it complicated, but it wasn't clear what's the advantage of writing the Fourier series in terms of the Fourier transform. You could start with just F[k] in the series, and then later mention that the formula that gives it is the same as sampling a Fourier transform. I'm still not clear on what the PSF brings to the table. Is it that you're assuming knowledge of Fourier transform first, and then wanting to build from there? Notation-wise, I'd rather see a capital T for the period than a tau. Dicklyon (talk) 21:13, 24 September 2008 (UTC)
I'm repeating myself: "My thinking is to use the Fourier transform to introduce F(ν), and leverage that notation in the Fourier series section, since the FS coefficients are also samples of an FT. It ties them together nicely, and is notationally efficient." So "yes" to your last question, Dick. The elegance of that approach seems transparently obvious to me. The PSF is easy to prove and understand, and it includes FS as just a special case. If Fourier had thought of it, I'm sure he would have presented the PSF instead of the Fourier series. In my opinion, the FS article should just lose all the redundant math and reference the PSF article.
! Thenub314 (talk) 22:21, 24 September 2008 (UTC)
I would also prefer capital T. But the problem is the potential confusion with the DTFT sampling interval. As you can see, the DTFT can also be understood in terms of the FT and the PSF (but not the FS special case). So it behooves us to clearly distinguish between the sampling interval and the periodicity. Since we use for sampling frequency, maybe we should consider for sample interval and for the period of a periodic function.
--Bob K (talk) 21:50, 24 September 2008 (UTC)
Oops. I'm wrong. I should have refreshed my memory of the PSF proof before I tried to defend it. The proof relies on the Fourier series, as Thenub314 tried to tell me. Sorry for wasting everybody's time.
--Bob K (talk) 22:12, 24 September 2008 (UTC)
No worries. But I do have a comment about the DTFT and the DFT. I have not edited these at any point, and I think it may be best that way. But naively it seems that the DTFT description is aimed entirely at sampling a continuous function. This would ignore any situations where the data by its nature is discrete. Also, the description of the DFT uses a lot of machinery (like Dirac combs, etc.) to describe a finite sum of numbers. Does this seem like the right way to go?
Thanks for your understanding. The current DTFT section does include this afterthought: "Applications of the DTFT are not limited to sampled functions. It can be applied to any discrete sequence." IMO, its really interesting application is sampling, so I'm fine with the status quo.
What about applications like analyzing tick data for stocks, or other statistical time series? Thenub314 (talk) 11:20, 30 September 2008 (UTC)
I think I wrote the DFT section, but I agree with you. Too much info was left in my head and not put on the screen, because I was trying to keep it short. Now that the info has left my head, what's on the screen is too hard to understand. So we need to simplify.
--Bob K (talk) 02:11, 25 September 2008 (UTC)
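On the point that the DFT really is just a finite sum of numbers (no Dirac combs or underlying continuous-time function required), here is a minimal Python sketch; the sequence x below is an arbitrary made-up example, and the connection to the DTFT is the one discussed above (the DFT samples the DTFT at N equally spaced frequencies):

```python
import cmath

x = [1.0, 2.0, 0.5, -1.0]  # an arbitrary finite discrete sequence
N = len(x)

def dtft(omega):
    """DTFT of the finite sequence: X(omega) = sum_n x[n] * e^{-i*omega*n}."""
    return sum(xn * cmath.exp(-1j * omega * n) for n, xn in enumerate(x))

# The DFT is the DTFT sampled at the N frequencies 2*pi*k/N -- a finite sum of numbers.
dft = [dtft(2 * cmath.pi * k / N) for k in range(N)]

# The inverse DFT (another finite sum) recovers the sequence exactly.
recon = [sum(dft[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
         for n in range(N)]
```

No machinery beyond complex arithmetic is needed, which seems to support the suggestion that the DFT section could be simplified.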
Also, are we standardizing the notation in this article? Thenub314 (talk) 23:10, 24 September 2008 (UTC)
I think we should at least talk about it. What do you think of the f(x), F(ν) standard, with an acknowledgement of the convention? We also need symbols for the sample interval and function period. When I think about that, I'm drawn back to the f(t) convention instead of f(x). What we have now is T and τ, but Dick doesn't like τ. Ts and Tp would make more sense, but then the formulas would look more cluttered. Thoughts?
--Bob K (talk) 02:11, 25 September 2008 (UTC)
Well, as I said before, for present purposes I am happy with standardizing at least this article. I don't even think we need to make an issue of acknowledging notation here at the moment. The individual articles can do that. That was why I changed the Fourier series section. Now we just have to change the DFT and DTFT sections. Thenub314 (talk) 06:33, 25 September 2008 (UTC)

For starters, I would like to move the FS section closer (in appearance) to the FT notation. (See latest revision.)

--Bob K (talk) 14:38, 25 September 2008 (UTC)

I think we should stick to attempting to describe the subject simply; more complex ideas, like the Fourier transform of a periodic function, are better suited to the main articles. Thenub314 (talk) 15:22, 25 September 2008 (UTC)

That only appears to explain why you would delete the leading sentence. So why didn't you just do that? But even so, I think it is worthwhile to mention that the FT doesn't work for periodic functions, because that explains why we're talking about the FS.
I had a few other problems, as I will explain below, but I was just heading out the door when I saw the edit, and didn't want to leave some of the comments there. Maybe I just should have waited. Thenub314 (talk) 18:03, 25 September 2008 (UTC)
And regarding the comment: "Shouldn't leverage FT to define FS, point values of distribution not defined, no need to describe PS formula here.", I did not define the FS in terms of the FT. I defined the FS and pointed out a correspondence to the FT. And now you call the FT a "distribution". When I did that, you said it wasn't. And point values are indeed defined, unless the FT formula is not a definition.
--Bob K (talk) 16:05, 25 September 2008 (UTC)

My mistake; the sentence "The Fourier transform of a periodic function is zero-valued except at a discrete set of harmonic frequencies, where it can diverge." was trying to make a connection to a Dirac comb. It is, by the way, false that the Fourier transform of a periodic function converges to zero at points outside this discrete set of harmonic frequencies. It diverges at every point, so it cannot be used to distinguish the harmonic frequencies on which you'd like to base your Fourier series. I dislike introducing the inverse transform before the forward transform. The comment about convergence of Fourier series shouldn't be a footnote. For the same reason we don't define the Fourier transform on Rn, I think we should stick with 2π-periodic functions; it keeps the descriptions simpler without losing much. My understanding of the history was that Fourier was studying heat distributions in a ring, which is why I put more emphasis on functions defined on a circle. Thenub314 (talk) 18:03, 25 September 2008 (UTC)
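The underlying point that a periodic function's spectrum is supported only on a discrete set of harmonic frequencies has a clean discrete analogue that is easy to check numerically. A minimal Python sketch (my own illustration; the frame length N = 32 and the harmonic number 3 are arbitrary choices):

```python
import cmath
import math

N = 32
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]  # exactly 3 cycles per frame

# Naive DFT: the spectrum of this periodic signal is zero except at harmonics +-3.
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) for k in range(N)]
peaks = [k for k in range(N) if abs(X[k]) > 1e-8]
print(peaks)  # [3, 29]  (bin 29 is the alias of -3)
```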


The article currently uses both and for the "time" domain and and for ordinary frequency (cycles per time-unit). It also uses for the time-domain function, but not at the same time it uses for frequency (thank goodness).

Predictably, it is causing confusion and editorial thrashing. But of course it does accurately reflect the real world, which is also inconsistent and confusing.

--Bob K (talk) 13:01, 28 March 2009 (UTC)

Most of the EE texts on signals and systems use and for input and output functions, respectively. Groovamos (talk) 06:11, 18 March 2016 (UTC)

"How it works" section[edit]

Maybe it's just because I'm not a layman on this subject, but the How it works (basic explanation) section really seems to grate. At best, I think that this belongs in the cross-correlation article, but either way there are some specific points in the language that need addressing:

  • "its shape (but not its amplitude) is effectively squared" - what distinction is this trying to draw?
  • the section begins by describing the mechanism as squaring, and then goes on to say that it's not really squaring
  • the sudden introduction of the sine+cosine basis is unjustified
  • the introduction of a complex basis means that no squaring occurs, as one must multiply by the complex conjugate waveform
  • the introduction of the notion of vectors is misleading/unnecessarily complicated, as it's perfectly possible to interpret complex numbers as scalars.

Basically, I think that as it stands this section has more potential to confuse a newcomer than to help them. Any thoughts? Oli Filth(talk|contribs) 21:35, 28 May 2009 (UTC)

I've now removed the section in question. If anyone feels like restoring it, could they address the points I've raised above. Even then, there's still the question of whether such an explanation should really belong in the cross-correlation article. Oli Filth(talk|contribs) 00:02, 4 June 2009 (UTC)

Fourier series in interval [0,1) instead of [0,2pi) for consistency[edit]

Currently the Fourier transform has the factor 2π in the exponent of the complex exponential, which I find reasonable since this definition simplifies most laws (no constant factors). The factor 2π is also in the exponent in the discrete Fourier transform. For consistency, I suggest having this factor also in the Fourier series' exponential function argument, meaning that the Fourier series express 1-periodic functions. HenningThielemann (talk) 16:15, 1 November 2010 (UTC)
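If I've understood the proposal correctly, the 1-periodic convention makes the coefficients c_k = ∫₀¹ f(x) e^(−2πikx) dx, with no stray constant factors. A minimal Python sketch of that formula via a Riemann sum (my own illustration; the test function cos(2πx) is an arbitrary choice, with known coefficients c₁ = c₋₁ = 1/2):

```python
import cmath
import math

M = 4096  # resolution of the Riemann sum over [0, 1)

def coeff(f, k):
    """k-th coefficient of a 1-periodic f: c_k = integral_0^1 f(x) e^{-2*pi*i*k*x} dx."""
    return sum(f(m / M) * cmath.exp(-2j * math.pi * k * m / M) for m in range(M)) / M

def f(x):
    return math.cos(2 * math.pi * x)  # 1-periodic test function, c_1 = c_{-1} = 1/2

print(abs(coeff(f, 1) - 0.5))   # ~ 0
print(abs(coeff(f, -1) - 0.5))  # ~ 0
print(abs(coeff(f, 3)))         # ~ 0
```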

That of course makes perfect sense. Unfortunately, the Fourier series article uses the same definition as currently used here. So this section can't be consistent with both articles, unless someone also changes the other article, which is likely to meet with some editorial resistance (though not from me). Still, IMO it is more logical for this article to be internally consistent than for one section to be consistent with a different article. So I agree with your suggestion.
--Bob K (talk) 05:54, 2 November 2010 (UTC)
On further inspection, the Fourier series article contains the section Fourier series on a general interval [a,b], which is exactly what we need here.
--Bob K (talk) 10:11, 3 November 2010 (UTC)

Doubt on fourier transform[edit]

The Fourier transform is a representation of a signal as a collection of sinusoids. But when adding two periodic signals, the resultant would be another periodic signal. If we have a Fourier transform for aperiodic signals, it implies that the sum of periodic signals can result in an aperiodic signal. Can anyone clarify this paradox? — Preceding unsigned comment added by (talk) 18:31, 27 August 2015 (UTC)

I'll give it a try. The sum of two periodic signals gives another periodic signal only if the ratio of their periods is a rational number. Second, periodic signals can be represented as a Fourier series, so that, based on the first point, not all sums of pairs of periodic signals will have a Fourier series; and no signal which is not square integrable (such as the sum of two differently periodic signals) can be represented as a Fourier transform*. So all sums of any set of periodic signals are not subject to Fourier transformation except for the trivial case below, and any sum of a set of periodic signals will only have a Fourier series if all possible pairings of their periods are of rational proportionality. This leaves you with the scenario that not all sums of sets of periodic signals are subject to Fourier analysis without subjecting the members of the set to analysis and summing the results. There is the trivial case of two periodic signals, one being the negative of the other, the sum of which is square integrable.

*Not strictly true: with the use of limits, the Fourier series of a periodic signal can be viewed as a special case of the Fourier transform, made up of Dirac delta functions. Also, as mentioned, if the sum of periodic signals is not periodic, then the Fourier series of each component can be analysed independently and the results summed for a visual graph of Dirac deltas, representing overlaid Fourier series.

I'm not a mathematician but a retired EE, so I would welcome any correction. Groovamos (talk) 19:18, 17 March 2016 (UTC)
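The first point above (the sum of two periodic signals is periodic only when the ratio of their periods is rational) can be illustrated numerically. A minimal Python sketch (my own illustration; the particular frequencies and sample grid are arbitrary choices):

```python
import math

def max_dev(f, T, samples=1000):
    """Largest |f(t+T) - f(t)| over a sample grid; ~0 exactly when T is a period of f."""
    return max(abs(f(t / 37.0 + T) - f(t / 37.0)) for t in range(samples))

def rational_sum(t):    # periods 1 and 1/2: rational ratio, so the sum has period 1
    return math.sin(2 * math.pi * t) + math.sin(4 * math.pi * t)

def irrational_sum(t):  # periods 1 and 1/sqrt(2): irrational ratio, so the sum never repeats
    return math.sin(2 * math.pi * t) + math.sin(2 * math.pi * math.sqrt(2) * t)

print(max_dev(rational_sum, 1.0))    # ~ 0: T = 1 is a period of the sum
print(max_dev(irrational_sum, 1.0))  # far from 0: T = 1 is not a period
```

(Of course, no finite computation proves aperiodicity; the sketch only shows that an obvious candidate period fails in the irrational case.)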