Talk:Fast Fourier transform

Hey, does anyone have an issue number for the citation? The citation reads:

Cooley, James W., and John W. Tukey, 1965, "An algorithm for the machine calculation of complex Fourier series," Math. Comput. 19: 297–301

But there is no issue number.

A scan of the paper (of uncertain legality) can be found here. I believe it was in the April issue, but strangely enough the scanned pages don't list the journal issue or volume. In any case, the volume and page number uniquely identify the article, and should be sufficient to look it up in your library. —Steven G. Johnson 21:59, 2 April 2007 (UTC)

The "Other FFT Algorithms" segment is just an unreadable blob of text; I've separated the algorithms out (I think), which doesn't look as nice but is easier to read. Frankly, however, the whole thing ought to be rewritten for coherency, or expanded and put into its own section. I'm not qualified to do either of those things.

I think breaking this into more than three paragraphs is overkill. As for expanding it, any additional information about the FFT algorithms in question should go in a separate article, like for Prime-factor FFT algorithm and Rader's FFT algorithm. —Steven G. Johnson 23:58, 29 May 2006 (UTC)


More information on 2D or 3D FFTs would be super.

Alex.Szatmary

Done. At least as a start (the fancier multidimensional algorithms such as vector-radix could eventually use their own pages). —Steven G. Johnson 04:41, August 23, 2005 (UTC)

More information about the practical uses of FFT (e.g. how it is used for digital signal processing) would be very useful. If I ever find out what it does, I may write it up myself... Jevon 12:11, 23 November 2005 (UTC)

As is described in the first paragraph, information about applications belongs in discrete Fourier transform. —Steven G. Johnson 16:34, 23 November 2005 (UTC)

in finite fields[edit]

The whole article seems to address floating-point arithmetic as an approximation to the complex field.

However, the same algorithms may be used in any field where there is a primitive nth root of unity, where n is the length of the vector, including finite fields GF(p^k). Since the multiplicative group of a finite field is cyclic, the condition for the existence of such a root is n | p^k - 1. If k = 1, such a field is easily implementable on a machine by modular arithmetic.

There is an algorithm for multiplying long integers (a concrete sketch in C follows this list) by

  • considering them written in base \beta, a=\sum_k a_k\beta^k and b=\sum_k b_k\beta^k, as polynomials in \beta
  • transforming them by FFT modulo p
  • writing the product as a convolution, ab=\sum_k c_k\beta^k where c_k=\sum_{j=0}^k a_j b_{k-j}
  • computing this convolution as the inverse FFT of the pointwise product of the two FFTs
  • doing a little carrying.
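For what it's worth, here is a minimal runnable sketch of that scheme in C. The modulus p = 998244353 = 119·2^23 + 1 and the primitive root g = 3 are choices made only for this illustration (any prime with n | p-1 and a known primitive root works), and base \beta = 10 is used just to keep the digits readable; real bignum code uses much larger bases and inputs:

#include <stdio.h>
#include <stdint.h>

#define P 998244353ULL  /* = 119 * 2^23 + 1, so n | P-1 for any power of two n <= 2^23 */
#define G 3ULL          /* a primitive root modulo P */

static uint64_t pw(uint64_t b, uint64_t e)  /* b^e mod P by repeated squaring */
{
    uint64_t r = 1;
    for (b %= P; e; e >>= 1, b = b * b % P)
        if (e & 1) r = r * b % P;
    return r;
}

/* y[0..N) = transform of x (input stride is); w must be an Nth root of unity mod P */
static void rec_ntt(int N, const uint64_t *x, uint64_t *y, int is, uint64_t w)
{
    if (N == 1) { *y = *x; return; }
    int N2 = N / 2;
    uint64_t w2 = w * w % P;                  /* (N/2)th root for the two halves */
    rec_ntt(N2, x, y, is * 2, w2);            /* even-indexed digits */
    rec_ntt(N2, x + is, y + N2, is * 2, w2);  /* odd-indexed digits */
    uint64_t omega = 1;
    for (int i = 0; i < N2; ++i) {            /* butterflies, as in the complex case */
        uint64_t a = y[i], b = omega * y[i + N2] % P;
        y[i] = (a + b) % P;
        y[i + N2] = (a + P - b) % P;
        omega = omega * w % P;
    }
}

int main(void)
{
    /* multiply 1234 * 5678; digits are little-endian in base 10 */
    enum { n = 8 };                       /* power of two >= number of product digits */
    uint64_t a[n] = {4, 3, 2, 1}, b[n] = {8, 7, 6, 5}, fa[n], fb[n], c[n];
    uint64_t w = pw(G, (P - 1) / n);      /* primitive nth root of unity mod P */
    rec_ntt(n, a, fa, 1, w);
    rec_ntt(n, b, fb, 1, w);
    for (int i = 0; i < n; ++i)
        fa[i] = fa[i] * fb[i] % P;        /* pointwise product = transform of the convolution */
    rec_ntt(n, fa, c, 1, pw(w, P - 2));   /* inverse transform uses w^-1 = w^(P-2) mod P */
    uint64_t ninv = pw(n, P - 2), carry = 0, digit[n];
    for (int i = 0; i < n; ++i) {         /* scale by 1/n, then do the carrying */
        uint64_t d = c[i] * ninv % P + carry;
        digit[i] = d % 10;
        carry = d / 10;
    }
    int top = n - 1;
    while (top > 0 && digit[top] == 0) --top;
    for (int i = top; i >= 0; --i)
        printf("%d", (int)digit[i]);
    printf("\n");                         /* prints 7006652 = 1234 * 5678 */
    return 0;
}

The three calls to rec_ntt are the two forward transforms and the inverse transform; the exact convolution values come out correctly because each c_k is far smaller than p.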

A little web search brings up: J. M. Pollard, Mathematics of Computation, Vol. 25, No. 114. (Apr., 1971), pp. 365-374. Stable URL: http://links.jstor.org/sici?sici=0025-5718%28197104%2925%3A114%3C365%3ATFFTIA%3E2.0.CO%3B2-U

I'm not an expert in the field, so I don't feel comfortable writing about this. I'll look at how GNU MP does it. David.Monniaux 22:07, 13 October 2006 (UTC)

Only one section of the article discusses floating-point approximation, so I'm not sure why you're saying that the "whole article" is concerned with this. However, it's true that the article is concerned with algorithms for the discrete Fourier transform over vectors of complex numbers. It's certainly true that many of the algorithms have direct analogues for any finite field (although I can give counter-examples of some that can't, such as the Rader-Brenner algorithm); in particular, those algorithms that only use the fact that \omega_N is a primitive root of unity (e.g. Cooley-Tukey, PFA, Rader...) are generalizable in this way. But such transforms are generally given different names, like the number theoretic transform. —Steven G. Johnson 22:49, 13 October 2006 (UTC)
I've added a brief mention of such generalizations in the introduction. Any further details belong in articles like number theoretic transform. —Steven G. Johnson 23:02, 13 October 2006 (UTC)
Ah, interesting. I actually didn't know different names were used in English (I learned mathematics in French :-) ). David.Monniaux 08:27, 14 October 2006 (UTC)

example[edit]

Last century, I was frustrated with the complexity of the FFT programs suggested in the literature. I believe beginners can add all the accessories and improvements once the short algorithm already works. So, I added a short portable example. If you can write an even shorter copylefted example, it may be even better. dima 03:23, 24 January 2007 (UTC)

I removed this. First, Wikipedia is not a code repository, and using a particular programming language (as opposed to English pseudo-code) is inappropriate in a mathematical article. Second, it makes the common mistake of confusing an "FFT algorithm" (which is usually considered in the literature to be an abstract decomposition of the DFT) with an "FFT implementation". Third, an example of the Cooley-Tukey algorithm would belong on Cooley-Tukey FFT algorithm, not here. Fourth, that page already gives the radix-2 example, but as math and English, not C++ code. Fifth, your code was far from minimal. Here is a shorter (but still not minimal) example in C (using the C99 complex type) that you would call with rec_fft(-1, N, in, out, 1, 1), for N a power of 2:
#include <complex.h>  /* C99 complex type and cexp() */

/* out-of-place radix-2 FFT: y = transform of x; is/os are input/output strides */
void rec_fft(int sign, int N, complex double *x, complex double *y, int is, int os)
{
     if (N == 1) *y = *x;
     else {
          int N2 = N/2;
          rec_fft(sign, N2, x, y, is*2, os);           /* even-indexed inputs */
          rec_fft(sign, N2, x+is, y+os*N2, is*2, os);  /* odd-indexed inputs */
          complex double omega = 1.0, omega1 = cexp(sign * I * 6.2831853071795864769 / N);
          for (int i = 0; i < N2; ++i) {               /* combine the two halves */
               complex double a = y[os*i], b = omega * y[os*(i + N2)];
               y[os*i] = a + b;
               y[os*(i + N2)] = a - b;
               omega *= omega1;
          }
     }
}
(Making the code work out-of-place, and recursively, allows you to basically copy the two-line mathematical derivation of the radix-2 case, and leads to the simplest implementations because you don't have to worry about bit-reversal etc. It is not nearly as fast as a highly optimized code, but then again no textbook radix-2 FFT is remotely optimal these days.) If you want a code library that is pretty short and simple and of reasonable speed, Google "kissfft".
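For reference, the two-line derivation in question, with \omega_N = e^{2\pi i \cdot sign / N} matching the omega1 in the code above:

  X_k       = \sum_{m=0}^{N/2-1} x_{2m} \omega_{N/2}^{mk} + \omega_N^k \sum_{m=0}^{N/2-1} x_{2m+1} \omega_{N/2}^{mk} = E_k + \omega_N^k O_k
  X_{k+N/2} = E_k - \omega_N^k O_k,   for 0 <= k < N/2,

where E and O are the size-N/2 transforms of the even- and odd-indexed inputs (the two recursive calls), and the loop over i applies the last two lines as the butterflies a ± omega·b.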
I would also disagree that "beginners can add all the accessories and improvements". A typical textbook radix-2 FFT implementation is an extremely poor starting point for a full-featured, highly-optimized FFT program.
—Steven G. Johnson 07:00, 25 January 2007 (UTC)

Example[edit]

Dear Steven G. Johnson.

You could declare that any examples of algorithmic codes are prohibited at Wiki, and I would understand;

but you seem to believe that kissfft, as well as your example, is simpler than the example you removed.

I just checked the kissfft; the "minimal" version requires several files instead of two;

and the files required are significantly longer than the example you removed;

even the instructions for use in the README file are 4792 bytes long.

In addition, the use is not straightforward; it requires skills and/or assistance, because some additional files are supposed to be pre-installed on the user's computer, for example, the bench-user.h header required by the shortest example, doit.c.

As for the example you suggest, it requires 6 arguments instead of 3, and therefore is not simpler. I would recommend your example to the article recursion.

Sincerely, Dima.

I didn't say that kissfft was simpler than the code you posted; kissfft is more full-featured, faster, and supports non-power-of-two sizes, among other things, and so it is more complicated. But it is still pretty simple and easy to learn from, while still being vastly better than a textbook radix-2 code. (Nor does it require any code to be preinstalled; you're apparently confused because the kissfft author included an optional test program based on the benchfft framework. Kissfft only requires you to compile one .c file, along with two .h files; everything else is optional extras.) I mentioned it to help you since you claimed to be frustrated with the complexity of what you found on the web. (You can find zillions of textbook radix-2 FFT codes on the web and usenet similar to the one you posted, by the way—i.e. nested bit-reversal loops followed or preceded by nested butterfly loops—some of it dating back to the 1960s. Nowadays, however, these are mainly useful as pedagogical exercises and bear little resemblance to the programs actually used for serious numerical calculations.)
Regarding the code I posted above as an example, you have a very odd way of measuring complexity if you measure it by the number of arguments, as opposed to by the number of instructions. And the whole idea of most FFT algorithms is based on recursion; implementations that do not use recursion explicitly and work in-place are certainly possible and even common, but are more complicated (require more instructions and are conceptually less direct) than the simplest recursive out-of-place versions. If you want to actually understand FFTs and why they work, as opposed to cutting-and-pasting code, you need to understand recursion. And if you don't need to understand the guts of FFTs, and just want to use them to do something, then it hardly matters whether the code is 5 lines or 500 lines as long as it has the features you require and a reasonable API.
(Not to mention the fact that any user who has trouble with a 5k README file is unlikely to be capable of using a fast Fourier transform in a reasonable way, in my opinion.)
In any case, as I said, Wikipedia is not a code repository, and it doesn't matter whether we agree on the relative merits of different FFT implementations because none of them belongs in the article. Nor are Wikipedia articles on mathematical subjects intended to address the problem of installing and compiling and calling source code on a user's computer. Creating a code repository for numerical code, while a worthwhile mission (and being addressed elsewhere, see e.g. GNU Scientific Library) is quite a different problem from that of creating an encyclopedia. (I didn't say that algorithmic source code is "prohibited" on Wikipedia, just that it's not appropriate here. Code fragments would be appropriate, for example, in an article on programming languages or techniques.) Of course, if you want to create your own numerical code library (somewhere else, perhaps on Wikibooks) for "minimal" simplistic numerical codes, as a pedagogical tool, knock yourself out.
—Steven G. Johnson 04:23, 27 January 2007 (UTC)

Dear Steven. Thank you for the explanation. I had to modify your code (the compiler complained about "complex double"), and the modification below seems to work well. Is such a modification portable?

#include <complex>
#include <cmath>    // cos, sin, M_PI (M_PI is POSIX, not standard C++)
#include <cstdio>
using namespace std;

void rec_fft(int sign, int N, complex<double> *x, complex<double> *y, int is, int os)
{ if (N==1) *y = *x;
  else{ int N2 = N/2;
       rec_fft(sign, N2, x, y, is*2, os);
       rec_fft(sign, N2, x+is, y+os*N2, is*2, os);
       double f = sign*2*M_PI/N;
       complex<double> omega = 1., omega1 = complex<double>(cos(f), sin(f));
       for(int i=0; i<N2; ++i)
       {       complex<double> a = y[os*i], b = omega * y[os*(i + N2)];
               y[os*i] = a + b;
               y[os*(i + N2)] = a - b;
               omega *= omega1;
       }
     }
}

int main(){ const int N=8, m=1; // play with the values of N and m
       complex<double> x[N], y[N];
       // input is the pure frequency m: x[n] = exp(-2*pi*i*m*n/N)
       for(int n=0; n<N; n++){ x[n] = complex<double>(cos(2*M_PI*m*n/N), -sin(2*M_PI*m*n/N)); }
       rec_fft(1, N, x, y, 1, 1);
       for(int n=0; n<N; n++){ printf("%1d %6.3f %6.3f %6.3f %6.3f\n",
                       n, x[n].real(), x[n].imag(), y[n].real(), y[n].imag()); }
       }

I agree with your argumentation and I like your code. If it is still too large for Wiki, will you consider posting your example somewhere, and supplying a link in the FFT article? For Wiki, it is better than "kissfft": such an example requires neither installation nor instructions. Sincerely, dima 05:23, 27 January 2007 (UTC)

(The reason you had to modify it is that I posted C code, whereas you are using C++. C++ does not support C's complex-number types.)
No, I don't think it's appropriate for Wikipedia. This is an encyclopedia article on a mathematical topic, not an article on C or C++ programming. Readers can follow the links, or use Google, if they want code in their language of choice. As for posting it somewhere else and linking to it, that's a bit too much like sneaking around Wikipedia policy. For Wikipedia to cite or link to a web page, it should generally be notable already. That is, if you create a page with the above code, and it becomes popular (e.g. lots of people link to it, it ranks high in Google, etcetera), then Wikipedia can link to it. Using Wikipedia to try and make a site popular is linkspam and is prohibited by policy.
Also, the code above still requires installation—you still need to compile it. And it still requires instruction: you have to clearly define what its inputs and outputs are, at least (and remember that there are several different conventions for the DFT). Moreover, who is the audience? If you just want to use an FFT code, then I would strongly recommend something more sophisticated. And if the above code is intended for instructional purposes, then it needs a lot more explanation because you have to explain how the code works.
(I hereby release my code above into the public domain, by the way. Feel free to use it wherever you want outside Wikipedia.)
—Steven G. Johnson 07:26, 27 January 2007 (UTC)

To Steven G. Johnson, Hello - I think this isn't the place for a debate on what Wikipedia is for, but I feel I must speak up with regard to the inclusion of a bit of C++/C code to illustrate an implementation of the FFT algorithm.

First of all, FFT is not purely a "maths topic" - it is a maths implementation of a very important tool, and understanding how it works or how it can be applied is vital to many of us who use Wikipedia as a resource for knowledge. Too many articles on Wikipedia dealing with technical (and particularly mathematical) topics are so badly written as to leave the reader stunned and frustrated. The C example here can only help and does nothing to contaminate the topic in any way. If one leaves Wikipedia more confused than when one arrived, something is wrong. All the academic writing styles in the world are of no use whatsoever if they cannot convey simple concepts to the reader who is interested in the topic.

I encourage fully the contributions of -every- reader of Wikipedia who feels that a few examples or explanations in styles other than that in which the article was first written would go a long way to satisfy the needs of people who take the trouble to come here and read about the topic in question. It is all about language - and as it stands, the language used in most articles of this kind leaves a lot to be desired. A C code example is just another language, like the maths shorthand with which this article is riddled. Even pure maths and TeX shorthand is not the only way of illustrating an algorithm. For those who are not familiar with maths shorthand, you might just as well write the article in Chinese. It would still be valid but it wouldn't be understood by those who only know English.

With kind regards- Sam, UK — Preceding unsigned comment added by 46.233.70.199 (talk) 19:20, 19 February 2014 (UTC)

Separate pages for code examples?[edit]

Why not create a Wikipedia article called "Fast Fourier Transform Example" and link there?

I think the goal of the project is to preserve human knowledge. It does not seem right to me to post the example elsewhere. It is guaranteed to disappear / become irrelevant.

The above looks simple and easy to understand; I'm all for keeping it. Separately is fine, if that is better stylistically. —Preceding unsigned comment added by 68.162.147.222 (talk) 19:51, 20 December 2007 (UTC)

The FFT acronym[edit]

On the Polish Wikipedia, we had a discussion regarding the proper term for the FFT algorithm. It turned out that transformata, which some believed is closer in meaning to the English word transform, was more frequently used, even though the other word, transformacja (transformation, translated literally), was more appropriate as the name for FFT. Now I'm in doubt about which of the words: transformation or transform is the proper term for the operator or operation that maps a vector to its discrete Fourier transform. Is there a distinction between the operator (which we on pl-wiki agreed was closer to transformation) and its result, or is transform used in both meanings? Note that fast Fourier transformation is also used in English publications, online [1] or otherwise (like the first entry in FFT#References). Safek (talk) 08:00, 1 May 2008 (UTC)

In English, "fast Fourier transform" is far more common than "fast Fourier transformation", but the two are used more or less interchangeably as far as I can tell. But English is a bit funny because "transform" was originally exclusively a verb, and acquired a noun usage sometime over the last 200 years according to the OED. And, even in this sense, "transform" originally referred to the result of a transformation, but nowadays is used for the transformation itself (the "operator") too. Of course, in English, "FFT" can also be used as an adjective, as in "FFT algorithm." I have no idea what Polish does, but our usual practice on Wikipedia is to go with the most common usage, and accept the fact that living languages are often illogical. Language and notation are ultimately determined by usage and convention; there is no absolute "right" and "wrong" unlike for mathematical facts. —Steven G. Johnson (talk) 23:54, 1 May 2008 (UTC)
Thanks for your reply. It turns out Polish is undergoing the same process as English. However, it hasn't gone that far yet, so on pl-wiki we are trying to stand by the more accurate word and explain the difference between the words. Safek (talk) 12:37, 2 May 2008 (UTC)

FFT WTF[edit]

Can I be provocative? This is the worst article I have ever found on Wikipedia.

I have a decent understanding of mathematics and calculus. I am an experienced programmer. I have used Wikipedia as a source for explanations of programming techniques and algorithms which I have successfully coded up and put into action, including the manipulation of 3-d data and drawing complex shapes.

I would like to produce rough and ready frequency spectra and frequency/time plots for audio signals (bat calls) using a standard microprocessor (not a custom DSP chip).

I know that FFT is the usual technique for doing such analysis and hoped Wikipedia would start me on the right road.

The article doesn't even explain what the FFT is in plain English.

It could start "Fourier transformation is a technique for analysing the frequency components of a signal. Fast fourier transform is a mathematical technique that allows this do be done rapidly using limited computing power, and as such is a basic building block for digital signal processing. The technique is a critical element to the past decade's explosion in applications using the real time digital processing of sounds and video, from mobile phones to high definition television." - but i don't know if that (which is my understanding) is correct or not.

Worse still, it doesn't explain any of the mathematics or theory in a non-mathematical way. It certainly doesn't explain how FFT lends itself to the processing of digital signals.

At the moment the article says "this subject is the province of graduate mathematicians only", which I feel is against the whole spirit of Wikipedia.

The Yowser (talk) 14:52, 8 October 2008 (UTC)

Well, the first line references the Discrete Fourier Transform, and the first line of that page references the Fourier Transform. If you want to understand their use in signal processing, those are the articles you should read. Those still might not answer your questions, but this is not the right place. Gah4 (talk) 02:31, 13 August 2009 (UTC)

... or perhaps I am thick?

The details of the specific algorithms are all described by sub-articles, so it's not surprising that you didn't find the "numerical recipe" you were looking for here. As explained in the article, there are many, many FFT algorithms, and the purpose of this article is to give an overview of what is out there. You probably want Cooley-Tukey FFT algorithm if you want a description of the most common specific algorithm in enough detail to implement it. As for explaining the algorithm in a "non-mathematical way", it already says that the most common algorithm (Cooley-Tukey) is a divide-and-conquer algorithm that works by breaking a transform of size n down into smaller transforms, most commonly of size n/2; for any more detail than that, you need mathematics. As for your proposed introduction, FFTs don't compute Fourier transforms in general, they only compute DFTs, so that's not appropriate here. If the reader wants to learn about Fourier transforms, they should go to another article, as is pointed out explicitly in the introductory paragraph. Your criticism would be a lot more useful if it wasn't addressed by things already explicitly stated in the article. —Steven G. Johnson (talk) 15:06, 8 October 2008 (UTC)
Some of those criticisms accepted, and I fully accept that WP is not an "algorithm mine" (though I personally wish it was full of well commented pseudo-code examples!). But, surely, the following questions could be answered in a non-mathematical way:

What is FFT?

"an algorithm to compute the discrete Fourier transform (DFT) and its inverse". — Steven G. Johnson (talk) 19:33, 7 February 2013 (UTC)

What problems does it solve?

What does it tell you about a signal?

What information do you need on the signal and what information do you get?

All described in the article on discrete Fourier transform. It wouldn't be appropriate to duplicate that information here, which is why we write "see discrete Fourier transform for properties and applications of the transform". — Steven G. Johnson (talk) 19:33, 7 February 2013 (UTC)

What, in plain English, is the basic method of getting an FFT?

The simplest algorithm "is a divide and conquer algorithm that recursively breaks down a DFT of any composite size N = N1 N2 into many smaller DFTs of sizes N1 and N2."

In my mind a simple way would be to sample a signal at a regular frequency. You use a simple statistical technique to identify the size of the component of the signal at that frequency. You then extract that component, halve the frequency, and repeat the exercise. You end up with a histogram with groups f, f/2, f/4, f/8..., each one representing the size of that component.

It sounds like you want information on the discrete Fourier transform (i.e. the mathematical operation that FFTs are a sophisticated algorithm for), not on FFTs. — Steven G. Johnson (talk) 19:33, 7 February 2013 (UTC)

Now there isn't enough information in the article for me to decide whether that is the case, or if I'm completely up the wrong tree.

Another example: you wouldn't have an entry for commutativity that had as its introduction "is a mathematical property of certain functions such that f(x,y)=f(y,x)". Or would you?

You might want to check out podcasts of In Our Time (BBC Radio 4) to see how complex mathematical concepts can be described without recourse to any formulae at all - check out the programme on infinity. And there is, of course, Gödel, Escher, Bach, though I don't expect every Wikipedian to reach that level of lucidity :)

Oh, and thanks for the Cooley-Tukey suggestion, but I already found that as part of a recursive loop around several articles ending 'FT' that all reference another article ending 'FT' for a proper explanation...

The Yowser (talk) 13:28, 9 October 2008 (UTC)

That is a problem with most math and science articles on WP; they are written for people who don't need to look up the subject.

As an afterthought: the article on Fourier transformation is better than the others - I'm going to add a link to it from the introduction. Although the 2002 article had a pretty good introduction:

http://en.wikipedia.org/w/index.php?title=Fast_Fourier_transform&oldid=60732

The Yowser (talk) 13:59, 10 October 2008 (UTC)

(The 2002 article was simpler by being wrong. It assumed the only algorithm was Cooley-Tukey, and only described the radix-2 case.)

Another Terrible Wikipedia Article[edit]

The FFT is actually very simple and needs no maths to explain, but this page is totally inaccessible to an intelligent layperson.

As Einstein said, “If you can't explain it to a six year old, you don't understand it yourself.” Gutta Percha (talk) 06:08, 16 October 2011 (UTC)


--- Have to agree: This article is totally incomprehensible to the layperson and doesn't even explain the purpose or give any practical uses by way of example. 1st April 2013 — Preceding unsigned comment added by 109.224.137.15 (talk) 22:30, 1 April 2013 (UTC)

I absolutely agree with the above sentiments. This page is full of jargon and completely out of line with Wikipedia's scope as listed here: [[2]], particularly items 6, 7, and 8. I'm a fairly well-educated person; although math isn't my specialty, I have been exposed to lots of it. This article reads like an in-depth technical piece, completely inaccessible for someone who, for instance, saw the term "Fast Fourier transform" in a scientific paper's methods section and just wants to get a brief overview of what the technique does. There are other outlets for teaching or showcasing the depth of your knowledge about frequency analysis techniques. This, however, is Wikipedia. Sirenology (talk) 18:42, 6 May 2013 (UTC)

Not constructive. What, specifically, should the article explain, or explain differently? Explaining the discrete Fourier transform and frequency analysis (the most common request) is outside of the scope of this article — you want the discrete Fourier transform and Fourier analysis articles for that. For instance, we already write:
This operation is useful in many fields (see discrete Fourier transform for properties and applications of the transform)
We shouldn't duplicate discrete Fourier transform#Applications here; it would be like explaining why matrices are useful in the Strassen algorithm article, or what sorting is good for in the Quicksort article. — Steven G. Johnson (talk) 18:51, 6 May 2013 (UTC)
OK, first of all, thank you for your time and effort spent on this Wiki article. You are obviously very knowledgeable in this field and have made great contributions to this article. My beef is that, when I go to the Wikipedia page on "Fast Fourier transform", I want to be able to read the article and gain some general context and understanding of the procedure, what it does, what the applications are. Although I am not a mathematician or a statistician, I am capable of comprehending things if laid out clearly and in common language, as Wikipedia articles are supposed to be (see my citation above). As it was written (before some edits earlier today), this article was useless to someone not familiar with the jargon. I did find some very helpful posts on other websites (e.g., Stack Overflow) which accomplished exactly this. So my constructive criticism is - and this has already been accomplished to a degree by edits earlier today - you should create an overview paragraph that lays out, jargon free, what a FFT does. A "for dummies", if you will. Then the dummies could stop there and the mathematics students could read on. Thank you again for your contributions- Sirenology (talk) 21:56, 6 May 2013 (UTC)

Numerical estimates[edit]

I strongly believe some simple numerical estimates should be included. Many of the readers who find this page will have no recollection of log(), or how slowly it grows with N. This is very important; a fast car is twice as fast as a normal car, but a fast Fourier transform is a hundred times as fast.
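(Concretely, for the N = 1024 case timed below: N^2 = 1,048,576 while N log_2 N = 10,240, so the predicted ratio is N / log_2 N = 1024/10 ≈ 102.)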

Clearly, it cannot be stated that a *general* O(N^2) process is slower than an O(NlogN) process by N/log(N). But in this case we know (from decades of work) that the constants are quite near 1, so the calculation is justified. It's also quite reasonable in practice. If you implement the DFT in the obvious way (using a table for the coefficients), such as:

 #include <complex>
 using namespace std;
 typedef complex<double> CD;   // CD is shorthand for complex<double>
 // table[k] holds the precomputed coefficient exp(-2*pi*i*k/N)
 void DFT(int N, CD* in, CD *out, CD *table)
 {
 for(int k=0; k<N; k++) {
   CD sum(0.0,0.0);
   for(int i=0; i<N; i++) {
       int indx = (i*k) & (N-1);  // & acts as mod N when N is a power of 2
       sum += in[i]*table[indx];
       }
   out[k] = sum;
   }
 }

Then a tuned FFT (I tried FFTW) is about 230 times faster than the code above for N=1024. Given that FFTW is a factor of several faster than naive FFTs such as Numerical Recipes, the speedups of all realistic FFT implementations should be within a factor of a few of N/log2(N). So it is quite realistic in practice. LouScheffer (talk) 04:28, 9 October 2008 (UTC)

Before trying this experiment, I looked for any experiments in the literature comparing the DFT and the FFT. I thought the early papers might have some, but could not find any. In a way it's a tribute to the FFT that people only compare FFTs against each other, since they all beat DFTs by huge margins. But in a very introductory article, likely to be read by (very) non-specialists, it would be great to say by how much. I added some ballpark figures to the intro, but a reference would be nice. LouScheffer (talk) 12:39, 9 October 2008 (UTC)
Another piece of evidence that numerical values may be helpful might be in "What Is the Fast Fourier Transform?", Proceedings of the IEEE, Vol. 55, No. 10, October 1967. This is an introduction to the FFT, aimed at a technical audience, and includes the sentence "For example, if N = 1024 = 2^10, then N log2 N = 10 240. Conventional methods for computing (1) for N = 1024 would require an effort proportional to N^2 = 1 048 576, more than 50 times that required with the FFT." Also, they felt they needed to footnote the calculation of log2(1024) - this is not obvious to everyone. Of course this is only counting multiplies, assuming a simple FFT strategy, etc., but it's still helpful to see the big picture, IMO. LouScheffer (talk) 14:11, 9 October 2008 (UTC)
In what other mathematical article would one witness the argument, "Yes, I know this analysis is wrong, but it happens to work out roughly okay in this particular case, so I'll present it anyway"? The constant factors can vary by orders of magnitude depending on the implementation details, depending on which FFT algorithm is chosen, depending on the factorization of N, the size of N, etc.
Your insistence on saying "the FFT" goes to the heart of the intellectual bankruptcy of this whole approach. There are many FFT algorithms that are totally distinct and have totally different constant factors; talking about the absolute performance of "the FFT" is nonsense (albeit excusable in 1967 when the existence of other algorithms was not well appreciated; there are still plenty of people who make the same mistake today, but now it is indisputably a mistake, like the common notions that FFTs don't exist for prime sizes or that powers of 2 are always the fastest). And even for a "single" algorithm like Cooley-Tukey, the performance easily varies by a factor of 20 between implementations, especially for large N. And the "constant" factor in the N log N itself varies by a factor of 5 or more as N varies, e.g. when you go out of cache (see fftw.org/speed, which plots the "constant" factor vs. N).
In summary, based on one data point, you are putting numbers in the article that could easily be off by one or two orders of magnitude, depending on the circumstances. This kind of nonsense has no place in an encyclopedia article.
—Steven G. Johnson (talk) 15:55, 9 October 2008 (UTC)
Sure, what you are saying is true, and should be considered by anyone to whom FFT performance is important. However, Wikipedia should (I believe) first give the big picture, then the gory details. The big picture is that FFTs are faster by *roughly* N/log(N), with all the caveats you mentioned. Also, I suspect the average Wikipedia reader has less technical and mathematical background than an IEEE Proceedings reader of 1967, where they felt they needed a numerical example. As a side note, I suspect you are one of the few people on the planet who would consider different FFT algorithms totally distinct; most folks, I surmise, might lump them in a common family when compared to, for example, singular value decomposition or gradient descent. LouScheffer (talk) 21:08, 9 October 2008 (UTC)
Would you therefore refer to "the" fast sorting algorithm, since all O(n log n) sorting algorithms solve the same problem (as opposed to SVD etc.)? No one does as far as I know, because everyone in CS knows there are many different sorting algorithms that achieve the same complexity—it's almost the first thing one learns about algorithms. With FFTs, the difference is that one algorithm (Cooley-Tukey) is so dominant that few people are aware that there are very different ways to solve the same problem (and few people, even in DSP, really learn the details of any FFT algorithm to begin with); and, even for those who do know, often they are writing in a context where it is clear what algorithm they are referring to, so they can afford to be sloppy. But of course, they are all in the same family: that's why we call them all fast Fourier transforms, even if none of them is the fast Fourier transform.
"Things should be simplified as much as possible, but no simpler." I agree that we should try to present the big picture, but I think we should try to avoid presenting analyses that are wrong. —Steven G. Johnson (talk) 22:56, 9 October 2008 (UTC)

Does anyone else have an opinion on these issues? If so, please weigh in! LouScheffer (talk) 21:08, 9 October 2008 (UTC)

Hi, I'm just a grad student looking for information for a project, and I cannot find the physical meaning of the power spectrum for the FFT: I know how to calculate it with the complex conjugates, and I have found an interesting algorithm, but have no clue how to interpret the data. Thanks, guys! —Preceding unsigned comment added by Minsk380 (talkcontribs) 12:38, 20 January 2009 (UTC)

Length of LEDE[edit]

Hi! My feeling is that the LEDE should hit all the main points, and be short enough to fit on one screen without scrolling. The longer lede version is better at this. Adding extra divisions, in my mind, makes it worse since it pushes introductory text off the first screen, where readers are most likely to see it. Note that many featured articles (see, for example, Hubble Space Telescope) have ledes of this length. LouScheffer (talk) 11:04, 11 October 2008 (UTC)

I agree; a lede that fits comfortably without scrolling is perfectly fine, and it's not clear how the reader will be helped by breaking up the current lede. In any case, if you want to split it up, you can't simply add headers, you need to rewrite the text. Just adding headers without changing the text breaks up the train of thought, and makes the lede inadequate because crucial information (e.g. a clear definition of FFTs) is relegated to a later section. (If it really needed to be shortened, my inclination would be to put the explanation of how much slower O(N^2) is than O(N log N) into a subsection. The main purpose of the lede should be to delineate the subject of the article. But again, this would require some rewriting, not just adding headers and cut-and-paste.) —Steven G. Johnson (talk) 23:07, 11 October 2008 (UTC)

Oh Notation[edit]

"Θ(N2) to Θ(N log N) remains." Say. Shouldn't those thetas be capital letter Os?--72.75.86.157 (talk) 07:56, 16 October 2008 (UTC)

Technically, no. —Steven G. Johnson (talk) 15:12, 16 October 2008 (UTC)
But as the user points out we make it all the way through the article using Big Oh notation until this point. Does it really make sense to switch to a slightly different concept without comment, link, etc? Thenub314 (talk) 06:50, 17 October 2008 (UTC)
I've added a comment and link. The problem is that saying FFTs reduce the complexity from O(N^2) to O(N log N) is literally false. FFT algorithms are still O(N^2) in addition to being O(N log N), because "O" only denotes an upper bound. —Steven G. Johnson (talk) 14:12, 17 October 2008 (UTC)
I am very confused by this second sentence. Big Theta implies Big-Oh, so saying the complexity is O(N log N) is true. I agree that every worse upper bound is still an upper bound, so the algorithms are also O(N^3), etc. But there is nothing literally false about saying FFTs reduce the complexity from O(N^2) to O(N log N). Thenub314 (talk) 07:13, 18 October 2008 (UTC)
If you say that they reduce the complexity from O(N^2) to O(N log N), you are literally saying that they are no longer O(N^2), which is false. —Steven G. Johnson (talk) 20:02, 18 October 2008 (UTC)
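(For anyone following along, the formal definitions at issue: f(N) = O(g(N)) means |f(N)| \le C g(N) for some constant C and all sufficiently large N (an upper bound only), while f(N) = \Theta(g(N)) means c_1 g(N) \le f(N) \le c_2 g(N) for constants c_2 \ge c_1 > 0 (a tight bound). Hence an N log N algorithm is both O(N log N) and O(N^2), but it is Θ(N log N) and not Θ(N^2).)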

About complexity[edit]

I have read the article and I don't understand why the original transformation's cost is O(N^2). It is just a sum over 0..N-1, so it should be O(N). If I want to compute X_i I just need to iterate N times. —Preceding unsigned comment added by 62.43.167.123 (talk) 16:47, 13 January 2009 (UTC)

Each output is a sum of N terms, so is O(N). But there are N outputs, so you need to do N sums, so it is N·O(N) = O(N^2). —Steven G. Johnson (talk) 21:37, 22 May 2009 (UTC)
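(For reference, the sum in question is X_k = \sum_{n=0}^{N-1} x_n e^{-2\pi i n k / N}: each of the N outputs X_k costs about N complex multiply-adds when evaluated directly, hence roughly N^2 operations in total.)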

Huh?[edit]

Could someone please write a new opening paragraph with a more pedestrian explanation of what FFT is?

Reading this, I feel like an illiterate who can't count trying to read an advanced calculus book. —Preceding unsigned comment added by 208.127.100.174 (talk) 09:22, 22 August 2009 (UTC)

I also think that people with a finished university education in a mathematical or physical area do not need Wikipedia to read about the FFT, as they already know enough about it. For others, the article seems to be getting unreadable. Audriusa (talk) 13:53, 29 November 2010 (UTC)

One of the worst articles I've ever read[edit]

I'd like to pile on about how terrible this article is. I came here to refresh myself on FFTs, and this article is completely different from how FFTs are explained in the books and signal-processing classes I've taken. It doesn't really define FFTs, or even explain how they eliminate computations, and it is overly mathematical. It doesn't even mention FFT butterflies, or the W operator!

Let me make my suggestions: The FFTs are a set of algorithms which help reduce the number of calculations required for computing the DFT. In general, the FFT reduces the number of calculations by choosing parameters of the bank so that they are harmonically and symmetrically related, and consolidate the operations, namely the phase calculations. For example, if a bank's size is 4, then a phase rotation of 2*d_theta, which is 180 degrees, simply becomes a negative sign, while a phase rotation of 3*d_theta becomes -1*d_theta, eliminating 2 calculations through harmonic symmetry.

166.19.102.20 (talk) 13:35, 3 February 2011 (UTC)

Often in the DSP world, the term FFT is used where DFT would be mathematically correct. One should read the DFT page before this one, if one doesn't understand the idea of DFT. Most likely, the DFT page also references a Fourier Transform page, though Fourier Series is closer to correct. Gah4 (talk) 14:12, 3 February 2011 (UTC)

It also sounds like the anon didn't notice that there are sub-articles describing the specific algorithms; this article is necessarily less specific because it has to summarize all the algorithms. In many courses and textbooks, only the Cooley–Tukey FFT algorithm is described, often only for power-of-two sizes, so you may get the false impression that there is only one FFT algorithm — a clue to this mistake is the misleading phrase "the FFT". If you come here expecting that false impression to be reinforced, you will be disappointed. If you want information specifically on the Cooley–Tukey algorithm (which does talk about butterflies), you can read that sub-article. — Steven G. Johnson (talk) 14:27, 3 February 2011 (UTC)

Gauss[edit]

but it was later discovered (Heideman & Burrus, 1984) that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).': This is a wrong statement. In the Gauss paper[1] there is no fast Fourier transform, discrete Fourier transform, or even ordinary Fourier transform --- everybody can see this in the Gauss paper independently of the Latin language used by Gauss, since all the formulas are NOT in Latin. The source (Heideman & Burrus, 1984) isn't appropriate, since the paper was written not by mathematicians, was published not in a mathematical journal, and didn't go through the process of math reviews. The authors evidently have no knowledge of computational complexity. However, even from the formulas presented in Gauss's work it is easy to see that there are no FFT algorithms there. Riemann'sZeta (talk) 09:27, 2 April 2011 (UTC)

First of all, this is original research, because your claims directly contradict the published research (Heideman & Burrus, who are world-renowned experts on fast Fourier transform algorithms). (By the way, most FFT research is published in IEEE and other engineering journals.) Note also that many other published secondary sources echo the same information (for example, this millennial review article). If you disagree, you are free to try to publish your own article in a reputable journal, but Wikipedia is not the place to push your iconoclastic views.
Second, I have a copy of Gauss's Theoria interpolationis notes (and I know some Latin), and it is easy to verify that he does very clearly describe trigonometric interpolation via a DFT and the inverse DFT, and he also very clearly describes a Cooley-Tukey FFT algorithm (he performs a radix-3 step to subdivide an N=12 transform into 3 N=4 transforms, and explicitly mentions the possibility of additional subdivisions for N with more factors). Of course, he doesn't call it a "Fourier" transform, and his notation is different from modern notation [he writes everything out in terms of sines and cosines, and doesn't use subscripts but instead uses primes (IIRC the cosine coefficients are \alpha', \alpha'', \alpha''' and so on)], but once you get past the notational differences the equivalence is clear. (Oh, and by the way, if you read the Heideman & Burrus article they explain in great detail what Gauss is doing, and you can look at Gauss' notes to verify that Gauss is doing what they say he is.) But I write this only for the information of other editors; I don't have to convince you, because the published secondary sources are absolutely unambiguous on this point and the WP:OR policy applies.
— Steven G. Johnson (talk) 17:32, 2 April 2011 (UTC)

What you wrote --- it's nonsense! This is not the fast Fourier transform! What you describe is ordinary transformations of trigonometric series; it's possible to find analogous expressions in many of Gauss's works. You wrote that Heideman and Burrus are specialists in fast algorithms --- what other papers did they write before? — Preceding unsigned comment added by Riemann'sZeta (talkcontribs) 20:44, 4 April 2011 (UTC)


Maybe it would be appropriate to write that there is an opinion (ref. to Heideman and Burrus) that the prototype of the FFT is described in the paper of Gauss (ref. to this Gauss work + a scan of this work in Latin, the original work with all formulas). In such a form you give everybody the possibility to judge for themselves. In the present form you misinform the readers! Riemann'sZeta (talk) 20:54, 4 April 2011 (UTC)

Computation of the coefficients of trigonometric interpolation for equispaced points is a discrete Fourier transform. If you don't understand this, you don't understand what the DFT is (or how trigonometric interpolation works).
C. S. Burrus has written hundreds of papers on signal processing, and is widely known for his papers on FFT algorithms. Heideman is also well known (e.g. for this paper).
The attribution of FFT algorithms to Gauss is not just the idiosyncratic opinion of two authors; it has been widely accepted in the field. I already mentioned the millennial review article by Rockmore, which cites it approvingly, but the paper has been cited dozens of other times. Notably, Jim Cooley himself approvingly cited the Heideman and Burrus article, and agreed that the earliest known FFT was due to Gauss: James W. Cooley, "The re-discovery of the fast Fourier transform algorithm," Mikrochimica Acta vol. 3, pp. 33–45 (1987).
— Steven G. Johnson (talk) 22:49, 4 April 2011 (UTC)

Additions to Bounds on Complexity and Operation Counts[edit]

I attempted to add a recent result on some FFT operation bounds proven by SAT solvers to the subsection "Bounds on Complexity and Operation Counts" but my changes were reverted due to conflict of interest. (I thought I was OK since the result refers to a peer-reviewed journal.) I'd appreciate review and possible inclusion by some independent person(s). You can see the additions I tried to make under the article history (changes on 6 December 2011 by Haynals). — Preceding unsigned comment added by Haynals (talkcontribs) 22:06, 6 December 2011 (UTC)

I agree that your paper, which is an unusual approach to this problem, seems worth mentioning in this context, and have added back a somewhat amended description. — Steven G. Johnson (talk) 04:13, 7 December 2011 (UTC)

Why power of two?[edit]

Just wondering. Why do most of the algorithms I've seen require the input array length to be a power of 2? 68.173.113.106 (talk) 00:06, 1 March 2012 (UTC)

2 is the smallest prime, so the algorithms are simplest in that case... this is why many textbooks only show that case, and in particular they usually only show Cooley-Tukey for power-of-two sizes. FFT algorithms exist for any size, though, even prime sizes. — Steven G. Johnson (talk) 00:37, 1 March 2012 (UTC)

An additional reason is that FFT algorithms are typically implemented on binary computers, for which some operations (such as modulo and multiplication) are much more efficient when they have an argument of the form 2ⁿ.

While this is a superficially plausible theory, it is not really accurate as far as I can tell (nor can I recall seeing this justification in any of the early papers). As a practical matter, one can always arrange the computation so that the difference in indexing performance between (e.g.) 2ⁿ and 3ⁿ is negligible: for example, in a traditional textbook-style breadth-first Cooley-Tukey implementation, the multiplication or division by the radix only needs to be done once per pass over the array (with the rest of the indexing computations being additions). — Steven G. Johnson (talk) 14:51, 30 July 2013 (UTC)
Even in Cooley-Tukey the factor 2ⁿ pops up in many places, e.g. when the for-loops all run from 0 to 2ⁿ-1, i.e. from all zeroes to n ones in binary. That's the ideal loop, and there are a lot of loops (O(N log N)). I'm not at all claiming that it is impossible; I'm just indicating why (as a programmer, not a mathematician) you'd prefer such powers of 2. — Preceding unsigned comment added by 2001:980:396E:1:D86C:31F5:6801:F457 (talk) 09:24, 1 August 2013 (UTC)
I've implemented numerous FFTs, and this is a negligible issue from a practical programming standpoint as well as a mathematical viewpoint. You have to look at where you multiply or divide by the radix, and you'll see that this is typically only done at most O(log N) times, and hence is negligible. A more substantial issue is that the twiddle factors for power-of-two radix butterfly operations often simplify (e.g. to ±1, ±i, and (±1±i)/√2), so it is easier to implement optimized radix-4, radix-8, and radix-16 Cooley–Tukey steps than it is to implement, say, radix-10 efficiently. — Steven G. Johnson (talk) 15:10, 13 June 2014 (UTC)
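(Concretely, assuming the usual convention \omega_N = e^{-2\pi i/N}: the factors arising inside the size-4 sub-DFT of a radix-4 step are powers of \omega_4 = -i, namely {1, -i, -1, i}, which cost only sign changes and real/imaginary swaps, while radix-8 adds \omega_8 = (1-i)/\sqrt{2} and \omega_8^3 = (-1-i)/\sqrt{2}, each costing just two real multiplications by 1/\sqrt{2}. The powers of a 10th root of unity enjoy no such shortcuts, which is part of why an efficient radix-10 step is harder to write.)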

Known exploits?[edit]

Are there known exploits of the FFT, by, say, using the power-of-two nature to design visual camouflage or radio jamming so that the interference is ignored? Hcobb (talk) 12:40, 1 August 2013 (UTC)

#1 Your comment makes no sense. #2 FFTs are rarely used in modern signal processing because you need to do something to the FFT data to get something useful out of it. Wavelets provide that kind of discrimination. — Preceding unsigned comment added by 24.12.228.160 (talk) 04:19, 7 December 2013 (UTC)
^ Gauss, Carl Friedrich, "Nachlass: Theoria interpolationis methodo nova tractata", Werke, Band 3, 265–327 (Königliche Gesellschaft der Wissenschaften, Göttingen, 1866)