Talk:Sinc function

Relationship to the Dirac delta distribution

There is nothing special about the sinc function that makes f(x/a)/a approach the delta; this is true for a very wide class of functions. — Preceding unsigned comment added by (talk) 03:04, 5 April 2012 (UTC)


I have seen definitions of sinc where the removable singularity at x=0 is removed explicitly, i.e.,

sinc(x) = sin(x)/x for x ≠ 0, and sinc(0) = 1.
However, I've also seen definitions where it is simply left as (sin x)/x. Which is best? Shall we mention both? Dysprosia 07:39, 8 Jan 2005 (UTC)

I vote for explicit removal, plus a comment that this makes the function analytic at 0. --Zero 12:07, 1 Feb 2005 (UTC)
I agree; I inserted it. Paul Reiser 15:03, 1 Feb 2005 (UTC)
I strongly disagree. Just read the definition of removable singularity (which is correct) and you'll see that this is not correct. I'm very much in favour of writing the correct definition (at least for the "mathematical" case, if engineers would be offended), or at least writing clearly and explicitly that the sinc function is defined to equal 1 at the point where this expression is not defined. Otherwise it is NOT defined, and since it should be a classical function (and not only L² or a distribution), one HAS to specify a value at 0. — MFH:Talk 11:58, 16 October 2008 (UTC)
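The point being debated above can be made concrete in code. Below is a minimal sketch (my own illustration, not from the article) of the unnormalized sinc with the removable singularity at 0 filled in by an explicit defined value, as MFH argues one must:

```python
import math

def sinc(x):
    """Unnormalized sinc: sin(x)/x for x != 0, with the removable
    singularity filled in by the defined value sinc(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(x) / x

print(sinc(0.0))       # 1.0, by definition rather than by evaluation
print(sinc(1e-8))      # ~1.0, consistent with the limit sin(x)/x -> 1
print(sinc(math.pi))   # ~0.0, the first zero of the unnormalized sinc
```

Without the `x == 0` branch the expression sin(x)/x simply raises a division error at 0, which is the sense in which the function "is NOT defined" there.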

More on sinc definition

first of all, there appears to be something missing in the first paragraph. "It is defined by:" what?!

certainly in some math and physics texts, i have seen the definition of sinc() as stated in this article:
sinc(x) = sin(x)/x

but in electrical engineering signal processing texts (this is where the sinc() function is used most often, as far as i can tell) the definition is adjusted a little:
sinc(x) = sin(πx)/(πx)
it is mentioned as the "normalized sinc" function in this article, but i have never seen that term used anywhere else.

this is typical of the difference in notation between math and physics texts and electrical engineering texts. is it or  ? is it or in the Fourier Transform? is it or  ?

i am bothered by the lack of self-consistent notation inside of wikipedia (the i vs. j thing doesn't bother me) but am not sure what can be done about it. r b-j 17:07, 11 Mar 2005 (UTC)

A google search on "sinc function" is helpful. It seems both are used. How do you suggest we fix the article? Regarding what to do, I think we just have to fix any inconsistencies whenever we run across them. Paul Reiser 17:38, 11 Mar 2005 (UTC)
believe me, i have googled it. and it comes out both ways. Wolfram likes it without the pi but there are many web pages on both sides of the definition. what i think has happened is that some electrical engineering profs long ago decided that the sin(πx)/(πx) definition was better, especially in signal processing (since the sinc() function then hits zero at all other sample times). Many have also redefined the Fourier Transform from the definition here to one with e^(−i2πft) in the kernel and no 1/(2π) factor in the inverse transform. This simplifies the duality property, power expressions, and Parseval's theorem. It would be nice if Wikipedia were completely consistent and used the simplest definitions, but i am afraid that historical and practical concerns are not identical. r b-j 05:47, 12 Mar 2005 (UTC)

The "normalized" means that x would have a range of -1 to 1 and cover the unit circle. Cburnett 17:29, 14 Mar 2005 (UTC)

Here "normalized" means that it has unity squared integral. See the second to last item in "properties" section. Aoosten (talk) 10:09, 26 October 2009 (UTC)
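Both senses of "normalized" raised above are easy to check numerically. The sketch below (my own illustration; the grid and truncation are arbitrary choices) uses numpy.sinc, which implements the normalized convention sin(πx)/(πx), to confirm the zeros at the nonzero integers and the unity squared integral:

```python
import numpy as np

# numpy.sinc is the normalized sinc, sin(pi x)/(pi x).
x = np.linspace(-1000, 1000, 2_000_001)   # step 0.001, truncated at +/-1000
y = np.sinc(x)

# Zeros at every nonzero integer (the "sample times" mentioned above):
print(np.sinc(np.arange(1, 5)))           # all ~0

# "Unity squared integral": the integral of sinc^2 over the real line is 1.
energy = np.sum(y ** 2) * (x[1] - x[0])   # simple Riemann sum
print(energy)                              # ~1 (truncation error ~1e-4)
```

The tail beyond the truncation contributes about 1/(π²·1000) ≈ 10⁻⁴, which is why the sum is close to but not exactly 1.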

Removed this section:

It has the following limiting property:

where δ is the Dirac delta function. This is an interesting equation, since it contradicts the view of the delta function δ(x) as being zero everywhere except at x=0. In this representation, the delta function oscillates infinitely rapidly inside an envelope of y=±1/x. If a function is continuous on some interval that does not contain zero, then its integral against the delta function over that interval will be zero. In this case, that is due to the infinitely fast variation of the delta function, which averages to zero.

The displayed equation does not make sense. The variable a is a dummy variable, so the answer cannot be a function of it. The first limit in fact does not exist according to the definition of limit, and so the second doesn't either (as it's the same function). The comment about "oscillating infinitely often" has no mathematical meaning. --Zero 12:07, 1 Feb 2005 (UTC)

Sorry about that, it's a typo and should read δ(x) on the right-hand side.

With regard to the question of whether the limit exists, a quote from the Dirac delta function article says:

The delta function can be viewed as the limit of a sequence of functions

where δa(x) is sometimes called a nascent delta function. This may be useful in specific applications; to put it another way, one justification for the delta-function notation is that it doesn't presuppose which limiting sequence will be used. On the other hand the term limit needs to be made precise, as this equality holds only for some meanings of limit. The term approximate identity has a particular meaning in harmonic analysis, in relation to a limiting sequence to an identity element for the convolution operation (on groups more general than the real numbers, e.g. the unit circle).

With regard to "oscillating infinitely often", I agree that strictly speaking it has no meaning, but neither does this quote from the delta function article:

The Dirac delta function, introduced by Paul Dirac, can be informally thought of as a function δ(x) that has the value of infinity for x = 0, the value zero elsewhere, and a total integral of one.

The point I was trying to make is that the above informal idea of the delta function is not applicable to the above limit of the sinc function. We can write about the delta function in strictly correct mathematical terms, which will be opaque to anyone without mathematical training, or we can relax and give a newcomer an intuitive idea of the behavior of the delta function. I believe the latter is best. Maybe the "informally thought of" caveat can be added to the "oscillating infinitely often" statement? Anyway, looking at what I've just written, I think that this discussion of the delta function should go in the delta function article, not here in the sinc function article, so I will put it in there soon. Paul Reiser 15:03, 1 Feb 2005 (UTC)

I think the limit
is valid, if interpreted in the distributional sense, i.e.
for all compactly supported functions φ. The limit is of course invalid in the usual calculus sense, because the δ "function" is not a function but a distribution. However, I'm not absolutely sure about it, so I didn't put it back in. Anyway, if the limit is indeed correct, then this could well be mentioned in this article (either with the intuitive explanation, or with a mathematically precise explanation, or both). Personally, I would leave out the sentence "This is an interesting equation since it contradicts the view of the delta function δ(x) as being zero every where except at x=0". -- Jitse Niesen 14:43, 2 Feb 2005 (UTC)
I think it should be , not , in which case I agree with your interpretation. --Zero 17:49, 2 Feb 2005 (UTC)
I agree with that too. Again, it might be worth mentioning here, but its mostly a discussion of the nature of the delta function and would better go in the delta function article. I will do that soon, and I will be sure to drink a cup of coffee and wake up before I do it. Paul Reiser 18:53, 2 Feb 2005 (UTC)
Very interesting. Belongs in the delta function article. linas 17:30, 26 Mar 2005 (UTC)
I put a mention of it in the introduction, with an explanation in the sinc function article. The delta function article has a number of cases where it is assumed that the delta function is zero everywhere except at x=0, including the idea that the delta function is a probability distribution, and that its support is x=0. I'm not an expert in distributions, so I'm trying to get a discussion going rather than just go in and change things, but it doesn't seem to be working. If you know about this subject, or know anyone who does, I think some input would be helpful. PAR 20:48, 26 Mar 2005 (UTC)
The reason people get into a mess when talking about the delta function is that they think it is a function. It isn't! It's a distribution. Further confusion comes from the word "distribution" having at least two quite different meanings. The delta function is not a "probability distribution", since those are functions which are monotonically non-decreasing (etc). If we take the step function F(x) = 0 (x<0), 1 (x≥ 0), that is a probability distribution. Its density does not exist in the usual sense (as a function) but can be described as a distribution (other sense), in which case we get the delta function. Probabilists prefer to use the language of measures and Stieltjes integrals, so instead of writing ∫ g(x) δ(x) dx they write ∫ g(x) dF(x), both of which equal g(0). The talk about being infinite or oscillating infinitely often etc. is just an outcome of the mistake of thinking of δ(x) as a function. The correct interpretation is given a few paragraphs above. And, yes, the delta function article needs fixing. I'll try to look at it before long. --Zero 02:04, 27 Mar 2005 (UTC)

I reverted to the previous version and if you revert to your previous version, I won't fight it until we've talked about this for a while. The reason I reverted is that I carefully revised the Feb. 1 statement about the limit of the sinc function to produce this new one. It's not a simple reversion to the Feb. 1 statement, yet you use phrases from the old statement as the reasons you reverted. For example, I never used the phrase "infinitely often" in the latest statement. I've never used the idea that the delta function is a function without the "informal" modifier. Please read the latest statement before discarding it.

Also, I am trying to make a point here, and it's the same one that you are trying to make. Thinking of the delta function as a function can get you into trouble, and this is an excellent example of that. Can you see why it bothers me that every time I try to make this point you revert and leave nothing in its place? Will you please help me come up with something that shakes people out of the "zero everywhere except infinity at zero" mindset, and yet is not so esoteric as to make a newcomer's eyes glaze over? PAR 03:18, 27 Mar 2005 (UTC)

PAR, could you please provide a reference for the limit. I have no problems with the current formulation, except that I'd like to add a reference and a caveat that it is not a normal limit. Zero, could you please say exactly what you disagree with? It might be nice to connect the example to the Riemann-Lebesgue lemma, which says that "oscillating infinitely often = zero". -- Jitse Niesen 12:44, 27 Mar 2005 (UTC)

It's taken from some notes I have, and I don't have the original reference. Googling "dirac delta sinc limit" yields a number of possibilities, the best of which seems to be [1] which in turn references Bracewell (1986) on the subject a number of times. I will track down more if this is not sufficient. Thanks for helping us with this. PAR 16:54, 27 Mar 2005 (UTC)

I object to it because it is wrong. The limit on the left side does not exist for any x except x=0. That means the limit claim cannot be allowed as is. We can consider presenting it as a limit in the space of generalised functions but that page is not too user-friendly and I can't see a definition of limit there which is clear enough to refer to. That leaves spelling out the definition, like Jitse did above just under the phrase "in the distributional sense". Notice how the limit is taken outside the integral; this makes all the difference. I would have no objection to seeing that on the page. However, the metaphysical comments about δ(x) we can do without. I find them also wrong (to the extent they are meaningful at all). Even if these were ordinary functions (which δ(x) is not) there is no reason that a limit should have any particular property of the functions in the sequence. It is perfectly ordinary for a limit to satisfy bounds that none of the functions in the sequence satisfy. An example which is quite similar is the Gibbs phenomenon -- we don't say that this contradicts our understanding of the square-wave function having square corners.
In summary: my preference is to put in something like this:

In the language of distributions, the sinc function is related to the delta function δ(x) by
This is not an ordinary limit, since the left side does not converge except at x=0. Rather, it means that
for any smooth function with compact support.

Btw, this might look nicer using sincN. --Zero 00:35, 28 Mar 2005 (UTC)

Ok, I get the idea. Note, the above needs to have the limit a->0, so that will change things a little; the limit then doesn't exist for any x at all, right? I still think we should not pretend the "zero everywhere except at zero" idea of the delta function doesn't exist. The delta function article shows that mindset a number of times, and references with that mindset are all over the place. I think we should use this example to help dispose of that idea. That means we have to make metaphysical comments, if only to point out that they are not useful. We shouldn't just "do it right" and ignore the problem. Do you have any ideas on how to do this? PAR 01:27, 28 Mar 2005 (UTC)
Yes, a->0, sorry. As for the delta function, the article on it is the place to correct errors and misconceptions, not this article which is only slightly related. --Zero 11:48, 28 Mar 2005 (UTC)
I revised the statement to conform to your statement above. I kept some "metaphysical" comments, but I agree, they would better go in the dirac delta function article. Right now, there is just a passing reference in the introduction of the delta function article to this sinc function topic. I couldn't figure out how to transfer the metaphysical stuff, but I will eventually, unless you edit the delta function article first, which, by the way, I think is a better idea. PAR 13:58, 28 Mar 2005 (UTC)

What is the purpose of the "disclaimer"
The normalized sinc function can be used as a nascent delta function, even though it is not a distribution.
 ? As a locally integrable function, it IS a distribution. And anyway, why SHOULD it be a distribution, in order to BE (not "to be used as"...) a nascent delta function? — MFH:Talk 12:10, 16 October 2008 (UTC)
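For readers of this thread: the distributional statement everyone converged on can be sanity-checked numerically. In the sketch below (my own illustration; the Gaussian test function, grid, and truncation are arbitrary choices), the integral of sin(x/a)/(πx) against a smooth test function φ approaches φ(0) as a → 0, even though the kernel itself has no pointwise limit:

```python
import numpy as np

# Check the distributional limit: as a -> 0, smoothing a test function phi
# with the kernel sin(x/a)/(pi x) approaches phi(0), even though the kernel
# diverges at 0 and oscillates everywhere else.  phi(x) = exp(-x^2) here.
def smoothed_at_zero(a, L=50.0, n=1_000_001):
    x = np.linspace(-L, L, n)
    # sin(x/a)/(pi x), written via numpy's normalized sinc so x = 0 is handled:
    kernel = np.sinc(x / (np.pi * a)) / (np.pi * a)
    phi = np.exp(-x ** 2)
    return np.sum(kernel * phi) * (x[1] - x[0])

for a in (1.0, 0.1, 0.01):
    print(a, smoothed_at_zero(a))   # tends to phi(0) = 1 as a shrinks
```

This is exactly the "limit outside the integral" reading that Zero and Jitse describe: the numbers converge to φ(0) even though the kernel values themselves do not converge anywhere.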

incorrect use of de l'Hospital rule

One cannot state that, by de l'Hospital's rule, it is
lim_{x→0} sin(x)/x = lim_{x→0} cos(x)/1 = 1,
because de l'Hospital's rule uses the derivatives, and in order to differentiate the function one has to use that limit. In fact
D(sin(x))|_{x=0} = lim_{h→0} (sin(h) − sin(0))/h = lim_{h→0} sin(h)/h,
which is the very limit in question.
You don't NEED l'Hospital's rule to calculate the derivative of sin(x). You have just shown that it COULD be used. You could take the derivative of the Taylor expansion of sin(x) to get the derivative of sin(x) without using l'Hospital's rule. Then l'Hospital's rule could be used to prove sin(x)/x -> 1. PAR 15:07, 28 October 2005 (UTC)
I don't understand your reply. I do not use l'Hospital rule to calculate the derivative of sin(x). I have shown that in order to calculate the limit of sin(x)/x with the de l'Hospital rule you must know in advance that sin(x)/x->1, so you can't calculate the limit of sin(x)/x using the de l'Hospital rule. Roberto.zanasi 21:13, 28 October 2005 (UTC)
in that case, you would say
r b-j 20:00, 28 October 2005 (UTC)
Roberto, you do have a point. If one wants to use l'Hôpital's rule to calculate the limit, then one has to know the derivative of the sine at x = 0. However, the limit follows immediately from that derivative being one, by the very definition of derivative. Nevertheless, I think that many people of the intended audience will prefer using l'Hôpital's rule and the fact that the derivative of the sine is the cosine (which they accept without much thought). -- Jitse Niesen (talk) 20:15, 28 October 2005 (UTC) (via edit conflict)

Roberto, you say "I have shown that in order to calculate the limit of sin(x)/x with the de l'Hospital rule you must know in advance that sin(x)/x->1". I don't need to know that in advance, unless I use your method of calculating the derivative D(sin(x)). Suppose we calculate the derivative using the series:
D(sin(x)) = D(x − x³/3! + x⁵/5! − ⋯) = 1 − x²/2! + x⁴/4! − ⋯ = cos(x)
The limit of sin(x)/x was never needed. PAR 20:22, 28 October 2005 (UTC)

i dunno what your point is meant to be PAR, and i dunno if Roberto was the "OP", but you still have a circular logic problem if you use Maclaurin or Taylor series to make your proof that sin(x)/x → 1: how do you get your Maclaurin or Taylor series in the first place without knowing the derivatives of sin(x) and cos(x) at x = 0? BTW, i hope you are not using this "Euler's notation": "D(sin(x))" in articles for the derivative. it's peculiar. why not the normal Newton or Leibniz? r b-j 21:05, 28 October 2005 (UTC)

I see what you are saying. But I could say "how do you get the derivative at zero without first knowing the series expansion?".

the way we did it in my first calculus class was Roberto's way with a geometric argument for why sin(x)/x -> 1 (the chord becomes the same length as the arc as x -> 0).

I guess the final answer depends on the definition of sin(x). I think sin(x) is mathematically (not geometrically!) defined in terms of its series, but I may be wrong. What is the definition of sin(x)? Once we have that, we will have the answer.

Also - I'm only using D(sin(x)) because I was responding to Roberto, who introduced the notation. PAR 21:45, 28 October 2005 (UTC)

i see that's the case. it's an okay notation when i'm doing normed metric spaces, but i don't like it for normal calculus. it's just a preference, there's little else wrong with it. r b-j 02:05, 29 October 2005 (UTC)

I have changed the wording a bit so as not to imply that l'Hospital's rule can necessarily be used as a proof of the statement in question. PAR 03:07, 29 October 2005 (UTC)

Ok, the very question is: which is the definition of sin(x)? If we define sin(x) geometrically, then we cannot use de l'Hospital's rule to calculate the limit of sin(x)/x, as I have shown. This is the normal way (pre-graduate and university courses, here in Italy). If we define sin(x) like this:
sin(x) = x − x³/3! + x⁵/5! − ⋯
then it is possible to use de l'Hospital. I think that this is an unnatural definition (but more rigorous than the classical one), but it is my very humble opinion... Roberto.zanasi (how can I sign my posts automatically? I'm quite a newbie :-))

Hi Roberto - I am quite sure that whatever the definition is, it should not be geometrical, because that involves a lot of unnecessary baggage, needing to build up a two-dimensional euclidean metric space before you can define the function. It should be very simple and not make reference to triangles, etc. I consulted "Shilov - Elementary Real and Complex Analysis" and he gives two definitions:

1. sin(x) and cos(x) are functions of complex variables x such that:
  • for sufficiently small x>0
2. the power series
sin(x) = x − x³/3! + x⁵/5! − ⋯,  cos(x) = 1 − x²/2! + x⁴/4! − ⋯
Shilov says either definition yields the other as a result, but the series definition is preferred.

Also - sign your name with four tildes: ~ ~ ~ ~ PAR 15:10, 29 October 2005 (UTC)

Very well. In that case, the use of the de L'Hospital rule is legitimate. Thanks. Roberto.zanasi 16:50, 30 October 2005 (UTC)
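For reference, here is the series route in full: with the power-series definition of sin, the limit falls out by dividing term by term, with no appeal to l'Hôpital or to geometry:

```latex
\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}
       = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots
\quad\Longrightarrow\quad
\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots
\;\xrightarrow[x \to 0]{}\; 1 .
```

This is the sense in which the choice of definition settles the circularity question: under the series definition, the derivative of sin at 0 (and hence l'Hôpital) is available independently of the limit.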

Relationship to delta function

Should this:

actually be this?:

--Bob K 19:26, 28 March 2006 (UTC)

No, I don't think so, at least not under the definition used in the article. It follows from
∫ sinc(x) dx = ∫ (sin x)/x dx = π (integrating over the whole real line)
that at least the constant is correct. However, if you use the alternative definition, which in the article is denoted by sincπ(x), then the second formula is indeed correct. -- Jitse Niesen (talk) 07:50, 29 March 2006 (UTC)
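The constant at issue here can be checked numerically: the unnormalized sinc integrates to π over the real line, which is why its nascent-delta form carries a 1/π factor. A sketch (my own illustration; the truncation at ±10000 is arbitrary and leaves a tail error of order 10⁻⁴):

```python
import numpy as np

# The unnormalized sinc integrates to pi over the real line; equivalently,
# sin(x)/x scaled by 1/pi has unit integral, which fixes the constant in
# the nascent-delta expression for the unnormalized convention.
x = np.linspace(-10_000, 10_000, 4_000_001)   # step 0.005
dx = x[1] - x[0]
unnormalized = np.sinc(x / np.pi)             # sin(x)/x, with the x = 0 value filled in
integral_over_pi = np.sum(unnormalized) * dx / np.pi
print(integral_over_pi)                        # ~1, i.e. the integral of sin(x)/x is ~pi
```

Under the normalized convention sin(πx)/(πx) the same integral is exactly 1, so no extra constant appears, which is the distinction Jitse draws above.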

Revisit normalization and definition issue

Since the article itself says: "Applications of the sinc function are found in digital signal processing, communication theory, control theory, and optics." I can pretty much guarantee that, as the sinc() function is presently defined herein, every appearance of sinc will have a π in it. Since nearly every communications/signal_processing text defines the sinc() function as
sinc(x) = sin(πx)/(πx),
why don't we change the definition to the normalized form, make mention that the historical definition and some mathematics texts will have it without the π, and i am willing to find all citations of this article and fix the references to sinc() so that WP is consistent. Rbj 21:46, 30 April 2006 (UTC)

Every appearance of sinc will have a π in it. In your world. In my world (math and physics), I rarely see it defined with a π. We went through the same thing with the Fourier series stuff and the resolution was that if you want to understand the mathematics and prove theorems, learn the mathematicians' notation. If you want to use it in a practical sense, learn the notation that suits that field. The article should attempt to address both users: someone coming from a communications article should not have too much difficulty finding what they want, nor should someone coming from a mathematics or physics article. It's not easy. I'm sorry if I sound contentious, but it irks me when communications/signal_processing people start talking like they are the only real people in the known universe. Seriously, I realise the sinc-sub-pi notation for the communications people is a problem that needs to be fixed, but this kind of language isn't the right beginning. PAR 00:46, 1 May 2006 (UTC)
It is a generous offer, and I happen to also inhabit Rbj's world, but unfortunately I think PAR is correct. It's not a matter of contemporary vs. historical. It's just two different conventions, overloading of the "word" sinc. A textbook has the advantage of being able to choose one and ignore all others, and that is what we are accustomed to. But Wikipedia does not have that luxury. And anything we "resolve" today can be revisited ad infinitum (as we are doing right now). That is why Wikipedia will never be high quality. We could write the perfect article today, and it would erode over time. We could make some aspect of Wikipedia totally consistent over all articles, and the inconsistencies and errors would creep back in, like weeds. So all I am saying is that there is a limit to how much we should stress over this. But each person must decide that for himself. --Bob K 14:26, 2 May 2006 (UTC)
it's okay, i give up. it's just that i clicked What links here and looked at some of the results. Only Bessel function has an expression of sin(x)/x (without explicitly changing it to sin(πx)/(πx)). even the Gamma function (the other ostensibly pure math article) uses the normalized sinc, and all of the other uses were communications/signal_processing or sampling/interpolation related, and i know for sure that the normalized sinc is what is used in those contexts. so i do not see evidence here at Wikipedia to support PAR's point. Rbj 16:24, 2 May 2006 (UTC)
Nyquist–Shannon sampling theorem, for instance, has an explicit π in the argument of its sinc functions, so I believe that is the unnormalized variety. But I think the only reason it was chosen is to conform with this article. I.e., the normalized sinc would have a cleaner look. Perhaps that is what you mean(?). --Bob K 17:06, 2 May 2006 (UTC)
yes. that is what i mean. until the definition is changed to include π in it, we have to put the π in everywhere else. i, for the most part, refuse to use the present normalized sinc notation, or whatever, and i'll just put the π in the argument. but it seems to me silly to do so because we want to leave it out of the definition. it makes much more sense, at least from the electrical engineering and DSP POV, to do what the textbooks do and put the π in the definition. i know this sounds EE and DSP centric, but i would like to see how often the sinc is used in applications in other contexts, and how many of those times there is no π in the argument (given the present definition). Rbj 19:15, 2 May 2006 (UTC)
Me too. --Bob K 01:13, 3 May 2006 (UTC)
Come to think of it, the Wikipedia standard for dealing with different concepts or definitions associated with the same [overloaded] title is to disambiguate. Therefore Sinc function should actually be a disambiguation page, pointing to Sinc function, unnormalized and to Sinc function, normalized. Each article that references a sinc function would link directly to the appropriate definition. --Bob K 17:06, 2 May 2006 (UTC)
i don't like that idea. i'd rather keep it as it is and continue to put π in the argument of the sinc everywhere we use it. i want to reduce multiple definitions of the same thing. i want more unification of meaning of terms. Rbj 19:15, 2 May 2006 (UTC)
That is exactly what I have been doing, in the interest of global peace and harmony. But the world is actually a messy place. I am inclined to distinguish Wikipedia's role from that of a textbook. Wikipedia's job is to organize a messy room (so people can find what they want), but not throw anything of value away. A textbook, OTOH, achieves apparent unity by limiting its scope to just one corner of the organized room. So I think your goal for Wikipedia is too ambitious. To paraphrase PAR, unity within each discipline is more important than unity across the disciplines. By reaching for the latter, we ensure that Wikipedia's EE-oriented articles will have different-looking mathematics than all the EE textbooks. I was resigned to that fate until I realized that disambiguation is the organizational tool we need to avoid throwing away something of obvious value. --Bob K 01:13, 3 May 2006 (UTC)
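To make the two competing conventions in this thread concrete, here is a small side-by-side sketch (my own illustration). Note that numpy.sinc implements the normalized convention, matching the signal-processing textbooks:

```python
import numpy as np

def sinc_unnormalized(x):
    # sin(x)/x, guarding the removable singularity at x = 0
    x = np.asarray(x, dtype=float)
    return np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))

def sinc_normalized(x):
    return np.sinc(x)   # numpy computes sin(pi x)/(pi x)

x = 0.5
print(sinc_unnormalized(x))   # sin(0.5)/0.5  ~ 0.9589
print(sinc_normalized(x))     # 2/pi          ~ 0.6366
# The two conventions differ only by a scaling of the argument:
print(np.isclose(sinc_unnormalized(np.pi * x), sinc_normalized(x)))   # True
```

The last line is the whole disagreement in one expression: sincπ(x) is just sinc(πx), so either article can express the other's content by rescaling the argument.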

The "disambiguation" of the sinc function serves no useful purpose and should be reversed.

User:Omegatron put the merge notices on the articles and, if no one else does, i'll write the first dissent.

BobK, this was a bad idea from the beginning, you made only one suggestion of doing it in advance, you got absolutely no support for the idea, and you went ahead and split it into two anyway. it makes me regret my original suggestion that the default definition be changed from
sinc(x) = sin(x)/x
to
sinc(x) = sin(πx)/(πx).
there is not enough difference in the definitions to warrant splitting it into two. i agree that the subscripted notation is useless because we can just put the π in the argument instead. now we have the worst of both worlds with two quantitatively different definitions of sinc() floating around Wikipedia, and now we have to explain which definition we're using each time we make a reference to sinc function instead of just including the π factor. this disambiguation was unnecessary, it clutters WP, and it causes confusion. it should be reversed. r b-j 04:27, 26 May 2006 (UTC)

I'm sorry you now regret your suggestion. I thought it was a good start but just as impractical as your current suggestion, because unlike most textbooks, Wikipedia is a multi-discipline product, and its policy is to maintain a neutral POV. My bottom line argument is that "sinc(x)" is an ambiguous symbol in the world outside Wikipedia. For the people who come here to learn about it, that is the first thing they should learn. And disambiguation is the quickest, clearest, and most neutral way to do that. But if there is a merge, the two definitions should be given equal standing. Before I did the disambig, you had to read the fine print to discover the normalized definition. It was easy to miss, which is too bad for all those readers puzzling over the subsequent discrepancies with their EE textbooks. --Bob K 05:47, 26 May 2006 (UTC)
I think the disambiguation was a bad choice. We now have two pages which should have the same material except for the odd factor of π floating around. There are quite a lot of mathematical terms that are ambiguous (does the set of natural numbers contain 0 or not?); see for instance Wikipedia:WikiProject Mathematics/Conventions for one attempt to tackle these issues.
Thank you... I was unaware of Wikipedia:WikiProject Mathematics/Conventions. However, debating whether to represent concept A with notation B or notation C is not ambiguity. sin(x)/x and sin(πx)/(πx) are functions A and B, both being represented by the one notation C, sinc(x). That is ambiguity. Wikipedia has a novel and very effective procedure for dealing with it. And IMO, it does give A and B equal standing. --Bob K 22:02, 30 May 2006 (UTC)
If your problem is that one of the definitions was not given enough prominence in the old article, then why not change that? But in the end, it will not be possible to give both definitions exactly equal standing; one of them has to be mentioned first.
Whatever approach we choose, I think that the definition of sinc should be given each time it is used, precisely because it is ambiguous. We cannot expect the reader to click on a link to discover what definition is being used. -- Jitse Niesen (talk) 12:45, 26 May 2006 (UTC)
Yes, that is the established standard that has evolved, rather than someone having the sense to use different symbols. It is now too late for a different symbol to catch on. I have no objection to being explicit. Repeating the definition is an acceptable way to be explicit. And as of this moment, an easier way is to mention sinc function (unnormalized) or sinc function (normalized). --Bob K 22:02, 30 May 2006 (UTC)

My bottom line argument is that "sinc(x)" is an ambiguous symbol in the world outside Wikipedia. For the people who come here to learn about it, that is the first thing they should learn.

Exactly. One of the greatest things about Wikipedia is that it presents the same idea from all different perspectives. An engineer like me reads an article about, say, the continuous Fourier transform and learns that there are many different weightings and conventions used in different fields; the ones I'm used to are specific to my field. — Omegatron 13:26, 26 May 2006 (UTC)
(Please sign your comments.) Yes, it's very cool. And I did a lot of that very sort of work on continuous [time] Fourier transform, so thank you for your appreciation! I don't have as much trouble with as I do with , because nobody writes (or ) for a transform to radian frequency or (or ) for a transform to ordinary frequency. The problem has already been addressed to a large extent. The amplitude scaling differences between unitary and non-unitary transforms tend to be far less troublesome than the frequency scale, in my own experience. --Bob K 22:38, 30 May 2006 (UTC)
Cramming all the definitions into one article is not the only way to bring them to peoples' attention. For instance, in the "See Also" section, one could add a link to the disambiguation page. It's a powerful tool. There is nothing wrong with it. Let's use it. --Bob K 22:38, 30 May 2006 (UTC)

And disambiguation is the quickest, clearest, and most neutral way to do that.

No. Since they're the same thing, putting them in the same article and explaining the difference between each form is the best, most neutral way to do that. — Omegatron 13:26, 26 May 2006 (UTC)
They are not the same thing. That is the problem. --Bob K 22:02, 30 May 2006 (UTC)
How aren't they the same thing? — Omegatron 23:25, 30 May 2006 (UTC)
is an instance of . But it is never an instance of . Rather they are both instances of . If that is the "sinc" function you want to define, i.e. a generalized sinc(), then I would happily agree to merge them all into one place. You are entitled to a different opinion, so there is no need to belabor the point. --Bob K 04:11, 31 May 2006 (UTC)
in one important sense (a purely quantitative sense), they are not the same thing. and that is the problem, Bob. i don't want two quantitatively different definitions of sinc() floating around in the same encyclopedia. especially when there is no need for it. there is some need for different scaled definitions of the continuous Fourier transform - we need that normalized unitary definition common in communications textbooks so we can apply duality and Parseval's theorem and so on easily without having to worry about scaling factors. from a DSP and communications systems POV, it would be nice if the primary definition of sinc() was the normalized sinc. i see only one reference of it (in Bessel function) without the π. but i don't want to have a war with the pure mathematicians about it and it isn't so hard to always include the π in the argument so that our definitions are the same, both qualitatively (where they are no different) and quantitatively. r b-j 16:12, 31 May 2006 (UTC)
Yes, sin(ax)/(ax) would be the generalized sinc that sinc function is about. I doubt many people use it as such, though, so it would either be a very small note or implied by context.
For the record, this is how Mathworld does it.[2]
The two functions are pretty similar, no?
[images: Sinc function (unnormalized).svg, Sinc function (normalized).svg] — Omegatron 05:34, 31 May 2006 (UTC)
Thanks. Shall I plot them on one set of axes for you? How do you folks feel about the notation? I like it, but it is non-standard, and I don't think it is going to become standard in my lifetime ... besides the fact that it is not Wikipedia's charter to replace [bad] old standards with [better] new ones. --Bob K 12:26, 31 May 2006 (UTC)
Should we split sin(x) and sin(πx) into separate articles, too? Or sine and cosine?
Under what ambiguous name do sin(x) and sin(πx) both go? Ditto for sine and cosine? --Bob K 22:56, 31 May 2006 (UTC)

the notation is fine for the normalized sinc. there is no reason to bring in another definition so that someone reading has to wonder what the hell that new subscripted function is about. i s'pose there is a minute danger of a reader seeing "" and thinking it's a normalized sinc without the π factor so what is displayed would be thought to be . but i think that danger is small. r b-j 16:12, 31 May 2006 (UTC)
I don't see why we would need any special notation (maybe in other articles?), but there are several instances where we've used conventions and notations that are not widespread in order to prevent confusion (where confusion wouldn't exist anywhere except a non-field-specific encyclopedia). I see no problem with this, as long as it's clearly explained. — Omegatron 14:51, 31 May 2006 (UTC)

In response to "an easier way is to mention sinc function (unnormalized) or sinc function (normalized)": well, I don't think that's enough. It is not clear without following those links what normalization you're talking about, and a Wikipedia article has to stand on itself as much as possible. So, I think that you have to mention the definition anyway. -- Jitse Niesen (talk) 01:58, 31 May 2006 (UTC)

and a Wikipedia article has to stand on itself as much as possible
That would lead to a lot of redundancy/clutter, and I'm not just talking about sinc(). Is that a personal opinion or [bad] policy? It is a major strength of Wikipedia that distractions from the main point of an article can be quietly relegated to links, rather than embedded in the main flow. --Bob K 04:11, 31 May 2006 (UTC)
Of course, there is a balance between explaining everything (and having a very long off-topic article) and explaining nothing (and having a short unintelligible article). But explaining definitions that the reader is not supposed to know is standard practice in technical writing. In this case, I wonder how much clutter would it give. Instead of "where sinc denotes the unnormalized sinc function", we write "where sinc is defined by sinc(x) = sin(x) / x". Indeed, all articles that use the sinc function except triangular function do this.
standard practice in technical writing
This seems to be the heart of the matter. It's a fine standard for the constraints/technology under which it evolved. But evolution does not end. Change will come, but apparently not here, not now. --Bob K 12:52, 31 May 2006 (UTC)
I assume almost everybody would agree with this, which makes it policy. I don't know whether it's explicitly written down somewhere. Related issues are touched upon by Wikipedia:Guide to writing better articles, which says for instance "establish context so that a reader unfamiliar with the subject can get an idea about the article's meaning without having to check several links".
As I said before, N can denote either the set {0,1,2,3,…} or {1,2,3,4,…}, but we have only one article natural number; we don't have two articles natural number (with zero) and natural number (without zero). That seems to be the same situation.
I have no opinion on whether it is the same or not, because I do not use it. It has not been a problem I have had to cope with. But if it is problematic for its users, then I recommend they consider disambiguation as a solution. --Bob K 12:52, 31 May 2006 (UTC)
I asked on Wikipedia talk:WikiProject Mathematics for some more opinions. -- Jitse Niesen (talk) 05:22, 31 May 2006 (UTC)
  • I vote to merge the articles, giving both definitions in one place. They describe the same function, just with a different scaling of the x axis. One article even directs the reader to the other article for more details. Disambig is unnecessary and confusing. Gandalf61 08:14, 31 May 2006 (UTC)
  • Yes, do merge these into one article Sinc. What about this idea. First say there are two definitions in use, the "pure mathematician's" and the "engineer's", and give both explicitly. Then observe that they can be unified by introducing an implicit parameter z, as in sinc(x) = sin(zx)/zx, from which the originals can be retrieved by setting z to 1 or π in the Sinc entry of the Math > Parameter Options section of the Preferences menu on your mathotron. Then state (inasmuch as reasonable) the properties of sinc using z in their expression, like that the zero crossings are at the multiples of z, except at 0. Maybe it's too confusing, but my expectation is that the reader who cares about the properties of sinc has enough maths background to understand this. --LambiamTalk 13:48, 31 May 2006 (UTC)
  • My vote is already clear. r b-j 16:12, 31 May 2006 (UTC)
  • I agree the dab page should go. The two versions of sinc should be merged and discussed in one article. Lambiam's suggestion seems a workable approach that would also not appear to mirror Mathworld. Gimmetrow 16:23, 31 May 2006 (UTC)
  • Merge as explained above. — Omegatron 17:53, 31 May 2006 (UTC)
  • Merge Of course, mention both definitions and where each is typically used. --Macrakis 20:50, 31 May 2006 (UTC)
  • No. But I will contribute a plot of both functions in different colors on one set of axes. --Bob K 22:56, 31 May 2006 (UTC)
    • For what purpose? I can make one easily enough, if there's a good reason for it. — Omegatron 19:39, 1 June 2006 (UTC)
Thanks, but don't bother. The purpose, of course, is to graphically illustrate the differences. It should have been done a long time ago. Instead, we have separate plots, which actually suggests separate articles [and disambiguation]. --Bob K 20:02, 2 June 2006 (UTC)
That's because I created them while the articles were split. — Omegatron 00:02, 3 June 2006 (UTC)
And now they are apparently being merged. That is the answer to your question. --Bob K 05:43, 3 June 2006 (UTC)
  • Merge There should be one article describing each form, with some notes on where each is used. Any other article which uses the sinc function should explicitly state which definition is being used, according to the common usage for that particular field. This way anyone interested in the sinc function per se will be aware of the differences, but anyone reading about it in a particular context won't have to be continually translating to the commonly used definition for that particular field if that definition doesn't happen to be the Wikipedia "base or primary" definition. Requiring uniformity in this situation is counterproductive. PAR 01:11, 1 June 2006 (UTC)
  • Merge. Sorry folks, but, to be brutally honest to the point of offensiveness, splitting these articles apart is one of the goofiest ideas I've encountered at WP. linas 21:15, 1 June 2006 (UTC)

i am re-asserting the issue of the base or primary definition.

this should get the attention of User:PAR and other non-engineering mathematics scholars and practitioners. when we reverse the disambiguation (i think the consensus is pretty clear on that), we should emphasize, as the primary definition of the sinc function, the so-called "normalized" sinc() function:

sinc(x) = sin(πx)/(πx)
we can mention that the historical definition of the sinc function (literally "sine cardinal") leaves the π scaling out of the definition and that this so-called "unnormalized" sinc function, without the π in its definition, is the one that is equal to the zeroth order spherical Bessel function of the first kind. other than in that reference, every other reference of application of the sinc function in the literature or on the internet (the first few pages that Google returns), either uses the normalized sinc() definition (what i advocate changing this to, as the primary definition) or uses the unnormalized definition but sticks a π factor in the argument. that strongly suggests that the natural definition of the sinc() function is the normalized version. it is the normalized sinc() function that is the natural Fourier transform of the rectangular function when the normalized and unitary definition of the F.T. is used.
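The claim above, that the normalized sinc is what the unitary, ordinary-frequency Fourier transform of the rectangular function produces, is easy to check numerically. A minimal sketch, assuming NumPy (whose np.sinc happens to implement the normalized convention):

```python
import numpy as np

# Unitary, ordinary-frequency Fourier transform of the rectangular function
# rect(x) = 1 for |x| < 1/2:  F(f) = integral of rect(x) e^{-2 pi i f x} dx.
x = np.linspace(-0.5, 0.5, 20001)
dx = x[1] - x[0]

f = np.linspace(-4.0, 4.0, 81)
# the integrand is even in x, so the transform reduces to a cosine integral
F = np.array([np.sum(np.cos(2 * np.pi * fi * x)) * dx for fi in f])

# np.sinc implements the normalized convention sin(pi f)/(pi f),
# and it matches the transform of rect with no stray factors of pi:
assert np.allclose(F, np.sinc(f), atol=1e-3)
```

The same computation with the angular-frequency convention would instead give sin(ω/2)/(ω/2), which is where the unnormalized form tends to appear.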

now, User:PAR complained about that change of primary definition, but now i challenge those that oppose this change to show that there would be any quantity of articles that would have to put a 1/π factor in the argument of sinc() if the definition was changed to include the π. i assert that the facts show the other way around - that, other than the Bessel function article, all other articles will have to include π (or 1/2 for angular frequency) in the argument of sinc() if it is used in a real application.

so, i am formally proposing that the primary definition of the sinc function is changed to

sinc(x) = sin(πx)/(πx)
and that the article mentions, as an alternative and historical definition of the sinc function,

sinc(x) = sin(x)/x
and that this might be more natural to use with the zeroth order spherical Bessel function of the first kind. but, i believe, if we do make this change, that in the Bessel function article, we be explicit and simply refer to the sinc function article but explicitly use sin(x)/x without calling it "sinc(x)" so that there is no possible confusion of scaling. r b-j 21:15, 31 May 2006 (UTC)

  • Yes. --Bob K 22:46, 31 May 2006 (UTC)
  • No See the explanation after my vote above. PAR 01:11, 1 June 2006 (UTC)
okay, but as it is, there are multiple places in WP where "sinc()" is explicitly referred to in equations and in all cases there is a π factor in it and there is no place, outside of the sinc function article, where sinc() is used without the π factor. it's okay with me that neither definition in the sinc function article is "primary" or "alternate" and both are co-equal, but since it appears that only Bessel function has sin(x)/x without the π, my proposal is that while either use can refer to the article: Sinc function, the only use of the notation "sinc()" in equations outside of the article Sinc function, should only refer to one common and quantitatively consistent definition and that should be the so-called "normalized" sinc function. is that acceptable to you, PAR? to be explicit, outside of the Sinc function article, whenever you see "sinc(x)" in an equation, there should be no doubt that the meaning is sin(πx)/(πx). if one is creating an equation that has "sin(x)/x", that editor should leave it in that form but should identify it as the sinc function with link to the article. is this a compromise you can live with PAR?
I am not convinced, but it is a workable solution and in this case, I can live with the consensus. PAR 15:19, 1 June 2006 (UTC)
okay, to be clear this is what i will do after waiting a little longer (for more possible objections/comments).
  • i'm gonna try to legitimately rename Sinc function (unnormalized) to Sinc function
  • merge the stuff from Sinc function (normalized)
  • go to all linked references of Sinc function (normalized) and Sinc function (unnormalized) and change the link to Sinc function.
    • A couple of redirects could be helpful. --Bob K 23:33, 1 June 2006 (UTC)
  • in all cases in those other articles where there is sinc(πx) in math equations, i will change it to sinc(x).
    • I just took a shot at it...   --Bob K 11:51, 6 June 2006 (UTC)
  • There are, outside of the Sinc function articles, no present uses of just sinc(x).
    • You mean in the unnormalized sense (right?). --Bob K 18:48, 7 June 2006 (UTC)
    • Exceptions to that "rule":
triangle function (subsequently updated by me)
continuous Fourier transform (subsequently updated by me)
Diffraction of light
  • in all cases where sin(x)/x appears (presently it's just the Bessel function article), we will leave it as sin(x)/x without equating it to sinc(x) (which is the inconsistency in usage i want to avoid at all costs) or to sinc(x/π) (which is consistent with other uses of the sinc() but maybe a little ugly) but, in the text, we will link to Sinc function.
if this is not acceptable, now is the time to complain. r b-j 16:07, 1 June 2006 (UTC)

OK, so who actually defines the sinc with a pi in it?

Without pi

  • mathworld
  • planet math
  • Cambridge University (page 3)
  • Hecht, "Optics" [3]
  • Modern Nmr Techniques and Their Application in Chemistry, by K Hallenga, Millicent Popov, A I Popov, 1990.
  • Linear Networks and Systems: Algorithms and Computer-Aided Implementations, by Wai-Fah Chen, 1990.
  • A Handbook of Fourier Theorems, by D. C. Champeney, 1989.
  • Signal Processing Handbook, by Chen Chen, 1988

With pi

Internet citations:

Textbook and published literature:

i believe i have found all the places with "sinc" in equations...

... and made sure they consistently meant "normalized" sinc(). this was the basic inconsistency i wanted fixed from the beginning. i might suggest that where sinc() is in the nascent Delta functions, we change all of the Fourier integrals to the unitary normalized Fourier integral (instead of using angular frequency) and the sinc() won't need a 2π in the denominator. also we should completely clean the links to go straight to the Sinc function page. and then ask the admins for some file deletes. r b-j 06:11, 8 June 2006 (UTC)

You missed at least one. I will start a list here:
changed in both cases. that article did not (until now) have a linked reference to Sinc function or derivatives so i didn't know it was there. even in that article, changing to normalized sinc simplified the equations. r b-j 16:38, 8 June 2006 (UTC)
I reverted this change. The physics literature (such as Hecht's Optics) uses the mathematical definition of sinc without pi in it. It's nice to have consistency in wikipedia, but the articles should match the relevant literature even if that makes things inconsistent. Pfalstad 22:41, 8 June 2006 (UTC)
I concur with the reversion. Consistency is nice, but consistency with the optics literature for an optics article is more important than consistency with the rest of the wikipedia. We don't get to decide that there's only one important definition of sinc, if standard practice has two. One can always add a note to say it's the unnormalized definition... Dicklyon 03:28, 9 June 2006 (UTC)

move request to admins

Rbj, you request that an admin do a move. I'm an admin, and I'm happy to do a move with deletion if you like. But is it ready to go? Is there anything left at sinc function or sinc function (normalized) which needs to be merged back? For example, the disambig page has a graph which is in neither particular page. Is there any desire to keep that graph in the article? What about the stuff about orthonormal bases that's in the normalized article but not the unnormalized article? -lethe talk + 06:23, 3 June 2006 (UTC)

I've done the move. -lethe talk + 06:47, 3 June 2006 (UTC)
i made a stab at merging the stuff. more wordsmithing and editing will have to be done. thanks for doing the move. i would like to eventually kill the sinc function (normalized) and sinc function (unnormalized) pages, but i have to find the places where they are referenced and fix that before requesting AfD. r b-j 06:54, 3 June 2006 (UTC)
Redirects are cheap, why bother? -lethe talk + 06:56, 3 June 2006 (UTC)
'cause i'm anal-retentive and i hate dangling loose ends. r b-j 06:59, 3 June 2006 (UTC)
Well, you can find all the articles which go through the redirects by Special:Whatlinkshere/Sinc function. After you bypass all the redirects, I'll be happy to delete them for you. By the way, I guess you've merged whatever content there was from sinc function (normalized), but not the picture from the disambig page. If you want it, it's in the article history, which I've undeleted after the merge, in case you think it has a place in the article (I personally do not though). -lethe talk + 07:05, 3 June 2006 (UTC)

Another thing, I notice that you plan to standardize all of wikipedia's uses of the sinc function to be the same. I'm not sure whether that's a good idea, but it might be. I would suggest that you put your choice of convention at Wikipedia:WikiProject Mathematics/Conventions, so that future editors might know what the preferred meaning of sinc(x) is at Wikipedia. I will opine that I don't think it should be disallowed to use the other convention, so long as the author of any particular article is clear on what convention he is using. On the other hand, consistency is always good. -lethe talk + 07:22, 3 June 2006 (UTC)

Two different functions?

[image: Omegatron's attempt at showing both on the same graph]

I hope everyone will come to realize at some point during this debate that changing your units doesn't give you two different functions, does not give you two different graphs, does not require two different Wikipedia articles, and does not require disambiguation. There is only one function here. -lethe talk + 14:58, 3 June 2006 (UTC)

I agree that they belong in the same article, but sin(x) and sin(2x) are, indeed, different functions. Same here. If we're going to have two different definitions of the function, we need to show both graphs. — Omegatron 15:03, 3 June 2006 (UTC)
If you insist, let's at least show them both in one graph. -- Jitse Niesen (talk) 15:43, 3 June 2006 (UTC)
I don't think you can see the shape as clearly with them overlapping. Maybe show a graph of just the normalized alone, and then a graph of both on the same plot (which shows the unnormalized zero crossings more clearly)?
Here's my attempt at showing both on the same graph. — Omegatron 17:15, 3 June 2006 (UTC)
Excellent ! --Bob K 11:33, 4 June 2006 (UTC)
There is already a graph showing them both, which I've removed. But why do you need to see them both? They are, after all, the same graph! Just because you change the symbol that looks like a 1 on the x-axis into a symbol that looks like a π does not mean you've got a new function. Rather, it just means you changed your scale. If you need to see a new graph for every scale, then you should be reading graph of a function, rather than this article. -lethe talk + 20:36, 3 June 2006 (UTC)
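The point above, that the two curves differ only by a rescaling of the x-axis, can be checked directly; a minimal sketch, assuming NumPy:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
x = x[x != 0]                        # dodge the removable singularity at 0

unnormalized = np.sin(x) / x

# np.sinc implements sin(pi t)/(pi t); substituting t = x/pi is exactly the
# rescaling of the x-axis described above, and recovers the other curve:
assert np.allclose(np.sinc(x / np.pi), unnormalized)
```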
they are the same function qualitatively, but different quantitatively and that difference created some contention regarding what is the real sinc function. there is not enough difference (a scaling factor of π) to justify two articles for essentially the same thing, but they're not precisely the same thing and those that prefer the historical definition over the one that is practical for communications and signal processing will not like it (and have already expressed as much) if this historical definition isn't presented in some way co-equally. although i don't want to see occurrences of sinc(x) floating around in equations of other articles in WP that do not clearly mean the same thing and, i think we got some concession and consensus that, at least in other articles where sinc(x) is used, it means sin(πx)/(πx). but if we only graph the normalized sinc function, the proponents of the historical definition will have a legitimate reason to say that the two definitions are not presented as co-equal.
i want to say that my first stab at a merge is just a first stab, i expect some significant reorganizing and/or wordsmithing (from others, i'm sorta outa ideas) to this. —Preceding unsigned comment added by rbj (talkcontribs)
Yeah, I'm not sure if it's better to show two graphs or show them both on the same graph. It's kind of cluttered, since each wants to be on its own number line (integers vs multiples of pi), and I tried to show that with the grid lines. I guess it's ok. — Omegatron 02:22, 4 June 2006 (UTC)
you have to tell us how to create consistent looking graphs like that, O. i got MATLAB and an old version of Powerpoint on the Mac. i have a student version of MATLAB and dunno what on a PC (i hate PCs). r b-j 02:43, 4 June 2006 (UTC)
It's just gnuplot and Inkscape. Instructions and source code are on the image description page. I'm always working towards the Ideal Graph; more info is on Wikipedia:How to create graphs for Wikipedia articles. You can use GNU Octave, a MATLAB clone, and output directly to gnuplot, too, as described on that page. Any suggestions on improving the graphs are appreciated, and on improving the coverage of that page. — Omegatron 04:21, 4 June 2006 (UTC)
i guess i'm so Mac-centric and, despite them saying it's good on virtually every OS, there is no gnuplot for the Mac that i know of. BTW, there might be a mathematical mistake on your example Image:Exponentialchirp.png. when you want the instantaneous frequency of a sinusoid, you take its argument and differentiate it (w.r.t. t) rather than divide by t. so the argument of your exponential chirp should be the integral of 3^t rather than 3^t·t. r b-j 05:19, 4 June 2006 (UTC)
Yeah, I know. Something about instantaneous frequency not being the same as the f in sin(2πft), right? But in a chirp are you varying the instantaneous frequency or the f variable? The examples in the article are my fault, too, but were more easily changed than the graph. I've been meaning to fix it but haven't invested the thought/time required. — Omegatron 14:32, 4 June 2006 (UTC)

if you want ordinary (non-angular) frequency, you always calculate the derivative (w.r.t. time) of the argument of sin(θ(t)), which is θ'(t) and divide by 2π. that's the consistent and accurate rule. it is not the same as dividing θ(t) by 2πt except in the case of θ(t) = 2πft. r b-j 18:55, 4 June 2006 (UTC)

MacOSX is a unix, so of course it can run gnuplot. You can install it using fink:
$ fink install gnuplot
and run it under Apple's X11 implementation. -lethe talk + 05:36, 4 June 2006 (UTC)
where do i get X11? i got Inkscape a while back but it wouldn't run (didn't tell me why) and after investigation i think i figgered out that i needed X11, but i couldn't find where to get it (too many hits on Google). is it free? where can i get it? personally, although i have one OSX mac, i feel it's really more bother than my older OS9 (that has lotsa old, but working software on it that would cost $4digits to replace). i need someone to tell me where i can get this stuff because it just doesn't seem clear. i am OS10.2.8, is that too old? r b-j 18:55, 4 June 2006 (UTC)
Apple's X11 comes with the OS. It's on the install disk, though it does not install by default. However, during the time of Jaguar (10.2), it was still under development, and only available as a development download. I didn't find that beta release still available for download on the Apple X11 website, but I didn't look too hard, maybe it's still there somewhere. If you can't find a version of Apple's implementation to run on your 4 year old OS, you should download XDarwin, the predecessor, which I think has continued development. And get the OroborosX window manager to give it that mac "look and feel". But can I just add that MacOSX is a huge shift in direction for Apple, and it went through a very rapid development cycle because it took a lot of time to bring it up to speed. By sticking with Jaguar, you're stuck to the beginning of that shift, and if you're not willing or able to do the shift with Apple, then yes, you probably should have stuck with OS9. MacOSX is gonna take time to get right. -lethe talk + 20:16, 4 June 2006 (UTC)
I found a download page. This is Apple's X11 v 1.0, while Tiger comes with v 1.1, so it's a little old, but newer than the beta download. The page lists Panther as a requirement, but maybe it'll run on your Jaguar box anyway. Keep your fingers crossed. -lethe talk + 20:24, 4 June 2006 (UTC)
doesn't work (but i forgot to cross my fingers - i hope that didn't blow it). .pkg installer senses immediately my OS ain't new enough and declares i need 10.3 or later. r b-j 21:10, 4 June 2006 (UTC)
And does it seem like XDarwin wants $60 for their software? Well, you could try fink. I think fink will compile X11 for your system. But it may be more trouble than it's worth. The best idea would be to upgrade to Tiger. -lethe talk + 21:44, 5 June 2006 (UTC)

There are many places where there is some tension between different conventions. We do not have to give each convention equal coverage. It suffices to simply mention both conventions. Sticking to them for too long is more confusing than it is helpful. Look to error function. They managed to get along with 3 different graphs for different normalization choices. It seems to me that the double graph is far too cluttered to be useful. For example, can you read the roots off of that graph? I offer you a completely neutral graph. With no axes, one cannot tell what normalization it has. -lethe talk + 02:32, 4 June 2006 (UTC)

[image: Sinc function (neutral).jpg]
no one disagrees as to the vertical scaling of the sinc function. and not having some meaning to where it crosses zero in the graph is missing something. i think having both on a single graph is the best solution.
well, i just took a look at error function and that mess illustrates precisely what i want to avoid here. there are two mathematically inconsistent uses of erf(). i think the definition where erf(0) = 1/2 is best, but i am not willing yet to join that fight. but they should have the definition that makes most compact sense for use in probability theory and then mention the historical or alternative definition where erf(0) = 0 for completeness. there is some similarity to the issue here in sinc land.
the article (sinc) needs unification of properties. other than the two properties resulting from "normalization" (which are nice and that's why it's a better definition, IMHO) that the normalized sinc lands on zero at non-zero integers and has an integral of 1, all other qualitative properties apply to both sinc() functions. we need to make the article say it as such and the properties of the sinc() need to be all combined with the exception of mentioning those two advantageous properties for the normalized sinc and that the unnormalized sinc is the historical definition, has the present property 1. and is the zeroth order spherical Bessel function of the first kind (did i say that right?). i'll think about it, but am happy if others stab at this. maybe one of us will connect, kill it, and get it over with. r b-j 02:43, 4 June 2006 (UTC)
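The two normalization properties mentioned above, zeros at the non-zero integers and unit integral, can be verified numerically; a minimal sketch, assuming NumPy (whose np.sinc is the normalized convention):

```python
import numpy as np

# zeros at the non-zero integers (exact up to floating-point roundoff):
n = np.arange(1, 20)
assert np.all(np.abs(np.sinc(n)) < 1e-12)
assert np.sinc(0) == 1.0             # the removable singularity is filled in

# unit area: the tails only decay like 1/x, so integrate over a wide
# window and accept a loose tolerance.
x = np.linspace(-200.0, 200.0, 400001)   # dx = 0.001
dx = x[1] - x[0]
area = np.sum(np.sinc(x)) * dx
assert abs(area - 1.0) < 0.01
```

The unnormalized sin(x)/x has neither property: its zeros fall at the multiples of π, and its integral is π rather than 1.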

i have AfD'd the other sinc function pages ...

... and it looks like they acted pretty quickly, but there is a left over turd of some of the old talk pages. i don't get how this AfD works completely, but i tried to follow the directions. r b-j 17:21, 8 June 2006 (UTC)

Why? They should be kept as redirects. — Omegatron 17:48, 8 June 2006 (UTC)
Yes. And furthermore I see no point in changing a link such as "sinc function (normalized) | sinc function" to just "sinc function". The casual reader sees the same thing in either case. So you are needlessly destroying a bit of information that might actually be useful to somebody. --Bob K 23:02, 8 June 2006 (UTC)
But it's not really information, is it? It's misinformation. At best, it's simply evidence of the fact that at some point, a Wikipedian made a bad decision, which is not something our readers need to know. -lethe talk + 01:01, 9 June 2006 (UTC)
Wrong on both counts:
  1. Knowing that disambiguation is considered a "bad idea" might prevent a recurrence, not that I would mind.
  2. And a link such as "sinc function (normalized) | sinc function" assures me that it's the normalized definition that applies. That is information. The article itself does nothing of the kind; i.e. it is ambiguous, just the way you like it.
--Bob K 02:05, 9 June 2006 (UTC)
the sinc function article itself is a little ambiguous due to "political" reasons (the non-engineers who would prefer the unnormalized sinc) with no more than two qualitatively identical but quantitatively different definitions. my aim was so that, in equations throughout WP, every occurrence of sinc(x) meant the same thing quantitatively. and i think we successfully argued that it should be the normalized sinc definition. what we do is say "normalized sinc function" if need be (i don't think the word "normalized" would be necessary in all cases, but if you want to add the word where it is missing, fine with me) and in these cases we can use sinc(x) in the equations.
in the other cases (not very many) we say "unnormalized sinc function" in the article text and explicitly use sin(x)/x without any use of sinc() in the equations. in the latter case, we always include the word "unnormalized" just before the link: sinc function. this way, the math notation is always consistent and accurate (sinc(x) always means sin(πx)/(πx)), and the concept of the sin(x)/x species of function (whichever scaling) is always associated with the sinc function article. this is the best we can do to avoid confusion and convey information and do this right. r b-j 03:03, 9 June 2006 (UTC)

Since there is an ambiguity in normalization, every usage anywhere in the encyclopedia should be explicit about which normalization it is using. If they do that, then hiding information inside a redirect or piped link is not necessary (I also prefer this solution to rbj's policy of enforcing that everyone use the same form). Furthermore, doing so is against the MoS. It's a sensible rule; if we start allowing information to be hidden in our links, then we leave our readers high and dry when we print this encyclopedia out on paper. You can't see that hidden information in the print version. Anyway, I've deleted the redirects. -lethe talk + 03:27, 9 June 2006 (UTC)

Paper? Now who's kidding? And I just read Manual_of_Style#Wikilinking, and I think you might need a refresher. But don't get me wrong. I am happier with the current outcome than with disambiguation. I just didn't think it was possible. Disambig was my incremental solution to Wikipedia's narrow-minded view and usage of sinc. Rbj's persistence made it happen, but I like to think of my own role as catalytic, at least. Apparently I provided the common enemy that made unity seem more tolerable than before. And for the record, I did not say that the extra information in the links is "necessary". I only said that its destruction is unnecessary. Rbj described it as "anal", and actually that makes more sense than anything else. And I respect his honesty, if that's what it was. We're all just trying to have a little fun here, so Rbj, if that's what floats your boat, please be my guest. --Bob K 05:15, 9 June 2006 (UTC)
A paper wikipedia may seem like a distant fantasy, but it is a goal, and we should not encourage practices which are detrimental to that goal. So no, I'm not kidding. If you can't find the rule about including hidden information in your links, check WP:PIPE (easter eggs). Given your rather misguided behavior in this situation, I find it rather silly that now you are advising me that I need a refresher in WP policy. -lethe talk + 05:35, 9 June 2006 (UTC)
Wikipedia:What_Wikipedia_is_not#Wikipedia_is_not_a_paper_encyclopedia --Bob K 16:32, 19 August 2006 (UTC)
The example of a no-no I found at WP:PIPE (not MoS) is a link whose appearance is "exceptions" and then takes the reader to "Thomas Bowdler". But Rbj is worried about this link: "sinc function (normalized) | sinc function", whose appearance is "sinc function" and took the reader to "sinc function", via re-direction. No harm, no foul. --Bob K 12:57, 9 June 2006 (UTC)
So did you want to hide information inside the redirect? Or did you merely think it was harmless to do so? -lethe talk + 13:10, 9 June 2006 (UTC)
Actually I hid nothing. After I created the sinc function (normalized) article, I changed some [[sinc function]] links to normalized [[sinc function (normalized) | sinc function]], because I decided that normalized sinc function looks a little better than sinc function (normalized). I considered renaming the article, but this solution seemed less disruptive. As you can see, both forms contain the same information. It was just a cosmetic rearrangement. --Bob K 13:52, 9 June 2006 (UTC)
A little while ago, you were arguing against deleting the redirects because they contain information. -lethe talk + 13:57, 9 June 2006 (UTC)
It's redundant information. Is there a "rule" against that too? And speaking of rules, here is a refresher straight from the MoS: "Clear, informative, and unbiased writing is always more important than presentation and formatting." Those were my motives for everything that I did, whether you happen to agree with it or not. I took a very biased sinc function article and tried to clarify the situation and de-bias it. Unlike Rbj who thinks that the unnormalized sinc should be outlawed, I happen to agree with you that "every usage anywhere in the encyclopedia should be explicit about which normalization it is using". When I changed ambiguous [[sinc function]] links to normalized [[sinc function (normalized) | sinc function]], that is exactly what I was doing. --Bob K 16:24, 9 June 2006 (UTC)

yea! PAR! that looks really nice![edit]

i'm pretty happy with the article the way it is and will just accept that the Optics guys will use sinc(x) differently than i would prefer. i am now trying to make two new articles for Zero-order hold and First-order hold and to get rid of a poor rendition of the two in another article i'd like to see deleted. i'm in a sorta "weed the garden" mode now. r b-j 02:18, 12 June 2006 (UTC)


Anybody have any idea how to find the Fourier Transform of sinc²(πax) using the Convolution Theorem? I can't find any refs that actually show examples of how to use the Convolution Theorem in FTs.

Any thoughts?--Zereshk 21:53, 15 September 2006 (UTC)

The transform of sinc times sinc is rect convolved with rect, which is the triangle function. In the frequency domain, the triangle is scaled to go to zero at twice the frequency at which the sinc's spectrum goes to zero, which you can think of as a frequency doubling effect. If you want equations, let us know. Dicklyon 23:57, 15 September 2006 (UTC)

Assuming you mean the unnormalized sinc function it's row 12  with    and     (not )    --Bob K 05:02, 16 September 2006 (UTC)
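For anyone wanting to see Dicklyon's "rect convolved with rect is the triangle" statement concretely, here is a small numerical sketch (my own illustration, using NumPy; the unit rect on [−1/2, 1/2] and the grid spacing are my choices, not part of the discussion above):

```python
import numpy as np

# Sketch of "rect convolved with rect is the triangle function".
# Assumed setup: unit-height rect supported on [-1/2, 1/2], fine grid.
dx = 0.001
x = np.arange(-2, 2, dx)
rect = (np.abs(x) < 0.5).astype(float)

tri_numeric = np.convolve(rect, rect, mode="same") * dx  # approximate the convolution integral
tri_exact = np.maximum(1 - np.abs(x), 0)                 # triangle: peak 1 at 0, zero for |x| >= 1

assert np.max(np.abs(tri_numeric - tri_exact)) < 0.01
```

Note that the triangle's support is twice as wide as the rect's, which is the frequency-doubling effect Dicklyon mentions for the spectrum of sinc².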

Very helpful indeed. Thank You both. I now understand it a bit more clearly.--Zereshk 01:31, 21 September 2006 (UTC)

2D Sinc?[edit]

I reverted the addition of the nice 3D picture of the 2D sinc as being off topic and not all that useful. I cannot think of any use of the 2D sinc; can anyone? Dicklyon 04:56, 18 November 2006 (UTC)

I assume that it applies to image filtering and filter design analogous to signal processing filters. --Bob K 14:46, 18 November 2006 (UTC)
Sounds like a plausible guess, but it is not so. The impulse response of a circularly symmetric sharp lowpass filter is an Airy function, not a 2D sinc. And the impulse response of a square sharp lowpass filter is a separable product of sincs, not a radial sinc. Dicklyon 16:46, 18 November 2006 (UTC)

Integral proof[edit]

Can someone prove that ∫_{−∞}^{∞} sin(x)/x dx = π? Thanks in advance, --Abdull 19:31, 4 December 2006 (UTC)

It's a trivial consequence of the Fourier transform relationship to the rect function, which is easier to show in the other direction, as done here: Nyquist-Shannon#Mathematical_basis_for_the_theorem. Dicklyon 20:23, 4 December 2006 (UTC)
Ah, thank you for the hint! Once I understood what you meant it really was trivial. Bye, --Abdull 20:12, 7 December 2006 (UTC)
why put this proof that everyone agrees is trivial into the article? r b-j 04:19, 8 December 2006 (UTC)
I agree. A few words to say it follows from the transform relationship at f=0 would be more than adequate. Dicklyon 05:07, 8 December 2006 (UTC)
I did it because it was not trivial for me at first (otherwise I wouldn't have asked). Here is my edit to the article that was removed, in case someone has similar problems seeing it right away:
--Abdull 21:19, 20 December 2006 (UTC)
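The formula in the question above did not survive in this copy; presumably it is the classic ∫_{−∞}^{∞} sin(x)/x dx = π, equivalently ∫_{−∞}^{∞} sinc(x) dx = 1 in the normalized convention. A numerical sanity check (my sketch; NumPy's np.sinc is the normalized sin(πx)/(πx), and the window and step size are arbitrary choices):

```python
import numpy as np

# Check that the integral of the normalized sinc over the real line is 1
# (equivalently: its Fourier transform evaluated at f = 0).
# np.sinc(x) = sin(pi*x)/(pi*x), the normalized convention.
dx = 0.01
x = np.arange(-5000, 5000, dx)
integral = np.sum(np.sinc(x)) * dx

assert abs(integral - 1.0) < 1e-3
```

This is exactly the "transform relationship at f = 0" point made above: the integral of a function equals its Fourier transform evaluated at zero frequency.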


Is the table really necessary? It doesn't really provide anything useful to the reader that they couldn't work out with a calculator, or by looking at the graph. It also breaks up the flow of the article. Oli Filth 08:50, 5 March 2007 (UTC)

I agree. And by the way, the column labelled should be labelled or
--Bob K 11:26, 5 March 2007 (UTC)
I've bitten the bullet and removed the table. Oli Filth 22:27, 5 March 2007 (UTC)


When is sinc the solution of an equation? —Preceding unsigned comment added by (talk) 19:43, 26 July 2008 (UTC)

--Bob K (talk) 20:18, 26 July 2008 (UTC)

Questions about definition[edit]

I am confused as to the correct definition of the sinc function. Assuming that the inclusion of pi or not is just a matter of convention (and doesn't matter), I want to know which way of writing it is rigorously correct. MathWorld defines it as a piecewise function,

sinc(x) = 1 for x = 0,   sinc(x) = sin(x)/x otherwise,

but the current article defines it as

sinc(x) = sin(x)/x.
First of all, since the article only cites the MathWorld page, which doesn't have the second form, I don't see how it should even appear in the article, unless another source is cited. Also, there is only a = and not a ≡.

Could someone please explain to me how the second form is correct, and if it is, what, if any, assumptions must be made. I.e., does the first form contain more mathematical information than the second form? Would a very technical proof ever require only one form to be used? jay (talk) 06:18, 13 September 2008 (UTC)

It appears to me that those questions are all addressed in the introduction, except the point about = and ≡.  FWIW, I was taught to use ≡ for identities (whatever that is) and    for definitions.  But someone also introduced    as a potentially more self-evident substitute. There is not universal agreement, which might explain why many editors don't even bother to pick one.
The inclusion or exclusion of π does not matter at the origin, but it does matter everywhere else of course.
--Bob K (talk) 09:13, 13 September 2008 (UTC)
I also disagree with the oversimplified (and currently incorrect) definition, cf. the 1st paragraph on this page. [I don't bother about "=", "\equiv", ":=", etc., though.] If one can be at the same time mathematically correct and explicit enough to avoid confusion and polemics and doubts and questions, at such a moderate cost, why not do it? — MFH:Talk 12:06, 16 October 2008 (UTC)

Sa ??[edit]

I never saw Sa for sinc before. In what context is it used? What does it stand for? If it is not used for the "math" version (sin x)/x, then maybe it could be removed at least there. I think that, relative to its popularity, it occupies far too prominent a position on the page (at a first glance one would think it is the main usage). I'd be in favour of removing it from the big displayed equation; anyway, it's written on the 1st line that it's "sometimes [denoted by] Sa(x)". — MFH:Talk 11:46, 16 October 2008 (UTC)

graph incorrect[edit]

It appears that the example graph is incorrect. The normalized (with pi) is shown crossing zero at integer (non-zero) points and the non-normalized appears to cross at multiples of pi. —Preceding unsigned comment added by (talk) 22:12, 15 October 2009 (UTC)

Yes, sin(πx)/πx has zeroes at nonzero integer points. Why do you think it is incorrect? — Emil J. 10:22, 16 October 2009 (UTC)

Computing sinc(0)[edit]

Using the l'Hôpital rule to compute the limit

lim_{x→0} sin(x)/x = 1

is completely pointless and brain-damaged. Aside from the fact that l'Hôpital's rule is a rather nontrivial theorem which should not be used unless really needed, consider what it takes to compute the limit using l'Hôpital:

  1. Compute sin(0) = 0 and note that the denominator also vanishes at 0, so the limit has the indeterminate form 0/0.
  2. Compute the derivatives sin′(x) = cos(x) and (x)′ = 1.
  3. Apply l'Hôpital's rule and conclude that the result is cos(0)/1 = 1.

On the other hand, if we simply use the definition of the derivative,

sin′(0) = lim_{x→0} (sin(x) − sin(0))/(x − 0) = lim_{x→0} sin(x)/x,

we directly observe that the limit we want to compute is sin′(0) (using sin(0) = 0), thus we only need steps 1 and 2 above. In order to use the l'Hôpital rule, one needs to do the same work as when using the definition directly, and only then apply the l'Hôpital rule as a redundant extra step. How anyone can consider the direct computation "long-winded" and a "belabored application of l'Hopital's rule" as compared to l'Hôpital being "simple" escapes me. — Emil J. 16:13, 16 October 2009 (UTC)
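EmilJ's difference-quotient argument can be checked numerically (my sketch, plain Python): since sin(0) = 0, the difference quotient (sin(x) − sin(0))/(x − 0) is literally sin(x)/x, and it approaches sin′(0) = cos(0) = 1.

```python
import math

# The difference quotient defining sin'(0) is exactly sin(x)/x,
# and it approaches cos(0) = 1 as x -> 0; |sin(x)/x - 1| <= x^2/6 < x^2.
for h in [1e-1, 1e-3, 1e-5]:
    dq = (math.sin(h) - math.sin(0)) / (h - 0)
    assert abs(dq - math.cos(0)) < h * h
```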

I don't understand why the right-hand side of that last equation is what we're looking for; why the subtraction of the values at 0, unless this is just an expansion and regrouping of l'Hopital's rule? And why is it simpler to have to "recognize" such an expression as the derivative of a sine and then evaluate that derivative, rather than simply apply a forward simple rule? This seems like the situation that l'Hopital's rule was made for, and it's very simple to apply, and numerous sources show that application. So I'm still a bit baffled by your objection and your change. Dicklyon (talk) 16:19, 16 October 2009 (UTC)
Do you know what is a derivative? Can you write down your definition of sin'(0) here so that we know what you are talking about, if you can't recognize the usual one? — Emil J. 16:24, 16 October 2009 (UTC)
Yes, of course, EmilJ I recognize that limit expression as the definition of the derivative of the sine function at 0. But how did you get to that limit expression? What prompted you to subtract sin(0) from the numerator and 0 from the denominator, such that it would be in the form recognizable as the thing whose limit is the derivative of the sine? It seems back-assward to say that just because the expression exists it provides a simple way to solve the problem. The problem, you recall, was to find the limit of the sin(x) over x at 0; how does one go from that problem to your expression that led to the solution? l'Hopital provided a rule that works for exactly such situations and is trivial to apply. But how did you find your way? Dicklyon (talk) 01:13, 17 October 2009 (UTC)
Support EmilJ. Why is it that one needs to cite an applied engineering book for this, sorry for the repetition, braindead application of L'Hospital's rule inside a purely mathematical topic? L'Hospital is useful for things like or .--LutzL (talk) 16:39, 16 October 2009 (UTC)
Refer to List of trigonometric identities#Calculus or Proofs of trigonometric identities#Sine and angle ratio identity, instead of trying to repeat a proof here? — Miym (talk) 16:48, 16 October 2009 (UTC)
Apart from the fact that invoking l'Hôpital's rule here is a circular argument (no textbook would do it that way), I don't see why it's necessary to attempt a proof here at all. The sentence just needs to say that it is defined as 1 at 0. Shreevatsa (talk) 16:53, 16 October 2009 (UTC)
Actually, many textbooks do it exactly that way, using l'Hopital's rule. It's sort of a canonical example of where l'Hopital's rule is useful. Dicklyon (talk) 01:15, 17 October 2009 (UTC)
I agree with (almost) everybody else: L'Hôpital makes no sense here. Apparently some people don't know how a derivative is defined but still have an understanding of what it means, how it is used, and of some of the rules that hold for it. That's fine. I guess for some engineers that's perfectly sufficient. But for a reference source it would be eccentric to write specifically for such readers who consider l'Hôpital more fundamental, or easier, than the actual definition. It would be misleading for readers who are here to get a full understanding and don't have it yet. And it's not as if we only have a choice between one way of proving this and another: We can simply state it without proof. That's known as encyclopedic brevity. Hans Adler 17:04, 16 October 2009 (UTC)
I've changed it to omit mention of the rule or any attempt at proof or explanation. But I'd still like to know, having forgotten a lot of my pre-calculus, how one evaluates the limit at 0 if not by l'Hopital's rule. I don't doubt it can be done, e.g. by recognizing the transformations that get there as EmilJ did, but is there an actual simple rule or process that would do that? Isn't that why we have l'Hopital's rule, to make it straightforward? What is different about LutzL's examples? Just that it's harder to recognize a path to the answer? Dicklyon (talk) 17:31, 16 October 2009 (UTC)
Well, what we are looking for is lim_{x→0} sin(x)/x. This happens to be the same as lim_{x→0} (sin(x) − sin(0))/(x − 0), and the latter is exactly the definition of sin′(0). Whether you are more likely to find this way of proving or the one using L'Hôpital depends on your background: A pure mathematician may well have forgotten about L'Hôpital, but will come up easily with the direct proof. Someone working with L'Hôpital a lot may well see that first. We should be writing for both kinds of readers (and everybody else who has a chance to understand this article). Hans Adler 17:42, 16 October 2009 (UTC)
PS: Reminder: For any function f, the derivative at a point a is defined as f′(a) = lim_{x→a} (f(x) − f(a))/(x − a). We just apply this to f = sin and a = 0. Hans Adler 17:46, 16 October 2009 (UTC)
I understand all that, but you're missing the point. It took a leap of imagination, or luck, or exploration, or something, to write the thing you're looking for in the form that could be recognized as the derivative of a function whose derivative you know. At least that first step is unmotivated by any rule or procedure; it's OK to prove it that way, but not a sensible way to show how it is found; on the other hand, application of the simple and well known rule by l'Hopital gets directly to the answer. Dicklyon (talk) 18:00, 16 October 2009 (UTC)
I understand your point, but you in turn are missing the point of what everyone else is saying. To use l'Hopital you need to know that the derivative of sin(x) at 0 is 1, but the derivative of sin(x) at 0 is by definition exactly the limit here — so you couldn't possibly use l'Hopital's rule without already knowing this limit, in essence. It's circular reasoning. This is OK if you're an engineer who remembers both l'Hopital's rule and the derivative of the sine (while having forgotten how the latter was found), but it uses a nontrivial theorem and is pedagogically unsound. And unnecessary here.
PS: Responding to the rest of the thread: you don't even need to use the definition of a derivative to find this limit; it is often done using geometric arguments (see e.g. File:Sinx x limit proof.svg): for small x, we can see that sin(x) < x < tan(x), and so cos(x) < sin(x)/x < 1, and now take x→0. Shreevatsa (talk) 18:37, 16 October 2009 (UTC)
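Shreevatsa's geometric squeeze argument is easy to verify numerically as well (my sketch, plain Python; the sample points are arbitrary):

```python
import math

# Geometric squeeze: for 0 < x < pi/2, sin(x) < x < tan(x),
# hence cos(x) < sin(x)/x < 1; both bounds tend to 1 as x -> 0.
for x in [1.0, 0.5, 0.1, 0.01, 1e-4]:
    assert math.sin(x) < x < math.tan(x)
    assert math.cos(x) < math.sin(x) / x < 1
```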
What? What do you mean "by definition"? Seems to me that the other method shown also requires you to evaluate the derivative of sin at 0, in addition to recognizing a manipulation that makes that give the answer you seek. What definition are you thinking of? And how do you get to the answer without l'Hopital's rule and without guessing a lucky rewrite? I realize there are other ways to get there, but none so simple as applying the rule for what to do to get the limit of the ratio of two things that go to zero. Dicklyon (talk) 19:07, 16 October 2009 (UTC)
The definition of the derivative is simply f′(a) = lim_{x→a} (f(x) − f(a))/(x − a) (see any calculus text for a reference). The preceding editors assume it is known that sin′ = cos, and hence the limit is just the definition of the derivative at 0, the evaluation of the cosine function at 0 being obvious. If you want to know how the derivative of the sine is derived, that is a more interesting question involving the way the sine function is defined in terms of power series (or, if you like, showing that the power series definition is equivalent to the high-school geometry formulation). This is getting a bit far afield, since the article doesn't require considerations of this sort. The other editors are mostly pointing out that L'Hopital's rule is purely redundant and complicating here. RayTalk 19:43, 16 October 2009 (UTC)
I remain baffled at how you miss my point. I'm not missing any of the info you're giving me. Of course I know the definition of the derivative and can recognize the expression that EmilJ wrote as the derivative of the sine at 0, and can see how the answer is obvious from there. What I'm saying is that there's an unexplained leap in how he went from the problem of finding the derivative of a function at 0 to that particular limit expression. Someone had to have the intuition or good luck to subtract the sin(0) from the numerator and the 0 from the denominator to put it into the form that's recognizable as the definition of the derivative of the sine. Anyone with a little scratching around might have found that, but if there's no rule or algorithm that led to it, how can it be said to be simpler than just applying the well-known l'Hopital's rule that is the obvious place to turn for problems of this sort? Numerous books show it done exactly that way; none EmilJ's way, I'd venture to guess. Dicklyon (talk) 01:13, 17 October 2009 (UTC)
It is a telling fact that all of them are engineering books. :-) The question here is not of what is the easiest way, but whether using l'Hopital's rule is valid at all. And the answer is that it is not valid as proof, because to use l'Hopital's rule you need to know sin'(0), which by definition is this limit. (Now that the derivative has been defined at least four times in this thread, I hope you won't ask again what the definition is.) You cannot prove X by using a fact that depends on X. (l'Hopital's rule remains a valid transformation and gives the correct answer, and if you remember it, it is certainly easy and straightforward to apply — but an application of the rule necessarily involves knowing sin'(0) which is this limit, so at most l'Hopital's rule is a mnemonic device in this case. Not a proof.) Shreevatsa (talk) 02:08, 17 October 2009 (UTC)
Maybe if I keep telling you I know what a derivative is you'll stop pretending I don't? But note my new note in next section, involving math books, not engineering books. What you're saying is nonsense – there is no logical circularity in using the derivative of the sine function to evaluate the limit of the sinc function at 0 via l'Hopital's rule; many books do so as an example of exactly what l'Hopital's rule is good for. Dicklyon (talk) 02:18, 17 October 2009 (UTC)

We should definitely just state the value for the limit. There has to be a reason to include any proof. There is no reason to include this one. Charles Matthews (talk) 19:34, 16 October 2009 (UTC)

We've done that already. Dicklyon (talk) 01:13, 17 October 2009 (UTC)

Computing (normalized) sinc(0)[edit]

If one didn't already know EmilJ's derivation for the limit of the (unnormalized) sinc at 0, how would one normally approach the problem of evaluating the simple expression:

lim_{x→0} sin(πx)/(πx) ?
If one knew the trick of turning it into the derivative of another function, after recognizing that the numerator and denominator both evaluate to 0 at the point where you're taking the limit (the same recognition that traditionally triggers application of l'Hopital's rule), one would write, as EmilJ did,

lim_{x→0} (sin(πx)/π − sin(π·0)/π)/(x − 0).

With a little rearrangement, this is easily recognized as the derivative of sin(πx)/π. For anyone with the most elementary skills at differentiation, this is clearly just cos(πx), which at x=0 is 1. So, same answer as before, with just a slightly more complicated path; but I think it illustrates that the method is based on being able to intuit what to do with the subtractions, and on the ability to recognize a limit expression as a derivative even when it's not in exactly the right form. Is there any generality here, or is it just re-deriving l'Hopital's rule in a lucky case-specific way?

The way normal people (not you mathematicians) would do it is say: "whoa, ratio 0/0, how am I going to do that? Oh, I remember, they taught us about some French dude who had a rule that makes it work trivially...something about a hospital...oh, here it is...doh!"

The sinc function is very often used as an example of where l'Hopital's rule is useful; for example, in the article l'Hôpital's rule; or in this book; or in this book; or in this book; or in this book; in general, many more books than I found before, because they don't call it "sinc".

If there's a more straightforward way to evaluate such limits, I'd like to know it. Dicklyon (talk) 01:52, 17 October 2009 (UTC)

There is, in the sense that the limit is, by the substitution y = πx, that of sin(y)/y as y → 0. Now why you know this is 1 is a matter of definitions, if it is not arguing in a circle. But we do know that limit. Charles Matthews (talk) 16:22, 17 October 2009 (UTC)
OK, that's just stupid. Replace one unsolved problem by another whose solution has not been well explained. I agree that there are ways to find a path to the limit via such substitutions. But that requires search, as opposed to the simple application of a rule that is triggered by exactly this situation. Did you look at any of the books I linked? Don't they all show the use of l'Hopital's rule as the way to evaluate this limit? Don't the other methods described here all require some messing around to find the path of transformations to get to an answer? That's the difference I'm talking about. Dicklyon (talk) 18:40, 17 October 2009 (UTC)
It seems that this discussion is no longer aimed at changing the article in any way, but at resolving this situation where two parties don't understand each other and don't know if they are talking past each other or what's going on. So far as I am concerned, that's fine. Here is how I feel about it, as a pure mathematician from a field where real numbers hardly occur at all: If you had asked me for this limit without giving me any hints or additional information, I would not have remembered l'Hôpital's rule. (Before EmilJ left a message at WT:WPM, this rule was hidden in a dusty corner of my brain.) Instead I would very likely have come up with the formula that I have given you above. (BTW: Sorry if the formula was nothing new for you, but at least it helped to clarify what we are talking about.) For a mathematician this is the natural way to do it; and part of our work in teaching university level maths is to make our students approach problems in the same way. If a significant portion of, say, second year mathematics students at a certain university would use l'Hôpital to compute this limit, this might be an indication that something went wrong in the course; that they don't think like mathematicians. For a mathematician doing a very small "backwards" step is no "messing around". We have to do it all the time and it becomes second nature. And many mathematicians find it harder to remember a theorem like l'Hôpital's rule than to remember vaguely that something like it exists and fill in the details by reinventing it. That's a result of our first year courses, in which the students must prove half a dozen theorems on their own every week. (Another result is that most people who drop out of a mathematics course do so in the first year, in contrast to most other subjects where people tend to drop out much later. At least that's the situation in Germany.) Hans Adler 01:05, 18 October 2009 (UTC)
I agree with all that entirely. I do a fair bit of math myself, though I'm primarily an engineer, so I know how to look around for the right transformations – or how to ask Mathematica to help me. But for "the rest of us" as Steve Jobs used to say of the non-nerds, it's handy to just apply the rule that's triggered by the situation; of course, for those who don't remember the rule, or if it doesn't get triggered, that's not going help a bit! Nonetheless, I was a bit miffed by all the people saying things like it's "completely pointless and brain-damaged"; or "no textbook would do it that way" when there's a ton of evidence that many do, and in fact use this particular limit as the canonical example; or saying "by definition" when they means it's possible to find a way... Sigh... Dicklyon (talk) 03:59, 18 October 2009 (UTC)

OK, I found the method, called "Recognizing a Given Limit as a Derivative", in this book. Example 52 explains exactly what you guys did above. Very nice. Then it concludes: "Also, l'Hopital's rule yields 1 immediately as the answer." Doh! Dicklyon (talk) 04:16, 18 October 2009 (UTC)

Forgive me, but what I suggested is a two-line proof, consisting of an observation and then a look-up. I realise that the ad hominem arguments above have hardly added to the tone of the discussion, but they weren't from me. I answered your question and hardly deserve to be called "stupid" for that. Charles Matthews (talk) 10:51, 18 October 2009 (UTC)
I'm not calling you stupid; the adjective applied to "that", which was a statement that the method involving a leap of recognition in the first step was "more straightforward" than simply applying the well-known rule. Dicklyon (talk) 15:18, 18 October 2009 (UTC)
Shrug. "How would one normally approach the problem of evaluating the simple expression ... ?". That was the question. First, recognise that π has no business in there complicating the issue. Second, "simple expression" is then "well-known expression" for the slope of the graph of sin at 0. Sorry, it does depend on who "one" is. It does depend on being comfortable with the limit concept. On realising that if you know the derivative of sin then a fortiori you know this limit. As has been said before, there is nothing to justify dressing up the procedure of evaluating the limit in an algorithmic garb just to produce the same answer by a longer way round. Anyway, if you ask an honest question it goes down badly if you react that way to a competent answer. Charles Matthews (talk) 15:57, 18 October 2009 (UTC)

What is the Sinc function and why should I care?[edit]

This article, like many others in the higher math domain, lacks a root in reality. Wikipedia is a general encyclopedia, not a PhD-in-Mathematics encyclopedia. Can an expert in the field please post why this is notable or relevant to those who are not math experts? Otherwise I propose it be deleted. (talk) 20:20, 28 June 2010 (UTC)

Did you read the lead section? It is relevant in signal processing. Charles Matthews (talk) 20:38, 28 June 2010 (UTC)
I am sorry the article lacks a root in YOUR reality, but you are wrong, Wikipedia IS an encyclopedia for math experts. And science experts, and art experts, and history experts and every kind of expert you can think of. It's also a general encyclopedia. You start out at the simplest level. If you want to learn more about a subject, hopefully there will be a link to help you instead of a stop sign that says "sorry, if you want more information on this, you will begin to become an expert, and we can't have that". Wikipedia is not intended to bolster your self-esteem about the level of your education. There is not one "expert" contributor to Wikipedia who isn't totally baffled by some other "expert's" contribution. Instead of proposing to delete what you don't understand, why not contribute what you do understand, and just hit the back button for the ones that you don't. PAR (talk) 01:28, 29 June 2010 (UTC)

sinc(x) Fourier Transform[edit]

It should be noted in the article that, formally, sinc(x) has no Fourier transform as it does not belong to G(R) (it is not absolutely integrable over the real axis). —Preceding unsigned comment added by (talk) 20:07, 16 January 2011 (UTC)

I don't think that's a requirement for the existence of a Fourier transform. It's square integrable; isn't that enough? Dicklyon (talk) 20:09, 16 January 2011 (UTC)
By square integrable do you mean (sinc(x))^2 is integrable? That is definitely not a good enough condition for the Fourier transform to exist, as (sinc(x))^2 < abs(sinc(x)) for x > 1. —Preceding unsigned comment added by (talk) 20:18, 16 January 2011 (UTC)
79, you might want to check this out Fourier_transform#Square-integrable_functions. If you're correct, you'll have to fix it in more than one place. (talk) 03:39, 17 January 2011 (UTC)
He does seem confused. I'm not absolutely certain that square integrable is a sufficient condition, but I am absolutely certain that this particular function's FT exists. Dicklyon (talk) 04:38, 11 February 2011 (UTC)
There are similar disputes about the Dirac delta function and whether it can stand alone and be Fourier transformed. Personally, I'm willing to accept the duality property of the Fourier transform, which means that if the transform of rect is sinc, then the transform of sinc is rect. (talk) 05:04, 11 February 2011 (UTC)
True, the Fourier integrals of sinc do not exist, as sinc is not absolutely integrable. However, L¹(ℝ), the set of absolutely integrable functions, is a dense subset of L²(ℝ), the square integrable functions. On this dense subset, the Fourier integral defines a linear operator that is bounded in the L² norm. Thus there exists a bounded extension of the Fourier integral to a bounded linear operator on the whole of L², which is called the Fourier transform (operator). As such, the Fourier transform of sinc exists. -- For the delta distribution, look for Schwartz test functions and tempered distributions. The idea is similar: L¹ and L² may be interpreted as regular distributions and as such they are tempered distributions, and they are dense in the latter set, so the Fourier transform operator may be extended as a bounded operator on the tempered distributions.--LutzL (talk) 10:26, 11 February 2011 (UTC)
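LutzL's extension argument can be made tangible by numerically evaluating truncated Fourier integrals of the normalized sinc (my sketch; the window size and the two test frequencies are arbitrary choices): the truncations approach the unit rect, 1 inside |f| < 1/2 and 0 outside.

```python
import numpy as np

# The L^2 Fourier transform of the normalized sinc is the unit rect:
# 1 for |f| < 1/2, 0 for |f| > 1/2.  Approximate the Fourier integral
# over a large but finite window (an L^1 truncation of sinc).
dx = 0.01
x = np.arange(-2000, 2000, dx)
s = np.sinc(x)  # sin(pi x)/(pi x)

def ft_at(f):
    # sinc is even, so the transform is real: integrate sinc(x)*cos(2*pi*f*x)
    return np.sum(s * np.cos(2 * np.pi * f * x)) * dx

assert abs(ft_at(0.25) - 1.0) < 0.01  # inside the band
assert abs(ft_at(0.75) - 0.0) < 0.01  # outside the band
```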

Abbrevation for 'Cardinal Sine'?[edit]

Sinc stands for something like 'cardinal sine' (see the French article). Moemin05 (talk) 00:25, 11 February 2011 (UTC)

Read the lead. Dicklyon (talk) 04:34, 11 February 2011 (UTC)

Limit of function[edit]

I've found an interesting thing about the generalized sinc function. If the function is given as n·(sin(nx)/(nx)), then the function is bounded by the curves 1/x and −1/x. It is easy to see with a function grapher. — Preceding unsigned comment added by (talk) 19:05, 17 June 2011 (UTC)
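The observation checks out: n·sin(nx)/(nx) simplifies to sin(nx)/x, and |sin| ≤ 1 gives the ±1/x envelope. A quick sketch (mine, plain Python; the choice n = 7 and the sample points are arbitrary):

```python
import math

# n*sin(n*x)/(n*x) = sin(n*x)/x, so |f(x)| <= 1/|x|:
# the graph lies between the envelope curves 1/x and -1/x.
n = 7
for x in [0.3, 1.0, 2.5, 10.0]:
    f = n * math.sin(n * x) / (n * x)
    assert abs(f) <= 1 / abs(x)
```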

sinc * sinc = sinc[edit]

I added "(k integer)" where "orthonormal base" is mentioned. The orthonormality, as well as the values of all other scalar products (among functions sinc(t−k) where k is not necessarily an integer, this time), follows immediately from the following nice property that you might want to include: sinc * sinc = sinc. (* denotes convolution.) This in turn follows via Fourier transform from rect² = rect. (talk) 00:30, 19 August 2011 (UTC)
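Both claims are easy to check numerically for the normalized sinc (my sketch; the integration window and step are arbitrary choices):

```python
import numpy as np

# Checks for the normalized sinc:
#   integral of sinc(x)*sinc(x-k) dx = 1 if k = 0, else 0 (k integer);
#   more generally it equals sinc(k), i.e. (sinc * sinc) = sinc.
dx = 0.01
x = np.arange(-500, 500, dx)
s = np.sinc(x)

def inner(k):
    return np.sum(s * np.sinc(x - k)) * dx

assert abs(inner(0) - 1.0) < 0.01          # orthonormality, k = 0
assert abs(inner(1)) < 0.01                # orthogonality, k = 1
assert abs(inner(2)) < 0.01                # orthogonality, k = 2
assert abs(inner(0.5) - 2 / np.pi) < 0.01  # convolution identity: sinc(1/2) = 2/pi
```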

A graphic illustration would be nice (with sinc and sinc²) -- (talk) 10:18, 8 September 2013 (UTC)

Incorrect definition of the Sinc function[edit]

The real definition of the Sinc function is Sinc x = (sin x)/x if x ≠ 0 and Sinc 0 = 1. Blackbombchu (talk) 00:00, 2 December 2013 (UTC)

Extra property to add to the Properties section[edit]

Blackbombchu (talk) 01:25, 2 December 2013 (UTC)

Yes, but why is this interesting? And where can it be sourced? Dicklyon (talk) 05:51, 2 December 2013 (UTC)
I'm not sure if Wikipedia works this way but maybe it doesn't need to be sourced if it can instead be proven in the article. Blackbombchu (talk) 16:06, 2 December 2013 (UTC)
No, it doesn't work that way. Dicklyon (talk) 05:20, 3 December 2013 (UTC)
This is the same as saying that the sinc function is the inverse Fourier transform of a (area=1) box function:

sinc(x) = ∫_{−1/2}^{1/2} e^{2πiξx} dξ,

so it should be covered in the Fourier transform section.--LutzL (talk) 10:51, 4 December 2013 (UTC)
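LutzL's point (the normalized sinc is the inverse Fourier transform of the unit-area box on [−1/2, 1/2]) can be verified by quadrature. The box is even, so sinc(t) = ∫_{−1/2}^{1/2} cos(2πft) df. A sketch (mine; the sample count and test points are arbitrary):

```python
import numpy as np

# Inverse Fourier transform of the unit-area box, via the midpoint rule:
#   sinc(t) = integral over f in [-1/2, 1/2] of cos(2*pi*f*t) df
n = 100_000
df = 1.0 / n
f = -0.5 + (np.arange(n) + 0.5) * df  # midpoints of the box [-1/2, 1/2]

for t in [0.0, 0.25, 0.5, 1.0, 2.5]:
    val = np.sum(np.cos(2 * np.pi * f * t)) * df
    assert abs(val - np.sinc(t)) < 1e-6  # np.sinc is the normalized sinc
```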

History of the name[edit]

The name sinus cardinalis dates back to Edmund T. Whittaker in 1915, where he named the bandlimited, or most simple, function of a family of cotabular functions (sharing a function table, i.e., values at equally spaced sample points) the cardinal function of this family. One would have to check if he already named the basis functions or if that came later. See A History of interpolation.--LutzL (talk) 11:09, 4 December 2013 (UTC)

Put fork to Sinc filter on top of the article?[edit]

Since Sinc redirects here, it's very probable that some "audio freaks" will first search for sinc as used in sound backends, but instead get a boatload of mathematical stuff they actually did not want to delve into so much. Of course, there is a link to the "Sinc filter", but it's at the very bottom, squeezed somewhere between antialiasing and Lanczos stuff. However, there's already a "Redirect" template on top. In case you agree, could you give me a hand with how best to handle this (and remove the "Sinc filter" from the "See also" list in the process)? -andy (talk) 17:54, 11 October 2014 (UTC)