Talk:Dirac delta function/Archive 1

From Wikipedia, the free encyclopedia

Graphic/Animation

The animation of the Gaussian limit is not normalized -> Incorrect. 128.200.93.188 (talk) 21:26, 16 April 2009 (UTC)

Definite integrals of the delta function

From the article:

If you integrate the delta function between ANY limits a and b, then the integral is:

0 if a,b > 0 or a,b < 0
1 if a < 0 < b
0.5 if a = 0 or b = 0

Really? I'm not sure about the last of these lines, the one with value 0.5. Surely this contradicts the "compact support" bit?

—The preceding unsigned comment was added by 217.158.106.142 (talk • contribs) 22:36, 22 November 2002 (UTC)

Looks right to me -- I remember the delta func being defined as the limit of a sequence of functions, each getting pointier. A pointy function of full integral 1, centred on 0, clearly has a half-integral of 1/2. With Lebesgue integration (which I think is the only thing you can use for the delta function; you can't use Riemann), there's a result that the limit of the integrals is the integral of the limit (uniform convergence is probably a requirement too) -- Tarquin 20:44 Nov 22, 2002 (UTC)

The difference is that in Lebesgue integration you really integrate over a set. For Lebesgue measure it doesn't matter whether that set is an open interval or its closure, but for the Dirac measure it does. Thus when you integrate over [0, b] you get 1 and when you integrate over (0, b) you get zero. There is no way to get 1/2. Just as there is no set of which 0 is half an element. -MarSch 17:57, 5 May 2005 (UTC)


The sequence of functions that go into the delta function does not necessarily have to be centered at zero. I conjecture that a sufficient condition is simply that the center approaches 0 as the function approaches an infinite spike. For example, consider the rectangular function

f_{\epsilon}(x)=\left\{\begin{matrix} 1/\epsilon, & \mbox{if } (c - 1/2)\epsilon < x < (c + 1/2)\epsilon \\ 0, & \mbox{otherwise} \end{matrix}\right.

This function is centered at x = c\epsilon. For any real c,

\lim_{\epsilon \to 0}f_{\epsilon}(x) = \delta(x)

However, the value of the half-integral depends on c.

Cyan 05:58 4 Jul 2003 (UTC)
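Cyan's counterexample is easy to check numerically. In this sketch (the function names are made up for illustration), the half-integral of the off-center rectangle comes out to c + 1/2 for |c| <= 1/2, independent of epsilon:

```python
# Hypothetical check of the off-centre rectangular delta sequence
#   f_eps(x) = 1/eps  on ((c - 1/2)*eps, (c + 1/2)*eps),  0 otherwise:
# its integral over (0, inf) depends on the centring parameter c, not on eps.

def half_integral(c, eps):
    """Integrate f_eps over (0, inf) exactly: (overlap length with x > 0) / eps."""
    lo, hi = (c - 0.5) * eps, (c + 0.5) * eps
    overlap = max(0.0, hi - max(0.0, lo))
    return overlap / eps

for c in (-0.5, 0.0, 0.25, 0.5):
    # comes out to c + 1/2 for |c| <= 1/2, for every eps
    print(c, half_integral(c, eps=1e-3))
```

So a symmetric sequence (c = 0) gives 1/2, but nothing in the definition of a delta sequence forces that value.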

I don't think I really understand this discussion. It is morally equivalent to trying to convolve the Dirac delta-function with the Heaviside function, no? Which is like trying to specify the value of the Heaviside function at 0. Saying it is 0.5, i.e. halfway up, is sort of the right answer in Fourier theory - but I doubt it is the right way to say it.

Charles Matthews 07:48 4 Jul 2003 (UTC)

I think what you are trying to say is: if you evaluate the Fourier series of a discontinuous periodic function at a discontinuity, it converges to the average of the original function's limiting values at the discontinuity. But all that means is that the original function and the Fourier series representation can disagree at discontinuities.

The delta function can be defined in various ways, e.g. as a measure in measure theory, or as a linear functional, or as an integral satisfying certain properties under a limiting operation. I don't know squat about measure theory or functional analysis, so I go with the third definition:

if

 \lim_{\epsilon \to 0}\int_{-\infty}^{\infty} f_{\epsilon}(x)g(x)dx = g(0)

we say that f_{\epsilon}(x) is a delta sequence, and for shorthand, we abuse proper notation by writing

 \lim_{\epsilon \to 0}f_{\epsilon}(x) = \delta(x)

(Lists of delta sequences may be found at [1] and [2].)

Here's the issue: is the following statement true for all delta sequences?

 \lim_{\epsilon \to 0}\int_{0}^{\infty} f_{\epsilon}(x)g(x)dx = 0.5 \cdot g(0)

Now, some delta sequences are symmetric about the y-axis, and would yield a half-integral of 0.5*g(0). But other delta sequences, like the one I defined above, are not necessarily symmetric about the y-axis. The half-integral is really indeterminate, because the definition of a delta sequence doesn't constrain it to any particular value.

Cyan 05:18 7 Jul 2003 (UTC)

We Japanese think that


    \int^{0}_{-\epsilon} \delta(x) dx = \int^{+\epsilon}_{0} \delta(x) dx = 1/2 
for all \epsilon > 0

and every image  H(x) of Heaviside step function   H : \mathbb{R} \ni x \mapsto H(x) \in H(\mathbb{R}) \subset \mathbb{R}  is


    H(x) = \int^{x}_{-\infty} \delta(\xi) d\xi = \frac{1+{\rm sgn}\, x}{2} = \left\{\begin{matrix} 0 & \left( x < 0 \right) \\ 1/2 & \left( x = 0 \right) \\ 1 & \left( x > 0 \right) \end{matrix}\right. 
.

User:Koiki Sumi 00:00, 15 Sep 2003 (UTC) & 00:30, 18 Sep 2003 (UTC) Who changed \mapsto to \longrightarrow? It has been reverted.

I believe that my off-center rectangular function (see above) is a counter-example to the idea that

 \int^{0}_{-\epsilon} \delta(x) dx = \int^{+\epsilon}_{0} \delta(x) dx = 1/2 for all \epsilon > 0 . -- Cyan 04:47, 15 Sep 2003 (UTC)

The indefinite integral of a distribution is another distribution, not a pointwise valued function. So, the whole question about \int_0^a \delta(x) dx just isn't well-defined. Phys 17:44, 6 Nov 2003 (UTC)


Would anyone please put something in the article about the value of

\int_0^{\infty} \delta(x) dx ?

I was taught that if you for some reason need to evaluate such an expression in a physical problem, you need to state which convention you use (0, 1, or 1/2). But I was educated as a physicist, who are known to treat mathematics from a practical point of view. :-) Han-Kwang 21:01, 5 June 2006 (UTC)

Japanese

Cannot understand the sense of that paragraph... Pfortuny 11:29, 13 Sep 2004 (UTC)

the statement that \delta(\mathbb{R}) \subset \mathbb{R} is probably false. For example, \delta(0)\not\in\mathbb{R}. I am going to remove that part. --> You say that \delta(0)\in\mathbb{C}-\mathbb{R}, don't you? (Are you all idiots?) ---
Anyway, in short, this Japanese version (I have never heard this version referred to as "Japanese"; can someone attest that usage?) simply says that the delta function is the derivative of the Heaviside step function. In other words, let
\theta(x)=\begin{cases}
0&    x<0\\
1&    x\geq0
\end{cases}
(this is what is known as the Heaviside step function. It can also be written in terms of the signum function, as is done in the article). Then you can simply define the delta function to be

\delta(x)=\frac{d\theta}{dx}
Your homework: figure out how what the article says is the same thing as what I said. -lethe talk
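As a numerical aside on \delta = d\theta/dx (the test function g below is an arbitrary choice, not from the thread): pairing \theta' against a test function via integration by parts gives \langle\theta',g\rangle = -\langle\theta,g'\rangle = -\int_0^\infty g'(x)dx, which should come out to g(0). A minimal sketch:

```python
import math

# Sketch: the distributional derivative theta' of the Heaviside step acts on
# a test function g by <theta', g> = -<theta, g'> = -int_0^inf g'(x) dx,
# which equals g(0) for g vanishing at infinity.

def g(x):
    return math.exp(-x * x)          # test function with g(0) = 1

def dg(x):
    return -2.0 * x * math.exp(-x * x)

# trapezoid rule for int_0^inf g'(x) dx, truncated at x = 10
n, a, b = 100000, 0.0, 10.0
h = (b - a) / n
total = 0.5 * (dg(a) + dg(b)) + sum(dg(a + i * h) for i in range(1, n))
value = -h * total                   # -int_0^inf g' = g(0) - g(inf)
print(value)                         # close to g(0) = 1
```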


On "The delta function as a probability density function" in the article, most Japanese see that \delta and h must be homeomorphic. -- (Are you all idiots?)


-- < To the idiot >


Let    f : \mathbb{R} \to \mathbb{R}  be


f(t)=\begin{cases}
-1&    ( t \in \mathbb{Q} )              \\
 1&    ( t \in \mathbb{R} - \mathbb{Q} )  
\end{cases} 
.


For every  a,b \in \mathbb{R} ,



    \int^{b}_{a} f(t) dt = b-a  
,


where integral means improper Riemann integral.


Let  \phi be a function  \mathbb{Q}-\{0\} \to \mathbb{R}-\{-\infty,+\infty\} ,   and let  g be a distribution \mathbb{R}\to\mathbb{R} :



g(t)=\begin{cases}
\phi(t)&    ( t \in \mathbb{Q}-\{0\} )                 \\
\delta(t)&  ( t \in (\mathbb{R}-\mathbb{Q})\cup\{0\} )  
\end{cases} 
.


For every  x\in\mathbb{R} ,



    \int^{x}_{-\infty} g(t) dt = h(x)  
,


where  h : \mathbb{R} \ni t \mapsto \frac{1+{\rm sgn} t}{2} \in \mathbb{R} .

—The preceding unsigned comment was added by 219.49.2.14 (talk • contribs) 13:45, 13 April 2006 (UTC)

sequences

Well, I just want to add this admittedly pedantic comment: from the point of view of mathematical exactness and completeness, it should be emphasized that the parameter a shares the property of all epsilon-small quantities in maths: it is positive! Otherwise, many of the given formulae are incorrect, in the sense that they in fact define sign(a)*delta(x).

The support of the delta function

Ok, maybe the support of the delta function is {0}, but consider this:

\lim_{a\rightarrow 0}\frac{\textrm{sinc}(x/a)}{\pi a}=\delta(x)

where sinc(x)=sin(x)/x is the sinc function. This is indeed a delta function, but the reason it yields zero when integrated over any interval not containing zero is NOT because it goes to zero, but because its period of oscillation goes to zero while its envelope stays between ±1/(πx). This means that it goes negative sometimes, and is therefore not a probability density function. Only if you specify that the Dirac delta function be further constrained to be a probability density function can it be thought of as approaching zero except at x=0. That is why I question the support of δ(x) being zero, and the inclusion of the probability infobox in the article. PAR 01:50, 24 Mar 2005 (UTC)
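PAR's cancellation point can be checked by quadrature. This sketch (the parameter choices are arbitrary) integrates sin(x/a)/(pi*x) over [-1, 1], which contains 0, and over [1, 2], which does not:

```python
import math

# Hypothetical numerical check: the nascent delta
#   f_a(x) = sinc(x/a)/(pi*a) = sin(x/a)/(pi*x)
# integrates to ~1 over an interval containing 0 but to ~0 over [1, 2],
# because of cancellation (oscillation), not because f_a vanishes there.

def f(x, a):
    if x == 0.0:
        return 1.0 / (math.pi * a)   # limiting value at the origin
    return math.sin(x / a) / (math.pi * x)

def midpoint(lo, hi, a, n=200000):
    # midpoint rule; n is chosen so the grid resolves the oscillations
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h, a) for i in range(n))

a = 1e-3
near = midpoint(-1.0, 1.0, a)    # ~ 1
far = midpoint(1.0, 2.0, a)      # ~ 0, though the envelope there is 1/(pi*x)
print(near, far)
```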

FWIW, I find the probability infobox annoying; these infoboxes would be a lot more useful if they were consolidated onto one page, so that users could actually compare distributions to find the one they wanted, instead of clawing randomly through various articles hoping to trip over the infobox they wanted. linas 06:11, 26 Jun 2005 (UTC)

Broken HTML table

The HTML table at the bottom, listing many different kinds of limits, is totally broken on my browser. I notice User:PAR added this maybe 3 months ago. Can I get rid of it? I'd rather do that than try to figure out the bug in the HTML markup. (We shouldn't be using HTML markup in WP articles anyway.) linas 06:11, 26 Jun 2005 (UTC)

I am against removing it. It is not an HTML table, it's a wiki table. I have looked at it on three different browsers on two different machines, and it always looks fine. It could be a temporary glitch in the Wikipedia software, or it could be a problem with your browser or machine. If you can, try looking with a different browser and/or a different machine. Also wait a while and try again. What is the nature of the problem? I mean, what does it look like on your browser? PAR 16:34, 26 Jun 2005 (UTC)
Are you talking about "Some nascent delta functions are:" invisible table? Looks fine here. --MarSch 18:56, 26 Jun 2005 (UTC)
It's still broken. I'm using Konqueror version 3.3.2, which I believe is the same as the main default browser on Macintoshes (Safari). Visually, the formulas overlap with text and each other; the thing is considerably wider than what fits in a standard column width. (My monitor is set for 1600x2000 and the table is still too wide for that.) linas 20:30, 24 July 2005 (UTC)
Can you get access to a different browser? Try it with that. Really, the table is fine; it's broken on your end. PAR 22:19, 24 July 2005 (UTC)
No, it's simply the best browser out there. No other web sites are broken, no other WP articles are broken. Just this one. I'm assuming it's not my browser, but something about the HTML that is generated. Maybe it should be run through an HTML checker? linas 03:24, 25 July 2005 (UTC)
 1 - Why run it through an HTML checker when it's a Wikipedia table?
 2 - Do you have another browser?
PAR 15:33, 25 July 2005 (UTC)

Konqueror is krap, in my experience.

I just ran this page through HTML validation, and it came out fine. "This Page Is Valid XHTML 1.0 Transitional!" So there's nothing wrong with the code itself. Of course validation doesn't point out visual errors with otherwise legit code.

Can you take a screenshot of your problem?

You should just get Firefox.  :-) - Omegatron 16:02, July 25, 2005 (UTC)

Distribution Notation?

Can someone define what this notation means, or link to a page that defines it? I've never seen it before, and it's not used on the page about distributions:

As a distribution, the Dirac delta is defined by

\delta[\phi] = \phi(0)\,

the preceding unsigned comment is by 129.7.57.161 (talk • contribs) 01:07, 8 January 2006 (UTC)

It means that the delta functional takes a function φ as input, and returns that function's value at 0 as output. I guess it's the square brackets that are throwing you off? Sometimes physicists like to use square brackets for things that take entire functions as inputs, instead of just numbers. -lethe talk 01:15, 8 January 2006 (UTC)
My problem was that I didn't understand what a distribution was. I read the distribution page enough to see that a distribution was a generalized function, and that they did not use the same notation. Reading further, I finally see that a distribution is a map on the domain of test functions. the preceding unsigned comment is by 129.7.57.161 (talk • contribs) 16:53, 8 January 2006 (UTC)

Formal introduction

Is this a typo?: \int_{-\infty}^\infty f(x) \, d\delta(x) = f(0)

If not, it needs more explanation, IMO. --Bob K 07:51, 23 January 2006 (UTC)

I think it's a case of someone following an infelicitous convention, at best. Michael Hardy 23:12, 31 March 2006 (UTC)
this is the way to define it when δ is the Dirac delta measure. Eliding x may make that more clear. \int f \, \mathrm{d}\delta = f(0)--MarSch 14:46, 13 April 2006 (UTC)

The subsection with that equation ("The delta function as a measure") is very short and a quarter of it is about distributions instead of measures. Should the subsection be there at all? Is there any satisfactory way of thinking about the delta function as a measure? How do you get the \int_{-\infty}^0 f(x)\, \mathrm{d}\delta(x) = \int_0^{\infty} f(x)\, \mathrm{d}\delta(x) = f(0)/2 property? Perhaps by setting measure of {0} to 1/2 and then adding two infinitesimals around 0 so that the measure of {-infinitesimal, +infinitesimal} is 1/2, and the measure of the rest is 0, and extend f's domain accordingly? Surely there's some standard way of doing this (unless this subsection is original research) and it'd be nice if it were explained. The scaling property too - it's given as a "helpful identity", but as far as I can see it does not follow from this measure definition, which I take as an indication that the definition is lacking. (intuitively it follows from the delta function being an approximation of a very thin box and scaling the axis changes the area of the box which should be reflected in the approximation, but that's motivation for the definition, not a definition) 82.103.198.180 23:31, 14 July 2006 (UTC)

cumulative distribution function

When talking about probability density functions, it is conceivable that there is a reason to prefer a convention that, at points where a jump discontinuity occurs, defines the value of the function to be halfway between the two one-sided limits. But with cumulative distribution functions rather than probability density functions, that is absolutely incorrect. By the usual convention, one defines the cumulative probability distribution function of a real-valued random variable X by

F(x) = Pr(X ≤ x).

There is also a convention, seldom seen, as far as I can tell, that defines it thus:

F(x) = Pr(X < x).

But either way, picking the halfway point at a jump discontinuity is completely wrong. Michael Hardy 23:16, 31 March 2006 (UTC)

You should probably add something to the article explaining this, as the subtlety of this point will be lost on most readers, leaving the field open for this error to be repeated over and over again ... linas 23:31, 31 March 2006 (UTC)

The Dirac Delta Function in Curvilinear Orthogonal Coordinates

It would be interesting to introduce the three-dimensional delta function, along with its definition in other orthogonal coordinate systems (spherical and cylindrical), since it is useful for finding Green's functions.

a good reference may be found here: [3]

\delta^3(\vec{r}-\vec{r'}) = \delta(x-x')\delta(y-y')\delta(z-z')
\delta^3(\vec{r}-\vec{r'}) = \frac{\delta(r-r')\delta(\phi-\phi')\delta(z-z')}{r}
\delta^3(\vec{r}-\vec{r'}) = \frac{\delta(r-r')\delta(\theta-\theta')\delta(\phi-\phi')}{r^2 \sin \theta}
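For the spherical form, the Jacobian in the denominator is exactly what is needed for the product of one-dimensional deltas to integrate to 1 against the volume element:

```latex
\int \delta^3(\vec{r}-\vec{r'})\, d^3r
  = \int_0^\infty \int_0^\pi \int_0^{2\pi}
    \frac{\delta(r-r')\,\delta(\theta-\theta')\,\delta(\phi-\phi')}{r^2 \sin\theta}\,
    r^2 \sin\theta \; d\phi \, d\theta \, dr = 1
```

The same cancellation explains the single factor of r in the cylindrical formula.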

Cumulative distribution function

I restored the cumulative distribution function plot because there was no explanation given as to why it was wrong. If it's wrong, please give an indication of how it is wrong. PAR 03:57, 5 April 2006 (UTC)

maybe you should look up two topics? --MarSch 14:43, 13 April 2006 (UTC)

Merge with Dirac delta function

The articles are obviously equivalent, and Dirac delta function is much more complete than Dirac delta measure. I recognize that "function" is technically an incorrect term, but it is still extremely widely used, much more so than "delta measure". I therefore propose that Dirac delta measure be merged into Dirac delta function. --Zvika 18:15, 4 April 2006 (UTC)

Dirac delta is also widely used and not incorrect. I think we should merge to there. -MarSch 12:59, 5 April 2006 (UTC)

That article is quite silly. It never even defines the Dirac measure! The measure is defined here though, so I don't even know if there's anything to merge. Maybe it can just be made into a redirect here. (And I favor the name Dirac delta function over the slightly shorter Dirac delta.) -lethe talk + 15:08, 5 April 2006 (UTC)

Keep an open mind I've never heard of a Dirac measure either, but it sounds right as rain to me. And I've formally studied the Dirac delta function from the perspective of Papoulis's The Fourier Integral and Its Applications (McGraw-Hill, 1962). --Firefly322 (talk) 12:26, 11 March 2008 (UTC)

Unit impulse function??

If the article on the unit impulse function is correct, then to say that "The Dirac delta function (is) sometimes referred to as the unit impulse function" is wrong. The integral of a delta function is 1, whereas the integral of the unit impulse function would be infinitesimal. What should we do about that? Fresheneesz 05:00, 19 April 2006 (UTC)

The term unit impulse redirects directly here, which I don't think is appropriate. At least unit impulse should be turned into a disambiguation page, because Kronecker delta is more appropriate for the discrete-domain unit impulse. —Preceding unsigned comment added by Lielei (talkcontribs) 21:16, 9 December 2007 (UTC)
That impulse function is defined on the integers, so an "integral" is really a summation. If you sum that bad boy, you do indeed get 1. So I don't think there's a contradiction. -lethe talk + 05:05, 19 April 2006 (UTC)
Yes, an integral is a sum, but it's a weighted sum. Each infinitesimal part is multiplied by the function at that point, and that's added up. If you only have one infinitesimal, the product of those is infinitesimal. There is a contradiction. You don't indeed get 1. At least... I don't understand how you could. Fresheneesz 07:49, 19 April 2006 (UTC)
An integral over the reals can be thought of as a sum of infinitesimals. That is, it's the supremum of a sum of smaller and smaller bits. But when you're working over the integers, there is no "smaller and smaller". There is no infinitesimal. There's just a sum. So the integral of the function x^2 over the integers from 0 to 3 is just 0+1+4+9 = 14. Similarly, the integral over the integers of the impulse function on the integers will just be 1. -lethe talk + 08:01, 19 April 2006 (UTC)
I believe one of the properties of the dirac delta function is that its width is infinitely small but the area underneath is equal to 1. (It is infinitely skinny but infinitely tall, but the one constant is that its integral is 1--this is the reason the sifting property exists.) -Msa11usec 03:55, 20 April 2006 (UTC)
It's true that the dirac delta can be approximated by functions with decreasing width, increasing height, and constant area of 1. Thank you for the synopsis. -lethe talk + 04:07, 21 April 2006 (UTC)
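The counting-measure point above is easy to see in a few lines (the test sequence g[n] = n^2 + 3 is an arbitrary choice for illustration): "integration" over the integers is just summation, so the discrete unit impulse (Kronecker delta) has total "integral" 1 and sifts out g[0].

```python
# Sketch of integration against the counting measure on the integers.

def kron(n):
    return 1 if n == 0 else 0

ns = range(-100, 101)
total = sum(kron(n) for n in ns)                  # "integral" of the impulse = 1
sifted = sum(kron(n) * (n * n + 3) for n in ns)   # sifts out g[0] = 3
x_squared = sum(n * n for n in range(0, 4))       # "integral" of x^2 over 0..3 = 14
print(total, sifted, x_squared)
```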

I can believe that "The Dirac delta function (is) sometimes referred to as the unit impulse function", so we should deal with that reality. But linking to unit impulse function is not currently a solution, because that is an article about the Kronecker delta. A possible solution is to convert that article into a disambiguation page that points to both Kronecker delta and Dirac delta function. --Bob K 16:25, 19 April 2006 (UTC)

Looks like you did that. But in response to Lethe: an integral is a sum over a continuous set; an integral doesn't exist on the integers. An integral is always an infinite sum, no matter what the interval. But sums over the integers are not always infinite, and are never continuous. In any case, your example with integers doesn't apply anyway, because the Dirac delta function is a function on a continuous set. Fresheneesz 08:06, 21 April 2006 (UTC)
Let me summarize for you the definition of an integral, maybe that will help. An integral of a function over a set is the greatest of the sums of the heights of the function over small bits times the size of the small bits. When considering real functions, one uses the Lebesgue measure, which in the limit multiplies by vanishingly small bits. When considering functions on the integers, one uses the counting measure, which simply counts the number of points in the small bits. The latter kind of integral is simply a sum over the integers, and the Dirac delta functional on that measure space is just the Kronecker delta, what someone around here is calling the unit impulse function. In short, the definition of an integral allows you to integrate over the reals or the integers, using the appropriate measure. Your comment that integrals are always over continuous sets and that integrals don't exist over the integers is simply not true. Integrals do exist over the integers, where they're known as sums. -lethe talk + 08:18, 21 April 2006 (UTC)
I had the notion that an integral was a type of sum - the fact that they have different symbols implies (though of course doesn't prove) that they are in fact different. In any case, I could hypothetically agree with your definition, but this would mean that the "area under the curve" approach to an integral no longer works. Fresheneesz 22:03, 21 April 2006 (UTC)
That's true, it's hard to interpret the integral of a function over the integers as an area. However, in modern mathematics, the integral is not defined in terms of area. On the contrary, area is usually defined in terms of integrals. -lethe talk + 23:47, 21 April 2006 (UTC)


Maybe it would be useful to re-frame this discussion as a mis-communication due to overloading of the word integration (and the disambiguation page isn't much help!). You probably don't need me to tell you this, but I have a few idle minutes to waste. One person is using a broader, more general, definition than the other person. But indeed, "integration", is commonly used in the more specific sense of \mathbb{R} and Lebesgue measure. In fact I would dare to suggest that that is its most common usage (in a mathematical context). Anyhow, it's a safe bet that the intuitively-pleasing association with "area under the curve" will be around longer than Wikipedia. Fagetaboutit, and have a great weekend everybody. --Bob K 00:04, 22 April 2006 (UTC)
Bob is right. Fresh, the most common usage of the word "integral" is like you describe. Area under a continuous real curve. However, there is a commonly used rigorous definition which allows the definition of delta functions in such a way as that the unit impulse function you're complaining about is perfectly rigorous. This is why my point was originally, and is still, there is no contradiction. -lethe talk + 01:33, 22 April 2006 (UTC)
By the way, your explanation inspired me to read Integral, which led me to Calculus. If Wikipedia is correct (always a big if), Calculus appears to be limited to the more specific meaning of integral. And the Integral article begins with the statement: "This article deals with the concept of an integral in calculus." So where is the proper place for your information/explanation? My instinct is to add a small section to Integral and get rid of the leading caveat. The only alternative I can think of is to rename Integral (e.g. "Integral calculus") and create a new article for your information. (What would it be called?) Then Integral would become a disambiguation page. --Bob K 14:08, 22 April 2006 (UTC)
This is a small problem, since the two different definitions of "integral" have nontrivial differences. If the integral of the Kronecker delta function is 1, then it should be explained why that is, and what the connotations are of such a thing. If the statement is true.. but useless, then it probably shouldn't be in the article. In any case, the Kronecker delta article has nothing on its integral - but it would be very interesting to note what the significance of an integral over integers is. Fresheneesz 03:11, 23 April 2006 (UTC)
There aren't two different definitions of "integral". There is only one definition, but there are two choices of measure: the Lebesgue measure on the reals, and the counting measure on the integers. The fact that the Kronecker delta satisfies this property is already discussed in the article, where it's referred to as the sifting property, but if you would like the article to mention integration, I can certainly try to accommodate. I've added a comment to the article Kronecker delta. How do you like it? -lethe talk + 04:11, 23 April 2006 (UTC)
Oh, my mistake. Heh, well.. I don't understand your revision - it doesn't clarify anything for me : ) . But I don't know enough about either sifting properties .. or integration (apparently).. or delta functions to be able to help. I was just trying to point out what I thought was an inconsistency. It's all fine with me now. Fresheneesz 11:04, 23 April 2006 (UTC)
If you would like to understand how summation is an example of integration, first you should learn what a measure is, and then how integrals are defined for arbitrary measure. This is standard material for a first semester graduate class in real analysis. Wikipedia has all the relevant material as well. -lethe talk + 04:21, 26 April 2006 (UTC)
I also think we're in pretty good shape. The change to Kronecker delta looks good to me... short but sweet with a link for those who would like to know more. My only concern is that when I turn to Wikipedia to learn the definition of integral, I find the calculus definition, which is the [limited] common definition, not the [general] mathematical one. Effectively there are two definitions. I think it is fine to emphasize the common definition, but the integral article is probably remiss not to mention the real definition somewhere in the fine print. If it had done so, this discussion would have been either unnecessary or much more efficient. --Bob K 12:01, 23 April 2006 (UTC)

Is it really unit?

Another question about calling it a "unit impulse function": is it really called that? I've learned about a unit impulse function that is called "unit" because it jumps to 1 at time 0. Any old impulse function is a multiple of the unit impulse function. In that case, I don't think the Dirac delta function is a unit impulse function - but rather an infinite impulse function. Fresheneesz 03:39, 26 April 2006 (UTC)

Also, delta function links here - but doesn't "delta function" have a more general meaning (i.e. a delta function is 0 everywhere except at x=0)?


I would say that a unit impulse function is called unit because the integral (using the appropriate measure) is 1. The integral is the only distinguishing feature of \delta(x)\, and \delta(2x)\,. They both have infinite height. For me the term infinite impulse function implies a Dirac delta, because the concept of infinite is irrelevant to the concept of Kronecker delta. --Bob K 04:04, 26 April 2006 (UTC)
Those are the only meanings I know of for delta function. My thought is to make delta function a disambiguation page that goes to both places. --Bob K 04:04, 26 April 2006 (UTC)

I don't use the term "unit impulse function", nor am I familiar with that usage, though I do have a vague impression that it's common with engineers. I note that the Dirac delta function is an identity element under convolution. -lethe talk + 04:18, 26 April 2006 (UTC)

Neither do I, and I am an EE/DSP type. However, Google-searches for the exact phrase "unit impulse" found matches at 29 Wiki articles and 70,000 web sites. So it seems to be serving a useful purpose. I spent a little time perusing the hits and came away with the impression that most people associate it with the Dirac delta, probably because of the word "impulse". In fact the original unit impulse function article was simply a redirect to here. That was changed 14-Apr-2006 into an article about the Kronecker delta (except the name was omitted). Then on 20-Apr I moved its contents into the Kronecker delta article, and changed it into a disambiguation page, pointing back to here. But actually it seems that the original article (a simple redirect to Dirac delta) is probably the one in agreement with most of the user world. Shall we revert? --Bob K 12:32, 26 April 2006 (UTC)
If I may vote on my own question, I vote to not revert, because all DSP engineers use the term impulse response, even if they prefer delta function to unit impulse. So the word impulse is firmly entrenched in DSP-land, where it refers to the Kronecker delta. --Bob K 12:49, 26 April 2006 (UTC)
I'm not clear on what reversion you are considering. You want Dirac delta to be a disambiguation? I don't support that. I think Dirac delta should go here, and Kronecker delta should be separate. -lethe talk + 20:39, 26 April 2006 (UTC)
I agree with that. The reversion I was considering is the Unit_impulse_function article... full circle... back to its original incarnation: "unit impulse function, take 1" --Bob K 23:01, 26 April 2006 (UTC)
Oh, I see. I'm having a hard time making myself care about whether unit impulse function is a redirect or a disambig, so whatever you like is fine with me. -lethe talk + 23:12, 26 April 2006 (UTC)
If both are refered to as unit impulse functions, then I vote no revert. Fresheneesz 03:13, 27 April 2006 (UTC)

Unit impulse now redirects to Delta function which points to both Kronecker delta and Dirac delta function. --Bob K 14:56, 27 April 2006 (UTC)

It really is a unit

We engineers really do refer to the "Dirac delta" as the "unit impulse": defined as the (normalized) product of the amplitude and the time (t-axis) in the limit as t --> 0 (thus the amplitude --> infinity). It's used in communication theory in particular to "sample a wave". Hence the comb function or "shah" function is sampling at a fixed interval of time, ad infinitum. This is what a DSP does, and when it does, the process creates the opportunity for aliasing, thus the need for low-pass filtering before the sampling device. The Fourier transform of the unit impulse is a straight line across the spectrum, indicating noise of equal amplitude at all frequencies: easy to see with a discrete Fourier transform that you can put on your Excel spreadsheet in about 15 minutes. The phrase "Kronecker delta" has stuck in my head for some reason, and I went hunting in my old textbooks but haven't found a reference yet; I did find references to "the Dirac delta". If anybody wants me to add them as references, lemme know. What I read here seems accurate.
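The flat-spectrum remark is easy to verify in the discrete setting (N = 16 is an arbitrary choice): the DFT of a discrete unit impulse [1, 0, ..., 0] has magnitude 1 in every frequency bin.

```python
import cmath

# Hypothetical check: DFT of a discrete unit impulse is flat across the spectrum.
N = 16
impulse = [1.0] + [0.0] * (N - 1)

spectrum = [sum(impulse[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]
mags = [abs(v) for v in spectrum]
print(mags)   # every bin has magnitude 1
```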

I added the notation to the derivative of the sigmoid function, an absolutely fascinating function. The derivative -- as pointed out in the sigmoid page -- is also a nice function plus it is a "hump" that, in the limit and multiplied by a constant, makes a perfectly-good (mathematically-continuous) impulse-function. 1/wvbaileyWvbailey 14:42, 11 June 2006 (UTC)
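As a sketch of that limit (the helper names are made up, and the logistic form s_a(x) = 1/(1 + exp(-x/a)) is assumed for the sigmoid): the derivative of s_a is a hump of total area 1 for every a > 0, with the area concentrating near x = 0 as a -> 0, which is exactly the nascent-delta behavior described above.

```python
import math

def sigmoid(x, a):
    t = x / a
    if t < -700.0:      # avoid math.exp overflow for extreme arguments
        return 0.0
    if t > 700.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-t))

def hump_area(lo, hi, a):
    # by the fundamental theorem, the area under s_a' over [lo, hi]
    # is exactly s_a(hi) - s_a(lo); no quadrature needed
    return sigmoid(hi, a) - sigmoid(lo, a)

for a in (1.0, 0.1, 0.01):
    # total area stays ~1; the share inside [-0.5, 0.5] grows toward 1
    print(a, hump_area(-50.0, 50.0, a), hump_area(-0.5, 0.5, a))
```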

I am not disagreeing with your description of sampling, but I think it can bear some clarification.
  • A DSP (more specifically an analog-to-digital converter) produces samples of a waveform at regular increments of time. The samples themselves are not a function of continuous time, and their continuous [time] Fourier transform is undefined.
  • To use that analysis tool, a continuous-time function is contrived conceptually (not actually nor numerically) by using the samples to modulate the "teeth" of a Dirac comb function, which does have a continuous-time Fourier transform.
    • The transform of the modulated comb is related to the transform of the original waveform in a very insight-filled way, which leads to an understanding of aliasing and ways to mitigate it, such as lowpass filtering and/or increasing the sample-rate.
    • It also reveals that the sample-rate can be unnecessarily high. Undersampling, which causes aliasing, is not a reversible operation. Oversampling is inefficient/wasteful, but it is also reversible, meaning that no information is actually lost (see sampling theorem).
  • The continuous-time Fourier transform of the modulated comb can be mathematically simplified (reduced) to a numerical calculation (that a computer can do) using just the original samples (without the infinite-valued delta functions).
    • That special case of the continuous-time Fourier transform is called discrete-time Fourier transform (DTFT). But it is a continuous-frequency function, which means that a computer cannot evaluate it at every frequency (because it is a continuum).
    • The discrete [frequency] Fourier transform (DFT) is a formula for computing regularly-spaced values (i.e. at discrete frequencies) of the DTFT function (which is always periodic).
    • The fast Fourier transform (FFT) is an algorithm for computing the DFT very efficiently.
--Bob K 15:48, 12 June 2006 (UTC)
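As a quick illustration of the flat spectrum mentioned above, here is a small sketch of my own (not from any of the texts under discussion) using NumPy's FFT: the DFT of a discrete unit impulse has magnitude 1 in every frequency bin.

```python
import numpy as np

# Discrete analogue of the unit impulse: 1 at n = 0, 0 elsewhere.
N = 64
impulse = np.zeros(N)
impulse[0] = 1.0

# Its DFT is identically 1: equal amplitude at every frequency bin.
spectrum = np.fft.fft(impulse)
print(np.allclose(np.abs(spectrum), 1.0))  # flat magnitude spectrum
```

The same check works in a spreadsheet, as the comment above suggests; NumPy just makes the flatness immediate.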

A holding place for this reference: I finally found the definition of "Kronecker delta" in my perusing of Kreider et al., An Introduction to Linear Analysis, Addison-Wesley, Reading, Mass., 1966. In the following, the bold-face indicates these are vectors and the * should be a dot-product:

Eq. 7.31: x_i · x_j = 0 whenever i ≠ j
Eq. 7.32: x_i · x_j = 1 whenever i = j
"For economy of notation when discussing orthonormal sets, Eqs. (7.31) and (7.32) are frequently combined by writing
"x_i · x_j = δ_ij, where δ_ij = { 0 if i ≠ j, 1 if i = j }
"The symbol introduced here is called the Kronecker delta" (Kreider pp. 268-269)

Thus it is 1 on the diagonal and 0 everywhere else. wvbaileyWvbailey

So are you suggesting that Kronecker delta is just another name for an identity matrix? --Bob K 20:57, 13 June 2006 (UTC)
Yes. The Kronecker Delta is the unit matrix. See next section below. wvbaileyWvbailey 13:38, 14 June 2006 (UTC)
I did. And it does not say that the Kronecker delta is a matrix. When the elements of a square matrix are denoted by the Kronecker delta, the matrix (not the Kronecker delta) is an identity matrix. --Bob K 16:11, 14 June 2006 (UTC)
You are entitled to your opinion. But I've re-read it and can't see your point of view. Point me to a paper-document source (not a website) that documents this and clarifies it for me and for others (i.e. a math text book, etc).wvbaileyWvbailey 16:24, 14 June 2006 (UTC)
You expect me to "prove" a negative for you? And if I can't find a book that states the Kronecker delta is not a sandbar at the mouth of the Mississippi River, are you also going to conclude that it is? --Bob K 17:03, 14 June 2006 (UTC)
My request is legitimate. I want to see your sources so I can understand what you are asserting, your point of view. Either put up or shut up and by the way, I find your tone abrasive. wvbaileyWvbailey 17:10, 14 June 2006 (UTC)
  • You cited "next section below" as the source for the assertion: "The Kronecker Delta is the unit matrix". I read that source, and it does not justify the assertion. That section (written by you) is the only source I need for that observation. My only other statement is: "When the elements of a square matrix are denoted by the Kronecker delta, the matrix (not the Kronecker delta) is an identity matrix." I think we agree on the positive part of that statement. --Bob K 20:43, 14 June 2006 (UTC)
I have given you a source you can verify. Library <== operative word. I'm waiting for you to quote me an alternate source that I can verify. I'm agnostic about the definition of "Kronecker delta". Whatever the definition is: is. But so far you haven't provided me with an alternate source. You haven't offered an alternate definition. I'm waiting. I'm patient. wvbaileyWvbailey 22:38, 14 June 2006 (UTC)
  • I do not owe you an alternate definition. I have merely observed that your source does not support your conclusion, in case you care. --Bob K 04:17, 15 June 2006 (UTC)
I on the other hand find your tone (put up or shut up) very soothing. You should become a diplomat. -lethe talk+ 20:25, 14 June 2006 (UTC)
You can also find that information in the Wikipedia article Kronecker delta. -lethe talk + 19:17, 13 June 2006 (UTC)
Clearly Mr. Lethe likes to intrude into little cat-fights where his opinion is not welcome.wvbaileyWvbailey 22:38, 14 June 2006 (UTC)

Yeah, but the difference is: this is a verifiable source, not the rubbish that passes for references on that page. Actually there aren't any references there. This Dirac delta page isn't much better with respect to references. wvbaileyWvbailey 19:32, 13 June 2006 (UTC)

Another difference is that a lot more people have easy access to Wikipedia than those who have your reference on their bookshelf. Anyhow, why all the noise? Why don't you just quietly add your reference to the Kronecker delta article and be done with it? --Bob K 20:57, 13 June 2006 (UTC)

Alternate definitions, sources

copied from above to keep the continuity: A holding place for this reference: I finally found the definition of "Kronecker delta" in my perusing of Kreider 1966: Kreider et al., An Introduction to Linear Analysis, Addison-Wesley, Reading, Mass., 1966. In the following, the bold-face indicates these are vectors and the * should be a dot-product:

Eq. 7.31: x_i · x_j = 0 whenever i ≠ j
Eq. 7.32: x_i · x_j = 1 whenever i = j
"For economy of notation when discussing orthonormal sets, Eqs. (7.31) and (7.32) are frequently combined by writing
"x_i · x_j = δ_ij, where δ_ij = { 0 if i ≠ j, 1 if i = j }
"The symbol introduced here is called the Kronecker delta" (Kreider pp. 268-269)

Thus it is 1 on the diagonal and 0 everywhere else. wvbaileyWvbailey

So are you suggesting that Kronecker delta is just another name for an identity matrix? --Bob K 20:57, 13 June 2006 (UTC)

That would seem to be what Kreider suggests. But there is no more to be found in his text of 773 pages of dense math. I found a reference, Topper 1962, that states this explicitly:

"1.5 The unit matrix.
"We already have a matrix which corresponds in matrix algebra to the number zero in the algebra of numbers. We now need a matrix 1 to take the place of the number unity. 1 must have the property that 1A = A for every A, whenever the product on the left exists. Consider the diagonal matrix 1_m of order m × m whose diagonal elements are all equal to unity.
[drawing of 1_m = unit matrix]
"The elements of this matrix are usually denoted by the Kronecker delta δ_ik, which is such that
δ_ik = 0 (i ≠ k), δ_ii = 1
"Then
[1_m A]_ik = Σ from j=1 to m (δ_ij a_jk) = a_ik
"so that 1_m A = A for any matrix A of order m × n. The matrix 1_m is called the unit matrix of order m, and there are unit matrices of all (square) orders. [etc.]" (her italics and boldface: Topper, p. 19)
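A small numerical sketch of my own (not Topper's) of the passage above: building the m × m matrix whose (i, k) entry is the Kronecker delta δ_ik gives exactly the identity matrix, and multiplying it into any A leaves A unchanged.

```python
import numpy as np

m, n = 4, 3

# Matrix whose (i, k) entry is the Kronecker delta delta_ik.
one_m = np.array([[1.0 if i == k else 0.0 for k in range(m)] for i in range(m)])

print(np.array_equal(one_m, np.eye(m)))  # it is the unit (identity) matrix

A = np.arange(m * n, dtype=float).reshape(m, n)
print(np.allclose(one_m @ A, A))  # 1_m A = A, as in Topper's Eq. above
```

This is the sense in which "the Kronecker delta is the entries of the unit matrix" rather than the matrix itself, which is the distinction being argued over above.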

Noble 1969 defines the Kronecker delta this way:

"For an orthonormal set we have
(u_r, u_s) = δ_rs
"where δ_rs is the Kronecker delta [his italics], which is unity if r = s and zero if r ≠ s.
"We have already met these ideas in Chapter 9 [Eigenvalues and Eigenvectors]. The main difference is that we are now talking about any abstract vector space instead of the space of n × 1 column vectors. As an example of a type that we have not met before, consider
u_r = 2^{1/2} sin rπt, 0 ≤ t ≤ 1, r = 1, 2, 3, ...

the inner product being that defined ... with w(t) = 1 [weight]. We have

(u_r, u_s) = 2 × integral from 0 to 1 [sin rπt sin sπt dt] = integral from 0 to 1 { cos (r − s)πt − cos (r + s)πt } dt
"It is left to the reader to show that this is unity if r = s and zero if r ≠ s, so that the u_r form an orthonormal set. Note that the set contains an infinite number of vectors."
"We now give a theorem which generalizes results proved in Chapter 9 for n × 1 column vectors..." (p. 487)


In his chapter The impulse symbol δ(x), Bracewell 1965 defines the Kronecker delta in exactly the same way as Kreider. On page 97, problem 18:

"18 The Kronecker delta is defined by
δ_ij = { 0 if i ≠ j, 1 if i = j }
"show that it may be expressed as a null function of i − j as follows:
δ_ij = δ^0(i − j)" (p. 97)

He defines a "null function" as:

"Null functions are known chiefly for having Fourier transforms which are zero, while not themselves being identically zero. By definition, f(x) is a null function if
integral from a to b (f(x) dx) = 0
"for all a and b.... Null functions arise in connection with the one-to-one relationship between a function and its transform, a relationship defined by Lerch's theorem, which says that if two functions f(x) and g(x) have the same transform, then f(x) − g(x) is a null function.
"An example of a null function ... is
δ^0(x) = { 0 if x ≠ 0, 1 if x = 0 }
"Under the ordinary rules of integration, the integral of δ^0(x) is certainly zero." (Bracewell pp. 82-83)

While we're at it, Carlson 1968 defines "the unit impulse (also called Dirac impulse or delta function)"(p. 45) in terms of convolution:

"... a function with unit area satisfying
δ(u) = { 0 at u ≠ 0, infinity at u = 0 } (2.42)
"More precisely it should be stated that δ(u) has the property
integral from -infinity to +infinity (g(u) δ(u − u0) du) = g(u0) (2.43)
"where g(u) is any regular function continuous at u0. As a special case of (2.43) we have the more familiar relation
integral from -infinity to +infinity (δ(u) du) = integral from 0− to 0+ (δ(u) du) = 1
"In words, an impulse has unit area or weight [his italics] concentrated at the point where its argument is zero and none elsewhere [his italics]. For most purposes we can also say that δ(u − u0) is located at u = u0 and is zero everywhere else. Thus Aδ(u − u0), an impulse of weight A, is graphically represented by ... [drawing here of an arrow of height "A"]
"Strictly speaking, impulses are not functions in the usual sense. Consequently, (2.42) and (2.43) -- or any expression containing impulses -- require a standard for interpretation. The usual convention is to replace δ(u) by a unit-area pulse of finite amplitude and nonzero duration, the pulse shape being relatively unimportant. Operations involving δ(u) are then carried out with the finite pulse, after which we consider the limit as the duration approaches zero.
"Of the many possible shapes which become impulses in the limit, we shall have use for the gaussian, rectangular, sinc and sinc² pulses [his italics].... Further detailed treatment of impulses and the associated concept of generalized functions can be found in the literature, e.g., Bracewell (1965, chap. 5) and Lighthill (1958)." (Carlson, pp. 45-46)
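Carlson's limiting-pulse convention is easy to check numerically. A sketch of my own construction, using the gaussian pulse he mentions: replace δ(u − u0) by a narrow unit-area Gaussian and confirm that the sampling integral in (2.43) approaches g(u0) as the width shrinks.

```python
import numpy as np

def nascent_delta(u, eps):
    """Unit-area Gaussian pulse that narrows as eps -> 0."""
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

u = np.linspace(-10, 10, 200001)
du = u[1] - u[0]
u0 = 1.0  # sample the test function g(u) = cos(u) at u0

for eps in (0.5, 0.1, 0.01):
    sampled = np.sum(np.cos(u) * nascent_delta(u - u0, eps)) * du
    print(eps, sampled)  # approaches g(u0) = cos(1) as eps -> 0
```

The pulse shape is indeed "relatively unimportant" here; a rectangular or sinc pulse of unit area gives the same limit.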

In yet another book that describes the Dirac delta, Jordan and Balmain 1950, 1968, I have found Lighthill 1962 referenced. They also reference Papoulis 1962. See below.

In Jordan and Balmain 1950, 1968, section

1.06 The Dirac Delta
"... Under the name of "unit impulse" the Dirac delta has been widely used in circuit theory to represent a very short pulse of high amplitude... detailed discussions of the properties of the Dirac delta may be found in the books referenced at the end of the chapter [Lighthill 1962 and Papoulis 1962].
"The Dirac delta at the point x = x0 is designated by δ(x − x0) and at x = 0 it is designated by δ(x). It has the property
integral from a to b [δ(x − x0)] dx = 1 if x0 is in (a, b), = 0 if x0 is not in (a, b)
"Thus the delta behaves as if it were a very sharply peaked function of unit area; in fact, the delta may be represented rigorously as a vanishingly thin Gaussian function of unit area..."
"Another fundamental property may be stated as follows:
integral from a to b [f(x) δ(x − x0)] dx = f(x0) if x0 is in (a, b), = 0 if x0 is not in (a, b)
"Under the integral operation, the delta has the property of selecting the value of the function f(x) at the point x0; thus the delta is a kind of mathematical "sampling" device or gating operation.
"The derivative of the Dirac delta [sketch in Fig. 1-12] is useful in representing charge dipoles and electric double layers [etc.]" (Jordan and Balmain, pp. 18-19)

The Dirac delta function is defined in Cunningham 1965, p. 148 as the following:

"Let us consider the step function f(x) which is defined as follows
f(x) = 1/ε, −ε/2 ≤ x ≤ ε/2
f(x) = 0, otherwise
"The area enclosed by such a function and the x-axis will clearly be unity (see Fig. 8.7) [drawing of the step function with base from −ε/2 to +ε/2 and height 1/ε]. If now we consider the limit of the function f(x) as ε → 0 we obtain the so-called Dirac delta function which has the property δ(x) = 0 provided x ≠ 0, δ(0) = infinity, together with the condition that
integral from -inf to +inf δ(x) dx = 1
"in order that the 'area under the curve' remains unity. Clearly this is not a function in the usual sense of the word but physical parallels are legion." (Cunningham, p. 148)
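Cunningham's rectangular pulse can also be checked numerically. A small sketch of mine: for every ε the area under the pulse is unity, while the peak height 1/ε grows without bound, which is exactly the limiting behavior the quote describes.

```python
import numpy as np

def rect_pulse(x, eps):
    """Cunningham's f(x): height 1/eps on [-eps/2, eps/2], zero elsewhere."""
    return np.where(np.abs(x) <= eps / 2, 1.0 / eps, 0.0)

x = np.linspace(-1, 1, 2_000_001)
dx = x[1] - x[0]

for eps in (0.5, 0.05, 0.005):
    area = np.sum(rect_pulse(x, eps)) * dx
    peak = 1.0 / eps
    print(eps, peak, round(float(area), 3))  # peak grows, area stays ~1
```

This is the same rectangular function quoted at the top of this talk page, just centered at zero.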

There is a lot more in Cunningham -- including a Theorem of 5 parts that look very much like Carlson in the sense of defining/using the delta function in convolution (p. 149 ff); most of this is directed to complex variable theory.

Cannon 1967 defines the "unit impulse" this way:

"A unit impulse -- denoted by δ(t − τ) ... is basically a mathematical device obtained by reducing the width of a real impulse until it occupies a time which is very short compared to characteristic time constants of the system being studied, but at the same time maintaining the magnitude (area) of the impulse at unity. Formally, the unit impulse is defined by
δ(t − τ) = lim as Δt → 0 of a[u(t − τ) − u(t − τ − Δt)], with aΔt = 1" (Cannon p. 211)

References:

Topper, A. Mary, Matrix Theory for Electrical Engineers, George G. Harrap, London / Addison-Wesley Publishing Company, Reading, Mass., 1962.

Cannon, Robert H., Dynamics of Physical Systems, McGraw-Hill, New York, 1967.

Noble, Ben, Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1969.

Cunningham, John, Complex Variable Methods in Science and Technology, D. Van Nostrand, London, 1965.

Bracewell, Ron, The Fourier Transform and Its Applications, McGraw-Hill, New York, 1965.

Carlson, Bruce A., Communication Systems: An Introduction to Signals and Noise in Electrical Communication, McGraw-Hill, New York, 1968.

Jordan, Edward C., and Balmain, Keith G., Electromagnetic Waves and Radiating Systems: Second Edition, Prentice-Hall, Englewood Cliffs, NJ, 1950, 1968.

The following I have not examined, but are referenced by Jordan and Balmain, and by others:

Lighthill, M.J., Fourier Analysis and Generalized Functions, Cambridge University Press, London, 1960.

Papoulis, A., The Fourier Integral and its Applications, McGraw-Hill Book Company, New York, 1962.

wvbaileyWvbailey 16:00, 14 June 2006 (UTC)

Delta function on more complicated arguments.

Although everything written here about the properties of the delta function (and their proofs) can be found in the "External links" and in any standard textbook on this subject, I wasn't able to find anything about the n-dimensional generalized scaling property ---> \int_V f(\mathbf{r}) \, \delta(g(\mathbf{r})) \, d^nr
= \int_{\partial V}\frac{f(\mathbf{r})}{|\mathbf{\nabla}g|}\,d^{n-1}r

I would be grateful if someone could supply a reference about this property.

StefanosNikolaou 15:33, 9 November 2006 (UTC)

Please could someone supply a reference? —Preceding unsigned comment added by 130.226.56.2 (talk) 11:01, 18 May 2009 (UTC)

Derivative of Dirac delta function

Does the Dirac delta function have a derivative? --Abdull 14:37, 23 January 2007 (UTC)

Yes - see Scienceworld entry. This article doesn't have a section on that, but it needs one, definitely. When you become an expert, why don't you add to the article? PAR 16:21, 23 January 2007 (UTC)
In short, it has a distributional derivative Lavaka 23:30, 12 April 2007 (UTC)
The Scienceworld article has some very useful equations which would enhance the Wikipedia article, e.g.,

x\,\delta'(x) = -\delta(x)
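The identity above can be sanity-checked numerically. Here is a sketch of mine, pairing both sides with a smooth test function φ: replacing δ′ by the derivative of a narrow Gaussian, ∫ x δ′(x) φ(x) dx should approach −φ(0), which is what −δ gives.

```python
import numpy as np

eps = 0.01
x = np.linspace(-1, 1, 400001)
dx = x[1] - x[0]

# Nascent delta (narrow unit-area Gaussian) and its derivative.
gauss = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
gauss_prime = -x / eps**2 * gauss

phi = np.cos(x)  # smooth test function with phi(0) = 1

lhs = np.sum(x * gauss_prime * phi) * dx   # <x delta', phi>
rhs = -np.sum(gauss * phi) * dx            # <-delta, phi>
print(lhs, rhs)  # both approach -phi(0) = -1 as eps -> 0
```

This is exactly the distributional-derivative reading mentioned above: the identity holds when both sides are integrated against a test function.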

Fourier Representation justification

Since the Dirac delta function is a tempered distribution, we can define its Fourier Transform, which, as the article states, is 1. Hence, it seems OK at first to define the delta function as the Inverse Fourier Transform of 1, which is what the article states:

\int_{-\infty}^\infty 1 \cdot e^{-i 2\pi k t}\,dt = \delta(k)

My question: why is this true? For nice functions g (say, in the Schwartz Space, or in L^1), we have, where F represents the Fourier Transform, and F^{-1} represents the Inverse Fourier Transform, the following fundamental result

F^{-1}( F(g)) = g

but as far as I know, this is not true for distributions. So, how to justify the first formula? Maybe the author of that formula can tell me where they got it from? --Lavaka 23:37, 12 April 2007 (UTC)
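One standard way to make sense of the formula in question (though not necessarily what the original author had in mind) is to truncate the integral at ±L. The result, sin(2πkL)/(πk), is then a nascent delta in L: its pairing with a rapidly decaying test function tends to φ(0) as L → ∞, which is how the distributional identity F⁻¹(1) = δ is usually justified. A numerical sketch of mine under that assumption:

```python
import numpy as np

def truncated_kernel(k, L):
    """integral_{-L}^{L} e^{-i 2 pi k t} dt = sin(2 pi k L) / (pi k)."""
    # np.sinc(x) is sin(pi x)/(pi x), so np.sinc(2 L k) * 2 L = sin(2 pi k L)/(pi k).
    return np.sinc(2 * L * k) * 2 * L

k = np.linspace(-8, 8, 1_600_001)
dk = k[1] - k[0]
phi = np.exp(-k**2)  # smooth, rapidly decaying test function, phi(0) = 1

for L in (0.1, 1.0, 10.0):
    paired = np.sum(truncated_kernel(k, L) * phi) * dk
    print(L, paired)  # tends to phi(0) = 1 as L grows
```

The pointwise limit of the kernel does not exist, which is why the inverse-transform identity holds only in the distributional (weak) sense, as the questioner suspects.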

It really is a function

The Dirac delta does, I believe, satisfy the Wikipedia definition of a function, so the page here was edited to provide consistency. - Lee

No, the Dirac delta is not a function. If it were a true function, with value zero everywhere except at the origin, where the value would be infinite, such a thing would have a Lebesgue integral of zero, and not equal to 1, as it happens. Oleg Alexandrov (talk) 05:07, 2 May 2007 (UTC)
Well, it's a real-valued function (in fact, a bounded, linear functional) defined on a space of test functions, it's just not a function defined on the domain of those test functions. :-) Sullivan.t.j 05:54, 2 May 2007 (UTC)
Well, if you put it that way, then yeah. :) Oleg Alexandrov (talk) 06:07, 2 May 2007 (UTC)
It just occurred to me that the difference in domains might have been the source of the confusion. Maybe, maybe not. Sullivan.t.j 07:12, 2 May 2007 (UTC)

\delta(0)=?

It is commonly said in introductory texts that the delta function has value \infty at the origin and value zero elsewhere. However this is not true: we can construct a delta sequence \{\delta_n\} which converges pointwise to zero at the origin, simply by forcing every sequence member to have value zero at the origin. Frigoris 14:54, 1 July 2007 (UTC)

Fourier Transform

Hello Sir, my name is Joni.

I want to ask: what is the Fourier Transform of δ(x₀, y₀)? Thanks.

Joni NTUST-Taiwan email address:m9602801@mail.ntust.edu.tw —Preceding unsigned comment added by 140.118.123.226 (talk) 13:48, 16 October 2007 (UTC)

what is the difference?

What is the difference between these delta symbols for functions?

δn(r) and δ(r) —Preceding unsigned comment added by Greeniq (talkcontribs) 21:30, 17 February 2008 (UTC)


\delta_a(x)\, represents a function of x that approaches \delta(x)\, in the limit as parameter a\, approaches zero. There are many such functions, and \delta_a(x)\, is just a general notation for any of them.
--Bob K (talk) 01:32, 18 February 2008 (UTC)

So when I have a function like this (δn(r) − δ(r)), how can I solve such a function? The general function I have is

-Δφ(r)=(e/ε)(δn(r)- δ(r))? —Preceding unsigned comment added by Greeniq (talkcontribs) 09:18, 18 February 2008 (UTC)


It does not appear that you are referring to the article. This talk page is only for the article.
--Bob K (talk) 12:50, 18 February 2008 (UTC)
You may ask your question on Reference_desk/Science or Reference_desk/Mathematics, which would be better places for your question. - Justin545 (talk) 03:02, 18 March 2008 (UTC)

Laplace transform

The given inverse Laplace transform of $\cos(as)$ is incorrect. This can be easily verified by substituting the sum of delta functions into the Laplace transform definition. As can be seen, the point $t = ia$ does not belong to the interval $(0,\infty)$. Thus, the integral should be equal to zero. Vladimir1954 (talk) 23:38, 23 February 2008 (UTC)

UNITs

Can anybody tell me: what are the units of the delta function? For example:   [ \delta (\hbar \omega - E_g)]  = ?

Is its unit that of "energy"?

Thanks a lot !!!

                     An Asian boy

notation

Is there really a need to use the notation \delta_a(x) for the representations ("nascent deltas")? It looks like a delta centered at a, a quite popular notation. It's a bit confusing. JWroblewski (talk) 08:04, 24 June 2008 (UTC)

Egorov's theory

An outline should be added of the properties of the delta function in Egorov's theory of generalised functions. In this theory there are several deltas, all of which have the properties of the "usual" delta. Egorov's theory is important because it allows multiplications, powers, and indeed almost any functions of deltas. The theory is moreover very near to Dirac's original formulation and to the use of delta functions in physics.

See

- Yu. V. Egorov, A contribution to the theory of generalized functions, Russ. Math. Surveys (Uspekhi Mat. Nauk) 45(5), 1–49 (1990).

- Yu. V. Egorov, Generalized functions and their applications, in P. Exner and H. Neidhardt, eds., Order, Disorder and Chaos in Quantum Systems: Proceedings of a conference held at Dubna, USSR on October 17–21, 1989 (Birkhäuser Verlag, Basel, 1990).

- A. S. Demidov, Generalized Functions in Mathematical Physics: Main Ideas and Concepts (Nova Science Publishers, Huntington, USA, 2001), with an addition by Yu. V. Egorov. —Preceding unsigned comment added by 81.172.158.10 (talk) 15:45, 8 July 2008 (UTC)

Broken External Link

The link at the bottom "Integral and Series Representations of the Dirac Delta Function" goes to an inaccessable page. I get a page saying "You do not have access to this file." This link should either be fixed or removed. 165.230.20.142 (talk) 16:46, 22 October 2008 (UTC)

How the Dirac delta is a generalization of the Kronecker delta

I have a few comments about the new section. I think the title is a bit long; perhaps "Relation to Kronecker delta"? Then we invoke a lot of machinery to describe the relationship. One could state it more simply, and avoid discussing eigenvalues of operators:

\sum_{k=1}^n \delta_{ik}v_k=v_i\qquad \int \delta(x-x_0) f(x)\,dx=f(x_0).

We are assuming various things about the operator to turn the sum into an integral; we should really point out what they are. Lastly, the notation seems a bit off. The article treats ξ' both as a complex number and as a vector, but I think this could be cleared up by indexing eigenvalues and eigenvectors. Thenub314 (talk) 12:23, 23 December 2008 (UTC)
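The parallel in the two displayed formulas can be illustrated numerically. A sketch of mine (the Dirac side uses a narrow Gaussian as a nascent delta, my own choice): the Kronecker sum picks out the i-th component exactly, while the integral approximately picks out f(x0).

```python
import numpy as np

# Kronecker side: sum_k delta_ik v_k = v_i (exact).
v = np.array([3.0, -1.0, 4.0, 1.5])
i = 2
delta_ik = np.array([1.0 if k == i else 0.0 for k in range(len(v))])
print(float(np.dot(delta_ik, v)))  # equals v[i]

# Dirac side: integral delta(x - x0) f(x) dx = f(x0) (in the eps -> 0 limit).
eps, x0 = 0.01, 0.5
x = np.linspace(-5, 5, 1_000_001)
dx = x[1] - x[0]
nascent = np.exp(-(x - x0)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
f = np.sin(x)
print(np.sum(nascent * f) * dx)  # ~ f(x0) = sin(0.5)
```

The discrete identity is exact while the continuous one holds only in a limiting (distributional) sense, which is one of the assumptions the comment above says should be made explicit.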

Dimensional analysis?

The phrase "In terms of dimensional analysis, this definition of δ(x) implies that δ(x) has dimensions reciprocal to those of dx." seems to be somewhat irrelevant. I'm guessing it's come from the fact that the definite integral results in a dimensionless quantity, but this is true of all definite integrals (unless units are specifically invoked). Can we safely remove this sentence? Oli Filth(talk|contribs) 12:12, 4 January 2009 (UTC)

Definition of the Dirac Delta

WRONG!! THIS IS THE CORRECT DEFINITION:

\delta(x) = \begin{cases} \ 0, & x \ne 0 \\ \int_{-\infty}^\infty \delta(x) \, dx = 1. \end{cases}

When x=0, the Dirac function is undefined, NOT equal to infinity!!! In some cases when x=0, infinity works, but sometimes (as in the sampling theory) one finds that a value of one is helpful. —Preceding unsigned comment added by 216.207.242.34 (talk) 22:00, 4 March 2009 (UTC)

To be technically correct, both definitions are wrong because neither defines an actual function on the real number line. The definition you refer to in the article is merely a heuristic definition useful for remembering properties of the Dirac distribution (but makes no sense mathematically). I think maybe you are referring to scaling of the Dirac distribution. --67.193.128.233 (talk) 01:34, 9 March 2009 (UTC)

Explanation of table removal

I have removed the section containing the table, which was restored by User:PAR with the additional implication that I was "destroying this section", which I can only imagine means removing those items from the table that were already treated in context above. I do feel that perhaps to avoid further misunderstandings, I should explain the removal of the table here.

  • First of all, it is completely mathematically trivial to write down a nascent delta function. However, some at least have identifiable importance in various areas of mathematics such as partial differential equations, probability theory, and Fourier analysis. The appropriate format is to build context for the examples in the text, rather than lump them all together in a table.
  • Second, the following paragraph appeared in the section under discussion, as though it communicated some deep fact about nascent delta functions:

Note: If η(ε,x) is a nascent delta function which is a probability distribution over the whole real line (i.e. is always non-negative between -∞ and +∞) then another nascent delta function ηφ(ε, x) can be built from its characteristic function as follows:

\eta_\varphi(\epsilon,x)=\frac{1}{2\pi}~\frac{\varphi(1/\epsilon,x)}{\eta(1/\epsilon,0)}
where
\varphi(\epsilon,k)=\int_{-\infty}^\infty \eta(\epsilon,x)e^{-ikx}\,dx
is the characteristic function of the nascent delta function η(ε, x). This result is related to the localization property of the continuous Fourier transform.
Now, this is really just a special case of the fact that f(x/ε)/ε is a nascent delta function for any f of total integral 1, and this already appears in the subsection on approximations to the identity.
  • Third, the material on probability distributions had already been integrated elsewhere into the section.

What remained of the table was the sinc-squared function and the derivative of the sigmoid function. I have no idea in what context these appear, or indeed if they were merely constructed as examples for the naive purposes of enlarging the table, but it seems better not to have them at all in light of my first point above than to run the risk of being accused of committing "slow destruction". Sławomir Biały (talk) 03:56, 16 June 2009 (UTC)

Unless the section is completely wrong, it would be better to move it to the talk page: in this way, disagreements between different editors can be resolved faster and more constructively. This is a fairly standard course of action for chunks of articles that seem irrelevant, and serves other useful purposes: some material may turn out to be useful, after all; if later identical or similar material is readded, there is a transparent record of why it had been removed in the first place, etc. At the moment, I cannot even follow your arguments, since they refer to the non-existing context. Arcfrk (talk) 13:34, 16 June 2009 (UTC)
There is a link to the removed content in the first line of my comment. I have also included it in the box below for easier navigation. Sławomir Biały (talk) 13:41, 16 June 2009 (UTC)

So, just to reiterate more succinctly for those who may have trouble following my above arguments:

  • The first paragraph in the deleted section already appears elsewhere in the article in a better context.
  • The second-to-last paragraph is saying something that is a trivial special case of the general construction in a very convoluted way.
  • The entries of the table itself have been, over the past few days, almost entirely integrated into the text for better context.

--Sławomir Biały (talk) 15:09, 16 June 2009 (UTC)


Please comment here on the recent and unnecessary creation of the new article List of nascent delta functions. As indicated above, all of this material already appears in context in the article. Our own manual of style tends to favor list incorporation, rather than standalone lists. Also, it is a stupid endeavor to attempt to give a "list of nascent delta functions" that attempts to be at all comprehensive. The article already indicates the many ways in which they occur "naturally", and it is plainly obvious that any attempt to list them is futile at best, and potentially misleading at worst. 173.75.158.194 (talk) 15:30, 4 November 2009 (UTC)

I disagree with the removal of this table. I think the above remarks are all very well for an able and knowledgeable mathematician, but a Wikipedia article should be aimed at improving the understanding of those who are less able, and a list of examples (not intended to be comprehensive) is a useful adjunct. I have also found it a handy reference in the past. RQG (talk) 11:36, 29 November 2009 (UTC)
Each and every one of the examples is still in the article, just not in the form of a table. The only difference is that the examples are now organized (and that requires text) and given appropriate context. As a matter of fact, there are now more examples than there ever were in the table, because neither the plane wave decomposition, the Cauchy integral, nor the fundamental solution of the wave equation, were mentioned in the table. Anyway, physicists who need particular nascent delta functions (for whatever reason) usually get them as fundamental solutions. So a more useful organization of that sort of thing would be in an article about those. Sławomir Biały (talk) 15:01, 29 November 2009 (UTC)

There is something wrong with the following statement

In the section Dirac delta function#Composition with a function the article states "... provided that g is a continuously differentiable function with g′ nowhere zero." A few lines down it then states

This distribution satisfies

\delta(g(x)) = \sum_{i}\frac{\delta(x-x_i)}{|g'(x_i)|}

where xi are the real roots of g(x) (which are all simple by the restriction on the derivative of g).

Now this is of course correct, but it is somewhat of an odd statement, since the condition that g′(x) ≠ 0 everywhere implies that there is at most one root to begin with. I guess that the proper condition meant was that g′(x) ≠ 0 at any of the roots of g. Anybody care to confirm this? (TimothyRias (talk) 08:49, 25 June 2009 (UTC))

Actually, the correct condition is that |g'(x)| must be nonzero everywhere, for otherwise the composition δ(g(x)) requires much more effort to define. Taking this into account, the expression on the right-hand side should be either zero (the 'empty sum') or the singleton sum, as you have noted. Sławomir Biały (talk) 00:09, 26 June 2009 (UTC)
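The composition formula under discussion can be checked numerically with a nascent delta. A sketch of mine, using g(x) = x² − 1 as an illustrative choice (simple roots at ±1, |g′(±1)| = 2): pairing δε(g(x)) with a test function f should approach (f(1) + f(−1))/2.

```python
import numpy as np

eps = 1e-3
x = np.linspace(-3, 3, 1_500_001)
dx = x[1] - x[0]

g = x**2 - 1          # roots at +1 and -1, with |g'| = 2 at both
f = np.exp(x)         # test function

# Narrow unit-area Gaussian applied to g(x), a nascent delta(g(x)).
nascent = np.exp(-g**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
lhs = np.sum(nascent * f) * dx

# sum over roots of f(x_i) / |g'(x_i)| = (e + 1/e) / 2 = cosh(1)
rhs = (np.exp(1.0) + np.exp(-1.0)) / 2
print(lhs, rhs)  # agree as eps -> 0
```

Note that g′(0) = 0 here, but since x = 0 is not a root of g, the formula still holds — consistent with the weaker condition proposed above, though the stronger "g′ nowhere zero" condition makes δ(g(x)) easier to define in general.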

too many directions

Only readers who already know the subject can understand the article! Otherwise there is no chance…

I think that the best way is to give up the "informal/heuristic" definitions.

Can I suggest to:

1) Introduce delta in the Lebesgue measure context

2) Then define it as a Radon measure

3) briefly explain why/how these two definitions coincide (approximate identity and so on)

4) Give the distribution point of view: as a Radon measure, delta is a distribution of order 0.

NB: the writing \delta(x) has NO MEANING. Only \delta(\{x\}) has one in a measure context. Otherwise one has to write \delta(\phi) or \langle \delta,\phi \rangle (some would do \langle \phi, \delta \rangle  ;) )

Regards —Preceding unsigned comment added by 83.199.27.48 (talk) 20:25, 3 July 2009 (UTC)

Doesn't the article already do precisely what you think it should? Granted, the approximate identity comes later in the article. But I think that is as it should be, because there are some differences between the weak versus distributional meaning of the approximate identities. Sławomir Biały (talk) 02:41, 4 July 2009 (UTC)
Also, I disagree with your proposal to abandon the heuristic definition. This is how non-mathematicians typically define the delta function, and the article would be incomplete without this heuristic definition. Even Gel'fand and Shilov, on the first page of their seminal treatise, start with this. Also, while you may think that it is absolutely incorrect to write δ(x) under any circumstances, actually a lot of highly respected mathematicians and other scientists do indeed write it this way. See for instance the aforementioned work. Sławomir Biały (talk) 02:59, 4 July 2009 (UTC)

Who shall read the article: Gelfand or Mr Smith? That's why I think that, after an informal overview, precise definitions should be given. Then you can introduce approximate identities and explain how the formal settings meet the first intuition. Thus, you can give the two points of view while remaining rigorous.

The article should be a readable introduction, not an abstract targeted at experts who already master the informal notations. —Preceding unsigned comment added by 83.199.27.48 (talk) 12:24, 4 July 2009 (UTC)

But as far as I can tell, the article does give precise definitions immediately after the informal overview. Also, you seem to be saying two contradictory things: one that the article is too informal, and the other that the article is so rigorous as to be completely inaccessible. For these two reasons, I don't see how one should edit based on your critique. Sławomir Biały (talk) 13:57, 4 July 2009 (UTC)

Your presentation is messy: you begin with some casual writing, then you claim that the Dirac is a measure, then a distribution, without really linking the two parts: the point does not emerge. (Cf. L. Schwartz, Analysis III.) And why don't you mention the weak-* convergence?

If \{f_n\} is a sequence of approximate identities, then it tends weak-* to \delta

(i.e., for every test function, and so on)
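For what it's worth, this weak-* convergence is easy to check numerically. Here is a quick Python sketch (my own illustration: the Gaussian approximate identity, the test function phi, and the quadrature grid are arbitrary choices, not anything from the article):

```python
import math

# The pairing <f_eps, phi> = ∫ f_eps(x) phi(x) dx should tend to phi(0)
# as eps -> 0, where f_eps is a Gaussian of width eps (an approximate identity).

def phi(x):
    """A smooth, rapidly decaying test function with phi(0) = 1."""
    return math.exp(-x**2) * math.cos(x)

def pairing(eps, a=-10.0, b=10.0, n=200000):
    """Trapezoidal estimate of ∫ f_eps(x) phi(x) dx over [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        f = math.exp(-x**2 / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))
        w = 0.5 if i in (0, n) else 1.0
        total += w * f * phi(x) * h
    return total

for eps in (1.0, 0.1, 0.01):
    print(eps, pairing(eps))  # approaches phi(0) = 1 as eps shrinks
```

The narrower the Gaussian, the closer the pairing gets to phi(0), which is exactly the weak-* statement above restricted to one test function.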

Writings like \int \delta(x) \phi(x)\mathrm{d}x are just conventions. They are not necessary, and they make the equalities heavier than they need to be, like:

\int_{-\infty}^\infty f(x) \, \delta\{dx\} =  f(0)

instead of :

\int_\mathbb{R} f \mathrm{d}\delta =  f(0)

?

I'm not an expert: I am only trying to imagine how a BA student would react. I bet that I'm not the only one to think this way. Maybe some other people could give their point of view… —Preceding unsigned comment added by 83.199.27.48 (talk) 14:35, 4 July 2009 (UTC) --83.199.27.48 (talk) 14:37, 4 July 2009 (UTC)

The notation \textstyle{\int_{-\infty}^\infty f(x) \, \delta\{dx\} =  f(0)} is fairly standard for the Lebesgue integral with respect to a measure (see, for instance, Feller's textbook on probability theory). It is not as popular, perhaps, as the notation \textstyle{\int_\mathbb{R} f \mathrm{d}\delta}, but the latter clashes with the (quite standard) notation for the Stieltjes integral that is used in the same section, which is the reason I settled on the former notation instead. Sławomir Biały (talk) 14:59, 4 July 2009 (UTC)
With regard to your former point on weak-* convergence, I do mention this on several occasions in the relevant section. One does need to be careful about which weak-* topology is being used: for instance the weak-* topology on the space of measures (to which I have used the standard term vague topology) is different from the weak-* topology on the space of distributions. Finally, I do actually link the two notions in the "Definitions" section. Sławomir Biały (talk) 15:07, 4 July 2009 (UTC)
To quote from the article:
In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (2) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.
--Sławomir Biały (talk) 15:09, 4 July 2009 (UTC)
I have added a paragraph that attempts to put the link between the measure and distribution formulations up front in a more accessible manner. Sławomir Biały (talk) 15:26, 4 July 2009 (UTC)


I suggest ordering the definitions like this:

The Dirac δ function is the Borel measure that only loads the singleton {0}, namely: \delta(\{0\})=1,\, \delta(Q)=0\quad (\text{Q is a segment/closed cube which does not contain 0})

Fix a Borel set A. A straightforward computation now shows that \delta is indeed a Borel measure, the one such that:

- if A contains 0: \delta(A)=1 \text{ i.e } \int {1}_A \mathrm{d}\delta = 1

- if not:  \delta(A)=0 \text{ i.e } \int {1}_A \mathrm{d}\delta = 0

Since any Borel function f is the pointwise limit of simple functions, we obtain:

 \int f \mathrm{d}\delta = f(0)
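The step from simple functions to \int f \mathrm{d}\delta = f(0) can be illustrated in a few lines of Python (my own sketch, not from any reference; the function f and the partition of [-1, 1) are arbitrary choices):

```python
# The Dirac measure on sets, and the integral of a simple function
# sum_i c_i * 1_{A_i} against it, which collapses to the value at 0.

def delta(A):
    """Dirac measure of a set, described by a membership predicate A."""
    return 1 if A(0) else 0

def integrate_simple(terms):
    """Integrate a simple function given as (coefficient, predicate) pairs."""
    return sum(c * delta(A) for c, A in terms)

def indicator(a, b):
    """Membership predicate for the half-open interval [a, b)."""
    return lambda x: a <= x < b

# Step approximation of f(x) = x**2 + 1 on [-1, 1) with mesh h = 0.25;
# 0 is chosen as a partition point, so the approximation is exact at 0.
f = lambda x: x**2 + 1
h = 0.25
terms = [(f(-1 + i * h), indicator(-1 + i * h, -1 + (i + 1) * h)) for i in range(8)]
print(integrate_simple(terms))  # f(0) = 1.0
```

Only the one interval containing 0 survives the integration, so refining the partition reproduces f(0) in the limit, exactly as the simple-function argument says.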

From the Radon point of view, we set: \delta: \mathcal{C}_c (\mathbf{R}^n) \rightarrow \mathbf{R}, \, f \mapsto f(0).

Let K be a compact set: the restriction of such a linear form to \mathcal{C}_K(\mathbf{R}^n) is clearly continuous (hence δ is a Radon measure), of norm equal to 1:

 \sup \left\{ | f(0)  |:  \, \| f\|_\infty =1 \right\} =1


We can now see δ as a distribution of order 0:

\delta: \mathcal{D} \rightarrow \mathbf{R} , \, \phi\rightarrow \phi(0) is a linear form such that:

\forall K \text{ compact}:\, | \delta(\phi)| \leqslant \| \phi \|_0 \quad (\phi \in \mathcal{D}_K  )

(Recall that \|.\|_0 denotes the supremum norm in the distribution context)

Applying the definition of the support of a distribution, we conclude that \text{spt}\,\delta=\{0\} is compact. Hence \delta is tempered.

Remark: If we put T_H( \phi)\!:=\int H\phi \quad (\phi\in\mathcal{D}), where H denotes the Heaviside function, we obtain

T_H'( \phi)= -T_H( \phi')=-\int H\phi'= -[\phi]_0^\infty = \phi(0)=\delta(\phi)
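This remark is easy to sanity-check numerically. A quick Python sketch (my own illustration; the test function phi = exp(-x^2) and the truncation of the integral at b = 10 are arbitrary choices):

```python
import math

# Check that -∫ H(x) phi'(x) dx = -∫_0^∞ phi'(x) dx equals phi(0),
# i.e. that the distributional derivative of Heaviside acts like delta.

phi  = lambda x: math.exp(-x**2)            # test function
dphi = lambda x: -2 * x * math.exp(-x**2)   # its derivative

def T_H_prime(b=10.0, n=100000):
    """Trapezoidal estimate of -∫_0^b phi'(x) dx (the tail beyond b is negligible)."""
    h = b / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * dphi(x) * h
    return -total

print(T_H_prime(), phi(0))  # both close to 1
```

The quadrature simply reproduces -[\phi]_0^\infty = \phi(0) from the computation above, with the infinite endpoint replaced by a point where phi has already decayed to essentially zero.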


This way we have the chain: Borel->Radon/linear form->distribution.

As a second step, weak-* convergence can be mentioned without loss of clarity.

I suggest mentioning the Radon–Nikodym theorem at the end, together with probability issues. —Preceding unsigned comment added by 83.199.27.48 (talk) 20:22, 4 July 2009 (UTC)

While I do see a few details in the above that could perhaps be mentioned in the article (e.g., that the delta measure is a Borel measure of total variation 1, and that it is a tempered distribution), I still don't see how the article is in any way at odds with the spirit of what you are trying to accomplish. Indeed, the overall structure of measure -> linear form -> distribution is certainly there, which is what you continue to suggest is missing. Perhaps another set of eyes could be helpful here. Sławomir Biały (talk) 20:39, 4 July 2009 (UTC)
indeed it would be —Preceding unsigned comment added by 83.199.27.48 (talk) 21:22, 4 July 2009 (UTC)

