# Talk:Convolution/Archive 1


## Introduction

I think the introduction to this article needs cleaning up, as 'A convolution is a kind of very general moving average' is vague and doesn't offer any helpful insights.

I agree, and changed the intro. Now it is more complicated, but much more objective. Also replaced the "fixed filter impulse response", which is a narrow use, with LTI system which is broader. --D1ma5ad (talk) 22:52, 2 March 2008 (UTC)
Not sure what you mean. What LTI system can not be described as a filter? Dicklyon (talk) 03:20, 3 March 2008 (UTC)
I was thinking about a control system. I guess you could classify it as a filter, but no other article I could find describes it that way. If you intend to switch back to filter, at least wikify it to point to Filter#Signal_processing.
--D1ma5ad (talk) 12:57, 7 March 2008 (UTC)

I originally wrote:

• In electrical engineering, the output of a linear system is the convolution of the input with the system's response to an impulse.

Akella changed it to:

• In Systems Science, the output of a linear system is the convolution of the input with the system's response to an impulse.

I have a problem with that, because I know of no such discipline as Systems Science, and neither, at this point, does Wikipedia. I know of "system analysis," but as a rather imprecisely defined term that once may have had an engineering meaning, and then came to mean something like a high-ranking computer programmer...

I realize that the scope is much broader than electrical engineering, but I was trying to list some contexts, that might be familiar to readers, in which convolution makes an appearance.

Also, in electrical engineering, the phrase "linear system" is often used to refer to precisely the kind of system described by a one-dimensional convolution integral. Whereas, in other fields, such as, say, Linear algebra, the scope of the word "linear system" is much broader, so you couldn't just say "linear system" and leave it at that.

So, I've changed it to:

• In electrical engineering and other disciplines, the output of a linear system is the convolution of the input with the system's response to an impulse

Dpbsmith 23:58, 26 Feb 2004 (UTC)

It might be worth noting that the university I'm at has a School of Systems Engineering, which includes departments of Electrical Engineering and Cybernetics (as well as Computer Science). Perhaps the term we're after is somewhere amongst that lot. - IMSoP 00:34, 27 Feb 2004 (UTC)
I'm certainly not going to get into an edit war over it. I do think electrical engineering should be listed first, for the very personal POV reason that it's where I first encountered convolutions, and that any other disciplines that are mentioned should be ones that are a) readily recognizable to a reader, and b) ones in which any graduate with a degree in that discipline would instantly recognize the word "convolution" and know what it meant.... Dpbsmith 11:53, 27 Feb 2004 (UTC)
How about "In disciplines such as Electrical Engineering and Cybernetics...", just to be a bit less vague than "and other disciplines"? - IMSoP 18:11, 27 Feb 2004 (UTC)
Your call. Your edit. "Be bold." Since I think it's OK as is, I won't change it myself, but I certainly wouldn't revert it if you change it. In what course did you encounter the convolution integral, and with what discipline do you associate it? Dpbsmith 00:44, 28 Feb 2004 (UTC)
I believe that the current version ("In mathematics and, in particular, functional analysis, convolution is a mathematical operator ...") gives a much broader sense of what convolution is. I also came to know convolution in an electrical engineering class in which it is used a lot, but it is, essentially, a mathematical operation! It is like saying that the Laplace transform belongs to electronics or control engineering because it is used there extensively. Just as there was no electronics when Laplace lived, I believe that convolution predates electrical engineering[citation needed]. --D1ma5ad (talk) 22:00, 2 March 2008 (UTC)

## Constant Factors

Does the ${\displaystyle {\sqrt {2\pi }}}$ factor in the convolution theorem apply where Laplace transforms are concerned? I can see where it comes in for Fourier transforms, but not Laplace transforms. --Glengarry 01:34, 15 Jul 2004 (UTC)

Constant factors like this only depend upon the normalization convention used for the transform in question. (There are normalizations of the Fourier transform, for example, where the constant is unity.) —Steven G. Johnson 04:56, Jul 15, 2004 (UTC)
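The convention-dependence is easy to see numerically. numpy's DFT uses a convention with no factor on the forward transform, under which the (circular) convolution theorem holds with a constant of exactly 1 and no ${\displaystyle {\sqrt {2\pi }}}$ appears; the sequences below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(16)
g = rng.standard_normal(16)
n = len(f)

# Circular convolution computed straight from the definition
circ = np.array([sum(f[k] * g[(m - k) % n] for k in range(n))
                 for m in range(n)])

# Under numpy's unnormalized-DFT convention, the convolution theorem
# holds with constant 1: no sqrt(2*pi) factor is needed
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(circ, via_fft)
```

A convention that splits a ${\displaystyle 1/{\sqrt {2\pi }}}$ between forward and inverse transforms would reintroduce the constant in the theorem.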

## Etymology

Anybody know where the term "convolution" comes from? It'd be nice to add this bit of historical trivia.

It is a translation of the German word "Faltung," which may also be translated as "wrinkle." I'm not certain, but it is possible that the concept originated in the German-speaking world. Lovibond 15:24, 6 June 2007 (UTC)

## Merging from PlanetMath

The PlanetMath has a nice GFDL article on convolution, see http://planetmath.org/?op=getobj&from=objects&id=2790 Anybody willing to merge that stuff? Oleg Alexandrov (talk) 02:17, 24 November 2005 (UTC)

## Broken link

Hi - The link to 'Convolution' on Planet Math in the external links section seems to be broken. --anon

Now it works. I guess the PlanetMath website was down or something. Oleg Alexandrov (talk) 22:42, 6 January 2006 (UTC)

## Risk theory

In risk theory, the distribution of the sum of n i.i.d. random variables is found by convolution.
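A discrete sketch of this point, using the PMF of a fair die as a stand-in for the i.i.d. variables (the numbers are purely illustrative):

```python
import numpy as np

die = np.full(6, 1/6)            # PMF of a fair die, outcomes 1..6

# Distribution of the sum of i.i.d. rolls = convolution of the PMFs
two = np.convolve(die, die)      # sums 2..12 (index 0 <-> sum 2)
three = np.convolve(two, die)    # sums 3..18

assert np.isclose(two.sum(), 1.0)
assert np.isclose(two[5], 6/36)  # P(sum = 7), the mode of two dice
```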

## multi-dimensional convolution

For things like image processing we have 2d convolution ${\displaystyle \mathrm {blurry} (u,v)=\int \!\!\int f(\upsilon ,\nu )\,g(u-\upsilon ,v-\nu )\,d\upsilon \,d\nu }$. Could someone who knows about this stuff explain how this works, and the rules for separating a 2D convolution into 1D convolutions (I think it's the same as the theorem that allows a 2D FT to be composed of two 1D FTs?) --njh 05:25, 19 April 2006 (UTC)
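On the separation question: when the 2-D kernel is an outer product of two 1-D kernels, the 2-D convolution can indeed be done as two 1-D passes, analogous to the separable 2-D Fourier transform. A small numpy sketch (the image and kernel values are made up for illustration):

```python
import numpy as np

def conv2d_full(image, kernel):
    """Direct 2-D 'full' convolution via the double sum in the definition."""
    H, W = image.shape
    h, w = kernel.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(H):
        for j in range(W):
            out[i:i + h, j:j + w] += image[i, j] * kernel
    return out

rng = np.random.default_rng(1)
image = rng.standard_normal((8, 8))
k1 = np.array([1.0, 2.0, 1.0])   # a 1-D smoothing kernel
k2 = np.outer(k1, k1)            # separable 2-D kernel

direct = conv2d_full(image, k2)

# Separable path: convolve every row with k1, then every column
rows = np.apply_along_axis(lambda r: np.convolve(r, k1), 1, image)
both = np.apply_along_axis(lambda c: np.convolve(c, k1), 0, rows)

assert np.allclose(direct, both)
```

For an n×n kernel this replaces n² multiplies per pixel with 2n, which is why separable blurs are popular in image processing.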

## Convolution kernel

Convolution kernel points here. This article should define it. - 72.58.19.66 05:55, 23 May 2006 (UTC)

shouldn't that be rather point to / be defined in integral transform? — MFH:Talk 13:49, 2 June 2006 (UTC)

## Convolution of measures

The notation μ×ν (used in section Convolution of measures) is not explained here, neither in Borel measure or any other (more or less "directly") linked page. I am not used to work on this subject, but from the background I have, I suspect it should rather be denoted as a tensor product ${\displaystyle \mu \otimes \nu }$. (The only definitions I know for a "cross product" are the vector cross product and the Cartesian product of sets, but not of maps.) — MFH:Talk 13:49, 2 June 2006 (UTC)
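For what it's worth, in most measure-theory texts ${\displaystyle \mu \times \nu }$ (also written ${\displaystyle \mu \otimes \nu }$) denotes the product measure, and the convolution of measures is then its pushforward under addition; this is the standard definition, though I can't confirm it is what the article's author intended:

```latex
(\mu * \nu)(A)
  = (\mu \times \nu)\bigl(\{(x,y) : x + y \in A\}\bigr)
  = \int \!\! \int \mathbf{1}_A(x + y)\, d\mu(x)\, d\nu(y).
```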

## Convolution Matrices needed

Please add a page listing different image processing convolution kernel matrices since there seems to be no good general reference for them on the web. Include the matrix values as well as a description of what the convolution kernel does.

It might be nice to see a discussion of convergence issues. E.g., the convolution operator F(f) := f*g is a linear operator on L^1 if g is in L^1. For what other spaces does that hold?
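One standard answer is Young's inequality for convolutions, stated here in its usual textbook form:

```latex
\|f * g\|_{r} \;\le\; \|f\|_{p}\, \|g\|_{q},
\qquad \frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r},
\qquad 1 \le p, q, r \le \infty,
```

so in particular convolution with a fixed ${\displaystyle g\in L^{1}}$ (take ${\displaystyle q=1}$, ${\displaystyle r=p}$) is a bounded linear operator on every ${\displaystyle L^{p}}$, ${\displaystyle 1\le p\le \infty }$.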

## Integration range

You wrote: "The integration range depends on the domain on which the functions are defined." Could you please specify the endpoints of the integration interval more precisely?

For example, if f has a normal distribution and g has a uniform distribution on the interval (a,b), what exactly are the endpoints of the integration interval?

Or if f has a normal distribution and g(x) has the distribution a*(b+1)*(1-a*x)**b on the interval (0,1/a), what exactly are the endpoints of the integration interval?

Or if f has a normal distribution and g(x) has the exponential distribution λ*e**(-λ*x) on the interval (0,infinity), what exactly are the endpoints of the integration interval?

Can you give a general rule?
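One rule of thumb (an observation, not a quote from the article): the integrand is nonzero only where both factors are, so the support of f*g is the sum of the supports; since the normal density never vanishes, in each of the examples above the integral runs over the full support of the other factor. A discretised numpy sketch with made-up index ranges:

```python
import numpy as np

# f "supported" on indices 2..5, g on indices 1..3 (illustrative ranges)
f = np.zeros(8); f[2:6] = 1.0
g = np.zeros(8); g[1:4] = 1.0

fg = np.convolve(f, g)           # 'full' mode, length 8 + 8 - 1
support = np.nonzero(fg)[0]

# Supports add under convolution: [2,5] + [1,3] = [3,8]
assert support.min() == 2 + 1
assert support.max() == 5 + 3
```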

## Fast Convolution

I added some bare-bones information about fast convolution. If someone who knows more about it would add some juice, that'd be nice. -Mojodaddy

## Linear Convolution Versus Circular Convolution

There's a very thin article on circular convolution; I think there should be a section here comparing linear to circular. -Mojodaddy
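A minimal numpy comparison of the two (the sequences are arbitrary examples), showing that zero-padding turns circular convolution into linear convolution:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 1.0])

linear = np.convolve(x, h)       # length 4 + 3 - 1 = 6

# Circular convolution via the DFT wraps around modulo N = 4
circular = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 4)).real

# Zero-padding to >= 6 samples removes the wrap-around, so the
# circular result coincides with the linear one
N = 6
padded = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

assert np.allclose(padded, linear)
assert not np.allclose(circular, linear[:4])   # tail has wrapped
```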

## Definition

The definition is not satisfying. What exactly are f and g? Something like ${\displaystyle f,g:D\to {\mathbb {C}}}$? --Bfrey 14:58, 10 May 2006 (UTC)

As of now, the introduction defines f and g as functions. I don't believe their domain needs to be specified, but a more formal definition could be included if you think it needed. As for me, that notation is not very understandable; regular English is better.
--D1ma5ad (talk) 21:43, 2 March 2008 (UTC)

As far as I can see, the limits in the integral after the change of variable are wrong. Shouldn't the second integral be:

${\displaystyle (f*g)(t)=\int _{t-a}^{t-b}f(t-\tau )g(\tau )\,d\tau }$

Or am I going mad? Oli Filth 22:38, 1 May 2007 (UTC)

I believe you are indeed going mad =] . I also believe that the definition with ${\displaystyle a,b}$ as limits is in fact less general than with ${\displaystyle -\infty ,+\infty }$. Let me explain:
If ${\displaystyle g(t)=0\,|t<t_{1},\,t>t_{2}}$ and ${\displaystyle f(t)=0\,|t<t_{3},\,t>t_{4}}$ then
${\displaystyle \,a=t_{1}-t_{4}}$ and ${\displaystyle \,b=t_{2}+t_{3}}$ since ${\displaystyle \,f(t-\tau )g(\tau )=0\,|\tau <a,\,\tau >b}$
--D1ma5ad (talk) 21:04, 2 March 2008 (UTC)
Perhaps we're talking at cross purposes here? When I wrote this, I was referring to the article in its then-current state. The change of variables the article was then referring to would also require an accompanying change in the integral limits. Oli Filth(talk) 21:36, 2 March 2008 (UTC)

## Simple practical application

I'm moving this off the article because it is licensed under creative commons attribution noncommercial share-alike 2.5. Noncommercial material cannot be used in articles to allow reuse of Wikipedia on commercial sites like answers.com (see copyright problems). --h2g2bob (talk) 11:28, 20 May 2007 (UTC)

Simple practical application

(transcribed from prof. Arthur Mattuck video lecture available @ MIT's OCW: 18.03 Differential Equations, Spring 2006 - Lecture 21: http://ocw.mit.edu/OcwWeb/Mathematics/18-03Spring-2006/CourseHome/index.htm)

Problem:

A nuclear power plant dumps radioactive waste at a rate of f(t) (kg/year).

The approximate amount of radioactive material dumped in the interval ${\displaystyle [t_{i},t_{i+1}]}$ is ${\displaystyle f(t_{i})\Delta t}$.

Starting at ${\displaystyle t_{0}=0}$, what is the amount of radioactive material present in the pile at time ${\displaystyle t}$?

(As more radioactive material is getting dumped, the existing material decays. The problem is focused only on the material that is still radioactive at time ${\displaystyle t}$.)

Solution:

We model the radioactive decay with a simple exponential: if the initial amount is ${\displaystyle A_{0}}$, then at time ${\displaystyle t}$ there is ${\displaystyle A_{0}e^{-kt}}$ left; per unit of initial material, ${\displaystyle g(t)=e^{-kt}}$.

(${\displaystyle k}$ depends on the type of radioactive material used and for simplicity it is assumed there is only one type of material being dumped - thus ${\displaystyle k}$ is constant).

Replace ${\displaystyle t}$ above with the dumping-time variable ${\displaystyle u}$ and we have:

So the amount dumped around time ${\displaystyle u_{i}}$ is ${\displaystyle \approx f(u_{i})\Delta u}$. How much of this material is still left at time ${\displaystyle t}$?

Amount left at time ${\displaystyle t}$ from that contribution is ${\displaystyle \approx f(u_{i})\Delta u\,g(t-u_{i})=f(u_{i})\Delta u\;e^{-k(t-u_{i})}}$ .

Total amount left at time ${\displaystyle t}$ (starting at ${\displaystyle u_{1}}$): ${\displaystyle =\sum _{i=1}^{n}f(u_{i})e^{-k(t-u_{i})}\Delta u}$

Let ${\displaystyle \Delta u\rightarrow 0}$ then the sum becomes an integral: ${\displaystyle =\int _{0}^{t}f(u)g(t-u)\,du=\int _{0}^{t}f(u)e^{-k(t-u)}\,du=f(t)*e^{-kt}}$
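As a numeric sanity check on the final integral, take a constant dump rate ${\displaystyle f(u)=r}$ (the values of r, k and t below are made up), for which the integral has the closed form ${\displaystyle r(1-e^{-kt})/k}$:

```python
import numpy as np

k = 0.3      # decay constant, 1/years (assumed for illustration)
r = 5.0      # constant dumping rate f(u) = r, kg/year (assumed)
t = 10.0     # years

# Trapezoid rule for the convolution integral  int_0^t f(u) e^{-k(t-u)} du
u = np.linspace(0.0, t, 100_001)
y = r * np.exp(-k * (t - u))
du = u[1] - u[0]
pile = (0.5 * (y[:-1] + y[1:])).sum() * du

closed = r * (1.0 - np.exp(-k * t)) / k   # closed form for constant f

assert np.isclose(pile, closed, rtol=1e-6)
```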

Radioactive decay of actual nuclear waste is more complicated in several ways. A radioactive element decays into a different element that typically is itself radioactive. The daughter element may emit a different kind of radiation -- e.g. alpha particles (helium nuclei), beta particles (electrons), gamma rays (photons), or neutrons. If you concentrate on radioactivity of just one kind, you could calculate the radioactivity at some future time with a convolution integral, but with a more complicated ${\displaystyle g(t)}$. In addition, the original nuclear waste probably included more than one kind of radioactive element.

After discussion with Mitrut (talk · contribs), it probably is ok to use. --h2g2bob (talk) 09:22, 26 May 2007 (UTC)

## Relationship to Laplace transform

Should this be in the article?

${\displaystyle {\mathcal {L}}\left\{x(t)\right\}{\mathcal {L}}\left\{y(t)\right\}={\mathcal {L}}\left\{(x*y)(t)\right\}}$

--h2g2bob (talk) 21:28, 3 June 2007 (UTC)

This is just the convolution theorem, which is already mentioned: Convolution#Convolution_theorem. Oli Filth 21:30, 3 June 2007 (UTC)
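For completeness, the one-sided Laplace case follows from Fubini's theorem and the substitution ${\displaystyle v=t-\tau }$ (a standard derivation, sketched here):

```latex
\mathcal{L}\{(x*y)(t)\}(s)
  = \int_0^\infty e^{-st} \int_0^t x(\tau)\, y(t-\tau)\, d\tau\, dt
  = \int_0^\infty x(\tau)\, e^{-s\tau} \int_0^\infty e^{-sv}\, y(v)\, dv\, d\tau
  = \mathcal{L}\{x(t)\}(s)\; \mathcal{L}\{y(t)\}(s).
```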

## "Convolution power" stuff

I've removed the convolution power stuff again, because it's still not defined adequately, and even if it was, I don't believe it belongs in this article. See my comments below:

${\displaystyle f^{*(n)}=\underbrace {f*f*f*...*f*f} _{n}\,}$
${\displaystyle g*f^{*(n)}=\delta \Rightarrow f^{*(-n)}=g\,}$


The meaning of ${\displaystyle \ f^{*(-n)}}$ hasn't been defined. Or is this supposed to be the definition?

${\displaystyle g^{*(n)}=f\Rightarrow {\sqrt[{*(n)}]{f}}=g\,}$


The meaning of ${\displaystyle \ {\sqrt[{*(n)}]{f}}}$ hasn't been defined. Or is this supposed to be the definition?

${\displaystyle e^{*(f)}=\sum _{k=0}^{\infty }{\frac {f^{*(k)}}{k!}}\,}$
${\displaystyle e^{*(g)}=f\Rightarrow (*\ln ){f}=g\,}$


The meaning of ${\displaystyle \ (*\ln ){f}}$ hasn't been defined. Or is this supposed to be the definition?

Besides which, the notation is now inconsistent. In ${\displaystyle \ f^{*(n)}}$, the superscript term indicates how many times to convolve ${\displaystyle \ f}$ by itself. ${\displaystyle \ e^{*(f)}}$ uses the same notation (i.e. ${\displaystyle \ x^{*(y)}}$), but means something completely different. Do you have a reference that uses this notation?

More importantly, I think that "convolution power" as portrayed here is actually defined via the convolution theorem, i.e.

${\displaystyle \ f^{*(n)}={\mathcal {F}}^{-1}{\big \{}{\mathcal {F}}\{f\}^{n}{\big \}}}$

In which case, all of these aren't basic properties of convolution at all, merely a set of definitions. Therefore, it should belong in its own article, or perhaps in the convolution theorem article. Oli Filth 17:47, 4 July 2007 (UTC)
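That convolution-theorem definition is easy to exercise numerically; with enough zero-padding, raising the DFT to the n-th power reproduces the n-fold linear convolution (the sequence f below is arbitrary):

```python
import numpy as np

f = np.array([0.25, 0.5, 0.25])
n = 3

# n-fold convolution power by repeated np.convolve
power = f.copy()
for _ in range(n - 1):
    power = np.convolve(power, f)

# Same thing through the convolution theorem: zero-pad to the length
# of the n-fold linear result so no circular wrap-around occurs
N = n * (len(f) - 1) + 1
via_fft = np.fft.ifft(np.fft.fft(f, N) ** n).real

assert np.allclose(power, via_fft)
```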

I think the Taylor series property is an important property of the convolution. You may write a new article about the Convolution power if you would like to, but for now I have put it back. I surely think the Taylor series property should be here.

Bombshell 18:05, 4 July 2007 (UTC)

This isn't a "property" of convolution. It's merely a set of definitions, and so shouldn't be in the "Properties" section. Can you point to any of them which is an actual property of convolution?
${\displaystyle (1+f)^{n}=\sum _{k=0}^{\infty }{k \choose n}f^{k}\Rightarrow (1+f)^{*(n)}=\sum _{k=0}^{\infty }{k \choose n}f^{*(k)}\,}$

Your binomial expansion isn't correct; it should be ${\displaystyle \ \sum _{k=0}^{n}{n \choose k}f^{k}}$.
But let's look at this. As an example, take ${\displaystyle \ f(t)=\mathrm {rect} (t)}$, the rect function (an ${\displaystyle \ L^{1}(\mathbb {R} )}$ function) and ${\displaystyle \ n=2}$. We get:
{\displaystyle {\begin{aligned}(1+f(t))^{*(n)}&=(1+\mathrm {rect} (t))^{*(2)}\\&=(1+\mathrm {rect} (t))*(1+\mathrm {rect} (t))\\&=\int _{-\infty }^{+\infty }1d\nu +2+\mathrm {tri} (t)\\&=\infty \end{aligned}}}
where ${\displaystyle \mathrm {tri} (t)}$ is the triangular function.
On the other hand, we have:
{\displaystyle {\begin{aligned}\sum _{k=0}^{n}{n \choose k}f^{k}&=\mathrm {rect} (t)^{*(0)}+2\mathrm {rect} (t)^{*(1)}+\mathrm {rect} (t)^{*(2)}\\&=\delta (t)+2\mathrm {rect} (t)+\mathrm {tri} (t)\end{aligned}}}
Note that you haven't defined ${\displaystyle \ f^{*(0)}}$, but I'm assuming that it follows from the convolution theorem definition.
Clearly these aren't equal. You could argue that of course it's not going to work, because ${\displaystyle \ (1+\mathrm {rect} (t))\notin L^{1}(\mathbb {R} )}$. But even if we set ${\displaystyle \ f(t)=(\mathrm {rect} (t)-1)}$ to solve this problem, the right-hand side goes to infinity!
Off the top of my head, I can't think of an ${\displaystyle \ f}$ which satisfies both sides.
${\displaystyle e^{f}=\sum _{k=0}^{\infty }{\frac {f^{k}}{k!}}\Rightarrow e^{*(f)}=\sum _{k=0}^{\infty }{\frac {f^{*(k)}}{k!}}\,}$

Only because you've defined it as such above! This isn't a property.
${\displaystyle (1+f)^{n}=\sum _{k=0}^{\infty }f^{k}\Rightarrow (1+f)^{*(n)}=\sum _{k=0}^{\infty }f^{*(k)}\,}$

What does this even mean?
Can you cite a reference which actually uses these "equalities"? Oli Filth 19:06, 4 July 2007 (UTC)

I'm sorry, but I made some mistakes in the equations; of course, one needs to replace 1 with the Dirac delta.

Now the equations are exact: for your example:

we have on the one hand:

{\displaystyle {\begin{aligned}\sum _{k=0}^{n}{n \choose k}f^{*(k)}&=\mathrm {rect} (t)^{*(0)}+2\mathrm {rect} (t)^{*(1)}+\mathrm {rect} (t)^{*(2)}\\&=\delta (t)+2\mathrm {rect} (t)+\mathrm {tri} (t)\end{aligned}}}

and on the other:

{\displaystyle {\begin{aligned}(\delta +f(t))^{*(n)}&=(\delta +\mathrm {rect} (t))^{*(2)}\\&=(\delta +\mathrm {rect} (t))*(\delta +\mathrm {rect} (t))\\&=\delta (t)+2\mathrm {rect} (t)+\mathrm {tri} (t)\\\end{aligned}}}

The last equation becomes:

${\displaystyle (1-f)^{-1}=\sum _{k=0}^{\infty }f^{k}\Rightarrow (\delta -f)^{*(-1)}=\sum _{k=0}^{\infty }f^{*(k)}\,}$


I've used these properties in the study of renewal theory, where they are used to make the solution of the renewal equation "easier":

E.g.: renewal equation: G = g + G * F

Thus: ${\displaystyle G*(\delta -F)=g\,}$ and therefore ${\displaystyle G=g*(\delta -F)^{*(-1)}=g*\sum _{k=0}^{\infty }F^{*(k)}\,}$
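The series solution ${\displaystyle G=g*\sum _{k=0}^{\infty }F^{*(k)}}$ of the renewal equation G = g + G * F can be checked numerically on a discretised example; the inter-arrival PMF F and the grid size below are invented for illustration:

```python
import numpy as np

N = 64
F = np.zeros(N); F[3] = 0.4; F[5] = 0.6   # hypothetical inter-arrival PMF
g = np.zeros(N); g[0] = 1.0               # hypothetical forcing term

# Truncated series solution  G = sum_k  g * F^{*(k)}
G = np.zeros(N)
term = g.copy()
for _ in range(2 * N):        # terms vanish on [0, N) once k*3 >= N
    G += term
    term = np.convolve(term, F)[:N]

# Substitute back into the renewal equation  G = g + G * F
assert np.allclose(G, g + np.convolve(G, F)[:N])
```

The truncation is harmless here because convolving with F only moves mass to higher indices, so each neglected term is zero on the retained grid.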

I admit I made a lot of mistakes the first time, but I still think the Taylor series property is very useful.

Bombshell 22:14, 4 July 2007 (UTC)

Ok, but now your corrected implications (1st and 3rd in the Convolution#Taylor series section) are just direct applications of the convolution theorem. So they're not particular to Taylor series; this would work with any basic identity. The 2nd implication is still pointless, because it's still just a repetition of the definition given earlier. I'm too tired to think about the 4th implication, but as far as I can see, it still doesn't say anything useful, because it's entirely dependent on the definition of ${\displaystyle \ (*\ln )(f)}$.
There's also still the problem of inconsistent notation, and a lack of any references for this stuff.
Therefore, I'm going to remove this once again, because it's completely out of place in this article. If you want to continue with it, please create the Convolution power article. That way, it can be edited and fixed without affecting the convolution article.
Oli Filth 22:59, 4 July 2007 (UTC)
I may be wrong here, but I seem to remember that the convolution of two functions f and g is as differentiable as the more differentiable of f and g, which seems to preclude f ∗ g ever being the Dirac distribution δ. If so, this kills the proposed definition of negative convolution power f∗(−n). Sullivan.t.j 23:49, 4 July 2007 (UTC)
I think you're wrong, because as I just demonstrated, in renewal theory, one computes:

${\displaystyle F^{*(-1)}=\sum _{k=0}^{\infty }(\delta -F)^{*(k)}\,}$

You can prove it as follows:

${\displaystyle F*\sum _{k=0}^{\infty }(\delta -F)^{*(k)}={\mathcal {F}}^{-1}{\Big (}{\mathcal {F}}(F)\cdot {\mathcal {F}}{\Big (}\sum _{k=0}^{\infty }(\delta -F)^{*(k)}{\Big )}{\Big )}\,}$
${\displaystyle ={\mathcal {F}}^{-1}{\Big (}{\mathcal {F}}(F)\cdot \sum _{k=0}^{\infty }(1-{\mathcal {F}}(F))^{k}{\Big )}\,}$ (convolution theorem, ${\displaystyle {\mathcal {F}}(\delta )=1}$)
${\displaystyle ={\mathcal {F}}^{-1}{\Big (}{\mathcal {F}}(F)\cdot {\mathcal {F}}(F)^{-1}{\Big )}\,}$ (geometric series)
${\displaystyle =\delta }$

which proves it

I'll start a new article Convolution power.

You're welcome to enhance it

Bombshell 16:23, 5 July 2007 (UTC)

## Proof of Associativity

Is there maybe a way that a proof of commutativity could be added, using the simple substitution z = x - y? I'm sorry, but I have no experience with LaTeX. Ameeya 19:07, 21 July 2007 (UTC)
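The requested substitution argument, sketched (using ${\displaystyle z=t-\tau }$; the sign from ${\displaystyle d\tau =-dz}$ cancels against the flipped limits of integration):

```latex
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau
           = \int_{-\infty}^{\infty} f(t - z)\, g(z)\, dz
           = (g * f)(t).
```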