WikiProject Statistics (Rated Start-class, Mid-importance)
WikiProject Mathematics (Rated Start-class, Mid-importance)
Why can we use quadratic variation to test differentiability?

From Advanced Stochastic Processes by David Gamarnik: "Even though Brownian motion is nowhere differentiable and has unbounded total variation, it turns out that it has bounded quadratic variation."
The theorem might be correct, but what is it useful for?
A differentiable function needn't have a continuous derivative:
I don't follow your argument. If a function is everywhere differentiable, then it has vanishing quadratic variation. If a function has non-vanishing quadratic variation, then it is not everywhere differentiable. The theorem says neither more nor less than that. The function you've exhibited has a continuous derivative everywhere and therefore has vanishing quadratic variation. There are plenty of functions, such as |x|, which have discontinuities in their derivatives and nonetheless have vanishing quadratic variation. (In fact, every continuous function with a piecewise continuous derivative has vanishing q.v., whereas any jump discontinuity gives non-vanishing q.v., as do nowhere differentiable processes like Brownian motion.) I have removed the dubious tags, but I also removed the very imprecise statement from the intro. –Joke 21:39, 3 January 2007 (UTC)
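A quick numerical check of the claim about |x| (a sketch of my own; the uniform partition and the helper name quad_var are my choices, not anything from this discussion): the sum of squared increments over finer and finer partitions shrinks toward zero even though |x| is not differentiable at 0.

```python
import numpy as np

def quad_var(f, a, b, n):
    """Sum of squared increments of f over a uniform partition of [a, b] with n subintervals."""
    t = np.linspace(a, b, n + 1)
    v = f(t)
    return float(np.sum(np.diff(v) ** 2))

# |x| has a kink at 0, yet its quadratic variation still vanishes:
# with mesh 2/n each squared increment is ~(2/n)^2, so the sum is ~4/n -> 0.
for n in (10, 100, 1000, 10000):
    print(n, quad_var(np.abs, -1.0, 1.0, n))
```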
- In the meantime I learned that a differentiable function needn't be of bounded variation (like ). So the theorem is interesting indeed. The function I've exhibited above () is a counterexample to the following statement in the proof: "Notice that |f'(t)| is continuous". The example is differentiable everywhere but its derivative is not continuous. Theowoll 18:58, 13 January 2007 (UTC)
definition vs. application
Most of the content in this article concerns applications of quadratic variation. However, its definition is not quite right. It was said that the quadratic variation should be defined as the sup over all partitions, in which case it is infinite for Brownian motion. See Richard Durrett (1996), page 7. Jackzhp (talk) 23:08, 18 May 2009 (UTC)
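For reference, the two definitions being contrasted can be written side by side (a sketch of the standard formulations; the notation is mine, not the article's):

```latex
% Limit-in-probability definition (finite for Brownian motion, with [B]_t = t):
[X]_t \;=\; \operatorname*{p\text{-}lim}_{\|P\|\to 0} \sum_{k=1}^{n} \bigl(X_{t_k} - X_{t_{k-1}}\bigr)^2,
\qquad P = \{0 = t_0 < t_1 < \dots < t_n = t\}.

% Supremum-over-all-partitions definition (a.s. infinite for Brownian motion;
% cf. Durrett 1996, p. 7):
Q_t(X) \;=\; \sup_{P} \sum_{k=1}^{n} \bigl(X_{t_k} - X_{t_{k-1}}\bigr)^2.
```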
The definition says: Its quadratic variation is the process, written as [X]t, defined as
[X]_t = \lim_{\|P\|\to 0} \sum_{k=1}^{n} (X_{t_k} - X_{t_{k-1}})^2
What does this mean, precisely? That for any partition of a given mesh, the probability that the sum of the squared differences over that partition differs from the limit by more than epsilon is at most some given function of the mesh? Is this true, or do we only have almost sure convergence for any particular sequence of partitions with mesh going to zero? Commentor (talk) 13:39, 9 July 2010 (UTC)
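The distinction matters in practice. Here is a simulation sketch (the seed, grid size, and helper name are my own choices, nothing from the discussion): along one fixed refining sequence of partitions, the sum of squared increments of a simulated Brownian path settles near t, consistent with convergence in probability (and, along this fixed sequence, with almost sure convergence) to [B]_t = t.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
n = 2 ** 18  # number of steps on the finest grid
# Brownian increments on the finest grid; W is the path with W[0] = 0.
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

def squared_increment_sum(path, step):
    """Sum of squared increments over the sub-partition taking every step-th grid point."""
    coarse = path[::step]
    return float(np.sum(np.diff(coarse) ** 2))

# Refining partitions, coarsest first: the values cluster around T = 1.
for step in (4096, 256, 16, 1):
    print(step, squared_increment_sum(W, step))
```

Note that this only probes one sequence of partitions; it says nothing about the sup over all partitions, which is almost surely infinite for Brownian motion.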