# Wikipedia:Reference desk/Archives/Mathematics/2013 November 7

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

# November 7

## Single versus multiple integral signs

It seems like different authors (including Wikipedia editors) use different conventions for integrals of multivariate functions. For example, a surface integral ${\displaystyle \iint \limits _{S}FdS}$ might be written as ${\displaystyle \int \limits _{S}FdS}$. I personally prefer the convention with multiple integral signs in order to be clear on the dimensionality of the integral, but my physics textbook uses single integral signs for all multivariate and univariate integrals alike. Which one is considered to be most proper?--Jasper Deng (talk) 02:36, 7 November 2013 (UTC)

I think the single integral sign notation is more generally consistent. What will you do if you're integrating in a space of dimension 10? Or infinity, or fractional (I think this makes sense, not sure)?
It's similar to how you use multiple summation signs when explicitly listing the iterator bounds, but over a specified set of indexes you use a single sign. -- Meni Rosenfeld (talk) 05:51, 7 November 2013 (UTC)
Maybe for 50 dimensions, but for small natural numbers of dimensions, I find it most clear to use multiple signs. Before learning vector calculus, the use of single signs for multivariate integrals confused me, as I had perceived them as indefinite integrals (because the book simply listed integrals without domains, for example ${\displaystyle \oint \mathbf {E} \cdot d\mathbf {A} }$ for electric flux). It seems like mathematics textbooks tend to use multiple signs while physics textbooks and articles tend to use single signs.--Jasper Deng (talk) 06:44, 7 November 2013 (UTC)
Our article Multiple integral uses exclusively multiple integral signs, including
${\displaystyle \int \cdots \int _{\mathbf {D} }\;f(x_{1},x_{2},\ldots ,x_{n})\;dx_{1}\!\cdots dx_{n}}$
Maybe the article should mention that a single integral sign is sometimes used instead (but, I think, only when ${\displaystyle dx_{1}\!\cdots dx_{n}}$ is replaced by ${\displaystyle dD}$). Duoduoduo (talk) 15:20, 7 November 2013 (UTC)
• There is really no question of "proper" -- writers are allowed to use whatever notation they find most convenient, as long as it is well-defined and consistent. My personal view is that once the dimension goes above 3, multiple integral signs would be obnoxious, and below 3 it's a matter of personal choice. Looie496 (talk) 15:55, 7 November 2013 (UTC)
• It's sometimes useful to analytically continue integration over d variables to the complex d plane, so a single integration sign is better. Count Iblis (talk) 18:26, 7 November 2013 (UTC)
• It's not just notation unless I'm not comprehending the question. ${\displaystyle \oint }$ is a surface but that surface is created by a line integral that starts and ends in the same place. I guess you could write it as two integrals to define the same surface, but the circle notation I thought had identities that were useful for solving them. Is that not related to the question? --DHeyward (talk) 10:26, 8 November 2013 (UTC)
• Define "created by a line integral". When referring to a surface, ${\displaystyle \oint }$ implies a surface that's closed, at least in the sense that a finite volume is enclosed by it. Line integrals over closed loops don't really differ from those over open curves by much, except for the fact that Green's theorem (and generally, Stokes' theorem) is applicable to it, just like the divergence theorem is applicable to flux over a closed surface.--Jasper Deng (talk) 11:03, 8 November 2013 (UTC)
• I always thought of ${\displaystyle \oint }$ as denoting the perimeter of the surface. So for a 2D region such as a disk in a plane, the double integral of the curl of the field over the region equals the integral of the vector field along the closed contour (Stokes' theorem?). Similarly, it can be extended to a 3D volume and a 2D surface, but the notation denotes a closed surface, not a volume (though they are obviously related). Gauss's law for magnetism, for example, states that the surface of a volume has a net 0 vector field flowing through it (no magnetic monopoles). The surface encloses a volume, but it's an inherently 2-dimensional closed surface with a vector field being integrated over an area. You can still do the volume integral of the divergence of the vector field. Even with identities that make the operations equivalent, it is more natural to express it as a closed line or a closed surface rather than an area or volume. Take, for example, inverse-square-law forces like gravity: it's the flux through an area that is more intuitive to understand. Whence it makes more intuitive sense to me to use a ${\displaystyle \oint }$ operator as opposed to a triple integral over a volume, even if the answers are the same. --DHeyward (talk) 07:51, 9 November 2013 (UTC)
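• The closed-curve identity described above (Green's theorem, the planar case of Stokes' theorem) is easy to check numerically. A minimal sketch in Python, assuming the illustrative field F = (−y, x) on the unit circle (chosen only because its curl is the constant 2, so the disk integral is exactly 2π):

```python
import math

# Circulation of F = (-y, x) around the unit circle, approximated by
# summing F · dr over small arc segments (midpoint evaluation).
n = 100_000
circ = 0.0
for i in range(n):
    t0 = 2 * math.pi * i / n
    t1 = 2 * math.pi * (i + 1) / n
    tm = 0.5 * (t0 + t1)                 # midpoint parameter of the segment
    x, y = math.cos(tm), math.sin(tm)    # point on the circle
    dx = math.cos(t1) - math.cos(t0)     # displacement along the curve
    dy = math.sin(t1) - math.sin(t0)
    circ += (-y) * dx + x * dy           # F · dr

# Green's theorem: circulation = double integral of curl F = 2 over the
# unit disk = 2 * (area pi) = 2*pi
print(circ, 2 * math.pi)   # the two values agree to high precision
```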

## Fundamental theorem of calculus fallacy

For a while, I've been thinking of the following fallacy. The function f is differentiable and continuous everywhere, which should be more than enough for the fundamental theorem to be valid, but it obviously isn't in this case, as follows. Let ${\displaystyle f(x)=x^{2}}$ for -1<x<1 and ${\displaystyle f(x)={\frac {1}{2}}x^{4}+{\frac {1}{2}}}$ elsewhere. The derivative of f is -2 at x=-1 and +2 at x=1, and f is continuous at +/-1. Now here lies the problem. The following should be the same, but they are not.

${\displaystyle \int _{-3}^{4}f(x)dx=\int _{-3}^{4}x^{2}dx={\frac {64}{3}}+9={\frac {91}{3}}}$

${\displaystyle \int _{-3}^{4}f(x)dx=\int _{-3}^{-1}f(x)dx+\int _{-1}^{1}f(x)dx+\int _{1}^{4}f(x)dx=\int _{-3}^{-1}x^{2}dx+\int _{-1}^{1}({\frac {1}{2}}x^{4}+{\frac {1}{2}})dx+\int _{1}^{4}x^{2}dx={\frac {26}{3}}+{\frac {1}{5}}+1+21={\frac {463}{15}}}$

My arithmetic may not be exactly right, but I think my point is made. These two are obviously not equal, despite f satisfying the conditions of the fundamental theorem of calculus. What's wrong with my reasoning?--Jasper Deng (talk) 22:39, 7 November 2013 (UTC)
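As a quick sanity check, the two candidate values can be compared against a direct numeric integration. A sketch in Python, assuming the interval assignment that matches the two computations above (f(x) = x² outside (−1, 1), the quartic piece inside):

```python
def f(x):
    # piecewise integrand as used in the split computation above:
    # x^4/2 + 1/2 on (-1, 1), x^2 elsewhere
    if -1 < x < 1:
        return 0.5 * x**4 + 0.5
    return x**2

# midpoint-rule numeric integration over [-3, 4]
n = 200_000
a, b = -3.0, 4.0
h = (b - a) / n
total = h * sum(f(a + (i + 0.5) * h) for i in range(n))

print(total)   # ≈ 30.8667 = 463/15, not 91/3 ≈ 30.3333
```

So the second (piecewise) computation gives the true value of the integral, and the discrepancy lies in the first one.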

Sorry, your first line (of displayed LaTeX, starting with ${\displaystyle \int _{-3}^{4}f(x)dx}$) is just wrong, and I don't understand why you think it should follow from the fundamental theorem (or anything else). What line of reasoning did you have in mind? --Trovatore (talk) 22:45, 7 November 2013 (UTC)
The fundamental theorem says nothing about whether the function is piecewise or not; it just says the function must be continuous in order to use the theorem to integrate from -3 to 4 (or any piecewise function in general). The antiderivative is defined piecewise (${\displaystyle {\frac {1}{10}}x^{5}+{\frac {1}{2}}x}$ for -1<x<1, ${\displaystyle {\frac {1}{3}}x^{3}}$ elsewhere - although I will note that it does fail to be continuous at the endpoints!), but the theorem only says to evaluate the antiderivative at the endpoints.--Jasper Deng (talk) 22:52, 7 November 2013 (UTC)
No, sorry, ${\displaystyle {\frac {1}{3}}x^{3}}$ is not an antiderivative of f. That's your mistake. --Trovatore (talk) 23:03, 7 November 2013 (UTC)
Then what is? There must exist one by the fundamental theorem.--Jasper Deng (talk) 23:07, 7 November 2013 (UTC)
Any function F such that the derivative of F is defined and equals f, everywhere. Your piecewise function doesn't qualify, because it's not differentiable at ±1. --Trovatore (talk) 23:11, 7 November 2013 (UTC)
I suspect you swapped intervals in the definition and meant to say: Let ${\displaystyle f(x)={\frac {1}{2}}x^{4}+{\frac {1}{2}}}$ for -1<x<1 and ${\displaystyle f(x)=x^{2}}$ elsewhere. That would give more meaning to some of your later expressions, but some things would still be wrong. An antiderivative must by definition be differentiable and therefore continuous, so a piecewise definition must adjust constants in the pieces to make it continuous where the pieces meet, in your case x=-1 and x=1. Here is a valid antiderivative: ${\displaystyle {\frac {1}{3}}x^{3}-{\frac {4}{15}}}$ for x≤-1, ${\displaystyle {\frac {1}{10}}x^{5}+{\frac {1}{2}}x}$ for -1<x<1, ${\displaystyle {\frac {1}{3}}x^{3}+{\frac {4}{15}}}$ for x≥1. All other antiderivatives are this expression plus a constant (the same constant must be used in all 3 pieces). Note the opposite signs in -4/15 and +4/15. They mean your first computation of the integral is off by 4/15 - (-4/15) = 8/15. With the correct antiderivative we get ${\displaystyle \int _{-3}^{4}f(x)dx=\left({\tfrac {1}{3}}\cdot 4^{3}+{\tfrac {4}{15}}\right)-\left({\tfrac {1}{3}}\cdot (-3)^{3}-{\tfrac {4}{15}}\right)={\tfrac {463}{15}}}$. PrimeHunter (talk) 00:58, 8 November 2013 (UTC)
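PrimeHunter's piecewise antiderivative can be checked with exact rational arithmetic. A minimal sketch in Python (branch boundaries at −1 and 1, as above):

```python
from fractions import Fraction as Fr

def F(x):
    # the proposed antiderivative, with constants chosen so the pieces
    # match at x = -1 and x = 1
    x = Fr(x)
    if x <= -1:
        return x**3 / 3 - Fr(4, 15)
    if x < 1:
        return x**5 / 10 + x / 2
    return x**3 / 3 + Fr(4, 15)

# continuity at the junctions: outer pieces agree with the middle piece
assert F(-1) == Fr(-1)**5 / 10 + Fr(-1) / 2   # both equal -3/5
assert F(1) == Fr(1)**5 / 10 + Fr(1) / 2      # both equal 3/5

print(F(4) - F(-3))   # 463/15
```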
The fallacy, while not clearly explained, is this: in a neighborhood of the points ${\displaystyle -3}$ and ${\displaystyle 4}$, an antiderivative for the function f is given by ${\displaystyle F(x)=x^{3}/3}$. So by the fundamental theorem of calculus ${\displaystyle \int _{-3}^{4}f(x)\,dx=F(4)-F(-3)={\frac {91}{3}}}$. The trouble with this argument is that F is not an antiderivative of f on the whole interval ${\displaystyle [-3,4]}$, as required by the fundamental theorem. Another way to think of this is that ${\displaystyle x^{3}/3}$ in a neighborhood of each of the points ${\displaystyle -3}$ and ${\displaystyle 4}$ corresponds to a different antiderivative of the function f, differing by a constant of integration. The fundamental theorem requires that you use the same antiderivative at both end points. Sławomir Biały (talk) 14:49, 8 November 2013 (UTC)
That might still be confusing. To be absolutely clear, if a bit tedious, I'd put it this way:
${\displaystyle f(x)={\begin{cases}x^{2},&{\mbox{if }}x\in (-\infty ,-1]\\{\frac {1}{2}}x^{4}+{\frac {1}{2}},&{\mbox{if }}x\in (-1,1)\\x^{2},&{\mbox{if }}x\in [1,\infty )\end{cases}}}$
Each piece can be separately integrated using the basic rule for ${\displaystyle x^{n}}$:
${\displaystyle F(x)={\begin{cases}{\frac {1}{3}}x^{3}+C_{1},&{\mbox{if }}x\in (-\infty ,-1]\\{\frac {1}{10}}x^{5}+{\frac {1}{2}}x+C_{2},&{\mbox{if }}x\in (-1,1)\\{\frac {1}{3}}x^{3}+C_{3},&{\mbox{if }}x\in [1,\infty )\end{cases}}}$
To apply the second fundamental theorem of calculus to calculate
${\displaystyle \int _{-3}^{4}f(x)dx=F(4)-F(-3)={\frac {91}{3}}+C_{3}-C_{1},}$
we need ${\displaystyle F'(x)=f(x)}$ for all ${\displaystyle x\in [-3,4]}$. The only places where this might not hold are ${\displaystyle x=-1}$ and ${\displaystyle x=1}$. We need ${\displaystyle F(x)}$ to be differentiable, and thus continuous, at these points, so we need to have
{\displaystyle {\begin{aligned}{\frac {1}{3}}(-1)^{3}+C_{1}&={\frac {1}{10}}(-1)^{5}+{\frac {1}{2}}(-1)+C_{2}\\{\frac {1}{3}}1^{3}+C_{3}&={\frac {1}{10}}1^{5}+{\frac {1}{2}}1+C_{2}\end{aligned}}}
which leads to ${\displaystyle C_{2}=C_{1}+{\frac {4}{15}}}$ and ${\displaystyle C_{3}=C_{1}+{\frac {8}{15}}}$. This yields
${\displaystyle \int _{-3}^{4}f(x)dx=F(4)-F(-3)={\frac {91}{3}}+C_{3}-C_{1}={\frac {463}{15}},}$
which matches Jasper's other calculation. Indeed, the only difference between these calculations is care with the constants; the original is fine except that it is missing the ${\displaystyle C_{3}-C_{1}}$ term. When dealing with integration constants, it is generally not a good idea to assume that you can just pick one arbitrarily (e.g. C=0, as Jasper did originally), because other considerations may constrain your choices. Here, continuity constrains you: you can pick one of the constants arbitrarily, but not all three of them! Differential equations are another area where being careless with the constants will cause you grief. -- 212.149.196.26 (talk) 23:02, 8 November 2013 (UTC)
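The constant-matching step above can also be done mechanically. A sketch in Python with exact fractions, taking C1 = 0 as the one free choice (only the differences between the constants matter):

```python
from fractions import Fraction as Fr

C1 = Fr(0)  # free choice; any value gives the same C2 - C1 and C3 - C1

# continuity at x = -1:  (1/3)(-1)^3 + C1 = (1/10)(-1)^5 + (1/2)(-1) + C2
C2 = Fr(-1, 3) + C1 - (Fr(-1, 10) + Fr(-1, 2))

# continuity at x = 1:   (1/3)(1)^3 + C3 = (1/10)(1)^5 + (1/2)(1) + C2
C3 = Fr(1, 10) + Fr(1, 2) + C2 - Fr(1, 3)

print(C2 - C1, C3 - C1)      # 4/15 8/15
print(Fr(91, 3) + C3 - C1)   # 463/15
```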