# Talk:Integration by substitution

WikiProject Mathematics (Rated C-class, Mid-priority; Field: Analysis)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

## ?

Horrible explanation, incomprehensible to anyone who wants to know, and worthless without those people who'd want to know (otherwise, and obviously, they wouldn't be here to begin with).

Please be more specific. What do you find unclear or confusing? Daniel Sank 16:15, 22 September 2007 (UTC)

I'd like to agree with the above opinion. In the intro, I think this should be taken out: "Using the fundamental theorem of calculus often requires finding an antiderivative. For this and other reasons, integration by substitution is an important tool for mathematicians."

In the first main section, "Let something-or-other be an interval and something-or-other be a continuously differentiable function. Suppose that something-or-other is a continuous function." That's some pretty intense mathspeak that I don't really understand. And what's Leibniz notation again? Oh yeah, dx/dt = dx'. Or something like that... Irrelevant, almost historical note: "One could view the method of integration by substitution as a major justification of Leibniz's notation for integrals and derivatives." Also, deriving the method from the fundamental theorem of calculus makes my head hurt; all I'm looking for is how to USE u-substitution.

The examples are good, but they skip way too many steps! If we make the substitution u = x2 + 1, we obtain du = 2x dx. I infer that "obtain" means "take the derivative of". After staring at it for a while, I notice that you solved for 1/2 du = x dx and substituted that in. And the limits...ah, you plugged both x's in to the limits, I think. Do you see what I mean? Many people find it difficult to reason out those missing steps. It's true that after the equations, you say "It is important to note that since the lower limit x = 0 was replaced with u = 02 + 1 = 1, and the upper limit x = 2 replaced with u = 22 + 1 = 5, a transformation back into terms of x was unnecessary." However, it would be good to have this kind of explanation beforehand (but leave out the part about transformation back into terms of x for now).
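The specific step the comment asks about (how u = x² + 1 turns the limits 0 and 2 into 1 and 5) can be checked numerically. A small editorial sketch, not article content; the midpoint-rule helper `riemann` is my own:

```python
from math import cos

def riemann(f, a, b, n=100000):
    # Midpoint-rule approximation of the integral of f on [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original integral: integral of x*cos(x^2 + 1) from x = 0 to x = 2.
lhs = riemann(lambda x: x * cos(x * x + 1), 0.0, 2.0)

# After u = x^2 + 1: du = 2x dx, so x dx = du/2, and the limits become
# u = 0^2 + 1 = 1 and u = 2^2 + 1 = 5 (the "plugged both x's in" step).
rhs = 0.5 * riemann(lambda u: cos(u), 1.0, 5.0)

print(abs(lhs - rhs) < 1e-6)
```

Both sides approximate (sin 5 − sin 1)/2, which is why no transformation back into terms of x is needed at the end.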

Oh, and the difference between integrals and antiderivatives is unclear; you assume that readers know the difference which I don't really. — Preceding unsigned comment added by Jojojlj (talkcontribs) 01:40, 6 January 2013 (UTC)

## Integration by substitution

The objective of integration by substitution is to rewrite the integrand in terms of a new variable ${\displaystyle u}$, where ${\displaystyle u=g(x)}$.

Theory

We want to transform the integral from a function of x into a function of u:

${\displaystyle \int _{x=a}^{x=b}f(x)\,dx\,\rightarrow \,\int _{u=c}^{u=d}h(u)\,du}$

Starting with

${\displaystyle u\,=\,g(x)}$
and

${\displaystyle \int _{x=a}^{x=b}f(x)\,dx\,}$
${\displaystyle =\int _{x=a}^{x=b}f(x)\,{\operatorname {d} \!u \over \operatorname {d} \!u}\,dx\,}$   (1)   since ${\displaystyle {\operatorname {d} \!u \over \operatorname {d} \!u}\,=\,1}$
${\displaystyle =\int _{x=a}^{x=b}\left(f(x)\,{\operatorname {d} \!x \over \operatorname {d} \!u}\right)\left({\operatorname {d} \!u \over \operatorname {d} \!x}\right)\,dx\,}$   (2)   since ${\displaystyle {\operatorname {d} \!x \over \operatorname {d} \!u}{\operatorname {d} \!u \over \operatorname {d} \!x}\,=\,{\operatorname {d} \!u \over \operatorname {d} \!u}\,=\,1}$
${\displaystyle =\int _{x=a}^{x=b}\left(f(x)\,{\operatorname {d} \!x \over \operatorname {d} \!u}\right)g'(x)\,dx\,}$   (3)   since ${\displaystyle {\operatorname {d} \!u \over \operatorname {d} \!x}\,=\,g'(x)}$
${\displaystyle =\int _{x=a}^{x=b}h(g(x))\,g'(x)\,dx\,}$   (4)   defining ${\displaystyle h}$ so that ${\displaystyle h(g(x))\,=\,f(x)\,{\operatorname {d} \!x \over \operatorname {d} \!u}}$
${\displaystyle =\int _{x=a}^{x=b}h(u)\,g'(x)\,dx\,}$   (5)   since ${\displaystyle g(x)\,=\,u}$
${\displaystyle =\int _{u=g(a)}^{u=g(b)}h(u)\,du\,}$   (6)   since ${\displaystyle du\,=\,{\operatorname {d} \!u \over \operatorname {d} \!x}dx\,=\,g'(x)\,dx\,}$
${\displaystyle =\int _{u=c}^{u=d}h(u)\,du\,}$   (7)   which is the desired result, with ${\displaystyle c\,=\,g(a)}$ and ${\displaystyle d\,=\,g(b)}$

Procedure

• Calculate ${\displaystyle g'(x)\,=\,{\operatorname {d} \!u \over \operatorname {d} \!x}}$
• Calculate ${\displaystyle h(u)}$ which is ${\displaystyle f(x)\,{\operatorname {d} \!x \over \operatorname {d} \!u}\,=\,{\frac {f(x)}{g'(x)}}}$ and make sure you express the result in terms of the variable u
• Calculate ${\displaystyle c\,=\,g(a)}$
• Calculate ${\displaystyle d\,=\,g(b)}$
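The four-step procedure above can be sketched in Python. This is an editorial illustration only: the function names are my own, and it assumes g is strictly increasing on [a, b] so that it can be inverted numerically (here by simple bisection):

```python
from math import cos

def substitute(f, g, g_prime, a, b):
    """Return (h, c, d) with integral of f on [a, b] equal to the
    integral of h on [c, d] under u = g(x).

    A sketch only: assumes g is strictly increasing on [a, b].
    """
    def g_inverse(u):
        # Invert u = g(x) by bisection on [a, b].
        lo, hi = a, b
        while hi - lo > 1e-12:
            mid = (lo + hi) / 2
            if g(mid) < u:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def h(u):
        x = g_inverse(u)
        # Step 2: h(u) = f(x) dx/du = f(x) / g'(x), expressed via u.
        return f(x) / g_prime(x)

    return h, g(a), g(b)   # steps 3 and 4: c = g(a), d = g(b)

def riemann(fn, lo, hi, n=20000):
    # Midpoint-rule approximation, used only to check the identity.
    step = (hi - lo) / n
    return sum(fn(lo + (i + 0.5) * step) for i in range(n)) * step

# Worked check with f(x) = 2x cos(x^2 + 1) and u = g(x) = x^2 + 1 on [0, 2]:
f = lambda x: 2 * x * cos(x * x + 1)
g = lambda x: x * x + 1
g_prime = lambda x: 2 * x

h, c, d = substitute(f, g, g_prime, 0.0, 2.0)
same = abs(riemann(f, 0.0, 2.0) - riemann(h, c, d)) < 1e-4
print(same)
```

Here h(u) reduces to cos(u) and (c, d) = (1, 5), matching the article's standard example.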

## conditions on f

In most calculus books f is required to be continuous (not just integrable), mainly because their proofs are based on the fundamental theorem of calculus, which requires continuity.
http://eom.springer.de/I/i051740.htm
http://mathworld.wolfram.com/ChangeofVariablesTheorem.html

However, I've seen an abstract generalization (based on Lebesgue integrals) that seems to skip the continuity condition for f (I do not fully understand the terms involved):
http://planetmath.org/encyclopedia/ChangeOfVariablesFormula.html

It would be nice if somebody knowledgeable could confirm or comment on this and modify the article (there should at least be a note on why, or under exactly which conditions, continuity can be dropped for f).

## which phi?

I replaced φ with Φ, since the LaTeX \phi looks like the latter. This may, however, differ depending on which font you are using...

I don't think that's strictly right. Φ is capital phi, φ is not italicized phi, and φ is italicized phi, which I think is the strictly correct one. I'll leave it up to you to revert if you think it should be done. PAR 20:13, 22 Mar 2005 (UTC)

φ and φ are alternate lowercase forms of phi (uppercase Φ). Both are accessible through LaTeX, via \varphi and \phi, respectively. —Caesura(t) 15:19, 16 October 2005 (UTC)

Fine article. To those responsible: thanks. --Christofurio 17:20, Apr 23, 2005 (UTC)

Small nitpick: Why phi? Can't we use g(x)? Kareeser|Talk! 05:53, 29 June 2006 (UTC)
I've changed the usage of ${\displaystyle \phi (t)}$ (or ${\displaystyle \varphi (t)}$) to ${\displaystyle g(t)}$ in the intro. ${\displaystyle \phi (\varphi )}$ or whatever is still used later on in another section, but I'll leave that to be edited by whomever. -Matt 17:00, 11 November 2007 (UTC)

I agree with Kareeser on this. For most people who will be using the article, phi just adds confusion. Is there any reason why it should be phi? All the calculus books I have seen use g(x). Bizzako 21:54, 16 December 2006 (UTC)

## Requested move

speedy move according to WP standards for upper/lower case in article names. --Trovatore 02:44, 7 December 2005 (UTC)

### Voting

Add *Support or *Oppose followed by an optional one sentence explanation, then sign your vote with ~~~~

## Move issues

I moved this article back from Integration by Substitution. The reasons are three:

1. The page was moved against Wikipedia conventions for capitals; it should have been integration by substitution
2. The double redirects were not fixed.
3. I don't quite see any discussion about why the move would be necessary.

So, if it is decided to move this page indeed to integration by substitution, I can help (you would need an administrator to delete the redirect at integration by substitution first). But I would like to see if people agree on that, and somebody's got to promise to fix the redirects afterwards. Oleg Alexandrov (talk) 03:45, 7 December 2005 (UTC)

Maybe I rushed things a bit with the "requested move" process. I don't have any strong preference for Integration by substitution over Substitution rule; I just tried to move Integration by Substitution to Integration by substitution to comply with the capitalization convention, and then nominated the article when I found out I couldn't just move it. I suppose I think Integration by substitution is just epsilon better than Substitution rule, but I'd be happy to have the nomination quashed and things left as they are now if that's simpler. --Trovatore 04:08, 7 December 2005 (UTC)
Let us see what Atraxani says, the user who did the move (I wrote that user a message). All that really matters is that whoever wants to do the move should fix the redirects, to avoid redirects to redirects. Oleg Alexandrov (talk) 04:13, 7 December 2005 (UTC)
OK, I did the move and fixed the redirects myself. I guess that's the best thing to do. A message to Atraxani though, please do use more care when moving pages. Oleg Alexandrov (talk) 18:11, 9 December 2005 (UTC)

## Multi value thing

If one is required to substitute the variable x in the original integral with something like u^2-u-2, what do you do with the limits? There are obviously two solutions of u for each x, so how can one tell which one to use? —Preceding unsigned comment added by 211.31.14.149 (talkcontribs) 23:21, 9 December 2005

You choose a part of the function to use (by imposing constraints) so that u is a (single-valued) function of x. And which one you choose will affect which derivative you get for du/dx. It may be that there are multiple valid choices, depending on your situation; but they should all give the same result. --Spoon! 23:06, 11 September 2007 (UTC)
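Spoon!'s point (pick a single-valued branch; any valid choice gives the same result) can be demonstrated numerically. An editorial sketch with a simpler two-valued substitution, x = u², where the two branches are u = ±√x:

```python
from math import cos

def riemann(f, a, b, n=100000):
    # Midpoint-rule approximation of the integral of f on [a, b]
    # (works with a > b too, giving the oriented value).
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Integral of cos(x) from x = 0 to 4, substituting x = u^2 (dx = 2u du).
lhs = riemann(lambda x: cos(x), 0.0, 4.0)

# Branch u = +sqrt(x): each x in [0, 4] maps to one u in [0, 2].
rhs_pos = riemann(lambda u: cos(u * u) * 2 * u, 0.0, 2.0)

# Branch u = -sqrt(x): the limits run from 0 down to -2, and dx/du = 2u
# is negative there; the two sign changes cancel, as Spoon! says.
rhs_neg = riemann(lambda u: cos(u * u) * 2 * u, 0.0, -2.0)

print(abs(lhs - rhs_pos) < 1e-6 and abs(lhs - rhs_neg) < 1e-6)
```

Both branches recover sin(4), confirming that the choice of branch does not change the answer, only the intermediate limits and the sign of the derivative.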

## integrability of phi

Is it necessary to suppose integrability of φ′? Isn't that guaranteed by the assumption that it is continuous?

It seems that this is the only page that uses the Calculus Template whose link does not become bold and unlinkable when on the page. I don't know how to fix this. Ryulong 20:25, 6 February 2006 (UTC)

That is because the link in the Calculus template used to point to substitution rule. I changed it to point directly to this page, integration by substitution, and now the link does become bold. -- Jitse Niesen (talk) 22:06, 6 February 2006 (UTC)

## Expansion of Substitution rule for multiple variables

It would be great if someone expanded the Substitution rule for multiple variables section with examples and further explanations. Thanks, --Abdull 14:20, 25 May 2006 (UTC)

Yes. Here is something that is rather subtle but bothers people as I know from experience. When the number of dimensions is 1, the multi-variable formula should match the one-variable formula. However, the multi-variable formula has that absolute value in there. In the one-variable formula we have φ'(t) not |φ'(t)|. Explaining why this is not actually a contradiction is an expository challenge. McKay 02:10, 6 March 2007 (UTC)
This is because traditionally when we do a one-dimensional definite integral, we specify a lower bound and an upper bound. When we do the substitution, we apply the function separately onto the bounds. If the substituting function is a decreasing function, then after substitution the lower bound will be greater than the upper bound; and the derivative of the substituting function will be negative. These two effects cancel each other out, as "swapping" the bounds introduces another negative sign to the integral, canceling the negative from the derivative.
However, if you look at how the formula of the multi-variable integral is written, it considers integrals over a set, without considering a specific orientation. And after the substitution, you see that the second integral is over the set which is the image of the function over the original set. If the function has a negative Jacobian determinant, then that means somehow the function "flips" the geometric orientation of the variables. But since we are just integrating sets without considering the geometric orientation, we cannot cancel out that negative. So we take the absolute value. --Spoon! 23:02, 11 September 2007 (UTC)
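The cancellation Spoon! describes (reversed limits versus a negative derivative in the oriented form, against |g′| in the set form) can be seen numerically in one dimension. An editorial sketch with the decreasing substitution x = g(t) = 1 − t:

```python
def riemann(f, a, b, n=100000):
    # Midpoint-rule approximation; handles a > b as an oriented integral.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x ** 2
g = lambda t: 1.0 - t          # decreasing: g(0) = 1, g(1) = 0
g_prime = -1.0

target = riemann(f, 0.0, 1.0)  # integral of f over the set [0, 1]

# Oriented (one-variable) form: the limits come out reversed (from
# g^{-1}(0) = 1 down to g^{-1}(1) = 0) and the signed g'(t) = -1 appears;
# the two negatives cancel.
oriented = riemann(lambda t: f(g(t)) * g_prime, 1.0, 0.0)

# Set (multi-variable) form: integrate over the preimage set [0, 1]
# with |g'(t)|; no orientation, so no sign to cancel.
unsigned = riemann(lambda t: f(g(t)) * abs(g_prime), 0.0, 1.0)

print(abs(target - oriented) < 1e-9 and abs(target - unsigned) < 1e-9)
```

All three values are 1/3, showing the two formulas agree in one dimension despite the absolute value.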

## It is an unnecessary condition - nonvanishing derivative of g

I think we should remove this unnecessary condition on g :${\displaystyle g'(t)\neq 0\;\forall t\in [a,b]}$--Novwik (talk) 18:36, 12 January 2008 (UTC)

## Substitution theorem formulated directly for indefinite integral

I have just read a variant of the theorem which is formulated directly for indefinite integral.[1] I think it would fit in the article well. But still, I do not dare to write it to the article, because my knowledge in the topic lacks both the overview and the details.

Instead, I try to summarize the theorem here, together with some remarks, applications and examples. I do not include it here in the talk page literally, because it is too long, thus it may act as distracting the talk page. Physis (talk) 20:16, 20 June 2008 (UTC)

## Another assumption on g??

Shouldn't g(t) (the change-of-variables function) be injective as well?

Otherwise the integration limits might become the same point, and thus the integral will be zero independent of the value of the integrand itself... —Preceding unsigned comment added by Yaron hadad (talkcontribs) 23:18, 27 July 2008 (UTC)

Are you sure? For instance, if you change the lower limit in the first example given to −2 and again use the transformation ${\displaystyle u=x^{2}+1}$, you get
${\displaystyle \int _{-2}^{2}x\cos(x^{2}+1)\,dx={\frac {1}{2}}\int _{2}^{2}\cos(x^{2}+1)2x\,dx=0.}$
And the integral is indeed zero because the integrand x cos(x2 + 1) is an odd function. So you get the correct result, even though ${\displaystyle g(x)=x^{2}+1}$ is not injective on the interval [−2, 2]. -- Jitse Niesen (talk) 11:52, 28 July 2008 (UTC)
No, g does not need to be assumed injective, under our assumption that it is continuously differentiable. I believe that things are slightly more complicated when g is only assumed to be absolutely continuous (and the integral is the Lebesgue integral). In the latter case, it may be necessary to assume that g is monotonic to make the theorem tractable. siℓℓy rabbit (talk) 13:15, 28 July 2008 (UTC)
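Jitse's example above checks out numerically. An editorial sketch (the midpoint helper is mine): the non-injective substitution collapses the u-limits to 5 and 5, and the original integral of the odd integrand over [−2, 2] is indeed zero.

```python
from math import cos

def riemann(f, a, b, n=100000):
    # Midpoint-rule approximation of the integral of f on [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# g(x) = x^2 + 1 is not injective on [-2, 2]; after substituting, the
# u-integral runs from g(-2) = 5 to g(2) = 5 and is trivially zero.
# The original integral is also zero, because x*cos(x^2 + 1) is odd.
val = riemann(lambda x: x * cos(x * x + 1), -2.0, 2.0)
print(abs(val) < 1e-9)
```

So the two sides still agree, matching the conclusion that injectivity is not needed here.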

## Typo in Proof

In the proof:

Since ${\displaystyle f}$ is continuous, it possesses an antiderivative ${\displaystyle F:[a,b]\to \mathbb {R} }$.

Shouldn't that be:

Since ${\displaystyle f}$ is continuous, it possesses an antiderivative ${\displaystyle F:I\to \mathbb {R} }$ ?

DRE (talk) 01:01, 7 December 2008 (UTC)

You are right. I have corrected the mistake. siℓℓy rabbit (talk) 01:33, 7 December 2008 (UTC)

## Antiderivatives explanation?

Can someone explain to me how you get this? This is from the second to the third step of the antiderivatives example.

${\displaystyle {\frac {1}{2}}\int 2x\cos(x^{2}+1)\,dx={\frac {1}{2}}\int \cos u\,du}$

Is there some implied substitution at work here or something? —Preceding unsigned comment added by 218.186.9.250 (talk) 13:53, 5 February 2010 (UTC)

You're changing coordinates from right to left. Consider the coordinate change u = x^2 + 1. The Fundamental Theorem then gives

${\displaystyle \int \cos u\,du=\int \cos(x^{2}+1)\,{\frac {du}{dx}}dx.}$

The only thing left to do is calculate the Jacobian term, which is trivial. Tomgg (talk) 04:04, 18 August 2011 (UTC)
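The implied substitution u = x² + 1 in this exchange can also be checked the other way around: integrate cos u, substitute back, and differentiate. An editorial sketch using a central difference in place of symbolic differentiation:

```python
from math import sin, cos

# After the substitution u = x^2 + 1, the right-hand side integrates to
# sin(u)/2, i.e. F(x) = sin(x^2 + 1)/2 back in terms of x (constant omitted).
F = lambda x: sin(x * x + 1) / 2

# Differentiating F (here via a central difference) recovers the original
# integrand x*cos(x^2 + 1), which is exactly what du = 2x dx encodes.
h = 1e-6
errs = [abs((F(x + h) - F(x - h)) / (2 * h) - x * cos(x * x + 1))
        for x in (0.3, 1.0, 1.7)]
print(max(errs) < 1e-8)
```

The chain rule supplies the factor 2x, which cancels the 1/2 in front of the integral; that is the step the original question was about.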

Can someone add that formula into the main text of the article please? It makes it much more clear and accessible to the average reader. Jojojlj (talk) 01:02, 6 January 2013 (UTC)

## on probability

${\displaystyle p_{y}(y)=p_{x}(\Phi ^{-1}(y))~\left|\det \left[D\Phi ^{-1}(y)\right]\right|.}$??? What does D operator mean? and -1 for inverse? Jackzhp (talk) 04:41, 29 March 2010 (UTC)

D is the derivative, so that det D[..] is the Jacobian of [..]. 92.230.123.84 (talk) 21:19, 25 February 2012 (UTC)
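To make the notation concrete: in one dimension DΦ⁻¹(y) is just the ordinary derivative of the inverse map, and the determinant of a 1×1 matrix is the entry itself. An editorial sketch with an illustrative linear map Φ(x) = 2x applied to a uniform density (names are mine, not from the article):

```python
# Change of variables for densities with Phi(x) = 2x, so Phi^{-1}(y) = y/2
# and det[D Phi^{-1}(y)] = 1/2 (a 1x1 determinant).
def p_x(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0   # uniform density on [0, 1]

def p_y(y):
    inv = y / 2.0          # Phi^{-1}(y)
    jac = 0.5              # det D Phi^{-1}(y)
    return p_x(inv) * abs(jac)

# p_y should be the uniform density on [0, 2], i.e. 1/2 on its support:
check_support = (p_y(1.0) == 0.5) and (p_y(3.0) == 0.0)

# ...and it should still integrate to 1 (midpoint sum over [-1, 3]):
n = 100000
h = 4.0 / n
total = sum(p_y(-1.0 + (i + 0.5) * h) for i in range(n)) * h
print(check_support and abs(total - 1.0) < 1e-3)
```

The |det| factor is what keeps the transformed density normalized: stretching the axis by 2 halves the density.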

## Unnecessary..

"If the substitution function g(t) is decreasing, so that g(a) > g(b) the limits of integration must be reversed, with an additional minus sign appearing in front of the integral."

Why is this in the article? There's absolutely no need for it.. —Preceding unsigned comment added by 186.125.112.83 (talk) 21:37, 3 May 2011 (UTC)

Me again, I just removed it. —Preceding unsigned comment added by 190.224.65.87 (talk) 19:49, 4 May 2011 (UTC)

## Citation: Rudin

In Rudin's Real and Complex Analysis from as late as '74 there is no Theorem 7.26 (chapter 7 ends with Theorem 7.14). Theorem 7.26 does exist in a French version from '98, though. This means that the citation in the article is wrong. 87.77.17.135 (talk) 14:33, 15 June 2011 (UTC)

## most general set of conditions

I'm thinking that, in order to be a definitive source on the subject, there should be a theorem which states, in ${\displaystyle \mathbf {R} ^{1}}$, the most general set of circumstances for F under which the change of variables works in the Riemann–Stieltjes sense

${\displaystyle \int _{a}^{b}\psi (F(t))dF(t)=\int _{F(a)}^{F(b)}\psi (\xi )d\xi }$

that is to say, in order for the left-hand side to exist as a Riemann–Stieltjes integral, ${\displaystyle \psi (F(t))}$ must be integrable on (a,b) and the function F must be of bounded variation. Nothing about differentiability, much less about absolute continuity.

According to an exercise in Royden, the above is true for all F continuous and increasing (no need for absolute continuity and of course the requirement that ${\displaystyle \psi (F(t))}$ is integrable). Thus the above change of variables works when F is the Cantor function.

The Hewitt & Stromberg (1965, Theorem 20.3) citation attempts to be a general reference; it's stated for functions on Polish spaces, which is fine but more general than required. Something gives me the suspicion that this synopsis is mis-stated (cf. the requirement that a function must be continuous _and_ absolutely continuous is redundant); also, absolute continuity is not strictly required. Izmirlig (talk)izmirlig@mail.nih.gov

## Antiderivatives "technique" vs "theorem"

It would be nice to present a theorem about finding antiderivative. I don't see how to state it.. Is there a way to phrase it that isn't too tortuous?

In the article, the theorem of integration by substitution deals with definite integrals. The "technique" of substitution is used for antiderivatives without being stated as a theorem. At face value, the "rule" in calculus books for computing antiderivatives, ${\displaystyle \int f(u(x))u'(x)dx=\int f(u)du}$, claims the equality of two functions. The left-hand side is a function of x, the right-hand side is a function of u. But if someone asks "Are ${\displaystyle G(x)=\sin ^{2}(x)}$ and ${\displaystyle H(u)=u^{2}}$ the same function?" I'd have to say "No".

I could try to weasel out of it by saying "Yes, if the relation ${\displaystyle u=\sin(x)}$ holds." The follow-up question would be: "A function is a set of ordered pairs of numbers. If the two functions are the same function, does this 'same' function contain the ordered pair ${\displaystyle (1.2,\sin ^{2}(1.2))}$, or does it contain the ordered pair ${\displaystyle (1.2,(1.2)^{2})}$? It can't contain both."

Tashiro (talk) 07:57, 27 November 2014 (UTC)

The f( ) in f(u(x)) does not specify f( ) as a function of x, it specifies it as a function of its argument, which is whatever is inside the parentheses, in this case u or u(x). The left hand side IS a function of x, but that function is not specified by f( ), it is specified by f(u( )) which is not the same as f( ). The integral statement in what you wrote does not claim the "equality" of the two functions, it claims that the f( ) on both sides are the same function, NOT that f(u( )) and f( ) are the same . PAR (talk) 18:05, 13 August 2016 (UTC)

## Assumptions may be relaxed

Note that it is assumed that φ : [a,b] → I is continuously differentiable. An integrable derivative suffices, since the fundamental theorem of calculus holds for functions with integrable but discontinuous derivatives as well. SomePseudonym (talk) 11:45, 13 August 2016 (UTC)

1. ^ Császár 1989: 311–312 (= II. 2.28)