# Kelvin–Stokes theorem

*An illustration of the Kelvin–Stokes theorem, with surface Σ, its boundary ∂Σ, and the normal vector n.*

The Kelvin–Stokes theorem,[1][2] named after Lord Kelvin and George Stokes, also known as Stokes' theorem,[3] the fundamental theorem for curls, or simply the curl theorem,[4] is a theorem in vector calculus on ${\displaystyle \mathbb {R} ^{3}}$. Given a vector field, the theorem relates the integral of the curl of the vector field over some surface to the line integral of the vector field around the boundary of the surface.

If a vector field ${\displaystyle \mathbf {A} =(P(x,y,z),Q(x,y,z),R(x,y,z))}$ is defined in a region containing a smooth surface ${\displaystyle \Sigma }$ and has continuous first-order partial derivatives, then:

{\displaystyle {\begin{aligned}\iint \limits _{\Sigma }(\nabla \times \mathbf {A} )\cdot d\mathbf {a} &=\iint \limits _{\Sigma }{\Bigg (}\left({\frac {\partial R}{\partial y}}-{\frac {\partial Q}{\partial z}}\right)\,dy\,dz+\left({\frac {\partial P}{\partial z}}-{\frac {\partial R}{\partial x}}\right)\,dz\,dx+\left({\frac {\partial Q}{\partial x}}-{\frac {\partial P}{\partial y}}\right)\,dx\,dy{\Bigg )}\\&=\oint \limits _{\partial \Sigma }{\Big (}P\,dx+Q\,dy+R\,dz{\Big )}=\oint \limits _{\partial \Sigma }\mathbf {A} \cdot d\mathbf {l} ,\end{aligned}}}

where ${\displaystyle \partial \Sigma }$ is the boundary of the smooth surface ${\displaystyle \Sigma }$.
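As a numerical sanity check of this identity, the following sketch integrates both sides for an illustrative choice of field and surface; the field F, the paraboloid surface ψ, and the quadrature parameters are all assumptions of this example, not part of the theorem (assuming NumPy is available).

```python
import numpy as np

# Illustrative field F = (-y, x, z^2); its curl is (0, 0, 2).
def F(p):
    x, y, z = p
    return np.array([-y, x, z**2])

def curl_F(p):
    return np.array([0.0, 0.0, 2.0])

# Illustrative surface: the paraboloid cap psi(r, t) = (r cos t, r sin t, 1 - r^2),
# r in [0, 1], t in [0, 2*pi]; its boundary is the unit circle in the plane z = 0.
def psi(r, t):
    return np.array([r * np.cos(t), r * np.sin(t), 1.0 - r**2])

def surface_integral(n=100, h=1e-6):
    # Midpoint rule for the flux of curl F through the surface:
    # integrate curl F . (psi_r x psi_t) over the parameter rectangle.
    rs = (np.arange(n) + 0.5) / n
    ts = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    total = 0.0
    for r in rs:
        for t in ts:
            psi_r = (psi(r + h, t) - psi(r - h, t)) / (2.0 * h)
            psi_t = (psi(r, t + h) - psi(r, t - h)) / (2.0 * h)
            total += curl_F(psi(r, t)) @ np.cross(psi_r, psi_t)
    return total * (1.0 / n) * (2.0 * np.pi / n)

def line_integral(n=2000, h=1e-6):
    # Midpoint rule for the circulation of F around the boundary r = 1.
    ts = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    total = 0.0
    for t in ts:
        d_gamma = (psi(1.0, t + h) - psi(1.0, t - h)) / (2.0 * h)
        total += F(psi(1.0, t)) @ d_gamma
    return total * 2.0 * np.pi / n

print(surface_integral(), line_integral())  # both sides agree, here 2*pi
```

For this particular choice both sides evaluate to 2π, since curl F · (ψ_r × ψ_t) = 2r and the boundary integrand is identically 1.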

The Kelvin–Stokes theorem is a special case of the “generalized Stokes' theorem.”[5][6] In particular, a vector field on ${\displaystyle \mathbb {R} ^{3}}$ can be considered as a 1-form, in which case its curl is its exterior derivative.
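The 1-form correspondence can be made concrete with a small symbolic sketch (assuming SymPy is available; the sample field is an arbitrary illustrative choice): the coefficients of the exterior derivative of the associated 1-form, read in the order (dy∧dz, dz∧dx, dx∧dy), coincide with the components of the curl.

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Illustrative sample field A = (x*y, y*z, z*x).
P, Q, R = x * y, y * z, z * x

# Coefficients of d(P dx + Q dy + R dz), read in the order
# (dy^dz, dz^dx, dx^dy):
d_omega = (sp.diff(R, y) - sp.diff(Q, z),
           sp.diff(P, z) - sp.diff(R, x),
           sp.diff(Q, x) - sp.diff(P, y))

# The same three expressions, via the built-in vector curl:
c = curl(P * N.i + Q * N.j + R * N.k)
assert all(sp.simplify(c.dot(b) - w) == 0
           for b, w in zip((N.i, N.j, N.k), d_omega))
print(d_omega)
```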

## Theorem

Let γ: [a, b] → R2 be a piecewise smooth Jordan plane curve. The Jordan curve theorem implies that γ divides R2 into two components, a compact one and another that is non-compact. Let D denote the compact part that is bounded by γ and suppose ψ: D → R3 is smooth, with S := ψ(D). If Γ is the space curve defined by Γ(t) = ψ(γ(t))[note 1] and F is a smooth vector field on R3, then[7][8][9]

${\displaystyle \oint _{\Gamma }\mathbf {F} \,\cdot \,d{\mathbf {\Gamma } }=\iint _{S}\nabla \times \mathbf {F} \,\cdot \,d\mathbf {S} .}$

## Proof

The proof of the theorem consists of four steps.[8][9][note 2] We assume Green's theorem, so the task is to reduce the complicated three-dimensional problem (the Kelvin–Stokes theorem) to a rudimentary two-dimensional one (Green's theorem). When proving this theorem, mathematicians normally use differential forms: the "pull-back[note 2] of a differential form" is a very powerful tool for this situation, but learning differential forms requires substantial background knowledge. The proof below therefore does not require knowledge of differential forms, and may even be helpful for understanding the notion of differential forms.

### First step of the proof (defining the pullback)

Define

${\displaystyle \mathbf {P} (u,v)=(P_{1}(u,v),P_{2}(u,v))}$

so that P is the pull-back[note 2] of F, and P(u, v) is an R2-valued function depending on the two parameters u, v. To do so, we define P1 and P2 as follows.

${\displaystyle P_{1}(u,v)=\left\langle \mathbf {F} (\psi (u,v)){\bigg |}{\frac {\partial \psi }{\partial u}}\right\rangle ,\qquad P_{2}(u,v)=\left\langle \mathbf {F} (\psi (u,v)){\bigg |}{\frac {\partial \psi }{\partial v}}\right\rangle }$

where ${\displaystyle \langle \ |\ \rangle }$ is the standard inner product (for Euclidean vectors, the dot product; see bra–ket notation) on R3 and, hereinafter, ${\displaystyle \langle \ |A|\ \rangle }$ stands for the bilinear form associated with the matrix A.[note 3]

### Second step of the proof (first equation)

According to the definition of a line integral,

{\displaystyle {\begin{aligned}\oint _{\Gamma }\mathbf {F} \cdot d\mathbf {\Gamma } &=\int _{a}^{b}\left\langle (\mathbf {F} \circ \Gamma (t)){\bigg |}{\frac {d\Gamma }{dt}}(t)\right\rangle \,dt\\&=\int _{a}^{b}\left\langle (\mathbf {F} \circ \Gamma (t)){\bigg |}{\frac {d(\psi \circ \gamma )}{dt}}(t)\right\rangle \,dt\\&=\int _{a}^{b}\left\langle (\mathbf {F} \circ \Gamma (t)){\bigg |}(J\psi )_{\gamma (t)}\cdot {\frac {d\gamma }{dt}}(t)\right\rangle \,dt\end{aligned}}}

where Jψ stands for the Jacobian matrix of ψ, and the circle ∘ denotes function composition. Hence,[note 3]

{\displaystyle {\begin{aligned}\left\langle (\mathbf {F} \circ \Gamma (t)){\bigg |}(J\psi )_{\gamma (t)}{\frac {d\gamma }{dt}}(t)\right\rangle &=\left\langle (\mathbf {F} \circ \Gamma (t)){\bigg |}(J\psi )_{\gamma (t)}{\bigg |}{\frac {d\gamma }{dt}}(t)\right\rangle \\&=\left\langle ({}^{t}\mathbf {F} \circ \Gamma (t))\cdot (J\psi )_{\gamma (t)}\ {\bigg |}\ {\frac {d\gamma }{dt}}(t)\right\rangle \\&=\left\langle \left(\left\langle (\mathbf {F} (\psi (\gamma (t)))){\bigg |}{\frac {\partial \psi }{\partial u}}(\gamma (t))\right\rangle ,\left\langle (\mathbf {F} (\psi (\gamma (t)))){\bigg |}{\frac {\partial \psi }{\partial v}}(\gamma (t))\right\rangle \right){\bigg |}{\frac {d\gamma }{dt}}(t)\right\rangle \\&=\left\langle (P_{1}(u,v),P_{2}(u,v)){\bigg |}{\frac {d\gamma }{dt}}(t)\right\rangle \\&=\left\langle \mathbf {P} (u,v){\bigg |}{\frac {d\gamma }{dt}}(t)\right\rangle \end{aligned}}}

So we obtain the following equation:

${\displaystyle \oint _{\Gamma }\mathbf {F} \cdot d\mathbf {\Gamma } =\oint _{\gamma }\mathbf {P} \cdot d\gamma }$

### Third step of the proof (second equation)

First, calculate the partial derivatives, using the Leibniz rule (product rule):

{\displaystyle {\begin{aligned}{\frac {\partial P_{1}}{\partial v}}&=\left\langle {\frac {\partial (\mathbf {F} \circ \psi )}{\partial v}}{\bigg |}{\frac {\partial \psi }{\partial u}}\right\rangle +\left\langle \mathbf {F} \circ \psi {\bigg |}{\frac {\partial ^{2}\psi }{\partial v\partial u}}\right\rangle \\{\frac {\partial P_{2}}{\partial u}}&=\left\langle {\frac {\partial (\mathbf {F} \circ \psi )}{\partial u}}{\bigg |}{\frac {\partial \psi }{\partial v}}\right\rangle +\left\langle \mathbf {F} \circ \psi {\bigg |}{\frac {\partial ^{2}\psi }{\partial u\partial v}}\right\rangle \end{aligned}}}
{\displaystyle {\begin{aligned}{\frac {\partial P_{2}}{\partial u}}-{\frac {\partial P_{1}}{\partial v}}&=\left\langle {\frac {\partial (\mathbf {F} \circ \psi )}{\partial u}}{\bigg |}{\frac {\partial \psi }{\partial v}}\right\rangle -\left\langle {\frac {\partial (\mathbf {F} \circ \psi )}{\partial v}}{\bigg |}{\frac {\partial \psi }{\partial u}}\right\rangle \\&=\left\langle (J\mathbf {F} )_{\psi (u,v)}\cdot {\frac {\partial \psi }{\partial u}}{\bigg |}{\frac {\partial \psi }{\partial v}}\right\rangle -\left\langle (J\mathbf {F} )_{\psi (u,v)}\cdot {\frac {\partial \psi }{\partial v}}{\bigg |}{\frac {\partial \psi }{\partial u}}\right\rangle &&{\text{ chain rule}}\\&=\left\langle {\frac {\partial \psi }{\partial v}}{\bigg |}(J\mathbf {F} )_{\psi (u,v)}{\bigg |}{\frac {\partial \psi }{\partial u}}\right\rangle -\left\langle {\frac {\partial \psi }{\partial v}}{\bigg |}{}^{t}(J\mathbf {F} )_{\psi (u,v)}{\bigg |}{\frac {\partial \psi }{\partial u}}\right\rangle \\&=\left\langle {\frac {\partial \psi }{\partial v}}{\bigg |}\left((J\mathbf {F} )_{\psi (u,v)}-{}^{t}(J\mathbf {F} )_{\psi (u,v)}\right)\cdot {\frac {\partial \psi }{\partial u}}\right\rangle \\&=\left\langle {\frac {\partial \psi }{\partial v}}{\bigg |}(\nabla \times \mathbf {F} )\times {\frac {\partial \psi }{\partial u}}\right\rangle &&\left((J\mathbf {F} )_{\psi (u,v)}-{}^{t}(J\mathbf {F} )_{\psi (u,v)}\right)\cdot \mathbf {x} =(\nabla \times \mathbf {F} )\times \mathbf {x} \\&=\det \left[(\nabla \times \mathbf {F} )(\psi (u,v))\quad {\frac {\partial \psi }{\partial u}}(u,v)\quad {\frac {\partial \psi }{\partial v}}(u,v)\right]&&{\text{ scalar triple product}}\end{aligned}}}

On the other hand, according to the definition of a surface integral,

{\displaystyle {\begin{aligned}\iint _{S}(\nabla \times \mathbf {F} )\,dS&=\iint _{D}\left\langle (\nabla \times \mathbf {F} )(\psi (u,v)){\bigg |}{\frac {\partial \psi }{\partial u}}(u,v)\times {\frac {\partial \psi }{\partial v}}(u,v)\right\rangle \,du\,dv\\&=\iint _{D}\det \left[(\nabla \times \mathbf {F} )(\psi (u,v))\quad {\frac {\partial \psi }{\partial u}}(u,v)\quad {\frac {\partial \psi }{\partial v}}(u,v)\right]\,du\,dv&&{\text{ scalar triple product}}\end{aligned}}}

So, we obtain

${\displaystyle \iint _{S}(\nabla \times \mathbf {F} )\,dS=\iint _{D}\left({\frac {\partial P_{2}}{\partial u}}-{\frac {\partial P_{1}}{\partial v}}\right)\,du\,dv}$
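The pointwise identity behind this equation can be spot-checked symbolically for a concrete (illustrative) choice of surface patch ψ and field F; this is only a check of the algebra for one instance, not a proof (assuming SymPy is available).

```python
import sympy as sp

u, v = sp.symbols('u v')
x, y, z = sp.symbols('x y z')

# Illustrative choices: a paraboloid patch psi and a sample field F.
psi = sp.Matrix([u, v, 1 - u**2 - v**2])
def F(x_, y_, z_):
    return sp.Matrix([y_ * z_, -x_ * z_, x_ * y_ * z_])

# Pullback components P1 = <F o psi | psi_u>, P2 = <F o psi | psi_v>.
F_on_S = F(*psi)
P1 = F_on_S.dot(psi.diff(u))
P2 = F_on_S.dot(psi.diff(v))

# curl F, computed from its defining formula, then evaluated on the surface.
Fx = F(x, y, z)
curl = sp.Matrix([Fx[2].diff(y) - Fx[1].diff(z),
                  Fx[0].diff(z) - Fx[2].diff(x),
                  Fx[1].diff(x) - Fx[0].diff(y)])
curl_on_S = curl.subs({x: psi[0], y: psi[1], z: psi[2]})

# The integrand identity of the third step:
#   dP2/du - dP1/dv = (curl F)(psi) . (psi_u x psi_v)
lhs = P2.diff(u) - P1.diff(v)
rhs = curl_on_S.dot(psi.diff(u).cross(psi.diff(v)))
assert sp.expand(lhs - rhs) == 0
```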

### Fourth step of the proof (reduction to Green's theorem)

Combining the second and third steps with Green's theorem completes the proof. By Green's theorem,

${\displaystyle \oint _{\gamma }\mathbf {P} \cdot d\gamma =\iint _{D}\left({\frac {\partial P_{2}}{\partial u}}-{\frac {\partial P_{1}}{\partial v}}\right)\,du\,dv,}$

and therefore

${\displaystyle \oint _{\Gamma }\mathbf {F} \cdot d\mathbf {\Gamma } =\oint _{\gamma }\mathbf {P} \cdot d\gamma =\iint _{D}\left({\frac {\partial P_{2}}{\partial u}}-{\frac {\partial P_{1}}{\partial v}}\right)\,du\,dv=\iint _{S}\nabla \times \mathbf {F} \cdot d\mathbf {S} .}$
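Since the whole proof rests on Green's theorem, a quick numerical spot check of Green's theorem itself may be useful; the square domain and the pair (P, Q) below are arbitrary illustrative choices (assuming NumPy is available).

```python
import numpy as np

# Green's theorem on the unit square with the sample (illustrative) choice
# P = -y**2, Q = x**2, so that Q_x - P_y = 2*x + 2*y.
P = lambda x, y: -y**2
Q = lambda x, y: x**2

def double_integral(n=400):
    # Midpoint rule for the double integral of Q_x - P_y over [0,1]^2.
    s = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(s, s)
    return np.sum(2 * X + 2 * Y) / n**2

def boundary_integral(n=4000):
    # Counterclockwise boundary integral of P dx + Q dy, edge by edge.
    t = (np.arange(n) + 0.5) / n
    dt = 1.0 / n
    total = 0.0
    total += np.sum(P(t, 0 * t)) * dt           # bottom: (t, 0), dx = dt
    total += np.sum(Q(1 + 0 * t, t)) * dt       # right:  (1, t), dy = dt
    total += np.sum(-P(1 - t, 1 + 0 * t)) * dt  # top:    (1-t, 1), dx = -dt
    total += np.sum(-Q(0 * t, 1 - t)) * dt      # left:   (0, 1-t), dy = -dt
    return total

print(double_integral(), boundary_integral())  # both equal 2.0 here
```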

## Application for conservative vector fields and scalar potential

In this section, we discuss lamellar (irrotational) vector fields based on the Kelvin–Stokes theorem.

First, we define the normalization map (a reparametrization onto [0, 1]) as follows:

${\displaystyle {\begin{cases}\theta _{[a,b]}:[0,1]\to [a,b]\\\theta _{[a,b]}(s)=s(b-a)+a\end{cases}}}$

${\displaystyle \theta _{[a,b]}}$ is a strictly increasing function. For every piecewise smooth path c: [a, b] → R3 and every smooth vector field F whose domain includes c([a, b]), one has:

${\displaystyle \int _{c}\mathbf {F} dc=\int _{c\circ \theta _{[a,b]}}\ \mathbf {F} \ d(c\circ \theta _{[a,b]})}$

So, we can assume the domain of the curve to be [0, 1].
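The reparametrization invariance above is easy to check numerically for a sample path and field; the helix arc, the field F, and the quadrature parameters are illustrative assumptions (assuming NumPy is available).

```python
import numpy as np

a, b = 0.0, np.pi
F = lambda p: np.array([-p[1], p[0], p[2]])        # sample field (illustrative)
c = lambda t: np.array([np.cos(t), np.sin(t), t])  # helix arc on [a, b]

theta = lambda s: s * (b - a) + a  # the normalizing reparametrization

def line_integral(path, lo, hi, n=2000, h=1e-6):
    # Midpoint rule with a central-difference tangent.
    ts = lo + (np.arange(n) + 0.5) * (hi - lo) / n
    total = 0.0
    for t in ts:
        d = (path(t + h) - path(t - h)) / (2.0 * h)
        total += F(path(t)) @ d
    return total * (hi - lo) / n

I1 = line_integral(c, a, b)                          # integral over c on [a, b]
I2 = line_integral(lambda s: c(theta(s)), 0.0, 1.0)  # over c o theta on [0, 1]
print(I1, I2)  # the two values agree up to quadrature error
```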

### Irrotational fields

A lamellar (irrotational) vector field, i.e. one with ∇ × F = 0, has line integrals that are invariant under tube-like homotopies of the path. Furthermore, if the domain of F is simply connected, then F is a conservative field; in mechanics, it can be identified with a conservative force.
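A minimal numerical illustration, assuming NumPy: the gradient field F = ∇(xyz) is irrotational on all of R3 (which is simply connected), hence conservative, so its line integral around any closed loop vanishes. The particular loop below is an arbitrary choice.

```python
import numpy as np

# F = grad(x*y*z) = (y*z, x*z, x*y): irrotational, hence conservative on R^3.
F = lambda p: np.array([p[1] * p[2], p[0] * p[2], p[0] * p[1]])

# An arbitrary closed loop (a tilted circle).
loop = lambda t: np.array([np.cos(t), np.sin(t), np.cos(t) + 2.0])

def closed_integral(n=4000, h=1e-6):
    # Midpoint rule for the circulation of F around the loop.
    ts = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    total = 0.0
    for t in ts:
        d = (loop(t + h) - loop(t - h)) / (2.0 * h)
        total += F(loop(t)) @ d
    return total * 2.0 * np.pi / n

print(closed_integral())  # approximately zero
```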

### Helmholtz's theorem

In this section, we introduce a theorem that is derived from the Kelvin–Stokes theorem and characterizes vortex-free vector fields. In fluid dynamics it is called Helmholtz's theorem.[note 5]

That theorem is also important in homotopy theory.[5]

Some textbooks, such as Conlon,[5] call the relationship between c0 and c1 stated in Theorem 2-1 “homotopic”, and the function H: [0, 1] × [0, 1] → U a “homotopy between c0 and c1”. However, “homotopic” and “homotopy” in the above-mentioned sense are different from (stronger than) the typical definitions of “homotopic” and “homotopy”.[note 6] So from now on we refer to a homotopy (homotope) in the sense of Theorem 2-1 as a tube-like homotopy (homotope).[note 7]

### Proof of the theorem

*The definitions of γ1, ..., γ4*

Hereinafter, ⊕ stands for the joining of paths[note 8] and ${\displaystyle \ominus }$ stands for the reversal of a curve.[note 9]

Let D = [0, 1] × [0, 1]. By assumption, c1 and c2 are piecewise-smoothly homotopic, so there is a piecewise smooth homotopy H: D → M. Define:

{\displaystyle {\begin{aligned}&{\begin{cases}\gamma _{1}:[0,1]\to D\\\gamma _{1}(t):=(t,0)\end{cases}}\\&{\begin{cases}\gamma _{2}:[0,1]\to D\\\gamma _{2}(s):=(1,s)\end{cases}}\\&{\begin{cases}\gamma _{3}:[0,1]\to D\\\gamma _{3}(t):=(1-t,1)\end{cases}}\\&{\begin{cases}\gamma _{4}:[0,1]\to D\\\gamma _{4}(s):=(0,1-s)\end{cases}}\\[6pt]\gamma (t)&:=(\gamma _{1}\oplus \gamma _{2}\oplus \gamma _{3}\oplus \gamma _{4})(t)\\\Gamma _{i}(t)&:=H(\gamma _{i}(t))&&i=1,2,3,4\\\Gamma (t)&:=H(\gamma (t))=(\Gamma _{1}\oplus \Gamma _{2}\oplus \Gamma _{3}\oplus \Gamma _{4})(t)\end{aligned}}}

Let S be the image of D under H. Then,

${\displaystyle \oint _{\Gamma }\mathbf {F} \,d\Gamma =\iint _{S}\nabla \times \mathbf {F} \,dS}$

follows immediately from Theorem 1. Moreover, since F is a lamellar vector field, the right-hand side of that equation is zero, so

${\displaystyle \oint _{\Gamma }\mathbf {F} \,d\Gamma =0}$

Here,

${\displaystyle \oint _{\Gamma }\mathbf {F} \,d\Gamma =\sum _{i=1}^{4}\oint _{\Gamma _{i}}\mathbf {F} \,d\Gamma }$ [note 8]

and since H is a tube-like homotopy, we have

${\displaystyle \Gamma _{2}(s)=\Gamma _{4}(1-s)=\ominus \Gamma _{4}(s)}$

so the line integral along Γ2(s) and the line integral along Γ4(s) cancel each other;[note 9] together with the vanishing of the total integral, this leaves

${\displaystyle \oint _{\Gamma _{1}}\mathbf {F} \,d\Gamma +\oint _{\Gamma _{3}}\mathbf {F} d\Gamma =0}$

On the other hand,

${\displaystyle c_{1}(t)=H(t,0)=H(\gamma _{1}(t))=\Gamma _{1}(t)}$
${\displaystyle c_{2}(t)=H(t,1)=H(\ominus \gamma _{3}(t))=\ominus \Gamma _{3}(t)}$

Thus the line integrals of F along c1 and along c2 are equal, and the claimed equation is proved.

### Application for conservative force

Helmholtz's theorem explains why the work done by a conservative force in changing an object's position is path independent. First, we introduce Lemma 2-2, which is a corollary and a special case of Helmholtz's theorem.

Lemma 2-2 follows immediately from Theorem 2-1. In Lemma 2-2, the existence of an H satisfying [SC0] to [SC3] is crucial. It is a well-known fact that if U is simply connected, such an H exists. The definition of a simply connected space follows:

Note that conditions [SC1] to [SC3] of Lemma 2-2 and of Definition 2-2 are the same.

So one might think that the claim “for a conservative force, the work done in changing an object's position is path independent” has now been established. However, there is a substantial gap between the following two statements:

• There is a continuous H satisfying [SC1] to [SC3].
• There is a piecewise smooth H satisfying [SC1] to [SC3].

To fill that gap, deeper results from homotopy theory are required; for example, the following resources may be helpful.

Considering the above fact together with Lemma 2-2, we obtain the following theorem.

## Kelvin–Stokes theorem on singular 2-cube and cube subdivisionable sphere

### Singular 2-cube and boundary

We omit the proof of the lemma. Using the lemma, from now on we consider all singular 2-cubes to be normalized; in other words, we assume that the domain of every singular 2-cube is I × I.

In order to facilitate the discussion of boundary, we define

${\displaystyle {\begin{cases}\delta _{[k,j,c]}:\mathbf {R} ^{k}\to \mathbf {R} ^{k+1}\\\delta _{[k,j,c]}(t_{1},\cdots ,t_{k}):=\left(t_{1},\cdots ,t_{j-1},c,t_{j+1},\cdots ,t_{k}\right)\end{cases}}}$

γ1, ..., γ4 are the one-dimensional edges of the image of I × I. Hereinafter, ⊕ stands for the joining of paths[note 8] and ${\displaystyle \ominus }$ stands for the reversal of a curve.[note 9]

{\displaystyle {\begin{aligned}&{\begin{cases}\gamma _{1}:[0,1]\to I^{2}\\\gamma _{1}(t):={\delta }_{[1,2,0]}(t)=(t,0)\end{cases}}\\&{\begin{cases}\gamma _{2}:[0,1]\to I^{2}\\\gamma _{2}(t):={\delta }_{[1,1,1]}(t)=(1,t)\end{cases}}\\&{\begin{cases}\gamma _{3}:[0,1]\to I^{2}\\\gamma _{3}(t):=\ominus {\delta }_{[1,2,1]}(t)=(1-t,1)\end{cases}}\\&{\begin{cases}\gamma _{4}:[0,1]\to I^{2}\\\gamma _{4}(t):=\ominus {\delta }_{[1,1,0]}(t)=(0,1-t)\end{cases}}\\[6pt]\gamma (t)&:=(\gamma _{1}\oplus \gamma _{2}\oplus \gamma _{3}\oplus \gamma _{4})(t)\\\Gamma _{i}(t)&:=\varphi (\gamma _{i}(t))&&i=1,2,3,4\\\Gamma (t)&:=\varphi (\gamma (t))=(\Gamma _{1}\oplus \Gamma _{2}\oplus \Gamma _{3}\oplus \Gamma _{4})(t)\end{aligned}}}

### Cube subdivision

The boundary given in Definition 3-3 apparently depends on the choice of cube subdivision. However, in view of the following fact, the boundary does not in fact depend on the cube subdivision.

Therefore, the following Definition is well-defined:

## Notes

1. ^ γ and Γ are both loops; however, Γ is not necessarily a Jordan curve.
2. ^ a b c With knowledge of differential forms, and with the vector field A = (a1, a2, a3) identified with the 1-form
${\displaystyle \omega _{\mathbf {A} }=a_{1}\,dx_{1}+a_{2}\,dx_{2}+a_{3}\,dx_{3}}$
and with the 2-form
${\displaystyle {}^{*}\omega _{\mathbf {A} }=a_{1}\,dx_{2}\wedge dx_{3}+a_{2}\,dx_{3}\wedge dx_{1}+a_{3}\,dx_{1}\wedge dx_{2},}$
the theorem admits a proof similar to the one above using the pullback of ωF. Under the identification ωF = F, the following equations are satisfied.
{\displaystyle {\begin{aligned}\nabla \times \mathbf {F} &=d\omega _{\mathbf {F} }\\\psi ^{*}\omega _{\mathbf {F} }&=P_{1}\,du+P_{2}\,dv\\\psi ^{*}(d\omega _{\mathbf {F} })&=\left({\frac {\partial P_{2}}{\partial u}}-{\frac {\partial P_{1}}{\partial v}}\right)du\wedge dv\end{aligned}}}
Here, d stands for the exterior derivative of a differential form, ψ* stands for the pullback by ψ, and P1 and P2 are the same as the P1 and P2 in the body of this article.
3. ^ a b c Given an n × m matrix A, we define a bilinear form:
${\displaystyle \mathbf {x} \in \mathbf {R} ^{n},\mathbf {y} \in \mathbf {R} ^{m}\ :\qquad \left\langle \mathbf {x} |A|\mathbf {y} \right\rangle ={}^{t}\mathbf {x} A\mathbf {y} }$
which also satisfies (tA is the transpose of the matrix A):
${\displaystyle \left\langle \mathbf {x} |A|\mathbf {y} \right\rangle =\left\langle \mathbf {y} \left|{}^{t}A\right|\mathbf {x} \right\rangle .}$
${\displaystyle \left\langle \mathbf {x} |A|\mathbf {y} \right\rangle +\left\langle \mathbf {x} |B|\mathbf {y} \right\rangle =\left\langle \mathbf {x} |A+B|\mathbf {y} \right\rangle }$
4. ^ We prove the following (★0):
${\displaystyle ({(J\mathbf {F} )}_{\psi (u,v)}-{}^{t}{(J\mathbf {F} )}_{\psi (u,v)})\mathbf {x} =(\nabla \times \mathbf {F} )\times \mathbf {x} ,\quad {\text{for all}}\,\mathbf {x} \in \mathbf {R} ^{3}}$　(★0)
First, we note the linearity of the operator a × and obtain its matrix representation (see linear map). Let a and x both be 3-dimensional column vectors, represented as follows:
${\displaystyle \mathbf {a} ={\begin{pmatrix}a_{1}\\a_{2}\\a_{3}\end{pmatrix}},\quad \mathbf {x} ={\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}}$
Then, according to the definition of the cross product, a × x is represented as follows:
${\displaystyle \mathbf {a} \times \mathbf {x} ={\begin{pmatrix}a_{2}x_{3}-a_{3}x_{2}\\a_{3}x_{1}-a_{1}x_{3}\\a_{1}x_{2}-a_{2}x_{1}\end{pmatrix}}}$
Therefore, applying the operator a × to each of the standard basis vectors, we obtain the following, and thus the matrix representation of a × shown in (★1):
${\displaystyle \mathbf {a} \times \mathbf {e} _{1}={\begin{pmatrix}0\\a_{3}\\-a_{2}\end{pmatrix}},\quad \mathbf {a} \times \mathbf {e} _{2}={\begin{pmatrix}-a_{3}\\0\\a_{1}\end{pmatrix}},\quad \mathbf {a} \times \mathbf {e} _{3}={\begin{pmatrix}a_{2}\\-a_{1}\\0\end{pmatrix}}}$
${\displaystyle \mathbf {a} \times \mathbf {x} ={\begin{pmatrix}0&-a_{3}&a_{2}\\a_{3}&0&-a_{1}\\-a_{2}&a_{1}&0\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}}$　(★1)
Next, let ${\displaystyle A=(a_{ij})}$ be a 3 × 3 matrix and use the following substitution (a1, ..., a3 are the components of a):
${\displaystyle a_{1}={a}_{32}-a_{23}}$ (★2-1)
${\displaystyle a_{2}=a_{13}-a_{31}}$ (★2-2)
${\displaystyle a_{3}=a_{21}-a_{12}}$ (★2-3)
Then we obtain (★3) from (★1):
${\displaystyle (A-{}^{t}A)\mathbf {x} =\mathbf {a} \times \mathbf {x} }$　(★3)
Substituting JF for the above-mentioned A, under the substitutions (★2-1), (★2-2), and (★2-3), we obtain the following (★4):
${\displaystyle \mathbf {a} =(\nabla \times \mathbf {F} )}$　(★4)
(★0) follows immediately from (★3) and (★4).
5. ^ There are a number of theorems with the same name; however, they are not necessarily the same.
6. ^ Typical definitions of homotopy and homotope are as follows.
7. ^ Some textbooks, such as Conlon, Lawrence (2008). Differentiable Manifolds. Modern Birkhauser Classics. Boston: Birkhäuser, use the terms homotopy and homotope in the sense of Theorem 2-1. Indeed, this sense is convenient for discussing conservative forces. However, homotopy and homotope in the sense of Theorem 2-1 are different from, and stronger than, homotopy and homotope in the typical sense, and there is no established terminology that discriminates between the two. In this article, to avoid ambiguity, we define two just-in-time terms, tube-like homotopy and tube-like homotope, as follows.
8. ^ a b c If two curves α: [a1, b1] → M and β: [a2, b2] → M satisfy α(b1) = β(a2), then we can define a new curve α ⊕ β so that, for all smooth vector fields F (whose domain includes the image of α ⊕ β),
${\displaystyle \int _{\alpha \oplus \beta }\mathbf {F} \,d(\alpha \oplus \beta )=\int _{\alpha }\mathbf {F} \,d\alpha +\int _{\beta }\mathbf {F} \,d\beta }$
which is also used when we define the fundamental group. The precise definition of the “joining of paths” is as follows.
9. ^ a b c Given a curve α: [a1, b1] → M on M, we can define a new curve ${\displaystyle \ominus }$α so that, for all smooth vector fields F (whose domain includes the image of α),
${\displaystyle \int _{\ominus \alpha }\mathbf {F} \,d(\ominus \alpha )=-\int _{\alpha }\mathbf {F} \,d\alpha ,}$
which is also used when we define the fundamental group. The precise definition of the “reversal of a curve” is as follows.

And, given two curves on M, α: [a1, b1] → M and β: [a2, b2] → M, which satisfy α(b1) = β(b2) (equivalently, α(b1) = (${\displaystyle \ominus }$β)(a2)), we can define ${\displaystyle \alpha \ominus \beta }$ in the following manner:

${\displaystyle \alpha \ominus \beta :=\alpha \oplus (\ominus \beta )}$
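The matrix identity (★3) of note 4 is easy to spot-check numerically; the random matrix and vector below are, of course, arbitrary instances (assuming NumPy is available).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

# a is built from A by (*2-1)-(*2-3): a1 = a32 - a23, etc. (0-based indices below).
a = np.array([A[2, 1] - A[1, 2],
              A[0, 2] - A[2, 0],
              A[1, 0] - A[0, 1]])

# (*3): (A - A^T) x = a x x
lhs = (A - A.T) @ x
rhs = np.cross(a, x)
assert np.allclose(lhs, rhs)
```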
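The defining properties of ⊕ (note 8) and ⊖ (note 9) can likewise be checked numerically for sample paths; the field F and the two paths below are arbitrary illustrative choices (assuming NumPy is available).

```python
import numpy as np

F = lambda p: np.array([-p[1], p[0], p[2]**2])  # sample field (illustrative)

def line_integral(path, n=2000, h=1e-6):
    # Midpoint rule for a path parametrized on [0, 1].
    ts = (np.arange(n) + 0.5) / n
    total = 0.0
    for t in ts:
        d = (path(t + h) - path(t - h)) / (2.0 * h)
        total += F(path(t)) @ d
    return total / n

alpha = lambda t: np.array([t, t**2, 0.0])  # from (0,0,0) to (1,1,0)
beta = lambda t: np.array([1.0, 1.0, t])    # from (1,1,0) to (1,1,1)

# alpha (+) beta: run alpha on [0, 1/2], then beta on [1/2, 1].
join = lambda t: alpha(2 * t) if t < 0.5 else beta(2 * t - 1)
# (-) alpha: the same trace run backwards.
rev = lambda t: alpha(1 - t)

print(line_integral(join), line_integral(alpha) + line_integral(beta))  # equal
print(line_integral(rev), -line_integral(alpha))                        # equal
```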

## References

1. ^ Nagayoshi Iwahori et al., Bi-Bun-Seki-Bun-Gaku, Sho-Ka-Bou, December 1983. ISBN 978-4-7853-1039-4 (in Japanese)
2. ^ a b Atsuo Fujimoto, Vector-Kai-Seki (Gendai su-gaku rekucha zu C(1)), Bai-Fu-Kan, January 1979. ISBN 978-4563004415 (in Japanese)
3. ^ Stewart, James (2012). Calculus - Early Transcendentals (7th ed.). Brooks/Cole Cengage Learning. p. 1122. ISBN 978-0-538-49790-9.
4. ^ Griffiths, David (2013). Introduction to Electrodynamics. Pearson. p. 34. ISBN 978-0-321-85656-2.
5. Conlon, Lawrence (2008). Differentiable Manifolds. Modern Birkhäuser Classics. Boston: Birkhäuser.
6. Lee, John M. (2002). Introduction to Smooth Manifolds. Graduate Texts in Mathematics 218. Springer.
7. ^ Stewart, James (2010). Essential Calculus: Early Transcendentals. Brooks/Cole.
8. ^ a b This proof is based on lecture notes given by Prof. Robert Scheichl (University of Bath, U.K.).
9. ^ a b This proof is essentially the same as the proof shown in
10. ^ L. S. Pontryagin, Smooth manifolds and their applications in homotopy theory, American Mathematical Society Translations, Ser. 2, Vol. 11, American Mathematical Society, Providence, R.I., 1959, pp. 1–114. MR0115178.
11. ^ Spivak, Michael (1971). Calculus On Manifolds: A Modern Approach To Classical Theorems Of Advanced Calculus. Westview Press.