In mathematics, some boundary value problems can be solved using the methods of stochastic analysis. Perhaps the most celebrated example is Shizuo Kakutani's 1944 solution of the Dirichlet problem for the Laplace operator using Brownian motion. However, it turns out that for a large class of semi-elliptic second-order partial differential equations the associated Dirichlet boundary value problem can be solved using an Itō process that solves an associated stochastic differential equation.
Introduction: Kakutani's solution to the classical Dirichlet problem
Let $D$ be a domain (an open and connected set) in $\mathbb{R}^n$. Let $\Delta$ be the Laplace operator, let $g$ be a bounded function on the boundary $\partial D$, and consider the problem:
$$\begin{cases} -\Delta u(x) = 0, & x \in D \\ \lim_{y \to x} u(y) = g(x), & x \in \partial D \end{cases}$$
It can be shown that if a solution $u$ exists, then $u(x)$ is the expected value of $g$ at the (random) first exit point from $D$ of a canonical Brownian motion starting at $x$. See theorem 3 in Kakutani 1944, p. 710.
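Kakutani's representation lends itself directly to Monte Carlo estimation. The sketch below (all names and parameters are illustrative, not from the source) approximates $u(x_0)$ on the unit disk in $\mathbb{R}^2$ by averaging the boundary data $g$ over simulated Brownian first-exit points:

```python
import numpy as np

def kakutani_estimate(x0, g, n_paths=20000, dt=1e-3, seed=0):
    """Estimate u(x0) for the Dirichlet problem on the unit disk in R^2 by
    averaging the boundary data g over simulated Brownian first-exit points."""
    rng = np.random.default_rng(seed)
    pos = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    exits = np.empty_like(pos)
    while alive.any():
        pos[alive] += rng.normal(scale=np.sqrt(dt), size=(alive.sum(), 2))
        r = np.linalg.norm(pos[alive], axis=1)
        idx = np.flatnonzero(alive)[r >= 1.0]
        # project the first point found outside D back onto the unit circle
        exits[idx] = pos[idx] / np.linalg.norm(pos[idx], axis=1, keepdims=True)
        alive[idx] = False
    return g(exits).mean()

# g(x, y) = x on the circle has harmonic extension u(x, y) = x,
# so the estimate at (0.3, 0.4) should be near 0.3.
u_hat = kakutani_estimate((0.3, 0.4), lambda p: p[:, 0])
```

The time-discretised walk overshoots the boundary slightly, so the exit point is projected back onto the circle; the resulting bias vanishes as `dt` shrinks, while the Monte Carlo error decays like $n^{-1/2}$ in the number of paths.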
The Dirichlet–Poisson problem
Let $D$ be a domain in $\mathbb{R}^n$ and let $L$ be a semi-elliptic differential operator on $C^2(\mathbb{R}^n; \mathbb{R})$ of the form:
$$L = \sum_{i=1}^{n} b_i(x) \frac{\partial}{\partial x_i} + \sum_{i,j=1}^{n} a_{ij}(x) \frac{\partial^2}{\partial x_i \, \partial x_j}$$
where the coefficients $b_i$ and $a_{ij}$ are continuous functions and all the eigenvalues of the matrix $\alpha(x) = (a_{ij}(x))$ are non-negative. Let $f \in C(D; \mathbb{R})$ and $g \in C(\partial D; \mathbb{R})$. Consider the Poisson problem:
$$\begin{cases} -L u(x) = f(x), & x \in D \\ \lim_{y \to x} u(y) = g(x), & x \in \partial D \end{cases} \quad \text{(P1)}$$
The idea of the stochastic method for solving this problem is as follows. First, one finds an Itō diffusion $X$ whose infinitesimal generator $A$ coincides with $L$ on compactly-supported $C^2$ functions $f : \mathbb{R}^n \to \mathbb{R}$. For example, $X$ can be taken to be the solution to the stochastic differential equation:
$$\mathrm{d} X_t = b(X_t) \, \mathrm{d} t + \sigma(X_t) \, \mathrm{d} B_t$$
where $B$ is an $n$-dimensional Brownian motion, $b$ has components $b_i$ as above, and the matrix field $\sigma$ is chosen so that:
$$\frac{1}{2} \sigma(x) \sigma(x)^{\top} = a(x), \quad \forall x \in \mathbb{R}^n$$
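One concrete way to meet this condition, assuming each $a(x)$ is symmetric positive-definite, is to take $\sigma(x)$ as the Cholesky factor of $2a(x)$; the resulting SDE can then be integrated numerically, for instance with the Euler–Maruyama scheme. A minimal sketch (function names are illustrative, not from the source):

```python
import numpy as np

def sigma_from_a(a):
    """Return a matrix sigma with (1/2) * sigma @ sigma.T == a, via a
    Cholesky factorisation of 2a (a must be symmetric positive-definite)."""
    return np.linalg.cholesky(2.0 * np.asarray(a, dtype=float))

def euler_maruyama_step(x, b, sigma, dt, rng):
    """One Euler-Maruyama step of dX_t = b(X_t) dt + sigma(X_t) dB_t."""
    dB = rng.normal(scale=np.sqrt(dt), size=x.shape)
    return x + b(x) * dt + sigma(x) @ dB

# A constant diffusion matrix a; sigma recovers a through (1/2) sigma sigma^T.
a = np.array([[2.0, 0.5],
              [0.5, 1.0]])
s = sigma_from_a(a)
assert np.allclose(0.5 * s @ s.T, a)
```

Any other square root of $2a(x)$ would do equally well, since only $\sigma\sigma^{\top}$ enters the generator.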
For a point $x \in \mathbb{R}^n$, let $\mathbb{P}^x$ denote the law of $X$ given initial datum $X_0 = x$, and let $\mathbb{E}^x$ denote expectation with respect to $\mathbb{P}^x$. Let $\tau_D$ denote the first exit time of $X$ from $D$.
In this notation, the candidate solution for (P1) is:
$$u(x) = \mathbb{E}^x \left[ g\big(X_{\tau_D}\big) \cdot \chi_{\{\tau_D < +\infty\}} \right] + \mathbb{E}^x \left[ \int_0^{\tau_D} f(X_t) \, \mathrm{d} t \right]$$
provided that $g$ is a bounded function and that:

$$\mathbb{E}^x \left[ \int_0^{\tau_D} \big| f(X_t) \big| \, \mathrm{d} t \right] < +\infty$$
It turns out that one further condition is required:
$$\mathbb{P}^x \big( \tau_D < \infty \big) = 1, \quad \forall x \in D$$
In other words, for every $x$, the process $X$ started at $x$ must leave $D$ in finite time almost surely. Under this assumption, the candidate solution above reduces to:
$$u(x) = \mathbb{E}^x \left[ g\big(X_{\tau_D}\big) \right] + \mathbb{E}^x \left[ \int_0^{\tau_D} f(X_t) \, \mathrm{d} t \right]$$
and solves (P1) in the sense that if $\mathcal{A}$ denotes the characteristic operator for $X$ (which agrees with the generator $A$ on $C^2$ functions), then:
$$\begin{cases} -\mathcal{A} u(x) = f(x), & x \in D \\ \lim_{t \uparrow \tau_D} u(X_t) = g\big(X_{\tau_D}\big), & \mathbb{P}^x\text{-a.s. for all } x \in D \end{cases} \quad \text{(P2)}$$
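For a concrete check of the representation, take $X$ to be standard Brownian motion (so $L = \tfrac{1}{2}\Delta$), $D$ the unit disk in $\mathbb{R}^2$, $f \equiv 1$ and $g \equiv 0$; then (P1) reads $-\tfrac{1}{2}\Delta u = 1$ with $u = 0$ on the boundary, whose exact solution is $u(x) = \tfrac{1}{2}(1 - |x|^2)$. A Monte Carlo sketch of the two-term formula (all identifiers are illustrative, not from the source):

```python
import numpy as np

def poisson_estimate(x0, f, g, n_paths=20000, dt=1e-3, seed=1):
    """Monte Carlo estimate of u(x0) = E[g(X_tau)] + E[integral_0^tau f(X_t) dt]
    for X a standard Brownian motion (L = (1/2) Laplacian) on the unit disk."""
    rng = np.random.default_rng(seed)
    pos = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    total = np.zeros(n_paths)
    while alive.any():
        total[alive] += f(pos[alive]) * dt   # running integral of f along the path
        pos[alive] += rng.normal(scale=np.sqrt(dt), size=(alive.sum(), 2))
        r = np.linalg.norm(pos[alive], axis=1)
        idx = np.flatnonzero(alive)[r >= 1.0]
        # add the boundary term at the (projected) first-exit point
        total[idx] += g(pos[idx] / np.linalg.norm(pos[idx], axis=1, keepdims=True))
        alive[idx] = False
    return total.mean()

# With f = 1 and g = 0 the exact solution is u(x) = (1 - |x|^2) / 2,
# so the estimate at (0.5, 0) should be near 0.375.
u_hat = poisson_estimate((0.5, 0.0),
                         f=lambda p: np.ones(len(p)),
                         g=lambda p: np.zeros(len(p)))
```

With $f \equiv 1$ the running integral is just $\tau_D$ itself, so this also illustrates that $u(x) = \mathbb{E}^x[\tau_D]$ for this particular choice of data.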
Moreover, if $v \in C^2(D; \mathbb{R})$ satisfies (P2) and there exists a constant $C$ such that, for all $x \in D$:
$$|v(x)| \leq C \left( 1 + \mathbb{E}^x \left[ \int_0^{\tau_D} \big| g(X_s) \big| \, \mathrm{d} s \right] \right)$$
then $v = u$.
References