Renewal theory

Renewal theory is the branch of probability theory that generalizes the Poisson process to arbitrary holding times. Applications include calculating the best strategy for replacing worn-out machinery in a factory (example below) and comparing the long-term benefits of different insurance policies.

Renewal processes

Introduction

A renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the non-negative integers (usually starting at zero) which has independent, identically distributed, exponentially distributed holding times at each integer ${\displaystyle i}$ before advancing (with probability 1) to the next integer ${\displaystyle i+1}$. In the same informal spirit, we may define a renewal process to be the same thing, except that the holding times take on a more general distribution. (Note, however, that the independent and identically distributed (IID) property of the holding times is retained.)

Formal definition

Sample evolution of a renewal process with holding times Si and jump times Jn.

Let ${\displaystyle S_{1},S_{2},S_{3},S_{4},S_{5},\ldots }$ be a sequence of positive independent identically distributed random variables such that

${\displaystyle 0<\mathbb {E} [S_{i}]<\infty .}$

We refer to the random variable ${\displaystyle S_{i}}$ as the "${\displaystyle i}$-th" holding time.

${\displaystyle \mathbb {E} [S_{i}]}$ is the expectation of ${\displaystyle S_{i}}$.

Define for each ${\displaystyle n>0}$:

${\displaystyle J_{n}=\sum _{i=1}^{n}S_{i},}$

each ${\displaystyle J_{n}}$ is referred to as the "${\displaystyle n}$-th" jump time, and the intervals

${\displaystyle [J_{n},J_{n+1}]}$

are called renewal intervals.

Then the stochastic process ${\displaystyle (X_{t})_{t\geq 0}}$ given by

${\displaystyle X_{t}=\sum _{n=1}^{\infty }\mathbb {I} _{\{J_{n}\leq t\}}=\sup \left\{\,n:J_{n}\leq t\,\right\}}$

where ${\displaystyle \mathbb {I} _{\{J_{n}\leq t\}}}$ is the indicator function

${\displaystyle \mathbb {I} _{\{J_{n}\leq t\}}={\begin{cases}1,&{\mbox{if }}J_{n}\leq t\\0,&{\mbox{otherwise}}\end{cases}}}$

${\displaystyle X_{t}}$ represents the number of jumps that have occurred by time t, and is called a renewal process.
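This definition translates directly into a short simulation. The sketch below (Python with NumPy; the gamma holding-time distribution and the numerical values are illustrative assumptions, not part of the definition) draws IID holding times, accumulates them into jump times ${\displaystyle J_{n}}$, and counts how many fall at or below ${\displaystyle t}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def renewal_count(holding_times, t):
    """X_t = sup{n : J_n <= t}, the number of jumps by time t."""
    jump_times = np.cumsum(holding_times)              # J_n = S_1 + ... + S_n
    return int(np.searchsorted(jump_times, t, side="right"))

# Illustrative holding times: S_i ~ Gamma(shape=2, scale=1), so E[S_i] = 2.
S = rng.gamma(shape=2.0, scale=1.0, size=100_000)
print(renewal_count(S, 1000.0))    # on the order of 1000 / E[S_1] = 500 jumps
```

Here `renewal_count` works for any positive holding-time sequence; only the distribution chosen for `S` is an arbitrary example.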

Interpretation

If one considers events occurring at random times, one may choose to think of the holding times ${\displaystyle \{S_{i}:i\geq 1\}}$ as the random time elapsed between two consecutive[1] events. For example, if the renewal process models a machine that is replaced by a new one each time it breaks down, then the holding time ${\displaystyle S_{i}}$ represents the working lifetime of the ${\displaystyle i}$-th machine, i.e. the time between the ${\displaystyle (i-1)}$-th and the ${\displaystyle i}$-th breakdowns.

Renewal-reward processes

Sample evolution of a renewal-reward process with holding times Si, jump times Jn and rewards Wi

Let ${\displaystyle W_{1},W_{2},\ldots }$ be a sequence of IID random variables (rewards) satisfying

${\displaystyle \mathbb {E} |W_{i}|<\infty .\,}$

Then the random variable

${\displaystyle Y_{t}=\sum _{i=1}^{X_{t}}W_{i}}$

is called a renewal-reward process. Note that unlike the ${\displaystyle S_{i}}$, each ${\displaystyle W_{i}}$ may take negative values as well as positive values.

The random variable ${\displaystyle Y_{t}}$ depends on two sequences: the holding times ${\displaystyle S_{1},S_{2},\ldots }$ and the rewards ${\displaystyle W_{1},W_{2},\ldots }$ These two sequences need not be independent. In particular, ${\displaystyle W_{i}}$ may be a function of ${\displaystyle S_{i}}$.
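As a sketch of this definition (Python; the exponential holding times and the particular reward function are illustrative assumptions), the following computes ${\displaystyle Y_{t}}$ by summing the first ${\displaystyle X_{t}}$ rewards, with each ${\displaystyle W_{i}}$ deliberately chosen as a function of ${\displaystyle S_{i}}$ to illustrate that the two sequences need not be independent:

```python
import numpy as np

rng = np.random.default_rng(1)

def renewal_reward(holding_times, rewards, t):
    """Y_t = W_1 + ... + W_{X_t}, where X_t = sup{n : J_n <= t}."""
    jump_times = np.cumsum(holding_times)
    x_t = int(np.searchsorted(jump_times, t, side="right"))
    return float(rewards[:x_t].sum())

# The pairs (S_i, W_i) are IID across i, but W_i depends on S_i here.
S = rng.exponential(scale=2.0, size=50_000)
W = 3.0 * S - 1.0        # reward (possibly negative) for the i-th interval
print(renewal_reward(S, W, 1000.0))
```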

Interpretation

In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" ${\displaystyle W_{1},W_{2},\ldots }$ (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions.

An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as ${\displaystyle S_{i}}$. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" ${\displaystyle W_{i}}$ are the successive (random) financial losses/gains resulting from successive eggs (i = 1,2,3,...) and ${\displaystyle Y_{t}}$ records the total financial "reward" at time t.

Properties of renewal processes and renewal-reward processes

We define the renewal function as the expected value of the number of jumps observed up to some time ${\displaystyle t}$:

${\displaystyle m(t)=\mathbb {E} [X_{t}].\,}$

The elementary renewal theorem

The renewal function satisfies

${\displaystyle \lim _{t\to \infty }{\frac {1}{t}}m(t)=1/\mathbb {E} [S_{1}].}$

Proof

The strong law of large numbers for renewal processes, proved below under "Asymptotic properties", tells us that

${\displaystyle \lim _{t\to \infty }{\frac {X_{t}}{t}}={\frac {1}{\mathbb {E} [S_{1}]}}.}$

To prove the elementary renewal theorem, it is sufficient to show that ${\displaystyle \left\{{\frac {X_{t}}{t}};t\geq 0\right\}}$ is uniformly integrable.

To do this, consider a truncated renewal process where the holding times are defined by ${\displaystyle {\overline {S_{n}}}=a\mathbb {I} \{S_{n}>a\}}$, where ${\displaystyle a}$ is a point such that ${\displaystyle 0<p:=\mathbb {P} (S_{n}>a)<1}$; such a point exists for every non-deterministic renewal process. This new renewal process ${\displaystyle {\overline {X_{t}}}}$ is an upper bound on ${\displaystyle X_{t}}$, since truncation can only shorten the holding times, and its renewals can only occur on the lattice ${\displaystyle \{na:n\in \mathbb {N} \}}$. Furthermore, the number of renewals at each lattice point is geometric with parameter ${\displaystyle p}$. So we have

{\displaystyle {\begin{aligned}{\overline {X_{t}}}&\leq \sum _{i=1}^{\lceil t/a\rceil +1}G_{i},\qquad G_{i}{\text{ IID Geometric}}(p),\\\mathbb {E} \left[\,{\overline {X_{t}}}^{\,2}\,\right]&\leq C_{1}t+C_{2}t^{2}\quad {\text{for some constants }}C_{1},C_{2}<\infty ,\\\mathbb {P} \left({\frac {X_{t}}{t}}>x\right)&\leq {\frac {\mathbb {E} \left[X_{t}^{2}\right]}{t^{2}x^{2}}}\leq {\frac {\mathbb {E} \left[\,{\overline {X_{t}}}^{\,2}\,\right]}{t^{2}x^{2}}}\leq {\frac {C}{x^{2}}},\end{aligned}}}

where the last line uses Markov's inequality applied to ${\displaystyle (X_{t}/t)^{2}}$; the resulting uniform second-moment bound ${\displaystyle \mathbb {E} [(X_{t}/t)^{2}]\leq C}$ for ${\displaystyle t\geq 1}$ gives the required uniform integrability.
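The theorem is easy to check numerically. The sketch below (Python; exponential holding times with mean 2 are an illustrative assumption, chosen because then ${\displaystyle X_{t}}$ is Poisson and ${\displaystyle m(t)=t/2}$ exactly) estimates ${\displaystyle m(t)=\mathbb {E} [X_{t}]}$ by Monte Carlo and compares ${\displaystyle m(t)/t}$ with ${\displaystyle 1/\mathbb {E} [S_{1}]}$:

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_m(t, mean_s=2.0, n_paths=2000):
    """Monte Carlo estimate of m(t) = E[X_t] for exponential holding times."""
    counts = np.empty(n_paths)
    for k in range(n_paths):
        s = rng.exponential(scale=mean_s, size=int(3 * t / mean_s) + 50)
        counts[k] = np.searchsorted(np.cumsum(s), t, side="right")
    return counts.mean()

t = 200.0
m_t = estimate_m(t)
print(m_t / t)    # should approach 1 / E[S_1] = 0.5 for large t
```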

The elementary renewal theorem for renewal reward processes

We define the reward function:

${\displaystyle g(t)=\mathbb {E} [Y_{t}].\,}$

The reward function satisfies

${\displaystyle \lim _{t\to \infty }{\frac {1}{t}}g(t)={\frac {\mathbb {E} [W_{1}]}{\mathbb {E} [S_{1}]}}.}$

The renewal equation

The renewal function satisfies

${\displaystyle m(t)=F_{S}(t)+\int _{0}^{t}m(t-s)f_{S}(s)\,ds}$

where ${\displaystyle F_{S}}$ is the cumulative distribution function of ${\displaystyle S_{1}}$ and ${\displaystyle f_{S}}$ is the corresponding probability density function.

Proof of the renewal equation

We condition on the first holding time:
${\displaystyle m(t)=\mathbb {E} [X_{t}]=\mathbb {E} [\mathbb {E} (X_{t}\mid S_{1})].\,}$
But by the renewal (regeneration) property,
${\displaystyle \mathbb {E} (X_{t}\mid S_{1}=s)=\mathbb {I} _{\{t\geq s\}}\left(1+\mathbb {E} [X_{t-s}]\right).\,}$
So
{\displaystyle {\begin{aligned}m(t)&{}=\mathbb {E} [X_{t}]\\[12pt]&{}=\mathbb {E} [\mathbb {E} (X_{t}\mid S_{1})]\\[12pt]&{}=\int _{0}^{\infty }\mathbb {E} (X_{t}\mid S_{1}=s)f_{S}(s)\,ds\\[12pt]&{}=\int _{0}^{\infty }\mathbb {I} _{\{t\geq s\}}\left(1+\mathbb {E} [X_{t-s}]\right)f_{S}(s)\,ds\\[12pt]&{}=\int _{0}^{t}\left(1+m(t-s)\right)f_{S}(s)\,ds\\[12pt]&{}=F_{S}(t)+\int _{0}^{t}m(t-s)f_{S}(s)\,ds,\end{aligned}}}
as required.
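The renewal equation is a Volterra integral equation of the second kind and can be solved numerically by marching forward in time. The sketch below (Python; the exponential holding-time density with rate 0.5 is an illustrative assumption, chosen because then ${\displaystyle m(t)=\lambda t}$ exactly) discretizes the convolution with the trapezoidal rule:

```python
import numpy as np

lam = 0.5                              # rate of exponential holding times
ts = np.linspace(0.0, 50.0, 5001)      # time grid
h = ts[1] - ts[0]

F = 1.0 - np.exp(-lam * ts)            # F_S, the CDF of S_1
f = lam * np.exp(-lam * ts)            # f_S, the density of S_1

# March forward in t: m(t_i) = F(t_i) + int_0^{t_i} m(t_i - s) f(s) ds,
# approximating the integral by the trapezoidal rule. The s = 0 endpoint
# involves m(t_i) itself, so we solve for m(t_i) explicitly.
m = np.zeros_like(ts)
for i in range(1, len(ts)):
    vals = m[i::-1] * f[: i + 1]       # m(t_i - s) f(s) at s = 0, h, ..., t_i
    conv = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    m[i] = (F[i] + conv) / (1.0 - 0.5 * h * f[0])

print(m[-1])    # the exponential case has m(t) = lam * t, so about 25 at t = 50
```

The forward march works because the convolution at time ${\displaystyle t_{i}}$ only involves values of ${\displaystyle m}$ at earlier grid points (plus the ${\displaystyle s=0}$ endpoint, handled by the explicit solve).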

Asymptotic properties

${\displaystyle (X_{t})_{t\geq 0}}$ and ${\displaystyle (Y_{t})_{t\geq 0}}$ satisfy

${\displaystyle \lim _{t\to \infty }{\frac {1}{t}}X_{t}={\frac {1}{\mathbb {E} [S_{1}]}}}$ (strong law of large numbers for renewal processes)
${\displaystyle \lim _{t\to \infty }{\frac {1}{t}}Y_{t}={\frac {1}{\mathbb {E} [S_{1}]}}\mathbb {E} [W_{1}]}$ (strong law of large numbers for renewal-reward processes)

almost surely.

Proof

First consider ${\displaystyle (X_{t})_{t\geq 0}}$. By definition we have:
${\displaystyle J_{X_{t}}\leq t\leq J_{X_{t}+1}}$
for all ${\displaystyle t\geq 0}$ and so
${\displaystyle {\frac {J_{X_{t}}}{X_{t}}}\leq {\frac {t}{X_{t}}}\leq {\frac {J_{X_{t}+1}}{X_{t}}}}$
for all t ≥ 0.
Now since ${\displaystyle 0<\mathbb {E} [S_{i}]<\infty }$ we have:
${\displaystyle X_{t}\to \infty }$
as ${\displaystyle t\to \infty }$ almost surely (with probability 1). Hence:
${\displaystyle {\frac {J_{X_{t}}}{X_{t}}}={\frac {J_{n}}{n}}={\frac {1}{n}}\sum _{i=1}^{n}S_{i}\to \mathbb {E} [S_{1}]}$
almost surely (using the strong law of large numbers); similarly:
${\displaystyle {\frac {J_{X_{t}+1}}{X_{t}}}={\frac {J_{X_{t}+1}}{X_{t}+1}}{\frac {X_{t}+1}{X_{t}}}={\frac {J_{n+1}}{n+1}}{\frac {n+1}{n}}\to \mathbb {E} [S_{1}]\cdot 1}$
almost surely.
Thus (since ${\displaystyle t/X_{t}}$ is sandwiched between the two terms)
${\displaystyle {\frac {1}{t}}X_{t}\to {\frac {1}{\mathbb {E} [S_{1}]}}}$
almost surely.
Next consider ${\displaystyle (Y_{t})_{t\geq 0}}$. We have
${\displaystyle {\frac {1}{t}}Y_{t}={\frac {X_{t}}{t}}{\frac {1}{X_{t}}}Y_{t}\to {\frac {1}{\mathbb {E} [S_{1}]}}\cdot \mathbb {E} [W_{1}]}$
almost surely, using the first result together with the strong law of large numbers applied to ${\displaystyle {\frac {1}{X_{t}}}\sum _{i=1}^{X_{t}}W_{i}}$, which converges to ${\displaystyle \mathbb {E} [W_{1}]}$ since ${\displaystyle X_{t}\to \infty }$ almost surely.

The inspection paradox

A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size.

Mathematically the inspection paradox states: for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0:

${\displaystyle \mathbb {P} (S_{X_{t}+1}>x)\geq \mathbb {P} (S_{1}>x)=1-F_{S}(x)}$

where FS is the cumulative distribution function of the IID holding times Si.

The renewal interval determined by the random point t (shown in red) is stochastically larger than the first renewal interval.

Observe that the last jump-time before t is ${\displaystyle J_{X_{t}}}$; and that the renewal interval containing t is ${\displaystyle S_{X_{t}+1}}$. Then

{\displaystyle {\begin{aligned}\mathbb {P} (S_{X_{t}+1}>x)&{}=\int _{0}^{\infty }\mathbb {P} (S_{X_{t}+1}>x\mid J_{X_{t}}=s)f_{J_{X_{t}}}(s)\,ds\\[12pt]&{}=\int _{0}^{\infty }\mathbb {P} (S_{X_{t}+1}>x\mid S_{X_{t}+1}>t-s)f_{J_{X_{t}}}(s)\,ds\\[12pt]&{}=\int _{0}^{\infty }{\frac {\mathbb {P} (S_{X_{t}+1}>x\,,\,S_{X_{t}+1}>t-s)}{\mathbb {P} (S_{X_{t}+1}>t-s)}}f_{J_{X_{t}}}(s)\,ds\\[12pt]&{}=\int _{0}^{\infty }{\frac {1-F(\max\{x,t-s\})}{1-F(t-s)}}f_{J_{X_{t}}}(s)\,ds\\[12pt]&{}=\int _{0}^{\infty }\min \left\{{\frac {1-F(x)}{1-F(t-s)}},1\right\}f_{J_{X_{t}}}(s)\,ds\\[12pt]&{}\geq \int _{0}^{\infty }\left(1-F(x)\right)f_{J_{X_{t}}}(s)\,ds\\[12pt]&{}=1-F(x),\end{aligned}}}

where the inequality holds because each branch of the minimum is at least ${\displaystyle 1-F(x)}$: the first branch because ${\displaystyle 1-F(t-s)\leq 1}$, and the second because ${\displaystyle 1-F(x)\leq 1}$,

and we know that

${\displaystyle 1-F(x)=\mathbb {P} (S_{1}>x)}$

which ends the proof. Q.E.D.
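The paradox is easy to observe in simulation. The sketch below (Python; exponential holding times with mean 1 are an illustrative assumption) records the length of the interval ${\displaystyle S_{X_{t}+1}}$ covering a fixed time ${\displaystyle t}$; for exponential holding times and large ${\displaystyle t}$, the covering interval has mean about ${\displaystyle 2\,\mathbb {E} [S_{1}]}$, twice a typical interval:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_covering_interval(t, mean_s=1.0, n_paths=20_000):
    """Average length of the renewal interval S_{X_t + 1} containing time t."""
    lengths = np.empty(n_paths)
    for k in range(n_paths):
        s = rng.exponential(scale=mean_s, size=int(3 * t / mean_s) + 50)
        idx = np.searchsorted(np.cumsum(s), t, side="right")   # this is X_t
        lengths[k] = s[idx]                # s[idx] is S_{X_t + 1} (0-indexed)
    return lengths.mean()

avg = mean_covering_interval(t=50.0)
print(avg)    # about 2.0, versus E[S_1] = 1.0 for a typical interval
```

For exponential holding times the covering interval is the sum of the age and the residual life of the interval in progress, each approximately Exp(1) when t is large, which explains the factor of two.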

Superposition

The superposition of independent renewal processes is not in general a renewal process, but it can be described within a larger class of processes called Markov renewal processes.[2] However, the cumulative distribution function of the first inter-event time in the superposition process is given by[3]

${\displaystyle R(t)=1-\sum _{k=1}^{K}{\frac {\alpha _{k}}{\sum _{l=1}^{K}\alpha _{l}}}(1-R_{k}(t))\prod _{j=1,j\neq k}^{K}\alpha _{j}\int _{t}^{\infty }(1-R_{j}(u)){\text{d}}u}$

where ${\displaystyle R_{k}(t)}$ is the CDF of the inter-event times and ${\displaystyle \alpha _{k}>0}$ is the arrival rate of process ${\displaystyle k}$.[4]
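A useful sanity check of this formula (Python sketch; the helper function name and the choice of Poisson component processes are my own illustrative assumptions): when every component is a Poisson process, the superposition is itself Poisson with rate ${\displaystyle \sum _{l}\alpha _{l}}$, so the formula should reduce to ${\displaystyle R(t)=1-e^{-(\sum _{l}\alpha _{l})t}}$.

```python
import numpy as np

def superposition_cdf(t, alphas, cdfs, tails):
    """R(t) per the formula above. cdfs[k] is R_k, and tails[k](t) must
    return int_t^infinity (1 - R_k(u)) du for component process k."""
    total = sum(alphas)
    acc = 0.0
    for k in range(len(alphas)):
        prod = 1.0
        for j in range(len(alphas)):
            if j != k:
                prod *= alphas[j] * tails[j](t)
        acc += (alphas[k] / total) * (1.0 - cdfs[k](t)) * prod
    return 1.0 - acc

# Two Poisson components: R_k(t) = 1 - exp(-a_k t) and
# int_t^inf exp(-a_k u) du = exp(-a_k t) / a_k.
a = [0.5, 1.5]
cdfs = [lambda t, ak=ak: 1.0 - np.exp(-ak * t) for ak in a]
tails = [lambda t, ak=ak: np.exp(-ak * t) / ak for ak in a]
for t in (0.1, 1.0, 3.0):
    print(superposition_cdf(t, a, cdfs, tails), 1.0 - np.exp(-sum(a) * t))
```

The two printed columns agree: each summand reduces to ${\displaystyle {\frac {\alpha _{k}}{\sum _{l}\alpha _{l}}}e^{-(\sum _{l}\alpha _{l})t}}$, so the sum telescopes to the exponential tail with the summed rate.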

Example applications

Example 1: use of the strong law of large numbers

Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional at a cost of €200.

What is his optimal replacement policy?

Solution

The lifetime of the n machines can be modeled as n independent concurrent renewal-reward processes, so it is sufficient to consider the case n=1. Denote this process by ${\displaystyle (Y_{t})_{t\geq 0}}$. The successive lifetimes S of the replacement machines are independent and identically distributed, so the optimal policy is the same for all replacement machines in the process.

If Eric decides at the start of a machine's life to replace it at age 0 < t < 2, then given that the machine fails before that age, its lifetime S is uniformly distributed on [0, t] and thus has conditional expectation 0.5t. So the overall expected lifetime of the machine is:

{\displaystyle {\begin{aligned}\mathbb {E} [S]&=\mathbb {E} [S\mid {\mbox{fails before }}t]\cdot \mathbb {P} [{\mbox{fails before }}t]+\mathbb {E} [S\mid {\mbox{does not fail before }}t]\cdot \mathbb {P} [{\mbox{does not fail before }}t]\\&={\frac {t}{2}}\left(0.5t\right)+{\frac {2-t}{2}}\left(t\right)\end{aligned}}}

and the expected cost W per machine is:

{\displaystyle {\begin{aligned}\mathbb {E} [W]&=\mathbb {E} [W\mid {\text{fails before }}t]\cdot \mathbb {P} ({\text{fails before }}t)+\mathbb {E} [W\mid {\text{does not fail before }}t]\cdot \mathbb {P} ({\text{does not fail before }}t)\\&={\frac {t}{2}}(2600)+{\frac {2-t}{2}}(200)=1200t+200.\end{aligned}}}

So by the strong law of large numbers for renewal-reward processes, his long-term average cost per unit time under the policy "replace at age ${\displaystyle t}$" is:

${\displaystyle \lim _{T\to \infty }{\frac {1}{T}}Y_{T}={\frac {\mathbb {E} [W]}{\mathbb {E} [S]}}={\frac {4(1200t+200)}{4t-t^{2}}},}$

then differentiating with respect to t:

${\displaystyle {\frac {\partial }{\partial t}}{\frac {4(1200t+200)}{4t-t^{2}}}=4\,{\frac {(4t-t^{2})(1200)-(4-2t)(1200t+200)}{(4t-t^{2})^{2}}},}$

this implies that the turning points satisfy:

{\displaystyle {\begin{aligned}0&=(4t-t^{2})(1200)-(4-2t)(1200t+200)=4800t-1200t^{2}-4800t-800+2400t^{2}+400t\\&=-800+400t+1200t^{2},\end{aligned}}}

and thus

${\displaystyle 0=3t^{2}+t-2=(3t-2)(t+1).}$

We take the only root in (0, 2): t = 2/3, so Eric should replace each machine when it reaches age 2/3 of a year. This is indeed a minimum (not a maximum): the cost per unit time tends to infinity as t tends to zero, and t = 2/3 is the only stationary point in the interval, so the cost decreases on (0, 2/3) and increases thereafter.
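The optimization can be double-checked numerically. The sketch below (Python; `average_cost` is a hypothetical helper encoding ${\displaystyle \mathbb {E} [W]/\mathbb {E} [S]}$ from the solution above) evaluates the long-run cost on a fine grid over (0, 2):

```python
import numpy as np

def average_cost(t):
    """Long-run cost per year, E[W] / E[S], for the replace-at-age-t policy."""
    expected_cost = 1200.0 * t + 200.0       # E[W] in euros
    expected_life = t - t * t / 4.0          # E[S] = (4t - t^2) / 4 years
    return expected_cost / expected_life

ts = np.linspace(0.01, 1.99, 100_000)
best = ts[np.argmin(average_cost(ts))]
print(best)                     # close to the analytic optimum t = 2/3
print(average_cost(2.0 / 3.0))  # minimal cost: 1000 / (5/9) = 1800 euros/year
```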