Divergence (computer science)

From Wikipedia, the free encyclopedia

In computer science, a computation is said to diverge if it does not terminate or terminates in an (unobservable) exceptional state. Otherwise it is said to converge. In domains where computations are expected to be infinite, such as process calculi, a computation is said to diverge if it fails to be productive (i.e. if it does not continue to produce an action within a finite amount of time).
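The distinction between a productive infinite computation and a divergent one can be sketched in Python, assuming (as an illustration, not a formal model) that generators stand in for computations and each yielded value is an observable action:

```python
from itertools import islice

def naturals():
    """A productive computation: it never terminates, but it always
    produces its next action (here, the next number) in finite time."""
    n = 0
    while True:
        yield n
        n += 1

def diverging():
    """A divergent computation: it also never terminates, but it never
    produces any action at all."""
    while True:
        pass       # spins forever without yielding
    yield None     # unreachable; only makes this a generator function

# Any finite prefix of a productive computation is observable:
print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
```

Asking `diverging()` for even one value would hang forever, which is exactly the failure of productivity described above.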


Various subfields of computer science use varying, but mathematically precise, definitions of what it means for a computation to converge or diverge.


Rewriting[edit]

In abstract rewriting a reduction is called convergent if and only if it is both confluent and terminating.[1] The notation t ↓ n means that term t reduces to normal form n in zero or more reductions, t↓ means t reduces to some normal form in zero or more reductions, and t↑ means t does not reduce to a normal form.
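A minimal sketch of a convergent system, using an invented four-term example (the terms a, b, c, d and their rules are assumptions for illustration):

```python
# A tiny abstract rewriting system, given as a relation on terms.
# The rules are terminating (every reduction chain is finite) and
# confluent (the fork from 'a' rejoins at 'd'), so the system is
# convergent: every term reduces to a unique normal form.
RULES = {
    "a": ["b", "c"],  # a -> b and a -> c: a local fork...
    "b": ["d"],       # b -> d
    "c": ["d"],       # c -> d: ...that rejoins, giving confluence
}

def normal_form(term):
    """Reduce until no rule applies; the result is t's normal form."""
    while term in RULES:
        term = RULES[term][0]  # pick any applicable rule
    return term

print(normal_form("a"))  # 'd', no matter which branch is reduced first
```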

Lambda calculus[edit]

In the lambda calculus an expression is divergent if it has no normal form.[2]
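The classic divergent term is Ω = (λx. x x)(λx. x x), which β-reduces to itself and so never reaches a normal form. In a Python sketch, where each self-application consumes a call frame, the divergence surfaces as a RecursionError rather than a true infinite loop:

```python
# Omega = (λx. x x)(λx. x x): reducing it only reproduces Omega,
# so it has no normal form.
omega = lambda x: x(x)

try:
    omega(omega)  # attempt to evaluate Omega
except RecursionError:
    print("diverged: exceeded the interpreter's recursion limit")
```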

Denotational semantics[edit]

In denotational semantics an object function f : A → B can be modelled as a mathematical function f : A ∪ {⊥} → B ∪ {⊥} where ⊥ (bottom) indicates that the object function or its argument diverges.
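A minimal sketch of this lifting in Python, assuming a sentinel object stands in for ⊥ and that lifted functions are strict (they map ⊥ to ⊥); the `reciprocal` example is an invented object function:

```python
# BOTTOM is a sentinel standing for divergence (⊥).
BOTTOM = object()

def lift(f):
    """Turn f : A -> B ∪ {⊥} into a strict f : A ∪ {⊥} -> B ∪ {⊥}."""
    def f_lifted(x):
        if x is BOTTOM:    # a diverging argument...
            return BOTTOM  # ...makes the application diverge as well
        return f(x)
    return f_lifted

# A partial "object function": it diverges on input 0.
def reciprocal(x):
    return BOTTOM if x == 0 else 1 / x

recip = lift(reciprocal)
print(recip(4.0))               # 0.25
print(recip(0) is BOTTOM)       # True: the function diverges at 0
print(recip(BOTTOM) is BOTTOM)  # True: strictness in the argument
```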

Concurrency theory[edit]

In the calculus of communicating sequential processes (CSP), divergence is a drastic situation where a process performs an endless series of hidden actions. For example, consider the following process, defined in CSP notation:

Clock = tick → Clock

The traces of this process are defined as:

traces(Clock) = {⟨⟩, ⟨tick⟩, ⟨tick, tick⟩, …} = {tick}*

Now, consider the following process, which conceals the tick event of the Clock process:

P = Clock \ tick

Since the only event of Clock is now hidden, P can do nothing but perform an endless series of hidden actions; by definition, P is a divergent process.
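As an illustrative sketch (not CSP semantics proper), assuming Python generators stand in for processes and yielded strings for events, hiding can be modelled as filtering events out of the observable stream:

```python
from itertools import islice

def clock():
    """Clock = tick → Clock: an endless stream of tick events."""
    while True:
        yield "tick"

def hide(process, hidden):
    """P = process \ hidden: events in `hidden` become internal."""
    for event in process:
        if event not in hidden:
            yield event  # only visible events reach the environment

# Every finite trace of Clock is a sequence of ticks:
print(list(islice(clock(), 3)))  # ['tick', 'tick', 'tick']

# A finite approximation of P = Clock \ tick: after 1000 internal
# steps, not a single observable event has been produced.  Waiting on
# the real (unbounded) P for even one visible event would never return.
print(list(hide(islice(clock(), 1000), {"tick"})))  # []
```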

References[edit]


  1. ^ Baader & Nipkow 1998, p. 9.
  2. ^ Pierce 2002, p. 65.