# Rewriting


In mathematics, computer science, and logic, rewriting covers a wide range of (potentially non-deterministic) methods of replacing subterms of a formula with other terms. The objects of focus for this article include rewriting systems (also known as rewrite systems, rewrite engines[1] or reduction systems). In their most basic form, they consist of a set of objects, plus relations on how to transform those objects.

Rewriting can be non-deterministic. One rule to rewrite a term could be applied in many different ways to that term, or more than one rule could be applicable. Rewriting systems then do not provide an algorithm for changing one term to another, but a set of possible rule applications. When combined with an appropriate algorithm, however, rewrite systems can be viewed as computer programs, and several theorem provers[2] and declarative programming languages are based on term rewriting.[3][4]

## Intuitive examples

### Logic

In logic, the procedure for obtaining the conjunctive normal form (CNF) of a formula can be implemented as a rewriting system.[5] The rules of an example of such a system would be:

${\displaystyle \neg \neg A\to A}$ (double negation elimination)
${\displaystyle \neg (A\land B)\to \neg A\lor \neg B}$ (De Morgan's laws)
${\displaystyle \neg (A\lor B)\to \neg A\land \neg B}$
${\displaystyle (A\land B)\lor C\to (A\lor C)\land (B\lor C)}$ (distributivity)
${\displaystyle A\lor (B\land C)\to (A\lor B)\land (A\lor C),}$[note 1]

where the symbol (${\displaystyle \to }$) indicates that an expression matching the left hand side of the rule can be rewritten to one formed by the right hand side, and the symbols each denote a subexpression. In such a system, each rule is chosen so that the left side is equivalent to the right side, and consequently when the left side matches a subexpression, performing a rewrite of that subexpression from left to right maintains logical consistency and value of the entire expression.
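These rules are simple enough to execute directly. The sketch below encodes formulas as nested Python tuples `('not', A)`, `('and', A, B)`, `('or', A, B)` with atoms as strings (an illustrative representation, not a standard one) and applies the rules innermost-first until no rule matches:

```python
def rewrite_step(t):
    """Try each CNF rule at the root of t; return the rewritten term or None."""
    if not isinstance(t, tuple):
        return None
    if t[0] == 'not' and isinstance(t[1], tuple):
        inner = t[1]
        if inner[0] == 'not':                       # double negation elimination
            return inner[1]
        if inner[0] == 'and':                       # De Morgan: not(A and B)
            return ('or', ('not', inner[1]), ('not', inner[2]))
        if inner[0] == 'or':                        # De Morgan: not(A or B)
            return ('and', ('not', inner[1]), ('not', inner[2]))
    if t[0] == 'or':
        a, b = t[1], t[2]
        if isinstance(a, tuple) and a[0] == 'and':  # (A and B) or C distributes
            return ('and', ('or', a[1], b), ('or', a[2], b))
        if isinstance(b, tuple) and b[0] == 'and':  # A or (B and C) distributes
            return ('and', ('or', a, b[1]), ('or', a, b[2]))
    return None

def to_cnf(t):
    """Normalize subterms first, then keep rewriting the root to a fixpoint."""
    if isinstance(t, tuple):
        t = (t[0],) + tuple(to_cnf(s) for s in t[1:])
    r = rewrite_step(t)
    return to_cnf(r) if r is not None else t
```

With this strategy the system terminates; for example, `to_cnf(('not', ('and', 'A', 'B')))` yields `('or', ('not', 'A'), ('not', 'B'))`.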

### Linguistics

In linguistics, rewrite rules, also called phrase structure rules, are used in some systems of generative grammar,[6] as a means of generating the grammatically correct sentences of a language. Such a rule typically takes the form A → X, where A is a syntactic category label, such as noun phrase or sentence, and X is a sequence of such labels or morphemes, expressing the fact that A can be replaced by X in generating the constituent structure of a sentence. For example, the rule S → NP VP means that a sentence can consist of a noun phrase followed by a verb phrase; further rules will specify what sub-constituents a noun phrase and a verb phrase can consist of, and so on.
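A rule set of this shape can be run as a toy generator. The mini-grammar below is hypothetical, chosen only to illustrate leftmost rewriting of category labels into words:

```python
# Phrase structure rules A -> X as a map from a category label to the
# alternative sequences it may be rewritten to (grammar is illustrative).
GRAMMAR = {
    'S':  [['NP', 'VP']],
    'NP': [['the', 'N']],
    'VP': [['V', 'NP']],
    'N':  [['cat'], ['dog']],
    'V':  [['sees']],
}

def expand(symbols, choice=0):
    """Leftmost rewrite: replace the first nonterminal using rule `choice`."""
    for i, s in enumerate(symbols):
        if s in GRAMMAR:
            return symbols[:i] + GRAMMAR[s][choice] + symbols[i + 1:]
    return symbols  # all symbols are terminal

def derive(start='S'):
    """Expand leftmost nonterminals (always alternative 0) until only words remain."""
    symbols = [start]
    while any(s in GRAMMAR for s in symbols):
        symbols = expand(symbols)
    return ' '.join(symbols)
```

Choosing other alternatives at each step generates the other sentences of the language; with alternative 0 throughout, `derive()` produces "the cat sees the cat".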

## Abstract rewriting systems

From the above examples, it is clear that we can think of rewriting systems in an abstract manner. We need to specify a set of objects and the rules that can be applied to transform them. The most general (unidimensional) setting of this notion is called an abstract reduction system, (abbreviated ARS), although more recently authors use abstract rewriting system as well.[7] (The preference for the word "reduction" here instead of "rewriting" constitutes a departure from the uniform use of "rewriting" in the names of systems that are particularizations of ARS. Because the word "reduction" does not appear in the names of more specialized systems, in older texts reduction system is a synonym for ARS).[8]

An ARS is simply a set A, whose elements are usually called objects, together with a binary relation on A, traditionally denoted by →, and called the reduction relation, rewrite relation[9] or just reduction.[8] This (entrenched) terminology using "reduction" is a little misleading, because the relation is not necessarily reducing some measure of the objects; this will become more apparent when we discuss string-rewriting systems further in this article.

Example 1. Suppose the set of objects is T = {a, b, c} and the binary relation is given by the rules a → b, b → a, a → c, and b → c. Observe that these rules can be applied to both a and b in any fashion to get the term c. Such a property is clearly an important one. Note also, that c is, in a sense, a "simplest" term in the system, since nothing can be applied to c to transform it any further. This example leads us to define some important notions in the general setting of an ARS. First we need some basic notions and notations.[10]

• ${\displaystyle {\stackrel {*}{\rightarrow }}}$ is the transitive closure of ${\displaystyle \rightarrow \cup =}$, where = is the identity relation, i.e. ${\displaystyle {\stackrel {*}{\rightarrow }}}$ is the smallest preorder (reflexive and transitive relation) containing ${\displaystyle \rightarrow }$. It is also called the reflexive transitive closure of ${\displaystyle \rightarrow }$.
• ${\displaystyle \leftrightarrow }$ is ${\displaystyle \rightarrow \cup \rightarrow ^{-1}}$, that is, the union of the relation → with its converse relation, also known as the symmetric closure of ${\displaystyle \rightarrow }$.
• ${\displaystyle {\stackrel {*}{\leftrightarrow }}}$ is the transitive closure of ${\displaystyle \leftrightarrow \cup =}$, that is ${\displaystyle {\stackrel {*}{\leftrightarrow }}}$ is the smallest equivalence relation containing ${\displaystyle \rightarrow }$. It is also known as the reflexive transitive symmetric closure of ${\displaystyle \rightarrow }$.
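For a finite relation these closures can be computed by fixpoint iteration. A sketch, with a relation represented as a set of (x, y) pairs (the helper name `rt_closure` is ours); the symmetric closure, and hence ${\displaystyle {\stackrel {*}{\leftrightarrow }}}$, is obtained by first adding all inverted pairs:

```python
def rt_closure(pairs, domain):
    """Reflexive transitive closure ->* of a relation given as (x, y) pairs."""
    closure = {(x, x) for x in domain} | set(pairs)   # reflexivity
    while True:
        # compose the relation with itself and keep anything new
        extra = {(x, z)
                 for (x, y) in closure
                 for (y2, z) in closure if y == y2}
        if extra <= closure:
            return closure
        closure |= extra
```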

### Normal forms, joinability and the word problem

An object x in A is called reducible if there exists some other y in A such that ${\displaystyle x\rightarrow y}$; otherwise it is called irreducible or a normal form. An object y is called a normal form of x if ${\displaystyle x{\stackrel {*}{\,\rightarrow \,}}y}$, and y is irreducible. If x has a unique normal form, then this is usually denoted with ${\displaystyle x{\downarrow }}$. In example 1 above, c is a normal form, and ${\displaystyle c=a{\downarrow }=b{\downarrow }}$. If every object has at least one normal form, the ARS is called normalizing.

A related, but weaker notion than the existence of normal forms is that of two objects being joinable: x and y are said to be joinable if there exists some z with the property that ${\displaystyle x{\stackrel {*}{\,\rightarrow \,}}z{\stackrel {*}{\,\leftarrow \,}}y}$. From this definition, it is apparent that one may define the joinability relation as ${\displaystyle {\stackrel {*}{\,\rightarrow }}\circ {\stackrel {*}{\,\leftarrow }}}$, where ${\displaystyle \circ }$ is the composition of relations. Joinability is usually denoted, somewhat confusingly, also with ${\displaystyle \downarrow }$, but in this notation the down arrow is a binary relation, i.e. we write ${\displaystyle x\downarrow y}$ if x and y are joinable.
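For the finite relation of Example 1, normal forms and joinability reduce to graph search. A minimal sketch, with the one-step relation stored as a successor dictionary (an illustrative encoding; function names are ours):

```python
# Example 1: a -> b, b -> a, a -> c, b -> c, as a successor map.
ARS = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': set()}

def star(x):
    """All y with x ->* y (reachability search)."""
    seen, todo = {x}, [x]
    while todo:
        for y in ARS.get(todo.pop(), ()):
            if y not in seen:
                seen.add(y)
                todo.append(y)
    return seen

def normal_forms(x):
    """Irreducible objects reachable from x."""
    return {y for y in star(x) if not ARS.get(y)}

def joinable(x, y):
    """x ↓ y: some z with x ->* z <-* y."""
    return bool(star(x) & star(y))
```

As the article states, c is the unique normal form of both a and b, and any two of the three objects are joinable here.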

One of the important problems that may be formulated in an ARS is the word problem: given x and y, are they equivalent under ${\displaystyle {\stackrel {*}{\,\leftrightarrow \,}}}$? This is a very general setting for formulating the word problem for the presentation of an algebraic structure. For instance, the word problem for groups is a particular case of an ARS word problem. Central to an "easy" solution for the word problem is the existence of unique normal forms: in this case if two objects have the same normal form, then they are equivalent under ${\displaystyle {\stackrel {*}{\leftrightarrow }}}$. The word problem for an ARS is undecidable in general.

### The Church–Rosser property and confluence

An ARS is said to possess the Church–Rosser property if ${\displaystyle x{\stackrel {*}{\,\leftrightarrow \,}}y}$ implies ${\displaystyle x\downarrow y}$. In words, the Church–Rosser property means that any two equivalent objects are joinable. Alonzo Church and J. Barkley Rosser proved in 1936 that lambda calculus has this property;[11] hence the name of the property.[12] (That lambda calculus has this property is also known as the Church–Rosser theorem.) In an ARS with the Church–Rosser property the word problem may be reduced to the search for a common successor. In a Church–Rosser system, an object has at most one normal form; that is, the normal form of an object is unique if it exists, but it may well not exist.

Several different properties are equivalent to the Church–Rosser property, but may be simpler to check in some particular setting. In particular, confluence is equivalent to Church–Rosser. An ARS ${\displaystyle (A,\rightarrow )}$ is said to be:

• confluent if for all w, x, and y in A, ${\displaystyle x{\stackrel {*}{\,\leftarrow \,}}w{\stackrel {*}{\,\rightarrow \,}}y}$ implies ${\displaystyle x\downarrow y}$. Roughly speaking, confluence says that no matter how two paths diverge from a common ancestor (w), the paths join at some common successor. This notion may be refined to a property of a particular object w, and the system is called confluent if all its elements are confluent.
• locally confluent if for all w, x, and y in A, ${\displaystyle x\leftarrow w\rightarrow y}$ implies ${\displaystyle x\downarrow y}$. This property is sometimes called weak confluence.

Theorem. For an ARS the following conditions are equivalent: (i) it has the Church–Rosser property, (ii) it is confluent.[13]

Corollary.[14] In a confluent ARS if ${\displaystyle x{\stackrel {*}{\,\leftrightarrow \,}}y}$ then

• If both x and y are normal forms, then x = y.
• If y is a normal form, then ${\displaystyle x{\stackrel {*}{\,\rightarrow \,}}y}$

Because of these equivalences, a fair bit of variation in definitions is encountered in the literature. For instance, in Bezem et al. 2003 the Church–Rosser property and confluence are defined to be synonymous and identical to the definition of confluence presented here; Church–Rosser as defined here remains unnamed, but is given as an equivalent property; this departure from other texts is deliberate.[15] Because of the above corollary, in a confluent ARS one may define a normal form y of x as an irreducible y with the property that ${\displaystyle x{\stackrel {*}{\,\leftrightarrow \,}}y}$. This definition, found in Book and Otto, is equivalent to the common one given here in a confluent system, but it is more inclusive[note 2] in a non-confluent ARS.

Local confluence on the other hand is not equivalent with the other notions of confluence given in this section, but it is strictly weaker than confluence. The relation ${\displaystyle a\rightarrow b,\;b\rightarrow a,\;a\rightarrow c,\;b\rightarrow d}$ is locally confluent, but not confluent, as ${\displaystyle c}$ and ${\displaystyle d}$ are equivalent, but not joinable.[16]
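This counterexample can be checked mechanically. The sketch below encodes the four rules as a successor map and tests both properties by brute force over all peaks (function names and encoding are ours):

```python
# a -> b, b -> a, a -> c, b -> d, as a successor map.
REL = {'a': {'b', 'c'}, 'b': {'a', 'd'}, 'c': set(), 'd': set()}

def star(x):
    """All y with x ->* y."""
    seen, todo = {x}, [x]
    while todo:
        for y in REL.get(todo.pop(), ()):
            if y not in seen:
                seen.add(y)
                todo.append(y)
    return seen

def joinable(x, y):
    return bool(star(x) & star(y))

def locally_confluent():
    """Every one-step peak x <- w -> y is joinable."""
    return all(joinable(x, y)
               for succs in REL.values()
               for x in succs for y in succs)

def confluent():
    """Every many-step peak x <-* w ->* y is joinable."""
    return all(joinable(x, y)
               for w in REL
               for x in star(w) for y in star(w))
```

The check confirms the claim: every one-step peak is joinable, but the many-step peak c ←* a →* d is not.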

### Termination and convergence

An abstract rewriting system is said to be terminating or noetherian if there is no infinite chain ${\displaystyle x_{0}\rightarrow x_{1}\rightarrow x_{2}\rightarrow \cdots }$. In a terminating ARS, every object has at least one normal form, thus it is normalizing. The converse is not true. In example 1 for instance, there is an infinite rewriting chain, namely ${\displaystyle a\rightarrow b\rightarrow a\rightarrow b\rightarrow \cdots }$, even though the system is normalizing. A confluent and terminating ARS is called convergent. In a convergent ARS, every object has a unique normal form.
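For a finite ARS, "no infinite chain" is equivalent to the one-step relation being acyclic as a directed graph: an infinite chain over finitely many objects must revisit some object, yielding a cycle, and conversely any cycle yields an infinite chain. A depth-first-search sketch of this check (function name is ours):

```python
def terminating(rel):
    """For a finite ARS given as a successor map: True iff the graph is acyclic."""
    GRAY, BLACK = 1, 2          # GRAY: on the current DFS path; BLACK: done
    color = {}
    def dfs(x):
        color[x] = GRAY
        for y in rel.get(x, ()):
            c = color.get(y)
            if c == GRAY:        # back edge: a cycle, hence an infinite chain
                return False
            if c is None and not dfs(y):
                return False
        color[x] = BLACK
        return True
    return all(dfs(x) for x in rel if x not in color)
```

On Example 1 this reports non-termination (because of the cycle a → b → a), even though that system is normalizing.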

Theorem (Newman's Lemma): A terminating ARS is confluent if and only if it is locally confluent.

## String rewriting systems

A string rewriting system (SRS), also known as a semi-Thue system, exploits the free monoid structure of the strings (words) over an alphabet to extend a rewriting relation ${\displaystyle R}$ to all strings in the alphabet that contain left- and respectively right-hand sides of some rules as substrings. Formally a semi-Thue system is a tuple ${\displaystyle (\Sigma ,R)}$ where ${\displaystyle \Sigma }$ is a (usually finite) alphabet, and ${\displaystyle R}$ is a binary relation between some (fixed) strings in the alphabet, called rewrite rules. The one-step rewriting relation ${\displaystyle {\xrightarrow[{R}]{}}}$ induced by ${\displaystyle R}$ on ${\displaystyle \Sigma ^{*}}$ is defined as: for any strings ${\displaystyle s,t\in \Sigma ^{*}}$, ${\displaystyle s\,{\xrightarrow[{R}]{}}\,t}$ if and only if there exist ${\displaystyle x,y,u,v\in \Sigma ^{*}}$ such that ${\displaystyle s=xuy}$, ${\displaystyle t=xvy}$, and ${\displaystyle uRv}$. Since ${\displaystyle {\xrightarrow[{R}]{}}}$ is a relation on ${\displaystyle \Sigma ^{*}}$, the pair ${\displaystyle (\Sigma ^{*},{\xrightarrow[{R}]{}})}$ fits the definition of an abstract rewriting system. Obviously ${\displaystyle R}$ is a subset of ${\displaystyle {\xrightarrow[{R}]{}}}$. If the relation ${\displaystyle R}$ is symmetric, then the system is called a Thue system.
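When ${\displaystyle \Sigma }$ and ${\displaystyle R}$ are finite, the one-step relation can be enumerated directly. A sketch, with rules encoded as (lhs, rhs) pairs (the function name and encoding are ours):

```python
def one_step(s, rules):
    """All t with s -> t: each rule (lhs, rhs) applied at each occurrence of lhs."""
    results = set()
    for lhs, rhs in rules:
        if not lhs:              # an empty lhs would match everywhere; skip it
            continue
        i = s.find(lhs)
        while i != -1:
            results.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return results
```

For instance, with the single rule ab → ba, the string aab rewrites in one step only to aba.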

In a SRS, the reduction relation ${\displaystyle {\xrightarrow[{R}]{*}}}$ is compatible with the monoid operation, meaning that ${\displaystyle x\,{\xrightarrow[{R}]{*}}\,y}$ implies ${\displaystyle uxv\,{\xrightarrow[{R}]{*}}\,uyv}$ for all strings ${\displaystyle x,y,u,v\in \Sigma ^{*}}$. Similarly, the reflexive transitive symmetric closure of ${\displaystyle {\xrightarrow[{R}]{}}}$, denoted ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$, is a congruence, meaning it is an equivalence relation (by definition) and it is also compatible with string concatenation. The relation ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ is called the Thue congruence generated by ${\displaystyle R}$. In a Thue system, i.e. if ${\displaystyle R}$ is symmetric, the rewrite relation ${\displaystyle {\xrightarrow[{R}]{*}}}$ coincides with the Thue congruence ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$.

The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Since ${\displaystyle {\overset {*}{\underset {R}{\leftrightarrow }}}}$ is a congruence, we can define the factor monoid ${\displaystyle {\mathcal {M}}_{R}=\Sigma ^{*}/{\overset {*}{\underset {R}{\leftrightarrow }}}}$ of the free monoid ${\displaystyle \Sigma ^{*}}$ by the Thue congruence in the usual manner. If a monoid ${\displaystyle {\mathcal {M}}}$ is isomorphic with ${\displaystyle {\mathcal {M}}_{R}}$, then the semi-Thue system ${\displaystyle (\Sigma ,R)}$ is called a monoid presentation of ${\displaystyle {\mathcal {M}}}$.

We immediately get some very useful connections with other areas of algebra. For example, the alphabet {a, b} with the rules { ab → ε, ba → ε }, where ε is the empty string, is a presentation of the free group on one generator. If instead the rules are just { ab → ε }, then we obtain a presentation of the bicyclic monoid. Thus semi-Thue systems constitute a natural framework for solving the word problem for monoids and groups. In fact, every monoid has a presentation of the form ${\displaystyle (\Sigma ,R)}$, i.e. it may always be presented by a semi-Thue system, possibly over an infinite alphabet.
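The free-group presentation { ab → ε, ba → ε } is terminating (each step shortens the word) and its overlaps are joinable, so every word has a unique normal form of the shape a…a or b…b. That normal form can be computed with a single stack pass; a sketch (the function name is ours):

```python
def reduce_word(w):
    """Cancel adjacent 'ab' / 'ba' pairs until none remain (a stack pass)."""
    stack = []
    for ch in w:
        if stack and {stack[-1], ch} == {'a', 'b'}:
            stack.pop()          # stack top plus ch reads 'ab' or 'ba': cancel
        else:
            stack.append(ch)
    return ''.join(stack)
```

Interpreting a as a generator x and b as its inverse, `reduce_word('babbab')` returns 'bb', i.e. x⁻².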

The word problem for a semi-Thue system is undecidable in general; this result is sometimes known as the Post-Markov theorem.[17]

## Term rewriting systems

Pic.1: Schematic triangle diagram of application of a rewrite rule ${\displaystyle l\longrightarrow r}$ at position ${\displaystyle p}$ in a term, with matching substitution ${\displaystyle \sigma }$
Pic.2: Rule lhs term ${\displaystyle x*(y*z)}$ matching in term ${\displaystyle {\frac {a*((a+1)*(a+2))}{1*(2*3)}}}$

A term rewriting system (TRS) is a rewriting system whose objects are terms, which are expressions with nested sub-expressions. For example, the system shown under § Logic above is a term rewriting system. The terms in this system are composed of binary operators ${\displaystyle (\vee )}$ and ${\displaystyle (\wedge )}$ and the unary operator ${\displaystyle (\neg )}$. Also present in the rules are variables, which each represent any possible term (though a single variable always represents the same term throughout a single rule).

In contrast to string rewriting systems, whose objects are sequences of symbols, the objects of a term rewriting system form a term algebra. A term can be visualized as a tree of symbols, the set of admitted symbols being fixed by a given signature.

### Formal definition

A term rewriting rule is a pair of terms, commonly written as ${\displaystyle l\rightarrow r}$, to indicate that the left-hand side ${\displaystyle l}$ can be replaced by the right-hand side ${\displaystyle r}$. A term rewriting system is a set ${\displaystyle R}$ of such rules. A rule ${\displaystyle l\rightarrow r}$ can be applied to a term ${\displaystyle s}$ if the left term ${\displaystyle l}$ matches some subterm of ${\displaystyle s}$, that is, if ${\displaystyle s|_{p}=l\sigma }$[note 3] for some position ${\displaystyle p}$ in ${\displaystyle s}$ and some substitution ${\displaystyle \sigma }$. The result term ${\displaystyle t}$ of this rule application is then obtained as ${\displaystyle t=s[r\sigma ]_{p}}$;[note 4] see picture 1. In this case, ${\displaystyle s}$ is said to be rewritten in one step, or rewritten directly, to ${\displaystyle t}$ by the system ${\displaystyle R}$, formally denoted as ${\displaystyle s\rightarrow _{R}t}$, ${\displaystyle s\,{\xrightarrow[{R}]{}}\,t}$, or as ${\displaystyle s\,{\xrightarrow {R}}\,t}$ by some authors. If a term ${\displaystyle t_{1}}$ can be rewritten in several steps into a term ${\displaystyle t_{n}}$, that is, if ${\displaystyle t_{1}\,{\xrightarrow[{R}]{}}\,t_{2}\,{\xrightarrow[{R}]{}}\,\ldots \,{\xrightarrow[{R}]{}}\,t_{n}}$, the term ${\displaystyle t_{1}}$ is said to be rewritten to ${\displaystyle t_{n}}$, formally denoted as ${\displaystyle t_{1}\,{\xrightarrow[{R}]{+}}\,t_{n}}$. 
In other words, the relation ${\displaystyle {\xrightarrow[{R}]{+}}}$ is the transitive closure of the relation ${\displaystyle {\xrightarrow[{R}]{}}}$; often, also the notation ${\displaystyle {\xrightarrow[{R}]{*}}}$ is used to denote the reflexive-transitive closure of ${\displaystyle {\xrightarrow[{R}]{}}}$, that is, ${\displaystyle s\,{\xrightarrow[{R}]{*}}\,t}$ if ${\displaystyle s=t}$ or ${\displaystyle s\,{\xrightarrow[{R}]{+}}\,t}$.[18] A term rewriting system given by a set ${\displaystyle R}$ of rules can be viewed as an abstract rewriting system as defined above, with terms as its objects and ${\displaystyle {\xrightarrow[{R}]{}}}$ as its rewrite relation.

For example, ${\displaystyle x*(y*z)\rightarrow (x*y)*z}$ is a rewrite rule, commonly used to establish a normal form with respect to the associativity of ${\displaystyle *}$. That rule can be applied at the numerator in the term ${\displaystyle {\frac {a*((a+1)*(a+2))}{1*(2*3)}}}$ with the matching substitution ${\displaystyle \{x\mapsto a,\;y\mapsto a+1,\;z\mapsto a+2\}}$, see picture 2.[note 5] Applying that substitution to the rule's right hand side yields the term ${\displaystyle (a*(a+1))*(a+2)}$, and replacing the numerator by that term yields ${\displaystyle {\frac {(a*(a+1))*(a+2)}{1*(2*3)}}}$, which is the result term of applying the rewrite rule. Altogether, applying the rewrite rule has achieved what is called "applying the associativity law for ${\displaystyle *}$ to ${\displaystyle {\frac {a*((a+1)*(a+2))}{1*(2*3)}}}$" in elementary algebra. Alternatively, the rule could have been applied to the denominator of the original term, yielding ${\displaystyle {\frac {a*((a+1)*(a+2))}{(1*2)*3}}}$.
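One rewrite step of this rule can be sketched with explicit matching and substitution. In the sketch below, terms are nested tuples (op, *args), strings stand for the rule variables x, y, z, and constants such as a are encoded as `('a',)`; this whole representation is ours, chosen for brevity:

```python
# The rule x*(y*z) -> (x*y)*z as a pair of pattern terms.
RULE_LHS = ('*', 'x', ('*', 'y', 'z'))
RULE_RHS = ('*', ('*', 'x', 'y'), 'z')

def match(pattern, term, subst=None):
    """Return a substitution sigma with pattern instantiated by sigma == term, or None."""
    subst = dict(subst or {})
    if isinstance(pattern, str):                    # pattern is a variable
        if pattern in subst and subst[pattern] != term:
            return None                             # variable bound inconsistently
        subst[pattern] = term
        return subst
    if not (isinstance(term, tuple) and len(term) == len(pattern)
            and term[0] == pattern[0]):
        return None                                 # head symbols disagree
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(term, subst):
    """Replace each variable in term by its binding in subst."""
    if isinstance(term, str):
        return subst[term]
    return (term[0],) + tuple(apply_subst(t, subst) for t in term[1:])

def rewrite_at_root(term):
    """One application of x*(y*z) -> (x*y)*z at the root position, or None."""
    sigma = match(RULE_LHS, term)
    return apply_subst(RULE_RHS, sigma) if sigma is not None else None
```

Applying `rewrite_at_root` to the encoding of a*(b*c) yields the encoding of (a*b)*c, mirroring the numerator step in the example; applying the rule at inner positions p would require an additional traversal.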

### Termination

Beyond the general considerations of the section Termination and convergence above, additional subtleties must be considered for term rewriting systems.

Termination is undecidable even for systems consisting of one rule with a linear left-hand side.[19] Termination is also undecidable for systems using only unary function symbols; however, it is decidable for finite ground systems.[20]

The following term rewrite system is normalizing,[note 6] but not terminating,[note 7] and not confluent:[21]

${\displaystyle f(x,x)\rightarrow g(x),}$
${\displaystyle f(x,g(x))\rightarrow b,}$
${\displaystyle h(c,x)\rightarrow f(h(x,c),h(x,x)).}$

The following two examples of terminating term rewrite systems are due to Toyama:[22]

${\displaystyle f(0,1,x)\rightarrow f(x,x,x)}$

and

${\displaystyle g(x,y)\rightarrow x,}$
${\displaystyle g(x,y)\rightarrow y.}$

Their union is a non-terminating system, since ${\displaystyle f(g(0,1),g(0,1),g(0,1))\rightarrow f(0,g(0,1),g(0,1))\rightarrow f(0,1,g(0,1))\rightarrow f(g(0,1),g(0,1),g(0,1))\rightarrow \ldots }$. This result disproves a conjecture of Dershowitz,[23] who claimed that the union of two terminating term rewrite systems ${\displaystyle R_{1}}$ and ${\displaystyle R_{2}}$ is again terminating if all left-hand sides of ${\displaystyle R_{1}}$ and right-hand sides of ${\displaystyle R_{2}}$ are linear, and there are no "overlaps" between left-hand sides of ${\displaystyle R_{1}}$ and right-hand sides of ${\displaystyle R_{2}}$. All these properties are satisfied by Toyama's examples.
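This cycle can be verified mechanically. The sketch below hand-codes the one-step successor relation of the union, with terms encoded as nested tuples and 0, 1 as the strings '0', '1' (the encoding is ours):

```python
def steps(t):
    """All one-step successors of t under the union of Toyama's two systems."""
    out = set()
    if not isinstance(t, tuple):
        return out
    if t[0] == 'g':                                   # g(x,y) -> x and g(x,y) -> y
        out.update({t[1], t[2]})
    if t[0] == 'f' and t[1] == '0' and t[2] == '1':   # f(0,1,x) -> f(x,x,x)
        out.add(('f', t[3], t[3], t[3]))
    for i in range(1, len(t)):                        # rewrite inside a subterm
        for s in steps(t[i]):
            out.add(t[:i] + (s,) + t[i + 1:])
    return out

g01 = ('g', '0', '1')
start = ('f', g01, g01, g01)                          # f(g(0,1), g(0,1), g(0,1))
```

Following the three steps of the text leads from `start` back to `start`, exhibiting the infinite derivation.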

See Rewrite order and Path ordering (term rewriting) for ordering relations used in termination proofs for term rewriting systems.

### Graph rewriting systems

A generalization of term rewrite systems are graph rewrite systems, which operate on graphs instead of (ground) terms and their corresponding tree representations.

## Trace rewriting systems

Trace theory provides a means for discussing multiprocessing in more formal terms, such as via the trace monoid and the history monoid. Rewriting can be performed in trace systems as well.

## Philosophy

Rewriting systems can be seen as programs that infer end-effects from a list of cause-effect relationships. In this way, rewriting systems can be considered to be automated causality provers.[citation needed]

## Notes

1. ^ This variant of the previous rule is needed since the commutative law ${\displaystyle A\land B=B\land A}$ cannot be turned into a rewrite rule. A rule like ${\displaystyle A\land B\to B\land A}$ would cause the rewrite system to be nonterminating.
2. ^ i.e. it considers more objects as a normal form of x than our definition
3. ^ here, ${\displaystyle s\mid _{p}}$ denotes the subterm of ${\displaystyle s}$ rooted at position ${\displaystyle p}$, while ${\displaystyle l\sigma }$ denotes the result of applying the substitution ${\displaystyle \sigma }$ to the term ${\displaystyle l}$
4. ^ here, ${\displaystyle s[r\sigma ]_{p}}$ denotes the result of replacing the subterm at position ${\displaystyle p}$ in ${\displaystyle s}$ by the term ${\displaystyle r\sigma }$
5. ^ since applying that substitution to the rule's left hand side ${\displaystyle x*(y*z)}$ yields the numerator ${\displaystyle a*((a+1)*(a+2))}$
6. ^ i.e. for each term, some normal form exists, e.g. h(c,c) has the normal forms b and g(b), since h(c,c) → f(h(c,c),h(c,c)) → f(h(c,c),f(h(c,c),h(c,c))) → f(h(c,c),g(h(c,c))) → b, and h(c,c) → f(h(c,c),h(c,c)) → g(h(c,c)) → ... → g(b); neither b nor g(b) can be rewritten any further, therefore the system is not confluent
7. ^ i.e., there are infinite derivations, e.g. h(c,c) → f(h(c,c),h(c,c)) → f(f(h(c,c),h(c,c)) ,h(c,c)) → f(f(f(h(c,c),h(c,c)),h(c,c)) ,h(c,c)) → ...

## References

1. ^ Sculthorpe, Neil; Frisby, Nicolas; Gill, Andy (2014). "The Kansas University rewrite engine" (PDF). Journal of Functional Programming. 24 (4): 434–473. doi:10.1017/S0956796814000185. ISSN 0956-7968.
2. ^ Hsiang, Jieh, et al. "The term rewriting approach to automated theorem proving." The Journal of Logic Programming 14.1-2 (1992): 71–99.
3. ^ Frühwirth, Thom. "Theory and practice of constraint handling rules." The Journal of Logic Programming 37.1 (1998): 95–138.
4. ^ Clavel, Manuel, et al. "Maude: Specification and programming in rewriting logic." Theoretical Computer Science 285.2 (2002): 187–243.
5. ^ Kim Marriott; Peter J. Stuckey (1998). Programming with Constraints: An Introduction. MIT Press. pp. 436–. ISBN 978-0-262-13341-8.
6. ^ Robert Freidin (1992). Foundations of Generative Syntax. MIT Press. ISBN 978-0-262-06144-5.
7. ^ Bezem et al., p. 7,
8. ^ Book and Otto, p. 10
9. ^ Bezem et al., p. 7
10. ^ Baader and Nipkow, pp. 8–9
11. ^ Alonzo Church and J. Barkley Rosser. Some properties of conversion. Trans. AMS, 39:472–482, 1936
12. ^ Baader and Nipkow, p. 9
13. ^ Baader and Nipkow, p. 11
14. ^ Baader and Nipkow, p. 12
15. ^ Bezem et al., p.11
16. ^ M.H.A. Newman (1942). "On Theories with a Combinatorial Definition of Equivalence". Annals of Mathematics. 43 (2): 223–243. doi:10.2307/1968867. JSTOR 1968867.
17. ^ Martin Davis et al. 1994, p. 178
18. ^ N. Dershowitz, J.-P. Jouannaud (1990). Jan van Leeuwen (ed.). Rewrite Systems. Handbook of Theoretical Computer Science. B. Elsevier. pp. 243–320.; here: Sect. 2.3
19. ^ M. Dauchet (1989). "Simulation of Turing Machines by a Left-Linear Rewrite Rule". Proc. 3rd RTA. LNCS. 355. Springer LNCS. pp. 109–120.
20. ^ Gerard Huet, D.S. Lankford (Mar 1978). On the Uniform Halting Problem for Term Rewriting Systems (PDF) (Technical report). IRIA. p. 8. 283. Retrieved 16 June 2013.
21. ^ Bernhard Gramlich (Jun 1993). "Relating Innermost, Weak, Uniform, and Modular Termination of Term Rewriting Systems". In Voronkov, Andrei (ed.). Proc. International Conference on Logic Programming and Automated Reasoning (LPAR). LNAI. 624. Springer. pp. 285–296. Here: Example 3.3
22. ^ Y. Toyama (1987). "Counterexamples to Termination for the Direct Sum of Term Rewriting Systems" (PDF). Inf. Process. Lett. 25 (3): 141–143. doi:10.1016/0020-0190(87)90122-0.
23. ^ N. Dershowitz (1985). "Termination" (PDF). In Jean-Pierre Jouannaud (ed.). Proc. RTA. LNCS. 220. Springer. pp. 180–224.; here: p.210