Rewriting

From Wikipedia, the free encyclopedia

In mathematics, computer science, and logic, rewriting covers a wide range of (potentially non-deterministic) methods of replacing subterms of a formula with other terms. The objects of study are rewriting systems (also known as rewrite systems or reduction systems). In their most basic form, they consist of a set of objects, plus relations on how to transform those objects.

Rewriting can be non-deterministic. One rule to rewrite a term could be applied in many different ways to that term, or more than one rule could be applicable. Rewriting systems then do not provide an algorithm for changing one term to another, but a set of possible rule applications. When combined with an appropriate algorithm, however, rewrite systems can be viewed as computer programs, and several declarative programming languages are based on term rewriting.

Intuitive examples

Logic

In logic, the procedure for obtaining the conjunctive normal form (CNF) of a formula can be conveniently written as a rewriting system. The rules of such a system would be:

\neg\neg A \to A (double negative elimination)
\neg(A \land B) \to \neg A \lor \neg B (De Morgan's laws)
\neg(A \lor B)  \to \neg A \land\neg B
 (A \land B) \lor C \to (A \lor C) \land (B \lor C) (Distributivity)
 A \lor (B \land C) \to (A \lor B) \land (A \lor C),

where the symbol (\to) indicates that an expression matching the left-hand side of the rule can be rewritten to one formed by the right-hand side. In this system, every rule preserves meaning: a rewrite from left to right is sound because the logical interpretation of the left-hand side is equivalent to that of the right.
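The rules above can be run mechanically. The following is a minimal sketch (not part of the standard presentation; the tuple encoding of formulas is an assumption) that applies the five rules until no rule matches, i.e. until a CNF normal form is reached:

```python
# Formulas encoded as nested tuples: ('not', A), ('and', A, B), ('or', A, B);
# atoms are strings such as 'p' and 'q'.

def rewrite_step(t):
    """Apply one CNF rule somewhere in t; return the result, or None if no rule applies."""
    if isinstance(t, tuple):
        if t[0] == 'not' and isinstance(t[1], tuple):
            inner = t[1]
            if inner[0] == 'not':                      # ¬¬A → A
                return inner[1]
            if inner[0] == 'and':                      # ¬(A∧B) → ¬A ∨ ¬B
                return ('or', ('not', inner[1]), ('not', inner[2]))
            if inner[0] == 'or':                       # ¬(A∨B) → ¬A ∧ ¬B
                return ('and', ('not', inner[1]), ('not', inner[2]))
        if t[0] == 'or':
            a, b = t[1], t[2]
            if isinstance(a, tuple) and a[0] == 'and': # (A∧B)∨C → (A∨C)∧(B∨C)
                return ('and', ('or', a[1], b), ('or', a[2], b))
            if isinstance(b, tuple) and b[0] == 'and': # A∨(B∧C) → (A∨B)∧(A∨C)
                return ('and', ('or', a, b[1]), ('or', a, b[2]))
        # no rule matched at the root: try to rewrite a subterm
        for i, sub in enumerate(t[1:], 1):
            r = rewrite_step(sub)
            if r is not None:
                return t[:i] + (r,) + t[i+1:]
    return None

def to_cnf(t):
    """Rewrite until no rule applies, i.e. until a normal form is reached."""
    while True:
        r = rewrite_step(t)
        if r is None:
            return t
        t = r

# ¬(p ∨ q) rewrites to ¬p ∧ ¬q by De Morgan
print(to_cnf(('not', ('or', 'p', 'q'))))   # ('and', ('not', 'p'), ('not', 'q'))
```

Note that the rewriting here is non-deterministic in principle; this sketch simply commits to one strategy (leftmost-outermost), which suffices because these rules are terminating.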

Abstract rewriting systems

From the above examples, it is clear that we can think of rewriting systems in an abstract manner. We need to specify a set of objects and the rules that can be applied to transform them. The most general (unidimensional) setting of this notion is called an abstract reduction system (abbreviated ARS), although more recently authors use abstract rewriting system as well.[1] (The preference for the word "reduction" here instead of "rewriting" constitutes a departure from the uniform use of "rewriting" in the names of systems that are particularizations of ARS. Because the word "reduction" does not appear in the names of more specialized systems, in older texts reduction system is a synonym for ARS).[2]

An ARS is simply a set A, whose elements are usually called objects, together with a binary relation on A, traditionally denoted by →, and called the reduction relation, rewrite relation[3] or just reduction.[2] This (entrenched) terminology using "reduction" is a little misleading, because the relation is not necessarily reducing some measure of the objects; this will become more apparent when we discuss string rewriting systems further in this article.

Example 1. Suppose the set of objects is T = {a, b, c} and the binary relation is given by the rules a → b, b → a, a → c, and b → c. Observe that these rules can be applied to both a and b in any fashion to get the term c. Such a property is clearly an important one. Note also, that c is, in a sense, a "simplest" term in the system, since nothing can be applied to c to transform it any further. This example leads us to define some important notions in the general setting of an ARS. First we need some basic notions and notations.[4]

Normal forms, joinability and the word problem

An object x in A is called reducible if there exists some other y in A such that x \rightarrow y; otherwise it is called irreducible or a normal form. An object y is called a normal form of x if x \stackrel{*}{\rightarrow} y, and y is irreducible. If x has a unique normal form, then this is usually denoted with x\downarrow. In example 1 above, c is a normal form, and c = a\downarrow = b\downarrow. If every object has at least one normal form, the ARS is called normalizing.

A related, but weaker notion than the existence of normal forms is that of two objects being joinable: x and y are said to be joinable if there exists some z with the property that x \stackrel{*}{\rightarrow} z \stackrel{*}{\leftarrow} y. From this definition, it is apparent that one may define the joinability relation as \stackrel{*}{\rightarrow} \circ \stackrel{*}{\leftarrow}, where \circ is the composition of relations. Joinability is usually denoted, somewhat confusingly, also with \downarrow, but in this notation the down arrow is a binary relation, i.e. we write x\mathbin\downarrow y if x and y are joinable.
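For a finite ARS such as Example 1, both notions can be computed by exhaustive search. The following sketch (not part of the article's presentation) computes normal forms and tests joinability via the reflexive-transitive closure of →:

```python
# The ARS of Example 1: objects {a, b, c}, rules a -> b, b -> a, a -> c, b -> c.
RULES = {('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')}

def successors(x):
    """One-step successors: all y with x -> y."""
    return {y for (u, y) in RULES if u == x}

def reachable(x):
    """All y with x ->* y (reflexive-transitive closure of ->)."""
    seen, frontier = {x}, {x}
    while frontier:
        frontier = {y for z in frontier for y in successors(z)} - seen
        seen |= frontier
    return seen

def normal_forms(x):
    """Irreducible objects reachable from x."""
    return {y for y in reachable(x) if not successors(y)}

def joinable(x, y):
    """True iff some z satisfies x ->* z <-* y."""
    return bool(reachable(x) & reachable(y))

print(normal_forms('a'), joinable('a', 'b'))   # {'c'} True
```

This confirms the discussion of Example 1: c is the unique normal form of both a and b, so a and b are joinable.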

One of the important problems that may be formulated in an ARS is the word problem: given x and y, are they equivalent under \stackrel{*}{\leftrightarrow}? This is a very general setting for formulating the word problem for the presentation of an algebraic structure. For instance, the word problem for groups is a particular case of an ARS word problem. Central to an "easy" solution for the word problem is the existence of unique normal forms: in this case if two objects have the same normal form, then they are equivalent under \stackrel{*}{\leftrightarrow}. The word problem for an ARS is undecidable in general.

The Church–Rosser property and confluence

An ARS is said to possess the Church–Rosser property if and only if x\stackrel{*}{\leftrightarrow}y implies x\mathbin\downarrow y. In words, the Church–Rosser property means that any two equivalent objects are joinable. Alonzo Church and J. Barkley Rosser proved in 1936 that lambda calculus has this property;[5] hence the name of the property.[6] (The fact that lambda calculus has this property is also known as the Church–Rosser theorem.) In an ARS with the Church–Rosser property the word problem may be reduced to the search for a common successor. In a Church–Rosser system, an object has at most one normal form; that is the normal form of an object is unique if it exists, but it may well not exist.

Several different properties are equivalent to the Church–Rosser property, but may be simpler to check in some particular setting. In particular, confluence is equivalent to the Church–Rosser property. An ARS (A,\rightarrow) is said to be:

  • confluent if for all w, x, and y in A, x \stackrel{*}{\leftarrow} w \stackrel{*}{\rightarrow} y implies x\mathbin\downarrow y. Roughly speaking, confluence says that no matter how two paths diverge from a common ancestor (w), the paths join at some common successor. This notion may be refined as a property of a particular object w, and the system is called confluent if all its elements are confluent.
  • locally confluent if for all w, x, and y in A, x \leftarrow w \rightarrow y implies x\mathbin\downarrow y. This property is sometimes called weak confluence.

Theorem. For an ARS the following conditions are equivalent: (i) it has the Church–Rosser property, (ii) it is confluent.[7]

Corollary.[8] In a confluent ARS if x \stackrel{*}{\leftrightarrow} y then

  • If both x and y are normal forms, then x = y.
  • If y is a normal form, then x \stackrel{*}{\rightarrow} y.

Because of these equivalences, a fair bit of variation in definitions is encountered in the literature. For instance, in Bezem et al. 2003 the Church–Rosser property and confluence are defined to be synonymous and identical to the definition of confluence presented here; Church–Rosser as defined here remains unnamed, but is given as an equivalent property; this departure from other texts is deliberate.[9] Because of the above corollary, in a confluent ARS one may define a normal form y of x as an irreducible y with the property that x \stackrel{*}{\leftrightarrow} y. This definition, found in Book and Otto, is equivalent to the common one given here in a confluent system, but it is more inclusive[note 1] in a non-confluent ARS.

Local confluence, on the other hand, is not equivalent to the other notions of confluence given in this section; it is strictly weaker than confluence. The relation a \rightarrow b, \; b \rightarrow a, \; a \rightarrow c,\; b \rightarrow d is locally confluent, but not confluent, as c and d are equivalent, but not joinable.[10]
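Since this counterexample is finite, both properties can be checked exhaustively. The following sketch (not part of the article) verifies that the relation above is locally confluent but not confluent:

```python
# The counterexample: a -> b, b -> a, a -> c, b -> d over {a, b, c, d}.
RULES = {('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'd')}
OBJECTS = {'a', 'b', 'c', 'd'}

def succ(x):
    return {y for (u, y) in RULES if u == x}

def reachable(x):
    """All y with x ->* y."""
    seen, frontier = {x}, {x}
    while frontier:
        frontier = {y for z in frontier for y in succ(z)} - seen
        seen |= frontier
    return seen

def joinable(x, y):
    return bool(reachable(x) & reachable(y))

def locally_confluent():
    # every one-step peak x <- w -> y must be joinable
    return all(joinable(x, y) for w in OBJECTS for x in succ(w) for y in succ(w))

def confluent():
    # every many-step peak x <-* w ->* y must be joinable
    return all(joinable(x, y)
               for w in OBJECTS for x in reachable(w) for y in reachable(w))

print(locally_confluent(), confluent())   # True False
```

The failing peak is c \stackrel{*}{\leftarrow} a \stackrel{*}{\rightarrow} d: both c and d are reachable from a, but they are irreducible and distinct, hence not joinable.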

Termination and convergence

An abstract rewriting system is said to be terminating or noetherian if there is no infinite chain x_0 \rightarrow x_1 \rightarrow x_2 \rightarrow \cdots. In a terminating ARS, every object has at least one normal form, thus it is normalizing. The converse is not true. In example 1 for instance, there is an infinite rewriting chain, namely a \rightarrow b \rightarrow a \rightarrow b \rightarrow \cdots, even though the system is normalizing. A confluent and terminating ARS is called convergent. In a convergent ARS, every object has a unique normal form.
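For a finite ARS, termination amounts to the absence of cycles in the rewrite graph. A small sketch (not part of the article) checks this by depth-first search and confirms that Example 1 is not terminating:

```python
def terminating(objects, rules):
    """A finite ARS terminates iff no rewrite path revisits an object."""
    succ = {x: {y for (u, y) in rules if u == x} for x in objects}
    def on_cycle(x, path):
        if x in path:
            return True
        return any(on_cycle(y, path | {x}) for y in succ[x])
    return not any(on_cycle(x, frozenset()) for x in objects)

# Example 1 is normalizing but not terminating: a -> b -> a -> b -> ...
print(terminating({'a', 'b', 'c'},
                  {('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')}))   # False
```

Dropping the rule b → a removes the cycle and yields a terminating (indeed convergent) system.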

Theorem (Newman's Lemma): A terminating ARS is confluent if and only if it is locally confluent.

String rewriting systems

A string rewriting system (SRS), also known as a semi-Thue system, exploits the free monoid structure of the strings (words) over an alphabet to extend a rewriting relation R to all strings in the alphabet that contain left- and right-hand sides of some rules as substrings, respectively. Formally, a semi-Thue system is a tuple (\Sigma, R) where \Sigma is a (usually finite) alphabet, and R is a binary relation between some (fixed) strings in the alphabet, called rewrite rules. The one-step rewriting relation \rightarrow_R induced by R on \Sigma^* is defined as follows: for any strings s and t in \Sigma^*, s \rightarrow_R t if and only if there exist x, y, u, v in \Sigma^* such that s = xuy, t = xvy, and u R v. Since \rightarrow_R is a relation on \Sigma^*, the pair (\Sigma^*, \rightarrow_R) fits the definition of an abstract rewriting system. Obviously, R is a subset of \rightarrow_R. If the relation R is symmetric, then the system is called a Thue system.
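The definition of \rightarrow_R translates directly into code. The following sketch (not from the article) enumerates all one-step rewrites of a string, i.e. every t with s = xuy, t = xvy for some rule (u, v):

```python
def one_step(s, rules):
    """All t with s ->_R t: replace any single occurrence of a
    left-hand side u in s by the corresponding right-hand side v."""
    results = set()
    for u, v in rules:
        start = 0
        while True:
            i = s.find(u, start)          # next occurrence of u as a substring
            if i < 0:
                break
            results.add(s[:i] + v + s[i + len(u):])
            start = i + 1
    return results

# with the single rule ab -> ba, each step moves an 'a' rightwards past a 'b'
print(one_step('abab', {('ab', 'ba')}))   # {'baab', 'abba'}
```

The non-determinism of rewriting is visible here: a single rule can apply at several positions, giving several distinct successor strings.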

In an SRS, the reduction relation \stackrel{*}{\rightarrow}_R is compatible with the monoid operation, meaning that x\stackrel{*}{\rightarrow}_R y implies uxv\stackrel{*}{\rightarrow}_R uyv for all strings x, y, u, v in \Sigma^*. Similarly, the reflexive transitive symmetric closure of \rightarrow_R, denoted \stackrel{*}{\leftrightarrow}_R, is a congruence, meaning it is an equivalence relation (by definition) and it is also compatible with string concatenation. The relation \stackrel{*}{\leftrightarrow}_R is called the Thue congruence generated by R. In a Thue system, i.e. if R is symmetric, the rewrite relation \stackrel{*}{\rightarrow}_R coincides with the Thue congruence \stackrel{*}{\leftrightarrow}_R.

The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Since \stackrel{*}{\leftrightarrow}_R is a congruence, we can define the factor monoid \mathcal{M}_R = \Sigma^*/\stackrel{*}{\leftrightarrow}_R of the free monoid \Sigma^* by the Thue congruence in the usual manner. If a monoid \mathcal{M} is isomorphic with \mathcal{M}_R, then the semi-Thue system (\Sigma, R) is called a monoid presentation of \mathcal{M}.

We immediately get some very useful connections with other areas of algebra. For example, the alphabet {a, b} with the rules { ab → ε, ba → ε }, where ε is the empty string, is a presentation of the free group on one generator. If instead the rules are just { ab → ε }, then we obtain a presentation of the bicyclic monoid. Thus semi-Thue systems constitute a natural framework for solving the word problem for monoids and groups. In fact, every monoid has a presentation of the form (\Sigma, R), i.e. it may always be presented by a semi-Thue system, possibly over an infinite alphabet.
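The free-group presentation { ab → ε, ba → ε } can be explored with a simple sketch (not from the article): exhaustively cancelling adjacent inverse pairs leaves a normal form consisting of a's only or b's only, whose length is the net exponent of the group element:

```python
def normal_form(s):
    """Rewrite with ab -> ε and ba -> ε until no rule applies.
    This system is confluent, so the order of cancellations is irrelevant."""
    while True:
        t = s.replace('ab', '', 1)   # cancel one occurrence of ab
        if t == s:
            t = s.replace('ba', '', 1)   # else cancel one occurrence of ba
        if t == s:
            return s                 # irreducible: a normal form
        s = t

# reading b as the inverse of a: a·a·b has net exponent 1
print(normal_form('aab'))   # 'a'
```

Two strings represent the same group element exactly when they have the same normal form, which is how this presentation solves the word problem for the free group on one generator.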

The word problem for a semi-Thue system is undecidable in general; this result is sometimes known as the Post-Markov theorem.[11]

Term rewriting systems

Pic.1: Schematic triangle diagram of application of a rewrite rule l \longrightarrow r at position p in a term, with matching substitution \sigma
Pic.2: Rule lhs term x*(y*z) matching in term \frac{a*((a+1)*(a+2))}{1*(2*3)}

A term rewriting system (TRS) is a rewriting system where the objects are terms, or expressions with nested sub-expressions. For example, the system shown under Logic above is a term rewriting system. The terms in this system are composed of binary operators (\vee) and (\wedge) and the unary operator (\neg). Also present in the rules are variables, each of which represents any possible term (though a single variable always represents the same term throughout a single rule).

In contrast to string rewriting systems, whose objects are flat sequences of symbols, the objects a term rewriting system works on, i.e. the terms, form a term algebra. A term can be visualized as a tree of symbols, the set of admitted symbols being fixed by a given signature.

Formal definition

A term rewriting rule is a pair of terms, commonly written as l \longrightarrow r, to indicate that the left hand side l can be replaced by the right hand side r. A term rewriting system is a set R of such rules. A rule l \longrightarrow r can be applied to a term s if the left term l matches some subterm of s, that is, if s \mid_p = l \sigma [note 2] for some position p in s and some substitution \sigma. The result term t of this rule application is then obtained as t = s[r \sigma]_p; [note 3] see picture 1. In this case, s is said to be rewritten in one step, or rewritten directly, to t by the system R, formally denoted as s \longrightarrow_R t, or as s \stackrel{R}{\longrightarrow} t by some authors. If a term t_1 can be rewritten in several steps into a term t_n, that is, if t_1 \longrightarrow_R t_2 \longrightarrow_R \ldots \longrightarrow_R t_n, the term t_1 is said to be rewritten to t_n, formally denoted as t_1 \longrightarrow_R^+ t_n. In other words, the relation \longrightarrow_R^+ is the transitive closure of the relation \longrightarrow_R; often, the notation \longrightarrow_R^* is also used to denote the reflexive-transitive closure of \longrightarrow_R, that is, s \longrightarrow_R^* t if s = t or s \longrightarrow_R^+ t.[12] A term rewriting system given by a set R of rules can be viewed as an abstract rewriting system as defined above, with terms as its objects and \longrightarrow_R as its rewrite relation.

For example, x*(y*z) \longrightarrow (x*y)*z is a rewrite rule, commonly used to establish a normal form with respect to the associativity of *. That rule can be applied at the numerator in the term \frac{a*((a+1)*(a+2))}{1*(2*3)} with the matching substitution \{ x \mapsto a, \; y \mapsto a+1, \; z \mapsto a+2 \}, see picture 2. [note 4] Applying that substitution to the rule's right hand side yields the term (a*(a+1))*(a+2), and replacing the numerator by that term yields \frac{(a*(a+1))*(a+2)}{1*(2*3)}, which is the result term of applying the rewrite rule. Altogether, applying the rewrite rule has achieved what is called "applying the associativity law for * to \frac{a*((a+1)*(a+2))}{1*(2*3)}" in elementary algebra. Alternatively, the rule could have been applied to the denominator of the original term, yielding \frac{a*((a+1)*(a+2))}{(1*2)*3}.
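Matching and rule application can be made concrete with a small sketch (not from the article; the tuple encoding of terms is an assumption). It applies the rule x*(y*z) \longrightarrow (x*y)*z at every position of a term, reading the matching substitution off the subterms:

```python
# Terms encoded as nested tuples ('*', left, right); leaves are strings.

def apply_assoc(t):
    """All terms obtainable from t by one application of x*(y*z) -> (x*y)*z."""
    results = set()
    if isinstance(t, tuple) and t[0] == '*':
        _, x, yz = t
        # match x*(y*z) at the root: the substitution {x, y, z} is read off t
        if isinstance(yz, tuple) and yz[0] == '*':
            _, y, z = yz
            results.add(('*', ('*', x, y), z))
        # apply the rule inside a subterm (position below the root)
        for i in (1, 2):
            for r in apply_assoc(t[i]):
                results.add(t[:i] + (r,) + t[i+1:])
    return results

term = ('*', 'a', ('*', 'b', 'c'))   # a*(b*c)
print(apply_assoc(term))             # {('*', ('*', 'a', 'b'), 'c')}
```

As in the fraction example above, a term with several redexes yields several one-step results; for instance a*(b*(c*d)) can be rewritten both at the root and inside the right subterm.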

Termination

Beyond the section Termination and convergence above, additional subtleties arise for term rewriting systems.

Termination is undecidable even for systems consisting of one rule with a linear left-hand side.[13] Termination is also undecidable for systems using only unary function symbols; however, it is decidable for finite ground systems.[14]

The following term rewrite system is normalizing,[note 5] but not terminating,[note 6] and not confluent:

f(x,x) → g(x),
f(x,g(x)) → b,
h(c,x) → f(h(x,c),h(x,x)).[15]

The following two examples of terminating term rewrite systems are due to Toyama:[16]

f(0,1,x) \rightarrow f(x,x,x)

and

g(x,y) \rightarrow x,
g(x,y) \rightarrow y.

Their union is a non-terminating system, since f(g(0,1),g(0,1),g(0,1)) \rightarrow f(0,g(0,1),g(0,1)) \rightarrow f(0,1,g(0,1)) \rightarrow f(g(0,1),g(0,1),g(0,1)) \rightarrow \ldots. This result disproves a conjecture of Dershowitz,[17] who claimed that the union of two terminating term rewrite systems R_1 and R_2 is again terminating if all left hand sides of R_1 and right hand sides of R_2 are linear, and there are no "overlaps" between left hand sides of R_1 and right hand sides of R_2. All these properties are satisfied by Toyama's examples.

See Rewrite order and Path ordering (term rewriting) for ordering relations used in termination proofs for term rewriting systems.

Graph rewriting systems

Graph rewrite systems are a generalization of term rewrite systems, operating on graphs instead of (ground) terms and their corresponding tree representations.

Trace rewriting systems

Trace theory provides a means for discussing multiprocessing in more formal terms, such as via the trace monoid and the history monoid. Rewriting can be performed in trace systems as well.

Philosophy

Rewriting systems can be seen as programs that infer end-effects from a list of cause-effect relationships. In this way, rewriting systems can be considered to be automated causality provers.

Notes

  1. ^ i.e. it considers more objects as a normal form of x than our definition
  2. ^ here, s \mid_p denotes the subterm of s rooted at position p, while l \sigma denotes the result of applying the substitution \sigma to the term l
  3. ^ here, s[r \sigma]_p denotes the result of replacing the subterm at position p in s by the term r \sigma
  4. ^ since applying that substitution to the rule's left hand side x*(y*z) yields the numerator a*((a+1)*(a+2))
  5. ^ i.e. for each term, some normal form exists, e.g. h(c,c) has the normal forms b and g(b), since h(c,c) → f(h(c,c),h(c,c)) → f(h(c,c),f(h(c,c),h(c,c))) → f(h(c,c),g(h(c,c))) → b, and h(c,c) → f(h(c,c),h(c,c)) → g(h(c,c)) → ... → g(b); neither b nor g(b) can be rewritten any further, therefore the system is not confluent
  6. ^ i.e. there are infinite derivations, e.g. h(c,c) → f(h(c,c),h(c,c)) → f(f(h(c,c),h(c,c)) ,h(c,c)) → f(f(f(h(c,c),h(c,c)),h(c,c)) ,h(c,c)) → ...
  7. ^ About the "rewrite rule" notion in linguistics, corresponding to a production rule of a context-free grammar. The derivation relation of such a grammar constitutes an abstract rewriting system in the above sense.

References

  1. ^ Bezem et al., p. 7
  2. ^ a b Book and Otto, p. 10
  3. ^ Bezem et al., p. 7
  4. ^ Baader and Nipkow, pp. 8-9
  5. ^ Alonzo Church and J. Barkley Rosser. Some properties of conversion. Trans. AMS, 39:472-482, 1936
  6. ^ Baader and Nipkow, p. 9
  7. ^ Baader and Nipkow, p. 11
  8. ^ Baader and Nipkow, p. 12
  9. ^ Bezem et al., p.11
  10. ^ M. H. A. Newman (1942). "On Theories with a Combinatorial Definition of 'Equivalence'". Annals of Mathematics 43 (2): 223–243. 
  11. ^ Martin Davis et al. 1994, p. 178
  12. ^ N. Dershowitz, J.-P. Jouannaud (1990). Jan van Leeuwen, ed. Rewrite Systems. Handbook of Theoretical Computer Science B. Elsevier. pp. 243–320. ; here: Sect.2.3
  13. ^ M. Dauchet (1989). "Simulation of Turing Machines by a Left-Linear Rewrite Rule". Proc. 3rd RTA. LNCS 355. Springer LNCS. pp. 109–120. 
  14. ^ Gerard Huet, D.S. Lankford (Mar 1978). On the Uniform Halting Problem for Term Rewriting Systems (Technical report 283). IRIA. p. 8. Retrieved 16 June 2013. 
  15. ^ Bernhard Gramlich (Jun 1993). "Relating Innermost, Weak, Uniform, and Modular Termination of Term Rewriting Systems". In Andrei Voronkov. Proc. International Conference on Logic Programming and Automated Reasoning (LPAR). LNAI 624. Springer. pp. 285–296. Here: Example 3.3
  16. ^ Y. Toyama (1987). "Counterexamples to Termination for the Direct Sum of Term Rewriting Systems". Inform. Process. Lett. 25: 141–143. doi:10.1016/0020-0190(87)90122-0. 
  17. ^ N. Dershowitz (1985). "Termination". In Jean-Pierre Jouannaud. Proc. RTA. LNCS 220. Springer. pp. 180–224. ; here: p.210

Further reading

String rewriting
  • Ronald V. Book and Friedrich Otto, String-Rewriting Systems, Springer (1993).
  • Benjamin Benninghofen, Susanne Kemmerich and Michael M. Richter, Systems of Reductions. LNCS 277, Springer-Verlag (1987).
