# Reduction strategy

In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation.[1] Some authors use the term to refer to an evaluation strategy.[2][3]

## Definitions

Formally, for an abstract rewriting system ${\displaystyle (A,\to )}$, a reduction strategy ${\displaystyle \to _{S}}$ is a binary relation on ${\displaystyle A}$ with ${\displaystyle \to _{S}\subseteq {\overset {+}{\to }}}$, where ${\displaystyle {\overset {+}{\to }}}$ is the transitive closure of ${\displaystyle \to }$ (but not the reflexive closure).[1]

A one-step reduction strategy is one where ${\displaystyle \to _{S}\subseteq \to }$; otherwise it is a many-step strategy.[4]

A deterministic strategy is one where ${\displaystyle \to _{S}}$ is a partial function, i.e. for each ${\displaystyle a\in A}$ there is at most one ${\displaystyle b}$ such that ${\displaystyle a\to _{S}b}$. Otherwise it is a nondeterministic strategy.[4]
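These definitions can be made concrete with a small sketch. The following Python fragment (a toy abstract rewriting system invented for illustration; all names are our own) defines a nondeterministic reduction relation and a deterministic one-step strategy contained in it:

```python
# Toy abstract rewriting system on natural numbers (invented for illustration):
# n -> n - 2 and n -> n - 3 whenever n >= 3, so 0, 1 and 2 are the normal forms.

def reducts(n):
    """All one-step reducts of n under the (nondeterministic) relation ->."""
    return [n - 2, n - 3] if n >= 3 else []

def strategy(n):
    """A deterministic one-step strategy: a partial function picking,
    for each reducible n, exactly one of its one-step reducts."""
    rs = reducts(n)
    return rs[0] if rs else None   # always choose n - 2

# Iterating the strategy from 9 reaches a normal form: 9 -> 7 -> 5 -> 3 -> 1.
n = 9
while strategy(n) is not None:
    n = strategy(n)
```

Since `strategy` returns at most one reduct per element, it is deterministic; because each of its steps is a single step of the relation, it is a one-step strategy.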

## Term rewriting

In a term rewriting system a rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term.

One-step strategies for term rewriting include:[4]

• leftmost-innermost: in each step the leftmost of the innermost redexes is contracted, where an innermost redex is one containing no other redex[5]
• leftmost-outermost: in each step the leftmost of the outermost redexes is contracted, where an outermost redex is one contained in no other redex[5]
• rightmost-innermost, rightmost-outermost: similarly
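The difference between the innermost and outermost disciplines can be sketched in code. The following Python fragment (the rewrite system and term encoding are invented for illustration) implements leftmost-outermost and leftmost-innermost one-step reduction for a two-rule TRS:

```python
# Sketch: leftmost-innermost vs leftmost-outermost one-step reduction for a
# small invented TRS with rules  add(0, y) -> y  and  add(s(x), y) -> s(add(x, y)).
# Terms are nested tuples: ('0',), ('s', t), ('add', t1, t2).

def contract_root(t):
    """Contract t at the root if t itself is a redex; otherwise None."""
    if t[0] == 'add':
        if t[1] == ('0',):
            return t[2]
        if t[1][0] == 's':
            return ('s', ('add', t[1][1], t[2]))
    return None

def leftmost_outermost(t):
    """One leftmost-outermost step, or None if t is a normal form."""
    r = contract_root(t)                      # outermost: try the root first
    if r is not None:
        return r
    for i, sub in enumerate(t[1:], start=1):  # then subterms, left to right
        r = leftmost_outermost(sub)
        if r is not None:
            return t[:i] + (r,) + t[i + 1:]
    return None

def leftmost_innermost(t):
    """One leftmost-innermost step, or None if t is a normal form."""
    for i, sub in enumerate(t[1:], start=1):  # innermost: subterms first
        r = leftmost_innermost(sub)
        if r is not None:
            return t[:i] + (r,) + t[i + 1:]
    return contract_root(t)                   # the root only if no subterm reduces

zero = ('0',)
t = ('add', ('s', ('add', zero, zero)), zero)  # add(s(add(0,0)), 0): two redexes
outer = leftmost_outermost(t)   # contracts the root redex
inner = leftmost_innermost(t)   # contracts the nested add(0, 0)
```

On this term the two strategies pick different redexes: the outermost strategy rewrites the root, while the innermost strategy first rewrites the nested `add(0, 0)`.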

Many-step strategies include:[4]

• parallel-innermost: reduces all innermost redexes simultaneously. This is well-defined because the redexes are pairwise disjoint.
• parallel-outermost: similarly
• Gross-Knuth reduction,[6] also called full substitution or Kleene reduction:[4] all redexes in the term are simultaneously reduced
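As a minimal sketch of a many-step strategy (the one-rule system and names are invented for illustration), the following Python fragment contracts all innermost redexes of a term simultaneously:

```python
# Sketch of the parallel-innermost strategy for an invented one-rule TRS:
# not(not(x)) -> x, over terms ('a',), ('b',), ('not', t), ('pair', t1, t2).

def contract_root(t):
    """Contract t at the root if it matches not(not(x)); otherwise None."""
    if t[0] == 'not' and t[1][0] == 'not':
        return t[1][1]
    return None

def has_redex(t):
    return contract_root(t) is not None or any(has_redex(s) for s in t[1:])

def parallel_innermost(t):
    """Simultaneously contract every innermost redex of t (a many-step strategy).
    Well-defined because innermost redexes are pairwise disjoint."""
    if contract_root(t) is not None and not any(has_redex(s) for s in t[1:]):
        return contract_root(t)        # t itself is an innermost redex
    return (t[0],) + tuple(parallel_innermost(s) for s in t[1:])

t = ('pair', ('not', ('not', ('a',))), ('not', ('not', ('b',))))
step = parallel_innermost(t)   # both disjoint innermost redexes contract at once
```

A single application of the strategy performs two contractions here, one in each component of the pair.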

Parallel outermost and Gross-Knuth reduction are hypernormalizing for all almost-orthogonal term rewriting systems, meaning that these strategies will eventually reach a normal form if it exists, even when performing (finitely many) arbitrary reductions between successive applications of the strategy.[7]

Stratego is a domain-specific language designed specifically for programming term rewriting strategies.[8]

## Lambda calculus

In the context of the lambda calculus, normal-order reduction refers to leftmost-outermost reduction in the sense given above.[9] Sometimes normal order reduction is simply called leftmost reduction, as the leftmost-outermost redex appears leftmost when the term is written out as a string of characters.[10] Normal-order reduction is normalizing, in the sense that if a term has a normal form, then normal-order reduction will eventually reach it, hence the name normal. This is known as the standardization theorem.[11][12]

Applicative order reduction refers to leftmost-innermost reduction.[9] In contrast to normal order, applicative order reduction may not terminate, even when the term has a normal form.[9] For example, using applicative order reduction, the following sequence of reductions is possible:

${\displaystyle {\begin{aligned}&(\mathbf {\lambda } x.z)((\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\&\ldots \end{aligned}}}$

But using normal-order reduction, the same starting point reduces quickly to normal form:

${\displaystyle (\mathbf {\lambda } x.z)((\lambda w.www)(\lambda w.www))}$
${\displaystyle \rightarrow z}$
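The contrast can be sketched in code. The following Python fragment (the term encoding is our own, and the substitution is naive rather than capture-avoiding, which suffices for this closed example) implements both orders as one-step reducers:

```python
# Lambda terms as tuples: ('var', x), ('lam', x, body), ('app', f, a).
# Substitution here is naive (not capture-avoiding); adequate for this example.

def subst(t, x, s):
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def normal_step(t):
    """One normal-order (leftmost-outermost) step, or None at normal form."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])     # contract before the argument
    if t[0] == 'lam':
        r = normal_step(t[2])
        return ('lam', t[1], r) if r is not None else None
    if t[0] == 'app':
        r = normal_step(t[1])
        if r is not None:
            return ('app', r, t[2])
        r = normal_step(t[2])
        if r is not None:
            return ('app', t[1], r)
    return None

def applicative_step(t):
    """One applicative-order (leftmost-innermost) step, or None at normal form."""
    if t[0] == 'lam':
        r = applicative_step(t[2])
        return ('lam', t[1], r) if r is not None else None
    if t[0] == 'app':
        r = applicative_step(t[1])
        if r is not None:
            return ('app', r, t[2])
        r = applicative_step(t[2])               # argument before the root
        if r is not None:
            return ('app', t[1], r)
        if t[1][0] == 'lam':
            return subst(t[1][2], t[1][1], t[2])
    return None

w = ('var', 'w')
dup = ('lam', 'w', ('app', ('app', w, w), w))          # λw.www
term = ('app', ('lam', 'x', ('var', 'z')), ('app', dup, dup))

normal_result = normal_step(term)   # one step: discards the argument, yielding z
growing = applicative_step(term)    # expands the argument instead; this never ends
```

One normal-order step reaches the normal form z, while each applicative-order step only makes the argument larger.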

Full β-reduction refers to the nondeterministic one-step strategy that allows reducing any redex at each step.[3] Takahashi's parallel β-reduction is the strategy that reduces all redexes in the term simultaneously.[13]

### Weak reduction

Normal and applicative order reduction are strong in that they allow reduction under lambda abstractions. In contrast, weak reduction does not reduce under a lambda abstraction.[14] Call-by-name reduction is the weak reduction strategy that reduces the leftmost outermost redex not inside a lambda abstraction, while call-by-value reduction is the weak reduction strategy that reduces the leftmost innermost redex not inside a lambda abstraction. These strategies were devised to reflect the call-by-name and call-by-value evaluation strategies.[15] In fact, applicative order reduction was also originally introduced to model the call-by-value parameter passing technique found in Algol 60 and modern programming languages. When combined with the idea of weak reduction, the resulting call-by-value reduction is indeed a faithful approximation.[16]
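Both weak strategies can be sketched as one-step reducers that never descend under an abstraction (the term encoding is our own, and the substitution is naive rather than capture-avoiding, which suffices for this closed example):

```python
# Weak reduction sketch: call-by-name vs call-by-value one-step reducers.
# Lambda terms as tuples: ('var', x), ('lam', x, body), ('app', f, a).

def subst(t, x, s):
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def cbn_step(t):
    """Leftmost-outermost redex not inside a lambda; None if weakly normal."""
    if t[0] == 'app':
        if t[1][0] == 'lam':
            return subst(t[1][2], t[1][1], t[2])   # contract before the argument
        r = cbn_step(t[1])
        if r is not None:
            return ('app', r, t[2])
        r = cbn_step(t[2])
        if r is not None:
            return ('app', t[1], r)
    return None                                    # never reduce under a 'lam'

def cbv_step(t):
    """Leftmost-innermost redex not inside a lambda; None if weakly normal."""
    if t[0] == 'app':
        r = cbv_step(t[1])
        if r is not None:
            return ('app', r, t[2])
        r = cbv_step(t[2])                         # argument before the root
        if r is not None:
            return ('app', t[1], r)
        if t[1][0] == 'lam':
            return subst(t[1][2], t[1][1], t[2])
    return None                                    # never reduce under a 'lam'

identity = ('lam', 'z', ('var', 'z'))
arg = ('app', ('lam', 'w', ('var', 'w')), identity)           # (λw.w)(λz.z)
term = ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), arg)

by_name = cbn_step(term)    # substitutes the unevaluated argument under λy
by_value = cbv_step(term)   # first reduces the argument to λz.z
```

On `(λx.λy.x) ((λw.w)(λz.z))`, call-by-name immediately substitutes the unevaluated argument and then stops, since the remaining redex sits under a lambda; call-by-value reduces the argument first.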

Unfortunately, weak reduction is not confluent,[14] and the traditional reduction equations of the lambda calculus are useless, because they suggest relationships that violate the weak evaluation regime.[16] However, it is possible to extend the system to be confluent by allowing a restricted form of reduction under an abstraction, in particular when the redex does not involve the variable bound by the abstraction.[14] For example, λx.(λy.x)z is in normal form for a weak reduction strategy because the redex (λy.x)z is contained in a lambda abstraction. But the term λx.(λy.y)z can still be reduced under the extended weak reduction strategy, because the redex (λy.y)z does not refer to x.[17]

### Optimal reduction

Optimal reduction is motivated by the existence of lambda terms where there does not exist a sequence of reductions which reduces them without duplicating work. For example, consider

((λg.(g(g(λx.x)))) (λh.((λf.(f(f(λz.z)))) (λw.(h(w(λy.y)))))))

It is composed of three similar terms, x=((λg. ... ) (λh.y)) and y=((λf. ...) (λw.z)), and finally z=λw.(h(w(λy.y))). There are only two possible β-reductions to be done here, on x and on y. Reducing the outer x term first results in the inner y term being duplicated, and each copy will have to be reduced, but reducing the inner y term first will duplicate its argument z, which will cause work to be duplicated when the values of h and w are made known.[a]

Optimal reduction is not a reduction strategy for the lambda calculus in a strict sense because performing β-reduction loses the information about the substituted redexes being shared. Instead it is defined for the labelled lambda calculus, an annotated lambda calculus which captures a precise notion of the work that should be shared.[18]: 113–114

Labels consist of a countably infinite set of atomic labels, and concatenations ${\displaystyle ab}$, overlinings ${\displaystyle {\overline {a}}}$ and underlinings ${\displaystyle {\underline {a}}}$ of labels. A labelled term is a lambda calculus term where each subterm has a label. The standard initial labeling of a lambda term gives each subterm a unique atomic label.[18]: 132  Labelled β-reduction is given by:[19]

${\displaystyle ((\lambda x.M)^{\alpha }N)^{\beta }\to \beta \cdot {\overline {\alpha }}\cdot M[x\mapsto {\underline {\alpha }}\cdot N]}$

where ${\displaystyle \cdot }$ concatenates labels, ${\displaystyle \beta \cdot T^{\alpha }=T^{\beta \alpha }}$, and substitution ${\displaystyle M[x\mapsto N]}$ is defined as follows (using the Barendregt convention):[19]

${\displaystyle {\begin{aligned}x^{\alpha }[x\mapsto N]&=\alpha \cdot N&\quad (\lambda y.M)^{\alpha }[x\mapsto N]&=(\lambda y.M[x\mapsto N])^{\alpha }\\y^{\alpha }[x\mapsto N]&=y^{\alpha }&\quad (MN)^{\alpha }[x\mapsto P]&=(M[x\mapsto P]N[x\mapsto P])^{\alpha }\end{aligned}}}$

The system can be proven to be confluent. Optimal reduction is then defined to be normal order or leftmost-outermost reduction using reduction by families, i.e. the parallel reduction of all redexes with the same function part label.[20]
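As a worked illustration of the labelled rules (the atomic labels a, b, c and d here are arbitrary, chosen only for this example), a single labelled β-step unfolds as:

${\displaystyle ((\lambda x.x^{a})^{b}y^{c})^{d}\to d\cdot {\overline {b}}\cdot (x^{a}[x\mapsto {\underline {b}}\cdot y^{c}])=d\cdot {\overline {b}}\cdot a\cdot y^{{\underline {b}}c}=y^{d{\overline {b}}a{\underline {b}}c}}$

The resulting label records the history of the contraction: the overlined and underlined occurrences of b mark where the redex's function part was consumed and where its argument was substituted.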

A practical algorithm for optimal reduction was first described in 1989,[21] more than a decade after optimal reduction was first defined in 1974.[22] The Bologna Optimal Higher-order Machine (BOHM) is a prototype implementation of an extension of the technique to interaction nets.[18]: 362 [23] Lambdascope is a more recent implementation of optimal reduction, also using interaction nets.[24][b]

### Call by need reduction

Call by need reduction can be defined similarly to optimal reduction as weak leftmost-outermost reduction using parallel reduction of redexes with the same label, for a slightly different labelled lambda calculus.[14] An alternate definition changes the beta rule to find the "demanded" computation. This requires extending the beta rule to allow reducing terms that are not syntactically adjacent, so this definition is similar to the labelled definition in that they are both reduction strategies for variations of the lambda calculus.[25] As with call-by-name and call-by-value, call-by-need reduction was devised to mimic the behavior of the evaluation strategy known as "call-by-need" or lazy evaluation.
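The evaluation behavior being mimicked can be sketched operationally with memoized thunks (a minimal illustration of the call-by-need evaluation strategy, not of the labelled-calculus definition above; all names are invented):

```python
# Memoized thunks: the operational core of call-by-need (lazy) evaluation.

class Thunk:
    """Delays a computation; forces it at most once and caches the result."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:             # compute only on first demand
            self._value = self._compute()
            self._forced = True
        return self._value               # later demands reuse the cached value

evaluations = []

def expensive():
    evaluations.append(1)   # records each time the work is actually performed
    return 21

unused = Thunk(expensive)   # an argument that is never demanded: no work done
t = Thunk(expensive)
total = t.force() + t.force()   # demanded twice, but computed only once
```

An unneeded argument is never evaluated, and a needed one is evaluated at most once, which is exactly the sharing that distinguishes call-by-need from call-by-name.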

## Notes

1. ^ Incidentally, the above term reduces to the identity function (λy.y), and is constructed by making wrappers which make the identity function available to the binders g=λh..., f=λw..., h=λx.x (at first), and w=λz.z (at first), all of which are applied to the innermost term λy.y.
2. ^ A summary of recent research on optimal reduction can be found in the short article About the efficient reduction of lambda terms.

## References

1. ^ a b Kirchner, Hélène (26 August 2015). "Rewriting Strategies and Strategic Rewrite Programs". In Martí-Oliet, Narciso; Ölveczky, Peter Csaba; Talcott, Carolyn (eds.). Logic, Rewriting, and Concurrency: Essays Dedicated to José Meseguer on the Occasion of His 65th Birthday. Springer. ISBN 978-3-319-23165-5. Retrieved 14 August 2021.
2. ^ Selinger, Peter; Valiron, Benoît (2009). "Quantum Lambda Calculus" (PDF). Semantic Techniques in Quantum Computation: 23. doi:10.1017/CBO9781139193313.005. Retrieved 21 August 2021.
3. ^ a b Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. p. 56. ISBN 0-262-16209-1.
4. ^ Klop, J. W. "Term Rewriting Systems" (PDF). Papers by Nachum Dershowitz and students. Tel Aviv University. p. 77. Retrieved 14 August 2021.
5. ^ a b Horwitz, Susan B. "Lambda Calculus". CS704 Notes. University of Wisconsin Madison. Retrieved 19 August 2021.
6. ^ Barendregt, H. P.; Eekelen, M. C. J. D.; Glauert, J. R. W.; Kennaway, J. R.; Plasmeijer, M. J.; Sleep, M. R. (1987). "Term graph rewriting". PARLE Parallel Architectures and Languages Europe. 259: 141–158. doi:10.1007/3-540-17945-3_8.
7. ^ Antoy, Sergio; Middeldorp, Aart (September 1996). "A sequential reduction strategy" (PDF). Theoretical Computer Science. 165 (1): 75–95. doi:10.1016/0304-3975(96)00041-2. Retrieved 8 September 2021.
8. ^ Kieburtz, Richard B. (November 2001). "A Logic for Rewriting Strategies". Electronic Notes in Theoretical Computer Science. 58 (2): 138–154. doi:10.1016/S1571-0661(04)00283-X.
9. ^ a b c Mazzola, Guerino; Milmeister, Gérard; Weissmann, Jody (21 October 2004). Comprehensive Mathematics for Computer Scientists 2. Springer Science & Business Media. p. 323. ISBN 978-3-540-20861-7.
10. ^ Vial, Pierre (7 December 2017). Non-Idempotent Typing Operators, beyond the λ-Calculus (PDF) (PhD). Sorbonne Paris Cité. p. 62.
11. ^ Curry, Haskell B.; Feys, Robert (1958). Combinatory Logic. Vol. I. Amsterdam: North Holland. pp. 139–142. ISBN 0-7204-2208-6.
12. ^ Kashima, Ryo. "A Proof of the Standardization Theorem in λ-Calculus" (PDF). Tokyo Institute of Technology. Retrieved 19 August 2021.
13. ^ Takahashi, M. (April 1995). "Parallel Reductions in λ-Calculus". Information and Computation. 118 (1): 120–127. doi:10.1006/inco.1995.1057.
14. ^ a b c d Blanc, Tomasz; Lévy, Jean-Jacques; Maranget, Luc (2005). "Sharing in the Weak Lambda-Calculus". Processes, Terms and Cycles: Steps on the Road to Infinity: Essays Dedicated to Jan Willem Klop on the Occasion of His 60th Birthday. Springer. pp. 70–87. CiteSeerX 10.1.1.129.147. doi:10.1007/11601548_7. ISBN 978-3-540-32425-6.
15. ^ Sestoft, Peter (2002). Mogensen, T; Schmidt, D; Sudborough, I. H. (eds.). Demonstrating Lambda Calculus Reduction (PDF). The Essence of Computation: Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones. Lecture Notes in Computer Science. 2566. Springer-Verlag. pp. 420–435. ISBN 3-540-00326-6.
16. ^ a b Felleisen, Matthias (2009). Semantics engineering with PLT Redex. Cambridge, Mass.: MIT Press. p. 42. ISBN 0262062755.
17. ^ Sestini, Filippo (2019). "Normalization by Evaluation for Typed Weak lambda-Reduction" (PDF). 24th International Conference on Types for Proofs and Programs (TYPES 2018). doi:10.4230/LIPIcs.TYPES.2018.6.
18. ^ a b c Asperti, Andrea; Guerrini, Stefano (1998). The optimal implementation of functional programming languages. Cambridge, UK: Cambridge University Press. ISBN 0521621127.
19. ^ a b Fernández, Maribel; Siafakas, Nikolaos (30 March 2010). "Labelled Lambda-calculi with Explicit Copy and Erase". Electronic Proceedings in Theoretical Computer Science. 22: 49–64. doi:10.4204/EPTCS.22.5.
20. ^ Lévy, Jean-Jacques (1988). "Sharing in the Evaluation of lambda Expressions" (PDF): 187.
21. ^ Lamping, John (1990). "An algorithm for optimal lambda calculus reduction" (PDF). Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '90: 16–30. doi:10.1145/96709.96711.
22. ^ Lévy, Jean-Jacques (June 1974). Réductions sures dans le lambda-calcul (PDF) (PhD) (in French). Université Paris VII. pp. 81–109. OCLC 476040273. Retrieved 17 August 2021.
23. ^ Asperti, Andrea. "Bologna Optimal Higher-Order Machine, Version 1.1". GitHub.
24. ^ van Oostrom, Vincent; van de Looij, Kees-Jan; Zwitserlood, Marijn (2010). "Lambdascope: Another optimal implementation of the lambda-calculus" (PDF).
25. ^ Chang, Stephen; Felleisen, Matthias (2012). "The Call-by-Need Lambda Calculus, Revisited" (PDF). Programming Languages and Systems. 7211: 128–147. doi:10.1007/978-3-642-28869-2_7.