LL grammar

From Wikipedia, the free encyclopedia
The C grammar[1] is not LL(1): the bottom part shows a parser that has digested the tokens "int v ;main(){" and is about to choose a rule to derive the nonterminal "Stmt". Looking only at the first lookahead token "v", it cannot decide which of the two alternatives for "Stmt" to choose, since two input continuations are possible. They can be discriminated by peeking at the second lookahead token (yellow background).

In formal language theory, an LL grammar is a context-free grammar that can be parsed by an LL parser, which parses the input from Left to right, and constructs a Leftmost derivation of the sentence (hence LL, compared with LR parser that constructs a rightmost derivation). A language that has an LL grammar is known as an LL language. These form subsets of deterministic context-free grammars (DCFGs) and deterministic context-free languages (DCFLs), respectively. One says that a given grammar or language "is an LL grammar/language" or simply "is LL" to indicate that it is in this class.

LL parsers are table-based parsers, similar to LR parsers. LL grammars can alternatively be characterized as precisely those that can be parsed by a predictive parser – a recursive descent parser without backtracking – and these can be readily written by hand. This article is about the formal properties of LL grammars; for parsing, see LL parser or recursive descent parser.
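The correspondence between LL(1) grammars and predictive parsers can be made concrete with a small hand-written sketch (the toy grammar and all names below are illustrative, not from the article): each nonterminal becomes a function, and a single lookahead token selects the production with no backtracking.

```python
# A hand-written predictive (recursive descent, no backtracking) parser
# for the illustrative LL(1) grammar:
#   E  -> T E'
#   E' -> '+' T E' | ε
#   T  -> 'int' | '(' E ')'
# Each nonterminal becomes one function; the single lookahead token
# uniquely determines which alternative to apply.

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else '$'  # '$' = end marker

    def eat(expected):
        nonlocal pos
        if peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {peek()!r}")
        pos += 1

    def E():
        T()
        E_prime()

    def E_prime():
        if peek() == '+':            # FIRST('+' T E') = {'+'}
            eat('+')
            T()
            E_prime()
        # else: apply E' -> ε (lookahead is in FOLLOW(E') = {')', '$'})

    def T():
        if peek() == 'int':
            eat('int')
        elif peek() == '(':
            eat('(')
            E()
            eat(')')
        else:
            raise SyntaxError(f"unexpected token {peek()!r}")

    E()
    eat('$')
    return True

print(parse(['int', '+', '(', 'int', '+', 'int', ')', '$']))  # True
```

Because every choice is forced by one token, the parser runs in linear time and never revisits input, which is what "predictive" means above.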

Formal definition[edit]

Finite case[edit]

Given a natural number k ≥ 1, a context-free grammar G = (V, T, P, S) is an LL(k) grammar if

  • for each terminal symbol string w of length up to k symbols,
  • for each nonterminal symbol A, and
  • for each terminal symbol string u,

there is at most one production rule r ∈ P such that for some terminal symbol strings v and x,

  • the string uAv can be derived from the start symbol S,
  • x can be derived from Av after first applying rule r, and
  • the first k symbols of w and of x agree.[2]

An alternative, but equivalent, formal definition is the following: G = (V, T, P, S) is an LL(k) grammar if, for arbitrary derivations

  S ⇒∗ uAα ⇒ uβα ⇒∗ ux   and   S ⇒∗ uAα ⇒ uγα ⇒∗ uy,

when the first k symbols of x agree with those of y, then β = γ.[3][4]

Informally, when a parser has derived uAα, with A its leftmost nonterminal and u already consumed from the input, then by looking at that u and peeking at the next k symbols w of the current input, the parser can identify with certainty the production rule r for A.
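For k = 1, this rule-identification test is usually carried out with FIRST and FOLLOW sets: a grammar is LL(1) exactly when no cell of the resulting parsing table receives two rules. A minimal sketch (the grammar encoding and function names are my own, not from the article):

```python
# Build the LL(1) parsing table for a grammar given as
# {nonterminal: [list of alternatives]}, an alternative being a list of
# symbols; [] encodes an ε-production. The grammar is LL(1) iff no
# (nonterminal, lookahead) cell is claimed by two rules.
EPS, END = 'ε', '$'

def first_of(seq, FIRST):
    """FIRST set of a symbol string."""
    out = set()
    for sym in seq:
        f = FIRST.get(sym, {sym})       # terminal a has FIRST(a) = {a}
        out |= f - {EPS}
        if EPS not in f:
            return out
    out.add(EPS)                        # the whole string can vanish
    return out

def ll1_table(grammar, start):
    FIRST = {A: set() for A in grammar}
    changed = True
    while changed:                      # fixed-point iteration for FIRST
        changed = False
        for A, alts in grammar.items():
            for alt in alts:
                f = first_of(alt, FIRST)
                if not f <= FIRST[A]:
                    FIRST[A] |= f
                    changed = True
    FOLLOW = {A: set() for A in grammar}
    FOLLOW[start].add(END)
    changed = True
    while changed:                      # fixed-point iteration for FOLLOW
        changed = False
        for A, alts in grammar.items():
            for alt in alts:
                for i, sym in enumerate(alt):
                    if sym in grammar:
                        f = first_of(alt[i + 1:], FIRST)
                        add = (f - {EPS}) | (FOLLOW[A] if EPS in f else set())
                        if not add <= FOLLOW[sym]:
                            FOLLOW[sym] |= add
                            changed = True
    table, conflicts = {}, []
    for A, alts in grammar.items():
        for alt in alts:
            f = first_of(alt, FIRST)
            lookaheads = (f - {EPS}) | (FOLLOW[A] if EPS in f else set())
            for a in lookaheads:
                if (A, a) in table:
                    conflicts.append((A, a))
                table[(A, a)] = alt
    return table, conflicts

# S -> a b | a c needs two tokens of lookahead, so it is not LL(1):
g = {'S': [['a', 'b'], ['a', 'c']]}
print(ll1_table(g, 'S')[1])   # [('S', 'a')] — a conflict on lookahead 'a'
```

A reported conflict is exactly a lookahead token for which more than one rule r satisfies the conditions of the definition above with k = 1.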

When rule identification is possible even without considering the past input u, then the grammar is called a strong LL(k) grammar.[5] In the formal definition of a strong LL(k) grammar, the universal quantifier for u is omitted, and u is added to the "for some" quantifier for v and x. For every LL(k) grammar, a structurally equivalent strong LL(k) grammar can be constructed.[6]

The class of LL(k) languages forms a strictly increasing sequence of sets: LL(0) ⊊ LL(1) ⊊ LL(2) ⊊ ⋯.[7] It is decidable whether a given grammar G is LL(k) for a given k, but it is not decidable whether an arbitrary grammar is LL(k) for some k. It is also decidable whether a given LR(k) grammar is also an LL(m) grammar for some m.[8]

Every LL(k) grammar is also an LR(k) grammar. An ε-free LL(1) grammar is also an SLR(1) grammar. An LL(1) grammar with symbols that have both empty and non-empty derivations is also an LALR(1) grammar. An LL(1) grammar with symbols that have only the empty derivation may or may not be LALR(1).[9]

LL grammars cannot have rules containing left recursion.[10] Each LL(k) grammar that is ε-free can be transformed into an equivalent LL(k) grammar in Greibach normal form (which by definition does not have rules with left recursion).[11]
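A left-recursive rule such as A → Aα would make a predictive parser call itself forever without consuming input. In the immediate case the textbook transformation A → Aα | β ⟹ A → βA′, A′ → αA′ | ε removes it; a minimal sketch (the helper-naming convention and grammar encoding are illustrative, and this handles only immediate, not indirect, left recursion):

```python
# Remove *immediate* left recursion:
#   A -> A α | β   becomes   A -> β A'   and   A' -> α A' | ε
# Grammar encoding: {nonterminal: [alternatives]}, alternatives as symbol lists.
EPS = 'ε'

def remove_immediate_left_recursion(grammar):
    new = {}
    for A, alts in grammar.items():
        recursive = [alt[1:] for alt in alts if alt and alt[0] == A]
        rest = [alt for alt in alts if not alt or alt[0] != A]
        if not recursive:
            new[A] = alts               # nothing to do for this nonterminal
            continue
        A2 = A + "'"                    # fresh helper nonterminal
        new[A] = [alt + [A2] for alt in rest]
        new[A2] = [alt + [A2] for alt in recursive] + [[EPS]]
    return new

# E -> E + T | T  (left-recursive)  becomes  E -> T E', E' -> + T E' | ε
g = {'E': [['E', '+', 'T'], ['T']]}
print(remove_immediate_left_recursion(g))
# {'E': [['T', "E'"]], "E'": [['+', 'T', "E'"], ['ε']]}
```

The full conversion to Greibach normal form cited above additionally eliminates indirect left recursion by substituting rules in a fixed nonterminal order.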

Regular case[edit]

Let Σ be a terminal alphabet. A subset R of Σ∗ is a regular set if it is a regular language over Σ. A partition π of Σ∗ is called a regular partition if for every R ∈ π the set R is regular.
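As an illustration (the partition chosen here is my own example, not from the cited source), a regular partition can be given by a finite family of regular expressions that cover Σ∗ without overlapping:

```python
import re

# A regular partition of {a,b}* into three blocks, grouping strings by
# their last symbol; each block is a regular language, the blocks are
# pairwise disjoint, and together they cover all of {a,b}*.
blocks = {
    'ends_a': re.compile(r'[ab]*a'),
    'ends_b': re.compile(r'[ab]*b'),
    'empty':  re.compile(r''),
}

def block_of(s):
    """Return the name of the (unique) block containing string s."""
    return next(name for name, rx in blocks.items() if rx.fullmatch(s))

print(block_of('abba'))                 # 'ends_a'
print(block_of('ab') == block_of('b'))  # True — same block of the partition
```

Two remaining inputs lying in the same block is the LL(π) analogue of two inputs sharing their first k symbols in the LL(k) definition.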

Let G = (V, Σ, P, S) be a context-free grammar and let π be a regular partition of Σ∗. We say that G is an LL(π) grammar if, for arbitrary derivations

  S ⇒∗ uAα ⇒ uβα ⇒∗ ux   and   S ⇒∗ uAα ⇒ uγα ⇒∗ uy

such that x and y lie in the same block of π, it follows that β = γ.[12]

A grammar G is said to be LL-regular (LLR) if there exists a regular partition π of Σ∗ such that G is LL(π).

LLR grammars are unambiguous and cannot be left-recursive.

Every LL(k) grammar is LLR. Every LL(k) grammar is deterministic, but there exists an LLR grammar that is not deterministic.[13] Hence the class of LLR grammars is strictly larger than the union of the LL(k) classes over all k.

It is decidable whether, given a regular partition π, a given grammar is LL(π). It is, however, not decidable whether an arbitrary grammar G is LLR. This is due to the fact that deciding whether a grammar G generates a regular language, which would be necessary to find a regular partition for G, can be reduced to the Post correspondence problem.

Every LLR grammar is LR-regular (LRR, the corresponding equivalent for LR(k) grammars), but there exists an LR(1) grammar that is not LLR.[13]

Historically, LLR grammars followed the invention of LRR grammars. Given a regular partition, a Moore machine can be constructed to transduce the parsing from right to left, identifying instances of regular productions. Once that has been done, an LL(1) parser suffices to handle the transduced input in linear time. Thus, LLR parsers can handle a class of grammars strictly larger than LL(k) parsers can, while being equally efficient. Despite this, the theory of LLR grammars has not found any major applications. One possible and very plausible reason is that while there are generative algorithms for LL(k) and LR(k) parsers, the problem of generating an LLR/LRR parser is undecidable unless one has constructed a regular partition upfront; and even the problem of constructing a suitable regular partition for a given grammar is undecidable.

Simple deterministic languages[edit]

A context-free grammar is called simple deterministic,[14] or just simple,[15] if

  • it is in Greibach normal form (i.e. each rule has the form A → aB₁⋯Bₙ, with a a terminal and B₁, …, Bₙ nonterminals, n ≥ 0), and
  • different right-hand sides for the same nonterminal A always start with different terminals a.

A set of strings is called a simple deterministic, or just simple, language, if it has a simple deterministic grammar.
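The two conditions are purely syntactic and can be checked mechanically. A sketch (the grammar encoding is illustrative, not from the cited sources):

```python
# Check the "simple deterministic" conditions: every rule is in Greibach
# normal form A -> a B1 ... Bn, and the alternatives of each nonterminal
# start with pairwise distinct terminals.
# Grammar encoding: {nonterminal: [alternatives]}, alternatives as symbol lists.

def is_simple_deterministic(grammar, terminals):
    for A, alts in grammar.items():
        leading = []
        for alt in alts:
            # Greibach normal form: exactly one leading terminal ...
            if not alt or alt[0] not in terminals:
                return False
            # ... followed by nonterminals only.
            if any(sym in terminals for sym in alt[1:]):
                return False
            leading.append(alt[0])
        if len(leading) != len(set(leading)):   # duplicate leading terminal
            return False
    return True

# S -> a S B | b ;  B -> c   — simple deterministic
g1 = {'S': [['a', 'S', 'B'], ['b']], 'B': [['c']]}
# S -> a B | a C  — two alternatives of S start with the same terminal 'a'
g2 = {'S': [['a', 'B'], ['a', 'C']], 'B': [['b']], 'C': [['c']]}
print(is_simple_deterministic(g1, {'a', 'b', 'c'}))  # True
print(is_simple_deterministic(g2, {'a', 'b', 'c'}))  # False
```

Since every choice is resolved by the single leading terminal, such a grammar is ε-free LL(1) by construction, in line with the equivalence stated below.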

The class of languages having an ε-free LL(1) grammar in Greibach normal form equals the class of simple deterministic languages.[16] This language class includes the regular sets not containing ε.[15] Equivalence is decidable for this class, while inclusion is not.[14]


Applications[edit]
LL grammars, particularly LL(1) grammars, are of great practical interest, as they are easy to parse, either by LL parsers or by recursive descent parsers, and many computer languages[clarify] are designed to be LL(1) for this reason. Languages based on grammars with a high value of k have traditionally been considered[citation needed] to be difficult to parse, although this is less true now given the availability and widespread use[citation needed] of parser generators supporting LL(k) grammars for arbitrary k.

References[edit]
  1. ^ Kernighan & Ritchie 1988, Appendix A.13 "Grammar", p. 193 ff. The top image part shows a simplified excerpt in an EBNF-like notation.
  2. ^ Rosenkrantz & Stearns (1970, p. 227). Def.1. The authors do not consider the case k=0.
  3. ^ where "⇒∗" denotes derivability by leftmost derivations, and u, x, y ∈ T∗, A ∈ V, and α, β, γ ∈ (V ∪ T)∗
  4. ^ Waite & Goos (1984, p. 123) Def. 5.22
  5. ^ Rosenkrantz & Stearns (1970, p. 235) Def.2
  6. ^ Rosenkrantz & Stearns (1970, p. 235) Theorem 2
  7. ^ Rosenkrantz & Stearns (1970, p. 246–247): the string set exhibited there has an LL(k+1) grammar, but no ε-free LL(k) grammar, for each k.
  8. ^ Rosenkrantz & Stearns (1970, pp. 254–255)
  9. ^ Beatty (1982)
  10. ^ Rosenkrantz & Stearns (1970, p. 241) Lemma 5
  11. ^ Rosenkrantz & Stearns (1970, p. 242) Theorem 4
  12. ^ Poplawski, David A. (1977). Properties of LL-Regular Languages (Technical Report). Purdue University, Department of Computer Science.
  13. ^ a b David A. Poplawski (Aug 1977). Properties of LL-Regular Languages (Technical Report). Purdue University, Department of Computer Science.
  14. ^ a b Korenjak & Hopcroft (1966)
  15. ^ a b Hopcroft & Ullman (1979, p. 229) Exercise 9.3
  16. ^ Rosenkrantz & Stearns (1970, p. 243)


Further reading[edit]

  • Sippu, Seppo; Soisalon-Soininen, Eljas (1990). Parsing Theory: LR(k) and LL(k) Parsing. Springer Science & Business Media. ISBN 978-3-540-51732-0.