LL parser

An LL parser is a top-down parser for a subset of the context-free grammars. It parses the input from Left to right, and constructs a Leftmost derivation of the sentence (hence LL, compared with LR parser). The class of grammars which are parsable in this way is known as the LL grammars.

The remainder of this article describes the table-based kind of parser; the alternative is a recursive descent parser, which is usually coded by hand (although not always; see e.g. ANTLR for an LL(*) recursive-descent parser generator).

An LL parser is called an LL(k) parser if it uses k tokens of look-ahead when parsing a sentence. If such a parser exists for a certain grammar and it can parse sentences of this grammar without backtracking, then the grammar is called an LL(k) grammar. Of these grammars, LL(1) grammars, although fairly restrictive, are very popular because the corresponding LL parsers only need to look at the next token to make their parsing decisions. Languages based on grammars with a high value of k require considerably more effort to parse.

There is contention between the "European school" of language design, which prefers LL-based grammars, and the "US school", which predominantly prefers LR-based grammars.[citation needed] This is largely due to teaching traditions and the detailed description of specific methods and tools in certain textbooks; another influence may be Niklaus Wirth at ETH Zürich in Switzerland, whose research has described a number of ways of optimising LL(1) languages and compilers.

General case

The parser works on strings from a particular grammar.

The parser consists of

  • an input buffer, holding the input string (a sentence generated by the grammar)
  • a stack on which to store the terminals and non-terminals from the grammar yet to be parsed
  • a parsing table which tells it what (if any) grammar rule to apply given the symbols on top of its stack and the next input token

The parser applies the rule found in the table by matching the top-most symbol on the stack (row) with the current symbol in the input stream (column).

When the parser starts, the stack already contains two symbols:

[ S, $ ]

where '$' is a special terminal to indicate the bottom of the stack and the end of the input stream, and 'S' is the start symbol of the grammar. The parser will attempt to rewrite the contents of this stack to what it sees on the input stream. However, it only keeps on the stack what still needs to be rewritten.
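
As an illustration only, this starting state might be set up as follows in Python; the token list is just an example input, the stack is modelled as a list whose last element is the top, and all names are purely illustrative:

  # A minimal sketch of the parser's starting state (illustrative names only).
  tokens = ["(", "1", "+", "1", ")"]   # an example input, already split into tokens
  input_buffer = tokens + ["$"]        # '$' marks the end of the input stream
  stack = ["$", "S"]                   # '$' at the bottom, the start symbol 'S' on top
  # At every step the parser compares stack[-1] (the top of the stack) with
  # input_buffer[0] (the next input token) and consults the parsing table.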

Concrete example

Set up

To explain its workings we will consider the following small grammar:

  1. S → F
  2. S → ( S + F )
  3. F → 1

and parse the following input:

( 1 + 1 )

The parsing table for this grammar looks as follows:

        (   )   1   +   $
    S   2   -   1   -   -
    F   -   -   3   -   -

(Note that there is also a column for the special terminal, represented here as $, that is used to indicate the end of the input stream.)
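
For illustration, the grammar rules and this table can be written down as plain Python dictionaries; the names RULES and TABLE are illustrative, not part of any standard API:

  # Rules of the example grammar, numbered as above; each right-hand side is a
  # list of grammar symbols.
  RULES = {
      1: ("S", ["F"]),
      2: ("S", ["(", "S", "+", "F", ")"]),
      3: ("F", ["1"]),
  }

  # Parsing table: (nonterminal on top of the stack, next input token) -> rule number.
  # Missing entries correspond to the '-' cells, i.e. to parse errors.
  TABLE = {
      ("S", "("): 2,
      ("S", "1"): 1,
      ("F", "1"): 3,
  }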

Parsing procedure

The parser reads the first '(' from the input stream, and the 'S' from the stack. From the table it knows that it has to apply rule 2; it has to rewrite 'S' to '( S + F )' on the stack and write the number of this rule to the output. The stack then becomes:

[ (, S, +, F, ), $ ]

In the next step it removes the '(' from its input stream and from its stack:

[ S, +, F, ), $ ]

Now the parser sees a '1' on its input stream, so it knows that it has to apply rule (1) and then rule (3) from the grammar and write their numbers to the output stream. This results in the following stacks:

[ F, +, F, ), $ ]
[ 1, +, F, ), $ ]

In the next two steps the parser reads the '1' and '+' from the input stream and, since they match the next two items on the stack, also removes them from the stack. This results in:

[ F, ), $ ]

In the next three steps the 'F' will be replaced on the stack with '1', the number 3 will be written to the output stream and then the '1' and ')' will be removed from the stack and the input stream. So the parser ends with both '$' on its stack and on its input stream.

In this case it will report that it has accepted the input string and write to the output stream the list of numbers

[ 2, 1, 3, 3 ]

which is indeed a leftmost derivation of the input string; written out, this derivation is:

S → ( S + F ) → ( F + F ) → ( 1 + F ) → ( 1 + 1 )

Remarks

As can be seen from the example the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal or the special symbol $:

  • If the top is a nonterminal, then it looks up in the parsing table, on the basis of this nonterminal and the symbol on the input stream, which rule of the grammar to use to replace the nonterminal on the stack. The number of the rule is written to the output stream. If the parsing table indicates that there is no such rule, then the parser reports an error and stops.
  • If the top is a terminal then it compares it to the symbol on the input stream and if they are equal they are both removed. If they are not equal the parser reports an error and stops.
  • If the top is $ and on the input stream there is also a $ then the parser reports that it has successfully parsed the input, otherwise it reports an error. In both cases the parser will stop.

These steps are repeated until the parser stops, and then it will have either completely parsed the input and written a leftmost derivation to the output stream or it will have reported an error.
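
The three kinds of steps described above translate directly into a small table-driven loop. The following is a minimal sketch in Python for the example grammar, repeating the RULES and TABLE dictionaries shown earlier so that the fragment runs on its own; all identifiers are illustrative:

  # Illustrative sketch of a table-driven LL(1) parser for the example grammar.
  RULES = {
      1: ("S", ["F"]),
      2: ("S", ["(", "S", "+", "F", ")"]),
      3: ("F", ["1"]),
  }
  TABLE = {("S", "("): 2, ("S", "1"): 1, ("F", "1"): 3}
  NONTERMINALS = {"S", "F"}

  def ll1_parse(tokens):
      """Return the list of applied rule numbers, i.e. a leftmost derivation."""
      input_buffer = list(tokens) + ["$"]
      stack = ["$", "S"]                      # top of the stack is stack[-1]
      output = []
      while stack:
          top = stack.pop()
          lookahead = input_buffer[0]
          if top in NONTERMINALS:
              # Nonterminal on top: look up a rule and push its right-hand side.
              rule = TABLE.get((top, lookahead))
              if rule is None:
                  raise SyntaxError("no rule for (%s, %s)" % (top, lookahead))
              output.append(rule)
              stack.extend(reversed(RULES[rule][1]))
          elif top == "$":
              # End marker on top: accept only if the input is also exhausted.
              if lookahead == "$":
                  return output
              raise SyntaxError("unexpected input after end of sentence")
          else:
              # Terminal on top: it must match the next input token.
              if top != lookahead:
                  raise SyntaxError("expected %r, got %r" % (top, lookahead))
              input_buffer.pop(0)
      raise SyntaxError("stack exhausted unexpectedly")

  print(ll1_parse(["(", "1", "+", "1", ")"]))   # prints [2, 1, 3, 3]

Note that the right-hand side of a rule is pushed in reverse, so that its leftmost symbol ends up on top of the stack; this is what makes the recorded rule numbers a leftmost derivation.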

Constructing an LL(1) parsing table

In order to fill the parsing table, we have to establish what grammar rule the parser should choose if it sees a nonterminal A on the top of its stack and a symbol a on its input stream. It is easy to see that such a rule should be of the form A → w and that the language corresponding to w should have at least one string starting with a. For this purpose we define the First-set of w, written here as Fi(w), as the set of terminals that can be found at the start of some string in the language of w, plus ε if the empty string also belongs to that language. Given a grammar with the rules A1 → w1, ..., An → wn, we can compute the Fi(wi) and Fi(Ai) for every rule as follows:

  1. initialize every Fi(wi) and Fi(Ai) with the empty set
  2. for every rule Ai → wi, compute Fi(wi), where Fi is defined as follows:
    • Fi(a w' ) = { a } for every terminal a
    • Fi(A w' ) = Fi(A) for every nonterminal A with ε not in Fi(A)
    • Fi(A w' ) = ( Fi(A) \ { ε } ) ∪ Fi(w' ) for every nonterminal A with ε in Fi(A)
    • Fi(ε) = { ε }
  3. add Fi(wi) to Fi(Ai) for every rule Ai → wi
  4. repeat steps 2 and 3 until all Fi sets stay the same (a sketch of this computation is given below).
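
As an illustration only, this fixed-point computation might look as follows in Python for the example grammar; first_of_string implements the four cases of Fi for a string of symbols, and all names are made up for this sketch:

  # Illustrative sketch of the First-set computation for the example grammar.
  RULES = {
      1: ("S", ["F"]),
      2: ("S", ["(", "S", "+", "F", ")"]),
      3: ("F", ["1"]),
  }
  NONTERMINALS = {"S", "F"}
  EPSILON = "ε"

  def first_of_string(symbols, fi):
      """Fi(w) for a string of symbols w, given the current Fi sets of the nonterminals."""
      if not symbols:                          # Fi(ε) = { ε }
          return {EPSILON}
      head, rest = symbols[0], symbols[1:]
      if head not in NONTERMINALS:             # Fi(a w') = { a } for a terminal a
          return {head}
      if EPSILON not in fi[head]:              # Fi(A w') = Fi(A) if ε not in Fi(A)
          return set(fi[head])
      return (fi[head] - {EPSILON}) | first_of_string(rest, fi)   # ε in Fi(A)

  def first_sets(rules, nonterminals):
      fi = {a: set() for a in nonterminals}    # step 1: start from empty sets
      changed = True
      while changed:                           # step 4: repeat until nothing changes
          changed = False
          for lhs, rhs in rules.values():      # steps 2 and 3: add Fi(wi) to Fi(Ai)
              new = first_of_string(rhs, fi)
              if not new <= fi[lhs]:
                  fi[lhs] |= new
                  changed = True
      return fi

  print(first_sets(RULES, NONTERMINALS))   # result: Fi(S) = { '(', '1' }, Fi(F) = { '1' }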

Unfortunately, the First-sets are not sufficient to compute the parsing table. This is because a right-hand side w of a rule might ultimately be rewritten to the empty string. So the parser should also use the rule A → w if ε is in Fi(w) and it sees on the input stream a symbol that could follow A. Therefore we also need the Follow-set of A, written as Fo(A) here, which is defined as the set of terminals a such that there is a string of symbols αAaβ that can be derived from the start symbol. Computing the Follow-sets for the nonterminals in a grammar can be done as follows:

  1. initialize every Fo(Ai) with the empty set
  2. if there is a rule of the form Aj → w Ai w' , then
    • if the terminal a is in Fi(w' ), then add a to Fo(Ai)
    • if ε is in Fi(w' ), then add Fo(Aj) to Fo(Ai)
  3. repeat step 2 until all Fo sets stay the same (a sketch of this computation is given below).
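
A matching sketch, again only for illustration: for the example grammar the First-sets are Fi(S) = { (, 1 } and Fi(F) = { 1 }, and they are inlined here so that the fragment runs on its own:

  # Illustrative sketch of the Follow-set computation for the example grammar.
  RULES = {
      1: ("S", ["F"]),
      2: ("S", ["(", "S", "+", "F", ")"]),
      3: ("F", ["1"]),
  }
  NONTERMINALS = {"S", "F"}
  EPSILON = "ε"
  FI = {"S": {"(", "1"}, "F": {"1"}}           # First-sets, computed as above

  def first_of_string(symbols):
      """Fi(w) for a string of symbols w, using the inlined Fi sets above."""
      if not symbols:
          return {EPSILON}
      head, rest = symbols[0], symbols[1:]
      if head not in NONTERMINALS:
          return {head}
      if EPSILON not in FI[head]:
          return set(FI[head])
      return (FI[head] - {EPSILON}) | first_of_string(rest)

  def follow_sets():
      fo = {a: set() for a in NONTERMINALS}    # step 1: start from empty sets
      # (Many presentations also put '$' into Fo of the start symbol; the
      # definition above does not, and this grammar does not need it.)
      changed = True
      while changed:                           # step 3: repeat until nothing changes
          changed = False
          for lhs, rhs in RULES.values():      # step 2: rules of the form Aj → w Ai w'
              for i, sym in enumerate(rhs):
                  if sym not in NONTERMINALS:
                      continue
                  tail = first_of_string(rhs[i + 1:])
                  new = tail - {EPSILON}       # terminals a in Fi(w')
                  if EPSILON in tail:          # ε in Fi(w'): add Fo(Aj) as well
                      new |= fo[lhs]
                  if not new <= fo[sym]:
                      fo[sym] |= new
                      changed = True
      return fo

  print(follow_sets())   # result: Fo(S) = { '+' }, Fo(F) = { '+', ')' }

For this particular grammar the Follow-sets do not influence the table, because no right-hand side can derive the empty string; they become essential as soon as ε-rules appear.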

Now we can define exactly which rules will be contained where in the parsing table. If T[A, a] denotes the entry in the table for nonterminal A and terminal a, then

T[A,a] contains the rule A → w if and only if
a is in Fi(w) or
ε is in Fi(w) and a is in Fo(A).

If the table contains at most one rule in every one of its cells, then the parser will always know which rule it has to use and can therefore parse strings without backtracking. It is in precisely this case that the grammar is called an LL(1) grammar.
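
To make the construction concrete, here is a minimal sketch for the example grammar, with the Fi(w) sets of the right-hand sides and the Fo sets from the previous sketches inlined; the conflict check corresponds exactly to the LL(1) condition just stated, and all names are illustrative:

  # Illustrative sketch of the table construction for the example grammar.
  RULES = {
      1: ("S", ["F"]),
      2: ("S", ["(", "S", "+", "F", ")"]),
      3: ("F", ["1"]),
  }
  EPSILON = "ε"
  FI_RHS = {1: {"1"}, 2: {"("}, 3: {"1"}}   # Fi(w) for each rule A → w
  FO = {"S": {"+"}, "F": {"+", ")"}}        # Fo(A) for each nonterminal A

  def build_table():
      table = {}
      for number, (lhs, _) in RULES.items():
          fi_w = FI_RHS[number]
          terminals = fi_w - {EPSILON}      # T[A, a] gets the rule if a is in Fi(w) ...
          if EPSILON in fi_w:               # ... or if ε is in Fi(w) and a is in Fo(A)
              terminals |= FO[lhs]
          for a in terminals:
              if (lhs, a) in table:
                  raise ValueError("conflict at (%s, %s): not an LL(1) grammar" % (lhs, a))
              table[(lhs, a)] = number
      return table

  print(build_table())   # {('S', '1'): 1, ('S', '('): 2, ('F', '1'): 3}

If the check ever fires, two rules compete for the same cell and the grammar is not LL(1).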

Constructing an LL(k) parsing table

Until the mid-1990s, it was widely believed that LL(k) parsing (for k > 1) was impractical[citation needed], since the size of the parsing table would, in the worst case, grow exponentially in k. This perception changed gradually after the release of the Purdue Compiler Construction Tool Set (PCCTS, now known as ANTLR) around 1992, when it was demonstrated that many programming languages can be parsed efficiently by an LL(k) parser without triggering the worst-case behavior of the parser. Moreover, in certain cases LL parsing is feasible even with unlimited lookahead. By contrast, traditional parser generators, like yacc and GNU Bison, use LALR(1) parsing tables to construct a restricted LR parser with a fixed one-token lookahead.

See also