Embedded pushdown automaton


An embedded pushdown automaton or EPDA is a computational model for parsing languages generated by tree-adjoining grammars (TAGs). It is similar to the pushdown automaton used for parsing context-free grammars, except that instead of a plain stack of symbols it maintains a stack of iterated stacks of symbols. This gives TAGs a generative capacity between that of context-free grammars and context-sensitive grammars, or, equivalently, a subset of the mildly context-sensitive grammars. Embedded pushdown automata should not be confused with nested stack automata, which have strictly more computational power.

History and applications

EPDAs were first described by K. Vijay-Shanker in his 1988 doctoral thesis.[1] They have since been applied to more complete descriptions of classes of mildly context-sensitive grammars and have played an important role in refining the Chomsky hierarchy. Various subgrammars, such as the linear indexed grammar, can thus be defined.[2] EPDAs have also found use in natural language processing.

While natural languages have traditionally been analyzed using context-free grammars (see transformational-generative grammar and computational linguistics), that model does not handle languages with crossed dependencies, such as Dutch, well; an EPDA is well suited to such cases. A detailed linguistic analysis is available in Joshi and Schabes (1997).[3]

Theory

An EPDA is a finite state machine with a set of stacks that can themselves be accessed only through the embedded stack. Each stack contains elements of the stack alphabet \Gamma, so the contents of a stack are described by \sigma_i \in \Gamma^*, where the star denotes the Kleene closure of the alphabet.

Each stack can then be written out in terms of its elements, so we denote the jth stack in the automaton using a double-dagger symbol: \Upsilon_j = \ddagger\sigma_j = \{\sigma_{j,k}, \sigma_{j,k-1}, \ldots, \sigma_{j,1}\}, where \sigma_{j,k} is the topmost, i.e. next accessible, symbol in the stack. The embedded stack of m stacks can thus be denoted by \{\Upsilon_j\} = \{\ddagger\sigma_m, \ddagger\sigma_{m-1}, \ldots, \ddagger\sigma_1\} \in (\ddagger\Gamma^+)^*.
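In code, this stack of stacks can be pictured as a list of lists (a purely illustrative encoding, not part of the formal definition; the top of each stack is written last):

```python
# Illustrative encoding of an embedded stack holding m = 3 stacks.
# Within each inner list the rightmost element is the top symbol.
embedded = [['a', 'b'], ['c'], ['x', 'y', 'z']]

current = embedded[-1]     # the current (topmost) stack, Upsilon_m
top_symbol = current[-1]   # the next accessible symbol, sigma_{m,k}
print(top_symbol)          # → z
```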

We define an EPDA by the septuple (7-tuple)

M = (Q, \Sigma, \Gamma, \delta, q_0, Q_\textrm{F}, \sigma_0), where
  • Q is a finite set of states;
  • \Sigma is the finite input alphabet;
  • \Gamma is the finite stack alphabet;
  • q_0 \in Q is the start state;
  • Q_\textrm{F} \subseteq Q is the set of final states;
  • \sigma_0 \in \Gamma is the initial stack symbol;
  • \delta : Q \times \Sigma \times \Gamma \rightarrow S is the transition function, where S ranges over the finite subsets of Q \times (\ddagger\Gamma^+)^* \times \Gamma^* \times (\ddagger\Gamma^+)^*.

Thus the transition function takes a state, the next symbol of the input string, and the top symbol of the current stack, and produces the next state, the stacks to be pushed onto the embedded stack below the current stack, the symbols replacing the popped top of the current stack, and the stacks to be pushed above the current stack. More conceptually, the top of the current stack is rewritten, the current stack (if non-empty) is pushed back onto the embedded stack, and further whole stacks may be pushed around it, with the topmost stack being the one read from in the next transition. Stacks can therefore be pushed both above and below the current stack.
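The effect of a single such move can be sketched in Python (an assumption of this sketch: stacks are lists with the top element written last, and the helper name is illustrative):

```python
def apply_move(embedded, below, replacement, above):
    """Apply one EPDA move to the embedded stack (a list of lists).

    embedded    : list of stacks; embedded[-1] is the current stack
    below       : whole stacks pushed below the current stack
    replacement : symbols replacing the popped top of the current stack
    above       : whole stacks pushed above the current stack
    """
    current = embedded.pop()      # detach the current (topmost) stack
    current.pop()                 # consume its top symbol
    current.extend(replacement)   # push the replacement string
    embedded.extend(below)        # stacks pushed below the current one
    if current:                   # an emptied stack disappears
        embedded.append(current)
    embedded.extend(above)        # stacks pushed above become the new top
    return embedded

embedded = [['$'], ['$', 'A']]            # current stack is ['$', 'A']
apply_move(embedded, [['C']], ['B', 'B'], [])
print(embedded)                           # → [['$'], ['C'], ['$', 'B', 'B']]
```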

A given configuration is defined by

C(M) = \{q, \Upsilon_m \ldots \Upsilon_1, x_1, x_2\} \in Q \times (\ddagger\Gamma^+)^* \times \Sigma^* \times \Sigma^*

where q is the current state; the \Upsilon_j are the stacks in the embedded stack, with \Upsilon_m the current stack; and, for an input string x = x_1 x_2 \in \Sigma^*, x_1 is the portion of the string already processed by the machine and x_2 is the portion still to be processed, whose head is the current symbol read. Note that the empty string \epsilon acts as a terminating symbol: if the machine is in a final state once the whole input has been read, the input string is accepted, and otherwise it is rejected. Such accepted strings are elements of the language

L(M) = \left\{ x \mid \{q_0, \Upsilon_0, \epsilon, x\} \rightarrow_M^* \{q_\textrm{F}, \Upsilon_m \ldots \Upsilon_1, x, \epsilon\} \right\}

where q_\textrm{F} \in Q_\textrm{F} and \rightarrow_M^* denotes the transition function applied as many times as necessary to parse the string, i.e. its reflexive–transitive closure.
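As a concrete, deliberately degenerate illustration of this acceptance condition, the sketch below runs a deterministic EPDA that never pushes extra stacks, so it behaves as an ordinary pushdown automaton for \{a^n b^n \mid n \geq 1\}. The transition table, state names, bottom marker, and end-of-input \epsilon-moves are all illustrative assumptions, not taken from the formal definition:

```python
def step(embedded, below, repl, above):
    """One move: pop the current stack's top symbol, push `repl` in its
    place, and push whole stacks below/above the current stack."""
    cur = embedded.pop()      # detach the current (topmost) stack
    cur.pop()                 # consume its top symbol
    cur.extend(repl)          # replacement symbols, top written last
    embedded.extend(below)    # stacks pushed below the current stack
    if cur:                   # an emptied stack disappears
        embedded.append(cur)
    embedded.extend(above)    # stacks pushed above become the new top

def accepts(delta, start, finals, word):
    """Accept iff a final state is reached once the input is exhausted."""
    state, embedded = start, [['$']]   # '$' is an assumed bottom marker
    for ch in word:
        if not embedded:
            return False
        move = delta.get((state, ch, embedded[-1][-1]))
        if move is None:
            return False
        state = move[0]
        step(embedded, *move[1:])
    while embedded:                    # end of input: epsilon moves only
        move = delta.get((state, '', embedded[-1][-1]))
        if move is None:
            break
        state = move[0]
        step(embedded, *move[1:])
    return state in finals

# A one-stack transition table for {a^n b^n : n >= 1} (illustrative).
delta = {
    ('q0', 'a', '$'): ('q0', [], ['$', 'A'], []),   # push the first A
    ('q0', 'a', 'A'): ('q0', [], ['A', 'A'], []),   # push one more A
    ('q0', 'b', 'A'): ('q1', [], [], []),           # start popping A's
    ('q1', 'b', 'A'): ('q1', [], [], []),           # pop one A per b
    ('q1', '',  '$'): ('qf', [], [], []),           # clean stack: accept
}

print(accepts(delta, 'q0', {'qf'}, 'aabb'))  # → True
print(accepts(delta, 'q0', {'qf'}, 'aab'))   # → False
```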

An informal description of EPDAs can also be found in Joshi and Schabes (1997),[3] Sect. 7, pp. 23–25.

k-order EPDA and the Weir hierarchy

A more precisely defined hierarchy of languages that correspond to the mildly context-sensitive class was introduced by David J. Weir.[4] Based on the work of Nabil A. Khabbaz,[5][6] Weir's control language hierarchy is a containment hierarchy of countably many language classes, in which Level-1 is the class of context-free languages and Level-2 is the class of tree-adjoining languages (equivalently, the languages of linear indexed grammars, head grammars, and combinatory categorial grammars).

Following are some of the properties of Level-k languages in the hierarchy:

  • Level-k languages are properly contained in the Level-(k + 1) language class
  • Level-k languages can be parsed in O(n^{3\cdot2^{k-1}}) time
  • Level-k contains the language \{a_1^n \dotso a_{2^k}^n|n\geq0\}, but not \{a_1^n \dotso a_{2^{k+1}}^n|n\geq0\}
  • Level-k contains the language \{w^{2^{k-1}}|w\in\{a,b\}^*\}, but not \{w^{2^{k-1}+1}|w\in\{a,b\}^*\}

These properties correspond well (at least for small k > 1) to the conditions for mildly context-sensitive languages imposed by Joshi, and as k grows, the language class becomes, in a sense, less mildly context-sensitive.
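The two numeric bounds above can be tabulated directly for small k (plain arithmetic, nothing automaton-specific; the table format is our own): Level-1 recovers the familiar O(n^3) bound for context-free parsing, and Level-2 the O(n^6) bound for tree-adjoining parsing.

```python
# For each level k: the parsing-time exponent 3 * 2^(k-1) and the
# number of equal-length blocks 2^k in the count language a_1^n ... a_{2^k}^n.
rows = [(k, 3 * 2 ** (k - 1), 2 ** k) for k in (1, 2, 3)]
for k, parse_exp, counters in rows:
    print(f"Level-{k}: parsing O(n^{parse_exp}), counts up to {counters} blocks")
```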


References

  1. Vijay-Shanker, K. (1988). A Study of Tree-Adjoining Grammars. Ph.D. thesis, University of Pennsylvania.
  2. Weir, David J. (1994). "Linear Iterated Pushdowns". Computational Intelligence 10 (4): 431–439. doi:10.1111/j.1467-8640.1994.tb00007.x.
  3. Joshi, Aravind K.; Schabes, Yves (1997). "Tree-Adjoining Grammars". In Handbook of Formal Languages, vol. 3. Springer. pp. 69–124.
  4. Weir, David J. (1992). "A geometric hierarchy beyond context-free languages". Theoretical Computer Science 104 (2): 235–261. doi:10.1016/0304-3975(92)90124-X.
  5. Khabbaz, Nabil Anton (1972). Generalized Context-Free Languages. Ph.D. thesis, University of Iowa.
  6. Khabbaz, Nabil Anton (1974). "A geometric hierarchy of languages". Journal of Computer and System Sciences 8: 142–157. doi:10.1016/s0022-0000(74)80052-8.

Further reading

  • Laura Kallmeyer (2010). Parsing Beyond Context-Free Grammars. Springer Science & Business Media. ISBN 978-3-642-14846-0.