National Repository of Grey Literature: 4 records found
Lexicalized Syntactic Analysis by Restarting Automata
Mráz, F. ; Otto, F. ; Pardubská, D. ; Plátek, Martin
We study h-lexicalized two-way restarting automata that can rewrite at most i times per cycle for some i ≥ 1 (hRLWW(i)-automata). This model is useful for the study of lexical (syntactic) disambiguation, a concept from linguistics that is based on certain reduction patterns. We study lexical disambiguation through the formal notion of h-lexicalized syntactic analysis (hLSA). An hLSA is composed of a basic language and the corresponding h-proper language, which is obtained from the basic language by mapping all basic symbols to input symbols. We stress the sensitivity of hLSA by hRLWW(i)-automata to the size of their windows, to the number of possible rewrites per cycle, and to the degree of (non-)monotonicity. We introduce the concepts of contextually transparent languages (CTL) and contextually transparent lexicalized analyses based on very special reduction patterns, and we present two-dimensional hierarchies of their subclasses based on the size of windows and on the degree of synchronization. The bottoms of these hierarchies correspond to the context-free languages. CTL forms a proper subclass of the context-sensitive languages with syntactically natural properties.
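The core mechanism shared by restarting automata, analysis by reduction, can be illustrated with a toy example (this sketch is not from the record; the language {aⁿbⁿ} and the single-rewrite cycle are a standard textbook illustration):

```python
def analysis_by_reduction(word: str, window: int = 2) -> bool:
    """Toy restarting automaton for {a^n b^n : n >= 0}: in each cycle it
    scans the word through a fixed-size window, performs at most one
    rewrite (here i = 1, deleting one 'ab' factor), then restarts on the
    shortened word; it accepts when the word is reduced to the empty word."""
    while word:
        for pos in range(len(word) - window + 1):
            if word[pos:pos + window] == "ab":            # window matches a reduction pattern
                word = word[:pos] + word[pos + window:]   # rewrite: delete the factor
                break                                     # restart: begin a new cycle
        else:
            return False  # nonempty word with no applicable reduction: reject
    return True           # reduced to the empty word: accept

print(analysis_by_reduction("aaabbb"))  # True
print(analysis_by_reduction("aabbb"))   # False
```

Each cycle strictly shortens the word, so the procedure always terminates; the "degree of rewrites per cycle" mentioned in the abstract generalizes the single deletion step used here.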
Reducing Automata and Syntactic Errors
Procházka, Martin ; Plátek, Martin (advisor) ; Pardubská, Dana (referee) ; Průša, Daniel (referee)
This thesis deals with reducing automata, their normalization, and their application to (robust) analysis by reduction and to the localization of syntactic errors for deterministic context-free languages (DCFL). A reducing automaton is similar to a restarting automaton, with two subtle differences: an explicit marking of reduced symbols (which makes it possible to determine the position of an error accurately), and the placement of the lookahead window inside the control unit (which brings reducing automata closer to the devices of classical automata and formal language theory). In the case of reducing automata, it is easier to adopt and reuse notions and approaches developed within the classical theory, e.g., prefix correctness or automata minimization. For any nonempty deterministic context-free language specified by a monotone reducing automaton that is both prefix correct and minimal, we propose a method of robust analysis by reduction which ensures the localization of formally defined types of (real) errors, correct subwords, and subwords causing reduction conflicts (i.e., subwords with ambiguous syntactic structure that can be reduced in different ways in different words). We implement the proposed method by a new type of device (called a postprefix robust analyzer) and briefly show how to implement this method by a deterministic pushdown...
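The idea of localizing errors by marking what survives reduction can be sketched in miniature (a hypothetical illustration, not the thesis's construction: the bracket language and the position-tracking scheme are assumptions chosen for brevity):

```python
def localize_errors(word: str) -> list[int]:
    """Toy robust analysis by reduction over the bracket language:
    repeatedly delete matched '()' pairs, one reduction per cycle, while
    tracking each symbol's original position. Symbols that no reduction
    can remove remain at the end; their positions localize the errors."""
    items = list(enumerate(word))  # pairs (original position, symbol)
    reduced = True
    while reduced:
        reduced = False
        for i in range(len(items) - 1):
            if items[i][1] == "(" and items[i + 1][1] == ")":
                del items[i:i + 2]   # rewrite: delete one matched pair
                reduced = True
                break                # restart on the shortened word
    return [pos for pos, _ in items]  # empty list: the word is correct

print(localize_errors("(())"))    # [] -- syntactically correct
print(localize_errors("(()))("))  # [4, 5] -- the unmatched ')' and '('
```

Explicitly marking which symbols were reduced, rather than discarding them, is what lets the surviving positions be reported against the original input, mirroring the accurate error localization the abstract attributes to reducing automata.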
