National Repository of Grey Literature : 46 records found
Machine Learning for Natural Language Question Answering
Sasín, Jonáš ; Fajčík, Martin (referee) ; Smrž, Pavel (advisor)
This thesis deals with natural language question answering using the Czech Wikipedia. Question answering systems are growing in popularity, but most of them are developed for English. The main purpose of this work is to explore the available approaches and datasets and to create such a system for Czech. In the thesis I focused on two approaches: one uses the English ALBERT model together with machine translation of passages, the other utilizes multilingual BERT. Several variants of the system are compared in this work, and options for relevant passage retrieval are also discussed. Standard evaluation is provided for every tested variant of the system. The best system version has been evaluated on the SQAD v3.0 dataset, reaching 0.44 EM and 0.55 F1 score, which is an excellent result compared to other existing systems. The main contribution of this work is the analysis of existing options and the setting of a benchmark for further development of better systems for Czech.
Identifying Entity Types Based on Information Extraction from Wikipedia
Rusiňák, Petr ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
This paper presents a system for identifying the entity types of Wikipedia articles (e.g. people or sports events) that can be used for the identification of any arbitrary entity. The input files for this system are a list of several pages that belong to the entity and a list of several pages that do not. These lists are used to generate features from which the list of all pages belonging to the entity can be derived. The features can be based both on structured information on Wikipedia, such as templates and categories, and on unstructured information found by analysing the natural-language text of the article's first sentence, where a defining noun representing what the article is about is located. The system supports pages written in Czech and English and can be extended to support other languages.
Georeferenced Data Visualization on Web-Based Map Interface
Růžička, Štěpán ; Polok, Lukáš (referee) ; Bartoň, Radek (advisor)
The master's thesis is concerned with the design and implementation of a library extending OpenLayers. The solution is implemented in the JavaScript programming language. Part of the thesis is devoted to describing standards for maintaining and transferring geographic information, JavaScript map-presentation libraries, and REST services.
Encyclopedia Expert
Krč, Martin ; Schmidt, Marek (referee) ; Smrž, Pavel (advisor)
This project focuses on a system that answers questions formulated in natural language. Firstly, the report discusses problems associated with question answering systems and some commonly employed approaches. Emphasis is laid on shallow methods, which do not require many linguistic resources. The second part describes our work on a system that answers factoid questions, utilizing Czech Wikipedia as a source of information. Answer extraction is partly based on specific features of Wikipedia and partly on pre-defined patterns. Results show that for answering simple questions, the system provides significant improvements in comparison with a standard search engine.
Information Extraction from Wikipedia
Valušek, Ondřej ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
This thesis deals with automatic extraction of the types of English Wikipedia articles and of their attributes. Several approaches using machine learning are presented. Furthermore, important features such as the date of birth in articles about people, or the area in those about lakes, and many more, are extracted. With the system presented in this thesis, one can generate a well-structured knowledge base from a file with Wikipedia articles (a so-called dump file) and a small training set containing a few correctly classified articles. Such a knowledge base can then be used for semantic enrichment of text. During this process a file with so-called definition words is generated. Definition words are features extracted by natural-text analysis, which could also be used in other ways than in this thesis. There is also a component that can determine which articles were added, deleted, or modified between the creation of two different knowledge bases.
Information Extraction from Wikipedia
Musil, Martin ; Otrusina, Lubomír (referee) ; Schmidt, Marek (advisor)
This bachelor thesis deals with the problem of automatic information extraction from text. The goal is to create an application that captures knowledge from articles on the online information server Wikipedia using extraction patterns. At the beginning we introduce the basic terms of the subject; the main part of the publication is focused on the experiments and above all on the implementation, which is divided into two parts: processing of the text and the subsequent information extraction. The conclusion of the thesis analyses the results of the experiments and the efficiency of the created rules.
Consistency Checking of Relations Extracted from Text
Stejskal, Jakub ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
This bachelor thesis is dedicated to machine techniques used in natural language processing and information extraction from text. It covers the general methods, starting from processing raw text and continuing to the extraction of relations from the processed language constructs, and it presents options for using the obtained relational data, as seen for example in the DBpedia project. Another milestone of the thesis is the design and implementation of an automated system for extracting information about entities that do not have their own article on the English version of Wikipedia. The thesis also presents algorithms developed for the extraction of named entities, the verification of the existence of articles for the extracted entities, and finally the actual extraction of information about individual entities, which can be used during information consistency checking. At the end, the results and suggestions for further development of the created system are presented.
Named Entity Disambiguation in Slovak
Križan, Samuel ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
The thesis deals with the topic of named entity recognition and disambiguation. A basic system was created which includes all the prerequisites necessary for named entity disambiguation in Slovak. Part of the system is the building of a knowledge base from an export of the Slovak Wikipedia. This was subsequently compared to a knowledge base obtained from Wikidata, which revealed that the main contribution of the Wikipedia knowledge base for Slovak is greater coverage of entities with links to the Slovak Wikipedia and better determination of entity classes. Apart from that, the morphological dictionary of the KNOT@FIT research group was updated, which yielded an improvement of 33-39 %. The work outlines possible future extensions of the system with a disambiguation module and enhanced coverage of alternative names.
Authorship and actorship on Czech Wikipedia
Sedláček, Štěpán ; Abu Ghosh, Yasar (advisor) ; Kuřík, Bohuslav (referee)
The author carried out an ethnographic study of Czech Wikipedia in which he mapped human and non-human actors involved in the creation of an internet encyclopedia. As part of this process, he himself became one of the users and reflected how authorship, collective compiling of meanings, and supervision are constructed.
Creating a Bilingual Dictionary using Wikipedia
Ivanova, Angelina ; Zeman, Daniel (advisor) ; Straňák, Pavel (referee)
Title: Creating a Bilingual Dictionary using Wikipedia Author: Angelina Ivanova Department/Institute: Institute of Formal and Applied Linguistics (32-ÚFAL) Supervisor of the master thesis: RNDr. Daniel Zeman Ph.D. Abstract: Machine-readable dictionaries play an important role in the research area of computational linguistics. They have gained popularity in fields such as machine translation and cross-language information extraction. In this thesis we investigate the quality and content of bilingual English-Russian dictionaries generated from the Wikipedia link structure. Wiki-dictionaries differ dramatically from traditional dictionaries: the recall of the basic terminology in Mueller's dictionary was 7.42%. Machine translation experiments with the Wiki-dictionary incorporated into the training set resulted in a rather small but statistically significant drop in translation quality compared to the experiment without the Wiki-dictionary. We supposed that the main reason was the domain difference between the dictionary and the corpus, and obtained some evidence that on a test set collected from Wikipedia articles the model with the incorporated dictionary performed better. In this work we show how big the difference is between dictionaries developed from the Wikipedia link structure and the traditional...
