National Repository of Grey Literature : 44 records found (records 11 - 20). Search took 0.00 seconds.
Process Mediation Framework for Semantic Web Services
Vaculín, Roman ; Neruda, Roman (advisor) ; Nečaský, Martin (referee) ; Svátek, Vojtěch (referee)
The goal of Web services is to enable interoperability of heterogeneous software systems. Semantic Web services enhance the syntactic specifications of traditional Web services with machine-processable semantic annotations to facilitate interoperability. As Web services become popular in both corporate and open environments, the ability to deal with incompatibilities between service requesters and providers becomes a critical factor for achieving interoperability. Process mediation solves the problem of interoperability by identifying and resolving all incompatibilities and by mediating between service requesters and providers. In this thesis we address the problem of process mediation of Semantic Web services. We introduce an Abstract Process Mediation Framework that identifies the key functional areas to be addressed by process mediation components. Specifically, we focus on process mediation algorithms, discovery of external services, monitoring, and fault handling and recovery. We present algorithms for solving the process mediation problem in two scenarios: (a) when the mediation process has complete visibility of the process models of both the service provider and the service requester (complete visibility scenario), and (b) when the mediation process has visibility only of the process model of the service provider but...
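The mediation idea can be illustrated with a toy sketch (this is not the thesis's algorithms): when both process models are visible, a mediator can translate each requester message into the provider's vocabulary and fail loudly on an unresolvable incompatibility. The message names and the mapping are invented for illustration.

```python
# Hypothetical mapping between a requester's and a provider's message
# vocabularies; in a real mediator this would be derived from the two
# process models.
REQUESTER_TO_PROVIDER = {
    "placeOrder": "createOrder",
    "payOrder": "submitPayment",
}

def mediate(requester_messages):
    """Translate requester messages into the provider's vocabulary,
    raising on a message no mediation rule covers."""
    translated = []
    for msg in requester_messages:
        if msg not in REQUESTER_TO_PROVIDER:
            raise ValueError(f"no mediation rule for {msg!r}")
        translated.append(REQUESTER_TO_PROVIDER[msg])
    return translated

print(mediate(["placeOrder", "payOrder"]))
```

In the limited-visibility scenario (b), such a static table would not suffice; the mediator must infer the requester's side at run time.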
Knowledge Systems on the Semantic Web
Pinďák, Josef ; Zamazal, Ondřej (advisor) ; Svátek, Vojtěch (referee)
The aim of this thesis is to understand the problematics of knowledge systems on the semantic web: to analyze the case studies and usage examples presented on the W3C web portal and to create a knowledge base for the expert system NEST, which, based on an analysis of a case study or usage example, recommends its classification to a particular type of knowledge system. The aim is achieved by analyzing the case studies and usage examples and deriving criteria from them; these criteria are passed to the expert system, which then recommends a classification for the given case study or usage example. The knowledge base also draws on data from dissertation [1], where knowledge bases are already separated into individual categories. The benefit of this work is the creation of an expert system that allows the user to get a recommendation on how to determine the type of an unknown knowledge system and its field of potential usage. The first two chapters are dedicated to the theory of knowledge systems and the semantic web. The third chapter explains the steps of the analysis, and the fourth chapter is dedicated to the creation of the knowledge base for NEST.
Ontology of Building Accessibility
Hazuza, Petr ; Svátek, Vojtěch (advisor) ; Mynarz, Jindřich (referee)
Within the project Maps without Barriers, realized under the Charta 77 Foundation - Barriers Account, we intend in 2015 to map the accessibility of buildings and their premises from the perspective of people with limited mobility. We plan to inspect nearly 600 castles, palaces and other tourist attractions in the Czech Republic. The acquired data will be gathered and published as an on-line map in the form of open, machine-readable data. It will also appear as Linked Open Data. However, the project will not end with mapping premises; the main objective is to provide a solid foundation for a unified database of building accessibility. Negotiations with institutions and organizations interested in mapping are in progress, and we offer them our project platform for publishing their data. The required RDFS vocabulary will be designed and implemented as part of this diploma thesis. It will be tested on data from a number of forms describing existing objects. The data will be gathered by means of services designed within this thesis and provided to purchasers and users alike.
Association rule mining as a support for OLAP
Chudán, David ; Svátek, Vojtěch (advisor) ; Máša, Petr (referee) ; Novotný, Ota (referee) ; Kléma, Jiří (referee)
The aim of this work is to identify the possibilities of complementary usage of two analytical methods of data analysis: OLAP analysis and data mining represented by GUHA association rule mining. Using these two methods on one dataset, in the context of the proposed scenarios, promises a synergistic effect surpassing the knowledge acquired by either method independently. This is the main contribution of the work. Another contribution is the original use of GUHA association rules, where mining is performed on aggregated data. In their abilities, GUHA association rules outperform the classic association rules referred to in the literature. The experiments on real data demonstrate the finding of unusual trends that would be very difficult to discover using the standard method of OLAP analysis, i.e. time-consuming manual browsing of an OLAP cube. On the other hand, the actual use of association rules loses the general overview of the data. It is possible to declare that these two methods complement each other very well. Part of the solution is also the use of the LISp-Miner Control Language (LMCL), a scripting language that automates selected parts of the data mining process. The proposed recommender system would shield the user from the association rules themselves, enabling analysts unfamiliar with association rules to use their possibilities. The thesis combines quantitative and qualitative research. Quantitative research is represented by experiments on a real dataset, the proposal of the recommender system, and the implementation of selected parts of the association rule mining process in LMCL. Qualitative research is represented by structured interviews with selected experts from the fields of data mining and business intelligence, who confirm the meaningfulness of the proposed methods.
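The core quantities behind association rules can be sketched in a few lines (a minimal illustration, not GUHA or LISp-Miner): support is the fraction of rows matching all attribute-value pairs of a rule, and confidence is the conditional probability of the consequent given the antecedent. The toy rows and dimension names below are invented.

```python
# Toy rows standing in for (already aggregated) OLAP-cube records.
rows = [
    {"region": "north", "product": "A", "sales": "high"},
    {"region": "north", "product": "A", "sales": "high"},
    {"region": "north", "product": "B", "sales": "low"},
    {"region": "south", "product": "A", "sales": "low"},
    {"region": "south", "product": "B", "sales": "high"},
    {"region": "south", "product": "B", "sales": "high"},
]

def support(items):
    """Fraction of rows containing every (attribute, value) pair in items."""
    hits = sum(all(r[k] == v for k, v in items) for r in rows)
    return hits / len(rows)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated over the rows."""
    return support(antecedent + consequent) / support(antecedent)

# Rule: region=north & product=A  =>  sales=high
ant = [("region", "north"), ("product", "A")]
con = [("sales", "high")]
print(support(ant + con), confidence(ant, con))
```

GUHA quantifiers generalize exactly these two measures with richer four-fold-table criteria, which is what gives them the extra expressiveness the abstract mentions.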
Modelling events on the semantic web
Hanzal, Tomáš ; Svátek, Vojtěch (advisor) ; Vacura, Miroslav (referee)
There are many ontologies and datasets on the semantic web that mention events. Events are important in our perception of the world and in our descriptions of it, and therefore also on the semantic web. There is, however, no single best way to model them. This is connected to the fact that even the question of what events are can be approached in different ways. Our aim is to better understand how events are represented on the semantic web and how this could be improved. To this end we first turn to the ways events are treated in philosophy and in foundational ontologies. We ask questions such as what sorts of things we call events, what ontological status we assign to events, and whether and how events can be distinguished from other entities such as situations. Then we move on to an empirical analysis of particular semantic web ontologies for events. In this analysis we find what kinds of things are usually called events on the semantic web (and what kinds of events there are). We use the findings from the philosophy of events to critically assess these ontologies, show their problems and indicate possible paths to their solution.
Bulk extraction of public administration data to RDF
Pomykacz, Michal ; Svátek, Vojtěch (advisor) ; Mynarz, Jindřich (referee)
The purpose of this work was to deal with data extraction from various formats (HTML, XML, XLS) and its transformation for further processing. Czech public contracts and the related code lists and classifications were used as data sources. The main goal was to implement periodic data extraction, RDF transformation, and publication of the output as Linked Data via a SPARQL endpoint. It was necessary to design and implement extraction modules for the UnifiedViews tool, which was used for the periodic extractions. The theoretical section of this thesis explains the principles of linked data and the key tools used for data extraction and manipulation. The practical section deals with the design and implementation of the extractors. The part describing the extractor implementation shows methods for parsing data in the various dataset formats and transforming it to RDF. The success of each extractor implementation is assessed in the conclusion, along with thoughts on its usability in the real world.
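The transformation step can be sketched as follows (a minimal illustration, not the UnifiedViews modules from the thesis): one tabular contract record becomes RDF triples serialized as N-Triples. The base URI, the `amount` property, and the field names are invented for illustration; only `dcterms:title` and the XSD datatype IRI are standard.

```python
# Hypothetical namespace for the illustration.
BASE = "http://example.org/contract/"

def row_to_ntriples(row):
    """Serialize one contract row as N-Triples (one triple per line)."""
    subject = f"<{BASE}{row['id']}>"
    triples = [
        f'{subject} <http://purl.org/dc/terms/title> "{row["title"]}" .',
        f'{subject} <http://example.org/vocab#amount> '
        f'"{row["amount"]}"^^<http://www.w3.org/2001/XMLSchema#decimal> .',
    ]
    return "\n".join(triples)

print(row_to_ntriples({"id": "C42", "title": "Road repair", "amount": "100000"}))
```

A real pipeline would additionally escape literals and stream rows, but the shape of the mapping (row field to predicate, typed literal for numbers) is the same.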
The use of linked open data for strategic knowledge game creation
Turečková, Šárka ; Svátek, Vojtěch (advisor) ; Zeman, Václav (referee)
The general theme of this thesis is the use of linked open data for game creation. Specifically, it addresses the use of DBpedia for automatic generation of questions suitable for games. Suitable ways of selecting the desired objects from DBpedia and of obtaining and processing relevant information about them are proposed, including a method for estimating the renown of individual objects. Some of the methods are then applied to create a program that generates questions from data obtained from DBpedia during the run of the application. The real possibility of using these DBpedia-generated questions for gaming purposes is subsequently demonstrated by the design, prototype and tests of a strategic multiplayer knowledge game. The thesis also summarizes the major issues and possible complications of using data obtained through the DBpedia or DBpedia Live endpoints. Current challenges and opportunities for the mutual utilization of games and LOD are also briefly discussed.
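The question-generation idea can be sketched offline (a hedged illustration, not the thesis's program): given facts of the kind one might fetch from DBpedia, build a multiple-choice question by taking the true value as the answer and other subjects' values as distractors. The fact table here is a hard-coded stand-in for a SPARQL query result.

```python
import random

# Offline stand-in for facts fetched from DBpedia (e.g. dbo:capital).
facts = {
    "France": "Paris",
    "Germany": "Berlin",
    "Italy": "Rome",
    "Spain": "Madrid",
}

def make_question(subject, rng):
    """Build one multiple-choice question with two distractors."""
    answer = facts[subject]
    distractors = rng.sample(
        [v for k, v in facts.items() if k != subject], 2)
    options = distractors + [answer]
    rng.shuffle(options)
    return {"question": f"What is the capital of {subject}?",
            "options": options, "answer": answer}

q = make_question("France", random.Random(0))
print(q["question"], q["options"])
```

A renown estimate (e.g. based on link counts) would then filter `facts` so that only widely known subjects produce questions, which is the selection problem the abstract describes.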
Extraction of unspecified relations from the web
Ovečka, Marek ; Svátek, Vojtěch (advisor) ; Labský, Martin (referee)
The subject of this thesis is non-specific (open) knowledge extraction from the web. In recent years, tools that improve the results of this type of knowledge extraction have been created. The aim of this thesis is to become familiar with these tools, test them, and propose uses for their results. The tools are described and compared, and extraction is carried out using OLLIE. Based on the results of the extractions, two methods of enriching the extractions using named entity recognition are proposed. The first method modifies the weights of the extractions; the second enriches the extractions with named entities. The thesis proposes an ontology that captures the structure of the enriched extractions. In the last part a practical experiment is carried out in which the proposed methods are demonstrated. Future research in this field would be useful in the areas of extraction and categorization of relational phrases.
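The first enrichment method can be sketched as follows (a toy illustration under invented assumptions, not the thesis's actual weighting scheme): boost an open IE extraction's confidence when both arguments look like named entities. The capitalization heuristic below is a stub for a real NER tool, and the boost factor is arbitrary.

```python
def is_named_entity(phrase):
    # Stub NER: treat a phrase as an entity if every word is capitalized.
    # A real system would call an NER tool instead.
    return all(word[:1].isupper() for word in phrase.split())

def reweight(extraction):
    """Boost an (arg1, relation, arg2, confidence) extraction when both
    arguments are recognized as named entities; cap confidence at 1.0."""
    arg1, rel, arg2, conf = extraction
    boost = 1.2 if is_named_entity(arg1) and is_named_entity(arg2) else 1.0
    return (arg1, rel, arg2, min(1.0, conf * boost))

print(reweight(("Barack Obama", "was born in", "Honolulu", 0.8)))
```

The second method (attaching the recognized entities themselves to the extraction) would extend the tuple rather than adjust its weight.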
Background annotation of entities in Linked Data vocabularies
Serra, Simone ; Svátek, Vojtěch (advisor) ; Zamazal, Ondřej (referee)
One of the key features behind Linked Data is the use of vocabularies that allow datasets to share a common language for describing similar concepts and relationships and for resolving ambiguities between them. The development of vocabularies is often driven by a consensus process among dataset implementers, in which the criterion of interoperability is considered sufficient. This can lead to misrepresentation of real-world entities by Linked Data vocabulary entities. Such drawbacks can be remedied by a formal methodology for modelling Linked Data vocabulary entities and identifying ontological distinctions; one proven example is the OntoClean methodology for curing taxonomies. This work presents a software tool that implements the PURO approach to modelling ontological distinctions. PURO models vocabularies as Ontological Foreground Models (OFM) and the structure of ontological distinctions as Ontological Background Models (OBM), constructed by attaching meta-properties to vocabulary entities in a process known as vocabulary annotation. The tool, named the Background Annotation plugin, written in Java and integrated into the Protégé ontology editor, enables a user to graphically annotate vocabulary entities through an annotation workflow that provides, among other things, persistence and retrieval of annotations. Two kinds of workflow are supported, generic and dataset-specific, in order to differentiate a vocabulary's usage, in terms of a PURO OBM, with respect to a given Linked Data dataset. The workflow is enhanced by dataset statistical indicators retrieved through the Sindice service for a sample of chosen datasets, such as the number of entities present in a dataset and the relative frequency of vocabulary entities in that dataset. A further enhancement is provided by dataset summaries that offer an overview of the most common entity-property paths found in a dataset.
Foreseen uses of the Background Annotation plugin include: 1) checking mapping agreement between different datasets, as produced by the R2R framework, and 2) annotating dependent resources in Concise Bounded Descriptions of entities, used when sampling data from Linked Data datasets for data mining purposes.
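The annotation workflow's core data model can be sketched as follows (a minimal illustration; the plugin is a Java/Protégé component, and the storage format here is invented). The sketch records PURO meta-properties for vocabulary entities, with an optional dataset tag to separate generic from dataset-specific annotations.

```python
# In-memory store mapping an entity IRI to its recorded annotations.
annotations = {}

def annotate(entity_iri, meta_property, dataset=None):
    """Record a PURO meta-property (e.g. "B-type" vs. "B-object") for an
    entity; `dataset` marks a dataset-specific rather than generic
    annotation, mirroring the plugin's two workflows."""
    annotations.setdefault(entity_iri, []).append(
        {"meta": meta_property, "dataset": dataset})

annotate("http://xmlns.com/foaf/0.1/Person", "B-type")
annotate("http://xmlns.com/foaf/0.1/Person", "B-object", dataset="dbpedia")
print(annotations["http://xmlns.com/foaf/0.1/Person"])
```

The point of the dataset tag is that the same vocabulary entity can play different ontological roles in different datasets, which is exactly the distinction the generic vs. dataset-specific workflows capture.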
Repository for results of association rules data mining tasks in SEWEBAR project
Marek, Tomáš ; Šimůnek, Milan (advisor) ; Svátek, Vojtěch (referee)
This diploma thesis aims at the design and implementation of the I:ZI Repository application, which provides management of a repository of data mining tasks and their results, together with functions for searching that repository. I:ZI Repository is a REST API built on Java EE technology; a Berkeley DB XML database is used for storing the data mining tasks. The application was created on the basis of the XQuery search application: its structure is completely new, but all functionality of the XQuery search application is preserved. Support for more general search queries was added, as well as fuzzy approaches to searching and the possibility of clustering search results. Enhanced logging of application activities, aimed at logging incoming search queries and outgoing search results, is part of the implementation. Results of application testing are included as well.
