National Repository of Grey Literature: 45 records found, displaying records 21 - 30.
Repository for results of association rules data mining tasks in SEWEBAR project
Marek, Tomáš ; Šimůnek, Milan (advisor) ; Svátek, Vojtěch (referee)
This diploma thesis deals with the design and implementation of the I:ZI Repository application. The application manages a repository of data mining tasks and their results and provides functions for searching this repository. I:ZI Repository is a REST API built on Java EE technology; the data mining tasks are stored in a Berkeley DB XML database. The application was created as a successor to the XQuery Search application: its structure is completely new, but all functionality of the original application is preserved. In addition, support for more general search queries was added, together with fuzzy search approaches and the possibility of clustering search results. Enhanced logging of application activity, aimed at recording incoming search queries and outgoing search results, is part of the implementation. Results of application testing are included as well.
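As a rough illustration of the architecture described above, the following sketch shows what a REST search endpoint over an XML repository of mining tasks might look like in Java EE (JAX-RS). The resource path, query parameter and XQuery text are assumptions made for the example, not the actual I:ZI Repository API.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.QueryParam;
    import javax.ws.rs.core.MediaType;

    // Hypothetical JAX-RS resource: a GET /tasks?q=... endpoint that would
    // search stored data mining tasks. Illustrative only.
    @Path("/tasks")
    public class TaskSearchResource {

        @GET
        @Produces(MediaType.APPLICATION_XML)
        public String search(@QueryParam("q") String term) {
            // In a real application the query would run against a Berkeley DB XML
            // container; here we only assemble the XQuery text (binding the search
            // term as an external variable rather than concatenating it in).
            String xquery =
                "declare variable $q external; " +
                "for $t in collection('tasks.dbxml')//task " +
                "where contains($t/name/text(), $q) " +
                "return $t";
            return "<xquery>" + xquery + "</xquery>"; // placeholder response
        }
    }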
Extracting Structured Data from Czech Web Using Extraction Ontologies
Pouzar, Aleš ; Svátek, Vojtěch (advisor) ; Labský, Martin (referee)
The presented thesis deals with the task of automatic information extraction from HTML documents for two selected domains: laptop offers are extracted from e-shops, and freely published job offers are extracted from company websites. The extraction process outputs structured data of high granularity grouped into data records, in which a corresponding semantic label is assigned to each data item. The task was performed using the extraction system Ex, which combines two approaches: manually written rules and supervised machine learning algorithms. Thanks to expert knowledge in the form of extraction rules, the lack of training data could be overcome. The rules are independent of the specific formatting structure, so a single extraction model can be used for a heterogeneous set of documents. The success achieved in the case of laptop offers showed that an extraction ontology describing one or a few product types could be combined with wrapper induction methods to automatically extract offers for all product types on a web scale with minimal human effort.
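To make the idea of formatting-independent extraction rules concrete, here is a toy sketch in Java. It does not use the Ex rule syntax; the regular expression, sample offer and attribute name are invented for the example. A pattern pulls the RAM size out of an offer's text regardless of the surrounding HTML structure.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy rule-based attribute extraction: find a RAM specification in free text.
    public class RamExtractor {
        private static final Pattern RAM =
            Pattern.compile("(\\d+)\\s*(GB|MB)\\s*(RAM|DDR\\d?)", Pattern.CASE_INSENSITIVE);

        public static void main(String[] args) {
            String offer = "Acer Aspire 5742G, Intel Core i3, 4 GB DDR3, 500 GB HDD";
            Matcher m = RAM.matcher(offer);
            if (m.find()) {
                // The matched value would then receive the semantic label "ram".
                System.out.println("ram = " + m.group(1) + " " + m.group(2));
            }
        }
    }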
Semantic support in CMS Drupal
Ivančo, Daniel ; Svátek, Vojtěch (advisor) ; Hazucha, Andrej (referee)
The aim of this diploma thesis is to map the semantic features of CMS Drupal version 7. The first part of the work describes the theory of the Semantic Web and CMS Drupal. The second, practical part maps in detail all the Semantic Web features supported by Drupal, from two points of view: implementation and functional. The main contribution of this work is the method used to map these features, which is based on modifying and reviewing the code of Drupal plugins in order to uncover or demonstrate features that are not necessarily completely documented or functional. All of these features are demonstrated on examples created as part of this thesis. Finally, the last part of the work compares the mapped features with similar CMS systems.
Improving the Efficiency of Prevention in Telemedicine
Nálevka, Petr ; Svátek, Vojtěch (advisor) ; Berka, Petr (referee) ; Štěpánková, Olga (referee) ; Šárek, Milan (referee)
This thesis employs data-mining techniques and modern information and communication technology to develop methods which may improve the efficiency of prevention-oriented telemedical programs. In particular, the thesis uses the ITAREPS program as a case study and demonstrates that an extension of the program based on the proposed methods may significantly improve its efficiency. ITAREPS itself is a state-of-the-art telemedical program operating since 2006. It has been deployed in 8 different countries around the world, and in the Czech Republic alone it has helped prevent schizophrenic relapse in over 400 participating patients. The outcomes of this thesis are widely applicable not only to schizophrenic patients but also to other psychotic or non-psychotic diseases which follow a relapsing course and satisfy certain preconditions defined in the thesis. Two main areas of improvement are proposed. First, the thesis studies various temporal data-mining methods to improve relapse prediction based on the history of diagnostic data. Second, the latest telecommunication technologies are used to improve the quality of the gathered diagnostic data directly at the source.
Evaluation of the quality of medical websites in the MedIEQ project
Svátek, Vojtěch
Internet resources currently play an important role both in the selection of health-care procedures and in the evaluation of their quality. Evaluating the quality of medical information on websites is an activity of a number of rating agencies, typically connected with national and international medical associations. One of the projects dealing with this problem is the MedIEQ project.
Programming tools for creating expert systems
Hrbek, Filip ; Berka, Petr (advisor) ; Svátek, Vojtěch (referee)
This bachelor thesis explores the range of programming tools for creating expert systems and compares these tools according to established criteria. The document is divided into a theoretical and a practical part. The theoretical part describes the topic of expert systems, including a division of programming tools into general programming languages, programming languages for artificial intelligence, and development toolkits for expert systems. This gives the reader a general model against which the practical applications are compared. In the practical part I describe the established criteria and the individual programming tools; the information about them comes from manuals, tutorials, manufacturer documentation and my own testing of the tools. In conclusion I compare the tools in tables. I have chosen only a few examples of programming tools, because the range on today's market is too wide.
Semantics in Multimedia: Event detection and cross-media feature extraction
Nemrava, Jan ; Svátek, Vojtěch (advisor) ; Berka, Petr (referee) ; Smrž, Pavel (referee)
This dissertation thesis describes multimedia semantics, a research area that brings together research streams which until recently ran separately. The aim of the work is to provide an insight into all areas of this wide discipline and an outlook on current problems, especially the semantic gap phenomenon. A number of findings and outcomes in this work come from the international project K-Space, in which the author took part for three years. The extensive theoretical introduction is followed by a survey of state-of-the-art applications in this area and an overview of KIZI activities and involvement in the European project. The contribution of the work is research on textual resources complementary to video, together with experiments on automatic detection of sporting events based on pre-classified examples and a trained model. A practical contribution is also a demo web application that presents all the resources together and allows non-linear browsing of events.
Ontology Matching and its Evaluation Using Patterns
Zamazal, Ondřej ; Svátek, Vojtěch (advisor) ; Vacura, Miroslav (referee) ; Pokorný, Jaroslav (referee) ; Štuller, Július (referee)
Ontology Matching has been one of the hottest topics within the Semantic Web in recent years, and there is still ample space for improvement in terms of performance. Furthermore, current ontology matchers mostly concentrate on simple entity-to-entity matching; matching whole structures, however, could bring additional complex relationships. These structures of ontologies can be captured as ontology patterns. The main theme of this thesis is an examination of pattern-based ontology matching enhanced with ontology transformation, and of pattern-based ontology alignment evaluation. The former is examined for its potential benefits for complex matching and for matching as such; the latter because complex hypotheses could provide beneficial feedback complementing traditional evaluation methods. These two tasks relate to four topics: ontology patterns, ontology transformation, ontology alignment evaluation and ontology matching. With regard to those four topics, this work covers the following aspects:

* Examination of different aspects of ontology patterns, in particular a description of the ontology patterns relevant for ontology transformation and ontology matching (naming, matching and transformation patterns).
* Description of a pattern-based method for ontology transformation.
* Introduction of new methods for alignment evaluation, including the use of patterns as complex structures for more detailed analysis.
* Experiments and demonstrations of the new concepts introduced in this thesis.

The thesis first introduces the classification of naming patterns and matching patterns on which the ontology transformation framework is based. Naming patterns are useful for detecting ontology patterns and for generating new names for entities. Matching patterns are the basis for transformation patterns in that they share some building blocks; in comparison with matching patterns, transformation patterns additionally have transformation links that represent how parts of ontology patterns are transformed. Besides several evaluations and implementations, the thesis demonstrates obtaining complex matches through the ontology transformation process. The ontology transformation framework has been implemented in the Java environment, where all generic patterns are represented as corresponding Java objects. Three main implemented services are made generally available as RESTful services: ontology pattern detection, transformation instruction generation and ontology transformation.
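As a small, purely illustrative example of the naming-pattern idea (the class name, regular expression and generated names below are invented for this sketch and are not taken from the thesis framework), a CamelCase class name can be split into a modifier and a head noun, from which a transformation can propose a new class and property:

    import java.util.Arrays;
    import java.util.List;

    // Toy naming-pattern detection: "AcceptedPaper" -> head noun "Paper"
    // plus modifier "accepted", usable as a new class and property name.
    public class NamingPatternDemo {
        static List<String> splitCamelCase(String name) {
            return Arrays.asList(name.split("(?<=[a-z])(?=[A-Z])"));
        }

        public static void main(String[] args) {
            List<String> tokens = splitCamelCase("AcceptedPaper");
            String headNoun = tokens.get(tokens.size() - 1);
            String modifier = String.join(" ", tokens.subList(0, tokens.size() - 1));
            System.out.println("New class: " + headNoun);               // Paper
            System.out.println("New property: " + modifier.toLowerCase()); // accepted
        }
    }

The framework itself works with generic pattern objects rather than ad-hoc string splitting; the sketch only conveys the intuition behind naming patterns.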
Evaluation of Semantic Applications for the Enterprise Environment
Nekvasil, Marek ; Svátek, Vojtěch (advisor) ; Novotný, Ota (referee) ; Mikulecký, Peter (referee) ; Paralič, Ján (referee)
Semantic technologies have recently been deployed in broader and broader application areas, and their scope keeps increasing. The possibilities of semantic applications are now so vast that they can no longer be judged as a single market segment. These differences only augment the business skepticism that arises from the uncertainty of investments in such technologies. Picking up on that, this thesis concentrates on the aspects that can establish and evaluate not only the economic efficiency of engaging semantic technologies in a business environment but also the effectiveness of doing so. The work concentrates on ways to demonstrate the differences among semantic applications, to define their distinct segments based on use cases, and subsequently to identify their Critical Success Factors and evaluate them against the real conditions of application deployment, with the participation of people involved in their development. Following the results of these interactions, the thesis presents an innovative approach to constructing models for judging the maturity of enterprises for the deployment of the respective applications, including the actual construction of these models for all the identified use-case segments. In a later part of the work, evaluation using these models is demonstrated on the AQUA application (the outcome of a project in which the author personally took part), along with additional specifics that may help the timely assessment of semantics in certain cases. The results presented in the later chapters are supported by background research in the fields of semantic technologies and IT assessment, whose state-of-the-art methods are described here. The usability of current standardized methods (such as those used in COBIT) for assessing semantic applications is also considered, given the lack of other best practices in the business deployment of semantics.
Fuzzy GUHA
Ralbovský, Martin ; Rauch, Jan (advisor) ; Svátek, Vojtěch (referee) ; Holeňa, Martin (referee) ; Vojtáš, Peter (referee)
The GUHA method is one of the oldest methods of exploratory data analysis and is regarded as part of the data mining, or knowledge discovery in databases (KDD), scientific area. Unlike many other methods of data mining, the GUHA method has firm theoretical foundations in logic and statistics. Within the method, finding interesting knowledge corresponds to finding special formulas in a sufficiently rich logical calculus, called observational calculus. The main topic of the thesis is the application of the "fuzzy paradigm" to the GUHA method. By the term "fuzzy paradigm" we mean approaches that use many-valued membership degrees or truth values, namely fuzzy set theory and fuzzy logic. The thesis does not aim to cover all aspects of this application; it emphasises mainly:

- Association rules as the most prevalent type of formulas mined by the GUHA method
- Usage of fuzzy data
- Logical aspects of fuzzy association rule mining
- Comparison of the GUHA theory with mainstream fuzzy association rules
- Implementation of the theory using the bit string approach

The thesis thoroughly elaborates the theory of fuzzy association rules, using the theoretical apparatus of both fuzzy set theory and fuzzy logic. Fuzzy set theory is used mainly to compare the GUHA method with existing mainstream approaches to formalizing fuzzy association rules, which were studied in detail. Fuzzy logic is used to define a novel class of logical calculi called logical calculi of fuzzy association rules (LCFAR) for the logical representation of fuzzy association rules. The problem of the existence of deduction rules in LCFAR is dealt with in depth. A suitable part of the proposed theory is implemented in the Ferda system using the bit string approach. In this approach, characteristics of the examined objects are represented as strings of bits, which in the crisp case enables efficient computation. In order to preserve this property in the fuzzy case as well, thorough low-level testing of data structures and algorithms for fuzzy bit strings has been carried out as part of the thesis.
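As a schematic illustration of the bit string approach (this is not the Ferda implementation; the choice of the minimum t-norm and the example data are arbitrary assumptions for the sketch), the support of an association rule can be computed from bit strings in the crisp case and from arrays of membership degrees in the fuzzy case:

    import java.util.BitSet;

    // Each attribute-value pair is represented as a string of bits over the
    // examined objects; rule support combines the antecedent and succedent strings.
    public class BitStringSupport {

        // Crisp case: support = |antecedent AND succedent| / n
        static double crispSupport(BitSet antecedent, BitSet succedent, int n) {
            BitSet both = (BitSet) antecedent.clone();
            both.and(succedent);
            return (double) both.cardinality() / n;
        }

        // Fuzzy case (one common choice): membership degrees in [0,1],
        // combined with the minimum t-norm and averaged over all objects.
        static double fuzzySupport(double[] antecedent, double[] succedent) {
            double sum = 0.0;
            for (int i = 0; i < antecedent.length; i++) {
                sum += Math.min(antecedent[i], succedent[i]);
            }
            return sum / antecedent.length;
        }

        public static void main(String[] args) {
            BitSet a = new BitSet(); a.set(0); a.set(2); a.set(3);
            BitSet s = new BitSet(); s.set(2); s.set(3); s.set(4);
            System.out.println(crispSupport(a, s, 5)); // 0.4
            System.out.println(fuzzySupport(
                new double[]{1.0, 0.2, 0.8, 0.7, 0.0},
                new double[]{0.0, 0.5, 0.9, 0.6, 1.0})); // 0.32
        }
    }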
