National Repository of Grey Literature : 31 records found (showing 1 - 10)
Analyzer of Algebraic Expressions
Šudoma, Petr ; Knap, Tomáš (advisor) ; Klímek, Jakub (referee)
The aim of this bachelor's degree project is to analyze the problems of computer simplification of algebraic expressions and to create a program capable of performing such simplifications without human supervision. The first part provides a detailed analysis of expressions and their simplifications, as well as the objectives of such simplifications. It proposes and describes a formal language of expressions and presents a description of its semantics. The second part provides a concrete computer representation of the language and a program capable of simplifying algebraic expressions. It describes the classes and algorithms used, examines the possibilities, and explains the proposed solutions.
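As an illustration of the kind of rule-based simplification the abstract describes, here is a minimal Java sketch over a toy expression tree; the thesis's own formal language and class design are not reproduced, and all names below are illustrative:

    abstract class Expr {}
    final class Num extends Expr {
        final double value;
        Num(double value) { this.value = value; }
    }
    final class Var extends Expr {
        final String name;
        Var(String name) { this.name = name; }
    }
    final class Add extends Expr {
        final Expr left, right;
        Add(Expr left, Expr right) { this.left = left; this.right = right; }
    }

    class Simplifier {
        // Two illustrative rewrite rules: constant folding and x + 0 -> x.
        static Expr simplify(Expr e) {
            if (e instanceof Add) {
                Expr l = simplify(((Add) e).left);
                Expr r = simplify(((Add) e).right);
                if (l instanceof Num && r instanceof Num)
                    return new Num(((Num) l).value + ((Num) r).value); // fold constants
                if (l instanceof Num && ((Num) l).value == 0.0) return r; // 0 + x -> x
                if (r instanceof Num && ((Num) r).value == 0.0) return l; // x + 0 -> x
                return new Add(l, r);
            }
            return e; // numbers and variables are already in simplest form
        }

        public static void main(String[] args) {
            // (x + 0) + (1 + 2) simplifies to x + 3
            Expr result = simplify(new Add(new Add(new Var("x"), new Num(0)),
                                           new Add(new Num(1), new Num(2))));
        }
    }
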
Virtual honeynet with simulated user activity
Kouba, Tomáš ; Kaňkovský, Pavel (advisor) ; Knap, Tomáš (referee)
The goal of the work is to design and implement a honeypot (a trap for attackers) that simulates a working user and other usual system activity convincingly enough to make the honeypot difficult to distinguish from an ordinary system. It keeps a stealth record of the actions of any attackers who attack the honeypot and makes it possible to deploy a whole virtual network of honeypots (a honeynet) on a single host machine. The implementation should be resistant to the well-known techniques used to detect a modified operating system or OS kernel, such as the kstat utility.
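As a rough illustration of simulated user activity (not the thesis's implementation, which the abstract does not detail), a honeypot might replay typical commands at randomized, human-like intervals:

    import java.util.Random;

    class ActivitySimulator {
        private static final String[] COMMANDS = { "ls -la /home", "uptime", "who" };

        public static void main(String[] args) throws Exception {
            Random rnd = new Random();
            while (true) {
                // Run a typical user command, then pause for a human-like interval.
                String cmd = COMMANDS[rnd.nextInt(COMMANDS.length)];
                new ProcessBuilder("sh", "-c", cmd).inheritIO().start().waitFor();
                Thread.sleep(5_000 + rnd.nextInt(30_000)); // 5-35 s, no fixed rhythm
            }
        }
    }
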
Normalisation of data in fulltext system
Kapusta, Matúš ; Lánský, Jan (advisor) ; Knap, Tomáš (referee)
The purpose of this thesis is to design and implement a Java application that processes data from a full-text system into well-formed XML according to the XML 1.0 specification. The input data are stored in XML files containing arbitrary not-well-formed XML, typically HTML content. The major criteria are preserving the structure and the text content as much as possible. The program verifies that namespaces are correct and replaces special HTML entities with the appropriate Unicode characters. The output file must be processable by standard XML parsers. A secondary objective is to investigate and implement a suitable method for identifying the language of the resulting output file as a whole, of the content of individual elements, or of a plain-text file.
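Two of the steps the abstract mentions, entity replacement and a well-formedness check, might look like the following minimal Java sketch; the entity map and class names are illustrative only:

    import javax.xml.parsers.DocumentBuilderFactory;
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;

    class XmlNormalizer {
        private static final Map<String, String> ENTITIES =
                Map.of("&nbsp;", "\u00A0", "&copy;", "\u00A9", "&hellip;", "\u2026");

        // Replace known HTML entities with their Unicode characters.
        static String replaceEntities(String input) {
            for (Map.Entry<String, String> e : ENTITIES.entrySet())
                input = input.replace(e.getKey(), e.getValue());
            return input;
        }

        // Well-formed iff a standard XML parser accepts the document.
        static boolean isWellFormed(String xml) {
            try {
                DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
                return true;
            } catch (Exception ex) {
                return false;
            }
        }

        public static void main(String[] args) {
            String fixed = replaceEntities("<p>menu&nbsp;2010</p>");
            System.out.println(isWellFormed(fixed)); // true
        }
    }
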
Comparison of Fully Software and Hardware Accelerated XML Processing
Knap, Tomáš ; Holubová, Irena (advisor) ; Nečaský, Martin (referee)
The aim of this work is to compare the XML processing abilities of standard software solutions and of hardware-accelerated scenarios using a new generation of XML processing appliances. The emphasis is put on the speed of processing XML documents and on the resource demands of various operations over XML data. Firstly, we describe the XML technologies used and the corresponding implementations in Java. Subsequently, we characterize the core parts of our testing frameworks: IBM WebSphere DataPower Integration Appliance XI50 for hardware-accelerated processing and IBM WebSphere Application Server 6.1 for standard XML processing. Further, a testing hierarchy involving two distinct testing suites, "Flat" and "Onion", and tens of testing scenarios is defined. The "Flat" testing suite covers parsing, validating, transforming, and securing operations over XML data, applied individually to a wide range of testing data, without concurrency. The "Onion" testing suite, on the other hand, is a stress test combining several operations together. Both testing suites are executed on our testing framework, and several measures (such as throughput) are collected and analyzed using n-dimensional OLAP cubes. The results show under which circumstances the appliance for hardware-accelerated XML processing is worth using and quantify...
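A hedged sketch of the kind of single-operation measurement the "Flat" suite performs, here timing a DOM parse and an XSLT transformation with the standard Java APIs; the file names are placeholders and the thesis's real framework is far more elaborate:

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.File;
    import java.io.StringWriter;

    class XmlTiming {
        public static void main(String[] args) throws Exception {
            long t0 = System.nanoTime();
            DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new File("input.xml"));                  // parse step
            long t1 = System.nanoTime();
            TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("style.xslt")))
                    .transform(new StreamSource(new File("input.xml")),
                               new StreamResult(new StringWriter())); // transform step
            long t2 = System.nanoTime();
            System.out.printf("parse: %d ms, transform: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }
    }
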
Experiments with Linked Data
Nohejl, Pavel ; Nečaský, Martin (advisor) ; Knap, Tomáš (referee)
The goal of this master's thesis is to create a "manual" for Linked Data technology. The first part describes the Semantic Web and its relationship to Linked Data, followed by a detailed explanation of Linked Data and the so-called "Linked Data principles", including the technologies and tools involved. The second part contains practical experience with creating and using Linked Data. First, we describe obtaining data on public procurement with a web crawler developed for this purpose, followed by a description of the transformation of the obtained (relational) data into Linked Data and their interlinking with external Linked Data sources. The thesis also includes an application consuming the created Linked Data. It is compared with the traditional approach in which the application consumes data from a relational database, and the comparison is supplemented by a benchmark. Finally, a manual for the beginning developer is presented, summarizing our experience. A list of problems that (from our point of view) need to be solved for the further development of Linked Data is also included.
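A minimal sketch of publishing one record as Linked Data with Apache Jena, a common tool in this space; the thesis's actual pipeline and procurement vocabularies are not reproduced here, and the URIs are illustrative:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    class LinkedDataExample {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            Property title = model.createProperty("http://purl.org/dc/terms/title");
            // A dereferenceable URI identifying the record (Linked Data principles).
            Resource contract = model.createResource("http://example.org/contract/42");
            contract.addProperty(title, "Road maintenance 2011");
            model.write(System.out, "TURTLE"); // serialize as Turtle
        }
    }
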
Inference of XML Integrity Constraints
Vitásek, Matej ; Holubová, Irena (advisor) ; Knap, Tomáš (referee)
In this work we expand upon previous efforts to infer schema information from existing XML documents. We find the inference of structure to be sufficiently researched and focus further on integrity constraints. After briefly introducing some of them, we turn our attention to ID/IDREF/IDREFS attributes in DTD. Building on the research by Barbosa and Mendelzon (2003), we introduce a heuristic approach to the problem of finding an optimal ID set. The approach is evaluated and tuned in a wide range of experiments.
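One precondition behind ID-attribute candidacy is easy to illustrate: an attribute can serve as an ID only if its values are unique across the whole document. The Java sketch below shows that check; the heuristic for choosing an optimal ID set among such candidates is more involved and is not reproduced here:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class IdCandidateCheck {
        // Returns true if every attribute value occurs at most once.
        static boolean isIdCandidate(List<String> attributeValues) {
            Set<String> seen = new HashSet<>();
            for (String v : attributeValues)
                if (!seen.add(v)) return false; // duplicate value disqualifies
            return true;
        }

        public static void main(String[] args) {
            System.out.println(isIdCandidate(List.of("a1", "a2", "a3"))); // true
            System.out.println(isIdCandidate(List.of("a1", "a1")));       // false
        }
    }
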
Comparison of Enterprise Application Integration platforms
Kusák, David ; Nečaský, Martin (advisor) ; Knap, Tomáš (referee)
The master's thesis explores the domain of Enterprise Application Integration (EAI) along with the tools and platforms that enable this type of integration. The Enterprise Service Bus (ESB), the most mature of the EAI integration concepts, makes it possible to create a very powerful platform for EAI. Products based on the ESB concept are very often used as an integration backbone in contemporary SOA (Service-Oriented Architecture) environments. Many of them are available under an expensive commercial license; in contrast, there also exist open-source ESB products. The main part of the thesis is a detailed comparison of selected open-source ESB products according to selected criteria. The thesis also tries to answer the question of whether these open-source ESB products are capable and mature enough to be used in a corporate environment.
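To make the mediation role of an ESB concrete, here is a minimal routing sketch using Apache Camel, an open-source integration framework often found at the core of such products; the endpoints are illustrative placeholders:

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    class EsbRouteExample {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Pick up files, log them, and hand them to another system.
                    from("file:orders/in")
                        .log("Routing order ${file:name}")
                        .to("file:orders/out");
                }
            });
            context.start();
            Thread.sleep(10_000); // let the route run briefly, then shut down
            context.stop();
        }
    }
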
Web application for presentation of integrated inspection data
Finger, Artur ; Nečaský, Martin (advisor) ; Knap, Tomáš (referee)
An initiative is arising across the globe to publish government data as Linked Data, thus connecting it to other parts of the Semantic Web. Together, the data form the Linked Data Cloud, which serves as a rich data source for numerous applications. Overcoming unexpected problems, we managed to convert data about inspections carried out by the State Veterinary Administration into that format. For that we used an ETL tool called UnifiedViews. The resulting data were described using well-known RDF ontologies. Then, also using UnifiedViews, we integrated this data with Linked Data of the Czech Trade Inspection Authority. We created a web application that uses this integrated database to search for fair businesses. Using a database server called Virtuoso, we managed to implement search by geographical coordinates. Our application is extensible with new data sources, and the extracted SVA data are available as open Linked Data.
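A hedged sketch of coordinate-based search against a SPARQL endpoint such as Virtuoso, using Apache Jena; the endpoint URL and the W3C WGS84 predicates are assumptions, and the application's real queries and graph layout will differ:

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QuerySolution;
    import org.apache.jena.query.ResultSet;

    class GeoSearch {
        public static void main(String[] args) {
            String query =
                "PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> " +
                "SELECT ?s ?lat ?long WHERE { " +
                "  ?s geo:lat ?lat ; geo:long ?long . " +
                "  FILTER (?lat > 50.0 && ?lat < 50.1 && ?long > 14.4 && ?long < 14.5) }";
            // Bounding-box filter over latitude/longitude literals.
            try (QueryExecution qe = QueryExecutionFactory
                    .sparqlService("http://localhost:8890/sparql", query)) {
                ResultSet results = qe.execSelect();
                while (results.hasNext()) {
                    QuerySolution row = results.next();
                    System.out.println(row.get("s"));
                }
            }
        }
    }
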
Linked Data Integration
Michelfeit, Jan ; Knap, Tomáš (advisor) ; Klímek, Jakub (referee)
Linked Data have emerged as a successful publication format which could mean to structured data what the Web meant to documents. The strength of Linked Data lies in its fitness for integrating data from multiple sources. Linked Data integration opens the door to new opportunities but also poses new challenges. New algorithms and tools need to be developed to cover all steps of data integration. This thesis examines established data integration processes and how they can be applied to Linked Data, with a focus on data fusion and conflict resolution. Novel algorithms for Linked Data fusion are proposed, and the task of supporting trust with provenance information and quality assessment of fused data is addressed. The proposed algorithms are implemented as part of ODCleanStore, a Linked Data integration framework.
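As a toy illustration of conflict resolution (not ODCleanStore's actual algorithm), one simple strategy keeps, among conflicting values for a property, the one whose source has the highest quality score:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    class ConflictResolution {
        record SourcedValue(String value, String source, double quality) {}

        // Keep the value backed by the highest-quality source.
        static Optional<SourcedValue> resolve(List<SourcedValue> conflicting) {
            return conflicting.stream()
                    .max(Comparator.comparingDouble(SourcedValue::quality));
        }

        public static void main(String[] args) {
            List<SourcedValue> values = List.of(
                    new SourcedValue("Praha", "sourceA", 0.9),
                    new SourcedValue("Prague", "sourceB", 0.6));
            resolve(values).ifPresent(v -> System.out.println(v.value())); // Praha
        }
    }
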
Traffic Simulation
Šmíd, Jakub ; Nečaský, Martin (advisor) ; Knap, Tomáš (referee)
Title: Traffic Simulation. Author: Jakub Šmíd. Department: Department of Software Engineering. Supervisor: Mgr. Martin Nečaský, Ph.D. Supervisor's e-mail address: necasky@ksi.mff.cuni.c Abstract: The goal of this bachelor thesis is to create a program that simulates the traffic in a user-defined city. The application mainly watches the status of roads and monitors the number of vehicles on them; when a road becomes full, the program reports a traffic jam and diverts part of the traffic in order to reduce the jam or even eliminate it completely. Besides private traffic, it monitors city public transport vehicles scheduled by timetable. After the simulation ends, it suggests changes that can bring the arrival times of scheduled traffic closer to the scheduled times of arrival. Keywords: traffic, simulation, jam
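A minimal sketch of the jam-detection and diversion step described above; the class and field names are illustrative, not the thesis's actual design:

    class Road {
        private final int capacity;
        private int vehicles;

        Road(int capacity) { this.capacity = capacity; }

        void enter(int count) { vehicles += count; }

        boolean isJammed() { return vehicles >= capacity; }

        // Divert a fraction of the traffic elsewhere; returns how many vehicles left.
        int divert(double fraction) {
            int diverted = (int) (vehicles * fraction);
            vehicles -= diverted;
            return diverted;
        }

        public static void main(String[] args) {
            Road road = new Road(100);
            road.enter(120);
            if (road.isJammed())
                System.out.println("Jam reported; diverted " + road.divert(0.25));
        }
    }
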
