National Repository of Grey Literature: 95 records found (showing records 11 - 20)
The Issues of Correct Implementation of ERP System for a Small Company
Conicov, Andrei ; Ulrych, Jan (advisor) ; Žemlička, Michal (referee)
The aim of this paper is to describe the architecture of an ERP system, clearly define the tasks it has to solve, and, as a result, emphasize the differences between ERP systems for large companies and for small ones. It is important to understand that the requirements and possibilities of these two types of organizations differ, so the software has to differ as well. ERP systems are complex, so in some cases I explain general concepts that cover the pivotal points of ERP architecture. The paper's aim is not to describe the final architecture of an ERP system, since that is a very complex software (SW) product; instead, it analyzes clients' needs and the technical possibilities in order to create an ERP system that meets the demands of small enterprises (SE). This is achieved by defining and describing the points that should be taken into consideration. In some cases I offer possible solutions and tips for the problems encountered.
Macroprocessor
Hlaváček, Luděk ; Holan, Tomáš (advisor) ; Žemlička, Michal (referee)
The goal of this work is to design and implement a general-purpose macro processor. It supports common features such as conditional evaluation, file inclusion, user-defined macros, and manipulation of macros at run time. Various modifications of the macro processor's configuration are possible at run time as well, and it is even possible to change the way built-in commands are invoked. Several examples are included to demonstrate the implemented features. The work also contains a brief description, history, and comparison of existing macro processors, together with the theoretical principles of macro processing.
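To make the feature list concrete, here is a minimal Python sketch of a line-oriented macro processor; the command names (#define, #ifdef, #endif, #include) and the naive textual expansion are illustrative assumptions, not the thesis's actual syntax.

    class MacroProcessor:
        """Toy line-oriented macro processor: user-defined macros,
        conditional evaluation, file inclusion, run-time redefinition.
        Command names are assumptions, not the thesis's syntax."""

        def __init__(self):
            self.macros = {}

        def process_file(self, path):
            with open(path, encoding="utf-8") as f:
                return self.process(f.read())

        def process(self, text):
            out, emitting = [], [True]
            for line in text.splitlines():
                if line.startswith("#define "):
                    parts = line.split(maxsplit=2)      # name, optional body
                    if emitting[-1]:
                        self.macros[parts[1]] = parts[2] if len(parts) > 2 else ""
                elif line.startswith("#ifdef "):        # conditional evaluation
                    emitting.append(emitting[-1] and line.split()[1] in self.macros)
                elif line.strip() == "#endif":
                    emitting.pop()
                elif line.startswith("#include "):      # file inclusion
                    if emitting[-1]:
                        out.append(self.process_file(line.split(maxsplit=1)[1]))
                elif emitting[-1]:
                    for name, body in self.macros.items():  # naive textual expansion
                        line = line.replace(name, body)
                    out.append(line)
            return "\n".join(out)

    mp = MacroProcessor()
    print(mp.process("#define WHO world\n#ifdef WHO\nHello, WHO!\n#endif"))
    # -> Hello, world!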
Environment for distributed computations
Dědeček, Jan ; Žemlička, Michal (advisor) ; Kruliš, Martin (referee)
This work analyzes possibilities for optimizing a special task: generating all words of a given length n and testing each of them by a computation. The first part of the work examines how the words can be processed more efficiently. Since the number of words is 2^n, even for small lengths the task cannot be computed on a single machine in acceptable time. The advantage of this task is that the computation for each word is independent, so the task can easily be divided. The second part of the work analyzes how to divide the computation among multiple computers so that the overall computation time is minimized. A by-product of this work is a distributed application.
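Because the per-word computations are independent, the 2^n-word space can simply be split into ranges processed in parallel. A minimal Python sketch, with local processes standing in for separate computers and test_word a hypothetical placeholder for the actual computation:

    from multiprocessing import Pool

    N = 20                            # word length; the space has 2**N words
    WORKERS = 8

    def test_word(w):
        # Hypothetical placeholder for the per-word computation; each
        # word is independent, so the work parallelizes trivially.
        return bin(w).count("1") == N // 2        # dummy predicate

    def test_range(bounds):
        lo, hi = bounds
        return sum(1 for w in range(lo, hi) if test_word(w))

    if __name__ == "__main__":
        total, step = 2 ** N, 2 ** N // WORKERS
        chunks = [(lo, lo + step) for lo in range(0, total, step)]
        with Pool(WORKERS) as pool:           # processes stand in for machines
            hits = sum(pool.map(test_range, chunks))
        print(hits, "of", total, "words passed the test")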
Automated prediction of results of tennis matches
Ščavnický, Martin ; Surynek, Pavel (advisor) ; Žemlička, Michal (referee)
In the present work we study the prediction of results of men's tennis matches using a multilayer perceptron. We propose a variety of input and output parameters and also existing techniques for their preprocessing for the neural network, specifically noise filtering and principal component analysis (PCA). We also try to adapt an existing chess rating system to the needs of tennis. In the experimental part we try to find an optimal model for the prediction and study the influence of the preprocessing on the model's efficiency. For this purpose we have developed software that facilitates testing and subsequent prediction.
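The abstract does not name the chess rating being adapted; assuming the Elo system, the standard chess rating, a minimal sketch of the update that would be re-tuned for tennis and fed to the network as an input feature looks like this (K = 32 and the 400 scale are the classic chess constants, used here as placeholders):

    def elo_expected(r_a, r_b):
        # Expected score of player A against player B under the Elo model.
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def elo_update(r_a, r_b, score_a, k=32.0):
        # One update after a match; score_a is 1.0 for a win, 0.0 for a loss.
        # K and the 400 scale are the chess constants and would be re-tuned
        # for tennis; the thesis's actual values are not given in the abstract.
        e_a = elo_expected(r_a, r_b)
        return r_a + k * (score_a - e_a), r_b + k * (e_a - score_a)

    ra, rb = elo_update(1500.0, 1600.0, 1.0)  # upset win boosts the underdog
    print(round(ra), round(rb), round(elo_expected(ra, rb), 3))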
Context models for statistical data compression
Paška, Přemysl ; Dvořák, Tomáš (advisor) ; Žemlička, Michal (referee)
Current context-modelling methods use an aggregated form of the statistics and reuse the data history only rarely. This work proposes two independent methods that use the history in a more elaborate way. When the Prediction by Partial Matching (PPM) method updates its context tree, previous occurrences of a newly added context are ignored, which harms the precision of the probabilities. An improved algorithm that uses the complete data history is described. The empirical results suggest that this sub-optimality of PPM is one of the major causes of inaccurate probabilities at high context orders. Current methods (especially PAQ) adapt to non-stationary data by strongly favoring the most recent statistics. The method proposed in this work generalizes this approach by favoring those parts of the history that are most relevant to the current data; its implementation provides an improvement on almost all tested data, especially on some samples of non-stationary data.
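To make the PPM point concrete, a toy Python sketch follows; it assumes, as a deliberate simplification, that a context node is created lazily at its second occurrence. With the standard behaviour the node's counts then start from zero, losing the earlier occurrence; with the proposed backfill the history is rescanned once at creation so the counts are exact. Real PPM keeps these statistics in a context tree with escape probabilities, which the sketch omits.

    from collections import defaultdict

    def context_stats(data, order, backfill):
        # Toy incremental order-`order` context statistics. A context
        # node is created lazily at its second occurrence (a simplifying
        # assumption). backfill=False: counts start from zero at creation,
        # as in standard PPM. backfill=True: rescan the history once at
        # creation, in the spirit of the proposed method.
        seen_once = set()
        counts = defaultdict(lambda: defaultdict(int))
        for i in range(order, len(data)):
            ctx, sym = data[i - order:i], data[i]
            if ctx in counts:                   # node exists: normal update
                counts[ctx][sym] += 1
            elif ctx in seen_once:              # second occurrence: create node
                if backfill:                    # proposed: recover lost history
                    for j in range(order, i + 1):
                        if data[j - order:j] == ctx:
                            counts[ctx][data[j]] += 1
                else:                           # standard: earlier history lost
                    counts[ctx][sym] += 1
            else:
                seen_once.add(ctx)
        return counts

    for backfill in (False, True):
        stats = context_stats(b"abracadabra", 1, backfill)[b"a"]
        print(backfill, {chr(s): c for s, c in stats.items()})
    # False -> {'c': 1, 'd': 1, 'b': 1}   (the first 'a'->'b' transition is lost)
    # True  -> {'b': 2, 'c': 1, 'd': 1}   (recovered from the history)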
Workgroup Planning Application
Nguyen Cong, Thang ; Kopecký, Michal (advisor) ; Žemlička, Michal (referee)
This thesis describes the design and implementation of a tool that supports workgroup planning. The system allows its users to manage their tasks and supports collaboration among multiple users. The user interface is a lightweight web client. The application is modular, so that future enhancements will be as simple as possible.
Lossless JPEG compression
Ondruš, Jan ; Lánský, Jan (advisor) ; Žemlička, Michal (referee)
JPEG is a commonly used compression method for photographic images. It consists of a lossy and a lossless part; static Huffman coding is the last step. This step can be replaced with more advanced techniques such as arithmetic coding. In this work we introduce a method for the additional compression of JPEG files (files in the JFIF format) saved in baseline mode. The general approach is partial decompression: we invert only the last, lossless steps of the JPEG compression algorithm, so the compressed file is transformed into an array of quantized DCT coefficients. We designed an algorithm for predicting the DCT coefficients: for each of the 64 coefficients in a block matrix, it returns a particular linear combination of previously coded coefficients in the current and neighbouring blocks. We show how this prediction can improve the compression efficiency of JPEG files using the Context Mixing algorithm implemented in PAQ8 by Matt Mahoney. The specific implementation is described and its compression ratio is compared with existing methods and applications for the further lossless compression of JPEG images.
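To illustrate the prediction step, a small NumPy sketch (an assumption-laden simplification, not the thesis's algorithm): each block's quantized DCT coefficients are predicted from the corresponding coefficients in the left and upper neighbouring blocks, and the residual is what a context-mixing coder would then compress. The thesis derives a separate linear combination per coefficient position; the equal scalar weights here are placeholders.

    import numpy as np

    def predict_block(blocks, bi, bj, w_left=0.5, w_up=0.5):
        # Predict the 8x8 quantized-DCT block at (bi, bj) as a linear
        # combination of its left and upper neighbours. The thesis uses a
        # separate combination for each of the 64 coefficient positions;
        # the equal scalar weights here are placeholders.
        left = blocks[bi, bj - 1] if bj > 0 else np.zeros((8, 8))
        up = blocks[bi - 1, bj] if bi > 0 else np.zeros((8, 8))
        return w_left * left + w_up * up

    # blocks: (rows, cols, 8, 8) array of quantized DCT coefficients; the
    # residual (actual minus prediction) is what the coder would compress.
    blocks = np.random.randint(-64, 64, size=(4, 4, 8, 8))
    residual = blocks[2, 3] - np.rint(predict_block(blocks, 2, 3)).astype(int)
    print(abs(residual).mean())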
Tools for handling bibliographic data
Hlušičková, Šárka ; Žemlička, Michal (advisor) ; Hoffmann, Petr (referee)
In this work we study conversions of bibliographic records between different bibliographic formats and the generation of lists of bibliographic references. We have focused on the formats that are native to the most widely used citation managers or can easily be imported into them: the BibTeX format (BibTeX), the RIS format (Reference Manager, ProCite, EndNote), and the Tagged "EndNote Import" format (EndNote). We also support conversions from the MARC 21 format, in which libraries store their records. The bibliographic references are created according to the rules of two standards, ČSN ISO 690 and ČSN ISO 690-2; we do not consider the generation of in-text citations.
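As a flavour of the conversions involved, a minimal Python sketch mapping a single RIS record to a BibTeX entry; only a handful of tags are mapped and the entry type is fixed to @article, whereas a real converter must handle the full RIS tag set, per-type entry mapping, and character encoding.

    RIS_TO_BIBTEX = {"TI": "title", "PY": "year", "JO": "journal"}

    def ris_to_bibtex(ris_text, key="record1"):
        # Convert one minimal RIS record to a BibTeX @article entry.
        # Sketch only: a few tags, a fixed entry type, no escaping.
        fields = {}
        for line in ris_text.splitlines():
            tag, _, value = line.partition("  - ")
            tag, value = tag.strip(), value.strip()
            if tag == "AU":                     # RIS repeats AU per author
                fields.setdefault("author", []).append(value)
            elif tag in RIS_TO_BIBTEX:
                fields[RIS_TO_BIBTEX[tag]] = value
        if "author" in fields:
            fields["author"] = " and ".join(fields["author"])
        body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items())
        return f"@article{{{key},\n{body}\n}}"

    print(ris_to_bibtex("TY  - JOUR\nAU  - Doe, Jane\nTI  - On Citations\nPY  - 2009\nER  - "))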
Accessing and Management of Scanned Documents
Novák, Matyáš ; Žemlička, Michal (advisor) ; Hoffmannová, Petra (referee)
In this thesis we address the problem of digitizing mainly historical documents. The goal is to design a procedure for digitizing (scanning) and indexing these documents, with a focus on their internal structure and on easy searching within them. Solving the problem required proposing appropriate structures and instruments for storing and cataloguing the digitized documents, and performing the analysis, design, and implementation of the software required for the digitization process and for publishing the documents. The proposed procedures and software were validated in a pilot project, which is still running and during which about three terabytes of image data have been obtained and partially indexed. The thesis includes an analysis of the experience gained from this pilot project. Documents digitized during the pilot project are available at http://www.depositum.cz.
Implementation of Collate at the database level for PostgreSQL
Strnad, Radek ; Žemlička, Michal (advisor) ; Kopecký, Michal (referee)
The current version of PostgreSQL supports only one collation per database cluster. This does not meet the requirements of users developing multilingual applications. The goal of the work is to implement collation at the database level and lay the foundations for further national-language support development. The user will be able to set the collation when creating a database; in particular, the command CREATE DATABASE ... COLLATE ... will be implemented following the ANSI standard. The work will also implement the possibility of creating the user's own collations via the commands CREATE COLLATION ... FROM ... USING ... and DROP COLLATION ..., again following the ANSI standard.
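For illustration, here is how the proposed commands might be issued from a Python client using psycopg2. The COLLATE grammar below is the thesis's proposed ANSI-style syntax, not syntax that a stock PostgreSQL server of that era accepts (later releases expose similar features as CREATE DATABASE ... LC_COLLATE and CREATE COLLATION name FROM existing_collation), and all object names are placeholders.

    import psycopg2  # assumes a server patched with the proposed feature

    conn = psycopg2.connect("dbname=postgres user=postgres")
    conn.autocommit = True        # CREATE DATABASE cannot run in a transaction
    cur = conn.cursor()

    # Per-database collation chosen at creation time; "czech_db" and
    # "cs_CZ" are illustrative, and COLLATE is the proposed syntax.
    cur.execute('CREATE DATABASE czech_db COLLATE "cs_CZ"')

    # A user-defined collation derived from an existing one, then dropped;
    # "my_collation" and "my_comparator" are placeholders.
    cur.execute('CREATE COLLATION my_collation FROM "cs_CZ" USING my_comparator')
    cur.execute('DROP COLLATION my_collation')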
