National Repository of Grey Literature: 25 records found (showing records 11 - 20)
Simplification of contour lines in flat areas
Čelonk, Marek ; Bayer, Tomáš (advisor) ; Kolingerová, Ivana (referee)
The diploma thesis focuses on the cartographic generalization of contour lines derived from dense point clouds in flat territories, where the original contour lines tend to oscillate. The main aim is to propose, develop and test a new algorithm for contour simplification that preserves a given vertical error and reflects the cartographic rules. Three methods designed for large-scale maps (1 : 10 000 and larger) are presented: the weighted average, a modified Douglas-Peucker, and a potential-based approach. The most promising method is based on the repeated simplification of contour line segments by calculating the generalization potential of their vertices. The algorithm, implemented in Python 2.7 using the ArcPy library, was tested on DMR 5G data, and the simplified contour lines were compared with the result created by a professional cartographer. The achieved results are presented on the attached topographic maps. Keywords: contours, cartographic generalization, digital cartography, vertical buffer, smoothing, GIS
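Illustrative sketch (not taken from the thesis): the record mentions a modified Douglas-Peucker method; the minimal Python sketch below shows only the classical Douglas-Peucker recursion on 2D polylines, with the tolerance value and point format chosen as assumptions. The vertical-error constraint and the potential-based variant from the thesis are not reproduced here.

```python
# Minimal sketch of the classical Douglas-Peucker line simplification.
# The thesis uses a modified, vertical-error-aware, ArcPy-based variant;
# this only illustrates the core recursive idea. Points are (x, y) tuples.
import math

def point_line_distance(p, a, b):
    """Perpendicular distance of point p from the chord a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    chord_len = math.hypot(dx, dy)
    if chord_len == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / chord_len

def douglas_peucker(points, tolerance):
    """Recursively drop vertices closer than `tolerance` to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord between the end points.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Keep the farthest vertex and simplify both halves.
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right

if __name__ == "__main__":
    line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
    print(douglas_peucker(line, 1.0))
```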
Syntax in methods for information retrieval
Straková, Jana
Title: Information Retrieval Using Syntax Information Author: Bc. Jana Kravalová Department: Institute of Formal and Applied Linguistics Supervisor: Mgr. Pavel Pecina, Ph.D. Supervisor's e-mail address: pecina@ufal.mff.cuni.cz Abstract: In recent years, the application of language modeling in information retrieval has been studied quite extensively. Although language models of any type can be used with this approach, only traditional n-gram models based on surface word order have been employed and described in published experiments (often only unigram language models). The goal of this thesis is to design, implement, and evaluate (on Czech data) a method which extends a language model with syntactic information automatically obtained from documents and queries. We attempt to incorporate syntactic information into language models and experimentally compare this approach with unigram and bigram models based on surface word order. We also empirically compare methods for smoothing, stemming and lemmatization, and the effectiveness of using stopwords and pseudo-relevance feedback. We perform a detailed analysis of these retrieval methods and describe their performance in detail. Keywords: information retrieval, language modelling, dependency syntax, smoothing
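Illustrative sketch (not from the thesis): a minimal unigram query-likelihood baseline of the kind the abstract contrasts with the syntactic extension, assuming Jelinek-Mercer smoothing against a collection model; the toy documents and the lambda value are assumptions, and the syntactic component is not shown.

```python
# Sketch of unigram query-likelihood retrieval with Jelinek-Mercer smoothing:
# score(Q, D) = sum_t log( (1 - lam) * P(t | D) + lam * P(t | C) )
# Toy collection and lam = 0.5 are illustrative assumptions.
import math
from collections import Counter

def score(query_terms, doc_terms, collection_counts, collection_len, lam=0.5):
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    total = 0.0
    for t in query_terms:
        p_doc = doc_counts[t] / doc_len if doc_len else 0.0
        p_col = collection_counts[t] / collection_len if collection_len else 0.0
        p = (1 - lam) * p_doc + lam * p_col
        total += math.log(p) if p > 0 else float("-inf")  # term unseen everywhere
    return total

if __name__ == "__main__":
    docs = {
        "d1": "language models for information retrieval".split(),
        "d2": "syntactic parsing of czech sentences".split(),
    }
    collection = [t for d in docs.values() for t in d]
    col_counts = Counter(collection)
    query = "information retrieval".split()
    ranked = sorted(docs, key=lambda d: score(query, docs[d], col_counts, len(collection)),
                    reverse=True)
    print(ranked)
```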
Automatic smoothing 3D models of cranial embryonic mouse cartilage
Kočendová, Kateřina ; Harabiš, Vratislav (referee) ; Jakubíček, Roman (advisor)
The focus of this thesis is the smoothing of manually segmented 3D models of mouse embryo craniofacial cartilage. During manual segmentation, artefacts and other imperfections appear in the final models and need to be repaired. First, the manual segmentation is corrected using gradients and thresholding. Subsequent smoothing methods are designed based on theoretical research. The algorithms are implemented in the MATLAB environment and then tested on selected models. Statistical evaluation uses the Sørensen–Dice coefficient, where manually smoothed models cleared of all artefacts serve as the gold standard.
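Illustrative sketch (not the thesis's MATLAB code): the Sørensen–Dice overlap named in the record, assuming the compared models are binary voxel masks of equal shape; the toy masks are assumptions.

```python
# Sketch of the Sørensen-Dice coefficient for two binary voxel masks:
# Dice = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap for boolean arrays of equal shape."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

if __name__ == "__main__":
    gold = np.zeros((4, 4, 4), dtype=bool)       # hypothetical gold-standard mask
    gold[1:3, 1:3, 1:3] = True
    smoothed = np.zeros_like(gold)               # hypothetical smoothed result
    smoothed[1:3, 1:3, 1:4] = True
    print(round(dice_coefficient(gold, smoothed), 3))
```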
Smoothing device for FDM 3D prints
Dubský, Jan ; Čudek, Pavel (referee) ; Bayer, Robert (advisor)
This bachelor's thesis deals with the design of a device that smooths out 3D prints made of various materials on FDM 3D printers. It explains the principle of 3D printing and describes the technologies used in 3D printing. The work gives an overview of materials used for 3D printing and of selected organic solvents. It describes the vapour smoothing and aerosol smoothing methods that can be used for smoothing the surface of 3D prints. The vapour smoothing method was experimentally tested using these solvents: acetone, dichloromethane, chloroform and tetrahydrofuran. The aerosol smoothing method used a solution of acetone and dichloromethane in a 1:1 ratio. Both methods were tested on samples made of PLA, ABS, PETG and SBS. The thesis then describes the design of a device for smoothing 3D prints with regard to the chemical compatibility of the construction materials used and the ease of use of the device.
Projection of mortality tables and their influence on insurance embedded value
Filka, Jakub ; Pešta, Michal (advisor) ; Cipra, Tomáš (referee)
We study the development of mortality tables in the Czech Republic from 1950 to the present. Our aim is to examine six basic models that can potentially be used to describe the mortality of people over 60 years of age. The models investigated range from the generally accepted Gompertz-Makeham model to the logistic models of Thatcher and Kannisto. We also introduce the Coale-Kisker and Heligman-Pollard models. Our analysis concentrates mostly on the ability of the given models to project to the highest ages. Especially for women, where the data do not show such dispersion as in the case of men, there is a visible trend that is described better by the logistic models than by the Gompertz-Makeham model, which tends to overestimate the probabilities of dying at higher ages. Keywords: projection of mortality tables, Gompertz-Makeham, logistic models
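Illustrative sketch (not from the thesis): the two best-known hazard forms mentioned in the record, the Gompertz-Makeham force of mortality mu(x) = a*exp(b*x) + c and the Kannisto logistic form mu(x) = a*exp(b*x) / (1 + a*exp(b*x)); the parameter values are illustrative assumptions, not estimates fitted to Czech data.

```python
# Sketch of two force-of-mortality models: Gompertz-Makeham grows
# exponentially at high ages, while the Kannisto logistic form levels off,
# which is the behaviour the abstract contrasts for the oldest ages.
import math

def gompertz_makeham(x, a=1e-5, b=0.11, c=5e-4):
    # mu(x) = a * exp(b * x) + c; parameters are illustrative only.
    return a * math.exp(b * x) + c

def kannisto(x, a=1e-5, b=0.11):
    # mu(x) = a * exp(b * x) / (1 + a * exp(b * x)); bounded above by 1.
    e = a * math.exp(b * x)
    return e / (1.0 + e)

if __name__ == "__main__":
    for age in (60, 80, 100, 110):
        print(age, round(gompertz_makeham(age), 4), round(kannisto(age), 4))
```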
3D Triangles Polygonal Mesh Conversion on 3D Spline Surfaces
Jahn, Zdeněk ; Španěl, Michal (referee) ; Kršek, Přemysl (advisor)
This bachelor's thesis deals with the problem of remeshing unstructured triangular 3D meshes into more suitable representations (quadrilateral meshes or spline surfaces). It explains the basic problems related to unstructured meshes and the reasons for solving them. It classifies the usable methods and briefly describes the most suitable candidates. It then follows the chosen method in detail, covering both the theory and the specific implementation.
Polygonal Models Smoothing
Svěchovský, Radek ; Švub, Miroslav (referee) ; Kršek, Přemysl (advisor)
Object digitization or the transformation of a 3D model into a surface representation introduces defects in the form of noise. This thesis analyses the well-known approaches to eliminating noise from polygonal models. The reader is introduced to the fundamental principles of smoothing and, foremost, to the results of a comparison of different methods, including the Laplace method, the Laplace-HC algorithm, Taubin's low-pass filter and the bilateral filter.
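Illustrative sketch (not from the thesis): the simplest of the compared approaches, basic Laplace smoothing, where each vertex is moved toward the centroid of its topological neighbours. The mesh representation, iteration count and relaxation factor are assumptions; the HC, Taubin and bilateral variants are not shown.

```python
# Sketch of basic Laplace smoothing on a triangle mesh: each vertex is
# relaxed toward the centroid of its neighbours in every iteration.
import numpy as np

def laplace_smooth(vertices, faces, iterations=10, lam=0.5):
    """vertices: (n, 3) array of positions; faces: triples of vertex indices."""
    verts = np.asarray(vertices, dtype=float).copy()
    n = len(verts)
    # Build neighbour lists from the triangle faces.
    neighbours = [set() for _ in range(n)]
    for i, j, k in faces:
        neighbours[i].update((j, k))
        neighbours[j].update((i, k))
        neighbours[k].update((i, j))
    for _ in range(iterations):
        new_verts = verts.copy()
        for v in range(n):
            if neighbours[v]:
                centroid = verts[list(neighbours[v])].mean(axis=0)
                new_verts[v] = verts[v] + lam * (centroid - verts[v])
        verts = new_verts
    return verts

if __name__ == "__main__":
    vertices = [[0, 0, 0], [1, 0, 0.2], [0, 1, -0.1], [1, 1, 0.3]]  # noisy toy quad
    faces = [(0, 1, 2), (1, 3, 2)]
    print(laplace_smooth(vertices, faces, iterations=2))
```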
Mining of Textual Data from the Web for Speech Recognition
Kubalík, Jakub ; Plchot, Oldřich (referee) ; Mikolov, Tomáš (advisor)
The primary goal of this project was to study language modeling for speech recognition and techniques for obtaining text data from the Web. The text presents the basic techniques of speech recognition and describes in more detail language models based on statistical methods. In particular, the work deals with the criteria for evaluating the quality of language models and of speech recognition systems. The text further describes models and techniques of data mining, especially information retrieval. The problems associated with obtaining data from the Web are presented, and the Google search engine is introduced in contrast to them. Part of the project was the design and implementation of a system for obtaining text from the Web, and its detailed description receives due attention. The main goal of the work, however, was to verify whether data obtained from the Web can bring any benefit to speech recognition. The described techniques therefore try to find the optimal way to use data obtained from the Web to improve both the sample language models and the models deployed in real recognition systems.
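Illustrative sketch (not from the thesis): one common criterion for language-model quality that such work typically reports is perplexity; a minimal unigram version with add-one smoothing is shown below. The toy corpus and the smoothing choice are assumptions, and real systems would use higher-order n-grams trained on Web-scale text.

```python
# Sketch of perplexity, a standard quality criterion for language models:
# PP(W) = exp( - (1/N) * sum_i log P(w_i) ), lower is better.
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens):
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = len(train_tokens)
    log_prob = 0.0
    for t in test_tokens:
        p = (counts[t] + 1) / (total + len(vocab))  # add-one smoothing
        log_prob += math.log(p)
    return math.exp(-log_prob / len(test_tokens))

if __name__ == "__main__":
    train = "speech recognition needs a good language model".split()  # toy corpus
    test = "a language model for speech recognition".split()
    print(round(unigram_perplexity(train, test), 2))
```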
Prediction of Time Series Using Statistical Methods
Beluský, Ondrej ; Bidlo, Michal (referee) ; Schwarz, Josef (advisor)
Many companies consider it essential to obtain forecasts of time series of uncertain variables that influence their decisions and actions. Marketing includes a number of decisions that depend on a reliable forecast. Forecasts are based directly or indirectly on information derived from historical data. This data may contain different patterns, such as a trend, a horizontal pattern, or a cyclical or seasonal pattern. Most methods are based on recognizing these patterns, projecting them into the future and thus creating a forecast. Other approaches, such as neural networks, are black boxes that rely on learning.
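Illustrative sketch (not from the thesis): one simple statistical method of the pattern-projecting kind described above is Holt's linear exponential smoothing, which tracks a level and a trend and extrapolates them; the smoothing constants and the toy series are assumptions, and seasonal patterns are not handled here.

```python
# Sketch of Holt's linear exponential smoothing: the series is decomposed
# into a level and a trend, each updated recursively, and the forecast is
# extrapolated along the estimated trend.
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    level = series[0]
    trend = series[1] - series[0]
    for value in series[1:]:
        prev_level = level
        level = alpha * value + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

if __name__ == "__main__":
    sales = [112.0, 118.0, 132.0, 129.0, 141.0, 148.0, 155.0]  # hypothetical data
    print([round(v, 1) for v in holt_forecast(sales, horizon=3)])
```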
