National Repository of Grey Literature: 24 records found
Using Adversarial Examples in Natural Language Processing
Bělohlávek, Petr ; Žabokrtský, Zdeněk (advisor) ; Libovický, Jindřich (referee)
Machine learning has received a lot of attention in recent years. One of the studied topics is the use of adversarial examples. These are artificially constructed examples that exhibit two main features: they resemble the real training data, and they deceive an already trained model. Adversarial examples have been investigated comprehensively in the context of deep convolutional neural networks that process images. Nevertheless, their properties have rarely been examined in connection with networks for NLP. This thesis evaluates the effect of using adversarial examples during the training of recurrent neural networks. More specifically, the main focus is on recurrent networks whose text input is a sequence of word/character embeddings that have not been pretrained in advance. The effects of adversarial training are studied by evaluating multiple NLP datasets with various characteristics.
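A common way to construct the adversarial examples the abstract refers to is the fast gradient sign method (FGSM): perturb the input in the direction that increases the training loss. The thesis's setting is RNNs over embeddings; in this sketch a toy logistic model over a 4-dimensional "embedding" stands in for it, and all weights and data are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(x, y, w, b):
    """Cross-entropy loss of a logistic model on a single example."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """Adversarial example: step of size eps along the sign of the loss gradient."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(loss)/dx for the logistic model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0
x, y = rng.normal(size=4), 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)
# x_adv stays within eps of x in every coordinate yet has a higher loss;
# adversarial training mixes such perturbed examples into the training batches.
```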
Regularization techniques based on the least squares method
Kubínová, Marie ; Hnětynková, Iveta (advisor)
Title: Regularization Techniques Based on the Least Squares Method Author: Marie Michenková Department: Department of Numerical Mathematics Supervisor: RNDr. Iveta Hnětynková, Ph.D. Abstract: In this thesis we consider a linear inverse problem Ax ≈ b, where A is a linear operator with a smoothing property and b represents an observation vector polluted by unknown noise. It was shown in [Hnětynková, Plešinger, Strakoš, 2009] that high-frequency noise is revealed during the Golub-Kahan iterative bidiagonalization in the left bidiagonalization vectors. We propose a method that identifies the iteration with maximal noise revealing and reduces a portion of the high-frequency noise in the data by subtracting the corresponding (properly scaled) left bidiagonalization vector from b. This method is tested for different types of noise. Further, Hnětynková, Plešinger, and Strakoš provided an estimator of the noise level in the data. We propose a modification of this estimator based on knowledge of the point of noise revealing. Keywords: ill-posed problems, regularization, Golub-Kahan iterative bidiagonalization, noise revealing, noise estimate, denoising
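The Golub-Kahan iterative bidiagonalization at the core of the abstract can be sketched as follows; the test matrix is an illustrative assumption, and the thesis's noise-revealing analysis is only indicated in comments, not reproduced.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.

    Returns U (m x (k+1)) and V (n x k) with orthonormal columns and the
    bidiagonal coefficients alpha, beta satisfying A @ V = U @ B. The left
    vectors U[:, j] are where, per [Hnetynkova, Plesinger, Strakos, 2009],
    high-frequency noise from b is gradually revealed.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= beta[j] * V[:, j - 1]
        alpha[j] = np.linalg.norm(v)
        V[:, j] = v / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / beta[j + 1]
    return U, V, alpha, beta

# Toy run; the thesis would subtract a properly scaled left vector from b
# at the iteration of maximal noise revealing.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 20))
b = rng.normal(size=30)
U, V, alpha, beta = golub_kahan(A, b, k=5)
```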
Regularizační metody založené na metodách nejmenších čtverců
Michenková, Marie ; Hnětynková, Iveta (advisor) ; Zítko, Jan (referee)
Title: Regularization Techniques Based on the Least Squares Method Author: Marie Michenková Department: Department of Numerical Mathematics Supervisor: RNDr. Iveta Hnětynková, Ph.D. Abstract: In this thesis we consider a linear inverse problem Ax ≈ b, where A is a linear operator with a smoothing property and b represents an observation vector polluted by unknown noise. It was shown in [Hnětynková, Plešinger, Strakoš, 2009] that high-frequency noise is revealed during the Golub-Kahan iterative bidiagonalization in the left bidiagonalization vectors. We propose a method that identifies the iteration with maximal noise revealing and reduces a portion of the high-frequency noise in the data by subtracting the corresponding (properly scaled) left bidiagonalization vector from b. This method is tested for different types of noise. Further, Hnětynková, Plešinger, and Strakoš provided an estimator of the noise level in the data. We propose a modification of this estimator based on knowledge of the point of noise revealing. Keywords: ill-posed problems, regularization, Golub-Kahan iterative bidiagonalization, noise revealing, noise estimate, denoising
Blind Image Deconvolution of Electron Microscopy Images
Schlorová, Hana ; Odstrčilík, Jan (referee) ; Walek, Petr (advisor)
In recent years, blind deconvolution methods have spread into a wide range of technical and scientific fields, especially now that they are no longer computationally prohibitive. Signal processing techniques based on blind deconvolution promise to improve the quality of results obtained by electron microscope imaging. The main task of this thesis is to formulate the problem of blind deconvolution of electron microscope images and to find a suitable solution, implement it, and compare it with the available function of the Matlab Image Processing Toolbox. The overall goal is thus to create, in the Matlab environment, an algorithm correcting the defects introduced by the imaging process. The proposed approach is based on regularization techniques for blind deconvolution.
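The regularized-deconvolution idea behind the thesis can be illustrated in miniature. The thesis works in Matlab with 2-D microscope images and a blind variant that also estimates the PSF; this numpy sketch is only the non-blind 1-D core, a damped (Wiener-style) inverse filter with a known PSF. The signal, PSF, and damping weight are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, lam):
    """Regularized inverse filter H* / (|H|^2 + lam) in the Fourier domain."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    G = np.fft.fft(blurred)
    return np.real(np.fft.ifft(np.conj(H) * G / (np.abs(H) ** 2 + lam)))

# A step-like object blurred by a circular Gaussian PSF.
n = 128
t = np.arange(n)
x = np.zeros(n)
x[40:70] = 1.0
dist = np.minimum(t, n - t)                  # circular distance from index 0
psf = np.exp(-0.5 * (dist / 1.5) ** 2)
psf /= psf.sum()
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))   # blurred signal
x_rec = wiener_deconvolve(y, psf, lam=1e-4)
# The damped inversion recovers the edges much better than the blurred signal,
# while lam keeps the near-zero frequencies of H from amplifying noise.
```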
Regularization of Illegal Migration as a Policy-making Process
Dumont, Anna ; Novotný, Vilém (advisor) ; Jelínková, Marie (referee)
'Regularization of Illegal Migration as a Policy-making Process' deals with the social problem of the high number of irregular migrants in the Czech Republic and with regularization as a tool that could help reduce it. Regularization is treated as a political process, described theoretically using the Advocacy Coalition Framework. Rather than describing the problem itself, this thesis seeks normative definitions of two coalitions that hold different beliefs and two different points of view. The work is partly designed as a case study in which the theory is applied to the issue of regularization; in part it also explains regularization as well as the Advocacy Coalition Framework. The thesis defines two coalitions within the subsystem: a pro-regularization and an anti-regularization coalition. Each coalition has its deep core and policy core beliefs, which determine its relationship to the topic as well as the relationship between the coalitions themselves. In conclusion, the author summarizes the information about the coalitions and their beliefs in three comparative tables in which their approaches can be compared. The last part also contains a chapter on the changes of beliefs and policies, which introduces two policies: the system of voluntary return and that...
Modifications of stochastic objects
Kadlec, Karel ; Štěpán, Josef (advisor) ; Dostál, Petr (referee)
In this thesis, we are concerned with modifications of stochastic processes and of random probability measures. The first chapter is devoted to modifications of a stochastic process with trajectories in the space of continuous functions, modifications of a submartingale with trajectories in the set of right-continuous functions with finite left-hand limits, and separable modifications of a stochastic process. The second chapter focuses on the regularization of a random probability measure into a Markov kernel. In particular, we work with random probability measures on a Borel subset of a Polish space, or on a Radon separable topological space.
Implementation of restoring method for reading bar code
Kadlčík, Libor ; Bartušek, Karel (referee) ; Mikulka, Jan (advisor)
A bar code stores information in the form of a series of bars and gaps of various widths, and can therefore be considered an example of a bilevel (square) signal. Magnetic bar codes are created by applying slightly ferromagnetic material to a substrate. Sensing is done by a reading oscillator whose frequency is modulated by the presence of the ferromagnetic material; the signal from the oscillator is then subjected to frequency demodulation. Due to the temperature drift of the reading oscillator, the demodulated signal is accompanied by a DC drift. A method for removing this drift is introduced, and drift-insensitive detection of the presence of a bar code is also described. Reading bar codes is complicated by convolutional distortion, which results from the spatially dispersed sensitivity of the sensor. The effect of convolutional distortion is analogous to low-pass filtering: edges are smoothed and overlap, making their detection difficult. The characteristics of the convolutional distortion can be summarized in a point-spread function (PSF). In the case of magnetic bar codes, the shape of the PSF can be known in advance, but not its width or DC transfer; methods for estimating these parameters are discussed. The signal must be reconstructed (into its original bilevel form) before decoding can take place. Variational methods provide an effective way: their core idea is to reformulate reconstruction as an optimization problem of functional minimization. The functional can be extended by other functionals (regularizations) to considerably improve the results of reconstruction. The principle of variational methods is shown, including examples of the use of various regularizations. All algorithms and methods (including the frequency demodulation of the signal from the reading oscillator) are digital.
They are implemented as a program for a microcontroller from the PIC32 family, which offers enough computing power that even blind deconvolution (when the real PSF also needs to be found) can be finished in a few seconds. The microcontroller is part of a magnetic bar code reader whose hardware allows the read information to be transferred to a personal computer via the PS/2 interface or USB (by emulating key presses on a virtual keyboard), or shown on a display.
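The variational approach the abstract describes, a data-fidelity term plus a regularization functional minimized numerically, can be sketched for a 1-D bilevel signal. The smoothed total-variation penalty, PSF, and all weights below are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def tv_deblur(y, h, lam=0.05, eps=1e-2, step=0.2, iters=1500):
    """Minimize 0.5*||h*x - y||^2 + lam*sum(sqrt(dx^2 + eps)) by gradient descent.

    h*x is circular convolution; the second term is a smoothed total-variation
    regularizer that favors piecewise-constant (bilevel) reconstructions.
    """
    n = len(y)
    H = np.fft.fft(h, n)
    x = y.copy()
    for _ in range(iters):
        r = np.real(np.fft.ifft(H * np.fft.fft(x))) - y        # residual h*x - y
        g_data = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(r)))
        d = np.diff(x, append=x[0])                            # circular differences
        w = d / np.sqrt(d * d + eps)                           # phi'(d), smoothed sign
        g_tv = -np.diff(w, prepend=w[-1])
        x = x - step * (g_data + lam * g_tv)
    return x

# Synthetic "bar code": a bilevel signal blurred by a Gaussian PSF.
n = 128
t = np.arange(n)
x_true = np.zeros(n)
for start, width in ((10, 8), (30, 4), (50, 12), (80, 6), (100, 10)):
    x_true[start:start + width] = 1.0
dist = np.minimum(t, n - t)
psf = np.exp(-0.5 * (dist / 1.5) ** 2)
psf /= psf.sum()
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(psf)))
x_rec = tv_deblur(y, psf)
# The TV-regularized reconstruction is sharper (closer to x_true) than y;
# thresholding x_rec at 0.5 would then yield the decodable bilevel signal.
```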
Napoleon's theorem
MRÁZ, Luděk
The aim of this diploma thesis, called 'Napoleon's Theorem', is a detailed study of that theorem, including a description of the process of so-called 'regularization'. In its investigation of Napoleon's theorem, the thesis covers a number of proofs and properties, and then their generalizations in the plane and in space. Pictures that can help the reader understand the problem are included in the thesis.
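Napoleon's theorem itself is easy to verify numerically: erect equilateral triangles (consistently oriented) on the sides of any triangle and take their centroids; those three centroids form an equilateral triangle. A sketch using complex numbers for points in the plane, with an arbitrary sample triangle:

```python
import numpy as np

def napoleon_centroids(a, b, c):
    """Centroids of equilateral triangles erected consistently on the sides.

    Points are complex numbers; the apex over a directed side p -> q is
    obtained by rotating q around p by -60 degrees.
    """
    rot = np.exp(-1j * np.pi / 3)
    cents = []
    for p, q in ((a, b), (b, c), (c, a)):
        apex = p + (q - p) * rot
        cents.append((p + q + apex) / 3)
    return cents

a, b, c = 0 + 0j, 4 + 0j, 1 + 3j        # an arbitrary scalene triangle
m1, m2, m3 = napoleon_centroids(a, b, c)
sides = sorted(abs(s) for s in (m2 - m1, m3 - m2, m1 - m3))
# Napoleon's theorem: all three side lengths of the centroid triangle agree.
```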
Economics of Biased Estimation
Drvoštěp, Tomáš ; Špecián, Petr (advisor) ; Tříska, Dušan (referee)
This thesis investigates the optimality of heuristic forecasting. According to Goldstein and Gigerenzer (2009), heuristics can be viewed as predictive models whose simplicity exploits the bias-variance trade-off. Economic agents learning in the context of rational expectations (Marcet and Sargent, 1989) employ, on the contrary, complex models of the whole economy. Both of these approaches can be perceived as an optimal response to the complexity of the prediction task and the availability of observations. This work introduces a straightforward extension of the standard model of decision making under uncertainty, in which agents' utility depends on the accuracy of their predictions and model complexity is moderated by a regularization parameter. Results of Monte Carlo simulations reveal that in complicated environments where few observations are available, it is beneficial to construct simple models resembling heuristics. Unbiased models are preferred in more favorable conditions.
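The bias-variance argument can be made concrete with a small Monte Carlo experiment: with few noisy observations, a strongly regularized ("heuristic-like") ridge predictor beats a (nearly) unbiased least-squares fit out of sample. All dimensions, noise levels, and penalties below are illustrative assumptions, not the thesis's simulation design.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: minimize ||X w - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def avg_test_mse(lam, n_train=15, d=10, noise=2.0, reps=300, seed=0):
    """Average out-of-sample MSE of ridge with penalty lam over many draws."""
    rng = np.random.default_rng(seed)
    beta = rng.normal(size=d)               # fixed "true" environment
    errs = []
    for _ in range(reps):
        X = rng.normal(size=(n_train, d))
        y = X @ beta + noise * rng.normal(size=n_train)
        w = ridge_fit(X, y, lam)
        Xt = rng.normal(size=(500, d))
        yt = Xt @ beta + noise * rng.normal(size=500)
        errs.append(np.mean((Xt @ w - yt) ** 2))
    return np.mean(errs)

mse_unbiased = avg_test_mse(lam=1e-8)   # (nearly) unbiased least squares
mse_simple = avg_test_mse(lam=10.0)     # strongly regularized, biased model
# With only 15 observations for 10 parameters, the biased model predicts better.
```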
