National Repository of Grey Literature: 117 records found (showing records 21-30)
Small sample asymptotics
Tomasy, Tomáš ; Sabolová, Radka (advisor) ; Omelka, Marek (referee)
In this thesis we study small sample asymptotics. We introduce the saddlepoint approximation, which is used to approximate the density of an estimator. To derive this method we need some basic notions from probability and statistics, for example the central limit theorem and M-estimators, which are presented in the first chapter. In the practical part of this work we apply the theoretical background to the given M-estimators and a selected distribution. We also apply the central limit theorem to our estimators and compare it with the small sample asymptotics. At the end we present and summarize the calculated results.
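For reference, the saddlepoint approximation to the density of a sample mean in its standard textbook form (this is the classical formula, not necessarily the exact notation or setting used in the thesis):

```latex
% Saddlepoint approximation to the density of the mean \bar{X}_n of n i.i.d.
% observations with cumulant generating function K(s) = \log E\, e^{sX}:
f_{\bar{X}_n}(x) \approx \sqrt{\frac{n}{2\pi K''(\hat{s})}}
  \exp\!\bigl\{ n\bigl[K(\hat{s}) - \hat{s}\,x\bigr] \bigr\},
\qquad \text{where } \hat{s} \text{ solves } K'(\hat{s}) = x .
```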
Kelly criterion in portfolio selection problems
Dorová, Bianka ; Kopa, Miloš (advisor) ; Omelka, Marek (referee)
In the present work we study portfolio optimization problems. The introduction is followed by Chapter 2, where we introduce the concept of a utility function and its relationship to the investor's risk attitude. To solve the optimization problem we consider the Markowitz portfolio optimization model and the Kelly criterion, which are recalled in the fourth and fifth chapters. The work also contains an extensive numerical study. Using the optimization software GAMS we solve portfolio optimization problems, both with and without short sales allowed. We compare the obtained portfolios and discuss whether the Kelly optimal portfolio is a special case of the Markowitz optimal portfolio for a particular value of the minimum expected return.
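The thesis itself solves these problems in GAMS; the following minimal Python sketch, on hypothetical scenario data, only illustrates the two formulations being compared: the Kelly portfolio maximizes expected log wealth, while the Markowitz portfolio minimizes variance subject to a minimum expected return, here both without short sales.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scenario returns: rows = equally likely scenarios, columns = assets.
R = np.array([[0.05, 0.01, 0.10],
              [0.02, 0.03, -0.04],
              [-0.03, 0.02, 0.07],
              [0.04, 0.00, 0.01]])
mu, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)
n_assets = R.shape[1]
w0 = np.full(n_assets, 1.0 / n_assets)
budget = {'type': 'eq', 'fun': lambda w: w.sum() - 1.0}
no_short = [(0.0, 1.0)] * n_assets            # drop the bounds to allow short sales

# Kelly criterion: maximize the expected log of terminal wealth.
kelly = minimize(lambda w: -np.mean(np.log1p(R @ w)), w0,
                 bounds=no_short, constraints=[budget])

# Markowitz: minimize portfolio variance subject to a minimum expected return r_min.
r_min = 0.02
markowitz = minimize(lambda w: w @ Sigma @ w, w0, bounds=no_short,
                     constraints=[budget,
                                  {'type': 'ineq', 'fun': lambda w: w @ mu - r_min}])

print("Kelly weights:    ", kelly.x.round(3))
print("Markowitz weights:", markowitz.x.round(3))
```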
Statistical applications of urn models
Navrátil, Radim ; Pawlas, Zbyněk (advisor) ; Omelka, Marek (referee)
This work shows various applications of urn models in practice. First, the basic properties of the occupancy distribution are derived together with its asymptotic approximation. This model is then applied and generalized in the theory of database systems for searching records in a given database. An application to random texts is also presented, namely the computation of the expected number of missing and common words in random texts; exact formulas are given together with their asymptotic approximations and approximations via the occupancy distribution. Next, some urn models used in randomized response theory for eliciting respondents' answers to sensitive questions are described; these models are compared according to their accuracy and respondents' willingness to answer. Finally, two non-parametric empty-box tests are derived, one for the hypothesis that a random sample comes from a given population and the other for the hypothesis that two independent random samples come from the same population. The powers of these tests are compared with commonly used tests for these hypotheses.
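As a small illustration of the classical occupancy quantities referred to above, a Python sketch of the exact expected number of empty boxes when m balls are thrown uniformly into n boxes, the usual asymptotic approximation, and the exact occupancy probabilities by inclusion-exclusion (the notation m, n and the example values are mine, not taken from the thesis):

```python
import math

def expected_empty_exact(n_boxes: int, m_balls: int) -> float:
    """Exact expectation of the number of empty boxes: n * (1 - 1/n)^m."""
    return n_boxes * (1.0 - 1.0 / n_boxes) ** m_balls

def expected_empty_approx(n_boxes: int, m_balls: int) -> float:
    """Asymptotic approximation n * exp(-m/n), accurate for large n."""
    return n_boxes * math.exp(-m_balls / n_boxes)

def prob_exactly_k_empty(n_boxes: int, m_balls: int, k: int) -> float:
    """Exact occupancy probability P(exactly k boxes empty), by inclusion-exclusion."""
    total = 0.0
    for j in range(n_boxes - k + 1):
        total += ((-1) ** j * math.comb(n_boxes - k, j)
                  * (1.0 - (k + j) / n_boxes) ** m_balls)
    return math.comb(n_boxes, k) * total

if __name__ == "__main__":
    n, m = 50, 120
    print(expected_empty_exact(n, m), expected_empty_approx(n, m))
    print(sum(prob_exactly_k_empty(n, m, k) for k in range(n + 1)))  # should be ~1
```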
Methods of artificial intelligence and their use in prediction
Šerý, Lubomír ; Omelka, Marek (advisor) ; Krtek, Jiří (referee)
Title: Methods of artificial intelligence and their use in prediction Author: Lubomír Šerý Department: Department of Probability and Mathematical Statistics Supervisor: Ing. Marek Omelka, Ph.D., Department of Probability and Mathematical Statistics Abstract: In the presented thesis we study the field of artificial intelligence, in particular the part dedicated to artificial neural networks. At the beginning, the concept of artificial neural networks is introduced and compared to its biological basis. Afterwards, we also compare neural networks to some generalized linear models. One of the main problems of neural networks is their learning; therefore the largest part of this work is dedicated to learning algorithms, especially to parameter estimation and specific computational aspects. In this part we attempt to give an overview of the internal structure of a neural network and to propose an enhancement of the learning algorithm. There are many techniques for enhancing and enriching the basic model of neural networks; some of these improvements are, together with genetic algorithms, introduced at the end of this work. At the very end of this work simulations are presented, in which we attempt to verify some of the introduced theoretical assumptions and conclusions. The main simulation is an application of the concept of neural...
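As a deliberately minimal illustration of the parameter-estimation problem mentioned above, the following Python sketch trains a one-hidden-layer network by plain gradient descent with backpropagation on toy data; it is the basic scheme such work starts from, not the enhanced algorithm proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# One hidden layer with tanh activation, linear output.
n_hidden, lr = 10, 0.01
W1 = rng.standard_normal((1, n_hidden)) * 0.5
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.5
b2 = np.zeros(1)

for epoch in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)              # hidden activations
    y_hat = H @ W2 + b2                   # network output
    err = y_hat - y                       # residuals for squared-error loss

    # Backward pass: gradients of the mean squared error.
    grad_W2 = H.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)    # derivative of tanh
    grad_W1 = X.T @ dH / len(X)
    grad_b1 = dH.mean(axis=0)

    # Gradient-descent update of all parameters.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final MSE:", float((err ** 2).mean()))
```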
Distance-based testing
Solnický, Radek ; Omelka, Marek (advisor) ; Komárek, Arnošt (referee)
When analyzing ecological data, traditional multivariate techniques are often considered unsuitable. The use of dissimilarity coefficients and distance matrices is one way to address this problem. In this work we present some of these coefficients and several distance-based tests: the Mantel test, several versions of the ANOSIM and MRPP tests, and a distance-based test for homogeneity of multivariate dispersions. We focus on the relationships among these tests and illustrate their use with an example. We also discuss the difficulties of interpreting the results of these tests.
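A minimal Python sketch of the Mantel test named above, under the usual permutation scheme (the toy data and number of permutations are placeholders): the statistic is the Pearson correlation between the off-diagonal entries of two distance matrices, and significance is assessed by permuting the objects of one matrix.

```python
import numpy as np

def mantel_test(D1, D2, n_perm=999, seed=None):
    """Permutation Mantel test for association between two distance matrices."""
    rng = np.random.default_rng(seed)
    n = D1.shape[0]
    iu = np.triu_indices(n, k=1)                      # upper-triangle entries only

    def corr(A, B):
        return np.corrcoef(A[iu], B[iu])[0, 1]

    observed = corr(D1, D2)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        if corr(D1, D2[np.ix_(p, p)]) >= observed:    # permute rows and columns together
            count += 1
    p_value = (count + 1) / (n_perm + 1)
    return observed, p_value

# Toy example: distances between 6 points in two correlated configurations.
rng0 = np.random.default_rng(1)
pts = rng0.standard_normal((6, 2))
pts2 = pts + 0.3 * rng0.standard_normal((6, 2))       # perturbed configuration
D1 = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
D2 = np.linalg.norm(pts2[:, None] - pts2[None, :], axis=-1)
print(mantel_test(D1, D2))
```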
Multivariate extreme value theory
Šiklová, Renata ; Mazurová, Lucie (advisor) ; Omelka, Marek (referee)
In this thesis we elaborate on multivariate extreme value modelling and its related practical and theoretical aspects. We mainly focus on dependence models, extreme value copulas in particular. Extreme value copulas effectively unify the univariate extreme value theory and the copula framework in a single view. We familiarize ourselves with both of them in the first two chapters, which present the generalized extreme value distribution, the generalized Pareto distribution and the Archimedean copulas that are suitable for describing multivariate maxima and threshold exceedances. These two topics are addressed in detail in the third chapter. Given the rather practical focus of this thesis, we examine the methods of data analysis extensively. Furthermore, we employ these methods in a comprehensive case study that aims to reveal the importance of applying extreme value theory in catastrophe insurance.
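For reference, two of the building blocks named above in their standard textbook forms (standard notation, not necessarily that of the thesis): the generalized extreme value distribution and the Gumbel-Hougaard copula, the most common example of an extreme value copula that is also Archimedean.

```latex
% Generalized extreme value (GEV) distribution, shape \xi, location \mu, scale \sigma > 0:
G(x) = \exp\!\Bigl\{ -\Bigl[\, 1 + \xi \,\frac{x - \mu}{\sigma} \Bigr]_{+}^{-1/\xi} \Bigr\},
\qquad \text{with } G(x) = \exp\bigl\{-e^{-(x-\mu)/\sigma}\bigr\} \text{ for } \xi = 0 .

% Gumbel--Hougaard extreme value copula with dependence parameter \theta \ge 1:
C_\theta(u, v) = \exp\!\Bigl\{ -\bigl[ (-\log u)^{\theta} + (-\log v)^{\theta} \bigr]^{1/\theta} \Bigr\}.
```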
Applications of EM-algorithm
Komora, Antonín ; Omelka, Marek (advisor) ; Kulich, Michal (referee)
The EM algorithm is a very valuable tool for solving statistical problems in which the presented data are incomplete. It is an iterative algorithm which, in its first step, estimates the missing data based on the parameter estimate from the last iteration and the observed data, using the conditional expectation. In the second step it uses maximum likelihood estimation to find the value that maximizes the log-likelihood function and passes it along to the next iteration. This is repeated until the increment of the log-likelihood function is small enough to stop the algorithm without significant error. A very important property of this algorithm is that it converges monotonically, and it does so under fairly general conditions. However, the convergence itself is not very fast and therefore at times requires a great number of iterations.
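A minimal Python sketch of the scheme just described, for the textbook case of a two-component univariate Gaussian mixture (the model choice, initialization and stopping tolerance are mine, for illustration only): the E-step computes conditional expectations of the missing component labels, the M-step maximizes the resulting expected log-likelihood in closed form, and iteration stops once the log-likelihood increment is small.

```python
import numpy as np

def em_gaussian_mixture(x, tol=1e-8, max_iter=500, seed=None):
    """EM algorithm for a two-component univariate Gaussian mixture."""
    rng = np.random.default_rng(seed)
    # Initial parameter estimates: mixing weight, means, variances.
    pi, mu, var = 0.5, rng.choice(x, size=2, replace=False), np.array([x.var(), x.var()])
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: responsibilities = conditional expectations of the component labels.
        dens = np.array([np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                         / np.sqrt(2 * np.pi * var[k]) for k in range(2)])
        weighted = np.array([pi, 1 - pi])[:, None] * dens
        resp = weighted / weighted.sum(axis=0)
        ll = np.log(weighted.sum(axis=0)).sum()       # current log-likelihood

        # M-step: maximize the expected log-likelihood (closed form for a mixture).
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk
        pi = nk[0] / len(x)

        # Stop once the log-likelihood increment becomes negligible.
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pi, mu, var, ll

# Toy data: mixture of N(0, 1) and N(4, 1).
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])
print(em_gaussian_mixture(x, seed=3))
```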
