
An Application of Quantile Functions in Probability Model Constructions of Wage Distributions
Pavelka, Roman ; Kahounová, Jana (advisor) ; Vrabec, Michal (referee) ; Pacáková, Viera (referee)
Over the years 1995 to 2008, the Average Earnings Information System, under the professional supervision of the Ministry of Labour and Social Affairs of the Czech Republic, collected wage and personal data on individual employees. Because this statistical survey gathers wage and personal data for specific employed persons, it is possible to obtain a wage distribution, i.e. a description of how wages are spread among individual employees. The values that wages take over the whole wage interval are not deterministic; they result from the interaction of many random influences. Owing to this randomness, the wage must be treated as a random quantity with a probability density function, and the spread of wages across all labor market segments is described by a wage distribution. Even though high-income employees form an evidently small category, their incomes markedly affect the reported average wage level and, in particular, the variability of the whole wage file. Wage data sets are therefore characterized by an average wage exceeding the wages of the majority of employees and by high variability due to great wage heterogeneity. Under such heterogeneity, the usual approach of fitting earnings with a single chosen distribution function or probability density function fails. This leads to the idea of applying a quantile approach to the statistical modeling, i.e. modeling the earnings distribution with an appropriate inverse distribution function. Probability modeling with generalized or compound forms of quantile functions makes it possible to better characterize a wage distribution, which is marked by high asymmetry and wage heterogeneity. The application of an inverse distribution function as a probability model of a wage distribution can be expressed as a distributional mixture over partial groups of employees.
Each component distribution of this mixture model corresponds to a group of employees with greater homogeneity of earnings. The partial employee subsets differ in the parameters of their component densities and in the shares of those densities in the total wage distribution of the wage file.
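The mixture construction can be sketched with inverse-transform sampling: pick a component group by its share, then apply that component's quantile function to a uniform draw. The thesis's actual component distributions are not reproduced here; the two lognormal components, their (mu, sigma) parameters, and the 90/10 shares below are purely illustrative assumptions.

```python
import math
import random
from statistics import NormalDist

random.seed(0)

def lognormal_quantile(p, mu, sigma):
    """Inverse distribution (quantile) function of the lognormal distribution."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

def sample_wage(shares, params, rng=random):
    """Draw one wage from the mixture: choose a component by its share,
    then apply that component's quantile function to a uniform draw."""
    group = rng.choices(range(len(shares)), weights=shares)[0]
    mu, sigma = params[group]
    return lognormal_quantile(rng.random(), mu, sigma)

# Hypothetical parameters: a large majority group and a small high-income group
shares = [0.9, 0.1]
params = [(10.0, 0.35), (11.0, 0.6)]  # (mu, sigma) of each lognormal component

wages = [sample_wage(shares, params) for _ in range(100_000)]
mean_wage = sum(wages) / len(wages)
median_wage = sorted(wages)[len(wages) // 2]
# The small high-income group pulls the mean above the median,
# as the abstract describes for real wage files.
```

Even with only a 10% share, the second component lifts the sample mean well above the median, reproducing the asymmetry the abstract attributes to wage files.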


Reserve Risk Modeling in Non-life Insurance Based on Non-aggregated Data
Zimmermann, Pavel ; Kahounová, Jana (advisor) ; Cipra, Tomáš (referee) ; Jedlička, Petr (referee)
Recently, the field of actuarial mathematics has experienced large development due to a significant increase in demand for insurance and financial risk quantification, driven by the start of the implementation of a complex of rules for international reporting standards (IFRS) and solvency reporting (Solvency II). It appears that the key question for solvency measurement is the determination of the probability distribution of the future cash flows of an insurance company. Solvency is then reported through an appropriate risk measure based, e.g., on a percentile of this distribution. While currently popular models are based solely on aggregated data (such as total loss development over a certain time period), the main objective of this work is to examine the possibilities of modeling the reserve risk (i.e., roughly speaking, the distribution of the ultimate incurred value of claims that have already occurred in the past) based directly on individual claims. Such models have not yet become popular, and to the author's knowledge an overview of them has not been published previously. The assumptions and specifications of the already published models were compared with practical experience, and some inadequacies were pointed out. Furthermore, a new reserve risk model was constructed, which is believed to have practically more suitable assumptions and properties than the existing models. Theoretical aspects of the new model were studied, and the distribution of the ultimate incurred value (the modeled variable) was derived. Emphasis was also put on practical aspects of the developed model and its applicability in industrial use. Therefore, some restrictive assumptions, which might be considered realistic in a variety of practical cases and which lead to a significant simplification of the model, were identified throughout the work. Furthermore, algorithms reducing the number of necessary calculations were developed.
In the last chapters of the work, effort was devoted to methods of estimating the considered parameters while respecting practical limitations (such as observations missing at the time of modeling). For this purpose, survival analysis was (among other methods) applied.
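The individual-claims view of reserve risk can be illustrated with a small Monte Carlo sketch. This is not the thesis's model: the open-claim amounts, development-factor distribution, reporting probability, and severity parameters below are all hypothetical, chosen only to show how simulating individual claims yields a distribution of the ultimate incurred value, from which a percentile-based risk measure is read off.

```python
import random

random.seed(1)

def one_scenario():
    """One simulated ultimate incurred value built from individual claims."""
    # Claims already reported: hypothetical current incurred values (thousands)
    open_claims = [120.0, 45.0, 300.0, 80.0]
    # Each open claim develops further by a random multiplicative factor
    developed = sum(c * random.lognormvariate(0.0, 0.15) for c in open_claims)
    # IBNR: claims that have occurred but are not yet reported
    # (binomial draw over 20 hypothetical exposure units, 15% reporting lag each)
    n_ibnr = sum(1 for _ in range(20) if random.random() < 0.15)
    ibnr_total = sum(random.lognormvariate(4.0, 0.8) for _ in range(n_ibnr))
    return developed + ibnr_total

ultimates = sorted(one_scenario() for _ in range(50_000))
best_estimate = sum(ultimates) / len(ultimates)
var_995 = ultimates[int(0.995 * len(ultimates))]  # 99.5th-percentile risk measure
```

The spread of `ultimates` around `best_estimate` is the reserve risk; the 99.5th percentile is one example of the percentile-based risk measures mentioned in the abstract.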


Extreme Value Theory in Operational Risk Management
Vojtěch, Jan ; Kahounová, Jana (advisor) ; Řezanková, Hana (referee) ; Orsáková, Martina (referee)
Currently, financial institutions are required to analyze and quantify a new type of banking risk, known as operational risk, to which they are exposed in their everyday activities. The main objective of this work is to construct an acceptable statistical model for computing the capital requirement. Such a model must respect the specificity of losses arising from operational risk events. The fundamental task is the search for a suitable distribution describing the probabilistic behavior of losses arising from this type of risk. Strong use is made of the Pickands–Balkema–de Haan theorem of extreme value theory: roughly speaking, the distribution of a random variable's exceedances over a given high threshold converges in distribution to the generalized Pareto distribution. The theorem is subsequently used in estimating a high percentile of a simulated distribution. The simulated distribution is a compound model for the aggregate loss random variable, constructed by combining a frequency distribution for the random number of losses with a so-called severity distribution for the individual loss random variable. The proposed model is then used to estimate a final quantile, which represents the sought amount of the capital requirement. This capital requirement is the amount of funds the bank is supposed to retain in order to cover the projected shortfall; the probability that the capital charge will be exceeded is given and is commonly quite small. Although combining a frequency distribution with a severity distribution is the common way to deal with the described problem, the final application is often problematic. In particular, the severity distribution is sometimes built as a combination of two or three distributions, for instance lognormal distributions with different location and scale parameters.
Models like these usually lack any theoretical background, and in particular the connecting of the distribution functions has often not been carried out properly. In this work, we deal with both problems. In addition, maximum likelihood estimates are derived for a lognormal distribution satisfying F_LN(u) = p, where u and p are given. The results achieved can be used in the everyday practice of financial institutions for operational risk quantification. They can also be used for the analysis of a variety of sample data with so-called heavy tails, where standard distributions do not offer any help. As an integral part of this work, a CD is included with the source code of every function used in the model. All of these functions were created in the statistical programming language of the S-PLUS software. The fourth annex contains a complete description of each function, its purpose, and its general syntax for possible use in solving different kinds of problems.
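The compound model described above, together with a lognormal body constrained by F_LN(u) = p, can be sketched as follows. All numerical parameters are hypothetical: the splice of a lognormal body with a GPD tail above the threshold u follows the structure the abstract describes, not the thesis's fitted model, and only the location parameter is calibrated here (via (ln u - mu)/sigma = z_p, so mu = ln u - sigma * z_p) rather than by full maximum likelihood.

```python
import math
import random
from statistics import NormalDist

random.seed(7)

# Hypothetical parameters (the thesis's fitted values are not reproduced here)
THRESHOLD = 100.0     # u: losses above this are modelled by the GPD tail
P_BODY = 0.95         # p: probability mass below u, so F_LN(u) = p
SIGMA = 1.2           # assumed lognormal scale parameter
XI, BETA = 0.4, 60.0  # GPD shape and scale for exceedances over u
LAMBDA = 25           # mean number of losses per period (Poisson frequency)

# Calibrate the lognormal location so that F_LN(u) = p:
z_p = NormalDist().inv_cdf(P_BODY)
MU = math.log(THRESHOLD) - SIGMA * z_p

def poisson(lam):
    """Poisson sampler (Knuth's method, adequate for moderate lambda)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < limit:
            return k
        k += 1

def severity():
    """Spliced severity: lognormal body below u, GPD exceedances above u."""
    if random.random() < P_BODY:
        # Lognormal conditioned on staying below u (simple rejection step)
        while True:
            x = random.lognormvariate(MU, SIGMA)
            if x <= THRESHOLD:
                return x
    # GPD quantile function applied to a uniform draw
    v = random.random()
    return THRESHOLD + BETA / XI * ((1.0 - v) ** -XI - 1.0)

def aggregate_loss():
    """Compound model: Poisson count of losses, each with the spliced severity."""
    return sum(severity() for _ in range(poisson(LAMBDA)))

losses = sorted(aggregate_loss() for _ in range(20_000))
capital_999 = losses[int(0.999 * len(losses))]  # e.g. a 99.9th-percentile charge
```

The high percentile of the simulated aggregate loss plays the role of the capital requirement; because the GPD tail is heavy (xi = 0.4), it lies far above the mean aggregate loss.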
