National Repository of Grey Literature
The use of coherent risk measures in operational risk modeling
Lebovič, Michal ; Teplý, Petr (advisor) ; Doležel, Pavel (referee)
The debate on quantitative operational risk modeling only started at the beginning of the last decade, and best practices are still far from established. Estimation of capital requirements for operational risk under the Advanced Measurement Approaches of Basel II depends critically on the choice of risk measure, which quantifies the risk exposure based on the underlying simulated distribution of losses. Despite its well-known caveats, Value-at-Risk remains the predominant risk measure used in the context of operational risk management. We describe several serious drawbacks of Value-at-Risk and explain why it can lead to misleading conclusions. As a remedy we suggest the use of coherent risk measures, namely the statistic known as Expected Shortfall, as a suitable alternative or complement for quantifying operational risk exposure. We demonstrate that the application of Expected Shortfall in operational loss modeling is feasible and produces reasonable and consistent results. We also consider a variety of statistical techniques for modeling the underlying loss distribution and find the extreme value theory framework the most suitable for this purpose. Using stress tests we further compare the robustness and consistency of selected models and their implied risk capital estimates...
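A minimal sketch of the two risk measures compared above, assuming Python with NumPy; the simulated loss distribution, parameters, and function name are illustrative rather than taken from the thesis. Value-at-Risk reads a quantile off the simulated loss distribution, while Expected Shortfall averages the losses beyond that quantile, which is what makes it sensitive to tail severity:

    import numpy as np

    def var_es(losses, alpha=0.99):
        # VaR: the alpha-quantile of the loss distribution.
        # ES: the mean of losses at or beyond the VaR threshold.
        losses = np.asarray(losses)
        var = np.quantile(losses, alpha)
        es = losses[losses >= var].mean()
        return var, es

    # Illustrative heavy-tailed operational losses (lognormal stand-in).
    rng = np.random.default_rng(0)
    simulated = rng.lognormal(mean=10.0, sigma=2.0, size=100_000)
    print(var_es(simulated))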
Two-stage backtesting of Value-at-Risk models
Matyáš, Jan ; Seidler, Jakub (advisor) ; Brechler, Josef (referee)
This paper deals with a comparative evaluation of various Value-at-Risk models in terms of their prediction accuracy. We use a two-stage backtesting procedure to find the most robust methodology in several respects. The backtesting framework comprises tests of the independence, unconditional coverage, and conditional coverage properties, followed by a successive stage that uses a loss function to compare the two models selected in the first part. Four European indices are taken to represent both developed countries (DAX, ATX) and developing countries (PX, WIG). The models are examined over the period from January 1997 to February 2014. The best performing model in our selection appears to be the historical method at the 99% confidence level. Neither the use of the stable distribution nor a lower confidence level produces satisfactory results.
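The unconditional coverage property mentioned above is conventionally tested with Kupiec's proportion-of-failures likelihood ratio; the sketch below, assuming Python with SciPy, shows that standard test rather than the thesis's own implementation:

    import numpy as np
    from scipy.stats import chi2

    def kupiec_pof(hits, p=0.01):
        # hits: boolean series of VaR breaches; p: nominal breach rate.
        # Assumes 0 < number of breaches < n (otherwise log(0) occurs).
        n, x = len(hits), int(np.sum(hits))
        pi = x / n  # observed breach rate
        lr = -2 * (x * np.log(p) + (n - x) * np.log(1 - p)
                   - x * np.log(pi) - (n - x) * np.log(1 - pi))
        return lr, chi2.sf(lr, df=1)  # LR statistic and p-value

    # Example: 55 breaches of a 99% VaR over 4,300 trading days.
    hits = np.zeros(4300, dtype=bool)
    hits[:55] = True
    print(kupiec_pof(hits))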
Backtesting of Different Scaling Rules for Value at Risk in the Basel Context
Klečka, Adam ; Krištoufek, Ladislav (advisor) ; Avdulaj, Krenar (referee)
There is a discrepancy between two important horizons for Value at Risk modelling in the Basel context: 10-day values are used to determine regulatory capital, but 1-day models are used for backtesting. The main objective of this thesis is to examine the suitability of the currently used Square Root of Time rule for Value at Risk scaling. We compare its performance with a method utilizing the Hurst exponent. Our analysis is performed for both the normal and the stable distribution. We conclude that the normality assumption and the Square Root of Time rule prevail under the regulatory parameters. The results of the Hurst exponent method are not favourable under normality. On the other hand, its performance under the stable distribution is quite satisfactory for non-Basel parameters, and the Hurst exponent complements this distribution very well. The use of the stable distribution together with the Hurst exponent method is therefore justified when dealing with complex non-linear instruments, during turbulent periods, or in a general non-Basel setting. In general, however, our results are strongly data-dependent and further evidence is needed for any conclusive implications. JEL Classification: G21, G28, C58, G32, C14, G18. Keywords: value at risk, backtesting, volatility scaling, Basel II, stable distribution, Hurst...
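The two scaling rules compared above reduce to a one-line formula: the Square Root of Time rule multiplies 1-day VaR by h^0.5 for an h-day horizon, while the Hurst-exponent method replaces the exponent 0.5 with an estimated H. A minimal sketch in Python, with illustrative numbers:

    def scale_var(var_1d, horizon=10, hurst=0.5):
        # hurst=0.5 recovers the Basel Square Root of Time rule;
        # hurst != 0.5 gives the Hurst-exponent scaling method.
        return var_1d * horizon ** hurst

    print(scale_var(1_000_000))              # sqrt-of-time: ~3.16 million
    print(scale_var(1_000_000, hurst=0.58))  # persistent (H > 0.5) series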
Value at Risk Calculation of the Czech Stock Portfolio Using Alternative Distributions
Hédl, Tomáš ; Gapko, Petr (advisor) ; Seidler, Jakub (referee)
The aim of this diploma thesis is to analyze ways of calculating Value at Risk. Its core is to find a model that most appropriately reflects the probability distribution of returns of the Czech stock portfolio we have constructed. We find that the returns follow the unbounded (SU) distribution first described by Johnson (1949). Since we detect that the returns are autocorrelated, we apply an appropriate autoregressive process to remove this dependency. In the empirical part we show that models based on the assumption of normality are unable to predict Value at Risk correctly. Historical simulation methods, despite promising backtesting results, are rejected because of their slow adaptation to recent changes in the market. However, we find a way to implement the Johnson SU distribution within a GARCH model. This model, which passes all the tests, is thus able to predict the Value at Risk of the portfolio most accurately. JEL Classification: C16, C22, G11. Keywords: Market risk, Value at Risk, Risk management, Johnson SU distribution
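A minimal sketch, assuming Python with SciPy, of the final step described above: fit a Johnson SU distribution to already-filtered standardized residuals and read the VaR off its quantile. The AR/GARCH filtering itself is omitted, and `residuals` is a fat-tailed stand-in, not the thesis's data:

    import numpy as np
    from scipy import stats

    # Stand-in for standardized AR/GARCH residuals (Student-t, fat tails).
    residuals = stats.t.rvs(df=5, size=2000, random_state=1)

    params = stats.johnsonsu.fit(residuals)        # (a, b, loc, scale) by MLE
    var_99 = -stats.johnsonsu.ppf(0.01, *params)   # 99% VaR in residual units
    print(var_99)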
Application of quantile autoregressive models in minimum Value at Risk and Conditional Value at Risk hedging
Svatoň, Michal ; Baruník, Jozef (advisor) ; Vošvrda, Miloslav (referee)
Futures contracts represent a suitable instrument for hedging. One consequence of their standardized nature is the presence of basis risk. In order to mitigate it, an agent might aim to minimize Value at Risk or Expected Shortfall. Among the numerous approaches to their modelling, CAViaR models, which build upon quantile regression, are appealing due to their limited set of assumptions and decent empirical performance. We propose alternative specifications of the CAViaR model, power and exponential CAViaR, and an alternative, flexible way of computing Expected Shortfall within the CAViaR framework, the Implied Expectile Level. Empirical analysis suggests that exponential CAViaR yields competitive results both in Value at Risk and Expected Shortfall modelling and in subsequent Value at Risk and Expected Shortfall hedging.
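For context, the sketch below, assuming Python with NumPy, implements the standard Symmetric Absolute Value CAViaR recursion of Engle and Manganelli (2004) on which such specifications build; the power and exponential variants proposed in the thesis modify this recursion and are not reproduced here:

    import numpy as np

    def sav_caviar(returns, beta, q0):
        # q_t = b0 + b1 * q_{t-1} + b2 * |r_{t-1}|, with beta = (b0, b1, b2).
        # For a lower-tail quantile, q_t is typically negative.
        b0, b1, b2 = beta
        q = np.empty(len(returns))
        q[0] = q0  # e.g. an empirical quantile over an initial window
        for t in range(1, len(returns)):
            q[t] = b0 + b1 * q[t - 1] + b2 * abs(returns[t - 1])
        return q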
