National Repository of Grey Literature : 57 records found (showing records 11 - 20)
Stochastic Dynamic Programming Problems: Theory and Applications.
Lendel, Gabriel ; Sladký, Karel (advisor) ; Lachout, Petr (referee)
Title: Stochastic Dynamic Programming Problems: Theory and Applications. Author: Gabriel Lendel. Department: Department of Probability and Mathematical Statistics. Supervisor: Ing. Karel Sladký, CSc. Supervisor's e-mail address: sladky@utia.cas.cz. Abstract: In the present work we study Markov decision processes, which provide a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision maker. We study iterative procedures for finding a policy that is optimal or nearly optimal with respect to the selected criteria. Specifically, we mainly examine the task of finding a policy that is optimal with respect to the total expected discounted reward or the average expected reward for discrete or continuous systems. We study policy iteration algorithms and approximative value iteration algorithms, and give a numerical analysis of specific problems. Keywords: stochastic dynamic programming, Markov decision process, policy iteration, value iteration
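The value iteration procedure mentioned in the abstract can be sketched for a small discounted MDP. The transition probabilities, rewards, and discount factor below are illustrative stand-ins, not data from the thesis:

```python
import numpy as np

# Toy discounted MDP: 2 states, 2 actions (illustrative data only).
P = np.array([  # P[a, s, s'] = transition probability under action a
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.5, 0.5], [0.9, 0.1]],
])
R = np.array([  # R[a, s] = one-stage expected reward
    [1.0, 0.0],
    [0.5, 2.0],
])
beta = 0.9      # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(1000):
    Q = R + beta * P @ V          # Q[a, s] = one-step lookahead values
    V_new = Q.max(axis=0)         # greedy improvement over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)         # greedy policy w.r.t. the converged values
```

Since the Bellman operator is a contraction with modulus beta, the iterates converge geometrically; the greedy policy extracted from the fixed point is optimal for this toy instance.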
Higher-order Markov chains and applications in econometrics
Straňáková, Alena ; Sladký, Karel (referee) ; Prášková, Zuzana (advisor)
In this paper, we generalize Raftery's Markov chain model to a higher-order multivariate Markov chain model. This model is more suitable for practical applications because of its smaller number of independent parameters. We propose a method for estimating the parameters of the model and apply it to measuring the credit risk of a portfolio, computing the Value at Risk and Expected Shortfall of the portfolio. The theoretical results are applied to real data.
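The two risk measures named in the abstract have simple empirical estimators. A minimal sketch, using a synthetic normal loss sample as a stand-in for the portfolio loss distribution produced by the paper's credit-risk model:

```python
import numpy as np

# Synthetic portfolio losses (stand-in data; the paper's higher-order
# Markov chain model would generate the actual loss distribution).
rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)

alpha = 0.99
var_99 = np.quantile(losses, alpha)        # VaR_alpha: alpha-quantile of loss
es_99 = losses[losses >= var_99].mean()    # ES_alpha: mean loss beyond VaR
```

Expected Shortfall averages the tail beyond the VaR threshold, so it always dominates VaR and is sensitive to the severity, not just the frequency, of extreme losses.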
Transient and Average Markov Reward Chains with Applications to Finance
Sladký, Karel
The article is devoted to Markov reward chains; in particular, attention is primarily focused on the reward variance arising from the summation of the generated rewards. Explicit formulae for calculating the variances for transient and average models are reported, along with sketches of algorithmic procedures for finding policies guaranteeing minimal variance in the class of policies with a given transient or average reward. Application of the obtained results to financial models is indicated.
Second Order Optimality in Transient and Discounted Markov Decision Chains
Sladký, Karel
The article is devoted to second order optimality in Markov decision processes. Attention is primarily focused on the reward variance for discounted models and undiscounted transient models (i.e. where the spectral radius of the transition probability matrix is less than unity). Considering second order optimality criteria means that in the class of policies maximizing (or minimizing) the total expected discounted reward (or undiscounted reward for the transient model) we choose the policy minimizing the total variance. Explicit formulae for calculating the variances for transient and discounted models are reported along with sketches of algorithmic procedures for finding second order optimal policies.
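The quantities these papers work with can be illustrated by Monte Carlo, without reproducing the explicit formulae: below, a toy Markov reward chain of my own (not from the article) under a fixed policy, with the simulated mean of the total discounted reward checked against the exact value (I - beta P)^{-1} r.

```python
import numpy as np

# Toy Markov reward chain under a fixed policy (illustrative data).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # transition matrix under the fixed policy
r = np.array([1.0, -0.5])         # one-stage rewards
beta = 0.9                        # discount factor
rng = np.random.default_rng(1)

def discounted_total(horizon=120, start=0):
    """Simulate one realization of sum_t beta^t r(X_t)."""
    s, disc, total = start, 1.0, 0.0
    for _ in range(horizon):      # beta**120 is negligible truncation error
        total += disc * r[s]
        disc *= beta
        s = rng.choice(2, p=P[s])
    return total

samples = np.array([discounted_total() for _ in range(4_000)])
mean_hat, var_hat = samples.mean(), samples.var()

# Exact expected total discounted reward: v = (I - beta*P)^{-1} r.
v_exact = np.linalg.solve(np.eye(2) - beta * P, r)
```

The empirical variance `var_hat` is the quantity that second order optimality minimizes over the set of first-order optimal policies; the papers above derive closed-form expressions for it rather than simulating.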
The Variance of Discounted Rewards in Markov Decision Processes: Laurent Expansion and Sensitive Optimality
Sladký, Karel
In this paper we consider discounted Markov decision processes with finite state space and compact action spaces. We present formulas for the variance of total expected discounted rewards along with its partial Laurent expansion. This enables us to compare the obtained results with similar results for undiscounted models.
Cumulative Optimality in Risk-Sensitive and Risk-Neutral Markov Reward Chains
Sladký, Karel
This contribution is devoted to risk-sensitive and risk-neutral optimality in Markov decision chains. Since the traditional optimality criteria (e.g. discounted or average rewards) cannot reflect the variability-risk features of the problem, and using the mean-variance selection rules that stem from the classical work of Markowitz presents some technical difficulties, we are interested in the expectation of the stream of rewards generated by the Markov chain evaluated by an exponential utility function with a given risk sensitivity coefficient. Recall that for a risk sensitivity coefficient equal to zero we arrive at the traditional optimality criteria. In this note we present necessary and sufficient risk-sensitive and risk-neutral optimality conditions, in detail for unichain models, and indicate their generalization to multichain Markov reward chains.
Research and development of systems using renewable energy sources and the energy-saving potential for apartment and family houses: Development of the construction industry and the use of renewable energy sources (RES) in construction
Eurosolar CZ ; Inter-projekt ; ŽDB - závod Viadrus ; UniServis Hašek ; Aton centrum ; POWER SERVICE ; Solar-Dynamics ; Společnost pro techniku prostředí ; CZ BIOM - České sdružení pro biomasu, Praha ; Československá společnost pro sluneční energii, Praha ; Česká společnost pro větrnou energii, Praha ; Asociace pro využití obnovitelných zdrojů energie ; Tomeš, Petr ; Čimbura, Vlastislav ; Dubový, Jan ; Škarpa, Miroslav ; Havlíček, Michal ; Kramoliš, Petr ; Karásek, Dalibor ; Němeček, Josef ; Smrž, Milan ; Mizik, Josef ; Šafařík, Miroslav ; Tywoniak, Jan ; Pešat, Jan ; Hašek, Ilja ; Slejška, Antonín ; Šíma, Antonín ; Petříková, Vlasta ; Kutil, Antonín ; Novotný, Václav ; Židlický, Jiří ; Kottnauer, Antonín ; Peterka, Jaroslav ; Matuška, Tomáš ; Kuřina, Jiří ; Hošek, Jiří ; Sladký, Karel ; Váňa, Jaroslav ; Michalička, Ladislav ; Štekl, Josef ; Motlík, Jan
Recapitulation of the project work in 2001 and a list of the authors who participated in the project in 2001. Part I. Development of the construction industry and the use of RES in construction: thermal-technical properties of residential buildings; possibilities for comparing various measures for the rational use of energy in a building; development and trends in housing construction; description of the project, its goals, plans, and the main directions of the work.
Risk-Sensitive and Average Optimality in Markov Decision Processes
Sladký, Karel
This contribution is devoted to risk-sensitive optimality criteria in finite state Markov decision processes. First, we rederive necessary and sufficient conditions for average optimality of (classical) risk-neutral unichain models. This approach is then extended to the risk-sensitive case, i.e., when the expectation of the stream of one-stage costs (or rewards) generated by a Markov chain is evaluated by an exponential utility function. We restrict ourselves to irreducible or unichain Markov models, where risk-sensitive average optimality is independent of the starting state. As we show, this problem is closely related to the solution of (nonlinear) Poissonian equations and their connections with nonnegative matrices.
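For an irreducible chain with a fixed policy, the risk-sensitive average reward admits a classical spectral characterization related to the multiplicative Poisson equation: the growth rate of E[exp(gamma * cumulative reward)] is the Perron eigenvalue of the exponentially weighted transition matrix. A numeric sketch with toy data of my own choosing (not from the paper):

```python
import numpy as np

# Toy irreducible Markov reward chain (illustrative data).
P = np.array([[0.5, 0.5],
              [0.4, 0.6]])   # transition matrix
r = np.array([1.0, 0.0])     # one-stage rewards

def risk_sensitive_average(P, r, gamma):
    """Risk-sensitive average reward log(rho(Q)) / gamma, where
    Q[i, j] = exp(gamma * r[i]) * P[i, j] and rho is the Perron
    eigenvalue (classical Howard-Matheson characterization)."""
    Q = np.exp(gamma * r)[:, None] * P
    rho = np.max(np.abs(np.linalg.eigvals(Q)))
    return np.log(rho) / gamma

g_averse = risk_sensitive_average(P, r, gamma=-1.0)  # risk-averse valuation
g_seeking = risk_sensitive_average(P, r, gamma=+1.0) # risk-seeking valuation
```

As gamma tends to zero this recovers the risk-neutral average reward pi @ r under the stationary distribution pi (here pi = (4/9, 5/9)), consistent with the remark in the abstracts that a zero risk sensitivity coefficient yields the traditional criteria.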
Separable Utility Functions in Dynamic Economic Models
Sladký, Karel
In this note we study properties of utility functions suitable for the performance evaluation of dynamic economic models under uncertainty. First, we summarize basic properties of utility functions; second, we show how exponential utility functions can be employed in dynamic models where not only the expectation but also the risk is considered. Special attention is focused on the properties of the expected utility and the corresponding certainty equivalents when the stream of obtained rewards is governed by Markov dependence and evaluated by exponential utility functions.
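The certainty equivalent under an exponential utility has a short numeric illustration. A sketch with my own toy numbers (not from the note), using u(x) = -exp(-gamma x), for which CE(X) = -(1/gamma) log E[exp(-gamma X)]:

```python
import numpy as np

# Sample of random rewards (illustrative: normal with mean 1.0, sd 0.5).
rng = np.random.default_rng(2)
X = rng.normal(loc=1.0, scale=0.5, size=200_000)

def certainty_equivalent(x, gamma):
    """CE under exponential utility u(x) = -exp(-gamma * x)."""
    return -np.log(np.mean(np.exp(-gamma * x))) / gamma

ce = certainty_equivalent(X, gamma=2.0)
```

For a normal reward the closed form is CE = mu - gamma * sigma^2 / 2 (here 1.0 - 2 * 0.25 / 2 = 0.75), so the risk-averse certainty equivalent lies strictly below the expected reward, which is exactly the variability penalty the traditional risk-neutral criteria miss.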
