National Repository of Grey Literature: 3 records found.
Stochastic Dynamic Programming Problems: Theory and Applications.
Lendel, Gabriel ; Sladký, Karel (advisor) ; Lachout, Petr (referee)
Title: Stochastic Dynamic Programming Problems: Theory and Applications
Author: Gabriel Lendel
Department: Department of Probability and Mathematical Statistics
Supervisor: Ing. Karel Sladký, CSc.
Supervisor's e-mail address: sladky@utia.cas.cz
Abstract: In the present work we study Markov decision processes, which provide a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision maker. We study iterative procedures for finding a policy that is optimal or nearly optimal with respect to the selected criteria. Specifically, we mainly examine the task of finding a policy that is optimal with respect to the total expected discounted reward or the average expected reward for discrete or continuous systems. In the work we study policy iteration algorithms and approximate value iteration algorithms. We give numerical analyses of specific problems.
Keywords: Stochastic dynamic programming, Markov decision process, policy iteration, value iteration
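The value iteration method this abstract mentions can be illustrated with a minimal sketch. The tiny 2-state, 2-action MDP below, with its transition probabilities, rewards, and discount factor, is entirely invented for illustration; none of it is taken from the thesis itself.

```python
import numpy as np

# Invented 2-state, 2-action MDP (not from the thesis).
P = np.array([            # P[a, s, s'] — transition probabilities
    [[0.8, 0.2], [0.1, 0.9]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
])
R = np.array([            # R[a, s] — expected immediate reward
    [1.0, 0.0],
    [0.5, 2.0],
])
gamma = 0.9               # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality update:
    # V(s) = max_a [ R(a, s) + gamma * sum_{s'} P(a, s, s') V(s') ]
    Q = R + gamma * (P @ V)     # Q[a, s], action values under current V
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy (optimal) action in each state
```

After convergence, `V` approximates the optimal total expected discounted reward per state, and `policy` is the greedy policy with respect to it; policy iteration reaches the same fixed point by alternating full policy evaluation with greedy improvement.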
Dynamic decision making via approximate dynamic programming
Slimáček, V. ; Zeman, J. ; Kárný, Miroslav
This work deals with dynamic decision making via approximate dynamic programming, applied to futures trading. It gives a theoretical description of dynamic decision making and approximate dynamic programming, and also covers the principles of Bayesian estimation, which is necessary for solving our task. We designed and described one possible trading strategy: a receding horizon strategy combined with an anticipative strategy. The price predictions required by this strategy are made by the certainty equivalence strategy and by the Monte Carlo method. The designed strategy was tested on real data and unfortunately does not provide a profit.
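A receding-horizon decision with certainty-equivalence and Monte Carlo prediction, as described in general terms by this abstract, can be sketched roughly as follows. Everything here is a hypothetical illustration: the random-walk price model, the unit long/flat/short positions, and all parameter values are our assumptions, not the authors' strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def receding_horizon_position(history, horizon=5, n_paths=500):
    """Pick a unit position (short/flat/long) for the next step.

    Illustrative only: assumes prices follow a random walk with drift.
    """
    inc = np.diff(history)
    # Certainty equivalence: plug in point estimates of drift and
    # volatility as if they were the true model parameters.
    mu, sigma = inc.mean(), inc.std() + 1e-9
    # Monte Carlo: simulate cumulative price moves over the horizon.
    moves = rng.normal(mu, sigma, size=(n_paths, horizon)).sum(axis=1)
    expected_move = moves.mean()
    # Choose the position with the best expected horizon profit;
    # only this first decision is applied before re-planning.
    candidates = (-1, 0, 1)   # short, flat, long
    return max(candidates, key=lambda pos: pos * expected_move)
```

On a steadily rising price series this sketch goes long, and on a falling one it goes short; in a receding-horizon loop it would be re-run after each new observation, discarding the rest of the plan.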
