National Repository of Grey Literature 9 records found  Search took 0.01 seconds. 
Training Intelligent Agents in Unity Game Engine
Vaculík, Jan ; Chlubna, Tomáš (referee) ; Matýšek, Michal (advisor)
The goal of this work is to design applications that demonstrate the power of machine learning in video games. To achieve this goal, the work uses the ML-Agents toolkit, which allows the creation of intelligent agents in the Unity Game Engine. Furthermore, a series of experiments showing the properties and flexibility of intelligent agents in several real-time scenarios is presented. To train the agents, the toolkit uses reinforcement learning and imitation learning algorithms.
Deep reinforcement learning and snake-like robot locomotion design
Kočí, Jakub ; Dobrovský, Ladislav (referee) ; Matoušek, Radomil (advisor)
This master's thesis discusses the application of reinforcement learning to deep learning tasks. The theoretical part covers the basics of artificial neural networks and reinforcement learning, and describes the theoretical model underlying the reinforcement learning process: Markov decision processes. Selected techniques are illustrated on conventional reinforcement learning algorithms, and several widely used deep reinforcement learning algorithms are described as well. The practical part consists of implementing a model of the robot and its environment, together with the deep reinforcement learning system itself.
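The tabular Q-learning update at the core of the conventional algorithms mentioned above can be sketched in a few lines. The 1-D corridor environment below is a hypothetical stand-in, not the thesis's robot simulation:

```python
import random

# Hypothetical 1-D corridor MDP: states 0..4, reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left / step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                 # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy behaviour policy
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r + gamma * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy walks right (+1) from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

Deep variants such as DQN replace the table `Q` with a neural network, but the target term is the same.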
Utilization of Robotic Operating System (ROS) for control of collaborative robot UR3
Juříček, Martin ; Matoušek, Radomil (referee) ; Parák, Roman (advisor)
The aim of this bachelor's thesis is to create a control program for the collaborative robot UR3 from Universal Robots, and to test and verify its functionality. The control program is written in Python and integrates control options through the Robot Operating System, where a defined point can be reached using pre-simulated trajectories from Q-learning, SARSA, Deep Q-learning, or Deep SARSA, or using only the MoveIt framework. The thesis covers a cross-section of topics: collaborative robotics, the Robot Operating System, the Gazebo simulation environment, and reinforcement and deep reinforcement learning. Finally, the design and implementation of the control program and its components are described.
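The difference between the SARSA and Q-learning variants named above comes down to the bootstrap target. A minimal illustration, with placeholder Q-values rather than anything from the thesis pipeline:

```python
# One transition (s1, a, r, s1') with two actions available in the next state.
gamma = 0.9
Q = {("s1", "a1"): 0.2, ("s1", "a2"): 0.8}   # illustrative placeholder values

reward, next_state = 1.0, "s1"
next_action = "a1"  # the action the behaviour policy actually chose

# SARSA (on-policy): bootstrap on the action actually taken next.
sarsa_target = reward + gamma * Q[(next_state, next_action)]

# Q-learning (off-policy): bootstrap on the best available next action.
qlearn_target = reward + gamma * max(Q[(next_state, a)] for a in ("a1", "a2"))

print(sarsa_target, qlearn_target)
```

The "Deep" variants keep these targets but estimate Q with a neural network instead of a lookup table.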
Reinforcement Learning and an Agent Environment
Brychta, Adam
This work deals with reinforcement learning and its application in an agent environment. The theoretical part analyses agent environments, neural networks, and reinforcement learning. The practical part focuses on the design and implementation of a deep reinforcement learning agent, with the possibility of using hierarchical reinforcement learning.
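The hierarchical idea mentioned above is that a high-level policy selects temporally extended sub-policies (options) rather than primitive actions. A toy sketch with hand-coded options and a trivial manager, purely illustrative:

```python
# Two hand-coded options (sub-policies) over integer states on a line.
def option_go_right(state): return state + 1
def option_go_left(state):  return max(state - 1, 0)

def high_level_policy(state, goal):
    # Trivial manager: pick whichever option moves toward the goal.
    return option_go_right if state < goal else option_go_left

state, goal = 0, 6
trajectory = [state]
while state != goal:
    option = high_level_policy(state, goal)
    for _ in range(3):          # each option runs for up to 3 primitive steps
        state = option(state)
        trajectory.append(state)
        if state == goal:
            break

print(trajectory)
```

In a learned hierarchy, both the manager and the options would be trained policies; the structural split into two decision levels is the same.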
Quoting behaviour of a market-maker under different exchange fee structures
Kiseľ, Rastislav ; Baruník, Jozef (advisor) ; Kočenda, Evžen (referee)
During the last few years, market microstructure research has been active in analysing the dependence of market efficiency on different market characteristics. Make-take fees are one of those topics, as they may modify the incentives for participating agents, e.g. broker-dealers or market-makers. In this thesis, we propose a Hawkes process-based model that captures statistical differences arising from different fee regimes, and we estimate the differences on limit order book data. We then use these estimates in an attempt to measure execution quality from the perspective of a market-maker. We appropriate existing theoretical market frameworks; however, for the purpose of learning optimal market-making policies we apply a novel method of deep reinforcement learning. Our results suggest, firstly, that maker-taker exchanges provide better liquidity to the markets, and secondly, that deep reinforcement learning methods may be successfully applied to the domain of optimal market-making.
JEL Classification: C32, C45, C61, C63
Keywords: make-take fees, Hawkes process, limit order book, market-making, deep reinforcement learning
Author's e-mail: kiselrastislav@gmail.com
Supervisor's e-mail: barunik@fsv.cuni.cz
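A univariate Hawkes process with exponential kernel, as used for order-flow modelling of this kind, has intensity λ(t) = μ + Σᵢ α·exp(−β(t − tᵢ)) and can be simulated by Ogata's thinning algorithm. The sketch below is generic; the parameter values are illustrative, not estimates from the thesis:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for a self-exciting Hawkes process with
    intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < horizon:
        # Conservative upper bound on the intensity from time t onward
        # (the exponential kernel only decays between events).
        lam_bar = mu + alpha + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)            # candidate arrival time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:      # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

# Branching ratio alpha/beta = 0.4 < 1, so the process is stationary
# with long-run event rate mu / (1 - alpha/beta).
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, horizon=50.0)
print(len(events), "events on [0, 50]")
```

Fitting μ, α, β per fee regime (e.g. by maximum likelihood) is what lets such a model quantify statistical differences between regimes.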
