National Repository of Grey Literature
Application of Reinforcement Learning in Autonomous Driving
Vosol, David ; Zbořil, František (referee) ; Janoušek, Vladimír (advisor)
This thesis focuses on reinforcement learning applied to the task of autonomous vehicle driving. First, the necessary fundamental theory is presented, including state-of-the-art actor-critic methods. From these, the Proximal Policy Optimization (PPO) algorithm is chosen for the task, and the racing simulator TORCS is used as the training environment. Our goal is to train a reinforcement learning agent in simulation with a view to future real-world deployment on an RC-scale model car. To that end, we emulate the conditions of remote training and control in the cloud by simulating network packet loss and noisy sensor and actuator data. We also investigate the minimum number of vehicle sensors the agent needs to learn the task successfully, and carry out experiments with the vehicle's camera output. Several system architectures are proposed, some aimed at minimizing hardware requirements. Finally, we explore how well the trained agent generalizes to a previously unseen environment.
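For readers unfamiliar with the method named in the abstract, the following is a minimal, illustrative sketch of the clipped surrogate objective at the core of PPO (Schulman et al., 2017), written in plain Python/NumPy. The function name, array inputs, and toy numbers are hypothetical and are not taken from the thesis or its code; this is a sketch of the general technique only.

import numpy as np

def ppo_clipped_policy_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate policy loss from PPO (returned as a loss to minimize).

    log_probs_new : log pi_theta(a|s) under the current policy
    log_probs_old : log pi_theta_old(a|s) under the policy that collected the data
    advantages    : advantage estimates A(s, a), e.g. from GAE
    clip_eps      : clipping parameter epsilon (0.2 in the original paper)
    """
    # Probability ratio r_t(theta) = pi_theta(a|s) / pi_theta_old(a|s)
    ratio = np.exp(log_probs_new - log_probs_old)
    # Unclipped and clipped surrogate terms
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum of the two; negate it to get a loss
    return -np.mean(np.minimum(unclipped, clipped))

# Toy usage with made-up numbers (hypothetical, for illustration only)
if __name__ == "__main__":
    lp_new = np.array([-1.0, -0.5, -2.0])
    lp_old = np.array([-1.1, -0.7, -1.8])
    adv = np.array([0.5, -0.2, 1.0])
    print(ppo_clipped_policy_loss(lp_new, lp_old, adv))

The clipping keeps each policy update close to the data-collecting policy, which is one reason PPO is a common choice for continuous-control tasks such as simulated driving.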
