National Repository of Grey Literature : 316 records found (records 292 - 301)
Markov Localization for Mobile Robots: Simulation and Experiment
Krejsa, Jiří ; Věchet, S.
Summary: Localization of a robot is the task of estimating the robot's position in a known environment from sensor observations. The paper describes the basic principles of the Markov localization technique, successfully used for the localization task. The method is robust against sensor errors and can deal with global uncertainty, when the robot's position is completely unknown. Both simulation and experimental verification of the method's usability are included.
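At its core, Markov localization is a discrete Bayes filter over a belief distribution: a motion (prediction) update followed by a sensor (correction) update. Below is a minimal sketch of one such cycle on a 1-D circular grid; the grid size, motion kernel, and sensor likelihood are illustrative assumptions, not values from the paper.

```python
import numpy as np

def predict(belief, motion_kernel):
    """Motion update: convolve the belief with the motion model (circular world)."""
    k = len(motion_kernel)
    out = np.zeros_like(belief)
    for shift, p in enumerate(motion_kernel):
        out += p * np.roll(belief, shift - k // 2)
    return out

def correct(belief, likelihood):
    """Sensor update: weight by the observation likelihood and renormalize."""
    belief = belief * likelihood
    return belief / belief.sum()

belief = np.full(100, 1.0 / 100)            # global uncertainty: uniform belief
motion_kernel = np.array([0.1, 0.8, 0.1])   # assumed noisy "move one cell" model
likelihood = np.full(100, 0.05)             # assumed sensor model for one reading
likelihood[40:45] = 1.0                     # cells consistent with the observation

belief = correct(predict(belief, motion_kernel), likelihood)
print(belief.argmax())                      # most probable cell after one cycle
```

Starting from the uniform belief mirrors the global-uncertainty case the abstract mentions: repeated update cycles concentrate the probability mass on the cells consistent with the observations.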
Path Planning for a Four-Legged Walking Robot Using Rapidly Exploring Random Trees
Krejsa, Jiří ; Věchet, S.
Summary: There are several randomized methods for the path planning problem. Rapidly exploring random trees (RRT) is a method that can deal with constraints typical for legged walking robots, e.g. limited resolution of the rotation step. The paper describes the RRT method itself and its use for path planning of a four-legged walking robot, including a special failure case when the robot is capable of rotating in one direction only. The method proved to be robust and fast.
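For readers unfamiliar with RRT, the sketch below shows its basic loop: sample a random configuration, find the nearest tree node, and extend the tree a fixed step toward the sample. The 2-D point world, step size, and goal test are illustrative assumptions; obstacle checking is omitted, and a planner for the legged robot would additionally enforce the rotation constraints mentioned above.

```python
import math, random

def rrt(start, goal, step=0.5, iters=5000, bounds=(0.0, 10.0)):
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        near = min(nodes, key=lambda n: math.dist(n, sample))    # nearest tree node
        theta = math.atan2(sample[1] - near[1], sample[0] - near[0])
        new = (near[0] + step * math.cos(theta),
               near[1] + step * math.sin(theta))                 # extend toward sample
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:                          # close enough: done
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                                                  # no path found

print(rrt((1.0, 1.0), (9.0, 9.0)))
```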
The control of active magnetic bearing using two-phase Q-learning
Březina, Tomáš ; Krejsa, Jiří
The paper compares controllers based on two-phase Q-learning with a PID controller on the active magnetic bearing control task.
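As a reminder of the baseline used in such comparisons, here is a minimal discrete PID controller; the gains and sampling period are illustrative assumptions, not values from the paper.

```python
def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": 0.0}
    def control(error):
        state["integral"] += error * dt                   # integral term
        derivative = (error - state["prev_error"]) / dt   # derivative term
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return control

pid = make_pid(kp=100.0, ki=10.0, kd=1.0, dt=1e-3)   # assumed gains, 1 kHz loop
print(pid(2e-4))   # control action for a 0.2 mm rotor position error
```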
Active magnetic bearing control through Q-learing
Březina, Tomáš ; Krejsa, Jiří ; Kratochvíl, Ctirad
The paper is focused on the control of an active magnetic bearing using an improved version of Q-learning. The improvement consists in separating Q-learning into two phases: an efficient pre-learning phase and a tutoring phase working with the real system.
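A rough sketch of the two-phase idea, assuming ordinary tabular Q-learning: the same Q-table is first pre-learned on a cheap simulation model and then refined against the real plant (here a toy stand-in). The environment interface and all hyperparameters are illustrative assumptions.

```python
import random

def make_env(noise):
    # toy 1-D plant: the action pushes the state left/stay/right,
    # and the reward peaks at the center cell
    def step(s, a):
        s2 = min(49, max(0, s + (a - 1) + random.choice([-noise, 0, noise])))
        return s2, -abs(s2 - 25)
    return step

def q_learning(env_step, q, episodes, alpha=0.1, gamma=0.95, eps=0.1, horizon=200):
    n_states, n_actions = len(q), len(q[0])
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(horizon):
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda i: q[s][i]))
            s2, r = env_step(s, a)                                 # one transition
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])  # TD update
            s = s2
    return q

q = [[0.0] * 3 for _ in range(50)]
q = q_learning(make_env(noise=0), q, episodes=2000)  # phase 1: pre-learning on a model
q = q_learning(make_env(noise=1), q, episodes=50)    # phase 2: tutoring on the "real" plant
```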
Using Modified Q-learning with LWR for Inverted Pendulum Control
Věchet, S. ; Krejsa, Jiří ; Březina, Tomáš
Locally Weighted Regression together with Q-learning is demonstrated on the control task of a simple model of an inverted pendulum.
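One way to combine the two techniques is to use LWR as the Q-function approximator over the pendulum's continuous state. The sketch below uses the zeroth-order form (a kernel-weighted average of stored samples); the bandwidth and memory scheme are illustrative assumptions.

```python
import numpy as np

class LWRQ:
    """Zeroth-order LWR approximation of Q(s, a) from stored samples."""
    def __init__(self, n_actions, bandwidth=0.2):
        self.mem = [([], []) for _ in range(n_actions)]   # (states, values) per action
        self.h = bandwidth

    def predict(self, s, a):
        xs, ys = self.mem[a]
        if not xs:
            return 0.0
        xs, ys = np.array(xs), np.array(ys)
        w = np.exp(-np.sum((xs - s) ** 2, axis=1) / self.h ** 2)  # Gaussian weights
        return float(w @ ys / (w.sum() + 1e-9))                   # weighted average

    def add(self, s, a, target):
        self.mem[a][0].append(np.asarray(s, dtype=float))
        self.mem[a][1].append(float(target))

q = LWRQ(n_actions=2)
q.add([0.10, 0.0], 1, 0.5)        # pendulum state assumed as (angle, angular velocity)
print(q.predict([0.12, 0.0], 1))  # smooth estimate near the stored sample
```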
Determination of Q-function optimum grid applied on active magnetic bearing control task
Březina, Tomáš ; Krejsa, Jiří
The AMB control task can be solved using a reinforcement-learning method called Q-learning. However, certain issues remain to be solved, mainly the convergence speed. Two-phase Q-learning can be used to speed up the learning process. When a table is used as the Q-function approximation, the learning speed and the precision of the found controllers depend strongly on the Q-function table grid. The paper is devoted to the determination of the optimum grid with respect to the properties of the controllers found by the given method.
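The grid in question discretizes the continuous AMB state into Q-table cells; a minimal sketch of such a discretization follows, with the state ranges and resolutions as illustrative assumptions. A finer grid yields more precise controllers but a larger table and slower learning, which is the trade-off the paper addresses.

```python
import numpy as np

def make_grid_index(lows, highs, bins):
    # interior bin edges per state dimension
    edges = [np.linspace(l, h, b + 1)[1:-1] for l, h, b in zip(lows, highs, bins)]
    def index(state):
        # map a continuous state to a tuple of Q-table cell indices
        return tuple(int(np.digitize(x, e)) for x, e in zip(state, edges))
    return index

# assumed (position [m], velocity [m/s]) ranges and a 21 x 21 grid
index = make_grid_index(lows=[-1e-3, -0.1], highs=[1e-3, 0.1], bins=[21, 21])
print(index([2e-4, -0.03]))   # the cell holding this state
```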
Contribution to the solution of a drive system with toothed wheels
Kratochvíl, Ctirad ; Krejsa, Jiří ; Grepl, Robert
The present contribution deals with the optimization of magnetic drives with permanent magnets.
Stochastic policy in Q-learning used for control of AMB
Březina, Tomáš ; Krejsa, Jiří ; Věchet, S.
Great attention has lately been focused on Reinforcement Learning (RL) methods. The article is focused on improving the model-free RL method known as Q-learning, applied to an active magnetic bearing model. A stochastic strategy and an adaptive integration step increased the speed of learning approximately a hundred times. The impossibility of using the proposed improvement online is the only drawback; however, it might be used for pre-training and further fine-tuned online.
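One common realization of a stochastic strategy over Q-values is softmax (Boltzmann) action selection, sketched below. Whether the paper uses this exact form is not stated here, so the construction and the temperature parameter should be treated as assumptions.

```python
import numpy as np

def softmax_action(q_values, temperature=1.0):
    z = np.asarray(q_values, dtype=float) / temperature
    p = np.exp(z - z.max())          # subtract the max for numerical stability
    p /= p.sum()
    return np.random.choice(len(p), p=p)

# a high temperature explores broadly; annealing it approaches the greedy policy
print(softmax_action([0.1, 0.5, 0.2], temperature=0.5))
```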
Q-learning used for control of AMB: reduced state definition
Březina, Tomáš ; Krejsa, Jiří
Previous work showed that a stochastic strategy improved the model-free RL method known as Q-learning, applied to an active magnetic bearing (AMB) model. So far, the position, velocity and acceleration were used to describe the state of the system. This paper shows a simplified version of the controller which uses a reduced state definition: position and velocity only. Furthermore, the domain of controlled initial conditions and its development during learning are shown.
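The practical effect of the reduced state definition is easy to see on the table size: dropping acceleration removes an entire dimension from the Q-table. The per-dimension resolutions below are illustrative assumptions.

```python
# assumed per-dimension resolutions and number of discrete control actions
bins_pos, bins_vel, bins_acc, n_actions = 21, 21, 21, 3

full_table    = bins_pos * bins_vel * bins_acc * n_actions  # (x, v, a) state
reduced_table = bins_pos * bins_vel * n_actions             # (x, v) state only
print(full_table, reduced_table)                            # 27783 vs 1323 entries
```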
