National Repository of Grey Literature
Computer as an Intelligent Partner in the Word-Association Game Codenames
Jareš, Petr ; Fajčík, Martin (referee) ; Smrž, Pavel (advisor)
This thesis addresses the determination of semantic similarity between words. For this task, a combination of the predictive model fastText and the count-based method Pointwise Mutual Information is used. The thesis describes a system that uses these semantic models to substitute for a player in the word-association game Codenames. The system implements a game strategy that exploits context information from the progression of the game to benefit its own team, and it is able to stand in for a player in both team roles.
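A rough idea of how such a combined similarity score might look; the weighting, the toy vectors and counts, and the similarity helper below are illustrative assumptions, not the thesis implementation:

```python
# Sketch: mixing an embedding-based similarity with Pointwise Mutual Information (PMI).
# The vectors and counts are toy placeholders; in practice they would come from a
# trained fastText model and corpus co-occurrence statistics.
import math
import numpy as np

vectors = {                      # placeholder fastText-like embeddings
    "river": np.array([0.9, 0.1, 0.0]),
    "bank":  np.array([0.7, 0.3, 0.1]),
}
cooc = {("river", "bank"): 120}  # placeholder co-occurrence counts
word_count = {"river": 5000, "bank": 8000}
total = 1_000_000                # placeholder corpus size

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pmi(w1, w2):
    # PMI = log( P(w1, w2) / (P(w1) * P(w2)) ), 0 if the pair was never observed
    joint = cooc.get((w1, w2), 0) / total
    if joint == 0:
        return 0.0
    return math.log(joint / ((word_count[w1] / total) * (word_count[w2] / total)))

def similarity(w1, w2, alpha=0.5):
    # weighted mix of embedding similarity and PMI (alpha is a hypothetical weight)
    return alpha * cosine(vectors[w1], vectors[w2]) + (1 - alpha) * pmi(w1, w2)

print(similarity("river", "bank"))
```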
Artificial Intelligence for Children of the Galaxy Computer Game
Šmejkal, Pavel ; Gemrot, Jakub (advisor) ; Trunda, Otakar (referee)
Even though artificial intelligence (AI) agents are now able to solve many classical games, in the field of computer strategy games the AI opponents still leave much to be desired. In this work we tackle the problem of combat in strategy video games by adapting existing search approaches: Portfolio greedy search (PGS) and Monte Carlo tree search (MCTS). We also introduce an improved version of MCTS called MCTS considering hit points (MCTS_HP). These methods are evaluated in the context of the recently released 4X strategy game Children of the Galaxy. We implement a combat simulator for the game and a benchmarking framework in which various AI approaches can be compared. We show that for small to medium combats, MCTS methods are superior to PGS. In all scenarios MCTS_HP is equal to or better than regular MCTS thanks to its better search guidance. In smaller scenarios, MCTS_HP with only a 100-millisecond time limit outperforms regular MCTS with a 2-second time limit. By combining fast greedy search for large combats with the more precise MCTS_HP for smaller scenarios, a universal AI player can be created.
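A minimal sketch of what an MCTS variant guided by hit points could look like; the CombatState interface (legal_moves, apply, hp) and the hp_eval heuristic are hypothetical placeholders, not the thesis implementation:

```python
# Sketch: MCTS whose leaf evaluation scores a combat state by the remaining
# hit points of both sides instead of running random playouts to the end.
import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = list(state.legal_moves())

def ucb(child, parent_visits, c=1.4):
    # upper confidence bound used to pick the next child during selection
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def hp_eval(state):
    # heuristic evaluation in [-1, 1]: hit-point balance between the two sides
    friendly, enemy = state.hp("friendly"), state.hp("enemy")
    total = friendly + enemy
    return 0.0 if total == 0 else (friendly - enemy) / total

def mcts_hp(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        while not node.untried and node.children:           # selection
            node = max(node.children, key=lambda ch: ucb(ch, node.visits))
        if node.untried:                                     # expansion
            child = Node(node.state.apply(node.untried.pop()), parent=node)
            node.children.append(child)
            node = child
        reward = hp_eval(node.state)                         # HP-based evaluation
        while node is not None:                              # backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    if not root.children:
        return root_state
    return max(root.children, key=lambda ch: ch.visits).state
```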
Classic Card Games
Mifek, Jakub ; Gemrot, Jakub (advisor) ; Bída, Michal (referee)
Although there are libraries that simplify the creation of card games, only a few of them provide a general and comprehensive design that facilitates the creation of any classic card game. Our library enables simple development of card games and their graphical representation. As part of an all-in-one solution, we created a client-server application that can run any card game built with our library. To evaluate the library, we implemented five example games. We also created a self-learning artificial intelligence that should be able to learn any classic card game implemented with our library with minimal developer input; for it we chose the Q-learning method. We hope that our project will enable simple and effective card game development and distribution to the gaming community.
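For reference, the Q-learning update the abstract mentions, in its generic tabular form; the environment interface (reset, step, legal_actions) is an assumed placeholder, not the library's actual API:

```python
# Sketch: tabular Q-learning with an epsilon-greedy policy.
import random
from collections import defaultdict

def q_learning(env, episodes=10_000, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)                      # (state, action) -> value estimate
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.legal_actions(state)
            if random.random() < epsilon:       # explore
                action = random.choice(actions)
            else:                               # exploit the current estimate
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            next_best = 0.0 if done else max(
                Q[(next_state, a)] for a in env.legal_actions(next_state))
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[(state, action)] += alpha * (reward + gamma * next_best - Q[(state, action)])
            state = next_state
    return Q
```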
RacingCarSim
Homa, Martin ; Gemrot, Jakub (advisor) ; Krijt, Filip (referee)
Driving has been a favourite activity, and not only of men, since the invention of the automobile. It brings a dose of fun and adrenaline, as can be seen among car enthusiasts every day, and such people often want to know what lies behind the car's movement itself. The aim of this work is to appeal to these people and give them answers to their questions in an entertaining way. To this end, an application has been created that, based on an advanced physical model, allows its user to simulate driving a car in various environments and to observe its behavior and the physical quantities acting on it. The main focus is on configurability: the user can define the car as well as modify the environment. One further question may arise: what is the maximum speed at which the car can be driven? The application answers this as well, as it implements an artificial player that tries to learn to drive a user-defined racetrack at the highest possible speed; the learning process and its individual components are also configurable. However, the learning is highly dependent on its parameters and on the representation of the environment, so achieving the desired result is non-trivial.
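As an illustration of the kind of longitudinal dynamics such a physical model has to integrate; all constants and the step helper below are assumed example values, not the application's model:

```python
# Sketch: one Euler integration step of simple longitudinal car dynamics with
# engine force, aerodynamic drag, and rolling resistance.
def step(v, throttle, dt=0.01,
         mass=1200.0,        # kg
         max_force=6000.0,   # N, peak driving force
         c_drag=0.4257,      # lumped aerodynamic drag coefficient
         c_rr=12.8):         # lumped rolling resistance coefficient
    f_drive = throttle * max_force
    f_drag = c_drag * v * abs(v)
    f_rr = c_rr * v
    a = (f_drive - f_drag - f_rr) / mass
    return v + a * dt        # new longitudinal speed in m/s

v = 0.0
for _ in range(6000):        # one minute of simulated full throttle
    v = step(v, throttle=1.0)
print(f"speed after 60 s: {v:.1f} m/s")
```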
Defending Choke Points in StarCraft: Brood War
Šťavík, Petr ; Gemrot, Jakub (advisor) ; Pilát, Martin (referee)
Despite the effort invested in artificial intelligence for real-time strategy games, computer-controlled agents (bots) still struggle even against average human players. One of the keys to success in such games is the ability to take advantage of various tactical points on the map, such as chokepoints, narrow passages connecting open areas. Using genetic algorithms and SparCraft, a simplified simulator of StarCraft: Brood War, we present a method to generate advantageous unit layouts for defending chokepoints. Our experiments show that layouts produced by our method perform significantly better than random layouts and are comparable in quality to layouts traditionally employed by human players. Our method may also be used to generate a database of advantageous unit layouts, which could be incorporated into an existing StarCraft: Brood War bot.
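A minimal sketch of the genetic-algorithm loop described above; the layout encoding, the operators, and the toy fitness function stand in for the thesis's SparCraft-based evaluation and are assumptions:

```python
# Sketch: evolving defensive unit layouts. A layout is a list of (x, y) tile
# positions; in the thesis, fitness would come from simulating the fight.
import random

GRID, UNITS = 32, 8

def random_layout():
    return [(random.randrange(GRID), random.randrange(GRID)) for _ in range(UNITS)]

def crossover(a, b):
    cut = random.randrange(1, UNITS)
    return a[:cut] + b[cut:]

def mutate(layout, rate=0.1):
    return [(random.randrange(GRID), random.randrange(GRID)) if random.random() < rate else pos
            for pos in layout]

def evolve(fitness, pop_size=50, generations=100):
    population = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: pop_size // 5]                 # keep the best fifth
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=fitness)

# usage with a toy fitness: prefer layouts clustered near a choke at (0, 16)
best = evolve(lambda layout: -sum(x + abs(y - 16) for x, y in layout))
```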
Artificial Player for Hearthstone Card Game
Ohman, Ľubomír ; Gemrot, Jakub (advisor) ; Mráz, František (referee)
The goal of this work was to create an artificial agent that is able to learn how to play Hearthstone with a given deck of cards. We decided to use the Q-learning algorithm to achieve this. A side effect of this work is the transformation of a simple Hearthstone simulator into a framework for developing artificial intelligence for the game. For training and evaluation, commonly played strategies served as inspiration for the testing agents we developed. The work compares a tabular representation of the Q-function with its neural-network approximation. The original goal was fulfilled partially: we succeeded in creating a learning agent, but it is only able to learn one specific strategy.
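A minimal sketch of the neural-network approximation of the Q-function mentioned above, assuming a placeholder state encoding and network size; it is not the thesis code:

```python
# Sketch: a tiny one-hidden-layer network replacing the Q-table, updated toward
# the usual TD target r + gamma * max_a' Q(s', a').
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, HIDDEN = 20, 10, 64     # placeholder sizes
W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN)); b2 = np.zeros(N_ACTIONS)

def q_values(state):
    h = np.maximum(0.0, W1 @ state + b1)      # ReLU hidden layer
    return W2 @ h + b2, h

def td_update(state, action, reward, next_state, done, gamma=0.95, lr=1e-3):
    global W1, b1, W2, b2
    q, h = q_values(state)
    next_q, _ = q_values(next_state)
    target = reward if done else reward + gamma * np.max(next_q)
    error = q[action] - target                 # derivative of 0.5 * error^2
    # backpropagate through the single affected output unit
    grad_W2 = np.zeros_like(W2); grad_W2[action] = error * h
    grad_b2 = np.zeros_like(b2); grad_b2[action] = error
    dh = error * W2[action] * (h > 0)          # through the ReLU
    grad_W1 = np.outer(dh, state); grad_b1 = dh
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# usage with random placeholder transitions
for _ in range(100):
    s, s2 = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
    td_update(s, action=rng.integers(N_ACTIONS), reward=1.0, next_state=s2, done=False)
```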
