
Hierarchical Reinforcement Learning With Monte Carlo Tree Search in Computer Fighting Game


Abstract:

Fighting games are complex environments where challenging action-selection problems arise, mainly due to the diversity of opponents and possible actions. In this paper, we present the design and evaluation of a fighting player on top of the FightingICE platform, which is used in the Fighting Game Artificial Intelligence (FTGAI) competition. Our proposal is based on hierarchical reinforcement learning (HRL) in combination with Monte Carlo tree search (MCTS) designed as options. Using the FightingICE framework, we evaluate our player against state-of-the-art FTGAIs. We train our player against the current FTGAI champion (GigaThunder). The resulting learned policy is comparable to the champion in direct confrontation with regard to the number of victories, with the advantage of requiring less expert knowledge. We also evaluate the proposed player against the runners-up and show that adaptation to the strategies of each opponent is necessary for building stronger fighting players.
Published in: IEEE Transactions on Games ( Volume: 11, Issue: 3, September 2019)
Page(s): 290 - 295
Date of Publication: 11 June 2018
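
The abstract names the architecture only at a high level: a high-level policy chooses among options, and each option delegates primitive action selection to a Monte Carlo search. The sketch below is a minimal, illustrative reading of that idea on a toy environment; the environment, action sets, reward shaping, option definitions, and the flat Monte Carlo rollouts standing in for full MCTS are all assumptions for illustration, not the paper's implementation and not the FightingICE API.

import random
from collections import defaultdict

class ToyFightEnv:
    # Crude stand-in for a fighting-game environment (hypothetical, not FightingICE).
    def reset(self):
        self.my_hp, self.opp_hp, self.dist = 100, 100, 3
        return self._state()

    def _state(self):
        return (self.my_hp // 25, self.opp_hp // 25, self.dist)

    def step(self, action):
        dealt = taken = 0
        if action == "advance":
            self.dist = max(0, self.dist - 1)
        elif action == "retreat":
            self.dist = min(3, self.dist + 1)
        elif action in ("punch", "kick") and self.dist == 0:
            dealt = 10                      # close-range attacks only land at distance 0
        elif action == "projectile" and self.dist >= 2:
            dealt = 5                       # projectiles only pay off at long range
        if action != "guard" and random.random() < 0.3:
            taken = 8                       # random opponent counterattack unless guarding
        self.opp_hp -= dealt
        self.my_hp -= taken
        done = self.my_hp <= 0 or self.opp_hp <= 0
        return self._state(), dealt - taken, done

    def snapshot(self):
        return (self.my_hp, self.opp_hp, self.dist)

    def restore(self, snap):
        self.my_hp, self.opp_hp, self.dist = snap

def mc_option_act(env, allowed, rollouts=20, horizon=4):
    # Pick one primitive action from the option's action subset by flat Monte Carlo
    # rollouts on the simulator state (a simplified stand-in for full MCTS).
    snap, best_a, best_v = env.snapshot(), None, float("-inf")
    for a in allowed:
        total = 0.0
        for _ in range(rollouts):
            env.restore(snap)
            _, r, done = env.step(a)
            total += r
            for _ in range(horizon - 1):
                if done:
                    break
                _, r, done = env.step(random.choice(allowed))
                total += r
        if total / rollouts > best_v:
            best_a, best_v = a, total / rollouts
    env.restore(snap)
    return best_a

OPTIONS = {                                 # each option restricts the search to a themed subset
    "rush": ["advance", "punch", "kick"],
    "zone": ["retreat", "projectile", "guard"],
}
Q = defaultdict(float)                      # high-level Q-values over (state, option)

def train(episodes=200, alpha=0.1, gamma=0.95, eps=0.1, option_len=3):
    env = ToyFightEnv()
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy choice of option at the high level
            o = (random.choice(list(OPTIONS)) if random.random() < eps
                 else max(OPTIONS, key=lambda k: Q[(s, k)]))
            ret, g = 0.0, 1.0
            for _ in range(option_len):     # run the option for a few decision points
                a = mc_option_act(env, OPTIONS[o])
                s2, r, done = env.step(a)
                ret, g = ret + g * r, g * gamma
                if done:
                    break
            # SMDP-style Q-learning update with the discounted intra-option return
            best_next = 0.0 if done else max(Q[(s2, k)] for k in OPTIONS)
            Q[(s, o)] += alpha * (ret + g * best_next - Q[(s, o)])
            s = s2

if __name__ == "__main__":
    train()
    print({k: round(v, 2) for k, v in Q.items()})

The high-level update above is the standard SMDP-style rule Q(s,o) <- Q(s,o) + alpha * (R + gamma^k * max_o' Q(s',o') - Q(s,o)), where R is the discounted return accumulated while the option ran for k steps; the paper's actual state features, option set, and MCTS configuration for FightingICE are not specified in this abstract.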
