
Two mode Q-learning

Author(s):

Kui-Hong Park and Jong-Hwan Kim, Dept. of Electr. Eng. & Comput. Sci., Korea Adv. Inst. of Sci. & Technol., Daejeon, South Korea

Abstract:

In this paper, a new two-mode Q-learning is proposed that uses both the success and failure experiences of an agent to achieve fast convergence; it extends Q-learning, a well-known reinforcement learning scheme. In conventional Q-learning, when the agent enters a "fail" state it receives a punishment from the environment, and this punishment decreases the Q value of the action that produced the failure. The proposed two-mode Q-learning, in contrast, selects actions in the state-action space based on both a normal Q value and a failure Q value. A failure Q value module determines the failure Q value from the agent's previous failure experiences. To demonstrate the effectiveness of the proposed method, it is compared with conventional Q-learning on a goalie system performing goalkeeping in robot soccer.
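A minimal sketch of how such a two-mode scheme might look in a tabular setting, written only from the abstract: one table holds the normal Q values and a second "failure" table records actions that led to the fail state, and action selection combines the two. The class name, the subtractive combination rule, the weight beta, and the failure update constant are all illustrative assumptions, not the authors' exact formulation.

import random
from collections import defaultdict

# Hypothetical two-mode Q-learning sketch; update rules and constants
# are illustrative assumptions, not the method as published.
class TwoModeQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, beta=1.0):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.beta = beta                      # weight given to the failure Q value
        self.q = defaultdict(float)           # normal Q values
        self.q_fail = defaultdict(float)      # failure Q values (failure Q value module)

    def score(self, state, action):
        # Combine both modes: a high failure Q value lowers the action's score.
        return self.q[(state, action)] - self.beta * self.q_fail[(state, action)]

    def select_action(self, state):
        # Epsilon-greedy selection over the combined two-mode score.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.score(state, a))

    def update(self, state, action, reward, next_state, failed):
        # Standard Q-learning update for the normal Q value.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
        if failed:
            # Record the failure experience in the failure Q value table.
            self.q_fail[(state, action)] += self.alpha * (1.0 - self.q_fail[(state, action)])

In this sketch the combined score steers the agent away from actions whose failure Q values have grown, even before repeated punishments have driven their normal Q values down, which is consistent with the abstract's claim that exploiting failure experiences speeds up convergence.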

Published in:

The 2003 Congress on Evolutionary Computation (CEC '03), Volume 4

Date of Conference:

8-12 Dec. 2003