One of the attractive features of electronic pets (e-pets) is the interaction between the user and the e-pet. This interaction, however, is usually limited to a set of predefined commands. In this paper, we present a way of involving the user in helping an e-pet learn high-level behaviors built from basic actions. The high-level behaviors are derived with planning, and their execution is then trained with reinforcement learning. We explain how we use a partially observable Markov decision process and hierarchical task network planning to design the behaviors. A Q-learning method is then applied to train the e-pet to achieve the correct behavior. A prototype is presented to show the feasibility and effectiveness of the approach.
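To make the Q-learning step concrete, the following is a minimal tabular sketch, not the authors' implementation: the states, actions, and reward function are illustrative assumptions standing in for an e-pet that learns to match a user's command with the correct basic action, with user feedback serving as the reward signal.

```python
import random

# Illustrative tabular Q-learning for a toy e-pet (hypothetical setup).
# States model the command the pet perceives; actions are basic behaviors.
STATES = ["hear_sit", "hear_fetch"]       # assumed observed commands
ACTIONS = ["sit", "fetch"]                # assumed basic actions
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1     # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # +1 when the action matches the command, else 0
    # (a stand-in for the user's approval feedback).
    return 1.0 if state == "hear_" + action else 0.0

def choose(state):
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
for episode in range(500):
    s = random.choice(STATES)
    a = choose(s)
    r = reward(s, a)
    s_next = random.choice(STATES)        # next command arrives at random
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    # Standard Q-learning update rule.
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# After training, the greedy policy maps each command to the matching action.
for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```

Because the reward separates matching from non-matching actions by a constant margin, the greedy policy converges to performing "sit" on "hear_sit" and "fetch" on "hear_fetch" after a few hundred episodes.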