Using transfer learning to speed-up Reinforcement Learning: A case-based approach

Scopus citations
21
Production type
Conference paper
Date
2010-10-28
Authors
CELIBERTO JUNIOR, L. A.
MATSUURA, J. P.
MANTARAS, R. L.
Reinaldo Bianchi
Advisor
Published in
Proceedings - 2010 Latin American Robotics Symposium and Intelligent Robotics Meeting, LARS 2010
Citation
CELIBERTO JUNIOR, L. A.; MATSUURA, J. P.; MANTARAS, R. L.; BIANCHI, R. Using transfer learning to speed-up Reinforcement Learning: A case-based approach. Proceedings - 2010 Latin American Robotics Symposium and Intelligent Robotics Meeting, LARS 2010, p. 55-60, Oct. 2010.
Full text (DOI)
Abstract
Reinforcement Learning (RL) is a well-known technique for solving problems where agents must act successfully in an unknown environment, learning through trial and error. However, this technique is not efficient enough for applications with real-world demands because of the time the agent needs to learn. This paper investigates the use of Transfer Learning (TL) between agents to speed up the well-known Q-learning Reinforcement Learning algorithm. The new approach presented here uses cases in a case base as heuristics to speed up the Q-learning algorithm, combining Case-Based Reasoning (CBR) and Heuristically Accelerated Reinforcement Learning (HARL) techniques. A set of empirical evaluations was conducted in the Mountain Car problem domain, where the actions learned while solving the 2D version of the problem can be used to speed up the learning of policies for its 3D version. The experiments compared the Q-learning Reinforcement Learning algorithm, the Heuristically Accelerated Q-learning (HAQL) algorithm, and the TL-HAQL algorithm proposed here. The results show that using a case base for transfer learning can lead to a significant improvement in the agent's performance, making it learn faster than either RL or HARL methods alone. © 2010 IEEE.
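The core idea in the HARL family of algorithms described above is that a heuristic function H(s, a) biases action selection without changing the Q-learning update itself; in TL-HAQL that heuristic is derived from cases transferred from a source task. The following is a minimal sketch of that action-selection scheme, not the authors' code: the toy corridor task, the heuristic table H, and the weight XI are illustrative assumptions, with H standing in for the case-derived bonus.

```python
import random

# Sketch of heuristically accelerated Q-learning (HAQL-style action
# selection). Assumptions for illustration: a 1-D corridor of states
# 0..N with goal at N, actions -1 (left) and +1 (right), and a
# hypothetical heuristic H that prefers moving right, standing in for
# the case-derived heuristic used in TL-HAQL.

N = 10
ALPHA, GAMMA, XI, EPS = 0.5, 0.9, 1.0, 0.1
ACTIONS = (-1, +1)

Q = {(s, a): 0.0 for s in range(N + 1) for a in ACTIONS}
H = {(s, a): (1.0 if a == +1 else 0.0) for s in range(N + 1) for a in ACTIONS}

def choose(state, rng):
    """Epsilon-greedy over Q(s,a) + XI * H(s,a): the heuristic biases
    exploration but is never written into the learned Q-values."""
    if rng.random() < EPS:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)] + XI * H[(state, a)])

def episode(rng, max_steps=200):
    s, steps = 0, 0
    while s < N and steps < max_steps:
        a = choose(s, rng)
        s2 = min(max(s + a, 0), N)
        r = 1.0 if s2 == N else -0.01
        # Standard Q-learning update, unchanged by the heuristic.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s, steps = s2, steps + 1
    return steps

rng = random.Random(0)
lengths = [episode(rng) for _ in range(50)]
print(lengths[0], lengths[-1])
```

Because the heuristic only shifts which action looks best during selection, a misleading H slows learning but cannot corrupt the Q-table, which is the property that makes transferred cases a safe source of acceleration.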
