Multi-agent multi-objective learning using heuristically accelerated reinforcement learning

Scopus citations
5
Production type
Conference paper
Date
2012-10-19
Authors
FERREIRA, L. A.
BIANCHI, R.
RIBEIRO, C. H. C.
Journal
Proceedings - 2012 Brazilian Robotics Symposium and Latin American Robotics Symposium, SBR-LARS 2012
Citation
FERREIRA, L. A.; BIANCHI, R.; RIBEIRO, C. H. C. Multi-agent multi-objective learning using heuristically accelerated reinforcement learning. Proceedings - 2012 Brazilian Robotics Symposium and Latin American Robotics Symposium, SBR-LARS 2012, p. 14-20, Oct. 2012.
Abstract
This paper introduces two new algorithms for solving multi-agent multi-objective reinforcement learning problems, in which the learning agent must not only interact with multiple agents but also consider various objectives (or criteria) in order to solve the problem. The main concept behind the proposed algorithms is a modular approach that divides the multiple objectives into modules, each of which learns a different objective with its own Action-Value and reinforcement functions. Besides the decomposition of objectives, both algorithms use a heuristic function to accelerate the learning process. The first algorithm learns one objective at a time, iterating over the objectives, while the second also divides the problem into sub-problems but learns all objectives simultaneously. The Predator-Prey problem was chosen to compare the performance of both proposed solutions with well-known algorithms. In this problem, the learning agent plays the role of the prey and must learn to find food at a fixed position in a grid world while being pursued by the predator. The considered objectives are finding food and avoiding the predator. As the results show, decomposing a multi-objective problem into sub-problems and using heuristics makes the learning process faster and easier to implement. We note that the first algorithm introduced in this paper learns faster, but it is more difficult to implement in a real-world environment. © 2012 IEEE.
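The abstract describes the modular idea only at a high level. Below is a minimal sketch of one plausible reading of the second (simultaneous) variant, assuming tabular Q-learning, one action-value table per objective, and a user-supplied heuristic H(s, a) weighted by a coefficient xi, as in HAQL-style action selection. All class and parameter names here are illustrative assumptions, not the authors' actual implementation.

import random
from collections import defaultdict

class ModularHAQL:
    """One Q-table per objective, plus a heuristic that biases action choice."""

    def __init__(self, actions, objectives, heuristic,
                 alpha=0.1, gamma=0.9, xi=1.0, epsilon=0.1):
        self.actions = list(actions)           # discrete action set
        self.objectives = objectives           # {name: reward_fn(s, a, s2) -> float}
        self.heuristic = heuristic             # H(s, a) -> float (assumed given)
        self.alpha, self.gamma = alpha, gamma  # learning rate, discount factor
        self.xi, self.epsilon = xi, epsilon    # heuristic weight, exploration rate
        # one action-value table per objective (module)
        self.q = {name: defaultdict(float) for name in objectives}

    def choose_action(self, state):
        # Epsilon-greedy over the summed module values plus the weighted heuristic.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: sum(q[(state, a)] for q in self.q.values())
                                 + self.xi * self.heuristic(state, a))

    def update(self, state, action, next_state):
        # Standard Q-learning update, applied to every module simultaneously.
        for name, reward_fn in self.objectives.items():
            q = self.q[name]
            r = reward_fn(state, action, next_state)
            best_next = max(q[(next_state, a)] for a in self.actions)
            q[(state, action)] += self.alpha * (r + self.gamma * best_next
                                                - q[(state, action)])

For the Predator-Prey task described above, the objectives mapping could hold separate reward functions for reaching the food cell and for avoiding the predator, with the heuristic favoring moves toward the food; the paper's first (iterative) variant would instead update only one module per learning phase.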
