Heuristically-accelerated multiagent reinforcement learning

Scopus citations
58
Production type
Article
Date
2014-02-05
Authors
BIANCHI, R.
MARTINS, M. F.
RIBEIRO, C. H. C.
COSTA, A. H. R.
Journal
IEEE Transactions on Cybernetics
Citation
BIANCHI, R.; MARTINS, M. F.; RIBEIRO, C. H. C.; COSTA, A. H. R. Heuristically-accelerated multiagent reinforcement learning. IEEE Transactions on Cybernetics, v. 44, n. 2, p. 252-265, Feb. 2014.
Abstract
This paper presents a novel class of algorithms, called Heuristically-Accelerated Multiagent Reinforcement Learning (HAMRL), which allows the use of heuristics to speed up well-known multiagent reinforcement learning (RL) algorithms such as Minimax-Q. HAMRL algorithms are characterized by a heuristic function, which suggests the selection of particular actions over others. This function represents an initial action selection policy, which can be handcrafted, extracted from previous experience in distinct domains, or learnt from observation. To validate the proposal, a thorough theoretical analysis proving the convergence of four algorithms from the HAMRL class (HAMMQ, HAMQ(λ), HAMQS, and HAMS) is presented. In addition, a comprehensive systematic evaluation was conducted in two distinct adversarial domains. The results show that even the most straightforward heuristics can produce virtually optimal action selection policies in far fewer episodes, significantly improving the performance of HAMRL over vanilla RL algorithms. © 2013 IEEE.
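As a rough illustration of the idea described in the abstract, the snippet below sketches heuristic-biased ε-greedy action selection: the heuristic function H only biases which action is chosen, while the learned value function Q is updated by the underlying RL algorithm as usual. All names, the weight `xi`, and the example values are illustrative assumptions, not the paper's exact formulation.

```python
import random

def ham_action(Q, H, state, actions, xi=1.0, epsilon=0.1):
    """Heuristic-biased epsilon-greedy selection (illustrative sketch).

    With probability 1 - epsilon, pick the action maximizing
    Q(s, a) + xi * H(s, a); otherwise explore uniformly at random.
    H suggests actions but does not alter the value updates, so a
    poor heuristic slows learning without preventing convergence.
    """
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)] + xi * H[(state, a)])

# Hypothetical example: Q is still uninformative, but the heuristic
# prefers "right" in state 0, so greedy selection follows it.
Q = {(0, "left"): 0.5, (0, "right"): 0.5}
H = {(0, "left"): 0.0, (0, "right"): 1.0}
print(ham_action(Q, H, 0, ["left", "right"], epsilon=0.0))  # "right"
```

With `xi = 0` the selection reduces to ordinary ε-greedy over Q, which is one way to see why the heuristic can only accelerate, not redefine, the learned policy.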
