Qualitative case-based reasoning and learning

Scopus citations
35
Publication type
Article
Date
2020-03-20
Authors
Homem, Thiago Pedro Donadon
Santos, Paulo E.
Costa, Anna Helena Reali
Bianchi, Reinaldo A. da C.
Mantaras, Ramon Lopez de
Journal
Artificial Intelligence
Citation
HOMEM, T. P. D.; SANTOS, P. E.; COSTA, A. H. R.; BIANCHI, R. A. DA C.; MANTARAS, R. LOPEZ DE. Qualitative case-based reasoning and learning. Artificial Intelligence, v. 283, p. 103258, 2020.
Full text (DOI)
Keywords
Case-based reasoning, Qualitative spatial reasoning, Reinforcement learning, Robot soccer
Abstract
The development of autonomous agents that perform tasks with the same dexterity as humans is one of the challenges of artificial intelligence and robotics. This motivates research on intelligent agents, since an agent must choose the best action in a dynamic environment in order to maximise its final score. In this context, the present paper introduces a novel algorithm for Qualitative Case-Based Reasoning and Learning (QCBRL), a case-based reasoning system that uses qualitative spatial representations to retrieve and reuse cases by means of relations between objects in the environment. Combined with reinforcement learning, QCBRL allows the agent to learn new qualitative cases at runtime, without assuming a pre-processing step. To avoid cases that do not lead to maximum performance, QCBRL performs case-base maintenance, excluding such cases and obtaining new (more suitable) ones. Experimental evaluation of QCBRL was conducted in a simulated robot-soccer environment, in a real humanoid-robot environment, and on simple tasks in two distinct gridworld domains. Results show that QCBRL outperforms traditional RL methods. When running QCBRL in autonomous soccer matches, the robots scored a higher average number of goals than when using pure numerical models. In the gridworlds considered, the agent was able to learn optimal and safe policies.
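The abstract describes QCBRL only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea it names: cases indexed by qualitative spatial relations, retrieval and reuse by relation matching, case values updated from reward via a simple reinforcement-learning rule, and a crude case-base maintenance step. It is not the authors' implementation; all class names, thresholds, relations, and the reward model are assumptions made for the example.

# Illustrative sketch only; names and parameters are hypothetical, not QCBRL's actual code.
import random
from dataclasses import dataclass

@dataclass
class QualitativeCase:
    # Problem part: a frozenset of qualitative relations, e.g.
    # ("ball", "near", "robot") or ("goal", "front_of", "robot").
    relations: frozenset
    # Solution part: the action this case suggests.
    action: str
    # Learned quality of the case, updated from environment reward.
    value: float = 0.0
    uses: int = 0

class CaseBase:
    def __init__(self, alpha=0.1, min_value=-0.5):
        self.cases: list[QualitativeCase] = []
        self.alpha = alpha          # learning rate for case-value updates
        self.min_value = min_value  # maintenance threshold (assumed)

    def retrieve(self, relations: frozenset) -> QualitativeCase | None:
        # Retrieval: prefer cases whose relations overlap the current
        # situation the most; break ties by learned case value.
        scored = [(len(c.relations & relations), c.value, c) for c in self.cases]
        scored = [s for s in scored if s[0] > 0]
        if not scored:
            return None
        scored.sort(key=lambda s: (s[0], s[1]), reverse=True)
        return scored[0][2]

    def learn(self, relations: frozenset, action: str, reward: float):
        # Reuse/retain: update an existing case's value from the observed
        # reward, or retain a new case learned at runtime.
        for c in self.cases:
            if c.relations == relations and c.action == action:
                c.value += self.alpha * (reward - c.value)
                c.uses += 1
                break
        else:
            self.cases.append(QualitativeCase(relations, action, value=reward, uses=1))
        # Maintenance: drop well-tried cases whose learned value stays poor.
        self.cases = [c for c in self.cases if c.value >= self.min_value or c.uses < 5]

# Usage: one decision step in a toy robot-soccer-like situation.
if __name__ == "__main__":
    actions = ["kick", "dribble", "turn"]
    cb = CaseBase()
    situation = frozenset({("ball", "near", "robot"), ("goal", "front_of", "robot")})
    case = cb.retrieve(situation)
    action = case.action if case else random.choice(actions)  # explore if no case matches
    reward = 1.0 if action == "kick" else 0.0                  # stand-in for environment feedback
    cb.learn(situation, action, reward)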

Collections