Guided navigation from multiple viewpoints using qualitative spatial reasoning

dc.contributor.author: PERICO, D. H.
dc.contributor.author: SANTOS, P. E.
dc.contributor.author: BIANCHI, R.
dc.contributor.authorOrcid: https://orcid.org/0000-0001-9097-827X
dc.date.accessioned: 2022-01-12T21:54:37Z
dc.date.available: 2022-01-12T21:54:37Z
dc.date.issued: 2020-09-29
dc.description.abstract: © 2020 Taylor & Francis. Navigation is an essential ability for mobile agents to be fully autonomous and able to perform complex actions. However, the problem of navigation for agents with limited (or no) perception of the world, or without a fully defined motion model, has received little attention in AI and Robotics research. One way to tackle this problem is guided navigation, in which other autonomous agents, endowed with perception, combine their distinct viewpoints to infer the location of a sensory-deprived agent and the appropriate commands to guide it through a particular path. Given the limited knowledge about the physical and perceptual characteristics of the guided agent, this task should be conducted at a level of abstraction that allows a generic motion model and high-level commands applicable by any type of autonomous agent, including humans. The main task considered in this work is, given a group of autonomous agents perceiving their common environment with independent, egocentric and local vision sensors, the development and evaluation of algorithms that produce a set of high-level commands (involving qualitative directions, e.g. move left, go straight ahead) capable of guiding a sensory-deprived robot to a goal location. To accomplish this, the present paper assumes relations from the qualitative spatial reasoning formalism StarVars, whose inference method is also used to build a model of the domain. The paper presents two qualitative-probabilistic algorithms for guided navigation that combine a particle filter with qualitative spatial relations. In the first algorithm, the particle filter runs on a qualitative representation of the domain, whereas the second algorithm transforms the numerical output of a standard particle filter into qualitative relations to guide a sensory-deprived robot.
The proposed methods were evaluated in experiments carried out on a 2D humanoid robot simulator. A proof of concept executing the algorithms on a group of real humanoid robots is also presented. The results demonstrate the success of the guided navigation models proposed in this work.
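The second algorithm described above converts a standard particle filter's numerical pose estimate into a qualitative direction command. The sketch below illustrates one plausible way such a mapping could work; the function name, the coarse four-sector granularity, and the command labels are illustrative assumptions, not the paper's actual StarVars-based implementation (StarVars supports finer, parameterized angular granularities):

```python
import math

# Illustrative coarse commands; the paper's StarVars relations allow a
# configurable number of angular sectors, here collapsed to four.
COMMANDS = ["go straight ahead", "move left", "turn around", "move right"]

def qualitative_command(pose, goal):
    """Map a numeric pose estimate (x, y, theta) and a goal (gx, gy)
    to a coarse qualitative direction command."""
    x, y, theta = pose
    gx, gy = goal
    # Bearing of the goal in the robot's egocentric frame.
    bearing = math.atan2(gy - y, gx - x) - theta
    # Wrap to (-pi, pi].
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    # Quantize into four 90-degree sectors centred on front/left/back/right.
    sector = int(round(bearing / (math.pi / 2))) % 4
    return COMMANDS[sector]

# Example: a robot at the origin facing +x, goal directly ahead.
print(qualitative_command((0.0, 0.0, 0.0), (5.0, 0.0)))  # -> go straight ahead
```

In this sketch the numeric pose would come from the weighted mean of the particle set; the sensory-deprived robot only ever receives the discrete command string.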
dc.description.firstpage: 143
dc.description.issuenumber: 2
dc.description.lastpage: 172
dc.description.volume: 21
dc.identifier.citation: PERICO, D. H.; SANTOS, P. E.; BIANCHI, R. Guided navigation from multiple viewpoints using qualitative spatial reasoning. Spatial Cognition and Computation, v. 21, n. 2, p. 143-172, Dec. 2020.
dc.identifier.doi: 10.1080/13875868.2020.1857386
dc.identifier.issn: 1542-7633
dc.identifier.uri: https://repositorio.fei.edu.br/handle/FEI/3609
dc.relation.ispartof: Spatial Cognition and Computation
dc.rights: Restricted Access
dc.subject.otherlanguage: Knowledge representation and reasoning in robotic systems
dc.subject.otherlanguage: localization and exploration
dc.subject.otherlanguage: mapping
dc.subject.otherlanguage: multi-robot systems
dc.title: Guided navigation from multiple viewpoints using qualitative spatial reasoning
dc.type: Article
fei.scopus.citations: 3
fei.scopus.eid: 2-s2.0-85098513354
fei.scopus.subject: Common environment
fei.scopus.subject: Essential abilities
fei.scopus.subject: Level of abstraction
fei.scopus.subject: Multiple viewpoints
fei.scopus.subject: Probabilistic algorithm
fei.scopus.subject: Qualitative relation
fei.scopus.subject: Qualitative representation
fei.scopus.subject: Qualitative spatial reasoning
fei.scopus.updated: 2024-07-01
fei.scopus.url: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85098513354&origin=inward