Protocols from perceptual observations

Citations in Scopus
38
Production type
Article
Date
2005
Authors
Needham C.J.
Santos P.E.
Magee D.R.
Devin V.
Hogg D.C.
Cohn A.G.
Journal
Artificial Intelligence
Citation
NEEDHAM, Chris; SANTOS, Paulo E.; MAGEE, Derek; DEVIN, V.; HOGG, David; COHN, Antony. Protocols from Perceptual Observations. Artificial Intelligence, v. 167, p. 103-136, 2005.
Full text (DOI)
Abstract
This paper presents a cognitive vision system capable of autonomously learning protocols from perceptual observations of dynamic scenes. The work is motivated by the aim of creating a synthetic agent that can observe a scene containing interactions between unknown objects and agents, and learn models of these sufficient to act in accordance with the implicit protocols present in the scene. Discrete concepts (utterances and object properties), and temporal protocols involving these concepts, are learned in an unsupervised manner from continuous sensor input alone. Crucial to this learning process are methods for spatio-temporal attention applied to the audio and visual sensor data. These identify subsets of the sensor data relating to discrete concepts. Clustering within continuous feature spaces is used to learn object property and utterance models from processed sensor data, forming a symbolic description. The Progol Inductive Logic Programming system is subsequently used to learn symbolic models of the temporal protocols, in the presence of noise and over-representation in the symbolic data input to it. The models learned are used to drive a synthetic agent that can interact with the world in a semi-natural way. The system has been evaluated in the domain of table-top game playing and has been shown to be successful at learning protocol behaviours in such real-world audio-visual environments. © 2005 Elsevier B.V. All rights reserved.
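The symbol-formation step the abstract describes, clustering continuous feature vectors so that each cluster becomes a discrete symbol fed to the ILP stage, can be illustrated with a minimal sketch. This is not the paper's implementation: the feature data, cluster count, and plain k-means with farthest-first initialization are all illustrative assumptions.

```python
# Illustrative sketch only: mapping continuous "object property" feature
# vectors to discrete symbols via k-means clustering. The data and the
# choice of algorithm are hypothetical, not taken from the paper.

def _dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=10):
    """Cluster 2-D points into k groups; return (labels, centroids)."""
    # Deterministic farthest-first initialization.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(_dist2(p, c) for c in centroids)))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (its discrete symbol).
        labels = [min(range(k), key=lambda c: _dist2(p, centroids[c])) for p in points]
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return labels, centroids

# Two well-separated synthetic feature clusters (hypothetical sensor data).
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
labels, cents = kmeans(data, k=2)
# Points in the same cluster now share one symbol, e.g. property_0 / property_1.
symbols = [f"property_{l}" for l in labels]
```

In the paper's pipeline such symbols, together with temporal ordering information, form the ground facts over which Progol induces protocol rules.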

Collections