Learning Semantic Combinatoriality from the Interaction between Linguistic and Behavioral Processes
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
This paper proposes a machine learning method for mapping object-manipulation verbs to sensory inputs and motor outputs grounded in the real world. The method learns motion concepts demonstrated by a user and generates sequences of motions using reference-point-dependent probability models. Four components needed to learn object-manipulation verbs are estimated from camera images: (1) a trajector and a landmark, the objects related by a transitive verb; (2) a reference point; (3) an intrinsic coordinate system; and (4) the parameters of the motion's probabilistic model. Motion concepts are learned with hidden Markov models (HMMs). In the motion generation phase, the method combines the learned HMMs to produce trajectories that accomplish goal-oriented tasks. Simulation experiments demonstrate that the method can generate motions by combining learned motion primitives.
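The core idea of reference-point-dependent modeling can be illustrated with a minimal sketch, which is not the paper's implementation: a demonstrated trajectory is re-expressed in a coordinate system whose origin is the reference point (e.g. the landmark object), a crude per-state mean model stands in for the HMM, and generation maps the state means back into world coordinates for a new reference point. The class and method names below are hypothetical.

```python
# Hedged sketch: a motion concept stored as per-state mean trajector
# positions relative to a reference point, standing in for a
# reference-point-dependent HMM with Gaussian emissions.

class MotionConcept:
    """Toy stand-in for a reference-point-dependent motion HMM."""

    def __init__(self, relative_means):
        # List of (dx, dy) mean positions relative to the reference point.
        self.relative_means = relative_means

    @staticmethod
    def learn(trajectory, reference_point, n_states=5):
        """'Train' by subsampling a demonstrated trajectory into
        n_states mean positions relative to the reference point
        (a crude surrogate for HMM parameter estimation)."""
        rx, ry = reference_point
        rel = [(x - rx, y - ry) for x, y in trajectory]
        step = max(1, len(rel) // n_states)
        return MotionConcept(rel[::step][:n_states])

    def generate(self, reference_point):
        """Generate a world-frame trajectory for a new reference point,
        so the same concept transfers to a differently placed landmark."""
        rx, ry = reference_point
        return [(rx + dx, ry + dy) for dx, dy in self.relative_means]

# A demonstrated motion toward a landmark at (2, 0):
demo = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0), (1.5, 0.5), (2.0, 0.0)]
concept = MotionConcept.learn(demo, reference_point=(2.0, 0.0), n_states=5)

# Reusing the learned concept with a landmark at (5, 3) yields a
# trajectory with the same shape, translated to the new reference point.
generated = concept.generate(reference_point=(5.0, 3.0))
```

Because the concept is stored relative to the reference point rather than in absolute coordinates, the generated trajectory ends at the new landmark while preserving the demonstrated motion shape.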