
Dataset Information


Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.


ABSTRACT: In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results from a convolutional neural network (CNN), hierarchical k-means clustering results of self-positions estimated by Monte Carlo localization (MCL), and a set of location names are used as features for vision, position, and word information, respectively. Experiments on forming hierarchical spatial concepts and on evaluating how well the proposed method predicts unobserved location names and position categories were performed using a robot in the real world. The results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to the predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved using the formed hierarchical spatial concepts.
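To illustrate one step of the pipeline described above, here is a minimal sketch of two-level (hierarchical) k-means clustering over 2D self-position estimates, written in plain NumPy. This is not the authors' implementation; the function names, cluster counts, and farthest-point initialization are illustrative assumptions. The top level would correspond to coarse regions (e.g., rooms) and the sub-level to fine positions within them.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain k-means with farthest-point initialization.

    Returns (labels, centroids). Farthest-point init is an
    illustrative choice that separates well-spread clusters reliably.
    """
    points = np.asarray(points, dtype=float)
    centroids = [points[0]]
    for _ in range(k - 1):
        # Next centroid: the point farthest from all chosen centroids.
        d = np.linalg.norm(points[:, None] - np.array(centroids)[None], axis=2)
        centroids.append(points[d.min(axis=1).argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

def hierarchical_kmeans(points, k_top, k_sub):
    """Two-level clustering: coarse position categories, then sub-categories.

    Returns the top-level labels and a dict mapping each top-level
    cluster index to the sub-cluster labels of its member points.
    """
    points = np.asarray(points, dtype=float)
    top_labels, _ = kmeans(points, k_top)
    hierarchy = {}
    for j in range(k_top):
        members = points[top_labels == j]
        sub_labels, _ = kmeans(members, min(k_sub, len(members)))
        hierarchy[j] = sub_labels
    return top_labels, hierarchy
```

In the paper's setting, the resulting two-level labels would serve as the position feature fed into hMLDA alongside the CNN-based vision feature and the word feature.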

SUBMITTER: Hagiwara Y 

PROVIDER: S-EPMC5859180 | biostudies-literature | 2018

REPOSITORIES: biostudies-literature


Publications

Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

Hagiwara Yoshinobu, Inoue Masakazu, Kobayashi Hiroyoshi, Taniguchi Tadahiro

Frontiers in Neurorobotics, 2018-03-13



Similar Datasets

| S-EPMC11324419 | biostudies-literature
| S-EPMC8571621 | biostudies-literature
| S-EPMC6882790 | biostudies-literature
| S-EPMC6692360 | biostudies-literature
| S-EPMC9643480 | biostudies-literature
| S-EPMC10663462 | biostudies-literature
| S-EPMC8062458 | biostudies-literature
| S-EPMC9603592 | biostudies-literature
| S-EPMC4396555 | biostudies-other
| S-EPMC7769489 | biostudies-literature