
Dataset Information


Vision-based Mobile Indoor Assistive Navigation Aid for Blind People.


ABSTRACT: This paper presents a new holistic vision-based mobile assistive navigation system to help blind and visually impaired people travel independently indoors. The system detects dynamic obstacles and adjusts path planning in real time to improve navigation safety. First, we develop an indoor map editor to parse geometric information from architectural models and generate a semantic map consisting of a global 2D traversable grid map layer and context-aware layers. By leveraging the visual positioning service (VPS) within the Google Tango device, we design a map alignment algorithm to bridge the visual area description file (ADF) and the semantic map to achieve semantic localization. Using the on-board RGB-D camera, we develop an efficient obstacle detection and avoidance approach based on a time-stamped map Kalman filter (TSM-KF) algorithm. A multi-modal human-machine interface (HMI) is designed with speech-audio interaction and robust haptic interaction through an electronic SmartCane. Finally, field experiments with blindfolded and blind subjects demonstrate that the proposed system provides an effective tool to help blind individuals with indoor navigation and wayfinding.
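The abstract names the time-stamped map Kalman filter (TSM-KF) but does not give its equations, so the sketch below only illustrates the two ingredients such an approach combines: a constant-velocity Kalman filter that smooths the position of an obstacle detected by the RGB-D camera, and an occupancy grid whose cells carry timestamps so marks left by dynamic obstacles can be aged out. All class names, noise values, the cell size, and the decay policy are assumptions made for illustration, not the paper's implementation.

```python
# Hedged sketch of a TSM-KF-style pipeline. The state layout, noise covariances,
# grid resolution, and stale-cell decay rule are assumptions, not the paper's values.
import numpy as np

class ObstacleKF:
    """Constant-velocity Kalman filter over a 2D obstacle state (x, y, vx, vy)."""
    def __init__(self, x0, y0, dt=0.1):
        self.dt = dt
        self.x = np.array([x0, y0, 0.0, 0.0])          # position and velocity
        self.P = np.eye(4)                              # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only position is measured
        self.Q = 0.01 * np.eye(4)                       # process noise (assumed)
        self.R = 0.05 * np.eye(2)                       # RGB-D measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

class TimeStampedGrid:
    """Occupancy grid whose cells remember when they were last observed,
    so obstacles that have moved away are aged out of the map."""
    def __init__(self, shape=(100, 100), decay_s=2.0):
        self.occ = np.zeros(shape, dtype=bool)
        self.stamp = np.full(shape, -np.inf)
        self.decay_s = decay_s

    def mark(self, i, j, t):
        self.occ[i, j] = True
        self.stamp[i, j] = t

    def prune(self, t):
        # Clear cells not re-observed within the decay window (assumed policy).
        stale = (t - self.stamp) > self.decay_s
        self.occ[stale] = False

if __name__ == "__main__":
    kf = ObstacleKF(1.0, 2.0)
    grid = TimeStampedGrid()
    for k, z in enumerate([(1.05, 2.02), (1.10, 2.05), (1.16, 2.09)]):
        kf.predict()
        kf.update(z)                                     # RGB-D position measurement
        i, j = int(kf.x[0] * 10), int(kf.x[1] * 10)      # world -> cell, 0.1 m cells (assumed)
        grid.mark(i, j, k * kf.dt)
        grid.prune(k * kf.dt)
    print("filtered obstacle position:", kf.x[:2])
```

In this sketch the decay window controls how quickly a vacated cell is released back to the path planner; the actual TSM-KF parameters and map-update rules are described in the paper, not in this abstract.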

SUBMITTER: Li B 

PROVIDER: S-EPMC6371975 | biostudies-literature | 2019 Mar

REPOSITORIES: biostudies-literature


Publications

Vision-based Mobile Indoor Assistive Navigation Aid for Blind People.

Bing Li, J. Pablo Muñoz, Xuejian Rong, Qingtian Chen, Jizhong Xiao, Yingli Tian, Aries Arditi, Mohammed Yousuf

IEEE Transactions on Mobile Computing, 2018 Jun 1 (issue 3)



Similar Datasets

| S-EPMC5581837 | biostudies-literature
| S-EPMC6189856 | biostudies-other
| S-EPMC5751565 | biostudies-literature
| S-EPMC7446825 | biostudies-literature
| S-EPMC7038713 | biostudies-literature
| S-EPMC8271916 | biostudies-literature
| S-EPMC8444080 | biostudies-literature
| S-EPMC4737291 | biostudies-other
| S-EPMC9291975 | biostudies-literature
| S-EPMC5938094 | biostudies-literature