
Dataset Information


Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.


ABSTRACT: Reinforcement learning (RL) enables a robot to learn its optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used the error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as an intrinsically generated implicit feedback signal (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
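The core idea of the abstract can be illustrated with a minimal sketch: a tabular RL agent learns a gesture-to-action mapping from noisy binary rewards produced by a simulated ErrP classifier whose 91% balanced accuracy matches the figure reported above. The gesture and action names, learning parameters, and classifier model below are all illustrative assumptions, not details from the paper.

```python
import random

random.seed(0)

GESTURES = ["wave", "point", "stop"]          # hypothetical gesture set
ACTIONS = ["approach", "turn", "halt"]        # hypothetical robot actions
TRUE_MAP = {"wave": "approach", "point": "turn", "stop": "halt"}  # assumed ground truth
BACC = 0.91  # single-trial ErrP detection accuracy reported in the abstract

def errp_feedback(gesture, action):
    """Simulated implicit feedback: the ErrP classifier signals an error
    (reward -1) when the robot's action mismatches the intended mapping,
    but with 91% accuracy the label is occasionally flipped."""
    reward = 1 if TRUE_MAP[gesture] == action else -1
    if random.random() > BACC:
        reward = -reward  # classification error flips the feedback
    return reward

def learn_mapping(episodes=2000, alpha=0.2, epsilon=0.1):
    """Learn the gesture-action mapping from noisy EEG-derived rewards
    with a simple Q-table and epsilon-greedy exploration."""
    Q = {(g, a): 0.0 for g in GESTURES for a in ACTIONS}
    for _ in range(episodes):
        g = random.choice(GESTURES)
        if random.random() < epsilon:
            a = random.choice(ACTIONS)            # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(g, x)])  # exploit
        r = errp_feedback(g, a)
        Q[(g, a)] += alpha * (r - Q[(g, a)])      # incremental value update
    # Greedy policy: best action per gesture after training
    return {g: max(ACTIONS, key=lambda a: Q[(g, a)]) for g in GESTURES}

learned = learn_mapping()
```

Despite roughly 9% of the feedback labels being flipped, the averaging in the value update washes out the classification noise, so the learned greedy policy recovers the intended mapping, mirroring the abstract's claim that 91% bACC was sufficient for learning.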

SUBMITTER: Kim SK 

PROVIDER: S-EPMC5730605 | biostudies-literature | 2017 Dec

REPOSITORIES: biostudies-literature


Publications

Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.

Su Kyoung Kim, Elsa Andrea Kirchner, Arne Stefes, Frank Kirchner

Scientific Reports, 2017-12-14


Similar Datasets

| S-EPMC9263570 | biostudies-literature
| S-EPMC8982074 | biostudies-literature
| S-EPMC8677775 | biostudies-literature
| S-EPMC5376586 | biostudies-other
| S-EPMC7612196 | biostudies-literature
| S-EPMC6162322 | biostudies-literature
| S-EPMC10366154 | biostudies-literature
| S-EPMC6382150 | biostudies-literature
| S-EPMC8926160 | biostudies-literature
| S-EPMC6879530 | biostudies-literature