Dataset Information

Control of chaotic systems by deep reinforcement learning.


ABSTRACT: Deep reinforcement learning (DRL) is applied to control a nonlinear, chaotic system governed by the one-dimensional Kuramoto-Sivashinsky (KS) equation. DRL uses reinforcement learning principles for the determination of optimal control solutions and deep neural networks for approximating the value function and the control policy. Recent applications have shown that DRL may achieve superhuman performance in complex cognitive tasks. In this work, we show that using restricted localized actuation, partial knowledge of the state based on limited sensor measurements and model-free DRL controllers, it is possible to stabilize the dynamics of the KS system around its unstable fixed solutions, here considered as target states. The robustness of the controllers is tested by considering several trajectories in the phase space emanating from different initial conditions; we show that DRL is always capable of driving and stabilizing the dynamics around target states. The possibility of controlling the KS system in the chaotic regime by using a DRL strategy solely relying on local measurements suggests the extension of the application of RL methods to the control of more complex systems such as drag reduction in bluff-body wakes or the enhancement/diminution of turbulent mixing.
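The setup described in the abstract can be illustrated with a minimal sketch: a Kuramoto-Sivashinsky environment with localized Gaussian actuators and a few point sensors, exposing the observation/action/reward interface a model-free DRL agent would train against. This is not the authors' implementation; the discretization (pseudo-spectral, semi-implicit Euler), domain size, sensor/actuator layout, and the reward (distance to the trivial fixed solution u = 0) are illustrative assumptions.

```python
import numpy as np

class KSEnv:
    """Toy Kuramoto-Sivashinsky control environment (illustrative sketch,
    not the paper's setup): u_t = -u*u_x - u_xx - u_xxxx on a periodic domain,
    advanced with a pseudo-spectral, semi-implicit Euler step."""

    def __init__(self, N=64, L=22.0, dt=0.01, n_sensors=8, n_actuators=4, seed=0):
        self.N, self.dt = N, dt
        self.x = L * np.arange(N) / N
        self.k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
        self.lin = self.k**2 - self.k**4              # linear operator in Fourier space
        # partial observation: a handful of equispaced point sensors
        self.sensors = np.linspace(0, N, n_sensors, endpoint=False).astype(int)
        # restricted, localized actuation: Gaussian-shaped forcing profiles (assumed)
        centers = np.linspace(0, L, n_actuators, endpoint=False)
        self.B = np.exp(-0.5 * ((self.x[None, :] - centers[:, None]) / 1.0) ** 2)
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.u = (0.1 * np.cos(2.0 * np.pi * self.x / self.x[-1])
                  + 0.01 * self.rng.standard_normal(self.N))
        return self.u[self.sensors]

    def step(self, action):
        forcing = action @ self.B                     # superpose localized actuators
        u_hat = np.fft.fft(self.u)
        nonlin = -0.5j * self.k * np.fft.fft(self.u**2)   # -u*u_x in Fourier space
        # implicit linear part, explicit nonlinear part + control forcing
        u_hat = (u_hat + self.dt * (nonlin + np.fft.fft(forcing))) \
                / (1.0 - self.dt * self.lin)
        self.u = np.real(np.fft.ifft(u_hat))
        # reward: negative distance to the u = 0 fixed solution (the target state)
        reward = -float(np.mean(self.u**2))
        return self.u[self.sensors], reward

env = KSEnv()
obs = env.reset()
for _ in range(200):
    action = np.zeros(4)   # placeholder; a DRL policy network would map obs -> action
    obs, reward = env.step(action)
```

A trained controller would replace the zero action with the output of a policy network conditioned only on the sensor readings, which is what makes the problem a partially observed, model-free control task.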

SUBMITTER: Bucci MA 

PROVIDER: S-EPMC6894543 | biostudies-literature | 2019 Nov

REPOSITORIES: biostudies-literature

Publications

Control of chaotic systems by deep reinforcement learning.

Bucci MA, Semeraro O, Allauzen A, Wisniewski G, Cordier L, Mathelin L

Proceedings. Mathematical, Physical, and Engineering Sciences, 2019 Nov 6, issue 2231


Similar Datasets

| S-EPMC9691497 | biostudies-literature
| S-EPMC7308943 | biostudies-literature
| S-EPMC9728972 | biostudies-literature
| S-EPMC9110778 | biostudies-literature
| S-EPMC8850200 | biostudies-literature
| S-EPMC7176278 | biostudies-literature
| S-EPMC9931225 | biostudies-literature
| S-EPMC7390927 | biostudies-literature
| S-EPMC6656766 | biostudies-literature
| S-EPMC9427729 | biostudies-literature