ECCOMAS 2024

Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning

  • Peitz, Sebastian (Universität Paderborn)
  • Stenner, Jan (Universität Paderborn)
  • Chidananda, Vikas (Universität Paderborn)


We present a convolutional framework that significantly reduces the complexity, and thus the computational effort, of distributed reinforcement learning control of partial differential equations (PDEs). By exploiting translational invariances, we transform the high-dimensional distributed control problem into a multi-agent control problem with many identical agents. Furthermore, since information is in many cases transported with finite velocity, the dimension of each agent's environment can be drastically reduced via a convolution operation over the state space of the PDE. In this setting, the complexity can be adjusted flexibly through the kernel width or by using a stride greater than one. A central question in this framework is the definition of the reward function, which may consist of both local and global contributions. We also investigate how existing symmetries in the underlying systems can be leveraged to further improve learning and performance. In our multi-agent approach, which involves multiple copies of the same agent, symmetries in behaviour can play a central role in achieving global tasks. We demonstrate the performance of the proposed framework on several standard PDE examples of increasing complexity, where stabilization is achieved by training a low-dimensional deep deterministic policy gradient (DDPG) agent with little training effort.
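To make the construction concrete, the following Python sketch illustrates the idea under stated assumptions; it is not the authors' implementation, and the function names, the periodic padding, and the reward weighting are our own choices for illustration. A sliding window of width kernel_width, applied with a chosen stride over a discretized 1D PDE state, yields one low-dimensional observation per agent, and a toy reward combines a per-agent local term with a shared global term.

```python
# Illustrative sketch (assumptions: 1D periodic domain, zero target state,
# hypothetical helper names) of the convolutional observation extraction
# and a local-plus-global reward for distributed PDE control.
import numpy as np

def local_observations(state, kernel_width=5, stride=1):
    """Extract one local observation per agent from the discretized state.

    Each agent sees only a window of kernel_width grid points around its
    location; a stride greater than one reduces the number of agents.
    Periodic padding reflects the translational invariance assumed here.
    """
    half = kernel_width // 2
    padded = np.pad(state, half, mode="wrap")      # periodic boundary
    centers = np.arange(0, state.size, stride)      # one agent per center
    return np.stack([padded[c : c + kernel_width] for c in centers])

def rewards(state, obs, weight_global=0.1):
    """Hypothetical reward with local and global contributions: each agent
    is penalized for the mean squared deviation within its own window,
    plus a shared term measuring the global deviation from the target."""
    local = -np.mean(obs**2, axis=1)                # one local cost per agent
    global_term = -np.mean(state**2)                # shared global cost
    return local + weight_global * global_term

# Example: 128 grid points, kernel width 5, stride 2 -> 64 identical agents
rng = np.random.default_rng(0)
u = rng.standard_normal(128)
obs = local_observations(u, kernel_width=5, stride=2)
r = rewards(u, obs)
print(obs.shape, r.shape)  # (64, 5) (64,): 5-dimensional observations
```

Because all agents are identical, a single policy (e.g. one DDPG actor) can be applied to every row of the observation array, so the number of trainable parameters is independent of the grid resolution.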