Investigating Integral Reinforcement Learning to Achieve Asymptotic Stability in Underactuated Mechanical Systems
Tonello A. M.
2024-01-01
Abstract
This letter introduces an innovative data-driven integral reinforcement learning (IRL) algorithm for the control of a class of underactuated mechanical systems. We propose a novel value function that allows the potential energy of an underactuated system to be shaped and learned, driving it to a desired closed-loop potential energy. From this value function, we derive an actor control policy that ensures asymptotic stability. In addition, we propose to parameterize the value function with a multilayer perceptron (with 0, 1, and 2 hidden layers), exploring various parameter configurations. Finally, we assess the performance of the proposed IRL algorithm through simulations and experimental results, confirming the practical effectiveness of the control design approach.
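To make the parameterization mentioned in the abstract concrete, the sketch below shows one way a value function over joint positions could be represented by a multilayer perceptron with a configurable number of hidden layers (0, 1, or 2). This is a minimal illustration only: the layer sizes, tanh activation, initialization, and function names (`make_value_mlp`, `value`) are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

def make_value_mlp(n_q, hidden_layers=(32, 32), seed=0):
    """Build a small MLP V_theta(q) mapping joint positions q to a scalar value.

    Illustrative sketch: the letter parameterizes its value function with a
    multilayer perceptron of 0, 1, or 2 hidden layers; the sizes, activation,
    and initialization used here are assumptions, not the authors' settings.
    """
    rng = np.random.default_rng(seed)
    sizes = [n_q, *hidden_layers, 1]
    # One (weight, bias) pair per layer; hidden_layers=() gives a linear map (0 hidden layers).
    params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]

    def value(q):
        h = np.atleast_2d(q)
        for i, (W, b) in enumerate(params):
            h = h @ W + b
            if i < len(params) - 1:   # tanh on hidden layers only
                h = np.tanh(h)
        return h.squeeze()            # scalar value estimate V_theta(q)

    return value, params

# Hypothetical usage: a 2-DOF underactuated system with a single hidden layer.
V, theta = make_value_mlp(n_q=2, hidden_layers=(16,))
print(V(np.array([0.1, -0.2])))
```

In an IRL scheme of this kind, the parameters `theta` would be updated from measured trajectory data so that the learned value function reproduces the desired closed-loop potential energy, with the actor policy derived from it; the training loop itself is beyond the scope of this sketch.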