Abstract
This article studies event-triggered data-driven control of nonlinear systems via Q-learning. The system's input-output mapping is described in a pseudo-partial-derivative form, and a Q-learning-based optimization criterion is used to establish a data-driven control law. A dynamic penalty factor composed of tracking errors is introduced to accelerate error convergence, and a novel triggering rule related to this factor and the performance cost is proposed to save communication resources. Sufficient conditions are developed to guarantee the uniform ultimate boundedness of the resulting tracking-error system. Two simulation studies verify the effectiveness of the presented scheme.
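The general idea described in the abstract can be sketched in a few lines: estimate a pseudo-partial derivative (PPD) online, compute a candidate input from a one-step quadratic cost, and transmit it only when an event-triggering condition fires. The plant, gains, and threshold below are all illustrative assumptions, not the paper's system, cost, or triggering rule.

```python
# Hedged sketch of event-triggered, PPD-based data-driven control.
# Plant, gains (eta, mu, rho, lam), and threshold sigma are assumptions.

def plant(y, u):
    """Hypothetical scalar nonlinear plant (illustration only)."""
    return y / (1.0 + y**2) + u

# Compact-form dynamic linearization: y(k+1) - y(k) ≈ phi(k) * Δu(k),
# with the pseudo-partial derivative phi estimated recursively.
eta, mu = 0.8, 1.0     # PPD estimator step size / regularizer (assumed)
rho, lam = 0.6, 1.0    # control step size / penalty weight (assumed)
sigma = 0.02           # event-trigger threshold (assumed form and value)

y_d = 1.0              # constant reference trajectory
y, y_prev, phi = 0.0, 0.0, 1.0
u, u_last, du_last = 0.0, 0.0, 0.0
triggers = 0

for k in range(200):
    # Recursive PPD estimate from the latest input/output increments
    if abs(du_last) > 1e-8:
        phi += eta * du_last / (mu + du_last**2) * (y - y_prev - phi * du_last)
    err = y_d - y
    # Candidate input minimizing a one-step quadratic tracking cost
    u_cand = u + rho * phi / (lam + phi**2) * err
    # Event trigger: transmit only when the candidate input deviates
    # enough from the last transmitted one
    if k == 0 or abs(u_cand - u_last) > sigma:
        u_last = u_cand
        triggers += 1
    du_last = u_last - u
    u = u_last
    y_prev, y = y, plant(y, u)
```

With this simple trigger, transmissions stop once the input increments fall below the threshold, so the tracking error remains bounded rather than converging exactly to zero, consistent with the uniform-ultimate-boundedness claim in the abstract.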
Original language | English |
---|---|
Pages (from-to) | 1069-1077 |
Number of pages | 9 |
Journal | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
Volume | 55 |
Issue number | 2 |
DOIs | |
State | Published - 2025 |
Keywords
- Data-driven
- Q-learning algorithm
- event-triggered (ET)