Wang D, Zhao M, Ha M, Qiao J. Convergence and Stability of Optimal Regulation via Generalized N-Step Value Gradient Learning. IEEE Transactions on Neural Networks and Learning Systems 2024;35:10923-10934. [PMID: 37027589] [DOI: 10.1109/tnnls.2023.3245630]
Abstract
In this article, the generalized N-step value gradient learning (GNSVGL) algorithm, which incorporates a long-term prediction parameter λ, is developed for infinite-horizon discounted near-optimal control of discrete-time nonlinear systems. The proposed GNSVGL algorithm accelerates the learning process of adaptive dynamic programming (ADP) and achieves better performance by learning from multiple future rewards. Unlike the traditional N-step value gradient learning (NSVGL) algorithm, which uses zero initial functions, the proposed GNSVGL algorithm is initialized with positive definite functions. Convergence of the value-iteration-based algorithm is analyzed under different initial cost functions. A stability condition on the iteration index is established under which the iterative control law renders the system asymptotically stable; once the system is asymptotically stable at the current iteration, all subsequent iterative control laws are guaranteed to be stabilizing. Two critic neural networks and one action network are constructed to approximate the one-return costate function, the λ-return costate function, and the control law, respectively, and the one-return and λ-return critic networks are combined to train the action network. Finally, simulation studies and comparisons confirm the superiority of the developed algorithm.
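The λ-weighted blending of multiple future rewards described above follows the general shape of a truncated λ-return, which mixes all n-step returns up to horizon N. Below is a minimal sketch of that weighting, assuming the conventional definition G^λ = (1−λ) Σ_{n=1}^{N−1} λ^(n−1) G^(n) + λ^(N−1) G^(N); the function name and signature are hypothetical, and this illustrates only the λ-weighting idea on scalar returns, not the paper's exact costate (value gradient) update.

```python
import numpy as np

def truncated_lambda_return(rewards, values, gamma, lam, N):
    """Blend the 1..N-step discounted returns into one truncated lambda-return.

    rewards: [r_t, ..., r_{t+N-1}], the next N rewards
    values:  [V(s_{t+1}), ..., V(s_{t+N})], critic estimates for bootstrapping
    Assumed definition (not from the paper):
        G^lam = (1 - lam) * sum_{n=1}^{N-1} lam^(n-1) * G^(n) + lam^(N-1) * G^(N)
    """
    # n-step returns: G^(n) = sum_{k=0}^{n-1} gamma^k r_{t+k} + gamma^n V(s_{t+n})
    G = []
    running = 0.0
    for n in range(1, N + 1):
        running += gamma ** (n - 1) * rewards[n - 1]
        G.append(running + gamma ** n * values[n - 1])
    # lambda weights; they sum to 1 by the geometric-series identity
    weights = [(1 - lam) * lam ** (n - 1) for n in range(1, N)]
    weights.append(lam ** (N - 1))
    return float(np.dot(weights, G))

# Example with illustrative numbers: N = 3, gamma = 0.95, lambda = 0.8
r = [1.0, 0.5, 0.2]   # rewards r_t, r_{t+1}, r_{t+2}
v = [4.0, 3.5, 3.0]   # critic estimates V(s_{t+1}), V(s_{t+2}), V(s_{t+3})
print(truncated_lambda_return(r, v, gamma=0.95, lam=0.8, N=3))
```

Under this assumed definition, λ = 0 recovers the one-step return associated with the one-return critic, while λ = 1 keeps only the N-step return, which is consistent with the abstract's framing of GNSVGL learning from more than one future reward.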