Optimal control for unknown discrete-time nonlinear Markov jump systems using adaptive dynamic programming.

Authors
  • Zhong, Xiangnan
  • He, Haibo
  • Zhang, Huaguang
  • Wang, Zhanshan
Type
Published Article
Journal
IEEE Transactions on Neural Networks and Learning Systems
Publisher
Institute of Electrical and Electronics Engineers
Publication Date
Dec 01, 2014
Volume
25
Issue
12
Pages
2141–2155
Identifiers
DOI: 10.1109/TNNLS.2014.2305841
PMID: 25420238
Source
Medline
License
Unknown

Abstract

In this paper, we develop and analyze an optimal control method for a class of discrete-time nonlinear Markov jump systems (MJSs) with unknown system dynamics. Specifically, an identifier is established for the unknown systems to approximate the system states, and an optimal control approach for nonlinear MJSs is developed to solve the Hamilton-Jacobi-Bellman equation based on the adaptive dynamic programming technique. We also provide a detailed stability analysis of the control approach, including the convergence of the performance index function for nonlinear MJSs and the existence of the corresponding admissible control. Neural network techniques are used to approximate the proposed performance index function and the control law. To demonstrate the effectiveness of our approach, three simulation studies (a linear case, a nonlinear case, and a single-link robot arm case) are used to validate the performance of the proposed optimal control method.
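For a concrete feel of the mode-coupled Bellman backup that underlies this kind of method, the following minimal Python sketch runs value iteration on the coupled Riccati recursions for a hypothetical two-mode Markov jump linear system. This covers only the linear special case; the paper's approach additionally uses an identifier and neural-network approximators for the nonlinear setting. All system matrices, costs, and the transition matrix below are illustrative assumptions, not values from the article.

import numpy as np

# Minimal sketch (not the paper's algorithm): value iteration on the coupled
# Riccati recursions for a hypothetical two-mode Markov jump linear system
#   x_{k+1} = A[i] x_k + B[i] u_k,  mode i evolving with transition matrix Pi.
# All matrices below are illustrative placeholders.
A = [np.array([[1.0, 0.1], [0.0, 0.98]]),
     np.array([[1.0, 0.1], [-0.05, 0.95]])]
B = [np.array([[0.0], [0.1]]),
     np.array([[0.0], [0.2]])]
Pi = np.array([[0.9, 0.1],
               [0.2, 0.8]])           # mode transition probabilities
Q = [np.eye(2), np.eye(2)]            # per-mode state cost
R = [np.eye(1), np.eye(1)]            # per-mode control cost

n_modes = len(A)
P = [np.zeros_like(A[0]) for _ in range(n_modes)]   # quadratic value weights per mode
K = [None] * n_modes                                 # mode-dependent feedback gains

for _ in range(500):                                 # value-iteration sweeps
    P_new = []
    for i in range(n_modes):
        # Mode-coupled expectation E_i(P) = sum_j Pi[i, j] * P[j]
        EP = sum(Pi[i, j] * P[j] for j in range(n_modes))
        # Greedy (minimizing) control gain for the current mode i
        K[i] = np.linalg.solve(R[i] + B[i].T @ EP @ B[i], B[i].T @ EP @ A[i])
        # Bellman backup of the quadratic performance index x' P_i x
        P_new.append(Q[i] + A[i].T @ EP @ (A[i] - B[i] @ K[i]))
    P = P_new

print("mode-dependent gains u_k = -K[i] x_k:", [k.round(3) for k in K])

In the nonlinear, unknown-dynamics setting treated by the paper, the quadratic weights P_i above would be replaced by neural-network approximations of the performance index and control law, with the identifier supplying the unknown dynamics.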
