Average optimality for continuous-time Markov decision processes with a policy iteration approach

Journal
Journal of Mathematical Analysis and Applications
ISSN: 0022-247X
Publisher
Elsevier
Volume
339
Issue
1
Identifiers
DOI: 10.1016/j.jmaa.2007.06.071
Keywords
  • Continuous-Time Markov Decision Process
  • Policy Iteration Algorithm
  • Average Criterion
  • Optimality Equation
  • Optimal Stationary Policy
Disciplines
  • Computer Science
  • Mathematics

Abstract

This paper deals with the average expected reward criterion for continuous-time Markov decision processes in general state and action spaces. The transition rates of the underlying continuous-time jump Markov processes are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. We give conditions on the system's primitive data under which we prove the existence of a solution to the average reward optimality equation and of an average optimal stationary policy. Under these conditions we also ensure the existence of ϵ-average optimal stationary policies. Moreover, we study some properties of average optimal stationary policies: we not only establish another average optimality equation satisfied by an average optimal stationary policy, but also present an interesting "martingale characterization" of such a policy. The approach in this paper is based on the policy iteration algorithm; it is rather different from both the usual "vanishing discount factor approach" and the "optimality inequality approach" widely used in the previous literature.
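For intuition, the policy iteration scheme the abstract refers to can be sketched on a small finite CTMDP (the paper itself treats general state and action spaces with possibly unbounded rates; the states, transition rates, and reward rates below are hypothetical toy data). Each iteration evaluates the current stationary policy by solving the evaluation equations r_π(s) − g + Σ_s′ q(s, s′ | π) h(s′) = 0 for the gain g and bias h (fixing h at one state for uniqueness), then improves greedily against h:

```python
import numpy as np

# Hypothetical toy CTMDP: 2 states, 2 actions. Q[a][s, s'] are transition
# rates (each row sums to zero); r[s, a] are reward rates.
Q = np.array([
    [[-1.0, 1.0], [2.0, -2.0]],   # rates under action 0
    [[-3.0, 3.0], [0.5, -0.5]],   # rates under action 1
])
r = np.array([
    [1.0, 0.5],   # reward rates in state 0 for actions 0, 1
    [0.0, 2.0],   # reward rates in state 1 for actions 0, 1
])
n_states, n_actions = r.shape

def evaluate(policy):
    """Solve r_pi(s) - g + sum_s' q_pi(s, s') h(s') = 0 for the gain g
    and bias h, fixing h(0) = 0 to pin down a unique solution."""
    Qpi = np.array([Q[policy[s], s] for s in range(n_states)])
    rpi = np.array([r[s, policy[s]] for s in range(n_states)])
    # Unknown vector x = [g, h(1), ..., h(n-1)]
    A = np.zeros((n_states, n_states))
    A[:, 0] = -1.0          # coefficient of g in each equation
    A[:, 1:] = Qpi[:, 1:]   # coefficients of h(1), ..., h(n-1)
    x = np.linalg.solve(A, -rpi)
    g = x[0]
    h = np.concatenate(([0.0], x[1:]))
    return g, h

def improve(h):
    """In each state, pick an action maximizing r(s,a) + sum_s' q(s,s';a) h(s')."""
    scores = r + np.einsum('asj,j->sa', Q, h)
    return scores.argmax(axis=1)

policy = np.zeros(n_states, dtype=int)  # start from an arbitrary policy
while True:
    g, h = evaluate(policy)
    new_policy = improve(h)
    if np.array_equal(new_policy, policy):
        break                           # policy is average optimal
    policy = new_policy

print("optimal stationary policy:", policy, "average reward:", g)
```

In this finite setting the gain strictly improves at each iteration until the greedy step returns the same policy, at which point the pair (g, h) solves the average reward optimality equation; the paper's contribution is establishing that this scheme works in general spaces with unbounded transition and reward rates.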
