Sleeping Experts and Bandits Approach to Constrained Markov Decision Processes

Authors
  • Chang, Hyeong Soo
Type
Preprint
Publication Date
Dec 16, 2014
Submission Date
Dec 16, 2014
Identifiers
arXiv ID: 1412.4898
Source
arXiv
License
Yellow
Abstract

This brief paper presents simple simulation-based algorithms for obtaining an approximately optimal policy within a given finite policy set for large finite constrained Markov decision processes. The algorithms are adapted from playing strategies for the "sleeping experts and bandits" problem, and their computational complexities are independent of the sizes of the state and action spaces when the given policy set is relatively small. We establish convergence of their expected performances to the value of an optimal policy, together with convergence rates, and also almost-sure convergence to an optimal policy at an exponential rate for the algorithm adapted from the sleeping-experts setting.
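The abstract describes selecting a near-optimal policy from a finite set via simulation, using strategies from the experts/bandits literature. The following is a minimal sketch of the exponential-weights ("Hedge") update that underlies expert-based selection of this kind; it is not the paper's algorithm. The function names, the simulator interface `sample_reward`, and the toy reward means are all hypothetical, and the constraint-handling side of the CMDP (which determines when a policy is "awake" in the sleeping-experts sense) is omitted.

```python
import math
import random

def hedge_select(policies, sample_reward, rounds=2000, eta=0.1, seed=0):
    """Exponential-weights (Hedge) selection over a finite policy set.

    `sample_reward(pi, rng)` is a hypothetical simulator returning a noisy
    reward in [0, 1] from one simulated episode under policy `pi`.
    Full-information variant: every policy is sampled each round, which
    corresponds to the sleeping-experts setting with all experts awake.
    Note the per-round cost depends only on the number of policies, not on
    the sizes of the state and action spaces.
    """
    rng = random.Random(seed)
    weights = {pi: 1.0 for pi in policies}
    for _ in range(rounds):
        for pi in policies:
            r = sample_reward(pi, rng)
            # Multiplicative update: higher observed reward -> higher weight.
            weights[pi] *= math.exp(eta * r)
    total = sum(weights.values())
    probs = {pi: w / total for pi, w in weights.items()}
    # Return the currently highest-weighted policy and the full distribution.
    return max(probs, key=probs.get), probs

# Toy usage with Bernoulli rewards (means are made up for illustration):
means = {"pi_A": 0.3, "pi_B": 0.7, "pi_C": 0.5}
best, probs = hedge_select(
    list(means),
    lambda pi, rng: float(rng.random() < means[pi]),
)
```

Because the weight of each policy grows exponentially in its cumulative sampled reward, the distribution `probs` concentrates on the best policy in the set, which is the kind of almost-sure convergence at an exponential rate the abstract refers to.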
