LP-based policies for restless bandits: necessary and sufficient conditions for (exponentially fast) asymptotic optimality

Authors
  • Gast, Nicolas
  • Gaujal, Bruno
  • Yan, Chen
Type
Preprint
Publication Date
Dec 22, 2023
Submission Date
Jun 18, 2021
Identifiers
DOI: 10.1287/moor.2022.0101
Source
arXiv
License
Yellow
Abstract

We provide a framework to analyse control policies for the restless Markovian bandit model, under both finite and infinite time horizons. We show that when the population of arms goes to infinity, the value of the optimal control policy converges to the value of a linear program (LP). We provide necessary and sufficient conditions for a generic control policy to be: i) asymptotically optimal; ii) asymptotically optimal with square-root convergence rate; iii) asymptotically optimal with exponential rate. We then construct the LP-index policy, which is asymptotically optimal with square-root convergence rate on all models, and with exponential rate when the model is non-degenerate (finite horizon) or satisfies a uniform global attractor property (infinite horizon). We next define the LP-update policy, essentially a repeated LP-index policy that solves a new linear program at each decision epoch. We provide numerical experiments comparing the performance of the LP-index policy, the LP-update policy, and other heuristics. Our results demonstrate that the LP-update policy generally outperforms the LP-index policy, and can have a significant advantage when the transition matrices are wrongly estimated.
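The LP the abstract refers to is, in the standard mean-field treatment of restless bandits, a relaxation in which the per-step activation budget is imposed only on the expected fraction of active arms. The sketch below is a minimal finite-horizon version of that relaxation, not the authors' code: the names P (passive/active transition matrices), r (per-state, per-action rewards), alpha (budgeted fraction of active arms), m0 (initial state distribution), and H (horizon) are illustrative assumptions.

```python
# Minimal sketch of the finite-horizon LP relaxation for a restless bandit.
# Assumptions (ours, not the paper's): states 0..S-1, actions a in {0,1}
# (passive/active), P[a] an S x S row-stochastic matrix, r an S x 2 reward
# array, alpha the budgeted fraction of active arms, m0 the initial state
# distribution. Variable y[t, s, a] is the expected fraction of arms in
# state s taking action a at time t.
import numpy as np
from scipy.optimize import linprog

def solve_lp(P, r, alpha, m0, H):
    S = len(m0)
    n = H * S * 2                          # one variable per (t, s, a)
    idx = lambda t, s, a: (t * S + s) * 2 + a

    # linprog minimizes, so negate the rewards.
    c = np.zeros(n)
    for t in range(H):
        for s in range(S):
            for a in (0, 1):
                c[idx(t, s, a)] = -r[s, a]

    A_eq, b_eq = [], []
    # Budget: sum_s y[t, s, 1] = alpha for every t.
    for t in range(H):
        row = np.zeros(n)
        for s in range(S):
            row[idx(t, s, 1)] = 1.0
        A_eq.append(row); b_eq.append(alpha)
    # Initial condition: sum_a y[0, s, a] = m0[s].
    for s in range(S):
        row = np.zeros(n)
        row[idx(0, s, 0)] = row[idx(0, s, 1)] = 1.0
        A_eq.append(row); b_eq.append(m0[s])
    # Flow conservation: sum_a y[t+1, s', a] = sum_{s,a} y[t, s, a] P[a][s, s'].
    for t in range(H - 1):
        for s2 in range(S):
            row = np.zeros(n)
            row[idx(t + 1, s2, 0)] = row[idx(t + 1, s2, 1)] = 1.0
            for s in range(S):
                for a in (0, 1):
                    row[idx(t, s, a)] -= P[a][s, s2]
            A_eq.append(row); b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n)
    y = res.x.reshape(H, S, 2)
    return -res.fun, y                     # LP value and occupancy measures
```

Under these assumptions, the LP-update policy described in the abstract would re-solve this program at every decision epoch, replacing m0 by the current empirical state distribution and shrinking H, and then activate arms so that the fraction activated in each state tracks y[0, s, 1]; the LP-index policy solves it once and follows the resulting priorities thereafter.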
