The factored policy-gradient planner

Authors
Olivier Buffet, Douglas Aberdeen
Journal
Artificial Intelligence
ISSN: 0004-3702
Publisher
Elsevier
Publication Date
2009
Volume
173
Identifiers
DOI: 10.1016/j.artint.2008.11.008
Keywords
  • Concurrent Probabilistic Temporal Planning
  • Reinforcement Learning
  • Policy-Gradient
  • AI Planning
Disciplines
  • Computer Science
  • Mathematics

Abstract

We present an any-time concurrent probabilistic temporal planner (CPTP) that includes continuous and discrete uncertainties and metric functions. Rather than relying on dynamic programming, our approach builds on methods from stochastic local policy search: we optimise a parameterised policy using gradient ascent. The flexibility of this policy-gradient approach, combined with its low memory use, its use of function approximation methods, and the factorisation of the policy, allows us to tackle complex domains. This factored policy-gradient (FPG) planner can optimise steps to goal, the probability of success, or a combination of both. We compare the FPG planner to other planners on CPTP domains, and on simpler but better-studied non-concurrent, non-temporal probabilistic planning (PP) domains. We also present FPG-ipc, the PP version of the planner, which was successful in the probabilistic track of the fifth International Planning Competition.
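
The central mechanism the abstract describes can be illustrated with a small sketch: each eligible action gets its own parameterised (here, logistic) policy over shared state features, and all factors are trained together by gradient ascent on episodic return. The toy domain, names, and hyperparameters below are illustrative assumptions, and a simple episodic REINFORCE estimator stands in for the planner's actual gradient machinery; this is not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

N_TASKS = 3                 # one independently parameterised factor per task
N_FEATURES = N_TASKS + 1    # task-completion flags plus a bias term
HORIZON = 25                # step cut-off per episode
ALPHA = 0.1                 # gradient-ascent step size

# One logistic policy per action factor: factor i decides, from shared
# state features, whether to start task i on this step.
theta = np.zeros((N_TASKS, N_FEATURES))

def feats(done):
    # State features: which tasks are complete, plus a constant bias.
    return np.append(done.astype(float), 1.0)

def episode(theta):
    # Sample one episode; return summed grad-log-policy terms and the return.
    done = np.zeros(N_TASKS, dtype=bool)
    grads = np.zeros_like(theta)
    ret = 0.0
    for _ in range(HORIZON):
        if done.all():
            break
        x = feats(done)
        for i in range(N_TASKS):
            p = 1.0 / (1.0 + np.exp(-theta[i] @ x))  # P(start task i | x)
            a = rng.random() < p                     # sample this factor's choice
            grads[i] += (a - p) * x                  # grad of log Bernoulli policy
            if a:
                done[i] = True                       # toy dynamics: starting = finishing
        ret -= 1.0                                   # per-step cost ("steps to goal")
    if done.all():
        ret += 10.0                                  # goal bonus ("probability of success")
    return grads, ret

baseline = 0.0                                       # running-average reward baseline
for _ in range(2000):
    grads, ret = episode(theta)
    baseline += 0.05 * (ret - baseline)
    theta += ALPHA * (ret - baseline) * grads        # REINFORCE ascent step

print("learned P(start) in the initial state:",
      np.round(1.0 / (1.0 + np.exp(-theta @ feats(np.zeros(N_TASKS, bool)))), 2))

Because each factor keeps only its own small weight vector and learns from sampled trajectories, memory grows with the number of action factors rather than with the state space, which is the property that lets the policy-gradient approach scale to the concurrent domains the abstract mentions.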
