Input addition and deletion in reinforcement: towards protean learning

Authors
  • Bonnici, Iago1
  • Gouaïch, Abdelkader1
  • Michel, Fabien1
  • 1 LIRMM, Univ Montpellier, CNRS, 34095 Montpellier Cedex 5, France
Type
Published Article
Journal
Autonomous Agents and Multi-Agent Systems
Publisher
Springer-Verlag
Publication Date
Nov 09, 2021
Volume
36
Issue
1
Identifiers
DOI: 10.1007/s10458-021-09534-6
Source
Springer Nature

Abstract

Reinforcement Learning (RL) agents are commonly thought of as adaptive decision procedures. They work on input/output data streams called “states”, “actions” and “rewards”. Most current research on RL adaptiveness to change works under the assumption that the streams' signatures (i.e. the arity and types of inputs and outputs) remain the same throughout the agent's lifetime. As a consequence, natural situations where the signatures vary (e.g. when new data streams become available, or when others become obsolete) are not studied. In this paper, we relax this assumption and consider that signature changes define a new learning situation called Protean Learning (PL). When signature changes occur, traditional RL agents become undefined, so they need to restart learning. Can better methods be developed under the PL view? To investigate this, we first construct a stream-oriented formalism to properly define PL and signature changes. Then, we run experiments in an idealized PL situation where input addition and deletion occur during the learning process. Results show that a simple PL-oriented method enables graceful adaptation to these arity changes, and is more efficient than restarting the process.
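To make the contrast concrete, here is a minimal sketch (not the paper's method; all names and the linear estimator are illustrative assumptions) of how an agent with a flat input vector might survive an arity change by resizing its parameters instead of restarting from scratch:

```python
class LinearAgent:
    """Toy linear value estimator over a flat input vector."""

    def __init__(self, n_inputs):
        self.weights = [0.0] * n_inputs

    def value(self, x):
        # Estimated value of input vector x.
        return sum(w * xi for w, xi in zip(self.weights, x))

    def update(self, x, target, lr=0.1):
        # Plain gradient step toward the target value.
        err = target - self.value(x)
        for i, xi in enumerate(x):
            self.weights[i] += lr * err * xi

    def add_input(self):
        # A new stream appears: give it a zero weight, so all
        # previously learned estimates are left unchanged.
        self.weights.append(0.0)

    def delete_input(self, i):
        # A stream becomes obsolete: drop only its weight and
        # keep everything learned from the remaining inputs.
        del self.weights[i]
```

The point of the sketch is that `add_input` and `delete_input` preserve the knowledge tied to the surviving inputs, whereas a traditional agent, whose signature is fixed at construction time, would have to be rebuilt and retrained when the input arity changes.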
