Learning on real robots from experience and simple user feedback

Red de Agentes Físicos
  • Autonomous Robots
  • Reinforcement Learning
  • Computer Science and Artificial Intelligence
  • Computer Science


In this article we describe a novel algorithm that enables fast, continuous learning on a physical robot operating in a real environment. The learning process is never stopped: new knowledge gained from robot-environment interactions can be incorporated into the controller at any time. Our algorithm lets a human observer control the reward given to the robot, avoiding the burden of defining a reward function. Despite this highly non-deterministic reinforcement signal, the experimental results described in this paper show that the robot adapts quickly to the diversity of situations it encounters while moving through several environments.
