Autonomous and Online Generation of Skills Inferring Actions Adapted to Low-Level and High-Level Contextual States

Authors
  • Maestre, Carlos
Publication Date
Apr 18, 2018
Source
HAL-UPMC
Language
English
License
Unknown

Abstract

Robots are expected to assist us in our daily tasks. To that end, they may need to perform different tasks in changing scenarios. The number of dissimilar scenarios a robot can face is unlimited. Therefore, it is plausible to think that a robot must learn autonomously to perform tasks. A task consists in generating an expected change, i.e. an effect, in the environment, the robot configuration, or both. Hence, the robot must learn to perform the right action on the environment to obtain the expected effect.

An approach to learning these actions is through a continuous interaction of the robot with its environment, focusing on those actions that produce effects on the environment. The acquired relation between applying an action on an object and obtaining an effect is called an affordance. In recent years, many research efforts have been devoted to affordance learning. Related works range from the learning of simple push actions in tabletop scenarios to the definition of complex cognitive architectures. These works rely on different building blocks, such as vision methods to identify the position of the objects or predefined sensorimotor skills to generate effects in a constrained environment.

The use of predefined actions eases the learning of affordances, producing rich and consistent information about the changes produced on an object. However, we claim that the use of these actions constrains the scalability of the available experiments to dynamic and noisy environments. The current work addresses the autonomous learning of a set of sensorimotor skills through interactions with an environment. Each skill must generate a continuous action to reproduce an effect on an object, adapted to the object position. Besides, each skill is simultaneously adapted to low-level perturbations, e.g. a change in the object position, and high-level contextual changes, e.g. a stove turning on.

A few questions arise while addressing the skill generation. First, how can a robot explore an environment, gathering information with limited a priori knowledge about it? We address this question through a babbling of the environment driven by intrinsic motivation. We define a method, called Novelty-driven Evolutionary Babbling (NovEB), to explore possible robot movements while focusing on those that generate the highest novelty from the perception point of view. Perception relies on raw images gathered through the robot's cameras. Using this method, a simulated PR2 robot discovered on its own which regions of the workspace generate novel perceptions and focused its exploration around them.

Second, how can a robot autonomously build a set of skills based on initial information about the environment? We propose a method, named Adaptive Affordance Learning (A2L), which endows a robot with the capacity to learn affordances associated with an object, both adapting the robot's skills to the object position and increasing the robot's information about the object when needed. Two main contributions are presented: (1) an interaction process with the object adapting each movement to the fixed object position, decomposing each action into a sequence of discrete movements; (2) an iterative process to increase the information about the object. These contributions are assessed in two experiments where a robot learns to push a box to different positions on a table: first, in a virtual setup with a simulated robotic arm, and then on a simulated Baxter robot.
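The NovEB exploration loop described above can be pictured as a small novelty-search routine. The following is a minimal sketch, not the thesis implementation: it assumes movements are encoded as flat parameter vectors, that a hypothetical execute_movement function runs a movement in simulation and returns a down-sampled camera image, and that novelty is measured as the mean distance to the k nearest perceptions stored in an archive.

    # Minimal sketch of a novelty-driven babbling loop (NovEB-style).
    # execute_movement, the parameter encoding and the novelty measure
    # are assumptions made for illustration only.
    import numpy as np

    def novelty(perception, archive, k=5):
        """Mean Euclidean distance to the k closest archived perceptions."""
        if not archive:
            return float("inf")
        dists = np.sort([np.linalg.norm(perception - p) for p in archive])
        return float(np.mean(dists[:k]))

    def noveb(execute_movement, n_generations=50, pop_size=20, dim=8, sigma=0.05):
        rng = np.random.default_rng(0)
        population = rng.uniform(-1.0, 1.0, size=(pop_size, dim))  # movement parameters
        archive = []
        for _ in range(n_generations):
            scores = []
            for params in population:
                image = execute_movement(params)      # raw camera image (hypothetical)
                scores.append(novelty(image, archive))
                archive.append(image)
            # Keep the most novel movements and mutate them to form the next generation.
            parents = population[np.argsort(scores)[-pop_size // 2:]]
            children = parents + rng.normal(0.0, sigma, size=parents.shape)
            population = np.vstack([parents, children])
        return archive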
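The first A2L contribution, decomposing an action into discrete movements adapted to the object position, can likewise be illustrated with a rough sketch. The step size, approach offset and push length below are assumed values for the example, not parameters taken from the thesis.

    # Illustrative A2L-style decomposition: a push is built as a sequence of
    # discrete end-effector displacements computed from the object position.
    import numpy as np

    def decompose_push(ee_pos, object_pos, push_dir, step_size=0.02,
                       approach_offset=0.05, push_length=0.10):
        """Return a list of discrete end-effector waypoints realising a push."""
        ee_pos = np.asarray(ee_pos, float)
        push_dir = np.asarray(push_dir, float)
        push_dir = push_dir / np.linalg.norm(push_dir)
        contact = np.asarray(object_pos, float) - approach_offset * push_dir
        # 1) Discrete approach movements from the current pose to the contact point.
        n_approach = max(1, int(np.ceil(np.linalg.norm(contact - ee_pos) / step_size)))
        waypoints = [ee_pos + (contact - ee_pos) * i / n_approach
                     for i in range(1, n_approach + 1)]
        # 2) Discrete push movements through the object along the push direction.
        n_push = max(1, int(np.ceil(push_length / step_size)))
        waypoints += [contact + push_dir * step_size * i for i in range(1, n_push + 1)]
        return waypoints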
Finally, we extend the previous skill generation to environments including both low-level and high-level perturbations. Initially, one or more kinaesthetic demonstrations of an action producing an effect on the object are provided to the robot, through a Learning from Demonstration approach. Then, a vector field is computed for each demonstration, providing information about the next movement to execute based on the robot context, composed of the relative position of the object w.r.t. the robot's end-effector and other high-level information. An action generator is learned, inferring in closed loop the next movement to reproduce an effect on the object based on the current robot context. In this work, a study is performed to select the best parametrization to build a push-to-the-right skill and a grasp skill that reproduce an effect. Then, the selected parametrization is used to build a set of diverse skills, which are validated in several experiments performing tasks with different objects. The assessment of the built skills is performed directly on a physical Baxter robot.
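As a rough sketch of this last idea, a demonstration can be stored as (context, displacement) pairs and queried in closed loop by nearest neighbour. This is only an illustration under simplifying assumptions: the high-level context is encoded here as a single number, and the actual field representation and regression scheme used in the thesis may differ.

    # Sketch of a vector-field action generator built from one kinaesthetic
    # demonstration; the nearest-neighbour lookup is an assumption for the example.
    import numpy as np

    def build_field(demo_ee_positions, demo_object_positions, high_level_state):
        """Turn a demonstrated trajectory into (context, next-movement) pairs."""
        field = []
        for t in range(len(demo_ee_positions) - 1):
            rel = np.asarray(demo_object_positions[t]) - np.asarray(demo_ee_positions[t])
            context = np.concatenate([rel, [high_level_state]])   # low- + high-level context
            movement = np.asarray(demo_ee_positions[t + 1]) - np.asarray(demo_ee_positions[t])
            field.append((context, movement))
        return field

    def next_movement(field, ee_pos, object_pos, high_level_state):
        """Closed-loop query: infer the next displacement from the current context."""
        rel = np.asarray(object_pos) - np.asarray(ee_pos)
        query = np.concatenate([rel, [high_level_state]])
        dists = [np.linalg.norm(query - c) for c, _ in field]
        return field[int(np.argmin(dists))][1]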
