Describing Upper-Body Motions Based on Labanotation for Learning-from-Observation Robots

Authors
  • Ikeuchi, Katsushi1
  • Ma, Zhaoyuan2
  • Yan, Zengqiang3
  • Kudoh, Shunsuke4
  • Nakamura, Minako5
  • 1 Microsoft Corporation, Redmond, USA
  • 2 Worcester Polytechnic Institute, Worcester, USA
  • 3 Hong Kong University of Science and Technology, Hong Kong, China
  • 4 University of Electro-Communications, Tokyo, Japan
  • 5 Ochanomizu University, Tokyo, Japan
Type
Published Article
Journal
International Journal of Computer Vision
Publisher
Springer-Verlag
Publication Date
Oct 05, 2018
Volume
126
Issue
12
Pages
1415–1429
Identifiers
DOI: 10.1007/s11263-018-1123-1
Source
Springer Nature
License
Green

Abstract

We have been developing a paradigm, which we call learning-from-observation, that enables a robot to automatically acquire a program for a series of operations, that is, to understand what to do, by observing humans performing the same operations. Because a simple mimicking method that repeats exact joint angles or exact end-effector trajectories does not work well, owing to the kinematic and dynamic differences between a human and a robot, the proposed method employs intermediate symbolic representations, called tasks, that conceptually capture what to do from observation. These tasks are subsequently mapped to appropriate robot operations depending on the robot hardware. In the present work, we present task models for the upper-body operations of humanoid robots, designed on the basis of Labanotation. Given a series of human operations, we first analyze the upper-body motions and extract certain fixed poses at key frames. These key poses are translated into tasks represented by Labanotation symbols, and a robot then performs the operations corresponding to those task models. Because tasks based on Labanotation are independent of robot hardware, different robots can share the same observation module; only the task-mapping modules specific to each robot's hardware differ. We implemented the system and demonstrated that three different robots can automatically mimic human upper-body operations with a satisfactory level of resemblance.
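To make the pipeline concrete, the following is a minimal sketch of the two observation steps described above: picking key frames where the motion pauses, and quantizing a limb-direction vector into a coarse Labanotation-style (direction, level) symbol. The coordinate convention (x forward, y left, z up), the thresholds, and the eight-direction/three-level symbol set are illustrative assumptions, not the paper's actual implementation.

```python
import math

def labanotation_symbol(v):
    """Quantize a 3-D limb-direction vector into a coarse
    Labanotation-style (direction, level) pair.

    Assumed convention: x points forward, y left, z up.
    Level comes from the elevation angle; direction from
    the azimuth, snapped to one of 8 horizontal sectors.
    """
    x, y, z = v
    horiz = math.hypot(x, y)
    elev = math.degrees(math.atan2(z, horiz))
    if elev > 30:
        level = "high"
    elif elev < -30:
        level = "low"
    else:
        level = "middle"
    if horiz < 1e-6:
        # Purely vertical vector: "place" column in Labanotation terms.
        return ("place", level)
    az = math.degrees(math.atan2(y, x)) % 360
    dirs = ["forward", "left-forward", "left", "left-backward",
            "backward", "right-backward", "right", "right-forward"]
    idx = int((az + 22.5) // 45) % 8
    return (dirs[idx], level)

def key_frames(poses, vel_thresh=0.05):
    """Pick frames where the pose is nearly stationary.

    A simple central-difference speed test stands in for the
    paper's key-frame extraction: frames whose pose barely
    changes between neighbors are treated as held key poses.
    """
    keys = []
    for t in range(1, len(poses) - 1):
        speed = math.sqrt(sum(
            (a - b) ** 2 for a, b in zip(poses[t + 1], poses[t - 1])))
        if speed < vel_thresh:
            keys.append(t)
    return keys
```

For example, an arm held straight forward quantizes to `("forward", "middle")`, and an arm raised straight up to `("place", "high")`. Because the symbols discard exact angles, the same symbol sequence can be re-mapped onto any robot's joint limits by a hardware-specific task-mapping module, which is the hardware independence the abstract emphasizes.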
