
Searching for Complex Human Activities with No Visual Examples

Authors
  • İkizler, Nazlı1
  • Forsyth, David A.2
  • 1 Bilkent University, Ankara 06800, Turkey
  • 2 University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Type
Published Article
Journal
International Journal of Computer Vision
Publisher
Springer-Verlag
Publication Date
May 29, 2008
Volume
80
Issue
3
Pages
337–357
Identifiers
DOI: 10.1007/s11263-008-0142-8
Source
Springer Nature

Abstract

We describe a method of representing human activities that allows a collection of motions to be queried without visual examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body that can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained on motion capture data. Our models of short-timescale limb behaviour are built from a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.
