Multi-Class Segmentation with Relative Location Prior

Authors
  • Gould, Stephen (1)
  • Rodgers, Jim (1)
  • Cohen, David (1)
  • Elidan, Gal (1)
  • Koller, Daphne (1)
  • (1) Department of Computer Science, Stanford University, Stanford, CA, USA
Type
Published Article
Journal
International Journal of Computer Vision
Publisher
Springer-Verlag
Publication Date
May 15, 2008
Volume
80
Issue
3
Pages
300–316
Identifiers
DOI: 10.1007/s11263-008-0140-x
Source
Springer Nature

Abstract

Multi-class image segmentation has made significant advances in recent years through the combination of local and global features. One important type of global feature is that of inter-class spatial relationships. For example, identifying “tree” pixels indicates that pixels above and to the sides are more likely to be “sky” whereas pixels below are more likely to be “grass.” Incorporating such global information across the entire image and between all classes is a computational challenge as it is image-dependent, and hence, cannot be precomputed. In this work we propose a method for capturing global information from inter-class spatial relationships and encoding it as a local feature. We employ a two-stage classification process to label all image pixels. First, we generate predictions which are used to compute a local relative location feature from learned relative location maps. In the second stage, we combine this with appearance-based features to provide a final segmentation. We compare our results to recently published results on several multi-class image segmentation databases and show that the incorporation of relative location information allows us to significantly outperform the current state-of-the-art.
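
The abstract outlines a two-stage pipeline: first-stage label probabilities are combined with learned relative location maps to produce a per-pixel relative location feature, which a second-stage classifier fuses with appearance-based features. The sketch below illustrates that idea under stated assumptions only; the array layouts, the convolution-based vote aggregation, and the `classifier` object are hypothetical stand-ins for illustration, not the authors' implementation.

```python
# Minimal sketch of the two-stage labeling idea described in the abstract.
# Shapes, the rel_maps layout, and the convolution-based aggregation are
# assumptions made for illustration, not taken from the paper's code.
import numpy as np
from scipy.signal import fftconvolve

def relative_location_features(stage1_probs, rel_maps):
    """Turn first-stage class predictions into a relative location feature.

    stage1_probs : (H, W, C) array of first-stage class probabilities.
    rel_maps     : (C, C, h, w) array; rel_maps[c, c2] is a learned map of
                   how likely class c2 is at each spatial offset from a
                   pixel predicted as class c (hypothetical layout).
    Returns an (H, W, C) array of per-pixel relative-location votes.
    """
    H, W, C = stage1_probs.shape
    votes = np.zeros((H, W, C))
    for c in range(C):                      # class voting *from*
        for c2 in range(C):                 # class voted *for*
            # Each pixel's confidence in class c spreads a vote for class
            # c2 across the image according to the learned offset map.
            votes[:, :, c2] += fftconvolve(
                stage1_probs[:, :, c], rel_maps[c, c2], mode="same")
    # Clip tiny negative values from FFT rounding, then normalize each
    # pixel's votes into a distribution over classes.
    votes = np.clip(votes, 0.0, None)
    votes /= votes.sum(axis=2, keepdims=True) + 1e-12
    return votes

def second_stage(appearance_feats, rel_loc_feats, classifier):
    """Concatenate appearance and relative-location features and let a
    second, already-trained classifier produce the final per-pixel labels."""
    H, W = rel_loc_feats.shape[:2]
    X = np.concatenate([appearance_feats, rel_loc_feats], axis=2)
    return classifier.predict(X.reshape(H * W, -1)).reshape(H, W)
```

Expressing the vote aggregation as per-class convolutions is one natural way to keep the image-dependent global computation tractable while still yielding a purely local feature for the second stage; the paper itself may organize this computation differently.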
