Top–down learning of low-level vision tasks

Authors
Journal
Current Biology
0960-9822
Publisher
Elsevier
Publication Date
Volume
7
Issue
12
Identifiers
DOI: 10.1016/s0960-9822(06)00419-2
Keywords
  • Brief Communication
Disciplines
  • Computer Science

Abstract

Abstract Perceptual tasks such as edge detection, image segmentation, lightness computation and estimation of three-dimensional structure are considered to be low-level or mid-level vision problems and are traditionally approached in a bottom–up, generic and hard-wired way. An alternative to this would be to take a top–down, object-class-specific and example-based approach. In this paper, we present a simple computational model implementing the latter approach. The results generated by our model when tested on edge-detection and view-prediction tasks for three-dimensional objects are consistent with human perceptual expectations. The model's performance is highly tolerant to the problems of sensor noise and incomplete input image information. Results obtained with conventional bottom–up strategies show much less immunity to these problems. We interpret the encouraging performance of our computational model as evidence in support of the hypothesis that the human visual system may learn to perform supposedly low-level perceptual tasks in a top–down fashion.


More articles like this

Top-down learning of low-level vision tasks.

on Current Biology Dec 01, 1997

Joint solution of low, intermediate, and high-leve...

on IEEE transactions on neural ne... 1994

Top-down attention switches coupling between low-l...

on Proceedings of the National Ac... Sep 04, 2012