
Self-Calibration from Image Derivatives

Authors
  • Brodský, Tomáš (1)
  • Fermüller, Cornelia (2)
  • (1) Philips Research, 345 Scarborough Road, Briarcliff Manor, NY 10510, USA
  • (2) University of Maryland, Computer Vision Laboratory, Center for Automation Research, College Park, MD 20742-3275, USA
Type
Published Article
Journal
International Journal of Computer Vision
Publisher
Springer-Verlag
Publication Date
Jul 01, 2002
Volume
48
Issue
2
Pages
91–114
Identifiers
DOI: 10.1023/A:1016094806773
Source
Springer Nature

Abstract

This study investigates the problem of estimating camera calibration parameters from image motion fields induced by a rigidly moving camera with unknown parameters, where the image formation is modeled with a linear pinhole-camera model. The equations obtained show the flow to be separated into a component due to the translation and the calibration parameters and a component due to the rotation and the calibration parameters. A set of parameters encoding the latter component is linearly related to the flow, and from these parameters the calibration can be determined. However, as for discrete motion, in general it is not possible to decouple image measurements obtained from only two frames into translational and rotational components. Geometrically, the ambiguity takes the form of a part of the rotational component being parallel to the translational component, and thus the scene can be reconstructed only up to a projective transformation. In general, for full calibration at least four successive image frames are necessary, with the 3D rotation changing between the measurements. The geometric analysis gives rise to a direct self-calibration method that avoids computation of optical flow or point correspondences and uses only normal flow measurements. New constraints on the smoothness of the surfaces in view are formulated to relate structure and motion directly to image derivatives, and on the basis of these constraints the transformation of the viewing geometry between consecutive images is estimated. The calibration parameters are then estimated from the rotational components of several flow fields. As the proposed technique neither requires a special setup nor needs exact correspondence, it is potentially useful for the calibration of active vision systems that have to acquire knowledge about their intrinsic parameters while they perform other tasks, or as a tool for analyzing image sequences in large video databases.
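For readers unfamiliar with the decomposition the abstract refers to, the standard instantaneous motion-field model (Longuet-Higgins and Prazdny) for a calibrated pinhole camera splits the flow at each pixel into a translational part scaled by inverse depth and a rotational part independent of depth; normal flow is the projection of this field onto the local image-gradient direction. The sketch below is a textbook illustration under a unit focal length, not the paper's uncalibrated derivation; function names are illustrative.

```python
import numpy as np

def motion_field(x, y, Z, t, omega, f=1.0):
    """Instantaneous image motion at pixel (x, y) for a calibrated
    pinhole camera with focal length f, depth Z, translation t,
    and angular velocity omega (Longuet-Higgins/Prazdny model)."""
    # Translational component: scaled by inverse depth 1/Z
    u_t = (-f * t[0] + x * t[2]) / Z
    v_t = (-f * t[1] + y * t[2]) / Z
    # Rotational component: independent of scene depth
    u_r = (x * y / f) * omega[0] - (f + x**2 / f) * omega[1] + y * omega[2]
    v_r = (f + y**2 / f) * omega[0] - (x * y / f) * omega[1] - x * omega[2]
    return np.array([u_t + u_r, v_t + v_r])

def normal_flow(u, n):
    """Projection of the flow vector u onto the (unit-normalized)
    gradient direction n -- the quantity the method measures,
    avoiding full optical-flow or correspondence computation."""
    n = n / np.linalg.norm(n)
    return float(u @ n)
```

For example, a pure rotation about the optical axis (`omega = (0, 0, w)`, `t = 0`) yields the depth-independent circular flow `(y*w, -x*w)`, while a pure forward translation yields flow radiating from the image center scaled by `1/Z` -- the two components the abstract says can only be fully decoupled given several frames with changing rotation.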
