Smooth Optimization with Approximate Gradient

Authors
  • d'Aspremont, Alexandre
Type
Preprint
Publication Date
May 16, 2008
Submission Date
Dec 14, 2005
Identifiers
arXiv ID: math/0512344
Source
arXiv
License
Unknown
Abstract

We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means that in some instances only a few leading eigenvalues of the current iterate need to be computed, rather than a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages.
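
To make the abstract's claim concrete, here is a minimal numerical sketch, not the paper's algorithm verbatim: it runs a standard momentum form of Nesterov's smooth method on a toy smooth quadratic, replacing the exact gradient with an oracle perturbed by a small, uniformly bounded error. All names and parameters (nesterov_approx_grad, approx_grad, eps, the toy problem itself) are illustrative assumptions, chosen only to show that the iteration still converges close to the optimum despite the inexact gradient.

import numpy as np

def nesterov_approx_grad(grad, L, x0, n_iter=500):
    # Nesterov's smooth method driven by a possibly inexact gradient oracle.
    # grad : returns the gradient up to a small, uniformly bounded error
    # L    : Lipschitz constant of the (true) gradient
    x = x0.copy()
    y = x0.copy()
    for k in range(n_iter):
        x_next = y - grad(y) / L                  # gradient step at the extrapolated point
        y = x_next + k / (k + 3) * (x_next - x)   # Nesterov extrapolation (momentum)
        x = x_next
    return x

# Toy smooth problem: f(x) = 0.5 * x'Ax - b'x with A positive definite.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
b = rng.standard_normal(n)
L = np.linalg.eigvalsh(A)[-1]                     # largest eigenvalue = gradient Lipschitz constant

eps = 1e-6                                        # uniform bound on the gradient error (assumed)
def approx_grad(x):
    noise = rng.standard_normal(n)
    return (A @ x - b) + eps * noise / np.linalg.norm(noise)

x_star = np.linalg.solve(A, b)
x_hat = nesterov_approx_grad(approx_grad, L, np.zeros(n))
print("distance to optimum:", np.linalg.norm(x_hat - x_star))

In the semidefinite setting described above, the inexact oracle would instead come from truncating the gradient of the smoothed objective to a few leading eigenpairs of the current iterate (e.g. via a sparse eigenvalue routine), rather than from adding synthetic noise as in this toy example.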
