Exponential convergence of testing error for stochastic gradient methods

Authors
  • Pillaud-Vivien, Loucas
  • Rudi, Alessandro
  • Bach, Francis
Publication Date
Dec 12, 2017
Source
Kaleidoscope Open Archive
Language
English
License
Unknown

Abstract

We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing loss (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing error (classification error) converges exponentially fast if low-noise conditions are assumed.
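The phenomenon described in the abstract can be illustrated with a minimal sketch: single-pass SGD on the squared loss for labels in {-1, +1}, after which the classification error (0-1 loss of the sign of the predictor) drops essentially to zero under a low-noise (well-separated) data distribution, while the squared loss stays bounded away from zero. This is only a toy illustration under strong assumptions: a linear model stands in for the paper's positive definite kernels, and the synthetic data, step size, and dimensions are all choices of this sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification satisfying a low-noise condition:
# two well-separated Gaussian clusters with labels in {-1, +1}.
n_train, n_test, d = 500, 500, 5

def sample(n):
    y = rng.choice([-1.0, 1.0], size=n)
    X = 0.5 * rng.standard_normal((n, d))
    X[:, 0] += 2.0 * y  # margin along the first coordinate
    return X, y

X, y = sample(n_train)
X_test, y_test = sample(n_test)

# Single-pass SGD on the squared loss (y - <w, x>)^2 with a linear model
# (the simplest special case of a positive definite kernel).
w = np.zeros(d)
step = 0.05
for t in range(n_train):
    x_t, y_t = X[t], y[t]
    w -= step * 2.0 * (w @ x_t - y_t) * x_t

# Squared loss stays bounded away from zero (the regression function is
# not +/-1 everywhere), yet the classification error is essentially zero.
test_sq_loss = np.mean((X_test @ w - y_test) ** 2)
test_01_error = np.mean(np.sign(X_test @ w) != y_test)
print(test_sq_loss, test_01_error)
```

The gap between the two test metrics is the point: the sign of the predictor is correct long before the squared loss has converged, which is the mechanism behind the exponential rate for the classification error.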
