Efficient Sequential Learning in Structured and Constrained Environments

Authors
  • Calandriello, Daniele
Publication Date
Dec 18, 2017
Source
Kaleidoscope Open Archive
Language
English
License
Unknown
Abstract

The main advantage of non-parametric models is that the accuracy of the model (degrees of freedom) adapts to the number of samples. The main drawback is the so-called "curse of kernelization": to learn the model we must first compute a similarity matrix among all samples, which requires quadratic space and time and is infeasible for large datasets. Nonetheless, the underlying effective dimension (effective d.o.f.) of the dataset is often much smaller than its size, and we can replace the dataset with a subset (dictionary) of highly informative samples. Unfortunately, fast data-oblivious selection methods (e.g., uniform sampling) almost always discard useful information, while data-adaptive methods that provably construct an accurate dictionary, such as ridge leverage score (RLS) sampling, have a quadratic time/space cost.

In this thesis we introduce a new single-pass streaming RLS sampling approach that sequentially constructs the dictionary, where each step compares a new sample only with the current intermediate dictionary and not with all past samples. We prove that the size of all intermediate dictionaries scales only with the effective dimension of the dataset, and therefore guarantee a per-step time and space complexity independent of the number of samples. This reduces the overall time required to construct provably accurate dictionaries from quadratic to near-linear, or even logarithmic when parallelized.

Finally, for many non-parametric learning problems (e.g., K-PCA, graph SSL, online kernel learning), we show that we can use the generated dictionaries to compute approximate solutions in near-linear time that are both provably accurate and empirically competitive.
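As a concrete illustration of the streaming step described above, the following is a minimal Python sketch of sequential dictionary construction via approximate ridge leverage scores. It is an illustrative sketch under stated assumptions, not the thesis's actual algorithm: the class name `StreamingRLSDictionary`, the RBF kernel choice, and all parameter values are hypothetical, and the sketch omits the resparsification and reweighting steps needed for the formal guarantees. What it does show is the key idea: each new sample is scored against the current intermediate dictionary only, so the per-step cost depends on the dictionary size, not on the number of samples seen so far.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF similarities k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

class StreamingRLSDictionary:
    """Hypothetical sketch of single-pass dictionary construction via
    approximate ridge leverage score (RLS) sampling: each new sample is
    scored against the current dictionary only, never all past samples."""

    def __init__(self, gamma=1.0, reg=1.0, oversample=2.0, seed=0):
        self.gamma = gamma            # RBF kernel bandwidth (assumed kernel)
        self.reg = reg                # ridge regularization lambda
        self.oversample = oversample  # oversampling of inclusion probabilities
        self.rng = np.random.default_rng(seed)
        self.atoms = []               # current intermediate dictionary

    def _approx_rls(self, x):
        """Approximate RLS of x using only the dictionary D:
        tau(x) ~ (k(x,x) - k_xD (K_DD + reg*I)^{-1} k_Dx) / reg."""
        k_xx = 1.0  # for the RBF kernel, k(x, x) = 1
        if not self.atoms:
            return k_xx / self.reg
        D = np.asarray(self.atoms)
        k_xD = rbf_kernel(x[None, :], D, self.gamma)  # shape (1, m)
        K_DD = rbf_kernel(D, D, self.gamma)           # shape (m, m)
        m = D.shape[0]
        sol = np.linalg.solve(K_DD + self.reg * np.eye(m), k_xD.T)
        residual = k_xx - (k_xD @ sol).item()
        return max(residual, 0.0) / self.reg

    def update(self, x):
        """Process one streaming sample: keep it with probability
        proportional to its approximate ridge leverage score."""
        p = min(1.0, self.oversample * self._approx_rls(x))
        if self.rng.random() < p:
            self.atoms.append(np.asarray(x, dtype=float))

# Usage: stream a toy dataset one sample at a time.
X = np.random.default_rng(1).normal(size=(1000, 5))
builder = StreamingRLSDictionary(gamma=0.5, reg=1.0)
for x in X:
    builder.update(x)
print(len(builder.atoms))  # dictionary size tracks effective dimension, not n
```

For readability the sketch re-solves an m-by-m linear system from scratch at every step; the algorithms analyzed in the thesis instead maintain incremental quantities so that each update stays cheap, and they reweight the kept atoms so that the resulting dictionary yields a provably accurate (Nyström-style) approximation of the full kernel matrix.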
