Predictive Hypothesis Identification

Authors
  • Hutter, Marcus
Type
Preprint
Publication Date
Sep 08, 2008
Submission Date
Sep 08, 2008
Source
arXiv
Abstract

While statistics focuses on hypothesis testing and on estimating (properties of) the true sampling distribution, in machine learning the performance of learning algorithms on future data is the primary issue. In this paper we bridge the gap with a general principle (PHI) that identifies hypotheses with the best predictive performance. This includes predictive point and interval estimation, simple and composite hypothesis testing, (mixture) model selection, and others as special cases. For concrete instantiations we recover well-known methods, variations thereof, and new ones. PHI nicely justifies, reconciles, and blends (a reparametrization-invariant variation of) MAP, ML, MDL, and moment estimation. One particular feature of PHI is that it can genuinely deal with nested hypotheses.
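The abstract's notion of identifying the hypothesis with the best predictive performance can be illustrated with a toy sketch (this is not the paper's exact formulation; the Bernoulli model, uniform prior, and KL loss below are illustrative assumptions): instead of reporting the maximum-likelihood parameter, we pick the point hypothesis whose predictive distribution is closest to the Bayesian posterior predictive.

```python
import math

def predictive_point_estimate(heads, flips, grid_size=1001):
    """Toy predictive point estimation for a Bernoulli model.

    Assumptions (not from the paper): uniform Beta(1,1) prior, so the
    posterior predictive probability of heads is the Laplace rule
    (heads+1)/(flips+2); predictive loss is KL divergence between
    Bernoulli distributions, minimized over a grid of candidate theta.
    """
    p = (heads + 1) / (flips + 2)  # posterior predictive head probability

    def kl(p, q):
        # KL divergence between Bernoulli(p) and Bernoulli(q)
        return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

    # Candidate point hypotheses on an open grid in (0, 1)
    grid = [(i + 1) / (grid_size + 1) for i in range(grid_size)]
    return min(grid, key=lambda q: kl(p, q))

# Example: 7 heads in 10 flips.
# ML estimate is 0.7; the predictive estimate is (7+1)/(10+2) = 2/3,
# since KL(Bernoulli(p) || Bernoulli(q)) is minimized at q = p.
theta_hat = predictive_point_estimate(7, 10)
```

The gap between the predictive estimate (2/3) and the ML estimate (0.7) is the kind of discrepancy between classical estimation and predictive performance that the paper's PHI principle addresses in full generality.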
