# Model Comparison Methods

- Publisher: Elsevier Science & Technology
- DOI: 10.1016/s0076-6879(04)83014-3
## Abstract

This chapter describes the various aspects of model comparison methods. The question of how one should choose among competing explanations of observed data is at the core of science. Model comparison is ubiquitous: it arises, for example, when a toxicologist must decide between two dose-response models, or when a biochemist needs to determine which of a set of enzyme-kinetics models best accounts for the observed data. Given a data sample, the descriptive adequacy of a model is assessed by finding the parameter values that best fit the data in some defined sense. Selecting among models by a goodness-of-fit measure alone would make sense only if the data were free of noise. Generalizability, or predictive accuracy, refers to how well a model predicts the statistical properties of future, as yet unseen, samples from a replication of the experiment that generated the current data sample. Formally, generalizability is the mean discrepancy between the true model and the model of interest, averaged across all data that could possibly be observed under the true model. Cross-validation can easily be implemented in any computer programming language, as its calculation does not require sophisticated computational techniques, in contrast to minimum description length.
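The abstract notes that cross-validation needs no sophisticated computation: each candidate model is repeatedly fit on part of the data and scored on the held-out remainder, and the model with the lower held-out prediction error is preferred. The following is a minimal sketch of that idea in Python; the function name `cross_validation_score`, the K-fold scheme, the squared-error discrepancy, and the two polynomial models are illustrative assumptions, not the chapter's own code.

```python
import numpy as np

def cross_validation_score(x, y, fit, predict, n_folds=5, seed=0):
    """K-fold cross-validation estimate of a model's held-out prediction error.

    `fit(x, y)` returns fitted parameters; `predict(params, x)` returns
    predictions. Both are supplied by the caller. (Illustrative sketch.)
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))          # shuffle before splitting into folds
    folds = np.array_split(idx, n_folds)
    errors = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        params = fit(x[train], y[train])   # calibrate on the training portion
        mse = np.mean((y[test] - predict(params, x[test])) ** 2)
        errors.append(mse)                 # score on the unseen portion
    return float(np.mean(errors))

# Two hypothetical competing models: a linear and a cubic polynomial.
fit_lin = lambda x, y: np.polyfit(x, y, 1)
fit_cub = lambda x, y: np.polyfit(x, y, 3)
pred = lambda p, x: np.polyval(p, x)

# Simulated data from a truly linear process with Gaussian noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)

score_lin = cross_validation_score(x, y, fit_lin, pred)
score_cub = cross_validation_score(x, y, fit_cub, pred)
# Prefer the model with the lower held-out prediction error.
```

Because the cubic model can fit the noise in each training fold, its held-out error tends to be no better than the linear model's on truly linear data, which is how cross-validation penalizes unnecessary complexity without any explicit complexity term.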
