Abstract

The problem of optimal experiment design for linear regression models is addressed. The measurement errors are assumed to be bounded, with no further assumptions on their distribution, in contrast to the classical statistical approach. An experiment is considered optimal here if it minimizes the volume of the set of all parameters that are consistent with the data, the model structure, and the noise bounds. This new design policy is compared with the classical D-optimal design used in a statistical context. When the number of measurements equals the number of parameters to be estimated, the new optimality criterion, although relying on quite different assumptions about the noise distribution, is shown to lead to the same optimal policy as a D-optimality criterion. This is no longer true when more measurements are to be made. Depending on the shape of the admissible experimental domain, it is sometimes possible to design optimal experiments without any computation. An algorithmic procedure is suggested for use when this is not the case.
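The coincidence of the two criteria in the exactly determined case can be illustrated numerically. The sketch below, a hypothetical example not taken from the paper, considers a two-parameter affine model y = θ₀ + θ₁x with bounded errors |eᵢ| ≤ ε and exactly two measurements. The feasible parameter set is then a parallelotope of volume (2ε)²/|det F|, where F is the regression matrix, so minimizing the volume is equivalent to maximizing |det F|, i.e. to D-optimality. The noise bound `eps` and the candidate grid are arbitrary choices for illustration.

```python
import itertools
import numpy as np

def regressor_matrix(xs):
    # Affine model y = theta0 + theta1 * x, so each row of F is [1, x].
    return np.array([[1.0, x] for x in xs])

eps = 0.1  # assumed noise bound |e_i| <= eps
candidates = np.linspace(-1.0, 1.0, 21)  # admissible design points

best = None
for design in itertools.combinations(candidates, 2):
    F = regressor_matrix(design)
    d = abs(np.linalg.det(F))
    if d < 1e-12:
        continue  # singular design: parameters not identifiable
    # With as many measurements as parameters (n = p = 2), the set of
    # parameters consistent with the bounds is a parallelotope of volume
    # (2*eps)**p / |det F|; minimizing it maximizes |det F| (D-optimality).
    vol = (2 * eps) ** 2 / d
    if best is None or vol < best[0]:
        best = (vol, design)

print([float(x) for x in best[1]])  # design minimizing the feasible-set volume
```

As expected, the selected design places the two measurements at the extremes of the interval, [-1.0, 1.0], which is also the D-optimal design for this model.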