This paper evaluates the distributional properties of forecasts from six econometric models of the U.S. trade account. Using stochastic (Monte Carlo) simulation, we derive confidence intervals and forecast-based test statistics that account for uncertainty from future disturbances and from coefficient estimation. Empirically, the confidence intervals of the trade-account forecasts are very wide and generally (though not necessarily) widen with the forecast horizon. Even with such a large degree of uncertainty, some models exhibit "predictive failure" when tested. To evaluate forecasts across models, we generalize Chong and Hendry's (1986) forecast-encompassing test statistic to allow for model nonlinearity and to account for uncertainty arising from estimation. All models are rejected by this test, i.e., the data are highly informative. Although both the calculated forecast uncertainty and the test failures temper the role of these models in formulating policy, the failures imply the potential for improved model specification with narrower confidence bands.
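The stochastic-simulation idea can be illustrated with a minimal sketch. The example below is not any of the paper's six trade-account models; it uses a hypothetical AR(1) process as a stand-in, estimates its coefficient by OLS, and then draws both the coefficient (estimation uncertainty) and future disturbances (innovation uncertainty) in each Monte Carlo replication to build percentile confidence bands by forecast horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate "true" AR(1) data (hypothetical stand-in for a trade-account model)
T, H, R = 120, 8, 2000            # sample size, forecast horizon, MC replications
phi_true, sigma_true = 0.8, 1.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal(0, sigma_true)

# OLS estimate of the AR(1) coefficient, residual std. dev., and coefficient s.e.
x, z = y[:-1], y[1:]
phi_hat = x @ z / (x @ x)
resid = z - phi_hat * x
sigma_hat = np.sqrt(resid @ resid / (T - 2))
se_phi = sigma_hat / np.sqrt(x @ x)

# Stochastic simulation: draw the coefficient and future disturbances jointly,
# so the resulting bands reflect both sources of forecast uncertainty
paths = np.empty((R, H))
for r in range(R):
    phi_r = rng.normal(phi_hat, se_phi)              # coefficient uncertainty
    yh = y[-1]
    for h in range(H):
        yh = phi_r * yh + rng.normal(0, sigma_hat)   # disturbance uncertainty
        paths[r, h] = yh

# Percentile confidence bands by horizon; for a stationary process these
# generally (but not necessarily) widen as the horizon grows
lower, upper = np.percentile(paths, [2.5, 97.5], axis=0)
width = upper - lower
```

For this stationary AR(1) the band width grows with the horizon toward its unconditional limit, matching the abstract's observation that the intervals generally, but not necessarily, widen with the forecast horizon.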