A policy maker knows two models. One implies an exploitable inflation-unemployment trade-off, the other does not. The policy maker's prior probability over the two models is part of his state vector. Bayes' law converts the prior probability into a posterior probability and gives the policy maker an incentive to experiment. For models calibrated to U.S. data through the early 1960s, we compare the outcomes from two Bellman equations. The first tells the policy maker to "experiment and learn." The second tells him to "learn but don't experiment." In this way, we isolate a component of government policy that is due to experimentation and estimate the benefits from intentional experimentation. We interpret the Bellman equation that learns but does not intentionally experiment as an "anticipated utility" model and study how well its outcomes approximate those from the "experiment and learn" Bellman equation. The approximation is good. For our calibrations, the benefits from purposeful experimentation are small because random shocks are big enough to provide ample unintentional experimentation.

Copyright 2007 The Ohio State University.
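The Bayesian updating step described above, in which Bayes' law converts a prior probability over the two models into a posterior, can be sketched as follows. This is a minimal illustration only; the function name and the numerical likelihoods are hypothetical and are not taken from the paper.

```python
def update_model_probability(prior, lik_model1, lik_model2):
    """Bayes' law for two competing models.

    prior:       prior probability attached to model 1
    lik_model1:  likelihood of the new observation under model 1
    lik_model2:  likelihood of the new observation under model 2
    Returns the posterior probability of model 1.
    """
    numerator = prior * lik_model1
    return numerator / (numerator + (1.0 - prior) * lik_model2)

# Illustrative example: starting from even odds, an observation that is
# twice as likely under model 1 pushes its posterior probability to 2/3.
posterior = update_model_probability(0.5, 0.2, 0.1)
print(posterior)  # -> 0.666...
```

In the paper's setting this posterior enters the policy maker's state vector, which is what gives the "experiment and learn" Bellman equation its incentive to choose actions that generate informative observations.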