Knowledge about development effectiveness is constrained by two factors. First, the project staff in governments and international agencies who decide how much to invest in research on specific interventions are often poorly informed about the returns to rigorous evaluation and, even when they are, cannot be expected to take full account of the external benefits that new knowledge brings to others. This leads to under-investment in evaluative research. Second, while standard methods of impact evaluation are useful, they often leave many questions about development effectiveness unanswered. This paper proposes ten steps for making evaluations more relevant to the needs of practitioners. It argues that more attention should be given to identifying policy-relevant questions (including the case for intervention); that a broader approach should be taken to the problems of internal validity; and that the problems of external validity (including scaling up) merit greater attention.