This article examines the design of evaluations in settings where there is a choice about how an intervention is introduced and evaluated. It uses data from a supervision program for offenders on probation in the UK (Bruce and Hollin forthcoming), which a pilot evaluation in one probation area had indicated merited wider-scale implementation and evaluation. For the remaining two probation areas in the region, randomized controlled allocation of participants to conditions was recommended. One area adopted a stepped wedge design, in which probation offices were randomly allocated to the program in sequence. The second area opted to launch the program across the whole area simultaneously, using a retrospective sample as the control group. The article compares the results of implementation in each probation area and seeks to draw wider inferences about the management of program implementation and the randomized controlled designs appropriate for similar field studies.