Acceptance Remarks
(November 7, 2008)
I am honored to receive this award named after Peter Rossi, a man 
   who had such an enormous impact on our profession.
 
Thank you, Doug, for your generous remarks, but I want you and the audience to know that I don't take them
  personally. I accept this award for my work, but also for that of the many other people who, through a 
  combination of faith, fight, and funds, made a revolution in policy research by showing the feasibility and 
  value of social experiments.
  
      
[Photo by Rich Schmitt]
When I started down this path over 30 years ago, we were a lonely band of zealots. Random assignment was 
  anything but chic. You didn't get tenure doing random assignment studies, and the hot shots in my field,
  economics, favored fancy modeling and complex econometrics and statistics.
Our faith was in a vision of how to improve policy and government and, at the risk of sounding corny, a 
  way to leave the world a better place. This faith rested on the belief that social experiments could produce 
  more credible evidence on causality and that, while you could not assure that high-quality evidence would be 
  used, it had the potential to advance several desirable outcomes: improving people's lives, increasing public
  support for social programs, and getting a higher return on scarce public investments.
But, while we had to have faith to stick with it, we were never blind true believers. We knew that public 
  policy decisions were driven not just by facts but by values and that random assignment was no panacea. 
  It could not answer all questions or resolve trade-offs and choices. It could not and should not arbitrate 
  policy debates.
Yet our hope was that random assignment could produce more definite, less ambiguous evidence that would
  more effectively insulate research from political pressure and help elevate these debates and focus them on
  the real choices. In this sense, the revolution went beyond a research method to a vision of how to make 
  government more rational and effective.
So my first obligation today is to acknowledge three groups of people: first and foremost, my teammates 
  at MDRC, and there was an army of them, who sustained and inspired me; second, the small but crucial group
  of academics (including Rob Hollister, a previous winner of this award) who in defending this methodology gave 
  it a legitimacy that we, the practitioners of the craft, could not; and third, my colleagues and sometimes
  competitors, primarily at Mathematica Policy Research and Abt Associates, but now at an increasing number of 
  universities and other research firms, who have been part of this movement. Over the years, our collective 
  drive and high standards, reinforced as we met at annual APPAM gatherings, created this revolution.
My second point is to remind us that this was a fight. To many of you in the next generation, the value 
  of random assignment may seem so obvious as to be uncontroversial. But showing that this method was 
  feasible, useful, and ethical took a relentless campaign. At the beginning, we were told that it was 
  fine in the laboratory, but that in the real world you could not get program operators to agree. It would 
  be akin to asking a doctor to deny patients a known cure.
Throughout these 30-plus years, and despite success after success in implementing random assignment
  without legal challenge, launching such studies has almost always been a fight. There is, even now, 
  enormous pressure to use a weaker, less intrusive design. I have searing memories of people hurling 
  epithets drawn from skeletons in the medical research closet. Two stand out: Barbara Goldman of MDRC
  returning shaken from Santa Clara County, where she had been called a Nazi, and testifying before a
  committee of the Florida legislature, where a dentist turned legislator told me that I was using
  practices akin to those in the infamous Tuskegee experiment (although that wasn't even random assignment).
My third point is the importance of funds, and thus funders. It takes money to realize many dreams. 
  My work and this revolution would have been stillborn without the support and active partnership of 
  key people at foundations (particularly, in the critical early days, the Ford Foundation), and in 
  several federal agencies, notably the U.S. Department of Labor and the Office of the Assistant
  Secretary for Planning and Evaluation and the Administration for Children and Families in the Department 
  of Health and Human Services. To test a new program as an experiment is to make an ex ante admission that 
  you don't know whether it will work. That is the very reason for the experiment, or in fact for any
  evaluation. However, it takes brave funders to sell a grant as a test, rather than as an obvious good 
  idea that they have confidence will work.
But faith, fight, and funders would not have done it without the cooperation of people in 
  participating states, cities, and community organizations. They are the true heroes of this story, 
  since they took the greatest political risks by letting us test their convictions and put them at 
  risk of unfavorable publicity. Our success is due to their willingness to redefine Justice Brandeis's
  famous description of states as laboratories for policy experiments, and also to the hundreds of 
  thousands of people in these studies who gave their time to answer our questions and allowed us to 
  collect their data.
In concluding, I want to share three final thoughts. The first is that, while I obviously believe 
  in the value of social experiments, I also am convinced that they are far from the only source of insight 
  and evidence. If we are to advance policy and practice, we will need to draw on the insights of 
  practitioners, managers (the M in APPAM), and those researchers who use different methods to diagnose 
  the problems and to understand why people behave as they do and how social programs work in practice.
Experiments are not an alternative to these sources of insight and innovation, but a way to confirm or 
disprove the expert judgment they suggest. In my experience, the strongest programs and evidence emerge 
when these different types of experts work together, using the strengths of their different fields.
The second relates to Peter Rossi and his oft-quoted Iron Law of Evaluation: "The expected value of
any net impact assessment of any large scale social program is zero." In my experience, fortunately, this
has not been true. Over and over again, social experiments did not confirm the null hypothesis but, instead, 
found impacts of clear statistical and policy significance. They showed that many programs worked, although 
they very rarely worked miracles. I don't have time today to speculate on why that might be the case.
The third comment is more personal. I accept this award for myself, for many of you, and fittingly in 
this historic week also for the person who had the greatest impact on my work, my father. Born over 100 
years ago on the Lower East Side of Manhattan, he was a fighter and idealist. He might have been happy 
to hear about a methodological triumph, but he would have been proud if I could have told him that we 
had been wise enough to harness it to do some good for others. I hope we have delivered on both fronts.
In accepting this award, I turn back to you in the APPAM community and ask you to continue to have 
the faith and fight the fight.
Thank you.
  