
2010 Rossi Award Winner - Howard S. Bloom

Acceptance Remarks

(November 5, 2010)

It is a true privilege to receive the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation. This award is especially meaningful to me because Professor Rossi had two important, although indirect, influences on the evaluation path that I took.

Professor Rossi's first influence was through his former student and my good friend, Bill McAuliffe, with whom I shared an office during our first years teaching at Harvard in the early 1970s. Through endless conversations, Bill convinced me that research design was far more important for assessing causality than was statistical analysis (although both are clearly crucial). Professor Rossi's second influence was through our joint participation on a Department of Labor Advisory Panel in the early 1980s. This panel caused the Department to radically change its approach to evaluating federal employment and training programs from longitudinal comparison-group studies to a randomized trial, the National JTPA Study -- for which I subsequently became Co-Principal Investigator with Judy Gueron and Larry Orr, two outstanding colleagues.

That leads me to my next point -- the fact that I have been blessed by wonderful colleagues throughout my career. As an Assistant Professor at Harvard I worked especially closely with Sunny Ladd and Johnny Yinger, who supported my growing fascination with evaluation research by co-teaching courses on its methods and applying them to our joint research. As a Professor at NYU I had many supportive colleagues, among whom Jim Knickman, Jan Blustein and Dennis Smith in particular shared my interest in evaluation research. I also had many fine graduate students who now work in the field, foremost among whom are Hans Bos and Laura Peck.

But it was not until Judy Gueron convinced me to come to MDRC in 1999 that I began to get real traction in my work. Being at MDRC is like having an indefinite sabbatical on steroids. I get to do what I care about most (develop, use and teach rigorous evaluation methods) with unending support from the best colleagues imaginable. For this I give special thanks to Judy Gueron, Gordon Berlin, Bob Granger, Jim Riccio, Jim Kemple, Fred Doolittle, Pei Zhu, Marie-Andree Somers, Mike Weiss, Corrine Herlihy, Alison Black and Becky Unterman. While at MDRC I have also been privileged to work closely with terrific academic colleagues like Steve Raudenbush, Mark Lipsey, Sean Reardon and Carolyn Hill. Then of course there is my wife Sue, the ultimate colleague. With the help of people like these I would have to be brain dead to not be productive.

I was told that my remarks today should be substantive and last no more than eight minutes. So what follows, very quickly and in no particular order, are nine important lessons that I have learned about doing evaluation research and would like to share with you:

  1. The three keys to success are "design, design, design" (just like "location, location, location" in real estate). No form of statistical analysis can fully rescue a weak research design.

  2. You cannot get the "right answer" unless you pose the right question. Disagreements among capable researchers often reflect differences in the research questions that motivate them (explicitly or implicitly). Thus it is well worth spending the time needed to clearly articulate your research questions, being as specific as possible about the intervention, population and outcomes of interest.

  3. A "fair test" of an intervention requires that there be a meaningful treatment contrast (the difference in services received by treatment group and control or comparison group members). This condition has two sub-parts: (1) the intervention must be implemented properly and (2) services to control or comparison group members cannot be too substantial.

  4. The most credible evidence is that which is based on assumptions that are clear and convincing. Thus researchers should put "all of their cards on the table" when explaining what they did, what they found and what they think it means.

  5. The old saying "keep it simple, stupid" is crucial for meeting the preceding condition. This is especially important for evaluation research because, no matter how simple a research design is, the resulting study will be more complicated because of its interaction with the real world.

  6. You probably don't fully understand something if you cannot explain it. The best way to avoid this trap is to teach everyone who is willing to listen about what you are trying to do and how you are trying to do it.

  7. Thoughtful and constructive feedback is a researcher's "best friend." Hence, you should seek review early and often.

  8. Evaluation research is a "team sport." It is impossible to overstate the importance of complementary policy, programmatic, data, research and dissemination skills on an evaluation team.

  9. The best way to change how evaluation researchers do their work is to change how they are taught to think about it. Thus methodological training is essential both during graduate school and throughout one's career.

In closing I would like to thank this year's selection committee for adding me to the pantheon of prior Rossi Award winners.
