
Acceptance Remarks
(November 3, 2006)
It is especially fitting that I receive this award at a meeting in Madison, Wisconsin, where I started
a career devoted to the evaluation of public policies; even more so that this is on the fortieth
anniversary of the Institute for Research on Poverty, where I did that initial work.
I’d like to say a few words about Pete Rossi.
My first meeting with Pete occurred when he was commissioned by the Russell Sage Foundation to study
and critique the New Jersey Negative Income Tax experiment. His study appeared as a book, Reforming
Public Welfare: A Critique of the Negative Income Tax Experiment, which he coauthored with Katherine
Lyall, who until recently was the president of the University of Wisconsin System. My colleagues
and I had “words” with Pete about the experiment as he developed the study and after he finished it.
Subsequently, Pete and I met at panels and consultancies for various projects, particularly several
funded by the Rockefeller Foundation and the Smith Richardson Foundation.
While I was a visiting scholar at the Russell Sage Foundation in 1991, there were several seminars
on homelessness. It was through these seminars that I became acquainted with Pete’s wonderful study
estimating the extent of homelessness, an estimate he derived by sending out teams on a given night to
count and interview the homeless. He did this at a time when there was considerable controversy and
conflict over the amount of homelessness. In the New York Times obituary for Pete, this study was
mentioned at some length, and rightly so, since he did what a good social scientist should do, though
few actually do: he went out and established the facts on a sound basis.
Pete and I were both members of the group Doug Besharov created that was devoted to evaluating the
1996 welfare reform.
Pete had a sardonic sense of humor, which I much appreciated, but which made him appear somewhat
gruff to those who did not know him well and long. But I want to tell you about a side of Pete
that few may have known. Three times he sent me e-mails praising something I had written, and
these came not as the result of any request to assess my work. Once he said something like:
“I’ve been assigned to write a chapter on evaluating community development programs, but you’ve
said it all. What could I add?” His last e-mail in this vein came when some committee of stalwarts
of the American Evaluation Association sent out a scathing indictment of the announced intention of
the U.S. Department of Education’s Institute of Education Sciences to give highest priority in
future funding of research to studies that used random assignment designs to evaluate education
programs. When apprised of this by some friends, I wrote a piece saying how they had indicted
themselves as researchers by their foolish remarks, and I sent it back to my friends. Pete got
it from one of them and sent me an e-mail telling me how good a response he thought it was,
adding something like: “With a statement like the one they made, the AEA has become akin to the
flat earth society. I am a past president of the AEA, and I am resigning from the association in
protest.” When I got this e-mail from him, I wrote back and said this was the third time he had
sent me such an e-mail and asked what prompted him to do it. He replied that he realized many
years ago that researchers seldom got positive feedback when they did good work and so he decided
he would provide such feedback, unsolicited, whenever he came across something he felt was well done.
I am truly honored to be associated through this award with such an outstanding social scientist
and unusually humane individual.
Now let me turn to the award at hand. When getting such an award after “long experience,” it seems
one is expected to give some avuncular advice for those who follow.
My first piece of advice is to have good luck in the timing of your birth. I was born in the smallest
birth cohort of the first half of the century. (I know Angrist and Krueger would figure out a way to
use this as an instrumental variable to provide an estimate of the true worth of my Stanford PhD
education.) And I was just finishing my PhD and entering the academic job market when the first wave
of the baby boom began to enter higher education. The result of this conjuncture was the old supply-
and-demand story: few PhDs were entering the market just as the demand for professors was
rapidly expanding. Most of us at that time got hired into university jobs without finishing our PhD
theses and took our leisurely time actually doing so.
The second piece of advice is to get on the front edge of a big expansion of federal funding of research.
For me, it was the Great Society programs in general and the War on Poverty in particular. This went on
into the 1970s, when the federal government had appropriated more funds for research and evaluation than
it could spend; there were so few experienced researchers that it quickly had all the effective
supply tied up in projects while more funds were on the doorstep begging to be spent. Researchers in
the 1970s regularly moved from the academy to the federal government research groups and back again to
the academy, which made for a richer quality of policy research in both places.
During the War on Poverty, the pressure to develop “a plan to end poverty in the United States” led
to the most important event of my career: the creation of the first large-scale social science research
study that used a random assignment/experimental design to try to estimate the likely impact of a new
social program. This was the New Jersey Negative Income Tax experiment (on which Pete wrote the
above-mentioned critique). This experience gave me a breadth and depth of exposure that few, if any,
economists had at that time. I worked on the initial design of the experiment, determining the nature
of the intervention and the sample on which it was to be tested. I designed the data collection
instruments and methods to be used, and was able to implement the experiment in the field. I was also
able to handle a piece of the study’s analysis, and became tangled up in the political back-and-forth
about the proposed program.
My next piece of advice is to choose great colleagues and students. I’m going to list some of them,
though clearly I will miss many, many more.
At the War on Poverty office and at the Institute for Research on Poverty, Glen Cain and Harold Watts
were the highest quality social scientists, my co-workers, and close friends. At the institute I also
worked with Larry Orr and, later, with Bob Haveman. The institute provided a great experience in
cross-disciplinary work; lawyers, sociologists, linguists, and social work specialists were not just
housed together, but worked jointly on major projects. I became so exposed to these other disciplines
that Sheldon Danziger has several times introduced me as “a recovering economist.”
In my first academic job at Williams College, I had as students: David Kershaw, the first president
of Mathematica Policy Research (MPR); Chuck Metcalf, the second president of Mathematica Policy Research;
Walter Nicholson, author of a widely used microeconomics text and expert researcher on unemployment
insurance and other public policies; and John Palmer, assistant secretary of the U.S. Department of
Health, Education, and Welfare (now Health and Human Services) and dean of the Maxwell School at Syracuse.
Another important piece of advice in this regard is to have students who will later go on to positions
where they can hire you at exorbitant consulting fees. For me, there have been David Kershaw at MPR;
Jim Knickman, vice president of the Robert Wood Johnson Foundation; and Patti Patrizzi, head evaluation
officer at the Pew Foundation.
As already noted, I saw the start-up of Mathematica Policy Research and, in one brief year while on
leave, got to hire and work with Becka Maynard and Stuart Kerachsky. Through the years, I have both
recruited and worked with many others at MPR as well.
I saw the creation of MDRC and worked closely with Judy Gueron for many years. Right at the outset
of the National Supported Work Demonstration, I made the mistake of sketching out a sort of PERT chart
for how we would develop through time the various parts of the evaluation of that experiment. Damned if
she didn’t hold me and the other MPR and University of Wisconsin researchers to delivery according to
that chart over the full five years of the project. And, of course, I have had the pleasure of working
with many of the MDRC staff members over the years and have seen its reputation deservedly grow to the
highest level.
These were the people with whom I worked who made my work easier and lots of fun. But there is another
side of the equation: the funders and the users of our research output. I have been a rabid advocate of
random assignment designs since that first experiment, but it took some time for me and the other early
enthusiasts to bring more researchers and policymakers around to seeing it as the first-best choice where
feasible. I remember how Judy Gueron and I despaired in the 1980s that perhaps there would be few or no
experimental designs funded by the federal government. A critical person in the federal government who
kept the flickering flame alive, and in fact blew it into the roaring fire it has become, was Howard
Rolston at the Department of Health and Human Services. He was instrumental in working the interstices
of government budgets and personnel to keep these projects moving forward. On the foundation side,
many people came out of the Ford Foundation. One especially critical person in the 1980s and 1990s
who insisted on high quality research designs was Phoebe Cottingham, first at the Rockefeller
Foundation, later at the Smith Richardson Foundation, and now on the government side at the
Institute of Education Sciences.
Three of my students who collaborated with me on important studies while still students, or shortly
after graduation (and, indeed, did most of the hard work), were: Jennifer Hill, now a PhD statistician
and on the faculty at Columbia’s School of International and Public Affairs; Ty Wilde, now a graduate
student in economics at Princeton; and Marcus Goldstein, now an economist at the World Bank.
While I am a random assignment fanatic, I have done a fair amount of non-experimental work, most
recently on problems of evaluating Community Development Financial Institutions (CDFIs). A colleague
at Swarthmore, John Caskey, worked with me over five years on an evaluation of a major CDFI project
in the Mississippi Delta. (I needed a younger colleague to carry my bags, I told him.)
Julia Rubin of Rutgers has enlisted me to work on several CDFI projects and participated with me
on some APPAM panels on the subject.
The random assignment evaluation method is the wave I have ridden for many years, and it has now
come to be referred to as “the Gold Standard” for estimating the causal impact of policy innovations.
The advocacy and implementation of the method have spread across many social science disciplines
and subject areas (even to that last holdout, education research) and into developing countries
(although Europe still lags). Others now strive hard to approximate the standard by searching out
“natural experiments” and doing clever quasi-experimental (instrumental variables) studies.
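To make concrete why randomization earns that status, here is a minimal Python sketch with entirely
invented numbers (nothing here is drawn from the New Jersey experiment or any other real study):
because treatment is assigned by the flip of a coin, the treatment and control groups are comparable
in expectation, so a simple difference in group means is an unbiased estimate of the causal impact.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical illustration only: the sample size, means, and effect
    # below are invented for this sketch, not taken from any real study.
    n = 2000
    baseline = rng.normal(100.0, 15.0, size=n)   # outcome absent the program
    true_effect = 5.0                            # assumed causal effect
    treated = rng.permutation(n) < n // 2        # random assignment to treatment

    outcome = baseline + true_effect * treated + rng.normal(0.0, 10.0, size=n)

    # Randomization makes the two groups comparable in expectation, so the
    # simple difference in means is an unbiased estimate of the impact.
    impact = outcome[treated].mean() - outcome[~treated].mean()
    se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
                 + outcome[~treated].var(ddof=1) / (~treated).sum())
    print(f"estimated impact: {impact:.2f} (std. error {se:.2f})")

The same difference in means taken over a non-randomized comparison group would mix the program’s
effect with pre-existing differences between the groups, which is exactly the confounding that the
quasi-experimental methods just mentioned must work around.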
But I have also been concerned with determining “second-best” methods of evaluation for when and
where you cannot use random assignment designs. I have also sought to stress the need for us
evaluators to be brutally honest about the product we can deliver; to say, “We can’t rigorously
answer your question about impact,” when that is the case; and to say, in such circumstances,
that monitoring performance, not flawed and misleading impact estimation, may be the best we can do.
Finally, my appreciation to Doug Besharov for making this moment possible through all the hard work
he has done in creating the Peter Rossi Award.