
2009 Rossi Award Winner - Rebecca A. Maynard

Acceptance Remarks

(November 6, 2009)

I am greatly honored to receive the Peter H. Rossi Award. Peter was a pioneer of program evaluation and, some would argue, a rebellious academic in the best sense of the term. He chose to apply science to inform public policy and practice long before "evidence-based" policy and practice were in vogue. He also authored an important textbook that previewed "mixed methods" research long before that term was coined. A scan of his career makes clear that he was a man of strong principle, with an inquiring mind and a facility for matching research questions with the sample, data, and analytic methods needed to address them.

I want to share with you three principles of program evaluation research that I have come to view as cornerstones for the continued advancement of the field.

First, the goal of program evaluation and policy analysis—and of scientific research in general—is NOT to prove the effectiveness of a program, policy, or practice. Rather, it is the more subtle goal of amassing reliable evidence to enable us to make better decisions and achieve better outcomes.

I distinctly remember the feelings of self-doubt when the findings of some of my early research did not even qualify as "base hits." Who cared about research showing that youth who participated in a supported work program did not benefit from participation, or that setting up alternative schools did not prevent youth from dropping out? Even today, noted scholars openly criticize compendia of research that include high numbers of studies with null findings. Specifically, I am thinking about the criticisms of the "nothing works clearinghouse" (formally known as the What Works Clearinghouse), developed and supported by the Institute of Education Sciences.

A turning point for me on this score came when I signed on to the team evaluating the highly contentious national abstinence education program. As I talked with myriad stakeholders, I quickly realized the complexity of the task: to provide the best possible evidence about what investing public money in abstinence education means for the health and well-being of youths, whether the results were positive, neutral, or negative. The good news for those of us leading the effort was that important interest groups were eager to make use of the study findings, regardless of what they were. This also was the bad news. The groups whose interests were not supported by the results quite likely would doggedly seek to discredit the study.

For the first time in my career, I became focused on the importance of "bullet-proof" study designs. The study findings needed to be credible and actionable for everyone whose interest was the welfare of youth. This meant that we needed a "bullet-proof" design for analyses of program impacts and, also, plans for complementary approaches to generating rich, credible implementation and process analyses.

Second, excellence in program evaluation is facilitated by selfless collaboration among professionals with diverse backgrounds. I cannot claim sole credit for any research project I have ever worked on. Moreover, most often, I have worked on research teams that have been diverse in substantive expertise, disciplinary backgrounds, and breadth of experiences. Efficient, effective collaboration generally entails shifting “who’s on first” as a project moves from the design phase, through the implementation, data collection, analysis, and dissemination phases.

This principle was most prominently reinforced for me during some highly collaborative work I was privileged to be part of to expand the use of research synthesis methods in education and social welfare. This work built on a ten- to fifteen-year history of research synthesis in the field of medicine and the work of a small but robust community of scholars in education and psychology. Yet it took numerous scholars from a wide range of disciplines—including many APPAM members—six to seven years to work through many thorny technical issues and arrive at a widely accepted protocol applicable to reviews of effectiveness studies in the social sciences. The reason is not that the prior work was "bad." Rather, the methods had been developed for application to simpler problems than those common in education, employment and training, and social welfare. This experience underscored for me the importance of continually refining our craft through sustained, interdisciplinary collaboration.

Third, headlines are NOT the “end-game.” We should not judge our own professional contributions or those of others by the number of press mentions.

I first wrestled with this issue when working on a collaborative study to measure the social and economic costs of teenage childbearing. The sponsor of the project had "priors" that teenage pregnancy was so costly to young mothers (and to society) that if only we could (1) come up with the cost estimate, (2) print that cost in bright colors on glossy paper, and (3) insert the bright, glossy paper into Nike shoe boxes, the teens would get the message and the problem would go away. Teens would buy the shoes, read the paper, and, confronted with reality, avoid teenage childbearing.

But the research did not support the priors. There was ample evidence of undesirable consequences of teenage childbearing, but the negative consequences pertained primarily to the children. The message was too complex to convey through a shoe-box insert.

Still, the research was important. It showed us that there were high costs associated with teenage childbearing, but that the costs were borne by the children born to teens, not the teen mothers themselves. We needed to look to strategies other than the Nike shoe-box inserts to address the problem. This research has helped shape public policy, but in more nuanced, less visible ways than those envisioned by the study sponsor.

The theme of this conference is Evidence-Based Policy Making. I hold no illusions that my work has contributed in any big way to our ability to take smart, informed action on any policy. However, I am quite confident that as a community of scholars, the APPAM membership collectively is advancing our ability to use more and better evidence to guide public policy analysis and management. I am indebted to many of you in this room for contributing in large and small ways to the opportunities and support that have shaped my career. You have my sincere thanks and appreciation.

 

