


New Jersey's Family Development Program: An Overview and Critique of the Rutgers Evaluation

Peter H. Rossi, Social and Demographic Research Institute, University of Massachusetts at Amherst

Authorized by a waiver from the U.S. Department of Health and Human Services (HHS), New Jersey reconfigured its Aid to Families with Dependent Children (AFDC) program in 1992. The authorizing legislation was passed in February 1992 and implemented in October 1992. The refurbished program, renamed the "New Jersey Family Development Program" (FDP), had the following features:

A "family cap." AFDC benefits were not increased for additional children born to AFDC payees if the children were conceived while their mother was on the rolls. The family cap did not apply to other benefits, such as food stamps, WIC, Medicaid, or housing subsidies.

A more generous earned-income disregard for AFDC recipients sanctioned under the family cap. Benefits were not reduced because of earnings until the recipient earned an amount equal to 50 percent of cash benefits.

No marriage penalty. Financial penalties that applied under AFDC for marriage or remarriage were removed.

Increased benefits for two-parent households. Benefits for two-parent households were made more generous.

Extended Medicaid eligibility. Medicaid eligibility was continued for two years after leaving welfare for employment, an increase of one year.

Increased emphasis on employment services. The welfare department emphasized employment training and education to prepare recipients for employment.

Sanctions for noncompliance. When a recipient failed to comply with requirements concerning employment-related activities, benefits were reduced.

Because FDP's family cap applied only to AFDC cash benefits, its effects were somewhat softened by increased benefits from other programs. The birth of a child increased a family's food stamp benefits. Women sanctioned under the family cap could still participate in WIC and receive infant formula for additional children. If a sanctioned woman entered the labor force, her AFDC benefits were not reduced because of her earnings.

Although FDP was a "bundle" of changes, the feature that attracted the most attention in New Jersey and nationwide was the family cap. Accordingly, the evaluation of FDP's effectiveness and the discussions of its findings have focused on FDP's effects on fertility, contraceptive use, abortions, and sterilizations, with less attention given to effects on earnings and employment.

As a condition for granting the waiver, HHS had insisted that FDP be evaluated using a randomized experiment. The New Jersey Department of Human Services (NJDHS) contracted with the Rutgers University School of Social Work to conduct this evaluation. The experimental group's members would experience FDP, and the control group would continue under the old AFDC rules. Although most of the evidence would be drawn from administrative data, some would come from a survey of participants to be undertaken at the end of the experiment. The experiment was to run from October 1992 through December 1996.

The NJDHS was responsible for most of the operational aspects of the experiment; it took on the tasks of selecting participants, randomly allocating participants into experimental and control groups, training welfare workers in the rules governing how experimental and control participants were to be treated, and informing participants about the rules that applied to them. The main responsibilities of the Rutgers evaluation group were to analyze the data sets and to design and conduct the participant surveys.

This paper reviews and assesses the FDP evaluation; it is based almost entirely on the final reports of the FDP evaluation (Camasso et al. 1998a, 1998b). The next section describes the FDP experiment, the procedures used, and the experimental findings. The following section focuses on the pre-post analysis, and the final section summarizes the findings and assesses what has been learned from the FDP evaluation.[1]

The FDP Randomized Experiment

Implementation Issues

Maintaining the integrity of the experimental and control conditions posed two challenges.[2] First, control group members had to be informed (and persuaded) that the provisions of FDP that were widely discussed in the mass media, and that some of their kin, friends, and neighbors experienced, did not apply to them. Second, welfare workers handling control group cases also needed to be aware of those cases' special status and to apply the appropriate rules, not only while the cases were active but also if they left the rolls and subsequently reapplied.

The evidence shows that maintaining the integrity of the control group was problematic. Although control group members were told about their status at the time they were enrolled in the experiment and were sent letters with that information, the final report does not contain any information on how that knowledge was reinforced by additional written or oral communications (Camasso et al. 1998a). In addition, in the first year or so, about 20 control group recipients who had additional children were not given the increases in benefits called for by the rules of the experiment. Because only a small proportion of recipients have children in such a period, this implies that a much larger number of control group women were mistakenly treated as if they were subject to FDP rules.

In addition, a 1995 survey of a sample of participants in the experiment found that there was considerable confusion among both experimental and control group members about the welfare rules to which they were subject and even about whether they were members of the control group. As shown in panel A of table 2, a majority (55%) of the control group respondents claimed that they had not been told they were in the control group, and more than a quarter (28%) of those in the experimental group claimed that they had been told they were control group members. Even more disturbing are the findings shown in panel B, which summarizes the answers participants gave to questions about whether they were subject to the family-cap rules. Only 7 percent of the control group believed that they would receive additional cash benefits were they to have additional children, and 35 percent believed that they would receive no additional benefits after having an additional child. Note that no substantial differences exist between experimental group members and control group members in their answers to these questions.

Neither experimental nor control group members correctly understood the rules of FDP and AFDC. In either group, an additional child meant that food stamp benefits would be increased and that Medicaid coverage would be extended to the child; only small numbers in both groups understood this correctly, and more than one-third of each group believed, incorrectly, that they would receive no additional benefits of any kind.

Although the findings shown in table 2 raise strong doubts about the degree of fidelity with which control conditions were maintained in the experiment, the survey data cannot be considered definitive. The questionnaire item shown in panel A appears to be poorly worded: "control group" is scarcely a term used frequently in everyday discourse, and respondents simply may not perceive their status in terms of membership in a control group. The question used in panel B also is problematic. Few of the respondents may have faced the issue of pregnancy; were they to do so, many would quickly try to find out what the implications of pregnancy were for their benefit status. After all, when asked, many Americans are not able to name the ocean they would have to cross to get to Europe, but when contemplating such a trip, few would fail to find out how to get there. For most practical purposes, sufficient knowledge is not complete knowledge. These considerations argue against taking the findings shown in table 2 as indicators of serious deterioration of experimental conditions. Nonetheless, those findings do indicate that the experiment was compromised to some degree.

The effect of poor implementation on experimental findings arises from the dilution of the contrast between experimental and control groups. Assuming that there was no bias in the implementation failure (that is, families who were given the wrong treatment did not differ from families who received the treatment for which they were designated), the differences between the two groups would be diminished, and the standard errors of that difference would be enlarged in the data analysis. With extensive implementation failure, an effective treatment would appear to be ineffective.
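
A small simulation, not drawn from the report, illustrates the attenuation. The quarterly birth probabilities and the shares of control cases mistakenly handled under FDP-like rules are assumed values chosen only to show the direction of the bias.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50_000                              # hypothetical sample size per group
    p_control, p_treated = 0.060, 0.050     # assumed quarterly birth probabilities

    def estimated_effect(crossover_share):
        """Difference in observed birth rates when a share of control cases
        is mistakenly handled under the treatment (family-cap) rules."""
        exposed = rng.random(n) < crossover_share
        p_ctrl_mix = np.where(exposed, p_treated, p_control)
        births_ctrl = rng.random(n) < p_ctrl_mix
        births_trt = rng.random(n) < p_treated
        return births_trt.mean() - births_ctrl.mean()

    for share in (0.0, 0.25, 0.50):
        print(f"crossover {share:.0%}: estimated effect {estimated_effect(share):+.4f}")
    # The true effect is -0.010; as crossover grows, the estimate shrinks toward zero.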

Outcome Measures

Table 3 shows the major outcomes used in the impact analysis and their sources. Note that only administrative data were used. The Family Assistance Management Information System (FAMIS) maintained by the New Jersey Department of Family Assistance was the source of information on welfare dependency, welfare payments, births, and earned income and employment, as well as of the covariates used in the analyses. The Medicaid payment file, which contained records of payments made for abortions, contraceptive services, and sterilizations, was linked to families enrolled in the experiment through common AFDC case identifiers. The New Jersey Department of Labor's (NJDOL's) wage files consisted of employers' quarterly reports on wages paid to each employee and were linked to the FAMIS files through names and social security numbers.

Births are recorded both in FAMIS and in the Medicaid payment files.[3] FAMIS records were used for the experiment because the New Jersey Medicaid files showed an abrupt downward shift in recorded births during the last years of the experiment, apparently the result of a shift to HMOs that led to a decline in recording. The birth, contraceptive-use, abortion, and sterilization measures have inherent weaknesses: births are detected only when recipients are on the welfare rolls, and the last three measures are detected only when recipients are on the Medicaid rolls. Yet unrecorded events are relevant. Although additional births do not affect benefits while a family is off the welfare rolls, they would affect welfare payments for experimental group families that later re-enroll. The same consideration makes abortions, contraceptive use, and sterilizations relevant for families off the welfare rolls.

Corresponding gaps exist in the earnings and employment data, which were not measured for periods when members of the experiment were not on the welfare rolls. One of the goals of FDP was to encourage recipients to leave welfare by becoming employed; accordingly, earnings and employment after leaving welfare are important outcome indicators.

Data Analysis

The administrative data contain information only for the periods in which a recipient was enrolled, whereas a conventional experimental analysis requires records for every participant covering the entire period of the experiment.[4] Assembling such complete records would have required surveying experiment members periodically throughout the experiment or using data sets (such as birth registration records) that record outcomes continuously.

The Rutgers research group elected to use a pooled cross-section strategy for the data analysis, a strategy ordinarily applied to nonexperimental data, which permitted the use of the "perforated" administrative data sets.[5] The same pooled cross-section approach also was used for the analysis of the pre-post data. The units of analysis are "recipient-quarters," defined as a quarter in which a recipient is enrolled on welfare; more than 125,000 recipient-quarters were used in the analyses. The outcome measures were defined for each recipient-quarter. For example, if a recipient gave birth to a child in a quarter, the births outcome variable is coded 1 for that quarter and 0 otherwise; continuous outcomes such as earnings are recorded as continuous variables. This analytic strategy meant that the major advantages of the experimental design were lost: potentially large selection biases could arise from enrollment changes subsequent to randomization. The researchers estimated multivariate statistical models, regressing each outcome variable on experimental status and a set of covariates obtained from the FAMIS files. Table 4 lists the regressors typically used in the equations.
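
As a rough sketch of how such a recipient-quarter file could be assembled, the fragment below expands hypothetical enrollment spells into one record per enrolled quarter and codes a binary birth outcome. The spell and birth records, and the column names, are invented for illustration and are not taken from the report.

    import pandas as pd

    # Hypothetical enrollment spells: one row per recipient per continuous spell on the rolls.
    spells = pd.DataFrame({
        "case_id": [101, 101, 202],
        "start_q": ["1993Q1", "1994Q3", "1993Q2"],
        "end_q":   ["1993Q4", "1995Q1", "1996Q4"],
    })

    # Expand each spell into one record per enrolled quarter ("recipient-quarters").
    rows = []
    for spell in spells.itertuples(index=False):
        for q in pd.period_range(spell.start_q, spell.end_q, freq="Q"):
            rows.append({"case_id": spell.case_id, "quarter": q})
    panel = pd.DataFrame(rows)

    # Hypothetical birth events drawn from FAMIS-like records; the outcome is coded
    # 1 for a recipient-quarter in which a birth occurred and 0 otherwise.
    births = pd.DataFrame({"case_id": [101], "quarter": [pd.Period("1994Q4", freq="Q")]})
    panel = panel.merge(births.assign(birth=1), on=["case_id", "quarter"], how="left")
    panel["birth"] = panel["birth"].fillna(0).astype(int)
    print(panel)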

The effects of FDP were meant to be captured by "treatment status," a dummy variable marking whether the recipient was in the experimental group, and an interaction term, "time*status," which measured time trends in experimental effects. The variable "time" measured the trend in the outcome independent of experimental effects. The remaining regressors in table 4 are covariates included because they ordinarily affect outcomes. Their use can make the estimates of effects more precise by reducing their standard errors. For example, the New Jersey welfare system varies to some degree from county to county, and age clearly affects outcomes such as fertility.[6]
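
A minimal sketch of this kind of specification, fitted to synthetic data, is shown below. The variable names and the set of covariates are illustrative and do not reproduce the report's exact equation.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 20_000                                       # hypothetical recipient-quarters
    panel = pd.DataFrame({
        "treatment": rng.integers(0, 2, n),          # 1 = experimental (FDP) group
        "time": rng.integers(1, 17, n),              # quarter of observation
        "age": rng.integers(18, 45, n),
        "eligible_children": rng.integers(1, 5, n),
        "county": rng.integers(1, 8, n),
    })
    logit_p = -3.0 - 0.10 * panel["treatment"] - 0.02 * (panel["age"] - 30)
    panel["birth"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    # Regress the binary outcome on treatment status, time, their interaction,
    # and covariates of the kind listed in table 4 (names are illustrative).
    model = smf.logit(
        "birth ~ treatment + time + treatment:time + age + eligible_children + C(county)",
        data=panel,
    )
    print(model.fit(disp=0).summary())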

The data set presented some tricky problems. First, although no attempt to link data sets is ever completely successful, in this case it was unclear how much missing data the matching produced. For example, if no record could be found for a recipient in the NJDOL wage file, either the recipient had no earnings or an existing record with earnings could not be matched; an error in recording a social security number could mean that a person with earnings could not be identified as an AFDC recipient. Errors in linking FAMIS and Medicaid could have been detected because all study participants should have appeared in both data sets. Unfortunately, the report is silent about how successful the linking was.
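
The fragment below sketches how the success of such a linkage could have been quantified: merge the two files on the linking key and count the cases that fail to match. The file layouts, key, and values are invented for illustration.

    import pandas as pd

    # Hypothetical extracts: FAMIS case records and NJDOL quarterly wage records,
    # both carrying a social security number used as the linking key.
    famis = pd.DataFrame({"ssn": ["111", "222", "333", "444"], "case_id": [1, 2, 3, 4]})
    wages = pd.DataFrame({"ssn": ["111", "333", "999"], "quarterly_wages": [2100, 0, 1800]})

    linked = famis.merge(wages, on="ssn", how="left", indicator=True)
    match_rate = (linked["_merge"] == "both").mean()
    print(f"share of FAMIS cases matched to a wage record: {match_rate:.0%}")

    # Unmatched cases are ambiguous: either the recipient truly had no reported
    # earnings, or a recording error (e.g., a mistyped SSN) prevented the match.
    print(linked.loc[linked["_merge"] == "left_only", ["case_id"]])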

Second, the FAMIS data set included members of the experiment only when they were enrolled in FDP or AFDC. Although Medicaid records may have included some members when they were not enrolled, no linkages can be made for unenrolled periods. Consequently, children conceived while an experimental group member was enrolled but born while she was unenrolled were not covered.

Third, the multiple quarter records for a given recipient are not independent; that is, a recipient�s fertility status in one quarter is related to her fertility status in another quarter. For example, a woman who gives birth in a given quarter cannot give birth again for at least three quarters and, most likely, for a longer period. A woman who undergoes sterilization in a quarter cannot give birth in any subsequent quarter. A woman practicing contraception in a quarter is more likely to continue doing so. These intrapersonal dependencies across records, if not taken into account, can lead to understated standard errors.

Fourth, as treated in the analyses, the outcomes are binary variables; for example, a recipient either gives birth in a quarter or does not. Statistical models for such outcomes must take this characteristic into account; in particular, ordinary least squares is not appropriate for binary outcome variables.

Apparently unable to choose definitively among alternative statistical models,[7] the researchers present results from four different estimation procedures: (1) ordinary least squares (OLS), (2) logit regression, (3) probit regression, and (4) OLS with the Huber correction for clustered data. Each of these models rests on different assumptions about the data. The two OLS models are designed for continuous outcome (dependent) variables that can take on any numerical value, whereas logit and probit regression are designed for binary dependent variables. The logit and probit models differ in their assumptions about the distribution of the error terms. The OLS model with Huber corrections is designed to take into account dependence among observations.

Given that the outcome variables are binary, the choice of either OLS or OLS with Huber corrections is clearly wrong: logit or probit is the appropriate model. In the versions used, however, neither logit nor probit takes into account the dependencies among observations, so standard errors are underestimated and estimated coefficients may appear to be statistically significant when they are not. This deficiency is not inherent in either logit or probit: advanced statistical software packages typically provide Huber corrections for logit and probit as well as pooled cross-section fixed-effects logit and probit models.[8]
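
The sketch below, built on synthetic data with a recipient-level random effect, shows the kind of correction described here: a logit fitted once with conventional standard errors and once with Huber-type standard errors clustered on the recipient. It is an illustration of the technique, not a re-analysis of the FDP data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    cases, quarters = 3_000, 8                      # hypothetical recipients x quarters
    df = pd.DataFrame({
        "case_id": np.repeat(np.arange(cases), quarters),
        "time": np.tile(np.arange(quarters), cases),
        "treatment": np.repeat(rng.integers(0, 2, cases), quarters),
    })
    # A recipient-level random effect induces dependence among a woman's quarters.
    case_effect = rng.normal(0.0, 1.0, cases)
    p = 1 / (1 + np.exp(-(-3.0 + case_effect[df["case_id"]])))
    df["birth"] = (rng.random(len(df)) < p).astype(int)

    model = smf.logit("birth ~ treatment + time + treatment:time", data=df)
    naive = model.fit(disp=0)                       # treats all observations as independent
    clustered = model.fit(disp=0, cov_type="cluster",
                          cov_kwds={"groups": df["case_id"]})

    # Clustering on the recipient widens the standard error of the treatment effect,
    # which is the correction the text argues the Rutgers analyses lacked.
    print(naive.bse["treatment"], clustered.bse["treatment"])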

The fact that the researchers did not apply appropriate statistical models makes it difficult to assess the worth of their findings. By and large, the four models produce coefficients that are similarly signed, but not always, and results shown as statistically significant might not remain so under the most appropriate approach.

Experimental Results

Welfare Dependency.

Fertility-Related Behavior.

The intent of the family cap was to reduce births to welfare mothers and to increase behaviors that would foster that result. Accordingly, a major portion of the analysis was devoted to estimating experimental effects on births, abortions, the use of family-planning services, contraceptive use, and sterilizations. Table 5 summarizes the findings for both new and ongoing cases. Each entry in the table is a coefficient for experimental effects derived from one of the four statistical procedures. The rows labeled "treatment" are the coefficients for the experimental group: a positive coefficient means that the experimental group experienced more of that outcome, and a negative coefficient means that the experimental group experienced fewer of those events than the control group did. The coefficients for the time*status variable measure the trends over time in the outcome for the experimental group; a positive coefficient means that the variable increased over time in the experimental group.
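
To give a sense of scale, the arithmetic below converts a logit coefficient of roughly the size reported for new cases in panel A (about -0.2) into a change in the quarterly birth probability. The 5 percent baseline rate is an assumption for illustration only, not a figure taken from the report.

    import math

    def prob_change(baseline_p, logit_coef):
        """Change in event probability implied by a logit coefficient,
        holding the rest of the linear predictor at its baseline value."""
        base_logit = math.log(baseline_p / (1 - baseline_p))
        new_p = 1 / (1 + math.exp(-(base_logit + logit_coef)))
        return new_p - baseline_p

    # Assumed 5% quarterly birth rate; treatment coefficient of -0.2.
    print(f"{prob_change(0.05, -0.2):+.4f}")   # about -0.009, i.e., 0.9 percentage points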

Panel A of table 5 lists the coefficients for births occurring to the welfare recipients.[9] For ongoing cases, the probit and logit equations found that births declined in the experimental group as time went on. Among new cases, the experimental group experienced fewer births than the control group throughout the time period (i.e., there was no trend). Note that the two OLS equations find no significant differences between the experimental and control families.

Panel B shows the results for the births variable for a subset restricted to recipients ages 15 to 45. In addition, the three quarters following a birth were excluded from the analysis, because births in those quarters are physically impossible. The findings are quite similar to the findings in Panel A, except that the coefficients are slightly larger.

Panel C lists the effects on the number of abortions obtained by recipients ages 15 to 45. Here the findings tend to be fairly consistent across models, with no significant differences between ongoing cases in the experimental and control groups. Significant positive coefficients were found for new cases, however, suggesting that newly enrolled welfare recipients in the experimental group were more likely to abort pregnancies than their counterparts in the control group. Possible explanations for the differences between experimental and control group members are not given in the report.

Panel D shows the effects on visits to family-planning services. Ongoing, but not new, cases in the experimental group were more likely than their control group counterparts to use those services. Panel E shows contradictory findings concerning contraceptive use: ongoing cases in the experimental group were less likely to use contraceptive services, but new cases in the experimental group were more likely to do so. Finally, Panel F contains somewhat contradictory findings concerning sterilizations, but the researchers claim that the data on sterilizations were likely to be invalid because of a shift to managed care near the end of the experiment.

It is difficult to know what to make of the findings shown in table 5. If we disregard the issue of whether the statistical models are appropriate, clear differences often exist between ongoing and new cases in their reactions to the experimental treatment. This may be an important finding about the reactions of long-term versus short-term recipients. However, it is difficult to ignore the model-selection problem. Many of the coefficients for logit and probit, arguably the best models used, have associated t values that are not very large,[10] raising the question of whether corrections for the lack of independence among observations would lower those values to insignificant levels. The deficiencies in the analyses, coupled with the disturbing signs that the experiment may not have been successfully implemented, lead to low confidence that the findings are firm enough to take seriously.

The Pre-Post Analysis[11]

The time period studied ran from January 1991 through December 1996, providing 22 months of observations before FDP's implementation and 38 months of observations under FDP. The pre-post analysis used administrative data from the sources shown in table 3, primarily the FAMIS and Medicaid payment files. No data were drawn from the employment and wage files of the NJDOL.

The pre-post analysis used two analysis strategies. First, the data were aggregated by quarter and analyzed as an interrupted time series: the before-FDP trends in major outcomes were contrasted with outcome trends in the post-FDP period, using tests to discern whether the trends in the two periods differed. The time-series analyses showed no clear differences in births, but they did show that abortions increased in the post-FDP period.

The second analysis strategy relied on disaggregated data. The unit of analysis, the client-quarter, was identical to that of the FDP experiment; more than 2.3 million client-quarters were generated for the analysis.[12] The outcome variables are identical to those shown in table 3, except that the wage and earnings data from the NJDOL files were not used.

The modeling approach also was quite similar to that used in the analysis of the experiment, although the FDP effects were modeled somewhat differently. The post-FDP time period was divided into two segments, with FDP modeled by separate terms for each period, defined as follows (a model sketch appears after the list):

Middle: The period of FDP implementation (December 1992 to September 1993)

Post: The period of full implementation (October 1993 to December 1996)

Time*middle: An interaction term capturing the trend during implementation

Time*post: An interaction term capturing time trends during the full implementation period.
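
A minimal sketch of this segmented pre-post specification, fitted to synthetic client-quarter data, appears below. The quarter cutoffs, the single outcome, and all values are illustrative assumptions, not the report's.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 50_000                                            # hypothetical client-quarters
    df = pd.DataFrame({"time": rng.integers(0, 24, n)})   # quarters since 1991Q1

    # Period dummies as described in the text (quarter cutoffs are approximate).
    df["middle"] = ((df["time"] >= 7) & (df["time"] < 11)).astype(int)   # FDP phase-in
    df["post"] = (df["time"] >= 11).astype(int)                          # full implementation
    df["birth"] = (rng.random(n) < 0.05).astype(int)

    # Level shifts (middle, post) plus time trends within each segment.
    model = smf.logit("birth ~ time + middle + post + middle:time + post:time", data=df)
    print(model.fit(disp=0).params)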

Table 6 presents the estimated effect coefficients from these analyses.[13]

Panels A and B of table 6 show the effect coefficients for births to the welfare recipient (excluding births to other female household members); Panel B excludes, for each birth, the three subsequent quarters. Both panels tell much the same story: although the level of births to recipients rose with FDP's introduction, births trended downward over both the middle and the post periods. Taking the middle and post periods together, the result was a lower birth rate for the post-FDP period.

Using the coefficients shown in Panel A, the researchers calculated[14] the total number of births averted as follows (a sketch of the projection method appears after the list):

- OLS equation: 15,158 births averted
- Logit equation: 11,316 births averted
- Probit equation: 14,057 births averted.
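
The projection logic described in note 14 can be sketched as follows: predict quarterly births with and without the FDP terms at mean covariate values, scale by an assumed caseload of 100,000, and sum the differences. Every coefficient and the resulting total below are illustrative stand-ins, not the report's estimates.

    import math

    def quarterly_births(base_logit, fdp_terms, caseload=100_000):
        """Expected births in a quarter for a caseload evaluated at mean covariates."""
        p = 1 / (1 + math.exp(-(base_logit + fdp_terms)))
        return caseload * p

    # Illustrative values only: an assumed baseline linear predictor and FDP
    # level/trend coefficients of roughly the pattern shown in table 6, panel A.
    base_logit = -2.9
    middle_coef, post_coef, trend_coef = 0.4, -0.02, -0.03

    averted = 0.0
    for t in range(13):                      # quarters after FDP took effect (illustrative)
        fdp = (middle_coef if t < 4 else post_coef) + trend_coef * t
        averted += quarterly_births(base_logit, 0.0) - quarterly_births(base_logit, fdp)
    print(f"projected births averted over the period: {averted:,.0f}")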

[15]

Panel C lists the coefficients for abortions, which appear to indicate an increase in abortions during the period of full implementation of FDP. However, the coefficients for the post period are only weakly positive, with t values around 2.5. The Rutgers research team calculated the number of abortions produced by FDP as follows:

- OLS: 2,064 abortions
- Logit: 1,216 abortions
- Probit: 1,329 abortions.

Assessment of the New Jersey FDP Evaluation

The Rutgers researchers conclude: "Our best judgment is that the Family Development Program and its family cap had a definite effect on the family formation decisions of women on AFDC in New Jersey. Women on AFDC who were considering whether or not to bear more children were influenced by the additional financial restrictions of the family cap. While the estimated magnitudes of these impacts may vary with differences in methodology and estimates methods, the ultimate outcome remains the same. The net effect is a reduction in pregnancies and births among women on AFDC in New Jersey; this conclusion is supported by research reported here and elsewhere, and is consistent with expectations derived from economic analyses" (Camasso et al. 1998b, 164).

An examination of the methods used and of the findings themselves does not strongly support the firmness of the researchers' conclusions. This does not mean that FDP had no effects on births and abortions, or that the effects were opposite in sign to those claimed; it means only that the deficiencies in the research lead this reviewer to conclude that the various forms of evidence in the reports are not firm enough to support the researchers' claims. This assessment rests on three reasons:

1. The implementation of the FDP experiment was sufficiently flawed to undermine the resulting data. Little evidence indicates that members of the experimental and control groups knew the particular AFDC and FDP rules to which they were subject. According to the survey of families in the experiment, both experimental and control group members overwhelmingly believed that they were subject to the family cap, a finding so strong that it overcomes the survey's poor response rate. Although the analysis of the data showed some FDP effects, the deficiencies in the analysis methods must be taken into consideration. In addition, the omission of fertility events occurring while families were not enrolled in AFDC means that the analyses cannot be regarded as taking advantage of the randomized experimental design.

2. The pre-post analysis is an evaluation design that cannot support definitive estimates of program effects. In particular, it cannot take into account the effects of time or of other events that might affect outcomes. Some evidence shows that the AFDC population changed during the same period in which FDP was enacted and implemented; the effects estimated by the pre-post analysis are likely confounded with those changes.

3. The statistical models used to estimate the net effects of FDP are not appropriate, given the characteristics of the data. The use of linear multiple regression (OLS) with binary dependent variables is simply incorrect. Indeed, it is surprising that the researchers present OLS results and appear to regard them as valid. Although logit and probit regressions are designed for use with binary dependent variables, the likely presence of statistical dependencies within observations made on individual welfare recipients means that the resulting standard errors of effect coefficients are underestimated. Consequently, some or all of the effect coefficients that were marked as statistically significant in the reports may not be so had those dependencies been taken into account. If such corrections had been made, however, it is unlikely that the signs of coefficients would have changed.

The FDP may have had the effects that the Rutgers research group claims, or it may not have. We simply do not know from this research, because the deficiencies noted above are serious enough to cast strong doubt on the validity of the findings.

Acknowledgments

This paper has benefited from comments on an earlier draft by Professor Michael Camasso and his co-authors; Howard Rolston of HHS; Rudolph Myers of NJDHS; and Michael Laracy of the Annie E. Casey Foundation. Their comments allowed me to correct some factual errors in that draft and to clear up some ambiguous statements. They also disputed many of my assessments; I was persuaded by some of their comments and made changes, but I did not make changes in response to objections with which I could not agree. I am grateful for all the comments.

References

Camasso, M. J.; Harvey, C.; Jagannathan, R.; and Killingsworth, M. 1998a. A final report on the impact of New Jersey's Family Development Program. New Brunswick, NJ: Rutgers University.

Camasso, M. J.; Harvey, C.; Jagannathan, R.; and Killingsworth, M. 1998b. A final report on the impact of New Jersey's Family Development Program: Results from a pre-post analysis of AFDC case heads from 1990 to 1996. New Brunswick, NJ: Rutgers University.

Camasso, M. J.; Harvey, C.; and Jagannathan, R. 1998. Cost-benefit analysis of New Jersey's Family Development Program: Final report. New Brunswick, NJ: Rutgers University.

Note: Ongoing cases were those on the rolls as of October 1, 1992. New applicants were those enrolled between October 1, 1992, and December 31, 1994.

Table 2. Recipient Understanding of Group Membership and Welfare Rules: Client Survey

A. Perceived and Actual Membership in Experimental and Control Groups

Responses to "Have you been told that you are included in a 'control group' of welfare recipients who receive welfare benefits under the old welfare rules called REACH or JOBS?"

                           Actual Assigned Membership
Perceived Membership       Experimental Group    Control Group
Experimental group         427 (65%)             320 (55%)
Control group              184 (28%)             223 (39%)
Don't know                  42 (6%)               36 (6%)
Total                      653 (100%)            579 (100%)

B. Welfare Recipient Perceptions of Applicability of Family Cap Provisions to Own Case

Responses to "If you were to remain on welfare and have a baby one year from now, what, if any, additional benefits would your child receive? Additional food stamps? Additional cash benefits? Additional Medicaid? No additional benefits of any kind?"

Benefits                                Experimental Group (%)    Control Group (%)
Additional food stamps                  26.6                      27.6
Additional cash benefits                 4.4                       6.9
Additional Medicaid                     37.6                      40.6
No additional benefits of any kind      35.5                      34.5

Note: Responses were recorded separately for each benefit.

Table 3. Outcome Measures and Data Sources Used in the Impact Assessment of FDP

Outcome Measure                      Data Source
Welfare dependency (enrollment)      FAMIS
Welfare payments                     FAMIS
Births                               FAMIS
Abortions                            Medicaid payment files
Sterilizations                       Medicaid payment files
Contraceptive services               Medicaid payment files
Wages and employment                 New Jersey Wage Reporting System and FAMIS

FAMIS = New Jersey Family Assistance Management Information System

Table 4. Regressors Used in Multivariate Analyses

Regressor                     Definition
Treatment status              1 = experimental group; 0 = control
Time*status                   Time and treatment status interaction term
Time in treatment             Number of quarters enrolled
Time                          Quarter of observation
Seasonal dummies              A set of dummy variables for the season of the observation
Age                           Age of recipient in years
Non-needy parent              Dummy for "children only" payment cases
Race dummies                  A set of dummy variables for race of recipient
Education dummies             A set of dummy variables for educational attainment
Eligible children             Number of eligible children covered by benefit
Earned income                 Earned income of adult female recipient
County dummies                A set of dummy variables for each of the counties included
County unemployment rate      Estimated county unemployment rate
Welfare participation rate    Percentage of households enrolled in welfare in county


Table 5. Treatment-Effect Coefficients for Fertility-Related Behaviors

                              Ongoing Cases                        New Cases
                  OLS     Logit    Probit   Huber      OLS     Logit    Probit   Huber

A. Own Births (Ages 15-45)
Treatment        .001     .052     .029     .002      -.004   -.219*   -.098*   -.005
Time*status      .000    -.028*   -.011*   -.000       NC      NC       NC       .000

B. Own Births (Adjusted Risk Pool, Ages 15-45)
Treatment        .001     .071     .034     NC        -.004*  -.214*   -.098*    NC
Time*status     -.000    -.029*   -.012*    NC         NC      NC       NC       NC

C. Abortions (Ages 15-45)
Treatment       -.002    -.091    -.037    -.002       .009*   .382*    .170*    .008*
Time*status      .000     .017     .007     .000      -.001   -.023    -.010    -.001

D. Family Planning Services
Treatment        .005*    .139*    .063*    .005*     -.001   -.046    -.025    -.001
Time*status      NC       NC       NC       NC         .000    .007     .003     .000

E. Contraceptive Use
Treatment       -.009*   -.134*   -.064*   -.009*      .009*   .032     .018     .012*
Time*status      .001*   -.013*   -.006     .000       .000    .046*    .020*    NC

F. Sterilizations
Treatment        .003*    .552*    .205*    .003*     -.000   -.006    -.004    -.000
Time*status     -.000*   -.032    -.012    -.000*     -.000   -.021    -.007    -.000

NC = coefficients were not calculated.
* = coefficient significant at p ≤ .05.

Table 6. Treatment-Effect Coefficients for Pre-Post Analyses of Fertility-Related Behavior

                            Effect Coefficients
                 Middle      Post      Time*Middle   Time*Post

A. Own Births
OLS               .007*      .001       -.001*        -.001*
Logit             .389*     -.022       -.054*        -.028*
Probit            .202*     -.041*      -.027*        -.011*

B. Own Births (Restricted Risk Pool[a])
OLS               .021*      .009*      -.003*        -.003*
Logit             .776*      .275*      -.119*        -.085*
Probit            .392*      .104*      -.059*        -.040*

C. Abortions
OLS              -.003       .002*       .000          .000
Logit            -.150       .107*       .020         -.004
Probit           -.067       .040*       .009         -.001

D. Family Planning
OLS              -.014*     -.021*       .002*         .002*
Logit            -.250*     -.476*       .043*         .047*
Probit           -.135*     -.233*       .023*         .024*

E. Sterilizations
OLS              -.004*     -.008*       .001*         .001*
Logit           -1.597*    -3.497*       .241*         .397*
Probit           -.550*    -1.185*       .083*         .135*

a. Observations removed for three quarters after each birth.


[1] A third report (Camasso, Harvey, and Jagannathan 1998) describes a "cost-benefit" analysis concerned primarily with benefits deriving from recipient employment. Because no net effects on employment were found, FDP was not found to have any positive net benefits.

[2] Maintaining the integrity of the experiment was the obligation of the New Jersey Department of Human Services.

[3] Of course, better birth data arguably could have been obtained from birth registration records. Camasso and colleagues (1998a, 1998b) do not discuss why those records were not used.

[4] The original evaluation plans called for a survey of all members of the experimental and control groups to obtain outcome measures. I do not know why this survey was not done. Had the planned survey been accomplished successfully, conventional analyses would have been possible. The report does not discuss why the end-of-experiment survey was dropped, nor does it discuss why a conventional analysis strategy was not used.

[5] The administrative data consist of observations made on AFDC recipients each quarter, constituting a series of cross-sections. Cumulating across cross-sections produces "pooled cross-sections." The "perforations" in the resulting pooled data are observations missing because AFDC recipients are not observed when they are not on the AFDC rolls.

[6] A major omission is parity status (the number of previous births), although the number of eligible children in the household may be regarded as a rough proxy.

[7] Preliminary reports relied exclusively on the OLS model. The other models shown in the final reports were likely added in response to criticisms from the reviewers of the preliminary reports.

[8] Subsequent to the final report, the Rutgers research team used a Huber-corrected probit model (as reported in their comments on this paper) and claim that the resulting coefficients are not very different from the uncorrected ones reported earlier. It would, of course, have been helpful if this correction had been done as part of the project's final report, but the report's other deficiencies are sufficiently serious that even these findings are not credible.

[9] Other births could be recorded for other female household members, typically adolescent daughters of the welfare recipient: such births were ignored.

[10] Typical t values range from 2.0 to 2.6.

[11] Unless otherwise noted, data in this section are from Camasso et al. 1998b.

[12] Welfare cases participating in the experiment were excluded from the pre-post analysis, along with clients who participated in only one quarter.

[13] Note that direct comparisons between tables 5 and 6 are not possible, because the estimates in table 5 are presented separately for ongoing and new cases, a separation that is not possible within the data set used for table 6. The report on the FDP experiment presents no analyses based on all the cases in the experiment.

[14] The calculations are based on the regression equations, entering mean values for covariate regressors, assuming 100,000 enrollees in each quarter. The numbers of births averted are the differences between projected births without the effect coefficients and with the effect coefficients summed over the entire period.

[15] Additional analyses not reported in the final report, which are based on Huber-corrected probit analysis, are reported in the comments by the Rutgers research group (see Chapter X).

