SAGE Journal Articles


Journal Article 1: Newcomb, T. M. (1984). Conservation Program Evaluations: The Control of Self-Selection Bias. Evaluation Review, 8(3), 425-440. DOI: 10.1177/0193841X8400800308

Abstract: The Seattle City Light Department evaluated its weatherization program for low-income homeowners to determine how much electricity was conserved, and whether the program was cost-effective. The study employed a control group of future program participants in order to avoid the biased estimate that could result because participants are a voluntary, nonrandom group of utility customers.

Questions to Consider:

1. What was the design of this experiment?
2. How did the design guard against potential biases due to selection, testing, regression, history, and instrumentation?
3. How can this design be applied in the evaluation of similar programs?

Journal Article 2: Bell, S. H., Olsen, R. B., Orr, L. L., & Stuart, E. A. (2016). Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly. Educational Evaluation and Policy Analysis, 38(2), 318-335. DOI: 10.3102/0162373715617549

Abstract: Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, the authors combine lists of school districts that were selected nonrandomly for 11 educational impact studies with population data on student outcomes from the Reading First program.

Questions to Consider:

1. How was this study designed to determine the level of bias across the 11 nonrandomly selected sites?
2. What were the results of this study in terms of bias?
3. How can this design be applied in the evaluation of similar programs?

Journal Article 3: Grady, M. D., Edwards, D., Jr., & Pettus-Davis, C. (2017). A Longitudinal Outcome Evaluation of a Prison-Based Sex Offender Treatment Program. Sexual Abuse, 29(3), 239-266. DOI: 10.1177/1079063215585731

Abstract: Sex offender outcome studies continue to produce mixed results. A common critique of these studies is their lack of methodological rigor. This study attempts to address this critique by adhering to the standards established by the Collaborative Outcome Data Committee (CODC) aimed at increasing the quality and confidence in outcome studies.

Questions to Consider:

1. How was this study designed?
2. What cohorts were involved in the study?
3. How did the findings differ between the two groups?