SAGE Journal Articles
This study, which was described by the textbook authors in Ch. 4, compares response rates for mixed-mode survey implementations involving mail and e-mail/Web surveys. Results demonstrated that response rates were higher for the mail survey condition, suggesting that mail surveying may be more cost-effective in some situations.
This study compares a randomized controlled trial (RCT) study design with a retrospective pretest method (RPM) design. The author tested whether RPM was a viable option for evaluating the effectiveness of two website content features (video and text) by running the RCT and RPM designs simultaneously. Results indicated that the retrospective pretest design produced findings comparable to the RCT, with no significant differences. The author concludes that RPM could be a viable alternative research design when RCTs are not feasible or preferred.
Journal Article 3: Schwarz, N., & Oyserman, D. (2001). Asking questions about behavior: Cognition, communication, and questionnaire construction. American Journal of Evaluation, 22, 127–160.
In this seminal article, the authors provide theory-grounded recommendations for constructing survey questions for evaluations. They highlight the underlying cognitive and communicative processes involved in answering questions about one's own behavior, along with common problems that arise within these processes. The authors provide a wealth of information and strategies for survey construction and implementation.
Journal Article 4: Shaw, T., Cross, D., & Zubrick, S. R. (2016). Testing for response shift bias in evaluations of school antibullying programs. Evaluation Review, 39, 527–554.
This study offers an example of a program evaluation in which response shift bias, one of the validity issues discussed in Ch. 4, is tested. The study investigated the presence of reconceptualization, reprioritization, and recalibration response shifts resulting from an anti-bullying intervention program. No evidence for response shift bias was found, but the authors provide a detailed explanation of response shift biases in evaluations and the importance of testing for this bias.