SAGE Journal Articles

Access full-text SAGE journal articles that have been carefully selected to support and expand on the concepts presented in each chapter. These articles are an ideal resource to support your assignments and studies.

Click on the following links, which will open in a new window.

Article 1: Martone, A., & Sireci, S. (2009). Evaluating alignment between curriculum, assessment, and instruction. Review of Educational Research, 79, 1332-1361.

http://rer.sagepub.com

Abstract: The authors (a) discuss the importance of alignment for facilitating proper assessment and instruction, (b) describe the three most common methods for evaluating the alignment between state content standards and assessments, (c) discuss the relative strengths and limitations of these methods, and (d) discuss examples of applications of each method. They conclude that choice of alignment method depends on the specific goals of a state or district and that alignment research is critical for ensuring the standards-assessment-instruction cycle facilitates student learning. Additional potential benefits of alignment research include valuable professional development for teachers and better understanding of the results from standardized assessments.

Article 2: Rieck, W.A. (1989). Teacher Tests: An Aspect of Evaluation. NASSP Bulletin, 73, 129-132.

http://bul.sagepub.com

Abstract: No abstract available

Article 3: Prestidge, L.K., & Williams Glaser, C.H. (2000). Authentic Assessment: Employing Appropriate Tools for Evaluating Students' Work in 21st-Century Classrooms. Intervention in School and Clinic, 35, 178-182.

http://isc.sagepub.com

Abstract: No abstract available

Article 4: Popham, W.J. (2001). Uses and Misuses of Standardized Tests. NASSP Bulletin, 85 (622), 24-31.

http://bul.sagepub.com

Abstract: Examines five tests by three publishers currently used in high schools and discusses four appropriate and inappropriate uses of these tests. Asserts that assessment literacy on the part of educators is essential to avoid the misuse of standardized tests.

Article 5: Moon, T.R., & Callahan, C.M. (2001). Classroom Performance Assessment: What Should It Look Like in a Standards-Based Classroom? NASSP Bulletin, 85 (622), 48-58.

http://bul.sagepub.com

Abstract: Content standards and tests aligned to them are the focus of teachers' efforts and often present challenges in meeting varying student interests, readiness levels, and learner profiles. This article describes how performance assessments can enable administrators and teachers not only to address content standards but also to consider the academic diversity in their classrooms.

Article 6: Carter, K. (1984). Do Teachers Understand Principles for Writing Tests? Journal of Teacher Education, 35 (6), 57-60.

http://jte.sagepub.com

Abstract: In this study the author identifies the need for teachers to develop more effective test-making skills. Many preservice and inservice teachers rely on a repertoire of limited and uninformed test construction skills when they create assessment items. Most problematic for teachers are items that test higher-order thinking skills, such as inference and prediction. Carter suggests a reexamination of preservice measurement courses and a more thorough critique of inservice and testing activities at the school district and classroom level.

Article 7: Johnson, R.L., McDaniel II, F., & Willeke, M.J. (2000). Using Portfolios in Program Evaluation: An Investigation of Interrater Reliability. American Journal of Evaluation, 21 (1), 65-80.

http://aje.sagepub.com

Abstract: Portfolios and other open-ended assessments are increasingly incorporated into evaluations and testing programs. However, questions about the reliability of such assessments continue to be raised. After reviewing forces that may be leading to increased interest in and use of portfolio assessment, we investigate the interrater reliability of a portfolio assessment used in a small-scale program evaluation. Three types of portfolio scores were investigated: analytic, combined analytic (formed by summing across analytic scores), and holistic. The interrater reliability coefficient was highest for summed analytic scores (r = .86). Results indicate that at least three raters are required to obtain acceptable levels of reliability for holistic and individual analytic scores.