SAGE Journal Articles


Journal Article 1: Buntins, M., Buntins, K., & Eggert, F. (2017). Clarifying the concept of validity: From measurement to everyday language. Theory & Psychology, 1–8.

Abstract: Test validity is widely understood as the degree to which a test measures what it should measure. We argue that this conceptualization does not refer to a psychometric problem but to the correspondence between scientific language and everyday language. Following Stevens, test results give an operational definition of attributes, qualifying any test as valid by definition. Following the representational theory of measurement, an attribute is defined by an empirical relational structure and a corresponding measurement model. Since measurement depends on the specified empirical structure, if a test measures anything, it must be valid. However, the question of validity can be asked in a meaningful way if one interprets test results in the context of everyday language. We conclude that validity can be understood as the degree to which the variable measured by a test corresponds to concepts of everyday language.

Journal Article 2: Feldt, L. S., & Charter, R. A. (2006). Averaging internal consistency reliability coefficients. Educational and Psychological Measurement, 66, 215–227.

Abstract: Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of “average,” and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the approaches generally produce unequal averages, a Monte Carlo study found little difference among the average reliabilities calculated by the first six approaches. The first three approaches may be especially useful for reliability generalization studies.
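To make the abstract's point concrete, the following is a minimal Python sketch. The coefficient values are hypothetical, and the three averaging conventions shown are common illustrative choices, not the paper's seven approaches: a raw arithmetic mean, a mean of Fisher z-transformed values, and a mean of cube-root-transformed error proportions (the Hakstian-Whalen normalizing transform for alpha).

```python
import math

# Hypothetical internal consistency (alpha) coefficients to be averaged.
alphas = [0.78, 0.84, 0.90, 0.72]

# 1. Simple arithmetic mean of the raw coefficients.
mean_raw = sum(alphas) / len(alphas)

# 2. Mean of Fisher z-transformed values, back-transformed
#    (a common convention for averaging correlation-like coefficients).
zs = [math.atanh(a) for a in alphas]
mean_fisher = math.tanh(sum(zs) / len(zs))

# 3. Mean of cube-root-transformed error proportions (1 - alpha),
#    back-transformed (Hakstian-Whalen transform).
ts = [(1 - a) ** (1 / 3) for a in alphas]
mean_hw = 1 - (sum(ts) / len(ts)) ** 3

print(f"raw mean:        {mean_raw:.4f}")    # 0.8100
print(f"Fisher-z mean:   {mean_fisher:.4f}")  # ~0.8215
print(f"Hakstian-Whalen: {mean_hw:.4f}")      # ~0.8184
```

The three "averages" of the same coefficients are unequal but close, consistent with the abstract's Monte Carlo finding that the approaches differ little in practice.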

Journal Article 3: Swanson, E., Wanzek, J., Haring, C., Ciullo, S., & McCulley, L. (2011). Intervention fidelity in special and general education research journals. The Journal of Special Education, 47, 3–13.

Abstract: Treatment fidelity reporting practices are described for general and special education journals with high impact factors that published intervention research from 2005 through 2009. The authors reviewed research articles, reported the proportion of intervention studies that described fidelity measurement, detailed the components of fidelity measurement reported, and determined whether the components of fidelity reported differed based on the research design, the type of intervention, or the number of intervention sessions. Results indicate that even intervention research articles in high-quality general and special education journals report fidelity inconsistently (less than 70% of the articles). Authors of single-case studies most frequently reported the collection of intervention fidelity data (81.3% of articles, compared with 67.4% of treatment-comparison study articles). Of the 67% of articles that provided information about intervention fidelity procedures, only 9.8% provided data about the quality of the treatment intervention.