SAGE Journal Articles

Click on the following links. Please note these will open in a new window.

Article 1: Shear, B. R., & Zumbo, B. D. (2013). False positives in multiple regression: Unanticipated consequences of measurement error in the predictor variables. Educational and Psychological Measurement, 73(5), 733–756. doi:10.1177/0013164413487738.

Summary/Abstract: Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new insights into the causes of this problem. Computer simulations and an illustrative example are used to demonstrate that when the predictor variables in a multiple regression model are correlated and one or more of them contains random measurement error, Type I error rates can approach 1.00, even for a nominal level of 0.05. The most important factors causing the problem are summarized and the implications are discussed. The authors use Zumbo’s Draper-Lindley-de Finetti framework to show that the inflation in Type I error rates results from a mismatch between the data researchers have, the assumptions of the statistical model, and the inferences they hope to make.
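The mechanism described in the abstract is easy to reproduce. Below is a minimal simulation sketch (not the authors' code; the sample size, correlation, reliability, and effect size are illustrative assumptions) in which Y depends only on a true predictor X1, X2 is correlated with X1 but has no effect on Y, and X1 is observed with measurement error. The test of X2's coefficient then rejects far more often than the nominal .05 level.

```python
# Illustrative sketch of the article's core claim: measurement error in a
# correlated predictor inflates the Type I error rate for a truly null one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 200, 2000, 0.05
rho = 0.7          # correlation between the true predictors (assumed)
reliability = 0.6  # proportion of observed X1 variance that is true score

rejections = 0
for _ in range(reps):
    # Correlated true scores for X1 and X2
    x1, x2 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n).T
    y = 0.5 * x1 + rng.standard_normal(n)             # X2 has NO effect on Y
    err_var = (1 - reliability) / reliability
    x1_obs = x1 + rng.normal(0, np.sqrt(err_var), n)  # X1 measured with error

    # OLS of y on (1, x1_obs, x2); test H0: beta for X2 equals 0
    X = np.column_stack([np.ones(n), x1_obs, x2])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 3)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())
    t = beta[2] / se[2]
    p = 2 * stats.t.sf(abs(t), df=n - 3)
    rejections += p < alpha

print(f"Empirical Type I error rate for X2: {rejections / reps:.3f}")
# Far above the nominal .05: measurement error in X1 leaks its effect
# into the correlated, but truly null, predictor X2.
```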

Questions to Consider

1. With respect to measurement, what assumptions are made by the ordinary least squares regression model? Should researchers be concerned? Explain.

2. In the classical test theory model, it is assumed that the expected value of the observed score equals the true score and that the measurement errors are uncorrelated with all other variables, including true scores and other measurement errors. This concept is called the ________.

  1. classical measurement error
  2. error term
  3. confounding error
  4. residual error

3. With respect to multiple regression, if the predictor variables are correlated and one or more of them contains random measurement error, there is the potential for _________.

  1. bias
  2. restricted ranges
  3. inflated Type I error rates
  4. inflated Type II error rates

Article 2: Nimon, K. F., & Oswald, F. L. (2013). Understanding the results of multiple linear regression. Organizational Research Methods, 16(4), 650–674. doi:10.1177/1094428113493929.

Summary/Abstract: Multiple linear regression (MLR) remains a mainstay analysis in organizational research, yet intercorrelations between predictors (multicollinearity) undermine the interpretation of MLR weights in terms of predictor contributions to the criterion. Alternative indices include validity coefficients, structure coefficients, product measures, relative weights, all-possible-subsets regression, dominance weights, and commonality coefficients. This article reviews these indices and, uniquely, offers freely available software that (a) computes and compares all of these indices with one another; (b) computes associated bootstrapped confidence intervals; and (c) does so for any number of predictors so long as the correlation matrix is positive definite. Other available software is limited in all of these respects. We invite researchers to use this software to increase their insights when applying MLR to a data set. Avenues for future research and application are discussed.
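To make a few of the reviewed indices concrete, here is a minimal Python sketch (not the authors' software; the two-predictor data-generating values are illustrative assumptions) computing standardized beta weights, validity (zero-order) coefficients, structure coefficients, and Pratt product measures.

```python
# Sketch of several indices reviewed in the article, for two predictors.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(n)   # correlated predictors
y = 0.4 * x1 + 0.3 * x2 + rng.standard_normal(n)

z = lambda v: (v - v.mean()) / v.std()          # place on a z-score metric
X = np.column_stack([z(x1), z(x2)])
yz = z(y)

betas = np.linalg.lstsq(X, yz, rcond=None)[0]   # standardized beta weights
y_hat = X @ betas
r2 = np.var(y_hat) / np.var(yz)                 # model R-squared

validity = np.array([np.corrcoef(X[:, j], yz)[0, 1] for j in range(2)])
structure = validity / np.sqrt(r2)              # r between each X and Y-hat
pratt = betas * validity                        # sums to R^2 across predictors

print("beta weights:     ", betas.round(3))
print("validity (r):     ", validity.round(3))
print("structure coeffs: ", structure.round(3))
print("Pratt measures:   ", pratt.round(3), "sum =", pratt.sum().round(3))
print("R^2:              ", round(r2, 3))
```

Squaring a structure coefficient gives the proportion of variance in the predicted scores attributable to that predictor, which is why the indices diverge from beta weights when predictors are correlated.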

Questions to Consider

1. How does multicollinearity among predictors impact the use of regression weights?

2. Which of the following indices identifies how much variance in the predicted scores (Ŷ) for the DV can be attributed to each IV?

  1. zero-order correlation coefficient
  2. squared structure coefficients
  3. structure coefficients
  4. Pratt measure

3. The authors state that when the predictors in X are correlated, _______ does not disentangle the effects of X on Y from the standard deviations of X; in fact, it confounds them in the service of placing all weights on a z-score metric.

  1. standardizing the data
  2. squaring the data
  3. winsorizing the data
  4. transforming the data