Chapter 12: Multiple Regression

1. In a multiple regression analysis with one predictor variable R is _______.

  a. The proportion of variance in the criterion associated with the predictor.
  b. The unit change in the predictor associated with each unit change in the criterion.
  c. The unit change in the criterion associated with each unit change in the predictor.
  d. Equal to the zero-order correlation between the predictor and the criterion.
  e. The square root of the y-intercept.

Answer: D
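A brief worked note (standard definition, not from the answer key): with a single predictor, the multiple correlation is the square root of R2, which reduces to the absolute value of the zero-order correlation between the predictor and the criterion:

$$R = \sqrt{R^2} = \sqrt{r_{xy}^2} = \lvert r_{xy} \rvert$$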

2. When two predictors are substantially correlated ______.

  a. The assumption of linearity is likely violated.
  b. Multicollinearity is likely an issue.
  c. Heteroscedasticity is likely an issue.
  d. All of the above are likely an issue.
  e. None of the above are likely an issue.

Answer: B

3. A scatterplot allows the researcher to ______.

  a. Look for bivariate outliers.
  b. Assess if the relation is linear.
  c. Assess the assumption of homoscedasticity.
  d. All of the above are true.
  e. None of the above are allowed for by a scatterplot.

Answer: D

4. An R2adj __________.

  a. Can never be greater than R2.
  b. Will always be greater than R2.
  c. Is always equal to r2.
  d. Is always equal to r2adj.
  e. Is the square root of R.

Answer: A
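A brief worked note (the usual adjustment formula, with n cases and k predictors), showing why the adjusted value can never exceed R2:

$$R^2_{adj} = 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1} \le R^2$$

The fraction is at least 1 whenever n > k + 1, so the quantity subtracted from 1 can only grow.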

5. The Mahalanobis distance is intended to identify ______.

  a. Violations of linearity.
  b. Violations of normality.
  c. Violations of homoscedasticity.
  d. Univariate outliers.
  e. Multivariate outliers.

Answer: E

6. The general effect of a failure to include an important variable in the analysis is ______.

  a. Violations of linearity.
  b. Violations of normality.
  c. Biased results.
  d. Multicollinearity.
  e. An overestimate of R2.

Answer: C

7. A Tolerance of .2 equates to ____________.

  a. An R2 of .04.
  b. A VIF of 5.0.
  c. An R2 of .40.
  d. A VIF of .04.
  e. None of the above.

Answer: B
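A brief worked check (standard definitions): Tolerance for predictor j is 1 minus the R2 obtained by regressing that predictor on the remaining predictors, and VIF is its reciprocal:

$$\text{Tolerance}_j = 1 - R_j^2 = .20 \quad\Longrightarrow\quad \text{VIF}_j = \frac{1}{\text{Tolerance}_j} = \frac{1}{.20} = 5.0$$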

8. A VIF of 10 means ______.

  a. The standard errors for evaluating the coefficients are inflated by a factor of 3.16.
  b. The standard errors for evaluating the coefficients are inflated by a factor of 10.
  c. The standard errors for evaluating the coefficients are inflated by a factor of 100.
  d. The t-values for evaluating the coefficients are inflated by a factor of 3.16.
  e. The t-values for evaluating the coefficients are inflated by a factor of 10.

Answer: A
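A brief worked check (standard result): the VIF inflates the variance of a coefficient, so its standard error is inflated by the square root of the VIF:

$$\sqrt{\text{VIF}} = \sqrt{10} \approx 3.16$$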

9. When assessing the interaction between a dichotomous variable and a measurement variable, the B (slope) for the multiplicative term represents _______.

  a. The difference between the slopes of the two dichotomous groups.
  b. The slope for one of the dichotomous groups.
  c. The proportion of the variance accounted for by the interaction.
  d. The square root of the proportion of the variance accounted for by the interaction.
  e. None of the above.

Answer: A
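A brief worked note (writing the model in its usual form, with the dichotomous variable D coded 0/1 and the measurement variable X):

$$\hat{Y} = b_0 + b_1 D + b_2 X + b_3 (D \times X)$$

For D = 0 the slope on X is b2; for D = 1 it is b2 + b3; so b3 is the difference between the two groups' slopes.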

10. Which of the following can produce a false interaction effect?

  a. When ordinal data are treated as measurement data.
  b. If the reliability of a variable differs for different subjects in the study.
  c. If the validity of a variable differs for different subjects in the study.
  d. All three of the above can produce a false interaction.
  e. None of the above can produce a false interaction.

Answer: D

Short answer questions.

1. If we had one criterion variable and two predictor variables, what would be the differences between conducting a single multiple regression analysis and conducting two separate simple regression analyses and summing the results?
Main Points:

  1. If the predictors were completely independent, the R2 would simply be the sum of the two r2.
  2. If the predictors were completely independent, the slopes of the two predictors would not change.
  3. If the predictors were not completely independent, the sum of the two r2 values would overestimate R2, because variance in the criterion shared by the two predictors would be counted twice.
  4. If the predictors were not completely independent, the two slopes would likely change.
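A small simulation sketch (not part of the answer key; the data and variable names are invented for illustration) showing both situations described in the points above: with independent predictors the two r2 values sum to R2, while with correlated predictors the summed r2 values overestimate R2.

```python
# Illustrative sketch: compare one multiple regression against the sum of two
# simple regressions, for independent vs. correlated predictors.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def r_squared(y, X):
    """R^2 from an OLS fit of y on X (an intercept is added here)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Case 1: independent predictors -> R^2 is (almost exactly) r1^2 + r2^2
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)
print(r_squared(y, np.column_stack([x1, x2])),
      r_squared(y, x1) + r_squared(y, x2))

# Case 2: correlated predictors -> the summed r^2 values overestimate R^2
x2c = 0.8 * x1 + 0.2 * rng.normal(size=n)
yc = 0.5 * x1 + 0.3 * x2c + rng.normal(size=n)
print(r_squared(yc, np.column_stack([x1, x2c])),
      r_squared(yc, x1) + r_squared(yc, x2c))
```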

2. In terms of R2, when would there be no difference between running the two separate regressions and a single multiple regression as described in question 1? Why?
Main points:

  1. When the predictors are independent: all correlations among the predictors are 0.0.
  2. Because there is no overlapping of the variance in the criterion explained by the predictors, their effects can be added. Nothing will be summed more than once.
  3. It is similar to summing the sums of squares for the various treatment effects in a factorial ANOVA.

3. What is multicollinearity and what are the circumstances that produce it?
Main Points:

  1. Multicollinearity exists when a predictor is a linear, or nearly linear, function of one or more of the other predictors.
  2. When a predictor is a perfect linear function of other predictors the analysis becomes impossible.
  3. There are various ways in which multicollinearity can be produced. (1) A predictor variable is highly correlated with one other predictor in the model. (2) A predictor variable is moderately correlated with a few other predictors. (3) A predictor variable is somewhat correlated with a number of other predictors. (4) Some combination of the above three is also possible. 
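A minimal sketch (illustrative only; the simulated variables and column names are made up) of how tolerance and VIF flag the problem for each predictor, computed directly from the definitions above.

```python
# Illustrative sketch: tolerance and VIF for each predictor, obtained by
# regressing it on the remaining predictors.
import numpy as np
import pandas as pd

def tolerance_and_vif(X: pd.DataFrame) -> pd.DataFrame:
    out = {}
    for col in X.columns:
        others = X.drop(columns=col)
        Z = np.column_stack([np.ones(len(X)), others.to_numpy()])
        beta, *_ = np.linalg.lstsq(Z, X[col].to_numpy(), rcond=None)
        resid = X[col].to_numpy() - Z @ beta
        r2 = 1 - resid.var() / X[col].to_numpy().var()
        out[col] = {"Tolerance": 1 - r2, "VIF": 1 / (1 - r2)}
    return pd.DataFrame(out).T

rng = np.random.default_rng(0)
a = rng.normal(size=200)
df = pd.DataFrame({"a": a,
                   "b": a + 0.1 * rng.normal(size=200),   # nearly a copy of a
                   "c": rng.normal(size=200)})            # unrelated predictor
print(tolerance_and_vif(df))   # a and b: low tolerance, high VIF; c: near 1.0
```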

4. What are four possible strategies for addressing multicollinearity?
Main Points:

  1. If there is only one predictor variable with a very low tolerance, then remove that variable from the model.
  2. If the source of the problem is a single high correlation with one other predictor, then one of the two variables may be excluded from the model.
  3. Alternatively, if the source of the problem is a high correlation with one other predictor, the two variables may be combined.
  4. Multicollinearity may also be addressed through a stepwise regression.

5. What are the three questions regarding possible interactions and how are they addressed?
Main Points:

  1. Is there an interaction? If there is an interaction, what is its strength? If there is an interaction, what is its nature?
  2. Its presence is tested by including the interaction (multiplicative) term in the regression model and examining the significance of that term in the coefficients table.
  3. The strength can be estimated by the difference in the R2 between the model that includes the interaction term and the model without the interaction term.
  4. Examining the nature of the interaction usually begins with a test for a bilinear interaction. A bilinear interaction exists when the B of one measurement predictor changes as a function of the scores on a second measurement predictor. Other types of interaction are also possible. 
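A minimal sketch (illustrative only; the data are simulated and the variable names are made up) of the three questions above, answered with two nested OLS models.

```python
# Illustrative sketch: test an interaction, estimate its strength via the
# change in R^2, and describe its (bilinear) nature via simple slopes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1 + 0.4 * x1 + 0.3 * x2 + 0.5 * x1 * x2 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

main = smf.ols("y ~ x1 + x2", data=df).fit()           # main effects only
full = smf.ols("y ~ x1 + x2 + x1:x2", data=df).fit()   # adds the product term

# 1. Is there an interaction?  (t test on the product term's coefficient)
print(full.tvalues["x1:x2"], full.pvalues["x1:x2"])
# 2. How strong is it?  (difference in R^2 between the two models)
print(full.rsquared - main.rsquared)
# 3. What is its nature?  (slope of x1 at selected values of x2)
b = full.params
for v in (-1.0, 0.0, 1.0):
    print(f"slope of x1 when x2 = {v:+.1f}:", b["x1"] + b["x1:x2"] * v)
```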

6. What is the difference between Missing at Random (MAR) and Missing Completely at Random (MCAR)?
Main Points:

  1. Data are MCAR when the missing values are unrelated to the other observed values of that variable and to all other observed variables in the study. For example, the missing values of a variable show no tendency to be related to high or low scores in another variable.
  2. A weaker assumption is that the data are missing at random (MAR). When data for a particular variable are MAR, the missing data are deemed random with respect to the variable’s observed values, after controlling for the other variables used in the analysis. For example, scores on variable A could be assumed MAR if those subjects with missing variable A scores do NOT have, on average, higher or lower variable A scores than the subjects whose data are not missing. It is impossible to actually test the MAR assumption because we do not know the missing values; thus we cannot test them against the observed values.

7. What are four possible strategies for addressing missing data?
Main Points:

  1. List-wise deletion of subjects with missing data is a common strategy. Many researchers fail to recognize that list-wise deletion often biases the results when the assumption of MCAR is violated.
  2. A second strategy is pair-wise deletion. This approach is based on the fact that linear regression analysis can be conducted without the raw data. All that is required are the means, standard deviations, and the correlation matrix.
  3. The third strategy, imputation, is actually a collection of techniques. In statistics, imputation means replacing the missing data with a reasonable estimate. Historically one of the first forms of imputation was to replace any missing value with the observed mean of that variable.
  4. Another form of imputation involves conducting a multiple regression(s) to replace the missing data prior to running your intended multiple regression. 
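A minimal sketch (illustrative only; the toy values and column names are made up) of the four strategies above applied to a small data frame.

```python
# Illustrative sketch: list-wise deletion, pair-wise correlations, mean
# imputation, and a simple regression-based imputation on a toy data frame.
import numpy as np
import pandas as pd

df = pd.DataFrame({"y":  [3.0, 4.0, 2.0, 5.0, 4.0, 1.0],
                   "x1": [1.0, 2.0, np.nan, 4.0, 3.0, np.nan],
                   "x2": [2.0, 1.0, 3.0, np.nan, 2.0, 1.0]})

# 1. List-wise deletion: drop every case with any missing value
listwise = df.dropna()

# 2. Pair-wise deletion: each correlation uses all cases available for that pair
pairwise_corr = df.corr()   # pandas excludes missing values pair by pair

# 3. Mean imputation: replace each missing value with its column mean
mean_imputed = df.fillna(df.mean())

# 4. Regression imputation: predict the missing x1 values from y
complete = df.dropna(subset=["x1"])
slope, intercept = np.polyfit(complete["y"], complete["x1"], deg=1)
reg_imputed = df.copy()
mask = reg_imputed["x1"].isna()
reg_imputed.loc[mask, "x1"] = intercept + slope * reg_imputed.loc[mask, "y"]
```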

Data set questions.

1. Create a data set in SPSS (n =10) with a criterion and two predictors where the presence of a multivariate outlier substantially affects the results. Demonstrate the effect by removing the outlier and rerunning the regression.
Main Points:

  1. The multivariate outlier needs to go against the trend of the relationship between the two variables.
  2. The Mahalanobis value should be much larger than the chi-square critical value.
  3. This demonstration is easier to create when the sample size is relatively small. (The outlier will have a larger influence on the outcome.)
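A minimal sketch (illustrative only; the ten cases are invented, not a prescribed answer) of computing squared Mahalanobis distances on the two predictors, showing how a case that bucks the trend stands out from the rest.

```python
# Illustrative sketch: squared Mahalanobis distances on the two predictors.
# The last case runs against the positive trend shared by the other nine and
# has by far the largest distance.
import numpy as np

X = np.array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6],
              [6, 7], [7, 8], [8, 9], [9, 10], [9, 1]], dtype=float)

center = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.array([(row - center) @ cov_inv @ (row - center) for row in X])
print(np.round(d2, 2))   # analogous to the Mahalanobis values SPSS saves for
                         # comparison against a chi-square critical value
```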

2. (A fictitious study) A researcher examined the existing literature on happiness and proposed the following model of happiness. He deemed work to be important. The more we feel productive at work, the happier we will be. He deemed pleasure to be important. Pleasure is defined as fun, and the more fun we have the happier we will be. The literature also indicated to him that there may be an interaction between work and pleasure. Finally, he deemed openness to everyday experience important. Being able to enjoy the moment, any moment, simply for itself, appeared to him as crucial to real happiness. He called this the Zen factor, for lack of a better term. The researcher tested a large number of subjects using psychometric tests of these variables. The data are listed in the file happinessstudy.sav on the textbook’s web page. The predictor variables are age, work, pleasure, and zenatt. Happiness (Regr factor score) is considered the measure of overall personal happiness and is the criterion variable. What is the best model of happiness that you can derive from his data? Be sure to address the issues of missing data, tests for all types of outliers, linearity, multicollinearity, and a test for the normality of the residuals. Present a final model (with or without the interaction term) along with the R2, the adj R2, and the individual regression coefficients (slopes). Also include any conclusions you wish to make. What limitations do you see with respect to your analysis and findings?
Main Points:

  a. Descriptive statistics are run. Z-scores are saved for each variable and no univariate outliers are found (no z-scores greater than 4.0 in the new variables).
  b. Scatterplots are run for the four predictors and the criterion. The relations appear linear and homoscedasticity appears to be a reasonable assumption; for example, Happiness regressed on Zenatt. Age is the exception: because it is nearly a constant, the scatterplot indicates that the variable has no linear relation with happiness and cannot be used in the regression.
  c. Happiness is regressed on work, pleasure, and zenatt. Mahalanobis values are saved to test for multivariate outliers. The ANOVA table indicates that the overall model is significant. The coefficients table reveals that all three predictors make a significant contribution to happiness when the others are held constant. Tolerance values in the coefficients table indicate that there is no problem with multicollinearity.
  d. An interaction between work and pleasure is tested by creating a multiplicative term and including it in the model. The interaction term is nonsignificant (t = .091, p = .928). Its presence had little effect on the other individual predictors; it merely reduced the overall explanatory power of the model.
  e. The plot of the standardized residuals against the standardized predicted values and the histogram of the residuals indicate only minor issues with the normality of the residuals of the final model.
  f. The best model, given the current data, is that work, pleasure, and zenatt all make an independent, significant contribution to overall happiness.
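A minimal end-to-end sketch (not the answer key's SPSS steps) of how the same checks could be run in Python on happinessstudy.sav. It assumes the criterion column is named happiness (adjust to the actual name of the saved factor score) and that the pyreadstat package is available for reading the .sav file; the predictor names work, pleasure, and zenatt are taken from the question.

```python
# Illustrative sketch: missing data, outliers, multicollinearity, the
# work x pleasure interaction, and residual normality for the happiness data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_spss("happinessstudy.sav")               # requires pyreadstat
cols = ["work", "pleasure", "zenatt", "happiness"]    # age dropped (near-constant)

print(df[cols].isna().sum())                          # extent of missing data
df = df.dropna(subset=cols)                           # list-wise deletion

# Univariate outliers: standardized scores beyond |4|
z = (df[cols] - df[cols].mean()) / df[cols].std()
print((z.abs() > 4).sum())

# Multivariate outliers: squared Mahalanobis distance vs. chi-square cutoff
X = df[["work", "pleasure", "zenatt"]].to_numpy()
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
D = X - X.mean(axis=0)
d2 = np.einsum("ij,jk,ik->i", D, cov_inv, D)
print((d2 > stats.chi2.ppf(0.999, df=X.shape[1])).sum())

# Main-effects model, then the model with the work x pleasure product term
main = smf.ols("happiness ~ work + pleasure + zenatt", data=df).fit()
full = smf.ols("happiness ~ work + pleasure + zenatt + work:pleasure", data=df).fit()
print(main.rsquared, main.rsquared_adj, main.params)  # R2, adj R2, slopes
print(full.pvalues["work:pleasure"], full.rsquared - main.rsquared)

# Normality of the residuals of the main-effects model
print(stats.shapiro(main.resid))
```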