Chapter Summary

This chapter argues that there are many different kinds of relationship between two variables, including difference, category clustering, covariation, subset relationships and triangular patterns. Variable-based statistical methods can address only the first three of these. Researchers will often begin bivariate analysis by displaying the data. In a preliminary diagnostic context the researcher may run a number, perhaps a large number, of key crosstabulations to get a feel for the nature of relationships between variables, two at a time, or may plot scattergrams to check whether relationships between metric variables are roughly linear. In a presentational context, the researcher may use tables and graphs selectively to illustrate key findings in a report.
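The chapter's displays are produced in SPSS; as a minimal sketch of the same two diagnostic displays, a crosstabulation with column percentages and a scattergram, the following Python fragment uses pandas and matplotlib on a small made-up survey table (all variable names and values are illustrative, not from the chapter).

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical survey data; the column names are purely illustrative.
df = pd.DataFrame({
    "gender":    ["male", "female", "female", "male", "female", "male"],
    "awareness": ["yes", "no", "yes", "no", "yes", "yes"],
    "age":       [23, 35, 41, 29, 52, 38],
    "spend":     [120, 180, 200, 150, 260, 190],
})

# Diagnostic crosstabulation of two categorical variables, as column percentages.
print(pd.crosstab(df["awareness"], df["gender"], normalize="columns") * 100)

# Scattergram to check whether two metric variables look roughly linear.
df.plot.scatter(x="age", y="spend")
plt.show()
```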


To obtain summaries of the relationship between two variables it is possible to review the percentage difference between categories in respect of another variable, but this really only works if both variables are binary. It is more effective to use one of the many coefficients of association that are available. Some of these measure category clustering and some measure covariation. They are all designed to vary from 0 for no association or correlation to +1 for a perfect association or correlation. Where the relationship is negative (which is possible only for ordinal or metric variables) the coefficient can take values down to −1. For two metric variables the standard coefficient is Pearson’s r, but for categorical variables there is a large number of coefficients, nine of which are provided in the SPSS Crosstabs procedure. Measures appropriate for nominal variables may be based either on the idea of departure from independence or on the notion of proportional reduction in error. Ordinal measures are based on the principle of pair-by-pair comparisons. Each statistic has its own strengths and limitations, so selecting the appropriate one can be quite complex. For the most part, Cramer’s V is probably the best for nominal variables and gamma for ordinal, but if researchers want a measure that takes account of which variable is playing the independent role, then lambda may be the preferred statistic.
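As an indication of how one such coefficient is built, the following sketch (my own illustration, not the SPSS Crosstabs output) computes Cramer’s V from a hypothetical 2 × 2 crosstabulation: the square root of chi-square divided by the sample size times one less than the smaller table dimension.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 crosstabulation of counts.
table = np.array([[30, 10],
                  [20, 40]])

# correction=False so the chi-square value matches the usual definition of V.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
k = min(table.shape) - 1          # one less than the smaller table dimension

cramers_v = np.sqrt(chi2 / (n * k))
print(f"Cramer's V = {cramers_v:.3f}")  # 0 = no association, 1 = perfect association
```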


When testing bivariate hypotheses for statistical significance, the null hypothesis is always that there is no association or correlation. Where both variables are categorical, chi-square is normally used as a test of significance. Where both are metric, the statistical significance of Pearson’s r will be generated by SPSS. Where scales are mixed, analysis of variance may be used, provided the independent variable is categorical and the dependent variable is metric. This is based on comparing the variance of the metric variable within the categories with the variance between the categories. Although analysis of variance is commonly used by researchers, the conditions for its proper use are seldom met, particularly when the data were derived from surveys rather than experimental designs.
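A minimal sketch, with made-up data and scipy rather than SPSS, of the three situations just described: chi-square for two categorical variables, the significance attached to Pearson’s r for two metric variables, and one-way analysis of variance for a categorical independent and a metric dependent variable.

```python
import numpy as np
from scipy.stats import chi2_contingency, pearsonr, f_oneway

# Two categorical variables: chi-square on the observed crosstabulation.
observed = np.array([[25, 15],
                     [10, 30]])
chi2, p_chi, dof, expected = chi2_contingency(observed)

# Two metric variables: Pearson's r and its p-value.
x = [2.0, 4.0, 5.0, 7.0, 9.0, 11.0]
y = [1.5, 3.8, 4.9, 7.2, 8.8, 10.5]
r, p_r = pearsonr(x, y)

# Categorical independent, metric dependent: one-way analysis of variance,
# which compares variance between the groups with variance within them.
group_a = [12.0, 14.0, 11.0, 13.0]
group_b = [16.0, 18.0, 17.0, 15.0]
group_c = [20.0, 22.0, 19.0, 21.0]
f_stat, p_f = f_oneway(group_a, group_b, group_c)

print(f"chi-square p = {p_chi:.3f}, r = {r:.2f} (p = {p_r:.4f}), "
      f"F = {f_stat:.1f} (p = {p_f:.4f})")
```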


Testing the null hypothesis establishes only the probability that the survey results could have come from a population of cases in which the null hypothesis is true. It says nothing about the probability that the null hypothesis actually is true. A statistically ‘significant’ result does not mean that there is a high level of association or correlation, or even that the difference or the degree of covariation is worth looking at. For large samples of over 300 or so, even very small levels of covariation become statistically significant.
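The dependence on sample size can be seen in a rough calculation (my own illustration, not from the chapter): the smallest Pearson’s r that reaches two-tailed significance at the 5 per cent level, obtained by inverting the usual t test for r, shrinks steadily as the sample grows.

```python
from scipy.stats import t

def smallest_significant_r(n, alpha=0.05):
    """Smallest |r| with two-tailed p < alpha, from t = r*sqrt(n-2)/sqrt(1-r**2)."""
    t_crit = t.ppf(1 - alpha / 2, df=n - 2)
    return t_crit / (t_crit ** 2 + n - 2) ** 0.5

for n in (30, 100, 300, 1000):
    print(n, round(smallest_significant_r(n), 3))
# At n = 300 a correlation of only about 0.11 is already 'significant' at the 5% level.
```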


Statistical inference plays a very different role in a piece of research from that of data summaries. The latter describe features or patterns in the data researchers have before them; statistical inference calculates the probability that these results could have arisen from random sampling fluctuation. The two procedures are linked in that bigger differences or stronger associations or correlations are less likely to have been a random sampling fluctuation; but when samples are large, even quite small differences or weak associations can be statistically significant. In short, statistically significant results are not necessarily important findings. The usefulness of statistical inference for non-experimental datasets has been called into question by many authors and researchers, given that the conditions for its correct use are seldom met in real-life research. It might be better instead to pay more attention to investigating the nature of the relationships between variables, first two at a time and then elaborating this into three or more variables.