## Student Resources

# Answers to Exercises and Questions for Discussion

**Bivariate data summary using SPSS is so quick and simple that the temptation must be to crosstabulate everything in sight. Is this a good idea?**

On the whole, no; it amounts to data dredging. This is why it is a good idea to formulate hypotheses before the analysis begins. This is not to say that researchers should not try out vaguely formulated ‘hunches’ about which variables might be associated, but just producing hundreds of crosstabulations to see if anything emerges can be a waste of time and will probably focus on the 5 per cent of ‘accidental’ results that sampling theory says will emerge anyway! If students – or any researchers – do this it probably means that they have not thought sufficiently about the objectives of the research, and may, indeed, have chosen the wrong style of research. If they really are that vague about the variables then perhaps qualitative research should have been undertaken instead.

**What result from Cramer’s V do you think would count as a ‘high’ degree of association?**

What counts as ‘high’ in one piece of research may not be in another. What most researchers are looking for is what variables might be *relatively* most highly associated with the factors they are trying to explain or study. Having said that, a Cramer’s V of 0.1 or 0.2 would in any case be quite low, and one of 0.7 or more would be high, although I would be suspicious of such results. The chances are that the two variables are in fact measuring the same thing, so there is a degree of circularity.

**Access the alcohol marketing dataset (available at https://study.sagepub.com/kent).**

(i) Crosstabulate brand importance for alcohol by likeads and request the coefficients gamma and Cramer’s V. Is there any discernible pattern? Try collapsing the table to a three by three. Have the coefficients changed?

(ii) Regroup total importance of brands (totbrand) into three categories and crosstabulate against gender, requesting the coefficient Cramer’s V. Interpret the results.

(iii) Recreate Figure 5.15, requesting both gamma and Cramer’s V. Can you explain why the two statistics differ?

(iv) Redo the analysis for (ii) above, but using total involvement.

(i) The result is a 5 × 5 crosstabulation with 25 cells and some very low frequencies in many cells. It is difficult to interpret. Note that phi and Cramer’s V give different coefficients. **Cramer’s V** is a measure of departure from independence adjusted for size of table, so this is the appropriate one to review. The value of 0.127 is very low, but would still be statistically significant if the 774 cases were a random sample – which, in this research, they are not. Since both variables are ordered categories, **gamma** is also an appropriate statistic; it takes account of the frequencies in pairs of categories that are on the diagonal. Its magnitude, at 0.231, is greater than that of Cramer’s V; notice that the coefficient is negative because brand importance is arranged from low to high rather than the reverse. There is a small tendency, as one would expect, for those who emphasize the importance of branding to like alcohol adverts. Using Recode to create three categories for both variables (treating DK as missing, although this makes very little difference since there are very few) results in slightly higher coefficients for both Cramer’s V and gamma. The table is, however, easier to interpret and it is possible to see the relatively high number (245) who see the branding of alcohol as unimportant and who also dislike alcohol ads.
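SPSS produces both coefficients from the Crosstabs > Statistics dialog, but it is worth seeing what they actually calculate. The sketch below uses a purely hypothetical collapsed 3 × 3 table (the 245 figure aside, the frequencies are invented for illustration, not taken from the dataset):

```python
import numpy as np

def cramers_v(table):
    """Cramer's V: chi-square departure from independence, adjusted for
    sample size and table size (0 = independence, 1 = perfect association)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / n
    chi2 = ((t - expected) ** 2 / expected).sum()
    return np.sqrt(chi2 / (n * (min(t.shape) - 1)))

def gamma(table):
    """Goodman-Kruskal gamma: (concordant - discordant) over
    (concordant + discordant) pairs, so it depends on category ordering."""
    t = np.asarray(table, dtype=float)
    conc = disc = 0.0
    rows, cols = t.shape
    for i in range(rows):
        for j in range(cols):
            conc += t[i, j] * t[i + 1:, j + 1:].sum()  # pairs ordered the same way
            disc += t[i, j] * t[i + 1:, :j].sum()      # pairs ordered oppositely
    return (conc - disc) / (conc + disc)

# Hypothetical 3x3 table (rows: brand importance low to high,
# columns: liking of alcohol ads, dislike to like)
table = [[245, 80, 40],
         [60, 90, 70],
         [30, 60, 95]]
print(round(cramers_v(table), 3))
print(round(gamma(table), 3))
```

Because gamma counts only concordant and discordant pairs, reversing the order of one variable’s categories flips its sign – which is exactly why the coefficient above comes out negative in SPSS when brand importance runs from low to high.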

(ii) Have a look at the frequency count for Totbrand and check the cumulative frequencies. To create three groups of more or less equal size, just under a third have total scores of 23 or less, another third between 24 and 29, and a third 30 or more. Use Recode, putting in these ranges and labelling the categories of the new variable Low, Medium and High. If you now crosstabulate against sex of respondent, you will find that there is very little difference between males and females.
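The same tertile split can be sketched outside SPSS. The scores below are randomly generated stand-ins (the real values are in the alcohol marketing dataset); the cut-offs are read from the cumulative distribution, just as you would read them from the SPSS frequency table:

```python
import numpy as np

rng = np.random.default_rng(0)
totbrand = rng.integers(10, 41, size=300)  # hypothetical total scores

# Cut-offs at the one-third and two-thirds points of the cumulative distribution
low_cut, high_cut = np.quantile(totbrand, [1 / 3, 2 / 3])

labels = np.array(["Low", "Medium", "High"])
groups = labels[np.digitize(totbrand, [low_cut, high_cut], right=True)]

counts = {lab: int((groups == lab).sum()) for lab in labels}
print(counts)  # three groups of roughly equal size
```

With integer scores the groups will rarely be exactly equal, because everyone with the same score must fall in the same group – the same compromise you make when choosing Recode ranges in SPSS.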

(iii) Cramer’s V comes out at 0.097 and gamma at 0.187. Both variables are ordered categorical, and gamma looks only at patterns of diagonality; there seems to be a slight tendency for those who have seen alcohol ads on up to four channels to say they do not intend to drink alcohol in the next year, and for those who have seen seven or more channels to say they do intend to do so. Cramer’s V looks at departure from independence for each cell. If you calculate expected frequencies for some of the cells you will find that they differ little from the observed frequencies. Chi-square is quite low (17.3), and dividing by this many cases gives a very low value for Cramer’s V.
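The arithmetic behind that last point is simple enough to check directly. The table dimensions and case counts below are assumptions for illustration, not read from Figure 5.15; the point is that the same chi-square of 17.3 shrinks as the number of cases grows:

```python
import math

def cramers_v_from_chi2(chi2, n, min_dim):
    """V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    return math.sqrt(chi2 / (n * (min_dim - 1)))

# Same chi-square, assumed 3-category minimum dimension, increasing sample sizes
for n in (100, 500, 900):
    print(n, round(cramers_v_from_chi2(17.3, n, 3), 3))
```

A chi-square that would look respectable with 100 cases yields a V near 0.1 once it is spread over many hundreds of cases – which is why a low Cramer’s V can still be ‘statistically significant’ in a large sample.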

(iv) The distribution here is very different, with 396 claiming no involvement and most of the rest having only one involvement, so the variable could be split into zero, one and more than one. Crosstabulating by gender gives a Cramer’s V of 0.251. From inspection of the table, it is the females who tend to claim no involvement.

**Access the Trinians dataset and reread the Folio article (see Exercise 6, Chapter 2 for instructions). The researchers have taken one item from the very last question (a semantic differential on the sixth item, left wing … right wing) and made it the ‘outcome’ variable to study. However, they have just compared percentages on an item-by-item basis. Try turning the items in Q27 (how often certain protest actions are justified) into a summated rating scale and correlate with the left-wing/right-wing scores. Interpret your answer and compare with the comments in the Folio article. Would a test of statistical significance be appropriate?**

If you just added up the allocated codes for each item, you would be adding together very different degrees of ‘radicalism’ – writing to a newspaper is not equivalent to sabotaging factories or assassination. The items would, at the very least, require some form of weighting. However, if you run a frequency count on each item, very few respondents indicated ‘often’ or ‘sometimes’ for the more radical forms of protest. You might be justified in adding together just the non-violent or non-damaging items. I used Compute to do this, remembering that ‘often’ is given the low value, so the more radical have the lowest total scores. Correlating this scale against left wing/right wing on the last semantic differential question gives Pearson’s *r* = 0.35. Unsurprisingly, those who describe themselves as left wing are more likely to see as justified a number of non-violent forms of protest. However, it is *r*^{2} that indicates the proportion of variation accounted for, and this is only 0.12. Certainly a test of significance would not be appropriate – the cases are not a representative sample.
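The gap between *r* and *r*² is easy to demonstrate with synthetic data. The scores below are randomly generated to mimic a weakly related scale and semantic differential – they are not the Trinians data – but they show how a seemingly respectable *r* translates into a modest proportion of explained variation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical summated protest scale and a noisy left/right self-rating
protest = rng.integers(4, 17, size=120).astype(float)
leftright = 0.3 * protest + rng.normal(0, 2.5, size=120)

r = np.corrcoef(protest, leftright)[0, 1]
print(round(r, 2), round(r ** 2, 2))  # r, then the variance actually shared
```

Squaring a correlation of around 0.35 leaves only about 0.12 of the variation accounted for – the remaining 88 per cent of the variation in self-placement is unrelated to the protest scale.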

In the original *Folio* article, left- and right-wing self-definition are distinguished for each form of protest. It is possible to pick out which forms make the biggest distinction between left and right, for example unofficial strikes. It can also be seen that for each form of protest, left wingers have a higher justification rating than right wingers. In many ways this can be seen as more helpful and more informative than just reporting *r*^{2} = 0.12. Statistical summaries can be very helpful and indeed are very parsimonious, but they can also hide more than they reveal.