Answers to Exercises and questions for Discussion

Make a list of the different types of hypotheses and suggest how each might be evaluated.

Hypotheses are carefully worded statements, as yet untested, about one or more properties of a set of cases. They need to be carefully worded because the manner of their evaluation will depend on how they are phrased. A basic distinction is between univariate, bivariate and multivariate hypotheses. These may be evaluated using traditional univariate (Chapter 4), bivariate (Chapter 5) or multivariate (Chapter 6) analyses. Which particular procedures are used depends a great deal on whether the research objectives are descriptive, interpretive or verificational and on whether the variables are binary, nominal, ordered category, ranked or metric. At a descriptive level, univariate hypotheses may be evaluated using a range of summary measures such as proportions or modal categories for binary or nominal variables and measures of central tendency or dispersion for metric variables. Thus a hypothesis like ‘The average primary school class size in England is less than 30’ can be confirmed or denied by looking at official figures of average class sizes. If the data are based on a random sample, then further evaluation might include constructing confidence intervals or setting up the null hypothesis that classes average 30 or more. Multivariate hypotheses can be evaluated using either multivariate techniques or configurational data analysis such as fuzzy set analysis.
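The class-size example can be sketched as a one-sided test against the null hypothesis that the mean is 30 or more, together with a confidence interval. This is only an illustration: the class sizes below are invented, and a normal approximation stands in for the t-distribution to keep the sketch self-contained.

```python
import math
import statistics

# Hypothetical sample of primary school class sizes (invented for illustration).
class_sizes = [27, 31, 28, 25, 30, 26, 29, 24, 28, 27,
               26, 30, 25, 28, 27, 29, 26, 28, 25, 27]

n = len(class_sizes)
mean = statistics.mean(class_sizes)
se = statistics.stdev(class_sizes) / math.sqrt(n)  # standard error of the mean

# H0: mean class size >= 30; H1: mean class size < 30.
z = (mean - 30) / se
# One-sided p-value, P(Z <= z), via the standard normal CDF.
p_value = 0.5 * math.erfc(-z / math.sqrt(2))

# 95% confidence interval for the mean (normal approximation).
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean = {mean:.2f}, z = {z:.2f}, one-sided p = {p_value:.4f}")
print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

A small p-value and a confidence interval lying entirely below 30 would both count as evidence against the null hypothesis; with real data one would use the t-distribution for a sample this small.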

Bivariate and multivariate hypotheses may be directional or non-directional. Non-directional hypotheses can be evaluated directly by using appropriate measures of association. Directional hypotheses bring in issues of sequence and timing. These can sometimes be evaluated from a thorough understanding of the cases in the research, or by using longitudinal data in extensions of traditional or configurational data analysis. Directional hypotheses nearly always imply some form of influence or causality. Causal claims can never be proven (at least, not in non-experimental research designs), but if the notion of causality is simple and linear, then checks for the impact of intervening variables and for spurious relationships can be made by holding the values of other variables constant in a multivariate analysis. If notions of causality are complex, for example involving asymmetrical relationships and the possibility of sufficient but not necessary or necessary but not sufficient relationships, then configurational data analysis can provide evidence that the data are at least consistent or not consistent with such propositions.
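The idea of checking for a spurious relationship by holding another variable constant can be sketched with a first-order partial correlation. The data below are synthetic and the functions are illustrative rather than drawn from any particular package: a third variable z drives both x and y, so the raw correlation between x and y is strong but largely disappears once z is held constant.

```python
import math
import random

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, holding z constant."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

random.seed(1)
# Synthetic data: z causes both x and y, so x and y are
# correlated with each other only via z (a spurious relationship).
z = [random.gauss(0, 1) for _ in range(500)]
x = [v + random.gauss(0, 0.5) for v in z]
y = [v + random.gauss(0, 0.5) for v in z]

print(f"r(x, y)     = {pearson(x, y):.2f}")
print(f"r(x, y | z) = {partial_corr(x, y, z):.2f}")
```

The contrast between the two coefficients is the evidence a multivariate analysis would use: the association between x and y is not sustained once the candidate third variable is controlled for.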

In seeking to explain their findings, do social scientists have any alternative but to attempt to establish some form of causality?

It was argued in Chapter 10 that to explain is to attempt to provide understanding to an audience so that it understands something it did not understand before; explanation is the persuasion of others that intelligibility is being offered. How things are explained, and the ‘success’ of that explanation, will thus depend on the audience. If the audience is other social scientists who, on the whole, will accept only the establishment of causal connections as explanatory, then the researcher probably has little alternative but to follow suit. However, there are very different ideas about what ‘causality’ means and how it is established, and researchers may therefore follow very different pathways.

To what extent are the manner and style of presentation of research findings part of the ‘rhetoric’ of explanation?

The short answer is: very much so. The audience needs to be persuaded that understanding and intelligibility are being offered. Presentations that are clear, structured, detailed, interesting, even amusing, are more likely to succeed in this task. The presenter may be limited to rhetorics that social scientists are likely to appreciate, for example experimental, statistical, quantitative or qualitative case-based rhetorics.