Chapter Summary
This chapter argues that while the generation, evaluation and testing of hypotheses are at the heart of academic research, testing them for statistical significance is only one way in which hypotheses can be evaluated, and one, furthermore, that is appropriate only when a random sample of cases has been taken. The idea of hypothesis evaluation suggested here is a much wider concept that takes on board many different ways in which hypotheses can be measured against a dataset. It does not imply any kind of ‘test’, but rather some assessment of how far a particular hypothesis ‘stacks up’ against the data in a given dataset.
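To make the limited scope of significance testing concrete, here is a minimal sketch in Python, with invented data standing in for two randomly sampled groups of cases; the group names, effect size and seed are assumptions for illustration only.

```python
# Minimal sketch: a significance test is only one way of evaluating a
# hypothesis, and its logic presupposes a random sample of cases.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented data: scores for two groups of randomly sampled cases.
group_a = rng.normal(loc=5.0, scale=1.0, size=100)
group_b = rng.normal(loc=5.4, scale=1.0, size=100)

# Two-sample t-test; the p-value carries its usual meaning only if the
# cases really were drawn at random from the population of interest.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```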
Hypotheses frequently imply or explicitly state relationships in terms of causality, but establishing causality is a complex issue and many different meanings have been applied to the concept. Causes may be seen as deterministic or probabilistic; they may be seen as events or as conditions; they may be historical or they may be predictive. Causality may be simple or complex. Causal complexity arises when relationships are asymmetrical (that is, necessary but not sufficient, or sufficient but not necessary), and when they are contingent, non-linear, conjunctural or multi-pathway. Relationships may, in addition, be part of complex systems that are dynamic (so relationships are always changing) and have open boundaries, so it is not always clear what cases or types of cases are relevant to the study.
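To make the asymmetry of necessity and sufficiency concrete, the hypothetical sketch below computes crisp-set consistency scores in the style of set-theoretic analysis; the cases and codings are invented for illustration.

```python
# Hypothetical sketch: asymmetrical causal relationships in set-theoretic
# terms. Each case is coded 1/0 for a condition X and an outcome Y.
cases = [
    {"X": 1, "Y": 1},
    {"X": 1, "Y": 1},
    {"X": 1, "Y": 1},
    {"X": 0, "Y": 1},  # the outcome occurs without the condition
    {"X": 0, "Y": 0},
]

x_cases = [c for c in cases if c["X"] == 1]
y_cases = [c for c in cases if c["Y"] == 1]

# Sufficiency: the share of X-cases that also show Y (X a subset of Y).
sufficiency = sum(c["Y"] for c in x_cases) / len(x_cases)

# Necessity: the share of Y-cases that also show X (Y a subset of X).
necessity = sum(c["X"] for c in y_cases) / len(y_cases)

print(f"consistency as sufficient: {sufficiency:.2f}")  # 1.00 here
print(f"consistency as necessary:  {necessity:.2f}")    # 0.75: not necessary
```

In this invented example X is sufficient but not necessary for Y: every case with the condition shows the outcome, yet the outcome also occurs without it.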
Researchers quite frequently define causality in terms of a pattern of constant conjunction and a temporal sequence. While these may be easier to warrant in terms of statistical or set-theoretic analyses, they do not help us to see or understand the mechanisms that may be linking cause and effect. These mechanisms can be studied to some extent through elaboration analysis, both for variable-based and case-based analyses. The status of some properties of cases as causes and of other properties as effects cannot be established by statistical or set-theoretic analyses themselves. The status of each property is determined by the researcher and needs to be warranted from evidence outside the dataset, based either on a consensus in other research or in the literature, or on the researcher’s detailed understanding of cases. Analyses of datasets can establish that certain patterns exist or tend to exist. It is for researchers to determine, argue or warrant that these patterns are consistent with whatever notions of ‘causality’ are being understood or implied.
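Elaboration analysis can be illustrated with partial tables: the original bivariate relationship is re-examined within the categories of a third ‘test’ variable to see whether it persists, weakens or varies. The sketch below uses pandas; the variable names and data are invented assumptions.

```python
# Hypothetical sketch of elaboration analysis: split a bivariate
# cross-tabulation by a third (test) variable and compare the partial tables.
import pandas as pd

# Invented dataset: does 'exposure' relate to 'purchase', and does the
# relationship hold within each level of 'age_group'?
df = pd.DataFrame({
    "exposure":  ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
    "purchase":  ["yes", "yes", "no", "yes", "no", "no", "yes", "no"],
    "age_group": ["young", "young", "young", "young",
                  "older", "older", "older", "older"],
})

# Zero-order table: the original bivariate pattern.
print(pd.crosstab(df["exposure"], df["purchase"]))

# Partial tables: the same pattern within each category of the test variable.
for level, part in df.groupby("age_group"):
    print(f"\nage_group = {level}")
    print(pd.crosstab(part["exposure"], part["purchase"]))
```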
To establish causality is not necessarily to explain how or why those relationships hold. To explain is to attempt to provide an audience with an understanding of something they did not understand before. How things are explained, and the ‘success’ of that explanation, will thus depend on the audience. Researchers as scientists use a variety of rhetorics in their attempts at explanation. These may be based on experimental design, statistical control or set-theoretic methods. Alternatively, rhetorics may rely on qualitative data to persuade audiences, deploying thick descriptions of the market or social behaviour of individuals or exposing system dynamics. It can be argued that all forms of scientific explanation involve notions of causality at some point, but there are very different ideas about what ‘causality’ means and how it is established. In the final analysis, an explanation is whatever the audience or the listener will accept as one.
The presentation and communication of the results of data analysis, whether face-to-face in staff seminars or at conferences, or through articles in academic journals or reports to management, clients or research sponsors, are an important part of the research process. They have to be seen in the context of the overall research design and of the implications of those results for the wider community.