This case study examines the development of a general analytical inductive approach to qualitative research. It assesses the research design and analytical processes used to build a framework for understanding how collaborative partnerships between business schools and industry develop. The study adopts an interpretivist stance, which is concerned with understanding how people make sense of their world. The outcomes show how an analytical inductive approach, involving detailed readings and interpretations of raw data, can be used to identify concepts, themes, and models. Through the practical application of the research design, data collection, and analytical approach, this case study demonstrates the credibility of a general analytical inductive research strategy grounded in a qualitative methodology. Its benefits include facilitating effective business relationships between universities and domestic firms.
1: Why are interpretivist approaches typically based on qualitative research designs?
2: Why are the findings of qualitative research intended to generalize to theory rather than the population?
The research referenced in this methodological case study chronicles the researcher's journey in identifying a research design that would use both quantitative and qualitative strategies to explore education reform initiatives across two schools implementing the same whole-school reform model. The case discusses the challenges of selecting the components of the mixed-methods design, triangulating the data, and deciding the priority and timing of the quantitative and qualitative strands.
1: What are the key components of a mixed-methods research design?
2: Why is triangulation important in mixed methods research?
This case uses a study of fan blog commentary to explain how researchers decide precisely what text to gather from the internet: how they identify where that content is located, choose the websites from which to collect data, and transfer the data from the web into text files for analysis. When the resulting data set is too large for the chosen method of analysis, the case describes how to apply random sampling to data gathered from multiple websites to ensure representativeness, and how to use random selection to assign chunks of the sampled data to multiple coders for analysis.
1: Why is data scraping useful in big data studies?
2: Why is it important to find appropriate websites for downloading in big data studies?
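The two-step procedure described in this case, randomly sampling a manageable subset of a large scraped corpus and then randomly assigning the sampled chunks to coders, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual workflow: the placeholder `documents`, the coder names, and the sample size are all hypothetical.

```python
import random

# Hypothetical corpus: text chunks already scraped from multiple
# websites and saved as strings (placeholder data for illustration).
documents = [f"comment_{i}" for i in range(1000)]
coders = ["coder_A", "coder_B", "coder_C"]  # hypothetical coder IDs

random.seed(42)  # fixed seed so the draw is reproducible

# Step 1: draw a random sample small enough for the chosen
# method of analysis, without replacement, to aid representativeness.
sample = random.sample(documents, k=200)

# Step 2: randomly assign each sampled chunk to one coder.
assignments = {coder: [] for coder in coders}
for chunk in sample:
    assignments[random.choice(coders)].append(chunk)
```

Because `random.sample` draws without replacement, no chunk is sampled twice, and every sampled chunk ends up with exactly one coder.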