Common Sense as a Source of Theory

Article 1: Common Sense as a Source of Theory – Contributed by Dr. Sarah Fischer, Marymount University

What is common sense for a student may not be common sense to everyone else (especially adult professors). Think about the question: why do American teenagers in the year 2014 prefer to text rather than to talk on the phone? For a professor, common sense might theorize "Teenagers text because that way their parents can't overhear their conversations with their friends." To a teenager, common sense might say, "Teenagers text because it's faster and because we don't want our friends' phones to ring if they are in class/at church/at sports practice." Both are common sense theories, but students should not underestimate their own common sense, as they sometimes do (I certainly did).

Why Theory By Induction is Problematic

Article 2: Why Theory By Induction is Problematic

The big problem with producing theory by induction is that you can’t use it to extrapolate to unobserved cases: the “prediction” is only viable within the range of cases in which it was developed. We have no epistemological reason to believe that what happened today will happen tomorrow. A child who moves to far northern Alaska during the winter – months in which the sun never rises – may believe that the sun has died or will never appear again. Just because the first few months of the child’s stay in Alaska are dark, however, does not mean that the darkness will persist forever. The child’s theory was a valid inference within the period in which it was developed, but no matter what the child thinks, we have no logically valid reason to believe that it will hold tomorrow, or four weeks from now, or a year from now.

Just like all those investment agency ads note in the fine print, past performance is not a guarantee of future performance. Take, for example, Bertrand Russell’s famous story of the inductivist turkey:

The turkey found that, on his first morning at the turkey farm, he was fed at 9 a.m. Being a good inductivist turkey he did not jump to conclusions. He waited until he collected a large number of observations that he was fed at 9 a.m. and made these observations under a wide range of circumstances, on Wednesdays, on Thursdays, on cold days, on warm days. Each day he added another observation statement to his list. Finally he was satisfied that he had collected a number of observation statements to inductively infer that “I am always fed at 9 a.m.” However on the morning of Christmas eve he was not fed but instead had his throat cut.[1]

Or, as famed physicist Richard Feynman noted in his commentary on the Challenger disaster report, “When playing Russian roulette the fact that the first shot got off safely is of little comfort for the next.”[2]

More precisely, we can fit an infinite number of lines through a set of points and still obtain the same squared correlation coefficient (R², or amount of variation explained). This means that the problem of induction is not, “is my theory correct,” but instead “which theory is correct?” And unfortunately, without being able to make predictions outside of the sample, we have no way to test the theories to determine which is correct.
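The underdetermination point above can be sketched with a toy example (the numbers and the two "theories" here are made up purely for illustration): two theories that agree exactly on every observed case are indistinguishable by any in-sample measure of fit, yet they can still disagree sharply about unobserved cases.

```python
import math

# Two hypothetical "theories" about the relationship between x and y.
# They agree exactly at every observed case, because sin(pi * k) = 0
# for every whole number k, so in-sample fit cannot tell them apart.
def theory_a(x):
    return x

def theory_b(x):
    return x + math.sin(math.pi * x)

observed_cases = [0, 1, 2, 3, 4]
for x in observed_cases:
    # Identical predictions within the sample (up to floating-point error).
    assert abs(theory_a(x) - theory_b(x)) < 1e-9

# Outside the observed cases, the theories diverge: at x = 2.5,
# theory A predicts 2.5 while theory B predicts 2.5 + sin(2.5*pi) = 3.5.
print(theory_a(2.5), theory_b(2.5))  # 2.5 vs. 3.5
```

Only a prediction made outside the original sample – here, at x = 2.5 – could distinguish the two theories, and that is exactly the kind of prediction the problem of induction denies us any logical warrant for.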

Probabilistic Theorizing, Falsifiers and Counter-Examples

Article 3: Probabilistic Theorizing, Falsifiers and Counter-Examples

Despite the attention given to them in this and other textbooks, deterministic theories and hypotheses are actually pretty rare. Most of the theorizing that you do in your everyday life is probabilistic, as is most social science theory more generally. Take, for instance, the example of everyday theorizing from Chapter 2, about the fastest way to get to your noon class. Both of the theories offered in the book – cutting through the student union, or going around it – are phrased deterministically; that is, as written, they should hold under all circumstances, on every occasion.

But let’s think about that a little more carefully. Do you really expect that theory to be supported by every single instance of you going to class? What about the day when there was a tour group in the student union and you had to wade through them to get to the exit? Your route through the building was slower than going around on that day. What about the time when the sidewalk next to the union was closed for maintenance and you had to detour around a neighboring building? The indoor route was probably quicker that day than the outdoor one you chose.[3]

These situations, where your theory doesn’t hold in a single instance or a small number of instances, are counter-examples. They are unusual cases where your prediction fails to be supported. Now, if these were truly deterministic hypotheses, we would say that observing even one instance of the theory failing to hold would falsify the theory and we should discard it. As I’m sure your intuition is telling you right now, though, that would make absolutely no sense. We all know that things happen that make even our best predictions about stuff like this go wrong. Observing even one instance of the chosen route taking longer than the unchosen route should not cause us to throw out the entire theory. Why should we condemn ourselves to a semester of trudging through the icky weather outside just because the union’s main hall was blocked one day by people trying to load a couch into the elevator?

These deviant cases, as they’re known, can be great blows to a deterministic theory, but they are often great aids to further theorizing. This is especially true for establishing the scope conditions of a particular theory or for generating insights about missing variables. Deviant cases are least damning to a theory when an extenuating or unique circumstance clearly exists that explains the unanticipated outcome. Multiple deviant cases featuring the same extenuating circumstance should lead you to think about what variable is missing from your causal story that produces mis-predictions for that cluster of cases.

So, let’s summarize this example:

Falsifier: Observing an instance of the chosen route taking longer than the unselected route falsifies the (deterministic) theory.

Counter-Example: The day the sidewalk was closed, the outside route took longer than the inside route.

Counterfactual (deterministic): If the sidewalk were closed on a given day, then the route through the union would be faster.

Counterfactual (probabilistic): If the sidewalk is frequently closed, then the interior route is more likely to be the fastest route.

Here’s an example from political science. Casual observation suggests that all rich countries are democracies. On the surface, this is quite plausible. Canada and the United States are rich, and they’re democracies. So are Australia, France, Germany, Italy, Japan, and South Korea. You conclude that as countries get rich, they adopt democratic systems. You can’t think of a counter-example, so you’ve ‘proved’ your theory, right? Nope. You go get the data and sort countries by income and democratic-ness, and all of a sudden you find Qatar and Saudi Arabia at the top of the list of national income per capita, but with some of the lowest scores on democratic-ness. Does this totally falsify your theory? Not if you consider it as probabilistic rather than deterministic. These are counter-examples, which would falsify a deterministic theory. They do not totally falsify a probabilistic theory, but they do help to discredit it, especially if you also find more oil-rich but democracy-poor countries. Rather than prompting you to discard the theory entirely, this cluster of deviant cases should lead you to hypothesize instead about variables that your current theory omits, such as the role of natural resource income in the development of democracy.
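The different evidentiary standards of the two readings can be sketched in a few lines of code. The country labels below are purely illustrative stand-ins, not real measurements of wealth or democracy:

```python
# Toy data (illustrative only): rich country -> is it a democracy?
rich_countries = {
    "Canada": True,
    "United States": True,
    "Japan": True,
    "Germany": True,
    "Qatar": False,
    "Saudi Arabia": False,
}

# Deterministic reading: ALL rich countries are democracies.
# A single counter-example is enough to falsify this claim outright.
deterministic_holds = all(rich_countries.values())
print(deterministic_holds)  # False: falsified by the non-democratic cases

# Probabilistic reading: rich countries are LIKELY to be democracies.
# Counter-examples lower the estimated probability without destroying it.
share_democratic = sum(rich_countries.values()) / len(rich_countries)
print(round(share_democratic, 2))  # 0.67: weakened, not falsified
```

Two counter-examples flip the deterministic claim to false, while the probabilistic claim merely drops from certainty toward a two-thirds share; finding more oil-rich non-democracies would push that share lower still, which is what "discrediting" a probabilistic theory looks like in practice.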

Take a moment for yourself to think about a falsifier, a counter-example, and a (probabilistic) counterfactual for the theory that African-American women support affirmative action because they benefit from it doubly (on both race and gender grounds).