From Benjamin Keep:
There's a Twitter screen-grab that I couldn't find again that goes something like this:
A: I really believe that facts change minds.
B: No, facts don't change minds. Here are citations to three articles that discuss why: Cite 1, Cite 2, Cite 3.
A: Well, I still believe that facts can change minds.
You could create a bot to re-post this on Reddit every 3 months. The joke will work every time. The comments underneath will invariably affirm the point of the joke: facts don't change people's minds.
The central idea that "facts don't change minds" comes from research on motivated reasoning. But plenty of other research pursues similar ideas.
Research on conceptual change explores how we might move students away from naive understandings of physical phenomena and toward more nuanced (and accurate) understandings of the same phenomena. Research on effective debunking explores why simply telling people what the myth is isn't terribly effective for debunking that myth, and what debunkers should do differently.
But other lines of literature explore how facts do change minds – something that happens so routinely in daily life that we forget it. And they suggest that, counter-counter-intuitively, more reasoning often leads to more accurate beliefs (as opposed to the motivated-reasoning idea that more reasoning leads to more firmly entrenched wrong-headed beliefs). One simple argument: if you believe that misinformation can change people's minds, it's hard to argue that information can't.
The mistake – perhaps the central mistake in understanding the psychology of learning – is assuming that the same piece of information will be processed in the same way by everyone. That a fact is a fact is a fact – and, once anyone sees this fact, the conclusion must be obvious.
Lots of things affect how we process a specific bit of information. Our prior knowledge and prior epistemological commitments, certainly. But also our habits, our mood, the context of the information, the source of the information, and probably a lot of other factors I'm unfairly neglecting.
But I'm not trying to make a point about the prevalence of motivated reasoning. My concern is that people can spend hours arguing about whether facts do (or don't) change minds – in the same way that they can spend hours arguing about whether student-centered learning or direct instruction is superior, or whether incentive-based rewards or fostering intrinsic motivation is superior – without acknowledging that these assertions are pitched at the wrong level.
In the "whether facts change minds" case, we can say it's a version of binary thinking: either this is true or that is true. But this is just a special case of what I will call "absolutist thinking," for lack of a better word.
People take a blanket claim as being true of all situations, regardless of context. At the same time, the mechanics of that claim (i.e., why the claim might be true or untrue) go under-examined.
I most often run into this in education circles. This teaching curriculum is going to work because it's "student-centered". Or, this teaching curriculum is going to work because it involves "direct instruction".
You also see it in economic policy proposals. "Market-based solutions are superior to regulatory solutions". Or "making recreational drugs illegal leads to worse outcomes". Shades of these statements can all be true. Criminalizing recreational drug use can lead to worse outcomes. Market-based solutions can be superior to regulatory solutions. Direct instruction can be superior to other forms of instruction. But giving students more agency can also improve instruction.
You can even be on the "right side" of a debate and still engage in absolutist thinking. You hear about the advantages interleaved practice has over blocked practice. Then, like some crazed cartoon villain, you want to transform every learning experience into an interleaved one.
Perhaps another example is better for illustrating what I mean. My wife began volunteering at an organization that helps immigrants learn English. The organization wants her to spend an hour each week speaking English to one of their clients and they matched my wife with a Spanish speaker. Before the first meeting, my wife, who speaks Spanish, asked a staff member if it would be alright to speak Spanish with her student at first to get to know their learning goals. The staff member was reluctant to allow this, but finally agreed to a few minutes of Spanish-speaking time.
I can imagine someone telling managers or staff at the organization that students need a high level of exposure to English in order to learn it. But the result of this very reasonable statement was apparently an organization-wide dictum: 100% English, all the time. That's the transition from reasonable to extreme. There are multiple reasons why you would want to use someone's native language to support target-language acquisition: to determine which topics are most relevant to the student's daily life; to draw attention to parallel structures between the languages; to build trust between teacher and student; and so on.
The interaction between my wife and this organization parallels a broader misinterpretation that I think occurs in language learning: the slide from something that has research support – "provide students with lots of comprehensible input" – to the coarser, more extreme, and far less accurate idea that "immersion leads to fluency".
It's possible that absolutist thinking is, in part, an improper export of an idea from the natural sciences to their more contextual cousins in the social sciences. You would rightly raise an eyebrow if I applied my "people don't pay attention to context" argument to Newton's second law. F = ma is the kind of thing that does apply across contexts – provided you properly understand what F and a and the equation as a whole mean. Though even here, of course, there are limits to the application of Newtonian mechanics – as there are for any principle or model.
Another very reasonable explanation is that absolutist thinking is part of what it means to learn new concepts: people over-extend a concept to learn where its boundaries are.
There's no doubt over-extension can be part of the learning process. But the arguments and beliefs that I'm criticizing here make presumptions about the nature of solutions. There's not much learning going on because the participants have already made ideological commitments. From the beginning, the understanding is a more extreme, extended version of the original claim. At least that's how it seems to me.