Reflections on confirmation bias

Below is the postscript/"final reflections" section from my PhD thesis. I tried to write it so that it would stand fairly well on its own as a high-level summary of the issues I discuss in more detail in the thesis itself: why the evidence for confirmation bias is much weaker than most people think, and how this fits with broader narratives about the importance of open-mindedness and changing one's mind. 

At some point I'd like to write something about this explicitly for a popular audience, but until then...

This PhD has been an interesting exercise for me in changing my own mind, and trying to set aside my preconceptions. I chose to study confirmation bias because I genuinely believed it was pervasive, and at the root of many of society's problems. I hoped that my research could help find a way to 'debias' people against it, to reduce this harmful source of irrationality. More generally, I had the impression that people are too slow and reluctant to change their minds, too 'closed-minded', and that pushing in the other direction - helping people to be more open-minded - was clearly a good thing.

However, over the course of my research, I've come to question all of these assumptions. As I began exploring the literature on confirmation bias in more depth, I first realised that there is not just one thing referred to by 'confirmation bias', but a whole host of different tendencies, often overlapping but not well connected. This is because a 'confirmation bias' can arise at different stages of reasoning: in how we seek out new information, in how we decide what questions to ask, in how we interpret and evaluate information, and in how we actually update our beliefs. I realised that the term 'confirmation bias' was much more poorly defined and less well understood than I'd thought, and that the findings often used to justify it were disparate, disconnected, and not always that robust.

Reasoning that it made sense to start at the beginning of the process, I first focused my attention on selective exposure: the idea that people tend to seek out information they expect to confirm what they already believe. Though I knew this was not all there was to confirmation bias, I thought it was a good place to start: if people don't even engage with different viewpoints at all, how are they ever going to be able to change their minds when they should? My focus therefore shifted from 'fix confirmation bias' to the only-mildly-less-ambitious 'fix selective exposure'. But as I explored the selective exposure literature further and conducted my own experiments, this too began to look misguided: it wasn't clear, from either the existing literature or the results of my first few studies, that selective exposure was actually a particularly strong or robust phenomenon. Was I trying to fix a problem that didn't exist?

Unsurprisingly, at this point I found myself feeling quite confused about what I was really trying to do. I spent several months trying to make sense of the mixed findings in the selective exposure literature, and trying to square them with a belief I still struggled to let go of: that outside the lab, people do genuinely seem to have a hard time engaging with different perspectives. Eventually I realised that the problem was that selective exposure was far too narrow a concept, and that my measures weren't really capturing the most important aspects of people's motivation and behaviour. Someone could display little or no selective exposure - reading a balance of arguments from both sides - but still not really engage with those arguments in an 'open-minded' way. Equally, the arguments a person chose to pay attention to might make them look biased, but actually be chosen for good reason - based on where they genuinely expected to learn more, for example. At this point I felt that further exploring the question of whether and when selective exposure occurs wasn't going to help me make progress on the questions I was really interested in: whether people really are biased towards their existing beliefs, and what it really means to be open-minded.

This set me off along two closely related paths that would eventually converge, both involving taking a big step back.

First, I began exploring the broader literature on confirmation bias in more detail, along with the associated normative issues. My investigation of the selective exposure literature had made me realise that if I wanted to understand confirmation bias, I couldn't look at different aspects of reasoning independently: I needed to understand how bias might arise at all stages of reasoning, and how these stages interact with one another. It also made me wonder whether other findings I'd taken for granted might, like selective exposure, turn out to be less robust than I'd thought. I realised, too, that there were a number of normative questions that the selective exposure research did not adequately deal with - whether selective exposure is genuinely a 'bias' or 'irrational', and what this really means - that other areas of research might address better. I had been interested in this broader debate around what it means to be rational, and whether it is possible to improve human reasoning, since the beginning of my PhD, so I decided to look into it further.

Second, I started delving into the question of what it really means to be 'open-minded', and how we might measure it. I was dissatisfied with the way selective exposure was often implicitly taken to be a measure of 'open-mindedness', when open-mindedness seemed to me a much broader concept, one that selective exposure experiments came nowhere near capturing. I also recognised that while open-mindedness was closely related to confirmation bias, the term seemed somewhat vague, and I wasn't aware of good ways to measure how 'open-minded' someone was being. I therefore wanted to explore the literature on open-mindedness to see if I could get more clarity on the concept and its relationship to confirmation bias, and whether there were better ways to measure open-mindedness than simply which arguments people select to read.

On the first path - exploring the confirmation bias literature and associated normative issues - I realised that most of the findings commonly cited as evidence for confirmation bias were much less convincing than they first seemed. In large part, this was because the complex question of what it really means to say that something is a 'bias' or 'irrational' goes unacknowledged in most studies of confirmation bias. Often these studies don't even state which standard of rationality people are supposedly 'irrational' with respect to, or what better judgements would look like. I started to come across more and more papers suggesting that findings classically thought of as demonstrating a confirmation bias might actually be interpreted as rational under slightly different assumptions - and I often found that these papers made much more convincing arguments, grounded in more thorough theories of rationality.

On the second path, I realised that most of the interesting discussion around open-mindedness was taking place in the philosophical, not the psychological, literature. In psychology, discussion of open-mindedness largely took for granted what it means to be open-minded, and focused on developing measures of open-mindedness as a personality trait, based on self-report scales. I was more interested in whether it was possible to measure open-mindedness behaviourally (i.e. how open-minded someone is in their thinking about a given topic), which required pinning down this vague term to something more precise. The philosophical discussion of open-mindedness tried harder to elucidate what it means to be open-minded, but in doing so found itself caught up in the tricky question of whether it's possible to be too open-minded - and if so, whether it is misguided to think we should teach open-mindedness. For a while, I myself got caught up in the elusive quest to define open-mindedness in a way that evades all possible downsides, before realising this was probably neither useful nor necessary.

All of this investigation led me to seriously question the assumptions I had started with: that confirmation bias was pervasive and problematic, and that more open-mindedness was always better. Some of this can be explained as terminological confusion: as I scrutinised the terms I'd been using unquestioningly, I realised that different interpretations led to different conclusions. I have attempted to clarify some of the terminological confusion that arises around these issues: distinguishing between the different things we might mean when we say a 'confirmation bias' exists (from bias as simply an inclination in one direction, to a systematic deviation from normative standards), and distinguishing between 'open-mindedness' as a descriptive, normative, or prescriptive concept.

However, some substantive issues remained, leading me to conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios, given the various goals we are pursuing, and that it may not always be better to be 'more open-minded'. It's easy to say that people should be more willing to consider alternatives and less influenced by what they already believe, but much harder to say how one does this. Being a total 'blank slate' with no assumptions or preconceptions is neither a desirable nor a realistic starting point, and temporarily 'setting aside' one's beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if it is possible at all. There are tradeoffs between the benefits of certainty and assumptions, and the benefits of having an 'open mind', that I had not acknowledged before.

There's a nice irony to the fact that over the course of this PhD, I've ended up thoroughly questioning my own views about confirmation bias and open-mindedness: questioning my assumptions about the value of making assumptions, as it were. I haven't changed my mind completely - I am still concerned that in some situations, and for certain topics, people really are too dogmatic and could do with exploring more. But I'm certainly more open-minded about this than I was. Whether my increased open-mindedness is a good thing, of course, is another question.

Jess Whittlestone