People sometimes talk about “improving decision making” as a way to improve the world - if we could find ways to overcome the various ‘biases’ and ‘irrationalities’ that people are prone to, we’d be better able to solve some of the world’s most important problems. I think there’s promise here, and I’d like more people to focus on it. But I also think that, stated this broadly, the project is too vague to be tractable. I’d like to say something a bit more concrete about what working on this problem might look like, and to begin with, I’ve found it helpful to distinguish between two different types of “improving decision-making.”
The idea of improving policy-making using “behavioural insights” has been gaining popularity in government over the last few years - largely due to the work of the UK Behavioural Insights Team (BIT), and other smaller groups and organisations doing similar work. (Disclaimer: I worked for BIT for ~1 year during my PhD.) The basic idea here is that we can use an understanding of behavioural science to design policies that “nudge” citizens’ behaviour in better directions: helping people to eat more healthily, save more for retirement, or get back into work more quickly. Because such policies affect thousands or even millions of people, better design can make the world better by improving many, many small decisions in people’s lives.
I think this work is clearly valuable, and I’m glad there’s more focus on it (setting aside potential ethical issues with governments deliberately influencing citizens’ behaviour - I think there are some legitimate worries here, but in practice most of this work is defensible). But there’s also a second way of applying “behavioural insights” to improve policy that I think might be even more valuable, and which hasn’t received as much attention.
In addition to improving the design of specific policies, we could also apply insights from psychology to improve the processes by which policy decisions are made. Rather than trying to improve lots and lots of small decisions, we could focus our efforts on a few very high-stakes decisions: the decisions made by people in powerful positions that are most likely to affect humanity’s future. This might be more challenging than small nudges, but might also be much more valuable in the long run. As technology gets more and more advanced, potential worst-case scenarios from conflict are growing in severity - with nuclear weapons, we have the ability to wipe out millions or even billions of humans, and advances in AI and biotechnology may pose unprecedented new threats. This makes the decisions of powerful institutions all the more crucial, and improving their decision-making competence all the more valuable.
These kinds of “high-stakes decisions” - deciding how to respond to threats from other countries or terrorist groups, or deciding how to prioritise government’s scarce resources - are of course much more complex than the decisions most individuals make on a day-to-day basis. In the case of improving citizens’ decisions, it’s generally objectively clear what the “better” decision is (and this is part of how we defend the ethics of nudging) - healthier food choices, keeping more people in work, and wider participation in higher education are all pretty uncontroversially good for society. When it comes to complex and high-stakes government decisions, it’s less often the case that people struggle to make what’s clearly the best decision - and more often that it’s incredibly difficult to know what the best decision is at all.
Perhaps it’s helpful here to additionally distinguish between two types of human irrationality. In some cases, we sort-of-know reflectively what the best decision or answer is, but short-term heuristics and incentives mean we fail to act accordingly - I know that I’ll feel better in the long run if I exercise and keep my finances organised, but I often feel more motivated in the moment to spend money on fancy ice cream than to go running. But for other kinds of problems, it’s incredibly difficult for us to know what the right answer or best course of action is, even reflectively, even given a lot of time to think about it. How advanced is North Korea’s nuclear weapons programme? How likely is it that there will be a nuclear attack on the US in the next two years? Part of the difficulty with answering these questions is incomplete information, of course, but there’s also the fact that our brains naturally struggle to combine large amounts of information at once, to think probabilistically, and to see what inferences to draw from various different pieces of data. Even given a great deal of relevant information, and enough time for reflection, intelligent people will fail to make accurate judgements about complex problems, especially those involving predicting the future. This is largely a problem of limited cognitive ability, and so quite a different type of “irrationality.”
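To make the “combining evidence probabilistically” difficulty concrete, here’s a toy Bayesian update - the kind of calculation our intuitions handle badly. All the numbers are invented for illustration, not real estimates about any actual question:

```python
# Toy illustration of combining a prior belief with a piece of evidence
# via Bayes' rule. The numbers are made up - the point is the mechanics.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and the probability
    of seeing the evidence under each hypothesis."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Suppose an analyst starts with a 10% prior that a programme is at an
# advanced stage, then sees evidence three times as likely if that's
# true (60%) as if it's false (20%).
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
print(round(posterior, 2))  # 0.25
```

Note the update is smaller than intuition often suggests: evidence three times as likely under the hypothesis only moves a 10% prior to 25%, not to near-certainty - and real judgements require chaining many such updates.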
This means the best approaches for improving the ability of powerful institutions to make high-stakes decisions are likely to be very different from the best approaches for improving the small decisions people make on a day-to-day basis. Simple nudges - making the obviously best option easier or more attractive - aren’t going to cut it. There might be some low-hanging fruit in terms of removing impediments to better decision-making: using checklists is surprisingly effective at reducing simple errors, for example. And I think continuing to push for more evidence-based policy - which already has a fair amount of traction - is likely to be very valuable.

But ultimately I think better institutional decision-making will be less straightforward - most of the best-established techniques for improving judgements and decisions (e.g. from the literature on forecasting and improving calibration) seem to require a fair amount of conscious effort and training on the part of decision-makers. There’s also the issue of incentives - arguably we already know a lot about how to make better decisions, and the reason these methods aren’t used is that influential decision-makers face bureaucratic barriers and competing incentives which mean it’s not in their personal interests to adopt them. So if we want to improve decision-making, e.g. at high levels of government, it’s not enough to just understand some techniques that have performed well in academic contexts - this research needs to be combined with an in-depth understanding of how bureaucracies work, and we need to find ways to align better decision-making with other incentives. To see better decision-making techniques adopted in government, I think we’ll need to find ways to show decision-makers that these techniques will actually help them achieve their more immediate objectives, whatever they are.
None of this is easy, especially compared to changing the wording on a letter to increase response rates - so it’s not surprising that people interested in improving decision-making have focused much more on simple nudges. But I think that psychology research actually has a lot to say about this second type of improving decision-making - improving the processes by which people make complex, high-stakes decisions where there’s no obvious “correct” answer - as long as we acknowledge the practical complexities and avoid oversimplifying the problem. I think it would be really valuable if there were more collaboration between social scientists and people actually making important decisions, and more discussion of ways we could improve the quality of institutional decision-making. And none of this is really meant to criticise the way government currently makes decisions - I certainly don’t know enough about it! - but just to recognise that humans are imperfect, there’s always room for improvement, and focusing our effort on the highest-stakes decisions might be particularly important.