I wrote the other day about a distinction between two different ways of improving decision-making: trying to improve lots of small decisions via ‘nudging’ (e.g. changing the presentation of options in cafeterias so more people make healthy choices) vs. trying to improve a few, particularly high-stakes decisions by training people in better decision-making techniques (e.g. training members of the National Security Council to make more accurate forecasts).
A conversation I had the other day made me realise that these examples actually highlight two important distinctions - and therefore, potentially, four different ways of thinking about improving decision-making. There’s one distinction between the different methods we might use to try and improve decisions, and another between the different kinds of decisions we might target for improvement:
Different methods: improving decisions via “nudges” vs. teaching people better decision-making strategies
Different kinds of decisions: improving lots of small decisions a small amount (thousands of people eat more healthily) vs. improving a few important decisions a larger amount (a government spends scarce resources on more effective health interventions).
I only focused on two possible combinations of these two variables - improving lots of small decisions via nudges, and improving a few important decisions by teaching better strategies. But the other combinations are also possible - we could try to improve decisions made by the general public via training (by teaching better decision-making strategies in schools, for example), or we could try to improve a few very important decisions via small “nudges” in key environments (by, um, putting a picture of a pair of eyes over the Prime Minister’s desk?) To make this clearer, let’s put it in a 2x2 matrix, because everyone likes those:
|  | “Nudges” | Teaching better strategies |
| --- | --- | --- |
| Lots of small decisions | Classic “behavioural insights” work - e.g. changing the wording on a letter so more people pay their taxes, or changing the display of food options so more people make healthy choices | Adding “critical thinking” or other “rationality training” to school curricula, or running workshops that help people make better decisions in their lives |
| A few big decisions | Making relevant evidence more easily available and understandable to policymakers, or creating social rewards for using certain procedures | Training influential decision-makers (e.g. in tech companies or government) to recognise and avoid cognitive biases and other bad thinking habits |
The bottom left square - trying to improve a few very important decisions via “nudges” - seems particularly interesting to me, because it’s perhaps the least obvious or least discussed. It’s not totally clear to me what “nudging” influential decision-makers - e.g. policymakers in government - would look like, but it certainly seems plausible that one could tweak the environments in which important decisions are made (by changing regulations, processes, or salient ideas/information) so as to subtly shift incentives, thereby genuinely improving the quality of the decisions made.
I recently wrote about improving institutional decision-making as a high-impact cause area for 80,000 Hours, where I focused mostly on trying to get better decision-making techniques implemented - i.e. the bottom right cell of the above matrix. One piece of pushback I got was that this is incredibly difficult to do in practice, because policymakers and other influential decision-makers simply don’t have much incentive to adopt costly, effortful new strategies, and there are plenty of bureaucratic barriers to doing so. I agreed that this is a concern, but I also wasn’t sure what on earth “changing incentives” could look like in practice. One thing it might look like, though, is nudging - making subtle changes to the environment in which decisions are made, so that ‘better’ decisions become easier. A huge advantage of “nudging” approaches over “training” approaches, long recognised by the behavioural science crowd, is that they don’t require much, if any, effort from the people whose decisions are being improved.
Of course, the fact that “nudging” often doesn’t even require awareness on the part of the people whose decisions are being targeted is also the reason it’s sometimes ethically dubious. However, I don’t think this is as much of a concern as it seems for nudging institutions towards better decisions, for a couple of reasons. First, if we’re going to try and ‘nudge’ important institutions/teams towards better decisions, those institutions/teams will presumably have to be a lot more involved in the process than the general public are in the kinds of policy nudges currently employed. I think doing this kind of thing would look a lot more like a few key specialists in an organisation coming up with proposals for how the organisation’s processes and environment could be subtly changed to incentivise better decisions. These proposals would inevitably have to be approved by at least some of those who would be affected, while still not requiring much effort from them beyond that. Second, ‘nudging’ for better decisions here would probably focus on improving the processes by which decisions are made, and on building the capacity of groups to make better decisions, rather than on pushing for particular outcomes. For example, the focus might be on making it easier for policymakers to make use of relevant evidence when making decisions, or to use certain systematic processes for assessments.