Thoughts on short- vs. long-term AI policy

It’s generally acknowledged that there’s a distinction between “short-term” (or “near-term”) and “long-term” AI policy issues. But this distinction actually tends to conflate (at least) three things:

  1. When issues arise.

    e.g. Cave and ÓhÉigeartaigh (2019) define ‘near-term’ issues as “immediate or imminent challenges”; the 80k guide to working in AI policy defines ‘short-term’ issues as “issues society is grappling with today”.

  2. How advanced the relevant AI capabilities are.

    e.g. Baum (2018) distinguishes between a ‘futurist’ AI claim which says that “attention should go to the potential for radically transformative long-term AI” and a ‘presentist’ AI claim which says that “attention should go to existing and near-term AI.” Similarly, 80k talk about ‘long-term’ issues as those “that either only arise at all or arise to a much greater extent when AI is much more advanced than it is today.”

  3. How likely an issue is to have long-term consequences.

    e.g. 80k say that ‘long-term’ issues are those that “will have very long-lasting consequences.”

These three things are generally assumed to go hand in hand, or at least are not clearly distinguished. This might seem like quibbling over definitions, but I actually think it fuels confusion about which issues are most important to work on.

I think that what most people in the ‘long-term’ camp really care about is (3) - how likely an issue is to have (large and) long-lasting consequences for society. Points (1) and (2) only matter insofar as they influence (3).

If we define ‘long-term’ issues in this way - as the issues most likely to have long-lasting consequences for society - I’m not sure how many people in the ‘short-term’ camp would actually put themselves in opposition to that. I certainly don’t think many people would say that they are explicitly prioritising issues that will only have a short-term impact on society over those with longer-lasting consequences. On this definition, the distinction between ‘short-term’ and ‘long-term’ starts to feel a lot messier and less clear-cut.

I think there are actually several different ways in which people disagree about which AI policy issues to work on that don’t come down to a simple short-/long-term distinction. It’s worth trying to pick these apart, because in doing so we might realise, for example, that people disagree less than they seem to, that there’s empirical research that could resolve important disagreements, or perhaps even that important issues or areas are being neglected because of the assumptions being made on both ‘sides’. Here are some key disagreements I think are getting mixed up:

  • (a) Disagreement about whether we should work on issues affecting current vs. future people. There are some genuine disagreements about whether it’s more important to work on issues affecting current populations, and to what extent we should also be concerned about future generations. These stem from pretty deep philosophical beliefs: some people believe we have a greater moral obligation to those alive today, whereas others don’t. I think these views contribute somewhat to which AI policy issues people think are most important to work on, but I suspect they’re only a relatively small part of the story.

  • (b) Disagreement about how long-lasting the consequences of ‘nearer-term’ issues are likely to be. I think many people would broadly agree that, all else equal, it’s better to prioritise working on issues with longer-lasting consequences for humanity. I imagine many people working on making algorithms fair and accountable today are doing so because they believe that failing to solve these problems could have extremely bad, long-lasting consequences for society (entrenching extreme power structures, exacerbating inequality, and so on).

  • (c) Disagreement about the best ways to influence the long-term future. When prioritising which issues to work on, what matters is not just their potential impact but also whether we have any ‘leverage’ to shape the way things go (a point made nicely by Ben Garfinkel here). One criticism of those who focus on issues relating to very advanced AI is that it’s very difficult for us to have any idea what AI will look like in the future - the implication, I think, being that we therefore have little leverage to influence it. On the other side, some in the ‘long-term’ camp might criticise work addressing issues arising today on the basis that it’s unlikely to have any long-lasting impact.

As the “short- vs. long-term” divide in AI policy is currently drawn, I think there’s far too much focus on the deep ideological disagreement in (a), and not enough on really understanding the tricky, largely empirical disagreements in (b) and (c). It would be really valuable to unpick further some of the assumptions underpinning these disagreements, and to think about what kinds of research might actually help us think more clearly about the best ways to influence the long-term societal impacts of AI.

In case it’s not clear by this point, I’m pretty firmly in the “we should care about future populations” camp on (a), and I do think that we should be trying to work in those areas where we might have some influence over how AI impacts society in the very long run. But I’m much less sure about (b) and (c). I think it’s possible that issues arising from current AI systems, or from more advanced capabilities that still fall far short of AGI, could have extreme and long-lasting impacts on society - either by leading to extreme scenarios themselves (e.g. automated surveillance enabling global authoritarianism), or by undermining our collective ability to manage other threats (e.g. AI-enabled disinformation eroding our capacity for collective decision-making and coordination).

I also think that some of the best ways to influence the long-term future might involve working on what are mostly ‘current’ issues, but with the long term in mind (e.g. ensuring that today’s AI systems are developed in safe and interpretable ways that extend to more advanced systems; creating good research norms and a culture of responsibility within ML research; developing policy processes that are robust to uncertainty about how AI will develop; and so on).

Mostly, I think we need more thorough thinking both on how ‘near-term’ and emerging issues might have very long-term consequences, and on what kinds of ‘near-term’ work give us the best leverage over the future trajectory and impact of AI.

One thing I haven’t really talked about, but which is important for prioritising what to work on, is neglectedness: finding important areas that aren’t getting much attention. Neglectedness is a large part of the reason the ‘long-term’ community has so far mostly focused on more speculative risks from very advanced AI systems - no one else was thinking about them. But I now think we may be at a point where something like “near-term work from a long-term perspective” is also looking pretty neglected.