How useful is technical understanding for working in AI policy?
It’s not totally clear what the ideal background or relevant ‘expertise’ for AI policy is. Working in other areas of technology policy seems like the most directly relevant experience, but because this is such a new and fast-growing area, people are coming from all kinds of different backgrounds. One thing I’ve been thinking about is how useful it is for people working in AI policy to have technical experience/understanding in machine learning, or computer science more generally. Should more people with technical expertise be working on AI policy issues? Should people already working on AI policy be focusing more on developing their technical understanding? I think I lean more strongly towards “yes” on these questions than many people do, and so I want to try and spell out why.
(Note: here I’m using ‘AI policy’ quite broadly, to encompass all kinds of thinking about how AI will impact society and how those impacts should be managed - not just referring to people working directly in policy jobs.)
Why technical understanding matters
1. Thinking clearly about possibilities and risks
First, thinking clearly about how current AI systems will impact society requires a decent understanding of the capabilities and limitations of those systems. I do think it’s possible to think usefully about societal decision-making around AI with a pretty high-level sense of what “AI” is. But I also worry that the notion of “AI” underpinning many policy discussions is far too vague, in a way that fuels misunderstanding about the possibilities and risks AI poses.
We’re currently seeing a lot of people repeating the same buzzwords and concerns in vague terms - e.g. privacy, bias, explainability - but there are relatively few people thinking from ‘first principles’ about what specific current capabilities and limitations mean for society. For example, almost all of the applications of “AI” raising concerns in society today seem to be ones built using supervised learning (SL), but this is rarely explicitly acknowledged. There’s an interesting question of whether current concerns around AI being applied in society are overly specific to SL, and whether AI systems based on different training methods (e.g. reinforcement learning) raise different concerns or should be treated differently. Thinking clearly about this question doesn’t require deep expertise in ML, but it does require a solid intuition for the differences between these methods and their applications - which is non-trivial to acquire, and which most AI policy researchers probably lack.
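To make that distinction a little more concrete, here is a minimal toy sketch in Python (the tasks, numbers, and update rules are invented purely for illustration, and aren’t drawn from any real policy-relevant system): supervised learning fits a model to labelled examples where the ‘right answers’ are given up front, while reinforcement learning has to learn a behaviour from trial, error, and a reward signal.

```python
# Toy contrast between supervised learning (SL) and reinforcement learning (RL).
# Everything here is invented for illustration only.
import random

# --- Supervised learning: fit a model to labelled examples (x, y) ---
data = [(x, 2.0 * x) for x in range(10)]   # labelled dataset: the "right answers" are given
w = 0.0                                    # single model parameter
for _ in range(100):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x          # gradient of squared error
        w -= 0.001 * grad                  # nudge w towards the labels
print(f"SL: learned w = {w:.2f} (true value 2.0)")

# --- Reinforcement learning: learn from reward, with no labelled answers ---
# A trivial one-state 'environment': action 1 pays reward 1, action 0 pays 0.
q = [0.0, 0.0]                             # estimated value of each action
for _ in range(1000):
    a = random.randrange(2) if random.random() < 0.1 else q.index(max(q))  # explore/exploit
    reward = 1.0 if a == 1 else 0.0        # feedback arrives only as a scalar reward
    q[a] += 0.1 * (reward - q[a])          # update the estimate towards the observed reward
print(f"RL: value estimates [{q[0]:.2f}, {q[1]:.2f}] -> prefers action {q.index(max(q))}")
```

The point is just the shape of the feedback each method learns from: labelled answers in the first case, a reward signal from interaction in the second.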
2. Thinking ahead
Second, it’s important that AI policy can be anticipatory: not just reacting to problems that have already arisen, but thinking ahead about how society can prepare for advances in AI capabilities and their possible impacts. Of course, we can’t predict any of this with certainty, but we can think carefully about the different ways AI capabilities might evolve and the implications of different development trajectories. This requires a high-level understanding of what general capabilities AI systems have, what tasks and problems this makes them well-suited to, where the limitations of current systems are, and where research appears to be making progress. I think there’s a real tendency to talk about future AI systems as if they’re magic - e.g. suggesting that in future AI will be used to do scientific research or be able to self-improve, without actually thinking through any details of what being able to do these things might involve - which is at least partly due to a lack of technical understanding.
3. Working collaboratively
Third, solutions to problems arising from AI need to be a collaborative effort between policy experts, social scientists, and technical researchers/developers (at the very least). This means these groups of people need to be able to talk to each other! Of course, the responsibility for bridging these divides falls on all of these groups, but policy practitioners/researchers working to understand what ML research & development looks like will be an important component of these collaborations. One part of this is just being able to speak roughly the same language as technical researchers - e.g. understanding what it means to train a model, or the difference between different training methods. Also important is being able to identify when drawing on deeper technical expertise would be useful, and where to look for it. More generally, most problems arising from AI will have solutions that are part ‘technical’, i.e. partly about the kinds of systems and capabilities we develop, and part ‘social’, i.e. partly about how we design aspects of society to respond to and govern those systems. For example, ensuring that medical AI systems are used safely requires thinking about both (a) how to make those systems robust and verifiable/interpretable on the technical side, and (b) what kinds of checks, processes, and governance are needed more broadly to prevent, catch, and respond to important errors. If people working on (a) and (b) operate entirely independently of one another, this work is going to be hugely inefficient at best - and in particular, those on the governance side need to understand the limitations of technical safety approaches so they know where safety checks and regulation are most needed.
What kind of technical understanding?
One thing that is fairly clear to me is that the most useful kind of technical understanding is, in most cases, not going to be deep expertise in some specific subfield of ML. Much more likely to be useful is a decent understanding of what ML research involves in practice, enough familiarity with the terminology to talk to ML researchers and skim papers, and a high-level understanding of current capabilities and limitations and where they might go in future. I’m not actually sure what the best way to acquire this is - especially the “high-level understanding of current capabilities and limitations” part. I think this high-level understanding is probably quite difficult to acquire, and something that many ML experts don’t actually have if their focus is very narrow. It’s also not necessarily something you get automatically from learning more about how ML works and knowing the difference between different current methods.
Obviously, the level and type of technical understanding that’s useful depend a lot on the type of research or work you’re doing, and I don’t necessarily think all policy researchers should be going away and taking ML courses. Maybe it’s fine for there to be many people thinking about AI policy in very broad strokes, understanding only the very general features and implications of AI - e.g. that AI involves automating tasks previously done by humans, that it generally requires large amounts of data, and that its methods aren’t always fully interpretable to us. But we do need some people in the policy space thinking more deeply about what is and what might be technically possible: to ensure current concerns are well-grounded, to ensure solutions will still be relevant as capabilities advance, and to ensure productive collaboration with ML researchers. In particular, I worry that there’s a severe lack of technical expertise in government - where decisions about how AI is governed will actually get made.
One thing I’d like to think more about is how this works, and has been thought about, in other areas of science and technology policy: e.g. how well do people thinking about biosecurity or climate policy understand the relevant science and technical capabilities? Climate policy is a bit disanalogous because we’re not talking about an evolving technology, but there’s still a certain level of scientific understanding that seems like an important prerequisite for working in that space. Governance of biotechnology might be more closely analogous. One suspicion I have is that the average level of technical understanding in AI policy is lower than in many other areas, because almost everyone has a high-level impression of what ‘AI’ is (whereas most non-experts are more aware that they really have no idea what current biotechnology looks like). It could be that, for this reason, the level of technical understanding required to contribute usefully to AI policy is just lower than in other fields - but I also worry that it isn’t, and that it’s just easier to delude ourselves that we really understand what AI is than it is to delude ourselves about technologies where there’s less of a public narrative.