Very broadly, my research focuses on how to ensure that developments in AI are safe and beneficial in the long term. Right now, I’m particularly interested in: how risks from AI might evolve as fundamental capabilities advance; how to develop policy today that positively shapes the development and use of AI in the future; and how to ensure that public and policy conversations about AI risks are grounded in a solid understanding of technical capabilities.

Some recent and ongoing publications include: