Actually solving problems
I’ve been feeling some dissatisfaction lately around not always being clear what I’m trying to achieve in the work I’m doing, so I want to explore that a bit.
When I graduated from my undergrad back in 2012, I suddenly felt quite lost: it was the first time a path wasn’t laid out for me, and I had this sense I wanted to do something valuable with my career, but didn’t know how. This was the time I came across the nascent effective altruism community, and I was very drawn to the way many people involved were thinking strategically and ambitiously about how to have an impact in the world. My involvement in this community, and many people in it, had a pretty big influence on me: it made me more ambitious about trying to actually solve important problems in the world, prompted me to take seriously things like risks from AI and the importance of shaping humanity’s long-term future, and encouraged me to think more strategically about how I’m trying to impact the world.
However, as EA evolved, there were an increasing number of things about the culture and norms of the community that didn’t sit quite right with me. I won’t go into loads of detail here, but a few things in particular: (1) I worried the community was getting overconfident about a set of relatively narrow ideas and becoming increasingly insular, lacking respect for any expertise that wasn’t “EA”; (2) relatedly, it felt like the community was evolving in a way that implicitly encouraged people to defer to the views of “high-status” individuals, rather than enabling independent thinking; and (3) I found that the culture of constantly questioning what was “the most important thing to do” was causing me to put pressure on myself in unhealthy ways. For these reasons among others, I’ve distanced myself from the EA community over the years. I’m still connected with specific individuals and groups that I find supportive, interesting, and helpful, but I identify much less with this broader thing called “EA”.
Over the last couple of years, I’ve also reduced the pressure on myself to be doing “the most important thing” all the time. I drove myself a bit mad in the first couple of years of my PhD trying to figure out “the most important research topic”, which resulted in me going round in circles, feeling unmotivated, and not doing much at all. I ended up really accepting that it’s better to do something that’s ‘merely’ somewhat useful than nothing at all. After my PhD, I decided to go and work at the Centre for the Future of Intelligence and the Centre for the Study of Existential Risk in Cambridge, thinking about the long-term impacts and risks of AI and what we can do about them today. I was drawn to this particular environment because there seemed to be a culture of thinking hard about important questions, but with less pressure on always doing the absolute most important thing than at other places that more explicitly identify as “EA”.
I’ve allowed myself a lot more freedom these past couple of years to not always be optimising: to explore and learn and do what I’m motivated by. This felt particularly important going into research in the ‘AI policy’ space, which I thought was important but where I didn’t have fully-formed views on what really needed to be done or how I could best contribute. I’d been around a lot of other people with opinions about AI risks and policy, and I didn’t want to just adopt their assumptions and perspectives.
I think this has been really good for me, and I’ve developed my thinking a lot over the past couple of years on what we should be concerned about with AI, what needs to be done, and where I’m best placed to contribute. But though I’m getting closer, I still lack a very clear sense of what I’m trying to achieve in the world, and this means I sometimes lack confidence that I can actually achieve anything important. I feel some tension here: increasingly I feel I need to get clearer and more confident about what I’m trying to achieve, but I’m also resisting it a bit, because I don’t want to end up back in the place where I’m over-optimising and putting too much pressure on myself.
I think part of the answer here might be not to focus on identifying “the most important problem”, but on learning how to actually solve problems, full stop. Actually solving problems in the world is really, really hard, and requires a kind of mindset that isn’t really taught or encouraged. In giving myself space to ‘explore’ different research topics, for example, I’ve noticed how easy it is to get caught up chasing short-term incentives: to convince myself I just need to get publications and job security so that later I can do something useful with them... and quickly lose sight of what I’m doing it all for.
Actually solving real problems in the world is much harder than getting academic publications or a promotion, because there’s no clear path and often no good feedback, so it can be very hard to tell if you’re making progress. At the same time, there are plenty of other incentives and feedback mechanisms pushing us in other directions - towards making money, impressing other people, ‘succeeding’ or climbing the ladder in a given domain or industry. I don’t think any of these incentives are well aligned with solving real problems.
I’d probably go so far as to say that very little of the work people do is really aimed at solving real problems. I think many people care about solving problems, but doing so is hard and it’s much easier to follow other things where incentives are stronger.
Actually solving problems requires a kind of persistent mindset that’s not natural for many of us, and so requires a lot of effort. Every now and then I notice this mindset in someone I meet, who seems to keep pulling conversations back to what the issue really is, rather than just talking about what’s interesting or feasible within clearly-defined constraints. When you ask them to make a decision, or ask why they’re working on a specific project, they’ll have a principled answer that comes back to something they want to achieve in the world, not a vague comment about what seemed interesting or what they think other people want to see. I’m finding myself increasingly drawn to these people, and increasingly keen to try and interrogate this mindset and figure out how to cultivate it more in myself.