In five years, we could have AI systems capable of accelerating science and automating skilled jobs. Fewer than 10,000 people worldwide are working full-time to reduce the risks of this transition. If you're able to focus your career on having a positive impact on society, I think addressing these risks is where to focus. Here's why.
1) World-changing AI systems could come faster than expected
I’ve ranked AI as the most pressing global problem for over ten years, but it seems even more urgent today. In the last 1–2 years, I’ve pivoted to focus more on it, and I wish I’d done so even sooner.
There’s now a significant chance that AI that can contribute to scientific research or automate many jobs will be created by 2030. Current systems can already do a lot, and there are clear ways to continue to improve them. Forecasters and experts widely agree the probability is much higher than it was even just a couple of years ago.

2) Society could be transformed – whether we’re ready or not
Lots of people hype AI as 'transformative', but few internalise how dramatic the change could really be. There are three types of acceleration that now look possible, each far more grounded in empirical research than it was a couple of years ago (and each capable of rendering your current career plans obsolete):
The intelligence explosion: through feedback loops in algorithmic efficiency, it might only take a few years from developing advanced AI to having billions of AI remote workers, making cognitive labour available for pennies.
The technological explosion: estimates suggest that with sufficiently advanced AI 100 years of technological progress in 10 is plausible. That means we could have advanced biotech, robotics, novel political philosophies, and more arrive much sooner than commonly imagined.
The industrial explosion: if AI and robotics automate industrial production, that would create a positive feedback loop, meaning production could plausibly end up doubling each year. Within a decade of reaching that growth rate, humanity would harvest all available solar energy on Earth and start to expand into space.
Along the way, we could also see rapid progress on many key technological challenges — like curing cancer and developing green energy. But…
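The speed of the industrial explosion scenario comes from compounding: doubling each year means roughly a thousandfold increase per decade. A minimal sketch of that arithmetic (the starting level and the fixed doubling rate are illustrative assumptions, not a model from the article):

```python
# Compound growth: production doubling each year for a decade.
# Starting level is an arbitrary unit; the doubling rate is the
# scenario's assumption, not an empirical forecast.
production = 1.0
for year in range(10):
    production *= 2
print(production)  # 1024.0 — about a thousandfold after ten doublings
```

The same compounding is why a few years of sustained acceleration can outpace a century of ordinary growth.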

3) Advanced AI could bring enormous dangers
It might be hard to keep control of billions of AI systems thinking 100 times faster than we do. But that’s only the first hurdle. The developments above could also:
Destabilise the world order (e.g. create conflict over Taiwan)
Enable the development of new weapons of mass destruction, like man-made viruses
Empower governments (or even individual companies) to entrench their power
Force us to face civilisation-defining questions about how to treat AI systems, how to share the benefits of AI, and how to govern an expansion into space.
This isn’t just about ‘technical safety’, but about an entire range of downstream issues.
4) Under 10,000 people work full-time reducing the risks
Although it can feel like all anyone talks about is AI, only a few thousand people work full-time on reducing these risks.
This is tiny compared to the millions working on more established issues like cancer or climate change, or the number of people trying to deploy the technology as quickly as possible.
If you switch to working on this issue now, you could be among the first 10,000 people helping humanity navigate what may be one of the most important transitions in history.
5) There are more and more concrete jobs
A couple of years ago, there weren’t many clearly defined projects, positions or training routes to work on this issue. Today, there are more and more concrete ways to help, such as:
Joining one of the many growing AI policy think tanks around the world
Improving forecasting and data about AI
Building defences against man-made viruses, like better PPE and detection tools
And more
80,000 Hours has compiled a list of 30+ important organisations, over 300 open jobs, and lists of fellowships, courses, internships, etc., to help you enter the field. Many of these roles are well-paid too.
It’s true many of these jobs are extremely competitive, but due to their potential impact it could still be worth applying to them (while making sure you have a back-up plan).
You also don’t need to work in an explicitly “AI risk” focused organisation. For example, there are hundreds of relevant government positions.
And otherwise you can contribute without changing jobs by donating, spreading clear thinking, building community around this issue, and investing in yourself to be ready to switch as more opportunities open up.
You don’t need to be technical or even focus directly on AI — we need people building organisations, in communications, and with many other skills. AI is going to affect every aspect of society, so people with knowledge of every aspect are needed (e.g. China, economics, biology, international governance, law, etc.).
The field was small until recently, so there are comparatively few people with deep expertise. That means it’s often possible to spend about 100 hours reading and speaking to people, and transition into the field (and then keep learning from there). If you have a quantitative background, it’s possible to get to the technical forefront in under a year. The 80,000 Hours team can give you one-on-one advice on how to switch if you’re later-career, and how to skill up if you’re earlier. There’s more tactical advice here.
Real examples of people who switched:
Rashida Polk was an experienced nurse, but wanted to switch to reducing pandemic risk. She applied to the Horizon Fellowship, and now works on a relevant Senate committee.
Neel Nanda studied maths and considered going into finance. He found out about AI risk and got an internship in the area. Now he leads research into interpretability at Google DeepMind.
Katie Hearsum was working in banking, and transitioned into an operations role at Longview Philanthropy, one of the largest funders in the space, where she’s now the COO.
6) The next five years seem crucial
I’ve argued the chance of building powerful AI is unusually high between now and around 2030, and declines thereafter. This makes the next five years especially critical.
That creates an additional reason to switch soon:
If transformative AI emerges in the next five years, you’ll be part of one of the most important transitions in human history.
If it doesn’t (which is definitely a live possibility), you’ll have time to return to your previous path — while having learned about a technology that will still shape our world in significant ways.
The bottom line
If you’re fortunate enough to be able to find a role helping to navigate these risks (especially over the next 5–10 years), that’s probably the highest expected impact thing you can do.
But I don’t think everyone reading this should work on AI.
You might not have the flexibility to make a large career change right now. In that case, you could look to contribute from your current job and prepare to switch in the future — or like most people, you just might not have the luxury of making social impact your focus.
There are other important problems, and you might have far better fit for a job focused on one of them.
You might be concerned about the (admittedly huge) uncertainties about how best to help, or be less convinced by the arguments that it’s pressing.
However, I’d encourage almost everyone who’s able to pursue an impactful career to seriously consider it. If you’re unsure you’ll be able to find something, keep in mind there’s a very wide range of approaches and opportunities, and they’re expanding all the time.
All this is why I’m writing a new guide to careers tackling AI. Read a summary with some more practical advice on how to switch:
If you’ve decided you’d like to focus on this issue, 80,000 Hours may be able to give you one-on-one advice and introductions to people in the field. APPLY NOW.
Thank you to Cody Fenwick and Dewi Erwan for help with this article.