Discussion about this post

Boogaloo:

As someone who started in AI around 10 years ago, before the deep learning era really took off, everything I see now is what I used to consider science fiction. I've personally shortened my timelines from:

1. Never happening in my lifetime (pre-GPT era)

2. Maybe within 100 years (GPT-2 era)

3. Maybe within 20 years (GPT-4 era)

4. Could happen within the next 5 years (reasoning model era)

So yeah, another data point. I was close to the center of all of this in some way, having studied at UCL and Oxford with some of my own professors making the breakthroughs.

I would say the above is a common trajectory for most AI researchers. Just 10 years ago, talking about AGI would mostly get you ridiculed in AI research. Only DeepMind was really pursuing it and talking about it openly, and even they would downplay it as 'a far out goal'.

Now? Half a dozen companies are explicitly aiming for AGI and believe it will arrive soon.

Pauliina Laine:

Does 80k have plans to create a database of content for people looking to transition into AI safety? The platform could prioritize updates on areas of the field that might be neglected and where there's a need for more doers. It could include introductory articles, for example, like the ones 80k already has on so many problem areas.

I had this idea while doing research on the "where we're at and what needs to be done" of AGI preparedness, especially from the point of view of AI policy & governance for x-risk mitigation. There isn't really a platform that lists all the relevant resources for getting started: articles and policies to read, research institutions and researchers to follow, etc. So I thought: why not help others onboard into the field more efficiently and with less duplication of effort?

