31 Comments
Benjamin Todd:

Another interesting take on this topic: https://thezvi.substack.com/p/ai-practical-advice-for-the-worried

Benjamin Todd:

An extra idea:

I think the arguments for looking after your health could be a bit stronger, since a tech explosion could bring life extension tech much earlier. It's probably easier to prevent damage than to reverse it, so you might get "stuck" in the condition you were in when the acceleration started.

Likewise, you could argue it's more important not to die in an accident, since you're more likely to be giving up a 1,000-year lifespan than under business-as-usual. Signing up for cryonics could also make more sense, because we might be much closer to getting it to work.

Aaron Scher:

I feel like this is a truism, or an improper update? Like if we knew it was 20 years until AGI and 30 years until ASI, people would argue: You should prioritize health in order to make sure you make it to ASI.

Short timelines seem to me to indicate that you should mostly ignore health problems that would only affect you in 20+ years, and should prioritize health interventions that keep you alive until then. This is pretty different from generally being healthy; it's a much narrower target.

I suppose some of my thinking here is that I expect uploading will be a thing and will basically make health problems irrelevant, such that "It's probably easier to prevent damage than to reverse it" doesn't apply because uploading just reverses problems.

Benjamin Todd:

I'm mostly in the frame of looking for actions that are good in short or long timelines or no AGI. I agree the more confident you are in short timelines, the more to focus on short-term survival / productivity, and the more you could neglect long-term health.

(e.g. work out 20min a day because it gives you more energy, but not 1h per day, which is probably better for long-term health)

Aaron Scher:

Sure, that’s a fine frame, but it’s distinct from wording this as an update following from short timelines. And worth also remembering that “the optimal amount of health investment is not infinite.” Whether one should be doing more or less health investment depends on what they’re currently doing.

Benjamin Todd:

In the post, I maybe put too much emphasis on financial capital. It's probably more important to know the right people, or even have fame, which along with political power could still be useful after the transition. That's harder to act on, but maybe (i) do cool stuff (ii) take jobs that let you meet influential people (iii) follow up on connections you've made.

Benjamin Todd:

It's possible I should have put standing up for your political rights on the list. Political advocacy has a tiny chance of having an impact, though the expected value could still be high: https://80000hours.org/articles/is-voting-important/ That said, it's mostly about the social impact rather than personal benefit.

Xavi:

Thanks for writing this! Much more useful than simply stating the world is going to change radically in the near future. If you don't mind the question (sorry if it's too direct), do you think doing a PhD is a reasonable way of getting, say, US citizenship for someone who is still an undergrad? Is there some better alternative?

Benjamin Todd:

I haven't looked into the best ways to get US citizenship. Grad school sounds like a reasonable option (if you're sure you can't get sponsored for a visa right away), though I'm not sure it's worth doing grad school *just* for citizenship. PhDs look less attractive if AI might come soon, so you'd want to weigh a US PhD against your best alternatives (e.g. getting a job right away).

Benjamin Todd:

This post explores why savings might remain relevant post-AGI (also see the upvoted comments for some pushback):

https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi

Maya Ravichandran:

Thanks for writing this! Glad to see that others are thinking and writing about this topic. One thought I have is that I would suggest prioritizing life goals, like having children, now, while we still have stability and normalcy in the world. Who knows where the world will be in a few years, and whether it will be a hospitable situation for life goals that are totally attainable right now.

Vasco Grilo:

Thanks for the advice, Ben! Here is a bet I am open to make with people thinking there is a high chance of an intelligence explosion soon. If, until the end of 2028, Metaculus' question about superintelligent AI (https://www.metaculus.com/questions/9062/time-from-weak-agi-to-superintelligence/):

- Resolves with a date, I transfer to you 10 k today-$.

- Does not resolve, you transfer to me 10 k today-$.

- Resolves ambiguously, nothing happens.

The resolution date of the bet can be moved to whatever works well for you. I think the bet above would be neutral for you in terms of purchasing power if your median date of superintelligent AI, as defined by Metaculus, was the end of 2028.
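For clarity on the "today-$" terms above: the transfer is denominated in today's purchasing power, so the nominal amount owed at settlement grows with inflation. A minimal sketch of that arithmetic, assuming a simple annual CPI-style adjustment (the 3% rate and 4-year horizon are illustrative assumptions, not terms of the actual bet):

```python
def nominal_settlement(todays_dollars: float, annual_inflation: float, years: float) -> float:
    """Inflate a today-$ amount to its nominal equivalent at settlement."""
    return todays_dollars * (1 + annual_inflation) ** years

# e.g. 10k today-$ settled at end of 2028, ~4 years at 3% assumed inflation:
owed = nominal_settlement(10_000, 0.03, 4)  # ≈ 11,255 nominal dollars
```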

Tobias:

"More controversially, if it’s not a big cost to you, it could make sense to delay having children by 3 years, and have them later when uncertainty is reduced."

Why does uncertainty point towards having children later?

Reasons for later:

* Can decide against having children if it's better to never have been born with AGI.

* You want to focus on working on AI Safety now.

* Might be able to have a healthier child once AGI arrives.

Reasons for sooner:

* Having children younger has health benefits for the child.

* You get to spend 3 more years with your children.

* You prefer having a family for 7 years rather than not having a family at all.

* You believe that early childhood will be better in a pre-AGI world (e.g. similar to how pre-internet childhood had some benefits).

Most of the 'reasons for sooner' hold independently of AI progress, which points towards a shift towards later. But a shift of 3 years seems rather large.

I.M.J. McInnis:

"It’s a scary situation, because one possibility is we all die. In that case, all you can do is try to enjoy your remaining years."

I really dislike this. Ordinary people can help, even without a change of careers. https://pauseai.info/

Benjamin Todd:

This article is about what you do from a personal perspective. From an altruistic perspective, it might make sense to spend the final years campaigning for Pause AI. But the chances of you individually making a difference are tiny, so that's not an effective way to protect your own wellbeing.

dov:

Can someone plz explain what the score on the y axis of the graph is measuring?

Benjamin Todd:

It's just the score on the ARC-AGI benchmark.

LM:

Another suggestion: if you rely on a salary, don't take out a mortgage unless you're willing to default. It may still be worth it if monthly payments are below what you would pay in rent, but a house won't be a store of wealth once salaries fall, unemployment rises, and banks become very unwilling to lend over long time horizons.

hmijail:

As an Australian, I'm surprised at the mention of Australia as a good place to be. Why would that be? What would make it better than e.g. China?

Benjamin Todd:

It's not so easy to get Chinese citizenship... And it also seems most likely that the US leads.

I agree I'd rank Australia behind the UK due to less military power, no UN Security Council seat or G7 membership, fewer AI experts, and less involvement in AI policy. Like the UK, it doesn't have any big tech companies or AI data centres. However, it's still a close US ally, so I'd expect it to get cut into the key deals. It also has a lot of land and natural resources (around 30x the UK's), which will be valuable.

Mainly, all the other rich countries seem well behind the US; I'm not sure there are huge differences among the rest.

Tom Hope:

Thanks for the post! Although points 1 and 5 mention advantages of groups, these points are generally focused on the individual. Do your points change if looked at from the perspective of an ordinary community?

Also, I notice you mention political advocacy in a comment. That's hard for an individual, but maybe a group has a better chance. If so, do you have any thoughts on what to advocate for?

Sina Tootoonian:

Interesting post Ben. One question immediately jumped out at me when reading your first point: how do we identify people who are ahead of the curve in such times of flux? COVID was arguably more predictable because the underlying epidemiology was understood. AGI and the dynamics leading up to it are much less understood, so identifying signal in that noise seems much more difficult.

Benjamin Todd:

That's a great question. Maybe at some point I'll create a list of resources.

Just quickly, there are people who have been prescient historically whom I follow (e.g. Carl Shulman, Gwern, the people who figured out the scaling laws).

There are also people who seem (to me) to have a good grasp of what's happening and be plugged in e.g. Dwarkesh, Leopold, Nathan Labenz (Cognitive Revolution Podcast).

Epoch AI is also very useful.
