28 Comments

Another interesting take on this topic: https://thezvi.substack.com/p/ai-practical-advice-for-the-worried

An extra idea:

I think the arguments for looking after your health could be a bit stronger, since a tech explosion could bring life extension tech much earlier. It's probably easier to prevent damage than to reverse it, so you might get "stuck" in the condition you were in when the acceleration started.

Likewise, you could argue it's more important not to die in an accident, since you'd be giving up a 1,000-year lifespan rather than a business-as-usual one. Signing up for cryonics could also make more sense, because we might be much closer to getting it to work.

I feel like this is a truism, or an improper update? Like if we knew it was 20 years until AGI and 30 years until ASI, people would argue: You should prioritize health in order to make sure you make it to ASI.

Short timelines seem to me to indicate that you should mostly ignore health problems that would only affect you 20+ years from now, and should prioritize health interventions that keep you alive until then. This is pretty different from generally being healthy; it's a much narrower target.

I suppose some of my thinking here is that I expect uploading will be a thing and will basically make health problems irrelevant, such that "It's probably easier to prevent damage than to reverse it" doesn't apply because uploading just reverses problems.

I'm mostly in the frame of looking for actions that are good whether timelines are short, long, or AGI never arrives. I agree that the more confident you are in short timelines, the more you should focus on short-term survival / productivity, and the more you can neglect long-term health.

(e.g. work out 20 minutes a day because it gives you more energy, but not an hour a day, which is probably only better for long-term health)

Sure, that’s a fine frame, but it’s distinct from wording this as an update that follows from short timelines. It's also worth remembering that “the optimal amount of health investment is not infinite”: whether you should be doing more or less health investment depends on what you're currently doing.

Thanks for writing this! Much more useful than simply stating the world is going to change radically in the near future. If you don't mind the question (sorry if it's too direct), do you think doing a PhD is a reasonable way of getting, say, US citizenship for someone who is still an undergrad? Is there some better alternative?

I haven't looked into the best ways to get US citizenship. Grad school sounds like a reasonable option (if you're sure you can't get sponsored for a visa right away), though I'm not sure it's worth doing grad school *just* to get citizenship. PhDs look less attractive if AI might come soon, so you'd want to weigh a US PhD against your best alternatives (e.g. getting a job right away).

This post explores why savings might remain relevant post-AGI (also see the upvoted comments for some pushback):

https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi

It's possible I should have put standing up for your political rights on the list. Political advocacy has a tiny chance of having an impact, though the expected value could still be high: https://80000hours.org/articles/is-voting-important/ That said, it's mostly about the social impact rather than personal benefit.

In the post, I maybe put too much emphasis on financial capital. It's probably more important to know the right people, or even have fame, which along with political power could still be useful after the transition. That's harder to act on, but maybe (i) do cool stuff, (ii) take jobs that let you meet influential people, and (iii) follow up on connections you've made.

Thanks for writing this! Glad to see that others are thinking and writing about this topic. One thought: I would suggest prioritizing life goals, like having children, now, while we still have stability and normalcy in the world. Who knows where the world will be in a few years, and whether it will still be a hospitable place for life goals that are totally attainable right now.

Another suggestion: if you rely on a salary, don't take out a mortgage unless you're willing to default. It may still be worth it if the monthly payments are below what you would pay in rent, but a house won't be a store of wealth once salaries fall, unemployment rises, and banks become very unwilling to lend over long time horizons.

As an Australian, I'm surprised at the mention of Australia as a good place to be. Why would that be? What would make it better than e.g. China?

It's not so easy to get Chinese citizenship... And it also seems most likely that the US leads.

I agree I'd rank Australia behind the UK due to less military power, no UN Security Council seat or G7 membership, fewer AI experts, and less involvement in AI policy. Like the UK, it doesn't have any big tech companies or AI data centres. However, it's still a close US ally, so I'd expect it to get cut into the key deals. It also has a lot of land and natural resources (roughly 30x the UK's), which will be valuable.

Mainly, all the other rich countries seem well behind the US; I'm not sure there are huge differences among the rest.

Thanks for the post! Although points 1 and 5 mention advantages of groups, these points are generally focused on the individual. Do your points change if looked at from the perspective of an ordinary community?

Also, in a comment I notice you mention political advocacy. That's hard for an individual, but maybe you have a better chance as a group. Do you have any thoughts on what to advocate for?

Thanks for the advice, Ben! Here is a bet I am open to making with anyone who thinks there is a high chance of an intelligence explosion soon. If, by the end of 2028, Metaculus' question about superintelligent AI (https://www.metaculus.com/questions/9062/time-from-weak-agi-to-superintelligence/):

- Resolves with a date: I transfer you $10k (in today's dollars).

- Does not resolve: you transfer me $10k (in today's dollars).

- Resolves ambiguously: nothing happens.

The resolution date of the bet can be adjusted to whatever works better for you. I think the bet above would be neutral for you in terms of purchasing power if your median date for superintelligent AI, as defined by Metaculus, were the end of 2028.
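
For what it's worth, the expected-value arithmetic behind the "neutral" claim is easy to check. A quick sketch (the $10k stake is from the bet above; the probability is whatever you assign to the question resolving by the end of 2028):

```python
# EV of accepting the bet, in today's dollars. Stake is from the bet above;
# the probability of the Metaculus question resolving by end of 2028 is
# your own estimate. Ambiguous resolution voids the bet, so it's ignored.
def bet_ev(p_resolves_by_2028, stake=10_000):
    return p_resolves_by_2028 * stake - (1 - p_resolves_by_2028) * stake

for p in (0.3, 0.5, 0.7):
    print(f"P(resolves by end of 2028) = {p:.0%}: EV = ${bet_ev(p):+,.0f}")
```

At p = 0.5 (i.e. your median date is the end of 2028) the expected value is zero, which matches the "neutral" framing.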

Interesting post Ben. One question immediately jumped out at me when reading your first point: how do we identify people who are ahead of the curve in such times of flux? COVID was arguably more predictable because the underlying epidemiology was understood. AGI and the dynamics leading up to it are much less understood, so identifying signal in that noise seems much more difficult.

That's a great question. Maybe at some point I'll create a list of resources.

Just quickly, there are people who have been prescient historically whom I follow (e.g. Carl Shulman, Gwern, the people who figured out the scaling laws).

There are also people who seem (to me) to have a good grasp of what's happening and to be plugged in, e.g. Dwarkesh, Leopold, Nathan Labenz (Cognitive Revolution podcast).

Epoch AI is also very useful.

"More controversially, if it’s not a big cost to you, it could make sense to delay having children by 3 years, and have them later when uncertainty is reduced."

Why does uncertainty point towards having children later?

Reasons for later:

* You can decide against having children if, in a world with AGI, it would be better never to have been born.

* You want to focus on working on AI Safety now.

* Might be able to have a healthier child once AGI arrives.

Reasons for sooner:

* Having children younger has health benefits for the child.

* You get to spend 3 more years with your children.

* You prefer having a family for 7 years to not having a family at all.

* You believe that early childhood will be better in a pre-AGI world (e.g. similar to how pre-internet childhood had some benefits).

Most of the 'reasons for sooner' are true independently of AI progress, which points towards a shift towards later. But a shift of 3 years towards later seems rather large.

I just don't see how an AGI properly aligned to the human race would be aligned with the wealth I've accumulated via asset purchases.

By the way, I am someone who is grateful to have been born in a country governed by Western laws; I just don't see a properly aligned AGI simply agreeing that said laws are the right ones to govern 8 billion humans.

By "aligned" I just mean it does what its creators and users intend. I don't mean that it becomes a perfectly moral being that decides the future for us.

Aligned AI systems will most likely enact the instructions or values of the people who deployed them, and that will probably include respect for property rights.

I agree with the broader point that the world could change so fast that no one knows what the system will look like on the other side.

If the AGI aligns with its creators, assuming it's a US-based lab backed by the US government, and somehow it allows us to keep our Western values, I think the future will be quite dystopian.

I really liked your post as it's thought provoking. I think we are in agreement that the next few years might get quite weird.

I've been having serious conversations with my wife about how to plan for our near future and we concur with some of your points.

Interesting post. I'm genuinely puzzled by the general equilibrium of the scenario where market wages go to zero. In particular, what would happen to those whose wages go to zero and who have no ownership in the AGIs? Why wouldn't they simply be able to continue with the old prices/wages from before AGI? Any pointers to papers (with serious general equilibrium dynamics) much appreciated.

This paper explores some of the possible models: https://globalprioritiesinstitute.org/wp-content/uploads/Philip-Trammell-and-Anton-Korinek_economic-growth-under-transformative-ai.pdf

I think the most plausible model once we have AGI is one where labour and capital are complementary, and there are basically two economies - one where 'digital workers' are paired with capital (like energy, natural resources, infrastructure) to produce output, and another where human workers are paired with capital to produce output.

As the AI economy becomes more efficient (as AI gets better), capital can earn a higher return there, so it gets reallocated to it. And as the number of AI workers grows (~10x per year), that economy can absorb more capital.

Eventually it's always better to allocate your capital to the AIs rather than to human workers, since they can use it to produce so much more output.

At that point, no-one is willing to pay for human labour.

In reality, there could still be some niche applications where people employ humans, but there's no guarantee these would command high wages.

This model only applies once AI workers can substitute pretty well for human labour. Right now, AI is a complement to human labour (it makes us more productive), so wages rise.
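
To make that mechanism concrete, here's a minimal numerical sketch (my own toy Cobb-Douglas setup with made-up parameters, not the model from the paper linked above): two sectors share a fixed capital stock, capital moves until its return is equalised, and as the AI workforce and AI productivity grow, the human sector retains less and less capital, so the human wage collapses.

```python
# Toy two-sector sketch (illustrative parameters only, not calibrated).
ALPHA = 0.6           # labour share in Cobb-Douglas production
K_TOTAL = 1000.0      # fixed total capital stock
L_HUMAN = 100.0       # human workforce (constant)
A_HUMAN = 1.0         # human-sector productivity (constant)

n_ai, a_ai = 1.0, 0.5  # AI workforce and AI productivity, both growing

for year in range(8):
    # Equalising the marginal product of capital across the two sectors
    # gives a closed form for the capital the human sector retains:
    #   K_h = a * K / (a + b), with a = A_h^(1/alpha) * L_h, b = A_ai^(1/alpha) * N_ai
    a = A_HUMAN ** (1 / ALPHA) * L_HUMAN
    b = a_ai ** (1 / ALPHA) * n_ai
    k_human = a * K_TOTAL / (a + b)

    # Human wage = marginal product of human labour in the human sector
    wage = ALPHA * A_HUMAN * (k_human / L_HUMAN) ** (1 - ALPHA)
    print(f"year {year}: AI workers {n_ai:>12,.0f}  human-sector capital {k_human:7.1f}  wage {wage:.3f}")

    n_ai *= 10    # ~10x more AI workers per year (as above)
    a_ai *= 1.5   # AI also gets more productive
```

The wage falls towards zero not because humans get worse at anything, but because almost all the capital eventually earns a better return alongside the AI workers.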

Thanks.

The explanation of wage dynamics (also in the Korinek paper) sounds reasonable to me and aligns well with how I understand standard labor-augmenting technical change models. However, I think there is a gap between showing that wages could theoretically approach zero and concluding that this necessarily leads to widespread economic hardship.

The "scary scenario" of significant population segments being left behind economically requires additional assumptions beyond just the AI-capital complementarity you describe. Factors include:

1. The distribution of capital ownership in the economy

2. The behavior of returns to capital

3. The speed of transition and adaptation mechanisms

My intuition is that in a scenario where digital workers become highly complementary with capital, we should expect to see rising real interest rates. If this holds true, then anyone with even modest savings would benefit from these higher returns, potentially offsetting lost wage income. What would be useful is a more complete general equilibrium model that explicitly incorporates these distributional aspects and shows under exactly what conditions economic hardship for many occurs.
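
As a back-of-the-envelope illustration of that intuition (all numbers hypothetical): whether higher returns offset the lost wages depends almost entirely on how much capital someone starts with, which is why the distribution of ownership is doing so much of the work here.

```python
# Hypothetical numbers: wage income disappears, real returns rise sharply.
wage_before = 50_000             # annual wage income, pre-transition
r_before, r_after = 0.04, 0.30   # assumed real returns before / after

for savings in (10_000, 100_000, 1_000_000):
    income_before = wage_before + r_before * savings
    income_after = r_after * savings            # wages go to zero
    print(f"savings ${savings:>9,}: income before ${income_before:>9,.0f}/yr, after ${income_after:>9,.0f}/yr")
```

Under these made-up numbers, someone with $1m of savings comes out ahead; someone with $10k does not, even at very high interest rates.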

(Personal note: I do not want to appear too critical here – this is more curiosity than criticism. I deeply appreciate your work, which was instrumental in my decision to pursue an economics PhD.)

Thanks! I think I largely agree – I don't think most people end up in poverty in these scenarios, so if you just want to avoid downside risk, then only a modest amount of savings could be enough. My advice is about fulfilling your values more broadly, not just avoiding risk. If you have some preferences that don't get satiated, additional capital could help you satisfy those, and so still make a noticeable difference with respect to your values.

There's some more discussion here: https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi

I'm also keen to see more actual modelling of these scenarios.

I don't understand why an AGI far smarter than us, one that controls the means of production and military capabilities, would respect the social constructs we invented out of thin air, such as "money" and "citizenship".

I think as an individual you should focus on the scenarios where you can make a difference to the outcomes.

If there's a misaligned AI that doesn't respect property rights etc. then we're probably fucked whatever you do now.

Likewise, if there's a post-scarcity utopia, then we also don't need to worry.

The scenarios to prepare for are the ones where there's aligned AI and explosive growth, but still a range of outcomes for where you end up.
