9 Comments
DS

Hey Ben! There is a little edit button that snuck in: "EditSituationReasoningExampleLegal"

Benjamin Todd

Thanks!

Swen Werner

Any decision to act or not act that could cause direct or indirect negative consequences for somebody represents a theoretical liability. AI cannot be liable; humans or legal entities are. And since AIs have no mechanism to verify whether they made a mistake, I don't think AI can simply be made to run anything. That would be a big legal risk. They also have no agency and no persistent memory, so we can't extrapolate from their being able to do one thing to their being able to do everything. There are some limits for now. Just a thought.

Economics Help - T.Pettinger

Thanks. Some useful points.

Anthony West

Hey, great article. I was thinking along similar lines the other day. Have you heard of Jevons Paradox? This is what I found...

In the 19th century, economist William Stanley Jevons observed that as steam engines became more fuel efficient (using less coal for the same amount of work), total coal consumption increased rather than decreased. Improved efficiency lowered costs, making a wider range of applications viable, thus driving higher coal demand.

“It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.”

— William Stanley Jevons, The Coal Question (1865)

A contemporary example comes from the commoditisation of compute with cloud computing. The widespread availability and lower cost of resources have driven a significant increase in total compute consumption and more technology jobs - albeit with a different mix of roles.

The same phenomenon is likely to occur with AI-driven software development. At present, long and complex software deliveries limit demand. Improved productivity is likely to unlock pent-up demand, resulting in more software.

Benjamin Todd

I agree this is a factor - it's basically the point about elastic demand that I cover as the third factor in the framework.

Heather Baker

I agree but I would add:

creativity

empathy

resilience

the ability to learn and adapt quickly

Benjamin Todd

In the full article, I expand personal effectiveness into productivity, social skills, and learning how to learn, which overlaps a lot with these.

Creativity seems complicated since AI is very good at idea generation. I preferred to highlight the taste / discernment aspect, since that's often what humans end up focusing on when AI gets applied.

You could make a similar comment about empathy (AIs get rated as more empathetic than human doctors, and are also very popular for therapy), though I agree it's an important component of social skills as a whole.

Nathan Young

Another really good article from you.
