Hey Ben! There is a little edit button that snuck in: "EditSituationReasoningExampleLegal"
Thanks!
Any decision anyone takes, to act or not, that could then cause direct or indirect negative consequences for somebody represents a theoretical liability. AI cannot be liable; humans or entities are. And since AI has no mechanism to verify whether it made a mistake, I don't think AI can simply be made to run anything. That would be a big legal risk. It also has no agency and no persistent memory, so we can't extrapolate from "it can do one thing" to "it can do everything". There are some limits for now. Just a thought.
Thanks. Some useful points
Hey, great article. I was thinking along similar lines the other day. Have you heard of Jevons Paradox? This is what I found...
In the 19th century, economist William Stanley Jevons observed that as steam engines became more fuel efficient (using less coal for the same amount of work), total coal consumption increased rather than decreased. Improved efficiency lowered costs, making a wider range of applications viable, thus driving higher coal demand.
“It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.”
— William Stanley Jevons, The Coal Question (1865)
A contemporary example comes from the commoditisation of compute with cloud computing. The widespread availability and lower cost of resources have driven a significant increase in total compute consumption and more technology jobs - albeit with a different mix of roles.
The same phenomenon is likely to occur with AI-driven software development. Presently, long and complex software deliveries limit demand. Improved productivity is likely to unlock pent-up demand, resulting in more software.
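The rebound logic can be sketched with a toy constant-elasticity demand model. All numbers, the function name, and the elasticity values below are illustrative assumptions of mine, not figures from the article: the point is just that when demand is elastic enough, an efficiency gain raises total consumption rather than lowering it.

```python
def total_consumption(efficiency_gain, elasticity,
                      baseline_demand=100.0, baseline_resource=1.0):
    """Toy rebound-effect model: demand responds to the lower
    effective cost that an efficiency gain produces."""
    # Cost per task falls in proportion to the efficiency gain.
    relative_cost = 1.0 / efficiency_gain
    # Constant-elasticity demand: cheaper tasks -> more tasks done.
    demand = baseline_demand * relative_cost ** (-elasticity)
    # Each task now consumes less of the resource.
    resource_per_task = baseline_resource / efficiency_gain
    return demand * resource_per_task

# Inelastic demand (elasticity 0.5): doubling efficiency cuts total use.
print(total_consumption(2.0, 0.5))  # ~70.7, down from 100
# Elastic demand (elasticity 1.5): total use *rises* -- the Jevons effect.
print(total_consumption(2.0, 1.5))  # ~141.4, up from 100
```

The crossover is at an elasticity of exactly 1: below it, efficiency reduces total consumption; above it, the unlocked demand outweighs the per-task savings, which is the scenario the comment argues applies to software development.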
I agree this is a factor - it's basically the point about elastic demand I'm covering in the third factor in the framework.
I agree but I would add:
creativity
empathy
resilience
the ability to learn and adapt quickly
In the full article, I expand personal effectiveness into productivity, social skills and learning how to learn, which overlaps a lot with these.
Creativity seems complicated since AI is very good at idea generation. I preferred to highlight the taste / discernment aspect, since that's often what humans end up focusing on when AI gets applied.
You could make a similar comment about empathy (AIs get rated as more empathetic than human doctors, and are also very popular for therapy), though I agree it's an important component of social skills as a whole.
Another really good article from you.