Big fan of your writing on AI Ben, please keep it up.
I guess I am skeptical about definitions that are relative to human ability, whether expert or randomly selected, because human ability varies by orders of magnitude depending on which tools you allow and how much time is given, and that choice seems arbitrary. A human expert with pencil and paper is far more capable in math than one without. Give them Mathematica and their ability grows again by a substantial amount.
This is a good point. Really there are many ways we can compare AI to human ability in a domain:
* Better than a human with no tools
* Better than a human with non-LLM tools
* Better than a human with tools AND access to an LLM ("centaurs")
>Only 70% of AI researchers know what AGI stands for
The tweet you cite is just Leo Gao walking around and asking random people at NeurIPS.
Probably more accurate to say that 30% of NeurIPS attendees are either a) not AI researchers, or b) just messing with Leo.
I’ve edited to “people at the biggest AI conference said”.