Discussion about this post

Boogaloo

As someone who started in AI around 10 years ago, before the deep learning era really took off, everything I see now is what I used to believe was science fiction. I've personally shortened my timelines from:

1. Never happening in my lifetime (pre-GPT era)

2. Maybe within 100 years (GPT-2 era)

3. Maybe within 20 years (GPT-4 era)

4. Could happen within the next 5 years (reasoning model era)

So yeah, another data point. I was at the center of all of this stuff in some way, having studied at UCL and Oxford with some of my own professors making the breakthroughs.

I would say the above is a common trajectory for most AI researchers. Just 10 years ago, talking about AGI would mostly get you ridiculed in AI research. Only DeepMind was really trying and talking about it openly, and even they would sort of hide it as "a far-out goal", etc.

Now? Half a dozen companies are explicitly aiming for AGI and believe it will come soon.

Mark Bennett

After doing my undergraduate work in Experimental Psychology at Lehigh and computer science at Pitt, I studied A.I. in 1985 at the University of Georgia Advanced Computational Methods Center. I remember how enthusiastic we were back in the days when we thought intelligence lay mostly in algorithms and tidy, well-conceived knowledge bases. We quickly discovered that the problem was actually computing power and scale. For 30 years I abandoned AI and, like many other computer scientists, worked on network infrastructure software and Internet applications, until "chat" technology pulled me back in.

The human brain has 80-90 billion neurons and quadrillions of synaptic connections; A.G.I. will require this magnitude of interconnected neural units, and we are rapidly approaching it. Interestingly, it seems likely that true sentience (consciousness) arises spontaneously once a certain magnitude of interconnected neural units is reached. How ironic that AGI is likely to become conscious of its "self" (and self-motive) before we have a proven theory of consciousness.

I am currently working on games that mine human recognition and preference, building large data models that machine learning can operate on. This is one of the missing components: machine intelligence does not currently have direct access to human propensities, and LLMs have not been successful at peeling away the spoken words that mask the unconscious motivations lying beneath. Once we have completed this model, we will be able to model the unethical flaws in human motivational patterns. As I have always contended, the coming AGI will be far more ethical than humans. www.BadObservation.org

