I was actually thinking the same thing when I wrote it, and I still don’t think we’re even remotely close to discussing AGI outside of pure science fiction. LLMs have made us appear deceptively close; they can spit out sentences that look like stuff people write, but we haven’t moved even marginally closer to true comprehension, which would be required for actual AGI.
I was about to respond with pretty much the top half of what you said. But I think an early sign of AGI is how we start splitting hairs about what “counts.” And the list of things we were “supposed” to always be better at keeps changing with each new advance.
In ten years I don’t think we will have clear, unquestionable Artificial General Intelligence, but I do think there will be people trying to explain that yes, the model can act and respond exactly as a human would in the same circumstances, but it’s not really thinking or feeling anything. I certainly don’t think the AI we’re playing with in 10 years will be based primarily on text prediction, but there are still so many different routes being explored in this field that it sure doesn’t feel like a real plateau yet. Maybe I’ll change my mind if GPT-5 is only marginally more capable than GPT-4.