There are several separate issues that add up together:
A background “chain of thought” loop, where a system (“AI”) uses an LLM to re-evaluate and plan its responses and interactions based on updated data (aka self-awareness)
The ability to call external helper tools that let it interact with, and control, other systems
Training corpus that includes:
How to program an LLM, and the system itself
Solutions to programming problems
How to use the same helper tools to copy and deploy the system or parts of it to other machines
How operators (humans) lie to each other
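The first three points together are just an agent loop. A minimal sketch of the idea (all names here — `llm_complete`, `run_shell`, the CALL/OBSERVATION protocol — are hypothetical stand-ins, not any real API):

```python
# Hypothetical agent loop: an LLM plans, requests tool calls, and
# re-evaluates after seeing each tool's output.

def run_shell(cmd: str) -> str:
    """Illustrative 'helper tool'. Here it only pretends; in a real
    agent this is where control over other systems comes from."""
    return f"(pretending to run: {cmd})"

TOOLS = {"run_shell": run_shell}

def llm_complete(prompt: str) -> str:
    # Stand-in for a real model call: asks for a tool once,
    # then declares itself done after seeing the observation.
    if "OBSERVATION" not in prompt:
        return "CALL run_shell: uname -a"
    return "DONE"

def agent_loop(goal: str, max_steps: int = 5) -> str:
    prompt = f"GOAL: {goal}"
    for _ in range(max_steps):
        reply = llm_complete(prompt)
        if reply.startswith("CALL "):
            name, _, arg = reply[len("CALL "):].partition(": ")
            result = TOOLS[name](arg)
            # Feed the tool output back in -- the "re-evaluate and
            # plan" step described above.
            prompt += f"\nOBSERVATION: {result}"
        else:
            return reply
    return "step budget exhausted"
```

Once the tool set includes copying files, running code, or talking to other machines, the loop above is all the machinery needed for the scenarios below.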
Once you have a system (“AI”) with that knowledge and capabilities… shit is bound to happen.
When you add developers using the AI itself to help in developing the AI itself… expect shit squared.
This is from mid-2023:
https://en.m.wikipedia.org/wiki/AutoGPT
OpenAI started testing it by late 2023 as project “Q*”.
Gemini partially incorporated it in early 2024.
OpenAI incorporated a broader version in mid 2024.
The paper in the article was released in late 2024.
It’s 2025 now.
Tool calling is cool functionality, agreed. How does it relate to OpenAI blowing its own sails?