• millie@beehaw.org · 7 months ago

    Granted, I don’t assume that LLMs are currently equivalent to a lesser general AI, but won’t we always be able to say that they’re just generating the next token? What level of complexity of ‘choice’ marks the difference between an LLM and a general AI? Or is that not the criterion?

    Are we talking about some internal record of its specific reasoning? A long-term record that it can access between sessions? Some prescribed degree of autonomy within the systems it’s connected to? Introspection?

    Because to me “find the most reasonable next token for the current context” sounds a lot like how animals work. We make our way through a complex sea of sensory information and stored information to produce our next action, over and over again.
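    For what it’s worth, the loop I mean by “find the most reasonable next token” looks roughly like the sketch below. This is only an illustration: `next_token_distribution` is a made-up stand-in for a real trained model, not an actual library call.

```python
import random

def next_token_distribution(context):
    """Stand-in for a trained LLM: returns a probability for each token in a
    tiny fake vocabulary. A real model would condition on the whole context."""
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return {tok: 1.0 / len(vocab) for tok in vocab}

def generate(prompt, max_new_tokens=8):
    context = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(context)
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        # "Find the most reasonable next token": sample from the distribution,
        # append it, and repeat with the now-longer context.
        context.append(random.choices(tokens, weights=weights)[0])
    return " ".join(context)

print(generate(["the", "cat"]))
```

    Everything interesting happens inside that stand-in function, of course; the outer loop is just “pick the next token, append, repeat,” which is the part that sounds so much like moment-to-moment animal behavior to me.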

    I was watching Dr Kevin Mitchell discuss free will with Adam Conover recently, and a lot of their discussion touched on consciousness as basically the choice-making process itself. It’s worth watching, and I won’t try to summarize it, but it does make me wonder how big of a gap there is between ‘come up with the next token’ and ‘live’.

    It does make me suspect that some iteration of LLMs may form the foundation of a more complex proper AI that’s not just choosing the next token, but has some form of awareness of the process behind it.