Bill Gates feels generative AI has plateaued, says GPT-5 will not be any better

In an interview with the German newspaper Handelsblatt, the billionaire philanthropist shared his thoughts on artificial general intelligence, climate change, and the future scope of AI.

  • 0ops@lemm.ee · 1 year ago

    My hypothesis is that the “extra juice” is going to be some kind of body: more senses than text input, and more ways to manipulate itself and the environment than text output. Right now, LLMs can kind of understand things in terms of text descriptions, but they’ll never understand them the way a human does until they have all of the senses (and arguably the physical capabilities) that a human has. Thought experiment: presumably you “understand” your dog - but can you describe your dog without sensory details, directly or indirectly? That behavior had to be observed somehow. Time is a sense too. EDIT: Before someone says it, as for feelings I’m not really sure; I’m not a biology guy. But my guess is we sense our own hormones as well.

    • LinuxSBC@lemm.ee · 1 year ago

      First, they do have senses. For example, many LLMs can “see” images. Second, they’re actually pretty good at describing things. What they’re really bad at is analysis and logic, which isn’t related to senses at all.

      • 0ops@lemm.ee · 1 year ago

        I’m not so convinced that logic is completely unrelated to the senses. How did you learn to count, add, and subtract mentally? You used your fingers. I don’t know about you, but even though I don’t count on my fingers anymore, I still tend to “visualize” math operations. Would I be capable of that if I were born blind? Maybe I’d figure out how to do the same thing in a different dimension of awareness, but I have no doubt that being able to conceptualize visually helps my own logic. As for more complicated math, I can’t do that mentally either; I need a calculator and/or scratch paper. Maybe analogues to those could be built into the model? Or maybe someone should just train a model on Khan Academy videos and it would pick this stuff up emergently? I’m not saying the ability to visualize is the only roadblock, though - I’m sure the models themselves could be improved - but I bet it’ll be key to human-like reasoning.