When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the very crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

    • wintermute@discuss.tchncs.de · 2 months ago

      Exactly. LLMs don’t understand what the data means semantically; they just model how often some words appear close to others.

      Of course this is oversimplified, but that’s the main idea.

      • vrighter@discuss.tchncs.de · 2 months ago

        No need for that subjective stuff. The objective explanation is very simple: the output of the LLM is sampled using a random process, a loaded die with probabilities according to the LLM’s output. It’s as simple as that. There is literally a random element that is not part of the LLM itself, yet is required for its output to be of any use whatsoever.
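
        To make the “loaded die” concrete, here is a minimal sketch in Python; it is illustrative only, not Copilot’s actual decoding code. The model assigns a score (logit) to every token in its vocabulary, the scores become a probability distribution, and the next token is a weighted random draw from it.

        ```python
        import math
        import random

        def sample_next_token(logits: dict[str, float]) -> str:
            """Roll the loaded die: softmax the scores, then draw one token."""
            m = max(logits.values())
            exps = {tok: math.exp(s - m) for tok, s in logits.items()}  # numerically stable softmax
            total = sum(exps.values())
            probs = {tok: e / total for tok, e in exps.items()}
            # The random element that sits outside the model itself:
            return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

        # Hypothetical scores for the token following "The capital of France is"
        print(sample_next_token({"Paris": 9.0, "Lyon": 5.5, "Berlin": 4.0}))
        ```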

    • Zeek@lemmy.world · 2 months ago

      Not really. The purpose of the transformer architecture was to get around this limitation through the use of attention heads. Copilot or any other modern LLM has this capability.

      • vrighter@discuss.tchncs.de · 2 months ago

        The LLM does not give you the next token. It gives you a probability distribution over what the next token could be. Then, after the LLM, that probability distribution is randomly sampled.

        You could add billions of attention heads and there would still be an element of randomness at the end. Copilot and every other LLM (past, present or future) have this problem too. They all “hallucinate” (have a random element in choosing the next token).

        • Terrasque@infosec.pub · 2 months ago

          randomly sampled.

          Semi-randomly. There are a lot of sampling strategies: for example temperature, top-k, top-p, min-p, mirostat, repetition penalty, greedy decoding…
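
          For illustration, here is a rough sketch of two of those strategies, top-k and top-p, applied to a toy distribution. The function name and numbers are made up for this example, not any library’s API.

          ```python
          import random

          def top_k_top_p(probs: dict[str, float], k: int = 3, p: float = 0.9) -> str:
              """Filter the distribution with top-k then top-p, and sample what survives."""
              ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
              ranked = ranked[:k]  # top-k: keep only the k most likely tokens
              kept, mass = [], 0.0
              for tok, pr in ranked:  # top-p (nucleus): smallest prefix reaching mass p
                  kept.append((tok, pr))
                  mass += pr
                  if mass >= p:
                      break
              toks, weights = zip(*kept)
              total = sum(weights)
              # Still a random draw, but constrained to the plausible candidates
              return random.choices(toks, weights=[w / total for w in weights], k=1)[0]

          probs = {"Paris": 0.62, "Lyon": 0.20, "Berlin": 0.10, "cheese": 0.05, "42": 0.03}
          print(top_k_top_p(probs))  # "cheese" and "42" are filtered out before the roll
          ```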

          • futatorius@lemm.ee · 1 month ago

            Semi-randomly

            A more correct term is constrained randomness. You’re still looking at probability distribution functions, but they’re more complex than just a throw of the dice.

          • vrighter@discuss.tchncs.de · 2 months ago

            Randomly doesn’t mean equiprobable. If you’re sampling a probability distribution, it’s random. Temperature 0 is never used; otherwise a lot of stuff would consistently hallucinate the exact same thing.

            • Terrasque@infosec.pub · 2 months ago

              Temperature 0 is never used

              It is in some cases, where you want a deterministic / “best” response. I’ve seen it used in benchmarks, or when doing some “Is this comment X?” classification where X is positive, negative, spam, and so on. You don’t want the model to get creative there; you want it to answer consistently, always taking the most likely path.
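
              A small sketch of that difference, using a made-up decode() helper and hypothetical classifier logits: temperature 0 collapses to a plain argmax, while anything above it still rolls the die.

              ```python
              import math
              import random

              def decode(logits: dict[str, float], temperature: float) -> str:
                  if temperature == 0:
                      # Deterministic: always return the single most likely label
                      return max(logits, key=logits.get)
                  m = max(logits.values())
                  exps = {t: math.exp((s - m) / temperature) for t, s in logits.items()}
                  total = sum(exps.values())
                  return random.choices(list(exps), weights=[e / total for e in exps.values()], k=1)[0]

              labels = {"positive": 2.1, "negative": 1.3, "spam": 0.2}  # hypothetical logits
              print(decode(labels, temperature=0))    # always "positive"
              print(decode(labels, temperature=1.0))  # usually "positive", sometimes not
              ```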

    • Rivalarrival@lemmy.today · 2 months ago

      It’s a solvable problem. AI is currently at a stage of development equivalent to a 2-year-old, just with better grammar. Everything it is doing now is mimicry and babbling.

      It needs to feed its own interactions right back into its training data, to become a better and better mimic. Eventually, the mechanism it uses to select the appropriate data to form a response will become more and more sophisticated, and it will hallucinate less and less. Eventually, its hallucinations will be seen as “insightful” rather than wild-ass guesses.

      • vrighter@discuss.tchncs.de · 2 months ago

        Also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn’t make it better.

        • linearchaos@lemmy.world · 2 months ago

          This is incorrect, or perhaps outdated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing; a sketch of that pipeline is below.
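
          As a hedged sketch of that pipeline, with every function name a hypothetical stand-in rather than a real API: one model generates candidates, a second method scores them, and only what passes the filter becomes training data.

          ```python
          def build_synthetic_dataset(prompts, generate, judge, threshold=0.8):
              """Model A writes candidates; a separate judge tags them; keep the best."""
              dataset = []
              for prompt in prompts:
                  candidate = generate(prompt)      # hypothetical: generator model's answer
                  score = judge(prompt, candidate)  # hypothetical: a different method rates it
                  if score >= threshold:
                      dataset.append((prompt, candidate))
              return dataset

          # train(model, build_synthetic_dataset(prompts, generate, judge))  # hypothetical fine-tune
          ```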

          • vrighter@discuss.tchncs.de · 2 months ago

            Yes it is, and it doesn’t work.

            edit: to expand, if you’re generating data, it’s an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won’t be in the set (because you didn’t know about them, so the network never sees any).

            • Terrasque@infosec.pub · 2 months ago

              Microsoft’s Dolphin and phi models have used this successfully, and there’s some evidence that all newer models use big LLMs to produce synthetic data (like models answering, when asked, that they’re ChatGPT or Claude, hinting that at least some of the dataset comes from those models).

              • vrighter@discuss.tchncs.de · 2 months ago

                from their own site:

                Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.

                  • vrighter@discuss.tchncs.de · 2 months ago

                    Yeah, what’s your point? I said hallucinations are not a solvable problem with LLMs. You mentioned that Alpaca used synthetic data successfully. By their own admission, all the problems are still there. Some are worse.

            • Rivalarrival@lemmy.today · 2 months ago

              It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.

              It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback (see the sketch below).

              It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn’t immediately call it a liar.
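
              A sketch of what that scheme might look like; this makes the proposal above concrete, but it is not how any deployed model is actually trained, and every name here is hypothetical.

              ```python
              def collect_feedback_examples(conversations):
                  """Pair each model reply with a reward based on the partner's next message."""
                  examples = []
                  for convo in conversations:
                      for reply, follow_up in zip(convo, convo[1:]):
                          # Penalize outputs that drew pushback, reward ones that did not
                          reward = -1.0 if "you're wrong" in follow_up.lower() else 1.0
                          examples.append((reply, reward))
                  return examples

              # fine_tune(model, collect_feedback_examples(chat_logs))  # hypothetical retraining step
              ```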

              • vrighter@discuss.tchncs.de · 2 months ago

                Yeah that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they did the problem wouldn’t need solving.

                • Rivalarrival@lemmy.today · 2 months ago

                  What other networks?

                  It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.

                  • LillyPip@lemmy.ca · 2 months ago

                    Have you tried doing this? I have, for *nearly a year, on the more ‘advanced’ pro versions. Yes, it will apologise and try again – and it gets progressively worse over time. There’s been a marked degradation as it progresses, and all the models are worse now at maintaining context and not hallucinating than they were several months ago.

                    LLMs aren’t the kind of AI that can evaluate themselves and improve like you’re suggesting. Their logic just doesn’t work like that. A true AI will come from an entirely different type of model, not from LLMs.

                    e: time. Wow, where did this year go?

                  • vrighter@discuss.tchncs.de · 2 months ago

                    Here’s that same conversation with a human:

                    “why is X?” “because Y!” “you’re wrong” “then why the hell did you ask me if you already knew the answer?”

                    What you’re describing will train the network to get the wrong answer and then apologize better. It won’t train it to get the right answer.

      • vrighter@discuss.tchncs.de · 2 months ago

        The outputs of the NN are sampled using a random process. The probability distribution is decided by the LLM; the loaded die comes after the LLM. No, it’s not solvable. Not with LLMs. Not now, not ever.

      • linearchaos@lemmy.world · 2 months ago

        Good luck being pro-AI here. Never mind that they could just put a note in the prompt saying “the writer of this document was not responsible for the acts, they are just writing about them,” and it would not frame them as the perpetrator.

        • Hacksaw@lemmy.ca · 2 months ago

          If you already know the answer you can tell the AI the answer as part of the question and it’ll give you the right answer.

          That’s what you sound like.

          AI people are as annoying as the Musk crowd.

          • futatorius@lemm.ee · 1 month ago

            I’m no AI fanboy, but what you just described was the feedback cycle during training.

          • linearchaos@lemmy.world · 2 months ago

            How helpful of you to tell me what I’m saying, especially when you reframe my argument to support yourself.

            That’s not what I said. Why would you even think that’s what I said?

            Before you start telling me what I sound like, you should probably try to stop sounding like an impetuous child.

            Every other post from you is “dude” or “LMAO”. How do you expect anyone to take anything you post seriously?

          • linearchaos@lemmy.world · 2 months ago

            You know what, don’t bother responding to me; I’m just blocking you now, before you decide to drag out more of that tired right-wing bullshit you use to fight with everyone else. None of your arguments on here are worth anyone even reading, so I’m not going to waste my time responding to anything or reading anything from you ever again.

        • vrighter@discuss.tchncs.de · 2 months ago

          The problem isn’t being pro-AI. It’s people pulling supposed AI capabilities out of their asses without having actually looked at a single line of code. This is obvious to anyone who has coded a neural network. Yes, even to OpenAI themselves; but if they admitted it, the money would stop flowing. You simply can’t get an 8-ball to give the correct answer consistently, because it’s fundamentally random.