Google rolled out AI overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • kbin_space_program@kbin.run
    1 month ago

    Google search isn't a hallucination now though.

    It instead proves that LLMs just reproduce from the model they are supplied with. For example, the “glue on pizza” answer comes from a comment by a Reddit user called FuckSmith roughly 11 years ago.

    • DarkThoughts@fedia.io
      1 month ago

      It instead proves that LLMs just reproduce from the model they are supplied with.

      What do you mean by that? This isn’t some secret but literally how LLMs work. lol What people mean by hallucinating is when LLMs “create” facts that aren’t real. Be it this genius recipe of glue pizza, or any other wild combination of its model’s source material. The whole cooking thing is actually a great analogy, because all the information it’s fed is like the ingredients, and it just spits out various recipes based on those ingredients, without any guarantee that the result is actually edible.

      • kbin_space_program@kbin.run
        1 month ago

        There are a lot of people, including Google itself, claiming that this behaviour is an isolated incident, and basically blaming users for trolling them.

        https://www.bbc.com/news/articles/cd11gzejgz4o

        I was working from the concept of “hallucinations” being things returned that are unrelated to the input query, not things directly part of the model, as with the glue-pizza answer.

          • kbin_space_program@kbin.run
            1 month ago

            A Google spokesperson told the BBC they were “isolated examples”.

            Some of the answers appeared to be based on Reddit comments or articles written by the satirical site The Onion.

            But Google insisted the feature was generally working well.

            “The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” it said in a statement.

            It said it had taken action where “policy violations” were identified and was using them to refine its systems.

            That’s precisely what they are saying.

            • DarkThoughts@fedia.io
              1 month ago

              I’m sorry but reading this as “Google blames users for trolling them” is either pure mental gymnastics or mental illness.

      • richieadler@lemmy.myserv.one
        28 days ago

        This isn’t some secret but literally how LLMs work. lol

        Yeah, but John Q. Public reads AI and thinks HAL 9000 and Skynet, and no additional information will convince them otherwise.

    • Billiam@lemmy.world
      1 month ago

      Without knowing what specific comment it was, I’m going to guess it was about how advertisers make pizza look better in ads than in real life?