• CheeseNoodle@lemmy.world · 3 days ago

    The scary part is that even humans don’t really have a proper escape mechanism for this kind of misinformation. Sure, we can spot AI a lot of the time, but there are also situations where we can’t, and that leaves us trusting only the people we already knew before AI and growing more and more distrustful of information in general.

    • theangryseal@lemmy.world · 2 days ago

      Holy shit, this.

      I’m constantly worried that what I’m seeing/hearing is fake. It’s going to get harder and harder to find older information on the internet too.

      Shit, it’s crept outside of the internet, actually. Family buys my kids books for Christmas and birthdays, and I’m checking to make sure they aren’t AI garbage before I ever let them look at them, because someone already bought them an AI book without realizing it.

      I don’t really understand what we hope to get from all of this. I mean, not really. Maybe it’ll get to a point where it can truly be trusted, but I just don’t see how.

      • Flagstaff@programming.dev · 2 days ago

        I don’t really understand what we hope to get from all of this.

        Well, even among the most moral devs, the garbage output wasn’t intended, and no one could have predicted the pace at which it’s been developing. So all this is driving a real need for in-person communities and regular contact—which is at least one great result, I think.