The rise of AI and its integration with the attention-driven internet has led to a concerning trend in which feelings matter more than facts. The author, once optimistic about YouTube’s potential for spreading good ideas, now realizes that the internet’s focus on triggering emotions for attention has overshadowed the importance of truth. Manipulation and misinformation thrive because they provoke strong feelings, while reasoned arguments struggle to compete. This shift threatens reliable information sources and highlights the deep-rooted role of emotions in human decision-making. The internet’s relentless pursuit of attention has made truth an optional extra, with credibility frequently losing out to visually appealing but misleading content.

  • DarraignTheSane@lemmy.ml · 1 year ago

    I would say that if a thing can be destroyed by ChatGPT then it probably needed to be destroyed, or at least reworked to meet the times. It’s not a lot different than people saying that calculators would destroy kids’ ability to do math, or that Wikipedia would ruin people’s ability to do research. It’s a tool, with its strengths and limitations, and should be used as such.

      • DarraignTheSane@lemmy.ml · 1 year ago

        True enough, but I’d counter that the person who uses ‘yellow’ as the answer to the square root of 100 is a moron, and the person who accepts it doubly so.

        A better example IMO would be using ChatGPT to write code from pseudocode. Sure, it’ll spit out something for you, but you’ll still need to verify the output against your own knowledge and test it before putting it into production.
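
        For example (just a rough sketch; the pseudocode, the helper name, and the test values are all made up to illustrate the point): hand it pseudocode for a small helper, take whatever it spits out, and then sanity-check it yourself before it goes anywhere near production.

          # Pseudocode given to the model:
          #   given a list of order totals, return the average rounded to
          #   2 decimal places; return 0.0 for an empty list

          def average_order_total(totals: list[float]) -> float:
              """Hypothetical model-generated helper; verify before trusting it."""
              if not totals:
                  return 0.0
              return round(sum(totals) / len(totals), 2)

          # Quick checks you’d run (or turn into proper unit tests) before
          # letting the generated code into production:
          assert average_order_total([]) == 0.0
          assert average_order_total([10.0, 20.0]) == 15.0
          assert average_order_total([1.0, 2.0, 2.0]) == 1.67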

        Another example would be using it to write up proposals, project plans, etc. If a task is so easy that an AI can do it, maybe we need to take a good hard look at whether it needed doing in the first place, or at how we’re going about it.

        It is a tool, albeit a smarter one than we’ve previously had. Like any tool, it can be put to good, evil, or moronic ends.

          • DarraignTheSane@lemmy.ml · 1 year ago

            You’re 100% on the mark. Unfortunately, the next automation revolution will come whether society wants it to or not. As soon as it’s cheaper and just as effective to have machines do the work of humans, corporations will choose to go that route. We’ve seen a few instances of corporations going against this in the short term, but it never works out for us humans in the long run. We know that corporate culture only has eyes for next quarter’s profit, humanity’s future be damned.

            I’m going to drop this here; you may have already seen it and know of CGP Grey on YouTube, but I’m using this video as a frame of reference: Humans Need Not Apply

            That video was made in 2014. We’ve known for years that an automation revolution is going to come along and make a good portion of the workforce obsolete (again); CGP Grey was by no means the first to put forth the idea. And yet we’re doing nothing to prepare for it. If we don’t figure out how to institute some form of UBI… well, it won’t be good.