My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people aimed at integrating it into our workflows. There have been some impressive wins, with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by the number of tickets closed, I guess?)

The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it’s obvious how Google search results are spam, how spam songs and videos are being churned out, etc. But even bad results from AI that have to be discarded are, IMO, spam.

And that isn’t even getting into the massive amounts of theft involved in training these models, or the immense amounts of electricity it takes to train and run all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists, and people like me, a programmer.

I’m literally being told at my job that I should view myself as basically an AI babysitter, and that AI has unambiguously proven itself in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only remaining fault or flaw is my (i.e. any given SE’s) unwillingness to adapt and get on board.

Looking for advice from people who have had to navigate similar crap. Because I feel like I’m at a point where I must adapt or eventually get fired.

  • Feyd@programming.dev · 3 days ago

    > AI tools will improve and in the near future

    There isn’t a good reason to believe they’ll be as good as you’re saying.

    • Helix 🧬@feddit.org · 3 days ago

      Yeah, I think we’ll hit the model collapse issue soon. As more and more of the dead internet is generated by AI, the work of figuring out what is real and what is a hallucination will inevitably fail, and the LLM Ouroboros will end up eating its own tail.
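      To make that concrete, here’s a toy sketch of the feedback loop (purely my own illustration, not how any real training pipeline works): fit a distribution to some data, sample from the fit, train the next “generation” only on those samples, and repeat. In this toy, the spread tends to shrink generation after generation as the tails get lost:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # "Real" human data: a wide Gaussian the first model is fitted to.
      data = rng.normal(loc=0.0, scale=10.0, size=50)

      for generation in range(1, 31):
          # "Train": fit a Gaussian to whatever data is available.
          mu, sigma = data.mean(), data.std()
          # "Scrape": the next generation sees only the previous model's
          # output, i.e. samples drawn from the fitted distribution.
          data = rng.normal(loc=mu, scale=sigma, size=50)
          if generation % 5 == 0:
              print(f"gen {generation:2d}: std={sigma:6.2f}")
      ```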

    • DigitalDilemma@lemmy.ml · 3 days ago

      You sure?

      Every iteration of the major models is better and faster, with more context, and the pace of improvement is itself accelerating. They’re already relied upon to write code for production systems in thousands of companies. Today’s reality is already as good as I’m saying. Tomorrow’s will be better.

      Give it, what, ten or twenty years, and the thought of a human being writing computer code will seem anachronistic.

      • Feyd@programming.dev · 3 days ago

        The major thing holding LLMs back is that they don’t actually understand or reason. They purely predict in the dimension of text. That is a fundamental aspect of the technology, and it isn’t going to change. To be as good as you’re saying would require a different technology.
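        To see what “purely predicting text” means in miniature, here’s a toy bigram model (my own illustration, vastly simpler than an LLM, but the same kind of object): it produces plausible-looking text from nothing but counts of which word followed which, with no understanding anywhere in it.

        ```python
        from collections import Counter, defaultdict
        import random

        corpus = "the cat sat on the mat and the cat ate the rat".split()

        # Pure surface statistics: count which word follows which word.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def next_word(prev):
            candidates = follows[prev]
            if not candidates:  # dead end: nothing ever followed this word
                return random.choice(corpus)
            words = list(candidates)
            return random.choices(words, weights=[candidates[w] for w in words])[0]

        word, output = "the", ["the"]
        for _ in range(8):
            word = next_word(word)
            output.append(word)
        print(" ".join(output))  # plausible word salad, zero model of cats or mats
        ```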

        Also, a lot of what you see people say they’re doing today is strongly exaggerated…

        • DigitalDilemma@lemmy.ml · 3 days ago

          I think it’s… not wise to underplay LLMs and AI, or to predict limits on their growth. Five years ago we couldn’t have predicted the impact they’re having on many roles today. In another five years it will be different again.