My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people about integrating it into our workflows. There have been some impressive wins, with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by # of tickets being done, I guess?)

The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it’s obvious how Google search results are spam, how spam songs and videos are being produced, etc. But even bad results from AI that have to be discarded are, IMO, spam.

And that isn’t even getting into the massive theft of data used for training, or the immense amounts of electricity it takes to do training and inference and to run all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists and people like myself, a programmer.

I’m literally being told at my job that I should view myself basically as an AI babysitter, and that AI has been unambiguously proven in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only fault and flaw is my (i.e. any given SE’s) unwillingness to adapt and onboard.

Looking for advice from people who have had to navigate similar crap. Because I feel like I’m at a point where I must adapt or eventually get fired.

  • Caveman@lemmy.world · 14 hours ago

    AI is pretty bad at most things you do that are actually valuable, so your critique definitely holds. It’s bad for the environment, drives tech consolidation, and all round creates about as many problems as it claims to solve.

    AI in the sense of neural networks is really good at narrow tasks like playing chess and detecting melanomas, but I’m going to give some tips specifically for LLMs.

    Treat it as a dumb intern. You ask it to find research papers but you have to read them yourself to actually assess them. You can use it to draft an email but you still have to proofread it. You can use it to write code but expect bugs and unhandled edge cases.

    I’m a software developer and I use an LLM to create code generators and internal tooling, e.g. a thing that takes a JSON file and outputs SQL insert statements, or to look up docs. The AI has not increased my productivity per se, but the tooling I created with it has.
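
    The JSON-to-SQL part, for instance, is tiny. Something along these lines (a rough Python sketch; the file name, table, and naive quoting are made up for illustration, not my actual tool):

        # Rough sketch of a JSON -> SQL insert generator (all names hypothetical).
        import json

        def inserts_from_json(path: str, table: str) -> list[str]:
            """Turn a JSON file containing a list of flat objects into INSERT statements."""
            with open(path) as f:
                rows = json.load(f)  # e.g. [{"id": 1, "name": "widget"}, ...]
            statements = []
            for row in rows:
                columns = ", ".join(row.keys())
                values = ", ".join(
                    "NULL" if v is None
                    else str(v) if isinstance(v, (int, float))
                    else "'" + str(v).replace("'", "''") + "'"  # naive quoting, fine for internal tooling
                    for v in row.values()
                )
                statements.append(f"INSERT INTO {table} ({columns}) VALUES ({values});")
            return statements

        if __name__ == "__main__":
            print("\n".join(inserts_from_json("products.json", "products")))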

    Another use case is asking for critique: you paste in a code block and ask it to review performance, for example, and it can spot the “use a hash map there” cases pretty easily.
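
    The “use a hash map there” case usually looks something like this (a toy Python example, not real code from my codebase):

        # Toy example of the kind of thing an LLM code review catches easily.

        # Before: a linear scan through users for every order -> O(n * m).
        def enrich_orders_slow(orders: list[dict], users: list[dict]) -> list[dict]:
            for order in orders:
                for user in users:  # linear scan per order
                    if user["id"] == order["user_id"]:
                        order["user_name"] = user["name"]
                        break
            return orders

        # After: build a dict (hash map) once, then do O(1) lookups per order.
        def enrich_orders(orders: list[dict], users: list[dict]) -> list[dict]:
            users_by_id = {user["id"]: user for user in users}
            for order in orders:
                user = users_by_id.get(order["user_id"])
                if user is not None:
                    order["user_name"] = user["name"]
            return orders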

    That’s my 2 cents on the topic.

  • gwl@lemmy.blahaj.zone · 18 hours ago

    Just don’t.

    Don’t change your morals just because of peer pressure, especially not corporate pressure.

  • PeriodicallyPedantic@lemmy.ca · 19 hours ago

    Do we work at the same place? Lol

    I totally get you, despite some pedantic disagreement about what “spam” means. I’m in the same boat a bit.
    I’m embarrassed to admit that I’ve kinda leaned into it in the sense that “if they’re gonna make us use it, then I want to be in a position to steer the direction”. While I discourage its use and preach to people about its evils, I also try to improve the AI tools and processes that we’re using. That probably makes me part of the problem, but it relieves a bit of the pressure of how shit it is day to day.

    I’m actually kinda bearish on AI.
    Or rather I think that either it’s a bubble that will pop, or before too long it’s gonna cause a global depression. Maybe a bit of paranoia and doomerism.

  • MojoMcJojo@lemmy.world · 1 day ago

    My company does annual reviews. You have to write your own review, then they will read it over and then sit down to talk to you about it.

    Last year, I just had ChatGPT write it for me based on all of my past conversations with it. Turned it in. The first question they asked me was, ‘Did you use AI to write this?’ Without hesitation, I said absolutely. They loved it so much, they had me show everyone else how to do it and made them redo theirs. I couldn’t frikin believe it. Everyone is still pissed they have to use ChatGPT this year, but the bosses love that corporate hogwash so much.

    They’re about to receive a stack of AI-generated drivel so bad that I bet they have everyone go back to handwriting them.

  • LiveLM@lemmy.zip · 21 hours ago

    I’m literally being told at my job that I should view myself basically as an AI babysitter

    Feel you 100%.
    I dunno why, but my entire career everyone has talked like doing IT is simply a stepping stone to becoming a manager. So stupid. Like god forbid you’re not the lEaDeRsHiP type.
    And now with the rise of “Agentic IDEs” it’s even fucking worse. I don’t want to be managing people, let alone herding a pack of blind cats, er, autonomous agents.

    Unfortunately the only solution is to stop caring. Yes, really.
    I know it hurts producing sub-par garbage when you know you’re capable of much more, but unfortunately there’s no other way.
    If upper management doesn’t care about delivering quality products to their consumers anymore, you shouldn’t either. You’ll stress and burn yourself out, meanwhile those responsible won’t lose a wink of sleep over it.
    Do exactly what they want. Slop it all. Fuck it. Save your energy for what really matters.

    That or start looking for another job, but you might struggle to find one that isn’t doing the same shit.

  • rImITywR@lemmy.world · 2 days ago

    Ask ChatGPT “How do I unionize my workplace to protect my job against AI obsessed management?”

    • GnuLinuxDude@lemmy.mlOP · 2 days ago

      The slop being copied back and forth is actually what they want. At the recent all-hands they basically said this without exaggeration. Quality and correctness were demoted to secondary importance.

      • ashughes@feddit.uk · 2 days ago

        This actually made something click for me: why I haven’t been able to find work for 3 years in software QA. It’s not that AI came for my job or that it replaced me. At some point people stopped caring about quality so the assurance became moot.

  • Helix 🧬@feddit.org · 2 days ago

    Try to distance yourself from the quality of your work.

    Produce AI slop like your overlords fetishise, then have a mouse jiggler wiggle the cursor and an AI answer your Teams messages.

  • Brokkr@lemmy.world · 2 days ago

    AI is a tool, just like a hammer. You could use a rock, but that doesn’t give you the leverage that a hammer does.

    AI is also a machine, it can get you to your destination faster, like a car or train.

    Evil people have used hammers, cars, and trains to do evil and horrible things. These things can also be used for useless stupid things, like advertising.

    But they can also be used for good, like an ambulance or to transport food. They also make us more efficient and can be used to save resources and effort. It depends on who uses it and how they use it.

    You can’t control how other people may misuse these things, but you can control how much you know, how you use it, and what you use it for.

    • gwl@lemmy.blahaj.zone · 17 hours ago

      AI is a tool more like a gun: specifically designed for a terrible purpose, used for a terrible purpose, and with no way to use it for good except by deconstructing it and finding a new use for it.

    • thatsTheCatch@lemmy.nz · 2 days ago

      One aspect where that analogy doesn’t work: hammers and cars weren’t built on the mass theft of intellectual property, they aren’t being leveraged to put people out of jobs, and they aren’t the driving force behind building insane numbers of data centres that increase power bills for locals and ravage their water supply.

      It’s not necessarily the pure usage of AI that I don’t like, as much as what has been and is being used to create it.

      Cars have their own problems of course, though they cause more issues through their direct use than through what went into building them.

      I read a different comment where someone said something like, “Even if human meat were the healthiest, least environmentally damaging, and cheapest food, I still wouldn’t eat it.” In this case AI doesn’t really offer those benefits anyway.

      • Cowbee [he/they]@lemmy.ml · 2 days ago

        IP itself needs to be abolished, so that part isn’t as important. Further, cars did put people out of jobs that used to draw horse carriages and maintain them. The original commenter is correct with their analysis.

      • Brokkr@lemmy.world · 2 days ago

        Regarding the idea of IP theft, I think it’s more complicated.

        Let’s say you buy a book, learn from it, and use that knowledge to your benefit. We don’t think of this as stealing, even though you didn’t buy the knowledge, you only bought the text.

        If you borrow a book, likewise the knowledge is yours, not just while you’ve borrowed the book. Even if you pirate or steal the book (please don’t), the knowledge is still yours regardless.

        So I don’t think there should be an issue of an AI learning freely from content.

        However, I would agree that when it reproduces a work without attribution, it becomes a problem. The problem we as a society have, even when AI is not involved, is defining where that line is, because there are cases where some kinds of reproduction are OK (parody, homage, etc.). We as a society do not have clear rules for these things because it is hard to define those rules. That’s our problem, not a problem with AI. Using AI just makes it too easy for someone to cross that line, and therefore I find it risky to use it for the production of that kind of material.

        But I do not think it’s an issue for me to ask the AI how to do some unusual thing in the terminal or to refactor a part of my code to work a little differently. There is no harm in asking it to create a GUI version of the cli program that I made.

        As for putting people out of jobs, we may be at an inflection point in our productivity curve. Historically, these have caused short-term job loss but have ultimately led to improvements once we’ve had time to adjust and people have learned how to leverage the new tools effectively. Likely, it will create more jobs in new areas.

        Humans will always use more resources, especially energy. Until the last few decades the source of that energy wasn’t concerning. Now it is, and we need to find ways to produce more energy cleanly. Arguing for less energy use is never going to work. We will always use more energy.

        • group_hug@sh.itjust.works · 18 hours ago

          Tech bros are mostly transhumanists. When interviewed by the New York Times, Peter Thiel couldn’t bring himself to say that the end of the human race would be a bad thing.

          The only thing these dudes care about is total power: living forever and going down in history as gods to the machines that replace us. The rest of the world can burn.

          In Thiel’s own words from the NYT: trans is only bad when you half-ass it, like transgender, because that is only part trans. When you go full trans, like transhuman, that is actually good.

          These dudes have no humanity. That’s why they can be so cruel. And they are the ones that control these tools.

        • Helix 🧬@feddit.org · 2 days ago

          Likely, it will create more jobs in new areas.

          Yeah, actually fixing the mess the AIs created. I don’t look forward to it.

  • LordCrom@lemmy.world · 2 days ago

    I remind my boss that giving AI full access to our codebase and access to environments, including prod, is the exact plot of the Silicon Valley episode where Gilfoyle gave Son of Anton access. His AI deleted the codebase after being asked to clean up the bugs… deleting the entire codebase was the most efficient way of doing that.

  • anime_ted@lemmy.world · 2 days ago

    I am also encouraged to use AI at work and also hate it. I agree with your points. I just had to learn to live with it. I’ve realized that I’m not going to make it go away. All I can do is recognize its limited strengths and significant weaknesses and only use it for limited tasks where it shines. I still avoid using it as much as possible. I also think “improved productivity” is a myth but fortunately that’s not a metric I have to worry about.

    My rules for myself, in case they help:

    • Use it as a tool only for appropriate tasks.
    • Learn its strengths and use it for those things and nothing else. You have to keep thinking and exploring and researching for yourself. Don’t let it “think” for you. It’s easy to let it make you a lazy thinker.
    • Quality check everything it gives you. It will often get things flat wrong and you will have to spend time correcting it.
    • Take lots of deep breaths.

    [Edit: punctuation]

    • trilobite@lemmy.ml · 2 days ago

      I agree with all your points. The problem is that quality checking AI outputs is something that only a few will do. The other day my son did a search with ChatGPT. He was doing an analysis of his competitors within a 20 km radius of home. He took all the results as granted and true. Then I looked at the list and found that many business names looked strange. When I asked for the links to their websites, I found that some were in different countries. My son said, “you can’t trust this”. When I pointed it out to ChatGPT, the damn thing replied, “oh, I’m sorry, I got it wrong”. Then you realise that these AI things are not accountable. So quality checking is fundamental; the accountability will always sit with the user. I’d like to see the day when managers take accountability for AI crap. That won’t happen, so jobs are secure for now.

      • anime_ted@lemmy.world · 10 hours ago

        For my purposes I find it good for summarizing existing documents and categorizing information. It is also good at reformatting stuff I write for different comprehension levels. I never let it compose anything itself. If I use it to summarize web data, and I rarely do, I make it provide the URLs of all sources so I can double-check validity of the data.

        • Helix 🧬@feddit.org · 10 hours ago

          Sounds good. It can also write corporate emails well. I just write the insults and harsh truths I’d like to throw at my conversation partners, and the LLM tones them down into bland corpo speak.

      • anime_ted@lemmy.world · 10 hours ago

        Thanks! I skimmed this and have it in my reading list for later. I wonder how this pans out across disciplines other than software development. I would imagine there’s a huge diversity of skills out there that would affect how well people can craft prompts and interpret responses.

  • Thisiswritteningerman@midwest.social · 2 days ago

    If you don’t mind me asking, what do you do, and what kind of AI? Maybe it’s the autism, but I find LLMs a bit limited and useless; other use cases aren’t nearly as bad. Training image recognition is a legitimately great use of AI and extremely helpful, and it’s already being used for such cases. I just installed a vision system on a few of my manufacturing lines.

    A bottling operation detects cap presence, as well as cross-threads or un-torqued caps, based on how the neck vs. cap-bottom angle and distance looks as each bottle passes the camera. Checking 10,000 bottles a day as they scroll past would be a mind-numbing task for a human.

    The other line makes Fresnel lenses. Operators make the lenses and personally check each one for defects and power. Using a known background and training the AI on what distortion good lenses should create is showing good progress at screening just as well as my operators. In this case it’s doing what the human eye can’t: determining magnification and diffraction visually.
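
    Conceptually the screening step is simple. A minimal sketch of the idea (Python; it assumes a binary classifier trained elsewhere and saved as lens_check.pt, and every file name here is hypothetical, not the vendor system we actually run):

        # Hypothetical sketch: classify one captured frame as pass/reject.
        # Assumes a model trained separately and saved with torch.save(model, "lens_check.pt").
        import torch
        from torchvision import transforms
        from PIL import Image

        preprocess = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

        model = torch.load("lens_check.pt")  # trained elsewhere, loaded once at startup
        model.eval()

        def screen(image_path: str) -> str:
            """Return 'pass' or 'reject' for one frame from the line camera."""
            frame = Image.open(image_path).convert("RGB")
            batch = preprocess(frame).unsqueeze(0)  # shape: (1, 3, 224, 224)
            with torch.no_grad():
                logits = model(batch)  # two outputs: [good, defect]
            return "pass" if logits.argmax(dim=1).item() == 0 else "reject"

        print(screen("frame_0001.png"))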

    • GnuLinuxDude@lemmy.mlOP · 2 days ago

      The AI in this case is, for all intents and purposes, using Copilot to write all the code. It is basically being promoted as the first resort, rather than a supplement.

      • Thisiswritteningerman@midwest.social · 2 days ago

        I don’t know enough about Copilot, as work has made it optional and mostly for accessibility-related tasks: digging through the mass of Microsoft files spread across Teams, Outlook, and OneDrive to find and summarize topics; recording meeting notes (not that they’re overly helpful compared to human-taken notes, due to a lack of context); and normalizing data, since every Power BI report is formatted however its owner saw fit.

        Given its ability to make ridiculous errors confidently, I don’t suppose it has the memory to be used more like a toddler helper? Small, frequent tasks that are pretty hard to fuck up, and then, once it can reliably do those through repetition and guidance on what counts as a passing result, tying more of them together?

  • paequ2@lemmy.today · 2 days ago

    AI tooling producing some things fast

    This isn’t necessarily a good thing. Yeah, maybe AI wrote a new microservice and generated 100s of new files and 1000s of lines of new code… but… there’s a big assumption there that you actually needed 100s of new files and 1000s of lines of new code. What it tends to generate is tech debt. That’s also ignoring the benefits of your workforce upskilling by learning more about the system, where things are, how they’re pieced together, why they’re like that, etc.

    AI just adds tech debt in a blackbox. It’s gonna lower velocity in the long term.

    • DigitalDilemma@lemmy.ml · 2 days ago

      I know I’m not reading the room here, but you mentioned “long term” and I think that’s an important term.

      AI tools will improve and in the near future, I’m pretty confident, they’ll be good enough that one of the things they can do is solve the tech debt their previous generations caused.

      “Hey, ChatGPT 8.0, go fix the fucking mess ChatGPT 5.0 created”… and it will do it. It will understand security and reliability and all the context it needs, and it will work and be good. There is no reason why it won’t.

      That doesn’t help us if things break before that point, of course, so let’s keep a copy of the code that we knew worked okay.

      • Helix 🧬@feddit.org · 2 days ago

        It will understand

        Hey ChatGPT, show me you don’t know what LLMs do without telling me.

        LLMs are basically autocorrect on steroids. Deterministic algorithms get cobbled onto them in the background via glue code, so every time you ask a math question the LLM just forwards it to something like Wolfram Alpha and spits out the result.
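
        That glue-code pattern is roughly this (a toy Python illustration; the math-detection rule and the call_llm stub are made up):

            # Toy illustration of the glue-code pattern: route arithmetic to a
            # deterministic evaluator instead of letting the model guess.
            import ast
            import operator

            OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
                   ast.Mult: operator.mul, ast.Div: operator.truediv}

            def evaluate(expr: str) -> float:
                """Evaluate a plain arithmetic expression like '2 * (3 + 4)' deterministically."""
                def walk(node):
                    if isinstance(node, ast.Expression):
                        return walk(node.body)
                    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                        return node.value
                    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                        return OPS[type(node.op)](walk(node.left), walk(node.right))
                    raise ValueError("not plain arithmetic")
                return walk(ast.parse(expr, mode="eval"))

            def call_llm(prompt: str) -> str:
                # Stand-in for the actual model call so the sketch runs on its own.
                return f"(model-generated reply to: {prompt!r})"

            def answer(user_message: str) -> str:
                try:
                    return str(evaluate(user_message))  # deterministic path for math
                except (ValueError, SyntaxError):
                    return call_llm(user_message)       # everything else goes to the model

            print(answer("2 * (3 + 4)"))  # -> 14, computed, not predicted
            print(answer("Write me a limerick about spreadsheets"))  # falls back to the model stub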

        LLMs don’t “understand” things; it’s just pattern matching and autocomplete on steroids. There’s no thinking involved here, however much the AI companies add “thinking…” to their output.

        • DigitalDilemma@lemmy.ml · 2 days ago

          That’s a fair point about defining them as LLMs.

          But it’s wrong to assume those algorithms don’t change. They do change, they improve with each iteration, and they will keep getting harder to distinguish from real intelligence over time. (Clarke’s quote about “sufficiently advanced technology being indistinguishable from magic” springs to mind.)

          As for my point - writing good code is exactly the sort of task that LLMs will be good at. They’re just not always there /yet/: their context histories are short, their references are still small (in comparison), and they’re slow compared to what they will be. I’m an old coder and I’ve known many others; some define their code as art, and there is some truth in that. Art is of course something any AI will struggle with, but code doesn’t need to be artistic to work well.

          There’s also the possibility there will be a real milestone and true AI will emerge. That’s a scary thought and we’ve no way of telling if that’s close or far away.

          • Helix 🧬@feddit.org · 1 day ago

            That’s a fair point about defining them as LLMs.

            But it’s wrong to assume those algorithms don’t change.

            Sure, but the current LLMs have inherent flaws in the concept of them being, well, supercharged autocorrect.

            It’s impressive that we can basically brute force language concepts and distill knowledge into a model. To really advance AI you’d have to come up with a different class of algorithms than deep learning and LLMs. You’d probably need to combine this with adversarial networks, algorithmic (deterministic!) decisions and so on.

            A teacher once told me “a computer is only as intelligent as the people programming it” and that sentence holds true even 30 years later.

            LLMs are already “true” AI in the sense that they’re a subclass of models produced by a subclass of machine learning algorithms. I’d argue that there will be many different kinds of AI cobbled together into a more potent chatbot or agentic system.

            And code definitely needs to be artistic to work well in some cases. You need to really understand the subject matter to write proper tests, for example. There will always be an issue of man-machine interfaces.

            You’re dead right that they’ll be able to produce better code than the average software dev. The skill floor to work as a dev will be raised.

            These LLMs can take your job as a software dev. They can already translate instructions into code. But wait! They only work when the user knows what they want. I think your job is safe after all.

            There’s a difference between programming and software development, after all.

            • DigitalDilemma@lemmy.ml · 1 day ago

              All good points and well argued. Thank you.

              There’s a difference between programming and software development, after all.

              Yes, absolutely, but only because we’re the customers.

              The art in software design (imo) comes from understanding the problem and creating a clever, efficient and cost-effective solution that is durable and secure. (This hardly ever happens in practice, which is why we’re constantly rewriting stuff.) This is good and useful, and in this case Art is Good. The artist has ascended to seeing the whole problem from the beginning and a short path from A to B, not just starting to code and seeing where it goes, as so many of us do.

              A human programmer writing “artistic code” is often someone showing off by doing something in an unusual or clever way. In that case, I think boring, non-artistic code is better since it’s easier to maintain. Once smarty-pants has gone elsewhere, someone else has to pick up their “art” and try to figure it out. In this case, Art is Bad. Boring is Good. LLMs are good at boring.

              So the customer thing - by that I mean, we set the targets. We tell coders (AI or human) what we want, so it’s us that judge what’s good and if it meets our spec. The difficulty for the coders is not so much writing the code, but understanding the target, and that barrier is one that’s mostly our fault. We struggle to tell other humans what we want, let alone machines, which is why development meetings can go on for hours and a lot of time is wasted showing progress for approval. Once the computers are defining the targets, they’ll be fixing them before we’re even aware. This means a change from the LLM prompt -> answer methodology, and a number of guardrails being removed, but that’s going to happen sometime.

              At the moment it’s all new and we’re watching changes carefully. But we’ll tire of doing that and get complacent, after all we’re only human. Our focus is limited and we’re sometimes lazy. We’ll relax those guardrails. We’ll get AIs to tell other AIs what to do to save ourselves even the work of prompting. We’ll let them work in our codebase without checking every line. It’ll go wrong, probably spectacularly. But we won’t stop using it.

              • Helix 🧬@feddit.org · 20 hours ago

                Good points as well. I agree with most, but not this one: that AI writes good code because it’s boring.

                There’s fancy code which is too artful to maintain and artful code which is easy and beautiful and good to maintain. Artful code doesn’t have to be fancy and hard to read. Artful code can be boring and stupidly simple.

                LLMs tend to take stuff a skilled programmer can write in 10 lines and write it in 50 instead. Think of it unrolling loops into sequential statements [++var; ++var; ++var… instead of while(++var)] or turning case statements and nested ifs into if… if… if… chains.
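
                A toy illustration of that contrast (Python, made up purely to show the shape of the two styles):

                    # Hypothetical example of the two styles described above.

                    # The "unrolled" style LLMs often produce: one branch per case.
                    def weekday_name_verbose(day: int) -> str:
                        if day == 0:
                            return "Monday"
                        if day == 1:
                            return "Tuesday"
                        if day == 2:
                            return "Wednesday"
                        if day == 3:
                            return "Thursday"
                        if day == 4:
                            return "Friday"
                        return "Weekend"

                    # The compact version: same behaviour, a fraction of the lines.
                    WEEKDAYS = ("Monday", "Tuesday", "Wednesday", "Thursday", "Friday")

                    def weekday_name(day: int) -> str:
                        return WEEKDAYS[day] if 0 <= day < len(WEEKDAYS) else "Weekend"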

                Sure, such code works, but it’s hard to maintain and the alternative is more beautiful, less lines of code, easier to read and to understand. That’s what artful code is to me.

                Most code in companies tends to be less than optimal. Most companies employ mostly workers who aren’t skillful. If you compare regular business code with super clean code of Open Source programming frameworks (e.g. Spring), you tend to hit your head against the wall.

                LLM code is way harder to maintain than human code, even worse than lifeless, artless, “boring” business code. I doubt it’ll get better because it copies shit code from the average and less-than-average programmers doing a busy-ness.

                I mean, you could easily throw lots and lots of already solved and documented problems against an LLM and they’ll be better than humans, because they’re essentially autocorrect with context from stackoverflow and interview question books.

                Over time, LLMs will get better input data and produce better output, which will lead to better code and better code quality. You still need to know how to prompt and it still won’t solve any new problems you encounter, only problems others encountered and solved thousands of times.

                In that regard, the shit programmers in companies usually churn out can and will be replaced with LLM generated output, which, on average, is better than the median business programmer. I’ll give you that. I guess it will make bad programmers less obvious and harmful, which might be good. Or bad, if your company only employs prompt monkeys and not a single sane developer.

                it’s us that judge what’s good and if it meets our spec.

                I’d argue that most people in companies can’t even judge what’s good and meets the specs 🤓

          • Helix 🧬@feddit.org · 1 day ago

            That’s a scary thought and we’ve no way of telling if that’s close or far away.

            AI is always 5 years away, no matter the year.

            • DigitalDilemma@lemmy.ml · 1 day ago

              I still think it’s going to be discovered by some guy working at home one evening.

              The first most of us will know about it is when the sky goes dark.

              (I’ve possibly read too much scifi)

      • Feyd@programming.dev · 2 days ago

        AI tools will improve and in the near future

        There isn’t a good reason to believe they’ll be as good as you’re saying.

        • Helix 🧬@feddit.org · 2 days ago

          Yeah, I think we’ll get the model collapse issue soon. As most of the dead internet is generated by AI, the amount of work done to try to figure out what is real and what is a hallucination will inevitably fail and lead to the LLM Ouroboros eating its own tail.

        • DigitalDilemma@lemmy.ml · 2 days ago

          You sure?

          Every iteration of the major models is better, faster, with more context. They’re getting better at a faster speed. They’re already relied upon to write code for production systems in thousands of companies. Today’s reality is already as good as I’m saying. Tomorrow’s will be better.

          Give it, what, ten or twenty years and the thought of a human being writing computer code will be anachronistic.

          • Feyd@programming.dev · 2 days ago

            The major thing holding LLMs back is that they don’t actually understand or reason. They purely predict in the dimension of text. That is a fundamental aspect of the technology that isn’t going to change. To be as good as you’re saying requires a different technology.

            Also, a lot of what you see people say they’re doing today is strongly exaggerated…

            • DigitalDilemma@lemmy.ml · 2 days ago

              I think it’s… not wise to underplay or predict the growth of LLMs and AI. Five years ago we couldn’t have predicted their impact on many roles today. In another five years it will be different again.