My company is strongly pushing AI. There are lots of experiments, demos, and effort from decently smart people toward integrating it into our workflows. There have been some impressive victories, with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by # of tickets being done, I guess?)
The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it’s obvious how Google search results are spam, how spam songs and videos are being produced, etc. But even bad results from AI that have to be discarded are, IMO, spam.
And that isn’t even getting into the massive amounts of theft involved in training the models, or the immense amounts of electricity it takes to train, serve inference on, and otherwise run all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists and people like myself, a programmer.
I’m literally being told at my job that I should basically view myself as an AI babysitter, and that AI has been unambiguously proven in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only fault or flaw is my (i.e. any given SE’s) unwillingness to adapt and onboard.
Looking for advice from people who have had to navigate similar crap. Because I feel like I’m at a point where I must adapt or eventually get fired.

I am also encouraged to use AI at work and also hate it. I agree with your points. I just had to learn to live with it. I’ve realized that I’m not going to make it go away. All I can do is recognize its limited strengths and significant weaknesses and only use it for limited tasks where it shines. I still avoid using it as much as possible. I also think “improved productivity” is a myth but fortunately that’s not a metric I have to worry about.
My rules for myself, in case they help:
[Edit: punctuation]
I agree with all your points. The problem is that quality checking AI outputs is something only a few will do. The other day my son did a search with ChatGPT. He was doing an analysis of his competitors within a 20 km radius of home. He took all the results for granted as true. Then I looked at the list and found that many of the business names looked strange. When I asked for links to their websites, I found that some were in different countries. My son said “you can’t trust this”. When I pointed it out to ChatGPT, the damn thing replied “oh, I’m sorry, I got it wrong”. Then you realise that these AI things are not accountable. So quality checking is fundamental. The accountability will always sit with the user. I’d like to see the day when managers take accountability for AI crap. That won’t happen, so our jobs are secure for now.
Which tasks do you use it for?
For my purposes I find it good for summarizing existing documents and categorizing information. It is also good at reformatting stuff I write for different comprehension levels. I never let it compose anything itself. If I use it to summarize web data, and I rarely do, I make it provide the URLs of all sources so I can double-check validity of the data.
Sounds good. It can also write corporate emails well. I just write the insults and harsh truths I’d like to throw at my conversation partners, and the LLM tones them down into bland corpo speak.
Stop thinking, start knowing: https://fortune.com/2025/07/20/ai-hampers-productivity-software-developers-productivity-study/
Thanks! I skimmed this and have it in my reading list for later. I wonder how this pans out across disciplines other than software development. I would imagine there’s a huge diversity of skills out there that would affect how well people can craft prompts and interpret responses.