“garbage account”
AI is trained on the Internet. Look at the bullshit on the Internet. AI will take some random schmoe’s bullshit opinion and present it as hard fact.
That, and it just re-introduced the problem of people reading search results without ever visiting the source websites. Last time, sites ended up burying answers down the page so the answer wouldn’t show up in the search preview. Making everything shittier. What kind of response is there going to be to AI summaries? Everything will undoubtedly get even shittier as sites try to get people to visit instead of just reading the AI summary. Hello, even more obfuscation. We’re taking the greatest method of spreading information around the globe ever devised and just absolutely filling it to the brim with bullshit.
This is only the beginning. Soon there will be LLMs trained on other LLMs’ garbage. And those LLMs will also post and write crap on the Internet. The true pinnacle of shite posting.
oh yeah it does, i see google summarizing, it’s just a mash-up of different blog posts presented as truths, and that isn’t a source. it’s basically asking the LLM for its opinion.
I hate the fact that thanks to chatgpt, every twerp out there thinks em-dashes are an automatic sign of something being written by ai…
As a writer and an em-dash enjoyer, hell with that!
I was never pedantic enough to get a real em dash, instead of just a regular dash
Is that Brian “Please don’t call me Brian “Brian Kibler” Kibler” Kibler?
The technology is way too resource-intensive for the benefit it gives. By resources, I mean environmental and technological. Have you seen the prices of DDR5 RAM? Microsoft is actually working to bring TMI 1 back online. TMI = Three Mile Island, as in a full-sized nuclear reactor that has been retired from service since 2019. The only reason they are not bringing TMI 2 back online is because IT F$%KING MELTED DOWN IN 1979.
Add to that, Micron exited the consumer market to provide memory to the AI market only… What the actual F#$k?
Now the bubble has formed, and the people that shoved tens of billions into it are trying to fill that bubble by any means necessary. Which means the entire population of this country is constantly bombarded by it for purposes it is ill-suited to.
When, not if, this bubble pops, it’s going to be a wild ride.
At some point, we should legislate that all non-production tech businesses have to be energy positive, as in: wanna build a data center? It’s got to have more solar/wind etc. than it uses, or it’s unpermittable.
I don’t hate AI. I just hate the untrustworthy rich fucks who are forcing it down everyone’s throats.
I’ve seen it successfully perform exactly one task without causing more harm or creating liability for the people using it:
Misinformation campaigns.
And that’s exactly how the AI companies are using it to grow exponentially, lying about both its costs and its capabilities.
It’s weird that this is somehow an unpopular opinion these days but I don’t like being lied to.
I’ve been hearing the claim occasionally for the last several years now that we’ve moved into the ‘post-truth’ age. AI has kind of cemented that for me.
I hate AI because it’s replacing jobs (a.k.a. salaries) without us having a social safety net to make it painless.
We’ve replaced you with ai
-CEO
AI is replacing most of the jobs, and there aren’t enough open positions to be filled by the now unemployed.
-Economists
I need food stamps, medical care, housing assistance, and unemployment.
-Me
No! Get a job you lazy welfare queen!
-Politicians
Where? There aren’t any.
-Me
Not my problem! Now, excuse me while I funnel more money to my donors.
-The same politicians
The good news is, while automation like robot arms is continuing to replace humans, the AI aspect of it has been catastrophic and early adopters are often seen expressing remorse and reverting changes.
I don’t hate AI, I hate it being forced down everyone’s throat, and I don’t trust the companies running it to keep the data they collect safe and private.
I don’t hate it, I hate how companies are forcing it in regardless of how stupid it is for the task.
As always, you don’t hate X, you hate capitalism. Replace X with almost anything.
HURR DURR HEY GUYS LETS DO SOMETHING VAGUE ABOUT CAPITALISM
Reading comprehension is hard, I know.
what the fuck is this stereotype
It’s probably from a redditor who’s likely white and male. Y’know, self-deprecating humor is pretty common among redditors, just like it is here.
Well, it’s from a blue-check twitter user, which is much, much worse.
This isn’t a Twitter screenshot, it’s a Facebook one. Note the globe icon, that means a public post on Facebook. He’s also a niche microcelebrity, so the verification kinda makes sense?
It seems he stopped posting on Twitter after 2024, having 70k posts total - so he must’ve quit cold turkey. No blue check on his Twitter profile.
There’s also this tweet from him in 2017. I do not think this man is a nazi, or even nazi-adjacent.
I never said anything about nazis.
I only know blue check marks from twitter. There’s really not much difference between the two though.
…Why is a blue check mark on Twitter bad if not for the fact that paying for it supports a nazi platform? I’m not sure I follow your logic.
The Meta one isn’t paid, it’s just something you’re given if they can verify you’re who you say you are.
Mark Zuckerberg isn’t that much different from Elon Musk as far as politics go.
If you want to be verified, you should just have a personal website that you publicly direct people to. Using someone else’s social media website as your primary soap box has always been madness.
So you would click accept on my self-signed https website? Want some land in Florida?
just feels wrong
like it’s making stereotypes feel normal and creating xenophobia or something
when done humorously, it’s fine, but here it just seems serious
But it is done humorously, is that not an obvious joke?
Maybe not the funniest joke ever, but definitely not something someone’s saying seriously.
Welcome to Lemmy.
I don’t hate AI (specifically LLMs and image diffusion thingy) as a technology. I don’t hate people who use AI (most of the time).
I do hate almost every part of AI business, though. Most of the AI stuff is hyped by the most useless “luminaries” of the tech sector who know a good profitable grift when they see one. They have zero regard for the legal and social and environmental implications of their work. They don’t give a damn about the problems they are causing.
And that’s the great tragedy, really: it’s a whole lot of interesting technology with a lot of great potential applications, and the industry is getting run into the ground by idiots chasing an economic bubble that’s going to end disastrously. It’s going to end up with a tech cycle kind of similar to nuclear power: a few prominent disasters, a whole lot of public resentment and backlash, and it’ll take decades until we can start having sensible conversations about it again. If only we’d had a little bit of moderation to begin with!
The only upside AI business has had was that at least it has pretended to give a damn about open source and open access to data, but at this point it’s painfully obvious that to AI companies this is just a smoke screen to avoid getting sued over copyright concerns - they’d lock up everything as proprietary trade secrets if they could have their way.
As a software developer, I was at first super excited about genAI stuff because it obviously cut down the time needed to consult references. Now, a lot of tech bosses tell coders to use AI tools even in cases where it makes everyone less productive.
As an artist and a writer I find it incredibly sad that genAI didn’t hit the brakes a few years ago. I’ve been saying this for decades: I love a good computerised bullshit generator. Algorithmically generated nonsense is interesting. It’s a great source of inspiration for your ossified brain cells, fertile ground for improvement. Now, however, the AI-generated stuff pretends to be as human-like as possible, and it’s doing a terrible job at it. Tech bros are half-assedly marketing it as a “tool” for artists, while the studio bosses who buy the tech chuckle at that, knowing they’ve found a replacement for the artists. (Want to make genAI tools for artists? Keep the output patently unusable out of the box.)
The value in LLMs is in the training and the data quality… so it is easy to publish the code and charge for access to the data (DaaS).
I’m hopeful that when the bubble pops it’ll be more like the dot com crash, which is to say that the fallout is mostly of the economic variety rather than the superfund variety. Sure, that’ll still suck in the short term. But it will ideally lead to the big players and VC firms backing away and leaving behind an oversupply of infrastructure and talent that can be soaked up at fire sale prices by the smaller, more responsible companies that are willing to stick out the downturn and do the unglamorous work of developing this technology into something that’s actually sustainable and beneficial to society.
That’s my naive hope. I do recognize that there’s an unfortunately high probability that things won’t go that way.
Fuck Reddit and Fuck Spez.
I don’t hate AI. However, I:
- Am concerned about the labor displacement it may cause–though I am skeptical it will be as widespread as currently feared. I think many of the companies that have cut workers already will end up regretting it in the medium term.
- Am convinced that the massive, circular investment in this technology has produced an economic bubble that will burst in the coming years. Because we have so little insight into private credit markets, we don’t know to what degree retail and commercial banks will be exposed, and thus can’t anticipate the potential damage to the broader economy.
- Am fatigued (but unsurprised) that the US government is not considering thoughtful regulation that anticipates the disruption that AI is likely to cause.
- Am cognizant of its current limitations.
- Do not currently believe that AGI is imminent or even guaranteed. I think elites peddling this notion may be captured by highly motivated reasoning. In some cases, it seems like a bit of a belief system.
Anything the billionaire cabal pushes on us I automatically hate. Don’t even need to know what it is. If they are pushing it you know there is some nefarious shit under the hood.
No one has convinced me how it is good for the general public. It seems like it will benefit corpos and governments, to the detriment of the general public.
It’s just another thing they’ll use to fuck over the average person.
It COULD help the average person, but we’ll always fuck it up before it gets to that point.
You could build an app that teaches. Pick the curriculum, pick the tests, pick the training material for the users, and use the LLM to intermediate between your courseware and the end users.
LLMs are generally very good at explaining specific questions they have a lot of training data on, and they’re pretty good at dumbing things down when necessary.
Imagine an open-source, free college course where everyone gets as much time as they need and aren’t embarrassed to ask whatever questions come to their minds in the middle of the lesson. Imagine more advanced students in a class not being held back because some slower students didn’t understand a reading assignment. It wouldn’t be hard to out-teach an average community college class.
But free college that doesn’t need a shit ton of tax money? Who profits off that? We can’t possibly make that.
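Honestly, the intermediation part of that app is simple enough to sketch. Here’s a minimal, hypothetical version assuming the OpenAI Python client; the lesson text, prompt wording, and model name are placeholders I made up, not anyone’s real product:

```python
# tutor_loop.py - the LLM only explains YOUR courseware; it never sets the curriculum.
from openai import OpenAI  # pip install openai

# In a real app this would be loaded from your curated course material.
LESSON = """
Lesson 3: Ohm's law. V = I * R.
Worked example: 9 V across a 3 ohm resistor draws I = 9 / 3 = 3 A.
"""

SYSTEM_PROMPT = (
    "You are a patient tutor. Answer ONLY from the lesson text provided. "
    "If a question falls outside the lesson, say so and point the student "
    "back to the material instead of guessing."
)

def tutor_session() -> None:
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    history = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Here is the current lesson:\n{LESSON}"},
    ]
    while True:
        question = input("student> ").strip()
        if not question:
            break
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=history,
        )
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(answer)

if __name__ == "__main__":
    tutor_session()
```

The curriculum, the tests, and the grading all stay in plain code and content you control; the model is just the conversational layer on top.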
How about a code tool that doesn’t try to write your code for you, but watches over what you’re doing and points out possible problems? What if you strapped it onto a compiler and got warnings that you have dangerous vectors left open, or notes where buffer overflows aren’t checked?
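The review-only version of that tool is also easy to wire up, at least as a toy. A rough sketch, assuming gcc on the PATH and the OpenAI Python client (the prompt and model name are again just placeholders):

```python
# review_only.py - run the compiler, then ask an LLM to comment on the
# diagnostics and the source WITHOUT writing any replacement code.
import subprocess
import sys

from openai import OpenAI  # pip install openai

def compile_with_warnings(path: str) -> str:
    """Compile a C file with aggressive warnings and return the diagnostics."""
    result = subprocess.run(
        ["gcc", "-Wall", "-Wextra", "-fanalyzer", "-c", path, "-o", "/dev/null"],
        capture_output=True,
        text=True,
    )
    return result.stderr

def review(path: str) -> str:
    source = open(path).read()
    diagnostics = compile_with_warnings(path)
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are a code reviewer. Point out unchecked buffer "
                           "sizes, unvalidated input, and other risky spots. "
                           "Do NOT write or rewrite any code.",
            },
            {
                "role": "user",
                "content": f"Compiler diagnostics:\n{diagnostics}\n\nSource:\n{source}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review(sys.argv[1]))
```

Keeping the model in an advisory role like this is a product decision, not a technical limitation.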
Reading medical images is a pretty decisive win. The machine going back behind the doctor and pointing out what it sees based on the history of thousands of patient images is not bad. Worst case the doctors get a little less good at doing it unassisted, but the machines don’t get tired and usually don’t have bad days.
The problem is capitalism. You can’t have anything good for free because it’s worth money. And we’ve put ALL the money into the tech and investors will DEMAND returns.
> Imagine an open-source, free college course where everyone gets as much time as they need and aren’t embarrassed to ask whatever questions come to their minds in the middle of the lesson.
My impression of the average student today is that they lack so much curiosity (in part because of YouTube-Shorts-induced ADHD, in part because ChatGPT just answers all of their homework questions for them with no effort at all) that a course like this would be functionally useless.
This is not an issue of capitalism, detestable as it is: young people are using AI to offload the mental burden of learning. Removing money incentives doesn’t fix this.
It’s been a problem for a long time before AI.
I’ll say there are two things I have actually found ChatGPT useful for: helping me flesh out NPCs in the tabletop RPG campaign I’m running, and diagnosing tech problems. That’s it. I’ve tried having it program, make professional documents, and search things for me; all of it sucks compared to just doing it myself. Definitely not worth pouring a significant chunk of the global GDP into.
It’s more like the opposite. There’s not much evidence of it saving money or increasing productivity for companies to the extent that it covers the cost of running it, whereas for the general population it can be helpful with stuff like writing assistance. But I bet most people use it like I do, which is for entertainment. ChatGPT has 800 million weekly users - people clearly are getting some value from it.
Of those 800 million, how many are paying? That number could easily be inflated by people doing things that have no real value to them. I also don’t know how many of those users need professional help, whether for severe social anxiety or because they find intimacy in a chatbot.
Like, you’re right there has to be some value to it but I just can’t see trillions of USD in value.
Entertainment value - not monetary. I don’t pay for an AI because it makes me money. I do it because I enjoy using it.
Entertainment value is value. My dog has value to me but is nothing but a monetary cost; I enjoy having him so much that I’ll pay that cost, because he is that valuable to me.
Someone downloading and using an app isn’t indicative of that app having much value to the end user.