I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there’s hype. There are ethical concerns, but we’ll set ethics aside for this question.
In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.
When I see AI ads directed towards individuals the selling point is convenience. But I would feel robbed of the human experience using AI in place of human interaction.
So what’s the point of it all?
Here are some uses:
- Skin cancer diagnosis with LLMs has a high success rate at a low cost. This was starting to exist with older AI models, but LLMs do improve the success rate. source
- VLC recently unveiled a new feature that uses AI to generate subtitles. I haven’t used it, but if it delivers, it’s pretty nice.
- For code generation, I agree it’s more harmful than useful for generating full programs or functions, but I find it quite useful as a predictive text generator; it saves a few keystrokes. Not a game changer, but nice. It’s also pretty useful for generating test data, so long as the data is hard to create but easy (for a human) to validate.
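That last bullet is easy to make concrete. Here’s a minimal sketch of the “hard to create, easy to validate” pattern in Python; the model response is hard-coded as a stand-in, and the record schema and checks are invented for illustration:

```python
import json
import re

def valid_record(rec):
    """Cheap deterministic checks a human could also eyeball quickly."""
    return (
        isinstance(rec.get("name"), str) and rec["name"].strip() != ""
        and isinstance(rec.get("age"), int) and 0 <= rec["age"] <= 120
        and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", rec.get("email", "")) is not None
    )

# Stand-in for an LLM response to a "generate 3 fake users as JSON" prompt.
llm_output = """[
  {"name": "Ada Lovelace", "age": 36, "email": "ada@example.com"},
  {"name": "", "age": 36, "email": "blank-name@example.com"},
  {"name": "Alan Turing", "age": 41, "email": "not-an-email"}
]"""

# Keep only records that pass validation; here, just the first one survives.
usable = [r for r in json.loads(llm_output) if valid_record(r)]
```

The generation step is the tedious part you outsource; the validation step stays cheap, deterministic, and fully under your control.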
Learning languages is a great use case. I’m learning Mandarin right now, and being able to chat with a bot is really great practice for me. Another use case I’ve found handy is using it as a sounding board. The output it produces can stimulate new ideas in my own head, which makes it a good exploration tool that lets me pull on different threads of thought.
“In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.”
I’d actually challenge both of these. The property of “soullessness” is very subjective, and AI art has won blind competitions. On programming, it has empirically made developers about half again as fast, even with the intrinsic requirement for debugging.
It’s good at generating things. There are some things we want to generate. Whether we actually should, like you said, is another issue, and one that doesn’t impact anyone’s bottom line directly.
I’ve written bots that filter things for me, or convert things to machine-readable formats.
The most successful thing I’ve done is a bot that parses a web page, figures out the date/time in a standard format, pulls the location out of the description (if one is listed) and geocodes it, and extracts a few other fields to build an iCal for pretty much any page.
I think the important thing is that gen AI is good at low-risk tasks that reduce but don’t eliminate human effort: turning a pile of data entry into a quick skim for correctness.
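For anyone wondering what the low-risk half of a bot like that looks like: assuming the model has already been prompted to return the event fields as JSON (the schema below is made up for illustration), the glue code that emits the iCal is short and easy to check:

```python
import json
from datetime import datetime

def event_to_ical(fields):
    """Turn extracted event fields into a minimal single-event iCal."""
    start = datetime.fromisoformat(fields["start"])  # prompt asks for ISO 8601
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"DTSTART:{start.strftime('%Y%m%dT%H%M%S')}",
        f"SUMMARY:{fields['title']}",
    ]
    if fields.get("location"):
        lines.append(f"LOCATION:{fields['location']}")
    lines += ["END:VEVENT", "END:VCALENDAR"]
    return "\r\n".join(lines)

# Stand-in for the model's answer to "extract the event from this page".
llm_json = '{"title": "Open mic night", "start": "2025-03-14T19:30:00", "location": "Town Hall"}'
ical = event_to_ical(json.loads(llm_json))
```

The model only does the fuzzy extraction; everything that has to be exactly right lives in ordinary code you can skim for correctness.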
I need help getting started. I’m not an idea person. I can make anything you come up with but I can’t come up with the ideas on my own.
I’ve used it for an outline and then rewritten it with my own input.
Also, I used it to generate a basic UI for a project once. I struggle with the design part of programming so I generated a UI and then drew over the top of the images to make what I wanted.
I tried to use Figma but when you’re staring at a blank canvas it doesn’t feel any better.
I don’t think these things are worth the cost of AI (ethically, financially, socially, environmentally, etc.). Theoretically I could partner with someone who is good at that stuff, or practice until I felt better about it.
I have a friend with numerous mental issues who texts long, barely comprehensible messages to update me on how they’re doing: no paragraphs, stream-of-consciousness style. So I take those walls of text and tell ChatGPT to summarize them for me, and they go from a mess of words into an update I can actually understand and respond to.
Another use for me is getting quick access to answers I’d previously have had to spend way more time finding, reading and filtering across multiple forums and Stack Exchange posts.
Basically they are good at parsing information and reformatting it in a way that works better for me.
I like using it to help get the ball rolling and to organize my thoughts. Then I do the finer tweaking on my own. Basically I work on a sliding scale: once refining the AI’s output starts taking longer for smaller and smaller improvements, that’s when I switch to manual.
I have a very good friend who is brilliant and has slogged away slowly shifting the sometimes-shitty politics of a swing state’s drug and alcohol and youth corrections policies from within. She is amazing, but she has a reading disorder and is a bit neuroatypical. Social niceties and honest emails that don’t piss her bosses or colleagues off are difficult for her. She jumped on ChatGPT to write her emails as soon as it was available, and has never looked back. It’s been a complete game changer for her. She no longer spends hours every week trying to craft emails that strike that just-right balance. She uses that time to do her job now.
I hope it pluralizes ‘email’ like it does ‘traffic’ and not like ‘failure’.
I hate questions like this due to one major issue.
A generative AI with “error-free” output is useful in a very different way than one without it.
Imagine an AI that would answer any question objectively and without bias. Would that threaten jobs? Yeah. Would it be a huge improvement for humankind? Yeah.
Now imagine the same AI with a 10% BS rate: how could you trust anything it tells you?
Currently, generative AI is very, very flawed. That is what we can evaluate, and it is obvious: it is mostly useless, it produces mostly slop, and it consumes far more energy and water than you would expect.
A “better” one would be useful in a different way, but just as killing half of the world’s population would help against climate change, the cost of getting there might not be what we want it to be, and it might not be worth it.
Current market practice, costs and results lead me to say it is effectively useless and probably a net negative for humankind. There is no legitimate usage, as any usage legitimizes the market practice and the costs given the results.
“at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.”
I’ve not experienced this. Debugging for me is always faster than writing something entirely from scratch.
100% agree with this.
It is so much faster for me to give the AI the API/library documentation than it would be for me to figure out how that API works on my own. Is it a perfect, drop-in, finished piece of code? No. But that is not what I ask the AI for. I ask it for a simple example, which I can then take, modify, and rework into my own code.
Making roguelike content: mazes, monsters, etc.
It’s kinda handy if you don’t want to take the time to write a boring email to your insurance or whatever.
I sorta disagree though, based on my experience with LLMs.
The email it generates will need to be read carefully and probably edited to make sure it conveys your point accurately. Especially if it’s related to something as serious as insurance.
If you already have to specifically create the prompt, then scrutinize and edit the output, you might as well have just written the damn email yourself.
It seems only useful for writing slop that doesn’t matter, which only gets consumed by other machines and dutifully logged away in a slop container.
It does sort of solve the ‘blank page problem’ though IMO. It sometimes takes me ages to start something like a boring insurance letter because I open up LibreOffice and the blank page just makes me want to give up. If I have AI just fart out a letter and then I start to edit it, I’m already mid-project so it actually does save me some time in that way.
I agree. By the time I’m done, I’ve written most of the document. It gets me past the part where I procrastinate because I don’t know how to begin.
For those of us who are bad at writing, though, that’s exactly why we use it. I’m bad with greetings, structure, the things people expect, and I’ve had people get offended at my emails because they come off as rude. I don’t notice those things. For that, LLMs have been a godsend. Yes, I of course have to validate the output, but it usually conveys the message I’m trying to send.
Yeah that’s how I use it, essentially as an office intern. I get it to write cover letters and all the other mindless piddly crap I don’t want to do so I can free up some time to do creative things or read a book or whatever. I think it has some legit utility in that regard.
I get the point here, but I think it’s the wrong approach. If you feel the email needs too much business fluff, just write it more casually and get to the point quicker.
People keep meaning different things when they say “Generative AI”. Do you mean the tech in general, or the corporate AI that companies overhype and try to sell to everyone?
The tech itself is pretty cool. GenAI is already being used for quick subtitling and translating any form of media. Image AI is really good at upscaling low-res images and making them clearer by filling in the gaps. Chatbots are fallible, but they’re still really good for specific things like generating test data, or quickly helping you with basic tasks that might otherwise have you searching for 5 minutes. AI is huge in video games for upscaling tech like DLSS, which can boost performance by running the game at a low resolution and then upscaling it; the result is genuinely great. It’s also used to de-noise ray tracing and show cleaner reflections.
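To see what “filling in the gaps” improves on: the naive baseline is nearest-neighbour scaling, which just repeats pixels. A toy sketch of that baseline (plain Python, image as a 2D list of pixel values):

```python
def upscale_nearest(img, factor):
    """Scale a 2D grid of pixel values by repeating each pixel `factor` times."""
    out = []
    for row in img:
        # Repeat each pixel horizontally, then repeat the whole row vertically.
        wide = [px for px in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

small = [[1, 2],
         [3, 4]]
big = upscale_nearest(small, 2)
# big == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Learned upscalers like DLSS predict plausible detail instead of repeating or blending pixels, which is why the result can look close to native resolution.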
Also, people are missing the point of why AI is being invested in so heavily. No, I don’t think “AGI” is coming any time soon; the reason these companies are sucking in so much money is what the tech could be in five years. Saying AI is a waste of effort is like saying 3D video games were a waste of time because they looked bad in 1995. It will improve.
“AI is huge in video games for upscaling tech like DLSS which can boost performance by running the game at a low resolution then upscaling it, the result is genuinely great”
frame gen is blurry af and eats shit on any fast motion. rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.
“rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.”
Is that a reason a lot of pixel art games look like shit? I remember the era of 320x240 and 640x480, and modern pixel art looks noticeably worse.
That’s probably more to do with a lack of dithering and the fact that we don’t use tubes anymore. Lots of those older games looked better on a CRT than they do on digital displays.
A good example is Dracula’s eyes in Symphony of the Night: on a CRT the red bleeds over, giving a really good red-eyes effect.
On an LCD they are just single red pixels and look awful.
Quite possibly. Old games also look worse on emulators (and don’t even get me started on those remasters; I got incredibly hyped for the upcoming Suikoden 1+2 on PC, but my eyes fucking bleed).
In the context of programming:
- Good for boilerplate code and variable naming, when what you want is for the model to regurgitate things it has seen before.
- Short pieces of code where it’s much faster to verify that the code is correct than to write the code yourself.
- Sometimes, I know how to do something but I’ll wait for Copilot to give me a suggestion, and if it looks like what I had in mind, it gives me extra confidence in the correctness of my solution. If it looks different, then it’s a sign that I might want to rethink it.
- It sometimes gives me suggestions for APIs that I’m not familiar with, prompting me to look them up and learn something new (assuming they exist).
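As an example of the second bullet: here’s the kind of short, self-contained snippet (made up, not from any particular assistant) where reading it to verify correctness is quicker than writing it yourself:

```python
def human_size(n):
    """Format a byte count as a human-readable string, e.g. 1536 -> '1.5 KiB'."""
    for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
        if n < 1024 or unit == "TiB":
            # Whole bytes stay as an integer; larger units get one decimal.
            return f"{n} {unit}" if unit == "B" else f"{n:.1f} {unit}"
        n /= 1024
```

Ten seconds of skimming confirms the unit ladder and the 1024 cutoffs; writing and spot-checking it from scratch takes longer.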
There’s also some very cool applications to game AI that I’ve seen, but this is still in the research realm and much more niche.
I generate D&D characters and NPCs with it, but that’s not really a strong argument.
For programming, though, it’s quite handy: basically a smarter code completion that takes the already-written code into account. From machine code through assembly up to higher-level languages, I think it’s a logical next step to be able to tell the computer, in human language, what you’re actually trying to achieve. That doesn’t mean it takes over while the programmer switches off their brain, of course, but it has already saved me quite some time.