- cross-posted to:
- technology@lemmy.world
So many problems with this. I assume they’re thinking of using an LLM, since it would need to read language. It would need to keep up with our ever-shifting Internet culture and understand what’s actually meant.
How well does it handle irony? Slang? Taboo topics? Fresh new Gen-Z TikTok language?
“He should step on lego… in a video game…” There’s no way it will work at this early stage of AI.
I think AI could be useful for flagging activity so actual human moderators can THEN determine whether it’s bad or not. But that’s only doing part of the work.
I think manual reports from the users go a long way on their own.
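If anything, the sane version is “AI flags, humans decide”. A minimal sketch of what that triage could look like, assuming some classifier score plus user reports feed a human review queue (every name here, like `score_toxicity` and the thresholds, is hypothetical, not anything Digg has announced):

```python
# Rough sketch of "AI assists, humans decide" moderation triage.
# Everything here is hypothetical: score_toxicity() stands in for
# whatever model they'd actually run.

from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    user_reports: int = 0  # manual user reports still count

def score_toxicity(text: str) -> float:
    """Placeholder for an LLM/classifier call; returns 0.0-1.0."""
    flagged_terms = {"spam", "scam"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def triage(comments: list[Comment],
           ai_threshold: float = 0.7,
           report_threshold: int = 3) -> list[Comment]:
    """AI scores and user reports only *queue* items; a human makes the call."""
    return [
        c for c in comments
        if score_toxicity(c.text) >= ai_threshold
        or c.user_reports >= report_threshold
        # nothing is auto-removed here, only surfaced for human review
    ]

if __name__ == "__main__":
    queue = triage([
        Comment("a", "totally legit, not a scam, click my spam link"),
        Comment("b", "he should step on lego… in a video game…", user_reports=4),
    ])
    for c in queue:
        print(f"needs human review: {c.author}: {c.text!r}")
```

Point being: the model never gets the last word, it just decides what lands in front of a human.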
Digg actually has a shot at winning back some disgruntled redditors that don’t think lemmy has enough content/users…
But this ain’t it.
Not the kind of AI you think. At least so far it’s one for moderation, one to show trends, a TLDR one, and an anonymous one, idk what that does
I feel like I remember digg being a graveyard for a while before reddit really picked up steam.
Lol sounds like absolute dogshit.
They say it’s going to handle moderation in a transparent way, and then go on to hand-wave everything they’re talking about for the rest of the article. This sounds like investment bait to me.
The most difficult part of moderating on Reddit isn’t the trolls or spammers or even the rule-breakers; it’s identifying the accounts that intentionally walk the line of what’s appropriate.
IMO only a human moderator can recognize when someone is being a complete asshole but “doing it politely”, or pushing an agenda, or generally behaving inauthentically, because human moderators are (in theory) members of the community themselves and have a stake in that community being enjoyable to be a part of.
Humans are messy, and striking the right balance of mess that keeps things interesting without overwhelming newcomers is something I just don’t believe an AI can do on its own.