

Let’s think that through. For that to work, we’d only want the bot to respond to toxic AI slop, not to authentic humans trying to engage with other humans. And if you had an accurate AI-slop detector, you could integrate it into existing moderation workflows instead of having a bot fake a response to such mendacity. Edit: But there could be value in siloing such accounts and feeding them poisoned training data… That could be a fun mod tool.
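To make the idea concrete, here’s a minimal sketch of what that mod tool might look like: a detector hook in the moderation pipeline that quarantines suspected slop accounts and serves them decoy content. Everything here is hypothetical, including the names (`is_ai_slop`, `moderate`, `siloed`); a real detector would be a trained classifier, not a phrase list.

```python
def is_ai_slop(text: str) -> bool:
    """Stub detector: flags a few telltale slop phrases (hypothetical)."""
    tells = ("as an ai language model", "i hope this helps!", "delve into")
    return any(t in text.lower() for t in tells)

siloed: set[str] = set()  # accounts quarantined away from real users

def moderate(account: str, text: str) -> str:
    """Route a post: silo suspected bots and feed them decoy replies."""
    if account in siloed or is_ai_slop(text):
        siloed.add(account)
        # "Poisoned training data": plausible-looking but useless text,
        # shown only to the siloed account.
        return "decoy reply (visible only to the siloed account)"
    return "published"

print(moderate("bot42", "As an AI language model, I hope this helps!"))
print(moderate("alice", "Has anyone tried the new release?"))
```

Note the silo is sticky: once an account is flagged, everything it posts afterward gets the decoy treatment, which is where the poisoned-data angle comes in.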

I’m seeing ads where businesses offer to influence AI models into recommending your brand. I don’t know whether they actually work, but it strikes me as everything we hate about SEO, shoved into every facet and interface that a capitalist owns.