No, you just use a standard technique like word2vec.
Basically, words are considered similar (and embedded at nearby locations in a high-dimensional space) if they are likely to be used in the same contexts.
And because slurs are used to indicate that you don't like someone, they tend to occur in the same kinds of contexts.
So they all end up very similar. This is actual natural language processing being used, but it's a shitpost and the graphics aren't very clear.
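If you want to see the idea concretely, here's a minimal sketch using gensim's Word2Vec on a made-up toy corpus. The corpus, word choices, and parameters are purely illustrative; real results need a large corpus or pretrained vectors.

```python
# Toy demonstration of the distributional idea behind word2vec:
# words that show up in the same surrounding contexts get nearby vectors.
# The corpus below is invented for illustration only.
from gensim.models import Word2Vec

corpus = [
    "you are such a jerk get away from me".split(),
    "you are such an idiot get away from me".split(),
    "you are such a moron get away from me".split(),
    "you are such a sweetheart come sit with me".split(),
    "you are such a darling come sit with me".split(),
]

# Train a small skip-gram model on the toy sentences.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1,
                 sg=1, epochs=200, seed=1)

# Words used in the same contexts should come out as neighbors,
# though on a corpus this tiny the exact numbers are noisy.
print(model.wv.most_similar("jerk", topn=3))
print(model.wv.similarity("jerk", "idiot"))       # should be relatively high
print(model.wv.similarity("jerk", "sweetheart"))  # should be lower
```

Same principle at scale: insults cluster together because they're all dropped into the same "I don't like you" contexts.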

Yeah, but this is a legitimate concern in the US.
In most of the world, safety approval for cars includes making sure they are safe enough for people outside the car. There are rules about crumple zones and sharp edges that affect how they hit pedestrians, and what they can do to other cars.
You can't drive a Cybertruck in Europe or the UK. In the US, it's a free-for-all.
If you want smaller cars/trucks back on the road, you need to copy the safety laws from countries where people drive small cars.