I believe the old technique was gradient ascent: start with a random image and optimize it for the classifier’s dogginess score. Now we train image denoisers, then hand them pure noise and tell them it’s a noisy image of a dog. Basically, we lie to models to make them make stuff for us, and we’ve gotten better at which lying scheme we use.
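For the curious, here’s a rough sketch of that old gradient-ascent trick, assuming a PyTorch/torchvision setup; the model, class index, learning rate, and step count are all illustrative placeholders, not anyone’s actual recipe.

```python
# Sketch of the "old technique": gradient ascent on a classifier's class score,
# starting from a random image. Everything here (ResNet-18, the class index,
# the step size, the iteration count) is assumed for illustration only.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the image, not the network

DOG_CLASS = 207  # an ImageNet dog class index, picked arbitrarily here

# Start from random noise and make it "more doggy" according to the model.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    loss = -logits[0, DOG_CLASS]  # maximize the dog logit by minimizing its negative
    loss.backward()
    optimizer.step()

# `img` now scores high on dogginess, though to a human eye it mostly looks
# like psychedelic noise, which is roughly why the field moved on to training
# denoisers and handing them pure noise instead.
```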
If we were living in the 1930s, they’d be the same people complaining that we’re being unfair to Hitler and that we need to hear his perspective too.
Hell, he used the same political strategy as modern-day fascist politicians: simply lying. “I’m gonna make everything better! How? Don’t worry about that, just trust me and also let me reassert Germany’s national pride!” I’m reminded of Trump’s ACA “plan” (he doesn’t have one).
And we just let them say that, unchallenged! Maybe someone asks how they’ll do it, but viewers just hear a strongman telling a story of future prosperity and ignore any small details a journalist might counter with. In the name of “balance”, we let them spread their info hazards and pretend that silly things like facts will let people come to the right conclusion.