• Gaywallet (they/it)@beehaw.orgOP · 1 year ago

    The ways to control for algorithmic bias are typically additional, human-developed layers that counteract the bias present when you ingest large datasets for training. But that's extremely work-intensive. I've seen some interesting hypotheticals where algorithms designed specifically to identify bias are used to tune layers with custom weighting to try to pull bias back down to acceptable levels, but even then we'll probably need to watch how this changes the language used about the groups the bias affects.
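    A rough sketch of the "custom weighting" idea (my own illustration, not a specific published method): estimate how over- or under-represented each group is in the training data, then weight samples inversely so the loss doesn't just echo the imbalance. The group labels and the trainer hookup are hypothetical.

```python
# Rough illustration of reweighting training samples so over-represented
# groups don't dominate the loss. Group labels here are hypothetical.
import numpy as np

def inverse_frequency_weights(group_labels: np.ndarray) -> np.ndarray:
    """Per-sample weights that down-weight over-represented groups."""
    groups, counts = np.unique(group_labels, return_counts=True)
    freq = dict(zip(groups, counts / len(group_labels)))
    weights = np.array([1.0 / freq[g] for g in group_labels])
    # Normalise to mean 1 so the overall loss scale is unchanged.
    return weights / weights.mean()

# Toy usage: group "b" is under-represented, so its samples get larger weights.
# These would typically be passed as per-sample weights to whatever trainer you use.
groups = np.array(["a", "a", "a", "b"])
print(inverse_frequency_weights(groups))  # -> approx [0.67, 0.67, 0.67, 2.0]
```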

    • Hexorg@beehaw.org · 1 year ago

      I think the trouble with human oversight is that it’s still going to keep whatever bias the overseer has.

      • Gaywallet (they/it)@beehaw.orgOP · 1 year ago

        AI is programmed by humans or trained on human data. Either we’re dealing in extremes where it’s impossible to not have bias (which is important framing to measure bias) or we’re talking about how to minimize bias not make it perfect.