A rising movement of artists and authors is suing tech companies for training AI on their work without credit or payment

  • 33KK@lemmy.blahaj.zone · 1 year ago

    Most models are trained unethically, relying on weird claims that humans learn the “same way” (glancing at a few references when drawing a specific thing, since you need to know how something looks to draw it lol) as large models do (more or less averaging and weighting billions of images scraped from the internet with no regard to the licenses)

    • archomrade [he/him]@midwest.social · 1 year ago

      I don’t think I said “humans learn the same way”, but I do think it helps to understand how ML algorithms work in comparison with existing examples of copyright infringement (i.e. photocopies, duplicated files on a hard drive, word-for-word or pixel-for-pixel duplications, etc.). ML models don’t duplicate or photocopy training data; they “weight” (or to use your word choice, “average”) the data against a node structure. Other, more subjective copyright infringements are decided on a case-by-case basis, where an artist or entity has produced an “original” work that leans too heavily on a copyrighted work. It is clear that ML models aren’t a straightforward duplication. If you asked an ML algorithm to reproduce an existing image, it wouldn’t be able to recreate it exactly, because that data isn’t stored in its model, only the approximate instructions on how to reproduce it. It might be able to get close, especially if that example is well represented in the data set, but the image would be fundamentally “new” in the sense that it has not been copied pixel by pixel from an original, only recreated through averaging.
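      To make the “weights, not copies” point concrete, here’s a toy sketch (a made-up linear-fit example of my own, nothing like a real image model): after training, all that survives is a couple of parameters, and the training points themselves are nowhere in the “model”.

```python
# Toy illustration (hypothetical, not how production models work):
# fit y = w*x + b to some points by gradient descent. The trained
# "model" ends up being just two floats; the data itself is not stored.

def train(points, lr=0.01, steps=2000):
    """Fit y = w*x + b to (x, y) pairs by gradient descent."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

points = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # "training data"
w, b = train(points)
# The whole "model" is (w, b); the four points above are gone.
# It can only regenerate approximations of them, e.g. w*1 + b ≈ 2, not 2.1.
```

      It’s an oversimplification, of course, but it shows the structural difference from a photocopy: the model compresses the data down to parameters and can only approximately reconstruct what it saw.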

      If our concern is that AI could literally reproduce existing creative work and pass it off as original, then we should pursue legal action against those uses. But to claim that the model itself is an illegal duplication of copyrighted work is ridiculous. If our true concern is (as I think it is) that the use of ML algorithms may supplant the need for paid artists or writers, then I would suggest we rethink how we structure compensation for labor and not simply place barriers to AI deployment. Even if we were to reach some compensation agreement for the use of copyrighted material in training data, that wouldn’t prevent the elimination of artistic labor; it would only solidify AI as an elite, expensive tool owned by a handful of companies that can afford the cost. It would consolidate our economy further, not democratize it.

      In my opinion, copyright law is already just a band-aid to a broader issue of labor relations, and the issue of AI training data is just a drastic expansion of that same wound.

      • 33KK@lemmy.blahaj.zone · 1 year ago

        My concern is that billions of works are being used for training with no consent and no regard to the license, and “the model learns” is not an excuse. If someone saved some of my content for personal use, sure, I don’t mind that at all, but a huge-scale for-profit scraping operation downloading all the content it physically can? Fuck off. I just blocked all the crawlers from ever accessing my websites (well, Google and Bing literally refuse to index my stuff properly anyway, so fuck them too; none of them even managed to read the sitemap properly, and it was definitely valid)