In this paper we take a broad look at child sexual exploitation concerns on decentralized social media, present new findings on the nature and prevalence of child safety issues on the Fediverse, and offer several proposals to improve the ecosystem in a sustainable manner. We focus primarily on the Fediverse (i.e., the ecosystem supporting the ActivityPub protocol) and Mastodon, but several techniques could also be repurposed on decentralized networks such as Nostr, or semi-centralized networks such as Bluesky.

  • Maddox@feddit.ch · 11 months ago

    Thank you OP for linking the original paper!

    I have seen it cited and re-cited by the usual outlets such as WaPo and The Verge. I feel they sensationalize too much, and very little information is available on the magnitude of this problem even on centralized platforms. I have a hard time believing that paid “content scanners” are an effective solution - and hopefully they are not the best solution the internet can come up with…

    Very grim topic, but I appreciate that the community is talking about it early! I find it frustrating that major news outlets focus on sensationalism rather than platforming a productive discussion…

    • _Frog@feddit.ch (OP, mod) · 11 months ago

      News coverage is almost always about sensation - outlets want people to spend time on their platform and see some ads.

      I find the conclusion interesting. Quote:

      Investment in one or more centralized clearinghouses for performing content scanning (as well as investment in moderation tooling) would be beneficial to the Fediverse as a whole.

      This somewhat contradicts the premise of a federated network, but something like this would also bring major benefits.

      • Maddox@feddit.ch · 11 months ago

        This somewhat contradicts the premise of a federated network, but something like this would also bring major benefits.

        I am not sure how to feel about this. On the one hand, it would mean that a node in the network cannot “outgrow” its own “moderation power”. On the other hand, it would make nodes vulnerable to a terrible form of attack, or force them to implement more rigorous sign-up procedures. All of these scenarios could end up pretty dystopian!

        I believe that if users are empowered to participate in combating the problem, almost all of them would. The question is: how can the nodes harness this? What can ordinary users do to help?

        • _Frog@feddit.ch (OP, mod) · 11 months ago

          Good point. If the core scanning function were implemented on all nodes, and everyone exchanged information (hashes) about which elements are harmful, we might have a chance to take central nodes out of the equation. The information could be distributed across multiple, maybe random, nodes, but that would pose a new problem: where to find that information.
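          To make this concrete, here is a minimal sketch of that idea - each node publishes its list of known-harmful hashes, and every node unions whatever it can reach, so no central node is required. The peer list, endpoint path, and JSON shape here are all hypothetical, not an existing Fediverse API:

          ```python
          import json
          import urllib.request

          # Hypothetical peers; in practice this could be the instances a node
          # already federates with, or a random sample of them.
          PEERS = ["https://node-a.example", "https://node-b.example"]

          def fetch_peer_hashes(peer: str) -> set[str]:
              """Fetch the harmful-content hashes a peer publishes (assumed endpoint)."""
              with urllib.request.urlopen(f"{peer}/harmful-hashes.json", timeout=10) as resp:
                  return set(json.load(resp))

          def aggregate_blocklist(peers: list[str]) -> set[str]:
              """Union all reachable peers' hash lists - no central clearinghouse."""
              blocklist: set[str] = set()
              for peer in peers:
                  try:
                      blocklist |= fetch_peer_hashes(peer)
                  except OSError:
                      continue  # an unreachable peer shouldn't break the exchange
              return blocklist
          ```

          A gossip-style exchange like this only needs the peers a node already knows, which sidesteps (but does not solve) the discovery problem.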

          I believe that if users are empowered to participate in combating the problem, almost all of them would. The question is: how can the nodes harness this? What can ordinary users do to help?

          I agree. Maybe each node could compute hashes of “unsafe” content and share them with the other nodes - much like an antivirus vendor creates signatures for malware. And users can always report to the admins: with more instances there are automatically more admins to handle those reports, either themselves or with bots.
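          As a rough sketch of that antivirus analogy (reusing the hypothetical blocklist from the snippet above): hash each incoming attachment and compare it against the shared “signature” list. Note that a plain SHA-256 only catches exact copies; real scanners use perceptual hashes such as PhotoDNA or PDQ so resized or re-encoded copies still match.

          ```python
          import hashlib

          def media_hash(data: bytes) -> str:
              """Exact-match fingerprint; a real deployment would use a perceptual
              hash (e.g., PDQ) so altered copies still match."""
              return hashlib.sha256(data).hexdigest()

          def should_quarantine(data: bytes, blocklist: set[str]) -> bool:
              """Check an incoming attachment against the shared hash list,
              like an antivirus checking a file against malware signatures."""
              return media_hash(data) in blocklist
          ```

          An instance could hold matches for admin review instead of deleting them outright, which keeps the admins (and reporting users) in the loop.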