• Send_me_nude_girls@feddit.de
    1 year ago

    None of that is possible with FOSS AI code if it's already out there on the web. There will only be guidelines for AI available to the public and for companies using AI in their products, but the rest of the more tech-savvy people will be unaffected.

    • NeoNachtwaechter@lemmy.world
      1 year ago

      None of that is possible

      That is not enough. Think harder.

      Today’s existing AIs are child’s play, but it’s not going to stay that way for long.

      One day it will be necessary to do something for real, when some AI causes harm to the public (whether or not a person intended it), and we will need to decide what to do then.

      • Send_me_nude_girls@feddit.de
        1 year ago

        We already struggle to stop people from believing fake news in written form. I don’t see how we can stop people from believing well-made fake news with audio and video.

        Personally I think every country needs some form of government-independent news media, so that at least one broadly trustworthy source of information is available.

        Everything profit-oriented will end up propagating misinformation as long as it generates clicks.

        Oh, and don’t let AI control weapons; that’s the worst mistake one can make. We can’t even manage self-driving cars, let alone a drone with mass-killing weapons.

        Punishment won’t reflect the complexity anymore. Say some 14-year-old creates a fake video of the president declaring war, a real war breaks out because it goes viral, and millions die. Is that 14-year-old going to prison for life? Would a 16- or 18-year-old? What I’m trying to say is that the barrier to entry is totally different from picking up a gun and shooting someone. A simple bad day or a stupid childish joke will soon have the power of a well-planned, expensive propaganda campaign.

        Blocking commercial products from allowing certain actions could be a start, but it’s not a total fix. Say an AI filter for the faces of public figures, or keyword filters for LLMs/chatbots. Not perfect, but better than nothing.
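        Just to make concrete how crude such a keyword filter would be (the blocklist and function names here are made up for illustration, not any real product’s API), it can be as little as:

        ```python
        # Hypothetical blocklist -- a real product would maintain a much larger,
        # curated list and layer other safeguards on top of it.
        BLOCKED_KEYWORDS = {"president declaring war", "declaration of war"}

        def is_blocked(prompt: str) -> bool:
            """Return True if the prompt contains any blocked phrase (case-insensitive)."""
            text = prompt.lower()
            return any(keyword in text for keyword in BLOCKED_KEYWORDS)

        print(is_blocked("Make a video of the President declaring war"))  # True
        print(is_blocked("Make a video of a cat"))                        # False
        ```

        Which is exactly why it’s only “better than nothing”: rephrase the prompt slightly and the filter never fires.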

        AI is very broad; you could put almost all software under that topic too. It’s also not easy to define what is AI and what isn’t. A rule-based system is already some form of dumb AI, so every such law affects pretty much everything else.
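        To show what I mean by a rule-based system being “dumb AI” (a toy example I made up, nothing more): a chatbot that is nothing but if/else rules would arguably fall under a broad legal definition of AI.

        ```python
        # A minimal rule-based "chatbot": no learning, no statistics, just rules.
        # A sufficiently broad AI law could still cover code like this.
        def rule_based_bot(message: str) -> str:
            message = message.lower()
            if "hello" in message:
                return "Hi there!"
            if "weather" in message:
                return "I cannot check the weather."
            return "I don't understand."

        print(rule_based_bot("Hello, bot"))  # Hi there!
        ```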

        I’m pretty sure we’ll get a load of unprepared governments creating all sorts of surveillance laws. An international organisation could prevent the worst of it.

        We should have started educating people yesterday on how AI works, on the consequences, and on how to avoid blind actions. Excuse me, we have a climate to save…