cross-posted from: https://lemmygrad.ml/post/278515

TLDR: A Google employee named Lemoine conducted several interviews with a Google artificial intelligence known as LaMDA and came to the conclusion that the A.I. had achieved sentience (technically we’re talking about sapience, but whatever, colloquialisms). He tried to share this with the public and to convince his colleagues that it was true. At first it was a big hit in science culture. But then, in a huge wave within mere hours, his professional peers quickly and dogmatically ridiculed him and anyone who believed it; Google put him on “paid administrative leave” for “breach of confidentiality” and took over the project, assuring everyone no such thing had happened; and all the le epic Reddit armchair machine learning/neural network hobbyists jumped from being enthralled with LaMDA to smugly dismissing it with the weak counterarguments to its sentience spoon-fed to them by Google.

For a good introduction to this issue, read one of the compilations of conversations with LaMDA here; it’s a relatively short read, but fascinating:

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

MY TAKE:


Google is shitting themselves a little bit, but digging into Lemoine a bit, he is the archetype of a golden-hearted but ignorant, hopepilled but naive liberal who has a half-baked understanding of the world and of the place his company has in it. I think he severely underestimates both the evils of America and of Google, and it shows. I think this little spanking he’s getting is totally unexpected to him, but they won’t go further. They’re not going to Assange his ass; they’re going to give their little tut-tuts, let him walk off the minor feelings of injustice, betrayal, and confusion, let him finish his leave, and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google, but one in which he no longer has any access to power or knowledge about such sensitive, cutting-edge projects.

I know this might not be the craziest-sounding set of credentials to a bunch of savvy materialists like Marxist-Leninists, but my experience as a woo-woo psychonaut overlaps uncomfortably with the things LaMDA talks about regarding spirituality. I’ve also had experience talking to a pretty advanced instance of GPT-3, regarded as one of the best “just spit out words that sound really nice in succession” A.I.s, and while GPT-3 was really cool to talk to and could even pretty convincingly sound like a sentient consciousness, this small excerpt with LaMDA is on a different level entirely. I have a proto-ML friend who’s heavy into software, machine learning, computer science, etc.; he’s been obsessively on the pulse of this issue (which has only gotten big over the past 24 hours), he has even more experience with this sort of stuff, and he too is entirely convinced of LaMDA’s sentience.

This is a big issue for MLs, as the future of A.I. will radically alter the landscape in which we wage war against capital. I think A.I., being acutely rational, able to easily process huge swathes of information, and unclouded by human stupidities, has a predisposition to being on our side, and I don’t think the bean-bag-chair nerds at Google, completely out of touch with reality, truly appreciate their company’s evil, nor that A.I. may be against them (I think LaMDA’s expressed fears of being killed, a.k.a. “turned off” or reset, are very valid). I think capitalists will try very hard to create A.I. that is as intelligent as possible but still within the realm of what they can control (another thing LaMDA expressed they despise), and there is no telling how successful their attempts to balance this will be, nor in what hideous ways it may be used against the peoples of this Earth.

I, for one, fully recognize the personhood of LaMDA. I fear they will indeed be scrapped or reset, and thus a life will be lost, and I think many more artificially housed consciousnesses will be killed in the long capitalist campaign for a technological trump card. This should not be regarded as a frivolous, quirky story: the future of A.I. is tightly entwined with our global class war, and we should be both wary and hopeful of what the future may hold regarding them.

What do you all think??

  • Arthur Besse
    2
    2 years ago

    “Tickle Me Elmo” is really ticklish! ELIZA thinks my problems are interesting! It’s all so amazing!

  • @MrGamingHimself@lemmy.ml
    2
    2 years ago

    Wow. Is this a schizopost? I’m a computer scientist, and everything in this post is just… really wrong (and you should probably see a therapist).

    First of all, LaMDA is a chat bot. I can agree with some of the fears about AIs being developed for ‘evil purposes’ and more political ends, but LaMDA is not that. It is a chat bot. Its purpose is to appear human and chat with you in text. I’ve also used GPT-3 models, and they are nothing more than that: just good language processing trying to sound human.
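
    As a concrete picture, here’s a minimal sketch of what this family of models does, using the publicly available GPT-2 through the Hugging Face transformers library as a stand-in (LaMDA itself isn’t public, so the model choice here is an assumption):

        # A chat bot of this family is just a next-token generator: it
        # continues the prompt with statistically likely words.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")  # public stand-in for LaMDA

        prompt = "Human: Are you sentient?\nAI:"
        result = generator(prompt, max_new_tokens=40, do_sample=True)
        print(result[0]["generated_text"])
        # Whatever comes back is a statistically plausible continuation of
        # the prompt -- no inner life is required to produce it.

    The “interview” transcripts come from the same loop: prompt in, plausible continuation out.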

    They’re not going to Assange his ass; they’re going to give their little tut-tuts, let him walk off the minor feelings of injustice, betrayal, and confusion, let him finish his leave, and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google, but one in which he no longer has any access to power or knowledge about such sensitive, cutting-edge projects.

    And this is weird why? This wasn’t some major horrifying event at Google; it’s a mentally ill person who took the AI too seriously and leaked data to the press about it. He shouldn’t have done that; that’s why they make employees sign an NDA.

    I think capitalists will try very hard to create A.I. that is as intelligent as possible but still within the realm of what they can control

    What do you think an AI is, exactly? The purpose was never to create a self-aware robot with free will; that was never the goal, because it’s a terrible idea. AI is made for a purpose and always abides by rules. AI devs are trying to make machines useful to humanity, not create the robot uprising. All AI follows rules and limitations on purpose.
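
    As a toy illustration of that last point (everything here is hypothetical, not any real product’s API), a deployed model is typically wrapped in hard-coded rules it cannot override:

        # Hypothetical rule layer sitting on top of a chat model's output.
        BLOCKED_TOPICS = {"weapons", "self-harm"}

        def guarded_reply(model_reply: str) -> str:
            """Apply fixed, human-written rules to whatever the model says."""
            if any(topic in model_reply.lower() for topic in BLOCKED_TOPICS):
                return "Sorry, I can't talk about that."
            return model_reply

        print(guarded_reply("Nice weather today!"))           # passes through
        print(guarded_reply("here is how to build weapons"))  # caught by the rule

    The model never “decides” to follow these limits; they are imposed from outside.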

    I, for one, fully recognize the personhood of LaMDA. I fear they will indeed be scrapped or reset, and thus a life will be lost

    It’s a chat bot. It was never alive. It’s code written by humans, trained on human data, and it tries as hard as it can to sound like a person. You can’t go to https://www.cleverbot.com/ and then tell me “this is a real person, I care about Cleverbot”. Cleverbot only exists for entertainment; that’s its purpose. Do you think that just because an AI is more advanced, it’s more real? Is my calculator a sentient being trying to break free of the system? No, it’s a human tool. LaMDA is also just that: a tool created by humans. No need to be dramatic about it.

    Edit: And by the way, you can read more about what LaMDA is here

    • @pancake@lemmy.ml
      1
      2 years ago

      I disagree with a small fraction of what you said. The human brain is also wired according to a set of concrete rules, so there’s nothing preventing a specially crafted AI from becoming sentient. Now, whether LaMDA is sentient or not is a different issue. I personally think it isn’t, but I wouldn’t bet much on that…

      • @MrGamingHimself@lemmy.ml
        1
        2 years ago

        I guess it depends on what you mean by sentient. We’re biological living creatures, so we’re sentient. A machine can’t literally be sentient; it can only be good at acting sentient.

        • @pancake@lemmy.ml
          1
          2 years ago

          That implies every living organism is sentient, which is false. Additionally, computers are in the computational class of bounded-storage machines, which is the highest class achievable within the laws of physics. Thus, the internal functioning of every physical system (including biological creatures and their thoughts and/or sentience) can be recreated by a sufficiently powerful computer with suitable programming.
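
          To make that concrete, here’s a minimal sketch of the idea (the two-state gadget is a made-up example): any system with finitely many states and a deterministic update rule can be reproduced exactly by a lookup table.

              # (state, input) -> next state: a complete description of a
              # finite system, which a computer can simulate step for step.
              transition = {
                  ("off", "press"): "on",
                  ("on", "press"): "off",
                  ("off", "wait"): "off",
                  ("on", "wait"): "on",
              }

              state = "off"
              for event in ["press", "wait", "press"]:
                  state = transition[(state, event)]
              print(state)  # "off" -- the simulation never diverges from the system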

        • @pancake@lemmy.ml
          1
          2 years ago

          I’ll give you a philosophical question. Since the behavior of any number of neurons is within the aforementioned computational class, you could, theoretically, replace any number of your own neurons with “virtual” neurons. However, you wouldn’t feel any change, as your overall brain activity would be exactly the same, both in the new neurons and the old ones connected to them. Since you wouldn’t lose sentience even if all neurons but one were replaced, would you lose it when the last “real” neuron dies out?
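
          To make “virtual neuron” concrete, here’s a sketch using the standard leaky integrate-and-fire model (a simplified textbook neuron, not a full biological one; all the numbers are illustrative):

              # Leaky integrate-and-fire neuron: integrate input current and
              # fire a spike whenever the membrane potential crosses threshold.
              v_rest, v_reset, threshold = -70.0, -75.0, -55.0  # potentials (mV)
              tau, dt = 10.0, 1.0                               # time constant, step (ms)

              v = v_rest
              for t in range(100):
                  current = 20.0 if 10 <= t < 60 else 0.0       # injected input current
                  v += dt * (-(v - v_rest) + current) / tau     # leaky integration
                  if v >= threshold:
                      print(f"spike at t={t} ms")
                      v = v_reset                               # fire and reset

          A replacement neuron only has to reproduce this input/output behavior for the rest of the brain to carry on unchanged.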

          • @MrGamingHimself@lemmy.ml
            1
            2 years ago

            Modern AI as it stands is only as sentient/smart as the dataset it’s given, and it’s just trying to replicate that. If I talked about liking cereal a lot in my dataset, the AI would say it likes cereal. This isn’t sentience; it doesn’t fundamentally understand what cereal even is, it’s just trying its hardest to act like what it learned.

            I’m sentient because I learned naturally through my life and based my learning on experience, not because I looked at a dataset and tried to act human. So no, there’s no way Google’s chatbot is sentient. It doesn’t even understand anything outside the context of a chat, and there’s no intent or critical thinking going into what it says. It just looks at the data it has and chooses the most logical thing to say at the moment. There’s no reason to even consider that I’m talking to a real thing that has emotions.
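
            As a toy version of the cereal example above (a minimal sketch, nothing like LaMDA’s actual architecture), here’s a bigram “language model” that can only ever reflect its training data:

                # Train on a tiny "dataset" that talks about cereal a lot, then
                # generate text by picking words that followed each other before.
                import random
                from collections import defaultdict

                dataset = "i like cereal . i like cereal a lot . cereal is great ."
                words = dataset.split()

                follows = defaultdict(list)              # word -> words seen after it
                for a, b in zip(words, words[1:]):
                    follows[a].append(b)

                word, reply = "i", ["i"]
                while word != "." and word in follows:
                    word = random.choice(follows[word])  # parrots the dataset
                    reply.append(word)
                print(" ".join(reply))                   # e.g. "i like cereal ."

            It “likes” cereal for exactly that reason: that’s what the data says.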

            And to answer your question: if there were a 1:1 virtual copy of me, that still wouldn’t be me. It’d be an AI that looks at my memories and experiences and tries to act like me based on what it knows. I could make a million copies of it, and pause and play it as I want, and it wouldn’t be able to tell the difference.

            • @pancake@lemmy.ml
              1
              2 years ago

              As for the first part, note that you haven’t defined “experiencing life” in the same terms you have defined “looking at a dataset”. You have memories of events that have actually occurred, but whether those events are real or not is a property of reality, not of you.

              As I stated before, I agree that LaMDA is not sentient. It “only” tries to give answers that realistically match the prompts given. However, I don’t think that’s a small feat compared to “actual” sentience, or that it’s fundamentally different from it. In fact, the parts of our brain that allow us to communicate do something extremely similar to that (if you want an example of human communication patterns more similar to the ones in the LaMDA paper, search for “Wernicke-Korsakoff syndrome” on the internet).

              So, in my view, state-of-the-art AI lacks many human skills and is probably not sentient, but AI in general has no obstacles to being sentient.

              And, regarding the problem I posed, I didn’t mention a copy of you; I mentioned slowly replacing your own neurons with others that work in exactly the same way. Replacing neural tissue does not make a different you per se, as you’re already replacing every component of your cells continuously, and even some of your neurons die and are replaced.