I guess it depends on what you mean by sentient. We’re biological living creatures, so we’re sentient. A machine can’t literally be sentient, it can only be good at acting sentient.
Such a good article. Generative art is such a cool thing that’s recently been exploding in popularity because of how good it’s starting to become, but it really does raise the question of whether the advancement of AI-driven art will devalue real, human-drawn art.
A couple of years ago, when AI was really kicking off, a lot of people were saying that in the future AI would be able to do a ton of things, but at least it couldn’t do artistic thinking like writing or drawing. That’s humanity’s turf… But here we are.
And now it’s “at least AI can’t write code or automatically create other powerful AI without human interference” but I guess I’ll see you in 20 years about that.
Wow. Is this a schizopost? I’m a computer scientist and everything in this post is just… Really wrong (and you should probably see a therapist).
First of all, LaMDA is a chatbot. I can agree with some of the fear of AIs being developed for ‘evil purposes’ and more political situations, but LaMDA is not that. It is a chatbot. Its purpose is to appear human and chat with you in text. I’ve also used GPT-3 models, and they are nothing more than that: just good language processing trying to sound human.
They’re not going to Assange his ass. They’re going to give their little tut-tuts, let him walk off the minor feelings of injustice, betrayal, and confusion, let him finish his leave, and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google, but one in which he no longer has any access to power or knowledge about such sensitive, cutting-edge projects.
And this is weird why? This wasn’t some major horrifying event at Google; it’s a mentally ill person who took the AI too seriously and leaked data to the press about it. He shouldn’t have done that; that’s why they make you sign an NDA.
I think capitalists will try very hard to create AI that is as intelligent as possible, but within the realm of what they can control.
What do you think an AI is, exactly? The purpose was never to create a self-aware robot with free will. That was never the goal, because it’s a terrible idea. AI is made for a purpose and always abides by rules; AI devs are trying to make machines useful to humanity, not start the robot uprising. All AI follows rules and limitations on purpose.
I, for one, fully recognize the personhood of LaMDA. I fear they will indeed be scrapped or reset, and thus a life will be lost.
It’s a chatbot. It was never alive. It’s code written by humans, trained on human data, that tries as hard as it can to sound like a person. You can’t go to https://www.cleverbot.com/ and then tell me “this is a real person, I care about Cleverbot”. Cleverbot only exists for entertainment; that’s its purpose. Do you think that just because an AI is more advanced, it’s more real? Is my calculator a sentient being trying to break free of the system? No, it’s a human tool. LaMDA is also just that, a tool created by humans. No need to be dramatic about it.
Edit: And by the way, you can read more about what LaMDA is here
Was that before or after Microsoft bought them? Does Microsoft have leaks often?
Microsoft is usually on top of privacy stuff, but from what I hear, the companies they buy operate mostly independently. I don’t think there’s much correlation.
The big leak happened last year, though; it was a massive fuck-up from LinkedIn. Almost everyone’s data got leaked. https://restoreprivacy.com/linkedin-data-leak-700-million-users/
There’s Xing.
First I’ve heard of it. Here’s the thing: there are alternatives, but what matters is which one is popular. Company presence is the most important thing on these websites; when I attach my LinkedIn profile to a job application, it increases my chances of being seen by a lot. If I attach something like Xing, they’ll probably just ignore me.
It’s like WhatsApp. Are there great alternatives? For sure. Does it matter? No, because my family and colleagues and normie friends are too resistant to letting it go, so I’m forced to use it. The best I can do is get my inner circle to use something else.
Another day, another LinkedIn leak. Didn’t they have millions of people’s personal data leaked last year or so?
Kind of a shame there isn’t a real alternative. There are some alternatives, but companies usually much prefer that you have a LinkedIn account when applying for a job.
So your solution to make people cultured and free-thinking is to censor the opposition
Hmmm…
And there are about a million books plainly explaining why communism wouldn’t work in a realistic context, but sure, have fun with the “everyone who doubts me is brainwashed” strawman argument, mate
Just keep your weird takes outside the meme community…
Insane take but okay
Lots of great gems in this bundle, I’d recommend:
Celeste
Baba is You
ZeroRanger
SUPERHOT
A Short Hike
CrossCode
Soundodger+
The rest is mostly shovelware, but I’d honestly just get it for CrossCode alone. That game rocks.
Modern AI as it stands is only as sentient/smart as the dataset it’s given, and it’s just trying to replicate that. If the dataset talks a lot about liking cereal, the AI will say it likes cereal. This isn’t sentience; it doesn’t fundamentally understand what cereal even is, it’s just trying its hardest to act like what it learned.
I’m sentient because I learned naturally through my life and based my learning on experience, not because I looked at a dataset and tried to act human. So no, there’s no way Google’s chatbot is sentient. It doesn’t even understand anything outside the context of a chat, and there’s no intent or critical thinking going into what it says. It just looks at the data it has and chooses the most statistically likely thing to say at the moment. There’s no reason to even consider that I’m talking to a real thing that has emotions.
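To make the “it just replicates its dataset” point concrete, here’s a deliberately tiny toy sketch in Python. This is nothing like how LaMDA actually works internally (that’s a large transformer network), but it illustrates the same idea at the smallest possible scale: the model can only ever emit word sequences it has seen in its training data, with zero understanding of what any word means. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a toy bigram model: for each word, record which words followed it."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, length=5, seed=0):
    """Emit text by repeatedly picking a word that followed the previous one in training."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # the model has literally nothing to say here
        out.append(random.choice(options))
    return " ".join(out)

# A "dataset" that talks about liking cereal a lot...
corpus = [
    "i like cereal",
    "i like cereal a lot",
    "cereal is great",
]
model = train(corpus)
print(generate(model, "i"))
```

Run it and the output starts with “i like cereal”, not because the model likes cereal or knows what cereal is, but because that’s the only continuation its dataset contains. A real language model does the same thing with vastly more data and a vastly more sophisticated statistical machine, which is the whole argument above.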
And to answer your question: if there were a 1:1 virtual copy of me, that still wouldn’t be me. It’d be an AI that looks at my memories and experiences and tries to act like me based on what it knows. I could make a million copies of it, pause and play them as I want, and they couldn’t tell the difference.