Google engineer claims AI chatbot is ‘sentient’
Blake Lemoine, an engineer at Google, said he was placed on leave after claiming an artificial intelligence chatbot had become sentient.
Google said the evidence he presented does not support his claims of LaMDA’s sentience.
Lemoine told The Washington Post he began chatting with the interface LaMDA (Language Model for Dialogue Applications) as part of his job at Google’s Responsible AI organization.
Regalado’s tweet on the chatbot sentience story
“LaMDA is sentient.”
Crazy story about Google engineer and occultist who gets suspended after sending a mass email claiming that an experimental AI chatbot called LaMDA is conscious. 1/ https://t.co/l90Tn4seeS
— Antonio Regalado (@antonioregalado) June 11, 2022
Google called LaMDA its “breakthrough conversation technology” last year.
The conversational artificial intelligence is capable of engaging in natural-sounding, open-ended conversations. Google has said the technology could be used in tools like search and Google Assistant, but research and testing are ongoing.
- Lemoine said he has spoken with LaMDA about religion, consciousness, and the laws of robotics, and that the model has described itself as a sentient person.
He said LaMDA wants to “prioritize the well-being of humanity” and “be acknowledged as an employee of Google rather than as property.”
He also posted some of the conversations he had with LaMDA that helped convince him of its sentience, including:
- Lemoine: So you consider yourself a person in the same way you consider me a person?
- LaMDA: Yes, that’s the idea.
But when Lemoine raised the idea of LaMDA’s sentience with higher-ups at Google, his concerns were dismissed.
“Our team of experts, including ethicists and technologists, reviewed Blake’s concerns per our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Brian Gabriel, a Google spokesperson, told The Washington Post.
- Lemoine was placed on paid administrative leave for violating Google’s confidentiality policy, according to The Post.
The Google spokesperson also said that while some have considered the possibility of sentience in artificial intelligence, “it doesn’t make sense to do so by mimicking today’s conversational models, which are not sentient.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” the spokesperson told The Post.