Engineer claims Google's AI is sentient — Google says it's just very convincing
Blake Lemoine claims that conversing with the AI chatbot system is like talking to a kid "that happens to know physics."
What you need to know
- Google has placed one of its engineers on paid leave after he raised concerns about AI ethics within the company.
- Blake Lemoine has claimed that Google's LaMDA chatbot system has gained a level of perception comparable to humans.
- Google says there's no evidence to back up Lemoine's assertions.
A Google engineer who works for the company's Responsible AI organization has been placed on paid leave after he raised concerns that the LaMDA chatbot system has become sentient.
Blake Lemoine has claimed that LaMDA (Language Model for Dialogue Applications) is thinking like a "7-year-old, 8-year-old kid that happens to know physics," according to The Washington Post. Google introduced LaMDA at its I/O event last year to make Google Assistant more conversational.
Last fall, Lemoine was testing whether the AI used discriminatory or hate speech. After his conversations with LaMDA, the engineer concluded that it was far more than a system for generating chatbots. In April, he escalated his concerns to Google executives through a document containing a transcript of his conversations with LaMDA.
Lemoine said he has conversed with LaMDA about various topics, such as rights, religion, and the laws of robotics. The AI has described itself as a "person" because it has "feelings, emotions and subjective experience." Lemoine also said LaMDA wants to "prioritize the wellbeing of humanity" and "be acknowledged as an employee of Google rather than as property."
You can read the full transcript of the conversation via Lemoine's Medium post, which he published after Google's executives dismissed his assertions.
The search giant has denied Lemoine's claims. In a statement to Android Central, Google spokesperson Brian Gabriel said the company is "not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," referring to the hundreds of researchers and engineers who have conversed with LaMDA.
"Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," Gabriel added.
The Google spokesperson also touted the "rigorous research and testing" LaMDA has undergone, as well as the series of 11 AI Principles reviews the system has been put through. Gabriel said these evaluations are "based on key metrics of quality, safety and the system's ability to produce statements grounded in facts."
"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today’s conversational models, which are not sentient," he said. "These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."
Lemoine was subsequently put on paid administrative leave for violating Google's confidentiality policy, The Post reports. He also reportedly sought to hire a lawyer to represent LaMDA and even talked to members of the House Judiciary Committee about Google's alleged unethical activities.
Regardless of the merits of Lemoine's claims, the suspension is likely to invite fresh scrutiny of Google's handling of AI ethics.
Jay Bonggolto always has a nose for news. He has been writing about consumer tech and apps for as long as he can remember, and he has used a variety of Android phones since falling in love with Jelly Bean. Send him a direct message via Twitter or LinkedIn.