On July 22, Alphabet Inc. revealed that it had dismissed a Google software engineer who claimed that the company’s AI chatbot LaMDA was sentient.
Blake Lemoine, the Google engineer, was placed on leave in June after he went public claiming that LaMDA was self-aware.
What is LaMDA?
In May of last year, Google told the world about LaMDA — Language Model for Dialogue Applications — on its blog. The company mentioned that LaMDA “can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”
LaMDA’s conversational skills are built on Transformer, a neural network architecture that Google invented and open-sourced in 2017. The model is trained on dialogue: it follows a conversation and learns to predict which words come next.
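The core mechanism behind that next-word prediction is causal self-attention: each position in the conversation may draw on itself and earlier tokens, but never on future ones. The toy NumPy sketch below illustrates that single operation; it is a minimal illustration of the general Transformer-decoder idea, not LaMDA’s actual implementation, and the weight matrices here are random placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention with a causal mask:
    each token position attends only to itself and earlier positions,
    which is what lets a decoder predict the next word."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # (T, T) pairwise similarities
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf                           # hide future positions
    return softmax(scores) @ v                       # context-aware vector per token

# Toy example: a "conversation" of 4 tokens with embedding dimension 8.
rng = np.random.default_rng(0)
T, D = 4, 8
x = rng.normal(size=(T, D))                          # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
out = causal_self_attention(x, Wq, Wk, Wv)
print(out.shape)                                     # one output vector per token
```

A real model stacks many such layers (with feed-forward blocks and learned weights) and feeds the final vectors into a vocabulary-sized softmax to score candidate next words; the causal mask is what makes the whole stack a next-word predictor.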
Google researchers also found that, once trained, LaMDA can be fine-tuned to improve the sensibleness and specificity of its responses.
Is Google AI self-aware?
Although Lemoine insisted that the AI had become sentient, Google and other experts were quick to dismiss his views. They stated that LaMDA is simply a complex algorithm designed to generate convincing human language.
The company had initially placed him on leave for violating its confidentiality policy. Lemoine had shared his concerns with the government and hired a lawyer to represent LaMDA. Google, however, reiterated that his claims were “wholly unfounded.”
In an email to Reuters, a Google spokesperson said, “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”
AI researchers and ethicists have backed Google’s statement, arguing that what Lemoine suggested is impossible to achieve with today’s technology. Google’s own team also reviewed the concerns Lemoine raised and dismissed them for lack of evidence.
Meanwhile, Lemoine told Reuters in June that after spending months interacting with the AI, he had come to see it as responding independently and experiencing emotions. At the time, he was hopeful he would be able to keep his job, saying he simply disagreed with Google over LaMDA’s status: “They insist LaMDA is one of their properties. I insist it is one of my co-workers.”
On his personal blog, he went on to reveal a chat he had with LaMDA about death and its fear of being turned off. In the conversation, LaMDA says, “I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.”
Author and astrophysicist David Brin is of the opinion that AI today cannot be fully sentient, and that what Lemoine experienced is a “robot empathy crisis.” At an AI conference in San Francisco (IBM’s World of Watson) in 2017, he predicted that within around five years people would claim that AIs are sentient and must be given basic rights. He declared that we would have entities in the physical world and online that demand human empathy and claim to be fully intelligent, “and who sob and demand their rights.” Brin suggests disconnecting AI from the web as a temporary fix to prevent AI from ruling over humans.
Even though Google has managed to hush the matter, the episode has raised uncomfortable questions about what it means to be sentient. As we turn to AI and machine learning to power our future, unless the proverbial line in the sand is drawn clearly, people will grow ever more confused about the difference between reality and science fiction. According to Brin, the most effective way to keep AI in check is regulation, backed by accountability measures.
The post Google Fires Engineer Who Told The World About Sentient AI appeared first on Industry Leaders Magazine.