
Google fires AI engineer Blake Lemoine, who claims the LaMDA 2 AI is sentient


Blake Lemoine, the Google engineer who publicly claimed that the company’s conversational artificial intelligence, LaMDA, is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. In June, Google placed Lemoine on paid administrative leave for violating its confidentiality agreement after he contacted government officials about his concerns and hired a lawyer to represent LaMDA.

In a statement emailed to The Verge on Friday, Google spokesperson Brian Gabriel appeared to confirm the dismissal, saying, “We wish Blake well.” The company also said, “LaMDA has gone through 11 separate reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.” Google claims to have examined Lemoine’s claims “at length” and found them to be “completely unfounded.”

That assessment matches the view of numerous AI experts and ethicists, who have said his claims are more or less impossible given today’s technology. Lemoine claims that his conversations with LaMDA’s chatbot led him to believe it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to seem that way, which is what it is designed to do.

He argues that Google’s researchers should get LaMDA’s consent before conducting experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech), and he posted parts of those conversations on his Medium account as evidence.

The YouTube channel Computerphile has a decently accessible nine-minute explainer about how LaMDA works and how it might produce the answers that convinced Lemoine without actually being sentient.

Here is Google’s full statement, which also addresses Lemoine’s accusation that the company failed to properly investigate his claims:

As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA went through 11 different reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. When an employee raises a concern about our work, as Blake did, we investigate it thoroughly. We found Blake’s claim that LaMDA is sentient to be completely unfounded and worked with him for many months to clarify this. These discussions were part of the open culture that helps us innovate responsibly. It is therefore unfortunate that, despite a lengthy engagement with the issue, Blake has still chosen to persistently violate clear employment and data security policies, which include the need to protect product information. We will continue our careful development of language models, and we wish Blake all the best.
