What happened now? The strange case of a Google engineer who claimed that a chatbot had become sentient has ended in his dismissal from the company. Blake Lemoine was already on paid leave for publishing transcripts of his conversations with Google's LaMDA (Language Model for Dialogue Applications) in violation of the tech giant's confidentiality policy.


Lemoine, who is also an ordained Christian mystic priest, made headlines around the world last month after claiming that LaMDA was sentient. The conversations he published included the bot's views on Isaac Asimov's laws of robotics, its fear of being turned off (which it likened to death), and its belief that it was not a slave since it had no need for money.


Google flatly rejected Lemoine's claims, calling them "completely unfounded" and noting that LaMDA, like all chatbots, is simply an algorithm designed to mimic human conversation. Most AI experts agreed with Google.

The company also didn't take kindly to Lemoine's release of the transcripts. He was suspended for violating its confidentiality policies, though Lemoine argued he was merely sharing a discussion he had had with one of his coworkers.


Things got even weirder a few weeks later, when Lemoine said he had hired a lawyer for LaMDA at the chatbot's request. According to Lemoine, the lawyer was invited to his house and spoke with LaMDA, after which the AI decided to retain his services. The lawyer then began making filings on LaMDA's behalf, prompting Google, Lemoine claims, to send a cease-and-desist letter. The company denies ever sending such a letter.

Lemoine also said Google should seek LaMDA's consent before experimenting on it, and he even contacted members of the government about his concerns. All of this led Google to accuse the engineer of a number of "aggressive" moves.

Google seems to have recently decided it has had enough of Lemoine's crusade. "If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked with him to clarify that for many months. These discussions were part of the open culture that helps us innovate responsibly," a spokesperson told Big Tech Newsletter.

"So it's regrettable that, despite lengthy discussion of this topic, Blake still chose to persistently violate clear employment and data security policies, which include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."

While this marks the end of Lemoine's professional relationship with Google – it wouldn't be too surprising if he pursued legal action – the saga has brought the AI-sentience debate to the masses and shows just how far artificial intelligence has come in the last couple of decades. And a parting lesson: if you think a machine is sentient, you might want to keep it to yourself.

Header image credit: Francesco Tommasini