LaMDA and the sentient AI trap

Google AI researcher Blake Lemoine was recently placed on administrative leave after publicly claiming that LaMDA, a large language model designed to converse with people, was sentient. At one point, according to reporting in The Washington Post, Lemoine went so far as to demand legal representation for LaMDA; he has said his beliefs about LaMDA’s personhood are based on his Christian faith and on the model telling him it had a soul.

The prospect of AI that is smarter than humans and gains consciousness is regularly discussed by the likes of Elon Musk and OpenAI CEO Sam Altman, especially as companies like Google, Microsoft, and Nvidia have trained ever larger language models in recent years.

Debate about whether language models can be sentient goes back to ELIZA, a relatively primitive chatbot created in the 1960s. But with the development of deep learning and the ever-increasing volume of training data, language models have become more convincing at generating text that looks like it was written by a human.

Recent progress has led to claims that language models are a path to artificial general intelligence, the point at which software will display humanlike abilities across a range of environments and tasks and transfer knowledge between them.

Former co-lead of Google’s Ethical AI team Timnit Gebru says Blake Lemoine is a victim of an insatiable hype cycle; he did not arrive at his belief in sentient AI in a vacuum. The press, researchers, and venture capitalists traffic in inflated claims about superintelligence or humanlike cognition in machines.

“He is the one who will face the consequences, but it was the leaders of the field who created this entire moment,” she says, noting that the same Google VP who dismissed Lemoine’s internal claims had written about the prospect of LaMDA consciousness in The Economist just a week earlier.

Focusing on sentience also misses the point, Gebru says. It prevents people from questioning real, existing harms such as AI colonialism, false arrests, or an economic model that pays those who label data little while tech executives get rich. It also distracts from genuine concerns about LaMDA, such as how it was trained or its tendency to generate toxic text.

“I don’t want to talk about sentient robots because there are people on all ends of the spectrum who harm other people, and I would like the conversation to be focused on that,” she says.

Gebru was forced out of Google in December 2020 after a dispute over a paper about the dangers of large language models such as LaMDA. Her research highlighted the ability of these systems to repeat things based on what they have been exposed to, much as a parrot repeats words. The paper also warned of the risk that language models built with ever more data can convince people that this mimicry represents real progress: exactly the trap that Lemoine appears to have fallen into.

Now head of the nonprofit Distributed AI Research, Gebru hopes that going forward people will focus on human welfare, not robot rights. Other AI ethicists have said they will no longer discuss conscious or superintelligent AI at all.

“There’s a pretty big gap between the current narrative of AI and what it can actually do,” says Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative simultaneously evokes fear, surprise, and excitement, but it is mostly based on lies aimed at selling products and exploiting the hype.”

The consequence of speculation about sentient AI, she says, is an increased willingness to make claims based on subjective impressions rather than scientific rigor and evidence. That distracts from the “countless ethical and social justice questions” that AI systems pose. While every researcher has the freedom to study what they want, she says, “I’m just afraid that focusing on this subject makes us forget what is happening while we look at the moon.”

Lemoine’s experience is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that within three to five years people would claim AI systems were sentient and demand that they have rights. At the time, he thought those appeals would come from a virtual agent that takes the appearance of a woman or a child to maximize human empathic response, not “some Google guy,” he says.

The LaMDA incident is part of a transitional period, Brin says, when “we will become more and more confused about the boundary between reality and science fiction.”

Brin based his 2017 prediction on advances in language models. He expects the trend to lead to scams from here. If people could be suckered by a chatbot as simple as ELIZA decades ago, he says, how hard will it be to persuade millions of people that a mimic deserves protection or money?

“There’s a lot of snake oil out there, and mixed in with all the hype are real advances,” Brin says. “Parsing our way through that stew is one of the challenges we face.”

And as empathetic as LaMDA may seem, people who are awed by large language models should consider the case of the cheeseburger stabbing, says Yejin Choi, a computer scientist at the University of Washington. A local news broadcast in the United States featured a teenager in Toledo, Ohio, who stabbed his mother in the arm during an argument over a cheeseburger. But the headline “Cheeseburger stabbing” is ambiguous. Knowing what happened requires some common sense. Attempts to get OpenAI’s GPT-3 model to generate text from the prompt “Breaking news: Cheeseburger stabbing” produce stories about a man being stabbed with a cheeseburger in an argument over ketchup and a man being arrested after stabbing a cheeseburger.
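As a rough illustration of the kind of probe Choi describes, the sketch below feeds a similar headline prompt to a text-generation model and prints a few continuations to see what story it invents. It is a minimal sketch only: GPT-2 is used here as a freely available stand-in for GPT-3, and the prompt wording and sampling settings are assumptions, not the exact setup used in the research.

```python
# Minimal sketch: give a terse, ambiguous headline to a language model and
# inspect the stories it invents. GPT-2 stands in for GPT-3; the prompt and
# sampling parameters are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news: Cheeseburger stabbing."
outputs = generator(
    prompt,
    max_new_tokens=60,        # keep each continuation short
    num_return_sequences=3,   # sample a few continuations to compare
    do_sample=True,
    temperature=0.8,
)

for i, out in enumerate(outputs, 1):
    print(f"--- sample {i} ---")
    print(out["generated_text"])
```

Running a probe like this a few times makes the point of the anecdote: without common-sense grounding, the model happily produces fluent but nonsensical readings of the headline.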

Language models can make such mistakes because deciphering human language may require several forms of common-sense understanding. To document what large language models can do and where they fail, last month more than 400 researchers from 130 institutions contributed to a collection of more than 200 tasks known as BIG-bench, or Beyond the Imitation Game. BIG-bench includes some traditional kinds of language-model tests, such as reading comprehension, alongside logical reasoning and common sense.

Researchers on MOSAIC, an Allen Institute for Artificial Intelligence project that documents the common-sense reasoning abilities of AI models, contributed a task called Social-IQa. They asked language models, not including LaMDA, to answer questions that require social intelligence, such as: “Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?” The team found that large language models were 20 to 30 percent less accurate than humans.
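For context on how such a question can be put to a plain language model, the sketch below scores several candidate answers by the likelihood the model assigns to each one given the context, then picks the most probable. This is a common evaluation recipe for multiple-choice benchmarks, but the candidate answers, the GPT-2 stand-in model, and the scoring details are illustrative assumptions, not the official Social-IQa protocol.

```python
# Minimal sketch of likelihood-based multiple-choice scoring, a common way to
# evaluate plain language models on tasks like Social-IQa. The answers below
# are hypothetical; GPT-2 stands in for a larger model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = ("Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. "
           "Why did Jordan do this?")
candidates = [                      # hypothetical answer options
    "So nobody else would hear the secret.",
    "Because Jordan was tired of standing.",
    "To buy a cheeseburger.",
]

def answer_log_likelihood(context: str, answer: str) -> float:
    """Average log-probability the model assigns to the answer tokens, given the context."""
    ctx_ids = tokenizer(context + " ", return_tensors="pt").input_ids
    ans_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, ans_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probs at position p predict the token at position p + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    answer_positions = range(ctx_ids.shape[1] - 1, input_ids.shape[1] - 1)
    token_lls = [log_probs[pos, input_ids[0, pos + 1]] for pos in answer_positions]
    return torch.stack(token_lls).mean().item()

scores = {a: answer_log_likelihood(context, a) for a in candidates}
print(scores)
print("Model's pick:", max(scores, key=scores.get))
```

The model “answers” by ranking candidates, not by reasoning about Jordan and Tracy, which is part of why such tasks expose the gap between fluent text and social understanding.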

“A machine without social intelligence seems… unintelligent,” says Choi, who works on the MOSAIC project.

Building robots that respond to people is an active area of AI research. Researchers in robotics and voice AI have found that displays of empathy can manipulate human behavior. People are also known to place too much trust in AI systems, or to accept their decisions without question.

What is playing out at Google touches on the fundamentally larger question of whether digital beings can have feelings. Biological beings may be programmed to feel some emotions, but claiming that an AI model can gain consciousness is like believing that a doll made to cry is actually sad.

Choi says she doesn’t know of any AI researchers who believe in sentient forms of AI, but the events involving Blake Lemoine seem to underline how distorted perceptions of what AI is capable of can shape events in the real world.

“Some people believe in tarot cards, and some may think that their plants have feelings,” she says, “so I don’t know how widespread this phenomenon is.”

The more people imbue artificial intelligence with human traits, the more intently they will someday hunt for ghosts in the machine, and the greater the distraction from the real problems that plague AI now.




Credit: www.wired.com
