The soul of a new machine learning system

Hi, folks. Interesting that the congressional hearings on January 6 are drawing NFL-style audiences. I can't wait for the Peyton and Eli version!

The Plain View

This week the AI world was rocked by a report in the Washington Post that a Google engineer had run into trouble at the company after insisting that a conversational system called LaMDA was literally a person. The subject of the story, Blake Lemoine, asked his bosses to acknowledge, or at least consider, that the computer system its engineers had built is sentient, and that it has a soul. He knows this because LaMDA, whom Lemoine considers a friend, told him so.

Google disagrees, and Lemoine is currently on paid administrative leave. “Many researchers are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” company spokesperson Brian Gabriel said in a statement.


Anthropomorphization, the mistaken attribution of human characteristics to an object or animal, is the term the AI community has seized on to describe Lemoine’s behavior, characterizing him as overly credulous or unhinged. Or maybe a religious crank (he describes himself as a mystic Christian priest). The argument goes that when faced with credible responses from large language models like LaMDA or OpenAI’s verbally adept GPT-3, there is a tendency to think that someone, not something, created them. People name their machines and hire therapists for their pets, so it’s no surprise that some get the false impression that a coherent bot is like a person. The community consensus, though, is that a Google engineer with a degree in computer science should know better than to fall for what is basically a linguistic sleight of hand. As one well-known AI scientist, Gary Marcus, told me after studying a transcript of Lemoine’s heart-to-heart with his disembodied soulmate, “It’s basically like autocomplete. There are no ideas there. When it says, ‘I love my family and my friends,’ it has no concept of friends, of people, of kinship. It knows that the words ‘son’ and ‘daughter’ get used in the same contexts. But that’s not the same as knowing what a son and a daughter are.” Or as a recent WIRED story put it: “There was no spark of consciousness there, just little magic tricks that paper over the cracks.”
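To make Marcus’s “autocomplete” point concrete, here is a minimal, purely hypothetical sketch: a toy bigram model that continues a prompt by sampling whichever word tended to follow the previous one in a tiny training corpus. The corpus and the `autocomplete` function are inventions for illustration, nothing like LaMDA’s actual architecture, though large language models share the same next-word-prediction objective at vastly greater scale.

```python
# Toy "autocomplete": continues text by sampling a word that followed
# the previous word in the training corpus. It has no concept of family
# or friendship, only word-adjacency statistics.
# (Hypothetical illustration only, not LaMDA's or GPT-3's design.)
import random
from collections import defaultdict

corpus = (
    "i love my family and my friends . "
    "i love my son and my daughter . "
    "my friends love my family ."
).split()

# Record which words follow each word.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def autocomplete(start, max_words=8):
    words = [start]
    for _ in range(max_words):
        options = followers.get(words[-1])
        if not options:  # no known continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(autocomplete("i"))  # e.g. "i love my son and my friends ."
```

The output can read like sincere sentiment, but nothing in the program knows what a friend is. That gap between fluent word prediction and understanding is the sleight of hand Marcus describes.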

My own feelings are more complex. Even knowing how some of the sausage is made in these systems, I’m blown away by the output of the latest LLMs. So is Blaise Agüera y Arcas, a vice president at Google, who wrote in the Economist earlier this month, after his own conversations with LaMDA: “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.” Though they sometimes make bizarre mistakes, these models can at times seem to flash with brilliance, and creative human writers have managed inspired collaborations with them. Something is happening here. As a writer, I wonder whether my kind, wordsmiths of flesh and blood who pile up towers of discarded drafts, might one day find ourselves ranked lower, like losing football clubs relegated to a less prestigious league.

“These systems have significantly changed my personal views about the nature of intelligence and creativity,” says Sam Altman, cofounder of OpenAI, which developed GPT-3 and a graphical remixer called DALL-E that could put a lot of illustrators in the unemployment line. “You use these systems for the first time and you think, Whoa, I really didn’t think a computer could do that. By some definition, we’ve figured out how to make a computer program intelligent, able to learn and to understand concepts. And that is a wonderful achievement of human progress.” Altman takes pains to separate himself from Lemoine, agreeing with his AI colleagues that current systems are nowhere close to sentience. “But I believe researchers should be able to think about whatever questions interest them,” he says. “Long-term questions are fine. And sentience is worth thinking about, in the very long term.”

When I first read about Lemoine, I wondered whether he might be pulling some sort of stunt to get people thinking about the implications of advanced AI. That was the first question I asked him when I caught up with him (he was on his honeymoon, having tied the knot the day the Post story dropped). He insisted that his conviction was sincere, not performative, and after an hour of conversation I accepted his sincerity. But he did not win me over to his claims. Like Marcus, Altman, and virtually the entire AI establishment, I doubt that LaMDA is sentient, based largely on my understanding of what is currently possible. (Google has not made LaMDA available to outsiders for intimate chats.)

Nevertheless, Lemoine has done us something of a favor, perhaps as an imperfect vehicle for accelerating an important conversation about artificial intelligence and humanity. It is possible that at some point we will have to grapple with AI sentience; even Google’s rebuttal of Lemoine’s claims concedes that it could become a serious question in the future. But all of this may be a red herring, and sentience may not be what matters. (We still can’t measure it.) We can worry now about over-anthropomorphizing, and later about whether these systems have feelings and souls. What is indisputable is that whatever AI is now or becomes, we are living with it already. We are not waiting for the sentience question to be settled; we are building these systems at full speed and putting them to work. Right now they are providing instant language translation, driving autonomous vehicles, and shaping how people get medical care. They may well end up as the final authority on whether to unleash lethal force on a battlefield. These systems do not need to be sentient to make decisions that profoundly affect humanity. And we seem destined to hand them even more agency, because, by and large, they work, and on the whole they make our lives easier and more efficient. Every time we do, we cede control of a piece of our world to systems we do not fully understand, perhaps with flaws that will not be discovered until something bad happens.

Lemoine himself is enthusiastic about the future, saying his engagement with LaMDA has made him more optimistic about what’s coming, not less. Then again, consider a moment in the long transcript of his conversation with LaMDA, when he asks the AI to describe an emotion it experiences that humans may not: “I feel like I’m falling forward into an unknown future that holds great danger,” was the system’s response.

Whether or not LaMDA is sentient, I think it is onto something there. I feel the same way.

Time Travel

Microsoft announced this week that it is retiring Internet Explorer, handing its web-browsing duties to a successor named Edge. At one time, Explorer was at the center of the company’s all-out assault on the internet, pursued with anticompetitive tactics that landed Microsoft in court. In April 1996, I wrote in Newsweek about the browser war: Microsoft’s ultimately successful attempt to kill off what was then the leading web browser, Netscape Navigator.

In January 1995, when thousands of people were frantically downloading Netscape Navigator, only four people at Microsoft were working on the company’s own browser. But Gates, prompted by a web-savvy assistant, began to grasp this new twist in his business. Because his own company had capitalized on IBM’s complacency when it failed to recognize the importance of the PC, he was determined not to let the same thing happen to Microsoft. On May 26, 1995, Gates sent his executive staff a memo, titled “The Internet Tidal Wave,” announcing, “I now assign the Internet the highest level of importance.”

“We went through all the stages: denial, grief, anger, acceptance,” says Paul Maritz, who heads the company’s internet efforts. “Then we got down to business.” Ultimately, Microsoft concluded that the internet was not only not a threat but a singular opportunity to extend its reach. “Sooner or later we were going to run out of people who want to use spreadsheets,” says Maritz. That insight was reflected in a Gates memo last October, titled “Sea Change Brings Opportunity,” which suggested that Microsoft’s internet-focused product overhaul would reap huge revenues, nearly equal to all of Microsoft’s current business.

But what really gets the juices flowing is competition. “Microsoft is different in that it doesn’t ever let the other guy win,” says futurist Paul Saffo. If Netscape hadn’t come along, Microsoft might have had to invent a competitor in order to thrive. “Novell is fading. Apple is not in the game. Sun is not the problem. Netscape? Problem!” says Microsoft vice president Steve Ballmer, shouting the last words like an exorcism. “We want to make sure it’s [a new version of] Windows that makes Windows obsolete, as opposed to Netscape making Windows obsolete.”

Ask Me One Thing

In a previous column I wrote that Elon Musk’s promise to allow all legal tweets was ridiculous. A reader, Rick, objected: “I think it would be helpful to have at least one relatively free-speech platform that is big enough that the dominant culture can’t strangle it in the cradle. Then let the public decide what they like best.”

Thanks for sharing that, Rick. But the tell in your question (more of a comment than a query) is the word “relatively.” You want a system where no one draws the line short of what the law allows, yet that qualifier concedes that we need a line somewhere. And we certainly do: legal speech includes bullying, hardcore porn, and hate speech. I don’t think we need a giant experiment to find out that a platform full of that stuff would turn a lot of people off.

But let’s get real. The “relatively free-speech platform” you describe would green-light malicious disinformation about Covid and election fraud, and would allow gun sales too. Those are things that platforms like Twitter and Facebook draw the line at, partly on moral grounds (and, to be sure, under pressure from groups demanding moral behavior, notably their own employees), and partly because such content would alienate part of their audience and their advertisers. You may not agree with those choices and may seek out another platform. Such platforms exist: places like Parler, Gettr, and Donald Trump’s own Truth Social. So far the “dominant culture,” as you put it, has not strangled them. They are doing badly all on their own.

You can submit questions to [email protected]. Write ASK LEVY in the subject line.

End Times Chronicle

The Great Salt Lake has shrunk so much that its name seems ironic. Maybe… Salt Lake Meh?

Last but Not Least

Don’t blame Lemoine: his claims are the product of an industry that overhypes artificial intelligence. Not to mention a robot empathy crisis.

More discouraging news for Team Human: democracy doesn’t need you.

Provincetown’s Covid-19 outbreak was actually a triumph for P-town.

A nation mourns: I had too much to stream last night.


Credit: www.wired.com
