Axon's plan to build Taser drones prompts AI ethics board resignations

A majority of Axon’s AI ethics board resigned in protest yesterday, after the company announced plans last week to equip drones with Tasers and cameras as a way to end mass shootings in schools.

The company put the plan on hold Sunday, but the damage was done. Axon first asked the advisory board to consider a pilot program to outfit a select number of police departments with Taser drones last year, and again last month. A majority of the ethics advisory board, which includes AI ethics experts, law professors, and advocates for police reform and civil liberties, objected both times. Advisory board chair Barry Friedman told WIRED that Axon never asked the group to weigh any school-related scenario, and that launching the pilot program without addressing previously voiced concerns showed disregard for the board and its established process.

In a joint letter of resignation released today, nine members of the AI ethics board said the company appears to be “trading on the tragedy of the recent mass shootings” in Buffalo and Uvalde, Texas. Despite mentioning both mass shootings in the press release announcing the pilot, Axon CEO Rick Smith denied accusations that the company’s pitch was opportunistic in a Reddit AMA. Smith said the Taser drone could still be years away, but he anticipates each school having 50 to 100 Taser drones operated by trained personnel. Before Axon put the pilot on hold, Friedman called it a “poorly conceived idea” and said that if the Taser drone was unlikely to ever materialize, Axon’s pitch “distracts the world from real solutions to a serious problem.”

Another signatory of the resignation letter, University of Washington law professor Ryan Calo, calls Axon’s idea of testing Taser drones in schools a “very, very bad idea.” Meaningful change to curb gun violence in the United States requires addressing issues such as alienation, racism, and access to guns, he says. The children in Uvalde, Texas, did not die because the school lacked Tasers, Calo said.

“If we’re going to tackle school violence, we all know there are much better ways to do it,” he says.

The board had expressed concern that armed drones could lead to more frequent use of force by police, particularly against communities of color. A report detailing the advisory board’s assessment of the pilot program was due out this fall.

The real disappointment, Calo says, isn’t that the company didn’t do exactly what the board advised; it’s that Axon announced its Taser drone plans before the board had a chance to detail its opposition. “All of a sudden, out of nowhere, the company decided to just drop this process,” he says. “That’s why it’s so discouraging.”

He finds it hard to imagine that police or trained staff at a school would have the situational awareness to use a Taser drone wisely. And even if a drone operator did save the lives of suspects or of people in marginalized or vulnerable communities, the technology wouldn’t stay confined to that use.

“I think there will be mission creep and that they’ll start using it in more and more contexts, and I think Axon’s announcement about using it in a completely different context is proof of that,” Calo says. “A world of ubiquitous cameras and remotely deployed Tasers is not the world I want to live in. Period.”

Axon’s is the latest outside AI ethics board to clash with its associated tech company. Google famously convened and then dissolved an AI ethics advisory group in the span of about a week in 2019. These groups often operate without a clear structure beyond asking members to sign a non-disclosure agreement, and companies can use them for “ethics washing” rather than substantive input, says Cortnie Abercrombie, founder of the nonprofit AI Truth. Her organization is currently studying best practices for corporate AI ethics.

In Axon’s case, several AI ethics board members who spoke to WIRED said the company had listened to their suggestions in the past, including a 2019 decision not to use face recognition on body cameras. That history made the sudden Taser drone announcement all the more exasperating.

Companies tend to have conflicts between the people who understand a technology’s risks and limitations and those who want to ship products and turn a profit, says Wael AbdAlmageed, a computer scientist at the University of Southern California who resigned from Axon’s AI ethics board. If companies like Axon want to take AI ethics seriously, the role of these boards can no longer be merely advisory, he said.

“If the AI ethics board says a technology is problematic and the company shouldn’t develop products with it, then it shouldn’t. I know it’s a tough proposition, but I really think that’s how it should be done,” he says. “We saw problems at Google and other companies with the people they hired to talk about AI ethics.”

The AI ethics board tried to convince Axon that it should answer to the needs of the communities affected by its products, rather than to the police departments that buy them, Friedman said. The company has set up a community advisory committee, but Friedman says that until AI ethics boards figure out how to involve local communities in the procurement process, “police technology vendors will continue to play to the police.”

Five members of the AI ethics board did not sign the resignation letter. They include former Seattle Police Department chief Carmen Best, former Los Angeles Police Department chief Charlie Beck, and former California Highway Patrol commissioner Warren Stanley.




Credit: www.wired.com
