Guest post: AI surveillance in prisons is a terrible idea, both technologically and ethically

University of Washington professors Rachael Tatman and Emily M. Bender. (UW Photos)

Editor’s note: This is a guest post written by University of Washington professors Emily M. Bender and Rachael Tatman on the use of AI in prison settings.


On August 9, the Thomson Reuters Foundation reported that a panel in the U.S. House of Representatives had asked the Justice Department to explore using so-called “artificial intelligence” (AI) technology to monitor the phone communications of incarcerated people, with the stated purpose of preventing violent crime and suicide.

This is not a hypothetical exercise: LEO Technologies, a company “made for police by police,” already offers automated monitoring of incarcerated people’s phone calls with their loved ones as a service.


As linguists who study the development and application of speech recognition and other language technologies, including how well (or poorly) they work with different varieties of language, we want to say clearly and strongly that this is a terrible idea, both technically and ethically.

We oppose mass surveillance in any form, especially when it is used against vulnerable populations without their consent or any ability to opt out. Even if such surveillance could be shown to be in the best interest of incarcerated people and the communities they belong to – which we do not believe it could be – any attempt to automate the process magnifies its potential for harm.


The primary claimed benefit of this technology for incarcerated people, suicide prevention, is not achievable with an approach “based on keywords and phrases” (as LEO Technologies describes its product). Even Facebook’s suicide prevention program, which has itself drawn scrutiny from legal and ethics scholars, found keywords to be an ineffective approach because they do not take context into account. Furthermore, humans often treat the output of a computer program as “objective,” and therefore make decisions based on faulty information without questioning whether it is faulty.

And even if the ability to prevent suicide were concrete and demonstrable, which it is not, it would come with enormous potential for harm.

Automated transcription is a key part of these product offerings. The effectiveness of speech recognition systems depends on a close match between their training data and the input they receive in their deployment context, and for most modern speech recognition systems this means that the further a person’s speech diverges from the newscaster standard, the less accurately the system will transcribe their words.

Not only would such systems undoubtedly produce unreliable information (while appearing deceptively objective), they would also fail most frequently for the very people the U.S. justice system already fails most often.

A 2020 study that included the Amazon service used by LEO Technologies for speech transcription confirmed earlier findings that word error rates for African American English speakers were nearly twice those for white speakers. Given that African Americans are imprisoned at nearly five times the rate of white Americans, these tools are deeply unsuited to this application and have the potential to exacerbate already unacceptable racial inequities.
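To make the cited metric concrete: word error rate (WER) is the word-level edit distance between a reference transcript and the system’s output, divided by the number of words in the reference. The following minimal sketch illustrates the calculation; the example transcripts are our own invention, not data from the study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed as word-level Levenshtein distance via dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical transcripts: one substitution plus one deletion across
# six reference words gives a WER of 2/6, i.e. one word in three wrong.
print(word_error_rate("we finna go to the store", "we fina go to store"))
```

A system whose WER is twice as high for one group of speakers is, by this measure, getting roughly twice as many of their words wrong.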

This surveillance, which extends not only to incarcerated people but also to the people they talk with, is an unconscionable breach of privacy. Adding so-called “AI” would make it worse: machines unable to accurately transcribe the warm, familiar language of home would nonetheless lend a false sheen of “objectivity” to erroneous transcripts. Should the loved ones of incarcerated people bear the burden of defending themselves against accusations based on faulty transcriptions of what they said? This invasion of privacy is particularly galling given that incarcerated people and their families already pay exorbitant rates for phone calls in the first place.

We urge Congress and the DOJ to abandon this path and avoid incorporating automated prediction into our legal system. LEO Technologies claims to “move the paradigm of law enforcement from reactive to predictive,” a paradigm that seems fundamentally at odds with a justice system in which guilt must be proven.

And, finally, we urge everyone involved to be deeply skeptical of claims made about “AI” applications. This is especially true when they have real impacts on people’s lives, and even more so when the people affected, like those who are incarcerated, are particularly vulnerable.
