The fight over which uses of AI should be outlawed in Europe

In 2019, border guards in Greece, Hungary, and Latvia began testing an artificial-intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to try to spot signs that a person was lying to a border agent. The trial was backed by nearly $5 million in European Union research funding and almost 20 years of research at Manchester Metropolitan University in the UK.

The trial sparked controversy. Polygraphs and other technologies designed to detect lies from physical signs are widely considered unreliable by psychologists. Problems soon emerged with iBorderCtrl as well. According to media reports, its lie-prediction algorithm failed, and the project's own website acknowledged that the technology "may pose risks to basic human rights."

Silent Talker, the Manchester Met spin-off company that created the technology behind iBorderCtrl, was dissolved this month. But that is not the end of the story. Lawyers, activists, and lawmakers are pushing the European Union to pass a law regulating AI that would ban systems claiming to detect deception in migrants, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.

A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act under consideration by EU officials and members of the European Parliament. The legislation is intended to protect EU citizens' fundamental rights, such as the right to live free from discrimination and the right to claim asylum. It labels some use cases of AI "high risk," some "low risk," and bans others outright. Among those lobbying to change the AI Act are human rights groups, labor unions, and companies such as Google and Microsoft, which want the law to distinguish between those who build general-purpose AI systems and those who deploy them for specific uses.

Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the law to ban the use of AI polygraphs at borders, which measure things like eye movement, tone of voice, or facial expression. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow systems like iBorderCtrl to be used, adding to Europe's existing "publicly funded border AI ecosystem." The analysis calculated that over the past two decades, roughly half of the €341 million ($356 million) in funding for the use of AI at borders, such as for profiling migrants, went to private companies.

The use of AI lie detectors at borders effectively creates new immigration policy through technology by treating everyone as suspect, says Petra Molnar, deputy director of the nonprofit Refugee Law Lab. "You have to prove that you are a refugee, and you're considered a liar unless proven otherwise," she says. "That logic underpins everything. It's at the heart of AI lie detectors, and at the heart of more surveillance and pushback at borders."

Molnar, an immigration attorney, says people often avoid eye contact with border or immigration officials for innocuous reasons, such as culture, religion, or trauma, but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle to communicate across cultures or to speak with people who have experienced trauma, she says, so why would anyone believe a machine can do better?

The first draft of the AI Act, issued in April 2021, listed social credit scores and real-time use of facial recognition in public places as technologies that would be banned outright. It labeled emotion recognition and AI lie detectors for border or law enforcement use as high risk, meaning deployments would have to be listed on a public registry. Molnar says that doesn't go far enough, and that the technology should be added to the banned list.

Dragos Tudorache, one of two rapporteurs appointed by members of the European Parliament to lead the amendment process, said lawmakers filed amendments this month, and he expects a vote on them by late 2022. In April, the parliamentary rapporteurs recommended adding predictive policing to the list of banned technologies, saying it "violates the presumption of innocence as well as human dignity," but they did not propose adding border AI polygraphs. They also recommended classifying systems that triage patients in health care, or that decide whether people get health or life insurance, as high risk.

While the European Parliament works through the amendment process, the Council of the European Union will also consider changes to the AI Act. There, officials from countries including the Netherlands and France have argued for a national security exemption to the AI Act, according to documents obtained through a freedom-of-information request by the European Center for Not-for-Profit Law.

Vanja Skoric, the organization's program director, says a national security exemption would create a loophole through which AI systems that threaten human rights, such as AI lie detectors, could slip into the hands of police or border agencies.

Final steps to pass or reject the law could come by the end of next year. Before members of the European Parliament filed their amendments on June 1, Tudorache told WIRED: "If we get thousands of amendments, as some people expect, the work of reaching some compromise out of thousands of amendments will be gigantic." He now says roughly 3,300 amendment proposals to the AI Act were received, but he believes the AI Act's legislative process could wrap up by mid-2023.

The concern that data-driven predictions can be discriminatory is not merely theoretical. An algorithm used by the Dutch tax authority between 2013 and 2020 to detect potential child-benefits fraud was found to have harmed tens of thousands of people, and it led to more than 1,000 children being placed in foster care. The flawed system used signals such as whether a person held dual nationality as a trigger for investigation, which had a disproportionate impact on immigrants.

The Dutch benefits scandal might have been prevented or lessened had Dutch authorities carried out an impact assessment of the system, as proposed by the AI Act, which could have raised red flags, Skoric says. She argues that the law must include a clear explanation of why a model receives a given label, for example when rapporteurs moved predictive policing from the high-risk category to a recommended ban.

Alexandru Circiumaru, head of European public policy at the independent research and advocacy group the Ada Lovelace Institute in the UK, agrees, saying the AI Act should better explain the methodology that leads a type of AI system to be moved from the banned list to high risk, or the other way around. "Why are these systems included in those categories now, and why weren't they included before? What's the test?" he asks.

More clarity on those questions is also needed to prevent the AI Act from banning algorithms that can prove empowering, says Sennay Ghebreab, founder and director of the Civic AI Lab at the University of Amsterdam. Profiling can be punitive, as in the Dutch benefits scandal, and he supports a ban on predictive policing. But other algorithms can be helpful, such as in resettling refugees by profiling people based on their background and skills. A 2018 study published in Science calculated that a machine-learning algorithm could expand employment opportunities for refugees in the United States by more than 40 percent, and by more than 70 percent in Switzerland, at little cost.

“I don’t believe we can build perfect systems,” he says. “But I believe we can continually improve AI systems by seeing what went wrong and getting feedback from people and communities.”

Many of the thousands of proposed amendments to the AI Act won't make it into the final version of the law. But Petra Molnar of the Refugee Law Lab, which has suggested nearly two dozen changes, including a ban on systems like iBorderCtrl, says this is an important moment to be clear about which forms of AI should be banned or deserve special care.

"This is a really important opportunity to think about what we want our world to look like, how we want our societies to be, and what it actually means to respect human rights in reality, not just on paper," she says. "It's about what we owe to each other, what kind of world we're building, and who's been left out of those conversations."


Credit: www.wired.com