EU wants big tech to scan your private chats for child abuse

All your WhatsApp pictures, iMessage texts, and Snapchat videos could be scanned for images and videos of child sexual abuse under newly proposed European regulations. The plans, experts warn, could undermine the end-to-end encryption that protects billions of messages sent every day and erode people’s privacy online.

The European Commission today unveiled long-awaited proposals to tackle the huge volume of child sexual abuse material, also known as CSAM, uploaded to the Internet each year. The proposed law would create a new EU Center to deal with child abuse content and introduces obligations for technology companies to “detect, report, block and remove” CSAM from their platforms. The law, announced by European Commissioner for Home Affairs Ylva Johansson, says tech companies have failed to voluntarily remove abusive content, and it has been welcomed by child protection and safety groups.

Under the plans, tech companies ranging from web hosting services to messaging platforms could be ordered to “detect” both new and previously identified CSAM, as well as potential instances of “grooming.” Detection could take place in chat messages, in files uploaded to online services, or on websites that host abusive material. The plans echo Apple’s attempt last year to scan photos on iPhones for abusive content before they were uploaded to iCloud; Apple suspended the effort after widespread backlash.

If the European legislation is passed, technology companies would be required to conduct risk assessments of their services to evaluate the levels of CSAM on their platforms and their existing prevention measures. If necessary, regulators or courts could then issue “detection orders” requiring technology companies to begin the “installation and operation of technology” to detect CSAM. These detection orders would be issued for specific periods of time. The bill does not specify what technologies must be installed or how they will work – they will be tested by the new EU Center – but it does say they should be used even where end-to-end encryption is in place.

The European proposal to scan people’s messages has been met with dismay from civil rights groups and security experts, who say it is likely to undermine the end-to-end encryption that has become the standard in messaging apps such as iMessage, WhatsApp, and Signal. “It is incredibly disappointing that the proposed EU regulation on the Internet does not protect end-to-end encryption,” the head of WhatsApp, Will Cathcart, tweeted. “This proposal will force companies to scan every person’s messages and put the privacy and security of EU citizens at serious risk.” Any system that weakens end-to-end encryption could be misused or expanded to look for other types of content, researchers say.

“You either have E2EE or you don’t,” says Alan Woodward, a professor of cybersecurity at the University of Surrey. End-to-end encryption protects people’s privacy and security by ensuring that only the sender and recipient of a message can see its content. Meta, the owner of WhatsApp, for example, has no way to read your messages or mine data from their content. The draft EU regulation says solutions must not weaken encryption and that it includes safeguards to ensure this; however, it does not provide details on how that would work.
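As a concrete, if simplified, illustration of that guarantee, the sketch below uses the PyNaCl library to encrypt a message so that only the intended recipient’s private key can open it. The names and message are made up, and real apps such as WhatsApp and Signal use the far more elaborate Signal protocol rather than a single static key pair per user:

```python
from nacl.public import PrivateKey, Box

# Each person generates a key pair on their own device; only public keys are shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message for Bob using her private key and his public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"See you at six")

# Any server relaying `ciphertext` sees only unintelligible bytes; it cannot
# read or mine the content, because it never holds either private key.
# Only Bob, on his own device, can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'See you at six'
```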

“Therefore, there is only one logical solution: client-side scanning, where the content is checked as it is decrypted on the user’s device for them to view or read,” says Woodward. Last year, Apple announced that it would introduce client-side scanning – scanning that runs on people’s iPhones rather than on Apple’s servers – to check photos for known CSAM before they were uploaded to iCloud. The move prompted anger from human rights groups and Edward Snowden over the potential for surveillance, and led Apple to pause its plans a month after they were first announced. (Apple declined to comment for this story.)
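In outline, client-side scanning slots a check in between decryption and display on the user’s own device. The sketch below is conceptual rather than any vendor’s actual design: the function names and fingerprint list are hypothetical, and real proposals such as Apple’s relied on perceptual hashing and threshold schemes rather than a plain hash lookup:

```python
import hashlib

# Hypothetical on-device list of fingerprints of known abusive images,
# distributed to the app in some protected form.
KNOWN_FINGERPRINTS: set[str] = set()

def report_match(fingerprint: str) -> None:
    """Hypothetical hook that would forward a match for review and reporting."""
    print(f"match reported: {fingerprint}")

def decrypt_on_device(ciphertext: bytes, key: bytes) -> bytes:
    """Stand-in for the messaging app's existing end-to-end decryption."""
    raise NotImplementedError("the app's normal E2EE code would live here")

def receive_attachment(ciphertext: bytes, key: bytes) -> bytes:
    # 1. Decrypt exactly as the app already does; the encryption itself is untouched.
    plaintext = decrypt_on_device(ciphertext, key)

    # 2. Before the content is shown, fingerprint it and check the on-device list.
    #    A plain SHA-256 only catches byte-identical copies; deployed systems
    #    would use perceptual hashes that survive resizing and re-encoding.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in KNOWN_FINGERPRINTS:
        report_match(digest)

    # 3. Hand the plaintext to the UI for display, as normal.
    return plaintext
```

Critics’ objection is to the second step: once a scanning-and-reporting hook sits on the device, the list it checks against could in principle be expanded to other types of content.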

For technology companies, detecting CSAM on their platforms and scanning some communications is nothing new. Companies operating in the United States are required to report any CSAM they find, or that is reported to them by users, to the National Center for Missing and Exploited Children (NCMEC), a US-based nonprofit. More than 29 million reports, containing 39 million images and 44 million videos, were made to NCMEC last year alone. Under the new EU rules, the EU Center will receive CSAM reports from technology companies.

“Today, a lot of companies are not doing detection,” Johansson said at a press conference to introduce the legislation. “This is not a proposal for encryption, this is a proposal for child sexual abuse material,” she said, adding that the law is “not about reading messages” but about detecting illegal, abusive content.

At the moment, tech companies detect CSAM online in a number of ways. The amount of CSAM detected is increasing as companies get better at finding and reporting abuse, though some are much better than others. In some cases, AI is being used to find previously unseen CSAM. Duplicates of existing abusive photos and videos can be detected using “hashing systems,” in which known offending content is assigned a fingerprint that can be spotted when it is uploaded to the web again. More than 200 companies, from Google to Apple, use Microsoft’s PhotoDNA hashing system to scan millions of files shared online. However, to do this, systems need access to the messages and files people are sending, which is not possible when end-to-end encryption is in place.
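PhotoDNA itself is proprietary, but the general idea behind such hashing systems can be shown with a much simpler perceptual “average hash.” In the sketch below (which uses the Pillow imaging library; the file names and match threshold are purely illustrative), a known image and a resized or re-encoded copy of it produce near-identical fingerprints, which is what allows a re-uploaded duplicate to be recognised:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """A toy perceptual fingerprint: shrink the image to 8x8 greyscale and
    record which pixels are brighter than average as a 64-bit number."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Illustrative usage with hypothetical file names:
# known = average_hash("known_content.jpg")
# upload = average_hash("re_encoded_copy.jpg")
# if hamming_distance(known, upload) <= 5:  # threshold chosen for illustration
#     print("Likely duplicate of known content")
```

The sketch also illustrates the limitation described above: the comparison only works if the system can see the decrypted file in the first place.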

“In addition to detecting CSAM, there will be obligations to detect the solicitation of children (‘grooming’), within a framework that can only mean that conversations will need to be read 24/7,” says Diego Naranjo, head of policy at the civil liberties group European Digital Rights. “This is a disaster for the confidentiality of communications. Companies will be obliged (through detection orders) or incentivized (through risk mitigation measures) to offer less secure services for everyone if they want to comply with these obligations.”

The debate about protecting children online, and how this can be done alongside end-to-end encryption, is hugely complex, technical, and bound up with the horrors of crimes against vulnerable young people. A study by UNICEF, the United Nations Children’s Fund, published in 2020 says encryption is necessary to protect people’s privacy, including that of children, but adds that it “impedes” efforts to remove content and identify the people sharing it. For years, law enforcement agencies around the world have sought ways to bypass or weaken encryption. “I’m not talking about privacy at any cost, and I think we can all agree that child abuse is abhorrent,” says Woodward, “but there needs to be a proper, public, impartial debate about whether the risks of what might arise are worth the true effectiveness in the fight against child abuse.”

Researchers and technology companies are increasingly focusing on safety tools that can exist alongside end-to-end encryption. Proposals include using metadata from encrypted messages – the who, how, what, and when of messages, rather than their content – to analyze people’s behavior and potentially spot criminality. One recent report from the non-profit group Business for Social Responsibility (BSR), commissioned by Meta, found that end-to-end encryption is an overwhelmingly positive force for protecting people’s human rights. It offered 45 recommendations for how encryption and safety can work together without requiring access to people’s communications. When the report was published in April, Lindsey Andersen, BSR’s deputy director of human rights, told WIRED: “Contrary to popular belief, a lot can actually be done even without access to messages.”



Credit: www.wired.com
