Wednesday, 11 May 2022

EU Proposes Most Privacy-Invasive Measure Yet to Tackle Child Abuse


The European Union (EU) has proposed a new set of rules to tackle child abuse, rules that are being criticized as the most invasive “ever deployed outside of China and the USSR.”

Governments and companies worldwide are grappling with how to protect children online. The EU has unveiled a new proposal that critics are almost universally panning, one that even the EU itself acknowledges is “the most intrusive.”

The EU’s proposal involves forcing companies to search all text messages and communications, including private, encrypted ones, in an effort to find and flag potential “grooming” by child predators, as well as CSAM (child sexual abuse material). Below is the EU’s description of the requirement:

Detecting grooming would have a positive impact on the fundamental rights of potential victims by contributing to the prevention of abuse. At the same time, the detection process would be the most intrusive one for users (compared to the detection of known and new CSAM) since it would involve searching text, including in interpersonal communications, as the most important vector for grooming. On the one hand, such searches have to be considered as necessary to combat grooming since the service provider is the only entity able to detect it. Automatic detection tools have acquired a high degree of accuracy, and indicators are becoming more reliable with time as the algorithms learn, following human review. On the other hand, the detection of patterns in text-based communications may be more invasive into users’ rights than the analysis of an image or a video to detect CSAM, given the difference in the types of communications at issue and the mandatory human review of the online exchanges flagged as possible grooming by the tool.
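To make the EU’s description concrete, here is a deliberately toy Python sketch of such a pipeline: score every outgoing message and escalate anything above a threshold to the mandatory human review the proposal calls for. Every name, phrase, and number below is invented for illustration; a real system would rely on trained machine-learning classifiers, not a keyword list.

```python
# Toy sketch of the pipeline the proposal describes: score every message,
# escalate anything above a threshold to mandatory human review.
# All names, phrases, and numbers are hypothetical, for illustration only;
# a real system would use a trained ML classifier, not a keyword list.

FLAG_THRESHOLD = 0.8  # assumed confidence cutoff

def grooming_score(message: str) -> float:
    """Stand-in for a trained classifier: returns a made-up risk score."""
    suspicious_phrases = ["keep this a secret", "don't tell your parents"]
    hits = sum(phrase in message.lower() for phrase in suspicious_phrases)
    return min(1.0, 0.5 * hits)

def queue_for_human_review(sender: str, message: str, score: float) -> None:
    # Under the proposal, flagged exchanges go to mandatory human review.
    print(f"FLAGGED ({score:.2f}) from {sender}: {message!r}")

def scan_outgoing(sender: str, message: str) -> None:
    # Crucially, this must run wherever the plaintext exists (in practice,
    # on the user's device, before encryption) to work on encrypted chats.
    score = grooming_score(message)
    if score >= FLAG_THRESHOLD:
        queue_for_human_review(sender, message, score)

scan_outgoing("alice@example.com", "Keep this a secret, don't tell your parents")
```

Note that for this to work on “private, encrypted” communications, the scan has to run wherever the plaintext exists, which in practice means on the user’s device before encryption. That is the crux of the criticism below.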

Matthew Green, cryptography professor at Johns Hopkins University, highlighted exactly why this proposal is so intrusive in a series of tweets:

“Let me be clear what that means: ‘to detect grooming’ is not simply searching for known CSAM. It isn’t using AI to detect new CSAM, which is also on the table. It’s running algorithms reading your actual text messages to figure out what you’re saying, at scale.” — Matthew Green (@matthew_d_green), May 10, 2022

“It is potentially going to do this on encrypted messages that should be private. It won’t be good, and it won’t be smart, and it will make mistakes. But what’s terrifying is that once you open up ‘machines reading your text messages’ for any purpose, there are no limits.” — Matthew Green (@matthew_d_green), May 10, 2022

Green goes on to describe the proposal as “the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR.”

There’s no denying that CSAM and child exploitation are real, abhorrent problems. Tackling them requires balancing the various factors involved. Unfortunately, that balance is difficult to achieve, as the very technologies journalists, activists, and other at-risk individuals rely on for their safety are the same technologies predators use to exploit children.

The EU’s latest proposal, while paying lip service to balance, is being accused of throwing balance out the window. What’s more, it may be the only such proposal that doesn’t even attempt to hide its privacy implications. While many proposals falsely claim it’s possible to protect user privacy while implementing surveillance measures, the EU plainly states that its measures are intrusive, especially those aimed at detecting new CSAM:

This option would represent a higher impact on providers’ freedom to conduct a business and more interference into users’ right to privacy, personal data protection and freedom of expression.

The EU also acknowledges that these measures are not as reliable as the measures employed to detect known CSAM:

However, given that accuracy levels of current tools, while still being well above 90%, are lower than for the detection of known CSAM, human confirmation is essential.

As Green points out, this opens the door to false positives and a host of other problems. What’s more, once the technology is deployed, the only thing limiting what it scans for is existing policy, and policies change. An oppressive regime could easily repurpose the technology to scan for anything it views as a challenge to its authority or the status quo.
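It is worth making the false-positive arithmetic concrete. The sketch below is a back-of-the-envelope Python calculation, not data from the proposal: every volume and rate is an assumption picked for illustration, with only the “well above 90%” accuracy figure echoing the EU’s own text.

```python
# Back-of-the-envelope math on why "well above 90%" accuracy still buries
# reviewers in false positives at EU scale. Every number below is an
# assumption chosen for illustration; the proposal publishes no such figures.

daily_messages = 10_000_000_000   # assumed EU-wide message volume per day
base_rate = 1e-6                  # assumed fraction that is actual grooming
true_positive_rate = 0.90         # detection rate, echoing the EU's ">90%"
false_positive_rate = 0.01        # assumed 1% of innocent messages flagged

actual = daily_messages * base_rate            # truly abusive messages
innocent = daily_messages - actual

caught = actual * true_positive_rate           # real cases flagged
false_alarms = innocent * false_positive_rate  # innocent messages flagged

precision = caught / (caught + false_alarms)
print(f"real cases flagged per day:   {caught:,.0f}")
print(f"false alarms per day:         {false_alarms:,.0f}")
print(f"share of flags that are real: {precision:.4%}")
```

With these assumed numbers, roughly 9,000 real cases would come with about 100 million false alarms per day, meaning fewer than one in ten thousand flagged conversations would involve actual grooming, and each false alarm is a private message read by a human reviewer.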

The EU has traditionally been a bastion of user privacy, affording its citizens much better protection than the US, let alone China. This new legislation may single-handedly undo that reputation.

Matt Milano
from WebProNews https://ift.tt/yHmeqdY
