Opposition to a secretive and dangerous EU proposal to force snooping software on people's phones and computers

5th May 2024

See article from techcrunch.com
See open letter from nce.mpi-sp.org

A controversial and secretive push by European Union lawmakers to legally require messaging platforms to scan citizens' private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security
and privacy experts have warned in an open letter. Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago, with independent experts, lawmakers across the European Parliament and even the
bloc's own Data Protection Supervisor among those sounding the alarm. The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM, but would also oblige them to use unspecified scanning technologies to try to pick up unknown CSAM and identify grooming activity as it takes place, leading to accusations of lawmakers indulging in magical-thinking levels of technosolutionism. The open letter has been signed by 309 experts from 35 countries. The letter reads:

Dear Members of the European Parliament, Dear Member States of the Council of the European Union,

Joint statement of scientists and researchers on the EU's new proposal for the Child Sexual Abuse Regulation: 2nd May 2024

We are writing in response to the new proposal for the regulation introduced by the Presidency on 13 March 2024. The two main changes with respect to the previous
proposal aim to generate more targeted detection orders, and to protect cybersecurity and encrypted data. We note with disappointment that these changes fail to address the main concerns raised in our open letter from July 2023 regarding the unavoidable
flaws of detection techniques and the significant weakening of the protection that is inherent to adding detection capabilities to end-to-end encrypted communications. The proposal's impact on end-to-end encryption is in direct contradiction to the
intent of the European Court of Human Rights' decision in Podchasov v. Russia of 13 February 2024. We elaborate on these aspects below.

Child sexual abuse and exploitation are serious crimes that can cause lifelong harm to
survivors; certainly it is essential that governments, service providers, and society at large take major responsibility in tackling these crimes. The fact that the new proposal encourages service providers to employ a swift and robust process for
notifying potential victims is a useful step forward. However, from a technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security. The proposal notably still fails to
take into account decades of effort by researchers, industry, and policy makers to protect communications. Instead of starting a dialogue with academic experts and making data available on detection technologies and their alleged effectiveness, the
proposal creates unprecedented capabilities for surveillance and control of Internet users. This undermines a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.

1. The proposed targeted detection measures will not reduce risks of massive surveillance

The problem is that flawed detection technology cannot be relied upon to determine cases of interest. We previously detailed security issues associated with the technologies that can be used to implement detection of known and new
CSA material and of grooming, because they are easy to circumvent by those who want to bypass detection, and they are prone to errors in classification. The latter point is highly relevant for the new proposal, which aims to reduce impact by only
reporting users of interest defined as those who are flagged repeatedly (as of the last draft: twice for known CSA material and three times for new CSA material and grooming). Yet, this measure is unlikely to address the problems we raised.
First, there is the poor performance of automated detection technologies for new CSA material and for the detection of grooming. The number of false positives due to detection errors is highly unlikely to be significantly reduced
unless the number of repetitions is so large that the detection stops being effective. Given the large number of messages sent on these platforms (in the order of billions), one can expect a very large number of false alarms (in the order of millions).
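To put rough numbers on this point, here is a back-of-the-envelope sketch in Python; the message volume, per-message error rate and correlation figure are illustrative assumptions made for the sake of the arithmetic, not figures taken from the letter or the proposal.

```python
# Back-of-the-envelope arithmetic for false alarms from message scanning.
# All numbers below are illustrative assumptions, not figures from the letter.

daily_messages = 5_000_000_000   # assumed messages scanned per day across large platforms
false_positive_rate = 0.001      # assumed 0.1% error rate for classifiers of "new" material

false_alarms_per_day = daily_messages * false_positive_rate
print(f"Expected false alarms per day: {false_alarms_per_day:,.0f}")   # 5,000,000

# The "flag only after two or three hits" rule helps far less than naive
# multiplication of probabilities suggests, because an innocent user's
# messages are correlated (the same holiday photos, the same chat with a doctor).
p_single = false_positive_rate
p_two_hits_independent = p_single ** 2    # what multiplying error probabilities assumes
p_two_hits_correlated = p_single * 0.5    # if a second, similar photo is 50% likely
                                          # to trigger the same mistake (assumed)
print(f"Two hits, independence assumed: {p_two_hits_independent:.6%}")
print(f"Two hits, correlated content:   {p_two_hits_correlated:.4%}")
```
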
Second, the belief that the number of false positives will be reduced significantly by requiring a small number of repetitions relies on the fallacy that for innocent users two positive detection events are independent and that
the corresponding error probabilities can be multiplied. In practice, communications exist in a specific context (e.g., photos to doctors, legitimate sharing across family and friends). In such cases, it is likely that parents will send more than one
photo to doctors, and families will share more than one photo of their vacations at the beach or pool, thus increasing the number of false positives for this person. It is therefore unclear that this measure makes any effective difference with respect to
the previous proposal. Furthermore, to realize this new measure, on-device detection with so-called client-side scanning will be needed. As we previously wrote, once such a capability is in place, there is little possibility of
controlling what is being detected and which threshold is used on the device for such detections to be considered of interest. We elaborate below.

High-risk applications may still indiscriminately affect a massive number of people.

A second change in the proposal is to only require detection on (parts of) services that are deemed to be high-risk in terms of carrying CSA material. This change is unlikely to have a useful impact. As the exchange of CSA
material or grooming only requires standard features that are widely supported by many service providers (such as exchanging chat messages and images), this will undoubtedly impact many services. Moreover, an increasing number of services deploy
end-to-end encryption, greatly enhancing user privacy and security, which will increase the likelihood that these services will be categorised as high risk. This number may further increase with the interoperability requirements introduced by the Digital
Markets Act that will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk. This change is also unlikely to impact abusers. As soon as abusers become aware that a service
provider has activated client side scanning, they will switch to another provider that will in turn become high risk; very quickly all services will be high risk, which defeats the purpose of identifying high risk services in the first place. And because
open-source chat systems are currently easy to deploy, groups of offenders can easily set up their own service without any CSAM detection capabilities. We note that decreasing the number of services is not even the crucial issue,
as this change would not necessarily reduce the number of (innocent) users that would be subject to detection capabilities. This is because many of the main applications targeted by this regulation, such as email, messaging, and file sharing are used by
hundreds of millions of users (or even billions in the case of WhatsApp). Once a detection capability is deployed by the service, it is not technologically possible to limit its application to a subset of the users. Either it
exists in all the deployed copies of the application, or it does not. Otherwise, potential abusers could easily find out if they have a version different from the majority population and therefore if they have been targeted. Therefore, upon
implementation, the envisioned limitations associated with risk categorization do not necessarily result in better user discrimination or targeting, but in essence have the same effect for users as a blanket detection regulation.

2. Detection in end-to-end encrypted services by definition undermines encryption protection

The new proposal has as one of its goals to protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders. As we have explained before, this is an oxymoron.
The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information about the content of such communication. Enabling detection
capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption. Moreover, the proposal also states that "This Regulation shall not create any obligation that
would require [a service provider] to decrypt or create access to end-to-end-encrypted data, or that would prevent the provision of end-to-end encrypted services." This can be misleading, as whether the obligation to decrypt exists or not, the proposal
undermines the protection provided by end-to-end encryption. This has catastrophic consequences. It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right
to a private life in the digital space; it will have a chilling effect, in particular on teenagers who heavily rely on online services for their interactions. It will change how digital services are used around the world and is likely to negatively
affect democracies across the globe. These consequences come from the very existence of detection capabilities, and thus cannot be addressed by either reducing the scope of detection in terms of applications or target users: once they exist, all users
are in danger. Hence, the requirement of Art. 10 (aa) that a detection order should not introduce cybersecurity risks for which it is not possible to take any effective measures to mitigate such risk is not realistic, as the risk introduced by client
side scanning cannot be mitigated effectively.

3. Introducing more immature technologies may increase the risk

The proposal states that age verification and age assessment measures will be taken, creating a need to prove age in
services that did not previously require it. It then bases some of the arguments related to the protection of children on the assumption that such measures will be effective. We would like to point out that at this time there is no established, well-proven
technological solution that can reliably perform these assessments. The proposal also states that such verification and assessment should preserve privacy. We note that this is a very hard problem. While there is research towards technologies that could
assist in implementing privacy-preserving age verification, none of them are currently on the market. Integrating them into systems in a secure way is far from trivial. Any solutions to this problem need to be very carefully scrutinized to ensure that
the new assessments do not result in privacy harms or discrimination causing more harm than the one they were meant to prevent.

4. Lack of transparency

It is quite regrettable that the proposers failed to reach out to security and
privacy experts to understand what is feasible before putting forth a new proposal that cannot work technologically. The proposal pays insufficient attention to the technical risks and imposes - while claiming to be technologically neutral - requirements
that cannot be met by any state-of-the-art system (e.g., low false-positive rate, secrecy of the parameters and algorithms when deployed in a large number of devices, existence of representative simulated CSA material). We
strongly recommend that not only should this proposal not move forward, but that before such a proposal is presented in future, the proposers engage in serious conversations about what can and cannot be done within the context of guaranteeing secure
communications for society.

5. Secure paths forward for child protection

Protecting children from online abuse while preserving their right to secure communications is critical. It is important to remember that CSAM content is the
output of child sexual abuse. Eradicating CSAM relies on eradicating abuse, not only abuse material. Proven approaches recommended by organisations such as the UN for eradicating abuse include education on consent, on norms and values, on digital
literacy and online safety, and comprehensive sex education; trauma-sensitive reporting hotlines; and keyword-search based interventions. Educational efforts can take place in partnership with platforms, which can prioritise high quality educational
results in search or collaborate with their content creators to develop engaging resources. We recommend substantial increases in investment and effort to support existing proven approaches to eradicate abuse, and with it, abusive
material. Such approaches stand in contrast to the current techno-solutionist proposal, which is focused on vacuuming up abusive material from the internet at the cost of communication security, with little potential for impact on abuse perpetrated
against children.

UK signatories

Dr. Ruba Abu-Salma, King's College London
Prof. Martin Albrecht, King's College London
Dr. Andrea Basso, University of Bristol
Prof. Ioana Boureanu, University of Surrey
Prof. Lorenzo Cavallaro, University College London
Dr. Giovanni Cherubin, Microsoft
Dr. Benjamin Dowling, University of Sheffield
Dr. Francois Dupressoir, University of Bristol
Dr. Jide Edu, University of Strathclyde
Dr. Arthur Gervais, University College London
Prof. Hamed Haddadi, Imperial College London
Prof. Alice Hutchings, University of Cambridge
Dr. Dennis Jackson, Mozilla
Dr. Rikke Bjerg Jensen, Royal Holloway University of London
Prof. Keith Martin, Royal Holloway University of London
Dr. Maryam Mehrnezhad, Royal Holloway University of London
Prof. Sarah Meiklejohn, University College London
Dr. Ngoc Khanh Nguyen, King's College London
Prof. Elisabeth Oswald, University of Birmingham
Dr. Daniel Page, University of Bristol
Dr. Eamonn Postlethwaite, King's College London
Dr. Kopo Marvin Ramokapane, University of Bristol
Prof. Awais Rashid, University of Bristol
Dr. Daniel R. Thomas, University of Strathclyde
Dr. Yiannis Tselekounis, Royal Holloway University of London
Dr. Michael Veale, University College London
Prof. Dr. Luca Vigano, King's College London
Dr. Petros Wallden, University of Edinburgh
Dr. Christian Weinert, Royal Holloway University of London

European police chiefs disgracefully call for citizens to lose their basic internet protection from Russian and Chinese spies, scammers, thieves and blackmailers

23rd April 2024

See article from reclaimthenet.org
See police statement [pdf] from docs.reclaimthenet.org

European police chiefs have called for Europeans to be deprived of basic internet security used to protect against Russian & Chinese spies, scammers, thieves and blackmailers. The police chiefs write:

Joint Declaration of the European Police Chiefs

We, the European Police Chiefs, recognise that law enforcement and the technology industry have a shared duty to keep the public safe, especially children. We have a proud partnership
of complementary actions towards that end. That partnership is at risk. Two key capabilities are crucial to supporting online safety. First, the ability of technology companies to reactively provide to law
enforcement investigations -- on the basis of a lawful authority with strong safeguards and oversight -- the data of suspected criminals on their service. This is known as lawful access. Second, the ability
of technology companies proactively to identify illegal and harmful activity on their platforms. This is especially true with regard to detecting users who have a sexual interest in children, exchange images of abuse and seek to commit contact sexual
offences. The companies currently have the ability to alert the proper authorities -- with the result that many thousands of children have been safeguarded, and perpetrators arrested and brought to justice. These are
quite different capabilities, but together they help us save many lives and protect the vulnerable in all our countries on a daily basis from the most heinous of crimes, including but not limited to terrorism, child sexual abuse, human trafficking, drugs
smuggling, murder and economic crime. They also provide the evidence that leads to prosecutions and justice for victims of crime. We are, therefore, deeply concerned that end to end encryption is being rolled out in a way that
will undermine both of these capabilities. Companies will not be able to respond effectively to a lawful authority. Nor will they be able to identify or report illegal activity on their platforms. As a result, we will simply not be able to keep the
public safe. Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish. They should not now. We cannot let ourselves be
blinded to crime. We know from the protections afforded by the darkweb how rapidly and extensively criminals exploit such anonymity. We are committed to supporting the development of critical innovations, such as encryption, as a
means of strengthening the cyber security and privacy of citizens. However, we do not accept that there need be a binary choice between cyber security or privacy on the one hand and public safety on the other. Absolutism on either side is not helpful.
Our view is that technical solutions do exist; they simply require flexibility from industry as well as from governments. We recognise that the solutions will be different for each capability, and also differ between platforms. We
therefore call on the technology industry to build in security by design, to ensure they maintain the ability to both identify and report harmful and illegal activities, such as child sexual exploitation, and to lawfully and exceptionally act on a lawful
authority. We call on our democratic governments to put in place frameworks that give us the information we need to keep our publics safe. Trends in crime are deeply concerning and show how offenders
increasingly use technology to find and exploit victims and to communicate with each other within and across international boundaries. It must be our shared objective to ensure that those who seek to abuse these platforms are identified and caught, and
that the platforms become more safe not less.
See article from reclaimthenet.org

Here we have Europol and the UK's National Crime Agency (NCA) teaming up to attack Meta for the one thing the company is apparently trying to do right: implementing end-to-end encryption (E2EE) in its products, the very necessary, irreplaceable software backbone of a safe and secure internet for everybody. Yet that is what many governments, here the EU via Europol and the UK, keep attempting to damage.
But mass surveillance is a hard sell, so the established pitch is to link the global, overall internet problem to that of the safety of children online, and justify it that way. The Europol executive
director, Catherine De Bolle, compared E2EE to sending your child into a room full of strangers and locking the door. And yet the technological reality of the situation is that undermining E2EE is akin to handing the key to your front door -- and
access to everybody inside it, children included -- to somebody you trust (say, governments and organizations who would like you to take their trustworthiness for granted). But once a copy of that key is out, it can be obtained and used by
anybody out there to get into your house at any time, for any reason. That includes governments and organizations you don't trust or like, straight-up criminals -- and anything active on the web in between.
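To make the front-door analogy concrete, here is a minimal sketch of the guarantee that end-to-end encryption provides. It assumes the third-party PyNaCl library and uses hypothetical party names; it illustrates the principle only, not the protocol of any particular messenger.

```python
# Minimal sketch of end-to-end confidentiality (illustrative only; assumes
# the third-party PyNaCl package: pip install pynacl).
from nacl.public import PrivateKey, Box

# Each party generates a key pair; the private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key. A relaying server only ever sees
# the ciphertext and learns nothing about the content.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at the clinic at 3pm")

# Only Bob's private key can open it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at the clinic at 3pm"

# A scanning hook or an escrowed copy of the key is, in effect, a third party
# that can read the content -- and, as with the front-door key, anyone who
# obtains that copy can read it too.
```
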

European Parliament votes against an EU Commission proposal for mass scanning of all internet communication

16th November 2023

See Creative Commons article from edri.org

On 14th November, Members of the European Parliament's Civil Liberties committee voted against attempts from EU Home Affairs officials to roll out mass scanning of private and encrypted messages across Europe. It was a clear-cut vote, with a significant
majority of MEPs supporting the proposed position. A political deal struck by the Parliament's seven political groups at the end of October meant that this outcome was expected. Nevertheless, this is an important and welcome
milestone, as Parliamentarians demand that EU laws are based on objective evidence, scientific reality and respect for human rights law. This vote signals major improvements compared to the Commission's original draft law
(coined 'Chat Control'), which has courted controversy. The process around the legislation has faced allegations of conflicts of interest and illegal advert micro-targeting, and rulings of "maladministration". The proposal has also been widely
criticised for failing to meet EU requirements of proportionality -- with lawyers for the EU member states making the unprecedented critique that the proposal likely violates the essence of the right to privacy. In particular, the
vote shows the strong political will of the Parliament to remove the most dangerous parts of this law -- mass scanning, undermining digital security and mandating widespread age verification. Parliamentarians have recognised that no matter how important
the aim of a law, it must be pursued using only lawful and legitimate measures. At the same time, there are parts of their position which still concern us, and which would need to be addressed if any final law were to be
acceptable from a digital rights point of view. Coupled with mass surveillance plans from the Council of member states and attempts from the Commission to manipulate the process, we remain sceptical about the chances of a good final outcome.
Civil liberties MEPs also voted for this position to become the official position of the European Parliament. On 20th November, the rest of the house will be notified about the intention to permit negotiators to move forward without
an additional vote. Only after that point will the position voted on today be confirmed as the European Parliament's mandate for the CSA Regulation.

Mass scanning (detection orders)

The European
Parliament's position firmly rejects the premise that in order to search for child sexual abuse material (CSAM), all people's messages may be scanned (Articles 7-11). Instead, MEPs require specific suspicion -- a similar principle
to warrants. This is a vital change which would resolve one of the most notorious parts of the law. The position also introduces judicial oversight of hash lists (Article 44.3), which we welcome. However, it unfortunately does not distinguish between
basic hashing (which is generally seen as more robust) and perceptual hashing (which is less reliable; a short sketch of the difference follows at the end of this item). At the same time, the wording also needs improvement to ensure legal certainty. The Parliament's position rightly confirms that
scanning must be "targeted and specified and limited to individual users, [or] a specific group of users" (Article 7.1). This means that there must be "reasonable grounds of suspicion a link [...] with child sexual abuse material"
(Articles 7.1. and 7.2.(a)). However, despite attempts in Recital (21) to interpret the "specific group of users" narrowly, we are concerned that the phrasing "as subscribers to a specific channel of communications"(Article 7.1.) is
too broad and too open to interpretation. The concept of "an indirect link" is also ambiguous in the context of private messages, and should be deleted or clarified. The Parliament's position deletes solicitation
(grooming) detection from the scope of detection orders, recognising the unreliability of such tools. However, the fact that solicitation remains in the scope of risk assessment (Articles 3 and 4) still poses a risk of incentivising overly-restrictive
measures.

End-to-end encryption

The European Parliament's position states that end-to-end encrypted private message services -- like WhatsApp, Signal or ProtonMail -- are not subject to scanning
technologies (Articles 7.1 and 10.3). This is a strong and clear protection to stop encrypted message services from being weakened in a way that could harm everyone that relies on them -- a key demand of civil society and technologists.
Several other provisions throughout the text, such as a horizontal protection of encrypted services (Article 1.3a and Recital 9a), give further confirmation of the Parliament's will to protect one of the only ways we all have to keep
our digital information safe. There is a potential (likely unintended) loophole in the Parliament's position on end-to-end encryption, however, which must be addressed in future negotiations. This is the fact that whilst encrypted
'interpersonal communications services' (private messages) are protected, there is no explicit protection for other kinds of encrypted services ('hosting services'). It would therefore be important to amend Article 1.3a. to
ensure that hosting providers, such as of personal cloud backups, cannot be required to circumvent the security and confidentiality of their services with methods that are designed to access encrypted information, and that Article 7.1. is amended so that
it is not limited to interpersonal communications.

Age verification & other risk mitigation measures

The European Parliament's position is mixed when it comes to age verification and other risk
mitigation measures. EDRi has been clear that mandatory age verification at EU level would be very risky -- and we are glad to see that these concerns have been acted upon. The European Parliament's position protects people's anonymity online by removing
mandatory age verification for private message services and app stores, and adds a series of strong safeguards for its optional use (Article 4.3.a.(a)-(k)). This is a positive and important set of measures. On the other hand, we
are disappointed that the Parliament's position makes age verification mandatory for porn platforms (Article 4a.) -- a step that is not coherent with the overall intention of the law. What's more, the cumulative nature of the risk mitigation measures for
services directly targeting children in the Parliament's position (Article 4.1.(aa)) needs further attention. This is because there is no exception given for cases where the measures might not be right for a particular service, and
could instead risk platforms or services deciding to exclude young people from their services to avoid these requirements. We recommend that there should not be mandatory age verification for porn platforms, and that risk
mitigation measures should oblige providers to achieve a specific outcome, rather than creating overly-detailed (and sometimes misguided) service design requirements. We also warn that the overall CSA Regulation framework should not incentivise the use
of age verification tools.

Voluntary scanning

The European Parliament's position does not include a permanent voluntary scanning regime, despite some MEPs calling for such an addition. This is an
important legal point: if co-legislators agree that targeted scanning measures are a necessary and proportionate limitation on people's fundamental human rights, then they cannot leave such measures to the discretion of private entities. The Parliament's
position does, however, extend the currently-in-force interim derogation by nine months (Article 88.2).
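On the hash-list point above, here is a short toy sketch of why 'basic' (cryptographic) hashing and perceptual hashing behave so differently. The 8x8 'image' and the simple average-hash below are illustrative stand-ins, not the algorithms any provider or the EU Centre would actually use.

```python
# Toy comparison of cryptographic vs. perceptual hashing (illustrative only).
import hashlib

# A tiny 8x8 greyscale "image" as 64 pixel values, plus a copy with one
# pixel nudged slightly (think re-compression noise or a minor edit).
original = [10, 200, 30, 180, 25, 190, 40, 170] * 8
tweaked = list(original)
tweaked[0] += 1

def crypto_hash(pixels):
    """Cryptographic hash of the raw pixel bytes."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

# The cryptographic hash changes completely for a one-pixel change, so it only
# matches exact copies of known material (robust, but evaded by re-encoding).
# The perceptual hash is unchanged, which is what lets it match edited copies --
# and also what allows unrelated images to collide, i.e. false matches.
print(crypto_hash(original) == crypto_hash(tweaked))    # False
print(average_hash(original) == average_hash(tweaked))  # True
```
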

Children's campaigners claim that EU proposals for responding to child abuse don't go far enough and call for all internet communications to be open to snooping regardless of the safety of internet users from hackers, fraudsters and thieves

6th March 2023

See article from ec.europa.eu

The European Commission proposed new EU rules to prevent and combat child sexual abuse (CSA) in May 2022. Complementing existing frameworks to fight online CSA, the EU proposal would introduce a new, harmonised European structure for assessing and
mitigating the spread of child sexual abuse material (CSAM) online. The thrust of the proposal is to react in a unified way, either to CSAM detected, or else to systems identified as most at risk of being used to disseminate such material. However,
as is always the case with campaigners, this is never enough. The campaigners basically want everybody's communications to be open to snooping and surveillance without the slightest consideration for people's safety from hackers, identity thieves,
scammers, blackmailers and fraudsters. The European Commission wrote:

The Commission is
proposing new EU legislation to prevent and combat child sexual abuse online. With
85 million pictures and videos depicting child sexual abuse reported worldwide in 2021 alone, and many more going unreported, child sexual abuse is pervasive. The COVID-19 pandemic has exacerbated the issue, with the Internet Watch Foundation noting a
64% increase in reports of confirmed child sexual abuse in 2021 compared to the previous year. The current system based on voluntary detection and reporting by companies has proven to be insufficient to adequately protect children and, in any case, will
no longer be possible once the interim solution currently in place expires. Up to 95% of all reports of child sexual abuse received in 2020 came from one company, despite clear evidence that the problem does not only exist on one platform.
To effectively address the misuse of online services for the purposes of child sexual abuse, clear rules are needed, with robust conditions and safeguards. The proposed rules will oblige providers to detect, report and remove child
sexual abuse material on their services. Providers will need to assess and mitigate the risk of misuse of their services and the measures taken must be proportionate to that risk and subject to robust conditions and safeguards. A
new independent EU Centre on Child Sexual Abuse (EU Centre) will facilitate the efforts of service providers by acting as a hub of expertise, providing reliable information on identified material, receiving and analysing reports from providers to
identify erroneous reports and prevent them from reaching law enforcement, swiftly forwarding relevant reports for law enforcement action and by providing support to victims. The new rules will help rescue children from further
abuse, prevent material from reappearing online, and bring offenders to justice. Those rules will include:
Mandatory risk assessment and risk mitigation measures: Providers of hosting or interpersonal communication services will have to assess the risk that their services are misused to disseminate child sexual abuse material or for the solicitation of children, known as grooming. Providers will also have to propose risk mitigation measures.

Targeted detection obligations, based on a detection order: Member States will need to designate national authorities in charge of reviewing the risk assessment. Where such authorities determine that a significant risk remains, they can ask a court or an independent national authority to issue a detection order for known or new child sexual abuse material or grooming. Detection orders are limited in time, targeting a specific type of content on a specific service.

Strong safeguards on detection: Companies having received a detection order will only be able to detect content using indicators of child sexual abuse verified and provided by the EU Centre. Detection technologies must only be used for the purpose of detecting child sexual abuse. Providers will have to deploy technologies that are the least privacy-intrusive in accordance with the state of the art in the industry, and that limit the error rate of false positives to the maximum extent possible.

Clear reporting obligations: Providers that have detected online child sexual abuse will have to report it to the EU Centre.

Effective removal: National authorities can issue removal orders if the child sexual abuse material is not swiftly taken down. Internet access providers will also be required to disable access to images and videos that cannot be taken down, e.g., because they are hosted outside the EU in non-cooperative jurisdictions.

Reducing exposure to grooming: The rules require app stores to ensure that children cannot download apps that may expose them to a high risk of solicitation of children.

Solid oversight mechanisms and judicial redress: Detection orders will be issued by courts or independent national authorities. To minimise the risk of erroneous detection and reporting, the EU Centre will verify reports of potential online child sexual abuse made by providers before sharing them with law enforcement authorities and Europol. Both providers and users will have the right to challenge any measure affecting them in Court.
The new EU Centre will support:
Online service providers, in particular in complying with their new obligations to carry out risk assessments, detect, report, remove and disable access to child sexual abuse online, by providing indicators to detect child sexual abuse and receiving the reports from the providers;

National law enforcement and Europol, by reviewing the reports from the providers to ensure that they are not submitted in error, and channelling them quickly to law enforcement. This will help rescue children from situations of abuse and bring perpetrators to justice;

Member States, by serving as a knowledge hub for best practices on prevention and assistance to victims, fostering an evidence-based approach;

Victims, by helping them to take down the materials depicting their abuse.
Next steps

It is now for the European Parliament and the Council to agree on the proposal. Once adopted, the new Regulation will replace the current interim Regulation.

Feedback from members of the public on the proposals is open for a minimum of 8 weeks.
According to child campaigners: On 8 February 2023, the European Parliament's Committee on the Internal Market and Consumer Protection (IMCO)
published its draft report on the European Commission's proposal to prevent and combat child sexual abuse. The draft report seeks a vastly reduced scope for the Regulation. It prioritises the anonymity of perpetrators of abuse over the rights of victims
and survivors of sexual abuse and seeks to reverse progress made in keeping children safe as they navigate or are harmed in digital environments that were not built with their safety in mind. The letter also criticises the removal of age
verification and claims that technology can meet high privacy standards, explaining that the new legislation adds additional safeguards to already effective measures to prevent the spread of this material online. And of course the campaigners
demand that technology companies allow the surveillance of all messages via backdoors to encryption, or perhaps simply ban encryption altogether. See the letter from the likes of the NSPCC. See article from iwf.org.uk

The Committee to Protect Journalists expresses concern about a proposal to ban secure encrypted messaging

10th November 2020

See article from cpj.org

The Committee to Protect Journalists expressed concern after the Council of the European Union proposed a draft resolution last week calling for national authorities across the EU to have access to encrypted messages as part of criminal investigations
into terrorism and organized crime. Journalists rely on encryption to evade surveillance and protect their sources, CPJ has found. End-to-end encryption prevents authorities, company employees, and hackers from viewing the
content of private digital messages, but the resolution proposes unspecified technical solutions to undermine those protections, according to rights groups European Digital Rights and Access Now. The groups said the resolution was drafted without input
from privacy experts or journalists. "EU institutions must immediately retract all plans to undermine encryption, which is vital to press freedom and the free flow of information," said Tom Gibson, EU Representative for the
Committee to Protect Journalists. "Encryption offers essential protection for journalists who routinely communicate and share files electronically. If journalists cannot communicate safely with colleagues and sources, they cannot protect the anonymity of
their sources." The resolution was proposed by Germany, which holds the current presidency of the Council of the European Union, and could serve as a basis for further negotiations with other EU institutions in 2021.

Even the EU Commission!

23rd February 2020

See article from politico.eu

The European Commission has told its staff to start using Signal, an end-to-end-encrypted messaging app, in a push to increase the security of its communications. The instruction appeared on internal messaging boards in early February, notifying
employees that Signal has been selected as the recommended application for public instant messaging. The app is favored by privacy activists because of its end-to-end encryption and open-source technology. Bart Preneel, cryptography expert at the
University of Leuven, explained: "It's like Facebook's WhatsApp and Apple's iMessage but it's based on an encryption protocol that's very innovative. Because it's open-source, you can check what's happening under the
hood." Promoting the app, however, could antagonize the law enforcement community, and it will underline the hypocrisy of officials in Brussels, Washington and other capitals who have been putting strong pressure on Facebook and Apple to
allow government agencies access to encrypted messages; if the companies refuse, legal requirements could be introduced that force firms to do just that. American, British and Australian officials published an open letter to Facebook CEO Mark
Zuckerberg in October, asking that he call off plans to encrypt the company's messaging service. Dutch Minister for Justice and Security Ferd Grapperhaus told POLITICO last April that the EU needs to look into legislation allowing governments to access
encrypted data.

Tell the EU Council: Protect our rights to privacy and security!

1st December 2016

Sign the petition from act.accessnow.org

The Council of the EU could undermine encryption as soon as December. It has been asking delegates from all EU countries to detail their national legislative position on encryption. We've been down this road before. We know that encryption is
critical to our right to privacy and to our own digital security. We need to come together once again and demand that our representatives protect these rights -- not undermine them in secret. Act now to tell the Council of the EU to defend strong
encryption!

Dear Slovak Presidency and Delegates to the Council of the EU:

According to the Presidency of the Council of the European Union, the Justice and Home Affairs Ministers will meet in December to discuss the issue of
encryption. At that discussion, we urge you to protect our security, our economy, and our governments by supporting the development and use of secure communications tools and technologies and rejecting calls for policies that would prevent or undermine
the use of strong encryption. Encryption tools, technologies, and services are essential to protect against harm and to shield our digital infrastructure and personal communications from unauthorized access. The ability to freely develop and use
encryption provides the cornerstone for today's EU economy. Economic growth in the digital age is powered by the ability to trust and authenticate our interactions and communication and conduct business securely both within and across borders. The
United Nations Special Rapporteur for freedom of expression has noted, "encryption and anonymity, and the security concepts behind them, provide the privacy and security necessary for the exercise of the right to freedom of opinion and expression in
the digital age." Recently, hundreds of organizations, companies, and individuals from more than 50 countries came together to make a global declaration in support of strong encryption. We stand with people from all over the world asking you
not to break the encryption we rely upon. Sign the petition from act.accessnow.org