
Internet News


2024: May


 

Upload moderation, the latest EU buzzword for messaging surveillance...

The latest EU proposal for governments to snoop on (once) private messaging


Link Here 26th May 2024
Full story: Mass snooping in the EU...The EU calls for member states to implement internet snooping with responses to police requests within 6 hours
EU governments might soon implement messaging surveillance, commonly known as chat control, based on a new proposal by Belgium's Minister of the Interior. According to a leak obtained by Pirate Party MEP and shadow rapporteur Patrick Breyer, this could happen as early as June.

The proposal mandates that users of communication apps must agree to have all images and videos they send automatically scanned and potentially reported to the EU and police.

This agreement would be obtained through terms and conditions or pop-up messages. To facilitate this, secure end-to-end encrypted messenger services would need to implement monitoring backdoors, effectively amounting to a ban on private messaging. The Belgian proposal frames this as upload moderation, claiming it differs from client-side scanning. Users who refuse to consent would still be able to send text messages but would be barred from sharing images and videos.

The proposal, first introduced on 8 May, has surprisingly gained support from several governments that were initially critical. It will be revisited on 24 May, and EU interior ministers are set to meet immediately following the European elections to potentially approve the legislation.

 

 

Know Your Creator...

Twitter introduces mandatory identity verification for paid creators


Link Here 26th May 2024
Full story: Twitter Privacy...The sharing of user data for advertising purposes
X (Twitter) is now mandating the use of a government ID-based account verification system for users who earn revenue on the platform, either from advertising or from paid subscriptions.

To implement this system, X has partnered with Au10tix, an Israeli company known for its identity verification solutions.

The move raises profound questions about privacy and free speech: X claims to be a free speech platform, and free speech and anonymity often go hand in hand.

X's updated verification page now reads:

Starting today, all new creators must verify their ID to receive payouts. All existing creators must do so by July 1, 2024.


Making Britain the craziest place to run a business online...

Ofcom goes full-on nightmare with age/ID verification for nearly all websites, coupled with a mountain of red tape and expense


Link Here 8th May 2024
Full story: Online Safety Bill...UK Government legislates to censor social media
With a theatrical flourish pandering to the 'won't somebody think of the children' mob, Ofcom has proposed a set of censorship rules that demand strict age/ID verification for practically every single website that allows users to post content. On top of that, it is proposing the most onerous mountain of expensive red tape seen in the western world.

There are a few clever sleights of hand that drag most of the internet into the realm of strict age/ID verification. Ofcom argues that nearly all websites will have child users, because 16 and 17 year old 'children' have more or less the same interests as adults, and so there is no content that is not of interest to 'children'.

And so all websites will have to restrict themselves to content that is appropriate for children of all ages, or else put in place strict age/ID verification to ensure that content is matched to the age of the user.

And at every stage of deciding website policy, Ofcom is demanding extensive justification of the decisions made and evidence for the data used in making them. The volume of risk assessments, documents, research and evidence required makes the 'health and safety' regime look like child's play.

On occasions in the consultation documents Ofcom acknowledges that this will impose a massive administrative burden, but swats away criticism by noting that this is the fault of the Online Safety Act itself, and not of Ofcom.

 

Comment: Online Safety proposals could cause new harms

See article from openrightsgroup.org

Ofcom's consultation on safeguarding children online exposes significant problems regarding the proposed implementation of age-gating measures. While aimed at protecting children from digital harms, the proposed measures introduce risks to cybersecurity, privacy and freedom of expression.

Ofcom's proposals outline the implementation of age assurance systems, including photo-ID matching, facial age estimation, and reusable digital identity services, to restrict access to popular platforms like Twitter, Reddit, YouTube, and Google that might contain content deemed harmful to children.

Open Rights Group warns that these measures could inadvertently curtail individuals' freedom of expression while simultaneously exposing them to heightened cybersecurity risks.

Jim Killock, Executive Director of Open Rights Group, said:

Adults will be faced with a choice: either limit their freedom of expression by not accessing content, or expose themselves to increased security risks that will arise from data breaches and phishing sites.

Some overseas providers may block access to their platforms from the UK rather than comply with these stringent measures.

We are also concerned that educational and help material, especially where it relates to sexuality, gender identity, drugs and other sensitive topics, may be denied to young people by moderation systems.

Risks to children will continue with these measures. Regulators need to shift their approach to one that empowers children to understand the risks they may face, especially where young people may look for content, whether it is meant to be available to them or not.

Open Rights Group underscores the necessity for privacy-friendly standards in the development and deployment of age-assurance systems mandated by the Online Safety Act. Killock notes, Current data protection laws lack the framework to pre-emptively address the specific and novel cybersecurity risks posed by these proposals.

Open Rights Group urges the government to prioritize comprehensive solutions that incorporate parental guidance and education rather than relying largely on technical measures.

 

 

Big Brother is watching with a hair-trigger system to report images resembling child sexual abuse...

Opposition to a secretive and dangerous EU proposal to force snooping software on people's phones and computers


Link Here 5th May 2024
Full story: Internet Encryption in the EU...Encryption is legal for the moment but the authorities are seeking to end this
A controversial and secretive push by European Union lawmakers to legally require messaging platforms to scan citizens' private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts have warned in an open letter.

Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago, with independent experts, lawmakers across the European Parliament and even the bloc's own Data Protection Supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM, but they would also have to use unspecified detection scanning technologies to try to pick up unknown CSAM and identify grooming activity as it's taking place, leading to accusations of lawmakers indulging in magical-thinking levels of technosolutionism.

The open letter has been signed by 309 experts from 35 countries. The letter reads:

Dear Members of the European Parliament, Dear Member States of the Council of the European Union,

Joint statement of scientists and researchers on EU's new proposal for the Child Sexual Abuse Regulation: 2nd May 2024

We are writing in response to the new proposal for the regulation introduced by the Presidency on 13 March 2024. The two main changes with respect to the previous proposal aim to generate more targeted detection orders, and to protect cybersecurity and encrypted data. We note with disappointment that these changes fail to address the main concerns raised in our open letter from July 2023 regarding the unavoidable flaws of detection techniques and the significant weakening of the protection that is inherent to adding detection capabilities to end-to-end encrypted communications. The proposal's impact on end-to-end encryption is in direct contradiction to the intent of the European Court of Human Rights' decision in Podchasov v. Russia on 13 February 2024. We elaborate on these aspects below.

Child sexual abuse and exploitation are serious crimes that can cause lifelong harm to survivors; certainly it is essential that governments, service providers, and society at large take major responsibility in tackling these crimes. The fact that the new proposal encourages service providers to employ a swift and robust process for notifying potential victims is a useful step forward.

However, from a technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security. The proposal notably still fails to take into account decades of effort by researchers, industry, and policy makers to protect communications. Instead of starting a dialogue with academic experts and making data available on detection technologies and their alleged effectiveness, the proposal creates unprecedented capabilities for surveillance and control of Internet users. This undermines a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.

1. The proposed targeted detection measures will not reduce risks of massive surveillance

The problem is that flawed detection technology cannot be relied upon to determine cases of interest. We previously detailed security issues associated with the technologies that can be used to implement detection of known and new CSA material and of grooming, because they are easy to circumvent by those who want to bypass detection, and they are prone to errors in classification. The latter point is highly relevant for the new proposal, which aims to reduce impact by only reporting users of interest defined as those who are flagged repeatedly (as of the last draft: twice for known CSA material and three times for new CSA material and grooming). Yet, this measure is unlikely to address the problems we raised.

First, there is the poor performance of automated detection technologies for new CSA material and for the detection of grooming. The number of false positives due to detection errors is highly unlikely to be significantly reduced unless the number of repetitions is so large that the detection stops being effective. Given the large number of messages sent on these platforms (in the order of billions), one can expect a very large number of false alarms (in the order of millions).
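
To make those orders of magnitude concrete, here is a minimal back-of-envelope sketch in Python; the daily message volume and error rates are illustrative assumptions, not figures taken from the proposal or the letter.

    # Illustrative arithmetic only -- all figures are assumptions.
    messages_per_day = 10_000_000_000      # assume roughly 10 billion messages per day across in-scope platforms
    for error_rate in (0.001, 0.0001):     # assume 0.1% and 0.01% misclassification rates
        false_alarms = messages_per_day * error_rate
        print(f"error rate {error_rate:.2%}: about {false_alarms:,.0f} false alarms per day")
    # Even at a one-in-ten-thousand error rate, that is about a million false alarms every day.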

Second, the belief that the number of false positives will be reduced significantly by requiring a small number of repetitions relies on the fallacy that for innocent users two positive detection events are independent and that the corresponding error probabilities can be multiplied. In practice, communications exist in a specific context (e.g., photos to doctors, legitimate sharing across family and friends). In such cases, it is likely that parents will send more than one photo to doctors, and families will share more than one photo of their vacations at the beach or pool, thus increasing the number of false positives for this person. It is therefore unclear that this measure makes any effective difference with respect to the previous proposal.
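
A small worked example of the fallacy described above; the per-image error rate is an illustrative assumption, and the point is only that multiplying probabilities is wrong when detection events are correlated.

    # Illustrative arithmetic only -- the error rate is an assumption.
    p_flag = 0.01   # assumed chance that one innocent image is misclassified

    # Naive reasoning: "users are only reported after two hits, so the chance of a
    # false report is p * p" -- this assumes the two events are independent.
    naive_estimate = p_flag ** 2

    # In practice a parent sending several near-identical photos to a doctor will have
    # them all classified the same way, so one misclassification makes a second hit
    # almost certain, and the real probability stays close to p.
    correlated_estimate = p_flag

    print(f"naive (independent) estimate: {naive_estimate:.4%}")
    print(f"correlated estimate:          {correlated_estimate:.4%}")
    print(f"naive reasoning understates false reports by about {correlated_estimate / naive_estimate:.0f}x")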

Furthermore, to realize this new measure, on-device detection with so-called client-side scanning will be needed. As we previously wrote, once such a capability is in place, there is little possibility of controlling what is being detected and which threshold is used on the device for such detections to be considered of interest. We elaborate below.
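
For readers unfamiliar with the term, below is a minimal conceptual sketch of what client-side scanning means in practice. Every name in it is a hypothetical stand-in rather than a detail of the proposal or of any real messenger: the point is simply that the scan runs on the user's own device, against the plaintext, before end-to-end encryption is applied, using a list and threshold the user can neither inspect nor change.

    import hashlib

    # Hypothetical sketch only -- not any real messenger's code or the proposal's design.
    OPAQUE_HASH_LIST = {"<hash supplied by an authority>"}   # contents not auditable by the user
    MATCH_THRESHOLD = 2                                      # set remotely; invisible to the user
    _hits = 0

    def perceptual_hash(image: bytes) -> str:
        # stand-in for a perceptual hash; real systems use fuzzy matching,
        # which is exactly where the classification errors discussed above arise
        return hashlib.sha256(image).hexdigest()

    def report_to_authority(image: bytes) -> None:
        print("plaintext forwarded outside the encrypted channel")

    def e2e_encrypt(image: bytes, key: bytes) -> bytes:
        # placeholder standing in for real end-to-end encryption
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(image))

    def send_image(image: bytes, recipient_key: bytes) -> bytes:
        global _hits
        # the scan sees the plaintext on the device, before any encryption happens
        if perceptual_hash(image) in OPAQUE_HASH_LIST:
            _hits += 1
            if _hits >= MATCH_THRESHOLD:
                report_to_authority(image)   # the step that breaks the end-to-end guarantee
        return e2e_encrypt(image, recipient_key)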

High-risk applications may still indiscriminately affect a massive number of people. A second change in the proposal is to only require detection on (parts of) services that are deemed to be high-risk in terms of carrying CSA material.

This change is unlikely to have a useful impact. As the exchange of CSA material or grooming only requires standard features that are widely supported by many service providers (such as exchanging chat messages and images), this will undoubtedly impact many services. Moreover, an increasing number of services deploy end-to-end encryption, greatly enhancing user privacy and security, which will increase the likelihood that these services will be categorised as high risk. This number may further increase with the interoperability requirements introduced by the Digital Markets Act that will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk. This change is also unlikely to impact abusers. As soon as abusers become aware that a service provider has activated client side scanning, they will switch to another provider that will in turn become high risk; very quickly all services will be high risk, which defeats the purpose of identifying high risk services in the first place. And because open-source chat systems are currently easy to deploy, groups of offenders can easily set up their own service without any CSAM detection capabilities.

We note that decreasing the number of services is not even the crucial issue, as this change would not necessarily reduce the number of (innocent) users that would be subject to detection capabilities. This is because many of the main applications targeted by this regulation, such as email, messaging, and file sharing are used by hundreds of millions of users (or even billions in the case of WhatsApp).

Once a detection capability is deployed by the service, it is not technologically possible to limit its application to a subset of the users. Either it exists in all the deployed copies of the application, or it does not. Otherwise, potential abusers could easily find out if they have a version different from the majority population and therefore if they have been targeted. Therefore, upon implementation, the envisioned limitations associated with risk categorization do not necessarily result in better user discrimination or targeting, but in essence have the same effect for users as a blanket detection regulation.

2. Detection in end-to-end encrypted services by definition undermines encryption protection

The new proposal has as one of its goals to protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders. As we have explained before, this is an oxymoron.

The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information about the content of such communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption. Moreover, the proposal also states that This Regulation shall not create any obligation that would require [a service provider] to decrypt or create access to end-to-end-encrypted data, or that would prevent the provision of end-to-end encrypted services. This can be misleading, as whether the obligation to decrypt exists or not, the proposal undermines the protection provided by end-to-end encryption.

This has catastrophic consequences. It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right to a private life in the digital space; it will have a chilling effect, in particular on teenagers who heavily rely on online services for their interactions. It will change how digital services are used around the world and is likely to negatively affect democracies across the globe. These consequences come from the very existence of detection capabilities, and thus cannot be addressed by either reducing the scope of detection in terms of applications or target users: once they exist, all users are in danger. Hence, the requirement of Art. 10 (aa) that a detection order should not introduce cybersecurity risks for which it is not possible to take any effective measures to mitigate such risk is not realistic, as the risk introduced by client side scanning cannot be mitigated effectively.

3. Introducing more immature technologies may increase the risk

The proposal states that age verification and age assessment measures will be taken, creating a need to prove age in services that did not previously require it. It then bases some of the arguments related to the protection of children on the assumption that such measures will be effective. We would like to point out that at this time there is no established, well-proven technological solution that can reliably perform these assessments. The proposal also states that such verification and assessment should preserve privacy. We note that this is a very hard problem. While there is research towards technologies that could assist in implementing privacy-preserving age verification, none of them are currently on the market. Integrating them into systems in a secure way is far from trivial. Any solutions to this problem need to be very carefully scrutinized to ensure that the new assessments do not result in privacy harms or discrimination causing more harm than the one they were meant to prevent.

4. Lack of transparency

It is quite regrettable that the proposers failed to reach out to security and privacy experts to understand what is feasible before putting forth a new proposal that cannot work technologically. The proposal pays insufficient attention to the technical risks and, while claiming to be technologically neutral, imposes requirements that cannot be met by any state-of-the-art system (e.g., low false-positive rate, secrecy of the parameters and algorithms when deployed in a large number of devices, existence of representative simulated CSA material).

We strongly recommend that not only should this proposal not move forward, but that before such a proposal is presented in future, the proposers engage in serious conversations about what can and cannot be done within the context of guaranteeing secure communications for society.

5. Secure paths forward for child protection

Protecting children from online abuse while preserving their right to secure communications is critical. It is important to remember that CSAM content is the output of child sexual abuse. Eradicating CSAM relies on eradicating abuse, not only abuse material. Proven approaches recommended by organisations such as the UN for eradicating abuse include education on consent, on norms and values, on digital literacy and online safety, and comprehensive sex education; trauma-sensitive reporting hotlines; and keyword-search based interventions. Educational efforts can take place in partnership with platforms, which can prioritise high quality educational results in search or collaborate with their content creators to develop engaging resources.

We recommend substantial increases in investment and effort to support existing proven approaches to eradicate abuse, and with it, abusive material. Such approaches stand in contrast to the current techno-solutionist proposal, which is focused on vacuuming up abusive material from the internet at the cost of communication security, with little potential for impact on abuse perpetrated against children.

UK signatories

Dr. Ruba Abu-Salma, King's College London
Prof. Martin Albrecht, King's College London
Dr. Andrea Basso, University of Bristol
Prof. Ioana Boureanu, University of Surrey
Prof. Lorenzo Cavallaro, University College London
Dr. Giovanni Cherubin, Microsoft
Dr. Benjamin Dowling, University of Sheffield
Dr. Francois Dupressoir, University of Bristol
Dr. Jide Edu, University of Strathclyde
Dr. Arthur Gervais, University College London
Prof. Hamed Haddadi, Imperial College London
Prof. Alice Hutchings, University of Cambridge
Dr. Dennis Jackson, Mozilla
Dr. Rikke Bjerg Jensen, Royal Holloway University of London
Prof. Keith Martin, Royal Holloway University of London
Dr. Maryam Mehrnezhad, Royal Holloway University of London
Prof. Sarah Meiklejohn, University College London
Dr. Ngoc Khanh Nguyen, King's College London
Prof. Elisabeth Oswald, University of Birmingham
Dr. Daniel Page, University of Bristol
Dr. Eamonn Postlethwaite, King's College London
Dr. Kopo Marvin Ramokapane, University of Bristol
Prof. Awais Rashid, University of Bristol
Dr. Daniel R. Thomas, University of Strathclyde
Dr. Yiannis Tselekounis, Royal Holloway University of London
Dr. Michael Veale, University College London
Prof. Dr. Luca Vigano, King's College London
Dr. Petros Wallden, University of Edinburgh
Dr. Christian Weinert, Royal Holloway University of London

 

 

Commented Safe methods prove elusive...

Australian government to spend its own money on trying to find a safe method of age/ID verification for porn viewing


Link Here 5th May 2024
Full story: Age Verification for Porn...Endangering porn users for the sake of the children
As part of its efforts to combat violence against women, the government of Australian Prime Minister Anthony Albanese has announced funding to test age/ID verification methods for pornography websites in a pilot program. This move came after Albanese and the national cabinet ruled in 2023 that mandatory age verification was not yet an option.

AUS $6.5 million has been allocated for a pilot of age assurance technology to test its effectiveness. The pilot will identify available age assurance products and assess their efficacy, including in relation to privacy and security. The outcomes of this pilot will support the eSafety Commissioner's ongoing implementation of censorship rules under the Online Safety Act.

Australia's prime minister has also moved to ban deepfake and artificial intelligence pornography as part of a $925 million bid to counter a rise in violence against women. Sharing sexually explicit material created using artificial intelligence will also be subject to serious criminal penalties.

Albanese noted community concerns about toxic male views online and young men's exposure to violent imagery on the internet.

 

Offsite Comment: The Australian Government Is Making Porn a Scapegoat for Rising Violence Against Women

5th May 2024. Thanks to Trog. See article from vice.com by Darcy Deviant

Here is an article offering a very sensible counter-argument to the usual 'porn is bad' diatribes:

As a sex worker, the most concerning part of this conversation is the use of the sex industry as a political scapegoat for men's violence.

Let's be clear: the porn industry was never created to provide sex education to children. But let's also be honest: if your child is actively seeking out pornography, or so-called violent pornography, perhaps there's a gap in their learning about sex and sexuality that the education system or a guardian has failed to fill.

See article from vice.com

 
