
Internet News


2021: December


 

France opts to become a circumvention hub for Pornhub...

Trail blazing French Government will try blocking major porn websites


Link Here 28th December 2021
Full story: Age Verification in France...Macron gives websites 6 months to introduce age verification
The French government has announced it will move to block five of the world's major porn sites unless they introduce identity verification for all users to ensure that users are over 18.

Adult websites Pornhub, xHamster, XVideos, XNXX and Tukif are in the firing line.

The move by France's Higher Audiovisual Council seeks to force the websites to comply with a law introduced last year that made age verification checks compulsory for adult content.

The council said the sites' current tactic of simply asking users to tick a check-box stating they are over 18 is not satisfactory, and that they are breaking the law by failing to introduce better controls.

The move sees France align with its European neighbour Germany in efforts to bring about change in the adult industry by taking action against sites and organisations that do not prevent children from accessing adult content.

It will be interesting to see how porn users respond to the censorship and the dangers of handing over ID to be associated with porn use. The British Government will surely be monitoring the situation carefully. Options available to users are to subscribe to a VPN, use the Tor browser, seek out unblocked porn websites, or share porn via other media. After all, a year's worth of porn viewing can be shared on a single memory stick.
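Some rough arithmetic of our own, purely indicative, supports the memory stick point: at typical HD streaming bitrates of around 2 to 3 GB per hour, a cheap 256 GB stick holds on the order of 100 hours of video, and several times that at lower resolutions.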

The French internet blocking mechanisms will surely be well analysed in what will become a hot topic for millions of interested people. One suspects that the security services will be somewhat unimpressed if millions of people start using encrypted services to circumvent the state censorship.

 

 

Cleansed...

Tumblr introduces numerous banned search keywords on its iOS app so as to appease Apple censors


Link Here 28th December 2021
Tumblr has explained in a blog post:

For those of you who access Tumblr through our iOS app, we wanted to share that starting today you may see some differences for search terms and recommended content that can contain specific types of sensitive content. In order to comply with Apple's App Store Guidelines, we are having to adjust, in the near term, what you're able to access as it relates to potentially sensitive content while using the iOS app.

To remain available within Apple's App Store, we are having to extend the definition of what sensitive content is as well as the way you access it in order to comply with their guidelines.

How you may experience these changes

There are three main ways you may experience these changes while using our iOS app: search, blog access and dashboard.

Search

When searching for certain terms or phrases that may fall under the expanded definition of sensitive content, you may experience fewer results from that query than you have in the past. In certain circumstances, a search may not produce any results at all and you will see a message like the one below:

This content has been hidden because of potentially suggestive or explicit content.

Blog access

If you tap on a blog through the iOS Tumblr app that is flagged as explicit due to these changes, you will see the same message as above and will not be able to access that blog.

Dashboard

For iOS users, under our "stuff for you" and "following" sections within the dashboard, you may see fewer suggested posts based on the changes being made.

Why these changes are being made

We want to make sure Tumblr is available everywhere you would like to access it. In order for us to remain in Apple's App Store and for our Tumblr iOS app to be available, we needed to make changes that would help us be more compliant with their policies around sensitive content.

We understand that, for some of you, these changes may be very frustrating -- we understand that frustration and we are sorry for any disruption that these changes may cause. Please know that on the near horizon there will be meaningful developments that will overhaul how you choose to access sensitive content safely on Tumblr -- whether visiting us on mobile, mobile web or through our website.

 

 

Fine inflation...

A Russian court hands out a big fine to Google for not censoring what it was told


Link Here 27th December 2021
Full story: Internet Censorship in Russia 2020s...Russia and its repressive state control of media
A Russian court has said it is fining Alphabet's Google 7.2bn roubles (£73m) for what it says is a repeated failure to delete content Russia deems illegal, the first revenue-based fine of its kind in Russia.

The Russian internet censor had ordered companies to delete posts promoting drug abuse and dangerous pastimes and information about homemade weapons and explosives, as well as ones by groups it designates as extremist or terrorist. In addition Russia had also objected to Google blocking its own RT news channels on YouTube.

Google said in an email it would study the court ruling before deciding on further steps.

Russia has imposed small fines on foreign technology companies throughout this year, but the penalty on Friday marks the first time it has exacted a percentage of a company's annual Russian turnover, greatly increasing the sum of the fine. It did not specify the percentage, although Reuters calculations show it equates to just over 8%.
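For scale, a little arithmetic on the reported figures: if 7.2bn roubles is just over 8% of Google's annual Russian turnover, that turnover works out at a little under 90bn roubles, somewhere around £900m. This is only an inference from the Reuters percentage, not a disclosed figure.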

 

 

Offsite Article: Co-censor...


Link Here 16th December 2021
The Internet Watch Foundation petitions for a place in the UK's upcoming internet censorship regime

See article from iwf.org.uk

 

 

So if Telegram continues operating in Germany then we can infer that its encryption is weak...

Germany demands that Telegram opens a backdoor to encrypted messages


Link Here 13th December 2021
The newly appointed German Federal Minister for 'Justice' Marco Buschmann has called for the prosecution of people who spread misinformation and hate on social platforms. Supporters of censorship singled out Telegram as the most popular avenue for spreading misinformation and hate speech.

Chancellor Olaf Scholz and the heads of several states are in favor of stricter speech control. At a conference Thursday, they said that open social networks with mass communication should be legally regulated.

The Federal Office of Justice noted that Telegram is not merely a messaging service, but a social network. Therefore, like Facebook, Twitter, and other social networks, Telegram should follow the requirements of the Network Enforcement Act (Germany's social networks and platforms regulation).

The rules would force Telegram to establish a system for users to report harmful content and a system through which the German courts can send demands.

 

 

Verified as dangerous...

Grindr reminds us how dangerous it is to trust adult websites with personal data


Link Here 13th December 2021
Grindr, a location-based dating app aimed at the LGBT community, has been fined 6.5m euros (£5.5m) for selling user data to advertisers.

The Norwegian Data Protection Authority said that sharing such data without seeking explicit consent broke GDPR rules. Data which it found the app had shared with third parties included GPS location, IP address, advertising ID, age, gender and the fact that the user was on Grindr.

The fine was reduced from £8.6m after Grindr provided details about its financial situation and made changes to its app. Tobias Judin, head of the Norwegian Data Protection Authority's (DPA) international department, explained:

Our conclusion is that Grindr has disclosed user data to third parties for behavioural advertisement without a legal basis.

This was particularly intrusive because data about a person's sexual orientation constitutes special category data that merits particular protection under GDPR rules, the DPA added.

 

 

Online Safety Bill: Kill Switch for Encryption...

Open Rights Group explains how the Online 'Safety' Bill will endanger internet users


Link Here 11th December 2021
Full story: UK Government vs Encryption...Government seeks to restrict peoples use of encryption

Of the many worrying provisions contained within the draft Online Safety Bill, perhaps the most consequential is contained within Chapter 4, at clauses 63-69. This section of the Bill hands OFCOM the power to issue "Use of Technology Notices" to search engines and social media companies. As worded, the powers will lead to the introduction of routine and perpetual surveillance of our online communications. They also threaten to fatally undermine the use of end-to-end encryption, one of the fundamental building blocks of digital technology and commerce.

Use of Technology Notices purport to tackle terrorist propaganda and Child Sexual Exploitation and Abuse (CSEA) content. OFCOM will issue a Notice based on the "prevalence" and "persistent presence" of such illegal content on a service. The phrases "prevalence" and "persistent" recur throughout the Bill but remain undefined, so the threshold for interference could be quite low.

Any company that receives a Notice will be forced to use certain "accredited technologies" to identify terrorist and CSEA content on the platform.

The phrase "accredited technologies" is wide-ranging. The Online Safety Bill defines it as technology that meets a "minimum standard" for successfully identifying illegal content, although it is currently unclear what that minimum standard may be.

The definition is silent on what techniques an accredited technology might deploy to achieve the minimum standard. So it could take the form of an AI that classifies images and text. Or it may be a system that compares all the content uploaded to the hashes of known CSEA images logged on the Home Office's Child Abuse Image Database (CAID) and other such collections.
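To illustrate the second approach, matching uploads against a database of known images boils down to a set-membership check on content hashes. The sketch below is our own illustration, not anything taken from the Bill or from CAID, and is deliberately simplified: real deployments use perceptual hashes (PhotoDNA-style fingerprints) that survive resizing and re-encoding, whereas the plain cryptographic hash used here only catches byte-identical copies.

# Minimal sketch of hash-list matching, purely illustrative.
# Real systems use perceptual hashing; SHA-256 only matches exact copies.
import hashlib
from pathlib import Path

# Hypothetical blocklist of hex digests supplied by some authority.
KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(upload: Path) -> bool:
    """True if the uploaded file's hash appears in the blocklist."""
    return sha256_of(upload) in KNOWN_HASHES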

Whatever the precise technique used, identifying terrorist or CSEA content must involve scanning each user's content as it is posted, or soon after. Content that a bot decides is related to terrorism or child abuse will be flagged and removed immediately.

Social media services are public platforms, and so it cannot be said that scanning the content we post to our timelines amounts to an invasion of privacy -- even when we post to a locked account or a closed group, we are still "publishing" to someone. Indeed, search engines have been scanning our content (albeit at their own pace) for many years, and YouTube users will be familiar with the way the platform recognises and monetises any copyrighted content.

It is nevertheless disconcerting to know that an automated pre-publication censor will examine everything we publish. It will chill freedom of expression in itself, and also lead to unnecessary automated takedowns when the system makes a mistake. Social media users routinely experience the problem of over-zealous bots causing the removal of public domain content, which impinges on free speech and damages livelihoods.

However, the greater worry is that these measures will not be limited to content posted only to public (or semi-public) feeds. The Interpretation section of the Bill (clause 137) defines "content" as "anything communicated by means of an internet service, whether publicly or privately ..." (emphasis added). So the Use of Technology Notices will apply to direct messaging services too.

This power presents two significant threats to civil liberties and digital rights.

The first is that once an "accredited technology" is deployed on a platform, it need not be limited to checking only for terrorism or child porn. Other criminal activity may eventually be added to the list through a simple amendment to the relevant section of the Act, ratcheting up the extent of the surveillance.

Meanwhile, other Governments around the world will take inspiration from OFCOM's powers to implement their own scanning regime, perhaps demanding that the social media companies scan for blasphemous, seditious, immoral or dissident content instead.

The second major threat is that the "accredited technologies" will necessarily undermine end-to-end encryption. If the tech companies are to scan all our content, then they have to be able to see it first. This demand, which the government overtly states as its goal, is incompatible with the concept of end-to-end encryption. Either such encryption will be disabled, or the technology companies will create some kind of "back door" that will leave users vulnerable to fraud, scams, and invasions of privacy.

Predictable examples include identity theft, credit card theft, mortgage deposit theft and theft of private messages and images. As victims of these crimes tell us, such thefts can lead to severe emotional distress and even contemplation of suicide -- precisely the 'harm' that the Online Safety Bill purports to prevent.

The trade-off, therefore, is not between privacy (or free speech) and security. Instead, it is a tension between two different types of online security: the 'negative' security to not experience harmful content online; and the 'positive' security of ensuring that our sensitive personal and corporate data is not exposed to those who would abuse it (and us).

As Ciaran Martin, the former head of the National Cyber Security Centre, said in November 2021: "cyber security is a public good ... it is increasingly hard to think of instances where the benefit of weakening digital security outweighs the benefits of keeping the broad majority of the population as safe as possible online as often as possible. There is nothing to be gained in doing anything that will undermine user trust in their own privacy and security."

A fundamental principle of human rights law is that any encroachment on our rights must be necessary and proportionate. And as ORG's challenge to GCHQ's surveillance practices in Big Brother Watch v UK demonstrated, treating the entire population as a suspect whose communications must be scanned is neither a necessary nor proportionate way to tackle the problem. Nor is it proportionate to dispense with a general right to data security, only to achieve a marginal gain in the fight against illegal content.

While terrorism and CSEA are genuine threats, they cannot be dealt with by permanently dispensing with everyone's privacy.

Open Rights Group recommends

  • Removing the provisions for Use of Technology Notices from the draft Online Safety Bill

  • If these provisions remain, Use of Technology Notices should only apply to public messages. The wording of clauses 64(4)(a) and (b) should be amended accordingly.

 

 

Shooting the messenger...

Sony Music is trying to block websites via public DNS resolvers


Link Here 6th December 2021
DNS-resolver Quad9 has lost its appeal against Sony Music's pirate site-blocking order at the Regional Court in Hamburg. The non-profit Quad9 Foundation is disappointed with the outcome but isn't giving up the legal battle just yet, noting that various Internet services are at risk if the order isn't successfully challenged.

Earlier this year, Germany's largest ISPs agreed to voluntarily block pirate sites as part of a deal they struck with copyright holders. These blockades, which are put in place following a thorough vetting process, are generally implemented on the DNS level. This is a relatively easy option, as all ISPs have their own DNS resolvers.

DNS (un)Blocking

DNS blocking is also easy to circumvent, however. Instead of using the ISPs' DNS resolvers, subscribers can switch to alternatives such as Cloudflare, Google, OpenDNS, and Quad9. This relatively simple change will render the ISPs' blocking efforts useless.
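To make the mechanics concrete, here is a small sketch of our own (not Quad9's or anyone's production code) that looks up the same domain through several public resolvers and prints the answers. It assumes the third-party dnspython package and uses example.com as a stand-in for any blocked site; an ISP-level DNS block simply means the ISP's own resolver withholds or redirects the answer while other resolvers still return the real one.

# Sketch: query the same domain through several public DNS resolvers
# and compare the answers. Requires the third-party package: pip install dnspython
import dns.exception
import dns.resolver

RESOLVERS = {
    "system default": None,   # whatever resolver the OS / ISP has configured
    "Cloudflare": "1.1.1.1",
    "Google": "8.8.8.8",
    "Quad9": "9.9.9.9",
}

def lookup(domain, nameserver=None):
    """Return the A records for a domain via the given nameserver (or the default)."""
    resolver = dns.resolver.Resolver()
    if nameserver is not None:
        resolver.nameservers = [nameserver]
    try:
        return [record.to_text() for record in resolver.resolve(domain, "A")]
    except dns.exception.DNSException as exc:
        # NXDOMAIN, a timeout, or a resolver refusing to answer all end up here.
        return ["no answer (" + type(exc).__name__ + ")"]

if __name__ == "__main__":
    domain = "example.com"  # stand-in; substitute any domain of interest
    for label, server in RESOLVERS.items():
        print(label, "->", lookup(domain, server))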

This workaround is widely known, not least to copyright holders. As such, it may not be a surprise that a few weeks after the German blocking agreement was reached, Sony Music obtained an injunction that requires DNS-resolver Quad9 to block a popular pirate site.

A blocking order against a DNS resolver is quite unusual and the Swiss-based non-profit organization Quad9 swiftly announced that it would appeal the verdict. The foundation stressed that it doesn't condone piracy but believes that enforcing blocking measures through third-party intermediaries is a step too far.

Court Upholds Site Blocking Order

Quad9 repeated these and other arguments at the Regional Court in Hamburg, asking it to overturn the injunction. After reviewing the input from both sides, the Court chose to uphold the site-blocking requirements.

The name of the targeted site remains redacted but the legal paperwork mentions that the unnamed site links to pirated music. We previously deduced that Canna.to is the likely target, as that site was already part of the ISPs' voluntary blocking agreement when the proceeding was initiated.

Having lost its first appeal, Quad9 notes that it will continue to block the site, as required by the injunction. The non-profit is disappointed with the Court's decision but announced that it will continue its appeal at a higher court. Quad9's General Manager John Todd said:

[We] will continue to pursue our legal fight against what we think is an outcome that threatens the very core of the Internet's ability to be a useful and trusted tool for everyone. Corporations should not have the ability to directly demand that network infrastructure operators censor sites.

 

 

Take a break from raging hormones...

Instagram introduces a policy to nudge teenagers towards what they 'should' be reading about


Link Here 6th December 2021
Full story: Instagram Censorship...Photo sharing website gets heavy on the censorship

  At Instagram, we've been working for a long time to keep young people safe on the app; as part of that work, today we're announcing some new tools and features to keep young people even safer on Instagram.

We'll be taking a stricter approach to what we recommend to teens on the app; we'll stop people from tagging or mentioning teens who don't follow them; we'll be nudging teens towards different topics if they've been dwelling on one topic for a long time; and we're launching the Take a Break feature in the US, UK, Ireland, Canada, Australia and New Zealand, which we previously announced.

We'll also be launching our first tools for parents and guardians early next year to help them get more involved in their teen's experiences on Instagram. Parents and guardians will be able to see how much time their teens spend on Instagram and set time limits. And we'll have a new educational hub for parents and guardians.

Parents and guardians know what's best for their teens, so we plan to launch our first tools in March to help them guide and support their teens on Instagram. Parents and guardians will be able to view how much time their teens spend on Instagram and set time limits. We'll also give teens a new option to notify their parents if they report someone, giving their parents the opportunity to talk about it with them. This is the first version of these tools; we'll continue to add more options over time.

We're also developing a new educational hub for parents and guardians that will include additional resources, like product tutorials and tips from experts, to help them discuss social media use with their teens.

It's important to me that people feel good about the time they spend on Instagram, so today we're launching Take A Break to empower people to make informed decisions about how they're spending their time. If someone has been scrolling for a certain amount of time, we'll ask them to take a break from Instagram and suggest that they set reminders to take more breaks in the future. We'll also show them expert-backed tips to help them reflect and reset.

We're also starting to test a new experience for people to see and manage their Instagram activity. We know that as teens grow up, they want more control over how they show up both online and offline so, for the first time, they will be able to bulk delete content they've posted like photos and videos, as well as their previous likes and comments. While available to everyone, I think this tool is particularly important for teens to more fully understand what information they've shared on Instagram, what is visible to others, and to have an easier way to manage their digital footprint. This new experience will be available to everyone in January.

In July, we launched the Sensitive Content Control, which allows people to decide how much sensitive content shows up in Explore. The control has three options: Allow, Limit and Limit Even More. Limit is the default state for everyone and is based on our Recommendation Guidelines; Allow enables people to see more sensitive content, whereas Limit Even More means they see less of this content than the default state. The Allow option is unavailable to people under the age of 18.

We're exploring expanding the Limit Even More state beyond Explore for teens. This will make it more difficult for teens to come across potentially harmful or sensitive content or accounts in Search, Explore, Hashtags, Reels and Suggested Accounts. We're in the early stages of this idea and will have more to share in time.

Lastly, our research shows -- and external experts agree -- that if people are dwelling on one topic for a while, it could be helpful to nudge them towards other topics at the right moment. That's why we're building a new experience that will nudge people towards other topics if they've been dwelling on one topic for a while. We'll have more to share on this, and changes we're making when it comes to content and accounts we recommend to teens, soon.

 

 

Progressive justice...

US judge suspends Texas internet law intended to stop social media companies from censoring right leaning opinions


Link Here 3rd December 2021
A US judge has blocked a Texas state internet law that banned large internet companies from censoring user content on the basis of political bias.

Texas' HB 20 law, passed a few months ago, bans online platforms with over 50 million monthly active users from censoring content based on a user's viewpoint. The law focuses on restricting social platforms' ability to censor content, although it contains some provisions to get illegal content removed faster.

Judge Robert Pitman granted an injunction requested by NetChoice and CCIA, putting HB 20 on hold until the case is complete. The judge ruled that the law violates the First Amendment rights of social media companies.

The judge insisted that the government cannot dictate what content a social media company is allowed or not allowed to publish.

Private companies that use editorial judgment to choose whether to publish content -- and, if they do publish content, use editorial judgment to choose what they want to publish -- cannot be compelled by the government to publish other content.

According to the court, viewpoint discrimination can be deemed editorial discretion, which is a principle protected by the First Amendment.

The law also requires large social media companies to provide detailed reports of their content moderation decisions. The court ruled that requirement is inordinately burdensome given the unfathomably large numbers of posts on these sites and apps.

 

 

Toxic proposals...

Russian proposals to treat all depictions of gay relationships as restricted pornography


Link Here 3rd December 2021
Full story: Gay in Russia...Russia bans gay parades and legislates against gay rights
A powerful Russian MP known for homophobic statements and projects has proposed a bill that would classify all depictions of LGBTQ+ relationships in the same banned or restricted categories as pornography.

Vitaly Milonov -- a member of Vladimir Putin's governing United Russia party, and deputy chairman of the Federal Assembly's Committee on Family Affairs, Women and Children -- said that people should have the right to ask the state's regulator to not allow the broadcast of films with LGBTQ+ content.

The announcement coincided with another Russian official's disclosure that he had prepared a catalogue of toxic content, using a system that labels content from completely banned to simply undesirable. Igor Ashmanov, a member of Russia's Presidential Council for Civil Society and Human Rights, revealed that he had developed a catalogue to mark so-called 'toxic content' on the internet.

The resource, Ashmanov said, would flag topics such as radical feminism [and] 'child-free' lifestyles, as well as the promotion of homosexuality and bestiality.

