Melon Farmers Original Version

Internet News


2019: August


 

Don't Play in Google's Privacy Sandbox...

A detailed technical investigation of Google's advanced tools designed to profile internet users for advertising


Link Here 31st August 2019
Full story: Google Privacy...Google's many run-ins with privacy

Last week, Google announced a plan to build a more private web. The announcement post was, frankly, a mess. The company that tracks user behavior on over 2/3 of the web said that "privacy is paramount to us, in everything we do."

Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies -- by far the most common tracking technology on the Web, and Google's tracking method of choice -- will hurt user privacy. By taking away the tools that make tracking easy, it contended, developers like Apple and Mozilla will force trackers to resort to opaque techniques like fingerprinting. Of course, lost in that argument is the fact that the makers of Safari and Firefox have shown serious commitments to shutting down fingerprinting, and both browsers have made real progress in that direction. Furthermore, a key part of the Privacy Sandbox proposals is Chrome's own (belated) plan to stop fingerprinting.

But hidden behind the false equivalencies and privacy gaslighting are a set of real technical proposals. Some are genuinely good ideas. Others could be unmitigated privacy disasters. This post will look at the specific proposals under Google's new Privacy Sandbox umbrella and talk about what they would mean for the future of the web.

The good: fewer CAPTCHAs, fighting fingerprints

Let's start with the proposals that might actually help users.

First up is the Trust API. This proposal is based on Privacy Pass, a privacy-preserving and frustration-reducing alternative to CAPTCHAs. Instead of having to fill out CAPTCHAs all over the web, with the Trust API, users will be able to fill out a CAPTCHA once and then use trust tokens to prove that they are human in the future. The tokens are anonymous and not linkable to one another, so they won't help Google (or anyone else) track users. Since Google is the single largest CAPTCHA provider in the world, its adoption of the Trust API could be a big win for users with disabilities, users of Tor, and anyone else who hates clicking on grainy pictures of storefronts.
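The unlinkability comes from blind issuance: the server signs tokens without ever seeing them, so it cannot recognise a token later when it is redeemed. The sketch below illustrates the idea with toy RSA blind signatures; this is an assumption for illustration only, since the real Privacy Pass protocol uses elliptic-curve VOPRFs, and the parameters here are far too small to be secure.

```python
import hashlib
import math
import secrets

# Toy RSA keypair (illustrative only; real deployments use much larger keys
# and a different cryptographic construction entirely).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def issue(blinded):
    # Server signs the blinded value: it never learns the underlying token.
    return pow(blinded, d, n)

# --- client side: create and blind a token ---
nonce = secrets.token_bytes(16)
m = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
while True:
    r = secrets.randbelow(n - 2) + 2          # blinding factor
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n              # server only ever sees this
s = (issue(blinded) * pow(r, -1, n)) % n      # unblind the signature

# --- later, at redemption: anyone with the public key can verify ---
assert pow(s, e, n) == m
```

Because the server saw only `blinded` (a random-looking value) at issuance, it cannot link the verified token `(m, s)` back to that issuance event, which is what makes the tokens unlinkable.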

Google's proposed privacy budget for fingerprinting is also exciting. Browser fingerprinting is the practice of gathering enough information about a specific browser instance to try to uniquely identify a user. Usually, this is accomplished by combining easily accessible information like the user agent string with data from powerful APIs like the HTML canvas. Since fingerprinting extracts identifying data from otherwise-useful APIs, it can be hard to stop without hamstringing legitimate web apps. As a workaround, Google proposes limiting the amount of data that websites can access through potentially sensitive APIs. Each website will have a budget, and if it goes over budget, the browser will cut off its access. Most websites won't have any use for things like the HTML canvas, so they should be unaffected. Sites that need access to powerful APIs, like video chat services and online games, will be able to ask the user for permission to go over budget. The devil will be in the details, but the privacy budget is a promising framework for combating browser fingerprinting.
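The budget framework can be reduced to a small sketch. Everything concrete here is an assumption: Google has not published real per-API "bit costs" or a cap, so the numbers, API names, and 20-bit limit below are invented purely to show the mechanism.

```python
# Hypothetical "bits of identifying information" each API call reveals.
API_COST_BITS = {
    "userAgent": 2.0,
    "screenResolution": 4.5,
    "canvasReadback": 10.0,
    "installedFonts": 8.0,
}

class PrivacyBudget:
    def __init__(self, limit_bits=20.0):
        self.limit = limit_bits
        self.spent = {}  # site -> bits already revealed to that site

    def request(self, site, api):
        cost = API_COST_BITS[api]
        if self.spent.get(site, 0.0) + cost > self.limit:
            return False  # browser cuts off access (or prompts the user)
        self.spent[site] = self.spent.get(site, 0.0) + cost
        return True

budget = PrivacyBudget()
assert budget.request("news.example", "userAgent")
assert budget.request("news.example", "canvasReadback")
assert budget.request("news.example", "installedFonts")
assert not budget.request("news.example", "canvasReadback")  # over budget
```

Note the budget is tracked per site, so an ordinary site exhausting its budget does not affect a video-chat site that legitimately needs powerful APIs.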

Unfortunately, that's where the good stuff ends. The rest of Google's proposals range from mediocre to downright dangerous.

The bad: Conversion measurement

Perhaps the most fleshed-out proposal in the Sandbox is the conversion measurement API. This is trying to tackle a problem as old as online ads: how can you know whether the people clicking on an ad ultimately buy the product it advertised? Currently, third-party cookies do most of the heavy lifting. A third-party advertiser serves an ad on behalf of a marketer and sets a cookie. On its own site, the marketer includes a snippet of code which causes the user's browser to send the cookie set earlier back to the advertiser. The advertiser knows when the user sees an ad, and it knows when the same user later visits the marketer's site and makes a purchase. In this way, advertisers can attribute ad impressions to page views and purchases that occur days or weeks later.

Without third-party cookies, that attribution gets a little more complicated. Even if an advertiser can observe traffic around the web, without a way to link ad impressions to page views, it won't know how effective its campaigns are. After Apple started cracking down on advertisers' use of cookies with Intelligent Tracking Prevention (ITP), it also proposed a privacy-preserving ad attribution solution. Now, Google is proposing something similar. Basically, advertisers will be able to mark up their ads with metadata, including a destination URL, a reporting URL, and a field for extra impression data -- likely a unique ID. Whenever a user sees an ad, the browser will store its metadata in a global ad table. Then, if the user visits the destination URL in the future, the browser will fire off a request to the reporting URL to report that the ad was converted.
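The impression-then-conversion flow just described can be sketched in a few lines. This is a loose paraphrase of the explainer's mechanics, not Google's actual API surface; the function names and table layout are illustrative assumptions.

```python
ad_table = []  # browser-global table of impressions the user has seen

def on_ad_view(destination_url, reporting_url, impression_data):
    # Browser stores the advertiser-supplied metadata when an ad is shown.
    ad_table.append({"dest": destination_url,
                     "report": reporting_url,
                     "data": impression_data})

def on_navigation(url, send_report):
    # If the user later lands on an ad's destination, the browser fires a
    # request to the reporting URL, echoing back the impression data.
    for entry in ad_table:
        if entry["dest"] == url:
            send_report(entry["report"], entry["data"])

reports = []
on_ad_view("https://shop.example/shoes", "https://adtech.example/r", 0xDEADBEEF)
on_navigation("https://shop.example/shoes",
              lambda url, data: reports.append((url, data)))
assert reports == [("https://adtech.example/r", 0xDEADBEEF)]
```

Everything in the privacy argument turns on how much entropy that `impression_data` field is allowed to carry, which is exactly where Apple's and Google's proposals diverge.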

In theory, this might not be so bad. The API should allow an advertiser to learn that someone saw its ad and then eventually landed on the page it was advertising; this can give raw numbers about the campaign's effectiveness without individually-identifying information.

The problem is the impression data. Apple's proposal allows marketers to store just 6 bits of information in a campaign ID, that is, a number between 1 and 64. This is enough to differentiate between ads for different products, or between campaigns using different media.

On the other hand, Google's ID field can contain 64 bits of information -- a number between 1 and 18 quintillion. This will allow advertisers to attach a unique ID to each and every ad impression they serve, and, potentially, to connect ad conversions with individual users. If a user interacts with multiple ads from the same advertiser around the web, these IDs can help the advertiser build a profile of the user's browsing habits.
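The gap between the two proposals is easy to quantify. Six bits can only label broad campaigns; 64 bits is enough to hand every single impression (and hence every viewer) its own identifier:

```python
apple_ids = 2 ** 6    # 64 distinct campaign labels under Apple's proposal
google_ids = 2 ** 64  # distinct labels under Google's proposal

assert apple_ids == 64
assert google_ids == 18_446_744_073_709_551_616  # the "18 quintillion"

# 64 bits dwarfs the number of people on Earth (~8 billion), so each
# impression can carry a globally unique, user-identifying ID.
assert google_ids > 8 * 10 ** 9
```

With 6 bits, many users necessarily share each ID; with 64 bits, no one has to.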

The ugly: FLoC

Even worse is Google's proposal for Federated Learning of Cohorts (or FLoC). Behind the scenes, FLoC is based on Google's pretty neat federated learning technology. Basically, federated learning allows users to build their own, local machine learning models by sharing little bits of information at a time. This allows users to reap the benefits of machine learning without sharing all of their data at once. Federated learning systems can be configured to use secure multi-party computation and differential privacy in order to keep raw data verifiably private.
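To make "sharing little bits of information at a time" concrete, here is a tiny federated-averaging sketch: each client takes one gradient step on its own data and only the resulting small update leaves the device, with a server averaging the updates into the shared model. The data, learning rate, and round count are all invented for illustration.

```python
# Each client's private data: one (feature, label) pair, roughly label = 2 * feature.
clients = [
    (1.0, 2.0),
    (2.0, 4.1),
    (3.0, 5.9),
]

weight = 0.0                       # shared model: predicts label = weight * feature
for _ in range(50):                # federated rounds
    updates = []
    for x, y in clients:
        grad = 2 * (weight * x - y) * x       # local gradient on local data
        updates.append(weight - 0.05 * grad)  # only this update is shared
    weight = sum(updates) / len(updates)      # server averages the updates

assert abs(weight - 2.0) < 0.2     # model converges near the true slope
```

The raw `(x, y)` pairs never leave the clients; only model updates do, which is the property real deployments harden further with secure aggregation and differential privacy.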

The problem with FLoC isn't the process, it's the product. FLoC would use Chrome users' browsing history to do clustering. At a high level, it will study browsing patterns and generate groups of similar users, then assign each user to a group (called a flock). At the end of the process, each browser will receive a flock name which identifies it as a certain kind of web user. In Google's proposal, users would then share their flock name, as an HTTP header, with everyone they interact with on the web.
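One way such clustering could work is a locality-sensitive hash over browsing history, so that similar histories land in the same or nearby cohorts. The 8-bit SimHash below is purely a stand-in for illustration; it is not Google's algorithm, and real cohort IDs would be derived differently.

```python
import hashlib

def simhash8(domains):
    """Map a set of visited domains to an 8-bit cohort ('flock') ID."""
    acc = [0] * 8                  # one signed accumulator per output bit
    for d in domains:
        h = hashlib.sha256(d.encode()).digest()[0]  # 8 feature bits per domain
        for i in range(8):
            acc[i] += 1 if (h >> i) & 1 else -1
    return sum((1 << i) for i in range(8) if acc[i] > 0)

flock_a = simhash8(["knitting.example", "yarn.example", "crafts.example"])
flock_b = simhash8(["knitting.example", "yarn.example", "wool.example"])
# Similar histories tend to hash to nearby (often identical) cohort IDs;
# the resulting ID is what would be broadcast in an HTTP header.
```

The point of the sketch is the privacy problem, not the math: whatever the algorithm, the output is a compact behavioral label derived from your entire browsing history, handed to every site you visit.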

This is, in a word, bad for privacy. A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate. The flock names will likely be inscrutable to users, but could reveal incredibly sensitive information to third parties. Trackers will be able to use that information however they want, including to augment their own behind-the-scenes profiles of users.

Google says that the browser can choose to leave sensitive data from browsing history out of the learning process. But, as the company itself acknowledges, different data is sensitive to different people; a one-size-fits-all approach to privacy will leave many users at risk. Additionally, many sites currently choose to respect their users' privacy by refraining from working with third-party trackers. FLoC would rob these websites of such a choice.

Furthermore, flock names will be more meaningful to those who are already capable of observing activity around the web. Companies with access to large tracking networks will be able to draw their own conclusions about the ways that users from a certain flock tend to behave. Discriminatory advertisers will be able to identify and filter out flocks which represent vulnerable populations. Predatory lenders will learn which flocks are most prone to financial hardship.

FLoC is the opposite of privacy-preserving technology. Today, trackers follow you around the web, skulking in the digital shadows in order to guess at what kind of person you might be. In Google's future, they will sit back, relax, and let your browser do the work for them.

The ugh: PIGIN

That brings us to PIGIN. While FLoC promises to match each user with a single, opaque group identifier, PIGIN would have each browser track a set of interest groups that it believes its user belongs to. Then, whenever the browser makes a request to an advertiser, it can send along a list of the user's interests to enable better targeting.

Google's proposal devotes a lot of space to discussing the privacy risks of PIGIN. However, the protections it discusses fall woefully short. The authors propose using cryptography to ensure that there are at least 1,000 people in an interest group before disclosing a user's membership in it, as well as limiting the maximum number of interests disclosed at a time to 5. This limitation doesn't hold up to much scrutiny: membership in 5 distinct groups, each of which contains just a few thousand people, will be more than enough to uniquely identify a huge portion of users on the web. Furthermore, malicious actors will be able to game the system in a number of ways, including to learn about users' membership in sensitive categories. While the proposal gives a passing mention to using differential privacy, it doesn't begin to describe how, specifically, that might alleviate the myriad privacy risks PIGIN raises.
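A back-of-the-envelope calculation shows why the 1,000-person minimum fails. Assuming (generously for PIGIN) that group memberships are roughly independent and users are drawn from the global web population:

```python
web_users = 4 * 10 ** 9   # rough global web population (assumption)
group_size = 1000         # minimum group size in the proposal
disclosed = 5             # maximum interests disclosed at a time

# Probability a random user belongs to any one given 1,000-person group:
p_member = group_size / web_users

# Expected number of users sharing all five disclosed memberships:
expected_matches = web_users * p_member ** disclosed

assert expected_matches < 1   # effectively zero: the 5-tuple is unique
```

Far fewer than one other person is expected to share the same five small groups, so the disclosed set acts as a unique identifier for most users, exactly the outcome the minimum group size was supposed to prevent.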

Google touts PIGIN as a win for transparency and user control. This may be true to a limited extent. It would be nice to know what information advertisers use to target particular ads, and it would be useful to be able to opt-out of specific interest groups one by one. But like FLoC, PIGIN does nothing to address the bad ways that online tracking currently works. Instead, it would provide trackers with a massive new stream of information they could use to build or augment their own user profiles. The ability to remove specific interests from your browser might be nice, but it won't do anything to stop companies that have already collected that data from storing, sharing, or selling it. Furthermore, these features of PIGIN would likely become another option that most users don't touch. Defaults matter. While Apple and Mozilla work to make their browsers private out of the box, Google continues to invent new privacy-invasive practices for users to opt-out of.

It's never about privacy

If the Privacy Sandbox won't actually help users, why is Google proposing all these changes?

Google can probably see which way the wind is blowing. Safari's Intelligent Tracking Prevention and Firefox's Enhanced Tracking Protection have severely curtailed third-party trackers' access to data. Meanwhile, users and lawmakers continue to demand stronger privacy protections from Big Tech. While Chrome still dominates the browser market, Google might suspect that the days of unlimited access to third-party cookies are numbered.

As a result, Google has apparently decided to defend its business model on two fronts. First, it's continuing to argue that third-party cookies are actually fine, and companies like Apple and Mozilla who would restrict trackers' access to user data will end up harming user privacy. This argument is absurd. But unfortunately, as long as Chrome remains the most popular browser in the world, Google will be able to single-handedly dictate whether cookies remain a viable option for tracking most users.

At the same time, Google seems to be hedging its bets. The Privacy Sandbox proposals for conversion measurement, FLoC, and PIGIN are each aimed at replacing one of the existing ways that third-party cookies are used for targeted ads. Google is brainstorming ways to continue serving targeted ads in a post-third-party-cookie world. If cookies go the way of the pop-up ad, Google's targeting business will continue as usual.

The Sandbox isn't about your privacy. It's about Google's bottom line. At the end of the day, Google is an advertising company that happens to make a browser.

 

 

Kazakhstan pauses interception of encrypted traffic, but for how long?...

President Tokayev has called off the controversial 'national security certificates'


Link Here 31st August 2019
Full story: Internet Censorship in Kazakhstan...New internet censorship law

In late July, mobile network providers in Kazakhstan started sending out SMS messages demanding that their clients install a 'national security certificate' on all personal digital devices with internet access. These messages claimed that the certificate would protect citizens from cyberattacks. They also assured users who did not install the application that they would encounter problems accessing certain websites (particularly those using HTTPS encryption).

This news came one and a half months after Kazakhstan's government blocked access to internet and streaming services on June 9, when the country held presidential elections. The victory of Kassym-Zhomart Tokayev came amid mass protests calling for fair elections. Meanwhile, an internet blackout prevented protesters from coordinating their actions, helping police to arrest them.

These moves led some observers to fear the beginning of a wider crackdown on digital rights in Kazakhstan. So while Tokayev called off the introduction of the controversial national security certificates on August 6, there are grounds to doubt that this will be the government's last attempt to intrude on cyberspace.

Fear and suspicion on social media

In the first days [after receiving the SMS messages] we faced lots of panic. People were afraid that they would indeed be deprived of access to certain websites without installing the security certificate, Gulmira Birzhanova, a lawyer at the North Kazakhstan Legal Media Centre told GV:

However, few users rushed to obey the SMS messages. I didn't install [the application]. I don't even know if any of my acquaintances did.

Nevertheless, the demands to install an unknown security tool caused a wave of distrust and outrage on social media.

Daniil Vartanov, an IT expert from neighbouring Kyrgyzstan, was one of the first people to react to the launch of the certificate and confirmed users' suspicions.

Now they can read and replace everything you look at online. Your personal information can be accessed by anybody in the state security services, ministry of internal affairs, or even the illicitly hired nephew of some top official. This isn't an exaggeration; this is really how bad it is.

On August 1, Kazakhstan's prosecutor general issued a statement reassuring citizens that the national security certificate was aimed at protecting internet users from illicit content and cyberattacks, stressing that the state guaranteed their right to privacy.

IT experts proved otherwise. Censored Planet, a project at the University of Michigan which monitors network interference in over 170 countries, warned that the Kazakh authorities had started attempting to intercept encrypted traffic using man in the middle attacks on July 17. At least 37 domains were affected, including social media networks.

Man in the middle or HTTPS interception attacks are attempts to replace genuine online security certificates with fake ones. Normally, a security certificate helps a browser or application (for example, Instagram or Snapchat) to ensure that it connects to the real server. If a state, [internet] provider or illegal intruder tries to intercept traffic, the application will stop working and the browser will display a certificate error. The Kazakh authorities push citizens to install this certificate so that the browser and application continue to work despite the interception, instead of raising an alarm, explained Vartanov in an interview to GV in early August.
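Vartanov's explanation reduces to a toy model of the browser's trusted root store. This sketch is heavily simplified (real validation checks signatures, hostnames, and expiry over a full certificate chain), and the certificate name is taken from press reports about the Kazakh system:

```python
# The browser's built-in set of trusted certificate authorities.
trusted_roots = {"GlobalSign", "DigiCert"}

def connection_ok(chain_root):
    # Simplified check: is the root of the server's certificate chain trusted?
    return chain_root in trusted_roots

# An interceptor re-signs traffic with its own certificate: the browser
# rejects it and shows a certificate error, so the user notices.
assert not connection_ok("Qaznet Trust Network")

# But once the user installs the "national security certificate" as a
# trusted root, the same interception passes validation silently.
trusted_roots.add("Qaznet Trust Network")
assert connection_ok("Qaznet Trust Network")
```

That single added entry is the entire attack: the browser's error page was the user's only warning, and installing the certificate switches it off.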

This was the authorities' third attempt to enforce the use of a national security certificate. The first came in late November 2015, right after certificate-related amendments were made to Kazakhstan's law on communication. The law obliges telecom operators to apply a national security certificate to all encrypted traffic except in cases where the encryption originates from Kazakhstan.

That same month, service providers announced that a national security certificate would come into force by January 2016. The announcement was soon taken down, and the issue remained forgotten for three years.

The second attempt came in March 2019, and was barely noticed by the public until they started to receive the aforementioned SMS messages in July.

After two weeks of turmoil on social media, Tokayev called off the certificate on August 6.

Why did Tokayev put the initiative on hold? Dmitry Doroshenko, an expert with over 15 years of experience in Central Asia's telecommunications sector, believes that concern about the security of online transactions played a major role:

In case of a man in the middle attack, an illegal intruder or state can use any decrypted data at their own discretion. That compromises all participants in any exchange of information. Most players in online markets would not be able to guarantee data privacy and security, said Doroshenko. It's obvious that neither internet giants nor banks or international payment systems are ready to take this blow to their reputation. If information were leaked, users would hold them to account rather than the state, which would be unable to conduct any objective investigation, the IT specialist told Global Voices.

Citizens of Kazakhstan also appealed to tech giants to intervene and prevent the government from setting a dangerous precedent. On August 21, Mozilla, Google, and Apple agreed to block the Kazakh government's encryption certificate. In its statement, Mozilla noted that the country's authorities had already tried to have a certificate included in Mozilla's trusted root store program in 2015. After it was discovered that they were intending to use the certificate to intercept user data, Mozilla denied the request.

Kazakhstan is hardly the only country where the right to digital privacy is under threat. The British government wants to create a backdoor to access encrypted communications, as do its partners in the US. The Kremlin wants to make social media companies store data on servers located in Russia.

 

 

Offsite Article: YouTube's biased political censorship tested in US court...


Link Here 30th August 2019
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
YouTube faces dueling lawsuits from a conservative group and an LGBTQ+ group, both of which argue that the video site discriminates against them

See article from wired.com

 

 

A conspiracy to censor the internet...

YouTube is tweaking its recommendation algorithms to downplay conspiracy theory videos


Link Here 29th August 2019
YouTube plans to tweak its recommendation algorithm to cut back on conspiracy theory videos in the UK. The platform is in the middle of rolling out the update to its British users, a spokesperson confirmed to TechCrunch. It's unclear when exactly the change will occur.

Back in January, the platform said it would begin reducing what it deemed borderline content, or videos that came close to -- but didn't quite -- violate YouTube's Community Guidelines and videos that misinformed people. The company listed Flat Earth, 9/11 and anti-vax conspiracy theories as some examples of content it would try to reduce.

It's unclear whether YouTube's efforts in the US are working. A Huffington Post investigation from July revealed that even though recommendations for conspiracy theories have been cut in half and some heavyweight distributors have been deplatformed, conspiracy theory videos are still thriving on the platform.

 

 

Poking around in your apps...

Microsoft to be investigated on Windows 10 slurping user data without consent


Link Here 29th August 2019
Windows 10 'telemetry' snoops on users' data without giving them any choice to say no. Surely that is a massive no-no under the European General Data Protection Regulation, which requires that a data grab is either essential or else covered by consent. And Microsoft never asks for consent; it just grabs the data anyway.

Now the Dutch Data Protection Authority (DPA) is asking how Microsoft complies with GDPR. It has referred Windows 10 to the data protection authority in Ireland, where Microsoft has its European headquarters.

The case stems from the Dutch data-protection agency's (DPA's) findings in pre-GDPR 2017. At that time, the agency found that Microsoft didn't tell Windows 10 Home and Pro users which personal data it collects and how it uses the data, and didn't give consumers a way to give specific consent.

As part of the Windows 10 April 2018 Update, Microsoft last year released new privacy tools to help explain to users why and when it was collecting telemetry data. And by April 2018, the Dutch DPA assessed that the privacy of Windows 10 users was greatly improved due to its probe, having addressed the concerns raised over earlier versions of Windows 10.

However, the Dutch DPA on Tuesday said while the changes Microsoft made last year to Windows 10 telemetry collection did comply with the agreement, the company might still be in breach of EU privacy rules. The earlier investigation brought to light that Microsoft is remotely collecting other data from users. As a result, Microsoft is still potentially in breach of privacy rules.

Ireland's DPA has confirmed it had received the Netherlands' request.

 

 

Offsite Article: Hong Kong ISPs warn that restricting online access would be ruinous for the region...


Link Here 29th August 2019
Hong Kong worries that action to quell the protests may include putting Hong Kong behind the Great Firewall of China

See article from techcrunch.com

 

 

Offsite Article: Google defends tracking cookies...


Link Here 27th August 2019
Full story: Google Privacy...Google's many run-ins with privacy
Banning tracking cookies jeopardizes the future of the vibrant Web. By Timothy B. Lee

See article from arstechnica.com

 

 

Would you press the button to stop all internet porn?...

New Zealand's chief censor entertains TV viewers


Link Here 26th August 2019
New Zealand's Children's Minister Tracey Martin has been calling for ideas to modernise internet censorship laws to protect kids from porn.

So the country's Chief Censor David Shanks has been on the campaign trail seeking to grab some of those powers to censor internet porn.

Shanks made an interesting pitch when invited onto the AM Show on breakfast TV. Speaking of ideas for porn censorship, he noted:

Tracey Martin says all options are on the table. There are ethical dilemmas involved in cutting the supply, however. Are we going to become like China, in terms of state-imposed restrictions? And who decides where the limits to those are? These are difficult questions.

He said he once stood in front of a room full of people at a conference and outlined a scenario and said:

'I'm the chief censor. Imagine I've got a box with a button on it - a big red button - and if I push that button, I've terminated all access to pornography for everyone in this country. Should I push the button?'

There was a stunned silence from the room, then someone said, 'Who gets to decide what pornography is?' I said, 'I am! I'm the Chief Censor.' But I think that highlights some of the issues underpinning these questions.

No one in the audience urged him to push the button.

A working party has been set up to investigate what can be done. The Office of Film and Literature Classification leads the group; other agencies involved are Netsafe, the Ministry of Health, Internal Affairs, the Ministry for Women, the Ministry of Social Development, ACC and the Ministry of Education.

 

 

Shooting the messenger...

Australia will set up a 24/7 crisis centre to coordinate internet censorship


Link Here 26th August 2019

Censorship Control Centre
 

Australia plans to block websites to stop the spread of extreme content during crisis events. Prime minister Scott Morrison claimed at the G7 summit that the measures were needed in response to Brenton Tarrant's attack on two New Zealand mosques in March. He said in a statement:

The live-streamed murder of 51 worshippers demonstrated how digital platforms and websites can be exploited to host extreme violent and terrorist content.

That type of abhorrent material has no place in Australia, and we are doing everything we can to deny terrorists the opportunity to glorify their crimes, including taking action locally and globally.

Under the measures, Australia's eSafety Commissioner would work with companies to block websites propagating terrorist material. A new 24/7 Crisis Coordination Centre will be tasked with monitoring terror-related incidents and extremely violent events for censorship.

 

 

Tax and censor...

The US is resisting French bullying to censor social media in line with the Christchurch Call


Link Here 25th August 2019

US social media companies have delayed signing a pledge which aims to combat what the French government deems to be online hate speech. The pledge pushes online service providers to commit to more aggressive censorship and moderation of content on their platforms.

Europe 1 radio is reporting that President Trump pressured US social media companies to delay signing the pledge saying that France was bullying the companies to join.

The pledge is titled Charter for an Open, Free, and Safe Internet. It expands on the commitments made by social media companies in the immediate aftermath of the New Zealand mosque massacre. Social media companies took down a live stream of the killings and the killer Brenton Tarrant's manifesto. New Zealand ISPs blocked websites until such material was removed.

The pledge will widen the scope of the commitments from online service providers related to:

  • Taking down content
  • Moderating content
  • Being transparent
  • Providing support for victims

France wanted US social media companies to sign this pledge on August 23. However, according to France's junior minister for the digital industry Cédric O, the signing has been delayed until August 27.

A senior Trump administration official said that the White House is still evaluating the pledge and that the industry wants to water down the initiative.

Commentators suggest that background to the delay may be related to France's plans to introduce a new tax for US social media companies.

 

 

Peppa Prig...

YouTube announces new rules to ban adult parodies of children's cartoons using tags and titles that may still appeal to children


Link Here 25th August 2019
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
A little while ago there was an issue on YouTube about parody videos using well known children's cartoons as a baseline for adult humour. The videos were not in themselves outside of what YouTube allows but were not suitable for the child audience of the original shows. YouTube has now responded as follows:

Content that contains mature or violent themes that explicitly targets younger minors and families in the title, description and/or tags will no longer be allowed on the platform. This content was previously age-restricted, but today we're updating our child safety policies to better protect the family experience.

What content will be removed?

We're removing misleading family content, including videos that target younger minors and families, that contain sexual themes, violence, obscenity, or other mature themes not suitable for young audiences. Here are some examples of content that will be removed:

  • A video with tags like "for children" featuring family friendly cartoons engaging in inappropriate acts like injecting needles.
  • Videos with prominent children's nursery rhymes targeting younger minors and families in the video's title, description or tags, that contain adult themes such as violence, sex, death, etc.
  • Videos that explicitly target younger minors and families with phrasing such as "for kids" or "family fun" in the video's title, description and/or tags that contain vulgar language.
What content will be age-restricted?

Content that is meant for adults and not targeting younger minors and families won't be removed, but it may be age-restricted. If you create adult content that could be confused with family entertainment, make sure your titles, descriptions, and tags match the audience you are targeting. Remember you can age restrict your content upon upload if it's intended for mature audiences. Here is an example of content that may still be allowed on YouTube but will be age-restricted:

  • Adult cartoons with vulgar language and/or violence that is explicitly targeted at adults.

 

 

Canada backs off from supporting the protection of strong encryption...

A detailed analysis suggesting that Canada is moving to supporting backdoors and deliberately weakening algorithms


Link Here 24th August 2019

 

 

 

Offsite Article: Can YouTube Be Liable For Copyright Infringing Videos?...


Link Here 23rd August 2019
Full story: Internet Censorship in EU...EU introduces swathes of internet censorship law
Top EU Court is to Decide on case threatening safe harbour protections underpinning the legality of European websites hosting user content

See article from torrentfreak.com

 

 

No compromise...

Google and Firefox thwart Kazakhstan's attempt to compromise https website encryption


Link Here 22nd August 2019
Full story: Internet Censorship in Kazakhstan...New internet censorship law

Google and Mozilla have moved to block the Kazakhstan government from intercepting encrypted internet traffic.

It comes after reports ISPs in the country required people to install a government-issued certificate on all devices and in every browser. Google and Mozilla noted that installing the compromised certificate allows the government to decrypt and read anything a user types or posts.

Google and Mozilla said they would deploy a technical solution to their browsers to block the certificates. Chrome senior engineering director Parisa Tabriz said:

We will never tolerate any attempt, by any organisation - government or otherwise - to compromise Chrome users' data.

We have implemented protections from this specific issue, and will always take action to secure our users around the world.

That said, Chrome seems more than happy to allow UK users' browsing history to be monitored by the state, when it could implement an encrypted DNS alternative.

Mozilla senior director of trust and security Marshall Erwin said: People around the world trust Firefox to protect them as they navigate the internet, especially when it comes to keeping them safe from attacks like this that undermine their security.

According to researchers at Censored Planet, who have been tracking the interception system in Kazakhstan, the government has mainly been using the facility to monitor Facebook, Twitter and Google.

 

 

Offsite Article: Pit bull triceratops fighting...


Link Here22nd August 2019
Google's AI censor can't distinguish Robot Wars from dog fighting and bans fighting robot tournaments from YouTube

See article from bbc.com

 

 

Extract: Offtrack snooping...

Facebook is introducing a privacy option to prevent its offline tracking capabilities


Link Here21st August 2019
Full story: Facebook Privacy...Facebook criticised for discouraging privacy
Facebook is revealing the wider range of websites and apps that gather data for it, previously without being identified or seeking consent. Facebook will offer a new privacy control covering these newly revealed snoopers.

A feature in settings called Off-Facebook Activity will show all the apps and websites that send information about you to Facebook, which is then used to target ads more effectively. You will also be able to clear your history and prevent your future off-app behaviour from being tapped.

For now, it is rolling out very slowly, with only Ireland, South Korea and Spain getting access. But the goal is to eventually offer it globally.

Facebook collects data from beyond its platform either because you have opted to use the social media site to log in to an app or, more likely, because a website uses something called Facebook Pixel to track your activities.
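The pixel mechanism is simple: the host site embeds a tiny script or image whose URL carries the visitor's context back to the tracker as query parameters. A rough sketch of how such a beacon URL is assembled (the endpoint and parameter names here are illustrative assumptions, not Facebook's documented API):

```python
from urllib.parse import urlencode

def pixel_url(pixel_id: str, event: str, page_url: str) -> str:
    # The browser fetches this URL as a 1x1 image; the request itself,
    # plus any cookie sent along with it, is the tracking payload.
    params = {"id": pixel_id, "ev": event, "dl": page_url}
    return "https://tracker.example/tr?" + urlencode(params)
```

Every page carrying the pixel thus reports the visit, and the accompanying cookie lets the tracker tie visits across unrelated sites back to one person.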

If you select options to turn off tracking Facebook will still collect the data, but it will be anonymised.

...Read the full article from bbc.com

 

 

State sponsored fake news...

Facebook and Twitter close 950 accounts carrying Chinese propaganda against Hong Kong protests


Link Here20th August 2019
Twitter and Facebook have blocked what they described as a state-backed Chinese misinformation campaign.

Twitter said it removed 936 accounts that were being used to sow political discord in Hong Kong. It said the accounts originated in mainland China and were part of a coordinated attempt to undermine the legitimacy and political positions of the protest movement.

Facebook said it had, after being tipped off by Twitter, removed seven Pages, three Groups and five Facebook accounts. Nathaniel Gleicher, Facebook's head of cybersecurity policy said:

Although the people behind this activity attempted to conceal their identities, our investigation found links to individuals associated with the Chinese government.

The move came after Twitter was criticised at the weekend for allowing China's Xinhua news agency to buy sponsored posts on the network. Twitter said on Monday it would no longer allow such ads, saying: Going forward, we will not accept advertising from state-controlled news media entities.

 

 

Offsite Article: Go BoJo Go...


Link Here20th August 2019
Full story: Internet Censorship in EU...EU introduces swathes of internet censorship law
EU planning to grab total control of internet regulations. By David Spence

See article from vpncompare.co.uk

 

 

Once Upon a Time in Hollywood...

Cut by the Indian film censors


Link Here18th August 2019
Full story: Film cuts in India...Censor cuts for movies released in India
Once Upon a Time ... in Hollywood is a 2019 USA / UK comedy drama by Quentin Tarantino.
Starring Leonardo DiCaprio, Brad Pitt and Margot Robbie. BBFC link IMDb

Quentin Tarantino's Once Upon a Time... in Hollywood visits 1969 Los Angeles, where everything is changing, as TV star Rick Dalton (Leonardo DiCaprio) and his longtime stunt double Cliff Booth (Brad Pitt) make their way around an industry they hardly recognize anymore. The ninth film from the writer-director features a large ensemble cast and multiple storylines in a tribute to the final moments of Hollywood's golden age.

Director Quentin Tarantino's new film, Once Upon a Time in Hollywood, has been passed by the Indian film censors at the Central Board of Film Certification (CBFC) with an adults-only A certificate, with a couple of curious cuts.

The censor board left in multiple instances of the word 'fuck' but has beeped out every usage of the word 'ass', according to Pinkvilla, which has access to the censor certificate.

 

 

Fake censorship...

Instagram to allow users to report 'fake news', but no doubt this will be used to harass those with opposing views


Link Here18th August 2019
Full story: Instagram Censorship...Photo sharing website gets heavy on the censorship
Instagram is adding an option for users to report posts they claim are false. The photo-sharing website is responding to increasing pressure to censor material that governments do not like.

Posts rated as false are then removed from search tools, such as Instagram's explore tab and hashtag search results.

The new report facility on Instagram is being initially rolled out only in the US.

Stephanie Otway, a Facebook company spokeswoman, said:

This is an initial step as we work towards a more comprehensive approach to tackling misinformation.

Posting false information is not banned on any of Facebook's suite of social media services, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.

 

 

Offsite Article: The EU's latest assault on internet freedom...


Link Here17th August 2019
Full story: Internet Censorship in EU...EU introduces swathes of internet censorship law
Soon online speech will be regulated by Brussels. By Andrew Tettenborn

See article from spiked-online.com

 

 

Quality control...

Facebook introduces new censorship for private groups and labels it as 'Group Quality'


Link Here16th August 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has introduced a new censorship tool known as Group Quality to evaluate private groups and scrutinize them for any 'problematic content'.

For a long time now, Facebook has been facing heat from the media over claims that the private groups feature harbours extremists and the spreading of 'fake news'. As a result, the company published an article on newsroom.fb.com introducing a new feature known as Group Quality:

Being in a private group doesn't mean that your actions should go unchecked. We have a responsibility to keep Facebook safe, which is why our Community Standards apply across Facebook, including in private groups. To enforce these policies, we use a combination of people and technology -- content reviewers and proactive detection. Over the last few years, we've invested heavily in both, including hiring more than 30,000 people across our safety and security teams.

Within this, a specialized team has been working on the Safe Communities Initiative: an effort that started two years ago with the goal of protecting people using Facebook Groups from harm. Made up of product managers, engineers, machine learning experts and content reviewers, this team works to anticipate the potential ways people can do harm in groups and develops solutions to minimize and prevent it. As the head of Facebook Groups, I want to explain how we're making private groups safer by focusing on three key areas: proactive detection, tools for admins, and transparency and control for members.

On the plus side, Facebook has updated the settings used in defining the access and visibility of groups, which are much clearer than previous incarnations.

Critics say that Facebook's move will not curb misinformation and fake news but, on the contrary, may push it deeper underground, making it harder for censors to filter or remove such content from the site.

 

 

Fake claims...

Thailand sets up an internet censorship centre in the name of 'fake news'


Link Here15th August 2019
Full story: Internet Censorship in Thailand...Thailand implements mass website blocking

Thailand's Digital Economy and Society Minister Puttipong Punnakanta plans to set up a Fake News Center.

The digital minister confirmed that he is looking to create the Fake News Center to:

get rid of fabricated, misleading content on social media which might jeopardize the people's safety and property and violate the Computer Crime Act and other laws.

For instance, content on social media about natural disasters and health care might be fabricated or exaggerated only to confuse and scare viewers. They might be deceived by fraudulent investment scams or lured to buy illegal, hazardous health products online.

He said a dozen government agencies will be asked to cooperate with the Fake News Center such as the police, the military, the Consumer Protection Board, the Food and Drugs Administration and the Public Relations Department, among others.

 

 

'Protecting' people's data by forcing them to hand it over to any internet Tom, Dick and Harry...

ICO seems to have backed off from requiring age verification for nearly all websites


Link Here13th August 2019
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
Back in April of this year the data protection police of the ICO set about drawing up rules for how nearly all commercial websites should deal with children and their personal data.

Rather perversely the ICO decided that age verification should underpin a massively complex regime requiring different data processing for several age ranges. And of course the children's data would be 'protected' by requiring nearly all websites to demand everybody's identity-defining personal data in order to slot people into the ICO's age ranges.
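To see why this is so invasive, consider what slotting visitors into age ranges implies: every visitor must first prove an age. A toy sketch of the slotting step (the band boundaries below are an assumption for illustration, not the ICO's final definitions):

```python
# Hypothetical age bands for illustration; the draft code keyed its
# standards to developmental stages, not necessarily these boundaries.
AGE_BANDS = [(5, "0-5"), (9, "6-9"), (12, "10-12"), (15, "13-15"), (17, "16-17")]

def age_band(verified_age: int) -> str:
    """Map a verified age onto a data-processing regime; default to adult."""
    for upper, label in AGE_BANDS:
        if verified_age <= upper:
            return label
    return "adult"
```

The catch, of course, is the `verified_age` input: producing it requires every visitor, adult or child, to hand over identity data before the slotting can happen at all.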

The ICO consulted on their proposals and it seems that the internet industry forcefully pointed out that it was not a good idea for nearly all websites to have to demand age verification from all website visitors.

The ICO has yet to publish the results of the consultation or its response to the criticism but the ICO have been playing down this ridiculously widespread age verification. This week the Information Commissioner Elizabeth Denham further hinted at this in a blog. She wrote:

Our consultation on the proposed code began in April, and prompted more than 450 written responses, as well more than 40 meetings with key stakeholders. We were pleased with the breadth of views we heard. Parents, schools and children's campaign groups helped us better understand the problems young people can face online, whether using social media services or popular games, while developers, tech companies and online service providers gave us a crucial insight into the challenges industry faces to make this a reality.

...

This consultation has helped us ensure our final code will be effective, proportionate and achievable.

It has also flagged the need for us to be clearer on some standards.

We do not want to see an age-gated internet, where visiting any digital service requires people to prove how old they are. Our aim has never been to keep children from online services, but to protect them within it. We want providers to set their privacy settings to high as a default, and to have strategies in place for how children's data is handled.

We do not want to prevent young people from engaging with the world around them, and we've been quick to respond to concerns that our code would affect news websites. This isn't the case. As we told a DCMS Select Committee in July, we do not want to create any barriers to children accessing news content. The news media plays a fundamental role in children's lives and the final version of the code will make that very clear.

That final version of the code will be delivered to the Secretary of State ahead of the statutory deadline of 23 November 2019.

We recognise the need to allow companies time to implement the standards and ensure they are complying with the law. The law allows for a transition period of up to a year and we'll be considering the most appropriate approach to this, before making a final decision in the autumn. In addition to the code itself, my office is also preparing a significant package to ensure that organisations are supported through any transition period, including help and advice for designers and engineers.

 

 

The European YouTube censor...

The Broadcasting Authority of Ireland volunteers to be the country's internet video censor


Link Here13th August 2019
The UK Government recently outlined its plans for appointing Ofcom as the internet censor overseeing new EU censorship rules introduced under the new Audio Visual Media Services (AVMS) directive.

In Ireland, the Broadcasting Authority of Ireland (BAI) has pitched for similar powers, with the government currently considering the BAI's position alongside the appointment of an online safety commissioner.

The BAI believes that it could become an EU-wide regulator for online video, because Google and Facebook's European operations are headquartered in Dublin.

Earlier this year, the government announced plans that would see a future online safety commissioner given the power to issue administrative fines, meaning the commissioner would not have to go through a court.

 

 

Updated: The latest internet censorship nightmare from the EU...

The EU ups the internet ante and the UK will require video websites to be licensed by the state censors of Ofcom


Link Here 12th August 2019
Requirements for Video Sharing Platforms in the Audiovisual Media Services Directive

The Audiovisual Media Services Directive (AVMSD) is the regulatory framework governing EU-wide coordination of national legislation on all audiovisual media. The government launched a consultation on implementing the newly introduced and amended provisions in AVMSD on 30 May, which is available here .

One of the main changes to AVMSD is the extension of scope to cover video-sharing platforms (VSPs) for the first time. This extension in scope will likely capture audiovisual content on social media sites, video-sharing sites, pornography sites and live streaming services. These services are required to take appropriate measures to: protect children from harmful content; protect the general public from illegal content and content that incites violence or hatred; and respect certain obligations around commercial communications.

The original consultation, published on 30 May, outlined the government's intention to implement these requirements through the regulatory framework proposed in the Online Harms White Paper . However, we also indicated the possibility of an interim approach ahead of the regulatory framework coming into force to ensure we meet the transposition deadline of 20 September 2020. We now plan to take forward this interim approach and have written to stakeholders on 23 July to set out our plans and consult on them.

This open letter and consultation sent to stakeholders, therefore, aims to gather views on our interim approach for implementing requirements pertaining to VSPs through appointing Ofcom as the national regulatory authority. In particular, it asks questions regarding:

  • how to transpose the definition of VSPs into UK law, and which platforms are in the UK's jurisdiction;

  • the regulatory framework and the regulator's relationship with industry;

  • the appropriate measures that should be taken by platforms to protect users;

  • the information gathering powers Ofcom should have to oversee VSPs;

  • the appropriate enforcement and sanctions regime for Ofcom;

  • what form the required out of court redress mechanism should take; and

  • how to fund the extension of Ofcom's regulatory activities from industry.

Update: The press get wind of the EU censorship nightmare of the new AVMS directive

12th August 2019. See article from bbc.com

The government is considering giving the UK's media censor Ofcom powers to fine video-sharing apps and websites.

The proposal would see Ofcom able to impose multi-million pound fines if it judges the platforms have failed to prevent youngsters seeing pornography, violence and other harmful material.

Ofcom are already the designated internet censor enforcing the current AVMS censorship rules. These apply to all UK based Video on Demand platforms. The current rules are generally less stringent than Ofcom's rules for TV so have not particularly impacted the likes of the TV catch up services, (apart from Ofcom extracting significant censorship fees for handling minimal complaints about hate speech and product placement).

The notable exception is the regulation of hardcore porn on Video on Demand platforms. Ofcom originally delegated the censorship task to ATVOD but that was a total mess and Ofcom grabbed the censorship role back. It too became a bit of a non-job as ATVOD's unviable age verification rules had effectively driven the UK adult porn trade into either bankruptcy or foreign ownership. In fact this driving of the porn business offshore gave rise to the BBFC age verification regime, which is trying to find ways to censor foreign porn websites.

Anyway the EU has now created an updated AVMS directive that extends the scope of content to be censored, as well as the range of websites and apps caught up in the law. Whereas before it caught TV-like video on demand websites, it now catches nearly all websites featuring significant video content. And of course the list of harms has expanded into the same space as all the other laws clamouring to censor the internet.

In addition, all qualifying video websites will have to register with Ofcom and cough up a significant fee for Ofcom's censorship 'services'.

The EU Directive is required to be implemented in EU members' laws by 20th September 2020. And it seems that the UK wants the censors to be up and running from the 19th September 2020.

Even then, it would only be an interim step until an even more powerful internet censor gets implemented under the UK's Online Harms plans.

The Telegraph reported that the proposal was quietly agreed before Parliament's summer break and would give Ofcom the power to fine tech firms up to 5% of their revenues and/or block them in the UK if they failed to comply with its rulings. Ofcom has said that it is ready to adopt the powers.

A government spokeswoman told the BBC:

We also support plans to go further and legislate for a wider set of protections, including a duty of care for online companies towards their users.

But TechUK - the industry group that represents the sector - said it hoped that ministers would take a balanced and proportionate approach to the issue.  Its deputy chief executive Antony Walker said:

Key to achieving this will be clear and precise definitions across the board, and a proportionate sanctions and compliance regime.

The Internet Association added that it hoped any intervention would be proportionate. Daniel Dyball, the association's executive director, said:

Any new regulation should be targeted at specific harms, and be technically possible to implement in practice - taking into account that resources available vary between companies.

The BBC rather hopefully noted that if the UK leaves the European Union without a deal, we will not be bound to transpose the AVMSD into UK law.

 

 

Cryptic motives...

Group of parliamentarians rant against DNS over HTTPS in a letter to the press


Link Here12th August 2019

Web browser risk to child safety

We are deeply concerned that a new form of encryption being introduced to our web browsers will have terrible consequences for child protection.

The new system -- known as DNS over HTTPS -- would have the effect of undermining the work of the Internet Watch Foundation (IWF); yet Mozilla, provider of the Firefox browser, has decided to introduce it, and others may follow.

The amount of abusive content online is huge and not declining. Last year, the IWF removed more than 105,000 web pages showing the sexual abuse of children. While the UK has an excellent record in eliminating the hosting of such illegal content, there is still a significant demand from UK internet users: the National Crime Agency estimates there are 144,000 internet users on some of the worst dark-web child sexual abuse sites.

To fight this, the IWF provides a URL block list that allows internet service providers to block internet users from accessing known child sexual abuse content until it is taken down by the host country. The deployment of the new encryption system in its proposed form could render this service obsolete, exposing millions of people to the worst imagery of children being sexually abused, and the victims of said abuse to countless sets of eyes.

Advances in protecting users' data must not come at the expense of children. We urge the secretary of state for digital, culture, media and sport to address this issue in the government's upcoming legislation on online harms.

  • Sarah Champion MP;
  • Tom Watson MP;
  • Carolyn Harris MP;
  • Tom Brake MP;
  • Stephen Timms MP;
  • Ian Lucas MP;
  • Tim Loughton MP;
  • Giles Watling MP;
  • Madeleine Moon MP;
  • Vicky Ford MP;
  • Rosie Cooper MP;
  • Baroness Howe;
  • Lord Knight;
  • Baroness Thornton;
  • Baroness Walmsley;
  • Lord Maginnis;
  • Baroness Benjamin;
  • Lord Harris of Haringey

The IWF service is continually rolled out as an argument against DoH, but I am starting to wonder if it is still relevant. Given the universal revulsion against child sex abuse, I'd suspect that little of it would now be located on the open internet. Surely it would be hiding away in hard-to-find places like the dark web, unlikely to be stumbled on by normal people. And of course those using the dark web aren't using ISP DNS servers anyway.

In reality the point of using DoH is to evade government attempts to block legal porn sites. If they weren't intending to block legal sites then surely people would be happy to use the ISP DNS, including the IWF service.
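For the technically curious, the reason DoH sidesteps ISP-level filtering is that the DNS lookup travels inside an ordinary HTTPS request to a third-party resolver, so the ISP's own DNS servers, where the IWF block list is applied, never see it. A sketch of building an RFC 8484 DoH GET URL (construction only, no network access; cloudflare-dns.com is used here just as an example resolver):

```python
import base64
import struct

def doh_url(hostname: str,
            resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Build an RFC 8484 DoH GET URL for an A-record lookup.

    The DNS question is packed in standard wire format, then carried as a
    base64url-encoded 'dns' query parameter of an HTTPS request.
    """
    # 12-byte DNS header: id=0, flags=RD, one question, no answer records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, e.g. \x07example\x03com\x00
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    payload = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={payload}"
```

To the ISP this request is indistinguishable from any other encrypted traffic to the resolver, which is exactly why a DNS-based block list cannot intercept it.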

 

 

Highlight events...

Russia gets heavy with Google for YouTube videos 'promoting' political protests in the country


Link Here12th August 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media

Russia is continuing its pressure on Google to censor political opinion that the government does not like. Media censor Roskomnadzor has sent a letter to Google insisting that it stop promoting banned mass events on YouTube.

It particularly didn't like that YouTube channels were using push notifications and other measures to spread information about protests, such as the recent demonstrations objecting to Moscow banning some opposition politicians from running in upcoming elections. Some users are allegedly receiving these alerts even if they're not subscribed to the channels.

The Russian agency said it would treat continued promotion as interference in the sovereign affairs of the country and consider Google a hostile influence ostensibly bent on obstructing elections.

Political protests have continued to grow in Russia (the most recent had about 50,000 participants), and they've turned increasingly from the Moscow-specific complaints to general dissatisfaction with President Putin's anti-democratic policies.

 

 

Protecting Americans from Online Censorship by appointing an online censor...

Donald Trump is considering appointing the FCC as the US social media censor


Link Here 11th August 2019
Full story: Internet Censorship in USA...Domain name seizures and SOPA

A draft executive order from the White House could put the Federal Communications Commission (FCC) in charge of social media censorship. The FCC has a disgraceful record on the subject of internet freedom. It recently showed total disregard for the rights of internet users when siding with big business over net neutrality.

Donald Trump's draft order, a summary of which was obtained by CNN, calls for the FCC to develop new regulations clarifying how and when the law protects social media websites when they decide to remove or suppress content on their platforms. Although still in its early stages and subject to change, the Trump administration's draft order also calls for the Federal Trade Commission to take those new policies into account when it investigates or files lawsuits against misbehaving companies.

US media giants have clearly been showing political bias when censoring conservative views but appointing the FCC as the internet censor does not bode well.

According to the summary seen by CNN, the draft executive order currently carries the title Protecting Americans from Online Censorship. It claims that the White House has received more than 15,000 anecdotal complaints of social media platforms censoring American political discourse.

The FTC will also be asked to open a public complaint docket, according to the summary, and to work with the FCC to develop a report investigating how tech companies curate their platforms and whether they do so in neutral ways. Companies whose monthly user base accounts for one-eighth of the U.S. population or more could find themselves facing scrutiny, the summary said, including but not limited to Facebook, Google, Instagram, Twitter, Pinterest and Snapchat.

The Trump administration's proposal seeks to significantly narrow the protections afforded to companies under Section 230 of the Communications Decency Act, a part of the Telecommunications Act of 1996. Under the current law, internet companies are not liable for most of the content that their users or other third parties post on their platforms. This law underpins any company wanting to allow users to post their own comments without prior censorship. If protections were removed, all user postings would need to be censored before being published.

 

 

Commented: What bright spark thought that it was a good idea for ISPs to decide what to censor?...

New Zealand ISP Spark says it will unilaterally censor 8chan (should it return to life)


Link Here11th August 2019
Full story: Internet Censorship in New Zealand...New Zealand considers internet blocking
New Zealand ISP Spark says it will block the controversial website 8chan if it resumes service, because it continues to host disturbing material.

8chan is currently down after its web host pulled out in response to 8chan being used by US mass shooters. However, Spark said if 8chan finds another host provider, it would block access. Spark said:

We feel it is the right thing to do given the website's repeated transgressions and continual willingness to distribute disturbing material.

The 8chan internet forum was used by the accused Christchurch mosque gunman to distribute his manifesto and live stream the attack.

However Spark seemed to realise that it would now become a magnet for every easily offended social justice warrior with a pet grievance, and said that the government should step in:

Appropriate agencies of government should put in place a robust policy framework to address the important issues surrounding such material being distributed online and freely available.

Technology commentator Paul Brislen responded:

It's very, very nearly the edge of what's acceptable for your internet provider to be doing in this kind of situation.

I'm as uncomfortable as they [Spark] are about it. They do really need to find a new way to manage hate-speech and extremist content on the internet.

It's much like the Telecom of old to decide which phone calls you can and can't make.

The risk was someone would now turn around and say okay you blocked 8Chan because of hate speech, now I want you to block this other website because it allows people to access something else. It might be hate speech, it might be pornography, it might be something that speaks out against a religious group or ethnicity.

You start down a certain track of Spark or any of the other ISPs being forced to decide what is and isn't acceptable for the NZ public and that's not their job at all. They really shouldn't be doing that.

Update: New Zealand's chief censor David Shanks chips in

11th August 2019. See article from classificationoffice.govt.nz

I applaud the announcement by Spark that they are prepared to block access to 8chan if and when it re-emerges on the internet.

This move is both brave and meaningful. Brave, because a decision not to provide users with access to a site is quite a different thing from a decision not to provide a site with the server capacity and services it needs (which is the choice that Cloudflare recently made). Meaningful, because everything I have seen tells me that 8chan is the white supremacist killer's platform of choice, with at least three major attacks announced on it within a few months. There is nothing indicating that upon re-emergence 8chan will be a changed, safer platform. Indeed, it may be even more toxic.

We appreciate that our domestic ISPs have obligations to provide their customers with access to the internet according to their individual terms and conditions. Within those constraints, as the experience following the March 15 attacks shows, our ISPs can act and do the right thing to block platforms that are linked to terrorist atrocities and pose a direct risk of harm to New Zealanders.

I know that ISPs don't take these decisions lightly, and that they do not want to be in the business of making judgments around the content of sites. But these are extraordinary circumstances, and platforms that promote terrorist atrocities should not be tolerated on the internet, or anywhere else. Spark is making the right call here.

This is a unique set of circumstances, and relying on ISPs to make these calls is not a solution for the mid or long term. I agree with calls for a transparent, robust and sensible regulatory response. Discussions have already started on what this might look like here in NZ. Ultimately this is a global, internet problem. That makes it complex of course, but I believe that online extremism can be beaten if governments, industry and the public work together

 

 

FBI, Donald Trump, big tech, marketeers...they're all at it...

Everyone's out to scrape all your social media postings and compile a searchable database of your life


Link Here 10th August 2019
A few days ago Donald Trump responded to more mass shooters by calling on social networks to build tools for identifying potential mass murderers before they act. And across the government, there appears to be growing consensus that social networks should become partners in surveillance with the government.

So quite a timely moment for the Wall Street Journal to publish an article about FBI plans for mass snooping on social media:

The FBI is soliciting proposals from outside vendors for a contract to pull vast quantities of public data from Facebook, Twitter and other social media to proactively identify and reactively monitor threats to the United States and its interests.

The request was posted last month, weeks before a series of mass murders shook the country and led President Trump to call for social-media platforms to do more to detect potential shooters before they act.

The deadline for bids is Aug. 27.

As described in the solicitation, it appears that the service would violate Facebook's ban against the use of its data for surveillance purposes, according to the company's user agreements and people familiar with how it seeks to enforce them.

The Verge comments on a privacy paradox:

But so far, as the Journal story illustrates, the government's approach has been incoherent. On one hand, it fines Facebook $5 billion for violating users' privacy; on the other, it outlines a plan to potentially store all Americans' public posts in a database for monitoring purposes.

But of course it is not a paradox, many if not most people believe that they're entitled to privacy whilst all the 'bad' people in the world aren't.

Commercial interests are also very keen on profiling people from their social media postings. There's probably a long list of advertisers who would love a list of rich people who go to casinos and stay at expensive hotels.

Well, as Business Insider has noted, one company, Hyp3r, has been scraping all public postings on Instagram to provide exactly that information:

A combination of configuration errors and lax oversight by Instagram allowed one of the social network's vetted advertising partners to misappropriate vast amounts of public user data and create detailed records of users' physical whereabouts, personal bios, and photos that were intended to vanish after 24 hours.

The profiles, which were scraped and stitched together by the San Francisco-based marketing firm Hyp3r, were a clear violation of Instagram's rules. But it all occurred under Instagram's nose for the past year by a firm that Instagram had blessed as one of its preferred Facebook Marketing Partners.

Hyp3r is a marketing company that tracks social-media posts tagged with real-world locations. It then lets its customers directly interact with those posts via its tools and uses that data to target the social-media users with relevant advertisements. Someone who visits a hotel and posts a selfie there might later be targeted with pitches from one of the hotel's competitors, for example.

The total volume of Instagram data Hyp3r has obtained is not clear, though the firm has publicly said it has a unique dataset of hundreds of millions of the highest value consumers in the world, and sources said more than 90% of its data came from Instagram. It ingests in excess of 1 million Instagram posts a month, sources said.

See full article from businessinsider.com

 

 

Getting it right...

Politico reports that the White House is working on an order to counter the political bias demonstrated by social media companies


Link Here 8th August 2019
The White House is circulating drafts of a proposed executive order that would address the anti-conservative bias of social media companies. This appears to be the follow up to President Donald Trump pledging to explore all regulatory and legislative solutions on the issue.

The contents of the order remain undisclosed but it seems that many different ideas are still in the mix. A White House official is reported to have said:

If the internet is going to be presented as this egalitarian platform and most of Twitter is liberal cesspools of venom, then at least the president wants some fairness in the system. But look, we also think that social media plays a vital role. They have a vital role and an increasing responsibility to the culture that has helped make them so profitable and so prominent.

The social media companies have denied the allegations of bias, but nevertheless the large majority of users censored by the companies are indeed on the right.

 

 

Poleaxed...

Instagram shadowbans hash tags associated with pole dancers


Link Here 8th August 2019
Instagram is hiding content hosted by the pole dancing community's most commonly used hashtags.

Pole dancers, performers and entrepreneurs say that the censorship is threatening their livelihood. Sweden-based instructor and performer Anna-Maija Nyman told Yahoo Lifestyle:

The censorship is affecting our whole community because it makes it harder to share and connect, I felt that our community is in danger and under attack.

The controversy for pole dancers began on July 19, when hashtags such as #poledancing, #poledancer and #polesportorg were noticeably wiped off all content previously aggregated by pole dancers around the world.

To alert fellow dancers, California-based pole star, Elizabeth Blanchard, wrote in a post that day that the banning of 19 hashtags appeared to be an effort to shadowban the community. She wrote:

There seems to have been a massive 'cleanse' on instagram and pole dancers have been deemed dirty and inappropriate...or as Instagram puts it we don't 'meet Instagram's community guidelines. There has been lots of talk about shadowbans lately but this purge of hashtags is hard to mistake as being targeted towards pole dancers.

Shadowbanning is a method used by social networks to quietly silence an account by curtailing how it gets engagement without blocking the ability to post new content. Shadowbanned users are not told that they have been affected, they can continue to post messages, add new followers and comment on or reply to other posts. But their [content] doesn't appear in the feeds, their replies may be suppressed and they may not show up in searches.
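The mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration only, assuming a simple server-side feed filter; the names (Post, submit_post, build_feed, SHADOWBANNED) are invented for the example and do not reflect any real platform's implementation.

```python
# Hypothetical sketch of server-side shadowbanning: posting always
# succeeds, but shadowbanned content is silently hidden from everyone
# except its own author. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

SHADOWBANNED = {"pole_account"}  # accounts flagged by moderation

def submit_post(posts: list[Post], author: str, text: str) -> None:
    # Posting always succeeds -- the shadowbanned author sees no error.
    posts.append(Post(author, text))

def build_feed(posts: list[Post], viewer: str) -> list[Post]:
    # Shadowbanned posts are dropped for all viewers except the
    # author, who still sees their own content as if nothing changed.
    return [p for p in posts
            if p.author not in SHADOWBANNED or p.author == viewer]

posts: list[Post] = []
submit_post(posts, "pole_account", "#poledancing class tonight")
submit_post(posts, "other_user", "hello world")

print(len(build_feed(posts, "pole_account")))  # author sees both posts
print(len(build_feed(posts, "other_user")))    # others see only one
```

The key point the sketch captures is the asymmetry: nothing in the posting path signals the ban, so the affected user has no direct way to detect it.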

Australia-based instructor, performer and business owner Michelle Shimmy points out that the current restrictions facing pole dancers on the social media platform are part of a much larger issue having to do with Instagram's policy changes to manage 'problematic' content, which she suggests are inherently sexist.

Apparently Instagram has apologised for its censorship but nobody is expecting a change in the policy.

 

 

Desexualising lesbians...

Google changes its search algorithm such that word 'lesbian' links to wiki stuff rather than porn


Link Here 7th August 2019

Google has changed its algorithm for the search term 'lesbian' to show informative results instead of pornographic content.

Previously, the first results shown when googling the word were porn videos.

The algorithm has been changed seemingly as a result of a campaign led by the Twitter account @SEO_lesbienne and French news site Numerama. They noted that only the word lesbian linked to sexualised pages, whereas searching for gay or trans displayed Wikipedia pages, articles and specialised blogs.

Now if you want to find some lesbian porn, you have to type 'lesbian porn'.

 

 

Updated: Shooting the messenger...

Cloudflare withdraws support from 8chan, the forum website used by some of the recent US shooters


Link Here 6th August 2019
8chan is a forum website that has become a home for the far right and those otherwise discontented by modern society for various reasons. There's nothing special about it that cannot be easily replicated elsewhere.

And as Buzzfeed notes:

Pull the plug, it will appear somewhere else, in whatever locale will host it. Because there's nothing particularly special about 8chan, there are no content algorithms, hosting technology immaterial. The only thing radicalizing 8chan users are other 8chan users.

However, in the past six months it has been used to distribute racist and white nationalist manifestos prior to mass shootings.

It has now been refused service by Cloudflare, which offers security services, most notably defending against denial of service attacks. Cloudflare announced in a blogpost that the company would be terminating 8chan as a client.

This represents a reversal of Cloudflare's position from less than 24 hours earlier, when the co-founder and chief executive, Matthew Prince, defended his company's relationship with 8chan as a moral obligation in an extensive interview with the Guardian. Prince explained the change:

The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths. Even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have created an environment that revels in violating its spirit.

While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online. It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we've solved our own problem, but we haven't solved the Internet's.

You'd have thought the authorities would be advised to keep an eye on public forums so as to be aware of any grievances that are widely shared. Maybe to try and resolve them, and maybe just to be aware of what people are thinking. For example, if David Cameron had been better aware of what many people thought about immigration, he might have realised that holding the EU referendum was a disastrously stupid idea.

Update: Censored by the website hosting company

6th August 2019. See article from theverge.com

Internet forum 8chan has gone dark after web services company Voxility banned the site -- and also banned 8chan's new host Epik, which had been leasing web space from it. Epik began working with 8chan today after web services giant Cloudflare cut off service, following the latest of at least three mass shootings linked to 8chan. But Stanford researcher Alex Stamos noted that Epik seemed to lease servers from Voxility, and when Voxility discovered the content, it cut ties with Epik almost immediately.

 

 

Facebook as the Origin of the Censorship World...

Facebook settles after denying French teacher his free speech to post classic art


Link Here 3rd August 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has agreed to settle a years-long legal battle with a French teacher who sued after the social media giant shuttered his account when he posted a renowned 19th-century painting that features a woman's genitals.

The dispute dates to 2011, when the teacher, Frederic Durand, ran foul of Facebook's censorship of nude images after posting a link that included a thumbnail image of L'Origine du Monde (The Origin of the World), an 1866 painting by the realist painter Gustave Courbet.

Durand argued that Facebook was infringing on his freedom of expression. He sought 20,000 euros in damages and initially won his case in a Paris court, but a higher court overturned the ruling in March 2018.

Durand had been preparing an appeal, but in a statement to AFP, his lawyer Stephane Cottineau said a deal had been reached for Facebook to make an unspecified donation to a French street art association called Le MUR (The WALL).

 

 

Offsite Article: IPVanish pitches to keep porn users safe from age verification...


Link Here 2nd August 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
How would the UK adult content block harm digital rights?

See article from ipvanish.com

 

 

Messing with people's shit via their backdoors...

DOJ and FBI Show No Signs of Correcting Past Untruths in Their New Attacks on Encryption


Link Here 1st August 2019

Last week, US Attorney General William Barr and FBI Director Christopher Wray chose to spend some of their time giving speeches demonizing encryption and calling for the creation of backdoors to allow the government access to encrypted data. You should not spend any of your time listening to them.

Don't be mistaken; the threat to encryption remains high. Australia and the United Kingdom already have laws in place that can enable those governments to undermine encryption, while other countries may follow. And it's definitely dangerous when senior U.S. law enforcement officials talk about encryption the way Barr and Wray did.

The reason to ignore these speeches is that DOJ and FBI have not proven themselves credible on this issue. Instead, they have a long track record of exaggeration and even false statements in support of their position. That should be a bar to convincing anyone--especially Congress--that government backdoors are a good idea.

Barr expressed confidence in the tech sector's ingenuity to design a backdoor for law enforcement that will stand up to any unauthorized access, paying no mind to the broad technical and academic consensus in the field that this risk is unavoidable. As the prominent cryptographer and Johns Hopkins University computer science professor Matt Green pointed out on Twitter, the Attorney General made sweeping, impossible-to-support claims that digital security would be largely unaffected by introducing new backdoors. Although Barr paid the barest lip service to the benefits of encryption--two sentences in a 4,000-word speech--he ignored numerous ways encryption protects us all, including preserving not just digital but physical security for the most vulnerable users.

For all of Barr and Wray's insistence that encryption poses a challenge to law enforcement, you might expect that that would be the one area where they'd have hard facts and statistics to back up their claims, but you'd be wrong. Both officials asserted it's a massive problem, but they largely relied on impossible-to-fact-check stories and counterfactuals. If the problem is truly as big as they say, why can't they provide more evidence? One answer is that prior attempts at proof just haven't held up.

Some prime examples of the government's false claims about encryption arose out of the 2016 legal confrontation between Apple and the FBI following the San Bernardino attack. Then-FBI Director James Comey and others portrayed the encryption on Apple devices as an unbreakable lock that stood in the way of public safety and national security. In court and in Congress, these officials said they had no means of accessing an encrypted iPhone short of compelling Apple to reengineer its operating system to bypass key security features. But a later special inquiry by the DOJ Office of the Inspector General revealed that technical divisions within the FBI were already working with an outside vendor to unlock the phone even as the government pursued its legal battle with Apple. In other words, Comey's statements to Congress and the press about the case--as well as sworn court declarations by other FBI officials--were untrue at the time they were made.

Wray, Comey's successor as FBI Director, has also engaged in considerable overstatement about law enforcement's troubles with encryption. In congressional testimony and public speeches, Wray repeatedly pointed to almost 8,000 encrypted phones that he said were inaccessible to the FBI in 2017 alone. Last year, the Washington Post reported that this number was inflated due to a programming error. EFF filed a Freedom of Information Act request, seeking to understand the true nature of the hindrance encryption posed in these cases, but the government refused to produce any records.

But in their speeches last week, neither Barr nor Wray acknowledged the government's failure of candor during the Apple case or its aftermath. They didn't mention the case at all. Instead, they ask us to turn the page and trust anew. You should refuse. Let's hope Congress does too.

