|
YouTube has a long censorship list, including the political right, the politically incorrect, and anyone who may offend touchy corporate advertisers. So more or less anybody could fall foul at any time.
|
|
|
|
31st July 2019
|
|
| See article from telecoms.com
|
You'd think YouTube would be keen on supporting creators who generate content and income for the company. But Google is obviously a bit too rich to care much, and so content creators have to live with the knowledge that their livelihoods could easily be
wiped out by even the most trivial of political or PC transgressions. YouTube arbitrarily bans and demonetises those on a long list of no-nos, including being on the political right, offending the easily offended, being politically incorrect, or of
course saying something corporate advertisers don't like. Needless to say there is a long list of aggrieved creators that have an axe to grind with YouTube, and plenty more who are walking on eggshells trying to make sure that they are not the
next victims. And now they're fighting back. An obscure 'YouTubers union' has joined forces with IG Metall -- Germany and Europe's largest industrial union -- to form the campaigning group FairTube. FairTube has called for the following from
YouTube and given it until 23 August to engage with it, or else.
- Publish all categories and decision criteria that affect monetization and views of videos
- Give clear explanations for individual decisions -- for example, if a video is demonetized, which parts of the video violated which criteria in the
Advertiser-Friendly Content Guidelines?
- Give YouTubers a human contact person who is qualified and authorized to explain decisions that have negative consequences for YouTubers (and fix them if they are mistaken)
- Let YouTubers contest
decisions that have negative consequences
- Create an independent mediation board for resolving disputes (here the Ombuds Office of the Crowdsourcing Code of Conduct can offer relevant lessons)
- Formal participation of YouTubers in
important decisions, for example through a YouTuber Advisory Board
At first glance one may wonder if the union has any way to generate a little leverage over YouTube, but they have been thinking up a few ideas:
- Contesting the status of YouTube creators as self-employed, thus creating a greater duty of care on YouTube towards its creators.
- Claiming GDPR violations due to YouTube's refusal to give creators the data it stores about them and which it
does share with advertisers.
- Old fashioned collective action -- not so much striking as spreading the word and joining the union to put collective pressure on YouTube and its owner Google.
Let's hope they are on the right track. |
|
Court Judgement allows the government to continue spying on us
|
|
|
|
31st July 2019
|
|
| Thanks to Jon See
article from libertyhumanrights.org.uk See also
article from theregister.co.uk See
full judgement [pdf] from judiciary.uk |
Liberty writes: In response to today's judgment in the People vs the Snoopers' Charter case, Megan Goulding, Liberty lawyer, said: This disappointing judgment allows the government to continue to spy on every
one of us, violating our rights to privacy and free expression. We will challenge this judgment in the courts, and keep fighting for a targeted surveillance regime that respects our rights. These bulk surveillance powers allow the
state to hoover up the messages, calls and web history of hordes of ordinary people who are not suspected of any wrong-doing. The Court recognised the seriousness of MI5's unlawful handling of our data, which only emerged as a
result of this litigation. The security services have shown that they cannot be trusted to keep our data safe and respect our rights. |
|
Porn sites are tracking and snooping on users, and for some, their browsing may be classified as contrary to their public life.
|
|
|
| 31st
July 2019
|
|
| 19th July 2019. See study [pdf] from arxiv.org |
Elena Maris of Microsoft Research, Timothy Libert of Carnegie Mellon University, and Jennifer Henrichsen of the University of Pennsylvania have penned a study examining tracking technologies from the likes of Google and Facebook that are incorporated into the
world's porn websites. They write: This paper explores tracking and privacy risks on pornography websites. Our analysis of 22,484 pornography websites indicated that 93% leak user data to a third party. Tracking on
these sites is highly concentrated by a handful of major companies, which we identify [Google and Facebook]. Our content analysis of the sample's domains indicated 44.97% of them expose or suggest a specific gender/sexual identity
or interest likely to be linked to the user. We identify three core implications of the quantitative results:
1) the unique/elevated risks of porn data leakage versus other types of data, 2) the particular risks/impact for vulnerable populations, and 3) the complications of
providing consent for porn site users and the need for affirmative consent in these online sexual interactions
The authors describe the problem: One evening, Jack decides to view porn on his laptop. He enables incognito mode in his browser, assuming his actions are now private. He pulls up a site and scrolls past a
small link to a privacy policy. Assuming a site with a privacy policy will protect his personal information, Jack clicks on a video. What Jack does not know is that incognito mode only ensures his browsing history is not stored on his computer. The sites
he visits, as well as any third-party trackers, may observe and record his online actions. These third-parties may even infer Jack's sexual interests from the URLs of the sites he accesses. They might also use what they have decided about these interests
for marketing or building a consumer profile. They may even sell the data. Jack has no idea these third-party data transfers are occurring as he browses videos.
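To make the mechanics concrete, here is a minimal illustrative sketch (in TypeScript) of how an embedded third-party script can observe and report the URL of the page that loaded it. The collector endpoint and parameter names are invented for the example and are not taken from the study.

```typescript
// Illustrative sketch only: roughly how an embedded third-party script can
// observe and report the page URL of the site that included it. The endpoint
// and parameter names are hypothetical, not taken from the study.

function reportPageView(collectorEndpoint: string): void {
  // The embedding page's full URL is visible to any script it loads,
  // and URLs on porn sites often encode a genre or interest.
  const pageUrl: string = window.location.href;

  // A 1x1 "tracking pixel" request leaks the URL to the third party; the
  // browser also attaches any cookies previously set for that tracker's
  // domain, letting this visit be tied to an existing profile.
  const beacon = new Image();
  beacon.src =
    collectorEndpoint +
    "?page=" + encodeURIComponent(pageUrl) +
    "&ref=" + encodeURIComponent(document.referrer);
}

// Hypothetical collector domain, for illustration only.
reportPageView("https://tracker.example.com/pixel.gif");
```

The point of the sketch is simply that nothing in incognito mode or a privacy policy stops this request from being made, which is why the study found 93% of sites leaking data to third parties.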
The authors are a bit PC and seem obsessed with trying to relate cookie consent with sexual consent, but finally conclude: Through our results and connections to past porn site privacy and security breaches and controversies, we demonstrate that the singularity of porn data and the
characteristics of typical porn websites' lax security measures mean this leakiness poses a unique and elevated threat. We have argued everyone is at risk when such data is accessible without users' consent, and thus can potentially be leveraged against
them by malicious agents acting on moralistic claims of normative gender or sexuality. These risks are heightened for vulnerable populations whose porn usage might be classified as non-normative or contrary to their public life.
The
authors seem to think the porn sites are somehow ethical and should be doing the 'right' thing. But in reality they are just trying to make money like everyone else and, as they say, if the product is free then your data is the payment. But as the report points out, that price may prove a little higher than expected. Update: An unconvincing denial from Google 20th July 2019. See
article from avn.com
AVN notes that Google responded to the claims in a rather obtuse way. Google on Thursday attempted to deny the study's findings, as quoted by The Daily Mail newspaper. We don't allow Google Ads on websites with adult
content and we prohibit personalized advertising and advertising profiles based on a user's sexual interests or related activities online, the company said. Additionally, tags for our ad services are never allowed to transmit personally identifiable
information.
The study, however, did not allege that Google had placed actual advertisements from its GoogleAds network on porn sites, and in its elliptical statement, Google did not specifically deny that its tracking code is
embedded on thousands of adult sites. In related news Google has also announced changes
to incognito mode on its Chrome browser to make it just a little more incognito. Chrome's Incognito Mode is based on the principle that you should have the choice to browse the web privately. At the end of July,
Chrome will remedy a loophole that has allowed sites to detect people who are browsing in Incognito Mode. People choose to browse the web privately for many reasons. Some wish to protect their privacy on shared or borrowed
devices, or to exclude certain activities from their browsing histories. In situations such as political oppression or domestic abuse, people may have important safety reasons for concealing their web activity and their use of private browsing features.
We want you to be able to access the web privately, with the assurance that your choice to do so is private as well.
Google also noted a useful bit of info on evading article count restrictions imposed by some
publishers with metered access policies: Today, some sites use an unintended loophole to detect when people are browsing in Incognito Mode. Chrome's FileSystem API is disabled in Incognito Mode to avoid leaving traces
of activity on someone's device. Sites can check for the availability of the FileSystem API and, if they receive an error message, determine that a private session is occurring and give the user a different [more restricted] experience.
With the release of Chrome 76 scheduled for July 30, the behavior of the FileSystem API will be modified to remedy this method of Incognito Mode detection. The change will affect sites that use the FileSystem
API to intercept Incognito Mode sessions and require people to log in or switch to normal browsing mode, on the assumption that these individuals are attempting to circumvent metered paywalls. Unlike hard paywalls or registration
walls, which require people to log in to view any content, meters offer a number of free articles before you must log in. This model is inherently porous, as it relies on a site's ability to track the number of free articles someone has viewed, typically
using cookies. Private browsing modes are one of several tactics people use to manage their cookies and thereby reset the meter count.
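For the curious, the detection trick works roughly as sketched below. This is illustrative of the pre-Chrome 76 behaviour rather than any particular site's code, and relies on the non-standard, webkit-prefixed FileSystem API.

```typescript
// Rough sketch of the detection trick described above, as reportedly used by
// some metered-paywall sites against Chrome versions before 76. The
// webkit-prefixed FileSystem API is non-standard, hence the "any" cast.

function detectIncognito(onResult: (isPrivate: boolean) => void): void {
  const requestFs = (window as any).webkitRequestFileSystem;
  if (typeof requestFs !== "function") {
    // API absent entirely (e.g. a non-Chromium browser): the trick tells us nothing.
    onResult(false);
    return;
  }
  // In pre-76 incognito sessions the temporary filesystem request fails with
  // an error, which is exactly the signal sites used to serve a restricted page.
  requestFs(
    0,                      // TEMPORARY storage
    1,                      // request a single byte
    () => onResult(false),  // success: normal browsing session
    () => onResult(true)    // error: likely incognito (before Chrome 76)
  );
}

detectIncognito((isPrivate) => {
  console.log(isPrivate ? "Likely incognito (pre-Chrome 76)" : "Normal session");
});
```

Chrome 76 makes the FileSystem API behave the same in both modes, so this particular signal disappears.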
Of course it is probably a bit easier to find an addon that lets you block or delete the cookies
for specific websites or else to try just turning javascript off. Update: More incognito 31st July 2019. See article from
venturebeat.com
And as promised, Google Chrome has been updated to make incognito mode a little more incognito. Chrome 76, which was released today, has put a stop to the common ways in which websites can work out that users are surfing the web incognito and then
ban them from accessing content.
|
|
YouTube boss says that mainstream news companies will be given precedence over independent creators that are too often politically incorrect, wrong think, or right wing
|
|
|
|
30th July 2019
|
|
| See article from reclaimthenet.org |
A YouTube chief has proposed giving precedence to mainstream media over indie creators. The company's chief product officer Neal Mohan claims that the platform has grown so much that it now needs new rules to regulate bad actors. Amid the recent observations of YouTube's biased censorship, the company announced it will crack down further on what it calls racist content and disinformation. Mohan said: YouTube has now grown to a big city. More bad actors have
come into place. And just like in any big city, you need a new set of rules and laws and kind of regulatory regime. We want to make sure that YouTube remains an open platform because that's where a lot of the magic comes from,
even though there may be some opinions and voices on the platform that I don't agree with, that you don't agree with.
reclaimthenet.org
commented: Mohan suggested that positive discrimination could be applied to authoritative sources like traditional media outlets such as AFP or CNN or BBC or the AP or whoever, raising an issue already mentioned by
the independent channels that made YouTube what it is today: their content is often obscured by search results and their subscribers miss the new content, while corporate media (that ironically is often a competitor to YouTube) is already being heavily
promoted by YouTube.
|
|
|
|
|
| 29th July 2019
|
|
|
NewsGuard, self styled fake news hunters, are trying to monetise their alerts, perhaps to try and get ISPs to pay on your behalf See
article from ispreview.co.uk |
|
|
|
|
|
29th July 2019
|
|
|
If Facebook can 'filter' or 'backup' your 'encrypted' communications then this proves that encryption is compromised, as does continued operation in any country that demands backdoors See
article from forbes.com |
|
Nominet outlines the UK stance on maintaining state censorship via DNS over HTTPS and Google will comply
|
|
|
| 27th July 2019
|
|
| See article from nominet.uk See
article from groups.google.com | Russell Haworth, CEO of Nominet, Britain's domain name authority has outlined the UK's stance on maintaining UK censorship and surveillance capabilities as the introduction of encrypted DNS over HTTPS (DoH) will make their job a bit more difficult.
The authorities' basic idea is that UK ISPs will provide their own servers for DNS over HTTPS so that they can still use this DNS traffic to block websites and keep a log of everyone's internet use. Browser companies will then be expected to enforce
using the government's preferred DoH servers. And Google duly announced that it will comply with this censorship request. Google Chrome will only allow DoH servers that are government or corporate approved. Note that this decision is more
nuanced than just banning internet users from sidestepping state censors. It also applies to users being prevented from sidestepping corporate controls on company networks, perhaps a necessary commercial consideration that simply can't be ignored.
Russell Haworth, CEO of Nominet explains:
Firefox and Google Chrome -- the two biggest web browsers with a combined market share of over 70% -- are both looking to implement DoH in the coming months, alongside other operators. The big question now is how they implement it, who they offer to be
the resolvers, and what policies they use. The benefit offered by DoH is encryption, which prevents eavesdropping or interception of DNS communication. However, DoH raises a number of issues which deserve careful consideration as we move towards it.
Some of the internet safety and security measures that have been built over the years involve the DNS. Parental controls, for example, generally rely on the ISP blocking particular domains for their customers. The Internet Watch
Foundation (IWF) also ask ISPs to block certain domains because they are hosting child sexual abuse material. There may also be issues for law enforcement using DNS data to track criminals. In terms of cyber security, many organisations currently use the
DNS to secure their networks, by blocking domains known to contain malware. All of these measures could be impacted by the introduction of DoH. Sitting above all of these is one question: Will users know any of this is happening?
It is important that people understand how and where their data is being used. It is crucial that DoH is not simply turned on by default and DNS traffic disappears off to a server somewhere without people understanding and signing up to the privacy
implications. This is the reason why we have produced a simple explainer and will be doing more to communicate about DoH in the coming weeks.
DoH can bring positive changes, but only if it is accompanied by understanding, informed consent, and attention to some key principles, as detailed below: Informed user choice:
users will need to be educated on the way in which their data use is changing so they can give their informed consent to this new approach. We also need some clarity on who would see the data, who can access the data and under what
circumstances, how it is being protected and how long it will be available for. Equal or better safety: DoH disrupts and potentially breaks safety measures that have been built
over many years. It must therefore be the responsibility of the browsers and DoH resolvers who implement DoH to take up these responsibilities. It will also be important for current protections to be maintained.
Local jurisdiction and governance:
Local DoH resolvers will be needed in individual countries to allow for application of local law, regulators and safety bodies (like the IWF). This is also important to encourage innovation globally, rather than
having just a handful of operators running a pivotal service. Indeed, the internet was designed to be highly distributed to improve its resilience.
Security: Many
organisations use the DNS for security by keeping suspicious domains that could include malware out of networks. It will be important for DoH to allow enterprises to continue to use these methods -- at Nominet we are embracing this in a scalable and
secure way for the benefit of customers through our cyber security offering.
Change is a constant in our digital age, and I for one would not stand in the way of innovation and development. This new approach to
resolving requests could be a real improvement for our digital world, but it must be implemented carefully and with the full involvement of Government and law enforcement, as well as the wider internet governance community and the third sector.
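For readers unfamiliar with DoH, the mechanics are straightforward: the usual DNS question is carried inside an ordinary HTTPS request to a resolver, so anyone watching the network sees only encrypted traffic to that resolver. A minimal sketch follows, assuming a public resolver that supports the JSON flavour of DoH (Cloudflare and Google both offer one); production clients normally use the binary wireformat from RFC 8484 instead.

```typescript
// Minimal sketch of a DNS-over-HTTPS lookup using the JSON API flavour.
// Assumes a resolver that supports application/dns-json, e.g. cloudflare-dns.com.

interface DnsJsonAnswer {
  name: string;
  type: number;
  data: string; // e.g. an IPv4 address for type 1 (A record)
}

async function resolveOverHttps(hostname: string): Promise<string[]> {
  const url =
    "https://cloudflare-dns.com/dns-query?name=" +
    encodeURIComponent(hostname) +
    "&type=A";

  // The DNS question travels inside TLS, so an ISP only sees a connection
  // to the resolver, not which hostname is being looked up.
  const response = await fetch(url, {
    headers: { accept: "application/dns-json" },
  });
  const body: { Answer?: DnsJsonAnswer[] } = await response.json();
  return (body.Answer ?? []).map((a) => a.data);
}

resolveOverHttps("example.com").then((addresses) => console.log(addresses));
```

This is exactly why ISP-level blocking and logging breaks: the filtering point never gets to see the question unless it also operates the resolver.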
A Google developer has outlined tentative short term plans for DoH in Chrome. It suggests that Chrome will only allow the selection of DoH servers that are equivalent to approved non-encrypted servers.
This is a complex space and our short term plans won't necessarily solve or mitigate all these issues but are nevertheless steps in the right direction. For the first milestone, we are considering an auto-upgrade approach. At a
high level, here is how this would work:
- Chrome will have a small (i.e. non-exhaustive) table to map non-DoH DNS servers to their equivalent DoH DNS servers. Note: this table is not finalized yet.
- Per this table, if the system's recursive resolver is known to support DoH, Chrome will upgrade to the DoH version of that resolver. On some platforms, this may mean that where Chrome previously used the OS DNS resolution APIs, it now uses its own DNS implementation in order to implement DoH.
- A group policy will be available so that Administrators can disable the feature as needed.
- Ability to opt-out of the experiment via chrome://flags.
In other words, this would upgrade the protocol used for DNS resolution while keeping the user's DNS provider unchanged. It's also important to note that DNS over HTTPS does not preclude its operator from offering features such as
family-safe filtering.
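The auto-upgrade idea above boils down to a lookup keyed on the resolver the system is already using. A conceptual sketch follows; the table entries are illustrative examples only, not Chrome's actual (and at the time unfinalised) list.

```typescript
// Conceptual sketch of the auto-upgrade mapping described above: if the
// system's configured resolver is recognised, use that operator's DoH
// endpoint; otherwise keep using classic DNS. Entries are examples only.

const dohEquivalents: Record<string, string> = {
  "8.8.8.8": "https://dns.google/dns-query",          // Google Public DNS
  "1.1.1.1": "https://cloudflare-dns.com/dns-query",  // Cloudflare
};

function selectDnsTransport(systemResolverIp: string): string {
  const dohUrl = dohEquivalents[systemResolverIp];
  if (dohUrl) {
    // Same operator, same filtering policy; only the transport is upgraded.
    return "DoH via " + dohUrl;
  }
  // Unknown resolver: stay with the OS resolver over classic port-53 DNS.
  return "classic DNS via " + systemResolverIp;
}

console.log(selectDnsTransport("1.1.1.1"));     // DoH via https://cloudflare-dns.com/dns-query
console.log(selectDnsTransport("192.168.0.1")); // classic DNS via 192.168.0.1
```

The design choice is deliberate: because the DoH provider is always the same operator as the existing provider, any family filtering or ISP blocking that resolver already applies is preserved.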
|
|
Pakistan's internet censor calls on the government to ban foreign social media and replace it with locally based versions
|
|
|
| 27th July 2019
|
|
| See
article from sify.com |
The chairman of the Pakistan Telecommunications Authority (PTA), Amir Azeem Bajwa, has called on the government to block social media websites in the country in consideration of the circulation of blasphemous content through these media. Briefing a
Senate Standing Committee, Bajwa asked the government to formulate a policy to block social media networks which are being operated outside the country, and in its stead develop indigenous social networking websites, just as in the UAE and China. Bajwa said that the PTA has blocked more than 39,000 URLs since 2010, and blocked as many as 8,000 websites related to pornography. In addition, the PTA has received over 8,000 complaints regarding blasphemous content on the internet.
|
|
|
|
|
| 27th July 2019
|
|
|
Silicon Valley Is Helping The US Government Circumvent The First Amendment See
article from forbes.com |
|
|
|
|
| 24th July 2019
|
|
|
A technical article explaining how Microsoft's internet browser Edge sends all your page URLs to Microsoft in the name of blocking phishing websites See
article from bleepingcomputer.com |
|
|
|
|
|
24th July 2019
|
|
|
'The status quo is exceptionally dangerous, it is unacceptable and only getting worse. It's time for the United States to stop debating whether to address it and start talking about how to address it' See
article from apnews.com |
|
They deserve and need investigating...but Arron Banks' wallet says otherwise
|
|
|
| 21st July 2019
|
|
| See article from theguardian.com
|
The businessman Arron Banks and the unofficial Brexit campaign Leave.EU have issued a legal threat against streaming giant Netflix in relation to The Great Hack, a new documentary about the Cambridge Analytica scandal and the abuse of personal
data. The threat comes as press freedom campaigners and charity groups warn the government in an open letter that UK courts are being used to intimidate and silence journalists working in the public interest. In a
joint letter to key cabinet members, they call for new legislation to stop vexatious
lawsuits, highlighting one filed last week by Banks against campaigning journalist Carole Cadwalladr. The letter says: Following the recent global conference on media freedom held in London by the UK government,
we write to draw your attention to what appears to be a growing trend to use strategic litigation against public participation (SLAPP) lawsuits as a means of intimidating and silencing journalists working in the public interest.
Such legal threats are designed to inhibit ongoing investigations, and prevent legitimate public interest reporting. Abuse of defamation law, including through SLAPP lawsuits, has become a serious threat to press freedom and
advocacy rights in a number of countries, including the UK. Fears have been expressed in the UK and abroad, and by the European parliament that this legal tactic was being deployed against the murdered Maltese
journalist Daphne Caruana Galizia, who at the time of her death in October 2017 was subject to 42 civil libel suits against her, many of which were brought through UK-based law firms, acting for foreign banks and wealthy individuals. Twenty-seven of
these vexatious lawsuits remain open more than 21 months after her assassination. A range of other Maltese media have faced threats of similar suits, including investigative outlet the Shift News. Numerous legal and
online threats have been made against Carole Cadwalladr, whose journalism for the Observer and a range of other publications has stimulated a global debate about the power of online platforms to influence the behaviour of citizens, and raised important
questions about the regulation of digital technology. The legal claim against Ms Cadwalladr, issued on 12 July by lawyers acting for Arron Banks, is another example of a wealthy individual appearing to abuse the law in
an attempt to silence a journalist and distract from these issues being discussed by politicians, the media and the public at a critical time in the life of our democracy. The increasing deployment of what appear to be
SLAPP lawsuits in the UK poses a threat to media freedom and public interest advocacy, and demands a robust response. We believe that new legislation should be considered to prevent the abuse of defamation law to silence public interest investigative
reporting. We also urge you to take a clear public stance condemning such practices and supporting investigative journalism and independent media. We urge you to address this issue as a matter of priority. Action has
been discussed within the institutions of the European Union, but it is important that the government makes clear that the UK remains a country that welcomes and celebrates the role and value of independent public interest reporting.
Paul Webster, editor, the Observer, Rebecca Vincent, UK bureau director, Reporters Without Borders, Jodie Ginsberg, CEO, Index on Censorship, John Sauven, executive director, Greenpeace UK, Thomas Hughes, executive director, Article
19, Carles Torner, executive director, PEN International, Carl MacDougall, president, Scottish PEN, Summer Lopez, senior director of Free Expression Programs, PEN America, Tom Gibson, EU representative, Committee to Protect Journalists, Flutura Kusari,
legal adviser, European Centre for Press and Media Freedom, Scott Griffen, deputy director, International Press Institute, Caroline Muscat, co-founder and editor, the Shift News, Dr Justin Borg-Barthet, senior lecturer, University of Aberdeen School of
Law, Matthew Caruana Galizia, director, Daphne Caruana Galizia Foundation, Paul Caruana Galizia, finance editor, Tortoise, Corinne Vella, sister of Daphne Caruana Galizia, Andrew Caruana Galizia, son of Daphne Caruana Galizia See
details of The Great Hack from theguardian.com |
|
|
|
|
| 20th July 2019
|
|
|
YouTube does not want government censors to silence people the government doesn't like, whilst YouTube actively censors people it does not like, especially those on the right See
article from reclaimthenet.org |
|
|
|
|
|
20th July 2019
|
|
|
The EFF publishes a technical discussion on how the authorities are circumventing encryption used by messaging services See
article from eff.org |
|
The EU seeks to extend and centralise internet censorship across the EU
|
|
|
| 19th July 2019
|
|
| See article from reclaimthenet.org See
EU internet censorship document [pdf] from cdn.netzpolitik.org |
According to a leaked EU internet censorship document obtained by Netzpolitik, a German blog, the European
Commission (EC) is now preparing a new Digital Services Act to unify and extend internet censorship across the EU. The proposals are partially to address eCommerce controls required to keep up with technological changes, but it also addresses more
traditional censorship to control 'fake news', political ideas it does not like, and 'hate speech'. The new rules cover a wider remit of internet companies, covering all digital services, and that means anything from ISPs, cloud hosting, social
media, search engines, ad services, to collaborative economy services (Uber, AirBnB etc). The censorship regime envisaged does not quite extend to a general obligation for companies to censor everything being uploaded, but it goes way beyond
current censorship processes. Much of the report is about unifying the rules for takedown of content. The paper takes some of the ideas from the UK Online Harms whitepaper and sees requirements to extend censorship from illegal content to
legal-but-harmful content. The authors present unified censorship rules for all EU countries as some sort of simplification for EU companies, but as always, ever more rules just advantage the biggest companies, which are, unfortunately for the EU, American. Eg requiring AI filtering of content relies on technology very much in the control of the richest and most advanced companies, ie the likes of Google. Actually the EU paper does acknowledge that EU policies have in the past advantaged
US companies. The paper also notes unease at the way that European censorship decisions, eg the right to be forgotten, have become something implemented by the American giants. |
|
Instagram adds another reason to ban users but promises better warnings of impending censorship and also a better appeal process
|
|
|
| 19th July 2019
|
|
| See article from instagram-press.com |
Instagram explains in a blog post: Under our existing policy, we disable accounts that have a certain percentage of violating content. We are now rolling out a new policy where, in addition to removing accounts with a certain
percentage of violating content, we will also remove accounts with a certain number of violations within a window of time. Similarly to how policies are enforced on Facebook, this change will allow us to enforce our policies more consistently and hold
people accountable for what they post on Instagram. We are also introducing a new notification process to help people understand if their account is at risk of being disabled. This notification will also offer the opportunity to
appeal content deleted. To start, appeals will be available for content deleted for violations of our nudity and pornography, bullying and harassment, hate speech, drug sales, and counter-terrorism policies, but we'll be expanding appeals in the coming
months. If content is found to be removed in error, we will restore the post and remove the violation from the account's record. We've always given people the option to appeal disabled accounts through our Help Center , and in the next few months, we'll
bring this experience directly within Instagram. |
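In practice a 'violations within a window of time' rule is just a sliding-window counter per account. The sketch below is purely illustrative of that idea; the threshold, window length and data structures are invented and nothing here reflects Instagram's actual system.

```typescript
// Illustrative sketch of a "violations within a window of time" rule like the
// one Instagram describes. Thresholds and window length are assumptions.

const WINDOW_MS = 90 * 24 * 60 * 60 * 1000; // 90-day window (assumed)
const MAX_VIOLATIONS_IN_WINDOW = 5;         // threshold (assumed)

// Timestamps (ms since epoch) of upheld violations, per account.
const violationLog = new Map<string, number[]>();

function recordViolation(accountId: string, now: number = Date.now()): boolean {
  const timestamps = violationLog.get(accountId) ?? [];
  timestamps.push(now);

  // Keep only violations that fall inside the sliding window.
  const recent = timestamps.filter((t) => now - t <= WINDOW_MS);
  violationLog.set(accountId, recent);

  // Returns true when the account would be disabled under this rule.
  return recent.length >= MAX_VIOLATIONS_IN_WINDOW;
}
```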
|
Margot James resigns over Brexit
|
|
|
|
18th July 2019
|
|
| See article
from standard.co.uk |
Margot James has resigned as digital minister after she voted against the government in favour of an amendment to block the next prime minister from suspending parliament and forcing through a no-deal Brexit. James was a minister at the Department for Digital, Culture, Media and Sport and MP for Stourbridge. Her remit included steering through the woefully inept age verification regime set to establish the BBFC as the UK internet porn censor. |
|
Now it appears that users who try to protect themselves with VPNs may be unknowingly handing their browsing data over to the Chinese government
|
|
|
| 17th July 2019
|
|
| See
press release from henryjacksonsociety.org |
ICO's Data Protection Training: The Pavlov Method
- ☑ Yes, I won't read this message, and yes you can do what the fuck you like with my porn browsing data
- ☑ Yes please do, I waive all my GDPR rights
- ☑ Yes, I won't read this message, and yes, feel free to blackmail me
- ☑ Yes, you can do anything you like 'to make my viewing experience better'
- ☑ Yes, no need to ask, I'll tick anything
With callous disregard for the safety of porn users, negligent lawmakers devised an age verification scheme with no effective protection of porn users' identity and porn browsing history. The Government considered that GDPR requirements would suffice, whereby internet users are trained to blindly tick a box giving consent to internet companies doing what the fuck they like with their data. Internet users are now well conditioned, like Pavlov's dog, to tick the hundreds of tick boxes they are presented with daily. And of course nobody ever reads what they are consenting to; life's too short. After a while the government realised that the total lack of data protection for porn users might actually prevent their scheme from getting off the ground, as porn users would simply refuse to get age verified. This would result in bankrupt AV companies and perverse disincentives for porn websites: those that implement AV would experience a devastating drop-off in traffic, while those that refuse age verification would be advantaged. So the government commissioned a voluntary kitemark scheme for AV companies to demonstrate to auditors that they keep porn users' identity and browsing history safe. But really the government couldn't let go of its own surveillance requirements to keep the browsing history of porn users. Eventually some AV companies won the right to have a scheme that did not log people's browsing history, but most still maintain a log (justified as 'fraud protection' in the BBFC kitemark scheme description). Now it appears that those who try to avoid the dangers of AV via VPNs may not be as safe as they would hope. The Henry Jackson Society has been researching the VPN industry and has found that 30% of VPNs are owned by Chinese companies that have direct data paths to the Chinese government. Surely this raises extreme security issues, as private porn users could then be set up for blackmail or pressure from the Chinese authorities. The government needs to put an end to the current AV scheme and go back to the drawing board. It needs to try again, this time with absolute legal requirements to immediately delete porn users' identity data and to totally ban the retention of browsing logs.
Anyway, the Henry Jackson Society explains its latest revelations:
Chinese spies could exploit Government's new porn laws to gather compromising material on businessmen, civil servants and public figures, say think tanks. They say Chinese firms have quietly cornered the market in technology that
enables people to access porn sites without having to register their personal details with age verification firms or buy an age ID card in a newsagent. The new law requires those accessing porn sites to prove they are 18, but the checks and registration can be bypassed by signing up to a Virtual Private Network (VPN). These anonymise the location of a computer by routing its traffic through a server based at a remote location. It has now emerged through an investigation by security experts that many of the VPNs are secretly controlled by Chinese-owned firms -- as many as 30% of the networks worldwide. It means that a VPN user's viewing habits and data can not only be legally
requested by the Chinese Government under its lax privacy laws but the VPNs could themselves also be state-controlled, according to the Adam Smith Institute and Henry Jackson Society. Sam Armstrong, spokesman for the Henry Jackson
Society, said: A list of billions of late-night website visits of civil servants, diplomats, and politicians could -- in the wrong hands -- amount to the largest-ever kompromat file compiled on British individuals.
Those in sensitive jobs are precisely the types of individuals who would seek to use a VPN to circumvent the trip to the newsagent to buy a porn pass. Yet, the opaque ownership of these VPNs by Chinese firms
means there is a real likelihood any browsing going through them could fall into the hands of Chinese intelligence.
|
|
Cuts in the uncut version have been cut from the Netflix version
|
|
|
| 16th July 2019
|
|
| Thanks to Jon See
article from people.com
|
Netflix has censored the climactic suicide scene in the finale of season 1 of 13 Reasons Why. The original uncut version is no longer available on Netflix. The Netflix series' creator Brian Yorkey decided to edit the scene after seeking the
advice of medical experts, Netflix said in a statement to PEOPLE. Netflix continued: As we prepare to launch Season 3 later this summer, we've been mindful about the ongoing debate around the show, the statement
continues. So on the advice of medical experts, including Dr. Christine Moutier, Chief Medical Officer at the American Foundation for Suicide Prevention, we've decided with creator Brian Yorkey and the producers to edit the scene in which Hannah takes
her own life from Season 1. The scene, which takes place part way through the finale episode in the first season, no longer includes footage of Hannah, played by actress Katherine Langford, dying by suicide. Instead, the scene
goes from Hannah looking at herself in a mirror to her parents' reaction to her death.
The original uncut version is still available on DVD. It was passed 18 uncut by the BBFC for a suicide scene.
|
|
YouTube is reluctant to censor drill music videos
|
|
|
| 16th July 2019
|
|
| 13th July 2019. See article from
thecanary.co |
Ben McOwen Wilson, the head of YouTube UK, said that the website will not remove drill music videos from the platform, saying that they provide a place for those too often without a voice. He said that YouTube must work with government and regulators to find a balance on removing content. Writing in the Daily Telegraph, McOwen Wilson had a knock at the vague government internet censorship plan outlined in the Online Harms white paper. He said it was right that anything which is illegal offline should
not be permitted online, but added that deciding when to remove videos which were legal but could be considered potentially harmful was a greater issue facing the tech industry. He said: Drawing a line on content
that should be removed isn't always clear. For example, as communities are working to address the issue of gang violence, we too find ourselves developing the right way to play our part. While some have argued there is no place
for drill music on YouTube, we believe we can help provide a place for those too often without a voice. To strike this balance, we work with the Metropolitan Police, community groups and experts to understand local context and
take action where needed.
Offsite Comment: YouTube is right to defend drill The British state's war on rappers is authoritarian and racist. 16th July 2019. See
article from spiked-online.com by Jason Reed |
|
Margot James apologises for the delay whilst the Open Rights Group points out the scheme is still not safe for porn viewers
|
|
|
| 15th July 2019
|
|
| See article from bbc.com See
open letter from openrightsgroup.org |
ICO's Data Protection Training: The Pavlov Method
- ☑ Yes, I won't read this message, and yes you can do what the fuck you like with my porn browsing data
- ☑ Yes please do, I waive all my GDPR rights
- ☑ Yes, I won't read this message, and yes, feel free to blackmail me
- ☑ Yes, you can do anything you like 'to make my viewing experience better'
- ☑ Yes, no need to ask, I'll tick anything
Digital Minister Margot James has apologised for the six-month delay on the so-called porn block, which had been due to take effect today (16th July). It is designed to force pornography websites to verify users are over 18. But the law has
been delayed twice - most recently because the UK government failed to properly notify European regulators. James told the BBC: I'm extremely sorry that there has been a delay. I know it sounds incompetent. Mistakes do
happen, and I'm terribly sorry that it happened in such an important area.
Of course the fundamental mistake is that the incompetent lawmakers cared only about 'protecting the children' and gave bugger all consideration to the
resulting endangerment of the adults visiting porn sites. It took the government months, but it finally started to dawn on them that perhaps they should do something to protect the identity data that they are forcing porn users to hand over that
can then be pinned to their porn browsing history. They probably still didn't care about porn users, but perhaps realised that the scheme would not get off the ground if it proved so toxic that no one would ever sign up for age verification at all. Well, as a belated afterthought, the government, BBFC and ICO went away to dream up a few standards that the age verifiers ought perhaps to stick to, to try and ensure that data is kept safe.
So the whole law ended up as a can of worms. The authorities now realise that there should be a level of data protection, but unfortunately this is not backed up by the law that was actually passed. So the data protection standards suggested by the government/BBFC/ICO are only voluntary, and there remains nothing in law to require that the data actually be kept safe. And there is no recourse against anyone who ends up exploiting people's data. The Open Rights Group have just written an open letter to the government asking it to change its flawed law and actually require that porn users' data is kept properly safe: The Rt Hon Jeremy Wright QC MP Secretary of State for
Digital, Culture, Media and Sport Re: BBFC Age Verification Privacy Certification Scheme Dear Secretary of State,
We write to ask you to legislate without delay to place a statutory requirement on the British Board of Film Classification (BBFC) to make their privacy certification scheme for age verification providers mandatory. Legislation is also needed to
grant the BBFC powers to require compliance reports and penalise non-compliant providers. As presently constituted, the BBFC certification scheme will be a disaster. Our analysis report, attached, shows that rather than setting
out objective privacy safeguards to which companies must adhere, the scheme allows companies to set their own rules and then demonstrate that these are being followed. There are no penalties for providers which sign up to the standard and then fail to
meet its requirements. The broadly-drafted, voluntary scheme encourages a race to the bottom on privacy protection. It provides no consistent guarantees for consumers about how their personal data will be safeguarded and puts
millions of British citizens at serious risk of fraud, blackmail or devastating sexual exposure. The BBFC standard was only published in April. Some age verification providers have admitted that they are not ready. Others have
stated that for commercial reasons they will not engage with the scheme. This means that the bureaucratic delay to age verification's roll-out can now be turned to advantage. The Government needs to use this delay to introduce legislation, or at the
least issue guidance under section 27 of the Digital Economy Act 2017, that will ensure the privacy and security of online users is protected. We welcome the opportunity to bring this issue to your attention and await your
response. Yours sincerely, Jim Killock Executive Director Open Rights Group
|
|
India's ruling party resurrects its call to ban the short video app TikTok
|
|
|
|
15th July 2019
|
|
| See article from thedrum.com |
Short video-sharing app TikTok came into the spotlight in India in the spring of this year. The app was accused of facilitating the distribution of pornography. The app was banned for a while but was restored after it introduced a minimum age of at
least 13 for new accounts. It also implemented automatic censorship tools that detected and blocked nudity. Now the app has reappeared in the spotlight. The Swadeshi Jagran Manch (SJM), the economic wing of the Rashtriya Swayamsevak Sangh, the organisation behind India's ruling party, has again called for video sharing site TikTok to be banned in India. In a letter to Prime Minister Narendra Modi, the SJM said: To prevent such applications from operating in India, we would humbly
request the creation of a new law that requires testing and also regulation to protect our national security as well as the privacy of Indian users from countries with inimical interests to India. Until such a law is notified, all such Chinese
applications, including TikTok and Helo should be banned by the Ministry of Home Affairs. In recent weeks, TikTok has become a hub for anti-national content that is being shared extensively on the application. We have been
notified of videos advocating views that promote religious violence, anti-Harijan sentiments, and mistreatment of women. There have also been various instances of deaths being caused due to TikTok across India.
The essence seems to be that if people are going to communicate anti-state ideas then they could at least use an Indian app rather than a Chinese one. |
|
DCMS will consult about online ID cards so that your porn viewing and all your PC misdemeanours on social media can be logged against your social score
|
|
|
| 14th July 2019
|
|
| See House of Commons Committee Report [pdf] from
publications.parliament.uk |
Despite concern among some groups of witnesses, a shift in approach in the UK Government's position seems on the horizon. The Minister for Digital and the Creative Industries, for example, implied support for a universal digital ID in a recent interview with The Daily Telegraph in 2019: I think there are advantages of a universally acclaimed digital ID system, which nowhere in the world has yet. There is a great prize to be won once the technology and the public's confidence are reconciled.
On 11 June 2019, DCMS and the Cabinet Office announced their intention to launch a consultation on digital identity verification in the coming weeks. The following actions were set
out:
A consultation to be issued in the coming weeks on how to deliver the effective organisation of the digital identity market. The creation of a new Digital Identity Unit, which is a collaboration
between DCMS and Cabinet Office. The Unit will help bring the public and private sector together, ensure the adoption of interoperable standards, specification and schemes, and deliver on the outcome of the consultation. The
start of engagement on the commercial framework for using digital identities from the private sector for the period from April 2020 to ensure the continued delivery of public services.
Single unique identifiers for citizens can transform the efficiency and transparency of Government services. We welcome the Government's announcement in June 2019 that it will consult shortly on digital identity. While we recognise
that in the UK there are concerns about some of the features of a single unique identifier, as demonstrated by the public reaction to the 2006 Identity Card Act, we believe that the Government should recognise the value of consistent identity
verification. The Government should facilitate a national debate on single unique identifiers for citizens to use for accessing public services along with the right of the citizen to know exactly what the Government is doing with their data.
Offsite Comment: Privacy International explains some of the reasons why this is a bad idea 14th July 2019. See
article from privacyinternational.org
The debate shouldn't be about having insight into how your identifier is used. It should be about making sure that identifiers are never usable. After all, any unique identifier will not be limited to government use. Whether
through design or commercial necessity, any such number will also find its way into the private sector. This was another fear highlighted in the mid-2000s, but it has played out elsewhere. For example, the Indian Supreme Court, in their ruling on the Aadhaar system that provided a unique number to more than a billion people, noted that there were dangers of its use in the private sector: Allowing private entities to use Aadhaar numbers will lead to commercial exploitation of an individual's personal data
without his/her consent and could lead to individual profiling. Given everything that's happened since, the 13 years since the 2006 ID Card Act (that was repealed in 2010) can seem like a lifetime. But it's clear that the concerns
expressed then remain prescient now. Now that we know so much more about the risks that the exploitation of people's data plays - and the targeting, profiling and manipulating of individuals and groups - we should be even more fearful today of such a
system than we were a decade ago. Furthermore, it's been shown that we do not need such a unique identifier for people to securely access government services online, and it's on such concepts we must build going forward. See the full
article from privacyinternational.org
|
|
The Soska sisters' Rabid
|
|
|
| 13th July 2019
|
|
| See article from
oneangrygamer.net |
Rabid is a 2019 Canada Sci-Fi horror by Jen Soska and Sylvia Soska. Starring Laura Vandervoort, Greg Bryk and Stephen Huszar.
An aspiring model suffers a disfiguring traffic
accident and undergoes a radical untested stem-cell treatment. The experimental transformation is a miraculous success, leaving her more beautiful than before. She finds her confidence and sexual appetite is also strangely increased resulting in several
torrid encounters. But she unknowingly sets off a spiraling contagion, and within 24 hours her lovers become rabid, violent spreaders of death and disease. As the illness mutates, it spreads through society at an accelerated rate causing an
ever-increasing number of people to rampage through the city in a violent and gruesome killing spree. Twitter has banned the account of the Soska Sisters after they posted promotional images for their forthcoming horror Rabid. The
image appears on the cover of the Rue Morgue magazine. The directors commented on their Facebook page: Bad girls. We'll be back. But man, those @mastersfx1 prosthetics in #Rabid must be medically accurate to get
us suspended for advertising our World Premiere with a FrightFest banner. I like how that makeup could be on the cover of @ruemorguemag & @fangoria, but shut down on Twitter. Wild world we are living in.
|
|
|
|
|
| 12th July 2019
|
|
|
Belgian researchers reveal recordings from Google's home assistant that are clearly not activated by an 'OK Google' See
article from theregister.co.uk |
|
|
|
|
| 12th July 2019
|
|
|
Climate campaigners already want to take away your car, holidays abroad, and your Sunday roast, now they've got their beady eyes on your porn and Netflix See
article from newscientist.com |
|
Twitter has updated its rules and will now ban tweets dehumanising religious groups by comparing them to animals or viruses
|
|
|
|
11th July 2019
|
|
| See article from blog.twitter.com |
Twitter has blogged about its recent censorship rules update: Our primary focus is on addressing the risks of offline harm, and research shows that dehumanizing language increases that risk. As a result, after months of
conversations and feedback from the public, external experts and our own teams, we're expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion. Starting today, we will
require Tweets like these to be removed from Twitter when they're reported to us: Religious groups are viruses. They are making this country sick.
It is one of the unintended consequences of censorship that it often applies most to those who are supposed to be in need of protection. Eg religious groups are the ones most likely to be pulled up for hate directed at other religious groups. So will Twitter ban such Bible quotes as: But [Moses] made his own people go out like sheep -- Distinguishing between them and the Egyptians, as a shepherd divideth between the sheep and the goats, having set his own mark upon
these sheep, by the blood of the Lamb sprinkled on their door-posts. And they went forth as sheep, not knowing whither they went. And guided them in the wilderness -- As a shepherd guides his flock. (Psalm 78:52-54)
|
|
Danish education minister proposes the establishment of a censor board for influencers on social media
|
|
|
| 11th July 2019
|
|
| See article from rt.com |
Denmark is considering the censorship of social media after an Instagram influencer's suicide note kicked off a controversy. Instagram personality Fie Laursen posted a suicide note which received 30,000 comments and 8,000 likes. The public suicide
note remained online for two days before Laursen herself took it down, having received treatment in a local hospital for an attempted overdose. In the aftermath, Danish Minister of Children and Education Pernille Rosenkrantz-Theil has proposed
that influencers and bloggers must adhere to press-based rules to avoid 'harm' to the wider public. Rosenkrantz-Theil said: All journalists are familiar with the press ethics rules which state, for example, that one must be careful about talking about suicide in the public space. When managing popular blogs with hundreds of thousands of followers, I think we can make the same demands.
Rosenkrantz-Theil proposes the formation of a governmental
censorship board to enforce such rules which would be granted the authority to remove material in breach of whatever guidelines were created. The politician also outlined a scenario whereby the influencers would have to designate three people to have the
password for their accounts. These people can then remove a post if they believe it violates the press ethics. The proposed Press Board would be afforded the right to criticize and ultimately, to censor, offending posts that broke any potential
ethical guidelines. The censor's remit would be limited to those influencers with more than 5,000 followers.
|
|
UK ISPA trade association colludes with the state censors and claims that heroic Mozilla is an internet villain for implementing censor evading encrypted DNS in Firefox
|
|
|
|
10th July 2019
|
|
| 6th July 2019. See
article from ispa.org.uk |
The Internet Services Providers' Association has announced the finalists for what its members consider as the 2019 Internet Hero and Villain. The Internet Hero nominations this year include those campaigning to improve trust and confidence online;
mapping out the UK's evolving broadband landscape; and working on global internet governance issues. Meanwhile, the Villain nominees take in the impact of new technical standards on existing online protections, the balance between freedom of expression and
copyright online and the global telecoms supply chain. This year's nominations for the 2019 Internet Heroes and Villains in full are: ISPA Internet Hero
- Sir Tim Berners-Lee -- for spearheading the Contract for the Web campaign to rebuild trust and protect the open and free nature of the Internet in the 30th anniversary of the World Wide Web
- Andrew Ferguson OBE, Editor, Thinkbroadband - for
providing independent analysis and valuable data on the UK broadband market since the year 2000
- Oscar Tapp-Scotting & Paul Blaker, Global Internet Governance Team, DCMS -- for leading the UK Government's efforts to ensure a balanced and
proportionate agenda at the International Telecommunications Union Conference
ISPA Internet Villain
- Mozilla -- for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK
- Article 13 Copyright Directive -- for threatening
freedom of expression online by requiring content recognition technologies across platforms
- President Donald Trump -- for causing a huge amount of uncertainty across the complex, global telecommunications supply chain in the course of trying to
protect national security
The winners of this year's Heroes and Villains will be chosen by the ISPA Council, and will be announced at the ISPA Awards Ceremony on 11th July in London. Update: Villainous ISPs decide that colluding with censors and
snoopers is bad PR 10th July 2019. See article from ispa.org.uk and
article from techdirt.com
The villains of ISPA have withdrawn their nomination of the heroic Mozilla as an internet villain. ISPA writes: Last week ISPA included Mozilla in our list of Internet Villain nominees for our upcoming annual awards.
In the 21 years the event has been running it is probably fair to say that no other nomination has generated such strong opinion. We have previously given the award to the Home Secretary for pushing surveillance legislation,
leaders of regimes limiting freedom of speech and ambulance-chasing copyright lawyers. The villain category is intended to draw attention to an important issue in a light-hearted manner, but this year has clearly sent the wrong message, one that doesn't
reflect ISPA's genuine desire to engage in a constructive dialogue. ISPA is therefore withdrawing the Mozilla nomination and Internet Villain category this year.
TechDirt noted that the ISPA nomination was kindly advertising Mozilla's
Firefox option for DNS over HTTPS: ISPA nominated Mozilla for the organization's meaningless internet villain awards for, at least according to ISPA, undermining internet safety standards in the UK:
Of course Mozilla is doing nothing of the sort. DNS over HTTPS not only creates a more secure internet that's harder to filter and spy on, it actually improves overall DNS performance, making everything a bit faster. Just because this
doesn't coalesce with the UK's routinely idiotic and clumsy efforts to censor the internet, that doesn't somehow magically make it a bad idea. Of course, many were quick to note that ISPA's silly little PR stunt had the opposite
effect than intended. It not only advertised that Mozilla was doing a good thing, it advertised DNS over HTTPS to folks who hadn't heard of it previously. Matthew Prince (@eastdakota) tweeted: Given the number of
people who've enabled DNS-over-HTTPS in the last 48 hours, it's clear @ISPAUK doesn't understand or appreciate @mmasnick's so-called "Streisand Effect."
|
|
|
|
|
| 9th
July 2019
|
|
|
After China and North Korea...obviously... See article from gizmodo.com |
|
Netflix vows to stub out smoking in its productions
|
|
|
|
8th July 2019
|
|
| Thanks to Nick See article from
i-d.vice.com |
From chain-smoking time traveller Nadia in Russian Doll, to frazzled single mom Joyce Byers, rarely seen without a pack in her shaking hands in Stranger Things, Netflix's characters love to smoke. But that looks set to change. Anti-smoking campaigners the Truth Initiative published campaign material noting that Stranger Things was among the programs that showed the most smoking on screen. Other series that featured were Unbreakable Kimmy Schmidt and Orange Is The New
Black. Netflix is now vowing to curtail the appearance of cigarettes on screen in all its new projects. In a statement to Variety, Netflix pledged to make all their programming aimed at young people -- anything with a rating below PG-13 or
TV-14 -- smoking and e-cigarette free, except for reasons of historical and factual accuracy. Meanwhile, for their content aimed at older viewers, there will be no smoking or e-cigarette use unless it's either essential or character-defining.
|
|
|
|
|
|
8th July 2019
|
|
|
Because adults don't get it and they aren't policing the children's playground See article from theguardian.com
|
|
Given that Google are clearly censoring right wing commentators, it seems entirely plausible that they are similarly interfering on other political issues, including abortion in Ireland
|
|
|
| 7th July 2019
|
|
| See article from rt.com |
Google has been accused of blacklisting pro-life YouTube search terms ahead of last year's vote in Ireland on legalizing abortion. Pundits call it deliberate manipulation and demand that the company be held accountable. Allegations that Google's manual interference with YouTube search results may have played a role in the 2018 referendum on abortion in Ireland surfaced last week, when the Project Veritas website published an insider-based article on the matter. Blocked terms reportedly included 'abortion is murder', 'Irish Catholic' and 'pro-life'. Google responded, saying that there was no distinction between pro-life and pro-choice queries on YouTube at the time and that its whole procedure was transparent. This is hardly a credible response from Google: its processes are never transparent, so how can one believe the other half of the statement?
|
|
The National Secular Society: Government online harm plans could curb free expression
|
|
|
| 6th July 2019
|
|
| See article from secularism.org.uk
|
The National Secular Society has warned that government plans to require social media companies to censor hateful and offensive content could act as a de facto blasphemy law. In its response to the government's white paper on
online harms , the NSS said efforts to confront and challenge hateful speech and behaviour must not undermine free speech on religion. The white paper outlines plans to create a regulator with the power to fine online platforms
and block websites. The regulator will be required to create guidance for social media companies, outlining what constitutes hateful content online. That definition would include content which is not necessarily illegal, content which may directly or indirectly cause harm to other users, and some offensive material. The NSS said censoring content that could be considered offensive would severely restrict freedom of expression, including
the freedom to criticise or satirise religion. The society added that the question of offence was an entirely subjective matter. The NSS also noted that a requirement on companies to demonstrate 'continuous improvement' in
tackling hateful material could encourage them to be more censorious. The NSS also challenged a claim in the white paper that offending online is just as serious as that occurring offline. The NSS said this line lowered the
threshold for hate crimes, because people's ability to commit such crimes is much more limited online than offline. The society raised the example of a man who was recently arrested on suspicion of hate crime after publishing a
video on Facebook of himself mocking Islamic prayer in a hospital prayer room. The NSS noted that threats of death and violence were made towards the man and were reported to the police, but no action appeared to have been taken against the perpetrators
to date. The NSS also criticised the government's definition of hate crime which is contained within the white paper. The definition says hate crimes include crimes demonstrating hostility on the grounds of an individual's actual
or perceived race, religion, sexual orientation, disability or transgender identity. The NSS said this definition was too broad, meaning any incident in which an individual demonstrates hostility toward another individual based on
the listed characteristics could be treated as a hate crime. The society said strong critics of religion or Christians who preach that gay people will go to Hell were examples of those who risk being charged with hate crimes.
NSS spokesperson Megan Manson said the white paper had given too much ground to those who attempt to shut down legitimate expression, including on religion. The government should treat the fundamental right to
free expression as a positive value in its attempts to promote social cohesion. Instead it has proposed cracking down on what people can say on social media, based largely on vague and broad definitions of what constitutes 'hateful' material. In the
process it risks significantly undermining free expression for all and stirring social resentment. Ministers must not treat the civil liberties of British citizens as an afterthought in their efforts to tackle online harms.
|
|
The BBFC makes its quarterly report of appeals against unnecessary ISP censorship of websites accessed on mobile devices
|
|
|
| 6th July 2019
|
|
| See report [pdf] from bbfc.co.uk
|
Every 3 months the BBFC reports on appeals from the public and website owners about excessive blocking by mobile ISPs. These appeals follow much the same pattern as in previous reports. The ISPs clearly have a policy of slapping an 18 rating on sites selling VPNs and CBD products (cannabis related, but without psychoactive effects). The BBFC continues to find that neither of these topics requires an 18 rating. But the ISPs clearly don't take heed, as there will surely be another set next quarter. This month the BBFC did find that an 18 rating was required for a website campaigning for the legalisation of psychedelic drugs. |
|
Pirate Party MEP has been elected as a Vice-President of the EU Parliament.
|
|
|
| 5th July 2019
|
|
| See article from torrentfreak.com
|
The Pirate Party political movement owes its early success to sticking up for The Pirate Bay, following a raid in Sweden. In recent years Pirates have delivered many excellent politicians and Marcel Kolaja, one of the new MEPs, has just been elected as a
Vice-President of the EU Parliament. 4 Pirate MEPs were elected at the last European Election: one from Germany and three from the Czech Republic. During the last term, the excellent Julia Reda was at the forefront of many lawmaking
discussions, particularly with regard to the new Copyright Directive. While Reda recently left Parliament, the new MEPs obviously have similar ambitions. With 426 votes, Marcel Kolaja was elected with an absolute majority in the second voting
round. He will serve as one of the fourteen Vice-Presidents tasked with replacing the President as chair of the plenary if needed, as well as a variety of other tasks. |
|
But the News Media Association points out that it would force websites to choose between being devoid of audience or stripped of advertising
|
|
|
| 4th July 2019
|
|
| See article from newsmediauk.org See also
criticism of ICO plan from newsmediauk.org |
For some bizarre reason the ICO seems to have been given powers to make wide-ranging internet censorship law on the fly without needing it to be considered by parliament. And with spectacular incompetence, it has come up with a child safety plan to require nearly every website in Britain to implement strict age verification. Baldrick would have been proud: it is more or less the internet equivalent of making children safe on the roads by banning all cars. The trade association for news organisations, the News Media Association, summed up the idea in a consultation response saying: ICO's Age Appropriate Code Could Wreak Havoc On News Media
Unless amended, the draft code published for consultation by the ICO would undermine the news media industry, its journalism and business innovation online. The ICO draft code would require commercial news media publishers to choose between their online
news services being devoid of audience or stripped of advertising, with even editorial content subject to ICO judgment and sanction, irrespective of compliance with general law and codes upheld by the courts and relevant regulators.
The NMA strongly objects to the ICO's startling extension of its regulatory remit, the proposed scope of the draft code, including its express application to news websites, its application of the proposed standards to all users in the
absence of robust age verification to distinguish adults from under 18-year olds and its restrictions on profiling. The NMA considers that news media publishers and their services should be excluded from scope of the proposed draft Code.
Attracting and retaining audience on news websites, digital editions and online service, fostering informed reader relationships, are all vital to the ever evolving development of successful newsbrands and their services, their
advertising revenues and their development of subscription or other payment or contribution models, which fund and sustain the independent press and its journalism. There is surely no justification for the ICO to attempt by way of
a statutory age appropriate design code, to impose access restrictions fettering adults (and children's) ability to receive and impart information, or in effect impose 'pre watershed' broadcast controls upon the content of all currently publicly
available, free to use, national, regional and local news websites, already compliant with the general law and editorial and advertising codes of practice upheld by IPSO and the ASA. In practice, the draft Code would undermine
commercial news media publishers' business models, as audience and advertising would disappear. Adults will be deterred from visiting newspaper websites if they first have to provide age verification details. Traffic and audience will also be reduced if
social media and other third parties were deterred from distributing or promoting or linking titles' lawful, code compliant, content for fear of being accused of promoting content detrimental to some age group in contravention of the Code. Audience
measurement would be difficult. It would devastate advertising, since effective relevant personalised advertising will be rendered impossible, and so destroy the vital commercial revenues which actually fund the independent media, its trusted journalism
and enable it to innovate and evolve to serve the ever-changing needs of its audience. The draft Code's impact would be hugely damaging to the news industry and wholly counter to the Government's policy on sustaining high quality,
trusted journalism at local, regional, national and international levels. Newspapers online content, editorial and advertising practices do not present any danger to children. The ICO has not raised with the industry any evidence
of harm, necessitating such drastic restrictions, caused by reading news or service of advertisements where these are compliant with the law and the standards set by specialist media regulators.
| The Information Commissioner's Office has a 'cunning plan' |
Of course the News Media Association is making a strong case for its own exclusion from the ICO's 'cunning plan', but the idea is equally devastating for websites in every other internet sector. Information Commissioner Elizabeth Denham was called to give evidence to Parliament's DCMS Select Committee this week on related matters, and she acknowledged the clearly negative feedback on her age verification idea. Her sidekick backtracked a little, saying that the ICO did not mean age verification via handing over passport details, more like one of those schemes where AI guesses a user's age by scanning what sort of thing they have been posting on social media (which of course requires a massive grab of data that is best kept private, especially for children; see the sketch at the end of this item for a rough idea of what such profiling entails). The outcome seems to be a dictate to the internet industry to 'innovate' and find a solution to age verification that does not require the mass hand-over of private data (you know, the very data that data protection laws are supposed to be protecting). The ICO put a time limit of about 12 months on this innovation demand. In the meantime the ICO has told the news industry that the age verification idea won't apply to them, presumably because they can kick up a hell of a stink about the ICO in their mass-market newspapers. Denham said:

We want to encourage children to find out about the world, we want children to access news sites. So the concern about the
impact of the code on media and editorial comment and journalism I think is unfounded. We don't think there will be an impact on news media sites. They are already regulated and we are not a media regulator.
She did not speak any similar reassuring words to the other sectors of the internet industry that are likely to be equally devastated by the ICO's 'cunning plan'. |
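As for the 'AI guesses your age from what you post' idea floated to the committee, the sketch below is a deliberately crude, entirely hypothetical illustration of what such profiling involves; the keyword lists and scoring are invented for this example and do not describe the ICO's proposal or any real product. Real systems would use trained models, but the privacy point stands either way: the estimator only works if the service first collects and analyses the user's posts.

    # Hypothetical sketch: estimating an age band from social media posts using
    # invented keyword hints. Purely illustrative; not the ICO's scheme or any
    # vendor's product.
    AGE_BAND_HINTS = {
        "under_13": {"homework", "year 7", "minecraft", "pocket money"},
        "13_17": {"gcse", "sixth form", "prom", "learner driver"},
        "18_plus": {"mortgage", "invoice", "pension", "commute"},
    }

    def estimate_age_band(posts):
        """Return the age band whose hint words appear most often in the posts."""
        scores = {band: 0 for band in AGE_BAND_HINTS}
        for post in posts:
            text = post.lower()
            for band, hints in AGE_BAND_HINTS.items():
                scores[band] += sum(1 for hint in hints if hint in text)
        best = max(scores, key=scores.get)
        # Default to treating the user as an adult if nothing matches at all.
        return best if scores[best] > 0 else "18_plus"

    if __name__ == "__main__":
        print(estimate_age_band(["Revising for my GCSE maths mock", "Prom dress sorted!"]))  # 13_17

Note that even this toy version needs the raw posts as input, which is exactly the 'massive grab of data' objection raised above.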
|
Indonesia's internet censor to consider a ban on VPNs
|
|
|
| 4th July 2019
|
|
| See article from
coconuts.co |
VPNs recently came under the scrutiny of the Indonesian government after authorities placed restrictions on social media during the May 21-22 election protests. At that time, the government temporarily banned certain features of social media to censor
the communications that it did not like. Inevitably many Indonesians turned to using VPNs to bypass the ban, causing a sharp increase in VPN downloads. In response, the government claimed that VPNs, especially the free ones, may pose threats to users'
private data and that they should be uninstalled. Now the Information and Communications Ministry (Kominfo) has chipped in, saying that it will not hesitate to block VPNs that aren't licensed in Indonesia. The licensing requirement seems to rest on the tenuous notion that VPNs are somehow equivalent to ISPs, and ISPs in Indonesia must be licensed. This connection is not quite confirmed as yet, and Kominfo is set to meet with the Association of Internet Service Providers in Indonesia (APJII) to
discuss a possible VPN provider ban. |
|
|
|
|
| 3rd July 2019
|
|
|
Press Gazette opposes the government's internet censorship proposals in the Online Harms white paper, but only calls for exemptions for the press itself. See
article from pressgazette.co.uk |
|
Germany fines Facebook for a transparency report that did not detail the number of complaints of illegal content that Facebook received
|
|
|
|
2nd July 2019
|
|
| See article from politico.eu
|
Germany has fined Facebook for failing to detail the number of complaints received in a transparency report. The Federal Office for Justice (BfJ), a subdivision of the German justice ministry, announced that it had issued Facebook a fine of 2 million euros for failing to meet the requirements of Germany's Network Enforcement Act, a law against illegal content, in its transparency report for the first half of 2018. In the penalty charge notice, the BfJ complains in particular that the number of received complaints about unlawful content in the published report is incomplete, the office said in its announcement, adding that this creates a distorted image among the public about the extent of unlawful content [on the platform] and the way the social network is dealing with it. |
|
The EU notes that it has now received notification of the BBFC rules on censoring adult porn
|
|
|
| 2nd July 2019
|
|
| See
article from ec.europa.eu |
The implementation of the UK's new porn censorship regime was recently delayed by 6 months due to not filing the details with the EU as required by European law. Well, the BBFC censorship rules have now been duly filed. The EU website notes that they were filed on 1st July 2019 and there is now a period for comments up until 2nd October 2019. When announcing the delay, censorship minister Jeremy Wright noted that at the end of this period another month or so may be required to respond to comments. The EU website noted above provides for comments, but maybe only MEPs and the like can hold up the process by asking questions or making comments. Or perhaps any European can comment. |
|
|
|
|
|
2nd July 2019
|
|
|
Here's what the Online Harms White Paper means for UK internet users See article from pcmag.com
|
|
|