Twitter censors the Euro election campaign accounts of Carl Benjamin and Tommy Robinson
4th May 2019

27th April 2019. See article from mirror.co.uk
Twitter has banned Tommy Robinson and Ukip candidate Carl Benjamin's campaign accounts. Both had already had their personal accounts banned from the platform - but now their campaign accounts are also suspended. Benjamin, a vlogger who calls himself Sargon of Akkad, was banned from Twitter in 2017, whilst Robinson was permanently banned in March 2018. In a tweet from the party's official account, Ukip said: Official UKIP MEP Campaign account @CarlUkip, of which Carl Benjamin has no access to has been suspended from Twitter. This is election interference, and UKIP will get to the bottom of this.
And Ukip defended Benjamin as a YouTube entertainer fighting political correctness - before telling the Mirror: He will remain on the UKIP ticket.
Offsite Comment: Twitter's outrageous meddling in British democracy
4th May 2019. See article from spiked-online.com by Brendan O'Neill
In banning Tommy Robinson and Carl Benjamin, Twitter is behaving like a corporate dictator.
India bans Reddit, one of the few social media sites that allows porn
30th April 2019

See article from avn.com
AVN.com is reporting that the online forum Reddit has been blocked by India's largest ISPs. Reddit is ranked as the 21st most heavily trafficked site in the world, with 330 million users spread across 217 countries. Reddit, of course, is one of the few remaining major social media platforms that does not ban porn, and according to the India Times report, that is likely the reason why the Indian ISPs have blocked the discussion forum site.
30th April 2019
The platform's new policy will disproportionately affect women and sex workers. By Jesselyn Cook. See article from huffpost.com
New Zealander falls victim to the country's extreme censorship of the mosque attack video
29th April 2019

See article from oneangrygamer.net
New Zealand police have charged a young man for sharing a meme based on Brenton Tarrant's live streamed murderous attack on a Christchurch mosque. The New Zealand authorities had previously banned the video, with the official film censor declaring it 'objectionable'. And apparently this makes even the use of still images totally illegal. ABC News is reporting that at least six people have been charged with illegally sharing the video contents with other people, but presumably this is referring to the whole video being passed on. And again according to ABC, the meme sharing young man has been held in jail since being arrested for his joke. He will reappear in court on July 31 when electronically monitored bail will be considered.
Meanwhile New Zealand's Prime Minister, Jacinda Ardern, will be meeting with executives from big tech, along with world leaders, in order to prohibit the spread or sharing of violent extremism or terrorism online at all. This official policy calling for censorship has been tagged The Christchurch Call, but details haven't been made public yet.
Ironically this all seems to be playing into the hands of the Christchurch shooter, Brenton Tarrant. In his manifesto he specifically wanted governments and regulators to escalate censorship to the point of creating civil unrest.
Update: Jailed
19th June 2019. See article from straitstimes.com
A New Zealand man was jailed for 21 months yesterday for distributing the gruesome live-stream video of the Christchurch mosque attacks that killed 51 Muslim worshippers. Christchurch District Court heard that the man distributed the raw footage to about 30 people and had another version modified to include crosshairs and a kill count, The New Zealand Herald reported. This was in effect a hate crime against the Muslim community, Judge Stephen O'Driscoll said, adding that it was particularly cruel to share the video in the days after the attacks, when families were still waiting to hear news of their loved ones.
Well-known security expert does a bit of a hatchet job on the BBFC Age Verification Certificate Standard
27th April 2019

See article from threadreaderapp.com . Also an article from twitter.com
Starting with a little background into the authorship of the document under review. AVSecure CMO Steve Winyard told XBIZ:
The accreditation plan appears to have very strict rules and was crafted with significant input from various governmental bodies, including the DCMS (Department for Culture, Media & Sport), NCC Group plc (an expert security and audit firm), GCHQ (U.K. Intelligence and Security Agency), ICO (Information Commissioner's Office) and of course the BBFC.
But computer security expert Alec Muffett writes:
This is the document which is being proffered to protect the facts & details of _YOUR_ online #Porn viewing. Let's read it together! What could possibly go wrong? ...
This document's approach to data protection is fundamentally flawed. The (considerably) safer approach - one easier to certificate/validate/police - would be to say everything is forbidden except for [a defined list]; you would then allow vendors to appeal for exceptions under review. It makes a few passes at pretending that this is what it's doing, but with subjective holes (highlighted in green in the thread) that you can drive a truck through: ...
What we have here is a rehash of quite a lot of reasonable physical/operational security, business continuity & personnel security management thinking -- with digital stuff almost entirely punted.
It's better than #PAS1296, but it's still not fit for purpose.
Read the full thread
27th April 2019
Some Westerners are actually lauding Sri Lanka's authoritarian social-media ban. By Fraser Myers See article from
spiked-online.com
NewsGuard is pushing for a deal for ISPs to flash warnings to internet users whilst they are browsing 'wrong think' news websites
26th April 2019

See article from theguardian.com
NewsGuard is a US organisation trying to muscle in on governments' concerns about 'fake news'. It doesn't fact check individual news stories but gives ratings to news organisations on what it considers to be indicators of 'trustworthiness'. At the moment it is most widely known for providing browser add-ons that display a green shield when readers are browsing an 'approved' news website and a red shield when the website is disapproved.
Now the company is pushing something a little more Orwellian. It is in talks with UK internet providers such that the ISP would inject some sort of warning screen should an internet user [inadvertently] stray onto a 'wrong think' website. The idea seems to be that users can select whether they want these intrusive warnings or not, via a mechanism similar to that used for the parental control of website blocking.
NewsGuard lost an awful lot of credibility in the UK when its first set of ratings singled out the Daily Mail as a 'wrong think' news source. It caused a bit of a stink and the decision was reversed, but it rather shows where the company is coming from.
Surely they are patronising the British people if they think that people want to be nagged about reading the Daily Mail. People are well aware of the biases and points of view of the news sources they read. They will not want to be nagged by those that think they know best what people should be reading. I think it is only governments and politicians that are concerned about 'fake news' anyway. They see it as some sort of blame opportunity. It can't possibly be their own policies that are so disastrously unpopular with the people; surely it must be mischievous 'fake news' peddlers that are causing the grief.
Twitter statement misleadingly suggests it will be cracking down on politicians' lies
26th April 2019

See article from blog.twitter.com
Twitter writes in a blog post:
Strengthening our approach to deliberate attempts to mislead voters
Voting is a fundamental human right and the public conversation occurring on Twitter is never more important than during elections. Any attempts to undermine the process of registering to vote or engaging in the electoral process is contrary to our company's core values.
Today, we are further expanding our enforcement capabilities in this area by creating a dedicated reporting feature within the product to allow users to more easily report this content to us. This is in addition to our existing proactive approach to tackling malicious automation and other forms of platform manipulation on the service. We will start with the 2019 Lok Sabha elections in India and the EU elections and then roll out to other elections globally throughout the rest of the year.
What types of content are in violation?
You may not use Twitter's services for the purpose of manipulating or interfering in elections. This includes but is not limited to:
Misleading information about how to vote or register to vote (for example, that you can vote by Tweet, text message, email, or phone call); Misleading information about requirements for voting,
including identification requirements; and Misleading statements or information about the official, announced date or time of an election.
TikTok video sharing app has tried to clean up its act after getting into trouble in an Indian court
25th April 2019

16th April 2019. See article from medianama.com
Video-sharing app TikTok has introduced an age gate feature for new users, which it claims will only allow those aged 13 years and above to create an account. TikTok also declared that it has removed more than six million videos that were in violation of its community guidelines. TikTok is said to be available in more than 20 countries, including India, and covers major Indian languages, including Hindi, Tamil, Telugu and Gujarati.
The app was banned by the Madras High Court earlier this month, chiefly on the ground that it posed a danger to children. The court said the app contained degrading culture, and that it encouraged pornography and pedophilia. In February, TikTok was fined $5.7 million by the US Federal Trade Commission for violating the Children's Online Privacy Protection Act (COPPA) by collecting personal information of children below 13 years without parental consent. As of April 15, the app remains available for download on Google's Play Store.
Update: TikTok unbanned
25th April 2019. See article from theverge.com
The short video sharing app TikTok has managed to persuade an Indian court that it is capable of censoring nudity in videos that would degrade Indian viewers.
The UK government gets wind of a new internet protocol that will play havoc with their ability to block websites
23rd April 2019

See article from ispreview.co.uk . See a more detailed explanation of DoH from ispreview.co.uk . See a list of DoH servers and DoH-ready browsers from github.com
A DNS server translates the text name of a website into a numerical IP address. At the moment ISPs provide the DNS servers and they use this facility to block websites. If you want to access bannedwebsite.com, the ISP simply refuses to tell your browser the IP address of the website you are seeking. The ISPs use this capability to implement blocks on terrorist/child abuse material, copyright infringing websites, porn websites without age verification, network level parental control blocking and many more things envisaged in the Government's Online Harms white paper.
At the moment DNS requests are transmitted in the clear, so even if you choose another DNS server the ISP can see what you are up to, intercept the message and apply its own censorship rules anyway.
This is all about to change, as the internet authorities have introduced a change meaning that DNS requests can now be encrypted using the same web standard encryption as used by https. The new protocol option is known as DNS over HTTPS, or DoH. The address being requested cannot be monitored under several existing protocols, DNS over TLS and DNSCrypt, but DoH goes one step further in that ISPs cannot even detect that it is a DNS request at all. It appears exactly the same as a standard HTTPS request for website content. This prevents the authorities from banning DoH outright by blocking all such requests: if they tried, they would have to block all https websites.
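As a minimal sketch of what such a lookup involves (assuming Python's requests package; Cloudflare's public DoH endpoint is used here, and Google's dns.google offers the same JSON interface):

import requests

def doh_lookup(hostname: str) -> list:
    # Resolve a hostname over HTTPS, bypassing the ISP's DNS resolver.
    # To the ISP this is just another encrypted request to cloudflare-dns.com.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": hostname, "type": "A"},
        headers={"accept": "application/dns-json"},  # ask for the JSON wire format
        timeout=5,
    )
    resp.raise_for_status()
    # Each answer record carries a resolved IP address in its "data" field
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))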
There's nothing to stop users from sticking with their ISP's DNS and submitting to all the familiar censorship policies. However, if your browser allows, you can ask it to use a non-censorial DNS server over HTTPS. There are already plenty of servers out there to choose from, but it is down to the browser to define the choice available to you. Firefox already allows you to select an encrypted DNS server, and Google is not far behind with its Chrome browser. At the moment Firefox lets those with a techie bent opt in to DoH, but Firefox recently made waves by suggesting that it would soon default to using its own server and make it a techie change to opt out and revert to the ISP's DNS.
Perhaps this sounds a little unlikely. The Government has got well wound up by the fear of losing censorship control over UK internet users, so no doubt it will be calling in people from Firefox and Chrome to try to get them to enforce state censorship. However it may not be quite so easy. The new protocol allows anyone to offer non-censorial (or even censorial) DoH servers. If Firefox can be persuaded to toe the government line then other browsers can step in instead.
The UK Government, broadband ISPs and the National Cyber Security Centre (NCSC) are now set to meet on the 8th May 2019 to discuss Google's forthcoming implementation of encrypted DoH. It should be an interesting meeting, but I bet they'll never publish the minutes.
I rather suspect that the Government has shot itself in the foot over this with its requirements for porn users to identify themselves before being able to access porn. Suddenly it will have spurred millions of users to take an interest in censorship circumvention to avoid endangering themselves, and probably a couple of million more who will be wanting to avoid the blocks because they are too young. DNS, DoH, VPNs, Tor and the like will soon become everyday jargon.
German lawmakers consider bill to ban Tor and perhaps even encrypted messaging
23rd April 2019

See CC article from privateinternetaccess.com by Caleb Chen
On the 15th of March, the German Bundesrat (Federal Council) voted to amend the Criminal Code in relation to internet based services such as The Onion Router (Tor). The proposed law has been lambasted as being too vague, with privacy experts rightfully fearful that the law would be overapplied. The proposal, originating from the North Rhine-Westphalian Minister of Justice Peter Biesenbach, would amend and expand criminal law and make running a Tor node or website illegal and punishable by up to three years in prison. According to Zeit.de, if passed, the expansion of the Criminal Code would be used to punish anyone who offers an internet-based service whose access and accessibility is limited by special technical precautions, and whose purpose or activity is directed to commit or promote certain illegal acts.
What's worse is that the proposed changes are so vaguely worded that many other services that offer encryption could be seen as falling under this new law. While the proposal does seem to have been written to target Tor hidden services which are dark net markets, the vague way it has been written makes it a very real possibility that other encrypted services, such as messaging, might be targeted under these new laws as well.
Now that the motion to amend has been accepted by the Bundesrat, it will be forwarded to the Federal Government for drafting, consideration, and comment. Then, within a month and a half, this new initiative will be forwarded to the German parliament, the Bundestag, where it will be finally voted on.
Private Internet Access and many others denounce this proposal and continue to support Tor and an open internet. Private Internet Access currently supports the Tor Project and runs a number of Tor exit nodes as part of our commitment to online privacy. PIA believes this proposed amendment to the German Criminal Code is not just bad for Tor, which was named specifically, but also for online privacy as a whole -- and we're not the only ones.
German criminal lawyer David Schietinger told Der Spiegel that he was concerned the law was overreaching and could also catch an e-mail provider or the operator of a classic online platform with password protection.
The bill contains mainly rubber paragraphs with the clear goal to criminalize operators and users of anonymization services. Intentionally, the facts are kept very blurred. The intention is to create legal uncertainty and unavoidable risks of possible criminal liability for anyone who supports the right to anonymous communication on the Internet.
23rd April 2019
Vendors must start adding physical on/off switches to devices that can spy on us. By Larry Sanger See
article from larrysanger.org
Does the BBFC AV kite mark mean that an age verification service is safe?
22nd April 2019

See BBFC Age-verification Certificate Standard [pdf] from ageverificationregulator.com . See article from avsecure.com
The BBFC has published a detailed standard for age verifiers to be tested against to obtain a green AV kite mark, aiming to convince users that their identity data and porn browsing history is safe. I have read through the document and conclude that it is indeed a rigorous standard that I guess will be pretty tough for companies to obtain. I would say it would be almost impossible for a small or even medium size website to achieve the standard, which more or less means that using an age verification service is mandatory. The standard has lots of good stuff about physical security of data and vetting of staff access to the data.
Age verifier AVSecure commented:
We received the final documents and terms for the BBFC certification scheme for age verification providers last Friday. This has had significant input from various Government bodies including DCMS (Dept for Culture, Media & Sport), NCC Group plc (expert security and audit firm), GCHQ (UK Intelligence & Security Agency), ICO (Information Commissioner's Office) and of course the BBFC (the regulator).
The scheme appears to have very strict rules. It is a multi-disciplined scheme which includes penetration testing, full and detailed audits, and operational procedures over and above GDPR and the DPA 2018 (Data Protection Act). There are onerous reporting obligations with inspection rights attached. It is also a very costly scheme when compared to other quality standard schemes, again perhaps designed to deter the faint of heart or shallow of pocket.
Consumers will likely be advised against using any systems or methods where the prominent green AV accreditation kitemark symbol is not displayed.
But will the age verifier be logging your ID data and browsing history?
The answer is very hard to pin down from the document. At first read it suggests that minimal data will be retained, but a more sceptical read, connecting a few paragraphs together, suggests that the verifier will be required to keep extensive records about the user's porn activity. Maybe this is a reflection of a recent change of heart. Comments from AVSecure suggested that the BBFC/Government originally mandated a log of user activity but recently decided that keeping a log or not is down to the age verifier.
As an example of the rather evasive requirements:
8.5.9 Physical Location
Personal data relating to the physical location of a user shall not be collected as part of the age-verification process unless required for fraud prevention and detection. Personal data relating to the physical location of a user shall only be retained for as long as required for fraud prevention and detection.
Here it sounds like keeping tabs on location is optional, but another paragraph suggests otherwise:
8.4.14 Fraud Prevention and Detection
Real-time intelligent monitoring and fraud prevention and detection systems shall be used for age-verification checks completed by the age-verification provider.
Now it seems that fraud prevention is mandatory, and so a location record is mandatory after all. The use of the phrase only be retained for as long as required for fraud prevention and detection seems a little misleading too, as in reality fraud prevention will be required for as long as the customer keeps on using the service. This may as well be forever. There are other statements that sound good at first read, but don't really offer anything substantial:
8.5.6 Data Minimisation
Only the minimum amount of personal data required to verify a user's age shall be collected.
But if the minimum is to provide name and address plus, say, a driving licence number or a credit card number, then the minimum is actually pretty much all of it. In fact only the porn pass methods offer any scope for truly minimal data collection. Perhaps the minimal data also applies to the verified mobile phone method, as although the phone company probably knows your identity, maybe it won't need to pass it on to the age verifier.
What does the porn site get to know?
One rare unequivocal and reassuring statement is:
8.5.8 Sharing Results
Age-verification providers shall only share the result of an age-verification check (pass or fail) with the requesting website.
So it seems that identity details won't be passed to the websites themselves. However the converse is not so clear:
8.5.6 Data Minimisation
Information about the requesting website that the user has visited shall not be collected against the user's activity.
Why add the phrase against the user's activity? This is worded such that information about the requesting website could indeed be collected for another reason, fraud detection maybe. Maybe the scope for an age verifier to maintain a complete log of porn viewing is limited more by the practical requirement for a website to record a successful age verification in a cookie, such that the age verifier only gets to see one interaction with each website. No doubt we shall soon find out whether the government wants a detailed log of porn viewed, as it will be easy to spot if a website queries the age verifier for every film you watch.
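As an illustration of that cookie mechanism, here is a minimal sketch of how a site might record a successful age check in a signed session cookie, so the verifier sees one interaction per visitor rather than one per film. All names and URLs here are hypothetical, no real age verification provider's API is implied, and it assumes the Python Flask package:

from flask import Flask, redirect, session

app = Flask(__name__)
app.secret_key = "change-me"  # used to sign the session cookie

@app.route("/video/<name>")
def video(name):
    # Only the first visit is bounced to the (hypothetical) age verifier;
    # afterwards the signed cookie vouches for the whole session.
    if not session.get("age_verified"):
        return redirect("https://av-provider.example/check?return_to=/video/" + name)
    return "Serving " + name + " to a verified visitor"

@app.route("/av-callback")
def av_callback():
    # In reality the provider's signed pass/fail result would be validated here.
    session["age_verified"] = True
    return redirect("/")

if __name__ == "__main__":
    app.run()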
Fraud Detection
And what about all this reference to fraud detection? Presumably the BBFC/Government is a little worried that passwords and accounts will be shared by enterprising kids. But on the other hand it may make life tricky for those using shared devices, or perhaps those who suddenly move from London to New York in an instant, when in fact this is totally normal for someone using a VPN on a PC.
Wrap up
The BBFC/Government have moved on a long way from the early days, when the lawmakers created the law without any real protection for porn users and the BBFC first proposed that this could be rectified by asking porn companies to voluntarily follow 'best practice' in keeping people's data safe. A definite improvement now, but I think I will stick to my VPN.
It's good to see the internet community pull together to work around censorship via age verification
22nd April 2019

Thanks to Jon and Kath. 6th April 2019. See article from prolificlondon.co.uk . See also iwantfourplay.com
A TV channel, a porn producer, an age verifier and maybe even the government got together this week to put out a live test of age verification. The test was implemented on a specially created website featuring a single porn video. The test required a well advertised website to provide enough traffic of viewers positively wanting to see the content. Channel 4 obliged with its series Mums Make Porn. The series followed a group of mums making a porn video that they felt would be more sex positive and less harmful to kids than the typical porn offerings currently on offer. The mums did a good job and produced a decent video with a more loving and respectful interplay than is the norm. The video however is still proper hardcore porn and there is no way it could be broadcast on Channel 4. So the film was made available, free of charge, on its own dedicated website complete with an age verification requirement.
The website was announced as a live test for AgeChecked software, to see how age verification would pan out in practice. It featured the following options for age verification:
- entering full credit card details + email
- entering driving licence number + name and address + email
- mobile phone number + email (the phone must have been verified as 18+ by the service provider and must be ready to receive an SMS message containing login details)
Nothing has been published in detail about the aims of the test but presumably they were interested in the basic questions such as:
- What proportion of potential viewers will be put off by the age verification?
- What proportion of viewers would be stupid enough to enter their personal data?
- Which options of identification would be preferred by viewers?
The official test 'results'
Alastair Graham, CEO of AgeChecked, provided a few early answers, inevitably claiming that:
The results of this first mainstream test of our software were hugely encouraging.
He went on to claim that customers are willing to participate in the process, but noted that the verified phone number method emerged as by far the most popular method of verification. He said that this finding would be a key part of this process moving forward. Reading between the lines, perhaps he was saying that there wasn't much appetite for handing over the detailed personal identification data required by the other two methods. I suspect that we will never get to hear more from AgeChecked, especially about any reluctance of people to identify themselves as porn viewers.
The unofficial test results
Maybe they were also interested in other questions too:
- Will people try and work around the age verification requirements?
- If people find weaknesses in the age verification defences, will they pass on their discoveries to others?
Interestingly the age verification requirement was easily sidestepped by those with a modicum of knowledge about downloading videos from websites such as YouTube and PornHub. The age verification mechanism effectively only hid the start button from view. The actual video remained available for download, whether people age verified or not. All it took was a little examination of the page code to locate the video. There are several tools that allow this: video downloader addons, file downloaders or just using the browser's built in debugger to look at the page code. Presumably the code for the page was knocked up quickly, so this flaw could have been a simple oversight that is not likely to occur in properly constructed commercial websites.
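As an illustration of how little protection hiding the start button offers, something along these lines is all it takes (a sketch only: the page URL is hypothetical, and it assumes Python's requests package and a video address exposed in a plain src attribute):

import re
import requests

# Fetch the page exactly as a browser would; the age verification overlay
# only hides the player, it does not protect the underlying file.
page = requests.get("https://example.com/hidden-video-page").text  # hypothetical URL

# A <video> tag or its <source> child typically carries the file location
match = re.search(r'<(?:video|source)[^>]+src="([^"]+)"', page)
if match:
    print("Video file, age verified or not:", match.group(1))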
Or perhaps the vulnerability was deliberately included as part of the test to see if people would pick up on it. However it did identify that there is a community of people willing to stress test age verification restrictions and see if workarounds can be found and shared. I noted on Twitter that several people had posted about the ease of downloading the video and had suggested a number of tools or methods that enabled this.
There was also an interesting article posted on achieving age verification using an expired credit card. Maybe that is not so catastrophic, as it still identifies a cardholder as over 18, even if the card cannot be used to make a payment. But of course it may open new possibilities for misuse of old data. Note that random numbers are unlikely to work because of security algorithms. Presumably age verification companies could strengthen the security by testing that a small transaction works, but intuitively this would have significant cost implications. I guess that to achieve any level of take up, age verification needs to be cheap for both websites and viewers.
Community Spirit
It was very heartening to see how many people were helpfully contributing their thoughts about testing the age verification software. Over the course of a couple of hours reading, I learnt an awful lot about how websites hide and protect video content, and what tools are available to see through the protection. I suspect that many others will soon be doing the same... and I also suspect that young minds will be far more adept than I at picking up such knowledge.
A final thought
I feel a bit sorry for small websites who sell content. It adds a whole new level of complexity, as a currently open preview area now needs to be locked away behind an age verification screen. Many potential customers will be put off by having to jump through hoops just to see the preview material. To then ask them to enter all their credit card details again to subscribe may be a hurdle too far.
Update: The Guardian reports that age verification was easily circumvented
22nd April 2019. See article from theguardian.com
The Guardian reported that the credit card check used by AgeChecked could be easily fooled by generating a totally false credit card number. A random number will generally not work, as there is a well known checksum algorithm which invalidates most random numbers. But anyone who knows or looks up the algorithm would be able to generate acceptable credit card numbers that would at least defeat AgeChecked.
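The checksum in question is the well known Luhn algorithm, which every genuine card number satisfies. A minimal sketch of the check that a randomly typed number will almost always fail (Python):

def luhn_valid(number: str) -> bool:
    # Return True if the digit string passes the Luhn checksum.
    digits = [int(d) for d in number][::-1]  # work from the rightmost digit
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:   # double every second digit
            d *= 2
            if d > 9:
                d -= 9   # same as summing the two digits of d
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True: a well-known test card number
print(luhn_valid("4111111111111112"))  # False: a random-looking change breaks it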
Or it would have done, had AgeChecked not now removed the credit card check entirely from its choice of options. Still, the damage was done: the widely distributed Guardian article has established doubts about the age verification process. Of course the workaround is not exactly trivial and will still stop younger kids from 'stumbling on porn', which seems to be the main fall back position of this entire sorry scheme.
David Flint looks into flimsy porn evidence used to justify government censorship
22nd April 2019

See article from reprobatepress.com
22nd April 2019
John Carr, a leading supporter of the government's porn censorship regime, is a little exasperated by its negative reception in the media See
article from johnc1912.wordpress.com

22nd April 2019
A long list of mainly US news websites that are censored to readers in the EU due to GDPR See article from data.verifiedjoseph.com
The Austrian government introduces a bill requiring large websites to obtain the real identity of users
21st April 2019

See article from engadget.com
It's not only China and the UK that want to identify internet users; Austria also wants to demand that forum contributors submit their ID before being able to post. Austria's government has introduced a bill that would require larger social media websites and forums to obtain the identity of their users before they are able to post comments. Users will have to provide their name and address to websites, but nicknames are still allowed and the identity data will not be made public. Fines for non-complying websites will be up to 500,000 euros, and double that for repeat offences.
It would only affect sites that have more than 100,000 registered users, bring in revenues above 500,000 euros per year, or receive press subsidies larger than 50,000 euros. There would also be exemptions for retail sites as well as those that don't earn money from either ads or the content itself. If passed and cleared by the EU, the law would take effect in 2020.
The immediate issues noted are that some of the websites most offending the sensitivities of the government are often smaller than the trigger conditions. The law may also step on the toes of the EU in rules governing which EU state has regulatory control over websites.
Update: Identity data will be available to other users
17th May 2019. See article from edri.org
The law on care and responsibility on the net forces media platforms with forums to store detailed data about their users in order to deliver it, in case of a possible offence, not only to police authorities but also to other users who want to legally prosecute another forum user. Looking at the law in detail, it is obvious that it contains so many problematic passages that its intended purpose is completely undermined. According to the Minister of Media, Gernot Blümel, harmless software will deal with the personal data processing. One of the risks of such a system would be the potential for abuse by public authorities or individuals requesting from a platform provider a person's name and address with the excuse of wanting to investigate or sue them, and then using the information for entirely other purposes.
21st April 2019
Politics, privacy and porn: the challenges of age-verification technology. By Ray Allison See article from computerweekly.com
Facebook, Google and co seem to be pooling their resources to create a shared database of images, files, URLs and website links that should be blocked from being uploaded by users
20th April 2019

See article from wired.com
In the aftermath of the horrific mosque attack in New Zealand, internet companies were interrogated over their efforts to censor the livestream video of Brenton Tarrant's propaganda. Some of their responses have included
ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet. Facebook, for example, describes plans for an expanded role for the Global Internet Forum to Counter
Terrorism, or GIFCT. The GIFCT is an industry-led self-regulatory effort launched in 2017 by Facebook, Microsoft, Twitter, and YouTube. One of its flagship projects is a shared database of hashes of files identified by the participating companies to be
extreme and egregious terrorist content. The hash database allows participating companies (which include giants like YouTube and one-man operations like JustPasteIt) to automatically identify when a user is trying to upload content already in the
database.
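In outline the mechanism is straightforward, as in the sketch below, though the real GIFCT system uses proprietary perceptual hashes (so near-duplicate images and videos also match) rather than the plain cryptographic hash shown here, and the shared database itself is not public:

import hashlib

# A shared blocklist of hash values, as distributed to participating companies
# (an illustrative entry only: the SHA-256 of the bytes b"test")
shared_hash_database = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def blocked_at_upload(file_bytes: bytes) -> bool:
    # Hash the upload and refuse it if the hash is already in the shared database.
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in shared_hash_database

print(blocked_at_upload(b"test"))   # True: matches the example entry
print(blocked_at_upload(b"other"))  # False: unknown content passes through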
In Facebook's post-Christchurch updates, the company discloses that it added 800 new hashes to the database, all related to the Christchurch video. It also mentions that the GIFCT is experimenting with sharing URLs systematically rather than just content hashes -- that is, creating a centralized list of URLs that would facilitate widespread blocking of videos, accounts, and potentially entire websites or forums.
See the full article from wired.com
VPNCompare reports a significant increase in website visitors in response to upcoming porn censorship. Meanwhile the age verification options announced so far for major websites seem to be apps only
20th April 2019

See article from vpncompare.co.uk
VPNCompare is reporting that internet users in Britain are responding to the upcoming porn censorship regime by investigating the option of getting a VPN, so as to work around most age verification requirements without handing over dangerous identity details. VPNCompare says that the number of UK visitors to its website has increased by 55% since the start date of the censorship scheme was announced. The website also stated that Google searches for VPNs had tripled. Website editor Christopher Seward told the Independent:
We saw a 55 per cent increase in UK visitors alone compared to the same period the previous day. As the start date for the new regime draws closer, we can expect this number to rise even further and the number of VPN users in the UK is likely to go through the roof. The UK Government has completely failed to consider the fact that VPNs can be easily used to get around blocks such as these.
Whilst the immediate assumption is that porn viewers will reach for a VPN to avoid handing over dangerous identity information, there may be another reason to take out a VPN: a lack of choice of appropriate options for age validation.
Three companies run the six biggest adult websites. Mindgeek owns Pornhub, RedTube and Youporn; then there is Xhamster; and finally Xvideos and xnxx are connected. Now Mindgeek has announced that it will partner with Portes Card for age verification, which has options for identity verification, giving an age verified mobile phone number, or else buying a voucher in a shop and showing age ID to the shop keeper (which is hopefully not copied or recorded). Meanwhile Xhamster has announced that it is partnering with 1Account, which accepts a verified mobile phone, credit card, debit card, or UK drivers licence. It does not seem to have an option for anonymous verification beyond a phone being age verified without having to show ID.
Perhaps most interesting is that both of these age verifiers are smart phone based apps. Perhaps the only option for people without a phone is to get a VPN. I also spotted that most age verification providers that I have looked at seem to be only interested in UK cards, drivers licences or passports. I'd have thought there may be legal issues in not accepting EU equivalents. But foreigners may also be in the situation of not being able to age verify and so need a VPN. And of course given that there is no age verification option common to the major porn websites, it may just turn out to be an awful lot simpler to get a VPN.
20th April 2019
An interesting look at the government's Online Harms white paper proposing extensive internet censorship for the UK See article from cyberleagle.com
Is your identity data and porn browsing history safe with an age verification service sporting a green BBFC AV badge?...Err...No!...
19th April 2019

See article from ageverificationregulator.com
The Interrogator: Is it safe?
The BBFC (on its Age Verification website)...err...no!...: An assessment and accreditation under the AVC is not a
guarantee that the age-verification provider and its solution (including its third party companies) comply with the relevant legislation and standards, or that all data is safe from malicious or criminal interference. Accordingly
the BBFC shall not be responsible for any losses, damages, liabilities or claims of whatever nature, direct or indirect, suffered by any age-verification provider, pornography services or consumers/ users of age-verification provider's services or
pornography services or any other person as a result of their reliance on the fact that an age-verification provider has been assessed under the scheme and has obtained an Age-verification Certificate or otherwise in connection with the scheme.
Facebook bans several UK far right groups
19th April 2019

See article from dailymail.co.uk
Facebook has banned far-right groups including the British National Party (BNP) and the English Defence League (EDL) from having any presence on the social network. The banned groups, which also include Knights Templar International, Britain First and the National Front, as well as key members of their leadership, have been removed from both Facebook and Instagram. Facebook said it uses an extensive process to determine which people or groups it designates as dangerous, using signals such as whether they have used hate speech, and called for or directly carried out acts of violence against others based on factors such as race, ethnicity or national origin.
Offsite comment: How to fight the new fascism
19th April 2019. See article from spiked-online.com by Andrew Doyle
This week we have seen David Lammy doubling down on his ludicrous comparison of the European Research Group with the Nazi party, and Chris Key in the Independent calling for UKIP and the newly formed Brexit Party to be banned from television debates. It is clear that neither Key nor Lammy has a secure understanding of what far right actually means and, quite apart from the distasteful nature of such political opportunism, their strategy only serves to generate the kind of resentment upon which the far right depends.
Offsite comment: Facebook is Calling for Centralized Censorship. That Should Scare You
19th April 2019. See article from wired.com by Emma Llansó
If we're going to have coherent discussions about the future of our information environment, we -- the public, policymakers, the media, website operators -- need to understand the technical realities and policy dynamics that shaped the response to the Christchurch massacre. But some of these responses have also included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.
19th April 2019
By Julia Reda, the heroic MEP who fought against this disgraceful censorship law. See article from juliareda.eu

19th April 2019
The Online Censorship Machine Is Revving Up: Here Are a Few (Guitar) Lessons Learned. By Dylan Gilbert See
article from publicknowledge.org
European Parliament removes requirement for internet companies to pre-censor user posts for terrorist content but approves a one hour deadline for content removal when asked by national authorities
18th April 2019

See article from bbc.com
The European Parliament has approved a draft version of a new EU internet censorship law targeting terrorist content. In particular the MEPs approved the imposition of a one-hour deadline to remove content marked for censorship by various national organisations. However the MEPs did not approve a key section of the law requiring internet companies to pre-process and censor terrorist content prior to upload. A European Commission official told the BBC changes made to the text by parliament made the law ineffective. The Commission will now try to restore the pre-censorship requirement with the new parliament when it is elected. The law would affect social media platforms including Facebook, Twitter and YouTube, which could face fines of up to 4% of their annual global turnover.
What does the law say?
In amendments, the European Parliament said websites would not be forced to monitor the information they transmit or store, nor have to actively seek facts indicating illegal activity. It said the competent authority should give the website information on the procedures and deadlines 12 hours before the agreed one-hour deadline the first time an order is issued. In February, German MEP Julia Reda of the European Pirate Party said the legislation risked the surrender of our fundamental freedoms [and] undermines our liberal democracy. Ms Reda welcomed the changes brought by the European Parliament but said the one-hour deadline was unworkable for platforms run by individuals or small providers.
Privacy International write to Jeff Bezos of Amazon about the revelation that employees are listening in on Echo conversations
18th April 2019

See article from privacyinternational.org
Last week, an investigation by Bloomberg revealed that thousands of Amazon employees around the world are listening in on Amazon Echo users.
As we have been explaining across media, we believe that by using default settings and vague privacy policies which allow Amazon employees to listen in on the recordings of users' interactions with their devices, Amazon risks deliberately deceiving its customers. Amazon has so far been dismissive, arguing that people had the option to opt out from the sharing of their recordings -- although it is unclear how their customers could have done so if they were not aware this was going on in the first place. Even those who had read the privacy policy would have had a hard time interpreting "We use your requests to Alexa to train our speech recognition and natural language understanding systems" to mean that thousands of employees are each listening to up to a thousand recordings per day, and sharing recordings they find "amusing" with one another.
As a result, today we wrote to Jeff Bezos to let him know we think Amazon needs to step up and do a lot better to protect the privacy of their customers. If you use an Amazon Echo device and are concerned about this, read our instructions on how to opt out here.
Dear Mr. Bezos,
We are writing to call for your urgent action regarding last week's report [1] in Bloomberg, which revealed that Amazon has been employing thousands of workers to listen in on the recordings of Amazon Echo users.
Privacy International (PI) is a registered charity based in London that works at the intersection of modern technologies and rights. Privacy International challenges overreaching state and corporate surveillance, so that people
everywhere can have greater security and freedom through greater personal privacy. The Bloomberg investigation asserts that Amazon employs thousands of staff around the world to listen to voice recordings captured by the Amazon
Alexa. Among other examples, the report states that your employees use internal chat rooms to share files when they "come across an amusing recording", and that they share "distressing" recordings -- including one of a sexual assault.
Currently, your privacy policy states: "We use your requests to Alexa to train our speech recognition and natural language understanding systems." We are concerned that your customers could not reasonably assume from
such a statement that recordings of their interactions with the Amazon Echo could, by default, be listened to by your employees. An ambiguous privacy policy and default settings that allow your employees to access recordings of
all interactions is not our idea of consent. Instead, we believe the default settings should be there to protect your users' privacy. Millions of customers enjoy your product and they deserve better from you. As such, we ask
whether you will:
- Notify all users whose recordings have been accessed, and describe to them which recordings;
- Notify all users whenever their recordings are accessed in the future, and describe to them which recordings;
- Modify the settings of the Amazon Echo so that "Help Develop New Features" and "Use Messages to Improve Transcriptions" are turned off by default;
- Clarify your privacy policy so that it is clear to users that employees are listening to the recordings when the "Help Develop New Features" and "Use Messages to Improve Transcriptions" settings are on.
In your response to the Bloomberg investigation, you state you take the privacy of your customer seriously. It is now time for you to step up and walk the walk. We look forward to engaging with you further on this.
Sincerely yours, Eva Blum-Dumontet
Reddit bans adult advertising
18th April 2019

See article from avn.com
Reddit is a social media website that boasts 234 million members and approximately 8 billion page views per month. Reddit's system is naturally built to highlight online influencers; all posts are automatically submitted to a voting process: the most up-voted messages receive the most visibility. The site has a very passionate following, and advertising on Reddit can be very successful. Companies are able to promote top posts to a very targeted audience, directly on its front page.
On Tuesday, Reddit posted an update to its Not Suitable for Work Advertising Policy. From now on, the platform doesn't allow any adult-oriented ads and targeting. Promoted posts pushing adult products or services are no longer permissible, and NSFW subreddits will no longer be eligible for ads or targeting. The new policy specifically targets pornographic and sexually explicit content as well as adult sexual recreational content, products and services.
18th April 2019
But it will spell the end of ethical porn. By Girl on the Net See article from theguardian.com
The government announces that its internet porn censorship scheme will come into force on 15th July 2019
17th April 2019

See press release from gov.uk
The UK will become the first country in the world to bring in age-verification for online pornography when the measures come into force on 15 July 2019. It means that commercial providers of online pornography will be required by law to carry out robust age-verification checks on users, to ensure that they are 18 or over. Websites that fail to implement age-verification technology face having payment services withdrawn or being blocked for UK users.
The British Board of Film Classification (BBFC) will be responsible for ensuring compliance with the new laws. They have confirmed that they will begin enforcement on 15 July, following an implementation period to allow websites time to comply with the new standards. Minister for Digital Margot James said that she wanted the UK to be the most censored place in the world to be online:
Adult content is currently far too easy for children to access online. The introduction of mandatory age-verification is a world-first, and we've taken the time to balance privacy concerns with the need to protect children from inappropriate content. We want the UK to be the safest place in the world to be online, and these new laws will help us achieve this.
Government has listened carefully to privacy concerns and is clear that age-verification arrangements should only be concerned with verifying age, not identity. In addition to the requirement for all age-verification providers to comply with General Data Protection Regulation (GDPR) standards, the BBFC have created a voluntary certification scheme, the Age-verification Certificate (AVC), which will assess the data security standards of AV providers. The AVC has been developed in cooperation with industry, with input from government. Certified age-verification solutions which offer these robust data protection conditions will be certified following an independent assessment and will carry the BBFC's new green 'AV' symbol. Details will also be published on the BBFC's age-verification website, ageverificationregulator.com, so consumers can make an informed choice between age-verification providers.
BBFC Chief Executive David Austin said:
The introduction of age-verification to restrict access to commercial pornographic websites to adults is a ground breaking child protection measure. Age-verification will help prevent children from accessing pornographic content online and means the UK is leading the way in internet safety. On entry into force, consumers will be able to identify that an age-verification provider has met rigorous security and data checks if they carry the BBFC's new green 'AV' symbol.
The change in law is part of the Government's commitment to making the UK the safest place in the world to be online, especially for children. It follows last week's publication of the Online Harms White Paper, which set out clear responsibilities for tech companies to keep UK citizens safe online, how these responsibilities should be met and what would happen if they are not.
When spouting on about keeping porn users' data safe, the DCMS proves that it simply can't be trusted by revealing journalists' private emails
17th April 2019

See article from bbc.com

Believe us, we can cure all society's ills
A government department responsible for data protection laws has shared the private contact details of hundreds of journalists. The Department for Censorship, Media and Sport emailed more than 300 recipients in a way that allowed their addresses to be seen by other people. The email - seen by the BBC - contained a press release about age verification for adult websites. Digital Minister Margot James said the incident was embarrassing. She added:
It was an error and we're evaluating at the moment whether that was a breach of data protection law.
In the email sent on Wednesday, the department claimed new rules would offer robust data protection conditions, adding: Government has listened carefully to privacy concerns.
Responding to the large amount of aggressive tweeting, founder Jack Dorsey says that the number of likes will soon be downgraded
17th April 2019

See article from bbc.com
Twitter co-founder Jack Dorsey has said again that there is much work to do to improve Twitter and cut down on the amount of abuse and misinformation on the platform. He said the firm might demote likes and follows, adding that in hindsight he would not have designed the platform to highlight these. Speaking at the TED technology conference, he said that Twitter currently incentivises people to post outrage. Instead, he said, it should invite people to unite around topics and communities. Rather than focus on following individual accounts, users could be encouraged to follow hashtags, trends and communities. Doing so would require a systematic change that represented a huge shift for Twitter.
One of the choices we made was to make the number of people that follow you big and bold. If I started Twitter now I would not emphasise follows and I would not create likes. We have to look at how we display follows and likes, he added.
17th April 2019
Instead of regulating the internet to protect young people, give them a youth-net of their own. By Conor Friedersdorf See article
from theatlantic.com

17th April 2019
A German data protection organisation finds that Facebook does not obtain the required user consent for its Custom Audience service. See article from netzpolitik.org
Link taxes and censorship machines pass the final stage of European legislation
16th April 2019

See article from torrentfreak.com
The EU Council of Ministers has approved the Copyright Directive, which includes the link tax and censorship machines. The legislation was voted through by a majority of EU ministers despite noble opposition from Italy, Luxembourg, the Netherlands, Poland, Finland, and Sweden. As explained by Julia Reda MEP, a majority of 55% of Member States, representing 65% of the population, was required to adopt the legislation. That was easily achieved with 71.26% in favor, so the Copyright Directive will now pass into law.
Several countries voted against adoption, including Italy, Luxembourg, the Netherlands, Poland, Finland, and Sweden, while Belgium, Estonia, and Slovenia abstained. But that just wasn't enough: with both Germany and the UK voting in favor, the Copyright Directive is now adopted.
EU member states will now have two years to implement the law, which requires platforms like YouTube to sign licensing agreements with creators in order to use their content. If that is not possible, they will have to ensure that infringing content uploaded by users is taken down and not re-uploaded to their services. The entertainment lobby will not stop here; over the next two years, they will push for national implementations that ignore users' fundamental rights, comments Julia Reda: It will be more important than ever for civil society to keep up the pressure in the Member States!
ICO announces another swathe of internet censorship and age verification requirements in the name of 'protecting the children'
15th April 2019

See press release from ico.org.uk . See consultation details from ico.org.uk . See ICO censorship proposal and consultation document [pdf] from ico.org.uk
This is the biggest censorship event of the year. It is going to destroy the livelihoods of many. It is framed as if it were targeted at Facebook and the like, to sort out their abuse of user data, particularly for kids. However the kicker is that the regulations will equally apply to all UK accessed websites that earn at least some money and process user data in some way or other. Even small websites will then be required to default to treating all their readers as children, and only allow more meaningful interaction with them if they verify themselves as adults. The default kids-only mode bans likes, comments, suggestions, targeted advertising etc, even for non adult content. Furthermore the ICO expects websites to formally comply with the censorship rules using market researchers, lawyers, data protection officers, expert consultants, risk assessors and all the sort of people that cost a grand a day.
Of course only the biggest players will be able to afford the required level of red tape, and instead of hitting back at Facebook, Google, Amazon and co for misusing data, this will further add to their monopoly position, as they will be the only companies big enough to jump over the government's child protection hurdles. Another dark day for British internet users and businesses.
The ICO write in a press release:
Today we're setting out the standards expected of those responsible for designing, developing or providing online
services likely to be accessed by children, when they process their personal data. Parents worry about a lot of things. Are their children eating too much sugar, getting enough exercise or doing well at school? Are they happy?
In this digital age, they also worry about whether their children are protected online. You can log on to any news story, any day to see just how children are being affected by what they can access from the tiny computers in their
pockets. Last week the Government published its white paper covering online harms. Its proposals reflect people's growing mistrust of social media and online services. While we can all benefit from these
services, we are also increasingly questioning how much control we have over what we see and how our information is used. There has to be a balancing act: protecting people online while embracing the opportunities that digital
innovation brings. And when it comes to children, that's more important than ever. In an age when children learn how to use a tablet before they can ride a bike, making sure they have the freedom to play, learn and explore in the
digital world is of paramount importance. The answer is not to protect children from the digital world, but to protect them within it. So today we're setting out the standards expected of
those responsible for designing, developing or providing online services likely to be accessed by children, when they process their personal data.
Age appropriate design: a code of practice for online services has been published for
consultation. When finalised, it will be the first of its kind and set an international benchmark. It will leave online service providers in no doubt about what is expected of them when it comes to looking
after children's personal data. It will help create an open, transparent and protected place for children when they are online. Organisations should follow the code and demonstrate that their services use children's data fairly
and in compliance with data protection law. Those that don't could face enforcement action including a fine or an order to stop processing data. Introduced by the Data Protection Act 2018, the code sets out 16 standards of age
appropriate design for online services like apps, connected toys, social media platforms, online games, educational websites and streaming services, when they process children's personal data. It's not restricted to services specifically directed at
children. The code says that the best interests of the child should be a primary consideration when designing and developing online services. It says that privacy must be built in and not bolted on. Settings must be "high privacy" by default (unless there's a compelling reason not to); only the minimum amount of personal data should be collected and retained; children's data should not usually be shared; geolocation services should be switched off by default. Nudge techniques should not be used to encourage children to provide unnecessary personal data, weaken or turn off their privacy settings or keep on using the service. It also addresses issues of parental control and profiling.
The code is out for consultation until 31 May. We will draft a final version to be laid before Parliament and we expect it to come into effect before the end of the year. Our Code of Practice is a
significant step, but it's just part of the solution to online harms. We see our work as complementary to the current initiatives on online harms, and look forward to participating in discussions regarding the Government's white paper. The
proposals are now open for public consultation: The Information Commissioner is seeking feedback on her draft code of practice
Age appropriate design -- a code of practice for online services likely to be accessed by
children (the code). The code will provide guidance on the design standards that the Commissioner will expect providers of online 'Information Society Services' (ISS), which process personal data and are likely to be accessed by
children, to meet. The code is now out for public consultation and will remain open until 31 May 2019. The Information Commissioner welcomes feedback on the specific questions set out below. You can respond
to this consultation via our online survey, or you can download the document below and email it to ageappropriatedesign@ico.org.uk. Alternatively, print off the document and post to:
Age appropriate design code consultation Policy Engagement Department Information Commissioner's Office Wycliffe House Water Lane Wilmslow Cheshire SK9 5AF |
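To make the code's "high privacy by default" idea concrete, here is a minimal sketch of a settings model that treats every user as a child until age verification says otherwise. The field and method names are assumptions for illustration, not taken from the ICO document:

```python
# Hypothetical sketch of "high privacy by default" per the standards quoted
# above; field and method names are invented, not from the ICO code.
from dataclasses import dataclass

@dataclass
class UserSettings:
    age_verified_adult: bool = False    # default: treat the reader as a child
    geolocation_enabled: bool = False   # "switched off by default"
    profiling_enabled: bool = False     # no targeted advertising by default
    data_sharing_enabled: bool = False  # data "should not usually be shared"

    def relax_for_adult(self) -> None:
        """Only after robust age verification may the defaults be loosened."""
        if not self.age_verified_adult:
            raise PermissionError("age verification required")
        self.geolocation_enabled = True
        self.profiling_enabled = True
```

Note how this captures the article's complaint: everything interactive hangs off one flag that only flips after adult verification.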
|
Responses to the ICO internet censorship proposals
|
|
|
|
15th April 2019
|
|
| | Comment: Entangling start-ups in red tape See
article from adamsmith.org
Today the Information Commissioner's Office announced a consultation on a draft Code of Practice to help protect children online. The code forbids the creation of profiles on children, and bans data sharing and nudges of children.
Importantly, the code also requires everyone be treated like a child unless they undertake robust age-verification. The ASI believes that this code will entangle start-ups in red tape, and inevitably end up with everyone being
treated like children, or face undermining user privacy by requiring the collection of credit card details or passports for every user. Matthew Lesh, Head of Research at free market think tank the Adam Smith Institute, says:
This is an unelected quango introducing draconian limitations on the internet with the threat of massive fines. This code requires all of us to be treated like children. An
internet-wide age verification scheme, as required by the code, would seriously undermine user privacy. It would require the likes of Facebook, Google and thousands of other sites to repeatedly collect credit card and passport details from millions of
users. This data collection risks our personal information and online habits being tracked, hacked and exploited. There are many potential unintended consequences. The media could be forced to censor swathes of stories not
appropriate for young people. Websites that cannot afford to develop 'children-friendly' services could just block children. It could force start-ups to move to other countries that don't have such stringent laws. This plan would
seriously undermine the business model of online news and many other free services by making it difficult to target advertising to viewer interests. This would be both worse for users, who are less likely to get relevant advertisements, and journalism,
which is increasingly dependent on the revenues from targeted online advertising. The Government should take a step back. It is really up to parents to keep their children safe online.
Offsite Comment: Web shake-up could force ALL websites to treat us like children 15th April 2019. See
article from dailymail.co.uk
The information watchdog has been accused of infantilising web users, in a draconian new code designed to make the internet safer for children. Web firms will be forced to introduce strict new age checks on their websites -- or
treat all their users as if they are children, under proposals published by the Information Commissioner's Office today. The rules are so stringent that critics fear people could end up being forced to demonstrate their age for
virtually every website they visit, or have the services that they can access limited as if they are under 18.
|
|
The Government is already considering its next step for increased internet censorship
|
|
|
| 15th April 2019
|
|
| See article from
telegraph.co.uk |
The ink has not yet dried on two enormous packages of internet censorship and yet the Government is already planning the next. The Government is considering an overhaul of censorship rules for Netflix and Amazon Prime Video. The Daily Telegraph understands that the Department for Censorship, Media and Sport is looking at whether the censorship rules for on-demand video streaming sites should be extended to match those suffered by traditional broadcasters. Censorship Secretary Jeremy Wright had
signaled this could be a future focus for DCMS last month, saying rules for Netflix and Amazon Prime Video were not as robust as they were for other broadcasters. Public service broadcasters currently have set requirements to commission content
from within the UK. The BBC, for example, must ensure that UK-made shows make up a substantial proportion of its content, and around 50% of that content must come from outside the M25 area. No such rules, over specific UK-made content,
currently apply to Netflix and Amazon Prime Video, though. The European Union is currently finalising the details of rules for the bloc, which require streaming companies to ensure at least 30% of their libraries are dedicated to content made by
EU-member states. |
|
Age verification will become full identity verification for online gambling sites from 7th May
|
|
|
|
15th April 2019
|
|
| See article from eyesdownbingo.com See
details of implementation of identity verification requirements [pdf] from gamblingcommission.gov.uk |
Age verification for online gambling is set to evolve into full identity verification from 7th May 2019. The other big change is that all verification will have to be completed prior to any bets being placed. Previously age verification was required only
when people tried to withdraw their winnings. There were many complaints that gambling companies would then inflict onerous validation requirements to try and avoid paying out. I would hazard a guess that the new implementation will quash an awful lot of the TV and media adverts that try and get new members with a small joiners' bonus. Now it will be a lot more hassle to join, and maybe there will be less interest in trying out new websites just to get a free introductory bet. Here is an example
explanation of the new rules: see article from eyesdownbingo.com |
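As a rough illustration of the rule change, a minimal sketch with a toy account model (class and method names are invented, not any operator's real code): under the old regime the identity check in practice bit only at withdrawal, whereas from 7th May no bet can be placed until verification is complete.

```python
# Toy sketch of the 7th May rule change; all names here are hypothetical.
class GamblingAccount:
    def __init__(self):
        self.identity_verified = False
        self.balance = 0.0

    def place_bet(self, stake: float) -> None:
        # New regime: identity verification must be completed before any bet.
        if not self.identity_verified:
            raise PermissionError("complete identity verification first")
        self.balance -= stake

    def withdraw(self, amount: float) -> None:
        # Old regime: checks were in practice only enforced here, at payout,
        # which is what let operators stall winners with onerous demands.
        if not self.identity_verified:
            raise PermissionError("complete identity verification first")
        self.balance -= amount
```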
|
|
|
|
| 15th April
2019
|
|
|
An insightful and convincing view about 'filter bubbles', suggesting that presenting alternative views will do nothing to address the perceived problem. See
article from baekdal.com |
|
Tory MPs line up to criticise their own government's totalitarian-style internet censorship proposals
|
|
|
|
14th April 2019
|
|
| See
article from dailymail.co.uk |
Ministers are facing a growing and deserved backlash against draconian new web laws which will lead to totalitarian-style censorship. The stated aim of the Online Harms White Paper is to target offensive material such as terrorists' beheading
videos. But under the document's provisions, the UK internet censor would have complete discretion to decide what is harmful, hateful or bullying -- potentially including coverage of contentious issues such as transgender rights. After MPs lined
up to demand a rethink, Downing Street has put pressure on Culture Secretary Jeremy Wright to narrow the definition of harm in order to exclude typical editorial content. MPs have been led by Jacob Rees-Mogg, who said last night that while it was
obviously a worthwhile aim to rid the web of the evils of terrorist propaganda and child pornography, it should not be at the expense of crippling a free Press and gagging healthy public expression. He added that the regulator could be used as a tool of
repression by a future Jeremy Corbyn-led government, saying: Sadly, the Online Harms White Paper appears to give the Home Secretary of the day the power to decide the rules as to which content is considered palatable.
Who is to say that less scrupulous governments in the future would not abuse this new power? I fear this could have the unintended consequence of reputable newspaper websites being subjected to quasi-state control. British
newspapers' freedom to hold authority to account is an essential bulwark of our democracy. We must not now allow what amounts to a Leveson-style state-controlled regulator for the Press by the back door.
He was
backed by Charles Walker, vice-chairman of the Tory Party's powerful backbench 1922 Committee, who said: We need to protect people from the well-documented evils of the internet -- not in order to suppress views or
opinions to which they might object.
In last week's Mail on Sunday, former Culture Secretary John Whittingdale warned that the legislation was more usually associated with autocratic regimes including those in China, Russia or North
Korea. Tory MP Philip Davies joined the criticism last night, saying: Of course people need to be protected from the worst excesses of what takes place online. But equally, free speech in a free country is very,
very important too. It's vital we strike the right balance. While I have every confidence that Sajid Javid as Home Secretary would strike that balance, can I have the same confidence that a future Marxist government would not abuse the proposed new
powers?
And Tory MP Martin Vickers added: While we must take action to curb the unregulated wild west of the internet, we must not introduce state control of the Press as a result.
|
|
|
|
|
|
14th April 2019
|
|
|
Spare a thought for how age verification censorship affects British small adult businesses and sex workers. By Freya Pratty See
article from novaramedia.com |
|
|
|
|
|
14th April 2019
|
|
|
US copyright holders lobby domain registry overseer ICANN to end its temporary observance of the EU's GDPR privacy laws See
article from torrentfreak.com |
|
Julian Assange of Wikileaks has been arrested in London
|
|
|
| 14th April 2019
|
|
| See article from bbc.com |
Wikileaks is a whistle-blowing website that shone a light on how governments of the world have been running our lives. And it was not a pretty sight. Julian Assange, who ran Wikileaks, is surely a freedom of speech hero; however, he broke many serious state secrecy laws and has been evading the authorities via the diplomatic protection afforded to him by the Ecuadorean embassy in London. This has now been rescinded and Assange has been duly arrested. He is now in serious trouble and will surely end up being sent to the USA to answer the accusations. It is hard to see the prosecuting authorities being convinced by any argument that the ends justified the means. Maybe it's best to let the BBC report the current situation. See
article from bbc.com |
|
WebUser magazine kindly informs readers how to avoid being endangered by age verification
|
|
|
| 13th April 2019
|
|
| Spotted by @neil_neilzone from twitter.com |
The legislators behind the Digital Economy Act couldn't be bothered to include any provisions for websites and age verifiers to keep the identity and browsing history of porn users safe. It has now started to dawn on the authorities that this was a
mistake. They are currently implementing a voluntary kitemark scheme to try and assure users that porn websites' and age verifiers' claims of keeping data safe can be borne out. It is hardly surprising that significant numbers of people are likely to be interested in avoiding having to register their identity details before being able to access porn. It seems obvious that information about VPNs and Tor will therefore be readily circulated amongst any online community with an interest in keeping safe. But perhaps it is a little bit of a shock to see it in such large letters in a mainstream magazine on the shelves of supermarkets and newsagents. And perhaps another thought is that once the BBFC starts getting ISPs to block non-compliant websites then circumvention will be the only way to see your blocked favourite websites. So people stupidly signing up to age verification will have less access to porn and a worse service than those that circumvent it. |
|
The press and campaigners call out the Online Harms white paper for what it is...censorship
|
|
|
|
12th April 2019
|
|
| | Newspapers and the press have generally given the new internet censorship proposals a justifiably negative reception:
The Guardian
See Internet crackdown raises fears for free speech in Britain from theguardian.com
Critics of the government's flagship internet regulation policy are warning it could lead to a North Korean-style censorship regime, where regulators decide which websites Britons are allowed to visit, because of how broad
the proposals are.
The Daily Mail
See New internet regulation laws will lead to widespread censorship from dailymail.co.uk
Critics brand new internet regulation laws the most draconian crackdown in the Western democratic world as they warn it could threaten the freedom of speech of millions of Britons
The
Independent
See UK's new internet plans could bring state censorship of the
internet, campaigners warn from independent.co.uk The government's new proposals to try and protect people from harm on the internet could actually create a huge censorship operation, campaigners have warned.
Index on Censorship
See Online harms proposals pose serious risks to freedom of expression from
indexoncensorship.org Index on Censorship has raised strong concerns about the government's focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy
Green Paper in 2017. In October 2018, Index published a joint statement with Global Partners Digital and Open Rights Group noting that any proposals that regulate content are likely to have a significant impact on the enjoyment and exercise of human
rights online, particularly freedom of expression. We have also met with officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns.
With the publication of the Online Harms White Paper , we would like to reiterate our earlier points. While we recognise the government's desire to tackle unlawful content online, the proposals mooted in the
white paper -- including a new duty of care on social media platforms, a regulatory body, and even the fining and banning of social media platforms as a sanction -- pose serious risks to freedom of expression online. These risks
could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the European
Convention on Human Rights, amongst other international treaties. Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and
opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. The proposed responses for tackling online safety may lead to disproportionate amounts of legal speech being curtailed, undermining
the right to freedom of expression. In particular, we raise the following concerns related to the white paper:
The wide range of different harms which the government is seeking to tackle in this policy process require different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm
and the measures' likely effectiveness. The evidence which formed the base of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures should be supported by clear and unambiguous evidence of
their need and effectiveness.
Index is concerned at the use of a duty of care regulatory approach. Although social media has often been compared to the public square, the duty of care model is not an exact fit because this would introduce regulation -- and restriction -- of speech between individuals based on criteria that are far broader than current law. A failure to accurately define "harmful" content risks incorporating legal speech, including political expression, expressions of religious
views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.
While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance
or incentives to use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.
The obligation to protect users' rights online that is included in the white paper gives insufficient weight to freedom of expression. A much clearer obligation to protect freedom of expression should guide development of future
regulation. In recognition of the UK's commitment to the multistakeholder model of internet governance, we hope all relevant stakeholders, including civil society experts on digital rights and freedom of expression, will be fully
engaged throughout the development of the Online Harms bill.
Privacy International
See PI's take on the UK government's new proposal to tackle "online harms" from
privacyinternational.org PI welcomes the UK government's commitment to investigating and holding companies to account. When it comes to regulating the internet, however, we must move with care. Failure to do so
will introduce, rather than reduce, "online harms". A 12-week consultation on the proposals has also been launched today. PI plans to file a submission to the consultation as it relates to our work. Given the breadth of the proposals, PI calls
on others to respond to the consultation as well. Here are our initial suggestions:
proceed with care: proposals of regulation of content on digital media platforms should be very carefully evaluated, given the high risks of negative impacts on expression, privacy and other human rights. This is a very complex
challenge and we support the need for broad consultation before any legislation is put forward in this area.
assess carefully the delegation of sole responsibility to companies as adjudicators of content. This would empower corporate judgment over content, which would have implications for human rights, particularly freedom of expression
and privacy.
require that judicial or other independent authorities, rather than government agencies, are the final arbiters of decisions regarding what is posted online and enforce such decisions in a manner that is consistent with human
rights norms.
ensure that any requirement or expectation of deploying automated decision making/AI is in full compliance with existing human rights and data protection standards (which, for example, prohibit, with limited exceptions, relying on
solely automated decisions, including profiling, when they significantly affect individuals).
require companies to provide efficient reporting tools in multiple languages, to report on action taken with regard to content posted online. Reporting tools should be accessible, user-friendly, and easy to find. There should be
full transparency regarding the complaint and redress mechanisms available and opportunities for civil society to take action.
Offsite Comment: Ridiculous Plan 10th April 2019. See
article from techdirt.com
UK Now Proposes Ridiculous Plan To Fine Internet Companies For Vaguely Defined Harmful Content Last week Australia rushed through a ridiculous bill to fine internet companies if they happen to host any abhorrent content. It
appears the UK took one look at that nonsense and decided it wanted some too. On Monday it released a white paper calling for massive fines for internet companies for allowing any sort of online harms. To call the plan nonsense is being way too harsh to
nonsense. The plan would result in massive, widespread, totally unnecessary censorship solely for the sake of pretending to do something about the fact that some people sometimes do not so nice things online. And it will place all
of the blame on the internet companies for the (vaguely defined) not so nice things that those companies' users might do online. Read the full
article from techdirt.com
Offsite Comment: Sajid Javid's new internet rules will have a chilling effect on free speech
11th April 2019. See article from spectator.co.uk by Toby Young
How can the government prohibit comments that might cause harm without defining what harm is? Offsite Comment: Plain speaking from Chief Censor Sajid Javid 11th April 2019. See
tweet from twitter.com
Letter to the Guardian: Online Harms white paper would make Chinese censors proud 11th April 2019. See
article from theguardian.com
We agree with your characterisation of the online harms white paper as a flawed attempt to deal with serious problems (Regulating the internet demands clear thought about hard problems, Editorial, 9 April). However, we would draw your attention to
several fundamental problems with the proposal which could be disastrous if it proceeds in its current form. Firstly, the white paper proposes to regulate literally the entire internet, and censor anything non-compliant. This
extends to blogs, file services, hosting platforms, cloud computing; nothing is out of scope. Secondly, there are a number of undefined harms with no sense of scope or evidence thresholds to establish a need for action. The lawful
speech of millions of people would be monitored, regulated and censored. The result is an approach that would make China's state censors proud. It would be very likely to face legal challenge. It would give the UK the widest and
most prolific internet censorship in an apparently functional democracy. A fundamental rethink is needed. Antonia Byatt Director, English PEN, Silkie Carlo Big Brother Watch Thomas Hughes Executive director, Article 19 Jim Killock
Executive director, Open Rights Group Joy Hyvarinen Head of advocacy, Index on Censorship Comment: The DCMS Online Harms Strategy must design in fundamental rights 12th April 2019. See
article from openrightsgroup.org
Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain
online content and make Facebook in particular liable for the uglier and darker side of its user-generated material. DCMS talks a lot about the 'harm' that social media causes. But its proposals fail to explain how harm to free expression would be avoided. On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the
mechanisms to deliver this protection and the issues at play are not explored in any detail at all. In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such
measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about what content is pushed or downgraded are all geared towards ousting illegal activity and creating open and welcoming shared spaces.
DCMS hasn't in the White Paper elaborated on what its proposed duty would entail. If it's drawn narrowly so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it's drawn
widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship. If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread
prior restraint. Platforms can't always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping
upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.
DCMS's policy is underpinned by societally-positive intentions, but in its drive to make the internet "safe", the government seems not to recognise that ultimately its proposals don't regulate social media companies, they
regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.
Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.
The duty of care seems to be broadly about whether systemic interventions reduce overall "risk". But must the risk be always to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society
as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered. DCMS's approach appears to be that it will be up to the regulator to answer these
questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine
detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative
framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens' free speech, both in the immediate future and for years to come.
How the government decides to legislate and regulate in this instance will set a global norm. The UK government is clearly keen to lead international efforts to regulate online content. It
knows that if the outcome of the duty of care is to change the way social media platforms work that will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation
around how we as society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerge from this process as a blueprint for more widespread internet censorship.
The House of Lords report on the future of the internet, published in early March 2019, set out ten principles
it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia
and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere
expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack. The White Paper expresses a clear desire for tech companies to "design in
safety". As the process of consultation now begins, we call on DCMS to "design in fundamental rights". Freedom of expression is itself a framework, and must not be lightly glossed over. We welcome the opportunity to engage with DCMS
further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone. |
|
|
|
|
| 12th April 2019
|
|
|
A new internet technology will make it more difficult for ISPs to block websites See article
from ispreview.co.uk |
|
Culture of Censorship Secretary Jeremy Wright tells British people not to worry about the proposed end to their free speech because newspapers will still be allowed free speech
|
|
|
| 11th April
2019
|
|
| See article from dailymail.co.uk
|
The Daily Mail writes: Totalitarian-style new online code that could block websites and fine them £20million for harmful content will not limit press freedom, Culture Secretary promises. Government proposals have sparked fears that they could backfire and turn Britain into the first Western nation to adopt the kind of censorship usually associated with totalitarian regimes.
Former culture secretary John Whittingdale drew parallels with China, Russia and North Korea. Matthew Lesh of the Adam Smith Institute, a free market think-tank, branded the white paper a historic attack on freedom of speech.
[However] draconian laws designed to tame the web giants will not limit press freedom, the Culture Secretary said yesterday. In a letter to the Society of Editors, Jeremy Wright vowed that journalistic or
editorial content would not be affected by the proposals. And he reassured free speech advocates by saying there would be safeguards to protect the role of the Press. But as for safeguarding the free speech rights of ordinary British internet users, he more or less told them they could fuck off!
|
|
EU Agencies Falsely Report More Than 550 Archive.org URLs as Terrorist Content
|
|
|
| 11th April 2019
|
|
| See article from
blog.archive.org |
The European Parliament is set to vote on legislation that would require websites that host user-generated content to take down material reported as terrorist content within one hour. We have some examples of current notices sent to the Internet Archive
that we think illustrate very well why this requirement would be harmful to the free sharing of information and freedom of speech that the European Union pledges to safeguard. In the past week, the Internet Archive has received a
series of email notices from Europol's European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as terrorist propaganda. At least one of these mistaken URLs was also identified as terrorist content in a separate
take down notice from the French government's L'Office Central de Lutte contre la Criminalité liée aux Technologies de l'Information et de la Communication (OCLCTIC). The Internet Archive has a few staff members that process takedown notices from law enforcement who operate in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night, between midnight and 3am Pacific, and all of the reports were sent outside of the business hours of the Internet Archive. The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review
them after the fact. It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU's lists include some of the most visited pages on archive.org and materials
that obviously have high scholarly and research value. See a summary below with specific examples. See example falsely reported URLs at
article from blog.archive.org
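The Archive's point about the one-hour deadline can be put in rough operational terms. A minimal sketch, assuming a hypothetical referral handler (nothing here is the Archive's actual tooling): with reports arriving outside staffed hours, the only way to comply is to remove first and review afterwards.

```python
# Hypothetical sketch: what a 60-minute takedown deadline forces on a site
# with no round-the-clock review staff. None of this is the Archive's code.
review_queue: list[str] = []

def take_down(url: str) -> None:
    print(f"removed pending review: {url}")

def handle_referral(url: str) -> None:
    """Remove first, review later: no human can vet a 3am report in time."""
    take_down(url)            # automatic removal beats the one-hour clock
    review_queue.append(url)  # staff re-check during working hours and
                              # restore anything that was falsely reported

handle_referral("https://archive.org/details/some-reported-item")  # made-up URL
```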
|
|
Zippyshare blocks itself from UK access
|
|
|
| 11th
April 2019
|
|
| Thanks to Alan See article from torrentfreak.com
|
Zippyshare is a long running data locker and file sharing platform that is well known particularly for the distribution of porn. Last month UK users noted that they had been blocked from accessing the website and that it can now only be accessed via a VPN. Zippyshare themselves have made no comment about the block, but TorrentFreak have investigated the censorship and have determined that the block is self-imposed and is not down to action by UK courts or ISPs. Alan wonders if this is a premature reaction to the Great British Firewall, noting it's quite a popular platform for free porn. Of course it poses the interesting question that if websites generally decide to address the issue of UK porn censorship by self-imposed blocks, then keen users will simply have to get themselves VPNs. Being willing to sign up for age verification simply won't work. Perhaps VPNs will be next to mandatory for British porn users, and age verification will become an unused technology.
|
|
Facebook agrees to make it clear to users that the company makes its living by profiling users for advertising purposes
|
|
|
| 10th April 2019
|
|
| See European Commission press release from europa.eu |
Facebook changes its terms and clarifies its use of data for consumers following discussions with the European Commission and consumer authorities The European Commission and consumer protection authorities have welcomed Facebook's updated terms and services. They now clearly explain how the company uses its users' data to develop profiling activities and target advertising to finance itself. The new terms detail what services Facebook sells to third parties based on the use of their users' data, how consumers can close their accounts and for what reasons accounts can be disabled. These developments come after exchanges which aimed at obtaining full disclosure of Facebook's business model
in comprehensive and plain language to users. Vera Jourova, Commissioner for Justice, Consumers and Gender Equality, welcomed the agreement: Today Facebook finally shows commitment to more transparency and straightforward language in its terms of use. A company that wants to restore consumers' trust after the Facebook/Cambridge Analytica scandal should not hide behind complicated, legalistic jargon on how it is making billions on people's data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission stand up for the rights of EU consumers.
In the aftermath of the Cambridge Analytica scandal and as a follow-up to the investigation on social media platforms in 2018 , the European Commission and national consumer protection authorities requested Facebook to clearly inform
consumers how the social network gets financed and what revenues are derived from the use of consumer data. They also requested the platform to bring the rest of its terms of service in line with EU Consumer Law. As a result,
Facebook will introduce new text in its Terms and Services explaining that it does not charge users for its services in return for users' agreement to share their data and to be exposed to commercial advertisements. Facebook's terms will now clearly
explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users. In addition, following the enforcement action, Facebook has also amended:
its policy on limitation of liability and now acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties; its power to unilaterally change
terms and conditions by limiting it to cases where the changes are reasonable also taking into account the interest of the consumer; the rules concerning the temporary retention of content which has been deleted by consumers.
Such content can only be retained in specific cases, for instance to comply with an enforcement request by an authority, and for a maximum of 90 days in case of technical reasons; the language clarifying the right of users to appeal when their content has been removed.
Facebook will complete the implementation of all commitments at the latest by the end of June 2019.
|
|
Government introduces an enormous package of internet censorship proposals
|
|
|
| 8th April 2019
|
|
| See press release from gov.uk See
Online Harms White Paper [pdf] from assets.publishing.service.gov.uk See
also 12 week consultation on the censorship proposals |
The Government writes: In the first online safety laws of their kind, social media companies and tech firms will be legally required to protect their users and face tough penalties if they do not comply. As
part of the Online Harms White Paper, a joint proposal from the Department for Digital, Culture, Media and Sport and Home Office, a new independent regulator will be introduced to ensure companies meet their responsibilities. This
will include a mandatory 'duty of care', which will require companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services. The regulator will have effective enforcement tools, and we are consulting
on powers to issue substantial fines, block access to sites and potentially to impose liability on individual members of senior management. A range of harms will be tackled as part of the
Online Harms White Paper , including inciting violence and violent content, encouraging suicide, disinformation, cyber bullying and
children accessing inappropriate material. There will be stringent requirements for companies to take even tougher action to ensure they tackle terrorist and child sexual exploitation and abuse content. The
new proposed laws will apply to any company that allows users to share or discover user generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file hosting
sites, public discussion forums, messaging services, and search engines. A regulator will be appointed to enforce the new framework. The Government is now consulting on whether the regulator should be a new or existing body. The
regulator will be funded by industry in the medium term, and the Government is exploring options such as an industry levy to put it on a sustainable footing. A
12 week consultation on the proposals has also been launched today. Once this concludes we will then set out the action we will take in
developing our final proposals for legislation. Tough new measures set out in the White Paper include:
A new statutory 'duty of care' to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services. Further stringent requirements
on tech companies to ensure child abuse and terrorist content is not disseminated online. Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful
content on their platforms and what they are doing to address this. Making companies respond to users' complaints, and act to address them quickly. Codes of practice, issued by the regulator,
which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods. A new "Safety by Design"
framework to help companies incorporate online safety features in new apps and platforms from the start. A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and
malicious behaviours online, including catfishing, grooming and extremism.
The UK remains committed to a free, open and secure Internet. The regulator will have a legal duty to pay due regard to innovation, and to protect users' rights online, being particularly mindful to not infringe privacy and freedom of
expression. Recognising that the Internet can be a tremendous force for good, and that technology will be an integral part of any solution, the new plans have been designed to promote a culture of continuous improvement among
companies. The new regime will ensure that online firms are incentivised to develop and share new technological solutions, like Google's "Family Link" and Apple's Screen Time app, rather than just complying with minimum requirements. Government
has balanced the clear need for tough regulation with its ambition for the UK to be the best place in the world to start and grow a digital business, and the new regulatory framework will provide strong protection for our citizens while driving
innovation by not placing an impossible burden on smaller companies. |
|
|
|
|
| 8th April 2019
|
|
|
Sex, Lies And The Battle To Control Britain's Internet. By David Flint See article from
reprobatepress.com |
|
A report suggesting that government has (reluctantly) relaxed its requirements for internet porn age verifiers to keep a detailed log of people's porn access
|
|
|
| 5th April 2019
|
|
| See article from
techdirt.com |
In an interesting article on the Government age verification and internet porn censorship scheme, technology website Techdirt reports on the ever-slipping deadlines. Seemingly with detailed knowledge of government requirements for the scheme, Tim Cushing explains that up until recently the government had demanded that age verification companies retain a site log, presumably recording people's porn viewing history. He writes: The government refreshed its porn
blockade late last year, softening a few mandates into suggestions. But the newly-crafted suggestions were backed by the implicit threat of heavier regulation. All the while, the government has ignored the hundreds of critics and experts who have pointed
out the filtering plan's numerous problems -- not the least of which is a government-mandated collection of blackmail fodder. The government is no longer demanding retention of site logs by sites performing age verification, but
it's also not telling companies they shouldn't retain the data. Companies likely will retain this data anyway, if only to ensure they have it on hand when the government inevitably changes it mind.
Cushing concludes with a comment
perhaps suggesting that the Government wants a far more invasive snooping regime than commercial operators are able or willing to provide. He notes: April 1st will come and go with no porn filter. The next best guess is around Easter (April 21st). But I'd wager that date comes and goes as well with zero new porn filters. The UK government only knows what it wants. It has no idea how to get it.
And it seems that some age verification
companies are getting wound up by negative internet and press coverage of the dangers inherent in their services. @glynmoody tweeted: I see age verification companies that will create the biggest database of people's
porn preferences - perfect for blackmail - are now trying to smear people pointing out this is a stupid idea as deliberately creating a climate of fear and confusion about the technologies nope
|
|
The porn movie from the TV series 'Mums Make Porn' is used as a live test for age verification
|
|
|
|
4th April 2019
|
|
| See article from
prolificlondon.co.uk See also iwantfourplay.com |
The age verification company AgeChecked and porn producer Erika Lust have created a test website for a live trial of age verification. The test website iwantfourplay.com features the porn video created by the mums in the Channel 4 series Mums
Make Porn. The website presented the video free of charge, but only after viewers passed one of 3 options for age verification:
- entering full credit card details + email
- entering driving licence number + name and address + email
- mobile phone number + email (the phone must have been verified as 18+ by the service provider and must be ready to receive an
SMS message containing login details)
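Before looking at how the forms handle this in practice, here is a minimal sketch of the kind of up-front eligibility check such a form could run before collecting any personal details. The requirements encoded are only those listed for the three options above; the function and field names are hypothetical, not AgeChecked's:

```python
# Hypothetical up-front eligibility check; names are invented, and the rules
# encoded here are only those listed for the three verification options above.
def eligibility(method: str, details: dict) -> tuple[bool, str]:
    if method == "card":
        # Full credit card details are required; a debit card won't do.
        if details.get("card_type") != "credit":
            return False, "debit cards are not accepted -- use a credit card"
    elif method == "driving_licence":
        # The licence must be registered to someone aged 18 or over.
        if details.get("holder_age", 0) < 18:
            return False, "licence holder must be 18 or over"
    elif method == "mobile":
        # The number must already be verified as 18+ by the network and be
        # able to receive the SMS containing login details.
        if not details.get("network_verified_18_plus"):
            return False, "this phone number is not age-verified as 18+"
    else:
        return False, "unknown verification method"
    return True, "ok"

print(eligibility("mobile", {"network_verified_18_plus": False}))
```

Telling users this before they hand over anything is precisely what the forms reviewed below fail to do.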
The AgeChecked forms are unimpressive; the company seems reluctant to inform customers about requirements before they hand over details. The forms do not even mention that the age requirement is 18+. They certainly do not try to make it clear that, say, a debit card is unacceptable or that a driving licence is not acceptable if registered to a 17 year old. It seems that they would prefer users to type in all their details and only then be told sorry, the card/licence/phone number doesn't pass the test. In fact the mobile phone option is distinctly misleading: it suggests that it may be quicker to use the other options if the mobile phone is not age verified, when it should say more positively that an unverified phone cannot be used. The AgeChecked forms also make contradictory claims about users' personal data not being stored by AgeChecked (or shared with iwantfourplay.com)... but then go on to ask for an email address for logging into existing AgeChecked accounts, so obviously that item of personal data must be stored by AgeChecked for practical recurring usage. AgeChecked has already reported on the early results from the test. Alastair Graham, CEO of AgeChecked said: The results of this
first mainstream test of our software were hugely encouraging. Whilst an effective date for the new legislation's implementation is yet to be confirmed by the British Board of Film Classification, this suggests a clear
preparedness to offer robust, secure age verification procedures to the adult industry's 24-30 million UK users. It also highlights that customers are willing to participate in the process when they know that they are being
verified by a secure provider, with whom their identity is fully protected. The popularity of mobile phone verification was interesting and presumably due to the simplicity of using this device. This is something that we foresee as
being a key part of this process moving forward.
Don't these people spout rubbish sometimes, pretending that not wanting to have one's credit card details, name and address associated with watching porn is just down to convenience. Graham also did not mention other perhaps equally important results from the test. In particular I wonder how many people seeking the video simply decided not to proceed further when presented with the age verification options. I wonder also how many people watched the video without going through age verification. I noted that with a little jiggery pokery, the video could be viewed via a VPN. I also noted that although the age verification got in the way of clicking on the video, file/video downloading browser addons were still able to access the video without bothering with the age verification. And congratulations to the mums for making a good porn video. It features very attractive actors participating in all the usual
porn elements, whilst getting across the mums' wishes for a more positive/loving approach to sex. |
|
Cut by Netflix after the original release following social media pressure over footage of an actual train crash
|
|
|
| 1st April 2019
|
|
| See article
from usatoday.com |
Bird Box is a 2018 USA Sci-Fi horror thriller by Susanne Bier. Starring Rosa Salazar, Sandra Bullock and Sarah Paulson.
In the wake of an unknown global terror, a mother must
find the strength to flee with her children down a treacherous river in search of safety. Due to unseen deadly forces, the perilous journey must be made blindly. Directed by Academy Award winner Susanne Bier, Bird Box is a thriller starring Academy Award
winner Sandra Bullock, John Malkovich, Sarah Paulson, and Trevante Rhodes.
Netflix announced in mid March 2019 that the film would be cut for VoD going forward. The cuts follow months of social media pressure claiming that stock footage of a 2013 crash in the Quebec town of Lac-Megantic was exploiting the victims of the tragedy. The crash involved a train carrying crude oil coming off the tracks and exploding into a ball of fire, killing 47 people.
Netflix said that it will replace the footage with fictional scenes from a former TV series in the U.S. The company said it is sorry for any pain caused to the Lac-Megantic community. In the UK the film was passed 15 uncut by the BBFC for strong violence, threat, language, suicide scenes
for UK cinema and VoD release prior to the announcement of cuts. |
|
Big names of the internet explain how the EU's Terrorist Content laws will be ineffective and will damage the non-terrorist internet in the process
|
|
|
| 1st
April 2019
|
|
| See article [pdf] from politico.eu |
A group of some of the best known internet pioneers have written an open letter explaining how the EU's censorship law nominally targeting terrorism will chill the non-terrorist internet whilst simultaneously advantaging US internet giants
over smaller European businesses. The group writes: EU Terrorist Content regulation will damage the internet in Europe without meaningfully contributing to the fight against terrorism Dear MEP Dalton, Dear MEP Ward,
Dear MEP Reda, As a group of pioneers, technologists, and innovators who have helped create and sustain today's internet, we write to you to voice our concern at proposals under consideration in the EU Terrorist Content
regulation. Tackling terrorism and the criminal actors who perpetrate it is a necessary public policy objective, and the internet plays an important role in achieving this end. The tragic and harrowing incident in
Christchurch, New Zealand earlier this month has underscored the continued threat terrorism poses to our fundamental freedoms, and the need to confront it in all its forms. However, the fight against terrorism does not preclude lawmakers from their
responsibility to implement evidence-based law that is proportionate, justified, and supportive of its stated aim. The EU Terrorist Content regulation, if adopted as proposed, will restrict the basic rights of European
internet users and undercut innovation on the internet without meaningfully contributing to the fight against terrorism. We are particularly concerned by the following aspects of the proposed Regulation:
- Unclear definition of terrorist content: The definition of 'terrorist content' is extremely broad, and includes no clear exemption for educational, journalistic, or research purposes. This creates the risk of over-removal of
lawful and important public interest speech.
- Lack of proportionality: The regulation applies equally to all internet hosting services, bringing thousands of services into scope that have no relevance to terrorist
content. By not taking any account of the different types and sizes of online services, nor their exposure to such illegal content, the new rules would be far out of proportion with the stated aim of the proposal.
- Unworkable takedown timeframes: The obligation to remove content within a mere 60 minutes of notification will likely lead to significant over-removal of lawful content and place a catastrophic compliance burden on micro, small, and medium-sized companies offering services within Europe. At the same time, it will greatly favour large multinational platforms that have already developed highly sophisticated content moderation operations.
- Reliance on upload filters and other 'proactive measures': The draft regulation frames automated upload filters as the solution for terrorist content moderation at scale, and provides government agencies with
the power to mandate how such upload filters and other proactive measures are designed and implemented. But upload filtering of 'terrorist content' is fraught with challenges and risks, and only a handful of online services have the resources and
capacity to build or license such technology. As such, the proposal is setting a benchmark that only the largest platforms can meet. Moreover, upload filtering and related proactive measures risks suppressing important public interest content, such as
news reports about terrorist incidents and dispatches from warzones.
We fully support efforts to combat dangerous and illegal information on the internet, including through new legislation where appropriate. Yet as currently drafted, this Regulation risks inflicting harm on free expression and due
process, competition and the possibility to innovate online. Given these likely ramifications we urge you to undertake a proper assessment of the proposal and make the necessary changes to ensure that the perverse outcomes
described above are not realised. At the very least, any legislation of this nature must include far greater rights protection and be built around a proportionality criterion that ensures companies of all sizes and types can comply and compete in Europe.
Citizens in Europe look to you for leadership in developing progressive policy that protects their rights, ensures their companies can compete, and protects their public interest. This legislation in its current form runs contrary
to those ambitions. We urge you to amend it, for the sake of European citizens and for the sake of the internet. Yours sincerely,
- Mitchell Baker, Executive Chairwoman, The Mozilla Foundation and Mozilla Corporation
- Tim Berners-Lee, Inventor of the World Wide Web and Founder of the Web Foundation
- Vint Cerf, Internet Pioneer
- Brewster Kahle, Founder & Digital Librarian, Internet Archive
- Jimmy Wales, Founder of Wikipedia and Member of the Board of Trustees of the Wikimedia Foundation
- Markus Beckedahl, Founder, Netzpolitik; Co-founder, re:publica
- Brian Behlendorf, Member of the EFF Board of Directors; Executive Director of Hyperledger at the Linux Foundation
- Cindy Cohn, Executive Director, Electronic Frontier Foundation
- Cory Doctorow, Author; Co-Founder of Open Rights Group; Visiting Professor at Open University (UK)
- Rebecca MacKinnon, Co-founder, Global Voices; Director, Ranking Digital Rights
- Katherine Maher, Chief Executive Officer of the Wikimedia Foundation
- Bruce Schneier, Public-interest technologist; Fellow, Berkman Klein Center for Internet & Society; Lecturer, Harvard Kennedy School |
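The letter's over-removal warning follows directly from how 'proactive measures' are typically built: platforms fingerprint material already judged to be terrorist content and block any upload whose fingerprint matches, with no view of the surrounding context. A minimal sketch of that mechanism follows; it is hypothetical illustration, not any platform's real code, and real deployments use perceptual hashes, which match re-encoded and excerpted copies even more broadly than the exact hash used here:

```python
# Hypothetical sketch of hash-based upload filtering ('proactive measures').
# Illustrative only; real systems use perceptual rather than exact hashes.
import hashlib

# Fingerprints of material already judged to be 'terrorist content'.
blocked_fingerprints: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of the uploaded bytes."""
    return hashlib.sha256(data).hexdigest()

def flag_as_terrorist_content(data: bytes) -> None:
    blocked_fingerprints.add(fingerprint(data))

def upload_allowed(data: bytes) -> bool:
    # The filter sees only bytes: it cannot tell propaganda from reporting.
    return fingerprint(data) not in blocked_fingerprints

attack_footage = b"<raw bytes of an attack video>"
flag_as_terrorist_content(attack_footage)

# A broadcaster lawfully reusing the same footage in a news report carries
# the same fingerprint, so the filter blocks it too: the suppression of
# public interest content the signatories warn about.
news_report = attack_footage
print(upload_allowed(news_report))  # False
```

Tightening the matching threshold reduces such false positives but lets trivially altered copies through, which is why the signatories argue that filtering alone cannot meaningfully separate lawful from unlawful uses of the same footage.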
|
Australia gets in on the act assuming that artificial intelligence is magic and can cure all society's ills
|
|
|
| 1st
April 2019
|
|
| See CC article from accessnow.org
See draft legislation from parlinfo.aph.gov.au |
The Australian Government have announced the introduction of a new bill aimed at imposing criminal liability on executives of social media platforms if they fail to remove abhorrent violent content. The hastily drafted legislation could have serious
unintended consequences for human rights in Australia. The rushed and secretive approach, the lack of proper open, democratic debate, and the placement of far-reaching and unclear regulatory measures on internet speech in the criminal code are all
matters of grave concern for digital rights groups, including Access Now and Digital Rights Watch. Poorly designed criminal intermediary liability rules are not the right approach here, which the Government would know if it had taken the time to
consult properly. It's simply wrong to assume that an amendment to the criminal code is going to solve the wider issue of content moderation on the internet, said Digital Rights Watch Chair, Tim Singleton Norton. The lack of any public consultation is particularly worrisome, as it suggests that impacts on human rights were unlikely to have been considered by the government in drafting the text. Forcing companies to regulate content under threat of criminal liability is likely to lead to over-removal and censorship as the companies do their utmost to avoid jail time for their executives or hefty fines on their turnover. Also worryingly, the bill could encourage online companies to constantly surveil internet users by requiring
proactive measures for general content monitoring, a measure that would be a blow to free speech and privacy online. Lucie Krahulcova, Australia Policy Analyst at Access Now, said: Reforming criminal law in a way that
can heavily impact free expression online is unacceptable in a democracy. If Australian officials seek to ram through half-cooked fixes past Parliament without the proper expert advice and public scrutiny, the result is likely to be a law that undermines
human rights. Last year's encryption-breaking powers are a prime example of this. Regulating online speech in a few days is a tremendous mistake. Rather than pushing through reactionary proposals that make for good talking points, the Australian government and members of Parliament should invest in a measured, paced, participatory reflection carefully aimed at achieving their legitimate public policy goals.
The reality here is that there is no easy way to stop
people from uploading or sharing links to videos of harmful content. No magic algorithm exists that can distinguish a violent massacre from videos of police brutality. The draft legislation creates a great deal of uncertainty that can only be dealt with
by introducing measures that may harm important documentation of hateful conduct. In the past, measures like these have worked to harm, rather than protect, the interests of marginalised and vulnerable communities, said Mr. Singleton Norton. This
knee-jerk reaction will not make us safer or address the way that hatred circulates and grows in our society. We need to face up to the cause of this behaviour, and not look for quick fixes and authoritarian approaches to legislating over it, he
concluded. |
|
Singapore gets in on the act assuming that social media companies can detect and censor 'fake news'
|
|
|
| 1st
April 2019
|
|
| See article from
theguardian.com |
Singapore is set to introduce a new anti-fake news law, allowing authorities to remove articles deemed to breach government regulations. The law, being read in parliament this week, will further stifle dissent in an already tightly-controlled media
environment. Singapore's Prime Minister Lee Hsien Loong suggested that the law would tackle the country's growing problem of online misinformation. It follows an examination of fake news in Singapore by a parliamentary committee last year, which
concluded that the city-state was a target of hostile information campaigns. Lee said the law will require media outlets to correct fake news articles, and show corrections or display warnings about online falsehoods so that readers or viewers can
see all sides and make up their own minds about the matter. In extreme and urgent cases, the legislation will also require online news sources to take down fake news before irreparable damage is done. Facebook, Twitter and Google have Asia
headquarters in Singapore, with the companies expected to be under increased pressure to aid the law's implementation.
|
|
Facebook to explain to users why articles have been selected for their timelines
|
|
|
| 1st April 2019
|
|
| See article from
telegraph.co.uk |
Facebook is set to begin telling its users why posts appear in their news feeds, presumably in response to government concerns over its influence over billions of people's reading habits. The social network will today introduce a button on each post
revealing why users are seeing it, including factors such as whether they have interacted often with the person who made the post or whether it is popular with other users. It comes as part of a wider effort to make Facebook's systems more
transparent and secure in advance of the EU elections in May and attempts by European and American politicians to regulate social media. John Hegeman, Facebook's vice president of news feed, told the Telegraph: We hear
from people frequently that they don't know how the news feed algorithm works, why things show up where they do, as well as how their data is used. This is a step towards addressing that. We haven't done as much as we could do to
explain to people how the products work and help them access this information... I can't blame people for being a little bit uncertain or suspicious. We recognise how important the platform that Facebook has become now is in the
world, and that means we have a responsibility to ensure that people who use it have a good experience and that it can't be used in ways that are harmful. We are making decisions that are really important, and so we are trying to
be more and more transparent... we want the external world to be able to hold us accountable.
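The article doesn't describe the button's internals, but the factors Facebook cites (how often you interact with the poster, how popular the post is with others) map naturally onto a ranked feed where each post's score is a weighted sum of signals, with the button simply surfacing the dominant signal. A toy sketch under that assumption, with entirely hypothetical feature names and weights:

```python
# Toy sketch of a 'Why am I seeing this post?' explanation, assuming the feed
# scores each post as a weighted sum of signals and the button reports the
# signal that contributed most. Feature names and weights are hypothetical.
WEIGHTS = {
    "interactions_with_author": 3.0,   # how often you engage with the poster
    "post_popularity": 1.5,            # reactions/comments from other users
    "recency": 1.0,                    # newer posts score higher
}

EXPLANATIONS = {
    "interactions_with_author": "You often interact with this person.",
    "post_popularity": "This post is popular with other users.",
    "recency": "This post was shared recently.",
}

def score(features: dict[str, float]) -> float:
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features: dict[str, float]) -> str:
    # Report the single highest-contributing signal.
    top = max(features, key=lambda name: WEIGHTS[name] * features[name])
    return EXPLANATIONS[top]

post = {"interactions_with_author": 0.8, "post_popularity": 0.9, "recency": 0.5}
print(score(post))    # 4.25
print(explain(post))  # You often interact with this person.
```

Whatever the real implementation, the transparency claim rests on the explanation actually tracking the ranking signals rather than being generated independently of them.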
|
|
UK sex shop chain challenges Google over having its ranking pushed down for being a sex shop, whilst the same products are perfectly OK when sold by the likes of Amazon
|
|
|
| 1st April 2019
|
|
| See article from
retail-week.com by Jacqueline Gold |
Google has an unfair bias against Ann Summers' domain name, making us significantly harder to find online than our lingerie competitors. Customers looking for our biggest category, lingerie, are actively diverted away from finding our website -- even if we shut up shop tomorrow and started selling sofas, this prejudice would not change. In a recent Google search for Ann Summers lingerie -- in organic search, ignoring paid -- we were served Very, Amazon, Asos, Debenhams, Simply Be, House of Fraser and eBay before reaching our website, which sits depressingly on page two. Google's argument is that Ann Summers is an adult retailer -- 'non family safe', to use its terminology. Yes, we sell sex toys.
But let me be clear, the products we are talking about in this context are our mainstream lingerie range. And, of course, we do not want to put inappropriate products in front of children. Here's the irony. Ann Summers has a range
of 293 sex toys. Amazon has over 50,000 sex products, many of them considerably more adult in their nature than those we sell. Yet Google would not consider Amazon non family safe. Google would not impose the same restrictions on
Amazon as it does on us. So, the good news is, Google's policy prevents my nine-year-old daughter from searching for a sex toy from Ann Summers; the bad news is she can buy a gimp mask from Amazon -- in fact, Alexa will even order it for her. See full article from retail-week.com
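The asymmetry Gold describes is the difference between classifying at the domain level and classifying at the product level: flag the whole Ann Summers domain as adult and every page on it is demoted, plain lingerie included, while a generalist retailer carrying far more adult products keeps its family-safe rating. A toy sketch of that distinction, with entirely hypothetical data and rules, not Google's actual system:

```python
# Toy sketch of domain-level vs product-level adult-content classification,
# illustrating the asymmetry described in the article. All data hypothetical.
ADULT_DOMAINS = {"annsummers.com"}  # whole domain flagged 'non family safe'

catalogue = [
    {"domain": "annsummers.com", "product": "lace bra", "adult": False},
    {"domain": "annsummers.com", "product": "sex toy", "adult": True},
    {"domain": "amazon.com", "product": "gimp mask", "adult": True},
    {"domain": "amazon.com", "product": "lace bra", "adult": False},
]

def demoted_domain_level(item: dict) -> bool:
    # Domain-level rule: everything on a flagged domain is demoted.
    return item["domain"] in ADULT_DOMAINS

def demoted_product_level(item: dict) -> bool:
    # Product-level rule: only the adult items themselves are demoted.
    return item["adult"]

for item in catalogue:
    print(item["domain"], item["product"],
          "domain-level:", demoted_domain_level(item),
          "product-level:", demoted_product_level(item))
# Under the domain-level rule, Ann Summers' plain lingerie is demoted while
# Amazon's adult products are not -- the outcome the article objects to.
```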
|
|
|