France proposes draconian law with extreme punishments for politically incorrect insults posted on social media
7th July 2019
29th June 2019. See article from theguardian.com
Laetitia Avia was hailed as a symbol of French diversity when she entered parliament for Emmanuel Macron's centrist party in 2017. But the daily racist abuse against her on social networks pushed her to draw up an extreme censorship law to put a stop to
her critics. It states that hateful comments reported by users must be removed within 24 hours by platforms such as Twitter, Facebook or YouTube. This includes any hateful attack on someone's dignity on the basis of race, religion, sexual orientation,
gender identity or disability. If the social media platforms and tech companies do not comply, they will face huge fines of up to 4% of their global revenue. Penalties could reach tens of millions of euros. There will also be a new judiciary body to
focus on online hate. The online hatred bill will be debated by the French parliament next week and could be fast-tracked into force in the autumn. The bill is part of Macron's drive towards internet censorship. He announced the planned
crackdown on online hate at a dinner for Jewish groups last year, amid a rise of antisemitic acts in France, saying that hateful content online must be taken down fast and all possible techniques put in place to find the identities of those behind it.
Last month, after meetings with Macron, Facebook's Mark Zuckerberg agreed to hand over to judges the identification data on its French users suspected of hate speech.

Update: Passed by the National Assembly

7th July 2019. See article from indianexpress.com
The French law to censor politically incorrect insults on social media websites was passed by the National Assembly on Friday. Under the French draft law, social media groups would have to put in place tools to allow users to alert them to clearly illicit
content related to race, gender, religion, sexual orientation or disability. In the event a network fails to react in due course and/or offer the necessary means to report such content, they could face fines up to 4 per cent of their global
revenues. France's broadcasting censor, CSA, would be responsible for imposing the sanctions and a dedicated prosecutor's office would be created. Several internet and freedom of speech advocacy groups have pointed out that the bill paves the
way for state censorship because it does not clearly define illicit content. Imposing a 24-hour limit to remove clearly unlawful content is likely to result in significant restrictions on freedoms, such as the overblocking of lawful comments or
the misuse of the measure for political censorship purposes, said La Quadrature du Net, a group that advocates free speech on the internet. The group also highlighted that a law adopted in 2004 already demanded the removal of hateful content, but in a responsive way, leaving platforms enough time to assess the seriousness of the content under review. The bill now passes to the French Senate for further debate.
Monday is the last day to respond and the Open Rights Group makes some suggestions
30th June 2019
See article from openrightsgroup.org
The Government is accepting public feedback on their plan until Monday 1 July. Send a message to their consultation using the Open Rights Group tool before the end of Monday!
The Open Rights Group comments on the government censorship plans:

Online Harms: Blocking websites doesn't work -- use a rights-based approach instead

Blocking websites isn't working. It's not keeping children safe and it's stopping vulnerable people from accessing information they need. It's not the right approach to take on Online Harms. This is the finding from our
recent research into website blocking by mobile and broadband
Internet providers. And yet, as part of its Internet regulation agenda, the UK Government wants to roll out even more blocking. The Government's Online Harms White Paper is focused on making online companies fulfil a "duty
of care" to protect users from "harmful content" -- two terms that remain troublingly ill-defined.
The paper proposes giving a regulator various punitive measures to use against companies that fail to fulfil this duty, including powers to block websites. If this scheme comes into effect, it could lead to
widespread automated blocking of legal content for people in the UK. Mobile and broadband Internet providers have been blocking websites with parental control filters for five years. But through our
Blocked project -- which detects incorrect website blocking -- we know that systems are still blocking far too many sites and far too many types of sites by mistake.
Thanks to website blocking, vulnerable people and under-18s are losing access to crucial information and support from websites including counselling, charity, school, and sexual health websites. Small businesses are
losing customers. And website owners often don't know this is happening. We've seen with parental control filters that blocking websites doesn't have the intended outcomes. It restricts access to legal, useful,
and sometimes crucial information. It also does nothing to prevent people who are determined to get access to material on blocked websites, who often use VPNs to get around the filters. Other solutions like filters applied by a parent to a child's
account on a device are more appropriate. Unfortunately, instead of noting these problems inherent to website blocking by Internet providers and rolling back, the Government is pressing ahead with website blocking in other areas.
Blocking by Internet providers may not work for long. We are seeing a technical shift towards encrypted website address requests that will make this kind of website blocking by Internet providers much more
difficult. When I type a human-friendly web address such as openrightsgroup.org into a web browser and hit enter, my computer asks a Domain Name System (DNS) server for that website's computer-friendly IP address - which will look something like 46.43.36.233. My web browser can then use that computer-friendly address to load the website. At the moment, most DNS requests are unencrypted. This allows mobile and broadband Internet providers to
see which website I want to visit. If a website is on a blocklist, the system won't return the actual IP address to my computer. Instead, it will tell me that that site is blocked, or will tell my computer that the site doesn't exist. That stops me
visiting the website and makes the block effective. Increasingly, though, DNS requests are being encrypted. This provides much greater security for ordinary Internet users. It also makes website blocking by Internet providers
incredibly difficult. Encrypted DNS is becoming widely available through Google's Android devices, on Mozilla's Firefox web browser and through Cloudflare's mobile application for Android and iOS. Other encrypted DNS services are also available.
Our report DNS Security - Getting it Right discusses issues around encrypted DNS in more detail.
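The blocking mechanism described above (a filtering resolver that withholds the real IP address for blocklisted names) can be sketched as a toy resolver in Python. The record and blocklist entries here are illustrative stand-ins, not real resolver data; only the 46.43.36.233 address comes from the example above:

```python
# Toy model of a filtering DNS resolver of the kind described above.
# RECORDS and BLOCKLIST are illustrative stand-ins, not real data.
RECORDS = {
    "openrightsgroup.org": "46.43.36.233",
    "blocked-site.example": "203.0.113.7",
}
BLOCKLIST = {"blocked-site.example"}


def resolve(hostname):
    """Return the IP address for hostname, or None to signal NXDOMAIN."""
    if hostname in BLOCKLIST:
        # The resolver withholds the real address, so the browser never
        # learns where the site lives and the block takes effect.
        return None
    return RECORDS.get(hostname)


print(resolve("openrightsgroup.org"))   # 46.43.36.233
print(resolve("blocked-site.example"))  # None: behaves as if the site doesn't exist
```

Encrypted DNS (DNS over HTTPS or TLS) defeats this approach because the Internet provider can no longer see or tamper with the hostname being looked up; the query goes to a third-party resolver such as Cloudflare's instead.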
Blocking websites may be the Government's preferred tool to deal with social problems on the Internet but it doesn't work, either in policy terms or, increasingly, at a technical level. The Government must accept that website blocking by mobile and broadband Internet providers is not the answer. They should concentrate instead on a rights-based approach to Internet regulation and on educational and social approaches that address the roots of complex societal issues.
Offsite Article: Cyberleagle response to the Online Harms Consultation. 30th June 2019. See article from cyberleagle.com
Speech is not a tripping hazard
Twitter will note that tweets from anyone of less standing would get censored for being politically incorrect
29th June 2019
See article from blog.twitter.com
Twitter has announced a new punishment for Donald Trump's tweets that it considers politically incorrect. Twitter will mark such tweets as 'abusive' and try to hide them away from being found in searches etc. However they will not be taken down. Twitter explains how its new censorship method will work: In the past, we've allowed certain Tweets that violated our rules to remain on Twitter because they were in the public's interest, but it wasn't clear when and how we made those
determinations. To fix that, we're introducing a new notice that will provide additional clarity in these situations, and sharing more on when and why we'll use it. Serving the public conversation includes providing
the ability for anyone to talk about what matters to them; this can be especially important when engaging with government officials and political figures. By nature of their positions these leaders have outsized influence and sometimes say things that
could be considered controversial or invite debate and discussion. A critical function of our service is providing a place where people can openly and publicly respond to their leaders and hold them accountable. With
this in mind, there are certain cases where it may be in the public's interest to have access to certain Tweets, even if they would otherwise be in violation of our rules. On the rare occasions when this happens, we'll place a notice -- a screen you have
to click or tap through before you see the Tweet -- to provide additional context and clarity. We'll also take steps to make sure the Tweet is not algorithmically elevated on our service. Who does this apply to?
We will only consider applying this notice on Tweets from accounts that meet the following criteria. The account must:
- Be or represent a government official, be running for public office, or be considered for a government position (i.e., next in line, awaiting confirmation, named successor to an appointed position);
- Have more than 100,000 followers; and
- Be verified.
That said, there are cases, such as direct threats of violence or calls to commit violence against an individual, that are unlikely to be considered in the public interest. What happens to the
Tweet that gets this notice placed on it? When a Tweet has this notice placed on it, it will feature less prominently on Twitter, and not appear in:
- Safe search
- Timeline when switched to Top Tweets
- Live events pages
- Recommended Tweet push notifications
- Notifications tab
- Explore
Social media website censors its popular The Donald forum, a home for Donald Trump supporters
28th June 2019
See article from https://www.bbc.com/news/technology-48783866
The Donald Forum is a Reddit forum, or subreddit, and is one of the internet's most popular forums where Donald Trump fans congregate for a chat. It has now been shunted out of sight up an unsearchable backwater. Users must now click an opt-in
button to access The_Donald forum, and its content no longer appears in Reddit's search results or recommendations. The move seems to be part of a general trend for US social media companies to censor politics that they do not like, particularly
when it leans towards the US right. The Donald forum has more than 760,000 subscribers and once hosted an ask me anything session with Donald Trump in which he replied to questions from the public before the presidential election. One of
the forum's moderators initially shared a message Reddit had sent explaining its reasons for the quarantine. The post was subsequently deleted, but its contents have been copied and posted elsewhere. It said the move had been prompted by threats made on
the subreddit against the authorities in Oregon. Last week, 11 Republican state senators staged a walkout in protest at a climate change bill. State troopers were then told to bring the senators back, which in turn prompted claims that militia
groups opposing such an intervention might show up in the state capital Salem, raising a threat of violence. It seems a little weak that a heated argument can be cited as a reason for political censorship, particularly when it can be seen as interfering with the upcoming presidential election.
28th June 2019
Age Verification providers that don't provide a way into Pornhub will only get the crumbs from the AV table. See article from medium.com
27th June 2019
Facial recognition is proving to be a privacy nightmare, this example being advances in software to match porn stars to social media. See article from newstatesman.com
27th June 2019
Facebook Must Explain What it's Doing With Your Phone Number. See article from privacyinternational.org
Independence from common sense, reason, and the wishes of the Australian people to enjoy the internet
26th June 2019
See article from theguardian.com
Australian media companies and Facebook are scrambling to come to terms with a landmark ruling by an Australian judge that found publishers are legally responsible for pre-moderating comments on Facebook. On Monday in the New South Wales supreme court, judge Stephen Rothman found that commercial entities, including media companies, could be regarded as the publishers of comments made on Facebook, and as such had a responsibility to ensure defamatory remarks were not posted in the first place.
News Corp Australia responded to the judgement in a statement: This ruling shows how far out of step Australia's defamation laws are with other English-speaking democracies and highlights the urgent need for change.
It defies belief that media organisations are held responsible for comments made by other people on social media pages. It is ridiculous that the media company is held responsible while Facebook, which gives us no ability to turn
off comments on its platform, bears no responsibility at all.
The ruling was made in a pre-trial hearing over a defamation case brought by Dylan Voller against a number of media outlets over comments made by readers on Facebook.
Paul Gordon, a social media lawyer at Wallmans lawyers in Adelaide explained the change to Guardian Australia: Up until yesterday the general thread [was] if you knew or ought to have known a defamatory post was
there, you had to take it down. What the judge yesterday found was a bit different, because it wasn't alleged by Voller that the media companies had been negligent in failing to take down the comments. Instead, the judge found
the companies were responsible for putting them up in the first place. That's really the key difference. You have a situation where now media companies are responsible not just for taking down comments when they see them, but for
preventing them going up in the first place. It places a significantly bigger burden on media companies from what was previously in place.
News Corp Australia said it is reviewing the decision with a view to an appeal. Perhaps the
only way for companies to abide by this understanding of the law is for them to take down their Facebook pages totally.
26th June 2019
The Internet Society warns the UK government off trying to legislate against internet protocols it does not like, namely encrypted DNS. See article from theregister.co.uk
26th June 2019
Americans consider the impact of the EU's massive uptick in internet censorship via censorship machines and link tax. See article from xbiz.com
ICO reports on adtech snooping on and profiling internet users without their consent
25th June 2019
See article from ico.org.uk
See report [pdf] from ico.org.uk
In recent months we've been reviewing how personal data is used in real time bidding (RTB) in programmatic advertising, engaging with key stakeholders directly and via our fact-finding forum event to understand the views and concerns of those
involved. We're publishing our Update report into adtech and real time bidding which
summarises our findings so far. We have prioritised two areas: the processing of special category data, and issues caused by relying solely on contracts for data sharing across the supply chain. Under data protection law, using
people's sensitive personal data to serve adverts requires their explicit consent, which is not happening right now. Sharing people's data with potentially hundreds of companies, without properly assessing and addressing the risk of these counterparties,
raises questions around the security and retention of this data. We recognise the importance of advertising to participants in this commercially sensitive ecosystem, and have purposely adopted a measured and iterative approach to
our review of the industry as a whole so that we can observe the market's reaction and adapt our thinking. However, we want to see change in how things are done. We'll be spending the next six months continuing to engage with the sector, which will give
the industry the chance to start making changes based on the conclusions we've come to so far. Open Rights Group responds 25th June 2019. See
article from openrightsgroup.org The ICO has responded to
a complaint brought by Jim Killock and Dr Michael Veale about Europe's 12 billion euro real-time bidding adtech industry. Killock and Veale are now calling on the ICO to take action against companies that are processing data unlawfully.
The ICO has agreed in substance with the complainants' points about the insecurity of adtech data sharing. In particular, the ICO states that:
- Processing of non-special category data is taking place unlawfully at the point of collection
- [The ICO has] little confidence that the risks associated with RTB have been fully assessed and mitigated
- Individuals have no guarantees about the security of their personal data within the ecosystem
However the ICO is proceeding very cautiously and slowly, and not insisting on immediate changes, despite the massive scale of the data breach. Jim Killock said: The ICO's
conclusions are strong and very welcome but we are worried about the slow pace of action and investigation. The ICO has confirmed massive illegality on behalf of the adtech industry. They should be insisting on remedies and fast.
Dr Michael Veale said: The ICO has clearly indicated that the sector operates outside the law, and that there is no evidence the industry will correct itself voluntarily. As long as it remains doing
so, it undermines the operation and the credibility of the GDPR in all other sectors. Action, not words, will make a difference--and the ICO needs to act now.
The ICO concludes:
Overall, in the ICO's view the adtech industry appears immature in its understanding of data protection requirements. Whilst the automated delivery of ad impressions is here to stay, we have general, systemic concerns around the
level of compliance of RTB:
- Processing of non-special category data is taking place unlawfully at the point of collection due to the perception that legitimate interests can be used for placing and/or reading a cookie or other technology (rather than
obtaining the consent PECR requires).
- Any processing of special category data is taking place unlawfully as explicit consent is not being collected (and no other condition applies). In general, processing such data
requires more protection as it brings an increased potential for harm to individuals.
- Even if an argument could be made for reliance on legitimate interests, participants within the ecosystem are unable to
demonstrate that they have properly carried out the legitimate interests tests and implemented appropriate safeguards.
- There appears to be a lack of understanding of, and potentially compliance with, the DPIA
requirements of data protection law more broadly (and specifically as regards the ICO's Article 35(4) list). We therefore have little confidence that the risks associated with RTB have been fully assessed and mitigated.
- Privacy information provided to individuals lacks clarity whilst also being overly complex. The TCF and Authorized Buyers frameworks are insufficient to ensure transparency and fair processing of the personal data in question and
therefore also insufficient to provide for free and informed consent, with attendant implications for PECR compliance.
- The profiles created about individuals are extremely detailed and are repeatedly shared among
hundreds of organisations for any one bid request, all without the individuals' knowledge.
- Thousands of organisations are processing billions of bid requests in the UK each week with (at best) inconsistent
application of adequate technical and organisational measures to secure the data in transit and at rest, and with little or no consideration as to the requirements of data protection law about international transfers of personal data.
- There are similar inconsistencies about the application of data minimisation and retention controls.
- Individuals have no guarantees about the security of their personal data within the
ecosystem.
UK's Competition and Markets Authority reports on a thriving market in fake reviews for products on eBay and Amazon
25th June 2019
See press release from gov.uk
The Competition and Markets Authority (CMA) has found troubling evidence that there is a thriving marketplace for fake and misleading online reviews. After web sweeps performed in the period November 2018 to June 2019, the CMA was concerned about over
100 eBay listings offering fake reviews for sale. It also identified, during the same period, 26 Facebook groups in total where people offered to write fake reviews or businesses recruited people to write fake and misleading reviews on popular
shopping and review sites. It is estimated that over three-quarters of UK internet users consider online reviews when choosing what to buy. Billions of pounds of people's spending is influenced by reviews every year. Fake and
misleading reviews not only lead to people making poorly informed choices and buying the wrong products, but they are also illegal under consumer protection law. The CMA is not alleging that Facebook or eBay are intentionally
allowing this content to appear on their websites. Since the CMA wrote to the sites, both have indicated that they will cooperate and Facebook has informed the CMA that most of the 26 groups have been removed. The CMA welcomes this, and expects the sites
to put measures in place to ensure that all the identified content is removed and to stop it from reappearing. Andrea Coscelli, CMA Chief Executive said: Lots of us rely on reviews when shopping
online to decide what to buy. It is important that people are able to trust that reviews are genuine, rather than something someone has been paid to write. Fake reviews mean that people might make the wrong choice and end up with
a product or service that's not right for them. They're also unfair to businesses who do the right thing. We want Facebook and eBay to conduct an urgent review of their sites to prevent fake and misleading online reviews from
being bought and sold.
Is it a good idea to counter Google and Facebook's political bias by letting the government decide what is fair?
25th June 2019
See article from eff.org
Despite its name, Senator Josh Hawley's Ending Support for Internet Censorship Act [pdf] would
make the Internet less safe for free expression, not more. It would violate the First Amendment by allowing a government agency to strip platforms of legal protection based on their decisions to host or remove users' speech when the federal government
deems that action to be politically biased. Major online platforms' moderation policies and practices are deeply flawed, but putting a government agency in charge of policing bias would only make matters worse. The bill targets
Section 230 , the law that shields online platforms, services, and users from liability for most speech created by others. Section 230 protects intermediaries from liability
both when they choose to edit, curate, or moderate speech and when they choose not to. Without Section 230, social media would not exist in its current form--the risks of liability would be too great given the volume of user speech published through
them--and neither would thousands of websites and apps that host users' speech and media. Under the bill, platforms over a certain size--30 million active users in the U.S. or 300 million worldwide--would lose their immunity under
Section 230. In order to regain its immunity, a company would have to
pay the Federal Trade Commission for an audit to
prove "by clear and convincing evidence" that it doesn't moderate users' posts "in a manner that is biased against a political party, political candidate, or political viewpoint." It's foolish to assume
that anyone could objectively judge a platform's "bias," but particularly dangerous to put a government agency in charge of making those judgments. Don't Let the Government Decide What Bias Is
Sen. Hawley's bill is clearly unconstitutional. A government agency can't punish any person or company because of its political viewpoints, or because it favors certain political speech over others. And decisions about what speech to
carry or remove are inherently political. What does "in a manner that is biased against a political party, political candidate, or political viewpoint" mean, exactly? Would platforms be forced to host propaganda from
hate groups and punished for doing anything to let users hide posts from the KKK that express its political viewpoints ? Would a site catering to certain religious beliefs be forced to accommodate conflicting beliefs? What about
large platforms where users intentionally opt into partisan moderation decisions ? For example, would Facebook be required to close private groups that leftist activists use to organize and share information, or instruct the administrators of
those groups to let right-wing activists join too? Would Reddit have to delete r/The_Donald, the massively popular forum exclusively for fans of the current U.S. president? The bill provides no guidance on any of these questions.
In practice, the FTC would have broad license to enforce its own view on which platform moderation practices constitute bias. The commissioners' enforcement decisions would almost certainly reflect the priorities of the party that nominated them. Since
the bill requires that a supermajority of commissioners agree to grant a platform immunity, any two of the five FTC commissioners could decide together to withhold immunity from a platform. Section 230 Doesn't--and
Shouldn't--Preclude Platform Moderation Hawley's bill would bring us closer to that pre-230 Internet, punishing online platforms when they take measures to protect their users, including efforts to minimize the impacts of
harassment and abuse--the very sorts of efforts that Section 230 was intended to preserve. While platforms often fail in such measures-- and frequently silence innocent people in the process --giving the government discretion to shut down those efforts
is not the solution. Section 230 plays a crucial, historic role in protecting free speech and association online. That includes the right to participate in online communities organized around certain political viewpoints. It's
impossible to enforce an objective standard of "neutrality" on social media--giving government license to do so would pose a huge threat to speech online.
Switzerland preserves gambling monopoly by blocking foreign competition
25th June 2019
See article from swissinfo.ch
Swiss gamblers will be able to bet online only with an approved monopoly of casinos and lotteries. The provision of the new Swiss gambling law which restricts online gambling to a few authorised Swiss-based casinos comes into effect on July 1. Last June 73% of voters approved the overhaul of the country's gambling law, despite opponents' claims of government censorship.
A list of unauthorised gambling providers will be published on the websites of the Federal Gaming Commission and the Lotteries and Betting Commission. Those on the blacklist will be automatically blocked by Swiss ISPs
by means of DNS blocks. Swiss gamblers signed up with foreign casinos will have to contact them directly for any money due, as Swiss regulators have no jurisdiction over them. Or perhaps they will continue to use foreign websites using VPNs or encrypted DNS courtesy of Firefox.
24th June 2019
The UK Porn Block's Latest Failure. By David Flint. See article from reprobatepress.com
With the added twist that Google and co are based in Ireland. The government is also keen to follow the UK's lead in censoring porn through age verification
23rd June 2019
See article from newstalk.com
See article from thejournal.ie
The Broadcasting Authority of Ireland will police video content on Facebook under new proposals before the Irish Government. The Sunday Independent reports the BAI aims to become an enlarged media commission to enforce European censorship rules.
The BAI currently regulates Irish commercial radio and television as well as RTE and TG4. With the social media giants based in Ireland, it will now regulate content on Facebook, Twitter and YouTube in Ireland and throughout the EU. The
BAI proposals also want an Online Safety Commissioner to form part of its increased censorship role. They also speak of age verification, parental controls and a complaints mechanism. The Government is also keen to emulate the UK internet porn censorship regime. Taoiseach Leo Varadkar said the Irish government will consult with the UK about its new porn block and how it is working, with a view to perhaps rolling out a similar age verification system for Ireland. Varadkar said that he was wary of moralising ...BUT... suggested that engagement with the UK government a year or two after the law has been rolled out would be wise. He said that this engagement could help ascertain if the proposals could work here. During Leaders'
Questions, he confirmed that an online age verification system can be discussed by the Oireachtas Communications Committee, and confirmed that legislation to set up the office of a Digital Safety Commissioner is on the way. Justice Minister
Charlie Flanagan has also said the Irish government will consider a similar system to the UK's porn block law as part of new legislation on online safety.
23rd June 2019
YouTube can't remove kid videos without tearing a hole in the entire creator ecosystem. By Julia Alexander. See article from theverge.com
An Arabic Netflix original series, Jinn, riles the easily offended in Jordan
22nd June 2019
See article from al-monitor.com
Jinn is the first Arabic original TV series produced by Netflix. And it did not take long for a few Jordanians to become 'outraged'. Even before the audience had a chance to watch the first few episodes, people were calling for a ban on the series, which showed Jordanian teenagers kissing and swearing. The series, produced by Netflix and Kibrit Productions, takes a look at the friendship and budding romances between the students of a private high school in Petra, Jordan, after they unwittingly unleash a jinn, an evil spirit in Islam. A few whingers attacked the series and accused it of promoting pornography, drugs and alcohol use among students. Journalist Wael al-Bteiri, who launched the hashtag #Punish_Jinn, told Al-Monitor that he considered the series to be an American infiltration that aimed to damage Jordan through its dirty scenes and offensive language. He said: [It] encourages people to fornicate, drink alcohol and smoke weed. They
want to drag our youths down into the decadence of the West. Everyone should take action to stop this mockery.
Dozens of Jordanian women signed a statement June 18 that called the series an offense against Jordan's moral fibre. The
statement said: We strongly refuse the superficiality of this series, as well as [its scenes] that are offensive to public decency and that exploit minors. It reflects an inappropriate image of Jordan, as it was shot
in Petra. The historical city was depicted as a hub for the jinn and a place of deviance.
On June 16, the Public Prosecutor of Amman called on the Cyber Crime Unit to take the necessary measures to ban the series. Netflix
responded to the controversy with a statement June 14 that it would not tolerate offensive statements or action toward the actors that starred in Jinn. |
|
Maybe it's a good job the government has delayed Age Verification, as there are still a lot of issues to resolve for the AV companies
|
|
|
| 21st June 2019
|
|
| See article from telegraph.co.uk
|
The AV industry is not yet ready The Digital Policy Alliance (DPA) is a private lobby group connecting digital industries with Parliament. Its industry members include both Age Verification (AV) providers, eg OCL, and adult entertainment, eg
Portland TV. Just before the Government announcement that the commencement of age verification requirements for porn websites would be delayed, the DPA wrote a letter explaining that the industry was not yet ready to implement AV and asking for a 3-month delay. The letter is unpublished, but fragments of it have been reported in news coverage of AV. The Telegraph reported: The Digital Policy Alliance called for the scheme to be delayed or
risk nefarious companies using this opportunity to harvest and manipulate user data. The strongly-worded document complains that the timing is very tight, a fact that has put some AVPs [age verification providers] and adult
entertainment providers in a very difficult situation. It warns that unless the scheme is delayed there will be less protection for public data, as it appears that there is an intention for uncertified providers to use this
opportunity to harvest and manipulate user data.
The AV industry is unimpressed by a 6 month delay See
article from news.sky.com
Rowland Manthorpe from Sky News contributed a few interesting snippets
too. He noted that the AVPs were unsurprisingly not pleased by the government delay: Serge Acker, chief executive of OCL, which provides privacy-protecting porn passes for purchase at newsagents, told Sky News: As a
business, we have been gearing up to get our solution ready for July 15th and we, alongside many other businesses, could potentially now be being endangered if the government continues with its attitude towards these delays. Not
only does it make the government look foolish, but it's starting to make companies like ours look it too, as we all wait expectantly for plans that are only being kicked further down the road.
There are still issues with
how the AV providers can make money And interestingly Manthorpe revealed in the accompanying video news report that the AV providers were also distinctly unimpressed by the BBFC stipulating that certified AV providers must not use Identity
Data provided by porn users for any other purpose than verifying age. The sensible idea being that the data should not be made available for the likes of targeted advertising. And one example of prohibited data re-use has caused particular problems, namely that ID data should not be used to sign people up for digital wallets. Now AV providers have got to be able to generate their revenue somehow. Some have proposed selling AV cards in newsagents for about £10, but others had been
planning on using AV to generate a customer base for their digital wallet schemes. So it seems that there are still quite a few fundamental issues that have not yet been resolved in how the AV providers get their cut. Some AV
providers would rather not sign up to BBFC accreditation See article from adultwebmasters.org Maybe these
issues with BBFC AV accreditation requirements are behind a move to use an alternative standard. An AV provider called VeriMe has announced that it is the first AV company to receive PAS1296 certification. PAS1296 was developed by the British Standards Institution and the Age Check Certification Scheme (ACCS). PAS stands for Publicly Available Specification, and the standard is designed to define good practice for a product, service or process. It was also championed by the Digital Policy Alliance. Rudd Apsey, the director of VeriMe, said: The PAS1296 certification augments the voluntary standards outlined by the BBFC, which don't address how third-party websites handle consumer
data. We believe it fills those gaps and is confirmation that VeriMe is indeed leading the world in the development and implementation of age verification technology and setting best practice standards for the industry.
We are incredibly proud to be the first company to receive the standard and want consumers and service providers to know that come the July 15 roll out date, they can trust VeriMe's systems to provide the most robust solution for age
verification.
This is not a very convincing argument, as PAS1296 is not available for customers to read (unless they pay about 120 quid for the privilege). At least the BBFC standard can be read by anyone for free, and they can then
make up their own minds as to whether their porn browsing history and ID data is safe. However it does seem that some companies at least are planning to give the BBFC accreditation scheme a miss. The BBFC standard fails to
provide safety for porn users' data anyway. See article from medium.com
The AV company 18+ takes issue with the BBFC accreditation standard, noting that it allows AV providers to dangerously log people's porn browsing history: Here's the problem with the design of
most age verification systems: when a UK user visits an adult website, most solutions will present the user with an inline frame displaying the age verifier's website or the user will be redirected to the age verifier's website. Once on the age
verifier's website, the user will enter his or her credentials. In most cases, the user must create an account with the age verifier, and on subsequent visits to the adult website, the user will enter his account details on the age verifier's website
(i.e., username and password). At this point in the process, the age verifier will validate the user and, if the age verifier has a record of the user being at least age 18, will redirect the user back to the adult website. The age verification system will
transmit to the adult website whether the user is at least age 18 but will not transmit the identity of the user. The flaw with this design from a user privacy perspective is obvious: the age verification website will know the
websites the user visits. In fact, the age verification provider obtains quite a nice log of the digital habits of each user. To be fair, most age verifiers claim they will delete this data. However, a truly privacy first design would ensure the data
never gets generated in the first place because logs can inadvertently be kept, hacked, leaked, or policies might change in the future. We viewed this risk to be unacceptable, so we set about building a better system. Almost all
age verification solutions set to roll out in July 2019 do not provide two-way anonymity for both the age verifier and the adult website, meaning, there remains some log of -- or potential to log -- which adult websites a UK based user visits.
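As a rough illustration of the privacy-first design being argued for, here is a minimal Python sketch of an anonymous age token: the provider signs a random nonce after a one-off age check, and the adult site only ever asks the provider whether a token is valid. All function names are hypothetical, and both parties are compressed into one process for brevity; the point is simply that a yes/no answer need not generate a log of which sites a user visits.

```python
import hashlib
import hmac
import secrets

PROVIDER_KEY = secrets.token_bytes(32)  # held only by the AV provider

def issue_token() -> str:
    """After verifying age once, the provider signs a random nonce.
    The token carries no user identity and no target site, so the
    provider cannot build a log of which adult sites are visited."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(PROVIDER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def adult_site_checks(token: str) -> bool:
    """The adult site relays the token to the provider for a yes/no
    answer; here the provider-side check is folded into one function."""
    nonce, _, sig = token.partition(".")
    expected = hmac.new(PROVIDER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token()
print(adult_site_checks(token))        # True: valid token, age confirmed
print(adult_site_checks("bad.token"))  # False: forged token rejected
```

A real scheme would need expiry and replay protection, but even this toy version shows that the data the critics worry about -- a per-user browsing log -- never needs to exist.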
In fact one AV provider revealed that, up until recently, the government demanded that AV providers keep a log of people's porn browsing history, and it was a late concession to practicality that companies were able to opt out if they wanted. Note that the logging capability is kindly played down by the BBFC by passing it off as being used only for as long as is necessary for fraud prevention. Of course that is just smoke and mirrors: fraud, presumably meaning that passcodes could be given or sold to others, could happen at any time that an age verification scheme is in use, so the time restriction specified by the BBFC may as well be forever. |
|
Jeremy Wright apologises to supporters for an admin cock-up, and takes the opportunity to sneer at the millions of people who just want to keep their porn browsing private and safe
|
|
|
|
20th June 2019
|
|
| See parliamentary transcription from hansard.parliament.uk
|
Jeremy Wright, the Secretary of State for Digital, Culture, Media and Sport, addressed parliament to explain that the start date for the Age Verification scheme for porn has been delayed by about 6 months. The reason is that the Government failed to inform the EU about laws that affect free trade (eg those that allow EU websites to be blocked in the UK). Although the main Digital Economy Act was submitted to the EU, extra bolt-on laws added since have not been submitted. Wright explained:
In autumn last year, we laid three instruments before the House for approval. One of them -- the guidance on age verification arrangements -- sets out standards that companies need to comply with. That should have been notified to the European Commission, in line with the technical standards and regulations directive, and it was not. Upon learning of that administrative oversight, I instructed my Department to notify this guidance to the EU and re-lay the guidance in Parliament as
soon as possible. However, I expect that that will result in a delay in the region of six months. Perhaps it would help if I explained why I think that six months is roughly the appropriate time. Let me set out what has to happen
now: we need to go back to the European Commission, and the rules under the relevant directive say that there must be a three-month standstill period after we have properly notified the regulations to the Commission. If it wishes to look into this in
more detail -- I hope that it will not -- there could be a further month of standstill before we can take matters further, so that is four months. We will then need to re-lay the regulations before the House. As she knows, under the negative procedure,
which is what these will be subject to, there is a period during which they can be prayed against, which accounts for roughly another 40 days. If we add all that together, we come to roughly six months. Wright apologised profusely to
supporters of the scheme: I recognise that many Members of the House and many people beyond it have campaigned passionately for age verification to come into force as soon as possible to ensure that children are
protected from pornographic material they should not see. I apologise to them all for the fact that a mistake has been made that means these measures will not be brought into force as soon as they and I would like.
However the law has
not been received well by porn users. Parliament has generally shown no interest in the privacy and safety of porn users. In fact much of the delay has been down to belatedly realising that the scheme might not get off the ground at all unless they at least
pay a little lip service to the safety of porn users. Even now Wright decided to dismiss people's privacy fears and concerns as if they were all just deplorables bent on opposing child safety. He said: However,
there are also those who do not want these measures to be brought in at all, so let me make it clear that my statement is an apology for delay, not a change of policy or a lessening of this Government's determination to bring these changes about. Age
verification for online pornography needs to happen. I believe that it is the clear will of the House and those we represent that it should happen, and that it is in the clear interests of our children that it must.
Wright compounded
his point by simply not acknowledging that, given a choice, people would prefer not to hand over their ID. Voluntarily complying websites would have to take a major hit from customers who would prefer to seek out the safety of non-complying sites.
Wright said: I see no reason why, in most cases, they [websites] cannot begin to comply voluntarily. They had expected to be compelled to do this from 15 July, so they should be in a position to comply. There seems to
be no reason why they should not.
In passing Wright also mentioned how the government is trying to counter encrypted DNS, which reduces the capabilities of ISPs to block websites. Instead the Government will try and press the browser companies into doing its censorship dirty work for it: It is important to understand changes in technology and the additional challenges they throw up, and she is right to say that the so-called DNS over HTTPS changes will present additional challenges. We are working through those now and speaking to the browsers, which is where we must focus our attention. As the hon. Lady rightly says, the use of these protocols will make it more difficult, if not
impossible, for ISPs to do what we ask, but it is possible for browsers to do that. We are therefore talking to browsers about how that might practically be done, and the Minister and I will continue those conversations to ensure that these provisions
can continue to be effective.
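For readers unfamiliar with the mechanism being discussed: under DNS over HTTPS (RFC 8484), the browser wraps an ordinary DNS query inside an encrypted HTTPS request to a resolver, so the ISP never sees which hostname was looked up and cannot selectively block it. A minimal Python sketch of building such a query follows; the resolver endpoint named in the comments is just one public example.

```python
import struct

# Build a standard DNS query packet (RFC 1035 wire format). Under DoH
# this exact packet is POSTed to a resolver endpoint such as
# https://cloudflare-dns.com/dns-query with the content type
# application/dns-message -- travelling inside TLS, invisible to the ISP.

def build_dns_query(name: str, qtype: int = 1) -> bytes:  # qtype 1 = A record
    # Header: id=0, flags=0x0100 (recursion desired), 1 question, 0 answers
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode("ascii") for p in name.split("."))
    qname += b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN (Internet)
    return header + question

packet = build_dns_query("example.com")
print(len(packet))  # 29 bytes: 12-byte header + 13-byte name + 4-byte type/class
```

Because this packet is indistinguishable from any other HTTPS traffic on the wire, ISP-level filtering can only block the resolver wholesale -- which is why the Government is turning its attention to the browsers instead.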
|
|
US adult performers protest the injustice of Instagram who summarily remove accounts without warning, explanation, or right to appeal
|
|
|
| 20th June 2019
|
|
| Thanks to Nick See article from theguardian.com
|
Dozens of adult performers have picketed outside Instagram's Silicon Valley headquarters over censorship guidelines and the arbitrary, inconsistent enforcement of the rules. They said that this has led to hundreds of thousands of account suspensions
and is imperiling their livelihoods. Adult performers led the protest on Wednesday, but other users including artists, sex workers, queer activists, sex education platforms and models say they have been affected by the platform's opaque removal
system. The action was organized by the Adult Performer Actors Guild, the largest labor union for the adult film industry. They were complaining in particular about the way that the company takes down accounts without warning or explanation and provides no real recourse or effective appeal system. Amber Lynn, an American porn star based in Los Angeles, said her account was terminated without warning or explanation two months ago. She had more than 100,000 followers.
I sent [Instagram] multiple emails through my lawyer and they will still not tell me why they did it, she said. They do not answer you, do not give you an opportunity to correct any problems or even tell you what problems they had to
begin with so you can avoid it in the future. |
|
Pakistan officially complains about a news article on the BBC website.
|
|
|
| 19th June 2019
|
|
| See article from dawn.com See
disliked article from bbc.com |
Pakistan has officially objected to a news article appearing on the BBC news website. Pakistan claimed that the report, Uncovering Pakistan's secret human rights abuses, was defamatory, called for the article to be taken down, and also demanded an apology from the BBC. The news report is available in English and Urdu. The official letter was written by the Director General of External Publicity, Samina Waqar, to Ofcom and the BBC, against the report. The letter claimed the story not
only presented a fabricated theme, but also violated journalistic ethos. The letter goes on: The story also violates BBC's editorial policy by not incorporating the point of view of all stakeholders/citing credible
sources/quoting authentic evidence etc., adding that it amounted to indicting the state of Pakistan for so-called 'secret human rights abuses' without any cogent evidence. We demand that the BBC remove this defamatory and
malicious story and issue a clear-cut apology. We also expect the BBC to ensure that in the future such fake stories specifically targeting Pakistan will not be disseminated.
The complaint explains that the Pakistan government expects
the BBC to abide by its editorial policy and journalists' ethos in the future, asking that Ofcom look into the content of the mala-fide, incorrect and misleading story and take measures as per the BBC's editorial guidelines 1.2.11 -- (Accountability: We
will be open in acknowledging mistakes when they are made and encourage a culture of willingness to learn from them.) Pakistan has warned that the government has the right to pursue all legal options in Pakistan or the UK if BBC authorities fail
to retract the libellous and defamatory story and take action against its writer, with the letter saying the content of this story reflects bias, spin and the angling of facts, and that there are judgemental expressions that are a clear violation of
journalistic norms of impartiality and objectivity. |
|
Open Rights Group reports on how the Online Harms Bill will harm free speech, justice and liberty
|
|
|
| 18th
June 2019
|
|
| See article from openrightsgroup.org See
report [pdf] from openrightsgroup.org |
This report follows our research into current Internet content regulation efforts, which found a lack of accountable, balanced and independent procedures governing content removal, both formally and informally by the state. There is a legacy of Internet regulation in the UK that does not comply with due process, fairness and fundamental rights requirements. This includes: bulk domain suspensions by Nominet at police request without prior authorisation; the lack of an independent legal authorisation process for Internet Watch Foundation (IWF) blocking at Internet Service Providers (ISPs) and in the future by the British Board of Film Classification (BBFC), as well as for Counter-Terrorism Internet Referral Unit (CTIRU) notifications to platforms of illegal content for takedown. These were detailed in our previous report.
The UK government now proposes new controls on Internet content, claiming that it wants to ensure the same rules online as offline. It says it wants harmful content removed, while respecting human rights and protecting free
expression. Yet proposals in the DCMS/Home Office White Paper on Online Harms will create incentives for Internet platforms such as Google, Twitter and Facebook to remove content without legal processes. This is not the same rules
online as offline. It instead implies a privatisation of justice online, with the assumption that corporate policing must replace public justice for reasons of convenience. This goes against the advice of human rights standards that government has itself
agreed to and against the advice of UN Special Rapporteurs. The government as yet has not proposed any means to define the harms it seeks to address, nor identified any objective evidence base to show what in fact needs to be
addressed. It instead merely states that various harms exist in society. The harms it lists are often vague and general. The types of content specified may be harmful in certain circumstances, but even with an assumption that some content is genuinely
harmful, there remains no attempt to show how any restriction on that content might work in law. Instead, it appears that platforms will be expected to remove swathes of legal-but-unwanted content, with an as-yet-unidentified regulator given a broad duty
to decide if a risk of harm exists. Legal action would follow non-compliance by a platform. The result is the state proposing censorship and sanctions for actors publishing material that it is legal to publish.
|
|
|
|
|
| 18th June 2019
|
|
|
Porn Block Demonstrates the Government Is More Concerned With Censorship Than Security See
article from gizmodo.co.uk |
|
|
|
|
|
18th June 2019
|
|
|
Counter-terror officials may be able to scan data from across population, official report says See
article from theguardian.com |
|
|
|
|
| 17th
June 2019
|
|
|
There's a new battleground in the browser wars -- over user privacy See article from wired.com |
|
The Perfection on Netflix
|
|
|
| 16th June
2019
|
|
| 5th June 2019. See
article from stuff.co.nz See
article from smh.com.au |
The Netflix film The Perfection has been reclassified in New Zealand as 18+ following concerns raised by a few viewers over its graphic content. Netflix had rated the film as 16+ with a content note of language, violence, nudity. New Zealand generally accepts Australian age ratings as a default unless queried. The Australian Classification Board settled on an MA 15+ rating for
Strong themes of sexual violence, violence, sex and coarse language. The film had also caused a bit of a stir in Australia too. Netflix's own classification tool had assigned the film an MA15+ rating. The rating included consumer advice
that warned of, among other things, strong blood and gore. After hearing reports of viewers becoming physically ill, the Australian Classification Board decided to audit the Netflix rating. The director of the Classification Board, Margaret
Anderson, confirmed that Netflix was not only right to classify the film MA15+, but that its strong blood and gore warning was not necessary. In New Zealand, however, the classification has been raised to 18+ with warnings about rape, sexual
violence, suicide references, graphic violence. Chief Censor David Shanks noted that The film wasn't viewed by any authority until after it had launched on Netflix, which demonstrates a serious problem with the
classifications system. A member of the public flagged the film to the Office of Film and Literature Classification (OFLC) on May 26 - when it had already been available on Netflix in New Zealand for two days.
Streaming services are not subject to any formal regime. I can call them in using my powers under the act but it's reactive and usually it's out there and people have seen it before we can get the thing addressed.
Note that the BBFC agreed with the 18 rating, passing the film 18 uncut for sexual violence, suicide references.
Update: Why we changed the rating for The Perfection 16th June 2019. See article from
classificationoffice.govt.nz by Chief Censor David Shanks
The content that most concerns Kiwis is quite different to what gets under the skins of people in other countries, such as Australia, the United States, or most other places in the world. We have our own culture and values to be
proud of, and our own very real problems to deal with. At our office we try to ensure that Kiwis get all the information they need before they watch a movie or series, so people can make viewing choices that are right for them.
Increasingly we are less about censorship and more about empowering Kiwis to make their own informed choices. This is straightforward when it comes to traditional media such as DVDs or movies at the cinema, but content on
streaming services like Lightbox or Netflix is not currently covered by our legislation, which makes things a little more complex! A good example popped up this week after my office was told about themes of sexual violence and
child abuse in a film called The Perfection. It initially landed via Netflix as 16+ with a note for Language, violence, nudity. This looks to me like a US rating. I checked with my counterparts overseas, and found that the Aussies initially rated it as
MA15+, with the note Strong Nudity, Strong Violence, Strong Blood and Gore, Strong Coarse Language, Strong Horror Themes, Horror Violence and the Brits gave it an 18, with a note for Sexual violence, suicide references. That
illustrates the issue. Different audiences are concerned with different things. In the States people often want to be warned about coarse language and nudity, but here in NZ Kiwis have told us sexual violence and suicide are topics people want to be
warned about in advance. These are big issues that many in our community care deeply about, and have lived experience of. Once we'd seen the movie, we knew it had content that our audiences would expect to know about, including suicide references and sexual violence. The warning note that Netflix had for this one really needed to change to be effective for a NZ audience. In terms of age rating we felt it was on the line between a 16+ and an 18+ rating, but the range of content and the format suggested the higher age rating. Fortunately Netflix recognises the needs of our own domestic audience, and does genuinely want to engage with us and be responsive to a NZ audience. So they were happy to change the
information. It is now 18+ with the consumer advice, Rape, sexual violence, suicide references, graphic violence. From my point of view, this is just another case illustrating the fact that we're all just working within a
legislative system that was designed for media back in the eighties and nineties, and wasn't built to deal with the international availability of streaming media online. There is room for optimism as the Government is looking at
changing this. We see getting consumer information right as particularly important, as content management tools and support for parents in the future will likely depend on accurate ratings to work properly. |
|
|
|
|
| 16th June 2019
|
|
|
Filtering filth won't save the children, but the block could be bad news for you. By Carrie Marshall See
article from techradar.com |
|
|
|
|
| 15th June 2019
|
|
|
Who'd have thought that a Christian Campaign Group would be calling on its members to criticise the government's internet censorship bill in a consultation See
article from christianconcern.com |
|
Open Rights Group Report: Analysis of BBFC Age Verification Certificate Standard June 2019
|
|
|
| 14th June 2019
|
|
| See article from
openrightsgroup.org See article [pdf] from openrightsgroup.org |
Executive Summary The BBFC's Age-verification Certificate Standard ("the Standard") for providers of age verification services, published in April 2019, fails to meet adequate standards of cyber security and
data protection and is of little use for consumers reliant on these providers to access adult content online. This document analyses the Standard and certification scheme and makes recommendations for improvement and remediation.
It sub-divides generally into two types of concern: operational issues (the need for a statutory basis, problems caused by the short implementation time and the lack of value the scheme provides to consumers), and substantive issues (seven problems with
the content as presently drafted). The fact that the scheme is voluntary leaves the BBFC powerless to fine or otherwise discipline providers that fail to protect people's data, and makes it tricky for consumers to distinguish
between trustworthy and untrustworthy providers. In our view, the government must legislate without delay to place a statutory requirement on the BBFC to implement a mandatory certification scheme and to grant the BBFC powers to require reports and
penalise non-compliant providers. The Standard's existence shows that the BBFC considers robust protection of age verification data to be of critical importance. However, in both substance and operation the Standard fails to
deliver this protection. The scheme allows commercial age verification providers to write their own privacy and security frameworks, reducing the BBFC's role to checking whether commercial entities follow their own rules rather than requiring them to
work to a mandated set of common standards. The result is uncertainty for Internet users, who are inconsistently protected and have no way to tell which companies they can trust. Even within its voluntary approach, the BBFC gives
providers little guidance as to what their privacy and security frameworks should contain. Guidance on security, encryption, pseudonymisation, and data retention is vague and imprecise, and often refers to generic "industry
standards" without explanation. The supplementary Programme Guide, to which the Standard refers readers, remains unpublished, critically undermining the scheme's transparency and accountability. Recommendations
Grant the BBFC statutory powers: The BBFC Standard should be substantively revised to set out comprehensive and concrete standards for handling highly sensitive age verification data. -
The government should legislate to grant the BBFC statutory power to mandate compliance. The government should enable the BBFC to require remedial action or apply financial penalties for non-compliance.
The BBFC should be given statutory powers to require annual compliance reports from providers and fine those who sign up to the certification scheme but later violate its requirements. The
Information Commissioner should oversee the BBFC's age verification certification scheme.
Delay implementation and enforcement: Delay implementation and enforcement of age verification until both (a) a statutory standard of data privacy and security is in place, and (b) that standard has been
implemented by providers. Improve the scheme content: Even if the BBFC certification scheme remains voluntary, the Standard should at least contain a definitive set of precisely delineated objectives
that age verification providers must meet in order to say that they process identity data securely. Improve communication with the public: Where a provider's certification is revoked, the BBFC should
issue press releases and ensure consumers are individually notified at login. The results of all penetration tests should be provided to the BBFC, which must publish details of the framework it uses to evaluate test results, and
publish annual trends in results. Strengthen data protection requirements: Data minimisation should be an enforceable statutory requirement for all registered age verification providers.
The Standard should outline specific and very limited circumstances under which it's acceptable to retain logs for fraud prevention purposes. It should also specify a hard limit on the length of time logs may be kept.
The Standard should set out a clear, strict and enforceable set of policies to describe exactly how providers should "pseudonymise" or "deidentify" data. Providers that no longer meet the
Standard should be required to provide the BBFC with evidence that they have destroyed all the user data they collected while supposedly compliant. The BBFC should prepare a standardised data protection risk assessment framework
against which all age verification providers will test their systems. Providers should limit bespoke risk assessments to their specific technological implementation. Strengthen security, testing, and encryption requirements:
Providers should be required to undertake regular internal and external vulnerability scanning and a penetration test at least every six months, followed by a supervised remediation programme to correct any discovered
vulnerabilities. Providers should be required to conduct penetration tests after any significant application or infrastructure change. Providers should be required to use a comprehensive and specific
testing standard. CBEST or GBEST could serve as guides for the BBFC to develop an industry-specific framework. The BBFC should build on already-established strong security frameworks, such as the Center for Internet Security Cyber
Controls and Resources, the NIST Cyber Security Framework, or Cyber Essentials Plus. At a bare minimum, the Standard should specify a list of cryptographic protocols which are not adequate for certification.
|
|
Jordan Peterson launches discussion and subscription platform that won't be censored on grounds of political correctness
|
|
|
| 14th June 2019
|
|
| See article from
newsbusters.org See article from ts.today |
An upcoming free speech platform promises to provide users with the best features of other social media, but without the censorship. The subscription-based anti-censorship platform Thinkspot is being created by popular psychologist Dr. Jordan B.
Peterson. It's being marketed as a free speech alternative to payment processors like Patreon in that it will monetize creators and also provide a social media alternative to platforms like Facebook and YouTube. Peterson explained in a podcast
that the website would have radically pro-free speech Terms of Service, saying that once you're on our platform we won't take you down unless we're ordered to by a US court of law. That will be a profound contrast to platforms that ban users for
misgendering people who identify as trans, or for tweeting learn to code at fired journalists. The only other major rule on comments he mentioned was that they need to be thoughtful. Rather than suggesting that some opinions are off limits,
Peterson said they will have a minimum required length so one has to put thought into what they write. If minimum comment length is 50 words, you're gonna have to put a little thought into it, Peterson said. Even if you're being a troll, you'll be
a quasi-witty troll. All comments on the website will have a voting feature, and if your ratio of upvotes to downvotes falls below 50/50 then your comments will be hidden; people will still be able to see them if they click, but you'll effectively disappear.
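The two comment rules described (a minimum word count and hiding once the upvote share drops below 50%) amount to a simple predicate. This sketch uses hypothetical names and logic, since Thinkspot's actual implementation is unreleased:

```python
def comment_visible(text: str, upvotes: int, downvotes: int,
                    min_words: int = 50) -> bool:
    """Sketch of the two moderation rules Peterson described.

    A comment must meet a minimum word count, and it is hidden once
    its upvote share falls below 50%. Unvoted comments stay visible.
    """
    if len(text.split()) < min_words:
        return False
    total = upvotes + downvotes
    return total == 0 or upvotes / total >= 0.5
```

Note that under such a scheme a comment's visibility can change over time as votes accumulate, which may be why Peterson stressed the thresholds could still be tweaked.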
He later added that these features could be tweaked as the website is still being designed. |
|
Spanish Government includes age verification for porn in its programme
|
|
|
| 12th June 2019
|
|
| See article from evangelicalfocus.com
|
An MP in Spain is leading an initiative to force porn websites operating in the country to install strict age verification systems. The recently elected 26-year-old Andrea Fernandez has called for an end to the culture of porn among young people. The
limitation of pornographic content online was included in the electoral programme of the newly elected Prime Minister, Pedro Sanchez (Social Democrats). The goal of the new government is to implement a strict new age verification system for these
kinds of websites.
|
|
The catastrophic impact of DNS-over-HTTPS. The IWF makes its case
|
|
|
| 10th June 2019
|
|
| See article from iwf.org.uk by Fred Langford, IWF
Deputy CEO and CTO |
Here at the IWF, we've created life-changing technology and data sets helping people who were sexually abused as children and whose images appear online. The IWF URL List , or more commonly, the block list, is a list of live webpages that show children
being sexually abused, a list used by the internet industry to block millions of criminal images from ever reaching the public eye. It's a crucial service, protecting children, and people of all ages in their homes and places of
work. It stops horrifying videos from being stumbled across accidentally, and it thwarts some predators who visit the net to watch such abuse. But now its effectiveness is in jeopardy. That block list which has for years stood
between exploited children and their repeated victimisation faces a challenge called DNS over HTTPS which could soon render it obsolete. It could expose millions of internet users across the globe -- and of any age -- to the risk
of glimpsing the most terrible content. So how does it work? DNS stands for Domain Name System and it's the phonebook by which you look something up on the internet. But the new privacy technology could hide user requests, bypass
filters like parental controls, and make globally-criminal material freely accessible. What's more, this is being fast-tracked, by some, into service as a default which could make the IWF list and all kinds of other protections defunct.
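The phonebook analogy, and how DoH hides lookups inside ordinary HTTPS traffic, can be illustrated with a short sketch. This builds an RFC 8484-style DoH GET URL from a hostname without sending anything; the resolver endpoint shown (Cloudflare's public one) is an illustrative choice, not one named in the article:

```python
import base64
import struct

def build_doh_url(hostname: str,
                  resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Encode a DNS A-record query as an RFC 8484 DoH GET URL.

    The wire-format DNS question is base64url-encoded into the ?dns=
    parameter, so to an ISP the lookup is indistinguishable from any
    other HTTPS request to the resolver -- which is what defeats
    resolver-level filtering.
    """
    # 12-byte DNS header: ID 0, RD flag set, one question, no answers.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    payload = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={payload}"
```

Because the resulting request travels over port 443 with TLS, an ISP sees only an encrypted connection to the resolver, not the hostname being looked up.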
At the IWF, we don't want to demonise technology. Everyone's data should be secure from unnecessary snooping and encryption itself is not a bad thing. But the IWF is all about protecting victims and we say that the way in which DNS
over HTTPS is being implemented is the problem. If it was set as the default on the browsers used by most of us in the UK, it would have a catastrophic impact. It would make the horrific images we've spent all these years blocking
suddenly highly accessible. All the years of work for children's protection could be completely undermined -- not just busting the IWF's block list but swerving filters, bypassing parental controls, and dodging some counter terrorism efforts as well.
From the IWF's perspective, this is far more than just a privacy or a tech issue, it's all about putting the safety of children at the top of the agenda, not the bottom. We want to see a duty of care placed upon DNS providers so they
are obliged to act for child safety and cannot sacrifice protection for improved customer privacy.
|
|
Advertisers slam the government over more censorship proposals to restrict TV junk food adverts and to ludicrously impose watershed requirements online
|
|
|
|
10th June 2019
|
|
| See article from cityam.com See
consultation from gov.uk |
Advertisers have launched a scathing attack on the government's plans to introduce further restrictions on junk food advertising, describing them as totally disproportionate and lacking in evidence. In submissions to a government consultation, seen
exclusively by City A.M. , industry bodies Isba and the Advertising Association (AA) said the proposals would harm advertisers and consumers but would fail to tackle the issue of childhood obesity. The government has laid out plans to introduce a
9pm watershed on adverts for products high in fat, salt or sugar (HFSS) on TV and online . But the advertising groups have dismissed the policy options, which were previously rejected by media regulator Ofcom, as limited in nature and
speculative in understanding. The AA said current restrictions, which have been in place since 2008, have not prevented the rise of obesity, while children's exposure to HFSS adverts has also fallen sharply over the last decade. In
addition, Isba argued a TV watershed would have a significant and overwhelming impact on adult viewers, who make up the majority of audiences before 9pm. They also pointed to an impact assessment, published alongside the consultation, which
admitted the proposed restrictions would cut just 1.7 calories per day from children's diets. |
|
|
|
|
|
10th June 2019
|
|
|
When is a porn film not a porn film? See article from reprobatepress.com |
|
|
|
|
| 9th June
2019
|
|
|
A fascinating article from a BBC reporter based in Beijing who became a marked man when he posted images from a Hong Kong vigil remembering the Tiananmen Square massacre See
article from bbc.com |
|
|
|
|
| 9th June 2019
|
|
|
Privacy International start campaign against governments snooping on social media accounts handed over as part of a visa application See
article from privacyinternational.org |
|
China censors news websites over Tiananmen Square massacre and financial websites over US trade war issues
|
|
|
| 8th June 2019
|
|
| See article from theintercept.com
See article from ft.com |
The Chinese government appears to have launched a major new internet purge, blocking users from accessing The Intercept's website and those of at least seven other Western news organizations. People in China began reporting that they could not access
the websites of The Intercept, The Guardian, the Washington Post, HuffPost, NBC News, the Christian Science Monitor, the Toronto Star, and Breitbart News. It is unclear exactly when the censorship came into effect or the reasons for it. But
Tuesday marked the 30th anniversary of the Tiananmen Square massacre, and Chinese authorities have reportedly increased levels of online censorship to coincide with the event. On a second front censors at two of China's largest social media
companies appear to have taken aim at independent financial bloggers, as Beijing continues pumping out propaganda to garner public support for its trade dispute with the US. At least 10 popular financial analysis blogs on social media app WeChat
had all present and past content scrubbed, according to screenshots posted by readers. The Weibo accounts of two popular non-financial bloggers, including Wang Zhian, a former state broadcast commentator who wrote about social issues, were also blocked.
|
|
Russia set to block VPNs that refuse to censor websites blocked by Russia
|
|
|
| 8th June 2019
|
|
| See article from torrentfreak.com |
Back in March, ten major VPN providers including NordVPN, ExpressVPN, IPVanish and HideMyAss were ordered by Russian authorities to begin blocking sites present in the country's national blacklist. Following almost total non-compliance, the country's
internet censor says that blocking nine of the services is now imminent. Back in March, telecoms watchdog Roscomnadzor wrote to ten major VPN providers -- NordVPN, ExpressVPN, TorGuard, IPVanish, VPN Unlimited, VyprVPN, Kaspersky Secure Connection,
HideMyAss!, Hola VPN, and OpenVPN -- ordering them to connect to the database. All the foreign companies refused to comply. Only the Russia-based company, Kaspersky Secure Connection, connected to the registry, Roscomnadzor chief Alexander Zharov
informs Interfax. Russian law says unequivocally that if a company refuses to comply with the law it should be blocked. And it appears that Roscomnadzor is prepared to carry through with its threat. When questioned on the timeline for blocking,
Zharov said that the matter could be closed within a month. |
|
European Court of Justice moves towards a position requiring the international internet to follow EU censorship rulings
|
|
|
| 8th June 2019
|
|
| 6th June 2019. See
article from techdirt.com |
TechDirt comments: The idea of an open global internet keeps taking a beating -- and the worst offender is not, say, China or Russia, but rather the EU. We've already discussed things like the EU Copyright Directive and the Terrorist Content
Regulation , but it seems like every day there's something new and more ridiculous -- and the latest may be coming from the Court of Justice of the EU (CJEU). The CJEU's Advocate General has issued a recommendation (but not the final verdict) in a new
case that would be hugely problematic for the idea of a global open internet that isn't weighted down with censorship. The case at hand involved someone on Facebook posting a link to an article about an Austrian politician, Eva
Glawischnig-Piesczek, accusing her of being a lousy traitor of the people, a corrupt oaf and a member of a fascist party. An Austrian court ordered Facebook to remove the content, which it complied with by removing access to anyone in Austria. The
original demand was also that Facebook be required to prevent equivalent content from appearing as well. On appeal, a court denied Facebook's request that it only had to comply in Austria, and also said that such equivalent content could only be limited
to cases where someone then alerted Facebook to the equivalent content being posted (and, thus, not a general monitoring requirement). The case was then escalated to the CJEU, and then basically everything goes off the rails. See
detailed legal findings discussed by techdirt.com
Offsite Comment: Showing how Little the EU Understands About the Web
8th June 2019. See article from forbes.com by
Kalev Leetaru As governments around the world seek greater influence over the Web, the European Union has emerged as a model of legislative intervention, with efforts from GDPR to the Right to be Forgotten to new efforts to
allow EU lawmakers to censor international criticism of themselves. GDPR has backfired spectacularly, stripping away the EU's previous privacy protections and largely exempting the most dangerous and privacy-invading activities it was touted to address.
Yet it is the EU's efforts to project its censorship powers globally that present the greatest risk to the future of the Web and demonstrate just how little the EU actually understands about how the internet works. |
|
|
|
|
| 8th June 2019
|
|
|
Pakistan buys in a new censorship and snooping system for the internet See article from dawn.com
|
|
Facebook taken to court in Poland after it censored information about a nationalist rally in Warsaw
|
|
|
| 7th June 2019
|
|
| See article from gadgets.ndtv.com
|
A Polish court has held a first hearing in a case brought against Facebook by a historian who says that Facebook engaged in censorship by suspending accounts that had posted about a nationalist rally in Warsaw. Historian Maciej Swirski has complained
that Facebook in 2016 suspended a couple of accounts that provided information on an independence day march organised by far-right groups. Swirski told AFP: I'm not a member of the National Movement, but as a citizen I
wanted to inform myself on the event in question and I was blocked from doing so, This censorship doesn't concern my own posts, but rather content that I had wanted to see.
Facebook's lawyers argued that
censorship can only be exercised by the state and that a private media firm is not obligated to publish any particular content. The next court hearing will take place on October 30. |
|
New York senator introduces a bill requiring companies that hoover up user data must benefit the user before using the data for profit
|
|
|
| 7th June 2019
|
|
| See article from avn.com
|
A new bill introduced last week in the New York State Senate would give New Yorkers stronger online privacy protection than residents of any other state, notably California which was ahead of the game until now. The New York bill authored by Long
Island senator Kevin Thomas goes further than California and requires platforms such as Google, Facebook and others to obtain consent from consumers before they share and/or sell their information. Unlike the California law, however, the
proposed New York bill gives users the right to sue companies directly over privacy violations, possibly setting up a barrage of individual lawsuits, according to a report on the proposed legislation by Wired magazine . The New York bill also
applies to any online company, while the California law exempts any company with less than $25 million annual gross revenue from its requirements. And as a final flourish, the bill requires that any company that hoovers up user data must use that
data in ways that benefit the user before using it to turn a profit for themselves. |
|
But Google still cannot distinguish educational material from the glorification of Nazis
|
|
|
|
6th June 2019
|
|
| See
article from theguardian.com |
YouTube has decided to adopt a widespread censorship rule to ban the promotion of hate speech. Google wrote: Today, we're taking another step in our hate speech policy by specifically prohibiting videos alleging that a
group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.
However for all the Artificial Intelligence it has at its
disposal the company cannot actually work out which videos promote hate speech. Instead it has taken to banning videos referencing more easily identifiable images such as Nazi symbology, regardless of the context in which they are presented. For
example YouTube has blocked some British history teachers from its service for uploading archive material related to Adolf Hitler. Scott Allsopp, who owns the long-running MrAllsoppHistory revision website and teaches at an international school in
Romania, had his channel featuring hundreds of historical clips on topics ranging from the Norman conquest to the cold war deleted for breaching the rules that ban hate speech. Allsopp commented: It's absolutely vital
that YouTube work to undo the damage caused by their indiscriminate implementation as soon as possible. Access to important material is being denied wholesale as many other channels are left branded as promoting hate when they do nothing of the sort.
While previous generations of history students relied on teachers playing old documentaries recorded on VHS tapes on a classroom television, they now use YouTube to show raw footage of the Nazis and famous speeches by Adolf Hitler.
Richard Jones-Nerzic, another British teacher affected by the crackdown, said that he had been censured for uploading clips to his channel from old documentaries about the rise of Nazism. Some of his clips now carry warnings that
users might find the material offensive, while others have been removed completely. He said he was appealing against YouTube's deletion of archive Nazi footage taken from mainstream media outlets, arguing that this is in itself a form of negationism or even
holocaust denial. Allsopp had his account reinstated on Thursday following an appeal but said he had been contacted by many other history teachers whose accounts have also been affected by the ban on hate speech. Users who do not swiftly appeal
YouTube's decisions could find their material removed for good.
|
|
Artist Spencer Tunick and the National Coalition Against Censorship organise a Facebook challenging array of male nipples in New York
|
|
|
| 6th June 2019
|
|
| 4th June 2019. See article from theguardian.com
See article from artnews.com |
Photographer Spencer Tunick and the National Coalition Against Censorship organised a nude art action outside Facebook's New York headquarters on June 2, when some 125 people posed naked in front of Facebook's building as Tunick
photographed them as part of the NCAC's #WeTheNipple campaign. In response Facebook agreed to convene a group--including artists, art educators, museum curators, activists, and employees--to consider new nudity guidelines for images posted to its
social-media platforms. The NCAC said it will collaborate with Facebook in selecting participants for a discussion to look into issues related to nude photographic art, ways that censorship impacts artists, and possible solutions going forward.
However before artists get their expectations up, they should know that it is standard policy that whenever Facebook get caught out censoring something, they always throw their arms up in feigned horror, apologise profusely and say they will do
better next time. They never do! |
|
|
|
|
| 6th June 2019
|
|
|
Foreign websites will block UK users altogether rather than be compelled to invest time and money into a nigh-impossible compliance process. By Heather Burns See
article from webdevlaw.uk |
|
Internet companies slam the data censor's disgraceful proposal to require age verification for large swathes of the internet
|
|
|
| 5th June 2019
|
|
| From the Financial Times |
The Information Commissioner's Office has for some bizarre reason been given immense powers to censor the internet. And in an early opportunity to exert its power it has proposed a 'regulation' that would require strict age verification for
nearly all mainstream websites that may have a few child readers and some material that may be deemed harmful for very young children, e.g. news websites that may have glamour articles or perhaps violent news images. In a mockery of 'data protection'
such websites would have to implement strict age verification requiring people to hand over identity data to most of the websites in the world. Unsurprisingly much of the internet content industry is unimpressed. A six-week consultation on the
new censorship rules has just closed and according to the Financial Times: Companies and industry groups have loudly pushed back on the plans, cautioning that they could unintentionally quash start-ups and endanger
people's personal data. Google and Facebook are also expected to submit critical responses to the consultation. Tim Scott, head of policy and public affairs at Ukie, the games industry body, said it was an inherent contradiction
that the ICO would require individuals to give away their personal data to every digital service. Dom Hallas, executive director at the Coalition for a Digital Economy (Coadec), which represents digital start-ups in the UK, said
the proposals would result in a withdrawal of online services for under-18s by smaller companies: The code is seen as especially onerous because it would require companies to provide up to six different versions of
their websites to serve different age groups of children under 18. This means an internet for kids largely designed by tech giants who can afford to build two completely different products. A child could access YouTube Kids, but
not a start-up competitor.
Stephen Woodford, chief executive of the Advertising Association -- which represents companies including Amazon, Sky, Twitter and Microsoft -- said the ICO needed to conduct a full technical
and economic impact study, as well as a feasibility study. He said the changes would have a wide and unintended negative impact on the online advertising ecosystem, reducing spend from advertisers and so revenue for many areas of the UK media.
An ICO spokesperson said: We are aware of various industry concerns about the code. We'll be considering all the responses we've had, as well as engaging further where necessary, once the consultation
has finished.
|
|
The harms will be that British tech businesses will be destroyed so that politicians can look good for 'protecting the children'
|
|
|
| 2nd June 2019
|
|
| 1st June 2019. See article from cityam.com See
submission to the government [pdf] from uk.internetassociation.org |
A scathing new report, seen by City A.M. and authored by the Internet Association (IA), which represents online firms including Google, Facebook and Twitter, has outlined a string of major concerns with plans laid out in the government Online Harms white
paper last month. The Online Harms white paper outlines a large number of internet censorship proposals hiding under the vague terminology of 'duties of care'. Under the proposals, social media sites could face hefty fines or even a ban if they
fail to tackle online harms such as age-inappropriate content, insults, harassment, terrorist content and of course 'fake news'. But the IA has branded the measures unclear and warned they could damage the UK's booming tech sector, with smaller
businesses disproportionately affected. IA executive director Daniel Dyball said: Internet companies share the ambition to make the UK one of the safest places in the world to be online, but in its current form the online harms white paper
will not deliver that. The proposals present real risks and challenges to the thriving British tech sector, and will not solve the problems identified. The IA slammed the white paper over
its use of the term duty of care, which it said would create legal uncertainty and be unmanageable in practice.
The lobby group also called for a more precise definition of which online services would be covered by regulation and
greater clarity over what constitutes an online harm. In addition, the IA said the proposed measures could raise serious unintended consequences for freedom of expression. And while most internet users favour tighter rules in some areas,
particularly social media, people also recognise the importance of protecting free speech -- which is one of the internet's great strengths. Update: Main points 2nd June 2019. See
article from uk.internetassociation.org The Internet Association
paper sets out five key concerns held by internet companies:
- "Duty of Care" has a specific legal meaning that does not align with the obligations proposed in the White Paper, creating legal uncertainty, and would be unmanageable;
- The scope of the services covered by regulation
needs to be defined differently, and more closely related to the harms to be addressed;
- The category of "harms with a less clear definition" raises significant questions and concerns about clarity and democratic process;
- The proposed code of practice obligations raise potentially dangerous unintended consequences for freedom of expression;
- The proposed measures will damage the UK digital sector, especially start-ups, micro-businesses and
small- and medium-sized enterprises (SMEs), and slow innovation.
|
|
Well perhaps if the UK wasn't planning to block legal websites then people wouldn't need to seek out circumvention techniques, so allowing the laudable IWF blocking to continue
|
|
|
| 1st June 2019
|
|
| See article from techworld.com
|
A recent internet protocol allows for websites to be located without using the traditional approach of asking your ISP's DNS server, and so evading website blocks implemented by the ISP. Because the new protocol is encrypted then the ISP is restricted in
its ability to monitor websites being accessed. This very much impacts the ISP's ability to block illegal child abuse material as identified in a block list maintained by the IWF. Over the years the IWF has been very good at sticking to its universally
supported remit. Presumably it has realised that extending its blocking capabilities to other less critical areas may degrade its effectiveness, as it would then lose that universal support. Now of course the government has stepped in and will use
the same mechanism as used for the IWF blocks to block legal and very popular adult porn websites. The inevitable interest in circumvention options will very much diminish the IWF's ability to block child abuse. So the IWF has taken to campaigning to
support its capabilities. Fred Langford, the deputy CEO of IWF, told Techworld about the implementation of encrypted DNS: Everything would be encrypted; everything would be dark. For the last 15 years, the IWF have
worked with many providers on our URL list of illegal sites. There's the counterterrorism list as well and the copyright infringed list of works that they all have to block. None of those would work. We put the entries onto our
list until we can work with our international stakeholders and partners to get the content removed in their country, said Langford. Sometimes that will only be on the list for a day. Other times it could be months or years. It just depends on the regime
at the other end, wherever it's physically located.
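The blocking mechanism Langford describes can be reduced to a few lines: the ISP's resolver checks each lookup against a blocklist before answering, and that check is precisely the step a client bypasses by switching to an encrypted third-party resolver. All names and addresses here are hypothetical stand-ins:

```python
from typing import Optional

# Stand-in for answers a real resolver would fetch from upstream servers.
UPSTREAM = {
    "example.com": "93.184.216.34",
    "blocked.example": "203.0.113.7",
}

def filtered_resolve(hostname: str, blocklist: set) -> Optional[str]:
    """ISP-side DNS filtering in miniature.

    The resolver consults a blocklist (such as the IWF URL list, reduced
    here to bare hostnames) before answering. A client using DNS over
    HTTPS to an outside resolver never reaches this function at all.
    """
    if hostname in blocklist:
        return None  # blocked: no answer (or a sinkhole address) is returned
    return UPSTREAM.get(hostname)
```

Langford's proposed remedy, in these terms, is for DoH resolvers themselves to cooperate with ISPs so an equivalent check still runs somewhere on the path.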
The IWF realises the benefit of universal support, so it has generally acknowledged the benefits of the protocol for privacy and security while focusing on the need for it to be deployed with
appropriate safeguards in place. It is calling for the government to insert a censorship rule that includes the IWF URL List in the forthcoming online harms regulatory framework, to ensure that service providers comply with current UK laws and
security measures. Presumably the IWF would like its block list to be implemented by encrypted DNS servers worldwide. IWF's Fred Langford said: The technology is not bad; it's how you implement it. Make sure your
policies are in place, and make sure there's some way that if there is an internet service provider that is providing parental controls and blocking illegal material that the DNS over HTTPS server can somehow communicate with them to redirect the traffic
on their behalf.
Given the respect the IWF commands, this could be a possibility, but if the government then steps in and demands adult porn sites be blocked too, the approach would surely stumble, as every world dictator and
international moralist campaigner would expect the same. |
|
|