Threatening to invade and repress the freedoms of a once proud people
20th February 2022

See paywalled article from ft.com
The Financial Times is reporting that the cabinet have agreed to extend UK online censorship to cover legal but harmful content. The government will define what material must be censored via its internet censor Ofcom. The FT reports:
A revised Online Safety bill will give Ofcom, the communications regulator, powers to require internet companies to use technology to proactively seek out and remove both illegal content and legal content which is harmful
to children. The new powers were proposed in a recent letter to cabinet colleagues by home secretary Priti Patel and culture secretary Nadine Dorries.
It seems that the tech industry is not best pleased by being forced to pre-vet and
censor content according to rules decreed by government or Ofcom. The FT reports: After almost three years of discussion about what was originally named the Online Harms bill, tech industry insiders said they
were blindsided by the eleventh-hour additions. The changes would make the UK a global outlier in how liability is policed and enforced online, said Coadec, a trade body for tech start-ups. It added the UK would be a significantly
less attractive place to start, grow and maintain a tech business. Westminster insiders said ministers were reluctant to be seen opposing efforts to remove harmful material from the internet.
Twitter responds to German porn age verification requirements by totally blocking all Germans from adult content that has been flagged by a state internet censor
13th February 2022

See article from wired.com
Twitter has been blocking the profiles of adult content creators in Germany since late 2020, with at least 60 accounts affected to date. The move comes in response to a series of legal orders by German regulators that have ruled that online
pornography should not be visible to children and must be hidden behind age-verification systems. As Twitter doesn't have an age-verification system in place, it has responded to legal demands by outright blocking the selected accounts for anyone
in Germany. The German approach to selecting accounts to ban seems scattergun. There are thousands of Twitter accounts that post adult content, and those that the internet censor has reported to Twitter appear to have large followings or are subject
to individual complaints. Anyone trying to view one of the blocked accounts in Germany sees a message saying it has been withheld in Germany in response to a legal demand. The exact number of accounts blocked in this way is unknown. One
pornographic account displaying this message has more than 700,000 followers. The policy of totally blocking all German users may encourage a large-scale take-up of VPNs so that users can continue to view their favourite accounts. Of course Twitter could itself block access via well-known VPNs, but it seems likely that this would cause widespread disruption to users worldwide who live in repressive countries that try to block Twitter entirely.
Big Tech companies liken the Online 'Safety' Bill to what censors are doing in China
13th February 2022

See paywalled article from ft.com
The Financial Times is reporting on a letter from Priti Patel and Nadine Dorries calling for tech companies to pre-emptively vet and censor user posts on social media that are 'legal but harmful'. The tech companies see this as a censorship demand
that goes way beyond anything else demanded in the supposedly free world. Unnamed critics said such censorship could create a clash with European data protection rules and deter further investment from multinational tech companies in the UK.
One tech lobbyist said the plans have put a panic-stricken tech industry at Defcon 2: The broader implications are vast. It seems that Patel and Dorries have sent the letter to cabinet colleagues to argue for a step up in the
censorship demands of the as yet unpublished Online 'Safety' Bill. One tech industry executive, who has seen the proposals, said the potential requirement to monitor legal content, as well as material that is clearly designated as illegal, crossed
a huge red line for internet companies. Another said: This seems to go significantly beyond what is done in democratic countries around the world. It feels a bit closer to what they are doing in China.
Sex workers' social media site Switter is forced to shut down in anticipation of new censorship laws
13th February 2022

See article from theguardian.com
A social media platform for sex workers with close to half a million users globally has shut down over legal concerns regarding online safety laws and the Australian government's social media defamation legislation. Switter, which runs on
Twitter-replica Mastodon, was set up by an Australian collective of sex workers and technologists, Assembly Four, in 2018 in response to the anti-sex trafficking legislation known as Sesta/Fosta in the United States. Switter works as a safe space
by, and for, sex workers, with little concern that their content or accounts will be censored. On the site, sex workers can find each other, share safety information, find clients and find out legal information or service availability. However,
the site's managers have announced an immediate shutdown, telling their more than 420,000 users that the raft of online safety and defamation laws in the US, UK and Australia made it difficult to keep the platform running. A letter
from the management explains: The recent anti-sex work and anti-LGBTQIA+ legislative changes not only in Australia, but in the UK, US and other jurisdictions have made it impossible for us to appropriately and
ethically maintain compliance over 420,690+ users. Another concern leading to the shutdown is the prospect of the Australian government's so-called social media anti-trolling legislation. The legislation, which will make platforms
liable for defamation if they do not help to unmask an account making defamatory comments, would place a platform like Switter in the position of potentially having sex workers' or their clients' anonymity removed at a time when they can still face discrimination from banks, housing and other institutions for the work.
And winds up those that would like this to be taken literally
9th February 2022

3rd February 2022. See article from bbc.co.uk
Jimmy Carr: His Dark Material is a 2021 UK comedy by Brian Klein and Amanda Baker, starring Jimmy Carr.
Jimmy Carr finds humour in the darkest of places in this stand-up
comedy special. This special features Jimmy's trademark dry, sardonic wit and includes some jokes which Jimmy calls "career enders".
The programme was passed 18 uncut by the BBFC with a trigger warning for sexual violence
references, discrimination and language. Now comedian Jimmy Carr has sparked 'outrage' over a routine about the Holocaust in his latest Netflix stand-up special. The programme, titled His Dark Material, was released on the streaming
platform on Christmas Day but the clip about the Holocaust came to light more widely when it was shared on social media. The comedian, who introduced his show as being a career ender and warned it contained terrible things, said a positive of the
Holocaust was that thousands of Gypsies were murdered. The comments were greeted with applause and laughter from the audience. Between 200,000 and 500,000 Roma and Sinti people were murdered by the Nazis and their collaborators, according to the
Holocaust Memorial Day Trust. The charity's chief executive Olivia Marks-Woldman said: We are absolutely appalled at Jimmy Carr's comment about persecution suffered by Roma and Sinti people under Nazi oppression, and
horrified that gales of laughter followed his remarks.
Culture Secretary Nadine Dorries said the comments were abhorrent and they just shouldn't be on television. Dorries suggested the government could legislate to stop comedy people
find offensive being shown on streaming platforms. She told the BBC: We're already looking at future legislation to bring into scope those sorts of comments. Asked about a previous tweet where she said
left-wing snowflakes are killing comedy, Ms Dorries replied: Well, that's not comedy. SNP MP Martin Docherty-Hughes, who is co-chairman of the House of Commons All Party Parliamentary Group for Gypsies, Travellers and Roma, wrote on Twitter that he
was utterly speechless at this disregard for the horror of the Holocaust and its impact on the Gypsy community of Europe. Netflix declined to comment. Carr has also not commented.
Offsite Comment: Jimmy Carr must be free to say the unsayable

9th February 2022. See article from spiked-online.com by Simon Evans

The government has no business decreeing what is funny.
UK Government announces that the Online Censorship Bill will now extend to requiring identity/age verification to view porn
6th February 2022

See press release from gov.uk
On Safer Internet Day, Digital Censorship Minister Chris Philp has announced the Online Safety Bill will be significantly strengthened with a new legal duty requiring all sites that publish pornography to put robust checks in place to ensure their users
are 18 years old or over. This could include adults using secure age verification technology to verify that they possess a credit card and are over 18 or having a third-party service confirm their age against government data.
If sites fail to act, the independent regulator Ofcom will be able to fine them up to 10% of their annual worldwide turnover or block them from being accessible in the UK. Bosses of these websites could also be held criminally liable
if they fail to cooperate with Ofcom. A large amount of pornography is available online with little or no protections to ensure that those accessing it are old enough to do so. There are widespread concerns this is impacting the
way young people understand healthy relationships, sex and consent. Half of parents worry that online pornography is giving their kids an unrealistic view of sex and more than half of mums fear it gives their kids a poor portrayal of women.
Age verification controls are one of the technologies websites may use to prove to Ofcom that they can fulfil their duty of care and prevent children accessing pornography. Many sites where children are likely
to be exposed to pornography are already in scope of the draft Online Safety Bill, including the most popular pornography sites as well as social media, video-sharing platforms and search engines. But as drafted, only commercial porn sites that allow
user-generated content - such as videos uploaded by users - are in scope of the bill. The new standalone provision ministers are adding to the proposed legislation will require providers who publish or place pornographic content
on their services to prevent children from accessing that content. This will capture commercial providers of pornography as well as the sites that allow user-generated content. Any companies which run such a pornography site which is accessible to people
in the UK will be subject to the same strict enforcement measures as other in-scope services. The Online Safety Bill will deliver more comprehensive protections for children online than the Digital Economy Act by going further and
protecting children from a broader range of harmful content on a wider range of services. The Digital Economy Act did not cover social media companies, where a considerable quantity of pornographic material is accessible, and which research suggests
children use to access pornography. The government is working closely with Ofcom to ensure that online services' new duties come into force as soon as possible following the short implementation period that will be necessary after
the bill's passage. The onus will be on the companies themselves to decide how to comply with their new legal duty. Ofcom may recommend the use of a growing range of age verification technologies available for companies to use
that minimise the handling of users' data.

The bill does not mandate the use of specific solutions as it is vital that it is flexible to allow for innovation and the development and use of more effective technology in the future.

Age verification technologies do not require a full identity check. Users may need to verify their age using identity documents but the measures companies put in place should not process or store data that is irrelevant to the purpose of checking age.

Solutions that are currently available include checking a user's age against details that their mobile provider holds, verifying via a credit card check, and other database checks including government-held data such as passport data.
Any age verification technologies used must be secure, effective and privacy-preserving. All companies that use or build this technology will be required to adhere to the UK's strong data protection regulations or face enforcement
action from the Information Commissioner's Office. Online age verification is increasingly common practice in other online sectors, including online gambling and age-restricted sales. In addition, the government is working with
industry to develop robust standards for companies to follow when using age assurance tech, which it expects Ofcom to use to oversee the online safety regime.

Notes to editors:

Since the publication
of the draft Bill in May 2021 and following the final report of the Joint Committee in December, the government has listened carefully to the feedback on children's access to online pornography, in particular stakeholder concerns about pornography on
online services not in scope of the bill. To avoid regulatory duplication, video-on-demand services which fall under Part 4A of the Communications Act will be exempt from the scope of the new provision. These providers are already
required under section 368E of the Communications Act to take proportionate measures to ensure children are not normally able to access pornographic content. The new duty will not capture user-to-user content or search results
presented on a search service, as the draft Online Safety Bill already regulates these. Providers of regulated user-to-user services which also carry published (i.e. non user-generated) pornographic content would be subject to both the existing
provisions in the draft Bill and the new proposed duty.
Scaremongering tactics being used to mislead the public and make a bogus case for weakening encryption
6th February 2022

See article from openrightsgroup.org
The UK Home Office plans to force technology companies to remove the privacy and security of encrypted services such as WhatsApp and Signal as part of its Online Safety Bill. Even worse, the Home Office has launched a scaremongering
campaign, wasting hundreds of thousands of pounds on a London advertising agency to undermine public trust in a critical digital security tool that keeps people and businesses safe online. Undermining encryption would make our
private communications unsafe, allowing hostile strangers and governments to intercept conversations. Undermining encryption would put at risk the safety of those who need it most. Survivors of abuse or domestic violence, including children, need secure
and confidential communications to speak to loved ones and access the information and support they need. As Stephen Bonner, executive director for technology and innovation at the UK Information Commissioner's Office recently noted, end-to-end encryption
"strengthens children's online safety by not allowing criminals and abusers to send them harmful content or access their pictures or location." [1] Operation: Safe Escape [2] and LGBT Tech [3] --two
organisations that represent and safeguard vulnerable stakeholders--stress the vital importance of encrypted communications victims of domestic abuse and for LGBTQ+ people in countries where they face harassment, victimisation and even the threat of
execution. Far from making them safer, denying at-risk people a confidential lifeline puts them at greater and sometimes mortal risk. Anti-encryption policies threaten the fundamental human right to freedom of expression.
Compromising encryption would undermine investigative journalism that exposes corruption and criminality. According to the Centre for Investigative Journalism, without a secure means of communication, sources would go unprotected and whistleblowers will
hesitate to come forward. [4] Contrary to what the Home Office claims, leading cybersecurity experts conclude that even message scanning "creates serious security and privacy risks for all society while the
assistance it can provide for law enforcement is at best problematic." [5] Backdoors create an entry point for hostile states, criminals and terrorists to gain access to highly sensitive information. Weakening encryption negatively impacts the
global Internet [6] and means our private messages, sensitive banking information, personal photographs and privacy would be undermined. MI6 head, Richard Moore, used his first public speech to warn of the increased data security threat from hostile
countries. [7] By Mr Moore's analysis, the UK would be making things easier for hostile governments and waging a war against our own personal and national security. The UK government must reassess its decision to wage war
on a technology that is essential to so many people in the UK and beyond. Signatories:
- Access Now
- ACLAC (Latin American and Caribbean Encryption Coalition)
- Adam Smith Institute
- Africa Media and Information Technology Initiative (AfriMITI)
- Alec Muffett, Security Researcher
- Annie Machon
- ARTICLE19
- Big Brother Watch
- Centre for Democracy and Technology
- Christopher Parsons, Senior Research Associate, Citizen Lab, Munk School of Global Affairs & Policy at the University of Toronto
- Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
- Cybersecurity Advisors Network (CyAN)
- Dave Carollo, Product Manager, TunnelBear LLC
- Derechos Digitales -- Latin America
- Digital Rights Watch
- Dr. Duncan Campbell
- Electronic Frontier Foundation
- Faud Khan, CEO, TwelveDot Incorporated
- Fundación Karisma
- Global Partners Digital
- Glyn Moody
- Index on Censorship
- Instituto de Desarrollo Digital de América Latina y el Caribe (IDDLAC)
- Internet Society
- Internet Society Brazil Chapter
- Internet Society Catalonia Chapter
- Internet Society Germany Chapter
- Internet Society India Hyderabad
- Internet Society Portugal Chapter
- Internet Society Tchad Chapter
- Internet Society UK England Chapter
- Internet Freedom Foundation, India
- JCA-NET (Japan)
- Jens Finkhaeuser, Interpeer Project
- Prof. Dr. Kai Rannenberg, Goethe University Frankfurt, Chair of Mobile Business & Multilateral Security
- Kapil Goyal, Faculty Member, DAV College Amritsar
- Khalid Durrani, PureVPN
- Prof. Dr. Klaus-Peter Löhr, Freie Universität Berlin
- LGBT Technology Partnership
- Liberty
- Luke Robert Mason
- Mark A. Lane, Cryptologist, UNIX / Software Engineer
- OpenMedia
- Open Rights Group
- Open Technology Institute
- Peter Tatchell Foundation
- Privacy & Access Council of Canada
- Ranking Digital Rights
- Reporters Without Borders
- Riana Pfefferkorn, Research Scholar, Stanford Internet Observatory
- Simply Secure
- Sofía Celi, Latin American Cryptographers.
- Dr. Sven Herpig, Director for International Cybersecurity Policy, Stiftung Neue Verantwortung
- Tech For Good Asia
- The Law and Technology Research Institute of Recife (IP.rec)
- The Tor Project
- Dr. Vanessa Teague, Australian National University
- Yassmin Abdel-Magied
References:

[1] https://www.infosecurity-magazine.com/news/privacy-tsar-defense-encryption/
[2] https://safeescape.org/get-help/
[3] https://www.lgbttech.org/post/lgbt-tech-internet-society-release-new-encryption-infographic
[4] https://tcij.org/bespoke-training/information-security/
[5] https://arxiv.org/abs/2110.07450
[6] https://www.internetsociety.org/resources/doc/2022/iib-encryption-uk-online-safety-bill/
[7] https://www.bbc.com/news/uk-59470026
Government defines a wide range of harms that will lead to criminal prosecution and that will require censorship by internet intermediaries
3rd February 2022

See press release from gov.uk
Online Safety Bill strengthened with new list of criminal content for tech firms to remove as a priority

Previously, firms would have been forced to take such content down after it had been reported to them by users, but now they must be proactive and prevent people being exposed in the first place. It will clamp down on pimps and human traffickers, extremist groups encouraging violence and racial hate against minorities, suicide chatrooms and the spread of private sexual images of women without their consent.
Naming these offences on the face of the bill removes the need for them to be set out in secondary legislation later and Ofcom can take faster enforcement action against tech firms which fail to remove the named illegal content.
Ofcom will be able to issue fines of up to 10 per cent of annual worldwide turnover to non-compliant sites or block them from being accessible in the UK. Three new criminal offences, recommended by the Law
Commission, will also be added to the Bill to make sure criminal law is fit for the internet age. The new communications offences will strengthen protections from harmful online behaviours such as coercive and controlling
behaviour by domestic abusers; threats to rape, kill and inflict physical violence; and deliberately sharing dangerous disinformation about hoax Covid-19 treatments. The government is also considering the Law Commission's
recommendations for specific offences to be created relating to cyberflashing, encouraging self-harm and epilepsy trolling. To proactively tackle the priority offences, firms will need to make sure the features, functionalities
and algorithms of their services are designed to prevent their users encountering them and minimise the length of time this content is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting
suspicious users and having effective systems in place to prevent banned users opening new accounts.

New harmful online communications offences

Ministers asked the Law Commission to review the
criminal law relating to abusive and offensive online communications in the Malicious Communications Act 1988 and the Communications Act 2003. The Commission found these laws have not kept pace with the rise of smartphones and
social media. It concluded they were ill-suited to address online harm because they overlap and are often unclear for internet users, tech companies and law enforcement agencies. It found the current law over-criminalises and
captures 'indecent' images shared between two consenting adults - known as sexting - where no harm is caused. It also under-criminalises - resulting in harmful communications without appropriate criminal sanction. In particular, abusive communications
posted in a public forum, such as posts on a publicly accessible social media page, may slip through the net because they have no intended recipient. It also found the current offences are sufficiently broad in scope that they could constitute a
disproportionate interference in the right to freedom of expression. In July the Law Commission recommended more coherent offences. The Digital Secretary today confirms new offences will be created and legislated for in the Online
Safety Bill. The new offences will capture a wider range of harms in different types of private and public online communication methods. These include harmful and abusive emails, social media posts and WhatsApp messages, as well
as 'pile-on' harassment where many people target abuse at an individual such as in website comment sections. None of the offences will apply to regulated media such as print and online journalism, TV, radio and film. The offences
are: A 'genuinely threatening' communications offence, where communications are sent or posted to convey a threat of serious harm. This offence is designed to better capture online threats to
rape, kill and inflict physical violence or cause people serious financial harm. It addresses limitations with the existing laws which capture 'menacing' aspects of the threatening communication but not genuine and serious threatening behaviour.
It will offer better protection for public figures such as MPs, celebrities or footballers who receive extremely harmful messages threatening their safety. It will address coercive and controlling online behaviour and stalking,
including, in the context of domestic abuse, threats related to a partner's finances or threats concerning physical harm.
A harm-based communications offence to capture communications sent to cause harm without a
reasonable excuse. This offence will make it easier to prosecute online abusers by abandoning the requirement under the old offences for content to fit within proscribed yet ambiguous categories such as "grossly
offensive," "obscene" or "indecent". Instead it is based on the intended psychological harm, amounting to at least serious distress, to the person who receives the communication, rather than requiring proof that harm was caused.
The new offences will address the technical limitations of the old offences and ensure that harmful communications posted to a likely audience are captured. The new offence will consider the context in which the communication was
sent. This will better address forms of violence against women and girls, such as communications which may not seem obviously harmful but when looked at in light of a pattern of abuse could cause serious distress. For example, in the instance where a
survivor of domestic abuse has fled to a secret location and the abuser sends the individual a picture of their front door or street sign. It will better protect people's right to free expression online. Communications that are
offensive but not harmful and communications sent with no intention to cause harm, such as consensual communication between adults, will not be captured. It will have to be proven in court that a defendant sent a communication without any reasonable
excuse and did so intending to cause serious distress or worse, with exemptions for communication which contributes to a matter of public interest.
An offence for when a person sends a communication they know to be
false with the intention to cause non-trivial emotional, psychological or physical harm. Although there is an existing offence in the Communications Act that captures knowingly false communications, this new offence
raises the current threshold of criminality. It covers false communications deliberately sent to inflict harm, such as hoax bomb threats, as opposed to misinformation where people are unaware what they are sending is false or genuinely believe it to be
true. For example, if an individual posted on social media encouraging people to inject antiseptic to cure themselves of coronavirus, a court would have to prove that the individual knew this was not true before posting it.
The maximum sentences for each offence will differ. If someone is found guilty of a harm-based offence they could go to prison for up to two years, up to 51 weeks for the false communication offence and up to five years for the threatening communications offence. The maximum sentence was six months under the Communications Act and two years under the Malicious Communications Act.

Notes

The draft Online Safety Bill in its
current form already places a duty of care on internet companies which host user-generated content, such as social media and video-sharing platforms, as well as search engines, to limit the spread of illegal content on these services. It requires them to
put in place systems and processes to remove illegal content as soon as they become aware of it but take additional proactive measures with regards to the most harmful 'priority' forms of online illegal content. The priority
illegal offences currently listed in the draft bill are terrorism and child sexual abuse and exploitation, with powers for the DCMS Secretary of State to designate further priority offences with Parliament's approval via secondary legislation once the
bill becomes law. In addition to terrorism and child sexual exploitation and abuse, the further priority offences to be written onto the face of the bill include illegal behaviour which has been outlawed in the offline world for years but also newer
illegal activity which has emerged alongside the ability to target individuals or communicate en masse online. This list has been developed using the following criteria: (i) the prevalence of such content on regulated services,
(ii) the risk of harm being caused to UK users by such content and (iii) the severity of that harm. The offences will fall in the following categories:
- Encouraging or assisting suicide
- Offences relating to sexual images i.e. revenge and extreme pornography
- Incitement to and threats of violence
- Hate crime
- Public order offences - harassment and stalking
- Drug-related offences
- Weapons / firearms offences
- Fraud and financial crime
- Money laundering
- Controlling, causing or inciting prostitution for gain
- Organised immigration offences
US senate reintroduces EARN IT internet censorship bill targeting adult content
30th January 2022

See article from xbiz.com
Adult industry, sex worker and digital rights advocates have unanimously sounded the alarm about the state censorship and privacy implications of the revived EARN IT Act, which was reintroduced in the US Senate by Senator Richard Blumenthal. The bill is a broad overhaul of Section 230 protections that would strip platforms of immunity for third-party uploaded content. The expected industry response is for social media and internet services to ban entire swathes of controversial content rather than attempt to identify just those posts that contravene the law. As XBIZ has reported, EARN IT will also open the way for politicians to define the legal categories of pornography and pornographic website as they wish, which is a cherished goal of morality campaigners who seek to reintroduce obscenity prosecutions for content currently protected by free speech jurisprudence. EARN IT has been championed by top religiously motivated anti-porn crusading groups such as NCOSE (formerly Morality in Media).