Government publishes white paper outlining the extension of suffocating TV state censorship to the major streaming services

30th April 2022. See press release from gov.uk. See white paper [pdf] from assets.publishing.service.gov.uk
Rapid changes in technology, viewing habits and the emergence of global media giants have brought new challenges for UK broadcasters. More people are watching programmes on their phones, laptops, tablets, games consoles and on smart TVs. Competition
for viewers and advertising revenue has intensified. According to Ofcom, the share of total viewing for 'linear' TV channels such as ITV and the BBC fell by more than ten per cent between 2017 and 2020. The share for subscription
video-on-demand services such as Netflix and Amazon Prime Video rose from 6% to 19% over the same period. Proposals include measures to protect audiences from a wider range of harmful material - such as unchallenged health claims
- while watching programmes on video-on-demand services (VoDs). These services will be brought under UK jurisdiction and subject to a Video-on-Demand Code similar to the Broadcasting Code, enforced by Ofcom. Fines for breaches could be up to £250,000 or
five per cent of annual turnover. The government will also move ahead with plans to move Channel 4 out of public ownership to become a privately-owned public service broadcaster like ITV and Channel 5, while requiring it to continue to meet the obligations placed on PSBs. The government intends to legislate as soon as the parliamentary timetable allows.

Regulation of video-on-demand services

Ofcom estimates
three in four UK households use a subscription video-on-demand (VoD) service. But services like Disney+ and Amazon Prime Video are not regulated in the UK to the same extent as UK linear TV channels. Netflix and Apple TV+ are not regulated in the UK at
all. Except for BBC iPlayer, on-demand services are not subject to Ofcom's Broadcasting Code which sets standards for content including harmful or offensive material, accuracy, fairness and privacy. There are some protections for
under-18s but minimal rules exist to protect audiences from, for example, misleading health advice or pseudoscience documentaries. The government will give Ofcom powers to draft and enforce a new Video-on-Demand Code, similar to
the Broadcasting Code and in line with its standards, to make sure VoD services, which target and profit from UK audiences, are subject to stricter rules protecting UK audiences from harmful material. This will primarily be aimed at larger 'TV-like'
video-on-demand services such as Netflix, ITV Hub and NOW TV, and will level the rules between VoD services and traditional broadcasters. UK viewers will be given new powers to complain to Ofcom if they see something concerning and will
be better protected from harmful material. Ofcom will be given a strengthened duty to assess on-demand providers' audience protection measures such as age ratings and viewer guidance, with powers to force changes if necessary. The
maximum fine for regulated VoD services will be £250,000 or an amount up to five per cent of an organisation's revenue, whichever is higher.
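To make the 'whichever is higher' wording concrete, here is a minimal sketch of the fine cap; the function name and the example turnover figures are illustrative and not taken from the white paper:

```python
def max_vod_fine(annual_revenue_gbp: float) -> float:
    # "£250,000 or an amount up to five per cent of an organisation's
    # revenue, whichever is higher": the £250,000 acts as a floor.
    return max(250_000.0, 0.05 * annual_revenue_gbp)

# A small service with £1m revenue still faces a cap of £250,000,
# while a £2bn streamer faces a cap of £100m.
print(max_vod_fine(1_000_000))      # 250000.0
print(max_vod_fine(2_000_000_000))  # 100000000.0
```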
Offsite comment: We don't need to be protected from Netflix

30th April 2022. See article from spiked-online.com by Matthew Lesh
Offsite article: Bill compliance costs will hit smaller companies the most

30th April 2022. See article from verdict.co.uk
Anti-sex trade MP calls for Ofcom to monitor consent in porn films
27th April 2022. See article from politicshome.com
Diana Johnson is an MP known for her campaigning against the sex trade. She has called for the Online Censorship Bill to include new powers for internet censor Ofcom to investigate whether adult entertainers have properly consented to appear in
pornographic films. She said she wants to see Ofcom be more proactive in investigating issues around consent in the online pornography industry, rather than wait for complaints to be made. The bill now stipulates that commercial pornography websites
must implement age/identity verification checks to ensure all their users are aged 18 and over. However, Johnson told PoliticsHome that the legislation does not go far enough on the matter of protecting women's bodies from sexual exploitation. She wants
the government to crack down on ensuring adult entertainers are of age and have properly consented to appear in online videos. Others also see personal advantage in this idea of consent. Jason Domino, an adult performer and representative with the
United Sex Workers union, believes trade unions should be responsible for overseeing consent in the industry. He said: Why are the voices of the trade union of sex workers not involved in this policy currently?
Ofcom has no experience at this point of dealing with this topic, and there are many politicians who also have no experience at all, particularly when it comes to matters of people's privacy.
Council of Europe calls for porn blocking software to be installed on all personal devices

27th April 2022. See article from xbiz.com
The Council of Europe is the Europe-wide (beyond the EU) organisation best known for running the European Court of Human Rights. Now the Parliamentary Assembly of the Council of Europe has issued a resolution urging European nations to mandate online
filters for pornographic materials on all devices, to be systematically activated in public spaces, such as schools, libraries and youth clubs. The parliamentarians expressed deep concern at: The unprecedented
exposure of children to pornographic imagery, which is detrimental to their psychological and physical development. This exposure brings increased risks of harmful gender stereotyping, addiction to pornography and early, unhealthy sex.
The parliamentarians didn't offer any definitions of what they consider unhealthy sex or pornographic materials, nor did they explain how these mandatory filters would be coded and by whom. The Council's statement invites member states to
examine the existing means and provisions to combat children's exposure to pornographic content and address the gaps in relevant legislation and practice with a view to better protecting children. It calls for relevant legislation to ensure that both
dedicated websites hosting adult content, and mainstream and social media which include adult content, are obliged to use age verification tools. The resolution also advocates the introduction of an alert button or similar solutions for children to
report accidental access to pornographic content, and envisages follow-up actions, such as warnings or penalties for relevant websites.
Don't hold ordinary social media users responsible for other users' responses

27th April 2022. See CC article from eff.org
Courts and legislatures around the globe are hotly debating to what degree online intermediaries (the chain of entities that facilitate or support speech on the internet) are liable for the content they help publish. One thing they should not be
doing is holding social media users legally responsible for comments posted by others to their social media feeds, EFF and Media Defence told the European Court of Human Rights (ECtHR). Before the court is the case Sanchez v. France, in which a politician argued that his right to freedom of expression was violated when he was subjected to a criminal fine for not promptly deleting hateful comments posted on the "wall" of his Facebook account by others. The
ECtHR's Chamber, a judicial body that hears most of its cases, found there was no violation of freedom of expression, extending its rules for online intermediaries to social media users. The politician is seeking review of this decision by ECtHR's Grand
Chamber, which only hears its most serious cases. EFF and Media Defence, in an amicus brief submitted to the Grand Chamber, asked it to revisit the Chamber's expansive interpretation of how intermediary liability rules should
apply to social media users. Imposing liability on them for third-party content will discourage social media users, especially journalists, human rights defenders, civil society actors, and political figures, from using social media platforms, as they
are often targeted by governments seeking to suppress speech. Subjecting these users to liability would make them vulnerable to coordinated attacks on their sites and pages meant to trigger liability and removal of speech, we told the court.
Further, ECtHR's current case law does not support, and should not be applied to, social media users who act as intermediaries, we said. The ECtHR laid out its intermediary liability rules in Delfi A.S. v. Estonia, which concerned
the failure of a commercial news media organization to monitor and promptly delete "clearly unlawful" comments online. The ECtHR rules consider whether the third-party commenters can be identified, and whether they have any control over their
comments once they submit them. In stark contrast, Sanchez concerns the liability of an individual internet user engaged in non-commercial activity. The politician was charged with incitement to hatred or violence against a
group of people or an individual on account of their religion based on comments others posted on his Facebook wall. The people who posted the comments were convicted of the same criminal offence, and one of them later deleted the allegedly unlawful
comments. What's more, the decision about what online content is "clearly unlawful" is not always straightforward, and generally courts are best placed to assess the lawfulness of the online content. While social media
users may be held responsible for failing or refusing to comply with a court order compelling them to remove or block information, they should not be required to monitor content on their accounts to avoid liability, nor should they be held liable simply
when they get notified of allegedly unlawful speech on their social media feeds by any method other than a court order. Imposing liability on an individual user, without a court order, to remove the allegedly unlawful content in question will be
disproportionate, we argued. Finally, the Grand Chamber should decide whether imposing criminal liability for third party content violates the right to freedom of expression, given the peculiar circumstances in this case. Both the
applicant and the commenters were convicted of the same offence a decade ago. EFF and Media Defence asked the Grand Chamber to assess the quality of the decades-old laws, one dating back to 1881, under which the politician was convicted, saying criminal
laws should be adapted to meet new circumstances, but these changes must be precise and unambiguous to enable someone to foresee what conduct would violate the law. Subjecting social media users to criminal responsibility for
third-party content will lead to over-censorship and prior restraint. The Grand Chamber should limit online intermediary liability, and not chill social media users' right to free expression and access to information online. You can read our
amicus brief here: https://www.eff.org/document/sanchez-v-france-eff-media-defence-ecthr-brief
New Twitter owner Elon Musk outlines moves to enhance freedom of speech

26th April 2022. See article from reclaimthenet.org
Tesla CEO Elon Musk has agreed terms to buy the social media platform Twitter. He has outlined a change of direction from the usual social media censorship, saying that he favours a more free speech approach. He explained: Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated. I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans.
Musk has made free speech the main focal point of his Twitter takeover bid. In his initial offer, Musk described free speech as a societal imperative for a functioning democracy and said Twitter needs to be transformed as a private
company because it won't serve this societal imperative in its current form. In interviews, Musk has hinted that some of the Twitter changes he wants to make include having timeouts instead of permanent bans, making the boosting and suppression of
tweets transparent to users, and open-sourcing the algorithm. He's also vowed to defeat the spam bots or die trying! He said further: My strong intuitive sense is that having a public platform that is maximally trusted
and broadly inclusive is extremely important to the future of civilization. I don't care about the economics at all.
I hope that even my worst critics remain on Twitter, because that is what free speech means.
Offsite article: YouTube is emailing users to say members of the community are 'concerned' about their comments

26th April 2022. By Tom Parker. See article from reclaimthenet.org
Offsite article: The UK government is actively encouraging Big Tech censorship

25th April 2022. By Matthew Lesh. See article from spiked-online.com
Surveyed porn users indicate that they will be unlikely to hand over their identity documents for age verification
22nd April 2022. See article from techradar.com
So what will porn users do should their favourite porn site succumb to age verification? Will they decide to use a VPN, or else try Tor, or perhaps exchange porn with their friends? Or perhaps there will be an opportunity for a black market to spring up. Another option would be to seek out lesser known foreign porn sites that can fly under the radar. All of these options seem more likely than users dangerously handing over identity documents to any porn website that asks. According to a new survey
from YouGov, 78% of the 2,000 adults surveyed would not be willing to verify their age to access adult websites by uploading a document linked to their identity such as a driver's license, passport or other ID card. Of the participants who believe
that visiting adult websites can be part of a healthy sexual lifestyle, just 17% are willing to upload their ID. The main reasons for their decisions were analysed. 64% just don't trust the companies to keep their data safe while 63% are scared their
information could end up in the wrong hands. 49% are concerned about adult websites suffering data breaches which could expose their personal information. Director of the privacy campaigner Open Rights Group, Jim Killock explained in a press release
that those who want to access adult websites anonymously will just use a VPN if the UK's Online Safety legislation passes, saying: The government assumes that people will actually upload their ID to access adult content. The data shows that this is a
naive assumption. Instead, adults will simply use a VPN (as many already do) to avoid the step, or they'll go to smaller, unmoderated sites which exist outside the law. Smaller adult sites tend to be harder to regulate and could potentially expose
users, including minors, to more extreme or illegal content.
The EU is moving towards the conclusion of its new internet censorship law, the Digital Services Act

22nd April 2022. See article from nytimes.com
The European Union is nearing a conclusion to internet censorship legislation that would force Facebook, YouTube and other internet services to censor 'misinformation', disclose how their services' algorithms work, and stop targeting online ads based on a
person's ethnicity, religion or sexual orientation. The law, called the Digital Services Act, is intended to force social media platforms to more aggressively police content deemed unacceptable or risk billions of dollars in fines. Tech companies would be
compelled to set up new policies and procedures to remove flagged hate speech, terrorist propaganda and other material defined as illegal by countries within the European Union. The Digital Services Act is part of a one-two punch by the European
Union to address the societal and economic effects of the tech giants. Last month, the 27-nation bloc agreed to a different sweeping law, the Digital Markets Act, to counter what regulators see as anticompetitive behavior by the biggest tech firms,
including their grip over app stores, online advertising and internet shopping. The new law is ambitious but the EU is noted for producing crap legislation that doesn't work in practice. Lack of enforcement of the European Union's data privacy law,
the General Data Protection Regulation, or G.D.P.R., has cast a shadow over the new law. Like the Digital Services Act and Digital Markets Act, G.D.P.R. was hailed as landmark legislation. But since it took effect in 2018, there has been little action
against Facebook, Google and others over their data-collection practices. Many have sidestepped the rules by bombarding users with consent windows on their websites.
Apple to add private message image scanning for nudity to UK iPhones

22nd April 2022. See article from theguardian.com
Apple is set to roll out a snooping feature that scans messages for nudity to UK iPhones. The feature uses AI technology to scan incoming and outgoing messages. For the moment it is optional and allows parents to turn on warnings for their
children's iPhones. When enabled, all photos sent or received by the child using the Messages app will be scanned for nudity. If nudity is found in photos received by a child with the setting turned on, the photo will be blurred, and the child
will be warned that it may contain sensitive content and nudged towards resources from child safety groups. If nudity is found in photos sent by a child, similar protections kick in, and the child is encouraged not to send the images, and given an option
to Message a Grown-Up. All the scanning is carried out on-device, meaning that the images are analysed by the iPhone itself, and Apple never sees either the photos being analysed or the results of the analysis, it said. As originally
announced in summer 2021, the communication safety in Messages and the search warnings were part of a trio of features that proved extremely contentious, and Apple delayed the launch of all three while it negotiated with privacy and child safety groups.
Of course, having implemented the feature as an option, it won't be long before it becomes a capability that can be turned on by law enforcement in the name of seeking out terrorists, racists, anti-vaxxers etc.
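As a rough illustration of the on-device flow described above; this is a hedged sketch only, with a stubbed classifier and invented names, since Apple has not published the feature's internals:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Direction(Enum):
    INCOMING = auto()
    OUTGOING = auto()

@dataclass
class Photo:
    pixels: bytes  # image data; in the flow described, it never leaves the device

def nudity_score(photo: Photo) -> float:
    """Stand-in for the on-device classifier (returns 0.0 to 1.0).
    Apple's real model is private; this stub only illustrates the interface."""
    return 0.0

def handle_message_photo(photo: Photo, direction: Direction,
                         threshold: float = 0.8) -> str:
    # All analysis is local: neither the photo nor the score is uploaded.
    if nudity_score(photo) < threshold:
        return "show"  # nothing sensitive detected; display normally
    if direction is Direction.INCOMING:
        # Blur the image, warn the child, and point to safety resources.
        return "blur_and_warn"
    # Outgoing: discourage sending and offer to Message a Grown-Up.
    return "confirm_before_send"

print(handle_message_photo(Photo(b""), Direction.INCOMING))  # "show" with the stub
```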
Macron continues to call for ID verification before people are allowed to use social media

20th April 2022. See article from reclaimthenet.org
Two weeks prior to the French presidential election, President Emmanuel Macron reopened the debate on ending online anonymity. The president is open to the idea of dismantling foreign platforms if they do not require users to verify their identity before
they can post. Macron told Le Point last week: In a democratic society, there should be no anonymity. You can't walk around in the street wearing a hood. On the Internet, people allow themselves, because they are hooded
behind a pseudonym, to say the vilest things.
Macron began his campaign against online anonymity in January 2019, saying it was time to move towards a gradual lifting of all forms of anonymity. In the latest interview,
Macron attacked US Big Tech platforms, claiming: They come to use our ancient or post-revolutionary freedoms to divert from their essence. We need to create a public order, like in the street.
This is not the state of nature. On social media networks, you can kill reputations, spread false news, drive people to suicide.
Macron hopes that the Digital Markets Act and Digital Services Act will be a solution to the problem of
online anonymity and Big Tech antitrust practices.
US appeal court finds that it is legal to use the data downloaded by a browser to display a web page

20th April 2022. See article from neowin.net
Scraping is a term used to describe the automated use of the HTML page data which is downloaded by a browser and then used to display a web page. Perhaps the most obvious example is to select a section of text on a web page and use copy and paste to
insert the text into another place. The US Ninth Circuit Court of Appeals may have set an important precedent in the tech world. The court has essentially concluded that Data Scraping is not hacking. Hence, it might not be illegal to scrape data from
websites and social media platforms unless there are defensive technologies in place. After listening to the arguments in a case involving Microsoft-owned LinkedIn and competitor hiQ Labs, the Ninth Circuit Court of Appeals has concluded
that scraping publicly available data does not constitute a federal crime. The case dates back to 2017, when LinkedIn filed suit against hiQ Labs. The social media platform for professionals had objected to its data being scraped. LinkedIn
essentially wanted hiQ Labs to immediately cease scraping public data from the social networking site. During the first trial, the court sided with hiQ Labs, noting that LinkedIn couldn't invoke federal hacking laws to stop the practice. The court
opined that hiQ Labs' behavior didn't seem to violate any laws, and hence the company's actions could not be classified as a crime. A defining feature of public websites is that their publicly available sections lack limitations on access;
instead those sections are open to anyone with a web browser. In other words, applying the gates analogy to a computer hosting publicly available webpages, that computer has erected no gates to lift or lower in the first place. Simply put, had LinkedIn
deployed mechanisms to prevent data from being scraped, hiQ Labs would have been in the wrong. However, since there were no restrictions, LinkedIn's insistence that hiQ Labs must cease its practice doesn't have any merit.
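For readers unfamiliar with the mechanics, here is a minimal sketch of scraping in the sense the court considered: fetching a public page that has 'erected no gates' and extracting data from the same HTML a browser would download. The URL is a neutral placeholder, not LinkedIn or hiQ's code:

```python
# Fetch a publicly available page and pull the <title> text out of the
# HTML, i.e. the same data a browser downloads to display the page.
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class TitleScraper(HTMLParser):
    """Collects the text found inside <title> tags."""
    def __init__(self) -> None:
        super().__init__()
        self._in_title = False
        self.titles: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.titles.append(data.strip())

# example.com stands in for any public page with no access restrictions.
request = Request("https://example.com/", headers={"User-Agent": "demo-scraper/0.1"})
html = urlopen(request).read().decode("utf-8", errors="replace")

scraper = TitleScraper()
scraper.feed(html)
print(scraper.titles)  # ['Example Domain']
```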
The UK government's Online Censorship Bill will get a 2nd reading debate in the House of Commons on Tuesday 19th April

18th April 2022. See press release from gov.uk
Repressive new censorship laws return to Parliament for their second reading this week:

- Online censorship legislation will be debated in the Commons
- Comes as new plans to support some people and fight deemed falsities online are launched
- Funding boost will help people's critical thinking online through a new expert Media Literacy Taskforce alongside proposals to pay for training for teachers and library workers

Parliamentarians will debate the government's groundbreaking Online Censorship Bill which requires social media platforms, search engines and other apps and websites allowing people to post content to censor 'wrong think' content. Ofcom, the official state censor, will have the power to fine companies failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites. Crucially, the laws have strong measures to safeguard children from harmful content such as pornography and child sexual abuse.
Offsite article: Online Safety Bill: What issues does the Bill pose for UK businesses operating online?

2nd April 2022. See article from lexology.com