Parts of the Online Censorship Act have come into force

31st January 2024

See press release from gov.uk
Abusers, trolls and predators online now face a fleet of tough new jailable offences from Wednesday 31 January, as offences for cyberflashing, sending death threats and epilepsy-trolling are written into the statute book after the Online Safety Act gained Royal Assent. These new criminal offences will protect people from a wide range of abuse and harm online, including threatening messages, the non-consensual sharing of intimate images (known as revenge porn), and the sending of fake news that aims to cause non-trivial physical or psychological harm.

Dubbed Zach's Law, a new offence will also mean that online trolls who send or show flashing images electronically with the intention of causing harm to people with epilepsy will be held accountable for their actions and face prison.

Following the campaigning of Love Island star Georgia Harrison, bitter ex-partners and other abusers who share, or threaten to share, intimate images on or offline without the consent of those depicted will face jail time under new offences from today. Those found guilty of the base offence of sharing an intimate image could face up to 6 months in prison, or 2 years if it is proven that the perpetrator also intended to cause distress, alarm or humiliation, or shared the image to obtain sexual gratification.

Cyberflashing on dating apps, AirDrop and other platforms will also result in perpetrators facing up to two years behind bars where it is done to gain sexual gratification, or to cause alarm, distress or humiliation. Sending death threats or threatening serious harm online will carry a jail sentence of up to five years under a new threatening communications offence that outlaws threats made online that would be illegal if said in person. A new false communications offence will bring internet trolls to justice by outlawing the intentional sending of false information that could cause non-trivial psychological or physical harm to users online. This new offence will bolster the government's commitment to clamping down on dangerous disinformation and election interference online.

In the wake of sickening content, often targeted at children, that encourages users to self-harm, a new offence will mean that individuals who post content encouraging or assisting serious self-harm could face up to 5 years behind bars.

While much of the Online Safety Act is intended to hold tech companies and social media platforms to account for the content hosted on their sites, these new offences will apply directly to the individuals sending threatening or menacing messages. Some of the offences that commence today will be further bolstered when the wide-ranging Criminal Justice Bill completes its passage through Parliament.
British MPs whinge at Disney+ streaming channel for not using BBFC ratings

30th January 2024

See article from uk.news.yahoo.com
Conservative MP Miriam Cates (Penistone and Stocksbridge) was among the Tory backbenchers arguing that video-on-demand platforms should be mandated to use either British Board of Film Classification (BBFC) age ratings or equivalent standards. The MP, a member of the right-wing New Conservatives faction, told the Commons that the watershed for adult content has become increasingly redundant in the streaming era, as MPs continued their consideration of the Media Bill, which aims to impose broadcast TV censorship rules on internet streaming companies. Cates said: We urgently need to apply the same standards of child protection to on-demand video as we do to cinema releases, physical DVDs and linear TV.

She warned that the current position of the Bill is to shy away from setting that minimum standard for age ratings, and noted that Netflix and Amazon Prime have both voluntarily set up partnerships to include BBFC ratings on their content. But the reluctance of Disney Plus and others to follow suit shows why this kind of regulation is needed, she said.

Gary Streeter, the Conservative MP for South West Devon, tabled an amendment to the Media Bill setting out objective criteria for age ratings, targeted at streaming services like Disney. While he commended Netflix, Apple and Amazon for using BBFC ratings, he added: The current ratings free-for-all has seen Disney Plus classify scenes of sexual abuse as suitable for nine-year-olds, and scenes of graphic misogynistic violence or offensive antisemitic stereotypes as suitable for 12-year-olds, lower ratings than it gives some of its Star Wars or superhero content.
Meta details extended censorship rules for under 18s

28th January 2024

See article from about.instagram.com
Meta writes in a blog post:

New Protections to Give Teens More Age-Appropriate Experiences on Our Apps

We will start to hide more types of content for teens on Instagram and Facebook, in line with expert guidance. We're automatically placing all teens into the most restrictive content control settings on Instagram and Facebook and restricting additional terms in Search on Instagram. We're also prompting teens to update their privacy settings on Instagram in a single tap with new notifications.

We want teens to have safe, age-appropriate experiences on our apps. We've developed more than 30 tools and resources to support teens and their parents, and we've spent over a decade developing policies and technology to address content that breaks our rules or could be seen as sensitive. Today, we're announcing additional protections that are focused on the types of content teens see on Instagram and Facebook.

New Content Policies for Teens

We regularly consult with experts in adolescent development, psychology and mental health to help make our platforms safe and age-appropriate for young people, including improving our understanding of which types of content may be less appropriate for teens. Take the example of someone posting about their ongoing struggle with thoughts of self-harm. This is an important story, and can help destigmatize these issues, but it's a complex topic and isn't necessarily suitable for all young people. Now, we'll start to remove this type of content from teens' experiences on Instagram and Facebook, as well as other types of age-inappropriate content. We already aim not to recommend this type of content to teens in places like Reels and Explore, and with these changes, we'll no longer show it to teens in Feed and Stories, even if it's shared by someone they follow. We want people to find support if they need it, so we will continue to share resources from expert organizations like the National Alliance on Mental Illness when someone posts content related to their struggles with self-harm or eating disorders. We're starting to roll these changes out to teens under 18 now, and they'll be fully in place on Instagram and Facebook in the coming months.

We're automatically placing teens into the most restrictive content control setting on Instagram and Facebook. We already apply this setting for new teens when they join Instagram and Facebook, and are now expanding it to teens who are already using these apps. Our content recommendation controls (known as "Sensitive Content Control" on Instagram and "Reduce" on Facebook) make it more difficult for people to come across potentially sensitive content or accounts in places like Search and Explore.

Hiding More Results in Instagram Search Related to Suicide, Self-Harm and Eating Disorders

While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content, and we have been focused on ways to make it harder to find. Now, when people search for terms related to suicide, self-harm and eating disorders, we'll start hiding these related results and will direct them to expert resources for help. We already hide results for suicide and self-harm search terms that inherently break our rules, and we're extending this protection to include more terms. This update will roll out for everyone over the coming weeks.
Meta says that it will soon restrict content for self-declared under 18s and under 16s

16th January 2024

See article from parentstv.org
See article from telecomlead.com
The Wall Street Journal has reported that Meta plans to automatically restrict teen Instagram and Facebook accounts from content including videos and posts about self-harm, graphic violence and eating disorders. Under 18 accounts, based on the birth date
entered during sign-up, will automatically be placed into the most restrictive content settings. Teens under 16 won't be shown sexual content. Meta stated that these measures, expected to be implemented over the forthcoming weeks, are intended to
curate a more age-appropriate experience for young users. The heightened regulatory attention followed testimony in the U.S. Senate by a former Meta employee, Arturo Bejar, who claimed that the company was aware of the harassment and other harms
faced by teens on its platforms but failed to take appropriate action.
Italy decides to censor social media influencers as if they were publishers

16th January 2024

See article from reclaimthenet.org
In a move that restricts the freedoms and rights of social media influencers, the Italian communications regulator AGCOM has announced that people with a following exceeding 1,000,000 will now be legally considered producers of audio-visual content, placing them on the same legal footing as publishers. This drastic change was revealed in the aftermath of an investigation into Chiara Ferragni, a notable adversary of Prime Minister Giorgia Meloni and Italy's most prominent social media influencer, over alleged fraudulent activities tied to a holiday cake charity event.

Currently, influencers within Europe using influencer-marketing strategies are treated not as media organizations but as sellers or traders. However, AGCOM intends to widen this viewpoint, likening these influencers to TV channels, marketing agencies and publishers, thereby imposing greater responsibility for all kinds of content they produce. This new classification increases the legal and reputational hazards influencers face when publishing material.

Under the new regulations, influencers are compelled to clearly distinguish sponsored content and ads, with penalties reaching up to a quarter of a million euros for non-compliance. Violations concerning child protection could warrant penalties exceeding half a million euros. Even non-commercial content produced by influencers must adhere to anti-discrimination regulations and uphold various standards currently imposed on traditional media creators, such as abstaining from disseminating misinformation, hate speech, or the promotion of harmful behavior like excessive alcohol consumption.
Ofcom speaks of behind-the-scenes discussions for international age verification

14th January 2024

See article from ofcom.org.uk
The UK government calls for evidence for its biased review seeking to further censor and control internet pornography

11th January 2024

See article from gov.uk
The UK Government's Department for Science, Innovation, Technology and Censorship has called for evidence to inform the final recommendations of its 'Independent' Pornography Review. The government writes:

The government wants to ensure that any legislation and regulation operates appropriately for all pornographic content, and that the criminal justice system has the tools it needs to respond to online illegal pornographic material, and to exploitation and abuse in the industry.

The Independent Pornography Review involves a comprehensive assessment of the legislation, regulation and enforcement of online and offline pornographic content, and is overseen by Independent Lead Reviewer Baroness Gabby Bertin. The review will take an evidence-based approach to develop a range of recommendations on how best to achieve the review's objectives:

understand the prevalence and harmful impact of illegal pornography online, and the impact on viewers of other forms of legal pornography, including emerging themes like AI-generated pornography, and the impact on viewers' attitudes to violence against women and girls;

assess the public's awareness and understanding of existing regulation and legislation of pornography;

consider the current rules in place to regulate the pornography industry, comparing online and offline laws;

determine if law enforcers and the justice system are responding to illegal pornography sufficiently, and if change is needed;

find out how prevalent human trafficking and exploitation is in the industry, before recommending how to identify and tackle this;

use this knowledge to set out what more can be done to provide those who need it with guidance on the potential harmful impact of pornography.

To ensure the review's final recommendations are robust, it is important that a broad range of views and evidence are considered. This call for evidence invites members of the public, subject matter experts and organisations to contribute to the review. The call for evidence closes on 7 March 2024.
Irish internet censor announces list of websites subject to censorship

11th January 2024

See article from irishexaminer.com
The Irish internet censor, Coimisiún na Meán, has published the list of 10 'video-sharing platform services' that will be subject to censorship under the new Online Safety Code. Instagram, Reddit, Facebook, and X, formerly Twitter, are among the websites that will be subject to the code. Alongside these sites, the designated platforms also include YouTube, Udemy, TikTok, LinkedIn, Pinterest, and Tumblr. In an explanatory document accompanying the code, the regulator said:

Content which is intended to incite violence or hatred is covered by the draft code (as it is illegal content which is harmful to the general public). Platforms will be obliged to prohibit the uploading or sharing of this content. Platforms will also have to provide effective media literacy measures and tools for users. These tools can help users to recognise misinformation and disinformation.

Last week, the Irish Examiner revealed that people may soon be required to upload their passport details or a selfie to certain websites if they want to view pornography. The provisions, which Coimisiún na Meán said would apply to platforms with their European headquarters based in Ireland, are contained in the code, which remains open for public consultation until the end of the month.
Ohio law requiring parental permission to use social media has been blocked by a judge

11th January 2024

See article from eu.dispatch.com
An Ohio state law intended to restrict children's social media use by requiring parental permission was slated to go into effect next week but has been stopped by a judge. U.S. District Court Judge Algenon L. Marbley issued a temporary restraining order Tuesday to block the law from going into effect for now, after a group representing social media companies filed a federal lawsuit earlier this month.

Ohio's Social Media Parental Notification Act was passed last year and would have required verifiable parental permission before children under the age of 16 could create new accounts on gaming platforms, message boards and social media sites such as Facebook, Instagram, TikTok, YouTube and Snapchat.

But a trade group representing Meta (the parent company of Facebook and Instagram), TikTok and other tech companies filed a federal lawsuit in early January claiming that the Ohio law is too broad and in violation of the First Amendment of the U.S. Constitution.
EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers' Privacy and Free Expression

7th January 2024

See Creative Commons article from eff.org
See EFF brief from eff.org
As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone's viewing history, EFF argued in court last month. For
decades, the Video Privacy Protection Act (VPPA) has safeguarded people's viewing habits by generally requiring services that offer videos to the public to get their customers' written consent before disclosing that information to the government or a
private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users' viewing habits, too. The VPPA, however, is under attack by Patreon. That service for content creators and viewers
is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook. Patreon argues that even if it did
violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law's applications violate
the First Amendment, judged in relation to the law's plainly legitimate sweep. Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of
the Communications Decency Act's (CDA) online speech restrictions because it placed an "unacceptably heavy burden on protected speech." EFF is second to none in fighting for everyone's First Amendment rights in court,
including internet users (in Reno, mentioned above) and the companies that host our speech online. But Patreon's First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of internet users who
benefit from the VPPA's protections. As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon's argument is wrong because the VPPA directly
advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance. "The VPPA provides Americans with critical, private space to
view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion," we wrote. "That breathing room is often a catalyst for people's free expression." As the
brief recounts, courts have protected against government efforts to learn people's book-buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people's
ability to privately consume media advances the First Amendment's purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people's video viewing habits are intensely private, because the data can reveal intimate
details about our personalities, politics, religious beliefs, and values. Patreon's First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, "[t]he VPPA's purpose, application,
and enforcement is overwhelmingly focused on regulating the disclosure of a person's video viewing history in the course of a commercial transaction between the provider and user." In other words, the legitimate sweep of the VPPA does not violate
the First Amendment because generally there is no public interest in disclosing any one person's video viewing habits that a company learns purely because it is in the business of selling video access to the public. There is a
better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF's brief explains, should a video provider face liability under the VPPA for disclosing a customer's
video viewing history, they can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern. Indeed, courts have recognized that certain applications of privacy laws, such as the
Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But generally courts address the First Amendment by invalidating the case-specific application of those laws, rather than invalidating them
entirely. "In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course," EFF wrote. "This approach accommodates
the broad and legitimate sweep of those privacy protections while vindicating speakers' First Amendment rights." Patreon's argument would see the VPPA gutted: an enormous loss for privacy and free expression for the public.
The court should protect against the disclosure of everyone's viewing history and protect the VPPA.