A dystopian idea. You too could get 'cancelled' throughout the world at the touch of a button

31st December 2020

See article from weforum.org
Tech entrepreneur Joseph Thompson has founded a start-up technology company, AID:Tech, which has created a digital app to act as a global identity card. Apparently it is one of the United Nations' Sustainable Development Goals that everyone has a legal identity, including birth registration, by 2030. This prompted the World Bank to launch its Identification for Development (ID4D) initiative in 2014. The latest data from the Bank shows there are just over 987 million people in the world who have no legal identity, down from 1.5 billion in 2016. The majority live in low-income countries, where almost 45% of women and 28% of men lack a legal ID. The blurb about Thompson's app waves its arms about blockchain and makes the unlikely claim that the digital identity is accessible only to the person whose ID it holds. I can't imagine many countries' authorities would be happy with a system that they cannot access.

31st December 2020

UK Government awards contracts for the development of covid restrictions apps
See article from reclaimthenet.org

27th December 2020

The more that big social media companies act like they can control what people say, the more competition they encourage.
See article from reason.com

Gay Chilean prison drama banned by Amazon Prime

24th December 2020

See article from deadline.com
The Prince is a 2019 Chile/Argentina/Belgium gay prison drama by Sebastián Muñoz, starring Juan Carlos Maldonado, Alfredo Castro and Gastón Pauls.
The film hasn't troubled film censors but it did offend Amazon Prime and so it was banned on that service.
Summary Notes: A '70s-set homoerotic prison drama based on a low-circulated pulp novel, tracking the sexual, often-violent and eventually murderous experiences of 20-something narcissist Jaime.
Versions
- uncut: UK: Passed 18 uncut for sexual violence, strong sex, nudity, gory injury detail
- banned: UK: Banned from Amazon Prime in December 2020. See article from deadline.com
The Prince has been banned from Amazon Prime UK after distributor Peccadillo Pictures was informed the movie contained offensive content that clashed with the streamer's guidelines. Peccadillo's MD Tom Abell said:
We have been trying to overturn their decision without avail and cannot understand why, when we have overwhelming support from all other platforms, they have taken this stance. We cannot deny that The Prince has some explosive and bold scenes but this is what makes it stand out and is such an enjoyable and admired film. It certainly contains nothing that hasn't been seen before in a prison drama and pales in comparison to scenes in many of Amazon's own productions. We are at something of a loss to explain the situation.
Amazon seems happy enough to continue selling the BBFC 18 rated DVD though.

Russian lawmakers progress a bill to allow the government to censor social media platforms should they censor Russian news providers

24th December 2020

See article from arabnews.com
Russian lawmakers have moved a step closer to allowing state censors to block Internet platforms like Facebook and YouTube if they are deemed to have censored content produced by Russians. Russia's lower house of parliament, which passed draft
legislation in a third reading, said in a media release that authorities can target platforms if they have been found to limit information based on nationality and language. The lower house State Duma added that Internet websites could also be sanctioned
in the event of discrimination against the content of Russian media. The legislation now needs to get approval from the upper house Federation Council before President Vladimir Putin signs it into law.

Senators propose far-reaching anti-Pornhub censorship law

21st December 2020

See article from avn.com
See bill [pdf] from sasse.senate.gov
U.S. Senators Ben Sasse and Jeff Merkley have introduced a bipartisan bill calling for extensive new censorship rules for adult websites. Dubbed the Stop Internet Sexual Exploitation Act, the bill was prompted, according to an announcement from Sasse's office, by reports of how videos and photos are uploaded to websites like Pornhub without the consent of individuals who appear in them. In particular the bill seems triggered by charges filed against Pornhub parent company MindGeek alleging that the company knowingly hosted and profited from non-consensual videos produced by the GirlsDoPorn website. Among the censorship measures the new bill seeks to enact are:
- Requiring any user uploading a video to the platform to verify their identity
- Requiring any user uploading a video to the platform to also upload a signed consent form from every individual appearing in the video
- Creating a private right of action against an uploader who uploads a pornographic image without the consent of an individual featured in the image
- Requiring platforms hosting pornography to include a notice or banner on the website instructing how an individual can request removal of a video if they have not consented to it being uploaded to the platform
- Prohibiting video downloads from these platforms, to be in place within three months of enactment of the legislation
- Requiring platforms hosting pornography to offer a 24-hour hotline staffed by the platform, for individuals who contact the hotline to request removal of a video that has been distributed without their consent
- Requiring removal of flagged videos within two hours of such a request
- Requiring platforms to use software to block a video from being re-uploaded after its removal, which must be in place within six months of enactment of the legislation
- Directing the Federal Trade Commission to enforce violations of these requirements
- Creating a database of individuals who have indicated they do not consent, which must be checked before new content can be uploaded to platforms
- Instructing the Department of Justice to promulgate rules on where this database should be housed, and to determine how to connect victims with services, including counseling and casework
- Establishing that failure to comply with this requirement will result in a civil penalty to the platform, with proceeds going towards victim services

Just like Ofcom, Pakistan's internet censor is being given infinite powers to censor internet content

21st December 2020

See article from rsf.org
Reporters Without Borders (RSF) has analysed the provisions of a new set of online content regulations that the Pakistani government decreed without any consultation with stakeholders, and which are clearly designed to impose draconian online censorship.
Published last month by the information ministry and entitled Removal and Blocking of Unlawful Online Content (Procedure, Oversight and Safeguards), Rules 2020, the new regulations replace an earlier set of rules that were suspended in February because of a civil society outcry. They have ended up going much further, granting disproportionate and discretionary powers to the Pakistan Telecommunication Authority (PTA), the online content regulator, which is a direct government offshoot. On national security grounds, the rules provide for the withdrawal or blocking of any content that excites or attempts to excite disaffection towards the federal or provincial government or harms the reputation of any person holding public office. It is equally concerning that the rules also provide for the censorship of any content regarded as indecent, immoral or harmful to the glory of Islam, without giving any precise definition of these extremely vague concepts. The interpretation is left to the PTA, which thereby acquires arbitrary and almost infinite powers. The rules also empower the PTA to act as both plaintiff and judge. It is the PTA that decides, without reference to a court, whether content violates the criminal code and, worse still, it is the PTA that reexamines cases in the event of a challenge, and rules on any appeals. Platforms are also legally obliged to hand over user data when asked, including data from private and encrypted communications. And platforms with more than 500,000 users are required to open an office in Pakistan, install servers there and register with the authorities.

21st December 2020

The documents reveal that China's censorship of information about the outbreak began in early January, before coronavirus had even been decisively identified
See article from jpost.com

18th December 2020

While social networks are strictly censoring right-leaning content, podcasting is giving it a largely unmoderated platform
See article from theguardian.com

18th December 2020

Facebook and Instagram disable features in Europe
See article from bbc.co.uk

The Government outlines its final plans to introduce new and wide-ranging internet censorship laws

15th December 2020

See press release from gov.uk
See also full government response to the Online Harms White Paper consultation
Digital Secretary Oliver Dowden and Home Secretary Priti Patel have announced the government's final decisions on new internet censorship laws.
- New rules to be introduced for nearly all tech firms that allow users to post their own content or interact
- Firms failing to protect people face fines of up to ten per cent of turnover or the blocking of their sites, and the government will reserve the power for senior managers to be held liable
- Popular platforms to be held responsible for tackling both legal and illegal harms
- All platforms will have a duty of care to protect children using their services
- Laws will not affect articles and comments sections on news websites, and there will be additional measures to protect free speech
The full government response to the Online Harms White Paper
consultation sets out how the proposed legal duty of care on online companies will work in practice and gives them new responsibilities towards their users. The safety of children is at the heart of the measures. Social media
sites, websites, apps and other services which host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal content such as child sexual abuse, terrorist material and suicide content. The
Government is also progressing work with the Law Commission on whether the promotion of self harm should be made illegal. Tech platforms will need to do far more to protect children from being exposed to harmful content or
activity such as grooming, bullying and pornography. This will help make sure future generations enjoy the full benefits of the internet with better protections in place to reduce the risk of harm. The most popular social media
sites, with the largest audiences and high-risk features, will need to go further by setting and enforcing clear terms and conditions which explicitly state how they will handle content which is legal but could cause significant physical or psychological
harm to adults. This includes dangerous disinformation and misinformation about coronavirus vaccines, and will help bridge the gap between what companies say they do and what happens in practice. Ofcom is now confirmed as the
regulator with the power to fine companies failing in their duty of care up to £18 million or ten per cent of annual global turnover, whichever is higher. It will have the power to block non-compliant services from being accessed in the UK.
The legislation includes provisions to impose criminal sanctions on senior managers. The government will not hesitate to bring these powers into force should companies fail to take the new rules seriously - for example, if they do not
respond fully, accurately and in a timely manner to information requests from Ofcom. This power would be introduced by Parliament via secondary legislation, and reserving the power to compel compliance follows similar approaches in other sectors such as
financial services regulation. The government plans to bring the laws forward in an Online Safety Bill next year and set the global standard for proportionate yet effective regulation. This will safeguard people's rights online
and empower adult users to keep themselves safe while preventing companies arbitrarily removing content. It will defend freedom of expression and the invaluable role of a free press, while driving a new wave of digital growth by building trust in
technology businesses.
Scope
The new regulations will apply to any company in the world hosting user-generated content online accessible by people in the UK or enabling them to privately or publicly
interact with others online. It includes social media, video sharing and instant messaging platforms, online forums, dating apps, commercial pornography websites, as well as online marketplaces, peer-to-peer services, consumer
cloud storage sites and video games which allow online interaction. Search engines will also be subject to the new regulations. The legislation will include safeguards for freedom of expression and pluralism online - protecting
people's rights to participate in society and engage in robust debate. Online journalism from news publishers' websites will be exempt, as will reader comments on such sites. Specific measures will be included in the legislation
to make sure journalistic content is still protected when it is reshared on social media platforms.
Categorised approach
Companies will have different responsibilities for different categories of
content and activity, under an approach focused on the sites, apps and platforms where the risk of harm is greatest. All companies will need to take appropriate steps to address illegal content and activity such as terrorism and
child sexual abuse. They will also be required to assess the likelihood of children accessing their services and, if so, provide additional protections for them. This could be, for example, by using tools that give age assurance to ensure children are
not accessing platforms which are not suitable for them. The government will make clear in the legislation the harmful content and activity that the regulations will cover and Ofcom will set out how companies can fulfil their duty
of care in codes of practice. A small group of companies with the largest online presences and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1. These
companies will need to assess the risk of legal content or activity on their services with "a reasonably foreseeable risk of causing significant physical or psychological harm to adults". They will then need to make clear what type of
"legal but harmful" content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently. All companies will need mechanisms so people can easily report harmful content
or activity while also being able to appeal the takedown of content. Category 1 companies will be required to publish transparency reports about the steps they are taking to tackle online harms. Examples of Category 2 services are
platforms which host dating services or pornography and private messaging apps. Less than three per cent of UK businesses will fall within the scope of the legislation and the vast majority of companies will be Category 2 services.
Exemptions
Financial harms will be excluded from this framework, including fraud and the sale of unsafe goods. This will mean the regulations are clear and manageable for businesses, focus action where
there will be most impact, and avoid duplicating existing regulation. Where appropriate, lower-risk services will be exempt from the duty of care to avoid putting disproportionate demands on businesses. This includes exemptions
for retailers who only offer product and service reviews and software used internally by businesses. Email services will also be exempt. Some types of advertising, including organic and influencer adverts that appear on social
media platforms, will be in scope. Adverts placed on an in-scope service through a direct contract between an advertiser and an advertising service, such as Facebook or Google Ads, will be exempt because this is covered by existing regulation.
Private communications
The response will set out how the regulations will apply to communication channels and services where users expect a greater degree of privacy - for example online instant
messaging services and closed social media groups which are still in scope. Companies will need to consider the impact on user privacy and ensure they understand how their systems and processes affect people's privacy, but firms
could, for example, be required to make services safer by design by limiting the ability for anonymous adults to contact children. Given the severity of the threat on these services, the legislation will enable Ofcom to require
companies to use technology to monitor, identify and remove tightly defined categories of illegal material relating to child sexual exploitation and abuse. Recognising the potential impact on user privacy, the government will ensure this is only used as
a last resort where alternative measures are not working. It will be subject to stringent legal safeguards to protect user rights.

The Government to unveil plans for its new internet censorship law this week

13th December 2020

From The Times
The Times is reporting that the government will announce plans for its upcoming Online Harms internet censorship law on Tuesday. Ministers will announce plans for a statutory duty of care, which will be enforced by Ofcom, the broadcasting regulator.
Companies that fail to meet the duty could face multimillion-pound fines or be blocked from operating in Britain. However, the legislation will also include measures to protect freedom of speech after concerns were raised in Downing Street that the
powers could prompt social media companies to take posts down unnecessarily. It also seems that the bill will be titled Online Safety rather than Online Harms.

Campaigners speak out against Yorkshire Ripper documentary on Netflix

13th December 2020

See article from dailymail.co.uk
Families of the Yorkshire Ripper's victims have spoken out against a Netflix documentary about the mass murderer, saying the term 'Ripper' is traumatising for them to hear and that the new title is insulting to their families. Netflix had changed the name of the documentary from Once Upon A Time In Yorkshire to The Ripper. Two of Sutcliffe's victims, Marcella Claxton and Mo Lea, and relatives of seven of Sutcliffe's victims and survivors signed a letter to the company saying:
'The moniker "the Yorkshire Ripper" has traumatised us and our families for the past four decades. It glorifies the brutal violence of Peter Sutcliffe, and grants him a celebrity status that he does not deserve. Please remember that the word "ripper" relates to ripping flesh and the repeated use of this phrase is irresponsible, insensitive and insulting to our families and our mothers and grandmothers' legacies.'

Ofcom consults on its plans to tool up for its new role as the UK internet censor

11th December 2020

See article from ofcom.org.uk
See Ofcom work plan [pdf] from ofcom.org.uk
Ofcom has opened a consultation on its plan to get ready for its likely role as the UK internet censor under the Government's Online Harms legislation. Ofcom writes:
We have today published our plan of work for 2021/22. This consultation sets out our goals for the next financial year, and how we plan to achieve them. We are consulting on this plan of work to encourage discussion with companies, governments and the public. As part of the Plan of Work publication, we are also holding some virtual events to invite feedback on our proposed plan. These free events are open to everyone, and offer an opportunity to comment and ask questions. The consultation ends on 5th February 2021.
The key areas referencing internet censorship are:
Preparing to regulate online harms
3.26 The UK Government has given Ofcom new duties as the regulator for UK-established
video-sharing platforms (VSPs) through the transposition of the European-wide Audiovisual Media Services Directive. VSPs are a type of online video service where users can upload and share videos with members of the public, such as YouTube and TikTok. Ofcom will not be responsible for regulating all VSPs as our duties only apply to services established in the UK and as such, we anticipate that a relatively small number of services fall within our jurisdiction. Under the new regulations, which came into force on 1 November 2020, VSPs must have appropriate measures in place to protect children from potentially harmful content and all users from criminal content and incitement to hatred and violence. VSPs will also need to make sure certain advertising standards are met.
3.27 As well as appointing Ofcom as the regulator of UK-established VSPs, the Government has announced that it is minded to appoint Ofcom as the future regulator responsible for protecting users from harmful online content. With this in mind we are undertaking the following work:
- Video-sharing platforms regulation. We have issued a short guide to the new requirements. On 19 November 2020 we issued draft scope and jurisdiction guidance for consultation to help providers self-assess whether they need to notify to Ofcom as a VSP under the statutory rules from April 2021. We will also consult in early 2021 on further guidance on the risk of harms and appropriate measures, as well as proposals for a co-regulatory relationship with the Advertising Standards Authority (ASA) with regards to VSP advertising. We intend to issue final versions of the guidance in summer 2021.
- Preparing for the online harms regime. The UK Government has set out that it intends to put in place a regime to keep people safe online. In February 2020 it published an initial response to the 2019 White Paper setting out how it intends to develop the regime, which stated that it was minded to appoint Ofcom as the future regulator of online harms. If confirmed, these proposed new responsibilities would constitute a significant expansion to our remit, and preparing for them would be a major area of focus in 2021/22. We will continue to provide technical advice to the UK Government on its policy development process, and we will engage with Parliament as it considers legislative proposals.
3.29 We will continue work to deepen our understanding of online harms through a range of work:
- Our Making Sense of Media programme. This programme will continue to provide insights on the needs, behaviours and attitudes of people online. Our other initiatives to research online markets and technologies will further our understanding of how online harms can be mitigated.
- Stepping up our collaboration with other regulators. As discussed in the Developing strong partnerships section, we will continue our joint work through the Digital Regulators Cooperation Forum and strengthen our collaboration with regulators around the world who are also considering online harms.
- Understanding VSPs. The introduction of regulation to UK-established VSPs will provide a solid foundation to inform and develop the broader future online harms regulatory framework. This interim regime is more limited in terms of the number of regulated companies and will cover a narrower range of harms compared to the online harms white paper proposals. However, should Ofcom be confirmed as the regulator, through our work on VSPs we will develop on-the-job experience working with newly regulated online services, developing the evidence base of online harm, and building our internal skills and expertise.

Google and Amazon heavily fined for the lack of silly cookie consent banners

11th December 2020

See article from theverge.com
France's data protection censor, the Commission Nationale de l'Informatique et des Libertés or CNIL, has fined Google and Amazon a total of 135 million euro between them for violating the country's data protection laws. Google was fined a total of 100 million euro, while Amazon was fined 35 million euro. The companies were fined for the lack of user consent for cookies placed on their French websites. Although both have since updated their websites to require a user's consent before placing cookies, CNIL criticized their cookie information banners for not providing enough information, or for not making it clear enough that visitors can turn down these cookies. The regulator gave both a deadline of three months to fix the outstanding issues.
A spokesperson from Amazon said the company disagreed with CNIL's decision. Google said it stands by its efforts to provide information about tracking and control to users.

Spain sets up an internet censorship system in the name of monitoring 'fake news'

11th December 2020

See article from wsws.org by Alice Summers
More details have emerged on the censorship apparatus operated by Spain's Socialist Party (PSOE)-Podemos government. A new cyber-monitoring tool, known as ELISA, has been rolled out across the country, which will scour the internet for supposed instances
of disinformation and report them to Spain's central government for further action. ELISA began by monitoring only a few dozen web pages. However, its surveillance operation has now expanded to around 350 sites. It has been described as a Digital
Observatory, designed to facilitate the monitoring of open sources, as well as the profiling of media and social networks. To avoid any judicial oversight, ELISA will supposedly only monitor open source data, rather than private communications. It
will nonetheless mine vast quantities of information on online sources, social media usage, news platforms and other internet content. ELISA's development and implementation are the latest in a series of internet-monitoring and censorship measures
recently made public in Spain. Revelations about the CCN's ELISA tool come hot on the heels of a new protocol, the Procedure for Intervention against Disinformation. It allows the state to monitor and suppress internet content, under the pretext of
combatting fake news and disinformation. This gives the Spanish government full decision-making power to determine what is or is not fake news, and makes legal provision for constant state surveillance of social media platforms and the media more
broadly to detect disinformation and formulate a political response.

8th December 2020

By Kath Rella (a pseudonym)
See article from reprobatepress.com

8th December 2020

The House of Lords discusses how streaming services use age ratings
See article from hansard.parliament.uk

Be very careful when specifying an age to Twitter lest you lose your account

7th December 2020

See article from reclaimthenet.org
The Twitter account for Adland, the world's largest and oldest archive of adverts, has been cancelled after more than 13 years on the platform. Despite being verified and having an active account on Twitter for over a decade, when Adland's CEO Dabitch attempted to add an age to the account, it was instantly locked. Twitter requires accounts to add their age when they attempt to follow other accounts that post about certain topics such as alcohol. In this instance, Dabitch was told that she needed to add the age of the Adland account after she attempted to follow an account for a social media marketer of spirits. Upon opening the dropdown menu for years, 1996 was arbitrarily or accidentally selected, and the account was instantly blocked by Twitter. Presumably, Twitter's automated systems determined that the birth year of 1996 meant that the Adland account, which was created in 2007, had been started by a user who was 11 years old. Since Twitter requires its users to be at least 13 years old, it then seems to have instantly and automatically locked the account based on this presumption. While Dabitch believes Adland's Twitter account will ultimately be restored, she summarized the situation by highlighting the perils of relying on Big Tech platforms that can instantly suspend your account, even if you're a verified user that has been using them for over a decade.
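For illustration only, here is a minimal sketch of the kind of automated age check that appears to have caught Adland out. It is an assumption pieced together from the report above, not Twitter's actual code; the function names, the year-based arithmetic and the 13-year threshold are all inferred.

```python
# Hypothetical sketch of the presumed check, inferred from the report above.
# Not Twitter's actual logic; names and the threshold are assumptions.
MINIMUM_AGE = 13

def inferred_age_at_signup(birth_year: int, account_created_year: int) -> int:
    """Age the platform would infer the user was when the account was created."""
    return account_created_year - birth_year

def should_lock_account(birth_year: int, account_created_year: int) -> bool:
    """Lock the account if the inferred age at signup falls below the minimum."""
    return inferred_age_at_signup(birth_year, account_created_year) < MINIMUM_AGE

# Adland's case: birth year entered as 1996, account created in 2007,
# so the inferred age at signup is 11, below 13, and the account is locked.
print(should_lock_account(1996, 2007))  # True
```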

Facebook is working on a divisive system to censor comments according to a pecking order of political 'correctness'

6th December 2020

See article from washingtonpost.com
Facebook is to start policing anti-Black hate speech more aggressively than anti-White comments. The Washington Post is reporting that the company is overhauling its algorithms that detect hate speech and deprioritizing hateful comments against whites, men and Americans. Internal documents reveal that Facebook's WoW Project is in its early stages and involves re-engineering automated moderation systems to get better at detecting and automatically deleting hateful language that is considered the 'worst of the worst'. The 'worst of the worst' includes slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community and Jews, according to the documents. The Washington Post adds:
In the first phase of the project, which was announced internally to a small group in October, engineers said they had changed the company's systems to deprioritize policing contemptuous comments about Whites, men and Americans. Facebook still considers
such attacks to be hate speech, and users can still report it to the company. However, the company's technology now treats them as low-sensitivity -- or less likely to be harmful -- so that they are no longer automatically deleted by the company's
algorithms. That means roughly 10,000 fewer posts are now being deleted each day, according to the documents.

6th December 2020

The New York Times calls for the censorship of Pornhub
See article from xbiz.com

YouTube announces that it will step up the censorship of viewer comments specifically for the black community

4th December 2020

See article from blog.youtube
YouTube has announced that it will increase the censorship of comments specifically for the black community. YouTube writes in a blog post:
We're committed to supporting the diverse creator communities on YouTube and their continued
success. As our CEO, Susan Wojcicki, wrote in June, we're examining how our policies and products are working for everyone -- and specifically for the Black community -- and working to close any gaps. We know that comments play a
key role in helping creators connect with their community, but issues with the quality of comments is also one of the most consistent pieces of feedback we receive from creators. We have been focused on improving comments with the goal of driving
healthier conversations on YouTube. Over the last few years, we launched new features to help creators engage with their community and shape the tone of conversations on their channels. We've heard from creators that while these
changes helped them better manage comments and connect with their audience, there's more we can do to prevent them from seeing hurtful comments in the first place. To address that, we'll be testing a new filter in YouTube Studio for potentially
inappropriate and hurtful comments that have been automatically held for review, so that creators don't ever need to read them if they don't want to. To encourage respectful conversations on YouTube, we're launching a new feature
that will warn users when their comment may be offensive to others, giving them the option to reflect before posting. In addition, we've also invested in technology that helps our systems better detect and remove hateful comments
by taking into account the topic of the video and the context of a comment. These efforts are making an impact. Since early 2019, we've increased the number of daily hate speech comment removals by 46x. And in the last quarter, of
the more than 1.8 million channels we terminated for violating our policies, more than 54,000 terminations were for hate speech. This is the most hate speech terminations in a single quarter and 3x more than the previous high from Q2 2019 when we updated
our hate speech policy.

Euphoria on HBO

4th December 2020

See article from parentstv.org
US moralist campaigners, the Parents TV Council, write:
The Parents Television Council is warning parents about the return of the second season of HBO's Euphoria on December 6, 2020. Featuring a former Disney star, Euphoria is set in high school and focuses on a teenaged girl's drug addiction and efforts to hide her drug use from her mother. In the six hours of Euphoria's first season, there were nearly 400 uses of the "f-word"; male frontal nudity; close-ups of male genitals; female frontal nudity; depictions of statutory rape; depictions of illicit drug use; graphic violence; and extreme (and even illegal) sexual behavior. PTC President Tim Winter said:
Parents beware! Euphoria is saturated with shockingly explicit content depicting high school-aged children that is intended to shock and offend your values. Don't just take our word for it. Ahead of the show's first season, Euphoria's creator boasted: There are going to be parents who are going to be totally fucking freaked out. He was right, as the program was filled with female and male frontal nudity, illicit drug use, and the harshest profanity. We anticipate the second season will be just as explicit and potentially as harmful to teens and preteens.
With Euphoria, HBO is knowingly and deliberately marketing harmful content to impressionable teens and preteens with programming that centers on school-aged characters. For HBO to release a series like this -- that graphically portrays drug abuse as a way for teens to escape reality -- is reckless and grossly irresponsible.

Speaking of hardcore free speech at the alt-right Twitter alternative

3rd December 2020

See article from boingboing.net
See article from chron.com
Parler is an alt-right version of Twitter that launched in 2020 with a free speech ethos. Parler's CEO spoke of high ideals at the time: We're a community town square, an open town square, with no censorship... If you can say it on the street of New York, you can say it on Parler.
Maybe easier said than done though: it wasn't long before the wrong type of free speech somehow required censorship. Parler started deleting accounts of left-wing members and then it banned pornography (which Twitter allows). Eventually, it became more censorious than Twitter, with the exception of allowing the kind of alt-right speech that is generally banned on mainstream social media. However it seems that Parler has recognised the seeming hypocrisy and steered back towards free speech. It revised its terms of service to allow things that Twitter already allows, including pornography. As you might expect, Parler is now a bastion of right-leaning speech and ads for hardcore pornography websites. The Washington Post reviewed Parler under its revised rules and found that searches for sexually explicit terms surfaced extensive troves of graphic content, including videos of sex acts that began playing automatically without any label or warning. Terms such as #porn, #naked and #sex each had hundreds or thousands of posts on Parler, many of them graphic. Some pornographic images and videos had been viewed tens of thousands of times on the platform. Sounds promising!

The EU Commission president introduces the next round of internet censorship law

2nd December 2020

See article from bbc.co.uk
Ursula von der Leyen, president of the European Commission, has introduced a new swathe of internet regulation. She said the commission would be rewriting the rulebook for our digital market with stricter rules for online content, from selling unsafe
products to posting 'hate speech'. Von der Leyen told the online Web Summit: No-one expects all digital platforms to check all the user content that they host. This would be a threat to everyone's freedom to speak
their mind. ...But... if illegal content is notified by the competent national authorities, it must be taken down.
More pressure
The Digital Services Act will replace the EU's 2000 e-commerce directive. Due to come into
force on Wednesday, 2 December, it has now been delayed until next week. Likely to put more pressure on social-media platforms to take down and block unlawful content more quickly, the new rules will almost certainly be contested by companies such
as Google and Facebook, which now face far stricter censorship both in Europe and the US, following claims about the supposed spread of 'fake news' and 'hate speech'.