Up until now, the UK government has always indicated that newspaper websites would not be caught up in the new internet censorship regime proposed in the Government's Online Harms white paper.
However, it now seems that the government has backtracked, lest every website claim to be a news service.
The Daily Mail reports that Julian Knight, chairman of the Commons Digital, Culture, Media and Sport Committee, has written to Culture Minister John Whittingdale over the proposed laws, after Home
Office lawyers claimed that granting a 'publishers exemption' would create loopholes. One source close to the ministerial arguments over the proposed laws said:
Government lawyers are arguing that the publishers
exemption would allow just anyone to claim it, so for instance you would have The Isis Times being able to distribute beheading videos.
The Tory MP Julian Knight told Whittingdale that Ministers in both DCMS and the Home Office should
resolve the impasse by allowing an exemption for authenticated and reliable news sources.
The Government has yet to respond, amid concerns that any action may be delayed by wrangling over legislation to stop harmful online material and fears that
antagonising powerful American-owned online platforms might jeopardise post-Brexit trade talks with the US.
Former culture secretary Jeremy Wright is setting up a parliamentary group (an All-Party Parliamentary Group, APPG) to campaign for internet censorship.
Wright, who drew up the Government's white paper proposing strict sanctions on tech platforms that fail
to protect users under a duty of care, is particularly calling for censorship powers to block, ban, fine or restrict apps and websites considered undesirable by the proposed internet censor, Ofcom. Wright said:
There needs to be a lot more clubs in the bag for the regulator than just fines. I do think we need to consider criminal liability for individual (tech company) directors where it can be demonstrated.
He also felt the regulator
should have powers of ISP blocking, which effectively bar an app from the UK, in cases of companies repeatedly and egregiously refusing to comply with censorship rules. He said:
I do accept the chances of
WhatsApp being turned off are remote. Although frankly, there may be circumstances where that may be the right thing to do and we shouldn't take it off the table.
Wright is founding the APPG alongside crossbench peer and children's
digital rights campaigner Beeban Kidron, and the group has already attracted supporters, including three other former culture secretaries: Baroness Nicky Morgan, Karen Bradley and Maria Miller, as well as former Health and Foreign Secretary Jeremy Hunt.
Cuties (Mignonnes) is a 2020 French comedy drama by Maïmouna Doucouré, starring Fathia Youssouf, Médina El Aidi-Azouni and Esther Gohourou.
Amy, an 11-year-old girl, joins a group of dancers
named "the cuties" at school, and rapidly grows aware of her burgeoning femininity - upsetting her mother and her values in the process.
Netflix has removed a promotional image which showed girls posing in skimpy outfits in a
new film called Cuties. The poster for the French drama, along with a trailer, was met with a little online 'outrage' and a petition calling for Netflix to drop it. The petition, claiming the film sexualizes an 11-year-old for the viewing pleasure of
paedophiles, attracted 25,000 signatures.
The film itself is not a Netflix production, just a film that was set to be shown on the service. The award-winning drama follows an 11-year-old who joins a dance group. Its maker says it is meant to tackle the
issue of sexualisation of young girls.
Netflix has now said it was deeply sorry for the inappropriate artwork. Netflix told BBC News:
This was not an accurate representation of the film so the image and description has
been updated.
The company later tweeted:
We're deeply sorry for the inappropriate artwork that we used for Mignonnes/Cuties. It was not OK, nor was it representative of this French film which won an
award at Sundance. We've now updated the pictures and description.
But director Maïmouna Doucouré has explained that the story aims to highlight how social media pushes girls to mimic sexualised imagery without fully understanding
what lies behind it or the dangers involved.
An Update to How We Address Movements and Organizations Tied to Violence
Today we are taking action against Facebook
Pages, Groups and Instagram accounts tied to offline anarchist groups that support violent acts amidst protests, US-based militia organizations and QAnon. We already remove content calling for or advocating violence and we ban organizations and
individuals that proclaim a violent mission. However, we have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggested they will use them, or have individual followers
with patterns of violent behavior. So today we are expanding our Dangerous Individuals and Organizations policy to address organizations and movements that have demonstrated significant risks to public safety but do not meet the rigorous criteria to be
designated as a dangerous organization and banned from having any presence on our platform. While we will allow people to post content that supports these movements and groups, so long as they do not otherwise violate our content policies, we will
restrict their ability to organize on our platform.
Under this policy expansion, we will impose restrictions to limit the spread of content from Facebook Pages, Groups and Instagram accounts. We will also remove Pages, Groups and
Instagram accounts where we identify discussions of potential violence, including when they use veiled language and symbols particular to the movement to do so.
We will take the following actions -- some effective immediately, and
others coming soon:
Remove From Facebook : Pages, Groups and Instagram accounts associated with these movements and organizations will be removed when they discuss potential violence. We will continue studying specific terminology and
symbolism used by supporters to identify the language used by these groups and movements indicating violence and take action accordingly.
Limit Recommendations : Pages, Groups and Instagram accounts associated with
these movements that are not removed will not be eligible to be recommended to people when we suggest Groups you may want to join or Pages and Instagram accounts you may want to follow.
Reduce Ranking in News Feed : In
the near future, content from these Pages and Groups will also be ranked lower in News Feed, meaning people who already follow these Pages and are members of these Groups will be less likely to see this content in their News Feed.
Reduce in Search : Hashtags and titles of Pages, Groups and Instagram accounts restricted on our platform related to these movements and organizations will be limited in Search: they will not be suggested through our Search
Typeahead function and will be ranked lower in Search results.
Reviewing Related Hashtags on Instagram: We have temporarily removed the Related Hashtags feature on Instagram, which allows people to find hashtags
similar to those they are interacting with. We are working on stronger protections for people using this feature and will continue to evaluate how best to re-introduce it.
Prohibit Use of Ads, Commerce Surfaces and
Monetization Tools : Facebook Pages related to these movements will be prohibited from running ads or selling products using Marketplace and Shop. In the near future, we'll extend this to prohibit anyone from running ads praising, supporting or
representing these movements.
Prohibit Fundraising : We will prohibit nonprofits we identify as representing or seeking to support these movements, organizations and groups from using our fundraising tools. We will
also prohibit personal fundraisers praising, supporting or representing these organizations and movements.
As a result of some of the actions we've already taken, we've removed over 790 groups, 100 Pages and 1,500 ads tied to QAnon from Facebook, blocked over 300 hashtags across Facebook and Instagram, and additionally imposed restrictions
on over 1,950 Groups and 440 Pages on Facebook and over 10,000 accounts on Instagram. These numbers reflect differences in how Facebook and Instagram are used, with fewer Groups on Facebook having higher membership rates and a comparatively greater number of Instagram accounts with fewer followers. Those Pages, Groups and Instagram accounts that have been restricted are still subject to removal as our team continues to review their content against our updated policy, as will others we identify subsequently.
For militia organizations and those encouraging riots, including some who may identify as Antifa, we've initially removed over 980 groups, 520 Pages and 160 ads from Facebook. We've also restricted over 1,400 hashtags related to these groups and
organizations on Instagram.
Today's update focuses on our Dangerous Individuals and Organizations policy but we will continue to review content and accounts against all of our content policies in an effort to keep people safe. We
will remove content from these movements that violates any of our policies, including those against fake accounts, harassment, hate speech and/or inciting violence. Misinformation that does not put people at risk of imminent violence or physical harm but
is rated false by third-party fact-checkers will be reduced in News Feed so fewer people see it. And any non-state actor or group that qualifies as a dangerous individual or organization will be banned from our platform. Our teams will also study trends
in attempts to skirt our enforcement so we can adapt. These movements and groups evolve quickly, and our teams will follow them closely and consult with outside experts so we can continue to enforce our policies against them.
The implementation of Art 17 (formerly Article 13) into national laws will have a profound effect on what users can say and share online. The controversial rule, part of the EU's copyright directive approved last year, has the potential to turn tech
companies and online service operators into copyright police. It is now up to national Member States to implement the directive and to ensure that user rights and freedom of speech are given priority over notoriously inaccurate filtering and harmful
monitoring of user content.
The initial forays into transposition were catastrophic. Both France and the Netherlands have failed to present a balanced copyright implementation proposal. Now the German government has
launched a public consultation on a draft bill to implement the EU copyright directive. The draft takes a step in the right direction. Options for users to pre-flag uploads as authorized and exceptions for everyday uses are a clear added value from a
user perspective. However, in its current shape, the draft fails to adequately protect user rights and freedom of expression. It seems inevitable that service providers will use content recognition technologies to monitor all user uploads, and privacy
rights are not considered at all.
We have therefore recently submitted comments to the German government with recommendations on how to improve the current version. Our message is clear: keep the interests of users and freedom of
speech in mind rather than solidifying the dominance of the big tech platforms that already exist.
Apple and Google impose extortionate fees of 30% just for listing games and apps in the app stores. And what's more they demand the same cut for any in-game purchases made by players throughout the life of the game.
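In concrete terms the split works out as below. This is a trivial sketch; the 30% rate is the only figure taken from the story, and the function name is ours:

```python
def store_split(price, fee_rate=0.30):
    """Split a purchase between the app store and the developer at the 30% rate."""
    fee = price * fee_rate
    return fee, price - fee

# On a $10 in-game purchase, roughly $3 goes to the store and $7 to the developer.
```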
Epic Games, the company
behind Fortnite, tried to evade the extortionate fees on the in-game purchases by allowing gamers to purchase directly from Epic rather than via Apple/Google.
Google and Apple responded by banning Fortnite from their stores.
And now
Epic is challenging Apple and Google in court, and has produced an excellent short video likening the internet giants to the Big Brother of Orwell's 1984.
German authorities have been stepping up their efforts to fine in-country sex workers for posting sexually explicit content on open online platforms like Twitter, forcing them to take down posts on the U.S.-based -- and Free Speech-protected -- sites.
Jessica Klein of Daily Dot explained that the Interstate Treaty on the Protection of Minors in the Media makes it illegal to distribute pornography to which minors have access.
Government censors from a confusing patchwork of censorship bodies in
state and local governments have been using that legislation to target sex workers who post sexual content on open platforms like Twitter without an age-verification firewall.
The Daily Dot interviewed Bodil Diederichsen, who works with the
Medienanstalt Hamburg/Schleswig-Holstein (MA HSH), a regional media censor. Diederichsen was unusually candid about the official efforts to censor sex workers, and said that her unit mainly works off tips, getting notifications from the public about
possibly offending content, which also includes hate speech and other violations, before researching it themselves. Ultimately, the German censors target the posters themselves, with threats of hefty fines. Presumably this achieves more than trying to get
Twitter to do the censorship.
German law allows the posting of explicit content to what is called a closed user group (i.e., behind some kind of age-verification wall). But in the case of Twitter, the process is at the discretion of these local
censorship bodies, like MA HSH.
Twitter has started blocking links to Infowars founder and host Alex Jones' Banned.video platform, which serves as a hub for broadcasts and clips from right-leaning media outlets.
When users open a Twitter link to Banned.video , they're presented with
a warning screen that says:
Warning: this link may be unsafe.
The link you are trying to access has been identified by Twitter or our partners as being potentially spammy or unsafe, in
accordance with Twitter's URL Policy. The warning goes on to list several reasons the link could have been blocked.
It then presents users with a large Back to previous page button and in a small section at the bottom of the page, it gives users the
option to Ignore this warning and continue.
Banned.video was launched in July 2019 after Infowars had been banned from all of the major Big Tech platforms including YouTube -- the world's biggest video sharing site and the second most visited site in
the world.
Twitter has also banned Bill Mitchell, a conservative pundit and radio host with a large Twitter following. Twitter confirmed it permanently banned the pro-Trump internet personality after his widely-followed account, @mitchellvii, abruptly
vanished.
The ICO publishes its impossible-to-comply-with and business-suffocating Age Appropriate Design Code, with a 12-month implementation period running until 2nd September 2021
The ICO issued the code on 12 August 2020 and it will come into force on 2 September 2020 with a 12 month transition period.
Information Commissioner Elizabeth Denham writes:
Data sits at the heart of the digital services
children use every day. From the moment a young person opens an app, plays a game or loads a website, data begins to be gathered. Who's using the service? How are they using it? How frequently? Where from? On what device?
That
information may then inform techniques used to persuade young people to spend more time using services, to shape the content they are encouraged to engage with, and to tailor the advertisements they see.
For all the benefits the
digital economy can offer children, we are not currently creating a safe space for them to learn, explore and play.
This statutory code of practice looks to change that, not by seeking to protect children from the digital world,
but by protecting them within it.
This code is necessary.
This code will lead to changes that will help empower both adults and children.
One in five UK internet users are
children, but they are using an internet that was not designed for them. In our own research conducted to inform the direction of the code, we heard children describing data practices as nosy, rude and a bit freaky.
Our recent
national survey into people's biggest data protection concerns ranked children's privacy second only to cyber security. This mirrors similar sentiments in research by Ofcom and the London School of Economics.
This code will lead
to changes in practices that other countries are considering too.
It is rooted in the United Nations Convention on the Rights of the Child (UNCRC) that recognises the special safeguards children need in all aspects of their life.
Data protection law at the European level reflects this and provides its own additional safeguards for children.
The code is the first of its kind, but it reflects the global direction of travel with similar reform being
considered in the USA, Europe and globally by the Organisation for Economic Co-operation and Development (OECD).
This code will lead to changes that UK Parliament wants.
Parliament and government ensured UK
data protection laws will truly transform the way we look after children online by requiring my office to introduce this statutory code of practice.
The code delivers on that mandate and requires information society services to
put the best interests of the child first when they are designing and developing apps, games, connected toys and websites that are likely to be accessed by them.
This code is achievable.
The code is
not a new law but it sets standards and explains how the General Data Protection Regulation applies in the context of children using digital services. It follows a thorough consultation process that included speaking with parents, children, schools,
children's campaign groups, developers, tech and gaming companies and online service providers.
Such conversations helped shape our code into effective, proportionate and achievable provisions.
Organisations should conform to the code and demonstrate that their services use children's data fairly and in compliance with data protection law.
The code is a set of 15 flexible standards -- they do not ban or specifically prescribe -- that provide built-in protection to allow children to explore, learn and play online by ensuring that the best interests of the child
are the primary consideration when designing and developing online services.
Settings must be high privacy by default (unless there's a compelling reason not to); only the minimum amount of personal data should be collected and
retained; children's data should not usually be shared; geolocation services should be switched off by default. Nudge techniques should not be used to encourage children to provide unnecessary personal data, or to weaken or turn off their privacy settings. The
code also addresses issues of parental control and profiling.
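As an illustration only -- the code mandates outcomes, not any particular data model -- the defaults described above might look like this in a service's settings layer. Every key name here is hypothetical:

```python
# Hypothetical settings model: the AADC specifies outcomes, not these keys.
CHILD_DEFAULTS = {
    "privacy": "high",       # high privacy by default
    "geolocation": False,    # location services off by default
    "data_sharing": False,   # children's data not shared by default
    "profiling": False,      # profiling off unless there is a compelling reason
}

def settings_for_child(overrides=None):
    """Start from high-privacy defaults; any weakening must be an explicit choice."""
    settings = dict(CHILD_DEFAULTS)
    if overrides:
        settings.update(overrides)  # represents an explicit, informed opt-out
    return settings
```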
This code will make a difference.
Developers and those in the digital sector must act. We have allowed the maximum transition period of
12 months and will continue working with the industry.
We want coders, UX designers and system engineers to engage with these standards in their day-to-day work and we're setting up a package of support to help.
But the next step must be a period of action and preparation. I believe companies will want to conform with the standards because they will want to demonstrate their commitment to always acting in the best interests of the child.
Those companies that do not make the required changes risk regulatory action.
What's more, they risk being left behind by those organisations that are keen to conform.
A generation from now, I believe we
will look back and find it peculiar that online services weren't always designed with children in mind.
When my grandchildren are grown and have children of their own, the need to keep children safer online will be as second
nature as the need to ensure they eat healthily, get a good education or buckle up in the back of a car.
And while our code will never replace parental control and guidance, it will help people have greater confidence that their
children can safely learn, explore and play online.
There is no doubt that change is needed. The code is an important and significant part of that change.
Facebook described its technology improvements used for the censorship of Facebook posts:
The biggest change has been the role of technology in content moderation. As our Community Standards Enforcement Report shows, our technology to
detect violating content is improving and playing a larger role in content review. Our technology helps us in three main areas:
Proactive Detection: Artificial intelligence (AI) has improved to the point that it can detect violations across a wide variety of areas without relying on users to report content to Facebook, often with greater accuracy
than reports from users. This helps us detect harmful content and prevent it from being seen by hundreds or thousands of people.
Automation: AI has also helped scale the work of our content reviewers. Our AI systems
automate decisions for certain areas where content is highly likely to be violating. This helps scale content decisions without sacrificing accuracy so that our reviewers can focus on decisions where more expertise is needed to understand the context and
nuances of a particular situation. Automation also makes it easier to take action on identical reports, so our teams don't have to spend time reviewing the same things multiple times. These systems have become even more important during the COVID-19
pandemic with a largely remote content review workforce.
Prioritization: Instead of simply looking at reported content in chronological order, our AI prioritizes the most critical content to be reviewed, whether it was
reported to us or detected by our proactive systems. This ranking system prioritizes the content that is most harmful to users based on multiple factors such as virality, severity of harm and likelihood of violation . In an instance where our systems are
near-certain that content is breaking our rules, it may remove it. Where there is less certainty it will prioritize the content for teams to review.
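The three factors named above can be sketched as a simple triage queue. This is our own illustration, not Facebook's published code; the scoring formula, the weights and the "near-certain" threshold are all assumptions:

```python
import heapq

REMOVE_THRESHOLD = 0.95  # illustrative stand-in for "near-certain"

def priority(virality, severity, likelihood):
    # Hypothetical scoring: inputs assumed normalised to [0, 1];
    # Facebook has not published its actual formula or weights.
    return virality * severity * likelihood

def triage(items):
    """Auto-remove near-certain violations; queue the rest, most harmful first."""
    removed, queue = [], []
    for item_id, virality, severity, likelihood in items:
        if likelihood >= REMOVE_THRESHOLD:
            removed.append(item_id)
        else:
            # heapq is a min-heap, so negate the score for highest-first ordering
            heapq.heappush(queue, (-priority(virality, severity, likelihood), item_id))
    review_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
    return removed, review_order
```

Under such a scheme a viral, severe, likely-violating post jumps the queue regardless of when it was reported, which is the change from chronological review described above.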
Together, these three aspects of technology have transformed our content review process and greatly improved our ability to moderate content at scale. However, there are still areas where it's critical for people to review. For
example, discerning if someone is the target of bullying can be extremely nuanced and contextual. In addition, AI relies on a large amount of training data from reviews done by our teams in order to identify meaningful patterns of behavior and find
potentially violating content.
That's why our content review system needs both people and technology to be successful. Our teams focus on cases where it's essential to have people review and we leverage technology to help us scale
our efforts in areas where it can be most effective.
The Chinese government has deployed an update to its national firewall, to block encrypted HTTPS connections that are being set up using the latest internet standards for encryption.
The ban has been in place since the end of July, according to a
joint report published this week by three organizations tracking Chinese censorship -- iYouPort , the University of Maryland , and the Great Firewall Report.
In particular China is now blocking HTTPS+TLS1.3+ESNI.
TLS 1.3 is the latest
encryption standard that can be used to implement HTTPS. Server Name Indication (SNI) is used to specify which website is requested when several websites are hosted at the same IP address. By default it is unencrypted, letting ISPs and snoopers know which
website is being accessed even when using HTTPS. ESNI (Encrypted Server Name Indication) closes this loophole.
Other HTTPS traffic is still allowed through the Great Firewall if it uses older versions of the same protocols -- such as TLS 1.1 or 1.2,
or unencrypted SNI (Server Name Indication). This rather suggests that the censors can still snoop on connections made using these older standards.
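The cleartext nature of classic SNI is easy to demonstrate. The sketch below builds the server_name extension as defined in RFC 6066, exactly as a pre-ESNI ClientHello carries it; the hostname bytes appear verbatim on the wire, which is what a firewall matches on:

```python
def sni_extension(hostname):
    """Build the RFC 6066 server_name extension sent in a TLS ClientHello.

    Pre-ESNI, this structure travels unencrypted, so any on-path observer
    (an ISP, or the Great Firewall) can read the requested hostname.
    """
    name = hostname.encode("ascii")
    # ServerName entry: name_type 0 (host_name) + 2-byte length + hostname
    server_name = b"\x00" + len(name).to_bytes(2, "big") + name
    # ServerNameList: 2-byte length + entries
    server_name_list = len(server_name).to_bytes(2, "big") + server_name
    # Extension: type 0x0000 (server_name) + 2-byte length + payload
    return b"\x00\x00" + len(server_name_list).to_bytes(2, "big") + server_name_list

# The hostname is plainly visible in the handshake bytes:
assert b"example.com" in sni_extension("example.com")
```

ESNI (and its TLS 1.3 successor, Encrypted Client Hello) encrypts this field under a key published in DNS, leaving the firewall able to see only that TLS 1.3 with an encrypted name is in use -- which is exactly the signature China now blocks on.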
Per the findings of the joint report, the Chinese government is currently dropping all HTTPS traffic where TLS 1.3 and ESNI are used, and
temporarily banning the IP addresses involved in the connection, for small intervals of time that can vary between two and three minutes.
Note also that this news about Chinese censorship probably informs us about snooping capabilities in the UK.
Presumably GCHQ and UK ISPs would be similarly blinded by HTTPS+TLS1.3+ESNI, whilst still being able to block and snoop on older standards.
US Attorneys General from 20 different states have sent a letter urging Facebook to do a better job at censoring content. They wrote:
We, the undersigned State Attorneys General, write to request that you take additional
steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination. We also ask that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital
abuse.
...
As part of our responsibilities to our communities, Attorneys General have helped residents navigate Facebook's processes for victims to address abuse on its platform. While Facebook has--on
occasion--taken action to address violations of its terms of service in cases where we have helped elevate our constituents' concerns, we know that everyday users of Facebook can find the process slow, frustrating, and ineffective. Thus, we write to
highlight positive steps that Facebook can take to strengthen its policies and practices.
The letter was written by the Attorneys General of New Jersey, Illinois, and District of Columbia, and addressed to CEO Mark Zuckerberg and COO
Sheryl Sandberg. It was co-signed by 17 other Democratic AGs from states such as New York, California, Pennsylvania, Maryland, and Virginia.
The letter proceeds to highlight seven steps they think Facebook should take to better police content to avoid
online abuse. They recommended things such as aggressive enforcement of hate speech policies, third-party enforcement and auditing of hate speech, and real-time assistance for users to report harassment.
Hungary's Data Protection Chief has proposed new legislation under which social media platforms could ban people from their services only for compelling reasons, while also granting Hungarian authorities the right to review such decisions.
The head of the Hungarian Data Protection Authority (NAIH) requested a regulation on social media at a meeting of the Digital Freedom Working Group, according to which community profiles could only be suspended for compelling reasons. Also, according to
Attila Péterfalvi, Hungarian authorities should have the right to review these decisions.
The justice ministry's digital freedom committee aimed at improving the transparency of tech firms has penned a letter to the regional director of Facebook
asking whether the company's supervisory board complied with the requirements of political neutrality and transparency in its procedures, Justice Minister Judit Varga said.
Péterfalvi said:
I made the suggestion of
establishing a Hungarian authority procedure in which the Hungarian authorities would oblige Facebook to review unjustified suspensions so that freedom of expression would remain free indeed.
President Donald Trump has said that he will ban the popular short-form video app TikTok from operating in the United States. Trump said he could use emergency economic powers or an executive order.
Earlier on Friday, it seemed that the President was
set to sign an order to force ByteDance, the Chinese company that owns the social media platform, to sell the US operations of TikTok to Microsoft. The move was aimed at resolving policymakers' concerns that the foreign-owned TikTok may be a national
security risk.
The US government is conducting a national security review of TikTok and is preparing to make a policy recommendation to Trump.