This week, Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announced the launch of a consultation on
new legislative measures to clean up the Wild West elements of the Internet. In response, music group BPI says the government should use the opportunity to tackle piracy with advanced site-blocking measures, repeat infringer policies, and new
responsibilities for service providers.
This week, the Government published its response to the Internet Safety Strategy green paper, stating unequivocally that more needs to be done to tackle online harm. As a result, the Government will now carry through with its threat to introduce
new legislation, albeit with the assistance of technology companies, children's charities and other stakeholders.
While emphasis is being placed on hot-button topics such as cyberbullying and online child exploitation, the Government is clear that it wishes to tackle the full range of online harms. That has been greeted by UK music group BPI with a request
that the Government introduces new measures to tackle Internet piracy.
In a statement issued this week, BPI chief executive Geoff Taylor welcomed the move towards legislative change and urged the Government to encompass the music industry and beyond. He said:
This is a vital opportunity to protect consumers and boost the UK's music and creative industries. The BPI has long pressed for internet intermediaries and online platforms to take responsibility for the content that they promote to users.
Government should now take the power in legislation to require online giants to take effective, proactive measures to clean illegal content from their sites and services. This will keep fans away from dodgy sites full of harmful content and
prevent criminals from undermining creative businesses that create UK jobs.
The BPI has published four initial requests, each of which provides food for thought.
The demand to establish a new fast-track process for blocking illegal sites is not entirely unexpected, particularly given the expense of launching applications for blocking injunctions at the High Court.
The BPI has taken a large number of actions against individual websites -- 63 injunctions are in place against sites that are wholly or mainly infringing and whose business is simply to profit from criminal activity, the BPI says.
Those injunctions can be expanded fairly easily to include new sites operating under similar banners or facilitating access to those already covered, but it's clear the BPI would like something more streamlined. Voluntary schemes, such as the one
in place in Portugal, could be an option but it's unclear how burdensome that might prove for ISPs. New legislation could solve that dilemma, however.
Another big thorn in the side for groups like the BPI is the people and entities that post infringing content. The BPI is adept at having these listings taken down from sites and search engines in particular (more than 600 million requests to date) but
it's a game of whack-a-mole the group would rather not play.
With that in mind, the BPI would like the Government to impose new rules that would compel online platforms to stop content from being re-posted after it's been taken down while removing the accounts of repeat infringers.
Thirdly, the BPI would like the Government to introduce penalties for online operators who do not provide transparent contact and ownership information. The music group isn't any more specific than that, but the suggestion is that operators of
some sites have a tendency to hide in the shadows, something which frustrates enforcement activity.
Finally, and perhaps most interestingly, the BPI is calling on the Government to legislate for a new duty of care for online intermediaries and platforms. Specifically, the BPI wants effective action taken against businesses that use the Internet
to encourage consumers to access content illegally.
While this could easily encompass pirate sites and services themselves, this proposal has the breadth to include a wide range of offenders, from people posting piracy-focused tutorials on monetized YouTube channels to those selling fully-loaded
Kodi devices on eBay or social media.
Overall, the BPI clearly wants to place pressure on intermediaries to take action against piracy when they're in a position to do so, and particularly those who may not have shown much enthusiasm towards industry collaboration in the past.
Legislation in this Bill, to take powers to intervene with respect to operators that do not co-operate, would bring focus to the roundtable process and ensure that intermediaries take their responsibilities seriously, the BPI says.
We Happy Few is a 2018 Canadian survival horror game from Compulsion Games
We Happy Few is the tale of a plucky bunch of moderately terrible people trying to escape from a lifetime of cheerful denial in the city of Wellington Wells. In this alternative 1960s England, conformity is key. You'll have to fight or blend in
with the drug-addled inhabitants, most of whom don't take kindly to people who won't abide by their not-so-normal rules.
In May 2018, the Australian Censorship Board announced that We Happy Few had been banned in Australia.
The censors noted that the game's depictions of drug use related to incentives and rewards, in this case the beneficial effects of using Joy pills, could not be accommodated within the R18+ category.
The Soma-like drug Joy is used in the game to distract the citizens of Wellington Wells from the Orwellian reality they live in.
There's no word yet on whether Compulsion Games will make cuts to the game to satisfy the Board, but that is often the case.
The game is set for release on PlayStation 4, Xbox One and PC this summer.
The game developer Compulsion Games has responded to the ban:
To our Australian fans, we share your frustration. We will work with the ACB on the classification. If the government maintains its stance, we will make sure that you can get a refund, and we will work directly with affected Kickstarter backers to
figure something out. We would appreciate if you give us a little bit of time to appeal the decision before making a call.
We Happy Few is set in a dystopian society, and the first scene consists of the player character redacting material that could cause offense to society at large, as part of his job as a government archivist. It's a society that is forcing
its citizens to take Joy, and the whole point of the game is to reject this programming and fight back. In this context, our game's overarching social commentary is no different than Aldous Huxley's Brave New World or Terry Gilliam's Brazil.
The game explores a range of modern themes, including addiction, mental health and drug abuse. We have had hundreds of messages from fans appreciating the treatment we've given these topics, and we believe that when players do get into the world
they'll feel the same way. We're proud of what we've created.
We would like to respond to the thematic side of We Happy Few in more detail at a later date, as we believe it deserves more attention than a quick PR response. In the meantime we will be talking to the ACB to provide additional information, to
discuss the issues in depth, and see whether they will change their minds.
Russia's international propaganda channel RT will not lose its UK broadcasting licence,
according to information reported by the Telegraph.
Ofcom has been investigating the news channel for continuously casting doubt on the Russian connection to the attempted murder of ex-spy Sergei Skripal and his daughter in Salisbury.
Perhaps it is rather bizarre that a news content censor should be tasked with something that could lead to consequences such as retaliatory action and a further escalation of an already tense relationship with Russia. Surely when such risks are
involved, diplomats and the Foreign Office should be taking the lead.
Perhaps Ofcom were thinking along these lines in taking the decision not to ban the channel. In a legal document entitled Update on the RT service, Ofcom has now said:
States sometimes commit, or will have committed, acts which are contrary to these values. In our judgment, it would be inappropriate for Ofcom always to place decisive weight on such matters in determining whether state-funded broadcasters were
fit and proper to hold broadcast licences, independently of their broadcasting record.
If we did, many state-funded broadcasters (mostly those from states which may not share UK values) would be potentially not fit and proper. This would be a poorer outcome for UK audiences in light of our duties on plurality, diversity and freedom of expression.
Ofcom were a bit more bullish at the start of the investigation, saying:
Should the UK investigating authorities determine that there was an unlawful use of force by the Russian State against the UK, we would consider this relevant to our ongoing duty to be satisfied that RT is fit and proper, the regulator said at the time.
It is also a little strange to note that the Telegraph's story has not been picked up by other newspapers. The Express initially published the story but withdrew it a little later.
Update: Tit for tat
24th May 2018. From the FT
Ofcom has just announced that three further programmes on the Russian propaganda channel RT will be investigated, after an Ofcom move to continuously monitor the station's output. In response, Russian foreign ministry spokesperson Maria Zakharova
has informed reporters that relevant Russian structures have begun closely studying the content of the materials of the British mass media that are represented in the Russian Federation.
Democrats in the United States House of Representatives have gathered 90 of the 218 signatures they'll need to force a vote on
whether or not to roll back net neutrality rules, while Federal Communications Commission Chair Ajit Pai has already predicted that the House effort will fail and large telecommunications companies publicly expressed their anger at last
Wednesday's Senate vote to keep the Obama-era open internet rules in place.
Led by Pai, a Donald Trump appointee, the FCC voted 3-2 along party lines in December to scrap the net neutrality regulations, effectively creating an internet landscape dominated by whichever companies can pay the most to get into the online fast lanes.
Telecommunications companies could also choose to block some sites simply based on their content, a threat to which the online porn industry would be especially vulnerable, given that five states have either passed or are considering legislation
labeling porn a public health hazard.
While the House Republican leadership has taken the position that the net neutrality issue should not even come to a vote, on May 17 Pennsylvania Democrat Mike Doyle introduced a discharge petition that would force the issue to the House floor. A
discharge petition needs 218 signatures of House members to succeed in forcing the vote. As of Monday morning, May 21, Doyle's petition had received 90 signatures. The effort would need all 193 House Democrats plus 25 Republicans to sign on, in
order to bring the net neutrality rollback to the House floor.
Culture Secretary Matt Hancock has issued the following press release from the Department for Digital, Culture, Media and Sport:
New laws to make social media safer
New laws will be created to make sure that the UK is the safest place in the world to be online, Digital Secretary Matt Hancock has announced.
The move is part of a series of measures included in the government's response to the Internet Safety Strategy green paper, published today.
The Government has been clear that much more needs to be done to tackle the full range of online harm.
Our consultation revealed users feel powerless to address safety issues online and that technology companies operate without sufficient oversight or transparency. Six in ten people said they had witnessed inappropriate or harmful content online.
The Government is already working with social media companies to protect users and while several of the tech giants have taken important and positive steps, the performance of the industry overall has been mixed.
The UK Government will therefore take the lead, working collaboratively with tech companies, children's charities and other stakeholders to develop the detail of the new legislation.
Matt Hancock, DCMS Secretary of State said:
Digital technology is overwhelmingly a force for good
across the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the Internet through legislation, in a way that supports innovation. We
strongly support technology companies to start up and grow, and we want to work with them to keep our citizens safe.
People increasingly live their lives through online platforms so it's more important than ever that people are safe and parents can have confidence they can keep their children from harm. The measures we're taking forward today will help make
sure children are protected online and balance the need for safety with the great freedoms the internet brings just as we have to strike this balance offline.
DCMS and Home Office will jointly work on a White Paper with other government departments, to be published later this year. This will set out legislation to be brought forward that tackles a range of both legal and illegal harms, from
cyberbullying to online child sexual exploitation. The Government will continue to collaborate closely with industry on this work, to ensure it builds on progress already made.
Home Secretary Sajid Javid said:
Criminals are using the internet to further their exploitation and abuse of children, while terrorists are abusing these platforms to recruit people and incite atrocities. We need to protect our communities from these heinous crimes and vile
propaganda and that is why this Government has been taking the lead on this issue.
But more needs to be done and this is why we will continue to work with the companies and the public to do everything we can to stop the misuse of these platforms. Only by working together can we defeat those who seek to do us harm.
The Government will be considering where legislation will have the strongest impact, for example whether transparency or a code of practice should be underwritten by legislation, but also a range of other options to address both legal and illegal harms.
We will work closely with industry to provide clarity on the roles and responsibilities of companies that operate online in the UK to keep users safe.
The Government will also work with regulators, platforms and advertising companies to ensure that the principles that govern advertising in traditional media -- such as preventing companies targeting unsuitable advertisements at children -- also
apply and are enforced online.
It seems that the latest call for internet censorship is driven by some sort of revenge for having been snubbed by the tech companies.
The culture secretary said he does not have enough power to police social media firms after admitting that only four of the 14 companies invited to talks showed up.
Matt Hancock told the BBC it had given him a big impetus to introduce new laws to tackle what he has called the internet's Wild West culture.
He said self-policing had not worked and legislation was needed.
He told BBC One's Andrew Marr Show, presented by Emma Barnett, that the government just doesn't know how many of the millions of children using social media were not old enough for an account, and that he was very worried about age
verification. He told the programme he hopes we get to a position where all social media users have to have their age verified.
Two government departments are working on a White Paper expected to be brought forward later this year. Asked about the same issue on ITV's Peston on Sunday, Hancock said the government would be legislating in the next couple of years
because we want to get the details right.
For its updated news application, Google is claiming it is using artificial intelligence as part of an effort to weed out
disinformation and feed users with viewpoints beyond their own filter bubble.
Google chief Sundar Pichai, who unveiled the updated Google News earlier this month, said the app now surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. It marks Google's latest
effort to be at the centre of online news and includes a new push to help publishers get paid subscribers through the tech giant's platform.
In reality Google has just banned news from the likes of the Daily Mail whilst all the 'trusted sources' are just the likes of the politically correct papers such as the Guardian and Independent.
According to product chief Trystan Upstill, the news app uses the best of artificial intelligence to find the best of human intelligence - the great reporting done by journalists around the globe. While the app will enable users to get
personalised news, it will also include top stories for all readers, aiming to break the so-called filter bubble of information designed to reinforce people's biases.
Nicholas Diakopoulos, a Northwestern University professor specialising in computational and data journalism, said the impact of Google's changes remain to be seen. Diakopoulos said algorithmic and personalised news can be positive for engagement
but may only benefit a handful of news organisations. His research found that Google concentrates its attention on a relatively small number of publishers: it's quite concentrated. Google's effort to identify and prioritise trusted news
sources may also be problematic, according to Diakopoulos. Maybe it's good for the big guys, or the (publishers) who have figured out how to game the algorithm, he said. But what about the local news sites, what about the new news sites that don't
have a long track record?
I tried it out and no matter how many times I asked it not to provide stories about the royal wedding and the cup final, it just served up more of the same. And indeed as Diakopoulos said, all it wants to do is push news stories from the
politically correct papers, most notably the Guardian. I can't see it proving very popular. I'd rather have an app that feeds me what I actually like, not what I should like.
The Entertainment Software Rating Board has confirmed it will cease offering free age and content ratings for online video games next
month. The Short Form ratings process the ESRB currently offers for download-only and online games will be discontinued in June, though an exact date has not yet been set. The ESRB will continue with the higher-cost Long Form ratings, primarily used for physical/boxed games.
Developers feared that they would be forced to pay for the higher-cost rating, as otherwise they would not be allowed to release their titles on key platforms like Xbox that demand a content rating.
However, the ESRB's official Twitter feed responded:
Developers of digital games and apps will still be able to obtain ESRB ratings at no cost through the IARC rating process. The Microsoft Store deployed IARC years ago and has committed to making IARC ratings accessible to all Xbox developers. So,
developers should not be concerned.
The International Age Rating Coalition is a newer system for obtaining age ratings for multiple territories and storefronts with a single process. While the ESRB singled out the Xbox Store, IARC ratings are also accepted on Google Play, the Nintendo eShop, and
the Oculus Store.
There is currently no word on when this will apply to the PlayStation Store, but an IARC press release in December 2017 said the platform would be added soon.
Multiple game developers have been tweeting about warnings received from Valve about the content included in their games
distributed on Steam.
Apparently, Valve, the company behind the popular digital download platform, is cracking down on quasi-sexual content, threatening the developers involved with removal if the games are not censored before a deadline, seemingly a couple of weeks away.
HunieDev, developer of the game HuniePop, tweeted:
I've received an e-mail from Valve stating that HuniePop violates the rules & guidelines for pornographic content on Steam and will be removed from the store unless the game is updated to remove said content.
All the games targeted so far have been based on anime style graphics with other examples being: Tropical Liquor, Mutiny!! and SonoHanabira.
The affected developers are particularly miffed as they have been careful to censor their games to meet the current censorship guidelines. They have adopted the approach of trimming the games sold on Steam to fit the guidelines, and then offering
gamers patches to restore the uncut versions.
Other digital download portals are rallying against the censorship and are offering a new home for the games affected. JAST USA, MangaGamer and Nutaku have said on social media that they are willing to host the impacted titles, encouraging
developers to contact them. For example, JAST USA tweeted:
We're disappointed about Steam's new enforcement of their content policy, hurting good developers. VNs should be accessible to everyone, so we're making an open invitation to any VN developers who'd like to join our DRM-free store to release their games.
Local newspaper editors from across the country have united to urge MPs not to join a disgraceful Labour-backed plot to
muzzle the Press.
Former party leader Ed Miliband and deputy leader Tom Watson are among opposition MPs seeking to hijack data protection legislation to introduce newspaper censorship.
MPs will vote tomorrow on proposed amendments to the Data Protection Bill that would force publishers refusing to join a state-recognised Press censor to pay the costs of claimants who bring court proceedings, even if their claims are defeated.
They would also lead to yet another inquiry into the media known as Leveson 2.
Local newspaper editors warn today the completely unacceptable measures are an attack on Press freedom that would cause irreparable damage to the regional press.
Alan Edmunds, editorial director of Trinity Mirror Regionals, the country's largest publisher of regional and local papers, said:
We do not want our journalists facing the spectre of Leveson 2 when attempting to report on the activities of public figures, legitimately and in the public interest. Another huge inquiry would only embolden those who would rather keep
their activities hidden from scrutiny.
Maidenhead Advertiser editor Martin Trepte added:
The amendments represent an attack on Press freedom which is completely unacceptable in our society. As a point of principle, we stand united against these attacks on free speech and urge all MPs to do likewise by voting against all the amendments.
Ed Miliband served up an impassioned speech saying something along the lines of 'think of the hacking victims': they deserve that the rest of the British people should be denied the protection of a free press, so we can all suffer together.
But despite his best efforts, press freedom won the day and Miliband's proposal to resuscitate the second part of the Leveson inquiry was defeated by a vote of 304 to 295. Tom Watson's amendment to withdraw natural justice from newspapers refusing
to sign up to a press censor was withdrawn after it became obvious that parliament was in no mood to support press censorship.
For the government, Culture Secretary Matt Hancock said it was a great day for a free press.
On Tuesday, the Commons rejected yet another attempt to resurrect the £5.4 million Leveson 2 inquiry into historic allegations against newspapers.
MPs were forced to act again on the issue after peers attempted to amend the Data Protection Bill, ignoring an earlier vote in the Commons last week. MPs have now voted twice to reject a backward-looking, disproportionate and costly Leveson 2
inquiry. Tuesday's vote passed by 12 votes -- 301 votes to 289 -- an even larger majority than last week.
Downing Street later urged the Lords to finally respect the wishes of the elected house. And the Lords seems to have responded.
A Tory peer who had just tried to resurrect plans for another multi-million-pound Press inquiry told his fellow plotters it was time to give up. Lord Attlee urged the Lords to abandon any more challenges. The peer, who was one of three Tories to
back a rebel amendment to the Data Protection Bill, said they should not seek to hold the legislation to ransom. He added:
We have had a good battle and now we have lost. We should not pursue it further. We should not hold a time-sensitive Bill to ransom in order to force the Government to change policy. In my opinion, that would be wrong.
If all things were equal, it would seem eminently sensible to ban 100 quid spins on a gambling machine; ban junk food shops for making people fat; ban
pubs for being unhealthy... But if you do all of these you will end up with some pretty desolate high streets, and an awful lot of people staying in and pumping all their money into foreign media and retail giants such as Amazon, Netflix,
20th Century Fox and Murdoch's Sky Sports.
Government to cut Fixed Odds Betting Terminals maximum stake from £100 to £2
The maximum stakes on Fixed Odds Betting Terminals (FOBTs) are to be reduced from £100 to £2 to reduce the risk of gambling-related harm, Minister for Sport and Civil Society Tracey Crouch announced today.
The move comes off the back of a consultation with the public and the industry to ensure that we have the right balance between a sector that can grow and contribute to the economy and one that is socially responsible and doing all it should to
protect consumers and communities.
The government wants to reduce the potential for large losses on FOBT (B2) machines and the risk of harm to both the player and wider communities. Following analysis of consultation responses and advice from the Gambling Commission, the government
believes that a cut to £2 will best achieve this.
The Gambling Commission has also been tasked to take forward discussions with the industry to improve player protection measures on B1 and B3 category machines, looking at spend and time limits.
DCMS Secretary of State Matt Hancock said:
When faced with the choice of halfway measures or doing everything we can to protect vulnerable people, we have chosen to take a stand. These machines are a social blight and prey on some of the most vulnerable in society, and we are determined
to put a stop to it and build a fairer society for all.
Minister for Sport and Civil Society Tracey Crouch said:
Problem gambling can devastate individuals' lives, families and communities. It is right that we take decisive action now to ensure a responsible gambling industry that protects the most vulnerable in our society. By reducing FOBT stakes to £2 we
can help stop extreme losses by those who can least afford it.
While we want a healthy gambling industry that contributes to the economy, we also need one that does all it can to protect players. We are increasing protections around online gambling, doing more on research, education and treatment of problem
gambling and ensuring tighter rules around gambling advertising. We will work with the industry on the impact of these changes and are confident that this innovative sector will step up and help achieve this balance.
In addition to the reduction to FOBT stakes the government has today confirmed:
The Gambling Commission will toughen up protections around online gambling including stronger age verification rules and proposals to require operators to set limits on consumers' spending until affordability checks have been conducted.
A major multi-million pound advertising campaign promoting responsible gambling, supported by industry and GambleAware, will be launched later this year.
The Industry Group for Responsible Gambling (IGRG) has amended its code to ensure that a responsible gambling message will appear for the duration of all TV adverts.
Public Health England will carry out a review of the evidence relating to the public health harms of gambling.
As part of the next licence competition the age limit for playing National Lottery games will be reviewed, to take into account developments in the market and the risk of harm to young people.
In order to cover any negative impact on the public finances, and to protect funding for vital public services, this change will be linked to an increase in Remote Gaming Duty, paid by online gaming operators, at the relevant Budget.
Changes to the stake will be through secondary legislation. The move will need parliamentary approval and we will also engage with the gambling industry to ensure it is given sufficient time to implement and complete the technological changes.
B1 machines are in casinos with a maximum stake of £5 with a maximum pay-out of £10,000 (or progressive jackpot of £20,000)
B2 gaming machines, found in bookies, are those being talked about here
B3 machines are located in casino, betting, arcade and bingo venues with a maximum stake of £2 and a maximum pay-out of £500.
New Zealand's Chief Censor David Shanks warned parents and caregivers of vulnerable children and
teenagers to be prepared for the release of Season 2 of Netflix's 13 Reasons Why, scheduled to screen this week on Friday, May 18, at 7pm.
The Office of Film and Literature Classification consulted with the Mental Health Foundation in classifying 13 Reasons Why: Season 2 as RP18 with a warning that it contains rape, suicide themes, drug use, and bullying. Shanks said:
"There is a strong focus on rape and suicide in Season 2, as there was in Season 1. We have told Netflix it is really important to warn NZ audiences about that."
"Rape is an ugly word for an ugly act. But young New Zealanders have told us that if a series contains rape -- they want to know beforehand."
An RP18 classification means that someone under 18 must be supervised by a parent or guardian when viewing the series. A guardian is considered to be a responsible adult (18 years and over), for example a family member or teacher who can provide
guidance. Shanks said:
"This classification allows young people to access it in a similar fashion to the first season, while requiring the support from an adult they need to stay safe and to process the challenging topics in the series."
Netflix is required to clearly display the classification and warning.
"If a child you care for is planning to watch the show, you should sit down and watch it with them -- if not together then at least around the same time. That way you can at least try to have informed and constructive discussions with them
about the content."
"The current picture about what our kids can be exposed to online is grim. We need to get that message across to parents that they need to help young people with this sort of content."
For parents and caregivers who don't have time to watch the entire series, the Classification Office and Mental Health Foundation have produced an episode-by-episode guide with synopses of problematic content, and conversation starters to have
with teens. This will be available on both organisations' websites from 7pm on Friday night.
In response to the continued restriction and censorship of
conservatives and their organizations by tech giants Facebook, Twitter, Google and YouTube, the Media Research Center (MRC) along with 18 leading conservative organizations announced Tuesday, May 15, 2018 the formation of a new, permanent
coalition, Conservatives Against Online Censorship.
Conservatives Against Online Censorship will draw attention to the issue of political censorship on social media. This new coalition will urge Facebook, Twitter, Google and YouTube to address the four following key areas of concern:
Provide Transparency: We need detailed information so everyone can see if liberal groups and users are being treated the same as those on the right. Social media companies operate in a black-box environment, only releasing anecdotes about
reports on content and users when they think it necessary. This needs to change. The companies need to design open systems so that they can be held accountable, while giving weight to privacy concerns.
Provide Clarity on 'Hate Speech': "Hate speech" is a common concern among social media companies, but no two firms define it the same way. Their definitions are vague and open to interpretation, and their interpretation often
looks like an opportunity to silence thought. Today, hate speech means anything liberals don't like. Silencing those you disagree with is dangerous. If companies can't tell users clearly what it is, then they shouldn't try to regulate it.
Provide Equal Footing for Conservatives: Top social media firms, such as Google and YouTube, have chosen to work with dishonest groups that are actively opposed to the conservative movement, including the Southern Poverty Law Center.
Those companies need to make equal room for conservative groups as advisers to offset this bias. That same attitude should be applied to employment diversity efforts. Tech companies need to embrace viewpoint diversity.
Mirror the First Amendment: Tech giants should afford their users nothing less than the free speech and free exercise of religion embodied in the First Amendment as interpreted by the U.S. Supreme Court. That standard, the result of
centuries of American jurisprudence, would enable the rightful blocking of content that threatens violence or spews obscenity, without trampling on free speech liberties that have long made the United States a beacon for freedom.
"Social media is the most expansive and most game-changing form of communication today. It is these facts that make online political censorship one of the largest threats to free speech we have ever seen. Conservatives should be given the
same ability to express their political ideas online as liberals, without the fear of being suppressed or censored," said Media Research Center President Brent Bozell.
"Meaningful debate only happens when both sides are given equal footing. Freedom of speech, regardless of ideological leaning, is something Americans hold dear. Facebook, Twitter and all other social media companies must acknowledge this and
work to rectify these concerns unless they want to lose all credibility with the conservative movement. As leaders of this effort, we are launching this coalition to make sure that the recommendations we put forward on behalf of the conservative
movement are followed through."
The Media Research Center sent letters to representatives at Facebook, Twitter, Google and YouTube last week asking each company to address these complaints and begin a conversation about how they can repair their credibility within the
conservative movement. As of Tuesday, May 15, 2018, only Facebook has issued a formal response.
Twitter has outlined further censorship measures in a blog post:
In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we've been working to address is what some might refer to as "trolls." Some troll-like behavior is fun, good and
humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our
policies, and, in those cases, we take action on them. Others don't but are behaving in ways that distort the conversation.
To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what's reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large --
and negative -- impact on people's experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?
A New Approach
Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we're tackling issues of behaviors that distort and detract
from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we're able to improve the health of the conversation, and
everyone's experience on Twitter, without waiting for people who use Twitter to report potential issues to us.
There are many new signals we're taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that
repeatedly Tweet and mention accounts that don't follow them, or behavior that might indicate a coordinated attack. We're also looking at how accounts are connected to those that violate our rules and how they interact with each other.
These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on
"Show more replies" or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.
In our early testing in markets around the world, we've already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing
Tweets that disrupt their experience on Twitter.
Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone's Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false
positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We'll continue to be open and honest about the mistakes we make and the progress we are making. We're encouraged by the results we've seen so
far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.
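Twitter has not published the details of its model, but the behaviour-based approach it describes (score accounts on signals, then collapse rather than remove low-scoring replies) can be sketched roughly as follows. All signal names, weights and thresholds here are hypothetical illustrations, not Twitter's actual system:

```python
# Hypothetical sketch of behaviour-based ranking: each signal the blog post
# mentions (unconfirmed email, bulk sign-ups, unsolicited mentions)
# lowers a score, and low-scoring replies are folded behind
# "Show more replies" rather than deleted.

def health_score(account):
    """Return a score in [0, 1]; lower means more troll-like. Hypothetical weights."""
    score = 1.0
    if not account.get("email_confirmed", False):
        score -= 0.2
    if account.get("simultaneous_signups", 0) > 1:
        score -= 0.3
    # Repeatedly tweeting at and mentioning accounts that don't follow back
    if account.get("unsolicited_mentions", 0) > 10:
        score -= 0.3
    return max(score, 0.0)

def present_replies(replies, threshold=0.5):
    """Split replies into visible and collapsed; nothing is removed."""
    visible = [r for r in replies if health_score(r["author"]) >= threshold]
    collapsed = [r for r in replies if health_score(r["author"]) < threshold]
    return visible, collapsed
```

The key design point the post makes is preserved here: content that breaks no rules stays available, it is only ranked lower in communal areas.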
We're often asked how we decide what's allowed on Facebook -- and how much bad stuff is out there. For years, we've had Community Standards
that explain what stays up and what comes down. Three weeks ago, for the first time, we published the internal guidelines we use to enforce those standards. And today we're releasing numbers in a Community Standards Enforcement Report so
that you can judge our performance for yourself.
Alex Schultz, our Vice President of Data Analytics, explains in more detail how exactly we measure what's happening on Facebook in both this Hard Questions post and our guide to Understanding the Community Standards Enforcement Report. But it's
important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works.
This report covers our enforcement efforts between October 2017 and March 2018, and it covers six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts. The numbers show you:
How much content people saw that violates our standards;
How much content we removed; and
How much content we detected proactively using our technology -- before people who use Facebook reported it.
Most of the action we take to remove bad content is around spam and the fake accounts used to distribute it. For example:
We took down 837 million pieces of spam in Q1 2018 -- nearly 100% of which we found and flagged before anyone reported it; and
The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts -- most of which were disabled within minutes of registration. This is in addition to the millions of fake account
attempts we prevent daily from ever registering with Facebook. Overall, we estimate that around 3 to 4% of the active Facebook accounts on the site during this time period were still fake.
In terms of other types of violating content:
We took down 21 million pieces of adult nudity and sexual activity in Q1 2018 -- 96% of which was found and flagged by our technology before it was reported. Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, 7
to 9 views were of content that violated our adult nudity and pornography standards.
For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 -- 86% of which was identified by our technology before it was reported to Facebook.
For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 -- 38% of which was flagged by our technology.
As Mark Zuckerberg said at F8 , we have a lot of work still to do to prevent abuse. It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so
important. For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue. And more generally, as I explained two weeks
ago, technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported. In addition, in many areas -- whether it's spam, porn or
fake accounts -- we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts. It's why we're investing heavily in more people and better
technology to make Facebook safer for everyone.
It's also why we are publishing this information. We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too. This is the
same data we use to measure our progress internally -- and you can now see it to judge our progress for yourselves. We look forward to your feedback.
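To put Facebook's prevalence figure in more familiar terms: 7 to 9 violating views per 10,000 works out to roughly 0.07-0.09% of all views. A quick check of that conversion (a simple illustration, not Facebook's methodology):

```python
# Convert "views per 10,000" prevalence figures into percentages.
def prevalence_percent(views_per_10k):
    return views_per_10k / 10_000 * 100

for v in (7, 9):
    print(f"{v} per 10,000 = {prevalence_percent(v):.2f}% of views")
```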
The European Court of Human Rights has overturned the Maltese courts' decision to ban the play Stitching, eight years after the controversial judgment had incensed the local artistic scene.
The ECHR awarded €10,000 as legal costs as well as €10,000 in moral damages jointly to Unifaun Theatre Productions Limited, as well as director Chris Gatt and actors Pia Zammit and Mike Basmadjian. The court's decision was unanimous, including
Maltese judge Vincent de Gaetano.
Unifaun's production had been banned in 2010 by the Maltese court, a decision confirmed by the Constitutional Court of Appeal, after it was flagged by the now defunct Film and Stage Classification Board.
The Maltese court had ruled in 2010 that it was unacceptable in a democratic society founded on the rule of law for any person to be allowed to swear in public, even in a theatre as part of a script. The judge pointed out that the country's values could not be turned upside down in the name of freedom of expression.
The censorship of Stitching had a knock-on effect on media censorship in Malta. The government changed the censorship laws in 2012, effectively ending the possibility of theatrical productions being banned and lightening up on film censorship, bringing it more into line with other European countries.
Da Ai TV has canceled its new soap opera Jiachang's Heart, reportedly due to criticism from Chinese
officials two days after the show's pilot aired, sparking concerns about the reach of Chinese censorship.
The show was inspired by the story of Tzu Chi volunteer Lin Chih-hui, now 91, who was born in the Japanese colonial era and served as a Japanese military nurse in China during World War II.
The show's trailer was panned by Chinese media, and local media reported that China's Taiwan Affairs Office sent officials to the foundation's office in Taiwan to investigate the show soon after the pilot aired on Thursday last week.
China's Global Times newspaper published an opinion piece by a Chinese official saying:
It is clear from the 15-minute trailer that the first half of the series is kissing up to Japan.
The show was duly pulled and Da Ai media development manager Ou Hung-yu explained:
The channel decided that the show's depiction of war is contrary to its guideline of purifying human hearts and encouraging social harmony.
Here is an update on the Facebook app investigation and audit that Mark Zuckerberg promised on March 21.
As Mark explained, Facebook will investigate all the apps that had access to large amounts of information before we changed our platform policies in 2014 -- significantly reducing the data apps could access. He also made clear that where we had
concerns about individual apps we would audit them -- and any app that either refused or failed an audit would be banned from Facebook.
The investigation process is in full swing, and it has two phases. First, a comprehensive review to identify every app that had access to this amount of Facebook data. And second, where we have concerns, we will conduct interviews, make requests
for information (RFI) -- which ask a series of detailed questions about the app and the data it has access to -- and perform audits that may include on-site inspections.
We have large teams of internal and external experts working hard to investigate these apps as quickly as possible. To date thousands of apps have been investigated and around 200 have been suspended -- pending a thorough investigation into
whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before
2015 -- just as we did for Cambridge Analytica.
There is a lot more work to be done to find all the apps that may have misused people's Facebook data -- and it will take time. We are investing heavily to make sure this investigation is as thorough and timely as possible. We will keep you
updated on our progress.
Big Brother Watch's report, released today, reveals:
South Wales Police store photos of all innocent people incorrectly matched by facial recognition for a year, without their knowledge, resulting in a biometric database of over 2,400 innocent people
Home Office spent £2.6m funding South Wales Police's use of the technology, although it is "almost entirely inaccurate"
Metropolitan Police's facial recognition matches are 98% inaccurate, misidentifying 95 people at last year's Notting Hill Carnival as criminals -- yet the force is planning 7 more deployments this year
South Wales Police's matches are 91% inaccurate -- yet the force plans to target the Biggest Weekend and a Rolling Stones concert next
Big Brother Watch is taking the report to Parliament today to launch a campaign calling for police to stop using the controversial technology, branded by the group as "dangerous and inaccurate".
Big Brother Watch's campaign, calling on UK public authorities to immediately stop using automated facial recognition software with surveillance cameras, is backed by David Lammy MP and 15 rights and race equality groups including Article 19,
Football Supporters Federation, Index on Censorship, Liberty, Netpol, Police Action Lawyers Group, the Race Equality Foundation, and Runnymede Trust.
Shadow Home Secretary Diane Abbott MP and Shadow Policing Minister Louise Haigh MP will speak at the report launch event in Parliament today at 1600.
Police have begun using automated facial recognition in city centres, at political demonstrations, sporting events and festivals over the past two years. Particular controversy was caused when the Metropolitan Police targeted Notting Hill Carnival
with the technology two years in a row, with rights groups expressing concern that comparable facial recognition tools are more likely to misidentify black people.
Big Brother Watch's report found that the police's use of the technology is "lawless" and could breach the right to privacy protected by the Human Rights Act.
Silkie Carlo, director of Big Brother Watch, said:
"Real-time facial recognition is a dangerously authoritarian surveillance tool that could fundamentally change policing in the UK. Members of the public could be tracked, located and identified -- or misidentified -- everywhere they go.
We're seeing ordinary people being asked to produce ID to prove their innocence as police are wrongly identifying thousands of innocent citizens as criminals.
It is deeply disturbing and undemocratic that police are using a technology that is almost entirely inaccurate, that they have no legal power for, and that poses a major risk to our freedoms.
This has wasted millions in public money and the cost to our civil liberties is too high. It must be dropped."
Adults who want to watch online porn (or perhaps buy adults-only products such as alcohol) will be able to buy codes from newsagents and supermarkets to prove that they are over 18 when online.
One option available to the estimated 25 million Britons who regularly visit such websites will be a 16-digit code, dubbed a 'porn pass'.
While porn viewers will still be able to verify their age using methods such as registering credit card details, the 16-digit code option would be fully anonymous. According to AVSecure, the cards will be sold for £10 to anyone who looks over 18, without the need for any further identification. The website doesn't say, but presumably where there is doubt about a customer's age they will have to show ID documents such as a passport or driving licence; hopefully that ID will not have to be recorded anywhere.
It is hoped the method will be popular among those wishing to access porn online without having to hand over personal details to X-rated sites.
The user will type a 16-digit number into websites that belong to the AVSecure scheme. The scheme should be popular with websites, as it offers them age verification for free (the £10 card fee being the company's only source of income). This is a much better proposition for websites than most, if not all, of the other age verification companies offer.
AVSecure also offers an encrypted implementation via blockchain that will not allow websites to use the 16-digit number as a key to track people's website browsing. That said, websites could still use a myriad of other standard technologies to track users.
The BBFC is assigned the task of deciding whether to accredit different technologies and it will be very interesting to see if they approve the AVSecure offering. It is easily the best solution to protect the safety and privacy of porn viewers,
but it may test the BBFC's pragmatism to accept the most workable and safest solution for adults, one which is not quite fully guaranteed to protect children. Pragmatism is required because the scheme has the technical drawback of having no further checks in place once the card has been purchased. The obvious worry is that an over-18 could go round several shops buying cards to pass on to under-18 mates. Another possibility is that kids could stumble on a parent's card and get access. Numbers shared on the web, however, could easily be blocked if used simultaneously from different IP addresses.
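The simultaneous-use blocking mentioned above could work along these lines. This is a minimal sketch under stated assumptions: AVSecure has not published how its system works, and the class, window length and method names here are hypothetical:

```python
# Hypothetical sketch: flag a 16-digit code seen from more than one IP
# address within a short window, as a guard against codes shared online.
import time

class CodeMonitor:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.last_seen = {}  # code -> (ip, timestamp of last allowed use)

    def check(self, code, ip, now=None):
        """Return True if this use is allowed, False if the code looks shared."""
        now = time.time() if now is None else now
        prev = self.last_seen.get(code)
        if prev is not None:
            prev_ip, prev_ts = prev
            if prev_ip != ip and now - prev_ts < self.window:
                # Same code, different IP, overlapping window: likely shared.
                return False
        self.last_seen[code] = (ip, now)
        return True
```

A real deployment would have to tune the window carefully, since legitimate users switching between home Wi-Fi and mobile data also change IP address.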
The BBC has published its findings after investigating the rather blatant knock at Jeremy Corbyn on Newsnight.
Newsnight used an image of Corbyn in a Russian-style hat set amongst Moscow images as the backdrop for a critical news piece. The BBC writes:
BBC Two, 15 March 2018
Use of Jeremy Corbyn's image
Finding by the Executive Complaints Unit
This edition of Newsnight was broadcast at a time of heightened interest in UK/Russian relations following the nerve agent attack in Salisbury. The programme focused on Jeremy Corbyn's position in the House of Commons on the previous day, and an
image of him, set against a Moscow-inspired skyline, was used as the backdrop for the introduction and a later studio discussion. 48 people complained to the Executive Complaints Unit (ECU) that the backdrop had been deliberately contrived to
convey an impression of pro-Russian sympathy on Mr Corbyn's part, on one or more of the following grounds:
that the image had been manipulated to make Mr Corbyn look more Russian than in the photograph from which it had been taken, particularly by altering the appearance of his hat; that the superimposition of the image on such a background compounded this; and that the selection of a photograph in which he was wearing what some described as a Lenin-style cap was also intended to suggest a Russian association.
Some also complained that the programme's choice of focus represented bias against Mr Corbyn.
After investigation, the ECU reached the following findings.
Manipulation of the image
Many complainants maintained that the image had been photo-shopped, in terms which reflected what the Guardian columnist Owen Jones said in the following evening's edition of Newsnight:
Yesterday, the background to your programme, you have Jeremy Corbyn dressed up against the Kremlin skyline...dressed up as a Soviet stooge...you even photo-shopped his hat to look more Russian.
Some illustrated their complaints with copies of the original photograph next to a screen-grab of the equivalent image in the programme, in which the hat did appear to be slightly taller. This, however, was not the result of photo-shopping or
otherwise manipulating the image. It resulted from the fact that the screen onto which the image was projected is curved, meaning that the image as a whole appeared higher in relation to its width than it would on a flat surface.
The BBC made clear from the outset that the photograph had not been photo-shopped or manipulated to make Mr Corbyn seem more Russian, and some complainants understood this as a claim that it had been shown unaltered. However, it was immediately
apparent from the backdrop that the source images had been modified in some respects. In fact, the graphics team had increased the contrast to ensure enough definition on screen, and given the whole backdrop a colour wash for a stylised effect (as
the then Acting Editor of Newsnight explained on Twitter). Newsnight's graphics team regularly treats images of politicians from all parties, and others, in this way, to create a strong studio backdrop for whichever story is being covered. As a
result of this treatment, much of the detail of Mr Corbyn's hat visible in the original photograph was lost, and the hat appeared in silhouette. This was the effect which suggested to some complainants a likeness to a Russian-style fur hat.
Superimposition of the image on a Moscow-inspired skyline
Visual montage is a commonly-used device in TV programmes to highlight a story or theme. The use of the technique in news programmes such as Newsnight is intended to epitomise the story rather than to express or invite a particular attitude to it,
and the montage used in the item in question was no exception. The backdrop in the previous evening's edition of Newsnight, which focused on the current state of relations between Britain and Russia, also included a Moscow-related image. As the
focus of the 15 March item was on Mr Corbyn's reaction to the claim that Russia was responsible for the nerve agent attack, it was entirely apt to combine his image with this backdrop.
Selection of the photograph
The photograph was chosen because it was a typical and readily recognisable image of Mr Corbyn, of a kind which has been used many times across the media without remark. Complaints about its use on this occasion focussed on the supposedly Russian
associations of the Lenin-style cap he was wearing, but this objection conflicts with the objections of those who maintained that it was the alleged photo-shopping of the hat which gave it a more Russian appearance. Neither objection has any basis.
Choice of focus
The reasons for Newsnight's choice of focus were made clear in the introduction to the item by the presenter, Emily Maitlis:
Did Jeremy Corbyn misread the mood of his party in the Commons yesterday when he refused to point the finger at Russia? Last night a group of Labour backbenchers said it unequivocally accepts the Russian state's culpability for the spy poisoning.
Overnight they were joined by senior frontbenchers, who command the defence and foreign affairs briefs. Today, Corbyn clarified, stressing his condemnation of the attack and saying the evidence pointed towards Russia. But he reiterated the need
not to rush ahead of evidence in what he referred to as the fevered atmosphere of Westminster. Is he right to go slowly? Or is more cross-party solidarity called for at a time when a foreign agent appears to be targeting people on British soil?
That is entirely in keeping with an editorial decision made on the basis of sound news judgement. The item which followed consisted of a report by David Grossman on the British left's current and historic attitudes towards Russia, and a studio
discussion whose two participants were both generally supportive of Mr Corbyn, though one of them believed he had missed an opportunity to be "crystal clear" in his condemnation. The ECU saw no grounds for regarding the contents of the
item as less than impartial or fair to Mr Corbyn.
The European Broadcasting Union (EBU) has barred one of China's most popular TV channels from
airing the Eurovision song contest after it censored LGBT elements of the competition.
Mango TV was criticised for blurring rainbow flags and censoring tattoos during Tuesday's first semi-final. It also decided not to air performances by the Irish and Albanian entries.
The EBU said the censorship was not in line with its values of diversity:
It is with regret that we will therefore immediately be terminating our partnership with the broadcaster and they will not be permitted to broadcast the second Semi-Final or the Grand Final.
The Irish entry, Ryan O'Shaughnessy, told the BBC that he welcomed the EBU's decision. He will perform at the final in Lisbon on Saturday with a song about the end of a relationship. He was accompanied by two male dancers during the performance
that was apparently censored by Mango TV.
US lawmakers from both political parties have come together to reintroduce a bill that, if passed, would prohibit the US government from forcing tech product makers to undermine users' safety and security with back-door access.
The bill, known as the Secure Data Act of 2018, was returned to the US House of Representatives by Representatives Zoe Lofgren and Thomas Massie.
The Secure Data Act forbids any government agency from demanding that a manufacturer, developer, or seller of covered products design or alter the security functions in its product or service to allow the surveillance of any user of such product
or service, or to allow the physical search of such product, by any agency. It also prohibits courts from issuing orders to compel access to data.
Covered products include computer hardware, software, or electronic devices made available to the public. The bill makes an exception for telecom companies, which under the 1994 Communications Assistance for Law Enforcement Act (CALEA) would still
have to help law enforcement agencies access their communication networks.
Conan Exiles is a 2018 Norwegian online survival game by Funcom, played from either a first-person or third-person perspective.
Many months ago, windowscentral.com
reported that the American ESRB might give Conan Exiles an AO (Adults Only) rating, which could prevent it from coming to consoles. To avoid this, Funcom had to censor some adult content, such as exposed penises and testicles, for release in countries using ESRB ratings.
A Funcom spokesperson clarified the situation. On consoles, full nudity is only available in PEGI (Albania, Bulgaria, Switzerland, United Kingdom, and more) and USK (Germany) territories. You can activate it by downloading the Nudity add-on, which comes with the game purchase.
Unfortunately, only partial nudity is available in ESRB (Bahamas, Mexico, United Arab Emirates, United States, and more) countries.