Labour MP tables bill amendment requiring social media companies to take down posts within 24 hours of an official complaint
29th June 2018
See article from theguardian.com
From v3.co.uk
Google, Facebook, YouTube and other sites would be required by law to take down extremist material within 24 hours of receiving an official complaint under an amendment put forward for inclusion in new counter-terror legislation. The Labour MP Stephen
Doughty's amendment echoes censorship laws that came into effect in Germany last year. However, the effect of the German law was to enable no-questions-asked censorship of anything the government doesn't like. Social media companies have no interest in
challenging unfair censorship and find the easiest and cheapest way to comply is to err on the side of the government, and take down anything asked regardless of the merits of the case. The counter-terrorism strategy unveiled by the home
secretary, Sajid Javid, this month, said the Home Office would place a renewed emphasis on engagement with internet providers and work with the tech industry to seek more investment in technologies that automatically identify and remove terrorist content
before it is accessible to all. But Doughty, a member of the home affairs select committee, said his amendment was needed because the voluntary approach was failing. He said a wide variety of extremist content remained online despite repeated
warnings. If these companies can remove copyrighted video or music content from companies like Disney within a matter of hours, there is no excuse for them to fail to do the same for extremist material.
Doughty's amendment would also require tech companies to proactively check content for extremist material and take it down within six hours of it being identified. The proactive check of content alludes to the censorship
machines being introduced by the EU to scan uploads for copyrighted material. The extension to detect terrorist material, coupled with the err-on-the-side-of-caution approach, would inevitably lead to the automatic censorship of any content that even uses the vocabulary of terrorism, regardless of whether it is news reporting, satire or criticism.
Government to introduce a bill to ban upskirting whilst Labour's extreme feminists want to hijack it to ban misogyny and fake porn
22nd June 2018
See article from dailymail.co.uk
A row over Tory MP Christopher Chope blocking a backbench attempt to ban people from taking lewd photographs up women's skirts has spurred the government to adopt the bill. Speaking at PMQs this week, Theresa May confirmed the Government was taking
on the upskirting campaign. She said: Upskirting is a hideous invasion of privacy. It leaves victims feeling degraded and distressed. We will adopt this as a Government
Bill, we will introduce the Bill this Thursday with a second reading before the summer recess. But we are not stopping there. We will also ensure that the most serious offenders are added to the sex offenders register, and victims
will be in no doubt their complaints will be taken very seriously and perpetrators will be punished.
And now extremist feminists are attempting to massively extend the bill to cover their own pet peeves. Feminist campaigner and
academic Clare McGlynn claimed the draft law created an opportunity to tackle so-called deepfake pornography. She said: It would be easy to extend the bill so that it covers images which have been altered too and
clearly criminalise a practice that victims say they find incredibly distressing.
And Labour MP Stella Creasy demanded that misogyny be made a hate crime to ensure the law keeps up with abuse of women. She claimed outlawing hatred of women - by bringing it into line with race and equality laws - would be more effective than one-off bans for offences like upskirting.
UK government publishes its intentions to significantly ramp up internet censorship
8th June 2018
See article from dailymail.co.uk
See Government
response to the internet strategy green paper [pdf] from assets.publishing.service.gov.uk
Who is liable if a user posts copyrighted music to YouTube without authority? Is it the user or is it YouTube? The answer is of course that it is the user who would be held liable should copyright holders seek compensation. YouTube would be held
responsible only if they were informed of the infringement and refused to take it down. This is the practical compromise that lets the internet work. So what would happen if the government changed the liability laws so that YouTube was held liable for unauthorised music as soon as it was posted? There may be millions of views before it is spotted. If YouTube were immediately liable, it might have to pay millions in court judgements. There is a lot of blather about YouTube having magic Artificial Intelligence that can detect copyrighted music and block it before it is uploaded. But this is nonsense: music is copyrighted by default, even a piece that has never been published and is not held in any computer database. YouTube does not have a database that contains all the licensing and authorisation details, or that records who exactly is allowed to post copyrighted material. Even big companies lie, so how could YouTube really know what could be posted and what could not?
If the law were to be changed, and YouTube were held responsible for the copyright infringement of their posters, then the only possible outcome would be for YouTube to use its AI to detect any music at all and block all videos which contain
music. The only music allowed to be published would be from the music companies themselves, and even then after providing YouTube with paperwork to prove that they had the necessary authorisation. So when the government speaks of changes to
liability law they are speaking of a massive step up in internet censorship as the likely outcome. In fact the censorship power of such liability tweaks has been proven in the US. The recently passed FOSTA law changed liability law so that
internet companies are now held liable for user posts facilitating sex trafficking. The law was sold as a 'tweak' just to take action against trafficking. But it resulted in the immediate and almost total internet censorship of all user postings
facilitating adult consensual sex work, and a fair amount of personal small ads and dating services as well. The rub was that sex traffickers do not in any way specify that their sex workers have been trafficked, their adverts are exactly the same
as for adult consensual sex workers. With all the artificial intelligence in the world, there is no way that internet companies can distinguish between the two. When they are told they are liable for sex trafficking adverts, then the only possible
way to comply is to ban all adverts or services that feature anything to do with sex or personal hook ups. Which is of course exactly what happened. So when UK politicians speak of internet liability changes and sex trafficking then they are
talking about big time, large scale internet censorship. And Theresa May said today via a government press release as reported in the Daily Mail: Web giants such as Facebook and Twitter must automatically
remove vile abuse aimed at women, Theresa May will demand today. The Prime Minister will urge companies to utilise the same technology used to take down terrorist propaganda to remove rape threats and harassment.
Speaking at the G7 summit in Quebec, Mrs May will call on firms to do more to tackle content promoting and depicting violence against women and girls, including illegal violent pornography. She will also demand
the automatic removal of adverts that are linked to people-trafficking. May will argue they must ensure women can use the web without fear of online rape threats, harassment, cyberstalking, blackmail or vile comments.
She will say: We know that technology plays a crucial part in advancing gender equality and empowering women and girls, but these benefits are being undermined by vile forms of online violence, abuse and harassment.
What is illegal offline is illegal online and I am calling on world leaders to take serious action to deal with this, just like we are doing in the UK with our commitment to legislate on online harms such as cyber-stalking and
harassment.
In a world that is being ripped apart by identitarian intolerance of everyone else, it seems particularly unfair that men should be expected to happily put up with the fear of online threats, harassment, cyberstalking, blackmail or vile comments. Surely laws should be written so that all people are treated totally equally. Theresa May did not say too much about liability law directly in her press release, but it is laid out pretty clearly in the government
document just published, titled Government response to the internet strategy green paper [pdf]:
Taking responsibility

Platform liability and illegal harms

Online platforms need to take responsibility for the content they host. They need to proactively tackle harmful behaviours and content. Progress has been made in removing illegal content, particularly terrorist material, but more needs to be done to reduce the amount of damaging content online, legal and illegal. We are developing options for increasing the liability online platforms have for illegal content on their services. This includes examining how we can make existing frameworks and definitions work better, as well as what the liability regime should look like in the long run.
Terms and Conditions

Platforms use their terms and conditions to set out key information about who can use the service, what content is acceptable and what action can be taken if users don't comply
with the terms. We know that users frequently break these rules. In such circumstances, the platforms' terms state that they can take action, for example they can remove the offending content or stop providing services to the user. However, we do not see
companies proactively doing this on a routine basis. Too often companies simply do not enforce their own terms and conditions. Government wants companies to set out clear expectations of what is acceptable on their platforms in
their terms, and then enforce these rules using sanctions when necessary. By doing so, companies will be helping users understand what is and isn't acceptable.

Clear Standards

We believe that it is
right for Government to set out clear standards for social media platforms, and to hold them to account if they fail to live up to these. DCMS and Home Office will jointly work on the White Paper which will set out our proposals for forthcoming
legislation. We will focus on proposals which will bring into force real protections for users that will cover both harmful and illegal content and behaviours. In parallel, we are currently assessing legislative options to modify the online liability
regime in the UK, including both the smaller changes consistent with the EU's eCommerce directive, and the larger changes that may be possible when we leave the EU.
Worrying or what?
1st June 2018
Age verification, online safety and a British FOSTA
See article from sexandcensorship.org
New laws to make sure that the UK is the most censored place in the western world to be online
25th May 2018
20th May 2018. See press release from gov.uk
See government response to green paper consultation [pdf] from assets.publishing.service.gov.uk
Culture Secretary Matt Hancock has issued the following press release from the Department for Digital, Culture, Media & Sport:

New laws to make social media safer

New laws will be created to make
sure that the UK is the safest place in the world to be online, Digital Secretary Matt Hancock has announced. The move is part of a series of measures included in the government's response to the Internet Safety Strategy green
paper, published today. The Government has been clear that much more needs to be done to tackle the full range of online harm. Our consultation revealed users feel powerless to address safety issues online
and that technology companies operate without sufficient oversight or transparency. Six in ten people said they had witnessed inappropriate or harmful content online. The Government is already working with social media companies
to protect users and while several of the tech giants have taken important and positive steps, the performance of the industry overall has been mixed. The UK Government will therefore take the lead, working collaboratively with
tech companies, children's charities and other stakeholders to develop the detail of the new legislation. Matt Hancock, DCMS Secretary of State said:
Digital technology is overwhelmingly a force for good across
the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the Internet through legislation, in a way that supports innovation. We strongly support
technology companies to start up and grow, and we want to work with them to keep our citizens safe. People increasingly live their lives through online platforms so it's more important than ever that people are safe and parents
can have confidence they can keep their children from harm. The measures we're taking forward today will help make sure children are protected online and balance the need for safety with the great freedoms the internet brings just as we have to strike
this balance offline.
DCMS and Home Office will jointly work on a White Paper with other government departments, to be published later this year. This will set out legislation to be brought forward that tackles a
range of both legal and illegal harms, from cyberbullying to online child sexual exploitation. The Government will continue to collaborate closely with industry on this work, to ensure it builds on progress already made. Home
Secretary Sajid Javid said: Criminals are using the internet to further their exploitation and abuse of children, while terrorists are abusing these platforms to recruit people and incite atrocities. We need to protect
our communities from these heinous crimes and vile propaganda and that is why this Government has been taking the lead on this issue. But more needs to be done and this is why we will continue to work with the companies and the
public to do everything we can to stop the misuse of these platforms. Only by working together can we defeat those who seek to do us harm.
The Government will be considering where legislation will have the strongest
impact, for example whether transparency or a code of practice should be underwritten by legislation, but also a range of other options to address both legal and illegal harms. We will work closely with industry to provide clarity
on the roles and responsibilities of companies that operate online in the UK to keep users safe. The Government will also work with regulators, platforms and advertising companies to ensure that the principles that govern
advertising in traditional media -- such as preventing companies targeting unsuitable advertisements at children -- also apply and are enforced online.

Update: Fit of pique

21st May 2018. See
article from bbc.com
It seems that the latest call for internet censorship is driven by some sort of revenge for having been snubbed by the industry. The culture secretary said he does not have enough power to police social media firms after admitting only four of 14
invited to talks showed up. Matt Hancock told the BBC it had given him a big impetus to introduce new laws to tackle what he has called the internet's Wild West culture. He said self-policing had not worked and legislation was needed.
He told BBC One's Andrew Marr Show, presented by Emma Barnett, that the government just doesn't know how many of the millions of children using social media were not old enough for an account, and that he was very worried about age verification. He told the programme he hopes we get to a position where all social media users have to have their age verified. Two government departments are working on a White Paper expected to be brought forward later this year. Asked about the same
issue on ITV's Peston on Sunday , Hancock said the government would be legislating in the next couple of years because we want to get the details right. Update: Internet safety just means internet censorship
25th May 2018. See article from spiked-online.com by Fraser Meyers
Officials want to clean up the web. Bad news for free speech.
Music industry is quick to lobby for Hancock's safe internet plans to be hijacked for their benefit
24th May 2018
See article from torrentfreak.com
This week, Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announced the launch of a consultation on new legislative measures to clean up the Wild West elements of the Internet. In response, music group BPI says the government
should use the opportunity to tackle piracy with advanced site-blocking measures, repeat infringer policies, and new responsibilities for service providers. This week, the Government published its response to the Internet Safety Strategy green paper,
stating unequivocally that more needs to be done to tackle online harm. As a result, the Government will now carry through with its threat to introduce new legislation, albeit with the assistance of technology companies, children's charities and other
stakeholders. While emphasis is being placed on hot-button topics such as cyberbullying and online child exploitation, the Government is clear that it wishes to tackle the full range of online harms. That has been greeted by UK music group BPI
with a request that the Government introduces new measures to tackle Internet piracy. In a statement issued this week, BPI chief executive Geoff Taylor welcomed the move towards legislative change and urged the Government to encompass the music
industry and beyond. He said: This is a vital opportunity to protect consumers and boost the UK's music and creative industries. The BPI has long pressed for internet intermediaries and online platforms to take
responsibility for the content that they promote to users. Government should now take the power in legislation to require online giants to take effective, proactive measures to clean illegal content from their sites and services.
This will keep fans away from dodgy sites full of harmful content and prevent criminals from undermining creative businesses that create UK jobs.
The BPI has published four initial requests, each of which provides food for thought.
The demand to establish a new fast-track process for blocking illegal sites is not entirely unexpected, particularly given the expense of launching applications for blocking injunctions at the High Court. The BPI has taken a large number of
actions against individual websites -- 63 injunctions are in place against sites that are wholly or mainly infringing and whose business is simply to profit from criminal activity, the BPI says. Those injunctions can be expanded fairly easily to
include new sites operating under similar banners or facilitating access to those already covered, but it's clear the BPI would like something more streamlined. Voluntary schemes, such as the one in place in Portugal , could be an option but it's unclear
how troublesome that could be for ISPs. New legislation could solve that dilemma, however. Another big thorn in the side for groups like the BPI are people and entities that post infringing content. The BPI is very good at taking these listings
down from sites and search engines in particular (more than 600 million requests to date) but it's a game of whac-a-mole the group would rather not engage in. With that in mind, the BPI would like the Government to impose new rules that would
compel online platforms to stop content from being re-posted after it's been taken down while removing the accounts of repeat infringers. Thirdly, the BPI would like the Government to introduce penalties for online operators who do not provide
transparent contact and ownership information. The music group isn't any more specific than that, but the suggestion is that operators of some sites have a tendency to hide in the shadows, something which frustrates enforcement activity. Finally,
and perhaps most interestingly, the BPI is calling on the Government to legislate for a new duty of care for online intermediaries and platforms. Specifically, the BPI wants effective action taken against businesses that use the Internet to encourage
consumers to access content illegally. While this could easily encompass pirate sites and services themselves, this proposal has the breadth to include a wide range of offenders, from people posting piracy-focused tutorials on monetized YouTube
channels to those selling fully-loaded Kodi devices on eBay or social media. Overall, the BPI clearly wants to place pressure on intermediaries to take action against piracy when they're in a position to do so, and particularly those who may not
have shown much enthusiasm towards industry collaboration in the past. Legislation in this Bill, to take powers to intervene with respect to operators that do not co-operate, would bring focus to the roundtable process and ensure that
intermediaries take their responsibilities seriously, the BPI says.
Top of our concerns was the lack of privacy safeguards to protect the 20 million plus users who will be obliged to use Age Verification tools to access legal content.
8th May 2018
See article from openrightsgroup.org by Jim Killock
We asked the BBFC to tell government that the legislation is not fit for purpose, and that they should halt the scheme until privacy regulation is in place. We pointed out that card payments and email services are both subject to stronger privacy
protections than Age Verification. The government's case for non-action is that the Information Commissioner and data protection fines for data breaches are enough to deal with the risk. This is wrong: firstly because fines cannot
address the harm created by the leaking of people's sexual habits. Secondly, it is wrong because data breaches are only one aspect of the risks involved. We outlined over twenty risks from Age Verification technologies. We pointed
out that Age Verification contains a set of overlapping problems. You can read our list below. We may have missed some: if so, do let us know. The government has to act. It has legislated this requirement without properly
evaluating the privacy impacts. If and when it goes wrong, the blame will lie squarely at the government's door. The consultation fails to properly distinguish between the different functions and stages of an age
verification system. The risks associated with each are separate but interact. Regulation needs to address all elements of these systems. For instance:
- Choosing a method of age verification, whereby a user determines how they wish to prove their age.
- The method of age verification, where documents may be examined and stored.
- The tool's approach to returning users, which may involve the re-use of any age-verified account, log-in or method over time, and across services and sites.
The focus of attention has been on the method of pornography-related age verification, but this is only one element of privacy risk we can identify when considering the system as a whole. Many of the risks stem from the fact that
users may be permanently 'logged in' to websites, for instance. New risks of fraud, abuse of accounts and other unwanted social behaviours can also be identified. These risks apply to 20-25 million adults, as well as to teenagers attempting to bypass the
restrictions. There is a great deal that could potentially go wrong. Business models, user behaviours and potential criminal threats need to be taken into consideration. Risks therefore include: Identity
risks
Risks from logging of porn viewing
- A log-in from an age-verified user may persist on a user's device or web browser, creating a history of views associated with an IP address, location or device, thus easily linked to a person, even if stored 'pseudonymously'.
- An age verified log-in system may track users across websites and be able to correlate tastes and interests of a user visiting sites from many different providers.
- Data from logged-in web visits may be used to profile the sexual preferences of users for advertising. Tool providers may encourage users to opt in to such a service with the promise of incentives such as discounted or free content.
- The current business model for large porn operations is heavily focused on monetising users through advertising, exacerbating the risks of re-use and recirculation and re-identification of web visit data.
- Any data that is leaked cannot be revoked, recalled or adequately compensated for, leading to reputational, career and even suicide risks.
Everyday privacy risks for adults
- The risk of pornographic web accounts and associated histories being accessed by partners, parents, teenagers and other third parties will increase.
- Companies will trade off security for ease-of-use, so may be reluctant to enforce strong passwords, two-factor authentication and other measures which make it harder for credentials to leak or be shared.
- Everyday privacy tools used by millions of UK residents, such as 'private browsing' modes, may become more difficult to use due to the need to retain log-in cookies, increasing the data footprint of people's sexual habits.
- Some users will turn to alternative methods of accessing sites, such as using VPNs. These tools have their own privacy risks, especially when hosted outside of the EU, or when provided for free.
Risks to teenagers' privacy
- If age-verified log-in details are acquired by teenagers, personal and sexual information about them may become shared, including among their peers, such as particular videos viewed. This could lead to bullying, outing or worse.
- Child abusers can use access to age verified accounts as leverage to create and exploit a relationship with a teenager ('grooming').
- Other methods of obtaining pornography would be incentivised, and these may carry new and separate privacy risks. For instance the BitTorrent network exposes the IP addresses of users publicly. These addresses can then be captured by services like GoldenEye, whose business model depends on issuing legal threats to those found downloading copyrighted material. This could lead to the pornographic content downloaded by young adults or teenagers being exposed to parents or carers. While copyright infringement is bad, removing teenagers' sexual privacy is worse. Other risks include viruses and scams.
Trust in age verification tools and potential scams
- Users may be obliged to sign up to services they do not trust or are unfamiliar with in order to access specific websites.
- Pornographic website users are often impulsive, with lower risk thresholds than for other transactions. The sensitivity of any transactions involved gives them a lower propensity to report fraud. Pornography users are therefore particularly vulnerable targets for scammers.
- The use of credit cards for age verification in other markets creates an opportunity for fraudulent sites to engage in credit card theft.
- Use of credit cards for pornography-related age verification risks teaching people that this is normal and reasonable, opening up new opportunities for fraud, and going against years of education asking people not to hand card details to unknown vendors.
- There is no simple means to verify which particular age verification systems are trustworthy, and which may be scams.
Market related privacy risks
- The rush to market means that the tools that emerge may be of variable quality and take unnecessary shortcuts.
- A single pornography-related age verification system may come to dominate the market and become the de-facto provider, leaving users no real choice but to accept whatever terms that provider offers.
- One age verification product which is expected to lead the market -- AgeID -- is owned by MindGeek, the dominant pornography company online. Allowing pornographic sites to own and operate age verification tools leads to a conflict of interest between the privacy interests of the user, and the data-mining and market interests of the company.
- The online pornography industry as a whole, including MindGeek, has a poor record of privacy and security, littered with data breaches. Without stringent regulation prohibiting the storage of data which might allow users' identity and browsing to be correlated, there is no reason to assume that data generated as a result of age verification tools will be exempt from this pattern of poor security.
Nanny Hancock becomes the first government minister for some time to whinge about video games
2nd May 2018
See article from dailymail.co.uk
The Culture Secretary Matt Hancock has warned that addictive video games have a negative and damaging impact on children's lives. The comments have been attributed to the phenomenal success of the survival shooter Fortnite. It has been downloaded
more than 40 million times and has been endorsed by stars such as footballer Dele Alli and rapper Drake. Hancock has also said that too much screen time is damaging to the lives of children. Matt Hancock told The Daily Telegraph: Too much screen time could have a damaging impact on our children's lives. Whether it's social media or video games, children should enjoy them safely and as part of a lifestyle that includes exercise and socialising in the real world. He also confirmed that his department is working alongside game developers to improve online safety. It seems that Hancock is trying to dream up a few ideas designed to support the notion of requiring ID for internet users. Nigel Huddleston, a Tory MP and
parliamentary private secretary to Mr Hancock, also called on gaming companies to take more responsibility over addictive games. He also said he wouldn't want his own 12-year-old son playing the game because of concerns it could lead to addiction.
But of course successive governments have been systematically increasing maximum sentences for minor crimes so that they count as 'serious' crimes
29th April 2018
See article from bbc.com
High Court judges have given the UK government six months to revise parts of its Investigatory Powers Act. The government has been given a deadline of 1 November this year to make the changes to its Snooper's Charter. Rules governing the British
surveillance system must be changed quickly because they are incompatible with European laws, said the judges. The court decision came out of legal action by human rights group Liberty. It started its legal challenge to the Act saying clauses that
allow personal data to be gathered and scrutinised violated citizens' basic rights to privacy. The court did not agree that the Investigatory Powers Act called for a general and indiscriminate retention of data on individuals, as Liberty claimed.
However in late 2017, government ministers accepted that its Act did not align with European law which only allows data to be gathered and accessed for the purposes of tackling serious crime. By contrast, the UK law would see the data gathered and held
for more mundane purposes and without significant oversight. One proposed change to tackle the problems was to create an Office for Communications Data Authorisations that would oversee requests for data from police and other organisations. The government said it planned to revise the law by April 2019 but Friday's ruling means it now has only six months to complete the task.
Martha Spurrier, director of Liberty, said the powers to grab data in the Act put sensitive information at huge risk. Javier Ruiz, policy director at the Open Rights Group which campaigns on digital issues, said:
We are disappointed the court decided to narrowly focus on access to records but did not challenge the general and indiscriminate retention of communications data. |
|
Jeremy Hunt demands that social media companies immediately ban under 13s from using their apps and websites
|
|
|
| 22nd April 2018
|
|
| See article from twitter.com
|
This is so wrong on so many levels. Britain would undergo a mass tantrum. How are parents supposed to entertain their kids if they can't spend all day on YouTube? And what about all the
privacy implications of letting social media companies have complete identity details of their users. It will be like Cambridge Analytica on speed. Jeremy Hunt wrote to the social media companies: Dear Colleagues,
Thank you for participating in the working group on children and young people's mental health and social media with officials from my Department and DCMS. We appreciate your time and engagement, and your willingness to continue discussions and
potentially support a communications campaign in this area, but I am disappointed by the lack of voluntary progress in those discussions. We set three very clear challenges relating to protecting children and young people's mental
health: age verification, screen time limits and cyber-bullying. As I understand it, participants have focused more on promoting work already underway and explaining the challenges with taking further action, rather than offering innovative solutions or
tangible progress. In particular, progress on age verification is not good enough. I am concerned that your companies seem content with a situation where thousands of users breach your own terms and conditions on the minimum user
age. I fear that you are collectively turning a blind eye to a whole generation of children being exposed to the harmful emotional side effects of social media prematurely; this is both morally wrong and deeply unfair on parents, who are faced with the
invidious choice of allowing children to use platforms they are too young to access, or excluding them from social interaction that often the majority of their peers are engaging in. It is unacceptable and irresponsible for you to put parents in this
position. This is not a blanket criticism and I am aware that these aren't easy issues to solve. I am encouraged that a number of you have developed products to help parents control what their children can access online in response
to Government's concerns about child online protection, including Google's Family Link. And I recognise that your products and services are aimed at different audiences, so different solutions will be required. This is clear from the submissions you've
sent to my officials about the work you are delivering to address some of these challenges. However, it is clear to me that the voluntary joint approach has not delivered the safeguards we need to protect our children's mental health. In May, the
Department for Digital, Culture, Media and Sport will publish the Government response to the Internet Safety Strategy consultation, and I will be working with the Secretary of State to explore what other avenues are open to us to
pursue the reforms we need. We will not rule out legislation where it is needed. In terms of immediate next steps, I appreciate the information that you provided our officials with last month but would be grateful if you would set
out in writing your companies' formal responses, on the three challenges we posed in November. In particular, I would like to know what additional new steps you have taken to protect children and young people since November in each of the specific
categories we raised: age verification, screen time limits and cyber-bullying. I invite you to respond by the end of this month, in order to inform the Internet Safety Strategy response. It would also be helpful if you can set out any ideas or further
plans you have to make progress in these areas. During the working group meetings I understand you have pointed to the lack of conclusive evidence in this area — a concern which I also share. In order to address this, I have asked
the Chief Medical Officer to undertake an evidence review on the impact of technology on children and young people's mental health, including on healthy screen time. I will also be working closely with DCMS and UKRI to commission research into all these
questions, to ensure we have the best possible empirical basis on which to make policy. This will inform the Government's approach as we move forwards. Your industry boasts some of the brightest minds and biggest budgets globally.
While these issues may be difficult, I do not believe that solutions on these issues are outside your reach; I do question whether there is sufficient will to reach them. I am keen to work with you to make technology a force for
good in protecting the next generation. However, if you prove unwilling to do so, we will not be deterred from making progress. |
|
It sounds like the government is ramping up the propaganda machine to suggest that people 'want' the creation of a state social media censor
|
|
|
| 19th April 2018
|
|
| See article
from thesun.co.uk |
A survey commissioned by the Royal Society for Public Health has claimed that four in five people want social media firms to be regulated to ensure they do more to protect kids' mental health. Presumably the questions were somewhat designed to favour the
wishes of the campaigners. Some 45% say the sites should be self-regulated with a code of conduct but 36% want rules enforced by Government. The Royal Society for Public Health, which surveyed 2,000 adults, warned social media can cause
significant problems if left unchecked. Health Secretary Jeremy Hunt has previously claimed that social media could pose as great a threat to children's health as smoking and obesity. And he has accused them of developing seductive products aimed
at ever younger children. The survey comes as MPs and Peers today launch an All Party Parliamentary Group (APPG) that will probe the effect of social media on young people's mental health. It will hear evidence over the coming year from users,
experts and industry, with the aim of drawing up practical solutions, including a proposed industry Code of Conduct. Labour MP Chris Elmore will chair the APPG. |
|
The government does not like Russian propaganda channel casting doubt about Salisbury murder attempt and the Syrian chemical weapon attack
|
|
|
|
18th April 2018
|
|
| See article from ofcom.org.uk See
detailed case against RT [pdf] from ofcom.org.uk See
Russia steps up information war with UK -- but Ofcom's fight against RT
misses the point from independent.co.uk |
Ofcom has today opened seven new investigations into the due impartiality of news and current affairs programmes on the RT news channel. The investigations form part of an Ofcom update, published today, into the
licences held by TV Novosti, the company that broadcasts RT. Until recently, TV Novosti's overall compliance record has not been materially out of line with other broadcasters. However, since the events in
Salisbury, we have observed a significant increase in the number of programmes on the RT service that warrant investigation as potential breaches of the Ofcom Broadcasting Code. We will announce the outcome of these investigations
as soon as possible. In relation to our fit and proper duty, we will consider all relevant new evidence, including the outcome of these investigations and the future conduct of the licensee.
|
|
Matt Hancock roasts two execs over Facebook's disdain for privacy protection
|
|
|
| 12th April 2018
|
|
| See article from
dailymail.co.uk |
UK Censorship Culture Secretary Matt Hancock met Facebook executives to warn them the social network is not above law. Hancock told US-based Vice President of Global Policy Management Monika Bickert, and Global Deputy Chief Privacy Officer Stephen
Deadman he would hold their feet to the fire over the privacy of British users. Hancock pressed Facebook on accountability, transparency, micro-targeting and data protection. He also sought assurances that UK citizens' data was no longer at risk
and that Facebook would be giving citizens more control over their data going forward. Following the talks, Hancock said: Social media companies are not above the law and will not be allowed to shirk their
responsibilities to our citizens. We will do what is needed to ensure that people's data is protected and don't rule anything out - that includes further regulation in the future.
|
|
The Government launches a Serious Violence Strategy that will consider further censorship of gang related content in music and videos on social media platforms
|
|
|
| 8th April 2018
|
|
| See article from
gov.uk |
The government has announced a new Offensive Weapons Bill, which will be brought forward within weeks. It will ban the sale of the most dangerous corrosive products to under-18s and introduce restrictions on online sales of knives. It will also make it
illegal to possess certain offensive weapons like zombie knives and knuckle-dusters in private. The government notes that the new legislation will form part of the government's Serious Violence Strategy, which will be launched tomorrow. Along
with other issues the Serious Violence Strategy will examine how social media usage can drive violent crime and focus on building on the progress and relationships made with social media providers and the police to identify where we can take further
preventative action relevant to tackling serious violence. When the strategy is launched tomorrow, the Home Secretary will call on social media companies to do more to tackle gang material hosted on their sites and to make an explicit reference to
not allowing violent gang material including music and video on their platforms. |
|
|