Germany passes law requiring social media websites to immediately censor on request
30th June 2017
See article from bbc.co.uk
Social media companies in Germany face fines of up to 50m euros if they fail to remove obviously illegal content in time. From October, Facebook, YouTube, and other sites with more than two million users in Germany must take down posts containing hate speech or other criminal material within 24 hours. Content that is not obviously unlawful must be assessed within seven days. Failure to comply will result in a 5m euro penalty, which could rise to 50m euros depending on the severity of the offence.

Facebook responded in a statement: We believe the best solutions will be found when government, civil society and industry work together and that this law as it stands now will not improve efforts to tackle this important societal problem.

German MPs voted in favour of the Netzwerkdurchsetzungsgesetz (NetzDG) law after months of deliberation, on the last legislative day before the Bundestag's summer break. Opponents responded that the tight time limits are unrealistic and will lead to accidental censorship as technology companies err on the side of caution and delete ambiguous posts to avoid paying penalties. The bill has faced criticism from human rights campaigners. Many of the violations covered by the bill are highly dependent on context, context which platforms are in no position to assess, wrote David Kaye, the UN Special Rapporteur on freedom of expression, in comments submitted to the High Commissioner for Human Rights. He added that the obligations placed upon private companies to regulate and take down content raise concerns with respect to freedom of expression. The law may still be challenged in Brussels, where campaigners have claimed it breaches EU laws.
Open Rights Group comment on the Queen's Speech
23rd June 2017
See article from openrightsgroup.org
There are references to a review of counter-terrorism and a Commission for Countering Extremism which will include Internet-related policies. Although details are lacking, these may contain threats to privacy and free speech. The government has also opted for a "Digital Charter".

Digital Charter

This isn't a Bill, but some kind of policy intervention. Perhaps the Digital Charter will be for companies to voluntarily agree to, or a statement of government preferences. It addresses both unwanted and illegal content or activity online, and the protection of vulnerable people. The work of the CTIRU and the IWF is mentioned as an example of work to remove illegal or extremist content. At this point, it is hard to know exactly what harms will emerge, but pushing enforcement into the hands of private companies is problematic. It means that decisions never involve courts and are not fully transparent and legally accountable.

Counter-terrorism review

There will be a review of counter-terrorism powers. The review includes "working with online companies to reduce and restrict the availability of extremist material online". This appears to be a watered-down version of the Conservative manifesto commitment to place greater responsibility on companies to take down extremist material from their platforms. Google and Facebook have already issued public statements about how they intend to improve the removal of extremist material from their platforms.

Commission for Countering Extremism

A Commission will look at the topic of countering extremism, likely including extremism on the Internet. This appears to be a measure to generate ideas and thinking, which could be a positive approach if it involves considering different approaches, rather than pressing ahead with policies in order to be seen to be doing something. The quality of the Commission will therefore depend on its ability to take a wide range of evidence and assimilate it impartially; it faces a significant challenge in ensuring that fundamental rights are respected in any policies it suggests.

Data Protection Bill

A new Data Protection Bill "will fulfil a manifesto commitment to ensure the UK has a data protection regime that is fit for the 21st century". This will replace the Data Protection Act 1998, which is in any case being superseded as a result of the new General Data Protection Regulation (GDPR) passed by the European Parliament last year. Regulations apply directly, so the GDPR does not need to be 'implemented' in UK law before Brexit. We welcome that (at least parts of) the GDPR will be implemented in primary legislation with a full debate in Parliament. It is not clear whether the text of the GDPR will be brought into this Bill, or whether the Bill will merely supplement it. This appears to be a bill that will at least implement some of the 'derogations' (options) in the GDPR, plus the new rules for law enforcement agencies that came in with the law enforcement-related Directive and have to be applied by EU member states.

The bulk of the important rights are in the GDPR, and cannot be tampered with before Brexit. We welcome the chance to debate the choices, and especially to press for the right of privacy groups to bring complaints directly.
Facebook announces measures to counter terrorist-related content
23rd June 2017
See article from bbc.co.uk. See article from bbc.co.uk.
Facebook is launching a UK initiative to train and fund local organisations it hopes will combat extremism and hate speech. The UK Online Civil Courage Initiative's initial partners include Imams Online and the Jo Cox Foundation. Facebook's chief operating officer, Sheryl Sandberg, said: The recent terror attacks in London and Manchester - like violence anywhere - are absolutely heartbreaking. No-one should have to live in fear of terrorism - and we all have a
part to play in stopping violent extremism from spreading. We know we have more to do - but through our platform, our partners and our community we will continue to learn to keep violence and extremism off Facebook.
Last week Facebook
outlined its technical measures to remove terrorist-related content from its site. The company told the BBC it was using artificial intelligence to spot images, videos and text related to terrorism as well as clusters of fake accounts. Facebook
explained that it was aiming to detect terrorist content immediately as it is posted and before other Facebook users see it. If someone tries to upload a terrorist photo or video, the systems look to see if this matches previous known extremist content
to stop it going up in the first place. A second area is experimenting with AI to understand text that might be advocating terrorism. This is analysing text previously removed for praising or supporting a group such as IS and trying to work out
text-based signals that such content may be terrorist propaganda. The company says it is also using algorithms to detect clusters of accounts or images relating to support for terrorism. This will involve looking for signals such as whether an
account is friends with a high number of accounts that have been disabled for supporting terrorism. The company also says it is working on ways to keep pace with repeat offenders who create accounts just to post terrorist material and look for ways of
circumventing existing systems and controls. Facebook has previously announced it is adding 3,000 employees to review content flagged by users. But it also says that already more than half of the accounts that it removes for supporting terrorism
are ones that it finds itself. Facebook says it has also grown its team of specialists so that it now has 150 people working on counter-terrorism specifically, including academic experts on counterterrorism, former prosecutors, former law
enforcement agents and analysts, and engineers. One of the major challenges in automating the process is the risk of taking down material relating to terrorism but not actually supporting it - such as news articles referring to an IS propaganda
video that might feature its text or images. An image relating to terrorism - such as an IS member waving a flag - can be used to glorify an act in one context or be used as part of a counter-extremism campaign in another.
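Facebook has not published implementation details, but the upload check it describes amounts to a fingerprint lookup against previously removed media, and the cluster signal to a simple ratio over an account's friend graph. The Python sketch below is a minimal illustration under those assumptions: every name in it is hypothetical, and a production system would use perceptual hashes that survive re-encoding and cropping rather than the exact digests used here.

    import hashlib

    # Hypothetical store of fingerprints of media previously removed as
    # terrorist propaganda. Real systems use perceptual hashing so that
    # re-encoded or cropped copies still match; SHA-256 only catches
    # byte-identical re-uploads.
    KNOWN_EXTREMIST_HASHES: set[str] = set()

    def is_known_extremist_media(file_bytes: bytes) -> bool:
        """Match an upload against fingerprints of removed content, so a
        known photo or video can be blocked before anyone sees it."""
        return hashlib.sha256(file_bytes).hexdigest() in KNOWN_EXTREMIST_HASHES

    def account_risk_score(friend_ids: list[str],
                           disabled_for_terrorism: set[str]) -> float:
        """One cluster signal of the kind the article mentions: the
        fraction of an account's friends already disabled for
        supporting terrorism."""
        if not friend_ids:
            return 0.0
        flagged = sum(1 for f in friend_ids if f in disabled_for_terrorism)
        return flagged / len(friend_ids)

In practice a score like this would more plausibly rank accounts for review by Facebook's counter-terrorism specialists than trigger automatic removal, given the context problems the article goes on to describe.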
The Sex Tech Industry Says Facebook and PayPal Are Censoring Its Work

23rd June 2017

See article from venus-adult-news.com
Germany joins chorus of governments wanting an end to a safe and encrypted internet
15th June 2017
See article from dailymail.co.uk
German authorities want the right to look at private messages on services such as WhatsApp to try to prevent terrorism. Ministers have also agreed to lower the age limit for fingerprinting asylum-seeking minors from 14 to six. Ministers from central government and the federal states said encrypted messaging services, such as WhatsApp and Signal, allow militants and criminals to evade traditional surveillance. We can't allow there to be areas that are practically outside the law, interior minister Thomas de Maiziere told reporters. Among the options Germany is considering is source telecom surveillance, where authorities install software on phones to relay messages before they are encrypted. That is currently illegal.

Austria is also planning laws to make it easier to monitor encrypted messages, as well as building out a linked network of cameras and other equipment to read vehicle licence plates. Meanwhile Japan is also introducing mass snooping in the name of the prevention of terrorism. See Japan passes 'brutal' counter-terror law despite fears over civil liberties from theguardian.com
Theresa May hints that she will continue her policies to make the internet less secure from hackers, phishers and thieves
15th June 2017
See press release from openrightsgroup.org
Open Rights Group has responded to Theresa May's post-election hints that she will continue with Conservative plans for Internet clampdowns. Executive Director Jim Killock said: To push on with these extreme
proposals for Internet clampdowns would appear to be a distraction from the current political situation and from effective measures against terror. The Government already has extensive surveillance powers. Conservative proposals
for automated censorship of the Internet would see decisions about what British citizens can see online being placed in the hands of computer algorithms, with judgments ultimately made by private companies rather than courts. Home Office plans to force
companies to weaken the security of their communications products could put all of us at a greater risk of crime. Both of these proposals could result in terrorists and extremists switching to platforms and services that are more
difficult for our law enforcement and intelligence agencies to monitor. Given that the priority for all MPs is how the UK will negotiate Brexit, it will be especially hard to give the time and thought necessary to scrutinise these
proposals. It could be tempting to push ahead in order to restore some of Theresa May's image as a tough leader. This should be resisted. With such a fragile majority, greater consensus will be needed to pass new laws.
We hope that this will mean our parliamentarians will reject reactionary policy-making and look for long-term, effective solutions that directly address the complex causes of terrorism.
Russian internet users fight back against government censorship

15th June 2017

See article from bloomberg.com
YouTube adds new censorship rules to define which videos will be barred from monetisation through advertising
5th June 2017
See article from bbc.co.uk. See article from youtube-creators.googleblog.com.
In response to recent boycotts by high-profile advertisers, YouTube has clarified its censorship rules so that video-makers know which content it considers advertiser-friendly. In a blog post, the video-sharing website said it would not allow adverts to appear alongside hateful or discriminatory content. It will also refuse to place ads next to videos using gratuitously disrespectful language that shames or insults an individual or group. The guidelines also discourage film-makers from making inappropriate parody videos using popular family entertainment characters. YouTube detailed the new censorship rules in its blog post:

Hateful content: Content that promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual's or group's race, ethnicity or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization.

Inappropriate use of family entertainment characters: Content that depicts family entertainment characters engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes.

Incendiary and demeaning content: Content that is gratuitously incendiary, inflammatory, or demeaning. For example, video content that uses gratuitously disrespectful language that shames or insults an individual or group.
However, the announcement has met with some criticism from video-makers. Captain Sauce pointed out that the algorithm used to detect whether a video may contain inappropriate content is not perfect. Eugenia Loli pointed out that mainstream news networks often post inflammatory studio debates that could be judged incendiary and demeaning, while music videos often push the boundaries of sexually explicit content, yet these still carry advertisements, writing: Why punish the little guy, but not the big networks? This is a double standard.