29th November 2018
Fake news prevention presents a huge business opportunity. By Ryan Holmes. See article from business.financialpost.com
Google employees write open letter opposing Google's support for the Chinese internet censorship regime
28th November 2018

See open letter at medium.com
We are Google employees. Google must drop Dragonfly. We are Google employees and we join Amnesty International in calling on Google to cancel project Dragonfly, Google's effort to create a censored search engine for the
Chinese market that enables state surveillance. We are among thousands of employees who have raised our voices for months. International human rights organizations and investigative reporters have also sounded the alarm,
emphasizing serious human rights concerns and repeatedly calling on Google to cancel the project. So far, our leadership's response has been unsatisfactory. Our opposition to Dragonfly is not about China: we object to technologies
that aid the powerful in oppressing the vulnerable, wherever they may be. The Chinese government certainly isn't alone in its readiness to stifle freedom of expression, and to use surveillance to repress dissent. Dragonfly in China would establish a
dangerous precedent at a volatile political moment, one that would make it harder for Google to deny other countries similar concessions. Our company's decision comes as the Chinese government is openly expanding its surveillance
powers and tools of population control. Many of these rely on advanced technologies, and combine online activity, personal records, and mass monitoring to track and profile citizens. Reports are already showing who bears the cost, including Uyghurs,
women's rights advocates, and students. Providing the Chinese government with ready access to user data, as required by Chinese law, would make Google complicit in oppression and human rights abuses. Dragonfly would also enable
censorship and government-directed disinformation, and destabilize the ground truth on which popular deliberation and dissent rely. Given the Chinese government's reported suppression of dissident voices, such controls would likely be used to silence
marginalized people, and favor information that promotes government interests. Many of us accepted employment at Google with the company's values in mind, including its previous position on Chinese censorship and surveillance, and
an understanding that Google was a company willing to place its values above its profits. After a year of disappointments including Project Maven, Dragonfly, and Google's support for abusers, we no longer believe this is the case. This is why we're
taking a stand. We join with Amnesty International in demanding that Google cancel Dragonfly. We also demand that leadership commit to transparency, clear communication, and real accountability. Google is too powerful not to be
held accountable. We deserve to know what we're building and we deserve a say in these significant decisions. Signed by 478 Google employees
Australian parliament passes new law with wide-ranging blocking of copyright-infringing websites
28th November 2018

See article from torrentfreak.com
The Australian Parliament has passed controversial amendments to copyright law. There will now be a tightened site-blocking regime that will tackle mirrors and proxies more effectively, restrict the appearance of blocked sites in Google search, and
introduce the possibility of blocking dual-use cyberlocker type sites. Section 115a of Australia's Copyright Act allows copyright holders to apply for injunctions to force ISPs to prevent subscribers from accessing pirate sites. While rightsholders
say that it's been effective to a point, they have lobbied hard for improvements. The resulting Copyright Amendment (Online Infringement) Bill 2018 contained proposals to close the loopholes. After receiving endorsement from the Senate earlier
this week, the legislation was today approved by Parliament. Once the legislation comes into force, proxy and mirror sites that appear after an injunction against a pirate site has been granted can be blocked by ISPs without the parties having to
return to court. Assurances have been given, however, that the court will retain some oversight. Search engines, such as Google and Bing, will also be affected. Accused of providing backdoor access to sites that have already been blocked, search
providers will now have to remove or demote links to overseas-based infringing sites, along with their proxies and mirrors. The Australian Government will review the effectiveness of the new amendments in two years' time.
Russia considers increasing fines as Google refuses to comply with Russia's list of banned websites
26th November 2018

See article from theverge.com
Russia's state censors have formally accused Google of breaking the law by not removing links to websites that are banned in the country. Roskomnadzor, the state communications censor, said in a statement that the company had not connected to a
database of banned sources in the country, leaving it out of compliance. The potential penalty that Google could face is currently 700,000 roubles, or about $10,000. But Reuters reports that the Russian government has been considering more drastic
actions, including fining companies up to 1 percent of annual revenue for failing to comply with similar laws.

26th November 2018
Beyond the massive technical challenge, filters are a lazy alternative to effective sex education. By Lux Alptraum. See article from theverge.com
Twitter outlaws misgendering or deadnaming of trans people
25th November 2018

See article from thegayuk.com
See censorship rules from help.twitter.com
See response from The Britisher from youtu.be
Deadnaming and misgendering could now get you a suspension from Twitter as it looks to shore up its safeguarding policy for people in the protected transgender category. Twitter's recently updated censorship policy now reads:
Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone: We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.

According to the Oxford English Dictionary, misgendering means: Refer to (someone, especially a transgender person) using a word, especially a pronoun or form of address, that does not correctly reflect the gender with which they identify.
According to thegayuk.com:
Deadnaming is when a person refers to someone by a previous name; it could be done with malice or by accident. It mostly affects transgender people who have changed their name during their transition.
The EU is proposing new legislation to stop the big internet companies from snooping on our messaging. The IWF is opposing this as they will lose leads about child abuse
25th November 2018

See article from iwf.org.uk
The IWF writes: The Internet Watch Foundation (IWF) calls on the European Commission to reconsider proposed legislation on E-Privacy. This is important because if the proposal is enshrined in law, it will potentially have a direct
impact on the tech companies' ability to scan their networks for illegal online child sexual abuse images and videos. Under Article 5 of the proposed E-Privacy legislation, people would have more control over their personal data.
As currently drafted, Article 5 proposes that tech companies would require the consent of the end user (for example, the person receiving an email or message), to scan their networks for known child sexual abuse content. Put simply, this would mean that
unless an offender agreed for their communications to be scanned, technology companies would no longer be able to do that. Susie Hargreaves of the IWF says: At a time when IWF are taking down
more images and videos of child sexual abuse, we are deeply concerned by this move. Essentially, this proposed new law could put the privacy rights of offenders, ahead of the rights of children - children who have been unfortunate enough to be the victim
of child sexual abuse and who have had the imagery of their suffering shared online. We believe that tech companies' ability to scan their networks, using PhotoDNA and other forms of technology, for known child sexual abuse
content, is vital to the battle to rid the internet of this disturbing material. It is remarkable that the EU is pursuing this particular detail in new legislation, which would effectively enhance the rights of possible
'offenders', at a time when the UK Home Secretary is calling on tech companies to do more to protect children from these crimes. The only way to stop this ill-considered action, is for national governments to call for amendments to the legislation,
before it's too late. This is what is in the best interests of the child victims of this abhorrent crime.
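The scanning the IWF refers to is, at its core, hash matching: each image passing through a service is reduced to a fingerprint and compared against a list of fingerprints of already-identified abuse imagery. PhotoDNA itself is proprietary and uses perceptual hashes that survive resizing and re-encoding; the sketch below is only a minimal illustration of the matching pattern, with made-up fingerprint values and a plain cryptographic hash standing in for the real fingerprint.

```python
import hashlib
from pathlib import Path

# Hypothetical list of fingerprints of already-identified abuse imagery, as
# supplied to platforms by hotlines such as the IWF. Real deployments use
# perceptual hashes (e.g. PhotoDNA) that survive resizing and re-encoding;
# a plain SHA-256 only matches byte-identical copies and is used here purely
# to illustrate the matching pattern.
KNOWN_FINGERPRINTS = {
    "example-fingerprint-1",
    "example-fingerprint-2",
}

def fingerprint(image_path: Path) -> str:
    """Stand-in for a perceptual image fingerprint."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def should_block(image_path: Path) -> bool:
    """True if the image matches the known-content list and should be blocked and reported."""
    return fingerprint(image_path) in KNOWN_FINGERPRINTS
```

The E-Privacy concern above is about consent for running exactly this kind of check over private messages, not about how the check itself works.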
French Parliament passes law allowing the immediate censorship of anything claimed to be 'fake news' during elections
23rd November 2018

See article from euronews.com
France's parliament has passed a new law empowering judges to order the immediate censorship of 'fake news' during election campaigns. The law, conceived by President Emmanuel Macron, was rejected twice by the senate before being passed by the
parliament on Tuesday. It is considered western Europe's first attempt to officially ban material claimed to be fake. Candidates and political parties will now be able to appeal to a judge to censor information claimed to be false during the three
months before an election. The law also allows the CSA, the French national TV censor, to suspend television channels controlled by a foreign state or under the influence of that state if they deliberately disseminate false information claimed
likely to affect the ballot. The law also states that users must be provided with information that is fair, clear and transparent on how their personal data is being used.
Further demonstrating how dangerous it is for the government to demand that identity information is handed over before viewers can access the adult web
21st November 2018

See article from bbc.com
The website of an adult video game featuring sexualised animals has been hacked, with the information of nearly half a million subscribers stolen. High Tail Hall is a customisable role-playing game, which features what the website describes as
sexy furry characters, including buxom zebras and scantily clad lionesses. The compromised information, including email addresses, names and order histories, resurfaced on a popular hacking forum a few months later. HTH Studio has acknowledged the
breach and says that it has been fixed. The company added: Both our internal security and web team security assures us that no financial data was compromised. The security of our users is the highest priority. It further recommended that all users change their passwords.

So although credit card data is safe, users are still at risk from identity fraud, outing and blackmail. It is the latest in a long series of hacks aimed at adult sites and demonstrates the dangers for UK porn viewers when they are forced to supply identity information to be able to browse the adult web.
The British establishment is still clutching at straws claiming it is fake news that has set the people against it, and nothing to do with it treating the people like shit
19th November 2018

See article from telegraph.co.uk
The likes of Facebook and Twitter should fund the creation of a new UK internet censor to police fake news, censorship campaigners have claimed. Sounding like a religious morality campaign, the LSE Commission on Truth, Trust and Technology, a group made up of MPs, academics and industry, also proposed that the Government should scrap plans to hand fresh powers to existing censors such as Ofcom and the Information Commissioner. The campaigners argue for the creation of a new body to monitor the effectiveness of technology companies' self-regulation. The body, which would be called the Independent Platform Agency, would provide a permanent forum for monitoring and censoring the behaviour of online sites and produce an annual
review of the state of disinformation, the group said. Damian Tambini, adviser to the LSE commission and associate professor in LSE's department of media and communications, claimed: Parliament, led by the
Government, must take action to ensure that we have the information and institutions we need to respond to the information crisis. If we fail to build transparency and trust through independent institutions we could see the creeping securitisation of our
media system.
Bangladesh High Court orders the censorship of all internet porn websites for 6 months
19th November 2018

See article from thedailystar.net
The Bangladesh High Court has ordered the country's government to block all pornography websites and publication of all obscene materials from the internet for the next six months. The court also ordered the authorities concerned to explain in four
weeks why pornography websites and publication of obscene materials should not be declared illegal. The judges issued the orders in response to a writ petition filed by Law and Life Foundation, campaigning for internet censorship.
So how does China manage to delete Twitter posts it does not like?
19th November 2018

See article from thestar.com.my
Despite being blocked in China, Twitter and other overseas social media sites have long been used freely by Chinese activists and government critics to speak about otherwise censored topics...until now. China is now extending its reach to
foreign sites outside of its borders. Chinese authorities have launched a stealth crackdown over the past year. Chinese activists and other Twitter users say they have been pressured by police to delete sensitive tweets. In some cases, Chinese
authorities are getting access to delete accounts themselves. Last Friday, Cao reported that the Twitter account of Wu Gan, a Chinese activist sentenced last December to eight years in prison for subversion, had been suddenly deleted -- erasing
more than 30,000 posts representing years of political critique and commentary. He was taken in by police over tweets critical of the Communist Party. After being held at a police station overnight, the user was made to hand over login information and
watch police delete the tweets.
A Facebook Blueprint for Content Governance and Enforcement. By Mark Zuckerberg
16th November 2018

See article from facebook.com
Mark Zuckerberg has been publishing a series of articles addressing the most important issues facing Facebook. This is the second in the series. Here are a few selected extracts:

Community Standards

The team responsible for setting these policies is global -- based in more than 10 offices across six countries to reflect the different cultural norms of our community. Many of them have devoted their careers to issues like child safety, hate speech, and terrorism, including as human rights lawyers or criminal prosecutors.
Our policy process involves regularly getting input from outside experts and organizations to ensure we understand the different perspectives that exist on free expression and safety, as well as the impacts of our policies on
different communities globally. Every few weeks, the team runs a meeting to discuss potential changes to our policies based on new research or data. For each change the team gets outside input -- and we've also invited academics and journalists to join
this meeting to understand this process. Starting today, we will also publish minutes of these meetings to increase transparency and accountability. The team responsible for enforcing these policies is made up of around 30,000
people, including content reviewers who speak almost every language widely used in the world. We have offices in many time zones to ensure we can respond to reports quickly. We invest heavily in training and support for every person and team. In total,
they review more than two million pieces of content every day. We issue a transparency report with a more detailed breakdown of the content we take down. For most of our history, the content review process has been very reactive
and manual -- with people reporting content they have found problematic, and then our team reviewing that content. This approach has enabled us to remove a lot of harmful content, but it has major limits in that we can't remove harmful content before
people see it, or that people do not report. Accuracy is also an important issue. Our reviewers work hard to enforce our policies, but many of the judgements require nuance and exceptions. For example, our Community Standards
prohibit most nudity, but we make an exception for imagery that is historically significant. We don't allow the sale of regulated goods like firearms, but it can be hard to distinguish those from images of paintball or toy guns. As you get into hate
speech and bullying, linguistic nuances get even harder -- like understanding when someone is condemning a racial slur as opposed to using it to attack others. On top of these issues, while computers are consistent at highly repetitive tasks, people are
not always as consistent in their judgements. The vast majority of mistakes we make are due to errors enforcing the nuances of our policies rather than disagreements about what those policies should actually be. Today, depending
on the type of content, our review teams make the wrong call in more than 1 out of every 10 cases.

Proactively Identifying Harmful Content

The single most important improvement in enforcing our
policies is using artificial intelligence to proactively report potentially problematic content to our team of reviewers, and in some cases to take action on the content automatically as well. This approach helps us identify and
remove a much larger percent of the harmful content -- and we can often remove it faster, before anyone even sees it rather than waiting until it has been reported. Moving from reactive to proactive handling of content at scale
has only started to become possible recently because of advances in artificial intelligence -- and because of the multi-billion dollar annual investments we can now fund. To be clear, the state of the art in AI is still not sufficient to handle these
challenges on its own. So we use computers for what they're good at -- making basic judgements on large amounts of content quickly -- and we rely on people for making more complex and nuanced judgements that require deeper expertise.
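The division of labour described here -- computers for fast, repetitive judgements, people for nuance -- amounts to a triage pipeline: a classifier scores each piece of content, the clearest cases are actioned automatically, and the uncertain middle band is queued for human reviewers. The sketch below is purely illustrative; the thresholds, names and model call are assumptions, not Facebook's actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds only -- Facebook does not publish its real ones.
AUTO_ACTION_THRESHOLD = 0.98   # model is near-certain the content violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain middle band goes to trained reviewers

@dataclass
class Post:
    post_id: str
    text: str

def violation_score(post: Post) -> float:
    """Stand-in for a trained classifier returning P(policy violation)."""
    raise NotImplementedError  # hypothetical model call

def triage(post: Post) -> str:
    """Route a post: act automatically, queue for human review, or leave up."""
    score = violation_score(post)
    if score >= AUTO_ACTION_THRESHOLD:
        return "remove"        # clear-cut case, actioned automatically
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # nuanced judgement needed
    return "allow"
```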
In training our AI systems, we've generally prioritized proactively detecting content related to the most real world harm. For example, we prioritized removing terrorist content -- and now 99% of the terrorist content we remove is
flagged by our systems before anyone on our services reports it to us. We currently have a team of more than 200 people working on counter-terrorism specifically. Some categories of harmful content are easier for AI to identify,
and in others it takes more time to train our systems. For example, visual problems, like identifying nudity, are often easier than nuanced linguistic challenges, like hate speech. Our systems already proactively identify 96% of the nudity we take down,
up from just close to zero a few years ago. We are also making progress on hate speech, now with 52% identified proactively. This work will require further advances in technology as well as hiring more language experts to get to the levels we need.
In the past year, we have prioritized identifying people and content related to spreading hate in countries with crises like Myanmar. We were too slow to get started here, but in the third quarter of 2018, we proactively identified
about 63% of the hate speech we removed in Myanmar, up from just 13% in the last quarter of 2017. This is the result of investments we've made in both technology and people. By the end of this year, we will have at least 100 Burmese language experts
reviewing content. Discouraging Borderline Content
One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple
of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services. Our research suggests that no matter where we
draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average -- even when they tell us afterwards they don't like the content. This is a basic incentive problem that
we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating
provocative content that is as close to the line as possible. The category we're most focused on is click-bait and misinformation. People consistently tell us these types of content make our services worse -- even though they
engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality. Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don't come within our definition of hate speech but are still offensive.
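Mechanically, the penalty Zuckerberg describes can be pictured as scaling down the ranking score a post would otherwise receive by a factor that grows as a classifier's estimate of "closeness to the line" approaches 1, so predicted distribution falls rather than rises for near-the-line content. The toy sketch below illustrates that shape only; the function, parameters and numbers are hypothetical and are not Facebook's ranking formula.

```python
def demoted_rank(base_rank: float, borderline_score: float, strength: float = 3.0) -> float:
    """
    Scale a post's distribution down as it approaches the policy line.

    base_rank        -- the engagement-based ranking score the post would otherwise get
    borderline_score -- a classifier's estimate, in [0, 1], of how close the post is to violating policy
    strength         -- hypothetical knob controlling how sharply near-the-line content is demoted
    """
    penalty = borderline_score ** strength   # negligible for benign posts, steep near the line
    return base_rank * (1.0 - penalty)

# A mildly suggestive post keeps most of its reach; one sitting on the line loses most of it.
print(demoted_rank(100.0, 0.2))    # ~99.2
print(demoted_rank(100.0, 0.95))   # ~14.3
```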
This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general
encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.
One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it's important to remember that won't address the underlying
incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content.

Building an Appeals Process
Any system that operates at scale will make errors, so how we handle those errors is important. This matters both for ensuring we're not mistakenly stifling people's voices or failing to keep people safe, and also for building a sense
of legitimacy in the way we handle enforcement and community governance. We began rolling out our content appeals process this year. We started by allowing you to appeal decisions that resulted in your content being taken down.
Next we're working to expand this so you can appeal any decision on a report you filed as well. We're also working to provide more transparency into how policies were either violated or not. ...Read the full
article from facebook.com
DCMS minister Margot James informs parliamentary committee of government thoughts on online digital ID
16th November 2018

See article from data.parliament.uk
Digital ID was discussed by the Commons Science and Technology Committee on 13th November 2018. Carol Monaghan Committee Member: At the moment, platforms such as Facebook require age verification, but that simply means
entering a date of birth, and children can change that. If you are planning to extend that, or look at how it might apply to other social media, how confident are you that the age verification processes would be robust enough to cope?
Margot James MP, Minister for Digital and the Creative Industries: At the moment, I do not think that we would be, but age verification tools and techniques are developing at pace, and we keep abreast of developments. At the
moment, we think we have a robust means by which to verify people's age at 18; the challenge is to develop tools that can verify people's age at a younger age, such as 13. Those techniques are not robust enough yet, but a lot of technological research
is going on, and I am reasonably confident that, over the next few years, there will be robust means by which to identify age at younger than 18. Stephen Metcalfe Committee Member: My question is on the same point about how
we can create a verification system that you cannot just get around by putting in a fake date of birth. I assume that the verification for 18-plus is based around some sort of credit card, or some sort of bank card. The issue there is that,
potentially, someone could borrow another person's card, because it does not require secret information--it requires just the entering of the 16-digit number, or something. But on the younger ages, given that we are talking about digital life and digital
literacy, do you think that the time has come to talk about having a digital verified ID that young people get and which you cannot fiddle with--a bit like an online ID card, or digital passport? I know that that idea has been around a little while.
Margot James: It has. I do think that the time has come when that is required, but there are considerable hoops to go through before we can arrive at a system of digital identity, including someone's age, that is acknowledged,
respected and entered into by the vast majority of people. As you probably know, the Government have committed in prior years to the Verify system, which we think has got as far as it can go, which is not far enough. We have a team of excellent policy
officials in the DCMS looking afresh at other techniques of digital identity. It is a live issue and there have been many attempts at it; there is frustration, and not everybody would agree with what I have said. But you asked my view, and that is
it--and the Department is focusing a lot of energy on that area of research. Chair: Can you imagine that your legislation, when it comes, could include the concept, to which Stephen referred, of a digital identity for
children? Margot James: That is a long way off--or it is not next year, and probably not the year after, given how much consultation it would require. The new work has only just started, so it is not a short-term solution,
and I do not expect to see it as part of our White Paper that we publish this winter. That does not mean to say that we do not think that it is important; we are working towards getting a system that we think could have public support.
To go slightly beyond the terms of your inquiry, with regard to the potential for delivering a proper digital relationship between citizen and G overnment through delivery of public services, a digital identity system will be
important. We feel that public service delivery has a huge amount to gain from the digital solution. Bill Grant Committee Member:: I am pleased to note that the Government are addressing issues that have been with us for
nearly a decade--the dark side of social media and the risk to children, not least the risk that we all experience as parliamentarians. Can you offer any reason why it has taken so long for Government to begin that process? Would you be minded to
accelerate the process to address the belated start? Margot James: One reason is that progress has been made by working with technology companies. The Home Office has had considerable success in working with technology
companies to eradicate terrorist content online. To a lesser but still significant extent, progress has also been made on a voluntary basis with the reduction in child abuse images and child sexual exploitation. I said "significant," but this
is a Home Office area--I am working closely with the Home Office, because the White Paper is being developed in concert with it--and it is clear that it does not feel that anything like enough is being done through voluntary measures.
Chair: Do you feel that? Margot James: Yes, I do. A lot of the highly dangerous material has gone under the radar in the dark web, but too much material is still available, apparently, on various
platforms, and it takes them too long to remove it. Chair: Ultimately, the voluntary approach is not working adequately. Margot James: Exactly--that is our view now. I was trying to address
the hon. Member's question about why it had taken a long time. Partly it is that technology changes very fast, but, partly, it is because voluntary engagement was delivering, but it has impressed itself on us in the last 12 months that it is not
delivering fast enough or adequately. We have not even talked about the vast range of other harms, some of which are illegal and some legal but harmful, and some in the grey area in between, where decidedly inadequate progress has been made as a result
of the many instances of voluntary engagement, not just between the Government and the technology sector but between charitable organisations and non-governmental organisations, including the police. Bill Grant: It was
envisaged earlier that there would be some sort of regulator or ombudsman, but, over and above that, Martha Lane Fox's think-tank proposed the establishment of an office for responsible technology, which would be overarching, in whatever form the
regulation comes. Would you be minded to take that on board? Margot James: That is one proposal that we will certainly look at, yes. Martha Lane Fox does a lot of very good work in this area, has many years' experience of
it, and runs a very good organisation in the "tech for good" environment, so her proposals are well worth consideration. That is one reason why I was unable to give a specific answer earlier, because there are good ideas, and they all need
proper evaluation. When the White Paper is published, we will engage with you and any other interested party, and invite other organisations to contribute to our thinking, prior to the final legislation being put before Parliament and firming up the
non-legislative measures, which are crucial. We all know that legislation does not solve every ill, and it is crucial that we continue the very good work being done by many internet companies to improve the overall environment.

16th November 2018
Why are left-wingers demanding that Silicon Valley police political opinions? By Fraser Myers. See article from spiked-online.com

16th November 2018
Detailed report on Internet censorship laws in South Korea. See article from lawless.tech
DCMS minister Margot James informs parliamentary committee of the schedule for the age verification internet porn censorship regime
15th November 2018

See article from data.parliament.uk
Age Verification and adult internet censorship was discussed by the Commons Science and Technology Committee on 13th November 2018. Carol Monaghan Committee Member: The Digital Economy Act made it compulsory for commercial
pornography sites to undertake age verification, but implementation has been subject to ongoing delays. When do we expect it to go live? Margot James MP, Minister for Digital and the Creative Industries: We can expect it to
be in force by Easter next year. I make that timetable in the knowledge that we have laid the necessary secondary legislation before Parliament. I am hopeful of getting a slot to debate it before Christmas, before the end of the year. We have always said
that we will permit the industry three months to get up to speed with the practicalities and delivering the age verification that it will be required to deliver by law. We have also had to set up the regulator--well, not to set it up, but to establish
with the British Board of Film Classification , which has been the regulator, exactly how it will work. It has had to consult on the methods of age verification, so it has taken longer than I would have liked, but I would balance that with a confidence
that we have got it right. Carol Monaghan: Are you confident that the commercial pornography companies are going to engage fully and will implement the law as you hope? Margot James: I am
certainly confident on the majority of large commercial pornography websites and platforms being compliant with the law. They have engaged well with the BBFC and the Department , and want to be on the right side of the law. I have confidence, but I am
wary of being 100% confident, because there are always smaller and more underground platforms and sites that will seek ways around the law. At least, that is usually the case. We will be on the lookout for that, and so will the BBFC. But the vast
majority of organisations have indicated that they are keen to comply with the legislation. Carol Monaghan: One concern that we all have is that children can stumble across pornography. We know that on social media
platforms, where children are often active, up to a third of their content can be pornographic, but they fall outside the age verification regulation because it is only a third and not the majority. Is that likely to undermine the law? Ultimately the
law, as it stands, is there to safeguard our children. Margot James: I acknowledge that that is a weakness in the legislative solution. I do not think that for many mainstream social media platforms as much as a third of
their content is pornographic, but it is well known that certain social media platforms that many people use regularly have pornography freely available. We have decided to start with the commercial operations while we bring in the age verification
techniques that have not been widely used to date. But we will keep a watching brief on how effective those age verification procedures turn out to be with commercial providers and will keep a close eye on how social media platforms develop in terms of
the extent of pornographic material, particularly if they are platforms that appeal to children--not all are. You point to a legitimate weakness, on which we have a close eye.
Julia Reda outlines amendments to censorship machines and link tax as the upcoming internet censorship law gets discussed by the real bosses of the EU
15th November 2018

See article from juliareda.eu
The closed-door trilogue efforts to finalise the EU Copyright Directive continue. The Presidency of the Council, currently held by Austria, has now circulated among the EU member state governments a new proposal for a compromise between the differing
drafts currently on the table for the controversial Articles 11 and 13. Under this latest proposal, both upload filters and the link tax would be here to stay -- with some changes for the better, and others for the worse.
Upload filters / Censorship machines

Let's recall: In its final position, the European Parliament had tried its utmost to avoid specifically mentioning upload filters, in order to avoid the massive public
criticism of that measure. The text they ended up with, however, was even worse: It would make online platforms inescapably liable for any and all copyright infringement by their users, no matter what action they take. Not even the strictest upload
filter in the world could possibly hope to catch 100% of unlicensed content. This is what prompted YouTube's latest lobbying efforts in favour of upload filters and against the EP's proposal of inescapable liability. Many have
mistaken this as lobbying against Article 13 as a whole -- it is not. In Monday's Financial Times, YouTube spelled out that they would be quite happy with a law that forces everyone else to build (or, presumably, license from them) what they already have
in place: Upload filters like Content ID. In this latest draft, the Council Presidency sides with YouTube, going back to rather explicitly prescribing upload filters. The Council proposes two alternative options on how to phrase
that requirement, but they match in effect: Platforms are liable for all copyright infringements committed by their users, EXCEPT if they cooperate with rightholders by implementing effective and proportionate steps to prevent works they've been informed about from ever going online -- and determining which steps those are must take into account suitable and effective technologies. Under this text, wherever upload filters are possible, they must be implemented: All your uploads will require prior approval by error-prone copyright bots.
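In practice, "preventing works they've been informed about from ever going online" means matching every upload, before publication, against a database of reference fingerprints supplied by rightsholders, which is what YouTube's Content ID already does. The sketch below illustrates that gatekeeping pattern in simplified form; the data structures and names are hypothetical, and real systems use perceptual audio/video fingerprints rather than exact string matches.

```python
from dataclasses import dataclass

@dataclass
class ReferenceWork:
    title: str
    fingerprint: str   # reference fingerprint supplied by the rightsholder

# Hypothetical database of works the platform has been "informed about".
REFERENCE_DB = [
    ReferenceWork("Example Movie Trailer", "fp-1234"),
    ReferenceWork("Example Pop Song", "fp-5678"),
]

def fingerprint_upload(media: bytes) -> str:
    """Stand-in for an audio/video fingerprinting step (Content ID style)."""
    raise NotImplementedError  # real systems use robust perceptual fingerprints

def gate_upload(media: bytes) -> str:
    """Decide, before publication, whether an upload matches a reference work."""
    fp = fingerprint_upload(media)
    for work in REFERENCE_DB:
        if fp == work.fingerprint:
            return f"blocked: matches '{work.title}'"   # prior approval denied
    return "published"
```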
On the good side, the Council Presidency seems open to adopting the Parliament's exception for platforms run by small and micro businesses . It also takes on board the EP's better-worded exception for open source code sharing
platforms like GitHub. On the bad side, Council rejects Parliament's efforts for a stronger complaint mechanism requiring reviews by humans and an independent conflict resolution body. Instead it takes on board the EP's insistence
that licenses taken out by a platform don't even have to necessarily cover uses of these works by the users of that platform. So, for example, even if YouTube takes out a license to show a movie trailer, that license could still prevent you as an
individual YouTuber from using that trailer in your own uploads.

Article 11 Link tax

On the link tax, the Council is mostly sticking to its position: It wants the requirement to license even short
snippets of news articles to last for one year after an article's publication, rather than five, as the Parliament proposed. In a positive development, the Council Presidency adopts the EP's clarification that at least the facts
included in news articles as such should not be protected. So a journalist would be allowed to report on what they read in another news article, in their own words. Council fails to clearly exclude hyperlinks -- even those that
aren't accompanied by snippets from the article. It's not uncommon for the URLs of news articles themselves to include the article's headline. While the Council wants to exclude insubstantial parts of articles from requiring a license, it's not certain
that headlines count as insubstantial. (The Council's clause allowing acts of hyperlinking when they do not constitute communication to the public would not apply to such cases, since reproducing the headline would in fact constitute such a communication
to the public.) The Council continues to want the right to only apply to EU-based news sources -- which could in effect mean fewer links and listings in search engines, social networks and aggregators for European sites, putting
them at a global disadvantage. However, it also proposes spelling out that news sites may give out free licenses if they so choose -- contrary to the Parliament, which stated that listing an article in a search engine should not
be considered sufficient payment for reproducing snippets from it.
Facebook will train up French censors in the art of taking down content deemed harmful
15th November 2018

See article from techdirt.com
The French President, Emmanuel Macron has announced a plan to effectively embed French state censors with Facebook to learn more about how to better censor the platform. He announced a six-month partnership with Facebook aimed at figuring out how the
European country should police hate speech on the social network. As part of the cooperation both sides plan to meet regularly between now and May, when the European election is due to be held. They will focus on how the French government and
Facebook can work together to censor content deemed 'harmful'. Facebook explained: It's a pilot program of a more structured engagement with the French government so that both sides can better understand the other's
challenges in dealing with the issue of hate speech online. The program will allow a team of regulators, chosen by the Elysee, to familiarize [itself] with the tools and processes set up by Facebook to fight against hate speech. The working group will
not be based in one location but will travel to different Facebook facilities around the world, with likely visits to Dublin and California. The purpose of this program is to enable regulators to better understand Facebook's tools and policies to combat
hate speech and, for Facebook, to better understand the needs of regulators.
The Lords discuss when age verification internet censorship will start
13th November 2018

See article from theyworkforyou.com
Pornographic Websites: Age Verification - Question, House of Lords, 5th November 2018. Baroness Benjamin Liberal Democrat To ask Her Majesty's Government what
will be the commencement date for their plans to ensure that age-verification to prevent children accessing pornographic websites is implemented by the British Board of Film Classification. Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport My Lords, we are now in the final stages of the process, and we have laid the BBFC's draft guidance and the Online Pornography (Commercial Basis)
Regulations before Parliament for approval. We will ensure that there is a sufficient period following parliamentary approval for the public and the industry to prepare for age verification. Once parliamentary proceedings have concluded, we will set a
date by which commercial pornography websites will need to be compliant, following an implementation window. We expect that this date will be early in the new year. Baroness Benjamin I thank the
Minister for his Answer. I cannot wait for that date to happen, but does he share my disgust and horror that social media companies such as Twitter state that their minimum age for membership is 13 yet make no attempt to restrict some of the most gross
forms of pornography being exchanged via their platforms? Unfortunately, the Digital Economy Act does not affect these companies because they are not predominantly commercial porn publishers. Does he agree that the BBFC needs to develop mechanisms to
evaluate the effectiveness of the legislation for restricting children's access to pornography via social media sites and put a stop to this unacceptable behaviour? Lord Ashton of Hyde My Lords, I
agree that there are areas of concern on social media sites. As the noble Baroness rightly says, they are not covered by the Digital Economy Act . We had many hours of discussion about that in this House. However, she will be aware that we are producing
an online harms White Paper in the winter in which some of these issues will be considered. If necessary, legislation will be brought forward to address these, and not only these but other harms too. I agree that the BBFC should find out about the
effectiveness of the limited amount that age verification can do; it will commission research on that. Also, the Digital Economy Act itself made sure that the Secretary of State must review its effectiveness within 12 to 18 months.
Lord Griffiths of Burry Port Opposition Whip (Lords), Shadow Spokesperson (Digital, Culture, Media and Sport), Shadow Spokesperson (Wales) My Lords, once again I find this issue raising a dynamic that we
became familiar with in the only too recent past. The Government are to be congratulated on getting the Act on to the statute book and, indeed, on taking measures to identify a regulator as well as to indicate that secondary legislation will be brought
forward to implement a number of the provisions of the Act. My worry is that, under one section of the Digital Economy Act, financial penalties can be imposed on those who infringe this need; the Government seem to have decided not to bring that provision into force at this time. I believe I can anticipate the Minister's answer but--in view of the little drama we had last week over fixed-odds betting machines--we would not want the Government, having won our applause in this way, to slip back
into putting things off or modifying things away from the position that we had all agreed we wanted. Lord Ashton of Hyde My Lords, I completely understand where the noble Lord is coming from but what
he said is not quite right. The Digital Economy Act included a power that the Government could bring enforcement with financial penalties through a regulator. However, they decided--and this House decided--not to use that for the time being. For the
moment, the regulator will act in a different way. But later on, if necessary, the Secretary of State could exercise that power. On timing and FOBTs, we thought carefully--as noble Lords can imagine--before we said that we expect the date will be early
in the new year. Lord Addington Liberal Democrat My Lords, does the Minister agree that good health and sex education might be a way to counter some of the damaging effects? Can the Government make
sure that is in place as soon as possible, so that this strange fantasy world is made slightly more real? Lord Ashton of Hyde The noble Lord is of course right that age verification itself is not the
only answer. It does not cover every possibility of getting on to a pornography site. However, it is the first attempt of its kind in the world, which is why not only we but many other countries are looking at it. I agree that sex education in schools is
very important and I believe it is being brought into the national curriculum already. The Earl of Erroll Crossbench Why is there so much wriggle room in section 6 of the guidance from the DCMS to
the AV regulator? The ISP blocking probably will not work, because everyone will just get out of it. If we bring this into disrepute then the good guys, who would like to comply, probably will not; they will not be able to do so economically. All that
was covered in British Standard PAS 1296, which was developed over three years. It seems to have been totally ignored by the DCMS. You have spent an awful lot of time getting there, but you have not got there. Lord Ashton of
Hyde One of the reasons this has taken so long is that it is complicated. We in the DCMS , and many others, not least in this House, have spent a long time discussing the best way of achieving this. I am not immediately
familiar with exactly what section 6 says, but when the statutory instrument comes before this House--it is an affirmative one to be discussed--I will have the answer ready for the noble Earl. Lord West of Spithead Labour
My Lords, does the Minister not agree that the possession of a biometric card by the population would make the implementation of things such as this very much easier? Lord Ashton of Hyde
In some ways it would, but there are problems with people who either do not want to or cannot have biometric cards.
13th November 2018
Susan Wojcicki, CEO of YouTube, explains how the EU's copyright rewrite will destroy the livelihood of a huge number of Europeans. See article from youtube-creators.googleblog.com
Analysis of BBFC's Post-Consultation Guidance by the Open Rights Group
8th November 2018

See article from openrightsgroup.org (CC)
Following the conclusion of their consultation period, the BBFC have issued new age verification guidance that has been laid before Parliament.

Summary

The new code has some important improvements, notably the introduction of a voluntary
scheme for privacy, close to or based on a GDPR Code of Conduct. This is a good idea, but should not be put in place as a voluntary arrangement. Companies may not want the attention of a regulator, or may simply wish to apply lower or different
standards, and ignore it. It is unclear why, if the government now recognises that privacy protections like this are needed, the government would also leave the requirements as voluntary. We are also concerned that the
voluntary scheme may not be up and running before the AV requirement is put in place. Given that 25 million UK adults are expected to sign up to these products within a few months of its launch, this would be very unhelpful.
Parliament should now:
- Ask the government why the privacy scheme is to be voluntary, if the risks of relying on general data protection law are now recognised;
- Ask for assurance from BBFC that the voluntary scheme will cover all of the major operators; and
- Ask for assurance from BBFC and DCMS that the voluntary privacy scheme will be up and running before obliging operators to put Age Verification measures in place.
The draft code can be found here.

Lack of Enforceability of Guidance

The Digital Economy Act does not allow the BBFC to judge age verification tools by any standard other than whether or not they sufficiently verify age. We
asked that the BBFC persuade the DCMS that statutory requirements for privacy and security were required for age verification tools. The BBFC have clearly acknowledged privacy and security concerns with age
verification in their response. However, the BBFC indicate in their response that they have been working with the ICO and DCMS to create a
voluntary certification scheme for age verification providers: "This voluntary certification scheme will mean that age-verification providers may choose to be
independently audited by a third party and then certified by the Age-verification Regulator. The third party's audit will include an assessment of an age-verification solution's compliance with strict privacy and data security requirements."
The lack of a requirement for additional and specific privacy regulation in the Digital Economy Act is the cause for this voluntary approach. While a voluntary scheme above is likely to be
of some assistance in promoting better standards among age verification providers, the "strict privacy and data security requirements" which the voluntary scheme mentions are not a statutory requirement, leaving some consumers at greater risk
than others.

Sensitive Personal Data

The data handled by age verification systems is sensitive personal data. Age verification services must directly identify users in order to
accurately verify age. Users will be viewing pornographic content, and the data about what specific content a user views is highly personal and sensitive. This has potentially disastrous consequences for individuals and families if the data is lost,
leaked, or stolen. Following a hack affecting Ashley Madison -- a dating website for extramarital affairs -- a number of the site's users were driven to suicide as a result of the public exposure of their sexual
activities and interests. For the purposes of GDPR, data handled by age verification systems falls under the criteria for sensitive personal data, as it amounts to "data concerning a natural person's sex life or
sexual orientation".

Scheduling Concerns

It is of critical importance that any accreditation scheme for age verification providers, or GDPR code of conduct if one is
established, is in place and functional before enforcement of the age verification provisions in the Digital Economy Act commences. All of the major providers who are expected to dominate the age verification market should undergo their audit under the
scheme before consumers will be expected to use the tool. This is especially true when considering the fact that MindGeek have indicated their expectation that 20-25 million UK adults will sign up to their tool within the first few months of operation. A
voluntary accreditation scheme that begins enforcement after all these people have already signed up would be unhelpful. Consumers should be empowered to make informed decisions about the age verification tools that
they choose from the very first day of enforcement. No delays are acceptable if users are expected to rely upon the scheme to inform themselves about the safety of their data. If this cannot be achieved prior to the start of expected enforcement of the
DE Act's provisions, then the planned date for enforcement should be moved back to allow for the accreditation to be completed.

Issues with Lack of Consumer Choice

It is of vital
importance that consumers, if they must verify their age, are given a choice of age verification providers when visiting a site. This enables users to choose which provider they trust with their highly sensitive age verification data and prevents one
actor from dominating the market and thereby promoting detrimental practices with data. The BBFC also acknowledge the importance of this in their guidance, noting in 3.8: "Although not a requirement under section
14(1) the BBFC recommends that online commercial pornography services offer a choice of age-verification methods for the end-user". This does not go far enough to acknowledge the potential issues that may arise in
a fragmented market where pornographic sites are free to offer only a single tool if they desire. Without a statutory requirement for sites to offer all appropriate and available tools for age verification and log in
purposes, it is likely that a market will be established in which one or two tools dominate. Smaller sites will then be forced to adopt these dominant tools as well, to avoid friction with consumers who would otherwise be required to sign up to a new
provider. This kind of market for age verification tools will provide little room for a smaller provider with a greater commitment to privacy or security to survive and robs users of the ability to choose who they
trust with their data. We already called for it to be made a statutory requirement that pornographic sites must offer a choice of providers to consumers who must age verify, however this suggestion has not been taken
up. We note that the BBFC has been working with the ICO and DCMS to produce a voluntary code of conduct. Perhaps a potential alternative solution would be to ensure that a site is only considered compliant if it offers
users a number of tools which have been accredited under the additional privacy and security requirements of the voluntary scheme.

GDPR Codes of Conduct

A GDPR "Code of
Conduct" is a mechanism for providing guidelines to organisations who process data in particular ways, and allows them to demonstrate compliance with the requirements of the GDPR. A code of conduct is voluntary,
but compliance is continually monitored by an appropriate body who are accredited by a supervisory authority. In this case, the "accredited body" would likely be the BBFC, and the "supervisory authority" would be the ICO. The code of
conduct allows for certifications, seals and marks which indicate clearly to consumers that a service or product complies with the code. Codes of conduct are expected to provide more specific guidance on exactly how
data may be processed or stored. In the case of age verification data, the code could contain stipulations on the following (a rough illustrative sketch follows the list):

- Appropriate pseudonymisation of stored data;
- Data and metadata retention periods;
- Data minimisation recommendations;
- Appropriate security measures for data storage;
- Security breach notification procedures;
- Re-use of data for other purposes.
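To make the first few stipulations concrete, the rough sketch below shows what pseudonymised storage with a fixed retention period might look like for an age-verification record: the provider keeps a keyed one-way pseudonym and a pass/fail flag rather than the underlying identity documents, and destroys records once the retention period expires. Everything here (the 90-day period, the function names, the keyed-hash scheme) is an assumption for illustration, not a requirement drawn from the BBFC's guidance or the GDPR.

```python
import hashlib
import os
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)   # illustrative retention period, not a BBFC/ICO figure
PEPPER = os.urandom(32)          # secret key held separately from the stored records

def pseudonym(user_identifier: str) -> str:
    """Replace the raw identity with a keyed, one-way pseudonym before storage."""
    return hashlib.sha256(PEPPER + user_identifier.encode()).hexdigest()

def record_verification(user_identifier: str, store: dict) -> None:
    """Store only a pseudonym, the pass/fail result and an expiry date -- no identity documents."""
    store[pseudonym(user_identifier)] = {
        "over_18": True,
        "expires": datetime.now(timezone.utc) + RETENTION,
    }

def purge_expired(store: dict) -> None:
    """Destroy records once the retention period has elapsed (data minimisation)."""
    now = datetime.now(timezone.utc)
    for key in [k for k, v in store.items() if v["expires"] <= now]:
        del store[key]
```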
The BBFC's proposed "voluntary standard" regime appears to be similar to a GDPR code of conduct, though it remains to be seen how specific the stipulations in the BBFC's standard are. A code of conduct would also
involve being entered into the ICO's public register of UK approved codes of conduct, and the EDPB's public register for all codes of conduct in the EU. Similarly, GDPR Recital 99 notes that "relevant
stakeholders, including data subjects" should be consulted during the drafting period of a code of conduct - a requirement which is not in place for the BBFC's voluntary scheme. It is possible that the BBFC have
opted to create this voluntary scheme for age verification providers rather than use a code of conduct, because they felt they may not meet the GDPR requirements to be considered as an appropriate body to monitor compliance. Compliance must be monitored
by a body who has demonstrated:
- Their expertise in relation to the subject-matter;
- They have established procedures to assess the ability of data processors to apply the code of conduct;
- They have the ability to deal with complaints about infringements; and
- Their tasks do not amount to a conflict of interest.
Parties Involved in the Code of Conduct Process

As noted by GDPR Recital 99, a consultation should be a public process which involves stakeholders and data subjects, and their responses should
be taken into account during the drafting period: "When drawing up a code of conduct, or when amending or extending such a code, associations and other bodies representing categories of controllers or processors
should consult relevant stakeholders, including data subjects where feasible, and have regard to submissions received and views expressed in response to such consultations." The code of conduct must be approved
by a relevant supervisory authority (in this case the ICO). An accredited body (the BBFC) that establishes a code of conduct and monitors compliance is able to establish its own structures and procedures under GDPR Article 41 to handle complaints regarding infringements of the code, or regarding the way it has been implemented. The BBFC would be liable for failures to regulate the code properly under Article 41(4); [1] however, DCMS appear to have accepted the principle that the government would need to protect the BBFC from such liabilities. [2]

GDPR Codes of Conduct and Risk Management

Below is a table of risks created by age verification which we identified during the consultation process. For each risk, we have considered whether a GDPR code of conduct may help to mitigate its effects.
Risk: User identity may be correlated with viewed content.
CoC appropriate? Partially. This risk can never be entirely mitigated if AV is to go ahead, but a CoC could contain very strict restrictions on what identifying data could be stored after a successful age verification.

Risk: Identity may be associated with an IP address, location or device.
CoC appropriate? No. It would be very difficult for a CoC to mitigate this risk as the only safe mitigation would be not to collect user identity information.

Risk: An age verification provider could track users across all the websites its tool is offered on.
CoC appropriate? Yes. Strict rules could be put in place about what data an age verification provider may store, and what data it is forbidden from storing.

Risk: Users may be incentivised to consent to further processing of their data in exchange for rewards (content, discounts etc.).
CoC appropriate? Yes. Age verification tools could be expressly forbidden from offering anything in exchange for user consent.

Risk: Leaked data creates major risks for identified individuals and cannot be revoked or adequately compensated for.
CoC appropriate? Partially. A CoC can never fully mitigate this risk if any data is being collected, but it could contain strict prohibitions on storing certain information and specify retention periods after which data must be destroyed, which may mitigate the impacts of a data breach.

Risk: Access via shared computers puts users at risk if viewing history is stored alongside age verification data.
CoC appropriate? Yes. A CoC could specify that any accounts for pornographic websites which may track viewed content must be strictly separate and not in any visible way linked to a user's age verification account or data that confirms their identity.

Risk: Age verification systems are likely to trade off security for convenience (no 2FA, auto-login, etc.).
CoC appropriate? Yes. A CoC could stipulate that login cookies that "remember" a returning user must only persist for a short time period, and should recommend or enforce two-factor authentication.

Risk: The need to log back in to age verification services to access pornography in "private browsing" mode may lead people to avoid using this feature, generating much more data which is then stored.
CoC appropriate? No. A CoC cannot fix this issue. Private browsing by nature will not store any login cookies or other objects and will require the user to re-authenticate with age verification providers every time they wish to view adult content.

Risk: Users may turn to alternative tools to avoid age verification, which carry their own security risks (especially "free" VPN services or peer-to-peer networks).
CoC appropriate? No. Many UK adults, although over 18, will be uncomfortable with the need to submit identity documents to verify their age and will seek alternative means to access content. It is unlikely that many of these individuals will be persuaded by an accreditation under a GDPR code.

Risk: Age verification login details may be traded and shared among teenagers or younger children, which could lead to bullying or "outing" if such details are linked to viewed content.
CoC appropriate? Yes. Strict rules could be put in place about what data an age verification provider may store, and what data it is forbidden from storing.

Risk: Child abusers could use their access to age verified content as an adult as leverage to create and exploit relationships with children and teenagers seeking access to such content (grooming).
CoC appropriate? No. This risk will exist as long as age verification is providing a successful barrier to accessing such content for under-18s who wish to do so.

Risk: The sensitivity of content dealt with by age verification services means that users who fall victim to phishing scams or fraud have a lower propensity to report it to the relevant authorities.
CoC appropriate? Partially. A CoC or education campaign may help consumers identify trustworthy services, but it cannot fix the core issue, which is that users are being socialised into it being "normal" to input their identity details into websites in exchange for pornography. Phishing scams resulting from age verification will appear and will be common, and the sensitivity of the content involved is a disincentive to reporting it.

Risk: The use of credit cards as an age verification mechanism creates an opportunity for fraudulent sites to engage in credit card theft.
CoC appropriate? No. Phishing and fraud will be common. A code of conduct which lists compliant sites and tools externally on the ICO website may be useful, but a phishing site may simply pretend to be another (compliant) tool, or rely on the fact that users are unlikely to check with the ICO every time they wish to view pornographic content.

Risk: The rush to get age verification tools to market means they may take significant shortcuts when it comes to privacy and security.
CoC appropriate? Yes. A CoC could assist in solving this issue if tools are given time to be assessed for compliance before the age verification regime commences.

Risk: A single age verification provider may come to dominate the market, leaving users little choice but to accept whatever terms the provider offers.
CoC appropriate? Partially. Practically, a CoC could mitigate some of the effects of an age verification tool monopoly if the dominant tool is accredited under the Code. However, this relies on users being empowered to demand compliance with a CoC, and it is possible that users will instead be left with a "take it or leave it" situation where the dominant tool is not CoC accredited.

Risk: Allowing pornography "monopolies" such as MindGeek to operate age verification tools is a conflict of interest.
CoC appropriate? Partially. As the BBFC note in their consultation response, it would not be reasonable to prohibit a pornographic content provider from running an age verification service, as it would prevent any site from running its own tool. However, under a CoC it is possible that a degree of separation could be enforced that requires age verification tools to adhere to strict rules about the use of data, which could mitigate the effects of a large pornographic content provider attempting to collect as much user data as possible for their own business purposes.
[1] "Infringements of the following provisions shall, in accordance with paragraph 2, be subject to
administrative fines up to 10 000 000 EUR, or in the case of an undertaking, up to 2 % of the total worldwide annual turnover of the preceding financial year, whichever is higher: the obligations of the monitoring body pursuant to Article 41(4)."
[2] "contingent liability will provide indemnity to the British Board of Film Classification
(BBFC) against legal proceedings brought against the BBFC in its role as the age verification regulator for online pornography." |
|
It's probably not a good idea to leave much money in a Skype or Xbox Live account as Microsoft can now seize it if they catch you using a vaguely offensive word
|
|
|
| 8th November 2018
|
|
| See article from alphr.com
|
Microsoft has just inflicted a new 'code of conduct' that prohibits customers from communicating nudity, bestiality, pornography, offensive language, graphic violence and criminal activity, whilst allowing Microsoft to steal the money in your account.
If users are found to have shared, or be in possession of, these types of content, Microsoft can suspend or ban the particular user and remove funds or balance on the associated account. It also appears that Microsoft reserves the right to view
user content to investigate violations of these terms. This means it has access to your message history and shared files (including on OneDrive, another Microsoft property) if it thinks you've been sharing prohibited material. Unsurprisingly, few
users are happy that Microsoft is willing to delve through their personal data. Microsoft has not made it clear if it will automatically detect and censor prohibited content or if it will rely on a reporting system. On top of that, Microsoft
hasn't clearly defined its vague terms. Nobody is clear on what the limit on offensive language is. |
|
Facebook friend suggestion: Ms Tress who visits your husband upstairs at your house for an hour every Thursday afternoon whilst you are at work
|
|
|
| 8th November 2018
|
|
| See article from wired.co.uk |
Facebook has filed a patent that describes a method of using the devices of Facebook app users to identify various wireless signals from the devices of other users. It explains how Facebook could use those signals to measure exactly how close the two
devices are to one another and for how long, and analyses that data to infer whether it is likely that the two users have met. The patent also suggests the app could record how often devices are close to one another, the duration and time of meetings,
and could even use the device's gyroscope and accelerometer to analyse movement patterns, for example whether the two users may be going for a jog, smooching or catching a bus together. Facebook's algorithm would use this data to analyse how likely it is
that the two users have met, even if they're not friends on Facebook and have no other connections to one another. This might be based on the pattern of inferred meetings, such as whether the two devices are close to one another for an hour every
Thursday, and an algorithm would determine whether the two users meeting was sufficiently significant to recommend them to each other and/or friends of friends. I don't suppose that Facebook can claim this patent though as police and the security
services have no doubt been using this technique for years. |
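Sketched below, purely as an assumption-laden illustration of the kind of inference the patent describes (none of this is taken from Facebook's filing or code, and the thresholds are invented), is how repeated close-proximity observations between two devices could be turned into a crude "likely acquainted" signal:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative thresholds only; the patent does not publish concrete values.
CLOSE_RSSI_DBM = -60          # signal strength treated as "devices very close"
MIN_MEETINGS_TO_SUGGEST = 4   # recurring co-location before suggesting a friend


def record_observation(log, device_a, device_b, rssi_dbm, when: datetime):
    """Store one close-proximity observation between two devices."""
    if rssi_dbm >= CLOSE_RSSI_DBM:
        key = tuple(sorted((device_a, device_b)))
        log[key].append(when)


def likely_acquainted(log, device_a, device_b) -> bool:
    """Naive heuristic: enough distinct close encounters on different days."""
    key = tuple(sorted((device_a, device_b)))
    distinct_days = {t.date() for t in log[key]}
    return len(distinct_days) >= MIN_MEETINGS_TO_SUGGEST


if __name__ == "__main__":
    log = defaultdict(list)
    # Four Thursday-afternoon encounters, as in the example above.
    for day in (1, 8, 15, 22):
        record_observation(log, "phone-A", "phone-B", rssi_dbm=-55,
                           when=datetime(2018, 11, day, 14, 30))
    print(likely_acquainted(log, "phone-A", "phone-B"))  # True
```

Even a heuristic this crude becomes revealing when run over passively collected signal data at Facebook's scale.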
|
Tim Berners-Lee launches campaign to defend a free and open internet
|
|
|
| 7th November 2018
|
|
| See article from fortheweb.webfoundation.org |
Speaking at the Web Summit conference in Lisbon, Tim Berners-Lee, inventor of the World Wide Web, has launched a campaign to persuade governments, companies and individuals to sign a Contract for the Web with a set of principles intended to defend a free
and open internet.

Contract for the Web

CORE PRINCIPLES

The web was designed to bring people together and make knowledge freely available. Everyone has a role to play to ensure the web serves humanity. By committing to the following
principles, governments, companies and citizens around the world can help protect the open web as a public good and a basic right for everyone. GOVERNMENTS WILL
- Ensure everyone can connect to the internet so that anyone, no matter who they are or where they live, can participate actively online.
- Keep all of the internet available, all of the time so that no one is denied their right to
full internet access.
- Respect people's fundamental right to privacy so everyone can use the internet freely, safely and without fear.
COMPANIES WILL
- Make the internet affordable and accessible to everyone so that no one is excluded from using and shaping the web.
- Respect consumers' privacy and personal data so people are in control of their lives online.
- Develop
technologies that support the best in humanity and challenge the worst so the web really is a public good that puts people first.
CITIZENS WILL
- Be creators and collaborators on the web so the web has rich and relevant content for everyone.
- Build strong communities that respect civil discourse and human dignity so that everyone feels safe and welcome online.
- Fight for the web so the web remains open and a global public resource for people everywhere, now and in the future.
We commit to uphold these principles and to engage in a deliberative process to build a full "Contract for the Web", which will set out the roles and responsibilities of governments, companies and citizens. The challenges facing the web today
are daunting and affect us in all our lives, not just when we are online. But if we work together and each of us takes responsibility for our actions, we can protect a web that truly is for everyone. See
more from fortheweb.webfoundation.org |
|
The Law Commission seems to side with the easily offended and seeks to extend the criminalisation of internet insults
|
|
|
| 4th November 2018
|
|
| See press release
from lawcom.gov.uk See scoping report |
Reforms to the law are required to protect victims from online and social media-based abuse, according to a new Report by the Law Commission for England and Wales. In its Scoping Report assessing the
state of the law in this area, published today [1st November 2018] the Law Commission raises concerns about the lack of coherence in the current criminal law and the problems this causes for victims, police and prosecutors. It is also critical of the
current law's ability to protect people harmed by a range of behaviour online including:
- Receiving abusive and offensive communications
- "Pile on" harassment, often on social media
- Misuse of private images and information
The Commission is calling for:
- reform and consolidation of existing criminal laws dealing with offensive and abusive communications online
- a specific review considering how the law can more effectively protect victims who
are subject to a campaign of online harassment
- a review of how effectively the criminal law protects personal privacy online
Professor David Ormerod QC, Law Commissioner for Criminal Law said: "As the internet and social media have become an everyday part of our lives, online abuse has become
commonplace for many." "Our report highlights the ways in which the criminal law is not keeping pace with these technological changes. We identify the areas of the criminal law most in need of reform in
order to protect victims and hold perpetrators to account." Responding to the Report, Digital Minister Margot James said: "Behaviour that is
illegal offline should be treated the same when it's committed online. We've listened to victims of online abuse as it's important that the right legal protections are in place to meet the challenges of new technology.
"There is much more to be done and we'll be considering the Law Commission's findings as we develop a White Paper setting out new laws to make the UK a safer place to be online. Jess Phillips MP,
Chair, and Rt Hon Maria Miller MP, Co-Chair, of the All-Party Parliamentary Group on Domestic Violence and Abuse and Katie Ghose, Chief Executive of Women's Aid, welcomed the Report saying: "Online
abuse has a devastating impact on survivors and makes them feel as though the abuse is inescapable. Online abuse does not happen in the 'virtual world' in isolation; 85% of survivors surveyed by Women's Aid experienced a pattern of online abuse together
with offline abuse. Yet too often it is not taken as seriously as abuse 'in the real world'. "The All-Party Parliamentary Group on Domestic Violence and Abuse has long called for legislation in this area to be
reviewed to ensure that survivors are protected and perpetrators of online abuse held to account. We welcome the Law Commission's report, which has found that gaps and inconsistencies in the law mean survivors are being failed. We support the call for
further review and reform of the law". The need for reform We were asked to assess whether the current criminal law achieved parity of treatment between online and
offline offending. For the most part, we have concluded that abusive online communications are, at least theoretically, criminalised to the same or even a greater degree than equivalent offline offending. However, we consider there is considerable scope
for reform:
- Many of the applicable offences do not adequately reflect the nature of some of the offending behaviour in the online environment, and the degree of harm it can cause.
- Practical and cultural
barriers mean that not all harmful online conduct is pursued in terms of criminal law enforcement to the same extent that it might be in an offline context.
- More generally, criminal offences could be improved so they
are clearer and more effectively target serious harm and criminality.
- The large number of overlapping offences can cause confusion.
- Ambiguous terms such as "gross
offensiveness" "obscenity" and "indecency" don't provide the required clarity for prosecutors.
Reforms would help to reduce and tackle not only online abuse and offence generally, but also:
- "Pile on" harassment , where online harassment is coordinated against an individual. The Report notes that "in practice, it appears that the criminal law is having little effect in punishing and deterring
certain forms of group abuse".
- The most serious privacy breaches -- for example the Report highlights concerns about the laws around sharing of private sexual images. It also questions whether the law is
adequate to deal with victims who find their personal information e.g. about their health or sexual history, widely spread online.
Impact on victims The Law Commission heard from those affected by this kind of criminal behaviour including victims' groups, the charities that support them, MPs and other high-profile victims.
The Report analyses the scale of online offending and suggests that studies show that the groups in society most likely to be affected by abusive communications online include women, young people, ethnic minorities
and LGBTQ individuals. For example, the Report finds that gender-based online hate crime, particularly misogynistic abuse, is particularly prevalent and damaging. It also sets out the factors which make online
abuse so common -- including the disinhibition of communicating with an unseen victim and the ease with which victims can be identified. The Report highlights harms caused to the victims of online abuse which include:
- psychological effects, such as depression and anxiety
- emotional harms, such as feelings of shame, loneliness and distress
- physiological harms, including
suicide and self-harm in the most extreme cases
- exclusion from public online space and corresponding feelings of isolation
- economic harms
- wider societal harms
It concludes that abuse by groups of offenders online, and the use of visual images to perpetrate abuse, are two of the ways in which online abuse can be aggravated.

Next steps
The Department for Digital, Culture, Media and Sport (DCMS) will now analyse the Report and decide on the next steps including what further work the Law Commission can do to produce recommendations for how the criminal law can be
improved to tackle online abuse. Comment: Law Commission must safeguard freedom of expression See
statement from indexoncensorship.org
Index on Censorship urges the Law Commission to safeguard freedom of expression as it moves towards the second phase of its review of abusive and offensive online communications. The Law Commission
published a report on the first phase of its review of criminal law in this area on 1 November
2018. While Index welcomes the report's recognition that current UK law lacks clarity and certainty, the review is addressing questions that impact directly on freedom of expression and the Law Commission should now proceed with
great caution. Safeguarding the fundamental right to freedom of expression should be a guiding principle for the Law Commission's next steps. Successive court rulings have confirmed that freedom of expression includes having
and expressing views that offend, shock or disturb. As Lord Justice Sir Stephen Sedley said in a 1999 ruling: "Free speech includes not only the inoffensive but the irritating, the contentious, the eccentric, the
heretical, the unwelcome and the provocative provided it does not tend to provoke violence. Freedom only to speak inoffensively is not worth having".
Foreign Secretary
Jeremy Hunt also reaffirmed the UK's commitment to the protection and promotion of
freedom of expression this week, asserting the importance of a free media in particular as a cornerstone of democracy. The next phase of the review should outline how the UK can show global leadership by setting an example
for how to improve outdated legislation in a way that ensures freedom of expression, including speech that is contentious, unwelcome and provocative. Index on Censorship chief executive Jodie Ginsberg said:
"Index will be studying the Law Commission's first phase report on its review of abusive and offensive online communications carefully. Future proposals could have a very negative impact on freedom of expression online and in
other areas. Index urges the Law Commission to proceed with care." |
|
Advert censor ASA launches a new strategy document announcing more proactive censorship of online advertising
|
|
|
| 2nd November 2018
|
|
| See strategy document [pdf] from asa.org.uk
|
The advert censors of ASA have published a five year strategy, with a focus on more censorship of online advertising including exploring the use of machine learning in regulation. The strategy will be officially launched at an ASA conference
in Manchester, entitled The Future of Ad Regulation. ASA explains the highlights of its strategy:

- We will prioritise the protection of vulnerable people and appropriately limit children and young people's exposure to age-restricted ads in sectors like food, gambling and alcohol
- We will listen in new ways, including research, data-driven intelligence gathering and machine learning - our own or that of others - to find out which other advertising-related issues are the most important to tackle
- We will develop our thought-leadership in online ad regulation, including on advertising content and targeting issues relating to areas like voice, facial recognition, machine-generated personalised content and biometrics
- We will explore lighter-touch ways for people to flag concerns
- We will explore whether our decision-making processes and governance always allow us to act nimbly, in line with people's expectations of regulating an increasingly online advertising world
- We will explore new technological solutions, including machine learning, to improve our regulation

Online trends are reflected in the balance of our workload - 88% of the 7,099 ads amended or withdrawn in 2017 following our action
were online ads, either in whole or in part. Meanwhile, two-thirds of the 19,000 cases we resolved last year were about online ads. Our guiding principle is that people should benefit from the same level of protection against
irresponsible online ads as they do offline. The ad rules apply just as strongly online as they do to ads in more traditional media. Our recent rebalancing towards more proactive regulation has had a positive impact, evidenced by
steep rises in the number of ads withdrawn or changed (7,009 last year, up 47% on 2016) and the number of pieces of advice and training delivered to businesses (on course to exceed 400,000 this year). This emphasis on proactive regulation -- intervening
before people need to complain about problematic ads -- will continue under the new strategy. The launch event - The Future of Ad Regulation conference - will take place at Manchester Central Convention Complex on 1 November.
Speakers will include Professor Tanya Byron, Reg Bailey, BBC Breakfast's Tina Daheley, Marketing Week's Russell Parsons, ASA Chief Executive Guy Parker and ASA Chairman David Currie.

ASA Chief Executive Guy Parker said:
We're a much more proactive regulator as a result of the work we've done in the last five years. In the next five, we want to have even more impact regulating online advertising. Online is already well over half of our
regulation, but we've more work to do to take further steps towards our ambition of making every UK ad a responsible ad.
Lord Currie, Chairman of the ASA said: The new strategy will ensure that
protecting consumers remains at the heart of what we do but that our system is also fit for purpose when regulating newer forms of advertising. This also means harnessing new technology to improve our ways of working in identifying problem ads.
|
|
Google claims that it is impractical to require it to implement US constitutional free speech
|
|
|
|
2nd November 2018
|
|
| See article from
bloomberg.com |
Prior to Google's bosses being called in to answer for its policy of silencing conservative voices, Google has filed a statement to the court arguing that it cannot be compelled to guarantee free speech, even if it does discriminate on the basis of political viewpoints. It said: Not only would it be wrong to compel a private company to guarantee free speech in the way that government censorship is forbidden by the Constitution, but it would also have disastrous practical consequences.
Google argued that the First Amendment appropriately limits the government's ability to censor speech, but applying those limitations to private online platforms would undermine important content regulation. If they are bound by the
same First Amendment rules that apply to the government, YouTube and other service providers would lose much of their ability to protect their users against offensive or objectionable content -- including pornography, hate speech, personal attacks, and
terrorist propaganda. |
|
|