Internet News




 

Silencing the British people...

BuzzFeed News leaks UK government plans to set up an internet censor along the lines of Ofcom


Link Here 21st September 2018
The UK government is preparing to establish a new internet censor that would make tech firms liable for content published on their platforms and have the power to sanction companies that fail to take down illegal material and hate speech within hours.

Under legislation being drafted by the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) due to be announced this winter, a new censorship framework for online social harms would be created.

BuzzFeed News has obtained details of the proposals, which would see the establishment of an internet censor similar to Ofcom.

Home secretary Sajid Javid and culture secretary Jeremy Wright are considering the introduction of a mandatory code of practice for social media platforms and strict new rules, such as takedown deadlines forcing websites to remove illegal hate speech within a set timeframe or face penalties. Ministers are also looking at implementing age verification for users of Facebook, Twitter, and Instagram.

The new proposals are still in the development stage and are due to be put out for consultation later this year. The new censor would also develop new regulations controlling non-illegal content and online behaviour. The rules for what constitutes non-illegal content will be the subject of what is likely to be a hotly debated consultation.

BuzzFeed News has also been told ministers are looking at creating a second new censor for online advertising. Its powers would include a crackdown on online advertisements for food and soft drink products that are high in salt, fat, or sugar.

BuzzFeed News understands concerns have been raised in Whitehall that the regulation of non-illegal content will spark opposition from free speech campaigners and MPs. There are also fears internally that some of the measures being considered, including blocking websites that do not adhere to the new regulations, are so draconian that they will generate considerable opposition.

A government spokesperson confirmed to BuzzFeed News that the plans would be unveiled later this year.

 

 

Fake concerns, fake surveys about fake news...

Ofcom joins in the government's propaganda campaign, calling for internet censorship to be as strict as TV censorship


Link Here 19th September 2018
Ofcom has published a prospectus angling for a role as the UK internet censor. It writes:

Ofcom has published a discussion document examining the area of harmful online content.

In the UK and around the world, a debate is underway about whether regulation is needed to address a range of problems that originate online, affecting people, businesses and markets.

The discussion document is intended as a contribution to that debate, drawing on Ofcom's experience of regulating the UK's communications sector, and broadcasting in particular. It draws out the key lessons from the regulation of content standards -- for broadcast and on-demand video services -- and the insights that these might provide to policy makers into the principles that could underpin any new models for addressing harmful online content.

The UK Government intends to legislate to improve online safety, and to publish a White Paper this winter. Any new legislation is a matter for Government and Parliament, and Ofcom has no view about the institutional arrangements that might follow.

Alongside the discussion paper, Ofcom has published joint research with the Information Commissioner's Office on people's perception, understanding and experience of online harm. The survey of 1,686 adult internet users finds that 79% have concerns about aspects of going online, and 45% have experienced some form of online harm. The study shows that protection of children is a primary concern, and reveals mixed levels of understanding around what types of media are regulated.

The sales pitch is more or less that Ofcom's TV censorship has 'benefited' viewers so would be a good basis for internet censorship.

Ofcom particularly makes a point of pushing the results of a survey of internet users and their 'concerns'. The survey is very dubious and ends up suggesting that 79% of users have concerns about going online.

And maybe this claim is actually true. After all, the Melon Farmers are amongst the 79% who have concerns about going online. The Melon Farmers are concerned that:

  • There are vast amounts of scams and viruses waiting to be filtered out of the Melon Farmers' email inbox every day.
  • The authorities never seem interested in doing anything whatsoever about protecting people from being scammed out of their life savings. Have you EVER heard of the police investigating a phishing scam?
  • On the other hand, the police devote vast resources to prosecuting internet insults and jokes, whilst never investigating the scams that see old folks lose their life savings.

So yes, there is concern about the internet. BUT, it would be a lie to infer that these concerns mean support for Ofcom's proposals to censor websites along the lines of TV.

In fact, looking at the figures, some of the larger categories of 'concerns' are more about fears of real crime than about issues like fake news.

Interestingly, Ofcom has published how the 'concerns' were hyped up by prompting the people surveyed a bit. For instance, Ofcom reports that 12% of internet users say they are 'concerned' about fake news without being prompted. With a little prompting by the interviewer, the number of people reporting being concerned about fake news magically increases to 29%.

It also has to be noted that there are NO reports in the survey of internet users concerned about a lack of balancing opinions in news, a lack of algorithm transparency, a lack of trust ratings for news sources, or indeed most of the other suggestions that Ofcom addresses.

I've seen more fake inferences in the Ofcom discussion document than I have seen fake news items on the internet in the last ten years.

See also an article from vpncompare.co.uk which concurs with some of these concerns about the Ofcom survey.

 

 

What is the point of TV censorship?...

Well according to Tony Hall it's to impose onerous expenses to even up commercial imbalances


Link Here 17th September 2018

Tony Hall, the BBC's director general, has repeated his call for the global streaming companies Netflix and Amazon to suffer the same censorship as the UK's traditional broadcasters -- or else risk killing off distinctive British content. He said to the Royal Television Society's London conference:

It cannot be right that the UK's media industry is competing against global giants with one hand tied behind its back.

In so many ways -- prominence, competition rules, advertising, taxation, content regulation, terms of trade, production quotas -- one set of rules applies to UK companies, and barely any apply to the new giants. That needs rebalancing, too. We stand ready to help, where we can.

Hall used the speech to warn that young British audiences now spend almost as much time watching Netflix -- which only launched its UK streaming service in 2012 -- as watching BBC television and iPlayer combined.

Citing Ofcom figures, Hall warned that Britain's public service broadcasters have cut spending on content in real terms by around £1bn since 2004. He said that global streaming companies are not spending enough on British productions to make up the difference, while their UK-based productions tend to focus on material with global appeal rather than a distinctly British flavour. Hall added:

This isn't just an issue for us economically, commercially or as institutions. There is an impact on society. The content we produce is not an ordinary consumer good. It helps shape our society. It brings people together, it helps us understand each other and share a common national story.

 

 

The EU snaps its fingers...

The European Commission publishes its proposal for massive fines for internet companies that don't implement censorship orders within the hour


Link Here 15th September 2018
Full story: Internet Censorship in EU...EU proposes mandatory cleanfeed for all member states
Tech companies that fail to remove terrorist content quickly could soon face massive fines. The European Commission proposed new rules on Wednesday that would require internet platforms to remove illegal terror content within an hour of it being flagged by national authorities. Firms could be fined up to 4% of global annual revenue if they repeatedly fail to comply.

Facebook (FB), Twitter (TWTR) and YouTube owner Google (GOOGL) had already agreed to work with the European Union on a voluntary basis to tackle the problem. But the Commission said that progress has not been sufficient.

A penalty of 4% of annual revenue for 2017 would translate to $4.4 billion for Google parent Alphabet and $1.6 billion for Facebook.
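As a rough sanity check on those figures, the proposed maximum penalty is a flat percentage of global annual revenue. A minimal sketch follows; the 2017 revenue figures used are approximate and included only for illustration:

```python
# Sketch of the proposed fine: a flat 4% share of global annual revenue.
# The 2017 revenue figures below are approximate, for illustration only.

def max_fine(annual_revenue_usd: float, rate: float = 0.04) -> float:
    """Maximum penalty under the proposed rule: rate * global annual revenue."""
    return annual_revenue_usd * rate

alphabet_2017_revenue = 110.9e9  # approx. Alphabet revenue, 2017 (USD)
facebook_2017_revenue = 40.7e9   # approx. Facebook revenue, 2017 (USD)

print(round(max_fine(alphabet_2017_revenue) / 1e9, 1))  # 4.4 (billion USD)
print(round(max_fine(facebook_2017_revenue) / 1e9, 1))  # 1.6 (billion USD)
```

This reproduces the $4.4 billion and $1.6 billion figures quoted above.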

The proposal is the latest in a series of European efforts to control the activities of tech companies.

The terror content proposal needs to be approved by the European Parliament and EU member states before becoming law.

 

 

Offsite Article: Spine tingling censorship...


Link Here 15th September 2018
Paypal associates autonomous sensory meridian response (ASMR) content (relaxing whispering) with sex, and then rather unfairly invokes its anti-sex censorship and restrictions

See article from engadget.com

 

 

UK mass surveillance ruled unlawful in landmark judgment...

The European Court of Human Rights has ruled that the UK's mass interception programmes have breached the European Convention on Human Rights.


Link Here 14th September 2018
Full story: Snooper's Charter...Tories re-start massive programme of communications snooping

The European Court of Human Rights (ECtHR) has found that the UK's mass surveillance programmes, revealed by NSA whistleblower Edward Snowden, did not meet the quality of law requirement and were incapable of keeping the interference to what is necessary in a democratic society.

The landmark judgment marks the Court's first ruling on UK mass surveillance programmes revealed by Mr Snowden. The case was started in 2013 by campaign groups Big Brother Watch, English PEN, Open Rights Group and computer science expert Dr Constanze Kurz following Mr Snowden's revelation of GCHQ mass spying.

Documents provided by Mr Snowden revealed that the UK intelligence agency GCHQ were conducting population-scale interception, capturing the communications of millions of innocent people. The mass spying programmes included TEMPORA, a bulk data store of all internet traffic; KARMA POLICE, a catalogue including a web browsing profile for every visible user on the internet; and BLACK HOLE, a repository of over 1 trillion events including internet histories, email and instant messenger records, search engine queries and social media activity.

The applicants argued that the mass interception programmes infringed UK citizens' rights to privacy protected by Article 8 of the European Convention on Human Rights as the population-level surveillance was effectively indiscriminate, without basic safeguards and oversight, and lacked a sufficient legal basis in the Regulation of Investigatory Powers Act (RIPA).

In its judgment, the ECtHR acknowledged that bulk interception is by definition untargeted; that there was a lack of oversight of the entire selection process; and that safeguards were not sufficiently robust to provide adequate guarantees against abuse.

In particular, the Court noted concern that the intelligence services can search and examine "related communications data" apparently without restriction -- data that identifies senders and recipients of communications, their location, email headers, web browsing information, IP addresses, and more. The Court expressed concern that such unrestricted snooping could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with.

The Court acknowledged the importance of applying safeguards to a surveillance regime, stating:

In view of the risk that a system of secret surveillance set up to protect national security may undermine or even destroy democracy under the cloak of defending it, the Court must be satisfied that there are adequate and effective guarantees against abuse.

The Government passed the Investigatory Powers Act (IPA) in November 2016, replacing the contested RIPA powers and controversially putting mass surveillance powers on a statutory footing.

However, today's judgment that indiscriminate spying breaches rights protected by the ECHR is likely to provoke serious questions as to the lawfulness of bulk powers in the IPA.

Jim Killock, Executive Director of Open Rights Group said:

Viewers of the BBC drama, the Bodyguard, may be shocked to know that the UK actually has the most extreme surveillance powers in a democracy. Since we brought this case in 2013, the UK has actually increased its powers to indiscriminately surveil our communications whether or not we are suspected of any criminal activity.

In light of today's judgment, it is even clearer that these powers do not meet the criteria for proportionate surveillance and that the UK Government is continuing to breach our right to privacy.

Silkie Carlo, director of Big Brother Watch said:

This landmark judgment confirming that the UK's mass spying breached fundamental rights vindicates Mr Snowden's courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit for justice.

Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.

Antonia Byatt, director of English PEN said:

This judgment confirms that the British government's surveillance practices have violated not only our right to privacy, but our right to freedom of expression too. Excessive surveillance discourages whistle-blowing and discourages investigative journalism. The government must now take action to guarantee our freedom to write and to read freely online.

Dr Constanze Kurz, computer scientist, internet activist and spokeswoman of the German Chaos Computer Club said:

What is at stake is the future of mass surveillance of European citizens, not only by UK secret services. The lack of accountability is not acceptable when the GCHQ penetrates Europe's communication data with their mass surveillance techniques. We all have to demand now that our human rights and more respect of the privacy of millions of Europeans will be acknowledged by the UK government and also by all European countries.

Dan Carey of Deighton Pierce Glynn, the solicitor representing the applicants, stated as follows:

The Court has put down a marker that the UK government does not have a free hand with the public's communications and that in several key respects the UK's laws and surveillance practices have failed. In particular, there needs to be much greater control over the search terms that the government is using to sift our communications. The pressure of this litigation has already contributed to some reforms in the UK and this judgment will require the UK government to look again at its practices in this most critical of areas.

 

 

Softening a hammer blow...

The EU parliament approved a few amendments to try and soften the blow of a massive new internet censorship regime


Link Here 13th September 2018
Full story: Copyright in the EU...Copyright law for Europe
The European Parliament has voted to approve new copyright powers enabling the big media industry to control how their content is used on the internet.

Article 11 introduces the link tax which lets news companies control how their content is used. The target of the new law is to make Google pay newspapers for its aggregating Google News service. The collateral damage is that millions of websites can now be harangued for linking to and quoting articles, or even just sharing links to them.

Article 13 introduces the requirement for user content sites to create censorship machines that pre-scan all uploaded content and block anything copyrighted. The original proposal, voted on in June, directly specified that content hosts use censorship machines (or filters, as the EU prefers to call them). After a cosmetic rethink since June, the law no longer specifies automatic filters, but instead makes content hosts responsible for any copyrighted material they publish. And of course the only feasible way that content hosts can ensure they are not publishing copyrighted material is to use censorship machines anyway. The law was introduced really with just the intention of making YouTube and Facebook pay more for content from the big media companies. The collateral damage to individuals and small businesses was clearly of no concern to the well-lobbied MEPs.

Both articles will introduce profound new levels of censorship for all users of the internet, and will also mean reduced opportunities for people to get their contributions published or noticed on the internet. This is simply because the large internet companies are commercial organisations and will always make decisions with costs and profitability in mind. They are not state censors with a budget to spend on nuanced decision making. So the net outcome will be to block vast swathes of content from being uploaded, just in case it may contain copyrighted material.

An example to demonstrate the point is the US censorship law, FOSTA. It requires content hosts to block content facilitating sex trafficking. Internet companies generally decided that it was easier to block all adult content rather than to try and distinguish sex trafficking from non-trafficking sex related content. So sections of websites for dating and small ads, personal services etc were shut down overnight.

The EU has, however, introduced a few amendments to the original law to slightly lessen the impact on individuals and small-scale content creators.

  • Article 13 will now only apply to platforms where the main purpose ...is to store and give access to the public or to stream significant amounts of copyright protected content uploaded / made available by its users and that optimise content and promote for profit making purposes.
  • When defining best practices for Article 13, special account must now be taken of fundamental rights and the use of exceptions and limitations. Special focus should also be given to ensuring that the burden on SMEs remains appropriate and that automated blocking of content is avoided (effectively an exception for micro/small businesses). Article 11 shall not extend to mere hyperlinks, which are accompanied by individual words (so it seems links are safe, but quoted snippets of text must be very short), and the protection shall also not extend to factual information which is reported in journalistic articles from a press publication and will therefore not prevent anyone from reporting such factual information.
  • Article 11 shall not prevent legitimate private and non-commercial use of press publications by individual users .
  • Article 11 rights shall expire 5 years after the publication of the press publication. This term shall be calculated from the first day of January of the year following the date of publication. The right referred to in paragraph 1 shall not apply with retroactive effect .
  • Individual member states will now have to decide how Article 11 is implemented, which could create some confusion across borders.

At the same time, the EU rejected the other modest proposals to help out individuals and small creators:

  • No freedom of panorama. When we take photos or videos in public spaces, we're apt to incidentally capture copyrighted works: from stock art in ads on the sides of buses to t-shirts worn by protestors, to building facades claimed by architects as their copyright. The EU rejected a proposal that would make it legal Europe-wide to photograph street scenes without worrying about infringing the copyright of objects in the background.
  • No user-generated content exemption, which would have made EU states carve out an exception to copyright for using excerpts from works for criticism, review, illustration, caricature, parody or pastiche.

A final round of negotiation with the EU Council and European Commission is now due to take place before member states make a decision early next year. But this is historically more of a rubber stamping process and few, if any, significant changes are expected.

However, anybody who thinks that Brexit will stop this from impacting the UK should be cautious. Regardless of what the EU approves, the UK might still have to implement it, and in any case the current UK Government supports many of the controversial new measures.

 

 

Comment: A grim day for digital rights in Europe...

Comments about censorship machines, link tax, and clicking on terrorist content


Link Here 13th September 2018
Full story: Copyright in the EU...Copyright law for Europe

New Copyright Powers, New Terrorist Content Laws: A Grim Day For Digital Rights in Europe

See  article from eff.org by Danny O'Brien

Despite waves of calls and emails from European Internet users, the European Parliament today voted to accept the principle of a universal pre-emptive copyright filter for content-sharing sites, as well as the idea that news publishers should have the right to sue others for quoting news items online -- or even using their titles as links to articles. Out of all of the potential amendments offered that would fix or ameliorate the damage caused by these proposals, they voted for the worst on offer.

There are still opportunities, at the EU level, at the national level, and ultimately in Europe's courts, to limit the damage. But make no mistake, this is a serious setback for the Internet and digital rights in Europe.

It also comes at a trepidatious moment for pro-Internet voices in the heart of the EU. On the same day as the vote on these articles, another branch of the European Union's government, the Commission, announced plans to introduce a new regulation on preventing the dissemination of terrorist content online. Doubling down on speedy unchecked censorship, the proposals will create a new removal order, which will oblige hosting service providers to remove content within one hour of being ordered to do so. Echoing the language of the copyright directive, the Terrorist Regulation aims at ensuring the smooth functioning of the digital single market in an open and democratic society, by preventing the misuse of hosting services for terrorist purposes; it encourages the use of proactive measures, including the use of automated tools.

Not content with handing copyright law enforcement to algorithms and tech companies, the EU now wants to expand that to defining the limits of political speech too.

And as bad as all this sounds, it could get even worse. Elections are coming up in the European Parliament next May. Many of the key parliamentarians who have worked on digital rights in Brussels will not be standing. Marietje Schaake, author of some of the better amendments for the directive, announced this week that she would not be running again. Julia Reda, the German Pirate Party representative, is moving on; Jan Philipp Albrecht, the MEP behind the GDPR, has already left Parliament to take up a position in domestic German politics. The European Parliament's reserves of digital rights expertise, never that full to begin with, are emptying.

The best that can be said about the Copyright in the Digital Single Market Directive, as it stands, is that it is so ridiculously extreme that it looks set to shock a new generation of Internet activists into action -- just as the DMCA, SOPA/PIPA and ACTA did before it.

If you've ever considered stepping up to play a bigger role in European politics or activism, whether at the national level, or in Brussels, now would be the time.

It's not enough to hope that these laws will lose momentum or fall apart from their own internal incoherence, or that those who don't understand the Internet will refrain from breaking it. Keep reading and supporting EFF, and join Europe's powerful partnership of digital rights groups, from Brussels-based EDRi to your local national digital rights organization. Speak up for your digital business, open source project, for your hobby or fandom, and as a contributor to the global Internet commons.

This was a bad day for the Internet and for the European Union: but we can make sure there are better days to come.

 

 

Fake news, terrorism content and closed forums...

'Scattergun' approach to addressing online content risks damaging freedom of expression. A statement by Index on Censorship


Link Here 13th September 2018

Parliament needs to stop creating piecemeal laws to address content online -- or which make new forms of speech illegal.

Index is very concerned about the plethora of law-making initiatives related to online communications, the most recent being MP Lucy Powell's online forums regulation bill, which targets hate crime and "secret" Facebook groups.

Powell's bill purports to "tackle online hate, fake news and radicalisation" by making social media companies liable for what is published in large, closed online forums -- and is the latest in a series of poorly drafted attempts by parliamentarians to address communications online.

If only Powell's proposal were the worst piece of legislation parliament will consider this autumn. Yesterday, MPs debated the Counter-Terrorism and Border Security Bill, which would make it a crime to view information online that is "likely to be useful" to a terrorist. No terrorist intent would be required -- but you would risk up to 15 years in prison if found guilty. This would make the work of journalists and academics very difficult or impossible.

Attempts to tackle online content are coming from all corners with little coordination -- although a factor common to all these proposals is that they utterly fail to safeguard freedom of expression.

Over the summer, the Commons Select Committee on Culture, Media and Sport issued a preliminary report on tackling fake news and the government launched a consultation on a possible new law to prevent "intimidation" of those standing for elections.

In addition, the government is expected to publish later this year a white paper on internet safety aimed "to make sure the UK is the safest place in the world to be online". The Law Commission, already tasked with publishing a report on offensive online communications, was last week asked to review whether misogyny should be considered a hate crime.

Jodie Ginsberg, CEO of Index, said:

"We're having to play whack-a-mole at the moment to prevent poorly drawn laws inadvertently stifling freedom of expression, especially online. The scattergun approach is no way to deal with concerns about online communications. Instead of paying lip service to freedom of expression as a British value, it needs to be front and centre when developing policies".

"We already have laws to deal with harassment, incitement to violence, and even incitement to hatred. International experience shows us that even well-intentioned laws meant to tackle hateful views online often end up hurting the minority groups they are meant to protect, stifle public debate, and limit the public's ability to hold the powerful to account."

 

 

The Cookie Law...

The EU ePrivacy regulation due in a few months is set to require websites to be more open about tracking cookies and more strict in gaining consent for their use


Link Here 13th September 2018

The so-called cookie law, a moniker for the proposed new EU ePrivacy regulation due to come into play before the year is out, is expected to severely impact the use of cookies online and across digital marketing. As such, it could pose an even bigger test to businesses than GDPR: it is likely to create a deficit in the customer information they can collect, even post-GDPR.

Current cookie banner notifications, where websites inform users of cookie collection, will make way for cookie request pop-ups that deny cookie collection until a user has opted in or out of different types of cookie collection. Such a pop-up is expected to cause a drop in web traffic of as much as 40 per cent. The good news is that it will only appear should the user not have already set their cookie preferences at browser level.

The outcome for businesses whose marketing and advertising lies predominantly online is the inevitable reduction in their ability to track, re-target and optimise experiences for their visitors.

...

For any business with a website and dependent on cookies, the new regulations put them at severe risk of losing this vital source of consumer data . As a result, businesses must find a practical, effective and legal alternative to alleviate the burden on the shoulders of all teams involved and to offset any drastic shortfall in this crucial data.

....

Putting the power in the hands of consumers when it comes to setting browser-level cookie permissions will limit a business's ability to extensively track the actions users take on company websites and progress targeted cookie-based advertising. Millions of internet users will have the option to withdraw their dataset from the view of businesses, one of the biggest threats ePrivacy poses.
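The opt-in model described above can be sketched in a few lines. This is a hypothetical illustration, not drawn from the regulation text: the category names and the exemption for strictly necessary cookies are assumptions made for the example.

```python
# Hypothetical sketch of opt-in cookie consent: tracking cookies are only set
# for categories the visitor has explicitly consented to. "Essential" cookies
# (e.g. session cookies) are assumed exempt. Category names are illustrative.

ESSENTIAL = "essential"
ANALYTICS = "analytics"
ADVERTISING = "advertising"

OPT_IN_CATEGORIES = {ANALYTICS, ADVERTISING}  # require explicit consent

def allowed_cookie_categories(consented):
    """Return the cookie categories a site may set for this visitor."""
    # Essential cookies are always allowed; everything else needs opt-in.
    return {ESSENTIAL} | (set(consented) & OPT_IN_CATEGORIES)

# A visitor who opted in only to analytics gets no advertising cookies:
print(sorted(allowed_cookie_categories({"analytics"})))
# ['analytics', 'essential']
```

Under this model, a visitor who never interacts with the consent pop-up yields only essential cookies, which is exactly the loss of tracking data the article describes.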

...Read the full article from smallbusiness.co.uk

 

 

The Rise of the Machines: Cyberdyne is created...

MEPs approve copyright law requiring Google and Facebook to use censorship machines to block user uploads that may contain snippets of copyright material, including headlines, article text, pictures and video


Link Here 12th September 2018
Full story: Copyright in the EU...Copyright law for Europe
The European Parliament has approved a disgraceful copyright law that threatens to destroy the internet as we know it.

The rule hands more power to news and record companies against Internet giants like Google and Facebook. But it also pushes companies towards sweeping blocks of user-generated content, such as internet memes or reaction GIFs that use copyrighted material. The tough approach could spell the end for internet memes, which typically lay text over copyrighted photos or video from television programmes, films, music videos and more.

MEPs voted 438 in favour of the measures, 226 against, with 39 abstentions. The vote introduced Articles 11 and 13 to the directive, dubbed the link tax and censorship machines.

Article 13 puts the onus of policing for copyright infringement on the websites themselves. This forces web giants like YouTube and Facebook to scan uploaded content to stop the unlicensed sharing of copyrighted material. If the internet companies find that such scanning does not work well, or makes the service unprofitable, the companies could pull out of allowing users to post at all on topics where the use of copyright material is commonplace.

The second amendment to the directive, Article 11, is intended to give publishers and newspapers a way to make money when companies like Google link to their stories. Search engines and online platforms like Twitter and Facebook will have to pay for a licence to link to news publishers when quoting portions of text from these outlets.

Following Wednesday's vote, EU lawmakers will now take the legislation to talks with the European Commission and the 28 EU countries.

 

 

Offsite Article: Not very private privacy protections...


Link Here 12th September 2018
UK internet domain controller Nominet consults on proposals to ensure that copyright holders and the UK authorities can obtain the identity of website owners even when privacy proxy services are used

See article from surveys.nominet.org.uk

 

 

Campaign: ResistAV...

Pandora Blake and Myles Jackman launch a new campaigning website to raise funds for a challenge to the government's upcoming internet porn censorship regime


Link Here 11th September 2018
Full story: BBFC Internet Porn Censors...BBFC designated as the UK internet porn censor

Niche porn producer Pandora Blake, Misha Mayfair, campaigning lawyer Myles Jackman and Backlash are backing a legal challenge to the upcoming internet porn censorship regime in the UK. They write on the new ResistAV.com website:

We are mounting a legal challenge.

Do you lock your door when you watch porn, or do you publish a notice in the paper? The new UK age verification law means you may soon have to upload a proof of age to visit adult sites. This would connect your legal identity to a database of all your adult browsing. Join us to prevent the damage to your privacy.

The UK Government is bringing in age verification for adults who want to view adult content online, yet has failed to provide privacy and security obligations to ensure your private information is securely protected.

The law does not currently limit age verification software to holding only the data needed to verify your age. Identifying data held about you could therefore include anything from your passport information to your credit card details, or even your full search history. This is highly sensitive data.

What are the Privacy Risks?

Data Misuse - Since age verification providers are legally permitted to collect this information, what is to stop them from increasing revenue through targeting advertising at you, or even selling your personal data?

Data Breaches - No database is perfectly secure, despite good intentions. The leaking or hacking of your sensitive personal information could be truly devastating. The Ashley Madison hack led to suicides. Don't let the Government allow your private sexual preferences to be leaked into the public domain.

What are we asking money for?

We're asking you to help us crowdfund legal fees so we can challenge the new age verification rules under the Digital Economy Act 2017. We're asking for £10,000 to cover the cost of initial legal advice, since it's a complicated area of law. Ultimately, we'd like to raise even more money, so we can send a message to Government that your personal privacy is of paramount importance.

 

 

Our echo chamber is better than yours...

Labour's Lucy Powell introduces bill to censor closed online Facebook groups


Link Here 11th September 2018
Lucy Powell writes in the Guardian (presumably intended as an open comment):

Closed forums on Facebook allow hateful views to spread unchallenged among terrifyingly large groups. My bill would change that

You may wonder what could bring Nicky Morgan, Anna Soubry, David Lammy, Jacob Rees-Mogg and other senior MPs from across parliament together at the moment. Yet they are all sponsoring a bill I'm proposing that will tackle online hate, fake news and radicalisation. It's because, day-in day-out, whatever side of an argument we are on, we see the pervasive impact of abuse and hate online, and increasingly offline, too.

Social media has given extremists a new tool with which to recruit and radicalise. It is something we are frighteningly unequipped to deal with.

Worryingly, it is on Facebook, which most of us in Britain use, where people are being exposed to extremist material. Instead of small meetings in pubs or obscure websites in the darkest corners of the internet, our favourite social media site is increasingly where hate is cultivated.

Online echo chambers are normalising and allowing extremist views to go viral unchallenged. These views are spread as the cheap thrill of racking up Facebook likes drives behaviour and reinforces a binary worldview. Some people are being groomed unwittingly as unacceptable language is treated as the norm. Others have a more sinister motive.

While in the real world, alternative views would be challenged by voices of decency in the classroom, staffroom, or around the dining-room table, there are no societal norms in the dark crevices of the online world. The impact of these bubbles of hate can be seen, in extreme cases, in terror attacks from radicalised individuals. But we can also see it in the rise of the far right, with Tommy Robinson supporters rampaging through the streets this summer, or in increasing Islamophobia and antisemitism.

Through Facebook groups (essentially forums), extremists can build large audiences. There are many examples of groups that feature anti-Muslim or antisemitic content daily, in an environment which, because critics are removed from the groups, normalises these hateful views. If you see racist images, videos and articles in your feed but not the opposing argument, you might begin to think those views are acceptable and even correct. If you already agree with them, you might be motivated to act.

This is the thinking behind Russia's interference in the 2016 US presidential election. The Russian Internet Research Agency set up Facebook groups, amassed hundreds of thousands of members, and used them to spread hate and fake news, organise rallies, and attack Hillary Clinton. Most of its output was designed to stoke the country's racial tensions.

It's not only racism that is finding a home on Facebook. Marines United was a secret group of 30,000 current and former servicemen in the British armed forces and US Marines. Members posted nude photos of their fellow servicewomen, taken in secret. A whistleblower described the group as revenge porn, creepy stalker-like photos taken of girls in public, talk about rape. It is terrifying that the group grew so large before anyone spoke out, and that Facebook did nothing until someone informed the media.

Because these closed forums can be given a secret setting, they can be hidden away from everyone but their members. This locks out the police, intelligence services and charities that could otherwise engage with the groups and correct disinformation. This could be particularly crucial with groups where parents are told not to vaccinate their children against diseases.

Despite having the resources to solve the problem, Facebook lacks the will. In fact, at times it actively obstructs those who wish to tackle hate and disinformation. Of course, it is not just Facebook, and the proliferation of online platforms and forums means that the law has been much too slow to catch up with our digital world.

We should educate people to be more resilient and better able to spot fake news and recognise hate, but we must also ensure there are much stronger protections to spread decency and police our online communities. The responsibility to regulate these social media platforms falls on the government. It is past time to act.

That's why I am introducing a bill in parliament which will do just that. By establishing legal accountability for what's published in large online forums, I believe we can force those who run these echo chambers to stamp out the evil that is currently so prominent. Social media can be a fantastic way of bringing people together, which is precisely why we need to prevent it being hijacked by those who instead wish to divide.

 

 

Voting in Cyberdyne's Censorship Machines...

How the EU's Copyright Filters Will Make it Trivial For Anyone to Censor the Internet


Link Here 11th September 2018
Full story: Copyright in the EU...Copyright law for Europe

On Wednesday, the EU will vote on whether to accept two controversial proposals in the new Copyright Directive; one of these clauses, Article 13, has the potential to allow anyone, anywhere in the world, to effect mass, rolling waves of censorship across the Internet.

The way things stand today, companies that let their users communicate in public (by posting videos, text, images, etc) are required to respond to claims of copyright infringement by removing their users' posts, unless the user steps up to contest the notice. Sites can choose not to remove work if they think the copyright claims are bogus, but if they do, they can be sued for copyright infringement (in the United States at least), alongside their users, with huge penalties at stake. Given that risk, the companies usually do not take a stand to defend user speech, and many users are too afraid to stand up for their own speech because they face bankruptcy if a court disagrees with their assessment of the law.

This system, embodied in the United States' Digital Millennium Copyright Act (DMCA) and exported to many countries around the world, is called notice and takedown, and it offers rightsholders the ability to unilaterally censor the Internet on their say-so, without any evidence or judicial oversight. This is an extraordinary privilege without precedent in the world of physical copyright infringement (you can't walk into a cinema, point at the screen, declare I own that, and get the movie shut down!).

But rightsholders have never been happy with notice and takedown. Because works that are taken down can be reposted, sometimes by bots that automate the process, rightsholders have called notice and takedown a game of whac-a-mole, where they have to keep circling back to remove the same infringing files over and over.

Rightsholders have long demanded a notice and staydown regime. In this system, rightsholders send online platforms digital copies of their whole catalogs; the platforms then build copyright filters that compare everything a user wants to post to this database of known copyrights, and block anything that seems to be a match.
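The notice-and-staydown workflow described above can be sketched in a few lines. This is a deliberately simplified illustration with hypothetical names: real systems such as YouTube's Content ID use perceptual fingerprints that survive re-encoding, not the exact hashing used here.

```python
import hashlib

class StaydownFilter:
    """Toy sketch of a 'notice and staydown' upload filter."""

    def __init__(self):
        self.blocklist = set()  # fingerprints of claimed works

    @staticmethod
    def fingerprint(data: bytes) -> str:
        # Real filters use fuzzy perceptual matching; an exact
        # hash stands in for that here purely for illustration.
        return hashlib.sha256(data).hexdigest()

    def register_claim(self, work: bytes) -> None:
        # Rightsholders submit catalogues in bulk; each entry is
        # fingerprinted and blocked from all future uploads.
        self.blocklist.add(self.fingerprint(work))

    def allow_upload(self, upload: bytes) -> bool:
        # Every user post is compared against the claim database.
        return self.fingerprint(upload) not in self.blocklist

f = StaydownFilter()
f.register_claim(b"hit single master recording")
print(f.allow_upload(b"hit single master recording"))  # False: blocked
print(f.allow_upload(b"my holiday video"))             # True: allowed
```

Note that nothing in this flow involves a human checking whether a claim is legitimate, which is the crux of the criticism that follows.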

Tech companies have voluntarily built versions of this system. The most well-known of the bunch is YouTube's Content ID system, which cost $60,000,000 to build, and which works by filtering the audio tracks of videos to categorise them. Rightsholders are adamant that Content ID doesn't work nearly well enough, missing all kinds of copyrighted works, while YouTube users report rampant overmatching, in which legitimate works are censored by spurious copyright claims: NASA gets blocked from posting its own Mars rover footage; classical pianists are blocked from posting their own performances; birdsong results in videos being censored; entire academic conferences lose their presenters' audio because the hall they rented played music at the lunch-break--you can't even post silence without triggering copyright enforcement. Besides that, there is no bot that can judge whether something that does use copyrighted material is fair dealing. Fair dealing is protected under the law, but not under Content ID.

If Content ID is a prototype, it needs to go back to the drawing board. It overblocks (catching all kinds of legitimate media) and underblocks (missing stuff that infuriates the big entertainment companies). It is expensive, balky, and ineffective.

It's coming soon to an Internet near you.

On Wednesday, the EU will vote on whether the next Copyright Directive will include Article 13, which makes Content-ID-style filters mandatory for the whole Internet, and not just for the soundtracks of videos--also for the video portions, for audio, for still images, for code, even for text. Under Article 13, the services we use to communicate with one another will have to accept copyright claims from all comers, and block anything that they believe to be a match.

This measure will censor the Internet and it won't even help artists get paid.

Let's consider how a filter like this would have to work. First of all, it would have to accept bulk submissions. Disney and Universal (not to mention scientific publishers, stock art companies, real-estate brokers, etc) will not pay an army of data-entry clerks to manually enter their vast catalogues of copyrighted works, one at a time, into dozens or hundreds of platforms' filters. For these filters to have a hope of achieving their stated purpose, they will have to accept thousands of entries at once--far more than any human moderator could review.

But even if the platforms could hire, say, 20 percent of the European workforce to do nothing but review copyright database entries, this would not be acceptable to rightsholders. Not because those workers could not be trained to accurately determine what was, and was not, a legitimate claim--but because the time it would take for them to review these claims would be absolutely unacceptable to rightsholders.

It's an article of faith among rightsholders that the majority of sales take place immediately after a work is released, and that therefore infringing copies are most damaging when they're available at the same time as a new work is released (they're even more worried about pre-release leaks).

If Disney has a new blockbuster that's leaked onto the Internet the day it hits cinemas, they want to pull those copies down in seconds, not after precious days have trickled past while a human moderator plods through a queue of copyright claims from all over the Internet.

Combine these three facts:

  1. Anyone can add anything to the blacklist of copyrighted works that can't be published by Internet users;

  2. The blacklists have to accept thousands of works at once; and

  3. New entries to the blacklist have to go into effect instantaneously.

It doesn't take a technical expert to see how ripe for abuse this system is. Bad actors could use armies of bots to block millions of works at a go (for example, jerks could use bots to bombard the databases with claims of ownership over the collected works of Shakespeare, adding them to the blacklists faster than they could possibly be removed by human moderators, making it impossible to quote Shakespeare online).
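The abuse scenario combining those three facts can be made concrete with a short sketch, under the stated assumptions that submissions are bulk, unreviewed, and instantly effective (all data here is illustrative):

```python
# Sketch: why open, instantaneous bulk submission invites abuse.
# Nothing in the proposed scheme stops a bad actor from claiming
# ownership of works they don't own, including the public domain.

public_domain = [
    "To be, or not to be",
    "Friends, Romans, countrymen",
]

blocklist = set()

def bulk_claim(works):
    # Claims are accepted in bulk without human review...
    blocklist.update(works)
    # ...and take effect immediately.

def can_post(text):
    return text not in blocklist

bulk_claim(public_domain)  # a bot falsely claims Shakespeare

print(can_post("an original essay"))       # True: still allowed
print(can_post("To be, or not to be"))     # False: quoting Shakespeare is blocked
```

The point of the sketch is that the filter has no notion of who legitimately owns a work; whoever submits a claim first wins, until a human unwinds it.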

But more disturbing is targeted censorship: politicians have long abused takedown to censor embarrassing political revelations or take critics offline, as have violent cops and homophobic trolls.

These entities couldn't use Content ID to censor the whole Internet: instead, they had to manually file takedowns and chase their critics around the Internet. Content ID only works for YouTube -- plus it only allows trusted rightsholders to add works wholesale to the notice and staydown database, so petty censors are stuck committing retail copyfraud.

But under Article 13, everyone gets to play wholesale censor, and every service has to obey their demands: just sign up for a rightsholder account on a platform and start telling it what may and may not be posted. Article 13 has no teeth for stopping this from happening: and in any event, if you get kicked off the service, you can just pop up under a new identity and start again.

Some rightsholder lobbyists have admitted that there is potential for abuse here, but they insist that it will all be worth it, because it will get artists paid. Unfortunately, this is also not true.

For all that these filters are prone to overblocking and ripe for abuse, they are not very effective against someone who actually wants to defeat them.

Let's look at the most difficult-to-crack content filters in the world: the censoring filters used by the Chinese government to suppress politically sensitive materials. These filters have a much easier job than the ones European companies will have to implement: they only filter a comparatively small number of items, and they are built with effectively unlimited budgets, subsidized by the government of one of the world's largest economies, which is also home to tens of millions of skilled technical people, and anyone seeking to subvert these censorship systems is subject to relentless surveillance and risks long imprisonment and even torture for their trouble.

Those Chinese censorship systems are really, really easy to break, as researchers from the University of Toronto's Citizen Lab demonstrated in a detailed research report released a few weeks ago.

People who want to break the filters and infringe copyright will face little difficulty. The many people who want to stay on the right side of copyright law--but find themselves inadvertently on the wrong side of the filters--will find themselves in insurmountable trouble, begging for appeal from a tech giant whose help systems all dead-end in brick walls. And any attempt to tighten the filters to catch these infringers will, of course, make it more likely that they will block non-infringing content.

A system that allows both censors and infringers to run rampant while stopping legitimate discourse is bad enough, but it gets worse for artists.

Content ID cost $60,000,000 and does a tiny fraction of what the Article 13 filters must do. When operating an online platform in the EU requires a few hundred million in copyright filtering technology, the competitive landscape gets a lot more bare. Certainly, none of the smaller EU competitors to the US tech giants can afford this.

On the other hand, US tech giants can afford this (indeed, they have pioneered copyright filters as a solution, even as groups like EFF protested it), and while their first preference is definitely to escape regulation altogether, paying a few hundred million to freeze out all possible competition is a pretty good deal for them.

The big entertainment companies may be happy with a deal that sells a perpetual Internet Domination License to US tech giants for a bit of money thrown their way, but that will not translate into gains for artists. The fewer competitors there are for the publication, promotion, distribution and sale of creative works, the smaller the share will be that goes to creators.

We can do better: if the problem is monopolistic platforms (and indeed, monopolistic distributors), tackling that directly as a matter of EU competition law would stop those companies from abusing their market power to squeeze creators. Copyright filters are the opposite of antitrust, though: they will make the biggest companies much bigger, to the great detriment of all the little guys in the entertainment industry and in the market for online platforms for speech.

 

 

Won't somebody think of the musicians...

Anthea McIntyre MEP on unfair copyright


Link Here 11th September 2018
Full story: Copyright in the EU...Copyright law for Europe

I appreciate your concerns regarding the new Copyright reform proposals. However, the objective of Article 13 is to make sure authors, such as musicians, are appropriately paid for their work, and to ensure that platforms fairly share revenues which they derive from creative works on their sites with creators. I will be voting for new text which seeks to exclude small and microenterprise platforms from the scope and to introduce greater proportionality for SMEs.

In the text under discussion, if one of the main purposes of a platform is to share copyright works, and if it optimises these works and derives profit from them, the platform would need to conclude a fair licence with the rightholders, if the rightholders request this. If not, platforms will have to check for and remove specific copyright content once this is supplied by rightholders. This could include pirated films which are on platforms at the same time as they are shown at the cinema. However, if a platform's main purpose is not to share protected works, and it does not optimise copyright works nor make a profit from them, it would not be required to conclude a licence. There are exemptions for online encyclopaedias (Wikipedia), sites where rightholders have approved the uploading of their works, and software platforms, while online marketplaces (including eBay) are also out of the scope.

Closing this value gap is an essential part of the Copyright Directive, which Secretary of State Matthew Hancock supports addressing. My Conservative colleagues and I support the general policy justification behind it, which is to make sure that platforms are responsible for their sites and that authors are fairly rewarded and incentivised to create work. Content recognition will help to make sure creators, such as songwriters, can be better identified and paid fairly for their work. Nevertheless, this should not be done at the expense of users' rights. We are dedicated to striking the right balance between adequately rewarding rightholders and safeguarding users' rights. There are therefore important safeguards to protect users' rights, respect data protection, and to make sure that only proportionate measures are taken.

I will therefore be supporting the mandate to enter into trilogue negotiations tomorrow so that the Directive can become law.

[Surely one can understand that musicians are getting a bit of a rough deal from the internet giants, and one can see where McIntyre is coming from. However, it is clear that little thought has been given to how the rules will pan out in the real, profit-driven world, where the key stakeholders are doing their best for their shareholders, not the European peoples. It surely drives the west into poverty when laws are passed so freely just to do a few nice things, whilst totally ignoring the cost of destroying people's businesses and incomes].

Offsite Comment: ...And from the point of view of the internet giants

EU copyright war a shame, says big tech lobby

See article from channelnewsasia.com by Siada El Ramly, the executive director of EDiMA, the association that defends the interests of online platforms in Brussels

 

 

Conveniently forgetting that China, Russia and Saudi may demand the same...

European Court of Justice hears case in which France calls for its information bans under the 'right to be forgotten' to be implemented throughout the world.


Link Here 10th September 2018
Full story: The Right to be Forgotten...Bureaucratic censorship in the EU

ARTICLE 19 is leading a coalition of international human rights organisations, who will tell the European Court of Justice (CJEU) that the de-listing of websites under the right to be forgotten should be limited in order to protect global freedom of expression. The hearing will take place on September 11 with a judgment expected in early 2019.

The CJEU hearing in Google vs CNIL is taking place after France's highest administrative court asked for clarification in relation to the 2014 ruling in Google Spain. This judgment allows European citizens to ask search engines like Google to remove links to inadequate, irrelevant or ... excessive content -- commonly known as the right to be forgotten (RTBF). While the content itself remains online, it cannot be found through online searches of the individual's name.

The CJEU has been asked to clarify whether a court or data regulator should require a search engine to de-list websites only in the country where it has jurisdiction or across the entire world.

France's data regulator, the Commission Nationale de l'Informatique et des Libertés (CNIL), has argued that if it upholds a complaint by a French citizen, search engines such as Google should be compelled to remove links not only from google.fr but from all Google domains.

ARTICLE 19 and the coalition of intervening organisations have warned that forcing search engines to de-list information on a global basis would be disproportionate. Executive Director of ARTICLE 19, Thomas Hughes said:

This case could see the right to be forgotten threatening global free speech. European data regulators should not be allowed to decide what Internet users around the world find when they use a search engine. The CJEU must limit the scope of the right to be forgotten in order to protect the right of Internet users around the world to access information online.

ARTICLE 19 argues that rights to privacy and rights to freedom of expression must be balanced when deciding whether websites should be de-listed. Hughes added:

If European regulators can tell Google to remove all references to a website, then it will be only a matter of time before countries like China, Russia and Saudi Arabia start to do the same. The CJEU should protect freedom of expression not set a global precedent for censorship.

 

 

Comment: Counter-Terrorism and Border Security Bill not fit for purpose...

Index on Censorship is concerned about the UK's Counter-Terrorism and Border Security Bill and believes that the bill should go back to the drawing board.


Link Here 10th September 2018
Full story: Extremism in the UK...UK government introduces wide ranging ban on extremism

The bill threatens investigative journalism and academic research by making it a crime to view material online that could be helpful to a terrorist. This would deter investigative journalists from doing their work and would make academic research into terrorism difficult or impossible.

New border powers in the bill could put journalists' confidential sources at risk. The bill's border security measures would mean that journalists could be forced to answer questions or hand over material that would reveal the identity of a confidential source. These new powers could be exercised without any grounds for suspicion.

The bill also endangers freedom of expression in other ways. It would make it an offence to express an opinion in support of a proscribed (terrorist) organisation in a way that is reckless as to whether this could encourage another person to support the organisation. This would apply even if the reckless person was making the statement to one other person in a private home.

The bill would criminalise the publication of a picture or video clip of an item of clothing or for example a flag in a way that aroused suspicion that the person is a member or supporter of a terrorist organisation. This would cover, for example, someone taking a picture of themselves at home and posting it online.

Joy Hyvarinen, head of advocacy, said: The fundamentally flawed Counter-Terrorism and Border Security Bill should be sent back to the drawing board. It is not fit for purpose and it would limit freedom of expression, journalism and academic research in a way that should be completely unacceptable in a democratic country.

 

 

PortesCard...

Pornhub partners with anonymous system based on retailers verifying ages without recording ID


Link Here 8th September 2018
Full story: BBFC Internet Porn Censors...BBFC designated as the UK internet porn censor
Pornhub's age verification system AgeID has announced an exclusive partnership with OCL and its Portes solution, which provides anonymous face-to-face age verification in which retailers OK the age of customers who buy a card enabling porn access. The similar AVSecure scheme allows over-25s to buy the access card without showing any ID, but may require unrecorded ID from those appearing to be under 25.

According to the company, the PortesCard is available to purchase from selected high street retailers and any of the U.K.'s 29,000 PayPoint outlets as a voucher. Each PortesCard will cost £4.99 for use on a single device, or £8.99 for use across multiple devices. This compares with £10 for the AVSecure card.

Once a card or voucher is purchased, its unique validation code must be activated via the Portes app within 24 hours, or it expires. Once the user has been verified they will automatically be granted access to all adult sites using AgeID. Maybe this 24-hour limit is something to do with an attempt to restrict secondary sales of porn access codes by ensuring that they get tied to devices almost immediately. It all sounds a little hasslesome.
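The reported 24-hour activation window amounts to a simple expiry check. This is a minimal sketch under the assumption (hypothetical, not from Portes documentation) that each voucher records its purchase time and the app compares it against the current time:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the 24-hour voucher activation window.
ACTIVATION_WINDOW = timedelta(hours=24)

def can_activate(purchased_at: datetime, now: datetime) -> bool:
    # A validation code redeemed more than 24 hours after
    # purchase is treated as expired.
    return now - purchased_at <= ACTIVATION_WINDOW

bought = datetime(2018, 9, 8, 12, 0)
print(can_activate(bought, bought + timedelta(hours=23)))  # True: within window
print(can_activate(bought, bought + timedelta(hours=25)))  # False: expired
```

Whatever the actual implementation, the effect is the same: a code that is not promptly tied to a device becomes worthless, which would indeed frustrate resale.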

As an additional layer of protection, parents can quickly and simply block access on their children's devices to sites using Portes, so PortesCards cannot be associated with AgeID.

But note that an anonymously bought card is not quite a 100% safe solution. One has to consider whether, if the authorities get hold of a device, they can then see a complete history of all websites accessed using the app or access code. One also has to consider whether someone can remotely correlate an 'anonymous' access code with all the tracking cookies holding one's identity.

 

 

Updated: An outright ban...

Twitter permanently censors Alex Jones of Info Wars


Link Here 8th September 2018
Full story: Twitter Censorship...Twitter offers country by country take downs
The radio host and colourful conspiracy theorist Alex Jones has been permanently censored by Twitter.

One month after it distinguished itself from the rest of the tech industry by declining to bar the rightwing shock jock from its platform, Twitter fell in line with the other major social networks in banning Jones.

Twitter justified the censorship saying:

We took this action based on new reports of Tweets and videos posted yesterday that violate our abusive behavior policy, in addition to the accounts' past violations. We will continue to evaluate reports we receive regarding other accounts potentially associated with @realalexjones or @infowars and will take action if content that violates our rules is reported or if other accounts are utilized in an attempt to circumvent their ban.

Update: Apple censors the Infowars app

8th September 2018. See  article from theverge.com

Alex Jones' Infowars app has been permanently banned from Apple's App Store.

Apple confirmed the removal to Buzzfeed by citing the App Store guidelines, which forbid content that is offensive, insensitive, upsetting, intended to disgust, or in exceptionally poor taste.

 

 

Thinking up a thought crime...

The government is manoeuvring on its proposals to criminalise internet access to terrorism-related content


Link Here 7th September 2018
Full story: Extremism in the UK...UK government introduces wide ranging ban on extremism

The government is amending its Counter-Terrorism and Border Security Bill with regards to criminalising accessing terrorism related content on the internet.

MPs, peers and the United Nations have already raised human rights concerns over pre-existing measures in the Counter-Terrorism and Border Security Bill, which proposed to make accessing propaganda online on three or more different occasions a criminal offence.

The Joint Human Rights Committee found the wording of the law vague and told the government it violated Article 10 of the European Convention on Human Rights (ECHR). The committee concluded in July:

This clause may capture academic and journalistic research as well as those with inquisitive or even foolish minds.

The viewing of material without any associated intentional or reckless harm is, in our view, an unjustified interference with the right to receive information...unless amended, this implementation of this clause would clearly risk breaching Article 10 of the ECHR and unjustly criminalising the conduct of those with no links to terrorism.

The committee called for officials to narrow the new criminal offence so it requires terrorist intent and defines how people can legally view terrorist material.  

The United Nations Special Rapporteur on the right to privacy also chipped in, accusing the British government of straying towards thought crime with the law.

In response, the government scrapped the three clicks rule entirely and broadened the concept of viewing to make the draft law read:

A person commits an offence if...the person views or otherwise accesses by means of the internet a document or record containing information of that kind.

It also added a clause saying a reasonable excuse includes:

Having no reason to believe that the document or record in question contained, or was likely to contain, information of a kind likely to be useful to a person committing or preparing an act of terrorism.

 

 

Offsite Article: Showdown on censorship machines and the link tax...


Link Here 7th September 2018
Full story: Copyright in the EU...Copyright law for Europe
These are the options in front of MEPs on September 12

See article from juliareda.eu

 
