EU lobby group proposes to censor 'disinformation' via ICANN's powers held over worldwide domain name controls

10th April 2024

See article from reclaimthenet.org
EU DisinfoLab, a censorship lobby group that regularly makes policy recommendations to the EU and its member states, is now pushing for a security structure created by ICANN (the Internet Corporation for Assigned Names and Numbers) to be used to censor what it deems disinformation. Directly using ICANN in this way would be highly controversial. Given its importance to internet infrastructure -- ICANN manages domain names globally -- and the fact that content control is not among its tasks (DisinfoLab says ICANN refuses to do it), this would represent a huge departure from the organization's role as we understand it today. DisinfoLab nevertheless proposes repurposing the structure ICANN created for handling legitimate security threats to police the internet for content that somebody decides to treat as disinformation. It would require a minimal amount of diligence and cooperation from registries, a blog post said, to accept ICANN-style reports and revoke a site's domain name.
New EU internet censorship laws have come into force for the largest social media giants

25th August 2023

See article from bbc.co.uk
About 20 internet giants now have to comply with new EU internet censorship rules. Under the EU Digital Services Act (DSA), rule-breakers can face fines of up to 6% of turnover and potentially suspension of the service. The EU Commission has named the very large online platforms that will form the first tranche of internet companies subjected to the new censorship regime. Those are sites with over 45 million EU users: Alibaba AliExpress, Amazon Store, the Apple App Store, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, X (formerly Twitter), Wikipedia, YouTube and Zalando. The search engines Google and Bing will also be subject to the rules. These websites will now have to assess the potential risks they may cause, report that assessment and put in place measures to deal with the problem. This includes risks related to:
- illegal content
- rights, such as freedom of expression, media freedom, discrimination, consumer protection and children's rights
- public security and threats to electoral processes
- gender-based violence, public health 'wrongthink', age restrictions, and mental and physical 'wellbeing'
Targeted advertising based on profiling children is no longer permitted. The platforms must also share with regulators details of how their algorithms work. This could include those which decide what adverts users see, or which posts appear in their feed. And they are required to have systems for sharing data with independent researchers. Although the law is targeted at the EU, some of the companies have already made changes that will also affect users in the UK.
- Starting in July, TikTok stopped showing users in Europe aged 13-17 personalised advertising based on their online activity.
- Since February, Meta apps including Facebook and Instagram have stopped showing users aged 13-17 worldwide advertising based on their activity in the apps.
- In Europe Facebook and Instagram gave users the option to view Stories and Reels only from people they follow, ranked in chronological order.
- In the UK and Europe Snapchat is also
restricting personalised ads for users aged 13-17. It is also creating a library of adverts shown in the EU.
Retailers Zalando and Amazon have mounted legal action to contest their designation as very large online platforms. Amazon argues that it is not the largest retailer in any of the EU countries where it operates. Smaller tech services will be brought under the new censorship regime next year.
New EU internet censorship laws look likely to block or restrict Google Search from linking to adult websites

28th April 2023

See article from xbiz.com
The European Commission has officially identified 19 major platforms and search engines to be targeted for compliance under its new internet censorship law, the Digital Services Act (DSA). Under the new rules, Very Large providers will be
required to assess and mitigate the risk of 'misuse' of their services and the measures taken must be proportionate to that risk and subject to robust conditions and safeguards. The EU Commission officially designated 17 Very Large Online
Platforms (VLOPs) and two Very Large Online Search Engines (VLOSEs), each of which, according to the EC, reaches at least 45 million monthly active users. The VLOPs are: Alibaba AliExpress, Amazon Store, Apple AppStore, Booking.com, Facebook, Google
Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube and German retailer Zalando. The two VLOSEs are Bing and Google Search. Following their designation, an EC statement explained,
these companies will now have to comply, within four months, with the full set of new censorship rules under the DSA. Under the subheading Strong protection of minors, the EC listed the following directives:
- Platforms will have to redesign their systems to ensure a high level of privacy, security, and safety of minors;
- Targeted advertising based on profiling towards children is no longer permitted;
- Special risk assessments including for
negative effects on mental health will have to be provided to the Commission four months after designation and made public at the latest a year later;
- Platforms will have to redesign their services, including their interfaces, recommender
systems, terms and conditions, to mitigate these risks.
According to industry attorney Corey Silverstein of Silverstein Legal, the impact of the new designations and consequent obligations could be substantial because many of the platforms that have been designated as VLOPs and VLOSEs are frequently utilized
by the adult entertainment industry. Assuming these platforms decide to comply with the DSA, Silverstein told XBIZ, there may be major changes coming to what these platforms allow on their services within the EU. This could end up leading to
major content moderation and outright blocking of adult content in the EU, including the blocking of websites that display adult entertainment from being listed in search results. It is also noted that as the larger adult platforms continue to grow,
some may pass the EC's benchmark of having 45 million monthly active users, and therefore face the potential for future designation under the DSA, which could have an even more direct impact on their users and creators.
The European Parliament ratifies the latest EU internet censorship law

6th July 2022

See article from techxplore.com
The European Parliament has ratified the latest laws extending internet censorship in the EU. MEPs approved the final versions of the Digital Markets Act, focused on ending monopolistic practices of tech giants, and the Digital Services Act (DSA), which toughens scrutiny of, and the consequences for, platforms when they host banned content. The DSA will target a wide range of internet actors and aims to ensure real consequences for companies that fail to censor supposed hate speech, information the authorities don't like, and child sexual abuse images. Danish MEP Christel Schaldemose commented: The digital world has developed a bit like a western movie, there were no real rules of the game, but now there is a new sheriff in town. The DSA passed easily with 539 votes in favor, 54 against and 30 abstentions. Both laws now require final approval by the EU's 27 member states, which should be a formality. Now the big question is over enforcement, with worries that the European Commission lacks the means to give sharp teeth to its new powers.
4th May 2022

The Digital Services Act will be the envy of autocrats the world over. By Andrew Tettenborn. See article from spiked-online.com
Council of Europe calls for porn blocking software to be installed on all personal devices
27th April 2022

See article from xbiz.com
The Council of Europe is the Europe-wide (beyond the EU) organisation most well known for running the European Court of Human Rights. Now the Parliamentary Assembly of the Council of Europe issued a resolution urging European nations to mandate online
filters for pornographic materials on all devices, to be systematically activated in public spaces, such as schools, libraries and youth clubs. The parliamentarians expressed deep concern at: The unprecedented
exposure of children to pornographic imagery, which is detrimental to their psychological and physical development. This exposure brings increased risks of harmful gender stereotyping, addiction to pornography and early, unhealthy sex.
The parliamentarians didn't offer any definitions of what they consider unhealthy sex or pornographic materials, nor did they explain how these mandatory filters would be coded and by whom. The Council's statement invites member states to examine the existing means and provisions to combat children's exposure to pornographic content and address the gaps in relevant legislation and practice with a view to better protecting children. It calls for relevant legislation to ensure that both dedicated websites hosting adult content, and mainstream and social media which include adult content, are obliged to use age verification tools. The resolution also advocates the introduction of an alert button or similar solutions for children to report accidental access to pornographic content, and envisages follow-up actions, such as warnings or penalties for relevant websites.
Don't hold ordinary social media users responsible for other users' responses

27th April 2022

See CC article from eff.org
Courts and legislatures around the globe are hotly debating to what degree online intermediaries--the chain of entities that facilitate or support speech on the internet--are liable for the content they help publish. One thing they should not be
doing is holding social media users legally responsible for comments posted by others to their social media feeds, EFF and Media Defence told the European Court of Human Rights (ECtHR). Before the court is the case Sanchez v.
France, in which a politician argued that his right to freedom of expression was violated when he was subjected to a criminal fine for not promptly deleting hateful comments posted on the "wall" of his Facebook account by others. The
ECtHR's Chamber, a judicial body that hears most of its cases, found there was no violation of freedom of expression, extending its rules for online intermediaries to social media users. The politician is seeking review of this decision by ECtHR's Grand
Chamber, which only hears its most serious cases. EFF and Media Defence, in an amicus brief submitted to the Grand Chamber, asked it to revisit the Chamber's expansive interpretation of how intermediary liability rules should
apply to social media users. Imposing liability on them for third-party content will discourage social media users, especially journalists, human rights defenders, civil society actors, and political figures, from using social media platforms, as they
are often targeted by governments seeking to suppress speech. Subjecting these users to liability would make them vulnerable to coordinated attacks on their sites and pages meant to trigger liability and removal of speech, we told the court.
Further, ECtHR's current case law does not support and should not apply to social media users who act as intermediaries, we said. The ECtHR laid out its intermediary liability rules in Delfi A.S. v. Estonia, which concerned
the failure of a commercial news media organization to monitor and promptly delete "clearly unlawful" comments online. The ECtHR rules consider whether the third-party commenters can be identified, and whether they have any control over their
comments once they submit them. In stark contrast, Sanchez concerns the liability of an individual internet user engaged in non-commercial activity. The politician was charged with incitement to hatred or violence against a
group of people or an individual on account of their religion based on comments others posted on his Facebook wall. The people who posted the comments were convicted of the same criminal offence, and one of them later deleted the allegedly unlawful
comments. What's more, the decision about what online content is "clearly unlawful" is not always straightforward, and generally courts are best placed to assess the lawfulness of the online content. While social media
users may be held responsible for failing or refusing to comply with a court order compelling them to remove or block information, they should not be required to monitor content on their accounts to avoid liability, nor should they be held liable simply
when they get notified of allegedly unlawful speech on their social media feeds by any method other than a court order. Imposing liability on an individual user, without a court order, to remove the allegedly unlawful content in question will be
disproportionate, we argued. Finally, the Grand Chamber should decide whether imposing criminal liability for third party content violates the right to freedom of expression, given the peculiar circumstances in this case. Both the
applicant and the commenters were convicted of the same offence a decade ago. EFF and Media Defence asked the Grand Chamber to assess the quality of the decades-old laws--one dating back to 1881--under which the politician was convicted, saying criminal
laws should be adapted to meet new circumstances, but these changes must be precise and unambiguous to enable someone to foresee what conduct would violate the law. Subjecting social media users to criminal responsibility for
third-party content will lead to over-censorship and prior restraint. The Grand Chamber should limit online intermediary liability, and not chill social media users' right to free expression and access to information online. You can read our
amicus brief here: https://www.eff.org/document/sanchez-v-france-eff-media-defence-ecthr-brief
The EU is moving towards the conclusion of its new internet censorship law, the Digital Services Act

22nd April 2022

See article from nytimes.com
The European Union is nearing a conclusion to internet censorship legislation that would force Facebook, YouTube and other internet services to censor 'misinformation', disclose how their algorithms work, and stop targeting online ads based on a person's ethnicity, religion or sexual orientation. The law, called the Digital Services Act, is intended to force social media platforms to more aggressively police content deemed unacceptable, or risk billions of dollars in fines. Tech companies would be compelled to set up new policies and procedures to remove flagged hate speech, terrorist propaganda and other material defined as illegal by countries within the European Union. The Digital Services Act is part of a one-two punch by the European
Union to address the societal and economic effects of the tech giants. Last month, the 27-nation bloc agreed to a different sweeping law, the Digital Markets Act, to counter what regulators see as anticompetitive behavior by the biggest tech firms,
including their grip over app stores, online advertising and internet shopping. The new law is ambitious but the EU is noted for producing crap legislation that doesn't work in practice. Lack of enforcement of the European Union's data privacy law,
the General Data Protection Regulation, or G.D.P.R., has cast a shadow over the new law. Like the Digital Services Act and Digital Markets Act, G.D.P.R. was hailed as landmark legislation. But since it took effect in 2018, there has been little action
against Facebook, Google and others over their data-collection practices. Many have sidestepped the rules by bombarding users with consent windows on their websites.
The EU demands that internet platforms take down terrorist content within an hour of being told

2nd May 2021

See article from europarl.europa.eu
The EU has dreamed up another impossible-to-comply-with piece of internet legislation that places onerous, if not impossible, requirements on small internet businesses, which will have to relocate user forums and the like onto the platforms of the US internet giants that are able to deal with the ludicrously short timescales demanded by the EU. The EU describes its latest attack on business in a press release:

A new law to address the dissemination of terrorist
content online was approved by the EU Parliament: The new regulation will target content such as texts, images, sound recordings or videos, including live transmissions, that incite, solicit or contribute to terrorist offences,
provide instructions for such offences or solicit people to participate in a terrorist group. In line with the definitions of offences included in the Directive on combating terrorism, it will also cover material that provides guidance on how to make
and use explosives, firearms and other weapons for terrorist purposes.

Terrorist content must be removed within one hour

Hosting service providers will have to remove or disable access to flagged
terrorist content in all member states within one hour of receiving a removal order from the competent authority. Member states will adopt rules on penalties, the degree of which will take into account the nature of the breach and the size of company
responsible.

Protection of educational, artistic, research and journalistic material

Content uploaded for educational, journalistic, artistic or research purposes, or used for awareness-raising purposes, will not be considered terrorist content under these new rules.

No general obligation to monitor or filter content

Internet platforms will not have a general obligation to monitor or filter
content. However, when competent national authorities have established a hosting service provider is exposed to terrorist content, the company will have to take specific measures to prevent its propagation. It will then be up to the service provider to
decide what specific measures to take to prevent this from happening, and there will be no obligation to use automated tools. Companies should publish annual transparency reports on what action they have taken to stop the dissemination of terrorist
content.

Next steps

The Regulation will enter into force on the twentieth day following publication in the Official Journal. It will start applying 12 months after its entry into force.
Aggressive new EU terrorism internet censorship law will require onerous and expensive self-censorship by all websites

18th April 2021

See article from laquadrature.net
An upcoming European law uses the pretext of fighting terrorism to silence the whole Internet

In September 2018, under French and German influence, the European Commission put forward a proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online. The text was adopted in December 2018 by the EU Council and adopted (with some changes) by the EU Parliament in April 2019. After negotiations in trilogue (between the three institutions), this text is now back in the Parliament for a final vote. This new regulation will force every actor of the Web's ecosystem (video or blog platforms, online media, small forums or large social networks) to block in under an hour any content reported as "terrorist" by the police (without a judge's prior authorisation), and therefore to be on call 24/7. Though some "exceptions" have been provided in the text, they are purely hypothetical and will not protect our freedoms in practice:
The one-hour deadline is unrealistic and only the big economic platforms will be capable of complying with such strict obligations. Under the threat of heavy fines, and because most of them will not be able to comply with the removal orders in time, it will force Web actors to proactively censor any potentially illegal content upstream, using automated tools of mass surveillance developed by Google and Facebook. Such a power given to the police can easily lead to
the censorship of political opponents and social movements. The text allows an authority from any Member State to order removal in another Member State. Such cross-border removal orders are not only unrealistic but can only
worsen the danger of mass political censorship.
The European Parliament must reject this text
Free Speech Coalition Europe petitions the EU about considering the rights of sex workers in upcoming internet censorship laws

29th March 2021

See petition from change.org
The Free Speech Coalition Europe is a group representing the adult trade. It has organised a petition to The Members of the European Parliament of the IMCO, JURI and LIBE Committees on the subject of how new EU internet censorship laws will impact sex
workers. The petition reads:

10 Steps to a Safer Digital Space that Protects the Rights of Sexuality Professionals, Artists and Educators

"Online platforms have become integral parts of our daily
lives, economies, societies and democracies." Not our words but those of the European Commission. And after more than a year in the grips of a global pandemic, this statement rings truer than ever before. So why are some of
society's already most marginalised people being excluded from these necessary spaces?

Sexual Expression is Being Banned Online

Sex in almost all its guises is being repressed in the public online
sphere and on social media like never before. Accounts focused on sexuality -- from sexuality professionals, adult performers and sex workers to artists, activists and LGBTIQ folks, publications and organisations -- are being deleted without warning or
explanation and with little regulation by private companies that are currently able to enforce discriminatory changes to their terms and conditions without explanation or accountability to those affected by these changes. Additionally, in many cases it
is impossible for the users to have their accounts reinstated -- accounts that are often vitally linked to the users' ability to generate income, network, organise and share information.

Unpacking the Digital Services Act (DSA)

At the same time as sexual expression is being erased from digital spaces, new legislation is being passed in the European Union to safeguard internet users' online rights. The European Commission's Digital Services Act and Digital Markets Act encompass upgraded rules governing digital services with their focus, in part, on building a safer and more open digital space. These rules will apply to online intermediary services used by millions every day, including major platforms such as Facebook, Instagram and Twitter. Amongst other things, they advocate for greater transparency from platforms, better-protected consumers and empowered users.
With the DSA promising to "shape Europe's digital future" and "to create a safer digital space in which the fundamental rights of all users of digital services are protected", it's time to demand that it's a
future that includes those working, creating, organising and educating in the realm of sexuality. As we consider what a safer digital space can and should look like, it's also time to challenge the pervasive and frankly puritanical notion that sexuality
-- a normal and healthy part of our lives -- is somehow harmful, shameful or hateful.

How the DSA Can Get It Right

The DSA is advocating for "effective safeguards for users, including the
possibility to challenge platforms' content moderation decisions". In addition to this, the Free Speech Coalition Europe demands the following:
Platforms need to put in place anti-discrimination policies and train their content moderators so as to avoid discrimination on the basis of gender, sexual orientation, race, or profession -- the same community guidelines need to
apply as much to an A-list celebrity or mainstream media outlet as they do to a stripper or queer collective; Platforms must provide the reason to the user when a post is deleted or account is restricted or deleted.
Shadowbanning is an underhanded means of suppressing users' voices. Users should have the right to be informed when they are shadowbanned and to challenge the decision; Platforms must allow the user to request a revision of a content moderation decision; platforms must ensure moderation actions take place in the user's location, rather than in arbitrary jurisdictions which may have different laws or customs; e.g., a user in Germany cannot be banned by reports and moderation in the Middle East, and must be reviewed by the European moderation team; Decision-making on notices of reported content as specified in Article 14 of the DSA should not be handled by automated software, as these
have proven to delete content indiscriminately. A human should make the final judgement. The notice of content as described in Article 14.2 of the DSA should not immediately hold a platform liable for the content as stated in Article 14.3, since such liability will entice platforms to delete indiscriminately after a report in order to avoid liability, which enables organized hate groups to mass-report and take down users; Platforms must provide for
a department (or, at the very least, a dedicated contact person) within the company for complaints regarding discrimination or censorship; Platforms must provide a means to indicate whether you are over the age of 18 as well
as providing a means for adults to hide their profiles and content from children (e.g. marking profiles as 18+); Platforms must give the option to mark certain content as "sensitive"; Platforms must not reduce the
features available to those who mark themselves as adult or adult-oriented (i.e. those who have marked their profiles as 18+ or content as "sensitive"). These profiles should then appear as 18+ or "sensitive" when accessed without a
login or without set age, but should not be excluded from search results or appear as "non-existing"; Platforms must set clear, consistent and transparent guidelines about what content is acceptable, however, these
guidelines cannot outright ban users focused on adult themes; e.g., you could ban highly explicit pornography (e.g., sexual intercourse videos that show penetration), but you'd still be able to post an edited video that doesn't show penetration; Platforms cannot outright ban content intended for adult audiences, unless a platform is specifically for children, or more than 50% of its active users are children.
The EU Commission president introduces the next round of internet censorship law

2nd December 2020

See article from bbc.co.uk
Ursula von der Leyen, president of the European Commission, has introduced a new swathe of internet regulation. She said the commission would be rewriting the rulebook for our digital market with stricter rules for online content, from selling unsafe
products to posting 'hate speech'. Von der Leyen told the online Web Summit: No-one expects all digital platforms to check all the user content that they host. This would be a threat to everyone's freedom to speak
their mind. ...But... if illegal content is notified by the competent national authorities, it must be taken down.

More pressure

The Digital Services Act will replace the EU's 2000 e-commerce directive. Due to come into
force on Wednesday, 2 December, it has now been delayed until next week. Likely to put more pressure on social-media platforms to take down and block unlawful content more quickly, the new rules will almost certainly be contested by companies such as Google and Facebook, which now face far stricter censorship both in Europe and the US, following claims about the supposed spread of 'fake news' and 'hate speech'.
The EU is pushing for an agreement by Christmas for a new rapid internet take down law applying to terrorist content

15th November 2020

See article from bbc.co.uk
EU ministers are discussing a new censorship law this year obliging internet firms to remove what is deemed to be extremist propaganda within an hour of it being reported. The EU has been discussing such a regulation for more than a year, but the
recent terror attacks in France and Austria have given it new urgency. Interior ministers said the text must be agreed soon with the EU Commission and European Parliament. Both German Interior Minister Horst Seehofer and EU Home Affairs
Commissioner Ylva Johansson called for an agreement by Christmas on the new regulation on terrorist content online (TCO).
The EU's next round of strangulation of European internet businesses via red tape and censorship

23rd October 2020

See Creative Commons article from eff.org by Christoph Schmon
The European Union has made the first step towards a significant overhaul of its core platform regulation, the e-Commerce Directive.
In order to inspire the European Commission, which is currently preparing a proposal for a Digital Services Act Package, the EU Parliament has voted on three related Reports (IMCO, JURI, and LIBE), which address the legal responsibilities of platforms regarding user content, include measures to keep users safe online, and set out special rules for very large platforms that dominate users' lives.

Clear EFF's Footprint

Ahead of the votes, together with our allies, we argued to preserve what works for a free Internet and innovation, such as retaining the E-Commerce Directive's approach of limiting platforms' liability over user content and banning Member States from imposing obligations to track and monitor users' content. We also stressed that it is time to fix what is broken: to imagine a version of the Internet where users have a right to remain anonymous, enjoy substantial procedural rights in the context of content moderation, can have more control over how they interact with content, and have a true choice over the services they use through interoperability obligations.
It's a great first step in the right direction that all three EU Parliament reports have considered EFF suggestions. There is an overall agreement that platform intermediaries have a pivotal role to play in ensuring the
availability of content and the development of the Internet. Platforms should not be held responsible for ideas, images, videos, or speech that users post or share online. They should not be forced to monitor and censor users' content and
communication--for example, using upload filters. The Reports also make a strong call to preserve users' privacy online and to address the problem of targeted advertising. Another important aspect of what made the E-Commerce Directive a success is the "country of origin" principle. It states that within the European Union, companies must adhere to the law of their domicile rather than that of the recipient of the service. There is no appetite from the side of the Parliament to change this
principle. Even better, the reports echo EFF's call to stop ignoring the walled gardens big platforms have become. Large Internet companies should no longer nudge users to stay on a platform that disregards their privacy or
jeopardizes their security, but enable users to communicate with friends across platform boundaries. Unfair trading, preferential display of platforms' own downstream services and transparency of how users' data are collected and shared: the EU
Parliament seeks to tackle these and other issues that have become the new "normal" for users when browsing the Internet and communicating with their friends. The reports also echo EFF's concerns about automated content moderation, which is
incapable of understanding context. In the future, users should receive meaningful information about algorithmic decision-making and learn if terms of service change. Also, the EU Parliament supports procedural justice for users who see their content
removed or their accounts disabled. Concerns Remain The focus on fundamental rights protection and user control is a good starting point for the ongoing reform of Internet legislation in Europe.
However, there are also a number of pitfalls and risks. There is a suggestion that platforms should report illegal content to enforcement authorities and there are open questions about public electronic identity systems. Also, the general focus on
consumer shopping issues, such as liability provision for online marketplaces, may clash with digital rights principles: the Commission itself acknowledged in a recent internal document that "speech can also be reflected in goods, such as books,
clothing items or symbols, and restrictive measures on the sale of such artefacts can affect freedom of expression." Then, the general idea to also include digital services providers established outside the EU could turn out to be a problem to the
extent that platforms are held responsible to remove illegal content. Recent cases ( Glawischnig-Piesczek v Facebook ) have demonstrated the perils of worldwide content takedown orders. It's Your Turn Now @EU_Commission
The EU Commission is expected to present a legislative package on 2 December. During the public consultation process, we urged the Commission to protect freedom of expression and to give control to users rather than the big platforms.
We are hopeful that the EU will work on a free and interoperable Internet and not follow the footsteps of harmful Internet bills such as the German law NetzDG or the French Avia Bill, which EFF helped to strike down . It's time to make it right. To
preserve what works and to fix what is broken.
|
|
EU arms up against US internet giants
|
|
|
| 12th October 2020
|
|
| See article from politico.eu
|
The European Commission is beefing up its weapons to take on Big Tech. Under Commission Executive Vice President Margrethe Vestager, the commission is planning to merge two major legislative initiatives on competition into a single text. One is
the so-called New Competition Tool, a market investigation tool that would allow competition enforcers to act more swiftly and forcefully. The other is a part of the Digital Services Act , a new set of rules due to be unveiled in December for companies
like Google, Apple and Amazon. Combined, the new powers would be known as the Digital Markets Act. The act will include a list of do's and don'ts for so-called gatekeeping platforms -- or those who are indispensable for other companies to reach
consumers online -- to curb what it sees as anti-competitive behavior. |
|
EU plans for extending censorship laws to US messaging services falters
|
|
|
| 26th
November 2019
|
|
| See
article from reuters.com
|
The European Commission is struggling to agree how to extend internet censorship and control to US messaging apps such as Facebook's WhatsApp and Microsoft's Skype. These services are run from the US and it is not so easy for European police to obtain,
say, tracking or user information as it is for more traditional telecoms services. The Commission has been angling towards applying the rules controlling national telecoms companies to these US 'OTT' messaging services. Extended ePrivacy regulation
was the chosen vehicle for new censorship laws. But now it is reported that the EU countries have yet to find agreement on such issues as tracking users' online activities, provisions on detecting and deleting child pornography and of course how
to further the EU's silly game of trying to see how many times a day EU internet users are willing to click consent boxes without reading reams of terms and conditions. EU ambassadors meeting in Brussels on Friday again reached an impasse, EU
officials said. Tech companies and some EU countries have criticized the ePrivacy proposal for being too restrictive, putting them at loggerheads with privacy activists who back the plan. No doubt the censorship plans will be resuming soon.
|
|
|
|
|
|
9th November 2019
|
|
|
Two recent ECJ rulings have serious global consequences for internet freedom. By Andrew Tettenborn See article from
spiked-online.com |
|
EU judges make up more internet censorship law without reference to practicality, definitions and consideration of consequences
|
|
|
| 4th October 2019
|
|
| See article from bbc.com |
Facebook and other social media can be ordered to censor posts worldwide after a ruling from the EU's highest court. Platforms may also have to seek out similar examples of the illegal content and remove them, instead of waiting for each to be
reported. Facebook said the judgement raised critical questions around freedom of expression. What was the case about? The case stemmed from an insulting comment posted on Facebook about Austrian politician Eva Glawischnig-Piesczek, which
the country's courts claimed damaged her reputation. Under EU law, Facebook and other platforms are not held responsible for illegal content posted by users, until they have been made aware of it - at which point, they must remove it quickly. But
it was unclear whether an EU directive, saying platforms cannot be made to monitor all posts or actively seek out illegal activity, could be overridden by a court order. Austria's Supreme Court asked Europe's highest court to clarify this. The EU
court duly obliged and ruled:
- If an EU country finds a post illegal in its courts, it can order websites and apps to take down identical copies of the post
- Platforms can be ordered to take down equivalent versions of an illegal post, if the message conveyed is
essentially unchanged
- Platforms can be ordered to take down illegal posts worldwide, if there is a relevant international law or treaty
Facebook has said countries would have to set out very clear definitions on what 'identical' and 'equivalent' mean in practice. It said the ruling undermines the long-standing principle that one country does not have the right to impose its
laws on speech on another country. Facebook is unable to appeal against this ruling. |
|
|
|
|
| 23rd August 2019
|
|
|
Top EU Court is to Decide on case threatening safe harbour protections underpinning the legality of European websites hosting user content See
article from torrentfreak.com |
|
|
|
|
| 20th
August 2019
|
|
|
EU planning to grab total control of internet regulations. By David Spence See article from vpncompare.co.uk |
|
|
|
|
| 17th August 2019
|
|
|
Soon online speech will be regulated by Brussels. By Andrew Tettenborn See article from spiked-online.com
|
|
The EU seeks to extend and centralise internet censorship across the EU
|
|
|
|
19th July 2019
|
|
| See article from reclaimthenet.org See
EU internet censorship document [pdf] from cdn.netzpolitik.org |
According to a leaked EU internet censorship document obtained by Netzpolitik, a German blog, the European
Commission (EC) is now preparing a new Digital Services Act to unify and extend internet censorship across the EU. The proposals are partially to address eCommerce controls required to keep up with technological changes, but it also addresses more
traditional censorship to control 'fake news', political ideas it does not like, and 'hate speech'. The new rules cover a wider remit of internet companies covering all digital services, and that means anything from ISPs, cloud hosting, social
media, search engines, ad services, to collaborative economy services (Uber, AirBnB etc). The censorship regime envisaged does not quite extend to a general obligation for companies to censor everything being uploaded, but it goes way beyond
current censorship processes. Much of the report is about unifying the rules for takedown of content. The paper takes some of the ideas from the UK Online Harms whitepaper and sees requirements to extend censorship from illegal content to
legal-but-harmful content. The authors perceive that unifying censorship rules for all EU countries as some sort of simplification for EU companies, but as always ever more rules just advantages the biggest companies, which are unfortunately for
the EU, American. Eg requiring AI filtering of content is a technology very much in the control of the richest and most advanced companies, ie the likes of Google. Actually the EU paper does acknowledge that EU policies have in the past advantaged
US companies. The paper also notes unease at the way that European censorship decisions, eg the right to be forgotten, have become something implemented by the American giants. |
|
European Court of Justice moves towards a position requiring the international internet to follow EU censorship rulings
|
|
|
| 8th June 2019
|
|
| 6th June 2019. See
article from techdirt.com |
TechDirt comments: The idea of an open global internet keeps taking a beating -- and the worst offender is not, say, China or Russia, but rather the EU. We've already discussed things like the EU Copyright Directive and the Terrorist Content
Regulation , but it seems like every day there's something new and more ridiculous -- and the latest may be coming from the Court of Justice of the EU (CJEU). The CJEU's Advocate General has issued a recommendation (but not the final verdict) in a new
case that would be hugely problematic for the idea of a global open internet that isn't weighted down with censorship. The case at hand involved someone on Facebook posting a link to an article about an Austrian politician, Eva
Glawischnig-Piesczek, accusing her of being a lousy traitor of the people, a corrupt oaf and a member of a fascist party. An Austrian court ordered Facebook to remove the content, which it complied with by removing access to anyone in Austria. The
original demand was also that Facebook be required to prevent equivalent content from appearing as well. On appeal, a court denied Facebook's request that it only had to comply in Austria, and also said that such equivalent content could only be limited
to cases where someone then alerted Facebook to the equivalent content being posted (and, thus, not a general monitoring requirement). The case was then escalated to the CJEU and then, basically, everything goes off the rails. See
detailed legal findings discussed by techdirt.com
Offsite Comment: Showing how Little the EU Understands About the Web
8th June 2019. See article from forbes.com by
Kalev Leetaru As governments around the world seek greater influence over the Web, the European Union has emerged as a model of legislative intervention, with efforts from GDPR to the Right to be Forgotten to new efforts to
allow EU lawmakers to censor international criticism of themselves. GDPR has backfired spectacularly, stripping away the EU's previous privacy protections and largely exempting the most dangerous and privacy-invading activities it was touted to address.
Yet it is the EU's efforts to project its censorship powers globally that present the greatest risk to the future of the Web and demonstrate just how little the EU actually understands about how the internet works. |
|
European Parliament removes requirement for internet companies to pre-censor user posts for terrorist content but approves a one hour deadline for content removal when asked by national authorities
|
|
|
| 18th April 2019
|
|
| See article from bbc.com |
The European Parliament has approved a draft version of new EU internet censorship law targeting terrorist content. In particular the MEPs approved the imposition of a one-hour deadline to remove content marked for censorship by various national
organisations. However the MEPs did not approve a key section of the law requiring internet companies to pre-process and censor terrorist content prior to upload. A European Commission official told the BBC changes made to the text by parliament
made the law ineffective. The Commission will now try to restore the pre-censorship requirement with the new parliament when it is elected. The law would affect social media platforms including Facebook, Twitter and YouTube, which could face fines
of up to 4% of their annual global turnover. What does the law say? In amendments, the European Parliament said websites would not be forced to monitor the information they transmit or store, nor have to actively seek facts indicating illegal
activity. It said the competent authority should give the website information on the procedures and deadlines 12 hours before the agreed one-hour deadline the first time an order is issued. In February, German MEP Julia Reda of the European
Pirate Party said the legislation risked the surrender of our fundamental freedoms [and] undermines our liberal democracy. Ms Reda welcomed the changes brought by the European Parliament but said the one-hour deadline was unworkable for platforms run by
individual or small providers. |
|
EU Agencies Falsely Report More Than 550 Archive.org URLs as Terrorist Content
|
|
|
| 11th April 2019
|
|
| See article from
blog.archive.org |
The European Parliament is set to vote on legislation that would require websites that host user-generated content to take down material reported as terrorist content within one hour. We have some examples of current notices sent to the Internet Archive
that we think illustrate very well why this requirement would be harmful to the free sharing of information and freedom of speech that the European Union pledges to safeguard. In the past week, the Internet Archive has received a
series of email notices from Europol's European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as terrorist propaganda. At least one of these mistaken URLs was also identified as terrorist content in a separate
take down notice from the French government's L'Office Central de Lutte contre la Criminalité liée aux Technologies de l'Information et de la Communication (OCLCTIC). The Internet Archive has a few staff members that process
takedown notices from law enforcement who operate in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night, between midnight and 3am
Pacific, and all of the reports were sent outside of the business hours of the Internet Archive. The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review
them after the fact. It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU's lists include some of the most visited pages on archive.org and materials
that obviously have high scholarly and research value. See a summary below with specific examples. See example falsely reported URLs at
article from blog.archive.org
|
|
Big names of the internet explain how the EU's Terrorist Content laws will be ineffective and will damage the non terrorist internet in the process
|
|
|
| 1st April 2019
|
|
| See article [pdf] from politico.eu |
A group of some of the best known internet pioneers have written an open letter explaining how the EU's censorship law nominally targeting terrorism will both chill the non terrorist internet whilst simultaneously advantaging US internet giants
over smaller European businesses. The group writes: EU Terrorist Content regulation will damage the internet in Europe without meaningfully contributing to the fight against terrorism Dear MEP Dalton, Dear MEP Ward,
Dear MEP Reda, As a group of pioneers, technologists, and innovators who have helped create and sustain today's internet, we write to you to voice our concern at proposals under consideration in the EU Terrorist Content
regulation. Tackling terrorism and the criminal actors who perpetrate it is a necessary public policy objective, and the internet plays an important role in achieving this end. The tragic and harrowing incident in
Christchurch, New Zealand earlier this month has underscored the continued threat terrorism poses to our fundamental freedoms, and the need to confront it in all its forms. However, the fight against terrorism does not preclude lawmakers from their
responsibility to implement evidence-based law that is proportionate, justified, and supportive of its stated aim. The EU Terrorist Content regulation, if adopted as proposed, will restrict the basic rights of European
internet users and undercut innovation on the internet without meaningfully contributing to the fight against terrorism. We are particularly concerned by the following aspects of the proposed Regulation:
- Unclear definition of terrorist content: The definition of 'terrorist content' is extremely broad, and includes no clear exemption for educational, journalistic, or research purposes. This creates the risk of over-removal of
lawful and important public interest speech.
- Lack of proportionality: The regulation applies equally to all internet hosting services, bringing thousands of services into scope that have no relevance to terrorist
content. By not taking any account of the different types and sizes of online services, nor their exposure to such illegal content, the new rules would be far out of proportion with the stated aim of the proposal.
- Unworkable takedown timeframes: The obligation to remove content within a mere 60 minutes of notification will likely lead to significant over-removal of lawful content and place a catastrophic compliance burden on micro, small, and medium-sized companies offering services within Europe. At the same time, it will greatly favour large multinational platforms that have already developed highly sophisticated content moderation operations.
- Reliance on upload filters and other 'proactive measures': The draft regulation frames automated upload filters as 'the' solution for terrorist content moderation at scale, and provides government agencies with
the power to mandate how such upload filters and other proactive measures are designed and implemented. But upload filtering of 'terrorist content' is fraught with challenges and risks, and only a handful of online services have the resources and
capacity to build or license such technology. As such, the proposal is setting a benchmark that only the largest platforms can meet. Moreover, upload filtering and related proactive measures risk suppressing important public interest content, such as
news reports about terrorist incidents and dispatches from warzones.
We fully support efforts to combat dangerous and illegal information on the internet, including through new legislation where appropriate. Yet as currently drafted, this Regulation risks inflicting harm on free expression and due
process, competition and the possibility to innovate online. Given these likely ramifications we urge you to undertake a proper assessment of the proposal and make the necessary changes to ensure that the perverse outcomes
described above are not realised. At the very least, any legislation of this nature must include far greater rights protection and be built around a proportionality criterion that ensures companies of all sizes and types can comply and compete in Europe.
Citizens in Europe look to you for leadership in developing progressive policy that protects their rights, ensures their companies can compete, and protects their public interest. This legislation in its current form runs contrary
to those ambitions. We urge you to amend it, for the sake of European citizens and for the sake of the internet. Yours sincerely, Mitchell Baker Executive Chairwoman, The Mozilla Foundation and Mozilla Corporation Tim
Berners-Lee Inventor of the World Wide Web and Founder of the Web Foundation Vint Cerf Internet Pioneer Brewster Kahle Founder & Digital Librarian, Internet Archive Jimmy Wales Founder of Wikipedia and Member of the Board of Trustees of the Wikimedia
Foundation Markus Beckedahl Founder, Netzpolitik; Co-founder, re:publica Brian Behlendorf Member of the EFF Board of Directors; Executive Director of Hyperledger at the Linux Foundation Cindy Cohn Executive Director, Electronic Frontier Foundation Cory
Doctorow Author; Co-Founder of Open Rights Group; Visiting Professor at Open University (UK) Rebecca MacKinnon Co-founder, Global Voices; Director, Ranking Digital Rights Katherine Maher Chief Executive Officer of the Wikimedia Foundation Bruce Schneier
Public-interest technologist; Fellow, Berkman Klein Center for Internet & Society; Lecturer, Harvard Kennedy School |
|
Report from the European Parliament about an upcoming internet censorship law
|
|
|
| 5th March 2019
|
|
| See article from
laquadrature.net |
Members of the European Parliament are considering a proposition for the censorship of terrorist internet content issued by the European Commission last September. The IMCO Committee ("Internal Market and Consumer Protection") has just
published its initial opinions on the proposition. laquadrature.net reports:
Judicial Review The idea is that the government of any European Member State will be able to order any website to remove content considered "terrorist". No independent judicial authorisation
will be needed to do so, letting governments abuse the wide definition of "terrorism". The only thing IMCO agreed to add is for government's orders to be subject to "judicial review", which can mean anything.
In France, the government's orders to remove "terrorist content" are already subject to "judicial review", where an independent body is notified of all removal orders and may ask judges to assess them. This has not been of much help:
only once has this censorship been submitted to a judge's review. It was found to be unlawful, but more than a year and a half after it was ordered. During this time, the French government was able to abusively censor content, in this case, far-left
publications by two French Indymedia outlets. Far from simplifying, this Regulation will add confusion as authorities from one member state will be able to order removal in another, without necessarily understanding context.
Unrealistic removal delays Regarding the one hour delay within which the police can order a hosting service provider to block any content reported as "terrorist", there was no real progress
either. It has been replaced by a deadline of at least eight hours, with a small exception for "microenterprises" that have not been previously subject to a removal order (in this case, the "deadline shall be no sooner than the end of the
next working day"). This narrow exception will not help the vast majority of Internet actors comply with such a strict deadline. Even if the IMCO Committee has removed any mention of proactive measures that can be imposed
on Internet actors, and has stated that "automated content filters" shall not be used by hosting service providers, this very tight deadline, and the threat of heavy fines will only incite them to adopt the moderation tools developed by the
Web's juggernauts (Facebook and Google) and use the broadest possible definition of terrorism to avoid the risk of penalties. The impossible obligation to provide a point of contact reachable 24/7 has not been modified either. The IMCO opinion has even
worsened the financial penalties that can be imposed: it is now "at least" 1% and up to 4% of the hosting service provider's turnover. Next steps The next step will be on 11 March, when the
CULT Committee (Culture and Education) will adopt its opinion. The last real opportunity to obtain the rejection of this dangerous text will be on 21 March 2019, in the LIBE Committee (Civil Liberties, Justice and Home Affairs).
European citizens must contact their MEPs to demand this rejection. We have provided a dedicated page on our website with an analysis of this Regulation and a tool to
directly contact the MEPs in charge. Starting today, and for the weeks to come, call your MEPs and demand they reject this text. |
|
|
|
|
|
19th February 2019
|
|
|
EU proposal pushes tech companies to tackle terrorist content with AI, despite implications for war crimes evidence See
article from advox.globalvoices.org
|
|
Is there a special place in hell for EU legislators who promote censorship without even a sketch of a plan of the likely consequences
|
|
|
| 6th February 2019
|
|
| See article from
eff.org by Jillian C York |
Governments around the world are grappling with the threat of terrorism, but their efforts aimed at curbing the dissemination of terrorist content online all too often result in censorship. Over the past five years, we've seen a number of
governments--from the US Congress to that of France and now the European Commission (EC)--seek to implement measures that place an undue burden on technology companies to remove terrorist speech or face financial liability. This
is why EFF has joined forces with dozens of organizations to call on members of the European Parliament to oppose the EC's proposed regulation, which would require companies to take down terrorist content within one hour . We've added our voice to two
letters--one from Witness and another organized by the Center for Democracy and Technology--asking that MEPs consider the serious consequences that the passing of this regulation could have on human rights defenders and on freedom of expression.
We share the concerns of dozens of allies that requiring the use of proactive measures such as use of the terrorism hash database (already voluntarily in use by a number of companies) will restrict expression and have a
disproportionate impact on marginalized groups. We know from years of experience that filters just don't work. Furthermore, the proposed requirement that companies must respond to reports of terrorist speech within an hour is, to
put it bluntly, absurd. As the letter organized by Witness states, this regulation essentially forces companies to bypass due process and make rapid and unaccountable decisions on expression through automated means and furthermore doesn't reflect the
realities of how violent groups recruit and share information online. We echo these and other calls from defenders of human rights and civil liberties for MEPs to reject proactive filtering obligations and to refrain from enacting
laws that will have unintended consequences for freedom of expression.
|
|
Europe's proposed regulation on online extremism endangers freedom of expression. A statement by Index on Censorship
|
|
|
| 16th January 2019
|
|
| See article
from indexoncensorship.org |
Index on Censorship shares the widespread concerns about the proposed EU regulation on preventing the dissemination of terrorist content online. The regulation would endanger freedom of expression and would create huge practical challenges for companies
and member states. Jodie Ginsberg, CEO of Index, said: We urge members of the European Parliament and representatives of EU member states to consider if the regulation is needed at all. It risks creating far more problems than it solves. At a minimum the
regulation should be completely revised. Following the recent agreement by the European Council on a draft position for the proposed regulation on preventing the dissemination of terrorist content online, which adopted the initial
draft presented by the European Commission with some changes, the Global Network Initiative (GNI) is concerned about the potential unintended effects of the proposal and would therefore like to put forward a number of issues we urge the European
Parliament to address as it considers it further. GNI members recognize and appreciate the European Union (EU) and member states' legitimate roles in providing security, and share the aim of tackling the dissemination of terrorist
content online. However, we believe that, as drafted, this proposal could unintentionally undermine that shared objective by putting too much emphasis on technical measures to remove content, while simultaneously making it more difficult to challenge
terrorist rhetoric with counter-narratives. In addition, the regulation as drafted may place significant pressure on a range of information and communications technology (ICT) companies to monitor users' activities and remove content in ways that pose
risks for users' freedom of expression and privacy. We respectfully ask that EU officials, Parliamentarians, and member states take the time necessary to understand these and other significant risks that have been identified, by consulting openly and in
good faith with affected companies, civil society, and other experts. ...Read the full
article from indexoncensorship.org
|
|
The EU Commissioner for 'justice' and gender equality labels Facebook and Twitter as 'channels of dirt' And then whinges when UK newspapers refer to 'EU dirty rats'
|
|
|
| 29th
September 2018
|
|
| 22nd September 2018. See article from
theverge.com |
Vera Jourova is the European Commissioner for justice, consumers and gender equality. Once she opened a Facebook account. It did not go well. Jourova said at a news conference: For a short time, I had a Facebook account.
It was a channel of dirt. I didn't expect such an influx of hatred. I decided to cancel the account because I realised there will be less hatred in Europe after I do this.
Jourova's words carry more weight than most. She has a policy
beef with Facebook, and also the means to enforce it. Jourova says Facebook's terms of service are misleading, and has called upon the company to clarify them. In a post Thursday on that other channel of dirt, Twitter.com, she said:
I want #Facebook to be extremely clear to its users about how their service operates and makes money. Not many people know that Facebook has made available their data to third parties or that for instance it holds full copyright about
any picture or content you put on it. Jourova says European authorities could sanction Facebook next year if it doesn't like what it hears from the company soon. I was quite clear that we cannot negotiate forever, she said at the news
conference. We need to see the result. Update: Dishing the dirt 25th September 2018. See
article from theguardian.com Vera Jourova, the European Commissioner for justice, consumers
and gender equality, has condemned a series of hard-hitting front pages in the British press after a recent Sun headline described Europe's leaders as 'EU Dirty Rats'. Jourova bad-mouthed the media again in a press release, saying:
Media can build the culture of dialogue or sow divisions, spread disinformation and encourage exclusion. The Brexit debate is the best example of that. Do you remember the front page of a popular British daily
calling the judges the 'enemy of the people'? Or just last week, the EU leaders were called 'Dirty Rats' on another front page. Fundamental rights must be a part of public discourse in the media. They have to belong to the media.
Media are also instrumental in holding politicians to account and in defining the limits of what is 'unacceptable' in a society.
Offsite Comment: Now the EU wants to turn off the Sun 29th September 2018. See article from spiked-online.com by
Mick Hume They dream of stopping populism by curbing press freedom. The European Commission has come up with a new way to prevent people backing Brexit -- not by winning the argument, but by
curbing press freedom . They want to stop the British press encouraging hatred of EU leaders and judges, and impose a European approach of smart regulation to control the views expressed by the tabloids and their supposedly non-smart readers.
...Read the full article from spiked-online.com |
|
The European Commission publishes its proposal for massive fines for internet companies that don't implement censorship orders within the hour
|
|
|
|
15th September 2018
|
|
| See article from money.cnn.com
|
Tech companies that fail to remove terrorist content quickly could soon face massive fines. The European Commission proposed new rules on Wednesday that would require internet platforms to remove illegal terror content within an hour of it being flagged
by national authorities. Firms could be fined up to 4% of global annual revenue if they repeatedly fail to comply. Facebook (FB), Twitter (TWTR) and YouTube owner Google (GOOGL) had already agreed to work with the European Union on a voluntary basis
to tackle the problem. But the Commission said that progress has not been sufficient. A penalty of 4% of annual revenue for 2017 would translate to $4.4 billion for Google parent Alphabet and $1.6 billion for Facebook. The proposal is the
latest in a series of European efforts to control the activities of tech companies. The terror content proposal needs to be approved by the European Parliament and EU member states before becoming law. |
|
European Commission outlines its plans for direct and immediate censorship control of the internet
|
|
|
|
21st August 2018
|
|
| See article from
dailymail.co.uk |
Internet companies will have to delete content claimed to be extremist on their platforms within an hour or face being fined, under new censorship plans by the European Commission. The proposals will be set out in draft regulation due to be published
next month, according to The Financial Times. Julian King, the EU's commissioner for security, told the newspaper that Brussels had not seen enough progress when it came to the sites clamping down on terror-related material. Under the
rules, which would have to be agreed by a majority of EU member states, the platforms would have an hour to remove the material, a senior official told the newspaper. The rules would apply to all websites, regardless of their size. King told the
FT: The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent. All this leads
to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform.
Of course the stringent requirements are totally impractical for small companies, and so will no doubt
further strengthen the monopolies of US companies with massive workforces. And of course a one-hour turnaround gives absolutely no one time to even consider whether the censorship requests are fair or reasonable, and so translates into a tool for
direct state censorship of the internet. |
|
An excellent summary of the issues leading to the EU disgracefully proposing internet censorship for the benefit of mostly American media corporations
|
|
|
| 8th July 2018
|
|
| See article from
technollama.co.uk CC by Andres |
As we have been covering in the last couple of articles, a controversial EU Copyright Directive has been under discussion at the European Parliament, and in a surprising turn of events,
it voted to reject fast-tracking the tabled proposal by the JURI Committee which contained controversial proposals, particularly in
Art 11 and
Art 13. The proposed Directive will now get a full discussion and debate in plenary in September. I say
surprising because for those of us who have been witnesses (and participants) to the Copyright Wars for the last 20 years, such a defeat of copyright maximalist proposals is practically unprecedented, perhaps with the exception of
SOPA/PIPA. For years we've had a familiar pattern in the passing of copyright legislation: a proposal would be made to enhance protection and/or
restrict liberties, and a small group of ageing millionaire musicians would be paraded supporting the changes in the interest of creators. Only copyright nerds and a few NGOs and digital rights advocates would complain; their opinions would be ignored and
the legislation would pass unopposed. Rinse and repeat. But something has changed, and a wide coalition has managed to defeat powerful media lobbies for the first time in Europe, at least for now. How was this possible?
The main change is that the media landscape is very different thanks to the Internet. In the past, the creative industries were monolithic in their support for stronger protection, and they included creators, corporations, collecting
societies, publishers, and distributors; in other words the gatekeepers and the owners were roughly on the same side. But the Internet brought a number of new players, the tech industry and their online platforms and tools became the new gatekeepers.
Moreover, as people do not buy physical copies of their media and the entire industry has moved towards streaming, online distributors have become more powerful. This has created a perceived imbalance, where the formerly dominating industries need to
negotiate with the new gatekeepers for access to users. This is why creators complain about a value gap between what they
perceive they should be getting, and what they actually receive from the giants. The main result of this change from a political standpoint is that now we have two lobbying sides in the debate, which makes all the difference when
it comes to this type of legislation. In the past, policymakers could ignore experts and digital rights advocates because they never had the means to reach them; letters and articles by academics were not taken into account, or were given lip service
during some obscure committee discussion just to be hidden away. Tech giants such as Google have provided lobbying access in Brussels, which has at least levelled the playing field when it comes to presenting evidence to legislators.
As a veteran of the Copyright Wars, I have to admit that it has been very entertaining reading the reaction from the copyright industry lobby groups and their individual representatives, some almost going apoplectic with rage at
Google's intervention. These tend to be the same people who spent decades lobbying legislators to get their way unopposed, representing large corporate interests unashamedly and passing laws that would benefit only a few, usually to the detriment of
users. It seems like lobbying must be decried when you lose. But to see this as a victory for Google and other tech giants completely ignores the large coalition that shares the view that the proposed Articles 11 and 13 are very
badly thought-out, and could represent a real danger to existing rights. Some of us were fighting this fight when Google did not even exist, or when it was but a small competitor of AltaVista, Lycos, Excite and Yahoo! At the same
time that more restrictive copyright legislation came into place, we also saw the rise of free and open source software, open access, Creative Commons and open data. All of these are legal hacks that allow sharing, remixing and openness. These were
created precisely to respond to restrictive copyright practices. I also remember how they were opposed as existential threats by the same copyright industries, and treated with disdain and animosity. But something wonderful happened, eventually open
source software started winning (we used to buy operating systems), and Creative Commons became an important part of the Internet's ecosystem by propping up valuable common spaces such as Wikipedia. Similarly, the Internet has
allowed a great diversity of actors to emerge. Independent creators, small and medium enterprises, online publishers and startups love the Internet because it gives them access to a wider audience, and often they can bypass established gatekeepers. Lost
in this idiotic "Google v musicians" rhetoric has been the threat that both Art 11 and 13 represent to small entities. Art 11 proposes a new publishing right that has been proven to affect smaller players in Germany and Spain; while Art 13
would impose potentially crippling economic restrictions on smaller companies, as they would have to put in place automated filtering systems AND redress mechanisms against mistakes. In fact, it has often been remarked that Art 13 would benefit existing
dominant forces, as they already have filtering in place (think ContentID). Similarly, Internet advocates and luminaries see the proposals as a threat to the Internet, the people who know the Web best think that this is a bad
idea. If you can stomach it, read this thread featuring a copyright lobbyist attacking Neil Gaiman, one of the Internet celebrities who have voiced their concerns about the Directive. Even copyright experts who almost never intervene in digital rights
affairs have been vocal in their opposition to the changes. And finally we have political representatives from various parties and backgrounds who have been vocally opposed to the changes. While the leader of the political
opposition has been the amazing Julia Reda, she has managed to bring together a variety of voices from other parties and countries. The vitriol launched at her has been unrelenting, but futile. It has been quite a sight to see her opponents both try to
dismiss her as just another clueless young Pirate commanded by Google, while at the same time they try to portray her as a powerful enemy in charge of the mindless and uninformed online troll masses ready to do her bidding. All of
the above managed to do something wonderful, which was to convey the threat in easy-to-understand terms so that users could contact their representatives and make their voice heard. The level of popular opposition to the Directive has been a great sight
to behold. Tech giants did not create this alliance, they just gave various voices access to the table. To dismiss this as Google's doing completely ignores the very real and rich tapestry of those defending digital rights, and it
is quite clearly patronising and insulting, and precisely the reason why they lost. It was only very late that they finally realised they were losing the debate with the public, and not even the last-minute deployment of musical dinosaurs could save the
day. But the fight continues, keep contacting your MEPs and keep applying pressure. Appendix So who supported internet censorship in the EU parliamentary vote? Mostly the EU Conservative Group and also
half the Social Democrat MEPs and half the Far Right MEPs
|
|
TorrentFreak suggests that the disgraceful EU law to allow censorship machines to control the internet is just to help US Big Media get more money out of US Big Internet
|
|
|
|
28th June 2018
|
|
| See article from torrentfreak.com
|
| What is the mysterious hold that US Big Music has over Euro politicians? |
Article 13, the proposed EU legislation that aims to restrict safe harbors for online platforms, was crafted to end the so-called "Value Gap" on YouTube. Music piracy was traditionally viewed as an
easy-to-identify problem, one that takes place on illegal sites or via largely uncontrollable peer-to-peer networks. In recent years, however, the lines have been blurred. Sites like YouTube allow anyone to upload potentially
infringing content which is then made available to the public. Under the safe harbor provisions of US and EU law, this remains legal -- provided YouTube takes content down when told to do so. It complies constantly but there's always more to do.
This means that in addition to being one of the greatest legal platforms ever created, YouTube is also a goldmine of unlicensed content, something unacceptable to the music industry. They argue that the
existence of this pirate material devalues the licensed content on the platform. As a result, YouTube maintains a favorable bargaining position with the labels and the best licensing deal in the industry. The difference between
YouTube's rates and those the industry would actually like is now known as the "Value Gap" and it's become one of the hottest topics in recent years.
In fact, it is so controversial that new copyright legislation, currently weaving its way through the corridors of power in the EU Parliament, is specifically designed to address it. If passed, Article 13
will require platforms like YouTube to pre-filter uploads to detect potential infringement. Indeed, the legislation may as well have been named the YouTube Act, since it's the platform that provoked this entire debate and the whole Value Gap dispute.
With that in mind, it's of interest to consider the words of YouTube's global head of music Lyor Cohen this week. In an interview with
MusicWeek , Cohen pledges that his company's new music service, YouTube Music,
will not only match the rates the industry achieves from Apple Music and Spotify, but the company's ad-supported free tier viewers will soon be delivering more cash to the labels too. "Of course [rights holders are] going to get more
money," he told Music Week. If YouTube lives up to its pledge, a level playing field will not only be welcomed by the music industry but also YouTube competitors such as Spotify, who currently offer a free tier on less
favorable terms. While there's still plenty of room for YouTube to maneuver, peace breaking out with the labels may be coming a little too late for those deeply concerned about the implications of Article 13.
YouTube's business model and its reluctance to pay full market rate for music is what started the whole Article 13 movement in the first place and with the Legal Affairs Committee of the Parliament (JURI)
adopting the proposals last week, time is running out to have them overturned.
Behind the scenes, however, the labels and their associates are going flat out to ensure that Article 13 passes, whether YouTube decides to "play fair" or not. Their language suggests that force is the best negotiating tactic with the
distribution giant. Yesterday, UK Music CEO Michael Dugher led a delegation to the EU Parliament in support of Article 13. He was joined by deputy Labour leader Tom Watson and representatives from the BPI, PRS, and Music
Publishers Association, who urged MEPs to support the changes. |
|
European Parliament committee passed vote to hand over censorship of the internet to US corporate giants
|
|
|
| 20th June 2018
|
|
| See article from bit-tech.net
|
The European Parliament's Committee on Legal Affairs (JURI) has officially approved Articles 11 and 13 of a Digital Single Market (DSM) copyright proposal, mandating censorship machines and a link tax. Articles 11 and 13 of the Directive of the
European Parliament and of the Council on Copyright in the Digital Single Market have been the subject of considerable campaigning from pro-copyleft groups including the Open Rights Group and Electronic Frontier Foundation of late. Article 11, as
per the final version of the proposal, discusses the implementation of a link tax - the requirement that any site citing third-party materials do so in a way that adheres to the exemptions and restrictions of a total of 28 separate copyright laws or pays
for a licence to use and link to the material; Article 13, meanwhile, requires any site which allows users to post text, sound, program code, still or moving images, or any other work which can be copyrighted to automatically scan all such uploads
against a database of copyright works - a database which they will be required to pay to access. Neither Article 11 nor Article 13 will become official legislation until passed by the entire European Parliament in a plenary vote. There's no definite
timetable for when such a vote might take place, but it would likely happen sometime between December of this year and the first half of 2019. |
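The scanning obligation described above boils down to fingerprint matching against a claims database. As a rough, hypothetical sketch (it does not reflect any real filtering system such as ContentID; all names are invented for illustration), it also shows the two failure modes critics of Article 13 keep raising: nothing verifies who files a claim, and the filter cannot see context such as quotation or parody:

```python
import hashlib

# Assumed claims database: fingerprint -> claimed rightsholder.
rights_db = {}

def fingerprint(data: bytes) -> str:
    # Toy exact-match fingerprint; real filters use perceptual hashing to
    # catch near-duplicates, but are equally blind to context.
    return hashlib.sha256(data).hexdigest()

def register_claim(work: bytes, claimant: str) -> None:
    # Rightsholders may bulk-upload claims; nothing here checks ownership.
    rights_db[fingerprint(work)] = claimant

def filter_upload(upload: bytes):
    # Block on any database hit. A lawful quotation or parody whose bytes
    # match a claimed work is blocked exactly like an infringing copy.
    claimant = rights_db.get(fingerprint(upload))
    return ("blocked", claimant) if claimant else ("allowed", None)

register_claim(b"hit single master recording", "Label X")
print(filter_upload(b"hit single master recording"))  # ('blocked', 'Label X')
print(filter_upload(b"original home video"))          # ('allowed', None)
```

Registering a claim is one cheap hash insert, while disputing a wrongful block requires human review, which is the asymmetry discussed later in this page.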
|
In two days, an EU committee will vote to crown Google and Facebook permanent lords of internet censorship
|
|
|
|
19th June 2018
|
|
| See article from boingboing.net CC by Cory Doctorow
|
On June 20, the EU's legislative committee will vote on the new Copyright directive , and decide whether it will include the controversial "Article 13" (automated
censorship of anything an algorithm identifies as a copyright violation) and "Article 11" (no linking to news stories without paid permission from the site). These proposals will make starting new internet companies
effectively impossible -- Google, Facebook, Twitter, Apple, and the other US giants will be able to negotiate favourable rates and build out the infrastructure to comply with these proposals, but no one else will. The EU's regional tech success stories
-- say Seznam.cz , a successful Czech search competitor to Google -- don't have $60-100,000,000 lying around to build out their filters, and lack the leverage to extract favorable linking
licenses from news sites. If Articles 11 and 13 pass, American companies will be in charge of Europe's conversations, deciding which photos and tweets and videos can be seen by the public, and who may speak.
The MEP Julia Reda has written up the state of play on the vote, and it's very bad. Both left- and right-wing parties
have backed this proposal, including (incredibly) the French Front National, whose YouTube channel was just deleted by a copyright filter of the sort
they're about to vote to universalise.
So far, the focus in the debate has been on the intended consequences of the proposals: the idea that a certain amount of free expression and competition must be sacrificed to enable rightsholders to force Google and Facebook to
share their profits. But the unintended -- and utterly foreseeable -- consequences are even more important. Article 11's link tax allows news sites to decide who gets to link to them, meaning that they can exclude their critics.
With election cycles dominated by hoaxes and fake news, the right of a news publisher to decide who gets to criticise it is carte blanche to lie and spin. Article 13's copyright filters are even more vulnerable to attack: the
proposals contain no penalties for false claims of copyright ownership, but they do mandate that the filters must accept copyright claims in bulk, allowing rightsholders to upload millions of works at once in order to claim their copyright
and prevent anyone from posting them. That opens the doors to all kinds of attacks. The obvious one is that trolls might sow mischief by uploading millions of works they don't hold the copyright to, in order to prevent others from
quoting them: the works of Shakespeare, say, or everything ever posted to Wikipedia, or my novels, or your family photos. More insidious is the possibility of targeted strikes during crisis: stock-market manipulators could use
bots to claim copyright over news about a company, suppressing its sharing on social media; political actors could suppress key articles during referendums or elections; corrupt governments could use arms-length trolls to falsely claim ownership of
footage of human rights abuses. It's asymmetric warfare: falsely claiming a copyright will be easy (because the rightsholders who want this system will not tolerate jumping through hoops to make their claims) and instant (because
rightsholders won't tolerate delays when their new releases are being shared online at their moment of peak popularity). Removing a false claim of copyright will require that a human at an internet giant looks at it, sleuths out the truth of the
ownership of the work, and adjusts the database -- for millions of works at once. Bots will be able to pollute the copyright databases much faster than humans could possibly clear them. I spoke with Wired UK's KG Orphanides about
this, and their excellent article on the proposal is the best explanation I've seen of the uses of these copyright filters to create
unstoppable disinformation campaigns. Doctorow highlighted the potential for unanticipated abuse of any automated copyright filtering system to make false copyright claims, engage in targeted harassment and even
silence public discourse at sensitive times. "Because the directive does not provide penalties for abuse -- and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public
display -- it will be trivial to claim copyright over key works at key moments or use bots to claim copyrights on whole corpuses. The nature of automated systems, particularly if powerful rightsholders insist that they default to
initially blocking potentially copyrighted material and then releasing it if a complaint is made, would make it easy for griefers to use copyright claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum or,
more generally, public domain content such as the entirety of Wikipedia or the complete works of Shakespeare. "Making these claims will be MUCH easier than sorting them out -- bots can use cloud providers all over the world
to file claims, while companies like Automattic (WordPress) or Twitter, or even projects like Wikipedia, would have to marshal vast armies to sort through the claims and remove the bad ones -- and if they get it wrong and remove a legit copyright claim,
they face unbelievable copyright liability."
|
|
The UN's free speech rapporteur condemns the EU's censorship machines that will violate human rights
|
|
|
| 17th June 2018
|
|
| See
article from techdirt.com |
David Kaye, the UN's Special Rapporteur on freedom of expression, has now chimed in with a very thorough report, highlighting how Article 13 of the Directive -- the part about mandatory copyright filters -- would be a disaster for free speech and would
violate the UN's Universal Declaration of Human Rights, in particular Article 19, which says: Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to
seek, receive and impart information and ideas through any media regardless of frontiers.
As Kaye's report notes, the upload filters of Article 13 of the Copyright Directive would almost certainly violate this principle.
Article 13 of the proposed Directive appears likely to incentivize content-sharing providers to restrict at the point of upload user-generated content that is perfectly legitimate and lawful. Although the latest proposed
versions of Article 13 do not explicitly refer to upload filters and other content recognition technologies, it couches the obligation to prevent the availability of copyright protected works in vague terms, such as demonstrating best efforts and taking
effective and proportionate measures. Article 13(5) indicates that the assessment of effectiveness and proportionality will take into account factors such as the volume and type of works and the cost and availability of measures, but these still leave
considerable leeway for interpretation. The significant legal uncertainty such language creates does not only raise concern that it is inconsistent with the Article 19(3) requirement that restrictions on freedom of expression
should be provided by law. Such uncertainty would also raise pressure on content sharing providers to err on the side of caution and implement intrusive content recognition technologies that monitor and filter user-generated content at the point of
upload. I am concerned that the restriction of user-generated content before its publication subjects users to restrictions on freedom of expression without prior judicial review of the legality, necessity and proportionality of such restrictions.
Exacerbating these concerns is the reality that content filtering technologies are not equipped to perform context-sensitive interpretations of the valid scope of limitations and exceptions to copyright, such as fair comment or reporting, teaching,
criticism, satire and parody.
Kaye further notes that copyright is not the kind of thing that an algorithm can readily determine, and the fact-specific and context-specific nature of copyright requires much more than just throwing
algorithms at the problem -- especially when a website may face legal liability for getting it wrong. The designation of such mechanisms as the main avenue to address users' complaints effectively delegates content
blocking decisions under copyright law to extrajudicial mechanisms, potentially in violation of minimum due process guarantees under international human rights law. The blocking of content -- particularly in the context of fair use and other
fact-sensitive exceptions to copyright -- may raise complex legal questions that require adjudication by an independent and impartial judicial authority. Even in exceptional circumstances where expedited action is required, notice-and-notice regimes and
expedited judicial process are available as less invasive means for protecting the aims of copyright law. In the event that content blocking decisions are deemed invalid and reversed, the complaint and redress mechanism
established by private entities effectively assumes the role of providing access to remedies for violations of human rights law. I am concerned that such delegation would violate the State's obligation to provide access to an effective remedy for
violations of rights specified under the Covenant. Given that most of the content sharing providers covered under Article 13 are profit-motivated and act primarily in the interests of their shareholders, they lack the qualities of independence and
impartiality required to adjudicate and administer remedies for human rights violations. Since they also have no incentive to designate the blocking as being on the basis of the proposed Directive or other relevant law, they may opt for the legally safer
route of claiming that the upload was a terms of service violation -- this outcome may deprive users of even the remedy envisioned under Article 13(7). Finally, I wish to emphasize that unblocking, the most common remedy available for invalid content
restrictions, may often fail to address financial and other harms associated with the blocking of time-sensitive content.
He goes on to point out that while large platforms may be able to deal with all of this, smaller ones are going to be
in serious trouble: I am concerned that the proposed Directive will impose undue restrictions on nonprofits and small private intermediaries. The definition of an online content sharing provider under Article 2(5) is
based on ambiguous and highly subjective criteria such as the volume of copyright protected works it handles, and it does not provide a clear exemption for nonprofits. Since nonprofits and small content sharing providers may not have the financial
resources to establish licensing agreements with media companies and other right holders, they may be subject to onerous and legally ambiguous obligations to monitor and restrict the availability of copyright protected works on their platforms. Although
Article 13(5)'s criteria for effective and proportionate measures take into account the size of the provider concerned and the types of services it offers, it is unclear how these factors will be assessed, further compounding the legal uncertainty that
nonprofits and small providers face. It would also prevent a diversity of nonprofit and small content-sharing providers from potentially reaching a larger size, and result in strengthening the monopoly of the currently established providers, which could
be an impediment to the right to science and culture as framed in Article 15 of the ICESCR.
|
|
Vint Cerf, Tim Berners-Lee, and Dozens of Other Computing Experts Oppose Article 13 of the EU's new internet censorship law
|
|
|
|
13th June 2018
|
|
| See article from eff.org See
joint letter that was released today [pdf] |
As Europe's latest copyright proposal heads to a critical vote on June 20-21, more than 70 Internet and
computing luminaries have spoken out against a dangerous provision, Article 13, that would require Internet platforms to automatically filter uploaded content. The group, which includes Internet pioneer Vint Cerf, the inventor of the World Wide Web Tim
Berners-Lee, Wikipedia co-founder Jimmy Wales, co-founder of the Mozilla Project Mitchell Baker, Internet Archive founder Brewster Kahle, cryptography expert Bruce Schneier, and net neutrality expert Tim Wu, wrote in a
joint letter that was released today: By requiring Internet platforms to perform automatic filtering of all of the
content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet, from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.
The prospects for the elimination of Article 13 have continued to worsen. Until late last month, there was hope that Member States (represented by the Council of the European Union) would find a compromise. Instead, their
final negotiating mandate doubled down on it. The last hope for defeating the proposal now lies with the European Parliament. On June 20-21 the Legal Affairs (JURI) Committee will vote on the proposal. If it votes against upload
filtering, the fight can continue in the Parliament's subsequent negotiations with the Council and the European Commission. If not, then automatic filtering of all uploaded content may become a mandatory requirement for all user content platforms that
serve European users. Although this will pose little impediment to the largest platforms such as YouTube, which already uses its Content ID system to filter content, the law will create an expensive barrier to entry for smaller platforms and startups, which may choose to establish or move their operations overseas in order to avoid the European law.
For those platforms that do establish upload filtering, users will find that their contributions--including video, audio, text, and even source code --will be
monitored and potentially blocked if the automated system detects what it believes to be a copyright infringement. Inevitably, mistakes will happen . There is no
way for an automated system to reliably determine when the use of a copyright work falls within a copyright limitation or exception under European law, such as quotation or parody. Moreover, because these exceptions are not
consistent across Europe, and because there is no broad fair use right as in the United States, many harmless uses of copyright works in memes, mashups, and remixes probably are technically infringing even if no reasonable copyright owner would
object. If an automated system monitors and filters out these technical infringements, then the permissible scope of freedom of expression in Europe will be radically curtailed, even without the need for any substantive changes in copyright law.
The upload filtering proposal stems from a misunderstanding about the purpose of copyright
. Copyright isn't designed to compensate creators for each and every use of their works. It is meant to incentivize creators as part of an effort to promote the public interest in innovation and expression. But that public interest isn't served
unless there are limitations on copyright that allow new generations to build and comment on the previous contributions. Those limitations are both legal, like fair dealing, and practical, like the zone of tolerance for harmless uses. Automated upload
filtering will undermine both. The authors of today's letter write: We support the consideration of measures that would improve the ability for creators to receive fair remuneration for the use
of their works online. But we cannot support Article 13, which would mandate Internet platforms to embed an automated infrastructure for monitoring and censorship deep into their networks. For the sake of the Internet's future, we urge you to vote for
the deletion of this proposal.
What began as a bad idea offered up to copyright lobbyists as a solution to an imaginary "
value gap" has now become an outright crisis for the future of the Internet as we know it. Indeed, if
those who created and sustain the operation of the Internet recognize the scale of this threat, we should all be sitting up and taking notice. If you live in Europe or have European friends or family, now could be your last
opportunity to avert the upload filter. Please take action by clicking the button below, which will take you to a campaign website where you can phone, email, or Tweet at your representatives, urging them to stop this threat to the global Internet before
it's too late. Take Action at saveyourinternet.eu
|
|
TorrentFreak explains the grave threat to internet users and European small businesses
|
|
|
| 6th June 2018
|
|
| See article from torrentfreak.com cc
See also saveyourinternet.eu |
The EU's plans to modernize copyright law in Europe are moving ahead. With a crucial vote coming up later this month, protests from various opponents are on the rise as well. They warn that the proposed plans will result in Internet filters which
threaten people's ability to freely share content online. According to Pirate Party MEP Julia Reda, these filters will hurt regular Internet users, but also creators and businesses. In September 2016, the European Commission
published its proposal for a modernized copyright law. Among other things, it proposed measures to require online services to do more to fight piracy. Specifically, Article 13 of the proposed Copyright Directive will require
online services to track down and delete pirated content, in collaboration with rightsholders. The Commission stressed that the changes are needed to support copyright holders. However, many legal scholars, digital activists, politicians, and members of the public worry that they will violate the rights of regular Internet users. Last month the EU Council finalized the latest version of the proposal. This means that the matter now goes to the Legal Affairs Committee of the Parliament (JURI), which must decide how to move ahead. This vote is expected to take place in two weeks. Although the term filter is commonly used to describe Article 13, it is not directly mentioned in the text itself. According to Pirate Party Member of Parliament (MEP) Julia Reda, the filter keyword is avoided in the proposal to prevent a possible violation of EU law and the Charter of Fundamental Rights. However, the
outcome is essentially the same. In short, the relevant text states that online services are liable for any uploaded content unless they take effective and proportionate action to prevent copyright infringements, identified by
copyright holders. That also includes preventing these files from being reuploaded. The latter implies some form of hash filtering and continuous monitoring of all user uploads. Several companies, including Google Drive, Dropbox,
and YouTube already have these types of filters, but many others don't. A main point of critique is that the automated upload checks will lead to overblocking, as they are often ill-equipped to deal with issues such as fair use.
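The hash filtering implied above can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual system: an exact-match filter that hashes each upload and compares it against digests of files rightsholders have notified. Its weakness is also the critics' point: any re-encoding evades a plain hash, which is why real systems like YouTube's Content-ID use fuzzier perceptual fingerprinting, and why fuzzier matching in turn produces the overblocking mistakes described here.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """A file's identity for filtering purposes is its SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()

def build_blocklist(notified_files: list[bytes]) -> set[str]:
    """Digests of files that rightsholders have flagged (illustrative data)."""
    return {sha256_of(f) for f in notified_files}

def is_blocked(upload: bytes, blocklist: set[str]) -> bool:
    """Exact-match filter: a re-upload is caught only if it is
    byte-identical to a notified file. Changing a single byte
    (re-encoding, trimming, watermarking) produces a new digest
    and slips through."""
    return sha256_of(upload) in blocklist
```

Note how brittle the exact-match approach is: the same content with one byte changed passes the filter, which pushes platforms toward approximate matching and the false positives that come with it.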
The proposal would require platforms to filter all uploads by their users for potential copyright infringements -- not just YouTube and Facebook, but also services like WordPress, TripAdvisor, or even Tinder. We know from
experience that these algorithmic filters regularly make mistakes and lead to the mass deletion of legal uploads, Julia Reda tells TF. Especially small independent creators frequently see their content taken down because others
wrongfully claim copyright on their works. There are no safeguards in the proposal against such cases of copyfraud. Besides affecting uploads of regular Internet users and smaller creators, many businesses will also be hit. They
will have to make sure that they can detect and prevent infringing material from being shared on their systems. This will give larger American Internet giants, who already have these filters in place, a competitive edge over
smaller players and new startups, the Pirate Party MEP argues. It will make those Internet giants even stronger, because they will be the only ones able to develop and sell the filtering technologies necessary to comply with the
law. A true lose-lose situation for European Internet users, authors and businesses, Reda tells us. Based on the considerable protests in recent days, the current proposal is still seen as a clear threat by many.
In fact, the Save Your Internet campaign, backed by prominent organizations such as Creative Commons, EFF, and Open Media, is ramping up again. They urge the
European public to reach out to their Members of Parliament before it's too late. Should Article 13 of the Copyright Directive proposal be adopted, it will impose widespread censorship of all the content you share online. The
European Parliament is the only one that can step in and Save your Internet, they write. The full Article 13 text includes some language to limit its scope. The nature and size of online services must be taken into account, for
example. This means that a small and legitimate niche service with a few dozen users might not be directly liable if it operates without these anti-piracy measures. Similarly, non-profit organizations will not be required to
comply with the proposed legislation, although there are calls from some member states to change this. In addition to Article 13, there is also considerable pushback from the public against Article 11, which is regularly referred
to as the link tax. At the moment, several organizations are planning a protest day next week, hoping to mobilize the public to speak out. A week later, following the JURI vote, it will be judgment day. If they pass the Committee, the plans will progress towards the final vote on copyright reform next spring. This also means that they'll become much harder to stop or change. That has been done before, such as with ACTA, but achieving that type of momentum will be a tough challenge.
|
|
The EU Security Commissioner threatens censorship laws if social media companies don't censor themselves voluntarily
|
|
|
|
24th April 2018
|
|
| See article from
theguardian.com |
Brussels may threaten social media companies with censorship laws unless they move urgently to tackle supposed 'fake news' and Cambridge Analytica-style data abuse. The EU security commissioner, Julian King, said short-term, concrete plans needed to
be in place before the elections, when voters in 27 EU member states will elect MEPs. Under King's ideas, social media companies would sign a voluntary code of conduct to prevent the misuse of platforms to pump out misleading information. The code would include a pledge for greater transparency, so users would be made aware why their Facebook or Twitter feed was presenting them with certain adverts or stories. Another proposal is for political adverts to be accompanied with information about who paid for them.
|
|
But then again, it doesn't much care for its own people either
|
|
|
|
12th April 2018
|
|
| See article from
theguardian.com |
A loss of trust in Facebook in the light of the Cambridge Analytica scandal could prompt the EU to scrap its voluntary code of conduct on the removal of online hate speech in favour of legislation and heavy sanctions, European commissioner Věra Jourová said. The EU's executive is examining how to have hateful content censored swiftly by social media platforms, with legislation being one option that could replace the current system. Jourová said she would be grilling Sheryl Sandberg, Facebook's chief operating officer, later this week over unanswered questions about the company's past errors and future plans. Jourová said she was wary of following the German path, because of the thin line between removing offensive material
and censorship, but said all options were on the table. |
|
The European Commission proposes designating internet censors, which it euphemistically calls 'trusted flaggers', and then requiring internet hosting companies to censor whatever the 'trusted flaggers' say
|
|
|
| 9th April 2018
|
|
| See article from
iwf.org.uk See Commission Recommendation on measures to effectively tackle illegal content online [pdf] from ec.europa.eu
|
The EU Commission has recommended an internet censorship decision sounding like something straight out of China. The system consists of designating police, state censors, commercial censors acting for the state, and perhaps independent groups like the
IWF. These are euphemistically known as trusted flaggers. Website and content hosting companies will then be required to remove any content (nominally illegal content) in a timely manner. The IWF usefully summarises the proposals as
follows: The EU Commission's proposals to tackle illegal content online include:
- Hosting providers and Member States being prepared to submit all monitoring information to the Commission, upon request, within six months (three months for terrorist content) in order for the Commission to assess whether
further legislation is required.
- Recommends introducing definitions for "illegal content" and "trusted flaggers".
- Fast track procedures should be introduced
for materials referred by trusted flaggers.
- Hosting providers to publish a list of who they consider to be a "trusted flagger".
- Automated takedown of content is
encouraged, but should have safeguards such as human oversight.
- Terrorist content should be removed within one hour.
|
|
Politicians, censors and campaigners scent blood in getting Facebook and Google to censor their pet peeves, in this case copyrighted and terrorist material
|
|
|
| 5th March
2018
|
|
| See article from theguardian.com
|
The European Union has given Google, YouTube, Facebook, Twitter and other internet companies three months to show that they are removing extremist content more rapidly or face legislation forcing them to do so. The European Commission said on Thursday
that internet firms should be ready to remove extremist content within an hour of being notified and recommended measures they should take to stop its proliferation. Digital commissioner Andrus Ansip said: While
several platforms have been removing more illegal content than ever before ... we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens' security, safety and fundamental rights.
The EC said that it would assess the need for legislation of technology firms within three months if demonstrable improvement is not made on what it describes as terrorist content. For all other types of 'illegal' content the EC will
assess the technology firms' censorship progress within six months. It also urged the predominantly US-dominated technology sector to adopt a more proactive approach, with automated systems to detect and censor 'illegal' content.
|
|
The EU is failing to engage with platforms where the most hateful and egregious terrorist content lives.
|
|
|
|
8th February 2018
|
|
| See article from
politico.eu |
Illegal content and terrorist propaganda are still spreading rapidly online in the European Union -- just not on mainstream platforms, new analysis shows. Twitter, Google and Facebook all play by EU rules when it comes to illegal
content, namely hate speech and terrorist propaganda, policing their sites voluntarily. But with increased scrutiny on mainstream sites, alt-right and terrorist sympathizers are flocking to niche platforms where illegal content is
shared freely, security experts and anti-extremism activists say.
|
|
A few MEPs produce YouTube video highlighting the corporate and state censorship that will be enabled by an EU proposal to require social media posts to be approved before posting by an automated censorship machine
|
|
|
|
23rd January 2018
|
|
| See article from torrentfreak.com
See video from YouTube |
In a new campaign video, several Members of the European Parliament warn that the EU's proposed mandatory upload filters pose a threat to freedom of speech. The new filters would function as censorship machines which are "completely
disproportionate," they say. The MEPs encourage the public to speak up, while they still can. Through a series of new proposals, the European Commission is working hard to
modernize EU copyright law. Among other things, it will require online services to do more to fight piracy.
These proposals have not been without controversy. Article 13 of the proposed Copyright Directive, for example, has been widely criticized as it would require online services to monitor and filter uploaded content.
This means that online services, which deal with large volumes of user-uploaded content, must use fingerprinting or other detection mechanisms -- similar to YouTube's Content-ID system -- to block copyright infringing files.
The Commission believes that more stringent control is needed to support copyright holders. However, many
legal scholars ,
digital activists , and members of the public worry that they will violate the rights of
regular Internet users. In the European Parliament, there is fierce opposition as well. Today, six Members of Parliament (MEPs) from across the political spectrum released a new campaign video warning their fellow colleagues and
the public at large. The MEPs warn that such upload filters would act as censorship machines, something they've made clear to the Council's working group on intellectual property, where the controversial proposal was
discussed today. Imagine if every time you opened your mouth, computers controlled by big companies would check what you were about to say, and have the power to prevent you from saying it, Greens/EFA MEP Julia Reda says.
A new legal proposal would make this a reality when it comes to expressing yourself online: Every clip and every photo would have to be pre-screened by some automated 'robocop' before it could be uploaded and seen online, ALDE MEP Marietje Schaake adds.
Stop censorship machines! Schaake notes that she has dealt with the consequences of upload filters herself. When she uploaded a recording of a political speech to YouTube, the site took it down
without explanation. Until this day, the MEP still doesn't know on what grounds it was removed. These broad upload filters are completely disproportionate and a danger for freedom of speech, the MEPs warn. The automated systems
make mistakes and can't properly detect whether something's fair use, for example. Another problem is that the measures will be relatively costly for smaller companies, which puts them at a competitive disadvantage. "Only the
biggest platforms can afford them -- European competitors and small businesses will struggle," ECR MEP Dan Dalton says. The plans can still be stopped, the MEPs say. They are currently scheduled for a vote in the Legal
Affairs Committee at the end of March, and the video encourages members of the public to raise their voices. Speak out ...while you can still do so unfiltered! S&D MEP Catherine Stihler says.
|
|
The EU Commission reports that internet companies are now censoring a higher proportion of posts that are reported for being 'illegal hate speech'
|
|
|
|
22nd January 2018
|
|
| See press release from europa.eu |
The third evaluation of the EU's 'Code of Conduct' on censoring 'illegal online hate speech' carried out by NGOs and public bodies shows that IT companies removed on average 70% of posts claimed to contain 'illegal hate speech'. However, some further
challenges still remain, in particular the lack of systematic feedback to users. Google+ announced today that they are joining the Code of Conduct, and Facebook confirmed that Instagram would also do so, thus further expanding the numbers of
actors covered by it. Věra Jourová, with the oxymoronic title of EU Commissioner for Justice, Consumers and Gender Equality, said: The Internet must be a safe place, free from illegal hate speech, free from
xenophobic and racist content. The Code of Conduct is now proving to be a valuable tool to tackle illegal content quickly and efficiently. This shows that where there is a strong collaboration between technology companies, civil society and policy makers
we can get results, and at the same time, preserve freedom of speech. I expect IT companies to show similar determination when working on other important issues, such as the fight with terrorism, or unfavourable terms and conditions for their users.
On average, IT companies removed 70% of all the 'illegal hate speech' notified to them by the NGOs and public bodies participating in the evaluation. This rate has steadily increased from 28% in the first monitoring round in 2016 and 59%
in the second monitoring exercise in May 2017. The Commission will continue to monitor regularly the implementation of the Code by the participating IT companies with the help of civil society organisations and aims at widening it to further online platforms. The Commission will consider additional measures if efforts are not pursued or slow down. Of course there is no mention of the possibility that some of the reports of supposed 'illegal hate speech' are not actioned because they are simply wrong and may just be the politically correct being easily offended. We seem to live in an unjust age where the accuser is always considered right and the merits of the case count for absolutely nothing. |
|
|
|
|
|
1st December 2017
|
|
|
European Commission seems to be backing off from its idea of requiring websites to pre-censor material for upload, but it has plenty of replacement ideas See
article from torrentfreak.com |
|
|
|
|
| 27th November 2017
|
|
|
Detailed discussion of the EU proposed internet censorship law requiring internet companies to pre-censor user posts See article
from cyberleagle.com |
|
The European Union enacts new regulation enabling the blocking of websites without judicial oversight
|
|
|
| 23rd November 2017
|
|
| 16th November 2017. See
article from bleepingcomputer.com |
The European Union voted on November 14 to pass the new internet censorship regulation, nominally in the name of consumer protection. But of course censorship often hides behind consumer protection, e.g. the UK's upcoming internet porn ban is enacted in
the name of protecting under 18 internet consumers. The new EU-wide law gives extra power to national consumer protection agencies, but which also contains a vaguely worded clause that also grants them the power to block and take down websites without
judicial oversight. Member of the European Parliament Julia Reda said in a speech in the European Parliament Plenary during a last ditch effort to amend the law: The new law establishes overreaching Internet
blocking measures that are neither proportionate nor suitable for the goal of protecting consumers and come without mandatory judicial oversight,
According to the new rules, national consumer protection authorities can order any
unspecified third party to block access to websites without requiring judicial authorization, Reda added later in the day on her blog. This new law is an EU regulation and not a directive, meaning it is obligatory for all EU states. The new law proposal started out with good intentions, but sometime in the spring of 2017 the proposed regulation received a series of amendments that watered down some consumer protections but kept intact the provisions that ensured national consumer protection agencies can go after and block or take down websites. Presumably multinational companies had been lobbying for new weapons in their battle against copyright infringement. For instance, the new law gives national consumer protection
agencies the legal power to inquire and obtain information about domain owners from registrars and Internet Service Providers. Besides the website blocking clause, authorities will also be able to request information from banks to detect the
identity of the responsible trader, to freeze assets, and to carry out mystery shopping to check geographical discrimination or after-sales conditions. Comment: European Law Claims to Protect Consumers... By Blocking the Web
23rd November 2017 See article from eff.org
Last week the European Parliament passed a new Consumer Protection Regulation [PDF] that allows national
consumer authorities to order ISPs, web hosts and domain registries to block or delete websites... all without a court order. The websites targeted are those that allegedly infringe European consumer law. But European consumer law has some perplexing
provisions that have drawn ridicule, including a prohibition on children
blowing up balloons unsupervised and a ban on excessively curvy bananas. Because of these, the range of
websites that could be censored is both vast and uncertain. The Consumer Protection Regulation provides in Article 8(3)(e) that consumer protection authorities must have the power: where no
other effective means are available to bring about the cessation or the prohibition of the infringement including by requesting a third party or other public authority to implement such measures, in order to prevent the risk of serious harm to the
collective interests of consumers:
to remove content or restrict access to an online interface or to order the explicit display of a warning to consumers when accessing the online interface; to order a hosting service provider to
remove, disable or restrict the access to an online interface; or where appropriate, order domain registries or registrars to delete a fully qualified domain name and allow the competent authority concerned to register it;
The risks of unelected public authorities being given the power to block websites was powerfully demonstrated in 2014, when the Australian company regulator ASIC
accidentally blocked 250,000 websites in an attempt to block just a handful of sites alleged to be
defrauding Australian consumers. This likelihood of unlawful overblocking is just one of the reasons that the United Nations Special Rapporteur for Freedom of Expression and Opinion has underlined how web blocking often
contravenes international human rights law. In a 2011 report [PDF], then Special Rapporteur Frank La Rue set out how extremely
limited are the circumstances in which blocking of websites can be justified, noting that where: the specific conditions that justify blocking are not established in law, or are provided by law but in an overly broad
and vague manner, [this] risks content being blocked arbitrarily and excessively. ... [E]ven where justification is provided, blocking measures constitute an unnecessary or disproportionate means to achieve the purported aim, as they are often not
sufficiently targeted and render a wide range of content inaccessible beyond that which has been deemed illegal. Lastly, content is frequently blocked without the intervention of or possibility for review by a judicial or independent body.
This describes exactly what the new Consumer Protection Regulation will do. It hands over a power that should only be exercised, if at all, under the careful scrutiny of a judge in the most serious of cases, and allows it
to be wielded at the whim of an unelected consumer protection agency. As explained by Member of the European Parliament (MEP) Julia Reda, who voted against
the legislation, it sets the stage for the construction of a censorship infrastructure that could be misused for purposes that we cannot even anticipate, ranging from copyright enforcement through to censorship of political protest.
Regrettably, the Regulation is now law--and is required to be enforced by all European states. It is both ironic and tragic that a law intended to protect consumers actually poses such a dire threat to their right to freedom of
expression. |
|
And the EU loves fake news a lot! And so it is setting up a new censorship body to find even more of it
|
|
|
| 17th November 2017
|
|
| See article from wsws.org |
The European Union is in the process of creating an authority to monitor and censor so-called fake news. It is setting up a High-Level 'Expert' Group. The EU is currently consulting media professionals and the public to decide what powers to give to
this EU body, which is to begin operation next spring. The World Socialist Web Site has its own colourful view on the intentions of the body, but I
don't suppose it is too far from the truth: An examination of the EU's announcement shows that it is preparing mass state censorship aimed not at false information, but at news reports or political views that encourage
popular opposition to the European ruling class. It aims to create conditions where unelected authorities control what people can read or say online.
EU Vice-President Frans Timmermans explained the move
in ominous terms: We live in an era where the flow of information and misinformation has become almost overwhelming. The EU's task is to protect its citizens from fake news and to manage the information they receive.
According to an EU press release, the EU Commission, another unelected body, will select the High-Level Expert Group, which is to start in January 2018 and will work over several months. It will discuss possible future actions to
strengthen citizens' access to reliable and verified information and prevent the spread of disinformation online. Who will decide what views are verified, who is reliable and whose views are disinformation to be deleted from Facebook or removed
from Google search results? The EU, of course. |
|
56 European human rights groups call on the EU to abandon its disgraceful law proposal requiring the pre-censorship of content as it is being uploaded to the internet
|
|
|
| 17th
October 2017
|
|
| See article from
indexoncensorship.org |
Article 13: Monitoring and filtering of internet content is unacceptable. Index on Censorship joined with 56 other NGOs to call for the deletion of Article 13 from the proposal on the Digital Single Market, which includes obligations on internet
companies that would be impossible to respect without the imposition of excessive restrictions on citizens' fundamental rights. Dear President Juncker, Dear President Tajani, Dear President Tusk, Dear Prime Minister
Ratas, Dear Prime Minister Borissov, Dear Ministers, Dear MEP Voss, Dear MEP Boni, The undersigned stakeholders represent fundamental rights organisations. Fundamental rights, justice and the rule of
law are intrinsically linked and constitute core values on which the EU is founded. Any attempt to disregard these values undermines the mutual trust between member states required for the EU to function. Any such attempt would also undermine the
commitments made by the European Union and national governments to their citizens. Article 13 of the proposal on Copyright in the Digital Single Market includes obligations on internet companies that would be impossible to
respect without the imposition of excessive restrictions on citizens' fundamental rights. Article 13 introduces new obligations on internet service providers that share and store user-generated content, such as video or
photo-sharing platforms or even creative writing websites, including obligations to filter uploads to their services. Article 13 appears to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block
EU citizens' communications if they are to have any chance of staying in business. Article 13 contradicts existing rules and the case law of the Court of Justice. The Directive on Electronic Commerce (2000/31/EC)
regulates the liability for those internet companies that host content on behalf of their users. According to the existing rules, there is an obligation to remove any content that breaches copyright rules, once this has been notified to the provider.
Article 13 would force these companies to actively monitor their users' content, which contradicts the 'no general obligation to monitor' rules in the Electronic Commerce Directive. The requirement to install a system for filtering
electronic communications has twice been rejected by the Court of Justice, in the cases Scarlet Extended (C 70/10) and Netlog/Sabam (C 360/10). Therefore, a legislative provision that requires internet companies to install a filtering system would
almost certainly be rejected by the Court of Justice because it would contravene the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct business and the right to freedom of
expression, such as to receive or impart information, on the other. In particular, the requirement to filter content in this way would violate the freedom of expression set out in Article 11 of the Charter of Fundamental
Rights. If internet companies are required to apply filtering mechanisms in order to avoid possible liability, they will. This will lead to excessive filtering and deletion of content and limit the freedom to impart information on the one hand, and
the freedom to receive information on the other. If EU legislation conflicts with the Charter of Fundamental Rights, national constitutional courts are likely to be tempted to disapply it and we can expect such a rule to be
annulled by the Court of Justice. This is what happened with the Data Retention Directive (2006/24/EC), when EU legislators ignored compatibility problems with the Charter of Fundamental Rights. In 2014, the Court of Justice declared the Data
Retention Directive invalid because it violated the Charter. Taking into consideration these arguments, we ask the relevant policy-makers to delete Article 13. European Digital Rights (EDRi) Access Info
ActiveWatch Article 19 Associação D3 -- Defesa dos Direitos Digitais Associação Nacional para o Software Livre (ANSOL) Association for Progressive Communications (APC) Association for Technology and Internet (ApTI) Association
of the Defence of Human Rights in Romania (APADOR) Associazione Antigone Bangladesh NGOs Network for Radio and Communication (BNNRC) Bits of Freedom (BoF) BlueLink Foundation Bulgarian Helsinki Committee Center for Democracy &
Technology (CDT) Centre for Peace Studies Centrum Cyfrowe Coalizione Italiana Libertà e Diritti Civili (CILD) Code for Croatia COMMUNIA Culture Action Europe Electronic Frontier Foundation (EFF) epicenter.works Estonian Human Rights Centre
Freedom of the Press Foundation Frënn vun der Ënn Helsinki Foundation for Human Rights Hermes Center for Transparency and Digital Human Rights Human Rights Monitoring Institute Human Rights Watch Human Rights Without Frontiers
Hungarian Civil Liberties Union Index on Censorship International Partnership for Human Rights (IPHR) International Service for Human Rights (ISHR) Internautas JUMEN Justice & Peace La Quadrature du Net Media
Development Centre Miklos Haraszti (Former OSCE Media Representative) Modern Poland Foundation Netherlands Helsinki Committee One World Platform Open Observatory of Network Interference (OONI) Open Rights Group (ORG) OpenMedia
Panoptykon Plataforma en Defensa de la Libertad de Información (PDLI) Reporters without Borders (RSF) Rights International Spain South East Europe Media Organisation (SEEMO) South East European Network for Professionalization of
Media (SEENPM) Statewatch The Right to Know Coalition of Nova Scotia (RTKNS) Xnet
|
|
EU is getting heavy with internet giants who refuse to censor content that the EU does not like
|
|
|
| 17th March 2017
|
|
| See
article from itpro.co.uk
|
Social media giants Facebook, Google and Twitter will be forced to change their terms of service for EU users within a month, or face hefty fines from European authorities, an official said on Friday. The move was initiated after politicians decided to blame their unpopularity on 'fake news' rather than their own incompetence and their failure to listen to the will of the people. The EU Commission sent letters to the three companies in December, stating that some terms of service were
in breach of EU protection laws and urged them to do more to prevent fraud on their platforms. The EU has also urged social media companies to do more when it comes to assessing the suitability of user generated content. The letters, seen by
Reuters, explained that the EU Commission also wanted clearer signposting for sponsored content, and that mandatory rights, such as cancelling a contract, could not be interfered with. Germany said this week it is working on a new law that would
see social media sites face fines of up to $53 million if they failed to strengthen their efforts to remove material that the EU does not like. German censorship minister Heiko Maas said: There must be as little space
for criminal incitement and slander on social networks as on the streets. Too few criminal comments are deleted and they are not erased quickly enough. The biggest problem is that networks do not take the complaints of their own users seriously
enough...it is now clear that we must increase the pressure on social networks.
|
|
Euro internet and telecoms regulator casts doubt on the legality of UK ISP website blocking systems
|
|
|
| 1st September 2016
|
|
| See article from theguardian.com
|
ISPs that block access to websites with adult content or block ads could be breaking EU guidelines on net neutrality even if customers opt in. EU regulations only allow providers to block content for three reasons: to comply with a member state's laws,
to manage levels of traffic across a network, or for security. Blocking websites with adult content has no clear legal framework in UK legislation, and providers have relied on customers opting in to protect themselves from falling foul of
the rules. However, an update to guidelines issued by EU body Berec says that even if a person indicates they want certain content to be blocked, it should be done on their device, rather than at a network level. The updated guidelines say:
With regard to some of the suggestions made by stakeholders about traffic management features that could be requested or controlled by end-users, Berec notes that the regulation does not consider that end-user consent
enables ISPs to engage in such practices at the network level. End-users may independently choose to apply equivalent features, for example via their terminal equipment or more generally on the applications running at the terminal
equipment, but Berec considers that management of such features at the network level would not be consistent with the regulation.
Frode Sorensen, co-chair of the Berec expert working group on net neutrality, said the updated guidance
made it clear that it had found no legal basis for using customer choice to justify blocking any content without national legislation or for reasons of traffic management or security. David Cameron said in October last year that he had secured an
opt-out from the rules enabling British internet providers to introduce porn filters. However, Sorensen said he was not aware of any opt-out, and the net neutrality rules introduced in November, after Cameron made his claim, said they applied to the
whole European Economic Area which includes the UK. |
|
|
The Internet Referral Unit has now been politely asking for online terrorism content to be removed for a year
|
|
|
| 5th July 2016
|
|
| See article from arstechnica.com
|
European Parliament considers EU wide internet website blocking
|
|
|
|
23rd June 2016
|
|
| See article from
arstechnica.co.uk
|
The European Parliament is currently considering EU wide website blocking powers. The latest draft of the directive on combating terrorism contains proposals on blocking websites that promote or incite terror attacks. Member states may take all
necessary measures to remove or to block access to webpages publicly inciting to commit terrorist offences, says text submitted by German MEP and rapporteur Monika Hohlmeier. Digital rights activists have argued that it leaves the door wide
open to over-blocking and censorship, as safeguards defending proportionality and fundamental rights can be skipped if governments opt for voluntary schemes implemented by ISPs. Amendments have been proposed that would require any takedown or web blocking to be subject to full judicial oversight rather than mere rubber-stamping. Last week, Estonian MEP Marju Lauristin told Ars she was very disappointed with the text, saying it was jeopardising freedom of expression as enshrined in the Charter of Fundamental Rights of the EU. The measure will be up for a vote by the civil liberties committee on 27th June. |
|
Tony Blair appointed to top role in organisation campaigning for a new Europe-wide blasphemy law disguised behind Orwellian doublespeak about tolerance
|
|
|
|
7th June 2015
|
|
| |
The misleadingly named European Council on Tolerance and Reconciliation (ECTR) is a campaign group backed by European Jewish leaders and a gaggle of former EU heads of state and government. It calls for pan-European legislation outlawing antisemitism and criticism of religion, coining the phrase 'group libel' to mirror the Muslim phrase 'defamation of religion'. The group recently published a document proposing to outlaw antisemitism as well as criminalising a host of other activities that the group deems to violate fundamental rights on religious, cultural, ethnic and gender grounds. The group cleverly heads the list with some justifiable prohibitions, female genital mutilation, forced marriage, polygamy, but then slips in extensive censorship and blasphemy items, eg criminalising xenophobia and creating a new crime of 'group libel', ie public defamation of ethnic, cultural or religious groups. The proposed legislation would also curb freedom of expression on grounds
of a bizarre definition of 'tolerance'. The document twists the meaning of tolerance to try and justify the end to the right of freedom of expression: Tolerance is a two-way street. Members of a group who wish to
benefit from tolerance must show it to society at large, as well as to members of other groups and to dissidents or other members of their own group. There is no need to be tolerant to the intolerant. This is especially important
as far as freedom of expression is concerned: that freedom must not be abused to defame other groups.
But the document goes much further, calling for the criminalisation of overt approval of a totalitarian ideology, xenophobia or
antisemitism. The group has now appointed Tony Blair as chairman. Comment: Tony Blair's plans to tackle extremism will stifle free speech See
article from indexoncensorship.org
Index on Censorship considers Tony Blair's proposals on hate speech to be dangerous and divisive. Blair has defended plans to lower the barriers on what constitutes incitement to violence and make Holocaust denial illegal. Jodie Ginsberg, CEO of Index on Censorship, said: These suggestions, far from protecting people, are likely to have the opposite effect, driving extremist views underground where they can fester and grow. Instead, we should be protecting
free expression, including speech that may be considered offensive or hateful, in order to expose and challenge those views. Individuals should always be protected from incitement to violence and that protection already exists in
law, as do stringent laws on hate speech. Further legislation is not needed.
Comment: NSS criticises Tony Blair's plans to entrench religion in public life across Europe See
article from
secularism.org.uk
The National Secular Society (NSS) has criticised Blair's proposals, ahead of his appointment as chair of the ECTR, as ill thought out and counter-productive. The former Prime Minister has defended proposals lowering the barriers to what
constitutes incitement to violence and pan-European plans to make Holocaust denial illegal and to entrench state funding for religious institutions into law. The NSS is adamant that measures such as 'group libel' would be
counter-productive, have a massive chilling effect on free speech and would be likely to restrict the open debate necessary to resolve problems. Keith Porteous Wood, NSS executive director, said: Britain already
has draconian legislation on religious insults -- a possible seven year jail term with a low prosecution threshold. Politicians have already called for the outlawing of Islamophobia, playing into the hands of those intent on closing down honest debate
about and within Islam. There is no need for more laws, and the ones we already have fail to adequately protect freedom of expression. A robust civil society with a deep commitment to free expression is our best hope for
challenging and countering bigoted narratives and misguided views. Driving extremist views underground will only allow them to fester and allow their proponents to present themselves as martyrs. Outlawing Holocaust denial
completely undermines the West's defence of freedom of speech at home and abroad and removes our moral authority to propound freedom of expression abroad. No one has the right in a plural society not to be offended and ideas should not be proscribed but
people should be defended from incitement to violence. A European-wide Holocaust denial law would be exhibit A in every response from dictators abroad - and Islamists at home - when we criticise their appalling human rights
records or challenge their rhetoric and beliefs.
The NSS has also accused Blair of being confused over the role of religion. For Mr Blair to dismiss those intent on justifying violence in
the name of religion as abusing religion and using it as a mask reveals that his enthusiasm for religion has once more led him to misunderstand one of the roots of this problem. While few would suggest that extremists' interpretations of their faith are
mainstream in today's society, it is naive and counterproductive to deny the role that such interpretations play in their religio-political motivations.
Comment: BBC to be forced to report the news
under a narrow set of acceptable values. See article from
ukip.org
Tony Blair's new role as chairman of the European Council on Tolerance and Reconciliation (ECTR) is in fact supporting an organisation that is a danger to free speech. Paul Nuttall, UKIP Deputy Leader and MEP for the North West, said the ECTR wants
public broadcasting companies like the BBC to be forced under legal statute to report the news under a narrow set of acceptable values. He explained: Tony Blair is joining an organisation that explicitly wants
to see legislative control of news output. The ECTR sent a framework statute to members of the European Parliament with the intention of it becoming law that frankly caused great concern. It included dictatorial powers to demand
that 'public broadcasting (television and radio) stations will devote a prescribed percentage of their programmes to promoting a climate of tolerance'. It also called for private and public media to be controlled by a Media Complaints Commission driven
by a narrow set of acceptable values. The ECTR also called for certain new 'thought' crimes to be regarded as aggravated criminal offences, such as the 'overt approval of a totalitarian ideology, xenophobia'. This is very
dangerous stuff and is utterly against the great tradition of free speech in this country. Do we really want our news reports to be dictated by a political organisation led by Blair? Even worse is that Mr Blair's organisation also
proposes re-education programmes, which brings to mind the 1930s. It proposes young people 'convicted of committing crimes listed will be required to undergo a rehabilitation programme designed to instil in them a culture of tolerance'. It's very
worrying that in championing the ECTR, Mr Blair appears to want to enforce an Orwellian-style 'Ministry of Information' regime upon the population without taking it to the ballot box.
Offsite Comment: Tony Blair
has just joined the crew of reckless muzzlers 7th June 2015. See article from
theguardian.com by Nick Cohen
Moves by Blair, Cameron and co to end tolerance of intolerance will create a country unable to be honest with itself. |
|
Jewish leaders in a disgraceful call for censorship and a European blasphemy law, subtly hiding it behind a ban on reprehensible cultural practices
|
|
|
|
28th January 2015
|
|
| See article from
theguardian.com |
European Jewish leaders, backed by former EU heads of state and government, are calling for pan-European legislation outlawing antisemitism and criticism of religion. A panel of four international Jewish leaders backed by the misleadingly named European Council on Tolerance and Reconciliation (ECTR) has spent three years drafting a 12-page document on 'tolerance'. They are lobbying to have it converted into law in the 28 countries of the EU. The proposal would outlaw antisemitism as well as criminalising a host of other activities that the group deems to violate fundamental rights on religious, cultural, ethnic and gender grounds. The group heads the list with some justifiable prohibitions, female genital mutilation, forced marriage, polygamy, but then slips in extensive censorship and blasphemy items, eg criminalising xenophobia and creating a new crime of 'group libel', ie public defamation of ethnic, cultural or religious groups. Then to try and generate a little support, the group extends the list to include women's and gay rights. The proposed legislation would also curb freedom of expression on grounds of a bizarre definition of 'tolerance'. The document twists the meaning of
tolerance to try and justify the end to the right of freedom of expression: Tolerance is a two-way street. Members of a group who wish to benefit from tolerance must show it to society at large, as well as to members
of other groups and to dissidents or other members of their own group. There is no need to be tolerant to the intolerant. This is especially important as far as freedom of expression is concerned: that freedom must not be abused
to defame other groups.
But the document goes much further, calling for the criminalisation of overt approval of a totalitarian ideology, xenophobia or antisemitism. Education in tolerance should be mandatory from
primary school to university, and for the military and the police, while public broadcasters must devote a prescribed percentage of their programmes to promoting a climate of 'tolerance'. The panel was chaired by Yoram Dinstein, a war
crimes expert, professor and former president of Tel Aviv university. The drafters are currently touring the parliaments of Europe trying to drum up support. |
|
Legal advice to the European Court of Justice confirms the legality of ISPs being ordered to block copyright infringing websites
|
|
|
|
27th November 2013
|
|
| See article from
torrentfreak.com |
In legal advice to the EU Court of Justice, Advocate General Pedro Cruz Villalon has announced that EU law allows for ISPs to be ordered to block their customers from accessing known copyright infringing sites. The opinion, which relates to a dispute
between a pair of movie companies and an Austrian ISP over the now-defunct site Kino.to, is not legally binding. However, the advice of the Advocate General is usually followed in such cases. The current dispute involves Austrian ISP UPC Telekabel
Wien and movie companies Constantin Film Verleih and Wega Filmproduktionsgesellschaft. The film companies complained that the ISP was providing its subscribers with access to Kino.to which enabled them to access their copyrighted material without
permission. Interim injunctions were granted in the movie companies' favor which required the ISP to block the site. However, the Austrian Supreme Court later issued a request to the Court of Justice to clarify whether a provider that provides
Internet access to those using an illegal website were to be regarded as an intermediary, in the same way that the host of an illegal site might. In his opinion, Advocate General Pedro Cruz Villalon said that the ISP of a user accessing a website
said to be infringing copyright should also be regarded as an intermediary whose services are used by a third party, such as the operator of an infringing website. This means that the ISP of an infringing site user can be subjected to a blocking
injunction, as long as the injunction contains specifics on the technical measures required. |
16th November 2011 | |
| The Council of the EU accepts directive with various measures for the protection of children
|
Based on press release [pdf] from
consilium.europa.eu See Directive [pdf] from
register.consilium.europa.eu
|
The Council of the EU has adopted a directive aimed at combating sexual abuse and exploitation of children as well as child pornography. The directive will harmonise around twenty relevant criminal offences, at the same time setting a high level of
penalties. The new rules which have to be transposed into national law within two years also include provisions to fight against online child pornography and sex tourism. They also aim to prevent convicted paedophiles moving to another EU member
state from exercising professional activities involving regular contacts with children. Finally, the directive introduces measures to protect the child victim during investigations and legal proceedings. Concerning online child pornography, the
text obliges member states to ensure the prompt removal of such websites hosted in their territory and to endeavour to obtain their removal if hosted outside of their territory. In addition, member states may block access to such web pages, but
must follow transparent procedures and provide safeguards if they make use of this possibility. Job vetting will also extend to a European wide level with a reliable check for EU nationals when applying for jobs related to the care of
children. In addition, within the EU, higher protection of children will be achieved once member states implement the directive and fully commit themselves to circulating data on disqualifications from their criminal records. It is currently very difficult to vet foreign EU nationals applying for jobs related to the care of children.
|
29th October 2011 | |
| European Parliament approves new measures against online child porn
|
See article from xbiz.com
|
The European Parliament has approved new rules that will implement tough penalties for offences related to child porn online. The resolution was adopted by the European Parliament with 541 votes in favor and two against. The directive will require EU countries to remove child porn websites hosted in their territory, and will allow them to block access to such pages hosted elsewhere. EU member states will have two years to make the rules into national law. The new rules will outline requirements on prevention, prosecution of offenders and protection of victims. Association of Sites Advocating Child Protection executive director Tim Henning said: It covers all the major bases and will make it less difficult for EU authorities to prosecute these
heinous crimes against children. It will also help to reduce the proliferation and consumption of child pornography content.
But he noted one troubling aspect of blocking suspected website pages: This needs to be completely transparent in order to prevent EU territories from blocking legal adult entertainment that may be mistaken for illegal child porn. The directive has stated this will be the case.
The rules set out penalties for about 20 criminal offenses. For instance, coercing a child into sexual actions or forcing a child into prostitution will be punishable by at least 10 years in prison. Child pornography producers will
face at least 3 years, and viewers of online child pornography will face at least 1 year.
|
13th July 2011 | |
| European Parliament committee rules that EU states can impose website blocking but it is not to be made mandatory
| See article from
publicaffairs.linx.net
|
The Committee on Civil Liberties, Justice and Home Affairs (LIBE) of the European Parliament has adopted a compromise text agreed with the Council and the Commission on the draft Child Sexual Exploitation Directive. The compromise text allows Member
States to introduce mandatory blocking measures for Internet sites containing child abuse images, but does not require them as the Council had proposed. Article 21: Measures against websites containing or disseminating child
pornography:
- Member States shall take the necessary measures to ensure the prompt removal of webpages containing or disseminating child pornography hosted in their territory and to endeavour to obtain the removal of such pages hosted outside
of their territory.
- Member States may take measures to block access to webpages containing or disseminating child pornography towards the Internet users in their territory. These measures must be set by
transparent procedures and provide adequate safeguards, in particular to ensure that the restriction is limited to what is necessary and proportionate, and that users are informed of the reason for the restriction. These safeguards shall also include the
possibility of judicial redress.
Civil liberties groups will be pleased at having defeated mandatory blocking across Europe, but disappointed at having failed to ensure that judicial authority is required before an ISP can be forced to block an Internet address. The draft
Directive is due to be adopted in the Autumn.
|
29th April 2011 | | |
EU proposal to create a Great Firewall of Europe
| See article from
telegraph.co.uk
|
Broadband providers have voiced alarm over an EU proposal to create a Great Firewall of Europe by blocking illicit web material at the borders of the bloc. The proposal emerged from an obscure meeting of the Council of the European
Union's Law Enforcement Work Party (LEWP), a forum for cooperation on issues such as counter terrorism, customs and fraud. The minutes from the meeting state: The Presidency of the LEWP presented its
intention to propose concrete measures towards creating a single secure European cyberspace with a certain virtual Schengen border and virtual access points whereby the Internet Service Providers (ISP) would block illicit contents on the
basis of the EU black-list . Delegations were also informed that a conference on cyber-crime would be held in Budapest on 12-13 April 2011.
Malcolm Hutty, head of public affairs at LINX, a cooperative of British ISPs,
said the plan appeared ill thought-out and confused. We take the view that network level filtering of the type proposed has been proven ineffective. Broadband providers say that illegal content should be removed at the source
by cooperation between police and web hosting firms because network blocking can easily be circumvented.
|
13th February 2011 | |
| Nutters rant against right of appeal for websites blocked under new EU proposed law
|
See article from
guardian.co.uk
|
The European parliament's civil liberties, justice and home affairs committee (LIBE) will meet in Strasbourg tomorrow, when it is expected to approve a controversial measure that would compel EU member states to inform internet publishers that their
images are to be deleted from the internet or blocked for reasons of child pornography. Publishers will also have to be informed of their right to appeal against any removal or blocking. The measure would make the UK's system for blocking
and removing child pornography without informing the publisher illegal. MEPs seem more concerned with the rights of child pornographers than they do with the rights of children who have been sexually abused to make their foul, illegal images,
said John Carr, the secretary of the Children's Charities Coalition on Internet Safety (And an adviser to the UK government on child internet safety!) Comment: Surely it is non-child porn
publishers that can appeal. If they can show that their sites are legal then it is absolutely correct that they should be able to prove their point. On the other hand, child pornographers would simply have no case on
which to make an appeal, their material is illegal, and will stay removed or blocked.
|
11th February 2011 | |
| European Parliament set for first vote on mandatory website blocking
|
See article from
europeanvoice.com
|
Cecilia Malmstrom, the European commissioner for home affairs, is worried that MEPs' amendments to a draft directive on the sexual abuse and exploitation of children would make it more difficult for EU member states to block access to websites carrying
child pornography. The European Parliament's civil liberties committee is to vote on the European Commission's proposal and MEPs' amendments on 14th February. At present, it is up to member states whether they want to block websites such
content. The Commission is seeking to introduce an obligation on all member states to block access in cases where their removal is impossible. A majority of member states back the mandatory blocking of internet sites but the measure has run into
trouble with MEPs. Germany, Ireland and Luxembourg have also openly rejected the measure. Some of the hundreds of amendments to the draft regulation put forward by MEPs would introduce EU-wide rules that would make it more difficult for member
states to continue blocking websites. Many MEPs are concerned about the implications of website blocking for freedom of speech. I am a liberal, I consider free speech as a fundamental value and I have fought for that all my life, so accusations
that I'm trying to censor the internet and limit freedom of speech really go to my heart because that is absolutely not what I'm trying to do, Malmstrom said. But I have seen those pictures; they have nothing to do with freedom of speech.
This is a horrible violation. She also rejected the slippery-slope argument -- the notion that once the EU imposed rules on blocking access to one type of website, it could do so for other types in the future. I intend in no way to propose any
other type of blocking for any other thing, but this particular crime demands particular attention.
|
15th January 2011 | |
| Euro ISPs unimpressed by EU proposed mandate of ISP website blocking
| See
article from theregister.co.uk See also
Blocking sites leads to less policing of criminal content from
pcpro.co.uk
|
The European Commission has drafted new laws to force ISPs to block child porn. The measure will be voted on by the European Parliament next month. The technical solutions envisaged are broadly based on arrangements in the UK, where all major ISPs block
access to child abuse websites named on a list maintained by the Internet Watch Foundation (IWF). If the laws are passed as proposed, the UK government will get powers to force the small ISPs who do not use the IWF blocklist – who serve less
than 2% of British internet users – to fall into line. Last year the Home Office abandoned a pledge to enforce 100% compliance. Although voluntary, the British system is not without controversy, and EuroISPA, the European ISP trade
association, is lobbying MEPs to reject the move to enforce it across the bloc. Malcolm Hutty, the President of EuroISPA, said: In order to make the Directive on child sexual exploitation as strong as possible, emphasis must
be placed on making swift notice and takedown of child sexual abuse material focused and effective. Blocking, as an inefficient measure, should be avoided. Law enforcement authorities' procedures for rapid communication to internet hosting providers of
such illegal material must be reviewed and bottlenecks eliminated.
|
4th June 2009 | | |
EU poised to appoint telecoms regulatory body
| Based on
article from mobiletoday.co.uk
|
The EU is poised to appoint a super-regulatory body that will bring together all 27 national regulators, including Ofcom in the UK, and enforce wide-ranging reforms to the industry.
The establishment of the Body of European Regulators for
Electronic Communications (BEREC) would bring national regulators together in an attempt to further integrate the European market and become the main advisory body to the Commission, the body that proposes legislation.
The creation of a European
telecoms regulator was pushed by EU commissioner Viviane Reding, who continues to campaign for lower data roaming rates around Europe.
Malcolm Harbour, West Midlands MEP and vice president of the European Parliament's science and technology unit,
was involved in proposals for the package and told Mobile that aside from issues about internet access, the rest of the reforms had already been agreed on in theory.
|
| |