
Internet News


2016: December


Commented Fake news...

Facebook outlines how its 'fake news' detection will work.


Link Here 31st December 2016
Full story: Facebook Censorship...Facebook quick to censor
Facebook has outlined its approach to 'fake news' in a blog post:

A few weeks ago we previewed some of the things we're working on to address the issue of fake news and hoaxes. We're committed to doing our part and today we'd like to share some updates we're testing and starting to roll out.

We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we're approaching this problem carefully. We've focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.

The work falls into the following four areas. These are just some of the first steps we're taking to improve the experience for people on Facebook. We'll learn from these tests, and iterate and extend them over time.

We're testing several ways to make it easier to report a hoax if you see one on Facebook, which you can do by clicking the upper right hand corner of a post. We've relied heavily on our community for help on this issue, and this can help us detect more fake news.

We believe providing more context can help people decide for themselves what to trust and what to share. We've started a program to work with third-party fact checking organizations that are signatories of Poynter's International Fact Checking Code of Principles. We'll use the reports from our community, along with other signals, to send stories to these organizations. If the fact checking organizations identify a story as fake, it will get flagged as disputed and there will be a link to the corresponding article explaining why. Stories that have been disputed may also appear lower in News Feed.

It will still be possible to share these stories, but you will see a warning that the story has been disputed as you share. Once a story is flagged, it can't be made into an ad and promoted, either.

We're always looking to improve News Feed by listening to what the community is telling us. We've found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. We're going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it.
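
Facebook has not published how this signal is actually computed, but the idea can be sketched roughly as follows: compare each article's post-read share rate with the rest of the corpus and flag the low outliers for demotion. All names and thresholds below are hypothetical, purely for illustration.

    # Hypothetical sketch of the 'read but rarely shared' signal; Facebook has
    # not published its ranking code, so names and thresholds are illustrative.
    from dataclasses import dataclass
    from statistics import mean, pstdev

    @dataclass
    class ArticleStats:
        url: str
        reads: int              # users who clicked through to the article
        shares_after_read: int  # of those readers, how many then shared it

    def share_rate(a: ArticleStats) -> float:
        return a.shares_after_read / a.reads if a.reads else 0.0

    def misleading_outliers(articles: list[ArticleStats], z_cut: float = -2.0) -> list[str]:
        """Return articles whose post-read share rate is far below the average,
        a possible sign that the article misled the people who read it."""
        rates = [share_rate(a) for a in articles]
        mu, sigma = mean(rates), pstdev(rates)
        if sigma == 0:
            return []
        return [a.url for a, r in zip(articles, rates) if (r - mu) / sigma < z_cut]

Articles flagged this way would then be demoted in News Feed ranking rather than removed outright.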

We've found that a lot of fake news is financially motivated. Spammers make money by masquerading as well-known news organizations, and posting hoaxes that get people to visit their sites, which are often mostly ads. So we're doing several things to reduce the financial incentives. On the buying side we've eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications. On the publisher side, we are analyzing publisher sites to detect where policy enforcement actions might be necessary.

It's important to us that the stories you see on Facebook are authentic and meaningful. We're excited about this progress, but we know there's more to be done. We're going to keep working on this problem for as long as it takes to get it right.

Offsite Article: Fake news detection on the cheap

The Guardian investigates how Facebook's trumpeted 'fake news' detection relies on unpaid volunteers.

17th December 2016. See  article from theguardian.com

Offsite Comment: Don't make Facebook the ministry of truth

The fake-news panic is a threat to internet freedom.

31st December 2016. See  article from spiked-online.com by Naomi Firsht

 

 

Miserable Bangladesh...

Bangladesh internet censors specify their first 500 porn websites for blocking


Link Here 28th December 2016
Full story: Internet Censorship in Bangladesh...Internet censors to track down supposed blasphemy
The Bangladesh government has started an initiative to block several hundred pornography websites and already sent a list of more than 500 sites, mostly locally hosted, to ISPs.

The Bangladesh Telecommunication Regulatory Commission (BTRC) sent the list to all the mobile phone operators, international gateway operators, international internet gateway operators, interconnection exchange operators, internet service providers and other telecom service providers to block the domains from their respective networks.

After receiving the list the operators have started to comply with the directive. However, a few of the websites could not be blocked immediately due to technical challenges, said BTRC officials.

The government actually wants to create massive awareness about the issue and to put as many hurdles as possible in the way of browsing those sites. Tarana Halim, state minister for post and telecommunications division, said:

Initially we have decided to block around 500 websites that contain pornography, obscene pictures and video contents. In the first phase we will go for blocking the locally hosted sites

The Daily Star has obtained a copy of an email that contained a list of 510 websites branded as pornographic by an 'offensive online content control committee'.

 

 

Searching for the wrong question...

Google's algorithms are accused of returning links to pages relevant to the search rather than the 'right' pages.


Link Here 26th December 2016
Full story: Google Censorship...Google censors adult material froms its websites
Given that the Holocaust is historical fact supported by massive amounts of evidence, it hardly seems likely that authoritative websites will feel the need to debate the existence of the event. The debate only exists on contrarian websites. You wouldn't really expect Wikipedia to lead with the phrase: yes, the Holocaust really did exist.

So searching for the phrase 'did the Holocaust happen?' is hardly likely to strike many close matches on authoritative websites. And yes, it will find many matches on the contrarian websites; after all, they are the only websites asking that question.

A Guardian commentator, Carole Cadwalladr, asked that question and was somehow 'outraged' that Google didn't return links to an entirely different question that was more in line with what Cadwalladr wanted to see.

It would be a bad day indeed if Google dictated only morally upright answers. Searches for porn would return links to anti-porn activists and a search for local pubs would return links to religious preachers. People would soon seek other solutions to their searching. Even holocaust campaigners would get caught out, eg if they were seeking out websites to challenge.

Surely nobody would gain from Google refusing to comply with search requests as written.

Google has now responded to the Cadwalladr article saying that it is thinking deeply about ways to improve search. A spokesman said:

This is a really challenging problem, and something we're thinking deeply about in terms of how we can do a better job

Search is a reflection of the content that exists on the web.

The fact that hate sites may appear in search results in no way means that Google endorses these views.

Editor of news site Search Engine Land, Danny Sullivan, said Google was keen to come up with a solution that was broadly applicable across all searches, rather than just those that have been noticed by users:

It's very easy to take a search here and there and demand Google change something, and then the next day you find a different search and say, 'why didn't you fix that?'

 

 

Updated: Free speech upheld even on the backpage...

CEO of Backpage.com website cleared of prostitution offences over adult service adverts posted by website users


Link Here 26th December 2016
Full story: Adult Services Ads in the US...US politicians target small ads for sex workers

Last month, a California judge tentatively ruled that he would dismiss charges lodged by California's attorney general against Backpage.com's chief executive and two of its former owners. After an interim scare, the judge has now issued a final judgement confirming the previous ruling and the charges have been dismissed.

The CEO, Carl Ferrer, was charged with pimping a minor, pimping, and conspiracy to commit pimping in connection with online advertisements posted on the online ads portal. California's attorney general Kamala Harris claimed that the advertisements amounted to solicitation of prostitution.

However Judge Michael Bowman agreed with the defendants, including former owners Michael Lacey and James Larkin, that they were protected, among other things, by the Communications Decency Act, and hence they were not liable for third-party ads posted by others. The ruling said:

By enacting the CDA, Congress struck a balance in favor of free speech by providing for both a foreclosure from prosecution and an affirmative defense at trial for those who are deemed an internet service provider.

Update: Double Jeopardy

26th December. See  article from theguardian.com

California attorney general Kamala Harris is pursuing new charges against Backpage.com website

The fresh charges, which attorney general Kamala Harris claims are based on new evidence, come after an earlier case against the website was thrown out of court.

The website advertises escort services and seems to have wound up Harris, who claimed that the site operated a hotbed of illicit and exploitative activity.

Harris said she had charged Backpage executives Carl Ferrer, Michael Lacey and James Larkin with 13 counts of pimping and conspiracy to commit pimping. They also are charged with 26 counts of money laundering. In the latest case, filed in Sacramento County superior court, Harris claims Backpage illegally funnelled money through multiple companies and created various websites to get around banks that refused to process transactions. (This does not seem a particularly surprising, or necessarily bad thing to do).

She also alleged that the company used photos of women from Backpage on other sites without their permission in order to increase revenue and knowingly profited from the proceeds of prostitution. And from what Harris said in a statement it seems that hers is a morality campaign against sex work. Harris said:

By creating an online brothel -- a hotbed of illicit and exploitative activity -- Carl Ferrer, Michael Lacey, and James Larkin preyed on vulnerable victims, including children, and profited from their exploitation.

 

 

Offsite Article: Abuse of trust...


Link Here 26th December 2016
No matter how much governments spout bollox about mass snooping being used only to detect the likes of terrorism, the authorities end up sharing the data with Tom, Dick and Harry for the most trivial of reasons.

See article from theguardian.com

 

 

Update: Crap law in the making...

Open Rights Group demolishes government's internet censorship tweak to catch porn carrying Twitter and Reddit


Link Here 23rd December 2016

Is the government misleading the Lords about blocking Twitter?

Last week we reported that the UK government expect the BBFC to ask social media providers, such as Twitter, to block the use of their service by accounts that are associated with porn sites that fail to verify the age of their users.

The Bill is even worse than we illustrated. The definition of a "pornographic website" in Clause 15 (2) is purely a site that operates on a "commercial basis". This could catch any site--including Twitter, Reddit, Tumblr--where pornography can be found. The practical limit would therefore purely be down to the discretion of the regulator, the BBFC, as to the kind of commercial sites they wanted to force to use Age Verification. However, the BBFC does not seem to want to require Twitter or Reddit to apply age verification--at least, not yet.

However, we also got one part wrong last week. In relation to Twitter, Reddit and other websites where porn sites might promote their content, the Bill contains a power to notify these "ancillary services" but has no specific power to enforce the notifications.

In other words, they expect Twitter, Google, Facebook, Tumblr and other companies to voluntarily block accounts within the UK, without a specific legal basis for their action.

This would create a toxic situation for these companies. If they fail to "act" on the "notifications", these services will leave themselves open to the accusation that they are failing to protect children, or actively "supplying" pornography to minors.

On the other hand, if they act on these notices, they will rightly be accused by ourselves and those that are censored of acting in an unaccountable, arbitrary manner. They will not have been legally obliged to act by a court; similar content will remain unblocked; and there will be no clear remedy for someone who wished to contest a "notification". Liability for the blocks would remain with the company, rather than the BBFC.

The government has not been clear with the Lords that this highly unclear situation is the likely result of notifications to Twitter--rather than account blocks, as they have suggested.

There are very good reasons not to block accounts after a mere notification. For instance in this case, although sites can contest a classification at the BBFC, and an internal appeals process will exist, there is no external appeal available, other than embarking on an expensive judicial review. It is not clear that a classification as pornography should automatically lead to action by ancillary services, not least because compliance automatically results in the same content being made available. To be clear, the bill does not aim to remove pornography from Twitter, Reddit users or search engines.

Why then, has the government drafted a bill with this power to notify "ancillary services", but no method to enforce? The reason appears to be that payment providers in particular have a long standing agreement amongst themselves that they will halt payments when they are notified that someone is taking payments for unlawful activity. Similarly, large online ad networks have a similar process of accepting notifications.

There is therefore no need to create enforcement mechanisms for these two kinds of "ancillary providers". (There are pitfalls with their approach--it can lead to censorship and unwarranted damage to businesses--but let us leave that debate aside for now.)

It seems clear that, when the bill was written, there was no expectation that "ancillary providers" would include Twitter, Yahoo, or Google, so no enforcement power was created.

The government, in their haste, has agreed with the BBFC that they should be able to notify Twitter, Google, Yahoo and other platforms. They have agreed that BBFC need not take on a role of enforcement through court orders.

The key point is that the Lords are being misled by the government as things stand. Neither the BBFC nor the government has explored with Parliamentarians what the consequences of expanding the notion of "ancillary providers" are.

The Lords need to be told that this change means that:

  • the notices are unenforceable against Internet platforms;

  • they will lead to public disputes with the companies;

  • they make BBFC's decisions relating to ancillary providers highly unaccountable as legal responsibility for account blocks rests with the platforms.

It appears that the BBFC do not wish to be cast in the role of "national censor". They believe that their role is one of classification, rather than enforcement. However, the fact that they also wish to directly block websites via ISPs rather flies in the face of their self-perception, as censorship is most clearly what they will be engaging in. Their self-perception is also not a reason to pass the legal buck onto Internet platforms who have no role in deciding whether a site fails to meet regulatory requirements.

This mess is the result of rushing to legislate without understanding the problems involved. The obvious thing to do is to limit the impact of the "ancillary services" approach by narrowing the definition to exclude all but payment providers and ad networks. The alternative--to create enforcement powers against a range of organisations--would need to establish full accountability for the duties imposed on ancillary providers in a court, something that the BBFC seems to wish to avoid.

Or of course, the government could try to roll back its mistaken approach entirely, and give up on censorship as a punishment: that would be the right thing to do. Please sign our petition if you agree .

 

 

Update: No comment...

China bans people from posting their own social media videos about current affairs


Link Here 23rd December 2016
Full story: Internet Censorship in China...All pervading Chinese internet censorship

China has banned its internet users from sharing videos about current events on social media unless they come from official sources, media reports said.

China's State Administration of Press, Publication, Radio, Film and Television said in a notice that the Chinese social media platforms WeChat and Weibo were not allowed to disseminate user-generated audio or video programmes about current events.

The news landed quietly among China's internet users, with only a handful discussing the new rules on Weibo, many seemingly resigned to ever increasing censorship.

 

 

Update: But only one domain name banned for an offensive name...

Nominet censors 8000 domain names mostly at the behest of the UK's copyright police


Link Here 22nd December 2016
Nominet, the Registry responsible for running the .UK domain name space, has recently published a report on the number of domain names it has suspended further to requests from law enforcement agencies. The figures show that during the 12 month period from 1 November 2015 to 31 October 2016, over 8,000 domain names were suspended. This is more than twice the number of domain name suspensions during the preceding 12 month period in 2014/2015.

A revised registration policy, which came into effect in May 2014, made it clear that the use of a domain name under .UK for criminal purposes is not permitted and that such domain names may be suspended. Police or law enforcement agencies (LEAs) are able to notify Nominet of any .UK domain names being used for criminal activity.

The suspension of 8,049 domain names from 1 November 2015 to 31 October 2016 was the result of notifications from eight different LEAs, ranging from the Counter Terrorism Internet Referral Unit to the UK Trading Standards body. The majority of the requests came from the UK Police Intellectual Property Crime Unit which submitted 7,617 suspension requests.

In addition to this, the revised registration policy also prohibited the registration of domain names that appear to relate to a serious sexual offence. Such domain names are termed offensive names under the policy. Thus Nominet, in its sole discretion, will not allow a domain name to remain registered if it appears to indicate, comprise or promote a serious sexual offence and where there is no legitimate use of the domain name which could be reasonably contemplated. As a result of this, all new domain name registrations are run through an automated process and those that are identified as potentially problematic are highlighted. These domain names are then verified manually to determine whether they are in breach of Nominet's offensive names policy.

It is interesting to note that while the automated process to identify offensive domain names highlighted 2,407 cases, this resulted in only one suspension.
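
Nominet has not said how the automated pass works, but a naive keyword screen along the lines sketched below would explain the gap between 2,407 names flagged and a single suspension: substring matching throws up plenty of innocent registrations that the manual check then clears. The terms and function names are invented for illustration.

    # Hypothetical sketch of an automated 'offensive name' screen; the real
    # Nominet process is not public, and these terms are placeholders only.
    SUSPECT_TERMS = ["rape", "childporn"]

    def flag_for_manual_review(domain: str) -> bool:
        """True if the new .uk registration should be queued for a human check."""
        label = domain.lower().removesuffix(".uk")
        return any(term in label for term in SUSPECT_TERMS)

    # A naive substring match flags innocent names too, e.g. 'grapefarm.uk'
    # contains 'rape', which is why almost every flagged name survives review.
    assert flag_for_manual_review("grapefarm.uk")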

 

 

Offsite Article: From a friend of a friend...


Link Here 22nd December 2016
Full story: Facebook Censorship...Facebook quick to censor
German newspaper reveals some of Facebook's secret censorship rules

See article from international.sueddeutsche.de

 

 

Update: But will the government listen?...

European Court of Justice finds that the UK mass snooping regime is too broad and must be reined in


Link Here 21st December 2016
The European Court of Justice has passed judgement on several linked cases concerning European laws requiring that ISPs retain extensive records of all phone and internet communications. These include a challenge brought by Labour's Tom Watson. The court wrote in a press release:

The Member States may not impose a general obligation to retain data on providers of electronic communications services

EU law precludes a general and indiscriminate retention of traffic data and location data, but it is open to Member States to make provision, as a preventive measure, for targeted retention of that data solely for the purpose of fighting serious crime, provided that such retention is, with respect to the categories of data to be retained, the means of communication affected, the persons concerned and the chosen duration of retention, limited to what is strictly necessary. Access of the national authorities to the retained data must be subject to conditions, including prior review by an independent authority and the data being retained within the EU.

In today's judgment, the Court's answer is that EU law precludes national legislation that prescribes general and indiscriminate retention of data.

The Court confirms first that the national measures at issue fall within the scope of the directive. The protection of the confidentiality of electronic communications and related traffic data guaranteed by the directive, applies to the measures taken by all persons other than users, whether by private persons or bodies, or by State bodies.

Next, the Court finds that while that directive enables Member States to restrict the scope of the obligation to ensure the confidentiality of communications and related traffic data, it cannot justify the exception to that obligation, and in particular to the prohibition on storage of data laid down by that directive, becoming the rule.

Further, the Court states that, in accordance with its settled case-law, the protection of the fundamental right to respect for private life requires that derogations from the protection of personal data should apply only in so far as is strictly necessary. The Court applies that case-law to the rules governing the retention of data and those governing access to the retained data.

The Court states that, with respect to retention, the retained data, taken as a whole, is liable to allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained.

The interference by national legislation that provides for the retention of traffic data and location data with that right must therefore be considered to be particularly serious. The fact that the data is retained without the users of electronic communications services being informed of the fact is likely to cause the persons concerned to feel that their private lives are the subject of constant surveillance. Consequently, only the objective of fighting serious crime is capable of justifying such interference.

The Court states that legislation prescribing a general and indiscriminate retention of data does not require there to be any relationship between the data which must be retained and a threat to public security and is not restricted to, inter alia, providing for retention of data pertaining to a particular time period and/or geographical area and/or a group of persons likely to be involved in a serious crime. Such national legislation therefore exceeds the limits of what is strictly necessary and cannot be considered to be justified within a democratic society, as required by the directive, read in the light of the Charter.

The Court makes clear however that the directive does not preclude national legislation from imposing a targeted retention of data for the purpose of fighting serious crime, provided that such retention of data is, with respect to the categories of data to be retained, the means of communication affected, the persons concerned and the retention period adopted, limited to what is strictly necessary. The Court states that any national legislation to that effect must be clear and precise and must provide for sufficient guarantees of the protection of data against risks of misuse. The legislation must indicate in what circumstances and under which conditions a data retention measure may, as a preventive measure, be adopted, thereby ensuring that the scope of that measure is, in practice, actually limited to what is strictly necessary. In particular, such legislation must be based on objective evidence which makes it possible to identify the persons whose data is likely to reveal a link with serious criminal offences, to contribute to fighting serious crime or to preventing a serious risk to public security.

As regards the access of the competent national authorities to the retained data, the Court confirms that the national legislation concerned cannot be limited to requiring that access should be for one of the objectives referred to in the directive, even if that objective is to fight serious crime, but must also lay down the substantive and procedural conditions governing the access of the competent national authorities to the retained data. That legislation must be based on objective criteria in order to define the circumstances and conditions under which the competent national authorities are to be granted access to the data. Access can, as a general rule, be granted, in relation to the objective of fighting crime, only to the data of individuals suspected of planning, committing or having committed a serious crime or of being implicated in one way or another in such a crime. However, in particular situations, where for example vital national security, defence or public security interests are threatened by terrorist activities, access to the data of other persons might also be granted where there is objective evidence from which it can be inferred that that data might, in a specific case, make an effective contribution to combating such activities.

Further, the Court considers that it is essential that access to retained data should, except in cases of urgency, be subject to prior review carried out by either a court or an independent body. In addition, the competent national authorities to whom access to retained data has been granted must notify the persons concerned of that fact.

Given the quantity of retained data, the sensitivity of that data and the risk of unlawful access to it, the national legislation must make provision for that data to be retained within the EU and for the irreversible destruction of the data at the end of the retention period.

The view of the authorities

David Anderson, the Independent Reviewer of Terrorism Legislation, gives a lucid response outlining the government's case for mass surveillance. However, the official justification is easily summarised: it clearly assists in the detection of serious crime. He simply does not mention that the government, having justified grabbing the data on grounds of serious crime detection, will share it willy-nilly with all sorts of government departments for their own convenience, way beyond the reasons set out in the official justification.

And when the authorities talk about their fight against 'serious' crime, recent governments have been updating legislation to redefine practically all crimes as 'serious' crimes. Eg possessing a single spliff may in practice be a trivial crime, but the law on possession has a high maximum sentence that qualifies it as a 'serious' crime. It does not become trivial until it goes to court and a trivial punishment has been handed down. So mass snooping data could easily be justified to track down trivial drug users.

See  article from terrorismlegislationreviewer.independent.gov.uk

The Open Rights Group comments

See  article from openrightsgroup.org

The judgment relates to a case brought by Deputy Leader of the Labour Party, Tom Watson MP, over intrusive data retention powers. The ruling says that:

  • Blanket data retention is not permissible
  • Access to data must be authorised by an independent body
  • Only data belonging to people who are suspected of serious crimes can be accessed
  • Individuals need to be notified if their data is accessed.

At present, none of these conditions are met by UK law.

Open Rights Group intervened in the case together with Privacy International, arguing that the Data Retention and Investigatory Powers Act (DRIPA), rushed through parliament in 2014, was incompatible with EU law. While the Judgment will no longer affect DRIPA, which expires at the end of 2016, it has major implications for the Investigatory Powers Act.

Executive Director Jim Killock said:

The CJEU has sent a clear message to the UK Government: blanket surveillance of our communications is intrusive and unacceptable in a democracy.

The Government knew this judgment was coming but Theresa May was determined to push through her snoopers' charter regardless. The Government must act quickly to re-write the IPA or be prepared to go to court again.

Data retention powers in the Investigatory Powers Act will come into effect on 30 Dec 2016. These mean that ISPs and mobile phone providers can be obliged to keep data about our communications, including a record of the websites we visit and the apps we use. This data can be accessed by the police but also a wide range of organisations like the Food Standards Agency, the Health and Safety Executive and the Department of Health.

 

 

If it comes to term...

French lawmakers introduce bill to ban 'false' information on anti-abortion websites


Link Here 21st December 2016
The French National Assembly and the Senate have launched the initial steps to create the crime of online obstruction of abortion. The final bill is scheduled to be passed into law in February 2017.

The new crime would affect websites that discuss the possible psychological effects of abortion and those that promote alternatives to terminating pregnancy. Although the new crime would not be a ban, it has raised concern among those troubled over the possible censorship of information on pro-life sites.

The law imposes a maximum of two-years' imprisonment for putting up 'false' information on abortion online, plus a fine of 30,000 Euros.

Maybe interesting times lie ahead if religious prohibitions on abortion are contested as 'false' information, eg claims that God will punish those who opt for abortion.

 

 

Update: Countering take down abuse...

France debates bill to protect ISPs from the use of take down orders as a means of censorship


Link Here 20th December 2016
Full story: Internet Censorship in France...Web blocking in the name of child protection

France is considering appointing an official internet ombudsman to investigate complaints about online material in order to prevent excessive censorship and preserve free speech. A bill establishing a content qualification assessment procedure has been tabled in the French senate.

Dan Shefets, a Danish lawyer, explained one of the issues targeted by the bill:

ISPs face both penal and civil liability as soon as they are made aware of allegedly illicit content. One consequence of such liability is that smaller companies take down such content for fear of later sanctions.

The aim is to provide a simple procedure that will support firms operating online who are uncertain of their legal liabilities and to prevent over-zealous removal or censorship of material merely because it is the subject of a complaint. It could be copied by other European jurisdictions.

The idea is that a rapid response from the internet ombudsman would either order the material to be taken down or allow it to remain. As long as ISPs complied with the rulings, they would not face any fine or punishment.

 

 

Search Engine Optimism...

Labour wants Google to reveal its key to the universe


Link Here 19th December 2016
Labour's industrial spokesperson has called for the algorithms used by technology firms to be made transparent and subject to regulation.

Shadow minister Chi Onwurah wants to see greater scrutiny of the algorithms that now control everything from the tailored news served to Facebook users to the order in which websites are presented in Google search. She said:

Algorithms aren't above the law. The outcomes of algorithms are regulated -- the companies which use them have to meet employment law and competition law. The question is, how do we make that regulation effective when we can't see the algorithm?

She added in a letter to the Guardian:

Google and others argue their results are a mirror to society, not their responsibility. Google, Facebook and Uber need to take responsibility for the unintended consequences of the algorithms and machine learning that drive their profits. They can bring huge benefits and great apps, but we need a tech-savvy government to minimise the downside by opening up algorithms to regulation as well as legislating for greater consumer ownership of data and control of the advertising revenue it generates.

Labour's industrial paper, due to be published after the Christmas break, will call for suggestions on how tech firms could be more closely supervised by government.

 

 

State ransomware...

South Carolina lawmaker proposes that all computers sold in the state should be pre-loaded with some nasty internet censorship malware that can be removed by proving age and paying a ransom


Link Here 19th December 2016
Full story: Internet Censorship in USA...Domain name seizures and SOPA

A bill filed this month by state Representative Bill Chumley would require sellers to install a digital censorship hijack on computers and other devices that access the internet to prevent the viewing of what the lawmaker considers obscene content.

The proposal also would prohibit access to any online resource that supports sex work and would require manufacturers or sellers to block any websites that supposedly facilitate trafficking.

Both sellers and buyers could get around the limitation, for a ransom fee. The bill would fine manufacturers that sell a device without the blocking system, but they could opt out by paying $20 per device sold. Buyers could also verify their age and pay $20 to remove the censorship software.

Money collected would go toward the Attorney General's Office's pet project  of a human trafficking task force.

Chumley's bill has been referred to the House Judiciary Committee. Legislators return to Columbia for a new session next month.

 

 

Update: Peeling away options for circumvention of internet censorship...

Turkey blocks Tor


Link Here 19th December 2016
Full story: Internet Censorship in Turkey...Website blocking insults the Turkish people
Turkey's President Erdogan has stepped up his repression of dissent by blocking the Tor network in the country.

Watchdog group Turkey Blocks has confirmed that Turkey is blocking the Tor anonymity network's direct access mode for most users. You can still use a bridge mode for now, but there are hints that internet providers might be hurting performance even then. Bridges are unlisted relays and may require a bit of searching out.
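
Turkey Blocks does not describe the workaround in detail, but using a bridge essentially means pointing the Tor client at an unlisted relay instead of the public directory. A minimal sketch using the Python stem library is below; the obfs4proxy path and the bridge line are placeholders (real bridge addresses are distributed at bridges.torproject.org), so treat it as an illustration rather than a recipe.

    # Minimal sketch: start Tor with an unlisted bridge instead of public relays.
    # Assumes the stem library and an obfs4proxy binary are installed; the
    # bridge line below is a placeholder, not a real relay.
    import stem.process

    BRIDGE_LINE = "obfs4 203.0.113.5:443 <FINGERPRINT> cert=<CERT> iat-mode=0"

    tor_process = stem.process.launch_tor_with_config(
        config={
            "SocksPort": "9050",
            "UseBridges": "1",
            "ClientTransportPlugin": "obfs4 exec /usr/bin/obfs4proxy",
            "Bridge": BRIDGE_LINE,
        },
    )
    # Applications using the SOCKS proxy on localhost:9050 now reach Tor via the
    # unlisted bridge rather than a publicly listed (and blockable) entry node.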

The restrictions come alongside a recent government ban on virtual private network services.

 

 

Fair play...

US enacts a new law to protect online reviewers from punishment by the companies being reviewed


Link Here 16th December 2016

A new US law has been signed into effect that bars businesses from punishing customers for giving bad reviews.

The Consumer Review Fairness Act ( HR 5111 ) voids any contract that involves prohibitions or penalties related to poor online reviews.

The aim of the bill, written by Reps. Leonard Lance (R-NJ) and Joseph Kennedy (D-MA), is to stop companies from imposing penalties on consumers who would leave negative comments on sites such as Yelp. Lance explained:

Consumers in the 21st century economy should be able to post, comment and tweet their honest and accurate feedback without fear of retribution. Too many companies are burying non-disparagement clauses in fine print and going after consumers when they post negative feedback online.

With the new law in effect, any contract that attempts to tie in a clause calling for a fine or penalty for a review would be rendered void and legally unenforceable. The law would also prevent a business from asserting intellectual property claims on the content of a review, provided no trade secrets or personally identifiable information are involved. However, the bill does make exceptions in the case of reviews deemed to be libelous or slanderous, and also removes any protections for reviews and posts that are found to be false or misleading.

 

 

Offsite Article: Censor's Charter...


Link Here 16th December 2016
Open Rights Group summarises the current state of play revealing how porn sites will be blocked for UK internet users

See article from openrightsgroup.org

 

 

Update: The Censored Digital Economy...

Lords 2nd Reading debate of age verification and censorship for worldwide porn websites


Link Here 15th December 2016
The Lords had their first debate on the Digital Economy Bill, which includes laws to require age verification as well as an extension of outdated police and BBFC censorship rules to the internet.

Lords inevitably queued up to support the age verification requirements. However, a couple of the lords made cautionary remarks about the privacy issues of websites being able to build up dangerous databases of personal ID information about porn users.

A couple of lords also spoke out against the BBFC/police/government censorship prohibitions being included in the bill. It was noted that these rules are outdated, disproportionate and perhaps require further debate in another bill.

As an example of these points, the Earl of Erroll (cross bencher) said:

My Lords, I welcome the Bill because it has some very useful stuff in it -- but, like everything else, it might benefit from some tweaking. Many other speakers mentioned the tweaks that need to be made, and if that happens I think that we may end up with quite a good Bill.

I will concentrate on age verification because I have been working on this issue with a group for about a year and three-quarters. We spotted that its profile was going to be raised because so many people were worried about it. We were the first group to bring together the people who run adult content websites -- porn websites -- with those who want to protect children. The interesting thing to come out quite quickly from the meetings was that, believe it or not, the people who run porn sites are not interested in corrupting children because they want to make money. What they want are adult, middle-aged people, with credit cards from whom they can extract money, preferably on a subscription basis or whatever. The stuff that children are getting access to is what are called teaser adverts. They are designed to draw people in to the harder stuff inside, you might say. The providers would be delighted to offer age verification right up front so long as all the others have to comply as well -- otherwise they will get all the traffic. Children use up bandwidth. It costs the providers money and wastes their time, so they are very happy to go along with it. They will even help police it, for the simple reason that it will block the opposition. It is one of the few times I approve of the larger companies getting a competitive advantage in helping to police the smaller sites that try not to comply.

One of the things that became apparent early on was that we will not be able to do anything about foreign sites. They will not answer mail or do anything, so blocking is probably the only thing that will work. We are delighted that the Government has gone for that at this stage. Things need to get blocked fast or sites will get around it. So it is a case of block first, appeal later, and we will need a simple appeals system. I am sure that the BBFC will do a fine job, but we need something just in case.

Another thing that came back from the ISPs is that they want more clarity about what should be blocked, how it will be done and what they will have to do. There also needs to be indemnity. When the ISPs block something for intellectual property and copyright reasons, they are indemnified. They would need to have it for this as well, or there will be a great deal of reluctance, which will cause problems.

The next thing that came up was censorship. The whole point of this is we want to enforce online what is already illegal offline. We are not trying to increase censorship or censor new material. If it is illegal offline, it should be illegal online and we should be able to do something about it. This is about children viewing adult material and pornography online. I am afraid this is where I slightly disagree with the noble Baroness, Lady Kidron. We should decide what should be blocked elsewhere; we should not use the Bill to block other content that adults probably should not be watching either. It is a separate issue. The Bill is about protecting children. The challenge is that the Obscene Publications Act has some definitions and there is ATVOD stuff as well. They are supposed to be involved with time. CPS guidelines are out of step with current case law as a result of one of the quite recent cases -- so there is a bit of a mess that needs clearing up. This is not the Bill to do it. We probably need to address it quite soon and keep the pressure on; that is the next step. But this Bill is about keeping children away from such material.

The noble Baroness, Lady Benjamin, made a very good point about social platforms. They are commercial. There are loopholes that will get exploited. It is probably unrealistic to block the whole of Twitter -- it would make us look like idiots. On the other hand, there are other things we can do. This brings me to the point that other noble Lords made about ancillary service complaints. If we start to make the payment service providers comply and help, they will make it less easy for those sites to make money. They will not be able to do certain things. I do not know what enforcement is possible. All these sites have to sign up to terms and conditions. Big retail websites such as Amazon sell films that would certainly come under this category. They should put an age check in front of the webpage. It is not difficult to do; they could easily comply.

We will probably need an enforcer as well. The BBFC is happy to be a regulator, and I think it is also happy to inform ISPs which sites should be blocked, but other enforcement stuff might need to be done. There is provision for it in the Bill. The Government may need to start looking for an enforcer.

Another point that has come up is about anonymity and privacy, which is paramount. Imagine the fallout if some hacker found a list of senior politicians who had had to go through an age-verification process on one of these websites, which would mean they had accessed them. They could bring down the Government or the Opposition overnight. Noble Lords could all go to the MindGeek website and look at the statistics, where there is a breakdown of which age groups and genders are accessing these websites. I have not dared to do so because it will show I have been to that website, which I am sure would show up somewhere on one of these investigatory powers web searches and could be dangerous.

One of the things the Digital Policy Alliance, which I chair, has done is sponsor a publicly available specification, which the BSI is behind as well. There is a lot of privacy-enforcing stuff in that. It is not totally obvious; it is not finished yet, and it is being highlighted a bit more. One thing we came up with is that websites should not store the identity of the people whom they age-check. In fact, in most cases, they will bounce straight off the website and be sent to someone called an attribute provider, who will check the age. They will probably know who the person is, but they will send back to the website only an encrypted token which says, We've checked this person that you sent to us. Store this token. This person is over 18 -- or under 18, or whatever age they have asked to be confirmed. On their side, they will just keep a record of the token but will not say to which website they have issued it -- they will not store that, either. The link is the token, so if a regulator or social service had to track it down, they could physically take the token from the porn site to where it came from, the attribute provider, and say, Can you check this person's really over 18, because we think someone breached the security? What went wrong with your procedures? They can then reverse it and find out who the person was -- but they could still perhaps not be told by the regulator which site it was. So there should be a security cut-out in there. A lot of work went into this because we all knew the danger.

This is where I agree entirely with the Open Rights Group, which thinks that such a measure should be mandated. Although the publicly available specification, which is almost like a British standard, says that privacy should be mandated under general data protection regulation out of Europe, which we all subscribe to, I am not sure that that is enough. It is a guideline at the end of the day and it depends on how much emphasis the BBFC decides to put on it. I am not sure that we should not just put something in the Bill to mandate that a website cannot keep a person's identity. If the person after they have proved that they are 18 then decides to subscribe to the website freely and to give it credit card details and stuff like that, that is a different problem -- I am not worried about that. That is something else. That should be kept extremely securely and I personally would not give my ID to such a site -- but at the age verification end, it must be private.

There are some other funny things behind the scenes that I have been briefed on, such as the EU VAT reporting requirements under the VAT Mini One Stop Shop, which requires sites to keep some information which might make a person identifiable. That could apply if someone was using one of the attribute providers that uses a credit card to provide that check or if the website itself was doing that. There may be some things that people will have to be careful of. There are some perfectly good age-checking providers out there who can do it without you having to give your details. So it is a good idea; I think that it will help. Let us then worry about the point that the noble Baroness, Lady Kidron, made so well about what goes where.

The universal service obligation should be territorial; it has to cover the country and not just everyone's homes. With the internet of things coming along -- which I am also involved in because I am chair of the Hypercat Alliance, which is about resource discovery over the internet of things -- one of the big problems is that we are going to need it everywhere: to do traffic monitoring, people flows and all the useful things we need. We cannot have little not-spots, or the Government will not be able to get the information on which to run all sorts of helpful control systems. The noble Lord, Lord Gordon of Strathblane, referred to mast sharing. The problem with it is that they then do not put masts in the not-spots; they just keep the money and work off just one mast -- you still get the not-spots. If someone shares a mast, they should be forced to put up a mast somewhere else, which they then share as well.

On broadband take-up, people say, Oh, well, people aren't asking for it . It is chicken and egg: until it is there, you do not know what it is good for. Once it is there and suddenly it is all useful, the applications will flow. We have to look to the future; we have to have some vision. Let us get chicken or the egg out there and the chicken will follow -- I cannot remember which way round it is.

I agree entirely with the noble Lord, Lord Mitchell, that the problem with Openreach is that it will always be controlled by its holding company, which takes the investment, redirects it and decides where the money goes. That is the challenge with having it overseeing.

I do not want to waste much time, because I know that it is getting late-ish. On jobs, a huge number of jobs were created in earlier days in installing and maintaining internet of things sensors all over the place -- that will change. On the gigabit stuff, it will save travel, energy and all sorts of things -- we might even do remote-control hip operations, so you send the device and the surgeon then does it remotely, once we get super-duper superfast broadband.

I want to say one thing about IP. The Open Rights Group raised having thresholds of seriousness. It is quite important that we do not start prosecuting people on charges with 10-year sentences for trivial things. But it is also sad how interesting documentaries can disappear terribly quickly. The catch-up services cover only a month or so and if you are interested, it is quite nice being able to find these things out there on the internet a year or two later. There should somehow be a publicly available archive for all the people who produce interesting documentaries. I do not know whether they should make a small charge for it, but it should be out there.

The Open Rights Group also highlighted the bulk sharing of data. Some of the stuff will be very useful -- the briefing on free school meals is interesting -- but if you are the only person who really knows what might be leaked, it is very dangerous. If someone were to beat you up, an ordinary register could leak your address across without realising that at that point you are about to go into witness protection. There can be lots of problems with bulk data sharing, so be careful; that is why the insurance database was killed off a few years ago. Apart from that, I thank your Lordships for listening and say that, in general, this is a good effort.
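
The attribute-provider handoff the Earl describes, in which the porn site keeps only an opaque token and never the visitor's identity, can be illustrated with the minimal sketch below. Every class and method name is hypothetical; the real mechanism is the publicly available specification he refers to.

    # Illustrative sketch of the token handoff described above; all names are
    # hypothetical and the real scheme is defined by the BSI specification.
    import secrets

    class AttributeProvider:
        """Knows who the customer is; tells the site only that a check passed."""
        def __init__(self):
            self._issued = {}                    # token -> internal customer ref

        def verify_age(self, customer_ref: str, is_over_18: bool):
            if not is_over_18:
                return None
            token = secrets.token_urlsafe(32)    # opaque token, reveals nothing
            self._issued[token] = customer_ref   # kept here, never by the site
            return token

    class AdultSite:
        """Stores the token only, never a name or date of birth."""
        def __init__(self):
            self.age_verified_tokens = set()

        def admit(self, token) -> bool:
            if token is None:
                return False
            self.age_verified_tokens.add(token)  # no identity data stored here
            return True

    # A regulator investigating a suspected breach can take a token from the
    # site back to the attribute provider, who alone can map it to a customer.
    provider = AttributeProvider()
    site = AdultSite()
    site_token = provider.verify_age("customer-123", is_over_18=True)
    assert site.admit(site_token)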

 

 

Update: Not as many as the BBFC will soon block in the UK...

Malaysian internet censors block about 5000 websites


Link Here 15th December 2016
Full story: Internet Censorship in Malaysia...Malaysia looks to censor the internet

The Malaysian Communications and Multimedia Commission (MCMC) blocked 5,044 websites for various offences under the Communications and Multimedia Act 1998 between 2015 and October this year.

Deputy Communications and Multimedia Minister Datuk Jailani Johari said, out of the total, 4,277 are pornographic websites while another 767 displayed elements of gambling, prostitution, fraud, piracy, counterfeit products, unregistered medicine and others. He added:

MCMC blocks all the websites based on the application of the enforcement agencies such as the police, Health Ministry, Domestic Trade, Cooperatives and Consumerism Ministry and other relevant agencies.

Until last October, MCMC also blocked 72 websites related to the spread of Daesh or Islamic State ideology.

MCMC had also investigated 181 cases of social media and Internet abuse involving the spread of false information and content through the WhatsApp, Facebook and Twitter platforms and so forth under the same Act. Jailani said, out of the total, six cases were brought to court, including five cases that were prosecuted, while 10 cases were compounded.

 

 

Offsite Comment: Calls for state controls of Facebook and Google...


Link Here 15th December 2016
We need European regulation of Facebook and Google. By Leighton Andrews

See article from opendemocracy.net

 

 

The BBFC is set to ban all online porn...

Murray Perkins of the BBFC explains how all the world's major porn websites will have to be totally banned in Britain (even if they set up age verification systems) under the censorship rules contained in the Digital Economy Bill


Link Here 14th December 2016
The BBFC currently cuts about 15% of all R18 porn films on their way to totally ordinary mainstream porn shops. These are not niche or speciality films; they are totally middle-of-the-road porn, which represents the sort of content on all the world's major porn sites. Most of the cuts are ludicrous, but Murray Perkins, a senior examiner of the BBFC, points out that they are all considered either to be harmful, or else are still prohibited by the police or the government for reasons that have long since passed their sell-by date.

About a sixth of all the world's adult films are therefore considered prohibited by the British authorities, and so any website containing such films will have to be banned as there is no practical way to cut out the bits that wind up censors, police or government. And this mainstream but prohibited content appears on just about all the world's major porn sites, free or paid.

The main prohibitions that will cause a website to be blocked (even before considering whether it will set up strict age verification) are such mainstream content as female ejaculation, urine play, gagging during blow jobs, rough sex, incest story lines (which are a major genre of porn at the moment), use of the word 'teen' and verbal references to under-18s.

Murray Perkins has picked up the job of explaining this catch-all ban. He explains it well, but he tries to throw readers off track by citing examples of prohibitions being justifiable because they apply to violent porn, whilst not mentioning that they apply equally well to trivia such as female squirting.

Perkins writes in the Huffington Post:

Recent media reports highlighting what content will be defined as prohibited material under the terms of the Digital Economy Bill could have given an inaccurate impression of the serious nature of the harmful material that the BBFC generally refuses to classify. The BBFC works only to the BBFC Classification Guidelines and UK law, with guidance from the Crown Prosecution Service (CPS) and enforcement bodies, and not to any other lists.

The Digital Economy Bill aims to reduce the risk of children and young people accessing, or stumbling across, pornographic content online. It proposes that the BBFC check whether

(i) robust age verification is in place on websites containing pornographic content and

(ii) whether the website or app contains pornographic content that is prohibited.

An amendment to the Digital Economy Bill, passed in the House of Commons, would also permit the BBFC to ask Internet Service Providers (ISPs) to block pornographic websites that refuse to offer effective age verification or contain prohibited material such as sexually violent pornography.

In making any assessment of content, the BBFC will apply the standards used to classify pornography that is distributed offline. Under the Video Recordings Act 1984 the BBFC is obliged to consider harm when classifying any content including 18 and R18 rated sex works. Examples of material that the BBFC refuses to classify include pornographic works that: depict and encourage rape, including gang rape; depict non-consensual violent abuse against women; promote an interest in incestuous behaviour; and promote an interest in sex with children. [Perkins misleadingly neglects to include squirting, gagging, and urine play in his examples here.] The Digital Economy Bill defines this type of unclassifiable material as prohibited.

Under its letters of designation the BBFC may not classify anything that may breach criminal law, including the Obscene Publications Act (OPA) as currently interpreted by the Crown Prosecution Service (CPS). The CPS provides guidance on acts which are most commonly prosecuted under the OPA. The BBFC is required to follow this guidance when classifying content offline and will be required to do the same under the Digital Economy Bill. In 2015, 12% of all cuts made to pornographic works classified by the BBFC were compulsory cuts under the OPA. The majority of these cuts were to scenes involving urolagnia which is in breach of CPS guidance and could be subject to prosecution.

 

 

Offsite Article: US Government Publishes New Plan to Target Pirate Websites...


Link Here 14th December 2016
By going after payment providers and advertising services

See article from torrentfreak.com

 

 

Update: Calls for an online news vetting service...

German politicians seek more censorship of 'fake news' presumably thinking that it may somehow protect them from the popular revolt


Link Here13th December 2016
Full story: Free Speech in Germany...Criticism of refugees or Turkey now banned

Leading German MPs have called for online 'fake news' campaigns to be made a crime. Patrick Sensburg, a senior MP in Angela Merkel's Christian Democratic Union (CDU) party, said:

Targeted disinformation to destabilise a state should be a criminal offence. We need to consider whether there should be some sort of 'test site' that reveals and identifies propaganda pages.

The call was backed by his party colleague Ansgar Heveling, chairman of the German parliament's influential internal affairs committee, who said:

We last saw disinformation campaigns during the Cold War, now they have clearly been revived with new media opportunities. The law already offers options, such as slander or defamation. But I think a criminal sentence is more appropriate when it is a targeted campaign.

German intelligence has warned that Russia is seeking to influence next year's German elections via propaganda distributed over the internet, particularly on social media. Russia has been accused of deliberately using socialbots, automated software masquerading as real people, to promote 'fake news' stories on social media.

Mrs Merkel's current coalition partner and main rival in next year's elections, the Social Democratic Party (SPD), has also called for a cross-party alliance against 'fake news' stories. Sigmar Gabriel, the SPD leader, called for:

Democratic solidarity against manipulative socialbots and an alliance against 'fake news'.

Thorsten Schäfer-Gümbel of the SPD added:

If there is any doubt about the authenticity of any information, we should refrain from attacking our political opponents with it.

 

 

No doubt someone will soon be asking for extensions to other 'worthy' causes...

Facebook, Microsoft, Twitter and YouTube to pool databases of removed content so as to prevent re-posting of terrorist content


Link Here11th December 2016

Facebook, Microsoft, Twitter and YouTube are coming together to help curb the spread of terrorist content online. There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies.

We have committed to the creation of a shared industry database of hashes -- unique digital fingerprints -- for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.

Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services -- content most likely to violate all of our respective companies' content policies. Participating companies can add hashes of terrorist images or videos that are identified on one of our platforms to the database. Other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.

As we continue to collaborate and share best practices, each company will independently determine what image and video hashes to contribute to the shared database. No personally identifiable information will be shared, and matching content will not be automatically removed. Each company will continue to apply its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found. And each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances. As part of this collaboration, we will all focus on how to involve additional companies in the future.
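To make the mechanics concrete: what the companies propose to share is a lookup table of fingerprints, not the images or videos themselves. The announcement names no hashing scheme or interface, so the sketch below is purely illustrative -- the SHA-256 exact match and the function names are assumptions, and production systems would more plausibly use perceptual hashes (such as PhotoDNA) so that re-encoded copies of the same image still match.

    import hashlib
    from pathlib import Path

    # Hypothetical shared database of fingerprints contributed by the
    # participating companies. No personally identifiable information is
    # stored, only digests of removed media files.
    shared_hash_db = set()

    def fingerprint(path: Path) -> str:
        # The 'digital fingerprint' of a file: here a plain SHA-256 digest,
        # which only matches byte-identical copies (illustration only).
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def contribute(path: Path) -> None:
        # A company adds the hash of content it has already removed
        # under its own policies.
        shared_hash_db.add(fingerprint(path))

    def needs_review(path: Path) -> bool:
        # A match does not trigger automatic removal; it only queues the
        # item for review against the receiving company's own policies
        # and definitions of terrorist content.
        return fingerprint(path) in shared_hash_db

The important design point, reflected in the companies' statement, is that a hash match is a signal rather than a verdict: each company still applies its own review, appeal and transparency processes before anything is taken down.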

Throughout this collaboration, we are committed to protecting our users' privacy and their ability to express themselves freely and safely on our platforms. We also seek to engage with the wider community of interested stakeholders in a transparent, thoughtful and responsible way as we further our shared objective to prevent the spread of terrorist content online while respecting human rights.

 

 

Offsite Article: How to avoid the UK's new online surveillance powers...


Link Here11th December 2016
If the government wants to hack you, it will, but you can stop the police from just scooping up your web history. By James Vincent

See article from theverge.com

 

 

Update: Today France and Canada, tomorrow Russia and China...

Google battles the Canadian courts, which demand that copyright-infringing search links be removed worldwide


Link Here 8th December 2016
Two dozen human rights and civil liberty groups have thrown their weight behind Google's challenge of a Canadian court decision it warns could stifle freedom of expression around the world and lead to a diminished internet of the lowest common denominator.

In an appeal heard on Tuesday in the Supreme Court of Canada, Google Inc took aim at a 2015 court decision that sought to censor search results beyond Canada's borders.

In 2012, Canadian company Equustek won a judgment to have a company banned from selling a counterfeit version of Equustek's product online. Google voluntarily removed more than 300 infringing URLs. But as more sites popped up, Equustek went back to court -- this time seeking a worldwide ban. A court of appeal in British Columbia sided with Equustek in 2015, ordering Google to remove all of its search results linked to the company. It is this ruling that Google is now appealing.

The human rights groups are focusing on the question at the heart of the precedent-setting case: if one country can control what you see on the internet, what is to prevent other countries from doing the same? Gregg Leslie of the Reporters Committee for Freedom of the Press said:

It's a worrisome trend, where we see individual countries trying to regulate the internet worldwide. And of course the consequences of that would mean that even countries like Russia and China could do the same thing and that will really affect the content available on the internet.

 

 

Commented: False blame...

After years of laws handing over our wealth to US and international corporations, the EU has decided to blame the resulting social unrest on tech companies not deleting hate speech quickly enough.


Link Here8th December 2016
The European Commission has called on tech companies such as Twitter, Facebook, and other major names to implement more aggressive measures to censor online hate speech. The alternative is to face new EU legislation that would force the tech companies to censor more quickly.

The Financial Times reports that a study commissioned by the EU justice commissioner, Vera Jourova, found that YouTube, Google, Microsoft, Twitter, and Facebook have struggled to comply with the voluntary hate speech code of conduct announced earlier this year, amid national security concerns and heightened racial tensions resulting largely from unpopular EU refugee policies.

In Germany, the government-led effort has been particularly aggressive. Germany is one of the European nations where the ongoing refugee crisis has reinvigorated the far-right and sparked a backlash against government policy. Reuters reports that Heiko Maas, the German Justice Minister, recently said that Facebook should be made liable for any hate speech published on its social media platform and should be treated as a media company.

According to The Verge, Google, Twitter, Facebook and Microsoft agreed in a code of conduct announced in May to review and respond within 24 hours to the majority of hate speech complaints. However, only 40% of the recorded incidents have been reviewed within 24 hours, according to the commission's report. That figure rose to 80% after 48 hours.

According to PCMag, two advocacy groups have criticized those efforts in France. In May, the two rights groups announced plans to sue Google, Twitter, and Facebook for failing to remove homophobic, racist and other hateful posts from their platforms. News articles have so far failed to point out that some of these groups may be making false claims about material being censorable. Perhaps the media companies were right not to remove all of the posts reported.

EU justice ministers will meet to discuss the report's findings on 8th December.

Offsite Comment: Social Networks Must Stand Against Censorship

7th December 2016. See  article from bloomberg.com

The pressure for social networks to censor the content that appears on them just won't cease, and the networks are bending. Censorship, however, is not what users want. Nor is it technically possible, even if the platforms won't admit it.

 

 

Maybe best to not go there...

But surely a proposed new French law banning 'false information' on anti-abortion websites is incendiary stuff in what is essentially a religious debate


Link Here7th December 2016
Lawmakers in France have voted to ban misleading anti-abortion web sites. After a heated debate, the French National Assembly passed a bill to outlaw websites spreading misinformation about abortion. Pro-abortion activists have accused Pro-Life campaigners of pretending to give neutral information while putting pressure on women not to have abortions.

The new law, which still has to pass the Senate, extends to digital media an existing law against physical intimidation over abortion, and broadens the scope of a 1993 law criminalizing false information related to abortion so that it also covers digital media.

Providing false information on abortion online would be punishable by up to two years in prison and a 30,000 euro fine, a stipulation that pro-life advocates were quick to ridicule.

In the current fad for blaming all society's ills on 'false' news and information, this adds an interesting possibility of religious commandments being tested in court as false information. The underlying religious view is that abortion is bad simply because their god said so, and opponents will understandably see that claim as false information.

Bruno Retailleau, who heads the Republicans party group in the Senate, says the bill is totally against freedom of expression. He claimed the bill went against the spirit of the 1975 law legalizing abortion, which called for women to be informed of alternatives.

 

 

Comment: Porn is not conveniently organised in easy to block packages...

The politician behind Britain's porn censorship now thinks it might not even work. By Rob Price


Link Here 5th December 2016

It was Conservative MP and former minister John Whittingdale who introduced the bill. But now, the BBC is reporting that he's worried it might not actually work. He told Parliament:

One of the main ways in which young people are now exposed to pornography is through social media such as Twitter, and I do not really see that the bill will do anything to stop that happening.

This gets neatly at a key problem with the porn filter: The internet is not neatly divided into pornography and non-pornography. As I wrote last week, it's technically simple to block dedicated fetish websites. But plenty of sites mix porn with non-pornographic content, or include both conventional and non-conventional material -- raising serious questions as to how the filter could ever work in practice.

...Read the full article from businessinsider.com

 

 

Offsite Article: Facebook's verified badges giving fake news 'authenticity'...


Link Here 4th December 2016
Full story: Fake News...Declining respect for the authorities is blamed on 'fake' news
How is Facebook meant to algorithmically spot 'fake' news when the subtle use of an apostrophe can so easily change fact into fiction?

See article from huffingtonpost.co.uk

 

 

Petition: Encryption is under threat in Europe!...

Tell the EU Council: Protect our rights to privacy and security!


Link Here1st December 2016
Full story: Internet Encryption in the EU...Encryption is legal for the moment but the authorites are seeking to end this
The Council of the EU could undermine encryption as soon as December. It has been asking delegates from all EU countries to detail their national legislative position on encryption.

We've been down this road before. We know that encryption is critical to our right to privacy and to our own digital security. We need to come together once again and demand that our representatives protect these rights -- not undermine them in secret. Act now to tell the Council of the EU to defend strong encryption!

Dear Slovak Presidency and Delegates to the Council of the EU:

According to the Presidency of the Council of the European Union, the Justice and Home Affairs Ministers will meet in December to discuss the issue of encryption. At that discussion, we urge you to protect our security, our economy, and our governments by supporting the development and use of secure communications tools and technologies and rejecting calls for policies that would prevent or undermine the use of strong encryption.

Encryption tools, technologies, and services are essential to protect against harm and to shield our digital infrastructure and personal communications from unauthorized access. The ability to freely develop and use encryption provides the cornerstone for today's EU economy. Economic growth in the digital age is powered by the ability to trust and authenticate our interactions and communication and conduct business securely both within and across borders.

The United Nations Special Rapporteur for freedom of expression has noted that encryption and anonymity, and the security concepts behind them, provide the privacy and security necessary for the exercise of the right to freedom of opinion and expression in the digital age.

Recently, hundreds of organizations, companies, and individuals from more than 50 countries came together to make a global declaration in support of strong encryption. We stand with people from all over the world asking you not to break the encryption we rely upon.

Sign the petition from act.accessnow.org

 

 

Too Late!...

Encryption, privacy and security have just been killed by the British government


Link Here1st December 2016
Among the many unpleasant things in the Investigatory Powers Act that was officially signed into law this week, one that has not gained as much attention is the apparent ability for the UK government to undermine encryption and demand surveillance backdoors.

As the bill was passing through Parliament, several organizations noted their alarm at section 217, which obliges ISPs, telcos and other communications providers to let the government know in advance of any new products and services being deployed, and allows the government to demand technical changes to software and systems.

Communications Service Providers (CSP) subject to a technical capacity notice must notify the Government of new products and services in advance of their launch, in order to allow consideration of whether it is necessary and proportionate to require the CSP to provide a technical capability on the new service.

As per the final wording of the law, comms providers on the receiving end of a technical capability notice will be obliged to do various things on demand for government snoops -- such as disclosing details of any system upgrades and removing electronic protection on encrypted communications.

