Your Daily Broadsheet!


Latest news



 

Offsite Article: Christian Concerns...


Link Here 15th June 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
Who'd have thought that a Christian Campaign Group would be calling on its members to criticise the government's internet censorship bill in a consultation

See article from christianconcern.com

 

 

Logging your porn history (in the name of fraud yer know)...

Open Rights Group Report: Analysis of BBFC Age Verification Certificate Standard June 2019


Link Here 14th June 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust

Executive Summary

The BBFC's Age-verification Certificate Standard ("the Standard") for providers of age verification services, published in April 2019, fails to meet adequate standards of cyber security and data protection and is of little use for consumers reliant on these providers to access adult content online.

This document analyses the Standard and certification scheme and makes recommendations for improvement and remediation. It sub-divides generally into two types of concern: operational issues (the need for a statutory basis, problems caused by the short implementation time and the lack of value the scheme provides to consumers), and substantive issues (seven problems with the content as presently drafted).

The fact that the scheme is voluntary leaves the BBFC powerless to fine or otherwise discipline providers that fail to protect people's data, and makes it tricky for consumers to distinguish between trustworthy and untrustworthy providers. In our view, the government must legislate without delay to place a statutory requirement on the BBFC to implement a mandatory certification scheme and to grant the BBFC powers to require reports and penalise non-compliant providers.

The Standard's existence shows that the BBFC considers robust protection of age verification data to be of critical importance. However, in both substance and operation the Standard fails to deliver this protection. The scheme allows commercial age verification providers to write their own privacy and security frameworks, reducing the BBFC's role to checking whether commercial entities follow their own rules rather than requiring them to work to a mandated set of common standards. The result is uncertainty for Internet users, who are inconsistently protected and have no way to tell which companies they can trust.

Even within its voluntary approach, the BBFC gives providers little guidance as to what their privacy and security frameworks should contain. Guidance on security, encryption, pseudonymisation, and data retention is vague and imprecise, and often refers to generic "industry standards" without explanation. The supplementary Programme Guide, to which the Standard refers readers, remains unpublished, critically undermining the scheme's transparency and accountability.

Recommendations

Grant the BBFC statutory powers:

  • The BBFC Standard should be substantively revised to set out comprehensive and concrete standards for handling highly sensitive age verification data.

  • The government should legislate to grant the BBFC statutory power to mandate compliance.

  • The government should enable the BBFC to require remedial action or apply financial penalties for non-compliance.

  • The BBFC should be given statutory powers to require annual compliance reports from providers and fine those who sign up to the certification scheme but later violate its requirements.

  • The Information Commissioner should oversee the BBFC's age verification certification scheme.

Delay implementation and enforcement:

Delay implementation and enforcement of age verification until both (a) a statutory standard of data privacy and security is in place, and (b) that standard has been implemented by providers.

Improve the scheme content:

Even if the BBFC certification scheme remains voluntary, the Standard should at least contain a definitive set of precisely delineated objectives that age verification providers must meet in order to say that they process identity data securely.

Improve communication with the public:

Where a provider's certification is revoked, the BBFC should issue press releases and ensure consumers are individually notified at login.

The results of all penetration tests should be provided to the BBFC, which must publish details of the framework it uses to evaluate test results, and publish annual trends in results.

Strengthen data protection requirements:

Data minimisation should be an enforceable statutory requirement for all registered age verification providers.

The Standard should outline specific and very limited circumstances under which it's acceptable to retain logs for fraud prevention purposes. It should also specify a hard limit on the length of time logs may be kept.

The Standard should set out a clear, strict and enforceable set of policies to describe exactly how providers should "pseudonymise" or "deidentify" data.

Providers that no longer meet the Standard should be required to provide the BBFC with evidence that they have destroyed all the user data they collected while supposedly compliant.

The BBFC should prepare a standardised data protection risk assessment framework against which all age verification providers will test their systems. Providers should limit bespoke risk assessments to their specific technological implementation.

Strengthen security, testing, and encryption requirements:

Providers should be required to undertake regular internal and external vulnerability scanning and a penetration test at least every six months, followed by a supervised remediation programme to correct any discovered vulnerabilities.

Providers should be required to conduct penetration tests after any significant application or infrastructure change.

Providers should be required to use a comprehensive and specific testing standard. CBEST or GBEST could serve as guides for the BBFC to develop an industry-specific framework.

The BBFC should build on already-established strong security frameworks, such as the Center for Internet Security Cyber Controls and Resources, the NIST Cyber Security Framework, or Cyber Essentials Plus.

At a bare minimum, the Standard should specify a list of cryptographic protocols which are not adequate for certification.

 

 

ThinkSpot...

Jordan Peterson launches discussion and subscription platform that won't be censored on grounds of political correctness


Link Here 14th June 2019
An upcoming free speech platform promises to provide users the best features of other social media, but without the censorship.

The subscription-based anti-censorship platform Thinkspot is being created by popular psychologist Dr. Jordan B. Peterson. It's being marketed as a free speech alternative to payment processors like Patreon, in that it will monetize creators, and also as a social media alternative to platforms like Facebook and YouTube.

Peterson explained in a podcast that the website would have radically pro-free speech Terms of Service, saying that once you're on our platform we won't take you down unless we're ordered to by a US court of law.

That will be a profound contrast to platforms that ban users for misgendering people who identify as trans, or for tweeting learn to code at fired journalists.

The only other major rule on comments he mentioned was that they need to be thoughtful. Rather than suggesting that some opinions are off limits, Peterson said they will have a minimum required length so one has to put thought into what they write.

If minimum comment length is 50 words, you're gonna have to put a little thought into it, Peterson said. Even if you're being a troll, you'll be a quasi-witty troll.

All comments on the website will have a voting feature, and if your ratio of upvotes to downvotes falls below 50/50 your comments will be hidden; people will still be able to see them if they click, but you'll effectively disappear. He later added that these features could be tweaked as the website is still being designed.
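The hiding rule Peterson describes is a simple ratio test. A minimal sketch of that logic, using hypothetical function and field names since Thinkspot's actual implementation is unpublished:

```python
def is_hidden(upvotes: int, downvotes: int) -> bool:
    """Hide a comment when downvotes outnumber upvotes.

    Hypothetical reconstruction of the 50/50 rule described in the
    podcast; the real platform's thresholds and tie-breaking are unknown.
    """
    total = upvotes + downvotes
    if total == 0:
        return False  # no votes yet: stay visible
    # Hidden only when strictly below 50/50; an even split stays visible.
    return upvotes / total < 0.5
```

Note the design point in the article: "hidden" here means collapsed rather than deleted, since readers can still click through to see the comment.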

 

 

Offsite Article: Fruits of Philosophy...


Link Here 13th June 2019
The story of a Victorian book which was considered obscene for informing people about sex and contraception

See article from southwarknews.co.uk

 

 

Culture of Censorship...

Spanish Government includes age verification for porn in its programme


Link Here 12th June 2019

An MP in Spain is leading an initiative to force porn websites operating in the country to install strict age verification systems.

The recently elected 26-year-old Andrea Fernandez has called to end the culture of porn among young people. The limitation of pornographic content online was included in the electoral programme of the newly elected Prime Minister, Pedro Sanchez (Social Democrats). The goal of the new government is to implement a strict new age verification system for these kinds of websites.

 

 

Offsite Article: UK Crypto control...


Link Here 11th June 2019
Censoring open source cryptocurrency software through money laundering requirements

See article from eff.org

 

 

To the UK Government: Don't block my legal porn and I'll be happy to use the IWF feed...

The catastrophic impact of DNS-over-HTTPS. The IWF makes its case


Link Here 10th June 2019

Here at the IWF, we've created life-changing technology and data sets helping people who were sexually abused as children and whose images appear online. The IWF URL List, more commonly known as the block list, is a list of live webpages that show children being sexually abused, used by the internet industry to block millions of criminal images from ever reaching the public eye.

It's a crucial service, protecting children, and people of all ages in their homes and places of work. It stops horrifying videos from being stumbled across accidentally, and it thwarts some predators who visit the net to watch such abuse.

But now its effectiveness is in jeopardy. That block list which has for years stood between exploited children and their repeated victimisation faces a challenge called DNS over HTTPS which could soon render it obsolete.

It could expose millions of internet users across the globe -- of any age -- to the risk of glimpsing the most terrible content.

So how does it work? DNS stands for Domain Name System, and it's the phonebook by which you look something up on the internet. But the new privacy technology could hide user requests, bypass filters like parental controls, and make globally criminal material freely accessible. What's more, this is being fast-tracked, by some, into service as a default, which could make the IWF list and all kinds of other protections defunct.
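The filtering problem the IWF describes can be illustrated with a toy model. A network-level filter inspecting plaintext DNS sees the hostname being looked up; once the same lookup travels inside an encrypted HTTPS request to a resolver, the filter only ever sees the resolver's address. This is a simplified sketch with invented hostnames, not a description of any real filter:

```python
BLOCKLIST = {"blocked.example"}  # stand-in for a list like the IWF's

def classic_dns_filter(queried_host: str) -> bool:
    """Traditional filtering: the DNS query travels in plaintext,
    so a network filter can read the hostname and block it."""
    return queried_host in BLOCKLIST

def doh_filter_view(https_destination: str) -> bool:
    """With DNS-over-HTTPS the same lookup is wrapped in TLS to the
    resolver; the filter sees only the resolver's hostname, never
    the query carried inside it."""
    return https_destination in BLOCKLIST

# Plain DNS: the filter sees "blocked.example" and can block it.
# DoH: the filter sees only e.g. "doh.resolver.example", which is not
# on the list, so the lookup passes through unchecked.
```

The point of the sketch is that the block list itself still works; it is the filter's visibility into queries that DoH removes.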

At the IWF, we don't want to demonise technology. Everyone's data should be secure from unnecessary snooping and encryption itself is not a bad thing. But the IWF is all about protecting victims and we say that the way in which DNS over HTTPS is being implemented is the problem.

If it was set as the default on the browsers used by most of us in the UK, it would have a catastrophic impact. It would make the horrific images we've spent all these years blocking suddenly highly accessible. All the years of work for children's protection could be completely undermined -- not just busting the IWF's block list but swerving filters, bypassing parental controls, and dodging some counter terrorism efforts as well.

From the IWF's perspective, this is far more than just a privacy or a tech issue, it's all about putting the safety of children at the top of the agenda, not the bottom. We want to see a duty of care placed upon DNS providers so they are obliged to act for child safety and cannot sacrifice protection for improved customer privacy.

 

 

DCMS continues to legislate for a miserable life...

Advertisers slam the government over more censorship proposals to restrict TV junk food adverts and to ludicrously impose watershed requirements online


Link Here 10th June 2019
Advertisers have launched a scathing attack on the government's plans to introduce further restrictions on junk food advertising, describing them as totally disproportionate and lacking in evidence.

In submissions to a government consultation, seen exclusively by City A.M., industry bodies Isba and the Advertising Association (AA) said the proposals would harm advertisers and consumers but would fail to tackle the issue of childhood obesity.

The government has laid out plans to introduce a 9pm watershed on adverts for products high in fat, salt or sugar (HFSS) on TV and online.

But the advertising groups have dismissed the policy options, which were previously rejected by media regulator Ofcom, as limited in nature and speculative in understanding.

The AA said current restrictions, which have been in place since 2008, have not prevented the rise of obesity, while children's exposure to HFSS adverts has also fallen sharply over the last decade.

In addition, Isba argued a TV watershed would have a significant and overwhelming impact on adult viewers, who make up the majority of audiences before 9pm.

They also pointed to an impact assessment, published alongside the consultation, which admitted the proposed restrictions would cut just 1.7 calories per day from children's diets.

 

 

Offsite Article: 18 Rated Porn And BBFC Hypocrisy...


Link Here 10th June 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
When is a porn film not a porn film?

See article from reprobatepress.com

 

 

Offsite Article: Suspected of spreading malicious rumours...


Link Here 9th June 2019
Full story: Internet Censorship in China...All pervading Chinese internet censorship
A fascinating article from a BBC reporter based in Beijing who became a marked man when he posted images from a Hong Kong vigil remembering the Tiananmen Square massacre

See article from bbc.com

 

 

Offsite Article: Censorship by copyright claim...


Link Here 9th June 2019
DMCA Takedowns Try to De-list Dozens of Adult Homepages from Google

See article from torrentfreak.com

 

 

Offsite Article: Draw a Line at the Border...


Link Here 9th June 2019
Privacy International start campaign against governments snooping on social media accounts handed over as part of a visa application

See article from privacyinternational.org

 

 

Trading censorship...

China censors news websites over Tiananmen Square massacre and financial websites over US trade war issues


Link Here 8th June 2019
Full story: Internet Censorship in China...All pervading Chinese internet censorship
The Chinese government appears to have launched a major new internet purge, blocking users from accessing The Intercept's website and those of at least seven other Western news organizations.

People in China began reporting that they could not access the websites of The Intercept, The Guardian, the Washington Post, HuffPost, NBC News, the Christian Science Monitor, the Toronto Star, and Breitbart News.

It is unclear exactly when the censorship came into effect or the reasons for it. But Tuesday marked the 30th anniversary of the Tiananmen Square massacre, and Chinese authorities have reportedly increased levels of online censorship to coincide with the event.

On a second front, censors at two of China's largest social media companies appear to have taken aim at independent financial bloggers, as Beijing continues pumping out propaganda to garner public support for its trade dispute with the US.

At least 10 popular financial analysis blogs on social media app WeChat had all present and past content scrubbed, according to screenshots posted by readers. The Weibo accounts of two non-financial popular bloggers, including Wang Zhian, a former state broadcast commentator who wrote about social issues, were also blocked.

 

 

Final Warning...

Russia set to block VPNs that refuse to censor websites blocked by Russia


Link Here 8th June 2019
Full story: Internet Censorship in Russia...Russia restoring repressive state control of media
Back in March, ten major VPN providers including NordVPN, ExpressVPN, IPVanish and HideMyAss were ordered by Russian authorities to begin blocking sites present in the country's national blacklist. Following almost total non-compliance, the country's internet censor says that blocking nine of the services is now imminent.

Back in March, telecoms watchdog Roscomnadzor wrote to ten major VPN providers -- NordVPN, ExpressVPN, TorGuard, IPVanish, VPN Unlimited, VyprVPN, Kaspersky Secure Connection, HideMyAss!, Hola VPN, and OpenVPN -- ordering them to connect to the database. All the foreign companies refused to comply.

Only the Russia-based company, Kaspersky Secure Connection, connected to the registry, Roscomnadzor chief Alexander Zharov told Interfax.

Russian law says unequivocally that if a company refuses to comply, it should be blocked. And it appears that Roscomnadzor is prepared to carry through with its threat. When questioned on the timeline for blocking, Zharov said that the matter could be closed within a month.

 

 

Commented: EU vs. Free Speech...

European Court of Justice moves towards a position requiring the international internet to follow EU censorship rulings


Link Here 8th June 2019
TechDirt comments:

The idea of an open global internet keeps taking a beating -- and the worst offender is not, say, China or Russia, but rather the EU.

We've already discussed things like the EU Copyright Directive and the Terrorist Content Regulation, but it seems like every day there's something new and more ridiculous -- and the latest may be coming from the Court of Justice of the EU (CJEU). The CJEU's Advocate General has issued a recommendation (but not the final verdict) in a new case that would be hugely problematic for the idea of a global open internet that isn't weighted down with censorship.

The case at hand involved someone on Facebook posting a link to an article about an Austrian politician, Eva Glawischnig-Piesczek, accusing her of being a lousy traitor of the people, a corrupt oaf and a member of a fascist party.

An Austrian court ordered Facebook to remove the content, which it complied with by removing access to anyone in Austria. The original demand was also that Facebook be required to prevent equivalent content from appearing as well. On appeal, a court denied Facebook's request that it only had to comply in Austria, and also said that such equivalent content could only be limited to cases where someone then alerted Facebook to the equivalent content being posted (and, thus, not a general monitoring requirement).

The case was then escalated to the CJEU and then, basically, everything goes off the rails.

See detailed legal findings discussed by techdirt.com

 

Offsite Comment: Showing how Little the EU Understands About the Web

8th June 2019. See article from forbes.com by Kalev Leetaru

As governments around the world seek greater influence over the Web, the European Union has emerged as a model of legislative intervention, with efforts from GDPR to the Right to be Forgotten to new efforts to allow EU lawmakers to censor international criticism of themselves. GDPR has backfired spectacularly, stripping away the EU's previous privacy protections and largely exempting the most dangerous and privacy-invading activities it was touted to address. Yet it is the EU's efforts to project its censorship powers globally that present the greatest risk to the future of the Web and demonstrate just how little the EU actually understands about how the internet works.

 

 

Offsite Article: Blocking encrypted messaging and VPNs...


Link Here 8th June 2019
Full story: Internet Censorship in Pakistan...internet website blocking
Pakistan buys in a new censorship and snooping system for the internet

See article from dawn.com

 

 

No right to know...

Facebook taken to court in Poland after it censored information about a nationalist rally in Warsaw


Link Here 7th June 2019
Full story: Facebook Censorship...Facebook quick to censor
A Polish court has held a first hearing in a case brought against Facebook by a historian who says that Facebook engaged in censorship by suspending accounts that had posted about a nationalist rally in Warsaw.

Historian Maciej Swirski has complained that Facebook in 2016 suspended a couple of accounts that provided information on an independence day march organised by far-right groups. Swirski told AFP:

I'm not a member of the National Movement, but as a citizen I wanted to inform myself on the event in question and I was blocked from doing so,

This censorship doesn't concern my own posts, but rather content that I had wanted to see.

Facebook's lawyers argued that censorship can only be exercised by the state and that a private media firm is not obligated to publish any particular content.

The next court hearing will take place on October 30.

 

 

Going shopping...

Chief TV and internet censor to step down


Link Here 7th June 2019

The Ofcom Board has announced that Sharon White is to step down as Chief Executive.

Sharon is leaving to become Chairman of The John Lewis Partnership. She is expected to leave Ofcom around the turn of the year.

Sharon joined Ofcom in March 2015.

The Ofcom Board will now begin the process to appoint a successor.

 

 

Social media users to benefit...

New York senator introduces a bill requiring companies that hoover up user data must benefit the user before using the data for profit


Link Here 7th June 2019
A new bill introduced last week in the New York State Senate would give New Yorkers stronger online privacy protection than residents of any other state, notably California which was ahead of the game until now.

The New York bill, authored by Long Island senator Kevin Thomas, goes further than California's and requires platforms such as Google, Facebook and others to obtain consent from consumers before they share and/or sell their information.

Unlike the California law, however, the proposed New York bill gives users the right to sue companies directly over privacy violations, possibly setting up a barrage of individual lawsuits, according to a report on the proposed legislation by Wired magazine.

The New York bill also applies to any online company, while the California law exempts any company with less than $25 million in annual gross revenue from its requirements.

And as a final flourish, the bill requires that any company that hoovers up user data must use that data in ways that benefit the user before using it to turn a profit for itself.

 

 

Artificial Intelligence the size of a planet...

But Google still cannot distinguish educational material from the glorification of Nazis


Link Here 6th June 2019
Full story: YouTube Blocking...International sport of YouTube blocking

YouTube has decided to adopt a widespread censorship rule to ban the promotion of hate speech. Google wrote:

Today, we're taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.

However for all the Artificial Intelligence it has at its disposal the company cannot actually work out which videos promote hate speech. Instead it has taken to banning videos referencing more easily identifiable images such as Nazi symbology, regardless of the context in which they are presented.

For example YouTube has blocked some British history teachers from its service for uploading archive material related to Adolf Hitler.

Scott Allsopp, who owns the long-running MrAllsoppHistory revision website and teaches at an international school in Romania, had his channel, featuring hundreds of historical clips on topics ranging from the Norman conquest to the cold war, deleted for breaching the rules that ban hate speech. Allsopp commented:

It's absolutely vital that YouTube work to undo the damage caused by their indiscriminate implementation as soon as possible. Access to important material is being denied wholesale as many other channels are left branded as promoting hate when they do nothing of the sort.

While previous generations of history students relied on teachers playing old documentaries recorded on VHS tapes on a classroom television, they now use YouTube to show raw footage of the Nazis and famous speeches by Adolf Hitler.

Richard Jones-Nerzic, another British teacher affected by the crackdown, said that he had been censured for uploading clips to his channel from old documentaries about the rise of Nazism. Some of his clips now carry warnings that users might find the material offensive, while others have been removed completely. He said he was appealing YouTube's deletion of archive Nazi footage taken from mainstream media outlets, arguing that this is in itself a form of negationism or even holocaust denial.

Allsopp had his account reinstated on Thursday following an appeal but said he had been contacted by many other history teachers whose accounts have also been affected by the ban on hate speech. Users who do not swiftly appeal YouTube's decisions could find their material removed for good.

 

 

Updated: But isn't this a gender equivalence to 'blackface'?...

Artist Spencer Tunick and the National Coalition Against Censorship organise a Facebook challenging array of male nipples in New York


Link Here 6th June 2019
Full story: Facebook Censorship...Facebook quick to censor
Photographer Spencer Tunick and the National Coalition Against Censorship organised a nude art action outside Facebook's New York headquarters on June 2, when some 125 people posed naked in front of Facebook's building as Tunick photographed them as part of the NCAC's #WeTheNipple campaign.

In response Facebook agreed to convene a group -- including artists, art educators, museum curators, activists, and employees -- to consider new nudity guidelines for images posted to its social-media platforms.

The NCAC said it will collaborate with Facebook in selecting participants for a discussion to look into issues related to nude photographic art, ways that censorship impacts artists, and possible solutions going forward.

However before artists get their expectations up, they should know that it is standard policy that whenever Facebook get caught out censoring something, they always throw their arms up in feigned horror, apologise profusely and say they will do better next time.

They never do!

 

 

Strangling UK business and endangering people's personal data...

Internet companies slam the data censor's disgraceful proposal to require age verification for large swathes of the internet


Link Here 5th June 2019
Full story: UK age verification for social media...Government calls for strict age verification for social media access
The Information Commissioner's Office has for some bizarre reason been given immense powers to censor the internet.

And in an early opportunity to exert its power it has proposed a 'regulation' that would require strict age verification for nearly all mainstream websites that may have a few child readers and some material that may be deemed harmful for very young children, e.g. news websites that may have glamour articles or perhaps violent news images.

In a mockery of 'data protection' such websites would have to implement strict age verification requiring people to hand over identity data to most of the websites in the world.

Unsurprisingly much of the internet content industry is unimpressed. A six-week consultation on the new censorship rules has just closed, and according to the Financial Times:

Companies and industry groups have loudly pushed back on the plans, cautioning that they could unintentionally quash start-ups and endanger people's personal data. Google and Facebook are also expected to submit critical responses to the consultation.

Tim Scott, head of policy and public affairs at Ukie, the games industry body, said it was an inherent contradiction that the ICO would require individuals to give away their personal data to every digital service.

Dom Hallas, executive director at the Coalition for a Digital Economy (Coadec), which represents digital start-ups in the UK, said the proposals would result in a withdrawal of online services for under-18s by smaller companies:

The code is seen as especially onerous because it would require companies to provide up to six different versions of their websites to serve different age groups of children under 18.

This means an internet for kids largely designed by tech giants who can afford to build two completely different products. A child could access YouTube Kids, but not a start-up competitor.

Stephen Woodford, chief executive of the Advertising Association -- which represents companies including Amazon, Sky, Twitter and Microsoft -- said the ICO needed to conduct a full technical and economic impact study, as well as a feasibility study. He said the changes would have a wide and unintended negative impact on the online advertising ecosystem, reducing spend from advertisers and so revenue for many areas of the UK media.

An ICO spokesperson said:

We are aware of various industry concerns about the code. We'll be considering all the responses we've had, as well as engaging further where necessary, once the consultation has finished.

 

 

Labyrinth Life...

Japanese game is notable for including an English language option


Link Here 4th June 2019

No western release has been announced for Labyrinth Life on the PlayStation 4, or its Nintendo Switch counterpart, Omega Labyrinth Life. However, it has been confirmed that the Japanese releases will feature English-language options, making these titles even more accessible.

Note though that the PlayStation version, Labyrinth Life, is a censored family-friendly version, while Omega Labyrinth Life on the Switch is fully uncensored.

The game's main hook is known as Omega Power, which augments the characters' chest sizes, and not coincidentally, their stats. Expect these elements to be more edited on the PlayStation 4.

Both versions will release on August 1, 2019.

 

 

Over mature and ripe for improvement...

The US Communications Regulator publishes a report criticising the TV ratings system


Link Here 3rd June 2019
America's Federal Communications Commission (FCC) has published a report about the US TV rating classification system.

The familiar TV ratings, TV-Y, TV-Y7, TV-G, TV-PG, TV-14 and TV-MA, are essentially self-administered by the TV companies, but there is an oversight body called the TV Parental Guidelines Monitoring Board. The board describes itself:

The TV Parental Guidelines Monitoring Board is responsible for ensuring there is as much uniformity and consistency in applying the Parental Guidelines as possible. The Monitoring Board does this by reviewing complaints and other public input and by facilitating discussion about the application of ratings among members of the Board and other relevant industry representatives. The Monitoring Board typically meets annually or more often, if necessary, to consider and review complaints sent to the Board, discuss current research, and review any other relevant issues. The Board also facilitates regular calls among industry standards and practices executives to discuss pending and emerging issues in order to promote ratings consistency across companies.

In addition to the chairman, the Board includes 18 industry representatives from the broadcast, cable and creative communities appointed by the National Association of Broadcasters (NAB), NCTA -- The Internet & Television Association, and the Motion Picture Association of America (MPAA), and five public interest members appointed by the Board chairman.

The chairman is Michael Powell, and the board representatives are from:

  • 21st Century FOX
  • ABC
  • A+E Networks
  • AMC Networks
  • American Academy of Pediatrics
  • Boys and Girls Clubs of America
  • Call for Action
  • CBS
  • Discovery, Inc.
  • Entertainment Industries Council
  • HULU
  • Lifetime Networks
  • National PTA
  • NBC Universal
  • Sony Pictures Entertainment
  • Turner Broadcasting System
  • Univision
  • Viacom Media Networks

The TV ratings are frequently criticised, at least by morality campaign groups, and recently the FCC responded by undertaking a review of the TV rating system. The FCC has just published its findings, which concur with much of the criticism. The FCC writes:

After reviewing the record as a whole, our primary conclusion is that the Board has been insufficiently accessible and transparent to the public. For example, when the Bureau began its work on this report, the Board's website did not even include a phone number that someone could call to reach it. We are pleased that this problem was recently fixed. But in our view, additional steps should be taken to increase awareness of the Board's role and the transparency of its operations. Below are suggestions along those lines that we submit for Board and industry consideration.

First, we urge the Board and the video programming industry to increase their efforts to promote public awareness of the Board and its role in overseeing the rating system. We urge the Board and the industry to increase their outreach efforts concerning the existence of the rating system and consider additional ways in which they can publicize the ability of the public to file complaints, along with instructions on how complaints can be filed. In this regard, as noted, the Board recently reactivated a telephone number for use in contacting the Board and also provides a post office box where physical mail can be sent.

Second, we suggest that the Board consider ways to inform the public regarding the number of complaints it receives, the nature of each complaint, the program and network or producer involved, and the action taken, if any, by the network/producer or the Board in response to the complaint. For instance, the Board could consider issuing an annual report on the complaints it has received about the ratings of programs, how those complaints were adjudicated, and whether complaints led to the rating of a program being changed in future airings.

Third, we suggest that the Board hold at least one public meeting, that is publicized with adequate notice, each year. This would permit the public to express their views directly to the Board and help the Board better understand public concerns regarding program ratings.

Fourth, we suggest that the Board consider doing random audits or spot checks analyzing the accuracy and consistency of the ratings being applied pursuant to the TV Parental Guidelines. This information could be used, in addition to the survey data already collected by the Board, to help assess, and if necessary, improve ratings accuracy. Such information would also allow the Board and the industry to consider whether any changes are needed to the guidelines themselves to ensure that they are as helpful as possible to today's viewers, consistent with the Board's commitment.

We note the ratings system has not changed in over 20 years and, despite its longevity, many commenters contend that the rating system is not well-understood or useful to parents.

 

 

Updated: Tech companies criticise the government's Online Harms white paper...

The harms will be that British tech businesses will be destroyed so that politicians can look good for 'protecting the children'


Link Here 2nd June 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
A scathing new report, seen by City A.M. and authored by the Internet Association (IA), which represents online firms including Google, Facebook and Twitter, has outlined a string of major concerns with plans laid out in the government Online Harms white paper last month.

The Online Harms white paper outlines a large number of internet censorship proposals hiding under the vague terminology of 'duties of care'.

Under the proposals, social media sites could face hefty fines or even a ban if they fail to tackle online harms such as age-inappropriate content, insults, harassment, terrorist content and of course 'fake news'.

But the IA has branded the measures unclear and warned they could damage the UK's booming tech sector, with smaller businesses disproportionately affected.  IA executive director Daniel Dyball said:

Internet companies share the ambition to make the UK one of the safest places in the world to be online, but in its current form the online harms white paper will not deliver that.

The proposals present real risks and challenges to the thriving British tech sector, and will not solve the problems identified.

The IA slammed the white paper over its use of the term duty of care, which it said would create legal uncertainty and be unmanageable in practice.

The lobby group also called for a more precise definition of which online services would be covered by regulation and greater clarity over what constitutes an online harm. In addition, the IA said the proposed measures could raise serious unintended consequences for freedom of expression.

And while most internet users favour tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech, which is one of the internet's great strengths.

Update: Main points

2nd June 2019. See article from uk.internetassociation.org

The Internet Association paper sets out five key concerns held by internet companies:

  • "Duty of Care" has a specific legal meaning that does not align with the obligations proposed in the White Paper, creating legal uncertainty, and would be unmanageable;
  • The scope of the services covered by regulation needs to be defined differently, and more closely related to the harms to be addressed;
  • The category of "harms with a less clear definition" raises significant questions and concerns about clarity and democratic process;
  • The proposed code of practice obligations raise potentially dangerous unintended consequences for freedom of expression;
  • The proposed measures will damage the UK digital sector, especially start-ups, micro-businesses and small- and medium-sized enterprises (SMEs), and slow innovation.

 

 

Offsite Article: US Drugs Censor...


Link Here 2nd June 2019
How the US Drug Enforcement Administration keeps TV on the 'right track' when depicting drugs

See article from shadowproof.com

 

 

IWF calls for censorship law requiring its block list to be implemented by encrypted DNS servers...

Well perhaps if the UK wasn't planning to block legal websites then people wouldn't need to seek out circumvention techniques, so allowing the laudable IWF blocking to continue


Link Here 1st June 2019
Full story: UK Internet Censorship of Sex Workers...UK MPs eye US internet censorship enabled by FOSTA law
A recent internet protocol allows websites to be located without using the traditional approach of asking your ISP's DNS server, thereby evading website blocks implemented by the ISP. Because the new protocol is encrypted, the ISP's ability to monitor which websites are being accessed is also restricted.
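The blocking-evasion point above can be sketched with a toy model (no real networking): with plaintext DNS the ISP can read the queried name and consult its block list, while with DNS-over-HTTPS it sees only ciphertext travelling to the resolver. All names, addresses and the hash stand-in for TLS here are illustrative assumptions, not the actual DoH wire format.

```python
# Toy model contrasting what an ISP middlebox can see with plaintext DNS
# versus DNS-over-HTTPS. Purely illustrative; no real networking involved.

import hashlib

BLOCKLIST = {"blocked.example"}  # hypothetical ISP block list

def plaintext_dns_query(name: str) -> str:
    """Traditional DNS: the query crosses the wire in cleartext, so an
    ISP middlebox can inspect the name and refuse to resolve it."""
    if name in BLOCKLIST:
        return "BLOCKED"
    return "93.184.216.34"  # pretend resolution result

def doh_query(name: str) -> str:
    """DNS over HTTPS: the ISP sees only an opaque encrypted payload
    (modelled here as a hash), so name-based filtering never triggers."""
    what_the_isp_sees = hashlib.sha256(name.encode()).hexdigest()
    assert name not in what_the_isp_sees  # the queried name is not visible
    return "93.184.216.34"  # the resolver answers normally

print(plaintext_dns_query("blocked.example"))  # BLOCKED
print(doh_query("blocked.example"))            # resolves despite the list
```

The real protocol (RFC 8484) carries standard DNS messages inside HTTPS, so the ISP can at most see that the user is talking to a known resolver's IP address, which is exactly why list-based filtering such as the IWF's stops working.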

This very much impacts the ISPs' ability to block the illegal child abuse material identified in a block list maintained by the IWF. Over the years the IWF has been very good at sticking to its universally supported remit, presumably having realised that extending its blocking capabilities to other, less critical areas would cost it that universal support and so degrade its effectiveness.

Now of course the government has stepped in and will use the same mechanism as used for the IWF blocks to block legal and very popular adult porn websites. The inevitable interest in circumvention options will very much diminish the IWF's ability to block child abuse material, so the IWF has begun campaigning to protect its capabilities. Fred Langford, the deputy CEO of the IWF, told Techworld about the implementation of encrypted DNS:

Everything would be encrypted; everything would be dark. For the last 15 years, the IWF have worked with many providers on our URL list of illegal sites. There's the counterterrorism list as well and the copyright infringed list of works that they all have to block. None of those would work.

We put the entries onto our list until we can work with our international stakeholders and partners to get the content removed in their country, said Langford. Sometimes that will only be on the list for a day. Other times it could be months or years. It just depends on the regime at the other end, wherever it's physically located.

The IWF realises the benefit of universal support, so it generally acknowledges the benefits of the protocol for privacy and security while focusing on the need for it to be deployed with appropriate safeguards in place. It is calling for the government to insert a censorship rule into the forthcoming online harms regulatory framework that includes the IWF URL List, to ensure that service providers comply with current UK laws and security measures. Presumably the IWF would like its block list to be implemented by encrypted DNS servers worldwide. The IWF's Fred Langford said:

The technology is not bad; it's how you implement it. Make sure your policies are in place, and make sure there's some way that if there is an internet service provider that is providing parental controls and blocking illegal material that the DNS over HTTPS server can somehow communicate with them to redirect the traffic on their behalf.

Given the respect the IWF commands, this could be a possibility, but if the government then steps in and demands that adult porn sites be blocked too, this approach would surely stumble, as every world dictator and international moralist campaigner would expect the same.

 

 

Joint letter to Information Commissioner on age appropriate websites plan...

Pointing out that it is crazy for the data protection police to require internet users to hand over their private identity data to all and sundry (all in the name of child protection of course)


Link Here 31st May 2019

Elizabeth Denham, Information Commissioner Information Commissioner's Office,

Dear Commissioner Denham,

Re: The Draft Age Appropriate Design Code for Online Services

We write to you as civil society organisations who work to promote human rights, both offline and online. As such, we are taking a keen interest in the ICO's Age Appropriate Design Code. We are also engaging with the Government in its White Paper on Online Harms, and note the connection between these initiatives.

Whilst we recognise and support the ICO's aims of protecting and upholding children's rights online, we have severe concerns that as currently drafted the Code will not achieve these objectives. There is a real risk that implementation of the Code will result in widespread age verification across websites, apps and other online services, which will lead to increased data profiling of both children and adults, and restrictions on their freedom of expression and access to information.

The ICO contends that age verification is not a silver bullet for compliance with the Code, but it is difficult to conceive how online service providers could realistically fulfil the requirement to be age-appropriate without implementing some form of onboarding age verification process. The practical impact of the Code as it stands is that either all users will have to access online services via a sorting age-gate or adult users will have to access the lowest common denominator version of services with an option to age-gate up. This creates a de facto compulsory requirement for age-verification, which in turn puts in place a de facto restriction for both children and adults on access to online content.

Requiring all adults to verify they are over 18 in order to access everyday online services is a disproportionate response to the aim of protecting children online and violates fundamental rights. It carries significant risks of tracking, data breach and fraud. It creates digital exclusion for individuals unable to meet requirements to show formal identification documents. Where age-gating also applies to under-18s, this violation and exclusion is magnified. It will put an onerous burden on small-to-medium enterprises, which will ultimately entrench the market dominance of large tech companies and lessen choice and agency for both children and adults -- this outcome would be the antithesis of encouraging diversity and innovation.

In its response to the June 2018 Call for Views on the Code, the ICO recognised that there are complexities surrounding age verification, yet the draft Code text fails to engage with any of these. It would be a poor outcome for fundamental rights and a poor message to children about the intrinsic value of these for all if children's safeguarding was to come at the expense of free expression and equal privacy protection for adults, including adults in vulnerable positions for whom such protections have particular importance.

Mass age-gating will not solve the issues the ICO wishes to address with the Code and will instead create further problems. We urge you to drop this dangerous idea.

Yours sincerely,

Open Rights Group
Index on Censorship
Article19
Big Brother Watch
Global Partners Digital

 

 

Politicians start to realise that their promotion of internet censorship is not so popular...

Merkel's successor gets in a pickle for claiming that YouTubers should be censored before an election


Link Here 31st May 2019
Full story: Internet Censorship in Germany...Germany considers state internet filtering

Prior to the European Parliament elections, popular YouTube users in Germany appealed to their followers to boycott the Christian Democratic Union (CDU), the Social Democrats (SPD) and the Alternative für Deutschland (AfD).

Following a miserable election result, CDU leader Annegret Kramp-Karrenbauer made statements suggesting that in the future, such opinions may be censored.

Popular German YouTube star Rezo urged voters to punish the CDU and its coalition partner by not voting for them. Rezo claimed that the government's inactions on critical issues such as climate change, security and intellectual property rights are destroying our lives and our future.

Rezo quickly found the support of 70 other influential YouTube presenters. But politicians accused him of misrepresenting information and lacking credibility in an effort to discredit him. Nonetheless, his video had nearly 4 million views by Sunday, the day of the election.

Experts like Prof. Jürgen Falter of the University of Mainz believe that Rezo's video swayed the opinions of many undecided voters, especially those under age 30.

Kramp-Karrenbauer commented on it during a press conference:

What would have happened in this country if 70 newspapers decided just two days before the election to make the joint appeal: 'Please don't vote for the CDU and SPD'? That would have been a clear case of political bias before the election.

What are the rules that apply to opinions in the analog sphere?  And which rules should apply in the digital sphere?

She concluded that these topics will be discussed by the CDU, saying:

I'm certain, they'll play a role in discussions surrounding media policy and democracy in the future.

Many interpreted her statements as an attack on freedom of speech and a call to censor people's opinions online. Ria Schröder, head of the Young Liberals, wrote:

The CDU's understanding of democracy ('If you are against me, I censor you') is incomprehensible!

The right of a user on YouTube or other social media to discuss his or her political view is covered by Germany's Basic Law, which guarantees freedom of speech.

Kramp-Karrenbauer's statements may threaten her chance for the chancellorship. More importantly, they expose the mindset of Germany's political leadership.

 

 

Malevolent spirits, spooks and ghosts...

Human rights groups and tech companies unite in an open letter condemning GCHQ's Ghost Protocol suggestion to open a backdoor to snoop on 'encrypted' communication apps


Link Here 31st May 2019
Full story: Snooper's Charter...Tories re-start massive programme of communications snooping

To GCHQ

The undersigned organizations, security researchers, and companies write in response to the proposal published by Ian Levy and Crispin Robinson of GCHQ in Lawfare on November 29, 2018, entitled Principles for a More Informed Exceptional Access Debate. We are an international coalition of civil society organizations dedicated to protecting civil liberties, human rights, and innovation online; security researchers with expertise in encryption and computer science; and technology companies and trade associations, all of whom share a commitment to strong encryption and cybersecurity. We welcome Levy and Robinson's invitation for an open discussion, and we support the six principles outlined in the piece. However, we write to express our shared concerns that this particular proposal poses serious threats to cybersecurity and fundamental human rights including privacy and free expression.

The six principles set forth by GCHQ officials are an important step in the right direction, and highlight the importance of protecting privacy rights, cybersecurity, public confidence, and transparency. We especially appreciate the principles' recognition that governments should not expect unfettered access to user data, that the trust relationship between service providers and users must be protected, and that transparency is essential.

Despite this, the GCHQ piece outlines a proposal for silently adding a law enforcement participant to a group chat or call. This proposal to add a ghost user would violate important human rights principles, as well as several of the principles outlined in the GCHQ piece. Although the GCHQ officials claim that you don't even have to touch the encryption to implement their plan, the ghost proposal would pose serious threats to cybersecurity and thereby also threaten fundamental human rights, including privacy and free expression. In particular, as outlined below, the ghost proposal would create digital security risks by undermining authentication systems, by introducing potential unintentional vulnerabilities, and by creating new risks of abuse or misuse of systems. Importantly, it also would undermine the GCHQ principles on user trust and transparency set forth in the piece.

How the Ghost Proposal Would Work The security in most modern messaging services relies on a technique called public key cryptography. In such systems, each device generates a pair of very large mathematically related numbers, usually called keys. One of those keys -- the public key -- can be distributed to anyone. The corresponding private key must be kept secure, and not shared with anyone. Generally speaking, a person's public key can be used by anyone to send an encrypted message that only the recipient's matching private key can unscramble. Within such systems, one of the biggest challenges to securely communicating is authenticating that you have the correct public key for the person you're contacting. If a bad actor can fool a target into thinking a fake public key actually belongs to the target's intended communicant, it won't matter that the messages are encrypted in the first place because the contents of those encrypted communications will be accessible to the malicious third party.
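The authentication problem the letter describes here, that encryption is worthless if an attacker can substitute a fake public key, can be sketched with a toy Diffie-Hellman exchange. The tiny parameters and all names below are illustrative assumptions, not any real messenger's protocol.

```python
# Toy Diffie-Hellman sketch of the key-substitution attack described above.
# Parameters are deliberately tiny and insecure; demonstration only.

import hashlib

P = 2**61 - 1   # small prime modulus (insecure; demo only)
G = 2           # generator

def keypair(secret: int):
    """Private key is a secret number; public key is G^secret mod P."""
    return secret, pow(G, secret, P)

def shared_key(my_priv: int, their_pub: int) -> bytes:
    """Each side derives the session key from its private key and the
    peer's public key; honest peers arrive at the same key."""
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

alice_priv, alice_pub = keypair(123456789)
bob_priv, bob_pub = keypair(987654321)
mallory_priv, mallory_pub = keypair(555555555)

# Honest case: Alice and Bob agree on the same key.
assert shared_key(alice_priv, bob_pub) == shared_key(bob_priv, alice_pub)

# Attack case: the key server hands Alice Mallory's public key instead of
# Bob's. Alice "encrypts to Bob", but Mallory holds the matching private
# key and can read everything. The encryption itself was never broken.
alice_thinks_bob = mallory_pub
assert shared_key(alice_priv, alice_thinks_bob) == shared_key(mallory_priv, alice_pub)
```

This is exactly why the letter stresses authentication: the ciphertext stays mathematically sound, but it is now readable by the wrong party.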

Encrypted messaging services like iMessage, Signal, and WhatsApp, which are used by well over a billion people around the globe, store everyone's public keys on the platforms' servers and distribute public keys corresponding to users who begin a new conversation. This is a convenient solution that makes encryption much easier to use. However, it requires every person who uses those messaging applications to trust the services to deliver the correct, and only the correct, public keys for the communicants of a conversation when asked.

The protocols behind different messaging systems vary, and they are complicated. For example, in two-party communications, such as a reporter communicating with a source, some services provide a way to ensure that a person is communicating only with the intended parties. This authentication mechanism is called a safety number in Signal and a security code in WhatsApp (we will use the term safety number). They are long strings of numbers that are derived from the public keys of the two parties of the conversation, which can be compared between them -- via some other verifiable communications channel such as a phone call -- to confirm that the strings match. Because the safety number is per pair of communicators -- more precisely, per pair of keys -- a change in the value means that a key has changed, and that can mean that it's a different party entirely. People can thus choose to be notified when these safety numbers change, to ensure that they can maintain this level of authentication. Users can also check the safety number before each new communication begins, and thereby guarantee that there has been no change of keys, and thus no eavesdropper. Systems without a safety number or security code do not provide the user with a method to guarantee that the user is securely communicating only with the individual or group with whom they expect to be communicating. Other systems provide security in other ways. For example, iMessage has a cluster of public keys -- one per device -- that it keeps associated with an account corresponding to an identity of a real person. When a new device is added to the account, the cluster of keys changes, and each of the user's devices shows a notice that a new device has been added upon noticing that change.
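The safety-number idea above can be illustrated with a minimal sketch: a short fingerprint derived from both parties' public keys, stable regardless of who computes it, that changes the moment any key in the conversation changes. The derivation below is an illustrative assumption, not the actual Signal or WhatsApp algorithm.

```python
# Minimal sketch of a "safety number": an order-independent fingerprint
# of the two parties' public keys, readable aloud over a phone call.
# Illustrative only; not the real Signal/WhatsApp derivation.

import hashlib

def safety_number(pub_a: bytes, pub_b: bytes) -> str:
    """Hash the sorted pair of public keys and render 30 decimal digits
    in groups of five, so two humans can compare them out-of-band."""
    material = b"".join(sorted([pub_a, pub_b]))
    digits = str(int(hashlib.sha256(material).hexdigest(), 16))[:30]
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

alice, bob, eve = b"alice-public-key", b"bob-public-key", b"eve-public-key"

# Both parties compute the same number regardless of argument order.
assert safety_number(alice, bob) == safety_number(bob, alice)

# If a key in the conversation changes (say, an eavesdropper's key was
# swapped in), the safety number changes too, which is exactly what the
# change notification described above warns users about.
assert safety_number(alice, bob) != safety_number(alice, eve)
```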

The ghost key proposal put forward by GCHQ would enable a third party to see the plain text of an encrypted conversation without notifying the participants. But to achieve this result, their proposal requires two changes to systems that would seriously undermine user security and trust. First, it would require service providers to surreptitiously inject a new public key into a conversation in response to a government demand. This would turn a two-way conversation into a group chat where the government is the additional participant, or add a secret government participant to an existing group chat. Second, in order to ensure the government is added to the conversation in secret, GCHQ's proposal would require messaging apps, service providers, and operating systems to change their software so that it would 1) change the encryption schemes used, and/or 2) mislead users by suppressing the notifications that routinely appear when a new communicant joins a chat.
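The two changes the letter describes, silently injecting an extra key and suppressing the join notification, can be sketched with a toy group chat. The service encrypts each message once per listed member key and controls what notifications members see. The XOR-pad "encryption" and all names here are simplified illustrative assumptions, not any real messenger's protocol.

```python
# Toy model of the "ghost user" mechanism described above. The service
# holds the member list and the notification channel, so it can add a
# key and hide the fact. Illustrative only; not a real protocol.

import hashlib

def xor_pad(member_key: bytes, data: bytes) -> bytes:
    """Stand-in for per-recipient encryption: XOR with a key-derived pad.
    Applying it twice with the same key returns the original bytes."""
    pad = hashlib.sha256(member_key).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(data))

class GroupChat:
    def __init__(self):
        self.members = {}        # name -> key, held by the service
        self.notifications = []  # what the members are actually shown

    def add_member(self, name, key, notify=True):
        self.members[name] = key
        if notify:               # the ghost proposal suppresses exactly this
            self.notifications.append(f"{name} joined the chat")

    def send(self, plaintext):
        # One ciphertext per member key: every listed member can read it.
        return {name: xor_pad(key, plaintext.encode())
                for name, key in self.members.items()}

chat = GroupChat()
chat.add_member("alice", b"alice-key")
chat.add_member("bob", b"bob-key")

# The provider, under a government demand, silently adds a ghost key.
chat.add_member("ghost", b"ghost-agency-key", notify=False)

envelopes = chat.send("meet at noon")

# The ghost decrypts the message, and no member was ever told it joined.
assert xor_pad(b"ghost-agency-key", envelopes["ghost"]).decode() == "meet at noon"
assert not any("ghost" in n for n in chat.notifications)
```

Note that nothing in the encryption step was weakened: the attack lives entirely in the membership and notification logic, which is why the letter argues the claim of "not touching the encryption" is beside the point.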

The Proposal Creates Serious Risks to Cybersecurity and Human Rights The GCHQ's ghost proposal creates serious threats to digital security: if implemented, it will undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused. These cybersecurity risks mean that users cannot trust that their communications are secure, as users would no longer be able to trust that they know who is on the other end of their communications, thereby posing threats to fundamental human rights, including privacy and free expression. Further, systems would be subject to new potential vulnerabilities and risks of abuse.

Integrity and Authentication Concerns As explained above, the ghost proposal requires modifying how authentication works. Like the end-to-end encryption that protects communications while they are in transit, authentication is a critical aspect of digital security and the integrity of sensitive data. The process of authentication allows users to have confidence that the other users with whom they are communicating are who they say they are. Without reliable methods of authentication, users cannot know if their communications are secure, no matter how robust the encryption algorithm, because they have no way of knowing who they are communicating with. This is particularly important for users like journalists who need secure encryption tools to guarantee source protection and be able to do their jobs.

Currently the overwhelming majority of users rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are, and only those people. The GCHQ's ghost proposal completely undermines this trust relationship and the authentication process.

Authentication is still a difficult challenge for technologists and is currently an active field of research. For example, providing a meaningful and actionable record about user key transitions presents several known open research problems, and key verification itself is an ongoing subject of user interface research. If, however, security researchers learn that authentication systems can and will be bypassed by third parties like government agencies, such as GCHQ, this will create a strong disincentive for continuing research in this critical area.

Potential for Introducing Unintentional Vulnerabilities Beyond undermining current security tools and the system for authenticating the communicants in an encrypted chat, GCHQ's ghost proposal could introduce significant additional security threats. There are also outstanding questions about how the proposal would be effectively implemented.

The ghost proposal would introduce a security threat to all users of a targeted encrypted messaging application since the proposed changes could not be exposed only to a single target. In order for providers to be able to suppress notifications when a ghost user is added, messaging applications would need to rewrite the software that every user relies on. This means that any mistake made in the development of this new function could create an unintentional vulnerability that affects every single user of that application.

As security researcher Susan Landau points out, the ghost proposal involves changing how the encryption keys are negotiated in order to accommodate the silent listener, creating a much more complex protocol, raising the risk of an error. (That actually depends on how the algorithm works; in the case of iMessage, Apple has not made the code public.) A look back at recent news stories on unintentional vulnerabilities discovered in encrypted messaging apps like iMessage, and devices ranging from the iPhone to smartphones that run Google's Android operating system, lends credence to her concerns. Any such unintentional vulnerability could be exploited by malicious third parties.

Possibility of Abuse or Misuse of the Ghost Function The ghost proposal also introduces an intentional vulnerability. Currently, the providers of end-to-end encrypted messaging applications like WhatsApp and Signal cannot see into their users' chats. By requiring an exceptional access mechanism like the ghost proposal, GCHQ and U.K. law enforcement officials would require messaging platforms to open the door to surveillance abuses that are not possible today.

At a recent conference on encryption policy, Cindy Southworth, the Executive Vice President at the U.S. National Network to End Domestic Violence (NNEDV), cautioned against introducing an exceptional access mechanism for law enforcement, in part, because of how it could threaten the safety of victims of domestic and gender-based violence. Specifically, she warned that [w]e know that not only are victims in every profession, offenders are in every profession...How do we keep safe the victims of domestic violence and stalking? Southworth's concern was that abusers could either work for the entities that could exploit an exceptional access mechanism, or have the technical skills required to hack into the platforms that developed this vulnerability.

While companies and some law enforcement and intelligence agencies would surely implement strict procedures for utilizing this new surveillance function, those internal protections are insufficient. And in some instances, such procedures do not exist at all. In 2016, a U.K. court held that because the rules for how the security and intelligence agencies collect bulk personal datasets and bulk communications data (under a particular legislative provision) were unknown to the public, those practices were unlawful. As a result of that determination, it asked the agencies - GCHQ, MI5, and MI6 - to review whether they had unlawfully collected data about Privacy International. The agencies subsequently revealed that they had unlawfully surveilled Privacy International.12

Even where procedures exist for access to data that is collected under current surveillance authorities, government agencies have not been immune to surveillance abuses and misuses despite the safeguards that may have been in place. For example, a former police officer in the U.S. discovered that 104 officers in 18 different agencies across the state had accessed her driver's license record 425 times, using the state database as their personal Facebook service.13 Thus, once new vulnerabilities like the ghost protocol are created, new opportunities for abuse and misuse are created as well.14

Finally, if U.K. officials were to demand that providers rewrite their software to permit the addition of a ghost U.K. law enforcement participant in encrypted chats, there is no way to prevent other governments from relying on this newly built system. This is of particular concern with regard to repressive regimes and any country with a poor record on protecting human rights.

The Proposal Would Violate the Principle That User Trust Must be Protected The GCHQ proponents of the ghost proposal argue that [a]ny exceptional access solution should not fundamentally change the trust relationship between a service provider and its users. This means not asking the provider to do something fundamentally different to things they already do to run their business.15 However, the exceptional access mechanism that they describe in the same piece would have exactly the effect they say they wish to avoid: it would degrade user trust and require a provider to fundamentally change its service.

The moment users find out that a software update to their formerly secure end-to-end encrypted messaging application can now allow secret participants to surveil their conversations, they will lose trust in that service. In fact, we've already seen how likely this outcome is. In 2017, the Guardian published a flawed report in which it incorrectly stated that WhatsApp had a backdoor that would allow third parties to spy on users' conversations. Naturally, this inspired significant alarm amongst WhatsApp users, and especially users like journalists and activists who engage in particularly sensitive communications. In this case, the ultimate damage to user trust was mitigated because cryptographers and security organizations quickly understood and disseminated critical deficits in the report,16 and the publisher retracted the story.17

However, if users were to learn that their encrypted messaging service intentionally built a functionality to allow for third-party surveillance of their communications, that loss of trust would understandably be widespread and permanent. In fact, when President Obama's encryption working group explored technical options for an exceptional access mechanism, it cited loss of trust as the primary reason not to pursue provider-enabled access to encrypted devices through current update procedures. The working group explained that this could be dangerous to overall cybersecurity, since its use could call into question the trustworthiness of established software update channels. Individual users aware of the risk of remote access to their devices, could also choose to turn off software updates, rendering their devices significantly less secure as time passed and vulnerabilities were discovered [but] not patched.18 While the proposal that prompted these observations was targeted at operating system updates, the same principles concerning loss of trust and the attendant loss of security would apply in the context of the ghost proposal.

Any proposal that undermines user trust penalizes the overwhelming majority of technology users while permitting the few bad actors it targets to shift to readily available products beyond the law's reach. Encryption products are available all over the world and cannot easily be constrained by territorial borders.19 Thus, while the nefarious actors targeted by the law will still be able to avail themselves of other services, average users -- who may also migrate to different services -- will disproportionately suffer the consequences of degraded security and trust.

The Ghost Proposal Would Violate the Principle That Transparency Is Essential

Although we commend GCHQ officials for initiating this public conversation and publishing their ghost proposal online, if the U.K. were to implement this approach, these activities would be cloaked in secrecy. It is unclear which precise legal authorities GCHQ and U.K. law enforcement would rely upon, but the Investigatory Powers Act grants U.K. officials the power to impose broad non-disclosure agreements that would prevent service providers from even acknowledging they had received a demand to change their systems, let alone the extent to which they complied. The secrecy that would surround implementation of the ghost proposal would exacerbate the damage to authentication systems and user trust described above.

Conclusion

For these reasons, the undersigned organizations, security researchers, and companies urge GCHQ to abide by the six principles they have announced, abandon the ghost proposal, and avoid any alternate approaches that would similarly threaten digital security and human rights. We would welcome the opportunity for a continuing dialogue on these important issues.

Sincerely,

Civil Society Organizations

Access Now
Big Brother Watch
Blueprint for Free Speech
Center for Democracy & Technology
Defending Rights and Dissent
Electronic Frontier Foundation
Engine
Freedom of the Press Foundation
Government Accountability Project
Human Rights Watch
International Civil Liberties Monitoring Group
Internet Society
Liberty
New America's Open Technology Institute
Open Rights Group
Principled Action in Government
Privacy International
Reporters Without Borders
Restore The Fourth
Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC)
TechFreedom
The Tor Project
X-Lab

Technology Companies and Trade Associations

ACT | The App Association
Apple
Google
Microsoft
Reform Government Surveillance (RGS is a coalition of technology companies)
Startpage.com
WhatsApp

Security and Policy Experts*

Steven M. Bellovin, Percy K. and Vida L.W. Hudson Professor of Computer Science; Affiliate faculty, Columbia Law School
Jon Callas, Senior Technology Fellow, ACLU
L Jean Camp, Professor of Informatics, School of Informatics, Indiana University
Stephen Checkoway, Assistant Professor, Oberlin College Computer Science Department
Lorrie Cranor, Carnegie Mellon University
Zakir Durumeric, Assistant Professor, Stanford University
Dr. Richard Forno, Senior Lecturer, UMBC; Director, Graduate Cybersecurity Program & Assistant Director, UMBC Center for Cybersecurity
Joe Grand, Principal Engineer & Embedded Security Expert, Grand Idea Studio, Inc.
Daniel K. Gillmor, Senior Staff Technologist, ACLU
Peter G. Neumann, Chief Scientist, SRI International Computer Science Lab
Dr. Christopher Parsons, Senior Research Associate at the Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto
Phillip Rogaway, Professor, University of California, Davis
Bruce Schneier
Adam Shostack, Author, Threat Modeling: Designing for Security
Ashkan Soltani, Researcher and Consultant; Former FTC CTO and White House Senior Advisor
Richard Stallman, President, Free Software Foundation
Philip Zimmermann, Delft University of Technology Cybersecurity Group

 

