
The Government's Online Harms bill will require foreign social media companies to appoint a token fall guy in Britain who will be jailed should the company fail in its duty of care. I wonder what the salary will be?

31st December 2019

From The Times
The government is pushing forward with an internet censorship bill which will punish people and companies for getting it wrong, without the expense and trouble of trying to dictate rules on what is allowed. In an interesting development, the Times is reporting that the government wants to introduce a "senior management liability", under which executives could be held personally responsible for breaches of standards. US tech giants would be required to appoint a British-based director, who would be accountable for any breaches of the censorship rules. It seems a little unjust to prosecute a token fall guy who is likely to have absolutely no say in the day-to-day decisions made by a foreign company. Still, it should be a very well paid job which hopefully includes lots of coverage for legal bills and a zero notice period allowing instant resignation at the first hint of trouble.

Britain's new government continues with its internet censorship policy outlined in the Online Harms white paper

19th December 2019

See article [pdf] from assets.publishing.service.gov.uk
A reminder of what Online Harms is all about from cyberleagle.com
The Government has reiterated its plans as outlined in the Online Harms white paper. It seems that the measures have been advanced somewhat, as previous references to pre-legislative scrutiny have been deleted. The Queen's Speech briefing paper details the government's legislative plan for the next two years:

Online Harms

"My ministers will develop legislation to improve internet safety for all."

Britain is leading the world in developing a comprehensive regulatory regime to keep people safe online, protect children and other vulnerable users and ensure that there are no safe spaces for terrorists online.

The April 2019 Online Harms White Paper set out the Government's plan for world-leading legislation to make the UK the safest place in the world to be online. The Government will continue work to develop this legislation, alongside ensuring that the UK remains one of the best places in the world for technology companies to operate. The proposals, as set out in the White Paper, were:

○ A new duty of care on companies towards their users, with an independent regulator to oversee this framework.

○ The Government wants to keep people safe online, but we want to do this in a proportionate way, ensuring that freedom of expression is upheld and promoted online, and that the value of a free and independent press is preserved.

○ The Government is seeking to do this by ensuring that companies have the right processes and systems in place to fulfil their obligations, rather than penalising them for individual instances of unacceptable content.

The public consultation on this has closed and the Government is analysing the responses and considering the issues raised. The Government is working closely with a variety of stakeholders, including technology companies and civil society groups, to understand their views. The Government will prepare legislation to implement the final policy in response to the consultation.

Ahead of this legislation, the Government will publish interim codes of practice on tackling the use of the internet by terrorists and those engaged in child sexual abuse and exploitation. This will ensure companies take action now to tackle content that threatens our national security and the physical safety of children. The Government will publish a media literacy strategy to empower users to stay safe online. The Government will help start-ups and businesses to embed safety from the earliest stages of developing or updating their products and services, by publishing a Safety by Design framework. The Government will carry out a review of the Gambling Act, with a particular focus on tackling issues around online loot boxes and credit card misuse.

What the Government has done so far:

The joint DCMS-Home Office Online Harms White Paper was published in April 2019. The Government also published the Social Media Code of Practice, setting out the actions that social media platforms should take to prevent bullying, insulting, intimidating and humiliating behaviours on their sites. In November 2018 the Government established a new UK Council for Internet Safety. This expanded the scope of the UK Council for Child Internet Safety, and was guided by the Government's Internet Safety Strategy. The UK has been championing international action on online safety. The Prime Minister used his speech at the United Nations General Assembly to champion the UK's work on online safety.

The AdTech showdown is coming but will the ICO bite?

21st November 2019

See article from openrightsgroup.org

Nobody said it was easy. But it shouldn't be this hard either.

12th November 2019

By Amol Rajan, BBC Media editor. See article from bbc.com

Three senior government ministers call on social media bosses to protect candidates from being insulted and harassed by the electorate

6th November 2019

See article from gov.uk
See article [pdf] from assets.publishing.service.gov.uk

Perhaps the Yanks can come to our rescue again!
Oliver Dowden, Minister for the Cabinet Office, Priti Patel, Home Secretary, and Nicky Morgan MP have called on social media bosses to protect them from abuse. They wrote:

To: Mark Zuckerberg, Facebook CEO; Jack Dorsey, Twitter CEO; Sundar Pichai, Google CEO

The UK General Election campaign starts tomorrow. We must ensure robust debate during the campaign does not mutate into intimidation, harassment and abuse. Freedom of speech is a fundamental tenet of British democracy, and this includes the freedom to speak without being threatened or abused. We must tackle this worrying trend of abuse of people in public life and ensure that certain groups are not deterred from standing or
speaking freely because they fear for their safety. It is important to distinguish between strongly felt political debate on one hand, and unacceptable acts of abuse, hatred, intimidation and violence. Chief Constables continue to contact candidates in their force area to re-emphasise the importance of reporting any criminal offences, safety concerns or threats to the police. We know that you have also been working to tackle abusive behaviour on your platforms, including through delivering training on online safety and creating dedicated reporting channels. We welcome these measures - it is right that processes are in place to deal with cases of abuse or intimidation in an appropriate and timely manner.
As we enter this election period, we are conscious that there are a large number of new candidates who will be unfamiliar with how to seek help if they believe they are being subjected to abuse, and in some cases, illegal activity
online. You will be aware that a number of MPs have also identified the online abuse and threats they receive as a particular concern as we approach another electoral event. Therefore we would encourage you to:

○ Work together to provide a one-stop-shop piece of advice for candidates which will include what content breaches your terms and conditions, where to report where they believe content may be breaching these, and what response they can expect from you.

○ Work with officials and the Political Parties to ensure that safety and reporting guidance reach the widest possible audience of candidates and electoral staff as soon as possible.

○ Have regular dialogue between you during the campaign to ensure where content or users are breaching your terms and conditions, this information is shared between you to reduce a lag time in action as abusive material or users migrate between platforms.

○ Continue to have an open and regular dialogue with the security, policing and electoral authorities. We will ask officials to liaise with you on the best way to take this forward.

Protecting our democracy and ensuring this election is fought fairly and safely is the responsibility of us all. We trust that you are taking the necessary steps to ensure this is the case during the forthcoming election period, and look forward to you providing an update on this.

The Government claims that Channel 4 will have blood on its hands if it broadcasts Smuggled

4th November 2019

See article from telegraph.co.uk
See programme from channel4.com
Channel 4 has been warned by the Home Office that it will have blood on its hands if it goes ahead with a programme in which contestants attempt to smuggle themselves into the UK. According to the Telegraph, a Home Office 'source' branded Smuggled as insensitive and irresponsible. Presumably the government thinks that the programme will give people ideas that may prove dangerous.

The channel delayed screening the programme two weeks ago after the bodies of 39 Vietnamese illegal migrants were found inside a lorry container on an industrial estate in Essex. However, the first episode of Smuggled, a two-part series billed by Channel 4 as the largest test of our borders ever conducted by the media, will go ahead tomorrow with four contestants successfully entering the UK illegally. Among the first quartet attempting to enter the country is one hidden in the back of a motorhome, another secreted behind the driver's seat of a lorry, a third piloting a rubber dinghy and a fourth using a false passport whose photograph bears scant resemblance to its bearer. Hidden cameras record their progress.

Smuggled will be shown on Channel 4 on Monday 4th November at 9pm.

Ofcom sets out its stall in a report finding that internet censorship, as per the Online Harms Bill, will be very vague, very open to unintended consequences, and presumably very expensive

28th October 2019

See article from ofcom.org.uk
See report [pdf] from ofcom.org.uk
Ofcom writes: We have today published an economic perspective on the challenges and opportunities in regulating online services. Online services have revolutionised people's personal and working lives,
generating significant benefits. But some of their features have the potential to cause harms to individuals and society. These can include exposure to harmful content or conduct, loss of privacy, data or security breaches, lack of competition, unfair
business practices or harm to wellbeing. In May, our Online Nation report set out the benefits to consumers of being online and their concerns about potential online harm. Today's paper aims to contribute to the discussion on how
to address these harms effectively, drawing on Ofcom's experience as the UK communications regulator. It looks at the sources of online harms from an economic perspective, which can inform the broader policy assessment that policymakers and regulators
may use to evidence and address them.

The Government reveals that it spent £2.2 million on its failed age verification for porn policy, and that doesn't include the work from its own civil servants

25th October 2019

See article from theguardian.com
More than £2m of taxpayers' money was spent preparing for the age verification for porn censorship regime before the policy was dropped in early October, the government has revealed. The bulk of the spending, £2.2m, was paid to the BBFC to do the detailed work on the policy from 2016 onwards. Before then, additional costs were borne by the Department for Digital, Culture, Media and Sport, where civil servants were tasked with developing the proposals as part of their normal work.

Answering a written question from the shadow DCMS secretary, Tom Watson, Matt Warman for the government added:

Building on that work, we are now establishing how the objectives of part three of the Digital Economy Act can be delivered through our online harms regime.

It is not just government funds that were wasted on the abortive scheme. Multiple private companies had developed systems with which they were hoping to provide age verification services. The bizarre thing was that all this money was spent when the government knew that it wouldn't even prevent determined viewers from getting access to porn. It was only considered effective at blocking kids from stumbling on porn. So all that expense, and all that potential danger for adults stupidly submitting to age verification, and all for what? Well, at least next time round the government may consider putting at least a modicum of thought into people's privacy. It's not ALL about the kids. Surely the government has a duty of care for adults too. We need a Government Harms bill requiring a duty of care for ALL citizens. Now that would be a first!

The government cancels current plans for age verification requirements for porn as defined in the Digital Economy Act. It will readdress the issue as part of its Online Harms bill

16th October 2019

See article from parliament.uk
See article from bbfc.co.uk
Nicky Morgan, Secretary of State for Digital, Culture, Media and Sport, issued a written statement cancelling the government's current plans to require age verification for porn. She wrote: The government published the
Online Harms White Paper in April this year. It proposed the establishment of a duty of care on companies to improve online safety, overseen by an independent regulator with strong enforcement powers to deal with non-compliance. Since the White Paper's
publication, the government's proposals have continued to develop at pace. The government announced as part of the Queen's Speech that we will publish draft legislation for pre-legislative scrutiny. It is important that our policy aims and our overall
policy on protecting children from online harms are developed coherently in view of these developments with the aim of bringing forward the most comprehensive approach possible to protecting children. The government has
concluded that this objective of coherence will be best achieved through our wider online harms proposals and, as a consequence, will not be commencing Part 3 of the Digital Economy Act 2017 concerning age verification for online pornography. The
Digital Economy Act objectives will therefore be delivered through our proposed online harms regulatory regime. This course of action will give the regulator discretion on the most effective means for companies to meet their duty of care. As currently
drafted, the Digital Economy Act does not cover social media platforms. The government's commitment to protecting children online is unwavering. Adult content is too easily accessed online and more needs to be done to protect
children from harm. We want to deliver the most comprehensive approach to keeping children safe online and recognised in the Online Harms White Paper the role that technology can play in keeping all users, particularly children, safe. We are committed to
the UK becoming a world-leader in the development of online safety technology and to ensure companies of all sizes have access to, and adopt, innovative solutions to improve the safety of their users. This includes age verification tools and we expect
them to continue to play a key role in protecting children online.
The BBFC sounded a bit miffed about losing the internet censor gig. The BBFC posted on its website:
The introduction of age-verification on pornographic websites in the UK is a necessary and important child protection measure. The BBFC was designated as the Age-verification Regulator under the Digital Economy Act 2017 (DEA) in February 2018, and
has since worked on the implementation of age-verification, developing a robust standard of age-verification designed to stop children from stumbling across or accessing pornography online. The BBFC had all systems in place to undertake the role of AV
Regulator, to ensure that all commercial pornographic websites accessible from the UK would have age gates in place or face swift enforcement action. The BBFC understands the Government's decision, announced today, to implement
age-verification as part of the broader online harms strategy. We will bring our expertise and work closely with government to ensure that the child protection goals of the DEA are achieved.
I don't suppose we will ever hear the real reasons why the law was ditched, but I suspect that there were serious problems with it. The amount of time and effort put into this, and the serious ramifications for the BBFC and the age verification companies that must now be facing hard times, surely make this cancellation a big decision.

It is my guess that a very troublesome issue for the authorities is how both age verification and website blocking would have encouraged a significant number of people to work around government surveillance of the internet. It is probably more important to keep tabs on terrorists and child abusers than to lose this capability for the sake of kids stumbling on porn.

Although the news of the cancellation was reported today, Rowland Manthorpe, a reporter for Sky News, suggested on Twitter that maybe the idea had already been shelved back in the summer. He tweeted:

When @AJMartinSky and I broke the news that the porn block was being
delayed again, we reported that it was on hold indefinitely. It was. Then our story broke. Inside DCMS a sudden panic ensued. Quickly, they drafted a statement saying it was delayed for 6 months

A summary of the Online Harms Bill as referenced in the Queen's Speech

15th October 2019

See Queen's Speech Briefing [pdf] from assets.publishing.service.gov.uk
The April 2019 Online Harms White Paper set out the Government's plan for world-leading legislation to make the UK the safest place in the world to be online. The proposals, as set out in the White Paper, were:

○ A new duty of care on companies towards their users, with an independent regulator to oversee this framework.

○ We want to keep people safe online, but we want to do this in a proportionate way, ensuring that freedom of expression is upheld and promoted online, and businesses do not face undue burdens.

○ We are seeking to do this by ensuring that companies have the right processes and systems in place to fulfil their obligations, rather than penalising them for individual instances of unacceptable content.

Our public consultation on this has closed and we are analysing the responses and considering the issues raised. We are working closely with a variety of stakeholders, including technology companies and civil society groups, to understand their views.
Next steps:

○ We will publish draft legislation for pre-legislative scrutiny.

○ Ahead of this legislation, the Government will publish work on tackling the use of the internet by terrorists and those engaged in child sexual abuse and exploitation, to ensure companies take action now to tackle content that threatens our national security and the physical safety of children.

○ We are also taking forward additional measures, including a media literacy strategy, to empower users to stay safe online.

○ A Safety by Design framework will help start-ups and small businesses to embed safety during the development or update of their products and services.

Government independent advisor reports on tackling extremism that is not violent extremism. It's proving a bit hard to define though

7th October 2019

See press release from gov.uk
See report [pdf] from assets.publishing.service.gov.uk
The government's independent advisor on extremism is calling for a complete overhaul of the government's strategy, recommending a new taskforce led by the Home Secretary. Sara Khan, who heads up the Commission for Countering Extremism, has carried out the first-ever national conversation on extremism and reviewed the government's current approach. The commission has published its findings and recommendations in a new report, Challenging Hateful Extremism.

The report identifies a new category of extremist behaviour outside of terrorism and violent extremism, which it calls hateful extremism. It summarises hateful extremism as:

○ behaviours that can incite and amplify hate, or engage in persistent hatred, or equivocate about and make the moral case for violence

○ that draw on hateful, hostile or supremacist beliefs directed at an out-group who are perceived as a threat to the wellbeing, survival or success of an in-group

○ that cause, or are likely to cause, harm to individuals, communities or wider society
Recommendations for government:

○ A rebooted Counter-Extremism strategy based on a new definition of hateful extremism
○ A Home Secretary-chaired hateful extremism task force bringing together those inside and outside government to oversee the response to extremism incidents
○ Clarity on the difference between counter terrorism, countering hateful extremism and building cohesive communities
○ More support and protection for organisations and individuals who are countering extremism

Recommendations for a whole society response:

○ National and local politicians, community and faith leaders must be consistent in their actions against hateful extremism
○ Organisations countering extremism must continue their efforts, and work with the commission to build understanding and interventions against hateful extremism, backed by sustainable funding from charitable sources
○ Social media companies must reduce the hostile atmosphere on their platforms

Recommendations for a strengthened Commission for Countering Extremism:

○ the commission should be placed on a statutory basis to guarantee independence, along with information sharing powers
○ two additional commissioners for specific areas of work, including a review of existing legislation
○ a small and dedicated network of counter extremism organisations to identify emerging issues and put in place interventions
○ pioneering research, including a commitment to develop and test a full, working definition of hateful extremism
○ review existing legislation that addresses hateful extremism and can protect victims and counter extremists from abuse
○ trial new and innovative interventions and develop a new toolbox of measures

Nicky Morgan announces that the government's porn censorship measure has now been properly registered with the EU

6th October 2019

See article from hansard.parliament.uk
House of Commons, 3rd October 2019. The Secretary of State for Digital, Culture, Media and Sport (Nicky Morgan):

Hon. Members will be aware of our ongoing work to keep people safe online and our proposals around age verification for online pornography. I wish to notify the House that the standstill period under the EU's technical services and regulations directive expired at midnight last night. I understand the interest in this issue that exists in all parts of the House, and I will update the House on next steps in due course.

The government is set to make students sign away their rights to free speech in a contract

6th October 2019

See article from thetimes.co.uk
Undergraduates could be required to sign contracts forcing them to refrain from making sexist, racist or anti-semitic comments. The contracts have been demanded by the education secretary, Gavin Williamson, in a letter to the
Office for Students, the higher education watchdog, which would require universities to enforce them. Williamson said: I want every student to be confident that their institution stands up for free speech and that they
will not experience . . . harassment, racial abuse, anti-semitism [at university].
Whilst students lose their right to comment about sexism and the like, the government will offer a little balance by forbidding them from no-platforming others who hold different views from their own.

The government initiates a data sharing agreement with the US and takes the opportunity to complain about internet encryption that keeps us safe from snoops, crooks, scammers and thieves

5th October 2019

4th October 2019. See government press release from gov.uk
Home Secretary Priti Patel has signed an historic agreement that will enable British law enforcement agencies to directly demand electronic data relating to terrorists, child sexual abusers and other serious criminals from US tech firms.
The world-first UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather
than through governments, which can take years. The Agreement was signed with US Attorney General William P. Barr in Washington DC, where the Home Secretary also met security partners to discuss the two countries' ever deeper
cooperation and global leadership on security. The current process, which sees requests for communications data from law enforcement agencies submitted and approved by central governments via Mutual Legal Assistance (MLA), can
often take anywhere from six months to two years. Once in place, the Agreement will see the process reduced to a matter of weeks or even days. The US will have reciprocal access, under a US court order, to data from UK
communication service providers. The UK has obtained assurances which are in line with the government's continued opposition to the death penalty in all circumstances. Any request for data must be made under an authorisation in
accordance with the legislation of the country making the request and will be subject to independent oversight or review by a court, judge, magistrate or other independent authority. The Agreement does not change anything about
the way companies can use encryption and does not stop companies from encrypting data. It gives effect to the Crime (Overseas Production Orders) Act 2019, which received Royal Assent in February this year and was facilitated by
the CLOUD Act in America, passed last year.

Letter to Mark Zuckerberg asking him not to keep his internet users safe through encrypted messages

The Home Secretary has also published an open letter to
Facebook, co-signed with US Attorney General William P. Barr, Acting US Homeland Security Secretary Kevin McAleenan and Australia's Minister for Home Affairs Peter Dutton, outlining serious concerns with the company's plans to implement end-to-end
encryption across its messaging services. The letter reads:

Dear Mr. Zuckerberg,
We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the
content of communications to protect our citizens. In your post of 6 March 2019, 'A Privacy-Focused Vision for Social Networking', you acknowledged that "there are real safety concerns to address before we can implement
end-to-end encryption across all our messaging services." You stated that "we have a responsibility to work with law enforcement and to help prevent" the use of Facebook for things like child sexual exploitation, terrorism, and extortion.
We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about
the impact its proposals could have on protecting our most vulnerable citizens. We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications. We also
respect promises made by technology companies to protect users' data. Law abiding citizens have a legitimate expectation that their privacy will be protected. However, as your March blog post recognized, we must ensure that technology companies protect
their users and others affected by their users' online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world. We must find a way to balance the need to secure data with public safety and the need
for law enforcement to access the information they need to safeguard the public, investigate crimes, and prevent future criminal activity. Not doing so hinders our law enforcement agencies' ability to stop criminals and abusers in their tracks.
Companies should not deliberately design their systems to preclude any form of access to content, even for preventing or investigating the most serious crimes. This puts our citizens and societies at risk by severely eroding a
company's ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries' attempts to undermine democratic values and institutions, preventing the prosecution of offenders
and safeguarding of victims. It also impedes law enforcement's ability to investigate these and other serious crimes. Risks to public safety from Facebook's proposals are exacerbated in the context of a single platform that would
combine inaccessible messaging services with open profiles, providing unique routes for prospective offenders to identify and groom our children. Facebook currently undertakes significant work to identify and tackle the most
serious illegal content and activity by enforcing your community standards. In 2018, Facebook made 16.8 million reports to the US National Center for Missing & Exploited Children (NCMEC) -- more than 90% of the 18.4 million total reports that year.
As well as child abuse imagery, these referrals include more than 8,000 reports related to attempts by offenders to meet children online and groom or entice them into sharing indecent imagery or meeting in real life. The UK National Crime Agency (NCA)
estimates that, last year, NCMEC reporting from Facebook will have resulted in more than 2,500 arrests by UK law enforcement and almost 3,000 children safeguarded in the UK. Your transparency reports show that Facebook also acted against 26 million
pieces of terrorist content between October 2017 and March 2019. More than 99% of the content Facebook takes action against -- both for child sexual exploitation and terrorism -- is identified by your safety systems, rather than by reports from users.
While these statistics are remarkable, mere numbers cannot capture the significance of the harm to children. To take one example, Facebook sent a priority report to NCMEC, having identified a child who had sent self-produced child
sexual abuse material to an adult male. Facebook located multiple chats between the two that indicated historical and ongoing sexual abuse. When investigators were able to locate and interview the child, she reported that the adult had sexually abused
her hundreds of times over the course of four years, starting when she was 11. He also regularly demanded that she send him sexually explicit imagery of herself. The offender, who had held a position of trust with the child, was sentenced to 18 years in
prison. Without the information from Facebook, abuse of this girl might be continuing to this day. Our understanding is that much of this activity, which is critical to protecting children and fighting terrorism, will no longer be
possible if Facebook implements its proposals as planned. NCMEC estimates that 70% of Facebook's reporting -- 12 million reports globally -- would be lost. This would significantly increase the risk of child sexual exploitation or other serious harms.
You have said yourself that "we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves". While this trade-off has not been quantified, we are very
concerned that the right balance is not being struck, which would make your platform an unsafe space, including for children. Equally important to Facebook's own work to act against illegal activity is law enforcement's ability to obtain the content of communications, under appropriate legal authorisation, to save lives, enable criminals to be brought to justice, and exonerate the innocent. We therefore call on Facebook and other companies to take the
following steps:
- embed the safety of the public in system designs, thereby enabling you to continue to act against illegal content effectively with no reduction to safety, and facilitating the prosecution of offenders and safeguarding of victims
- enable law enforcement to obtain lawful access to content in a readable and usable format
- engage in consultation with governments to facilitate this in a way that is substantive and genuinely influences your design decisions
- not implement the proposed changes until you can ensure that the systems you would apply to maintain the safety of your users are fully tested and operational
We are committed to working with you to focus on reasonable proposals that will allow Facebook and our governments to protect your users and the public, while protecting their privacy. Our technical experts are confident that we can
do so while defending cyber security and supporting technological innovation. We will take an open and balanced approach in line with the joint statement of principles signed by the governments of the US, UK, Australia, New Zealand, and Canada in August
2018 and the subsequent communique agreed in July this year. As you have recognised, it is critical to get this right for the future of the internet. Children's safety and law enforcement's ability to bring criminals to justice
must not be the ultimate cost of Facebook taking forward these proposals.

Yours sincerely,

Rt Hon Priti Patel MP, United Kingdom Secretary of State for the Home Department
William P. Barr, United States Attorney General
Kevin K. McAleenan, United States Secretary of Homeland Security (Acting)
Hon Peter Dutton MP, Australian Minister for Home Affairs
Update: An All-Out Attack on Encryption

5th October 2019. See article from eff.org
Top law enforcement officials in the United States, United Kingdom, and Australia told Facebook today that they want backdoor access to all encrypted messages sent on all its platforms. In an
open letter, these governments called on Mark Zuckerberg to stop Facebook's
plan to introduce end-to-end encryption on all of the company's messaging products and instead promise that it
will "enable law enforcement to obtain lawful access to content in a readable and usable format." This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people.
Facebook should not comply. The letter comes in concert with the signing of a new US-UK agreement that allows law enforcement in one jurisdiction to more easily obtain electronic data stored in the other. But
the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation that crosses every digital device. The letter focuses
on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people--including journalists, human rights
activists, and those at risk of abuse by intimate partners--use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations, and
to ensure that the communications infrastructure we rely on is truly working as intended. What's more, the backdoors
into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, who might seek
to spy on dissidents in the name of combatting terrorism or civil unrest, for example. The Department of Justice and its partners in the UK and Australia claim to support "strong encryption," but the unfettered access to
encrypted data described in this letter is incompatible with how encryption actually works.
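The EFF's closing point can be made concrete with a toy sketch. The following Python uses a simple XOR one-time pad purely for illustration (it is not the Signal-protocol encryption Facebook's products actually use): in an end-to-end scheme the provider relays only ciphertext and never holds the key, so there is no readable content it could hand over, and "lawful access" would require either weakening the scheme or escrowing keys for everyone.

```python
# Toy illustration (not a real messaging protocol) of why end-to-end
# encryption is all-or-nothing: the provider sees only ciphertext, and
# without the endpoints' key the content is unrecoverable.
import secrets

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad XOR: secure if the key is random, at least as long
    # as the message, and never reused.
    assert len(key) >= len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

xor_decrypt = xor_encrypt  # XOR is its own inverse

# Alice and Bob share a key; the provider never sees it.
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = xor_encrypt(key, message)           # all the provider relays
assert xor_decrypt(key, ciphertext) == message   # endpoints can read it

# A party holding only `ciphertext` learns nothing about the content:
# with a one-time pad, every plaintext of the same length is equally
# consistent with the ciphertext, so guessing keys gets you nowhere.
wrong_key = secrets.token_bytes(len(message))
assert xor_decrypt(wrong_key, ciphertext) != message  # all but certain
```

The design point the letter glosses over is visible here: there is no way to give a third party access to `message` without either giving it `key` (key escrow) or replacing the scheme with something weaker, and any such mechanism is equally available to whoever else obtains it.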
|
|
|