
Liberty News


2019: April-June


 

Offsite Article: UK security services could get Facebook-style data analysis tools...


Link Here 18th June 2019
Counter-terror officials may be able to scan data from across population, official report says

See article from theguardian.com

 

 

Malevolent spirits, spooks and ghosts...

Human rights groups and tech companies unite in an open letter condemning GCHQ's Ghost Protocol suggestion to open a backdoor to snoop on 'encrypted' communication apps


Link Here 31st May 2019
Full story: Snooper's Charter...Tories re-start massive programme of communications snooping

To GCHQ

The undersigned organizations, security researchers, and companies write in response to the proposal published by Ian Levy and Crispin Robinson of GCHQ in Lawfare on November 29, 2018, entitled Principles for a More Informed Exceptional Access Debate. We are an international coalition of civil society organizations dedicated to protecting civil liberties, human rights, and innovation online; security researchers with expertise in encryption and computer science; and technology companies and trade associations, all of whom share a commitment to strong encryption and cybersecurity. We welcome Levy and Robinson's invitation for an open discussion, and we support the six principles outlined in the piece. However, we write to express our shared concerns that this particular proposal poses serious threats to cybersecurity and fundamental human rights including privacy and free expression.

The six principles set forth by GCHQ officials are an important step in the right direction, and highlight the importance of protecting privacy rights, cybersecurity, public confidence, and transparency. We especially appreciate the principles' recognition that governments should not expect unfettered access to user data, that the trust relationship between service providers and users must be protected, and that transparency is essential.

Despite this, the GCHQ piece outlines a proposal for silently adding a law enforcement participant to a group chat or call. This proposal to add a ghost user would violate important human rights principles, as well as several of the principles outlined in the GCHQ piece. Although the GCHQ officials claim that you don't even have to touch the encryption to implement their plan, the ghost proposal would pose serious threats to cybersecurity and thereby also threaten fundamental human rights, including privacy and free expression. In particular, as outlined below, the ghost proposal would create digital security risks by undermining authentication systems, by introducing potential unintentional vulnerabilities, and by creating new risks of abuse or misuse of systems. Importantly, it also would undermine the GCHQ principles on user trust and transparency set forth in the piece.

How the Ghost Proposal Would Work

The security in most modern messaging services relies on a technique called public key cryptography. In such systems, each device generates a pair of very large mathematically related numbers, usually called keys. One of those keys -- the public key -- can be distributed to anyone. The corresponding private key must be kept secure, and not shared with anyone. Generally speaking, a person's public key can be used by anyone to send an encrypted message that only the recipient's matching private key can unscramble. Within such systems, one of the biggest challenges to securely communicating is authenticating that you have the correct public key for the person you're contacting. If a bad actor can fool a target into thinking a fake public key actually belongs to the target's intended communicant, it won't matter that the messages are encrypted in the first place because the contents of those encrypted communications will be accessible to the malicious third party.
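To make the mechanics concrete, here is a minimal Python sketch of the public-key idea described above, using the third-party cryptography package. It uses RSA purely for readability; the messaging apps discussed in the letter use different, Diffie-Hellman based protocols, so this illustrates the concept rather than their actual designs.

```python
# Minimal sketch of the public-key idea above, using the third-party
# 'cryptography' package (pip install cryptography). RSA is used here
# purely for readability; the apps named in the letter use different,
# Diffie-Hellman based protocols.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Each device generates a key pair: the private key never leaves the
# device, while the public key can be handed out to anyone.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt a message...
ciphertext = public_key.encrypt(b"meet at noon", oaep)

# ...but only the matching private key can unscramble it.
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"
```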

Encrypted messaging services like iMessage, Signal, and WhatsApp, which are used by well over a billion people around the globe, store everyone's public keys on the platforms' servers and distribute public keys corresponding to users who begin a new conversation. This is a convenient solution that makes encryption much easier to use. However, it requires every person who uses those messaging applications to trust the services to deliver the correct, and only the correct, public keys for the communicants of a conversation when asked.
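As a toy illustration of that trust relationship (invented for this sketch, not any provider's real implementation), consider a client that fetches public keys from the provider's directory: whatever key the server returns is the key the client will encrypt to.

```python
# Illustrative toy key directory (not any provider's real implementation).
# Clients have no independent way to verify lookup() answers; they simply
# trust the server to return the right key, and only the right key.
class KeyDirectory:
    def __init__(self):
        self._keys = {}                 # username -> public key

    def register(self, user, public_key):
        self._keys[user] = public_key

    def lookup(self, user):
        # An honest server returns the user's real key. A compelled or
        # compromised server could return a substituted or extra key here,
        # and the client would encrypt to it without noticing.
        return self._keys[user]

directory = KeyDirectory()
directory.register("bob", b"bobs-real-public-key")
key_alice_encrypts_to = directory.lookup("bob")   # trusted, unverified
```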

The protocols behind different messaging systems vary, and they are complicated. For example, in two-party communications, such as a reporter communicating with a source, some services provide a way to ensure that a person is communicating only with the intended parties. This authentication mechanism is called a safety number in Signal and a security code in WhatsApp (we will use the term safety number). They are long strings of numbers that are derived from the public keys of the two parties of the conversation, which can be compared between them -- via some other verifiable communications channel such as a phone call -- to confirm that the strings match. Because the safety number is per pair of communicators -- more precisely, per pair of keys -- a change in the value means that a key has changed, and that can mean that it's a different party entirely. People can thus choose to be notified when these safety numbers change, to ensure that they can maintain this level of authentication. Users can also check the safety number before each new communication begins, and thereby guarantee that there has been no change of keys, and thus no eavesdropper. Systems without a safety number or security code do not provide the user with a method to guarantee that the user is securely communicating only with the individual or group with whom they expect to be communicating.

Other systems provide security in other ways. For example, iMessage has a cluster of public keys -- one per device -- that it keeps associated with an account corresponding to an identity of a real person. When a new device is added to the account, the cluster of keys changes, and each of the user's devices shows a notice that a new device has been added upon noticing that change.
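A rough Python sketch of the safety-number idea follows. The derivation here (a hash over the sorted pair of public keys, rendered as digit groups) is invented for illustration; Signal and WhatsApp compute their safety numbers and security codes differently in detail, but the property shown is the same: both parties derive the same string, and any key change changes it.

```python
# Rough sketch of the safety-number idea: both parties derive the same
# digit string from the pair of public keys and compare it out of band.
# This derivation is invented for illustration; the real apps differ.
import hashlib

def safety_number(pub_a: bytes, pub_b: bytes) -> str:
    # Sort the keys so both parties compute the identical value.
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).digest()
    # Render as groups of digits, roughly like the apps display it.
    digits = "".join(f"{byte:03d}"[-2:] for byte in digest[:15])
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

alice_view = safety_number(b"alice-public-key", b"bob-public-key")
bob_view = safety_number(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view                     # same pair, same number

# If a different key is substituted (say, a ghost's), the number changes,
# which is exactly the signal a verifying user would catch.
assert alice_view != safety_number(b"alice-public-key", b"ghost-public-key")
```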

The ghost key proposal put forward by GCHQ would enable a third party to see the plain text of an encrypted conversation without notifying the participants. But to achieve this result, their proposal requires two changes to systems that would seriously undermine user security and trust. First, it would require service providers to surreptitiously inject a new public key into a conversation in response to a government demand. This would turn a two-way conversation into a group chat where the government is the additional participant, or add a secret government participant to an existing group chat. Second, in order to ensure the government is added to the conversation in secret, GCHQ's proposal would require messaging apps, service providers, and operating systems to change their software so that it would 1) change the encryption schemes used, and/or 2) mislead users by suppressing the notifications that routinely appear when a new communicant joins a chat.
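The following deliberately simplified toy model (not drawn from any real app's code) shows the two changes side by side: a group-chat roster that normally records a visible notification when a participant's key is added, and a ghost variant that injects a key while suppressing that notice.

```python
# Toy model of the letter's two changes: key injection plus notification
# suppression. Purely illustrative; no real messaging app works this way
# at the code level.
class GroupChat:
    def __init__(self, member_keys):
        self.member_keys = dict(member_keys)   # participant name -> public key
        self.notifications = []                # what participants get shown

    def add_member(self, name, public_key, notify=True):
        # Messages are encrypted to every key in member_keys, so adding
        # a key is enough to give its holder access to future messages.
        self.member_keys[name] = public_key
        if notify:
            self.notifications.append(f"{name} joined the chat")

chat = GroupChat({"alice": b"pk-alice", "bob": b"pk-bob"})
chat.add_member("carol", b"pk-carol")                # normal join: visible
chat.add_member("ghost", b"pk-ghost", notify=False)  # ghost join: silent

assert "ghost" in chat.member_keys                   # ghost reads new messages
assert chat.notifications == ["carol joined the chat"]  # but no one is told
```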

The Proposal Creates Serious Risks to Cybersecurity and Human Rights

GCHQ's ghost proposal creates serious threats to digital security: if implemented, it will undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused. These cybersecurity risks mean that users cannot trust that their communications are secure, as users would no longer be able to trust that they know who is on the other end of their communications, thereby posing threats to fundamental human rights, including privacy and free expression. Further, systems would be subject to new potential vulnerabilities and risks of abuse.

Integrity and Authentication Concerns

As explained above, the ghost proposal requires modifying how authentication works. Like the end-to-end encryption that protects communications while they are in transit, authentication is a critical aspect of digital security and the integrity of sensitive data. The process of authentication allows users to have confidence that the other users with whom they are communicating are who they say they are. Without reliable methods of authentication, users cannot know if their communications are secure, no matter how robust the encryption algorithm, because they have no way of knowing who they are communicating with. This is particularly important for users like journalists who need secure encryption tools to guarantee source protection and be able to do their jobs.

Currently, the overwhelming majority of users rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are, and only those people. GCHQ's ghost proposal completely undermines this trust relationship and the authentication process.

Authentication is still a difficult challenge for technologists and is currently an active field of research. For example, providing a meaningful and actionable record about user key transitions presents several known open research problems, and key verification itself is an ongoing subject of user interface research. If, however, security researchers learn that authentication systems can and will be bypassed by third parties like government agencies, such as GCHQ, this will create a strong disincentive for continuing research in this critical area.

Potential for Introducing Unintentional Vulnerabilities

Beyond undermining current security tools and the system for authenticating the communicants in an encrypted chat, GCHQ's ghost proposal could introduce significant additional security threats. There are also outstanding questions about how the proposal would be effectively implemented.

The ghost proposal would introduce a security threat to all users of a targeted encrypted messaging application since the proposed changes could not be exposed only to a single target. In order for providers to be able to suppress notifications when a ghost user is added, messaging applications would need to rewrite the software that every user relies on. This means that any mistake made in the development of this new function could create an unintentional vulnerability that affects every single user of that application.

As security researcher Susan Landau points out, the ghost proposal involves changing how the encryption keys are negotiated in order to accommodate the silent listener, creating a much more complex protocol--raising the risk of an error. (That actually depends on how the algorithm works; in the case of iMessage, Apple has not made the code public.) A look back at recent news stories on unintentional vulnerabilities discovered in encrypted messaging apps like iMessage, and in devices ranging from the iPhone to smartphones that run Google's Android operating system, lends credence to her concerns. Any such unintentional vulnerability could be exploited by malicious third parties.

Possibility of Abuse or Misuse of the Ghost Function

The ghost proposal also introduces an intentional vulnerability. Currently, the providers of end-to-end encrypted messaging applications like WhatsApp and Signal cannot see into their users' chats. By requiring an exceptional access mechanism like the ghost proposal, GCHQ and U.K. law enforcement officials would require messaging platforms to open the door to surveillance abuses that are not possible today.

At a recent conference on encryption policy, Cindy Southworth, the Executive Vice President at the U.S. National Network to End Domestic Violence (NNEDV), cautioned against introducing an exceptional access mechanism for law enforcement, in part, because of how it could threaten the safety of victims of domestic and gender-based violence. Specifically, she warned that [w]e know that not only are victims in every profession, offenders are in every profession...How do we keep safe the victims of domestic violence and stalking? Southworth's concern was that abusers could either work for the entities that could exploit an exceptional access mechanism, or have the technical skills required to hack into the platforms that developed this vulnerability.

While companies and some law enforcement and intelligence agencies would surely implement strict procedures for utilizing this new surveillance function, those internal protections are insufficient. And in some instances, such procedures do not exist at all. In 2016, a U.K. court held that because the rules for how the security and intelligence agencies collect bulk personal datasets and bulk communications data (under a particular legislative provision) were unknown to the public, those practices were unlawful. As a result of that determination, it asked the agencies - GCHQ, MI5, and MI6 - to review whether they had unlawfully collected data about Privacy International. The agencies subsequently revealed that they had unlawfully surveilled Privacy International.[12]

Even where procedures exist for access to data that is collected under current surveillance authorities, government agencies have not been immune to surveillance abuses and misuses despite the safeguards that may have been in place. For example, a former police officer in the U.S. discovered that 104 officers in 18 different agencies across the state had accessed her driver's license record 425 times, using the state database as their personal Facebook service.[13] Thus, once new vulnerabilities like the ghost protocol are created, new opportunities for abuse and misuse are created as well.[14]

Finally, if U.K. officials were to demand that providers rewrite their software to permit the addition of a ghost U.K. law enforcement participant in encrypted chats, there is no way to prevent other governments from relying on this newly built system. This is of particular concern with regard to repressive regimes and any country with a poor record on protecting human rights.

The Proposal Would Violate the Principle That User Trust Must be Protected

The GCHQ proponents of the ghost proposal argue that [a]ny exceptional access solution should not fundamentally change the trust relationship between a service provider and its users. This means not asking the provider to do something fundamentally different to things they already do to run their business.[15] However, the exceptional access mechanism that they describe in the same piece would have exactly the effect they say they wish to avoid: it would degrade user trust and require a provider to fundamentally change its service.

The moment users find out that a software update to their formerly secure end-to-end encrypted messaging application can now allow secret participants to surveil their conversations, they will lose trust in that service. In fact, we've already seen how likely this outcome is. In 2017, the Guardian published a flawed report in which it incorrectly stated that WhatsApp had a backdoor that would allow third parties to spy on users' conversations. Naturally, this inspired significant alarm amongst WhatsApp users, and especially users like journalists and activists who engage in particularly sensitive communications. In this case, the ultimate damage to user trust was mitigated because cryptographers and security organizations quickly understood and disseminated critical deficits in the report,[16] and the publisher retracted the story.[17]

However, if users were to learn that their encrypted messaging service intentionally built a functionality to allow for third-party surveillance of their communications, that loss of trust would understandably be widespread and permanent. In fact, when President Obama's encryption working group explored technical options for an exceptional access mechanism, it cited loss of trust as the primary reason not to pursue provider-enabled access to encrypted devices through current update procedures. The working group explained that this could be dangerous to overall cybersecurity, since its use could call into question the trustworthiness of established software update channels. Individual users, aware of the risk of remote access to their devices, could also choose to turn off software updates, rendering their devices significantly less secure as time passed and vulnerabilities were discovered [but] not patched.[18] While the proposal that prompted these observations was targeted at operating system updates, the same principles concerning loss of trust and the attendant loss of security would apply in the context of the ghost proposal.

Any proposal that undermines user trust penalizes the overwhelming majority of technology users while permitting those few bad actors to shift to readily available products beyond the law's reach. It is a reality that encryption products are available all over the world and cannot be easily constrained by territorial borders.[19] Thus, while the few nefarious actors targeted by the law will still be able to avail themselves of other services, average users -- who may also choose different services -- will disproportionately suffer consequences of degraded security and trust.

The Ghost Proposal Would Violate the Principle That Transparency is Essential

Although we commend GCHQ officials for initiating this public conversation and publishing their ghost proposal online, if the U.K. were to implement this approach, these activities would be cloaked in secrecy. Although it is unclear which precise legal authorities GCHQ and U.K. law enforcement would rely upon, the Investigatory Powers Act grants U.K. officials the power to impose broad non-disclosure agreements that would prevent service providers from even acknowledging they had received a demand to change their systems, let alone the extent to which they complied. The secrecy that would surround implementation of the ghost proposal would exacerbate the damage to authentication systems and user trust as described above.

Conclusion

For these reasons, the undersigned organizations, security researchers, and companies urge GCHQ to abide by the six principles they have announced, abandon the ghost proposal, and avoid any alternate approaches that would similarly threaten digital security and human rights. We would welcome the opportunity for a continuing dialogue on these important issues.

Sincerely,

Civil Society Organizations

Access Now
Big Brother Watch
Blueprint for Free Speech
Center for Democracy & Technology
Defending Rights and Dissent
Electronic Frontier Foundation
Engine
Freedom of the Press Foundation
Government Accountability Project
Human Rights Watch
International Civil Liberties Monitoring Group
Internet Society
Liberty
New America's Open Technology Institute
Open Rights Group
Principled Action in Government
Privacy International
Reporters Without Borders
Restore The Fourth
Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC)
TechFreedom
The Tor Project
X-Lab

Technology Companies and Trade Associations

ACT | The App Association
Apple
Google
Microsoft
Reform Government Surveillance (RGS is a coalition of technology companies)
Startpage.com
WhatsApp

Security and Policy Experts*

Steven M. Bellovin, Percy K. and Vida L.W. Hudson Professor of Computer Science; Affiliate faculty, Columbia Law School
Jon Callas, Senior Technology Fellow, ACLU
L Jean Camp, Professor of Informatics, School of Informatics, Indiana University
Stephen Checkoway, Assistant Professor, Oberlin College Computer Science Department
Lorrie Cranor, Carnegie Mellon University
Zakir Durumeric, Assistant Professor, Stanford University
Dr. Richard Forno, Senior Lecturer, UMBC, Director, Graduate Cybersecurity Program & Assistant Director, UMBC Center for Cybersecurity
Joe Grand, Principal Engineer & Embedded Security Expert, Grand Idea Studio, Inc.
Daniel K. Gillmor, Senior Staff Technologist, ACLU
Peter G. Neumann, Chief Scientist, SRI International Computer Science Lab
Dr. Christopher Parsons, Senior Research Associate at the Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto
Phillip Rogaway, Professor, University of California, Davis
Bruce Schneier
Adam Shostack, Author, Threat Modeling: Designing for Security
Ashkan Soltani, Researcher and Consultant - Former FTC CTO and Whitehouse Senior Advisor
Richard Stallman, President, Free Software Foundation
Philip Zimmermann, Delft University of Technology Cybersecurity Group

 

 

Offsite Article: Censorship by the back door...


Link Here 29th May 2019
Full story: Internet Encryption...Encryption, essential for security but governments don't see it that way
Facebook Is Already Working Towards Germany's End-to-End Encryption Backdoor Vision

See article from forbes.com

 

 

Extract: A couple of useful tips...

How the Australian authorities are requiring internet companies to help them 'break' encrypted messages


Link Here 28th May 2019
A briefing by Australia's Home Affairs department, obtained under freedom of information, reveals that police can use new powers to compel a broad range of companies -- including social media giants, device manufacturers, telcos, retailers and providers of free wifi -- to provide information on users.

The briefing also provides examples of what type of assistance authorities can lawfully require, including:

  • a social media company helping to automate the creation of fake accounts;
  • a mobile carrier increasing the data allowance on a device so surveillance doesn't chew up users' data;
  • blocking internet messages to force a device to send messages as unencrypted SMSes; and
  • a data centre providing access to a customer's computer rack to allow installation of a surveillance device.

So if your encrypted app suddenly stops working for no apparent reason, then perhaps it is not a good idea to reach for a non-encrypted alternative app.

And if your data allowance seems to be lasting forever then perhaps the authorities are downloading everything on your device.

 

 

Offsite Article: Do you want to be identified as a refusenik?...


Link Here 23rd May 2019
Full story: Online ID in the UK...UK scheme to verify online id
The government is quietly creating a digital ID card without us noticing

See article from news.sky.com

 

 

Offsite Article: Actionable Insights...


Link Here 21st May 2019
Thanks to Facebook, Your Cellphone Company Is Watching You More Closely Than Ever

See article from theintercept.com

 

 

Offsite Article: If there's one thing worse than surveillance and snooping being tagged as a 'smart' city...


Link Here 19th May 2019
it's when the scheme is run by commercial interests with the privacy failings of Google

See article from bbc.com

 

 

Recognising the face of repression...

Man fined on trumped-up charges for covering face from police facial recognition cameras


Link Here 17th May 2019
A man was fined £90 for refusing to show his face to police trialling new facial recognition systems.

The man pulled his jumper up above his chin as he walked past Met Police officers trialling Live Facial Recognition software in east London.

BBC cameras filmed as officers swooped on the man, told him to wind his neck in, then handed him the hefty penalty charge.

A campaigner from Big Brother Watch -- who were protesting the use of cameras on the day -- was also filmed telling an officer: I would have done the same.

 

 

UK mass snooping laws can be investigated by UK courts...

Privacy International Wins Historic Victory at UK Supreme Court


Link Here 15th May 2019

Today, after a five-year battle with the UK government, Privacy International has won at the UK Supreme Court. The UK Supreme Court has ruled that the Investigatory Powers Tribunal's (IPT) decisions are subject to judicial review in the High Court. The Supreme Court's judgment is a major endorsement and affirmation of the rule of law in the UK. The decision guarantees that when the IPT gets the law wrong, its mistakes can be corrected.

Key point:

  • UK Supreme Court rules that the UK spying tribunal - the IPT - cannot escape the oversight of the ordinary UK courts

The leading judgment of Lord Carnwath confirms the vital role of the courts in upholding the rule of law. The Government's reliance on an 'ouster clause' to try to remove the IPT from judicial review failed. The judgment confirms hundreds of years of legal precedent condemning attempts to remove important decisions from the oversight of the courts.

Privacy International's case stems from a 2016 decision by the IPT that the UK government may use sweeping 'general warrants' to engage in computer hacking of thousands or even millions of devices, without any approval by a judge or reasonable grounds for suspicion. The Government argued that it would be lawful in principle to use a single warrant signed off by a Minister (not a judge) to hack every mobile phone in a UK city - and the IPT agreed with the Government.

Privacy International challenged the IPT's decision before the UK High Court. The Government argued that even if the IPT had got the law completely wrong, or had acted unfairly, the High Court had no power to correct the mistake. That question went all the way to the UK Supreme Court, and resulted in today's judgment.

In his judgment, Lord Carnwath wrote:

"The legal issue decided by the IPT is not only one of general public importance, but also has possible implications for legal rights and remedies going beyond the scope of the IPT's remit. Consistent application of the rule of law requires such an issue to be susceptible in appropriate cases to review by ordinary courts."

Caroline Wilson Palow, Privacy International's General Counsel, said:

"Today's judgment is a historic victory for the rule of law. It ensures that the UK intelligence agencies are subject to oversight by the ordinary UK courts.

Countries around the world are currently grappling with serious questions regarding what power should reside in each branch of government. Today's ruling is a welcome precedent for all of those countries, striking a reasonable balance between executive, legislative and judicial power.

Today's ruling paves the way for Privacy International's challenge to the UK Government's use of bulk computer hacking warrants. Our challenge has been delayed for years by the Government's persistent attempt to protect the IPT's decisions from scrutiny. We are heartened that our case will now go forward."

Simon Creighton, of Bhatt Murphy Solicitors who acted for Privacy International, said:

"Privacy International's tenacity in pursuing this case has provided an important check on the argument that security concerns should be allowed to override the rule of law. Secretive national security tribunals are no exception. The Supreme Court was concerned that no tribunal, however eminent its judges, should be able to develop its own "local law". Today's decision welcomes the IPT back from its legal island into the mainstream of British law."

 

 

Offsite Article: Any chance of any human rights protection?...


Link Here 12th May 2019
The US is building a massive database of biometrics and identity information. By Jason Kelley

See article from eff.org

 

 

Offsite Article: Is Chinese-style surveillance coming to the West?...


Link Here 8th May 2019
Full story: Mass snooping in China...Internet and phone snooping in China
The Chinese model is now being exported -- and the results could be terrifying

See article from theguardian.com

 

 

Offsite Article: As the UK starts along the same road there is a lesson to learn here...


Link Here 2nd May 2019
Full story: Mass snooping in China...Internet and phone snooping in China
Human Rights Watch writes a fascinating report about how the Chinese authorities collect invasive personal data on Uighurs as part of a vast surveillance network

See article from theguardian.com

 

 

Straight out of 1984...

NewsGuard is pushing for a deal for ISPs to flash warnings to internet users whilst they are browsing 'wrong think' news websites


Link Here 26th April 2019
NewsGuard is a US organisation trying to muscle in on governments' concerns about 'fake news'. It doesn't fact-check individual news stories but gives ratings to news organisations based on what it considers to be indicators of 'trustworthiness'.

At the moment it is most widely known for providing browser add-ons that display a green shield when readers are browsing an 'approved' news website and a red shield when the website is disapproved.

Now the company is pushing something a little more Orwellian. It is in talks with UK internet providers under which the ISP would inject some sort of warning screen should an internet user [inadvertently] stray onto a 'wrong think' website.

The idea seems to be that users can select whether or not they want these intrusive warnings, via a mechanism similar to that used for parental control website blocking.

NewsGuard lost an awful lot of credibility in the UK when its first set of ratings singled out the Daily Mail as a 'wrong think' news source. It caused a bit of a stink and the decision was reversed, but it rather shows where the company is coming from.

Surely they are patronising the British people if they think that people want to be nagged about reading the Daily Mail. People are well aware of the biases and points of view of the news sources they read. They will not want to be nagged by those who think they know best what people should be reading.

I think it is only governments and politicians that are supposedly concerned about 'fake news' anyway. They see it as some sort of blame opportunity. It can't possibly be the politicians' own policies that are so disastrously unpopular with the people; surely it must be mischievous 'fake news' peddlers that are causing the grief.

 

 

Offsite Article: This SIM Card Forces All of Your Mobile Data Through Tor...


Link Here 26th April 2019
This is about sticking a middle finger up to mobile filtering and mass surveillance.

See article from motherboard.vice.com

 

 

Offsite Article: Privacy switch...


Link Here 23rd April 2019
Vendors must start adding physical on/off switches to devices that can spy on us. By Larry Sanger

See article from larrysanger.org

 

 

We know where you've been! for most of the last decade...

Google's Sensorvault Can Tell Police Where You've Been. By Jennifer Lynch


Link Here 20th April 2019

Do you know where you were five years ago? Did you have an Android phone at the time? It turns out Google might know--and it might be telling law enforcement.

In a new article, the New York Times details a little-known technique increasingly used by law enforcement to figure out everyone who might have been within certain geographic areas during specific time periods in the past. The technique relies on detailed location data collected by Google from most Android devices as well as iPhones and iPads that have Google Maps and other apps installed. This data resides in a Google-maintained database called Sensorvault, and because Google stores this data indefinitely, Sensorvault includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.

The data Google is turning over to law enforcement is so precise that one deputy police chief said it shows the whole pattern of life. It's collected even when people aren't making calls or using apps, which means it can be even more detailed than data generated by cell towers.

The location data comes from GPS signals, cellphone towers, nearby Wi-Fi devices and Bluetooth beacons. According to Google, users opt in to collection of the location data stored in Sensorvault. However, Google makes it very hard to resist opting in, and many users may not understand that they have done so. Also, Android devices collect lots of other location data by default, and it's extremely difficult to opt out of that collection.

Using a single warrant--often called a geo-fence or reverse location warrant--police are able to access location data from dozens to hundreds of devices--devices that are linked to real people, many of whom (and perhaps in some cases all of whom) have no tie to criminal activity and have provided no reason for suspicion. The warrants cover geographic areas ranging from single buildings to multiple blocks, and time periods ranging from a few hours to a week.

So far, according to the Times and other outlets, this technique is being used by the FBI and police departments in Arizona, North Carolina, California, Florida, Minnesota, Maine, and Washington, although there may be other agencies using it across the country. But police aren't limiting the use of the technique to egregious or violent crimes-- Minnesota Public Radio reported the technique has been used to try to identify suspects who stole a pickup truck and, separately, $650 worth of tires. Google is getting up to 180 requests a week for data and is, apparently, struggling to keep up with the demand.

Law enforcement seems to be using a three-step process to learn the names of device holders (in some cases, a single warrant authorizes all three steps). In the first step, the officer specifies the area and time period of interest, and in response, Google gives the police information on all the devices that were there, identified by anonymous numbers--this step may reveal hundreds of devices.

After that, officers can narrow the scope of their request to fewer devices, and Google will release even more detailed data, including data on where devices traveled outside the original requested area and time period. This data, which still involves multiple devices, reveals detailed travel patterns. In the final step, detectives review that travel data to see if any devices appear relevant to the crime, and they ask for the users' names and other information for specific individual devices.
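As a hypothetical sketch of the first of those steps (all device IDs, coordinates, and timestamps invented for illustration), the area-and-time filter over a store of location records might look like this:

```python
# Hypothetical sketch of step one of a geo-fence request: find the
# anonymous IDs of all devices seen inside an area and time window.
# All records, IDs and coordinates here are invented for illustration.
from datetime import datetime

# (device_id, latitude, longitude, timestamp) location records
records = [
    ("dev-001", 51.5074, -0.1278, datetime(2019, 4, 1, 14, 5)),
    ("dev-002", 51.5076, -0.1280, datetime(2019, 4, 1, 14, 20)),
    ("dev-003", 40.7128, -74.0060, datetime(2019, 4, 1, 14, 10)),
]

def devices_in_area(records, lat_range, lon_range, start, end):
    """Return the set of device IDs observed in the box during the window."""
    return {
        device_id
        for device_id, lat, lon, ts in records
        if lat_range[0] <= lat <= lat_range[1]
        and lon_range[0] <= lon <= lon_range[1]
        and start <= ts <= end
    }

hits = devices_in_area(
    records,
    lat_range=(51.50, 51.51),
    lon_range=(-0.13, -0.12),
    start=datetime(2019, 4, 1, 14, 0),
    end=datetime(2019, 4, 1, 15, 0),
)
print(hits)  # {'dev-001', 'dev-002'} -- steps two and three would then
             # narrow these IDs and request the account holders' names
```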

Techniques like this also reveal big problems with our current warrant system. Even though the standard for getting a warrant is higher than other legal procedures--and EFF pushes for a warrant requirement for digital data and devices--warrants, alone, are no longer enough to protect our privacy. Through a single warrant the police can access exponentially more and more detailed information about us than they ever could in the past. Here, the police are using a single warrant to get access to location information for hundreds of devices. In other contexts, through a single warrant, officers can access all the data on a cell phone or a hard drive; all email stored in a Google account (possibly going back years); and all information linked to a social media account (including photos, posts, private communications, and contacts).

We shouldn't allow the government to have such broad access to our digital lives. One way we could limit access is by passing legislation that mandates heightened standards, minimization procedures, and particularity requirements for digital searches. We already have this in laws that regulate wiretaps, where police, in addition to demonstrating probable cause, must state that they have first tried other investigative procedures (or state why other procedures wouldn't work) and also describe how the wiretap will be limited in scope and time.

As the Times article notes, this technique implicates innocent people and has a real impact on people's lives. Even if you are later able to clear your name, if you spend any time at all in police custody, this could cost you your job, your car, and your ability to get back on your feet after the arrest. One man profiled in the Times article spent nearly a week in police custody and was having trouble recovering, even months after the arrest. He was arrested at work and subsequently lost his job. Due to the arrest, his car was impounded for investigation and later repossessed. These are the kinds of far-reaching consequences that can result from overly broad searches, so courts should subject geo-location warrants to far more scrutiny.

 

 

Offsite Article: Easier than searching your house...


Link Here 18th April 2019
Police Can Download All Your Smartphone's Data Without A Warrant

See article from rightsinfo.org

