31st March 2020

There's a good case for using smartphone data in the COVID-19 response, but Americans deserve an explanation. By Casey Newton

See article from theverge.com

31st March 2020

Brussels considers pan-EU police searches of ID photos

See article from politico.eu

European mobile phone networks agree to share user location data to track coronavirus

27th March 2020

See article from appleinsider.com

Eight major mobile carriers have agreed to share customer location data with the European Commission in a bid to track the spread of COVID-19. The announcement arrives after Deutsche Telekom, Orange, Telefonica, Telecom Italia, Telenor, Telia, A1
Telekom Austria and Vodafone discussed tracking options with European Commissioner for Internal Market and Services Thierry Breton. Government officials attempted to allay the fears of critics by noting all collected data will be anonymized and
destroyed once the pandemic is squashed.
How far will governments go? Governments mobilized digital surveillance to contain the spread of the virus

26th March 2020

See CC article from advox.globalvoices.org by Hong Kong Free Press

Since the COVID-19 outbreak became a fast-spreading pandemic, governments from across the globe have implemented new policies to help slow the spread of the virus. In addition to closing borders to non-citizens, many governments have also mobilized digital surveillance technologies to track and contain visitors and citizens alike. On Wednesday, the Hong Kong government announced that all new arrivals to the city must undergo two weeks of self-quarantine, while wearing an electronic wristband that connects to a location tracking app on their phones. If the app detects changes in the person's location, it will alert the Department of Health and the police. Prior to this new policy, only people who had recently visited Hubei province in China were required to wear a monitoring wristband during their quarantine period.

While surveillance technologies and measures may give the public a sense of security in controlling the spread of the virus, we must remain mindful and vigilant of their continued use after the pandemic subsides.

European and North American countries like Italy, Spain, and the US are currently being hit hard by the coronavirus. Meanwhile, Asian countries have been praised by international media for their swift responses and use of surveillance technologies to control the outbreak. The Singaporean government, for example, implemented policies that can effectively and rigorously trace a complex chain of contacts. Since February, anyone entering a government or corporate building in Singapore has had to provide their contact information. In addition, the government has been gathering a substantial amount of data detailing not only each known case of infection but also where the person lives and works, and the network of contacts they are connected to. While these measures have thus far seemed to yield positive results, they have highlighted the technological capacity and power of the government to monitor the movements and lives of every individual.

In China, where COVID-19 was first detected, the government has been deploying not only drastic lockdown policies but also a variety of surveillance technologies to ensure public compliance with self-quarantine and isolation. In addition to using drones to monitor people's movements and ensure they are staying home, police in five Chinese cities have taken to patrolling the streets wearing smart helmets equipped with thermal screening technologies that sound an alarm if a person's temperature is higher than the threshold. The government has also collaborated with the company Hanwang Technology Limited to refine its existing facial recognition technology so that it works even when the subject is wearing a face mask. When connected to a temperature sensor, the Chinese government's existing database and state-level intelligence, this technology allows authorities to immediately identify anyone whose body temperature is above 38 degrees Celsius. According to Hanwang Technology, the refined system can identify up to 30 people within a second.

While the use of surveillance technologies like these has been effective in lowering the number of confirmed cases in China, it is not without risks. Beyond the pandemic, both the Chinese government and the company have substantial interests in further developing and deploying this technology: the government can make use of it to track and suppress political dissidents, and the company has much to gain financially. This technology can also be co-opted by China's counterterrorism forces to further monitor and regulate the movement of the Uighur people, who are categorised as terrorists by the Chinese government and are currently being forced into mass detention camps and subjected to forced labour.

Outside of Asia, Middle Eastern countries like Israel and Iran have also been deploying similar surveillance technologies, citing the need to control the spread of the coronavirus. The Israeli government now makes use of technologies developed for counterterrorism to collect cellphone data, so that the government can trace people's contact networks and identify those who need to be quarantined. The geolocation data gathered via people's phones will then be used to alert the public where not to go, based on the pattern of infection. Not only is it unprecedented for Israel to deploy counterterrorism data to combat a public health crisis, but the existence of this data trove had also, according to the New York Times, not been reported prior to this.

On March 6, researcher Nariman Gharib revealed that the Iranian government had been tracking its citizens' phone data through an app disguised as a coronavirus diagnostic tool. Security expert Nikolaos Chrysaidos confirmed that the app collected sensitive personal information unrelated to the outbreak -- for example, the app recorded the bodily movements of the user the way a fitness tracker would. Google has since removed the app from Google Play, but this case demonstrates the need for ongoing public vigilance over government use of surveillance technologies in the name of public health.

Safeguarding public health has historically been used as a justification for mainstream institutions and government authorities to stigmatise, monitor, and regulate the lives of marginalised people -- such as immigrants, racial minorities, LGBTQ+ people, and people living in poverty. If we do not hold our government accountable for its use of surveillance technologies during the current pandemic and beyond, we will be putting those who are already marginalised at further risk of regulation, suppression, and persecution.
Seems sensible but the EFF is not convinced

25th March 2020

22nd March 2020. See CC article from eff.org by Adam Schwartz and Andrew Crocker
See article from towerfm.co.uk

The government is
working with mobile network O2 to analyse smartphone location data to see whether people are following its social distancing guidelines. The partnership began following a tech summit at Number 10, where officials discussed coronavirus outbreak
planning with representatives from the UK's largest phone networks. A spokesperson for O2 confirmed that the company was providing aggregated data to the government so it could observe trends in public movements, particularly in London. It was claimed that the project will not be able to track individuals, but individual data seems so beneficial to contact tracing that surely it will be enabled. Presumably there will need to be a balance: if the authorities use location data to punish people for not following the rules, then people will leave their phones at home and contact tracing capabilities will be lost.

The US free speech campaigners of the EFF debate governments' use of phone location data:
Governments Haven't Shown Location Surveillance Would Help Contain COVID-19

Governments around the world are demanding new dragnet location surveillance powers to contain the COVID-19 outbreak. But before the public
allows their governments to implement such systems, governments must explain to the public how these systems would be effective in stopping the spread of COVID-19. There's no questioning the need for far-reaching public health measures to meet this
urgent challenge, but those measures must be scientifically rigorous, and based on the expertise of public health professionals. Governments have not yet met that standard, nor even shown that extraordinary location surveillance
powers would make a significant contribution to containing COVID-19. Unless they can, there's no justification for their intrusions on privacy and free speech, or the disparate impact these intrusions would have on vulnerable groups. Indeed, governments
have not even been transparent about their plans and rationales.

The Costs of Location Surveillance

EFF has long opposed location surveillance programs that can turn our lives into open books for
scrutiny by police, surveillance-based advertisers, identity thieves, and stalkers. Many sensitive inferences can be drawn from a visit to a health center, a criminal defense lawyer, an immigration clinic, or a protest planning meeting.
Moreover, fear of surveillance chills and deters free speech and association. And all too often, surveillance disparately burdens people of color. What's more, whatever personal data is collected by government can be misused by its employees, stolen by criminals and foreign governments, and unpredictably redirected by agency leaders to harmful new uses.

Emerging Dragnet Location Surveillance

China reportedly responded to the
COVID-19 crisis by building new infrastructures to track the movements of massive numbers of identifiable people. Israel tapped into a vast trove of cellphone location data to identify people who came into close contact with known virus carriers. That
nation has sent quarantine orders based on this surveillance. About a dozen countries are reportedly testing a spy tool built by NSO Group that uses huge volumes of cellphone location data to match the location of infected people to other people in their
vicinity (NSO's plan is to not share a match with the government absent such a person's consent). In the United States, the federal government is reportedly seeking, from mobile app companies like Facebook and Google, large
volumes of location data that is de-identified (that is, after removal of information that identifies particular people) and aggregated (that is, after combining data about multiple people). According to industry executives, such data might be used to
predict the next virus hotspot. Facebook has previously made data like this available to track population movements during natural disasters. But re-identification of de-identified data is a constant infosec threat. De-identification of location data is especially hard, since location data points serve as identifiers in their own right. Also, re-identification can be achieved by correlating de-identified data with other publicly available data like voter rolls, and with the oceans of information about identifiable people that are sold by data brokers. While de-identification might in some cases reduce privacy risks, this depends on many factors that have not yet been publicly addressed, such as careful selection
of what data to aggregate, and the minimum thresholds for aggregation. In the words of Prof. Matt Blaze, a specialist in computer science and privacy:

One of the things we have learned over time is that something that seems anonymous, more often than not, is not anonymous, even if it's designed with the best intentions.
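As a concrete illustration of those aggregation factors, below is a minimal Python sketch of a k-anonymity-style release rule. The grid cells, the threshold of five and the function names are all hypothetical -- this is not any agency's actual pipeline -- but it shows why aggregates computed over small groups re-identify individuals almost directly:

```python
from collections import Counter

# Hypothetical location data to be aggregated: (coarse grid cell, hour) per person.
records = [
    ("cell_A", 9), ("cell_A", 9), ("cell_A", 9), ("cell_A", 9), ("cell_A", 9),
    ("cell_B", 9),  # only one person in cell_B at 9am: effectively identified
]

K_THRESHOLD = 5  # minimum group size before an aggregate count is released

def aggregate(records, k=K_THRESHOLD):
    """Release counts only for groups of at least k people; suppress the rest."""
    counts = Counter(records)
    released, suppressed = {}, []
    for group, n in counts.items():
        if n >= k:
            released[group] = n
        else:
            # A count of one or two is barely aggregation at all -- a careful
            # pipeline must suppress or further coarsen these groups.
            suppressed.append(group)
    return released, suppressed

released, suppressed = aggregate(records)
print("released:", released)      # {('cell_A', 9): 5}
print("suppressed:", suppressed)  # [('cell_B', 9)]
```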
Disturbingly, most of the public information about government's emerging location surveillance programs comes from anonymous sources, and not official explanations. Transparency is a cornerstone of democratic governance, especially now, in the midst of a public health crisis. If the government is considering such new surveillance programs, it must publicly explain exactly what it is planning, why this would help, and what rules would apply. History shows that when government builds new surveillance programs in secret, these programs quickly lead to unjustified privacy abuses. That's one reason EFF has
long demanded transparent democratic control over whether government agencies may deploy new surveillance technology.

Governments Must Show Their Work

Because new government dragnet location
surveillance powers are such a menace to our digital rights, governments should not be granted these powers unless they can show the public how these powers would actually help, in a significant manner, to contain COVID-19. Even if governments could show
such efficacy, we would still need to balance the benefit of the government's use of these powers against the substantial cost to our privacy, speech, and equality of opportunity. And even if this balancing justified government's use of these powers, we
would still need safeguards, limits, auditing, and accountability measures. In short, new surveillance powers must always be necessary and proportionate. But today, we can't balance those interests or enumerate necessary
safeguards, because governments have not shown how the proposed new dragnet location surveillance powers could help contain COVID-19. The following are some of the points we have not seen the government publicly address.

1. Are the location records sought sufficiently granular to show whether two people were within transmittal distance of each other?

In many cases, we question whether such data will actually be useful to healthcare professionals. This may seem paradoxical. After all, location data is sufficiently precise for law enforcement to place suspects at the scene of a crime, and for juries to convict largely on the basis of that evidence. But when it comes to tracking the spread of a disease that requires close personal contact, data generated by current technology generally can't reliably tell us whether two people were closer than the CDC-recommended radius of six feet for social distancing. For example, cell site location information (CSLI)--the records generated by mobile carriers based on which cell towers a phone connects to and when--is often only able to place a phone within a zone of half a mile to two miles in urban areas. The area is even wider in areas with less dense tower placement. GPS sensors built directly into phones can do much better, but even GPS is only accurate to a 16-foot radius. These and other technologies like Bluetooth can be combined for better accuracy, but there's no guarantee that a given phone can be located with six-foot precision at a given time (the sketch after this article illustrates the arithmetic).

2. Do the cellphone location records identify a sufficiently large and representative portion of the overall population?

Even today, not everyone has a cellphone, and some people do not regularly carry their phones or connect them to a cellular network. The population that carries a networked phone at all times is not representative of the overall population; for example, people without phones skew towards lower-income people and older people.

3. Has the virus already spread so broadly that contact tracing is no longer a significant way to reduce transmission?

If community transmission is commonplace, contact tracing may become impractical or divert resources from more effective containment methods. There might be scenarios other than precise, person-to-person contact tracing where location data could be useful. We've heard it suggested, for example, that this data could be used to track future flare-ups of the virus by observing general patterns of people's movements in a given area. But even when transmission is less common, widespread testing may be more effective at containment, as may be happening in South Korea.

4. Will health-based surveillance deter people from seeking health care?

Already, there are reports that people subject to COVID-based location tracking are altering their movements to avoid embarrassing revelations. If a positive test result will lead to enhanced location surveillance, some people may avoid testing.

Conclusion

As our society struggles with COVID-19, far narrower big data surveillance proposals may emerge. Perhaps public health professionals will show that such proposals are necessary and proportionate. If so, EFF would seek safeguards, including mandatory expiration when the health crisis ends, independent supervision, strict anti-discrimination rules, auditing for efficacy and misuse, and due process for affected people.

But for now, government has not shown that new dragnet location surveillance powers would significantly help to contain COVID-19. It is the government's job to show the public why this would work.
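To make point 1 concrete, here is a minimal Python sketch of the accuracy arithmetic, using the 16-foot GPS radius and six-foot contact distance quoted above. The coordinates and helper names are hypothetical, and real contact-tracing systems do not work from raw latitude/longitude pairs like this; the sketch only shows why the error bars swamp the contact threshold:

```python
import math

GPS_ERROR_FT = 16.0  # typical GPS accuracy radius quoted above
CONTACT_FT = 6.0     # CDC-recommended social distancing radius

def haversine_ft(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon fixes, in feet."""
    r_earth_ft = 20_902_231  # mean Earth radius converted to feet
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth_ft * math.asin(math.sqrt(a))

def contact_possible(lat1, lon1, lat2, lon2):
    """With each fix off by up to GPS_ERROR_FT, the true separation lies
    anywhere in [d - 2*err, d + 2*err]; the data can only rule contact
    in or out when that whole interval falls on one side of CONTACT_FT."""
    d = haversine_ft(lat1, lon1, lat2, lon2)
    return d - 2 * GPS_ERROR_FT <= CONTACT_FT, d

# Two phones reported roughly 29 ft apart: the true distance could be
# anything from 0 to about 61 ft, so contact can neither be confirmed
# nor excluded -- which is the point made in question 1 above.
possible, d = contact_possible(51.5007, -0.1246, 51.50078, -0.1246)
print(f"reported separation {d:.0f} ft; contact possible: {possible}")
```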
Update: In fight against coronavirus, European governments embrace surveillance

25th March 2020. See article from politico.eu
Independent report on child abuse material recommends strict age/identity verification for social media

14th March 2020

See article from iicsa.org.uk
See report [pdf] from iicsa.org.uk

The Independent Inquiry into Child Sexual Abuse, chaired by Professor Alexis Jay, was set up because of serious concerns that some organisations had failed, and were continuing to fail, to protect children from sexual abuse. It describes its remit as:

Our remit is huge, but as a statutory inquiry we have unique authority to address issues that have persisted despite previous inquiries and attempts at reform.

The inquiry has just published its report with the grandiose title: The Internet. It has considered many aspects of child abuse and come up with the following short list of recommendations:
- Pre-screening of images before uploading: The government should require industry to pre-screen material before it is uploaded to the internet to prevent access to known indecent images of children (a sketch of hash-based pre-screening follows this list).
- Removal of images: The government should press the WeProtect Global Alliance to take more action internationally to ensure that those countries hosting indecent images of children implement legislation and procedures to prevent access to such imagery.
- Age verification: The government should introduce legislation requiring providers of online services and social media platforms to implement more stringent age verification techniques on all relevant devices.
- Draft child sexual abuse and exploitation code of practice: The government should publish, without further delay, the interim code of practice in respect of child sexual abuse and exploitation as proposed by the Online Harms White Paper (published April 2019).
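For context on the first recommendation: pre-screening of known images is typically done by comparing a hash of each upload against a list of hashes of previously identified illegal images, as in Microsoft's PhotoDNA. The sketch below is a hypothetical, simplified Python version using SHA-256; real deployments use perceptual hashes that survive resizing and re-encoding, which a cryptographic hash does not:

```python
import hashlib

# Hypothetical blocklist of hashes of known illegal images, as would be
# supplied by a body such as the IWF. (This example entry is the SHA-256
# of the bytes b"test", purely for demonstration.)
KNOWN_IMAGE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def allow_upload(file_bytes: bytes) -> bool:
    """Return False (block the upload) when it matches a listed hash."""
    return hashlib.sha256(file_bytes).hexdigest() not in KNOWN_IMAGE_HASHES

print(allow_upload(b"holiday photo"))  # True: not on the list, upload proceeds
print(allow_upload(b"test"))           # False: matches the listed hash, blocked
```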
But it should be noted that the inquiry gave not even a passing mention to some of the privacy issues that would have far-reaching consequences should age verification be required for children's social media access. Perhaps the authorities should recall that age verification for porn failed because the lawmakers were only thinking of the children, and didn't give even a moment of passing consideration to the privacy of the porn users. The lawmakers' blinkeredness resulted in the failure of their beloved law.

Has anyone even considered the question of what will happen if they ban kids from social media? An epidemic of tantrums? Collapse of social media companies? Kids going back to hanging around on street corners? Kids finding more underground websites to frequent? Or playing violent computer games all day instead?

9th March 2020

A few practical tips but I would prefer something a little more absolute

See article from theguardian.com


8th March 2020

Google tracked his bike ride past a burglarized home. That made him a suspect.

See article from nbcnews.com

Virgin Media details customer porn access data that it irresponsibly made openly available on the internet

6th March 2020

See article from bbc.com

A customer database left unsecured online by Virgin Media contained details linking some customers to pornography and explicit websites. The researchers who first discovered the database told the BBC that it contained more information than Virgin
Media suggested. Such details could be used by cyber-criminals to extort victims. Virgin revealed on Thursday that one of its marketing databases containing details of 900,000 people was open to the internet and had been accessed on at least one
occasion by an unknown user. On Friday, it confirmed that the database contained details of about 1,100 customers who had used an online form to ask for a particular website to be blocked or unblocked. It said it was in the process of contacting
customers again about specific data that may have been stolen. When it first confirmed the data breach on Thursday, Virgin Media warned the public that the database contained phone numbers, home addresses and emails; however, the company did not disclose that the database contained more intimate details. A representative of TurgenSec, the research company, said Virgin Media's security had been far from adequate: the information was in plain text and unencrypted, which meant anyone browsing the internet could clearly view and potentially download all of this data without needing any specialised equipment, tools, or hacking techniques.

A spokeswoman for the ICO said it was investigating, and added: People have the right to expect that organisations will handle their personal information securely and responsibly. When that doesn't happen, we advise people who may have been affected by data breaches to be vigilant when checking their financial records.
Virgin Media said it would be emailing those affected, in order to warn them about the risks of phishing, nuisance calls and identity theft. The message will include a reminder not to click on unknown links in emails, and not to
provide personal details to unverified callers.
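The core criticism -- data held in plain text -- has a standard remedy: encrypting sensitive fields at rest so a leaked copy is useless without the key. Below is a minimal, hypothetical Python sketch using the cryptography library's Fernet; the field contents and key handling are illustrative only, and production keys belong in a key-management service, never beside the data:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustrative; store real keys in a KMS
f = Fernet(key)

# A sensitive field from a web form, e.g. a site blocking/unblocking request.
plaintext = b"customer 1234 asked to unblock a particular adult website"
token = f.encrypt(plaintext)  # opaque ciphertext: far safer if the store leaks

print(token)                      # what an intruder would see at rest
print(f.decrypt(token).decode())  # recoverable only by the key holder
```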
Even the EU Commission!

23rd February 2020

See article from politico.eu

The European Commission has told its staff to start using Signal, an end-to-end-encrypted messaging app, in a push to increase the security of its communications. The instruction appeared on internal messaging boards in early February, notifying
employees that Signal has been selected as the recommended application for public instant messaging. The app is favored by privacy activists because of its end-to-end encryption and open-source technology. Bart Preneel, cryptography expert at the University of Leuven, explained:

It's like Facebook's WhatsApp and Apple's iMessage but it's based on an encryption protocol that's very innovative. Because it's open-source, you can check what's happening under the hood.

Promoting the app, however, could antagonize the law enforcement community, and it will underline the hypocrisy of officials in Brussels, Washington and other capitals who have been putting strong pressure on Facebook and Apple to allow government agencies access to encrypted messages; if the companies refuse, legal requirements could be introduced that force firms to do just that. American, British and Australian officials published an open letter to Facebook CEO Mark Zuckerberg in October, asking that he call off plans to encrypt the company's messaging service. Dutch Minister for Justice and Security Ferd Grapperhaus told POLITICO last April that the EU needs to look into legislation allowing governments to access encrypted data.

21st February 2020

Google shifts authority over UK user data from the EU to the US in wake of Brexit. By Jon Porter

See article from theverge.com


19th February 2020

But will companies comply? By Maureen Mahoney

See article from thehill.com


7th February 2020

How To Protect Your Phone Number On Twitter

See article from eff.org

MI5 law breaking triggers Liberty and Privacy International legal action

4th February 2020

See press release from libertyhumanrights.org.uk

Liberty, the human rights organisation, and Privacy International, today announced a joint legal action against MI5 following revelations the intelligence agency has systematically broken surveillance laws for years and kept it secret from the
surveillance watchdog. The details of MI5's lawlessness emerged last summer in two legal cases -- Liberty's challenge to the Snoopers' Charter (the Investigatory Powers Act 2016) and Privacy International's challenge to state
powers to collect and store ordinary people's data. In Liberty's challenge to the Investigatory Powers Act 2016 (IPA), the Government revealed that MI5 had been unlawfully retaining and mishandling the public's data for years.
As part of that case, the Government disclosed a number of documents to the court, including correspondence between the security service and the Investigatory Powers Commissioner's Office (IPCO, the body responsible for overseeing
state surveillance practices), correspondence between the security service and the Home Secretary, and reports of inspections carried out by IPCO after they learnt of MI5's failings. The documents reveal that MI5 not only broke
the law, they failed to report this to IPCO, despite knowing about their non-compliance for years. They also gave IPCO false information in order to obtain warrants. These revelations led the then Investigatory Powers Commissioner, Lord Justice Fulford,
to conclude that the UK security service had held and handled our data in an "undoubted unlawful manner". In a remarkable admission to the Commissioner, a senior MI5 official acknowledged that personal data collected by
MI5 might be stored in "ungoverned spaces", while an MI5 review in 2016 found "a high likelihood [of material] being discovered when it should have been deleted". Liberty and Privacy International have today
launched a legal bid to get MI5 to disclose the full extent of its unlawful conduct. The groups are asking the court to rule that MI5 violated our rights to privacy and free expression by unlawfully retaining and mishandling our data. The groups are also
demanding that surveillance warrants granted during this unlawful activity are quashed and all record of the public's illegitimately obtained or retained data is destroyed.

Liberty lawyer Megan Goulding said:
"It's clear we need to know the extent of MI5's lawlessness as these court cases have revealed how our surveillance laws are not fit for purpose as well as MI5's disregard for our rights. MI5 has unprecedented and dangerous power
to spy on any one of us and collect our sensitive private information. "It's clear that the so-called safeguards in our surveillance laws are totally ineffective in protecting our rights. The Snoopers' Charter needs to be
torn up and the Government must create a targeted surveillance regime that protects us while respecting our rights and freedoms."

Privacy International's Legal Director Caroline Wilson Palow said:
"For more than a decade, MI5 has been building massive datasets by systematically collecting our personal information. Such practices are a serious interference with our right to privacy and threaten democratic
values. We were promised that robust safeguards were in place so that such data would never be abused. Yet it turns out that those safeguards were in some cases illusory - that MI5 held significant data in ungoverned spaces without any effective
oversight. We are bringing this challenge together with Liberty to ensure that MI5 does not continue to operate outside of the law."
What we know of MI5's wrongdoing so far
Illegal actions: The Investigatory Powers Commissioner concluded that the way MI5 was holding and handling people's data was "undoubtedly unlawful". MI5 breached IPA safeguards relating to how long data is held for, how often it is copied, the number of persons to whom and the extent to which material is disclosed, and how data is stored securely. The exact details of MI5's breaches are as yet unknown, but Liberty and Privacy International hope this new legal case will reveal more.

Senior people at MI5 knew for six years before informing IPCO: Issues with MI5's legal compliance were known to the MI5 Board in 2013, but were only brought to IPCO's attention in February 2019.

MI5 misled judges: Senior judges (known as Judicial Commissioners) issued surveillance warrants on the understanding that MI5's data handling obligations under the IPA were being met, when they were not; in fact MI5 gave false information to obtain the warrants. The Investigatory Powers Commissioner has pointed out that warrants would not have been issued if IPCO had been aware of the breaches. The Commissioner states that "it is impossible to sensibly
reconcile the explanation of the handling of arrangements the Judicial Commissioners were given in briefings...with what MI5 knew over a protracted period of time was happening."
The "bulk" powers Liberty challenged in the case against the Snoopers' Charter allow MI5 to scoop up the public's personal data en masse, regardless of whether they are suspected of any wrongdoing. It is therefore unclear
how many people may have had their personal information unlawfully retained and mishandled, without their knowledge, by the security service.

2nd February 2020

Privacy International reports on the companies that are snooping on your TV viewing habits in the name of targeted advertising

See article from privacyinternational.org


29th January 2020

If Chrome fixes privacy too fast it could break the web, Google exec debates advertising revenue vs privacy

See article from cnet.com


28th January 2020

Avast, a popular 'free' anti-virus programme, has been selling people's 'anonymised' browsing history that may be readily re-identified with real users

See article from pcmag.com


26th January 2020

The secret tech that lets government agencies collect masses of data from your apps. By Privacy International

See article from privacyinternational.org

Newspapers realise that the ICO default child protection policy may be very popular with adults too, and so it may prove tough to get them to age verify as required for monetisation

24th January 2020

See article from pressgazette.co.uk
See ICO's FAQ discussing the code's applicability to news websites [pdf] from ico.org.uk

News websites will have to ask readers to verify their age or comply with a new 15-point code from the Information Commissioner's Office (ICO) designed to protect children's online data, the ICO has confirmed. Press campaign groups were hoping news websites would be exempt from the new Age Appropriate Design Code, so protecting their vital digital advertising revenues, which are currently enhanced by extensive profiled advertising.

Applying the code as standard will mean websites putting privacy settings to high and turning off default data profiling. If they want to continue enjoying revenues from behavioural advertising they will need to get adult readers to verify their age. In its 2019 draft the ICO had previously said such measures must be robust and that simply asking readers to declare their age would not be enough. But it has now confirmed to Press Gazette that for news websites that adhere to an editorial code, such self-declaration measures are likely to be sufficient. This could mean news websites asking readers to enter their date of birth or tick a box confirming they are over 18. An ICO spokesperson said sites using these methods might also want to consider some low level technical measures to discourage false declarations of age, but anything more privacy intrusive is unlikely to be appropriate.

But Society of Editors executive director Ian Murray predicted the new demands may prove unpopular even at the simplest level: asking visitors to confirm their age [and hence submit to snooping and profiling] -- even a simple yes or no tick box -- could be a barrier to readers. The ICO has said it will work with the news media industry over a 12-month transition period to enable proportionate and practical measures to be put in place for either scenario. In fact the ICO produced a separate document alongside the code to explain how it could impact news media, which it said would be allowed to apply the code in a risk-based and proportionate way.
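The self-declaration check the ICO describes amounts to very little code. A minimal, hypothetical Python sketch of a date-of-birth gate (names and thresholds illustrative) shows why it is trivial for a site to implement -- and just as trivial for a visitor to falsify:

```python
from datetime import date

ADULT_AGE = 18

def is_adult(dob, today=None):
    """Self-declared date-of-birth check. Trivially falsifiable, which is
    why the ICO pairs it with 'low level technical measures' against
    obviously false declarations."""
    today = today or date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= ADULT_AGE

print(is_adult(date(2000, 1, 1)))   # True: over 18
print(is_adult(date(2010, 6, 15)))  # False: under 18
```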
Met Police to make facial recognition cameras a fully operational feature of its arsenal

24th January 2020

See article from bbc.com

The Metropolitan Police has announced it will use live facial recognition cameras operationally for the first time on London streets. Following earlier pilots in London and deployments by South Wales police, the cameras are due to be put into action
within a month. Cameras will be clearly signposted, covering a small, targeted area, and police officers will hand out leaflets about the facial recognition scanning, the Met said. Trials of the cameras have already taken place on 10 occasions in
locations such as Stratford's Westfield shopping centre and the West End of London. The Met said in these trials, 70% of wanted suspects in the system who walked past the cameras were identified, while only one in 1,000 people generated a false alert.
But an independent review of six of these deployments found that only eight out of 42 matches were verifiably correct. Over the past four years, as the Met has trialled facial recognition, opposition to its use has intensified, led in the UK by
campaign groups Liberty and Big Brother Watch. The force also believes a recent High Court judgment, which said South Wales Police did not breach the rights of a man whose face had been scanned by a camera, gives it some legal cover. The case is
heading for the Court of Appeal. But the Met is pressing on, convinced that the public at large will support its efforts to use facial recognition to track down serious offenders. Last year, the Met admitted it supplied images for a database
carrying out facial recognition scans on a privately owned estate in King's Cross, after initially denying involvement.

Update: Censored whilst claiming to be uncensored

24th January 2020. See article from ico.org.uk

It seems to be the normal response from
the Information Commissioner's Office to turn a blind eye to the actual serious exploitation of people's personal data whilst focusing heavily on generating excessive quantities of red-tape rules requiring small players to be ultra protective of personal data to the point of strangling their businesses and livelihoods. And, just like for unconsented website tracking and profiling by the online advertising industry, the ICO will monitor and observe and comment again later in the year:
In October 2019 we concluded our investigation into how police use live facial recognition technology (LFR) in public places. Our investigation found there was public support for police use of LFR but also that there needed to be improvements in how
police authorised and deployed the technology if it was to retain public confidence and address privacy concerns. We set out our views in a formal Opinion for police forces. The Metropolitan Police Service (MPS) has incorporated
the advice from our Opinion into its planning and preparation for future LFR use. Our Opinion acknowledges that an appropriately governed, targeted and intelligence-led deployment of LFR may meet the threshold of strict necessity for law enforcement
purposes. We have received assurances from the MPS that it is considering the impact of this technology and is taking steps to reduce intrusion and comply with the requirements of data protection legislation. We expect to receive further information from
the MPS regarding this matter in forthcoming days. The MPS has committed to us that it will review each deployment, and the ICO will continue to observe and monitor the arrangements for, and effectiveness of, its use. This is an
important new technology with potentially significant privacy implications for UK citizens. We reiterate our call for Government to introduce a statutory and binding code of practice for LFR as a matter of priority. The code will ensure consistency in
how police forces use this technology and improve clarity and foreseeability in its use for the public and police officers alike. We believe it's important for government to work with regulators, law enforcement, technology providers and communities
to support the code. Facial recognition remains a high priority for the ICO and the public. We have several ongoing investigations. We will be publishing more about its use by the private sector later this year.
Update: Big Brother Watch Petition

24th January 2020. Sign the petition from you.38degrees.org.uk
To: Priti Patel, Home Secretary and Cressida Dick, Commissioner of the Metropolitan Police Urgently stop the Metropolitan Police using live facial recognition surveillance. Why is this important?
The Metropolitan Police has announced it will use live facial recognition across London, despite an independent review finding its previous trials likely unlawful and over 80% inaccurate. The Met is the largest police force in the
democratic world to roll out this dangerously authoritarian surveillance. This represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK - and it sets a dangerous precedent worldwide. We urge the Home
Secretary and Met Commissioner to stop it now.
ICO takes no immediate action against the most blatant examples of people's most personal data being exploited without consent, ie profiled advertising

23rd January 2020

17th January 2020. See article from ico.org.uk

Blatant abuse of people's private data has become firmly entrenched in the economic model of the free internet ever since Google recognised the value of analysing what people are searching for. Now vast swathes of the internet are handsomely
funded by the exploitation of people's personal data. But that deep entrenchment clearly makes the issue a bit difficult to put right without bankrupting half of the internet that has come to rely on the process. The EU hasn't helped with its
ludicrous idea of focusing its laws on companies having to obtain people's consent to have their data exploited. A more practical lawmaker would have simply banned the abuse of personal data without bothering with the silly consent games. But the EU
seems prone to being lobbied and does not often come up with the most obvious solution. Anyway enforcement of the EU's law is certainly causing issues for the internet censors at the UK's ICO. The ICO warned the adtech industry 6 months ago
that its approach is illegal, and has now announced that it would not be taking any action against the data abuse yet, as the industry has made a few noises about improving a bit over the coming months. Simon McDougall, ICO Executive Director of Technology and Innovation, has written:

The adtech real-time bidding (RTB) industry is complex, involving thousands of companies in the UK alone. Many different actors and service providers sit between the advertisers
buying online advertising space, and the publishers selling it. There is a significant lack of transparency due to the nature of the supply chain and the role different actors play. Our June 2019 report identified a range of
issues. We are confident that any organisation that has not properly addressed these issues risks operating in breach of data protection law. This is a systemic problem that requires organisations to take ownership for their own
data processing, and for industry to collectively reform RTB. We gave industry six months to work on the points we raised, and offered to continue to engage with stakeholders. Two key organisations in the industry are starting to make the changes needed.
The Internet Advertising Bureau (IAB) UK has agreed a range of principles that align with our concerns, and is developing its own guidance for organisations on security, data minimisation, and data retention, as well as UK-focused
guidance on the content taxonomy. It will also educate the industry on special category data and cookie requirements, and continue work on some specific areas of detail. We will continue to engage with IAB UK to ensure these proposals are executed in a
timely manner. Separately, Google will remove content categories, and improve its process for auditing counterparties. It has also recently proposed improvements to its Chrome browser, including phasing out support for third party
cookies within the next two years. We are encouraged by this, and will continue to look at the changes Google has proposed. Finally, we have also received commitments from other UK advertising trade bodies to produce guidance for their members.

If these measures are fully implemented they will result in real improvements to the handling of personal data within the adtech industry. We will continue to engage with industry where we think engagement will
deliver the most effective outcome for data subjects.

Comment: Data regulator ICO fails to enforce the law

18th January 2020. See article from openrightsgroup.org
Responding to ICO's announcement today that the regulator is taking minimal steps to enforce the law against massive data breaches taking place in the online ad industry through Real-Time Bidding, complainants Jim Killock and Michael Veale have
called on the regulator to enforce the law. The complainants are considering taking legal action against the regulator. Legal action could be taken against the ICO for failure to enforce, or against the companies themselves for
their breaches of Data Protection law. The Real-Time Bidding data breach at the heart of the RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination.
As the evidence submitted by the complainants notes, the real-time bidding systems designed by Google and the IAB broadcast what virtually all Internet users read, watch, and listen to online to thousands of companies, without
protection of the data once broadcast. Now, sixteen months after the initial complaint, the ICO has failed to act.

Jim Killock, Executive Director of the Open Rights Group, said:

The ICO is a
regulator, so needs to enforce the law. It appears to be accepting that unlawful and dangerous sharing of personal data can continue, so long as 'improvements' are gradually made, with no actual date for compliance. Last year the
ICO gave a deadline for an industry response to our complaints. Now the ICO is falling into the trap set by industry, of accepting incremental but minimal changes that fail to deliver individuals the control of their personal data that they are legally
entitled to. The ICO must take enforcement action against IAB members. We are considering our position, including whether to take legal action against the regulator for failing to act, or individual
companies for their breach of data protection law.
Dr Michael Veale said:

When an industry is premised on, and profiting from, clear and entrenched illegality that breaches individuals' fundamental rights, engagement is not a suitable remedy. The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.

Ravi Naik, solicitor acting for the complainants, said:

There is no dispute about the underlying illegality at the heart of RTB that our clients have complained about. The ICO has agreed with those concerns, yet the companies have not taken adequate steps to address them. Nevertheless, the ICO has failed to take the direct enforcement action needed to remedy these breaches. Regulatory ambivalence cannot continue.
The ICO is not a silo but is subject to judicial oversight. Indeed, the ICO's failure to act raises a question about the adequacy of the UK Data Protection Act. Is there proper judicial oversight of the ICO? This is a critical question after Brexit, when
the UK needs to agree data transfer arrangements with the EU that cover all industries.
Dr. Johnny Ryan of Brave said:

The RTB system broadcasts what everyone is reading and
watching online, hundreds of billions of times a day, to thousands of companies. It is by far the largest data breach ever recorded. The risks are profound. Brave will support ORG to ensure that the ICO discharges its responsibilities.
Jim Killock and Michael Veale complained about the Adtech industry and Real Time Bidding to the UK's ICO in September 2018. Johnny Ryan of Brave submitted a parallel complaint against Google about their Adtech system to the Irish Data
Protection Authority.

Update: Advertising industry will introduce a 'gold standard 2.0' for privacy towards the end of 2020

23rd January 2020. See article from campaignlive.co.uk
The Internet Advertising Bureau UK has launched a new version of what it calls its Gold Standard certification process that will be independently audited by a third party. In a move to address ongoing privacy concerns with the digital supply chain,
the IAB's Gold Standard 2.0 will incorporate the Transparency and Consent Framework, a widely promoted industry standard for online advertising. The new process will be introduced in the fourth quarter after an industry consultation to agree on the
compliance criteria for incorporating the TCF.

19th January 2020

A little-known start-up helps law enforcement match photos of unknown people to their database of online images scraped from social media

See article from nytimes.com

Google's Chrome browser will ban 3rd party tracking cookies albeit over the course of two years

16th January 2020

See article from bbc.com
See review from bbc.com

Google is to restrict web pages from loading 3rd party profiling cookies when accessed via its Chrome browser. Many large websites, eg major newspapers, make a call to hundreds of 3rd party profilers to allow them to build up a profile of people's browsing history, which then facilitates personalised advertising. Now Google has said that it will block these third-party cookies within the next two years.

Tracking cookies are very much in the sights of the EU, who are trying to put an end to the exploitative practice. However the EU is not willing to actually ban such practices, but instead has invented a silly game about websites obtaining consent for tracking cookies. The issue is of course that a lot of 'free' access websites are funded by advertising and rely on the revenue from the targeted advertising. I have read estimates that if websites were to drop personalised ads, and fall back on contextual advertising (eg advertising cars on motoring pages), then they would lose about a third of their income. Surely a fall of that magnitude would lead to many bankrupt or unviable websites.

Now the final position of the EU's cookie consent game is that a website would have to present two easy options before allowing access to a
website:
- Do you want to allow tracking cookies to build up a database of your browsing history?
- Do you NOT want to allow tracking cookies to build up a database of your browsing history?
The simple outcome will be that virtually no one will opt for tracking, so the website will lose a third of its income. So it is rather unsurprising that websites would rather avoid offering such an easy option that would deprive them of so much of
their income. In reality the notion of consent is not practical. It would be more honest to think of the use of tracking cookies as a price for 'free' access to a website. Perhaps when the dust has settled, a more honest and practical endgame would be a choice more like:
- Do you want to allow tracking cookies to build up a database of your browsing history, in return for 'free' access?
- Do you want to pay a fee to enable access to the website without tracking cookies?
- Sorry, you may not access this website.
The EU has been complaining about companies trying to avoid the revenue-destroying official consent options. A study just published observes that nearly all cookie consent pop-ups are flouting EU privacy laws. Researchers at the Massachusetts
Institute of Technology, University College London (UCL) and Aarhus University have conducted a joint study into the use of cookies. They analysed five companies which offer consent management platforms (CMP) for cookies used by the UK's top 10,000
websites. Despite EU privacy laws stating that consent for cookies must be informed, specific and freely given, the research suggests that only 12% of the sites met the minimal requirements of GDPR (General Data Protection Regulation) law. Instead
they were found to blanket data consent options in complicated site design, such as:
- pre-ticked boxes
- decline buttons buried on later pages
- multiple clicks required to opt out
- tracking users before consent and after pressing reject

Just over half the sites studied did not offer rejecting all tracking as an option. Of the sites which did, only 13% made it accessible through the same or fewer clicks as the option to accept all.
The researchers estimate it would take, on average, more than half an hour to read through what the third-party companies are doing with your data, and even longer to read all their privacy policies. It's a joke and there's no actual way you could do
this realistically, said Dr Veale.
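For readers unfamiliar with the mechanics, the sketch below illustrates why a third-party cookie enables cross-site profiling. The domains and cookie values are hypothetical; the SameSite=None; Secure attribute is what Chrome already requires before it will send a cookie in a cross-site context at all:

```python
# A tracker embedded on many sites sets one cookie on its own domain:
TRACKER_SET_COOKIE = "uid=abc123; Domain=tracker.example; SameSite=None; Secure"

# The browser then attaches that same cookie to the tracker's requests from
# every page that embeds it, so the tracker's logs look like this:
requests_seen_by_tracker = [
    {"referer": "https://news-site.example/politics", "cookie": "uid=abc123"},
    {"referer": "https://shopping-site.example/shoes", "cookie": "uid=abc123"},
    {"referer": "https://health-site.example/symptoms", "cookie": "uid=abc123"},
]

# Joining the logs on the cookie value reconstructs one person's browsing
# history across unrelated sites -- the profile Chrome's change targets.
profile = [r["referer"] for r in requests_seen_by_tracker
           if r["cookie"] == "uid=abc123"]
print(profile)
```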
Another example of how dangerous it is to provide personal data for age or identity verification related to adult websites

16th January 2020

See article from bbc.com

Cyber-security researchers claim that highly sensitive personal details about thousands of porn stars have been exposed online by an adult website. They told BBC News they had found an open folder on PussyCash's Amazon web server that contained 875,000 files. However the live webcam porn network, which owns the brand ImLive and other adult websites, said there was no evidence anyone else had accessed the folder. And it removed public access as soon as it had been told of the leak.

The researchers are from vpnMentor, which is a VPN comparison site. vpnMentor said in a blog that anyone with the right link could have accessed 19.95GB of data dating back over 15 years as well as from the past few weeks, including contracts revealing more than 4,000 models' details, including:

- full name
- address
- social security number
- date of birth
- phone number
- height and weight
- hips, bust and waist measurements
- piercings, tattoos and scars

The files also revealed scans or photographs of their passports, driving licences, credit cards and birth certificates.
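The report describes an openly accessible folder on an Amazon web server. Assuming it was an S3 bucket -- the article does not confirm the exact service -- the standard remediation is S3's block-public-access setting, sketched below with boto3. The bucket name is hypothetical and the call assumes configured AWS credentials:

```python
# pip install boto3
import boto3

s3 = boto3.client("s3")

# Apply S3's "block public access" settings to a hypothetical bucket,
# the usual first response to an openly listable store of sensitive files.
s3.put_public_access_block(
    Bucket="example-studio-documents",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```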

15th January 2020

Google to strangle user agent strings in its Chrome browser to hamper advertisers from profiling users via fingerprinting

See article from zdnet.com

50 rights organisations call on Google to ban exploitative apps being pre-installed on phones to work around user privacy settings

14th January 2020

See article from privacyinternational.org
Sign petition from action.privacyinternational.org

Privacy International and over 50 other organisations have submitted a letter to Alphabet Inc. CEO Sundar Pichai asking Google to take action against exploitative pre-installed software on Android devices.

Dear Mr. Pichai,
We, the undersigned, agree with you: privacy cannot be a luxury offered only to those people who can afford it. And yet, Android Partners - who use the Android trademark and branding - are manufacturing
devices that contain pre-installed apps that cannot be deleted (often known as "bloatware"), which can leave users vulnerable to their data being collected, shared and exposed without their knowledge or consent. These
phones carry the "Google Play Protect" branding, but research shows that 91% of pre-installed apps do not appear in Google Play --
Google's app store. These pre-installed apps can have privileged custom permissions that let them operate outside the Android security model. This means permissions can be defined by the app - including access to the microphone,
camera and location - without triggering the standard Android security prompts. Users are therefore completely in the dark about these serious intrusions. We are concerned that this leaves users vulnerable to the exploitative
business practices of cheap smartphone manufacturers around the world. The changes we believe are needed most urgently are as follows:
- Individuals should be able to permanently uninstall the apps on their phones. This should include any related background services that continue to run even if the apps are disabled.
- Pre-installed apps should adhere to the same scrutiny as Play Store apps, especially in relation to custom permissions.
- Pre-installed apps should have some update mechanism, preferably through Google Play and without a user account.
- Google should refuse to certify a device on privacy grounds, where manufacturers or vendors have attempted to exploit users in this way.
We, the undersigned, believe these fair and reasonable changes would make a huge difference to millions of people around the world who should not have to trade their privacy and security for access to a smartphone.
We urge you to use your position as an influential agent in the ecosystem to protect people and stop manufacturers from exploiting them in a race to the bottom on the pricing of smartphones. Yours sincerely,
American Civil Liberties Union (ACLU), Afghanistan Journalists Center (AFJC), Americans for Democracy and Human Rights in Bahrain (ADHRB), Amnesty International, Asociación por los Derechos Civiles (ADC), Association for Progressive Communications (APC), Association for Technology and Internet (ApTI), Association of Caribbean Media Workers, Australian Privacy Foundation, Center for Digital Democracy, Centre for Intellectual Property and Information Technology Law (CIPIT), Citizen D, Civil Liberties Union for Europe, Coding Rights, Consumer Association the Quality of Life-EKPIZO, Datos Protegidos, Digital Rights Foundation (DRF), Douwe Korff (Emeritus Professor of International Law, London Metropolitan University and Associate of the Oxford Martin School, University of Oxford), DuckDuckGo, Electronic Frontier Foundation (EFF), Forbrukerrådet // Norwegian Consumer Council, Foundation for Media Alternatives, Free Media Movement (FMM), Freedom Forum, Fundación Karisma, Gulf Centre for Human Rights (GCHR), Hiperderecho, Homo Digitalis, IJC Moldova, Initiative for Freedom of Expression - Turkey (IFox), Irish Council for Civil Liberties, Media Foundation for West Africa, Media Institute of Southern Africa (MISA), Media Policy and Democracy Project (University of Johannesburg), Media Policy Institute (MPI), Media Watch, Metamorphosis Foundation for Internet and Society, Open Rights Group (ORG), Palestinian Center For Development & Media Freedoms (MADA), Panoptykon, Paradigm Initiative, PEN Canada, Philippine Alliance of Human Rights Advocates (PAHRA), Privacy International, Public Citizen, Red en Defensa de los Derechos Digitales (R3D), Syrian Center for Media and Freedom of Expression (SCM), TEDIC, The Danish Consumer Council, The Institute for Policy Research and Advocacy (ELSAM), The Tor Project, Unwanted Witness, Vigilance for Democracy and the Civic State

13th January 2020

Former Microsoft contractor says he was emailed a login after minimal vetting

See article from theguardian.com


3rd January 2020

An interesting report on how smartphone location data is being compiled and databased in the US. By Stuart A. Thompson and Charlie Warzel

See article from nytimes.com

California leads the way on internet privacy in the US as its CCPA law comes into effect

1st January 2020

See article from slate.com
See article from eff.org

A new California law has come into effect that seems to have been inspired by the EU's box-ticking nightmare, the GDPR. It gives Californians rights in determining how their data is used by large internet companies. The law gives consumers the right
to know about the personal data that companies have collected about them, to demand that it be deleted, and to prevent it from being sold to third parties. Although privacy controls only are required for Californians it seems likely that large
companies will provide the same controls to all Americans. The California Consumer Privacy Act (CCPA) will only apply to businesses that earn more than $25 million in gross revenue, that collect data on more than 50,000 people, or for which selling
consumer data accounts for more than 50% of revenue. In early December, Twitter rolled out a privacy center where users can learn more about the company's approach to the CCPA and navigate to a dashboard for customizing the types of info that the
platform is allowed to use for ad targeting. Google has also created a protocol that blocks websites from transmitting data to the company. Facebook, meanwhile, is arguing that it does not need to change anything because it does not technically sell
personal information. Companies must at least set up a webpage and a toll-free phone number for fielding data requests. The personal data covered by the CCPA includes IP addresses, contact info, internet browsing history, biometrics (like facial
recognition and fingerprint data), race, gender, purchasing behavior, and locations. Many sections of the law are quite vague and awaiting further clarification in the final draft regulations, which the California attorney general's office is
expected to release later in 2020.