UK and US set to sign treaty allowing UK police back door access to WhatsApp and other end to end encrypted messaging platforms
1st October 2019
29th September 2019. See article from finance.yahoo.com
UK police will be able to force US-based social media platforms to hand over users' messages, including those that are end to end encrypted, under a treaty that is set to be signed next month. According to a report in The Times, investigations into
certain 'serious' criminal offenses will be covered under the agreement between the two countries. The UK government has been imploring Facebook to create back doors which would enable intelligence agencies to gain access to messaging platforms
for matters of national security. The news of the agreement between the US and UK is sure to ramp up discussion of the effectiveness of end to end encryption when implemented by large corporations. If this report is confirmed and Facebook/police
can indeed listen in on 'end to end encryption' then such implementations of encryption are worthless.
Update: Don't jump to conclusions
1st October 2019. See article from techdirt.com
No, The New Agreement To Share Data Between US And UK Law Enforcement Does Not Require Encryption Backdoors
It's no secret many in the UK government want backdoored encryption. The UK wing of the Five
Eyes surveillance conglomerate says the only thing that should be absolute is the government's access to communications. The long-gestating Snooper's Charter frequently contained language mandating lawful access, the government's preferred nomenclature
for encryption backdoors. And officials have, at various times, made unsupported statements about how no one really needs encryption, so maybe companies should just stop offering it. What the UK government has in the works now
won't mandate backdoors, but it appears to be a way to get its foot in the (back)door with the assistance of the US government. An agreement between the UK and the US -- possibly an offshoot of the Cloud Act -- would mandate the sharing of encrypted
communications with UK law enforcement, as Bloomberg reports. Sharing information is fine. Social media companies have plenty of information. What they don't have is access to users' encrypted communications, at least in most
cases. Signing an accord won't change that. There might be increased sharing of encrypted communications but it doesn't appear this agreement actually requires companies to decrypt communications or create backdoors.
26th September 2019
Apple iOS 13 will reveal new ways that Facebook and Google snoop on your location data. See article from forbes.com
21st September 2019
How cookies and tracking exploded, and why the adtech industry now wants full identity tokens. A good technical write-up of where we are and where it all could go. See article from iabtechlab.com
19th September 2019
Here's a rundown of the privacy concerns I had when setting up and using a smart TV -- and the steps I took in an attempt to reduce these privacy concerns. By James Boyle. See article from taylorvinters.com
6th September 2019
Brave presents new technical evidence about personalised advertising, and has uncovered a mechanism by which Google appears to be circumventing its purported GDPR privacy protections. See article from brave.com
5th September 2019
Privacy International finds that some online depression tests share your results with third parties. See article from privacyinternational.org
1st September 2019
Are US border police checking out your Facebook postings and friends at the airport immigration desk? See article from theverge.com
A detailed technical investigation of Google's advanced tools designed to profile internet users for advertising
31st August 2019
See CC article from eff.org by Bennett Cyphers
Last week, Google announced a plan to build a more private web. The announcement post was, frankly, a mess. The company that tracks user behavior on over 2/3 of the web said that Privacy is paramount to us, in everything we do. Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies -- by far the most common tracking technology on the Web, and Google's tracking method of choice -- will hurt user privacy. By taking away the tools that make tracking easy, it contended, developers like Apple and Mozilla will force trackers to resort to opaque techniques like fingerprinting. Of course, lost in that argument is the fact that the makers of Safari and Firefox have shown serious commitments to shutting down fingerprinting, and both browsers have made real progress in that direction. Furthermore, a key part of the Privacy Sandbox proposals is Chrome's own (belated) plan to stop fingerprinting.
But hidden behind the false equivalencies and privacy gaslighting are a set of real technical proposals. Some are genuinely good ideas. Others could be unmitigated privacy disasters. This post will look at the specific proposals
under Google's new Privacy Sandbox umbrella and talk about what they would mean for the future of the web.
The good: fewer CAPTCHAs, fighting fingerprints
Let's start with the proposals that might actually help users.
First up is the Trust API. This proposal is based on Privacy Pass, a privacy-preserving and frustration-reducing alternative to CAPTCHAs. Instead of having to fill out CAPTCHAs all over the web, with the Trust API, users will be
able to fill out a CAPTCHA once and then use trust tokens to prove that they are human in the future. The tokens are anonymous and not linkable to one another, so they won't help Google (or anyone else) track users. Since Google is the single largest
CAPTCHA provider in the world, its adoption of the Trust API could be a big win for users with disabilities, users of Tor, and anyone else who hates clicking on grainy pictures of storefronts.
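As a concrete illustration of that flow, here is a minimal sketch of how a browser might obtain a batch of trust tokens after one CAPTCHA and spend them later. The endpoints, token format and the blind/unblind stand-ins are assumptions for illustration, not the actual Privacy Pass protocol or Trust API.

```typescript
// A minimal sketch of the CAPTCHA-once / redeem-later flow behind Privacy Pass
// style trust tokens. Endpoints and the crypto stand-ins are illustrative only.

interface TrustToken { value: string; blindingFactor: string; signature: string }

// Stand-ins for the real blinded-signature (VOPRF) operations.
const randomToken = () => crypto.randomUUID();
const blind = (value: string) => ({ value, blindingFactor: crypto.randomUUID() });
const unblind = (blindSignature: string, _factor: string) => blindSignature;

// 1. Solve one CAPTCHA, receive a batch of signed-but-unlinkable tokens.
//    The issuer only ever sees blinded values, so it cannot recognise them later.
async function issueTokens(captchaSolution: string, count = 30): Promise<TrustToken[]> {
  const blinded = Array.from({ length: count }, () => blind(randomToken()));
  const res = await fetch("https://issuer.example/trust/issue", {
    method: "POST",
    body: JSON.stringify({ captchaSolution, blinded: blinded.map(b => b.value) }),
  });
  const signatures: string[] = await res.json();
  return blinded.map((b, i) => ({ ...b, signature: unblind(signatures[i], b.blindingFactor) }));
}

// 2. Any site wanting proof-of-humanness gets exactly one token. Tokens are not
//    linkable to each other or to the CAPTCHA, so they are useless as a tracking ID.
async function redeemToken(token: TrustToken, site: string): Promise<boolean> {
  const res = await fetch(`${site}/trust/redeem`, {
    method: "POST",
    body: JSON.stringify({ value: token.value, signature: token.signature }),
  });
  return res.ok;
}
```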
Google's proposed privacy budget for fingerprinting is also exciting. Browser fingerprinting is the practice of gathering enough information about a specific browser instance to try to uniquely identify a user. Usually, this is accomplished by combining easily accessible information
like the user agent string with data from powerful APIs like the HTML canvas. Since fingerprinting extracts identifying data from otherwise-useful APIs, it can be hard to stop without hamstringing legitimate web apps. As a workaround, Google proposes
limiting the amount of data that websites can access through potentially sensitive APIs. Each website will have a budget, and if it goes over budget, the browser will cut off its access. Most websites won't have any use for things like the HTML canvas,
so they should be unaffected. Sites that need access to powerful APIs, like video chat services and online games, will be able to ask the user for permission to go over budget. The devil will be in the details, but the privacy budget is a promising
framework for combating browser fingerprinting.
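To make the budget idea concrete, here is a rough sketch of how per-origin accounting might work. The bit costs, the ten-bit allowance and the blocking behaviour are assumptions for illustration, not Chrome's actual design.

```typescript
// A minimal sketch of the privacy budget idea: each origin gets a fixed allowance
// of identifying bits, and reads of fingerprintable surfaces spend it.

const BUDGET_BITS = 10; // assumed per-origin allowance

const estimatedCost = {
  userAgent: 2,       // coarse browser/version information
  deviceMemory: 2,
  webglRenderer: 5,
  canvasReadback: 6,  // canvas pixel readback is highly identifying
} as const;

const spent = new Map<string, number>(); // origin -> bits already consumed

function chargeBudget(origin: string, surface: keyof typeof estimatedCost): boolean {
  const used = spent.get(origin) ?? 0;
  const cost = estimatedCost[surface];
  if (used + cost > BUDGET_BITS) {
    console.warn(`${origin} is over its privacy budget; blocking ${surface}`);
    return false; // the browser would deny access or hand back a generic value
  }
  spent.set(origin, used + cost);
  return true;
}

// A site that only checks the user agent stays under budget; one that also reads
// back canvas pixels and the WebGL renderer string does not.
chargeBudget("https://news.example", "userAgent");          // true
chargeBudget("https://tracker.example", "canvasReadback");  // true
chargeBudget("https://tracker.example", "webglRenderer");   // false (6 + 5 > 10)
```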
Unfortunately, that's where the good stuff ends. The rest of Google's proposals range from mediocre to downright dangerous.
The bad: Conversion measurement
Perhaps the most fleshed-out proposal in the Sandbox is the conversion measurement API. This is trying to tackle a problem as old as online ads: how can you know whether the people clicking on an ad ultimately buy the product
it advertised? Currently, third-party cookies do most of the heavy lifting. A third-party advertiser serves an ad on behalf of a marketer and sets a cookie. On its own site, the marketer includes a snippet of code which causes the user's browser to send
the cookie set earlier back to the advertiser. The advertiser knows when the user sees an ad, and it knows when the same user later visits the marketer's site and makes a purchase. In this way, advertisers can attribute ad impressions to page views and
purchases that occur days or weeks later. Without third-party cookies, that attribution gets a little more complicated. Even if an advertiser can observe traffic around the web, without a way to link ad impressions to page views,
it won't know how effective its campaigns are. After Apple started cracking down on advertisers' use of cookies with Intelligent Tracking Prevention (ITP), it also proposed a privacy-preserving ad attribution solution . Now, Google is proposing something
similar . Basically, advertisers will be able to mark up their ads with metadata, including a destination URL, a reporting URL, and a field for extra impression data -- likely a unique ID. Whenever a user sees an ad, the browser will store its metadata
in a global ad table. Then, if the user visits the destination URL in the future, the browser will fire off a request to the reporting URL to report that the ad was converted. In theory, this might not be so bad. The API should
allow an advertiser to learn that someone saw its ad and then eventually landed on the page it was advertising; this can give raw numbers about the campaign's effectiveness without individually-identifying information. The problem
is the impression data. Apple's proposal allows marketers to store just 6 bits of information in a campaign ID, that is, a number between 1 and 64. This is enough to differentiate between ads for different products, or between campaigns using different
media. On the other hand, Google's ID field can contain 64 bits of information -- a number between 1 and 18 quintillion . This will allow advertisers to attach a unique ID to each and every ad impression they serve, and,
potentially, to connect ad conversions with individual users. If a user interacts with multiple ads from the same advertiser around the web, these IDs can help the advertiser build a profile of the user's browsing habits.
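A small sketch may make the concern clearer. The field names, report format and ad-table handling below are assumptions drawn from the outline above, not the final API; the point is what a 64-bit impression-data field lets an advertiser do.

```typescript
// Sketch of the click-through attribution flow described above.

interface AdImpression {
  destinationUrl: string; // the page the ad points at
  reportingUrl: string;   // where the browser later sends the conversion report
  impressionData: bigint; // the metadata field whose size is the whole argument
}

// A 6-bit field (Apple's approach) can only distinguish 64 campaigns...
const campaignId = 42n; // somewhere in 0..63
// ...whereas 64 bits is enough to stamp every single impression with a unique ID,
// tying a later conversion back to one specific person seeing one specific ad.
const uniqueImpressionId = 0x9f3c12ab77de0451n;

const adTable: AdImpression[] = []; // stands in for the browser's global ad table

function recordImpression(ad: AdImpression): void {
  adTable.push(ad);
}

// When the user later lands on a destination URL, the browser fires the report.
async function maybeReportConversion(visitedUrl: string): Promise<void> {
  for (const ad of adTable) {
    if (visitedUrl.startsWith(ad.destinationUrl)) {
      await fetch(`${ad.reportingUrl}?converted=1&data=${ad.impressionData}`);
    }
  }
}

recordImpression({
  destinationUrl: "https://shop.example/trainers",
  reportingUrl: "https://ads.example/report",
  impressionData: uniqueImpressionId, // with 64 bits this is effectively a user ID
});
```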
The ugly: FLoC
Even worse is Google's proposal for Federated Learning of Cohorts (or FLoC). Behind the scenes, FLoC is based on Google's pretty neat federated learning technology. Basically, federated learning allows users to
build their own, local machine learning models by sharing little bits of information at a time. This allows users to reap the benefits of machine learning without sharing all of their data at once. Federated learning systems can be configured to use
secure multi-party computation and differential privacy in order to keep raw data verifiably private. The problem with FLoC isn't the process, it's the product. FLoC would use Chrome users' browsing history to do clustering . At a
high level, it will study browsing patterns and generate groups of similar users, then assign each user to a group (called a flock). At the end of the process, each browser will receive a flock name which identifies it as a certain kind of web user. In
Google's proposal, users would then share their flock name, as an HTTP header, with everyone they interact with on the web. This is, in a word, bad for privacy. A flock name would essentially be a behavioral credit score: a tattoo
on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate. The flock names will likely be inscrutable to users, but could reveal incredibly sensitive information to third
parties. Trackers will be able to use that information however they want, including to augment their own behind-the-scenes profiles of users. Google says that the browser can choose to leave sensitive data from browsing history
out of the learning process. But, as the company itself acknowledges, different data is sensitive to different people; a one-size-fits-all approach to privacy will leave many users at risk. Additionally, many sites currently choose to respect their
users' privacy by refraining from working with third-party trackers. FLoC would rob these websites of such a choice. Furthermore, flock names will be more meaningful to those who are already capable of observing activity around
the web. Companies with access to large tracking networks will be able to draw their own conclusions about the ways that users from a certain flock tend to behave. Discriminatory advertisers will be able to identify and filter out flocks which represent
vulnerable populations. Predatory lenders will learn which flocks are most prone to financial hardship. FLoC is the opposite of privacy-preserving technology. Today, trackers follow you around the web, skulking in the digital
shadows in order to guess at what kind of person you might be. In Google's future, they will sit back, relax, and let your browser do the work for them.
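To recap the mechanism in code: the sketch below assumes the flock name travels as a request header (the header name "X-Flock" and the value are hypothetical) and shows how trivially any server, or any tracker embedded in a page, could read it.

```typescript
// A rough sketch of sharing a flock name with every site as a request header.
// The header name and value format are made up for illustration.

import { createServer } from "node:http";

// Browser side: every request to every site carries the same short label.
async function visitAsFlockMember(url: string, flockName: string) {
  return fetch(url, { headers: { "X-Flock": flockName } });
}

// Site side (or any third-party tracker embedded in the page): the label arrives
// on the very first request, with no cookie, login or prior contact required.
createServer((req, res) => {
  const flock = req.headers["x-flock"]; // e.g. "k4j9"
  // The recipient can log it, key ad targeting on it, or join it to its own profiles.
  console.log(`visitor from flock ${String(flock)} requested ${req.url}`);
  res.end("ok");
}).listen(8080, () => {
  visitAsFlockMember("http://localhost:8080/trainers", "k4j9");
});
```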
The ugh: PIGIN
That brings us to PIGIN. While FLoC promises to match each user with a single, opaque group identifier, PIGIN would have each browser track a set of interest groups that it believes its user belongs to. Then, whenever the browser makes a request to an advertiser, it can send along a
list of the user's interests to enable better targeting. Google's proposal devotes a lot of space to discussing the privacy risks of PIGIN. However, the protections it discusses fall woefully short. The authors propose using
cryptography to ensure that there are at least 1,000 people in an interest group before disclosing a user's membership in it, as well as limiting the maximum number of interests disclosed at a time to 5. This limitation doesn't hold up to much scrutiny:
membership in 5 distinct groups, each of which contains just a few thousand people, will be more than enough to uniquely identify a huge portion of users on the web. Furthermore, malicious actors will be able to game the system in a number of ways,
including to learn about users' membership in sensitive categories. While the proposal gives a passing mention to using differential privacy, it doesn't begin to describe how, specifically, that might alleviate the myriad privacy risks PIGIN raises.
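Some back-of-envelope arithmetic, using assumed round numbers rather than anything from the proposal, shows how little protection those thresholds buy:

```typescript
// My own rough arithmetic, not from the proposal: it is the *combination* of
// disclosed groups that picks a person out.

const webUsers = 4e9;        // rough number of people online (assumption)
const groupSize = 5_000;     // "a few thousand" members per disclosed interest group
const groupsDisclosed = 5;   // the proposed maximum disclosed at once

// Treating memberships as roughly independent, the expected number of *other*
// people who share all five of your disclosed groups is:
const fractionPerGroup = groupSize / webUsers;                        // ~1.25e-6
const expectedMatches = webUsers * fractionPerGroup ** groupsDisclosed;

console.log(expectedMatches.toExponential(1)); // ~1.2e-20 -- the combination is effectively unique
```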
Google touts PIGIN as a win for transparency and user control. This may be true to a limited extent. It would be nice to know what information advertisers use to target particular ads, and it would be useful to be able to opt-out of
specific interest groups one by one. But like FLoC, PIGIN does nothing to address the bad ways that online tracking currently works. Instead, it would provide trackers with a massive new stream of information they could use to build or augment their own
user profiles. The ability to remove specific interests from your browser might be nice, but it won't do anything to prevent every company that's already collected it from storing, sharing, or selling that data. Furthermore, these features of PIGIN would
likely become another option that most users don't touch. Defaults matter. While Apple and Mozilla work to make their browsers private out of the box, Google continues to invent new privacy-invasive practices for users to opt-out of.
It's never about privacy
If the Privacy Sandbox won't actually help users, why is Google proposing all these changes? Google can probably see which way the wind is blowing.
Safari's Intelligent Tracking Prevention and Firefox's Enhanced Tracking Protection have severely curtailed third-party trackers' access to data. Meanwhile, users and lawmakers continue to demand stronger privacy protections from Big Tech. While Chrome
still dominates the browser market, Google might suspect that the days of unlimited access to third-party cookies are numbered. As a result, Google has apparently decided to defend its business model on two fronts. First, it's
continuing to argue that third-party cookies are actually fine , and companies like Apple and Mozilla who would restrict trackers' access to user data will end up harming user privacy. This argument is absurd. But unfortunately, as long as Chrome remains
the most popular browser in the world, Google will be able to single-handedly dictate whether cookies remain a viable option for tracking most users. At the same time, Google seems to be hedging its bets. The Privacy Sandbox
proposals for conversion measurement, FLoC, and PIGIN are each aimed at replacing one of the existing ways that third-party cookies are used for targeted ads. Google is brainstorming ways to continue serving targeted ads in a post-third-party-cookie
world. If cookies go the way of the pop-up ad, Google's targeting business will continue as usual. The Sandbox isn't about your privacy. It's about Google's bottom line. At the end of the day, Google is an advertising company that
happens to make a browser.
Microsoft to be investigated for Windows 10 slurping user data without consent
29th August 2019
See article from zdnet.com
Windows 10 'telemetry' snoops on your data without users having any choice to say no. Surely a massive no-no under the European General Data Protection Regulation (GDPR), which requires that a data grab is either essential or else consent has been gained. And
Microsoft never asks for consent, it just grabs the data anyway. Now the Dutch Data Protection Authority (DPA) is asking how Microsoft complies with GDPR. It has referred Windows 10 to the data protection authority in Ireland, where Microsoft is headquartered
in Europe. The case stems from the Dutch DPA's findings in pre-GDPR 2017. At that time, the agency found that Microsoft didn't tell Windows 10 Home and Pro users which personal data it collects and how it uses the data,
and didn't give consumers a way to give specific consent. As part of the Windows 10 April 2018 Update, Microsoft last year released new privacy tools to help explain to users why and when it was collecting telemetry data. And by April 2018, the
Dutch DPA assessed that the privacy of Windows 10 users was greatly improved due to its probe, having addressed the concerns raised over earlier versions of Windows 10. However, the Dutch DPA on Tuesday said while the changes Microsoft made last
year to Windows 10 telemetry collection did comply with the agreement, the company might still be in breach of EU privacy rules. The earlier investigation brought to light that Microsoft is remotely collecting other data from users. As a result,
Microsoft is still potentially in breach of privacy rules. Ireland's DPA has confirmed it has received the Netherlands' request.
27th August 2019
Banning tracking cookies jeopardizes the future of the vibrant Web. By Timothy B. Lee. See article from arstechnica.com
A detailed analysis suggesting that Canada is moving towards supporting backdoors and deliberately weakening encryption algorithms
24th August 2019
See article from citizenlab.ca by Christopher Parsons
Facebook is introducing a privacy option to limit its off-Facebook tracking
21st August 2019
See article from bbc.com
Facebook is revealing the wider range of websites and apps that gather data for Facebook, previously without being identified or seeking consent. Facebook will offer a new privacy control covering these newly revealed snoopers. A feature in settings
called Off-Facebook Activity will show all the apps and websites that send information about you to Facebook, which is then used to target ads more effectively. You will also be able to clear your history and prevent your future off-app behaviour being
tapped. For now, it is rolling out very slowly, with only Ireland, South Korea and Spain getting access. But the goal is to eventually offer it globally. Facebook collects data from beyond its platform either because you have opted to use
the social media site to log in to an app or, more likely, because a website uses something called Facebook Pixel to track your activities. If you select options to turn off tracking, Facebook will still collect the data, but it will be anonymised.
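For the curious, a site-side pixel snippet looks roughly like the sketch below, which follows the widely published Facebook Pixel pattern; the pixel ID is a placeholder and details may differ from Facebook's current script, which pages load from connect.facebook.net.

```typescript
// A sketch of the sort of snippet behind off-Facebook data collection, following
// the widely published Facebook Pixel pattern. Pixel ID is a placeholder.

declare const fbq: (command: string, ...args: unknown[]) => void; // provided by fbevents.js once loaded

if (typeof fbq === "function") {
  fbq("init", "000000000000000"); // the site owner's pixel ID
  fbq("track", "PageView");       // reports this page view to Facebook

  // Richer events are common too, e.g. after a checkout:
  fbq("track", "Purchase", { value: 30.0, currency: "GBP" });
  // The request can carry facebook.com cookies, letting the visit be tied back to
  // an account -- this is the activity the Off-Facebook Activity setting shows and clears.
}
```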
...Read the full article from bbc.com
ICO seems to have backed off from requiring age verification for nearly all websites
13th August 2019
See article from ico.org.uk
See ICO censorship proposal and consultation document [pdf] from ico.org.uk
Back in April of this year the data protection police of the ICO set about drawing up rules for nearly all commercial websites on how they should deal with children and their personal data. Rather perversely the ICO decided that age verification
should underpin a massively complex regime requiring different data processing for several age ranges. And of course the children's data would be 'protected' by requiring nearly all websites to demand everybody's identity-defining personal data in order
to slot people into the ICO's age ranges. The ICO consulted on its proposals and it seems that the internet industry forcefully pointed out that it was not a good idea for nearly all websites to have to demand age verification from all website
visitors. The ICO has yet to publish the results of the consultation or its response to the criticism, but it has been playing down this ridiculously widespread age verification. This week the Information Commissioner Elizabeth Denham further
hinted at this in a blog. She wrote: Our consultation on the
proposed code began in April, and prompted more than 450 written responses, as well as more
than 40 meetings with key stakeholders. We were pleased with the breadth of views we heard. Parents, schools and children's campaign groups helped us better understand the problems young people can face online, whether using social media services or
popular games, while developers, tech companies and online service providers gave us a crucial insight into the challenges industry faces to make this a reality. ... This consultation has helped us ensure our final code
will be effective, proportionate and achievable. It has also flagged the need for us to be clearer on some standards. We do not want to see an age-gated internet, where visiting any digital service requires
people to prove how old they are. Our aim has never been to keep children from online services, but to protect them within it. We want providers to set their privacy settings to high as a default, and to have strategies in place for how children's data
is handled. We do not want to prevent young people from engaging with the world around them, and we've been quick to respond to concerns that our code would affect news websites. This isn't the case. As we told a DCMS Select
Committee in July, we do not want to create any barriers to children accessing news content. The news media plays a fundamental role in children's lives and the final version of the code will make that very clear. That final
version of the code will be delivered to the Secretary of State ahead of the statutory deadline of 23 November 2019. We recognise the need to allow companies time to implement the standards and ensure they are complying with the
law. The law allows for a transition period of up to a year and we'll be considering the most appropriate approach to this, before making a final decision in the autumn. In addition to the code itself, my office is also preparing a significant package to
ensure that organisations are supported through any transition period, including help and advice for designers and engineers.
13th August 2019
...and to Cashless Stores. By Jay Stanley of the ACLU. See article from aclu.org
Everyone's out to scrape all your social media postings and compile a searchable database of your life
10th August 2019
See article from theverge.com
A few days ago Donald Trump responded to more mass shootings by calling on social networks to build tools for identifying potential mass murderers before they act. And across the government, there appears to be growing consensus that social networks
should become partners in surveillance with the government. So quite a timely moment for the Wall Street Journal to publish an article about FBI plans for mass snooping on social media: The FBI is soliciting
proposals from outside vendors for a contract to pull vast quantities of public data from Facebook, Twitter and other social media to proactively identify and reactively monitor threats to the United States and its interests. The
request was posted last month, weeks before a series of mass murders shook the country and led President Trump to call for social-media platforms to do more to detect potential shooters before they act. The deadline for bids is
Aug. 27. As described in the solicitation, it appears that the service would violate Facebook's ban against the use of its data for surveillance purposes, according to the company's user agreements and people familiar with how it
seeks to enforce them.
The Verge comments on a privacy paradox: But so far, as the Journal story illustrates, the government's approach has been incoherent. On one hand, it fines Facebook $5 billion
for violating users' privacy; on the other, it outlines a plan to potentially store all Americans' public posts in a database for monitoring purposes.
But of course it is not a paradox: many if not most people believe that they're
entitled to privacy whilst all the 'bad' people in the world aren't.
Commercial interests are also very keen on profiling people from their social media postings. There's probably a long list of advertisers who would love a list of rich people who go to casinos and stay at expensive hotels. Well, as Business Insider
has noted, one company, Hyp3r, has been scraping all public postings on Instagram to provide exactly that information: A combination of configuration errors and lax oversight by Instagram allowed one of the social
network's vetted advertising partners to misappropriate vast amounts of public user data and create detailed records of users' physical whereabouts, personal bios, and photos that were intended to vanish after 24 hours. The
profiles, which were scraped and stitched together by the San Francisco-based marketing firm Hyp3r, were a clear violation of Instagram's rules. But it all occurred under Instagram's nose for the past year by a firm that Instagram had blessed as one of
its preferred Facebook Marketing Partners. Hyp3r is a marketing company that tracks social-media posts tagged with real-world locations. It then lets its customers directly interact with those posts via its tools and uses that
data to target the social-media users with relevant advertisements. Someone who visits a hotel and posts a selfie there might later be targeted with pitches from one of the hotel's competitors, for example. The total volume of
Instagram data Hyp3r has obtained is not clear, though the firm has publicly said it has a unique dataset of hundreds of millions of the highest value consumers in the world, and sources said more than 90% of its data came from Instagram. It ingests
in excess of 1 million Instagram posts a month, sources said. See full article from
businessinsider.com
Privacy International writes to the DCMS Sub-Committee on Disinformation
6th August 2019
See article from privacyinternational.org
Dear Chair and Committee colleagues, Privacy International is an international NGO, based in London, which works with partners around the world to challenge state and corporate surveillance and data exploitation. As part of
our work, we have a dedicated programme " Defending Democracy and Dissent " where we advocate for limits on data exploitation throughout the electoral cycle . We have been closely following the important work of the
Committee. Prompted by the additional evidence provided to the Committee by Brittany Kaiser , published on 30 July 2019, we would like to draw your attention to aspects of her submission that stood out to us and related points:
The ways in which Cambridge Analytica has used segments and inferences is strikingly similar to the techniques we have observed in the data broker industry. In November 2018, Privacy International
complained about seven data brokers (Acxiom, Oracle), ad-tech companies (Criteo, Quantcast, Tapad), and credit referencing agencies (Equifax, Experian) to data protection authorities in France, Ireland, and the UK. As evidenced in the documentary
"The Great Hack", Acxiom is one of the companies, as well as Facebook, that Cambridge Analytica used as a data source. Many of these companies are also involved in or linked to the use of data for political purposes. -
Our complaints show that these companies fail to comply with data protection law and in some cases seem to work under the assumption that derived, inferred and predicted data and behavioural or demographic segments do not count as
personal data, even if they are linked to unique identifiers or linked to or used to target individuals. We noted with concern that the final report of the Committee on Disinformation and 'fake news' stated that
"'inferred data' is not protected" under GDPR. While we very much agree with your recommendation to extend privacy laws to close existing gaps in this regard, we respectfully disagree with your conclusion that 'inferred data' is currently not
protected. 'Inferred data' does not always fall under the definition of personal data, yet inferences that may be linked to identifiable individuals do constitute personal data. A new aspect of GDPR is an explicit definition
of profiling in Article 4(4): "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that
natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements". Data brokers, ad tech companies, and credit referencing agencies amass
vast amounts of data from different sources (offline and online) in order to profile individuals, derive, and infer more data about them and place individuals into categories and segments. The law is clear that there must be transparency and limits,
including relating to requirements such as legal basis, fairness and purpose limitation. However, our investigation shows that many companies fail to comply with data protection requirements with far reaching consequences for people's rights.
The pervasive nature of data brokers is demonstrated by an example related to another of the Committee's inquiries, where you have looked at the way reality TV shows have used targeted advertising on Facebook. In this regard it is
important to consider how advertising on television is becoming increasingly targeted, and the role that data brokers play as data sources .
We urge the Committee to look into the profiling practices used by commercial data brokers (including those that also operate as Credit Reference Agencies) and the role this industry plays in the use of personal data for political
purposes and beyond . We look forward to hearing from you in relation to this request. Should you require any further information or have any questions please do let us know. Yours faithfully,
Privacy International
Court Judgement allows the government to continue spying on us
31st July 2019
Thanks to Jon. See article from libertyhumanrights.org.uk
See also article from theregister.co.uk
See full judgement [pdf] from judiciary.uk
Liberty writes: In response to today's judgment in the People vs the Snooper's Charter case, Megan Goulding, Liberty lawyer, said: This disappointing judgment allows the government to continue to spy on every
one of us, violating our rights to privacy and free expression. We will challenge this judgment in the courts, and keep fighting for a targeted surveillance regime that respects our rights. These bulk surveillance powers allow the
state to hoover up the messages, calls and web history of hordes of ordinary people who are not suspected of any wrong-doing. The Court recognised the seriousness of MI5's unlawful handling of our data, which only emerged as a
result of this litigation. The security services have shown that they cannot be trusted to keep our data safe and respect our rights.
Porn sites are tracking and snooping on users, and for some, their browsing may be classified as contrary to their public life
31st July 2019
19th July 2019. See study [pdf] from arxiv.org
Elena Maris of Microsoft Research, Timothy Libert of Carnegie Mellon University, and Jennifer Henrichsen of the University of Pennsylvania have penned a study examining tracking technologies from the likes of Google and Facebook that are incorporated into the
world's porn websites. They write: This paper explores tracking and privacy risks on pornography websites. Our analysis of 22,484 pornography websites indicated that 93% leak user data to a third party. Tracking on
these sites is highly concentrated by a handful of major companies, which we identify [Google and Facebook]. Our content analysis of the sample's domains indicated 44.97% of them expose or suggest a specific gender/sexual identity
or interest likely to be linked to the user. We identify three core implications of the quantitative results:
1) the unique/elevated risks of porn data leakage versus other types of data, 2) the particular risks/impact for vulnerable populations, and 3) the complications of
providing consent for porn site users and the need for affirmative consent in these online sexual interactions
The authors describe the problem: One evening, Jack decides to view porn on his laptop. He enables incognito mode in his browser, assuming his actions are now private. He pulls up a site and scrolls past a
small link to a privacy policy. Assuming a site with a privacy policy will protect his personal information, Jack clicks on a video. What Jack does not know is that incognito mode only ensures his browsing history is not stored on his computer. The sites
he visits, as well as any third-party trackers, may observe and record his online actions. These third-parties may even infer Jack's sexual interests from the URLs of the sites he accesses. They might also use what they have decided about these interests
for marketing or building a consumer profile. They may even sell the data. Jack has no idea these third-party data transfers are occurring as he browses videos.
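The mechanics behind that scenario are simple. The sketch below, with a made-up endpoint and payload, shows how a third-party script embedded in the page can phone home with the page address that, as the study notes, often gives away gender or sexual interests:

```typescript
// Illustrative sketch of a third-party tracker script embedded in a porn page.
// The endpoint and payload shape are invented for illustration.

function reportToTracker(): void {
  const payload = {
    page: document.location.href,   // often contains the video title or category
    referrer: document.referrer,
    screen: `${screen.width}x${screen.height}`, // extra bits that help fingerprint the browser
  };
  // Incognito mode does not stop this request; it only avoids writing local history.
  navigator.sendBeacon("https://tracker.example/collect", JSON.stringify(payload));
}

reportToTracker();
```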
The authors are a bit PC and seem obsessed with trying to relate cookie
consent with sexual consent, but finally conclude: Through our results and connections to past porn site privacy and security breaches and controversies, we demonstrate that the singularity of porn data and the
characteristics of typical porn websites' lax security measures mean this leakiness poses a unique and elevated threat. We have argued everyone is at risk when such data is accessible without users' consent, and thus can potentially be leveraged against
them by malicious agents acting on moralistic claims of normative gender or sexuality. These risks are heightened for vulnerable populations whose porn usage might be classified as non-normative or contrary to their public life.
The authors seem to think the porn sites are somehow ethical and should be doing the 'right' thing. But in reality they are just trying to make money like everyone else and, as they say, if the product is free then your data is the payment. But
as the report points out, that price may prove a little higher than expected.
Update: An unconvincing denial from Google
20th July 2019. See
article from avn.com
AVN notes that Google responded to the claims in a rather obtuse way. Google on Thursday attempted to deny the study's findings, as quoted by The Daily Mail newspaper. We don't allow Google Ads on websites with adult
content and we prohibit personalized advertising and advertising profiles based on a user's sexual interests or related activities online, the company said. Additionally, tags for our ad services are never allowed to transmit personally identifiable
information.
The study, however, did not allege that Google had placed actual advertisements from its GoogleAds network on porn sites, and in its elliptical statement, Google did not specifically deny that its tracking code is
embedded on thousands of adult sites. In related news Google has also announced changes
to incognito mode on its Chrome browser to make it just a little more incognito. Chrome's Incognito Mode is based on the principle that you should have the choice to browse the web privately. At the end of July,
Chrome will remedy a loophole that has allowed sites to detect people who are browsing in Incognito Mode. People choose to browse the web privately for many reasons. Some wish to protect their privacy on shared or borrowed
devices, or to exclude certain activities from their browsing histories. In situations such as political oppression or domestic abuse, people may have important safety reasons for concealing their web activity and their use of private browsing features.
We want you to be able to access the web privately, with the assurance that your choice to do so is private as well.
Google also noted a useful bit of info on evading article count restrictions imposed by some
publishers with metered access policies: Today, some sites use an unintended loophole to detect when people are browsing in Incognito Mode. Chrome's FileSystem API is disabled in Incognito Mode to avoid leaving traces
of activity on someone's device. Sites can check for the availability of the FileSystem API and, if they receive an error message, determine that a private session is occurring and give the user a different [more restricted] experience.
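That loophole, roughly as sites used it before Chrome 76, looks like the sketch below. The FileSystem API is non-standard and Chrome-only, so this is an illustration of the widely reported technique rather than anything from Google's announcement:

```typescript
// Pre-Chrome 76 incognito detection: the Chrome-only FileSystem API was disabled
// in Incognito, so an error from it was read as "this visitor is browsing privately".

const requestFs = (window as any).webkitRequestFileSystem;

if (typeof requestFs === "function") {
  requestFs(
    (window as any).TEMPORARY, // 0
    1,                         // ask for a single byte of temporary storage
    () => {
      // Success: a normal Chrome session.
    },
    () => {
      // Error: almost certainly Incognito -- this is where metered paywalls
      // blocked the article or demanded a login.
    },
  );
}
```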
With the release of Chrome 76 scheduled for July 30, the behavior of the FileSystem API will be modified to remedy this method of Incognito Mode detection. The change will affect sites that use the FileSystem
API to intercept Incognito Mode sessions and require people to log in or switch to normal browsing mode, on the assumption that these individuals are attempting to circumvent metered paywalls. Unlike hard paywalls or registration
walls, which require people to log in to view any content, meters offer a number of free articles before you must log in. This model is inherently porous, as it relies on a site's ability to track the number of free articles someone has viewed, typically
using cookies. Private browsing modes are one of several tactics people use to manage their cookies and thereby reset the meter count.
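For reference, a cookie-based meter of the kind described is about this simple; the cookie name and the free-article limit below are illustrative:

```typescript
// A minimal sketch of a cookie-based article meter.

const FREE_ARTICLES = 5;

function readMeter(): number {
  const match = document.cookie.match(/(?:^|; )articles_read=(\d+)/);
  return match ? parseInt(match[1], 10) : 0;
}

// Returns false once the reader should see the "please log in" wall.
function onArticleView(): boolean {
  const read = readMeter() + 1;
  document.cookie = `articles_read=${read}; path=/; max-age=${60 * 60 * 24 * 30}`;
  return read <= FREE_ARTICLES;
}

// Clearing cookies -- or opening a private window, which starts with an empty
// cookie jar -- resets the count, which is exactly why some sites tried to
// detect Incognito in the first place.
onArticleView();
```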
Of course it is probably a bit easier to find an addon that lets you block or delete the cookies for specific websites, or else to try just turning JavaScript off.
Update: More incognito
31st July 2019. See article from
venturebeat.com
And as promised, Google Chrome has been updated to make incognito mode a little more incognito. Chrome 76, which was released today, has put a stop to the common ways in which websites can work out that users are surfing the web incognito and then
ban them from accessing content.
29th July 2019
If Facebook can 'filter' or 'backup' your 'encrypted' communications then this proves that encryption is compromised, as does continued operation in any country that demands backdoors. See article from forbes.com
24th July 2019
A technical article explaining how Microsoft's internet browser Edge sends all your page URLs to Microsoft in the name of blocking phishing websites. See article from bleepingcomputer.com
24th July 2019
'The status quo is exceptionally dangerous, it is unacceptable and only getting worse. It's time for the United States to stop debating whether to address it and start talking about how to address it.' See article from apnews.com
22nd July 2019
When local councils use data tools to classify us, what price freedom? By Kenan Malik. See article from theguardian.com
20th July 2019
The EFF publishes a technical discussion on how the authorities are circumventing encryption used by messaging services. See article from eff.org
12th July 2019
Belgian researchers reveal recordings from Google's home assistant that are clearly not activated by an 'OK Google'. See article from theregister.co.uk