The UK Parliament's Joint Committee on Human Rights has reported serious grounds for concern about the nature of the "consent" people provide when handing over an extraordinary range of information about themselves to be used for commercial gain by private companies:
- Privacy policies are too complicated for the vast majority of people to understand: while individuals may understand they are consenting to data collection from a given site in exchange for "free" access to content,
they may not understand that information is being compiled, without their knowledge, across sites to create a profile. The Committee heard alarming evidence about eye-tracking software being used to make assumptions about people's sexual orientation,
whether they have a mental illness, are drunk or have taken drugs: all then added to their profile.
- Too often the use of a service or website is conditional on consent being given, raising questions about whether that consent is freely given.
- People cannot find out what they have consented to: it is difficult, if not nearly impossible, for people - even tech experts - to find out whom their data has been shared with, to stop it being
shared or to delete inaccurate information about themselves.
- The consent model relies on individuals knowing about the risks associated with using web-based services, when the system should provide adequate protection from those risks by default.
- It is completely inappropriate to use consent when processing children's data: children aged 13 and older are, under the current legal framework, considered old enough to consent to
their data being used, even though many adults struggle to understand what they are consenting to.
Key conclusions and recommendations

The Committee points out that there is a real risk of discrimination against some groups and individuals through the way this data is used: it heard deeply troubling evidence about some companies using personal data to ensure that only people of a certain age or race, for example, see a particular job opportunity or housing advertisement.
There are also long-established concerns about the use of such data to discriminate in the provision of insurance or credit products. Unlike traditional print advertising, where such blatant discrimination would be obvious and potentially illegal, personalisation of content means people have no way of knowing how what they see online compares to what anyone else sees.
Short of whistleblowers or work by investigative journalists, there currently appears to be no mechanism for uncovering such privacy breaches or discrimination in what amounts to an online "Wild West".
The Committee calls on the Government to ensure there is robust regulation over how our data can be collected
and used and it calls for better enforcement of that regulation.
The Committee says:
- The "consent model is broken" and should not be used as a blanket basis for processing. It is impossible for people to know what they are consenting to when making a non-negotiable, take it-or-leave-it
"choice" about joining services like Facebook, Snapchat and YouTube based on lengthy, complex T&Cs, subject to future changes to terms.
- This model puts too much onus on the individual: the responsibility for knowing about the risks of using web-based services cannot rest on the individual alone. The Government should strengthen regulation to guarantee safe passage on the internet.
- It is completely inadequate to use consent when it comes to processing children's data. If adults struggle to understand complex consent agreements, how do we expect our children to give informed consent? The Committee says the digital age of consent, currently set at 13 years old, should be revisited.
- The Government should be regulating to keep us safe online in the same way as they do in the real world - not by expecting us to become technical experts who can judge whether our
data is being used appropriately, but by having strictly enforced standards that protect our right to privacy and freedom from discrimination.
- It should be made much simpler for individuals to see what data has been
shared about them, and with whom, and to prevent some or all of their data being shared.
- The Government should look at creating a single online registry that would allow people to see, in real time, all the companies
that hold personal data on them, and what data they hold.
The report is worth a read and contains many important points criticising the consent model as dictated by the GDPR and enforced by the ICO. Here are a few passages from the report's summary:
The evidence we heard during this inquiry,
however, has convinced us that the consent model is broken. The information providing the details of what we are consenting to is too complicated for the vast majority of people to understand. Far too often, the use of a service or website is conditional
on consent being given: the choice is between full consent or not being able to use the website or service. This raises questions over how meaningful this consent can ever really be.
Whilst most of us are probably unaware of who
we have consented to share our information with and what we have agreed that they can do with it, this is undoubtedly doubly true for children. The law allows children aged 13 and over to give their own consent. If adults struggle to understand complex
consent agreements, how do we expect our children to give informed consent? Parents have no say over, or knowledge of, what data their children are sharing and with whom. There is no effective mechanism for a company to determine the age of a person providing
consent. In reality a child of any age can click a consent button.
The bogus reliance on consent is in clear conflict with our right to privacy. The consent model relies on us, as individuals, to understand, take decisions, and be
responsible for how our data is used. But we heard that it is difficult, if not nearly impossible, for people to find out whom their data has been shared with, to stop it being shared or to delete inaccurate information about themselves. Even when
consent is given, all too often the limit of that consent is not respected. We believe companies must make it much easier for us to understand how our data is used and shared. They must make it easier for us to opt out of some or all of our data being
used. More fundamentally, however, the onus should not be on us to ensure our data is used appropriately - the system should be designed so that we are protected without requiring us to understand and to police whether our freedoms are being protected.
As one witness to our inquiry said, when we enter a building we expect it to be safe. We are not expected to examine and understand all the paperwork and then tick a box that lets the companies involved off the hook. It is
the job of the law, the regulatory system and of regulators to ensure that the appropriate standards have been met to keep us from harm and ensure our safe passage. We do not believe the internet should be any different. The Government must ensure that
there is robust regulation over how our data can be collected and used, and that regulation must be stringently enforced.
Internet companies argue that we benefit from our data being collected and shared. It means the content we
see online - from recommended TV shows to product advertisements - is more likely to be relevant to us. But there is a darker side to personalisation. The ability to target advertisements and other content at specific groups of people makes it possible
to ensure that only people of a certain age or race, for example, see a particular job opportunity or housing advertisement. Unlike traditional print advertising, where such blatant discrimination would be obvious, personalisation of content means people
have no way of knowing how what they see online compares to anyone else. Short of a whistle-blower within the company or work by an investigative journalist, there does not currently seem to be a mechanism for uncovering these cases and protecting people
from discrimination.
We also heard how the data being used (often by computer programmes rather than people) to make potentially life-changing decisions about the services and information available to us is not even necessarily
accurate, but based on inferences made from the data they do hold. We were told of one case, for example, where eye-tracking software was being used to make assumptions about people's sexual orientation, whether they have a mental illness, are drunk or
have taken drugs. These inferences may be entirely untrue, but the individual has no way of finding out what judgements have been made about them.
We were left with the impression that the internet, at times, is like the Wild
West, when it comes to the lack of effective regulation and enforcement.
That is why we are deeply frustrated that the Government's recently published Online Harms White Paper explicitly excludes the protection of people's
personal data. The Government is intending to create a new statutory duty of care to make internet companies take more responsibility for the safety of their users, and an independent regulator to enforce it. This could be an ideal vehicle for
requiring companies to take people's right to privacy, and freedom from discrimination, more seriously and we would strongly urge the Government to reconsider its decision to exclude data protection from the scope of their new regulatory framework. In
particular, we consider that the enforcement of data protection rules - including the risks of discrimination through the use of algorithms - should be within scope of this work.