ASA disgracefully demands that advertisers snoop on people's browsing habits (with dubiously obtained, if any, consent) so as to avoid serving some adverts to children

28th July 2021

See article from asa.org.uk
See report [pdf] from asa.org.uk
ASA is demanding that advertisers snoop on people's browsing habits to build up profiles of users and so determine their age and suitability for advertising of gambling, alcohol and frowned upon food products. ASA was particularly considering advertising on websites that appeal to all ages, where the subject matter of the website is not enough context to determine the age of users. And good luck to the snoopers if they think they can infer that Facebook and Twitter users are over 13 and that Pornhub users are all adults. ASA explained:

We have published the findings of our latest proactive monitoring sweep, making world-leading use of Avatar technology to assess the distribution of ads for
alcohol, gambling, and high fat, salt or sugar (HFSS) products in websites and YouTube channels attracting a mixed-age audience, predominantly composed of adults. As a result of our findings, we are calling on advertisers to make
better use of audience and media targeting tools to help minimise children's exposure to age-restricted ads in mixed-age sites. The monitoring underpinning this project was focused on:

Mixed-age online media, consisting of non-logged in websites and YouTube channels, with adults comprising 75%-90% of the audience

Dynamically served ads for alcohol, gambling and HFSS products; the underlying technology used to serve these ads enables advertisers to target subsets of the sites' audience based on data known or inferred about them, e.g. their age, location, online browsing interests etc.
We used Avatars for the purpose of identifying trends in how these ads are being delivered to adult, child and/or age-unknown audience groups. The Avatars are constructed to reflect the online browsing profile of these age groups, but
their automated actions -- visiting 250 web pages on both desktop and mobile devices, twice a day -- are obviously not indicative of real world online behaviours. This explains why our six uniquely age-categorised Avatars received 27,395 ads, published on 250 sites, over a three week monitoring period. These high figures clearly do not reflect real-world exposure levels to advertising, but the data does give us a good basis for assessing whether
age-restricted ads are being targeted away from children in online media attracting a heavily weighted (75%+) adult audience. We found that:

Gambling ads were served in broadly similar numbers to Child and Adult Avatars, with no significant skew towards the adult profiles. The Neutral Avatar (which has no browsing history to provide indicative age information) was served noticeably fewer Gambling ads in mixed-age media

HFSS ads were served in broadly similar numbers to Child and Adult Avatars, with no significant skew towards the adult profiles, and notably higher numbers of ads were served to the Neutral Avatar

Alcohol ads were not served to any Avatars
Advertisers are not allowed to serve age-restricted ads in children's media (sites commissioned for children, or where children make up 25% or more of the audience), but these ads are allowed in mixed-age media attracting a heavily
weighted (75%+) adult audience, so long as they stick to strict rules to ensure the creative content of the ads doesn't appeal to children or exploit their inexperience. We, however, believe it is a legitimate regulatory objective to seek to minimise children's exposure to age-restricted ads generally and therefore want to see advertisers of these products use available tools to more effectively target their ads away from children, even where the vast majority of an audience is over 18.
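The sweep's findings amount to comparing impression counts across avatar age profiles and product categories. A minimal sketch of that kind of tally, with hypothetical profiles, categories and figures rather than anything taken from the ASA report, might look like:

```python
from collections import Counter

# Hypothetical impression log: each served ad is recorded as a
# (avatar_profile, product_category) pair. None of this data comes
# from the ASA report; it only illustrates the shape of the analysis.
impressions = [
    ("child", "gambling"), ("adult", "gambling"), ("neutral", "gambling"),
    ("child", "hfss"), ("adult", "hfss"),
    ("neutral", "hfss"), ("neutral", "hfss"),
]

def tally_by_profile(impressions):
    """Count served ads per (avatar profile, product category) pair."""
    return Counter(impressions)

counts = tally_by_profile(impressions)
# A skew check then just compares e.g. counts[("child", "gambling")]
# against counts[("adult", "gambling")] for each category.
```

A Counter returns 0 for any profile/category pair never seen, which conveniently covers cases like the report's "Alcohol ads were not served to any Avatars".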
And one can guess at her political allegiance as she is currently a boss of NewsGuard, which famously labelled the Daily Mail website as 'failing to maintain basic standards of accuracy and accountability'

28th July 2021

See article from bbc.co.uk
See Don't Trust the Daily Mail from theguardian.com
Britain's state internet censor Ofcom has announced that Anna-Sophie Harling will be its principal internet censor dealing with censorship under the Government's upcoming Online Safety Bill. Ofcom will be able to fine tech firms that fail to remove 'offending' content up to 10% of their global revenue. Harling will be part of a team reporting to Mark Bunting, director of online policy. Harling is currently managing director for Europe at NewsGuard, which audits online publishers for 'accuracy'. And one can guess at her political allegiance as NewsGuard famously labelled the Daily Mail website as 'failing to maintain basic standards of accuracy and accountability'.
Google has announced legal action against the German government for demanding that social media companies hand over personal details of users accused of hate speech

28th July 2021

See article from computing.co.uk
Google has announced that it was taking legal action over Germany's expanded hate-speech legislation which took effect in April this year. In a blog post Google said that a new provision of Germany's Network Enforcement Act ('NetzDG') violates the
right to privacy of its users. The provision requires social media platforms to share with law enforcement personal details of those sharing content suspected to be hateful. Germany's NetzDG law came into effect in early 2018, making social
networks such as Facebook, YouTube and Twitter responsible for monitoring and removing hate content from their platforms. It also required digital platforms to publish regular reports on their compliance. In May 2021, the country's parliament
passed legislation to include new provisions in the law to broaden its application, including sharing details of those judged to have shared hate-filled content with the Federal police, a move that was criticised as being heavy-handed by opposition
parties and the European Commission, as well as by social media companies themselves. Sabine Frank, YouTube's regional head of public policy, wrote in the blog post: In our opinion, this massive interference
with the rights of our users is not only in conflict with data protection, but also with the German constitution and European law. Google believes that such massive sharing of users' personal data with law enforcement is only
possible after a detailed examination by a court and a judicial confirmation. For us, the protection of our users' data is a central concern. We have therefore decided to have the relevant obligations of the legislative package
examined by the Cologne Administrative Court as part of a declaratory action.
Facebook and Instagram announce far reaching changes ready for the start of the UK's Age Appropriate Design Code

27th July 2021

See article from about.instagram.com
See article from about.fb.com
The data protection censors at the Information Commissioner's Office have got into the internet censorship game with a new regime that starts on the 2nd September 2021. Its Age Appropriate Design Code very much requires an age gated internet in the name of data protection for children. The code itself is not law but the ICO claims that it is an interpretation of the EU's GDPR (General Data Protection Regulation) law and so carries legal weight. The code requires that internet users hand over their personal data to any website that asks, to verify that they are of sufficient age to hand over their personal data. All in the name of preventing children from handing over their personal data. And the most immediate impact is that social media websites need to ensure that their users are over the age of 13 before the internet companies can make hay with their personal data. And in preparation for the new rules Facebook and Instagram have posted substantial blogs laying out new policies on age verification. Facebook summarised:

Facebook and Instagram weren't designed for people under the age of 13, so we're creating new ways to stop those who are underage from signing up. We're
developing AI to find and remove underaged accounts, and new solutions to verify people's ages. We're also building new experiences designed specifically for those under 13. See full
article from about.fb.com
Instagram added: Creating an experience on Instagram that's safe and private
for young people, but also fun comes with competing challenges. We want them to easily make new friends and keep up with their family, but we don't want them to deal with unwanted DMs or comments from strangers. We think private accounts are the right
choice for young people, but we recognize some young creators might want to have public accounts to build a following. We want to strike the right balance of giving young people all the things they love about Instagram while also
keeping them safe. That's why we're announcing changes we'll make today, including:

Defaulting young people into private accounts.

Making it harder for potentially suspicious accounts to find young people.

Limiting the options advertisers have to reach young people with ads.
See full article from about.instagram.com
27th July 2021

Europol and a New York DA call for an end to internet users' safety as enabled by encrypted communications

See article from politico.eu
The Law Commission proposes law to censor internet speech that is claimed to be 'harmful'

21st July 2021

See press release from lawcom.gov.uk
See Law Commission report [pdf] from s3-eu-west-2.amazonaws.com
The Law Commission has published recommendations to address the harms arising from online abuse. The recommendations include a coherent set of communications offences to more effectively target harmful communications while increasing protection for
freedom of expression. More than 70% of UK adults have a social media profile and internet users spend over four hours online each day on average. Whilst the online world offers important opportunities to share ideas and engage
with one another, it has also increased the scope for abuse and harm. A report by the Alan Turing Institute estimates that approximately one third of people in the UK have been exposed to online abuse. The recommendations, which have
been laid in Parliament, would reform the "communications offences" found in section 1 of the Malicious Communications Act 1988 ("MCA 1988") and section 127 of the Communications Act 2003 ("CA 2003"). These offences do not
provide consistent protection from harm and in some instances disproportionately interfere with freedom of expression. The reforms would address the harms arising from online abuse by modernising the existing communications
offences, ensuring that the law is clearer and that it effectively targets serious harm and criminality. The recommendations aim to do this in a proportionate way in order to protect freedom of expression. They also seek to "future-proof" the
law in this area as much as possible by not confining the offences to any particular mode or type of communication. The need for reform The laws that govern online abusive behaviour are not working
as well as they should. The existing offences are ineffective at criminalising genuinely harmful behaviour and in some instances disproportionately interfere with freedom of expression. Reliance on vague terms like "grossly
offensive" and "indecent" sets the threshold for criminality too low and potentially criminalises some forms of free expression that ought to be protected. For example, consensual sexting between adults could be "indecent", but
is not worthy of criminalisation. Other behaviours, such as taking part in pile-on harassment, which can be genuinely harmful and distressing, are not adequately criminalised. Additionally, the law does not effectively deal with
behaviours such as cyberflashing and encouraging serious self-harm. The result is that the law as it currently stands over-criminalises in some situations and under-criminalises in others. This is what the Commission's
recommendations aim to correct. Recommendations in detail: The harm-based offence The Commission is recommending a new offence based on likely psychological harm. This will shift the focus away from
the content of a communication (and whether it is indecent or grossly offensive) toward its potentially significant harmful effects. The recommended new harm-based offence would criminalise behaviour if:
The defendant sends or posts a communication that is likely to cause harm to a likely audience

In sending or posting the communication, the defendant intends to cause harm to a likely audience

The defendant sends or posts the communication without reasonable excuse.
Within the offence, harm refers to serious distress. This threshold is one well-known to the criminal law, including in offences in the Protection from Harassment Act 1997. Reasonable excuse would include whether the communication was
or was meant as a contribution to a matter of public interest. Media articles would be exempt from the offence. This new offence could also capture pile-on harassment -- when a number of different individuals send harassing
communications to a victim. The fact that the offence is context-specific means it could be applied where a person deliberately joins a pile-on intending to cause harm. Recommendations in detail: new offences
To complement the harm-based offence, the Law Commission has made recommendations to ensure the law is clearer and protects against a variety of abusive online behaviour.
Cyberflashing: The Sexual Offences Act 2003 should be amended to include the sending of images or video recordings of genitals, for example, "dick pics" sent via AirDrop.
To recognise the violation of a victim's sexual autonomy without their consent, the offence would require either that the defendant intends to cause alarm, distress or humiliation, or if the defendant is acting for a sexual
purpose, the defendant is reckless as to whether the victim is caused alarm, distress or humiliation.
Encouragement or glorification of serious self-harm: An offence to target intentional encouragement or assistance of self-harm at a high threshold (equivalent to grievous bodily harm).
Sending flashing images with intent to induce a seizure: A specific offence for sending flashing images to people with epilepsy with the intention of inducing seizures.

Knowingly false communications: A defendant would be liable if they knowingly send or post a communication that they know to be false and they intend to cause non-trivial emotional, psychological, or physical harm to the likely audience, without a reasonable excuse.

Threatening communications: We recommend a specific offence targeting communications that contain threats of serious harm.
It would be an offence where the defendant intends the victim to fear the threat will be carried out or the defendant is reckless as to whether the victim fears that the threat will be carried out. The
offence defines "serious harm" as including serious injury (equivalent to grievous bodily harm in the Offences Against the Person Act 1861), rape and serious financial harm.
The reforms, if enacted, involve a shift away from prohibited categories of communication (eg "grossly offensive") to focus on the harmful consequences of particular communications. Our aim is to ensure harmful
communications are appropriately addressed while providing robust protection for freedom of expression.
Comments about the UK Government's new Internet Censorship Bill

21st July 2021
Offsite Comment: The Online Safety Bill won't solve online abuse

2nd July 2021. See article by Heather Burns

The Online Safety Bill contains threats to freedom of expression, privacy, and commerce which will do nothing to solve online abuse, deal with social media platforms, or make the web a better place to be.
Update: House of Lords Committee considers that social media companies are not the best 'arbiters of truth'

21st July 2021. See article from dailymail.co.uk
See report from committees.parliament.uk

A House of Lords committee has warned that the government's plans for new online censorship
laws will diminish freedom of speech by making Facebook and Google the arbiters of truth. The influential Lords Communications and Digital Committee cautioned that legitimate debate is at risk of being stifled by the way major platforms filter out
misinformation. Committee chairman Lord Gilbert said:

The benefits of freedom of expression online mustn't be curtailed by companies such as Facebook and Google, too often guided by their commercial and political interests rather than the rights and wellbeing of their users.

The report said:

We are concerned that platforms' approaches to misinformation have stifled legitimate debate, including between experts.
Platforms should not seek to be arbiters of truth. Posts should only be removed in exceptional circumstances.
The peers said the government should switch to enforcing existing laws more robustly, and criminalising
any serious harms that are not already illegal.
The Guardian publishes an extensive report on worldwide internet snooping by the Israeli company NSO

17th July 2021

See article from theguardian.com
Human rights activists, journalists and lawyers across the world have been targeted by authoritarian governments using hacking software sold by the Israeli surveillance company NSO Group, according to an investigation into a massive data leak. The
investigation by the Guardian and 16 other media organisations suggests widespread and continuing abuse of NSO's hacking spyware, Pegasus, which the company insists is only intended for use against criminals and terrorists. Pegasus is malware that
infects iPhones and Android devices to enable operators of the tool to extract messages, photos and emails, record calls and secretly activate microphones. The leak contains a list of more than 50,000 phone numbers that, it is believed, have been
identified as those of people of interest by clients of NSO since 2016.

See full article from theguardian.com
European age verification consortium starts meetings

15th July 2021

See article from euconsent.eu
euConsent is a consortium of twelve pro-censorship academic institutions, campaigners and technology providers championing internet age verification in the name of child protection. The consortium is being funded by the EU Commission to design, deliver and pilot a new Europe-wide age/ID verification system and to ensure that younger children have parental consent before they share personal data. The consortium doesn't seem to have much interest in keeping adults safe from their ID and porn viewing data being used by scammers, spammers, thieves, commercial exploiters and of course state authorities. Pro-censorship campaigner and chair of the consortium John Carr has now announced that the group has had its first meeting. He noted:
An Advisory Board has been established and I agreed to be its Chair. The Board comprises representatives of a wide range of stakeholders: European regulatory authorities, children's rights organizations, tech companies
and politicians. We held our inaugural meeting last Friday. [notice no mention of porn viewers or adult internet users]. The Board will hold the project team accountable, helping them as they establish the standards. The
Board's collective and individual insights will contribute to a system that is workable with existing technology and facilitates the creation and implementation of effective regulations. Any new technologies which may emerge will know what they must be
able to do if they are to be recognised as an acceptable tool.
Google opts out of displaying paid-for snippets from French newspapers only to be fined for 'market abuse' in not being fair to those newspapers

12th July 2021

See article from bbc.co.uk
The EU red tape generation machine has become so entangled that most of the EU's latest internet laws are simply impossible to comply with. The latest example is that search engines are nominally forced to negotiate with newspapers to agree a charge to pay for links to newspaper websites. However it now appears that French law means that newspapers can ask any price they like and the French authorities will fine search engines that don't agree to the price. Google has been hit with a euro 500m
(£427m) fine by France's competition authority for failing to negotiate in good faith with news organisations over the use of their content. In 2019, France became the first EU country to transpose the EU's disgraceful new Digital Copyright Directive
into law. The law governed so-called neighbouring rights which are designed to compensate publishers and news agencies for the use of their material. As a result, Google decided it would not show content from EU publishers in France, on services
like search and news, unless publishers agreed to let them do so free of charge. News organisations felt this was an abuse of Google's market power, and two organisations representing press publishers and Agence France-Presse (AFP) complained to
the competition authority. Google told the BBC:

We are very disappointed with this decision - we have acted in good faith throughout the entire process.

The new ruling means that within the next two months Google must come up with proposals explaining how it will recompense companies for the use of their news. Should this fail to happen the company could face additional fines of euro 900,000 per day.
But would you trust money-seeking age verification companies not to use facial identification to record who is watching porn anyway?

10th July 2021

See article from theguardian.com
See also CC article from alecmuffett.com
Our Big Brother government is seeking ways for all website users to be identified and tracked in the name of child protection. But for all the up and coming legislation that demands age verification, there aren't actually any methods yet that satisfy both strict age verification and protect people's personal data from hackers, thieves, scammers, spammers, money grabbing age verification companies, the government, and the provably data abusing social media companies. The Observer has reported on a face scanning scheme whereby the age verification company claims not to look up your identity via facial recognition and instead just tries to count the wrinkles on your photo. See article from theguardian.com. Security expert Alec Muffett has also posted some interesting and relevant background provided to the Observer that somehow did not make the cut. See article from alecmuffett.com
The EU publishes guidance on its impossible to implement copyright directive, requiring automatic blocking of copyrighted material for bad reasons whilst allowing it for good reasons

10th July 2021

See article from reclaimthenet.org
See EU copyright guidance from eur-lex.europa.eu
A while ago the EU passed a copyright directive demanding that internet businesses, websites and social media automatically block the upload of unauthorised copyrighted material whilst simultaneously demanding that this should not impinge on the free
speech use of material in terms of memes or comment. Of course the subtlety of distinguishing between differing usages is way beyond the AI capabilities of most EU businesses and so is more or less impossible to implement. Now EU states are getting
confused about how to implement the directive in their national law. And even though June 7 was the initial deadline for member countries to implement it, it is far from being settled from the point of view of the harm it can cause to free online expression in the bloc. The tortuous EU legislative process reached a milestone on June 4, when the European Commission revealed guidelines to its 27 members on how to implement Article 17 while protecting online users' rights. And while the document, which is not legally binding, states that filtering should only apply to clear-cut cases of illegal content, it also ushers in what advocates see as a massive loophole. It refers to giving rights holders the ability to 'earmark' content, which could end up in platforms censoring it, including in cases of fair use. Content that, according to the European Commission, may be earmarked as 'economically viable' is a new term in the realm of copyright enforcement, and there are fears that it may
be little more than a synonym for censorship. Some EU member states have not given up on their legal challenge to the Directive, with Poland going to the Court of Justice of the European Union and naming the European Parliament and the Council of the
European Union as defendants in a case seeking to establish if Article 17 is aligned with the bloc's Charter of Fundamental Rights. Poland is seeking partial or full annulment of the article. The ruling is expected on July 15.
10th July 2021

Will Cathcart likens governments' stance to insisting a 1984 telescreen be installed in every living room

See article from theguardian.com
Indian child protection commission calls for India to ban Twitter until it removes adult content

5th July 2021

See article from xbiz.com
India's National Commission for Protection of Child Rights (NCPCR) is moving forward with proceedings to ban the access of children to Twitter in the country until the platform completely removes all pornographic material. NDTV reports that on May 29,
a letter was issued to the secretary of the Ministry of Electronics and Information and Technology to initiate a ban on the access of children on Twitter on an immediate basis till the time Twitter makes its platform safe for children by ensuring
complete removal of child sexual abuse material and pornographic material and reporting of cybercrime cases to the authorities in India. It is unclear how NCPCR intends to effect age verification for the 1.3 billion Indians, mostly adults, that
would be affected by a potential block. According to NCPCR chief Priyank Kanoongo, Twitter was found to have given false and misleading responses during the enquiry conducted by NCPCR for the presence of pornographic material, an offense under the
POCSO Act. The POCSO Act is a 2012 law to provide for the protection of children from the offences of sexual assault, sexual harassment and pornography.
German authorities are trying to block notable porn sites

2nd July 2021

See German language article from spiegel.de
The German media censor, the Commission for the Protection of Minors in the Media, wants to force the hosting provider of the porn website xHamster to lock out German users. A year ago, the State Agency for Media in North Rhine-Westphalia began to issue porn portals such as PornHub with an ultimatum: either they establish age verification systems or they face network blocking. Several proceedings currently pending at the Dusseldorf Administrative Court are being contested by porn companies who argue that they label their websites according to an international standard designed to make it easy for parents to block offers on their children's devices. However, German legislation takes the opposite approach: portals that are harmful to minors should only be accessible if the users are proved to be of legal age. Tobias Schmid, the director of the State Agency for Media in North Rhine-Westphalia, said:

In the end it is very simple: Anyone who wants to earn money with pornography in the German market has to adhere to German laws.

The agency has now been able to determine the hosting provider for xHamster. This is not trivial, as many porn portals disguise their IT infrastructure with the help of cloud services. The media censor has now written an official letter to the web host.
Florida judge temporarily blocks law preventing social media companies from cancelling right leaning views

2nd July 2021

See article from wptv.com
Florida's social media censoring bill has been temporarily blocked by a federal judge. The judge ruled that the law was an overreach, saying it compels providers to host speech that violates their standards. The law would have let the state fine social media platforms if they censor or ban politicians or political candidates, and would give regular users the ability to sue a platform if they are removed without explanation. The law would have gone into effect July 1. Supporters of the law, including Representative John Snyder, said it was an effort to keep big tech companies from picking and choosing who gets a voice on their platforms. If the law is scrapped, Snyder said he would support trying again to get a similar law on the books in future sessions.
Thai authorities propose an £11,400 fine for internet users who post a picture of an alcoholic drink

2nd July 2021

See article from aseannow.com
Thailand's The Standard news website has reported that it could soon be possible to be fined 500,000 baht (£11,400) just for posting a picture of a glass of beer or wine. And 60-80% of that fine could go into the pocket of the police or authority that
brought the prosecution. Up to now private individuals can be fined 50,000 baht (£1150) for promoting or advertising alcohol. Now a draft amendment from the authorities is proposing this is increased to half a million baht. Commercial entities
are liable to larger fines, currently at 500,000 baht, but the proposals would see this rise to a full one million baht. There is also a proposal to stop a kind of loophole that allows big firms to promote their products by referring to soda rather
than beer. For example, the beer maker Singha advertises its bottled water brand with a logo that is also used for its beer. In future just using the soda/water logo could be illegal and subject to the alcohol fines by association. The new proposals are currently out for public consultation until 9th July, although it is a little off-putting that ID cards are required from those wishing to comment.