It's not only China and the UK that want to identify internet users; Austria also wants to demand that forum contributors submit their ID before being able to post.
Austria's government has introduced a bill that would require larger social media websites and forums to obtain the identity of their users before they can post comments. Users will have to provide their name and address to websites, but
nicknames are still allowed and the identity data will not be made public.
Punishments for non-complying websites will be up to 500,000 euros, and double that for repeat offences.
It would only affect sites that have more than 100,000 registered users, bring in revenues above 500,000 euros per year, or receive press subsidies larger than 50,000 euros.
There would also be exemptions for retail sites as well as those that don't earn money from either ads or the content itself.
If passed and cleared by the EU, the law would take effect in 2020. The immediate issues noted are that some of the websites most offending the sensitivities of the government are often smaller than the trigger conditions. The law may also step on
the toes of the EU in rules governing which EU state has regulatory control over websites.
The Portman Group is a trade organisation for the UK alcoholic drinks industry. It acts as the industry's censor of drink marketing and packaging. It reports:
A recent complaint about Beavertown Brewery's product, Neck Oil, was not upheld by the Independent Complaints Panel.
The complainant, a member of the public, expressed concern that the product uses bright colours and that the name Neck Oil implies that the product is to be consumed in one, i.e. necked. Furthermore, the complainant said that the colours used on
the packaging are clearly aimed at the younger market and encourage irresponsible consumption.
The Panel firstly considered whether the product has a particular appeal to under 18s. The Panel discussed the colour palette and illustrations on the can design and noted that muted, instead of contrasting, colours had been used and that the
artwork was sophisticated, and adult in nature. The Panel concluded that there was no element of the can that could have a particular appeal to under 18s and accordingly did not uphold the complaint with regards to under 18s.
The Panel considered the company's submission and acknowledged that the phrase neck oil was widely recognised as a colloquial term for beer both within and outside the industry. The Panel noted that neck was used as a noun and did not consider that
its use in this way suggested a down-in-one style of consumption. The Panel concluded that there were no visual or text cues to encourage irresponsible or down-in-one consumption and accordingly did not uphold the complaint with regard to irresponsible consumption.
In the aftermath of the horrific mosque attack in New Zealand, internet companies were interrogated over their efforts to censor the livestream video of Brenton Tarrant's propaganda.
Some of their responses have included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.
Facebook, for example, describes plans for an expanded role for the Global Internet Forum to Counter Terrorism, or GIFCT. The GIFCT is an industry-led self-regulatory effort launched in 2017 by Facebook, Microsoft, Twitter, and YouTube. One of
its flagship projects is a shared database of hashes of files identified by the participating companies to be extreme and egregious terrorist content. The hash database allows participating companies (which include giants like YouTube and one-man
operations like JustPasteIt) to automatically identify when a user is trying to upload content already in the database.
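In outline, such a shared hash database amounts to a lookup at upload time. The sketch below is a minimal, hypothetical illustration using exact SHA-256 file hashes; real systems of the GIFCT kind reportedly use perceptual hashing so that re-encoded copies still match, which plain cryptographic hashes cannot do.

```python
import hashlib

# Toy shared hash database: hex digests of files flagged by participants.
# Assumption: exact-match SHA-256 stands in for the real perceptual hashes.
banned_hashes = set()

def flag_content(data: bytes) -> str:
    """A participating company adds a flagged file's hash to the shared list."""
    digest = hashlib.sha256(data).hexdigest()
    banned_hashes.add(digest)
    return digest

def is_blocked(upload: bytes) -> bool:
    """At upload time, any participant checks the shared list before accepting."""
    return hashlib.sha256(upload).hexdigest() in banned_hashes

flag_content(b"flagged video bytes")
print(is_blocked(b"flagged video bytes"))     # exact copy: True
print(is_blocked(b"re-encoded video bytes"))  # any byte change: False
```

The second check illustrates why 800 separate hashes were needed for one video: an exact hash misses every re-encoded or trimmed variant, so each variant must be flagged individually.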
In Facebook's post-Christchurch updates, the company discloses that it added 800 new hashes to the database, all related to the Christchurch video. It also mentions that the GIFCT is experimenting with sharing URLs systematically rather than just
content hashes -- that is, creating a centralized list of URLs that would facilitate widespread blocking of videos, accounts, and potentially entire websites or forums.
VPNCompare is reporting that internet users in Britain are responding to the upcoming porn censorship regime by investigating the option of getting a VPN so as to work around most age verification requirements without handing over dangerous identity data.
VPNCompare says that the number of UK visitors to its website has increased by 55% since the start date of the censorship scheme was announced. The website also stated that Google searches for VPNs had tripled. Website editor Christopher Seward
told the Independent:
We saw a 55 per cent increase in UK visitors alone compared to the same period the previous day. As the start date for the new regime draws closer, we can expect this number to rise even further and the number of VPN users in the UK is likely to
go through the roof.
The UK Government has completely failed to consider the fact that VPNs can be easily used to get around blocks such as these.
Whilst the immediate assumption is that porn viewers will reach for a VPN to avoid handing over dangerous identity information, there may be another reason to take out a VPN: a lack of choice of appropriate options for age validation.
Three companies run the six biggest adult websites. Mindgeek owns Pornhub, RedTube and YouPorn. Then there is Xhamster, and finally Xvideos and xnxx are connected.
Now Mindgeek has announced that it will partner with Portes Card for age verification, which offers options for identity verification, providing an age-verified mobile phone number, or else buying a voucher in a shop and showing age ID to the shopkeeper (who hopefully does not copy or record it).
Meanwhile Xhamster has announced that it is partnering with 1Account, which accepts a verified mobile phone, credit card, debit card, or UK driving licence. It does not seem to have an option for anonymous verification beyond a phone being age-verified without having to show ID.
Perhaps most interesting is that both of these age verifiers are smartphone-based apps. Perhaps the only option for people without a phone is to get a VPN. I also spotted that most age verification providers that I have looked at seem to be
only interested in UK cards, driving licences or passports. I'd have thought there may be legal issues in not accepting EU equivalents. But foreigners may also be in the situation of not being able to age verify and so need a VPN.
And of course, given that there is no age verification option common to the major porn websites, it may just turn out to be an awful lot simpler just to get a VPN.
The BBFC (on its Age Verification website)...err...no!...:
An assessment and accreditation under the AVC is not a guarantee that the age-verification provider and its solution (including its third party companies) comply with the relevant legislation and standards, or that all data is safe from
malicious or criminal interference.
Accordingly the BBFC shall not be responsible for any losses, damages, liabilities or claims of whatever nature, direct or indirect, suffered by any age-verification provider, pornography services or consumers/ users of age-verification
provider's services or pornography services or any other person as a result of their reliance on the fact that an age-verification provider has been assessed under the scheme and has obtained an Age-verification Certificate or otherwise in
connection with the scheme.
Facebook has banned far-right groups including the British National Party (BNP) and the English Defence League (EDL) from having any presence on the social network. The banned groups, which also include Knights Templar International, Britain
First and the National Front, as well as key members of their leadership, have been removed from both Facebook and Instagram.
Facebook said it uses an extensive process to determine which people or groups it designates as dangerous, using signals such as whether they have used hate speech, and called for or directly carried out acts of violence against others based on
factors such as race, ethnicity or national origin.
This week we have seen David Lammy doubling down on his ludicrous comparison of the European Research Group with the Nazi party, and Chris Key in the Independent calling for UKIP and the newly formed Brexit Party to be banned from television
debates. It is clear that neither Key nor Lammy has a secure understanding of what far right actually means and, quite apart from the distasteful nature of such political opportunism, their strategy only serves to generate the kind of resentment
upon which the far right depends.
Offsite comment: Facebook is calling for Centralized Censorship. That Should Scare You
If we're going to have coherent discussions about the future of our information environment, we--the public, policymakers, the media, website operators--need to understand the technical realities and policy dynamics that shaped the response to
the Christchurch massacre. But some of these responses have also included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.
PlayerUnknown's Battlegrounds is a 2017 South Korean battle royale game by PUBG Corporation.
Nepal Telecommunication Authority has directed all the country's ISPs to ban PlayerUnknown's Battlegrounds, commonly known as PUBG, a popular multiplayer internet game.
The Metropolitan Crime Division had filed a Public Interest Litigation at the Kathmandu District Court seeking permission to ban PUBG claiming that the game was having a negative effect on the behaviour and study of children and youths. The
district court gave permission to ban PUBG the same day.
Senior Superintendent of Police Dhiraj Pratap Singh, chief of the Metropolitan Crime Division said:
We received a number of complaints from parents, schools and school associations regarding the effect of the game on children. We also held discussions with psychiatrists before requesting the Kathmandu District Court for permission to ban the game.
Iraq's cultural parliamentary committee submitted a draft proposal on April 13th, 2019 suggesting a ban on PlayerUnknown's Battlegrounds. The proposal would have to go through a review by parliamentary speaker Mohammed Al Halbousi.
The head of the culture committee, Sameaa Gullab, commented:
The committee is concerned about the obsession over these electronic games that ignite violence among children and youth. Its influence has spread rapidly among Iraq's society. We are proposing to parliament to block and ban all games that
threaten social security, morality, education and all segments of Iraqi society.
Iraqi media reported incidents of suicide and divorce related to the games during the last year. Local media reporting on the craze has claimed it has led to nearly 40,000 divorces worldwide and more than 20 cases in Iraq.
The parliamentary censorship call also cites the suicide game Blue Whale, which has been a problem for some regions for quite some time.
Iraq's parliament has voted to ban the popular battle royale games Fortnite and Playerunknown's Battlegrounds because of their supposed detrimental influence on the population.
A Reuters report says the ban was put into place due to the negative effects caused by some electronic games on the health, culture, and security of Iraqi society, including societal and moral threats to children and youth.
Reaction to the ban was widely negative, according to the report, but not because people are angry that they can't play Fortnite. They may be, but the real issue is that Iraqis apparently see the ban as emblematic of the government's misplaced
priorities: While Iraq continues to struggle with sectarian violence, inadequate infrastructure, and political instability, the country's parliament has only managed to pass one piece of legislation since sitting in September 2018.
A Barcelona school has removed 200 children's books it considers sexist including Little Red Riding Hood and the story of the legend of Saint George, from its library.
The Tàber school's infant library of around 600 children's books was reviewed by the Associació Espai i Lleure as part of a project that aims to highlight hidden sexist content. The group reviewed the characters in each book, whether or not they
speak and what roles they perform, finding that 30% of the books were highly sexist, had strong stereotypes and were, in its opinion, of no pedagogical value.
According to Associació Espai i Lleure, if young children see "strongly stereotypical" depictions of relationships and behaviours in what they read, they will consider them normal. Anna Tutzó, a parent who is on the commission that
reviewed the books, told El País that "society is changing and is more aware of the issue of gender, but this is not being reflected in stories". Masculinity is associated with competitiveness and courage, and "in violent
situations, even though they are just small pranks, it is the boy who acts against the girl", which "sends a message about who can be violent and against whom".
The European Parliament has approved a draft version of new EU internet censorship law targeting terrorist content.
In particular the MEPs approved the imposition of a one-hour deadline to remove content marked for censorship by various national organisations. However the MEPs did not approve a key section of the law requiring internet companies to pre-process
and censor terrorist content prior to upload.
A European Commission official told the BBC changes made to the text by parliament made the law ineffective. The Commission will now try to restore the pre-censorship requirement with the new parliament when it is elected.
The law would affect social media platforms including Facebook, Twitter and YouTube, which could face fines of up to 4% of their annual global turnover. What does the law say?
In amendments, the European Parliament said websites would not be forced to monitor the information they transmit or store, nor have to actively seek facts indicating illegal activity. It said the competent authority should give the website
information on the procedures and deadlines 12 hours before the agreed one-hour deadline the first time an order is issued.
In February, German MEP Julia Reda of the European Pirate Party said the legislation risked the surrender of our fundamental freedoms [and] undermines our liberal democracy. Ms Reda welcomed the changes brought by the European Parliament but said
the one-hour deadline was unworkable for platforms run by individual or small providers.
As we have been explaining across media, we believe that by using default settings and vague privacy policies which allow Amazon employees to listen in on the recordings of users' interactions with their devices, Amazon risks deliberately
deceiving its customers.
Amazon has so far been dismissive, arguing that people had the options to opt out from the sharing of their recordings -- although it is unclear how their customers could have done so if they were not aware this was going on in the first place.
The Bloomberg report revealed that thousands of Amazon workers listen to these recordings, each reviewing up to a thousand recordings per day, and that they share recordings with one another that they find to be "amusing".
As a result, today we wrote to Jeff Bezos to let him know we think Amazon needs to step up and do a lot better to protect the privacy of their customers.
If you use an Amazon Echo device and are concerned about this, read our instructions on how to opt out
Dear Mr. Bezos,
We are writing to call for your urgent action regarding last week's report  in Bloomberg, which revealed that Amazon has been employing thousands of workers to listen in on the recordings of Amazon Echo users.
Privacy International (PI) is a registered charity based in London that works at the intersection of modern technologies and rights. Privacy International challenges overreaching state and corporate surveillance, so that people everywhere can
have greater security and freedom through greater personal privacy.
The Bloomberg investigation asserts that Amazon employs thousands of staff around the world to listen to voice recordings captured by the Amazon Alexa. Among other examples, the report states that your employees use internal chat rooms to share
files when they "come across an amusing recording", and that they share "distressing" recordings -- including one of a sexual assault.
Customers were not made aware that recordings of their interactions with the Amazon Echo could, by default, be listened to by your employees.
Millions of customers enjoy your product and they deserve better from you. As such, we ask whether you will:
Notify all users whose recordings have been accessed, and describe to them which recordings;
Notify all users whenever their recordings are accessed in the future, and describe to them which recordings;
Modify the settings of the Amazon Echo so that "Help Develop New Features" and "Use Messages to Improve Transcriptions" are turned off by default;
In your response to the Bloomberg investigation, you state you take the privacy of your customers seriously. It is now time for you to step up and walk the walk. We look forward to engaging with you further on this.
Reddit is a social media website that boasts 234 million members and approximately 8 billion page views per month.
Reddit's system is naturally built to highlight online influencers; all posts are automatically submitted to a voting process: The most up-voted messages receive the most visibility.
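The voting mechanism described above is well documented from Reddit's formerly open-source codebase: a post's "hot" score combines the logarithm of its net votes with its submission time, so the earliest votes count most and a newer post outranks an older one with the same score. A simplified sketch of that ranking (the constant and rounding follow the published code; this is an illustration, not Reddit's current production algorithm):

```python
from datetime import datetime, timezone, timedelta
from math import log10

def hot(ups: int, downs: int, date: datetime) -> float:
    """Simplified 'hot' ranking from Reddit's formerly open-source code."""
    s = ups - downs
    order = log10(max(abs(s), 1))              # votes count logarithmically
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = date.timestamp() - 1134028003    # epoch constant from the original code
    return round(sign * order + seconds / 45000, 7)

now = datetime.now(timezone.utc)
# More net upvotes at the same time ranks higher; a newer post
# outranks an older one with the same score.
print(hot(200, 10, now) > hot(50, 10, now))                      # True
print(hot(50, 10, now + timedelta(hours=2)) > hot(50, 10, now))  # True
```

The logarithm is why early votes matter most: the first ten upvotes move a post as far as the next hundred.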
The site has a very passionate following and advertising on Reddit can be very successful. Companies are able to promote top posts to a very targeted audience, directly on its front page.
On Tuesday, Reddit posted an update about their Not Suitable for Work Advertising Policy. From now on, the platform doesn't allow any adult-oriented ads and targeting. Promoted posts pushing adult products or services are no longer permissible
and NSFW subreddits will no longer be eligible for ads or targeting.
The new policy specifically targets pornographic and sexually explicit content, as well as adult sexual recreational content, products and services.
The UK will become the first country in the world to bring in age-verification for online pornography when the measures come into force on 15 July 2019.
It means that commercial providers of online pornography will be required by law to carry out robust age-verification checks on users, to ensure that they are 18 or over.
Websites that fail to implement age-verification technology face having payment services withdrawn or being blocked for UK users.
The British Board of Film Classification (BBFC) will be responsible for ensuring compliance with the new laws. They have confirmed that they will begin enforcement on 15 July, following an implementation period to allow websites time to comply
with the new standards.
Minister for Digital Margot James said that she wanted the UK to be the most censored place in the world to be online:
Adult content is currently far too easy for children to access online. The introduction of mandatory age-verification is a world-first, and we've taken the time to balance privacy concerns with the need to protect children from inappropriate
content. We want the UK to be the safest place in the world to be online, and these new laws will help us achieve this.
Government has listened carefully to privacy concerns and is clear that age-verification arrangements should only be concerned with verifying age, not identity. In addition to the requirement for all age-verification providers to comply with
General Data Protection Regulation (GDPR) standards, the BBFC have created a voluntary certification scheme, the Age-verification Certificate (AVC), which will assess the data security standards of AV providers. The AVC has been developed in
cooperation with industry, with input from government.
Certified age-verification solutions which offer these robust data protection conditions will be certified following an independent assessment and will carry the BBFC's new green 'AV' symbol. Details will also be published on the BBFC's
age-verification website, ageverificationregulator.com so consumers can make an informed choice between age-verification providers.
BBFC Chief Executive David Austin said:
The introduction of age-verification to restrict access to commercial pornographic websites to adults is a ground breaking child protection measure. Age-verification will help prevent children from accessing pornographic content online and means
the UK is leading the way in internet safety.
On entry into force, consumers will be able to identify that an age-verification provider has met rigorous security and data checks if they carry the BBFC's new green 'AV' symbol.
The change in law is part of the Government's commitment to making the UK the safest place in the world to be online, especially for children. It follows last week's publication of the Online Harms White Paper which set out clear responsibilities
for tech companies to keep UK citizens safe online, how these responsibilities should be met and what would happen if they are not.
Sony has confirmed a new set of censorship rules toning down sexually themed games on the PlayStation 4.
A Sony spokeswoman confirmed the company has established its own guidelines so that gaming 'does not inhibit the sound growth and development' of young people. This is allegedly a result of executives at the company being afraid that the sale of
sexually explicit games might hurt its global reputation.
According to the Wall Street Journal, one of their biggest concerns is software sold in the company's home market of Japan, which traditionally has had more tolerance for near-nudity and images of young women.
The Wall Street Journal points to two main reasons for the new policy based on its conversations with unnamed Sony officials. The first is the rise of the #MeToo movement. The second is the growing ubiquity of streaming platforms like Twitch and
YouTube where sexually explicit games coming out of Japan can find a global audience.
Sony is concerned the company could become a target of legal and social action, a Sony official in the U.S. told the Wall Street Journal.
The new guidelines are in contrast to Nintendo, which told the Wall Street Journal that sexually explicit games can be sold on the Switch as long they receive a rating from a national ratings agency.
An example of the new rules is the adult visual novel Nekopara Vol. 1, which includes partial nudity and the option to pet female characters using a virtual cursor. It was released on Nintendo Switch last summer with a rating of Mature 17+,
while the PS4 version was delayed until November. When it finally came out, fans reported several changes that made it less sexually explicit, including extra steam in bath scenes and the removal of a slider players could use in the other
versions to make characters' breasts jiggle more.
Twitter co-founder Jack Dorsey has said again there is much work to do to improve Twitter and cut down on the amount of abuse and misinformation on the platform. He said the firm might demote likes and follows, adding that in hindsight he would
not have designed the platform to highlight these.
Speaking at the TED technology conference he said that Twitter currently incentivised people to post outrage. Instead he said it should invite people to unite around topics and communities. Rather than focus on following individual accounts,
users could be encouraged to follow hashtags, trends and communities.
Doing so would require a systematic change that represented a huge shift for Twitter.
One of the choices we made was to make the number of people that follow you big and bold. If I started Twitter now I would not emphasise follows and I would not create likes. We have to look at how we display follows and likes, he added.
The EU Council of Ministers has approved the Copyright Directive, which includes the link tax and censorship machines. The legislation was voted through by a majority of EU ministers despite noble opposition from Italy, Luxembourg, Netherlands,
Poland, Finland, and Sweden.
As explained by Julia Reda MEP, a majority of 55% of Member States, representing 65% of the population, was required to adopt the legislation. That was easily achieved with 71.26% in favor, so the Copyright Directive will now pass into law.
As the image above shows, several countries voted against adoption, including Italy, Luxembourg, Netherlands, Poland, Finland, and Sweden. Belgium, Estonia, and Slovenia abstained.
But that just wasn't enough: with both Germany and the UK voting in favor, the Copyright Directive is now adopted.
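The Council's double-majority rule described above is straightforward to check: a measure passes if at least 55% of member states vote in favour and those states together represent at least 65% of the EU population. A toy check (the function and population figures are illustrative, not the Council's actual tally):

```python
def qualified_majority(votes: dict) -> bool:
    """votes maps country -> (population, voted_in_favour).
    Passes with at least 55% of states and 65% of population in favour."""
    total_pop = sum(pop for pop, _ in votes.values())
    favour_pops = [pop for pop, yes in votes.values() if yes]
    state_share = len(favour_pops) / len(votes)
    pop_share = sum(favour_pops) / total_pop
    return state_share >= 0.55 and pop_share >= 0.65

# Toy figures: 3 of 4 states (75%) in favour, representing 71% of the
# combined population -> clears both the 55% and 65% thresholds.
toy = {"A": (40, True), "B": (21, True), "C": (29, False), "D": (10, True)}
print(qualified_majority(toy))  # True
```

Both thresholds must be met: a coalition of many small states fails on population, and a few large states fail on the state count.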
EU member states will now have two years to implement the law, which requires platforms like YouTube to sign licensing agreements with creators in order to use their content. If that is not possible, they will have to ensure that infringing
content uploaded by users is taken down and not re-uploaded to their services.
The entertainment lobby will not stop here; over the next two years, they will push for national implementations that ignore users' fundamental rights, comments Julia Reda:
It will be more important than ever for civil society to keep up the pressure in the Member States!
Ofcom has imposed a £75,000 fine on City News Network for failing to provide adequate protection for viewers.
The service Channel 44 -- an Urdu-language news and current affairs channel -- broadcast hate speech and material containing abusive treatment of the Ahmadiyya community.
Under the Broadcasting Code, licensees must not broadcast material which contains uncontextualised hate speech and abusive treatment of groups, religions or communities.
After an investigation, Ofcom concluded that the serious nature of the breaches of the Broadcasting Code warranted the imposition of statutory sanctions. These include a financial penalty and a direction to the broadcaster to broadcast a
statement of Ofcom's findings on a date and in a form to be determined by Ofcom.
The fine of £75,000 will be paid by City News Network to HM Paymaster General.
In an unprecedented attack on press freedom in Australia, 23 journalists and 13 media outlets have been hit with charges relating to the child sexual abuse trial of Catholic cardinal George Pell. The accused include Australia's two biggest
newspaper companies, Rupert Murdoch's Nationwide News and the former Fairfax group now owned by broadcaster Nine, as well as leading newspaper editors and reporters.
The media and reporters are accused of abetting contempt of court by the foreign press and of scandalising the court by breaching a gagging order, despite none of them reporting on the charges involved or mentioning Pell by name. The court had
banned all reporting of the case pending a second trial that was later cancelled.
Some foreign media, including The New York Times and the Washington Post , reported Pell's conviction in December, while local media ran cryptic articles complaining that they were being prevented from reporting a story of major public interest.
Matthew Collins, representing the accused media at the first hearing on the matter today, said such wide-ranging contempt charges had no precedent in Australian legal history. Collins added that a guilty verdict on any of the charges would have a
chilling effect on open justice in Australia. He insisted that the contempt allegations lacked specific examples of how any of the accused news companies or journalists actually breached the gag order when they never mentioned Pell or the crimes
for which he was convicted.
Video-sharing app TikTok has introduced an age gate feature for new users, which it claims will only allow those aged 13 years and above to create an account. TikTok also declared that it has removed more than six million videos that were in
violation of its community guidelines.
TikTok is said to be available in more than 20 countries, including India, and covers major Indian languages, including Hindi, Tamil, Telugu and Gujarati.
The app was banned by the Madras High Court earlier this month, chiefly on the ground that it posed a danger to children. The court said the app contained degrading culture, and that it encouraged pornography and pedophilia.
In February, TikTok was fined $5.7 million by the US Federal Trade Commission for violating the Children's Online Privacy Protection Act (COPPA) by collecting personal information of children below 13 years without parental consent.
As of April 15, the app remains available for download on Google's Play Store.
This is the biggest censorship event of the year. It is going to destroy the livelihoods of many. It is framed as if it were targeted at Facebook and the like, to sort out their abuse of user data, particularly for kids.
However the kicker is that the regulations will equally apply to all UK-accessed websites that earn at least some money and process user data in some way or other. Even small websites will then be required to default to treating all
their readers as children, and only allow more meaningful interaction with them if they verify themselves as adults. The default kids-only mode bans likes, comments, suggestions, targeted advertising etc, even for non-adult content.
Furthermore the ICO expects websites to formally comply with the censorship rules using market researchers, lawyers, data protection officers, expert consultants, risk assessors and all the sort of people that cost a grand a day.
Of course only the biggest players will be able to afford the required level of red tape and instead of hitting back at Facebook, Google, Amazon and co for misusing data, they will further add to their monopoly position as they will be the only
companies big enough to jump over the government's child protection hurdles.
Another dark day for British internet users and businesses.
The ICO writes in a press release:
Today we're setting out the standards expected of those responsible for designing, developing or providing online services likely to be accessed by children, when they process their personal data.
Parents worry about a lot of things. Are their children eating too much sugar, getting enough exercise or doing well at school? Are they happy?
In this digital age, they also worry about whether their children are protected online. You can log on to any news story, any day to see just how children are being affected by what they can access from the tiny computers in their pockets.
Last week the Government published its white paper covering online harms.
Its proposals reflect people's growing mistrust of social media and online services. While we can all benefit from these services, we are also increasingly questioning how much control we have over what we see and how our information is used.
There has to be a balancing act: protecting people online while embracing the opportunities that digital innovation brings.
And when it comes to children, that's more important than ever. In an age when children learn how to use a tablet before they can ride a bike, making sure they have the freedom to play, learn and explore in the digital world is of paramount importance.
The answer is not to protect children from the digital world, but to protect them within it.
When finalised, it will be the first of its kind and set an international benchmark.
It will leave online service providers in no doubt about what is expected of them when it comes to looking after children's personal data. It will help create an open, transparent and protected place for children when they are online.
Organisations should follow the code and demonstrate that their services use children's data fairly and in compliance with data protection law. Those that don't, could face enforcement action including a fine or an order to stop processing data.
Introduced by the Data Protection Act 2018, the code sets out 16 standards of age appropriate design for online services like apps, connected toys, social media platforms, online games, educational websites and streaming services, when they
process children's personal data. It's not restricted to services specifically directed at children.
The code says that the best interests of the child should be a primary consideration when designing and developing online services. It says that privacy must be built in and not bolted on.
Settings must be "high privacy" by default (unless there's a compelling reason not to); only the minimum amount of personal data should be collected and retained; children's data should not usually be shared; and geolocation services should be switched off by default. Nudge techniques should not be used to encourage children to provide unnecessary personal data, weaken or turn off their privacy settings, or keep on using the service. It also addresses issues of parental control and profiling.
The code is out for consultation until 31 May. We will draft a final version to be laid before Parliament and we expect it to come into effect before the end of the year.
Our Code of Practice is a significant step, but it's just part of the solution to online harms. We see our work as complementary to the current initiatives on online harms, and look forward to participating in discussions regarding the
Government's white paper.
The proposals are now open for public consultation:
The Information Commissioner is seeking feedback on her draft code of practice
Age appropriate design -- a code of practice for online services likely to be accessed by children (the code).
The code will provide guidance on the design standards that the Commissioner will expect providers of online 'Information Society Services' (ISS), which process personal data and are likely to be accessed by children, to meet.
The code is now out for public consultation and will remain open until 31 May 2019. The Information Commissioner welcomes feedback on the specific questions set out below.
You can respond to this consultation via
our online survey , or you can download the document below and email to firstname.lastname@example.org .
Alternatively, print off the document and post to:
Age appropriate design code consultation
Policy Engagement Department
Information Commissioner's Office
Today the Information Commissioner's Office announced a consultation on a draft Code of Practice to help protect children online.
The code forbids the creation of profiles on children, and bans data sharing and nudges aimed at children. Importantly, the code also requires that everyone be treated like a child unless they undertake robust age verification.
The ASI believes that this code will entangle start-ups in red tape, and inevitably end up with everyone being treated like children, or face undermining user privacy by requiring the collection of credit card details or passports for every user.
Matthew Lesh, Head of Research at free market think tank the Adam Smith Institute, says:
This is an unelected quango introducing draconian limitations on the internet with the threat of massive fines.
This code requires all of us to be treated like children.
An internet-wide age verification scheme, as required by the code, would seriously undermine user privacy. It would require the likes of Facebook, Google and thousands of other sites to repeatedly collect credit card and passport details from
millions of users. This data collection risks our personal information and online habits being tracked, hacked and exploited.
There are many potential unintended consequences. The media could be forced to censor swathes of stories not appropriate for young people. Websites that cannot afford to develop 'children-friendly' services could just block children. It could
force start-ups to move to other countries that don't have such stringent laws.
This plan would seriously undermine the business model of online news and many other free services by making it difficult to target advertising to viewer interests. This would be both worse for users, who are less likely to get relevant
advertisements, and journalism, which is increasingly dependent on the revenues from targeted online advertising.
The Government should take a step back. It is really up to parents to keep their children safe online.
Offsite Comment: Web shake-up could force ALL websites to treat us like children
The information watchdog has been accused of infantilising web users, in a draconian new code designed to make the internet safer for children.
Web firms will be forced to introduce strict new age checks on their websites -- or treat all their users as if they are children, under proposals published by the Information Commissioner's Office today.
The rules are so stringent that critics fear people could end up being forced to demonstrate their age for virtually every website they visit, or have the services that they can access limited as if they are under 18.
The ink has yet to dry on two enormous packages of internet censorship law, and yet the Government is already planning the next.
The Government is considering an overhaul of censorship rules for Netflix and Amazon Prime Video. The Daily Telegraph understands that the Department for Censorship, Media and Sport is looking at whether censorship rules for on-demand video streaming sites should be extended to match those suffered by traditional broadcasters.
Censorship Secretary Jeremy Wright had signalled this could be a future focus for DCMS last month, saying rules for Netflix and Amazon Prime Video were not as robust as they were for other broadcasters.
Public service broadcasters currently have set requirements to commission content from within the UK. The BBC, for example, must ensure that UK-made shows make up a substantial proportion of its content, and around 50% of that content must
come from outside the M25 area.
No such rules over specific UK-made content currently apply to Netflix and Amazon Prime Video, though. The European Union is currently finalising the details of rules for the bloc, which require streaming companies to ensure at least 30% of their libraries are dedicated to content made in EU member states.
Age verification for online gambling is set to evolve into full identity verification from 7th May 2019. The other big change is that all verification will have to be completed prior to any bets being placed. Previously age verification was
required only when people tried to withdraw their winnings. There were many complaints that gambling companies would then inflict onerous validation requirements to try and avoid paying out.
I would hazard a guess that the new implementation will quash an awful lot of the TV and media adverts that try to attract new members with a small joiners' bonus. Now it will be a lot more hassle to join, and maybe there will be less interest in trying out new websites just to get a free introductory bet.
Ministers are facing a growing and deserved backlash against draconian new web laws which will lead to totalitarian-style censorship.
The stated aim of the Online Harms White Paper is to target offensive material such as terrorists' beheading videos. But under the document's provisions, the UK internet censor would have complete discretion to decide what is harmful, hateful or
bullying -- potentially including coverage of contentious issues such as transgender rights.
After MPs lined up to demand a rethink, Downing Street has put pressure on Culture Secretary Jeremy Wright to narrow the definition of harm in order to exclude typical editorial content.
MPs have been led by Jacob Rees-Mogg, who said last night that while it was obviously a worthwhile aim to rid the web of the evils of terrorist propaganda and child pornography, it should not be at the expense of crippling a free Press and
gagging healthy public expression. He added that the regulator could be used as a tool of repression by a future Jeremy Corbyn-led government, saying:
Sadly, the Online Harms White Paper appears to give the Home Secretary of the day the power to decide the rules as to which content is considered palatable. Who is to say that less scrupulous governments in the future would not abuse this new power?
I fear this could have the unintended consequence of reputable newspaper websites being subjected to quasi-state control. British newspapers' freedom to hold authority to account is an essential bulwark of our democracy.
We must not now allow what amounts to a Leveson-style state-controlled regulator for the Press by the back door.
He was backed by Charles Walker, vice-chairman of the Tory Party's powerful backbench 1922 Committee, who said:
We need to protect people from the well-documented evils of the internet -- not in order to suppress views or opinions to which they might object.
In last week's Mail on Sunday, former Culture Secretary John Whittingdale warned that the legislation was more usually associated with autocratic regimes including those in China, Russia or North Korea.
Tory MP Philip Davies joined the criticism last night, saying:
Of course people need to be protected from the worst excesses of what takes place online. But equally, free speech in a free country is very, very important too. It's vital we strike the right balance. While I have every confidence that Sajid
Javid as Home Secretary would strike that balance, can I have the same confidence that a future Marxist government would not abuse the proposed new powers?
And Tory MP Martin Vickers added:
While we must take action to curb the unregulated wild west of the internet, we must not introduce state control of the Press as a result.
Wikileaks was a whistle blowing website that shone a light on how governments of the world have been running our lives. And it was not a pretty sight.
Julian Assange, who ran Wikileaks, is surely a freedom of speech hero; however, he broke many serious state secrets laws and has been evading the authorities via the diplomatic asylum afforded to him by the Ecuadorean embassy in London. This has now been rescinded and Assange has been duly arrested. He is now in serious trouble and will surely end up being sent to the USA to answer the accusations.
It is hard to see that the prosecuting authorities will be convinced by ethics or morality of the ends justifying the means.
The legislators behind the Digital Economy Act couldn't be bothered to include any provisions for websites and age verifiers to keep the identity and browsing history of porn users safe. It has now started to dawn on the authorities that this was a mistake. They are currently implementing a voluntary kitemark scheme to try and assure users that porn websites' and age verifiers' claims of keeping data safe can be borne out.
It is hardly surprising that significant numbers of people are likely to be interested in avoiding having to register their identity details before being able to access porn.
It seems obvious that information about VPNs and Tor will therefore be readily circulated amongst any online community with an interest in keeping safe. But perhaps it is a little bit of a shock to see it in such large letters in a mainstream magazine on the shelves of supermarkets and newsagents.
And perhaps another thought is that once the BBFC starts requiring ISPs to block non-compliant websites, then circumvention will be the only way to see your blocked favourite websites. So people stupidly signing up to age verification will have less access to porn and a worse service than those that circumvent it.
Russia took another step toward government control over the internet on Thursday, as lawmakers approved a bill that will open the door to sweeping censorship.
The legislation is designed to route web traffic through servers controlled by Roskomnadzor, the state communications censor, increasing its power to control information and block messaging or other applications.
It also provides for Russia to create its own system of domain names that would allow the internet to continue operating within the country, even if it were cut off from the global web.
The bill is expected to receive final approval before the end of the month. Once signed into law by Mr. Putin, the bulk of it will go into effect on Nov. 1.
Critics of the government's flagship internet regulation policy are warning it could lead to a North Korean-style censorship regime, where regulators decide which websites Britons are allowed to visit, because of how broad the proposals are.
Index on Censorship has raised strong concerns about the government's focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy Green Paper in 2017. In October 2018, Index published a joint statement with Global Partners Digital and Open Rights Group noting that any proposals that regulate content are likely to have a significant impact on the enjoyment and exercise of human rights online, particularly freedom of expression.
We have also met with officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns.
With the publication of the Online Harms White Paper , we would like to reiterate our earlier points.
While we recognise the government's desire to tackle unlawful content online, the proposals mooted in the white paper -- including a new duty of care on social media platforms , a regulatory body , and even the fining and banning of social media
platforms as a sanction -- pose serious risks to freedom of expression online.
These risks could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10
of the European Convention on Human Rights, amongst other international treaties.
Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and opinions. The scope of the right to freedom of expression includes speech which may
be offensive, shocking or disturbing. The proposed responses for tackling online safety may lead to disproportionate amounts of legal speech being curtailed, undermining the right to freedom of expression.
In particular, we raise the following concerns related to the white paper:
Lack of evidence base
The wide range of different harms which the government is seeking to tackle in this policy process require different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm and the
measures' likely effectiveness. The evidence which formed the base of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures should be supported by clear and unambiguous evidence of
their need and effectiveness.
Duty of care concerns/ problems with 'harm' definition
Index is concerned at the use of a duty of care regulatory approach. Although social media has often been compared to the public square, the duty of care model is not an exact fit because it would introduce regulation -- and restriction -- of speech between individuals based on criteria that are far broader than current law. A failure to accurately define "harmful" content risks incorporating legal speech, including political expression, expressions of religious views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.
Risks in linking liability/sanctions to platforms over third party content
While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance or
incentives to use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.
Lack of sufficient protections for freedom of expression
The obligation to protect users' rights online that is included in the white paper gives insufficient weight to freedom of expression. A much clearer obligation to protect freedom of expression should guide development of future regulation.
In recognition of the UK's commitment to the multistakeholder model of internet governance, we hope all relevant stakeholders, including civil society experts on digital rights and freedom of expression, will be fully engaged throughout the
development of the Online Harms bill.
PI welcomes the UK government's commitment to investigating and holding companies to account. When it comes to regulating the internet, however, we must move with care. Failure to do so will introduce, rather than reduce, "online harms". A 12-week consultation on the proposals has also been launched today. PI plans to file a submission to the consultation as it relates to our work. Given the breadth of the proposals, PI calls on others to respond to the consultation as well.
Here are our initial suggestions:
proceed with care: proposals of regulation of content on digital media platforms should be very carefully evaluated, given the high risks of negative impacts on expression, privacy and other human rights. This is a very complex challenge and
we support the need for broad consultation before any legislation is put forward in this area.
do not lose sight of how data exploitation facilitates the harms identified in the report and ensure any new regulator works closely with others working to tackle these issues.
assess carefully the delegation of sole responsibility to companies as adjudicators of content. This would empower corporate judgment over content, which would have implications for human rights, particularly freedom of expression and privacy.
require that judicial or other independent authorities, rather than government agencies, are the final arbiters of decisions regarding what is posted online and enforce such decisions in a manner that is consistent with human rights norms.
assess the privacy implications of any demand for "proactive" monitoring of content in digital media platforms.
ensure that any requirement or expectation of deploying automated decision making/AI is in full compliance with existing human rights and data protection standards (which, for example, prohibit, with limited exceptions, relying on solely
automated decisions, including profiling, when they significantly affect individuals).
ensure that company transparency reports include information related to how the content was targeted at users.
require companies to provide efficient reporting tools in multiple languages, to report on action taken with regard to content posted online. Reporting tools should be accessible, user-friendly, and easy to find. There should be full
transparency regarding the complaint and redress mechanisms available and opportunities for civil society to take action.
UK Now Proposes Ridiculous Plan To Fine Internet Companies For Vaguely Defined Harmful Content
Last week Australia rushed through a ridiculous bill to fine internet companies if they happen to host any abhorrent content. It appears the UK took one look at that nonsense and decided it wanted some too. On Monday it released a white paper
calling for massive fines for internet companies for allowing any sort of online harms. To call the plan nonsense is being way too harsh to nonsense.
The plan would result in massive, widespread, totally unnecessary censorship solely for the sake of pretending to do something about the fact that some people sometimes do not so nice things online. And it will place all of the blame on the
internet companies for the (vaguely defined) not so nice things that those companies' users might do online.
We agree with your characterisation of the online harms white paper as a flawed attempt to deal with serious problems (Regulating the internet demands clear thought about hard problems, Editorial, 9 April). However, we would draw your attention
to several fundamental problems with the proposal which could be disastrous if it proceeds in its current form.
Firstly, the white paper proposes to regulate literally the entire internet, and censor anything non-compliant. This extends to blogs, file services, hosting platforms, cloud computing; nothing is out of scope.
Secondly, there are a number of undefined harms with no sense of scope or evidence thresholds to establish a need for action. The lawful speech of millions of people would be monitored, regulated and censored.
The result is an approach that would make China's state censors proud. It would be very likely to face legal challenge. It would give the UK the widest and most prolific internet censorship in an apparently functional democracy. A fundamental
rethink is needed.
Antonia Byatt Director, English PEN,
Silkie Carlo Big Brother Watch
Thomas Hughes Executive director, Article 19
Jim Killock Executive director, Open Rights Group
Joy Hyvarinen Head of advocacy, Index on Censorship
Comment: The DCMS Online Harms Strategy must design in fundamental rights
Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to
certain online content and make Facebook in particular liable for the uglier and darker side of its user-generated material.
DCMS talks a lot about the 'harm' that social media causes. But its proposals fail to explain how harm to free expression impacts would be avoided.
On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play
are not explored in any detail at all.
In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about
what content is pushed or downgraded are all geared towards ousting illegal activity and creating open and welcoming shared spaces. DCMS hasn't in the White Paper elaborated on what its proposed duty would entail. If it's drawn narrowly so that
it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it's drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.
If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can't always know in advance the real-world harm that online content might cause, nor can they accurately
predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater
speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.
DCMS's policy is underpinned by societally-positive intentions, but in its drive to make the internet "safe", the government seems not to recognise that ultimately its proposals don't regulate social media companies, they regulate
social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.
Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.
The duty of care seems to be broadly about whether systemic interventions reduce overall "risk". But must the risk be always to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society as a whole?
What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.
DCMS's approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with,
allowing government to distance itself from taking full responsibility over the fine detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It
enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how
that scheme will affect UK citizens' free speech, both in the immediate future and for years to come.
How the government decides to legislate and regulate in this instance will set a global norm.
The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work that will apply worldwide. But to be a global leader,
DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation around how we as society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and
regulatory model that emerge from this process as a blueprint for more widespread internet censorship.
The House of Lords
report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper
introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this
process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic
freedom is under attack.
The White Paper expresses a clear desire for tech companies to "design in safety". As the process of consultation now begins, we call on DCMS to "design in fundamental rights". Freedom of expression is itself a framework, and
must not be lightly glossed over. We welcome the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for society.
Totalitarian-style new online code that could block websites and fine them £20 million for harmful content will not limit press freedom, Culture Secretary promises
Government proposals have sparked fears that they could backfire and turn Britain into the first Western nation to adopt the kind of censorship usually associated with totalitarian regimes.
Former culture secretary John Whittingdale drew parallels with China, Russia and North Korea. Matthew Lesh of the Adam Smith Institute, a free market think-tank, branded the white paper a historic attack on freedom of speech.
[However] draconian laws designed to tame the web giants will not limit press freedom, the Culture Secretary said yesterday.
In a letter to the Society of Editors, Jeremy Wright vowed that journalistic or editorial content would not be affected by the proposals.
And he reassured free speech advocates by saying there would be safeguards to protect the role of the Press.
But as for the safeguarding the free speech rights of ordinary British internet users, he more or less told them they could fuck off!
The European Parliament is set to vote on legislation that would require websites that host user-generated content to take down material reported as terrorist content within one hour. We have some examples of current notices sent to the Internet
Archive that we think illustrate very well why this requirement would be harmful to the free sharing of information and freedom of speech that the European Union pledges to safeguard.
In the past week, the Internet Archive has received a series of email notices from Europol's European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as terrorist propaganda. At least one of these mistaken URLs was also identified as terrorist content in a separate takedown notice from the French government's L'Office Central de Lutte contre la Criminalité liée aux Technologies de l'Information et de la Communication (OCLCTIC).
The Internet Archive has a few staff members who process takedown notices from law enforcement, and they operate in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night -- between midnight and 3am Pacific -- and all of the reports were sent outside of the business hours of the Internet Archive.
The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review them after the fact.
It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU's lists include some of the most visited pages on archive.org and materials that obviously have high scholarly
and research value. See a summary below with specific examples.
Zippyshare is a long running data locker and file sharing platform that is well known particularly for the distribution of porn.
Last month UK users noted that they have been blocked from accessing the website and that it can now only be accessed via a VPN.
Zippyshare themselves have made no comment about the block, but TorrentFreak have investigated the censorship and determined that the block is self-imposed and is not down to action by UK courts or ISPs.
Alan wonders if this is a premature reaction to the Great British Firewall, noting it's quite a popular platform for free porn.
Of course it poses the interesting question that if websites generally decide to address the issue of UK porn censorship by self-imposed blocks, then keen users will simply have to get themselves VPNs. Signing up for age verification simply won't help. Perhaps VPNs will become next to mandatory for British porn users, and age verification will become an unused technology.
Facebook changes its terms and clarifies its use of data for consumers following discussions with the European Commission and consumer authorities
The European Commission and consumer protection authorities have welcomed Facebook's updated terms and services. They now clearly explain how the company uses its users' data to develop profiling activities and target advertising to finance the company.
The new terms detail what services Facebook sells to third parties based on the use of their users' data, how consumers can close their accounts and for what reasons accounts can be disabled. These developments come after exchanges which aimed at obtaining full disclosure of Facebook's business model in comprehensive and plain language to users.
Vera Jourova , Commissioner for Justice, Consumers and Gender Equality welcomed the agreement:
... legalistic jargon on how it is making billions on people's data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission stand up for the rights of EU consumers.
In the aftermath of the Cambridge Analytica scandal and as a follow-up to the investigation on social media platforms in 2018 , the European Commission and national consumer protection authorities requested Facebook to clearly inform consumers
how the social network gets financed and what revenues are derived from the use of consumer data. They also requested the platform to bring the rest of its terms of service in line with EU Consumer Law.
As a result, Facebook will introduce new text in its Terms and Services explaining that it does not charge users for its services in return for users' agreement to share their data and to be exposed to commercial advertisements. Facebook's terms
will now clearly explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users.
In addition, following the enforcement action, Facebook has also amended:
its policy on limitation of liability and now acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties;
its power to unilaterally change terms and conditions by limiting it to cases where the changes are reasonable also taking into account the interest of the consumer;
the rules concerning the temporary retention of content which has been deleted by consumers. Such content can only be retained in specific cases, for instance to comply with an enforcement request by an authority, and for a maximum of 90 days in the case of technical reasons;
the language clarifying users' right to appeal when their content has been removed.
Facebook will complete the implementation of all commitments at the latest by the end of June 2019.
An internet user who downloaded illegal pornographic images of adults having sex with animals has been sentenced to a two-year community order with rehabilitation requirements.
Police found 71 bestiality pictures and movies on computer devices when they raided his home in August last year. The victim admitted possessing extreme pornography.
Judge Tim Gittins lectured the victim, claiming:
The items are illegal and the items do untold damage, not just to the animals involved but also to those who you erroneously believe are volunteering. Very often the adults are coerced into doing what they are doing.
The images themselves give no indication of the sort of dreadful situation they find themselves in or the damage being caused by them knowing that such items are available for viewing by others.
It perpetuates the damage that people like you download and retain them.
The American Library Association (ALA) has released its annual list of most challenged books.
The ALA Office for Intellectual Freedom chose the 11 most challenged works among 483 books that were either banned or restricted from public access in 2018.
Here is the complete list for 2018 and the reasons why the works were challenged --
George by Alex Gino -- The book, which was written for elementary-age children in 2015, was found offensive as its protagonist was a transgender child. Most recently, the Wichita, Kansas, school system decided to ban the book from the
district libraries citing that the work had references and language that wasn't appropriate for schoolchildren. The book also made it to ALA's list in 2016 and '17. The work is also believed to "encourage children to clear browser history
and change their bodies using hormones."
A Day in the Life of Marlon Bundo by Jill Twiss, illustrated by EG Keller -- The best-selling parody by John Oliver, which was written by "Last Week Tonight" staffer Jill Twiss, was in response to the book "Marlon Bundo's
Day in the Life of the Vice President" by Charlotte Pence, Vice President Mike Pence's daughter. The work pictured Pence's pet rabbit as gay and also criticized the family's conservative social viewpoint.
Captain Underpants series, written and illustrated by Dav Pilkey -- The 10-part series revolves around two young boys creating a superhero. A complaint was filed against the book with the Office for Intellectual Freedom stating that the
language used in it was not appropriate for the targeted age group. The book also allegedly promoted "disruptive behavior."
The Hate U Give by Angie Thomas -- The novel, which revolves around the life of a young girl who became an activist after her unarmed friend was killed by a police officer, was deemed "anti-cop." A complaint was filed against
the book for explicit language and featuring drug use.
Drama written and illustrated by Raina Telgemeier -- The 2012 graphic novel was banned in school libraries for featuring LGBTQ characters and themes. The work featured in ALA's previous lists for having offensive political viewpoints and
for being sexually explicit.
Thirteen Reasons Why by Jay Asher -- The work, which was originally published in 2007, came under the scanner after Netflix aired a series with the same name in 2017. The book's depiction of suicide was the primary reason for it being
banned. The book was deemed unsuited for children and teens as it featured drug and alcohol use. It was also challenged for its sexual content.
This One Summer by Mariko Tamaki, illustrated by Jillian Tamaki -- The work, which topped ALA's list in 2016, was banned for featuring LGBTQ characters. The book revolves around the life of a teen girl who navigates the start of
adolescence with the help of a female friend. The book was also challenged for drug use, profanity and having sexually explicit themes.
Skippyjon Jones series written and illustrated by Judy Schachner -- The series, which features a Siamese cat that imagines itself to be a Chihuahua, was criticized for depicting Mexican stereotypes.
The Absolutely True Diary of a Part-Time Indian by Sherman Alexie -- The work has featured in ALA's list six times since its publication in 2007 for its sexual references, depiction of alcoholism, bullying and poverty. It was also deemed
sexually explicit and challenged in school curriculums.
This Day In June by Gayle E. Pitman, illustrated by Kristyna Litten -- The children's picture book about a gay pride parade was challenged for including LGBTQ content.
Two Boys Kissing by David Levithan -- The book, which was about two teen boys participating in a 32-hour marathon of kissing in order to set a new Guinness World Record, was considered sexually explicit as the book's cover page has an
image of two boys kissing. It was also banned for the LGBTQ content.
In the first online safety laws of their kind, social media companies and tech firms will be legally required to protect their users and face tough penalties if they do not comply.
As part of the Online Harms White Paper, a joint proposal from the Department for Digital, Culture, Media and Sport and Home Office, a new independent regulator will be introduced to ensure companies meet their responsibilities.
This will include a mandatory 'duty of care', which will require companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services. The regulator will have effective enforcement tools, and we
are consulting on powers to issue substantial fines, block access to sites and potentially to impose liability on individual members of senior management.
A range of harms will be tackled as part of the Online Harms White Paper, including inciting violence and violent content, encouraging suicide, disinformation, cyber bullying and children accessing inappropriate material.
There will be stringent requirements for companies to take even tougher action to ensure they tackle terrorist and child sexual exploitation and abuse content.
The new proposed laws will apply to any company that allows users to share or discover user generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms,
file hosting sites, public discussion forums, messaging services, and search engines.
A regulator will be appointed to enforce the new framework. The Government is now consulting on whether the regulator should be a new or existing body. The regulator will be funded by industry in the medium term, and the Government is exploring
options such as an industry levy to put it on a sustainable footing.
A 12 week consultation on the proposals has also been launched today. Once this concludes, we will then set out the action we will take in developing our final proposals for legislation.
Tough new measures set out in the White Paper include:
A new statutory 'duty of care' to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.
Further stringent requirements on tech companies to ensure child abuse and terrorist content is not disseminated online.
Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this.
Making companies respond to users' complaints, and act to address them quickly.
Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.
A new "Safety by Design" framework to help companies incorporate online safety features in new apps and platforms from the start.
A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism.
The UK remains committed to a free, open and secure Internet. The regulator will have a legal duty to pay due regard to innovation, and to protect users' rights online, being particularly mindful to not infringe privacy and freedom of expression.
Recognising that the Internet can be a tremendous force for good, and that technology will be an integral part of any solution, the new plans have been designed to promote a culture of continuous improvement among companies. The new regime will
ensure that online firms are incentivised to develop and share new technological solutions, like Google's "Family Link" and Apple's Screen Time app, rather than just complying with minimum requirements. Government has balanced the clear
need for tough regulation with its ambition for the UK to be the best place in the world to start and grow a digital business, and the new regulatory framework will provide strong protection for our citizens while driving innovation by not
placing an impossible burden on smaller companies.
A TV channel, a porn producer, an age verifier and maybe even the government got together this week to put out a live test of age verification. The test was implemented on a specially created website featuring a single porn video.
The test required a well advertised website to provide enough traffic of viewers positively wanting to see the content. Channel 4 obliged with its series Mums Make Porn. The series followed a group of mums making a porn video that
they felt would be more sex positive and less harmful to kids than the more typical porn offerings currently on offer.
The mums did a good job and produced a decent video with a more loving and respectful interplay than is the norm. The video however is still proper hardcore porn and there is no way it could be broadcast on Channel 4. So the film was made
available, free of charge, on its own dedicated website complete with an age verification requirement.
The website was announced as a live test of AgeChecked software to see how age verification would pan out in practice. It featured the following options for age verification:
entering full credit card details + email
entering driving licence number + name and address + email
mobile phone number + email (the phone must have been verified as 18+ by the service provider and must be ready to receive an SMS message containing login details)
Nothing has been published in detail about the aims of the test but presumably they were interested in the basic questions such as:
What proportion of potential viewers will be put off by the age verification?
What proportion of viewers would be stupid enough to enter their personal data?
Which options of identification would be preferred by viewers?
The official test 'results'
Alastair Graham, CEO of AgeChecked provided a few early answers inevitably claiming that:
The results of this first mainstream test of our software were hugely encouraging.
He went on to claim that customers are willing to participate in the process, but noted that the verified phone number method emerged as by far the most popular method of verification. He said that this finding would be a key part of the process moving forward.
Reading between the lines perhaps he was saying that there wasn't much appetite for handing over detailed personal identification data as required by the other two methods.
I suspect that we will never get to hear more from AgeChecked especially about any reluctance of people to identify themselves as porn viewers.
The unofficial test results
Maybe they were also interested in other questions too:
Will people try and work around the age verification requirements?
If people find weaknesses in the age verification defences, will they pass on their discoveries to others?
Interestingly the age verification requirement was easily sidestepped by those with a modicum of knowledge about downloading videos from websites such as YouTube and PornHub. The age verification mechanism effectively only hid the start button from view. The actual video remained available for download, whether people age verified or not. All it took was a little examination of the page code to locate the video. There are several tools that allow this: video downloader addons, file downloaders, or just using the browser's built-in debugger to look at the page code.
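To illustrate the flaw in general terms: if an age gate only hides the play button with an overlay, the video URL is still sitting in the page markup for anyone to read. A minimal sketch using Python's standard HTML parser shows the idea; the page markup and URL below are invented for illustration, and a real site's code will differ.

```python
from html.parser import HTMLParser

class VideoSrcFinder(HTMLParser):
    """Collects the src attribute of any <video> or <source> tag."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag in ("video", "source"):
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

# Hypothetical page source: the age gate is just an overlay <div>,
# while the media URL remains in plain sight in the markup.
page = """
<html><body>
  <div id="av-gate">Verify your age to watch</div>
  <video controls src="https://example.com/media/film.mp4"></video>
</body></html>
"""

finder = VideoSrcFinder()
finder.feed(page)
print(finder.sources)  # ['https://example.com/media/film.mp4']
```

This is essentially what video downloader addons and the browser's debugger do: read the page source rather than the rendered overlay. Hiding the start button does nothing to protect the underlying file.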
Presumably the code for the page was knocked up quickly so this flaw could have been a simple oversight that is not likely to occur in properly constructed commercial websites. Or perhaps the vulnerability was deliberately included as part of the
test to see if people would pick up on it.
However it did identify that there is a community of people willing to stress test age verification restrictions to see if workarounds can be found and shared.
I noted on Twitter that several people had posted about the ease of downloading the video and had suggested a number of tools or methods that enabled this.
There was also an interesting article posted on achieving age verification using an expired credit card. Maybe that is not so catastrophic, as it still identifies a cardholder as over 18 even if it cannot be used to make a payment. But of course it may open new possibilities for misuse of old data. Note that randomly made-up numbers are unlikely to work, because card numbers carry a built-in checksum. Presumably age verification companies could strengthen the security by testing that a small transaction works, but intuitively this would have significant cost implications. I guess that to achieve any level of take up, age verification needs to be cheap for both websites and viewers.
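The checksum in question is the well-known Luhn algorithm, which all major card numbers satisfy. A short sketch shows why guessing a random 16-digit string almost always fails the most basic validation, even before any check against the issuer:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

print(luhn_valid("4111111111111111"))  # True (a well-known test number)
print(luhn_valid("4111111111111112"))  # False (last digit changed)
```

A random guess passes this check only one time in ten, and even then it would still have to match a real account to survive any lookup against the card networks. An expired card, by contrast, is a genuine number that once belonged to a verified adult, which is why it slips through checks that don't attempt a transaction.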
It was very heartening to see how many people were helpfully contributing their thoughts about testing the age verification software.
Over the course of a couple of hours reading, I learnt an awful lot about how websites hide and protect video content, and what tools are available to see through the protection. I suspect that many others will soon be doing the same... and I
also suspect that young minds will be far more adept than I at picking up such knowledge.
A final thought
I feel a bit sorry for small websites that sell content. It adds a whole new level of complexity, as a currently open preview area now needs to be locked away behind an age verification screen. Many potential customers will be put off by having to jump through hoops just to see the preview material. To then ask them to enter all their credit card details again to subscribe may be a hurdle too far.