Melon Farmers Original Version

Internet Porn Censorship


2022: Jan-March


 

Vladimir would be proud...

UK Government introduces its Online Censorship Bill which significantly diminishes British free speech whilst terrorising British businesses with a mountain of expense and red tape


Link Here 17th March 2022
Full story: Online Safety Bill...UK Government legislates to censor social media
The UK government's new online censorship laws have been brought before parliament. The Government wrote in its press release:

The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and holds tech giants to account. It will protect children from harmful content such as pornography and limit people's exposure to illegal content, while protecting freedom of speech.

It will require social media platforms, search engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.

The regulator Ofcom will have the power to fine companies failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites.

Today the government is announcing that executives whose companies fail to cooperate with Ofcom's information requests could now face prosecution or jail time within two months of the Bill becoming law, instead of two years as it was previously drafted.

A raft of other new offences have also been added to the Bill to make in-scope companies' senior managers criminally liable for destroying evidence, failing to attend or providing false information in interviews with Ofcom, and for obstructing the regulator when it enters company offices.

In the UK, tech industries are blazing a trail in investment and innovation. The Bill is balanced and proportionate with exemptions for low-risk tech and non-tech businesses with an online presence. It aims to increase people's trust in technology, which will in turn support our ambition for the UK to be the best place for tech firms to grow.

The Bill will strengthen people's rights to express themselves freely online and ensure social media companies are not removing legal free speech. For the first time, users will have the right to appeal if they feel their post has been taken down unfairly.

It will also put requirements on social media firms to protect journalism and democratic political debate on their platforms. News content will be completely exempt from any regulation under the Bill.

And, in a further boost to freedom of expression online, another major improvement announced today will mean social media platforms will only be required to tackle 'legal but harmful' content, such as exposure to self-harm, harassment and eating disorders, set by the government and approved by Parliament.

Previously they would have had to consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentives or pressure for platforms to over-remove legal content or controversial comments and will clear up the grey area around what constitutes legal but harmful.

Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.

Bill introduction and changes over the last year

The Bill will be introduced in the Commons today. This is the first step in its passage through Parliament to become law, beginning a new era of accountability online. It follows a period in which the government has significantly strengthened the Bill since it was first published in draft in May 2021. Changes since the draft Bill include:

  • Bringing paid-for scam adverts on social media and search engines into scope in a major move to combat online fraud.

  • Making sure all websites which publish or host pornography, including commercial sites, put robust checks in place to ensure users are 18 years old or over.

  • Adding new measures to clamp down on anonymous trolls to give people more control over who can contact them and what they see online.

  • Making companies proactively tackle the most harmful illegal content and criminal activity more quickly.

  • Criminalising cyberflashing through the Bill.

Criminal liability for senior managers

The Bill gives Ofcom powers to demand information and data from tech companies, including on the role of their algorithms in selecting and displaying content, so it can assess how they are shielding users from harm.

Ofcom will be able to enter companies' premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of how they're keeping users safe.

The Bill was originally drafted with a power for senior managers of large online platforms to be held criminally liable for failing to ensure their company complies with Ofcom's information requests in an accurate and timely manner.

In the draft Bill, this power was deferred and so could not be used by Ofcom for at least two years after it became law. The Bill introduced today reduces the period to two months to strengthen penalties for wrongdoing from the outset.

Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They will apply to every company in scope of the Online Safety Bill. They are:

  • offences for companies in scope and/or employees who suppress, destroy or alter information requested by Ofcom;

  • offences for failing to comply with, obstructing or delaying Ofcom when exercising its powers of entry, audit and inspection, or providing false information;

  • offences for employees who fail to attend or provide false information at an interview.

Falling foul of these offences could lead to up to two years in imprisonment or a fine.

Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.

Changes to requirements on 'legal but harmful' content

Under the draft Bill, 'Category 1' companies - the largest online platforms with the widest reach including the most popular social media platforms - must address content harmful to adults that falls below the threshold of a criminal offence.

Category 1 companies will have a duty to carry out risk assessments on the types of legal harms against adults which could arise on their services. They will have to set out clearly in terms of service how they will deal with such content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content they will have to say so.

The agreed categories of legal but harmful content will be set out in secondary legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not delegated to private companies or at the whim of internet executives.

It will also remove the threat of social media firms being overzealous and removing legal content because it upsets or offends someone even if it is not prohibited by their terms and conditions. This will end situations such as the incident last year when TalkRadio was forced offline by YouTube for an "unspecified" violation, and it was not clear how the channel had breached YouTube's terms and conditions.

The move will help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online.

The DCMS Secretary of State has the power to add more categories of priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.

Proactive technology

Platforms may need to use tools for content moderation, user profiling and behaviour identification to protect their users.

Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies in codes of practice and force companies to use better and more effective tools, should this be necessary.

Companies will need to demonstrate they are using the right tools to address harms, they are transparent, and any technologies they develop meet standards of accuracy and effectiveness required by the regulator. Ofcom will not be able to recommend these tools are applied on private messaging or legal but harmful content.

Reporting child sexual abuse

A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency.

The CSEA reporting requirement will replace the UK's existing voluntary reporting regime and reflects the Government's commitment to tackling this horrific crime.

Reports to the National Crime Agency will need to meet a set of clear standards to ensure law enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content.

In-scope companies will need to demonstrate existing reporting obligations outside of the UK to be exempt from this requirement, which will avoid duplicating companies' efforts.

 

 

Using censorship heavy artillery without caring about the collateral damage...

French censors bang the table demanding age verification but there are no data protection laws in place that protect porn users from being tracked and scammed


Link Here 9th March 2022
Full story: Age Verification in France...Macron gives websites 6 months to introduce age verification
Pornhub, XHamster, XNXX and XVideos are among the five porn sites that do not comply with French rules contrived from a law against domestic violence.

The French internet censor Arcom (previously the CSA) took legal action on March 8, requesting the blocking of five pornographic sites including Pornhub, XHamster, Xnxx and Xvideos. The censor had previously sent an injunction to the platforms, leaving 15 days to comply with the law, but the websites did not comply.

An amendment voted through with the 2020 law against domestic violence specifies that sites can no longer simply ask Internet users to declare that they are of legal age by ticking a box.

Depending on the judge's decision, ISPs may or may not be forced to block access to the incriminated sites. If blocking is ordered, visitors to the pornographic sites will be redirected to a dedicated Arcom page.

Distributors of pornographic content are therefore required, in theory, to check the age of their visitors. But how? There is currently no legally defined method to achieve this. The censor itself has never given guidelines to the platforms.

In fact data protection authorities have rather put a spanner in the works that has left the industry scratching its head. In an opinion issued on June 3, 2021, the National Commission for Computing and Freedoms (Cnil) decreed that a verification system which collects information on the identity of Internet users would, in this context, be illegal and risky. Such data collection would indeed present significant risks for the persons concerned since their sexual orientation -- real or supposed -- could be deduced from the content viewed and directly linked to their identity.

Faced with these legal contradictions, Senator Marie Mercier, rapporteur for the amendment, has simply banged the table harder:

I don't want to know how they are doing it, but they have to find a solution. The law is the law.

Porn tube websites have explained their reluctance to implement age verification. The option to use third-party verifiers may prove very expensive for a business model based on a high number of users making up for low advertising income per user. One estimate, denied by the Tukif site, puts the cost of a verification service at between €0.05 and €0.22 per user, a cost to be multiplied by 650,000 unique daily visitors.

It is presumed that many porn users will be very reluctant to hand over dangerous ID proof to porn websites, so blocking some audiences while discouraging others will lead to collapsing income.

The websites also note that as the regulator hasn't attempted to block all porn tube sites, users will be more likely to swap to unrestricted websites rather than submit to ID verification on those websites mandated to use it.

 

 

Offsite Article: The Real-World Consequences Of Outlawing Porn...


Link Here 4th March 2022
Full story: Online Safety Bill Draft...UK Government legislates to censor social media
Censoring adult entertainment does not reduce demand -- it just allows fraudsters, blackmailers and corruption to flourish

See article from reprobatepress.com

 

 

Offsite Article: Reviewing Louis Theroux's Forbidden America...


Link Here 4th March 2022
The US adult industry trade group is not impressed by the BBC's 'demonstrably biased and stigmatizing reporting on sex worker rights and sexual expression'

See article from xbiz.com

 

 

The UK Government masses its heavy censorship artillery at the borders of free speech...

Threatening to invade and repress the freedoms of a once proud people


Link Here 20th February 2022
Full story: Online Safety Bill Draft...UK Government legislates to censor social media
The Financial Times is reporting that the cabinet have agreed to extending UK online censorship to cover legal but harmful content. The government will define what material must be censored via its internet censor Ofcom.

The FT reports:

A revised Online Safety bill will give Ofcom, the communications regulator, powers to require internet companies to use technology to proactively seek out and remove both illegal content and legal content which is harmful to children. The new powers were proposed in a recent letter to cabinet colleagues by home secretary Priti Patel and culture secretary Nadine Dorries.

It seems that the tech industry is not best pleased by being forced to pre-vet and censor content according to rules decreed by government or Ofcom.  The FT reports:

After almost three years of discussion about what was originally named the Online Harms bill, tech industry insiders said they were blindsided by the eleventh-hour additions.

The changes would make the UK a global outlier in how liability is policed and enforced online, said Coadec, a trade body for tech start-ups. It added the UK would be a significantly less attractive place to start, grow and maintain a tech business.

Westminster insiders said ministers were reluctant to be seen opposing efforts to remove harmful material from the internet.

 

 

No age verification option...

Twitter responds to German porn age verification requirements by totally blocking all Germans from adult content that has been flagged by a state internet censor


Link Here 13th February 2022
Full story: Internet Censorship in Germany...Germany considers state internet filtering
Twitter has been blocking the profiles of adult content creators in Germany since late 2020, with at least 60 accounts affected to date.

The move comes in response to a series of legal orders by German regulators that have ruled that online pornography should not be visible to children and must be hidden behind age-verification systems.

As Twitter doesn't have an age-verification system in place, it has responded to legal demands by outright blocking the selected accounts for anyone in Germany.

The German approach to selecting accounts to ban seems scattergun. There are thousands of Twitter accounts that post adult content, and those the internet censor has reported to Twitter appear to have large followings or are subject to individual complaints.

Anyone trying to view one of the blocked accounts in Germany sees a message saying it has been withheld in Germany in response to a legal demand. The exact number of accounts blocked in this way is unknown. One pornographic account displaying this message has more than 700,000 followers.

The policy of totally blocking all German users may encourage a large scale take up of VPNs so that users can continue to view their favourite accounts. Of course Twitter could itself block access via well known VPNs but it seems likely that this would cause widespread disruption to worldwide users living in repressive countries that try to block Twitter entirely.

 

 

A scammer's wet dream...

UK Government announces that the Online Censorship Bill will now extend to requiring identity/age verification to view porn


Link Here 6th February 2022
Full story: Online Safety Bill Draft...UK Government legislates to censor social media

On Safer Internet Day, Digital Censorship Minister Chris Philp has announced the Online Safety Bill will be significantly strengthened with a new legal duty requiring all sites that publish pornography to put robust checks in place to ensure their users are 18 years old or over.

This could include adults using secure age verification technology to verify that they possess a credit card and are over 18 or having a third-party service confirm their age against government data.

If sites fail to act, the independent regulator Ofcom will be able to fine them up to 10% of their annual worldwide turnover or can block them from being accessible in the UK. Bosses of these websites could also be held criminally liable if they fail to cooperate with Ofcom.

A large amount of pornography is available online with little or no protections to ensure that those accessing it are old enough to do so. There are widespread concerns this is impacting the way young people understand healthy relationships, sex and consent. Half of parents worry that online pornography is giving their kids an unrealistic view of sex and more than half of mums fear it gives their kids a poor portrayal of women.

Age verification controls are one of the technologies websites may use to prove to Ofcom that they can fulfil their duty of care and prevent children accessing pornography.

Many sites where children are likely to be exposed to pornography are already in scope of the draft Online Safety Bill, including the most popular pornography sites as well as social media, video-sharing platforms and search engines. But as drafted, only commercial porn sites that allow user-generated content - such as videos uploaded by users - are in scope of the bill.

The new standalone provision ministers are adding to the proposed legislation will require providers who publish or place pornographic content on their services to prevent children from accessing that content. This will capture commercial providers of pornography as well as the sites that allow user-generated content. Any companies which run such a pornography site which is accessible to people in the UK will be subject to the same strict enforcement measures as other in-scope services.

The Online Safety Bill will deliver more comprehensive protections for children online than the Digital Economy Act by going further and protecting children from a broader range of harmful content on a wider range of services. The Digital Economy Act did not cover social media companies, where a considerable quantity of pornographic material is accessible, and which research suggests children use to access pornography.

The government is working closely with Ofcom to ensure that online services' new duties come into force as soon as possible following the short implementation period that will be necessary after the bill's passage.

The onus will be on the companies themselves to decide how to comply with their new legal duty. Ofcom may recommend the use of a growing range of age verification technologies available for companies to use that minimise the handling of users' data. The bill does not mandate the use of specific solutions as it is vital that it is flexible to allow for innovation and the development and use of more effective technology in the future.

Age verification technologies do not require a full identity check. Users may need to verify their age using identity documents but the measures companies put in place should not process or store data that is irrelevant to the purpose of checking age. Solutions that are currently available include checking a user's age against details that their mobile provider holds, verifying via a credit card check, and other database checks including government held data such as passport data.
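The data-minimisation principle described above can be illustrated with a small sketch (purely illustrative; the names here are invented and do not come from any real age verification API): a third-party verifier that already holds the identity data answers only a yes/no question, so the website itself never handles or stores the user's date of birth.

```python
from datetime import date
from typing import Optional

def is_over_18(date_of_birth: date, today: Optional[date] = None) -> bool:
    """Hypothetical third-party age check.

    Returns only a boolean, so the requesting website never sees
    or stores the birth date or any other identity data.
    """
    if today is None:
        today = date.today()
    # Subtract one year if this year's birthday has not yet occurred.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= 18
```

The point of the design is that the answer crossing the boundary between verifier and website is a single bit, which is exactly the "should not process or store data that is irrelevant to the purpose of checking age" requirement described above.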

Any age verification technologies used must be secure, effective and privacy-preserving. All companies that use or build this technology will be required to adhere to the UK's strong data protection regulations or face enforcement action from the Information Commissioner's Office.

Online age verification is increasingly common practice in other online sectors, including online gambling and age-restricted sales. In addition, the government is working with industry to develop robust standards for companies to follow when using age assurance tech, which it expects Ofcom to use to oversee the online safety regime.

Notes to editors:

Since the publication of the draft Bill in May 2021 and following the final report of the Joint Committee in December, the government has listened carefully to the feedback on children's access to online pornography, in particular stakeholder concerns about pornography on online services not in scope of the bill.

To avoid regulatory duplication, video-on-demand services which fall under Part 4A of the Communications Act will be exempt from the scope of the new provision. These providers are already required under section 368E of the Communications Act to take proportionate measures to ensure children are not normally able to access pornographic content.

The new duty will not capture user-to-user content or search results presented on a search service, as the draft Online Safety Bill already regulates these. Providers of regulated user-to-user services which also carry published (i.e. non user-generated) pornographic content would be subject to both the existing provisions in the draft Bill and the new proposed duty.

 

 

Online Censorship Bill...

Government defines a wide range of harms that will lead to criminal prosecution and that will require censorship by internet intermediaries


Link Here 3rd February 2022
Full story: Online Safety Bill Draft...UK Government legislates to censor social media

Online Safety Bill strengthened with new list of criminal content for tech firms to remove as a priority

Previously, firms would have been forced to take such content down after it had been reported to them by users, but now they must be proactive and prevent people being exposed in the first place.

It will clamp down on pimps and human traffickers, extremist groups encouraging violence and racial hate against minorities, suicide chatrooms and the spread of private sexual images of women without their consent.

Naming these offences on the face of the bill removes the need for them to be set out in secondary legislation later and Ofcom can take faster enforcement action against tech firms which fail to remove the named illegal content.

Ofcom will be able to issue fines of up to 10 per cent of annual worldwide turnover to non-compliant sites or block them from being accessible in the UK.

Three new criminal offences, recommended by the Law Commission, will also be added to the Bill to make sure criminal law is fit for the internet age.

The new communications offences will strengthen protections from harmful online behaviours such as coercive and controlling behaviour by domestic abusers; threats to rape, kill and inflict physical violence; and deliberately sharing dangerous disinformation about hoax Covid-19 treatments.

The government is also considering the Law Commission's recommendations for specific offences to be created relating to cyberflashing, encouraging self-harm and epilepsy trolling.

To proactively tackle the priority offences, firms will need to make sure the features, functionalities and algorithms of their services are designed to prevent their users encountering them and minimise the length of time this content is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting suspicious users and having effective systems in place to prevent banned users opening new accounts.

New harmful online communications offences

Ministers asked the Law Commission to review the criminal law relating to abusive and offensive online communications in the Malicious Communications Act 1988 and the Communications Act 2003.

The Commission found these laws have not kept pace with the rise of smartphones and social media. It concluded they were ill-suited to address online harm because they overlap and are often unclear for internet users, tech companies and law enforcement agencies.

It found the current law over-criminalises and captures 'indecent' images shared between two consenting adults - known as sexting - where no harm is caused. It also under-criminalises - resulting in harmful communications without appropriate criminal sanction. In particular, abusive communications posted in a public forum, such as posts on a publicly accessible social media page, may slip through the net because they have no intended recipient. It also found the current offences are sufficiently broad in scope that they could constitute a disproportionate interference in the right to freedom of expression.

In July the Law Commission recommended more coherent offences. The Digital Secretary today confirms new offences will be created and legislated for in the Online Safety Bill.

The new offences will capture a wider range of harms in different types of private and public online communication methods. These include harmful and abusive emails, social media posts and WhatsApp messages, as well as 'pile-on' harassment where many people target abuse at an individual such as in website comment sections. None of the offences will apply to regulated media such as print and online journalism, TV, radio and film.

The offences are:

A 'genuinely threatening' communications offence, where communications are sent or posted to convey a threat of serious harm.

This offence is designed to better capture online threats to rape, kill and inflict physical violence or cause people serious financial harm. It addresses limitations with the existing laws which capture 'menacing' aspects of the threatening communication but not genuine and serious threatening behaviour.

It will offer better protection for public figures such as MPs, celebrities or footballers who receive extremely harmful messages threatening their safety. It will address coercive and controlling online behaviour and stalking, including, in the context of domestic abuse, threats related to a partner's finances or threats concerning physical harm.

A harm-based communications offence to capture communications sent to cause harm without a reasonable excuse.

This offence will make it easier to prosecute online abusers by abandoning the requirement under the old offences for content to fit within proscribed yet ambiguous categories such as "grossly offensive," "obscene" or "indecent". Instead it is based on the intended psychological harm, amounting to at least serious distress, to the person who receives the communication, rather than requiring proof that harm was caused. The new offences will address the technical limitations of the old offences and ensure that harmful communications posted to a likely audience are captured.

The new offence will consider the context in which the communication was sent. This will better address forms of violence against women and girls, such as communications which may not seem obviously harmful but when looked at in light of a pattern of abuse could cause serious distress. For example, in the instance where a survivor of domestic abuse has fled to a secret location and the abuser sends the individual a picture of their front door or street sign.

It will better protect people's right to free expression online. Communications that are offensive but not harmful and communications sent with no intention to cause harm, such as consensual communication between adults, will not be captured. It will have to be proven in court that a defendant sent a communication without any reasonable excuse and did so intending to cause serious distress or worse, with exemptions for communication which contributes to a matter of public interest.

An offence for when a person sends a communication they know to be false with the intention to cause non-trivial emotional, psychological or physical harm.

Although there is an existing offence in the Communications Act that captures knowingly false communications, this new offence raises the current threshold of criminality. It covers false communications deliberately sent to inflict harm, such as hoax bomb threats, as opposed to misinformation where people are unaware what they are sending is false or genuinely believe it to be true. For example, if an individual posted on social media encouraging people to inject antiseptic to cure themselves of coronavirus, a court would have to prove that the individual knew this was not true before posting it.

The maximum sentences for each offence will differ. If someone is found guilty of a harm based offence they could go to prison for up to two years, up to 51 weeks for the false communication offence and up to five years for the threatening communications offence. The maximum sentence was six months under the Communications Act and two years under the Malicious Communications Act.

Notes

The draft Online Safety Bill in its current form already places a duty of care on internet companies which host user-generated content, such as social media and video-sharing platforms, as well as search engines, to limit the spread of illegal content on these services. It requires them to put in place systems and processes to remove illegal content as soon as they become aware of it but take additional proactive measures with regards to the most harmful 'priority' forms of online illegal content.

The priority illegal offences currently listed in the draft bill are terrorism and child sexual abuse and exploitation, with powers for the DCMS Secretary of State to designate further priority offences with Parliament's approval via secondary legislation once the bill becomes law. In addition to terrorism and child sexual exploitation and abuse, the further priority offences to be written onto the face of the bill include illegal behaviour which has been outlawed in the offline world for years, as well as newer illegal activity which has emerged alongside the ability to target individuals or communicate en masse online.

This list has been developed using the following criteria: (i) the prevalence of such content on regulated services, (ii) the risk of harm being caused to UK users by such content and (iii) the severity of that harm.

The offences will fall in the following categories:

  • Encouraging or assisting suicide
  • Offences relating to sexual images i.e. revenge and extreme pornography
  • Incitement to and threats of violence
  • Hate crime
  • Public order offences - harassment and stalking
  • Drug-related offences
  • Weapons / firearms offences
  • Fraud and financial crime
  • Money laundering
  • Controlling, causing or inciting prostitutes for gain
  • Organised immigration offences

 

 

Silencing adult businesses...

US senate reintroduces EARN IT internet censorship bill targeting adult content


Link Here 30th January 2022
Adult industry, sex worker and digital rights advocates have unanimously sounded the alarm about the state censorship and privacy implications of the revived EARN IT Act, which was re-introduced to the US Senate by Richard Blumenthal.

The bill is a broad overhaul of Section 230 protections to strip platforms of immunity for third-party uploaded content. The expected industry response is for social media and internet services to totally ban swathes of controversial content rather than attempt to identify just those posts that contravene the law.

As XBIZ has reported, EARN IT will also open the way for politicians to define the legal categories of pornography and pornographic website as they wish, which is a cherished goal of morality campaigners seeking to reintroduce obscenity prosecutions for content currently protected by free speech jurisprudence.

EARN IT has been championed by top religiously-motivated anti-porn crusading groups such as NCOSE (formerly Morality in Media).

 

 

Upper Class Twit of the Year...

The Earl of Erroll spouts to Parliament that anal sex and blowjobs are not how to go about wooing a woman


Link Here 27th January 2022
Full story: Online Safety Bill Draft...UK Government legislates to censor social media
The House of Lords has given a second reading to the Digital Economy Act 2017 (Commencement of Part 3) Bill [HL] which is attempting to resurrect the failed law requiring age verification for porn websites. The bill reads:

Commencement of Part 3 of the Digital Economy Act 2017

The Secretary of State must make regulations under section 118(6) (commencement) of the Digital Economy Act 2017 to ensure that by 20 June 2022 all provisions under Part 3 (online pornography) of that Act have come into force.

The full reasoning for the law never being brought into force has not been published, but it is most likely due to the law totally failing to address the issue of keeping porn users' data safe from scammers, blackmailers and thieves. It also seems that the government would prefer to have general rules under which to harangue websites for not keeping children safe from harm, rather than set an expensive bunch of film censors to seeking out individual transgressors.

The 2nd reading debate featured the usual pro-censorship peers queuing up to have a whinge about the availability of porn. And as is always the case, most of them haven't bothered to think about the effectiveness of the measures, their practicality or their acceptability. And of course nothing was said about the safety of porn users who foolishly trust their very dangerous identity data to porn websites and age verification companies.

Merlin Hay, the Earl of Erroll seems to be something of a shill for those age verification companies. He chairs the Digital Policy Alliance ( dpalliance.org.uk ) which acts as a lobby group for age verifiers. He excelled himself in the debate with a few words that have been noticed by the press. He spouted:

What really worries me is not just extreme pornography, which has quite rightly been mentioned, but the stuff you can access for free -- what you might call the teaser stuff to get you into the sites. It normalises a couple of sexual behaviours which are not how to go about wooing a woman. Most of the stuff you see up front is about men almost attacking women. It normalises -- to be absolutely precise about this, because I think people pussyfoot around it -- anal sex and blowjobs. I am afraid I do not think that is how you go about starting a relationship.



