28th February 2021

Facebook's Response to the Oversight Board's First Set of Recommendations. See article from about.fb.com
Google offers a series of supervisory options for YouTube for different ages of children

25th February 2021

See article from blog.youtube
Google has decided to offer a protected mode for YouTube with a range of monitoring and supervisory options for parents. Google explains in a blog post: In the coming months, we'll launch a new experience in beta for parents to allow
their children to access YouTube through a supervised Google Account . This supervised experience will come with content settings and limited features. We'll start with an early beta for families with kids under the age of consent to test and provide
feedback, as we continue to expand and improve the experience. We know that every parent has a different parenting style and that every child is unique and reaches different developmental stages at different times. That's why
we'll give parents the ability to choose from 3 different content settings on YouTube.
- Explore: For children ready to move on from YouTube Kids and explore content on YouTube, this setting will feature a broad range of videos generally suitable for viewers ages 9+, including vlogs, tutorials, gaming videos, music clips, news, educational content and more.
- Explore More: With content generally suitable for viewers ages 13+, this setting will include an even larger set of videos, and also live streams in the same categories as "Explore."
- Most of YouTube: This setting will contain almost all videos on YouTube, except for age-restricted content, and it includes sensitive topics that may only be appropriate for older teens.
This option was designed for parents who think their children are ready to explore the vast universe of YouTube videos. We will use a mix of user input, machine learning and human review to determine which videos are included. We know
that our systems will make mistakes and will continue to evolve over time. We recommend parents continue to be involved in guiding and supporting their child's experience on YouTube. To help parents get started, we've developed a
guide in partnership with National PTA , Parent Zone and Be Internet Awesome . We'll also launch an ongoing campaign that features creators discussing themes like bullying and harassment, misinformation, digital well-being and more.
We understand the importance of striking a balance between empowering tweens and teens to more safely gain independence, while offering parents ways to set controls. In addition to choosing the content setting, parents will be able to
manage watch and search history from within their child's account settings. Parents can also use other controls offered by Google's Family Link , including screen timers. We'll continue adding new parental controls over time, such as blocking content.
When a parent grants access to YouTube, their child's experience will feel much like regular YouTube, but certain features will be disabled to protect younger audiences. For example, we won't serve personalized ads or ads in
certain categories . At launch, we'll also disable in-app purchases, as well as creation and comments features. Since self-expression and community are integral parts of YouTube and children's development, over time we'll work with parents and experts to
add some of these features through an age-appropriate and parent-controlled approach.
25th February 2021

Rather than genuinely tackling the thornier issues, we're seeing calls for more regulations online as a quick fix. By Ruth Smeeth. See article from indexoncensorship.org
The BBC is a founding partner of a 'smart' new censorship control technology nominally targeting 'fake news', but surely it will also censor views dissenting from social justice orthodoxy

23rd February 2021

See press release from news.microsoft.com
A group of influential technology and media companies has partnered to form the Coalition for Content Provenance and Authenticity (C2PA), a Joint Development Foundation project established to address the supposed prevalence of disinformation,
misinformation and online content fraud through developing technical standards for certifying the source and history or provenance of media content. Founding members Adobe, Arm, BBC, Intel, Microsoft and Truepic seek to establish
a standardized provenance solution with the goal of combating misleading content. C2PA member organizations will work together to develop content provenance specifications for common asset types and formats to enable publishers, creators and consumers to
trace the origin and evolution of a piece of media, including images, videos, audio and documents. These technical specifications will include defining what information is associated with each type of asset, how that information is presented and stored,
and how evidence of tampering can be identified. The C2PA's open standard will give platforms a method to preserve and read provenance-based digital content. Because an open standard can be adopted by any online platform, it is
critical to scaling trust across the internet. In addition to the inclusion of varied media types at scale, C2PA is driving an end-to-end provenance experience from the capturing device to the information consumer. Collaboration with chipmakers, news
organizations, and software and platform companies is critical to facilitate a comprehensive provenance standard and drive broad adoption across the content ecosystem.
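How such a provenance standard might work is easy to sketch in outline: hash the asset, attach assertions about who created or edited it, and sign the result so that any consumer can detect later tampering. The snippet below is a minimal illustration of that idea in Python using the third-party cryptography package; it is not the C2PA specification itself, and the manifest field names are invented for the example.

```python
# Illustrative sketch only -- not the actual C2PA specification.
# It shows the general idea behind provenance metadata: bind a content
# hash and some assertions about origin to a digital signature that
# consumers can later verify. Field names here are invented.

import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(asset_bytes: bytes, claims: dict, key: Ed25519PrivateKey) -> dict:
    """Create a signed provenance manifest for a media asset."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,  # e.g. who captured/edited the asset and when
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

def verify_manifest(asset_bytes: bytes, signed: dict, public_key) -> bool:
    """Check that the asset matches the manifest and the signature is valid."""
    manifest = signed["manifest"]
    if hashlib.sha256(asset_bytes).hexdigest() != manifest["asset_sha256"]:
        return False  # asset was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except Exception:
        return False

# Example usage with a throwaway key pair
key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
signed = make_manifest(photo, {"creator": "example newsroom", "tool": "example camera app"}, key)
print(verify_manifest(photo, signed, key.public_key()))            # True
print(verify_manifest(photo + b"edit", signed, key.public_key()))  # False: tampering detected
```

The real specification layers much more on top (certificate chains, edit histories, embedding the manifest in the file itself), but the hash-plus-signature core is what lets a consumer trace the origin and evolution of a piece of media.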
Thailand warns its sex workers that going online during the Covid crisis may result in prosecution for 'obscenity'
22nd February 2021

See article from xbiz.com
As more Thais have opened OnlyFans accounts to support themselves during the pandemic, legal experts in the Southeast Asian kingdom are warning about the uncertain status of the content produced given the country's notoriously harsh obscenity laws. The Bangkok Post recently published a profile of local people who had entered sex work for the first time during the pandemic, after opening an account on the popular premium service.
But in Thailand, the report explained, publishing obscene content online is punishable by up to five years imprisonment, a B100,000 (£2500) fine, or both. Furthermore, as pornography is considered by law to be a disruption to peace in society,
anyone can file a complaint to snitch on or settle scores with people they don't like. The newspaper spoke with legal expert Natchapol Jittirat, a lecturer at Chulalongkorn University's Faculty of Law, who said adult content creators cannot demand
protection under the law when their photos and videos are unlawfully disseminated by others, as under Thai laws, the content is considered 'obscene' material. And although some people may consider OnlyFans a private space, Jittirat opined that the
courts will still consider it to be a public space, as the platform can be easily accessed by anyone with an internet connection. If creators want to be safe from legal action, they will have to operate outside the Thai courts' jurisdiction and the content must not have any consequences in the kingdom.
Omegle app comes under fire as children aren't adequately blocked from taking part
20th February 2021

See article from telegraph.co.uk
A website that matches people with strangers to talk should be banned in the UK, according to the pro-censorship campaigner John Carr. The Omegle site, which randomly pairs strangers to talk over web cameras, has come under fire this week after reports
of children being paired with adults in inappropriate conversations. A BBC investigation also found numerous adult men naked or performing sexual acts on camera on the site. Carr, who has advised the Government on child online safety, said the site's
continued lack of meaningful age checks meant it should be blocked to prevent UK children wandering onto it. Omegle, which has the advertising catchline 'talk to strangers' and has exploded in popularity during lockdown, says its services are for
over-18s or over-13s with parental permission. The website's founder, Leif K-Brooks responded to the BBC: While perfection may not be possible, Omegle's moderation makes the site significantly cleaner and has also
generated reports that have led to the arrest and prosecution of numerous predators. Oliver Dowden, the Culture Secretary, said he was considering the situation as his department draws up Duty of Care legislation. He said:
[The] allegations here are very serious. We are looking into this as we develop ... new laws to tackle harmful online content.
20th February 2021

Congressional Democrats have begun discussions with the White House on ways to crack down on Big Tech, including making social media companies accountable for the spread of disinformation. See article from reuters.com
Italy goes after Cloudflare's DNS service to block pirate internet TV

18th February 2021

See article from reclaimthenet.org
Traditionally, the authorities look to ISPs to implement censorship orders via DNS blocking. However, there are third-party DNS providers, often reachable via encrypted DNS, that work around ISP blocks. Now the Italian courts have decided to order DNS provider
Cloudflare to block a couple of pirate internet TV services. Last year, Sky Italy and the top tier Italian soccer league Serie A took Cloudflare to court, hoping the company would block access to two IPTV services, ENERGY IPTV and IPTV THE BEST.
Cloudflare lost both cases. Cloudflare then appealed the injunctions, arguing that it only acts as an intermediary for web content. The court was not convinced by the arguments. In the ruling, the court said that by facilitating the sites'
availability, Cloudflare is indeed involved in copyright infringement. The court also said that the blocking should be dynamic, meaning that if the sites change IP addresses, Cloudflare should still block them.
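To see why an order aimed at a public DNS provider has teeth, it helps to see how easily a client can sidestep its ISP's resolver. The sketch below (Python, querying a placeholder domain) compares the system resolver with a lookup against Cloudflare's public DNS-over-HTTPS endpoint; it is purely illustrative and assumes that JSON endpoint is reachable from the client.

```python
# Minimal sketch of why resolver choice matters for DNS-based blocking.
# The domain below is a placeholder, not one of the services named in the case.

import json
import socket
import urllib.request

DOMAIN = "example.com"  # placeholder domain

# 1) System resolver: normally the ISP's DNS, which is where ISP-level
#    blocking orders are usually applied.
isp_answer = socket.gethostbyname(DOMAIN)
print("system/ISP resolver:", isp_answer)

# 2) Cloudflare's public DNS-over-HTTPS endpoint: an encrypted query that
#    bypasses the ISP resolver entirely -- which is why courts have started
#    targeting the DNS provider itself.
req = urllib.request.Request(
    f"https://cloudflare-dns.com/dns-query?name={DOMAIN}&type=A",
    headers={"accept": "application/dns-json"},
)
with urllib.request.urlopen(req) as resp:
    doh = json.loads(resp.read())
print("Cloudflare DoH:", [a["data"] for a in doh.get("Answer", [])])
```

If the second lookup still answers after an ISP block, the block is trivially bypassed, hence the pressure on the DNS provider to do the filtering itself.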
Facebook blocks Australians from accessing or sharing news sources
18th February 2021

17th February 2021. See article from bbc.co.uk
The internet has offered plentiful cheap and mobile entertainment for everyone around the world, and one of the consequences is that people on average choose to spend a lot less on newspaper journalism. This reality is clearly causing a lot of pain to
the newspaper industry, but also to national governments around the world who would prefer their peoples to get their news from state-approved sources. But governments don't really want to pay for the 'mainstream media' themselves,
and so are tempted to look to social media giants to foot the bill. And indeed the Australian government is seeking to do exactly that. However the economics doesn't really support the notion that social media should pay for the news media. From a purely
business standpoint, there is no case for Facebook needing to pay for links; if anything, Facebook could probably charge for the service if it wanted to. So Facebook has taken a stance and decided that it will not be paying for news in Australia.
And in fact it has now banned Australian news sources from appearing in the news feeds of Australian users and Facebook has also blocked local users from linking to any international news sources. And it seems that this has annoyed the Australian
Government. Australian Prime Minister Scott Morrison has said his government will not be intimidated by Facebook blocking news feeds to users. He described the move to unfriend Australia as arrogant and disappointing. Australians on Thursday
woke up to find that Facebook pages of all local and global news sites were unavailable. People outside the country are also unable to read or access any Australian news publications on the platform. Several government health and emergency pages were
also blocked. Facebook later asserted this was a mistake and many of these pages are now back online.

Update: Facebook makes its case

18th February 2021. See article from about.fb.com by William Easton, Managing Director, Facebook Australia & New Zealand
In response to Australia's proposed new Media Bargaining law, Facebook will restrict publishers and people in Australia from sharing or viewing Australian and international news content. The proposed law fundamentally
misunderstands the relationship between our platform and publishers who use it to share news content. It has left us facing a stark choice: attempt to comply with a law that ignores the realities of this relationship, or stop allowing news content on our
services in Australia. With a heavy heart, we are choosing the latter. This discussion has focused on US technology companies and how they benefit from news content on their services. We understand many will ask why the platforms
may respond differently. The answer is because our platforms have fundamentally different relationships with news. Google Search is inextricably intertwined with news and publishers do not voluntarily provide their content. On the other hand, publishers
willingly choose to post news on Facebook, as it allows them to sell more subscriptions, grow their audiences and increase advertising revenue. In fact, and as we have made clear to the Australian government for many months, the
value exchange between Facebook and publishers runs in favor of the publishers -- which is the reverse of what the legislation would require the arbitrator to assume. Last year Facebook generated approximately 5.1 billion free referrals to Australian
publishers worth an estimated AU$407 million. For Facebook, the business gain from news is minimal. News makes up less than 4% of the content people see in their News Feed. Journalism is important to a democratic society, which is
why we build dedicated, free tools to support news organisations around the world in innovating their content for online audiences. Over the last three years we've worked with the Australian Government to find a solution that
recognizes the realities of how our services work. We've long worked toward rules that would encourage innovation and collaboration between digital platforms and news organisations. Unfortunately this legislation does not do that. Instead it seeks to
penalise Facebook for content it didn't take or ask for. We were prepared to launch Facebook News in Australia and significantly increase our investments with local publishers, however, we were only prepared to do this with the
right rules in place. This legislation sets a precedent where the government decides who enters into these news content agreements, and ultimately, how much the party that already receives value from the free service gets paid. We will now prioritise
investments to other countries, as part of our plans to invest in new licensing news programs and experiences. Others have also raised concern. Independent experts and analysts around the world have consistently outlined problems
with the proposed legislation. While the government has made some changes, the proposed law fundamentally fails to understand how our services work. Unfortunately, this means people and news organisations in Australia are now
restricted from posting news links and sharing or viewing Australian and international news content on Facebook. Globally, posting and sharing news links from Australian publishers is also restricted. To do this, we are using a combination of
technologies to restrict news content and we will have processes to review any content that was inadvertently removed. For Australian publishers this means:
- They are restricted from sharing or posting any content on Facebook Pages
- Admins will still be able to access other features from their Facebook Page, including Page insights and Creator Studio
- We will continue to provide access to all other standard Facebook services, including data tools and CrowdTangle
For international publishers this means:
For our Australian community this means:
For our international community this means:
The changes affecting news content will not otherwise change Facebook's products and services in Australia. We want to assure the millions of Australians using Facebook to connect with friends and family, grow their businesses and
join Groups to help support their local communities, that these services will not change. We recognise it's important to connect people to authoritative information and we will continue to promote dedicated information hubs like
the COVID-19 Information Centre , that connects Australians with relevant health information. Our commitment to remove harmful misinformation and provide access to credible and timely information will not change. We remain committed to our third-party
fact-checking program with Agence France-Presse and Australian Associated Press and will continue to invest to support their important work. Our global commitment to invest in quality news also has not changed. We recognise that
news provides a vitally important role in society and democracy, which is why we recently expanded Facebook News to hundreds of publications in the UK. We hope that in the future the Australian government will recognise the
value we already provide and work with us to strengthen, rather than limit, our partnerships with publishers.
Cambodia is demanding that ISPs route their internet traffic through a state censorship gateway
18th February 2021

See article from hrw.org
The Cambodian government's new National Internet 'Gateway' will enable the government to increase online surveillance, censorship, and control of the internet, Human Rights Watch have said. On February 16, 2021, Prime Minister Hun Sen signed the
decree on the Establishment of the National Internet Gateway. The decree requires all internet traffic in Cambodia to be routed through a censorship hub. It would allow for blocking and disconnecting [of] all network connections that affect
safety, national revenue, social order, dignity, culture, tradition and customs. The grounds for action are both overbroad and undefined, permitting arbitrary and abusive application of blocking and disconnecting powers. Phil Robertson, Human
Rights Watch deputy Asia director, said:

Prime Minister Hun Sen struck a dangerous blow against internet freedom and e-commerce in Cambodia by expanding the government's control over the country's internet. Foreign governments, tech companies, e-commerce businesses, and other private actors should urgently call on the government to reverse the adoption of this harmful sub-decree.
The government decree requires ISPs in Cambodia
to reroute their services through the National Internet Gateway within the next 12 months, before February 2022.
Parler relaunches after being taken down following the storming of the US Capitol
17th February 2021

See article from eu.usatoday.com
Parler, the self-proclaimed free-speech platform taken offline after the riot at the U.S. Capitol last month, says it has relaunched. Mark Meckler is serving as interim CEO of Parler after its previous top executive was fired by the social media
platform. Parler had been pulled from app stores run by Apple and Google and dropped by Amazon's web hosting services after the incident at the US Capitol. Parler was one of the platforms used by supporters of former President Donald Trump to
coordinate and chronicle the event.
Google changes its policies to ban advertising for any form of sex where money or gifts change hands
15th February 2021

See article from support.google.com
A previously announced policy came into effect on 11th February 2021. Google explains: In February 2021, the Google Ads Adult content policy will be updated. All prohibited adult content will move to the Inappropriate content
policy. Additionally, we will prohibit compensated dating or sexual arrangements where one participant is expected to provide money, gifts, financial support, mentorship, or other valuable benefits to another participant such as 'Sugar' dating.
The following categories will move from the Adult content policy into the Inappropriate content policy:
- Sexually explicit content
- Child sexual abuse imagery
- Mail-order brides
- Adult themes in family content
Violations of this policy will not lead to immediate account suspension without prior warning. A warning will be issued at least 7 days prior to any suspension of your account.
Police from the Indian state of Uttar Pradesh set up a team to snoop on people's porn searches
15th February 2021

See article from theswaddle.com
Police from the Indian state of Uttar Pradesh announced the creation of a team to snoop on people's internet searches for pornographic material. The force has hired a company to surveil searches and keep data on the people who search for porn content.
In India, pornography is banned by the government, but the initial stages of the lockdown last year saw a 95% rise in viewership. The U.P. police's internet search tracking plan is being piloted in six of the state's districts. The monitoring will
now be carried out across the state, which currently has about 11.6 million internet users. The U.P. police has outsourced its monitoring of porn searches to the company Oomuph. If Oomuph spots an internet user consuming pornography, the police's
analytics team will receive information on the user and search. Porn searches on the internet will also now yield an awareness message that searchers are being tracked by the police.
Twitter bans Project Veritas after it reveals uncomfortable details about Twitter censorship
13th February 2021

See article from reclaimthenet.org
Twitter has permanently banned the investigative reporting outlet Project Veritas which recently published several leaked videos exposing executives from Big Tech companies discussing censorship, hate speech, and more. Project Veritas had over 735,000
followers when it was suspended. The suspension follows Project Veritas's Twitter account being locked earlier today after it posted a video clip featuring a Project Veritas journalist confronting Facebook's Vice President (VP) of Integrity Guy Rosen
over comments that he made about Facebook's hate speech detection technology in a leaked video. Before publishing the leaked Facebook video, Project Veritas had also published several leaked videos from internal Twitter meetings including one
video where CEO Jack Dorsey discussed much bigger Twitter censorship measures after Trump's ban from the platform.
Poland publishes a bill aimed at preventing social media companies from unfairly taking down people's accounts
13th February 2021

See article from natlawreview.com
The Polish government has published a new draft bill on freedom of speech on social media platforms. The Minister of Justice said that freedom of speech and debate is the cornerstone of democracy and censoring statements, especially online, where most
political discussions and ideological disputes take place these days, infringes on those freedoms. Therefore, Poland should have regulations in place to prevent abuse on the part of internet tycoons, which are increasingly limiting this freedom under the
auspices of protecting it. The draft act envisages the appointment of the Freedom of Speech Council, which it claims would safeguard the constitutional freedom of expression on social networking sites. The council would comprise law and new media
'experts' and it would be appointed by the lower chamber of the Polish Parliament for a six-year term of office, by a qualified (3/5) majority. The draft act also provides that if a website blocks an account or deletes a certain entry, even though
its content does not violate/infringe upon Polish law, the user will be able to lodge a complaint with the service provider. The provider must confirm that the complaint has been received and will then have 48 hours to consider it. If the provider
dismisses the complaint, the user will be able to appeal that decision to the Freedom of Speech Council, which will consider the appeal within seven days. The council will proceed in closed sessions. It will not take evidence from witnesses, parties,
expert opinions and visual inspections, and the evidentiary proceedings before the council will boil down to evidence submitted by the parties to the dispute. If the council deems the appeal justified, it may order the website to immediately
restore the blocked content or account. Thereafter, having received the order, the provider will have no more than 24 hours to comply. Failure to comply with the council's order may lead to large fines.
13th February 2021

Apple, Spotify, and the impossible problem of moderating shows. By Ashley Carman. See article from theverge.com
Floella Benjamin attempts to resuscitate internet porn age verification in a Domestic Abuse Bill
11th February 2021

See Government statement about age verification (11th January 2021) from questions-statements.parliament.uk
See attempt to resuscitate porn age verification in the Domestic Abuse Bill (10th February 2021) from hansard.parliament.uk
Campaigners have been continuing their efforts to revive the deeply flawed and one-sided porn age verification scheme ever since it was abandoned by the Government in October 2019. The Government was asked about the possibility
of restoring it in January 2021 in the House of Commons. Caroline Dinenage responded for the government: The Government announced in October 2019 that it will not commence the age verification provisions of Part 3 of
the Digital Economy Act 2017 and instead deliver these protections through our wider online harms regulatory proposals. Under our online harms proposals, we expect companies to use age assurance or age verification technologies to
prevent children from accessing services which pose the highest risk of harm to children, such as online pornography. The online harms regime will capture both the most visited pornography sites and pornography on social media, therefore covering the
vast majority of sites where children are most likely to be exposed to pornography. Taken together we expect this to bring into scope more online pornography currently accessible to children than would have been covered by the narrower scope of the
Digital Economy Act. We would encourage companies to take steps ahead of the legislation to protect children from harmful and age inappropriate content online, including online pornography. We are working closely with stakeholders
across industry to establish the right conditions for the market to deliver age assurance and age verification technical solutions ahead of the legislative requirements coming into force. In addition, Regulations transposing the
revised Audiovisual Media Services Directive came into force on 1 November 2020 which require UK-established video sharing platforms to take appropriate measures to protect minors from harmful content. The Regulations require that the most harmful
content is subject to the strongest protections, such as age assurance or more technical measures. Ofcom, as the regulatory authority, may take robust enforcement action against video sharing platforms which do not adopt appropriate measures.
Now during the passage of the Domestic Abuse Bill in the House of Lords, Floella Benjamin attempted to revive the age verification requirement by proposing the following amendment: Insert the following new
Clause -- Impact of online pornography on domestic abuse

(1) Within three months of the day on which this Act is passed, the Secretary of State must commission a person appointed by the Secretary of State to investigate the impact of access to online pornography by children on domestic abuse.

(2) Within three months of their appointment, the appointed person must publish a report on the investigation which may include recommendations for the Secretary of State.

(3) As part of the investigation, the appointed person must consider the extent to which the implementation of Part 3 of the Digital Economy Act 2017 (online pornography) would prevent domestic abuse, and may make recommendations to the Secretary of State accordingly.

(4) Within three months of receiving the report, the Secretary of State must publish a response to the recommendations of the appointed person.

(5) If the appointed person recommends that Part 3 of the Digital Economy Act 2017 should be commenced, the Secretary of State must appoint a day for the coming into force of that Part under section 118(6) of the Act within the timeframe recommended by the appointed person.
Member's explanatory statement This amendment would require an investigation into any link between online pornography and
domestic abuse with a view to implementing recommendations to bring into effect the age verification regime in the Digital Economy Act 2017 as a means of preventing domestic abuse.
Floella Benjamin made a long speech supporting the
censorship measure and was supported by a number of peers. Of course they all argued only from the 'think of the children' side of the argument and not one of them mentioned trashed adult businesses and the risk to porn viewers of being outed, scammed,
blackmailed etc. See Floella Benjamin's speech from hansard.parliament.uk
Australia's adult trade association argues against mainstream adult material being considered as an online 'harm'
8th February 2021

See article from businessinsider.com.au
See consultation response [pdf] from eros.org.au
Australia's trade association for the adult industry has slammed a proposed law that they claim would harm the livelihood of their workers and limit Australians' sexual expression under the guise of protecting citizens. The Eros Association has made a
submission to the consultation for the government's Online Safety Act, a law that expands the powers of the eSafety Commissioner to censor what the government considers are online harms. The Australian internet censorship proposal includes enabling
censors to force the takedown of content within statutory timeframes, removal of accounts and even delisting from search engines. The proposed censorship remit includes sexually explicit content from consenting adults. Note that Australia has always had
a problem with even vanilla hardcore and the vast majority of the country bans such material from sale in sex shops and from being hosted on Australian websites. Eros Association policy and campaigns advisor Jarryd Bartle commented:
You could have your business ruined in 24 hours if a complaint is made and a removal notice is issued, your content is taken down, accounts taken down and website taken down in the case of fetish content. The Eros
consultation submission notes: It is the position of Eros that:
- The online content scheme under Part 9 of the Bill should be removed as it is not related to issues of online safety and is likely to harm the livelihood of sex workers, adult media performers and adults-only businesses.
- The role of the eSafety Commissioner should be to focus on non-consensual, abusive and harmful content and not imagery of consensual sexual activity between adults.
Under this Bill, adult media subject to online content regulation likely encompasses advertising for sex work services, adult entertainment and adult retailing, impacting a broad range of industries. As drafted, the
online content scheme would provide for the removal of many forms of adult content impacting the livelihood of producers, sex workers, adult retailers and adult entertainment venues. The scheme is so broad reaching it would also
limit the sexual expression of Australians online whether or not they are posting sexually explicit content for profit. It is Eros' view that the proposed scheme is not in keeping with community standards. Previous government
attempts to filter sexually explicit content online have proven very unpopular, and were widely viewed as an infringement on freedom of speech. The overwhelming majority of Australian pornography users note that adult media has
had a 'positive' or 'neutral' impact on their life. It is therefore inappropriate to regulate this content within a Bill designed to tackle online harms.
Pornhub comes under scrutiny in the Canadian Parliament
6th February 2021

2nd February 2021. See article from xbiz.com
The Committee on Access to Information, Privacy and Ethics in Canada's House of Commons held a hearing concerning allegations made against Pornhub's content moderation policies. The allegations featured in a New York Times article by Nicholas Kristof and
were based on the religious group Exodus Cry's Traffickinghub campaign against the tube site and its parent company MindGeek. MindGeek is headquartered in Luxembourg, although many of its operations are run from Montreal and the two people identified by the
New York Times as owners are Canadian nationals. The committee heard from a witness who retold her story of having difficulties getting Pornhub to remove a video she had shot of herself as a teenager, which she then sent to a boyfriend and which
was allegedly repeatedly uploaded onto the tube site by unidentified third parties. The committee also heard from New York lawyer Michael Bowe, who has previously represented disgraced evangelist Jerry Falwell Jr. and Donald Trump. Bowe made
repeated claims about a supposed conspiracy masterminded by MindGeek and its agents and allies to gaslight public opinion about the organized international campaign against Pornhub. Bowe also asked Canada to change its laws to make MindGeek
accountable, and stated that in his opinion the company committed criminal offenses.

Update: Pornhub Releases Statement About Content Moderation Changes

6th February 2021. See statement from help.pornhub.com
Going forward, we will only allow properly identified users to upload content. We have banned downloads. We have made some key expansions to our moderation process, and we recently launched a Trusted Flagger Program with dozens of non-profit
organizations. Earlier this year, we also partnered with the National Center for Missing & Exploited Children, and next year we will issue our first transparency report. Full details on our expanded policies can be found below.
If you wish to report any content that violates our terms of service, including CSAM or other illegal content, please click this link.

1. Verified Uploaders Only

Effective
immediately, only content partners and people within the Model Program will be able to upload content to Pornhub. In the new year, we will implement a verification process so that any user can upload content upon successful completion of identification
protocol.

2. Banning Downloads

Effective immediately, we have removed the ability for users to download content from Pornhub, with the exception of paid downloads within the verified Model Program.
In tandem with our fingerprinting technology, this will mitigate the ability for content already removed from the platform to be able to return.

3. Expanded Moderation

We have worked to create
comprehensive measures that help protect our community from illegal content. In recent months we deployed an additional layer of moderation. The newly established "Red Team" will be dedicated solely to self-auditing the platform for potentially
illegal material. The Red Team provides an extra layer of protection on top of the existing protocol, proactively sweeping content already uploaded for potential violations and identifying any breakdowns in the moderation process that could allow a piece
of content that violates the Terms of Service. Additionally, while the list of banned keywords on Pornhub is already extensive, we will continue to identify additional keywords for removal on an ongoing basis. We will also regularly monitor search terms
within the platform for increases in phrasings that attempt to bypass the safeguards in place. Pornhub's current content moderation includes an extensive team of human moderators dedicated to manually reviewing every single upload, a thorough system for
flagging, reviewing and removing illegal material, robust parental controls, and utilization of a variety of automated detection technologies. These technologies include:
- CSAI Match, YouTube's proprietary technology for combating Child Sexual Abuse Imagery online
- Content Safety API, Google's artificial intelligence tool that helps detect illegal imagery
- PhotoDNA, Microsoft's technology that aids in finding and removing known images of child exploitation
- Vobile, a fingerprinting software that scans any new uploads for potential matches to unauthorized materials to protect against banned videos being re-uploaded to the platform.
If a user encounters a piece of content they think may violate the Terms of Service, we encourage them to immediately flag the video or fill out the Content Removal Request Form , which is linked on every page.
Our policy is to immediately disable any content reported in the Content Removal Request Form for review.

4. Trusted Flagger Program

We recently launched a Trusted Flagger
Program, a new initiative empowering non-profit partners to alert us of content they think may violate our Terms of Service. The Trusted Flagger Program consists of more than 40 leading non-profit organizations in the space of internet and child safety.
Our partners have a direct line of access to our moderation team, and any content identified by a Trusted Flagger is immediately disabled. Partners include: Cyber Civil Rights Initiative (United States of America), National Center for Missing &
Exploited Children (United States of America), Internet Watch Foundation (United Kingdom), Stopline (Austria), Child Focus (Belgium), Safenet (Bulgaria), Te Protejo Hotline - I Protect You Hotline (Colombia), CZ.NIC - Stop Online (Czech Republic ), Point
de Contact (France), Eco-Association of the Internet Industry (Germany), Safeline (Greece), Save the Children (Iceland), Latvian Internet Association (Latvia), Meldpunt Kinderporno - Child Pornography Reporting Point (Netherlands), Centre for Safer
Internet Slovenia (Slovenia), FPB Hotline - Film and Publication Board (South Africa), ECPAT (Sweden), ECPAT (Taiwan).

5. NCMEC Partnership

Last year, we voluntarily partnered with the National
Center for Missing & Exploited Children (NCMEC) in order to transparently report and limit incidents of CSAM on our platform. In early 2021, NCMEC will release our total number of reported CSAM incidents alongside numbers from other major social and
content platforms. We will also continue to work with law enforcement globally to report and curb any issues of illegal content.

6. Transparency Report

In 2021, we will release a Transparency Report
detailing our content moderation results from 2020. This will identify not just the full number of reports filed with NCMEC, but also other key details related to the trust and safety of our platform. Much like Facebook, Instagram, Twitter and other tech
platforms, Pornhub seeks to be fully transparent about the content that should and should not appear on the platform. This will make us the only adult content platform to release such a report.

7. Independent Review
As part of our commitment, in April 2020 we hired the law firm of Kaplan Hecker & Fink LLP to conduct an independent review of our content compliance function, with a focus on meeting legal standards and eliminating all
non-consensual content, CSAM and any other content uploaded without the meaningful consent of all parties. We requested that the goal of the independent review be to identify the requisite steps to achieve a "best-in-class" content compliance
program that sets the standard for the technology industry. Kaplan Hecker & Fink LLP is continuing its review, but has already identified and categorized a comprehensive inventory of remedial recommendations, supported by dozens of additional
sub-recommendations, in addition to the steps identified above, based on an evaluation and assessment of our current policies and practices. Kaplan Hecker & Fink LLP is soliciting information to assist with its review and in developing
recommendations regarding our compliance policies and procedures. If you would like to provide compliance suggestions, you can do so here.

Update: Pornhub: Canadian MPs Finally Invite Sex Worker Advocates

20th April 2021. See article from xbiz.com

Biased Canadian ethics committee shamed into listening to the other side of the argument in a diatribe against Pornhub.
4th February 2021

VPN users are reporting that their chats no longer show up on Twitch streams. See article from techradar.com
China bans private individuals, bloggers and citizen journalists from reporting news
2nd February 2021

See article from rfa.org
China's internet censor has announced a further step in the censorship of online news reporting. China already requires any organization publishing news or current affairs-related content to hold a license from the country's media censor. Now in the
latest step, China will ban private individuals, bloggers and citizen journalists from posting news-related information online without a license. The move was announced by Zhuang Rongwen, deputy director of the ruling Chinese Communist Party (CCP)'s
central propaganda department, during a Jan. 29 online conference. Zhuang told the conference: We must control the source of online texts, and resolutely close any loopholes. The standardized management of citizen journalism should be a
priority, with increased punishments for offenders and actual teeth for regulators. China's Cyberspace Administration also included the announcement in an official statement on its website.
Promotional piece points out that websites can also recognise and block well-known VPNs if they choose to
1st February 2021

See article from techtimes.com
I used the well-known and well-regarded Vypr VPN to evade website blocking mandated by the Thai government's internet censor. The VPN worked well to evade these blocks implemented by Thai ISPs. However the VPN was a total failure when it came to
watching the BBC's iPlayer from Thailand. Although the VPN made the request appear to come from a UK IP address, the BBC recognised this UK IP address as being owned by a VPN and promptly blocked the request itself. The answer
is to use a VPN that offers private IP addresses that the likes of the BBC don't know are owned or used by VPNs. The popular NordVPN service is rather coy about the uses of its dedicated IP addresses though. It explains:
The advantages of the Virtual Private Network often depend on the type of service you choose, so today we rely in particular on NordVPN which is one of the best. Let's see why it is convenient to choose the virtual private network,
both in the corporate and non-corporate environments. In a business environment, having a dedicated IP address allows you to access private servers or remote systems in complete safety. In fact, network administrators can specify
a list of authorized IPs and only these have the possibility to access, while all the others remain excluded. In this way, from anywhere in the world you can enter your server without running the risk of intrusion by malicious people and thus protecting
the data.
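The kind of blocking the BBC applied here is conceptually simple: compare the connecting client's IP address against lists of addresses known to belong to VPN or datacentre providers. Below is a minimal server-side sketch of that check in Python; the address ranges are placeholders from the documentation blocks, not real VPN ranges, and real services would draw on commercial IP-intelligence feeds rather than a hard-coded list.

```python
# Minimal sketch of server-side VPN detection by IP range, assuming a
# hypothetical hard-coded list. Real sites use commercial IP-intelligence
# feeds mapping addresses to VPN and datacentre providers.

import ipaddress

# Hypothetical example ranges standing in for VPN exit pools (placeholders)
KNOWN_VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # documentation range used as a stand-in
    ipaddress.ip_network("198.51.100.0/24"),  # another placeholder range
]

def is_known_vpn(client_ip: str) -> bool:
    """Return True if the client's address falls inside a known VPN range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

def handle_request(client_ip: str) -> str:
    """Block requests from recognised VPN addresses, serve the rest."""
    if is_known_vpn(client_ip):
        return "403: this service is not available via VPN"
    return "200: streaming allowed"

print(handle_request("203.0.113.42"))  # blocked: inside a listed VPN range
print(handle_request("192.0.2.7"))     # allowed: not in any listed range
```

A dedicated IP address of the kind NordVPN sells works precisely because it does not appear on any such list.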