
Facebook Censorship since 2020


Left wing bias, prudery and multiple 'mistakes'


 

Age appropriate censorship...

Meta details extended censorship rules for under 18s


Link Here 28th January 2024
Meta writes in a blog post:

New Protections to Give Teens More Age-Appropriate Experiences on Our Apps

  • We will start to hide more types of content for teens on Instagram and Facebook, in line with expert guidance.

  • We're automatically placing all teens into the most restrictive content control settings on Instagram and Facebook and restricting additional terms in Search on Instagram.

  • We're also prompting teens to update their privacy settings on Instagram in a single tap with new notifications.

We want teens to have safe, age-appropriate experiences on our apps. We've developed more than 30 tools and resources to support teens and their parents, and we've spent over a decade developing policies and technology to address content that breaks our rules or could be seen as sensitive. Today, we're announcing additional protections that are focused on the types of content teens see on Instagram and Facebook.

New Content Policies for Teens

We regularly consult with experts in adolescent development, psychology and mental health to help make our platforms safe and age-appropriate for young people, including improving our understanding of which types of content may be less appropriate for teens.

Take the example of someone posting about their ongoing struggle with thoughts of self-harm. This is an important story, and can help destigmatize these issues, but it's a complex topic and isn't necessarily suitable for all young people. Now, we'll start to remove this type of content from teens' experiences on Instagram and Facebook, as well as other types of age-inappropriate content. We already aim not to recommend this type of content to teens in places like Reels and Explore, and with these changes, we'll no longer show it to teens in Feed and Stories, even if it's shared by someone they follow.

We want people to find support if they need it, so we will continue to share resources from expert organizations like the National Alliance on Mental Illness when someone posts content related to their struggles with self-harm or eating disorders. We're starting to roll these changes out to teens under 18 now and they'll be fully in place on Instagram and Facebook in the coming months.

We're automatically placing teens into the most restrictive content control setting on Instagram and Facebook. We already apply this setting for new teens when they join Instagram and Facebook, and are now expanding it to teens who are already using these apps. Our content recommendation controls -- known as "Sensitive Content Control" on Instagram and "Reduce" on Facebook -- make it more difficult for people to come across potentially sensitive content or accounts in places like Search and Explore.

Hiding More Results in Instagram Search Related to Suicide, Self-Harm and Eating Disorders

While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find. Now, when people search for terms related to suicide, self-harm and eating disorders, we'll start hiding these related results and will direct them to expert resources for help. We already hide results for suicide and self-harm search terms that inherently break our rules and we're extending this protection to include more terms. This update will roll out for everyone over the coming weeks.

 

 

Age appropriate censorship...

Meta says that it will soon restrict content for self-declared under-18s and under-16s


Link Here 16th January 2024
The Wall Street Journal has reported that Meta plans to automatically restrict teen Instagram and Facebook accounts from content including videos and posts about self-harm, graphic violence and eating disorders. Under 18 accounts, based on the birth date entered during sign-up, will automatically be placed into the most restrictive content settings. Teens under 16 won't be shown sexual content.

Meta stated that these measures, expected to be implemented over the forthcoming weeks, are intended to curate a more age-appropriate experience for young users.

The measures follow heightened regulatory attention, including testimony in the U.S. Senate by a former Meta employee, Arturo Bejar, who claimed that the company was aware of the harassment and other harms faced by teens on its platforms but failed to take appropriate action.

 

 

Unequal breasts...

Facebook ordered to allow images showing trans female breasts whilst still banning natural female breasts


Link Here 18th January 2023
Facebook and Instagram will allow transgender and non-binary users to flash their bare breasts -- but women who were born female are still not allowed a similar freedom, according to Meta's advisory board.

Meta's Oversight Board -- an independent body which Meta CEO Mark Zuckerberg has called the company's Supreme Court for content moderation and censorship policies -- ordered Facebook and Instagram to lift a ban on images of topless women for anyone who identifies as transgender or non-binary, meaning they view themselves as neither male nor female.

The same image of female-presenting nipples would be prohibited if posted by a cisgender woman but permitted if posted by an individual self-identifying as non-binary, the board noted in its decision.

The board cited its recent decision to overturn a ban on two Instagram posts by a couple who describe themselves as transgender and non-binary. The couple posed topless but covered their nipples, only to have the posts flagged by other users. Meta banned the images, but the couple won their appeal and the photos were restored online.

Meta's human reviewers will now be tasked with trying to determine the sex of breasts.

 

 

A bit of a censorship dilemma...

Meta calls for public comments about the police requested take down of drill music on Facebook


Link Here 18th August 2022

In January 2022, an Instagram account that describes itself as publicising British music posted a video with a short caption on its public account. The video is a 21-second clip of the music video for a UK drill music track called Secrets Not Safe by the rapper Chinx (OS). The caption tags Chinx (OS) as well as an affiliated artist and highlights that the track had just been released. The video clip shows part of the second verse of the song and fades to a black screen with the text OUT NOW. Drill is a subgenre of rap music popular in the UK, with a large number of drill artists active in London.

Shortly after the video was posted, Meta received a request from UK law enforcement to remove content that included this track. Meta says that it was informed by law enforcement that elements of it could contribute to a risk of offline harm. The company was also aware that the track referenced a past shooting in a way that raised concerns that it may provoke further violence. As a result, the post was escalated for internal review by experts at Meta.

Meta's experts determined that the content violated the Violence and Incitement policy, specifically the prohibition on coded statements where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit. The Community Standards list signs that content may include veiled or implicit threats. These include content that is shared in a retaliatory context, and content with references to historical or fictional incidents of violence. Further information and/or context is always required to identify and remove a number of different categories listed at the end of the Violence and Incitement policy, including veiled threats. Meta has explained to the Board that enforcement under these categories is not subject to at-scale review (the standard review process conducted by outsourced moderators) and can only be enforced by Meta's internal teams. Meta has further explained that the Facebook Community Standards apply to Instagram.

When Meta took the content down, two days after it was posted, it also removed copies of the video posted by other accounts. Based on the information it received from UK law enforcement, Meta's Public Policy team believed that the track might increase the risk of potential retaliatory gang violence and that it acted as a threatening call to action that could contribute to a risk of imminent violence or physical harm, including retaliatory gang violence.

Hours after the content was removed, the account owner appealed. A human reviewer assessed the content to be non-violating and restored it to Instagram. Eight days later, following a second request from UK law enforcement, Meta removed the content again and took down other instances of the video found on its platforms. The account in this case has fewer than 1,000 followers, the majority of whom live in the UK. The user received notifications from Meta both times their content was removed but was not informed that the removals were initiated following a request from UK law enforcement.

In referring this matter to the Board, Meta states that this case is particularly difficult as it involves balancing the competing interests of artistic expression and public safety. Meta explains that, while the company places a high value on artistic expression, it is difficult to determine when that expression becomes a credible threat. Meta asks the Board to assess whether, in this case and more generally, the safety risks associated with the potential instigation of gang violence outweigh the value of artistic expression in drill music.

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.

Respond via the article from oversightboard.com

 

 

Tearful apologies...

Facebook shamed into reversing censorship of the poster for Pedro Almodóvar's Parallel Mothers


Link Here 11th August 2021
Madres paralelas is a 2022 Spanish drama by Pedro Almodóvar
Starring Penélope Cruz, Rossy de Palma and Aitana Sánchez-Gijón IMDb

Two women, Janis and Ana, coincide in a hospital room where they are going to give birth. Both are single and became pregnant by accident. Janis, middle-aged, doesn't regret it and is exultant. The other, Ana, an adolescent, is scared, repentant and traumatized. Janis tries to encourage her while they move like sleepwalkers along the hospital corridors. The few words they exchange in these hours will create a very close link between the two, which chance later develops and complicates, changing their lives in a decisive way.

Instagram's owner Facebook has reversed a ban on a poster for Spanish director Pedro Almodovar's new film, Madres Paralelas (Parallel Mothers), showing a nipple producing a drop of milk. The company was shamed by bad publicity after its naff 'AI' censorship algorithm proved a failure in distinguishing art from porn. Facebook said it had made an exception to its usual ban on nudity because of the clear artistic context.

The promotional image was made to look like an eyeball producing a teardrop. Javier Jaen, who designed the advert for Madres Paralelas (Parallel Mothers), had said the platform should be ashamed for its censorship.

 

 

The Sky is falling...

YouTube censors Sky News Australia claiming covid misinformation


Link Here 1st August 2021
Sky News Australia has been banned from uploading content to YouTube for seven days with Google deciding that the news channel violated its medical misinformation censorship policies.

The ban was imposed the day after the Daily Telegraph ended Alan Jones's regular column amid controversy about his Covid-19 commentary which included calling the New South Wales chief health officer Kerry Chant a village idiot on his Sky News program.

The Sky News Australia YouTube channel, which has 1.85m subscribers, has been issued a strike and is temporarily suspended from uploading new videos or livestreams for one week. A YouTube spokesperson told Guardian Australia:

We have clear and established Covid-19 medical misinformation policies based on local and global health authority guidance, to prevent the spread of Covid-19 misinformation that could cause real-world harm.

Specifically, we don't allow content that denies the existence of Covid-19 or that encourages people to use hydroxychloroquine or ivermectin to treat or prevent the virus. We do allow for videos that have sufficient countervailing context, which the violative videos did not provide.

 

 

Unfriending democracy...

Facebook decides to censor Donald Trump for at least 2 years


Link Here 5th June 2021

Last month, the Oversight Board upheld Facebook's suspension of former US President Donald Trump's Facebook and Instagram accounts following his praise for people engaged in violence at the Capitol on January 6. But in doing so, the board criticized the open-ended nature of the suspension, stating that it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. The board instructed us to review the decision and respond in a way that is clear and proportionate, and made a number of recommendations on how to improve our policies and processes.

We are today announcing new enforcement protocols to be applied in exceptional cases such as this, and we are confirming the time-bound penalty consistent with those protocols which we are applying to Mr. Trump's accounts. Given the gravity of the circumstances that led to Mr. Trump's suspension, we believe his actions constituted a severe violation of our rules which merit the highest penalty available under the new enforcement protocols. We are suspending his accounts for two years, effective from the date of the initial suspension on January 7 this year.

At the end of this period, we will look to experts to assess whether the risk to public safety has receded. We will evaluate external factors, including instances of violence, restrictions on peaceful assembly and other markers of civil unrest. If we determine that there is still a serious risk to public safety, we will extend the restriction for a set period of time and continue to re-evaluate until that risk has receded.

 

 

Democracy Abuse...

Facebook censors London mayoral candidate on election day for political reasons


Link Here 13th May 2021
Facebook suspended London mayoral candidate David Kurten on the day of the election. Kurten and his Heritage Party have been strong opponents of lockdown measures, and have challenged the idea of vaccine passports.

On the day of the mayoral and assembly elections in London, Facebook suspended Kurten for seven days. The social media network cited five posts dating back to October 2020. One of two posts made on election day expressed Kurten and his party's strong opposition to vaccine passports and lockdowns. The other claimed that vaccine manufacturers are not liable for the COVID-19 jabs.

He took to Twitter to blast Facebook for the 7-day suspension, calling it an outrage and an attack on free speech and democracy.

 

 

Offsite Article: Unappealing censorship...


Link Here 2nd May 2021
Facebook's Oversight Board overturns Facebook decision to censor Global Punjab TV and criticises a lack of human interaction to deal with an appeal

See article from oversightboard.com

 

 

Surely not!...

Facebook will allow users to select for themselves what type of news they like


Link Here 22nd April 2021

Incorporating More Feedback Into News Feed Ranking

Our goal with News Feed is to arrange the posts from friends, Groups and Pages you follow to show you what matters most to you at the top of your feed. Our algorithm uses thousands of signals to rank posts for your News Feed with this goal in mind. This spring, we're expanding on our work to use direct feedback from people who use Facebook to understand the content people find most valuable. And we'll continue to incorporate this feedback into our News Feed ranking process.

Over the next few months, we'll test new ways to get more specific feedback from people about the posts they're seeing, and we'll use that feedback to make News Feed better. Here are some of the new approaches we're exploring:

Whether people find a post inspirational: People have told us they want to see more inspiring and uplifting content in News Feed because it motivates them and can be useful to them outside of Facebook. For example, a post featuring a quote about community can inspire someone to spend more time volunteering, or a photo of a national park can inspire someone to spend more time in nature. To this end, we're running a series of global tests that will survey people to understand which posts they find inspirational. We'll incorporate their responses as a signal in News Feed ranking, with the goal of showing people more inspirational posts closer to the top of their News Feed.

Gauging interest in certain topics: Even though your News Feed contains posts from the friends, Groups and Pages you've chosen to follow, we know sometimes even your closest friends and family share posts about topics that aren't really interesting to you, or that you don't want to see. To address this, we'll ask people whether they want to see more or fewer posts about a certain topic, such as Cooking, Sports or Politics, and based on their collective feedback, we'll aim to show people more content about the topics they're more interested in, and show them fewer posts about topics they don't want to see.

Better understanding content people want to see less of: Increasingly, we're hearing feedback from people that they're seeing too much content about politics and too many other kinds of posts and comments that detract from their News Feed experience. This is a sensitive area, so over the next few months, we'll work to better understand what kinds of content are linked with these negative experiences. For example, we'll look at posts with lots of angry reactions and ask people what kinds of posts they may want to see less of.

Making it easier to give feedback directly on a post: We've long given people the ability to hide a particular post they encounter in News Feed, and we'll soon test a new post design to make this option even more prominent. If you come across something that you find irrelevant, problematic or irritating, you can tap the X in the upper right corner of the post to hide it from your News Feed and see fewer posts like it in the future.

Overall, we hope to show people more content they want to see and find valuable, and less of what they don't. While engagement will continue to be one of many types of signals we use to rank posts in News Feed, we believe these additional insights can provide a more complete picture of the content people find valuable, and we'll share more as we learn from these tests.
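
Mechanically, the approach Facebook describes amounts to folding survey-derived feedback into a weighted ranking score alongside engagement. The sketch below is a minimal, hypothetical illustration of that idea; the signal names, weights and example numbers are assumptions for illustration, not Facebook's actual values.

    # Hypothetical sketch: folding direct user feedback into a feed-ranking
    # score alongside engagement. Signal names and weights are assumptions.
    from dataclasses import dataclass

    @dataclass
    class PostSignals:
        predicted_engagement: float  # model's estimate of like/comment/share probability
        inspiration_score: float     # survey-derived: share of respondents finding it inspirational
        topic_interest: float        # +1 "show more of this topic", -1 "show fewer", 0 unknown
        hide_rate: float             # share of viewers who tapped the X to hide the post

    def rank_score(s: PostSignals) -> float:
        # Engagement stays one signal among several; explicit negative
        # feedback (hides) pushes a post down the feed.
        return (1.0 * s.predicted_engagement
                + 0.5 * s.inspiration_score
                + 0.3 * s.topic_interest
                - 0.8 * s.hide_rate)

    posts = {
        "national_park_photo": PostSignals(0.40, 0.90, 1.0, 0.01),
        "angry_politics_rant": PostSignals(0.70, 0.05, -1.0, 0.20),
    }
    for name in sorted(posts, key=lambda n: rank_score(posts[n]), reverse=True):
        print(f"{name}: {rank_score(posts[name]):.2f}")

Note how a highly engaging but widely hidden post can end up ranked below a less engaging one, which is the stated goal of incorporating feedback beyond engagement.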

 

 

Integrity and control...

Facebook's Oversight Board widens out who is allowed to complain about Facebook censorship


Link Here 16th April 2021
Facebook's Oversight Board has announced that it is widening the rules about who can appeal against Facebook censorship.

Until now, decisions regarding another user's content could only be appealed if the user was in Germany, where antitrust and privacy laws are significantly stricter than in the US. Elsewhere, users who wanted someone else's content removed were unable to reach the Oversight Board, as the content in question was not their own.

However, going forward, users may appeal in an effort to have posts written by others taken down. Facebook constantly attempts to reassure users that the Oversight Board is different from Facebook at a corporate level and is not accountable to CEO Mark Zuckerberg's company to deliver the corporate-desired response. Indeed, it has already handed down seven rulings so far, involving hate speech, misinformation, and nudity.

Facebook's vice president of integrity, Guy Rosen, praised the initial rollout of the Oversight Board in May and expressed hope that the latest development would take the site even higher. In the post, he reassured readers the feature would be usable within the coming weeks.

The Oversight Board has had the capacity to reinstate removed content since it went live in October 2020. At that point, though, only an involved user could submit content for review. The individual trying to get content restored has to answer several questions regarding Facebook's takedown policies and how they feel Facebook has run afoul of them.

 

 

Offsite Article: You and the Algorithm: It Takes Two to Tango...


Link Here 2nd April 2021
Nick Clegg of Facebook writes about news feed algorithms, trust and giving back more control to users

See article from nickclegg.medium.com

 

 

Group think...

Facebook announces new censorship measures for Facebook groups


Link Here 17th March 2021

It's important to us that people can discover and engage safely with Facebook groups so that they can connect with others around shared interests and life experiences. That's why we've taken action to curb the spread of harmful content, like hate speech and misinformation, and made it harder for certain groups to operate or be discovered, whether they're Public or Private. When a group repeatedly breaks our rules, we take it down entirely.

We're sharing the latest in our ongoing work to keep Groups safe, which includes our thinking on how to keep recommendations safe as well as reducing privileges for those who break our rules. These changes will roll out globally over the coming months.

We are adding more nuance to our enforcement. When a group starts to violate our rules, we will now start showing them lower in recommendations, which means it's less likely that people will discover them. This is similar to our approach in News Feed, where we show lower quality posts further down, so fewer people see them.

We believe that groups and members that violate our rules should have reduced privileges and reach, with restrictions getting more severe as they accrue more violations, until we remove them completely. And when necessary in cases of severe harm, we will outright remove groups and people without these steps in between.

We'll start to let people know when they're about to join a group that has Community Standards violations, so they can make a more informed decision before joining. We'll limit invite notifications for these groups, so people are less likely to join. For existing members, we'll reduce the distribution of that group's content so that it's shown lower in News Feed. We think these measures as a whole, along with demoting groups in recommendations, will make it harder to discover and engage with groups that break our rules.

We will also start requiring admins and moderators to temporarily approve all posts when that group has a substantial number of members who have violated our policies or were part of other groups that were removed for breaking our rules. This means that content won't be shown to the wider group until an admin or moderator reviews and approves it. If an admin or moderator repeatedly approves content that breaks our rules, we'll take the entire group down.

When someone has repeated violations in groups, we will block them from being able to post or comment for a period of time in any group. They also won't be able to invite others to any groups, and won't be able to create new groups. These measures are intended to help slow down the reach of those looking to use our platform for harmful purposes and build on existing restrictions we've put in place over the last year.

 

 

Offsite Article: As always...we'll do better next time...


Link Here 28th February 2021
Facebook's Response to the Oversight Board's First Set of Recommendations

See article from about.fb.com

 

 

Updated: Who should pay for state approved journalism?...

Facebook blocks Australians from accessing or sharing news sources


Link Here 18th February 2021
The internet has offered plentiful cheap and mobile entertainment for everyone around the world, and one of the consequences is that people on average choose to spend a lot less on newspaper journalism.

This reality is clearly causing a lot of pain to the newspaper industry, but also to national governments around the world who would prefer their peoples to get their news information from state approved sources.

But governments don't really want to pay for the 'main stream media' themselves, and so are tempted to look to social media giants to foot the bill. And indeed the Australian government is seeking to do exactly that. However the economics doesn't really support the notion that social media should pay for the news media. From a purely business standpoint, there is no case for Facebook needing to pay for links; if anything, Facebook could probably charge for the service if it wanted to.

So Facebook has taken a stance and decided that it will not be paying for news in Australia. And in fact it has now banned Australian news sources from appearing in the news feeds of Australian users and Facebook has also blocked local users from linking to any international news sources.

And it seems that this has annoyed the Australian Government. Australian Prime Minister Scott Morrison has said his government will not be intimidated by Facebook blocking news feeds to users. He described the move to unfriend Australia as arrogant and disappointing.

Australians on Thursday woke up to find that Facebook pages of all local and global news sites were unavailable. People outside the country are also unable to read or access any Australian news publications on the platform.

Several government health and emergency pages were also blocked. Facebook later asserted this was a mistake and many of these pages are now back online.

Update: Facebook makes its case

18th February 2021. See article from about.fb.com by William Easton, Managing Director, Facebook Australia & New Zealand

In response to Australia's proposed new Media Bargaining law, Facebook will restrict publishers and people in Australia from sharing or viewing Australian and international news content.

The proposed law fundamentally misunderstands the relationship between our platform and publishers who use it to share news content. It has left us facing a stark choice: attempt to comply with a law that ignores the realities of this relationship, or stop allowing news content on our services in Australia. With a heavy heart, we are choosing the latter.

This discussion has focused on US technology companies and how they benefit from news content on their services. We understand many will ask why the platforms may respond differently. The answer is because our platforms have fundamentally different relationships with news. Google Search is inextricably intertwined with news and publishers do not voluntarily provide their content. On the other hand, publishers willingly choose to post news on Facebook, as it allows them to sell more subscriptions, grow their audiences and increase advertising revenue.

In fact, and as we have made clear to the Australian government for many months, the value exchange between Facebook and publishers runs in favor of the publishers -- which is the reverse of what the legislation would require the arbitrator to assume. Last year Facebook generated approximately 5.1 billion free referrals to Australian publishers worth an estimated AU$407 million.

For Facebook, the business gain from news is minimal. News makes up less than 4% of the content people see in their News Feed. Journalism is important to a democratic society, which is why we build dedicated, free tools to support news organisations around the world in innovating their content for online audiences.

Over the last three years we've worked with the Australian Government to find a solution that recognizes the realities of how our services work. We've long worked toward rules that would encourage innovation and collaboration between digital platforms and news organisations. Unfortunately this legislation does not do that. Instead it seeks to penalise Facebook for content it didn't take or ask for.

We were prepared to launch Facebook News in Australia and significantly increase our investments with local publishers, however, we were only prepared to do this with the right rules in place. This legislation sets a precedent where the government decides who enters into these news content agreements, and ultimately, how much the party that already receives value from the free service gets paid. We will now prioritise investments to other countries, as part of our plans to invest in news licensing programs and experiences.

Others have also raised concern. Independent experts and analysts around the world have consistently outlined problems with the proposed legislation. While the government has made some changes, the proposed law fundamentally fails to understand how our services work.

Unfortunately, this means people and news organisations in Australia are now restricted from posting news links and sharing or viewing Australian and international news content on Facebook. Globally, posting and sharing news links from Australian publishers is also restricted. To do this, we are using a combination of technologies to restrict news content and we will have processes to review any content that was inadvertently removed.

For Australian publishers this means:

  • They are restricted from sharing or posting any content on Facebook Pages

  • Admins will still be able to access other features from their Facebook Page, including Page insights and Creator Studio

  • We will continue to provide access to all other standard Facebook services, including data tools and CrowdTangle

For international publishers this means:

  • They can continue to publish news content on Facebook, but links and posts can't be viewed or shared by Australian audiences

For our Australian community this means:

  • They cannot view or share Australian or international news content on Facebook or content from Australian and international news Pages

For our international community this means:

  • They cannot view or share Australian news content on Facebook or content from Australian news Pages

The changes affecting news content will not otherwise change Facebook's products and services in Australia. We want to assure the millions of Australians using Facebook to connect with friends and family, grow their businesses and join Groups to help support their local communities, that these services will not change.

We recognise it's important to connect people to authoritative information and we will continue to promote dedicated information hubs like the COVID-19 Information Centre , that connects Australians with relevant health information. Our commitment to remove harmful misinformation and provide access to credible and timely information will not change. We remain committed to our third-party fact-checking program with Agence France-Presse and Australian Associated Press and will continue to invest to support their important work.

Our global commitment to invest in quality news also has not changed. We recognise that news provides a vitally important role in society and democracy, which is why we recently expanded Facebook News to hundreds of publications in the UK.

We hope that in the future the Australian government will recognise the value we already provide and work with us to strengthen, rather than limit, our partnerships with publishers.

 

 

Offsite Article: Facebook deletes Robin Hood Stock Traders group with over 150,000 members...


Link Here 30th January 2021
Facebook has shut down the Robin Hood Stock Traders group, which had some 157,000 members, in the middle of a trading war between individual traders and hedge funds trying to devalue the games retailer GameStop

See article from reclaimthenet.org

 

 

Prejudiced censorship...

Facebook is working on a divisive system to censor comments according to a pecking order of political 'correctness'


Link Here 6th December 2020
Facebook is to start policing anti-Black hate speech more aggressively than anti-White comments.

The Washington Post is reporting that the company is overhauling its algorithms that detect hate speech and deprioritizing hateful comments against whites, men and Americans.

Internal documents reveal that Facebook's WoW Project is in its early stages and involves re-engineering automated moderation systems to get better at detecting and automatically deleting hateful language that is considered the 'worst of the worst'. The 'worst of the worst' includes slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community and Jews, according to the documents. The Washington Post adds:

In the first phase of the project, which was announced internally to a small group in October, engineers said they had changed the company's systems to deprioritize policing contemptuous comments about Whites, men and Americans. Facebook still considers such attacks to be hate speech, and users can still report it to the company. However, the company's technology now treats them as low-sensitivity -- or less likely to be harmful -- so that they are no longer automatically deleted by the company's algorithms. That means roughly 10,000 fewer posts are now being deleted each day, according to the documents.
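
As a rough illustration of the tiering the report describes, the hypothetical sketch below assumes a per-category classifier score, with only high-sensitivity categories eligible for automatic deletion. The category labels, tiers and threshold are assumptions drawn from the article's description, not Facebook's actual system.

    # Hypothetical sketch of sensitivity-tiered enforcement as reported.
    # Category names, tiers and the threshold are illustrative assumptions.
    AUTO_DELETE_THRESHOLD = 0.90

    SENSITIVITY = {
        "slurs_against_black_people": "high",   # 'worst of the worst' tier
        "slurs_against_jewish_people": "high",
        "contempt_for_white_people": "low",     # deprioritized per the report
        "contempt_for_americans": "low",
    }

    def automated_action(category: str, score: float) -> str:
        """Decide what the automated system does with a detected violation."""
        if SENSITIVITY.get(category) == "high" and score >= AUTO_DELETE_THRESHOLD:
            return "auto-delete"
        # Still counted as hate speech, but left to user reports and human
        # review rather than automatic deletion.
        return "await-user-reports"

    print(automated_action("slurs_against_black_people", 0.95))  # auto-delete
    print(automated_action("contempt_for_white_people", 0.95))   # await-user-reports

Under this kind of scheme the same classifier confidence produces different automated outcomes depending on the category's tier, which matches the roughly 10,000 fewer daily deletions the documents describe.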

 

 

Sikhs complain about mass censorship by Facebook...

Presumably banned just because of a simplistic keyword scan finding the word 'genocide' in a hashtag


Link Here 14th November 2020

Open Letter to Facebook on Sikh Censorship

On November 3rd 2020, during #SikhGenocide awareness week, Facebook mass censored Sikh posts across the world.

The censorship largely focused on posts related to the Sikh struggle for Khalistan, but also impacted posts on Sikh history, activism against sexual grooming that targets Sikhs, and posts relating to Kirtan (Sikh devotional singing). This mass censorship targeted a large number of accounts from individuals to Sikh organisations and Gurdwareh, with posts being deleted as far back as 2010.

This was the second time this year that Sikh social media posts were censored en masse, following the blocking of the hashtag #Sikh on Instagram in June this year, also during a period of Sikh genocide remembrance.

From analysis of posts targeted for removal a clear pattern is visible which shows most posts were targeted based on use of the following key words: Khalistan, Sukhdev Singh Babbar, Gurbachan Singh Manochal, Avtar Singh Brahma, Jagraj Singh Toofan, and Jujaroo Jathebandia (Sikh military units) such as the Babbar Khalsa, Khalistan Liberation Force, and the Khalistan Commando Force.

From the simultaneous, widespread, and targeted removal of content it is clear that this recent wave of censorship has come from the Indian state which has long opposed any form of revolutionary Sikh self-expression centred on Sikh sovereignty, adopting policies of genocide to violently annihilate any Sikh resistance. Khalistan, as a term and as a movement, has been demonised in Indian public discourse to the extent that in India Khalistan has become synonymous with terrorism. This image is the product of decades of fascist Indian rhetoric and propaganda that has been used to justify the state's torture, enforced disappearances, mass rapes, arbitrary detention and genocide of Sikhs.

The November 3rd wave of censorship coincided with the Indian foreign minister's arrival in the UK, which was preceded by news coverage in Indian media that highlighted the Indian State's suppressive laws targeting online speech. Under Article 69A of the Information Technology Act, Indian agencies have been empowered to monitor, intercept, and decrypt any information generated, received, transmitted or stored in any computer. This law has been used to stifle dissent and block online content that challenges the sovereignty and integrity of India. These new laws are a modern rendering of the oppressive colonial era logic that makes sedition and political dissent a crime.

India today maintains that Sikhs are the aggressors and that only a small minority of Sikhs demand Khalistan. Yet Indian laws, and the public discourse that was used to justify them, target all Sikhs and expressions of Sikhi itself.

The censorship carried out by Facebook is an attack on Sikh memory and Sikh being. It is an attack on Sikh struggle in the past, present, and future.

The signatories to this letter stand in solidarity against any censorship targeting the Sikh community and urge Facebook to become part of the conversation -- specifically on Khalistan -- that is being censored. We expect an immediate response detailing steps towards addressing this censorship.

We urge Sikhs and allies reading this letter to share it widely and connect with the Sikh grassroots to build solidarity that exists outside frameworks of censorship.

Signed: World Sikh Parliament, Sikh Federation UK, Guru Nanak Darbar Gravesend, Saving Punjab, Federation of Sikh Orgs, Kaurs Legal, National Sikh Youth Federation, Sikh Assembly, Everythings 13, Sikh Doctors Association, Sikhs For Justice, Lions MMA, Sikh 2 Inspire, Khalistan Center, Babbar Akali Lehar, Sikh Relief, Baba Deep Singh Gurdwara, Khalsa Jatha Central Gurdwara, United Sikhs, Khalsa Akhara Gym Derby, Khalsa Foundation, Khalsa Akhara Boxing Coventry, Shere Punjab, Singh Sabha Committee Coventry, SHARE Charity, Calgary Sikhs, Birmingham Sikh Council of Gurdwaras, Naujawani, Khalistan Activist Federation, Kent Sikh Organisation.
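
The keyword-scan hypothesis in the headline above -- that posts were removed simply because a hashtag contained a blocklisted word -- would behave something like this hypothetical sketch. The blocklist terms are assumptions for illustration; no context is ever consulted, which is exactly the failure mode the letter describes.

    # Hypothetical illustration of a blunt keyword scan over hashtags.
    # The blocklist is an assumption; context is never consulted.
    BLOCKLIST = {"genocide", "khalistan"}

    def naive_hashtag_filter(hashtag: str) -> bool:
        """Flag a hashtag if any blocklisted keyword appears anywhere in it."""
        tag = hashtag.lstrip("#").lower()
        return any(word in tag for word in BLOCKLIST)

    # A genocide-remembrance hashtag trips the same rule as incitement would.
    print(naive_hashtag_filter("#SikhGenocide"))  # True
    print(naive_hashtag_filter("#Kirtan"))        # False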

 

 

Group think censorship...

Facebook announces a new censorship process to put public and private groups on probation


Link Here 8th November 2020
Facebook has developed a new censorship process requiring administrators of public and private groups to manually pre-censor posts to the group prior to publication.

Facebook will restrict any groups -- both public and private ones -- with multiple posts violating its censorship rules. Moderators for the groups will have to approve any posts manually for 60 days, and there's no appeal available for groups on probationary status.

Facebook spokesperson Leonard Lam said in a statement emailed to The Verge:

We are temporarily requiring admins and moderators of some political and social groups in the US to approve all posts, if their group has a number of Community Standards violations from members, a measure we're taking in order to protect people during this unprecedented time.

Admins will be notified of their group's status and told when the restrictions will be lifted. During the probationary period, Facebook will keep tabs on how the moderators of restricted groups deal with posts; if they continue to allow posts that break its rules, Facebook may shut down the groups entirely.

 

 

Offsite Article: Facebook's purge of left-wing radicals...


Link Here 4th September 2020
Having abandoned free speech, the left is in no position to defend itself from censorship. By Fraser Myers

See article from spiked-online.com

 

 

Price war...

Facebook says that if Australia forces social media to share news stories then Facebook will ban its users from sharing news articles


Link Here 1st September 2020
Facebook explains in a blog post:

Australia is drafting a new regulation that misunderstands the dynamics of the internet and will do damage to the very news organisations the government is trying to protect. When crafting this new legislation, the commission overseeing the process ignored important facts, most critically the relationship between the news media and social media and which one benefits most from the other.

Assuming this draft code becomes law, we will reluctantly stop allowing publishers and people in Australia to share local and international news on Facebook and Instagram. This is not our first choice -- it is our last. But it is the only way to protect against an outcome that defies logic and will hurt, not help, the long-term vibrancy of Australia's news and media sector.

We share the Australian Government's goal of supporting struggling news organisations, particularly local newspapers, and have engaged extensively with the Australian Competition and Consumer Commission that has led the effort. But its solution is counterproductive to that goal. The proposed law is unprecedented in its reach and seeks to regulate every aspect of how tech companies do business with news publishers. Most perplexing, it would force Facebook to pay news organisations for content that the publishers voluntarily place on our platforms and at a price that ignores the financial value we bring publishers.

The ACCC presumes that Facebook benefits most in its relationship with publishers, when in fact the reverse is true. News represents a fraction of what people see in their News Feed and is not a significant source of revenue for us. Still, we recognize that news provides a vitally important role in society and democracy, which is why we offer free tools and training to help media companies reach an audience many times larger than they have previously.

News organisations in Australia and elsewhere choose to post news on Facebook for this precise reason, and they encourage readers to share news across social platforms to increase readership of their stories. This in turn allows them to sell more subscriptions and advertising. Over the first five months of 2020 we sent 2.3 billion clicks from Facebook's News Feed back to Australian news websites at no charge -- additional traffic worth an estimated $200 million AUD to Australian publishers.

We already invest millions of dollars in Australian news businesses and, during discussions over this legislation, we offered to invest millions more. We had also hoped to bring Facebook News to Australia, a feature on our platform exclusively for news, where we pay publishers for their content. Since it launched last year in the US, publishers we partner with have seen the benefit of additional traffic and new audiences.

But these proposals were overlooked. Instead, we are left with a choice of either removing news entirely or accepting a system that lets publishers charge us for as much content as they want at a price with no clear limits. Unfortunately, no business can operate that way.

Facebook products and services in Australia that allow family and friends to connect will not be impacted by this decision. Our global commitment to quality news around the world will not change either. And we will continue to work with governments and regulators who rightly hold our feet to the fire. But successful regulation, like the best journalism, will be grounded in and built on facts. In this instance, it is not.

 

 

Election notices...

Facebook announces that it will censor content to protect itself against being prosecuted under local laws


Link Here 1st September 2020
Facebook has announced changes to its Terms of Service that will allow it to remove content or restrict access if the company thinks it is necessary to avoid legal or regulatory impact.

Facebook users have started receiving notifications regarding a change to its Terms of Service which state:

Effective October 1, 2020, section 3.2 of our Terms of Service will be updated to include: We also can remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook.

It is not clear whether this action is in response to particular laws or perhaps this references creeping censorship being implemented worldwide. Of course it could be a pretext to continuing to impose biased political censorship in the run up to the US presidential election.

 

 

Proving the conspiracy...

Facebook bans 790 groups connected to QAnon, whose members believe that there are state level organisations conspiring to silence them


Link Here 20th August 2020
Facebook writes:

An Update to How We Address Movements and Organizations Tied to Violence

Today we are taking action against Facebook Pages, Groups and Instagram accounts tied to offline anarchist groups that support violent acts amidst protests, US-based militia organizations and QAnon. We already remove content calling for or advocating violence and we ban organizations and individuals that proclaim a violent mission. However, we have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behavior. So today we are expanding our Dangerous Individuals and Organizations policy to address organizations and movements that have demonstrated significant risks to public safety but do not meet the rigorous criteria to be designated as a dangerous organization and banned from having any presence on our platform. While we will allow people to post content that supports these movements and groups, so long as they do not otherwise violate our content policies, we will restrict their ability to organize on our platform.

Under this policy expansion, we will impose restrictions to limit the spread of content from Facebook Pages, Groups and Instagram accounts. We will also remove Pages, Groups and Instagram accounts where we identify discussions of potential violence, including when they use veiled language and symbols particular to the movement to do so.

We will take the following actions -- some effective immediately, and others coming soon:

  • Remove From Facebook : Pages, Groups and Instagram accounts associated with these movements and organizations will be removed when they discuss potential violence. We will continue studying specific terminology and symbolism used by supporters to identify the language used by these groups and movements indicating violence and take action accordingly.

  • Limit Recommendations : Pages, Groups and Instagram accounts associated with these movements that are not removed will not be eligible to be recommended to people when we suggest Groups you may want to join or Pages and Instagram accounts you may want to follow.

  • Reduce Ranking in News Feed : In the near future, content from these Pages and Groups will also be ranked lower in News Feed, meaning people who already follow these Pages and are members of these Groups will be less likely to see this content in their News Feed.

  • Reduce in Search : Hashtags and titles of Pages, Groups and Instagram accounts restricted on our platform related to these movements and organizations will be limited in Search: they will not be suggested through our Search Typeahead function and will be ranked lower in Search results.

  • Reviewing Related Hashtags on Instagram: We have temporarily removed the Related Hashtags feature on Instagram, which allows people to find hashtags similar to those they are interacting with. We are working on stronger protections for people using this feature and will continue to evaluate how best to re-introduce it.

  • Prohibit Use of Ads, Commerce Surfaces and Monetization Tools : Facebook Pages related to these movements will be prohibited from running ads or selling products using Marketplace and Shop. In the near future, we'll extend this to prohibit anyone from running ads praising, supporting or representing these movements.

  • Prohibit Fundraising : We will prohibit nonprofits we identify as representing or seeking to support these movements, organizations and groups from using our fundraising tools. We will also prohibit personal fundraisers praising, supporting or representing these organizations and movements.

As a result of some of the actions we've already taken, we've removed over 790 groups, 100 Pages and 1,500 ads tied to QAnon from Facebook, blocked over 300 hashtags across Facebook and Instagram, and additionally imposed restrictions on over 1,950 Groups and 440 Pages on Facebook and over 10,000 accounts on Instagram. These numbers reflect differences in how Facebook and Instagram are used, with fewer Groups on Facebook with higher membership rates and a greater number of Instagram accounts with fewer followers comparably. Those Pages, Groups and Instagram accounts that have been restricted are still subject to removal as our team continues to review their content against our updated policy, as will others we identify subsequently. For militia organizations and those encouraging riots, including some who may identify as Antifa, we've initially removed over 980 groups, 520 Pages and 160 ads from Facebook. We've also restricted over 1,400 hashtags related to these groups and organizations on Instagram.

Today's update focuses on our Dangerous Individuals and Organizations policy but we will continue to review content and accounts against all of our content policies in an effort to keep people safe. We will remove content from these movements that violate any of our policies, including those against fake accounts, harassment, hate speech and/or inciting violence. Misinformation that does not put people at risk of imminent violence or physical harm but is rated false by third-party fact-checkers will be reduced in News Feed so fewer people see it. And any non-state actor or group that qualifies as a dangerous individual or organization will be banned from our platform. Our teams will also study trends in attempts to skirt our enforcement so we can adapt. These movements and groups evolve quickly, and our teams will follow them closely and consult with outside experts so we can continue to enforce our policies against them.

 

 

The technology of censorship...

Facebook outlines its technology now used to censor user posts


Link Here 12th August 2020
Facebook described its technology improvements used for the censorship of Facebook posts:

The biggest change has been the role of technology in content moderation. As our Community Standards Enforcement Report shows, our technology to detect violating content is improving and playing a larger role in content review. Our technology helps us in three main areas:

  • Proactive Detection: Artificial intelligence (AI) has improved to the point that it can detect violations across a wide variety of areas without relying on users to report content to Facebook, often with greater accuracy than reports from users. This helps us detect harmful content and prevent it from being seen by hundreds or thousands of people.

  • Automation: AI has also helped scale the work of our content reviewers. Our AI systems automate decisions for certain areas where content is highly likely to be violating. This helps scale content decisions without sacrificing accuracy so that our reviewers can focus on decisions where more expertise is needed to understand the context and nuances of a particular situation. Automation also makes it easier to take action on identical reports, so our teams don't have to spend time reviewing the same things multiple times. These systems have become even more important during the COVID-19 pandemic with a largely remote content review workforce.

  • Prioritization: Instead of simply looking at reported content in chronological order, our AI prioritizes the most critical content to be reviewed, whether it was reported to us or detected by our proactive systems. This ranking system prioritizes the content that is most harmful to users based on multiple factors such as virality, severity of harm and likelihood of violation. In an instance where our systems are near-certain that content is breaking our rules, it may remove it. Where there is less certainty it will prioritize the content for teams to review.

Together, these three aspects of technology have transformed our content review process and greatly improved our ability to moderate content at scale. However, there are still areas where it's critical for people to review. For example, discerning if someone is the target of bullying can be extremely nuanced and contextual. In addition, AI relies on a large amount of training data from reviews done by our teams in order to identify meaningful patterns of behavior and find potentially violating content.

That's why our content review system needs both people and technology to be successful. Our teams focus on cases where it's essential to have people review and we leverage technology to help us scale our efforts in areas where it can be most effective.
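
The prioritization step above can be pictured as a priority queue ordered by an expected-harm score, with a near-certainty cutoff for automatic removal. This is a minimal sketch of that idea; the multiplicative score, the threshold value and all names are assumptions for illustration, not Facebook's actual formula.

    # Minimal sketch of harm-based review prioritization. The score formula,
    # threshold and field names are assumptions for illustration.
    import heapq
    from dataclasses import dataclass, field

    AUTO_REMOVE_CONFIDENCE = 0.98  # assumed 'near-certain' cutoff

    @dataclass(order=True)
    class ReviewItem:
        neg_priority: float              # negated so the min-heap pops worst first
        post_id: str = field(compare=False)

    review_queue: list = []

    def triage(post_id: str, virality: float, severity: float, likelihood: float) -> None:
        if likelihood >= AUTO_REMOVE_CONFIDENCE:
            print(f"{post_id}: removed automatically")
            return
        score = virality * severity * likelihood  # expected-harm proxy
        heapq.heappush(review_queue, ReviewItem(-score, post_id))

    triage("viral_threat", 0.9, 0.8, 0.70)
    triage("mild_insult", 0.1, 0.2, 0.60)
    triage("blatant_violation", 0.5, 0.9, 0.99)

    while review_queue:
        item = heapq.heappop(review_queue)
        print(f"review next: {item.post_id} (score {-item.neg_priority:.3f})")

The design point is that a widely shared, severe, probable violation is reviewed before an older but milder report, rather than working through reports in chronological order.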

 

 

State lawyers bully Facebook...

20 US state attorneys general call on Facebook to censor more


Link Here 7th August 2020
US Attorneys General from 20 different states have sent a letter urging Facebook to do a better job at censoring content. They wrote:

We, the undersigned State Attorneys General, write to request that you take additional steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination. We also ask that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital abuse.

...

As part of our responsibilities to our communities, Attorneys General have helped residents navigate Facebook's processes for victims to address abuse on its platform. While Facebook has--on occasion--taken action to address violations of its terms of service in cases where we have helped elevate our constituents' concerns, we know that everyday users of Facebook can find the process slow, frustrating, and ineffective. Thus, we write to highlight positive steps that Facebook can take to strengthen its policies and practices.

The letter was written by the Attorneys General of New Jersey, Illinois, and the District of Columbia, and addressed to CEO Mark Zuckerberg and COO Sheryl Sandberg. It was co-signed by 17 other Democratic AGs from states such as New York, California, Pennsylvania, Maryland, and Virginia.

The letter proceeds to highlight seven steps they think Facebook should take to better police content to avoid online abuse. They recommended things such as aggressive enforcement of hate speech policies, third-party enforcement and auditing of hate speech, and real-time assistance for users to report harassment.

 

 

Updated: Brand censorship...

Brands demand that Facebook censors news that offends identitarian sensitivities


Link Here 27th June 2020
Facebook has said it will start to label potentially harmful posts that it leaves up because of their news value. The move comes as the firm faces growing pressure to censor the content on its platform.

More than 90 advertisers have joined a boycott of the site, including consumer goods giant Unilever on Friday. The Stop Hate for Profit campaign was started by US civil rights groups after the death of George Floyd in police custody in May. It has focused on Facebook, which also owns Instagram and WhatsApp. The organisers, which include Color of Change and the National Association for the Advancement of Colored People, have said Facebook allows racist, violent and verifiably false content to run rampant on its platform.

Unilever said it would halt Twitter, Facebook and Instagram advertising in the US at least through 2020.

In a speech on Friday, Facebook boss Mark Zuckerberg defended the firm's record of taking down hate speech. But he said the firm was tightening its policies to address the reality of the challenges our country is facing and how they're showing up across our community. In addition to introducing labels, Facebook will ban ads that describe people from different groups, based on factors such as race or immigration status, as a threat. He said:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

We will soon start labelling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case.

He added that Facebook would remove content - even from politicians - if it determines that it incites violence or suppresses voting.

 

Update: Coke too

27th June 2020. See article from bbc.co.uk

Coca-Cola will suspend advertising on social media globally for at least 30 days, as pressure builds on platforms to crack down on hate speech. Chairman and CEO James Quincey said:

There is no place for racism in the world and there is no place for racism on social media.

He demanded greater accountability and transparency from social media firms.

 

 

Mentally challenged...

Facebook opens an AI challenge to help it to censor hateful messages hidden in memes


Link Here 16th May 2020
Facebook is seeking help in censoring hateful messages that have been encoded into memes. The company writes in a post:

In order for AI to become a more effective tool for detecting hate speech, it must be able to understand content the way people do: holistically. When viewing a meme, for example, we don't think about the words and photo independently of each other; we understand the combined meaning together. This is extremely challenging for machines, however, because it means they can't just analyze the text and the image separately. They must combine these different modalities and understand how the meaning changes when they are presented together.

To catalyze research in this area, Facebook AI has created a data set to help build systems that better understand multimodal hate speech. Today, we are releasing this Hateful Memes data set to the broader research community and launching an associated competition, hosted by DrivenData with a $100,000 prize pool.

The challenges of harmful content affect the entire tech industry and society at large. As with our work on initiatives like the Deepfake Detection Challenge and the Reproducibility Challenge, Facebook AI believes the best solutions will come from open collaboration by experts across the AI community.

We continue to make progress in improving our AI systems to detect hate speech and other harmful content on our platforms, and we believe the Hateful Memes project will enable Facebook and others to do more to keep people safe.
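The task Facebook describes is a multimodal classification problem: a system has to fuse what it reads in the caption with what it sees in the image before judging the pair. As a rough, hedged sketch of that fusion idea only (not Facebook's actual baseline; every layer size and name here is hypothetical), a minimal PyTorch model might look like this:

```python
# Minimal sketch of a multimodal (text + image) classifier of the kind the
# Hateful Memes task calls for. All names and sizes are hypothetical; this
# illustrates the general fusion idea, not Facebook's system.
import torch
import torch.nn as nn

class MemeFusionClassifier(nn.Module):
    def __init__(self, vocab_size=10000, text_dim=128, image_dim=128):
        super().__init__()
        # Text branch: bag-of-words embedding of the meme's overlaid caption.
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim)
        # Image branch: a tiny CNN standing in for a real vision backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, image_dim),
        )
        # Fusion head: the two modalities are only meaningful together, so
        # classification happens on the concatenated representation.
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # hateful vs. benign
        )

    def forward(self, token_ids, image):
        text_feat = self.text_encoder(token_ids)    # (batch, text_dim)
        image_feat = self.image_encoder(image)      # (batch, image_dim)
        fused = torch.cat([text_feat, image_feat], dim=1)
        return self.classifier(fused)

# Smoke test with random inputs.
model = MemeFusionClassifier()
tokens = torch.randint(0, 10000, (4, 12))   # 4 captions, 12 tokens each
images = torch.rand(4, 3, 64, 64)           # 4 RGB images
print(model(tokens, images).shape)          # torch.Size([4, 2])
```

The key point mirrors the quoted post: neither branch is classified alone, because neither the text nor the image by itself carries the hateful meaning.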

 

 

The Facebook Oversight Board...

Former Guardian editor appointed to Facebook's new censorship appeals board


Link Here 14th May 2020
The Oversight Board writes:
We know that social media can spread speech that is hateful, harmful and deceitful. In recent years, the question of what content should stay up or come down, and who should decide this, has become increasingly urgent for society. Every content decision made by Facebook impacts people and communities. All of them deserve to understand the rules that govern what they are sharing, how these rules are applied, and how they can appeal those decisions.

The Oversight Board represents a new model of content moderation for Facebook and Instagram and today we are proud to announce our initial members. The Board will take final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram.

The Board will review whether content is consistent with Facebook and Instagram's policies and values, as well as a commitment to upholding freedom of expression within the framework of international norms of human rights. We will make decisions based on these principles, and the impact on users and society, without regard to Facebook's economic, political or reputational interests. Facebook must implement our decisions, unless implementation could violate the law.

The four Co-Chairs and 16 other Members announced today are drawn from around the world. They speak over 27 languages and represent diverse professional, cultural, political, and religious backgrounds and viewpoints. Over time we expect to grow the Board to around 40 Members. While we cannot claim to represent everyone, we are confident that our global composition will underpin, strengthen and guide our decision-making.

All Board Members are independent of Facebook and all other social media companies. In fact, many of us have been publicly critical of how the company has handled content issues in the past. Members contract directly with the Oversight Board, are not Facebook employees and cannot be removed by Facebook. Our financial independence is also guaranteed by the establishment of a $130 million trust fund that is completely independent of Facebook, which will fund our operations and cannot be revoked. All of this is designed to protect our independent judgment and enable us to make decisions free from influence or interference.

When we begin hearing cases later this year, users will be able to appeal to the Board in cases where Facebook has removed their content, but over the following months we will add the opportunity to review appeals from users who want Facebook to remove content.

Users who do not agree with the result of a content appeal to Facebook can refer their case to the Board by following guidelines that will accompany the response from Facebook. At this stage the Board will inform the user if their case will be reviewed.

The Board can also review content referred to it by Facebook. This could include many significant types of decisions, including content on Facebook or Instagram, on advertising, or Groups. The Board will also be able to make policy recommendations to Facebook based on our case decisions.

See first 20 members in blog post from oversightboard.com

The list includes a British panel member, Alan Rusbridger, a former editor of The Guardian, perhaps giving a hint of a 'progressive' leaning to proceedings.

Offsite Comment: Facebook's free-speech panel doesn't believe in free speech

14th May 2020. See article from spiked-online.com

Alan Rusbridger, one-time cheerleader of press regulation, is among the members.

 

 

Distanced from free speech...

Facebook censors anti-lockdown protests if prohibited by the state


Link Here 21st April 2020
Facebook says it will consult with state governments on their lockdown orders and will shut down pages planning anti-quarantine protests accordingly.

Events that defy government guidance on social distancing have also been banned from Facebook.

The move has been opposed by Donald Trump Jr and the Missouri Senator Josh Hawley, who argue that Facebook is violating Americans' First Amendment rights.

Facebook said it has already removed protest messages in California, New Jersey and Nebraska. However, protests are still being organized on Facebook. A massive protest organized via the Facebook group Pennsylvanians against Excessive Quarantine Orders took place in Harrisburg, Pennsylvania on Monday afternoon.

 

 

Offsite Article: Nipples, Facebook, and what our society deems decent...


Link Here 18th April 2020
Why there's a danger in allowing a single entity to influence what our society deems decent. By Katie Wheeler

See article from theguardian.com

 

 

Charting a Way Forward on Online Content Censorship...

Facebook seems to be suggesting that if governments are so keen on censoring people's speech then perhaps the governments should take over the censorship job entirely...


Link Here 18th February 2020

Today, we're publishing a white paper setting out some questions that regulation of online content might address.

Charting a Way Forward: Online Content Regulation builds on recent developments on this topic, including legislative efforts and scholarship.

The paper poses four questions which go to the heart of the debate about regulating content online:

  • How can content regulation best achieve the goal of reducing harmful speech while preserving free expression? By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies' efforts.

  • How can regulations enhance the accountability of internet platforms? Regulators could consider certain requirements for companies, such as publishing their content standards, consulting with stakeholders when making significant changes to standards, or creating a channel for users to appeal a company's content removal or non-removal decision.

  • Should regulation require internet companies to meet certain performance targets? Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.

  • Should regulation define which "harmful content" should be prohibited on the internet? Laws restricting speech are generally implemented by law enforcement officials and the courts. Internet content moderation is fundamentally different. Governments should create rules to address this complexity -- that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context.

Guidelines for Future Regulation

The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms. The following principles are based on lessons we've learned from our work in combating harmful content and our discussions with others.

  • Incentives. Ensuring accountability in companies' content moderation systems and procedures will be the best way to create the incentives for companies to responsibly balance values like safety, privacy, and freedom of expression.

  • The global nature of the internet. Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications. They should aim to increase interoperability among regulators and regulations.

  • Freedom of expression. In addition to complying with Article 19 of the ICCPR (and related guidance), regulators should consider the impacts of their decisions on freedom of expression.

  • Technology. Regulators should develop an understanding of the capabilities and limitations of technology in content moderation and allow internet companies the flexibility to innovate. An approach that works for one particular platform or type of content may be less effective (or even counterproductive) when applied elsewhere.

  • Proportionality and necessity. Regulators should take into account the severity and prevalence of the harmful content in question, its status in law, and the efforts already underway to address the content.

If designed well, new frameworks for regulating harmful content can contribute to the internet's continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.

We hope today's white paper helps to stimulate further conversation around the regulation of content online. It builds on a paper we published last September on data portability, and we plan on publishing similar papers on elections and privacy in the coming months.
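On the performance-targets question the paper poses, the 'prevalence' of violating content is simply the estimated share of content views that break policy, measured by sampling, and compared against an agreed ceiling. A minimal, hedged sketch of that arithmetic follows; the sampling scheme, labels and the 0.05% threshold are all assumptions for illustration, not any regulator's actual formula:

```python
# Hedged sketch of a "prevalence" metric: the fraction of sampled content
# views labelled as violating, checked against an assumed regulatory target.
import random

def estimated_prevalence(sampled_views, is_violating):
    """Fraction of sampled content views labelled as violating."""
    violating = sum(1 for view in sampled_views if is_violating(view))
    return violating / len(sampled_views)

# Hypothetical example: random scores stand in for sampled views, and a
# score below 0.0004 stands in for a human "violating" label.
random.seed(0)
sample = [random.random() for _ in range(100_000)]
prevalence = estimated_prevalence(sample, lambda v: v < 0.0004)

THRESHOLD = 0.0005  # assumed target: below 0.05% of sampled views
print(f"prevalence = {prevalence:.4%}, within target: {prevalence <= THRESHOLD}")
```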

 

 

A poisoned chalice...

Mark Zuckerberg thinks it is about time that governments took over the job of internet censorship


Link Here 17th February 2020
Facebook boss Mark Zuckerberg has called for more regulation of harmful online content, saying it was not for companies like his to decide what counts as legitimate free speech.

He was speaking at the Munich Security Conference in Germany. He said:

We don't want private companies making so many decisions about how to balance social equities without any more democratic process.

The Facebook founder urged governments to come up with a new regulatory system for social media, suggesting it should be a mix of existing rules for telecoms and media companies. He added:

In the absence of that kind of regulation we will continue doing our best,

But I actually think on a lot of these questions that are trying to balance different social equities it is not just about coming up with the right answer, it is about coming up with an answer that society thinks is legitimate.

During his time in Europe, Zuckerberg is expected to meet politicians in Munich and Brussels to discuss data practices, regulation and tax reform.

 

 

Too many governments defining online harms that need censoring...

Mark Zuckerberg pushes back against too much censorship on Facebook


Link Here 2nd February 2020
Mark Zuckerberg has declared that Facebook is going to stand up for free expression in spite of the fact it will piss off a lot of people.

He made the claim during a fiery appearance at the Silicon Slopes Tech Summit in Utah on Friday. Zuckerberg told the audience that Facebook had previously tried to steer clear of content that would be branded as too offensive, but said he now believes he is being asked to partake in excessive censorship:

Increasingly we're getting called to censor a lot of different kinds of content that makes me really uncomfortable. We're going to take down the content that's really harmful, but the line needs to be held at some point.

It kind of feels like the list of things that you're not allowed to say socially keeps on growing, and I'm not really okay with that.

This is the new approach [free expression], and I think it's going to piss off a lot of people. But frankly the old approach was pissing off a lot of people too, so let's try something different.




 
