|
YouTube unpicks recent attempts to censor strong language
|
|
|
|
9th March 2023
|
|
| Thanks to Nick See
article from techcrunch.com See
article from support.google.com |
YouTube has been attempting to force its creators into sanitising their strong language. Not out of any claim to the moral high ground, but because the company wants to maximise its appeal to advertisers who would prefer not to advertise around content with strong language. Well, it seems that the attempt has wound up creators, and now YouTube is rolling back some of its attempts to sanitise strong language. YouTube explains: Updated
Inappropriate language Ad-Friendly Guidelines Our update last November aimed to improve the clarity and enforcement of our Advertiser-friendly content guidelines and make it easier for Creators to monetize brand safe content.
However, we heard concerns from Creators that the new profanity policy actually resulted in a stricter approach than we intended. Effective March 7, we are making the following changes:
- Usage of moderate profanity at any time in the video is now eligible for green icons.
- Usage of stronger profanity, like the f-word, in the first 7 seconds or repeatedly throughout the majority of the video can now receive limited ads (under the November update, this would have received no ad revenue). See specific examples of moderate and stronger profanity in our Help Center article.
- Video content using profanity, moderate or strong, after the first 7 seconds will now be eligible for green icons, unless used repetitively throughout the majority of the video (under the November update, this would have received no ad revenue).
- We've also clarified our guidance on how profanity in music is treated; moderate or strong profanity used in background music, backing tracks, intro/outro music can now earn full ad revenue (previously this would have received no ad revenue).
- Use of any profanity (moderate or stronger profanity) in titles and thumbnails will still be demonetized and cannot run ads, as was the case before the update in November.
|
|
|
|
|
| 26th April
2022
|
|
|
YouTube is emailing users to say members of the community are 'concerned' about their comments. By Tom Parker See article from
reclaimthenet.org |
|
80 'fact checker' organisations call on Google to buy their services to censor YouTube
|
|
|
| 15th
January 2022
|
|
| See article from poynter.org |
80 internet 'fact-checking' organizations have signed an open letter to YouTube CEO Susan Wojcicki, calling on the company to buy their services to censor 'wrong think' on YouTube. The letter reads: YouTube is allowing its
platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves. Current measures are proving insufficient. That is why we urge you to take effective action against disinformation and
misinformation, and to elaborate a roadmap of policy and product interventions to improve the information ecosystem -- and to do so with the world's independent, nonpartisan fact-checking organizations. The spread of falsehoods on
YouTube is a global problem. They range from videos encouraging unproven methods of treating COVID-19 to false allegations of electoral fraud. The International Fact-Checking Network is a trade group promoting 'fact checking'
organisations worldwide. In its letter, IFCN outlines 'solutions' to help reduce the spread of misinformation and promote their businesses. One of them is to promote fact-checked information and provide context and debunks alongside videos. The
fact-checkers are also calling on YouTube to disclose its moderation policy regarding disinformation and misinformation, prevent its algorithms from promoting content creators who repeatedly produce false information and strengthen its efforts to fight
misinformation in languages other than English. |
|
|
|
|
| 12th April 2021
|
|
|
A push at the World Economic Forum Global Technology Governance Summit for corporate internet giants to set the global censorship standards for legal content See
article from reclaimthenet.org |
|
|
|
|
|
27th March 2021
|
|
|
A few suggested alternative video platforms that are not controlled by a Big Tech giant and that support free expression. See article from reclaimthenet.org
|
|
Google offers a series of YouTube supervisory options for different ages of children
|
|
|
| 25th
February 2021
|
|
| See article from blog.youtube |
Google has decided to offer a protected mode for YouTube with a range of monitoring and supervisory options for parents. Google explains in a blog post: In the coming months, we'll launch a new experience in beta for parents to allow
their children to access YouTube through a supervised Google Account . This supervised experience will come with content settings and limited features. We'll start with an early beta for families with kids under the age of consent to test and provide
feedback, as we continue to expand and improve the experience. We know that every parent has a different parenting style and that every child is unique and reaches different developmental stages at different times. That's why
we'll give parents the ability to choose from 3 different content settings on YouTube.
- Explore: For children ready to move on from YouTube Kids and explore content on YouTube, this setting will feature a broad range of videos generally suitable for viewers ages 9+, including vlogs, tutorials, gaming videos, music clips, news, educational content and more.
- Explore More: With content generally suitable for viewers ages 13+, this setting will include an even larger set of videos, and also live streams in the same categories as "Explore."
- Most of YouTube: This setting will contain almost all videos on YouTube, except for age-restricted content, and it includes sensitive topics that may only be appropriate for older teens. This option was designed for parents who think their children are ready to explore the vast universe of YouTube videos.
We will use a mix of user input, machine learning and human review to determine which videos are included. We know
that our systems will make mistakes and will continue to evolve over time. We recommend parents continue to be involved in guiding and supporting their child's experience on YouTube. To help parents get started, we've developed a
guide in partnership with National PTA , Parent Zone and Be Internet Awesome . We'll also launch an ongoing campaign that features creators discussing themes like bullying and harassment, misinformation, digital well-being and more.
We understand the importance of striking a balance between empowering tweens and teens to more safely gain independence, while offering parents ways to set controls. In addition to choosing the content setting, parents will be able to
manage watch and search history from within their child's account settings. Parents can also use other controls offered by Google's Family Link , including screen timers. We'll continue adding new parental controls over time, such as blocking content.
When a parent grants access to YouTube, their child's experience will feel much like regular YouTube, but certain features will be disabled to protect younger audiences. For example, we won't serve personalized ads or ads in
certain categories . At launch, we'll also disable in-app purchases, as well as creation and comments features. Since self-expression and community are integral parts of YouTube and children's development, over time we'll work with parents and experts to
add some of these features through an age-appropriate and parent controlled approach. |
|
YouTube announces that it will step up the censorship of viewer comments specifically for the black community
|
|
|
| 4th
December 2020
|
|
| See article from blog.youtube |
YouTube has announced that it will increase the censorship of comments specifically for the black community. YouTube writes in a blog post: We're committed to supporting the diverse creator communities on YouTube and their continued
success. As our CEO, Susan Wojcicki, wrote in June, we're examining how our policies and products are working for everyone -- and specifically for the Black community -- and working to close any gaps. We know that comments play a
key role in helping creators connect with their community, but issues with the quality of comments is also one of the most consistent pieces of feedback we receive from creators. We have been focused on improving comments with the goal of driving
healthier conversations on YouTube. Over the last few years, we launched new features to help creators engage with their community and shape the tone of conversations on their channels. We've heard from creators that while these
changes helped them better manage comments and connect with their audience, there's more we can do to prevent them from seeing hurtful comments in the first place. To address that, we'll be testing a new filter in YouTube Studio for potentially
inappropriate and hurtful comments that have been automatically held for review, so that creators don't ever need to read them if they don't want to. To encourage respectful conversations on YouTube, we're launching a new feature
that will warn users when their comment may be offensive to others, giving them the option to reflect before posting. In addition, we've also invested in technology that helps our systems better detect and remove hateful comments
by taking into account the topic of the video and the context of a comment. These efforts are making an impact. Since early 2019, we've increased the number of daily hate speech comment removals by 46x. And in the last quarter, of
the more than 1.8 million channels we terminated for violating our policies, more than 54,000 terminations were for hate speech. This is the most hate speech terminations in a single quarter and 3x more than the previous high from Q2 2019 when we updated
our hate speech policy. |
|
YouTube announces that it will require hard ID from EU viewers wanting to watch 18 rated videos
|
|
|
| 22nd September 2020
|
|
| See article from blog.youtube
|
Google has announced that it is now going to use its AI technology to detect YouTube videos that it would like to see restricted to adults. In addition it announced that it would be requiring hard ID to verify that EU based users are over 18. (Surely Google should be the last company on the planet that users would be willing to send their ID to). Google writes: Today, our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a
video that isn't appropriate for viewers under 18. Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions. Uploaders
can appeal the decision if they believe it was incorrectly applied. For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue, as most of these videos also violate our
advertiser-friendly guidelines and therefore have limited or no ads. To make sure the experience is consistent, viewers attempting to access age-restricted videos on most third-party websites will be redirected to YouTube where
they must sign-in and be over 18 to view it. This will help ensure that, no matter where a video is discovered, it will only be viewable by the appropriate audience. Because our use of technology will result in more videos being
age-restricted, our policy team took this opportunity to revisit where we draw the line for age-restricted content. After consulting with experts and comparing ourselves against other global content rating frameworks, only minor adjustments were
necessary. Our policy pages have been updated to reflect these changes. All the changes outlined above will roll out over the coming months. Expanding Age-verification in Europe In line with upcoming
regulations, like the European Union's Audiovisual Media Services Directive (AVMSD), we will also be introducing a new age verification step over the next few months. As part of this process some European users may be asked to provide additional proof of
age when attempting to watch mature content. If our systems are unable to establish that a viewer is above the age of 18, we will request that they provide a valid ID or credit card to verify their age. We've built our age-verification process in keeping
with Google's Privacy and Security Principles. We understand that many are turning to YouTube at this time to find content that is both educational and entertaining. We will continue to update our products and our policies with
features that make sure when they do, they find content that is age-appropriate. |
|
YouTube censors documentary that questions the effectiveness of green policies and technologies
|
|
|
| 27th
May 2020
|
|
| Thanks to Nick See article
from deadline.com |
Planet of the Humans is a 2019 USA documentary by Jeff Gibbs. Featuring Catherine Andrews, David Blood and Michael Bloomberg.
Planet of the Humans (2019), a documentary that dares
to say what no one else will this Earth Day - that we are losing the battle to stop climate change on planet earth because we are following leaders who have taken us down the wrong road - selling out the green movement to wealthy interests and corporate
America. This film is the wake-up call to the reality we are afraid to face: that in the midst of a human-caused extinction event, the environmental movement's answer is to push for techno-fixes and band-aids. It's too little, too late.
Planet of the Humans by Jeff Gibbs features a prominent producer's credit to Michael Moore, previously a darling of the progressive left. This time he has wound up the environmentalists with a film suggesting that many of the green policies and technologies espoused by climate change campaigners don't stand up to scrutiny once the full environmental footprint of raw materials extraction, manufacture, transport and so on is taken into account. Anyway, the climate change campaigners have been campaigning hard to get Planet of the Humans banned, and now seem to have succeeded in getting it removed from YouTube on a technicality: four seconds of stock footage apparently included in the documentary without obtaining copyright clearance. The four-second clip subject to the copyright claim comes 37 minutes into the documentary, in a sequence titled How Solar Panels & Wind Turbines Are Made. The footage shows a mining operation for rare earth
metals, which are used in wind turbine manufacture. Gibbs says he incorporated the footage under fair use, an exception to copyright law that allows news reporters, producers and documentary filmmakers limited access to copyrighted material to illustrate
points. Director Jeff Gibbs said in a statement: This attempt to take down our film and prevent the public from seeing it is a blatant act of censorship by political critics of Planet of the Humans. It is a misuse
of copyright law to shut down a film that has opened a serious conversation about how parts of the environmental movement have gotten into bed with Wall Street and so-called green capitalists. There is absolutely no copyright violation in my film. This is
just another attempt by the film's opponents to subvert the right to free speech. Opponents of Planet of the Humans , who do not like its critique of the failures of the environmental movement, have worked for weeks to have the
film taken down and to block us from appearing on TV and on livestream. Their efforts to subvert free speech have failed, with nearly eight and a half million people already viewing the film on YouTube. These Trumpian tactics are shameful, and their aim
to stifle free speech and prevent people from grappling with the uncomfortable truths exposed in this film is deeply disturbing. PEN America, which was founded in 1922 and fights for the free speech of artists in the U.S. and
around the world, came out strongly and denounced the initial attempt to censor this film, and we hope all champions of free expression condemn this act of censorship. We are working with YouTube to resolve this issue and have the
film back up as soon as possible.
|
|
Google wins US court case about its right to censor PragerU
|
|
|
| 27th February 2020
|
|
| See article
from reason.com |
Prager University (PragerU) is a right wing group that creates videos explaining a right wing perspective to political issues. YouTube didn't much care for the content and shunted the videos up a 'restricted mode' back alley. PragerU challenged
the censorship in court but have just lost their case. The First Amendment in the US bans the state from censoring free speech, but this protection does not extend to private companies. PragerU had tried to argue that Google has become so integral to
American life that it should be treated like a state institution. The Ninth Circuit Court of Appeals on Wednesday affirmed that YouTube, a Google subsidiary, is a private platform and thus not subject to the First Amendment. In making that
determination, the Court also rejected a plea from a conservative content maker that sued YouTube in hopes that the courts would force it to behave like a public utility. Headed by conservative radio host Dennis Prager, PragerU alleged in its suit
against YouTube that the video hosting platform violated PragerU's right to free speech when it placed a portion of the nonprofit's clips on Restricted Mode, an optional setting that approximately 1.5 percent of YouTube users select so as not to see
content with mature themes. Writing for the appeals court, Circuit Judge Margaret McKeown said YouTube was a private forum despite its ubiquity and public accessibility, and hosting videos did not make it a state actor for purposes of the First
Amendment. |
|
YouTube has updated its policies about comments under videos
|
|
|
| 5th January
2020
|
|
| See article from youtube.googleblog.com |
YouTube has posted on its blog outlining recent changes to the moderation of comments posted under videos. YouTube writes: Addressing toxic comments We know that the comment section is an important
place for fans to engage with creators and each other. At the same time, we heard feedback that comments are often where creators and viewers encounter harassment. This behavior not only impacts the person targeted by the harassment, but can also have a
chilling effect on the entire conversation. To combat this we remove comments that clearly violate our policies -- over 16 million in the third quarter of this year, specifically due to harassment. The policy updates we've
outlined above will also apply to comments, so we expect this number to increase in future quarters. Beyond comments that we remove, we also empower creators to further shape the conversation on their channels and have a variety
of tools that help. When we're not sure a comment violates our policies, but it seems potentially inappropriate, we give creators the option to review it before it's posted on their channel. Results among early adopters were promising -- channels that
enabled the feature saw a 75% reduction in user flags on comments. Earlier this year, we began to turn this setting on by default for most creators. We've continued to fine tune our systems to make sure we catch truly toxic
comments, not just anything that's negative or critical, and feedback from creators has been positive. Last week we began turning this feature on by default for YouTube's largest channels with the site's most active comment sections and will roll out to
most channels by the end of the year. To be clear, creators can opt-out, and if they choose to leave the feature enabled they still have ultimate control over which held comments can appear on their videos. Alternatively, creators can also ignore held
comments altogether if they prefer. |
|
YouTube initiates a festive purge of the crypto currency community
|
|
|
| 27th December 2019
|
|
| 25th December 2019. See article from cryptobriefing.com See
Google's Censorship Of Cryptocurrencies Goes Way Beyond Youtube from forbes.com
|
YouTube has been censoring cryptocurrency-related content with a new wave of rule enforcements, according to several hosts. Since 23rd December, the site has been deleting individual videos from cryptocurrency channels. Some hosts have also been given
warnings and strikes, which temporarily prevent them from uploading content. YouTube has not publicly stated that crypto videos are against its rules, meaning that users must read between the lines to deduce what is being targeted. A leading
YouTube creator, Chris Dunn, has noted that his own videos were removed on the grounds that they were responsible for the sale of regulated goods and contained harmful and dangerous content. Many YouTube hosts are now considering moving to
decentralized and uncensorable video platforms, such as PeerTube, LBRY, BitChute, and DTube. Incidentally, Twitter is also planning to create a decentralized media platform. Update: Removing
hundreds of videos was an 'error' 27th December 2019. See article from decrypt.co YouTube said today that its
removal of hundreds of crypto-related videos earlier this week was an 'error'. YouTube told Decrypt that the videos have since been put back online. However, a quick check today indicated that none had yet been restored. YouTube spouted:
With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it.
Offsite Update: After the dust has settled YouTube re-censors the crypto channels 23rd January 2020. See
article from ibtimes.com |
|
Google, masters of vague censorship rules, complains about the US government's vague censorship rules for defining children's videos
|
|
|
|
12th December 2019
|
|
| See article from
theverge.com |
After being heavily fined for child privacy issues about personalised advertising on YouTube, Google is trying to get its house in order. It will soon be rolling out new rules that prevent the profiling of younger viewers for advertising purposes. The
restrictions on personalised advertising will negatively affect the livelihoods of many YouTube creators. It is pretty clear that Peppa Pig videos will be deemed off limits for personalised adverts, but a more difficult question is what about more
general content that appeals to adults and children alike? YouTube is demanding clearer guidelines about this situation from the government internet privacy censors of the Federal Trade Commission (FTC). The law underpinning the requirements is known
as COPPA [the Children's Online Privacy Protection Act]. YouTube wrote to the FTC asking: We believe there needs to be more clarity about when content should be considered primarily child-directed. Creators are also writing to the FTC out of fear that the changes and vague guidance could destroy their channels. The FTC has responded by initiating a public consultation. In comments filed with the FTC on Monday, YouTube invoked arguments raised by
creators, writing that adult users also engage with videos that could traditionally be considered child-directed, like crafting videos and content focused on collecting old toys: Sometimes, content that isn't intentionally
targeting kids can involve a traditional kids activity, such as DIY, gaming and art videos. Are these videos 'made for kids,' even if they don't intend to target kids? This lack of clarity creates uncertainty for creators.
By way of comparison, the British advert censors at ASA have a basic rule that if the proportion of kids watching is greater than 25% of the total audience then child protection rules kick in. Presumably the 25% figure is about what one would expect for content that appeals to all ages equally. |
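Purely as an illustration of how such an audience-share threshold works (this is not an official ASA or FTC test; the 25% cut-off is simply the figure mentioned above and the function name is made up), a minimal sketch:

```python
# Illustrative only: a toy check of the kind of audience-share threshold
# described above (the ASA's 25% figure). Not an official ASA or FTC rule.

CHILD_SHARE_THRESHOLD = 0.25  # proportion of child viewers that triggers child-protection rules


def child_protection_rules_apply(child_views: int, total_views: int,
                                 threshold: float = CHILD_SHARE_THRESHOLD) -> bool:
    """Return True if the child share of the audience exceeds the threshold."""
    if total_views <= 0:
        return False
    return (child_views / total_views) > threshold


# Example: a video watched 40,000 times, 12,000 of them by children (30% > 25%)
print(child_protection_rules_apply(child_views=12_000, total_views=40_000))  # True
```

The awkward cases flagged by creators are exactly those hovering around such a threshold: content that appeals to all ages tends to sit near the cut-off, so small measurement differences flip the outcome.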
|
YouTube will allow more violent content related to video games as long as it is not real world violence
|
|
|
| 3rd
December 2019
|
|
| See article from support.google.com |
Heads up for all Gaming Creators: We know there's a difference between real-world violence and scripted or simulated violence -- such as what you see in movies, TV shows, or video games -- so we want to make sure we're enforcing our
violent or graphic content policies consistently. Starting on 2nd December, scripted or simulated violent content found in video games will be
treated the same as other types of scripted content. What does this mean for Gaming Creators?
- Future gaming uploads that include scripted or simulated violence may be approved instead of being age-restricted.
- There will be fewer restrictions for violence in gaming, but this policy will still maintain our high bar to protect audiences
from real-world violence.
- We may still age-restrict content if violent or gory imagery is the sole focus of the video. For instance, if the video focuses entirely on the most graphically violent part of a video game.
|
|
YouTube announces how it will restrict personalised advertising for videos directed at kids from 1st January 2020
|
|
|
| 8th November 2019
|
|
| See article from tubefilter.com
|
The US authorities came down heavily on Google for YouTube's violations of the 1998 US children's data privacy law called COPPA. This ended up with Google handing over $170 million in settlement of claims from the US FTC (Federal Trade Commission).
COPPA restricts operators of websites and online services from collecting the personal information of under-13 users without parental permission. The definition of personal information includes personal identifiers used in cookies to profile internet
users for targeted advertising purposes. So now YouTube has announced new procedures starting 1st January 2020. All content creators will have to designate whether or not each of their videos is directed to children (aka kid-directed aka
child-directed) by checking a box during the upload process. Checking that box will prevent the video from running personalized ads. This rule applies retrospectively so all videos will have to be reviewed and flagged accordingly. It is probably
quite straightforward to identify children's videos, but creators are worried about more general videos for people of all ages that also appeal to kids. And of course there are massive concerns for all those creators affected about revenues decreasing
as adverts switch from personalised to general untargeted ads. tubefilter.com ran a small
experiment suggesting that revenues will drop by between 60 and 90% for videos denied targeted advertising. And of course this will have a knock-on effect on the viability of producing videos for a young audience. No doubt the small creators will be hit
hardest, leaving the market more open for those that can make up the shortfall by working at scale.
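For creators who manage their catalogues programmatically rather than ticking the box in the upload form, the YouTube Data API v3 later exposed the same designation as a status field. The sketch below is a hedged illustration rather than YouTube's documented workflow for this policy change: the token and video ID are placeholders, and in practice the other mutable status fields should be re-sent alongside the flag.

```python
# Hedged sketch: self-declaring a video as "made for kids" via the YouTube Data API v3
# (videos.update). OAUTH_TOKEN and VIDEO_ID are placeholders, not real values.
import requests

OAUTH_TOKEN = "YOUR_OAUTH_TOKEN"  # OAuth 2.0 bearer token for the video's owner (placeholder)
VIDEO_ID = "YOUR_VIDEO_ID"        # placeholder video ID

resp = requests.put(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "status"},
    headers={"Authorization": f"Bearer {OAUTH_TOKEN}"},
    json={
        "id": VIDEO_ID,
        # Caveat: videos.update expects all mutable properties of the requested part,
        # so a real client should first read the existing status and include it here.
        "status": {"selfDeclaredMadeForKids": True},
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["status"])  # echoes the updated status, including madeForKids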
|
|
YouTube will remove all ad personalisation from kids videos on YouTube and will also turn off comments
|
|
|
|
5th September 2019
|
|
| See article from youtube.googleblog.com by YouTube CEO Susan Wojcicki See
YouTubers say kids' content changes could ruin careers. From theverge.com
|
Google have announced potentially far reaching new policies about kids' videos on YouTube. A Google blog post explains: An update on kids and data protection on YouTube From its earliest days, YouTube
has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We've been taking a hard look at areas where we can do more to address this,
informed by feedback from parents, experts, and regulators, including COPPA concerns raised by the U.S. Federal Trade Commission and the New York Attorney General that we are addressing with a settlement announced today. New
data practices for children's content on YouTube We are changing how we treat data for children's content on YouTube. Starting in about four months, we will treat data from anyone watching children's content on YouTube as
coming from a child, regardless of the age of the user. This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this
content entirely, and some features will no longer be available on this type of content, like comments and notifications. In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and
we'll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games. Improvements to YouTube Kids We
continue to recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently. Tens of millions of people use YouTube Kids every week but we want even more parents to be aware of the app and its benefits. We're increasing our
investments in promoting YouTube Kids to parents with a campaign that will run across YouTube. We're also continuing to improve the product. For example, we recently raised the bar for which channels can be a part of YouTube Kids, drastically reducing
the number of channels on the app. And we're bringing the YouTube Kids experience to the desktop. Investing in family creators We know these changes will have a significant business impact on family
and kids creators who have been building both wonderful content and thriving businesses, so we've worked to give impacted creators four months to adjust before changes take effect on YouTube. We recognize this won't be easy for some creators and are
committed to working with them through this transition and providing resources to help them better understand these changes. We are also going to continue investing in the future of quality kids, family and educational content. We
are establishing a $100 million fund, disbursed over three years, dedicated to the creation of thoughtful, original children's content on YouTube and YouTube Kids globally. Today's changes will allow us to better protect kids and
families on YouTube, and this is just the beginning. We'll continue working with lawmakers around the world in this area, including as the FTC seeks comments on COPPA . And in the coming months, we'll share details on how we're rethinking our overall
approach to kids and families, including a dedicated kids experience on YouTube. |
|
YouTube CEO reports on how 'wrong think' is being marginalised and how the mainstream media is being prioritised for news
|
|
|
|
2nd September 2019
|
|
| See article from
youtube-creators.googleblog.com by YouTube CEO Susan Wojcicki |
After a long introduction about how open and diverse YouTube is, CEO Susan Wojcicki gets down to the nitty gritty of how YouTube censorship works. She writes in a blog post: Problematic content represents a fraction of one percent of the
content on YouTube and we're constantly working to reduce this even further. This very small amount has a hugely outsized impact, both in the potential harm for our users, as well as the loss of faith in the open model that has enabled the rise of your
creative community. One assumption we've heard is that we hesitate to take action on problematic content because it benefits our business. This is simply not true -- in fact, the cost of not taking sufficient action over the long term results in lack of
trust from our users, advertisers, and you, our creators. We want to earn that trust. This is why we've been investing significantly over the past few years in the teams and systems that protect YouTube. Our approach towards responsibility involves four
"Rs":
We REMOVE content that violates our policy as quickly as possible. And we're always looking to make our policies clearer and more effective, as we've done with pranks and challenges , child safety , and hate speech just this year.
We aim to be thoughtful when we make these updates and consult a wide variety of experts to inform our thinking, for example we talked to dozens of experts as we developed our updated hate speech policy. We also report on the removals we make in our
quarterly Community Guidelines enforcement report. I also appreciate that when policies aren't working for the creator community, you let us know. One area we've heard loud and clear needs an update is creator-on-creator harassment. I said in my last
letter that we'd be looking at this and we will have more to share in the coming months.
We REDUCE the spread of content that brushes right up against our policy line. Already, in the U.S. where we made changes to recommendations earlier this year, we've seen a 50% drop of views from recommendations to this type of
content, meaning quality content has more of a chance to shine. And we've begun experimenting with this change in the UK, Ireland, South Africa and other English-language markets.
And we set a higher bar for what channels can make money on our site, REWARDING trusted, eligible creators. Not all content allowed on YouTube is going to match what advertisers feel is suitable for their brand, we have to be sure
they are comfortable with where their ads appear. This is also why we're enabling new revenue streams for creators like Super Chat and Memberships. Thousands of channels have more than doubled their total YouTube revenue by using these new tools in
addition to advertising.
|
|
|
|
|
| 30th August 2019
|
|
|
YouTube faces dueling lawsuits from a conservative group and an LGBTQ+ group, both of which argue that the video site discriminates against them See
article from wired.com |
|
YouTube announces new rules to ban adult parodies of children's cartoons using tags and titles that may still appeal to children
|
|
|
| 25th August 2019
|
|
| See article from support.google.com |
A little while ago there was an issue on YouTube about parody videos using well known children's cartoons as a baseline for adult humour. The videos were not in themselves outside of what YouTube allows but were not suitable for the child audience of the
original shows. YouTube has now responded as follows: Content that contains mature or violent themes that explicitly targets younger minors and families in the title, description and/or tags will no longer be allowed on the platform. This content was
previously age-restricted, but today we're updating our child safety policies to better protect the family experience. What content will be removed? We're removing misleading family content, including videos that target younger
minors and families, that contain sexual themes, violence, obscene, or other mature themes not suitable for young audiences. Here are some examples of content that will be removed:
- A video with tags like "for children" featuring family friendly cartoons engaging in inappropriate acts like injecting needles.
- Videos with prominent children's nursery rhymes targeting younger minors and families in the video's
title, description or tags, that contain adult themes such as violence, sex, death, etc.
- Videos that explicitly target younger minors and families with phrasing such as "for kids" or "family fun" in the video's title,
description and/or tags that contain vulgar language.
What content will be age-restricted? Content that is meant for adults and not targeting younger minors and families won't be removed, but it may be age-restricted. If you create adult content that could be confused as family entertainment, make
sure your titles, descriptions, and tags match the audience you are targeting. Remember you can age restrict your content upon upload if it's intended for mature audiences. Here is an example of content that may still be allowed on YouTube but will be
age-restricted :
- Adult cartoons with vulgar language and/or violence that is explicitly targeted at adults.
|
|
YouTube boss says that mainstream news companies will be given precedence over independent creators that are too often politically incorrect, wrong think, or right wing
|
|
|
| 30th July 2019
|
|
| See article from reclaimthenet.org |
A YouTube chief has proposed giving precedence to mainstream media over indie creators. The company's chief product officer Neal Mohan claims that the platform has grown so much that it now needs new rules to regulate bad actors. Amid the recent observations of YouTube's biased censorship, the company announced it will crack down further on what it calls racist content and disinformation. Mohan said: YouTube has now grown to a big city. More bad actors have
come into place. And just like in any big city, you need a new set of rules and laws and kind of regulatory regime. We want to make sure that YouTube remains an open platform because that's where a lot of the magic comes from,
even though there may be some opinions and voices on the platform that I don't agree with, that you don't agree with.
reclaimthenet.org
commented: Mohan suggested that positive discrimination could be applied to authoritative sources like traditional media outlets such as AFP or CNN or BBC or the AP or whoever, raising an issue already mentioned by
the independent channels that made YouTube what it is today: their content is often obscured by search results and their subscribers miss the new content, while corporate media (that ironically is often a competitor to YouTube) is already being heavily
promoted by YouTube.
|
|
|
|
|
| 20th July 2019
|
|
|
YouTube does not want government censors to silence people the government doesn't like, whilst YouTube actively censors people it does not like, especially those on the right See
article from reclaimthenet.org |
|
|
|
|
| 23rd June 2019
|
|
|
YouTube can't remove kid videos without tearing a hole in the entire creator ecosystem. By Julia Alexander See
article from theverge.com |
|
Google extends censorship rules covering videos that feature pranks and challenges
|
|
|
| 16th January 2019
|
|
| See article from support.google.com
|
YouTube has announced new censorship rules for videos featuring pranks and challenges. Google writes in a blog post: YouTube is home to many beloved viral challenges and pranks, like Jimmy Kimmel's Terrible Christmas Presents
prank or the water bottle flip challenge. That said, we've always had policies to make sure what's funny doesn't cross the line into also being harmful or dangerous. Our Community Guidelines prohibit content that encourages dangerous activities that are
likely to result in serious harm, and today we're clarifying what this means for dangerous challenges and pranks. Q: What exactly are you clarifying related to challenges? We've updated our external guidelines to
make it clear that challenges like the Tide pod challenge or the Fire challenge, that can cause death and/or have caused death in some instances, have no place on YouTube. Q: What exactly are you clarifying related to pranks?
We've made it clear that our policies prohibiting harmful and dangerous content also extend to pranks with a perceived danger of serious physical injury. We don't allow pranks that make victims believe they're in serious physical
danger -- for example, a home invasion prank or a drive-by shooting prank. We also don't allow pranks that cause children to experience severe emotional distress, meaning something so bad that it could leave the child traumatized for life.
Q: What are examples of pranks that cause children severe emotional distress? We've worked directly with child psychologists to develop guidelines around the types of pranks that cross this line. Examples
include, the fake death of a parent or severe abandonment or shaming for mistakes. Q: Can I appeal strikes related to dangerous challenges and pranks? Yes, you can appeal the strike if you think the video
content doesn't violate Community Guidelines. Q: How long is the grace period for me to review and clean up content? The next two months -- during this time challenges and pranks that violate Community
Guidelines will be removed but the channel will not receive a strike. Additionally, content posted prior to these enforcement updates may be removed, but will not receive a strike.
|
|
|
|
|
| 7th May 2018
|
|
|
YouTube has 'how to' videos for pretty much everything See article from torrentfreak.com |
|
YouTube are banning, censoring and demonetising independent voices, seemingly to put mainstream media back in control
|
|
|
| 8th April
2018
|
|
| See article from freethinker.co.uk |
TruNews is a YouTube channel run by the outlandish evangelist Rick Wiles. It has just been targeted by Google's censorship policies and has been kicked into the unsearchable long grass. Perhaps banned for being 'fake news', but in reality it is a little too unbelievable to even count as 'fake'. freethinker.co.uk offer an amusing description of why the channel has been censored: Why? Because Wiles's broadcasts are so damned nutty they serve as a warning to viewers that this is what happens when people's brains are running on Jesus.
Of course, Wiles is even more miffed. He alludes to Google not following its 'don't be evil' mantra: I have warned
for years that a spirit of Nazism is rising up inside the USA. The new Nazis are here. America is on the verge of a French Revolution-style upheaval during which leftist mobs will seek to execute Christians and conservatives in order to purge American
society.
But this isn't the only example of Google being 'evil'.
See video from YouTube titled YouTube Admits Not Notifying Subscribers & Screwing With Algorithms Jimmy Dore notes that independent
news sites often no longer qualify for monetisation, that they are booted into the unsearchable long grass (as noted by TruNews), and that Google no longer informs subscribers when new videos are added. He contends that the powers that be want news videos from
mainstream media to be the dominant news source for YouTube viewers. |
|
Gun spree at Google offices may be related to YouTube's demonetisation censorship policies
|
|
|
|
5th April 2018
|
|
| See article from
theguardian.com |
Nasim Najafi Aghdam, the woman who allegedly opened fire at YouTube's headquarters in a suburb of San Francisco, injuring three before killing herself, was apparently furious with the video website because it had stopped paying her for her clips. No
evidence had been found linking her to any individuals at the company where she allegedly opened fire on Tuesday. Two of the three shooting victims from the incident were released from hospital on Tuesday night. A third is currently in serious
condition. Aghdam's online profile shows she was a vegan activist who ran a website called NasimeSabz.com, meaning Green Breeze in Persian, where she posted about Persian culture and veganism, as well as long passages critical of YouTube . Her father, Ismail Aghdam, told the Bay Area News Group from his San Diego home on Tuesday that she was angry with the Google-owned site because it had stopped paying her for videos she posted on the platform, and that he had warned the police that she might be going to the company's headquarters.
|
|
Judge sides with Google over the censorship of alt-right YouTube videos
|
|
|
| 28th March 2018
|
|
| See article from
thehill.com |
A federal judge has dismissed a lawsuit against Google filed by the conservative site PragerU whose YouTube videos had been censored by Google. U.S. District Court Judge Lucy Koh wrote in her decision that PragerU had failed to demonstrate that age
restrictions imposed on the company's videos are a First Amendment violation. PragerU filed its lawsuit in October, claiming that Google's decision to shunt some of its videos into the dead zone known as restricted mode was motivated by a prejudice against conservatives. The list of restricted videos included segments like The most important question about abortion, Where are the moderate Muslims? and Is Islam a religion of peace? In her decision, Koh dismissed
PragerU's free speech claims, arguing that Google is not subject to the First Amendment because it's a private company and not a public institution. She wrote: Defendants are private entities who created their own
video-sharing social media website and make decisions about whether and how to regulate content that has been uploaded on that website.
|
|
The Daily Caller reveals how Google uses 100 social justice groups as YouTube censors
|
|
|
| 28th
February 2018
|
|
| See article from dailycaller.com
|
The conservative US news website, the Daily Caller, has revealed that Google has recruited several social justice organisations to assist in the censorship of videos on YouTube. The Daily Caller notes: The Southern
Poverty Law Center is assisting YouTube in policing content on their platform. The left-wing nonprofit -- which has more recently come under fire for labeling legitimate conservative organizations as hate groups -- is one of the more than 100
nongovernment organizations (NGOs) and government agencies in YouTube's Trusted Flaggers program. The SPLC and other program members help police YouTube for extremist content, ranging from so-called hate speech to terrorist
recruiting videos. All of the groups in the program have confidentiality agreements. A handful of YouTube's Trusted Flaggers, including the Anti-Defamation League and No Hate Speech, a European organization, have gone public with
their participation in the program. The vast majority of the groups in the program have remained hidden behind their confidentiality agreements.
YouTube public policy director Juniper Downs said the third-party groups work closely
with YouTube's employees to crack down on extremist content in two ways: First, the flaggers are equipped with digital tools allowing them to mass flag content for review by YouTube personnel. Second, the partner
groups act as guides to YouTube's content monitors and engineers designing the algorithms policing the video platform but may lack the expertise needed to tackle a given subject. We work with over 100 organizations as part of our
Trusted Flagger program and we value the expertise these organizations bring to flagging content for review. All trusted flaggers attend a YouTube training to learn about our policies and enforcement processes. Videos flagged by trusted flaggers are
reviewed by YouTube content moderators according to YouTube's Community Guidelines. Content flagged by trusted flaggers is not automatically removed or subject to any differential policies than content flagged from other users.
|
|
Google to employ an army of 10,000 internet censors
|
|
|
| 9th December 2017
|
|
| See article from wsws.org See
article from youtube.googleblog.com |
Google is escalating its campaign of internet censorship, announcing that it will expand its workforce of human censors to over 10,000. The censors' primary focus will be videos and other content on YouTube, but they will also work across Google to censor content
and train its automated systems, which remove videos at a rate four times faster than its human employees. Human censors have already reviewed over 2 million videos since June. YouTube has already removed over 150,000 videos, 50 percent of which were
removed within two hours of upload. The company is working to accelerate the rate of takedown through machine-learning from manual censorship. YouTube CEO Susan Wojcicki explained the move in an official blog post: Human
reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly 2 million
videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases shutting down
comments altogether. In the last few weeks we've used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF, and other child
safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies. We will continue the significant growth of our teams into next year, with the goal of bringing the total
number of people across Google working to address content that might violate our policies to over 10,000 in 2018. At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can
learn from and support to help us better understand emerging issues. We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed
this technology to flag violent extremist content for human review and we've seen tremendous progress.
- Since June we have removed over 150,000 videos for violent extremism. Machine learning is helping our human reviewers remove nearly five times as many videos as they were previously.
- Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
- Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours, and we continue to accelerate that speed.
- Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
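To put that last figure in perspective, a quick back-of-the-envelope calculation using only the numbers quoted in the blog post:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
people = 180_000     # people YouTube says would otherwise have been needed
hours_per_week = 40  # working a standard 40-hour week

person_hours_per_week = people * hours_per_week
print(f"{person_hours_per_week:,} person-hours of review work per week")
# 7,200,000 person-hours of review work per week
```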
|
|
A detailed report analyzing Google algorithms that demonetise YouTube videos
|
|
|
|
2nd December 2017
|
|
| See article from bitsonline.com See
article from support.google.com See
Karlaplan report from docs.google.com |
Google makes their internal processes difficult to track by design, but the author of a report by Karlaplan states that these changes are fairly recent, suspected to have been implemented on the 30th of August, with the changes only being discovered in late October. However, until the publication of this document, little other than anecdotal evidence had been presented alongside complaints from YouTube content creators. Through extensive analysis of the YouTube Data API and other sources, Karlaplan found that YouTube tags demonetized videos according to both severity and type of sensitive content, neither of which is transparent to the uploader. The report also notes that videos are more likely to be hidden from viewers if their likely viewership is low, perhaps because higher viewership videos are more likely to be appealed, or more likely to be spotted as examples of censorship and hence generate bad publicity for Google. Google have published an information page that is quite
useful in detailing which videos get censored. Google outlines two levels of sensitivity that advertisers can select when not wanting to be associated with sensitive content. Google explains: While the Standard content
filter excludes the most inappropriate content, it doesn't exclude everything that a particular advertiser may find objectionable. The Sensitive content categories allow you to opt out of additional content that many advertisers find inappropriate. Eg:
- Tragedy and conflict
- Sensitive social issues
- Sexually suggestive content
- Sensational and shocking
- Profanity and rough language
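The Karlaplan analysis worked by probing the YouTube Data API, and it is worth noting that the documented public fields of the v3 videos resource cover things like privacy status and the age-restriction rating rather than any explicit monetisation or 'limited ads' tag, which is one reason demonetisation has to be inferred indirectly. As a rough, hedged sketch of what such a public probe looks like (the API key and video ID are placeholders):

```python
# Hedged sketch: reading a video's publicly visible status via the YouTube Data API v3.
# API_KEY and VIDEO_ID are placeholders. The documented public fields expose privacy
# status and the ytAgeRestricted content rating, not monetisation or "limited ads" tags,
# which is why reports like Karlaplan's had to infer demonetisation indirectly.
import requests

API_KEY = "YOUR_API_KEY"    # placeholder
VIDEO_ID = "YOUR_VIDEO_ID"  # placeholder

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "status,contentDetails", "id": VIDEO_ID, "key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
items = resp.json().get("items", [])
if items:
    status = items[0]["status"]
    rating = items[0]["contentDetails"].get("contentRating", {})
    print("privacyStatus:", status.get("privacyStatus"))
    print("age restricted:", rating.get("ytRating") == "ytAgeRestricted")
else:
    print("Video not found or not publicly visible")
```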
|
|
YouTube announces rule change applying to parodies using children's characters
|
|
|
| 11th
November 2017
|
|
| See article from
theguardian.com |
YouTube has announced an extension of its age restriction policy to cover parody videos that use children's characters but feature inappropriate themes. The new policy was announced on Thursday and will see age restrictions applied to content featuring inappropriate use of family entertainment characters, such as unofficial videos depicting Peppa Pig. The company already had a policy that rendered such videos ineligible for advertising revenue, in the hope that doing so would reduce the motivation to create
them in the first place. Juniper Downs, YouTube's director of policy, explained: Earlier this year, we updated our policies to make content featuring inappropriate use of family entertainment characters ineligible for monetisation. We're in the process of implementing a new policy that age restricts this content in the YouTube main app when flagged. Age-restricted content is automatically not allowed in YouTube Kids. The YouTube team is made up of parents who are
committed to improving our apps and getting this right.
Age-restricted videos can't be seen by users who aren't logged in, or by those who have entered their age as below 18 on both the site and the app. More importantly, they also
don't show up on YouTube Kids, a separate app aimed at parents who want to let their children under 13 use the site unsupervised.
|
|
PragerU sues Google for restricting their business revenue from Youtube videos
|
|
|
| 28th
October 2017
|
|
| See article from
toledoblade.com |
Prager University, a nonprofit that creates educational videos with conservative slants, has filed a lawsuit against YouTube and its parent company, Google, alleging that the company is censoring its content. PragerU claims that more than three
dozen of its videos have been restricted by YouTube over the past year. As a result, those who browse YouTube in restricted mode -- including many college and high school students -- are prevented from viewing the content. Furthermore, restricted videos
cannot earn any ad revenue. PragerU says that by limiting access to their videos without a clear reason, YouTube has infringed upon PragerU's First Amendment rights. YouTube has restricted edgy content in order to protect advertisers'
brands. A number of advertisers told Google that they did not want their brand to be associated with edgy content. Google responded by banning all advertising from videos claimed to contain edgy content. It keeps the brands happy but it has decimated
many an online small business. |
|
YouTube censorship by demonetisation proves punishing to those unfairly censored, especially to small players
|
|
|
| 12th September 2017
|
|
| See article from
breitbart.com |
YouTube's algorithms, which are used to censor and demonetize videos on the platform, are killing its creators, according to a report. Most of the initial censorship is left to algorithms [which probably flag a video for censorship as soon as they detect something politically incorrect], which presumably leads to the overcensorship underpinning the complaints. Creators complain that YouTube has set up a slow and inefficient appeals system to counter cases of unfair censorship.
Ad-disabled videos on YouTube must get 1,000 views in the span of seven days just to qualify for a review. This approach hurts smaller YouTube channels, because it removes the ability for creators to make money on the most important stage of a
YouTube video's life cycle: the first seven days, the report explains. Typically, videos receive 70% or more of their views in the first seven days, according to multiple creators. Some of the platform's most popular creators, are saying that the
majority of their videos are being affected, dramatically reducing their revenue. Last week, liberal interviewer Dave Rubin, who has interviewed dozens of prominent political figures, announced that a large percentage of his videos had been demonetized,
cutting him off from being able to make money on the millions of views he typically gets, perhaps due to the politically incorrect leanings of his guests, eg Ex-Muslim Ayaan Hirsi Ali, former Minnesota Governor Jesse Ventura, feminist activist and
scholar Christina Hoff Sommers, and Larry King. YouTube issued a response saying little, except that they hope the algorithms get better over time. |
|
US catholic church becomes an early victim of YouTube's censorship of anything politically incorrect
|
|
|
|
7th September 2017
|
|
| See article from catholic.org |
US Catholics have become an early victim of newly introduced censorship measures from YouTube, presumably because their teaching is considered offensive due to politically incorrect attitudes towards gays and abortion. Catholic Online writes:
More media organizations are criticizing YouTube's increasingly oppressive soft censorship policies which are now eliminating mainstream news reports from the video sharing network. Many content creators on YouTube are losing millions
in revenue as the Google-owned firm reduces and cuts off payments in pursuit of profits and control. YouTube is censoring content though various indirect means even if that content does not violate any terms of service. The
Google-owned firm is removing content that it deems inappropriate or offensive, and is taking cues from the Southern Poverty Law Center. The result seems to be a broad labeling of content, and the suppression of even mainstream news. Many of Catholic
Online's bible readings have been caught up in YouTube's web of suppression, despite containing no commentary or message other than the reading of the scriptures. YouTube is not a government agency but a private platform, so it is
free to ban or restrict content as it pleases. Therefore, their policies, no matter how arbitrary, are not true censorship. However, the firm is practicing what some call soft censorship. Soft censorship is any kind of
activity that suppresses speech, particularly that which is true and accurate. It takes many forms. For example, broadcasting celebrity gossip in place of news is a form of soft censorship. Placing real news lower in search results, preventing content
from being shared on social media, or depriving media outlets of ad revenue for reporting on certain topics, are all common forms of soft censorship. For some unknown reason, Catholic Online has also been targeted by these
policies. Saints videos and daily readings are the most common targets. None of this content can be considered objectionable by any means, and none of it infringes on YouTube's terms and conditions. It is suspected that anti-Christian bigotry, such as
that promoted by liberal extremist organizations like the Southern Poverty Law Center, is to blame. The problem for content creators and media organizations is that there are few places for them to go. Most video viewing takes
place on YouTube, and there are no video hosting sites as well known and widely used as YouTube. Other sites also restrict content and some don't share revenues with content creators. This makes YouTube a monopoly; they are literally the only show in
town. The time has come for governments around the world to recognize that Facebook, Google, and YouTube control the public forum. If freedom of speech is to be protected, then these firms must be compelled to abide by free speech
rules. |
|
YouTube introduce a new tier of censorship to restrict the reach of 'objectionable' videos
|
|
|
| 2nd
September 2017
|
|
| See article
from thesun.co.uk |
YouTube has introduced a new tier of censorship designed to restrict the audience for videos deemed to be inappropriate or offensive to some audiences. The site is now putting videos into a limited state if they are deemed controversial enough to
be considered objectionable, but not hateful, pornographic or violent enough to be banned altogether. This policy was announced several months ago but has come into force in the past week, prompting anger among members of the YouTube community.
YouTube defines Limited Videos as follows: Our Community Guidelines prohibit hate speech that either promotes violence or has the primary purpose of inciting hatred against individuals or groups based on certain
attributes. YouTube also prohibits content intended to recruit for terrorist organizations, incite violence, celebrate terrorist attacks, or otherwise promote acts of terrorism. Some borderline videos, such as those containing inflammatory religious or
supremacist content without a direct call to violence or a primary purpose of inciting hatred, may not cross these lines for removal. Following user reports, if our review teams determine that a video is borderline under our policies, it may have some
features disabled. These videos will remain available on YouTube, but will be placed behind a warning message, and some features will be disabled, including comments, suggested videos, and likes. These videos are also not eligible
for ads. Having features disabled on a video will not create a strike on your account.
Videos which are put into a limited state cannot be embedded on other websites. They also cannot be easily published on
social media using the usual share buttons, and other users cannot comment on them. Crucially, the person who made the video will no longer receive any payment. Earlier this week, Julian Assange wrote: 'Controversial' but contract-legal videos [i.e. videos that do not break YouTube's terms and conditions] cannot be liked, embedded or earn [money from advertising revenue].
What's interesting about the new method deployed is that it is a clear attempt at social engineering. It isn't just turning off the ads. It's turning off the comments, embeds, etc too. Everything possible to strangle the
reach without deleting it.
|
25th February 2010 | |
| Google execs sentenced for bullying video posted on YouTube
|
From business.timesonline.co.uk See also
Does Italy's Google Conviction Portend
More Censorship? from wired.com |
Three Google executives were convicted in Italy of allowing film of an autistic schoolboy being bullied to be posted online in a ruling that could profoundly change the way in which video clips are put on the internet. The three Google executives
— David Drummond, senior vice-president and chief legal officer, George Reyes, Google's former chief financial officer, and Peter Fleischer, global privacy counsel — were each given a six-month suspended prison sentence, but were cleared of defamation
charges. A fourth defendant, Arvind Desikan, senior product marketing manager, was acquitted. Alfredo Robledo, the prosecutor, said that he was very satisfied with the verdict in the case, adding: Protection of human beings must prevail
over business logic. Robledo said that the video, which was posted on September 8, 2006, had remained online until November 7 and should have been taken down immediately. Google said that it would appeal against the ruling. The American
company said that the decision attacked the principles of freedom on which the internet is built. Bill Echikson, a Google spokesman, said: It's the first time a Google employee has been convicted for [violation of] privacy anywhere in the world. It's
an astonishing decision that attacks the principle of freedom of expression. Italian bloggers also criticised the verdict, with one blogger on the La Stampa website declaring: From today we are less Western and more Chinese. Matt
Sucherman, vice-president of Google and its deputy general counsel for Europe, the Middle East and Africa, conceded that the video was totally reprehensible, but said that Google had taken it down within hours of being notified of it by Italian
police and that none of those convicted had had anything to do with it. He said: They did not appear in it, film it, upload it or review it. None of them know the people involved or were even aware of the video's existence until after it was removed.
Sucherman said that the ruling by the judge, Oscar Magi, meant that employees of hosting platforms like Google Video are criminally responsible for content that users upload. If social networks and community bulletin boards were held
responsible for vetting every single piece of content that is uploaded to them — every piece of text, every photo, every file, every video — then the web as we know it will cease to exist and many of the economic, social, political and technological
benefits it brings could disappear.
|
8th August 2009 | | |
YouTube expand their posting guidelines
| Based on
article from theregister.co.uk See also
guidelines from youtube.com |
YouTube has expanded the range of barred activities to include, amongst other things, invasions of privacy.
If a video you've recorded features people who are readily identifiable and who haven't consented to being filmed, there's a
chance they'll file a privacy complaint seeking its removal, say its new guidelines: Don't post other people's personal information, including phone numbers, addresses, credit card numbers, and government IDs. We're serious about keeping our users
safe and suspend accounts that violate people's privacy.
It also said that material designed to harass people was not welcome. If you wouldn't say it to someone's face, don't say it on YouTube, say the new guidelines: And if you're
looking to attack, harass, demean, or impersonate others, go elsewhere.
The new guidelines also seek to govern the behaviour of people reacting to videos: Users shouldn't feel threatened when they're on YouTube. Don't leave threatening
comments on other people's videos.
|
12th December 2008 | | |
|
YouTube pulls risqué videos to chase profit See article from irishtimes.com |
3rd December 2008 | |
| YouTube restrict suggestive material to adults and demote it in searches
|
Based on article from uk.youtube.com
|
Our goal is to help ensure that you're viewing content that's relevant to you, and not inadvertently coming across content that isn't. Here are a few things we came up with:
- Stricter standard for mature content - While videos featuring pornographic images or sex acts are always removed from the site when they're flagged, we're tightening the standard for what is considered sexually suggestive. Videos with sexually
suggestive (but not prohibited) content will be age-restricted, which means they'll be available only to viewers who are 18 or older.
- Demotion of sexually suggestive content and profanity - Videos that are considered sexually
suggestive, or that contain profanity, will be algorithmically demoted on our Most Viewed, Top Favourited, and other browse pages. The classification of these types of videos is based on a number of factors, including video content and
descriptions. In testing, we've found that out of the thousands of videos on these pages, only several each day are automatically demoted for being too graphic or explicit. However, those videos are often the ones which end up being repeatedly flagged by
the community as being inappropriate.
- Improved thumbnails - To make sure your thumbnail represents your video, your choices will now be selected algorithmically.
- More accurate video information - Our Community
Guidelines have always prohibited folks from attempting to game view counts by entering misleading information in video descriptions, tags, titles, and other metadata. We remain serious about enforcing these rules. Remember, violations of these
guidelines could result in removal of your video and repeated violations will lead to termination of your account.
|
| |