Dear California Law Makers: How The Hell Can I Comply With Your New Age-Appropriate Design Code?
26th August 2022
See article from techdirt.com
The California legislature is very, very close to passing AB 2273, The California Age-Appropriate Design Code Act. As far as I can tell, it has strong support in the legislature and very little opposition. And that's incredibly dangerous, because the bill is not just extremely problematic, it's also impossible to comply with. The bill is a 'for the children' bill, in that it has lots of language claiming that this is about protecting children from nefarious online services that create harm. But, as Goldman makes clear, the bill targets everyone, not just children, because it has ridiculously broad definitions. Bill 2273 doesn't limit its impact to sites targeting those under 13. It targets any business with an online service likely to be accessed by children, who are defined as 'a consumer or consumers who are under 18 years of age'. I'm curious whether that means someone who is not buying (i.e., consuming) anything doesn't count? Most likely it will mean consuming as in accessing or using the service. And that's ridiculous, because EVERY service is likely to have at least someone under the age of 18 visit it. According to the law, I need to estimate the age of child users with a reasonable level of certainty. How? Am I really going to have to start age-verifying every visitor to the site? It seems like I risk serious liability by not doing so. And then what? Now California has just created a fucking privacy nightmare for me. I don't want to find out how old all of you are and then track that data. We try to collect as little data about all of you as possible, but under the law that puts me at risk. Yes, incredibly, a bill that claims to be about protecting data effectively demands that I collect way more personal data than I ever want to collect.
See full article from techdirt.com
Instagram steps down the content feed for new young teen users
26th August 2022
See article from about.instagram.com
Last summer, we launched the Sensitive Content Control so people could choose how much or how little sensitive content to see in Explore from accounts they don't follow. The Sensitive Content Control has three options, which we've
renamed from when we first introduced the control to help explain what each option does. The three options are: More, Standard and Less. Standard is the default state, and will prevent people from seeing some sensitive content and
accounts. More enables people to see more sensitive content and accounts, whereas Less means they see less of this content than the default state. For people under the age of 18, the More option is unavailable. The Sensitive
Content Control has only two options for teens: Standard and Less. New teens on Instagram under 16 years old will be defaulted into the Less state. For teens already on Instagram, we will send a prompt encouraging them to select the Less experience.
This will make it more difficult for young people to come across potentially sensitive content or accounts in Search, Explore, Hashtag Pages, Reels, Feed Recommendations and Suggested Accounts. In addition, we
are testing a new way to encourage teens to update their safety and privacy settings. We'll show prompts asking teens to review their settings including: controlling who can reshare their content, who can message and contact them, what content they can
see and how they can manage their time spent on Instagram.
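The age-based rules Instagram describes above (three options, More unavailable under 18, new under-16 accounts defaulted to Less) can be summarised in a small sketch. This is only an illustration of the stated policy; the function and enum names are my own, not Instagram's API:

```python
from enum import Enum

class SensitivityLevel(Enum):
    """The three Sensitive Content Control options Instagram describes."""
    MORE = "more"
    STANDARD = "standard"
    LESS = "less"

def available_options(age: int) -> list[SensitivityLevel]:
    """Accounts under 18 cannot select More; adults see all three options."""
    if age < 18:
        return [SensitivityLevel.STANDARD, SensitivityLevel.LESS]
    return [SensitivityLevel.MORE, SensitivityLevel.STANDARD, SensitivityLevel.LESS]

def default_level(age: int, is_new_account: bool) -> SensitivityLevel:
    """New accounts under 16 default to Less; everyone else defaults to Standard."""
    if is_new_account and age < 16:
        return SensitivityLevel.LESS
    return SensitivityLevel.STANDARD
```

Existing teens keep their Standard default under this model, which matches Instagram's note that they are merely prompted to choose Less rather than being moved automatically.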
The rich and powerful of the World Economic Forum call for mass digital ID so as to better control the internet
20th August 2022
See article from reclaimthenet.org
See article from weforum.org
The World Economic Forum has made a big push for digital identity programs around the world. The article then states (without mentioning numbers) that consumers said they would pick banks and financial firms as the most trustworthy entities to create and maintain a system that controls their identities online. The WEF write-up adds some doom-and-gloom scenarios and fear-mongering into the mix, arguing that with economies around the world heading into high inflation and recession, and the trend likely to continue, the digital economy and its potential become more important than ever. But, the WEF warns, unless there is a way to identify everybody online, people will actually stop interacting online.
Former UK Supreme Court judge savages the government's censorship bill
18th August 2022
See article from spectator.co.uk by Jonathan Sumption
Weighing in at 218 pages, with 197 sections and 15 schedules, the Online Safety Bill is a clunking attempt to regulate content on the internet. Its internal contradictions and exceptions, its complex paper chase of definitions, its weasel language
suggesting more than it says, all positively invite misunderstanding. Parts of it are so obscure that its promoters and critics cannot even agree on what it does. The real vice of the bill is that its provisions are not limited to
material capable of being defined and identified. It creates a new category of speech which is legal but harmful. The range of material covered is almost infinite, the only limitation being that it must be liable to cause harm to some people.
Unfortunately, that is not much of a limitation. Harm is defined in the bill in circular language of stratospheric vagueness. It means any physical or psychological harm. As if that were not general enough, harm also extends to anything that may increase
the likelihood of someone acting in a way that is harmful to themselves, either because they have encountered it on the internet or because someone has told them about it. This test is almost entirely subjective. Many things which
are harmless to the overwhelming majority of users may be harmful to sufficiently sensitive, fearful or vulnerable minorities, or may be presented as such by manipulative pressure groups. At a time when even universities are warning adult students
against exposure to material such as Chaucer with his rumbustious references to sex, or historical or literary material dealing with slavery or other forms of cruelty, the harmful propensity of any material whatever is a matter of opinion. It will vary
from one internet user to the next. If the bill is passed in its current form, internet giants will have to identify categories of material which are potentially harmful to adults and provide them with options to cut it out or
alert them to its potentially harmful nature. This is easier said than done. The internet is vast. At the last count, 300,000 status updates are uploaded to Facebook every minute, with 500,000 comments left that same minute. YouTube adds 500 hours of
videos every minute. Faced with the need to find unidentifiable categories of material liable to inflict unidentifiable categories of harm on unidentifiable categories of people, and threatened with criminal sanctions and enormous regulatory fines (up to 10 per cent of global revenue), what is a media company to do? The only way to cope will be to take the course involving the least risk: if in doubt, cut it out. This will involve a huge measure of regulatory overkill. A new era
of intensive internet self-censorship will have dawned. See full article from spectator.co.uk
Meta calls for public comments about the police-requested takedown of drill music on Facebook
18th August 2022
See article from oversightboard.com
In January 2022, an Instagram account that describes itself as publicising British music posted a video with a short caption on its public account. The video is a 21-second clip of the music video for a UK drill music track called Secrets Not Safe by the
rapper Chinx (OS). The caption tags Chinx (OS) as well as an affiliated artist and highlights that the track had just been released. The video clip shows part of the second verse of the song and fades to a black screen with the text OUT NOW. Drill is a
subgenre of rap music popular in the UK, with a large number of drill artists active in London. Shortly after the video was posted, Meta received a request from UK law enforcement to remove content that included this track. Meta says that it was informed by law enforcement that elements of the track could contribute to a risk of offline harm. The company was also aware that the track referenced a past shooting in a way that raised concerns that it may provoke further violence. As a
result, the post was escalated for internal review by experts at Meta. Meta's experts determined that the content violated the Violence and Incitement policy, specifically the prohibition on coded statements where the method of
violence or harm is not clearly articulated, but the threat is veiled or implicit. The Community Standards list signs that content may include veiled or implicit threats. These include content that is shared in a retaliatory context, and content with
references to historical or fictional incidents of violence. Further information and/or context is always required to identify and remove a number of different categories listed at the end of the Violence and Incitement policy, including veiled threats.
Meta has explained to the Board that enforcement under these categories is not subject to at-scale review (the standard review process conducted by outsourced moderators) and can only be enforced by Meta's internal teams. Meta has further explained that
the Facebook Community Standards apply to Instagram. When Meta took the content down, two days after it was posted, it also removed copies of the video posted by other accounts. Based on the information that they received from UK
law enforcement, Meta's Public Policy team believed that the track might increase the risk of potential retaliatory gang violence, and acted as a threatening call to action that could contribute to a risk of imminent violence or physical harm, including
retaliatory gang violence. Hours after the content was removed, the account owner appealed. A human reviewer assessed the content to be non-violating and restored it to Instagram. Eight days later, following a second request from
UK law enforcement, Meta removed the content again and took down other instances of the video found on its platforms. The account in this case has fewer than 1,000 followers, the majority of whom live in the UK. The user received notifications from Meta
both times their content was removed but was not informed that the removals were initiated following a request from UK law enforcement. In referring this matter to the Board, Meta states that this case is particularly difficult as
it involves balancing the competing interests of artistic expression and public safety. Meta explains that, while the company places a high value on artistic expression, it is difficult to determine when that expression becomes a credible threat. Meta
asks the Board to assess whether, in this case and more generally, the safety risks associated with the potential instigation of gang violence outweigh the value of artistic expression in drill music. In its decisions, the Board
can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases. Respond via
article from oversightboard.com
The Hungarian media censor is investigating a children's cartoon on Netflix with gay characters
18th August 2022
See article from politico.eu
The Hungarian media censor has said it was investigating Netflix for potentially violating an anti-LGBT law, citing several complaints over a cartoon showing girls kissing. The National Media and Communications Authority said it was checking
whether an episode of a Netflix kids series named Jurassic World Camp Cretaceous had violated a law which prohibits the portrayal of homosexuality or transgender people in content shown to minors. The Netflix series, rated for 7-year-olds
and above, shows one of the main characters confessing her love to another girl and kissing her. The censor said that if it found Netflix to have violated its law, it would have to inform the Dutch media authority, which oversees Netflix because the
firm's European headquarters are in the Netherlands. The Dutch censor would in turn have the final say.
British Computer Society experts are not impressed by The Online Censorship Bill
15th August 2022
See article from bcs.org
See BCS report [pdf] from bcs.org
Plans to compel social media platforms to tackle online harms are not fit for purpose, according to a new poll of IT experts. Only 14% of tech professionals believed the Online Harms Bill was fit for purpose, according to the
survey by BCS, The Chartered Institute for IT. Some 46% said the bill was not workable, with the rest unsure. The legislation would have a negative effect on freedom of speech, most IT specialists (58%)
told BCS. Only 19% felt the measures proposed would make the internet safer, with 51% saying the law would not make it safer to be online. There were nearly 1,300 responses from tech professionals to the
survey by BCS. Just 9% of IT specialists polled said they were confident that legal but harmful content could be effectively and proportionately removed. Some 74% of tech specialists said they felt the bill
would do nothing to stop the spread of disinformation and fake news.
Testing End-to-End Encrypted Backups on Messenger
15th August 2022
See article from about.fb.com
Meta writes:
We're testing secure storage on Messenger, a new feature that allows you to back up your end-to-end encrypted chats. We're also starting a test of automatic end-to-end encrypted chat threads on
Messenger and expanding other features.
People want to trust that their online conversations with friends and family are private and secure. We're working hard to protect your personal messages and calls with end-to-end encryption by default on Messenger and Instagram.
Today, we're announcing our plans to test a new secure storage feature for backups of your end-to-end encrypted chats on Messenger, and more updates and tests to deliver the best experience on Messenger and Instagram. See article from about.fb.com
Google Search introduces content advisories about supposedly unreliable sources
12th August 2022
See article from reclaimthenet.org
Google is ramping up its effort to make sure the search engine isn't simply returning results -- like people might still expect it to do -- but what Google decides are trustworthy results as opposed to falsehoods and misinformation. Google is introducing what it calls content advisories. Google says the goal is to help users understand and evaluate high-quality information, and also to promote what it calls information literacy. And the way this Google-editorialized information is presented to users is fairly aggressive: on mobile devices, these advisories are full-screen, displayed above the fold where they are most visible. 'It looks like there aren't many great results for this search,' the advisory will read. On its blog, the corporation said that its systems are already trained to identify and prioritize signals which it has decided indicate that content is authoritative and trustworthy.