The Belgian government has decided to ban all gambling advertising and sports sponsorship

9th March 2023

See article from reuters.com
The Belgian government has decided to ban gambling advertising across all media from July 1st 2023. From January 1st 2025 there will be a further ban on advertising in stadiums, and from January 1st 2028 gambling companies will no longer be able to sponsor professional sports clubs. Justice Minister Vincent Van Quickenborne said in a statement that the censorship was intended to help those who want to get rid of their gambling addiction. He also cited the tsunami of gambling advertising as an additional problem. Gambling advertising will be banned from television, radio, cinemas, magazines, newspapers and in public spaces. Online advertising on websites and social media will also be prohibited.

Children's campaigners claim that EU proposals for responding to child abuse don't go far enough and call for all internet communications to be open to snooping regardless of the safety of internet users from hackers, fraudsters and thieves

6th March 2023

See article from ec.europa.eu
The European Commission proposed new EU rules to prevent and combat child sexual abuse (CSA) in May 2022. Complementing existing frameworks to fight online CSA, the EU proposal would introduce a new, harmonised European structure for assessing and mitigating the spread of child sexual abuse material (CSAM) online. The thrust of the proposal is to react in a unified way, either to CSAM that has been detected, or else to systems identified as most at risk of being used to disseminate such material. However, as is always the case with campaigners, this is never enough. The campaigners basically want everybody's communications to be open to snooping and surveillance, without the slightest consideration for people's safety from hackers, identity thieves, scammers, blackmailers and fraudsters. The European Commission wrote:

The Commission is
proposing new EU legislation to prevent and combat child sexual abuse online. With
85 million pictures and videos depicting child sexual abuse reported worldwide in 2021 alone, and many more going unreported, child sexual abuse is pervasive. The COVID-19 pandemic has exacerbated the issue, with the Internet Watch Foundation noting a
64% increase in reports of confirmed child sexual abuse in 2021 compared to the previous year. The current system based on voluntary detection and reporting by companies has proven to be insufficient to adequately protect children and, in any case, will
no longer be possible once the interim solution currently in place expires. Up to 95% of all reports of child sexual abuse received in 2020 came from one company, despite clear evidence that the problem does not only exist on one platform.
To effectively address the misuse of online services for the purposes of child sexual abuse, clear rules are needed, with robust conditions and safeguards. The proposed rules will oblige providers to detect, report and remove child
sexual abuse material on their services. Providers will need to assess and mitigate the risk of misuse of their services and the measures taken must be proportionate to that risk and subject to robust conditions and safeguards. A
new independent EU Centre on Child Sexual Abuse (EU Centre) will facilitate the efforts of service providers by acting as a hub of expertise, providing reliable information on identified material, receiving and analysing reports from providers to
identify erroneous reports and prevent them from reaching law enforcement, swiftly forwarding relevant reports for law enforcement action and by providing support to victims. The new rules will help rescue children from further
abuse, prevent material from reappearing online, and bring offenders to justice. Those rules will include:
- Mandatory risk assessment and risk mitigation measures: Providers of hosting or interpersonal communication services will have to assess the risk that their services are misused to disseminate child sexual abuse material or for the solicitation of children, known as grooming. Providers will also have to propose risk mitigation measures.
- Targeted detection obligations, based on a detection order: Member States will need to designate national authorities in charge of reviewing the risk assessment. Where such authorities determine that a significant risk remains, they can ask a court or an independent national authority to issue a detection order for known or new child sexual abuse material or grooming. Detection orders are limited in time, targeting a specific type of content on a specific service.
- Strong safeguards on detection: Companies having received a detection order will only be able to detect content using indicators of child sexual abuse verified and provided by the EU Centre. Detection technologies must only be used for the purpose of detecting child sexual abuse. Providers will have to deploy technologies that are the least privacy-intrusive in accordance with the state of the art in the industry, and that limit the error rate of false positives to the maximum extent possible.
- Clear reporting obligations: Providers that have detected online child sexual abuse will have to report it to the EU Centre.
- Effective removal: National authorities can issue removal orders if the child sexual abuse material is not swiftly taken down. Internet access providers will also be required to disable access to images and videos that cannot be taken down, e.g. because they are hosted outside the EU in non-cooperative jurisdictions.
- Reducing exposure to grooming: The rules require app stores to ensure that children cannot download apps that may expose them to a high risk of solicitation of children.
- Solid oversight mechanisms and judicial redress: Detection orders will be issued by courts or independent national authorities. To minimise the risk of erroneous detection and reporting, the EU Centre will verify reports of potential online child sexual abuse made by providers before sharing them with law enforcement authorities and Europol. Both providers and users will have the right to challenge any measure affecting them in court.
The new EU Centre will support:
- Online service providers, in particular in complying with their new obligations to carry out risk assessments, detect, report, remove and disable access to child sexual abuse online, by providing indicators to detect child sexual abuse and receiving the reports from the providers;
- National law enforcement and Europol, by reviewing the reports from the providers to ensure that they are not submitted in error, and channelling them quickly to law enforcement. This will help rescue children from situations of abuse and bring perpetrators to justice;
- Member States, by serving as a knowledge hub for best practices on prevention and assistance to victims, fostering an evidence-based approach;
- Victims, by helping them to take down the materials depicting their abuse.
Next steps

It is now for the European Parliament and the Council to agree on the proposal. Once adopted, the new Regulation will replace the current interim Regulation.

Feedback from members of the public on the proposals is open for a minimum of 8 weeks.
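The detection obligations above hinge on providers matching content against indicators supplied and verified centrally. As a rough illustration of how detection of already-known material can work, here is a minimal sketch that assumes the indicators are distributed as a plain list of cryptographic file hashes; real deployments typically rely on perceptual hashing (and on classifiers for previously unseen material or grooming), and every name in the snippet is invented for illustration.

```python
# Illustrative sketch only: scanning uploads against a centrally supplied list of
# "indicators" for known material, modelled here as SHA-256 file hashes. Real
# systems typically use perceptual hashes (e.g. PhotoDNA) plus classifiers for
# new material; all names below are invented for this example.
import hashlib

# Hypothetical indicator list as a provider might receive it from the EU Centre.
KNOWN_INDICATORS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example entry only
}

def sha256_of_file(path: str) -> str:
    """Hash a file in chunks so large uploads do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_report(path: str) -> bool:
    """True if the upload matches a known indicator and should be reported
    (under the proposal, to the EU Centre rather than directly to police)."""
    return sha256_of_file(path) in KNOWN_INDICATORS
```

Under the proposal, any match would be reported to the EU Centre, which is meant to filter out erroneous reports before they reach law enforcement.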
According to child campaigners: On 8 February 2023, the European Parliament's Committee on the Internal Market and Consumer Protection (IMCO)
published its draft report on the European Commission's proposal to prevent and combat child sexual abuse. The draft report seeks a vastly reduced scope for the Regulation. It prioritises the anonymity of perpetrators of abuse over the rights of victims
and survivors of sexual abuse, and seeks to reverse progress made in keeping children safe as they navigate, or are harmed in, digital environments that were not built with their safety in mind. The letter also criticises the removal of age verification, and claims that technology can meet high privacy standards, explaining that the new legislation adds additional safeguards to already effective measures to prevent the spread of this material online. And of course the campaigners demand that technology companies allow the surveillance of all messages, whether via backdoors to encryption or perhaps simply by banning encryption altogether. See the letter from the likes of the NSPCC. See article from iwf.org.uk

The French state looks set to take control of the pictures that parents post of their children on social media

28th February 2023

See article from politico.eu
Members of the French National Assembly's law committee have unanimously green-lit draft legislation to allow the state to control the ability of parents to post pictures of their children on social media. Bruno Studer, an MP from President Emmanuel Macron's party who put the bill forward, noted in an interview that, on average, children have 1,300 photos of themselves circulating on social media platforms before the age of 13, before they are even allowed to have an account. So-called
sharenting (combining sharing and parenting, referring to posting sensitive pictures of one's kids online) constitutes one of the main risks to children's privacy, according to the bill's explanatory statement. Half of the pictures shared by child sexual
abusers were initially posted by parents on social media. The legislation adopted on Tuesday says that both parents would be jointly responsible for their offspring's image rights and shall involve the child ... according to his or her age and degree
of maturity. In case of disagreement between parents, a judge can ban one of them from posting or sharing a child's pictures without authorization from the other. And in the most extreme cases, parents can lose their parental authority over their kids'
image rights if the dissemination of the child's image by both parents seriously affects the child's dignity or moral integrity. The bill still needs to go through a plenary session next week and the Senate before it would become law.

French lawmakers rush to get age verification mandated to block under 15s from accessing social media without parental permission

23rd February 2023

See article from xbiz.com
The Culture Committee of France's National Assembly voted last week to expand the age verification requirements already mandated for adult content to several popular mainstream social media platforms. The French lawmakers voted in favour of requiring
social media platforms to block access to under 15s, unless they have permission from their parents. The reasons given by the committee for last week's hearing were to examine a proposed law establishing a "digital age of majority" of 15
and to "fight online hate." The bill was put forward by MP Laurent Marcangeli, a member of parliament from President Emmanuel Macron's allied party Horizons, and was endorsed by the committee on Feb. 15. Infringing social media
companies, the Politico report noted, will face fines of up to one percent of their annual global turnover. Technical solutions to verify users' ages would need to be rubber-stamped by the audiovisual and privacy regulators, Arcom and CNIL, and Arcom would be empowered to sue non-compliant companies. The bill now heads to a plenary session and to the Senate. According to a French government source, CNIL and Arcom will soon be releasing age verification technical guidelines that will "frame the minimum criteria" for "pornographic websites" to comply. It appears that France will allow payment card verification and face scanning for age (with an unlikely promise of not surreptitiously using facial recognition at the same time) in the short term, whilst waiting for a more complex technical solution suggested by CNIL. French Digital Minister Jean-Noël Barrot told the parliamentarians that a system of "double anonymity" for porn site age verification would be tested at the end of March with a few adult companies, which he did not name.
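Barrot did not explain how double anonymity would work in practice. The general idea floated by CNIL is that an age verification provider attests that a visitor is an adult without learning which site the attestation is used on, while the site checks the attestation without learning who the visitor is. The sketch below is only an illustration of that idea using a generic signed token; the actual French scheme is more elaborate, the names are invented for the example, and it relies on the third-party Python cryptography package.

```python
# Illustrative sketch only: a "double anonymity" style flow in which an age
# verification provider issues an anonymous, short-lived "over 18" token and an
# adult site checks it offline. The site never learns who the visitor is, and the
# provider never learns which site the token is shown to. Names and token format
# are invented; requires the third-party 'cryptography' package.
import json
import time
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Age verification provider: checks the user's age out of band (bank card, ID
# document, etc.), then signs an attestation containing no identifying data.
provider_key = Ed25519PrivateKey.generate()
provider_public_key = provider_key.public_key()  # published so sites can verify offline

def issue_age_token() -> dict:
    claim = {
        "over_18": True,
        "nonce": secrets.token_hex(16),      # single-use token identifier
        "expires": int(time.time()) + 600,   # short lifetime limits reuse
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": provider_key.sign(payload).hex()}

# Adult site: verifies the provider's signature and the expiry without ever
# contacting the provider or learning the visitor's identity.
def accept_token(token: dict) -> bool:
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    try:
        provider_public_key.verify(bytes.fromhex(token["signature"]), payload)
    except InvalidSignature:
        return False
    return bool(token["claim"].get("over_18")) and token["claim"]["expires"] > time.time()

token = issue_age_token()   # obtained by the user from the provider
print(accept_token(token))  # True: access granted with no identity disclosed
```

The token carries nothing identifying and is short-lived and single-use, which is what keeps the two halves of the anonymity from being stitched back together.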

Pirate Party MEPs claim successes for privacy as new internet identity card laws are debated in the European Parliament

14th February 2023

See press release from european-pirateparty.eu
The lead Committee on Industry, Research and Energy (ITRE) has adopted a draft mandate on the European digital identity (e-ID). The legislative proposal will allow EU citizens to prove their identity via mobile app and facilitate everyday situations such
as dealing with public authorities or identification at airports. Pirate Party MEPs made sure that the source code used for providing European Digital Identity Wallets will be open source, that non-users of the voluntary eID
scheme must not suffer disadvantages and will be able to use alternative means of identification or authentication. They have not been able to prevent the mandatory acceptance of government browser certificates, but there will be exceptions. Pirate MEPs have also been able to prevent more serious invasions of our privacy, such as a compulsory unique identification number throughout the EU. They keep pushing for more safeguards. Pirate Party MEP Mikuláš Peksa, Greens/EFA shadow rapporteur in the ITRE Committee, comments: The European digital identity is a cornerstone for the modernization and digitization of the European economy and public services. Unfortunately, the European Commission had put a lot of problematic things in the proposal that inflated it with utter nonsense. Together with others, we Pirates have succeeded in removing most of these problems, such as a compulsory unique identification number. This is a big win for European citizens. We are sending a smart and safer instrument to the next negotiation. Thanks to the European digital identity, citizens will not have to show a plastic card with all their personal details anymore. The European Digital Wallet will allow them to prove, for example, their legal age without disclosing other personal data when buying alcohol or renting a car.
Pirate Party MEP Patrick Breyer, who is negotiating the law in the Committee on Civil Liberties (LIBE), comments: We need to counter the risk that, as the new eID is increasingly required, the anonymity online that protects us from profiling and identity theft is gradually eroded. Pirates therefore push via
the Civil Liberties Committee for the addition of a provision ensuring that services are normally provided without electronic identification or authentication wherever reasonably possible. Another LIBE addition will be needed to ensure that the sensitive
data of citizens in their 'digital wallet' will be stored exclusively in a decentralized manner on their own device, unless they choose centralized storage. Decentralized data storage protects our data from hacks and identity theft.
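The privacy claims here, selective disclosure of a single attribute such as legal age and storage only on the citizen's own device, can be made concrete with a small sketch. Everything below is an illustration only: the data layout, names and signature scheme are assumptions made for the example, not the actual eID Wallet specification, and it relies on the third-party Python cryptography package.

```python
# Illustrative sketch only: a wallet held on the citizen's own device that stores
# attributes each signed separately by an issuer, so a single attribute (here
# "over_18") can be disclosed and verified without revealing anything else.
# Data layout and names are invented; requires the 'cryptography' package.
import json
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # known to relying parties

def issue_credential(attributes: dict) -> dict:
    """Issuer signs each attribute separately, with a per-attribute salt, so the
    wallet can later reveal one attribute without exposing the others."""
    credential = {}
    for name, value in attributes.items():
        salt = os.urandom(16).hex()
        payload = json.dumps({"name": name, "value": value, "salt": salt}, sort_keys=True).encode()
        credential[name] = {"value": value, "salt": salt,
                            "signature": issuer_key.sign(payload).hex()}
    return credential

def present_attribute(credential: dict, name: str) -> dict:
    """Wallet (stored locally only): hand over just the requested attribute."""
    entry = credential[name]
    return {"name": name, "value": entry["value"], "salt": entry["salt"],
            "signature": entry["signature"]}

def verify_presentation(presentation: dict) -> bool:
    """Relying party (e.g. a car rental desk): check the issuer's signature on the
    one disclosed attribute, learning nothing about the rest of the wallet."""
    payload = json.dumps({"name": presentation["name"], "value": presentation["value"],
                          "salt": presentation["salt"]}, sort_keys=True).encode()
    try:
        issuer_public_key.verify(bytes.fromhex(presentation["signature"]), payload)
        return True
    except InvalidSignature:
        return False

wallet = issue_credential({"name": "Jane Doe", "date_of_birth": "1990-01-01", "over_18": True})
proof = present_attribute(wallet, "over_18")  # only this attribute leaves the device
print(verify_presentation(proof))             # True
```

Because each attribute is signed separately with its own salt, revealing the age attribute discloses nothing about the name or date of birth also held in the wallet.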
After the addition of provisions in the exclusive competence of other Committees (LIBE, JURI) to the ITRE report, the Parliament's mandate could be finalised as early as March. Trilogue negotiations with the Council will follow. Pirate Party MEP Mikuláš Peksa will be among the negotiating team.

Film censor IFCO logs complaints over moronic portrayal of Irish people in The Banshees of Inisherin

12th February 2023

See article from irishexaminer.com
The Banshees of Inisherin is a 2022 Ireland/UK/US drama by Martin McDonagh, starring Colin Farrell, Barry Keoghan and Kerry Condon. Two lifelong friends find themselves at an impasse when one abruptly ends their relationship, with alarming consequences for both of them. The Irish Film Classification Office (IFCO) has received complaints about the portrayal of Irish people as moronic in the Oscar-nominated film The Banshees of Inisherin, which was described as extremely offensive by one complainer. There was also criticism about the accuracy of the accents portrayed by the inhabitants of the fictional island that features in the movie, as well as a claim that its reflection of Ireland in the 1920s was wrong. In response, acting director of IFCO George Sinclair noted that the grievances raised were not classification issues. However, he pointed out that the end credits of the film contained a statement confirming that its characters and events were fictitious. The IFCO also received six complaints last month in relation to the new Tom Hanks film, A Man Called Otto. All of these referred to the detailed depiction of suicide attempts in a film that the IFCO has rated 12A.

French minister speaks of a state age verification system being in place by September 2023

8th February 2023

See article from xbiz.com
France's minister for digital affairs has announced that a government-issued digital certificate certifying a person's age will be necessary to view any adult content online in that country starting in September. Minister for Digital Affairs Jean-Noël Barrot told newspaper Le Parisien that the new digital certificate will be unveiled this week, with full implementation planned for September. Barrot warned all adult websites to comply, under penalty of seeing their broadcasting prohibited on national territory: France will be the first country in the world to propose a solution like this. This technical solution that we are working on could be used to enforce the age limits that exist in our law, but which are not sufficiently respected online. However, Barrot admitted that the specifics have not been finalized. Critics have noted that implementation of the digital certificate could face complications relating to the issue of personal data protection.

Porn sites in France suffer setbacks after losing court cases

15th January 2023

See article from numerama.com
Notable porn websites operating in France have suffered two legal defeats. In the first case, a priority question of constitutionality (QPC) had been addressed to the Court of Cassation. MindGeek, which publishes Pornhub, argued that ISP blocking of their websites, as ordered by France's internet censor, the Audiovisual and Digital Communication Regulatory Authority (Arcom), was an affront to freedom of speech in France. In its verdict of January 5, the Court of Cassation swept aside this QPC: The question posed is not of a serious nature, considering that the legal framework in question is sufficiently clear and precise to exclude any risk of arbitrariness. Nor is there any disproportionate harm to the objectives pursued. The attack on freedom of expression, by imposing the use of a device for verifying the age of the person accessing pornographic content, other than a simple declaration of majority, is necessary, appropriate and proportionate to the objective of protecting minors. Meanwhile, YouPorn and RedTube lost an administrative challenge to the rather circuitous way that French authorities have specified the laws requiring age/identity verification to view porn websites.

A brief summary of Ireland's Internet Censorship Act

3rd January 2023

See article from ckt.ie
Ireland's new internet censorship regime will be overseen by an Online Safety Commissioner (OSC), who will create binding online censorship rules to hold designated online service providers to account for how they censor content. The OSC is also empowered under the Act to introduce an individual complaints mechanism. Harmful content is set out in Part 11 of the new Act:
- Offence Specific Categories sets out 42 different offences. A large proportion of these offences are offences against children, or provisions protecting against the identification of child victims or child offenders. Notably, the Act appears to be silent as regards identifying a child who is subject to an Order or proceedings under the Child Care Act 1991.
- Other Categories of Harmful Online Content are set out as a two-tier category:
- (a) The Online Content must be content which bullies or humiliates another person; promotes or encourages behaviour that characterises a feeding or eating disorder; promotes or encourages self-harm or suicide; makes available knowledge of methods of
self-harm or suicide.
- (b) Online Content must meet the risk test if it gives rise to: (a) any risk to a person's life; or (b) a risk of significant harm to a person's physical or mental health, where the harm is reasonably foreseeable.
This part of the Act deals with age-inappropriate content, yet the Act does not provide for any age-verification measures. Earlier drafts of the Act sought to introduce robust measures to ensure that account holders were verified as being at least 15 years old. This provision did not survive to the enactment stage.