Ofcom writes: The Online Safety Bill, as currently drafted, will require Ofcom to assess, and publish its findings about, the risks of harm arising from content that users may encounter on in-scope services. It will also require in-scope services to assess the risks of harm to their users from such content, and to have systems and processes in place for protecting individuals from harm. Online users can face a range of risks, and the harms they may experience are wide-ranging, complex and nuanced; the impact of the same harm can also vary between users. In light of this complexity, we need to understand the mechanisms by which online content and conduct may give rise to harm, and to use that insight to inform our work, including our guidance to regulated services on how they might comply with their duties. This report sets out a generic model for understanding how online harms manifest. The research aimed to test a framework, developed by Ofcom, against real-life user experiences. We wanted to explore whether there were common risks and user experiences that could provide a single framework through which different harms could be analysed. There are two important considerations when reading this report:
The research goes beyond platforms' safety systems and processes to help shed broader light on what people are experiencing online. It therefore touches on issues that are beyond the scope of the proposed online safety regime.
The research reflects people's views and experiences of their online world: it is based on participants self-identifying as having experienced 'significant harm', whether caused directly or indirectly, or 'illegal content'.
Participants' definitions of harmful and illegal content may differ from one another's, and do not necessarily align with how the Online Safety Bill, Ofcom or others may define them.