What is the Online Safety Act?
The Online Safety Act is the UK's main internet-regulation law. It received royal assent in October 2023 after roughly four years of drafting, parliamentary ping-pong, and public argument. The Act places legal duties on platforms that host user-generated content, on search services, and on some pornography sites. Ofcom is the regulator responsible for enforcing it: the first set of illegal-content codes of practice came into force in March 2025, and the child-safety duties roll out through 2025 and 2026.
What the Act is trying to do
The Act reframes platform liability around a duty of care. Instead of saying "this specific post must be removed," it says platforms in scope must assess the risks their services pose to users, particularly children, and take proportionate steps to reduce those risks.
The targets are illegal content (child sexual abuse material, terrorist content, fraud, threats, incitement, and a long list of priority offences) and, for services likely to be accessed by children, legal-but-harmful content, including pornography, material promoting suicide, self-harm, or eating disorders, bullying, and extreme violence.
Key duties
The main obligations on in-scope services are:
- Illegal-content duty: platforms must run risk assessments for priority illegal content, use proportionate systems to detect and remove it, and prevent users from encountering it where feasible.
- Child-safety duty: services likely to be accessed by children must assess risks to minors and apply age-appropriate protections, including highly effective age assurance for pornography and other restricted content.
- Transparency reporting: categorised services (Category 1, 2A, or 2B) must publish regular reports, as required by Ofcom notices, on how they handle harmful content and how their moderation systems perform.
- User empowerment tools: Category 1 services must give adult users controls to filter out content such as abuse, incitement, and material promoting eating disorders.
- Reporting and complaints: platforms must provide accessible mechanisms for users to report harmful content and appeal moderation decisions (sketched below).
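
To make that last duty concrete, here is a minimal sketch of a report-and-appeal flow. Everything in it — the type names, the reason taxonomy, the in-memory stores — is a hypothetical illustration, not anything the Act or Ofcom's codes prescribe; the legal requirement is only that mechanisms like these exist and are easy to find and use.

```typescript
// Hypothetical report-and-appeal flow. The Act requires accessible
// reporting and complaints mechanisms; the shapes below are illustrative.

type ReportReason = "illegal_content" | "harmful_to_children" | "other";

interface Report {
  id: string;
  contentId: string;
  reporterId: string;
  reason: ReportReason;
  status: "open" | "actioned" | "dismissed";
}

interface Appeal {
  reportId: string;
  appellantId: string;
  grounds: string;
  status: "pending" | "upheld" | "rejected";
}

const reports = new Map<string, Report>();
const appeals: Appeal[] = [];

// Duty: let users report content they believe is harmful or illegal.
function fileReport(
  contentId: string,
  reporterId: string,
  reason: ReportReason
): Report {
  const report: Report = {
    id: crypto.randomUUID(), // global in Node 19+ and modern browsers
    contentId,
    reporterId,
    reason,
    status: "open",
  };
  reports.set(report.id, report);
  return report;
}

// Duty: let users appeal a moderation decision once one has been made.
function fileAppeal(
  reportId: string,
  appellantId: string,
  grounds: string
): Appeal {
  const report = reports.get(reportId);
  if (!report || report.status === "open") {
    throw new Error("Only decided reports can be appealed");
  }
  const appeal: Appeal = { reportId, appellantId, grounds, status: "pending" };
  appeals.push(appeal);
  return appeal;
}
```

One design point worth noticing: reports and appeals are separate records with separate lifecycles, because the Act treats reporting harmful content and complaining about a moderation decision as distinct duties rather than one queue.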
Ofcom can fine non-compliant companies up to £18 million or 10% of global annual turnover, whichever is higher, and in serious cases can seek court orders to restrict access to a service in the UK. Senior managers can be held personally liable for specific failings, including failure to comply with child-safety information requests.
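
Since the "whichever is higher" rule is occasionally misread, a one-line sketch makes it explicit. The function name is ours, and "turnover" here stands in for what the Act calls qualifying worldwide revenue:

```typescript
// Maximum fine: the greater of £18m and 10% of global annual turnover.
function maxFineGBP(globalAnnualTurnoverGBP: number): number {
  return Math.max(18_000_000, 0.1 * globalAnnualTurnoverGBP);
}

maxFineGBP(2_000_000_000); // £200m for a service with £2bn turnover
maxFineGBP(50_000_000);    // £18m: the floor applies below £180m turnover
```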
What it means for platforms
The Act applies extraterritorially: any service with a significant number of UK users, or one that targets the UK market, is in scope regardless of where it is headquartered. In practice this has pushed platforms to invest in UK-specific risk assessments, age-assurance vendors, and documentation of their moderation systems. Smaller services have the same duties in principle, scaled to what is proportionate, but many have struggled with the cost of compliance. Some adult sites chose to block UK traffic rather than implement age verification, and a handful of smaller forums have shut down entirely, citing the legal exposure.
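
For a sense of how blunt the block-the-UK option is, here is a hedged sketch of geo-blocking at the edge, written as a fetch-style handler. The CF-IPCountry header is a real Cloudflare geo-IP header; the rest — returning HTTP 451, the response copy — is just one plausible way to do it.

```typescript
// Hypothetical edge handler: refuse UK traffic instead of building
// age assurance. CF-IPCountry is Cloudflare's geo-IP header; HTTP 451
// is "Unavailable For Legal Reasons" (RFC 7725).
export default {
  fetch(request: Request): Response {
    if (request.headers.get("CF-IPCountry") === "GB") {
      return new Response("Not available in the United Kingdom.", {
        status: 451,
      });
    }
    return new Response("Hello, rest of the world.");
  },
};
```

The trade-off is stark: a dozen lines of edge config versus an age-assurance integration, ongoing vendor costs, and the data-protection questions that come with verifying identity — which is exactly why some services took this exit.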
Criticism and open questions
The Act has been contested from two directions at once.
Civil-liberties groups including the Electronic Frontier Foundation, Open Rights Group, and Big Brother Watch argue that the duties push platforms toward over-removal of lawful speech and that the age-assurance requirements normalize identity verification across the web. They have also flagged the encryption provisions in section 121 as a direct threat to end-to-end encrypted services: the section gives Ofcom the power to require "accredited technology" for detecting CSAM in private messages. Signal and WhatsApp publicly said they would leave the UK rather than comply, and the government has so far indicated the powers will not be used until the technology exists.
On the other side, child-safety advocates including the NSPCC and the Molly Rose Foundation argue the Act does not go far enough, that its risk-assessment framing gives platforms too much room to grade their own homework, and that enforcement has been slow. Both critiques have some merit. Which one matters more will not really be clear until Ofcom has taken enforcement action against a few major services.
