The UK Online Safety Act and the Architecture of Choice
- → The Online Safety Act transfers design authority over online choice architectures to platforms supervised by Ofcom, without specifying the principles by which that authority should be exercised.
- → Three structural tensions follow: friction as protection versus friction as control, default settings as de facto policy decisions, and algorithmic amplification as a category the Act does not adequately address.
- → Ofcom's codes of practice represent the most tractable intervention point for introducing behavioural science principles before compliance patterns harden into industry norms.
1. The Architecture of Accountability
The UK Online Safety Act 2023, which received Royal Assent in October 2023 and for which Ofcom began issuing codes of practice in 2024, encodes in law the principle that platforms bear responsibility for the content they host and amplify. Ofcom acquired new enforcement powers. Platforms became subject to new safety duties. The regulatory settlement that had largely left content governance to platform discretion, with limited intervention under the Communications Act 2003, was formally revised.
What the Act simultaneously constructed, and what received less analytical attention in the commentary that followed Royal Assent, was a shift in design authority. In transferring responsibility for online harm from users to platforms, the Act necessarily transferred the authority to determine what choice architectures govern online behaviour. Someone must decide what counts as harm, what safeguards are proportionate, and how options are presented to users. The Act answers the question of who, assigning that authority to platforms supervised by Ofcom and informed by codes of practice. It is considerably less specific about the principles by which that authority should be exercised.
2. Safety by Design and Its Scope
The Act’s primary mechanism is a duty of care requiring platforms to proactively mitigate the risk of harmful content, rather than responding only to user reports. For the most serious categories of harm, including child sexual abuse material and content that facilitates terrorism, this proactive duty is unambiguously appropriate and commands broad consensus.
The complexity arises in the Act’s treatment of content that is harmful but not illegal. The final text of the Act, following significant amendment in the House of Lords, removed the category of “legal but harmful” content for adults that had appeared in earlier drafts. The Act as enacted instead requires category one services to offer adults optional features that, if activated, filter certain categories of legal content. This is a meaningful distinction from the earlier draft: the regulatory burden on platforms is structured around opt-in safety features for adults, rather than a mandatory filtering obligation for legal content.
OSA 2023: Category one services
Category one services are those designated by Ofcom as having the highest reach or functionality. They are subject to additional duties, including the duty to offer adults optional features that filter certain categories of legal content and the duty to publish transparency reports on content moderation. The threshold criteria for category one designation are set out in Schedule 11 of the Act.
The practical behavioural implication of this structure is significant. Where safety features are opt-in for adults, default settings determine what the majority of users experience, because most users never change platform defaults. The platform’s design choice about what constitutes an appropriate default is, in practice, a policy decision about the information environment of the average adult user. The Act acknowledges platform design authority but does not specify the principles by which default settings should be calibrated.
3. Three Structural Tensions
Friction as protection versus friction as editorial control. The Act encourages platforms to introduce friction into interactions associated with potentially harmful outcomes: prompts before sharing flagged content, confirmation steps before accessing certain material. The behavioural evidence on friction is relevant here: under appropriate conditions, friction can reduce impulsive decisions and create space for deliberation. However, friction is also a tool of content governance. A platform that introduces friction differentially, applying slowdowns to content it disfavours while leaving other content untouched, is exercising editorial judgment under a safety rubric. The Act does not specify what constitutes proportionate or non-discriminatory friction deployment. Ofcom’s codes of practice will need to address this gap explicitly if friction is not to become a mechanism of covert content ranking.
The behavioural effect of friction depends heavily on how it is implemented: the framing of the prompt, the effort required to proceed, and whether it is applied consistently across content categories. Friction deployed selectively or asymmetrically functions as a ranking mechanism, not a safety tool.
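To make the asymmetry concrete, here is a minimal sketch, assuming hypothetical content categories and click-through rates (none drawn from any real platform or from the Act). Friction is modelled as a per-category completion rate: applied uniformly it preserves relative exposure, applied to one category only it shifts relative exposure exactly as a ranking demotion would.

```python
# Illustrative sketch: asymmetric friction behaves like a ranking change.
# All category names and rates below are hypothetical assumptions.

def effective_reach(attempted_views: int, completion_rate: float) -> int:
    """Views that survive a friction step, given the share of users
    who click through the prompt rather than abandoning."""
    return round(attempted_views * completion_rate)

# Uniform friction: relative exposure between categories is preserved.
uniform = {
    "category_a": effective_reach(10_000, 0.70),
    "category_b": effective_reach(10_000, 0.70),
}

# Friction applied to one category only: relative exposure shifts,
# which functions as a demotion of category_a, i.e. a ranking decision.
asymmetric = {
    "category_a": effective_reach(10_000, 0.70),
    "category_b": effective_reach(10_000, 1.00),  # no prompt shown
}

print(uniform)     # {'category_a': 7000, 'category_b': 7000}
print(asymmetric)  # {'category_a': 7000, 'category_b': 10000}
```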
Default settings as regulatory instruments. The Act requires platforms to apply stricter safety settings to child users by default, with the option for adults to adjust their settings. The behavioural science of defaults is well established: Thaler and Sunstein’s foundational work on choice architecture, and subsequent empirical research on pension enrolment and organ donation, confirm that default options exert a disproportionate influence on outcomes because most people do not adjust them. Applied to the OSA context, this means that the platform’s choice of adult default is, in aggregate effect, a policy decision that shapes the experience of the majority of adult users on that platform. The Act creates the obligation to have a default; it does not specify what principles should govern the choice of default setting.
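A worked example makes the aggregate effect visible. The sketch below assumes an illustrative 10% adjustment rate; the figure is hypothetical, chosen only to show how the arithmetic of default stickiness turns a single design choice into a population-level outcome.

```python
# Illustrative arithmetic: how a default choice propagates to the user base.
# The 10% adjustment rate is an assumption for illustration, not an estimate.

adult_users = 1_000_000
adjustment_rate = 0.10  # assumed share of adults who ever change the default

users_on_default = round(adult_users * (1 - adjustment_rate))
print(f"{users_on_default:,} of {adult_users:,} adults experience "
      "whatever information environment the default defines")
# -> 900,000 of 1,000,000 adults experience whatever information
#    environment the default defines
```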
Algorithmic amplification as an underaddressed category. The Act focuses primarily on content obligations: what must be removed, what must be flagged, what safeguards must be offered. It pays comparatively less attention to the amplification layer: the algorithmic processes that determine which content, from the very large volume posted on any given platform, is surfaced to which users, in what context, and with what prominence. A piece of content whose organic reach would be limited can achieve very wide distribution through algorithmic amplification. The behavioural impact of content is not separable from the amplification decision: the same post carries different effects depending on whether it reaches a narrow network of interested users or is surfaced to a broad, algorithmically selected audience. The Act’s framework addresses these dimensions in different provisions but does not integrate them into a coherent accountability structure for the amplification decision itself.
Platforms that comply with the Act’s content obligations while leaving amplification logic unreformed may satisfy the letter of their safety duties without touching the primary mechanism of behavioural influence. Ofcom’s media literacy and transparency codes will need to address amplification as a distinct accountability category.
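One hedged illustration of what treating amplification as a distinct accountability category could mean in practice: a per-decision audit record that separates the content judgment from the distribution judgment. The schema below is purely hypothetical; neither the Act nor Ofcom’s draft codes prescribe anything like it, and every field name is an assumption introduced for this sketch.

```python
# Hypothetical sketch of an amplification audit record: one entry per
# algorithmic distribution decision, separating the content-level checks
# from the amplification judgment. Field names are illustrative only;
# nothing like this schema is prescribed by the Act or the draft codes.

from dataclasses import dataclass

@dataclass
class AmplificationRecord:
    content_id: str
    organic_reach: int        # views attributable to the poster's own network
    amplified_reach: int      # views attributable to algorithmic surfacing
    ranking_objective: str    # e.g. "engagement" (illustrative label)
    safety_checks_applied: list[str]  # content-level duties satisfied

    @property
    def amplification_ratio(self) -> float:
        """How much distribution the algorithm added beyond organic reach."""
        return self.amplified_reach / max(self.organic_reach, 1)

record = AmplificationRecord(
    content_id="example-123",
    organic_reach=400,
    amplified_reach=52_000,
    ranking_objective="engagement",
    safety_checks_applied=["illegal-content-screen"],
)
print(f"amplification ratio: {record.amplification_ratio:.0f}x")  # -> 130x
```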
4. The Design Authority Question
The deepest question the Act raises is one it does not resolve: in a regulatory environment that requires active design of online choice architectures, what principles should govern that design authority, and how should it be exercised accountably?
The Act delegates design authority to platforms, supervised by Ofcom, operating within codes of practice. This is a defensible first-generation answer. It does not, however, resolve the tension between the commercial incentives that shape platform design and the user-welfare orientation that a safety duty implies. Platforms optimising for engagement will make design choices that reflect that objective, including in the areas of friction deployment, default-setting, and amplification. The safety duty creates an obligation to consider harm; it does not require platforms to optimise for user wellbeing.
Singapore’s Online Safety (Miscellaneous Amendments) Act 2022 established analogous platform duties for designated online communication services, with IMDA as the responsible authority. Like the OSA, it focuses primarily on content obligations rather than on the behavioural design principles governing choice architecture. The question of how to specify user-welfare-aligned design principles in platform regulation is, at this stage, unresolved in both jurisdictions.
5. The Implementation Window
The Act’s duties are being operationalised through Ofcom’s codes of practice, with phased deadlines running into 2025 and beyond. The codes, once finalised, will establish the compliance baseline against which platform design decisions are assessed. Behavioural science principles, including transparency of friction deployment, user-welfare criteria for default calibration, and amplification accountability, are not currently specified in the draft codes. The implementation period is the tractable window for introducing these requirements, since compliance patterns once established tend to persist as industry norms even as the underlying regulatory requirements evolve.
The Act creates a compliance obligation. It does not specify a user-centred design methodology. Whether one emerges from the code consultation process will depend substantially on the quality of technical input to Ofcom during that process.
This piece analyses the design authority implications of the UK Online Safety Act 2023. For a comparative analysis of how the EU AI Act addresses overlapping questions of algorithmic influence, see The EU AI Act’s Behavioural Blind Spot. For the underlying Agency Spectrum framework, see /concepts/the-agency-paradox. Feedback: hello@technudges.org.
| Version | Date | What changed |
|---|---|---|
| v1.0 | April 2026 | First published. Analyses the Online Safety Act’s transfer of design authority over online choice architectures, identifies three structural tensions in the Act’s implementation framework, and situates the analysis alongside Singapore’s analogous regulatory approach. |