The EU AI Act's Behavioural Blind Spot
- → The EU AI Act's risk taxonomy is organised by application domain and harm category, not by the behavioural mechanism through which an AI system shapes human choice.
- → Three structural gaps follow: an under-specified manipulation standard, domain-based rather than mechanism-based high-risk classification, and conformity assessments with no behavioural methodology.
- → Behavioural impact assessment, modelled on the GDPR's data protection impact assessment regime, is the most tractable near-term remedy available through the Act's implementing instruments.
1. A Structural Gap in the Risk Taxonomy
The EU Artificial Intelligence Act (Regulation 2024/1689) entered into force in August 2024 and is being phased into application in stages through 2026 and 2027. It is the most comprehensive binding horizontal AI regulation enacted to date. Its risk-based approach, which classifies AI applications into prohibited, high-risk, limited-risk, and minimal-risk categories, has become a reference point for regulators in Singapore, the UK, Canada, and across the Asia-Pacific region.
The Act’s strengths are substantive: it introduces mandatory transparency obligations for certain AI interactions, restricts real-time remote biometric identification in publicly accessible spaces, and establishes accountability mechanisms for high-risk applications in employment, credit, education, and law enforcement. These are meaningful interventions.
The Act’s risk taxonomy nonetheless has a structural gap that the final text does not address. It classifies risk primarily by application domain and by the potential for physical, financial, or discriminatory harm. It does not systematically address the behavioural mechanisms through which AI systems construct choice environments, shape epistemic conditions, and influence decision-making processes in ways that may not produce identifiable discrete harms but that operate at scale and with significant cumulative effect.
EU AI Act: Prohibited practices (Article 5)
The Act prohibits AI systems that deploy subliminal techniques beyond a person's consciousness, or that exploit the vulnerabilities of specific groups, to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm. Systems that do not meet this threshold are not prohibited, regardless of their behavioural design properties.
2. How Recommendation Systems Illustrate the Gap
A general-purpose content recommendation system deployed on a consumer platform does not automatically fall into the high-risk category under Annex III of the Act. Depending on its configuration, it may be classified as limited-risk, attracting transparency obligations under Article 50, or as a general-purpose AI model subject to the obligations in Chapter V (Articles 51 to 55), including for models assessed as posing systemic risk. However, the systemic risk threshold under Article 51 is calibrated primarily to computational scale and model capability, not to behavioural influence as such.
The behavioural evidence on recommendation systems is relevant here. Research drawing on platform disclosures and academic audits indicates that systems optimised for engagement metrics have, in documented cases, amplified content associated with high emotional arousal, because such content correlates with time-on-platform and interaction rates. A system of this kind is not merely presenting information: it is constructing the epistemic environment in which a person forms beliefs about the world, weighted by the platform’s optimisation target rather than the user’s stated interests or long-term wellbeing.
This is not manipulation in the sense prohibited by Article 5. But it is a form of behavioural influence that operates at a scale and with a precision that has no historical precedent in consumer choice architecture. The Act does not currently have the conceptual vocabulary to address it as a category.
The Act's manipulation standard requires evidence of exploitation of specific vulnerabilities or subliminal techniques. Systemic design choices that narrow the information environment or differentially weight emotionally arousing content fall outside this standard, regardless of their aggregate behavioural effect.
3. Three Structural Gaps
The manipulation standard is under-specified for systemic behavioural influence. Article 5’s prohibition targets AI systems that exploit psychological weaknesses or vulnerabilities of specific groups. This is designed to catch identifiable dark patterns: artificial scarcity signals, deceptive urgency mechanisms, interfaces that exploit cognitive biases in targeted users. It does not address systemic design properties, such as personalisation architectures that progressively narrow the information environment, friction differentials that make some choices structurally easier than others, or social proof signals calibrated to manufactured norms. Singapore’s CCCS Guidance on Dark Patterns in Digital Platforms (2022) adopts a similarly targeted approach; both instruments address individual manipulative techniques more readily than they address systemic choice architecture effects.
High-risk classification is domain-based rather than mechanism-based. An AI system used in employment screening is classified as high-risk under Annex III because of the domain and the potential for discriminatory harm. This classification logic is defensible as far as it goes. However, a system that shapes the political beliefs or consumer behaviour of large populations through personalised content delivery may not be classified as high-risk, because the harm is diffuse, accumulates across many users over time, and operates through influence rather than through discrete decisions. The mechanism of influence, not only the domain of deployment, should be a criterion for risk classification.
Conformity assessments contain no behavioural methodology. High-risk AI systems must undergo conformity assessments before deployment under Article 43. These assessments address technical robustness, data governance, accuracy, and non-discrimination. They do not currently require any systematic evaluation of how the system influences human decision-making: there is no requirement to model how the system changes what users believe, what they choose, or what alternatives they consider. Singapore’s IMDA Model AI Governance Framework identifies human oversight and the ability to contest automated outputs as baseline governance principles; these principles imply behavioural assessment but do not currently operationalise it as a structured methodology.
4. Behavioural Impact Assessment as a Structural Remedy
The proposed remedy is not a rewrite of the Act. The risk classification framework, the transparency obligations, and the conformity assessment process are sound scaffolding. What they require is a behavioural science layer.
Behavioural impact assessment, analogous to the data protection impact assessment required under GDPR Article 35, would require operators of AI systems above a defined scale or risk threshold to evaluate four questions before deployment. First, what decision-making contexts does this system shape? Second, what are the likely behavioural effects of the system’s design choices, including ranking functions, defaults, and personalisation parameters? Third, are these effects aligned with users’ stated preferences and long-term interests? Fourth, what mechanisms exist for users to understand and adjust the system’s influence on their choices?
The GDPR Article 35 obligation is triggered by processing likely to result in a high risk to the rights and freedoms of natural persons. An equivalent BIA trigger could be specified by reference to scale of deployment, degree of personalisation, or the presence of design features known to produce systemic behavioural effects. The Commission has significant latitude to introduce additional requirements through the delegated acts and implementing acts being drafted under Articles 96 to 98. The GDPR DPIA regime took approximately a decade to mature from legislative concept to practitioner-standard methodology. Behavioural impact assessment has a more developed academic evidence base to draw from; the question is whether the Commission’s implementing instruments will create the structural occasion for that evidence base to be operationalised.
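To make the trigger logic concrete, the sketch below expresses the three candidate criteria (deployment scale, degree of personalisation, and the presence of systemic design features) as a simple decision rule. It is purely illustrative: the field names, thresholds, and the Python framing are assumptions made here for exposition, not anything specified in the Act, in GDPR practice, or in any draft implementing instrument.

```python
from dataclasses import dataclass

# Illustrative sketch only. All field names and thresholds are hypothetical;
# an actual BIA trigger would be defined in implementing instruments.

@dataclass
class SystemProfile:
    monthly_active_users: int          # scale of deployment
    personalisation_depth: float       # 0.0 (none) to 1.0 (fully individualised ranking)
    engagement_optimised_ranking: bool # design feature: ranking optimised for interaction metrics
    adaptive_defaults: bool            # design feature: defaults adjust to observed user behaviour


def bia_required(profile: SystemProfile) -> bool:
    """Return True if any of the three illustrative trigger criteria is met."""
    scale_trigger = profile.monthly_active_users >= 1_000_000       # hypothetical threshold
    personalisation_trigger = profile.personalisation_depth >= 0.5  # hypothetical threshold
    design_trigger = profile.engagement_optimised_ranking or profile.adaptive_defaults
    return scale_trigger or personalisation_trigger or design_trigger


# Example: a mid-sized recommendation system with individualised, engagement-optimised
# ranking meets the personalisation and design-feature criteria even below the scale threshold.
print(bia_required(SystemProfile(
    monthly_active_users=250_000,
    personalisation_depth=0.8,
    engagement_optimised_ranking=True,
    adaptive_defaults=False,
)))  # True
```

Any real trigger specification would need defensible thresholds and a closed list of design features, and those would have to come from the implementing acts rather than from an operator's own judgement.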
Framing BIA as an extension of existing conformity assessment methodology, rather than as a new regulatory obligation, reduces the political friction associated with proposing amendments to implementing acts. The GDPR DPIA analogy is effective precisely because it locates BIA within a framework that compliance professionals already understand.
5. The Regional Dimension
For Singapore policymakers and compliance leads, the EU AI Act’s structural gap has relevance beyond EU jurisdiction. Singapore’s own AI governance instruments, including the IMDA Model AI Governance Framework and the PDPC Advisory Guidelines on AI (March 2024), establish transparency and human oversight as baseline expectations. Neither instrument currently specifies a structured behavioural assessment methodology equivalent to what a BIA regime would provide. As the EU’s implementing acts take shape, they will create a de facto standard that organisations with EU market exposure will adopt globally, including in Singapore operations. Engaging with the BIA question now, through IMDA and PDPC consultation processes, positions Singapore’s governance framework ahead of that convergence rather than in response to it.
This piece analyses a structural gap in the EU AI Act’s approach to behavioural influence. For the underlying framework used to assess where AI systems sit on the automation-to-agency-supplanting spectrum, see /concepts/the-agency-paradox. Feedback: hello@technudges.org.
| Version | Date | What changed |
|---|---|---|
| v1.0 | April 2026 | First published. Identifies three structural gaps in the EU AI Act’s treatment of behavioural influence, proposes behavioural impact assessment as a remedy, and situates the analysis in the Singapore governance context. |