
The Agency Paradox: When AI Decides for Us

Corinne Tan
[Illustration: a human figure at a crossroads between algorithmic pathways, representing the tension between AI-driven decisions and human agency]
TL;DR
  • Behavioral nudges work because they preserve human agency — the feeling of choosing freely.
  • Algorithmic systems increasingly replace visible nudges with invisible decision architectures.
  • The “Agency Paradox” arises when systems designed to help us choose actually eliminate choice.
  • We propose a framework for evaluating AI systems on an agency-preservation spectrum.

The Invisible Architecture of Choice

Every digital interaction you have today is shaped by an architecture you didn’t design, can’t see, and probably don’t know exists.

When Thaler and Sunstein introduced the concept of choice architecture in 2008, they described the deliberate design of environments in which people make decisions. A cafeteria that places fruit at eye level is nudging you toward healthier choices — but you can still reach for the cake.

That transparency is the ethical foundation of nudging. You remain the agent.

But what happens when the architecture becomes algorithmic?

From Nudge to Algorithm

Consider the difference between these two scenarios:

  1. A news app places its “Balanced Perspectives” section at the top of your feed — a classic nudge toward media literacy.
  2. A recommendation engine selects which stories you see based on engagement prediction, without disclosing its criteria or even its existence.

In Scenario 1, you’re being nudged. In Scenario 2, you’re being steered. The distinction matters enormously for autonomy, for accountability, and, ultimately, for the kind of society we’re building.

“The most powerful architectures of choice are the ones we never notice.” — Corinne Tan, Regulating Content on Social Media (2019)

The Agency Spectrum

We propose thinking about AI-mediated decision systems on a spectrum:

Level        Description                                              Agency Preserved?
Inform       System provides data, human decides                      ✅ Full
Nudge        System shapes defaults, human can override               ✅ High
Automate     System decides routine tasks, human can intervene        ⚠️ Moderate
Autonomise   System acts independently, human is notified post-hoc    ❌ Low
Supplant     System decides and acts without human awareness          ❌ None

Most AI systems today operate somewhere between Automate and Autonomise. The challenge isn’t that automation is inherently wrong — it’s that we lack frameworks for deciding which decisions should live at which level.
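For teams that want to make the spectrum operational, it can be encoded directly in code. The sketch below is a minimal illustration, not an official implementation: the names `AgencyLevel`, `AGENCY_PRESERVED`, and `requires_review` are our own, and the threshold for review is an assumption a governance team would set for itself.

```python
from enum import IntEnum

class AgencyLevel(IntEnum):
    """The Agency Spectrum: higher values mean less human agency preserved."""
    INFORM = 1      # system provides data, human decides
    NUDGE = 2       # system shapes defaults, human can override
    AUTOMATE = 3    # system decides routine tasks, human can intervene
    AUTONOMISE = 4  # system acts independently, human is notified post-hoc
    SUPPLANT = 5    # system decides and acts without human awareness

# How much agency each level preserves, mirroring the table above
AGENCY_PRESERVED = {
    AgencyLevel.INFORM: "Full",
    AgencyLevel.NUDGE: "High",
    AgencyLevel.AUTOMATE: "Moderate",
    AgencyLevel.AUTONOMISE: "Low",
    AgencyLevel.SUPPLANT: "None",
}

def requires_review(level: AgencyLevel) -> bool:
    """Flag systems at or beyond Automate for explicit governance review
    (an assumed policy threshold, chosen here for illustration)."""
    return level >= AgencyLevel.AUTOMATE
```

Ordering the levels as an `IntEnum` makes the key design question explicit: every system sits somewhere on a single ordered scale, and the decision of where a given feature is allowed to sit becomes a reviewable, comparable value rather than an implicit product choice.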

What This Means for Organisations

If you’re deploying AI systems that interact with humans — whether they’re customers, employees, or citizens — you need to ask three questions:

  1. Transparency: Does the person know a system is making or shaping decisions for them?
  2. Override: Can the person meaningfully override the system’s recommendation?
  3. Accountability: When the system makes a poor decision, is there a clear chain of responsibility?
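The three questions above translate naturally into an audit record per AI-mediated decision point. The following is a hedged sketch of what such a record might look like; the class name `AgencyAudit`, its field names, and the example decision point are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgencyAudit:
    """One record per AI-mediated decision point; fields are illustrative."""
    decision_point: str
    user_informed: bool      # Transparency: does the person know a system is involved?
    can_override: bool       # Override: can the person meaningfully override it?
    owner: Optional[str]     # Accountability: who answers for a poor decision?

    def gaps(self) -> list:
        """Return which of the three questions this decision point fails."""
        issues = []
        if not self.user_informed:
            issues.append("transparency")
        if not self.can_override:
            issues.append("override")
        if not self.owner:
            issues.append("accountability")
        return issues

# Hypothetical example: a feed-ranking feature with no disclosure and no named owner
audit = AgencyAudit("news feed ranking", user_informed=False,
                    can_override=True, owner=None)
# audit.gaps() -> ["transparency", "accountability"]
```

Running the same audit across every decision point in a product gives compliance and strategy teams a concrete backlog of agency gaps rather than an abstract ethical worry.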

These aren’t abstract philosophical questions. They’re design decisions that your product, compliance, and strategy teams should be making right now.

The Path Forward

The Agency Paradox isn’t something we solve once. It’s a tension we manage continuously as AI systems grow more capable. The organisations that navigate it well will be those that:

  • Map their AI systems against the Agency Spectrum
  • Default to transparency rather than persuasion
  • Build override mechanisms that are genuinely accessible, not buried in settings
  • Invest in literacy — helping users understand when and how AI is shaping their choices

At TechNudges, we believe that the best AI strategy is one that starts with human agency and works backward to the technology. Not the other way around.


This article draws on frameworks developed in Corinne Tan’s academic work on content regulation and platform governance. For the full research, see Regulating Content on Social Media (UCL, 2019) and Addressing Misinformation and Disinformation (Cambridge University Press).
