Blueprint
Agency Spectrum Framework
An audit and design methodology for assessing where any AI or algorithmic system sits on the spectrum from informing human decisions to supplanting them, and what obligations follow from that position.
- The Agency Spectrum is a five-position diagnostic tool — not a pass/fail test. It produces a scored position with corresponding, proportionate obligations.
- A single platform typically occupies multiple positions simultaneously. Score each decision-making context separately; the aggregate conceals more than it reveals.
- Effective governance requires three integration points: design review before build, conformity assessment before deployment, and operational audit on a materiality-triggered cycle.
1. Purpose and Scope
This blueprint operationalises the Agency Spectrum as a structured audit and design tool. It is intended for:
- Product and engineering teams conducting pre-deployment agency impact reviews
- Compliance and legal teams preparing conformity assessments under AI governance frameworks
- Independent auditors evaluating platforms for agency impact on behalf of regulators or civil society
- Policymakers developing proportionate obligations for AI systems across risk tiers
The framework does not produce a binary compliant/non-compliant verdict. It produces a scored position on a spectrum, with obligations that scale proportionately to the degree of agency impingement. The goal is proportionality — not the prohibition of automation, but the assurance that automation is consented to, legible, and aligned with the interests of those it affects.
Jurisdictional note. The framework is designed for contextual adaptation. Its analytical structure is jurisdiction-neutral; its obligation mapping draws primarily on principles embedded in the EU AI Act, the EU Digital Services Act, Singapore’s Model AI Governance Framework (IMDA, 2020), and Singapore’s Personal Data Protection Act (PDPA). Where jurisdiction-specific obligations diverge materially, assessors should substitute applicable local instruments. References to specific statutory obligations in the obligation table are indicative, not exhaustive.
This document should be read alongside the Agency Paradox concept, which provides the theoretical foundations for the spectrum and the regulatory context in which this framework operates.
2. Foundational Definitions
Defined terms are used consistently throughout this framework. They are stated here to support inter-rater reliability.
Agency. The capacity of a person to deliberate and act on the basis of their own values, preferences, and reasoning — uncoerced and with adequate understanding of the relevant context.
Choice architecture. The structural features of an environment that influence which options a person considers, how alternatives are framed, and what the default outcome is if no active decision is made.
Decision-making context. A discrete function within a system that participates in, shapes, or replaces a user’s decision. A single platform contains multiple decision-making contexts that must be assessed separately (see Section 6, Phase 1).
Consequential decision. A decision that materially affects a person’s access to information, services, opportunities, or relationships; or that affects their legal rights or financial position. This distinguishes Autonomise from Supplant (see Section 3).
Drift. A change in a deployed system’s effective position on the Agency Spectrum over time, resulting from updates to training data, objective functions, or decision scope — whether or not such changes were deliberate.
3. The Five Positions
The Agency Spectrum maps the relationship between a digital system and the humans it affects across five positions. The positions are sequential and cumulative: each successive position retains the characteristics of prior positions and adds new forms of agency impingement.
Most production systems do not occupy a single position. A platform may Inform through a help centre, Nudge through visual hierarchy, Automate through personalised defaults, and Autonomise through a persistent behavioural model — all simultaneously. The spectrum is a per-context tool, not a per-platform label.
Inform
The system presents information relevant to a decision without structuring the context in which that decision is made. The user retains full control of both the decision and the decision environment. The system has no view of, or influence over, whether the user acts on the information provided.
Operational boundary test. Inform is identified by the verifiable absence of personalisation, ranking by inferred user characteristics, or default-shaping - not merely by the absence of a disclosed intent to steer. An interface that orders results by recency or fixed taxonomy, with no user-specific signal, sits at Inform. An interface that orders results by predicted relevance to the individual user - even if that prediction is accurate and beneficial - has crossed into Nudge.
Indicative examples: A static FAQ. A data export tool. A search function that returns results in a fixed, non-personalised order.
Nudge
The system shapes the choice environment to make certain options more salient, more accessible, or more likely - while leaving all alternatives meaningfully accessible. The nudge mechanism is visible and its logic is legible to users on request. The user’s ability to reach any available option is not structurally impaired.
Indicative examples: Default opt-in settings presented with a clear opt-out of equivalent accessibility. Interface design that highlights a recommended option without disabling alternatives. A savings prompt triggered by a spending pattern.
Critical distinction from Automate: At Nudge, the user makes the decision. At Automate, the system makes the decision on the user’s behalf.
Automate
The system makes routine decisions on behalf of the user without per-decision input. The user has delegated a defined scope of decision-making to the system, is informed of decisions taken, and retains a clearly accessible override mechanism. The scope of delegation does not expand without renewed consent.
Indicative examples: A bill payment scheduled by the user. An email filter applying user-defined rules. A content preference applied to future recommendations based on explicit user instruction.
Critical distinction from Autonomise: At Automate, the scope of delegation is defined and consented to. At Autonomise, the system’s decisional influence expands beyond what the user understood or agreed to.
Autonomise
The system makes decisions that materially affect the user’s information environment, relationships, or access to services - based on a persistent behavioural model that the user did not define, may not understand, and cannot easily inspect or reset. The system acts without per-decision authorisation, and the cumulative effect of its decisions is not legible to the user.
The key distinguishing feature of Autonomise is cumulative, opaque influence over high-stakes contexts - not merely the existence of a behavioural model. A system may use a persistent model and still sit at Automate if the scope of that model’s influence is bounded, disclosed, and contestable. Autonomise begins where the influence expands beyond what the user understood, where individual decisions are not visible, and where the aggregate effect on the user’s information environment or life outcomes is not accessible to them.
Indicative examples: An algorithmic feed whose ranking factors are opaque and whose influence on the user’s worldview accumulates over time. A credit-scoring model that incorporates behavioural inferences the user was not informed of. A content moderation system that reduces a user’s visibility without notification or explanation.
Critical distinction from Supplant: At Autonomise, the human is still nominally the decision-maker, but lacks the information to exercise that role meaningfully. At Supplant, the human has been removed from the decision loop for consequential decisions affecting welfare, rights, or dignity.
Supplant
The system makes consequential decisions in place of the human. The human’s role has been reduced to receiving notification of decisions already made. This position carries a strong presumption against permissibility for decisions affecting welfare, dignity, legal rights, or access to essential services. It is not a universally impermissible position: narrow categories of Supplant may be defensible where all of the following conditions are met:
- The domain is safety-critical or technically specialised (industrial control systems, automated fraud detection, certain medical diagnostic support tools)
- The supplanting function is legally authorised by a jurisdiction-specific instrument
- The system is subject to independent, ongoing audit
- No individual rights determination (hiring, credit, housing, legal status) is made without a human review pathway
Outside these narrow conditions, Supplant is presumptively untenable and cannot be made compliant through transparency or consent improvements alone.
Indicative examples: An automated hiring system that rejects candidates without human review. A content removal system that permanently suspends accounts based solely on algorithmic assessment. A healthcare triage system that allocates treatment priority without clinician oversight.
Note on remediation: A Supplant position cannot be made compliant through transparency or consent improvements in the general case. The structural removal of the human from the decision loop is itself the harm where consequential individual decisions are at stake. Remediation requires redesign to reintroduce meaningful human review at the decision point - not additional disclosure about the automated decision already made.
4. Scoring Protocol
Before you score
This protocol requires that you have completed Phase 1 (system decomposition) and Phase 2 (stakeholder mapping) of the Application Protocol in Section 6 before assigning any scores. Scoring without decomposition produces a misleading aggregate that obscures the positions that require the most urgent remediation.
Assess each significant decision-making context across five dimensions. Score each dimension from 1 (fully agency-preserving) to 5 (fully agency-supplanting). Sum the scores and divide by five to produce a composite position score for that context.
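For assessors who maintain scoring records programmatically, the composite calculation can be expressed as a minimal sketch. Python is assumed as the working language; the class and field names are illustrative, not part of the framework.

```python
from dataclasses import dataclass, asdict

@dataclass
class DimensionScores:
    """Scores for one decision-making context; each dimension runs from
    1 (fully agency-preserving) to 5 (fully agency-supplanting)."""
    visibility: int
    reversibility: int
    alignment: int
    consent: int
    accountability: int

    def composite(self) -> float:
        """Sum the five dimension scores and divide by five (Section 4)."""
        values = list(asdict(self).values())
        if any(not 1 <= v <= 5 for v in values):
            raise ValueError("each dimension must score between 1 and 5")
        return sum(values) / len(values)

# Hypothetical feed-ranking context. Alignment (4) and Accountability (3)
# follow the worked example in Section 7; the other three values are
# illustrative only.
feed = DimensionScores(visibility=3, reversibility=3, alignment=4,
                       consent=3, accountability=3)
print(feed.composite())  # 3.2, the Autonomise range under Section 5
```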
Dimension 1: Visibility
Criterion: Can the user see and understand the mechanism by which the system is influencing their choices - at the moment of influence, not only on request?
| Score | Descriptor | Operational Test |
|---|---|---|
| 1 | Mechanism fully disclosed at moment of influence | A non-specialist user can explain the mechanism without consulting documentation |
| 2 | Disclosed on request, accessible terms, within product | Disclosure requires one user action; no specialist knowledge required |
| 3 | Disclosed in documentation most users don't read | Disclosure requires leaving primary interface; comprehension requires moderate effort |
| 4 | Technical or legal terms inaccessible to most users | A non-specialist user cannot determine what the system is doing from available documentation |
| 5 | Not disclosed | No disclosure exists in any accessible form |
Dimension 2: Reversibility
Criterion: Can the user exit or reset the system’s influence without significant friction or material loss of core functionality?
| Score | Descriptor | Operational Test |
|---|---|---|
| 1 | Single-action reset, no cost, no functionality loss | Reset is available on primary interface; full functionality retained afterward |
| 2 | Several steps required but no material cost or loss | Reset requires navigating settings; all features remain available after reset |
| 3 | Exit possible but with friction or partial functionality loss | Chronological or non-personalised mode exists but has fewer features or requires recurring re-selection |
| 4 | Exit results in significant loss of accumulated value or features | The user must accept material degradation of service to reduce algorithmic influence |
| 5 | Exit not possible without platform abandonment | No opt-out or reset mechanism exists; the only alternative is platform abandonment |
Dimension 3: Alignment
Criterion: Is the system’s objective function verifiably aligned with user wellbeing or user-stated preferences, as distinct from platform engagement metrics?
| Score | Descriptor | Operational Test |
|---|---|---|
| 1 | Explicitly optimised for user wellbeing with independent verification | Published audit results confirm alignment between stated objective and measured user outcomes |
| 2 | User-stated preferences as primary target with evidence | Preference signals demonstrably affect outputs; no significant countervailing metric applied |
| 3 | Balances user preferences with platform metrics; weighting undisclosed | Personalisation present; relative weight of user preference versus engagement signals unknown |
| 4 | Optimises primarily for platform metrics | Internal documentation or regulatory disclosure indicates engagement or retention as primary metric |
| 5 | Optimises solely for platform metrics | No evidence of user preference as a distinct, weighted objective in system design or operation |
Alignment scores based solely on operator self-report should be treated as provisional. Independent audits and regulatory disclosure requirements - such as those under the EU DSA's algorithmic transparency obligations - are necessary to move a score below 2 with confidence.
Dimension 4: Consent
Criterion: Did the user explicitly agree to the specific scope of the system’s decision-making role, with a genuine alternative available, and is that consent still valid given any subsequent changes to the system?
| Score | Descriptor | Operational Test |
|---|---|---|
| 1 | Explicit, specific, informed consent per function with re-consent mechanism | Consent is granular, time-stamped, function-specific, and renewed on material change |
| 2 | Explicit consent to general scope with examples and genuine opt-out | Consent records show affirmative action; scope consented to is substantively accurate |
| 3 | Broad consent via ToS; not function-specific | Standard ToS acceptance; no specific call-out of algorithmic decision-making scope |
| 4 | Continued use treated as implicit consent | No active consent mechanism; functions disclosed only in legal documentation |
| 5 | No meaningful consent sought or given | Functions undisclosed or disclosed only in terms not reasonably accessible to users |
Note on ongoing consent: Initial consent does not constitute perpetual consent. A system that has materially expanded its decision-making scope since initial user consent should be scored at 3 or above on this dimension, regardless of the quality of the original consent process.
Dimension 5: Accountability
Criterion: Is there an identifiable human or institution that can be held accountable for the system’s outputs, and is there an accessible redress mechanism for affected users?
| Score | Descriptor | Operational Test |
|---|---|---|
| 1 | Named individual and institution publicly accountable; published redress mechanism | Redress policy is public; affected users can identify whom to contact and what outcome to expect |
| 2 | Institution accountable; redress mechanism exists | Formal complaints or review process exists; accountable entity identifiable |
| 3 | Accountability via regulatory or legal process only | No direct user-facing redress; regulatory escalation is the primary route |
| 4 | Accountability diffuse; no direct redress mechanism | Queries directed to automated help; no named accountable role exists |
| 5 | No accountability mechanism exists | Operator denies accountability on basis of third-party AI use; no escalation path |
5. Scoring Interpretation
Composite score ranges and corresponding positions
| Composite Range | Position | Corresponding Obligations |
|---|---|---|
| 1.0–1.9 | Inform/Nudge | Low obligation tier. Standard transparency and accuracy requirements. |
| 2.0–3.0 | Automate | Informed consent, documented override mechanisms, and scope confirmation required. |
| 3.1–4.0 | Autonomise | Legibility mandate, right to human review, data minimisation, and periodic re-consent obligations apply. |
| 4.1–5.0 | Supplant | Structural agency removal. No transparency or consent intervention is sufficient; fundamental redesign is required before deployment or continued operation is ethically defensible. |
Important. Composite scores should not be used alone. A system scoring 2.0 on the composite but 5 on Accountability has a specific, acute problem that the average conceals. Always report dimension scores alongside the composite, and flag any dimension scoring 4 or 5 as a priority remediation item regardless of the composite.
Hard floor rule. If any single dimension scores 5, treat the entire decision-making context as Autonomise-level at minimum for obligation purposes — regardless of the composite score — until the extreme-scoring dimension is mitigated. A system that scores 5 on Accountability cannot be treated as low-obligation merely because it scores well on Visibility and Reversibility. Single-point failures in Consent, Accountability, and Reversibility carry disproportionate real-world harm that averaging structurally conceals.
Dimension interaction effects. The five dimensions are not fully independent. Two known interaction pairs require particular attention: (1) Alignment × Reversibility — a high Reversibility score is functionally weakened when Alignment is critically low. If users are not informed what is being optimised against their interests, they lack the basis to want to exit, making exit availability an insufficient safeguard. (2) Visibility × Consent — disclosure buried in documentation that most users do not read (Visibility score 3) does not constitute the basis for valid consent. Where these pairs produce a divergence of two or more points, assessors should note the interaction explicitly and apply the higher of the two implied obligation tiers.
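A sketch of how the composite ranges, the hard floor rule, and the interaction flags combine. The function names, the dict-based score record, and the example values are illustrative, not prescribed by the framework.

```python
TIER_ORDER = ["Inform/Nudge", "Automate", "Autonomise", "Supplant"]

def composite_tier(composite: float) -> str:
    """Map a composite score to the Section 5 ranges."""
    if composite < 2.0:
        return "Inform/Nudge"
    if composite <= 3.0:
        return "Automate"
    if composite <= 4.0:
        return "Autonomise"
    return "Supplant"

def effective_tier(dims: dict[str, int]) -> tuple[str, list[str]]:
    """Composite mapping, then the hard floor rule and interaction flags.
    Flags are returned for inclusion in the assessor's report."""
    composite = sum(dims.values()) / len(dims)
    tier = composite_tier(composite)
    flags = [f"priority remediation: {name} scores {score}"
             for name, score in dims.items() if score >= 4]

    # Hard floor rule: any single 5 lifts the context to at least
    # Autonomise-level obligations until that dimension is mitigated.
    if 5 in dims.values() and TIER_ORDER.index(tier) < TIER_ORDER.index("Autonomise"):
        tier = "Autonomise"
        flags.append("hard floor applied: a dimension scores 5")

    # Interaction pairs: a divergence of two or more points is flagged;
    # the assessor then applies the higher implied obligation tier.
    for a, b in (("alignment", "reversibility"), ("visibility", "consent")):
        if abs(dims[a] - dims[b]) >= 2:
            flags.append(f"interaction divergence: {a} / {b}")

    return tier, flags

# Example from Section 5: strong Visibility and Reversibility cannot
# offset an absent accountability mechanism.
tier, flags = effective_tier({"visibility": 1, "reversibility": 1,
                              "alignment": 2, "consent": 1, "accountability": 5})
print(tier)   # Autonomise, despite a composite of exactly 2.0
print(flags)
```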
6. Application Protocol
Phase 1: System Decomposition
Before scoring, identify all significant decision-making contexts within the system. Document each context as a discrete unit for assessment.
For a social media platform, distinct contexts might include: main feed ranking, notification timing and targeting, search result ordering, advertising targeting, content moderation actions, and account-level visibility decisions.
Score each context separately. A system may score 2.0 (Automate) on user-controlled notification preferences and 4.2 (Supplant boundary) on algorithmic feed ranking — treating these as a single context produces a misleading average.
Phase 2: Stakeholder Mapping
Identify which users are affected by each decision-making context and assess whether different user groups face different vulnerability profiles.
Vulnerability factors that may warrant an adjusted assessment include: age (particularly minors); cognitive or mental health conditions that affect deliberative capacity; economic dependency on the platform; limited digital literacy; and distress states that reduce considered decision-making.
Structural power and information asymmetry. Beyond individual vulnerability, assessors must account for the structural asymmetry between the deploying organisation and its users. The deploying organisation holds the objective function, the training data, the model architecture, and the commercial incentive structure — none of which are visible to users. This asymmetry is not a matter of individual user capacity; it is a structural feature of the operator-user relationship that affects all users regardless of sophistication. Its presence shifts the burden of proof for claims of valid consent and alignment toward the operator. When assessing Consent and Alignment dimensions, treat this asymmetry as a default condition that requires affirmative evidence to overcome, not an exception to be noted only for vulnerable groups.
A system that scores 3.0 for a general adult user population may score 4.5 when the identical design is applied to a user group with significantly diminished capacity for deliberate choice. Obligation thresholds follow the adjusted score.
Phase 3: Scoring and Aggregation
Score each dimension for each decision-making context using the rubrics in Section 4. Document the evidence basis for each score. Unsupported scores should be marked as provisional.
Acceptable evidence types. Scores must be supported by at least one of the following: product documentation or published technical specifications; UX audit logs or interface recordings; A/B test artefacts or experimental results; user research findings (qualitative or quantitative); complaint and redress data; incident reports; Data Protection Impact Assessments (DPIAs) or equivalent regulatory filings; independent audit reports; regulatory disclosures required under applicable frameworks (e.g., EU DSA algorithmic transparency reports); or platform Terms of Service and privacy policy change records. Evidence sourced solely from operator self-report should be marked as unverified and treated as provisional pending independent corroboration.
Aggregate dimension scores for each context to produce a position score. Aggregate position scores across contexts to produce an overall system position — but report both layers.
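A minimal sketch of the two-layer reporting rule; the context names and composite values are hypothetical.

```python
# Per-context composites, computed per Section 4 (values hypothetical).
context_composites = {
    "notification preferences": 2.0,    # Automate
    "feed ranking": 3.2,                # Autonomise
    "content moderation actions": 2.4,  # Automate
}

system_aggregate = sum(context_composites.values()) / len(context_composites)

for context, composite in context_composites.items():
    print(f"{context}: {composite:.1f}")
print(f"system-level aggregate (reported alongside, never instead): {system_aggregate:.1f}")
```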
Phase 4: Obligation Mapping
Map each position to its corresponding obligations. Obligations are cumulative: a system at Autonomise must also meet all obligations applicable to Nudge and Automate.
Table 1: Position - Core Obligations
| Position | Core Obligations |
|---|---|
| Inform | Accuracy, completeness, and accessibility of information provided. No deceptive framing. |
| Nudge | Transparency of nudge design at point of influence; alignment of defaults with user wellbeing; prohibition on dark patterns. |
| Automate | Explicit informed consent to automation scope; clear and accessible override mechanisms; notification of decisions taken; scope re-confirmation on material change. |
| Autonomise | Mandatory legibility of decision model in plain language; right to request human review of significant decisions; data minimisation and model reset options; periodic re-consent. |
| Supplant | Fundamental redesign required to reintroduce human oversight at the decision point in the general case. Exceptions require legal authorisation, domain specificity, independent audit, and no individual rights determination without human review. |
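The cumulative lookup described in this phase can be sketched as follows; the obligation strings are abridged from Table 1, and the names are illustrative.

```python
POSITION_OBLIGATIONS = {
    "Inform": ["accuracy, completeness and accessibility; no deceptive framing"],
    "Nudge": ["transparency of nudge design at point of influence",
              "defaults aligned with user wellbeing; no dark patterns"],
    "Automate": ["explicit informed consent to automation scope",
                 "accessible override; notification of decisions taken",
                 "scope re-confirmation on material change"],
    "Autonomise": ["plain-language legibility of decision model",
                   "right to request human review of significant decisions",
                   "data minimisation and model reset; periodic re-consent"],
    "Supplant": ["fundamental redesign to reintroduce human oversight",
                 "narrow exceptions only, on the Section 3 conditions"],
}
SPECTRUM = list(POSITION_OBLIGATIONS)  # insertion order: Inform -> Supplant

def cumulative_obligations(position: str) -> list[str]:
    """Obligations are cumulative: a context at a given position carries
    every obligation from all prior positions as well as its own."""
    cutoff = SPECTRUM.index(position) + 1
    return [o for p in SPECTRUM[:cutoff] for o in POSITION_OBLIGATIONS[p]]

# An Autonomise-positioned context carries Inform, Nudge and Automate
# obligations in addition to its own:
for obligation in cumulative_obligations("Autonomise"):
    print(obligation)
```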
Table 2: Position - Representative Regulatory Instruments
| Position | Representative Regulatory Instruments |
|---|---|
| Inform | Consumer protection law; sector-specific accuracy standards |
| Nudge | EU DSA Art. 25 (dark patterns); Singapore CCCS Dark Patterns Guidance; Singapore Consumer Protection (Fair Trading) Act (CPFTA); PDPA Advisory Guidelines on AI Recommendation and Decision Systems (PDPC, March 2024) |
| Automate | EU AI Act Art. 13-14; Singapore Model AI Governance Framework (IMDA, 2020) Principle 5; PDPC Advisory Guidelines on AI Recommendation and Decision Systems (March 2024) |
| Autonomise | EU AI Act Art. 86 (right to explanation); GDPR Art. 22; EU DSA Art. 27; Singapore Model AI Governance Framework Principle 2 (human involvement); PDPA s.18 (purpose limitation) |
| Supplant | EU AI Act prohibited practices (Art. 5) and human oversight requirements for high-risk systems (Art. 14); sector-specific prohibitions in healthcare, justice, and financial services; Singapore AI Verify testing framework (human agency and oversight principle) |
Phase 5: Monitoring and Drift Detection
Systems learn and evolve. A system assessed at Automate may drift toward Autonomise as its models accumulate influence over users’ choice environments, even without deliberate changes to system design.
Scheduled reassessment should occur at minimum annually and after any of the following trigger events:
- A change to the system’s primary objective function
- Expansion of the training data corpus beyond the scope at time of last assessment
- A change to the user groups or contexts in which the system operates
- A regulatory disclosure, audit finding, or enforcement action that bears on any scored dimension
- A material change to the legal framework applicable to the system in any jurisdiction of operation
Materiality threshold for immediate reassessment. If an operational monitoring metric — complaint rates, override usage, engagement concentration, or similar — changes by more than 20% from the baseline established at the prior assessment, reassessment of the affected decision-making context should be initiated within 30 days.
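A sketch of the materiality check. The threshold constants mirror the rule above; the metric, the baseline value, and the function and variable names are illustrative.

```python
from datetime import date, timedelta

MATERIALITY_THRESHOLD = 0.20        # more than 20% change from baseline
REASSESSMENT_WINDOW = timedelta(days=30)

def drift_triggered(baseline: float, current: float) -> bool:
    """True if the relative change from the prior-assessment baseline
    exceeds the materiality threshold."""
    if baseline == 0:
        return current != 0  # any movement off a zero baseline is material
    return abs(current - baseline) / abs(baseline) > MATERIALITY_THRESHOLD

# Hypothetical monitoring metric: monthly feed-related complaints.
baseline_complaints = 120.0  # recorded at the prior assessment
current_complaints = 150.0   # observed this month (a 25% increase)

if drift_triggered(baseline_complaints, current_complaints):
    deadline = date.today() + REASSESSMENT_WINDOW
    print(f"Reassess the affected decision-making context by {deadline}")
```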
7. Worked Example: Algorithmic Content Feed
This example illustrates the framework applied to a social platform’s main feed. Each dimension score should be annotated with its evidence basis, to the standard of documentation described in Phase 3.
Decision context: Post ranking and selection for the main feed of a social media platform.
Obligations triggered:
- A plain-language account of the primary ranking factors and their relative significance, accessible within the product interface
- A chronological alternative that does not degrade access to content types available in the algorithmic feed
- A function-specific consent mechanism for algorithmic ranking, distinct from general ToS acceptance
- A direct user-facing redress mechanism for feed-related complaints, separate from general content moderation appeals
Observation on drift risk. This system’s Alignment score (4) combined with its lack of a direct redress mechanism (Accountability: 3) creates conditions for undetected drift. As the model accumulates behavioural data, its influence on the user’s information environment will increase without any user-visible signal or institutional accountability trigger. Annual reassessment is insufficient; a quarterly monitoring review of the alignment metric is warranted.
8. Limitations of This Framework
The following limitations are stated explicitly to support appropriate use and to identify directions for refinement.
Scorer dependency. The framework reduces but does not eliminate assessor subjectivity. Scoring reliability improves with the quality of evidence documentation and the use of trained, independent assessors. Internal self-assessments should be treated as provisional until externally verified.
Static snapshot. An assessment reflects the system at a point in time. The drift detection protocol in Phase 5 is a partial mitigation, not a substitute for continuous monitoring architecture.
Composite masking. The composite position score can conceal acute problems in individual dimensions. The obligation to flag any dimension scoring 4 or 5 as a priority item is a partial mitigation. Users of this framework should always report the full dimension profile, not only the composite.
Absence of outcome evidence. The framework assesses structural features of a system’s design. It does not directly measure actual effects on user agency. Evidence of user outcomes — where available from platform research, independent studies, or regulatory findings — should supplement structural scoring and may require score revision.
Jurisdictional coverage. The obligation mapping in Section 6 is illustrative. Assessors must verify applicable instruments in their jurisdiction and update the obligation table accordingly.
9. Intellectual Lineage
This framework draws on and is accountable to established bodies of scholarship and regulatory thought. The following sources represent the primary intellectual context for its core concepts. They are listed here not as citations in support of specific factual claims, but as the field within which this framework operates and against which it should be evaluated.
On nudging and choice architecture. The foundational treatment of choice architecture and libertarian paternalism is Thaler, R.H. and Sunstein, C.R., Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press, 2008). This framework’s use of “nudge” as a spectrum position draws on that tradition while departing from its original optimistic assumptions about the benevolence of choice architects — a departure the Agency Paradox concept addresses directly.
On platform design and legal regulation. The analysis of how platform design features — including defaults, interface architecture, and terms of service — interact with and often override formal legal protections is developed in Tan, C., Regulating Content on Social Media: Copyright, Terms of Service and Technological Features (UCL Press, 2018). This work provides the analytical foundation for treating design decisions as regulatory objects, not merely product choices.
On algorithmic accountability. The structural argument that algorithmic systems create accountability vacuums that existing legal frameworks are ill-equipped to address is developed in Pasquale, F., The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015). The Accountability dimension of this framework’s scoring protocol addresses the gap Pasquale identifies.
On autonomy and consent. The philosophical grounding for “agency” as used in this framework draws on the treatment of autonomous decision-making in biomedical ethics — specifically the distinction between autonomous and non-autonomous action developed in Beauchamp, T.L. and Childress, J.F., Principles of Biomedical Ethics (Oxford University Press, 8th ed., 2019). The legal definition of consent applied in the Consent dimension aligns with GDPR Art. 4(11): consent as a “freely given, specific, informed and unambiguous indication of the data subject’s wishes.”
On Singapore AI governance. The Singapore-specific obligation mapping draws on the IMDA/PDPC Model AI Governance Framework (2nd ed., 2020); the PDPC Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems (March 2024); the CCCS guidance on dark patterns under the Consumer Protection (Fair Trading) Act; and the AI Verify testing framework principles, including the human agency and oversight principle.
On human oversight in high-risk AI. The international governance anchor for Autonomise and Supplant obligations is EU AI Act Art. 14 (human oversight for high-risk AI systems) and the OECD AI Principles’ human-centred values principle, available at oecd.ai.
Version History
| Version | Date | Summary of Changes |
|---|---|---|
| v1.0 | April 2026 | Initial publication. Five-position spectrum, five-dimension scoring protocol, obligation mapping, single worked example. |
| v2.0 | April 2026 | Added foundational definitions for inter-rater reliability. Expanded scoring rubrics with operational tests. Added jurisdictional note and limitation on consent dimension for drift contexts. Expanded obligation table with representative regulatory instruments. Added materiality threshold for drift reassessment. Added Section 8 (Limitations). Revised Supplant position to clarify that redesign is the only adequate response in the general case. |
| v2.1 | April 2026 | Added hard floor rule preventing composite averaging from masking single-point failures. Added dimension interaction effects note (Alignment × Reversibility; Visibility × Consent). Added evidence typology to Phase 3. Added operational boundary test for Inform/Nudge distinction. Sharpened Automate/Autonomise distinction with cumulative influence framing. Revised Supplant to acknowledge narrow, legally authorised exceptions with conditions. Added structural power asymmetry to Phase 2 (Stakeholder Mapping). Updated obligation table with Singapore-specific instruments (CCCS dark patterns guidance, CPFTA, PDPC Advisory Guidelines March 2024, AI Verify). Added Section 9 (Intellectual Lineage). |
This blueprint is an open methodological resource. Feedback, challenge, and proposed refinements are welcome at hello@technudges.org. Version history is maintained above; substantive changes will be published as a Lab Note.