When an AI Agent Says ‘I Agree,’ Who’s Consenting?
Camillia Rida / Dec 12, 2025

Silicon Valley companies promise that AI agents will soon play a substantial role in daily routines, particularly when it comes to commerce. The pitch is that agents operating within defined parameters can take on the tedious work of comparing, booking, renewing, and paying for products and services. But as users delegate, decision-making slides from “I choose” to “the AI agent chooses for me.” The convenience comes with risks, including to privacy and autonomy.
This shift is no longer theoretical. In early November 2025, Amazon sued Perplexity, alleging that its “Comet” agent improperly accessed customer accounts and disguised automated browsing as human activity on Amazon’s site. Perplexity disputes the claims and casts agents as pro-consumer tools—tireless shoppers that surface the best deals and give people their time back.
Herein lies the paradox: agents promise consumer “empowerment” (better information, less friction), yet without guardrails, they can narrow our choices (default nudges, closed pathways, dependence on a single interface). Even Amazon—experimenting with its own shopping assistants—argues for agent-to-platform interactions “on its terms,” a sign that this fight is as much about consumer protection as it is about control of the channels.
Who really consents when an AI agent clicks? Who steers the choice? Who is accountable when things go wrong? And can European rules discipline AI agents without smothering innovation?
What is an agent?
An AI agent is software that, given a goal (e.g., “find and buy the cheapest ticket”), can engage in various activities, such as researching options, planning steps in a task, and ultimately acting on a user’s behalf, including by completing purchases. In other words, an AI agent doesn’t just answer your queries—it does things for you, often operating under pre-set rules (spending caps, approved services) and keeping an audit trail. There are several degrees of delegation:
Level 1 — Assist, with human confirmation.
In the simplest form of interaction, the AI agent prepares the information a user may need to make a decision, but the human ultimately decides. The agent may put items in a shopping cart, formulate a draft, or create a summary of necessary tasks, but nothing is completed without an explicit “yes” from the user. This is classic online contracting, with a simple transparency duty from the EU AI Act: tell the person they’re interacting with AI.
Level 2 — Low-risk automation.
On the next rung of the ladder of AI agent complexity, the agent executes routine, low-stakes tasks on its own (say, renewing a subscription). The scope is tightly bounded by user-defined parameters (a maximum spending amount, or a list of approved services). Legally, the key is a clean, non-manipulative user journey: information must remain fair and understandable, in line not only with the AI Act but also with consumer law.
Level 3 — Acting “on behalf of,” with delegation.
At this level, the agent goes further—booking, buying, sometimes signing—when permission for delegation exists and can be verified by third parties. Europe’s updated digital identity framework (eIDAS 2) allows a European “wallet” to carry attestations that spell out the agent’s authority (caps, duration, authorized actions). If money moves, strong customer authentication must be added under payments law. The agent then acts in the name and on behalf of the user, within those limits.
Level 4 — Orchestration across services.
The most autonomous agents can execute a chain of actions related to a transaction—such as comparing, booking, paying, and forwarding the invoice. The broader the autonomy, the tighter the frame: precise contractual rules, allow-lists, budgets, a kill-switch, clear user notices, and, where required, electronic signatures (a minimal sketch of such guardrails follows below).
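To make those guardrails concrete, here is a minimal sketch, in Python, of how a provider might encode a user’s mandate: a spending cap, an allow-list of services, an expiry, and a kill-switch. Every name, field, and threshold below is an illustrative assumption, not a format prescribed by any of the rules discussed in this piece.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical guardrail configuration for an autonomous shopping agent.
# Field names and limits are illustrative only, not drawn from any statute or product.

@dataclass
class AgentMandate:
    principal: str                        # the person on whose behalf the agent acts
    budget_eur: float                     # total spend the agent may commit
    allowed_services: set[str] = field(default_factory=set)  # allow-list of merchants
    expires_at: datetime | None = None    # authority lapses after this point
    kill_switch: bool = False             # user can revoke everything at once


def may_execute(mandate: AgentMandate, service: str, amount_eur: float) -> tuple[bool, str]:
    """Check a proposed transaction against the user's pre-set limits."""
    if mandate.kill_switch:
        return False, "mandate revoked by the user"
    if mandate.expires_at is not None and datetime.now(timezone.utc) > mandate.expires_at:
        return False, "mandate has expired"
    if service not in mandate.allowed_services:
        return False, f"{service} is not on the allow-list"
    if amount_eur > mandate.budget_eur:
        return False, "amount exceeds the authorized budget"
    return True, "within the mandate"
```

In a design of this kind, the agent would run such a check before every action it takes on its own; anything outside the mandate is refused and surfaced back to the user for an explicit decision.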
At this point the question stops being technical and becomes legal: under what framework does each agent-made click have effect, on whose authority, and with what safeguards? European law and national laws already offer solid anchors—agency and online contracting, signatures and secure payments, fair disclosure—now joined by the newer eIDAS 2 and the AI Act. The task is to map what works today and what is still being built.
The digital mandate: how to make the agent’s click binding
When an agent hits “I agree,” who is bound? In European law, the orthodox answer is not to grant the AI legal personhood, but to fall back on the law of agency. In positive law, an AI agent has no legal personality (yet). It is a technical means of expressing someone’s will. The user is bound only if the agent acts under an authority (a mandate) that is enforceable against third parties, and if the online contracting process is respected (pre-contract information, a chance to review and correct, confirmation). Proof of acceptance can be electronic.
The European layer strengthens this with eIDAS 2 and the European Digital Identity Wallet. Two features matter for agents. First, electronic attestations of attributes (and their qualified version) are recognized across the EU: they cannot be refused legal effect just because they are electronic, and for some (qualified, or issued from public sources) the legal effect mirrors paper. In practice, an attestation might state “X may purchase up to €100 on behalf of Y” and serve as proof of authority with a merchant.
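The technical formats for such attestations are still being settled through implementing acts, so the following is only a schematic illustration of the kind of machine-readable claim an attestation of authority could carry; it is not an actual eIDAS 2 format, and every identifier is invented for the example.

```python
# A schematic, non-normative illustration of an attestation of authority.
# Field names and identifiers are invented; this is not an eIDAS 2 specification.
attestation = {
    "issuer": "wallet-provider.example",        # hypothetical attestation issuer
    "subject": "agent-instance-42",             # the AI agent acting for the user
    "principal": "user:Y",                      # the person on whose behalf it acts
    "authority": {
        "action": "purchase",
        "spending_cap_eur": 100,                # "X may purchase up to €100 on behalf of Y"
        "valid_until": "2026-01-31T23:59:59Z",
    },
    "proof": "…",                               # qualified electronic signature or seal (placeholder)
}
```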
Second, the ecosystem is designed so that the parties that rely on attestations (banks, telcos, major online services, etc.) can verify them. In 2024–2025 the European Commission adopted implementing acts for registering/certifying wallets and relying parties to make verification workable in a harmonized way. Certain private actors will have to accept the wallet where law requires strong online identification (e.g., financial/telecommunication sectors), but this will come online gradually as the implementing acts roll out.
The content and limits of authority remain governed by ordinary civil-law rules across most of Europe: a representative acts within the powers granted; beyond or without power, the act is generally unenforceable against the principal (subject to apparent authority); and the authority is voided in cases of abuse known to the third party. eIDAS 2 adds value by offering a standard, verifiable container for that authority—the attestation—that a destination site can check online.
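On the relying-party side, the legal test—within the powers granted, or not—maps naturally onto a programmatic check. The sketch below, which reuses the hypothetical attestation above, shows roughly what a merchant’s system might ask before treating an agent’s click as binding on the principal; a real deployment would first verify the qualified signature and the issuer’s registration, which is omitted here.

```python
def binds_principal(attestation: dict, action: str, amount_eur: float) -> bool:
    """Relying-party check: does the proposed act fall within the attested authority?

    Illustrative only. A real verification would first check the qualified
    signature, the issuer's registration, and the validity period before
    trusting any of the attested fields.
    """
    authority = attestation["authority"]
    within_scope = action == authority["action"]
    within_cap = amount_eur <= authority["spending_cap_eur"]
    # Beyond or without power, the act is generally unenforceable against the principal.
    return within_scope and within_cap
```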
In short, the ordinary law of agency and online contracts already frames an AI agent acting “on behalf of.” eIDAS 2 adds proof and enforceability: a Europe-wide format to attest authority and have third parties verify it. Whether this becomes commonplace depends on implementing acts and a deployment calendar that’s still unfolding.
Defects in consent with an AI agent: who consents, who answers, who protects?
Under European law, an AI agent has no will of its own. It is a means of expressing—or failing to express—someone’s will. Legally, someone always consents: the user (consumer) or a representative in the civil law sense. If an agent “accepts” an offer, we are back to agency: the act binds the principal only within the authority granted; beyond that, it is unenforceable (subject to apparent authority). The agent is not a new subject of law.
The familiar grounds for invalid consent still apply: mistakes, fraud, and duress (including abuse of dependency). What changes is the factual and evidentiary landscape: mistakes may come from a faulty recommendation; fraud may lie in an interface that pushes the user to click; dependency can arise in ecosystems where access to the service effectively requires using the assistant. Courts will apply familiar tests to new kinds of scenarios, asking questions such as ‘was it decisive,’ ‘was the mistake excusable,’ ‘were there actionable misrepresentations,’ ‘was there a manifestly excessive advantage,’ etc.
Who is on the hook if consent is tainted? First, the business that designs the onboarding. Europe’s Digital Services Act (DSA) bans deceptive interfaces (“dark patterns”) that materially impair a user’s ability to make a free, informed choice. A pushy interface can support a finding of civil fraud and a regulatory breach. Second, the principal is bound only within the mandate. If the agent exceeds clear instructions, the act is not enforceable against the principal; recourse then shifts to the tool provider (contract liability, and—if there is damage—product liability now expressly covering software and AI systems). The agent itself bears no liability; without legal personality, it is not a defendant. Claims target people—counterparties, developers, integrators.
Does the agent become the new “reasonable consumer”? Here, there are two tempting, but wrong, paths. One is to assume that an AI-assisted consumer is more sophisticated, and to raise the bar accordingly. The other is to sideline the consumer altogether and let the agent “choose for them,” infantilizing the user and shifting responsibility to the tool. Neither works in law: the benchmark remains human, and the agent has no legal personality. We cannot demand expertise because someone is assisted, nor absolve them of all choice. The right place to look is the fairness of the design and the conduct of professionals, who remain responsible for how assistance shapes decisions.
Bottom line: who consents? Always a person. Who answers? The business that captures that consent, and, as appropriate, those who put the tool in circulation. The AI agent is neither a new contracting party nor the new yardstick for the “average consumer.” It is a powerful intermediary that forces the law to look not only at what was said to the user, but at how the tool, the interface, and the recommendations manufactured the decision. That is where the first cases will draw the line between assistance and manipulation—and, if needed, recalibrate the standards courts apply.