AI Agents Are Rewriting the Web's Rules of Engagement. Here's a Way to Fix It.
Anita Srinivasan / Jan 20, 2026
Ground Up and Spat Out / Janet Turra & Cambridge Diversity Fund / Better Images of AI
AI agents are widely predicted to dominate internet commerce. Visa has declared that "2025 will be the final year consumers shop and checkout alone." Mastercard believes that "the next evolution [of payments] is from digital to intelligent." PayPal, Google, and Amazon have all launched protocols for AI agents to make purchases on behalf of users. Yet while these forecasts focus on adoption, they overlook a harder question: what happens to businesses built on the current web when agents start bypassing the old rules of engagement?
The shift is already here. You used to Google something, click a link, and land on a site – its layout, its ads, its newsletter pop-up. The publisher got a pageview; the advertiser got an impression. Today, you ask an AI and it gives you the answer – you never visit the site. Since Google rolled out AI Overviews in May 2024, multiple studies have shown referral traffic to publishers declining. Pew Research found that when AI summaries appear, users click on results only 8% of the time, compared to 15% without them – a 46% decline in click-through rates.
The consequences can be devastating for publishers and creators who rely on web traffic to sustain their businesses. This month, Tailwind CSS – one of the most popular open-source web frameworks – laid off 75% of its engineering team despite the framework being more popular than ever. Founder Adam Wathan explained that documentation traffic has dropped 40% since early 2023 because AI coding assistants now generate Tailwind code directly – and documentation was the only way users discovered the company's paid products. As a result, revenue fell 80%.
The litigation wave of 2024-25 signals that legacy platforms see this as an existential threat, too. Amazon is suing Perplexity. Google is suing SerpApi. Reddit has gone to court as well.
Most commentary frames these as fights over content – who owns it and who can scrape it. That framing misses something important. What platforms are losing isn't just content; it's what we'll call experiential control: the power to shape how users encounter their products and ultimately move through the funnel towards monetization.
Copyright law protects information, but experiential control is about flows – who sees what, in what order, and with what context. This is why plaintiffs are reaching for computer fraud statutes rather than relying on intellectual property alone. Under US law, those statutes are robust – if platforms have the technical infrastructure to support their claims. As of 2025, that infrastructure now exists. It was built for payments, but its implications extend far beyond commerce.
Why robots.txt was never enough
For thirty years, a protocol called robots.txt has governed how bots interact with websites. It's purely voluntary – the FAQ cited in the RFC explicitly states that it does not "constitute a binding contract between site owner and user." Bad actors ignore it, and even companies like Anthropic and Perplexity have reportedly renamed their scrapers when old names hit blocklists.
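To see how little that promises, consider a typical robots.txt file: it asks OpenAI's GPTBot crawler to stay out entirely while welcoming everyone else. Nothing enforces the request but the crawler's good faith.

```
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```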
User-agent strings – the HTTP headers identifying a browser or bot – are equally toothless. Any scraper can claim to be "Chrome" or "Googlebot." Amazon's core complaint against Perplexity is precisely this: Comet identifies itself as a standard Chrome browser rather than an AI agent. There's no verification mechanism. CAPTCHAs are increasingly defeatable as AI systems can now reliably solve many image-based challenges.
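A few lines of Python make the point – any client can dress itself up as Googlebot with a single header (the URL here is a placeholder):

```python
# A minimal illustration of why user-agent strings prove nothing: any
# client can claim to be Googlebot with one header. The URL is a
# placeholder.
import urllib.request

req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                           "+http://www.google.com/bot.html)"},
)
# At the header level, this request is indistinguishable from Google's
# real crawler.
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```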
This matters legally because of how US courts interpret server-side authorization. In 2021, the Supreme Court clarified the Computer Fraud and Abuse Act in Van Buren v. United States, holding that liability turns on a "gates-up-or-down inquiry." The question is whether you bypassed a technical barrier to reach something off-limits – not whether you violated written policies after entry. The DMCA's Section 1201 operates similarly, prohibiting circumvention of "technological measures that effectively control access."
The law imagines robust gates, but all we have had so far are polite suggestions. As a result, platforms play whack-a-mole: They update blocklists, scrapers rename themselves; they implement rate limits, bots distribute across proxies. The legal frameworks are in place, but they presume a technical infrastructure that, until recently, did not exist.
From cloaks to badges: a case for verifiable agent identity
The solution is to require bots to prove who they are cryptographically rather than merely claiming it. Consider the difference between someone in a costume yelling "I'm a cop!" and someone presenting a badge that can be verified against a database. The web currently operates on the former approach. As of 2025, however, several protocols now enable the latter:
Web Bot Auth addresses provider-level identity. When a crawler makes a request, it includes a cryptographic signature generated with a private key held only by the operator. As Cloudflare explains, the signature verifies against a public key in a registry. When a site receives a request claiming to be "Anthropic-Bot," it checks whether the signature validates against Anthropic's registered key. If it does not, the bot is misrepresenting itself. Spoofing a user-agent string no longer works, because the cryptographic signature either verifies or it doesn't. OpenAI, Visa, and Mastercard have already implemented HTTP Message Signatures for agent traffic.
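To make this concrete, here is a minimal sketch of the verification step, assuming an Ed25519 key pair, a hypothetical key registry, and a simplified signature base; the real protocol builds on the richer canonical form of RFC 9421 HTTP Message Signatures:

```python
# A minimal sketch of provider-level verification, assuming an Ed25519
# key pair and a simplified signature base. Real Web Bot Auth builds on
# RFC 9421 HTTP Message Signatures with a richer canonical form.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical registry mapping a bot's advertised name to its public key.
provider_keys = {}

# Operator side: generate a key pair and publish the public half.
operator_key = Ed25519PrivateKey.generate()
provider_keys["Anthropic-Bot"] = operator_key.public_key()

# Crawler side: sign a simplified signature base for this request.
signature_base = b'"@authority": example.com\n"@path": /api/articles'
signature = operator_key.sign(signature_base)

# Site side: the claimed identity either verifies or it doesn't.
def verify_bot(claimed_name: str, base: bytes, sig: bytes) -> bool:
    key = provider_keys.get(claimed_name)
    if key is None:
        return False  # unknown provider: treat as unverified
    try:
        key.verify(sig, base)
        return True
    except InvalidSignature:
        return False  # signature doesn't match: misrepresentation

print(verify_bot("Anthropic-Bot", signature_base, signature))   # True
print(verify_bot("Anthropic-Bot", b"forged request", signature))  # False
```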
CAIP-122 provides account-level identity through wallet-based authentication. Originally designed to generalize Sign-In-With-Ethereum across blockchains, it allows any agent to prove control of a specific cryptographic identity by signing a challenge message. The website presents a random challenge, the agent signs it, and the site verifies the signature. This creates a persistent identity that can accumulate reputation across services. Unlike provider attestation ("this is an Anthropic bot"), wallet-based identity says "this is agent 0x7a3b...9f2c, which has completed 10,000 transactions without violations."
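The core flow fits in a few lines, with an Ed25519 key standing in for a wallet; real CAIP-122 implementations sign a structured sign-in message with the account's blockchain key:

```python
# A minimal challenge-response sketch in the spirit of CAIP-122, with an
# Ed25519 key standing in for a wallet. Real implementations sign a
# structured sign-in message with the account's blockchain key.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Agent side: a persistent key pair serves as the agent's wallet identity.
agent_wallet = Ed25519PrivateKey.generate()
agent_pubkey = agent_wallet.public_key()

# Site side: issue a random, single-use challenge.
challenge = secrets.token_bytes(32)

# Agent side: prove control of the identity by signing the challenge.
proof = agent_wallet.sign(challenge)

# Site side: verify. Success ties this session to a persistent identity
# that can accumulate reputation across services.
try:
    agent_pubkey.verify(proof, challenge)
    print("agent identity verified")
except InvalidSignature:
    print("verification failed")
```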
x402 revives HTTP's long-dormant 402 "Payment Required" status code, which was reserved in the original 1997 HTTP specification but never standardized until now. Built by Coinbase for machine-to-machine payments, x402 implements 402 responses that prompt clients to pay and retry. Its V2 release in December 2025 integrates CAIP-122 so that when an agent requests a paid resource, it presents wallet credentials proving both identity and payment status. Verification happens through cryptographic, on-chain proofs rather than lookups against a central database. By December 2025, x402 had processed more than 75 million transactions.
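Schematically, the loop looks like this – the header name and the settle_payment() stub are illustrative rather than the protocol's actual wire format:

```python
# A schematic sketch of the x402 request/pay/retry loop. The header name
# and the settle_payment() stub are illustrative, not the protocol's
# actual wire format.
def server(request: dict) -> dict:
    """Toy origin server: the paid path demands proof of payment."""
    if request["path"] != "/api/report":
        return {"status": 404}
    if "X-Payment" not in request["headers"]:
        # 402 tells the client what to pay, in what asset, and to whom.
        return {"status": 402,
                "accepts": {"asset": "USDC", "amount": "0.01",
                            "pay_to": "site-wallet-address"}}
    return {"status": 200, "body": "the paid resource"}

def settle_payment(terms: dict) -> str:
    """Stub: a real client signs and settles an on-chain payment and
    returns a cryptographically verifiable proof."""
    return "signed-payment-proof"

# Client side: request, hit the paywall, pay, retry.
resp = server({"path": "/api/report", "headers": {}})
if resp["status"] == 402:
    proof = settle_payment(resp["accepts"])
    resp = server({"path": "/api/report",
                   "headers": {"X-Payment": proof}})
print(resp["status"], resp.get("body"))  # -> 200 the paid resource
```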
What makes x402 particularly significant is that it provides user-level authorization, not just provider attestation. Web Bot Auth tells you which company operates a bot; x402 identifies which specific user authorized that bot to act on their behalf. The wallet signature demonstrates explicit delegation – a clear chain of authorization that matters for legal accountability.
These identity layers were built because payments require knowing who's paying, but the same protocols work for authorization decisions more broadly. Imagine three layers: (i) a policy layer (like robots.txt or the proposed ai.txt) that states machine-readable rules – "verified agents matching criteria X may access /api"; (ii) an identity layer that cryptographically verifies who's requesting access; and (iii) an enforcement layer that checks credentials against policy and logs outcomes.
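A toy enforcement layer shows how the pieces compose; the policy format here is hypothetical, since no ai.txt syntax has been standardized:

```python
# A toy version of the three layers. The policy format is hypothetical;
# no ai.txt syntax has been standardized.
POLICY = {  # policy layer: machine-readable rules per path
    "/api": {"require_verified": True, "allow": {"Anthropic-Bot"}},
    "/public": {"require_verified": False},
}

access_log = []  # enforcement layer keeps an auditable record

def authorize(path: str, agent: str | None, verified: bool) -> bool:
    """Check the identity layer's output (agent, verified) against policy."""
    rule = POLICY.get(path, {"require_verified": True})
    if rule["require_verified"] and not verified:
        decision = False  # no valid credential presented
    elif "allow" in rule and agent not in rule["allow"]:
        decision = False  # verified, but not on this path's allowlist
    else:
        decision = True
    access_log.append((path, agent, verified, decision))
    return decision

print(authorize("/public", None, False))          # True: open path
print(authorize("/api", "Anthropic-Bot", True))   # True: verified and allowed
print(authorize("/api", "Anthropic-Bot", False))  # False: gate is down
```

The log is the point: every decision leaves a record that can later support – or rebut – a legal claim.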
If a site requires cryptographic credentials for certain paths, one might plausibly argue that it meets the standard articulated in Van Buren. The defendant either presented a valid credential, forged one, or routed around the check – each a verifiable, binary fact that maps cleanly onto existing legal frameworks. This logic is reflected in current litigation: Amazon says Perplexity configured Comet to "not identify the Comet AI agent's activities." Google alleges SerpApi engages in "cloaking" and "rotating bot identities." Reddit describes defendants as "similar to would-be bank robbers." The pattern across these cases is consistent: Deceptive identity – concealing what an agent actually is – gives rise to the cause of action. Verifiable identity infrastructure creates the factual record that those claims require.
Open web or walled gardens: thoughts on the future
The open web's future may depend on accountability mechanisms. If platforms can't distinguish good-faith agents from bad-faith ones, their rational response is more walled gardens – more paywalls, more login requirements, more content in closed ecosystems. Human website visits fell 9.4% while AI traffic surged 400%, according to the Q2 2025 State of AI Bots Report from Tollbit, a platform that helps publishers track and monetize AI activity on their sites. Cloudflare's data shows that AI crawlers operate at staggering ratios – OpenAI's scraping-to-referral ratio is 1,700:1, and Anthropic's is 73,000:1. Verifiable identity offers an alternative where openness and accountability coexist.
There is also a risk of overcorrection. If only agents blessed by a handful of identity providers can access the web, a new cartel emerges. The promise of decentralized protocols like CAIP-122 is that no single party controls the identity layer, but whether that promise holds depends on implementation choices made in the near term. Small players will need protected pathways – streamlined registration, de minimis exemptions – so compliance overheads don't crush open-source and hobbyist developers. Further, competition authorities should scrutinize how platforms deploy these tools, because the same infrastructure that enables legitimate access control could enable illegitimate exclusion.
The current wave of litigation isn't a signal that new law is required – only that existing law has lacked the infrastructure needed to make it work. That infrastructure now exists: built for payments, but applicable to authorization generally. Which future emerges depends on whether we correctly recognize the underlying problem: you can't enforce rules against parties you can't identify.