AI Isn’t a Superintelligence. It’s a Market in Need of Disclosure.
Ilan Strauss, Tim O'Reilly / Oct 27, 2025
Sam Altman, CEO of OpenAI, speaks to the media as he arrives at the Sun Valley Lodge for the Allen & Company Sun Valley Conference on July 11, 2023. (Photo by Kevin Dietsch/Getty Images)
As United States President Donald Trump moves to end quarterly financial reporting, investors should ask what this means for AI risk disclosure. We argue this is an opportunity to make corporate disclosures more detailed and relevant, especially as AI dependencies grow.
Two stories about AI risk compete, but only one belongs in corporate filings. The first imagines a runaway superintelligence that escapes human oversight and triggers catastrophe. The second treats AI as a commercial technology shaped by capitalism’s compulsion to maximize profit and capture market share. It is this logic of market competition, not rogue models, that drives the risks that investors and the public at large now need companies to disclose.
If AI is a “normal technology,” as computer scientists Arvind Narayanan and Sayash Kapoor argue, then its risks are normal too: they emerge from markets. ChatGPT’s rapid uptake shows how productization bakes profit incentives into the product’s DNA. We’ve seen this before: social media began by optimizing for connection, then monetization turned it into an anti-social engagement trap. Even OpenAI cofounder and CEO Sam Altman calls algorithmic feeds “the first at-scale misaligned AIs.” AI companions and bottomless generative-video feeds are the sequel.
If AI is commercially driven, today’s “AI race” is not Sputnik – it’s a corporate war for markets. In the first half of 2025, AI-related capital expenditures contributed more to US growth than consumer spending did – though much of that spending relied on imported machinery. How much leverage and how many opaque circular deals actually underpin this? We don’t fully know, because oversight lags far behind.
Leading AI companies are private even while their actions sway public markets. Listed companies disclose only platitudes. The result: capital allocation cannot be properly evaluated; litigation balloons; “AI washing” and fraud proliferate; and technologies are deployed prematurely.
Public oversight should begin with the corporate disclosure machinery we already have. In the wake of the 1929 crash, Congress created the SEC and required companies to surface material risks through annual 10-K reports, quarterly 10-Qs and event-driven 8-Ks. That regime remains one of the few proven, scalable checks on corporate behavior – “truth in securities.” Or, as Justice Louis Brandeis put it, “sunlight is said to be the best of disinfectants; electric light the most efficient policeman.”
High-quality disclosure works. Material disclosures convert a company’s private knowledge into publicly verifiable facts. This powers an entire ecosystem, from auditing and banking to journalism and securities law, that keeps most firms honest. That’s why we launched the AI Disclosures Project – to ensure AI markets can also benefit from proper information and technical standards.
But the AI market’s center of gravity now sits outside key existing public disclosure standards. Despite their reach, OpenAI and Anthropic – but also Stripe, Databricks, and other decade-old tech companies – disclose less than public peers about what matters: their financials and business operations. Thanks to the 2012 JOBS Act, they can raise vast sums without public filings, as shareholder thresholds went up and private-capital rules loosened. OpenAI’s “capped-profit” and Anthropic’s “public-benefit” legal structures might sound civic-minded, but in practice they work as accountability shields.
If AI is going to be governed as a market technology, it must be brought into the market’s accountability machinery. Four fixes would help kickstart this process.
First, reverse the private-by-design loophole that allows companies to remain private even as they raise huge sums of capital from hundreds of shareholders. If you access the public’s savings at scale, you should meet the public’s disclosure standards. Treat special purpose vehicles (SPVs) as look-through entities; narrow the employee-shareholder exemption; and cap how much capital can be raised under Regulation D before reporting obligations kick in.
Second, clarify what is material in AI. The SEC should issue Disclosure Guidance on AI activities and risks that trigger reporting. Define material AI incidents in plain English: systemic model failures, major outages, widespread customer remediation, loss of essential third-party model access, impactful changes to safety guardrails, and so on.
Third, embed AI risks into existing disclosures. Take the SEC’s 2023 cyber rule as a template. Add an AI-incident item to the event-driven 8-K with a clearly defined trigger, and require annual 10-K discussion of AI governance, risk management, dependency on critical vendors (models, chips, cloud), and the controls in place.
Fourth, enforce the rules. As with crypto and cyber, real cases against AI-washing and fraud will sharpen the standard far faster than sermons. Prosecution precedes best practice.
Unlike capability thresholds, this approach anchors oversight in materiality: what AI does to a firm’s operations, customers, and earnings that an investor would care about. It rewards evidence – not hype. It is a language investors, courts, and boards already understand.
No disclosure regime will fix every AI risk. But a materiality-based framework can better align company incentives, surface urgent hazards, and give democratic institutions leverage over a profoundly commercial technology. If quarterly reporting goes, the quid pro quo should be stronger event-driven transparency and annual reporting.
AI doesn’t need a priesthood. It needs a prospectus.