5 Policy Questions Prompted by OpenAI’s Restructuring
Cristiano Lima-Strong / Oct 29, 2025
OpenAI Co-Founder & CEO Sam Altman speaks at a conference in San Francisco in 2019. (Steve Jennings/Getty Images for TechCrunch)
ChatGPT-maker OpenAI announced this week that it completed a corporate restructuring that will transition its for-profit arm into a public benefit corporation, a high-profile shift that will enable it to more easily raise funds as it builds out its artificial intelligence business.
Under the new structure, the renamed OpenAI Foundation nonprofit will retain control over the company’s for-profit wing and gain a significant stake in its business, while giving OpenAI’s biggest corporate partner and investor, Microsoft, a slightly larger share.
While OpenAI was initially founded as a nonprofit, it launched a for-profit subsidiary in 2019, an unusual structure that drew criticism as the startup became synonymous with the AI boom in recent years. In 2024, it announced plans to restructure to make its nonprofit “sustainable.” The proposal quickly attracted scrutiny from public interest groups, regulators and rivals, like Elon Musk, who helped co-found the organization but has become perhaps its biggest detractor.
Here are five key political questions around the move:
Will OpenAI's ‘public benefit’ ethos remain?
OpenAI has long held that its primary aim is to achieve artificial general intelligence (AGI), the point at which the technology reaches or surpasses human capabilities, so that it “benefits all of humanity.” But its increasing focus on welcoming investments and its recapitalization plans have sparked concern that it could abandon that mantra in favor of merely generating profit.
In January 2024, consumer advocacy group Public Citizen called on California Attorney General Rob Bonta (D) to investigate whether OpenAI’s nonprofit should be dissolved, citing concerns that it was “not acting to carry out its purpose” and instead “acting under the effective control of its for-profit subsidiary affiliate.” The group cited CEO Sam Altman’s return after a brief ouster as evidence that its for-profit wing had “won” the battle to effectively control the organization. The group later urged Bonta to resist OpenAI’s restructuring plan after it was unveiled.
After OpenAI announced Tuesday its conversion was complete, Public Citizen co-president Robert Weissman in a statement called the move “an attempt to entrench the status quo, in which OpenAI Nonprofit serves at the beck and call of OpenAI For-profit, even though the nonprofit is supposed to exert operational control over the for-profit.” Despite its claims to serve the public good, Weissman said OpenAI has continually “rushed dangerous new technologies to market, in advance of competitors and without adequate safety tests and protocols.”
In a letter to OpenAI employees released publicly on Tuesday, Altman wrote of the restructuring: “We believe this is the best way for us to fulfill our mission and to get people to create massive benefits for each other with these new tools.”
How will the shift impact OpenAI’s safety efforts?
OpenAI has faced significant pressure to expand protections for users, particularly children, in the wake of the tragic death of a teen who, his parents allege, used ChatGPT as a suicide coach. That and other child safety incidents have triggered a wave of calls for new legislative guardrails for AI chatbots and companions, including proposals to ban them outright for kids.
In his letter, Altman said that even with the for-profit shift, OpenAI’s “commitment to safety grows stronger.” And under a memorandum of understanding struck with state enforcers to stave off a challenge to the restructuring, OpenAI agreed that its for-profit will consider only its mission, and not the interests of stakeholders, when it comes to safety and security issues, and that a committee from its nonprofit will play a key role in “overseeing and reviewing the safety and security processes and practices of the Corporation and its controlled affiliates.”
But some of the company’s recent public statements have ignited concern that those efforts could backslide.
Earlier this month, Altman said OpenAI had been able to “mitigate the serious mental health issues” on ChatGPT and so planned to “safely relax” protections. Altman said the company would also soon roll out age-gating and make available new age-appropriate features, like “erotica for verified adults.” The remarks quickly sparked backlash, including from key global regulators and even former employees who worked on product safety at OpenAI.
Children’s safety advocates have long hammered the tech sector’s biggest companies over what they have described as inadequate attempts to safeguard young users. OpenAI’s shift toward more of a for-profit structure could deepen those fears, as it has with the company’s Silicon Valley peers.
Will the move attract more antitrust scrutiny?
OpenAI’s partnership with tech giant Microsoft, the target decades ago of one of the biggest tech antitrust lawsuits in history, has long been a point of contention for competition watchdogs.
Critics have accused Microsoft of structuring its relationship with OpenAI in such a way that it is able to evade strict antitrust scrutiny, including by not retaining a controlling share in the startup’s for-profit arm.
Last year, the Biden administration hashed out an agreement to have the Federal Trade Commission examine whether Microsoft and OpenAI’s arrangement warranted further attention, while the Justice Department would take the lead in potentially probing chipmaker Nvidia. The FTC later launched a broad investigation of Microsoft, but to date, federal enforcers have not sued either company on grounds that their partnership violates antitrust laws.
According to reports, the new structure will give Microsoft a 27% stake in the for-profit and the nonprofit a 26% stake, with employees and other investors controlling the remainder.
It’s unclear, however, to what extent the Trump administration’s FTC plans to scrutinize the maneuver. While Trump’s regulators have continued to pursue major antitrust cases against tech giants including Meta and Google, President Donald Trump has spoken repeatedly about not wanting to stifle innovation in the burgeoning AI sector, a point of tension in antitrust circles.
Who might challenge the restructuring?
OpenAI said it finished the restructuring process after “engaging in constructive dialogue” with the attorneys general of California, where it is based, and Delaware, where it is incorporated. Both offices had been investigating the plans, but they each confirmed on Tuesday that they would not be challenging the conversion after securing concessions from the startup.
Bonta said in a statement his office “secured concessions that ensure charitable assets are used for their intended purpose, safety will be prioritized, as well as a commitment that OpenAI will remain right here in California.” Delaware AG Kathy Jennings (D) said the plan will allow OpenAI “to remain a global innovator and one of the world’s largest nonprofits, while reinforcing guardrails that will guide this potent technology to humanity’s benefit.”
The remarks make it unlikely that any state or federal enforcers will directly challenge the restructuring plan, though enforcers in California and Delaware could revisit the matter down the line if they believe the startup is not abiding by its pledges to their offices.
Consumer advocacy groups could potentially mount a challenge, but the party most likely to keep up the pressure remains Musk, who had already sued OpenAI in 2024 alleging that it had “abandoned its non-profit mission of developing AGI for the benefit of humanity.”
The lawsuit was initially filed in California but later withdrawn, as Musk sought to take up the case in federal court. The Information reported this week that the lawsuit will be able to proceed despite the restructuring.
Will this expose an ‘AI bubble’?
Perhaps the most consequential aspect of the restructuring is that it could reveal whether a financial bubble has indeed developed around the AI industry, which has drawn massive investments but also sparked concern that a speculative market is developing and may crash.
OpenAI’s valuation recently eclipsed $500 billion even prior to the restructuring, and Microsoft’s valuation soared past the $4 trillion mark this week after the plans were announced. The shift could send those figures even higher, as it “sets the stage for a blockbuster initial public offering on Wall Street” for OpenAI, according to The New York Times.
If the AI boom indeed proves to be a financial bubble, it could have massive implications for governments around the world, which are partnering with OpenAI and other companies on gargantuan AI infrastructure investments, and which could be tapped to aid in a rescue if a crash occurs.