The Saga at OpenAI: Lessons for Policymakers

Justin Hendrix / Nov 26, 2023

Audio of this conversation is available via your favorite podcast service.

“Chaos” is a word that appeared in multiple news reports to characterize recent events surrounding the firing and subsequent rehiring of OpenAI co-founder and CEO Sam Altman. It remains unclear exactly why the board of the company chose to dismiss Altman, beyond that he was allegedly “not consistently candid” in his communications. A couple of days in, it seemed like OpenAI, founded in 2015, might really be on the rocks. At one point it appeared possible that nearly all of its employees might quit and join Microsoft, forming a new AI research team under Altman’s leadership.

But while at first glance it may appear that such ‘chaos’ would be bad for the $80 billion startup, I suspect it will only benefit the company in the long run. The intrigue around Altman’s ouster has bolstered multiple narratives that are useful to OpenAI, particularly the idea that it may have been connected to concerns about rapid technical advances without adequate safety mechanisms. The salience of this narrative, and all of the conjecture around a possible recent breakthrough, is a marketing boost for the company, reinforcing the perception of its products as both powerful and fearsome.

But what should policymakers take away from these events? And while it seems like the drama at OpenAI is over for now, could it spark back up again soon?

To answer these questions and more, I spoke to Karen Hao, a journalist who is a keen observer both of OpenAI and of the rise of AI more generally. She’s a contributing writer at The Atlantic, and she is also working on a book on the subject. With Atlantic staff writer Charlie Warzel, last week she wrote a piece headlined “Inside the Chaos at OpenAI,” which was drawn from interviews with ten current and former employees at the company.

What follows is a lightly edited transcript of the discussion.

Karen Hao:

My name is Karen Hao, I am a contributing writer for The Atlantic, and I’m working on a book about OpenAI, the AI industry, and its impacts on the world.

Justin Hendrix:

Karen, we’ll look forward to when that book comes out, and so pleased to have you bring your expertise perhaps a little earlier…. some of the reporting and the ideas, I’m sure that you’re going to explore in the book, you’ve had to bring into the public domain in the last couple of days as you’ve reported on this story around OpenAI for The Atlantic.

Last time I had you on this podcast was just before Thanksgiving in 2021. Maybe before we jump in, let’s just talk about your trajectory since then. You left MIT Technology Review, did a stint at The Wall Street Journal, now at The Atlantic, and the book.

Karen Hao:

Exactly, yeah. I left MIT Technology Review, joined the Wall Street Journal for… I actually briefly switched from AI reporting to cover the tech industry in China, so I moved to Hong Kong. And six months after moving, ChatGPT happened, and I was juggling then, covering the tech industry in China and covering AI, and as the momentum continued to build and build, it just felt like I needed to focus and I needed to come back to AI. And book agents had reached out to me for a while, but I hadn’t really been committed to the particular idea long enough to imagine spending so much time on a book. And then, as I started turning back to reporting on AI, and as I was seeing the conversation, and also so much policy talk now too that’s very different from two years ago, it made me realize that this was actually the core thing that I should be working on for a book. So that’s when I then left the Journal to work on the book full-time, and am also contributing to The Atlantic.

Justin Hendrix:

Well, perhaps that moment when ChatGPT launched, which obviously pushed you in a different direction, is also central to your most recent report, “Inside the Chaos at OpenAI.” You say that the weekend of drama that we’ve seen perhaps conclude, as of this recording on Wednesday afternoon, November the 22nd, started a year ago with the release of ChatGPT. Let’s just talk about that. Why do you frame it that way?

Karen Hao:

I think there’s always been drama at the company, but it wasn’t as relevant or as high stakes before ChatGPT happened, because back then OpenAI was not as much in the public domain, people weren’t thinking about it as much, policymakers were barely thinking about it, and the actual technologies that OpenAI was developing were not affecting as many people. So there had been drama, but ChatGPT is what escalated that drama to a tipping point.

What’s interesting about OpenAI is it was founded as a nonprofit, and it was founded with a particular resistance towards the tech industry. The whole purpose of it being founded as a nonprofit was the co-founders believed that AI development should be shepherded without a tie to profit interests. The issue is that OpenAI then selected a very particular type of AI development to pursue, which is extremely cost intensive, and they realized they needed capital. They weren’t able to raise enough of that capital through a nonprofit, so they came up with this strange idea of nesting a capped-profit entity under the nonprofit. And at the time, the reason why they did that was they wanted the capped-profit entity to help raise money, but they still wanted to not completely get rid of this notion that they had been founded on, which was that it should ultimately still be governed by a nonprofit, and the nonprofit then had a board of directors.

And so, what happened over the years at OpenAI after this structure was developed is that there were people that would join the company because they were very excited about what the for-profit entity was doing, the commercialization, typical Silicon Valley types that are really energized by developing products for people, and then there were people that joined the company because they thought that it was still fundamentally different from all of the other tech companies because of the nonprofit entity. They really bought into the idea that the nonprofit was the way to govern this technology’s development, and that ultimately, if push came to shove, something could trigger and the nonprofit could slam the brakes on the commercialization.

So these two factions within the company really started to rapidly polarize in opposite directions as ChatGPT became more and more popular and built more and more momentum. The commercialization faction suddenly had this remarkable demonstration of the commercial potential of the technology they were building, so they started escalating the momentum around, let’s continue to build, launch more products, capitalize on the fact that we’re now the hottest startup in Silicon Valley. Whereas the other faction, the one also wrapped up in fears of existential risk around AI technologies, started seeing ChatGPT as the exact opposite demonstration. ChatGPT suddenly was in the hands of 100 million users, these users were using the tools in unexpected ways, some in abusive ways, and this was also to them a demonstration of, we were exactly right all along, that AI development is scary, and that we should be controlling it, and that we should actually slow down.

So when that tension reached a boiling point, it also split the leadership team. Sam Altman and Greg Brockman, the president of OpenAI, come from a startup background, where they love to build products; they have the habit of commercializing and scaling, and wanted to continue encouraging that momentum. Whereas Ilya Sutskever, the chief scientist, is this mystic philosopher mad scientist type who increasingly saw, in his theoretical vision for the future, that superintelligence was going to be here soon, and that therefore the fear camp was actually correct in really focusing on how to avoid existential risk and how to avoid this sloppy development.

And so, when the leadership clashed and the board ended up acting on it, that’s ultimately what you see cascading from the events of the weekend.

Justin Hendrix:

So a lot of things have changed since you published this piece on November the 19th, including that Sam Altman has been restored, Greg Brockman has been restored, and there have been some changes to that board structure, a couple of new individuals, including Larry Summers popping up for whatever reason.

Karen Hao:

Are we surprised though?

Justin Hendrix:

Apparently, there are going to be other individuals named to expand the board down the line, perhaps there might even be some women or non-white males, we’ll see. But I want to ask you about one particular detail that hasn’t changed, I don’t think, which is that we still do not know exactly why Sam Altman was fired. Is that still correct?

Karen Hao:

Yeah, that’s 100% correct. We have no idea, absolutely no idea. The board has not been transparent about this, and there have been a lot of leaks to the media, but it’s really unclear who is driving the narratives that are being reported in the media, and we don’t know if the employees were involved or not, and we don’t know if Ilya was a central player or just the messenger. There’s so many unknowns about what actually happened, it’s more just that we know the sequence of events that occurred.

Justin Hendrix:

So there’s still much more to learn there. Apparently, one of the points of agreement is that there will be some sort of independent investigation of what went on, so perhaps all of those things will come out in due time.

I do want to ask you a few questions about what you know about the role of Ilya Sutskever, the chief scientist at OpenAI, who apparently was one of the individuals who led this effort to oust Altman, reversed course, and, as you say, has perhaps taken a turn towards, well, I would say the mystical, in terms of his belief in artificial general intelligence and how near it is. One of the, I think, stunning, or to me at least, stunning anecdotes that you share in your story is around Sutskever at the OpenAI holiday party last year leading employees in the chant, feel the AGI. What can you tell us about this individual and his role here?

Karen Hao:

The first thing is we don’t actually know if he led this thing, or if he was caught up in this thing. Certainly, it seems like he played a central role, but it’s unclear if he initiated it, I should say. The thing about Ilya is the way that he came to OpenAI was he was actually picked by Sam Altman to be the chief scientist, or to lead the scientific endeavors, because Altman and Brockman are not AI researchers, and they wanted a really strong AI researcher on the team who would be a leader in that regard. And so, Altman was really keen on Sutskever joining as one of the founding members of OpenAI, because Sutskever had, at that time, already made his reputation, he was already famous as a scientist, because he had co-written a paper as a PhD student under Geoffrey Hinton that basically initiated the deep learning revolution, the first AI revolution within the business community.

And so, he’s always had this personality of being very intense about things, believing things with a religious fervor, even when he was a PhD student. Cade Metz, The New York Times reporter who’s covered AI for a very long time, writes in his book, Genius Makers, that Sutskever would do one-handed handstand pushups if he got really excited about a research idea. So this guy has always been a little bit mystical, a little bit of a philosopher, a little bit intense, and also, in an interesting way, a gentle soul as well. Employees have described him to me as the chief emoji officer. When he was at OpenAI, he would shower people with emoji reactions if he really liked something that they sent in Slack, and he would say these things about… So this is why, I think, the idea of AGI really lines up with his personality.

He would always say, we need to remember that we’re building a human-loving superintelligence. It will love us, we will love it, and that’s ultimately what’s going to bring us to nirvana as humanity. And so, he started to encapsulate that in the phrase, feel the AGI, feel this human-loving AGI that is coming into existence. And I think there’s an added element to this: all three of these people, the main characters of the weekend, are multimillionaires, and employees have pointed this out to me as an important thing to remember, because multimillionaires behave differently in society; they don’t operate with the same incentives or the same social protocol as us plebes. So I think the fact that he’s really rich, and the fact that he was with ChatGPT at the height of, on top of… Everyone at OpenAI was feeling like they were on top of the world, in a sense. It just amplified a lot of his natural spiritual mystic tendencies to be, to some, a little bit more outlandish.

Justin Hendrix:

So he’s not the only one there who, of course, believes in imminent machines of loving grace walking the earth. Sam Altman’s also talked about these ideas around abundance in the near term, and artificial intelligence, of course, solving humanity’s problems, perhaps solving climate change and poverty, and any number of other issues we might face.

It’s easy to only focus on the individuals in this. One thing I did quite like about your piece on this, with Charlie Warzel, is that you focus on the power dynamics, and I want to bring in one more individual, maybe as a way to talk about the power dynamics, which is Satya Nadella from Microsoft.

So let’s talk about this individual who appears to have just been standing back in the background observing this, maybe we could think of him as the adult in the room, or possibly operating at a different tier altogether. Of course, OpenAI, an $80 billion startup, Microsoft, a nearly $3 trillion tech behemoth.

Karen Hao:

Yes, it’s a really good point. I think Satya is the hidden king in all of this. Satya, as I understand from speaking to people who have worked with him closely, is very strategic, very pragmatic, and he does play a 4D chess game. When he initially invested in OpenAI, there was this question that came up: why invest in an external AI research lab when Microsoft itself has had a longstanding AI research lab called MSR that has done very successful things and is built into the company? But I think it’s illustrative of Satya’s thinking that he did decide to do it. In part, as I understand it, it was like, why not just bet on both? Bet on this external lab, also bet on an internal lab, and just see which one ends up reaching a new paradigm that will help Microsoft commercialize off of that.

OpenAI ended up getting there first. And then, you see the deepening of the relationship, the $10 billion investment. OpenAI exclusively uses Microsoft’s data centers, and Microsoft exclusively licenses OpenAI’s technologies. And so, Microsoft has an enormous, enormous amount riding on this relationship, because through it Microsoft has shot up as a star, reviving its image from a slow, lumbering tech giant with consumer products that no one really likes to use into a powerhouse of a B2B provider with its cloud computing service, Azure. And a lot of the Microsoft marketing materials to clients are based on this idea of, when you use Azure, you get access to OpenAI.

And so, I can only imagine that he was sweating profusely to try and figure out some way to make sure that, ultimately, whatever happened, this huge selling point that Microsoft has cultivated for itself could stay in some form, which is why it makes so much sense that there was a point when Nadella offered Altman and Brockman jobs to run a new team inside Microsoft. And you could see that Nadella and Altman were messaging on Twitter at certain points of this saga, saying, the Microsoft OpenAI partnership is still so important, and we are going to do everything in our power to stabilize this relationship. It’s messaging to the customers of Microsoft, and messaging to the shareholders, because Microsoft’s stock was starting to tank. Ultimately, Satya, being this very strategic, pragmatic person, was like, whatever I can do to just secure and assure everyone that Microsoft is in control, and we’re still going to have access to these cutting edge AI technologies, and you can still get access to them through Azure, that was his end game.

Justin Hendrix:

That does seem to be the conclusion of your piece the other day, even though this may have seemed like a crazy moment with OpenAI possibly falling apart, possibly somehow being folded into Microsoft, now, it appears, carrying on as an independent entity, but very much under the puppet strings of Microsoft. One detail that I hadn’t really quite understood was the extent to which the $10 billion investment Microsoft’s made in OpenAI is really for computing resources, almost like a barter, which is interesting. Just like any old startup taking credits from Amazon or Microsoft, OpenAI is in this similar boat, hooked on its cloud compute infrastructure. This idea that, at the end of the day, there’s only a handful of folks in Silicon Valley that are defining the future of these technologies, that are making the decisions.

Karen Hao:

Absolutely. What we said in our piece, and what I truly believe, is that there’s this fatal flaw, this dangerous flaw in the progression of AI, that has been revealed in this whole drama, which is that it doesn’t matter whether Sam stays, Sam goes, Sam stays, Sam goes. Ultimately, 99.999999% of the world is watching this from the sidelines, wondering what is going on, what this actually means for the future of the most consequential technology of our age, and for all of us that might be relying on the technology, or fearful of the technology, or trying to figure out how to live with this new era of AI. Whatever it is, that 99.99% have zero say, zero participation, at all, and don’t have any visibility.

And that is, I think, the most important lesson that we need to learn from this weekend, and that policymakers should very much be realizing, and I hope acting on. If we believe the general premise that OpenAI states, that it is building AGI that’s beneficial for humanity, and if we actually want something like that, setting aside skepticism around AGI or whatever, then a technology that benefits everyone can only arise when there is a broad base of people participating in it and helping to usher it forward in an inclusive and democratic way. And that’s just absolutely not… It’s like the polar opposite extreme that’s happening.

It really came down to three members of a board, three people whose decision led to the cascading of these events and who could completely, fundamentally change the direction of AI development. And then, of course, there’s a wider circle of investors and whatever that suddenly got involved, but it’s such a tiny group, and all of those discussions are happening behind closed doors. That is not healthy or sustainable in terms of getting to a future that is better and more inclusive.

Justin Hendrix:

Ezra Klein has a column today in The New York Times that comes to a similar conclusion. He says, to some extent, he’s been cheered by how seriously governments and others have taken the possibilities and pitfalls of AI over the last year. But I don’t know, maybe on the policy question, I assume that’s something you’re following closely for your book as well. Do you think that there’ll be immediate learnings for lawmakers as they think about what to do in order to perhaps democratize governance of some of these potentially systemically important, or possibly even existentially important, technologies?

Karen Hao:

I’ve been personally a bit concerned about the way that policy has been heading with the AI Executive Order. You see in the document some really profound ideas that I really stand behind: AI needs to work, it needs to not discriminate, we need to be testing and auditing these technologies before deploying them in sensitive contexts. And then, on top of that, you see stapled on these really intense existential-risk-driven concerns, in policy that now carries the force of law. And I think that this artifact illustrates how much policymakers have been heavily leaning on some of the companies that are developing these technologies, and OpenAI in particular, to advise them on how to regulate these things, and that is not great. Of course, policymakers should be talking to these companies, but they should not be relying on them as much as I think they have, based on the policy documents that we’ve been seeing coming out of these consultation processes.

And that’s what this weekend shows us, is that we are already in a situation where no one’s really participating, other than the people at the companies, and just the most elite people at the companies, and if we’re going to codify and entrench that power in policy, that is going to be hugely problematic moving forward. And I know that there are many people in government that are very actively trying to engage with a broader base of stakeholders, essentially, they’re trying to talk to labor, they’re trying to talk to civil society, they’re trying to talk to small businesses that are not these AI firms.

But one person I was speaking to, when I was in D.C. last week, mentioned that with these conversations, it’s: we’re definitely going to talk to the companies, and then who else do we talk to? It’s not quite an afterthought, but there is still this huge imbalance of emphasis. There’s no way that they’re not going to talk to the companies, and then for everyone else it’s, if we get to you, we get to you. And also, there’s a lot more research that they have to do to figure out who these other groups of stakeholders should be. And so, by default, every single group talks to the companies and then compiles its own list of other people it wants to talk to, so you end up with an amplification of what the companies say.

I hope policymakers take away the lesson that they’ve already put in a lot of work, but they need to continue putting in more work to hear the other perspectives, and people that are coming from completely opposite perspectives, the most marginalized communities that are suddenly afraid of job displacement and things like that, and come to a more nuanced understanding of how this technology is actually affecting people on the ground, and how they can actually regulate greater transparency for these companies as well.

Justin Hendrix:

And of course, some of those conversations in Washington are also happening behind closed doors; Senator Schumer’s forums are a good example of that. Well, let me ask you this. We’re talking Wednesday, just before the holiday, and it does appear that this OpenAI business is at least mostly sewn up, though perhaps the news cycle’s not entirely over. Anything you’re watching for in the next couple of days, or in the next week or so, that my listeners should be aware of? Any unanswered questions that you’re watching to see how they resolve?

Karen Hao:

I’m looking for the seams to possibly burst again, to extend the metaphor that you used. The thing is, Sam has come back, and he’s re-entrenching his power by making sure that this new board is going to be tipped in his favor. But the employees, these factions within the company, are still there, and I know that there are certain employees on the exact opposite ideological extreme from Sam’s approach to commercialization that are still there. And I suspect that things could come to a head again, because in Silicon Valley, when you are employed by a company, your identity is very much attached to your professional life. It’s not just that you’re an employee of a company, and then you go home and you’re someone else; you are dedicated, and it’s even more so with OpenAI. A lot of the people within OpenAI genuinely believe that they’re developing a civilization-shaping technology. And when you set the stakes that high, you will take drastic measures if you think that something is going wrong.

So that’s what I’m looking out for. I think that there could be more drama to come. These employees that disagree with Altman very strongly could take some drastic measures to try and re-scramble things again, I don’t know. Whether or not it happens in the next couple of days, I can’t say, but whatever the timescale is, the more this technology increases in capability and power, the more we’ll start seeing a Game of Thrones-style battle for control over it. So I don’t think this is the end of the story at all; it’s just a temporary reprieve.

Justin Hendrix:

Well, I should mention that your Proton account is listed on your Twitter and maybe other social media bios, so if individuals who may be extras even in this particular season of Game of Thrones would like to reach out, I’m sure they can get in touch with you and perhaps shed whatever light they can on what’s happening behind the scenes.

Karen, I appreciate you speaking to me just before Thanksgiving. Again, hope it won’t be two years that pass before we do this again. And I look forward to the book coming out. I hope you’ll come back on and maybe at least catch us up to this story when you’ve been able to punctuate it with the publication of that book.

Karen Hao:

That sounds fantastic. Thank you as always, Justin, for your wonderful work and for inviting me on the podcast.


Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...