Is OpenAI Cultivating Fear to Sell AI?

Justin Hendrix / Apr 19, 2023

Audio of these conversations is available via your favorite podcast service.

In this episode, I'm joined by a columnist and author who’s spent the last few years thinking about a past era of automation, a process that yielded him a valuable perspective when considering this moment in time.

Los Angeles Times technology columnist Brian Merchant is the author of a recent column under the headline, "Afraid of AI? The startups selling it want you to be," and of the forthcoming book Blood in the Machine: The Origins of the Rebellion Against Big Tech, which tells the story of the 19th century Luddite movement.

What follows is a lightly edited transcript of the discussion.

<Audio clip>

Sam Altman:

Part of the exciting thing here is we get continually surprised by the creative power of all of society.

Rebecca Jarvis:

I think that word surprise though, it's both exhilarating, as well as terrifying to people.

Sam Altman:

That's for sure. I think people should be happy that we're a little bit scared of this. I think people should be happy.

Rebecca Jarvis:

You're a little bit scared.

Sam Altman:

A little bit. Yeah, of course.

Rebecca Jarvis:

You personally.

Sam Altman:

I think if I said I were not, you should either not trust me or be very unhappy I'm in this job.

Justin Hendrix:

That was the voice of Sam Altman, the CEO of OpenAI, from an interview last month with ABC News business correspondent, Rebecca Jarvis. In today's episode, we're going to dig into the nature of his message that people should be happy that the creators of models such as GPT-4 are a little bit scared of what they've unleashed into the world. I'm joined by a columnist and author who's spent the last few years thinking about a past era of automation, a process that yielded him a valuable perspective when considering this moment in time.

Brian Merchant:

I am Brian Merchant, the tech columnist at the LA Times.

Justin Hendrix:

Brian, you are the author of one book, The One Device: The Secret History of the iPhone; and the forthcoming Blood in the Machine: The Origins of the Rebellion Against Big Tech, which I understand is to come out this fall. Can you give my listeners just a preview of what that is? Perhaps we'll have you back on to talk about the book.

Brian Merchant:

Yeah, it's a history and a modern recontextualization of the Luddite movement, and of why workers rose up at the dawn of the Industrial Revolution to target the automating technology of the day. There's so much to learn from that moment now, especially when we have generative text and imaging systems and all this talk of AI, which I think we're going to get into today. Some of the hype and the technological determinism can get out of control, as it did back then, and the entrepreneurial elite, as I call them in the book (not just your everyday startup founders, but those with the most power, the most capital to make things happen) can push through changes to the way that we work and to livelihoods in ways that aren't necessarily democratic and aren't necessarily always healthy.

So, yeah, it's coming out in September and it explores a lot of what have turned out to be rather pertinent themes, I think. I've been writing the book for five years, so now that it's coming out this year, it's kind of like, well, it's a good time to talk about some of this stuff.

Justin Hendrix:

Well, perhaps that does double duty: it both gives us a preview of what to look forward to in the book and may explain a little bit of your perspective on some of the goings-on in Silicon Valley at the moment. I reached out because you wrote this column, "Afraid of AI? The startups selling it want you to be." And I appreciate the fact that you look a little bit at what's essentially doublespeak from these companies. Can you explain the premise of the column?

Brian Merchant:

Yeah. So, it's been a few months now since OpenAI especially, and the other AI text and image generators, have risen to prominence and commanded the spotlight. And I noticed this theme, a lot of people have noticed this theme, that's omnipresent: these AIs are poised to remake society in ways great and small. And not only that, but there's this real apocalyptic dimension to it, that it's so powerful that we have to scramble to grapple with it in every capacity. OpenAI's founder and CEO, Sam Altman, is out there saying that he's a little bit afraid of his own technology, but then, at the same time, he's comfortable offering enterprises, individuals, and one of the biggest tech monopolies in history full access to it for a price: the $10 billion Microsoft deal through which ChatGPT has been infused into Bing.

There are all kinds of different services and offerings that enterprises and individual users can buy, from premium on up. It started to seem a little bit less like, "Oh, we're dealing with a social problem here," and more like we've cultivated an air where there is a clear business imperative: if you don't get on board with this AI phenomenon, then you stand to lose out. So, it became very clear... This seems new because the technology is new. It is new. It does new and cool things. But in a historical context, you can look back time and again and see when there's an automation frenzy or an automation craze, and this is ultimately what the business use case is, right? It's going to let you automate marketing emails, it's going to let you automate copywriting. It's going to let you automate a bunch of stuff that businesses and people might want to do to cut down on labor costs.

But in the past, you've had these big computerization booms, mechanization booms, automation booms, and they're often fueled by this kind of fear: the robots are coming to take your jobs. Automation is coming. Congress had big hearings on the rise of automation over 50 years ago, and it was the same kind of thing, and it had the same effect. It spurs businesses to want to adopt the technology, whether or not they know it works, whether or not there's a great use case for it. It's a great business-to-business enterprise driver. So, it can convince a lot of the middle management layer to get on board with this.

And that started to seem to me, again, as you set up at the top, I don't know whether it's fully conscious or not. I think there is a great investigative profile to be done around some of the conversations that might've been had. I don't know if anyone inside of OpenAI may be uncomfortable with some of the things being said internally, or if you could paint a picture of where this started to become strategic, more like, "Oh, what have we created? Have we created Frankenstein's monster?" Or maybe it's like, "Well, what if we start things out as a nonprofit and we start building this up quietly, piece by piece," as OpenAI did, "and then, when we are ready to start selling it, we have all of this credibility and authority."

Again, these are questions I can't quite answer, but I do... as I say in the piece, it has worked out that way. Right now, they stand at this position of great credibility and authority with all things AI. They look like experts, not just business people, which they very much are. So, they are able to command and steer the conversation in ways that they otherwise would not be able to.

Justin Hendrix:

You focus on comments Sam Altman has made; he recently declared that he was a little bit scared of the technology that he's helping to build. And comments from OpenAI's chief scientist, Ilya Sutskever, who said that, at some point, it will be quite easy to cause a great deal of harm, if one wanted to, with the models that he himself is building. There's a little bit of a, I don't know, dark side mentality with this, almost like a feeling that you don't want to touch the dark side, the power of the dark side, but really, maybe you do.

Brian Merchant:

Right. Well, you at least want to try it, right? You want to try it out. Even Luke Skywalker gets a little bit mad and then realizes the true power that he could unleash. So, it certainly has that. As I say in the column, the benefits are twofold. One, they did see these incredible adoption and user rates, one of the fastest-growing startups by some of the metrics, depending on how much validity you want to lend those, but there's no denying the fact that they generated a huge amount of interest in this. And yeah, part of it is driven by that scary ethos. Yeah, it's scary. Could this change the world? Who doesn't want to try the thing that can change the world, even if it's, or maybe especially if it's, powerful and potentially bad? But then again, that fear also feeds into the more mundane kind of business imperative that I was talking about, the fear of missing out.

You don't want to be the one left holding the bag, the one who hasn't had AI automate all of your services when all of a sudden your competitors have. And I do think at this point, they have to be aware of what they're doing. To some extent, they understand. At least, there's a bit of a feedback loop, right? Like, "Oh, we make this apocalyptic pronouncement and then we get another news cycle, and then we're on 60 Minutes, and then Elon Musk is saying it must be stopped," which only feeds the whole loop even further.

Justin Hendrix:

I mean, to some extent, you could see this as part of a strategy to raise capital, to say, "We need more money both to develop these technologies and to do so safely. You can trust us. We're considering the possible side effects, the possible unintended consequences. And if we have an appropriate amount of capital in the bank, we will steward these technologies forward in a socially conscious way."

Brian Merchant:

Right. Yeah, no, they could make that claim. I think that, again, at this point, we, you and I, people who are critiquing or trying to analyze what's going on, aren't the target audience for this apocalypticism. I think that's kind of a bad argument because it's asking us to trust them to be those stewards, right at the moment when not only do they have more deals in the cooker, more capital from things like the Microsoft deal, more ability and capacity to earn even more, but they're taking all these things private now. GPT-4 is suddenly too scary to be seen by public eyes, because all these incredible harms could be done with it. You can still use it for $20 a month or whatever, but you can't see what's actually going on anymore.

So, even as OpenAI is kind of retreating from its mission of being open and democratic, it's doubling down on this "but we still get to be the stewards, trust us to be the stewards." So, yeah, maybe it's what you're saying, to try to get another round of capital, which I'm sure they could at this point, everybody wants a piece of them. But also, who knows what other deals like the Microsoft deal are being structured right now, and where they're being courted, or seeking to infuse the technology, for those higher margins.

Justin Hendrix:

So, we are a couple of skeptics, but let's perhaps be adversarial even with our own point of view. There's another argument, which is that OpenAI has been more transparent, perhaps, than any other startup in trying to lay out the potential downsides of the technology that it is developing. It has authored, or I should say co-authored in some cases with willing academics, a variety of papers that have looked at everything from possible use cases around mis- and disinformation through to the impact on labor. It has gone out of its way to go into the public, into the spotlight, and try to explain the dangers. I mean, on some level, isn't this what we want tech firms to do? It took, I don't know, what, two decades and lots of Congressional subpoenas for social media executives to finally come kicking and screaming to the table and explain what they understand about the harms that they're causing.

Brian Merchant:

Yeah. I mean, I think all that is true, again, up to a point. Even Elon Musk, if he didn't get a heads-up beforehand, feigned surprise on social media that all of a sudden this thing that was founded as a nonprofit had restructured, in I think 2019, to become a capped-profit corporation, where the amount of profit it can earn is subject to some still quite high cap. But I do think that, to some extent, what you're saying is right: sure, it should get credit. But then we have to ask ourselves, why the change now? Why is it that just at the moment when they stand to start making the most money, they're unleashing the most apocalyptic claims? Why now is it all becoming proprietary? It's no longer about democratizing AI; now it's about being the safe stewards in the citadel. The only ones who can do it, the only ones who can also profit from it the most.

So, I do think that maybe it's an improvement. We certainly have a use case. I don't even know if the avenue would've been open to OpenAI to do what, say, Facebook did, where they don't enjoy that level of blind trust where it's like, "Oh, here's a war chest of venture capital. Just do what you want with it." I think people have all wised up a little bit, even if the actual safeguards in place are still somewhat lacking to make an understatement. But, yeah, I think they're carefully managing the perception of how this is rolled out as much as they are being safeguards.

I don't want to say that any of these folks working at OpenAI are necessarily acting maliciously. I do think at this point there is the potential that they are acting recklessly. Again, the prospect of making a huge amount of money is an insanely motivating and distorting factor. Once that's in the water, it's hard for different incentives to not start kicking in and rolling further down the road.

So, yeah, I remain... If indeed this technology is as powerful as they say it is, and I'm skeptical, intensely skeptical, that it is, but if it is, then why does it have to be a for-profit corporation at all? Why does it have to be an enterprise? Sam Altman would say because that's just his political ideology, I think, that the market is the best place to unleash these things. But if we're making the comparison again to social media over the last 10 years, the market has been a poor environment, I think, in which to test out all these different use cases and phenomena that take root there, from disinformation to toxicity to the exposure to harassment, all these other things.

So, I think that maybe having a few years to safeguard a transformative new technology sounds good, but if it really is as transformative as they're saying, it should probably be a lot more than that; it should still be open. It should not have just suddenly switched into $10-billion-deal-with-Microsoft mode. Again, I'm going to be more skeptical than most, but taking their claims at face value, if they think this thing is so dangerous, then, yeah, sure, maybe introduce it to the public in some small areas, maybe let academia play with it some more. Maybe you could have a controlled rollout. But now, all of a sudden, it's gone from "we're a research institution" to being on Bing, where everybody with a web connection can access it. That seems like a huge leap, and it's still uncontrolled, so all the potential harms that were unleashed with Facebook still stand to be unleashed here.

Justin Hendrix:

That's part of the issue, isn't it? Silicon Valley is all about universal application. We want to create technologies that can change not just one sector or one market, but change the world, change the way we interact with information, the way we interact with each other, the way we interact with government, the way we interact with commerce, and that seems to be very much the promise here. Perhaps if you had decided to roll out large language models or various other technologies in particular domains, where there could be more guardrails or more specific tuning or considerations around safety, it would have been a slightly slower but healthier approach.

Brian Merchant:

Yeah, 100%. Same thing. At this point, Silicon Valley and its model of innovation loom so large that it's hard for us to picture the alternatives, but they are there. And you can imagine the same thing with social networks, with Facebook, something that was unleashed into the wide market with little foresight or advance study into what might unfold, and that's another pretty good comparison point. I totally agree. That's the last 15 years of Silicon Valley: the whole Andreessen "software will eat the world" kind of thing has been erected, or at least there's been an attempt to erect it, into a self-fulfilling prophecy, where you want to do software because you can have a lot higher margins if you're doing software, and you can tailor it so that one size fits as many as possible. That has been the ethos of everything from Uber and the gig economy apps through all the various disruptions that we've seen since then.

It has been exactly as you said: we want the product, the service, to reach as many people as possible. And who knows where the endgame with OpenAI is. Maybe this thing will stay free, maybe there will be ads injected into how you use it eventually, maybe it'll be a series of partnerships like the Bing thing, and again, Bing is going to be an ad-driven model and there are going to be problems with that, too. But the aim is to get it out to as many people as humanly possible. We've been down that road before, very recently. So, we're just turning around and doing it again, even if for a few years it appeared as though they were behaving responsibly at OpenAI. Now, it seems like we're back where we started, with the floodgates open. Now Google's in the game with Bard, and there's a constellation of other startups doing similar things, with different levels of interest in pursuing the ethics of where the data they're training their systems on is coming from, whether or not they're using artists' IP, and what the potential harms are. So, I just feel like we're, if not exactly back to square one, then something close to it.

Justin Hendrix:

Another argument that you hear from some technology leaders is that, despite the potential harms, it's worth it; whatever short-term pain we may propagate onto the world with these systems is worth it in the long run. And Sam Altman himself has said these things: that abundance is right around the corner, that near-term superintelligent AIs will solve fusion, they'll solve poverty, they'll solve the problem of feeding the earth's billions. All of that is going to be made possible by these systems. And so whatever disruption they may bring in the near term is offset by the long-term benefits.

Brian Merchant:

There's no way of saying definitively that it will not, but I will say... People like to go back 100 years and look at Keynes's prediction that if the technological trends of the day held, then we were headed right for a leisure society, where people would struggle to find 15 hours of work a week. I think he wrote that... Well, don't quote me on that, but it's about 100 years ago when he wrote it. Obviously, that has not borne out. And 100 years before that, with the advent of the factory system, you had some of the first business theorists. There's a guy named Andrew Ure, who was maybe the first business futurist. I write about him in my book, too. He sees the first major factory operations rising to prominence and he says, "Oh, it's only a matter of time before these things are totally automated. They will be functioning like an automaton, all linked together, producing endless goods."

So whenever there is a new technology that stands to make a class of producers a lot of money, you will find these predictions that abundance and prosperity are right around the corner. And one thing that I look at in my book about the Luddites is that, yeah, we were producing a lot more stuff after the Luddites lost their battle against automating technologies, but they were also battling changes in the way that they lived and worked. They weren't battling the technologies of production, necessarily. They were battling the onset of the factory system. And economists love to say, "Well, we all became more prosperous afterwards. After the Industrial Revolution, we were producing many more goods. The cost of things dramatically fell, and eventually it led to prosperity."

In some regards, that's true. But there were also decades when child laborers were getting crushed by machinery with no protections, for a long time, and it forever changed the way that we work. Even if you're not in a factory, even if you're working in an office or working remotely, you are still working at the whims of the system that was forged then, in which you are subservient to a manager, and your manager is subservient to someone else. Factory-like organization has governed how we work. If we are looking at a potential change in technology that stands to restructure those social relations, I think we want to have those conversations now. We don't want to wait, and we don't want to say, "Oh, yeah, yeah, I trust you to deliver abundance to us," because in these periods when a new technology is taking shape, there is a lot of malleability, there's a lot of opportunity for the people deploying those technologies to shape those social structures, and we have to be really conscious of that right now.

So, I would just say we've heard these predictions before; they never quite come true. We may raise our standard of living in certain regards, but if you're at home listening to this podcast, or maybe you're on your commute, a little bit stressed because you are burdened by work in much the same way that somebody was 100 years ago, I would be very wary of the latest round of people saying, "This technology is going to finally do it. This is the one that's going to solve it all."

Justin Hendrix:

One of the things that you have focused on is the labor underneath these tech firms: the often low-paid individuals who rate the outputs of these systems and who build the data sets used to train the classifiers these systems require. What do you make of the current situation? We know from other reporting, I'm thinking of TIME's Billy Perrigo in particular, who looked at how low-cost, low-skilled workers were employed by a consultancy in Africa to train classifiers for OpenAI, often at less than $2 an hour. You've also chronicled raters who essentially serve the Google search engine, often making close to minimum wage here in the United States.

Brian Merchant:

Yep. And that's another thing about a technology like this, or any automated technology: the promise that it can do a task completely, mechanistically or automatically, is almost never entirely the case. I mean, you can think of it as an intense deskilling, perhaps. But, yeah, there's human labor behind all of these searches, even your mundane Google searches. I stumbled into that story because I was looking at these raters who had been working for Google, or for a contractor that works exclusively with Google, to rate the regular Google search results. And one of them mentioned to me in our interview, "Yeah, a few months ago, these really wild results started coming down the pike," and it became pretty clear, pretty quickly, that these were the new Bard results he was testing. He has also, at various points in his career, basically done content moderation to make sure that the search engine deprioritizes horrific results so that nobody sees them, and there's starting to be some of that with Bard as well, because these AIs have been fine-tuned to some extent, but they're still producing "hallucinations."

They're still pumping out a lot of weird stuff. And some of that nasty stuff, a team of human beings has to work around the clock to make sure it gets edited out, or deprioritized, or thumbed down, so that ordinary users don't revolt when they release this stuff to the masses. So, yeah, there's an immense amount of invisible human labor making these things possible, and there will be for the foreseeable future. Again, I'm reluctant to say we can never automate all of this away, because maybe someday, you never know. But look back 200 years, and they said, "Oh, we're going to automate this weaving work in the factory." Yeah, now one machine can do the work of four people, and do it worse, but you still need a child overseer to make sure the wheel is cranking, in case something gets caught. It's constantly the case with automation's promises: the human is the ghost in the machine.

Justin Hendrix:

You invoke Timnit Gebru, who said one of the biggest harms of large language models is caused by claiming that LLMs have human-competitive intelligence. I mean, that seems to be the real promise: that these language models are going to take us in the direction of machines that are competitive with, if not superior to, humans. Even if that's something that is far out and perhaps years from now, that seems to be the immediate promise of these companies. That is what Sam Altman is promising us, that these language models are a step towards that, and that that is ultimately the great promise of OpenAI. I don't know, do you buy it?

Brian Merchant:

It plugs into what we talked about at the top of the show; that promise/fear is doing a lot of the motivating work here. It's stirring the pot. I think that a large number of these founders and domain experts are true believers. I have no doubt about that. But I will circle back to the same thing I said about the previous promises of full automation. Look, 10 years from now, we might come back here, and there might be a much more convincing, much more capable software program that can do a lot of things with less of our prompt engineering or whatever. It's going to be a similar variation on what's here now, confined by the guardrails that we give it.

And maybe this is a good note to end things on: so much of the conversation involves sacrificing human capacity or agency to these machines, what happens when it finally outruns us and exceeds us. And the important thing to underline is that it never has to. We are human beings who are completely capable of establishing guardrails, parameters. Socially, how do we want this thing to interact with our society? We can answer those questions. It may be difficult at this point because there are so many different companies and actors, but it's not impossible. Throughout history, we have come together and made decisions about how we want a technology, powerful as it may be, to coexist with our societies. We have a host of options. The six-month pause, even if a lot of people thought it was silly, and there was backlash to the call from Musk and Marcus and Wozniak, that's on the table now.

Just pausing this technology. There are real stakeholders in that, and if the government or whoever else wanted to speak up for that, that is an option. We can do that. We can say, "Let's pause this for six months." There's no need to hand over our agency and say, "Well, this thing is looming. Let the companies do whatever they want. It's all just going to wind up with some Skynet or another." Again, not the case. We have lots of opportunities for input. We can restrict, we can say no, we can say, "Press pause." We can steer the course. We are the agents here, not the machines. It's still a text generator, it's still an image generator. It may be a very complex and very powerful one, but it's still subject to our whims, not the other way around.

Justin Hendrix:

Brian Merchant, author of the forthcoming Blood in the Machine: The Origins of the Rebellion Against Big Tech. I hope you'll come back and tell us about the book when it comes out.

Brian Merchant:

Yeah, I'd love to.

Justin Hendrix:

Thank you very much.
