Ryan Calo Wants to Change the Relationship Between Law and Technology

Justin Hendrix / Oct 26, 2025

Audio of this conversation is available via your favorite podcast service.

Ryan Calo is a professor at the University of Washington School of Law with a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. He is a founding co-director of the UW Tech Policy Lab and a co-founder of the UW Center for an Informed Public.

In his new book, Law and Technology: A Methodical Approach, published by Oxford University Press, Calo argues that if the purpose of technology is to expand human capabilities and affordances in the name of innovation, the purpose of law is to establish the expectations, incentives, and boundaries that guide that expansion toward human flourishing. The book "calls for a proactive legal scholarship that inventories societal values and configures technology accordingly."

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Ryan, I'm excited to talk to you about this book, which I think is a slender volume intended to collect your experience and your observations from looking at this intersection of law and technology over the course of your career. Before we get started talking about the book, how would you characterize your area of research for any listener who's not familiar with your work at the University of Washington?

Ryan Calo:

So I study the interplay between technology and law with a particular emphasis on privacy and automation in the form of robotics or artificial intelligence. I think that one of the things I'm known for is working across disciplines and building interdisciplinary teams to tackle interesting questions at the intersection of law and technology.

Justin Hendrix:

I did not expect this book to open with the Amish. Can you tell us why you started with this kind of consideration of the Amish, Neo-Luddites, Orthodox Jews, and others who take a particular approach to thinking about tech in their lives?

Ryan Calo:

So when I teach undergrads law, technology and policy, which I do every spring, the first thing I assign is this lovely discussion of how the Amish think about whether to accept a given technology. And I do that because it is my sense that at least in America, if not in Western culture more broadly, maybe even globally, there is this reflexive acceptance of new technology. New technology is inevitable, it just appears in the market, and our job as a society is to adapt around it. And our job as legal professionals is to scramble to address this disruption and perhaps restore the status quo ex ante. So in order to shake people loose of that mentality, even a little, I talk about how a community like the Amish thinks about new technology, or Orthodox Jews on the Sabbath, or the sort of Neo-Luddite movement, because the Amish don't sit there and go, "Oh, I guess there's augmented reality now."

And then sit on their buggies with Meta Ray-Bans and wonder how this is going to affect community values, right? They do not accept the technology if it is not consistent with their values of community and service. And oftentimes the Amish will accept an earlier, less efficient, or at least less new version of a technology if it's more consistent with those values. And I just find that edifying for the possibilities it opens up for American tech policy.

Justin Hendrix:

You talk about this idea of defining technology in a particular way in the book, as physical or digital artifacts and systems intended to alter human affordances. Can you just kind of draw out that definition and why it's important for the reader to understand the way you're defining technology?

Ryan Calo:

The debates about what technology is are centuries old. And in writing this book, I came across any number of approaches to defining technology. And I'm not saying that there's a right or wrong answer to that, but you can think of technology as encompassing technique. So any time you follow a particular organizational pattern like Taylorism, or any time that you follow a set of instructions like a recipe, that could be technology. In science and technology studies, an area from which I draw deeply in my work, oftentimes there's this notion that there is no real division between technology and society, that that's artifice. A lot of definitions from the philosophy of technology talk about technology as science applied: the application of knowledge in a useful way.

So I just wanted to clarify for the reader what I'm talking about, which is stuff, maybe digital stuff, but physical or digital objects and systems that are crafted by people in order to alter and extend human affordances. And I do that because brilliant authors like Ruha Benjamin talk about race as a technology. And wonderful scholars like Josh Fairfield talk about law itself as a technology. And I just want to be clear to the reader that I'm talking about artifacts and systems. So I don't claim any primacy for the definition that I settled on, but it does help to set the table for what the discussion will be in the remainder of the book.

Justin Hendrix:

I also want to bring in this idea of technology as social fact. You talk about the idea that society does not treat tech like other social facts, like religion, race, gender, and other types of fundamental forces that shape day-to-day life. Why isn't technology a social fact in the same way?

Ryan Calo:

In my view, technology is a social fact like any other, as you say: race, religion, any other. I don't mean it in the strict sort of Durkheimian sense of social fact, where a social fact is something that, if you disregard it, you will face consequences or friction, right? That's his definition of a social fact. It's sort of like: it's not a fact of physics that you have to be polite, but if you're not polite, there will be friction. I mean it more just in a lay sense: it's just a fact that is deeply social. Technology shapes society, but in turn is shaped by society, and that is no different from other social facts.

And so I'm not an exceptionalist in that sense, where I think that technology is different, but technology feels different. It feels like it's inevitable in a way that I think other social facts do not. Societal norms shift, they change, technology advances. There's a perception that technology is impracticable and imprudent to regulate. So if there's an exceptionalism that I embrace in technology, it is the perception of its difference and not its actual difference. And that's really the point of that first chapter.

Justin Hendrix:

You give an example of Nevada lawmakers contending with self-driving cars. This is back in 2010, which perhaps seems like ancient history at this point, but how did those Nevada lawmakers demonstrate the way regulators in particular can kind of fall into the trap of not recognizing technology as a social fact?

Ryan Calo:

One of the three ways that technology tends to daunt and confuse law is by posing as inevitable when in fact it's contingent. And we've already addressed the first sense of that, which is the idea that technology just arrives. It just inevitably arrives, it just changes, it progresses. And our job is to sort of scramble around and restore order after the resulting disruption. So we've talked about that; that's one way in which technology feels inevitable when it's in fact contingent. But the second way that technology feels inevitable is that policymakers tend to fixate on whatever instantiation of technology the market has happened to put in front of them.

When I testified for the first time before the United States Senate, we were talking about drones in law enforcement. And not only did the drone industry do a demo of a drone for the senators, but during the entirety of the hearing, they placed a quadcopter, an innocuous-looking quadcopter that could only stay up for 15 minutes and had a couple of cameras, right on the table in front of us.

So that was everyone's point of reference, rather than the new affordances of aerial surveillance, which is really what I think the senators and their constituents were worried about. You asked about Nevada. Well, in that instance Google goes to Nevada and says, "Hey, we have this driverless car and you can be a site of innovation. You can be the leaders of the future here in Nevada, if you only pass a law that tells us how we can use this driverless car in your state." So Nevada passes a law that is based entirely on the Google driverless car. And for example, it defines artificial intelligence, and hence driverless cars, as any time that a computer substitutes for what a driver would do. And that's fine for Google because they have a car where the computer does everything.

But soon after the passage of this law, mind you, after it's already been signed by the governor, the luxury car manufacturers catch wind of it and they say, "Wait a second, our lawyers are telling us that there's now a law in Nevada that any time our cars do what a driver would ordinarily do, we've got to get a special license plate and put up a $10 million bond. Does that make any sense? Look at auto lane correction. Look at adaptive cruise control. Look at self-parking. You can't possibly mean this." So within a year of passing a pioneering driverless car law, Nevada had to repeal the law and start over. And this was a great instance of taking the particular instantiation of the technology as being the only possibility, all there is. And I've seen this, Justin, time and again.

Justin Hendrix:

You use the word fetishism when you talk about innovation and the role that it seems to play in our policy debates. It seems to be the dominant posture that lawmakers have toward technology. Every hearing on technology harms in the United States starts with folks on both sides of the aisle saying, "We're going to talk about the bad stuff today, but let's not forget the extraordinary opportunity these technologies have created for us." They often seem to be not just fetishizing the technology, but the companies and the executives that develop them.

Ryan Calo:

I came across this term innovation fetishism. I thought it was Frank Pasquale, but I asked him and he didn't think it was him. But in any event, it just stands for the idea that we tend to privilege innovation and have a kind of societal fascination with and reliance upon innovation all out of proportion with other important societal values. And it's part of a set of factors that make it feel like technology is imprudent or impracticable to regulate, okay? And one dimension of it is, yes, there seems to be this reflexive need to praise technology and its creators and to elevate them in society in some way, but it has other implications too. And one of my favorites is from Morgan Ames, who writes about the failure of One Laptop per Child. This is her work on charismatic technology, and Ames traces the way that an abject failure in technology just gets swept under the rug; it just gets ignored.

So it's both that we fetishize technology on the front end and therefore are really skittish and reticent to regulate it, but when it doesn't deliver on its promise, we have goldfish-like memories about it. One of my lessons from the pandemic was the utter failure of contact tracing apps to do any good whatsoever. And I have never heard anybody who was involved in contact tracing apps ever admit that that was true. And if anybody ever admits that it's true, it's always our fault for not believing in it. It's sort of like those faith healers where if you don't get saved by the Lord, it's because you don't believe enough.

There was even an OMB kind of post-mortem on contact tracing apps in which they said, "Oh, they didn't work at all. They didn't do any good at all. And it's not because Bluetooth was never meant to address a global pandemic. It's not that; it's that people were worried about privacy, or people didn't use them." And that is a fascinating aspect of technology: we don't just fetishize it on the front end, but we give it a pass time and again. I'm sure, Justin, you've seen that in your work. I mean, I can't imagine not.

Justin Hendrix:

Absolutely. And I remember, I think it was a GAO report. I remember summarizing it for Tech Policy Press. I remember thinking about that particular incident. It was a good example of what we see again and again, which is engineers and technologists, most of whom in that context were forced to work from home in that moment, thinking, "Well, we have to do something." And so they did the thing they know to do, which is to build an app or a set of apps or a protocol, and they seemed surprised when people rejected it.

Ryan Calo:

My view is that it was never ever going to work, ever. Because that's not what phones are for. So at some point, I wrote a piece on contact tracing apps with Ashkan Soltani, a renowned security technologist, and an equally well-known epidemiologist. And the three of us wrote a paper together for Brookings TechStream in which we just explained very plainly, with a lot of evidence, why contact tracing apps could not accomplish what their creators thought. And we were exactly right. We were exactly right. And it was not about the fact that people are scared that it's not secure, that people are afraid for their privacy. In fact, Ashkan, who is borderline paranoid about security, opined that there were probably certain things you could do in order to really mitigate those concerns. It's just not going to work. And so for me, it's very fundamental. Yes, I think that's exactly what happened, Justin.

It's that there were a bunch of people in the academy and in the private sector who felt helpless and wanted to use their skills for good. And I really admire that. And indeed, a lot of the things that technologists and information scientists did were deeply helpful, including all these dashboards that helped us understand when the pandemic was on the rise and where to be worried and so on. While we don't need to fetishize technology, I think we need to acknowledge the incredible innovation that was mRNA vaccines, and how those vaccines truly were what allowed the world to move through the pandemic. They're a technology. It's just that contact tracing apps were not ever going to work.

Justin Hendrix:

This isn't just a book about these kinds of phenomena, but it's also an argument that you would like to see the discipline of the law make more connections to science and technology studies, and that you would like to change the way legal scholars think about these issues. Where should they start? If there are other legal scholars or lawyers, or perhaps people who hope to be in those fields in the future, where should they start in changing their way of thinking about technology and the way we go about applying the law to it?

Ryan Calo:

So the first part of our discussion, Justin, has focused on the ways in which technology tends to daunt and confuse law. They are: the fact that it poses as inevitable when in fact it's contingent, and we talked about that; the fact that technology appears to be impracticable or imprudent to regulate, and we talked about that. The third one is that technology tends to make a shell game of human responsibility, because whenever you're doing something with or through technology, there are decisions and assumptions and politics and what have you that are present that don't feel like they are. We can give examples of that if we need. The rest of the book is what to do about that. My conclusion is that because of the slippery nature of technology, we need to be more methodical. And so the book lays out a four-step process by which to analyze emerging technology in particular, although I use historical examples too, from the perspective of law. And I think that has two benefits.

First, it tends to mitigate these characteristics of technology and lead to a stronger analysis. But then second, using a common methodology is a good way to help distinguish law and technology from other disciplines. So the methodology that I offer in the book closely resembles what most law and technology scholars already do, with two differences. First, it's a lot more formal. It says, "Do this first, do this second, do this third, do this fourth." And then second, it layers in other methods and disciplines that are of great utility to any analysis of technology, including law. So those are things like science and technology studies, value-sensitive design, envisioning, et cetera. And so it presents a toolkit for doing legal analysis of technology better. But secondarily, I think that using a common methodology is a great way to distinguish what we do as law and technology scholars or professionals from everybody else.

Because once upon a time, it was fine to say, "Oh, I do law and technology, which means I apply the law to technology." But what aspect of contemporary life does technology not touch today? There's no area of law, apart from maybe pure con law or theory, where technology doesn't matter. It's ubiquitous, and that's why every colleague, everywhere across the academy, is having to grapple with technology. So what distinguishes us? What distinguishes those of us who think of ourselves as law and technology scholars? Well, my position is that it's the methods that we use.

Justin Hendrix:

I do think it's helpful just to enumerate the four key areas of the methodological approach that you suggest.

You say that law and technology scholars ought to select and defend the subject matter and carefully define the technology under examination, with particular attention to what is non-contingent and novel. I find us always asking that question at Tech Policy Press: "What is novel here? What is new?" Often it's not entirely clear what's new about virtual reality, as you mentioned earlier, or other types of technologies. How are they different from existing digital technologies?

You say, two, the scholar should assess the actual or potential impacts of the technology on direct and indirect stakeholders, communities and institutions, or society as a whole. I feel like we're particularly bad at that practice in this country.

Three, we should analyze the impact on the rule of law and legal institutions against the stated normative baseline. And you provide a kind of set of normative criteria for critique and intervention. And then ultimately the scholar should make recommendations and, in particular, reference levers of power within the predefined solution space. It seems somewhat straightforward, easy enough to say, yet it's still asking a lot of legal scholars to be able to go well beyond where their practice may be at the moment.

Ryan Calo:

Well, thank you for that really pithy summary. I mean, one distinction I want to draw, Justin, is that my approach is not a roadmap for writing an article. In other words, I don't believe that every article on law and technology should follow this blueprint somehow, as if it were experimental psychology or something like that. Rather, it's the exercise that you do to understand and analyze the technology that you're interested in. It's a way to surface your assumptions. It's a way to make sure you've fully explored the space and that you're focused on the right thing. So it's an exercise that you do ex ante, before you sit down to write. And of course it will inform your writing. But I'm imagining that this four-step process would be just as useful for a staffer on the Hill whose member, whose senator, tells them, "My constituents are really worried about augmented reality. What should I do?"

I mean, what do you do with that? My method, again, closely hews to what law and technology scholars already do; in that sense, it's not trying to reinvent the wheel. But what do you do first? Well, you focus on what distinguishes augmented reality from previous and constituent technologies. You think about what is essential to AR and what is contingent, and you decide your scope of analysis. Are you talking about Pokémon Go, or are you talking about heads-up displays in fighter jets, or are you talking about the ghosts in the Haunted Mansion? Because all three of those things are augmented reality. And so what is it you're thinking about? And then I think you're right to identify the second step of envisioning as the most difficult. And indeed it is at that second step, envisioning what the technology changes for human affordances, that I bring in the most external literature: envisioning, scenario planning, tech assessment, which of course is the lost art of law and technology.

Basically, it's something we did from the '70s to the '90s and then abandoned in the United States. Value-sensitive design, affordance theory itself from perceptual psychology. So I bring in a bunch of things to help us address what people call the Collingridge dilemma: the idea that you can't know in advance what a technology is going to do, and therefore you should not intervene too early. "Oh, but whoops, you didn't intervene early enough and now it's path dependent and they have a bunch of lobbyists." And so it's trying to give us a toolkit to do that envisioning. The third step has a couple of requirements that I don't see often in the law and technology literature, which is that if you're going to tell us what we ought to do, we should know what your normative baseline is. Are you trying to restore the status quo ex ante, prior to the technological disruption?

Do you have some other kind of vision? Are you a capabilities approach person like Nussbaum and Sen, or are you anti-racist? What is your cost-benefit analysis? Since lawyers love to tell people what they ought to do, where's the ought? And that I think is an innovation. And there's also an innovation in asking you, at the legal analysis step, to choose your own adventure. So are you trying to restore the status quo, which I think a lot of post-Larry Lessig work does, or are you trying to do tech assessment? And then finally, one of the places where I think lawyers have something to contribute is that we, relatively speaking, understand power and the levers of power, and what the possible range of interventions is, and what some of the hurdles are to that. And so the fourth step is merely about exploring the solution space. What options do you have? Which is the best one for addressing the issue that you've identified? So again, it's not magic, it's just a more methodical approach to what we're already doing.

Justin Hendrix:

Before I spoke to you today, I did my normal morning habit of drinking coffee and reading scary headlines about technology and politics. And perhaps the scariest headline I read today was "ICE amps up its surveillance powers, targeting immigrants and antifa," in the Washington Post. And this article goes on to talk about billions being spent on iris scanners, facial recognition apps, phone hacking software, cell phone location data, and various other kinds of purchases by ICE. And it sort of made me think about this question of power, and I suppose where you think we are at the moment. What's your threat level? How far away are we from a kind of societal balance with regard to technology? And how do you think about your recommendations in that context? If people take them up, maybe 10, 20, 30 years from now, will we have a different regulatory and legal environment? But in the environment we're in right now, how far off are we? How badly out of equilibrium are we?

Ryan Calo:

I mean, I think we are far out of equilibrium. I mean, I think it's fair to say that so many of the things that people in our community warned about are actually happening, alongside a bunch of things that none of us even anticipated. But I would say two things, Justin. Really, I'm not a particularly able student of history, but here's my rumination: things can change very fast, and they can also change back very fast, but that doesn't mean there won't be lasting effects. For example, in about a 10-year period in the United States, we went from being so upset about people drinking that we passed a constitutional amendment to prohibit people from drinking, and then within 10 years decided that we like drinking so much after all that we once again passed a constitutional amendment undoing the one from 10 years before. And it is not easy to amend the Constitution.

That said, there has been a lasting impact of prohibition, including on Fourth Amendment jurisprudence, search and seizure jurisprudence. To this day, cars have far less protection under the Fourth Amendment, in part because of the necessity of policing prohibition, where people were putting booze in the back of vehicles and driving them down from Canada. Similarly, in a three-year period, we completely amended our labor laws to welcome women into the workforce during World War II while our soldiers were away. And then a couple of years later amended them again in order to bring men back into the workforce. And yet that bell never got un-rung, and women continued to be in the workforce even during the sort of stultifying '50s. And today, 60% of my students in law school are women. I mean, there's still rampant sexism both in and out of the workplace, and there are still pretty systematic hurdles that women face in the workforce, don't get me wrong.

But sometimes these things have these lasting effects, good or bad, even if they change. So what I'm thinking is, even if in a few years we sort of wake up... I mean, look at Joe Rogan, who for a time was a champion of Trump and his allies, who had Zuckerberg on there so that they could commiserate about the Biden administration and overly woke society or whatever it is, emasculation, whatever they talked about. Joe Rogan is looking around and saying, "Wow, this is really horrifying, what these ICE officers are doing." So we may wake up from this nightmare and decide to make changes, but nevertheless, we're not going to un-ring the bell of unmitigated access to surveillance equipment for law enforcement.

Justin Hendrix:

In the conclusion of the book, you come back to the Amish. You say, "Finally, law and technology should take a page from the Amish, especially in the United States, but elsewhere as well, where law adopts an attitude of reconciliation toward technology." You want to see law as a tool to counterbalance technology. What should we do tomorrow if we're to go that route? What would have to happen, in your view, if we are presented with another opportunity for change? Maybe there's some catalytic moment, maybe some combination of the political mood changing, or maybe an AI bubble burst or something along those lines that creates a kind of moment. What would you recommend we do to set ourselves on a different path?

Ryan Calo:

So at the end of the book, both because I think it's expected in this format and because I couldn't help myself, I do end up getting into my own personal views about where I hope law and technology policy will go.

Justin Hendrix:

You even bring up your mother.

Ryan Calo:

I do, yes. I even bring up my own mother. And what I would like to see is a few things. One of them is that, well, there might be things that we do or don't admire about the Amish, but I come back to the Amish because their approach to technology is in some ways an antidote or a counterpoint to the way in which we reflexively accept new technology and then conceive of our role as reconciling ourselves to it, as you put it. And I would love to see more work in government and in academia that is in the mold of technology assessment. One of the motivations for this book was that I was on a popular listserv for internet professors, and a more junior person asked, "Hey, what's a book about law and technology that I can read? I've read Larry Lessig's Code and I've read this or that, but what's a book about law and technology that I can read that kind of lays out how it's done?"

And there were crickets, Justin. Nobody answered for a day or two, and then all of a sudden someone said, "Oh, there's Channeling Technology Through Law by Laurence Tribe." That's a book from the '70s, and it's about the Office of Technology Assessment, which has since been defunded. But from the 1970s to the 1990s and the Gingrich revolution, speaking of sea changes, the Office of Technology Assessment was this nonpartisan body of hundreds of experts who were helping Congress think through emerging technology. And I wonder what the internet would look like, what AI would look like, if we still had had an OTA in the mid-1990s, let alone in the past decade. So one thing I say is I think we need to embrace technology assessment to a far greater degree than we do today. The second thing I talk about is empowering not just critics of technology, because I think we have a platform for critics of technology these days, which I think is great, but protecting the people that discover what's wrong.

Time and again, the way we learn that Volkswagen is secretly changing its emissions parameters when it's being tested for emissions, to look more environmentally friendly, is because third-party researchers uncover that Volkswagen performs differently in a testing environment than on the road. The way we learn about Cambridge Analytica, the way we learn about all these different things, it's from whistleblowers or third-party researchers, and they're not adequately protected, in my opinion. NYU received a cease-and-desist letter from Facebook for its work trying to document problems of political advertising on the platform, ostensibly under the Computer Fraud and Abuse Act. We all know the tragic story of Aaron Swartz.

We do not adequately protect the people whose job it is, or who take it on themselves, to blow the whistle on corporations or to test systems for safety, bias, privacy, and so on. And so those are two big things that I would like to see change. There are others in the chapter, but those are two of the big ones. And so if we do have this policy window, as Priscilla Regan calls it, or we have this opportunity because the AI bubble bursts or because there's a sea change in politics or whatever it happens to be, those are two of the things I hope we do first.

Justin Hendrix:

Well, perhaps before that moment arrives, I hope Tech Policy Press listeners will pick up a copy of Ryan Calo's Law and Technology: A Methodical Approach, available for pre-order now. It'll ship on November 7th. Ryan Calo, thank you very much.

Ryan Calo:

Thank you so much, Justin. I appreciate all you all do, truly.
