Researchers Defend the Scientific Consensus on Bias and Discrimination in AI

Justin Hendrix / Apr 16, 2025

Audio of this conversation is available via your favorite podcast service.

Last month, a group of researchers published a letter “Affirming the Scientific Consensus on Bias and Discrimination in AI.” The letter, published at a time when the Trump administration is rolling back policies and threatening research aimed at protecting people from bias and discrimination in AI, carries the signatures of more than 200 experts.

To learn more about their goals, I spoke to three of its signatories.

What follows is a lightly edited transcript of the discussion.

Anne Fehres & Luke Conroy / Better Images of AI / Data is a Mirror of Us / CC-BY 4.0

Media Montage:

President Trump has vowed to remove barriers stopping America from being the leader in artificial intelligence development.

In one of the President's first executive orders, he called for a review of policies and regulations.

Key oversight around the development of AI now gone, after President Trump revoked Biden's executive order on AI safety.

So in short, it'll continue to be the Wild West for AI. Less regulation, potentially more volatility, the China threat at the center…

Justin Hendrix:

President Donald Trump has set a new path for AI regulation in the US, diminishing concerns over safety and responsible development. Combined with his administration's assault on higher education and scrutiny of federal research funding, there is real concern among scientists that some of the most important work on fundamental issues around AI will go unaddressed, or even come under threat, particularly around the common problem of bias in AI.

Last month, more than 200 researchers signed an open letter, “Affirming the Scientific Consensus on Bias and Discrimination in AI.”

"We, the undersigned researchers,” they write, “affirm the scientific consensus that AI can exacerbate bias and discrimination in society, and that governments need to enact appropriate guardrails and governance in order to identify and mitigate these harms."

The letter comes as the Trump Administration is actively rolling back policies intended to protect people from bias and discrimination in AI. As it enacts its executive order on AI and develops a new national AI strategy, the White House is expected to remove protections for civil rights in the government's own use of AI, and to prioritize the rapid development of the technology over all else.

For instance, in March, Wired reported that the National Institute of Standards and Technology, or NIST, issued instructions to scientists that partner with the US Artificial Intelligence Safety Institute to eliminate mention of phrases such as AI safety, responsible AI, and AI fairness, while also issuing a request to prioritize "reducing ideological bias to enable human flourishing and economic competitiveness."

To learn more about the letter, I spoke to three scientists who signed it. You can read the letter in full at AIBiasConsensus.org.

J. Nathan Matias:

I am Nathan Matias, I'm an Assistant Professor in the Department of Communication and Information Science at Cornell University.

Emma Pierson:

Emma Pierson, I'm an Assistant Professor of Computer Science at the University of California, Berkeley.

Suresh Venkatasubramanian:

Hi, I'm Suresh Venkatasubramanian. I'm a Professor of Computer Science and Data Science at Brown University.

Justin Hendrix:

Nathan, I'll start with you. Why did you feel it was important to produce this letter at this moment in time?

J. Nathan Matias:

10 years ago, computer scientists were starting to pay more attention to questions about bias and discrimination. And in the intervening period, it's been extraordinary to see how much work has made this issue clear and made progress on it. One reason for the letter is simply that it's time to memorialize the fact that scientists have reached a point where we understand this to be a deep and important problem to work on.

Suresh Venkatasubramanian:

I very much agree with everything that Nate said, and I will add that I was one of those computer scientists thinking about these issues 10 years ago, when no one wanted to pay attention to them. Even reporters were like, "Nah, this is not a real thing." And the most gratifying thing for me now is that I will be in various settings, sometimes talking to legislators or people from the private sector and companies, who will be telling me, "There's this whole concern around bias in AI and we're working on this issue and how to fix it."

So it's become a thing that people understand, and that has been the product of a decade or more of work on the technical side, but of course much longer than that in other disciplines as well. And so it just felt like the right time, especially given some of the statements we've been hearing that have implied there isn't a bias problem in AI. It seemed worthwhile to say, "No. In fact, there is, and there is a scientific consensus on this."

Emma Pierson:

I would only add that the reason I wanted to sign onto this is, I think it's very important to affirm that it's not a radical notion, it's not a partisan notion. On the contrary, it's very well supported by a litany of objective mathematical evidence, and it's recognized as a concern by large majorities of Americans. A recent YouGov poll shows a large majority of Americans are worried about AI bias. It's not a fringe position, and it's been affirmed by multiple administrations and across international consensus. So let's level with everyone and establish a basic baseline of fact here.

Justin Hendrix:

Suresh, I want to ask you in particular, as one of the people who helped usher in the Blueprint for an AI Bill of Rights. You're seeing a lot of the Biden-era AI ideas essentially be dismantled. Not just dismantled, but rooted out, I think, is the right way to think of it. The Trump Administration issued an order that essentially instructs federal agencies to go and look for any evidence of implementation of some of the ideas in the Biden executive order, and to essentially reverse those. It strikes me that this is a harsh turn from the first Trump Administration, which released its own executive orders on AI that weren't so discordant with what the Biden Administration did. What do you attribute this to? Is this simply the victory of rhetoric? Is it the victory of industry? Is it an overcorrection? What do you make of the political context in which we're seeing these moves right now?

Suresh Venkatasubramanian:

I'm glad you brought up this point, because you're right that the executive order the first Trump Administration put out towards the end of its time articulated the need to pay attention to issues of bias in AI. And in fact, when the Biden Administration came in and started working on these issues, that executive order remained in place. It was not overturned; it was kept in place and added to with the work of the Biden Administration. So yes, to Emma's point, there is a lot of consensus around this. As for the reasons, I'm not a political analyst, so it's hard for me to speculate, but I think there's definitely a reaction of, everything the previous administration did needs to be overturned.

I think what's interesting, and also gratifying, is that a lot of this discussion is now happening at the states. I know you've all been covering this issue, and states both Republican and Democratic, so again in a bipartisan way, have been paying attention and trying to bring in some broader governance around AI that includes considerations of discrimination and bias. So while this is being eradicated from the federal landscape, it exists, and I would say in some places thrives, in the state landscape, as well as in other parts of the world. And that's something to keep in mind: the rest of the world is moving on, even if the federal government might currently be deciding not to.

Justin Hendrix:

I want to ask about this idea of the mathematical reality of bias. As I understand it, bias will always be with us in artificial intelligence models that are trained the way today's models are trained. It may never be possible to completely eradicate bias from AI models. But what do we lose if science stops trying, stops studying the problem? Does this in effect hold back innovation?

Emma Pierson:

I think so. I guess one way I'd put it is, look, I am tremendously excited about AI, okay? And I think many folks in the current US administration are too, as well they should be. This stuff is exciting, but we want to build systems that work for everyone. Right? And if we fail to think about these issues, they're not going to work for everyone. We're going to have speech recognition systems that don't understand you if you speak with a Texan accent versus a New York accent versus various types of dialects, or we'll have self-driving cars that don't recognize darker-skinned pedestrians, or we'll have healthcare systems that underestimate the risks of Black patients. Okay, so then these systems aren't going to work for everyone, and that's going to hurt people, first of all, across political parties, across demographic groups, and that's bad on its own. It's also going to slow adoption of these systems, because when you have these negative impacts, people aren't going to want to use these things.

So even if you're just deeply excited about AI, I think you should want to care about building systems that actually work for everyone, just like we do for other products, right? We consistently test our products across groups to ensure that they actually work for all people.

Suresh Venkatasubramanian:

And the thing I'll add to this is that another aspect of the current landscape around technology, especially technology policy at a global level, is that we're moving away from an era where countries are trying to collaborate and work together, and we're moving much more towards an era of competition, of countries viewing their AI policy as an expression of their national sovereignty and of their own way to gain control of this technology. Which means that if there are systems that don't speak to large chunks of the population, systems built in the US that can't recognize Indian accents well, and an Indian company comes up with a system that actually can process those accents better, I'm going to use that system.

And just as a competitive matter, when there were only a couple of players in the market and they were all US companies, maybe competition wasn't an issue, but we are seeing more and more that national strategies are being built around countries' own LLMs, where the Netherlands wants to build its own LLM, or India wants to build its own systems. That is going to create competitive pressure, and that's going to be bad for US competitiveness. That's bad for US companies as well.

Justin Hendrix:

We have seen some evidence of large language models essentially being biased against conservative political points of view, and the Trump Administration has called out this question of ideological bias. To some extent, this should be a nonpartisan concern; it should be an issue for everyone.

Suresh Venkatasubramanian:

Yeah. Here, I must point out the fun and the joy in doing AI research. This is a really fascinating problem. How do you impute or understand or make visible the different kinds of biases that an LLM might reflect? And of all kinds, right? Biases against any different kinds of groups, bias in perspective or in viewpoint, how do you identify them? How do you make them visible? And if you want to have a way to tune the LLM to express certain biases that you might want to express, how would you do it? This is, at some level, a very technically interesting question that forces us to understand more about how LLMs learn what they learn and how they're trained, and how we can tune them in different ways. And there's great research to be done there, but if we're not allowed to talk about bias in LLMs, we can't do that research.

J. Nathan Matias:

I would add to that, one of my entry points to the study of bias and discrimination in AI comes from the core ideas at the heart of this letter. We are all scientists who are really interested in these mathematical and computing questions, but we've also recognized that the more power and role that AI systems have in society, the more involved they are in decisions and actions that shape basic civil and human rights. And for me, as a person of faith, questions of religious liberty and basic rights that impact our beliefs, our capacity to live a full and free life, are part of my motivation for studying these questions. And I've actually done some research on content moderation and religious freedom.

And so when we write in this letter about the importance of studying bias and discrimination and the impact of these things on fundamental rights, we're thinking about a wide range of rights. Some of them are ones that people in the United States, on the right or on the left, might historically focus on a little bit more than others. But as scientists, we're especially interested to ask, how can we study and understand these issues? And if you take the whole question of bias and discrimination off the table, then you make it very difficult for scientists, for computer scientists, for companies, for regulators to even ask the question about rights and how AI systems impact them.

Suresh Venkatasubramanian:

I think the reason why we all came together to articulate this and sign on to it is because we've been seeing narratives forming in the last few months around this idea of avoiding any discussion of bias in AI. And for a lot of us, it just felt like we didn't want to go back to 10 or 15 years ago, when we weren't having these discussions at all. It just didn't make any sense, scientifically.

There's, for example, the long-awaited update to the OMB memo on how agencies should be looking at AI within their organizations. And I don't know what's going to be in there, but one can imagine what is likely to be in the updated guidance. And to me at least, it felt important, as both Nate and Emma said, to set a baseline and say, look, this is what reality is. You might choose to ignore it, but this is what it is, and this is what scientists are saying.

J. Nathan Matias:

We crafted this letter to be a practically useful guide and resource for anyone who's in a situation where they need to affirm that concerns about discrimination and bias matter. That could be people on NSF reviewing panels. It might be people going into conversations with state legislatures about what they should prioritize in AI policy. It could be people on review committees trying to decide whether to approve a particular piece of peer-reviewed scholarship.

We realized that, to some degree, scientists have focused so much on the details of resolving these problems of bias and discrimination that we hadn't taken the step to just state the obvious and put it down in a way that was incontrovertibly clear and supported by a large enough number of researchers. Now anyone who finds themselves in a situation where this fact of science is being challenged can bring forward this letter and say, actually, we have a lot of support from across science, with thousands of papers and hundreds of leading scholars affirming this fact. And we hope that will be a useful tool in the many circumstances where we fear this reality might be questioned in the coming years.

Justin Hendrix:

Is this letter also a message to industry, not just to policymakers and others?

Suresh Venkatasubramanian:

It could be, but oddly enough, and I've had many of these conversations, I think people in industry see what's happening. They see the systems they build; they are not surprised by this. I don't think they are questioning this. I think the question for a lot of folks in industry is, what do they do now? Are they at risk of coming into opposition with the administration if they take action on any of these issues? I think that's where their main concern is. I don't think anyone in industry is questioning the basic scientific facts around this, at least in the conversations I've had. So maybe it's an assist, just like Nate said, a way to help them also bolster their own internal discussions around this topic.

Justin Hendrix:

Suresh, are there key parts of the blueprint for an AI Bill of Rights or the Biden executive order that you hope will survive into the Trump Administration's new AI policy?

Suresh Venkatasubramanian:

So, one thing that I think happened that was very interesting when the AI safety institutes were first formed was this idea that there's a wide swath of concerns around AI, including the concerns raised here and concerns about the safety of AI systems when deployed, for which we need to establish testing regimes and transparency and accountability frameworks. Those things were in the Blueprint, but they have also been a core part of many AI governance frameworks, even ones that did not talk about bias and discrimination. They held that it was important to have some transparency and some testing regimes, pre-deployment or close to pre-deployment. Even SB 1047 in California, which did not talk directly about bias and discrimination in the sense we mentioned here, did talk about the importance of doing pre-deployment and post-deployment testing. And I think that idea, that the systems we put out there should be tested and should continue to be tested as they exist in the world, is something that is both important and has widespread agreement, and I'm hoping it continues to be of importance in the next months and years.

Justin Hendrix:

Emma, anything to add there?

Emma Pierson:

I guess one point I wanted to add, which we've kind of talked about, is that I think it is on us as academics to try to communicate the importance of this field to a broad audience in a way which does not seem abstruse or ivory towery or partisan. I just feel very strongly that this is about systems that hurt real people, and conversely about building systems that can really help real people. And whether it's bias against conservatives from large language models or what have you, it's really not just like a leftist thing. And I do think that people across America and in the world in their hearts deeply care about fairness. No one is indifferent to unfairness. We just sometimes disagree about what groups we should be focusing on, perhaps. And so I think this case really can be articulated in a broadly appealing and nonpartisan way, and I think we should continue to do that.

Justin Hendrix:

We'll leave it there. Nathan, Emma, Suresh, thank you very much.

Emma Pierson:

Thank you, it's a pleasure.

J. Nathan Matias:

Thanks for having us.
