Examining the Impact of Internet Research Agency Tweets in the 2016 U.S. Election

Justin Hendrix / Jan 15, 2023

Audio of this conversation is available via your favorite podcast service.

"The Russian government interfered in the 2016 presidential election in sweeping and systematic fashion," begins Special Counsel Robert S. Mueller in his Report On The Investigation Into Russian Interference In The 2016 Presidential Election. The special counsel found that one component of Russia's effort consisted of a campaign by the St. Petersburg Internet Research Agency (IRA).

The Mueller report says that in "mid-2014, the IRA sent employees to the United States on an intelligence-gathering mission," and it "later used social media accounts and interest groups to sow discord in the U.S. political system through what it termed 'information warfare,'" utilizing accounts across Facebook, Instagram, Twitter, and other platforms.

The campaign evolved from a generalized program designed in 2014 and 2015 to undermine the U.S. electoral system, to a targeted operation that by early 2016 favored candidate Trump and disparaged candidate Clinton.

The Mueller report notes that members and surrogates of the Trump Campaign reposted or promoted pro-Trump or anti-Clinton content from IRA-controlled social media accounts and that "in a few instances, IRA employees represented themselves as U.S. persons to communicate with members of the Trump Campaign in an effort to seek assistance and coordination on IRA-organized political rallies inside the United States." This activity included a protest in Houston in May 2016 that saw protestors and counterprotestors face off in the street. And, it found that "U.S. media outlets also quoted tweets from IRA-controlled accounts and attributed them to the reactions of real U.S. persons."

Similar conclusions are to be found in Volume 2 of the Senate Intelligence Committee's Report on Russian Active Measures and Interference in the 2016 U.S. Election, which summarizes the IRA's efforts across Facebook, Twitter, YouTube, Reddit, Tumblr, LinkedIn, Vine, Gab, Meetup, and even Pokemon Go.

Ultimately, the special counsel indicted 13 Russian citizens affiliated with the IRA on multiple counts, including Yevgeniy Viktorovich Prigozhin, who "funded the conspiracy through companies known as Concord Management and Consulting LLC, Concord Catering, and many subsidiaries and affiliates."

In the years following the 2016 election, much effort has been put into understanding foreign influence campaigns, and into disrupting efforts by Russia and other countries, such as China and Iran, to interfere in U.S. elections. Political and other computational social scientists continue to whittle away at questions of how much influence such campaigns have on domestic politics. One such question: how much did the IRA's tweets, specifically, affect voting preferences and political polarization in the United States?

A new paper in the journal Nature Communications provides an answer to that specific question. Titled "Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior," the paper matches Twitter data with survey data to study the impact of the IRA's tweets.

To learn more about the paper, I spoke with one of its authors, Joshua Tucker, professor of politics at NYU, where he serves as the director of the Jordan Center for the Advanced Study of Russia and the co-director of the NYU Center for Social Media and Politics (CSMaP). We talked about the study, as well as what can and cannot be understood about the impact of the broader campaign of the IRA, or certainly the broader Russian effort to interfere in the U.S. election, from its results.

What follows is a lightly edited transcript of the discussion.

Joshua Tucker:

I'm Joshua Tucker. I'm a professor of politics at NYU, where I also serve as the director of the Jordan Center for the Advanced Study of Russia and the co-director of the NYU Center for Social Media and Politics.

Justin Hendrix:

Josh, thank you for joining me today. And we're going to talk through a paper that you've just published in the journal Nature Communications, "Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior." So, this is a work with Greg Eady, Tom Paskhalis, Jan Zilinsky, Richard Bonneau, Jonathan Nagler, and of course yourself.

And you say in this report that its purpose is to "investigate the relationship between Russia's foreign influence campaign on Twitter during the 2016 US presidential election and the political attitudes and voting behavior of ordinary US social media users." And by that, you mean specifically the campaign run by the Internet Research Agency, the St. Petersburg troll farm operated by the Russian oligarch and mercenary, Yevgeny Prigozhin.

In your own words, can you tell the listener what you set out to do with this particular piece of research?

Joshua Tucker:

Sure. And first off, I want to say thanks, Justin, for having me on the podcast today. I also want to give a huge shout-out to Greg and Tom, whom you listed among the authors. They were the lead authors on this paper and did a tremendous amount of work. There was a lot of data analytics that went into this, and they both did a fantastic job. So, I want to make sure they get a proper shout-out here.

What were we trying to do here? Let me give you the story of this paper. In the aftermath of the 2016 US election, Twitter, much to its credit, released a data set of all the tweets that it had identified as coming from these Russian IRA trolls, as you mentioned in your introduction. And what do we mean by trolls? I want to be clear about this because we want to distinguish trolls from bots.

These were not automated accounts; these were human beings in Russia who were producing social media content on Twitter, pretending to be people they weren't. They were doing it on other platforms as well, but what we were able to observe was on Twitter. That's what we mean by the term troll: they were pretending to be other folks, Americans, and producing this content during the election.

In the aftermath of the election, this influence campaign became known. Twitter originally released a set of handles of the accounts that were involved to Congress when the company was testifying there. And then, in the aftermath of that, Twitter ended up releasing a data set to academics so they could analyze the behavior of these accounts and what was in them.

There's been some fantastic research on this, I know you and I have talked about this before, trying to figure out what these troll accounts were doing. I want to call out Kate Starbird's lab out at University of Washington. I want to call out Darren Linvill and Patrick Warren at Clemson, their lab. We did some work at the Center for Social Media and Politics. And of course, there were congressional committees and investigations that went on of what was going on in these accounts.

So, I think we as a research community and we as a public policy community learned a lot about what these trolls were doing. We've learned some stuff about their strategy. We've learned some stuff about their ideas. The Senate committee famously concluded the purpose of the entire operation was to help out Donald Trump, increase the vote for Donald Trump and/or to increase political polarization in the United States.

What we lacked at that point in time, though, was much of any sort of study about who was exposed to these accounts. Because when Twitter released the accounts, what it released was the actual tweets. And so, we could look at what was being tweeted by these accounts, but we didn't know how these tweets were seen.

And then, I had the opportunity to learn about a fantastically excellent paper that came out in the Proceedings of the National Academy of Sciences by Chris Bail, Sunshine Hillygus, and colleagues at Duke. They were working on a separate paper, but they had this panel survey that was in the field in October of 2017, I believe it was. And they realized they could take that data and look at the effect of exposure to Russian trolls on the people in their panel survey in October of 2017.

So, a super important paper was published in the Proceedings of the National Academy of Sciences. Their research was the first study I saw that tried to measure the effect of being exposed to the trolls. And they found a bunch of null effects: being exposed to these trolls over October of 2017 didn't seem to be changing anyone's attitudes about anything. But of course, by 2017, a lot of the trolls had already been kicked off the platform by Twitter.

But of course, people were interested in the context of the 2016 election. And I realized that at the Center for Social Media and Politics, we had had a panel survey that was in the field during 2016 where we had talked to people in April of 2016, again in October of 2016, and then again got their vote decisions after November of 2016. And the folks in this panel had shared their Twitter handles with the researchers for the purposes of study.

And we were interested in 2016 in what people were seeing on Twitter. Because Twitter data is public, for the Twitter handles we had, we went and found the accounts those people followed. And then, we collected the tweets from those accounts during the 2016 election campaign. So, what we realized was that we were actually in possession of the data we needed.

We would be able to analyze, and I want to be very clear about this, let's talk about it in a second, the level of potential exposure to these tweets among actual people, voters that we knew about, people we knew things about because they had filled out our surveys. We knew demographic information about them. And we could go back and look at exposure to these tweets by the Russian IRA trolls, and we could see if that exposure bore any relationship to any of the predicted goals of the IRA campaign.

So, was exposure correlated with changing your vote to be more likely to vote for Trump or less likely to support Clinton? Was exposure to the Russian trolls correlated with becoming more polarized on political issues? So, that's the story behind the paper. So, what we set out to do was to see, A, if we could characterize the nature of exposure to these trolls. And we should talk in a second about what we mean by exposure.

And then, B, if we could see if there was any relationship between this exposure and the changes in attitudes.

Now, I want to say one more thing here before we get into details of what we actually did, which is that at the time that all of this information was coming out about these Russian troll foreign influence campaigns, political scientists have long held that it's quite hard to change anybody's vote.

You and I sitting here right now could probably figure out how 90% of the country is going to vote in the 2024 election without even knowing who the candidates are at this point. Partisanship, especially at this moment in time, plays a huge role in determining people's votes. There's a ton of literature right now trying to sort out whether the huge amounts of money that are spent on advertising campaigns make any difference in pushing people and changing people's votes.

There's huge studies about the effects of news, the effects of talk radio, the effects of television news, the effects of all these different social media platforms. So, I think we did go into it thinking there's been a tremendous amount said about these accounts. And on the other hand, there's this political science literature that makes us think it should be really hard for exposure to a few tweets to change someone's opinion about who they're going to vote for.

So, that's what we were trying to sort out. Could we bring some actual empirical evidence to this debate beyond just saying, "Well, political science says it's pretty unlikely that this would happen." And everybody is talking about what the Russians were trying to do and what their goals were. And we had all this information about how the Russians were trying to go about their goals in doing this.

Justin Hendrix:

Before we move on into the results specifically, can we just talk about the scale of the survey panel, the degree to which you had to use math to combine these two data sets and to perform your analysis?

Joshua Tucker:

Sure. So, this is survey research. So, we had about 1,500 people in this sample who were users of Twitter. So, the first thing we want to say is users of Twitter are not identical to the population of the United States. And that's often the case when you're trying to work with Twitter data to extrapolate about what's happening. In this particular case, that's kind of fine because what we were trying to do was to get a sense of what had happened on Twitter.

So, we had a sample of people. Now, these were people who were willing to participate in the YouGov survey. These were people who were willing to share their Twitter handle as part of the YouGov survey. So, we do a lot of stuff in the appendix of the paper, sort of checking to make sure that these people look fairly similar to folks in terms of demographics otherwise. But that's always a question when you're doing survey research.

So, we have about 1,500 people that fairly well approximates the demographic descriptions of people who use Twitter generally. Now, for those 1,500 people, what we did was we collected all the accounts that they followed on Twitter. And then, during the campaign, we collected all the tweets from the accounts that they followed on Twitter. Now, the reason we did this is because you’ve got to put yourself back in 2016.

At this point, the way Twitter showed people tweets in their timeline was through accounts they followed: either original content produced by those accounts or retweets from them. So, that doesn't capture all the ways that people could access Twitter data. People could always go on Twitter and use the search function to search for hashtags and keywords.

But what we do think is that what we were capturing here was we were capturing a good approximation of the tweets that could have appeared in their feeds over the course of the 2016 US election.
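The potential-exposure construction Tucker describes can be sketched in miniature. The handles and data structures below are invented purely for illustration; the authors' actual pipeline ran over roughly 1.2 billion tweets and is far more involved.

```python
# Toy sketch: count "potential exposures" to IRA content for each
# survey respondent, i.e. tweets from accounts they follow that were
# authored by, or retweeted from, a known IRA account.

ira_accounts = {"troll_1", "troll_2"}  # hypothetical IRA handles

# For each respondent: the set of accounts they follow.
follows = {
    "user_a": {"news_x", "troll_1"},
    "user_b": {"news_x", "friend_y"},
}

# Tweets collected from followed accounts during the campaign:
# (author, retweeted_from) pairs; retweeted_from is None for originals.
timeline_tweets = {
    "news_x": [("news_x", None), ("news_x", "troll_2")],  # one retweet of a troll
    "troll_1": [("troll_1", None)],
    "friend_y": [("friend_y", None)],
}

def potential_exposures(user):
    """Count tweets that could have appeared in a user's feed and that
    originate from an IRA account, directly or via retweet."""
    count = 0
    for account in follows[user]:
        for author, retweeted_from in timeline_tweets.get(account, []):
            if author in ira_accounts or retweeted_from in ira_accounts:
                count += 1
    return count

print(potential_exposures("user_a"))  # direct follow + one retweet -> 2
print(potential_exposures("user_b"))  # only the retweet via news_x -> 1
```

Note that, as discussed below, this counts tweets a user *could* have seen, an upper bound, not tweets actually viewed.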

Justin Hendrix:

But just to be clear, that wouldn't include things like engagements that perhaps people they were following had with these other accounts. So, if I reply to an IRA account, it might show up in someone else's feed?

Joshua Tucker:

Yeah, yeah. And we would've captured that.

Justin Hendrix:

You would've captured that? Okay.

Joshua Tucker:

Yeah, yeah, we would've captured that. We captured all activity by the account, so a reply would count as an original tweet because you're producing content. And if you look at Figure 1 in the paper, this turns out to be a big deal. Because if you just look at people who directly followed these Russian troll accounts, so the trolls' tweets would have appeared in their timelines, that accounts for less of the potential exposure than retweets by other accounts they followed.

But I think now is a good time to jump into what we mean by potential exposures. Twitter has never released information to researchers in its API about which tweets were seen by which people; this would be page-view information or tweet-view information, whatever you want to call it.

Because Twitter has never released this data, what we and other people have done is use the universe of tweets you could have seen as a best approximation of the composition of the tweets you did see. Just to show you the degree of this problem: if I follow 10 people on Twitter and go on every single day, I'm going to see all of their tweets.

If you follow 10,000 people on Twitter and you go on once a month, you're going to see a small fraction of them. So, it's important to remember that what we're talking about here is the potential exposure that people may have had to these Russian trolls, which means that what we're measuring is the maximum amount of exposure people could have had. For all intents and purposes, it's possible and very likely that many people had much less exposure.

So, when we say that, on average, a person in our sample was potentially exposed to four of these tweets from Russian trolls a day over the course of the campaign, that doesn't mean they actually saw them. They might have seen even fewer. In a sense, that makes the bar even higher for finding a potential effect here, because when we're running our analysis, we're looking at these potential exposures, and people may not have even seen them.

So, we're kind of giving a ceiling level of what the exposure to these tweets might have been, and that's a limitation of doing this. On the other hand, and we'll get into this in a second, given the enormous disparity in concentration we found, it does seem very likely that our measure was nicely separating the people who were seeing plenty of these tweets from the people who were seeing probably none of them.

Justin Hendrix:

So, let's talk about the effects and the key findings. What did you find? What do you think are the most important top lines from this study?

Joshua Tucker:

Right, right. Okay. So, I want to just be super clear because we'll talk about the caveats afterwards. But even before I start presenting results.

These results are based on the Russian IRA campaign on Twitter. These results should not be interpreted to be conclusive about the Russian foreign interference campaign. We can talk about lots of other things going on.

They shouldn't even be interpreted as conclusive about everything that has to do with the IRA campaign on social media. We did not have a commensurate data set from Facebook, so we don't know what the impacts were there. Everything I'm going to tell you right now is what we learned about Twitter.

Now, I have some suspicions about which of these findings would hold for Facebook, and we can talk about those if you'd like. But at the same time, I want to be super clear: none of this speaks to the kind of thing Kathleen Hall Jamieson has written about, the hacking and leaking of data in an attempt to change the nature of news coverage and public opinion. Nor does any of this have anything to say about news coverage of the tweets themselves. But again, the news coverage about the Russian influence campaign on Twitter didn't happen until after the election.

Okay, so that being said, I want to be really clear about putting this in context. We are providing a piece of the puzzle of understanding how this Russian foreign influence campaign was carried out in 2016. And it's a new piece of the puzzle because, heretofore, we have not had information about exposure that we could look at.

All right. So, what did we find? We found essentially four findings. One is that exposure to these tweets was heavily concentrated among a small portion of the electorate. Now, as someone who's interviewed and written a ton about social media and politics, this should come as no surprise to you. If there is a golden rule of the internet, it's that we often see these power laws in effect.

And a power law means that we often see behavior on the internet where small portions of people account for a lot of the behavior and large portions of people account for very little of it. We found this in lots of other studies we've done at CSMaP, most famously with the sharing of fake news. But there's all sorts of studies that come across this regularity. And here we found it, but the concentration of exposure to Russian trolls was even stronger than we expected it to be.

So, it turns out that 1% of our sample– remember we had about 1,500 people, so that's like 15 people– accounted for 70% of these potential exposures. Ten percent of our sample accounted for 98% of those exposures. So, that means 90% of our sample had basically no exposure. And remember, as we were talking about a moment ago, this is potential exposure. So, they got no potential exposure.

At most, they maybe saw a couple of tweets over the course of the entire election campaign. So, that's the first point.
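The concentration figures Tucker cites (top 1% of respondents accounting for 70% of potential exposures, top 10% for 98%) come from a simple cumulative-share calculation. Here is a minimal sketch with made-up exposure counts, not the study's data:

```python
# Share of all potential exposures accounted for by the top fraction
# of respondents, ranked by exposure count. Counts are illustrative.

def top_share(exposures, fraction):
    """Fraction of total exposures held by the top `fraction` of users."""
    ranked = sorted(exposures, reverse=True)
    k = max(1, int(len(ranked) * fraction))  # number of top users
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0

# A heavy-tailed toy sample: a few heavy "viewers", many with zero.
sample = [700, 150, 50, 40, 30, 20, 10] + [0] * 93  # 100 "respondents"

print(round(top_share(sample, 0.01), 2))  # -> 0.7 (top 1 user)
print(round(top_share(sample, 0.10), 2))  # -> 1.0 (top 10 users)
```

With a distribution this skewed, the cumulative share saturates almost immediately, which is exactly the power-law pattern described above.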

The second point: because we had all the tweets people could have seen, we wanted to contextualize this exposure. There's a big question here; we just published another paper in Science Advances showing how few people actually follow politicians online and how people do it.

Following up on the work in that Science Advances paper: was it the case that these trolls were the only information people were getting about the election? Well, no. That turns out not to be the case, because even within our sample, there was much more potential exposure, about four times as much on average, to tweets from politicians and candidates themselves, and something like 20 times as much potential exposure to tweets from news media.

So, it's not that these Russian trolls were the only thing people saw on Twitter, with nothing from politicians or media. In fact, if you ask what the potential sources of information about the election were on Twitter, directly from politicians and from news media, the trolls were just a tiny fraction. And again, most people were seeing nothing from the trolls at all.

But even on average, there was just much, much more information in these people's feeds about the election from other sources than there was from the Russian trolls. So, the Russian trolls were a small part of it. Then, because we had demographics about these people, we were able to look at what accounted for the distribution: who were these people who were getting more of this exposure?

And it turned out that they were highly partisan Republicans. In our data, we find that those who identified as strong Republicans were potentially exposed to roughly nine times as many posts from these Russian foreign influence accounts as Democrats or independents.

All right. So, with all of that in view: this exposure was heavily concentrated, and for most people it was a tiny share of the information they were seeing even on Twitter, to say nothing of all the information they were getting from television news, from friends and family, from everything else they're embedded in. We then went and looked at the question that started the whole thing off in the first place, which was: could we find a relationship between being exposed to more of these tweets from trolls and changing either your opinion on issues or your preference among candidates?

I mean, I want to be very, very, very careful here. The gold-standard way to look at causal relationships– did these trolls cause people to change their views or change their attitudes– would be a randomized controlled experiment. And the way you would've done this is you would've had to, before the election, randomly assign some people to follow these Russian trolls and randomly assign other people not to follow them.

This, of course, would've been ethically highly suspect, to say the very least. But beyond the ethics, it's temporally impossible: we didn't know about the Russian trolls until after the election. We literally would've needed a time machine to go back and run this study. So, we're never going to get the gold standard here. What we're able to do is look at correlation.

And correlation means we could be picking up selection effects if we did find a positive effect. So, had we found a positive effect, we would've had to be very careful about caveating the extent to which we could interpret it. And we've tried to be extremely careful in the way we talk about the paper, not saying that we have rejected a causal impact, because we can't really test a causal impact.

What we can test is the observable implication of a causal impact. And an observable implication would be that if these tweets were causing people to change their vote choices or their positions on issues, it seems plausible that this should be more likely to have happened among people who saw these tweets than among people who didn't.

And so, we did one analysis that was literally: were you exposed to any tweets from these accounts or not, just a yes-no. And then, if these tweets were having an impact, there should be a larger effect among people who were exposed to more of them. We tried that as well. And in neither case do we find any statistically significant or meaningful effects in our sample of 1,500 people.
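The yes/no comparison Tucker describes can be illustrated with a toy permutation test on made-up attitude-change scores. This is not the authors' actual estimation strategy, just a minimal sketch of comparing exposed and unexposed groups:

```python
# Toy illustration: is the mean attitude change (post minus pre wave)
# different between potentially-exposed and unexposed respondents?
# All scores below are invented for illustration.
import random

random.seed(0)

exposed = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2]
unexposed = [0.0, 0.1, -0.1, 0.2, 0.0, -0.2, 0.1, 0.0]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(exposed) - mean(unexposed)  # observed group difference

# Permutation test: shuffle group labels, recompute the difference,
# and see how often a shuffled difference is at least as extreme.
pooled = exposed + unexposed
n_exposed = len(exposed)
extreme = 0
n_perms = 10_000
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = mean(pooled[:n_exposed]) - mean(pooled[n_exposed:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / n_perms
print(f"observed difference: {observed:.3f}, p-value: {p_value:.3f}")
```

A large p-value here, as with the null results in the paper, means the data give no evidence of a difference; it does not prove the absence of any effect.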

So, we find no correlation between having been exposed to more of these trolls and starting off preferring Clinton and ending up preferring Trump, or starting off preferring Clinton and ending up preferring a third-party candidate, or starting off preferring Clinton and ending up preferring not to vote.

In none of those cases do we find any relationship there, nor do we find any real relationship with changing levels of polarization. So, our conclusion, based on all the descriptive evidence I gave you before– that this exposure was highly concentrated, that it was a small fraction of the information to which people were exposed, and that it was overwhelmingly among highly partisan Republicans who were seeing this kind of content anyway– is that if the goal of the Russians was to use this Twitter foreign influence campaign to decrease support for Clinton or to increase political polarization, it seems unlikely that the direct effect of being exposed to these tweets was particularly successful, if that again is the correct set of goals they had.

All right, that was a mouthful, but hopefully that goes through what the key findings of the study were.

Justin Hendrix:

I want to try to maybe break it down for the listener who, like me, may not be able to follow all the mathematics or statistics at play in your methodology. One account that I recall being mentioned in the Mueller report as associated with the IRA was the "TEN_GOP" account. You may remember this one. It was the purported Tennessee Republican Party account that acquired more than 150,000 followers. The Mueller report points out, of course, that its tweets were cited or retweeted by multiple Trump campaign officials and members of the Trump family: Donald Trump Jr. himself, Kellyanne Conway, Brad Parscale, General Michael Flynn, et cetera. So, clearly, if those types of individuals are retweeting these messages and a huge number of people are then exposed to those retweets, how does that figure in your study based on the survey? Are you able to discern the exposure mechanism or the effect of that exposure?

Joshua Tucker:

Yeah, so that's a great question. And that, I think, is a huge advantage of our methodology. You could imagine that we'd be sitting here today having gotten the accounts people were following and then stopped at that point. At that point, all we would be able to tell you about was exposure to tweets directly from the trolls, and we would be missing all of these retweets that you're talking about.

Fortunately, the way we did our study, where we went and got all of the tweets from accounts that people followed, we can capture these retweets. So, when we cross-checked the list that was released by Twitter against the 1.2 billion tweets that our folks could have been exposed to, that's exactly what we were able to get at. We were able to capture that.

And indeed, in the supplementary materials, you'll find one of the interesting things about this, which I think is very important for future research in this area: there was much more exposure to the Russian trolls' tweets via retweets than via directly following the Russian troll accounts. But I want to bring up one other point here, which is where I thought you were going with this question, which is what we cannot capture.

What we cannot capture is if somebody took one of those tweets and put it in a news article, if the media had reported on one of those tweets, we wouldn't capture that in our study and we wouldn't know if people were exposed to it. Similarly, if somebody took one of those tweets from the Russian trolls and posted it on Facebook, we wouldn't know who was exposed to it there.

So, this is a self-contained Twitter study, not because that's the best way to get at the entirety of the thing, but because of the data limitations. But we did do this one really valuable thing ahead of time, which was very data intensive: collecting the tweets from the accounts people were following so we could capture these retweets.

Justin Hendrix:

I do think that's an important point. I was going to ask, in general, about how the tweets played a role in the broader information ecosystem. And of course, you've mentioned that you can't see what happens on Facebook; that's not part of this study. We know that overall exposure to IRA activity, based on Facebook's own estimate, was much higher on Facebook than on Twitter, for instance.

And as you rightly pointed out, these tweets were embedded, screen-grabbed, shared, and otherwise helped to drive some of the conversation across multiple modalities.

Joshua Tucker:

But that would be really important for us to know. If someone else could come along and do a study showing that there was an impact from tweets being picked up in other outlets, that would be a really important finding and a really important complement to this study. This study shows that the IRA activity on Twitter was heavily concentrated, was a small fraction of what people were seeing on Twitter, and does not appear to be meaningfully related to changes in attitudes.

But that doesn't mean that one well-placed tweet couldn't have led to a huge news story. Again, though, I would caution that it's really easy to come up with examples like this. You'd want to do the exact same thought process we did: okay, say you find one tweet from one Russian troll, and there was a big news story one night, and millions of people saw that news story. That would still just be one night out of a long campaign.

And you'd want to compare the impact of seeing that one news story that included that one tweet against all the news stories people had seen over the course of the entire campaign. So, I think we still want to think really carefully about this. But yeah, it's absolutely correct that there could have been leakage from these online social media campaigns into the broader information ecosystem.

And this is just a real challenge of studying the impact of social media on politics, and studying social media's relationship to politics. It's why you and I have talked many times before about why researchers like myself have been so active in trying to get regulatory environments that require data to be made available to outside researchers, so that we can answer these kinds of complicated questions.

I mean, let's be totally clear. We didn't plan to do this study before the election. We didn't know there were going to be Russian trolls. We were lucky that we had... I mean, I think everyone is lucky that we actually had data in place that we realized, after the election, could be used to answer a question people were interested in.

But the more data that we have that's available for researchers, the more we're going to actually be able to get information about these big questions that are facing society.

Justin Hendrix:

So, I want to just quote from your study. You say, "Taking our analyses together, it would appear unlikely that the Russian foreign influence campaign on Twitter could have had much more than a relatively minor influence on individual-level attitudes and voting behavior."

But I think it's fair to say, and you've had to do a little more communication yourself, that some of the caveats around your findings, even those stated in the paper, have not really fed into the broader discussion among the public and in the media, in some cases even among people who have seen the paper.

So, in the discussion, you write, "Despite these findings, it would be a mistake to conclude that simply because Russian IRA troll activities on Twitter did not meaningfully impact individual-level attitudes that other aspects of the Russian foreign influence campaign did not have any impact on the election or faith in American electoral integrity." Can you speak to that a little bit, what you had in mind there with that caveat?

Joshua Tucker:

Yeah, absolutely. Absolutely. You know, in these kinds of journals, you have to be very compact in what you're writing. But we tried really hard to get this out there, and that's why we released these tweet threads about this. I think there's two ways we can think about this. One is the immediate limitations of what this study does and does not do in the context of the 2016 election.

So, I've said this numerous times already. This was a multi-pronged attempt by the Russians to interfere in and have an impact on the US 2016 election. We can think about the social media part of it, and then we can think about the hacking and leaking part of it. The social media part can even be broken down into Facebook and Twitter. And within Facebook, there's ads, there's other things.

On Twitter, there's just these tweets. We have provided one small piece of this puzzle by looking at one part of this influence operation. And so, just by definition, it would be wrong to conclude that our one piece, which looked at Twitter, can speak to the impact of what happened on Facebook, or to the impact of what happened with the hacks and leaks. You just can't conclude that. It's a different study.

Now, the nice thing is Kathleen Hall Jamieson has done all this incredible work on the hacks and leaks. We know something about that. I mean, people can look at her evidence and make their own decisions, but that's out there. What had not been out there was the nature of exposure on Twitter. And again, we had a sample of 1,500 people, and we think it's a good sample. Now, we as a public have some more information here.

So, I want to be crystal, crystal clear. You would never say, if there were three vectors of anything happening, that if you've been told something about one vector, you now know about the other two vectors. So, I think that's the most important thing to say. That's what we were trying to say. You can read our paper and draw conclusions about what we think was likely going on with people who were exposed to these tweets on Twitter.

You can make conclusions. And that's why we said it appears to us unlikely, given that it looked like most people weren't exposed to any of these tweets, that those who were exposed were people likely to be heavily partisan anyway, and that there was tons of other information on Twitter, to say nothing of information elsewhere. And so, we think this was just a small, small fraction of the exposure that people were getting to information about this election.

As political scientists, we don't tend to think it's very easy to change people's votes anyway. They're being exposed to tons and tons of stuff. It doesn't seem likely to us that this incremental exposure to some of these tweets on Twitter made an impact. But that's one small piece of the Russian foreign influence attempt, and we can't say anything about the other parts of it. So, that's the direct point I want to make.

The second point is quite different, and it's the point we try to make in the conclusion of the paper. This is again why we think the paper is super important. If the intended direct effect of this Twitter campaign was to make people less likely to support Hillary Clinton and to make people more polarized on political issues by exposing them to tweets, we're suspicious that the campaign was very effective in that regard.

However, you have to remember, what's been concluded by these Senate committees is that the goal of this foreign influence campaign was to sow division in American society. Now, consider the extent to which this foreign influence campaign, after it was discovered, led people to question the legitimacy of Donald Trump's election in 2016, and the extent to which it potentially led people to think that hacking a US election is something that could be done relatively easily by a few covert people sitting in St. Petersburg.

Fast forward to 2020, where we get this campaign being run by Donald Trump to tell people that the results of the 2020 election are fraudulent. Trump is telling people this with no evidence; we have all the court cases that rejected these claims. But the question I think we have to ask ourselves is, what's the impact of having spent four years before this happened talking about these previous Russian foreign influence attempts?

And so, one thing we're worried about is that when Trump supporters come out and say, "Oh, well, Trump said the election was fraudulent, therefore I believe it was fraudulent," and people come back and say, "But there's no evidence. How could you change a US election? There's no evidence along these lines," people can turn around and say, "Well, but you were saying there was evidence that the Russians had changed the election result before."

So, there's this weird potential, what we call an indirect effect: the Russians don't necessarily have a direct effect with these tweets on increasing polarization in US society, but by convincing people that they did have an impact with these tweets, they indirectly end up increasing polarization in US society, because it feeds into this narrative about the fragility of US elections and how easy it is to hack an election.

And then when Trump comes along and says, "Oh, no, we had people from fill-in-the-blank country who hacked the voting machines and did this," it resonates in some way with people. This is only speculation. I don't know if this is the case or not. In academic papers, we often raise issues that we think are good questions for future research.

But I think this is why it's so important that we try to set the record straight about what did and did not happen in 2016, because there are consequences and costs to having large numbers of people believe that it's easy to change the outcome of US elections. And this is not going away, because our elections, at the presidential level anyway, are so close right now, look like they'll continue to be close, and will continue to put us in situations where outcomes come down to small margins in particular places.

I mean, I hope I've done an okay job explaining that, but that's what we were trying to get at indirectly. Now, putting my Russianist hat on, I do not think the Russians were this clever, that Putin did this and meant to get caught so that they could then spread this narrative that they had had this big impact. I don't think that at all. But there are unintended consequences of things in international politics.

And it may be that by convincing Americans that the Russians had this impact on the election in 2016, that's actually accomplished the original Russian goal of exacerbating polarization and making people less trusting of democratic institutions.

Justin Hendrix:

And I'll just say, again, you stated that that piece of it is speculation. There's no math in this survey that would support that conclusion; that's just your analysis.

Joshua Tucker:

No, and this is what we're laying out. I mean, we tried to be incredibly careful at the end of the paper with all sorts of caveats, and to think through lots of different ways in which this Russian foreign influence campaign could have influenced 2016 in ways that we were not measuring, and might have had longer-term effects, which is what I've just been trying to talk to you about now.

Justin Hendrix:

I do feel like serious people who have paid close attention to the Russian interference effort in 2016, including close attention to the Mueller investigation and to the work of the Senate on this subject, would have a hard time arguing that, "Oh, the IRA potentially changed the outcome of the 2016 election." But it does seem still unknowable whether the Russian effort as a whole potentially had some determinative effect.

I don't know that we'll ever know that. But you mentioned Kathleen Hall Jamieson. I won't get into all the various aspects of it, but on October 7th, 2016, John Podesta's emails started to appear literally hours after, or even, I don't know, within a very short time…

Joshua Tucker:

And that's not what we studied. That's absolutely not what we studied.

Justin Hendrix:

…right after the Access Hollywood tape. Kathleen Jamieson is very good on the reality that, for instance, James Comey relied on forged documents, Russian disinformation, in the way he framed and talked publicly about the Clinton server investigation. There are so many little aspects that drove news coverage and news cycles for days, for weeks, for months in the United States that were tied to that Russian disinformation campaign.

We'll never be able to parse out ultimately that impact.

Joshua Tucker:

Yeah. And by the way, putting on my Russia hat for a second: this hacking and releasing is a tool that goes back a long time in Russian domestic politics, back to Soviet days. This is the same tool.

No, that's exactly correct, Justin. And this is what makes releasing this kind of a study difficult because we are trying to say as clearly as we possibly can, we are looking at one piece of this campaign.

We are looking at the social media piece, and we now have more evidence in the public domain than we had before. We know about the concentration. We know who was exposed. We have a relative sense of the magnitude compared to other sources, and we have this preliminary, suggestive look at the relationship between being exposed and changing attitudes and opinions in this direct manner. None of that speaks to anything you're talking about here.

And so, this study should never be interpreted as saying, at the strongest extreme... I mean, this is just a complete misinterpretation. There's nothing in this study that says the Russians didn't try to interfere in the US 2016 election. So, I've seen people who've said, "Oh, this blows up the myth that the Russians tried to interfere." No. We didn't study that. We didn't look at that at all.

We actually start from the premise that the Russians did try to interfere and generated these tweets that Twitter captured and gave to researchers to study. But there's also nothing we can say, nor do we try or want to say, about what the impact was of other things that went on in this election. And the other piece of this, which is why we're cautious around the correlational language: our best guess is that, if the Russians' goal with these tweets was to change people's attitudes, they were pretty unsuccessful. That's our best guess.

But when you get into the question of what determined the election result, this was a super close election. A tiny fraction of people changing votes could have changed the outcome. And so, nobody can say anything definitively; the weather in Wisconsin might have determined the outcome of the election. There are just so many things in play here.

And I think, as you said, it's an unknowable piece of this, because we can't go back and rerun the election. We can't go back and try it out and see what would've happened with different things in place. So, there's always going to be speculation about it, and there'll always be arguments about the relative importance of things. I think what we've done is give a little bit more evidence to everyone who wants to talk about this influence campaign.

And we have shown what was happening in this one piece of the campaign. This study is not part of a larger attempt to measure the effect of the entire campaign; we can't do that. But up until this paper was published, we didn't have any estimates of who was exposed to this campaign on Twitter. So, we know a little bit more than we did previously.

We don't have the answer to the entire question, but we know a little bit more than we did previously. And I think that's useful and that's what we try to do as scientists. But it's challenging when you do scientific research that clearly has implications for people's partisan agendas. This is always a challenge for scientists. Once we put research out into the world, it's hard to control what people are going to try to look at.

And so, all we can do is try our best to be clear, which is why I'm here talking to you today, why we have put out multiple tweet threads about this, and why we were very careful at the end of the paper to write about all those things. And when you look at responses online, you can get a sense of which people have read the paper and which people have read someone tweeting about a report they saw on the news about the paper.

I would encourage people who are interested in this to go read the paper. If they don't want to read the paper, they can look at the tweet thread that we put out at the Center for Social Media and Politics, or the one I put out on my own Twitter account. They can also check out our description of the paper on the Center for Social Media and Politics website, which covers, in a much more concise format, a lot of the stuff that I've talked to you about today.

Justin Hendrix:

You did have one particular reviewer who was concerned that lay readers would misinterpret these results, that they would not recognize where the evidence might be limited or imperfect and would take it too far. And perhaps, for a paper like this, there's no way you could have ever completely addressed that concern or gotten around it.

But I do just want to... you've mentioned generally people out on the internet responding to this. But there are certain characters with big megaphones, people like Glenn Greenwald, who have called this broader intrigue around Russia, "Russiagate," a conspiracy. He called it the most "deranged and unhinged conspiracy theory in modern times."

And he's cited your work, along with others', as supportive of an argument that journalists, scholars, think tanks, and others who have concerned themselves with Russian interference in the 2016 election were essentially misleading the American public and engaged in a kind of fraud. What do you think of that?

Joshua Tucker:

I mean, I think I've tried to say it like five times over the course of this podcast right here, which is that our research does not speak to any of that.

So, first of all, I want to be super clear. As an American citizen, I think a foreign country attempting to interfere in our domestic elections is a national security threat and should be treated as such. Whether it's effective or not is irrelevant. It is a national security threat. We should react to it. We should figure out what's going on. We should try to neutralize it, and we should try to make it so that foreign actors don't do that in the future.

And we should work to make all of that possible, because it is a threat to our national security. And when we talked earlier, Justin, about the indirect effects: having a state that says, "We are going to take it seriously if foreign actors try to interfere in our elections," and then following through on that, helps increase faith in the election process.

And what we have seen in this country is a deterioration of faith in the electoral process, primarily on the right, and this is a real threat to the quality of democracy in our country. It's a real threat to the sustainability of democracy in our country. We have a new post-2020 poll out, from something called the American Institutional Confidence Poll that I do with Jonathan Ladd and Sean Kates, showing that Republicans are losing confidence in US institutions.

It's a national security threat whether it's effective or not. Period. We should be addressing it as such. Whether the Trump administration... sorry, whether the Trump campaign had illegal contact with foreign operatives in an attempt to influence the outcome of the US election, that has zero to do with our research whatsoever. Those are separate issues.

Whether there were laws broken, whether this is a threat to the integrity of the election, these are totally separate issues. And then, the thing that we've mainly talked about over the course of this podcast: our paper is not the last word on whether or not the Russian foreign influence attempt in 2016 had an impact on the outcome of that election. As you said, most of that is unknowable.

And there are arguments that people can put forward about why it may have had an impact. Our work, I think, takes a little piece of this that previously was unknowable and now says, "We know something about this." And I think that's the importance of science. I mean, what the reviewer said about being concerned about how the study would be picked up by people in the mass media was very prophetic on the part of the reviewer, but we realized that this would be the case too.

But as scientists, we can't make decisions about whether to release the results of scientific studies based on how we think people might respond to them. All we can do is try to control, to the best of our ability, the way we communicate them. I mean, this may be a subject for future research; it's possible that we can get better at this. We tried to do this.

We were ready with tweet threads that went out as soon as the paper did, to say what the paper said and what it didn't say. The tweets that I had about the caveats, those have been retweeted a lot. Have they been retweeted as much as Glenn Greenwald? No, of course not. I don't have millions of followers and stuff like that. I think this will continue to be a challenge for our entire field.

But it's a challenge because we're doing research, I think, that's relevant. You don't have to worry about this when you do research that the public doesn't care about, and I don't think we want to say that scientists shouldn't study things that the public cares about. And of course, as scientists, we can't make decisions about whether to release our findings after we find out what the findings are; we know the problems that creates for the development of scientific knowledge, if you sit on certain findings and release others.

Again, it goes back to the earlier discussion we were having about data access. That's what we're always concerned about when we're dependent on platforms for releasing the results of research: we don't know what they're choosing to release, and those sorts of things. So, yes, it was something that we thought about. As you can tell from the end of the paper, we were super careful about trying to put these things out there.

We've tried to put them out into the public. I'm here talking to you today about this. We will continue to say what the study shows and what it doesn't show. But hopefully, after the brouhaha dies down, as it always does in the first week after a study is released, the lasting legacy of this will be that when people think and write about foreign influence campaigns, we have some more evidence.

We have another piece, a small piece, but another piece of this story that we know something about that we maybe didn't before we had this study. And I believe, though I can't keep the reviewers straight, that same reviewer said the descriptive part of the paper was their favorite part. And I think that is the lasting value. We didn't know this before. We did not know how exposure played out.

And I think this is a real contribution to our understanding of this particular foreign influence campaign and probably to foreign influence campaigns generally.

Justin Hendrix:

In fact, that reviewer summed it up with the statement, "Overall, this paper is important, informative and likely to be hugely influential," and it appears to be on its path toward that. We can have another conversation at some point about the reception of scientific papers on social media and in the broader popular press, an issue that has been particularly acute around COVID and certainly around these issues. So, perhaps we'll come back to that again in the future.

Joshua Tucker:

I suspect we'll see the emergence of important research in that regard. And maybe it's already out there and I just don't know about it yet. But I suspect we'll see people looking into this more and more to try to figure out how to do this. At the Center for Social Media and Politics, it's not what we do, we don't study scientific communication, but we'll certainly be receptive to whatever lessons other people can offer about how to go about scientific communication. And thanks for ending with the kind words about the paper. I really appreciate it, Justin.

Justin Hendrix:

Thanks, Josh.


Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...