
Evaluating Social Media's Role in the Israel-Hamas War

Justin Hendrix / Jan 7, 2024

Today is the three-month anniversary of the vicious Hamas attack and abduction of hostages that ignited the current war in Gaza. Just before the New Year, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) published a report titled “Distortion by Design: How Social Media Platforms Shaped Our Initial Understanding of the Israel-Hamas Conflict.” This week, I spoke to the report’s authors—Emerson T. Brooking, Layla Mashkoor, and Jacqueline Malaret—about their observations on the role that platforms operated by X, Meta, Telegram, and TikTok have played in shaping perceptions of the initial attack and the brutal ongoing Israeli siege of Gaza, which now continues into its fourth month.

“Evident across all platforms,” they write, “is the intertwined nature of content moderation and political expression—and the critical role that social media will play in preserving the historical record.”

What follows is a lightly edited transcript of the discussion.

Emerson Brooking:

I'm Emerson Brooking, a Resident Senior Fellow at the Digital Forensic Research Lab.

Layla Mashkoor:

I'm Layla Mashkoor, an Associate Editor at the Digital Forensic Research Lab.

Jacqueline Malaret:

I'm Jacqueline Malaret and I'm an Assistant Director at the Digital Forensic Research Lab.

Justin Hendrix:

And you are all three authors of the "big story" published by the Atlantic Council on December 21st, just before the new year, on how social media platforms shaped our initial understanding of the nearly three months of this conflict that, of course, started early on October 7th, local time there.

Let's just step back, before we get into some of your findings, and talk about your method for producing this report. What happened on October 7th in your lab? How quickly were you able to begin to track phenomena related to the Hamas attack? And then ultimately, the Israeli response to it.

Layla Mashkoor:

I'm based in Dubai, so I was online and watching all of this unfold, essentially in real time. And I'd actually blocked off that morning to have a quiet morning, just to do some reading. And then I just saw my phone blowing up with notifications. Twitter started going crazy.

And then all of a sudden, these little tricklings started to come out about what was happening and unfolding at the border between Gaza and Israel. Immediately I went to Telegram, because that's where all of this information was starting to come out. And I started documenting, trying to create a monitoring system right away, to capture the key channels where things were taking place, the wider network of channels that was further amplifying those claims, and then, of course, to track the movement of information from Telegram onto other platforms, primarily X.

But in those first few hours on October 7th, everything was really centralized on Telegram, in terms of trying to understand what was happening. And that was mostly coming through the updates directly from Hamas, via their al-Qassam channel on Telegram. And so all of this really unfolded on Telegram, the key space where we saw those first tricklings start to emerge.

And then, in terms of our methodology, I think on day one we were focused on tracking and monitoring: documenting every instance for future geolocation and future security research, noting where we were seeing flashpoints of activity, and trying to archive any content that was coming online. And then of course, putting that all into basically one giant document for further analysis as things continued to unfold.

Emerson Brooking:

In many ways, this question is the story of the piece. I started work a few hours later; I was traveling in Istanbul. As is my habit, I turned first to Twitter, now X, to try to figure out what was going on. And so my entry into monitoring the conflict was seeing the first images and video of the attack being taken from the al-Qassam Brigades' Telegram and redistributed on Twitter, alongside the first Israeli government statements.

And then, crucially, I began to feel like I had some grasp of what was happening as I saw Americans begin to log on, people from later time zones, and then saw how different political elites were processing or reprocessing these events.

Jacqueline Malaret:

And I can speak to that, since I woke up to the news in Washington, D.C., and like my colleagues, I obviously first jumped to X and some Telegram channels. But I also opened TikTok. And if you're unfamiliar with how the TikTok app works, it's an auto-playing feed, so in many ways there was almost no conscious effort to monitor. It was very much just open the app and go, with a tap of your fingers. So I similarly tried to archive and document that process.

Justin Hendrix:

DFRLab is certainly no stranger to breaking news events and to setting up tracking operations like the one you've described here around conflict. I want to step back just for a second before we get into some of the things that you've observed about what happened on October 7th and subsequently. Just talk a little bit about the, I suppose, historical context of social media in the Israeli-Palestinian conflict and in the region.

Layla Mashkoor:

When it comes to the historical situation here, Facebook has always been one of the key players, in terms of the platform that's being used in tandem with activities that are happening on the ground. Going back to the 2014 war, and to 2015 and 2016, we've seen several instances where activities on the ground were playing out in tandem with activities happening online, primarily on Facebook, because that platform has a very strong presence in the Middle East.

And right now we've seen that evolve to include WhatsApp, and Instagram, and other platforms, but Meta, overall, has a strong foothold in the region. And so going back to 2021, for example, that's where we saw the first strong inklings of controversy emerging over how Meta was handling content moderation decisions surrounding the Israel-Palestine conflict.

And in 2021, it received a lot of backlash because it was blocking and restricting hashtags. There were reports of accounts being restricted and having limitations put on them. And a lot of this was chalked up to errors, or bugs, or glitches. But what it meant in real time was that as raids were happening on a mosque, people were limited and unable to post what was happening on the ground, and they were restricted from sharing their reality on social media.

And that caused a lot of backlash for Meta. In response, it took steps to try and address that backlash. And it brought in Business for Social Responsibility (BSR) to essentially run a sort of audit on how its policies and practices played out, and the impact that they had.

BSR found that there was over-enforcement against Arabic-language content. What that meant was that Palestinians were limited in their ability to express themselves and to organize and communicate online. And to address those findings, Meta essentially took 10 of the 21 recommendations under consideration.

What's really interesting here is that in the weeks before October 7th, in September, Meta had released an update on how it was implementing those 10 of the 21 recommendations. And what it had said was that many of these recommendations were in progress and set to be implemented in 2024.

And so that's quite interesting, because the platform had a head start: it knew that a lot of the issues it is facing right now would emerge, issues that often come to the forefront in this conflict, and it had a head start in trying to mitigate those risks.

However, most of those strategies were not actually in place on October 7th. And so it's quite interesting to see how it was better prepared in one sense, to understand how this might play out, but on the other hand, slow to implement the recommendations, and unable to meet the moment right now, in terms of these really difficult moderation decisions.

One of the recommendations, for example, was around creating policies when it comes to content that praises or glorifies violence. That has been, obviously, a very key element of the moderation decisions here. And so had that been implemented earlier, it'd be really interesting to see if or how things might've been different.

Justin Hendrix:

I suppose it was in 2021 that we started to see Facebook implement these special operations centers, setting up particular operations groups to look at the conflict in Israel and Palestine.

And what I understand from the BSR report, I believe, and also from activists on the ground who continue to talk about these types of problems right now, is that one of the main issues is the big asymmetry in resources between Israel and Palestine, in terms of people who are paid to sit and actually report content to the platforms, or flag it for potential removal. You've got a much larger and more professional operation in Israel to do that, versus the situation perhaps in Palestine.

Layla Mashkoor:

Yes, so this Israeli cyber unit is quite interesting, as we don't know a lot about how exactly it operates or about its relationship with Meta. What we do know, from this specific conflict, is that Forbes reported the cyber unit saying that its requests to platforms (Meta, X, I don't have the comprehensive list in front of me, but all the major social media platforms) had increased tenfold. Most of those requests were being sent to Meta. And Israel had also said that 95% of those requests had been successful. So that is quite telling.

But we don't actually have any mechanism for fully understanding the scale of the requests, the nature of the requests, or how Meta determines whether those requests are acceptable or not. Based on the words of Israel alone, it seems that those requests are very often successful. But there's a lot of opaqueness surrounding how this exactly operates, and there doesn't appear to be a comparable system on the Palestinian side: no counterpart through which Israeli content is mass-reported or mass-surveilled in that same way for reporting to social media platforms.

Emerson Brooking:

Something else we've seen in this conflict is, in addition to formal government requests, in addition to the Israeli cyber unit, there's been an unprecedented activation of the Israeli tech sector and civil society. I've seen nearly a dozen different reporting pipelines, which don't appear to have any direct government affiliation, although in many cases they appear to be adjacent to the government.

And these are fora where Israeli activists are able to coordinate their mass reporting of some content or their mass promotion of other content. And, as Layla observed, these are mechanisms which are not available to Palestinians.

Justin Hendrix:

Clearly things have changed at X over the last year, certainly since the last particularly hot moment in the Israeli-Palestinian conflict. There have been lots of headlines, which your piece suggests are correct, saying that the site is experiencing a sort of misinformation crisis.

Emerson Brooking:

I took the lead on the X study, and look, no social media platform handles war especially well, but X was uniquely poor in this conflict. And it was uniquely poor because of specific policy decisions that Elon Musk had taken since he assumed control of the platform a year ago.

The removal of a reliable verification system meant that anyone who was trying to make a splash could purchase one of those check marks, change their avatar to look like a news organization, and begin spreading either raw war content harvested from Telegram or just spurious claims chasing clicks.

And there was extra incentive to chase clicks, because another major policy Musk had introduced was the direct monetization of views of individual posts on the X platform. So now you had new networks of accounts, which were either serving as poor OSINT aggregators or just pretending to be OSINT aggregators, that had a direct commercial incentive to write the most inflammatory content and spread it as widely as possible.

And on top of all of that, the transparency of X had been reduced significantly. The Twitter API was probably the best in the business, but that's long dead. Many research projects which were possible during the Russian invasion of Ukraine could not be replicated in this conflict. So you had an unprecedented level of misinformation, and you had remarkable opacity in understanding where any of it was going.

Justin Hendrix:

You also point to the interaction that Twitter has had with European officials following the October 7th attacks, and the spread of graphic content and other content that some European officials have suggested may be illegal.

Jacqueline Malaret:

I'm happy to step in on this, and this is also stepping a little bit away from our report, into a different piece that Rose Jackson and I authored on how the European Commission was responding to the outbreak of conflict in the context of the implementation of the Digital Services Act.

That could probably be an hour-long podcast on its own, but a quick summary is that they're currently going through this massive implementation phase to suss out exactly how the Digital Services Act, which is now in force for all social media companies, not just the very large online platforms and the very large online search engines, will take shape.

So I believe it was on October 10th that Commissioner Thierry Breton sent out a letter, via X, to X itself, implying that there would be some sort of enforcement action taken and reminding X of its due diligence obligations under the DSA as a very large online platform. But it was confusing from a letter-of-the-law or tech policy standpoint, because tweeting out a letter is not part of the Digital Services Act process.

Those letters, which were also sent to Meta, TikTok, and YouTube, I believe, were then followed by a formal request for information to all of the different platforms, which is a part of the Digital Services Act process, and is a way for the Commission to request additional internal data and internal metrics from the platforms themselves.

It can also be the start of a formal investigation, which is how you get on the road to levying an actual fine for not being in compliance with the Digital Services Act.

But we make two points in that piece. One is that, for those of us working in civil society and not within the platforms, we can really only infer, based on what is posted on X or what is posted in official statements from the Commission. We don't have insight into how exactly the Commission is communicating directly with platforms.

And then the other really important point is, as these actions are taken in the implementation phase, they'll in some way inform how the DSA is leveraged into 2024, which is going to be a huge year for tech policy with European elections coming up, artificial intelligence and all of these issues coming to an inevitable head.

Emerson Brooking:

And just to zoom out, those letters from the European Union especially captured just how much the ground had shifted, and how different things looked with Elon Musk in charge of X.

During the Russian invasion of Ukraine, Twitter was early to put out statements about measures it was taking to protect its users, and then a much deeper dive into how it was adjusting its wartime content moderation policies.

By contrast, X was essentially radio silent until those DSA letters were posted online. And then the first platform response, and the only platform response for a long time, was just a few screenshots of statements which X was essentially making under coercion.

Justin Hendrix:

Let's talk about TikTok. That's one of the platforms that you look at here. Consider it also in the general context of overall distrust around that platform and what its motivations might be: whether true or not, there are a lot of folks out there who think that perhaps there's somehow a heavy hand from the Chinese government trying to use TikTok as a platform to manipulate global discourse. What are you seeing on TikTok?

Jacqueline Malaret:

In terms of mistrust on TikTok, you can break that out into two larger tracks of conversation that we get at in the two separate sections of our piece.

The first is just from a researcher perspective. TikTok shares the same problem as X, in that it is not a very transparent platform. It is hard to analyze particularly rapidly from an OSINT perspective. I think that's a challenge that we as researchers face.

I also think that opacity is what leads to distrust, first from TikTok's own users. As Layla mentioned, users adopt alternate phrasing, such as "un-alive" or "SA," to describe violent actions taken during a conflict, and that behavior is not unique to this conflict or this political event. It's been a longstanding trend on TikTok. So there is a high level of distrust among people on the platform towards the platform, and they feel a lack of accountability from TikTok, I would say, in terms of how those content moderation actions are taken.

I also think that opacity, along with an unfamiliarity in some instances with how the app works and how hashtags are measured, is fueling a lot of the bad-faith analyses that we see in the US political press. There was a lot of coverage of what hashtags mean on TikTok, and that became a major flashpoint that in turn reignited political discourse in the US around a TikTok ban, or the app's ostensible ties to China.

So I would separate that sort of distrust into two separate tracks of issues, all stemming from the fact that it's really hard to keep track of what is or is not going on on the platform.

I would just say that TikTok's monetization is on steroids relative to other social media apps, and particularly the legacy apps provided by Meta, YouTube, or X now. Setting aside the general suite of creator tools, like affiliate links or brand sponsorships, which typically are not good ground for posting political content because brands do not want to get involved with it, TikTok's Creator Fund, as well as its ability to monetize live matches, or live streams, and to monetize filters, has really set the platform apart in this conflict as a way to generate income by posting content on both sides.

Justin Hendrix:

Can you maybe just give me a couple of examples around that? For some of my listeners that may not be big TikTok users, what types of phenomena are you seeing? How are people making money off filters or live streams?

Jacqueline Malaret:

Live streams, particularly. We mentioned this in the report: live matches allow two users to square off in a competition, and what we've observed in this conflict is that one user will be draped in an Israeli flag and another user will be draped in a Palestinian flag, and they will encourage TikTok users to spend money to send digital tokens to one creator or the other. And whichever creator makes more money over the course of that match ostensibly, somehow, represents popular support in the conflict.

And obviously, that's not true; it's just an enrichment scheme, or almost a scam. In terms of augmented reality or virtual reality filters, TikTok's Creator Fund allows users to make money from filters that they create and upload. If other users then use those filters in videos, the original user will get a portion of the income.

There has actually been an explosion of filters in support of the Palestinian cause, where users can almost play games. The TikTok app will track your eye movement and allow you to move a little avatar across the screen. And every time you play that game and upload a video, the original creator will get a kickback in revenue. So those are a few examples of new forms of monetization that are playing a role on the app.

Emerson Brooking:

In many ways, the experience of the war on TikTok really feels like a different reality. There's this air of unreality to it. I think it's partly due to TikTok having the strictest graphic content rules and regulations, and the fact that it doesn't have quite the same newsworthiness exceptions that even Meta platforms do for the sharing of graphic content. So a lot more discussion of the war is firsthand testimonials, or people filming video essays, or responding to each other, or these kinds of insane gamified tools, all being used by a community of much younger users who have very different politics and very different political contexts.

Layla Mashkoor:

I do want to say that the distrust Jacqueline is getting at is a key component of users' relationships with platforms, especially surrounding this conflict, and of how they behave and the actions they take on those platforms. What we're seeing in the case of Meta, especially, is high levels of distrust. And the perception that there is censorship taking place is fueling people's behaviors and actions. Whether it is accurate or not, there are claims that, "Oh, if you update the app, it will suddenly bring in all of these limitations or restrictions on your ability to use the app," or, "If you add this emoji, an Israeli flag emoji, your reach and your engagement will be better."

And so there are all these sorts of ideas coming to the forefront for people who are trying to understand: how does this platform work? How can I make sure my content is viewed and not taken down, especially when it is legitimate content? And in trying to answer that question, a lot of falsehoods and a lot of speculation emerge. What is needed to cut through that is transparency and specificity from the platforms, in terms of what the limitations are and where the red lines are.

Whereas right now, and this is something we mentioned in the report, the system essentially hinges upon tripwires, and users' ability to avoid those tripwires. And they're not sure where they exist. They're not sure which hashtag, which phrase, which comment, which emoji will lead to their content potentially being restricted, hidden, or removed.

And in that space of confusion is where I think a lot of this perception issue emerges, where people are not sure how to use the platform, and then conversations emerge about going off-platform. We've seen a lot of people, journalists from Gaza on Instagram, trying to move their audiences onto Telegram, for example, because of this frustration or distrust with how the platform operates.

Justin Hendrix:

That brings us to Telegram, and I do want to spend a minute on it, because I suppose it is the least moderated, or least concerned with moderation, of the platforms that you looked at here.

What in particular has been the role of Telegram? Layla, you mentioned, of course, that the first reports of the attack on October 7th emerged there. You mentioned, of course, that Hamas uses Telegram as an official communications mechanism to get out its messages. Who would like to start on Telegram? Take us through its unique role in this conflict.

Emerson Brooking:

So especially for Hamas, Telegram was, and remains, its primary distribution mechanism for messages. As we write in our piece, the war really started on Telegram. The first imagery and videos were shared, I think first by individual fighters, but quite soon thereafter by the official al-Qassam Brigades' account, along with a pre-recorded message announcing the start of the operation, at a time when over a thousand Hamas fighters were already in Israel. And that Hamas Telegram channel then saw its user base grow by 50% in just a few hours.

What Telegram provides is this large audience and a relative certainty that you're not going to lose that mouthpiece all at once. Even then, over the course of those first few weeks, we saw plenty of instances where Hamas or other affiliated groups were advertising backup channels. They were advertising alternate fora where users could go if the channel was fully disabled. But that never really happened.

Instead, there were a series of messages from Telegram founder Pavel Durov. First he noted the significantly increased use of Telegram across the Middle East, and that they were rushing to put in place Hebrew-language features alongside the Arabic-language features. He made a fairly lukewarm statement that he was aware the platform was being exploited for terrorist propaganda, but that it was also serving valuable functions, so they weren't going to take any action at that time.

And to my knowledge, really the only content moderation that Telegram undertook, more than a month after the attack, was to limit the availability of some Hamas channels on smartphones, because of demands that had been sent to it by the iOS and Google Play app stores.

And I personally really go back and forth on this, because Pavel Durov, more than almost any founder, has great reason to be skeptical of government takedown requests. He was a co-founder of VK. He was forced into exile from Russia. He fought bravely to keep from releasing personally identifying information about protesters during the parliamentary protests in Russia in 2011. He has reason to be this free speech absolutist. But here, it does seem clear that Telegram was a valuable asset for the spread of terror propaganda.

Justin Hendrix:

Maybe that just opens us up to talk about some of the things that you get into in the question you pose at the end: can the platforms thread this needle? You say that the decisions these companies take have consequences for millions of people. Sometimes, of course, those consequences are life or death. And in this case, that seems very much to be true.

I don't know, how are we doing? If you step back, do you feel like this is more or less of a disaster for social media and its role in hot conflict than we've seen in the past? It seems slightly worse, perhaps, than the Ukraine-Russia conflict, Emerson?

Emerson Brooking:

So this is worse, in most ways, than Russia-Ukraine. And I think the crucial distinction is that at the outset of that war, still less than two years ago, there was essentially a bifurcation of online spaces.

Most Western social media companies were quite clear that they were supportive of their Ukrainian users. Most countries were supportive of Ukrainians. It was clear that Russia was the aggressor state. And Russia at the same time also was taking steps to cut off its own population from Western social media services.

So there was a war, but then there was also a splitting of these online spaces, which made wartime content moderation still an immense challenge, but it made some things easier to deal with.

By contrast, in this war, no one's getting offline. Everyone's still using these same platforms. Israel and the IDF have developed many different ways of using social media to broadcast Israel's public diplomacy positions, to publicize particular military campaigns. They're firmly online and they've activated a very effective pro-Israel diaspora.

On the other hand, we've talked about Hamas's exploitation of a few social media platforms, but there is that much broader Palestinian diaspora and pro-Palestinian political movement around the world, whose only recourse in this instance, because they don't have effective state representation, is to draw international sympathy, to use international organizations to draw publicity to their cause, and to draw attention to the immense amount of death: more than twenty-two thousand people in two and a half months.

They also desperately need these online services and they're not going anywhere. And in this environment, that means that any content moderation decision is almost immediately touching very difficult questions of free expression.

Jacqueline Malaret:

Just to add to that, and I think that was a really great answer, Emerson, my answer to how we are doing is, I agree with Emerson, at the moment not very well. And I think one of the reasons for that is that we're seeing the fallout of the decisions to essentially cut down or gut the trust and safety teams at these platforms.

In the past, we saw that content moderation decisions generally trended in the same direction, with maybe some outliers. There was an industry trend and standard, where things were heading in a similar direction.

But what this war has exemplified is that moderation is now incredibly fragmented across each platform. And so depending on where you're going and which platform you're choosing as your primary news source, your perception of the war, your understanding of what is happening, your understanding of how graphic it is, how violent it is, what is true, what is not true, all of that will be determined, partly at least, by the platform on which you choose to receive your information.

And because of this fragmentation in moderation, where each platform is taking different decisions at the moment, what Meta is doing, what TikTok is doing, what X is doing are all quite divergent. And that is leading to divergent understandings as well of this conflict.

Justin Hendrix:

I can't help but think though, stepping back from it all, that to some extent we would not know what was going on in Gaza, certainly, if it weren't for the existence of these platforms. That on balance, the original promise of social media to shine a light or to offer a voice where perhaps there might not be any, seems to have more or less been delivered. Would you say that's true?

Layla Mashkoor:

I think it's a very interesting question, particularly in the context of the Middle East, because we know that many of these platforms built themselves off of the Arab Spring and their ability during that time period to offer a space of expression and a space of free speech in areas where people were not able to speak freely sometimes.

And so, fast-forwarding from then, when I think social media in the Middle East was really viewed as this beacon of speech and this new public space where people could come together and have a public town hall, I think that distrust we spoke about earlier is now the predominant emotion. In the same way that there's distrust of governments, especially in the Middle East, where you have authoritarian governments and people might not trust their government to adequately represent them, that same distrust, I think, can sometimes be funneled into social media platforms, which were formerly seen as, oh, this could be a public square where we can voice ourselves, where we can have spaces of dissent in countries where the justice system, the legal system, does not permit that.

But I think that sort of bubble has burst. It's not really true anymore. I don't think people view things in that optimistic way. I think right now what you're seeing, especially from journalists in Gaza, is that their relationship to social media is about navigating obstacles, navigating spaces where they feel they actually have to bend over backwards and jump through hoops to make sure their message is delivered, viewed, and received by the public. And all of this at times when they are under bombardment as well.

Emerson Brooking:

And just to add, Justin, it's a fascinating question, but arguably we don't really know what's going on in Gaza. I feel that we know less today than we did a few months ago, and that's because of the continuing internet blockade in the region.

We can often forget that the internet is ultimately just physical infrastructure. It's either cell towers, which enable you to get a connection, or signals, which can be easily blocked or interdicted. And it's all being processed through devices that, of course, need electricity in order to keep working.

And in a small area that's seen that amount of urban bombardment, and now ongoing military operations, simply continuing to have internet connectivity is very difficult.

Now that doesn't mean that the conversation about what's happening in Gaza has lessened, but it means that there's, in many cases, less primary source material, at least from the Palestinian side, to draw on.

So everyone is talking about it, but there are also new opportunities for mis- and disinformation, in a way that I haven't seen in any conflict before.

Justin Hendrix:

I want to ask you just a couple of questions about one thing you just nod to in this report, but that I know you're probably following, which is the extent to which the platforms are facilitating the archiving of material, the archiving of potential war crimes or other potential human rights abuses that are happening in this conflict.

What have we seen? Do we feel that the platforms are doing a good job on this front? And I guess a secondary question: there was an enormous amount of activity at the outset of the Ukraine war to spin up tracking and archival activities and projects, and a lot of, essentially, receptacles for that were created within Ukraine, within the Ukrainian government. I know they set up multiple ways that citizens and others could share material, report war crimes, what have you. I don't get the sense that a lot of that type of infrastructure is available to Palestinians, or is well established in the region generally.

Is that right? What would you say about this? What issues have arisen over the last couple of months when it comes to the preservation of documentation and material?

Jacqueline Malaret:

I think the preservation and documentation of material is incredibly important right now, and it is not a huge part of the conversation, but it should be. At the moment, I don't think any platform has made clear any sort of archival process that it is undertaking. Anything that is being done at the moment is being done at a grassroots or CSO level, which can be just individuals, whether journalists, researchers, or us on our team, documenting, taking screenshots, and archiving everything they find.

And then, of course, that is being done on a more institutional level, with organizations like 7Amleh, which has set up its own reporting mechanism. But a lot of this is just individual and grassroots. There are no real mechanisms right now for archiving the content that's emerging. And that's especially important because the content is living in Instagram stories, which are only up for 24 hours, and in private Telegram groups. These are spaces where that content can easily disappear.

And so that question still remains to be answered. One interesting part of this is that we haven't really spoken about YouTube at all during this conversation. It hasn't really been a major player here. It has been a player, but not a major one.

And this is just my own speculation and thinking here, but I think it's quite interesting when you compare this to the Syrian War, where YouTube was the primary platform, where people on the ground were uploading their footage, uploading their video. And we saw YouTube make a lot of mistakes in how it handled all of the firsthand raw footage coming from the Syrian War.

There were mass deletions of videos that, again, fueled lots of distrust. And I think that distrust could be a factor in why YouTube is not really a major player right now in this conflict, because we've seen it handle archival footage from a war poorly before. And so the question that emerges is how, and which, platforms, if any, will take on this duty to archive content.

Emerson Brooking:

Yeah, the YouTube question really does demonstrate how much the broader tech landscape has shifted. A few years ago, obviously YouTube would've been part of a report like ours. These days though, it's all Telegram. That is ground zero for war-related content, and that's effectively where a lot of the stuff ends up being archived as well.

YouTube will play a role in the history of this conflict, but unfortunately, I think it'll be mostly bad video essays that radicalize the next generation.

Justin Hendrix:

As this conflict continues into 2024, with no real sign yet that it will come to an end in the next few weeks or perhaps months, what are you looking for? What phenomena are you following most closely? What types of changes are you hoping to observe?

Emerson Brooking:

So as we look ahead, I'm most interested in the intersection between attention and war. In the case of Russia's invasion of Ukraine in 2022, it felt like the entire world was paying attention for a few weeks.

But the common experience of that conflict really only focused on a few things. There was the initial imagery of Ukrainian bravery, the Ghost of Kyiv, the defense of Snake Island. I spent a lot of time looking at the most shared stories around that war. And the things that were shared most often were stories of zoo animals, say, being evacuated from the Kyiv Zoo.

Many people paid attention, but they came away with a certain mythology of that conflict. And then they tuned back out very quickly, because attention always comes with a half-life. Similarly, the entire world did tune in for October 7. Many have remained tuned in for the terrible bombing and suffering in Gaza that has followed, but people are also now beginning to tune out.

They are tuning out, having taken with them just really a few fragments of what will continue to be a difficult and ongoing conflict. And so as we go into 2024, I think myth-making around this war will begin to take center stage. This myth-making will intersect and collide, certainly with the 2024 US Election.

So even as the conflict itself is entering a different stage, I think that the war about the war will just be beginning online.

Jacqueline Malaret:

Yeah, going off of Emerson's point, in the report we discussed how content moderation policies are shaping user perceptions of the war, and we also touched on tech policy discourse as it's occurring around the world.

So I'll definitely be watching that. As we discussed earlier, there were the European Commission's statements on how platforms are responding to the crisis, and in the US it has reignited this pervasive debate about what to do about digital platforms and the internet at large. So I guess I'll be watching whether those statements actually impact the slow-moving arc of technology policy, or whether it will just be a flash in the pan, in terms of how we perceive and seek to govern these online platforms.

Layla Mashkoor:

I agree with everything Emerson and Jacqueline said, and I think that the last sort of piece of the puzzle there is, once this conflict does end, whatever the ending looks like, there also needs to be space for reflection and accountability, in terms of how all of this unfolded online.

Everything that's happening right now is happening so quickly. The mass of disinformation is immense, and I don't feel that there is much space to actually reflect on what is happening. When we look at the sheer number of wild claims and conspiracies, and then also all of the truths, all the legitimately horrible things that are coming out, and we try to discern fact from fiction, I think we are still in a place where we lack clarity about what really happened and about the extent to which certain claims are true.

And so, once there is an ability to slow down, because right now we are still in a very fast-paced environment, I think there needs to be the ability to reflect on what really happened and to try to bring clarity, whether that's through independent investigations or just having the luxury of time to better assess the facts. I think a lot will still emerge once this fog of war dissipates.

Justin Hendrix:

I want to thank the three of you for this analysis and this effort to perhaps do what you say, Layla: to pause, and reflect, and try to understand these phenomena, and perhaps put them in a larger context. And I hope to talk to you when, perhaps, we can look back in a more peaceful moment and reflect on what could be different.

Emerson Brooking:

Thank you.

Layla Mashkoor:

Thank you.
