Unpacking the Principles of the Digital Services Act with Martin Husovec

Justin Hendrix / Oct 27, 2024

Audio of this conversation is available via your favorite podcast service.

Martin Husovec is an associate law professor at the London School of Economics and Political Science (LSE). He works on questions at the intersection of technology and digital liberties, particularly platform regulation, intellectual property and freedom of expression. He's the author of Principles of the Digital Services Act, just out from Oxford University Press.

I spoke to him about the rollout of the DSA, what to make of progress on trusted flaggers and out-of-court dispute resolution bodies, how transparency and reporting on things like 'systemic risk' are playing out, and whether the DSA is up to the ambitious goals policymakers set for it.

Principles of the Digital Services Act by Martin Husovec. Oxford University Press, August 2024.

What follows is a lightly edited transcript of the discussion.

Martin Husovec:

My name is Martin Husovec. I'm Associate Professor of Law at the London School of Economics.

Justin Hendrix:

And Martin, you are the author of Principles of the Digital Services Act, which is just out from Oxford University Press. I'm holding a copy of this tome in my hand. It is some 500 pages, the reader should know, and in its hardback edition, lands with a thud. This is everything you need to know about the Digital Services Act, from the jurisprudence that forms the substrate on which it is built through to all of the bits and pieces and details of how this law works. Can you talk about, very briefly, how you came to write this and your general area of research?

Martin Husovec:

Yeah, thank you. Thank you, Justin. Yeah, so it's a very interesting mixture of 'zoom in' and a big 'zoom out' on the topic. The way I started with this is that I had been working on platform regulation for a while, but that was before we had regulation. So the kind of stuff I was really looking at was mostly how copyright law, or IP law in general, regulates platforms. So I spent a lot of time looking at injunctions in Europe, spent a lot of time looking at liability and these sorts of things. And so when the regulation finally arrived, I was quite deep in the debate. Many people who drafted the DSA were part of the debate before the law entered into force. So it was natural for me, plus I wanted to pivot from something else. At that point, we had a quite toxic debate about upload filters in Europe, and I was part of that debate, and I said, "I need some break and I need to do something fresh."

And then obviously, the DSA connected many strands of my research, because I spent quite some time not only looking at the regulation side of things and liability, but also freedom of expression. This has been one of my themes for many years. And so yeah, the DSA brought it together and I thought, "Okay, this is something where a lot of contribution can be made." Initially, the book project started as a project of two with my dear friend, Irene, at the European Commission, who is one of the people who co-wrote the book, but then we couldn't continue the project. It was a lot of work, as you can imagine, and the Commission has a lot of work. So I continued alone, slightly changed the soul of the project, and made it a little bit more academic in places. And yeah, the outcome is what you hold, but really, it started in a very different place. But I'm quite happy that I did it, although it took me two years.

Justin Hendrix:

Well, it's an extraordinary document and a textbook really, which I'm sure I'll have to figure out how to use with my students this spring. But I want to start off our conversation towards the end of the book really, in this section on principles. You write that, "The Digital Services Act is stylized as a law regulating digital services. But when we look at its ambitious goals, it is clearly a modern attempt to protect the Republic, i.e., the system of liberal democracy." You go on to quote Robert Post, a constitutional law scholar at Yale, on the unique threats to democracy that the internet poses. You say that you find his analysis compelling. When you step back from the DSA generally, do you think it is sufficiently ambitious to do what you say, to protect the Republic?

Martin Husovec:

Yeah, so I think first of all, the reason why I wrote it the way I wrote it is because we are at a very fragile moment where democracies have all kinds of problems, and those problems are not unparalleled, right? So we saw some of those problems in Europe before and many of the things feel very similar. We are also facing similar challenges to those we faced, say, in the 1920s and 30s in Europe. So it's written against that background, because the "protection of the republic" is a slogan that some of these laws in 1920s Europe adopted to actually protect the system of liberal democracy. They didn't succeed, as of course the problem always is that laws cannot protect the system entirely. They're just one of the pieces. But the reason why I'm looking at the DSA as part of a bigger struggle to defend liberal democracy is because clearly, the internet and the digital space have become an important area where we meet other people, where we exchange our ideas, where we deliberate, et cetera.

And the problem, of course, is that all the beauty of the internet, all the great things, the lack of editors, the fact that it's cheap and fast and you can communicate across geographies seamlessly, instantaneously, also bring all the problems. Right? And these problems require some tackling, and reliance on self-regulation, I think, has been shown not to work. I think the last 20 years provide proof of that. So the question then is, what do we do next? And I think the answer is we need some regulation. However, how do you do it? That's the difficult part, because regulating in this space is difficult. You can quite easily get it wrong.

And the thing that I liked about the DSA perhaps the most is that it is incremental. So you ask, "Is it ambitious enough?" I don't know. It might prove to be not sufficient, but I think it's good as a starting point. And I think the basic tenets of the DSA are a good incremental improvement upon the previous situation. One thing I would say, which clearly shows the DSA is incremental despite the fact that yes, it is an experiment in many areas, is that you can see we have preserved the consensus, or you could say the social contract, that pre-existed, and that is the social contract about liability. So the platforms are still not held liable for individual pieces of content unless they really are behaving egregiously, meaning they know and they do not act. If they uphold that part of the social contract, we're now just asking them to do more, and we are developing ways to ask them to do more, including in ways where they themselves come up with how to do it.

But I think that already shows you that there is an incremental logic to it, that we're building upon something that existed. We're not shattering the fundaments, and what I would suggest, as I argue in the book, is that those fundaments are super important, because you can mix up the two levels immediately, in which case you get a completely different result. So you can pretend to do what the DSA does, but frame it as a liability question for individual pieces of content, and you end up in a very different place, and some jurisdictions are thinking of doing just that.

Justin Hendrix:

Now I want to go back all the way to the beginning of the book and maybe just ask you another framing question about the purpose of the DSA generally. I quite liked this first figure that you provide in the book on how the DSA attempts to redistribute power between states, private actors, including the technology companies, and individuals. You depict this gravitational pull of technology away from the state and towards private power. The regulation essentially serves to pull that pendulum back, and in the middle of it all is the individual.

Martin Husovec:

I think this is, for me, quite key. I build upon a visual that was developed by other scholars; in this case, Jack Balkin has this famous article about free speech as a triangle. But what I tried to convey with it is exactly that regulation is not just empowering the state, as some people always interpret it. Depending on how it's designed, it can also empower individuals, and I think this is one of the key designs of the DSA. It's not just making regulators stronger, making some state institutions stronger, but it's also making individuals stronger, and we observe it through all these procedural rights that the DSA grants to individuals. Some people think the individuals are getting too many rights, but what we see is that this empowerment of individuals is something that happens through regulation because, frankly, there's no way to do it otherwise. You cannot empower individuals against a private actor in any other way than through regulation. So that inevitably means that you're also empowering a regulator, the state.

And I think the key difficulty in designing regulation in this space is how to do it, how to empower the individual while at the same time not over-empowering the state, so that the state does not become the new oppressor that simply supplants the previous oppressor, the private actor. But another underlying logic is, and this is again nothing new, if you look at the history of the Weimar Republic, it was clear that one of the concerns was also the concentration of private power and how that can curtail the freedom of individuals. So I think we're in a very similar situation from that perspective: even though these products are enlarging the power and freedom of individuals, they can also suppress them in important ways. And I think there's just no way to do it other than the state stepping in and trying to reallocate some of the power to the individual.

So I think the trade-off is there, and I see how different regions are grappling with this differently. Some try to over-empower the state and give nothing to the individual. I don't think that's the DSA blueprint. I think the DSA blueprint is that, actually, the individual is in the center, and yes, the regulator receives some important new powers, but those very often exist so that they serve the individual in the specific case. And I think this is the litmus test for how these laws are drafted, because many jurisdictions now claim to reproduce the DSA. I'm not always sure whether that's really the case.

Justin Hendrix:

I feel like that argument over who to fear more, the state or private power, is the fundamental argument in tech policy debates almost always. And how you feel in your gut about that, based on your personal experience and maybe how you've thought through these matters, often defines how you come down on lots of questions.

Martin Husovec:

Absolutely agree. And here, one important thing I always say is that we have to distinguish from whose perspective we are looking at this. Because of course, if you are active and use these tools that the providers give you in the US market, you will have a very different experience than you will have in a small European market, a developed economy, yet a small market where you're not too afraid of a local regulator. Obviously, I don't have to go further to all the countries where there's zero interest. So yes, if there's a huge commercial interest, you're going to get a lot and it might be superior, and you might ask yourself, "Why do I need the state to step in?" But that's not the experience many people get when they open these products in their own countries, because they are not the primary market, they're not the home country of these services, and I think that has to be acknowledged.

So I am completely sympathetic to all the concerns about state power. There's always that problem in any area we regulate. The state can overreach, can abuse, will abuse; we have to deal with that. But the answer can't be that we just don't ask for anything as a result, because that's disempowering as well.

Justin Hendrix:

A big focus of this textbook, of course, is on all the transparency measures of the DSA, which are significant and which also account for a lot of the compliance burden this particular piece of regulation creates. I want to ask you maybe just a big-picture question here about the theory of transparency as you observe it in the DSA. What does transparency mean in this regulation?

Martin Husovec:

So the way I see it is that one of the problems when you try to regulate in an area where you don't know what the right solution to the problems is, which by definition is the case here because it's moving very fast, is that regulators know much less compared to the regulatees. I think transparency is just a way to create a certain basis of knowledge and to allow for certain tracking of what is actually happening, so that you can do comparisons across platforms, across time, across geographies. So the way I look at the transparency obligations we have in the DSA, and surely we can discuss individual ones, is that they are a way for platforms to be tracked by the public, so that the public can see what they're doing. Now, are these transparency reports conclusive? Can we base some elaborate conclusions on them? Very often not, but they can actually be very helpful.

And I think a case in point here is the Statement of Reasons Database. There are all kinds of issues you can say we have with that database. Yet on the basis of that data, researchers were already able to show, with some descriptive research on big data sets, that these platforms are behaving very differently: how much they moderate automatically in the first instance as opposed to by humans, how fast they are acting, what the main caseload is. All sorts of things you can actually see from this, including that X is potentially taking less action than it used to. So all of this you can read from this quite imperfect database.

So the way I see these public-facing transparency obligations, not the data access for researchers, which I'll get to in a second, is as essentially creating points of signal which allow you to then track a little bit of progress, or lack thereof, across time, geography, and services. Now, data access for researchers, I think, is slightly different. I think that is literally built into the system so that we can learn about what is going on without empowering the state. There are many domains, I don't know, chemicals, where this type of data access would be granted to the state. The state would go in and get a lot of data, analyze it, and then make up its mind. Now, you don't actually want to do that in a space like digital communications, because it's incredibly politically sensitive and you actually don't want the state to be involved in that.

So what do you do if you cannot empower the state? Well, you try to empower someone else who is not a state actor, in this case someone who has expertise, that is, a researcher. In my view, researchers are empowered not just because they are capable of doing research, but because they're non-state actors, and it's a good thing that we don't have to rely upon a regulator to give us all the analysis of what is going on with the services, because that would obviously carry the risk of the state misusing it. So this is how I look at it: it's partly a deep dive, that's the data access for researchers, but it's partly just public tracking of what is going on that allows us to get some sense of progress. Where are we heading? What is working, what is not? So we can then drill down and see what is actually happening, and that's the same thing for the trusted flaggers. Their transparency contributes towards our understanding of how it's working on the ground or not. So I think these are just small data points that build the big picture like a jigsaw.
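To make the kind of descriptive comparison Husovec describes a little more concrete, here is a minimal sketch in Python of how a researcher might summarize statements of reasons from a local export of the database. It is illustrative only: the file name and the column names (platform_name, automated_detection, category) are assumptions made for the sake of the example, not the database's confirmed schema.

```python
# Minimal sketch: descriptive comparison of moderation behavior across platforms,
# assuming a local CSV export of statements of reasons with hypothetical columns
# platform_name, automated_detection, and category.
import pandas as pd

df = pd.read_csv("statements_of_reasons.csv")

# Share of statements per platform that report the content was detected automatically.
automated_share = (
    df.assign(is_automated=df["automated_detection"].astype(str).str.lower().eq("yes"))
      .groupby("platform_name")["is_automated"]
      .mean()
      .sort_values(ascending=False)
)

# Main caseload: the three most frequently reported violation categories per platform.
caseload = (
    df.groupby(["platform_name", "category"])
      .size()
      .rename("count")
      .sort_values(ascending=False)
      .groupby("platform_name")
      .head(3)
)

print(automated_share)
print(caseload)
```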

Justin Hendrix:

There are so many different directions we could go on this question, but one I want to pause on is around systemic risk and how we'll come to understand what that means. We've got, I understand, a set of systemic risk reports, the first ones we'll see, about to come in November. When those documents do come, how will you be reading them, based on what you understand of this overall picture?

Martin Husovec:

I would read them as a first draft that is very likely to be disappointing for most people involved, but I would read them really as a starting point. So for me, the question is what will these risk assessments look like in three to five years? How they look today, because they already exist, they're just not public, is to me somewhat less important. I think a lot of things related to the DSA are about incremental change. If you expected revolution, you're not going to get it. This is evolution rather than revolution. So I think it's the same with the risk assessments. Yes, it's revolutionary to expect companies to do this. They have to pay a lot of money for this, they have to think hard about this. It takes resources from other actions that they could be taking. Yet it's just a starting point.

So the big design decision behind the DSA is actually to put a lot of the cost related to uncertainty on the industry. And you can see this clearly when you compare it to the UK Online Safety Act. Under the UK Online Safety Act, a regulator is going to tell you what good practice is and spend millions and millions to draft that good practice. Now, the DSA's remit is obviously a little bit smaller in terms of the VLOPs when it comes to risk assessments, because the OSA's risk management is broader. The DSA is much more, "You are the companies, you've been dealing with these issues for a while, you have to figure it out and just show us what you did. Then we compare notes, look across, look at good practices, and maybe next year we have something to tell you about this."

So I don't think the Commission can really come into this and say there's just one way to do this. No, there are X number of ways to do this. And the uncertainty is actually the point. It's the fact that the companies have to figure this out, they use the resources, they try to do their best, and then you mark their homework. It's a way of co-regulating from that perspective. So I think it's really more like principle-based co-regulation. You give them the target and they come up with their part of the solution. Now, I can obviously expand on the question of systemic risks, which is your sub-question, because that's the organizing principle, and I think here there are somewhat different expectations about what systemic risks are.

I'm personally on the side that is skeptical that the term carries that much meaning. Some people think that systemic implies some threshold, meaning it has to be big, something will collapse. The problem with that thinking, which I think they carry over from finance, is that there's no one system. Hate speech is a different social problem from copyright infringement, from child sexual abuse material, or from something else. The only system connecting these things, I tend to say, is humanity, and that is a very difficult principle around which to organize your significance threshold. What systemic really means, and the DSA really tries to say different things about what systemic is, but for me, what systemic means is that it relates to systems. It's not about the individual piece of content. It's about how things come together, how the procedures, the governance design, and all these things come together. So I think that's the key contribution of 'systemic': we shouldn't be looking at individual instances, but at the overall, the aggregate.

I think it's actually not as important as it sounds because systemic matters essentially for the companies only when they want to figure out what they need to consider, what types of risks to think about, and surely, there is a concern that you don't think about something that should be systemic for you. However, if you're a company that has been operating and dealing with all kinds of problems as you go, then surely, all those problems that you had to deal with in the past are systemic from that perspective.

Now, one way companies can avoid the problem of missing something is by consulting. In fact, the single best way they can avoid someone telling them later, "Oh, you haven't considered this systemic risk," is by consulting NGOs and asking, "What do you consider to be a systemic risk on our platform?" They tell them, they think about it, and if the NGOs miss something and the companies miss something, then I think a regulator has a very difficult time coming later and saying, "Oh, you haven't thought about this," perhaps because no one thought about it at the time. So that's where systemic matters. Where it doesn't matter is what you need to do, how you need to act upon it, and that's a completely different question. Something can be a systemic issue, but you don't have to act upon it, meaning you don't have to change anything; you're already doing everything in a sufficient way. But that's a separate issue.

So I think the only place where it matters is the assessment part. Actually, a much bigger thing than 'systemic,' I think, is what counts as a new feature that potentially has a critical impact on the service. So if you want, I can expand on that one.

Justin Hendrix:

Indeed. I think that one of the questions here is also the role of science. You've talked about the role of independent researchers. Baked into this whole thing, of course, is that 10 years from now, we may have a great deal more science, based on the data that's made available through the DSA, and many of these fuzzy questions may be reduced to phenomena that we can understand.

Martin Husovec:

Yeah, I absolutely agree, and I think that's part of, in a way, the deal behind the DSA: you're trying to create evidence that can go in two directions. One is that what companies are doing is insufficient and a regulator can ask them to improve, but the other is that the existing laws are insufficient and a parliament needs to change the laws. I bring this up when people like to use the risk management system to create new rules about content, something objectionable they don't like, and they say, "The law doesn't tackle this efficiently, it's not illegal. A regulator should somehow step in," to which I say no, it shouldn't. In fact, the whole point is that it shouldn't develop new content rules, but the DSA is still helpful because it forces companies to think and map the risk, maybe to introduce some solutions that are not organized around content, that are much more neutral, generally applicable across the platform, regardless of the expression.

And if those fail, then that's a learning for the parliament to step in with its legitimacy and change the rules on illegality. That's what we can do and that's how we should operate this. We shouldn't now think that the Commission, in the DSA's context, will somehow have the wisdom to regulate the content on specific services. I think that would be wrong, and I've written about this in a paper called The DSA's Red Line, on what the Commission can and cannot do about disinformation. So I completely agree. I think the scientific consensus is key here, but also, there's a limit as to the legitimacy of what a regulator can do and how far it can go. But even if that fails, I think the fact-finding function of the DSA is crucial because it helps you create evidence for the parliament to act upon, and I think that's important as well.

Justin Hendrix:

Great. So I want to get into a couple of... well, what seemed to me the most interesting features of the DSA in terms of how they activate other actors, so not the state, not the technology companies themselves. You've already brought up trusted flaggers, and this is a very interesting phenomenon. You talk about the notion of trusted flaggers being a remedy for one of the original sins of the pre-DSA legal framework, which was its inability to distinguish between actors who notified platforms of harms that were taking place there, as you say, with due care, and those who were reckless while doing so. This phenomenon is happening across the world. People are begging social media platforms to address the thing that they're concerned about: "My community's being harmed, this disinformation is messing up this election, this particular campaign brought by a foreign actor is interfering in our state affairs." How do trusted flaggers solve this problem, and are there weaknesses in the trusted flagger concept?

Martin Husovec:

Most definitely. So indeed, the way I look at the trusted flagger, it's partly a way to remedy the shortcomings of non-regulation, which was essentially: you send notifications, the provider has its own risk estimations of when to act or not act, and there's zero way for them to be able to hold the notifier to account in case they're lying or just sending something sloppy, and this has been happening a lot. As I said, I spent a lot of time on copyright law in this space. This has been a systemic issue. So the DSA doesn't necessarily solve it by entirely getting away from this problem, but it has two approaches, sticks and carrots.

So the carrot is the trusted flagger. Maybe it's not a sufficiently sweet carrot, and I'm going to get to that in a second. But then there's the stick, and the stick is one of the provisions, Article 23, which says, "If you repeatedly send bogus notifications, your notification interface will be suspended." And that's something that companies could maybe have done previously, but they would've risked this somehow being used against them when it comes to the liability question. So in terms of the carrots, the trusted flagger system is supposed to be an idea that instead of having private arrangements between a particular notifier and the provider, you have a portable trust.

So you have a regulator who certifies you, and then you can take that trust that was conferred upon you and use it vis-à-vis any platform out there, in any member state out there, meaning you don't have to have previous dealings with them. You show up for the first time at Twitter, although previously you've only done work with YouTube, and they have to treat you as trusted because you have certification. And I think that's a key thing, that's one big benefit for the trusted flaggers. Yes, if they focus on one platform, maybe they have a nice little relationship with that platform, but they don't have these affordances, certainly not with respect to others and certainly not in other member states.

The second thing is that trusted flaggers are owed certain special treatment. Most people focus on the fast-lane aspect, the fact that their notifications need to be processed with more speed. However, trusted flaggers should also receive certain better treatment in terms of how they can communicate with the platform. The way I always think about this is that in IP, we had trusted flaggers for many years who would have special arrangements with platforms where, precisely because they were trusted, they would have the possibility to notify things in a more automated fashion, maybe even get better access to a certain interface created by the platform, sometimes even with capabilities to find stuff, and sometimes even with capabilities to remove things, which I'm not suggesting is required. So the special arrangements are an additional carrot that hopefully trusted flaggers can get out of the system.

Now, is it enough? You weigh it against the cost, and what's the cost? The cost is you have to be really clear on the governance. You have to be really clear about how you operate, where you get your money, who is pulling the strings, and you have to be transparent and track many things. So it's not a simple thing. On the other hand, I think the regulator should be super interested in having many trusted flaggers because they are the eyes and ears on the ground. They are sending notifications. They know whether a particular interface works or doesn't. A regulator is not going to do that. It can do it, but it shouldn't have time for that in most cases.

I think the problem that we see now, although I checked today and there are already 10 trusted flaggers, with certainly more coming up, the problem we might face going forward is that we don't have a sufficient financial incentive. That's actually one of the things I'm thinking about these days: how we could develop financial incentives for trusted flaggers. The simple way to do this would be by saying, "Okay, we'll give grants to those who are trusted flaggers, who have certification." Right? The problem I see with that is it's going to be the state giving you this money, and as you can probably understand from my earlier remarks, I'm always cautious with that.

So I think the ideal approach would obviously be that someone pays, most likely the platforms, but that the way the money is allocated is not by the decision of the state or its surrogates, but by the decision of individuals, the people, the users. Coming up with a system for that is a challenge, but I see it as one of the gaps. It's a little bit harder, I think, for trusted flaggers than it is for researchers, because I always say yes, researchers need money, yes, these projects are not simple, but doing an amazing project with a VLOP is the thing that can change your career if you're a social science researcher working on good data.

So do you have a good enough incentive to do that? Oh, hell, you do. So I don't think you necessarily need to have good financial incentives to start a project. Now, you might need them to run the project. I'm not saying that researchers don't need money, but I'm just saying that in terms of the intrinsic motivation to do this, researchers have a very strong one, which I'm not sure is there for trusted flaggers, which are going to be extremely contentious organizations for some people, because they dislike the fact that they flag their illegal content. And we see this already in Germany. In Germany, there's a big debate about trusted flaggers, where the far right is essentially framing them as censors of some kind, although they are just notifying illegal content that is actually criminal in Germany. I think these debates will exist. So that makes being a trusted flagger, in a way, not the most appealing thing to do, which is all the more reason why we should have more carrots for them.

Justin Hendrix:

They may run into that same problem that we've certainly seen, and seen litigated, here in the United States: that even when you engage in flagging material that is false or harmful in other ways, and even if you do that objectively, it is still in essence a political act. It still has a political effect, and I think that's probably difficult to separate out from this.

Martin Husovec:

Even if we would 100% agree on where the line is drawn, meaning what is illegal or not, the allocation of resources is a political act. So the fact that you enforce against one and not against the other is political, but that's exactly why the state should not be involved in this, even where the content is already illegal. This is why it's much better if the money actually comes from non-state actors, because is it illegitimate if I allocate my money to causes that I care about and other people allocate money to causes they care about? I don't think it is illegitimate. So it's, in a way, a fight of ideas, although in this case, even illegal expressions of those ideas. But yes, absolutely, it's political. The very allocation of resources is political, even if you agree on where the law draws the line. So absolutely, it's going to be controversial.

Now, can you accept that you will not enforce against content that is illegal just because it's controversial? I think that would be a huge mistake. Either we draw the line where we draw it because we think there's a good reason for it and it's constitutionally acceptable in our jurisdiction, and then we are serious about it, or we are not serious about it.

Justin Hendrix:

So another entity or set of entities created by the DSA that we're beginning to see spring into life now are these out-of-court dispute resolution bodies. I think a couple of new ones have sprung up just in the last couple of weeks, where we've seen some developments, certainly with the Appeals Center being birthed out of Meta's Oversight Board, that quasi-independent Supreme Court-like body. What do you make of these things? There seems to be at least a hope among some that there will be an economy here, and possibly even a cottage industry that will be sustainable. From the vantage of the United States, which of course has a laissez-faire approach to social media generally and none of this scaffolding, these ODS bodies seem strange.

Martin Husovec:

And I would suggest that actually, even in the US, you rely on them, because no one is as critical about the out-of-court dispute settlement that we rely upon for domain names. The UDRP system is out-of-court dispute settlement for that. Now, it works differently, I take that, and that might be one of the things where we have to see how it works out in this context, but I don't think it in itself is as illegitimate as people make it sound.

Justin Hendrix:

Fair enough, fair enough. There are other examples of out-of-court dispute resolution, but this is about disputes over content moderation.

Martin Husovec:

Yeah, it's about individual pieces of content. Yes.

Justin Hendrix:

“You've struck down my post and I'm angry about it.”

Martin Husovec:

Exactly. So first of all, the question is... Okay, so I should preface this by saying I was an early proponent of this. We did research a long time ago basically making the case for a system like this, an ADR system that acts as a counter-incentive to the natural incentive setup of the providers, which is to take down content when they are in doubt about its illegality. You take it down because you are simply cautious, you don't want to incur liability. So this was a way to counterbalance that, and in a way it also creates certainty for the providers, because if an out-of-court dispute settlement body decides in your favor, then you can't be blamed for not being careful enough in making your initial decision.

Now, what is in the DSA, the way the financing in particular works, is not exactly how we imagined this in our original plan, because in our original plan it was always the user paying, and the fee only shifted depending on who loses or wins. So that was the incentive. Right? If you make the right choices as the platform, you don't have to pay the cost, and if the user loses, then they lose the money. In the system that we have now in Europe with the financing, there is an emphasis on this being either for a symbolic fee or free of charge. In fact, so far, all the ODS bodies are essentially operating free of charge. So everything is paid by the platforms, which takes away the incentive for the platforms to improve directly, because regardless of whether they make a right or wrong decision, if there's an ODS decision, they will pay. And I think that is not a good incentive setup.

However, there are ways to deal with that. Certainly, it still pushes the platform to improve its decision-making and its persuasion, because in a way, the best scenario for the platform is to make the user happy so they don't go and complain to the ODS body. Now, it's not as perfect as a direct linkage, but it's something. So clearly, unlike with trusted flaggers, I think here we won't have a problem. Apart from the four that are certified, I'm aware of at least an additional six that are either in the process or close to the process of applying. And so I think there will be many of these actors, partly because the financing is so accessible.

Now, what will be the effects? Obviously, time will tell. I think because of the financing structure, clearly the regulators will have to pay closer attention to potential abuses. There are some checks and balances in the system, and I think most of the projects I've seen so far are really well-intentioned. And I think we can definitely deliver on one big promise, and that is that because the platforms now know their decisions will be reviewed externally, they are more careful in how they explain their decisions and how they disclose the initial rules, because if the disclosure of the rules is incomplete, then any decision based upon those non-disclosed rules is arbitrary. Right? So that's the easiest way to lose the case.

Now, if you don't provide a sufficient explanation, that's also arbitrary, because you haven't sufficiently explained why exactly you've restricted something. So I think that's a useful pressure point. Now, to your sub-question, is it worth doing this for everything, not just account restrictions, but also the blocking of a post, et cetera? I tend to agree with you that maybe we have gone too far on that, but that's the outcome of a political process. The initial proposal was much narrower, but as the scope of due process rights for counter-moderation expanded, this expanded together with it. So yes, we might need to have a debate in some years about whether even the fact that not just VLOPs but any online platform is subject to this is maybe going too far, or whether all these different counter-moderation decisions should be subject to this.

At this point, I would say it's a big experiment. I clearly see good corrective mechanisms behind this, but I think the financing will be the key thing driving some of the issues that the regulators will have to be careful about.

Justin Hendrix:

So in both these cases, it's a question of whether the economy truly emerges to support trusted flaggers and out-of-court dispute resolution. It occurs to me, in looking at things like trusted flaggers, the ODS bodies, and much of the other regulatory scaffolding in the DSA, that it seems to presume the current large platform economy, the current social media ecosystem, will more or less be the world that we have to live with going forward.

Do you think that's correct? Are there ways in which this regulation in some ways is based on the internet that we've had for the last decade? Does that represent a challenge for it going forward?

Martin Husovec:

So in terms of the scope, if you mean this as a question of scope, of to what extent the DSA is likely to catch all the new developments, I think there the answer is very likely yes, because the scope as defined today is essentially that any service that stores someone else's information at their request and distributes it to the public is an online platform, which is where most of the magic happens, most of the obligations happen. With that definition, you cover many things. In fact, sometimes maybe too many under this particular regulatory roof. So I don't worry too much about this. Now, there might be some services that we don't catch, and in some contexts, GenAI, standalone GenAI, is an example of a service which we don't necessarily cover, but that's also a very different service. It's not about human creativity as distributed through a communication channel to the public. It's about something completely different. It is a tool, obviously, for human creativity, but the basic idea behind it is not that we created a tool to allow other people to express themselves and we're distributing it to other [inaudible 00:41:36].

So for the core function of the DSA, I think it's quite likely to be future-proof, because the definition is incredibly broad. And I also think that many of the services the internet will develop in the future will still be organized around the same principle. There's a reason why most digital services are caught by the DSA, even though it does not cover editorial services, and the reason for that is that human-generated content is super important for people. Why? Because people care about other people. We're just social animals. So if you're a social animal, you always have some user-generated content that is distributed to the public, and that already meets the definition.

So whatever form the future internet or its services take, I think it's unlikely that we escape these two basic tenets of the definition. So I think it's quite future-proof, and we see now that even things like GenAI, if integrated as features into services that are already regulated, so, I don't know, Snapchat introducing some GenAI into its service, are likely to be within the regulatory scope anyway. So you don't escape it that easily. So on the scope question, I think this is quite likely to be future-proof. I think we will have the opposite problem of potentially trying to incrementally cut back some of the things that might go too far.

Justin Hendrix:

So I've already asked you about looking forward, and we've got these systemic risk reports coming at the end of this year. But as you cast your mind forward to 2025, and I'm terrified by the idea that there aren't that many usable weeks left; in the US, we're so focused on the election that's coming up that no one can quite see past early November, but the year is almost over. What in 2025 are you looking for with regard to the rollout of the DSA and its development? What are you going to be paying most attention to?

Martin Husovec:

So for me, the key question is something that I'm actually looking at incrementally every day, and that is how we are faring in terms of the certification of trusted flaggers and out-of-court settlement bodies. How are these things starting their operations and what are the problems? And I'm trying to be part of those debates to see where the problems are and see how that can change. So for me, the indicator of whether the DSA is working is initially, for the first two years at least, the institutional setup. Did people get the message? The people who build institutions, researchers, ODS bodies, out-of-court settlement bodies, trusted flaggers, user groups that also have a role to represent users, are they sufficiently paying attention to this and building up? And from what I see, definitely [inaudible 00:44:13]. It's going a little bit more slowly perhaps for trusted flaggers, but we have 10 of them, so that's not insignificant. And I'm quite confident that this seems to be working. User groups are paying attention, there are first signs of private litigation. So I think that seems to be working.

I think a bigger challenge, and actually it's not the risk assessments, which are important for where we'll be in five years and for what we say to companies now, but a bigger challenge for me is whether the DSA will be successful in changing how individuals, the users who use the services, approach their user experience. Will they complain more? Will they try to see whether there's a sufficient explanation? Will they use these tools and maybe go to the out-of-court settlement bodies? Will they complain to the regulators? Will the user groups file complaints with the regulators? And again, here I'm quite optimistic, because from what I gather, there are already quite a few complaints pending, even though these bodies were certified just recently. Now, we're talking about out-of-court settlement bodies. There are already quite a few complaints from organizations, whether user groups or trusted flaggers.

So I think the system is coming together, and that for me is the early indicator: are people paying attention? And also, obviously, are companies thinking about this? As far as I can tell, at least among companies from midsize and above, many are clearly paying attention. Obviously the VLOPs, but also others. So I do see people paying attention to this and I think that's important. I think the problem that we might face at some point is politicization, and that's something that is perhaps inevitable, but for that very reason, we should know exactly what we're doing when enforcing it.

And as I tried to say with the book, I think we have a good set of instruments that we can use in ways that are completely compatible with, and in fact defend, liberal democracy, but we just have to be clear about what we use them for and what we are expecting. That is important for the public in particular, which can be easily misled into believing that we're doing something else. And obviously, the state can abuse the power as well. So we have to pay attention to that too.

Justin Hendrix:

You end this book by contemplating trust and trustworthiness, and you say that, "If popularity and trust in existing older institutions are declining, one way to approach the situation is to give space to new organizational structures that complement existing institutions." Always a passing siren in New York. "If new trust structures can reduce the level of distrust among the public and serve as a remedy to one of the root causes of society's ills, then perhaps our liberal democracies, our republics might be repaired at last." It does seem like that's what is at play here. It's fundamentally a trust question that we're contending with, and a lot of tech policy is about trying to answer that somehow. Very complicated to do. I wonder if we'll get there.

Martin Husovec:

Absolutely agree, and I think it comes down to that. It comes down to: can we rebuild trust? And it's not just about the providers. It's trust more broadly, and I think there is a gap that the new situation, where the internet is a global communication tool, has created: our institutions are not sufficiently representative of what the new system is. We haven't always built collective institutions. So one of the things I am missing in the DSA is an incentive for content creators to come together and self-regulate and signal to the outside world why they should be more trustworthy than someone else because they abide by some principle, something that is much more overarching than a company, but is also much more granular than an association. Something that fits the global world much better than the previous world, where you had local institutions for many things. It just doesn't work like that in the digital space. So we're missing these institutions.

Trusted flaggers are part of that as well, in fact, if you think about it. Out-of-court settlement bodies can be part of that too. So I think there are many institutions like that, that we might still be missing, that we need to adjust to these new environments. Now, will they solve all the ills of our societies? 100% not. That would be illusory. But can they help? Absolutely. Will this be controversial? Controversy is part of how we deliberate. So I don't think that in itself is an indicator that we are not on the right track, but I completely agree. For me, trust is key. And it's also one of the grounding principles for legislation in this space, because safety is easily misused by the state.

You can say, "I make you safe," but you cannot say, "I make you trust me." It's impossible to impose trust. Trust is always bottom-up. In that sense, I find it very helpful as, in a way, a competing goal with safety. Yes, we want safety. Safety improves trust in many settings, but can safety be overdone? It 100% can be overdone, like everything. So in that sense, I think trust is very helpful as a principle.

Justin Hendrix:

The textbook is called Principles of the Digital Services Act by Martin Husovec. Thank you so much for speaking to me today. I hope that we can catch up again in 2025, when perhaps some more of these phenomena have played out and we have more to discern about how this thing's going.

Martin Husovec:

Absolute pleasure, Justin. Thank you so much for having me.
