Brazil Supreme Court Ruling Redefines Framework for Platform Liability
Joan Barata / Nov 20, 2025
Luiz Fux, Minister of the Supreme Federal Court of Brazil, declares his vote in a trial on September 11, 2025. Rosinei Coutinho/STF via Wikimedia Commons.
The Supreme Court of Brazil (STF) has issued its full opinion in two pivotal joined cases, designated as “Themes” (temas) 533 and 987, addressing the liability of internet providers for content generated by third parties. This long-awaited decision concludes a lengthy process during which the Court worked through differing, or at least not fully aligned, views among the justices on this important issue.
This decision is important not only for its possible impact on Brazilian digital markets and on the conditions for the free dissemination of ideas and opinions in the Brazilian digital public sphere, but also because it gave the Court the opportunity to analyze and rule on the constitutionality of the law that has served, since 2014, as the main pillar of digital platform governance: the Marco Civil da Internet (MCI).
Before considering the content and implications of the decision, it is necessary to describe the legal and institutional framework in which the Court’s opinion will take effect.
The vote of Justice Luiz Fux presenting the position of the Court is described in the opinion as “minimalist,” in the sense that its declared intention is to establish a set of clear and fundamental constitutional criteria upon which the legislature is then invited to build a more detailed regulatory structure. In particular, the Court calls for a “profícuo diálogo institucional” (a fruitful institutional dialogue) between the two branches.
In Brazil, there is an ongoing process of putting forward ideas and proposals, from both the executive and legislative branches, on how to recalibrate freedom of expression online against the harms the digital world poses not only to individual rights but also to the survival of the democratic system as a general ideal. Although several proposals, some of them controversial, have been put forward, none has so far been adopted as a new regulatory regime. The Court thus aims to provide a clearer constitutional framework within which the legislature can articulate developed and precise norms, adapted to the volatility and complexity of the regulated field. That this debate has so far been the object of strong controversies and advocacy efforts pulling in different directions shows the sensitivity and importance of the matter.
For those not familiar with the Brazilian context, a second important remark concerns the fact that, from a constitutional perspective, the legal understanding of freedom of expression in Brazil encompasses not only an individual or subjective perspective but also a so-called objective dimension, as the vote explains. This doctrine, rooted in German constitutionalism and also present in countries such as Spain and in other parts of Latin America, holds that fundamental rights are not merely subjective defenses for individuals but also objective principles that embody the core values of the constitutional order. As such, they impose an affirmative obligation on the state not only to abstain from violating these rights but to actively create the conditions necessary for their realization and protection against threats from all sources, including private third parties. The Court uses this approach to justify an analysis of the constitutionality of the MCI in light of the state’s positive obligation to establish sufficient protections for those whose rights are affected by online content.
Thirdly, the STF’s decision is also under the indirect influence of a “militant” conception of democracy and freedom of expression, according to which the Constitution does not protect forms of expression or action that undermine the pillars of the constitutional order or plant the seeds of its own destruction. Based on the Court’s decision in the “Ellwanger” case, a contrary approach to speech protection is characterized as “hypocritical libertarianism.” It is obvious, however, that this definition of the legitimate limits of freedom of expression generates important friction zones between legitimate, yet extreme, political speech and unacceptable forms of hateful and “anti-democratic” speech.
Last but not least, the Court also bases its decision on a series of factual premises that are presented in a somewhat oversimplified manner as the basis for its legal considerations. In this sense, the Court considers that even though platforms engage in content moderation practices to address harmful content, particularly where such content may affect the fundamental rights of third parties, these efforts are not genuinely driven by a duty or responsibility to protect rights but by purely commercial interests: specifically, the need to maintain an environment attractive to advertisers in order to maximize advertising revenue. This, together with the need for platforms to assess a “colossal” volume of content, leads the Court to conclude that only legally imposed liability and monitoring obligations will make platforms willing to tackle problematic speech properly. In this sense, the Court establishes that direct legal assessment and action by platforms regarding certain categories of illegal content is necessary, given the “natural delays” inherent in any judicial process aimed at adopting measures to avoid irreparable harm.
What did the Court decide?
In essence, the Court’s decision declares the current legal framework under Article 19 of the MCI unconstitutional. That article established that internet service providers are civilly liable for third-party content only if they fail to remove it after receiving a specific court order.
The Court replaces this approach with what can be called a fault-based liability model. While a system of strict or objective liability is excluded, the Court defines a new regime based on the concept of platform negligence. Much of the vote is consequently dedicated to describing the circumstances and criteria for determining such negligence, which constitutes the central but also the most contentious element of the judgment. Platform civil liability is triggered only in cases of “unequivocal knowledge” (“ciência inequívoca”) of illicit content generated by a third party, where the platform negligently fails to act to remove it. Crucially, the standard for what constitutes “unequivocal knowledge” is not necessarily clear or uniform; it varies according to the nature and severity of the harmful content, creating a tiered system of duties:
- First category: “evidently illicit content” (hate speech, pedophilia, racism, incitement to violence, glorification of the violent abolition of the democratic system or of a coup d’état). Platforms are presumed to have unequivocal knowledge and therefore have a legal obligation to proactively and diligently detect and remove such content, regardless of notification.
- Second category: “individually harmful content” (including violations of the rights to privacy, honor or image). The unequivocal knowledge necessary for civil liability depends on prior and substantiated notification by the interested parties.
- Third category: “paid or boosted content” (monetization). Unequivocal knowledge is presumed without exception, on the basis of a legal obligation to verify all commercially promoted content. Beyond the definitional problems surrounding commercially boosted content, here the Court seems simply to equate effective knowledge with direct income from content.
What lies ahead?
This judicial resolution, though formally deciding individual cases, emerges as a new rubric for a developed legal framework applicable to online platforms, within a context of undeniable political struggle and polarization, as well as intense geopolitical and geoeconomic pressures. This is hardly the ideal scenario for a decision-making process in which all the different rights, principles and values must be properly taken into consideration (particularly the rich standards set by the Inter-American human rights system, as I previously explained on Tech Policy Press), beyond specific political interests, and with the involvement of the broadest range of stakeholders, including civil society, industry, activists, content creators and many others.
In this very complex environment, it is important not to lose sight of the most significant elements that must be taken into account if the legal reform is to comply with applicable human rights standards and become a useful tool to preserve constitutional principles in an efficient and proportionate manner.
Intermediary liability exemptions
On a superficial analysis, holding online platforms liable for content posted by their users may be seen as a mechanism to push them to play an active role in eliminating harmful content. This view, however, overlooks the obvious technical and legal incapacity of intermediaries to analyze all the content that circulates on their platforms. More dangerously, holding platforms liable delegates to private companies, many of them headquartered in another country, the power to determine what speech is legal or illegal, and thus creates incentives for excessive and illegitimate private censorship.
Under the new classification provided by the Court, it is assumed that platforms will be able not only to monitor content systematically but also, and specifically, to make accurate legal determinations about illegal content in areas such as glorification of the violent abolition of the democratic system or of a coup d’état. It should not be forgotten that these are probably the most sensitive and problematic areas for interpretation, since the legal provisions are relatively broad and open-ended, and intersect with particularly protected areas of speech, such as political speech. There is, obviously, a growing concern, also in view of relatively recent events, about the use of social media to promote violence, civil unrest and even the overthrow of the government. However, delegating the definition of the scope of very sensitive areas of freedom of expression to private companies under threat of liability may lead to arbitrary restrictions and over-removal of content.
As for the presumption of unequivocal knowledge in relation to boosted content, this may reflect an over-reliance on platforms’ capacity to undertake immediate legal assessments in potentially complex areas. While platforms may pay particular attention to promoted content, this does not necessarily equate to a capacity to make proper, proportionate and reasonable assessments, at scale, of the legitimate limits of the right to freedom of expression. Nor should we neglect the pressure that governments of different political orientations may exert on how platforms carry out these obligations, not only in the near term but also in the mid and long term.
What about proper due diligence obligations?
Preserving intermediary liability protections is not incompatible with establishing a wide range of obligations for platforms to guarantee fairness, transparency and accountability in content moderation decisions. These obligations can be subject to regulatory oversight, which may trigger administrative responsibilities and penalties for companies that fail to fulfill them.
It is important to note that, in order to avoid the risk of interference with users’ legitimate right to freedom of expression, these obligations should not require platforms to actively monitor and identify illegal or vaguely defined categories of harmful content, but should rather consist of reasonable, transparent, proportionate and flexible structural or systemic duties. Such requirements may cover areas such as transparent content moderation processes, improved community standards, the involvement of civil society, fact-checkers and/or trusted flaggers, appeal mechanisms, and the definition of priorities in terms of the risks and harms to be avoided. They must not be result-oriented in terms of the amount of content removed (otherwise the risks of private or delegated censorship of legal content are obvious), yet platforms need to be transparent about the impact of their policies and maintain mechanisms to evaluate their effectiveness. They must also incorporate proper human rights safeguards for all affected parties. Unfortunately, the Court missed the opportunity to elaborate on and provide useful guidance about the possible imposition of duties of this kind.
Proper institutionalization
The above-mentioned regulatory standards need to be implemented within an adequate institutional framework. In line with models like the Digital Services Act in the European Union or the Online Safety Act in the United Kingdom, properly independent regulatory agencies must be established, staffed and funded to perform the oversight functions mentioned in this article. Such bodies must be neither under the direct or indirect control of the government or any political actor nor susceptible to capture by private or industry interests. The role of the judiciary as the ultimate guarantor of rights in this new context must also be preserved. However, all these important perspectives seem to have been omitted by the Court as well, even though Brazil still falls short of the institutional checks and balances necessary for the proper implementation of a new regulatory system.
Understanding social complexity
Social media content is often a mere reflection of reality. Without prejudice to the capacity of tech platforms to amplify and spread certain types of publications and information campaigns, we should not neglect the fact that social media content is only one part of a complex communicative and political ecosystem. Problems of polarization, social cohesion, youth disengagement, dangerous behaviors or information vulnerability can only be properly addressed through comprehensive social policies, improved media literacy and overall democratic quality. Adopting expeditious and restrictive measures affecting online content is the result of a short-sighted, if sometimes well-intentioned, political desire to show resolve and provide oversimplified answers to complex problems.
Brazil has so far been seen as a remarkably good example of balanced platform regulation, with a proper and solid human rights approach. While the new challenges affecting the public sphere, and the country in general, could not be more obvious, it is also necessary that the final outcome provide effective, practicable and proportionate tools to properly address the concerns that have generated the ongoing debates and proposals in the country.