States in the Vanguard: Social Media Policy Today

Olivier Sylvain / Apr 15, 2025

This post is part of Regulating Social Media Platforms: Government, Speech, and the Law, a symposium series organized by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.

The Assembly Chamber at the California State Capitol. Ben Franske / Wikimedia / CC BY-SA 3.0

Louis Brandeis famously observed that, in the United States' system of government, the states are the laboratories of experimentation. But, given the paralysis of federal congressional leadership today, the states have proven to be so much more. Right now, they are in the vanguard of tech policy, enacting laws that address issues at the forefront of voters' minds. Over the past couple of years, dozens of US states have enacted laws aimed at regulating social media. These statutes can be grouped into three categories: content moderation, data protection, and child online safety.

Below is an overview of the types of laws that fall under each category. Of course, this isn’t the only way to organize these new laws, nor is this overview comprehensive. Instead, it is an outline of the basic elements of the state laws for the purposes of this symposium.

Content moderation

The highest-profile state efforts to regulate social media have focused on content moderation practices. But, as the Supreme Court explained in its Moody v. NetChoice opinion in 2024, such laws are generally constitutionally suspect. In different ways, the Texas and Florida statutes at issue in that case seek to ensure that the largest social media companies do not unilaterally control what people see online. The Republican-led state legislatures expressed concern about what they perceive as unlawful viewpoint discrimination – anticonservative bias – by “the West Coast oligarchs” who run social media.

The two statutes achieve this objective in slightly different ways. Florida's law prohibits "social media platforms" that have over $100 million in annual gross revenue or more than 100 million monthly visitors from "censoring" user posts based on content or source. It specifically forbids censorship of large "journalistic enterprises" that operate online, on cable television, or in broadcasting. The Texas statute similarly forbids social media companies with over 50 million monthly active users from interfering with the "information, comments, messages, or images" that their consumers post. Both laws also require companies to explain each moderation decision that "censors" or discriminates against consumers based on viewpoint. (Texas also requires the companies to provide consumers with a right to appeal such decisions.)

After these laws were passed, NetChoice and the Computer & Communications Industry Association, the industry groups that represent the largest and most powerful internet companies in the world, immediately sued on First Amendment grounds. (They also alleged violations of equal protection, due process, state constitutional provisions, and the federal liability shield under Section 230 of the Communications Decency Act.) They argue that the statutes would require companies to host and distribute speech with which their members do not agree. Such state-imposed must-carry requirements, they assert, intrude on the companies' constitutional right to speak and curate content as they see fit.

The industry groups’ “facial” challenge is probably as aggressive as they come. (This is saying something for NetChoice, which is well known for challenging a wide range of laws on First Amendment grounds.) They argue that there are few, if any, potential applications of the state provisions that are lawful. They also allege the statutes’ disclosure and transparency requirements are unduly burdensome. Companies, they argue, will be chilled into refraining from moderating user posts if, as the laws require, they must explain each of their moderation decisions.

In the interest of brevity, it is enough to observe here that the Supreme Court effectively rejected the facial challenge, remanding the cases to the lower courts for further factual development. The courts below, it explained, had insufficiently considered the full range of online functions and features that may be implicated by the laws. To justify its ambitious facial challenge, NetChoice must establish that each statute's "unconstitutional applications substantially outweigh its constitutional ones." It is not enough to consider news feeds or video-sharing services. The record, such as it is, showed only that those in particular were at the top of legislators' minds.

Data protection

Illinois was an early leader on the data protection front, with its Biometric Information Privacy Act, which protects residents against abuses of biometric data. Vermont was also among the first to pass consumer protections against the data broker industry, which is made up of companies that collect and sell personal data to third parties. Vermont, along with California, requires brokers to register with state agencies and disclose information about their data practices. California also has a "right to know" statute that allows individuals to access the information that brokers have about them.

More and more states have industry-specific or comprehensive data protection laws that cut across technologies and sectors. With regard to the first, for example, Washington has a law that protects residents' health data (to the extent such data is not already protected by the federal Health Insurance Portability and Accountability Act). The states with comprehensive data protection laws vary significantly. The stronger state laws, like those of California, Colorado, and Connecticut, establish in varying degrees individual rights to notice, access, correction, and deletion. Most also impose limitations on how companies collect, retain, use, share, or sell their residents' personal information. Further, many impose a combination of disclosure requirements and impact assessment requirements. Some also require data portability and opt-outs for targeted advertising, as well as clear and conspicuous consent mechanisms.

California stands out for several reasons. It imposes limitations on companies’ use of automated decision-making systems, including granting consumers the right to opt out. Currently, it is also the only state to create a standalone California Privacy Protection Agency, which has enforcement and rulemaking authority. It is also the only state to establish a private right of action – that is, it allows its residents to file cases for data breaches. (Vermont has been on the verge of passing such a law.) Most, if not all, other states give their state attorney general the sole authority to enforce the respective laws. Less protective laws, like those in Virginia and Iowa, contain several carve-outs for certain industries on the theory that existing federal laws govern. These weaker laws also do not tend to require consumer consent for data practices.

Online child safety

Pursuant to its authority under the Children’s Online Privacy Protection Act (COPPA), the US Federal Trade Commission (FTC) has brought enforcement actions, including a massive “dark patterns” case against Epic Games at the end of 2022, to protect children from harmful or abusive online data practices. The law imposes requirements on websites and online companies (or “operators”) that direct their services to children under 13, as well as on websites or operators that have actual knowledge that they are collecting personal information online from a child under 13 years of age. In short, operators must obtain verifiable consent from children’s parents before they can collect, use, and disclose kids’ personal data.

COPPA requires the FTC to promulgate and, from time to time, revise implementing regulations. The last time it revised those rules, however, was in 2013. In early 2024, the agency proposed updates to those regulations in light of the dramatic changes in commercial surveillance practices over the past decade or so. The agency published the final rules earlier this year. In short, the new rules impose restrictions on specific (and increasingly prevalent) commercial surveillance practices, including restrictions on the practice of requiring parents to consent to monetization of their child’s data in order to gain access to services (including educational services). The rule basically implements the data minimization principle by forbidding companies from using personal information for reasons that are unrelated to the purposes for which they collect it.

Policymakers and consumer advocates have sought nationwide protections for children that go beyond COPPA. They have built on Biden-era Surgeon General Vivek H. Murthy's advisory on the mental health effects of social media on children. A growing body of evidence, the advisory explained, suggests that protracted social media use among children has harmful effects on mental health, including symptoms of depression and anxiety, body dissatisfaction, eating disorders, and low self-esteem, especially among adolescent girls. It accordingly recommended that policymakers strengthen existing protections for children, including higher levels of privacy protection and limits on access to social media for all children. Murthy has also called for warning labels, in addition to legislative action. These recommendations, however, have not led to any new laws, although there seems to be momentum for a new Kids Online Safety Act.

So, here, too, states have stepped in. About 20 (and counting) have enacted statutes that aim to protect children from online harms. Some laws, mostly in southern states, focus on porn, flatly forbidding kids’ access to “material harmful to minors.” Most of these states impose liability on companies that do not implement age verification requirements. These will test existing First Amendment doctrine given the impacts such laws have on adults’ lawful access to porn.

The Supreme Court heard oral argument earlier this year in Free Speech Coalition v. Paxton, which concerns a Texas law that requires websites with "over one-third sexual material harmful to minors" to employ "reasonable age verification methods," including systems that evaluate government-issued identification cards or "records from mortgage, education, and employment entities." In the past, courts have held such verification laws unconstitutional because of the ways in which they chill lawful adult access to porn, but in this case the US Court of Appeals for the Fifth Circuit upheld the law. Notably, during the Supreme Court oral argument, a couple of the right-leaning Justices, including Chief Justice John Roberts, signaled openness to upholding such laws given advances in the technology.

Other states have written more targeted statutes, but these, too, have been subject to constitutional challenges. In 2022, for example, California drew on ideas from the United Kingdom's data protection regulator when it enacted the Age-Appropriate Design Code Act. In short, that law requires companies to implement the highest level of privacy protection for children, conduct data protection impact assessments, limit or block targeted advertising to children, prohibit dark patterns that manipulate children into providing personal information or doing things that are "materially detrimental" to their wellbeing, and prohibit the collection of precise geographic location information. This law is the subject of a widely followed suit brought by NetChoice, the industry trade group, and free speech advocates.

In New York, the new SAFE for Kids law prohibits social media companies from providing "addictive feeds" to New York users unless they use reasonable and technically feasible methods to verify user ages or obtain parental consent for kids under 18. The law also prohibits social media companies from sending overnight push notifications to minors between 12 a.m. and 6 a.m. without parental consent.

In 2023, Utah passed a pair of similar laws, which, together, require social media companies to verify that their users are over the age of 18 and, moreover, obtain parental consent before allowing children to access those services. In 2024, California also enacted a law that limits minors' access to "addictive feeds." As with most of these legislative efforts in the states, the tech trade groups are challenging these laws as unconstitutional intrusions on the companies' First Amendment rights.

Looking forward

The states have stepped up to regulate consumer-facing online services where the federal government has been utterly silent. But the tech companies are not sitting idly by. In cases across the country, they have alleged that the new state laws intrude on the companies’ First Amendment rights or that existing federal laws like the Communications Decency Act or COPPA preempt the state legislation. Some of these claims have prevailed when the laws in question are content-based or speaker-based. Many of the industry’s claims, however, have failed; the states have been winning to the extent the new laws redress harms from non-speech related commercial practices and design features. Given these developments, the states have been charting the way forward.

Authors

Olivier Sylvain
Olivier is a Professor of Law at Fordham University and a Senior Policy Research Fellow at Columbia University's Knight First Amendment Institute. His research is on information and communications law and policy. His most recent writing, scholarship, commentary, and congressional testimony are on...
