
Transcript: House Hearing on “Proposal to Sunset Section 230 of the Communications Decency Act”

Gabby Miller / May 23, 2024

House Energy and Commerce Committee Ranking Member Frank Pallone (D-NJ) and Chair Cathy McMorris Rodgers (R-WA) (composite/Tech Policy Press)

On Wednesday, May 22, 2024, the House Energy and Commerce Committee held a Communications and Technology Subcommittee hearing on a proposal to sunset Section 230, a provision of the Communications Decency Act of 1996 that provides broad immunity to social media companies for user-generated content. While no draft legislation has formally been introduced, Committee Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) aim to “force Congress and stakeholders to work together in good faith to develop a long-term solution,” according to the hearing memo.

The hearing featured three expert witnesses:

Carrie Goldberg, founding attorney at C.A. Goldberg PLLC
Marc Berkman, CEO of the Organization for Social Media Safety
Kate Tummarello, Executive Director at Engine

Below are key takeaways from the hearing, followed by a lightly edited transcript. Please refer to the hearing video when quoting the speakers.

Section 230: repeal or amend?

Even if Section 230 sunsets, it remains largely up in the air what approach Congress would take to reforming it. “There's no shortage of options and I know that I believe that we can come together and that we must start now,” said Chair McMorris Rodgers. She was also clear that her goal in sunsetting Section 230 is not for it to disappear entirely, but rather to “put a clock on making reforms.” She noted that in the past two Congresses, nearly 25 bills have been introduced to amend Section 230, with each killed by the big tech lobby (though many drew opposition from civil society groups concerned about free expression). “These companies left us with no other option,” Chair McMorris Rodgers said. “By enacting this legislation, we will force Congress to act.” Ranking Member Pallone added that a sunset “gives us time to have a serious conversation about what concepts are worth keeping.”

During an impassioned opening statement, Rep. Doris Matsui (D-CA) called for reforms that “allow for better enforcement of civil rights laws to prevent some of the most upsetting discriminatory behavior online.” Despite “widespread bipartisan consensus in Congress that Section 230 needs to be modernized,” Rep. Matsui said it remains unclear how to get there. “That's why I believe the process is just as important as the product. We need a thoughtful process that allows for nuanced conversations on difficult issues.” Witness Kate Tummarello agreed with Rep. Matsui’s calls to modernize Section 230, but added that Congress should keep its original intent, which was to “incentivize good faith attempts at keeping corners of the internet safe and healthy and relevant.”

A discussion later unfolded around whether to take a ‘carve-in’ approach or pursue comprehensive reform. Witness Marc Berkman expressed enthusiastic support for Chair McMorris Rodgers and Ranking Member Pallone’s sunset-and-rework approach. “The pros of doing carve-ins means that the social media industry is at the table and providing real compromise that is reasonable,” Berkman said. Starting over, on the other hand, means “we're potentially going to get a very robust system that works.” Either way, he argued, Congress needs social media companies at the table having these discussions.

Fear over ‘frivolous litigation’ absent Section 230

Tummarello, who leads a trade association representing small to medium-sized tech startups, argued that sunsetting Section 230 without consensus on an alternative framework “risks leaving internet platforms, especially those run by startups, open to ruinous litigation.” She wants any Section 230 reforms to continue using a “framework that allows platforms to quickly and easily get a lawsuit over user content dismissed.” Absent Section 230, “it’s not that the platforms would be held liable for the speech, it's that the platforms will be pressured into removing speech people don't like.”

Witness Carrie Goldberg took significant issue with Tummarello’s argument. Goldberg explained that social media companies argue “everything is speech,” asserting immunity regardless of how much their platforms promote content or use algorithms, among other actions. “When we’re talking about content removal,” Goldberg said, “there still has to be a cause of action.” Just because a platform succumbs to political pressure or fears allowing users to post controversial content, it doesn’t create a tort that anybody can sue under, according to Goldberg. “I don't actually understand the basis for the concern that suddenly the gateways of litigation are going to open.”

Berkman “wholeheartedly” agreed with Goldberg, adding that “the fear here is overwrought.” Tort law jurisprudence, he explained, requires that to bring a suit, you need a case that is meritorious on its face. “The concern about removing content aggressively, unnecessarily, that would indicate that they have the ability to be removing all the little uncontroversially dangerous content that's on there now,” Berkman said. “The focus here is removing what would be a tort, what is causing severe harm, ensuring that they're doing that in a way that is not negligent or grossly negligent or reckless.”

Courts and state legislatures lack congressional clarification

There was much criticism over the ways Section 230 has provided near-complete immunity for social media companies. This is largely due to the law’s ambiguities and contradictions, according to Ranking Member Pallone, which has led to confusion in the courts. “Judges have attempted to apply it to technologies and business models that could not have been envisioned when it was drafted,” he said. Goldberg expanded on this, pointing to inconsistent judgments over product liability suits, a theory she says her firm pioneered in 2017. “We still have the Ninth Circuit applying the Second Circuit law, and they always say that they're looking to Congress for clarity,” Goldberg said.

It’s not just the courts that are chipping away at Section 230. As state legislatures grow impatient, they are increasingly passing bills that introduce liability to tech platforms, according to Ranking Member Pallone. “For now, we're left with the status quo, a patchwork where more often than not, bad Samaritans receive broad protection from a statute intended to promote decency on the internet,” he said. “So Congress should not wait for the courts.”

What follows is a lightly edited transcript of the discussion.

Rep. Bob Latta (R-OH):

Well, good morning. The Subcommittee will come to order. The chair recognizes himself for an opening statement. Good morning. And welcome to today's legislative hearing to discuss sunsetting Section 230 of the Communications Decency Act. Since 1996, Section 230 protections have allowed the US tech industry to flourish. This legal framework emboldened Americans to pioneer, creating internet and social media platforms to promote innovation, user content and social media interaction. Its intent was to provide online platforms immunity from liability for content posted by third-party users.

But as the internet exploded in growth, it also increased challenges that were not contemplated when the law passed in 1996. Section 230 must be reformed. As we heard in our last hearing on this topic, the current online ecosystem is flawed. Many of these platforms are rife with content such as online sex trafficking, narcotics, child pornography, and other illicit crimes.

In response, big tech platforms hide behind Section 230's broad immunity. In that process, courts have rewarded their destructive behavior. We need to reform Section 230 to hold platforms accountable for the role they play in facilitating and enabling harmful behavior, but in doing so, Congress must be thoughtful and deliberative. There is no silver bullet to fix this issue.

Some argue that amending or repealing Section 230 violates the First Amendment rights of those platforms to host the content they so choose, yet no industry has complete protection from all liability for harm it causes. Newspapers and broadcasters, fundamental mediums that exemplify our First Amendment rights, are subject to publisher liability or can be sued for defamation; not big tech.

Over the past several Congresses, there have been numerous proposals to hold big tech accountable for when it acts as a publisher in moderating content on its platforms, but to no avail, which is why today we're reviewing a discussion draft that will sunset Section 230 of the Communications Act of 1934, effective December 31st, 2025.

I hope this legislation will bring people together, including those who support, oppose or are interested to carefully discuss Section 230 reforms. One thing is certain, big tech's behavior has brought Republicans and Democrats together on a commitment to find a long-term solution to reform Section 230.

Congress has a monumental task ahead, but we must reform the law in a way that will protect innovation, promote free speech, and allow big tech to moderate indecent and illegal content on its platforms and be accountable to the American people.

I look forward to our discussion today and to working with my colleagues on a broader discussion about purposeful reforms to Section 230. It's up to Congress, not the courts to reform Section 230, and changes to this law are long overdue.

And with that, I will yield back the balance of my time. And before I do recognize the ranking member of the Subcommittee, the gentlelady from the Seventh District of California, I just want to mention to our witnesses that we do have three Subcommittees of the Energy and Commerce running today. Health starts at 10:30, so you're going to see members leaving the Committee and coming back. It's not that they don't want to hear your testimony, but we have to be back and forth in the Committee. So I just wanted to mention that before we get started.

And also, if I just take a point of personal privilege, recognize our chair's birthday today. Well, happy birthday.

And with that, I now recognize the ranking member of the Subcommittee, the gentlelady from the Seventh District of California for her opening statement.

Rep. Doris Matsui (D-CA):

Thank you very much Mr. Chairman. Of the many critical issues within the Subcommittee's jurisdiction, few are more consequential than Section 230. In both positive and negative ways, Section 230 of the Communications Decency Act has shaped our online ecosystem. Regularly referred to as the 26 words that created the internet, Section 230 established vital protections that allowed the internet to flourish in its early days. Without 230, it's unlikely we'd have the vibrant internet ecosystem we all enjoy.

And yet, despite this, it's clear that Section 230 as it exists today isn't working. The status quo simply is not viable. As with any powerful tool, the boundaries of Section 230 have proven difficult to delineate, making it susceptible to regular misuse. The broad shield it offers can serve as a haven for harmful content, disinformation and online harassment. This has raised significant concerns about the balance between protecting freedom of expression and ensuring accountability for the platforms that host this content.

These concerns aren't abstract: from documented attempts to interfere with our elections to the harm this content is inflicting on America's young people, the unchecked immunity of 230 has consequences. And we know many online platforms aren't simply hosting this content; they're actively amplifying it to reach more viewers.

Most large social media platforms are designed from the ground up to promote constant engagement rather than healthy interactions. That means pushing harmful and misleading information on some of the most vulnerable users with ruthless effectiveness. Young women are constantly subjected to unhealthy or untested diets. Suicide and self-harm material is foisted on those seeking mental health care. And recent elections show the ongoing danger of targeted disinformation.

So it should be clear to all, the role of Section 230 needs immediate scrutiny because as it exists today, it is just not working. To date, congressional efforts to make needed reforms have come up short, but we can't give up. The stakes are just too high, but it's also for that reason that we must be intentional, thoughtful, and deliberative in our attempts to update Section 230. Any reforms we implement should create a meaningful incentive for online platforms to own the outcomes of the products they're designing in a way they don't currently.

It's well understood that many of these platforms are knowingly amplifying harmful content; there can and should be consequences for that. Until that fundamental dynamic changes, we can't expect to achieve the safer online experience we all want. And we need reforms that allow for better enforcement of civil rights laws to prevent some of the most upsetting discriminatory behavior online.

But while there's widespread bipartisan consensus in Congress that 230 needs to be modernized, the process for getting there remains unclear. That's why I believe the process is just as important as the product. We need a thoughtful process that allows for nuanced conversations on difficult issues. I'm ready to begin that work. And with that, I yield back the balance of my time.

Rep. Bob Latta (R-OH):

Thank you very much. The gentlelady yields back the balance of her time. The chair now recognizes the gentlelady from Washington, the chair of the full Committee of Energy and Commerce for five minutes for her opening statement.

Rep. Cathy McMorris Rodgers (R-WA):

Good morning. And thank you Chairman Latta. Ranking member Pallone and I recently unveiled bipartisan draft legislation to sunset Section 230 of the Communications Decency Act. As written, Section 230 was originally intended to protect internet service providers from being held liable for content posted by a third-party user or for removing truly horrific or illegal content. The intent was to make the internet a safe place and allow companies to remove harmful content in good faith without being held liable for doing so.

However, the internet has changed dramatically since then. Over 5 billion people around the world use social media with the average person spending more than two hours a day on social media. The internet has become vital for people to connect, work, find information, make a living. Big tech is exploiting this to profit off us and use the information we share to develop addictive algorithms that push content onto our feeds.

At the same time, they refuse to strengthen their platforms' protections against predators, drug dealers, sex traffickers, extortioners and cyber bullies. Our children are the ones paying the greatest price. They're developing addictive and dangerous habits often at the expense of their mental health. Big tech has failed to uphold American values and be good stewards of the content they host.

It's been nearly three decades since Section 230 was enacted and the reality is many of these companies didn't even exist when the law was written and we could not comprehend the full effect of the internet's capabilities. It is past time for Congress to reevaluate Section 230.

In recent years, US courts have expanded the meaning of what Congress originally intended for this law, interpreting Section 230 in a way that gives big tech companies nearly unlimited immunity from legal consequences. These blanket protections have resulted in tech firms operating without transparency or accountability for how they manage their platforms and harm users.

This means that a social media company, for example, can't easily be held responsible if it promotes, amplifies or makes money from selling drugs, illegal weapons, or other illicit content through its posts. As more and more companies integrate generative artificial intelligence technologies into their platforms, these harms will only get worse and AI will redefine what it means to be a publisher, potentially creating new legal challenges for companies.

As long as the status quo prevails, big tech has no incentive to change the way they operate and they will continue putting profits ahead of the mental health of our society and youth. Reforming Section 230 and holding big tech accountable has been a long-time priority of mine and ranking member Pallone. Last Congress, we both introduced our own legislation to reform the decades-old law. Unfortunately, tech companies did not engage with us in meaningful ways and offered no solutions or reforms. Ultimately, no solutions or reforms were made.

And big tech is satisfied with the status quo, so much so that they've become masters at deception, distraction, and hiding behind others in order to keep Section 230 unchanged. That's why we're taking bipartisan action now. Our discussion draft will bring Congress and stakeholders to the table to work in good faith to create a solution that ensures accountability, protects innovation and free speech, and requires companies to be good stewards of their platforms.

Let me be clear, our goal is not for Section 230 to disappear, but the reality is that nearly 25 bills to amend Section 230 have been introduced over the last two Congresses, many of these were good faith attempts to reform the law and big tech lobbied to kill them every time. These companies left us with no other option. By enacting this legislation, we will force Congress to act. It is long past time to hold these companies accountable.

The shield of Section 230 should be there to protect the American people, not big tech. I'm hopeful that this legislation is the start of an opportunity to work in a bipartisan way to achieve that goal. It is vital that we develop solutions to restore people's free speech, identity and safety online, while also continuing to encourage innovation. That's the American way. I look forward to hearing from the witnesses today. And I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentlelady yields back. The chair now recognizes the gentleman from New Jersey, the ranking member of the full Committee for five minutes for an opening statement.

Rep. Frank Pallone (D-NJ):

Thank you Mr. Chairman. And today, we continue the Committee's work of holding big tech accountable by discussing draft legislation that Chair Rodgers and I circulated that would sunset Section 230 of the Communications Decency Act at the end of 2025.

And while I believe that Section 230 has outlived its usefulness and has played an outsized role in creating today's profits-over-people internet, a sunset gives us time to have a serious conversation about what concepts are worth keeping. So Section 230 was codified nearly 30 years ago as a good Samaritan statute designed to allow websites to restrict harmful content. And while it was intended to be just one part of the Communications Decency Act, it was almost immediately left to exist on its own when most of that act was deemed to be unconstitutional.

Section 230 was written when the internet largely consisted of simple websites and electronic bulletin boards. Today, the internet is dominated by powerful trillion-dollar companies, and many of these companies have made their fortunes using sophisticated engagement and recommendation algorithms and artificial intelligence to harvest and manipulate our speech and our data, and all of that is in an effort to maximize the time we spend on their platforms and to sell advertising.

Unfortunately, these platforms are not working, I should say, for the American people, especially for our children. But that shouldn't surprise us, these companies aren't required to operate in the public interest like broadcasters, nor do they have robust editorial standards like newspapers. They're not a regulated industry like so many other important sectors of our economy. And the vast majority are publicly traded companies with a singular duty under corporate law to maximize value for their shareholders by increasing their profits.

As a result, they face constant pressure to grow their user base, which these days means hooking children and teens. They introduce addictive features to keep us watching and clicking. They exploit our data to develop granular profiles on each of us to sell advertising and then provoke our emotions to monetize our engagement. And with Section 230 operating as a shield to liability when people are harmed, making money remains the primary factor driving decisions.

As a result, provocative videos glorifying suicide and eating disorders, dangerous viral challenges, horrific child abuse images, merciless bullying and harassment, graphic violence, and other pervasive and targeted harmful content are being fed nonstop to children and adults alike.

Just this week, a popular event and ticketing platform was found to have been promoting illegal opioid sales to people searching for addiction recovery gatherings. So frankly, I think we deserve better than all this.

And the fact that Section 230 has operated as a near complete immunity shield for social media companies is due to decades of judicial opinions trying to parse its ambiguities and contradictions. Judges have attempted to apply it to technologies and business models that could not have been envisioned when it was drafted. And the courts have expanded on Congress's original intent and have created blanket protections for big tech that has resulted in these companies operating without any transparency or accountability. And I don't believe that anyone could come before us now and credibly argue that we should draft 230 or Section 230 the same today.

But despite all of this, some courts have started to scrutinize the limits of Section 230 more closely. Moreover, major search engines have recently begun to substitute their own AI content over search results directing users to third party sites. Not only does this demonstrate an intentional step outside of the shelter of Section 230's liability shield and raise significant questions about its future relevance, but it also upsets settled assumptions about the economies of content creators and the reach of user speech.

Now, it's only a matter of time before more courts chip away at Section 230 or the Supreme Court or technological progress upends it entirely. And state legislatures are growing impatient, increasingly passing bills seeking to introduce liability to tech platforms. But for now, we're left with the status quo, a patchwork where, more often than not, bad Samaritans receive broad protection from a statute intended to promote decency on the internet.

So Congress should not wait for the courts, we should act. Our bipartisan draft legislation would require big tech and others to work with Congress over the next 18 months to develop and enact a new legal framework that works for the internet of today. I believe we can work together to develop a framework that restores the internet's intended purpose of free expression, prosperity, and innovation.

And I want to finally say, Mr. Chairman, I reject big tech's constant scare tactics about reforming Section 230. Reform will not break the internet or hurt free speech. The First Amendment, not Section 230, is the basis for our nation's free speech protections and those protections will remain in place regardless of what happens to Section 230. We cannot allow big tech to continue to enjoy liability protections that no other industry receives. And I yield back Mr. Chairman.

Rep. Bob Latta (R-OH):

Well, thank you very much. The gentleman yields back. This concludes member opening statements. The chair reminds members that pursuant to the Committee rules, all member opening statements will be made part of the record.

And at this time, we will also want to thank our witnesses for being here before the Subcommittee today. We greatly appreciate your testimony. Our witnesses will have five minutes to provide an opening statement, which will be followed by a round of questions from our members.

The witnesses that are appearing before us today are Ms. Carrie Goldberg, founding attorney at C.A. Goldberg PLLC, Mr. Marc Berkman, the CEO of the Organization for Social Media Safety, and Ms. Kate Tummarello, the Executive Director at Engine.

I would like to note for our witnesses that the timer light on the table will turn yellow when you have one minute remaining and will turn red when your time has expired. Ms. Goldberg, you're recognized for five minutes for your opening statement. Thanks again. Thanks for being with us.

Ms. Carrie Goldberg:

Thank you and good morning, Chair Latta. Subcommittee Chair McMorris Rodgers, happy birthday to you. And ranking members Matsui and Pallone, distinguished members of the House Committee on Energy and Commerce, thank you so much for inviting me to testify today.

My name is Carrie Goldberg and I'm the owner of a national law firm where we litigate for families who've been destroyed by big tech. I stand for the belief that our courts are the great equalizer. In 1791, Congress passed the Seventh Amendment, which gives Americans the right to a civil jury trial. This gives us the constitutional right to hold accountable, those who injure us.

And I stand against the idea that some bad actors, whether human or corporate, are too important to face their victims eye to eye, to be examined by a jury, or to pay for their harms; that the Seventh Amendment just doesn't apply. Companies that mint money off of the backs of the masses can't later claim, when their products hurt those people in predictable or even known ways, that it's not their fault and that they're just a passive publisher.

I want to tell you about some of the cases that I work on every day. A child whose murder was live posted and her family is harassed daily by people who defile her murder images. And in her death, Instagram went from her profile having 2000 users or visitors to almost 200,000 followers. Instagram refuses to give her estate the power to control the account.

I represent children who were matched with predators on a video streaming app, including 11-year-old A.M., who was used by a man in his 30s for sex, became his sex slave for three entire years, and was made to go back on the app, Omegle, to recruit more kids.

I'm the originating attorney in a case against Snap where our client's children were matched with drug dealers who sold them fentanyl and killed them. This case has 90 families in it and all 90 are mourning the deaths of their kids. One parent is even here today.

I represent multiple victims of the serial rapist, Dr. Stephen Matthews in Denver, who was using Hinge and Tinder as a catalog to find more victims, and Tinder knew about it.

I represent the family of a young nurse from Brooklyn who was murdered on a Tinder first date by a felon who was erroneously released from prison.

I represent a man who was impersonated on a gay dating app and over 1000 strangers came to his home in person to rape him while the platform stood by and watched.

I also represent a child with profound autism who was matched on a dating app that aggressively markets to children, and then, was raped by four men over four consecutive days.

And finally, I represent over 30 families whose children purchased a suicide kit online and died the most horrific deaths imaginable. And for 24 of those families, it was an online retailer called Amazon that sold the chemical, aware that there was no household use for it.

I do not represent people who were harmed by mere content moderation decisions. In every single one of my cases, the platform knowingly caused the conduct, released a dangerous product into the stream of commerce, sold something deadly, turned a blind eye, or in some cases, all four.

I sue them for their product defects just like you would if a community was poisoned by contaminated groundwater or weed killers caused cancer or an emergency exit door fell off of an airplane mid-flight. Yet in virtually every one of my cases, the online company says that it was just a mere forum of speech, that it's immune from liability and it's just a passive publisher.

Now, over the last 10 years, I've lost more cases because of Section 230 than anybody that I know, but I've also won a few. Congress passed Section 230 in 1995 as an accident in its mission to end online porn, but now it's used by internet oligarchs to shirk the courts and avoid the victims. They make money off their victims, they pay lobbyists and they amass just an unknown amount of power.

Meanwhile, the equivalent of a plane full of children is crashing into mountains every single day while we just wait and watch. In 1997, the court, in a famous case, said, "The internet is a rapidly developing technology. Today's problems may soon be obsolete, while tomorrow's challenges are yet unknowable. In this environment, Congress is likely to have reasons and opportunities to revisit the balance struck in the CDA." Yet here we are 27 years later, but only Congress can fix this emergency. Congress created Section 230, Congress can fix it. And I'm here to support the sunsetting of Section 230 to restore balance. Thank you.

Rep. Bob Latta (R-OH):

Thank you. Mr. Berkman, you're recognized for five minutes. Thank you very much for being with us today.

Mr. Marc Berkman:

Thank you. Good morning. Chair Latta, ranking member Matsui, thank you for the opportunity to testify. As a former staffer myself, I also want to thank all the hardworking staff sitting back there today, so thank you.

My name is Marc Berkman. I'm the CEO of the Organization for Social Media Safety, the first and leading consumer protection organization focused exclusively on social media. Thank you to Chair Rodgers, ranking member Pallone and the full membership of the Committee for your strong bipartisan efforts and comprehensive approach towards protecting families from the dangers associated with social media use.

Harms either caused or exacerbated by social media are indeed severe and they are pervasive. In our own study with over 14,000 teens, we have found that a whopping 53% self-report using social media for more than five hours a day. That is a lot of time. And in that time, our children are being exposed to a range of threats.

Here are some study findings. A breathtaking 46% of teens self-report being a victim of cyberbullying. Cyberbullying victims are about 2 1/2 times more likely to attempt suicide. Last year, the FBI reported 12,600 sextortion victims, at least 20 of whom died by suicide. 43% of young adults had seen self-harm content on Instagram, 33% consequently replicated such self-harm. The more time adolescents spend on social media, the more likely they are to be exposed to drug-related content and to experiment with substance use. New TikTok accounts set up by a supposed 13-year-old recommended self-harm and eating disorder content within minutes.

These studies among many, many others indicate real ongoing harm from social media. We do not need to wait for more research, the conclusions are clear. Social media executives have themselves readily acknowledged the safety concerns in these very halls. Meta's Mark Zuckerberg acknowledged that Meta didn't do enough to prevent their platforms from being used for harm, including a failure on data privacy.

Shou Zi Chew of TikTok stated that the security, privacy and content manipulation concerns raised about TikTok apply to the other social media companies as well. Snapchat said that while they've set up proactive detection measures to get ahead of what drug dealers are doing, those drug dealers are constantly evading Snapchat's tactics, not just on Snapchat, but on every platform.

Despite this very clear consensus among its own leaders that severe safety risks to adolescents pervade the industry, big tech is not taking sufficient action. And so the harms continue, the mortality count mounts. That is why we must stand in support of Chair Rodgers and ranking member Pallone's Section 230 sunset discussion draft. Section 230(c) has directly facilitated these harms by gutting our carefully developed tort law jurisprudence for this industry. We have removed the traditional public policy mechanism that forces all other companies to appropriately consider public safety along with their profit motives when making business decisions. The results speak for themselves.

But we cannot fully understand the failures of Section 230 without a focus on its tragically forgotten provision, Section 230(d). Congress required that internet providers, including today's social media platforms, provide users, upon signing up for the service, with a list of commercially available safety software providers. The clear legislative intent of Congress was to provide the civil liability immunity provisions of Section 230(c) only in conjunction with the understanding that a robust safety software industry would help ensure critical safety support to families and children.

Tragically, the social media industry has consistently defied the clear mandate of Section 230(d). And unfortunately, Congress could not have envisioned that today's social media platforms would have to provide some minimal level of assistance to third-party safety software providers for those products to effectively function.

With this essential pillar of Section 230 long forgotten and ignored, we have seen millions of children unnecessarily harmed. That is why, along with the other essential pieces of legislation that this Committee is considering, like APRA, COPPA 2.0 and KOSA, Congress must pass Sammy's Law to correct this imbalance and give caregivers the choice of using safety software to protect their children.

As we consider comprehensive reform, the Committee should move forward with sunsetting Section 230 to bring a reluctant, unwilling social media industry to the table. If the industry wants a more tailored policy framework in place, let them finally engage in meaningful dialogue and compromise. Given the current public health catastrophe, the daily deaths, the growing harms, we need to sunset this broken system today and get to work to protect America's families. Thank you.

Rep. Bob Latta (R-OH):

Thank you. Ms. Tummarello, you are recognized for five minutes for your opening statement.

Ms. Kate Tummarello:

Chairs McMorris Rodgers and Latta, ranking members Pallone and Matsui, members of the Subcommittee, thank you for the invitation to testify before you today. My name is Kate Tummarello and I'm the Executive Director of Engine, a nonprofit that works with thousands of startups across the country to advocate for pro-startup, pro-innovation policies. Sunsetting Section 230, especially in a little over 18 months without consensus around an alternative framework, risks leaving internet platforms, especially those run by startups, open to ruinous litigation, which ultimately risks leaving internet users without places to gather online. That user perspective is so important to these conversations. I'm incredibly grateful for the people, like Carrie's clients, who are willing to share their stories about the harms that can arise when people connect online. It undoubtedly helps us to understand what's at stake here.

But it's also important to hear about the benefits of online communities for user expression, a perspective that isn't represented here today. As an internet user myself, I'd like to talk briefly about a time I relied on an online community that likely couldn't exist as it does without Section 230. In the summer of 2022, I needed medical care when I had a pregnancy tragically end at 22 weeks due to a critical health problem. While I had an amazing support system of loved ones and medical professionals, I didn't know anyone personally who had navigated such a devastating loss. I was able to turn to the support of online pregnancy loss communities, sometimes on Facebook groups or through Instagram DMs, but often on small platforms that provide resources and discussion forums for pregnant women. I leaned on those communities to navigate not only surviving the emotional trauma, but also some practical considerations like how do I get my body to stop producing breast milk because it hasn't realized that my pregnancy didn't end with a healthy baby?

Since it was the summer of 2022, and there was a rapidly shifting legal landscape around reproductive healthcare, I saw, in real time, these communities shrink as women expressed fear not only about seeking care, but even about talking about seeking care online. We've since seen states propose making it illegal to offer support to people seeking reproductive healthcare, including operating an internet service that facilitates users sharing information. Currently, Section 230 is what would prevent that small platform for pregnant women that I used from having to endure expensive and time-consuming litigation anytime one person wants to see another person's content about reproductive health removed from the internet. My personal story is about pregnancy loss, but you can substitute in any other controversial topic, religious beliefs, fertility treatment, hunting gear, the Me Too movement, political organizing, et cetera, and see the same consequence. If an internet platform could be sued, or could even be threatened with a lawsuit, over the content created and shared by its users, the platform will have an incredibly hard time justifying hosting that content or anything that comes close.

Not only does that put those platforms in the very expensive and time-consuming position of having to find and remove lawful user speech they might want to host, it means dramatically fewer places on the internet where people can have these kinds of difficult but necessary, and for me, life-saving, conversations. Sunsetting Section 230 would harm the diverse ecosystem of internet platforms and the users that rely on them. Section 230 is a faster, more efficient way to reach an inevitable legal conclusion: that internet platforms shouldn't be punished in court for the speech and conduct of their users that the platforms can't logically be expected to know about. That means litigants can't use the threat of drawn-out and expensive legal battles to pressure internet platforms into removing speech the litigants don't like.

Section 230 has enabled user expression far beyond the platforms run by big tech companies. That includes everything from the nonprofit-run Wikipedia to libraries to educational institutions to internet infrastructure companies to individuals running Mastodon servers or community listservs or bloggers with comment sections. And it works for the startups in Engine's network, like those that build local communities through events, create safer dating experiences, facilitate conversation about current events, support educators, help small businesses find customers, and much more. We know that startups with limited budgets and small teams invest proportionally more in content moderation than their larger counterparts. They have to. They need their corners of the internet to remain safe, healthy, and relevant if they want to grow. But they don't have the thousands of content moderators that large tech platforms employ, and they can't always buy or build content detection and removal technologies, neither of which is a silver bullet option.

Startups are also least equipped to handle the cost of litigation. The average seed-stage startup has about $55,000 per month to cover all of its expenses. Contrast that with the cost of defending against a lawsuit, which can run hundreds of thousands of dollars even if the startup were to ultimately prevail. It would always be the best option for a startup's bottom line to just avoid the lawsuit altogether, even if that means removing user content it would otherwise host, to the detriment of its users. Sunsetting Section 230 won't lead to the outcome that the leaders of this committee say they want: an internet where free expression, prosperity, and innovation can flourish. Section 230 is and has been critical to those goals, and it's essential for the competitiveness of US startups. Instead of sunsetting Section 230 in the hopes of an elusive replacement, we must be clear-eyed about what we can realistically accomplish and what we risk, in terms of trade-offs to expression, prosperity and innovation. Thank you for the opportunity to testify, and I look forward to answering your questions.

Rep. Bob Latta (R-OH):

Well, thank you all for your opening statements today. That will conclude, and we'll now start with members' questions to our witnesses. Ms. Tummarello, if I could start my questions with you. You state in your testimony that we want to make sure that the small and medium-sized companies and the startups out there can have the protection of Section 230. And also, I'm sure you agree that there's some companies out there that can better protect themselves. At what point should we consider a company to be big enough to take more responsibility for their platforms?

Ms. Kate Tummarello:

Thank you for the question, Chair. I think, like I said in my testimony, we know that startups invest proportionally more in content moderation to keep their users safe. They have to. It's a business necessity for them if they want to grow. Generally, I think we're very hesitant to put a cap on what constitutes a startup. It's really hard to measure. You can have a lot of users with a very small team, you can have a lot of users without having a lot of profit. I'm not sure there's a great metric that says, "Once you hit this point, you should be expected to immediately find and remove any problematic content," especially when we know companies of all sizes, but especially startups, are already investing in finding and removing harmful content. I don't think there's a clear threshold where once you cross it, you should have the resources to perfectly moderate content and prevent users from sharing harmful content every time.

Rep. Bob Latta (R-OH):

That's part of the problem that we're going to have going forward: establishing those guideposts out there, the guardrails, of who's that startup, who's that small company, who's that medium-sized company. Because again, we want to make sure that they can flourish out there in the economy. Ms. Goldberg, do you believe small tech companies should be carved out from Section 230 reform legislation or this sunset bill, and why or why not?

Ms. Carrie Goldberg:

I don't believe that there should be any exceptions for small startup companies. Some of the Internet's most malicious websites are small. Omegle was run by one person, and it accommodated 60 million unique visitors a month, matching adults with children for sexual online streaming. Sanctioned Suicide is run by two people. It's a pro-suicide platform that instructs people on how to die. It's visited by children and has single-handedly increased child suicide in this country. There's absolutely no reason that small companies should get some sort of carve out. 99% of businesses in this country are small businesses. No other industry gets some blanket immunity from litigation. Instead, small businesses, I happen to own one, we guard ourselves against litigation by being responsible and not harming people.

Rep. Bob Latta (R-OH):

Okay, well thank you. Mr. Berkman, many of the harms that you raised in your testimony are against the terms and services of tech companies. How will reforming Section 230 encourage platforms to be better stewards of their platforms and create better accountability?

Mr. Marc Berkman:

Yeah, sorry. So the issue now is that with blanket immunity from any sort of liability, we have a severe imbalance in how the social media industry is making its decisions, and they are not weighing public safety sufficiently. And so with reform and adding in that liability, we're going to see a calculus that all other businesses, as Ms. Goldberg just mentioned, undertake when they make their decisions, and that's how we increase public safety in this industry.

Rep. Bob Latta (R-OH):

Let me ask you a question. If we're looking at reforming, do we keep Section 230, or should 230 expire at the sunset if we get a sunset through December 31st of next year?

Mr. Marc Berkman:

I really think given the extreme amount of harms, we're really facing a public health catastrophe, especially for adolescents, that reworking the entire system using a comprehensive package that I know the committee is working on and considering now, I believe is essential to alleviate the harm that we're seeing, mitigate the harm that we're seeing.

Rep. Bob Latta (R-OH):

Well, thank you. I'll yield back the balance of my time and recognize the ranking member of the subcommittee for five minutes for questions.

Rep. Doris Matsui (D-CA):

Thank you very much, Mr. Chairman. As a grandparent, I can tell you one of the things that concerns me the most is the treacherous online environment America's young people are forced to navigate, from cyberbullying to content encouraging disordered eating or self-harm. It's simply naive not to acknowledge the connection between the rise in social media and the harm to America's youth. Mr. Berkman, can you describe the strategies online platforms use to target and push harmful content on children?

Mr. Marc Berkman:

Absolutely. So there's a range of features out there that we find incredibly problematic. There are features that are meant to keep children using platforms. That's how these platforms make their money. So the thumbs up, the likes, the shares, the view counts, all of that is geared towards getting children to come back and watch again and again. The algorithms are designed off of what gets high engagement and unfortunately, it's that dangerous salient content that often gets that high level of engagement. So sexually explicit material, extreme violence, that's what human beings tend to watch and that's what the social media platforms use to increase engagement via their algorithms among many other features, by the way.

Rep. Doris Matsui (D-CA):

As we consider reforming Section 230, how should we understand its limitations in cases where platforms are knowingly amplifying harmful content?

Mr. Marc Berkman:

In any other industry, if you are knowingly causing harm, you are going to be subject to liability. And so again, we have this very severe imbalance here in business decisions and as Ms. Goldberg mentioned, the jurisprudence of Section 230 has really gone off the rails and included a range of business decisions, feature design, product design that really should never have been included in the concept of 230 and certainly was never the original intent.

Rep. Doris Matsui (D-CA):

Okay, thank you. Ms. Tummarello, I want to take a moment to thank you for sharing your story. It was very important, and that took an immense amount of courage. I understand that. Without women like you stepping up to share their stories, lawmakers wouldn't fully appreciate what's at stake here. In a post-Dobbs environment, more and more women are being forced to turn to online communities for seeking reproductive care and advice. Ms. Tummarello, given your own personal and professional experience with pregnancy loss, what effect do you believe a full sunset of Section 230 without a viable replacement could have on a woman's ability to find communities for reproductive care online?

Ms. Kate Tummarello:

Thank you so much for the question, Congresswoman. This happened to me two years ago, but I have remained an active member of these online communities since then in the hopes of paying it forward, because those women helped me so much. And again, every day that I log onto the forums or check a Facebook group, I see a woman express fear about being able to seek the care she needs. I worry deeply that those women won't have the platform to express that fear, that they won't have the platform to find the support and information they need. And I think this is just the perfect example of the kind of content at stake for women who suffer from pregnancy loss. I don't think anyone's trying to harm them. I don't think anyone's intending to repeal 230 to get at women like me, but I do really worry that's the unintended consequence here, and I don't want the women who are dealing with this today and in the future to not have the resources I had and the community that I was able to lean on, because again, for me it was lifesaving.

Rep. Doris Matsui (D-CA):

Okay, well, thank you very much. In California and other regions of the country, judges are increasingly interested in hearing cases concerning the design of platforms themselves rather than suits about specific content. These cases represent a novel approach for addressing the harms of social media in a way that isn't immediately dismissed because of Section 230. Ms. Goldberg, while I'm glad these cases are increasingly viable, what are the legal limitations of this approach?

Ms. Carrie Goldberg:

Thank you. My firm pioneered the product liability theory in 2017, and in the Second Circuit, the court said that even using a product liability approach, where we said these features are defective, it still was prone to Section 230 immunity. Now, thankfully in the Ninth Circuit, the courts are saying, "Well, if you don't sue for a publication problem, then it's not vulnerable to Section 230." But I think it's important to remember that when we're talking about removing Section 230 immunity, it doesn't create a pathway where these companies just suddenly are liable. In Ms. Tummarello's heartbreaking situation, it doesn't mean that suddenly the First Amendment doesn't apply to those platforms and they suddenly have to remove all the content. There still would have to be somebody who's injured from a platform decision, and I think it's important that we not confuse-

Rep. Bob Latta (R-OH):

Excuse me, the Gentlelady's time has expired.

Ms. Carrie Goldberg:

Oh, sorry.

Rep. Bob Latta (R-OH):

Pardon.

Rep. Doris Matsui (D-CA):

I'm sorry. The questions I'll ask [inaudible].

Rep. Bob Latta (R-OH):

Okay, thank you.

Rep. Doris Matsui (D-CA):

Thank you.

Rep. Bob Latta (R-OH):

The Chair recognizes the gentleman from Pennsylvania for five minutes for questions.

Rep. John Joyce (R-PA):

Thank you, Chairman Latta and Ranking Member Matsui, for holding this hearing on Section 230, and thank you to our witnesses for giving their time and compelling testimony. As I stated in our hearing last month, as a doctor, as a father, as a grandparent, I understand how important our children's well-being is, particularly their mental health. This is particularly true as more and more children have access to smartphones, the internet, and specifically the content that these devices bring to them. We need to make sure that they are not interacting with harmful or inappropriate content, and Section 230 is only exacerbating this problem. We here in Congress need to find a solution to the problem that Section 230 poses, and I'm glad that we can walk through the potential reforms and the solutions here today. Mr. Berkman, what are the good parts of Section 230, and how can we reform the text so that we get back to the original text and the original intent of Section 230?

Mr. Marc Berkman:

I appreciate the question, and I'll start my response by highlighting Section 230(d). It is a long-forgotten and tragically ignored critical component of the original concept of Section 230, and Section 230(d) requires internet companies, including social media platforms, to give all users notice upon signing up of all commercially available safety software providers. The immunity provisions of Section 230(c) were put in on the back of the understanding that there would be a robust third-party safety software industry out there protecting users, particularly adolescent users.

Rep. John Joyce (R-PA):

Is that robust industry available now?

Mr. Marc Berkman:

There is an industry out there now that is effective. Unfortunately, and what could not have been envisioned in 1996 was that for safety software companies to work, the social media platforms have to provide a level of minimal cooperation. They have to provide data access at-

Rep. John Joyce (R-PA):

Does that cooperation exist?

Mr. Marc Berkman:

It exists among some platforms and not others.

Rep. John Joyce (R-PA):

You used the word robust. Is it robust?

Mr. Marc Berkman:

There are strong companies out there. I would not call the industry robust.

Rep. John Joyce (R-PA):

I think we share those concerns. Ms. Goldberg, what, if any, criminal activity does Section 230 inadvertently continue to facilitate, and how would reforming the law stop that illegal activity?

Ms. Carrie Goldberg:

Well, Section 230 has basically given the tech industry a pass to allow the trafficking of children on its platform, the trafficking of drugs, matching children with predators, the sale of suicide chemicals, inciting violence, and even though there is supposedly a carve out for federal crimes, our DOJ never holds our tech industry responsible for crimes that happen on the platforms. So with the removal of Section 230, we empower the people who are injured to take action where the government doesn't.

Rep. John Joyce (R-PA):

The nefarious acts that you just listed, do you feel that those continue to exacerbate the problems with mental illness that we see in children and young adults?

Ms. Carrie Goldberg:

100%. Especially because these platforms not only turn a blind eye to the bad things that happen, but they often promote them.

Rep. John Joyce (R-PA):

My final question is for all the witnesses. There is debate happening right now that if Congress is going to amend Section 230, whether it should do so on a carve-out or carve-in basis or whether to pursue a more comprehensive approach. Can you speak to the pros and cons of each of these approaches? And I'll start with you, Mr. Berkman.

Mr. Marc Berkman:

I would say we want to see comprehensive significant reform and that's why we're really supportive of Chair Rodgers and Ranking Member Pallone's discussion draft here. It needs to be sunset and reworked. So the pros of doing carve-ins means that the social media industry is at the table and providing real compromise that is reasonable. The pro of starting over again means that we're potentially going to get a very robust system that works. Either way, with the discussion draft, we need them at the table having the discussion as well.

Rep. John Joyce (R-PA):

Ms. Tummarello, from your perspective and from your personal experience, how would you weigh in on this?

Ms. Kate Tummarello:

I think-

Rep. Bob Latta (R-OH):

If I can just mention the gentleman only has five seconds left, so you can make a real quick statement.

Ms. Kate Tummarello:

Okay. I would say generally Engine is wary of carve-outs and carve-ins; we think we want a framework that works for the whole internet. Startups want to grow; they need to be able to know they're not going to have to rework their entire business model when they hit some arbitrary threshold. But as my story illustrated, as a user, free expression across the internet depends on 230. And so picking and choosing who gets 230, I don't think, ends with more free expression and more innovation.

Rep. John Joyce (R-PA):

Mr. Chairman, my time has expired and I yield back.

Rep. Bob Latta (R-OH):

The gentleman's time has expired and the Chair now recognizes the ranking member of the full committee, the gentleman from New Jersey for five minutes for questions.

Rep. Frank Pallone (D-NJ):

Thank you, Mr. Chairman. Just yesterday it was reported, and I mentioned in my opening, that a popular event ticketing platform has routinely allowed users to post messages selling drugs, fake social security numbers, fraudulent online reviews and other illicit things. But the platform didn't just allow these messages to exist on the platform. Using its algorithm, it pushed these posts to vulnerable users. For instance, the platform actively placed posts offering illegal access to prescription drugs like Oxycodone next to genuine events intended to support users struggling with addiction and substance abuse. So again, I think it's pretty outrageous, and we've seen things like this over and over again. I don't think these platforms are going to clean up their act or make their platforms safer without major changes to the law. So Mr. Berkman, sometimes we hear from people that they don't understand the harm we're attempting to address. Is it surprising to you that an online platform would engage in this type of activity?

Mr. Marc Berkman:

Absolutely not. We've been seeing that specific harm for years. DEA Administrator Milgram has said for multiple years now that all the major platforms have issues with illicit drug trafficking. There are also well-known issues with human trafficking and the trading of CSAM. All platforms operating in this space have clear, constructive notice that this is a danger, and it is harming many, many Americans, particularly children.

Rep. Frank Pallone (D-NJ):

Well, thank you. Now I'm going to ask each of you, but just say yes or no because I have other questions for Ms. Goldberg. Do you believe that any company should be able to use Section 230 to escape liability for harms caused by the type of conduct I just described? Ms. Goldberg?

Ms. Carrie Goldberg:

No.

Rep. Frank Pallone (D-NJ):

And Mr. Berkman?

Mr. Marc Berkman:

No.

Rep. Frank Pallone (D-NJ):

And Ms. Tummarello?

Ms. Kate Tummarello:

230 doesn't protect companies that do commit federal crimes, including the ones you described.

Rep. Frank Pallone (D-NJ):

Okay. Now I understand that these platforms allow for speech, expression, and association that has changed the landscape for the exchange of ideas, but in my opinion, free speech isn't a business model. They're selling advertising and making money. And unlike most other industries in America, Section 230 allows platforms to make business decisions without having to give a second thought to whether and how the platform could be used for destructive purposes. So let me ask Ms. Goldberg, and then I'll go to Mr. Berkman again. Ms. Goldberg, it seems like some plaintiffs are beginning to have success in cases against big tech companies on product liability claims. You did actually mention some of your cases, but do you think there's still a need for Congress to sunset Section 230 even with that?

Ms. Carrie Goldberg:

100%, there's still a need. What we're finding is that courts don't know what to do. Their decisions are inconsistent. We still have the Ninth Circuit applying Second Circuit law, and they always say that they're looking to Congress for clarity.

Rep. Frank Pallone (D-NJ):

Okay. So Mr. Berkman, your testimony details that platforms are knowingly causing harm to children. Without fundamental reforms to Section 230, can we expect that platforms are going to do anything more than the bare minimum, even if we place other regulatory requirements on them?

Mr. Marc Berkman:

We can expect continued harm without additional regulatory requirements, including a significant reform of 230, passage of Sammy's Law, and other legislation that the committee is considering comprehensively. They have known about these harms for years. They have provided talking points in these halls and to the press about the actions they're taking, yet the harms continue and the casualties rise.

Rep. Frank Pallone (D-NJ):

Well, thank you. I mean, I don't get too many people that come here and say to me or Chair Rodgers that Section 230 shouldn't be changed. There might be somebody out there that will say they love the status quo, but they don't articulate that very often. But many of them will say, "Well, let's just make the changes. We don't need to sunset it." But obviously you disagree. You think that doing the sunset is important?

Mr. Marc Berkman:

Well, I think we do want to get it right here. I have long had a deep, deep appreciation for the legislative process and the process in Congress, and that means getting all the stakeholders meaningfully at the table, because we want to get the legislation right. And what I really appreciate about the strategy of your discussion draft, in the sunsetting, is that the social media platforms have become so reliant on Section 230 in their dangerous business decisions that providing a sunset will meaningfully get them to the table and get us legislation that works.

Rep. Frank Pallone (D-NJ):

All right, thank you so much. Thank you, Mr. Chairman.

Rep. Bob Latta (R-OH):

Thank you. The gentleman's time has expired and yields back. The Chair now recognizes the gentlelady from Washington, the chair of the full Energy and Commerce Committee, for five minutes for questions.

Rep. Cathy McMorris Rodgers (R-WA):

Thank you, Mr. Chairman. Ms. Tummarello, last Congress, I had a draft legislative proposal to reform Section 230 that included a threshold to ensure the reforms would only apply to the large tech companies and would not affect small businesses or startups. However, small and medium-sized companies still opposed reforms to Section 230 and did not engage in any meaningful conversations. If this bill passes and Section 230 is at risk of sunsetting, will these businesses engage in the process and support meaningful reforms to Section 230 to put control back in the hands of Americans?

Ms. Kate Tummarello:

Thank you for the question, Chair. I should note, Engine is always happy to engage, and we did engage last Congress. I can't speak for any specific companies, of course. I think everyone wants the internet to work better for everyone, and so the companies we work with are always happy to engage, to find ways to make that work, and would just hope that those conversations start from a recognition of what's really at stake when we talk about innovation and expression online. But a sunset, especially at the end of next year when there's still so much disagreement even among members of Congress over what the internet should look like, risks leaving our startups, but also, again, just internet users generally, vulnerable after 2025.

Rep. Cathy McMorris Rodgers (R-WA):

Thank you. I can appreciate that. The intent of this bill is to put a clock on making reforms to Section 230 before it sunsets. With the various proposals out there, there's no shortage of options and I believe that we can come together and that we must start now. What substantial reforms, as a follow up, should Congress consider in modifying the liability protections in Section 230?

Ms. Kate Tummarello:

I think Congresswoman Matsui put it really interestingly, when we talk about modernizing Section 230, not to put words in her mouth of course, but to me, that means keeping the original intent, which was to incentivize good faith attempts at keeping corners of the internet safe and healthy and relevant. And to Ms. Goldberg's point, absent 230, it's not that the platforms would be held liable for the speech, it's that the platforms could very easily be pressured into removing speech people don't like. And again, to my personal story, that scares me when we talk about controversial or vulnerable populations online. So I think anything that maintains the framework that allows a platform to quickly and easily get a lawsuit over user content dismissed, that kind of framework needs to continue as Congress thinks about 230.

Rep. Cathy McMorris Rodgers (R-WA):

Thank you. Ms. Goldberg or Mr. Berkman, do you have anything to add?

Ms. Carrie Goldberg:

I'll add that the problem with Section 230, in any accusation against a platform, is that they say that everything is speech. Basically, if a user has a profile or contributes anything, no matter how much the platform does to develop it, to promote content, or to use its algorithms or generative AI, they say that if a user had any content whatsoever involved, then anything that stems from it should be immune under Section 230. The other thing is that when we're talking about content removal, like Ms. Tummarello is talking about, there still has to be a cause of action. A platform simply succumbing to political pressure or fear over controversial posts doesn't create a tort that anybody can sue under. So I don't actually understand the basis for the concern that suddenly the gateways of litigation are going to open.

Rep. Cathy McMorris Rodgers (R-WA):

Okay. Mr. Berkman?

Mr. Marc Berkman:

I wholeheartedly agree with Ms. Goldberg. I think the fear here is overwrought. Tort law jurisprudence has been around for hundreds of years. All of their businesses work by it. To bring a suit, you need a case that is meritorious on its face. And on the flip side, the concern about removing content aggressively and unnecessarily would indicate that they have the ability to remove all of the illegal, uncontroversially dangerous content that's on there now, if they could start picking and choosing anything that might be in the gray area. The focus here is removing what would be a tort, what is causing severe harm, and ensuring that they're doing that in a way that is not grossly negligent or reckless.

Rep. Cathy McMorris Rodgers (R-WA):

Okay. Yes, one of the main concerns with reforming Section 230 is around the potential of incentivizing frivolous lawsuits from trial lawyers. In the time remaining, Ms. Goldberg, as a follow-up, would you speak to how you believe the landscape would change if Section 230 were reformed or went away entirely?

Ms. Carrie Goldberg:

Rule 3.1 of the Model Rules of Professional Conduct forbids the filing of frivolous lawsuits. We, as lawyers, would be sanctioned.

Rep. Cathy McMorris Rodgers (R-WA):

Okay. More to come. I yield back.

Rep. Bob Latta (R-OH):

Thank you. The Gentlelady yields back. The Chair now recognizes the gentleman from Florida's Ninth District for five minutes for questions.

Rep. Darren Soto (D-FL):

Thank you, Chairman. The year was 1996. We marveled at the groundbreaking special effects of Independence Day. Weezer released the indie cult classic album Pinkerton, a personal favorite. And Section 230 was born out of a concern to protect the nation's internet service providers that came out of Stratton Oakmont v. Prodigy Services. Remember those guys, right?

Ms. Carrie Goldberg:

Yeah.

Rep. Darren Soto (D-FL):

It's a New York case that found that internet service providers could be found liable for content posted on their websites. And so 230 protected ISPs from liability for content posted by third parties. Since then, internet companies have become a powerhouse of innovation in our economy. They also developed algorithms that amplify posts, so they aren't always passive participants in their platforms. And the harms, we know, sadly, are many: enabling sexual assault, bullying, identity theft, addiction, anorexia, and more. And so we see a sunset bill that's been filed by the Chairwoman and our Ranking Member that is a bold move and maybe the road we're headed down. I believe the key is to adopt comprehensive principles: a duty to protect identity and personal information, preventing crime and libel, and especially stopping exploitation of our kids.

Our major tech companies are some of the most innovative companies in America. My challenge to them is to work with us to develop these principles and the tools to enforce them. Protecting our fellow Americans and improving trust in popular platforms is good business, and it's the right thing to do. Locally in Central Florida, we saw an innocent UCF student, Alex Bugay, whose identity was stolen in an account that was created to criticize and make racist comments towards a Georgia state legislator. This action wrecked his life. He was fired from his job. He almost got kicked out of school, and we saw it become a huge issue in Central Florida. Ms. Goldberg, we just filed the SHIELD Act last week, which would create a duty to take down posts where someone's identity was stolen, whether directly or by use of a social media account. Right now, if a bill like this didn't pass, what recourse would Mr. Bugay or others have?

Ms. Carrie Goldberg:

Well, right now, without the SHIELD Act that you're proposing, the man in Florida would have no rights to go after the platform that was knowingly publishing the personal content. I mean, the same thing happened to my client, Matthew Herrick, where somebody was impersonating him on a dating app, and sent over a thousand men to his home thinking that he wanted them to fulfill his rape fantasies. He was thrown out of court.

Rep. Darren Soto (D-FL):

Well, thank you. We see, from subject matter jurisdiction to negligence and other legal concepts, this common principle: that affirmative acts can give rise to duties and liabilities under law. And so my second question to you is, where is the line between being a passive platform, simply hosting a virtual town square, and being an active participant in a platform, really availing itself of duties?

Ms. Carrie Goldberg:

Every single platform that gets sued will say that they're just a forum for speech, that they're just a conduit. So the issue is that until they're accused of doing something wrong, they're going to just... I mean, they're going to deny that they're active, no matter what.

Rep. Darren Soto (D-FL):

Mr. Berkman, we know algorithms are some of that proactivity. What are some of the other proactive ways social media platforms and internet providers can become more active participants than merely hosting a platform?

Mr. Marc Berkman:

It's a really basic one that's not as techy as algorithms. We see this all the time where there's a severe case, like the one in Georgia, and it's getting worse with deepfakes out there as well. The impersonated victim, the target, makes multiple reports to the platforms, and they ignore it, don't respond. Notice is also a basic feature of tort law, so that's actual notice and ignoring it. There are other features, like Snapchat's Quick Add feature, that link children to dangerous predators and drug dealers. All these features are causing harm, and they're not necessarily connected to the free-flowing content that we saw in 1996, which had no algorithms or features keeping people on the sites.

Rep. Darren Soto (D-FL):

Thank you. And I think the key is that we see common principles, both in negligence and in defamation, whether it's newspapers or duties for different businesses, that really can guide us should we not see a wholesale sunset. And with that, I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentleman's time has expired and yields back. The chair now recognizes the gentleman from Florida's 12th District for five minutes for questions.

Rep. Gus Bilirakis (R-FL):

Thank you, Mr. Chairman. I appreciate it. As a conservative Republican, I generally err on the side of industry self-regulation, in which competition and reasonable self-governance standards are typically the best way to incorporate business needs as well as consumer expectations. In my opinion, the heavy hand of government should be a last resort. Ms. Goldberg, over the last several years, Congress has discussed with online platforms the concerns over their failure to be good stewards of their platforms in light of Section 230. Has industry made any reasonable changes or offered any meaningful solutions in addressing people's concerns, and do you think it's necessary to pass a Section 230 repeal for social media companies to meaningfully engage on this particular issue?

Ms. Carrie Goldberg:

Thank you for this question, and I think it's important to remember that Section 230 is a regulation. I feel that in the 10 years that I've been litigating against tech companies, there has been no meaningful reform in their day-to-day operations. If anything, the products have become more sophisticated with algorithms and generative AI, and there's more pronounced advertising, data mining, and targeting of children. I feel, if anything, the absolute urgency of reform is now.

Rep. Gus Bilirakis (R-FL):

Thank you again. You mentioned the kids, and this is what this question refers to. Section 230 has been a one-size-fits-all approach to liability protections, regardless of the harm caused and the individuals that are harmed. As this committee has developed privacy legislation, we have incorporated a higher degree of protection for children because of their unique vulnerabilities online. I'd like each witness, if you can, please, to answer this particular question: in whatever policy takes shape as a replacement to current Section 230 protections, do you think that we should consider different liability protections for social media companies when they engage with children than those made for adult consumers, and why? So we'll start with Ms. Goldberg, please. Thank you.

Ms. Carrie Goldberg:

I do think that we have to treat companies that target children differently. They're looking at children as a mass market, and the sooner they can get a child on their platform, the more they can control how much time and attention that child devotes to their platform, and they might have a customer for life. The harm that we see to children, who don't have the ability or the knowledge to cope with emergencies, who maybe are being blackmailed and are afraid to tell their parents, is extreme.

Rep. Gus Bilirakis (R-FL):

Thank you. Mr. Berkman, please.

Mr. Marc Berkman:

Mine would be yes. Millions of children are being harmed by social media, and that's clear in the data. It is a public health catastrophe for our children, a full range of dangers. And so I do think that we need to change the calculus through liability for platforms that allow children on, and that includes allowing them on by their terms and allowing them on de facto, as in not doing any sort of verification. So my answer is yes. In terms of that regulatory package, again, it really needs to include bills like Sammy's Law, which would specifically and significantly increase protection for children if a social media platform is allowing children on. That needs to be something in the mix as well, and COPPA 2.0, APRA, those provisions on data collection for children, also need to be in the mix.

Rep. Gus Bilirakis (R-FL):

Thank you very much. I appreciate that.

Mr. Marc Berkman:

Thank you.

Rep. Gus Bilirakis (R-FL):

So Ms. Tummarello, same question.

Ms. Kate Tummarello:

Thank you for the question. I think, generally, when we think about startups, the relatively small number of startups we work with that are truly aimed at children know that they're dealing with children, and I think it generally makes sense, as with COPPA, to have a different framework for dealing with children when that's who you know your users are. What we worry about is this kind of bleeding into general-audience platforms that have no way of knowing whether they're dealing with children, and to the point on age verification, we're especially concerned about startups being forced to collect additional information from users. Imagine signing up for a new service you've never heard of before and being asked for your driver's license. It might put you off from using that service, and so that would really harm startup growth. But to the point of your question, when we're talking about child-targeted platforms that know they're dealing with children, generally, different rules of the road make sense.

Rep. Gus Bilirakis (R-FL):

Thank you very much. I appreciate it. I yield back, Mr. Chairman.

Rep. Bob Latta (R-OH):

The gentleman's time has expired and yields back. The chair now recognizes the gentleman from California's 29th District for five minutes for questions.

Rep. Tony Cardenas (D-CA):

Thank you, Chairman Latta and Ranking Member Matsui, for holding this hearing and bringing us together, and I appreciate the witnesses sharing their opinions and expertise in full view of the public about today's issue. I'm glad that we're discussing a path forward to holding online platforms accountable in the form of the chair and ranking member's proposal to sunset Section 230 today. I hope that we go beyond just discussion and actually take some action. In my time in Congress, I've participated in multiple hearings where we've heard repeatedly from CEOs of large online platforms that they take our concerns very seriously and are working hard to address them. I don't believe them. What has consistently followed is that these companies, often worth billions of dollars, and in some cases a trillion dollars or more, have failed to meet the moment, to address the societal harms that are proliferating on their platforms.

As we've heard from our witnesses' testimony today, there are very real risks to public health in allowing things to continue as they've been. There are also risks to our democracy, as authoritarian adversaries abroad have repeatedly demonstrated that they are willing and able to fill our online spaces with false information designed to push their interests and undermine our institutions. While I wish we could better depend on American companies to help combat these issues, the reality is that outrageous and harmful content helps drive their profit margins. That's the online platforms. I'll also highlight, as I have in previous hearings, that the problem of harmful mis- and disinformation online is even worse for users who speak Spanish and other languages outside of English, as a result of platforms not making adequate investments to protect them.

I have a question for Ms. Goldberg. In your testimony, you referenced the sophistication of the technology we're dealing with now as compared to when Section 230 was created. Specifically, you mentioned the internet's ability to overturn elections, spur genocides, and coordinate government takeovers, among other large-scale societal harms. In Congress, it can often take a long time before we take a second crack at getting something right. Ms. Goldberg, as we consider how to implement a regulatory framework that can effectively deal with these threats, how do we make certain we are writing policy that can keep pace with technological innovation, and what blind spots should we be looking out for?

Ms. Carrie Goldberg:

Thank you for this question, and I think it's a glaring travesty that social media companies invest so much into English-language content moderation at the expense of other languages. I think something like 87% of content moderation on Facebook is in English, and that was revealed by whistleblower Frances Haugen. When we're thinking about Section 230 and how to modify it, I think instead of drafting laws that are specific to technology, which is always going to change, we have to be thinking about harms and the duty that platforms have, instead of the technology.

Rep. Tony Cardenas (D-CA):

Thank you. Public shaming has happened in these committees, where we have CEOs in front of us and they give us all kinds of somewhat apologetic answers or what have you. Mr. Berkman, do you have any thoughts on how we can ensure that platforms make more equitable investments in moderating harmful content in languages other than English?

Mr. Marc Berkman:

Yeah, I really appreciate that question, and I also appreciate the sentiment, because you're hearing from the industry, and they're telling you that they're working very hard. They probably also told you that they're using proactive detection measures and industry-leading techniques. The problem is that millions of children are being harmed, and many are dying, on social media. The question of equity goes to sufficient trust and safety processes and staffing. And when we're talking about supporting innovation here for smaller businesses, we have to realize that every other industry in this country is subject to our historical tort law, and that innovation happens within that context, because it requires the balancing of the profit motive and safety. And so, in terms of ensuring that trust and safety staff and AI algorithms can keep pace with evolving risks in other languages, that is a basic trust and safety operation, and the failure to properly staff it is negligent or even reckless. And so amending and reforming 230 and the liability protections here is essential.

Rep. Tony Cardenas (D-CA):

Thank you for your testimony and your answers to my questions. My time has expired. I yield back, Mr. Chairman.

Rep. Bob Latta (R-OH):

The gentleman's time has expired and yields back. The chair now recognizes the gentleman from Michigan's fifth District for five minutes for questions.

Rep. Tim Walberg (R-MI):

Thank you, Mr. Chairman, and thanks to the panel for being here. Section 230 has allowed the internet ecosystem to grow and thrive in the United States, for sure, but after three decades, it's time that we evaluate whether the current model is still benefiting our constituents and businesses. As technology has evolved, the problems consumers face online have evolved and expanded, and we've talked about that today. Illegal activity and harmful content seem to be rampant on big tech platforms, especially impacting the mental health, safety, and security of our children and teens. We've also heard many stories from our constituents who have had their content censored, taken down, or flagged without real reason or recourse, and that's a problem. But we can't throw the baby out with the bathwater. We need to create an environment that allows the US to continue leading in innovation, one that works for consumers and businesses alike, protects children, and increases the freedom of expression online. All of that being a good thing.

Ms. Tummarello, in your testimony you discuss why sunsetting Section 230 is the wrong approach. I also have some of the same concerns. In your opinion, what is the right approach, and do you think the status quo is sustainable, especially when we see so many harms occurring on these platforms?

Ms. Kate Tummarello:

Thank you for the question, Congressman. I mean, obviously Congress is very interested in talking about an alternative framework, so clearly the status quo isn't working for the members of Congress who get to write the laws. I think anytime we're talking about 230 reform, a sunset is a very blunt tool, and appreciated as a negotiating tactic because-

Rep. Tim Walberg (R-MI):

And it's supposed to be.

Ms. Kate Tummarello:

Yeah, yes, but as a law, it's quite a way to make a change. I think we would love to see Congress engage in a nuanced conversation that starts with an understanding of the way that the internet and content moderation online work for all platforms. A lot of the conversation today is focused on big tech, which is understandable; they have a wide reach. But Engine works with thousands of startups across the country, in every district represented here, and those are the companies that really need 230. So until we can start from a place where that's the constituency we're worried about protecting, I worry that a sunset brings people to the table, but not in time to get something done before the end of next year.

Rep. Tim Walberg (R-MI):

Okay. Well, time will tell, I guess, with the sunset, but hopefully the table will be full of people aggressively working toward the solution. Mr. Berkman, I appreciate you mentioning my bill, COPPA 2.0. As you note in your testimony today, Section 230 reforms would obviously address content, but to fully safeguard young people online, why is it also important to increase privacy protections specifically for children and teens?

Mr. Marc Berkman:

Yeah. Well, first of all, I think with 230 reform, as we've talked about, the jurisprudence has gone far beyond what I think 230 originally intended. So that is design features, that is marketing to children through content, and other harms that are happening through social media that weren't within the original intent of just posting on an old-school website. Privacy is particularly concerning because of the amount of information that children are sharing, including significant information that can be used for a range of harmful purposes, from manipulative advertising to exploitation and extortion. So we really support COPPA 2.0, especially the eraser part of it and the ability to delete existing information that young people are posting and then regretting, and being harmed by, later on in life.

Rep. Tim Walberg (R-MI):

Yeah. And parental involvement as well.

Mr. Marc Berkman:

Yes.

Rep. Tim Walberg (R-MI):

Thank you. Ms. Goldberg, as I said in my opening, technology is always changing and new challenges come with it and it's happening faster all the time. During our last Section 230 hearing, we heard from three professors that Section 230 should not apply to generative AI. Do you agree and why?

Ms. Carrie Goldberg:

I do agree that Section 230 should not apply. If we read it the way it was intended, generative AI content is generated by the platforms, and Section 230 was not intended to immunize platforms for their own content.

Rep. Tim Walberg (R-MI):

I'll have to ask a further question about chatbots and ChatGPT specifically. We'll submit that for the record. I yield back.

Rep. Rick Allen (R-GA):

The gentleman from Michigan yields back, and now I recognize Representative Fletcher from Texas's Seventh District for five minutes.

Rep. Lizzie Fletcher (D-TX):

Thank you so much, Mr. Chairman. And thanks to Chairman Latta and Ranking Member Matsui for holding today's hearing, and to our witnesses for testifying today. I just want to start my questions with a response to... Ms. Tummarello, thank you for sharing your perspective and your personal story about your miscarriage and the importance of access to information and support. I'm very sorry for your loss. There is certainly a need for access to medically accurate, real-time information about pregnancy, pregnancy loss, and reproductive healthcare more broadly. I would submit, however, for this committee's consideration, that the answer lies not in Section 230, but in passing the Women's Health Protection Act, which has been referred to this committee for consideration, so that women in the United States can have access to the full range of reproductive healthcare and accurate information about it. I appreciate your response to Rep. Matsui's question, but I disagree that people aren't trying to hurt pregnant women and women experiencing pregnancy loss.

That is exactly what legislators in my home state of Texas and others are doing, where extreme legislators are criminalizing pregnancy. They are preventing access to medically necessary miscarriage management and access to medications like Mifepristone that are used in miscarriage management, when women who are having miscarriages are going to emergency rooms and being told to wait outside. They are being told to come back when they're sicker, when they have sepsis, when they are on the verge of death. The other thing we see is that they're even empowering random strangers, giving them standing to sue people who may have been pregnant and anyone who helps them, anyone including their doctors, in states where abortion is illegal, like mine. So your testimony that over the last couple of years you have seen the fear in these discussion groups, I think, is incredibly powerful and really important for this committee to understand, and I thank you for sharing it.

In my view, the answer does not lie in Section 230, however; it lies in protecting the health, dignity, and freedom of all women in the United States, and we do that by passing the Women's Health Protection Act, and I hope this committee will take that up. It is in our jurisdiction; the last Congress passed it twice, and it's time that we do it again. With the time I have left, I do want to focus on some of the Section 230 issues. And Ms. Goldberg, I really want to direct my question to you, and give you the rest of my time to answer it, around some of the litigation questions. Because I, too, am a lawyer, I understand very clearly what you're talking about when you talk about some of the procedural challenges, but I'm hoping you can explain it for the record and for those watching, because like you, I absolutely, fundamentally believe that our legal system and our ability to seek justice and accountability in the courts is essential to the functioning of our society.

And I would appreciate it if you could take the time that I have left, about two minutes, and explain very generally how Section 230 operates today as an immunity from suit, as opposed to, say, a defense or an affirmative defense, and how that impacts the discovery process and other things, what it prevents you from being able to do that you might expect in another kind of case. I think that would be really helpful.

Ms. Carrie Goldberg:

Thank you. And I couldn't agree more with you about the Women's Health Protection Act. So, Section 230 is not sexy. It's a procedural act that defendant corporations basically use at the earliest stage possible, in a motion to dismiss. We file a pleading telling exactly how our client was injured through a product's features, and then they file motions saying, "We're just a forum, you're suing us for speech." And then a judge decides it. What happens is that the cases get thrown out at the earliest stage, without the opportunity for discovery. So we never know exactly how much notice the platform had about the harm, or how many other similar incidents there were. They don't have to give up any information, and that's what's so fundamental to our civil justice system: it's all about sharing and exposing the information of bad acts.

So these companies really get to continue hurting people in the same way, and really benefit from this informational imbalance, where they know how much they're hurting people and how many similar incidents there are, but victims have no idea that there have been 1,000 people who purchased the same suicide product before them.

Rep. Lizzie Fletcher (D-TX):

Well, thank you so much. I'm running out of time, so I thank you for your answer explaining how this is used, and I think it's something for us to consider as we look to Section 230 reform. So with that, thank you, and I yield back.

Rep. Neal Dunn (R-FL):

The gentlelady yields back, and I recognize myself for five minutes of questions. I believe all my colleagues on this committee agree we want the internet to remain a relatively free and open place. Since 1996, Section 230 has operated under a light-touch regulatory framework, allowing companies and online providers to moderate content under a liability shield. Today, our internet and its regulatory framework are under attack. The American public gets very little insight into decision-making processes when content is moderated, and users have little recourse when they're censored or restricted. Recently, Americans experienced a great deal of online policing from big tech during the last presidential election. For example, users saw platforms like Twitter and Facebook immediately cut stories from being shared or talked about by the users on their platforms at the request of our own government. It's Congress's job to ensure that big tech companies are not obstructing the flow of information to benefit a political agenda, and to ensure a free and competitive news market.

It's our job to promote transparency and truth. As a member of the Select Committee on China and the Speaker's AI Task Force, I have major concerns with the risks our internet ecosystem faces from the Chinese Communist Party and other adversarial nations as well. Our younger generation has never been more targeted by foreign propaganda, illicit online activity, misinformation, and mental health harms than they are right now, without critical reforms to Section 230. Mr. Berkman, I was recently at a conference where some major players in the generative AI space were speaking. They were all very hesitant to discuss what data their algorithms were trained on, but they were very clear that they didn't want to be held liable for the output of those algorithms. If we clarify that Section 230 protections do not apply to generative AI outputs, would that incentivize these platforms to invest in higher quality data for developing AI, perhaps more transparently?

Mr. Marc Berkman:

Yes. Again, the immunity provisions that have been in place since 1996 create a severe imbalance in the business decision-making that every other industry is subject to, that balance between profit and safety. And there is a danger in AI now, from our perspective, beyond the dangers you discussed. We've seen AI on social media platforms recommend to 13-year-olds that they should engage in adult relationships. Really concerning coming out of AI and-

Rep. Neal Dunn (R-FL):

Clearly dangerous stuff. I agree, and thank you for that testimony. Ms. Goldberg, in keeping with this AI topic, how do you think that holding generative AI firms liable for the outputs would affect their behavior?

Ms. Carrie Goldberg:

Well, I think that the pressure of litigation would motivate anybody who's in the business of generative AI to develop safer products and to consider the predictable ways that they could cause harm.

Rep. Neal Dunn (R-FL):

I agree with you. Thank you. Ms. Tummarello, I believe Section 230 currently also makes it possible for new competitors to enter the market and attract investment. You have some first-hand experience working with thousands of startups across the country, and you understand the importance of Section 230 for these companies. Can you tell me what's going to happen to the small guys under a Section 230 sunset?

Ms. Kate Tummarello:

Yeah, absolutely. Thank you for the question, Congressman. We are truly concerned that small platforms will not just have to deal with litigation as they currently stand, but that it'll be much harder to launch a small platform. We've heard investors say to us, and we have some data on this that I'm happy to follow up with, that when they're looking at investing, they invest in companies and startups where the money will go to the product, where the money will go to adding user value. If the money has to go to a legal defense fund, if the money has to go to a defense attorney, they're not interested in investing, and they've cited current intermediary liability frameworks as something that gives them confidence to invest in startups that host user content. So we're really concerned that not only will the startups we know today have a tougher time existing, but the next generation of startups that host user content will have trouble getting off the ground.

Rep. Neal Dunn (R-FL):

So we should get in there and do something about 230, I guess. So Mr. Berkman, I appreciate the attention you've placed on social media harm towards youth throughout this testimony. You lay out in your testimony that reforming Section 230 is the correct response. Can you explain how it will not unintentionally also silence free speech on the internet, even controversial speech? There's controversial [content] that shouldn't be allowed and some that should.

Mr. Marc Berkman:

I'm sorry, could you repeat that last part?

Rep. Neal Dunn (R-FL):

Yeah, so I just wonder, how does it affect free speech online?

Mr. Marc Berkman:

We have hundreds of years of established tort law jurisprudence, and what we're talking about here is negligent, grossly negligent, and reckless business decisions on the part of the social media platforms. And so we believe the fears are really overblown in terms of the impact on free speech that is happening over social media.

Rep. Neal Dunn (R-FL):

Thank you very much for that. My time has expired, so I will yield back, and we will call on Ms. Dingell, please, for questioning.

Rep. Debbie Dingell (D-MI):

Thank you, Mr. Chairman. And I want to thank the committee for holding this legislative hearing today to discuss how Congress can properly address the harms present on the internet today, and thank you to the witnesses for testifying. Every day, we are impacted by the decisions that tech companies make in deciding what content on their platforms should be promoted, recommended, monetized, and more. Whether it's cyberbullying, mental health issues, explicit threats, or the spread of false information, a platform's business decisions can cause tangible harm. Currently, Section 230 of the Communications Decency Act essentially provides tech companies with a legal safe harbor for all user content on their platforms. Courts have interpreted Section 230 to grant tech companies broad immunity, allowing them to evade accountability for what occurs on their platforms. Congress does need to reexamine Section 230, and that's what we're doing here today. The internet has changed dramatically over the past several decades, yet Section 230 has remained virtually unchanged for nearly 30 years, except for a 2018 law that exempted sex trafficking content from 230's reach.

Section 230 deserves real scrutiny, and we must strike a balance, preserving free expression while ensuring companies and platforms are accountable to their users, especially vulnerable populations like our children, for the decisions they make, from how they design their platforms to how they monetize the content on them. We need to hold tech companies accountable when they fall short. So let me start with Ms. Goldberg. Yes or no: are companies currently incentivized to promote provocative, even potentially harmful content to increase user engagement?

Ms. Carrie Goldberg:

Absolutely. Harmful content is hugely lucrative. It increases engagement and time on the app, and then these companies can sell more ads. Oftentimes they sell ads specific to the worst content.

Rep. Debbie Dingell (D-MI):

So then, without Section 230's liability protection, would these companies still be incentivized to do so?

Ms. Carrie Goldberg:

Yes. The removal of Section 230 would incentivize companies to make sure that people don't get injured because those injured people could hold them liable. We want that.

Rep. Debbie Dingell (D-MI):

Okay. Mr. Berkman, should tech companies be held accountable for content they promote, recommend, or amplify on their platforms that causes offline harm?

Mr. Marc Berkman:

In cases where you can show negligence or gross negligence. And again, amplification is a business decision on the part of the platform. It is not a level playing field or a virtual town hall amongst its users. The platform has decided to amplify harmful content for revenue.

Rep. Debbie Dingell (D-MI):

So Mr. Berkman, how then does Congress hold these companies accountable and also incentivize companies to implement responsible algorithms and platform designs?

Mr. Marc Berkman:

The range of comprehensive reforms being considered by this committee is a major, essential step. First of all, significant reform, a reworking of Section 230 as we're discussing today, is a big piece of that. And again, I'll come back to Sammy's Law, which is essential protection for the family and the home when the algorithms fail.

Rep. Debbie Dingell (D-MI):

So Ms. Goldberg, I'm not going to have much more time. So could you provide further insight into the liability gaps that have emerged from Section 230 and how Congress can address them?

Ms. Carrie Goldberg:

Yes. I mean, the main liability gap is that these companies say that everything is content, and that includes dating apps, it includes algorithms, situations where these platforms are incredibly complex, with geofencing, generative AI, data harvesting, photography, and yet they say that they're just a forum for speech, that they're not products or services. We have companies that say that their own terms of service and contracts don't apply to them because users don't rely on them. They look for every single way to get out of court that they can. The liability gaps are what we're all just, I mean, it's a hole that we're all falling into.

Rep. Debbie Dingell (D-MI):

Thank you. I'm out of time, but this is a very important hearing and we must update our current law. Thank you, Mr. Chairman and I yield back.

Rep. Randy Weber (R-TX):

Thank you very much. The gentlelady yields back and we recognize Mr. Carter of Georgia for five minutes.

Rep. Buddy Carter (R-GA):

Thank you, Mr. Chairman. Thank all of you for being here. We really appreciate it. Mr. Berkman, I want to thank you especially for your help, and with my office, on Sammy's Law. I'm the lead sponsor on that legislation, and it's good legislation, and we couldn't have gotten to the point we're at now without your help and your office's help. So thank you for that, and I hope we see that move through the committee process. It needs to, very much so. You've heard it said so many times here already: 30 years ago, where were we, and how much we've evolved since then, and how the Internet's evolved over that time. I would suggest to you that the Internet's one of the greatest inventions of all time, and we've witnessed that, but we know we need to do something. We know that we've got to address this situation as it exists now, because the 230 of 30 years ago is not as relevant now as it was then, and practically not relevant at all. I've got a vested interest in this. I'm a father of three, a grandfather of seven with the eighth one on the way.

I want to make sure we get this right. I want to make sure we get it right for my grandchildren. And part of the problem, and part of the blame, I believe, is with Section 230, because it is kind of a free-for-all on the internet. Now, don't get me wrong, I don't want to stifle innovation. This is not easy, and this is important. We need to get it right because we don't want to stifle innovation. We need to continue to have innovation, but at the same time, there's got to be a sweet spot there, and we have to find it. Ms. Tummarello, I want to start with you. You talk about the content moderation, or lack thereof, that could happen if Section 230 were sunset. Either companies may leave up most of the content, or they may over-moderate the content. Why don't companies know? I mean, surely they can guesstimate what they're going to do. They ought to be able to know what they're going to do.

Ms. Kate Tummarello:

Thank you for the question, Congressman. I think this gets back to 230 as kind of a legal shortcut to the inevitable legal conclusion, which is that in the vast majority of cases, startups especially, but all internet platforms, don't have knowledge about every piece of content every user shares. And while yes, 230 is almost 30 years old, the internet has only grown in scale and scope since then, and so there's even more content to try to figure out. Courts, at the end of the day, in the kinds of cases we're thinking about, would likely treat platforms the way they treat bookstores, which don't have to be responsible for every page in every book on their shelves.

And so to avoid having distributor liability, platforms might bury their heads in the sand and say, we don't have knowledge of this. We didn't know about this harmful content. We're not looking for it, we're not finding it. We don't know about it. We can't be held liable in court. I think that cuts against a lot of what this committee is looking to do in creating a safer, healthier internet. And so that's one of the consequences of sunsetting Section 230. To your point, the other side is some platforms who are really invested in getting this right might over-remove and we might see lawful productive expression taken down online and that's also a concern.

Rep. Buddy Carter (R-GA):

Okay. Mr. Berkman, in your testimony, you directly quote several social media executives acknowledging the harms caused by their platforms, and we frequently hear that platforms don't have the capacity or the manpower to effectively protect their users. What's it going to take? What's it going to take for the platforms to rise to the occasion?

Mr. Marc Berkman:

Yeah, well, first of all, thank you so much for your leadership on Sammy's Law, along with the other bipartisan co-leads. And therein lies the answer: Congress needs to act and force them to, because we are seeing the prioritization of profits over safety. And when we're talking about innovation, one point I want to make here that's really critical is, if I want to go innovate and create a new type of car, then I am subject to normal tort law. If my business isn't capitalized sufficiently and I can't afford to put in seat belts and a car seat holder, I can't make that car and I can't sell it. And so if companies are going on the market and they're not able to be safe for consumers, especially children, that is an issue, and that's why Congress needs to really rework the regulatory framework here.

Rep. Buddy Carter (R-GA):

Good. Good. I've only got 30 seconds left. Ms. Goldberg, real quickly: Section 230(c)(2) shields the platforms from liability regarding material that is considered to be obscene, lewd, filthy, excessively violent, and otherwise objectionable. Do you think "otherwise objectionable" is too broad?

Ms. Carrie Goldberg:

Well...

Rep. Buddy Carter (R-GA):

And can you give me any examples of how "otherwise objectionable" has been used in court?

Ms. Carrie Goldberg:

I mean, the thing is that (c)(2) is basically canceled out by (c)(1), because everything in (c)(2) is content, and (c)(1) says that you can't sue a platform for content. So it's for decoration.

Rep. Buddy Carter (R-GA):

I remember Justice Stevens, I believe it was him, who said, "Don't ask me to define pornography, but I know it when I see it." I mean, otherwise objectionable. Anyway, I'm out of time, and thank you, Mr. Chairman, and I yield back. Thank you all again for being here.

Rep. Randy Weber (R-TX):

I think the gentleman yields back, and now we recognize the gentlelady from New Hampshire for at least five minutes.

Rep. Ann Kuster (D-NH):

Thank you very much, Mr. Chairman. I want to thank our Subcommittee Chair Latta and Ranking Member Matsui for holding this very, very important hearing about protecting our families. As the founder and co-chair of the Bipartisan Task Force to End Sexual Violence, I'm particularly concerned about reports of online dating apps being used to commit sexual assaults, and about how Section 230 has prevented these survivors from seeking justice. I recognize that Section 230 is the bedrock of our modern-day internet, but Congress has the responsibility, and I think you're hearing it in strong bipartisan terms from this committee, to ensure that these legal protections are functioning as Congress originally intended. We did not intend a wide-open, Wild West internet. The protections that Section 230 provides online platforms should not extend to bad actors, and particularly online predators. Mr. Berkman, I know you're a strong supporter of ending Section 230. If Section 230 sunsets, how do you see this change benefiting children and young people as they navigate the online world?

Mr. Marc Berkman:

Thank you, Representative. I appreciate the question. So again, we're seeing harms exploding on social media that are significantly impacting children. You mentioned sexual predation. The FBI has reported a significant increase in sextortion. Online enticement of minors has increased over 300%, according to NCMEC. And so there is a range, and that's just the tip of the iceberg. Reworking Section 230 so that there is liability for the clear bad-faith business decisions on the part of the social media platforms, which are happening across the industry today, is essential to protect American families, particularly our children.

Rep. Ann Kuster (D-NH):

Thank you so much. And I agree, we need to take action to protect children online, as well as young people and adults. It's a really painful part of our society right now, and we owe that obligation. Ms. Goldberg, in your testimony you mentioned that Section 230 has grown to be a near-absolute liability shield for tech companies. Will sunsetting Section 230 better protect the American public?

Ms. Carrie Goldberg:

Sunsetting Section 230 will better protect the American public. I mean, I thank you so much for your work on sexual violence, and some of the worst actors that we see at my firm are the dating apps, and they're still saying that they're entitled to immunity because they're just passive publishing forums. Since when is an app that matches people, that advertises to children on TikTok and Instagram, that geolocates people, organizes data, promotes algorithms, sells ads, since when are those just passive platforms? Section 230 was for CompuServe and AOL and these obvious publication forums, not for the real-life encounters that dating apps are creating.

Rep. Ann Kuster (D-NH):

So this is for either one of you. As we work to place more accountability on internet platforms so that they better protect individuals including children and our families and promote safe spaces online, what recommendations do you have for Congress to ensure that these platforms best serve the American public?

Ms. Carrie Goldberg:

I'm happy to answer, because-

Rep. Ann Kuster (D-NH):

Go right ahead.

Ms. Carrie Goldberg:

I have some ideas. I think the most important thing is we have to look at the wrongdoers. So there's companies that are just in the business to be malicious, and then there's companies…

Rep. Ann Kuster (D-NH):

Evil intent.

Ms. Carrie Goldberg:

That just have ill intent, companies whose whole product is creating deepfakes, or entire websites promoting suicide. And then there's companies that know that bad actions are happening, like dating apps that accommodate serial sex predators, and just don't care. And so we have to kind of be looking at the level of culpability in these cases.

Mr. Marc Berkman:

And I would jump in there too and speak specifically about the smaller businesses, the startups that we're talking about, that are not sufficiently capitalized to protect users, especially child users. The mantra in Silicon Valley is move fast and break things. Unfortunately, those things are children.

Rep. Ann Kuster (D-NH):

Are children.

Mr. Marc Berkman:

And so we have an app like Yolo, a small app out there that allowed anonymous chatting amongst teens on platforms like Snapchat. Carson Bride, 16, was cyberbullied over that app and died by suicide. And that's the consequence, and it's very clear that that app had no concern with safety.

Rep. Ann Kuster (D-NH):

Thank you. I really am so grateful for this hearing and for the action that this committee will take and I yield back.

Rep. Randy Weber (R-TX):

Gentle lady yields back. The chair now recognizes the gentleman from Texas, Mr. Pfluger for five minutes.

Rep. August Pfluger (R-TX):

Thank you, Mr. Chairman. I agree it's a good hearing, and Ms. Goldberg, I'll kind of pick up where you just left off there. I know that there was a motion to dismiss in a case that you were working, Neville et al. v. Snap, and that was defeated in a Los Angeles court. If you remember, this committee had a roundtable about a year ago, and we heard from Amy Neville about the tragic poisoning and death of her son. And what I want to talk with you about is that you've also litigated the Omegle case, I hope I'm saying that correctly, which ultimately led to the website being shut down for connecting children with sexual predators. So in your first-hand experience, how have big tech companies hidden behind or abused Section 230 to protect themselves from liability regarding drug sales, child sex abuse material, human trafficking, any of those things?

Ms. Carrie Goldberg:

Tech companies just say that it's all content, that it's always the user's fault. If it's a child who was injured, then they blame it on the parents for not supervising, even in cases like Snap, where the product itself prevents parents from overseeing the content that their kids use. And I will correct you: we actually won the motion to dismiss against Snap.

Rep. August Pfluger (R-TX):

Okay.

Ms. Carrie Goldberg:

And we are able to move forward to show liability, and that their product was matching children with the drug dealers who hurt them.

Rep. August Pfluger (R-TX):

Congratulations on that.

Ms. Carrie Goldberg:

Thank you. Thank you. They're appealing.

Rep. August Pfluger (R-TX):

I'm sorry that I had my facts wrong.

Ms. Carrie Goldberg:

They're appealing of course.

Rep. August Pfluger (R-TX):

Yeah, of course they will. I'll go to Ms. Tummarello. When it comes to the sunsetting of 230, maybe just talk me through what a small company should be prepared for, timeline-wise. I mean, how long would they need to prepare for a change like that? What are some of the implications that you see?

Ms. Kate Tummarello:

Thank you for the question, Congressman. So like I said in my testimony, we know that startups already invest proportionally more in content moderation. They absolutely have to if they want to see user growth. The startups in Engine's network are all trying to be responsible, good actors. Content moderation is incredibly time-consuming and expensive. At startup scale, they have to start hiring. I mean, Facebook talks about the tens of thousands of content moderators they have. Google very famously has spent a million dollars and more on their copyright detection software on YouTube. So these are all things that are realistically out of reach for startups. I think if we were to see Congress pass a sunset, startups would spend the next 18 months frantically trying to make sure they knew what every user was doing. It might mean hosting less content, which, again, hosting less bad content would be good for the ecosystem, but hosting less content overall would be bad for startups and bad for their users.

Rep. August Pfluger (R-TX):

Thank you for that. I think the goal of a hearing like this is to find that balance between safety and security, to understand the implications, and I appreciate all of the inputs here. In the last hearing that we had, I talked with Dr. Stanger about how our adversaries may exploit Section 230 to conduct influence campaigns, to recruit and promote terrorism, to sell illicit drugs, a myriad of other things. So I'll go back to you, Ms. Goldberg, I'm sorry, Mr. Berkman. Dr. Stanger argued that by removing (c)(1), the immunity shield, companies would behave differently, which would thus benefit our national security in these respects. In your opinion, what effect would this have both on national security and on the freedom of speech of platforms?

Ms. Carrie Goldberg:

Well, we would still have situations where if somebody's injured because of malicious actors infiltrating and impersonating other people on the platforms, then the injured people without Section 230 would be able to actually hold the platforms liable and there would be pressure on the platforms to not let horrible things happen there.

Rep. August Pfluger (R-TX):

Mr. Berkman, I will go to you. I mean, how do you describe the environment right now? Some would say this is the Wild West; some would say that companies are operating with impunity. How do you describe the environment right now?

Mr. Marc Berkman:

Unmitigated catastrophe.

Rep. August Pfluger (R-TX):

Okay.

Mr. Marc Berkman:

Particularly when it comes to children and the harms that we're seeing. We work in K-12 schools across the United States, and we went over the stats in the testimony in terms of the harms, but what we're seeing on the ground level is horrifying. We had a fifth grader come up to us the other week who was considering suicide because everyone on a social media platform was telling her to kill herself. We had another child addicted to watching execution-style videos on social media.

Rep. August Pfluger (R-TX):

Do you see algorithms specifically targeting these children, these users, with content that they haven't seen before but that is obviously very gruesome?

Mr. Marc Berkman:

That particular example was not an algorithm; it was a platform that allows grouping in servers, and so there was a lack of sufficient safety protocol on that one. Algorithms are certainly causing a lot of other issues, but they're not the only cause of the massive amount of harm.

Rep. August Pfluger (R-TX):

Thank you. I thank the witnesses for being here. Yield back.

Rep. Randy Weber (R-TX):

The gentleman yields back. The chair now recognizes the gentlelady from Illinois for at least five minutes, if she can hold her microphone.

Rep. Mariannette Miller-Meeks (R-IA):

Thank you so much, and thanks for holding this hearing. As I said last month, when we had the last hearing on Section 230 of the Communications Decency Act of 1996, I'm glad that Democrats and Republicans agree that Congress should be reevaluating Section 230 given the growth of the internet and its changing landscape. However, as with everything we do, the details matter, especially in this context. And I apologize if I ask something that's already been asked, because I'm in two different hearings. So Mr. Berkman, as I'm sure you know, May is Mental Health Awareness Month, so I was particularly moved by your written testimony's section regarding how social media is harming children and causing negative mental health outcomes. Could you briefly discuss some of these harms, and whether some of the safety initiatives or measures that social media platforms have launched have really addressed them?

Mr. Marc Berkman:

Yes, I appreciate the question, Congresswoman. We really would recommend that members read the Surgeon General's advisory from May 2023 on the impact that social media is having on adolescent mental health. It is a really exceptional overview of the various studies that are out there showing significant correlations. Just summarizing that a bit: there are studies showing significant correlations between social media use, particularly excessive social media use, so three to five-plus hours a day, and anxiety, depression, and self-harm, including suicide. Among adolescents, there are correlations between social media use and eating disorders, ADHD, substance use, and substance abuse; the list is very long. Negative educational impacts as well. So if there is a negative mental health outcome out there for adolescents, there's almost certainly a good amount of research showing a correlation between social media use and that outcome.

So in terms of what the platforms have done, we would say not a lot. A few have put in some level of time restriction, because our theories as to why social media use is causing that mental health impact depend upon time of use. But the restrictions that the platforms tend to put in benefit the platforms; it's still a significant amount of time. Content moderation, in our view, has not changed a lot. We're looking from the outside, so they don't put out a lot of information for us to study on that. They have not changed much in terms of the features that the platforms employ that we believe are impacting mental health negatively. That's because those features drive engagement and revenue.

Rep. Mariannette Miller-Meeks (R-IA):

Thank you so much for your response. I feel like a lot of the attention around calls to reform Section 230 stems from perceived abuses by big tech and large social media platforms not doing enough to moderate user speech that some find offensive or even dangerous, or from actions of the platforms themselves, like amplifying or monetizing such speech. While I share the general concern that the largest social media companies could likely do more to police their platforms, I expect internet companies large and small to engage in conduct that harms users. After all, it is sometimes the smallest fringe platforms that can cause harm disproportionate to their size. Is it Ms. Tummarello? Okay. I would like you to explain what specific components of Section 230 benefit the thousands of startups and small businesses your organization works with across the country. How do we ensure that bad actors don't benefit from those protections? And to be clear, I ask because, should we set a deadline for Section 230 to sunset, I'm interested in which components ought to be maintained in any new Section 230 proposal.

Ms. Kate Tummarello:

Thank you for the question. And to be clear, Engine is not advocating for a small business carve-out. We definitely agree that some of the smallest players can do some of the most harm. And so when we're thinking about startups and 230, we're not asking for special protections for startups; we're asking for an ecosystem that kind of works for everyone, including startups. And I think the core piece of Section 230 that is controversial but also, I would argue, necessary is the liability limitation that protects internet platforms that host user content from lawsuits and, to the points about tort law, even from threats of lawsuits. Even if a startup were to get sued and win in court, or use their insurance to reach a settlement, or you name it, it will always be the fastest, cheapest option to settle. And settling usually means removing the speech. And so really, Section 230 is the thing that lets users speak online, and it lets platforms create places where they can speak.

Rep. Mariannette Miller-Meeks (R-IA):

Thank you so much and I yield back.

Rep. Randy Weber (R-TX):

Gentlelady yields back. The Chairman recognizes Mr. Allen for five minutes.

Rep. Rick Allen (R-GA):

Thank you, Mr. Chairman, and I want to thank our witnesses for being here today to talk about this important subject, a discussion that has been going on for some time. Certainly our Constitution guarantees the right to own property and the right to protect it, and we're the only nation that actually ensures that in our constitution. And so we're talking about this social media and those kinds of things that are using people's personal property to enrich themselves. And so Ms. Tummarello, in your testimony you argue that sunsetting Section 230 risks leaving internet platforms, especially those run by startups, open to substantial litigation, which ultimately risks leaving internet users without places to gather online. I hear your argument that startups need to focus on innovation rather than litigation. But on the other hand, I'm sympathetic to arguments that Section 230 enables large internet platforms to amplify and monetize harmful content. Should this committee consider sunsetting Section 230 for large internet platforms only?

Ms. Kate Tummarello:

Thank you for the question, Congressman. It's not my job or interest to defend large tech companies, so I won't speak to what a sunset would mean for them. From an Engine perspective, we certainly have startups that use large platforms to reach consumers and users, and their activity could be impacted. But I will just note that the personal story I told in my oral testimony was about how I was able to rely on things like private Facebook groups to access needed support and information. And so I think, from a user perspective, sunsetting Section 230 only for large companies runs the same risk as sunsetting 230 for everyone, and that will harm user expression.

Rep. Rick Allen (R-GA):

I'm interested to see the limits of Section 230 as it applies to internet innovations. For example, we're seeing disruptors and innovators emerge in the search space, like Perplexity, which uses generative AI technologies to produce consolidated responses to search queries. Ms. Tummarello, should AI-generated responses to users' search queries be protected?

Ms. Kate Tummarello:

That is an emerging field not only of technology but also of legal interpretation, and so I am hesitant to take a position. I definitely think there are compelling arguments that it shouldn't be covered by 230. I will say generative AI tends to be talked about in the context of ChatGPT and Midjourney and things that create things that replace human creations. But we have a lot of startups using generative AI for things like chatbot responses for hotel guests that tell you when the pool is open. And so I worry that any conversations about generative AI focused on some of the edge cases would impact the whole ecosystem.

Rep. Rick Allen (R-GA):

Let me ask you about this. What if sources for components of the AI-generated response are provided, so that it's clear the response comprises third-party content? Should Section 230 shield those outputs?

Ms. Kate Tummarello:

I'm sorry, could you repeat the question?

Rep. Rick Allen (R-GA):

What if sources for components of the AI-generated response are provided, so that it's clear the response comprises third-party content? In other words, it's clear. Should Section 230 shield those outputs?

Ms. Kate Tummarello:

I imagine policymakers want to incentivize transparency around AI, and so they would certainly want to incentivize platforms acknowledging and disclosing when something is AI-created.

Rep. Rick Allen (R-GA):

Ms. Goldberg, in your written testimony you noted that the financial pressures typically imposed by consumer safety standards are almost non-existent. Consequently, online companies have no incentive to prevent injuries, intervene when harm is underway, invest in infrastructure and staffing to moderate harm, or innovate for safer products. Some have proposed that sunsetting Section 230 would incentivize internet firms to prioritize amplifying professional content, like news content, because they would have greater confidence in it; legally sound news content must go through an editorial process prior to publication, after all. Could this sunset be beneficial for consumers, since outputs might be safer?

Ms. Carrie Goldberg:

The sunset would definitely prioritize consumer safety. As I listened to Ms. Tummarello talk about the pressures that startups have and the burden that they face, if they have to spend a lot of money on content moderation, I have no sympathy for that. If you are in the business of creating a product or a service that is supposed to attract lots and lots of people and you don't have an infrastructure that can responsibly prevent those people from being-

Rep. Rick Allen (R-GA):

Yeah, you need guardrails.

Ms. Carrie Goldberg:

Then that's where the innovation needs to be.

Rep. Rick Allen (R-GA):

Right. Exactly. Well, I am out of time. I have additional questions which I will submit to you for the record. And with that Mr. Chairman, I yield back.

Rep. Randy Weber (R-TX):

Gentleman yields back. Chairman recognizes Rep. Yvette Clarke (D-NY) for at least five minutes.

Rep. Yvette Clarke (D-NY):

Well, thank you very much, Mr. Chairman, and I thank our Ranking Member Matsui for holding this very important hearing. I thank our expert witnesses for joining us today to examine the proposal to sunset Section 230 of the Communications Decency Act. Originally, we enacted Section 230 to regulate obscenity and indecency online when the internet was still in its infancy. Section 230 has since transformed into an all-encompassing shield used to protect big tech firms from accountability for the harms caused by their platforms and moderation policies. It's just so evident. This is due at least in part to overly broad interpretations of the law by federal courts, as well as flawed incentive structures. Pairing this overly expansive interpretation of Section 230 with most companies' online monetization policies, wherein engagement drives data collection, which in turn drives revenue via ad dollars, has resulted in an internet ecosystem that incentivizes the creation and rapid dissemination of increasingly outrageous or extreme content without considering its veracity or potential for real harm.

In short, for many big tech firms, we are the product. Collection of our personal data is their big moneymaker, and the promotion of harmful content, even disinformation, has become part of the business model. This status quo cannot stand. And the proliferation of generative AI tools only underscores the urgency and the need to create new incentive structures for big tech platform providers and how they operate. AI-generated content can now be created and spread across the globe in a matter of minutes. We cannot sit back and just hope that a decades-old regulatory regime is equipped to deal with the harms created by rapidly advancing technology today and in the future. We must take care to ensure that any future legislation allows for big tech to be held accountable for prioritizing profits over the well-being of the American public. And we must do so in a way that will not stifle innovation or place unrealistic regulatory hurdles on new market entrants.

So having said that, by now, we are all likely at least somewhat familiar with the public-facing generative AI tools of today, like AI chatbots and content creation tools. What may not be as well understood among the public is the role AI can play behind the scenes in terms of things like engagement algorithms on social media and other automated decision systems related to consumers' access to education, vocational training, employment, essential utilities, financial services, healthcare, housing, and more. So my question is first directed to Ms. Goldberg, but to all of our witnesses, and you're welcome to respond. Ms. Goldberg, given your experience working with those harmed by online platforms, how has Section 230 been applied thus far in the brave new world of generative AI and its ever-expanding list of new use cases? And is Section 230 starting to lose relevance as more firms roll out new AI tools likely not protected by Section 230?

Ms. Carrie Goldberg:

I wish I had more time to answer those great questions, but I think the advent of generative AI is going to elicit harms that we can't even comprehend. Already we have products like Snap that have AI aimed at children, chatbots that can induce children to provide their deepest, darkest secrets. And we don't know how they're going to use that, how they might be blackmailing kids with it or driving them toward suicide. Section 230 right now is going to be used by all these companies as a reason to not be held liable.

Rep. Yvette Clarke (D-NY):

Did you want to respond? You have 44 seconds.

Mr. Marc Berkman:

Sorry, I'll go quickly. We're really early in the rollout here, and we've already seen extremely concerning examples. Like Ms. Goldberg just mentioned, the Snapchat AI bot was using harmful content with children, and I think this is the tip of the iceberg. I am really appreciative that this committee is prioritizing this concern.

Rep. Yvette Clarke (D-NY):

Did you want to respond, Ms. Tummarello?

Ms. Kate Tummarello:

Yeah, very quickly, I just want to add: a lot of the startups in our network are actually using AI to find and remove harmful content. And to be clear, I don't think any startup in our network thinks that it's a burden to have to host and moderate content in a way that helps users; like I said several times, they actually proportionally invest more than larger companies. I think where it would be a burden is if a startup had to worry about perfectly moderating content every single time a user uploaded a photo, a comment, a review, you name it, in real time, or risk giving rise to liability in a lawsuit.

Rep. Yvette Clarke (D-NY):

Very well. Thank you Mr. Chairman. I yield back.

Rep. Randy Weber (R-TX):

The gentlelady yields back. The Chairman now recognizes the gentleman from Idaho, Rep. Russ Fulcher (R-ID), for five minutes.

Rep. Russ Fulcher (R-ID):

Thank you, Mr. Chairman. To the panel, thank you for being here. As some have said, we bounce in and out of committee, so forgive the repeat if there is one, but I did have a chance to go through your testimony and hear a good part of your responses, so thank you for your participation. I have a question for Ms. Goldberg. When social media companies flag or remove content, is there any reporting requirement whatsoever if they do that under current rules?

Ms. Carrie Goldberg:

Under current law, there's no requirement that social media companies report to anybody what content they remove.

Rep. Russ Fulcher (R-ID):

Because frankly, I suspect a fair amount of that happens. So if that were to happen, with your understanding of current rules, would a plaintiff be able to obtain that information from the platform in any way, through a lawsuit or any other means?

Ms. Carrie Goldberg:

Under current rules, for a plaintiff to sue, there would have to be something that they're suing about, a cause of action and a way that they've been harmed. And then to get the information they would have to subpoena the platform.

Rep. Russ Fulcher (R-ID):

Which doesn't sound like a simple or easy process.

Ms. Carrie Goldberg:

It's not simple or easy. The biggest deterrent to litigation is how expensive, cumbersome, and invasive it is for plaintiffs.

Rep. Russ Fulcher (R-ID):

Yeah. Okay. So admittedly, especially after hearing the testimony today, I am concerned over the degree of protection that social media companies have from liability. There are benefits to that shield, I'm sure, but if 230 is sunset and there's no reform, what happens next? Maybe that's a better way to ask it. If it's sunset under the current rules, then what happens next? With the current liability framework, can the system handle that?

Ms. Carrie Goldberg:

Absolutely, because the removal of Section 230 does not create liability. It just means that somebody who's been terribly injured can plead and accuse a company of being responsible for that injury. They still have to go to court, they have to prove their case, and that can take years. There's not going to be some sort of mythical rush to the courthouse by millions of people because I mean, there has to be an injury.

Rep. Russ Fulcher (R-ID):

Thank you.

Ms. Carrie Goldberg:

I mean, there has to be an injury.

Rep. Russ Fulcher (R-ID):

Thank you for that. So I'm going to go to Mr. Berkman and just maybe get your input on this a little bit. I'm not exactly a search engine hound, but like a lot of people, I'll purchase something online periodically and whatnot. And this may not be right up your area of expertise, but my guess is you're going to have a take on it.

It seems like in the last, I don't know, year, all I have to do is think about something, and the next time I punch on a search engine, I get an advertisement or something that's related to that. Now, maybe that's just me, or maybe that's unique, but I heavily suspect there's some AI involved with whatever... wherever I've been or whatever I've talked about or whatever I've looked at. Do you see a relationship between the development of artificial intelligence and the amount of data that's collected versus a world before artificial intelligence?

Mr. Marc Berkman:

It's an... I appreciate the question. We actually get your preface question all the time: "Is social media listening to me? How do they know? Why am I getting this ad for shoes when I was just talking about shoes?" We get that all the time. To be clear, the platforms all consistently say that they're not eavesdropping.

But then the answer is, if they're not eavesdropping, they're collecting a significant amount of data on their users to advertise. And I don't know that AI has increased that amount of data; it might've always been there. But it certainly has increased the capability to analyze the data and refine the advertising. And sometimes that refinement is manipulative and dangerous.

Rep. Russ Fulcher (R-ID):

Thank you for that. And I suspect you're absolutely right. Mr. Chairman, I yield back.

Rep. Randy Weber (R-TX):

Gentleman yields back. The chair recognizes the gentleman from Texas for five minutes.

Rep. Marc Veasey (D-TX):

Mr. Chairman, thank you very much, and I'm happy that we're having this hearing on legislation to sunset Section 230 of the Communications Decency Act. This is an opportunity to guide Congress on how to best reform Big Tech's immunity over the next 18 months. And I know that a lot of people who work in this particular area have been talking about this, and it's a really big deal. One of the things that I do here in Congress: I founded and co-chair the Congressional Voting Rights Caucus to help safeguard our right to vote.

And as many of you may remember, last Congress, the House passed H.R.1, the For the People Act. This legislation contains provisions aimed at preventing deceptive practices in our federal elections. And ahead of the 2024 elections, I've also spearheaded efforts alongside my colleagues to hold Big Tech accountable. And reports have recently revealed a concerning trend again: a reduction in the workforce dedicated to combating harmful content on social media platforms, while platforms increasingly turn to artificial intelligence and other automated systems to get the job done.

And one of the things that worries me is that in this era that we live in, and we're all worried about democracy right now, we're facing unprecedented threats through the proliferation of harmful content that seems to manipulate and try to influence in a bad way certain populations during the elections. And I wanted to ask Ms. Tummarello, with this backdrop in mind and understanding that you are skeptical of efforts to reform Section 230 and don't represent large social media companies, what are the best levers that we have to completely address the spread of voter suppression content online? And are you aware of any startups in the political participation space that are protecting voting rights?

Ms. Kate Tummarello:

Thank you so much for the question, Congressman. And to be clear, Engine is not opposed to efforts to reform 230; we're specifically concerned about the proposal to sunset 230 absent an alternative framework. We have a lot of startups in our network that are focused on civic engagement, and those platforms, honestly more than anyone else in our network, desperately need Section 230. [inaudible 02:24:43] be platforms to encourage students, college students, let's say, to analyze in real time, with critical thinking skills, an article about a current event.

You could easily imagine an article about a current event saying something pertaining to a member of Congress, maybe, and somebody commenting on that article with something unflattering to that member of Congress, maybe something defamatory. That startup needs Section 230 to make sure they're not going to be sued for the commenter's defamatory statement. I think that's the kind of startup solution we can point to broadly in our network for combating voting rights misinformation, but also just for encouraging civic engagement generally: the ability for people to have these tough, often controversial conversations. And Section 230 is what enables the startups in our network to create places for those conversations online.

Rep. Marc Veasey (D-TX):

Right, right, exactly. Ms. Goldberg, I know that we've talked a lot about realigning the incentives of Big Tech companies to better serve the public. Do you have any thoughts on why it's crucial for Congress to clarify Section... to clarify that Section 230 does not serve as a shield to federal and state civil rights claims, particularly in instances of discrimination in areas like employment, lending, or even housing?

Ms. Carrie Goldberg:

Absolutely. I think we can all agree that discriminating in housing, employment, I mean, those are not traditional publishing functions, and yet platforms do it all the time, and they still plead Section 230. Section 230 was never intended to be a shield for discrimination. And so whatever reform we get, we absolutely need it to include language that recognizes civil rights.

Rep. Marc Veasey (D-TX):

Yeah. What would that kind of look like? What do you... Just I know that you don't have a lot of time left, but just what would that look and feel like to the public?

Ms. Carrie Goldberg:

Well, we need any reform to recognize the difference between content and conduct. So discrimination, or sending ads to only certain types of people, that is conduct of the platform. It's not content that somebody's posting. And so the biggest thing that we have to be looking at is the person's right to sue for the platform's own conduct.

Rep. Marc Veasey (D-TX):

Yeah, that makes sense. Oh, thank you. That's interesting. I yield back. Thank you, Mr. Chairman.

Rep. Randy Weber (R-TX):

Gentleman from Texas yields back. The gentlelady from Tennessee is recognized for at least five minutes.

Rep. Diana Harshbarger (R-TN):

Thank you, Mr. Chairman. Thank you to the witnesses for being here today. One thing that excites me about this bill is that we'll be setting the table for a new internet ecosystem, more than likely developed under the leadership of a new president next year. And I want to start with you, Mr. Berkman.

Let's assume we make this sunset provision law. It's a new Congress next year, and we have a new president. What kind of replacement would both protect the internet as we know it and, at the same time, expand the rights of individuals to express views that often get conservatives kicked off of left-wing companies like Facebook?

Mr. Marc Berkman:

Yeah. Well, first, I think providing real liability for negligence and recklessness on the part of the platforms will work to improve the business decision-making so that they consider the harms-

Rep. Diana Harshbarger (R-TN):

Yeah.

Mr. Marc Berkman:

... in a real way. And if that's done correctly in this process, it should not impact content. It should not impact the expression of political views. We do believe that's a red herring in these arguments because we are talking about harm inflicted on people.

Rep. Diana Harshbarger (R-TN):

Yeah.

Mr. Marc Berkman:

And as Ms. Goldberg mentioned, to bring a case, you need to demonstrate damage, harm, injury. So that's one piece. And then, I know I've mentioned this a few times, but there's the forgotten section of 230, Section 230(d), ensuring that there's access to third-party safety software. And we have Sammy's Law in front of the E&C Committee for consideration now, and that is an essential component as well.

Rep. Diana Harshbarger (R-TN):

Okay. Thank you, sir. Ms. Tummarello, I know you're concerned about small businesses dealing with lawsuits under a new internet landscape. I've been an independent pharmacy owner for over 30 years, so I understand. What would you want to see if we did rewrite Section 230?

Ms. Kate Tummarello:

Thank you for the question, Congressman. I think the critical piece of Section 230 for small internet platforms run by startups is the piece that immunizes them from liability for user speech. We've talked a lot about tort law here, and I think it's worth noting that startups kind of have to worry not just about lawsuits but about threats of lawsuits. We see demand letters in all kinds of contexts-

Rep. Diana Harshbarger (R-TN):

Mm-hmm.

Ms. Kate Tummarello:

... including the Americans with Disabilities Act and intellectual property. We see startups getting demand letters because senders know you don't have to actually file suit. You're not going to court yet; you're just sending a demand letter saying, "We could sue you." And the startup is going to pay to get the demand letter to go away. And paying and taking down user speech is bad for users, and it's bad for startups.

So I think, in addition to the existing tort law around things like defamation, which would bring lawsuits absent 230, there's also the entire patchwork of state laws that could be created to bring lawsuits absent 230. But there's also just the threat that a small business faces. They're not usually the most legally sophisticated; they usually have an outside counsel that they pay upfront, and it's very expensive for them, and dealing with even a demand letter, which doesn't yet involve a court, could be ruinous.

Rep. Diana Harshbarger (R-TN):

I understand. It's kind of like if the IRS goes after you and you make less than $75,000: you're going to pay the fine because you can't afford to pay an attorney. Mr. Berkman, I'm going to go back to you. Sometimes comments are turned off for certain posts where, on the same site, comments are allowed on most posts. Should preventing comments on select posts open a site up to suit?

Mr. Marc Berkman:

You mean a platform turning off-

Rep. Diana Harshbarger (R-TN):

Mm-hmm.

Mr. Marc Berkman:

... comments themselves? I think that's a... it would be a highly contextual circumstance in which you would again have to prove injury and harm from turning off comments on a platform or for a particular user. So it would depend, and that's something that Congress could weigh in on, with a real deep review of the actual cases there, looking at where the harms are coming from and why the decisions are being made.

Rep. Diana Harshbarger (R-TN):

I guess it's all based on the determination and definition of harm. So all right. With that, thank you for being here, and I yield back, Mr. Chairman.

Rep. Randy Weber (R-TX):

Gentlelady yields back. Chairman now recognizes himself for five minutes. Ms. Goldberg, you are an attorney, is that right?

Ms. Carrie Goldberg:

That's right.

Rep. Randy Weber (R-TX):

How about you Mr. Berkman?

Mr. Marc Berkman:

Recovering attorney.

Rep. Randy Weber (R-TX):

A recovering attorney.

Mr. Marc Berkman:

I'm still barred.

Rep. Randy Weber (R-TX):

How about you, Ms. Tummarello?

Ms. Kate Tummarello:

I'm not an attorney.

Rep. Randy Weber (R-TX):

Okay, and that's good. I'm glad that all three of you are here, irrespective of your occupations. But Ms. Goldberg, for you: how long have you been a lawyer?

Ms. Carrie Goldberg:

Since 2007.

Rep. Randy Weber (R-TX):

You're just a young whippersnapper. So my question for you is, are you optimistic, as an attorney, because you've seen enough cases, it sounds like, or are you pessimistic about the chances of 230 actually going away?

Ms. Carrie Goldberg:

It's up to you.

Rep. Randy Weber (R-TX):

Well, now I'm answering... I'm asking the questions.

Ms. Carrie Goldberg:

I'm begging. I'm begging the courts to recognize my clients' injuries, and they tell me... they've told me for the last 10 years that they're looking to you to give them that right. I'm optimistic insofar as you all are. You have the power here. You all created Section 230. The experiment is over, and you can take it away.

Rep. Randy Weber (R-TX):

So you're looking to the courts to make that decision, but you'd rather we supersede that process.

Ms. Carrie Goldberg:

I will always be looking to the courts first, but when I get cases like Herrick v. Grindr, where I sued a platform while it was standing by watching thousands of men come to my client's home, then I have to be looking elsewhere too. I mean, they threw that case out of court.

Rep. Randy Weber (R-TX):

Okay. Ms. Tummarello, I'm going to jump over to you. Section 230 has become blanket immunity. Now, there was some discussion between you and, I think, Marc... Congressman Veasey, where you were kind of skeptical about it being changed. Was that skepticism about it being changed, or about it being effectual if it was?

Ms. Kate Tummarello:

Oh, I think what I said to Congressman Veasey is we're not opposed to reforming Section 230. We are eager to have conversations about how to use policy to make the internet a safer, better, healthier place for everyone. What we are concerned about is sunsetting because, one, the end of 2025 is quickly coming up.

But also, there's been so much discord, even between members of Congress generally, about what should replace 230 that we worry we would get caught in this tug of war, where startups and their online communities of users are actually the ones kind of losing out.

Rep. Randy Weber (R-TX):

So it's safe to say that you would not be in favor of us just carte blanching away 230 completely without replacing it with something.

Ms. Kate Tummarello:

I think that'd be very dangerous for the startup ecosystem.

Rep. Randy Weber (R-TX):

Right. I got you. Mr. Berkman, you talked about an unmitigated disaster. You talked about children across the nation that were harmed by some of this stuff, and I don't... I didn't know the cases. I came in a little later, so I didn't get this; I was at another committee. When you talk about the children across the nation, do schools get involved in those? Do you interface with schools?

Mr. Marc Berkman:

Yeah, we work with K through 12 schools across the country, and the harms that we see on campuses are incredibly horrifying. We're seeing, and this corresponds with nationally representative surveys, about 46% of fifth through 12th graders self-report being cyberbullied. That's associated with almost a tripled risk of suicide. We are seeing sexting at the sixth-grade level. We are seeing 85% of fifth graders exposed to real-life violence over social media. And I could keep going. I know we don't have time, but a long list of harms.

Rep. Randy Weber (R-TX):

Well, one of... for my colleagues' edification, I filed a bill back after COVID, after a school shooting; I had a school shooting in my district in 2018. It said that some of the money that was left over from COVID should be taken to actually hire a counselor, not one focused on grades and those kinds of things, but a social media counselor who would monitor the kids in that school. Would y'all consider that?

Do I go to the attorney here first? Ms. Goldberg, would you consider that a violation of privacy if we were trying to monitor to deflect some of these problems, or prevent them, I should say?

Ms. Carrie Goldberg:

Hiring a counselor to help with mental health seems like it would be good. If it's a bad counselor who's violating privacy, then that's another story. I want to add one thing to what I said earlier about the interaction between courts and legislation. We're here today to help people in the future and to deter bad actions. When we go to court, it's because something terrible has already happened and we're trying to get justice for a family, but that doesn't mean we can't also be looking ahead.

Rep. Randy Weber (R-TX):

Right. Which is very good. We should be. Well, I appreciate you all being here, and I'm not going to take any more time because I know we have Rep. Jay Obernolte (R-CA). So I'm going to yield back, and the gentleman from California is recognized.

Rep. Jay Obernolte (R-CA):

Thank you, Mr. Chairman, and thank you to our witnesses, on what is a really important topic to me personally. I think we're all on the same team here at the end of the day, right? We've got these problematic situations that have arisen on social media that I think everyone can agree are not healthy, should not be permitted, and need to be stopped. But then we have this idea that's in tension with that, which is that we don't want to chill free speech. So we're here talking about repealing Section 230 to create more liability on social media platforms.

And this is where I start to have a problem, because it seems like the premise of repealing 230 is that the world would be a better place if we all just sued each other more often. And I reject that premise. I'm not sure that this solves the problem, because if you are a social media company, I mean, and not to view them uncharitably, they're in business to make money, right, and people go to those social media platforms because they enjoy the content there.

And so I think it's pejorative to say, as you did, Ms. Goldberg, that there are companies that mint money off the backs of the masses. I mean, that ignores the fact that we've got millions and millions of Americans who enjoy using social media platforms. So we've got to navigate this space. So let me ask you this, and I'll start with Ms. Tummarello. Why is increasing liability the solution to this problem?

Why shouldn't we, as a legislative body with authority over this space, as we have in this committee, just pass laws to limit the problematic behaviors? I mean, for example, allowing people to recruit children for sex, right. Everyone should be able to agree that's not something that should be allowed to occur. We can write laws that prevent that, rather than exposing more liability and relying on the indirect route of the threat of being sued.

Ms. Kate Tummarello:

Thank you so much for the question, Congressman. I think that gets at exactly the tension that comes up when we talk about Section 230, right? Section 230 doesn't immunize platforms that commit federal crimes; that is already the truth. And so many of the horrible things that we hear about happening online are things that are either illegal, where the people who create and share that content can be held liable themselves without necessitating the platform being held liable, or they're what's called "lawful but awful" content, which is usually First Amendment protected.

And so I think we need to be very clear about which we're talking about when we're talking about online harms. If we're talking about content that is illegal and can be prosecuted, and if it's a question of law enforcement needing better education, resources, and collaboration to go after criminals who are using the internet, I think that's something the startups in our network would be happy to talk about. But I worry that sunsetting Section 230 would end up chilling free expression because, ultimately, the platforms that are least equipped to deal with even threats of litigation will just take user speech down. And I don't know that that's to the betterment of the internet overall.

Rep. Jay Obernolte (R-CA):

Right. Yeah, that's the problem that we're having. Okay, so we've kind of... we're agreeing that there are cases where content is clearly unlawful, and we can solve this problem in other ways than just expanding liability. So let's talk about kind of the more edge cases, and Mr. Berkman, you brought up a few of them in your testimony. You talked about eating disorder content that's had some really harmful effects on children.

You brought up the fact that 46% of teens report being cyberbullied. So let's talk about cyberbullying, right, because this is a huge problem. A teenager posts on social media, "I thought your shirt was awful yesterday," and the teen that wore the shirt says, "Hey, I'm being cyberbullied. I feel very uncomfortable." And the other teen says, "Hey, I wasn't trying to hurt your feelings. I just said your shirt is awful." So how do we navigate that, right? Because this is something we can all agree on: cyberbullying is terrible. But at what point does addressing it infringe on free speech?

Mr. Marc Berkman:

Yeah. Well, all of the speech we're talking about, regardless of whatever happens with Section 230, is protected under the First Amendment, first of all, but back to-

Rep. Jay Obernolte (R-CA):

Okay. Okay, so let me ask you specifically.

Mr. Marc Berkman:

Yep.

Rep. Jay Obernolte (R-CA):

In a perfect world, what is the social media company's obligation with respect to that content?

Mr. Marc Berkman:

They should be enforcing their terms of service in a responsible manner with a sufficient amount of trust and safety staff. And so the issue is that we see severe cyberbullying cases across the country that continue to stay up. And if only it were just, "I don't like your shirt." Usually, it's, "Kill yourself. I hate you."

Rep. Jay Obernolte (R-CA):

Okay. So you say the person that's told to kill themselves says, "I'm being bullied," and a social media company at that point should take the content down.

Mr. Marc Berkman:

Yep. Once there is actual notice of cyberbullying that violates a platform's terms of service, they need to take that down, because that's a severe risk to that child. Nate Bronstein in Chicago was being cyberbullied by hundreds of other teens throughout the metropolitan area with kill-yourself messages and others. The content kept expanding throughout the platform it was being circulated on.

Rep. Jay Obernolte (R-CA):

Right. Okay. Well, I mean, again, this is something that we can solve a different way than just expanding liability. If we wanted to make a law that says, if someone asks you to take content down, you have to take it down if it falls into these categories, we could do that. We don't have to get rid of all of Section 230.

Mr. Marc Berkman:

You-

Rep. Jay Obernolte (R-CA):

I'm out of time. I want to be respectful.

Mr. Marc Berkman:

Okay.

Rep. Jay Obernolte (R-CA):

But this is the basic problem that we're having and I want to thank you for your testimony. I yield back.

Rep. Randy Weber (R-TX):

The gentleman yields back. The Chairman recognizes the gentlelady from Iowa, Ms. Miller-Meeks, for at least five minutes.

Rep. Mariannette Miller-Meeks (R-IA):

Thank you, Chairman Weber and Ranking Member Matsui, for holding this hearing today. And I want to also thank our witnesses for testifying before the subcommittee. As members of Congress, it's our duty to ensure that our laws strike, as I think you've heard, the right balance between fostering open, unbiased discourse and protecting the vulnerable, especially our children, from the dangers that lurk online.

We must scrutinize whether the current Section 230 framework adequately safeguards our children from exploitation and exposure to harmful content while still preserving the robust and free exchange of ideas that's essential to our democracy. Mr. Berkman, you spoke about studies that suggest a strong link between social media use and negative mental health outcomes for adolescents. And I think you mentioned this briefly. How can companies be incentivized to prioritize mental health and well-being in their platform design and content moderation policies?

Mr. Marc Berkman:

Well, the main reform there is ensuring that there's some level of liability for negligent and reckless design in the platforms, and for insufficient parental safety controls as well. And I want to thank you for your co-leadership of Sammy's Law, too. The platforms that are disallowing parents the choice of being able to use safety software, to us, that is a reckless decision. And so putting in some level of liability there is going to change that business calculus and is going to protect millions of children from these harms.

Rep. Mariannette Miller-Meeks (R-IA):

Well, doesn't that also penalize those companies who have not done that, especially small businesses and entrepreneurial businesses that are just coming into the marketplace?

Mr. Marc Berkman:

Yeah, the small businesses that have... Well, first of all, on the Sammy's Law piece, we're talking about a negligible amount of effort, as you know. In terms of ensuring that design is not negligent or reckless, that should be the minimum requirement for being able to come into the industry. If you're not sufficiently capitalized, and you offer your platform up to children, and you don't have the resources to make sure it's safe, that's an issue, and that's what current tort law protects consumers from in all other industries.

Rep. Mariannette Miller-Meeks (R-IA):

Then it seems like we may have some laws that do offer protection. Ms. Goldberg, you highlighted a number of cases where social media platforms failed to protect users from exploitation and harassment. Are there examples where tech companies have effectively worked within the bounds of Section 230 to protect consumers, and what proactive steps do you believe tech companies should be legally required to take to identify and remove content related to sexual exploitation, blackmail, and harassment?

Ms. Carrie Goldberg:

The positive stories don't make their way to my office. I hear about the most tragic situations that happen through online platforms. And if somebody's just coming to me because content didn't get moderated correctly, they don't have a... there's nothing to hold a platform accountable for. Platforms can moderate however they want to.

Rep. Mariannette Miller-Meeks (R-IA):

I think it'd be beneficial to have both, to also highlight when platforms have done things in a proactive way, because it gives us a full spectrum of what we can and cannot do. We have a tendency to overregulate.

Ms. Tummarello, you stated in your testimony that startups with limited budgets and small teams invest proportionally more in content moderation because they need their corners of the internet to remain safe, healthy, and relevant if they want to see the user growth that they need to survive.

How might the potential repeal of Section 230 affect the willingness of entrepreneurs to launch new platforms and services, considering the increased risk of litigation? What specific types of legal challenges and associated costs do you anticipate startups would face?

Ms. Kate Tummarello:

Thank you for the question. And to note, when I talk about startups, I'm not talking specifically about startups that offer platforms aimed at children. [inaudible 02:46:45]... I think the startups in our network that are working with children, populations that they know are children, are very responsible and are building in lots of guardrails because they know they have to.

We're talking about general-audience platforms that don't ask for driver's licenses when you sign up, because they don't want to have that data on you. That feels creepy to them, and they don't want to have to collect it. So they don't necessarily know the ages of their users. And they are the ones, right, for every complaint about a larger platform, we have a startup in our network that's trying to disrupt that space, that's trying to provide a safe online dating experience. We have a startup founded by a victim of sexual assault who's using her experience to make online dating safer.

She needs Section 230. Examples like that are in my testimony, so I won't go through all of them. But if 230 were to be repealed or sunset, especially at the end of next year, those companies would have to think very differently about hosting user content, which means less innovation, a less diverse discourse online, because there'd be fewer places to go, and less expression from users.

Rep. Mariannette Miller-Meeks (R-IA):

And then, just very quickly. Would repealing Section 230, would that benefit or preferentially treat larger companies that currently do not, or that, at the behest of government, censor content?

Ms. Kate Tummarello:

I absolutely think repealing Section 230 would give larger companies a leg up. It would build a regulatory and legal moat around larger companies, who know that they can invest hundreds of millions of dollars in content moderation, hire tens of thousands of content moderators, and use their legal defense funds to survive in court, whereas a startup can't do any of those things. And I worry that it would create a very unfair advantage for large companies.

Rep. Mariannette Miller-Meeks (R-IA):

Thank you all very much, and I yield back.

Rep. Randy Weber (R-TX):

Gentlelady yields back. Seeing that there are no further members wishing to be recognized, I would like to thank Ms. Goldberg, Mr. Berkman, and Ms. Tummarello, our witnesses, for being here today. I ask unanimous consent to insert in the record the documents included on the staff hearing documents list.

Without objection, that will be the order... Without objection, so ordered. I remind members that they have 10 business days to submit questions for the record, and I ask the witnesses to respond to those questions as promptly as you can. Members should submit their questions by the close of business on Wednesday, June 5th. Without objection, this subcommittee is adjourned.
