
Protecting Children Online — Questions for Five Big Tech CEOs

Alix Fraser, Justin Hendrix, Ben Lennett, Jamie Neikrie / Jan 26, 2024

A US Senate committee hearing room. Shutterstock

On January 31, 2024, the Senate Committee on the Judiciary will hold a hearing on online child sexual exploitation with CEOs from Meta, X, TikTok, Snap, and Discord. In advance of the hearing, Issue One and Tech Policy Press organized a virtual forum with a group of independent researchers, advocates, and representatives from child and online safety groups — most of them members of Issue One’s Council for Responsible Social Media (CRSM) — to discuss potential questions for lawmakers to pose to the CEOs. The forum included the following experts:

  • Kristin Bride — Survivor parent and social media reform advocate, CRSM Member
  • Renée DiResta — Technical Research Manager at Stanford Internet Observatory, CRSM Member
  • Laura Edelson — Assistant Professor of Computer Science at Northeastern University, CRSM Member
  • Corbin Evans — Senior Director at the American Psychological Association
  • Josh Golin — Executive Director of Fairplay, CRSM Member
  • Justin Hendrix — CEO and Editor, Tech Policy Press
  • Ellen Jacobs — Digital Policy Manager at the Institute for Strategic Dialogue
  • Ben Lennett — Contributing Editor, Tech Policy Press
  • Jamie Neikrie — Legislative Manager, Council for Responsible Social Media at Issue One
  • Trisha Prabhu — Founder and CEO of ReThink
  • Mitch Prinstein — Chief Science Officer at the American Psychological Association
  • Zamaan Qureshi — Co-Founder of Design It For Us, CRSM Member
  • Isabelle Frances-Wright — Head of Technology and Society at the Institute for Strategic Dialogue, CRSM Member

Below, we offer a synthesis of that discussion to provide lawmakers with suggested topics and questions that we believe will enable both Congress and the public to understand: what specific actions these companies are taking to protect children from online sexual exploitation and other harms; the level of resources they are committing to the problem; the effectiveness of their efforts; and how the companies' design and other decisions enable exploitation and predation. It is imperative that the Senate utilize this hearing and other mechanisms to bring more transparency to the debate and compel companies to provide specific and candid responses.

Questions on the use and effectiveness of automated systems to moderate content harmful to minors

Background: Over the last five years, Meta has increasingly shifted resources away from human detection of harmful content and toward automated systems. Under this approach, engineers train machine-learning models to screen for content that violates the company's rules, such as terrorism, pornography, bullying, or excessive gore. Meta claims that these tools have been tremendously successful, removing the vast majority of violative content before users even report it. The effectiveness of the company's automated tools was first called into question in 2021, when the Facebook Files revealed that the company removes only a sliver of the posts that violate its own hate speech rules; multiple internal teams estimated the real figure to be lower than 5%. In 2023, disclosures by Meta whistleblower Arturo Béjar further pulled back the veil, revealing a very different picture of the company's harm detection tools.

Last year, researchers from the Stanford Internet Observatory found dozens of images on X that had previously been flagged as child sexual abuse material (CSAM) by PhotoDNA, a hash-matching tool companies use to identify known CSAM in content posted to their platforms. The company failed to take action on these images upon upload, and "[i]n some cases, accounts that posted images of known CSAM remained active until multiple infractions had occurred." X later fixed the database issue, but only after the researchers notified the company through a third-party intermediary (the team could not locate a contact on X's trust and safety team). A follow-up report found evidence that sellers of CSAM on Instagram continued to skirt detection and the blocking of CSAM-related hashtags, and that X failed to block some known hashtags.
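As context for how this kind of screening works, below is a minimal sketch of upload-time matching against a database of known-CSAM hashes. It is illustrative only: production systems such as PhotoDNA use robust perceptual hashes that survive resizing and re-encoding and are matched against vetted industry hash databases, whereas the exact SHA-256 digest and in-memory set here are simplified stand-ins.

```python
# Simplified sketch of upload-time screening against a set of known-CSAM hashes.
# Real deployments use robust perceptual hashes (e.g., PhotoDNA) and vetted
# industry hash databases; the exact SHA-256 digest and the in-memory set below
# are illustrative stand-ins, not any company's actual system.
import hashlib

KNOWN_HASHES: set[str] = set()  # hypothetical: would be populated from a vetted hash list

def file_digest(path: str) -> str:
    """Return a hex digest of the file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def screen_upload(path: str) -> bool:
    """True if the upload matches known material and should be blocked and
    reported; False means only that this particular check did not match."""
    return file_digest(path) in KNOWN_HASHES
```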

Suggested questions:

  • Q (to Meta): Mr. Zuckerberg, in Meta’s 2023 Community Standards Enforcement Report, your company stated that its proactive detection technology removed 88% of bullying and harassment content. But as Arturo Béjar testified before this Committee last November, his Well-Being Team’s survey of user experiences found that Instagram users were 100 times more likely to have witnessed or experienced bullying on the platform in a single week than your company’s statistics indicated they should have in a year. Your company says it removes 95% of hate speech content, but the user survey found that more than 1 in 4 users under the age of 16 witnessed “hostility against someone based on their race, religion or identity” in a single week. How do you explain the gap between what your metrics say and what users are actually experiencing?
  • Q (for Meta): Mr. Béjar sent his team’s findings directly to you, Chief Operating Officer Sheryl Sandberg, and Instagram head Adam Mosseri. Why does Meta continue to use prevalence to measure the effectiveness of its proactive detection technology, even after Mr. Béjar’s findings were made clear to you?
  • Q (for all witnesses): Does your company use automated systems to monitor and moderate CSAM or other harmful behaviors on your platform? Is there content that is not scanned by automated tools or systems? For example, are you able to scan photos, but not video? How do you monitor CSAM and other harmful content in private messages or groups?
  • Q (for all witnesses): What percentage of reported CSAM content is removed via your automated tools, and do you publicly report these figures? Does content have to match known CSAM, such as material flagged by hash-matching tools like PhotoDNA, for your systems to review or remove it? Of the CSAM that is removed, on average, how often has it been viewed or shared prior to removal?
  • Q (for all witnesses): How do you test and ensure the accuracy of these automated systems? What if any external or independent review processes do you have in place to evaluate the effectiveness of your automated systems?

Questions on the monetization of youth attention

Background: Late last year, researchers from Harvard’s T.H. Chan School of Public Health released the first study identifying how much money Meta (Facebook and Instagram), Snapchat, TikTok, X (formerly Twitter), and YouTube earn in annual revenue by directing advertisements to minors. The study found that these five companies generated nearly $11 billion in advertising revenue from U.S.-based users under the age of 18 in 2022. This includes $2.1 billion from users under the age of 13, who are on the platforms in violation of the Children's Online Privacy Protection Act.

Suggested Questions:

  • Q (for all witnesses): How much money did your company make in 2023 off of advertising and other sources of revenue derived from users under the age of 18?
    • How about from users under the age of 13?
    • How much did you spend on online child safety initiatives?
    • Given the scale of your company’s revenues and profits, what do you regard as an acceptable error rate when it comes to these issues? How does that figure into your decisions about whether to forgo profit and spend more to further protect children and teens on your platform?

Questions on evaluating potential product harms to children

Background: Silicon Valley companies value speed and innovation. They are often praised for the mindset once articulated by Mark Zuckerberg — “move fast and break things.” But when the users of a product are children and teens, rolling out faulty or ill-conceived products or product features can be dangerous. When this happens, it calls into question the internal processes that allowed these failures to occur and exposed children to harm.

Suggested questions:

  • Q (to Snap): Last year, Snapchat launched its My AI chatbot, which is now listed at the top of the chat page for all of Snapchat’s 750 million users. This was supposed to be a friendly chatbot that could offer recommendations, answer questions, and hold conversations with users. But within weeks of My AI launching, it was telling teens how to cheat on tests, hide bruises from Child Protective Services, and mask the smell of alcohol and marijuana, and offering advice to a 13-year-old about how to have sex with a 31-year-old partner. How can you justify releasing a product like that to the public? Please describe in detail the testing your company performs on new and existing products to determine whether they’re safe and healthy for minors.
  • Q (to Meta): In late 2017, your company launched Messenger Kids, a messaging service marketed to users under the age of 13, despite widespread opposition from child development experts and advocates. Meta promised that children wouldn’t be able to talk to users who hadn’t been approved by their parents. You yourself even referred to Messenger Kids as “industry-leading work.” But a flaw in the chat service’s design allowed thousands of children to be in group chats with users who hadn’t been approved by their parents. How can you justify releasing a product like that to the public? Please describe in detail the testing your company performs on new and existing products to determine whether they’re safe and healthy for minors.
  • Q (to Meta): Just two years ago, you intended to launch Instagram for Kids. Political pressure from organizations like Fairplay and testimony from whistleblowers like Frances Haugen caused you to put those plans on pause. Do you intend to revisit those plans?
  • Q (to Discord): In March 2022, CNN identified numerous incidents of CSAM on Discord. Then, in May 2023, the National Center on Sexual Exploitation found numerous examples of CSAM that had been identified on Discord and reported to your company, but were still available on your servers more than two weeks later. A month after that, NBC News identified 242 Discord servers that were marketing sexually explicit content of minors, many of them thinly veiled. I understand that your company has responded to the NBC News investigation by removing teen dating servers and updating its policies to prohibit older teens from grooming younger teens. But why did it take so many years, so many different investigations, and so many instances of harm for you to take action? Why were things like teen dating servers, grooming by older teens, and AI-generated CSAM content not prohibited from day one?
  • Q (to X): In February 2023, The New York Times published the results of an investigation into X’s efforts to remove CSAM. It found that child sex abuse imagery was widely circulating on the platform and that CSAM was actively promoted through X’s recommendation algorithm. Have you conducted an internal investigation of the incidents described in the Times report? What steps have you taken to address the problems it referenced?

Questions on investment in Trust and Safety teams

Background: In October 2022, Elon Musk completed his purchase of Twitter. During his first month in charge of the company, Musk initiated a massive series of layoffs that gutted many of the company’s teams dedicated to ensuring the health, safety, and functionality of the platform. In addition to firing Trust & Safety staff and contracted content moderators, he also dissolved the company's board of directors and its Trust and Safety Council. Musk’s directives opened the door for other large tech platforms to follow suit. According to Free Press, Alphabet, Meta, and Twitter have collectively laid off at least 40,750 employees and contractors since November 2022, with many of these firings coming from critical safety and integrity teams.

Suggested question:

  • Q (for all witnesses): How many employees do you have on staff at this time who are solely dedicated to monitoring and moderating content on your platform to ensure that it is safe and healthy for kids? How does this number compare to two years ago?

Questions on combating CSAM and exploitation across multiple social media platforms

Background: Child exploitation and other predatory behavior takes place across multiple platforms. For example, a 2020 lawsuit filed against Roblox, Discord, Snap, and Meta details how a young woman was connected on Roblox, a gaming platform targeted at children, with an adult man who then convinced her to download Discord and set up an account so he could message her privately. On Discord, the man manipulated her and then introduced her to acquaintances, who also manipulated and exploited her.

A 2023 Stanford Internet Observatory investigation found a network of underage sellers producing and marketing self-generated CSAM across multiple social media and other platforms. Sellers market their content on Instagram and X, use direct messaging to facilitate transactions, and then distribute the content via file-sharing services such as Dropbox or Mega. They may also direct users they solicit on social media platforms to online marketplaces such as G2G and Carrd to provide more detailed listings of CSAM content and to exchange gift cards, allowing anonymous compensation. Sellers utilize payment services such as Cash App or PayPal, or request gift cards for specific companies and services such as Amazon, PlayStation Network, or DoorDash.

Suggested questions:

  • Q (to all witnesses): What mechanisms do you have in place to track and combat CSAM and child exploitation and predation that is occurring across multiple platforms? Do you have proactive investigators monitoring CSAM distribution networks that operate across social media platforms and their changing tactics?
  • Q (to all witnesses): Do you have clear structures or reporting processes that facilitate collaboration with other platforms? Does your child safety team regularly meet with any other platforms to discuss approaches to combating CSAM?
  • Q (to Discord, Meta, and Snap): In November 2023, the Tech Coalition announced the launch of Lantern, characterized as “a cross-platform signal sharing program for companies to strengthen how they enforce their child safety policies.” It launched with an initial group of companies in the first phase, including Discord, Google, Mega, Meta, Quora, Roblox, Snap, and Twitch. Are you providing financial support for this initiative? What, if any, activities of this coalition will be public?
  • Q (to TikTok and X): Why are you not yet part of Lantern?
  • Q (to all witnesses): How does your company study this problem? Will you commit to working with independent researchers on these issues and ensuring internal resources for proactive child safety investigations and countermeasures against perpetrators?

Questions on response time and processes for removal of CSAM and other related moderation actions based on user reports

Background: Social media platforms offer a number of mechanisms for users to report CSAM and other issues related to child online safety. But how quickly they respond when users provide them with actionable information is unclear. An October 2023 report published by the Australian Government’s eSafety Commissioner collected responses from Google, X, TikTok, Discord, and Twitch to assess “the steps being taken to address child sexual exploitation and abuse” in compliance with Australia’s Online Safety Act of 2021.

The report summarized responses from these platforms regarding how much time they took “to consider and respond to user reports about child sexual exploitation and abuse material.” TikTok reported that the median time to act on user reports was 5.2 minutes for publicly shared photos and videos, 7.7 minutes for TikTok Live, and 7.4 hours for direct messages. Discord reported median times of 6 hours for private servers, 8 hours for public servers, and 13 hours for direct messages. X provided no information regarding its response time for user reports.

Suggested questions:

  • Q (for Meta and Snap): How long does it take, on average, for your platform to respond to a user report of CSAM content on your site?
  • Q (for Meta and Snap): How long does it take, on average, for your platform to respond to a user report of sexual exploitation and other abusive behavior?
  • Q (for X, TikTok and Discord): Per the Australian Government’s eSafety Commissioner report, please explain in detail how you act upon user reports. What are your specific processes and structures for reviewing and responding to user reports of CSAM and other related activity? How long do different components of your review and response take?
  • Q (for all witnesses): Do you support the language in the Kids Online Safety Act (KOSA) that would require platforms to respond to user reports from parents and others in a reasonable and timely manner? What is a reasonable timeframe to respond to user reports?

Questions on support for child safety legislation

Background: For years, Meta has responded to repeated instances and patterns of harm to children on its products by promising to create new tools for parents. Instagram is currently running advertisements in Washington, D.C., saying that the company wants to work with Congress to put parents in charge of teen app downloads. But these proposed changes put the burden on parents to correct the flaws in Meta’s systems.

Suggested questions:

Questions on the use of generative AI in content moderation

Background: There is interest in Silicon Valley in using generative AI systems for content moderation and other safety applications. OpenAI says large language models can be useful for content moderation, reducing response times and stress on human moderators. Members of the Integrity Institute note that generative AI may increase the amount of content that platforms must review, but also give integrity workers new tools to address it. But questions remain about the reliability of generative AI systems, as well as about unintended consequences if platforms implement such solutions at scale. And, following prior research and a recent report that found URLs of verified CSAM in one significant training dataset for generative AI models, there are concerns about the safety of models trained by crawling the internet.
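As a rough illustration of what such an application might look like in practice, here is a minimal sketch of an LLM used as a first-pass moderation classifier that returns a structured verdict and falls back to human review when its output cannot be parsed. The call_llm wrapper and the policy prompt are hypothetical placeholders, not any platform's or vendor's actual API.

```python
# Illustrative sketch of an LLM as a first-pass content-moderation classifier.
# `call_llm` is a hypothetical stand-in for whatever hosted or self-hosted model
# a platform might use; it is not a real vendor API.
import json

POLICY_PROMPT = (
    "Classify the user content against the platform's child-safety rules. "
    'Respond only with JSON: {"violates": true|false, "category": "<rule or none>"}'
)

def call_llm(system_prompt: str, user_content: str) -> str:
    """Hypothetical wrapper around a language-model API."""
    raise NotImplementedError("plug in a real model client here")

def classify(content: str) -> dict:
    """Ask the model for a structured verdict; route to human review if the
    output is malformed, since LLM responses are not guaranteed to be valid JSON."""
    raw = call_llm(POLICY_PROMPT, content)
    try:
        verdict = json.loads(raw)
    except json.JSONDecodeError:
        return {"violates": None, "category": "needs_human_review"}
    if not isinstance(verdict, dict) or "violates" not in verdict:
        return {"violates": None, "category": "needs_human_review"}
    return verdict
```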

Suggested question:

  • Q (for all witnesses): Are you currently using LLMs or other generative AI methods in your content moderation practices? If so, how do these applications figure into your content moderation systems intended to protect children online?

Authors

Alix Fraser
Alix Fraser is the Director of the Council for Responsible Social Media at Issue One. In his role, Alix leads the crosspartisan Council of political, civic, public health, business, and national security leaders working to address the threats that social media platforms pose to American society. Pri...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
Ben Lennett
Ben Lennett is a contributing editor for Tech Policy Press and a writer and researcher focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Techn...
Jamie Neikrie
Jamie Neikrie is the Legislative Manager for the Council for Responsible Social Media (CRSM) and has been with Issue One since 2021. A distinguished professional in legislative strategy and advocacy, Jamie leads efforts to implement meaningful reforms on Capitol Hill, focusing on advancing privacy p...
