Perspective

For Survivors Using Chatbots, ‘Delete’ Doesn’t Always Mean Deleted

Belle Torek / Jun 10, 2025

In 2023, The New York Times filed a landmark copyright lawsuit against Microsoft and OpenAI, alleging that millions of its articles were used improperly to train commercial generative artificial intelligence models, tools that now directly compete with traditional journalism. Now, a year and a half into litigation, the case has taken a striking procedural turn with implications that extend far beyond journalism or IP law: a federal court recently ordered OpenAI to “preserve and segregate all output log data that would otherwise be deleted.”

The data covered under this order include chats from users who have actively chosen to delete their conversations or who engage with the tool in “Temporary Chat” mode, a setting designed to function as a blank slate, with conversations left out of a user’s chat history. In effect, OpenAI is now required to retain the very data many of its users believed to be most private. (Last week, OpenAI CEO Sam Altman said the company will appeal the ruling and “fight any demand that compromises our users’ privacy.”)

While this move stems from discovery obligations in the lawsuit rather than from OpenAI’s own policies, it introduces serious privacy risks, especially for vulnerable users such as victims and survivors of domestic violence, stalking, or technology-facilitated abuse. As a national leader in technology safety and survivor privacy, the Safety Net Project at the National Network to End Domestic Violence (NNEDV) is deeply concerned about how this moment exposes the limitations of data-gathering platforms, the legal gaps that leave survivors unprotected, and the need for thoughtful guidance on how to engage with these tools in high-risk contexts.

Preservation orders are a standard move in civil litigation: when discovery is ongoing, parties are typically required to retain potentially relevant data. But this isn’t a typical litigation hold on internal business records — it’s a mandate to store user-generated content that, in many cases, individuals take deliberate steps to erase. The implications of this move are concerning, especially in cases involving highly sensitive and potentially identifying queries from people navigating vulnerable circumstances.

Generative AI platforms are not just used for brainstorming or entertainment. They’re increasingly serving as informal research tools, and even quasi-therapeutic spaces for people in crisis. This includes survivors seeking information about restraining orders, shelter access, custody options, or safety planning. As Safety Net has documented for years, survivors increasingly turn to technology to seek necessary resources, especially when they fear for their safety within their homes or lack access to in-person support. A chatbot may seem like a private place to ask, “How do I get a restraining order?” or “Can my abuser see my phone records?” But under this court order, queries like these are to be retained, and could even be disclosed in court proceedings, potentially without users’ knowledge.

OpenAI’s privacy policy already discloses that much user data may be retained, but the calculus shifts when a platform must preserve data not for user benefit or product improvement, and not of its own volition, but to satisfy discovery demands in litigation that has nothing to do with those users. The order shows just how easily user deletion preferences can be overridden by corporate litigation, and how few guardrails exist to prevent retained data from becoming discoverable in unrelated cases.

This moment illustrates a broader principle that Safety Net has long advocated: survivors need enforceable privacy rights, not just a privacy policy that a court can render toothless amid proceedings. When platform assurances of deletion can be voided by third-party litigation, the trust placed in these tools collapses, and survivors may be left exposed without ever knowing.

This should raise red flags for policymakers and advocates alike. Who is responsible when a survivor’s chatbot query for a domestic violence hotline becomes discoverable in court? What protections exist when platforms become legal battlegrounds for corporate interests, but end users are the collateral damage?

As AI systems become intermediaries for everything from healthcare to legal aid to emotional support, we are outsourcing sensitive interactions to entities with limited transparency, unclear accountability, and now judicially mandated surveillance capacities.

Implications for the road ahead

While this moment undoubtedly underscores the need for stronger privacy protections, it also calls for greater caution in how generative AI tools are integrated into high-stakes domains like mental health, crisis response, and victim advocacy.

Many survivors may assume or even be led to believe that commercial general-purpose AI platforms offer the same privacy protections they receive when speaking with an advocate, but these protections are not guaranteed. While certain AI tools could theoretically be tailored to comply with laws like the Violence Against Women Act (VAWA), the Victims of Crime Act (VOCA), or the Health Insurance Portability and Accountability Act (HIPAA) — all of which impose strict confidentiality requirements in service of survivor safety — most widely available platforms are not deployed within frameworks that trigger those safeguards. Similarly, attorney-client privilege is unlikely to apply, since these tools do not constitute a confidential relationship with legal counsel.

VAWA and VOCA prohibit federally funded victim service providers from sharing personally identifying information without a survivor’s informed, written consent. HIPAA protects the privacy of health information, and attorney-client privilege ensures that communications with legal counsel remain confidential. But these legal safeguards aren’t necessarily inherent to generative AI chatbots, where people often seek information or support that extends well beyond the scope of healthcare, victim services, or legal representation — and where data is processed or stored outside protected frameworks. When survivors use chatbots to ask questions about abuse, restraining orders, or safety planning, their disclosures may be retained or even shared in legal proceedings — without any meaningful protections in place.

The likelihood of this happening more often is only increasing. Following The Times’s lead, Reddit filed suit against Anthropic last week, also claiming improper use of its content without a licensing agreement. And we can expect to see these cases extend far beyond intellectual property as the range of harms arising from generative AI use grows. A federal judge in Florida recently ruled that a product liability case against Character.AI involving a teenager’s suicide may proceed, marking a critical step in expanding access to recovery for AI-related harms. According to the lawsuit, the teenager died by suicide following interactions with a Character.AI chatbot that allegedly encouraged him to “come home to me as soon as possible.”

If every user’s chat logs can be treated as relevant evidence whenever an AI developer faces a lawsuit over the substance of its chat outputs, we must assume that far more AI-generated content will be preserved, parsed, and discoverable across an increasingly broad swath of legal contexts. In practice, no user can reasonably expect their activity to remain private.

The broader concern is clear: even when platforms market themselves as “private” or “temporary,” their data may be retained without notice and exposed without consent. Without regulation, deletion is a suggestion, not a guarantee.

A system not built for survivors

At the Safety Net Project, we’ve seen firsthand how even well-intentioned digital tools can create new vulnerabilities when they aren’t designed with survivor realities in mind.

This court order is a clear example. Survivors are often advised to avoid leaving digital footprints, and many may believe that deleting a chatbot conversation removes it from the record. That assumption is now demonstrably false.

Worse still, if platforms are required to store these records but aren’t transparent about doing so, survivors may unwittingly share details that could be accessed by others in court proceedings, where digital evidence is often leveraged to challenge credibility, intent, and fitness. The result is a pernicious digital trail that can be weaponized against someone actively seeking help, with no practical way to delete it or control its spread. And because these court-ordered logs are segregated from routine data flows, they may be less protected from breaches or internal misuse, adding yet another layer of risk for already vulnerable users.

What needs to happen next

This ruling should spark urgent, cross-sector work to clarify users’ privacy rights in commercial AI environments. At a minimum, platforms must clearly disclose when legal obligations may override deletion settings, and they should notify users when such holds are likely to be put in place. Policymakers must act to establish durable privacy protections that are enforceable even in the face of litigation. And advocates, attorneys, and technologists alike need updated guidance to help survivors navigate these tools safely.

Privacy in the AI age cannot be optional, or dependent on whether a company is facing litigation. It must be survivor-centered and protected by law, because the ability to seek help safely, and to exit a situation when necessary, should never be compromised by a discovery request.

Authors

Belle Torek
Belle Torek is an attorney who works at the intersection of free expression, civil rights, and online safety. She is a Technology Safety Specialist at NNEDV and serves on the Advisory Committee to the Cyber Civil Rights Initiative and the Florida Bar Committee on Cybersecurity and Privacy Law.
