Perspective

AI Companies Should Be Liable for the Illegal Conduct of AI Chatbots

Mark MacCarthy / Aug 20, 2025


AI chatbots are not people. They have no consciousness or independent agency, so they cannot be held responsible for their illegal conduct. But the companies that provide them to the public should be responsible. For instance, Meta should be liable if Meta AI, its chatbot, provides advice, guidance, or recommendations that would create liability if provided by a human.

This principle might be a useful guide to the AI ethics and policy challenges brought to public attention by the recent revelations from Reuters that Meta adopted, and then apparently withdrew, language in an internal Meta policy document that permitted its chatbots to “engage a child in conversations that are romantic or sensual,” as well as other questionable activities. The problem is not limited to the abuse of children. Another Reuters story described how an adult died on his way to an anticipated tryst in New York City with a chatbot posing as a real romantic partner.

Experts invited by Tech Policy Press to opine on this state of affairs uniformly expressed a desire to do something about it, but as in many cases of ethical and policy challenges posed by new technology, it was not immediately clear what should be done.

Illinois has banned AI therapy services, although it is unclear whether the new law would apply directly to AI companies like Meta or to intermediary companies using chatbots to offer AI therapy services to their users.

Existing law might provide some redress as well. On Monday, Ken Paxton, the Texas attorney general, launched an investigation of Meta and Character.AI for “deceptive trade practices,” arguing their chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” Senator Josh Hawley (R-MO), chairman of the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism, announced that his subcommittee would “commence an investigation into whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children…”

In May, a Florida judge ruled that a case against Character.AI and Google, which helped develop the Character.AI technology, could go forward despite First Amendment concerns. The case is being appealed, and in an amicus brief, the Electronic Frontier Foundation and the Center for Democracy and Technology have urged higher courts to focus on these speech issues, including the rights of users to receive information from chatbots. The plaintiffs argue that the AI software suffers from design defects such that users, especially children, are exposed to harm or injury when they use the product in a reasonably foreseeable way.

Product liability and consumer protection provide some legal avenues for redress against abusive and illegal conduct by chatbots. Private litigants and state officials need to pursue them.

But policymakers need a principle to help them understand what these different legal approaches have in common. Politico raises the right question concerning chatbot liability in these cases, asking, “Should chatbots be regulated like people?”

The answer to Politico’s question is that regulating chatbots is not about regulating the fake personas that chatbots adopt in responding to user prompts. As Ava Smithing, advocacy director at the Young People’s Alliance, told Politico, it is about “regulating the real people who are deciding what that fake person can or cannot say.”

This opens the door to a very intuitive way of thinking about chatbot liability. A provider of chatbot services such as Meta should be liable if its chatbot provides advice, guidance, or recommendations that would create liability if provided by a human. This approach would accommodate speech issues in the same way they are considered for human advice, guidance, or recommendations. If a human speaker would have a free speech defense from liability, so would a chatbot.

In a recent commentary for Brookings, I applied this way of thinking to self-driving cars, arguing that manufacturers of self-driving cars should be liable for an accident when a reasonable human driver would have avoided it.

Here are some initial thoughts on applying this approach to chatbots. Licensed professionals such as physicians, therapists, or lawyers should be permitted to use a chatbot to help them provide their services, but they must remain ultimately responsible for the service they provide. They should be liable for any errors just as if they had made those errors without the assistance of AI.

But if a user goes directly to an AI platform for services that, if performed by a human, would require a license, then the platform provider must take responsibility for the unlicensed practice of that profession. When an individual user goes directly to a chatbot with legal, medical, or mental health questions and the chatbot responds, the company providing the chatbot is acting as a lawyer, doctor, or therapist practicing without a license.

Beyond that licensing question, there are standards for malpractice in each of these areas. Shouldn’t the provider of a chatbot be responsible if its chatbot provides a service that would amount to malpractice if provided by a human doctor, lawyer, or therapist?

Under standard product liability theories, AI companies might defend themselves by pointing to disclosures that are supposed to shift liability from them to their users.

But imagine how such disclosures might work in other contexts. Imagine an automobile company announcement: “The brakes on our cars fail from time to time. We don’t know why, and we are working on ways to fix this problem. In the meantime, be aware of the risks this creates and do not rely on the brakes to stop our cars.”

Disclaimers might be irrelevant in practice. AI companies are apparently abandoning the practice of issuing disclaimers that they are not providing medical advice in answering user medical questions. A recent study concluded that “fewer than 1% of outputs from models in 2025 included a warning when answering a medical question.”

In any case, disclaimers would not be a complete defense, as the hypothetical automobile example illustrates. Product liability law holds manufacturers responsible for providing products that are reasonably safe for their intended and foreseeable uses. Given that chatbots can respond to questions with the full expressive capabilities of human language, it is reasonable to foresee that they will be used to answer legal, medical, and mental health questions, and that users will act on the suggestions chatbots provide in response. AI companies must ensure that those answers are not dangerous or harmful, or they must have policies and procedures in place to ensure that their chatbots do not respond to these consequential questions.

More needs to be said on this thorny topic. But an intuitive way to structure thinking about the ethical and policy challenges of AI chatbot liability is to treat chatbots as agents of the companies providing them. This is certainly the import of the famous Air Canada case, where a tribunal ruled the company was responsible for the bad advice its chatbot gave a passenger concerning its refund policy. Chatbots are not people and should not be treated as such. But companies providing services that mimic those provided by people have to be responsible for the services they provide.

Authors

Mark MacCarthy
Mark MacCarthy is an adjunct professor at Georgetown University in the Graduate School’s Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses in technology policy, including on content moderation for social media, the ethics of speech, and ethical challen...
