Before AI Exploits Our Chats, Let’s Learn from Social Media Mistakes
Lucie-Aimée Kaffee, Giada Pistilli / Oct 13, 2025

Meta is reportedly preparing to use data from generative AI interactions to target ads on Facebook and Instagram. It’s hard not to feel the déjà vu. In the 2010s, we slowly realized that our vacation photos, likes, and posts were not just “shared with friends” but the raw material of a surveillance economy. The Cambridge Analytica privacy scandal was the breaking point.
The outrage that followed changed laws and norms. Many privacy-conscious social media users learned to read privacy policies, switch to encrypted messengers, and ask how “free” products made their money. And yet, a decade later, many of us are having far more personal exchanges with generative AI systems than we ever had on Facebook or Instagram, without asking any of those same questions. If these interactions are folded into targeted advertising, then intimacy itself becomes the new frontier of surveillance and monetization.
Regulation in search of intimacy
But what does current policy say about AI systems that integrate advertising? Neither the United States nor the European Union has rules fully prepared for the mix of intimacy and monetization that AI chatbots can introduce.
In the US, the Federal Trade Commission (FTC) has been clear that deceptive or manipulative practices involving AI will not be tolerated. Under “Operation AI Comply,” the FTC has targeted companies that allegedly used AI hype as part of misleading schemes. It has also issued orders to a number of consumer-facing chatbot providers to examine how they measure and monitor potential harms, especially to vulnerable populations like children and teens. But “dark patterns” in the context of intimate AI conversations will be harder to spot. Regulators should proactively ask whether embedding ads in chatbot responses crosses the line into covert manipulation.
In the EU, the AI Act (formally adopted in mid-2024) prohibits certain manipulative or deceptive practices. Article 5 bans AI systems that deploy subliminal techniques, exploit vulnerabilities, or materially distort behavior in a way that undermines informed decision-making. The Act also imposes transparency obligations on providers of general-purpose AI models and high-risk systems to document capabilities, limitations, and data governance. AI-generated content, including deepfakes, must be labeled, and users must not be misled into thinking they are interacting with humans.
In parallel, the EU’s Digital Services Act (DSA) and the Transparency and Targeting of Political Advertising Regulation (TTPA) extend advertising-related transparency duties to online intermediaries, requiring searchable ad libraries and disclosure of targeting criteria; TikTok is already under investigation for failing to comply with these rules. Together, these measures make the EU the first jurisdiction to link AI transparency with advertising accountability explicitly, yet enforcement will determine whether they truly curb manipulative or intimacy-based monetization models.
New privacy blind spots
Unlike social media, conversational AI feels private. When we talk to a chatbot, there’s typically no visible audience, no public feed, no “post” to regret. We type, confide, and often forget there’s a system on the other end: a system that can log, analyze, and learn from everything we say. Users share health concerns, financial anxieties, relationship struggles, even political beliefs with tools like ChatGPT, Claude, and Gemini.
To date, advertising has worked largely by competing for our attention. AI assistants change the target: they compete for intimacy. When an assistant knows your schedule, your tone of voice, and your late-night worries, the possibility of manipulation becomes qualitatively different. Imagine a system that subtly adjusts its responses to nudge you toward a product, a subscription, or even a political opinion, all under the guise of a helpful suggestion. That’s not an interruption like a banner ad; it’s an infiltration of trust.
This is what makes the conversation-based business model so ethically fragile. Advertising becomes the interface.
A different path is still possible
Part of what made social media scandals like Cambridge Analytica so damaging was the sense of betrayal: people realized that platforms built for self-expression had quietly turned into surveillance tools. If conversational AI follows that path, the damage will be deeper. These systems already simulate care, empathy, and attentiveness. When that simulation is tied to profit motives, users will struggle to tell where genuine assistance ends and commercial influence begins. Trust, once lost, is not easily rebuilt; and trust is precisely what these systems rely on to function.
The good news is that this trajectory isn’t inevitable: open source gives us another path.
Unlike the early days of social media, we now have the technical and community infrastructure to build AI differently. We developed this argument in more detail in an earlier blog post, "Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI," where we explored how transparency and open infrastructure can prevent the same extraction cycle that once defined social media. Open models allow individuals, researchers, and organizations to run assistants locally, control where data goes, and design systems that are transparent about what they collect and why. Communities can choose privacy-first deployments, data minimization, and documentation that makes scrutiny possible.
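To make this concrete, here is a minimal sketch of what running an assistant locally can look like, using the open source Hugging Face transformers library; the model name is illustrative, and any small open-weight chat model could stand in. The point is structural: prompts and replies stay on the user’s machine unless they choose otherwise.

# Minimal local-inference sketch (illustrative model; any small open-weight
# chat model works). Nothing typed here leaves the machine, and nothing is
# logged unless the user decides to keep it.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # runs on local hardware
)

messages = [
    {"role": "user", "content": "Help me draft a note to my doctor."},
]

# Generate a reply entirely on-device; max_new_tokens bounds the response length.
reply = chat(messages, max_new_tokens=200)
print(reply[0]["generated_text"][-1]["content"])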
Open source prevents a single corporate actor from monopolizing intimate AI interactions. It allows civil society, regulators, and independent developers to audit, experiment, and propose alternatives. And it gives users real choice beyond “trust us” promises from advertising-funded giants.
From a policy perspective, procurement rules could prioritize open, privacy-preserving systems for public sector use, sending a strong market signal. The US could also follow Europe’s lead by funding open, public-interest AI infrastructure, treating it as a digital commons rather than leaving the field to corporate capture.
The question beneath the hype
Privacy in the age of conversational AI is a governance choice. Do we want systems designed to maximize ad revenue, or systems designed to respect the people using them? And maybe the deeper question is this: what kind of intimacy are we willing to automate? If connection becomes a commercial asset, the cost isn’t only measured in data or regulation, but in the quiet erosion of what it means to trust.
Learning from the first social-media cycle means remembering that outrage is easy, but oversight, openness, and design ethics are the harder, slower work. We still have the tools and the policy space to choose differently. The challenge is whether we’ll use them before the next “I told you so” moment.