Through to Thriving: Centering Young People with Vaishnavi J

Anika Collier Navaroli / Sep 7, 2025

Audio of this conversation is available via your favorite podcast service.

Thanks for joining us for another episode of Through to Thriving, a special podcast series where we are talking to tech policy experts about how to build better futures beyond our current moment. This week I talked to Vaishnavi J, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on youth safety, and former safety leader at Meta, Twitter, and Google.

Vaishnavi and I talked about how her early experience as a Disney Imagineer inspired her desire to create safe yet magical spaces for young people, the importance of protecting the human rights of children, the debates around recent age verification regulations, and the trade-offs between safety and privacy.

Throughout the conversation, we discussed what Vaishnavi called an “asymmetry” of knowledge across the tech policy community:

Vaishnavi: I think we still fundamentally have a significant asymmetry of expertise when it comes to how technology works. I think most of the folks who are doing great work around product and policy development, engineering, data science research, they sit within private organizations. They do not sit within civil society. They do not sit within government. Yet civil society and government play the role of checks and balances in the system, but how can you truly effectively regulate something if you don't understand how it works?

We also talked about the role of litigation in shaping the landscape of youth safety:

Vaishnavi: I think it's important that litigation doesn't become a cudgel against platforms and that it isn't just used simply to create more sensationalist moments, whether that's an article, a headline, or a gotcha moment for a policymaker or a litigator. That's a real misuse of this incredibly important power that litigation has in the American system. So I'm also really cautious of that. And I think the best way to make sure that that's not the case is to see what remedies are being proposed and how thoughtful those remedies are. I would like to see folks think more thoroughly about what kind of remedies would truly make this a better ecosystem rather than a moment for penalizing a company with a fine, which, if it's a large company, it's a drop in the bucket, and if it's a small one, it could kill them.

We also talked about recent journalism and reporting about content policies for youth safety within Generative AI products:

Vaishnavi: I always think it's interesting, but really incomplete, when we just look at a piece of policy as it is without really understanding how it was going to be enforced and how it was going to be reviewed or scaled. At what point does it get triaged for human review? What point is it automatically enforced against? And especially in the context of chatbots, which are user-to-system interactions, what are the range of remediations possible? For example, using sexual language towards a young child. But what does that mean? Do you just not provide an answer? Do you give a deflection? Do you tell them to go talk to an adult? Do you give them guidance and education? There's a whole spectrum of remediations possible to that one content policy line. And without knowing that, this is kind of a very incomplete picture that we get.

Vaishnavi also discussed what she hopes for the future of youth safety and technology:

Vaishnavi: I hope it helps them be the better, best versions of themselves that they want to be. I hope it doesn't replace their innate desires, goals, ambitions, intellect. I hope that it actually becomes an accelerating function for all of those things. I really hope that at the end of the day, they can find joy from these experiences. With some of the conversation around technology now, it's hard for us to remember that these digital tools are a source of great magic and joy when we first start using them. Somewhere along the way we forget that. I hope that the tools continue to evolve to be safer, more rights-protective, more creative, more innovative, and continue to spark that joy.

Check out the entire conversation with Vaishnavi. A transcript is forthcoming.

Authors

Anika Collier Navaroli
Anika Collier Navaroli is an award-winning writer, lawyer, and researcher focused on journalism, social media, artificial intelligence, trust and safety, and technology policy. She is currently a Senior Fellow at the Tow Center for Digital Journalism at Columbia University and the McGurn Senior Fell...
