Confronting Empty Humanism in AI Policy
Matt Blaszczyk / Oct 3, 2025
Wheel of Progress by Leo Lau & Digit / Better Images of AI / CC by 4.0
A recent essay on Tech Policy Press drew attention to anti-humanist and pro-extinctionist rhetoric that sometimes appears in artificial intelligence discourse. The author, philosopher and historian Émile P. Torres, points to an alarming interview with Peter Thiel in which Thiel had a hard time deciding whether humans should continue to exist.
I share the worry concerning the narratives advanced by figures such as Thiel, and applaud the philosophers who push back. At the same time, however, overtly humanist, democratic, and romantic rhetoric is just as easily co-opted by sophisticated market players and politicians, including those who appear to share most of their views and interests with Thiel himself. Humanist rhetoric plays an important role in political and legal discourse and, despite appearances, it does not necessarily lead to a defense of liberal democracy. Sometimes such rhetoric is weaponized by illiberal actors; sometimes it comes down to defending the status quo – and neither result is welcome. Dispelling AI hype and defending human-centrism are laudable goals – but the difficult discussion concerns not alarming rhetoric so much as the practical reality of realizing those goals.
Everyone likes human flourishing
Humanism, like other narratives in liberal political culture, has considerable rhetorical force in justifying the system. Even those who distance themselves from progressive vocabulary do not do away with anthropocentric language completely. The first of two AI executive orders from the first Trump presidency spoke of realizing the “potential of AI technologies for the American people,” fostering “public trust and confidence in AI technologies,” and protecting “civil liberties, privacy, and American values in their application.” Another presidential order claimed AI would improve the “quality of life of all Americans.”
More recently, the second Trump administration issued its January 2025 Executive Order promoting innovation, global competitiveness, and “human flourishing” as the primary American AI policy priorities, established under a staunchly deregulatory framework. According to Federal Trade Commission Chairman Andrew Ferguson, the purpose of both government and antitrust enforcement is to promote “human flourishing” – a phrase he invoked 17 times in conversation with none other than the illiberal constitutionalist Adrian Vermeule. While this language aligns with what the liberal democratic AI ethics community has been calling for, it is an open question whether the outcomes will align, too.
Nominal humanism as defense of the status quo
Employing humanist and cautious rhetoric is also a good way to calm people down. It is thus unsurprising that in his speech in Paris, France, JD Vance combined the rhetorical weight of values such as “free speech,” freedom from “ideological bias” and protection from “authoritarian censorship,” with deregulation, a shift from “AI safety” to “AI opportunity,” prioritizing workers, and assuring that AI is “not going to replace human beings. It will never replace human beings,” but merely “make us more productive, more prosperous, and more free.”
Similar phrases can be found in the pronouncement of the supposed “renaissance” brought by AI in the recent White House AI Action Plan, which promised a new information and industrial revolution coupled with the assurance that the “Nation’s workers and their families gain from the opportunities created in this technological revolution. AI will improve the lives of Americans by complementing their work—not replacing it.” The vocabulary included not only republican values but also promises of enforcing ideological neutrality – “free speech and American values” at the drafters’ rhetorical best, and no “woke AI” at their most blunt – which at once appeal to communitarian values and may prove constitutionally suspect.
It is tempting for everyone – politicians, legal institutions, and scholars – to use rhetoric that reassures us that business is as usual, and that humans are still the masters of their tools. For example, copyright and patent offices all over the world have engaged in what I call “nominal legal humanism,” that is, formally placing the human being at the center of the regulatory framework without following through in any substantive way. Thus, reading recent judicial dicta, we learn that human authors are at the center of US copyright law – which does not mean, however, that the law will refuse to protect works made with or by AI, nor that human creators will get paid. They usually don’t; publishers or intermediaries do instead. Humanist rhetoric tends to muddle this picture and divert attention from proposals for reform.
Similar criticism can be leveled against many of the legal instruments adopted across jurisdictions. The most obvious examples are countless guidance documents and soft-law instruments, which have little to no legal bite but employ human-centric slogans. The primary example is the Council of Europe’s Framework Convention on Artificial Intelligence, together with all its declarations and carve-outs. And even in the case of the European Union’s AI Act, one must note the criticism of the Act’s gaps, loopholes, and merely voluntary pledges and guidance documents. Scholars point out that many of the so-called “human-in-the-loop” provisions, for example those requiring human oversight over AI systems, may reduce people to mere rubber-stampers of complex decisions they cannot meaningfully evaluate. The instrument that could have provided more legal certainty and rights protection – the AI liability directive – has been withdrawn from consideration, while enforcement of the Digital Services Act faces uncertainty because of geopolitical tensions with the US and the recent “pro-innovation” turn in European policy. The same fate may yet meet the AI Act at large. All of this is not to say that none of these efforts carried any significance – if they did not, surely lobbyists would not be trying to destroy them today. However, the humanist declarations must not overshadow practical accomplishments.
Good guys must win
Using liberal democratic rhetoric is also a convenient way to advocate for particular distributions of technological and economic benefits, duties, and policy proposals. To take one example, Vance’s Paris speech sent an emphatic message to both adversaries and allied states. The former had undermined other nations’ national security by capturing foreign data and stealing AI, effectively “strengthen[ing] their military, intelligence, and surveillance capabilities.” As Vance proclaimed, the US would “safeguard American AI and chip technologies from theft and misuse,” working with US allies in this respect. With cooperation came a warning against “chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure.” Unifying all of this was a rather clear metaphor:
[President Macron] let me hold the sword, but, of course, he made me put on the white gloves beforehand, and it got me thinking of this country, France, and of course of my own country and of the beautiful civilization that we have built together with weapons like that saber – weapons that are dangerous in the wrong hands but are incredible tools for liberty and prosperity in the right hands.
In this way, particular policy decisions concerning the “AI race” may be justified simply so the bad guys – illiberal, anti-democratic foreign adversaries – don’t win. The race to a new form of intelligence is combined with competition against China, creating a new metaphorical discourse, now routinely used by AI companies in domestic lobbying for reduced regulatory oversight. It is also the narrative adopted by the AI Action Plan.
Humanism and lobbying
Notably, Anthropic CEO Dario Amodei blends familiar phrases taken from Francis Fukuyama and Samuel Huntington with techno-determinism and futurism to legitimize his company’s vision of society and its place in the economy. He writes that in the “21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt.” He reasons that it is the future international success of liberal democracy which legitimizes “international inequality,” and demands “great sacrifice and commitment on all of our parts, as it often has in the past,” all to make sure that “democracies have the upper hand on the world stage when powerful AI is created” over “authoritarian countries.”
Similarly, venture capitalist Marc Andreessen enumerates a host of liberal-adjacent values in his manifesto, writing:
We believe national strength of liberal democracies flows from economic strength (financial power), cultural strength (soft power), and military strength (hard power). Economic, cultural, and military strength flow from technological strength. A technologically strong America is a force for good in a dangerous world. Technologically strong liberal democracies safeguard liberty and peace. Technologically weak liberal democracies lose to their autocratic rivals, making everyone worse off.
We believe technology makes greatness more possible and more likely.
We believe in fulfilling our potential, becoming fully human – for ourselves, our communities, and our society.
Others have followed suit. In Google’s response to the AI Action Plan consultation, we read that the “US needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally,” including particular visions of copyright, patent, and privacy laws, and supporting an aggressive foreign policy to remove barriers to American technological exports. Similarly, OpenAI’s consultation response outlines “freedom-focused” policy proposals which lead to “democratic AI.” These, once again, come down to a permissive interpretation of the fair use standard in copyright and a protectionist export control strategy. Notably, the same values have been invoked by lobbyists on the other side of the table, with the Motion Picture Association calling for more stringent analysis of copyright standards by emphasizing “uniquely American human creativity and technological innovation, and undergirded by the US Constitution’s protection of both free speech and intellectual property.”
Choosing metaphors carefully
What becomes clear is that everyone can bend humanist language to their own ends. Even more broadly, however, we should pause not only at big companies invoking democratic values to advocate for legal solutions, but also generally rethink the metaphors we use and their unclear political and legal implications. For an outsider, the AI governance discourse is extremely difficult to parse and does not allow for easy categorization into binaries: humanist versus posthumanist, hype versus realism, optimism versus pessimism. Just ask yourself: would an AI company benefit from rhetoric which hypes AI and warns of its endless potential, or from rhetoric which downplays its risks? Seemingly, the answer is both.
The stakeholders include regulators of different administrations, futurists preaching apocalyptic visions, self-described luddites, academics, lawyers, and various corporations belonging to oligarchs past and present. Even those who do not fall for slogans and advertisements may fear that the current direction of AI’s progress will replace people’s creativity, and thus do away with the meaning-making practices which make life worth living. For example, Japanese animator and filmmaker Hayao Miyazaki once said:
I feel like we are nearing the end of times. We humans are losing faith in ourselves… I am utterly disgusted… If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology in my work at all. I strongly feel that this is an insult to life itself.
Therefore, according to philosophers Josiah Ober and John Tasioulas, the most fundamental existential challenge posed by AI is whether its pervasive presence in our lives will negate our humanity and impede our ability to lead fulfilling human lives, or whether we can incorporate AI into our lives in such a way as to dignify our humanity and flourishing. In other words, it is “a challenge that concerns what it means to be human in the age of AI, rather than just one about ensuring the continued survival of humanity.” To drive the point home: dismissing AI as a bullshit machine does not mean it is not good enough to replace human workers who face precarity due to technological change or, in the case of young workers, cannot find entry-level jobs.
Lastly, metaphors sometimes play functional roles not just in social discourse, but in legal debates, too. Whether AI is anthropomorphized or not may bear on how judges and juries approach questions of copyright infringement or violations of people’s rights – on whether AI is said to read, infringe, create, and so on. Sometimes this involves difficult questions of essentializing and community-drawing. Take one scholar’s exclusion of AI speech from First Amendment protection by analogy to the speech of foreigners, whether residing outside or inside the US, who can be denied participation in activities “intimately related to the process of democratic self-government.” Although there may be few of us who advocate for robot rights – and indeed, apparently too few who advocate for immigrant rights – the argument that “robots should not have rights just like foreigners” should give us pause.
Conclusion
Voices expressing concern over radical post-humanist visions are most agreeable, as are the myth-busting articles concerning AI. Similarly, the law and its institutions should express care for human beings. At the same time, we should neither under-appreciate the technology and the challenges it poses, nor too readily embrace humanist rhetoric. Politicians, lobbyists, and various stakeholders are well-skilled players in the game of metaphors and usually do not take up those most offensive to our sensibilities, Thiel’s example notwithstanding. Instead, through the romantic packaging of policy proposals, politicians defend the status quo or find support for the very reforms many readers and commentators fear.