Perspective
The Moral Lighthouse: Artificial Intelligence and the World We Want

Michael L. Bąk / Jun 9, 2025

Artificial intelligence must help us build the world we want, not let a powerful few build theirs for us. Too often, that “few” consists of wealthy, primarily white, primarily male billionaires who see technology as a tool for profit and power, not dignity.

In my work in the Global South, I’ve encountered a number of cultural concepts that offer alternative ways to think about technology and how to build a more just, equitable, and sustainable future. Technologists and policymakers committed to this goal must look beyond narrow moral and ethical frameworks and consider the wisdom inherent in these often ancient concepts.

From kesejahteraan to kotahitanga

Having spent most of my life in Asia, I’ve witnessed firsthand how Western innovations can crowd out local cultures, languages, and values. So I was genuinely delighted to work with Malaysia’s National AI Office on questions of AI governance and ethics. I was intrigued when asked to explore how a principle called kesejahteraan—roughly translated as prosperity or holistic well-being—could be a beacon for AI governance.

Drawn from the Malaysian government’s MADANI framework, kesejahteraan is a civic value rooted in the country’s plural traditions, one that defines well-being not as a byproduct of economic growth, but as a national goal of human flourishing in its own right—grounded in compassion, justice, equity, and human dignity.

While not narrowly religious, kesejahteraan plays a role similar to other values-based social frameworks that embed ethical purpose into governance—such as “the common good” in Catholic social teaching, or the mid-century Swedish political doctrine of folkhemmet (literally “the people’s home”), which held that the goal of politics and policy is to make the country a good home for the nation’s family. Or take, for example, the concept of ubuntu in Sub-Saharan Africa, which affirms deep social interconnectedness: individual flourishing is inseparable from community well-being. In Aotearoa New Zealand, Māori principles like kaitiakitanga (guardianship), whakapapa (relationality), whanaungatanga (social obligations), kotahitanga (collective benefit), and manaakitanga (reciprocity) have materially shaped national approaches to data governance and technology policy.

Like kesejahteraan, these frameworks offer not just ethical aspiration but a moral compass—one that insists that technology, policy, and progress must all serve a deeper human purpose.

No more balancing acts

With these values-based traditions illuminating my path like a lighthouse beam, I decided to critically confront an assumption that has dominated frontier tech policy discourse for too long, even among the more progressive advocates: the supposed need to “balance innovation and regulation.”

Here’s the thing—there is no balancing act. Not when the stakes are this high.

When it comes to AI and other powerful frontier technologies, the equation is simpler: if it doesn’t serve human (and non-human) flourishing, it isn’t innovation worth pursuing. Values deeply embedded in our societies tell us this is true.

That clarity—that moral clarity—is too frequently missing from corporate roadmaps and multilateral declarations. Some governments include these values in their national strategies, but they often become nice-to-haves in the “innovation or regulation” balancing act. The moral lighthouse is right there before us, powered by indigenous, religious, and humanistic values that, on the individual level, most of us practice every day. It is up to us to ensure these values not only inform but actively drive AI governance and policymaking. Our ancestors tell us this is possible.

Market failure, moral failure

Artificial intelligence—in its many forms—is rapidly reshaping economies, democracies, workplaces, and every dimension of personal and political life, all at once and at astonishing speed. Meanwhile, self-styled billionaire tech savants and disciples of effective accelerationism—often the same individuals—tell us that innovation must be rapid, markets must be free, and regulation must be minimal so that the world can benefit, and our lives become more prosperous.

This is a dangerous illusion.

In my earlier work, I succumbed to that illusion, as a true believer in the democratizing power of technology. I worked to promote freedom of expression, transparency, and democratic governance. I believed that social media technology could give voice to the voiceless and shine a light where it had long been absent.

But social media was a test case. Optimized for engagement, it monetized outrage, polarization, and misinformation. The consequences—for mental health, civic trust, democracy, and societal cohesion—are now apparent.

Markets don’t have a conscience. Left alone, they do not optimize for inclusion, equity, dignity, or human flourishing. They optimize for profit, scale, addiction, and efficiency—often in ways that cause real harm, especially by flattening the human outliers in their datafied models of who the average us is.

And AI threatens to repeat—and magnify—these past failures. If left unchecked, unsupervised, and unregulated, the damage to people, systems of government, and cultural life will be even more profound.

This is why we need a moral lighthouse—an ethical compass that guides our policymaking before harm is done, not after. Not after lives are ruined, governance systems are undermined, and cultures are dissipated. And let me be clear: this is not a call for religious doctrine to dictate public policy. Nor do I equate morality with religion. Instead, it is a call for our shared humanity to take the lead—for us to ask not only can we build a technology, but should we? And if we should, how can it help us shape the world we want?

Technology is not destiny

One of the most pernicious myths of the current age is that technological progress is inevitable—that it unfolds in a single, unstoppable direction. But as economist and Nobel Laureate Daron Acemoglu has argued, “The direction of technology is never predetermined… Government policy can play a role in encouraging a more beneficial trajectory for AI.” In other words, we can bend the arc of innovation.

We saw this in earlier eras: from labor protections in the Industrial Age to environmental regulations in the age of fossil fuels. Each time, society stepped in to shape technology for the common good, rather than for exploitation and power accumulation. In their reflections on Acemoglu’s work, fellow Nobel laureates Abhijit Banerjee and Esther Duflo put it starkly: “It is still our job to determine whether the vehicles we build are heading toward justice or down the cliff.”

Yet when it comes to AI, we risk relinquishing control.

The market cannot steer alone

The dominant paradigm espoused by tech companies—especially in Silicon Valley—is grounded in a form of market fundamentalism: innovate quickly, scale rapidly, and worry about the consequences later. Harms are treated as bugs, not warnings—cleaned up only if there’s enough public backlash or a PR fire too big to ignore.

As journalist Hilke Schellmann has documented, the relentless pressure on startups to scale, monetize, and get acquired—so that investors can cash out—often overrides any meaningful ethical reflection. The result? AI systems that are biased, opaque, and deeply harmful, quietly sold and deployed across some of the most consequential domains of our lives: hiring, policing, criminal sentencing, education, welfare, migration, and healthcare (among many others).

These new tools—often in search of a problem—rarely come with oversight. And for those impacted by their decisions, there’s usually no accountability, no transparency, and no way to seek redress. This is what happens when the profit motive—the market imperative of return on shareholder value—is prioritized above moral obligations to people, communities, and countries, and when policymakers fail to properly lead.

It is instructive to consider the lack of moral clarity in US AI governance, now unfolding for the world to see. Billionaire Elon Musk and his coterie of so-called Department of Government Efficiency (DOGE) engineers offer a chilling case study. DOGE has pushed to consolidate sensitive personal data from multiple US government departments—including the Social Security Administration, the Internal Revenue Service, and the Department of Homeland Security (and likely others)—to create what some have called “a surveillance tool of unprecedented scope.” Initially targeted at immigrants, it could be turned on anyone next.

Moreover, reports suggest that AI is being used not only to interfere in government processes but also to monitor federal employees for disloyalty to the president—a move that prioritizes surveillance and weakened institutions over rights, dignity, and due process. In January, President Trump signed Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which rescinded Biden-era safeguards on AI development, prioritizing rapid AI advancement over ethical considerations and public safety. Money over humanity. Power over accountability. Acceleration over wisdom.

And yet, Congress does nothing. Despite bipartisan “calls” for responsible AI regulation, lawmakers have failed to act in meaningful ways, or even to restrain profit-driven corporations from moving fast and breaking things with their AI. In a particularly egregious move, Congressional Republicans have proposed a decade-long ban preventing states and other localities from enacting AI-related regulations. Rather than restraining industry excess, such proposals greenlight further corporate entrenchment of opaque, unaccountable AI systems across critical sectors like healthcare, employment, immigration, and public services. Meanwhile, Big Tech’s growing influence over the regulatory narrative continues to weaken oversight—delaying or diluting the very guardrails that could protect the public.

This isn’t just a regulatory gap or a case of Congress scrambling to catch up with fast-moving technology. It’s something deeper: a moral and ethical vacuum. And it reveals exactly why societies absolutely need a moral lighthouse—a guiding philosophy rooted in local wisdom and community values that insists innovation must serve human flourishing, not speed, dominance, or profit.

How refreshing, then, to hear world leaders like Malaysia’s Prime Minister Anwar Ibrahim express at a major intergovernmental economic cooperation forum focused on technology: “What is generally considered to be the failure of a global political system now? It’s a deficit in value. People don’t honor human dignity. There is no concern about justice or fairness.” Let’s not mistake speed for progress. Good governance doesn’t slow progress—it defines what progress is worth pursuing.

What a "moral lighthouse" looks like

US tech leaders have lost the high ground here, and US policymakers along with them. It is time for policymakers in the Global Majority to stop listening to sermons from Big Tech’s charismatic CEOs and professional lobbyists and instead listen to those from within.

The diverse cultures and religious traditions of the Global Majority bring a wealth of practical moral guidance to the policymaking table. An ethical framework for AI policy begins by reaffirming some basic principles that have formed the foundation of many indigenous traditions around the world: that human dignity is not negotiable, that power must be accountable, and that no innovation is above public scrutiny. It requires policy not just as a safety net, but as a steering wheel.

Indigenous traditions in North America are yet more examples of powerful moral guidance. Among the Haudenosaunee Confederacy (which brings together Mohawks, Oneidas, Onondagas, Cayugas, and Senecas), the Seventh Generation Principle calls on leaders to consider how every decision will affect those who come seven generations after them—a radical contrast to the short-termism of venture capital and platform capitalism. More broadly, these indigenous worldviews emphasize relationality, reciprocity, and stewardship, reminding us that data, knowledge, and innovation are not just resources to be mined but responsibilities to be held with care.

Some countries are trying to center indigenous wisdom in their governance systems for frontier technology. Aotearoa New Zealand, by embedding Māori principles of guardianship and care into its data ethics strategy, has reminded us that indigenous values can enrich modern governance. For example, whakapapa implores us to recognize that all data has a genealogy and that understanding its context and origin is essential. In Malaysia, the concept of kesejahteraan offers a moral lens through which AI can be assessed, not only in terms of efficiency or GDP, but also in its contribution to justice, dignity, and shared humanity. In Europe, France’s États généraux de l’information draws on a traditional participatory model where citizens, journalists, and civil society help shape digital governance.

These examples are not perfect. But they demonstrate that alternatives to the capitalist, market-first model underpinning much of today's governance discourse are indeed possible.

Power, participation, and purpose from the Global Majority

Most public and private declarations on AI governance today gesture toward a familiar set of principles—fairness, accountability, transparency, and privacy. These are important, but they are not enough. They offer procedural guardrails, not purpose.

What’s missing is moral clarity—a more profound sense of why we are building these technologies, and for whom. We need something more enduring than a risk framework: a moral lighthouse, one that draws not just on regulatory expertise or market logic, but on the vast ethical inheritance of our cultures and communities. These are the pathways our ancestors laid down—through teachings about responsibility, stewardship, justice, and care—not just for the present, but for the generations to come.

The next question is whether we have the political will to act on that moral compass when the profit motive collides with the public interest. Traditions from across the Global Majority tell us that as societies we can, and our leaders must, push back when our flourishing is compromised.

This kind of moral lighthouse gives our leaders the clarity to ask: Does this technology contribute to human flourishing? If not, why not?

Flourishing… or failure

The stakes are enormous. AI systems and the people who deploy them will influence who gets hired, promoted, or fired; who receives what medical care; how children are taught; how and which people can migrate; who gets mortgages and bank loans; and how governments allocate resources. They will shape the narratives we hear and believe, the choices we see (and those we are forced to accept), and the freedoms we enjoy.

If AI deepens inequality, disempowers people, or displaces civic participation, it is not the future we want—no matter how advanced the technology may be, or how much money some individuals can make from it.

A moral lighthouse doesn’t guarantee safe passage. But powered by the common moral values evident across the world’s traditions and indigenous philosophies, it helps us chart a course and navigate uncertainty. It warns us when the rocks are near. And in an age of market acceleration and ethical drift, we need that beacon more than ever.

Ultimately, the question is not what technology can do. The question is: what kind of world do we want to build?

Authors

Michael L. Bąk
Michael L. Bąk is a Non-Resident Visiting Senior Fellow at the NYU Center for Global Affairs focusing on cyber policy and a recognized specialist in democratic governance, public policy, civil society, human rights, and ethical tech policy. He also serves as an Advisor at the Centre for AI Leadershi...
