India’s AI Middle Path Offers Lessons for Australia and New Zealand
Archana Atmakuri / Mar 24, 2026
Sinem Görücü / Better Images of AI
Global debates about artificial intelligence are increasingly focused on how to close the “AI divide” between countries that build advanced systems and those that largely consume them. Recent gatherings such as the India AI Impact Summit in New Delhi have highlighted this shift, with leaders debating how AI can be developed and governed in the Global South rather than relying solely on models emerging from Washington, Brussels or Beijing. While the ‘Global South’ framing often focuses on countries in Africa, Latin America and Asia, these debates also hold up a mirror for middle powers in Oceania such as Australia and New Zealand.
For Canberra and Wellington, AI is still mostly treated as a technical and regulatory problem: something to be managed with principles, risk frameworks and standards. That is necessary, but it is not enough to create a national advantage. As AI becomes basic infrastructure for economies and societies, middle powers face a harder question: do they remain passengers in someone else’s AI ecosystem, or invest in becoming architects of their own? The answer does not lie in trying to out‑spend the United States or the European Union, or out‑scale China. It lies in a middle path that puts sovereignty first: treating key AI capabilities as public infrastructure, grounding governance in local data rules, and centering the voices of affected communities.
What would such a middle path look like for Australia and New Zealand?
First, aim for sectoral sovereignty. No middle power can or should aim for full AI self‑sufficiency. The choice is not between building a national frontier lab and giving up entirely. The more realistic question is where dependence on foreign systems creates real vulnerability. For India, the stakes are obvious. Foreign platforms are training on Indian data at massive scale and increasingly mediate everything from payments to public services. In response, New Delhi has begun to treat AI as an extension of its digital public infrastructure model: a layer that must serve Indian languages, laws and citizens.
Australia and New Zealand face similar risks. Their exposure is not only about economic scale but also about the sectors where AI will be deployed. In Australia, for instance, a Senate scrutiny committee has raised concerns about regulations enabling AI and automated systems to make immigration and biosecurity decisions that were previously at ministers’ discretion, as part of a broader expansion of automated decision‑making across government services. Many Australian public systems are built on foreign cloud and AI providers subject to overseas laws, creating data‑sovereignty and accountability risks that domestic frameworks cannot fully control.
In New Zealand, too, algorithms guide work in sectors such as social development, policing and immigration, including automated decision‑making on some visa applications. Commentaries on Māori AI sovereignty also warn that many overseas AI models are controlled by foreign laws and trained on opaque, non‑local data, meaning systems imported from abroad may not produce fair or legally compliant outcomes in Aotearoa. In critical infrastructure, simply plugging foreign‑built systems, trained on other countries’ data, assumptions and legal norms, into local bureaucracies imports foreign trade‑offs into the heart of domestic governance.
A middle path would be to identify critical sectors and treat them as areas where domestic or regional control over data and models is non‑negotiable. That does not mean excluding foreign firms entirely; it means that key systems run under local jurisdiction and on infrastructure that can be scrutinized, constrained and, if necessary, replaced.
Second, treat AI as a public good. One of India’s key contributions to the current global AI governance debate has been to treat digital infrastructure as a public good. The Aadhaar identity system and the UPI payment rail have been framed as shared infrastructure that private actors can build on, rather than as proprietary platforms owned by a handful of firms. AI is now being pulled into the same orbit, although India’s approach also shows the risks of this model: biometric failures have excluded people from welfare, and centralized identity data has raised serious concerns about surveillance, privacy and weak accountability for errors or breaches.
For middle powers, the task is to adopt this model carefully, borrowing its public-good logic while avoiding its blind spots. If AI is going to underpin public services, it cannot be approached only as a series of software contracts. Some core capabilities, such as curated datasets and secure environments for sensitive public‑sector workloads, have to be treated as utilities.
Australia and New Zealand are well placed to do this at a modest scale. Both countries are investing in shared digital infrastructure and common cloud platforms within the public sector and have research institutions capable of training or adapting models for local use. Their private sectors will, if given the right incentives, help operate and build on public platforms. What is missing is a clear decision to treat at least some AI building blocks as part of a national commons rather than leaving them entirely to offshore providers.
Without that decision, Indigenous communities and the broader public will remain price‑takers in a market dominated by a few large foreign stacks. With it, middle powers gain something far more valuable than symbolic sovereign AI: bargaining power and the ability to say no to systems that do not meet local standards, a real source of national advantage.
Third, ground governance in data sovereignty and inclusion. There is another area where India and Oceania converge: resistance to a new wave of data colonialism. Whether it is Indian languages under‑represented in large models, or Māori and First Nations knowledge used to train datasets without consent, the pattern is familiar. Communities see their data extracted and reused in ways they cannot see, challenge or benefit from.
The response in Oceania, too, is to push for robust Indigenous data sovereignty — the idea that Indigenous peoples should have collective authority over how data about them is collected, stored and used. These ideas are beginning to find their way into AI debates through ethics frameworks, sector guidelines and research partnerships. They point towards a model of AI governance that treats local communities as co‑owners and co‑designers of systems.
For middle powers, as for the Global South, the takeaway is clear: sovereign AI and inclusion can be turned into national advantage. Governments should start by identifying a small set of shared AI components, such as key datasets, base models and core software tools, that are too important to be left entirely to commercial AI providers. These components can then be funded and governed as digital public goods, with open, auditable rules and oversight in the public interest. Countries that protect communities from harm and ensure these shared systems reflect local contexts will be better placed to attract responsible innovation, export their standards and build trust at home and abroad.