The ‘AI for Good’ Agenda: For Whose Benefit?
María Hernández Jurado, Suvradip Maitra / Jun 24, 2025
Paris, France — On February 10-11, 2025, France and India co-chaired the Paris AI Action Summit, hosted at the Grand Palais des Champs-Élysées.
Over the past year, accelerating innovation to enable the development of AI for the public good has become a global imperative. AI is seen as a ‘magic bullet’ solution to achieve the Sustainable Development Goals (SDGs), with international forums—such as the upcoming ‘AI4Good’ Global Summit in early July—highlighting the technology’s potential for positive social impact, particularly in the majority world.
This trend is evident in interregional coalitions across the majority world. For instance, the 2024 African Union Continental AI Strategy aims to “accelerate the adoption of AI in the core sectors outlined in the SDGs.” Big Tech firms have also rushed to invest in the majority world—Microsoft, for example, committed $3 billion to AI over two years to “accelerate innovation in India.” Multilateral bodies, such as the UN, have encouraged AI adoption, as reflected in the UN Global Digital Compact, which aims to “turbocharge development” with AI. Additionally, at the 2025 Paris AI Action Summit, all major global powers reiterated the focus on “accelerating progress towards the SDGs.” Within this global trend, innovation is no longer optional—it is seen as both inevitable and imperative.
As AI systems become more capable and accessible, techno-solutionist narratives are gaining strength. But the critical question remains: good for whom? The dominant ‘AI4Good’ discourse often obscures the deeper socio-structural causes of social issues and technological harms. As we explore below, such initiatives can reinforce social inequalities, perpetuate digital colonialism, and support harmful nationalist agendas.
Our analysis primarily draws on examples from Latin America and India, as we are more familiar with these contexts. However, the issues are broadly mirrored across the majority world. We use the terms ‘majority world’ and ‘Global North/South’ interchangeably depending on the context.
Social inequalities
The narrative of AI as a 'magic bullet' in AI4Good initiatives often reduces complex social issues to problems of individual prediction, shifting the focus away from the underlying structural causes. This framing can justify increased surveillance and the implementation of projects that end up disproportionately affecting minoritized groups.
For instance, in Colombia, the SISBEN (Sistema de Identificación de Potenciales Beneficiarios de Programas Sociales) is used to assess citizens' socioeconomic status and determine access to welfare programs, and originally relied on household surveys. In 2016, the National Planning Department modified the algorithm to improve allocation efficiency by predicting income-generating capacity and integrating data from public and private sources. This shift toward big data analytics led to collaborations with institutions such as MIT and multinational financial firms to develop statistical models for detecting potential fraud. By framing poverty as a technical problem of efficiency, the system located the problem in individuals, obscured the eligibility criteria, and limited people’s ability to challenge their exclusion.
Similarly, Plataforma Tecnológica de Intervención Social (PTIS) was an algorithm deployed in Argentina in partnership with Microsoft, claiming to predict with 86% accuracy which girls are likely to experience teenage pregnancy. Presented as an innovative tool for social intervention, it reframed adolescent pregnancy as a predictive, individual issue. This approach obscured broader socio-structural factors—such as access to sexual education, contraception, socioeconomic conditions, and schooling—while increasing surveillance and control over girls’ bodies, particularly in rural areas.
In the case of India, the techno-solutionist rhetoric of AI for socio-economic development serves as a ‘hype machine’ to legitimize other market-based and geopolitical claims.
The same populations that AI4Good projects claim to serve are often the ones most burdened by the social and environmental costs of AI development. For instance, Africa is one of the largest suppliers of critical minerals, provides cheap labor for AI training, and absorbs significant e-waste, all of which disproportionately impact African women. Recently, the developers of smaller AI models such as DeepSeek have claimed to be more energy efficient than larger models like ChatGPT without sacrificing performance. Yet, we must remain wary of the Jevons Paradox, which reminds us that, historically, increased energy efficiency of technologies has not resulted in reduced energy usage. As technologies have become more accessible, demand for them has grown proportionately, cancelling out any efficiency gains with the overall increase in usage.
Digital colonialism
Arguably, inequalities between the Global North and South are being calcified under the auspices of AI4Good initiatives. In line with historical North-South colonial dynamics, concerns include increasing technological dependency, data extraction, corporate surveillance and influence, and Western knowledge domination.
For instance, the piloting of technologies in the Global South within the AI4Good phenomenon—couched in solutionist and capitalist narratives—echoes the dynamics of medical experimentation endemic to the colonialism and racism of the 19th and 20th centuries. The majority of AI4Good applications are developed in North America and Europe, with US institutions directing African-based organizations. Often, a lack of localized solutions leads to the imposition of Eurocentric classification structures that are culturally insensitive. For instance, psychotherapy bots like Karim, deployed to support Syrian refugees in Lebanon, rely on Western conceptions of mental health.
Consider Microsoft’s Project Ellora in India, which employs rural workers to gather speech data in local Indian languages. Many workers lack smartphones or internet access, making it unlikely that they will benefit from the data. Instead, the data contributes to Microsoft’s ‘data lake’, to be monetized at the company’s discretion.
AI4Good projects can also make previously inaccessible populations in the majority world visible to multinational corporations, granting them power over the design and implementation of development interventions. In the case of the PTIS in Argentina, women and girls in Salta became increasingly visible to Microsoft—as they gained access to information about their characteristics and circumstances—at the same time giving the company direct influence over the landscape of reproductive rights in the country.
Harmful nationalist agendas
Increasingly, AI4Good projects are being co-opted by majority world governments to pursue domestic authoritarian and populist agendas. At times, the elevation of AI4Good narratives and the critique of power concentrated in Big Tech firms have overshadowed the ways in which these projects are politicized by majority world governments.
BhashaDaan, India’s state-led data crowdsourcing project for developing language models, reinforces the language politics of the dominant ‘Hindutva’ political agenda. Under the banner of ‘Hindutva’, India’s ruling Bharatiya Janata Party (BJP) and Prime Minister Modi aim to position Hindi as the unifying language of India, while allaying fears of Hindi imposition in regions with their own native languages. As evident in its promotional narratives, BhashaDaan appeals to the ‘anonymous’ Indian, promising inclusive communication for all, while catering to regional interests by emphasizing linguistic pride in mother tongues. In doing so, BhashaDaan applies a techno-solutionist approach to language barriers, obscuring the lack of government investment in language preservation and devaluing English, which is often seen as a tool of empowerment for minority communities.
Similarly, Aadhaar, India’s celebrated biometric identification program designed to improve welfare access, has become a surveillance tool used to target ethnic minorities under ‘Hindutva’. For example, Aadhaar-linked live CCTV surveillance is used to monitor students in national public service examinations, and biometric analysis is employed for crowd management at several Hindu temples. Championed by the World Bank, Aadhaar is influencing the adoption of similar biometric identity programs across the majority world, raising concerns about the surveillance implications of such programs in the hands of authoritarian governments.
In Latin America, the PTIS platform was supported by organizations and politicians who openly opposed abortion in Argentina, and its discourse echoed historical forms of reproductive control in the country, rooted in eugenic ideologies. Notably, the project was announced during the debate over decriminalizing abortion in the country in 2018, revealing its political underpinnings.
Way Forward
The case studies discussed here have gained visibility because they have been subject to media reporting or academic study, but many routine harms from AI4Good projects remain obscured. There is no doubt that AI can benefit the majority world. But at this critical hype point for AI4Good agendas, we must not lose sight of the ultimate social goals by searching for a problem to fit a pre-defined algorithmic solution—that is, by treating AI as a panacea for all social ills. Could there be value in slowing down rather than ‘accelerating’ or ‘turbocharging’ AI development?
As tech ethicist Dr Shannon Vallor writes in her book The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking:
“... AI is no panacea or cheap technical fix for our social and environmental ills. AI won’t enable a sustainable future without reformed political institutions and economic incentives. What’s more, the kinds of AI technologies we’re developing today are undermining and delaying these reforms rather than supporting them, precisely because they mirror the misplaced patterns of judgment and value that led us into our current peril.”
How can conferences like AI4Good contribute? Consider the AI4Good Innovation Factory, which aims to “identify practical solutions using AI, scale those solutions for global impact, and advance the SDGs.” We must abandon this very logic of scale, which valorizes technological initiatives designed to grow quickly without adapting to local contexts. This mindset is emblematic of Silicon Valley’s innovation and disruption discourses.
Instead, AI4Good can fund capacity building in local communities and non-profits, and critically engage with how AI can obscure alternative solutions. We should co-design bottom-up solutions with grassroots communities. The co-design process needs to start from jointly identifying and defining problems with affected communities, and considering all possible low-tech solutions without rushing to AI as the pre-defined answer. The design process needs to be led by local communities—not governments, large academic institutions, or private companies—with support provided by traditional top-down power holders. Such initiatives can even help counteract harmful national agendas and hold governments or corporations to account. Examples include the use of drones to produce reliable maps of the Borneo rainforest in Indonesia to stop illegal deforestation driven by large-scale mining and palm oil plantations, and an activist-led effort in Mexico to create a detailed femicide map to challenge the government’s lack of data transparency on the issue.
Ultimately, we need to adopt a broader conception of good, beyond AI, to ensure our aim remains using AI for good, not what is good for AI.