Lessons from Nigeria and Kenya on Digital Colonialism in AI Health Messaging
Yewande O. Addie, Jasmine McNealy / Oct 3, 2025
Medicine by Alan Warburton / Better Images of AI / Image by BBC / CC-BY 4.0
AI’s potential in health is real. Tools such as predictive disease modeling, diagnostics, and information systems can help low-resource health sectors scale quickly and prepare more effectively. For example, in April 2024, the World Health Organization launched S.A.R.A.H. (Smart AI Resource Assistant for Health), a prototype chatbot that can answer basic health questions in eight languages across topics such as nutrition, mental health, and chronic disease prevention. While its current capabilities are limited to scripted interactions and general information, it is presented as a model for how multilingual, always-available systems might eventually expand access to vital health communication resources.
Yet like all generative AI systems, S.A.R.A.H. remains experimental, is refined in real time, and is subject to the same types of criticisms faced by other AI models. For instance, Google’s Gemini image generator was roundly criticized in early 2024 for producing historically inaccurate depictions of communities of color while failing to generate images of white people, forcing the company to pause and later re-release the feature. More recently, the release of OpenAI’s GPT-5 model was met with backlash from users frustrated by inconsistencies in output quality and the model’s tendency toward vagueness and “safety-washed” responses. These episodes highlight how rapidly shifting public critiques shape the evolution of AI systems. They also remind us that these tools are primed for both improvement and failure depending on the governance structures that surround them. These tensions are not abstract, and our recent Oxford-published study of health messaging in Nigeria and Kenya shows why.
Findings from Nigeria and Kenya
In the study, we compared 120 health messages—80 traditional campaigns from ministries of health and local organizations, and 40 messages generated by two AI systems: S.A.R.A.H. and ChatGPT. We focused on vaccine hesitancy and maternal health, two domains where trust and cultural specificity are critical.
AI messages were faster to generate and occasionally incorporated local metaphors, but they frequently lacked depth, contained language errors, and missed contextual nuance. Messages from S.A.R.A.H. were medically precise but templated and flat in tone. ChatGPT’s outputs were more dynamic but sometimes misaligned culturally or visually. Traditional campaign materials were more accurate but tended to reinforce biomedical authority while incorporating little community knowledge. The result is a dual shortfall: neither AI nor traditional campaigns delivered communication that was both accurate and culturally resonant.
This is not only a communication problem. It is a governance problem. AI health messaging exists within a broader political economy in which Global South communities often provide the labor and serve as testing grounds for technologies while policymaking and agenda-setting occur elsewhere. The parallels to global health aid are clear: African professionals are too often positioned as implementers rather than co-creators. Without protective policies, generative AI risks entrenching patterns of exploitation that echo older hierarchies of power.
Sovereignty in other sectors
African leaders have already recognized this dynamic in other sectors and are starting to take stronger stances against extractive arrangements in natural resources. In Niger, authorities revoked the permit of French firm Orano to operate a uranium mine, challenging decades of unequal contracts. In Botswana, leaders renegotiated diamond agreements with De Beers and introduced legislation requiring local ownership stakes in mines. In Burkina Faso, the government nationalized two gold mines to reclaim revenues for the state.
These moves signal a broader shift: resource sovereignty is non-negotiable. The same principle should apply to data and AI. Countries with histories of colonial extraction may require stronger guardrails than those of the Global North—protective policies that prevent external actors from exploiting data, narrative labor, or algorithmic outputs without fair benefit or oversight.
Infrastructure is a frontline in the struggle for digital self-determination. Kenya’s partnership with Microsoft and G42 to build a billion-dollar geothermal-powered data center demonstrates how countries can anchor infrastructure in local conditions, linking renewable energy to digital growth. Similarly, Mauritania’s inauguration of its first national data hub in 2025 signaled an assertion of state control over data storage and connectivity. These examples show how investments, when tied to sovereignty and sustainability, can strengthen autonomy rather than dependence.
Yet not all digital initiatives have delivered what was promised. Kenya’s much-publicized “One Laptop per Child” program, launched to equip every primary school student with a laptop, faltered due to high costs, inadequate infrastructure, and insufficient teacher training. Instead of transforming learning, it exposed how ambitious projects collapse without planning for local capacity and long-term maintenance. In Malawi, research on the “new achikumbe elite”—urban-based, educated young farmers engaged in commercial agriculture—shows how digital platforms, while expanding access to information for some, can deepen inequality when regulation and validation mechanisms are weak. Informal social media groups can become conduits for misinformation and corporate influence, leaving resource-poor households at a disadvantage.
The necessity of good governance
These contrasts highlight the stakes. Governance strategies must go beyond bricks-and-mortar investments or one-off training programs. They require enforceable oversight, local ownership of data and models, and meaningful participation from the communities most affected. A key part of this protection is distinguishing between AI as a general-purpose tool and AI as a health product. While systems like ChatGPT or Gemini are designed for broad tasks, health applications require alignment with clinical guidelines, rigorous privacy safeguards, and community co-design. Without that distinction, efforts to scale AI risk serving corporate interests or donor timelines more than the needs of patients and practitioners.
Current policy frameworks across Africa acknowledge these distinctions but struggle with implementation. Nigeria’s 2024 National AI Strategy emphasizes human-centered design and cultural sensitivity, but offers few mechanisms for ensuring these values are realized in practice. Kenya’s 2025 draft AI Strategy explicitly identifies healthcare as a priority sector and stresses data sovereignty, yet health communication and trust-building remain underdeveloped. At the regional level, the African Union’s 2022 Data Policy Framework foregrounds sovereignty and raises the problem of data colonialism directly, but it remains aspirational and lacks enforcement power. Together, these frameworks signal ambition but reveal a gap between stated values and enforceable practice. None explicitly engage with epistemic justice: the question of whose knowledge is legitimized and how cultural expertise is incorporated into systems. This omission is especially urgent in the context of African health communication, where histories of medical mistrust, gendered inequities, and exclusionary narratives already shape how communities interact with authority.
These challenges are also unfolding at a time when the global health infrastructure itself is in flux. Recent US divestments from international health programs, including reductions in USAID funding, have destabilized a system that many African practitioners have relied on for decades. Uncertainty around programs such as AIDS relief, alongside wider budget cuts, risks making AI appear attractive as a quick fix—an efficiency tool marketed in times of scarcity—rather than as a carefully governed innovation embedded in sustainable systems. This makes protective strategies in AI governance even more vital: the capacity gaps left by declining foreign investment cannot be filled by uncritical adoption of tools designed elsewhere.
Generative AI is already reshaping health communication, but whether it does so equitably will depend on governance. If AI is to strengthen African health systems, governments should implement enforceable protections such as mandatory community consultation periods before deploying AI health tools, algorithmic impact assessments that evaluate cultural appropriateness alongside technical performance, and local data residency requirements that prevent extraction of health information without community benefit. Regional bodies such as the African Union could establish cross-border standards for AI health governance, with enforcement mechanisms that go beyond voluntary frameworks.
If governed well, generative AI offers the potential to strengthen health communication, but the environmental impacts of these systems will also require governance. Africa is currently building out its data center capacity, with several tech firms establishing partnerships with countries such as South Africa, Nigeria, and Kenya. Although tech firms sometimes tout green design for these facilities, as with Microsoft’s planned use of geothermal energy for a data center in Olkaria, Kenya, green energy is not without environmental impacts of its own. This, too, is a public health issue, and it demonstrates the ecological nature of AI systems: attention tends to fall on their recognizable outputs, such as text from LLMs, but their interactions with other systems, including the environment, can also affect local communities.
Therefore, international funders and tech companies should be required to demonstrate meaningful community co-design in their AI health projects, not just superficial cultural adaptation. This means including traditional health leaders and community representatives in system development, not just implementation. Countries should also, wherever possible, invest in homegrown AI capacity rather than relying solely on external tools.
The safeguarding measures emerging in natural resources and digital infrastructure suggest a path forward: sovereignty must be asserted through concrete policy, not assumed through good intentions. The question is no longer whether Africa will be included in AI governance, but whether it will set the standards. Our analysis of health messaging shows how these dynamics manifest in specific health communication contexts. By implementing stronger governance now, African countries can ensure AI serves communities rather than displacing the cultural knowledge that sustains trust in health systems.