How India’s New Free Trade Agreement with the EU Limits AI Governance
Shweta Kushe / Apr 13, 2026
The views expressed in this article belong to the author and do not reflect the views of her employer or affiliated entities.

(R-L) India's Prime Minister Narendra Modi, European Council President António Costa, and European Commission President Ursula von der Leyen attend a press conference in New Delhi, India, on January 27, 2026, the day the EU and India reached a final agreement on the conclusion of the FTA (Free Trade Agreement). (The Yomiuri Shimbun via AP Images)
Solon, the 6th-century BCE Athenian lawmaker, is often credited with observing that laws can resemble spiders' webs: if anything small falls into them, they ensnare it, but large things break through and get away. The insight is simple yet enduring, and it serves as a useful test for modern legal frameworks. As India and the European Union concluded negotiations and released the text of the India-European Union Free Trade Agreement (India-EU FTA) in February, Solon's warning is particularly relevant.
The India-EU FTA's Digital Trade Chapter establishes a broad prohibition on requiring the transfer of or access to source code. This prohibition is combined with narrow, largely reactive carve-outs that risk leaving India's regulators without proactive audit authority over algorithms that shape finance, healthcare, and critical infrastructure. Similar concerns about source-code provisions arose during negotiations of the India-UK Free Trade Agreement (India-UK FTA). However, that agreement explicitly preserved space for algorithmic accountability. By contrast, the India-EU FTA is silent, creating a notable asymmetry: the EU retains meaningful scrutiny over AI systems through its internal legislation, i.e., the EU AI Act 2024 (Article 74), while India's ability to examine EU-origin AI deployed within its borders is legally constrained by the India-EU FTA's text.
What is the core prohibition on source code access?
Article 9.9 of the India-EU FTA's Digital Trade Chapter bars India from requiring the transfer of, or access to, source code of software as a condition for import, export, distribution, sale, or use. The provision also covers products containing such software. By explicitly listing market activities – import, export, distribution, sale, and use – it closes gaps that a more general formulation or standard source-code clause might leave open. At the same time, extending protection to embedded software broadens the scope significantly: the prohibition potentially shields everything that runs on a chip, e.g., smart meters, medical devices, and industrial controllers.
At the same time, Article 9.9 contains two carve‑outs that partially temper this breadth. First, it permits authorities to seek access in support of defined public processes, such as investigations, inspections, examinations, enforcement actions, or court proceedings, provided that the request advances a legitimate policy objective and is subject to confidentiality safeguards. Second, in the competition law context, authorities may obtain proportionate and targeted access when necessary to remedy a violation or to address barriers to entry in digital markets. These carve‑outs recognize that code can be examined once legal proceedings are underway or when competition enforcement requires it.
The significance of this drafting choice, however, becomes clear when you consider what meaningful AI oversight actually requires. Both carve‑outs are reactive by design. Access appears to arise only after a case, investigation, or proceeding is underway. Neither directly establishes standing authority for pre‑market testing or routine audits of high‑risk AI systems before they scale. For algorithms used in lending, medical settings, content moderation, or industrial controls, key risks—bias, safety failures, or hidden vulnerabilities—are best addressed before deployment and through periodic review, not only after a violation is suspected. In effect, the carve‑outs only let India look under the hood after the smoke is visible.
How does this compare with the India-UK FTA?
In public discourse and policy circles, the India–UK FTA also raised concerns about governmental access to source code. However, Article 12.15 of the India-UK FTA built in two guardrails that partially mitigated those concerns. First, a footnote clarified that protection extends to algorithms embedded in source code, but not to the expression of those algorithms in other forms, such as documentation. This preserves space for explainability through model cards and other technical specifications. Second, another footnote expressly recognizes "algorithmic accountability," allowing authorities to require firms to preserve and make available source code in furtherance of investigations, inspections, enforcement actions, or judicial proceedings, creating a workable post-incident audit pathway.
By contrast, the India–EU FTA adopts a broader prohibition and omits both safeguards. It does not distinguish between code and other forms of algorithmic expression, enabling firms to argue that logic, expressed in any form, is protected. It also does not include any explicit reference to algorithmic accountability or to preservation obligations for investigations. As a result, India faces a wider restriction on access with fewer legal bases to justify disclosure, even after an incident, and greater exposure in sectors where EU firms are significant suppliers of AI‑embedded products.
What is the algorithmic accountability gap?
The gap lies between the reactive access tied to investigations and proactive authority for pre‑market and ongoing, systemic audits. The India‑EU FTA permits access only in connection with enforcement processes, which are inherently post‑hoc and case‑specific. Algorithmic accountability, by contrast, requires the ability to assess models before deployment, to monitor their performance across diverse populations after deployment, and to verify compliance with regulatory requirements over time.
Without explicit recognition of algorithmic accountability and preservation obligations for source code, logs, and technical artifacts, regulators cannot reliably test compliance, detect vulnerabilities, or evaluate bias and safety at scale. In practice, this means systems used in credit scoring, diagnostics ranking, and industrial control can shape outcomes for millions while remaining largely beyond routine, evidence‑based oversight. That is precisely the area the UK text’s explicit algorithmic accountability language sought to cover, but which the EU text omits.
The gap becomes more evident when considering what regulators need for meaningful audits. Source code is the starting point because it discloses intent – the rules, weightings, and logic the developer chose to embed – but it is insufficient without behavioral data, such as inference logs, input-output records, and audit trails that capture how the system performed in practice. The India-EU FTA's silence on preservation obligations for logs, combined with restrictions on source code access, leaves regulators with limited tools.
Why does the omission matter, and what could India have secured?
The EU has already equipped itself with strong AI oversight through its domestic regulatory framework. Limiting reciprocal audit rights abroad allows it to protect its firms in foreign markets, including reducing litigation risk for them outside the EU, while maintaining stricter scrutiny at home.
India could have pursued several safeguards to preserve its regulatory authority. These include explicitly identifying “algorithmic accountability” as a legitimate regulatory objective and clearly defining conditions for pre-deployment assessments and periodic audits of high-risk systems. In addition, the agreement could have included mandatory preservation obligations requiring firms to retain and, upon lawful request, produce not only source code but inference logs, input-output records, and key audit artifacts, subject to confidentiality safeguards. Finally, for continuously retrained systems whose code and weights update frequently, India could have secured a definition of the stable unit of audit, i.e., a clearly identifiable, fixed version of the AI system, such that preservation obligations attach to something technically meaningful rather than a moving target.
These measures mirror global best practice, make compliance verifiable, and align market access with public‑interest oversight.
Trade rules are fast becoming the bedrock of the digital economy. In this landscape, the India–EU FTA's formulation of source-code and algorithmic provisions appears to narrow India's policy authority more than the India-UK FTA did. It shifts the balance towards post-incident enforcement and away from proactive oversight, at a time when India's scale and diversity require the opposite. There is, however, a silver lining: the agreement's five-year review clause creates a small but critical window for India to negotiate additional provisions, press for algorithmic audit rights, and restore parity with global regulatory norms as AI governance frameworks evolve.
The author thanks Aaditya Srinivasan and Santosh Srinivasan for their insights on the draft version of the article.