Vietnam’s New AI Law Balances Innovation Push With Tight State Control
Lam Le / Mar 23, 2026
Lam Le is a fellow at Tech Policy Press.

Vietnamese Prime Minister Pham Minh Chinh, left, speaks to Apple CEO Tim Cook, right, before their meeting in Hanoi, Vietnam, in April 2024. (Duong Van Giang/VNA via AP)
On March 1, Vietnam became the first country in Southeast Asia to have a comprehensive AI law come into effect.
The Law on Artificial Intelligence draws on preceding legislation, notably the EU’s AI Act, which includes risk-based management of AI. It also “ensures a higher safety level than South Korea’s basic framework, (and) promotes strong development like Japan,” Tran Van Son, deputy director of the National Institute of Digital Technology and Digital Transformation under the Ministry of Science and Technology, said at a press conference last December.
The law came into effect at a time when Hanoi is pushing for the “era of national rise,” a term coined by Communist Party chief To Lam in 2024 to reflect his vision for a high-income, developed Vietnam by 2045. Tech is among the main engines of this transformation, while efficient institutions act both as facilitators of growth and as brakes to ensure digital sovereignty, safety and security in the digital space. This vision has translated into a steady stream of tech-related laws and directives passed and updated since 2024, including the Personal Data Protection Law, which came into effect last January, and the revised Cybersecurity Law passed last December.
The AI Law was drafted in just three months and went through multiple consultation rounds with AI companies, industry groups, research institutes, and international experts and organizations. The timeline was rushed and “insufficient for stakeholders to analyze the document rigorously or provide substantive feedback,” Wong Wai San, Director, Policy – APAC at the Business Software Alliance (BSA), an industry group that represents OpenAI, Microsoft, Adobe and others, said in a statement.
The key principle of the law is that AI serves as a support tool, and the final decisions in matters important to society must be made by humans, Minister of Science and Technology Nguyen Manh Hung said at a National Assembly meeting as the law took effect. “We can’t let AI freely develop outside of a legal framework.”
Unlike the EU’s harm-based liability approach, Vietnam sets out rules for fault-based liability, Rohit Kumar, CEO of Risk AI Technologies, which helps companies build compliant AI systems, told Tech Policy Press.
While companies may argue that such a rule would hinder innovation, Kumar said the approach is likely to spread globally. He pointed to banks and financial institutions he has worked with, where autonomous AI systems are used but a human remains accountable for their functioning.
“From a technical perspective, there's no restriction in the Vietnam law which says that you can't have self-driving cars, for example,” he said. “You could have automation, you could have AI making decisions, but the responsibility of that will be with a human.”
The Vietnamese law is both broad and clear on prohibited acts, which include exploiting AI for unlawful purposes, creating deepfakes to deceive or manipulate, and disseminating forged materials that threaten national security or public order.
Unlike the EU’s more detailed list, “this intentional broadness in prohibitions within Vietnam's laws grants local authorities extensive enforcement powers, enabling flexible interpretation and application down the line,” lawyers Thu Minh Le and Alex Do said.
For instance, “AI companies could be held liable for unintended consequences, but this is very tricky since often users can violate the product’s terms of service and get the service (the AI chatbot) to do things that are not allowed or illegal,” Jeff Nijsse, a senior lecturer at the School of Science, Engineering, and Technology at RMIT University in Hanoi, told Tech Policy Press.
While Europe is still debating whether Grok and AI nudity tools should be banned, “systems capable of generating non-consensual explicit imagery or political deepfakes violate prohibited acts clauses here in Vietnam,” said Nijsse. “These systems would need well-tested guardrails before being introduced to the Vietnamese market.”
Per the new law, AI companies wishing to operate in Vietnam are also obliged to self-classify their products as high, medium or low risk, and to notify the Ministry of Science and Technology before deploying those deemed medium or high-risk. The latter will be subject to routine audits.
It also requires both providers and deployers to label AI-generated images, video, and audio. This aligns with the updated Cybersecurity Law, which will take effect in July and prohibits the use of AI to create and post unlawful deepfakes online.
“If I (a user) were to use ChatGPT to produce ‘toxic content’ per Vietnamese law, then I will be responsible for that,” and not OpenAI, as long as the deepfakes are correctly labeled as AI-generated, Nguyen Duc Lam, advisor at the Hanoi-based Institute of Policy Studies (IPS), told Tech Policy Press.
Toxic content is a broad term that encompasses anything from scams and immoral content to anti-state content, all of which are illegal under the Cybersecurity Law. Last December, for instance, Vietnam sentenced in absentia Berlin-based journalist Le Trung Khoa to 17 years in prison for posting "fabricated, distorted, and defamatory" content against the government on social media with the aim of opposing the State. The indictment specifically cited the use of deepfakes of Communist Party and government leaders. With the AI Law in place, if such deepfakes were not properly labeled by the AI company (developer, provider or deployer), it would also be held liable.
However, much uncertainty remains about how the law will translate into enforcement in practice. In Vietnam, the law sets out general principles, while most implementation relies on underlying directives. Even with the 12-to-18-month grace period for existing AI systems, industry groups have called for more time to prepare. Otherwise, the law “could deter market entry and limit the benefits of AI investment,” according to a statement by the Washington-based Computer and Communications Industry Association (CCIA), which represents tech giants like Amazon, Apple, Google and Meta.
“As we have seen in the case of the EU AI Act and Korea’s Basic AI Law, rushed implementation creates regulatory uncertainty, compliance bottlenecks, and the need for subsequent clarifications to address unintended consequences,” Jonathan McHale, Vice President of Digital Trade of CCIA, said.
The draft decision defining the scope and criteria for high-risk AI systems has drawn the most concern from AI companies, as such a high-risk system is subject to the most stringent regulatory scrutiny, including risk assessments, human oversight, registration in a national database and incident reporting. Foreign providers of high-risk AI systems in Vietnam are also required to establish a local contact point.
Local tech companies have expressed concerns that such criteria could delay deployment and add large administrative burdens, especially on smaller startups, impacting the speed of innovation and even their bottom line.
But Lam from IPS also urges local startups to look beyond the risk classification. “The AI Law offers quite a lot of support to SMEs,” he said. It has specific provisions to propel the domestic AI industry, including plans for national AI infrastructure with a national AI database, human resource development and financial incentives via the AI Development Fund.
The law, according to him, has been deliberately left broad. For instance, the criteria for high-risk AI systems will be updated every year and issued by the Prime Minister. “The goal is to keep up with the changes in AI, but whether we can actually catch up is another matter,” Lam said.
“If you look at a 10-year horizon, I think most countries will have certain AI regulations in place,” Kumar said. “So for any company wishing to already do business in Vietnam, I think they should just get used to this concept of different jurisdictions having regulations and then having to comply, because I think that's a global trend.”
As of this writing, the public comment period for draft implementing documents on risk classification criteria, AI labeling requirements, and other reporting and conformity procedures has closed, and the government has yet to release the final approved versions.