9 Key Points of the Artificial Intelligence (AI) Act in Vietnam (2025)

On March 1, 2026, the Law on Artificial Intelligence 2025 (Law No. 134/2025/QH15) officially takes effect, marking a turning point in establishing a legal framework for the research, development, and application of Artificial Intelligence in Vietnam. The Law applies to both domestic and foreign organizations and individuals (except for activities serving national defense and security), with the goal of balancing innovation and risk control.

1. Building a unified legal framework for Artificial Intelligence for the first time

The Law on Artificial Intelligence 2025 is the first Law in Vietnam to standardize legal concepts and related regulations, serving as a basis for management and enforcement. According to Article 3 of the Law on Artificial Intelligence 2025 (No. 134/2025/QH15), “Artificial Intelligence” is understood as the electronic performance of human intellectual capacities such as learning, reasoning, perception, judgment, and natural language understanding.

At the same time, the Law clearly distinguishes the actors participating in the Artificial Intelligence ecosystem in order to assign legal liability across the entire life cycle of an Artificial Intelligence system. Specifically:

  • Developer: Performs the design, construction, training, testing, or fine-tuning of an Artificial Intelligence model or system and has direct control over technical methods, training data, or model parameters.
  • Provider: Brings an Artificial Intelligence system to the market to operate under their trade name.
  • Deployer: Uses an Artificial Intelligence system in professional, commercial activities, or service provision, except for use for personal, non-commercial purposes.
  • Users and affected persons: Subjects directly or indirectly impacted by the Artificial Intelligence system.


2. Clearly delineating roles and the “Human-centric” principle

The Law on Artificial Intelligence 2025 sets out guiding principles for the development and application of Artificial Intelligence in Vietnam, emphasizing:

  • Artificial Intelligence must be human-centric – ensuring human rights, privacy, national security, and legal compliance.
  • Artificial Intelligence does not replace human authority and responsibility – ensuring humans maintain a supervisory role and do not fully transfer decision-making power to the system.
  • Artificial Intelligence activities must be transparent, fair, and accountable when the system makes decisions.
  • Encouraging Artificial Intelligence to develop in a green direction, saving energy and reducing negative impacts on the environment.

3. Management based on 3 risk levels

According to Article 9 of the Law on Artificial Intelligence 2025, Artificial Intelligence systems are classified into 3 risk levels, with corresponding control measures applied to each:

  • High Risk: Has a direct impact on life, health, human rights, or national security. This group must undergo a conformity assessment (certification) before operation, and must simultaneously maintain a risk management mechanism and continuous activity log storage.
  • Medium Risk: Has the potential to cause confusion or manipulate user perception (e.g., chatbots, content generation tools).
  • Low Risk: Does not belong to the two types above, usually basic support tools or those with a small scope of impact.

Classification is determined based on the level of impact on human rights, safety, security, the field of application, and the system’s scope of influence, especially for the high-risk group (*).

(*) High-risk Artificial Intelligence systems must undergo a conformity assessment according to technical standards, regulations, and the requirements of the Law on Artificial Intelligence. Some systems in the mandatory list must be certified by a conformity assessment organization before operation. Other systems may perform self-assessment or hire a recognized assessment organization.
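The three-tier classification above maps each risk level to a set of obligations. The sketch below, in Python, illustrates that mapping; the control measures listed are paraphrased from the Law and are illustrative, not an exhaustive or authoritative legal checklist.

```python
from enum import Enum

class RiskLevel(Enum):
    HIGH = "high"      # direct impact on life, health, human rights, or national security
    MEDIUM = "medium"  # may confuse or manipulate user perception (e.g., chatbots)
    LOW = "low"        # neither of the above; basic support tools, small scope of impact

def required_controls(level: RiskLevel) -> list[str]:
    """Return the control measures associated with a risk level.

    The mapping paraphrases Article 9 of the Law on Artificial Intelligence 2025
    for illustration only; implementing regulations define the actual obligations.
    """
    if level is RiskLevel.HIGH:
        return [
            "conformity assessment (certification) before operation",
            "maintain a risk management mechanism",
            "continuous activity log storage",
        ]
    if level is RiskLevel.MEDIUM:
        return ["transparency obligations (notify users, label generated content)"]
    return []  # low risk: no special control measures listed in this summary
```

For example, `required_controls(RiskLevel.HIGH)` returns the three pre-operation and in-operation obligations the summary lists for the high-risk group.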


4. Strict regulations on transparency of information generated by Artificial Intelligence

To address problems such as online fraud and fake news, the Law on Artificial Intelligence 2025 clearly stipulates:

  • Users must be notified when they are interacting with an Artificial Intelligence system.
  • All audio, images, and videos generated by Artificial Intelligence must carry identification marks in a machine-readable format.
  • Content that has the potential to cause confusion about authenticity, such as simulating real people/real events (deepfake), must be clearly notified.
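The “machine-readable identification mark” requirement above can be pictured as a small sketch: a sidecar manifest declaring that a media file is AI-generated. All field names here are hypothetical; the Law requires a machine-readable mark, but the implementing regulations will prescribe the actual format.

```python
import json

def make_ai_content_manifest(file_name: str, generator: str) -> str:
    """Build a machine-readable declaration that a media file is AI-generated.

    Hypothetical format for illustration only: the Law on Artificial
    Intelligence 2025 mandates machine-readable identification marks but this
    summary does not specify the technical standard to be used.
    """
    manifest = {
        "file": file_name,
        "ai_generated": True,          # the core machine-readable flag
        "generator": generator,        # which AI system produced the content
        "disclosure": "This content was generated by an Artificial Intelligence system.",
    }
    return json.dumps(manifest, ensure_ascii=False)
```

A downstream platform could parse such a manifest to decide whether the accompanying clip needs an on-screen disclosure, e.g. for simulated real people or events (deepfakes).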

5. Prohibited acts in Artificial Intelligence activities

The Law sets out several groups of prohibited acts, most notably:

  • Taking advantage of or appropriating Artificial Intelligence systems to infringe upon the legitimate rights and interests of organizations and individuals;
  • Developing or using Artificial Intelligence to manipulate perception, deceive, or cause serious harm;
  • Creating and spreading fake content (e.g., deepfake) causing negative impacts on society;
  • Exploiting data in violation of regulations on personal data protection, intellectual property, and cybersecurity;
  • Hiding information, hindering human control mechanisms, or taking advantage of testing to commit violations…


6. Establishing the National Portal and Database on Artificial Intelligence

The Law on Artificial Intelligence 2025 establishes two systems as national management infrastructure, aiming at the transparency and standardization of Artificial Intelligence activities nationwide:

  • National Single Window Portal for Artificial Intelligence: receives registrations for controlled testing, risk classification notices, reports on serious incidents, and periodic reports; it also publishes information about Artificial Intelligence systems, conformity assessment results, and the handling of violations.
  • National Database on Artificial Intelligence systems: built to store information on Artificial Intelligence systems uniformly, serving monitoring, post-inspection, and the transparency of system operations.
  • All reporting and incident handling activities are carried out through the National Single Window Portal for Artificial Intelligence; the Government will prescribe detailed regulations on the reporting process as well as the responsibilities of agencies, organizations, and individuals in accordance with the severity and scope of influence of the incident.

Note: All entities related to Artificial Intelligence systems—including developers, providers, deployers, and users—have the responsibility to ensure the safety, security, and reliability of the system and to promptly detect and rectify incidents that may cause harm to people, property, data, or social order.
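Since all actors share the duty to detect and report incidents through the Portal, a minimal incident record might look like the sketch below. Every field name is hypothetical: the Government will prescribe the actual reporting process and content, so this only illustrates the kind of information such a report would plausibly carry.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    """Hypothetical record of a serious AI incident for submission via the
    National Single Window Portal. Field names are assumptions, not the
    official reporting schema, which implementing regulations will define."""
    system_name: str   # the AI system involved
    role: str          # reporting entity's role: developer / provider / deployer
    severity: str      # e.g., "serious" per the Law's incident reporting trigger
    description: str   # what happened and the harm caused or threatened
    reported_at: str   # ISO 8601 timestamp of the report

def new_report(system_name: str, role: str, severity: str, description: str) -> dict:
    """Assemble an incident report as a plain dict ready for serialization."""
    return asdict(IncidentReport(
        system_name=system_name,
        role=role,
        severity=severity,
        description=description,
        reported_at=datetime.now(timezone.utc).isoformat(),
    ))
```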

7. Policies to support and develop the Artificial Intelligence ecosystem

In addition to management, the Law also introduces mechanisms to promote development:

  • Artificial Intelligence enterprises enjoy the highest incentive levels according to the law on science and technology, high technology, digital transformation, and investment;
  • Allowing new Artificial Intelligence systems to be tested in a controlled-risk environment (sandbox), with the possibility of exemption from or reduction of compliance obligations based on test results;
  • Establishing the National Artificial Intelligence Development Fund, with a flexible mechanism that accepts risks in innovation;
  • Supporting small businesses and startups with the costs of conformity assessment, self-assessment tools, and the right to access shared data…;


8. National Artificial Intelligence infrastructure is strategic infrastructure, the State holds the coordinating role

National Artificial Intelligence infrastructure is identified as strategic infrastructure, coordinated by the State for development in an open, safe, and scalable direction. This infrastructure includes computing capacity, shared data, training-testing platforms, and testing environments. For essential sectors, Artificial Intelligence systems must be deployed on national infrastructure to ensure safety and controllability.

Data serving Artificial Intelligence is organized uniformly, classified into open data, conditional open data, and commercial data. Simultaneously, the exploitation, connection, and sharing must comply with regulations on personal data protection. The Prime Minister will issue a list of priority data such as culture, language, health, education, agriculture, and socio-economics.

9. Implementation roadmap

The Law takes effect from March 1, 2026 and prescribes a transitional roadmap for Artificial Intelligence systems deployed before that date:

  • Artificial Intelligence systems in the health, education, and finance sectors: Complete obligations before September 1, 2027.
  • Artificial Intelligence systems in other sectors: Complete obligations before March 1, 2027.


During this period, systems are still allowed to operate normally, unless the regulatory authority detects a risk of causing serious damage and issues a decision to temporarily suspend or terminate operations.


For more details, please contact us at:

PHUC GIA LABORATORY CORPORATION

PHUC GIA CERTIFICATION CENTER

PHUC GIA INSPECTION TESTING CENTER

Address:

Hotline: 0965996696 / 0982996696 / 02477796696

E-mail: lab@phucgia.com.vn / cert@phucgia.com.vn / info@phucgia.com.vn

Website: phucgia.com.vn

Working time: Monday to Friday 8:00 – 18:30; Saturday 8:00 – 12:00
