Regulatory Deep Dive

Article-by-Article Analysis of Law 134/2025/QH15

For legal counsel, compliance architects, and regulatory affairs professionals. Full legislative structure, penalty framework, implementing decree dependencies, and liability mechanics.


Legislative Structure and Scope

Law No. 134/2025/QH15 on Artificial Intelligence was adopted by the 15th National Assembly and entered into force on March 1, 2026. The law comprises 8 chapters and 35 articles, establishing a comprehensive regulatory framework for AI development, provision, deployment, and use within the territory of the Socialist Republic of Vietnam.

The jurisdictional scope extends to all natural and legal persons engaged in AI-related activities within Vietnam, as well as foreign entities whose AI systems process data of or affect the rights of Vietnamese citizens, regardless of where the AI system is hosted or operated (Article 2).

8-Chapter Structure

Chapter I (Articles 1-7): General provisions and definitions.
Chapter II (Articles 8-11): Prohibited practices and ethical principles.
Chapter III (Articles 12-14): Risk classification framework.
Chapter IV (Articles 15-20): Obligations by entity role.
Chapter V (Articles 21-23): Regulatory sandbox and innovation support.
Chapter VI (Articles 24-28): National AI Portal, registration, and reporting.
Chapter VII (Articles 29-32): Liability, penalties, and enforcement.
Chapter VIII (Articles 33-35): Transitional provisions and implementation timeline.

Vietnam's approach mirrors the EU AI Act's risk-based framework but diverges in key areas: fewer risk tiers (3 vs. 4), explicit cross-law integration by statute rather than by reference, a government-operated registration portal rather than a decentralized database, and strict liability for deployers rather than the EU's shared-responsibility model.

Extraterritorial reach: Article 2(3) extends the law's scope to foreign entities whose AI systems "affect the rights or legitimate interests of Vietnamese citizens or process data originating from the territory of Vietnam." This creates compliance obligations for international AI providers serving Vietnamese users, similar in principle to GDPR's extraterritorial scope under Article 3(2) of Regulation (EU) 2016/679.

Risk Classification Framework: Articles 12-14

The risk classification framework under Chapter III establishes a provider-initiated, self-classification model. Unlike the EU AI Act's Annex III enumeration approach, Vietnam's law defines classification criteria through a combination of sector designation (Article 13), functional impact assessment (Article 14), and prohibited practice exclusion (Article 8).

Article 12 places the classification obligation on the "provider" as defined in Article 4(3), requiring classification prior to making the AI system available in Vietnam. The classification decision must be documented, stored, and made available to regulatory authorities upon request. Re-classification is required when the system's intended purpose, operational scope, or underlying model undergoes material change.

High Risk - Article 13

Mandatory Conformity Assessment

An AI system is classified as high-risk if it operates within any of the 7 designated sectors (healthcare, finance, education, transport, recruitment, justice, energy) AND its output materially influences decisions affecting the rights, safety, or economic interests of natural persons. Article 13(2) provides that the Minister of Science and Technology may expand the sector list by decree. High-risk classification triggers mandatory conformity assessment under Article 19, National AI Portal registration under Article 27, and ongoing post-market monitoring obligations.

Medium Risk - Article 14(1)

Self-Assessment with Documentation

AI systems with moderate impact potential that do not meet the high-risk criteria. Article 14(1) requires self-assessment documentation, transparency disclosures, and simplified registration. Medium-risk systems must maintain technical documentation sufficient to demonstrate compliance with transparency and safety requirements but are not subject to mandatory third-party assessment.

Low Risk - Article 14(2)

Transparency Obligations Only

AI systems with limited impact scope. Article 14(2) requires only basic transparency: users must be informed that they are interacting with an AI system, and the system must be documented in the provider's internal AI system inventory. No conformity assessment or portal registration is required for low-risk systems, though voluntary registration is permitted.

Classification methodology gap: The law delegates the detailed classification methodology to an implementing decree from the Ministry of Science and Technology. Until this decree is published, providers must apply the criteria in Articles 12-14 directly. The Vietnam AI Law Reporter implements a structured scoring methodology that maps to these statutory criteria, producing a defensible classification with full article citations. When the implementing decree is issued, the scoring weights will be updated to reflect the official methodology.
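Until the implementing decree lands, the statutory criteria can be applied directly as a decision rule. A minimal sketch in Python, assuming Article 8 screening has already passed; the function and flag names are illustrative, not taken from the statute or from the Reporter's actual scoring methodology:

```python
# Article 13 designated sectors (seven, per the statute)
DESIGNATED_SECTORS = {
    "healthcare", "finance", "education", "transport",
    "recruitment", "justice", "energy",
}

def classify(sector: str,
             materially_influences_decisions: bool,
             moderate_impact: bool) -> str:
    """Apply the Article 12-14 criteria as a decision rule.

    Assumes the system has already cleared Article 8 screening;
    a prohibited system is never classified.
    """
    # Article 13: designated sector AND material influence on rights,
    # safety, or economic interests of natural persons
    if sector.lower() in DESIGNATED_SECTORS and materially_influences_decisions:
        return "high"    # conformity assessment + portal registration
    # Article 14(1): moderate impact short of the high-risk criteria
    if moderate_impact:
        return "medium"  # self-assessment + simplified registration
    # Article 14(2): limited impact -> transparency obligations only
    return "low"
```

Note the conjunctive high-risk test: sector designation alone is not sufficient without material influence on decisions affecting natural persons.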

Prohibited Practices Under Article 8

Article 8 establishes an absolute prohibition on certain AI applications, effective immediately upon the law's entry into force with no grace period. These prohibitions apply regardless of the entity role, risk tier, or sector, and violations carry the highest penalty tier under Article 30.

The prohibited practices list is narrower than the EU AI Act's Article 5 prohibitions but includes Vietnam-specific provisions reflecting national security and social order priorities. Each prohibition is defined by its intended purpose and mechanism, not by the underlying technology.

Article 8(1)

Social Credit Scoring

AI systems that evaluate, classify, or score natural persons based on social behavior, personality traits, or inferred characteristics for the purpose of determining access to public services, employment, credit, or social opportunities. This prohibition is broader than the EU equivalent in that it covers private-sector scoring systems when the scoring output affects access to essential services.

Article 8(2)

Subliminal Manipulation

AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort behavior in a manner likely to cause physical or psychological harm. The "beyond consciousness" threshold requires that the manipulation technique operates at a level the affected person cannot reasonably detect or resist through ordinary awareness.

Article 8(3)

Exploitation of Vulnerable Groups

AI systems that exploit the vulnerabilities of specific groups due to age, disability, or socio-economic situation to materially distort their behavior. "Vulnerability" is assessed objectively based on the group characteristics, not the subjective awareness of the AI operator. This applies to systems targeting children, elderly persons, persons with disabilities, or economically disadvantaged populations.

Article 8(4)

Mass Biometric Surveillance

Real-time biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except where authorized by court order or in response to imminent threat to life. The exception for court-ordered use distinguishes Vietnam's approach from the EU's broader law enforcement carve-outs. Private-sector real-time biometric identification in public spaces is prohibited without exception.

The prohibited practices list functions as a "red line" independent of risk classification. A system that would otherwise be classified as medium-risk could be entirely prohibited if its intended purpose falls within Article 8. The Vietnam AI Law Reporter screens for prohibited practices before classification scoring begins, ensuring that prohibited systems are identified and flagged before any deployment-readiness assessment proceeds.
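The screen-before-classify order described above can be sketched as a simple questionnaire check. This is a sketch under stated assumptions: the field names and violation labels are ours, not the statute's:

```python
from dataclasses import dataclass

@dataclass
class Article8Screen:
    """Illustrative pre-classification questionnaire for Article 8."""
    scores_persons_for_access: bool       # Article 8(1)
    uses_subliminal_techniques: bool      # Article 8(2)
    targets_vulnerable_groups: bool       # Article 8(3)
    realtime_public_biometric_id: bool    # Article 8(4)

    def violations(self) -> list[str]:
        """Return every Article 8 provision triggered; any hit halts
        classification before risk scoring begins."""
        checks = [
            ("Article 8(1) social credit scoring", self.scores_persons_for_access),
            ("Article 8(2) subliminal manipulation", self.uses_subliminal_techniques),
            ("Article 8(3) exploitation of vulnerable groups", self.targets_vulnerable_groups),
            ("Article 8(4) mass biometric surveillance", self.realtime_public_biometric_id),
        ]
        return [name for name, hit in checks if hit]
```

A non-empty result means the system is a red-line case: no risk tier applies, and deployment-readiness assessment never starts.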

Liability Framework and Penalty Structure: Articles 29-32

Chapter VII establishes a dual liability framework combining strict liability for high-risk deployers (Article 29) with fault-based liability for other entity roles (Article 31). This is one of the most significant divergences from the EU AI Act, which relies primarily on fault-based liability with the burden of proof shifting to the provider in certain circumstances.

Article 29 imposes strict liability on deployers of high-risk AI systems for harm caused by the system's operation, irrespective of fault. The deployer may seek contribution from the provider under Article 31(3) if the harm results from a defect in the system as provided, but the injured party's claim lies against the deployer in the first instance. This creates a strong incentive for deployers to conduct thorough due diligence, maintain operational logs, and implement human oversight measures.

Strict Liability

Article 29 - Deployer Liability

Deployers of high-risk AI systems are strictly liable for harm caused during operation. The injured party need only establish: (1) the AI system operated as deployed, (2) harm occurred, and (3) a causal link between the system's operation and the harm. No showing of fault, negligence, or defect is required. Defenses are limited to force majeure, victim's sole fault, and unauthorized third-party interference.

Fault-Based Liability

Article 31 - General Liability

Developers, providers, and deployers of non-high-risk systems are liable under general tort principles. The injured party must establish duty, breach, causation, and damages. Article 31(2) creates a rebuttable presumption of fault where the entity has failed to comply with specific obligations under the law (e.g., missing classification, inadequate documentation, failure to report incidents).

Contribution Rights

Article 31(3) - Supply Chain

Where a deployer has been held strictly liable under Article 29, the deployer may seek contribution from the provider or developer if the harm resulted from a defect in the AI system as provided. Contribution is proportionate to each party's share of responsibility. This creates a contractual incentive for deployers to negotiate indemnification provisions with providers.

Penalty Framework

Article 30 - Administrative and Criminal Penalties

Article 30 establishes a tiered penalty framework. Administrative fines range from VND 50 million to VND 5 billion depending on the violation category. Prohibited practice violations carry the highest fines. Failure to classify or register carries intermediate penalties. Documentation deficiencies carry the lowest fines. For severe or repeated violations, Article 30(4) provides for suspension of AI operations and, in cases involving prohibited practices causing significant harm, criminal referral under Vietnam's Penal Code. The specific fine amounts per violation type will be detailed in the implementing decree on penalties.
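Because the specific fine amounts await the implementing decree, only the relative tiering of Article 30 can be modeled today. A hedged sketch; the category keys and numeric tier values are illustrative, not statutory:

```python
# Relative penalty tiers per Article 30. Specific VND amounts are left
# to the implementing decree and are deliberately not modeled here.
PENALTY_TIER = {
    "prohibited_practice": 3,        # highest tier (Article 8 violations)
    "failure_to_classify": 2,        # intermediate
    "failure_to_register": 2,        # intermediate
    "documentation_deficiency": 1,   # lowest tier
}

def severity(violations: list[str]) -> int:
    """The highest applicable tier drives remediation priority."""
    return max((PENALTY_TIER[v] for v in violations), default=0)
```

When the penalty decree publishes the fine schedule, the tier values would map to concrete VND ranges within the statutory VND 50 million to VND 5 billion band.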

Practical implication: The strict liability regime under Article 29 makes due diligence documentation critical for deployers. The Vietnam AI Law Reporter's incident management and obligation tracking modules generate the audit trail a deployer needs to demonstrate compliance in the event of a liability claim. Operational logs, human oversight records, classification documentation, and conformity assessment reports together form the deployer's evidence base for regulatory defense and contribution claims against the supply chain.

Regulatory Sandbox: Article 21

Article 21 establishes a regulatory sandbox for AI innovation, effective January 2027 (10 months after the law's entry into force). The sandbox allows qualifying organizations to test innovative AI systems under supervised conditions with modified compliance requirements. The sandbox is operated by the Ministry of Science and Technology with participation from sector-specific regulators.

Participation requires a formal application through the National AI Portal demonstrating the innovative nature of the AI system, the testing scope, risk mitigation measures, and the expected duration. Sandbox participants receive a temporary exemption from full conformity assessment requirements but remain subject to the Article 8 prohibitions, incident reporting obligations, and human oversight requirements throughout the testing period.

Eligibility Criteria

Article 21(2) Requirements

Applicants must demonstrate: (a) the AI system involves genuine technological innovation, (b) existing regulatory requirements create barriers to testing that cannot be addressed through standard compliance paths, (c) the applicant has adequate risk management and data protection measures, and (d) the testing scope is clearly defined with measurable objectives and exit criteria.

Modified Obligations

Article 21(4) Exemptions

Sandbox participants receive: (a) exemption from full conformity assessment under Article 19, replaced by sandbox-specific assessment criteria, (b) modified registration requirements under Article 27, and (c) extended timelines for documentation obligations. The Article 8 prohibitions, incident reporting under Article 26, and basic human oversight under Article 20 remain in full force.

Duration and Exit

Article 21(5) Timeline

The sandbox period is limited to 24 months with a possible 12-month extension upon application. At the end of the sandbox period, the participant must either: (a) complete full compliance and transition to standard operation, (b) apply for extension with documented justification, or (c) cease operation of the AI system. Systems that exit the sandbox without achieving compliance cannot be deployed in Vietnam.

Vietnam's sandbox is modeled on financial regulatory sandbox concepts (similar to those used by the State Bank of Vietnam for fintech) adapted for AI. The key difference from the EU AI Act's sandbox provisions under Article 57 is that Vietnam's sandbox is centrally operated by the Ministry of Science and Technology rather than delegated to national competent authorities, reflecting Vietnam's unitary state structure versus the EU's member-state model.

The Vietnam AI Law Reporter's governance module includes sandbox application tracking, modified obligation management for sandbox participants, timeline monitoring, and exit planning tools. This allows compliance teams to manage sandbox participation within the same platform used for standard compliance, avoiding parallel tracking systems.

Cross-Law Regulatory Intersections

Article 3 of Law 134/2025/QH15 explicitly acknowledges the intersection of AI regulation with 6 existing instruments of Vietnamese legislation (five laws and one decree). Unlike the EU AI Act's approach of leaving cross-regulation to "without prejudice" clauses, Vietnam's law creates affirmative cross-compliance obligations: entities must demonstrate compliance with the AI Law AND relevant intersecting legislation simultaneously.

The legal significance is that a violation of an intersecting law may independently trigger consequences under the AI Law itself. Article 30(3) provides that failure to comply with obligations arising from cross-law intersections identified in Article 3 constitutes a separate violation under the AI Law, in addition to any liability under the intersecting legislation.

Cybersecurity Law 2018

Law No. 24/2018/QH14

Data localization requirements (Article 26), mandatory security assessments for critical information systems, incident reporting timelines, and network operator obligations. AI systems operating on Vietnamese networks that qualify as "critical information systems" must comply with both Cybersecurity Law security assessment requirements and AI Law conformity assessment requirements. The two assessment processes share overlapping criteria but are legally distinct obligations.

PDPD 2023

Decree No. 13/2023/ND-CP

Personal data processing consent requirements, data processing impact assessments (DPIA), cross-border transfer restrictions, data subject rights (access, correction, deletion), and data processing records. AI systems that process personal data must satisfy PDPD requirements for every data processing activity, including training data collection, inference, and output generation. The AI Law's transparency obligations under Article 17 are separate from and additional to PDPD disclosure requirements.

Intellectual Property Law

Law No. 36/2009/QH12 (amended)

IP ownership of AI-generated works, liability for AI-generated content that infringes existing IP rights, training data copyright considerations, and patent eligibility for AI-assisted inventions. Article 3(c) of the AI Law requires AI providers to ensure that their systems do not systematically infringe IP rights through their operation, creating a compliance obligation that intersects with traditional IP enforcement.

Digital Tech Industry Law

Platform Accountability

AI systems deployed within digital platforms face additional obligations for algorithmic transparency, platform-level user protection, and digital service provider responsibilities. The intersection creates compound obligations where the platform's own transparency duties under digital technology legislation stack on top of the AI-specific transparency requirements.

E-Transactions Law

Law No. 20/2023/QH15

AI systems involved in automated contract formation must comply with electronic transaction validity requirements, including electronic signature standards, automated message attribution, and record retention obligations. Where AI acts as an "automated message system" under the E-Transactions Law, the operator bears responsibility for the legal effects of transactions initiated by the system.

Consumer Protection Law

Law No. 19/2023/QH15

Consumer-facing AI systems must satisfy fair dealing requirements, product safety obligations, advertising standards, and complaint handling procedures. AI-driven pricing algorithms, recommendation engines, and chatbots interacting with consumers create dual compliance obligations under both the Consumer Protection Law and AI Law transparency provisions.

The Vietnam AI Law Reporter's cross-law compliance mapping engine identifies overlapping obligations, conflicting requirements, and compliance gaps across all 6 intersecting instruments. For each AI system in your inventory, the tool generates a unified obligation view that merges requirements from all applicable legislation, flags obligations that satisfy multiple laws simultaneously (reducing compliance effort), and identifies gaps where additional measures are needed beyond AI Law compliance alone.
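A unified obligation view of this kind can be sketched as a merge keyed by obligation. The instrument names follow the citations above, but the obligation labels and the overlap shown are purely illustrative, not the Reporter's actual catalogue:

```python
from collections import defaultdict

# Illustrative obligation catalogue per intersecting instrument.
OBLIGATIONS = {
    "AI Law 134/2025/QH15": {"risk classification record", "incident reporting",
                             "transparency notice"},
    "Cybersecurity Law 24/2018/QH14": {"security assessment", "incident reporting"},
    "PDPD 13/2023/ND-CP": {"DPIA", "consent records"},
}

def unified_view(applicable: list[str]) -> dict[str, list[str]]:
    """Merge obligations across instruments, keeping the source of each."""
    sources: dict[str, list[str]] = defaultdict(list)
    for law in applicable:
        for ob in sorted(OBLIGATIONS[law]):  # sorted for deterministic order
            sources[ob].append(law)
    return dict(sources)

view = unified_view(["AI Law 134/2025/QH15", "Cybersecurity Law 24/2018/QH14"])
# Obligations sourced from more than one instrument are overlap candidates;
# whether one artifact actually satisfies both remains a legal judgment.
shared = sorted(ob for ob, laws in view.items() if len(laws) > 1)
```

An obligation appearing under several instruments is flagged as a potential overlap; an obligation appearing under an intersecting instrument but not the AI Law surfaces as a gap beyond AI Law compliance.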

Implementing Decrees and Regulatory Timeline

Law 134/2025/QH15 delegates significant operational detail to implementing decrees and ministerial circulars. Article 34 mandates that the Government issue implementing decrees within 6 months of the law's effective date (by September 2026). Several critical provisions depend on these decrees for full operationalization.

Organizations should prepare for compliance based on the statutory text and existing regulatory guidance, while monitoring for the implementing decrees that will finalize specific procedural requirements. The Vietnam AI Law Reporter will be updated as each implementing decree is published.

Expected Implementing Decrees

Key pending decrees include: (1) Detailed classification methodology and scoring criteria (Ministry of Science and Technology), (2) National AI Portal operational procedures and registration forms (Ministry of Information and Communications), (3) Conformity assessment procedures and designated body accreditation (Ministry of Science and Technology), (4) Administrative penalty schedule with specific fine amounts per violation category (Government), (5) Regulatory sandbox application procedures and assessment criteria (Ministry of Science and Technology), (6) Prime Minister's list of AI systems requiring third-party conformity assessment.

March 1, 2026

Law Effective

Prohibited practices enforceable immediately. New AI systems must begin classification and compliance processes. Grace periods begin for existing systems.

September 2026

Decrees Due

Article 34 deadline for implementing decrees. Classification methodology, portal procedures, conformity assessment details, and penalty schedules should be published by this date.

January 2027

Sandbox Opens

Regulatory sandbox becomes operational. Applications accepted through the National AI Portal. Modified compliance pathway available for qualifying innovative AI systems.

March 2027

General Deadline

12-month grace period expires. All existing AI systems must be classified, registered (where required), and compliant with applicable risk tier obligations. Administrative enforcement begins for non-compliant systems.

September 2027

Sector Deadline

18-month extended grace period expires for healthcare, education, and finance sectors. Full compliance required including completed conformity assessments and portal registration for all high-risk AI systems in these sectors.
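The grace-period arithmetic above reduces to simple date math keyed on sector. A small sketch; the extended-sector set follows the timeline above, while the helper names are ours:

```python
from datetime import date

EFFECTIVE = date(2026, 3, 1)  # entry into force of Law 134/2025/QH15
# Sectors with the 18-month extended grace period per the timeline above
EXTENDED_SECTORS = {"healthcare", "education", "finance"}

def add_months(d: date, months: int) -> date:
    """Advance a first-of-month date by a number of calendar months."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

def compliance_deadline(sector: str) -> date:
    """General 12-month grace period; 18 months for extended sectors."""
    months = 18 if sector.lower() in EXTENDED_SECTORS else 12
    return add_months(EFFECTIVE, months)
```

This yields March 1, 2027 for the general deadline and September 1, 2027 for healthcare, education, and finance, matching the milestones above.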

Strategic consideration: Organizations should not wait for implementing decrees to begin compliance work. The statutory text provides sufficient detail to inventory AI systems, perform preliminary classification, identify applicable obligations, and begin evidence collection. The implementing decrees will refine procedural details but are unlikely to change the substantive classification criteria or obligation structure. Starting now maximizes the remaining grace period and avoids a compliance rush as deadlines approach.

Regulatory Authority Structure

Unlike the EU's decentralized model with national competent authorities, Vietnam's AI regulatory framework is administered through a centralized multi-ministry structure under the Government's coordination. Three principal ministries share regulatory responsibility, each with defined jurisdiction over specific aspects of AI governance.

Lead Regulator

Ministry of Science and Technology (MOST)

Primary regulatory authority for AI governance. Responsible for: classification methodology, conformity assessment framework, designated body accreditation, regulatory sandbox operation, technical standards development, and coordinating inter-ministerial AI policy. MOST serves as the primary point of contact for regulated entities and operates the technical review process for high-risk AI system registrations.

Portal Operator

Ministry of Information and Communications (MIC)

Operates the National AI Portal (Articles 27-28) and manages registration, incident reporting, and public disclosure systems. MIC is responsible for the technical infrastructure, data management, and accessibility of the portal. MIC also handles cross-border data flow aspects of AI regulation in coordination with the PDPD enforcement authority.

Sector Regulators

Line Ministries by Sector

Sector-specific ministries (Health, Finance, Education, Transport, Labor, Justice, Industry and Trade) participate in the classification and assessment of AI systems within their regulatory domain. For high-risk AI systems in designated sectors, the relevant line ministry provides sector-specific assessment criteria and participates in conformity review. Enforcement actions against sector-specific violations are coordinated between MOST and the relevant line ministry.

The multi-ministry structure means that a healthcare AI system provider interacts with MOST for classification and conformity assessment, MIC for portal registration, and the Ministry of Health for sector-specific requirements. The Vietnam AI Law Reporter consolidates all of these touchpoints into a single compliance view, so your team does not need to navigate three separate regulatory tracks independently.

Map Your Compliance Obligations

See how the Vietnam AI Law Reporter maps articles to obligations, tracks cross-law intersections, and generates portal-ready documentation.

Launch the Demo