Today, March 1, 2026, Vietnam joins the small but growing list of countries with comprehensive AI legislation. Its Law on Artificial Intelligence takes effect just weeks after South Korea began enforcing its AI Basic Act. Meanwhile, the EU counts down to its August deadline, the UK debates whether to pass a bill at all, and China tightens its grip on deepfakes.
Here’s where international AI regulation stands right now.
Vietnam: First Southeast Asian Country with Comprehensive AI Law
Vietnam’s Law on Artificial Intelligence (Law No. 134/2025/QH15) became effective today, making Vietnam the first Southeast Asian nation with a dedicated AI statute.
The law establishes a three-tier risk classification, echoing the EU's risk-based approach:
- High-risk AI systems: Enhanced controls, risk assessments, human oversight, and registration in national databases
- Medium-risk AI systems: Notification requirements before deployment and disclosure of system purposes and safety measures
- Low-risk AI systems: Minimal requirements with voluntary compliance encouraged
Vietnam’s framework covers the entire AI supply chain: developers, providers, deployers, users, and affected persons. Notably, developers face lighter obligations than providers - an intentional choice to protect research and innovation.
Key requirements for companies:
- Classify all AI systems before deployment
- Register medium and high-risk systems through the national AI portal
- Disclose when users interact with AI systems
- Apply machine-readable markings on synthetic audio, images, or video
- Report serious incidents through centralized systems
Transition timeline: Most sectors have 12 months to comply. Healthcare, education, and finance get 18 months.
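The law mandates machine-readable markings on synthetic media, but the technical format is expected to be specified in implementing decrees. As an illustration only (the field names below are hypothetical, not an official Vietnamese schema), a minimal marking might bind a provider identifier and timestamp to the exact content via a hash:

```python
import json
import hashlib
from datetime import datetime, timezone

def make_ai_content_marking(content: bytes, provider_id: str, model_name: str) -> str:
    """Build a minimal machine-readable provenance record for synthetic media.

    Field names are illustrative, not drawn from any official schema:
    regulators typically require at least a provider identifier, a timestamp,
    and a binding to the exact content (here, a SHA-256 hash).
    """
    record = {
        "ai_generated": True,
        "provider_id": provider_id,  # hypothetical national-registry identifier
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

def verify_marking(content: bytes, marking_json: str) -> bool:
    """Check that a marking record matches the content it claims to describe."""
    record = json.loads(marking_json)
    return (record.get("ai_generated") is True
            and record["content_sha256"] == hashlib.sha256(content).hexdigest())
```

In practice such a record would be embedded in format-specific metadata (image XMP, a C2PA manifest, or an audio container field) rather than carried as a sidecar string; the point is that the marking must be parseable by machines, not just visible to humans.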
The Ministry of Science and Technology oversees implementation through a centralized “one-stop AI portal” - a streamlined approach compared with the EU’s decentralized network of national supervisory authorities.
South Korea: Enforcement Begins with a Grace Period
South Korea’s AI Basic Act took effect January 22, 2026, joining the EU as one of only two major economies with comprehensive AI governance.
The law applies broadly to AI development business operators (those who design and train systems) and AI utilization business operators (those who integrate AI into products or services). It reaches extraterritorially - foreign AI systems affecting Korean users or markets must comply.
High-performance AI threshold: Systems trained with at least 10²⁶ FLOPs (floating-point operations) face additional safety obligations including lifecycle risk management plans and user protection measures.
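The 10²⁶ FLOPs trigger is a cumulative training-compute budget. A common community rule of thumb (not the statutory measurement method, which counts actual compute) estimates training FLOPs as roughly 6 × parameters × training tokens, so a rough self-check looks like:

```python
def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the common 6*N*D approximation.

    The 6-FLOPs-per-parameter-per-token heuristic is a community rule of
    thumb for dense transformer training, not the law's measurement method.
    """
    return 6.0 * n_params * n_tokens

# Threshold for "high-performance AI" under Korea's AI Basic Act
KOREA_THRESHOLD_FLOPS = 1e26

def exceeds_korean_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= KOREA_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, well under the 1e26 trigger.
```

By this estimate, only frontier-scale training runs cross the line; most commercial systems would not face the additional safety obligations.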
Current penalties:
- Up to 30 million KRW (~$21,000) for failing to notify users about AI use
- Same for failing to appoint a domestic representative
- Same for violating corrective orders or refusing government inspections
The catch: The government is taking a soft approach initially. The Ministry of Science and ICT indicated it will emphasize guidance over penalties during the first year, giving companies time to build compliance programs before facing fines.
European Union: The August Deadline Looms
The EU AI Act continues its phased rollout. Eight categories of prohibited AI practices have been enforceable since February 2, 2025. The real test comes August 2, 2026, when high-risk AI system requirements take full effect.
What’s already banned:
- Social scoring systems by public authorities
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions)
- AI that exploits vulnerabilities of specific groups
- Subliminal manipulation techniques
- Emotion recognition in workplaces and schools
Penalty structure:
- Up to €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited AI practices
- Up to €15 million or 3% for high-risk violations
- Up to €7.5 million or 1% for other breaches
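Because each tier caps the fine at the higher of a fixed amount and a turnover percentage, the effective ceiling depends on company size. A minimal sketch of that rule (tier names are my own labels, mirroring the list above):

```python
def eu_ai_act_fine_cap(turnover_eur: float, tier: str) -> float:
    """Maximum fine under the EU AI Act's three penalty tiers.

    For undertakings, the cap is the HIGHER of the fixed amount and the
    percentage of total worldwide annual turnover. Tier names here are
    informal labels, not statutory terms.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),  # banned AI practices
        "high_risk":  (15_000_000, 0.03),  # high-risk obligation violations
        "other":      (7_500_000,  0.01),  # remaining breaches
    }
    fixed, pct = tiers[tier]
    return max(fixed, pct * turnover_eur)
```

For a company with €1 billion in turnover, the prohibited-practices cap is 7% (€70 million), not the €35 million floor; for a €100 million company, the fixed amount dominates.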
No significant enforcement actions have been publicly announced yet. The European AI Office recently published guidelines on high-risk system classification and post-market monitoring. SMEs can apply for subsidized compliance support through pilot programs opening this month.
United Kingdom: Still Debating Whether to Regulate
The UK remains the notable holdout among major AI economies. No comprehensive AI bill has passed, and current signals suggest none will arrive until late 2026 at the earliest.
Instead, the UK relies on existing regulators and voluntary standards. The government’s October 2025 “Blueprint for AI regulation” emphasized sandboxes over statutes - proposing an AI Growth Lab for supervised testing in healthcare, professional services, transport, and manufacturing.
A private member’s bill reintroduced in March 2025 would create a central “AI Authority” to oversee governance. Such bills rarely pass, but they signal political pressure for action.
What this means in practice: Companies operating in the UK face lighter regulatory burdens than EU competitors, but may need to comply with EU rules anyway if they serve European markets.
China: Traceability Over Rights
China’s approach differs fundamentally from Western frameworks. Rather than focusing on individual harm, Chinese regulations emphasize traceability and state control.
Deepfake and synthetic content rules require:
- Visible labels clearly indicating AI-generated content
- Encrypted watermarks embedded in all synthetic media
- Content IDs and persistent metadata for traceability
- Provider registration with the Cyberspace Administration of China (CAC)
Software that removes these watermarks is outlawed. The approach creates a closed loop: all AI content becomes traceable back to its origin by provider, generation time, and device.
2026 enforcement focus: The CAC’s “Qinglang” campaign announced in February 2025 specifically targets AI-related violations. Priorities include:
- Strengthening AI content labeling compliance
- Cracking down on AI-generated false information
- Regulating AI applications across platforms
Over 100 generative AI services have received CAC approval since mid-2025. Operating without approval means operating illegally.
What This Means for You
If you’re building, deploying, or using AI systems internationally, the regulatory landscape has shifted dramatically in 2026:
For global companies:
- Track which jurisdictions your AI systems affect, not just where you’re headquartered
- South Korea and Vietnam now have extraterritorial reach
- EU compliance often serves as a baseline that satisfies other frameworks
For startups:
- Vietnam and South Korea offer transition periods - use them
- The UK remains the least regulated major market
- Consider where you test versus where you deploy
For users:
- Transparency requirements mean you should start seeing more AI disclosure labels
- High-risk AI in hiring, credit, and healthcare faces new scrutiny
- Report undisclosed AI interactions where you expect human service
What We’re Watching
- EU enforcement: Will the first major fine come before or after August?
- UK bill progress: Does the AI Authority bill gain momentum?
- Cross-border coordination: How do South Korea’s and Vietnam’s enforcement mechanisms actually work for foreign companies?
- China-West divergence: As Western frameworks focus on rights and Chinese rules focus on control, what happens to companies operating in both?
The era of voluntary AI governance is ending. Whether your AI systems are subject to new rules depends increasingly on where your users are - not where your servers are.
The Bottom Line
Vietnam and South Korea now join the EU with enforceable comprehensive AI laws. The UK continues debating while companies there enjoy regulatory flexibility - and uncertainty. China maintains its own path, prioritizing traceability over individual rights. For anyone building AI products with international reach, compliance is no longer optional.