
Dhawal Chheda, AI Leader at Accel4

Global AI Regulation Landscape: 2025-2026 Comprehensive Report

This report synthesizes the current state of AI regulation worldwide based on my knowledge through May 2025, covering enacted legislation, enforcement actions, and compliance requirements.


1. EUROPEAN UNION: THE AI ACT

Enforcement Status

The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024, with a phased implementation timeline:

  • February 2, 2025: Prohibitions on unacceptable-risk AI systems took effect (Chapter II). This includes bans on social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement), emotion recognition in workplaces and schools, and manipulative or deceptive AI techniques that exploit vulnerabilities.
  • August 2, 2025: Requirements for General-Purpose AI (GPAI) models take effect, along with governance structures and penalties provisions (Chapters V, VII, XII).
  • August 2, 2026: Full application of high-risk AI system requirements (Chapter III, Annex III) covering areas like critical infrastructure, education, employment, law enforcement, and migration.
  • August 2, 2027: Requirements for high-risk AI systems that are also regulated products (Annex I) such as medical devices, machinery, and vehicles.

Key Compliance Requirements

  • Prohibited practices: Already enforceable. Companies must have audited and removed banned AI applications.
  • GPAI model obligations: Providers of general-purpose AI models must maintain technical documentation, provide information to downstream deployers, comply with EU copyright law, and publish training content summaries. Models posing “systemic risk” (threshold: 10^25 FLOPs of training compute) face additional obligations including adversarial testing, incident reporting, cybersecurity measures, and energy consumption reporting.
  • High-risk AI systems (from 2026): Require conformity assessments, risk management systems, data governance, human oversight, transparency, accuracy/robustness standards, and registration in the EU database.
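The 10^25 FLOPs systemic-risk threshold can be sanity-checked against a planned training run. A minimal sketch, using the common 6 × parameters × tokens rule of thumb for dense transformer training compute (an approximation assumed by this example, not something the Act prescribes) and hypothetical model sizes:

```python
# Rough check of estimated training compute against the EU AI Act's
# systemic-risk presumption threshold (10^25 FLOPs).
# The 6 * params * tokens formula is a common rule of thumb for dense
# transformer training compute, not part of the regulation itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the Act's threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, just below the 1e25 threshold.
print(presumed_systemic_risk(70e9, 15e12))   # False
print(presumed_systemic_risk(200e9, 15e12))  # True (1.8e25 FLOPs)
```

Because the threshold is a presumption rather than the only trigger, a model below it can still be designated as posing systemic risk by the Commission; this check is only a first-pass screen.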

Penalties

  • Up to 35 million EUR or 7% of global annual turnover, whichever is higher, for violations of prohibited practices
  • Up to 15 million EUR or 3%, whichever is higher, for other violations
  • Up to 7.5 million EUR or 1%, whichever is higher, for supplying incorrect information
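The tiered structure means the effective cap scales with company size: the fine ceiling is the higher of the fixed amount and the turnover percentage. A small sketch of that calculation (tier names are illustrative labels, not terms from the regulation):

```python
# Sketch of the EU AI Act's tiered fine ceilings: the cap is the
# higher of a fixed amount and a share of worldwide annual turnover.
# Percentages are stored as integers to keep the arithmetic exact.

PENALTY_TIERS = {
    "prohibited_practices":  (35_000_000, 7),   # (fixed cap EUR, % of turnover)
    "other_violations":      (15_000_000, 3),
    "incorrect_information": (7_500_000, 1),
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Upper bound of the fine for a violation tier: the higher of the
    fixed cap and the turnover-based cap."""
    fixed_cap, percent = PENALTY_TIERS[violation]
    return max(fixed_cap, percent * global_annual_turnover / 100)

# For a company with 2 billion EUR turnover, 7% of turnover (140M EUR)
# exceeds the 35M EUR fixed cap, so the turnover-based ceiling applies.
print(max_fine("prohibited_practices", 2_000_000_000))
```

For smaller firms the fixed amount dominates, which is one reason the Act includes separate accommodations for SMEs and startups.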

Institutional Infrastructure

  • The EU AI Office (established within the European Commission) oversees GPAI model regulation and coordinates enforcement.
  • Each Member State must designate national competent authorities. Implementation has been uneven – by early 2025, several Member States had designated authorities while others were still establishing governance structures.
  • An AI Board (representatives of Member States), Scientific Panel, and Advisory Forum support implementation.

Impact on Companies

Major AI providers (OpenAI, Google, Meta, Anthropic, Mistral) have been preparing compliance programs. The GPAI Code of Practice was being finalized in early-mid 2025, with industry participants negotiating specific commitments. Companies have been investing heavily in documentation, testing infrastructure, and compliance teams for the EU market.


2. UNITED STATES

Executive Orders

Biden Executive Order 14110 (October 2023) established extensive AI governance requirements, but President Trump revoked it on January 20, 2025, his first day in office, and followed on January 23 with Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” Together, these actions:

  • Rescinded reporting requirements for companies training large models (the dual-use foundation model reporting threshold)
  • Eliminated the AI Safety Institute’s mandate under the prior EO
  • Removed government AI procurement guardrails
  • Signaled a deregulatory approach favoring industry competitiveness

Trump’s subsequent AI actions in early 2025 focused on:

  • Promoting AI infrastructure investment (the “Stargate” announcement with OpenAI, SoftBank, Oracle)
  • Directing agencies to reduce regulatory barriers to AI development
  • Emphasizing American AI dominance over safety-first regulation
  • An executive order on AI in government operations focused on efficiency

Congressional Action

As of early-mid 2025, no comprehensive federal AI legislation had been enacted. Key developments:

  • Senate AI working group (led by Schumer) produced a roadmap in 2024 but no binding legislation passed.
  • Multiple bills were introduced but none achieved passage through both chambers, including proposals on deepfakes, AI in elections, algorithmic accountability, and AI transparency.
  • The political environment shifted toward lighter-touch regulation under the new administration.
  • Bipartisan interest existed in narrow areas: AI-generated CSAM, deepfakes in elections, and AI in critical infrastructure, but comprehensive regulation remained stalled.

State-Level Regulation

States moved ahead of the federal government:

  • Colorado AI Act (SB 24-205): Enacted in 2024, taking effect February 1, 2026. Requires developers and deployers of “high-risk” AI systems to use reasonable care to avoid algorithmic discrimination. Includes disclosure requirements and impact assessments. Governor Polis signed it with reservations, and there was discussion of amendments.
  • California: Several AI bills were considered. Governor Newsom vetoed SB 1047 (the high-profile AI safety bill requiring safety assessments for large models) in September 2024. New California bills were introduced in 2025 covering various AI issues.
  • Texas, Illinois, Connecticut, and others advanced various AI-related proposals covering employment, insurance, healthcare, and consumer protection contexts.
  • Existing state laws already applicable to AI: Illinois BIPA (biometrics), state consumer protection laws, anti-discrimination statutes.

NIST AI Safety Institute (AISI)

  • Established under Biden’s EO, its future became uncertain after the EO’s revocation.
  • As of early 2025, AISI continued to operate but with reduced mandate and political support.
  • Had been conducting pre-deployment evaluations of frontier models (agreements with Anthropic, OpenAI, and others).
  • Budget and staffing faced uncertainty under the new administration.
  • Director Elizabeth Kelly departed.

Enforcement Actions (Existing Authorities)

  • FTC continued using Section 5 (unfair/deceptive practices) authority against AI-related harms. Previous actions against Rite Aid (facial recognition), Amazon (Alexa children’s privacy), and others established precedents. The FTC under the new administration saw leadership changes affecting enforcement posture.
  • EEOC guidance on AI in employment decisions remained in effect.
  • SEC, CFPB, HHS and other agencies had issued AI-related guidance of varying enforceability.

3. CHINA

Regulatory Framework

China has enacted the most operationally specific AI regulations globally, through a series of targeted measures:

  • Algorithmic Recommendation Regulations (March 2022): Governs recommendation algorithms, requires filing with the Cyberspace Administration of China (CAC).
  • Deep Synthesis Regulations (January 2023): Covers deepfakes and synthetic content, requires labeling and traceability.
  • Generative AI Measures (August 2023, Interim): Requires generative AI services offered to the public in China to undergo security assessments, register algorithms, ensure content aligns with “core socialist values,” and obtain approval before launch. Applies to services available to the public within China.
  • AI Safety Governance Framework (version 1.0 released September 2024 by TC260): Non-binding but influential guidance covering risk classification, safety requirements across the AI lifecycle.

Enforcement and Practical Impact

  • The CAC has been the primary enforcer, requiring algorithm registration (hundreds of algorithms registered by 2025).
  • Generative AI services must undergo security assessments before public release – this has functioned as a de facto licensing regime.
  • Content moderation requirements are strictly enforced: AI outputs must not undermine state power, promote separatism, or violate other content rules.
  • Companies like Baidu (Ernie Bot), Alibaba (Tongyi Qianwen), and others obtained approvals to launch consumer-facing generative AI services.
  • Foreign companies face significant barriers to offering AI services directly in China.

2025 Developments

  • China continued refining its approach, with discussions around a more comprehensive AI law.
  • The Model Law on Artificial Intelligence was being developed.
  • Emphasis on balancing innovation promotion with safety and content control.
  • International engagement through the Global AI Governance Initiative, positioning China’s regulatory model as an alternative to the EU approach.

4. UNITED KINGDOM

Regulatory Approach

The UK initially pursued a “pro-innovation,” sector-specific, non-statutory approach under the Conservative government (March 2023 White Paper). After the Labour government took power in July 2024, the approach shifted:

  • AI Safety Institute (AISI): Established November 2023 at Bletchley Park, it became the world’s first government body dedicated to frontier AI safety evaluation. In February 2025, the government renamed it the AI Security Institute, narrowing its focus toward security risks such as misuse and cyber threats while folding some functions into a broader framework.
  • Planned AI legislation: The Labour government signaled intent to introduce binding AI regulation, moving away from the purely voluntary approach. As of early 2025, a bill was anticipated but specific details were still being developed.
  • Existing regulators (Ofcom, FCA, CMA, ICO, EHRC, etc.) continued applying existing laws to AI within their domains, as directed by the 2023 framework.

Enforcement and Key Actions

  • CMA (Competition and Markets Authority): Conducted a foundation models review, published updates on AI and competition concerns, particularly around concentration in AI infrastructure and partnerships between major tech companies and AI labs.
  • ICO (Information Commissioner’s Office): Active on AI and data protection, issued guidance on generative AI and data protection, took enforcement actions related to AI processing of personal data. Investigated Snap’s “My AI” chatbot.
  • No AI-specific penalties had been issued under AI-specific rules (as none existed yet), but existing regulatory frameworks were applied.

International Role

  • The UK hosted the AI Safety Summit at Bletchley Park (November 2023) and the Seoul follow-up (May 2024).
  • AISI established partnerships with counterparts (US AISI, and others).
  • Post-election, the UK maintained its position as a convener on AI safety while moving toward binding regulation.

5. JAPAN

Approach

Japan adopted a governance-focused, non-binding approach emphasizing “agile governance”:

  • AI Guidelines for Business (version 1.0, April 2024): Non-binding guidelines from the Ministry of Economy, Trade and Industry (METI) covering AI governance principles for developers, providers, and users.
  • Hiroshima AI Process: Japan led this initiative during its 2023 G7 presidency, producing the International Guiding Principles and Code of Conduct for organizations developing advanced AI systems. These are voluntary.
  • No comprehensive AI legislation enacted as of early 2025, though discussions about potential legislation were underway.
  • Japan’s approach emphasizes soft law, industry self-governance, and international coordination over prescriptive regulation.

Practical Impact

  • Major Japanese tech companies (NEC, Fujitsu, NTT, SoftBank) aligned with government guidelines voluntarily.
  • Japan positioned itself as an AI-friendly regulatory environment to attract investment.
  • The government invested in domestic AI capacity (e.g., supporting development of Japanese-language LLMs).

6. CANADA

Legislative Status

  • Artificial Intelligence and Data Act (AIDA): Originally Part 3 of Bill C-27 (the Digital Charter Implementation Act), AIDA would have been Canada’s first AI-specific law. However, Bill C-27 died on the order paper when Parliament was prorogued in January 2025. This reset legislative progress.
  • AIDA had proposed a risk-based framework, requirements for high-impact AI systems, prohibitions on certain uses, and penalties up to 3% of global revenue or CAD $10 million.
  • Following prorogation and the subsequent political changes (with Mark Carney becoming Liberal leader and PM), the status of AI legislation was uncertain as of early-mid 2025. It would need to be reintroduced.

Existing Governance

  • Voluntary Code of Conduct for generative AI: Launched September 2023, with major companies signing on.
  • Treasury Board Directive on Automated Decision-Making: Applies to federal government use of AI, requires algorithmic impact assessments. This is one of the most mature government AI procurement governance frameworks globally.
  • PIPEDA (privacy law) applies to AI processing of personal data.
  • Canadian Human Rights Act applies to discriminatory AI outcomes.

7. OTHER NOTABLE JURISDICTIONS

Brazil

  • AI Bill (PL 2338/2023): Advanced significantly in 2024-2025. The Brazilian Senate approved a version establishing a risk-based regulatory framework. As of early 2025, it was moving through the legislative process. Would create a supervisory authority and impose requirements on high-risk AI systems.

South Korea

  • AI Basic Act: Passed by the National Assembly in December 2024, making South Korea one of the first countries to enact comprehensive AI legislation. Establishes a risk-based framework with requirements for “high-impact” AI, an AI Committee for governance, and support for AI industry development. Implementation was being prepared in 2025.

India

  • No comprehensive AI regulation enacted. The government’s approach fluctuated: an initial advisory requiring government approval for AI model launches was walked back. India focused on the IndiaAI Mission for promoting AI development while applying existing IT Act and data protection law (Digital Personal Data Protection Act, 2023) to AI.

Singapore

  • Continued its governance-by-framework approach. The Model AI Governance Framework and AI Verify testing toolkit were updated. No binding AI-specific legislation, but Singapore’s approach through voluntary frameworks and standards was influential in ASEAN.

Australia

  • Voluntary AI Ethics Framework in place. Government conducted consultations on mandatory guardrails for high-risk AI. As of early 2025, considering legislative options but no bill introduced.

Israel

  • Adopted a soft regulation approach. Government policy focused on promoting AI innovation while developing non-binding guidelines.

European Economic Area

  • Norway, Iceland, and Liechtenstein are expected to incorporate the EU AI Act through the EEA Agreement mechanisms, though with some delay.

Turkey, Saudi Arabia, UAE

  • Various national AI strategies and limited regulatory frameworks. UAE established the AI Minister role and pursued AI-friendly policies. Saudi Arabia focused on AI development under Vision 2030.

8. INTERNATIONAL AND MULTILATERAL EFFORTS

OECD

  • Updated AI Principles (originally 2019) remained the most widely endorsed international AI governance framework.
  • OECD AI Policy Observatory tracked global developments.

Council of Europe

  • Framework Convention on AI (adopted May 2024): First binding international treaty on AI. Covers AI systems in the public sector and private sector actors performing public functions. Open for signature by non-Council of Europe members. Entered ratification process.

G7 / Hiroshima AI Process

  • Continued developing voluntary commitments. The code of conduct for AI developers was endorsed by major companies.

UN

  • The UN General Assembly adopted a non-binding resolution on AI (March 2024).
  • The UN AI Advisory Body issued its final report with recommendations for global AI governance, including proposals for an international AI governance body.
  • The Global Digital Compact (adopted September 2024) included AI governance commitments.

Standards Bodies

  • ISO/IEC 42001: AI management system standard published, becoming a key compliance benchmark.
  • IEEE, NIST frameworks continued to be referenced in regulations globally.

9. ENFORCEMENT AND PENALTIES: WHAT IS ACTUALLY BEING ENFORCED

As of early-mid 2025, the honest assessment is:

  • EU: prohibited practices (enforceable from February 2025). Status: early stage; no major penalties announced in the initial months.
  • China: generative AI registration and content requirements. Status: actively enforced; pre-launch approvals required.
  • US: no federal AI-specific enforcement; FTC uses existing authority. Status: deregulatory posture under the new administration.
  • UK: existing regulators applying current law to AI. Status: ICO and CMA most active.
  • South Korea: AI Basic Act implementation underway. Status: enforcement infrastructure being prepared.
  • Others: mostly pre-enforcement or voluntary. Status: limited actual penalties.

The most consequential de facto enforcement has been:
1. China’s pre-launch approval regime for generative AI – functioning as a gatekeeper
2. EU GDPR enforcement applied to AI (e.g., Italian DPA’s temporary ChatGPT ban in 2023, subsequent requirements)
3. FTC enforcement actions using consumer protection authority
4. EU AI Act prohibited practices – newly enforceable but not yet tested in formal proceedings


10. IMPACT ON AI COMPANIES

Compliance Costs

  • Major AI companies have established dedicated EU AI Act compliance teams.
  • Compliance cost estimates for the EU AI Act range widely; SMEs face proportionally higher burdens (though the Act includes some SME accommodations).
  • Companies operating globally face a patchwork of requirements, with the EU AI Act becoming the de facto global standard-setter (the “Brussels Effect”).

Strategic Responses

  • Some companies have limited feature availability by jurisdiction (e.g., Meta restricting certain AI features in the EU).
  • Major labs invested in safety testing infrastructure partly in anticipation of regulatory requirements.
  • Companies pursued “compliance as competitive advantage” strategies.
  • The divergence between US deregulation and EU regulation created strategic complexity for multinational operations.

Key Compliance Priorities for 2025-2026

  1. Immediate: EU prohibited practices compliance (already in effect)
  2. August 2025: EU GPAI model requirements
  3. 2026: EU high-risk AI system requirements; Colorado AI Act; South Korea AI Basic Act implementation
  4. Ongoing: China content and registration requirements for any China-facing services

Summary

The global AI regulatory landscape in 2025-2026 is characterized by:

  • The EU AI Act as the most comprehensive enacted framework, now in phased enforcement
  • US regulatory retreat at the federal level under the Trump administration, with states filling gaps
  • China’s operational enforcement through sector-specific regulations and pre-approval requirements
  • South Korea emerging as an early mover with comprehensive legislation
  • Most other jurisdictions still in voluntary/soft-law stages or developing legislation
  • A significant gap between laws on the books and actual enforcement/penalties, with 2025-2026 being the critical period where enforcement infrastructure is being built
  • Growing regulatory fragmentation requiring multinational AI companies to navigate increasingly complex compliance landscapes

The next 12-18 months (through late 2026) will be the decisive period as EU AI Act requirements fully phase in, enforcement precedents are established, and other jurisdictions move from frameworks to binding law.
