Journal of International Commercial Law and Technology
2024, Volume 5, Issue 1: 35–38
Research Article
Regulating Artificial Intelligence in Global Markets: Challenges and Opportunities
¹ Academic Coordinator, Faculty of Accounting and Finance, Zenith Institute of Technology, India
² Professor, Department of Banking and Insurance, Kyoto Central University, Japan
³ Professor, Department of Corporate Governance, Eastbridge University, Canada
⁴ Professor, Department of Commerce, Holland International University, Netherlands
Received: Dec. 2, 2024 | Revised: Dec. 3, 2024 | Accepted: Dec. 4, 2024 | Published: Dec. 10, 2024
Abstract

The rapid advancement and deployment of artificial intelligence (AI) technologies have sparked a global movement to establish comprehensive legal and ethical governance frameworks. This article analyzes the evolving landscape of AI regulation across major jurisdictions—including the European Union, United States, China, South Korea, and Brazil—highlighting both convergence around risk-based approaches and persistent fragmentation in legal design. Landmark initiatives such as the EU AI Act, the U.S. state-level model, and China’s sovereignty-driven oversight exemplify differing priorities and enforcement strategies. While regulatory frameworks aim to promote innovation, protect fundamental rights, and ensure transparency, key challenges persist—including cross-border data sovereignty, general-purpose AI governance, algorithmic bias, and regulatory lag behind rapid technological change. The article emphasizes the growing importance of international collaboration, agile governance, and inclusive policymaking to achieve effective global harmonization. It concludes that a principle-driven, adaptive, and interoperable regulatory ecosystem is essential for maximizing AI’s benefits while safeguarding public trust and ethical integrity in a connected world.


Introduction

Artificial Intelligence (AI) is rapidly transforming global industries, from finance and healthcare to transportation, defense, and creative sectors. As AI adoption expands, national and international regulators are confronted with the dual mandate of harnessing economic benefits while minimizing societal, ethical, and security risks. The global market’s highly interconnected nature and the cross-border implications of advanced AI systems necessitate collaborative, adaptable frameworks that are both innovative and enforceable.

The Landscape of Global AI Regulation

Key Regional Approaches

  • European Union: The groundbreaking EU AI Act, which entered into force in 2024, is the world’s first comprehensive legal framework for AI. It employs a risk-based system, banning AI practices deemed to represent “unacceptable risk,” strictly regulating “high-risk” systems (e.g., in critical infrastructure, law enforcement, and healthcare), and imposing lighter obligations on minimal-risk AI systems. Non-compliance can result in fines of up to €35 million or 7% of global turnover, whichever is higher. The Act is being implemented in phases, with all rules due by 2027, and is already influencing other jurisdictions and multinationals operating in the EU[1][2][3][4].
  • United States: The U.S. has a fragmented landscape, with no comprehensive federal law. Instead, a “patchwork” of state laws and industry-specific guidelines dominates. Major statutes—like California's AI Transparency Act—mandate disclosure of AI-generated content and aim to safeguard consumer interests. Federally, proposals have considered licensing requirements and dedicated agencies, but state-level measures remain key. The landscape remains unstable, creating compliance challenges for cross-state and international operations[5][4][6].
  • China: China’s approach balances state control and technological innovation, exemplified by its Global AI Governance Initiative. The regulatory model emphasizes data sovereignty, cybersecurity, and state oversight, especially for AI systems with societal impact[7][8].
  • Other Regions: South Korea’s “Basic Act on the Development of Artificial Intelligence and Establishment of Foundation for Trust” will come into force in January 2026, applying a risk-based model similar to the EU. Brazil, Japan, and Canada are also developing national strategies or legal frameworks with distinct goals[7][1][2][9].
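The tiered logic shared by the EU AI Act and South Korea’s forthcoming Basic Act can be sketched as a simple lookup plus a penalty cap. The four tier names and the €35 million / 7%-of-turnover ceiling follow the figures cited above; the use-case-to-tier mapping and function names below are a hypothetical illustration, not an implementation of either statute.

```python
# Illustrative sketch of a four-tier, risk-based classification in the style
# of the EU AI Act. The tier names and penalty cap reflect the Act as cited
# above; the use-case mapping itself is a simplified, hypothetical example.

RISK_TIERS = {
    "social_scoring": "unacceptable",   # banned outright
    "law_enforcement": "high",          # strictly regulated
    "critical_infrastructure": "high",
    "chatbot": "limited",               # transparency obligations
    "spam_filter": "minimal",           # lightest obligations
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to 'minimal'."""
    return RISK_TIERS.get(use_case, "minimal")

def max_fine(global_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: the higher of
    EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

if __name__ == "__main__":
    print(classify("law_enforcement"))        # high
    print(f"{max_fine(1_000_000_000):,.0f}")  # 70,000,000 for EUR 1bn turnover
```

For a firm with €1 billion in global turnover, 7% (€70 million) exceeds the €35 million floor, so the percentage-based figure governs; for smaller firms the fixed floor dominates.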

Map: World Status of Comprehensive AI Regulation (2025)

[image:1]

Opportunities in AI Regulation

  • Global Ethical Standards: Regulatory regimes increasingly focus on transparency, human oversight, non-discrimination, and accountability. The EU AI Act’s “Brussels Effect” is inspiring parallel initiatives worldwide, shaping ethical standards in international markets[7][2][4].
  • Market Trust and Investment: Robust, clear regulations reduce uncertainty, attracting investment, improving consumer confidence, and fostering innovative yet responsible AI development[7][1].
  • Cross-border Interoperability: Regulatory harmonization, such as convergence around risk-based frameworks, could minimize friction for multinationals, enabling smoother AI deployment across jurisdictions[7][10][2].
  • Risk Mitigation: Well-designed legal frameworks protect against systemic risks—ranging from algorithmic bias and data breaches to large-scale manipulation or autonomous weapons—thus securing societal welfare[2][8].

Graph: Growth in Number of Countries with AI Regulation (2016–2025)

[image:2]

Key Regulatory Challenges

  • Regulatory Fragmentation: Diverse legal approaches, especially between the U.S., EU, China, and emerging powers, create a patchwork landscape. This raises compliance complexity and increases costs, especially for companies operating across borders[5][7][8][4][9].
  • Pace of Technological Change: AI evolves faster than regulatory systems. Many frameworks struggle to remain relevant, especially as generative models and multimodal AI systems (e.g., Large Language Models) introduce novel risks and legal questions[7][11].
  • Governance of General-Purpose AI: Models with wide-ranging capabilities, such as GPT or LLaMA, present unique risks. The EU’s recent rules on General-Purpose AI (GPAI) provide for a code of practice focusing on transparency, safety, and copyright, with compliance deadlines as soon as August 2025[12][3].
  • Data Sovereignty and Privacy: Cross-border data flows are vital for AI training and deployment, but remain a contentious issue due to varying national rules on data localization, privacy, and transfer mechanisms, particularly between the EU, U.S., and China[5][1][2][8].
  • Algorithmic Bias and Accountability: Ensuring fairness, transparency, and preventing discrimination in AI outcomes remain open challenges. Many frameworks require explainability and human oversight, but practical tools for accountability are still emerging[7][10][11].
  • Competition and Economic Power: Regulatory models can influence competitive advantage; strict regimes could stifle innovation, while lax rules may favor rapid AI deployment but introduce greater risk. Policymakers worldwide struggle to strike the right balance[7][13][4][14].

Table: Global Comparison of Major AI Regulatory Approaches (2025)

| Region/Country | Framework (Year) | Risk Approach | Enforcement | Notable Features |
|---|---|---|---|---|
| EU | AI Act (2024, phased to 2027)[3] | Risk-based, 4-tier classification | Fines | General-purpose AI, data governance, fundamental rights |
| US | Fragmented, sector/state laws[4][6] | Sector- and state-specific | Variable | Patchwork, transparency, innovation focus |
| China | GAIGI, iterative local rules[7][8] | State, security, innovation | Centralized | State control, data localization, application focus |
| S. Korea | Basic Act on AI (2026)[7][2] | Risk-based | Fines | Trust, transparency, harmonization |
| Brazil | Proposed AI Bill (2024)[2] | Sector-based, evolving | Variable | Human rights, transparency, oversight |

Case Examples and Global Trends

  • The "Brussels Effect": The spread of the EU's AI Act is prompting foreign firms and regions to align their products and practices to European standards.
  • State Patchwork in the U.S.: Over 45 states have proposed or enacted some form of AI regulation in 2024–2025 alone. California leads with over 25 laws related to AI, and the federal approach remains unclear[5][4].
  • International Collaboration: Entities like UNESCO and the OECD, and newly formed coalitions such as the G7 AI Working Group, are advancing soft-law instruments, governance guidelines, and efforts toward harmonization[10][2][15].

Flowchart: Risk Classification Under the EU AI Act

[image:3]

Pathways Forward: Towards Global Harmonization

  • Convergence of Principles: Universal adherence to core principles—such as human oversight, transparency, accountability, and nondiscrimination—serves as a foundation for harmonization, even amid divergent enforcement models[7][10].
  • Flexible, Adaptive Regulation: Regulators should prioritize “agile governance”—early review, sunset clauses, and ongoing stakeholder engagement—to keep pace with evolving AI landscapes[7][13][11].
  • Cross-border Agreements: Ongoing international forums and bilateral/multilateral treaties are vital for standardizing compliance requirements, ensuring data flows, and protecting human rights in global AI markets. The push for soft-law (guidelines, codes of conduct) is often as important as binding law for cross-border trust[10][15].
  • Capacity Building: Regulators need active investment in technical expertise, transnational investigations, and public-private co-regulation to address the dynamic threat landscape[16][17].
  • Inclusive Policy Development: Engaging civil society, industry, academia, and marginalized communities in regulation-building is crucial for equitable outcomes[10][14].

Conclusion

The regulation of artificial intelligence in global markets is one of the defining policy challenges of the era—requiring a nuanced balance between fostering innovation and ensuring safety, fairness, and accountability. While recent advances like the EU AI Act and South Korea’s Basic Act represent milestones, a true global framework remains elusive due to divergent values, economic interests, and national priorities. The next decade will test the world’s ability to move from fragmentation toward a more harmonized, principle-driven regulatory model—essential for harnessing AI’s benefits and managing its unprecedented risks.

[image:1]
Map: Countries with Comprehensive AI Laws (as of July 2025).

[image:2]
Graph: Growth in Number of Countries with AI Regulation (2016–2025).

[image:3]
Flowchart: Risk Classification System in the EU AI Act (2025).

© Copyright Journal of International Commercial Law and Technology