
ILJ Online

ILJ Online is the online component of Fordham International Law Journal.

The Wild West: A Mosaic of International Approaches to Global AI Regulation

The rapid development of artificial intelligence (AI) has raised significant concerns about its ethical use, governance, and regulation. Governments and regulatory bodies across the world have begun crafting legal frameworks to manage the risks and opportunities posed by AI. However, given the technology’s complexity and globalized nature, regulatory efforts across the international landscape have been disparate, each shaped by a region’s political, economic, and social priorities.

Europe has established itself as a global leader in AI regulation through a multi-layered approach that puts humans at the center of responsible AI development. The European Union’s (EU) Artificial Intelligence Act (AI Act), which the EU refers to as “the world’s first comprehensive AI law,” establishes guidelines for EU countries to regulate AI systems according to risk levels: “minimal risk,” “limited risk,” “high risk,” and “unacceptable risk.” High-risk AI systems are those that “negatively affect safety or fundamental rights,” spanning sectors such as aviation and vehicle transportation, healthcare, critical infrastructure, and migration control management.[1] Accordingly, these products are permitted to enter the market only after a mandatory pre-market assessment, and they must then continuously comply with rigorous transparency, accountability, and human oversight requirements throughout their lifecycle. Legally, the AI Act introduces a proactive approach, anticipating the potential impact of AI on fundamental rights, safety, and the rule of law. The EU’s broader legal framework, including the General Data Protection Regulation (GDPR), offers additional safeguards concerning the use of personal data in AI systems.[2] However, while the AI Act is commendable in its intentions, challenges arise from its attempt to implement uniformity across member states, creating tensions between the economic desire for innovation and the need for regulation.[3] For example, tech companies may argue that strict AI regulations could stifle innovation, while civil society may emphasize the importance of safeguards against AI’s risks to human rights and privacy.

In contrast, the U.S. lacks flagship AI legislation, instead favoring a fragmented approach that emphasizes innovation above all else and reflects little urgency for stringent controls. The U.S. has predominantly relied on individual agencies and sector-specific regulations tied to the technology’s area of application, such as healthcare (FDA), finance (SEC), and autonomous vehicles (NHTSA).[4] To further promote AI-related benefits, the U.S. enacted the National AI Initiative Act, which is designed to improve the nation’s global competitiveness in AI research and development. The absence of a comprehensive regulatory framework akin to the EU’s AI Act, however, has raised concerns about self-regulation amid insufficient oversight, especially in areas such as biometric surveillance, predictive policing, and employment discrimination.[5] Further legal concerns surrounding AI in the U.S. center on data privacy, liability issues, and ethical considerations regarding AI’s impact on civil rights.[6]

Ongoing debates continue over which approach brings the greater benefit to global society: Europe’s broad, centralized model or the United States’ fragmented, industry-specific one.[7]

China has adopted a unique blend of national and sector-specific strategies in its approach to AI regulation, with the government guiding AI development with one hand while controlling it with the other.[8] Its Next Generation Artificial Intelligence Development Plan (2017) lays out China’s ambition to lead the world in AI by 2030.[9] To this end, the government has introduced policies to ensure that AI development is mobilized toward national interests.[10] Many of China’s regulatory concerns mirror those of other governments, including data security, privacy, social stability, ethical considerations, and the promotion of domestic companies in the global AI market. But the Chinese government’s approach has sparked controversy, particularly regarding privacy and civil liberties. Its extensive use of AI for surveillance, facial recognition, and social credit systems has drawn significant criticism for infringing on human rights.[11] Legally, this raises fundamental questions about balancing national security and individual freedoms in AI governance.

As AI technologies continue to evolve and permeate critical sectors of human life, the need for a global regulatory framework is increasingly urgent. Regional approaches reflect diverse priorities and challenges, but simultaneously, the legal implications of such AI regulation are far-reaching and encompass issues of data privacy, ethical considerations, and human rights. These variances in AI governance are not merely a reflection of national priorities, but also the result of the distinct political, economic, and social contexts in which these technologies are being developed and deployed. While Europe, the U.S., China, and others continue to shape their legal landscapes, global coordination is critical in ensuring that AI benefits the greater international society while mitigating its risks. The future of AI laws will depend on the ability of nations to balance innovation with responsibility, fostering a global framework that ensures AI’s transformative potential is harnessed for the common good.

The question then becomes: how can the global community ensure coherence in AI governance? AI is inherently transnational, with algorithms, systems, and data crossing borders within mere moments, making regional regulations potentially ineffective or inconsistent on the international stage. This creates challenges for multinational corporations operating across jurisdictions: a tech company adhering to the EU’s stringent AI Act may, for example, struggle to navigate the more lenient regulatory environment in the U.S., where innovation and market expansion are often prioritized over regulation.

Moreover, the difference in approaches to AI ethics and human rights can lead to global tension. Europe's focus on human-centered AI that safeguards fundamental rights may conflict with China's government-controlled approach, which emphasizes national security and surveillance. Similarly, the U.S. model of self-regulation could be seen as insufficient in regions where AI's impact on vulnerable populations is more acutely felt, such as in Africa or Latin America, where issues like economic inequality and data sovereignty are top priorities.

Given the global reach of AI technologies, international cooperation is essential for creating unified standards and protocols. The United Nations and other multilateral organizations could play a pivotal role in fostering global dialogue on AI governance. For instance, the UN’s AI for Good Global Summit brings together stakeholders from various sectors to discuss how AI can contribute to sustainable development goals.[12] Similarly, the OECD (Organization for Economic Co-operation and Development) has developed principles on AI that emphasize inclusivity, transparency, and accountability, aiming to guide both national and international policy.[13]

The development of international AI norms would help harmonize regional laws, creating a more predictable environment for businesses while ensuring that AI technologies are developed in ways that promote public safety and trust. One potential avenue for creating international standards is through the Global Partnership on Artificial Intelligence (GPAI), which is an initiative launched by several countries, including Canada, France, the U.K., and the U.S., to share best practices and promote responsible AI use.[14]

Evidently, global cooperation on any matter is far from simple. Countries hold different priorities on technology, privacy, and human rights, and aligning those interests will require sustained diplomacy and compromise. Additionally, because AI evolves rapidly, regulation will need to keep pace with the fast-moving technological landscape, adding yet another layer of complexity to the task.

The future of AI regulation will depend on how well countries can balance innovation with responsibility. Europe’s regulatory leadership through the AI Act exemplifies a robust approach that emphasizes human rights and safety, even as it faces implementation challenges and concerns that it may hinder innovation. Alternatively, the U.S.’s market-driven, decentralized approach prioritizes innovation but risks leaving ethical concerns unaddressed. China’s top-down regulatory model highlights the importance of national security but raises concerns about privacy and human rights.

Ultimately, a global, multi-stakeholder approach that blends the strengths of regional regulatory efforts will be critical in ensuring that AI’s benefits are distributed equitably while its risks are carefully managed. By creating a framework that encourages innovation while safeguarding individual rights and promoting transparency, the world can navigate the complexities of AI technology in a way that ensures its responsible and sustainable development.

There is no doubt that AI will dramatically alter the course of our future. Governments, regulators, industry leaders, and civil society must work together to build a world where AI serves the public good, respects human dignity, and contributes to a more just, equitable, and sustainable global society.

  Ariana Tagavi is a staff member of Fordham International Law Journal Volume XLVIII.

[1] European Parliament, EU AI Act: First Regulation on Artificial Intelligence (June 1, 2023), https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.

[2] Id.

[3] European Union Agency for Fundamental Rights, Artificial Intelligence: A European Perspective (2020), https://fra.europa.eu/sites/default/files/fra_uploads/fra-2020-artificial-intelligence_en.pdf.

[4] Kirsten Martin, Reconciling the U.S. Approach to AI, Carnegie Endowment for Int’l Peace (May 2023), https://carnegieendowment.org/research/2023/05/reconciling-the-us-approach-to-ai?lang=en.

[5] Rashida Richardson, Predictive Policing Algorithms Are Racist. They Must Be Dismantled, MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.

[6] Rand Corporation, Artificial Intelligence: Legal and Ethical Issues (last visited Jan. 10, 2025), https://www.rand.org/well-being/justice-policy/portfolios/artificial-intelligence-legal-ethical.html.

[7] Thomas Hale, The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment, Brookings (July 15, 2021), https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/#:~:text=The%20EU%20and%20U.S.%20are%20taking%20distinct%20regulatory%20approaches%20to,rules%20for%20these%20AI%20applications.

[8] White & Case LLP, AI Watch: Global Regulatory Tracker – China, White & Case (last visited Jan. 10, 2025), https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china.

[9] China Aerospace Studies Institute, In Their Own Words: New Generation Artificial Intelligence Development Plan, AIR University (last visited Jan. 10, 2025), https://www.airuniversity.af.edu/CASI/Display/Article/2521258/in-their-own-words-new-generation-artificial-intelligence-development-plan/.

[10] DLA Piper, China Releases AI Safety Governance Framework, DLA Piper (Sept. 2024), https://www.dlapiper.com/en-us/insights/publications/2024/09/china-releases-ai-safety-governance-framework.

[11] Kaan Sahin, The West, China, and AI Surveillance, Atlantic Council (May 19, 2023), https://www.atlanticcouncil.org/blogs/geotech-cues/the-west-china-and-ai-surveillance/.

[12] International Telecommunication Union, AI for Good Global Summit 2024, Int’l Telecommunication Union (last visited Jan. 10, 2025), https://aiforgood.itu.int/summit24/.

[13] Organization for Economic Co-operation and Development, AI Principles, OECD (last visited Jan. 10, 2025), https://www.oecd.org/en/topics/sub-issues/ai-principles.html.

[14] Global Partnership on AI, About Us, Global P’ship on AI (last visited Jan. 10, 2025), https://gpai.ai/about/.

This is a student blog post and in no way represents the views of the Fordham International Law Journal.
