

America Second?: The Need for Global Collaboration in the Regulation of AI

The impact of AI will be felt from the glass skyscrapers of modern cities to the smallest villages. Global collaboration on regulation is needed to safeguard against abuses that could arise as AI advances, a need that has already inspired regulation from Washington to Brussels.[1] President Biden signed an Executive Order (“EO”) tasking several agencies in his administration with regulating AI.[2] Congress must now enshrine protections against unchecked artificial intelligence. If Congress needs ideas for building on the EO, it should look to the European Union.

The European Union (“EU”) presents the best model for the U.S. to follow: it is the world’s second-largest economy and the only one with legally binding AI regulations. The EU AI Act (“the Act”) is the world’s first comprehensive set of binding rules regarding AI.[3] The Act establishes fines ranging from 7.5 million euros or 1.5% of global sales up to 35 million euros or 7%.[4] The Act classifies AI into three buckets.[5] The first is high risk, covering uses that could negatively affect fundamental rights.[6] Entities deploying such systems must assess how they will affect individuals’ rights, an approach largely in the same spirit as the EO’s use of agencies to identify and monitor potentially high-risk uses of AI.[7] The second classification is general purpose/generative AI, and the third is limited risk.[8] The U.S. should adopt similar classifications to encourage global uniformity.

The Act allows citizens to file complaints against AI systems.[9] It requires tech companies to notify people when they are interacting with a chatbot and requires labels on deepfakes and other generative AI content.[10] The latter was a prominent aspect of the voluntary agreement between the Biden Administration and Amazon, Google, Meta, Microsoft, and OpenAI.[11] That agreement committed the companies to having independent experts probe their systems for vulnerabilities and to watermarking generative AI content, which is key to stopping misinformation and deepfakes.[12]

The Act centralizes all AI regulation in one entity, bans biometric categorization based on sensitive characteristics, and prohibits both the scraping of facial images to create facial recognition databases and systems that manipulate human behavior.[13] The ban extends to law enforcement, though it allows exceptions, subject to court approval, for uses related to specific crimes.[14] Congress must establish similar judicial oversight.

The Act does not apply to AI systems used for military and national defense purposes.[15] This is the most problematic aspect of the EU regulations and presents an opportunity for the U.S. to exert global leadership. The prospect of AI for military use, coupled with the drones that have made targeted killing more prevalent over the last 15 years, reads like a scene out of an Asimov novel. The doom scenario feared by AI alarmists could become a reality if the U.S. does not take the lead on this issue. If the U.S. were to follow the EU and carve out the military, a UN-style AI security council would be needed to regulate and limit the use of AI in weapons and defense. The Act is not set to take effect until 2026, so while the EU has presented the U.S. with a skeleton for AI regulation, this issue presents a chance for the U.S. to take the lead.

Steven McFarland is a staff member of Fordham International Law Journal Volume XLVII.

[1] See European Commission Press Release IP/23/5379, Commission welcomes G7 leaders’ agreement on Guiding Principles and a Code of Conduct on Artificial Intelligence (Oct. 30, 2023).

[2] See Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (2023).

[3] See generally Adam Satariano, E.U. Agrees on Landmark Artificial Intelligence Rules, N.Y. TIMES (Dec. 8, 2023), https://www.nytimes.com/2023/12/08/technology/eu-ai-act-regulation.html.

[4] See European Parliament Press Release, Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI (Dec. 12, 2023), https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.

[5] See Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence, 2021/0106 (COD) (Apr. 4, 2021).

[6] Id.

[7] See Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (2023).

[8] See supra note 5.

[9] Id.

[10] Id.

[11] See Lauren Fedor, Tech Companies Make AI Safety and Transparency Pledges at White House, FINANCIAL TIMES (July 21, 2023), https://on.ft.com/3Hlfx8C.

[12] Id.

[13] See supra note 4.

[14] Id.

[15] See European Council Press Release, Artificial Intelligence Act: Council and Parliament strike a deal on the first rules for AI in the world (Dec. 9, 2023), https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/#:~:text=Furthermore%2C%20the%20AI%20act%20will,AI%20for%20non%2Dprofessional%20reasons.


This is a student blog post and in no way represents the views of the Fordham International Law Journal.