After years of meticulous development and negotiations, the EU's AI Act appeared finally set for passage by December 2023, at least until France and Germany vocalized their objections to the EU regulating general-purpose models (i.e., models with a broad set of functions, such as OpenAI's GPT-4).
European Parliament members viewed this as completely unacceptable and walked out of negotiations, likely further delaying the passage of the AI Act. The arguments presented by France and Germany against regulating general-purpose AI models appear sensible at first glance: the EU already lags significantly behind the United States in AI investment and development, and overregulation could serve only to further entrench this gap. Moreover, these are especially new and advanced technologies, and effectively regulating them will likely prove an enormously challenging task that will take many years to perfect. More research and development on AI, along with a greater understanding of its effects on society, will surely benefit EU regulators and lawmakers in their efforts to protect the rights of their citizens.
Nevertheless, carving out an exception for general-purpose models amounts to abandoning regulatory efforts in one of the most consequential and visible areas of AI. Doing so would render the act barely more effective than President Biden's executive orders on AI, and it would represent a radical shift from the proposals the Spanish presidency of the EU Council announced at the beginning of October. For an organization with a rich history of protecting the digital rights of its citizens, the French and German proposals would represent a step backwards. While more time, along with additional research and development, could render regulation more effective, the EU can always amend its existing legislation as it sees fit. Given the severity of the challenges that AI could bring to society, it remains far better to take a proactive approach than a reactive one.
The French and German position appears to sidestep the substantial risks associated with general-purpose AI models. Even if regulation does stifle the ability of European firms to compete with US firms, the EU must weigh a plethora of factors beyond the competitiveness of its AI industry when enacting it. General-purpose AI such as GPT-4 has the potential not only to upend jobs but also to amplify global inequality and increase discrimination against marginalized groups. With AI systems poised to generate trillions in revenue for the corporations that develop them, governments merely urging those corporations to exert caution will almost certainly fail.
The EU can regulate AI and still cultivate its own AI capabilities. If states such as France or Germany actually want to spawn local rivals to ChatGPT, they should consider how much success Silicon Valley owes to public-private partnerships in the 20th century. Through the Defense Advanced Research Projects Agency (DARPA), the US government helped create the internet, GPS, and numerous other revolutionary technologies, from which US companies have heavily benefited ever since. The European Union does not need to tolerate AI firms operating in an environment free of significant oversight or accountability simply to encourage growth and development. Instead, it should drastically increase funding for technological research and development if it wishes to close the gap with the US in AI capabilities.
The European Union should also greatly amplify its efforts to encourage other countries to develop stronger regulatory standards for AI. The strong turnout at the recent AI Safety Summit held in the UK shows that considerable appetite exists among states to increase cooperation on AI. As one of the world's largest marketplaces, the EU has the potential to play a leading role in these efforts, and it carries considerable leverage in trade negotiations and global summits. Eventually, the EU and others should work to create a specific set of international guidelines for AI systems, limiting the damage that regulating these technologies might do to the EU's competitiveness relative to the rest of the world.
Ultimately, the EU's AI Act faces a complex series of challenges, with the two largest economies in Europe opposing regulation of general-purpose AI models at a time when the act looked almost set for passage. While Germany's and France's proposals might be the usual policy posturing and bargaining that is part and parcel of the run-up to any EU negotiation between the Council, Commission, and Parliament, it will be important for the EU to uphold the digital rights of its citizens. Doing so requires specific, substantial regulation that can hopefully also serve as an important baseline for efforts to govern AI beyond the European continent.
Bertuzzi, L. (2023, November 15). EU's AI Act negotiations hit the brakes over foundation models. Euractiv.
Bertuzzi, L. (2023, November 9). Spanish presidency pitches obligations for foundation models in EU's AI law. Euractiv.
Cameron, N. (2018, July 19). The government agency that made Silicon Valley. UnHerd.
Deutsch, J. (2023, November 16). EU AI Act under strain as ChatGPT could be exempt. Bloomberg.
Expert comment: Oxford AI experts comment on the outcomes of the UK. (2023, November 3).