ARTIFICIAL INTELLIGENCE AND GLOBAL GOVERNANCE

The AI and Global Governance programme is the newest addition to GGI’s existing suite of cutting-edge training and research programmes. It aims to address the critical challenges posed by emerging technologies – particularly artificial intelligence – and their impact on global governance. Our mission is to provide valuable insights and policy recommendations for navigating the complex landscape of AI in the context of international relations and global governance.

AIGG will centre its efforts on three key research priorities:

EU-based AI Regulatory and Ethical Frameworks, where we will explore the evolving AI regulatory and ethical frameworks within the European Union and examine the impact of these frameworks on AI development, deployment, and cross-border cooperation.

AI and International Security/International Law, where we will investigate the implications of AI on international security, including cyber threats and military applications, and examine the intersection of AI and international law, particularly in the context of autonomous systems.

AI and Digital Authoritarianism, where we will study the role of AI in society and the growing threat of digital authoritarianism, including surveillance technologies and information control, as well as analyse the impact of AI on human rights and democratic institutions in authoritarian regimes.

We are committed to delivering high-quality research through analyses, regular commentaries, policy briefs, and annual reports, as well as to fostering meaningful dialogue through conferences and workshops.

The AI and Global Governance Programme’s flagship "Age of AI" podcast, an already established platform for discussions at the intersection of international relations and artificial intelligence, will continue to amplify the latest research findings by translating complex academic concepts into accessible content for policymakers, scholars, and the wider public.

Finally, with the launch of a new Summer School on AI and Global Governance, in collaboration with distinguished professionals, international scholars, and researchers from Europe, the United States, and Japan, AIGG will provide individuals and organisations with the knowledge, skills, and practical understanding necessary to address the intricate issues surrounding global governance and AI.

For more information, please contact:

Medlir Mema, Ph.D.
Head of Programme
Email: m.mema@globalgovernance.eu

Fellows
Publications
Steven Regalo
International cooperation to strengthen research and economic competitiveness in critical technologies while promoting resilient supply chains is perhaps one of the greatest challenges facing the European Union, the United States, and Japan today. This message was most recently underlined by Tokyo at the May 2024 OECD ministerial meeting in Paris, where Japan unveiled a framework for the global regulation of generative artificial intelligence and convened a Japan-EU High Level Economic Dialogue at which a Transparent, Resilient and Sustainable Supply Chains Initiative was agreed.
Medlir Mema
In a historic first, the European Parliament approved the Artificial Intelligence Act on March 13, 2024. The vote is one of the final steps before a process that started more than three years ago, with the European Commission’s April 2021 proposal, is brought to an end by May or June of this year. Adopting a risk-based approach, which according to EU officials future-proofs the legislation, the EU AI Act has the potential to become an important AI governance model. That is both good and bad news.
Christopher Lamont
The Council of Europe’s draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law aims to become a “first-of-a-kind treaty”, according to an announcement that marked the finalization of the convention’s draft text. This effort to conclude a multilateral treaty on AI aspires to set global standards for artificial intelligence that are consistent with human rights, democracy, and the rule of law.
Medlir Mema
Misinformation and disinformation have become this year’s buzzwords as voters in over 40 countries head to the voting booths in 2024, with dozens more elections scheduled for 2025. Not everyone shares the concern or the outlook, but there is no doubt that the next two years will be decisive for the future of democracy and societies around the world.
Maya Sobchuk
Russia’s brutal war in Ukraine is increasingly acknowledged to be the first artificial intelligence war. To be sure, emerging technologies have played an important role in this war, and the information front is certainly no exception. Yet our understanding of artificial intelligence in information operations remains limited, because scholarship often focuses narrowly on AI’s role in corrupting the information space rather than on defending the information space from corruption. This commentary therefore highlights what we can learn from Ukraine about how to use AI to fight back against an aggressor.
Christopher Lamont
Emerging technology is a ubiquitous term among those seeking to make sense of a rapidly evolving global strategic environment, in part because it captures both the promise and the peril of fast-paced technological advancements in the fields of artificial intelligence and quantum technologies.
Ian Sayeedi
After years of meticulous development and negotiation, the EU’s AI Act appeared finally set for passage by December 2023, at least until France and Germany voiced their objections to the EU regulating general-purpose models (i.e., models with a broad set of functions, such as OpenAI’s GPT-4).
S8E7: Japan’s Military AI Strategy
In July 2024, Japan’s Ministry of Defense unveiled its first basic policy document on the use of artificial intelligence and a comprehensive strategy to enhance Japan’s cyber defense capabilities. In this episode, Dr. Ryo Hinata-Yamaguchi of the University of Tokyo joins us to talk about how AI fits into Japan’s military strategy.
S8E6: AI for Peace with Dr. Paige Arthur
Medlir sits down with Dr. Paige Arthur, Director of Global Programming at Columbia Global, and co-author of AI for Peace (with Branka Panic), to discuss how recent AI innovations can contribute to conflict prevention and peace. They also explore the opportunities and challenges that technologies pose for human rights and curbing hate speech.
S8E5: The Diogenes Awards | An Annual Discussion of AI
This week is our annual Diogenes Awards, where we look at AI in film and television (and sometimes other amusing tangents). We discuss the content that we enjoyed over the last 18 months, and that we think our audience will enjoy the most. We also give our top recommendation in each category.
S8E4: Mihalis Kritikos on Governing AI
Chris talks with Mihalis Kritikos of the European Commission about artificial intelligence ethics and the trustworthy governance of AI. Mihalis sheds light on how he began working on AI governance during the Covid-19 pandemic, the ethics of AI, the EU AI Act, and AI surveillance in the workplace.
S8E3: How Ukraine uses AI to Fight Disinformation
Chris talks to Maya Sobchuk, GGI Non-Resident Fellow and Researcher at the Research Center for Advanced Science and Technology at the University of Tokyo, about her work on AI and disinformation. In this episode, Maya reflects on the history of Russian disinformation campaigns against Ukraine, disinformation in the aftermath of Russia’s full-scale invasion in 2022, and how Ukraine has harnessed its AI know-how and innovation space to counter disinformation.
S8E2: Conflict Forecasting with Hannes Mueller
Medlir is joined by Dr. Hannes Mueller to discuss recent developments in conflict prevention. Specifically, we examine how artificial intelligence can help governments detect risk early, predict the likelihood of new conflicts, and prevent older ones from re-emerging.
S8E1: Cool New Things You Can Do with AI or: How YD Learned to Start Worrying and Love Long Context Windows
For our season opener, Young Diogenes has a chat with Chris and Medlir about some cool new things large language models can now do for us. Specifically, why some companies, like Google with Gemini, are choosing to concentrate on increasingly larger context windows (i.e., the size of the text, music, video prompt, etc., that the AI can parse). And why being able to load an entire book, a movie, or your entire life into the prompt and then have a conversation about these things, might be useful.
S7E8: Misinformation and Disinformation in the age of AI with Lukas Andriukaitis
Medlir asks Lukas Andriukaitis, board member of the Lithuania-based Civic Resilience Initiative (CRI), what it means to work and live at the frontier of the new misinformation and disinformation space. They discuss the current threat of misinformation and disinformation in the region and beyond, and consider both the benefits and the drawbacks of AI-enabled technologies as billions across the world prepare to cast their votes in 2024.
S7E7: The EU AI Act with Luca Bertuzzi
Luca Bertuzzi joins Medlir for a discussion on the EU AI Act passed by the European Parliament on March 13th. Among other things, they discuss the likely impact of the law on innovation and governance within the EU and abroad, as well as criticisms that the act fails to address concerns regarding human rights and civil liberties.
S7E6: The Politics and Governance of Big Data and AI with Andrej Zwitter and Oskar Gstrein
Andrej Zwitter and Oskar Gstrein share with Chris what they will be keeping an eye out for in the AI regulatory space in 2024. They also discuss their latest work on the politics and governance of big data and artificial intelligence and their newly published Handbook on the Governance Politics and Governance of Big Data and Artificial Intelligence. In this discussion Andrej and Oskar explore the EU AI Act and challenges that datafication and algorithmic logics pose for governance, privacy, and human agency, and ponder the question: what does it mean to stay human in the age of AI?