AI Global Governance (3rd Edition)

Summer School

26 – 30 May 2026

Brussels, Belgium

Accepting Applications

Aims of the Programme

We launched the AI Global Governance (AIGG) Summer School in 2024 to equip participants with the knowledge and skills needed to navigate the fast-changing landscape of AI governance in an increasingly fragmented world. The programme examines the opportunities and risks that artificial intelligence creates for international regulatory frameworks, security dynamics, and ethical standards, with a focus on promoting fair, transparent, and accountable AI development and deployment.

AI is transforming the global balance of power, reshaping cooperation among states, and changing the way countries prepare for and conduct warfare. It also gives rise to new governance challenges such as algorithmic bias, digital authoritarianism, and the use of autonomous systems in conflict. Participants explore these issues through comparative perspectives from the European Union, the United States, Japan, and China, analysing how AI affects international relations, human rights, labour markets, and environmental sustainability.

The AIGG Summer School strengthens participants’ ability to engage critically with the policy and governance dimensions of AI. Through expert-led sessions, simulations, and institutional visits, including to the EU, NATO, and the United Nations Development Programme, participants learn to assess systemic risks, identify governance gaps, and design strategies to mitigate unintended consequences. The interdisciplinary approach integrates theoretical insight with practical experience, preparing participants to address emerging challenges as AI becomes increasingly embedded in global decision-making.

Looking back at our 2025 Edition

Expert Lectures
Leaders from academia, government, and industry

Networking Events
Opportunities for collaboration

Institutional Visits
EU AI Office, NATO Data & AI Policy Unit, UNDP Brussels

Case Study Discussions
Real-world AI governance scenarios

Learning Objectives

Assess the long-term risks posed by AI, such as existential threats from AGI, while exploring opportunities for using AI to address global challenges like climate change and inequality.
Gain an in-depth understanding of the key regulatory, ethical, and security challenges in AI governance, including global, regional, and national approaches to AI regulation.
Explore how AI influences global power dynamics, trade, labour markets, security, and human rights.
Understand the ethical concerns related to AI, such as algorithmic bias, transparency, and accountability, while addressing the societal impact of AI deployment.
Investigate the integration of AI in military strategy and autonomous weapons, focusing on governance and regulation in these sensitive areas.
Engage in discussions about the evolving models of AI governance, emerging global standards, and the potential role of international agencies in shaping the future of AI regulation.
Apply knowledge from case studies and theoretical frameworks to propose practical solutions for AI regulation, governance, and international cooperation.

Skills Acquired

Technology Awareness: Familiarity with key AI technologies, including autonomous systems, machine learning, and AI safety, to understand the technical challenges in governance.
Critical Thinking: Ability to assess and analyse complex AI governance issues from multiple perspectives, including ethical, legal, and societal.
Policy Analysis: Skill in evaluating and comparing regulatory frameworks and understanding their implications for global AI governance.
Ethical Reasoning: Ability to assess the ethical implications of AI technologies and their potential impact on society, with a focus on fairness, transparency, and accountability.
International Relations Knowledge: Understanding of global governance structures, international relations, and the role of multinational organizations like the G7, OECD, and the UN in shaping AI policy.
Problem-Solving: Capability to develop practical, innovative solutions for complex AI governance challenges, particularly in the context of developing countries and emerging technologies.
Adaptability and Global Awareness: Capacity to adjust to rapidly evolving AI technologies and the global nature of AI governance, understanding its impact across different regions and contexts.

2025 Programme (the 2026 Programme will be announced in due course)

Speakers

Prof. Christopher Lamont is Deputy Head of Programme for AI Governance at the Global Governance Institute (GGI), Professor of International Relations at Tokyo International University, and Visiting Senior Researcher at the University of Tokyo’s Research Center for Advanced Science and Technology in Japan. He holds a PhD in Politics from the University of Glasgow, an MSc in International and European Politics from the University of Edinburgh, and a BA in International Studies from the University of Mississippi. He was also a Fulbright fellow at the University of Zagreb and an RCUK postdoctoral fellow at the University of Ulster.

Christopher Lamont
GGI & TOKYO INT UNIVERSITY

Dr. Mihalis Kritikos is a Policy Analyst at the Ethics and Integrity Sector of the European Commission (DG-RTD), working on the ethical development of emerging technologies with a special emphasis on AI ethics, and the author of Ethical AI Surveillance in the Workplace (Emerald, 2023). His work focuses on developing the policy dimension of responsible innovation and embedding the ethics-by-design approach in the AI ecosystem. Before that, he worked at the Scientific Foresight Service of the European Parliament (STOA/EPRS) as a legal/ethics advisor on science and technology issues, authoring more than 50 publications on new and emerging technologies and contributing to the drafting of more than 15 European Parliament reports and resolutions in the fields of artificial intelligence, robotics, distributed ledger technologies and blockchains, precision farming, gene editing, and disruptive innovation.

Mihalis Kritikos
EUROPEAN COMMISSION

Prof. Justin Bullock is a Non-Resident Senior Fellow in the AI Governance Programme. He is also VP of Policy at Americans for Responsible Innovation (ARI), an Associate Professor Affiliate of Governance at the University of Washington's Evans School of Public Policy and Governance, and a world-renowned scholar in public policy, public administration, governance, and artificial intelligence. Dr. Bullock is also a Senior Researcher at Convergence Analysis, where he leads the research of Project AI Clarity. He has recently published two books: an experimental work co-authored with ChatGPT titled "Conversations with a Machine Oracle: Exploring Life, Culture, and Knowledge" and a science fiction novel titled "Lo Wainwright: The Last Homo Superior."

Justin Bullock
GGI & ARI

Prof. Joachim Koops (BA, LPC Oxon, MSc Turku, PhD Kiel) is Chair of the Board of Directors at the Global Governance Institute and a Senior Expert in the Peace and Security and Global Education sections. He is also Professor of Security Studies at the Institute of Security and Global Affairs (ISGA) at Leiden University. Joachim's research focuses on global security governance, European foreign policy and diplomacy, and inter-organizational relations in peace and security (including peacekeeping, peacebuilding, crisis management, and the responsibility to protect), with particular emphasis on the roles of the European Union, NATO, and the United Nations.

Joachim Koops
GGI & LEIDEN UNIVERSITY

Branka Panic is the Founding Director of AI for Peace, a US-based think tank working to ensure that artificial intelligence benefits peace, security, and sustainable development, and that diverse voices influence the development of AI and related technologies. Branka is also a Non-Resident Fellow at the Center on International Cooperation at New York University, where she researches data-driven approaches to conflict prevention and peacebuilding, and the host of the "Data for Peace" online series. She is a Stimson-Microsoft Responsible AI Fellow, a member of UNESCO Women4Ethical AI and of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and a Senior Advisor on AI to the German Federal Foreign Office. A passionate advocate for positive peace with 15 years of global experience in the humanitarian, peace, and development fields, Branka currently focuses her research on the benefits, risks, and ethical implications of emerging technologies.

Branka Panic
AI FOR PEACE

Caryn Lusinchi is a Senior Fellow in the AI Governance section. Caryn stands at the forefront of AI governance and risk management, most recently supporting the US federal government in fostering safe and trustworthy AI development in line with the President's Executive Order 14110, OMB Memorandum M-24-10, and the NIST AI Risk Management Framework (1.0 and 2.0). A globally recognized speaker and thought leader, Caryn is also FHCA certified under the EU AI Act, GDPR, and NYC Bias Law, with a specialized certificate in the Foundations of Independent Audit of AI Systems (FIAAIS).

Caryn Lusinchi
GLOBAL GOVERNANCE INSTITUTE

Jimmy Farrell is the EU AI Policy Co-Lead for Pour Demain, a think tank working at the interface between technology and policy across national, regional, and international fora. Jimmy is currently working on policy recommendations for the EU to ensure the responsible development and deployment of general-purpose AI. Prior to Pour Demain, he worked on the EU AI Act within the secretariat of the European Parliament's Internal Market and Consumer Protection (IMCO) committee. He also has experience in public affairs consultancy within the EU, working closely with industry on digital and fintech policy. He holds an MSc in Public Policy and a BSc in Economics from Erasmus University Rotterdam.

Jimmy Farrell
POUR DEMAIN

Alexandra Tsalidis is a Policy Researcher at the Future of Life Institute. Her research examines legislation to assess its effectiveness in mitigating AI risks. Previously, she worked as an EU Policy Fellow on FLI's EU team during a pivotal period in the EU AI Act negotiations. She holds a Master of Bioethics from Harvard Medical School and a BA in Law from the University of Cambridge.

Alexandra Tsalidis
FUTURE OF LIFE INSTITUTE

Practical Information

Who can apply to AIGG?

This course is designed for professionals involved in policymaking, international relations, technology management, and anyone interested in understanding the implications of AI on global governance. This includes government officials, diplomats, policy advisors, tech entrepreneurs, researchers, and graduate students in relevant fields. It’s also suitable for those who are keen to understand how AI can be leveraged to solve complex problems in global governance. The course provides a unique opportunity to learn from leading experts in the field and to network with like-minded professionals from around the world.

How do I apply to the summer school?

Applicants need to fill out the application form. After submitting the form successfully, applicants will receive a confirmation email. We evaluate and accept participants to the programme on a rolling basis, and applicants can expect a response within one week of submission. For any questions or requests, please get in touch.

Will I receive a certificate upon completion?

The Global Governance Institute will award a certificate to all participants who successfully complete the summer programme. To qualify, participants must attend at least 70% of sessions, actively engage in all activities, and complete the assignments.

What is the fee and what is included?

For the 2026 edition, the tuition fee has been set at 1,800 EUR. It covers participation in all sessions, thematic visits, learning materials, and three months of access to the digital learning platform, as well as lunch, coffee, and refreshments throughout the programme, a reception, the group dinner, and social activities in Brussels.

Please note that we cannot offer scholarships or tuition waivers.

What is the cancellation policy?

GGI reserves the right to cancel the activity up to two weeks prior to the scheduled start date; in that event, any registration fees already paid will be fully refunded. Participants may likewise withdraw from the programme and receive a refund upon written notification as follows: 30 or more days before the start, a full refund; 15 to 29 days before the start, a 50% refund. No refund will be granted if notification of withdrawal is given fewer than 15 days before the start of the training programme. A 10% administrative fee will be deducted from any reimbursement issued.