Artificial intelligence is transforming how societies are governed and how power is exercised across borders. Yet national and international institutions continue to struggle to build effective frameworks for accountability, cooperation, and ethical use. Amid growing geopolitical rivalry and an accelerating AI arms race, our AI Governance programme explores how emerging technologies can be governed to reinforce democracy, security, and sustainable development.
The programme links research with policy practice, examining the implications of AI for regulation, global security, and equity. It convenes experts and practitioners to identify governance gaps, assess risks, and advance solutions that align technological innovation with the public good. We publish independent research, promote informed dialogue, and train the next generation of policymakers through initiatives such as the AI Global Governance Summer School, working to build a more coherent and responsible global approach to the governance of artificial intelligence.
Publications
UN Summit of the Future and Development: The Way Forward
The UN Summit of the Future calls for bold cooperation on climate, digital equity, and responsible AI, urging unified action to harness technology for good and revive the Sustainable Development Goals. Now, governments, businesses, and civil society must turn these ambitions into results.
UN Summit of the Future: A Critical Moment for Global AI Governance
The UN Summit of the Future in September 2024 aims to adopt the Pact for the Future, addressing global governance challenges. Priorities include creating international AI governance frameworks that emphasize ethics, transparency, and equity, while fostering collaboration to mitigate risks, reduce inequalities, and leverage AI for sustainable development and the SDGs.
Deepening EU-Japan-US Cooperation on Critical and Emerging Technologies
The EU, US, and Japan are enhancing cooperation to boost competitiveness in critical technologies and secure resilient supply chains, focusing on reducing reliance on China, diversifying resources, and fostering innovation amid global economic security challenges.
The EU AI Act: two steps forward, one step back
The EU AI Act, approved in March 2024, introduces a risk-based regulatory framework intended to balance AI innovation with the protection of fundamental rights. While it promotes transparency and accountability, concerns persist about national security exceptions and potential surveillance risks that will shape the future of AI governance.
The Council of Europe’s draft AI Treaty: balancing national security, innovation and human rights?
The Council of Europe’s draft Framework Convention on AI aims to set global standards for AI that align with human rights, democracy, and the rule of law. However, compromises over the regulation of private companies and exemptions for national security may weaken its effectiveness in protecting individuals from AI’s risks.
Experts