Toward a New Multilateralism for AI: Insights from the IMF Annual Meetings 2025

November 5, 2025

In brief...

At the recent annual meetings of the International Monetary Fund, leaders considered how the technological adoption of Artificial Intelligence depends on the regulatory and institutional capacity to guide it responsibly. For regulators, this means shifting from a reactive to a proactive approach, anticipating risks such as data misuse, bias, or market concentration before they emerge.

When ministers and central bank governors convened in Washington, DC, for the International Monetary Fund (IMF) and World Bank Annual Meetings 2025, one theme threaded through the fiscal debates and debt sustainability charts: the governance of Artificial Intelligence (AI). From the Per Jacobsson Foundation Lecture by Singapore’s President Tharman Shanmugaratnam to the New Economy Forum on Digitalization and AI, policymakers confronted a new reality: the algorithms shaping global productivity are now shaping the global order.

Renewing Economic Order through AI Governance

In his lecture, “An Era of Possibility: Renewing Economic Order and Shared Purpose,” President Tharman framed AI as “probably the most complex collective-action challenge we have.” He warned that no state can contain its risks alone and proposed building a scientific coalition, equivalent to the Intergovernmental Panel on Climate Change (IPCC) for AI safety, to monitor emerging risks ranging from AI-powered financial fraud to autonomous weapons. The idea underscores both the ambition and the difficulty of global AI governance: cooperation would depend less on enforcement than on transparency, trust, and the willingness of nations to temper self-interest for collective security, a challenge that has long complicated international climate and trade regimes. As President Tharman observed, efforts such as the Global Commission on Responsible AI in the Military Domain, convened by the Netherlands, have already produced practical steps toward shared norms, including a recommendation for binding restrictions on AI control of nuclear weapons and a call for “Responsibility by Design,” where ethical and legal safeguards are embedded in military AI systems from inception. Such initiatives illustrate how governance can begin even in the most sensitive fields.

That broader vision is now taking shape through the United Nations Independent Scientific Panel on AI, an emerging effort to develop a shared evidence-based foundation for global AI standards and risk assessment, an approach that parallels the NIST AI Risk Management Framework, which guides voluntary, risk-based AI governance within the United States. President Tharman urged renewed multilateralism rather than technological fragmentation: a cooperative order where leading economies like the United States and China manage interdependence through shared safety standards, reciprocal research access, and transparent cross-border governance channels, rather than decoupling into rival AI ecosystems. Just as post-war economic stability required Bretton Woods institutions to anchor financial rules, the next era of stability may require a “Bretton Woods for algorithms,” anchored in transparency, accountability, and evidence.

AI and the Productivity Paradox

The IMF Seminar “Boosting Productivity Growth in the Digital Age” examined whether AI’s promised leap in output will translate into shared prosperity and growth in living standards. Panelists noted that while AI can accelerate growth, its gains will remain uneven without regulatory frameworks that foster trust, data integrity, and inclusive access to technology—a condition that mirrors broader concerns about maintaining fair competition in the digital economy. The recent IMF Working Paper “The Global Impact of AI: Mind the Gap” reinforces this concern, finding that productivity gains cluster in economies with strong institutional capacity (e.g., investments in digital infrastructure and skills development) and regulatory capacity (e.g., governments able to design and implement predictable, innovation-friendly rules). Meanwhile, the working paper on “AI and Productivity in Europe” argues that AI’s benefits materialize when privacy and competition policies are balanced. The paper particularly emphasizes the need for data protection rules that safeguard users without stifling innovation, and for market conditions that allow new AI adopters to compete on a level playing field.

While the global race to deploy AI continues to emphasize speed, discussions at the IMF Annual Meetings highlighted that governance capacity will determine who sustains those gains. Some economies, particularly in Europe, are betting that the competence and coordination of public institutions to adapt policy frameworks and manage technological change responsibly will prove a long-term advantage. The IMF’s perspective sits between these approaches: technological progress is indispensable, but its productivity benefits endure only where institutions are capable of managing risk, supporting technological diffusion, and maintaining trust.

Digitalization, Resilience, and Regulatory Readiness

The forum on Digitalization of the Economy and AI explored how digital technologies and AI are transforming economies at an unprecedented pace, reshaping how value is created, services are delivered, and competitiveness is sustained. These structural shifts call for regulatory frameworks that adapt to innovation without stifling it, using flexible approaches such as regulatory sandboxes and iterative oversight. Complementing this discussion was the plenary address “Resilience in a World of Uncertainty” by IMF Managing Director Kristalina Georgieva, who described resilience as the ability of economies to maintain macroeconomic and institutional stability amid global uncertainty. She suggested that economies should be anchored by fiscal discipline, sound regulation, and regulatory housecleaning to unleash private enterprise. She added that preparedness to harness AI will be as vital to long-term growth as regional integration and macroeconomic reform, and cautioned that without coherent rules and global cooperation, digital transformation could amplify inequality and volatility.

AI Preparedness and Global Convergence

In a press briefing, Director Georgieva explained that the IMF has created an AI Preparedness Index to help countries assess their readiness to harness AI for inclusive growth. Director Georgieva warned that low-income economies are falling behind on these foundations, risking exclusion from the gains of technological transformation unless they expand electricity and internet access, invest in education, and strengthen governance institutions. “If you don’t have access to electricity, you don’t have access to the Internet, you cannot be part of the AI revolution,” she said, emphasizing that closing these gaps is essential for convergence. Her remarks echoed the call by United Nations Secretary-General António Guterres that the world must prevent an AI divide between haves and have-nots and ensure that AI accelerates sustainable macroeconomic development rather than entrenches inequality. Together, these perspectives highlight a central lesson of the Annual Meetings: AI governance and economic governance are now inseparable, and building regulatory and institutional capacity is indispensable to equitable technological progress.
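The logic of a readiness index of this kind can be made concrete with a minimal sketch. The dimension names and equal weights below are illustrative assumptions, not the IMF's actual methodology; the point is simply how country-level scores across several dimensions combine into a single composite measure.

```python
# Hypothetical sketch of a composite readiness index.
# Dimension names and equal weights are illustrative assumptions,
# not the IMF's published methodology.
from dataclasses import dataclass

@dataclass
class CountryScores:
    digital_infrastructure: float  # each score normalized to 0-1
    human_capital: float
    innovation: float
    regulation: float

def composite_index(s: CountryScores,
                    weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted average of the dimension scores, rounded to 3 decimals."""
    components = (s.digital_infrastructure, s.human_capital,
                  s.innovation, s.regulation)
    return round(sum(w * c for w, c in zip(weights, components)), 3)

# Example: a country strong in infrastructure but weaker in regulation.
example = CountryScores(0.8, 0.7, 0.6, 0.5)
print(composite_index(example))  # 0.65
```

A composite built this way makes the policy trade-off visible: a country cannot offset a near-zero score on connectivity or governance with strength elsewhere without it showing directly in the aggregate, which is why Director Georgieva's emphasis on electricity, internet access, and institutions maps naturally onto the index's foundations.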

Policy Takeaways – Toward a New Multilateralism for AI

Across the IMF Annual Meetings, a common message emerged: technological change has outpaced the institutions designed to manage it. From President Tharman Shanmugaratnam’s call for an IPCC for AI to Director Georgieva’s warning that countries without digital and regulatory infrastructure risk exclusion, leaders converged on a single insight: technological adoption depends on the regulatory and institutional capacity to guide it responsibly. For regulators, this means shifting from a reactive to a proactive approach, anticipating risks such as data misuse, bias, or market concentration before they emerge. In practice, this calls for regulatory frameworks that strengthen data governance through privacy and security rules designed to be interoperable across jurisdictions; encourage fair competition by limiting excessive market dominance; and embed ethical oversight through transparency standards and independent review mechanisms. 

The underlying problem these efforts seek to address is fragmentation—uneven national approaches that could amplify systemic risks and widen technological divides. Hence, much of the dialogue centered on the need to harmonize or at least align AI regulations across borders, ensuring consistency where the technology’s impacts transcend national boundaries. As President Tharman warned, no state can contain its risks alone. The experience of COVID-19 offers a useful parallel: when a challenge is as diffuse as a virus or an algorithm, national responses alone cannot ensure global stability. For international institutions, it means renewing the cooperative logic of Bretton Woods for a digital era: pursuing interoperable standards and trusted data exchange, even amid legitimate concerns over proprietary and security constraints. Just as the Bretton Woods institutions anchored postwar financial stability through cooperation and shared rules, the next era of digital stability may depend on comparable trust frameworks for AI. As President Tharman reminded delegates, “This is no time for timidity.” The task ahead is not simply to regulate algorithms, but to govern progress itself, so that the next era of digital transformation strengthens, rather than fractures, the international order.