Shaping the Road to the AI Impact Summit in India

Examining how global governance efforts around AI intersect and what they collectively mean for the future of inclusive AI.


BANNER: Servers. (Source: Reuters)

As artificial intelligence becomes a central pillar of global infrastructure, international, regional, and national governance efforts have multiplied, shaping norms, expectations, and principles for responsible AI. Against this backdrop, the Global Network Initiative (GNI), the Atlantic Council’s Democracy and Tech Initiative, and the Centre for Communication Governance (CCG) at the National Law University Delhi convened a workshop on December 8, 2025, hosted by the Geneva Graduate Institute. The event is part of a series GNI and CCG have been hosting under the project “Multistakeholder Approach to Participation in AI Governance” (MAP-AI). The roundtable featured expert remarks from government representatives, UN officials, human rights experts, and civil society representatives from the Global South. The program focused on how growing governance efforts around AI intersect and what they collectively mean for the future of inclusive AI governance. As an officially recognized pre-Summit event for the AI India Impact Summit, the workshop offered early reflections and insights into the broader process and themes of the Summit. The discussions revealed a landscape marked by both optimism and opacity: ambition, persistent gaps, and accelerating geopolitical pressures.

From Internet Governance to AI Governance 

Participants highlighted that AI governance does not benefit from the long institutional history that shaped Internet governance. Whereas the Internet’s multistakeholder model emerged over decades and has navigated (and continues to navigate) deep power asymmetries both between and within the Global North and South, AI governance is forming in real time amid power asymmetries, old and new, that are exacerbated by the scope, scale, and speed of the technology, its evolution, and the immense investment flowing into it. Many governments—particularly those with limited access to compute or technical expertise—struggle to participate meaningfully. Although capacity-building efforts are referenced across various frameworks, including the World Summit on the Information Society and the Global Digital Compact, implementation remains uneven. Without structural support, participation risks becoming symbolic rather than substantive.

A recurring theme of the workshop was the fragmented nature of today’s AI governance ecosystem. A dense patchwork of initiatives has emerged: UN General Assembly resolutions, the Global Digital Compact, the UN’s Global Dialogue on AI Governance and the Independent International Scientific Panel on AI, the UNESCO Global AI Ethics and Governance Observatory, the African Union’s continental strategy, the OECD’s Global Partnership on AI, Council of Europe and EU efforts, G7 and G20 processes, and the AI summit series in the UK, Seoul, Paris, and now India. While some efforts—like the Paris process and the UN AI mechanisms—have made strides toward inclusivity through working groups, broader accessibility, and commitments to universal access, many governance spaces remain siloed: different Geneva-based forums do not always integrate language around rights-respecting AI, and debates around AI sovereignty, corporate power, and societal impacts remain insufficiently connected.

Despite these challenges, AI summits are becoming crucial agenda-setting spaces. Their discussions can shape parallel processes, elevate underexplored issues, and connect technical, geopolitical, and rights-based conversations. Yet the lack of shared expectations around process and multistakeholder engagement—along with the inconsistent grounding of commitments and best practices in existing frameworks, such as the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises—creates both uncertainty and opportunity. This gap highlights the potential to strengthen summits as solution-oriented, rights-based forums for AI governance that build on civil society and academic expertise, provide clearer orientation for business, and reaffirm the value of global standards.

Each gathering adopts its own structure, tone, and priorities, complicating efforts to build coherence across summits. Yet this fragmentation is also an opening to address those shortcomings. Countries preparing to host future summits can anchor global AI discussions within existing governance ecosystems, connect them to ongoing initiatives—such as emerging UN-led AI processes—and ensure more meaningful and diverse participation.

These goals reinforce a broader recognition: sustaining multistakeholder engagement requires more than ad hoc convenings. A central question is how to design governance frameworks that genuinely include the Global South. Presence alone cannot be equated with meaningful participation. Without meaningful and equitable access to computing, research funding, or technical infrastructure, countries cannot influence AI governance in ways that reflect their priorities or constraints. Data governance—including cross-border flows, data corridors, data stewardship models, and emerging concepts like data embassies—is essential to equitable participation. Moreover, conversations about AI sovereignty must grapple with the economic and political realities of corporate power, particularly when technology firms may wield capabilities that rival or exceed those of states.

Rights, Multistakeholder Participation, and Geopolitics  

Equally important is the role of rights-respecting global AI governance in ensuring that users’ needs and safety are centered in the development and use of AI. Rights-based language appears inconsistently across different declarations and processes. While integrating rights considerations into technical or diplomatic spaces remains challenging, it must remain a priority. There are meaningful signals of progress: summit declarations increasingly reference rights, standards-setting bodies are exploring rights-based approaches and seeking ways to embed rights-aligned principles into standards, and there is renewed attention to aligning emerging AI norms with broader digital rights frameworks. Civil society and academic institutions continue to play vital roles—as watchdogs, “critical friends,” and counterweights to private-sector dominance—mirroring earlier moments in technology governance when scrutiny helped drive safer and more accountable systems.

Geopolitics remains an unavoidable undercurrent. Strategic competition among major powers, national security anxieties, and the expanding role of AI in military and conflict settings are reshaping diplomatic priorities. This dynamic has accelerated national AI strategies and domestic initiatives, but it also risks producing parallel governance systems grounded in divergent values. Overcoming this fragmentation will require building stronger bridges—between New York, Geneva, and national capitals, across regional processes, and among governments, civil society, academia, and industry. Without such bridges, the risks of misalignment, duplication, and governance fatigue will continue to grow.

Notably, many of the most consequential conversations about AI are still not taking place at the national level. Sectoral ministries—health, agriculture, education—are often absent from global dialogues, even though AI’s most tangible impacts are unfolding in their domains. Bridging the gap between high-level AI governance and real-world challenges at the national level is therefore urgent. This includes ensuring that global frameworks translate into national benefits and that countries can learn from one another’s experiences in implementing AI responsibly.

Conclusion

Hovering above all these issues is a broader, unresolved question: what does collaborative governance actually mean in the context of AI? Principles such as the São Paulo Multistakeholder Guidelines provide important guidance on enabling multistakeholder governance processes. As AI rapidly reshapes societies, economies, and geopolitical relations, defining the contours of collaborative governance has become a pressing global priority. Achieving inclusive, effective governance will require sustained investment, political commitment, and a willingness to navigate tensions embedded in concepts such as sovereignty, responsibility, and trust.

The current moment—characterized by rapid norm development, heightened global attention, and expanding experimentation—offers a rare and time-sensitive opportunity. The path forged now will determine whether AI governance evolves into a fragmented arena of competing interests or a shared foundation for equitable, transparent, and rights-respecting technological futures. Seizing this moment demands collective courage, institutional imagination, and a steadfast commitment to inclusivity. The discussions leading up to the AI India Impact Summit underscore that while the challenges are immense, the opportunity to build a more just global AI order has never been more within reach.


Konstantinos Komaitis is a Resident Senior Fellow at the Democracy & Tech Initiative.

Elonnai Hickok is the Managing Director of the Global Network Initiative.