The sovereignty trap

How the promise of sovereign AI obscures its pitfalls


Banner: Illustration of the earth in the form of a computer chip, generated using Adobe Firefly.

On February 28, 2024, a blog post titled “What is Sovereign AI?” appeared on the website of Nvidia, a chip designer and one of the world’s most valuable companies. The post defined the term as a country’s ability to produce artificial intelligence (AI) using its own “infrastructure, data, workforce and business networks.” In its May 2024 earnings report, Nvidia described sovereign AI as one of its “multibillion dollar” verticals, as it seeks to deliver AI chips and software to countries around the world.

On its face, the concept of “sovereign AI” is about enabling states to mitigate the potential downsides of relying on foreign-made large AI models. Sovereign AI is Nvidia’s attempt to turn this growing demand from governments into a new market: the company offers governments computational resources that can help ensure AI systems are tailored to local conditions. By invoking sovereignty, however, Nvidia is wading into a complex geopolitical context. The broader push from governments for AI sovereignty will have important consequences for the digital ecosystem as a whole and could undermine internet freedom. Nvidia is responding to demand from countries eager for more indigenous options for developing compute capacity and AI systems. However, sovereign AI can create “sovereignty traps” that unintentionally lend momentum to authoritarian governments’ efforts to undermine multistakeholder governance of digital technologies. This piece outlines the broader geopolitical context behind digital sovereignty and identifies several potential sovereignty traps associated with sovereign AI. [1]

Background

Since its inception, the internet has been managed through a multistakeholder system that, while not without its flaws, has sought to uphold a global, open, and interoperable internet. Maintaining this interconnectedness is the foundation on which the multistakeholder community of technical experts, civil society organizations, and industry representatives has operated for years.

One of the early instantiations of digital sovereignty was introduced by China in its 2010 White Paper on “The State of China’s Internet.” In it, Beijing defined the internet as “key national infrastructure” that, as such, falls within the scope of the country’s sovereign jurisdiction. In the same breath, Chinese authorities made explicit the centrality of internet security to digital sovereignty. In China’s case, the government aimed to address internet security risks related to the dissemination of information and data – including public opinion – that could pose a risk to the political security of the Chinese Communist Party (CCP). As a result, foreign social media platforms like X (formerly Twitter) and Facebook have been banned in China since around 2009. It is no coincidence that the remit of China’s main internet regulator, the Cyberspace Administration of China, has evolved from developing and enforcing censorship standards for online content to becoming a key policy body for regulating privacy, data security, and cybersecurity.

This emphasis on state control over the internet – now commonly referred to by China as “network sovereignty” or “cyber sovereignty” (网络主权) – also characterizes China’s approach to the global digital ecosystem. In September 2011, the year after the White Paper’s publication, China, Russia, Tajikistan, and Uzbekistan jointly submitted an “International Code of Conduct for Information Security” to the United Nations General Assembly, which held that control over policies related to governance of the internet is “the sovereign right of states” – and thus should reside squarely under the jurisdiction of the host country.

In line with this view, China has made significant efforts in recent years to shift the center of gravity of internet governance from multistakeholder to multilateral fora. For example, Beijing has sought to leverage the platform of the Global Digital Compact under the United Nations to enlist G-77 countries in support of its vision. China has proposed language that would make the internet a more centralized, top-down network over which governments have sole authority, excluding the technical community and expert organizations that have helped shape community governance since the internet’s early days.

Adding to the confusion is the seeming interchangeability of the terms “cyber sovereignty,” used more frequently by China, and “digital sovereignty,” used most often by the European Union and its member states. While semantically similar, these terms have vastly different implications for digital policy due to the disparate social contexts in which they are embedded. For example, while the origin of the “cyber sovereignty” concept in China speaks to the CCP’s desire for internet security, some countries view cyber sovereignty as a pathway to greater control over the development of their digital economies, enabling them to deliver public goods to their citizens more efficiently. There is real demand for this kind of autonomy, especially among Global Majority countries.

Democracies are now trying to find alternative concepts that capture the spirit of self-sufficiency in tech governance without lending credence to the more problematic implications of digital sovereignty. For example, Denmark’s strategy for tech diplomacy avoids reference to digital sovereignty, instead highlighting the importance of technology in promoting and preserving democratic values and human rights while helping address global challenges. The analogous U.S. strategy invokes the concept of “digital solidarity” as a counterpoint, alluding to the importance of respecting fundamental rights in the digital world.

Thus, ideas of sovereignty, as applied to the digital realm, can have both a positive, rights-affirming connotation and a negative one that leaves the definition of digital rights and duties to the state alone. This ambiguity can lead to confusion, and it often obscures the legitimate concerns that Global Majority countries have about technological capacity-building and autonomy in digital governance.

Nvidia’s introduction of “sovereign AI” further complicates this terrain and may amplify the problems presented by authoritarian pushes for sovereignty in the digital domain. For example, national-level AI governance initiatives that emphasize sovereignty may undermine efforts at collective and collaborative governance of AI, reducing the efficacy of risk mitigations. Over-indexing on sovereignty in the context of technology often cedes important ground in the effort to ensure that transformative technologies like AI are governed in an open, transparent, and rights-respecting manner. Without global governance, a full, uncritical embrace of sovereign AI may make the world less safe, prosperous, and democratic. Below we outline some of the “traps” that can be triggered when sovereignty is invoked in digital contexts without an understanding of the broader political contexts within which such terms are embedded.

Sovereignty trap 1: sovereign systems are not collaborative

If there is one thing we have learned from the governance of the internet over the past twenty years, it is that collaboration sits at the core of how we should address the complexity and fast-paced nature of technology. AI is no different. It is an ecosystem that is both diverse and complex, which means that no single entity or person should be responsible for allocating its benefits and risks. Just like the internet, AI is full of “wicked problems,” whether regarding the ethics of autonomy or the effects that large language models could have on the climate, given the energy required to train them. Wicked problems can only be solved through successful collaboration, not by each actor sticking its head in the sand.

Collaboration leads to more transparent governance, and transparency in how AI is governed is essential given the potential for AI systems to be weaponized and cause real-world harm. For example, many of the drones used in the Russia-Ukraine war carry AI-enabled guidance or targeting systems, which have had a major impact on the conflict. Just as closed systems on the internet can be harmful for innovation and competition – as with operating systems or app stores built as “walled gardens” – AI systems that are created in silos and are not subject to a collaborative international governance framework will produce fewer benefits for society.

Legitimate concerns about the misappropriation of AI systems will only worsen if sovereign AI is achieved by imposing harsh restrictions on cross-border data flows. As with the internet, data flows are crucial because they ensure access to information that is important for AI development. True collaboration can help level the playing field between stakeholders and address existing gaps, especially regarding the need for human rights to underlie the creation, deployment, and use of AI systems.

Sovereignty trap 2: sovereign systems make governments the sole guarantors of rights

Sovereign AI, like its antecedent “digital sovereignty,” means different things to different audiences. On one hand, it denotes reclaiming control of the future from dominant tech companies, usually based in the United States. Rallying cries for digital sovereignty stem from real concerns about critical digital infrastructure, including AI infrastructure, being disrupted or shut down unilaterally by the United States. AI researchers have long argued that actors in the Global Majority must avoid being relegated to the status of data suppliers and consumers of models, as AI systems that are built and tested in the contexts where they will actually be deployed will generate better outcomes for Global Majority users.

The other connotation of sovereign AI, however, is that the state has the sole authority to define, guarantee, or deny rights. This is particularly worrying in the context of generative AI, which is an inherently centralizing technology due to its lack of interpretability and the immense resources required to build large AI models. If governments choose to pursue sovereign AI by nationalizing data resources, such as by blocking cross-border transfers of datasets that could be used to train large AI models, this could have significant implications for human rights. For instance, governments might increase surveillance to better collect such data or to monitor cross-border transfers. At a more basic level, governments tend to have a more essentialist understanding of national identity than the civil society organizations, sociotechnical researchers, or other stakeholders who might curate national datasets, meaning that government-backed data initiatives for sovereign AI are likely to harm marginalized populations.

Sovereignty trap 3: sovereign systems can be weaponized

Assessing the risks of sovereign AI systems is critical, but governments often lack both the capacity and the incentives to do so. The bedrock of any AI system lies in the quality and quantity of the data used to build it. If the data is biased or incomplete, or if the values encoded in the data are non-democratic or toxic, an AI system’s output will reflect these characteristics. This is akin to the old adage in computer science, “garbage in, garbage out”: the quality of the output is determined by the quality of the input.

As countries increasingly rely on AI for digital sovereignty and national security, new challenges and potential risks emerge. Sovereign AI systems, designed to operate within a nation’s own infrastructure and data networks, might inadvertently or intentionally weaponize or exaggerate certain information based on their training data.

For instance, if a national AI system is trained on data that overwhelmingly endorses non-democratic values or autocratic perspectives, the system may identify as threats certain actions or entities that would not be considered as such in a democratic context, including political opposition, civil society activism, or a free press. This scenario echoes concerns about China’s approach to “cyber sovereignty,” under which the state exerts control over digital space to suppress sources that present views or information contradicting the official narrative of the Chinese government. This includes blocking access to foreign websites and social media platforms, filtering online content, and monitoring digital communications to prevent the dissemination of dissenting views or information deemed sensitive by the government. Such measures could be reinforced through the use of sovereign AI systems.

Moreover, the legitimacy that comes with sovereign AI projects could be exploited by governments to ensure that state-backed language models endorse a specific ideology or narrative. This is already taking place in China, where the government has succeeded in censoring the outputs of homegrown large language models. It also aligns with China’s push to leverage the Global Digital Compact to reshape internet governance in favor of a more centralized approach. If sovereign AI is used to bolster the position of authoritarian governments, it could further undermine the multistakeholder model of internet and digital governance.

Conclusion

The history of digital sovereignty shows that sovereign AI comes with a number of pitfalls, even as its benefits remain largely untested. The push to wall off the development of AI and other emerging technologies from external involvement and oversight is risky: lack of collaboration, governments as the sole guarantors of rights, and the potential weaponization of AI systems are all major potential drawbacks of sovereign AI. The global community should instead focus on ensuring that AI governance is open, collaborative, transparent, and aligned with core values of human rights and democracy. While sovereign AI will undoubtedly boost Nvidia’s earnings, its impact on democracy is more ambiguous.

Addressing these potential threats is crucial for global stability and security. As AI’s impact on national security grows, it is essential to establish international norms and standards for the development and deployment of state-backed AI systems. This includes ensuring transparency in how these systems are built, maintained, released, and applied, as well as implementing measures to prevent misuse of AI applications. AI governance should seek to ensure that AI enhances security, fosters innovation, and promotes economic growth, rather than exacerbating national security threats or strengthening authoritarian governments. Our goal should be to advance the wellbeing of ordinary people, not sovereignty for sovereignty’s sake.

Trisha Ray also contributed to this essay.


[1] Note that countries could pursue sovereign AI in different ways, including by acquiring more AI chips and building more datacenters to increase domestic capacity to train and run large AI models; training or fine-tuning national AI models with government support; building datasets of national languages (or images of people from the country) to enable the creation of more representative training datasets; or blocking foreign firms and countries from accessing domestic resources that might otherwise be used to train their AI models (e.g., critical minerals, data laborers, datasets, or chips). This piece focuses on data, as it has been central to discussions of digital sovereignty.


Cite this essay:

Konstantinos Komaitis, Esteban Ponce de León, Trisha Ray, Kenton Thibaut, and Kevin Klyman, “The sovereignty trap: how the promise of sovereign AI obscures its pitfalls,” Digital Forensic Research Lab (DFRLab), July 17, 2024, https://dfrlab.org/2024/07/17/the-sovereignty-trap/.