User in the Middle: An Interoperability and Security Guide for Policymakers

When technologies work together, it benefits users and the digital ecosystem. Policymakers can advance interoperability and security in tandem by understanding how each impacts the other.

Executive Summary

Interoperability and security in digital systems are two concepts that alone and together defy easy categorization. Interoperability—the ability of heterogeneous digital systems to exchange and make use of data—is a highly contextual property that takes varied forms across different kinds of technologies. Interoperability can be categorized as horizontal, such as the ability of two different messaging applications to exchange messages, or vertical, such as the compatibility of a software application with a particular operating system or hardware platform. Both horizontal and vertical interoperability in digital systems have been the subject of policy interest due to their potential to unlock follow-on innovation and competition as well as afford more choice to users. Highly relevant in this policy conversation, and for the general goal of advancing user-centered interoperability in digital systems, is the question of how interoperability interacts with and impacts system security.

There are multiple paths by which digital technologies become interoperable, each of which has its own potential impacts on security. Some are interoperable from their inception, like many internet technologies that were developed outside the auspices of traditional for-profit companies. Other technologies become interoperable through mutual consensus, such as through the development and adoption of open standards, often when the costs of noninteroperability are high for all participants. In other cases, through “adversarial interoperability,” one party engineers interoperability with another technology without its maker’s consent or involvement; this adversarial interoperability can be fleeting, or can be the first step toward more lasting interoperation. And finally, there is policy intervention: from US antitrust cases to sector regulations to Europe’s sprawling Digital Markets Act (DMA), policymakers have intervened to advance interoperability in a number of specific technology contexts.

In policy conversations, perhaps the most often-cited interaction between interoperability and security is the idea that allowing or requiring interoperability in certain technology systems would unacceptably harm the security interests of the system’s users. This idea is not entirely baseless. Interoperability does have significant security implications, such as broadening the set of entities trusted to access sensitive data or take security-relevant actions, which can create security risk. However, managing trust is a foundational component of information security, not an impossibility—instead of looking at this requirement to manage trust and asking whether interoperation should occur, policymakers should be asking how expanded trust can be managed in a risk-aware way with users’ informed consent as the north star.

Beyond this often-raised interaction, however, there are other important interactions between interoperability and security that are relevant for policymakers considering how to advance both a more secure and more open digital ecosystem. For example, consider encryption: to protect messages or data exchanges using encryption, all parties must agree on an interoperable standard that uses the same encryption. Adding encryption to existing protocols is often nontrivial, and a failure to agree on such a standard can harm user security by forcing users or technologies to fall back to less secure or even unencrypted channels. Similarly, in networked technologies, agreeing on compatible methods of identifying users is another prerequisite for secure interoperability. Awareness of these core building blocks of secure and interoperable systems can help policymakers craft better interoperability policy and illuminate areas for further research investment.

While questions of interoperability in contexts like messaging and app stores have emerged into the public conversation—if not yet been fully addressed by US policy—there are other important frontiers for interoperability in the digital technologies that underpin present and future computing, such as cloud computing and artificial intelligence. In these areas and beyond, policymakers can and should consider how policy can advance the development of an open yet secure digital stratum on which to build the technologies of the future, such as by considering how to use competition authorities to hold companies accountable for delivering secure interoperability; prioritizing government’s own use of secure interoperable technologies, including those based on open standards; advancing fundamental data rights; and investing in underlying technical infrastructure for secure interoperability.

Introduction

Interoperability, or the ability of heterogeneous digital or physical systems to work together, is an invisible component of most digital systems with which consumers interact every day. The infrastructure of the internet, for example, is profoundly interoperable: it is not a service offered by a single vendor, but is instead defined by a set of protocols that allow a variety of diverse networked devices and software systems to exchange information, providing ways for its users to access and interact with a range of resources from websites to global cloud computing infrastructure.1“How Does the Internet Work?,” Mozilla, last updated July 24, 2023, https://developer.mozilla.org/en-US/docs/Learn/Common_questions/Web_mechanics/How_does_the_Internet_work. Yet, throughout the computing stack, and built atop this global internet, there are a variety of digital products, services, and applications that are not always so neatly interoperable with each other.

Interoperability is not a default or natural state of technology: it requires active choice or effort, whether through the free release of underlying technology, through engineered conformance with open standards, or through the creation of infrastructure for integrations. At the same time, it is—at least in some form—a component of every piece of digital infrastructure technology with which we interact. Interoperability is a key enabler for the computing stack and for different kinds of networked technology; it supports the evolution of multicomponent technological systems over time. In that sense, interoperability is often taken for granted, the kind of property most often noticed when it is absent. Yet it is often absent: technology makers can curate incompatibility, whether in standards or product design, to protect advantages arising from their walled gardens. This incompatibility can drive the development of IT monocultures, which have long drawn scrutiny in terms of how they impact information security.

While this report seeks to explicate broad themes and patterns around interoperability and security, both these properties are typically best examined within the context of specific technologies (or combinations of technologies), and through the articulation of the specific benefits that interoperation and security in these contexts can provide to users. This user-centered approach respects the way in which interoperability takes varied forms, from entire distributed systems built on common protocols; to couplings between hardware and software; to sharing data between services. These different kinds of interoperability, and their instantiation in different technology contexts, afford different benefits in terms of the interests, security, and control of users. Different interoperability contexts may entail different tradeoffs, and demand different policy responses. Putting users at the center affords a measure of focus in this otherwise richly ambiguous space.

Policymakers have begun to consider whether and how policy should mandate or advance interoperability in the evolving ecosystem of digital systems. As these policies have advanced these goals in fits and starts, the companies operating the systems under scrutiny have pushed back, voicing concerns about how increased interoperability might negatively impact user security. This report seeks to rigorously engage with the question of the interactions between interoperability and information security, providing policymakers with a framework for reasoning about these arguments and shining light on less-discussed challenges in interoperability and security that could be addressed through policy.

What is Interoperability?

The Institute of Electrical and Electronics Engineers (IEEE) provides what federal agencies have deemed the “accepted definition of interoperability, at least from a technical perspective”: “the ability of two or more systems or components to exchange information and to use the information that has been exchanged.”2“Interoperability,” Federal Communications Commission, accessed June 5, 2024, https://www.fcc.gov/general/interoperability. Notably, this definition has at least two functional parts: first, the ability to exchange information in some form, and second, the ability to make use of it.3This split mirrors distinctions in other models of interoperability—for example, some experts talk about a distinction between technical interoperability (or syntactic interoperability), the ability for two systems to physically interchange data in a consistent format, versus “operational interoperability” (or semantic interoperability), the ability of each system to make sense and functional use of the exchanged data. This report will primarily use the term “interoperability” to refer to systems that are able to do both. In practice, there are many shades of gray between “interoperable” and “noninteroperable” technologies. For example, two technologies might be able to exchange some but not all of the data that each alone can process, or might share certain interoperable features but have other, unshared features that cannot be rendered interoperable.
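
To make the two halves of this definition concrete, the minimal sketch below (in Python, with entirely hypothetical field names) shows two systems exchanging a JSON message: being able to parse the message is the “exchange” half (syntactic interoperability), while knowing what the fields mean well enough to act on them is the “use” half (semantic interoperability).

```python
import json

# A hypothetical reading exported by System A. Both systems can parse the JSON
# (syntactic interoperability), but only a system that knows what the fields
# mean can act on the value correctly (semantic interoperability).
message = '{"sensor_id": "42", "temperature": 98.6, "unit": "F"}'

record = json.loads(message)  # exchange: the bytes decode into a structured object

def to_celsius(reading: dict) -> float:
    # Use: System B stores temperatures in Celsius, so it must understand the
    # "unit" field before the exchanged value is actually usable.
    if reading.get("unit") == "F":
        return (reading["temperature"] - 32) * 5 / 9
    if reading.get("unit") == "C":
        return reading["temperature"]
    raise ValueError("unknown unit: data was exchanged but cannot be used")

print(to_celsius(record))  # 37.0
```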

Another useful framework to characterize interoperability in systems is the distinction between “vertical” and “horizontal” interoperability.4Marc Bourreau, Jan Krämer, and Miriam Buiten, Interoperability in Digital Markets, Centre on Regulation in Europe, March 21, 2022, https://cerre.eu/wp-content/uploads/2022/03/220321_CERRE_Report_Interoperability-in-Digital-Markets_FINAL.pdf. This concept implicitly references the idea of a “software stack,” describing how software systems are assembled from layers of modular components that implement different levels of the computing process and define standardized ways to interact with layers above and below them. The flow of information on the internet, for example, can be split into distinct abstraction layers from physical to network to application; the standardization of each layer and their interactions with each other allow diverse hardware components across the globe to exchange information and to make sense of the information they receive in turn.5J. D. Day and H. Zimmermann, “The OSI Reference Model,” Proceedings of the IEEE 71, no. 12 (December 1983): 1334–40, https://doi.org/10.1109/PROC.1983.12775.
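
As a rough illustration of this layering, the sketch below (the host name is purely illustrative, and this is not how production software should fetch web pages) writes only application-layer HTTP bytes and leaves the transport, network, and physical layers to the operating system and hardware beneath it.

```python
import socket

HOST = "example.com"  # hypothetical host, used only for illustration

# The application composes only the application-layer (HTTP) bytes; the OS's
# TCP/IP stack handles the transport and network layers, and the network
# hardware handles the physical layer beneath that.
with socket.create_connection((HOST, 80)) as conn:
    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode("ascii"))
    reply = conn.recv(4096)

# Print just the status line of whatever the server sent back.
print(reply.split(b"\r\n", 1)[0].decode("ascii", errors="replace"))
```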

Vertical interoperability refers to interoperability between applications or systems on different levels of the stack—e.g., how a piece of software interacts with an operating system. Horizontal interoperability refers to interactions between applications or systems that sit in the same layer of the stack and, often, provide similar or complementary functionality—e.g., connection between two different messaging applications.6Some papers generalize the idea of a software stack to an ecosystem stack, to encapsulate a broader sense of the functions of different applications at the application layer. See Bourreau, Krämer, and Buiten, Interoperability in Digital Markets. While all digital systems must be vertically interoperable with some other part of the software stack (if nothing else, to allow them to run on hardware), technologies are rarely interoperable with every version of the technology above or below them in the stack. For example, different operating systems (e.g., macOS, Windows, or Linux) may expose different ways for application software to interact with their functions, necessitating different versions of applications compatible with each operating system.7StarWarsStarTrek, “If MacOS Is Unix Based, Then Why Can’t MacOS Applications (MS Office Etc.,) Work on Linux?,” Reddit, R/Linux, February 20, 2018, https://www.reddit.com/r/linux/comments/7yvi3z/if_macos_is_unix_based_then_why_cant_macos/.
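
A small, hedged sketch of what this means in practice: the same user-facing intent ("open this file with its default program") maps to different operating system interfaces, which is one reason applications ship platform-specific code or lean on cross-platform toolkits. The OS commands shown are real conventions, but the function itself is illustrative rather than drawn from any particular product.

```python
import os
import platform
import subprocess

def open_file(path: str) -> None:
    """Ask the host operating system to open a file with its default handler.

    The same application-level intent requires different OS-level calls,
    illustrating why software must be made vertically interoperable with
    each platform it targets.
    """
    system = platform.system()
    if system == "Darwin":                       # macOS
        subprocess.run(["open", path], check=True)
    elif system == "Windows":
        os.startfile(path)                       # Windows-only API
    else:                                        # most Linux desktops
        subprocess.run(["xdg-open", path], check=True)
```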

Horizontal interoperability is less critical within single computing systems but foundational for those systems interacting with each other, especially across networked technologies like the internet or telecommunications. Questions about horizontal interoperability also arise in feature integrations between distinct applications (e.g., integrating email and calendar functions) and in questions of “data portability” (whether a user can take their data from one application and move it into another application providing similar or identical functionality).

The distinction between vertical and horizontal interoperability is useful for identifying similar patterns of behavior, incentives, and challenges that arise in interoperability conversations across a diverse range of digital technologies. This frame also mirrors the idea of vertical versus horizontal integration from corporate strategy, with the Federal Trade Commission describing the potential anticompetitive effects that can emerge when a company merges with another company with which it has a buyer-seller relationship (vertical merger) or with a firm that competes in its market (horizontal merger).8“Competitive Effects,” Federal Trade Commission, June 11, 2013, https://www.ftc.gov/advice-guidance/competition-guidance/guide-antitrust-laws/mergers/competitive-effects.

Standards are an important enabler for interoperability. Standards are specifications that define the protocols, norms, and ways in which technologies operate, underlying everything from email (SMTP) to how your phone connects over Bluetooth.9Corey Ruth, “What Are Standards? Why Are They Important?,” IEEE Standards Association (blog), January 11, 2021, https://standards.ieee.org/beyond-standards/what-are-standards-why-are-they-important/. Standards are typically developed by standards development organizations, which are often governmental or independent nonprofit entities that work to amalgamate technical feedback to develop and promulgate a standard. However, even once a standard is developed, all is not automatically interoperable. Oftentimes, there are multiple competing standards for a particular function, and there have been many significant “standards wars” as companies battle to have their preferred standard adopted as the norm by the wider industry.10Carl Shapiro and Hal Varian, “The Art of Standards Wars,” California Management Review 41 (Winter 1999), https://journals.sagepub.com/doi/10.2307/41165984. In some cases, regulators may step in to resolve these conflicts, but in other cases they may just play out as wars of attrition.

Why Interoperability?

In some basic sense, interoperability is both a driver and a result of technological evolution and change. Subsequent generations of technology build upon the strata that have come before, necessarily interoperating with this foundation: consider the flourishing of companies offering software after the personal computing revolution, or those offering internet-enabled services after the advent of the web. Making new technologies more open and capable of interoperation can accelerate similar follow-on innovation. This may be the greatest argument in favor of interoperability—but not the only one that justifies interoperability as a legitimate priority in digital policy.

Another related lens often adopted by policymakers considering interoperability concerns competition: a lack of interoperability can make it more challenging for new companies to enter the market and offer competing (or even noncompeting) products or services because they are denied access to operationally necessary functionality or data. Interoperability interventions can improve the health and competitiveness of the broader market, allowing a wider variety of participants to compete to develop innovations at different layers of the stack or based upon available data.

Another lens is one of consumer protection: in certain contexts, a lack of interoperability can directly create harms or deny benefits to technology users. Users in this context need not only be end consumers; they can also be businesses, nonprofits, and other organizations that are customers of other technology providers. Of particular importance for this report are cases in which user security is harmed because different parties cannot agree on a secure means of interoperation. Other threats in this context may be related to the competitive harms described above, such as cases where users are denied choice or functionality because the companies that hold their data (or control the computing or application stack they use) deny interoperation to other providers. Similarly, a lack of interoperability might create “lock in” for a particular ecosystem by making it difficult for users to switch to competing products regardless of the benefits that a switch might provide in terms of price or even security.

Framing discussions of interoperability in terms of user impacts has a few advantages—particularly when seeking to reason about its security impacts. A clear articulation of the user utility that would be unlocked by interoperability makes it easier to assess cost-benefit tradeoffs of new interoperability requirements, allowing policymakers to prioritize those cases where user utility is most seriously harmed. It also allows policymakers to consider how to construct such requirements in order to best unlock the benefits for the user while avoiding unintentional user harms including degraded security. And, it reaffirms the idea that the user and their preferences should be a key locus of control for security decisions as well as broader decisions about what technologies to use and how to use them.

Paths to Interoperability

Interoperability requires active choice—and often active engineering effort—on the part of the technology maker, whether by implementing an open standard, exposing its APIs to other parties, or sharing its source code or specifications. Some technologies are interoperable from inception or by necessity, while others take a path that includes regulatory intervention.

These paths are shaped both by the characteristics of the technology itself and by the market incentives of the entity that provides it. These paths offer useful case studies for policymakers because they suggest ways in which other, noninteroperable technologies can become interoperable over time. They can also help to suggest characteristics of the contexts in which interoperability may emerge naturally over time, versus cases where market forces may continue to incentivize noninteroperability absent policy intervention.

By Design

Some systems are interoperable and open from their inception, because of their creators’ explicit intent. At CERN,11CERN is the European Organization for Nuclear Research, an intergovernmental organization based in Switzerland. Tim Berners-Lee developed the fundamental technology for the World Wide Web (including specifications for HTML, URLs, and HTTP, still the building blocks for the modern web) and then advocated for CERN to release the code for anyone to use, royalty-free, forever.12“History of the Web,” World Wide Web Foundation, October 18, 2009, https://webfoundation.org/about/vision/history-of-the-web/. This laid the foundation for the web to flourish as a technology implemented and used by many but owned by no single entity. The Internet Engineering Task Force (IETF) develops and updates many of the standards for technologies that support the functioning of the internet, from IP addresses to cryptography to transport-layer protocols.13“Introduction to the IETF,” Internet Engineering Task Force, accessed June 6, 2024, https://www.ietf.org/about/introduction/. Since the internet has been profoundly interoperable from its inception, it has spawned a wave of interoperable technologies built atop it to ensure its function; technologies like JSON were invented, popularized, and eventually standardized to facilitate communications or address common problems on the web. Interoperable ecosystems tend to beget more interoperable technologies, born from the need to solve these problems while working with the myriad different participants already in the ecosystem.

Open standards typically form a core part of technologies that are interoperable by design. Open standards are typically defined as standards developed through open, collaborative processes that can be readily implemented by any member of the public once they are finalized.14“Definition of ‘Open Standards,’” International Telecommunication Union, accessed June 5, 2024, https://www.itu.int:443/en/ITU-T/ipr/Pages/open.aspx. Open standards do not necessarily need to be free to implement; they can be licensed by the developing entity for a reasonable and nondiscriminatory cost.15“Open Standards Requirement for Software,” Open Source Initiative, July 24, 2006, https://opensource.org/osr.

Once open standards are available and adopted, they can be used to build a variety of follow-on applications. For example, the RSS standard enables users and applications to “subscribe” to updates from webpages in a consistent format, and makes podcasts available across different listening apps (e.g., Apple Podcasts versus Spotify), without requiring the creator to upload their podcast to each platform.16Michal Luria and Gabriel Nicholas, Understanding Innovation in Interoperable Systems, Center for Democracy and Technology, December 2023, https://cdt.org/wp-content/uploads/2023/12/updated-2023-12-06-CDT-Research-Interoperability-Podcasting-report-final.pdf. Substack newsletters are distributed using SMTP, the email protocol on which email services are based, ensuring they can be delivered regardless of the mail provider that a subscriber uses.17Michael Mignano, “The Standards Innovation Paradox,” Medium (blog), July 16, 2022, https://mignano.medium.com/the-standards-innovation-paradox-e14cab521391. In these ways, interoperability can provide particular benefits by supporting ecosystems in which multiple entities interact—for example, there are multiple apps that allow users to listen to podcasts, and multiple providers of email inboxes—by allowing participants to interact more seamlessly across these different platforms using a consistent format rather than needing to duplicate their efforts or choose only one platform to support.
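
As a concrete illustration of what a shared format buys, the minimal sketch below (the feed URL is hypothetical) reads an RSS 2.0 feed using only the Python standard library; because every publisher uses the same channel and item element names, the same few lines work regardless of which podcast host or blog produced the feed.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical feed URL; any RSS 2.0 feed uses the same element names,
# which is what lets many different podcast or news apps consume it.
FEED_URL = "https://example.com/podcast/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 places episodes or articles in <channel><item> elements with
# standardized children such as <title> and <pubDate>.
for item in tree.getroot().findall("./channel/item"):
    title = item.findtext("title", default="(untitled)")
    published = item.findtext("pubDate", default="")
    print(f"{published}  {title}")
```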

Open source software—software for which the source code is publicly released and open to reuse and modification—is related to but distinct from interoperability. Interoperability describes an interchange between two systems; open source code could be one way to implement technical systems capable of interoperability.18Fernando Almeida, Jose Oliveira, and Jose Cruz, “Open Standards And Open Source: Enabling Interoperability,” International Journal of Software Engineering & Applications 2, no. 1 (January  2011): 1–11, https://doi.org/10.5121/ijsea.2011.2101. Open source implementations often arise as ways to create free implementations of common standards or existing, closed source technology products. For example, LibreOffice is an open source analogue of Microsoft Office; it is able to read and write common Microsoft file formats such as .docx and .xlsx, though it uses the Open Document Format, which is based upon open standards.19“What Is LibreOffice?,” LibreOffice, accessed June 5, 2024, https://www.libreoffice.org/discover/libreoffice/.

From Closed to Interoperable: Consensus Self-Interest

While practically every new digital technology must rely on interoperability or open standards in some form or fashion—such as using programming languages to run atop hardware, or transferring data over the internet—new technologies are often built in a relatively “closed” fashion atop this stack initially. There are good reasons for this: in some cases it may be challenging or undesirable to standardize a technology that has not been built yet, lacking a fully developed sense of the possible implementations and desired feature sets. Yet, in many cases, the eventual pressures of practicality and market forces push different actors to come together to provide increased interoperability for a specific technology function.

For example, telecommunications equipment across the world has generally developed atop standards that ensure that certain basic functions—such as transmission of data or phone calls—work across different carriers and mobile phone makers. In 1998, the Federal Communications Commission (FCC) adopted as a uniform standard the North American Advanced Mobile Phone Service (AMPS) standard created by then-AT&T’s Bell Labs. However, in the years since, standardization of telecommunications protocols has largely happened voluntarily, through the Telecommunications Industry Association (TIA), which adopts standards based on input from US telecommunications providers.20Peter Grindley, David J. Salant, and Leonard Waverman, “Standards Wars: The Use of Standard Setting as a Means of Facilitating Cartels, Third Generation Wireless Telecommunication Standard Setting,” International Journal of Communications Law and Policy 3 (Summer 1999): 1-49, https://ciaotest.cc.columbia.edu/olj/ijclp/ijclp_3/ijclp_3b.pdf. The TIA then collaborates with global organizations like the International Telecommunication Union (ITU), a treaty organization of the United Nations that helps foster the development and adoption of international standards for telecommunications equipment.21“Overview of Formal Telecommunications Standards Organizations,” Communications Standard Review, last updated April 3, 2003, https://www.csrstds.com/stdsover.html. In general, networked technologies like telecommunications may show this kind of convergence toward standards-based approaches out of sheer necessity, though the case of telecommunications also illustrates how the need for convergence can simply shift companies’ attention to battles over which sets of standards to adopt.22Grindley, “Standards Wars.”

One of the primary critiques of standards organizations in this space is that they can sometimes serve as fora for large companies—well-resourced and rich with relevant technical expertise—to advance their own interests through the standards development process in ways that harm rather than benefit competition. The Department of Justice investigated the telecommunications standard organization GSM Association over concerns that its eSIM standard provided anticompetitive advantages to existing major mobile operators and found that its process “was deeply flawed and enabled competitors to coordinate anticompetitively,” as it “provided its mobile network operator members […] certain privileges not available to other members and participants, allowing that single interest group to exercise undue influence in the standard-setting process.”23Makan Delrahim, “GSMA Business Review Letter Request,” US Department of Justice. November 27, 2019, https://www.justice.gov/atr/page/file/1221321/dl. All this is to say that standards are often essential parts of achieving beneficial interoperability, but they are not necessarily a panacea for all potential competitive concerns.

Another example of industry convergence is the Matter protocol, which helps define a consistent way for users to communicate with, and control, Internet of Things (IoT) devices like smart lightbulbs or smart locks from their smartphones.24“Matter,” Connectivity Standards Alliance, accessed June 5, 2024, https://csa-iot.org/all-solutions/matter/. Because these devices are often operated with lower-level firmware rather than software, manufacturers previously needed to build wholly separate device versions to allow users to control their smart devices from an iPhone, an Android, or an Amazon Alexa device. The Matter protocol also includes additional security features that can thus be standardized across different IoT manufacturers, an example of how standardization of security protocols can bring all parties up to a shared secure baseline.

In a gray area between mutual consensus and the kind of adversarial interoperability described in the next section, there are kinds of asymmetrical interoperability in which new technologies must make themselves interoperable with the predecessor technologies that create the foundations on which they seek to build. This kind of interoperability is an inevitable part of the lifecycle of compounding technological evolution. However, older technologies do not necessarily face the same pressures to adapt and facilitate interoperability with these newer technologies, so the need to engineer interoperability in these asymmetric contexts can create some of the same security challenges and inefficiencies associated with adversarial interoperability.

From Closed to Interoperable: Adversarial Engineering

When companies do not cooperate to achieve interoperability, new market entrants may pursue another path: “adversarial interoperability,” in which service B implements interoperability with service A despite service A’s not desiring such interoperation (and often taking active steps to prevent it).25Cory Doctorow, “Adversarial Interoperability: Reviving an Elegant Weapon From a More Civilized Age to Slay Today’s Monopolies,” Deeplinks (blog), June 7, 2019, https://www.eff.org/deeplinks/2019/06/adversarial-interoperability-reviving-elegant-weapon-more-civilized-age-slay. Adversarial interoperability is a typical stage in the transition from a closed technology to an interoperable one. For example, Apple reverse-engineered Microsoft Word’s .doc file format for years before Microsoft begrudgingly stopped fighting Mac users’ ability to read and write .doc files.26Cory Doctorow, “Adversarial Interoperability.” If the upstart company can persist long enough to create an expectation among users that they can rely on this interoperability, adversarial interoperability can thus provide a pathway to more permanent forms of interoperability.

Some adversarial interoperability battles have a different outcome. Adversarial interoperability solutions often create security interactions that can provide the grounds (or the excuse) for a company to challenge the interoperability. Definitionally, adversarial interoperability often involves one company connecting to another company’s infrastructure in ways unintended (or even undesired), which can involve reverse-engineering aspects of another company’s product or subverting security features like identity and access management. These security interactions often form the basis for companies’ pushback against unwanted interoperability.

Sometimes this pushback is technical, such as blocking the interoperator from connecting to or accessing infrastructure. For example, the fintech company Plaid offered ways for users to connect third-party services to their bank accounts through “screen scraping,” in which it asked users for their credentials (often with a webpage that looked deceptively similar to their banking login) and then used those credentials to log in as though it were the user and scrape the relevant information from the bank’s website.27Jenny Surane, “Plaid Scared Jamie Dimon But Fintech Behind Venmo and Robinhood Won Him Over,” Bloomberg, May 31, 2023, https://www.bloomberg.com/news/features/2023-05-31/plaid-scared-jamie-dimon-but-fintech-behind-venmo-and-robinhood-won-him-over. At one point, banks including Capital One and PNC began blocking Plaid from accessing their servers.28Surane, “Plaid Scared.” Or, when the app Beeper Mini brought iMessage to Android users by registering Android users to an Apple “Gateway” endpoint,29“How Beeper Mini Works,” Beeper Blog (blog), December 5, 2023, https://blog.beeper.com/2023/12/05/how-beeper-mini-works/. Apple shut down the service, citing its use of “fake credentials” to register Android devices to this endpoint as a security threat.30Hartley Charlton, “Apple Confirms It Shut Down iMessage for Android App Beeper Mini,” MacRumors, December 10, 2023, https://www.macrumors.com/2023/12/10/apple-confirms-it-shut-down-beeper-mini/.

In other cases, the pushback is legal: in 2008, Facebook sued Power Ventures, a company that allowed users to aggregate their data from multiple social media sites, alleging that the company violated the Computer Fraud and Abuse Act (CFAA). The CFAA is a criminal anti-hacking statute, and Facebook’s claim hinged on the idea that Power Ventures was accessing Facebook data “without authorization,” despite having the consent of the Facebook users whose data it was collecting.31Hanni Fakhoury, “Facebook’s Ongoing Legal Saga with Power Ventures Is Dangerous To Innovators and Consumers,” Deeplinks (blog), March 11, 2014, https://www.eff.org/deeplinks/2014/03/facebooks-ongoing-legal-saga-power-ventures-dangerous-innovators-and-consumers. This playbook was later repeated by companies including LinkedIn to prevent other companies from scraping their data.32Charles Duan, “Hacking Antitrust: Competition Policy and the Computer Fraud and Abuse Act,” Colorado Technology Law Journal 19, no. 2 (November 2021): 314-41, https://ctlj.colorado.edu/?p=813. In 2021, the Supreme Court affirmed a “narrow view” of the CFAA, stating that the law did not criminalize the use of data outside of a platform’s terms of service and instead applied only when a user accessed parts of an information system that they were not authorized to access, e.g., hacking.33Bryan Cunningham, John Grant, and Chris Jay Hoofnagle, “Fighting Insider Abuse After Van Buren,” UCI Cybersecurity Policy & Research Institute (blog), June 12, 2021, https://cpri.uci.edu/fighting-insider-abuse-after-van-buren/. However, companies still frequently include in their terms of service prohibitions against scraping or reverse engineering, which may be legally enforceable.34Jamie Williams, “Ninth Circuit Panel Backs Away From Dangerous Password Sharing Decision—But Creates Even More Confusion About the CFAA,” Deeplinks (blog), July 15, 2016, https://www.eff.org/deeplinks/2016/07/ninth-circuit-panel-backs-away-dangerous-password-sharing-decision-creates-even; “Coders’ Rights Project Reverse Engineering FAQ,” Electronic Frontier Foundation, August 6, 2008, https://www.eff.org/issues/coders/reverse-engineering-faq.

As later sections of this paper will outline, some of these cases do create genuine security interactions that are real and important to address. However, it is also important to be aware (and skeptical) of the playbook they create: if entities do not want to interoperate, they can refuse to offer any means of secure interoperation, then attack any adversarial interoperability implementations on the grounds that they subvert security features. In such cases, it may be appropriate for policy to step in—as it did in the case of banking data access, as outlined in the following section.

From Closed to Interoperable: Regulatory Intervention

When neither fundamental ethos nor market forces push technologies toward interoperability, policymakers have sometimes stepped in. While some of these efforts take the form of laws or regulations explicitly aimed at interoperability, many others—especially in the US—come from settlements or agreements related to antitrust lawsuits or mergers that required interoperation of a company’s technology or openness of its intellectual property developments.

In 1956, telecommunications giant AT&T entered into a consent decree with the US government to settle allegations that it had vertically monopolized much of the telecommunications equipment industry. While the consent decree fell short of the breakup that the government had sought, one of the terms of the consent decree was that inventions from its research department, Bell Labs, were to be offered royalty-free to its competitors. This decree led to flourishing follow-on invention, including in areas such as the commercialization of the transistor.35Martin Watzinger et al., “How Antitrust Enforcement Can Spur Innovation: Bell Labs and the 1956 Consent Decree,” American Economic Journal: Economic Policy 12, no. 4 (November 1, 2020): 328–59, https://doi.org/10.1257/pol.20190086. The consent decree also precluded AT&T from commercializing its UNIX operating system, likewise developed at Bell Labs. Instead, AT&T licensed the operating system, including its source code, for a nominal fee. That code formed the basis for a plethora of modern operating systems, from OS X to Android, and contributed heavily to the personal computing revolution.36Cory Doctorow, “Unix and Adversarial Interoperability: The ‘One Weird Antitrust Trick’ That Defined Computing,” Deeplinks (blog), May 6, 2020, https://www.eff.org/deeplinks/2020/05/unix-and-adversarial-interoperability-one-weird-antitrust-trick-defined-computing.

In 2001, the Federal Communications Commission (FCC), as a condition for the merger between AOL and Time Warner, required AOL to open up its popular instant messaging system to interoperation with competitors.37“Fact Sheet: FCC’s Conditioned Approval of AOL-Time Warner Merger,” Federal Communications Commission, 2001, https://transition.fcc.gov/Bureaus/Cable/Public_Notices/2001/fcc01011_fact.pdf. But in 2002, AOL announced that it was scaling back its efforts to implement interoperability, stating that they were too difficult to implement,38“AOL: IM Compatibility Too Costly,” Wired, July 24, 2002, https://www.wired.com/2002/07/aol-im-compatibility-too-costly/. and in 2003 the FCC released AOL from the requirement.39“FCC Releases Order Permitting AOL Time Warner to Provide Advanced IM Services,” Tech Law Journal, August 20, 2003, http://www.techlawjournal.com/topstories/2003/20030820.asp.

With respect to biomedical devices, the Food and Drug Administration has taken steps to promote interoperability, for example, creating a regulatory approval pathway for interoperable automated insulin delivery (AID) systems40Howard Look, “Third Party Control for ACE Pumps,” Tidepool (blog), February 1, 2023, https://www.tidepool.org/blog/the-inside-scoop-on-third-party-control-for-ace-pumps. that enabled the approval of Tidepool Loop,41“Tidepool,” Tidepool, accessed May 10, 2024, https://www.tidepool.org/. a fully interoperable AID system based upon an open-source AID system that was developed and maintained by a community of insulin pump users.42Howard Look, “Tidepool Loop Origin Story,” Tidepool (blog), January 24, 2023, https://www.tidepool.org/blog/tidepool-loop-origin-story; Sarah Zhang, “People Are Clamoring to Buy Old Insulin Pumps,” The Atlantic (blog), April 29, 2019, https://www.theatlantic.com/science/archive/2019/04/looping-created-insulin-pump-underground-market/588091/. (These systems are not currently in use, in part because proprietary insulin pump companies decided that, despite their early support for the project, they intended to focus on developing their own integrated AID and pump systems.43Howard Look, “Tidepool Loop Origin Story.”)

When it came to fintechs’ use of “screen scraping,” the UK’s Competition and Markets Authority and the UK government mandated in 2016 that nine of the largest UK banks implement common standards and APIs for data access.44“Regulatory,” Open Banking, accessed June 7, 2024, https://www.openbanking.org.uk/regulatory/. In 2023, the US Consumer Financial Protection Bureau proposed rules requiring financial institutions to provide basic standards for data access by authorized third parties.45“Required Rulemaking on Personal Financial Data Rights,” Federal Register, October 31, 2023, https://www.federalregister.gov/documents/2023/10/31/2023-23576/required-rulemaking-on-personal-financial-data-rights. In 2023, Plaid announced that it had migrated 100 percent of its traffic with major financial institutions including Capital One, JPMorgan Chase, and Wells Fargo to secure APIs.46Christy Sunquist, “Building an Open Finance Future: Safe and Reliable Connectivity for All,” Plaid (blog), May 11, 2023, https://plaid.com/blog/api-progress-update/.

The European Union’s (EU) recent, expansive Digital Markets Act (DMA) focuses squarely on interoperability in digital technologies. The legislation defines what it terms gatekeepers, or large online platforms capable of crowding out competition to their market, and imposes regulations that equalize market access for all firms. It limits the degree to which gatekeepers can restrict users’ freedom to access their data or connect a gatekeeper’s platform to third-party services and prohibits gatekeepers from favoring their products and services relative to competitors’ products within their platform. The legislation targets both vertical interoperability—e.g., its requirement that companies “allow competing companies to interconnect to the respective features as efficiently as the gatekeeper’s own services and hardware”—and horizontal interoperability, across messaging communication services and data portability with other applications.47“The EU Digital Markets Act: Is Interoperability the Way Forward?,” Global Partners Digital (blog), July 14, 2022, https://www.gp-digital.org/the-eu-digital-markets-act-is-interoperability-the-way-forward/.

Users will soon begin to see the resulting changes. iPhone users will be able to install alternative app stores; iPhone and Android users will see “choice screens,” allowing them to select their default web browser; Android users will be able to select their default search engine; and iPhone users will be able to access web browsers that use their own engines instead of Apple’s WebKit.48Chris Velazco, “The E.U. Digital Markets Act Is Here. Here’s What It Means for You,” Washington Post, March 7, 2024, https://www.washingtonpost.com/technology/2024/03/07/digital-markets-act-apple-google-tiktok-changes-dma/. Messaging interoperability—applicable to Meta’s WhatsApp and Facebook Messenger—is also coming into force, although full interoperability for features such as group chats and videocalling will take longer.49Matt Burgess, “WhatsApp Chats Will Soon Work With Other Encrypted Messaging Apps,” Wired, February 6, 2024, https://www.wired.com/story/whatsapp-interoperability-messaging/. The language of the DMA was agnostic to the way that gatekeepers would need to implement messaging interoperability, and stopped short of requiring the adoption of common protocols.50Eric Rescorla, “Architectural Options for Messaging Interoperability,” Educated Guesswork (blog), June 3, 2024, https://educatedguesswork.org/posts/dma-interop/. WhatsApp opted for a solution that will require third parties to connect to its existing server infrastructure through an interface51Dick Brouwer, “Making Messaging Interoperability with Third Parties Safe for Users in Europe,” Engineering at Meta, March 6, 2024, https://engineering.fb.com/2024/03/06/security/whatsapp-messenger-messaging-interoperability-eu/.—more a kind of “integration” than interoperation.

Regulatory interventions toward interoperability have typically been reluctant to define a specific set of standards that companies must adopt, instead focusing on mandating the end state of interoperability. This approach has its merits: it avoids locking companies into unchanging standards and precluding the adoption of new and improved standards as they emerge. At the same time, these requirements allow companies to comply by permitting others to “integrate” with their services, rather than truly interoperating with these entities on a level playing field based on open standards. Explicitly requiring the use of open standards—as in the banking API case—is one way to ensure that interoperability approaches are built on this foundation without picking a specific standard or group of standards as the winner.

Interoperability and Security

As the example paths above illustrate, the relationship between interoperability and security is neither neatly defined nor uniform. Despite the persistent drumbeat of companies’ claims that they cannot interoperate for security reasons, interoperability and security are not antithetical. Nor are they always aligned; interoperable technologies can also be insecure. This section outlines a few ways in which these two complex concepts interconnect and interrelate, attempting to dispel prevailing myths while also fairly representing the real complexities that may arise.

False Rivals

An obvious interaction between interoperability and security—one that partially inspires this work—is the fact that companies will often cite security as the reason that they will not permit interoperability with another product or service.

Examples are legion. HP blocked third-party replacement ink cartridges from being used in its printers, citing the risk that these cartridges could infect computers with viruses.52“HP CEO Justifies Blocking Third-Party Ink Cartridges by Claiming They Can Inject Malware,” Tom’s Hardware, January 23, 2024, https://www.tomshardware.com/peripherals/printers/hp-ceo-justifies-blocking-third-party-ink-cartridges-by-claiming-they-can-inject-malware. (Some online commentators seemed skeptical.53“Dan Goodin,” Infosec Exchange, January 19, 2024, https://infosec.exchange/@dangoodin/111784091345587251.) Apple repeatedly asserted that interoperability requirements under the DMA requiring it to allow sideloading on the iPhone would negatively impact user security,54Jon Porter, “Apple Argues against Sideloading iPhone Apps as Regulatory Pressure Mounts,” The Verge, June 23, 2021, https://www.theverge.com/2021/6/23/22546771/apple-side-loading-security-risk-report-regulatory-pressure. repeating these claims even as it announced its plan (which drew fierce criticism from app developers55Apple, “Apple Announces Changes to iOS, Safari, and the App Store in the European Union,” January 25, 2024, https://www.apple.com/newsroom/2024/01/apple-announces-changes-to-ios-safari-and-the-app-store-in-the-european-union/.) to comply with the law.56Apple, “Apple Announces Changes.” As discussed in the section on reverse engineering above, arguments about security—or the subversion of security features—are popular for companies justifying the use of technical or legal means to cut off would-be interoperators. And the Federal Trade Commission (FTC) published a 2023 blog post highlighting that companies may claim that privacy and security concerns prevent them from implementing interoperation, claims that the FTC intends to “scrutinize [] carefully to determine whether they are well-founded and not pretextual.”57Staff in the Office of Technology and the Bureau of Competition, “Interoperability, Privacy, & Security,” Office of Technology (blog), December 21, 2023, https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/12/interoperability-privacy-security.

The following sections on trust and the building blocks of secure interoperability will describe ways in which interoperability of different kinds can raise sincere, security-relevant questions, whether technical or organizational. Yet in almost all these cases there are ways to carefully balance the benefits and security risks of interoperability without flatly rejecting it. In cases where interoperability presents clear and overriding benefits to competition or user security, the question for policy should not be yes or no, but how: how should policymakers craft requirements and technical infrastructure to ensure that greater interoperability does not degrade, and perhaps even enhances, user security?

False Equivalences?

At the same time, interoperable technologies are not necessarily secure. Of particular concern in this category are technologies that use security mechanisms such as cryptography that are not developed through an open process that allows the mechanisms to be scrutinized and tested by outside experts. Examples of the risks of this practice abound. In 2008, the MIFARE Classic card, a widely popular contactless smart card implementation, was reverse engineered and its cipher protocol revealed to have significant weaknesses58Nicolas T. Courtois, “The Dark Side of Security by Obscurity – and Cloning MiFare Classic Rail and Building Passes, Anywhere, Anytime,” Proceedings of the International Conference on Security and Cryptography (2009): 331–38, https://doi.org/10.5220/0002238003310338. that left it vulnerable to a variety of attacks.59Andy Greenberg, “Hackers Found a Way to Open Any of 3 Million Hotel Keycard Locks in Seconds,” Wired, March 21, 2024, https://www.wired.com/story/saflok-hotel-lock-unsaflok-hack-technique/. A sobering account based on interviews with a European telecommunications standards organization described how TETRA, an interoperable protocol used for police and emergency management radio communications, was built using a weak encryption standard with an exploitable backdoor that could have been abused by malicious actors to intercept or tamper with these communications.60Kim Zetter, “Interview with the ETSI Standards Organization That Created TETRA ‘Backdoor,’” ZERO DAY, July 25, 2023, https://www.zetter-zeroday.com/interview-with-the-etsi-standards/. Here again, TETRA’s encryption algorithm was standardized in secret, its flaws revealed only after researchers reverse-engineered the protocol. And when researchers have reverse-engineered secret protocols used to transmit data to and from medical devices, they have identified concerning vulnerabilities.61Eduard Marin, “National Instruments – Hacking Implantable Medical Devices to Expose Life-Threatening Security Flaws,” EPDT, February 6, 2018, https://www.epdtonthenet.net/article/151032/Hacking-implantable-medical-devices-to-expose-life-threatening-security-flaws.aspx. As these examples show, “security through obscurity” tends to leave technologies more vulnerable down the line, while openness of standards allows more experts to scrutinize and try to break proposed algorithms, supporting the selection of more secure versions62Bruce Schneier, “Why the Worst Cryptography Is in the Systems That Pass Initial Analysis,” Schneier on Security (blog), March 1999, https://www.schneier.com/essays/archives/1999/03/why_the_worst_crypto.html.—as Linus’s law states, “given enough eyeballs, all bugs are shallow.”63Eric S. Raymond, “Release Early, Release Often,” personal blog, accessed June 5, 2024, http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s04.html.

Insecurity in interoperable technologies can also be driven by the sheer challenges involved in updating or migrating away from widely adopted protocols, even if those protocols are insecure. For example, the Signaling System Number 7 (SS7) and Diameter protocols allow telecommunications operators to exchange information to provide cellular services, supporting cross-carrier interoperability. However, they contain serious security flaws that have been known for more than a decade.64Office of Senator Ron Wyden, “Wyden Urges Biden Administration to Crack Down on Surveillance Companies and Shore up Security of Wireless Networks,” February 29, 2024, https://www.wyden.senate.gov/news/press-releases/wyden-urges-biden-administration-to-crack-down-on-surveillance-companies-and-shore-up-security-of-wireless-networks. The initial design of these protocols made them difficult to secure, and once they were adopted, it was hard to shift to more secure versions; while some more secure versions were proposed, they were never widely adopted.65“Signalling Security in Telecom SS7/Diameter/5G,” European Union Agency for Network and Information Security, March 28, 2018, https://www.enisa.europa.eu/publications/signalling-security-in-telecom-ss7-diameter-5g. In spaces like telecommunications, which are heavily dependent on interoperability, there can be a “long-tail” problem: entire ecosystems may need to continue to support a legacy technology for a long time to avoid breaking interoperability for small numbers of providers who have not yet migrated. For example, at the end of 2022, only ten countries had fully switched off all 2G network services, despite the fact that 2G was superseded by 3G more than 20 years ago.66Pete Bell, “2G and 3G Shutdowns Continue,” TeleGeography (blog), February 15, 2023. https://blog.telegeography.com/2g-and-3g-shutdowns-continue. When providers need to maintain interoperability with insecure legacy systems, they can leave themselves vulnerable to “downgrade attacks,” in which an attacker forces a target system onto the lower security, older protocol and then exploits it. For example, many card readers still support MIFARE Classic in addition to its newer and more secure successor, leaving these readers vulnerable to downgrade attacks.67Brian Harris, “The Hidden Threat of Multi-Card Readers,” Covert Access Team (blog), October 20, 2023, https://covertaccessteam.substack.com/p/the-hidden-threat-of-multi-card-readers.
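
To make the downgrade risk concrete, here is a minimal, hypothetical sketch of version negotiation; the version names and policy are invented rather than drawn from SS7, MIFARE, or any real protocol. The point is that an attacker can only force a fallback if the responder is still willing to accept the legacy option, so enforcing a minimum accepted version closes that path, at the cost of breaking interoperability with peers that never upgraded.

```python
# Illustrative protocol versions, ordered from oldest/weakest to newest.
SUPPORTED = {"v1-legacy": 1, "v2": 2, "v3": 3}
MINIMUM_ACCEPTED = 2  # deprecate the legacy version instead of supporting it forever

def negotiate(client_offered: list[str]) -> str:
    """Pick the best mutually supported version at or above the security floor."""
    usable = [v for v in client_offered
              if v in SUPPORTED and SUPPORTED[v] >= MINIMUM_ACCEPTED]
    if not usable:
        # Failing closed: better to refuse than silently fall back to v1-legacy.
        raise ConnectionError("no mutually supported version at or above the floor")
    return max(usable, key=lambda v: SUPPORTED[v])

print(negotiate(["v1-legacy", "v3"]))    # "v3": the legacy offer is ignored
try:
    negotiate(["v1-legacy"])             # an attacker stripping "v3" gets a refusal,
except ConnectionError as err:           # not a silent downgrade
    print("refused:", err)
```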

Notably, the challenge with insecure legacy systems is not exclusive to interoperable ecosystems.68Michael Burch, “Legacy Systems: Learning From Past Mistakes,” Security Boulevard (blog), May 21, 2024, https://securityboulevard.com/2024/05/legacy-systems-learning-from-past-mistakes/. Information technology systems tend to build upon themselves, creating long-lasting dependencies that may be vulnerable.69Andy Ozment  and Stuart E Schechter, “Milk or Wine: Does Software Security Improve with Age?,” December 6, 2010, https://www.researchgate.net/publication/246828292_Milk_or_Wine_Does_Software_Security_Improve_with_Age. Thus, these challenges suggest not so much that interoperability drives this kind of insecurity, but that both interoperable and closed technology makers must actively consider their ability to adopt new security features and deprecate old, insecure versions as a critical part of secure lifecycle management.

Yet sometimes “rip and replace” is not the only option for improving security. With respect to SS7 and Diameter, there are clear steps that providers can take to add more security to their implementations of these protocols, such as using known best practices to better protect operators’ network perimeters.70“PSHSB Seeks Comment on Implementation of Security Protocols,” Federal Communications Commission, March 27, 2024, https://www.fcc.gov/document/pshsb-seeks-comment-implementation-security-protocols. Thus, in some cases it may not be necessary to fully migrate away from a protocol but instead to update cybersecurity threat models and practices to match the current reality.

Trust

Perhaps the most universal feature of the relationship between security and interoperability is the question of how interoperability impacts trust. Trust is a core, if often unevenly defined,71For example, the “zero trust” paradigm popularized in information security is in practice something closer to having no implicit or assumed trust in assets, resources, or people, rather than having no trust at all. See for example Scott Rose et al., “Zero Trust Architecture,” NIST, August 2020, https://csrc.nist.gov/pubs/sp/800/207/final. concept in information security. For interoperability, a useful definition may be “the assurance that one entity holds that another will perform particular actions according to a specific expectation.”72Mike Bursell, Trust in Computer Systems and the Cloud (New Jersey: Wiley, 2021). Trust in this definition depends heavily on context—e.g., an entity might be trusted in certain contexts or with certain actions, but not with all—and it may be asymmetric.

Noninteroperability makes trust easy—if there is no need to interact with another technology, there is no need to define a trust relationship with it. In contrast, interoperability, whether it involves sharing of data, processing inputs and outputs, or accessing systems, requires establishing and mediating a relationship of trust. Managing complex trust relationships is possible and is, in fact, a core part of information security for technology systems such as web browsers that interface between users’ devices and untrusted internet resources. However, questions about how to securely handle the expanded trust relationships that interoperability creates do require deliberate consideration.

Trust: User Data

A common form of this debate over trust relationships arises in cases where companies that hold or process user data are asked to share that data with, or provide access to it for, a third party. This form of data sharing may arise where a third party wants to provide additional products or services to users based upon data held by another party, such as fintech applications built atop banking data, or where certain kinds of user data must be shared for functional interoperability, such as between messaging applications or social networks.

Trusting another party with user data does create inherent risks: if a third party is hacked, attackers could abuse that access to steal user data. Companies may be reasonably anxious at the prospect of allowing user data to flow to a third party whose security practices they do not control. ‘We trust ourselves to keep users secure, but we’re not sure about those guys,’ the thinking might reasonably go.

Fundamentally, these relationships should be managed according to the basic axiom, “users have the right to make decisions about their own data.” Companies that deny users access to data about them or created by them—based on a professed interest in their security but without a genuine attempt to build a functional trust model to securely share this data—undermine users’ right to use their data as they see fit.

Policymakers should encourage companies to actively build infrastructure for data sharing, such as secure APIs. These APIs can enforce good trust relationships, such as by only allowing interoperators to perform certain contextually appropriate actions, and avoid the need for workarounds like “screen scraping” that break or take advantage of existing trust relationships with users. And users should not be able to accidentally turn over their information to a hacker by clicking a bad link—it is important that the data holder has a reasonable means by which to verify that a user actually intended to share the data, and, ideally, that there is a close coupling between the purpose for which the user thinks they are sharing their data and the actions that the third party is able to take once they have data access. Especially in security-sensitive contexts, it may be appropriate to limit the set of entities allowed to interoperate to exclude bad-faith actors or companies with shoddy security practices. Yet without oversight, allowing one company to determine other companies’ eligibility for interoperation on the basis of security risks reinforcing existing dynamics in which companies use security as a pretext for other interests, such as competitive concerns. Policy may be able to help proactively balance these concerns, such as by developing security and privacy standards that would-be interoperators must implement alongside structures for how policymakers or gatekeeper companies can verify their compliance, or by allowing companies to make this determination while exercising active oversight and scrutiny to ensure that their decisions are not pretextual.
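
As a rough illustration of the contextually scoped access such APIs can enforce, the sketch below (all names and scopes are hypothetical) checks a user-granted token's scopes before allowing a third party to act, in contrast to screen scraping, where the third party effectively holds the user's full credentials.

```python
# Hypothetical data-holder API enforcing user-granted scopes: the third party
# can only perform actions the user explicitly consented to, and the grant can
# be revoked without changing the user's own credentials.
from dataclasses import dataclass

@dataclass
class AccessGrant:
    user_id: str
    third_party: str
    scopes: set[str]  # e.g., the user granted read-only access to transactions

def handle_request(grant: AccessGrant, action: str) -> str:
    if action not in grant.scopes:
        # Deny anything outside the consented scope.
        raise PermissionError(f"{grant.third_party} lacks scope '{action}'")
    return f"ok: {action} for {grant.user_id}"

grant = AccessGrant("user-123", "budgeting-app", {"transactions:read"})
print(handle_request(grant, "transactions:read"))   # permitted
try:
    handle_request(grant, "payments:initiate")      # never granted by the user
except PermissionError as err:
    print(f"denied: {err}")
```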

In short, it falls to policymakers to hold large data stewards accountable for building functional ways to manage trust in order to deliver user-requested data sharing, rather than blocking all access in the name of security.

Trust: Security-Relevant Functions

Security concerns around interoperability can also arise when third parties offer products or services that can control or shape security-relevant properties within another vendor’s system. These situations often arise in vertical interoperability: operators in control of one part of the stack may cite security concerns about letting third parties manage security-relevant processes elsewhere in the stack.

One example is app stores. As the gateway between users and mobile software applications, app stores provide security-relevant functions such as ensuring that listed applications are not malware (or other harmful software) and managing the installation and permissions of user-installed apps.

In the past, Apple has argued against policy proposals that would require it to allow third-party app stores to distribute and manage apps on its iPhones; the company raised concerns that third-party stores may lack sufficient vetting procedures for applications.73Apple, “Apple Announces Changes to iOS, Safari, and the App Store in the European Union,” January 25, 2024, https://www.apple.com/newsroom/2024/01/apple-announces-changes-to-ios-safari-and-the-app-store-in-the-european-union/. However, neither Apple nor Google—operators of the largest app stores—has a perfect track record when it comes to preventing malicious apps from ending up in its app store.74Becky Bracken, “Malicious Apps With Millions of Downloads Found in Apple App Store, Google Play,” Dark Reading, September 23, 2022, https://www.darkreading.com/cyberattacks-data-breaches/malicious-apps-millions-downloads-apple-google-app-stores. This is not to say that there is no value in the app review process that either store provides, only that neither entity is positioned to offer a panacea for mobile security. And limiting the ability of users to select another app store also removes the possibility that users could seek out alternatives that could be more beneficial, such as an alternative app store that offers even greater security and privacy vetting of applications.

The Android ecosystem is more open than Apple’s, allowing for the installation of third-party app stores such as F-Droid, which hosts only free and open source Android applications,75“What is F-Droid?,” accessed June 7, 2024, https://f-droid.org/. and permitting users to directly install applications through Android Application Packages (APKs).76Joe Hindy, “How to Install Third-Party Apps without the Google Play Store,” Android Authority, January 27, 2024, https://www.androidauthority.com/how-to-install-apks-31494/. APK-based installation, or sideloading, does create more risks, as these files are not verified by any app store. Apple has pushed back against policies that would require it to allow sideloading, citing concerns that this could allow users to download dangerous malware onto their devices.77Apple, “Apple Announces Changes to iOS, Safari, and the App Store in the European Union,” January 25, 2024, https://www.apple.com/newsroom/2024/01/apple-announces-changes-to-ios-safari-and-the-app-store-in-the-european-union/. This worry is not wholly without merit—malicious actors do abuse channels like sideloading to target unsuspecting users.78Charlie Fripp, “Sideloading Apps Could Infect Your Android Phone with Malware,” KIMKOMANDO, October 7, 2022, https://www.komando.com/security/ratmilad-android-malware/859519/. But the level of control exercised by Apple is unusual in the broader computing ecosystem. In other technology contexts, consumers have come to expect that they can freely swap security-relevant software on the devices that they control: personal computers let users install wholly different operating systems, such as Linux distributions, without complaint, and consumers can freely install different browsers, which sit between a user’s device and myriad untrusted resources and thus are crucial for security.

As with user data, when it comes to the management of security functions, there are reasonable balances to be struck. Pairing interoperability requirements with a gating function—security standards or a regulatory approval process—can provide one way to thread this needle. Making more dangerous options such as sideloading slightly harder to access, by nesting them within system settings, can also limit the likelihood that unsuspecting users will be duped into performing dangerous installations. Yet here, as with user data, policymakers should push companies to give users the right to manage their own security—and the visibility and tools to do so—rather than enforcing a monopoly on security decision-making for their users.

Encryption

Encryption protects data from being read or tampered with by anyone who lacks the corresponding key to decrypt it, providing integral security for the modern web, communications, and stored data.79“Decrypting the Encryption Debate: A Framework for Decision Makers” at NAP.Edu, accessed June 5, 2024, https://doi.org/10.17226/25010. Encryption algorithms are typically standardized through open processes that invite stakeholders to test the security guarantees of proposed algorithms. In the United States, the National Institute of Standards and Technology (NIST) leads standardization processes for a variety of cryptographic algorithms and contexts, from cryptographic digital signatures to post-quantum encryption algorithms designed to remain secure even against highly efficient quantum computers.80“Cryptography,” NIST, accessed May 29, 2024, https://www.nist.gov/cryptography. Cryptography is mathematically complex, and this standardization allows organizations to rely on algorithms and systems that have been vetted by numerous security experts. It also facilitates interoperation: if two technology systems can agree on a public encryption standard, they can encrypt data exchanged between them and provide greater security for users.
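
A minimal sketch of that idea follows, assuming two systems have agreed on NIST-standardized AES-GCM and already share a key (key exchange is out of scope here); it uses the third-party Python cryptography package, and the message content is a placeholder.

```python
# Two systems that implement the same standardized algorithm (AES-GCM) can
# exchange encrypted data: ciphertext produced by one implementation decrypts
# cleanly in the other, given the shared key and nonce. Requires the
# third-party "cryptography" package; key distribution is assumed out-of-band.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # shared between the two systems
nonce = os.urandom(12)                     # unique per message

# "System A" encrypts with its AES-GCM implementation...
ciphertext = AESGCM(key).encrypt(nonce, b"hello from system A", None)

# ...and "System B", a separate implementation of the same standard, decrypts.
print(AESGCM(key).decrypt(nonce, ciphertext, None))
```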

When there is a failure to coalesce on encrypted and interoperable standards in networked technologies, users’ security can suffer as they fall back to less secure protocols or implementations—plaintext (i.e., unencrypted text) may often be the lowest common denominator format. This phenomenon and a host of other complexities at the intersection of interoperability and encryption have come into the foreground recently with policy and technical battles over messaging.

The default protocol for text-based messaging on consumer cell phones is the SMS protocol, which is unencrypted. Apple users have access to iMessage, which is end-to-end encrypted, meaning the message content can only be decrypted by the parties to the communication and not by the service provider. When Apple users want to text with a user of any other phone, they are forced back to the lower-security SMS protocol. While Apple has recently agreed to adopt a messaging protocol (RCS) that will allow Apple and Android users to exchange messages with better features such as emojis and high-quality images, end-to-end encryption is still not part of the base RCS protocol, and thus texts between Apple and Android users will still not be end-to-end encrypted.81Cooper Quintin, “What Apple’s Promise to Support RCS Means for Text Messaging,” Deeplinks (blog), January 31, 2024, https://www.eff.org/deeplinks/2024/01/what-apples-promise-support-rcs-means-text-messaging.

(This messaging debate also illustrates the power of defaults: there are other end-to-end encrypted messaging applications, such as Signal and WhatsApp, that allow Android and Apple users to exchange end-to-end encrypted messages. Still, SMS messaging remains a default channel on mobile phones for peer-to-peer communication and for activities like receiving two-factor authentication codes or updates from businesses. This creates continued risks to user security despite the availability of more secure alternatives. When NIST announced in 2016 its plans to deprecate SMS as an option for an additional factor in multifactor authentication, the outcry was so great82“Questions…and Buzz Surrounding Draft NIST Special Publication 800-63-3,” CYBERSECURITY INSIGHTS (blog), accessed June 6, 2024, https://www.nist.gov/blogs/cybersecurity-insights/questionsand-buzz-surrounding-draft-nist-special-publication-800-63-3. that the agency walked back the change.83Former_member182953, “Rollback! The United States NIST NO LONGER Recommends ‘Deprecating SMS for 2FA,’” Technology Blogs by SAP (blog), July 6, 2017, https://community.sap.com/t5/technology-blogs-by-sap/rollback-the-united-states-nist-no-longer-recommends-deprecating-sms-for/ba-p/13340011.)

Because of the risk of falling back to less secure implementations, policy mandates toward interoperability should explicitly consider how to handle questions of encryption. Some commentators worried that the Digital Markets Act (DMA) would degrade user security by requiring interoperation between end-to-end encrypted messaging services like WhatsApp and non-end-to-end encrypted services, thereby compromising the security of users of end-to-end encrypted applications.84Ross Schulman, “We Don’t Have to Sacrifice Encryption to Achieve Messaging Interoperability,” New America (blog), accessed June 22, 2022, http://newamerica.org/oti/blog/we-dont-have-to-sacrifice-encryption-to-achieve-messaging-interoperability/. To address this issue, the DMA explicitly required gatekeeper apps implementing required interoperability to preserve the same security guarantees—including end-to-end encryption—in the services with which they interoperate.85Matthew Hodgson, “Interoperability without Sacrificing Privacy: Matrix and the DMA,” Matrix (blog), March 25, 2022, https://matrix.org/blog/2022/03/25/interoperability-without-sacrificing-privacy-matrix-and-the-dma/.

Then there is the matter of how to agree on an encryption protocol. WhatsApp has requested that other would-be interoperators use the same Signal encryption protocol that it does, though it has said it will consider interoperating with users of other protocols if they can prove that their approach has equivalent privacy guarantees.86Brouwer, “Making Messaging Interoperability.” Yet just because two apps are both using the Signal protocol does not mean that they can necessarily talk to each other; for example, both Signal and WhatsApp use the Signal protocol, but cannot speak to each other because they do not “speak the same language” in terms of message formats.87Amandine Le Pape, “The Digital Markets Act Explained in 15 Questions,” Element (blog), April 8, 2022, https://element.io/blog/the-digital-markets-act-explained-in-15-questions/. Third parties that wish to interoperate with WhatsApp will need to either adopt its format wholesale, or implement a “bridge” that translates messages to and from its encryption algorithm and message format.88Le Pape, “The Digital Markets Act.” Some commentators have asked whether the DMA ought to have required the use of open standards to make it easier for entities to securely interoperate,89Schulman, “We Don’t Have to Sacrifice.” and there are open standards for many elements of end-to-end encrypted messaging.90Rescorla, “Architectural Options.”

To add to the complexity, other messaging apps including Signal and Threema have stated that they do not intend to interoperate with WhatsApp because such interoperation would require them to lower their security standards, particularly with respect to the privacy of metadata.91Manuel Vonau, “Signal and Threema Want Nothing to Do with WhatsApp,” Android Police, February 23, 2024, https://www.androidpolice.com/signal-threema-nothing-to-do-with-whatsapp-eu/. Here again, user consent is a good guiding principle. It may be appropriate for policy to require that technology makers of a certain size allow their users to interact with other, less-secure applications on an opt-in (and appropriately disclosed) basis, but users should always have a choice as to whether and how their data is shared with a third party, and policy should avoid forcing users to accept lesser security in the name of interoperability.

One area in which encryption interoperability is relatively well-developed is web browsing.92Vittorio Bertola, “Can Interoperable Apps Ever Be Secure?,” Interoperability News, May 27, 2021, https://interoperability.news/2021/05/can-interoperable-apps-ever-be-secure/. In a process called a TLS (formerly SSL) handshake, a browser and a server agree on a “cipher suite,” or set of encryption algorithms, to use. The client browser offers the cipher suites it will accept, the server chooses which is most secure (based on its own configuration), and the two establish a connection using that choice (or do not connect at all, if no option is acceptable to both).93RoraΖ, reply to “In a Browser Web Server Communication, Who Decides Which Encryption Protocol to Use,” Information Security Stack Exchange, November 13, 2014, https://security.stackexchange.com/a/72886. While messaging interoperability and other interoperable encryption contexts will face different technical challenges, the idea of allowing users to configure which encryption protocols they are comfortable using, and having their messaging clients find a common protocol, is a promising design pattern for interoperable encrypted technology.
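
The sketch below shows this negotiation from the client side using Python's standard-library ssl module; the hostname is a placeholder, and the cipher-string restriction applies to TLS 1.2 suites (TLS 1.3 suites are configured separately).

```python
# Client side of a TLS handshake: advertise acceptable cipher suites, let the
# server pick, and inspect what was negotiated. Standard library only; the
# hostname is a placeholder and any HTTPS server would behave similarly.
import socket
import ssl

HOST = "example.com"  # placeholder HTTPS host

context = ssl.create_default_context()
# Offer only forward-secret AES-GCM suites for TLS 1.2 connections
# (TLS 1.3 suites are negotiated from a separate, fixed list).
context.set_ciphers("ECDHE+AESGCM")

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # If client and server share no acceptable suite, the handshake fails
        # and no connection is established at all.
        name, version, bits = tls_sock.cipher()
        print(f"negotiated {name} over {version} ({bits}-bit secret)")
```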

As the messaging saga suggests, getting encryption and interoperability right is not easy, whether from a policy or a technical perspective. Policy should be an ally and an accelerator for this process, providing incentives and support for companies to adopt new and more secure standards, and carefully crafting interoperability mandates to advance the use of interoperable encryption schemes while avoiding forcing users to fall back to lower security implementations.

Identity

Identity is a core building block of systems security, enabling applications and systems to control authorized users’ permissions with respect to data and actions, and to deny access to unauthorized users. Identity ensures that only the intended recipient of a message can read it, or that only a system administrator can take high-risk actions such as decommissioning a production server. Because of its centrality to security, flaws in identity and access management services are a recurring theme in hacks, from the exploitation of a flaw in identity assertions to move through Microsoft platforms in the Sunburst (SolarWinds) incident94Trey Herr et al., Broken Trust: Lessons from Sunburst, Atlantic Council, March 29, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/broken-trust-lessons-from-sunburst/. to the theft of a Microsoft signing key that enabled threat actor Storm-0558 to access tens of thousands of US government emails last summer.95Cyber Safety Review Board, “Review of the Summer 2023 Microsoft Exchange Online Intrusion,” CISA, March 20, 2024, https://www.cisa.gov/sites/default/files/2024-04/CSRB_Review_of_the_Summer_2023_MEO_Intrusion_Final_508c.pdf.

A unique user identity in a digital system can, but does not need to, correspond 1:1 with a real-world identity; different technologies have different requirements with respect to identity proofing, or whether they must validate the real-world identity associated with a digital one.96Paul A. Grassi, Michael E. Garcia, and James L. Fenton, “NIST Special Publication 800-63-3 Digital Identity Guidelines,” National Institute of Standards and Technology, June 2017, https://pages.nist.gov/sp800-63-3.html. Digital identities are associated with methods of authentication, or the means by which a user proves that they are who they say they are—e.g., knowing a username and password or possessing a multifactor authentication token.97Grassi, “NIST Special Publication 800-63-3.” And information systems link users’ digital identities to authorizations, or their rights and permissions to access specific data and resources. Computer Security Resource Center, s.v. “authorization,” https://csrc.nist.gov/glossary/term/authorization. Each of these different functions can be managed by a digital system, or outsourced or federated to another system.

In general, large tech companies have their own integrated identity ecosystems that allow users to link their identity and activities, preferences, and data across different apps and services—for instance, Apple’s Apple ID, which links a user’s profile across devices and services.98Sascha Block, “Apple ID Mutates into Apple Account – A Step with Hidden Intentions?,” Rock the Prototype – Softwareentwicklung & Prototyping (blog), March 24, 2024, https://rock-the-prototype.com/en/software-development/apple-id-mutates-into-apple-account-a-step-with-hidden-intentions/. However, this kind of identity linkage across services is usually limited to within that vendor’s ecosystem. Especially in the consumer context, most companies offering technology will have their own digital identities for each user and their own means of authenticating them, such as a unique, user-created username and password.

Federated identity provides exceptions to this model. In federated identity systems, one system can authenticate a user and pass information about that authentication to another system in a standard format.99Cian Walker, “From Federation to Fabric: IAM’s Evolution,” Security Intelligence (blog), March 5, 2024, https://securityintelligence.com/posts/identity-and-access-management-evolution/. For example, Google’s “Sign in with Google” service uses the OAuth 2.0 standard to communicate information about Google-authenticated users to other services and to log them in there, removing the need for users to remember a separate site-specific password.100“Authentication,” Google for Developers, updated February 28, 2024, https://developers.google.com/identity/gsi/web/guides/overview. Microsoft allows customers to configure other identity providers for signing in to its systems through the standardized Security Assertion Markup Language (SAML).101msmimart, “Federation with a SAML/WS-Fed Identity Provider (IdP) for B2B – Microsoft Entra External ID,” June 5, 2024, https://learn.microsoft.com/en-us/entra/external-id/direct-federation.
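
A minimal sketch of the relying-party side of this pattern follows, assuming the google-auth Python package and a hypothetical client ID: the application verifies a Google-issued OpenID Connect ID token instead of maintaining its own password for the user.

```python
# Relying-party verification of a "Sign in with Google" ID token. Requires the
# google-auth package; the client ID and token are placeholders. The token's
# signature, expiry, issuer, and audience are all checked by the library.
from google.oauth2 import id_token
from google.auth.transport import requests

GOOGLE_CLIENT_ID = "my-app.apps.googleusercontent.com"  # hypothetical

def verify_google_login(token: str) -> dict:
    claims = id_token.verify_oauth2_token(
        token, requests.Request(), GOOGLE_CLIENT_ID
    )
    # "sub" is Google's stable identifier for the user; use it (not email)
    # as the key when linking this federated identity to a local account.
    return {"google_user_id": claims["sub"], "email": claims.get("email")}
```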

Standards-based identity and access management makes it easier for organizations to configure a single identity provider (IdP) and use it across their systems, or for an application or website to bootstrap from authentication offered by another provider. Identity standards also make life easier for users, for example by allowing them to use a single device or service for multifactor authentication. The FIDO Alliance develops open standards that other products can integrate with to support password-less multifactor authentication,102“How Passkeys Work,” FIDO Alliance (blog), accessed June 6, 2024, https://fidoalliance.org/what-is-fido/. and Yubikeys, hardware-based MFA tokens, integrate with multiple authentication standards to allow consumers to use a single hardware token across multiple services.103“Home,” Yubico, accessed June 6, 2024, https://www.yubico.com/.

Questions as to how to manage digital identities that are closely tied to real-world identities are a new frontier for technological innovation and the development of open standards. New legislation in the European Union (EU) will set uniform standards for digital identity—eIDAS2—laying a foundation for interoperable digital wallets for EU citizens.104Andrea Tinianow, “The EU Lays The Techno-Legal Tracks For Its Rising Digital Ecosystem,” Forbes, January 29, 2024, https://www.forbes.com/sites/andreatinianow/2024/01/29/the-eu-lays-the-techno-legal-tracks-for-its-rising-digital-ecosystem/. The US has lagged many of its peers in terms of adopting government systems of digital identity,105Alex Botting and Jeremy Grant, “(Digital) Identity Crisis: The US Needs a National Strategy for Digital Identity to Enhance Economic Competitiveness and Mitigate Cybersecurity Risks,” Wilson Center, October 24, 2023, https://www.wilsoncenter.org/article/digital-identity-crisis-us-needs-national-strategy-digital-identity-enhance-economic. while private companies like Google and Apple have begun wading into the gap as they seek to offer legally valid digital identification credentials through partnerships with states on digital driver’s licenses.106Tracey Follows, “Apple Vision Pro Signals Another Move Into Digital Identity for Apple,” Forbes, accessed June 15, 2023, https://www.forbes.com/sites/traceyfollows/2023/06/15/apple-vision-pro-signals-another-move-into-digital-identity-for-apple/.

Identity is a challenge when seeking to create interoperability between multiple non-standardized applications, such as in the messaging context.107Rescorla, “Architectural Options.” Messaging apps have adopted different approaches to giving users digital identity, with some using real phone numbers and others generating novel identifiers. When communicating across implementations, this creates challenges such as verifying that a particular user exists (and corresponds to the desired recipient) in another service. It also creates collisions: if user A wants to send a message to user B with phone number 123-456-7890, to which messaging app should that message go? And how should identity work between services that do and do not require the use of a real phone number, in order to respect the privacy of users who might not want these two pieces of information linked?
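
A toy sketch of the collision problem follows (all service names and numbers are hypothetical): when the same phone number identifies accounts on multiple interoperating services, delivery requires a discovery step plus an explicit expression of the sender's intent.

```python
# Toy recipient discovery when one phone number maps to accounts on multiple
# interoperating messaging services. Services and numbers are hypothetical.
DIRECTORY = {
    "+15551234567": {"service-a", "service-b"},  # registered on two services
    "+15559876543": {"service-a"},
}

def resolve_recipient(phone: str, preferred: str | None = None) -> str:
    services = DIRECTORY.get(phone, set())
    if not services:
        raise LookupError("recipient not found on any interoperating service")
    if len(services) == 1:
        return next(iter(services))
    if preferred in services:
        return preferred
    # Ambiguous: the sending client must ask the user where to deliver.
    raise ValueError(f"number registered on multiple services: {sorted(services)}")

print(resolve_recipient("+15559876543"))                         # service-a
print(resolve_recipient("+15551234567", preferred="service-b"))  # service-b
```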

These questions are deeply technical, and policymakers may not want to prescribe a specific standard or solution. However, they are important for policymakers to understand and examine when considering certain kinds of interoperability, as sometimes policy can help create solutions to problems of identity. In 1996, US lawmakers passed a law requiring carriers to let consumers port their phone numbers—i.e., keep their phone numbers when moving between carriers. Implementing that requirement involved creating the Number Portability Administration Center, which allows telecommunications providers to look up a phone number to determine the correct provider to complete the call.108“What is LNP?,” Number Portability Administration Center, accessed June 7, 2024, https://numberportability.com/about/about-lnp; Carl Oppedahl, “Companies That You Never Heard of That Make Telephone Calls Possible—Part 1,” Ant-Like Persistence (blog), July 4, 2017, https://blog.oppedahl.com/companies-never-heard-make-telephone-calls-possible-part-1/.

Adversarial implementations of interoperability frequently implicate questions of identity. Because the interoperated-with system does not necessarily intend or want the third party to interface with it, it may not offer a dedicated way for the third party to authenticate itself. As in the case of banks and fintechs, sometimes the entity that wants to interoperate must then turn to a shim solution, such as pretending to be the user (the “screen scraping” workaround). These shim solutions can create obvious security risks. At the same time, companies may also play up the idea that their identity infrastructure is being abused in order to cut off adversarial interoperators in ways disproportionate to the actual security risk posed to users. Apple cut off Beeper Mini’s ability to register its users with Apple’s messaging infrastructure, citing “significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.”109Mike Masnick, “Apple’s Nonsensical Attack On Beeper For Making Apple’s Own Users Safer,” Techdirt (blog), December 11, 2023, https://www.techdirt.com/2023/12/11/apples-nonsensical-attack-on-beeper-for-making-apples-own-users-safer/. Yet because Beeper Mini functionally extended Apple’s implicit phone number-based identity system to a broader set of users without impacting existing Apple users’ identities, commentators are skeptical that this extension actually created any new risks to iPhone users.110John Gruber, “Beeper? I Hardly Knew Her,” Daring Fireball (blog), December 10, 2023, https://daringfireball.net/2023/12/beeper_i_hardly_knew_her. Where companies argue that they cannot securely allow a third party to interact with their identity infrastructure for purposes of interoperation, policymakers should question their assertions, and consider whether it is possible to shift the onus back onto them to create a secure means to manage interoperable identity with trust, as with banks and APIs.

User Control

One through line in many of the examples above is that policymakers should view increased user control and agency as a guiding light for how to balance security and interoperability. However, the question of how to grant users meaningful and informed control over the digital technologies with which they interact is far from resolved outside of the interoperability context. Users often have few ways to vet the security practices and guarantees of services they already use. News of a new hack or data breach seems to be a near-daily occurrence. Software behemoths like Microsoft have fundamental flaws in their security architecture111Cyber Safety Review Board, “Review of the Summer 2023.” that go undetected by even the US government—so what chance does the average consumer stand? Plus, many companies are already in the business of selling their users’ data—whether directly or by allowing ad-targeting that has similar impacts—with users typically having little visibility into these practices. Interoperability policy should grapple with these questions while acknowledging that it may be difficult to resolve them in the interoperability context without advances in security and privacy writ large.

Takeaways for Policy

The personal computing revolution and the flourishing of the internet were both facilitated, in important ways, by open and interoperable technology. Where interoperability in the next generations of vital digital technology does not arise naturally, policy should consider the potential gains—in terms of benefits to users, innovation, and competition—that could arise from seeking to drive it forward.

Digital technology is both dynamic and accretive, composed of systems that evolve every day while also building upon each other and ossifying their dependencies until they are difficult to extricate. Because interoperability is both a benefit to and a byproduct of technological evolution, policymakers should steer makers of technology to maximize their products’ potential for secure interoperation in present and future systems. Crafting policy to do so across heterogenous technologies and business contexts will not be easy. The goal of this report is to illuminate considerations, including examples from the history of interoperability, to help policymakers take up these efforts.

Frontiers for Technology

The interoperability of messaging platforms and app stores has been an area of focus in European policy and the subject of bills in the US Congress as well. This focus is merited—these are two areas in which consumers can acutely see and feel the impacts of missing horizontal or vertical interoperability. The battles in these areas are not over yet, and policy will continue to play important roles in crafting recommendations and conducting oversight in the years and decades to come. Yet there are other areas of technology that are less consumer-facing but still relate deeply to questions of openness and security in computing, now and in the future.

Frontiers for Technology: Cloud Computing

Cloud computing providers deliver computing infrastructure and functionality to other entities over the internet, allowing businesses and other organizations to run their IT infrastructure or to provide web applications and other services to their customers without needing to build and operate their own data centers. Cloud computing is used to deliver a large and growing number of consumer-facing applications, and it is relied upon by companies in critical infrastructure sectors like power and transportation,112Tianjiu Zuo et al., “Critical Infrastructure and the Cloud: Policy for Emerging Risk,” DFRLab (blog), July 10, 2023, https://dfrlab.org/2023/07/10/critical-infrastructure-and-the-cloud-policy-for-emerging-risk/. and by the US government. As such, its security is a matter of increasing national security importance, though the policy conversation is just beginning to catch up.113Maia Hamin, Trey Herr, and Marc Rogers, “Cloud Un-Cover: CSRB Tells It Like It Is But What Comes Next Is on Us,” Lawfare (blog), May 28, 2024, https://www.lawfaremedia.org/article/cloud-un-cover-csrb-tells-it-like-it-is-but-what-comes-next-is-on-us.

In the cloud context, horizontal interoperability questions arise in at least two ways: the ability of cloud customers to use multicloud configurations in which workloads are split between more than one cloud provider, and the ability of customers to wholly migrate their data and workloads from one cloud provider to another. Multicloud is already a reality: more and more businesses report that they use more than one cloud service provider,114“57% of Financial Organizations Use Multiple Cloud Service Providers,” Security, June 6, 2023, https://www.securitymagazine.com/articles/99452-57-of-financial-organizations-use-multiple-cloud-service-providers. with companies benefitting from the ability to select the right cloud service provider to host a particular workload or application based on its features, security practices, or cost.115“What Is Multicloud?,” RedHat, October 10, 2022, https://www.redhat.com/en/topics/cloud-computing/what-is-multicloud. The fact that one organization is using multiple clouds, however, is not prima facie evidence that these cloud services are interoperating; in many cases, these workloads may not be integrated with each other, and organizations may simply be managing these deployments independently. Managing multiple cloud environments, such as maintaining multiple sets of identity and access management policies or monitoring multiple sets of logs, can also create additional security challenges.116Staff, “Multi-Cloud Security Challenges and Best Practices,” Security, June 6, 2024, https://www.techtarget.com/searchsecurity/tip/Multi-cloud-security-challenges-and-best-practices.

These challenges are linked to questions of vertical interoperability too, as infrastructure-as-a-service (IaaS) cloud providers tend to offer a bundled suite of software and tools that work best (or only) within their cloud environment, such as built-in security monitoring tools.117“Introduction to Multi-Cloud Security,” CrowdStrike, June 6, 2024, https://www.crowdstrike.com/cybersecurity-101/cloud-security/multi-cloud-security/. If customers want to migrate away from a cloud provider or begin working with a new provider, they will likely need to set up a compatible identity and access management (IAM) system, update their approach to collecting metrics to track performance and cost, and adapt their security workflows. Standards can help: existing standards for identity and access management such as OAuth 2.0 and SAML 2.0 do support some degree of interoperability across cloud environments.118Eyal Estrin, “Identity and Access Management in Multi-Cloud Environments,” Cloud Native Daily (blog), June 19, 2023, https://medium.com/cloud-native-daily/identity-and-access-management-in-multi-cloud-environments-e2f8a4b82490; Siddharth Bhai, “Identity Management in a Multi-Cloud Environment,” Security, accessed June 7, 2024, https://www.securitymagazine.com/articles/98103-identity-management-in-a-multi-cloud-environment.

Policymakers in the United States and the European Union have sporadically examined the issue of interoperability in the cloud context. The European Union’s Data Act focuses on cloud interoperability, primarily in the context of allowing customers to migrate from one cloud service provider to another; once it comes into full force in 2025, it will require cloud providers to allow their customers to easily transfer data to another cloud provider, requiring the removal of prohibitive contractual barriers and fees.119European Commission, “Data Act Explained,” last updated 22 May 2024, https://digital-strategy.ec.europa.eu/en/factpages/data-act-explained.

US policymakers have tended to focus more on cloud interoperability in the government context. In 2023, officials at the Department of Homeland Security (DHS) stated that poor cloud interoperability was negatively impacting the ability of different parts of DHS to collaborate with each other.120Madison Alder, “Industry Help Needed to Fix ‘Sorry State’ of Cloud Tool Interoperability, DHS Official Says,” FedScoop (blog), November 1, 2023, https://fedscoop.com/industry-help-needed-to-fix-cloud-tool-interoperability/. Also in 2023, US policymakers introduced a bill that sought to encourage the executive branch to examine how the federal government could implement multicloud architecture “to allow for portability and interoperability across multiple cloud computing software vendors.”121Office of Congressman Nick Langworthy, “Timmons, Eshoo, Langworthy, Trone Introduce Multi-Cloud Innovation and Advancement Act,” August 1, 2023, http://langworthy.house.gov/media/press-releases/timmons-eshoo-langworthy-trone-introduce-multi-cloud-innovation-and. The scathing Cyber Safety Review Board report on the cascading series of preventable security flaws in Microsoft’s cloud has triggered a discussion on what it would take to reduce the US government’s dependence on Microsoft’s cloud-hosted software applications.122Eric Geller, “The US Government Has a Microsoft Problem,” Wired, accessed April 15, 2024, https://www.wired.com/story/the-us-government-has-a-microsoft-problem/. And a recently introduced bill would require US federal agencies to procure interoperable and open standards-based software in order to reduce the government’s vulnerability to vendor lock-in.123Office of Senator Ron Wyden, “Wyden Releases Draft Legislation to End Federal Dependence on Insecure, Proprietary Software In Response to Repeated Damaging Breaches of Government Systems,” April 23, 2024, https://www.wyden.senate.gov/news/press-releases/wyden-releases-draft-legislation-to-end-federal-dependence-on-insecure-proprietary-software-in-response-to-repeated-damaging-breaches-of-government-systems.

The technical conversation around cloud interoperability is not yet advanced enough for policy to begin immediately laying out aggressive requirements for standardization that would facilitate interoperability or uniform security requirements between infrastructure-as-a-service providers. But without more active oversight and urging, it may never get there. Policymakers should task and equip federal agencies such as the Cybersecurity and Infrastructure Security Agency and critical infrastructure sector risk management agencies to begin working with companies to track pain points in multicloud workflows, particularly as they relate to security. This could be a standalone process or part of a larger survey on cloud use and challenges within a particular critical infrastructure sector, such as the excellent report by the US Department of the Treasury on cloud use within the financial sector.124See “Financial Services Sector’s Adoption of Cloud Services,” US Department of Treasury, February 8, 2023, https://home.treasury.gov/system/files/136/Treasury-Cloud-Report.pdf. This information could then feed into a longer-term determination of which components of the cloud software stack could and should be standards-based or interoperable across cloud service providers, standardization that the US government could support through its own standards organizations such as the National Institute of Standards and Technology or through its own procurement power. This process could move in parallel with other much-needed initiatives seeking to provide transparency about the security architecture, design, and vulnerabilities of cloud infrastructure.125Tianjiu Zuo, “Critical Infrastructure.”

Frontiers for Technology: Artificial Intelligence

Artificial intelligence (AI) is an umbrella term for computing systems that are designed and built to perform tasks requiring “intelligence,” such as making predictions or judgements, using language, or playing games. The latest instantiations of AI systems to attract public interest and attention are large generative AI systems such as large language models—neural network-based systems that learn to generate novel text and other media from vast amounts of unstructured text and image data. Because generative AI is relatively early in the lifecycle of development and integration into enterprise and consumer products, it affords an opportunity for policymakers to consider how to advance interoperability in a still-developing ecosystem.

Large-scale AI model training and inference require much of their own computing stack. AI applications are built using foundational AI models. These models—a combination of architecture and learned “weights”—may be hosted by a company and offered through an API, directly loaded onto a user’s device, or run locally by the user. AI models are trained (i.e., their weights are learned) using data in a variety of formats and from a variety of sources. The models and their training are implemented using AI code frameworks like PyTorch or TensorFlow, and training and inference are typically run on specialized high-performance computing hardware such as graphics processing units (GPUs) or tensor processing units (TPUs).
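
As a small illustration of how this stack fits together, the sketch below (assuming the PyTorch package is installed) runs the same matrix multiplication on an NVIDIA GPU via CUDA if one is available, or on the CPU otherwise; higher-level framework code is partially portable in this way, even though the underlying kernels and tooling are often tied to a particular vendor's stack.

```python
# The same high-level framework code targets whichever accelerator backend is
# available: an NVIDIA GPU via CUDA, or the CPU as a fallback. Requires the
# PyTorch package.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = torch.randn(1024, 1024, device=device)
inputs = torch.randn(1024, 1024, device=device)
outputs = inputs @ weights  # dispatched to a CUDA kernel or a CPU kernel
print(f"ran matmul on {device}; result shape {tuple(outputs.shape)}")
```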

Vertical interoperability questions in artificial intelligence have arisen acutely when it comes to the specialized, high-performance chips that facilitate training and inference for generative AI models. In June of this year, NVIDIA, the leading provider of these chips, surpassed Apple to become the United States’ second-most valuable company by market capitalization126Suzanne O’Halloran, “Nvidia Topples Apple with $3 Trillion Market Cap; 10-for-1 Stock Split Ahead,” Fox Business, June 6, 2024, https://www.foxbusiness.com/markets/nvidia-apple-3-trillion-market-cap-10-for-1-stock-split-ahead.—likely driven by investor expectations, as its chips have become practically synonymous with the growth of compute-intensive AI models and applications. Other tech giants are cautiously edging into the space of manufacturing AI chips, but executives responsible for these initiatives have stressed that entering the market is challenging because changing the chips a company uses requires rewriting its software code, which can be difficult and time-consuming.127Cade Metz, Karen Weise, and Mike Isaac, “Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table,” New York Times, January 29, 2024, https://www.nytimes.com/2024/01/29/technology/ai-chips-nvidia-amazon-google-microsoft-meta.html. NVIDIA’s proprietary CUDA programming platform bridges between high-level, user-written software and NVIDIA’s high-performance chips; vendor-neutral alternatives like OpenCL allow developers to write software that can be portable across NVIDIA chips and those created by other chipmakers, but some reports indicate that OpenCL underperforms CUDA on NVIDIA chips.1281kg, “CUDA vs OpenCL vs Metal: The Battle for GPU Acceleration Supremacy,” Medium (blog), April 5, 2024, https://medium.com/@1kg/cuda-vs-opencl-vs-metal-the-battle-for-gpu-acceleration-supremacy-b6bc99fbeef1; 1kg, “Nvidia’s CUDA Monopoly,” Medium (blog), August 7, 2023, https://medium.com/@1kg/nvidias-cuda-monopoly-6446f4ef7375. NVIDIA also has licensing policies precluding reverse-engineering to create “translation layers” that allow CUDA software to be run on non-NVIDIA chips,129AleksandarK, “NVIDIA Cracks Down on CUDA Translation Layers, Changes Licensing Terms,” TechPowerUp, March 6, 2024, https://www.techpowerup.com/319984/nvidia-cracks-down-on-cuda-translation-layers-changes-licensing-terms. creating another potential barrier to attempts to decouple existing CUDA code from NVIDIA chips. NVIDIA’s role within the AI ecosystem has drawn the interest of US antitrust authorities, with a recent decision that the Department of Justice will lead an investigation into the company’s business practices.130David McCabe, “U.S. Clears Way for Antitrust Inquiries of Nvidia, Microsoft and OpenAI,” New York Times, June 5, 2024, https://www.nytimes.com/2024/06/05/technology/nvidia-microsoft-openai-antitrust-doj-ftc.html.

Data is another key ingredient in building generative AI models, and data access concerns are very likely to arise in the AI competition context. These questions will implicate similar issues of security and user consent as data sharing in other contexts; the use of privacy-enhancing technologies like federated learning might provide one way for policymakers to broaden data access in the name of competition without compromising user privacy or security.131Katharina Koerner and Brandon LaLonde, “Cheering Emerging PETs: Global Privacy Tech Support on the Rise,” January 24, 2023, International Association of Privacy Professionals, https://iapp.org/news/a/cheering-emerging-pets-global-privacy-tech-support-on-the-rise.
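
To give a flavor of how federated learning can broaden the usefulness of data without pooling it, here is a toy federated-averaging sketch using NumPy (synthetic data, a linear model, and three simulated data holders): each holder computes a local update and shares only model weights, never raw records.

```python
# Toy federated averaging with NumPy: three simulated data holders each take a
# local gradient step on their own synthetic data, and a coordinator averages
# the resulting model weights. Raw data never leaves the holders.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the relationship hidden in every holder's data

def local_update(w, n=100, lr=0.1):
    X = rng.normal(size=(n, 2))                    # this holder's private data
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    grad = 2 * X.T @ (X @ w - y) / n               # gradient of mean squared error
    return w - lr * grad                           # only weights are shared

w = np.zeros(2)
for _ in range(50):                                # 50 rounds of federation
    client_models = [local_update(w) for _ in range(3)]
    w = np.mean(client_models, axis=0)             # coordinator averages updates

print(np.round(w, 2))  # approaches [ 2. -1.] without pooling any raw data
```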

Beyond data and compute, there are other potential frontiers, including the degree to which different companies’ AI models can be interchanged once embedded into a system or application and the interoperability of different methods for assessing, aligning, and securing AI models. The AI land grab will create strong incentives for companies to try to control multiple integrated elements up and down the AI stack. Antitrust inquiries in the United States will provide an opportunity for regulators to examine whether these practices cross the line of anti-competitiveness—if so, they may provide an opportunity to apply some of the interoperability lessons from the past by considering remedies that might include opening up technologies for wider interoperation and innovation.

Frontiers for Policy

To make a long story short: interoperability has inconsistently been taken up as a first-order policy priority, but increased interoperability and openness of technology can support a tremendous flourishing of follow-on innovation. It is not wholly a myth that interoperability can have negative consequences for security, but these consequences are rarely unavoidable—instead, they can be managed through fundamental principles of information security such as trust, identity, and encryption. Empowering users with more choice is a better direction for policy than letting a platform decide that it should make the choices for them, but this must be done in a risk-aware way, and figuring out how to meaningfully inform users so they can make sound choices with respect to their security is a work in progress.

With these principles in mind, what options do US policymakers have to try to advance interoperability in digital technologies? For one, an agency like the Federal Trade Commission (FTC) can use its powers to hold companies accountable for interoperability practices that harm consumer security. The FTC has already indicated interest in interoperability related to its “unfair methods of competition” enforcement authorities,132Staff in the Office of Technology and the Bureau of Competition, “Interoperability, Privacy, & Security.” and has also used its authority to police “unfair acts and practices” in commerce to bring cases against companies for negligent cybersecurity practices.133Isabella Wright and Maia Hamin, “‘Reasonable’ Cybersecurity in Forty-Seven Cases: The Federal Trade Commission’s Enforcement Actions Against Unfair and Deceptive Cyber Practices.” DFRLab (blog), June 12, 2024. https://dfrlab.org/2024/06/12/forty-seven-cases-ftc-cyber/. It could potentially use this latter authority to bring cases against companies for practices that harm user security through insecure interoperability, such as falling back to weak authentication protocols or sending information in plaintext rather than adopting more secure alternatives. The goal of these enforcement efforts should be to incentivize dominant companies to offer secure means of interoperation in ways that benefit their users, not to crack down on companies implementing adversarial interoperability.

In addition to using existing authorities to incentivize secure interoperability, Congressional legislation authorizing the FTC to bring cases against companies when their lack of interoperability creates substantial competitive harms or harms to users, including through insecurity, would be a useful expansion of the agency’s powers. Empowering the FTC in this manner would avoid the need for Congress to recognize and draft a new bill each time a lack of interoperability emerges in a new technology area, allowing government to intervene earlier in the development lifecycle of technologies—for example, cloud computing or artificial intelligence—to nudge them towards more secure and open development.

Another, parallel potential direction of effort would involve using the power of the federal purse to create incentives for technology makers to develop more secure and interoperable technology. Federal procurement rules should be revised to prohibit insecure interoperability practices, such as falling back to transmitting information in plaintext, and to give priority to technologies that enable secure interoperation. To support these efforts, government agencies like the National Institute of Standards and Technology or the Cybersecurity and Infrastructure Security Agency should develop or update cybersecurity frameworks to address secure design patterns for interoperability. Procurement rules could also favor technologies based on open standards, similar to a recent bill proposed by Senator Ron Wyden134Office of Senator Ron Wyden, “Wyden Releases Draft Legislation.” and procurement practices in the EU.135European Commission, New European Interoperability Framework – Promoting Seamless Services and Data Flows for European Public Administrations, 2017, https://data.europa.eu/doi/10.2799/78681. A federal preference for standards-based technology would help incentivize technology makers to invest in developing and implementing open standards and would make it easier for the US government to switch software providers if it discovered, say, a deeply flawed security culture at one of its largest IT vendors.

Another way that policymakers can continue to support interoperability is by supporting the development of open standards, including at international standards bodies. In the past decade, the People’s Republic of China (PRC) has embarked on an explicit policy project to encourage Chinese companies to participate in international standards processes, hoping to shore up the country’s technological leadership and transition from being a “standards taker” to a “standards maker.”136Daniel R. Russel and Blake H. Berger, Stacking the Deck: China’s Influence in International Standards-Setting, Asia Society Policy Institute, https://asiasociety.org/sites/default/files/2021-11/ASPI_StacktheDeckreport_final.pdf. This has created concern among US policymakers.137Office of Senator Catherine Cortez Masto, “Cortez Masto, Portman Introduce Legislation to Study Chinese Government’s Influence on Technology Standards Setting,” November 18, 2020, https://www.cortezmasto.senate.gov/news/press-releases/cortez-masto-portman-introduce-legislation-to-study-chinese-governments-influence-on-technology-standards-setting/. The answer is not for the United States to pull back from participation in such forums but instead to redouble American (and allied) leadership in international standards-setting, and for policymakers to take care to avoid policy responses that could perversely harm US companies’ ability to participate in international standards development processes in which China may also participate.138Nigel Cory, “America’s National Security Concerns Over China Shouldn’t Imperil Its Leadership in Technical Standards Development,” Information Technology & Innovation Foundation, January 20, 2023, https://itif.org/publications/2023/01/20/americas-national-security-concerns-over-china-shouldnt-imperil-its-leadership-in-technical-standards-development/.

Another (perennial) recommendation is privacy legislation. Comprehensive data privacy legislation could enshrine the idea that users have certain rights with respect to their data, including the right to port that data and to grant third parties access to it. It could go a step further and suggest some guardrails for how companies can offer these rights to users, such as requiring that platforms (perhaps above a certain size) provide a secure API that gives users ways to transfer and grant access to their data. Requiring that companies that receive and process consumer data do so in ways that are minimally invasive and that respect the purposes for which that data is shared would help govern the behavior both of core platforms and of third parties that may receive user data, including through interoperation. Broadly, data privacy and security legislators and regulators should lean into the idea that interoperability is a core requirement for affording users of digital platforms true freedom and control with respect to their data, and should explicitly consider interoperability when they take up privacy issues.

To address interoperability in under-addressed technology areas like AI or cloud computing, US policymakers should begin with fact-finding, such as by directing the creation of public reports or requests for information from creators and users of these technologies. And, finally, the US government could reinvest in trying to build its own secure standards and infrastructure for privacy-preserving digital identity, rather than relying on private companies to provide significant parts of that infrastructure for it.

Conclusion

The complexity and heterogeneity of digital technologies make formulating policy to govern them a challenge, and seeking to regulate how these technologies interact with each other through interoperability is no easier feat. Add in the question of whether new interoperability requirements will trade off against technical questions of system security, and the problem can seem so overwhelming that policymakers may be tempted to fall back and hope that the dynamics of competition sort it out. But this would be a mistake. Past examples from the UNIX operating system to the open source software ecosystem139Manuel Hoffmann, Frank Nagle, and Yanuo Zhou, “The Value of Open Source Software,” Harvard Business School Strategy Unit Working Paper No. 24-038, January 1, 2024, https://doi.org/10.2139/ssrn.4693148. suggest that openness can have many compounding economic and productive returns for follow-on innovation. There is abundant evidence that interoperability in digital systems does not always arise naturally—what is less knowable is the opportunity cost of these closed systems, or the scale of the flourishing of innovation that never occurred because the foundations which could have supported it were closed to other parties. The potential benefits of interoperability to consumers and to competition both justify and necessitate the hard work of figuring out when interoperability is needed, how it can be achieved, and how its tradeoffs can be balanced.

By understanding where interoperability policy mandates may fall short with respect to the building blocks of fundamental security, policymakers can be better positioned to craft legislation and regulations that advance more interoperable and more secure implementations. By better understanding the logic behind some companies’ use of security grounds to push back against interoperability mandates, policymakers can be better equipped to separate fact from fiction, and to push forward interoperability requirements in a secure manner rather than letting them fall by the wayside.

Some of the challenges in interoperability policy are dark mirrors for broader challenges in digital policy: challenges in realizing truly informed consent and control by users of digital technology, or in devising secure yet private ways to bind individuals to unique digital identities. Interoperability alone cannot solve these challenges, but it should be yet another of the many reasons for US policymakers to redouble their efforts to lay the foundations for fundamental digital governance. Users of technology deserve a more open status quo, and one that allows them to benefit from the full range of possibilities created when digital technologies are offered as fundamental infrastructure rather than walled gardens. Where the market falls short and where companies’ incentives push in the other direction, it falls to policymakers to help deliver that future to them.


Acknowledgements

The authors of this report offer their sincere thanks to Mallory Knodel, Stacey Higginbotham, Tim Pepper, Stew Scott, and other unnamed reviewers for their feedback on various versions of this draft—their comments improved this paper immensely. We thank Marc Rogers for his comments too, and in particular for follow-up email exchanges on how to convey particular ideas around technological evolution and interoperability. We also thank the many experts who participated in a lively roundtable on the subject of security and interoperability that provided myriad examples and observations now found within this paper. We appreciate your willingness to work with us. 🙂

Authors

Maia Hamin is an associate director with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. She works on the intersection of cybersecurity and technology policy, including projects on the cybersecurity implications of artificial intelligence, open-source software, cloud computing, and regulatory systems like software liability.

Alphaeus Hanson is an assistant director with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. Prior to joining the Council, Hanson was the inaugural security fellow at Krebs Stamos Group (KSG). As an analyst at KSG, he managed its policy portfolio and drafted feedback on National Institute of Standards and Technology (NIST) space cybersecurity guidance.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.