The 5×5—The XZ Backdoor: Trust and Open Source Software
Open source software security experts share their insights into the XZ backdoor and what it means for the security of the open source ecosystem.
Last month’s discovery of a backdoor in XZ Utils, an open source data compression utility widely used in Linux operating systems, has reignited discussions about the security of open source software (OSS), with some analysts drawing comparisons to well-known historical OSS incidents.
The XZ saga began in 2022, when a user going by JiaT75 started contributing to the XZ Utils community. Over the following years, JiaT75 and a group of other accounts repeatedly questioned the original maintainer’s ability to keep up with the project and pressured them into bringing JiaT75 on board as an additional maintainer. Once granted maintainer access, JiaT75 replaced the original maintainer’s contact information with their own on OSS-Fuzz, a service that continuously tests open source projects for vulnerabilities. After further preparation, they released XZ Utils versions 5.6.0 and 5.6.1, which implemented the backdoor in the code. The backdoor had the potential to compromise Linux operating systems at scale, but thanks to the keen eye and curiosity of Microsoft engineer Andres Freund, it was discovered before causing widespread harm.
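At the time of disclosure, the immediate question for administrators was whether an affected release was present on a given system. As a minimal illustrative sketch, and not a substitute for distribution advisories (the payload lived inside liblzma, so a version string alone cannot prove a system is clean), a check of the installed xz version against the affected releases might look like this in Python:

```python
import re
import subprocess

# Releases of XZ Utils known to contain the backdoor (CVE-2024-3094).
AFFECTED_VERSIONS = {"5.6.0", "5.6.1"}

def installed_xz_version():
    """Return the installed xz version string, or None if xz is absent."""
    try:
        out = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True, check=True
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    # The first output line typically looks like: "xz (XZ Utils) 5.6.1"
    match = re.search(r"\d+\.\d+\.\d+", out)
    return match.group(0) if match else None

if __name__ == "__main__":
    version = installed_xz_version()
    if version in AFFECTED_VERSIONS:
        print(f"xz {version} is an affected release; follow your distribution's advisory.")
    else:
        print(f"xz version {version!r} is not one of the affected releases.")
```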
For this edition of the 5×5, we brought together five open source software security experts to discuss the XZ backdoor’s implications for the OSS community and policymakers.
1. What, if anything, differentiates the XZ episode from other well-known open source compromises, and how should policymakers update their mental model of open source security accordingly?
Tobie Langel (he/him/his), Principal and Managing Partner, UnlockOpen; Board member, OpenJS Foundation; Vice Chair, Cross Project Council, OpenJS Foundation
“The XZ Utils backdoor represents a turning point for open source security and is already sending shockwaves through the industry and beyond, much as Heartbleed did a decade ago. Until now, open source vulnerabilities were the result of accidental bugs. For the first time, there was a deliberate and nearly successful attempt to introduce malicious code into a widely used open source library. The threat actor leveraged open source’s Achilles heel: the lack of support for the maintenance of critical projects.
For policymakers, this should be a wake-up call: open source sustainability issues directly impact software supply chain security. We can no longer afford to ignore them. Maintenance needs to be professionalized and properly supported.”
Aeva Black (they/she), Section Chief, Open Source Security, Cybersecurity and Infrastructure Security Agency (CISA)
“Previously well-known open source vulnerabilities, such as Log4Shell and Heartbleed, while severe in their impact, were not malicious, and they weren’t the result of an individual being targeted by a bunch of possibly fake accounts.
So, I think, for a lot of folks in the open source community, this situation feels more personal, like something that could happen to any of them. It was a targeted, planned, malicious attempt to abuse the trust placed in open source software maintainers. For an online community that runs on trust, this incident hit folks pretty hard. It’s caused many of my friends to feel a sense of a loss of safety.
CISA’s open source security efforts account for both types of threats — vulnerabilities latent in open source packages and malicious compromises of upstream packages. Our Open Source Software Security Roadmap, published last year, lays out how we’re working to support the security of the open source ecosystem.”
Stewart Scott (he/him/his), Associate Director, Cyber Statecraft Initiative, Atlantic Council
“While the XZ compromise differs from some infamous open source incidents based on vulnerabilities, like Log4j and Heartbleed, abusing trust is by no means new to open source or to software in general. We’ve seen similar OSS threats before in various forms. These include disaffected maintainers removing their projects or building in forms of protest-ware, malicious actors adding well-known companies and contributors to malicious packages as maintainers, and, most similarly, maintainers adding attackers as legitimate project maintainers because the attackers simply asked the original, overworked owners. After XZ, we’ve also seen similar methods discovered in other environments, which I would bet is more a function of XZ highlighting that threat vector to analysts than of it inspiring other bad actors to make similar attempts.
As for the differences, the XZ attempt seems more sophisticated than the above examples, but not fundamentally different. For policymakers, this should serve as a reminder that open source software is ubiquitous and often under-resourced, but not inherently a source of insecurity.”
Christopher Robinson (he/him/his), Chairperson, OpenSSF Technical Advisory Council; Director of Security Communications, Intel
“The attack itself is not novel; it strings together a series of social engineering and cyberbullying tactics and leverages the embedding of malicious files during the CI/CD stage of publication. What is unique is how well the attacker studied and exploited common community behaviors and norms to penetrate the project and gain the maintainership that allowed the later actions to proceed in secret.”
Fiona Krakenbürger (she/her), Co-Founder, Sovereign Tech Fund
“The key difference is the dedication and lengths the attacker went to—they had to hide the attack in the build system, disable the checks that were in place, and contribute for a long time to gain trust. That said, the underlying structural challenges it laid bare are anything but new. XZ Utils is one of many open source components that are heavily utilized and critical for a functioning digital ecosystem, yet it was maintained by just one person and did not receive the support it needed. If anything, the XZ attack is another reminder that we should acknowledge and update our mental model based on what we already know: open source is critical infrastructure, and thus it needs adequate support.”
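One concrete detail behind the build-system hiding spot Krakenbürger describes: part of the malicious build code shipped in the project’s release tarballs but was absent from its public git repository. A minimal, hypothetical Python sketch of a check along those lines appears below; the paths are placeholders, and since legitimate release tarballs add generated files (such as ./configure), the output still needs human review:

```python
import subprocess
import tarfile

def tarball_only_files(tarball: str, repo: str) -> set:
    """Return file paths present in a release tarball but not tracked in
    the project's git repository (compared relative to each root)."""
    with tarfile.open(tarball) as tar:
        # Strip the leading "project-x.y.z/" directory from each member.
        tar_paths = {
            member.name.split("/", 1)[1]
            for member in tar.getmembers()
            if member.isfile() and "/" in member.name
        }
    tracked = set(
        subprocess.run(
            ["git", "-C", repo, "ls-files"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
    )
    return tar_paths - tracked

if __name__ == "__main__":
    # Hypothetical local paths for illustration only.
    for path in sorted(tarball_only_files("xz-5.6.1.tar.gz", "xz-git-checkout")):
        print(path)
```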
2. What kind of impacts might this compromise have enabled, and how widespread could the impacts have been if it had not been discovered?
Tobie Langel
“To understand the impact of open source vulnerabilities, it is important to consider both the ubiquity of open source software (it is present in over 95 percent of codebases, where it accounts for more than 75 percent of the code) and how the tech industry has gravitated toward a common set of low-level open source components that are present in almost all applications. Compromising one of them opens backdoors in hundreds of millions of devices.”
Aeva Black
“If unnoticed, this backdoor would have been included in many major Linux distributions, and as updates rolled out over time, it could have created a hidden “skeleton key” in many network-connected systems around the world—particularly across most cloud-based services today. We are fortunate that the open nature of the wider open source ecosystem allowed a developer to spot this supply chain compromise before it could cause much harm.”
Stewart Scott
“From what I’ve read, it seems like the backdoor would have allowed remote code execution for anyone with the private key required to use it. That’s bad. And it seems like it would have been widespread once pulled into mainstream Linux installations. Relatedly, policy has often considered the question of how it can identify niche OSS dependencies, such as XZ, that are widespread but maintained by a very small team. It’s interesting that attackers seem able to identify and target some of these nodes, which adds urgency to the task of identifying them and figuring out how to support and use them responsibly.”
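The dependency-identification problem Scott raises is one the ecosystem has begun to quantify, for example through the OpenSSF’s Criticality Score project. The toy Python sketch below illustrates the underlying intuition, that projects with many dependents and few maintainers deserve attention first; the inventory data and the single-ratio heuristic are purely hypothetical, and real scoring systems weigh many more signals:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    dependents: int   # downstream packages that depend on this project
    maintainers: int  # active maintainers

def attention_score(p: Project) -> float:
    """Toy heuristic: widely depended-upon, thinly maintained projects score highest."""
    return p.dependents / max(p.maintainers, 1)

# Hypothetical inventory for illustration only.
inventory = [
    Project("small-compression-lib", dependents=40_000, maintainers=1),
    Project("popular-web-framework", dependents=120_000, maintainers=60),
]

for p in sorted(inventory, key=attention_score, reverse=True):
    print(f"{p.name}: {attention_score(p):,.0f}")
```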
Christopher Robinson
“If the backdoor had been merged into the latest versions of the community Linux distributions, it would have seen broad uptake within consumers of those operating systems. Given more time, that malicious package, if undiscovered, would have been integrated into enterprise Linux distributions, exponentially expanding the scale of where it would have been deployed. The malicious package could have allowed remote access and code execution for the threat actor.”
Fiona Krakenbürger
“The attack is still being analyzed; however, what we do know is that it targeted widely used and critical Linux distributions and would have given the attacker the ability to execute code on compromised machines. Like other well-known security incidents before it, this speaks volumes about the criticality of open source infrastructure and how important it is to take responsibility and collective action in securing it and reducing the likelihood of similar attacks. We are lucky that graver consequences were averted, but we must expect that similar attacks will happen, or have already happened, if the pressure of maintaining highly critical components continues to rest on so few shoulders.”
3. How are insider threats for open source software different from and/or similar to those faced by proprietary software? How worried should policymakers be that similar compromises have succeeded in other codebases, both open source and proprietary?
Tobie Langel
“Insider threats are a problem everywhere. Open source has been largely spared so far, but it was only a matter of time before that changed. Ultimately, however, this is a problem broader than insider threats: a hostile account takeover or a vulnerability in software used to build or distribute open source code would have the same consequences.
Up until now, the open source community wasn’t thought of as a potential cyber attack target. When you don’t have access to valuable information, why would you become a target? Now we know: if you are a stepping stone to valuable information somewhere else, you are a potential target too. And this is exactly what those ubiquitous low-level open source components have become: stepping stones to the internal networks of corporations and governments all over the world.”
Aeva Black
“I’ve heard many describe this as an insider threat, but I don’t think that’s quite it. Traditional guidance for insider threat prevention differs in two ways: first, it focuses on behavioral changes of an insider as they become a threat, and second, it focuses on preventing harm to the organization that all parties (both the threat actor and the observer) are members of. The ‘Jia Tan’ threat actor was originally outside of the project and tried to hide their intent in order to compromise other organizations. So, this is more accurately described as a social engineering attack.
When I look at the early activity in this situation and think about how to help open source communities protect themselves going forward, techniques for resisting social engineering are the most likely to be successful. If a stranger online seems too eager to get commit access to your project, maybe they have another motive. A healthy dose of caution – particularly for maintainers of low-level system libraries in widespread use – is needed, now more than ever before.”
Stewart Scott
“I’m not sure they are fundamentally different. OSS projects might be closer to the outside world than proprietary code, but the threat of someone with access to any codebase adding in malicious components is still there. The manner in which they achieve that access—e.g. compromising credentials or being given access willingly—points toward different failures in best practice, but the underlying risk is not much changed. One difference between open source and proprietary software that does stick out for insider threats, however, is the well-established fact that OSS maintainers are overworked and under-resourced. This is not the source of insider threats (or social engineering tactics, to Aeva’s and Chris’s points about definitions), but it does augment their risk—OSS maintainers have less time to vet collaborators they bring on to a project as well as the code they add, and they have strong incentives to bring on help. After XZ, several foundations released guidance for maintainers to help them know how to spot the tactics of similar efforts. This is useful, but an incomplete solution—ultimately, so long as OSS maintainers are under-supported and overburdened, malicious actors will have leverage to offer support in bad faith. Policymakers should think of this as yet another reason to support OSS directly—to reduce the strain on those who maintain critical digital infrastructure.
And sure, everyone should worry that compromises have already succeeded and gone unnoticed, but this is not unique to insider threats or OSS. Ken Thompson’s 1984 Turing Award lecture, Reflections on Trusting Trust, highlights that you can’t fully trust software that you didn’t build yourself from scratch, and hardly any software is made that way today—and for good reason, as building it in such a manner would be incredibly inefficient. More important are policies that set clear thresholds for trust and verify software against them, from design choices and secure development practices to code signing infrastructure.”
Christopher Robinson
“While this is better classified as a social engineering attack, once the attacker became the project maintainer, they became the ultimate insider and controller of the project. Open source projects are just as susceptible to these insider threats as enterprises, corporations, and government agencies. The difference is that OSS projects do not have access to the typical controls an enterprise might have, such as background and credit checks for employees, or the behavioral and network monitoring that an enterprise may use. Those types of controls are neither economically nor socially acceptable within the free and open source developer community.”
Fiona Krakenbürger
“While there are certainly differences in how security risks arise and are handled in proprietary and open software, we should be wary of creating a false dichotomy here. Questions like these risk obscuring the fact that open source tools and technologies like XZ are essential for a functioning software ecosystem. Developers rely heavily on these open resources for developing, maintaining, testing, and improving software; there is no proprietary alternative for these millions of software packages, so that part of the equation simply does not exist. Open source infrastructure will inevitably continue to be part of our digital surroundings; we therefore need to adapt the way we maintain its safety and sustainability.”
4. Usually, the open source community’s discovery of a vulnerability or compromise is considered a success, framed in some variation of ‘this is the open source model working.’ Is that the full story for XZ and in general, or is there room to improve this process in some circumstances?
Tobie Langel
“Clearly, open source saved the day here. Had XZ Utils been proprietary, the engineer whose Spidey sense was tickled would never have been able to carry out his investigation, and the backdoor he discovered would have been widely deployed.
That doesn’t mean that there isn’t a whole new category of threat vectors for open source to consider and address. If critical open source projects are now seen as stepping stones for industrial espionage, ransomware attacks, or cyberwarfare, maintainers of these projects will need to adopt comparable security practices to those found in target organizations. This creates a set of challenges for open source because of its highly distributed nature and volunteer-based model. It also bolsters the argument for professionalizing critical infrastructure maintenance and creating proper support structures for maintainers.”
Aeva Black
“At CISA, we have not seen any compromises resulting from XZ – so, yes, this is an example of ‘the open source model working.’ Compared to proprietary software, the open source nature of XZ allowed it to be detected by an unaffiliated third party and remediated quickly, before it had been widely deployed.
Of course, there’s always room to improve. At CISA, we’ve been collaborating in real time with open source community members to better understand the impact of XZ and identify ways we can help communities respond if this happens again. In fact, the OpenSSF and OpenJS foundations recently noticed similar social engineering attacks against a few projects and published an alert about the observed pattern. CISA also recently released a tabletop exercise packet, based on a similar threat scenario, that any open source community can use to practice and refine their incident response coordination abilities.”
Stewart Scott
“On the one hand, it is very cool how Freund found this backdoor before it was widely distributed, and for those interested in his investigation, I’d definitely check out an Oxide interview with his firsthand account. And we see similar feats in cybersecurity somewhat regularly—the single researcher who uncovered the Log4j vulnerability, or the custom alert system the Department of State had in place that helped a single analyst catch an intrusion by state-backed actors last summer. That said, the persistent reliance on single analysts makes me a bit nervous, even if it’s selection bias based on those being very reportable stories. Maybe the phenomenon is just an artifact of cybersecurity in practice rather than in theory, but if, say, your favorite football team has to rely on outstanding individual performances to win games, either it is very evenly matched with its opponent or, more worryingly, those performances are covering up structural shortcomings. In my mind, the problem of OSS projects being insufficiently resourced, and thus having to delegate some of this work out, remains unaddressed, and it would be great to see more support from those using and relying on OSS projects. The entire security model of OSS is premised on the idea that ‘many eyes make all bugs shallow’—but that only works if the many eyes that could be looking at an OSS project are actually looking at it.”
Christopher Robinson
“A community member discovered this attack because the software that was manipulated was open, transparent, and observable. This attack would likely not have been detected had it been conducted against a closed-source program, as was the case in the SolarWinds hack. Open source software is driven forward and improved by such humble community contributions. The beauty of the OSS ecosystem is the constant testing, refinement, and ultimate improvement of software code and processes donated by the community. Many within that ecosystem are already planning ways to protect against and detect both the technical and social engineering aspects of this attack. This specific pattern will be much less successful in the future as projects work to identify and prevent it, and more broadly as the issues of identity and verification are worked out in the open.”
Fiona Krakenbürger
“As mentioned above, the attacker invested a lot of time and effort, and yet they failed in the end. This shows the resilience of the open source model, but also that people who want to compromise it are putting increasing resources toward doing so. Typically, contributions are reviewed, tested, and discussed before they end up in a code base, but whether that happens in a resource-strapped software project is another question. Policymakers therefore need to respond by increasing the resources we spend on security to counter that.”
5. What are some processes, either practiced or proposed, that could prevent similar incidents or mitigate their possible impacts? What role can investments in open source projects play here?
Tobie Langel
“Meaningfully improving security at scale while preserving the ethos, culture, and diversity of communities that characterize open source and that are largely responsible for its success isn’t an easy task.
There is a real risk of veering towards performative security theater on one end or an excessive crackdown on the other. Both would be alienating to the open source community. Similarly, shoehorning corporate approaches into open source communities without consideration for their specificities would also lead to a backlash.
The right approach is to double down on the kind of community-driven experimentation that the German Sovereign Tech Fund has been funding and to scale the experiments that succeed.”
Aeva Black
“Practices such as public, peer-driven code review, open design and planning meetings, automated security testing with public logging, code signing, and more all help to protect the open source technologies we depend on from accidental bugs – and from malicious code. But this approach is both tooling- and time-intensive, it doesn’t work as well for projects with only one or a few maintainers, and many of the volunteers who sustain open source software are suffering from burnout, as we saw in this case.
Additional investments in software supply chain transparency could help organizations identify critical open source dependencies in the products they use. Without this clarity in the supply chain, it can remain difficult to know where to offer support.
The most important takeaway from all this? Community stewardship and peer accountability in open source keep us safe – and these communities need ongoing support. Every software manufacturer that integrates open source software into their products should, consistent with Secure by Design principles, help sustain the open source communities they depend on either through their employees’ time or through financial or in-kind contributions.”
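Black’s point about supply chain transparency is where software bills of materials (SBOMs) can help: an organization that can enumerate the open source components in its products can quickly tell whether a compromised package affects them. Below is a minimal, hypothetical Python sketch that scans a CycloneDX-format SBOM for watchlisted components; the file name and the exact component names on the watchlist are illustrative assumptions:

```python
import json

# Example watchlist: the affected XZ Utils releases (CVE-2024-3094).
WATCHLIST = {
    ("xz-utils", "5.6.0"), ("xz-utils", "5.6.1"),
    ("liblzma", "5.6.0"), ("liblzma", "5.6.1"),
}

def flag_components(sbom_path: str) -> list:
    """Scan a CycloneDX JSON SBOM and report any watchlisted components."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [
        f'{comp.get("name")} {comp.get("version")}'
        for comp in sbom.get("components", [])
        if (comp.get("name", "").lower(), comp.get("version", "")) in WATCHLIST
    ]

if __name__ == "__main__":
    # "product-sbom.json" is a hypothetical SBOM exported for one product.
    for hit in flag_components("product-sbom.json"):
        print("flagged:", hit)
```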
Stewart Scott
“It’s hard to speculate here given that the backdoor was caught before widespread deployment, but two things stick out in this case. The first is resourcing—an overworked maintainer is more likely to want to share the load, which is eminently reasonable. At the same time, that creates an avenue for bad actors to pressure that maintainer into sharing the work with them. More resourcing for maintainers would help here, as would more responsible conduct around making demands of maintainers—the precedent of heaping demands upon maintainers is both distasteful and a material security issue. Some of that resourcing can even be security infrastructure. And on the usage end of things, the more that companies relying on OSS can support those projects without burdening maintainers, the better the ecosystem will be—and companies need not think of this as charity, as they directly benefit from supporting their own dependencies.”
Christopher Robinson
“Arming projects and maintainers with education and tooling to recognize social engineering and cyberbullying is the first step. Experiments are also underway on automation to detect tampering with software between source code and binary artifact publication, which should foil future attacks that sneak malware in during the build and publication stages of software delivery.”
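One family of such experiments is reproducible builds: if independently rebuilding a release from its tagged source yields a bit-for-bit identical artifact, tampering anywhere between source and publication becomes detectable. Here is a minimal Python sketch of the final comparison step, with hypothetical file paths:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical paths: an artifact rebuilt locally from tagged source,
    # and the artifact published by the project. With a fully reproducible
    # build the digests match bit for bit; a mismatch is a signal to
    # investigate the build pipeline, not proof of malice by itself.
    rebuilt = sha256_of("build/libexample.so")
    published = sha256_of("downloads/libexample.so")
    print("match" if rebuilt == published else "MISMATCH: investigate the build pipeline")
```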
Fiona Krakenbürger
“In the past weeks, we’ve seen a lot of conversations about practices that could mitigate risks in open source software. There is clearly no silver bullet, but there are ways to improve the resilience and security posture of software projects, e.g., by making code more maintainable or by investing in audits, testing infrastructure, and build tooling. However, implementing these requires meaningful investment and paying maintainers for their work. Financial resources are likewise not a silver bullet; however, they are part of the solution. We need to actively and carefully listen to and understand the needs of those working on critical software to make more informed decisions on how we advocate for or provide the necessary support.”
The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.