The Great Despiser: The BSA, Memory Safety, and How to Make a Good Argument Badly

Memory-safe programming languages are in the cyber policy mainstream, but some hesitation remains. Looking at the arguments around memory safety is informative for larger cyber policy debates too.

As cybersecurity policymaking takes on more complex issues, its debates demand more evidence and rigor. That growth is good—it means that cyber policy has begun to grasp the full scope of challenges it faces. However, that growth also means that speculation and the vaguely invoked, ever-fragile spirit of innovation are increasingly insufficient arguments.

In September 2023, the Business Software Alliance (BSA), a technology industry trade association, published Memory Safety: A Call for Strategic Adoption. The piece presents perfectly reasonable recommendations and flags sensible concerns regarding memory safety, an area of recent policy focus. However, Strategic Adoption’s argumentation highlights the danger of an ever-diminishing burden of proof. The fact that, in this instance, little harm was done is more a testament to the robustness of memory safety arguments than anything else. Similar reasoning offered by trade associations in other cyber policy discussions does impact policy—take, for example, the 2022 software bills of materials letters. Strategic Adoption presents a useful opportunity to examine how cybersecurity policy in general and memory safety in particular are argued: key debates cannot continue to shirk the obligation to provide concrete evidence—or to imply someone else should do it for them—much longer. This article will highlight Strategic Adoption’s most egregious moments and how harmful those arguments’ methods are to the broader cybersecurity policy conversation. But first, what is memory safety?

Memory Safety, Strategic Adoption, and You 

Memory safety is a seductive silver bullet for cybersecurity policy. Clickbait headlines practically write themselves—‘Hackers Hate this One Simple Trick,’ ‘Engineers Made All Their Code Memory Safe and You’ll Never Guess How,’ ‘Say Adios to 70 percent of Your Vulns,’ and so on.

Memory safety is a characteristic of programming languages, describing those that are immune to memory-safety bugs and vulnerabilities. Memory-unsafe languages require a developer not just to tell the computer what to do, but to specify how much memory to set aside for the task and what to do with that memory once finished—in other words, manual memory management. In exchange, the program runs quickly, and its developers can speak directly into the guts of a machine. The cost is that memory-unsafe languages leave the potential for common, catastrophic bugs and vulnerabilities, which arise when developers inevitably make mistakes in memory management. Some languages instead employ a garbage collector, which follows a program around to clean up its memory use. This slows those languages considerably but ensures memory safety by obviating the need for manual memory management.
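
To make the tradeoff concrete, consider a minimal, purely illustrative sketch (not drawn from Strategic Adoption or any real codebase). It is written in Rust, but its unsafe block switches off the language’s protections in order to mimic the manual memory management that memory-unsafe languages such as C and C++ require by default:

    // Illustrative only: the kind of mistake manual memory management permits.
    // The `unsafe` block disables Rust's checks, mimicking what C and C++
    // allow by default.
    fn main() {
        let dangling: *const i32;
        {
            let data = Box::new(42);          // heap allocation
            dangling = &*data as *const i32;  // raw pointer into that allocation
        }                                     // `data` is freed at the end of this scope
        unsafe {
            // Use-after-free: reading memory that has already been returned to
            // the allocator. This compiles, may appear to "work" in testing,
            // and is exactly the class of mistake that can become an
            // exploitable vulnerability.
            println!("{}", *dangling);
        }
    }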

Other memory-safe languages allow manual memory management but employ rules that refuse to compile a program unless it is provably memory-safe. These languages are fast, and, especially early in the learning process, painful to write software in, as developers struggle to conform to stringent rules. One newcomer language, Rust, offers relatively straightforward rules to achieve memory safety while preserving the speed of memory-unsafe languages like C and C++. Rust has featured heavily in the news cycle, with proposals to convert critical software into Rust for improved security, equal or better performance, and easier long-term modifiability. A Rust program is less likely to break when changed than a comparable C program, for instance, because manual memory management is often fragile and convoluted.
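
For contrast, here is a purely illustrative sketch of the same dangling-pointer pattern in ordinary, safe Rust: the compiler’s ownership and borrowing rules refuse to build the program at all, so the bug never ships, and no garbage collector is involved.

    // Illustrative only: the same pattern in safe Rust fails to compile.
    fn main() {
        let dangling: &i32;
        {
            let data = Box::new(42);
            dangling = &*data; // rejected by the borrow checker: the reference
                               // would outlive the allocation it points into
        }                      // `data` is dropped here while still borrowed
        println!("{}", dangling);
    }

The difference is the point: in a memory-unsafe language the equivalent mistake compiles, ships, and surfaces only later, if at all.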

Policymakers have understandably jumped at the opportunity to eliminate entirely a large and devastating class of software vulnerabilities, weaving memory safety into proposed legislation, requests for information, campaigns by the Cybersecurity and Infrastructure Security Agency (CISA), and even the White House National Cybersecurity Strategy and its implementation plan. Some policy initiatives even look for similar opportunities to eliminate entire attack avenues with other fundamental changes to software development.

It might be worth tempering that enthusiasm. It seems unlikely that security returns of similar strength to those offered by memory safety are easily found in other parts of the cybersecurity world. Regarding Rust, the ecosystem of expert developers and tried-and-tested tooling necessary for widespread adoption is still in its early days. As for figuring out what software to convert, there has been no serious effort at the federal level to date to catalog which dependencies are critical nodes of risk, what their potential lack of memory safety might mean in the context of their implementation, or the costs and benefits associated with their potential compromise or rewriting.

The BSA’s Strategic Adoption flags all these concerns. It calls for more tooling, more developer training, incentives to develop new code natively in memory-safe languages rather than focusing overly on conversion, and the strategic prioritization of scant time and money. It highlights well that in cybersecurity, resources are limited—there are never enough funds, people, or hours to accomplish every well-intentioned security initiative.

This is all reasonable, more or less. But so long as policy lacks the frameworks and language needed to make specific cost-informed decisions, any proposed initiative will be vulnerable to the same basic argument—resources are limited. Any proposal will need to endlessly prove its worth to detractors, even when they present no evidence to the contrary. Proceeding too far without developing those quantitative muscles risks sinking the cybersecurity landscape into an inertial bog, siloing efforts within individual companies and agencies where they would otherwise serve the ecosystem better at scale, and bending the knee to weak generalizations about limited resources, precious innovation, and alternative interventions.

The BSA piece previews this quagmire, positing solid conclusions but offering little along the way—no citations, no data, no prioritization system, and no policy specifics; only unsubstantiated hypotheticals, mischaracterizations, and vague inertial resistance to any change or regulation. It is probably unfair to look to a trade association publication for that requisite rigor—vague inertial resistance might in fact be in their DNA, if not their business model. However, looking there anyway raises the question of why the companies involved in such an association, with ample policy and security engineering talent, do not provide the conspicuously absent data to either back up or counter the arguments made by Strategic Adoption, especially given that many of those same companies are indeed “going big on Rust” (which is a great thing!). The rest of this article will highlight seven myths or argumentation missteps presented in the BSA piece, with an eye to their larger implications for the state of cyber policy discourse. In the process, it will commit the pardonable sin of conflating general memory safety and Rust, except where it matters—Rust is far from the only memory-safe contender, just a particularly useful example. Critically, none of these seven lines of argument leads to a bad conclusion—this article does not argue that Strategic Adoption’s positions should be invalidated because of its methodology. Rather, it strives to highlight the issues that cyber policy discussions encounter even when they arrive at sound conclusions.

Myths and Missteps 

#1 – Policymakers are proposing to require the rewriting of all memory-unsafe code into memory-safe languages (“why not simply require all software producers and government agencies to convert code?”). No one is suggesting this, and it is not possible. One might say that it is simply a rhetorical device to broach the topic of prioritization, but the implication that such an absolute approach is part of the conversation at all is disingenuous. Importantly, it undercuts the agonizing rulemaking processes behind reforms such as the Cyber Incident Reporting for Critical Infrastructure Act of 2022, Federal Acquisition Regulation provisions, and Securities and Exchange Commission cybersecurity incident disclosure requirements.

#2 – Widespread memory-safe rewrites will introduce enough new vulnerabilities into codebases to challenge the expected benefits of memory safety (“policymakers should expect that converting trillions of lines of code to memory-safe languages will reduce vulnerabilities associated with memory safety but create risks associated with other vulnerabilities in the new code.”). Memory-safe languages mean fewer memory-safety bugs. By the count of companies such as Google, Apple, and Microsoft (a BSA member), memory-safety flaws account for more than two-thirds of vulnerabilities in large codebases (and have done so since 2006 in Microsoft’s case!). If the BSA is aware of other classes of risk that memory-safe languages introduce at similar scale—again, more than two-thirds of all vulnerabilities—it seems a disservice to the debate here, and to security in general, not to cite even one of them specifically. This argument seems to undermine the entire concept of memory safety, which is odd given the millions recently invested by at least one of BSA’s member companies. BSA need not align with all of its individual members, but to discount without caveat the argument made by the act of investment is concerning.

#3 – The time and tooling thrown at ensuring the security of memory-unsafe code by some producers make the topic of conversion a nonissue (“many software producers that use secure software development practices have already scanned and mitigated risks associated with memory safety.”). This seems to argue that companies have already dealt sufficiently with memory-safety vulnerabilities without the use of memory-safe languages. However, even large technology vendors such as Apple, Google, and Microsoft have publicized that memory-safety bugs remain, despite their best efforts and vast resources, the majority of their vulnerabilities by a wide margin, and persistently so over time and against many mitigation practices (fuzzing, static analysis, compiler updates, code rules, etc.). If even these companies are still inundated by memory-safety vulnerabilities, it is hard to see how any others are positioned to fare better through tooling and grit alone. It is harder still to see any merit to the argument implied in Strategic Adoption—that memory safety is a mostly solved problem already. The mitigations around memory-safety vulnerabilities from unsafe languages are important, sure—but they are demonstrably insufficient and incomplete, too. The abstract argument is also dangerous: awareness of and mitigation around a security issue is not the whole story, especially where reconsidering insecure design decisions at the outset might be more efficient in the long run.

#4 – Memory-safe languages are a foreign concept to many software developers (“many software developers have neither trained in nor have gained experience with memory-safe languages.”). Here, our lazy amalgamation of Rust and memory safety falls apart. Rust is indeed a young language with relatively few expert engineers. But memory safety is old and widespread. Java (created in 1995), Python (1991), JavaScript (1995), and C# (2000) are all memory-safe by virtue of having garbage collectors, and they also happen to be the four most popular coding languages. A discussion of workforce shortages in Rust expertise would be useful to policymakers, but an incorrect generalization is not. More broadly, the idea that novelty—real or imagined—should act as a meaningful disincentive would radically limit the realm of possible security improvements.

#5 – Customer adoption will prove an obstacle to any proposed conversions (“if a software producer adopts a memory-safe language for an application, a customer may need to update its version of the application…experience tells us that customers are often slow to update software…”). A corollary of this argument is that we should rarely patch vulnerabilities because “customers are often slow to update software.” It is a hot take and a shame not to see it given any further discussion, especially given that it directly contradicts cybersecurity guidance and practice from most of the largest IT firms as well as BSA’s own members. The argument could have contained some incredibly interesting thoughts about security by design, had it been seriously made at all. It also offers as fact the inconsistently true notion that consumers have much say in the software offerings from which they choose. The larger idea that customer demand signals for security are weak is a key part of the discussion around realigned responsibility in the National Cybersecurity Strategy, but the full breadth of economic debate about information asymmetry, externalities, and more is out of scope for a document about writing code such that it breaks less often.

#6 – The fact that other security interventions might work better (“products and services that have not yet implemented other cybersecurity best practices would likely benefit more from adopting those…than converting to a memory-safe language”) and that other threats might be more pressing in other contexts (“a threat model may demonstrate that different uses, for example a mobile application or a cloud service, face different threats”) should temper enthusiasm around memory safety. These are both true of any policy proposal—a better one might exist, and one proposal might not solve all problems in all places. If Strategic Adoption offered evidence that the net cost-benefit of memory-safety conversions and, say, MFA adoption pointed away from the former, it would make a striking contribution to cybersecurity discourse everywhere—showing, with hard data, that one practice is a better investment than another. It does not, however. Similarly, it does not highlight specific contexts where threat models point toward other interventions. Ironically, it instead only vaguely points to areas where the limited available evidence suggests exactly the opposite—cloud services and mobile applications. The fact that a proposal is not a universal panacea does not mean much.

#7 – All cybersecurity resources are interchangeable, and invention is more important than implementation (“Resources an organization uses to adopt memory-safe language are then not available to address known exploitable vulnerabilities in an application, implement multi-factor authentication, or invent the next security technology needed to protect against evolving threats.”). This spending model is an incomplete accounting. Not all cybersecurity investments are fungible, and the longer-term payoff of spending less to deal with a vulnerability class that no longer exists merits consideration. Moreover, diverting resources from implementing one security technology in order to invent “the next” raises obvious issues. The implication of a zero-sum game in the claim that “prioritizing writing new programs in memory-safe languages over transitioning existing programs into memory-safe languages is likely to produce better security for the same investment” is worth considering, too. Securing existing, critical, widely adopted software guarantees impact at a scale that is harder to assure when creating new, more secure systems. In other words, some programs are so critical that successfully rewriting them will do more good than simply hoping that newly written memory-safe code achieves similar criticality. And one can, to a considerable degree, do both.

Final Thoughts 

In the grand scheme of things, cybersecurity policy has much room to improve when it comes to assessing costs and benefits and prioritizing the precise location of security investments. Instead of contributing to that effort, though, the BSA simply suggests a set of answers to a quantitative question without offering any quantitative reasoning. This is particularly disappointing given the potential value to policy of a deep look at cost-benefit tradeoffs in security improvements, and how well positioned the BSA, with its many venerable member firms, is to provide consolidated data on such a topic.

Strategic Adoption’s writing often puts the burden of proof back on proponents of memory safety, when it should sit with the piece’s authors, who have notably provided scant evidence or citations of their own. A more generous reading of Strategic Adoption would be that it, in fact, argues exactly for the evidence-based prioritization that it fails to provide itself. Evidence that some software might not benefit much from memory-safe conversion would be incredibly useful in prioritizing limited security resources—if the piece gave it any serious discussion. If Strategic Adoption pointed to those cases with evidence and specificity, it would help policymakers and industry avoid wasted effort. This is not to say that memory-safety advocates have no obligation to provide evidence themselves. They in fact already have.

Strategic Adoption’s harmless conclusions obscure its flawed argumentation, a kind of argumentation that threatens the integrity of cybersecurity policy writ large: unbacked but superficially supportable claims deployed to slow change. One could write a piece that raises these possibilities and urges their evaluation in good faith, but Strategic Adoption is not that piece. It is reactive inertia, a corporatized resistance to any intervention that might cost a member firm—even firms that are actively pursuing that change themselves, which is most of them—because even the possibility of reduced profit outweighs the potential for, yes, substantial cost savings from reducing time spent fixing bugs, remediating their consequences, or even looking for them in the first place.


The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.