National Security Memorandum (NSM) on Artificial Intelligence: Democracy + Tech Initiative Markup
On October 24, 2024, the Biden Administration released its National Security Memorandum (NSM) on Artificial Intelligence. Read along with AC Tech Programs staff, fellows, and industry experts for commentary and analysis.
On October 24, 2024, the Biden Administration released its National Security Memorandum (NSM) on Artificial Intelligence with the goal to “galvanize federal government adoption of AI to advance the national security mission.” It serves as the most significant statement of policy on the national security implications of AI, especially in how the vast national security apparatus of the US government adopts and governs the rapidly evolving technology. The NSM builds on the principles of the sprawling AI Executive Order signed by the Biden Administration in October 2023, industrial and innovation investments made through the CHIPS and Science Act, and many other policies, including two executive orders from the first Trump administration.
Following the release of the NSM, the DFRLab’s Democracy + Tech Initiative organized a group of leading experts on AI and national security to participate in a “markup” of the AI NSM. The markup style is useful for taking sprawling policies that dictate strategy in technical areas and making them more accessible to broader audiences. As AI increasingly touches our daily lives, it is important for society to be able to engage with how countries govern it. For the assembled experts, the importance of protecting and respecting user rights within AI governance is paramount, not only to realizing the greatest human benefit of the technology but also to ensuring “safe” and “secure” AI.
The NSM outlines three core aims for the US government. First, to ensure the US leads the world in developing safe, secure, and trustworthy AI; second, to harness AI technologies to advance US national security; and third, to create consensus around AI governance globally.
The summary below synthesizes the key strengths and areas of concern regarding the AI NSM as identified by our experts. The full markup follows.
Key strengths identified by experts
- Commitment to responsible AI use: The NSM’s emphasis on the responsible application of AI was widely praised, with experts recognizing the importance of aligning AI practices with ethical principles and national values.
- Focus on transparency: Provisions requiring agencies to produce unclassified reports and integrate privacy and civil liberties oversight were noted as a strong step toward ensuring accountability and public trust.
- Governance and risk management: The inclusion of robust governance and risk management practices reflects a thoughtful approach to mitigating AI-related risks, which was appreciated by many reviewers.
Areas of concern and recommendations
- Ambiguity in definitions and scope: Some key terms, such as “national security systems,” were critiqued for being too broadly defined, which could lead to inconsistent implementation across agencies.
- Inadequate accountability mechanisms: Experts raised concerns about the sufficiency of training programs and guidance for personnel, particularly regarding the risks of automation bias and intentional misuse of AI systems.
- Prohibited uses of AI: While the NSM outlines specific prohibited uses of AI, experts noted that some of these provisions are narrowly drafted and may not fully address broader ethical and legal concerns.
- Overreliance on AI: There were warnings about the potential risks of overdependence on AI, with calls for clearer guidelines to balance AI integration with human oversight.
The expert commentary underscores the transformative potential of the AI NSM while emphasizing the need for precision, clarity, and oversight mechanisms. Experts did not comment on the specific policy vehicle of the NSM, but rather on the principles necessary to carry forward for the US government to innovate, compete effectively, and ensure the safe and secure deployment of AI in an era of certain change.
Our markup contributors include Jennifer Brody, Samir Jain, Konstantinos Komaitis, Courtney Lang, Faiza Patel, Iria Puyosa, Trisha Ray, Matthew Rose, Steven Tiell, and Patrick Toomey.
– Graham Brookie, Vice President, Technology Programs and Strategy, Atlantic Council & Kenton Thibaut, Senior Resident Fellow China, Democracy + Tech Initiative